---
abstract: |
Achieving carbon neutrality is probably one of the most important challenges of the 21st century for our societies. Part of the solution to this challenge is to leverage renewable energies. However, these energy sources are often located far away from the places that need the energy, and their availability is intermittent, which makes them challenging to work with. In this paper, we build upon the concept of Remote Renewable Energy Hubs (RREHs), which are hubs located in remote places with abundant renewable energy sources and whose purpose is to produce carbon-neutral synthetic fuels. More precisely, we model and study the Energy Supply Chain (ESC) that would be required to provide a constant source of carbon-neutral synthetic methane, also called e-NG, in Belgium from an RREH located in Morocco. To be carbon neutral, a synthetic fuel has to be produced from existing carbon dioxide (CO$_2$), which needs to be captured using either Direct Air Capture (DAC) or Post Combustion Carbon Capture (PCCC). In this work, we detail the impact of three different carbon sourcing configurations on the price of the delivered e-NG. Our results show that PCCC is the most competitive option from a cost perspective, and that e-NG could be delivered to Belgium at a price of 136€/MWh when using existing CO$_2$ sources in Morocco. For comparison, this price is less than half that of regular fossil natural gas delivered in Belgium at the peak of the 2022 energy crisis in Europe.
author:
- Michaël Fonder
- Pierre Counotte
- Victor Dachet
- Jehan de Séjournet
- Damien Ernst
bibliography:
- abbreviation.bib
- abbreviation-short.bib
- refs.bib
date: September 2023
title: "Synthetic methane for closing the carbon loop: Comparative study of three carbon sources for remote carbon-neutral fuel synthetization"
---
# Introduction
Global warming is a change in climate caused by large anthropogenic emissions of greenhouse gases, mainly [cdo]{acronym-label="cdo" acronym-form="singular+short"}, into Earth's atmosphere. As its effects are detrimental to our societies and to ecosystems in general, it has become necessary to reduce, and eventually eliminate, the emission of these gases. Since most of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} is emitted by burning fossil fuels to generate energy, transitioning our energy production away from these fuels is mandatory.
The main alternatives to fossil fuels are renewable energies such as wind, solar or hydropower, which can be harvested to produce electricity. However, most renewable energy sources are intermittent, and the best sources are located far away from the places where this energy is most needed [@Hampp2023Import]. As electrical energy is challenging to transport over long distances and to store in large quantities with current technology, innovations are needed to bridge the gap between energy production and consumption locations.
One such innovation is the concept of [RREH]{acronym-label="RREH" acronym-form="singular+short"} for carbon-neutral fuel synthesis that was first laid out by @Hashimoto1999Global. The idea underlying [RREH]{acronym-label="RREH" acronym-form="singular+short"}s is to install power-to-X facilities, which convert electrical energy into chemical energy, at remote locations where renewable energy sources are the most abundant [@sadeghi2019energy; @geidl2006energy], and to transport the converted energy to the places where it is needed. A power-to-X facility converts the electrical energy into chemical energy by synthesizing energy-dense molecules [@LewandowskaBernat2018Opportunities], which are easier to store and transport back to the energy consumption locations than electricity. The molecules typically considered for this kind of application are dihydrogen, ammonia, or methane, each having its own advantages and drawbacks [@Brynolf2018Electrofuels].
Among these three molecules, synthetic methane, also called [eng]{acronym-label="eng" acronym-form="singular+short"}, holds a particular place. Indeed, it can be used as a simple drop-in replacement within existing energy-demanding installations while decreasing GHG emissions when produced from renewable energy sources, as shown in several case studies [@Rixhon2021TheRole; @Reiter2015Evaluating; @Zhang2017Life]. More generally, existing life-cycle analyses of methanation indicate that the [gwp]{acronym-label="gwp" acronym-form="singular+short"} of [eng]{acronym-label="eng" acronym-form="singular+short"} produced with renewable energy sources is smaller than that of fossil natural gas [@Schreiber2020Life; @Federici2022Life]. The benefits of [eng]{acronym-label="eng" acronym-form="singular+short"} on [gwp]{acronym-label="gwp" acronym-form="singular+short"} are even larger if the methanation is performed with catalysts using captured [cdo]{acronym-label="cdo" acronym-form="singular+short"} rather than from biomass [@GoffartDeRoeck2022Comparative; @Chauvy2022Environmental].
To be considered as carbon-neutral, catalyst-based [eng]{acronym-label="eng" acronym-form="singular+short"} has to be generated from existing [cdo]{acronym-label="cdo" acronym-form="singular+short"}, which must be actively captured prior to being used for methanation. Capturing [cdo]{acronym-label="cdo" acronym-form="singular+short"} can be done either by filtering it out of ambient air thanks to [DAC]{acronym-label="DAC" acronym-form="singular+short"} technologies [@McQueen2021AReview; @Shayegh2021Future] or by extracting it directly at the source of emission, in industrial fumes, where it is the most concentrated, by [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} [@Zanco2021Postcombustion; @Mukherjee2019Review]. Fossil fuel power plants and cement or steel factories are examples of industries that emit sufficiently large amounts of [cdo]{acronym-label="cdo" acronym-form="singular+short"} to be considered for [PCCC]{acronym-label="PCCC" acronym-form="singular+short"}. The high concentration of [cdo]{acronym-label="cdo" acronym-form="singular+short"} in industrial fumes makes [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} less expensive than [DAC]{acronym-label="DAC" acronym-form="singular+short"} for extracting the same amount of [cdo]{acronym-label="cdo" acronym-form="singular+short"} [@Dachet2023Towards-arxiv]. However, capturing [cdo]{acronym-label="cdo" acronym-form="singular+short"} by [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} has to take place at the emission source, for instance at the factory that uses the synthesized fuel. The captured [cdo]{acronym-label="cdo" acronym-form="singular+short"} then needs to be transported to where it is needed, which induces additional costs. This contrasts with [DAC]{acronym-label="DAC" acronym-form="singular+short"}, which can be performed anywhere on the planet, directly where the [cdo]{acronym-label="cdo" acronym-form="singular+short"} is needed, and thus eliminates the transportation costs. As changing the source of [cdo]{acronym-label="cdo" acronym-form="singular+short"} impacts the price of the final commodity produced by an [RREH]{acronym-label="RREH" acronym-form="singular+short"}, it is necessary to know which [cdo]{acronym-label="cdo" acronym-form="singular+short"} source and capture method combination is the most energy-efficient and/or the most cost-effective for a given [RREH]{acronym-label="RREH" acronym-form="singular+short"}.
Multiple works have studied [ptg]{acronym-label="ptg" acronym-form="singular+short"} [RREH]{acronym-label="RREH" acronym-form="singular+short"}s in different configurations. For example, @Berger2021Remote studied an [RREH]{acronym-label="RREH" acronym-form="singular+short"} located in Algeria designed to produce [eng]{acronym-label="eng" acronym-form="singular+short"} to be delivered in Belgium. @Dachet2023Towards-arxiv built on this work by adding a hub in Greenland and by studying the impact of carbon pricing on the sizing of the system. @Hampp2023Import performed an extensive study on the opportunity to build [RREH]{acronym-label="RREH" acronym-form="singular+short"}s at different places on Earth to deliver energy to Germany. As opposed to the works of @Berger2021Remote and @Dachet2023Towards-arxiv that focus on [eng]{acronym-label="eng" acronym-form="singular+short"}, @Hampp2023Import consider and compare multiple energy carriers. However, all these works consider only a single configuration for sourcing the [cdo]{acronym-label="cdo" acronym-form="singular+short"} required for the methanation process. Therefore, it is impossible to know whether the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing chosen in each of these works is the most effective one.
This paper aims to compare the impact of changing the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing used to close the carbon loop on the cost of the generated [eng]{acronym-label="eng" acronym-form="singular+short"} and on the sizing of the [RREH]{acronym-label="RREH" acronym-form="singular+short"}, which has not been done before. More specifically, we study the impact of three different configurations of [cdo]{acronym-label="cdo" acronym-form="singular+short"} sources and capture methods on the cost and the sizing of an [RREH]{acronym-label="RREH" acronym-form="singular+short"} located in Morocco and designed to produce [eng]{acronym-label="eng" acronym-form="singular+short"} to be delivered in Belgium. We detail the exact scope and the configurations considered in our problem statement in the next section. We detail our model and methodology in Section 3, and analyse our results in Section 4. Section 5 concludes this paper.
The main contributions of this work are as follows:
- We model the whole [esc]{acronym-label="esc" acronym-form="singular+short"} needed to deliver [eng]{acronym-label="eng" acronym-form="singular+short"}, produced from salt water and renewable energy in Morocco, to Belgium;
- We consider three different ways of sourcing the [cdo]{acronym-label="cdo" acronym-form="singular+short"} needed for the methanation in the [RREH]{acronym-label="RREH" acronym-form="singular+short"};
- We provide a detailed analysis of the advantages and drawbacks of each of these sources.
# Problem statement
In this section, we detail the scope of this work. We consider an [RREH]{acronym-label="RREH" acronym-form="singular+short"} located in Morocco whose purpose is to produce [eng]{acronym-label="eng" acronym-form="singular+short"} for the Belgian market. The [RREH]{acronym-label="RREH" acronym-form="singular+short"} has to produce [eng]{acronym-label="eng" acronym-form="singular+short"} from renewable energy sources (for electricity), from salt water and from captured [cdo]{acronym-label="cdo" acronym-form="singular+short"}. The [eng]{acronym-label="eng" acronym-form="singular+short"} produced by the hub has to be delivered to Belgium by LNG carriers.
The choice of Morocco for the location of the hub is first motivated by the quality of its renewable energy sources [@Fasihi2015Economics]. Second, an [RREH]{acronym-label="RREH" acronym-form="singular+short"} requires a significant land area to be deployed, especially for collecting renewable energies. Morocco is also interesting in this regard, since the southern half of the country is mostly a desert with an extremely low existing land use.
We consider three different configurations for capturing the [cdo]{acronym-label="cdo" acronym-form="singular+short"} required for the methanation process in the [RREH]{acronym-label="RREH" acronym-form="singular+short"}, namely: (i) [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site, in Morocco; (ii) [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} from Moroccan coal-fired power plants; and (iii) [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} from [eng]{acronym-label="eng" acronym-form="singular+short"} use in Belgium, with capture losses being compensated by [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site, in Morocco.
The purpose of this work is to model the [esc]{acronym-label="esc" acronym-form="singular+short"} that corresponds to each of these configurations, starting from renewable electricity and salt water in Morocco up to the delivery of [eng]{acronym-label="eng" acronym-form="singular+short"} in Belgium, and to provide an insight on the impact each configuration has on the price of the delivered [eng]{acronym-label="eng" acronym-form="singular+short"} and on the sizing of the different elements making the [esc]{acronym-label="esc" acronym-form="singular+short"}.
The [esc]{acronym-label="esc" acronym-form="singular+short"} necessary to deliver [eng]{acronym-label="eng" acronym-form="singular+short"} to Belgium will be designed to be fully autonomous and self-sufficient. To be comparable to previous works [@Berger2021Remote; @Dachet2023Towards-arxiv], the system will be designed to deliver 10 TWh of [eng]{acronym-label="eng" acronym-form="singular+short"} uniformly over a year in Belgium. The study will be performed with a [WACC]{acronym-label="WACC" acronym-form="singular+short"} set to 7%, and with extra contingency costs added to all [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} to account for feasibility uncertainties [@Roussanaly2021Towards]. More precisely, contingency costs of 10% and 30% are added to the [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} of mature and unproven technologies respectively.
# Material and method
In this section, we present the method used to carry out the study described in the previous section. We first detail the framework used to model and size the desired [esc]{acronym-label="esc" acronym-form="singular+short"}. We then fix the geographical parameters of our study. Finally, we provide a detailed insight on the setup of the models used to analyse the three [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configurations considered, alongside the parameters used for each part of the model.
## Modelling framework
This study is built on the work of @Berger2021Remote which introduced the use of a Graph-Based Optimization Modeling Language (GBOML) as an optimization framework for multi-energy system models. The GBOML language [@Miftari2022] allows one to model each part of a complex system as a set of nodes interconnected by hyperedges that model the constraints existing between these nodes. In the case of an [RREH]{acronym-label="RREH" acronym-form="singular+short"} each node models a specific module of the hub, and each hyperedge models the flow of a given commodity within the hub.
In the GBOML language, the nodes are, in part, modelled by a set of variables that need to be tuned to minimize the objective function of the model while satisfying a set of linear constraints specific to each node. For this, each node has to provide an objective to minimize that is a linear function of its variables. The objective function of the whole model is then defined as the sum of all the objectives of its nodes. In this study, we want to minimize the cost of the [eng]{acronym-label="eng" acronym-form="singular+short"} delivered in Belgium, and therefore the cost of the hub. As a result, our objective functions will be proportional to the [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} and [OPEX]{acronym-label="OPEX" acronym-form="singular+short"} of the different modules of the hub.
Since the lifetime of each module is specific, raw [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} are not representative of real costs when studying the cost of the hub over a limited time horizon. One method to address this concern, which we employ in this study, involves the use of annualized [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"}. The annualized [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} $\zeta_m$ of a module $m$ can be computed from the raw [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"}, the lifetime $L_m$ of the module, and the [WACC]{acronym-label="WACC" acronym-form="singular+short"} $w$ as follows: $$\zeta_m = \mathrm{CAPEX}_m \times \frac{w}{1-(1+w)^{-L_m}}$$
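The short Python sketch below illustrates this annualization together with the contingency factors introduced in the problem statement. It is only a reading aid: the module figures used in the example are hypothetical and are not taken from the model.

```python
def annualized_capex(raw_capex, lifetime_years, wacc=0.07, contingency=0.10):
    """Annualize a raw CAPEX over the module lifetime at the given WACC,
    after inflating it by the contingency factor (10% for mature
    technologies, 30% for less mature ones in this study)."""
    capex = raw_capex * (1.0 + contingency)
    annuity_factor = wacc / (1.0 - (1.0 + wacc) ** (-lifetime_years))
    return capex * annuity_factor

# Hypothetical module: 1000 k EUR of raw CAPEX, 25-year lifetime, 30% contingency.
print(round(annualized_capex(1000.0, 25, contingency=0.30), 1))  # ~111.5 k EUR/year
```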
Given that the optimization problem of interest has already been comprehensively formalized by @Berger2021Remote, we only recall the set of assumptions made by prior studies to model and size an [RREH]{acronym-label="RREH" acronym-form="singular+short"} within this framework:
- All technologies and their interactions are modelled using linear equations;
- The infrastructures and networks needed to transport and process commodities within a single node are not modelled;
- Curves modelling the boundaries of the model, such as the renewable energy load factors and the energy demand, are assumed to be known in advance for the whole optimization time horizon;
- The sizing of the modules is assumed to be constant over the whole time horizon considered;
- Sizing the whole hub with an optimization framework assumes that all planning, investment and operation decisions are made by a single entity.
## Geographical parameters
![image](img/map.pdf){width="75%"}
One of the aspects that heavily impact any study of an [RREH]{acronym-label="RREH" acronym-form="singular+short"} is the location of its different components. As mentioned in the problem statement, this study focuses on an [RREH]{acronym-label="RREH" acronym-form="singular+short"} located in Morocco with an [eng]{acronym-label="eng" acronym-form="singular+short"} terminal in Belgium. In this part of the document, we detail the choice of the location of all the parts of the hub and the constraints that result from this choice.
The precise location of the different modules of the hub is mapped in Fig. [\[fig:geography\]](#fig:geography){reference-type="ref" reference="fig:geography"}. The delivery terminal is set to be in the Belgian harbour of Zeebrugge as it already features LNG terminals. We build our study on a renewable energy capture hub located on the Atlantic coast in the Western Sahara desert. We consider this location because it has excellent wind and solar energy sources, and because it is mostly empty, which leaves a lot of room to deploy large fields of windmills and solar PV panels. Finally, we set our [eng]{acronym-label="eng" acronym-form="singular+short"} production hub to be near the Moroccan city of Safi. By doing so, the [eng]{acronym-label="eng" acronym-form="singular+short"} production hub is located near a pool of available workforce, which is required to run the hub, near the coast, which is needed to export the produced [eng]{acronym-label="eng" acronym-form="singular+short"} by carriers, and near a large coal-fired power plant [@Nareva2023Safi], which can provide a valuable source of [cdo]{acronym-label="cdo" acronym-form="singular+short"} for [PCCC]{acronym-label="PCCC" acronym-form="singular+short"}. It is worth noting that the Safi power plant is sized to deliver 10 TWh of electricity over a year [@Nareva2023Safi], and is therefore a source of [cdo]{acronym-label="cdo" acronym-form="singular+short"} large enough to feed the methanation process of the hub studied in this work. In addition, the area of Safi has a lot of available space for large industrial projects, which is a requirement given the sheer scale of the hub considered.
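A rough order-of-magnitude check supports this claim. Assuming an emission factor of about 1 t of CO$_2$ per MWh of coal-fired electricity and a need of roughly 0.2 t of CO$_2$ per MWh of synthesized CH$_4$ (two ballpark figures of ours, not taken from the cited sources), the plant emits several times more CO$_2$ than the hub consumes: $$10~\text{TWh}_\text{el} \times 1~\text{t}_{\text{CO}_2}/\text{MWh}_\text{el} \approx 10~\text{Mt}_{\text{CO}_2} \qquad \text{vs.} \qquad 10~\text{TWh}_{\text{CH}_4} \times 0.2~\text{t}_{\text{CO}_2}/\text{MWh}_{\text{CH}_4} \approx 2~\text{Mt}_{\text{CO}_2}.$$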
With such a setup, the power and [eng]{acronym-label="eng" acronym-form="singular+short"} production hubs are separated by 650 km, and need to be connected by a [hvdc]{acronym-label="hvdc" acronym-form="singular+short"} line. Since the power production hub is also on the coast, the [hvdc]{acronym-label="hvdc" acronym-form="singular+short"} line can be installed offshore to get the shortest connection distance. The [eng]{acronym-label="eng" acronym-form="singular+short"} production hub and the LNG terminal in Belgium are separated by 2600 km, and require connection by LNG carriers. We estimate that 100 hours are required to connect Safi to Zeebrugge by carrier.
![image](img/rreh-modules_1_.png){width="70%"}
## Model
Despite being close to the works of @Berger2021Remote and @Dachet2023Towards-arxiv, the [RREH]{acronym-label="RREH" acronym-form="singular+short"} model used in this study features some differences. In this section, we first explain the fundamental principles of the hub, and then highlight the key differences with existing works. We would like to emphasize that we open-sourced the code used for the hub to provide all the details required to reproduce this work[^1].
### Hub principle
As detailed in the previous section, the whole [esc]{acronym-label="esc" acronym-form="singular+short"} necessary to deliver to Belgium the [eng]{acronym-label="eng" acronym-form="singular+short"} synthesized from renewable energy sources in Morocco is made up of the three main hubs illustrated in Fig. [\[fig:sub-hubs\]](#fig:sub-hubs){reference-type="ref" reference="fig:sub-hubs"}. First, the power production hub is made of three nodes: the solar PV panel farm, the windmill farm, and a pack of batteries used to smooth out variations inherent to renewable energy sources. The power production hub delivers the electricity needed to capture [cdo]{acronym-label="cdo" acronym-form="singular+short"} in Morocco and to synthesize [eng]{acronym-label="eng" acronym-form="singular+short"}. The power profiles used to model the renewable energy production over the optimization time horizon were obtained from the *renewables.ninja* website [@Staffell2016Using; @Pfenninger2016Longterm; @renewablesninja].
Second, the [eng]{acronym-label="eng" acronym-form="singular+short"} production hub, which takes electricity and a source of [cdo]{acronym-label="cdo" acronym-form="singular+short"} as inputs, is responsible for synthesizing [eng]{acronym-label="eng" acronym-form="singular+short"} from salt water and [cdo]{acronym-label="cdo" acronym-form="singular+short"}. This hub gets the clean water required for electrolysis from a reverse osmosis water desalination module. This fresh water then passes through an electrolysis module to produce the hydrogen needed by the hub. The core of this hub, the methanation unit, consumes both hydrogen and [cdo]{acronym-label="cdo" acronym-form="singular+short"} to synthesize [eng]{acronym-label="eng" acronym-form="singular+short"} via the Sabatier reaction. This node has two byproducts that can be used at other places in the [esc]{acronym-label="esc" acronym-form="singular+short"}: fresh water and low-grade steam heated to 300°C. Lastly, the synthesized [eng]{acronym-label="eng" acronym-form="singular+short"} is liquefied for transportation. In addition, water, hydrogen, and liquefied [eng]{acronym-label="eng" acronym-form="singular+short"} storage nodes are added to buffer flow variations of each commodity.
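For reference, the methanation step is based on the exothermic Sabatier reaction, which also explains the fresh water and heat byproducts mentioned above: $$\text{CO}_2 + 4\,\text{H}_2 \rightarrow \text{CH}_4 + 2\,\text{H}_2\text{O}, \qquad \Delta H^0 \approx -165~\text{kJ/mol}.$$ Stoichiometrically, this corresponds to about $44/16 \approx 2.75$ t of CO$_2$ per tonne of synthesized CH$_4$, i.e. roughly 0.2 t of CO$_2$ per MWh of CH$_4$ for a lower heating value of about 13.9 MWh/t (standard textbook figures, quoted here for orientation and not taken from the model parameters).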
Third, the LNG hub in Belgium is simply made of two nodes: one liquefied [eng]{acronym-label="eng" acronym-form="singular+short"} storage and one [eng]{acronym-label="eng" acronym-form="singular+short"} regasification node to deliver the [eng]{acronym-label="eng" acronym-form="singular+short"} in a gaseous form.
The model of the complete [esc]{acronym-label="esc" acronym-form="singular+short"} necessary to produce [eng]{acronym-label="eng" acronym-form="singular+short"} from [cdo]{acronym-label="cdo" acronym-form="singular+short"} captured by [DAC]{acronym-label="DAC" acronym-form="singular+short"} units in Morocco is illustrated in Fig. [\[fig:sc1\]](#fig:sc1){reference-type="ref" reference="fig:sc1"}. The model for [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourced by [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} from coal-fired power plants is given in Fig. [\[fig:sc2\]](#fig:sc2){reference-type="ref" reference="fig:sc2"}, while Fig. [\[fig:sc3\]](#fig:sc3){reference-type="ref" reference="fig:sc3"} illustrates the model of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configuration combining [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium and [DAC]{acronym-label="DAC" acronym-form="singular+short"} in Morocco. In all of these models, [cdo]{acronym-label="cdo" acronym-form="singular+short"} transits as a gas, except for storage and carrier transportation, where it needs to be liquefied.
![image](img/scenario-1-c.pdf){width="80%"}
![image](img/scenario-2-c.pdf){width="60%"}
![image](img/scenario-3-c-1.png){width="80%"}
### Our adaptations
Although most nodes of our model use the same parameters as in @Berger2021Remote and @Dachet2023Towards-arxiv, we need to adapt some parameters of the reference model for our study. Since @Berger2021Remote already detail all the parameters of their model, we only focus on our adaptations to these parameters hereafter.
As required by our problem statement, we add a contingency cost to the [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} of all nodes. This contingency cost, which was not considered in previous works, is equal to 10% of the [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} for well-established technologies, and 30% of the [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} for less mature technologies. The technologies that we consider to be less mature regarding the scale of the [RREH]{acronym-label="RREH" acronym-form="singular+short"} are the following: large-scale battery packs, [hvdc]{acronym-label="hvdc" acronym-form="singular+short"} lines, electrolysis units, methanation units, and [cdo]{acronym-label="cdo" acronym-form="singular+short"} capture technologies.
Modifications are also carried out to the methanation node when compared to the original node proposed by @Berger2021Remote. First, we align our [CAPEX]{acronym-label="CAPEX" acronym-form="singular+short"} with the $300~\text{k€}/(\text{MWh CH}_4/\text{h})$ proposed by @Gorre2019Production. Second, we add an output for the residual heat produced by the Sabatier reaction as it can also be valued. Indeed, @Coppitters2023Energy showed that this heat can be recycled in the hub to lower the heat energy required by solid sorbent [DAC]{acronym-label="DAC" acronym-form="singular+short"} units. According to their work, the Sabatier reaction produces 2.1 MWh of usable heat energy per ton of synthesized CH$_4$.
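To put this figure in perspective, taking a lower heating value of about 13.9 MWh per tonne of CH$_4$ (a standard value, not a model parameter), the recoverable heat amounts to roughly $$\frac{2.1~\text{MWh}_\text{heat}/\text{t}_{\text{CH}_4}}{13.9~\text{MWh}/\text{t}_{\text{CH}_4}} \approx 15\,\%$$ of the energy content of the synthesized fuel.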
Using a heat source at 300°C to regenerate a sorbent is only an option for solid sorbents. Indeed, solid sorbents only need to be heated to 100°C to regenerate [@Coppitters2023Energy], whereas liquid sorbents need to be heated to 900°C [@Berger2021Remote]. Therefore, as opposed to preceding works [@Berger2021Remote; @Dachet2023Towards-arxiv], we use solid sorbent [DAC]{acronym-label="DAC" acronym-form="singular+short"} units in our model, and rely on the data of the @DanishEnergyAgency for their specifications. In addition to using recirculated heat, the system has the ability to burn hydrogen to provide heat for the sorbent regeneration.
A last minor change in our model compared to previous works concerns the cost of liquefaction, storage, and regasification of [cdo]{acronym-label="cdo" acronym-form="singular+short"}. The source used in previous works [@Mitsubishi2004Ship] dates from 2004, which is significantly older than most of our sources. To uniformize cost estimates, we adjusted the estimates of @Mitsubishi2004Ship by taking inflation between 2004 and 2021 into account.
| Module | Free CO$_2$ (€/MWh CH$_4$) | DAC MA (€/MWh CH$_4$) | PCCC MA (€/MWh CH$_4$) | PCCC BE + DAC MA (€/MWh CH$_4$) |
|---|---|---|---|---|
| Solar PV Field | 10.43 | 1.23 | 0.37 | 0.11 |
| Windmill farm | 43.00 | 4.02 | 1.9 | 0.20 |
| Batteries | 4.06 | 1.49 | -0.05 | 0.33 |
| HVDC | 11.86 | 1.12 | 0.50 | 0.06 |
| CO$_2$ capt. + transport. | 0.00 | 21.61 | 8.39 | 20.09 |
| Desalination plant | 0.11 | 0.66 | 0.00 | 0.16 |
| Water storage | 0.05 | 0.00 | 0.00 | 0.00 |
| Electrolysis plant | 35.42 | 2.89 | 0.29 | 0.05 |
| H2 storage | 4.55 | 0.44 | 0.17 | 0.02 |
| Methanation plant | 7.75 | 0.00 | 0.00 | 0.01 |
| CH$_4$ liquefaction | 5.09 | 0.00 | 0.00 | 0.01 |
| CH$_4$ storage (Morocco) | 0.23 | 0.00 | 0.00 | 0.00 |
| CH$_4$ carriers | 0.67 | 0.00 | 0.00 | 0.00 |
| CH$_4$ storage (Belgium) | 0.23 | 0.00 | 0.00 | 0.00 |
| CH$_4$ regasification | 1.02 | 0.00 | 0.00 | 0.00 |
| Total cost | 124.47 | 33.46 | 11.57 | 21.04 |
| CH$_4$ cost | 124.47 | 157.93 | 136.04 | 145.51 |
# Results
With the study and the model being defined, we can analyse the results of the optimization process. In this section, we first analyse the three [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configurations from a cost perspective. We then analyze them from an energy perspective. Finally, we discuss the results obtained in this work.
## Cost analysis
The costs for delivering CH$_4$ to Belgium in our three [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configurations are detailed in Tables [\[tab:costs\]](#tab:costs){reference-type="ref" reference="tab:costs"} and [\[tab:co2-costs\]](#tab:co2-costs){reference-type="ref" reference="tab:co2-costs"}. The prices of the commodities produced by the hub are given in Table [1](#tab:commodity-costs){reference-type="ref" reference="tab:commodity-costs"}. Producing and delivering 1 MWh of CH$_4$ costs 124.47€ without factoring in the costs related to [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing. The largest contributions to this cost are renewable energy capture (53.43€/MWh CH$_4$), electricity transportation (11.86€/MWh CH$_4$), and water electrolysis (35.42€/MWh CH$_4$).
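As a quick sanity check on this breakdown, the entries of the "free CO$_2$" column of the cost table indeed add up to the stated totals; the snippet below simply re-adds the published figures, and the variable names are ours.

```python
# Per-module costs without CO2 sourcing, in EUR per MWh of delivered CH4,
# copied from the cost table above.
base_costs = {
    "solar_pv": 10.43, "windmill_farm": 43.00, "batteries": 4.06, "hvdc": 11.86,
    "co2_sourcing": 0.00, "desalination": 0.11, "water_storage": 0.05,
    "electrolysis": 35.42, "h2_storage": 4.55, "methanation": 7.75,
    "ch4_liquefaction": 5.09, "ch4_storage_ma": 0.23, "ch4_carriers": 0.67,
    "ch4_storage_be": 0.23, "ch4_regasification": 1.02,
}
print(round(sum(base_costs.values()), 2))                              # 124.47
print(round(base_costs["solar_pv"] + base_costs["windmill_farm"], 2))  # 53.43
```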
Capturing and transporting [cdo]{acronym-label="cdo" acronym-form="singular+short"} to the [eng]{acronym-label="eng" acronym-form="singular+short"} production hub creates additional costs that range from 11.57€/MWh CH$_4$ for [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Morocco to 33.46€/MWh CH$_4$ for [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site, which is the most expensive way to source [cdo]{acronym-label="cdo" acronym-form="singular+short"} for the hub. Therefore, delivering 1 MWh of [eng]{acronym-label="eng" acronym-form="singular+short"} in Belgium costs a total of 136.04€ when sourcing [cdo]{acronym-label="cdo" acronym-form="singular+short"} from [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Morocco, 145.51€ when sourcing [cdo]{acronym-label="cdo" acronym-form="singular+short"} from [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium, and 157.93€ when sourcing [cdo]{acronym-label="cdo" acronym-form="singular+short"} from [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site. For comparison, regular fossil natural gas has been exchanged at prices well above these costs for several months during the 2022 energy crisis in Europe (up to 342€/MWh in August 2022).
[cdo]{acronym-label="cdo" acronym-form="singular+short"} coming from [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site also appears to be more expensive than [cdo]{acronym-label="cdo" acronym-form="singular+short"} captured by [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium and imported. As a result, in the configuration where [cdo]{acronym-label="cdo" acronym-form="singular+short"} can be sourced either from [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site or from [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium, the methanation uses as much [cdo]{acronym-label="cdo" acronym-form="singular+short"} from Belgium as possible. The fraction of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} used for methanation that comes from Belgium is therefore equal to the capture efficiency of the [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} process (90% in our model).
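A minimal steady-state carbon balance makes this explicit: writing $\eta$ for the capture efficiency of the Belgian PCCC units, the CO$_2$ fed to the methanation unit splits as $$f_{\text{PCCC,\,BE}} = \eta = 0.9, \qquad f_{\text{DAC,\,MA}} = 1 - \eta = 0.1,$$ since only the carbon lost at capture in Belgium has to be replaced by DAC in Morocco.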
In order to be exhaustive, we tested an alternate model where CH$_4$ and [cdo]{acronym-label="cdo" acronym-form="singular+short"} are transported through offshore pipelines for the configuration where part of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} comes from [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium. In this case, the price of [eng]{acronym-label="eng" acronym-form="singular+short"} delivered in Belgium rises to 151.24€/MWh, which is more expensive than transportation by carriers.
| CO$_2$ sourcing elements | Cost (€/MWh CH$_4$) |
|---|---|
| DAC (Morocco) | 21.61 |
| CO$_2$ storage (Morocco) | 0.0 |
| Total cost | 21.61 |
\
| CO$_2$ sourcing elements | Cost (€/MWh CH$_4$) |
|---|---|
| PCCC (Morocco) | 8.08 |
| CO$_2$ storage (Morocco) | 0.3 |
| CO$_2$ pipe | 0.01 |
| Total cost | 8.39 |
| CO$_2$ sourcing elements | Cost (€/MWh CH$_4$) |
|---|---|
| Solar PV field (Belgium) | 1.12 |
| Windmill farm (Belgium) | 2.36 |
| Batteries (Belgium) | 1.16 |
| HVDC (Belgium) | 0.24 |
| PCCC (Belgium) | 7.07 |
| CO$_2$ liquefaction (Belgium) | 0.14 |
| CO$_2$ storage (Belgium) | 0.44 |
| CO$_2$ carriers | 1.77 |
| DAC (Morocco) | 5.28 |
| CO$_2$ storage (Morocco) | 0.45 |
| CO$_2$ regasification (Morocco) | 0.06 |
| Total cost | 20.09 |
| Commodity | Cost [€] |
|---|---|
| Hydrogen [t] | 3210.69 |
| Water [t] | 1.17 |
| CO$_2$ - DAC MA [t] | 256.28 |
| CO$_2$ - PCCC MA [t] | 72.28 |
| CO$_2$ - PCCC BE + DAC MA [t] | 136.62 |
| Wind power - MA [MWh] | 30.19 |
| Solar power - MA [MWh] | 22.89 |
| Wind power - BE [MWh] | 65.47 |
| Solar power - BE [MWh] | 58.51 |

: Cost of the commodities produced by our [esc]{acronym-label="esc" acronym-form="singular+short"} model. Due to modelling limitations, the prices given for the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sources using [DAC]{acronym-label="DAC" acronym-form="singular+short"} correspond to the upper-bound price, attained when all the heating energy comes from hydrogen. The production costs for power, water, and hydrogen are independent of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configuration.
## Energy analysis
| Configuration | Efficiency [%] | Wind curtailment [%] | Solar curtailment [%] | Recycled heat [%] |
|---|---|---|---|---|
| DAC MA | 48.44 | 25.17 | 0.0 | 65.41 |
| PCCC MA | 51.10 | 25.17 | 0.0 | N/A |
| DAC MA + PCCC BE | 51.26 | 25.14 | 0.0 | 99.99 |
Table [\[tab:energy-stats\]](#tab:energy-stats){reference-type="ref" reference="tab:energy-stats"} gives some key statistics related to energy within our model. This table shows that the efficiency of the hub, that is the ratio between the energy contained in the [eng]{acronym-label="eng" acronym-form="singular+short"} delivered in Belgium and the energy captured by windmills and solar panels, lies around 50% with small variations depending on the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configuration. The highest efficiency, 51.26%, is obtained by the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing that combines [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium and [DAC]{acronym-label="DAC" acronym-form="singular+short"} in Morocco and beats the efficiency of simple [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Morocco. This observation, which may be surprising at first glance, can be explained by the use of heat recirculation. As shown in the last column of the table, a large part of the heat required for [DAC]{acronym-label="DAC" acronym-form="singular+short"} can be provided by the Sabatier reaction. In the case of joint [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} and [DAC]{acronym-label="DAC" acronym-form="singular+short"}, the Sabatier reaction can even provide all the heat required for [DAC]{acronym-label="DAC" acronym-form="singular+short"}, which lowers the energy input required for carbon capture when compared to [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} alone. However, it is worth noting that the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing involving [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Morocco has residual heat that is not used in our model, but that could be valued for other applications, which would virtually increase the efficiency of the hub.
Table [\[tab:energy-stats\]](#tab:energy-stats){reference-type="ref" reference="tab:energy-stats"} also shows the fraction of available renewable energy that is not used by the hub. For windmills, the energy curtailment reaches about 25%, while all the available solar energy is used. This observation is identical for all [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configurations. We believe that the discrepancy between the curtailments of solar and wind power can be explained by a difference in the regularity of the two energy sources. Indeed, the available solar power is fairly regular from one day to the next, whereas wind power fluctuates more.
In addition to these insights, we computed the load factor for different parts of the system. Windmills and PV panels have a load factor of 41.4% and 25.5% respectively. Electrolysis units have a load factor of about 81%, while desalination and methanation units are used at full capacity all the time by design. These load factors are similar for all [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configurations.
Finally, Table [\[tab:sizing\]](#tab:sizing){reference-type="ref" reference="tab:sizing"} shows the capacity that needs to be installed in various modules of the hub to deliver an average power of 1 MW of [eng]{acronym-label="eng" acronym-form="singular+short"} to Belgium. Since the model is fully linear, these capacities scale linearly with the desired power of [eng]{acronym-label="eng" acronym-form="singular+short"} to be delivered. It is interesting to note that, according to the numbers given in this table, the biggest consumers of fresh water are by far the [DAC]{acronym-label="DAC" acronym-form="singular+short"} units.
| Module | Free CO$_2$ [/MW CH$_4$] | DAC MA [/MW CH$_4$] | PCCC MA [/MW CH$_4$] | PCCC BE + DAC MA [/MW CH$_4$] |
|---|---|---|---|---|
| PV capacity [MW] | 2.1389 | 0.2514 | 0.0769 | 0.0226 |
| Windmill capacity [MW] | 3.4660 | 0.3238 | 0.1528 | 0.0162 |
| Battery capacity [MWh] | 1.1266 | 0.4182 | -0.0188 | 0.0909 |
| Battery throughput [MW] | 0.1469 | 0.0469 | 0.0007 | 0.0110 |
| HVDC line capacity [MW] | 2.3105 | 0.2178 | 0.0983 | 0.0110 |
| Desalination units [MW] | 0.1508 | 0.9468 | 0.0000 | 0.2257 |
| Electrolysis units [MW] | 2.1717 | 0.1774 | 0.0179 | 0.0029 |
## Results discussion
All the results presented in this section are directly linked to our hypotheses and to our model. Our observations and analysis may be slightly different when considering some additional factors. In this section, we discuss some factors that could impact the results presented in this work.
First, our model does not factor in the economies of scale that could come with the deployment of a large [RREH]{acronym-label="RREH" acronym-form="singular+short"}. Indeed, most of the current cost estimates are projections based on existing prototypes or units that are small-scale or still at an experimental stage. Therefore, it seems reasonable to assume that the overall cost of the modules would fall if they were manufactured in large quantities.
Second, the efficiency of [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} can vary depending on the industry on which [cdo]{acronym-label="cdo" acronym-form="singular+short"} capture is performed [@Reiter2015Evaluating]. As little data are available on the type and capacity of the industries that may use the delivered [eng]{acronym-label="eng" acronym-form="singular+short"} in Belgium, we had to make an educated assumption on the efficiency of [PCCC]{acronym-label="PCCC" acronym-form="singular+short"}. Real-world deployments may lead to efficiencies different from the assumptions made in this work, and could change our observations. Indeed, if the efficiency of [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} drops too steeply, the costs associated with [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} may rise to a point where the balance between [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium and [DAC]{acronym-label="DAC" acronym-form="singular+short"} in Morocco has a different optimum.
Third, we modelled the hub to be self-sufficient. As shown in the previous section, this implies synthesizing hydrogen to provide heat to the [DAC]{acronym-label="DAC" acronym-form="singular+short"} units when the heat resulting from the Sabatier reaction is not sufficient, which increases the cost of the hub. However, the low-grade heat required to regenerate solid sorbents in [DAC]{acronym-label="DAC" acronym-form="singular+short"} units is a common industrial byproduct that is often considered as waste due to a lack of use cases, but that can find a use within the hub. As a result, importing heat from a neighbouring industry could make the configuration relying on [DAC]{acronym-label="DAC" acronym-form="singular+short"} units more competitive price-wise when compared to the other [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing configurations.
Last, the ability of [DAC]{acronym-label="DAC" acronym-form="singular+short"} units to provide a [cdo]{acronym-label="cdo" acronym-form="singular+short"} source that does not need a dedicated supply chain is a practical advantage over [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} that does not appear when simply looking at numbers. A simpler supply chain for the production of [eng]{acronym-label="eng" acronym-form="singular+short"} is also probably a more robust one as fewer elements come into play. This could lead to a higher reliability of the whole supply chain, which has an economic value that is not taken into account in our model.
# Conclusion
In this paper, we consider the sizing and the cost of an [esc]{acronym-label="esc" acronym-form="singular+short"} designed to synthesize [eng]{acronym-label="eng" acronym-form="singular+short"} from renewable energy sources in an [RREH]{acronym-label="RREH" acronym-form="singular+short"} in Morocco and to deliver it in Belgium. Synthesizing [eng]{acronym-label="eng" acronym-form="singular+short"} requires a source of [cdo]{acronym-label="cdo" acronym-form="singular+short"}. In this work, we considered three different configurations for sourcing the required [cdo]{acronym-label="cdo" acronym-form="singular+short"} to the [RREH]{acronym-label="RREH" acronym-form="singular+short"}, and studied their impact on the sizing and the cost of the [RREH]{acronym-label="RREH" acronym-form="singular+short"}, and by extension on the cost of the [eng]{acronym-label="eng" acronym-form="singular+short"} delivered in Belgium. The three different configurations considered for capturing the [cdo]{acronym-label="cdo" acronym-form="singular+short"} are (i) [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site, in Morocco; (ii) [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} from Moroccan coal-fired power plants; and (iii) [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} from [eng]{acronym-label="eng" acronym-form="singular+short"} use in Belgium, with capture losses being compensated by [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site, in Morocco.
We modelled and optimized the [esc]{acronym-label="esc" acronym-form="singular+short"} corresponding to the three configurations using the GBOML framework. Results show that, generally speaking, [DAC]{acronym-label="DAC" acronym-form="singular+short"} is more expensive than [PCCC]{acronym-label="PCCC" acronym-form="singular+short"}. This is in part due to the heat required to regenerate the sorbent of the [DAC]{acronym-label="DAC" acronym-form="singular+short"} units. Indeed, the [cdo]{acronym-label="cdo" acronym-form="singular+short"} sourcing relying only on [DAC]{acronym-label="DAC" acronym-form="singular+short"} units leads to a price of 157.93€/MWh of [eng]{acronym-label="eng" acronym-form="singular+short"} delivered in Belgium, while the configuration relying on [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} from Moroccan coal-fired power plants leads to a price of 136.04€/MWh. The hybrid configuration, which combines [PCCC]{acronym-label="PCCC" acronym-form="singular+short"} in Belgium and [DAC]{acronym-label="DAC" acronym-form="singular+short"} in Morocco, leads to an intermediate price of 145.51€/MWh of [eng]{acronym-label="eng" acronym-form="singular+short"} delivered in Belgium. The price of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} induced by this configuration is still lower than that of the [cdo]{acronym-label="cdo" acronym-form="singular+short"} captured by [DAC]{acronym-label="DAC" acronym-form="singular+short"} on site, despite the additional costs induced by the transportation of [cdo]{acronym-label="cdo" acronym-form="singular+short"} between Belgium and Morocco. Overall, the cost of delivering [eng]{acronym-label="eng" acronym-form="singular+short"} in Belgium with the configurations presented in this work seems reasonable, since regular fossil natural gas has already been sold at higher prices during the 2022 energy crisis in Europe.
[^1]: <https://gitlab.uliege.be/smart_grids/public/gboml/-/tree/master/examples/synthetic_methane_morocco>
---
abstract: |
We consider group actions on compact median algebras. We show that, given a generating probability measure $\mu$ on the acting group and under suitable conditions on the median algebra, it can be realized in a unique way as a $\mu$-boundary in the sense of Furstenberg. Along the way, we prove some structural results.
author:
- Uri Bader
- Aviv Taller
bibliography:
- endtobig.bib
title: Compact Median Algebras are $\mu$-Boundaries in a unique way
---
# Introduction
In this work we study group actions on compact topological median algebras. Our main goal is to understand *stationary measures* for such actions. This is a continuation of [@BaderTaller], where we assumed the acting group is amenable and studied the associated invariant measures. However, here we remove the amenability assumption, thus invariant measures do not exist in general, while stationary measures do exist. Our main result is Theorem [Theorem 1](#thm:stationary-nofactors){reference-type="ref" reference="thm:stationary-nofactors"} which shows that, under suitable assumptions, there exists a unique stationary measure making the given median space a boundary in the sense of Furstenberg.
We first state Proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"} below, which is a general structural result - it describes a *maximal cubical factor* of a median algebra. Recall that a *cube* is a median algebra isomorphic to $\{0,1\}^n$ for some cardinal $n$, called the *dimension* of the cube. A finite dimensional cube is called a *finite cube*.
Given a topological group $G$, a topological median algebra endowed with an action of $G$ by automorphisms such that the action map $G\times M\to M$ is continuous is called a topological $G$-median algebra. A $G$-equivariant continuous median map of topological $G$-median algebras is called a *morphism*. As usual in the theory of dynamical systems, we study *minimal actions*, that is we assume that our median algebra does not contain any non-empty, proper closed invariant sub-median algebra. As in [@BaderTaller], we will always assume the underlying median algebra is *sclocc*, that is *second countable, locally open-convex and compact*, see §[2](#sec:medianalgebras){reference-type="ref" reference="sec:medianalgebras"} for precise definitions. Moreover, we impose one more condition on our algebras - we assume that all intervals are countable (which includes finite, by our convention).
**Proposition 1**. Let $G$ be a topological group and let $M$ be a $G$-median algebra. Assume that $M$ is sclocc and with countable intervals. Then there exist topological $G$-median algebras $M'$ and $C$ such that $M\simeq M' \times C$, $C$ is a finite cube and every surjective morphism $M\to C'$, where $C'$ is a $G$-cube, is the composition of the projection map $M\to C$ and a unique morphism $C\to C'$.
In case $M\simeq M'$, that is $C$ is trivial in Proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"}, we say that $M$ *has no cubical factor*. Recall that, given a probability measure $\mu$ on $G$, a probability measure $\nu$ on $M$ is said to be *$\mu$-stationary* if $\mu*\nu=\nu$, where $\mu*\nu$ is the push forward of the measure $\mu\times \nu$ under the action map $G\times M\to M$.
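Explicitly, for every Borel subset $A\subset M$, $$(\mu*\nu)(A)=\int_G \nu(g^{-1}A)\,d\mu(g),$$ so $\nu$ is $\mu$-stationary precisely when it coincides with the $\mu$-average of its translates $g_*\nu$.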
**Theorem 1**. Let $G$ be a countable discrete group and let $M$ be a minimal topological $G$-median algebra. Assume that $M$ is sclocc with countable intervals and with no cubical factor. Let $\mu$ be a generating probability measure on $G$. Then there exists a unique $\mu$-stationary probability measure $\nu$ on $M$. Moreover, the $G$-space $(M,\nu)$ is a $\mu$-boundary in the sense of Furstenberg.
Given $\mu$, a generating probability measure on $G$, we recall Furstenberg's correspondence between $G$-maps $B(G,\mu)\to \text{Prob}(M)$ and $\mu$-stationary measures on $M$, where $B(G,\mu)$ denotes the corresponding *Furstenberg-Poisson boundary* of $G$, see [@Bader2006 Theorem 2.16]. The uniqueness of the stationary measure $\nu$ in Theorem [Theorem 1](#thm:stationary-nofactors){reference-type="ref" reference="thm:stationary-nofactors"} is thus equivalent to having a unique $G$-map $\phi:B(G,\mu)\to \text{Prob}(M)$, and the fact that $(M,\nu)$ is a $\mu$-boundary is equivalent to the fact that the image of $\phi$ consists of $\delta$-measures in $\text{Prob}(M)$.
We will recall in §[5](#sec:ergodic){reference-type="ref" reference="sec:ergodic"} the definition of a *boundary pair* for $G$, as discussed in [@Bader2014]. By [@Bader2014 Theorem 2.7], the future and past Furstenberg-Poisson boundaries associated with a generating probability measure form a boundary pair. Theorem [Theorem 1](#thm:stationary-nofactors){reference-type="ref" reference="thm:stationary-nofactors"} will follow from the following.
**Theorem 2**. Let $G$ be a countable discrete group and let $M$ be a minimal topological $G$-median algebra. Assume that $M$ is sclocc with countable intervals and with no cubical factor. Let $(B_-,B_+)$ be a boundary pair for $G$. Then there are unique (up to a.e equivalence) a.e defined measurable $G$-maps $\phi_-:B_-\to M$ and $\phi_+:B_+\to M$. Moreover, these maps are the unique (up to a.e equivalence) a.e defined measurable $G$-maps $B_-\to \text{Prob}(M)$, $B_+\to \text{Prob}(M)$.
We denote by $\text{Cub}(M)$ the set of all cubes in $M$ and endow it with the Chabauty topology. Any measurable map $B_{\pm}\rightarrow \text{Cub}(M)$ gives rise to a map $B_{\pm}\rightarrow \text{Prob}(M)$, by sending every cube to its unique uniform probability measure. Thus, Theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"} implies in particular the uniqueness of the maps $B_{\pm}\rightarrow \text{Cub}(M)$. The following is the generalization of the above theorem, where we consider the possibility of a non-trivial cubical factor.
**Theorem 3**. Let $G$ be a countable discrete group and let $M$ be a minimal topological $G$-median algebra. Assume that $M$ is sclocc with countable intervals. Let $(B_-,B_+)$ be a boundary pair for $G$. Then, there exist unique (up to a.e equivalence) a.e defined measurable $G$-maps $\phi_-:B_-\to \text{Cub}(M)$ and $\phi_+:B_+\to \text{Cub}(M)$. Under the decomposition $M\simeq M' \times C$ given in Proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"}, $\phi_-$ and $\phi_+$ correspond to the products of the unique maps $\phi'_-:B_-\to M'$ and $\phi'_+:B_+\to M'$ given in Theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"} and the constant maps $B_\pm\to \{C\} \in \text{Cub}(C)$.
**Example 1**. Let $G:=F_2\times \{\pm1\}$, where $F_2:=\langle a, b\rangle$ is the free group generated by two elements and $\{\pm1\}\cong {\mathbb Z}/2$. Let $M:= (T_4\cup \partial T_4)\times \{\pm1\}$, where the 4-regular tree $T_4$ is identified with the Cayley graph of $F_2$ and $\partial T_4$ with its boundary. We take the usual compact median structure on $T_4\cup \partial T_4$ and the corresponding product structure on $M$. We consider the natural action of $G$ on $M$. This action is clearly minimal. With respect to the measure $\mu(\{(x,\pm1)\})=1/8$ for $x\in \{a^{\pm 1},b^{\pm 1}\}$, the Furstenberg-Poisson boundary $B(G,\mu)$ of $G$ is isomorphic to $\partial T_4$ endowed with the standard stationary measure. Then the map $$\partial T_4 \to \text{Cub}(M), \quad \theta \mapsto \theta \times \{\pm1\}$$ is the unique (up to a.e equivalence) a.e defined measurable $G$-map $\partial T_4 \to \text{Cub}(M)$.
It is worth noting that in case of a non-trivial cubical factor, it is no longer true that there are unique maps $B_{\pm}\to \text{Prob}(M)$. The following example illustrates this fact.
**Example 2**. Consider the subgroup $G\leq \{\pm1\}^3$ consisting of all the elements $g=(x,y,z)$ such that $xyz=1$. Note that its action on the cube $\{\pm1\}^3$ is minimal. For every $t\in [0,1]$, let $\mu_t$ be the probability measure on $\{\pm1\}^3$ which takes the value $t/8$ on elements of $G$ and the value $(1-t)/8$ on elements not in $G$. This is a continuum family of $G$-invariant probability measures.
# Median Algebras {#sec:medianalgebras}
This section will focus on essential results and definitions in the context of median algebras that are required for this paper. For an extensive survey on median algebras we refer the reader to [@rol98; @fioBoundary; @bowMed].
A *median algebra* is a set $M$ with a ternary operator $m:M^3\rightarrow M$ satisfying the following three axioms (a basic example is given right after the list). For every $x,y,z,u,v\in M$:
1. $m(x,y,z)=m(x,z,y)=m(y,x,z)$,
2. $m(x,x,y)=x$, and
3. $m(m(x,y,z),u,v)=m(x,m(y,u,v),m(z,u,v))$.
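A basic example is the two-point set $\{0,1\}$ with the majority operator, and, coordinatewise, the cube $\{0,1\}^n$; this is the standard example behind the cubes discussed in the introduction. In Boolean notation, $$m(x,y,z)=(x\wedge y)\vee(y\wedge z)\vee(z\wedge x),$$ and each axiom can be checked directly on the finitely many triples of bits; for instance, $m(x,x,y)=(x\wedge x)\vee(x\wedge y)\vee(y\wedge x)=x$.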
A *median morphism* between two median algebras $M$ and $N$ is a map $\phi:M\rightarrow N$ such that $\phi\circ m=m\circ \phi^3$. For two points $x$ and $y$ in $M$, the *interval* $[x,y]$ is defined as the set of all points $z\in M$, such that $m(x,y,z)=z$. A subset $C\subset M$ is said to be *convex*, if for every $x,y\in C$, $[x,y]\subset C$. The following is a median algebra version of Helly's theorem:
**Lemma 3** (Helly's Theorem, [@rol98 Theorem 2.2]). If $C_1,\dots,C_n$ are convex, and for every $i\neq j$, $C_i\cap C_j\neq \varnothing$, then also $\bigcap_{i=1}^{n}C_i \neq \varnothing$.
The *convex hull* of a subset $S\subset M$, denoted by $\text{Conv}(S)$, is the smallest convex set containing S. For two subsets $A,B\subset M$, the *join* of A and B, denoted by $[A,B]$, is the union of all intervals $[a,b]$, for $a\in A$ and $b\in B$. For two convex sets, the join gives a neat description for the convex hull of their union. Namely:
**Lemma 4** ([@rol98 Proposition 2.3]). If $C,C'\subset M$ are two convex sets, then, $\text{Conv}(C\cup C')=[C,C']$
A *half-space* $\mathfrak{h}\subset M$, is a convex subset, such that the set $\mathfrak{h}^*:=M\backslash \mathfrak{h}$ is also convex. An unordered pair of two complementary half-spaces $\mathfrak{w}=\{\mathfrak{h},\mathfrak{h}^*\}$ is called a *wall*. Given two sets $A$ and $B$, we say that a half-space $\mathfrak{h}$ is *separating A from B*, if $A\subset\mathfrak{h}$ and $\mathfrak{h}\cap B=\varnothing$. We denote by $\Delta(A,B)$ the collection of all half-spaces that separate $A$ from $B$. We say that a wall $\mathfrak{w}$ *separates* $A$ and $B$ if $\mathfrak{w}\cap \Delta(A,B)$ is non-empty. One of the main features of median algebras is that for any two disjoint non-empty convex sets $A$ and $B$, $\Delta(A,B)\neq \varnothing$. See Theorem 2.8 in [@rol98].
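For illustration, in the cube $\{0,1\}^n$ the proper non-empty half-spaces are exactly the $2n$ coordinate half-spaces, and the walls correspond to the $n$ coordinates: $$\mathfrak{h}_i^{\epsilon}:=\{x\in\{0,1\}^n \ | \ x_i=\epsilon\}, \qquad \mathfrak{w}_i:=\{\mathfrak{h}_i^{0},\mathfrak{h}_i^{1}\}, \qquad 1\leq i\leq n,\ \epsilon\in\{0,1\}.$$ For two distinct points $x,y\in\{0,1\}^n$ one then has $\Delta(x,y)=\{\mathfrak{h}_i^{x_i}\ | \ x_i\neq y_i\}$.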
Let $C\subset M$ be a subset and $x\in M$. An element $y\in C$ is said to be the *gate* of $x$ in $C$, if $y\in [x,z]$ for every $z\in C$. Note that the gate, if it exists, is unique. We say that $C$ is *gate-convex*, if for every $x\in M$ there exists a gate $y$ in $C$. Given a gate-convex set $C$, we denote by $\pi_C$ the map that assigns to every element $x\in M$ its gate in $C$. The map $\pi_C$ is called the *gate-projection*. The following are key features of gate-convex sets:
**Lemma 5** ([@fioBoundary Proposition 2.1]). If a map $\phi:M\rightarrow M$ is a gate-projection, then for all $x,y,z\in M$, we have $\phi(m(x,y,z))=m(\phi(x),\phi(y),\phi(z))=m(\phi(x),\phi(y),z)$.
**Lemma 6** ([@fioBoundary Lemma 2.2]).
1. If $C_1\subset M$ is convex and $C_2\subset M$ is gate-convex, the projection $\pi_{C_2}(C_1)$ is convex. If moreover, $C_1\cap C_2\neq \varnothing$, we have $\pi_{C_2}(C_1)=C_1\cap C_2$.
2. If $C_1,C_2\subset M$ are gate-convex, the sets $\pi_{C_1}(C_2)$ and $\pi_{C_2}(C_1)$ are gate-convex with gate-projections $\pi_{C_1}\circ \pi_{C_2}$ and $\pi_{C_2}\circ \pi_{C_1}$, respectively.
3. If $C_1, C_2\subset M$ are gate-convex and $C_1\cap C_2\neq \varnothing$, then $C_1\cap C_2$ is gate-convex with gate-projection $\pi_{C_2}\circ \pi_{C_1}=\pi_{C_1}\circ \pi_{C_2}$. In particular, if $C_2\subset C_1$, then $\pi_{C_2}=\pi_{C_1}\circ \pi_{C_2}$.
It follows by the axioms of the median operator that for every $x,y,z\in M$, $m(x,y,z)\in [x,y]$. Moreover, by lemma [Lemma 5](#lem:fio1){reference-type="ref" reference="lem:fio1"} and axiom (3), intervals are gate-convex, and gate-convex sets are convex. A criterion for a convex set to be gate-convex, will be given in the context of compact median algebras.
Let $\phi:M\rightarrow M$ be an automorphism of M, and $C$ a gate-convex set. Observe that for every $x\in M$, and every $y\in C$, we have $$\phi(\pi_C(x))=\phi(m(x,y,\pi_C(x)))=m(\phi(x),\phi(y),\phi(\pi_C(x)))\in [\phi(x),\phi(y)].$$ That is, $\phi(C)$ is gate-convex, and for every $x\in M$, $\pi_{\phi(C)}(x)=\phi(\pi_C(\phi^{-1}(x)))$.
A *topological median algebra* is a median algebra M, endowed with a Hausdorff topology, with respect to which the median operator is continuous. It is called *locally open-convex* if, in addition, each of its points has a basis of open and convex neighborhoods. The reader should note the difference between our definition of locally open-convex and that of *locally convex* in [@fioBoundary].
Let $M$ be a compact median algebra. For any two points $x,y\in M$, the interval $[x,y]$ is closed, as it is the continuous image of the compact set $\{x\}\times\{y\}\times M$. It follows by lemmata 2.6 and 2.7 in [@fioBoundary], that any convex set in $M$ is gate-convex if and only if it is closed, and that gate-projections are continuous. In particular, the convex hull of the union of two gate-convex sets $C$ and $C'$ is compact, and therefore gate-convex as well, as it is, by lemma [Lemma 4](#lem:convexisjoin){reference-type="ref" reference="lem:convexisjoin"}, the continuous image of the compact set $C\times C' \times M$.
**Lemma 7** ([@BaderTaller Lemma 2.4]). For disjoint non-empty subsets $A,B\subset M$, if $A$ is gate-convex then there exists $a\in A$ such that $\Delta(A,B)=\Delta(a,B)$.
Note that it implies in particular that also $\Delta(B,A)=\Delta(B,a)$.
Our main object is a *second countable, locally open-convex, compact*, or in short *sclocc*, median algebra M. We fix such a space for the rest of this section.
Denote by $\text{Prob}(M)$ the space of probability measures on $M$. In [@BaderTaller] we introduce the *self-median operator* $\Phi: \text{Prob}(M) \rightarrow \text{Prob}(M)$, given by $\mu\mapsto m_*(\mu^3)$. A probability measure is said to be *balanced*, if it is invariant under $\Phi$.
Given a cube $C=\{0,1\}^k\simeq ({\mathbb Z}/2{\mathbb Z})^k$, one can easily verify that its corresponding normalized Haar measure $\lambda$ is balanced. Thus, each subcube $C\subset M$ induces a balanced measure on M. Such a measure on $M$ is called *cubical*. We have the following characterisation of balanced measures:
**Theorem 8** ([@BaderTaller Theorem A]). Every balanced measure on a sclocc median algebra is cubical.
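As a quick sanity check in the smallest non-trivial case $M=\{0,1\}$, where $m$ is the majority function, a measure $\mu$ with $p:=\mu(\{1\})$ satisfies $$\Phi(\mu)(\{1\})=m_*(\mu^3)(\{1\})=p^3+3p^2(1-p)=3p^2-2p^3,$$ so $\mu$ is balanced exactly when $3p^2-2p^3=p$, i.e. when $p(2p-1)(p-1)=0$. The balanced measures are therefore the two Dirac measures (uniform measures on $0$-dimensional subcubes) and the uniform measure on the whole $1$-cube, in accordance with Theorem [Theorem 8](#thm:balancediscubical){reference-type="ref" reference="thm:balancediscubical"}.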
A half-space $\mathfrak{h}$ in $M$ is said to be *admissible* if $\mathfrak{h}$ is open and $\mathfrak{h}^*$ has non empty interior. The following is an enhanced separation property:
**Lemma 9** ([@BaderTaller Proposition 3.6]). Let $C$ and $C'$ be two disjoint closed convex subsets in M. Then, there exists an admissible half-space $\mathfrak{h}\in \Delta(C,C')$.
**Corollary 10**. There exists a countable collection $\mathcal{H}$ of closed half-spaces in M, such that for every $x\neq y \in M$, there exist two disjoint, closed half-spaces $\mathfrak{h}\in \Delta(x,y)\cap \mathcal{H}$ and $\mathfrak{h}'\in \Delta(y,x)\cap\mathcal{H}$, and such that $x\in \mathfrak{h}^{\circ}$ and $y\in \mathfrak{h}'^{\circ}$.
*Proof.* As $M$ is second countable and locally open-convex, we can find a countable basis for the topology, $\mathcal{C}$, consisting of open convex sets. Since $M$ is compact Hausdorff, and hence regular, every two different points can be separated by two elements of $\mathcal{C}$ with disjoint closures.
Let $U$ and $V$ be two elements of $\mathcal{C}$ with disjoint closures. By lemma [Lemma 9](#lem:separatewithadmissible){reference-type="ref" reference="lem:separatewithadmissible"} we can choose a closed half-space $\mathfrak{h}_{(U,V)}\in \Delta (\overline{U},\overline{V})$. Using lemma [Lemma 9](#lem:separatewithadmissible){reference-type="ref" reference="lem:separatewithadmissible"} again, we find a closed half-space $\mathfrak{h}_{(V,U)}\in \Delta (\overline{V},\mathfrak{h}_{(U,V)})$.
Denote by $\mathcal{H}$ the countable collection of all half-spaces of the form $\mathfrak{h}_{(U,V)}$ and $\mathfrak{h}_{(V,U)}$. The reader can now verify that $\mathcal{H}$ possesses the required properties. ◻
Given a collection of half-spaces $\mathcal{H}$, and a non-empty subset $A$ of $M$, we denote by $\mathcal{H}(A)$ the subcollection of $\mathcal{H}$ consisting of all the half-spaces $\mathfrak{h}\in \mathcal{H}$ that cut $A$ into two, i.e. such that both $\mathfrak{h}\cap A$ and $\mathfrak{h}^*\cap A$ are non empty.
Fix some collection of half spaces $\mathcal{H}$, and its corresponding collection of walls $\mathcal{W}_{\mathcal{H}}$. That is, $$\mathcal{W}_{\mathcal{H}}:=\big\{\{\mathfrak{h},\mathfrak{h}^*\}\ | \ \mathfrak{h}\in \mathcal{H}\big\}.$$
We have a natural surjection $\mathcal{H}\rightarrow \mathcal{W}_{\mathcal{H}}$, given by $\mathfrak{h}\mapsto \{\mathfrak{h},\mathfrak{h}^*\}$. Fix a section $\sigma:\mathcal{W}_{\mathcal{H}}\rightarrow \mathcal{H}$. Given a set $A$ we denote by $\chi_A$ its characteristic function. Consider the following map $$\iota_{\mathcal{W}_{\mathcal{H}}}:M\rightarrow 2^{\mathcal{W}_{\mathcal{H}}}, \ x\mapsto\big(\chi_{\sigma(\mathbf{w})}(x)\big)_{\mathbf{w}\in \mathcal{W}_{\mathcal{H}}}.$$
We say that $\mathcal{W}_{\mathcal{H}}$ is *separating* if $\iota_{\mathcal{W}_{\mathcal{H}}}$ is injective, that is, if the walls in $\mathcal{W}_{\mathcal{H}}$ separate points in M. It is called *transverse* if $\iota_{\mathcal{W}_{\mathcal{H}}}$ is a surjection. The collection $\mathcal{H}$ is said to have one of the above properties if $\mathcal{W}_{\mathcal{H}}$ possesses it. We note that these properties are independent of the choice of the section $\sigma$.
We say that two collections of walls $\mathcal{W}_1$ and $\mathcal{W}_2$ are transverse, if for every $\mathfrak{w}_1\in\mathcal{W}_1$ and $\mathfrak{w}_2\in\mathcal{W}_2$, the set of walls $\{\mathfrak{w}_1,\mathfrak{w}_2\}$ is transverse. Two collections of half-spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ are said to be transverse, if their collections of walls, $\mathcal{W}_{\mathcal{H}_1}$ and $\mathcal{W}_{\mathcal{H}_2}$, are transverse.
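Returning to the cube $\{0,1\}^n$ with $\mathcal{H}$ the collection of its $2n$ coordinate half-spaces and the section $\sigma(\mathfrak{w}_i):=\mathfrak{h}_i^{1}=\{x \ | \ x_i=1\}$, the map $\iota_{\mathcal{W}_{\mathcal{H}}}$ is simply $$\iota_{\mathcal{W}_{\mathcal{H}}}(x)=\big(\chi_{\mathfrak{h}_1^{1}}(x),\dots,\chi_{\mathfrak{h}_n^{1}}(x)\big)=(x_1,\dots,x_n),$$ which is a bijection onto $2^{\mathcal{W}_{\mathcal{H}}}\simeq\{0,1\}^n$; hence this collection of walls is both separating and transverse.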
We finish with a handy observation:
**Lemma 11**. Let $\phi:M\rightarrow N$ be a continuous median bijection between two sclocc median algebras. Then $\phi$ is an isomorphism.
*Proof.* Any continuous bijection between compact Hausdorff spaces is a homeomorphism. So it is left to show that the inverse $\phi^{-1}$ is a median map.
Indeed, let $x,y,z\in M$. Then,
$$\begin{aligned}
\phi^{-1}(m(x,y,z))&=\phi^{-1}(m(\phi(\phi^{-1}(x)),\phi(\phi^{-1}(y)),\phi(\phi^{-1}(z))))\\ &=\phi^{-1}\circ \phi \big(m(\phi^{-1}(x),\phi^{-1}(y),\phi^{-1}(z))\big)=m(\phi^{-1}(x),\phi^{-1}(y),\phi^{-1}(z)).\end{aligned}$$ ◻
# Cubing Decomposition And Proof Of Proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"} {#sec:cubing}
Given a median algebra M, denote by $\mathcal{H}_M$ and $\mathcal{W}_M$ the collection of all half-spaces and walls of M, respectively. We fix a sclocc median algebra M and a group $G$ as in Proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"}, and a section $\sigma_M:\mathcal{W}_M\rightarrow \mathcal{H}_M$.
**Lemma 12**. Let $\mathcal{W}\subset \mathcal{W}_M$ be a $G$-invariant subcollection of walls that separates points in M. Then, there exist two sclocc median algebras $M'$ and $C$, where $C$ is a finite cube, and an isomorphism $\phi:M\rightarrow M'\times C$, if and only if there exists a partition $\mathcal{W}=\mathcal{W}_1\amalg \mathcal{W}_2$ such that the following conditions hold:
1. The set $\mathcal{W}_1$ is transverse, and all of its walls consist of clopen half-spaces,
2. The collections $\mathcal{W}_1$ and $\mathcal{W}_2$ are transverse.
Furthermore, we can choose $M'$ and $C$ to be $G$-spaces, and $\phi$ to be $G$-equivariant, if and only if there exists a partition as above, with $\mathcal{W}_1$ and $\mathcal{W}_2$ $G$-invariant.
*Proof.* Suppose first that we have a partition as above. Then, item (1) implies that the map $\iota:=\iota_{\mathcal{W}_1}$ is in fact a continuous projection onto the cube $C:=2^{\mathcal{W}_1}$. Fix $x\in C$, and denote by $M_x$ the fiber $\iota^{-1}(x)$. As $\iota$ is a continuous median map, $M_x$ is closed and convex, thus, a gate-convex set. We claim that for every $x,y\in C$, $\pi_{M_x}|_{M_y}$ is an isomorphism.
Let $a,b\in M_y$. By corollary [Corollary 10](#cor:separatingcollection){reference-type="ref" reference="cor:separatingcollection"}, we can find two disjoint closed half-spaces $a\in \mathfrak{h}_a^{\circ}\subset\mathfrak{h}_a\in\Delta(a,b)$ and $b\in\mathfrak{h}_b^{\circ}\subset \mathfrak{h}_b\in\Delta(b,a)$. For a set $A\subset M$, denote by $\overline{A}$ the topological closure of $A$. Applying lemma [Lemma 7](#lem:gateconvexseparbypoint){reference-type="ref" reference="lem:gateconvexseparbypoint"} to the pairs $\{a,\overline{\mathfrak{h}_a^*}\cap M_y\}$ and $\{b,\overline{\mathfrak{h}_b^*}\cap M_y\}$ yields two points $a'\in \overline{\mathfrak{h}_a^*}\cap M_y$ and $b'\in\overline{\mathfrak{h}_b^*}\cap M_y$, such that $\Delta(a,a')=\Delta(a,\overline{\mathfrak{h}_a^*})$ and $\Delta(b,b')=\Delta(b,\overline{\mathfrak{h}_b^*})$. These equalities are true in M, although we applied the lemma in $M_y$, thanks to lemma 2.3 in [@fioBoundary].
Denote by $\mathcal{H}$ the union $\sigma_M(\mathcal{W})\cup \sigma_M(\mathcal{W})^*$. As $\mathcal{W}$ is separating, we can find two half-spaces $\mathfrak{f}_a\in \Delta(a,a')\cap \mathcal{H}$ and $\mathfrak{f}_b\in\Delta(b,b')\cap \mathcal{H}$. Now, it follows by Helly's theorem, compactness, and the assumption that $\mathcal{W}_1$ and $\mathcal{W}_2$ are transverse, that $\overline{\mathfrak{f}_a}\cap M_x\neq\varnothing$ and that $\overline{\mathfrak{f}_b}\cap M_x\neq\varnothing$. Note that $\overline{\mathfrak{f}_a}$ and $\overline{\mathfrak{f}_b}$ are disjoint, and therefore, it follows by lemma [Lemma 6](#lem:fio2){reference-type="ref" reference="lem:fio2"}(1) that $\pi_{M_x}(a)\neq \pi_{M_x}(b)$. Thus, $\pi_{M_x}|_{M_y}$ is injective. By lemma [Lemma 6](#lem:fio2){reference-type="ref" reference="lem:fio2"}(2), $\pi_{M_x}\circ\pi_{M_y}|_{\pi_{M_x}(M_y)}=id_{\pi_{M_x}(M_y)}$. In particular, $\pi_{M_y}(\pi_{M_x}(M_y))=\pi_{M_x}(M_y)$, and thus, by injectivity, $\pi_{M_y}(\pi_{M_x}(M_y))=M_y$, that is, $\pi_{M_y}|_{M_x}$ is surjective. The claim now follows by the symmetry of the arguments.
Fix $x_0\in C$, denote $M_0:=M_{x_0}$ and by $\pi_0$ its gate-projection. As $M$ is the disjoint union of the $M_x$, it is clear that the continuous map $\iota\times \pi_0:M\rightarrow C\times M_0$ is a bijection. It follows by lemma [Lemma 11](#lem:bijectioniscontinuous){reference-type="ref" reference="lem:bijectioniscontinuous"} that this is an isomorphism, as needed. Note that $C$ must be finite, as the intervals in M are countable.
Suppose now that $M=M'\times C$, for $M'$ and $C$ as in the lemma. Let $p_{M'}$ and $p_C$ be the natural projections to the two factors. We have the following two subcollections of $\mathcal{W}_M$: $$\begin{aligned}
& \mathcal{W}_M^1:=\big\{\{p_C^{-1}(\mathfrak{h}),p_C^{-1}(\mathfrak{h}^*)\}\ \big| \ \{\mathfrak{h},\mathfrak{h}^*\}\in\mathcal{W}_C\big\} \\
& \mathcal{W}_M^2:=\big\{\{p_{M'}^{-1}(\mathfrak{h}),p_{M'}^{-1}(\mathfrak{h}^*)\}\ \big| \ \{\mathfrak{h},\mathfrak{h}^*\}\in\mathcal{W}_{M'}\big\}
\end{aligned}$$ Clearly, these collections are transverse, and since $\mathcal{W}_C$ is transverse and its walls consist of clopen half-spaces, so is $\mathcal{W}_M^1$ and so are its walls. Take $\mathfrak{h}\in\mathcal{H}_M$. Observe that if $(x_1,y_1),(x_2,y_2)\in \mathfrak{h}$, then $[x_1,x_2]\times[y_1,y_2]\subset \mathfrak{h}$. Hence, $\mathfrak{h}$ is of the form $C_1\times C_2$ for two convex sets $C_1\subset M'$ and $C_2\subset C$. Since this is true also for $\mathfrak{h}^*$, the wall $\{\mathfrak{h},\mathfrak{h}^*\}$ must belong to $\mathcal{W}_M^1$ or to $\mathcal{W}_M^2$. That is, $\mathcal{W}_M^1$ and $\mathcal{W}_M^2$ constitute a partition of $\mathcal{W}_M$. Define $\mathcal{W}_1:=\mathcal{W}\cap\mathcal{W}^1_M$ and $\mathcal{W}_2:=\mathcal{W}\cap\mathcal{W}^2_M$. It is clear that this is the required partition. We note that it is possible that $\mathcal{W}_1$ is in fact empty, which just implies that $C$ is a point.
The last assertion in the lemma is immediate. ◻
*Proof of proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"}.* If $M$ is already a cube, there is nothing to prove. Otherwise, define $$\begin{aligned}
& \mathcal{W}_1:=\big\{\mathfrak{w}\in\mathcal{W}_M\ \big| \ \mathfrak{w} \text{ consists of clopen half-spaces, and } \{\mathfrak{w}\} \text{ and } \mathcal{W}_M\backslash \{\mathfrak{w}\} \text{ are transverse}\big\} \\
& \mathcal{W}_2:=\mathcal{W}_M\backslash \mathcal{W}_1
\end{aligned}$$
Any automorphism of $M$ must preserve $\mathcal{W}_1$, and therefore it also preserves $\mathcal{W}_2$. By construction, these sets meet the conditions in lemma [Lemma 12](#lem:productthm){reference-type="ref" reference="lem:productthm"}, and thus correspond to a decomposition $M=M'\times C$, where $M'$ is a sclocc median algebra, $C$ is a finite cube, both carry $G$-actions, and the action on $M$ is the diagonal one.
Let $\varphi :M\rightarrow C'$ be a surjective morphism, with $C'$ a cube. Denote by $\mathcal{W}$ the collection of walls of the form $\{\varphi^{-1}(\mathfrak{f}),\varphi^{-1}(\mathfrak{f}^*)\}$ for $\mathfrak{f}\in\mathcal{H}_{C'}$. It is clear that $\mathcal{W}\subset \mathcal{W}_1$, and that $\varphi$ is the composition of $p_C=\iota_{\mathcal{W}_1}$ with the projection from $C$ to the factor generated by $\mathcal{W}$. ◻
# Special Constructions And Technical Lemmata {#sec:constructions}
The purpose of this section is to establish all the constructions and technical results that are necessary for the proof of the main theorems, e.g. checking the measurability of some sets and maps between them, etc. The trusting reader can skip this section for the benefit of the reading flow.
## Subsets of $\mathcal{C}M$
Let M be a *sclocc* median algebra. As it is a metrizable space, we may consider its space $\mathcal{C}M$ of closed subsets, endowed with the Hausdorff distance that corresponds to a metric on M. We fix such a metric and recall that $\mathcal{C}M$ is a compact metric space (see for example lemma 5.31 in [@Bridson1999]).
Let $F\subset M$ be a finite subset. It follows by lemma [Lemma 4](#lem:convexisjoin){reference-type="ref" reference="lem:convexisjoin"}, and by an inductive argument, that the convex hull of F is the join of two compact convex sets, and therefore, it is itself a compact and convex set. In particular, as discussed in the previous section, $conv(F)$ is a gate-convex set, and is an element of $\mathcal{C}M$.
**Definition 13**. Denote by $\mathcal{J}_n$ the subspace of $\mathcal{C}M$ consisting of all the convex sets that are generated by $n$ elements. In addition, denote by $\mathcal{J}_n^{\subseteq}$ the subspace of $\mathcal{J}_n\times\mathcal{J}_n$ consisting of pairs $(C,J)$ such that $J\subset C$.
**Lemma 14**. $\mathcal{J}_n$ and $\mathcal{J}_n^{\subseteq}$ are closed subspaces of $\mathcal{C}M$ and $\mathcal{C}M\times \mathcal{C}M$, respectively.
*Proof.* Notice that it is enough to show that $\mathcal{J}_n$ is a closed subspace. We argue by induction on $n$. The case $n=1$ is clear, as $\mathcal{J}_1$ is none other than M itself. Assume the claim is correct for $n$, and let $C_i\in\mathcal{J}_{n+1}$ be a sequence that converges to some element $C\in \mathcal{C}M$. For every $i$, denote by $x_j^i\in M$, $j=1,\dots,n+1$, the collection of generators of $C_i$, that is, $C_i=\text{Conv}(\{x_j^i\}_{j=1}^{n+1})$. For every $i$, denote by $C_i'$ the set $\text{Conv}(\{x_j^i\}_{j=1}^n)$, and notice that by the definition of the convex hull, $C_i=\text{Conv}(C_i'\cup \{x_{n+1}^i\})$. By the induction hypothesis, after taking a subsequence, we may assume that there exist $x_{n+1}\in M$ and $C'\in\mathcal{J}_{n}$ such that the sequences $\{x_{n+1}^i\}_i$ and $\{C_i'\}_i$ converge to $x_{n+1}$ and $C'$, respectively. We claim that $C=\text{Conv}(C'\cup \{x_{n+1}\})\in \mathcal{J}_{n+1}$.
Lemma [Lemma 4](#lem:convexisjoin){reference-type="ref" reference="lem:convexisjoin"} implies that $C_i=[C_i',x_{n+1}^i]$ and $\text{Conv}(C'\cup \{x_{n+1}\})=[C',x_{n+1}]$. Notice that, as $C_i'\subset C_i$ for every $i$, we have $C'\subset C$. Therefore, we get one inclusion for free: $[C',x_{n+1}]\subset C$. For the other inclusion, fix an element $y\in C$. Let $y_i\in C_i$ be a sequence that converges to $y$. By the observation above, for every $i$ we can find $z_i\in C_i'$ such that $y_i\in [z_i,x_{n+1}^i]$. After passing to a subsequence, if needed, we may assume that $z_i$ converges to an element $z\in C'$. But now, by continuity of the median, $$y=\underset{i}{\lim}\ y_i=\underset{i}{\lim}\ m(z_i,y_i,x_{n+1}^i)=m(z,y,x_{n+1})\in [C',x_{n+1}].$$ ◻
Denote by $\text{Cub}(M)$ the collection of all closed subcubes in M.
**Lemma 15**. The space $\text{Cub}(M)$ is a closed subspace of $\mathcal{C}M$.
Before proving this lemma, let us introduce a new notion, and a related property. Let $A\subset M$. We denote by $\text{Ends}(A)$ the subset of all elements $x\in A$, for which there is an *antipodal element in A*. That is, an element $x^*\in A$, such that $m(x,x^*,z)=z$, for all $z\in A$. If $A\in \mathcal{J}_2$, this amounts to saying that $[x,x^*]=A$. We claim the following:
**Lemma 16**. Let $A\in \mathcal{C}M$ be a median subalgebra. If $\text{Ends}(A)\neq \varnothing$, then $\text{Ends}(A)\in \text{Cub}(M)$
*Proof.* First, let us prove that $\text{Ends}(A)$ is a closed median subalgebra. Let $x_1,x_2,x_3\in \text{Ends}(A)$, and let $x_1^*,x_2^*,x_3^*\in \text{Ends}(A)$ be their antipodal elements in $A$. We claim that $m(x_1^*,x_2^*,x_3^*)$ is the antipodal in $A$ of $m(x_1,x_2,x_3)$. Namely, that for every $z\in A$ $$(*) \ m\big(z,m(x_1,x_2,x_3),m(x_1^*,x_2^*,x_3^*)\big)= z$$
Fix $z\in A$. For the sake of contradiction, suppose that we can find $$\mathfrak{h}\in \Delta\big(z,m\big(z,m(x_1,x_2,x_3),m(x_1^*,x_2^*,x_3^*)\big)\big)$$
Since $z\in \mathfrak{h}$, it follows by convexity of $\mathfrak{h}$ and $\mathfrak{h}^*$, that both $m(x_1,x_2,x_3)$ and $m(x_1^*,x_2^*,x_3^*)$ are in $\mathfrak{h}^*$. By convexity, again, there must be $3\geq i\geq 1$ such that both $x_i$ and $x_i^*$ are in $\mathfrak{h}^*$. But now, $$z\in A=[x_i,x_i^*]\cap A\subset \mathfrak{h}^*$$ which is a contradiction. Hence, $(*)$ holds.
Suppose now that there is a sequence $x_n\in \text{Ends}(A)$ that converges to some element $x\in A$. Let $x^*_n\in \text{Ends}(A)$ be the corresponding sequence of antipodal elements. After replacing $x_n$ with some sub-sequence, if necessary, we may assume that $x_n^*$ converges to an element $x^*\in A$. Fix $z\in A$. By continuity of the median operator, $$\begin{aligned}
z & = \underset{n\rightarrow \infty}{\lim} z = \underset{n\rightarrow \infty}{\lim} m(x_n,x_n^*,z) \\
& = m(x,x^*,z)
\end{aligned}$$ Thus, $A=[x,x^*]\cap A$, which implies that $x\in \text{Ends}(A)$, as needed.
We now show that $\text{Ends}(A)$ is isomorphic to a cube. By the above, we may assume without loss of generality that $\text{Ends}(A)=A=M$. Denote by $\mathcal{W}^{\circ}$ the collection of all clopen walls of $M$. That is, the collection of all pairs $\{\mathfrak{h},\mathfrak{h}^*\}$ for a clopen half-space $\mathfrak{h}\subset M$.
It follows by corollary 3.4 in [@BaderTaller], that it is sufficient (and necessary) to show that $\mathcal{W}^{\circ}$ is separating points in $M$ and is transverse.
First, let us show that for any closed half-space $\mathfrak{h}$, $\{\mathfrak{h},\mathfrak{h}^*\}\in \mathcal{W}^{\circ}$. Notice that for every $x\in M$, the antipodal element $x^*$ is unique. Indeed, if $x^*$ and $x'^*$ are antipodal elements of $x$, then $$x^*=m(x,x^*,x'^*)=x'^*$$ Moreover, the above discussion implies that if $x_n$ converges to $x$, then $x_n^*$ converges to $x^*$. Thus, the antipodal map, $*:x\mapsto x^*$, is a continuous median automorphism of $M$. If $\mathfrak{h}$ is a non-empty half-space, such that also $\mathfrak{h}^*$ is non-empty, then $\mathfrak{h}=*^{-1}(\mathfrak{h}^*)$ (otherwise it would imply that there exists $x\in M$, such that $M=[x,x^*]\subset \mathfrak{h}$ or $M=[x,x^*]\subset \mathfrak{h}^*$). In particular, if $\mathfrak{h}$ is closed, then it is also open, as needed. Combined with corollary [Corollary 10](#cor:separatingcollection){reference-type="ref" reference="cor:separatingcollection"}, this also shows that $\mathcal{W}^{\circ}$ separates points in $M$.
Let $\{\mathfrak{h},\mathfrak{h}^*\}$ and $\{\mathfrak{f},\mathfrak{f}^*\}$ be two walls in $\mathcal{W}^{\circ}$. Suppose that they are not transverse. Without loss of generality we may assume that $\mathfrak{h}\subset \mathfrak{f}$. But then, $\mathfrak{h}^*= *(\mathfrak{h})\subset *(\mathfrak{f})=\mathfrak{f}^*\subset \mathfrak{h}^*$ which implies that $\{\mathfrak{h},\mathfrak{h}^*\}=\{\mathfrak{f},\mathfrak{f}^*\}$. That is, $\mathcal{W}^{\circ}$ is transverse, as required. ◻
**Corollary 17**. For $I\in\mathcal{J}_2$, $\text{Ends}(I)\in \text{Cub}(M)$.
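For a simple illustration, consider the sclocc median algebra $M=[0,1]\subset{\mathbb R}$, where $m(x,y,z)$ is the middle value of the three. For an interval $I=[a,b]\in\mathcal{J}_2$ with $a<b$ one has $$\text{Ends}([a,b])=\{a,b\},$$ a $1$-dimensional subcube: for $a<x<b$ there is no $x^*\in[a,b]$ with $[x,x^*]=[a,b]$.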
In the course of proving lemma [Lemma 16](#lem:suparecubes){reference-type="ref" reference="lem:suparecubes"}, we also proved the following lemma:
**Lemma 18**. Suppose that $M$ is a *sclocc* median algebra which is a cube. Then, every closed half-space is also open.
*Proof of lemma [Lemma 15](#lem:cubclosed){reference-type="ref" reference="lem:cubclosed"}.* Let $C_n\in \text{Cub}(M)$ be a sequence that converges to some closed set $C\in \mathcal{C}M$. It follows by lemma [Lemma 16](#lem:suparecubes){reference-type="ref" reference="lem:suparecubes"} that we only need to show that $C$ is a median subalgebra and that $\text{Ends}(C)=C$.
If $x_n,y_n$ and $z_n\in C_n$ converge to $x,y$ and $z\in C$, respectively, then clearly, $m(x_n,y_n,z_n)\in C_n$ converges to $m(x,y,z)\in C$.
Pick $x\in C$, and $x_n\in C_n$ such that $x_n$ converges to $x$. Let $x_n^*\in C_n$ be the corresponding sequence of antipodal elements, and suppose it converges to $x^*\in C$ (restrict to a sub-sequence, if necessary). Fix $z\in C$, and some (sub-)sequence $z_n\in C_n$ that converges to $z$. Then,
$$\begin{aligned}
z & = \underset{n\rightarrow \infty}{\lim} z = \underset{n\rightarrow \infty}{\lim} m(x_n,x_n^*,z) \\
& = m(x,x^*,z)
\end{aligned}$$ Thus $x\in \text{Ends}(C)$, and hence $\text{Ends}(C)=C$, as required. ◻
## Mapping Properties
For the rest of this section, fix a *sclocc* median algebra M, and a discrete group $G$ that acts on M continuously by automorphisms. Note that in particular, we are not assuming that intervals in M are countable, nor that subcubes of M are of finite dimension (unless stated otherwise).
**Lemma 19**. The map $\mathcal{J}_n^2\rightarrow \mathcal{J}_n^{\subseteq}$, defined by $(I,J)\mapsto (I,\pi_I(J))$, is continuous.
*Proof.* This is simply because the map $\pi_I$ is a continuous function, that depends continuously on the generators of I. ◻
**Lemma 20**. Let $C\subset M$ be a gate-convex set. Suppose that for every $\gamma\in G$, $\pi_C$ is a surjection from $\gamma C$ to C. Then the assignment $\gamma.x:= \pi_C(\gamma x)$, for $\gamma\in G$ and $x\in C$, defines a continuous action of $G$ on C by median automorphisms. Moreover, $\pi_C$ is a median $G$-map from $M$ to $C$.
*Proof.* The continuity part is clear, as the new action is defined by the composition of continuous functions. We now show that this is an action. Let $x\in C$ and $\gamma',\gamma\in G$. By lemma [Lemma 6](#lem:fio2){reference-type="ref" reference="lem:fio2"}.(2), as $\pi_C|_{\gamma C}$ is surjective, $\pi_C\circ \pi_{\gamma C}=\pi_C$. So we have: $$\gamma.(\gamma'.x)=\pi_C(\gamma\pi_C(\gamma'x))=\pi_C(\pi_{\gamma C}(\gamma\gamma'x))=\pi_C(\gamma\gamma'x)=(\gamma\gamma').x$$ As required.
If $y,z\in C$ are another two elements, then by lemma [Lemma 5](#lem:fio1){reference-type="ref" reference="lem:fio1"} we have $$\begin{aligned}
\gamma.m(x,y,z)&=\pi_C(\gamma m(x,y,z))=\pi_C(m(\gamma x, \gamma y, \gamma z))\\
&=m(\pi_C(\gamma x),\pi_C(\gamma y), \pi_C(\gamma z)) = m(\gamma.x,\gamma.y,\gamma.z) \end{aligned}$$ Finally, note that by the definition of the induced action, $\pi_C$ is a $G$-map, and it is a median map by lemma [Lemma 5](#lem:fio1){reference-type="ref" reference="lem:fio1"}. ◻
**Lemma 21**. The map $C\mapsto I_C:=\text{Conv}(C)$, is a continuous $G$-map from $\text{Cub}(M)$ to $\mathcal{J}_2$.
*Proof.* The fact that this map is a $G$-map is clear. It remains to prove that it is continuous.
Let $C_n$ be a sequence of subcubes that converges to $C\in \text{Cub}(M)$. Fix some $x\in C$, its antipodal $x^*\in C$, some sequence $x_n\in C_n$ that converges to $x$, and the corresponding sequence of antipodal points $x_n^*\in C_n$. We recall that as in the proof of lemma [Lemma 16](#lem:suparecubes){reference-type="ref" reference="lem:suparecubes"}, it follows that $x_n^*$ converges to $x^*$.
Fix $z\in I_C$, and denote $z_n:=m(x_n,x_n^*,z)\in I_{C_n}$. By the continuity of the median, we have $$\underset{n}{\lim} \ z_n=\underset{n}{\lim} \ m(x_n,x_n^*,z)=m(x,x^*,z)=z$$
On the other hand, let $z_n\in I_{C_n}$ be a (sub-)sequence that converges to some $z\in M$. Then, again, by continuity of the median: $$\begin{aligned}
z & = \underset{n}{\lim} \ z_n = \underset{n}{\lim} \ m(x_n,x_n^*,z_n) \\
& = m(x,x^*,z)\in I_C
\end{aligned}$$ Thus, indeed, $I_{C_n}$ converges to $I_C$. ◻
It is natural to wonder whether the map in the other direction, namely, the map that assigns to every interval its collection of ends, is continuous as well. It turns out that the answer is no. This is reflected by the fact that the image of the $\text{Ends}$ map is not always a closed subset of $\text{Cub}(M)$.
This image, on the other hand, can be identified as the collection of maximal subcubes. Here, we mean that $C$ is maximal in the sense that if $C'$ is another subcube containing $C$, then $C$ is a proper face of $C'$. Denote this collection by $\text{Cub}_m(M)$.
**Lemma 22**. The set $\text{Cub}_{m}(M)$ is a $G_{\delta}$ subset of $\text{Cub}(M)$.
*Proof.* Define $F_{\epsilon}:=\{ C\in \text{Cub}(M) \ | \ d_H(C,\text{Ends}(I_C))\geq \epsilon\}$, where, as always, $I_C$ stands for the convex hull of C. When $\epsilon>0$, then, $F_{\epsilon}\subset \text{Cub}(M)\backslash \text{Cub}_m(M)$. On the other hand, if $C\notin \text{Cub}_m(M)$, then $\epsilon':=d_H(C,\text{Ends}(I_C))>0$, and in particular, $C\in F_{\epsilon'}$. Thus, we have $$\text{Cub}(M)\backslash \text{Cub}_m(M) = \underset{n\in{\mathbb N}}{\bigcup}F_{1/n}$$ where the right hand side is an increasing union of subsets. Therefore, it is enough to show that $F_{\epsilon}$ is a closed set.
Fix some sequence $C_n\in F_{\epsilon}$, that converges to some $C\in \text{Cub}(M)$. After restricting to a subsequence, if necessary, we may assume that $C_n':=\text{Ends}(I_{C_n})$ converges to some subcube $C'$. We have the following short sequence of inclusions: $$C\subset C'\subset \text{Ends}(I_C)=\text{Ends}(I_{C'})$$ We note that the latter equality follows from the fact that $I_{C_n}$ converges to $I_C$. See lemma [Lemma 21](#lem:cubetoint){reference-type="ref" reference="lem:cubetoint"}. Therefore, for any limit $x\in I_C$ of some $x_n\in \text{Ends}(I_{C_n})$, there exists an antipodal point $x^*$, which is the limit of $x_n^*$. Thus, $x\in \text{Ends}(I_C)$.
For every $n$, the fact that $d_H(C_n,\text{Ends}(I_{C_n}))\geq \epsilon$ implies that there exists $x_n\in \text{Ends}(I_{C_n})$ with $d(\{x_n\},C_n)\geq \epsilon$, that is, for every $y_n\in C_n$, $d(x_n,y_n)\geq \epsilon$. After further restricting to a sub-sequence, if necessary, we may assume that $x_n$ converges to some $x\in C'$. Take $y\in C$, and $y_n\in C_n$ that converges to $y$. Then by continuity of the metric, we have $$\epsilon \leq \underset{n}{\lim} \ d_H(\{x_n\},\{y_n\}) = d_H(\{x\},\{y\})=d(x,y)$$ In particular, $$\epsilon \leq d_H(C,C')\leq d_H(C,\text{Ends}(I_C))$$ Hence, $C\in F_{\epsilon}$, as required. ◻
**Corollary 23**. The assignment $I\mapsto \text{Ends}(I)$ is a $G$-equivariant Borel isomorphism from $\mathcal{J}_2$ to $\text{Cub}_m(M)$.
*Proof.* Denote the above assignment by $\text{Ends}$. This is a bijection onto $\text{Cub}_m(M)$, with an inverse denoted by $\text{Conv}$.
We note that a $G_{\delta}$ subset of a Polish space, is Polish. See [@Kechris1995 Theorem 3.11]. Therefore, by lemmata [Lemma 21](#lem:cubetoint){reference-type="ref" reference="lem:cubetoint"} and [Lemma 22](#lem:maximalcubesareGdelta){reference-type="ref" reference="lem:maximalcubesareGdelta"}, $\text{Conv}$ is a continuous bijection between two Polish spaces.
The corollary now follows by the fact that such a map is a Borel isomorphism. See [@Kechris1995 Theorem 15.1]. ◻
**Lemma 24**. The map from $\text{Cub}(M)$ to $\text{Prob}(M)^{\Phi}$, defined by the rule $C\mapsto \lambda_C$, where $\lambda_C$ is the balanced measure on $C$, is a homeomorphism (with respect to the $weak^*$ topology on $\text{Prob}(M)^{\Phi}$).
Before proving this Lemma we need the following:
**Lemma 25**. Let $\eta\in\text{Prob}(M)^{\Phi}$, and let $\mathfrak{h}_1,...,\mathfrak{h}_m$ be a collection of closed half-spaces. Then, $\eta(\cap\mathfrak{h}_i)\in \{0\}\cup\{2^{-s}\ | \ m\geq s\geq 0\}$
*Proof.* Without loss of generality, suppose that $M=\text{supp}(\eta)$. Denote by $\chi_{\mathfrak{h}_i}$ the indicator function of $\mathfrak{h}_i$. By lemma [Lemma 18](#lem:clopenhalfspace){reference-type="ref" reference="lem:clopenhalfspace"}, each $\mathfrak{h}_i$ is clopen. Therefore, the map $\varphi:M\rightarrow \{0,1\}^m$, mapping each $x$ to $\{\chi_{\mathfrak{h}_i}(x)\}_i$, is a continuous median map. It follows that $\varphi_*(\eta)$ is a balanced measure, and by theorem [Theorem 8](#thm:balancediscubical){reference-type="ref" reference="thm:balancediscubical"}, it is the uniform measure on a subcube of $\{0,1\}^m$. Therefore, on each point in its support, its value is $2^{-s}$, for some natural number $m\geq s\geq 0$. The statement follows. ◻
*Proof of Lemma [Lemma 24](#lem:cubetobalanced){reference-type="ref" reference="lem:cubetobalanced"}.* We first claim that the assignment $\eta\mapsto \text{supp}(\eta)$ is continuous from $\text{Prob}(M)^{\Phi}$ to $\text{Cub}(M)$. Indeed, suppose that $\eta_n\xrightarrow{w^*} \eta$.
Fix $x\in \text{supp}(\eta)$. Since every open neighborhood of $x$ is of positive $\eta$-measure, for $n\gg 0$, it is also of positive $\eta_n$-measure, so we can find in it some $x_n\in \text{supp}(\eta_n)$. Therefore, $\text{supp}(\eta)$ is contained in any limit of $\text{supp}(\eta_n)$.
On the other hand, let $x\notin \text{supp}(\eta)$. For every $y\in \text{supp}(\eta)$, we can find a closed half-space $\mathfrak{h}_y\in\Delta(x,y)$ such that $x\in \mathfrak{h}_y^{\circ}$. See corollary [Corollary 10](#cor:separatingcollection){reference-type="ref" reference="cor:separatingcollection"}. By compactness of $\text{supp}(\eta)$, there are $y_1,\dots,y_m\in\text{supp}(\eta)$ such that $\text{supp}(\eta)\subset \bigcup \mathfrak{h}^*_{y_i}$. Thus, the set $\bigcap \mathfrak{h}_{y_i}$ is an $\eta$-null compact set, so as $\eta$ is a Radon measure, we can find a continuous function $f$ with $\chi_{\bigcap \mathfrak{h}_{y_i}}\leq f\leq 1$ and $\eta(f)<2^{-m-1}$. It follows by lemma [Lemma 25](#lem:balancemeasurefiniteintersection){reference-type="ref" reference="lem:balancemeasurefiniteintersection"} that for every balanced measure $\xi$, if $\xi(\bigcap \mathfrak{h}_{y_i})>0$, then $\xi(\bigcap \mathfrak{h}_{y_i})\geq 2^{-m}$. For $n\gg 0$, $\eta_n(\bigcap \mathfrak{h}_{y_i})\leq\eta_n(f)<2^{-m}$, thus, $\eta_n(\bigcap \mathfrak{h}_{y_i})=0$. Since $x\in (\bigcap \mathfrak{h}_{y_i})^{\circ}$, $\lim \inf d(x,\text{supp}(\eta_n))>0$. In particular, any limit of $\text{supp}(\eta_n)$ is contained in $\text{supp}(\eta)$.
By lemma [Lemma 15](#lem:cubclosed){reference-type="ref" reference="lem:cubclosed"}, $\text{Cub}(M)$ is a compact Hausdorff space. Since $\text{Prob}(M)^{\Phi}$ is a $weak^*$-closed subspace of $\text{Prob}(M)$, it is also a compact Hausdorff space. Clearly, the assignment $\eta\mapsto \text{supp}(\eta)$ is a bijection between the above two sets (this is due to the uniqueness of the fully supported balanced measure on any cube). Thus, this assignment is also a homeomorphism, as required. ◻
**Lemma 26** ([@cfi16 Lemma A.1]). Let $\mathcal{H}$ be a countable collection of closed half-spaces. Fix $i\in\{0,1/2,1\}$, and let $\mathcal{H}^i_{\eta}:=\{ \mathfrak{h}\in \mathcal{H}\ | \ \eta(\mathfrak{h})=i\}$. Then, the map $\text{Prob}(M)\rightarrow 2^{\mathcal{H}}$, defined by $\eta\mapsto \mathcal{H}^i_{\eta}$, is measurable with respect to the weak-\* topology on $\text{Prob}(M)$.
Therefore, the map $N:\text{Prob}(M)\rightarrow {\mathbb N}\cup \{\infty\}$, defined by $N(\eta)=|\mathcal{H}^i_{\eta}|$, is also measurable.
*Proof.* The proof of lemma A.1 in [@cfi16] works almost perfectly in our situation, except for the fact that here, for $\mathfrak{h}\in\mathcal{H}$, the assignment $\eta\mapsto \eta(\mathfrak{h})$ is measurable, and not necessarily continuous, with respect to the weak-\* topology.
Indeed, there is a sequence $f_n\in C(M)$, with $0\leq f_n\leq 1$, that converges point-wise to $\chi_{\mathfrak{h}}$. Therefore, by the dominated convergence theorem, for every $\eta\in \text{Prob}(M)$, $\underset{n}{\lim} \ \eta(f_n) =\eta(\mathfrak{h})$. That is, the assignment $\eta \mapsto \eta(\mathfrak{h})$ is the point-wise limit of the continuous maps $\eta\mapsto \eta(f_n)$. In particular, it is measurable. ◻
**Corollary 27** ([@cfi16 Corollary A.2]). Let $\mathcal{H}$ be a countable collection of closed half-spaces, fix $i\in\{0,1/2,1\}$, and write $\mathcal{H}_{\eta}:=\mathcal{H}^i_{\eta}$. The following maps are measurable:
1. $C_1: \text{Prob}(M)\rightarrow {\mathbb N}\cup \{\infty\}$, defined by $C_1(\eta):=|\mathcal{H}_{\eta}|$
2. $C_3: \text{Prob}(M)\times \text{Prob}(M)\rightarrow {\mathbb N}\cup \{\infty\}$ defined by $C_3(\eta,\rho):=|\mathcal{H}_{\eta}\Delta \mathcal{H}_{\rho}|$
*Proof.* The measurability of all the above maps follows by lemma [Lemma 26](#lem:probtosubhalfspace){reference-type="ref" reference="lem:probtosubhalfspace"}, the fact that composition and product of measurable maps are measurable, and the fact that the operation $\Delta$ on subsets of $\mathcal{H}$, can be described in terms of continuous ring operation on $2^{\mathcal{H}}$. ◻
**Lemma 28**. Let $C$ and $C'$ be two gate-convex subsets of $M$. Then, the gate projections $\pi_C|_{C'}$ and $\pi_{C'}|_C$ are isomorphisms if and only if $\mathcal{H}(C)=\mathcal{H}(C')$.
*Proof.* Suppose that $\pi_C|_{C'}$ and $\pi_{C'}|_C$ are isomorphisms. Let $\mathfrak{h}\in \mathcal{H}(C)$. It follows by lemma [Lemma 6](#lem:fio2){reference-type="ref" reference="lem:fio2"}(1), and the fact that both $\pi_C(C')\cap \mathfrak{h}$ and $\pi_C(C')\cap \mathfrak{h}^*$ are non-empty, that $\mathfrak{h}\in\mathcal{H}(C')$. The claim now follows by the symmetry of the argument.
Suppose now that $\mathcal{H}(C)=\mathcal{H}(C')$. Assume that $x\in C \backslash\pi_C(C')$. Then, $\varnothing\neq \Delta(x,\pi_C(C'))\subset\mathcal{H}(C)\backslash \mathcal{H}(C')$, which is a contradiction. Thus, $\pi_C|_{C'}$ is surjective. By lemma [Lemma 6](#lem:fio2){reference-type="ref" reference="lem:fio2"}(2), $\pi_C=\pi_C\circ \pi_{C'}$. That is, $\pi_C|_{C'}\circ \pi_{C'}|_C= \pi_C|_C=id_C$. By symmetry, the statement follows. ◻
For a tuple $(x,y,\eta)\in M^2\times \text{Prob}(M)$, we denote by $\mu_{(x,y,\eta)}\in\text{Prob}(M^3)$ the probability measure associated to the following functional: $$\forall s\in C(M^3), \ s\mapsto\int_{M}s(x,m(x,y,c),c)d\eta(c)$$
**Lemma 29**. The map $\mu_{(.)}:M^2\times \text{Prob}(M)\rightarrow \text{Prob}(M^3)$ is continuous.
*Proof.* Suppose $(x_n,y_n,\eta_n),(x,y,\eta)\in M^2\times \text{Prob}(M)$ are such that $(x_n,y_n,\eta_n)\xrightarrow{n}(x,y,\eta)$. Fix $s\in C(M^3)$ and $\epsilon>0$. By continuity of $m$ and $s$ and the compactness of $M^3$, for $n\gg 0$, $|s(x_n,m(x_n,y_n,c),c)-s(x,m(x,y,c),c)|<\epsilon/2$ for every $c\in M$. Also, for $n\gg 0$, $$\big|\int_{M}s(x,m(x,y,c),c)d\eta(c)-\int_{M}s(x,m(x,y,c),c)d\eta_n(c)\big|<\epsilon/2$$ Therefore, for $n\gg 0$, $$\begin{aligned}
|\mu_{(x,y,\eta)}(s)-\mu_{(x_n,y_n,\eta_n)}(s)|& \leq \big|\int_{M}s(x,m(x,y,c),c)d\eta(c)-\int_{M}s(x,m(x,y,c),c)d\eta_n(c)\big| \\
& + \big| \int_{M}s(x,m(x,y,c),c)d\eta_n(c) - \int_{M}s(x_n,m(x_n,y_n,c),c)d\eta_n(c)\big| \\
&\leq \epsilon/2 + \int_{M}|s(x,m(x,y,c),c)-s(x_n,m(x_n,y_n,c),c)|d\eta_n(c) \\
& \leq \epsilon
\end{aligned}$$ as needed. ◻
## General results about measurability
**Lemma 30**. Let $p:X\rightarrow Y$ be a countable-to-one Borel map between two standard Borel spaces. Then $p$ is Borel bimeasurable, i.e. the image of every Borel set $B\subset X$, is Borel in $Y$.
*Proof.* Fix a Borel subset $B\subset X$, and define the following set: $$P:=\{(x,y)\in B\times Y\ : \ p(x)=y\}$$ Note that this is a Borel subset of $X\times Y$. To see that, it is enough to observe that $P$ is equal to the set $B\times Y\cap (p\times id_Y)^{-1}(\Delta(Y))$, where $p\times id_Y$ is the map from $X\times Y$ to $Y\times Y$ defined by $(x,y)\mapsto (p(x),y)$, and $\Delta(Y)\subset Y\times Y$ is the diagonal.
By assumption, each section $P_y$ is countable. Therefore, according to [@Kechris1995 Lemma 18.12], $p(B)=\text{proj}_Y(P)$ is Borel. ◻
**Lemma 31**. Let $p:X\rightarrow Y$ be a countable-to-one Borel map between two standard Borel spaces. Then, there exists a Borel subset $A\subset {\mathbb N}\times Y$, and Borel isomorphism $\psi:X\rightarrow A$, such that the following diagram commutes: $$\begin{tikzcd}
X \arrow[r, "\psi"] \arrow[d,"p"]
& A \arrow[d, "\text{proj}_Y"] \\
Y \arrow[r, "id_Y"]
& Y
\end{tikzcd}$$
*Proof.* First, consider the following equivalence relation on X: $$E:=\{(x,x')\in X\times X\ : \ p(x)=p(x')\}=(p\times p)^{-1}(\Delta(Y))$$ Where $p\times p$ is the map from $X\times X$ to $Y\times Y$ defined by $(x,x')\mapsto (p(x),p(x'))$, and $\Delta(Y)\subset Y\times Y$ is the diagonal. Therefore, it is clear that $E$ is a Borel subset of $X\times X$, and by assumption, all of its equivalence classes are countable.
Fix any section $s:Y\rightarrow X$. It follows from Lemma [Lemma 30](#lem:countoonisbimeas){reference-type="ref" reference="lem:countoonisbimeas"} that $s$ is Borel. Indeed, choose any Borel subset $B\subset X$. Then, $s^{-1}(B)=p(B)$, which is Borel.
According to [@Feldman1977 Theorem 1], there exists a countable group $H=\{h_i\}_{i\in{\mathbb N}}$ of Borel automorphisms of $X$, such that the following holds: $$E = \{(x,h_i.x) \ : \ x\in X,\ i\in {\mathbb N}\}$$ Without loss of generality, assume that $h_1=e_H$. For every $x\in X$, we denote by $c_x$ the element of $Y$ for which $x\in p^{-1}(c_x)$, that is, $c_x=p(x)$. Also, we denote by $n_x$ the minimal positive integer $i$ such that $h_i.s(c_x)=x$. Finally, we define $\psi:X\rightarrow {\mathbb N}\times Y$, by $x\mapsto (n_x,c_x)$.
We claim that $\psi$ is measurable. Note that it is enough to show that sets of the form $\psi^{-1}(\{i\}\times A)$ are Borel, whenever $A\subset Y$ is Borel. Fix such a Borel set $A\subset Y$. We will prove by induction on $i$ that these sets are indeed Borel.
For $i=1$, $\psi^{-1}(\{1\}\times A)=s(A)$, and this is a Borel set, according to Lemma [Lemma 30](#lem:countoonisbimeas){reference-type="ref" reference="lem:countoonisbimeas"}. Now, note that we have the following equality: $$\psi^{-1}(\{i\}\times A)=h_i.s(A)\backslash \underset{1\leq j<i}{\cup}\psi^{-1}(\{j\}\times A)$$ By the induction hypothesis, $\psi^{-1}(\{i\}\times A)$ is Borel.
We finish by noting that $\psi$ is injective, and therefore, it is bimeasurable by Lemma [Lemma 30](#lem:countoonisbimeas){reference-type="ref" reference="lem:countoonisbimeas"}. Hence, $\psi$ is an isomorphism between $X$ and $\psi(X)\subset {\mathbb N}\times Y$. ◻
**Lemma 32**. Let $Y$ be a compact metric space, and consider the factor map $\text{proj}_Y:{\mathbb N}\times Y\rightarrow Y$. Fix some $\mu\in \text{Prob}(Y)$, and denote by $\mathfrak{G}$ the space of classes of $\mu$-a.e. defined measurable sections for $\text{proj}_Y$. Consider the following metric $d_{\mu}$ on $\mathfrak{G}$ $$\ d_{\mu}(f,f'):=\int_Y (1-\delta_{f(c)f'(c)})d\mu(c)$$
where $\delta_{ab}=1$ if $a=b$ and zero otherwise. Then, the metric space $(\mathfrak{G},d_{\mu})$ is separable.
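In other words, $d_{\mu}$ is simply the measure of the disagreement set, $$d_{\mu}(f,f')=\mu\big(\{c\in Y \ : \ f(c)\neq f'(c)\}\big),$$ so two sections are $d_{\mu}$-close exactly when they agree outside a set of small $\mu$-measure.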
*Proof.* Fix a countable basis for the topology $\{U_i\}_i$, and denote by $\mathcal{B}$ the Boolean algebra that is generated by this basis. Consider the collection of sections $s:Y\rightarrow {\mathbb N}\times Y$, of the form $$s(y)=(\sum_{i=1}^m j_i \chi_{B_i}(y),y)$$ where $m$ and $j_i$ run over all positive integers, and the $B_i$'s run over all the elements of $\mathcal{B}$, and where $\chi_{B_i}$ denotes the characteristic function of $B_i$. Observe that this is a countable collection. We now show that it is dense in $(\mathfrak{G},d_{\mu})$.
Let $f\in \mathfrak{G}$ and $\epsilon>0$. For every $n\in {\mathbb N}$, denote by $A_n:=f^{-1}(\{n\}\times Y)$. As $Y=\cup_n A_n$, there exists $k\in {\mathbb N}$, such that $\mu(\cup_{n\geq k}A_n)<\epsilon/2$. Note that $\mu$ is a Radon probability measure, therefore, for every $n<k$, we can choose an open set $V_n$ such that $\mu(V_n\Delta A_n)<\epsilon/2^{k+1}$. For every such $V_n$, we can choose some $B_n\in \mathcal{B}$, such that $B_n\subset V_n$ and $\mu(V_n\backslash B_n)<\epsilon/2^{k+1}$. Finally, we define the following section $s:Y\rightarrow {\mathbb N}\times Y$: $$s(y)=(\sum_{n=1}^{k-1} n \chi_{B_n}(y),y)$$ Note that we have the following inclusion: $$\{y\in Y\ : \ s(y)\neq f(y)\}\subset (\cup_{n<k}B_n\Delta A_n )\bigcup (\cup_{n\geq k}A_n)\subset (\cup_{n<k}(V_n\Delta A_n \cup V_n\backslash B_n))\bigcup (\cup_{n\geq k}A_n)$$ Therefore, $$\begin{aligned}
d_{\mu}(f,s)& =\int_Y (1-\delta_{f(y)s(y)})d\mu(y)=\mu(\{y\in Y\ : \ s(y)\neq f(y)\}) \\
& \leq \sum_{n=1}^{k-1}\mu(V_n\Delta A_n \cup V_n\backslash B_n)+\mu(\cup_{n\geq k}A_n) \\
& \leq \sum_{n=1}^{k-1}\epsilon/2^k + \epsilon/2<\epsilon
\end{aligned}$$ As needed. ◻
## The space $\text{Map}_G(S,\text{Prob}(X))$ {#subsection:amenableaction}
Consider a standard Borel space $S$ endowed with a probability measure $\mu\in\text{Prob}(S)$, and a separable Banach space $E$. Define the following space $$L^1(S,E):= \big\{ f:S\rightarrow E \ | \ f \text{ measurable and }\int\|f(s)\|\mathrm{d}\mu< \infty \big\}/\sim$$ where $f\sim f'$ if and only if the map $s\mapsto \|f(s)-f'(s)\|$ vanishes $\mu$-almost everywhere. As discussed in [@Zimmer1978 p. 357], the space $L^1(S,E)$ is a Banach space. Its dual can be identified with the collection of all *weakly measurable* maps $\lambda:S\rightarrow E^*$, such that the assignment $s\mapsto \|\lambda(s)\|$ is an element of $L^{\infty}(S)$. See also [@funcanalysisandapp Theorem 8.18.2]. We denote this space by $L^{\infty}(S,E^*)$. The identification is given by $\lambda\mapsto \big( f\mapsto \Tilde{\lambda}(f)\big)$, where we define $\Tilde{\lambda}(f):=\langle \lambda , f\rangle:=\int\langle\lambda(s),f(s)\rangle\mathrm{d}\mu(s)$, for $\lambda\in L^{\infty}(S,E^*)$ and $f\in L^1(S,E)$. Endowed with the essential sup norm, the space $L^{\infty}(S,E^*)$ becomes a Banach space, and the above identification turns into an isometric isomorphism.
Suppose now that we are given a compact metric space $(X,d)$, and fix our Banach space $E$ to be $(C(X),\|\ \|_{\text{sup}})$. Define the following subspace of $L^{\infty}(S,C(X)^*)$: $$\text{Map}(S,\text{Prob}(X)):=\big\{\lambda\in L^{\infty}(S,C(X)^*) \ | \text{ for a.e. s, } \lambda(s)\in \text{Prob}(X) \big\}$$ Note that when $\text{Prob}(X)$ is endowed with the weak-\* topology, the space $\text{Map}(S,\text{Prob}(X))$ is exactly the collection of all measurable maps $\lambda:S\rightarrow \text{Prob}(X)$. Indeed, it is enough to notice that for such $\lambda$, the map $s\mapsto \|\lambda(s)\|$ is just the constant map $1$. Moreover, it equals exactly the space $B$ in [@Zimmer1978 Proposition 2.2], when the Borel field $\{A_s\}$ is defined to be the constant field $A_s=\text{Prob}(X)$. According to that proposition, this space is a compact and convex subspace of $L^{\infty}(S,C(X)^*)$.
Let $G$ be a lcsc group. Suppose that $(S,\mu)$ is an ergodic $G$-space, and that $X$ is endowed with a continuous $G$-action. We have a natural action of $G$ on $L^{\infty}(S,C(X)^*)$, given by $(g.\lambda)(s):=\lambda(gs)$, for $g\in G$, $\lambda\in L^{\infty}(S,C(X)^*)$ and $s\in S$. According to the proof of [@Zimmer1978 Theorem 2.1], this is the adjoint action induced by a continuous action of $G$ on $L^1(S,C(X))$, with respect to the trivial cocycle $\alpha:S\times G\rightarrow \text{Iso}(C(X))$, $\alpha(s,g)=Id_{C(X)}$. Therefore, for every $g\in G$, the assignment $\lambda\mapsto g.\lambda$ is a continuous automorphism of $L^{\infty}(S,C(X)^*)$.
Clearly, the space $\text{Map}(S,\text{Prob}(X))$ is invariant under the action of $G$. Denote by $\text{Map}_G(S,\text{Prob}(X))$ the collection of all measurable $G$-equivariant maps $\lambda:S\rightarrow \text{Prob}(X)$. By all the above discussion, it follows that $\text{Map}_G(S,\text{Prob}(X))$ is a closed, thus compact, and convex subspace of $\text{Map}(S,\text{Prob}(X))$.
The motivation for introducing the space $\text{Map}_G(S,\text{Prob}(X))$ comes from our use of the concept of *amenable action*. See [@Zimmer1978]. A key feature of an amenable action $G\curvearrowright B$, where $B$ is a Lebesgue $G$-space, is that given an affine $G$-action on a convex $weak^*$ compact set $Q\subset E^*$, where $E$ is a separable Banach space, there exists a $G$-map $\phi\in \text{Map}_{G}(B,Q)$. In particular, if $G$ acts on a metrizable compact space $X$, then the space $\text{Map}_{G}(B,\text{Prob}(X))$ is not empty.
# Isometric ergodicity {#sec:ergodic}
Let $G$ be a lcsc group. A Lebesgue $G$-space $(X,\nu)$ is *isometrically ergodic* if for every separable metric space $Y$ on which $G$ acts by isometries, every measurable $G$-equivariant map $\varphi:X\rightarrow Y$ is essentially constant.
Given a Borel map $q:\mathcal{M}\rightarrow V$ between standard Borel spaces, a metric on q is a Borel function $d:\mathcal{M}\times_V\mathcal{M}\rightarrow [0,\infty]$ whose restriction $d_v$ to each fiber $\mathcal{M}_v=q^{-1}(\{v\})$ is a separable metric. A *fiber-wise isometric $G$-action* on such $\mathcal{M}$ consists of q-compatible actions $G\curvearrowright\mathcal{M},G\curvearrowright V$, so that the maps between the fibers $g:\mathcal{M}_v\rightarrow \mathcal{M}_{gv}$ are isometries.
Suppose we have actions of $G$ on two such Borel spaces $\mathcal{M}$ and $V$, and a Borel map $q:\mathcal{M}\rightarrow V$ in which all the fibers are countable. Define $d:\mathcal{M}\times_V \mathcal{M}\rightarrow [0,\infty]$ to be the trivial metric, that is, $d(m,n)=\delta_{mn}$. Then, the action of $G$ on $\mathcal{M}$, equipped with the metric $d$ above on q, is an example of a fiber-wise isometric $G$-action.
A map $p:A\rightarrow B$ between Lebesgue $G$-spaces, is *relatively isometrically ergodic* if for every fiber-wise isometric action on $q:\mathcal{M}\rightarrow V$, and $q$-compatible maps $f:A\rightarrow \mathcal{M}$, $f_0:B \rightarrow V$, there exists a compatible $G$-map $f_1:B\rightarrow \mathcal{M}$ making the following diagram commutative:
$$\label{equ:relativeergod}
\begin{tikzcd}
A \arrow[r, "f"] \arrow[d, "p"]
& \mathcal{M} \arrow[d, "q"] \\
B \arrow[r, "f_0"] \arrow[ru, dashrightarrow, "f_1"]
& V
\end{tikzcd}$$
We say that a pair of Lebesgue $G$-spaces $(B_-,B_+)$ is a *boundary pair* if the actions of $G$ on $B_+$ and $B_-$ are amenable, and the projection maps $p_{\pm}:B_-\times B_+\rightarrow B_{\pm}$ are relatively isometrically ergodic.
**Remark 33**. Let $(B_-,B_+)$ be a boundary pair for $G$.
1. For Lebesgue $G$-spaces A and B, if the projection $A\times B\rightarrow B$ is relatively isometrically ergodic, then A is isometrically ergodic. See [@Bader2014 Proposition 2.2.(iii)].
In particular, both $B_+$ and $B_-$ are isometrically ergodic.
2. The action of $G$ on $B_-\times B_+$ is ergodic. Indeed, if $S\subset B_-\times B_+$ is a Borel $G$-invariant subset, then its characteristic map $\chi_S$ is a $G$-invariant map into the set $\{0,1\}$. Completing diagram [\[equ:relativeergod\]](#equ:relativeergod){reference-type="ref" reference="equ:relativeergod"} with $f=\chi_S$, $A=B_-\times B_+$, $B=B_+$ and $V=\{pt\}$ yields a Borel $G$-invariant map $f_1:B_+\rightarrow \{0,1\}$ through which $\chi_S$ factors a.e. It follows by the previous remark that $f_1$, and hence $\chi_S$, must be essentially constant, so $S$ is either null or co-null.
**Theorem 34** ([@Bader2014 Theorem 2.7]). Let $\mu_+$ be a probability measure on $G$ that is absolutely continuous with respect to the Haar measure and is not supported on a proper closed sub-semigroup. Define the probability measure $\mu_-$ to be the inverse of $\mu_+$. That is, $\mu_-(A)=\mu_+(A^{-1})$. We denote by $(B_+,\nu_+)$ (resp. by $(B_-,\nu_-)$) the Furstenberg-Poisson boundary corresponding to $(G,\mu_+)$ (resp. to $(G,\mu_-)$). See [@Bader2006 section 2] and references therein for a comprehensive discussion on the Furstenberg-Poisson boundary. The pair $(B_-,B_+)$ is a boundary pair for $G$ and for any of its lattices.
We now turn to apply the theory of boundary pairs in the case of convex minimal actions on sclocc median algebras. For the definition of the space $\mathcal{J}_n^{\subseteq}$, see [Definition 13](#def:joinspace){reference-type="ref" reference="def:joinspace"}
**Proposition 35**. Let $G$ be a countable discrete group, $M$ a sclocc $G$-median algebra and suppose that $G\curvearrowright M$ is convex minimal. Let $(B_-,B_+)$ be a boundary pair for $G$, and suppose that we have two Borel $G$-maps $\varphi_{\pm}:B_{\pm}\rightarrow \mathcal{J}_n$. Then, for almost every $(\theta_-,\theta_+)$, $$\pi_{\varphi_+(\theta_+)}(\varphi_-(\theta_-))=\varphi_+(\theta_+)$$
*Proof.* Consider the map $\mathfrak{f} :B_-\times B_+ \rightarrow \mathcal{J}_n^{\subseteq}$ defined by $$(\theta_-,\theta_+)\mapsto (\varphi_+(\theta_+),\pi_{\varphi_+(\theta_+)}(\varphi_-(\theta_-)))$$ It is a measurable $G$-map. For the measurability part, see lemma [Lemma 19](#lem:continousmapping){reference-type="ref" reference="lem:continousmapping"}. As $(B_-,B_+)$ is a boundary pair for $G$, there is a compatible $G$-map $\varphi': B_+\rightarrow \mathcal{J}_n^{\subseteq}$ making the following diagram commutative: $$\begin{tikzcd}
B_-\times B_+ \arrow[r, "\mathfrak{f}"] \arrow[d,"p_+"]
& \mathcal{J}_n^{\subseteq} \arrow[d, "p"] \\
B_+ \arrow[r, "\varphi_+"] \arrow[ru, dashrightarrow, "\varphi'"]
& \mathcal{J}_n
\end{tikzcd}$$
where $p_+:B_-\times B_+\rightarrow B_+$ and $p: \mathcal{J}_n^{\subseteq}\rightarrow \mathcal{J}_n$ are the natural projections. Note that it follows from the diagram that for almost every $(\theta_-,\theta_+)$ $$(\dagger)\ \ \mathfrak{f}(\theta_-,\theta_+)=\varphi'(\theta_+)= (\varphi_+(\theta_+),\pi_{\varphi_+(\theta_+)}(\varphi_-(\theta_-)))$$
For the sake of contradiction, suppose there exists a subset $A\subset B_-\times B_+$ of positive $\nu_-\otimes\nu_+$-measure, such that for every $(\theta_-,\theta_+)\in A$, $$(\ddagger)\ \ \pi_{\varphi_+(\theta_+)}(\varphi_-(\theta_-))\varsubsetneq\varphi_+(\theta_+)$$
We may assume that $A$ is the set of all pairs satisfying $(\ddagger)$; as this set is $G$-invariant, it follows by ergodicity that $A$ is of full measure. By Fubini's theorem, we can find $\theta_+\in B_+$ and $\Omega\subset B_-$ with full $\nu_-$-measure, such that $(\dagger)$ and $(\ddagger)$ hold for the pair $(\theta_-,\theta_+)$, for every $\theta_-\in \Omega$. In particular, for every $\theta,\theta'\in \Omega$ $$I':=\pi_{\varphi_+(\theta_+)}(\varphi_-(\theta))=\pi_{\varphi_+(\theta_+)}(\varphi_-(\theta'))\varsubsetneq \varphi_+(\theta_+)=:I$$
Fix $x\in I\backslash I'$, and a closed half-space $\mathfrak{h}\in\Delta(I',x)$. It follows from lemma [Lemma 6](#lem:fio2){reference-type="ref" reference="lem:fio2"}.(1), that for $\theta\in \Omega$, $\varphi_-(\theta)\subset \mathfrak{h}$. Since we may assume $\Omega$ is $G$-invariant, we obtain a contradiction to the minimality of the action: the closure of the convex hull of the union of all $\varphi_-(\theta)$ for $\theta\in \Omega$, is a $G$-invariant closed convex set that does not contain the element $x$. Therefore, the proposition holds. ◻
# Proof of Theorems [Theorem 1](#thm:stationary-nofactors){reference-type="ref" reference="thm:stationary-nofactors"}, [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"} and [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"} {#sec:theoremnocubicalfactor}
In what follows, the first two subsections are devoted to the proof of theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"} and the latter subsections are devoted to the proofs of theorems [Theorem 1](#thm:stationary-nofactors){reference-type="ref" reference="thm:stationary-nofactors"} and [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"}, respectively.
## Proof of Theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"}: Existence of boundary maps {#proof-of-theorem-thmnocubicalfactor-existence-of-boundary-maps}
Let M be a minimal sclocc $G$-median algebra, with countable intervals and with no cubical factors. Let $(B_-,\nu_-)$ and $(B_+,\nu_+)$ be two $G$-Lebesgue spaces, such that the pair $(B_-,B_+)$ is a boundary pair for $G$. In this subsection we will prove the existence of a $G$-map $B_-\to M$. The proof for $B_+$ is done in a similar way.
Observe that the self-median operator $\Phi$ acts continuously on the compact convex spaces $$\text{Map}_{G}(B_{\pm},\text{Prob}(M)).$$ These spaces are not empty, as the actions of $G$ on $B_+$ and $B_-$ are amenable. See subsection [4.4](#subsection:amenableaction){reference-type="ref" reference="subsection:amenableaction"}. It follows by the Tychonoff fixed-point theorem that $\text{Map}_{G}(B_{\pm},\text{Prob}(M))^{\Phi}=\text{Map}_{G}(B_{\pm},\text{Prob}(M)^{\Phi})\neq \varnothing$. See [@tychonoffixedpoint]. For the rest of this section we fix two maps $$\varphi_{\pm}\in \text{Map}_{G}(B_{\pm},\text{Prob}(M)^{\Phi}).$$ For the sake of brevity, we may abuse notation and identify an element $\theta\in B_{\pm}$ with its image under the appropriate map.
Fix a countable collection of closed half-spaces as in corollary [Corollary 10](#cor:separatingcollection){reference-type="ref" reference="cor:separatingcollection"}, enlarged by all of its $G$-translates so that it becomes $G$-invariant (it remains countable, as $G$ is countable), and denote it by $\mathcal{H}$. For every $\theta_{\pm} \in B_{\pm}$ and every $i\in \{0,1/2,1\}$, we define the following subset of $\mathcal{H}$: $$\mathcal{H}_{\theta_{\pm}}^i:=\{\mathfrak{h}\in\mathcal{H}\ | \ \theta_{\pm}(\mathfrak{h})=i\}$$ As almost every $\theta_{\pm}$ is balanced, it follows by theorem [Theorem 8](#thm:balancediscubical){reference-type="ref" reference="thm:balancediscubical"} that its support constitutes a subcube of $M$, and hence, by lemma [Lemma 25](#lem:balancemeasurefiniteintersection){reference-type="ref" reference="lem:balancemeasurefiniteintersection"}, that $\theta_{\pm}(\mathfrak{h})\in\{0,1/2,1\}$ for every $\mathfrak{h}\in\mathcal{H}$. Therefore, the sets $\mathcal{H}_{\theta_{\pm}}^0, \mathcal{H}_{\theta_{\pm}}^{1/2}$ and $\mathcal{H}_{\theta_{\pm}}^1$ constitute a partition of $\mathcal{H}$.
Among these three sets, only those of the form $\mathcal{H}_{\theta_{\pm}}^{1/2}$ will be in use. So let us denote them with the abbreviated notation $\mathcal{H}_{\theta_{\pm}}$. Consider the following map: $$\begin{aligned}
(\dagger) \ & \psi: B_-\times B_+\rightarrow ({\mathbb N}\cup \{\infty\}) \\
& (\theta_-,\theta_+)\mapsto |\mathcal{H}_{\theta_-}\Delta\mathcal{H}_{\theta_+}|
\end{aligned}$$
According to corollary [Corollary 27](#cor:measurabilitysubshalfspa){reference-type="ref" reference="cor:measurabilitysubshalfspa"}, this map is Borel and clearly it is $G$-invariant. As $G$ acts ergodically on $B_-\times B_+$, the essential image of $\psi$ is some constant $m\in {\mathbb N}\cup \{\infty\}$. We claim that the essential image is in fact zero. This will guarantee that the image of $\varphi_+$ inside $\text{Prob}(M)$ consists of delta measures.
We would like to draw the reader's attention to the fact that the use of the map $(\dagger)$ is greatly inspired by the proof of theorem 7.1 in [@fer18], although the analysis is substantially different.
For a generic $\theta_{\pm}\in B_{\pm}$, we set $C_{\pm}:=C_{\theta_{\pm}}:=\text{supp}\big(\varphi_{\pm}(\theta_{\pm})\big)$, $I_{\pm}:=I_{\theta_{\pm}}:=\overline{\text{Conv}(C_{\pm})}=\text{Conv}(C_{\pm})$, and we denote by $\pi_{\pm}$ or $\pi_{\theta_{\pm}}$ the gate-projections of $I_{\pm}$. As the convex hull of a cube is an interval, the assignments $\theta_{\pm}\mapsto I_{\pm}$ are maps from $B_{\pm}$ to $\mathcal{J}_2$. We also consider the assignment $I\mapsto \text{supp}(I)$ from $\mathcal{J}_2$ to $\text{Cub}(M)$, and the assignment $C\mapsto \lambda_C$, sending a cube $C$ to its unique fully supported balanced measure, from $\text{Cub}(M)$ to $\text{Prob}(M)^{\Phi}$. It follows by lemmata [Lemma 21](#lem:cubetoint){reference-type="ref" reference="lem:cubetoint"}, [Lemma 24](#lem:cubetobalanced){reference-type="ref" reference="lem:cubetobalanced"} and corollary [Corollary 23](#cor:inttocubeismeasurable){reference-type="ref" reference="cor:inttocubeismeasurable"} that all these maps, and their various compositions, are measurable. It is easy to check that they are also $G$-maps.
Let us show that $m=0$. For the sake of contradiction, suppose that for almost every $(\theta_-,\theta_+)$, $$\mathcal{H}_{\theta_+}\Delta \mathcal{H}_{\theta_-}\neq \varnothing$$ By proposition [Proposition 35](#prop:match){reference-type="ref" reference="prop:match"}, for almost every $(\theta_-,\theta_+)$, $I_+=\pi_+(I_-)$ and $I_-=\pi_-(I_+)$. The former assertion implies that $\mathcal{H}_{\theta_+}\subset \mathcal{H}_{\theta_-}$, while the latter means that $\mathcal{H}_{\theta_-}\subset \mathcal{H}_{\theta_+}$. This is a contradiction; therefore, $m=0$.
Let $\Omega\subset B_-$ be a Borel set of $\nu_-$-measure 1, and let $\theta_+\in B_+$ be such that for every $\theta_-\in \Omega$ $$\mathcal{H}_{\theta_+}=\mathcal{H}_{\theta_-}$$ In particular, for every $\theta,\theta'\in \Omega$ $$\mathcal{H}':=\mathcal{H}_{\theta}=\mathcal{H}_{\theta'}$$ Moreover, we have $$\mathcal{H}(I_{\theta})=\mathcal{H}_{\theta}=\mathcal{H}_{\theta'}=\mathcal{H}(I_{\theta'})$$ Therefore, it follows by lemma [Lemma 28](#lem:convgateisomorphism){reference-type="ref" reference="lem:convgateisomorphism"} that $I_{\theta}$ and $I_{\theta'}$ are isomorphic, and the isomorphisms are given by $\pi_{\theta}$ and $\pi_{\theta'}$.
Let us now show that all these intervals are in fact points. Fix $\theta_0\in \Omega$. By lemma [Lemma 20](#lem:InducedAction){reference-type="ref" reference="lem:InducedAction"} and the above discussion, we may consider the induced action of $G$ on $I_0:=I_{\theta_0}$. Observe that this action is minimal. Indeed, as $\pi_0:=\pi_{I_0}$ is a continuous median $G$-map, the pre-image of any closed $G$-invariant median subalgebra of $I_0$ is a closed $G$-invariant median subalgebra of $M$; by minimality, this pre-image is all of $M$ and, since $\pi_0$ is surjective onto $I_0$, the subalgebra we started with is all of $I_0$.
In particular, since $C_0:=C_{\theta_0}$ is a $G$-invariant median subalgebra of $I_0$, we get that $I_0=C_0$. As $\theta_0$ is generic, it follows that for almost every $\theta\in B_-$, $I_{\theta}=C_{\theta}$.
But now, $\pi_0$ is a surjective morphism onto a cube. Since $M$ has no cubical factors, $C_0$ must be a point. Similarly, $C_{\theta}$ is a point, for every $\theta \in \Omega$. That is, $\theta\mapsto C_{\theta}$ can be viewed as a Borel $G$-map from $B_-$ to $M$. For $\theta\in B_-$, we denote its corresponding point in $M$ by $x_{\theta}$.
## Proof of Theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"}: Uniqueness of boundary maps {#proof-of-theorem-thmnocubicalfactor-uniqueness-of-boundary-maps}
We now show that there are no other $G$-maps from $B_-$ into $\text{Prob}(M)$. Denote by $\phi_{\pm}:B_{\pm}\rightarrow M$ our boundary maps, and suppose that we have another $G$-map $\psi:B_-\rightarrow \text{Prob}(M)$.
Define $Y=M\times \text{Prob}(M)$ and set $T:=\{(x,y,z)\in M^3 \ : \ y\in [x,z]\}$. We denote by $p_i:T\rightarrow M$ the projection onto the $i$-th coordinate of elements in $T$. Define the following space: $$X:=\{\eta\in\text{Prob}(T)\ | \ \exists a_{\eta}\in M \text{ s.t. }(p_1)_*(\eta)=\delta_{a_{\eta}}, \text{ and } p_3 \text{ is a measure isomorphism}\}$$
We have a natural projection $q:X\rightarrow Y$, namely $\eta\mapsto (a_{\eta},(p_3)_*(\eta))$. This map will fit into the right-hand side of diagram [\[equ:relativeergod\]](#equ:relativeergod){reference-type="ref" reference="equ:relativeergod"}. For some purposes, it will be convenient to give an equivalent description of $X$. For every $(a_y,\eta_y)=y\in Y$, define the following space: $$X_y:=\{f\in \text{Map}( (M,\eta_y), M)\ | \eta_y- \text{a.e. } c\in M, (a_y,f(c),c)\in T\}$$
Here, the space $\text{Map}( (M,\eta_y), M)$ denotes the space of equivalence classes of measurable functions, identified when they agree $\eta_y$-almost everywhere. We claim that $X_y$ is isomorphic to the fiber of $y$ in $X$ under the projection $q$. Indeed, fix $f\in X_y$, and define the following positive functional on $C(T)$: $$\forall s\in C(T):s\mapsto\int_M s(a_y,f(c),c)d\eta_y(c)$$
Denote by $\eta_{y,f}$ the corresponding measure, and let us check that $\eta_{y,f}\in q^{-1}(y)$. It is rather easy to verify that $(p_1)_*(\eta_{y,f})=\delta_{a_y}$ and that $(p_3)_*(\eta_{y,f})=\eta_y$. Lastly we note that the $\eta_y$-a.e. defined map $\overline{f}:M\rightarrow T$, $c\mapsto (a_y,f(c),c)$, is the measurable inverse of $p_3$ as a map with the domain $(T,\eta_{y,f})$. Given a measure $\eta\in q^{-1}(y)$, denote by $p_3^{-1}$ the measurable inverse of $p_3:(T,\eta)\rightarrow (M,\eta_y)$, then indeed, $\eta=\eta_{y,p_2\circ p_3^{-1}}$.
We now move to define a metric on the projection $q$, in the sense of section [5](#sec:ergodic){reference-type="ref" reference="sec:ergodic"}. Given two elements $a,b\in M$, denote by $\delta_{ab}$ the function that equals 1 if $a=b$, and 0 otherwise. For $(a_y,\eta_y)=y\in Y$ and two functions $f,f'\in X_y$, define $d_y(f,f')$ to be the following value: $$(*) \ d_{\eta_y}(f,f'):= d_y(f,f'):=\int_M 1-\delta_{f(c)f'(c)}d\eta_y(c)$$ It is rather simple to check that this is indeed a metric. We claim that the space $(X_y,d_y)$ is a separable metric space. In addition, as described in section [5](#sec:ergodic){reference-type="ref" reference="sec:ergodic"}, we also need to check that the resulting map $d:X\times_Y X\rightarrow [0,\infty]$ is Borel. See Lemmata [Lemma 36](#lem:fibersseparable){reference-type="ref" reference="lem:fibersseparable"} and [Lemma 37](#lem:distisborel){reference-type="ref" reference="lem:distisborel"} for the details.
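To illustrate why $d_y$ is indeed a metric (this short verification is ours and is included only for completeness), note that for any $a,b,c\in M$ one has $1-\delta_{ab}\leq (1-\delta_{ac})+(1-\delta_{cb})$: the left-hand side vanishes unless $a\neq b$, in which case $c$ differs from at least one of $a$ and $b$. Integrating this pointwise inequality against $\eta_y$ gives, for any $f,g,h\in X_y$, $$d_y(f,h)=\int_M 1-\delta_{f(c)h(c)}\,d\eta_y(c)\leq \int_M \big(1-\delta_{f(c)g(c)}\big)+\big(1-\delta_{g(c)h(c)}\big)\,d\eta_y(c)=d_y(f,g)+d_y(g,h).$$ Symmetry is clear, and $d_y(f,f')=0$ exactly when $f=f'$ $\eta_y$-almost everywhere.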
The diagonal action of $G$ on $M^3$ induces a natural action on $T$. As this action commutes with both $p_1$ and $p_3$, it gives rise to an induced action on $X$. Finally, consider the natural action of $G$ on $Y$. Let us define an action on $X$, which turns $q$ into a $G$-map: Given $\eta\in X$ and $g\in G$, the measurable inverse of $p_3$ with respect to $g_*\eta$ is $g_*(g \cdot p_3^{-1})$, where $p_3^{-1}$ is the measurable inverse with respect to $\eta$. Therefore, the action of $g\in G$ on the fiber $X_y$ for $y\in Y$ translates to $$f\in X_y\mapsto g_*(g \cdot f)\in X_{g y}$$ The above turns the action of $G$ on $q$ into a fiber-wise isometric action. Indeed, if $y\in Y$, and $f,f'\in X_y$: $$\begin{aligned}
d_{g y}(g_*(g \cdot f),g_*(g \cdot f')) & = \int_M 1-\delta_{g_*(g \cdot f)(c)g_*(g \cdot f')(c)}dg_*\eta_y(c) \\
& = \int_M 1-\delta_{g_*( f)(c)g_*(f')(c)}dg_*\eta_y(c) \\
& = \int_M 1-\delta_{g_*( f)(g c) g_*(f')(g c)}d\eta_y(c) \\
& = \int_M 1-\delta_{f(c)f'(c)}d\eta_y(c) \\
& = d_y(f,f')\end{aligned}$$
As needed.
Let us now define the horizontal arrows in diagram [\[equ:relativeergod\]](#equ:relativeergod){reference-type="ref" reference="equ:relativeergod"}. The lower arrow, namely $\varphi:B_-\rightarrow Y$, is defined by $\theta_-\mapsto (\phi_-(\theta_-),\psi(\theta_-))$. We define the upper horizontal arrow in terms of the fibers of $q$. Given a generic pair $(\theta_-,\theta_+)\in B_-\times B_+$, we define a measurable map $f_{(\theta_-,\theta_+)}:M\rightarrow M$ by $f_{(\theta_-,\theta_+)}(c)=m(\phi_-(\theta_-),\phi_+(\theta_+),c)$. Finally, the map $\varphi':B_-\times B_+\rightarrow X$ is defined as follows: $$\varphi'(\theta_-,\theta_+)=f_{(\theta_-,\theta_+)}\in X_{\varphi(\theta_-)}$$ We claim that $\varphi'$ is measurable. Consider the map $B_-\times B_+\rightarrow M^2\times \text{Prob}(M)$, defined by $$(\theta_-,\theta_+)\mapsto (\phi_-(\theta_-),\phi_+(\theta_+),\psi(\theta_-))$$ Clearly it is measurable, and note that $\varphi'$ is just the composition of this map with the map $\mu_{(.)}$ in [Lemma 29](#lem:boundarypairintoxmeasurable){reference-type="ref" reference="lem:boundarypairintoxmeasurable"}, and therefore, it is also measurable. It is easy to verify that it is also a $G$-map. Indeed, given $(\theta_-,\theta_+)\in B_-\times B_+$, $g\in G$ and $c\in M$ $$\begin{aligned}
f_{(g\theta_-,g\theta_+)}(c) & = m(g\phi_-(\theta_-),g\phi_+(\theta_+),gg^{-1}c)=g m(\phi_-(\theta_-),\phi_+(\theta_+),g^{-1}c) \\
& = \big(g_*(g\cdot f_{(\theta_-,\theta_+)})\big)(c)\end{aligned}$$ and note that the last line is precisely the action of $g$ on the elements of $X_{\varphi(\theta_-)}$.
The relatively isometrically ergodic projection $B_-\times B_+\rightarrow B_-$ gives us a map $\varphi'' :B_-\rightarrow X$ that completes the diagram:
$$\begin{tikzcd}
B_-\times B_+ \arrow[r, "\varphi'"] \arrow[d]
& X \arrow[d, "q"] \\
B_- \arrow[r, "\varphi"] \arrow[ru, dashrightarrow, "\varphi''"]
& Y
\end{tikzcd}$$
We claim that for almost every $(\theta_-,\theta_+)\in B_-\times B_+$ and $\psi(\theta_-)$-a.e. $c\in M$, $$(\ddagger) \quad \varphi'(\theta_-,\theta_+)(c)=\varphi''(\theta_-)(c)=\phi_-(\theta_-).$$ Suppose this is not the case and note that the collection of points for which $(\ddagger)$ does not hold defines a measurable set in $B_-\times B_+$. For details, see [Lemma 38](#lem:measurablecondition){reference-type="ref" reference="lem:measurablecondition"}.
This collection is invariant under the action of $G$, and therefore, by ergodicity, for a.e. $(\theta_-,\theta_+)\in B_-\times B_+$, there exists $c\in \text{supp}(\psi(\theta_-))$ with $\varphi'(\theta_-,\theta_+)(c)\neq \phi_-(\theta_-)$. Fix $\Omega\subset B_+$ of measure 1, and $\theta_-\in B_-$, such that for every $\theta_+\in \Omega$, $\varphi'(\theta_-,\theta_+)=\varphi''(\theta_-)$. By the above, there exists $c\in \text{supp}(\psi(\theta_-))$ such that for every $\theta_+\in \Omega$, $x:=\varphi''(\theta_-)(c)=\varphi'(\theta_-,\theta_+)(c)\neq \phi_-(\theta_-)$. This in turn implies a contradiction, as $M$ is a minimal $G$-median algebra. Indeed, take a closed half-space $\mathfrak{h}\in \Delta(x,\phi_-(\theta_-))$. By the construction of $\varphi'$ and convexity, it follows that $\phi_+(\theta_+)\in \mathfrak{h}$, for every $\theta_+\in \Omega$. But now, the closure of the convex hull of all those $\phi_+(\theta_+)$ is contained in $\mathfrak{h}\varsubsetneq M$, and thus, it constitutes a proper $G$-invariant median subalgebra. Contradiction!
We finish by showing that for a.e. $\theta_-\in B_-$, $\psi(\theta_-)=\delta_{\phi_-(\theta_-)}$. Suppose this is not the case. Consider the $G$-map from $B_-$ to $(\mathcal{C}M)^2$, defined by $\theta\mapsto (\text{supp}(\psi(\theta)),\{\phi_-(\theta)\})$. By assumption, the preimage of the diagonal $\Delta \mathcal{C}M\subset (\mathcal{C}M)^2$ is a $G$-invariant measurable set that is not of full measure. Therefore, by ergodicity, again, for a.e. $\theta_-$, $\psi(\theta_-)\neq\delta_{\phi_-(\theta_-)}$.
Fix $\Omega\subset B_+$ and $\theta_-\in B_-$ as above, $c\in \text{supp}(\psi(\theta_-))$ different from $\phi_-(\theta_-)$, and a closed half-space $\mathfrak{h}\in\Delta(\phi_-(\theta_-),c)$. It follows from the discussion above that $\phi_+(\theta_+)\in \mathfrak{h}$ for every $\theta_+\in \Omega$. This implies a contradiction, as required.
**Lemma 36**. The space $(X_y,d_y)$ is a separable metric space.
*Proof.* Consider the space $Z:= \{ (b,c)\in M^2\ : \ b\in [a_y,c]\}$ and the natural projection $p:Z\rightarrow M$, $(b,c)\mapsto c$. Note that $Z$ is a compact metric space, as it is the image of the continuous map $k:M\times M\rightarrow M\times M$ defined by $(z,c)\mapsto (m(a_y,z,c),c)$. Hence, we are in the situation of Lemma [Lemma 31](#lem:counttoonenum){reference-type="ref" reference="lem:counttoonenum"}. Let $\psi:Z\rightarrow {\mathbb N}\times M$ be the bimeasurable, injective map such that the following diagram commutes: $$\begin{tikzcd}
Z \arrow[r, "\psi"] \arrow[d,"p"]
& {\mathbb N}\times M \arrow[d, "\text{proj}_M"] \\
M \arrow[r, "id_M"]
& M
\end{tikzcd}$$ Denote by $\mathfrak{G}$ the space of classes of $\eta_y$-a.e. defined measurable sections for $\text{proj}_M$. We have a natural injective map $\mathfrak{f}:X_y\rightarrow \mathfrak{G}$, defined by $f\mapsto \psi\circ f$. Observe that, as $\psi$ is injective, for every $f,f'\in X_y$ we have $$\{c\in \text{supp}(f)\cap \text{supp}(f')\ : \ f(c)\neq f'(c)\}=\{c\in \text{supp}(f)\cap \text{supp}(f')\ : \ \psi(f(c))\neq \psi(f'(c))\}$$ That is, $\mathfrak{f}$ is an isometry from $(X_y,d_y)$ into $(\mathfrak{G},d_{\eta_y})$. By lemma [Lemma 32](#lem:sectionsspaceseparable){reference-type="ref" reference="lem:sectionsspaceseparable"}, the latter is a separable metric space; hence, so is the former. ◻
**Lemma 37**. The map $d:X\times_Y X \rightarrow [0,1]$ is Borel.
*Proof.* Here the Borel structure on $X\times_Y X$ is the one induced by the ambient space $X\times X$, where $X\subset \text{Prob}(T)$ is equipped with the weak\* topology.
So first we want to describe the distance $d_y$ in terms of elements in $X$. Let $x_1,x_2\in X$, such that $q(x_1)=q(x_2)=y\in Y$, and let $l^i:M\rightarrow T$ be the corresponding inverses for $p_3$. Then, $$\begin{aligned}
d_y(x_1,x_2)& =d_y(p_2\circ l^1, p_2\circ l^2) = \eta_y(\{c\in M\ : \ p_2(l^1(c))\neq p_2(l^2(c))\}) \\
& = \eta_y(\{c\in M\ : \ l^1(c)\neq l^2(c)\}) = \eta_y(p_3(\text{supp}(x_1)\Delta \text{supp}(x_2))) \\
& = x_1(\text{supp}(x_1)\backslash \text{supp}(x_2)) + x_2(\text{supp}(x_2)\backslash \text{supp}(x_1)) \\
& = 2-x_1( \text{supp}(x_2))-x_2( \text{supp}(x_1))
\end{aligned}$$ So let us show that the map $(\mu,\eta)\mapsto \mu(\text{supp}(\eta))$ is a measurable map from $\text{Prob}(T)^2$ to $[0,1]$.
First, let $\mathcal{C}T$ denote the space of closed subsets of $T$, equipped with the Chabauty topology. We claim that the map $v:\text{Prob}(T)\rightarrow \mathcal{C}T$, defined by $\mu\mapsto \text{supp}(\mu)$, is measurable. Given an open set $U\subset T$, the map $t_U:\text{Prob}(T)\rightarrow [0,1]$ that evaluates every measure $\mu$ at $U$, is a measurable map (in fact, it is a lower semi-continuous map). Therefore, the following sets are evidently measurable: $$\begin{aligned}
& v^{-1}\big(\{F \in \mathcal{C}T\ : \ F\cap U\neq \varnothing\} \big)=t_U^{-1}((0,1]) \\
& v^{-1}\big(\{F \in \mathcal{C}T\ : \ F\subset U\}\big) =\bigcup_n t_{U_n}^{-1}(\{1\})
\end{aligned}$$ where $U_n=T\backslash \overline{V_{1/n}(U^c)}$ and $V_{\epsilon}(F)$ denotes the $\epsilon$-neighborhood of $F$ in $T$. This implies the measurability of $v$.
Finally, let us show that the map $u:\text{Prob}(T)\times \mathcal{C}T\rightarrow [0,1]$, given by $(\mu,F)\mapsto \mu(F)$, is measurable, and more specifically, upper semi-continuous. Suppose that $(\mu_n,F_n),(\mu,F)\in \text{Prob}(T)\times \mathcal{C}T$ are such that $(\mu_n,F_n)\xrightarrow{n}(\mu,F)$. For every $m\in{\mathbb N}$, we know that $\limsup_{n\rightarrow \infty} \mu_n(\overline{V_{1/m}(F)})\leq \mu(\overline{V_{1/m}(F)})$. In addition, for $n\gg 0$, $F_n\subset V_{1/m}(F)\subset \overline{V_{1/m}(F)}$. Therefore, for every $m\in{\mathbb N}$, $\limsup_{n\rightarrow \infty} \mu_n(F_n)\leq \mu(\overline{V_{1/m}(F)})$. We end by noting that $\lim_{m\rightarrow \infty}\mu(\overline{V_{1/m}(F)})=\mu(F)$. ◻
**Lemma 38**. In the above notations, the condition $(\ddagger)$ defines a measurable set in $B_-\times B_+$.
*Proof.* The relation described in $(\ddagger)$ defines the collection of pairs $(\theta_-,\theta_+)\in B_-\times B_+$ such that $\text{supp}\varphi'(\theta_-,\theta_+)= \{\phi_-(\theta_-)\}^2\times \text{supp}\psi(\theta_-)$. Consider the map $\text{supp}\circ \varphi':B_-\times B_+\rightarrow \mathcal{C}T\subset\mathcal{C}M^3$, which is measurable, and note that the collection above is precisely the preimage of the set $\Delta M\times \mathcal{C}M\subset \mathcal{C}(M^3)$. ◻
## Proof of Theorem [Theorem 1](#thm:stationary-nofactors){reference-type="ref" reference="thm:stationary-nofactors"} {#proof-of-theorem-thmstationary-nofactors}
Let $G$, $M$ and $\mu$ be as in the theorem and set $\mu_+=\mu$ and $\mu_-=\iota_*(\mu)$, for $\iota(g)=g^{-1}$. Let $(B_+,\nu_+)$ be the Furstenberg-Poisson boundary corresponding to $(G,\mu_+)$, and $(B_-,\nu_-)$ the Furstenberg-Poisson boundary corresponding to $(G,\mu_-)$. According to Theorem [Theorem 34](#thm:mainboundarypair){reference-type="ref" reference="thm:mainboundarypair"}, the pair $(B_-,B_+)$ is a boundary pair. Thus, by Theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"}, there exists a unique $G$-map from $B_+$ to $\text{Prob}(M)$. The theorem now follows by Furstenberg's correspondence, see [@Bader2006 Theorem 2.16].
*Proof.* Define the space $T_y:=p_1^{-1}(a_y)$, and consider the restriction of $p_3$ to $T_y$, which we denote also by $p_3$. So we can see that $X_y$ is in fact the collection of measurable $\eta_y$-a.e. defined sections of $p_3$.
Note that for every $c\in M$ the fiber $p_3^{-1}(c)$ is precisely the interval $[a_y,c]$. In particular, all the fibers are countable. For every $n\in {\mathbb N}_{\infty}:={\mathbb N}\cup \{\infty\}$, denote by $A_n\subset M$ the collection of all elements whose $p_3$-fiber has size $n$. We claim that $A_n$ is a measurable subset of $M$.
Fix $n\in {\mathbb N}_{\infty}$, and consider the restriction of $p_3$ to $T_y^n:=p_3^{-1}(A_n)$. Let us first show that the collection of sections of $p_n:=p_3|_{T_y^n}$ is separable. By lemma [Lemma 39](#lem:standardfacorcountfibers){reference-type="ref" reference="lem:standardfacorcountfibers"}, we may assume that $p_n:T_y^n\rightarrow A_n$ is just the trivial bundle. ◻
**Lemma 39**. Let $q:X\rightarrow Y$ be a factor map between two standard Borel spaces. Suppose that the fiber of every element $y\in Y$ is of the same size $n$, for some fixed $n\in {\mathbb N}_{\infty}$. Then, there exists a factor map $q':X'\rightarrow Y'$ between two standard Borel spaces, and two Borel isomorphisms $f_X:X\rightarrow X'$ and $f_Y:Y\rightarrow Y'$, such that the corresponding diagram commutes. Moreover, $X'=A\times Y'$, where $A$ is a discrete set with $|A|=n$, and such that $q'(a,y)=y$, for every $(a,y)\in A\times Y'$.
**Lemma 40**. Let $Y$ be a standard Borel space, $A$ a countable set, and $\mu\in \text{Prob}(Y)$ be some probability measure on $Y$. Let $\pi_2:A\times Y \rightarrow Y$ be the projection onto the second factor, and consider the set $S$ that consists of all measurable $\mu$-a.e. defined sections of $\pi_2$. We define a metric $d$ on $S$ as in $(*)$, replacing $\eta_y$ with $\mu$. Then, the metric space $(S,d)$ is a separable space.
*Proof.* For simplicity, let us assume that $A={\mathbb N}$. The case that $A$ is a finite set is done in a similar manner.
Without loss of generality, suppose that $Y=[0,1]$. Let $B$ be the collection of open subintervals with rational edges.
Let $f:Y\rightarrow {\mathbb N}\times Y$ be a measurable $\mu$-a.e. defined section, and $\epsilon>0$. For every $n\in{\mathbb N}$, set $X_n:=f^{-1}(\{n\}\times Y)$. Let $N\in{\mathbb N}$ be such that $\mu(\cup_{k>N}X_k)\leq \epsilon/4$. For every $k\leq N$, let $U_k$ be a finite union of elements of $B$, such that $\mu(U_k\Delta X_k)<\epsilon/(4N)$. We define $f_{\epsilon}:Y\rightarrow A\times Y$ by $f_{\epsilon}(x)=(\sum_{k\leq N} k\chi_{U_k}(x),x)$. Note that the set of points on which $f_{\epsilon}$ and $f$ disagree is contained in the union of the sets $X_k$, for $k>N$, and the sets $U_k\Delta X_k$, for $k\leq N$. The $\mu$-measure of this set is bounded by $\epsilon/2$. We note that the collection of sections of the form $f_{\epsilon}$ is countable, thus, the lemma follows. ◻
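Spelling out the measure estimate used in the proof above (this bookkeeping is ours, with $N$ and the sets $U_k$ chosen as indicated): the set on which $f$ and $f_{\epsilon}$ disagree has $\mu$-measure at most $$\mu\Big(\bigcup_{k>N}X_k\Big)+\sum_{k\leq N}\mu\big(U_k\Delta X_k\big)<\frac{\epsilon}{4}+N\cdot\frac{\epsilon}{4N}=\frac{\epsilon}{2},$$ so that $d(f,f_{\epsilon})\leq \epsilon/2<\epsilon$.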
## Proof of Theorem [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"} {#sec:theoremgeneral}
Let $M$, $G$, $(B_-,\nu_-)$ and $(B_+,\nu_+)$ be as in Theorem [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"}, and $C$ and $M'$ as promised by Proposition [Proposition 1](#prop:structure){reference-type="ref" reference="prop:structure"}. As in the previous section, we will show the statement for $B_-$. Therefore, the space $B_+$ is mentioned here merely for the application of theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"}.
Denote by $p_{M'}$ the natural projection from $M$ onto $M'$. This is a $G$-equivariant continuous median map. Thus, the action of $G$ on $M'$ must be minimal as well.
As $M'$ has no cubical factor, it follows by theorem [Theorem 2](#thm:nocubicalfactor){reference-type="ref" reference="thm:nocubicalfactor"} that we have a unique $G$-map $\theta\mapsto x_{\theta}$, from $B_-$ to $M'$, which is also the unique map from $B_-$ to $\text{Cub}(M')$. This map gives rise to a map $\varphi: B_-\rightarrow \text{Cub}(M)$, defined by $\theta\mapsto \{x_{\theta}\}\times C$.
Suppose we have another $G$-map $\varphi':B_-\rightarrow \text{Cub}(M)$. For a generic $\theta\in B_-$, denote by $I_{\theta}'$ the corresponding convex hull of $\varphi'(\theta)$. Observe that $p_{M'}$ maps intervals to intervals. Therefore, we have the following $G$-map: $$B_-\rightarrow\text{Cub}(M'),\ \theta\mapsto \text{Ends}(p_{M'}(I_{\theta}'))$$
By the uniqueness of $x_{\theta}\in M'$, it follows that $p_{M'}(I_{\theta}')=\{x_{\theta}\}$ for a.e. $\theta\in B_-$. That is, for a.e. $\theta\in B_-$, $\varphi'(\theta)=\{x_{\theta}\}\times C_{\theta}''$, for some subcube $C_{\theta}''\subset C$.
That is, we have a $G$-equivariant assignment from $B_-$ to the finite set $\text{Cub}(C)$, namely, $\theta\mapsto C_{\theta}''$. By metric ergodicity, this map is essentially constant. Denote by $C'\subset C$ its essential image. As in the case of $M'$, the action of $G$ on $C$ must be minimal as well, and since $C'$ is $G$-invariant, we get an equality $C'=C$. In other words, $\varphi'$ is nothing but the map $\varphi$.
| arxiv_math | {
"id": "2310.02024",
"title": "Compact Median Algebras are $\\mu$-Boundaries in a unique way",
"authors": "Uri Bader and Aviv Taller",
"categories": "math.GR math.GN math.PR",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
We analyse the solutions of networked heterogeneous nonlinear systems[^1] $$\label{120}
\dot x_i = f_i(x_i) + u_i \qquad x_i \in \mathbb{R}, \quad i \in \{1,2,\cdots,n\},$$ where $f_i : \mathbb{R} \rightarrow \mathbb{R}$ is continuous for all $i \in \{1,2,\cdots, n\}$ and the control inputs are set to $$\begin{aligned}
\label{134}
u_i := - \gamma \sum_{j=1}^n a_{ij} (x_i - x_j) \qquad \forall i \in \{1,2,\cdots,n\}, \end{aligned}$$ where $\gamma>0$ is a coupling gain and $a_{ij}\geq 0$ are interconnection weights. We assume that the closed-loop interconnected systems form a network with an underlying connected directed graph that contains a directed spanning tree. For these systems, we establish global uniform ultimate boundedness of the solutions, under the assumption that each system [\[120\]](#120){reference-type="eqref" reference="120"} defines a semi-passive [@Pogr3] map $u_i \mapsto x_i$. As a corollary, we also establish global uniform boundedness of the solutions.
author:
- "Anes Lazri Mohamed Maghenem Elena Panteley Antonio Lorı́a [^2]"
title: "**Global Uniform Ultimate Boundedness of Semi-Passive Systems Interconnected over Directed Graphs**"
---
# Preliminaries {#sec1}
**Notations.** For $x \in \mathbb{R}^n$, $x^\top$ denotes its transpose, $|x|$ denotes its Euclidean norm, $\text{blkdiag}\{x\} \in \mathbb{R}^{n \times n}$ denotes the diagonal matrix whose $i$th diagonal element is the $i$th element of $x$. For a set $K \subset \mathbb{R}^n$, $|x|_K := \min \{|x-y| : y \in K \}$ denotes the distance of $x$ to the set $K$. For a symmetric matrix $Q \in \mathbb{R}^{n \times n}$, $\lambda_i(Q)$ denotes the $i$th smallest eigenvalue of $Q$. For an invertible matrix $M \in \mathbb{R}^{n \times n}$, $M^-$ or $M^{-1}$ denotes its inverse. Given $N \in \mathbb{R}^{n \times n}$, $\text{Ker}(N) :=
\{v : N v =0\}$ denotes the kernel of $N$. A class $\mathcal{K}^\infty$ function $\alpha : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{\geq 0}$ is continuous, strictly increasing, unbounded, and $\alpha(0) = 0$. Furthermore $\alpha^-$ denotes the inverse function of $\alpha$.
## On Some Classes of Matrices
A matrix $M := [m_{ij}]$, $(i,j) \in \{1,2,...,n\}^2$, is a $\mathcal{Z}$-matrix if $m_{ij} \leq 0$ whenever $i \neq j$. It is an $M$-matrix if it is a $\mathcal{Z}$-matrix and its eigenvalues have non-negative real parts. Equivalently, $M := \lambda I_n - B$, where $B$ is a non-negative matrix and $\lambda \geq \rho(B)$, where $\rho(B) := \max \left\{ |\lambda_i(B)| : i\in \{1,2,...,n\} \right\}$ is the spectral radius of $B$. $M$ is a non-singular $M$-matrix if it is a $\mathcal{Z}$-matrix and its eigenvalues have positive real parts. Equivalently, $M := \lambda I_n - B$, where $B$ is a non-negative matrix and $\lambda > \rho(B) > 0$; see [@7337394; @9312975] for more details.
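As a concrete illustration (this example is ours and is only meant to fix ideas), the matrix $$M=\begin{bmatrix} 2 & -1 \\ -1 & 2\end{bmatrix}=3I_2-\begin{bmatrix} 1 & 1\\ 1 & 1\end{bmatrix}$$ is a $\mathcal{Z}$-matrix of the form $\lambda I_2 - B$ with $B$ non-negative, $\rho(B)=2$, and $\lambda = 3>\rho(B)$; its eigenvalues are $1$ and $3$, so it is a non-singular $M$-matrix.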
## Graph Notions
A directed graph or a digraph $\mathcal{G}(\mathcal{V},\mathcal{E})$ is characterized by the set of nodes $\mathcal{V} = \{1,2,...,n\}$, and the set of directed edges $\mathcal{E}$. The edge set $\mathcal{E}$ consists of ordered pairs, of the form $(k, i)$, that indicate a directed link from node $k$ to node $i$. Given a directed edge $(k, i) \in \mathcal{E}$, then node $k$ is called an *in-neighbor* of node $i$. We assign a positive weight $a_{ik}$ to each edge $(k, i)$. That is, $a_{ik} = 0$ if $(k,i)$ is not an edge. The Laplacian matrix of a digraph is given by $$\label{126} L :=
\begin{bmatrix}
d_1 & - a_{12} & \cdots & - a_{1n}
\\
- a_{21} & d_{2} & \cdots & - a_{2n}
\\
\vdots & \vdots & \vdots & \vdots
\\
- a_{n-1 1} & \cdots & d_{n-1} & - a_{n-1 n}
\\
- a_{n 1} & \cdots & - a_{n n-1} & d_{n}
\end{bmatrix} =: D - A,$$ where $d_i := \sum^n_{j=1} a_{ij}$ for all $i \in \{1,2,...,n\}$, $D$ is the diagonal part of $L$ and $A$ is called the adjacency matrix.
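As an illustration (this example is ours), consider the directed cycle $1\to 2\to 3\to 1$ with unit weights, that is, $a_{21}=a_{32}=a_{13}=1$ and all other weights equal to zero. Then $d_1=d_2=d_3=1$ and $$L=\begin{bmatrix} 1 & 0 & -1\\ -1 & 1 & 0\\ 0 & -1 & 1\end{bmatrix},$$ whose rows sum to zero, so that $L 1_3=0$; that is, $1_3$ is a right eigenvector of $L$ associated with the zero eigenvalue.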
A digraph is *strongly connected* if, for any two distinct nodes $i$ and $j$, there is a path from $i$ to $j$. The Laplacian matrix of a strongly connected graph admits $\lambda_1(L)=0$ as an eigenvalue with the corresponding right and left eigenvectors $1_n = \begin{bmatrix} 1 ~ 1 ~ \cdots ~ 1 \end{bmatrix}^\top$ and $v_{o} := \begin{bmatrix} v_1 ~ v_2 ~ \cdots ~ v_n \end{bmatrix}^\top$, respectively, where $v_i >0$ for all $i \leq n$.
## Graph and Matrix Decomposition
Suppose that the digraph $\mathcal{G}$ is connected and contains a directed spanning tree. Then, it admits a decomposition into a leading strongly connected subgraph $\mathcal{G}_\ell \neq \varnothing$ and a subgraph $\mathcal{G}_f:= \mathcal{G} \backslash \mathcal{G}_\ell$ of followers, namely, the agents that do not belong to the leading component, which we call the follower agents. In this case, up to a permutation, the Laplacian $L$ admits the lower-block decomposition $$\begin{aligned}
\label{eqWCdec}
L = \begin{bmatrix}
L_\ell & 0 \\ - A_{\ell f} & M_f \end{bmatrix},
\end{aligned}$$ where $L_\ell := D_\ell - A_\ell \in \mathbb{R}^{n_\ell \times n_\ell}$ is the Laplacian matrix of the strongly connected component $\mathcal{G}_\ell$, the lower-left block $A_{\ell f} \in \mathbb{R}^{n_f \times n_\ell}$, $n_f := n - n_\ell$, is a non-negative matrix, and the lower-right block $M_f \in \mathbb{R}^{n_f \times n_f}$ is a non-singular M-matrix. The block $M_f$ can be seen as the sum of the Laplacian matrix $L_f$ corresponding to $\mathcal G_f$ and a diagonal matrix $D_{\ell f}$ gathering the weights of the interconnections between nodes in $\mathcal G_\ell$ and the nodes in $\mathcal G_f$. That is, $M_f = L_f + D_{\ell f}$, where $L_f = D_f - A_f$.
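To illustrate this decomposition (again with an example of our own), augment the directed cycle above with a fourth node that receives a single unit-weight edge from node $1$, i.e., $a_{41}=1$. The leading strongly connected component is $\mathcal{G}_\ell=\{1,2,3\}$, the follower subgraph is $\mathcal{G}_f=\{4\}$, and $$L=\begin{bmatrix} 1 & 0 & -1 & 0\\ -1 & 1 & 0 & 0\\ 0 & -1 & 1 & 0\\ -1 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} L_\ell & 0\\ -A_{\ell f} & M_f\end{bmatrix}, \qquad A_{\ell f}=\begin{bmatrix}1 & 0 & 0\end{bmatrix},\quad M_f=\begin{bmatrix}1\end{bmatrix},$$ where $M_f=L_f+D_{\ell f}$ with $L_f=0$ and $D_{\ell f}=1$; in particular, $M_f$ is (trivially) a non-singular M-matrix.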
## Lyapunov Analysis of a Directed Graph
Consider a network of $n$ single integrators of the form $\dot{x}_i = u_i$ interconnected according to the classical consensus protocol $$u_i := - \sum_{j=1}^n a_{ij} (x_i - x_j) \qquad \forall i \in \{1,2,\cdots,n\}.$$
In closed loop, the network is governed by the linear system $\dot{x} = - L x$, where $L \in \mathbb{R}^{n \times n}$ is the Laplacian matrix of a connected digraph $\mathcal{G}$ that contains a directed spanning tree. According to Section [1](#sec1){reference-type="ref" reference="sec1"}, we can decompose the state $x$ into $x^\top := \left[ x_\ell^\top \ x_f^\top \right]$, where $x_\ell \in \mathbb{R}^{n_\ell}$ gathers the states of the leading component and is governed by $$\begin{aligned}
\Sigma_\ell: \, \dot{x}_\ell = - L_\ell x_\ell, \end{aligned}$$ while the non-leading component, whose state is $x_f \in \mathbb{R}^{n_f}$, is governed by $$\begin{aligned}
\Sigma_f: \, \dot{x}_f = - M_f x_f,\end{aligned}$$ on the manifold $\{ x_\ell = 0 \}$. In the rest of this section, we overview some Lyapunov-function constructions that allow us to prove uniform exponential stability of $\mathcal{A}$ for $\Sigma_\ell$, where $$\begin{aligned}
\label{eqAell}
\mathcal{A}:= \{ x_\ell \in \mathbb{R}^{n_\ell} : x_{\ell 1} = x_{\ell 2} = \cdots = x_{\ell n_\ell} \},
\end{aligned}$$ and exponential stability of the origin for $\Sigma_f$.
### Proof of uniform exponential stability of $\mathcal{A}$ for $\Sigma_\ell$
Let $v_{o} := \begin{bmatrix} v_1 ~ v_2 ~ \cdots ~ v_{n_\ell} \end{bmatrix}^\top$ be a left eigenvector associated to $\lambda_1(L_\ell)=0$ and $V_o :=\text{blkdiag} \{v_o \}$. Based on Lemma [Lemma 1](#lem1){reference-type="ref" reference="lem1"} in the Appendix, $Q_o:= L_\ell^\top V_{o} + V_{o} L_\ell$ is symmetric and positive semi-definite, and its kernel is spanned by $\boldsymbol 1_{n_\ell}$. Then, the derivative of the Lyapunov function candidate $W(x_\ell) := x^\top_\ell V_{o} x_\ell$, along the solutions to $\Sigma_\ell$, satisfies $$\dot W(x_\ell)= - x^\top_\ell ( L_\ell^{\top} V_{o} + V_{o} L_\ell) x_\ell \leq -\lambda_2 (Q_o) |x_\ell|_{\mathcal{A}_\ell}^2.$$ Now, we let $$Z(x_\ell) := \left( x_\ell - \boldsymbol 1_{n_\ell} v_{o}^\top x_\ell \right)^\top V_{o} \left( x_\ell - \boldsymbol 1_{n_\ell} v_{o}^\top x_\ell \right),$$ which is positive semi-definite. Its derivative along the solutions of $\dot x_\ell = - L_\ell x_\ell$ satisfies $$\begin{aligned}
\label{eqDeltaZs}
\dot Z(x_\ell)= - x^\top_\ell Q_{o} x_\ell \leq
- \lambda_2(Q_{o}) |x_\ell|^2_{\mathcal{A}_\ell}. \end{aligned}$$ To obtain the previous expression we used $v_o^\top L_\ell = 0$, $v_{o}^\top \boldsymbol 1_{n_\ell} =1$ and that $\boldsymbol 1_{n_\ell}$ is in the kernel of $I_{n_\ell} - \boldsymbol 1_{n_\ell} v_{o}^\top$. Moreover, $I_{n_\ell} - \boldsymbol 1_{n_\ell} v_{o}^\top$ is the Laplacian matrix of an all-to-all graph; hence, $\boldsymbol 1_{n_\ell}$ spans the kernel of $I_{n_\ell} - \boldsymbol 1_{n_\ell} v_{o}^\top$. Therefore, there exist $\bar{z}$, $\underline{z} >0$ such that $$\begin{aligned}
\label{eqZpropers}
\underline{z} |x_\ell|^2_{\mathcal{A}_\ell} \leq Z(x_\ell) \leq \bar{z} |x_\ell|^2_{\mathcal{A}_\ell} \qquad \forall x_\ell \in \mathbb{R}^{n_\ell}.
\end{aligned}$$ Uniform exponential stability of $\mathcal{A}_\ell$ then follows from [\[eqDeltaZs\]](#eqDeltaZs){reference-type="eqref" reference="eqDeltaZs"}, [\[eqZpropers\]](#eqZpropers){reference-type="eqref" reference="eqZpropers"}, and standard Lyapunov-stability theory.
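For completeness, the standard comparison argument behind the last claim can be spelled out as follows (this expansion is ours). Combining [\[eqDeltaZs\]](#eqDeltaZs){reference-type="eqref" reference="eqDeltaZs"} and [\[eqZpropers\]](#eqZpropers){reference-type="eqref" reference="eqZpropers"} gives $$\dot Z(x_\ell) \leq - \frac{\lambda_2(Q_o)}{\bar{z}}\, Z(x_\ell),$$ so, along the solutions, $Z(x_\ell(t)) \leq e^{-\frac{\lambda_2(Q_o)}{\bar{z}} t} Z(x_\ell(0))$ and, using [\[eqZpropers\]](#eqZpropers){reference-type="eqref" reference="eqZpropers"} once more, $$|x_\ell(t)|_{\mathcal{A}_\ell} \leq \sqrt{\frac{\bar{z}}{\underline{z}}}\, e^{-\frac{\lambda_2(Q_o)}{2\bar{z}} t}\, |x_\ell(0)|_{\mathcal{A}_\ell} \qquad \forall t \geq 0.$$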
### Proof of exponential stability of the origin for $\Sigma_f$
Based on Lemma [Lemma 2](#lem2Mf){reference-type="ref" reference="lem2Mf"}, since $M_f$ is a non-singular $M$-matrix, we can use the Lyapunov function candidate $Y(x_f) := x^\top_f R_f x_f$, where $R_f := \text{blkdiag} \left\{ {M_f}^{-\top}
1_{n_f} \right\} \left( \text{blkdiag} \left\{ M_f^{-1}
1_{n_f} \right\} \right)^{-1}$, which is positive definite. Furthermore, along the solutions to $\Sigma_f$, we have $$\dot Y(x_f)= - x^\top_f [ M^{\top}_f R_{f} + R_f M_f] x_f.$$ Now, since $( M^{\top}_f R_{f} + R_f M_f)$ is positive definite, exponential stability of the origin for $\Sigma_f$ follows.
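To spell out the last step (this expansion is ours): since $Y(x_f) \leq \lambda_{n_f}(R_f) |x_f|^2$, where $\lambda_{n_f}(R_f)$ denotes the largest eigenvalue of $R_f$, we obtain $$\dot Y(x_f) \leq - \lambda_1\big( M^{\top}_f R_{f} + R_f M_f \big) |x_f|^2 \leq - \frac{\lambda_1\big( M^{\top}_f R_{f} + R_f M_f \big)}{\lambda_{n_f}(R_f)}\, Y(x_f),$$ from which exponential decay of $Y$, and hence of $|x_f|$, follows.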
# Problem formulation
Consider the systems [\[120\]](#120){reference-type="eqref" reference="120"}-[\[134\]](#134){reference-type="eqref" reference="134"}, with $\gamma>0$ and $a_{ij} \geq 0$. Then, defining $x := [x_1\ \cdots \ x_n]^\top$, and $F(x) := \big [f_1( x_1),f_2( x_2), \cdots, f_n(x_n) \big ]^\top$, we may write the closed-loop system in compact form as $$\label{eqcl}
\dot x = F(x) - \gamma L x,$$ where $L$ is defined as in [\[126\]](#126){reference-type="eqref" reference="126"}. This is a networked system with an underlying topology that may be represented by a graph $\mathcal{G}$.
**Assumption 1**. *The graph $\mathcal G$ is connected and contains a directed spanning tree. $\bullet$*
We are interested in verifying the following two boundedness properties for [\[eqcl\]](#eqcl){reference-type="eqref" reference="eqcl"}.
1. [\[itemP1\]]{#itemP1 label="itemP1"} *Global Uniform Boundedness* (GUB). The solutions $t \rightarrow x(t)$ to [\[eqcl\]](#eqcl){reference-type="eqref" reference="eqcl"} are globally bounded, uniformly in $\gamma$, if, for every $r_o > 0$ and $\gamma_o > 0$, there exists $R = R(r_o, \gamma_o) \geq r_o$ such that, for all $\gamma \geq \gamma_o$, $$|x(t_o)| \leq r_o \Rightarrow |x(t)| \leq R \quad \forall t \geq 0.$$
2. [\[itemP2\]]{#itemP2 label="itemP2"} *Global Uniform Ultimate Boundedness* (GUUB). The solutions $t \rightarrow x(t)$ to [\[eqcl\]](#eqcl){reference-type="eqref" reference="eqcl"} are ultimately bounded, uniformly in $\gamma$, if given $\gamma_o > 0$, there exists $r = r(\gamma_o)>0$ such that, for all $r_o >0$, there exists $T = T(r_o,\gamma_o) \geq 0$ such that, for all $\gamma \geq \gamma_o$, $$|x(t_o)| \leq r_o \Rightarrow |x(t)| \leq r \quad \forall t \geq T.$$
To verify the latter two properties, we make the following assumption on the individual nodes' dynamics in [\[120\]](#120){reference-type="eqref" reference="120"}.
**Assumption 2** (State strict semi-passivity). *For each $i \in \{1,2,...,n\}$, the input-output map $u_i \mapsto x_i$ defined by the dynamics [\[120\]](#120){reference-type="eqref" reference="120"} is state strict semipassive [@IJBC_Pogromsky]. Furthermore, there exists a continuously differentiable storage function $V_i : \mathbb{R} \rightarrow \mathbb{R}_{\geq 0}$, a class $\mathcal{K}_\infty$ function $\underline{\alpha}_i$, a constant $\rho_i > 0$, a continuous function $H_i : \mathbb{R} \rightarrow \mathbb{R}$, and a continuous function $\psi_i : \mathbb{R}_{\geq 0} \rightarrow \mathbb{R}_{> 0}$, such that $$\label{128}
\underline{\alpha}_i(|x_i|) \leq V_i(x_i), \quad \dot{V}_i(x_i) \leq 2 u_i x_i - H_i(x_i),$$ and $H_i(x_i) \geq \psi_i(|x_i|)$ for all $|x_i| \geq \rho_i$. $\bullet$*
**Remark 1**. *The property described in Assumption [Assumption 2](#ass1){reference-type="ref" reference="ass1"} is called strict *quasipassivity* in [@POLUSHIN1998505]. In [@Pogr3] the authors define a similar concept named strict semi-passivity, but radial unboundedness of the storage function is not imposed. See also [@IJBC_Pogromsky]. $\bullet$*
# Main result
**Theorem 1** (Uniform ultimate boundedness). *The solutions of the networked system [\[120\]](#120){reference-type="eqref" reference="120"}-[\[134\]](#134){reference-type="eqref" reference="134"} are globally uniformly ultimately bounded, *i.e.,* Property [\[itemP2\]](#itemP2){reference-type="ref" reference="itemP2"} holds, if Assumptions [Assumption 1](#ass0){reference-type="ref" reference="ass0"} and [Assumption 2](#ass1){reference-type="ref" reference="ass1"} are satisfied. $\square$*
*Proof :* Under Assumption [Assumption 1](#ass0){reference-type="ref" reference="ass0"}, the Laplacian matrix $L$ admits a permutation, such that [\[eqWCdec\]](#eqWCdec){reference-type="eqref" reference="eqWCdec"} holds. Therefore, the state $x$ may be decomposed into $x := [x_\ell^\top\ x_f^\top]^\top$ and the system [\[eqcl\]](#eqcl){reference-type="eqref" reference="eqcl"} takes the cascaded form
[\[273\]]{#273 label="273"} $$\begin{aligned}
\label{eqLead}
\dot x_\ell & = f_\ell (x_\ell ) - \gamma L_\ell x_\ell, & f_\ell(x_\ell) := \begin{bmatrix}f_1(x_{\ell_1})& \cdots& f_{n_\ell}(x_{\ell _{n_\ell}}) \end{bmatrix}^\top\\
\label{foldyn}
\dot x_f &= f_f(x_f) + \gamma A_{\ell f} x_\ell - \gamma M_f x_f, &\qquad f_f(x_f) := \begin{bmatrix} f_{n_\ell+1}(x_{f_1}) & \cdots& f_{n_\ell+n_f}(x_{f _{n_f}}) \end{bmatrix}^\top.\end{aligned}$$
Equation [\[eqLead\]](#eqLead){reference-type="eqref" reference="eqLead"} corresponds to the dynamics of a *leading* component, a networked system with an underlying strongly connected graph $\mathcal G_\ell$, while the *follower* component has the dynamics [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"}. The proof of the statement is constructed using a cascade argument: we first prove global uniform ultimate boundedness for the solutions of [\[eqLead\]](#eqLead){reference-type="eqref" reference="eqLead"} and, consequently, the same property for [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"}.
To that end, let $r_o > 0$ be arbitrarily fixed and let $|x(0)| \leq r_o$. Then, $|x_\ell(0)| \leq r_o$ and $|x_f(0)| \leq r_o$.
*Uniform ultimate boundedness for the leading component*: after Assumption [Assumption 2](#ass1){reference-type="ref" reference="ass1"}, for each $i \in \{1,2,..., n_\ell\}$, there exists a storage function $V_{i}$ such that its total derivative along the trajectories of [\[120\]](#120){reference-type="eqref" reference="120"} satisfies $$\label{217}
\dot V_{i}(x_{\ell i}) \leq 2 u_i^\top x_{\ell i} - H_i (x_{\ell i}), \quad H_i(x_{\ell i}) \geq \psi_i(|x_{\ell i}|) \quad \forall |x_{\ell i}| \geq \rho_i.$$
Next, let $W(x_\ell) := \sum_{i=1}^{n_\ell} v_i V_i(x_{\ell i})$, where $v_i$ corresponds to the $i$th element of $v_o$, which is the left eigenvector associated to the zero eigenvalue of $L_\ell$. Since the graph $\mathcal{G}_\ell$ is strongly connected, then $v_i > 0$ for all $i \leq n_\ell$, so $W$ is positive definite and radially unbounded. Now, from [\[217\]](#217){reference-type="eqref" reference="217"}, we obtain $$\label{222}
\dot W (x_{\ell}) \leq 2 \sum_{i=1}^{n_\ell} v_i u_i^\top x_{\ell i} - \sum_{i=1}^{n_\ell} v_i H_i (x_{\ell i}), \qquad \forall x_\ell \in \mathbb R^{n_\ell}. \color{black}$$ The first term on the right-hand side of [\[222\]](#222){reference-type="eqref" reference="222"} satisfies $$\label{226}
\sum_{i=1}^{n_\ell} v_i u_i^\top x_{\ell i} = u^\top V_o x_\ell,$$ where $V_o:=\mbox{blkdiag}\{v_o\}$ and, since $u=-\gamma L_\ell x_{\ell}$, it follows that $$\begin{aligned}
\label{248}
\nonumber
\dot W (x_{\ell}) &\leq - \sum_{i=1}^{n_\ell} v_i H_i (x_{\ell i}) - \gamma x_\ell^\top [ L_\ell^\top V_o +V_o L_\ell] x_\ell\\
&\leq - \sum_{i=1}^{n_\ell} v_i H_i (x_{\ell i}) - \gamma x_\ell^\top Q_o x_\ell,\end{aligned}$$ with $Q_o := V_o L_\ell + L_\ell^\top V_o$, which is positive semi-definite---see Lemma [Lemma 1](#lem1){reference-type="ref" reference="lem1"} in the Appendix. Furthermore, we note that $$- x_\ell^\top Q_o x_\ell = - \left[x_\ell - \boldsymbol 1_{n_\ell} \boldsymbol 1_{n_\ell}^\top x_\ell/n_\ell \right]^\top Q_o \left[x_\ell - \boldsymbol 1_{n_\ell} \boldsymbol 1_{n_\ell}^\top x_\ell/n_\ell \right] \leq - \lambda_2(Q_o) |x_\ell|_{\mathcal A}^2,$$ where $|x_\ell|_{\mathcal A}$ denotes the distance of $x_\ell$ to the set $\mathcal{A}$ and $\lambda_2(Q_o)$ is the second smallest eigenvalue of $Q_o$.
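To justify the last inequality (this justification is ours): by Lemma [Lemma 1](#lem1){reference-type="ref" reference="lem1"} in the Appendix, $Q_o$ is positive semi-definite with kernel spanned by $\boldsymbol 1_{n_\ell}$, and the vector $y := x_\ell - \boldsymbol 1_{n_\ell} \boldsymbol 1_{n_\ell}^\top x_\ell/n_\ell$ satisfies $\boldsymbol 1_{n_\ell}^\top y = 0$ and $|y| = |x_\ell|_{\mathcal A}$; hence, $$y^\top Q_o y \geq \lambda_2(Q_o) |y|^2 = \lambda_2(Q_o) |x_\ell|_{\mathcal A}^2.$$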
Now, on one hand, we have that $v_i >0$ for all $i\in \{1,2,\ldots,n_\ell\}$ and, on the other, $-H_i(x_{\ell i}) > 0$ only if $|x_{\ell i}| \leq \rho_i$. Therefore, the constant $H_\ell := \sum_{i=1}^{n_\ell} v_i \max_{|x_i| \leq \rho_i} \left| H_i\big (x_{i} \big ) \right| \geq 0$ satisfies $- \sum_{i=1}^{n_\ell} v_i H_i (x_{\ell i}) \leq H_\ell$ for all $x_\ell \in \mathbb{R}^{n_\ell}$. Hence, after [\[248\]](#248){reference-type="eqref" reference="248"}, we get $$\begin{aligned}
\label{eqWdot}
\dot W (x_{\ell}) &\leq H_\ell - \gamma \lambda_2(Q_o) |x_\ell|_{\mathcal A}^2 \qquad \forall \, x_\ell \in \mathbb R^{n_\ell}. \color{black}\end{aligned}$$ In turn, given $\gamma_o>0$ and $\epsilon >0$, for all $\gamma \geq \gamma_o$, we have $$\begin{aligned}
\label{257}
\dot W (x_{\ell}) & \leq H_\ell - \gamma_o \lambda_2(Q_o) |x_\ell|_{\mathcal A}^2 \leq -\epsilon \qquad \forall x_\ell \notin \mathcal{C}, \end{aligned}$$ where $$\mathcal C := \left\{ x_\ell \in \mathbb{R}^{n_\ell} : |x_\ell|_{\mathcal A} \leq \sqrt{n_\ell} R_e := \sqrt{\frac{\epsilon + H_\ell}{\gamma_o \lambda_2(Q_o)}} \right\}.$$
Next, let $\bar \rho := \displaystyle\max_{i \in \{1,2,...,n_\ell\}} \rho_i$ and $$\mathcal B_\beta := \{ x_\ell \in \mathbb{R}^{n_\ell} : |x_\ell| \leq \beta := \sqrt{n_\ell} \big (\bar \rho + 2 R_e\big ) \}.$$ Note that for all $x_\ell \notin \mathcal B_\beta$, we have $|x_\ell| > \sqrt{n_\ell} (\bar \rho + 2R_e)$ and, for all $x_\ell \in \mathcal{C} \backslash \mathcal{B}_\beta$, $$\begin{aligned}
\label{eqbounds}
|x_\ell| > \sqrt{n_\ell} (\bar \rho + 2 R_e) \quad \text{and} \quad |x_\ell|_\mathcal{A} \leq \sqrt{n_\ell} R_e. \end{aligned}$$ Furthermore, we use the fact that $x_\ell = \boldsymbol 1_{n_\ell} (\boldsymbol 1_{n_\ell}^\top x_\ell)/n_\ell + \left[ x_\ell - \boldsymbol 1_{n_\ell} (\boldsymbol 1_{n_\ell}^\top x_\ell)/n_\ell \right]$, and the fact that $|x_\ell|_\mathcal{A} = |x_\ell - \boldsymbol 1_{n_\ell} (\boldsymbol 1_{n_\ell}^\top x_\ell)/n_\ell|$, to conclude that $$\begin{aligned}
\label{eqbounds1}
| x_\ell | \leq |x_\ell|_\mathcal{A} + | \boldsymbol 1_{n_\ell}^\top x_\ell| / \sqrt{n_\ell}. \end{aligned}$$ Now, combining [\[eqbounds\]](#eqbounds){reference-type="eqref" reference="eqbounds"} and [\[eqbounds1\]](#eqbounds1){reference-type="eqref" reference="eqbounds1"}, we conclude that for all $x_\ell \in \mathcal{C} \backslash \mathcal{B}_\beta$, $$\begin{aligned}
\label{eqbounds2}
\sqrt{n_\ell} (\bar{\rho} + 2 R_e) < | x_\ell | \leq |x_\ell|_\mathcal{A} + | \boldsymbol 1_{n_\ell}^\top x_\ell| / \sqrt{n_\ell} \leq \sqrt{n_\ell} R_e + | \boldsymbol 1_{n_\ell}^\top x_\ell|/ \sqrt{n_\ell}. \end{aligned}$$ So, for all $x_\ell \in \mathcal{C} \backslash \mathcal{B}_\beta$, $| \boldsymbol 1_{n_\ell}^\top x_\ell|/n_\ell > \bar{\rho} + R_e$. Next, we use the fact that $$x_{\ell i} = \boldsymbol 1_{n_\ell}^\top x_\ell/n_\ell + \left( x_{\ell i} - \boldsymbol 1_{n_\ell}^\top x_\ell / n_\ell \right) \quad \forall i \in\{1,2,...,n_\ell\}$$ to conclude that $|x_{\ell i}| > \color{black}|\boldsymbol 1_{n_\ell}^\top x_\ell| / n_\ell - |\left( x_{\ell i} - \boldsymbol 1_{n_\ell}^\top x_\ell/n_\ell \right)|$. Hence, $$|x_{\ell i}| > \bar{\rho} + R_e - \sqrt{n_\ell} R_e > \bar{\rho} \qquad \forall i \in \{1,2,...,n_\ell \}$$ for all $x_\ell \in \mathcal{C} \backslash \mathcal{B}_\beta$. The latter, under Assumption [Assumption 2](#ass1){reference-type="ref" reference="ass1"}, implies that $$\begin{aligned}
\nonumber
- \sum_{i=1}^{n_\ell} v_i H_i (x_{\ell i}) \leq - \sum_{i=1}^{n_\ell} v_i \psi_i (|x_{\ell i}|) \leq 0 \qquad \forall x_\ell \in \mathcal{C} \backslash \mathcal{B}_\beta.\end{aligned}$$ As a result, setting $\Psi (x_\ell) := \sum_{i=1}^{n_\ell} v_i \psi_i (|x_{\ell i}|)$---note that $\Psi$ is continuous and positive---we conclude that $$\dot W (x_{\ell}) \leq - \Psi (x_\ell) \qquad \forall x_\ell \in \mathcal{C} \backslash \mathcal{B}_\beta.$$ Combining the latter inequality to [\[257\]](#257){reference-type="eqref" reference="257"}, we conclude that $$\dot W (x_{\ell}) \leq - \min \{ \Psi (x_\ell), \epsilon \} \qquad \forall x_\ell \in \mathbb{R}^{n_\ell} \backslash \mathcal{B}_\beta.$$ The latter is enough to conclude global attractivity and forward invariance of the set $$\begin{aligned}
\mathcal{S}_\sigma := \{ x_\ell \in \mathbb{R}^{n_\ell} : W(x_\ell) \leq \sigma \}, \quad \sigma := \max \{ W(y) : y \in \mathcal{B}_\beta \}. \end{aligned}$$ Furthermore, since $W : \mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ is continuous and $\mathcal{B}_\beta$ is bounded, we conclude that $\sigma$ is well defined. Consequently, the ultimate bound is $$r_\ell := \left[ \min_i \{\underline{\alpha}_i\} \right]^- (\sigma),$$ where, with an abuse of notation, $\min_i \{\underline{\alpha}_i\}$ corresponds to the function $s\mapsto \psi(s)$ defined as $\psi(s) := \min_i \{\underline{\alpha}_i(s)\}$ for each $s \geq 0$ and $\underline{\alpha}_i$ is defined in Assumption [Assumption 2](#ass1){reference-type="ref" reference="ass1"}, so $\psi: \mathbb R_{\geq 0}\to \mathbb R_{\geq 0}$ is strictly increasing and radially unbounded, hence, globally invertible. Thus, $W(x_\ell) \leq \sigma$ implies that $|x_\ell| \leq r_\ell$.
Next, we compute an upper bound $T_\ell (r_o,\gamma_o)$ on the time that the solutions to [\[eqLead\]](#eqLead){reference-type="eqref" reference="eqLead"}, with $\gamma \geq \gamma_o$ and starting from $\mathcal{B}_{r_o} := \{x_\ell \in \mathbb{R}^{n_\ell} : |x_\ell| \leq r_o \}$, take to reach the compact set $\mathcal{B}_\beta \subset \mathcal{S}_\sigma$. For this, we assume without loss of generality that $r_o \geq \beta$, and we define $$\epsilon_{r_o} := \min \{ \min \{ \Psi (x_\ell), \epsilon \} : |x_\ell | \geq \beta, ~ x_\ell \in \mathcal{S}_{\sigma_o} \},$$ where $$\begin{aligned}
\label{eqSsigmao} \mathcal{S}_{\sigma_o} := \{ x_\ell \in \mathbb{R}^{n_\ell}
: W(x_\ell) \leq \sigma_o
\}, ~ \sigma_o := \max \{ W(y) : y \in \mathcal{B}_{r_o} \}. \end{aligned}$$ Clearly, $\mathcal{S}_{\sigma_o}$ is compact; hence, $\epsilon_{r_o}$ is positive.
Therefore, along every solution $t \mapsto x_\ell(t)$ to [\[eqLead\]](#eqLead){reference-type="eqref" reference="eqLead"} starting from $x_{\ell} (0) \in \mathcal{B}_{r_o} \backslash \mathcal{B}_{\beta}$, we have $\dot W (x_{\ell}(t)) \leq - \epsilon_{r_o}$, up to the earliest time when $x_\ell$ reaches $\mathcal{B}_{\beta}$. For any earlier time, we have $$\label{338}
W(x_{\ell}(t)) \leq - \epsilon_{r_o} t + W(x_{\ell}(0)),$$ so we can take $T_\ell(r_o,\gamma_o) = \sigma_o/\epsilon_{r_o}$. Clearly, $T_\ell$ depends only on $(r_o,\gamma_o)$ and $r_\ell$ depends only on $\gamma_o$. Thus, the ultimate bound guaranteed for the solutions of [\[eqLead\]](#eqLead){reference-type="eqref" reference="eqLead"} is uniform in $\gamma$.
*Uniform ultimate boundedness for the follower dynamics*: following up on the previous computations and arguments, we now establish global uniform ultimate boundedness for the non-leading component, determined by [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"}.
Using Lemma [Lemma 2](#lem2Mf){reference-type="ref" reference="lem2Mf"}, we conclude that the matrices $$S := P M_f + M_f^\top P \quad \text{and} \quad P := \text{blkdiag} \left\{ {M_f^\top}^-
1_{n_f} \right\} \left( \text{blkdiag} \left\{ M_f^-
1_{n_f} \right\} \right)^-$$ are symmetric and positive definite. We also note that $P$ is diagonal. Then, let $p_i$, for $i \in \{1,2,..., n_f\}$, be the $i$th diagonal element of $P$. In addition, let $Z (x_f) := \sum_{i=1}^{n_f} p_i V_i (x_{f i})$. Its total derivative along the trajectories of [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"} satisfies $$\begin{aligned}
\label{405}
\dot Z(x_f) \leq - \sum_{i=1}^{n_f} p_i H_i(x_{ f i})- \gamma x_f^\top [ P M_f + M_f^\top P ] x_f + 2 \gamma x_f^\top [ P A_{\ell f} ] x_\ell.\end{aligned}$$
On one hand, we already established the existence of $r_\ell(\gamma_o) > 0$ and $T_\ell(\gamma_o,r_o)$ such that $$|x_\ell(t)|<r_\ell \qquad \forall t \geq T_\ell.$$ On the other, for all $|x_\ell| \leq r_\ell$, $$\begin{aligned}
\label{296}
\dot Z (x_f) \leq H_f - \gamma \lambda_{\text{1}}(S) |x_f|^2 + 2 \gamma \bar p r_\ell |x_f|, \end{aligned}$$ where $\bar p :=| P A_{\ell f}|$ and $H_f := \sum_{i=1}^{n_f} \max_{|x_i| \leq \rho_i} \{v_i H_i\big (x_{f i} \big ) \}$. Now, from this and [\[405\]](#405){reference-type="eqref" reference="405"}, we obtain $$\begin{aligned}
\dot Z(x_f) & \leq H_f - \gamma x_f^\top S x_f + 2 \gamma x_f^\top [ P A_{\ell f} ] x_\ell
\\ &
\leq H_f - \gamma \left[ x_f^\top S x_f/2 - 2 x_\ell^\top A_{\ell f}^\top P^\top S^- P A_{\ell f} x_\ell \right].\end{aligned}$$ At the same time, integrating [\[eqWdot\]](#eqWdot){reference-type="eqref" reference="eqWdot"}, we obtain that, for each $t \in [0,T_\ell]$, $$W(x_\ell(t)) \leq H_\ell T_\ell + W(x_\ell(0)) \leq H_\ell T_\ell + \sigma_o,$$ where $\sigma_o$ comes from [\[eqSsigmao\]](#eqSsigmao){reference-type="eqref" reference="eqSsigmao"}. Defining $$R_\ell := \left[ \min_i
\{\underline{\alpha}_i\} \right]^- \left(H_\ell T_\ell + \sigma_o \right),$$ we have $$\begin{aligned}
\dot Z(x_f) &
\leq H_f - \gamma
\left[ \lambda_{1}(S) |x_f|^2/2 - 2 |A_{\ell f}^\top P^\top S^- P A_{\ell f}| R_\ell^2 \right],\end{aligned}$$ for all $|x_\ell| \leq R_\ell$.
Note that, for all $x_f$ such that $$|x_f|^2 > d^2_f := \frac{4 |A_{\ell f}^\top P^\top S^- P A_{\ell f}| R_\ell^2}{\lambda_1(S)} + \frac{2 H_f}{\lambda_1(S) \gamma_o},$$ $\dot{Z}(x_f) \leq 0$. This implies that, for all $t \in [0,T_\ell]$, $$\begin{aligned}
\label{eqZb}
Z(x_f(t)) \leq \max \left\{ \sigma_{fo}, \sigma_f \right\}, \quad \sigma_f := \max \{ Z(x_f) : |x_f| \leq d_f\}, \quad \sigma_{f o} := \max \{ Z(x_f) : |x_f| \leq r_o \}. \end{aligned}$$ In turn, for each $t \in [0,T_\ell]$, $$\label{398}
|x_f(t)| \leq \bar{r}_o := \left[ \left( \sum^{n_f}_{i = 1} p_i \right) \min_i \{\underline{\alpha}_i\} \right]^- \left( \max \left\{ \sigma_{fo}, \sigma_f \right\}\right).$$ Clearly, the previous upper bound is uniform in $\gamma \geq \gamma_o$.
Next, we focus on the solutions' behaviour after $T_\ell$ (i.e., once $|x_\ell| \leq r_\ell$). Given $\epsilon>0$, we see that, for all $\gamma \geq \gamma_o$ and for all $x_f$ and $x_\ell$ such that $$|x_f| > \beta_1 := 1 + \frac{2 \bar p r_\ell}{\lambda_{1}(S)} + \sqrt{ \frac{\epsilon + H_f}{\gamma_o \lambda_{1}(S)}} \quad \text{and} \quad |x_\ell| \leq r_\ell,$$ after [\[296\]](#296){reference-type="eqref" reference="296"}, we conclude that $\dot Z(x_f) \leq - \epsilon.$ Furthermore, since $|x_\ell(t)| \leq r_\ell$ for all $t \geq T_\ell$, the set $$\mathcal{S}_{\sigma_1} := \{ x_f \in \mathbb{R}^{n_f} : Z(x_f) \leq \sigma_1 \}, \quad \sigma_1 := \max \{ Z(y) : y \in \mathcal{B}_{\beta_1} \}, \quad \mathcal B_{\beta_1} := \{ x_f \in \mathbb{R}^{n_f} : |x_f| \leq \beta_1 \},$$ is attractive and becomes forward invariant after time $T_\ell$.
Since $Z : \mathbb{R}^n \rightarrow \mathbb{R}_{\geq 0}$ is continuous and $\mathcal{B}_{\beta_1}$ is bounded, we conclude that $\sigma_1$ is well defined. As a result, the ultimate bound for $x_f(t)$ is $$r_f = \left[ \left( \sum^{n_f}_{i = 1} p_i \right) \min_i \{\underline{\alpha}_i\} \right]^- (\sigma_1).$$ Indeed, $Z(x_f) \leq \sigma_1$ implies $|x_f| \leq r_f$.
Finally, as done for $t\mapsto x_\ell(t)$, we next give an upper bound, denoted by $T_f (r_o,\gamma_o)$, on the time that the solutions to [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"}, with $\gamma \geq \gamma_o$ and starting from $\mathcal{B}_{r_o} := \{x_f \in \mathbb{R}^{n_f} : |x_f| \leq r_o \}$, take to reach $\mathcal{B}_{\beta_1} \subset \mathcal{S}_{\sigma_1}$.
Consider a solution $t \mapsto x_f(t)$ to [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"} starting from $x_f(0) \in \mathcal{B}_{r_o}$. Now, we use the fact that $|x_f(T_\ell)| \leq \bar{r}_o$, with $\bar{r}_o$ coming from [\[398\]](#398){reference-type="eqref" reference="398"}, and that $\bar{r}_o$ is uniform in $\gamma$. As a result, along the solution $t \mapsto x_f(t)$, we have $\dot Z (x_{f}(t)) \leq - \epsilon$ from $T_\ell$ and up to when it reaches $\mathcal{B}_{\beta_1}$ for the first time after $T_\ell$. Hence, before reaching $\mathcal{B}_{\beta_1}$, we have $$\label{459}
Z(x_{f}(t)) \leq - \epsilon (t - T_\ell) + Z(x_{f}(T_\ell))$$ and, thus, using [\[eqZb\]](#eqZb){reference-type="eqref" reference="eqZb"}, we can take $T_f = T_\ell + \max\{\sigma_{fo}, \sigma_f \}/\epsilon$. Clearly, $T_f$ depends only on $(r_o,\gamma_o)$ and $r_f$ depends only on $\gamma_o$. Thus, the ultimate bound guaranteed for the solutions of [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"} is also uniform in $\gamma$. $\blacksquare$
**Corollary 1** (Uniform boundedness). *Under Assumptions [Assumption 1](#ass0){reference-type="ref" reference="ass0"} and [Assumption 2](#ass1){reference-type="ref" reference="ass1"} the solutions of the closed-loop system in [\[eqcl\]](#eqcl){reference-type="eqref" reference="eqcl"} are globally uniformly bounded, *i.e.,* Property [\[itemP1\]](#itemP1){reference-type="ref" reference="itemP1"} holds. $\square$*
*Proof :* The statement of Theorem [Theorem 1](#thm1){reference-type="ref" reference="thm1"} holds, therefore, given $r_o>0$ and $\gamma_o>0$, for all $\gamma \geq \gamma_o$, we have $$|x_\ell(0)| \leq r_o \implies |x_\ell(t)| \leq r_\ell(\gamma_o) \qquad \forall t \geq T_\ell(r_o, \gamma_o).$$ Furthermore, we were able to show that on the interval $[0, T_\ell(r_o, \gamma_o)]$, we have $$W(x_{\ell}(t)) \leq H_\ell T_\ell + W(x_{\ell}(0)).$$ Hence, if we let $\sigma_\ell :=\max \{ W(y) : |y| \leq r_o \}$, it follows that $$|x_\ell(t)| \leq R_\ell := \left[ \min_i \{ \underline{\alpha}_i\} \right]^- \big( \sigma_\ell + H_\ell T_\ell + r_\ell \big) \qquad \forall t \geq 0.$$
Next, for the solutions to [\[foldyn\]](#foldyn){reference-type="eqref" reference="foldyn"}, for any $\gamma> \gamma_o$ and $|x_f(0)| \leq r_o$, we know that $$|x_f(t)| \leq r_f \qquad \forall t \geq T_f(\gamma_o,r_o).$$ At the same time, from the previous proof, we know that $$\begin{aligned}
\dot Z(x_f) &
\leq H_f - \gamma
\left[ \lambda_{1}(S) |x_f|^2/2 - 2 |A_{\ell f}^\top P^\top S^- P A_{\ell f}| R_\ell^2 \right].\end{aligned}$$ As a result, when $$|x_f|^2 > d^2_f := \frac{4 |A_{\ell f}^\top P^\top S^- P A_{\ell f}| R_\ell^2}{\lambda_1(S)} + \frac{2 H_f}{\lambda_1(S) \gamma_o},$$ then $\dot{Z}(x_f) \leq 0$. Hence, for each $t \geq 0$, $$\begin{aligned}
%\label{eqZb}
Z(x_f(t)) \leq \max \left\{ \sigma_{fo}, \sigma_f \right\}, \quad \sigma_f := \max \{ Z(x_f) : |x_f| \leq d_f\}, \quad \sigma_{f o} := \max \{ Z(x_f) : |x_f| \leq r_o \}. \end{aligned}$$ In turn, for each $t \geq 0$, we have $$%\label{398}
|x_f(t)| \leq R_f := \left[ \left( \sum^{n_f}_{i = 1} p_i \right) \min_i \{\underline{\alpha}_i\} \right]^- \left( \max \left\{ \sigma_{fo}, \sigma_f \right\}\right).$$ $\blacksquare$
# Appendix {#appendix .unnumbered}
The following lemma is proposed in [@9312975], see also [@qu2009cooperative].
**Lemma 1**. *Let $L \in \mathbb{R}^{n \times n}$ be the Laplacian matrix of a directed and strongly connected graph. Let $v_o := [v_1,v_2,...,v_n]^\top \in \mathbb{R}^{n}$ be the left eigenvector of $L$ associated to the null eigenvalue of $L$.*
*Then, the vector $v_o$ has strictly positive entries and, for $V_o := \text{blkdiag}(v_o)$, we have $\text{Ker} (V_o L + L^\top V_o) =
\textrm{Span}~ (1_{n})$ and $V_o L + L^\top V_o \geq 0$. $\square$*
The next result can be deduced from [@qu2009cooperative Section 4.3.5].
**Lemma 2**. *Let $M \in \mathbb{R}^{n \times n}$ be a non-singular M-matrix. Then, the matrices $$S := R M + M^\top R \quad \text{and} \quad R := \text{blkdiag} \left\{ {M^\top}^-
1_{n} \right\} \left( \text{blkdiag} \left\{ M^-
1_{n} \right\} \right)^-$$ are positive definite.*
*$\square$*
1
A. Pogromsky. Passivity-based design of synchronizing systems. , 8, 02 1998.
X. Chen, B. Xudong, M-A. Belabbas, and T. Basar. Controllability of formations over directed time-varying graphs. , 4(3):407--416, 2017.
M. U. Javed, J. I. Poveda, and X. Chen. Excitation conditions for uniform exponential stability of the cooperative gradient algorithm over weakly connected digraphs. , 6:67--72, 2021.
I. G. Polushin, D. Hill, and A. L. Fradkov. Strict quasipassivity and ultimate boundedness for nonlinear control systems. , 31(17):505--510, 1998. 4th IFAC Symposium on Nonlinear Control Systems Design 1998 (NOLCOS'98), Enschede, The Netherlands, 1-3 July.
A. Y. Pogromsky, T. Glad, and H. Nijmeijer. On diffusion driven oscillations in coupled dynamical systems. , 9(4):629--644, 1999.
Z. Qu. Cooperative Control of Dynamical Systems: Applications to Autonomous Vehicles. Springer Verlag, London, UK, 2009.
[^1]: For simplicity, but without loss of generality, we assume that $x\in \mathbb{R}$; all statements hold after pertinent changes in the notation, if $x\in \mathbb{R}^p$, with $p >1$.
[^2]: M. Maghenem is with University of Grenoble Alpes, CNRS, Grenoble INP, GIPSA-Lab, France. E-mail: mohamed.maghenem\@cnrs.fr; E. Panteley and A. Loría are with L2S, CNRS, 91192 Gif-sur-Yvette, France. E-mail: elena.panteley\@cnrs.fr and antonio.loria\@cnrs.fr A. Lazri is with L2S, CNRS, Univ Paris-Saclay, France (e-mail: anes.lazri\@centralesupelec.fr)
| arxiv_math | {
"id": "2309.12480",
"title": "Global Uniform Ultimate Boundedness of Semi-Passive Systems\n Interconnected over Directed Graphs",
"authors": "Anes Lazri and Mohamed Maghenem and Elena Panteley and Antonio Loria",
"categories": "math.OC",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
The classical Erdős--Lax inequality states that if $P$ is a polynomial of degree $n$ which is zero free on $\{|z|<1\}$ then $\|P'\|\leq \frac{n}{2}\|P\|$ where the norm is taken as supremum norm on the unit circle. Equality holds if and only if all zeros of $P$ are situated on the unit circle. In the present article we generalise the equality to functions of the form $f = P^{1/m}$ where $P$ has all its roots on the unit circle obtaining $\|f'\|= \frac{n/m}{2}\|f\|$. We further prove an inequality of the form $\|f'\|\leq \frac{n/m}{2}\|f\|$ where $P$ is zero free on $\{|z|<1\}$.
address: |
Centre for Mathematical Sciences, Lund University\
Box 118, SE-22100, Lund, Sweden\
author:
- Alex Bergman & Olof Rubin
bibliography:
- ref.bib
title: Erdős--Lax estimates for rational powers of polynomials
---
# Introduction
The purpose of this article is to establish estimates in the uniform norm of derivatives of fractional powers of polynomials. In particular, we consider functions $f=P^{1/m}$ where $P$ is a polynomial and $m\in \mathbb{N}$ with the goal of relating the supremum norms of $f'$ and $f$ relative to the unit circle. Throughout the text we adopt the notation that $\|\cdot\|$ denotes the supremum norm on the unit circle, $\mathbb{T} = \{z:|z|=1\}$. We further adopt the notation $\mathbb{D} = \{z:|z|<1\}$ for the unit disk.
For $m>1$ the function $f$ will be multi-valued. However, $f$ and its derivatives will have single-valued absolute values.
A classical example of a relation of the aforementioned type comes from Bernstein's inequality. If $P$ is a polynomial of degree $n$ then $$\label{eq:bernstein}
\|P'\|\leq n\|P\|$$ with equality if and only if $P(z) = cz^n$ for some constant $c\in \mathbb{C}$.
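As a quick numerical illustration (the polynomials and the sampling of $\mathbb{T}$ are arbitrary choices, and NumPy is assumed), one may compare $\|P'\|$ and $n\|P\|$ on a fine grid of the unit circle:

```python
import numpy as np
from numpy.polynomial import polynomial as npoly

z = np.exp(1j * np.linspace(0, 2 * np.pi, 4001))      # sample points on the unit circle

def sup_norm(coeffs):
    """Approximate supremum norm on the unit circle (coefficients in increasing degree)."""
    return np.abs(npoly.polyval(z, coeffs)).max()

p = [1.0, -2.0, 0.5, 3.0]                             # P(z) = 1 - 2z + 0.5 z^2 + 3 z^3, degree 3
print(sup_norm(npoly.polyder(p)) <= 3 * sup_norm(p))  # True: Bernstein's inequality

q = [0.0, 0.0, 0.0, 2.0]                              # Q(z) = 2 z^3: the equality case
print(np.isclose(sup_norm(npoly.polyder(q)), 3 * sup_norm(q)))  # True
```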
Erdős suggested that this inequality can be sharpened in the case when $P$ is zero free on $\mathbb{D}$. He suggested that a sharp inequality in this case is given by $$\label{eq:erdos-lax}
\|P'\|\leq \frac{n}{2}\|P\|.$$ This was later proven to be correct by Lax [@Lax] using techniques developed by Szegő and Pólya. Szegő and Pólya verified the special case of equality in [\[eq:erdos-lax\]](#eq:erdos-lax){reference-type="eqref" reference="eq:erdos-lax"} when the zeros of $P$ are situated on $\mathbb{T}$.
Lax was able to lift this result using so-called reciprocal polynomials to prove [\[eq:erdos-lax\]](#eq:erdos-lax){reference-type="eqref" reference="eq:erdos-lax"}. We should mention that preceding Lax's proof of [\[eq:erdos-lax\]](#eq:erdos-lax){reference-type="eqref" reference="eq:erdos-lax"}, Turán [@Turan] showed that if $P$ is a polynomial which is zero free on $\mathbb{C}\setminus \mathbb{D}$ then $$\|P'\|< \frac{n}{2}\|P\|.
\label{eq:turan}$$
It is interesting to ask if the case of equality in [\[eq:erdos-lax\]](#eq:erdos-lax){reference-type="eqref" reference="eq:erdos-lax"} that was proven by Szegő and Pólya can be generalised to the setting where, instead of the polynomial $P$, we consider the function $f = P^{1/m}$. In particular, can we then replace the degree $n$ in the equation with the number $n/m$? Here we provide the affirmative answer to this question. We also consider the case where the zeros are situated exterior to $\mathbb{D}$ and provide a case for which the inequality is valid. Several generalisations of the Erdős--Lax inequality exist for the case of polynomials. See, for example, [@Malik; @MirHussainWagay; @AzizMohammad]. In particular, we mention the $L^{p}$ versions of the Erdős--Lax inequality proved in [@RAHMAN198826] ($0 < p < \infty$) which were obtained as consequences of $L^{p}$ Bernstein estimates established by Arestov in [@arestov1981integral]. We remark that the results were obtained earlier in the case $1\leq p<\infty$. For such $p$ the Bernstein estimate was given by Zygmund [@MR1576159] and the Erdős--Lax estimate was given by de Bruijn [@MR0023380]. However, the direction toward similar inequalities for fractional powers of polynomials appears to be unexplored.
For $k = 1,\dotsc,N$, let $a_k$ be complex numbers with $\lvert a_{k} \rvert \geq 1$, let $s_k$ be positive rational numbers, and let $m \geq 1$ be an integer such that $ms_{k} \in \mathbb{N}$ for all $k$. Denote by $P$ the polynomial $$P(z) = c^m\prod_{k=1}^{N} (z-a_{k})^{s_{k}m}.$$ The degree of $P$ will be denoted by $n =m\sum_{k=1}^{N} s_{k}$. Let $$\label{eq:form}
f(z) = P(z)^{1/m} = c\prod_{k=1}^{N} (z-a_{k})^{s_{k}}, \quad z \in \mathbb{D}$$ where the branches are chosen as to make $f$ analytic in $\mathbb{D}$. The function $f$ extends continuously to the unit circle and the derivative of $f$ is given by $$\label{eq:derivative}
f'(z) = \frac{1}{m}f(z)\frac{P'(z)}{P(z)} = \frac{f(z)}{m} \sum_{k=1}^{n} \frac{1}{z-c_{k}},$$ where the $c_{k}$'s are the zeros of $P$ counting multiplicities. If $s_{k} \geq 1$ for all $k$, the derivative is also continuous in $\overline{\mathbb{D}}$. Our goal is to prove the following Erdős--Lax type equality.
**Theorem 1**. *Let $|a_k|=1$ and let $s_k\geq 1$ denote rational numbers. If $$f(z) = c \prod_{k} (z-a_{k})^{s_{k}},$$ then $$\lVert f'\rVert = \frac{\sum s_{k}}{2} \lVert f\rVert.$$*
Using this equality we are able to prove an Erdős--Lax type inequality with certain restrictions on the zeros of $f$ in $\left\{ z : \lvert z \rvert > 1\right\}$.
**Theorem 2**. *Let $|a_k|\geq 1$ and let $s_{k} \geq 1$ denote rational numbers. If $$f(z) = c \prod_{k} (z-a_{k})^{s_{k}},$$ and $s_k\in \mathbb{N}$ whenever $|a_k|>1$, then $$\lVert f'\rVert \leq \frac{\sum s_{k}}{2} \lVert f\rVert$$ with equality if and only if all $a_k$'s are unimodular.*
This result allows us to have zeros of $f$ away from the unit circle. However, the restriction is that the zeros which are located away from the circle must have integer exponents.
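Both statements are easy to test numerically. The following sketch (an illustration under arbitrary choices of zeros and exponents, with NumPy assumed) samples the unit circle, computes $|f'| = |f|\,\bigl|\sum_k s_k/(z-a_k)\bigr|$, and compares $\|f'\|/\|f\|$ with $\tfrac{1}{2}\sum_k s_k$.

```python
import numpy as np

N = 20000
theta = (np.arange(N) + 0.5) * 2 * np.pi / N          # grid avoiding the zeros of f exactly
z = np.exp(1j * theta)

def ratio(zeros, exponents):
    """Estimate max|f'| / max|f| on the circle for f(z) = prod_k (z - a_k)^{s_k}."""
    f_abs = np.ones_like(theta)
    log_der = np.zeros_like(z)
    for a, s in zip(zeros, exponents):
        f_abs *= np.abs(z - a) ** s
        log_der += s / (z - a)                        # f'/f = sum_k s_k/(z - a_k)
    return (f_abs * np.abs(log_der)).max() / f_abs.max()

# Theorem 1: zeros on the unit circle, one fractional exponent; expect equality with (sum s_k)/2.
print(ratio([1.0, -1.0], [1.5, 1.0]), "vs", (1.5 + 1.0) / 2)

# Theorem 2: a zero outside the closed disc with an integer exponent; expect a strict inequality.
print(ratio([1.0, 2.0], [1.5, 2.0]), "<", (1.5 + 2.0) / 2)
```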
It should be noted that all our results extend to the case where $s_k$ is real-valued with the same bound as specified above by a density argument.
The generalisation of the Erdős--Lax inequality presented here serves the purpose of allowing for fractional powers of roots on the unit circle. The motivation for this comes from applications to weighted Chebyshev problems on the unit circle. Given a weight function $w:\mathbb{T}\rightarrow [0,\infty)$ this amounts to finding the unique monic minimiser $T_n^{w}(z) = z^n+\sum_{k=0}^{n-1}a_kz^k$ such that $$\|wT_n^w\|= \min_{b_k}\left\|w(z)\left(z^n+\sum_{k=0}^{n-1}b_kz^k\right)\right\|.$$ With Theorem [Theorem 1](#thm:2){reference-type="ref" reference="thm:2"} at hand it is possible to generalise work of Lachance, Saff and Varga [@LachanceSaffVarga] characterising such polynomials in the case where $w(z) = |z+1|^{s}$ for $s>0$. The reader is referred to [@ChristiansenRubin] for more details.
# Proofs of the main Theorems

Assume, as before, that $f$ is of the form $$f(z) = c \prod_{k} (z-a_{k})^{s_{k}}$$ where $|a_k|,s_k\geq 1$. First of all, note that the restriction $s_k\geq 1$ is necessary, for otherwise $f'$ may not be bounded on $\mathbb{T}$ if $|a_k| = 1$ for some $k$. Secondly, note that if $f(z) = (z-a)^s$ with $|a|\geq 1$ and $s\geq 1$ then $$f'(z) = s(z-a)^{s-1},$$ and thus $\|f'\| = s(2+(|a|-1))^{s-1}$. Since $\|f\| = (2+(|a|-1))^s$, this implies that $$\frac{\|f'\|}{\|f\|} = \frac{s}{(2+|a|-1)}\leq \frac{s}{2}$$ with equality if and only if $|a|=1$. Also note that in this case $f'$ achieves its maximum modulus at the same point on the unit circle as $f$ does. This is something we will investigate further. The argument we now give will be an adaptation of Szegő's argument for polynomials suitably modified to our situation. The key step is to prove that any point of maximality for $f'$ is a point of maximality for $f$ (with one exception).
*Proof of Theorem [Theorem 1](#thm:2){reference-type="ref" reference="thm:2"}.* Recall that we are assuming $s_{k} \geq 1$, for all $k$ and that $\sum_{k=1}^{N} s_k = n/m$ where $n$ is the degree of the polynomial $P$ satisfying $f(z) = P(z)^{1/m}$. With these restrictions on $s_{k}$, we clearly see that $f'$ is continuous up to the circle. The second derivative of $f$ is given by $$\begin{split}
f''(z)& = \frac{1}{m}\left( f'(z)\frac{P'(z)}{P(z)} + f(z)\frac{P''(z)P(z)-(P'(z))^{2}}{P(z)^2}\right)\\& = \frac{1}{m}\left( f'(z)\frac{P'(z)}{P(z)} + mf'(z)\frac{P(z)}{P'(z)}\frac{P''(z)P(z)-(P'(z))^{2}}{P(z)^2}\right) \\ &= \frac{f'(z)}{m}\left( (1-m)\frac{P'(z)}{P(z)} + m\frac{P''(z)}{P'(z)}\right).
\end{split}$$ With the intent of initially proving that $\|f'\|\leq \frac{n/m}{2}\|f\|$, we let $z_{0} \in \mathbb{T}$ be a maximum of $f'$. Consider first the case where $P(z_{0}) \neq 0$ and hence also $P'(z_{0}) \neq 0$ (otherwise $f'(z_0) =0$). The derivative of the map $\theta \mapsto \lvert f'(z_{0}e^{i\theta}) \rvert^{2}$ attains the value $0$ at $\theta = 0$. Thus $$z_{0}f''(z_{0})\overline{f'(z_{0})} = \overline{z_{0}}f'(z_{0})\overline{f''(z_{0})},$$ meaning that $z_{0}f''(z_{0})/f'(z_{0}) \in \mathbb{R}$. Combining this with the relation for $f''$, we obtain $$\label{eq:max}
\mathbb{R} \ni z_{0}\cdot \frac{f''(z_{0})}{f'(z_{0})} = \frac{1}{m}\left( (1-m)\text{Re}\left(z_{0}\frac{P'(z_{0})}{P(z_{0})}\right) + m\text{Re}\left(z_{0}\frac{P''(z_{0})}{P'(z_{0})}\right)\right).$$ For $z \in \mathbb{T}$, $$z\cdot \frac{P'(z)}{P(z)} = \sum_{k=1}^{n} \frac{z}{z-a_{k}} = \frac{n}{2} + i \sum_{k=1}^{n} t_{k}(z),$$ where $t_{k}(z) = \text{Im}\Big(z(z-a_{k})^{-1}\Big)$. If $\sum_{k=1}^{n} t_{k}(z_0) = 0$ then we attain from equation [\[eq:derivative\]](#eq:derivative){reference-type="eqref" reference="eq:derivative"} the inequality $$\|f'\| = |f'(z_0)| = \frac{n/m}{2}|f(z_0)|\leq \frac{n/m}{2}\|f\|.$$ We want to show that this is the only possible case (with one exception) and hence we assume that $\sum_{k=1}^{n} t_{k}(z_0) \neq 0$. We consider the quotients $$z^{2}\cdot\frac{P''(z)}{P(z)} = \left( \sum_{k=1}^{n} \frac{z}{z-a_{k}}\right)^{2} - \sum_{k=1}^{n} \left( \frac{z}{z-a_{k}}\right)^{2},$$ $$z\cdot\frac{P''(z)}{P'(z)} = z^{2}\frac{P''(z)}{P(z)} / z\frac{P'(z)}{P(z)} = \sum_{k=1}^{n} \frac{z}{z-a_{k}} - \frac{\sum_{k=1}^{n} \left( \frac{z}{z-a_{k}}\right)^{2}}{\sum_{k=1}^{n} \frac{z}{z-a_{k}}}.$$ From [\[eq:max\]](#eq:max){reference-type="eqref" reference="eq:max"} we see $$\begin{aligned}
&(1-m)\sum_{k=1}^{n} t_{k}(z_0)\\ + &m\left( \sum_{k=1}^{n} t_{k}(z_0) - \frac{n/2\sum_{k=1}^{n} t_{k}(z_0) - \sum_{k=1}^{n} t_{k}(z_0) \sum_{k=1}^{n} (1/4-t_{k}(z_0)^{2})}{n^{2}/4 + \left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2} }\right) = 0.\end{aligned}$$ Thus we obtain $$\frac{n^{2}}{4} + \left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2} = m\left( n/2-\sum_{k=1}^{n} (1/4-t_{k}(z_0)^{2}) \right).$$ Substituting this into the equation for $\text{Re}(z_{0}P''(z_0)/P'(z_0))$, we obtain $$\begin{split}
\text{Re}\left(z_{0}\frac{P''(z_0)}{P'(z_0)}\right) = \frac{n}{2} - \frac{n/2\sum_{k=1}^{n}(1/4-t_{k}(z_0)^{2}) + \left(\sum_{k=1}^{n} t_{k}(z_0) \right)^{2}}{n^{2}/4+\left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2}} \\ = \frac{n}{2} - \frac{n/2\left( n/2 - n^{2}/4m - \left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2}/m\right) + \left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2}}{n^{2}/4+\left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2}} \\ = \frac{n}{2} - \frac{(1-n/2m)n^{2}/4 + (1-n/2m)\left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2}}{n^{2}/4+\left( \sum_{k=1}^{n} t_{k}(z_0) \right)^{2}} = \frac{n}{2}\left(1+\frac{1}{m}\right)-1.
\end{split}$$ Finally, we obtain that $$z_{0}\frac{f''(z_{0})}{f'(z_{0})} = m^{-1}\left( (1-m)\frac{n}{2} + m\left( \frac{n}{2}\left(1+\frac{1}{m}\right)-1\right) \right) = \frac{n}{m}-1,$$ whenever $z_{0}$ is a maximum of $f'$ such that $\sum t_k(z_0)\neq 0$. It is easily seen that $(f')^{m} = Q$ is a polynomial of degree $n-m$. Thus $$\frac{z_0}{m}\frac{Q'(z_{0})}{Q(z_{0})} = z_{0}\frac{f''(z_{0})}{f'(z_{0})} = \frac{n}{m}-1.$$ Since $Q$ attains its maximum modulus at the same point, $z_{0}$, as $f'$, it follows from the condition for equality in Bernstein's inequality that $f' = cz^{n/m-1}$ (also, $n/m$ must be an integer). In this case one verifies by direct computation that the inequality holds. Otherwise, $$\sum_{k=1}^{n} t_{k}(z_{0}) = 0.$$ In conclusion we obtain the desired inequality $$\lVert f'\rVert = \lvert f'(z_{0}) \rvert = \frac{n/m}{2} \lvert f(z_{0}) \rvert \leq \frac{n/m}{2}\lVert f\rVert.$$ It remains to consider the case when $P(z_{0}) = 0$. Then $z_{0} = a_{k}$ for some $k$ and $s_{k} = 1$, since otherwise $f'(z_{0}) = 0$. Then the right-hand side of $$\frac{f''(z)}{f'(z)} = \frac{1}{m}\left( (1-m)\frac{P'(z)}{P(z)} + m\frac{P''(z)}{P'(z)}\right),$$ has a singularity at $a_{k}$ (unless $m = 1$, in which case it is nothing but the classical version of the Theorem), but the left-hand side does not since $f''$ is continuous at $a_{k}$ and $f'$ has a maximum. Hence this cannot happen (unless $f' \equiv 0$).
The lower bound $\|f'\|\geq \frac{n/m}{2}\|f\|$ follows from Lemma [Lemma 1](#lem:turan){reference-type="ref" reference="lem:turan"} which generalises Turán's inequality. ◻
**Lemma 1**. *Let $f(z) = c\prod_{k=1}^{N}(z-a_k)^{s_k}$ with $s_k\geq 1$ and $|a_k|\leq 1$. Then $$\|f'\|\geq \frac{\sum s_k}{2}\|f\|.$$*
*Proof.* By differentiating, we obtain $$f'(z) = f(z)\sum_{k=1}^{N}\frac{s_k}{z-a_k}$$ and the result follows if we can show that $$\left|\sum_{k=1}^{N}\frac{s_k}{z-a_k}\right|\geq \frac{\sum s_k}{2}\quad \text{for }z\in \mathbb{T}.$$ To see this, note that, with $|z|=1$, $$\begin{aligned}
\text{Re}\sum_{k=1}^{N}s_k\frac{z}{z-a_k} &= \frac{1}{2}\sum_{k=1}^{N}s_k\left(\frac{z}{z-a_k}+\frac{\overline{z}}{\overline{z-a_k}}\right) \\
& = \frac{1}{2}\sum_{k=1}^{N}s_k\frac{z(\overline{z}-\overline{a_k})+\overline{z}(z-a_k)}{|z-a_k|^2} \\
& = \frac{1}{2}\sum_{k=1}^{N}s_k\frac{2-2\text{Re}(z\overline{a_k})}{1-2\text{Re}(z\overline{a_k})+|a_k|^2} \geq \frac{1}{2}\sum_{k=1}^{N}s_k.
\end{aligned}$$ ◻
From the proof of Theorem [Theorem 1](#thm:2){reference-type="ref" reference="thm:2"} we gather the following corollary.
**Corollary 1**. *Let $$f(z) = c\prod_{k=1}^{N}(z-a_k)^{s_k}$$ with $|a_k| = 1$ and $s_k\geq 1$. Then $f'$ is maximal on the unit circle at any point $z$ for which $|f(z)| = \|f\|$ and in that case $$\left|\sum_{k=1}^{N}\frac{s_k}{z-a_k}\right| = \mathrm{Re}\sum_{k=1}^{N}s_k\frac{z}{z-a_k}= \frac{\sum_{k=1}^{N}s_k}{2}.$$*
Even though it is not difficult to conclude this result for polynomials using Lax's argument, to the best of our knowledge this observation appears to be new, even for the case where $f$ is a polynomial.
We now proceed to consider the case where $|a_k|\geq1$.
*Proof of Theorem [Theorem 2](#thm:3){reference-type="ref" reference="thm:3"}.* Let $f^\ast(z)$ denote the reciprocal of $f$ defined via $$f^\ast(z) = \overline{c}\prod_k(1-\overline{a_k}z)^{s_k}.$$ The reciprocal can equivalently be defined as an $m$th root of the reciprocal of $P$. Indeed, in this case $$P(z) = f(z)^m = c^m\prod_{k=1}^N(z-a_k)^{ms_k}$$ whence $$P^\ast(z) = \overline{c}^m\prod_{k=1}^N(1-\overline{a_k}z)^{ms_k}.$$ We see that $f^\ast(z) = P^\ast(z)^{1/m}$. Now it is easily verified that for $z\in \mathbb{T}$, $|P(z)| = |P^\ast(z)|$ and $|P'(z)|\leq |(P^\ast)'(z)|$. The last inequality holds since we are assuming that all the zeros of $P$ are exterior to $\mathbb{D}$. But this implies that for $z\in \mathbb{T}$, we have $|f(z)| = |f^\ast(z)|$ and $$|(f^\ast)'(z)| = \left|\frac{1}{m}P^\ast(z)^{1/m-1}(P^\ast)'(z)\right|\geq \left|\frac{1}{m}P^{1/m-1}(z)P'(z)\right| = |f'(z)|.$$ Note further that under the above assumptions, $$f^\ast(z) = \prod_{j}(z-a_{k_j})^{s_{k_j}}Q(z)$$ with $|a_{k_j}|=1$ where the indices $k_j$ correspond to those $s_{k_j}$ which are non-integer and $Q$ is a polynomial all of whose zeros are inside $\mathbb{D}$. We conclude that for any $|\zeta|=1$, it holds true that $$f(z)+\zeta f^\ast(z) = \prod_{j}(z-a_{k_j})^{s_{k_j}}\tilde{Q}(z),$$ where $\tilde{Q}$ is some polynomial which has all its zeros on the unit circle. To see this, note that $$\log \left|\frac{f^\ast}{f}\right|$$ is subharmonic on $\mathbb{D}$ and superharmonic on $\mathbb{C}\setminus\mathbb{D}$ while constantly equal to $0$ on the unit circle. Hence $f+\zeta f^\ast$ has all its zeros on $\mathbb{T}$, the same also being true for $\tilde{Q}$. Therefore we can apply Theorem [Theorem 1](#thm:2){reference-type="ref" reference="thm:2"} to the function $g = f+\zeta f^\ast$ where $\zeta$ is chosen such that $\arg f'(z_0) = \arg \zeta (f^\ast)'(z_0)$ for a point $z_0$ satisfying $|f'(z_0)| = \|f'\|$. But then $$\|g'\|\geq |g'(z_0)| = |f'(z_0)|+|(f^\ast)'(z_0)|\geq 2 |f'(z_0)| = 2\|f'\|.$$ Since we further have that $$\|g'\|= \frac{\sum s_k}{2}\|g\|\leq \left(\sum s_k \right)\|f\|$$ the result follows. ◻
# Concluding remarks

Theorem [Theorem 1](#thm:2){reference-type="ref" reference="thm:2"} is a direct generalisation of a result due to Szegő and Pólya that first appeared in print in [@Lax]. Theorem [Theorem 2](#thm:3){reference-type="ref" reference="thm:3"} generalises the Erdős--Lax inequality. However, we believe that this result should be valid without restrictions on the zeros in the exterior of the closed unit disc. In fact, we conjecture the following.
**Conjecture 1**. *Let $P(z) = c\prod_{k=1}^{N}(z-a_k)^{ms_k}$ where $c\in \mathbb{C}$, $m\in \mathbb{N}$, $s_k\geq 1$, and $|a_k|\geq 1$. Then for any branch $f(z) = P(z)^{1/m}$, we have $$\|f'\|\leq \frac{\sum_{k=1}^{N} s_k}{2}\|f\|$$ with equality if and only if $|a_k| = 1$ for all $k$.*
The reason that the techniques developed in this article do not seem to work to prove this case is that for such functions $f$ it is not guaranteed that $f+\zeta f^\ast$ from the proof of Theorem [Theorem 2](#thm:3){reference-type="ref" reference="thm:3"} is a fractional power of a polynomial. Instead, we believe that new ideas are needed to prove this conjecture.
| arxiv_math | {
"id": "2309.02047",
"title": "Erd\\H{o}s-Lax estimates for rational powers of polynomials",
"authors": "Alex Bergman and Olof Rubin",
"categories": "math.CV math.CA",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
The intersection graph of ideals associated with a commutative unitary ring $R$ is the graph $G(R)$ whose vertices are the non-trivial ideals of $R$ and in which there is an edge between distinct vertices if and only if their intersection is non-zero. In this paper, the structure of the strong resolving graph of $G(R)$ is characterized and, as an application, we evaluate the strong metric dimension of $G(R)$.
author:
- |
E. Dodongeh${}^{\mathsf{a}}$, A. Moussavi${}^{\mathsf{a}}$, R. Nikandish${}^{\mathsf{b}}$[^1] \
${}^{\mathsf{a}}$*Department of Mathematics, University of Tarbiat Modarres, Tehran, Iran*\
${}^{\mathsf{b}}$*Department of Mathematics, Jundi-Shapur University of Technology, Dezful, Iran*\
${}^{\mathsf{}}$\
$\mathsf{[email protected]}$$\mathsf{[email protected]}$ $\mathsf{[email protected]}$
title: |
**Strong resolving graph of\
the intersection graph in commutative rings[^2]**
---
# Introduction
The metric and strong metric dimension of a graph are two of the most applicable parameters, with several usages in robotics, computer science, chemistry, optimization, etc. Although these invariants have been computed for some classes of well-known graphs, they are still the subject of much research; for instance see [@abr; @bai; @ghala; @Ji; @Ve]. Among the reasons for considerable interest in characterizing these parameters for graphs associated with algebraic structures, one may cite the variety of uses and the complexity of the computations. Some examples in this direction may be found in [@bakht; @ali; @dol; @dol2; @ma; @nili; @nili2; @Pirzada1; @Pirzada2; @zai]. This paper has such a goal and aims to discuss the strong metric dimension in intersection graphs of ideals of commutative rings.
For graph theory terminology, we follow [@west]. Let $G=(V,E)$ be a graph with $V=V(G)$ as the vertex set and $E=E(G)$ as the edge set. A complete graph of order $n$ is denoted by $K_n$. Also, the distance between two distinct vertices $x$ and $y$ is denoted by $d(x,y)$. By diam$(G)$, we mean the diameter of $G$. Moreover, the subgraph induced by $V_0\subseteq V$ is denoted by $G[V_0]$. The open and closed neighborhood of the vertex $x$ are denoted by $N(x)$ and $N[x]$, respectively. The complement of $G$ is denoted by $\overline{G}$. The independence number and vertex cover number of the graph $G$ are denoted by $\beta(G)$ and $\alpha(G)$, respectively. Let $S=\{v_1,v_2,\dots,v_k\}$ be an ordered subset of $V$ and $v\in V\setminus S$. Then the representation vector of $v$ with respect to $S$ is denoted by $D(v|S)$ which is defined as follows: $D(v|S)=(d(v,v_1),d(v,v_2),\dots, d(v,v_k))$. An ordered subset $S\subseteq V(G)$ is called *resolving* provided that distinct vertices out of $S$ have different representation vectors with respect to $S$. The minimum cardinality of a resolving set is called the *metric dimension of* $G$ and denoted by $dim_M(G)$. Two different vertices $u,v$ *are mutually maximally distant* if $d(v, w) \leq d(u, v)$, for every $w \in N(u)$ and $d(u, w) \leq d(u, v)$, for every $w \in N(v)$. For a graph $G$, *the strong resolving graph of* $G$ is denoted by $G_{SR}$ and its vertex and edge sets are defined as follows: $V(G_{SR})= \lbrace u \in V (G)|\,there~exists ~v \in V (G) ~such~that ~u, v ~are ~mutually ~maximally ~distant \rbrace$ and $uv \in E(G_{SR})$ if and only if $u$ and $v$ are mutually maximally distant. Two vertices $u$ and $v$ are *strongly resolved* by some vertex $w$ if either $d(w, u)$ is equal to $d(w, v) + d(v, u)$ or $d(w, v)$ is equal to $d(w, u) + d(v, u)$. A set $W$ of vertices is a *strong resolving set of* $G$ if every two distinct vertices of $G$ are strongly resolved by some vertex of $W$ and a minimum strong resolving set is called a *strong metric basis* and its cardinality is *the strong metric dimension of* $G$. We denote the strong metric dimension of $G$ by $sdim(G)$.
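For small graphs, all of these notions can be computed by brute force directly from the definitions. The following Python sketch (assuming the networkx package; the cycle $C_6$ is just a test case) builds $G_{SR}$ from the mutually-maximally-distant relation and recovers $sdim(G)$ via the identity $sdim(G)=\alpha(G_{SR})$ recalled below.

```python
import itertools
import networkx as nx

def strong_resolving_graph(G):
    """Edges of G_SR join the pairs of mutually maximally distant vertices of G."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    H = nx.Graph()
    H.add_nodes_from(G)
    for u, v in itertools.combinations(G, 2):
        if all(d[u][w] <= d[u][v] for w in G[v]) and all(d[v][w] <= d[u][v] for w in G[u]):
            H.add_edge(u, v)
    return H

G = nx.cycle_graph(6)
H = strong_resolving_graph(G)                                # a perfect matching on 6 vertices
beta = len(max(nx.find_cliques(nx.complement(H)), key=len))  # independence number of H
print(len(H) - beta)                                         # vertex cover number of G_SR; prints 3
```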
Throughout this paper, all rings are assumed to be commutative with identity. The set of all non-trivial ideals of $R$ is denoted by $I(R)$. The ring $R$ is called *reduced* if it has no nilpotent elements other than $0_R$. For undefined notions in ring theory, we refer the reader to [@ati].
*The intersection graph of ideals of a ring* $R$, denoted by $G(R)$, is a simple and undirected graph whose vertex set is $I(R)$ and in which two distinct vertices are adjacent if and only if they have non-zero intersection. This graph was first introduced and studied by Chakrabarty et al. in [@Chak] and many beautiful properties of it were obtained. Later, many researchers investigated different aspects of this concept; see for instance [@akb; @mah; @xu]. In [@nik], the metric dimension of intersection graphs of rings was discussed. In this paper, we characterize the structure of the strong resolving graph of $G(R)$ and, as an application, $sdim(G(R))$ is computed.
# $G(R)_{SR}$ and $sdim(G(R))$; $R$ is reduced
In this section, for a given ring $R$, first it is shown that $sdim_M(G(R))$ is finite if and only if $|I(R)|< \infty$. Then the graph $G(R)_{SR}$ and its vertex cover number are determined, when $R$ is reduced. Finally, $sdim(G(R))$ is given in an explicit formula.
****Proposition** 1**. *Let $R$ be a ring that is not a field. Then $sdim_{M}(G(R))< \infty$ if and only if $|I(R)|< \infty$.*
****Proof.** 1**. * First assume that $sdim_M(G(R))$ is finite. Then $dim_M(G(R))$ is finite too, as $dim_M(G(R))\leq sdim_M(G(R))$. Let $W=\{W_1,\ldots,W_n\}$ be a metric basis for $G(R)$, where $n$ is a non-negative integer. By [@akb Theorem 2.1], there exist at most $2^n$ possibilities for $D(X|W)$, for every $X \in V(G(R))\setminus W$. Thus $|V(G(R))| \leq 2^{n}+n$ and hence $R$ has finitely many ideals. The converse is trivial.*
To compute $sdim_M(G(R))$, it is enough to consider rings with finitely many ideals, by Proposition [**Proposition** 1](#dimfinite){reference-type="ref" reference="dimfinite"}. Therefore, from now on, we suppose that all rings $R$ have finitely many ideals. If they are reduced, these rings are direct products of finitely many fields.
We state a series of lemmas to calculate $sdim(G(R))$.
****Lemma** 1**. *([@oller Theorem 2.1]) For any connected graph $G$, $sdim_M(G)=\alpha(G_{SR})$.*
****Lemma** 2**. *(Gallai's theorem) For any graph $G$ of order $n$, $\alpha(G)+\beta(G)=n$.*
The following remark introduces a notion which will be used several times.
****Remark** 1**. *Let $R \cong \prod_{i=1}^{n}R_{i}$, where $R_{i}$ is a ring for every $1\leq i \leq n$, and $I=I_{1}\times \cdots \times I_{n} \in V(G(R))$. Then by $I^{c}=I_{1}^{c}\times \cdots \times I_{n}^{c}$, we mean a vertex of $G(R)$ such that $I_{i}^{c}= 0$ if $I_{i}\neq 0$ and $I_{i}^{c} =R_i$ if $I_{i}= 0$, for every $1\leq i \leq n$. The ideal $I^{c}$ is called the complement of $I$. We note that different ideals may have a same complement.*
****Lemma** 3**. *Let $n\geq 2$ be a positive integer and $R\cong \prod_{i=1}^{n}\mathbb{F}_{i}$, where $\mathbb{F}_{i}$ is a field for every $1\leq i \leq n$. Then the following statements hold.\
$1)$ $V(G(R)_{SR})=V(G(R))$.\
$2)$ Suppose that $I, J \in V(G(R)_{SR})$, then $IJ \in E(G(R)_{SR})$ if and only if $IJ \notin E(G(R))$.*
****Proof.** 2**. *$1)$ For every $I=I_{1}\times\cdots \times I_{n}\in V(G(R))$, since $I\cap I^{c}= 0$, we deduce that $d(I,I^{c})=2=diam(G(R))$. Thus $I$ and $I^{c}$ are mutually maximally distant and so $I\in V(G(R)_{SR})$, i.e., $V(G(R)_{SR})=V(G(R))$.\
$2)$ First suppose that $IJ \notin E(G(R))$. Since $d(I,J)=2$, obviously $IJ\in E(G(R)_{SR})$.\
Conversely, suppose that $IJ \in E(G(R)_{SR})$, for some $I, J \in V(G(R)_{SR})$. If $I\sim J$, then since $I\neq J$, we have $I\sim J^{c}$ or $J\sim I^{c}$. Thus $d_{G(R)}(J,J^{c})=2 > 1=d(I,J)$ or $d_{G(R)}(I,I^{c})=2 > 1= d(I,J)$, and so $I, J$ are not mutually maximally distant, a contradiction. This completes the proof.*
Now, we have the following immediate corollary.
****Corollary** 1**. *Let $n\geq 2$ be a positive integer and $R\cong \prod_{i=1}^{n}\mathbb{F}_{i}$, where $\mathbb{F}_{i}$ is a field for every $1\leq i \leq n$. Then $G(R)_{SR}=\overline{G(R)}$.*
The next example explains Corollary [**Corollary** 1](#cor1){reference-type="ref" reference="cor1"} in case $n=3$.
****Example** 1**. * Suppose that $R\cong \prod_{i=1}^{3}\mathbb{F}_{i}$, where $\mathbb{F}_{i}$ is a field for every $1\leq i \leq 3$. Thus $|V(G(R))|=6$. Let\
$$V_{1}=\mathbb{F}_{1}\times \mathbb{F}_{2}\times 0, \,\,\,\,\ V_{2}=\mathbb{F}_{1}\times 0 \times \mathbb{F}_{3}, \,\,\,\,\ V_{3}= 0 \times \mathbb{F}_{2} \times \mathbb{F}_{3},$$\
$$V_{4}=0 \times 0 \times \mathbb{F}_{3},\,\,\,\,\ V_{5}=0\times \mathbb{F}_{2}\times 0 , \,\,\,\,\ V_{6}=\mathbb{F}_{1}\times 0 \times 0$$.*
Then $\overline{G(R)}$ and $G(R)_{SR}$ are identical.
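This example is small enough to be verified by a direct computation: identifying a non-trivial ideal of $\prod_{i=1}^{3}\mathbb{F}_{i}$ with its support, the sketch below (networkx assumed) rebuilds $G(R)$, takes its complement as $G(R)_{SR}$, and confirms the value $sdim(G(R))=2^{3}-2^{2}-1=3$ obtained in Theorem 1 below.

```python
import itertools
import networkx as nx

n = 3
# Non-trivial ideals of F_1 x ... x F_n correspond to proper non-empty subsets of {1,...,n}
# (their supports); two ideals intersect non-trivially iff their supports intersect.
supports = [frozenset(c) for r in range(1, n) for c in itertools.combinations(range(n), r)]

G = nx.Graph()
G.add_nodes_from(supports)
G.add_edges_from((I, J) for I, J in itertools.combinations(supports, 2) if I & J)

H = nx.complement(G)                                 # = G(R)_SR by Corollary 1
beta = len(max(nx.find_cliques(G), key=len))         # independence number of H = clique number of G
print(len(H) - beta, 2**n - 2**(n - 1) - 1)          # prints: 3 3
```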
****Lemma** 4**. *Let $n\geq 2$ be a positive integer and $R\cong \prod_{i=1}^{n}\mathbb{F}_{i}$, where $\mathbb{F}_{i}$ is a field for every $1\leq i \leq n$. Then $\beta(G(R)_{SR})= 2^{n-1}-1$.*
****Proof.** 3**. * By Lemma [**Lemma** 3](#lemma2g){reference-type="ref" reference="lemma2g"}, $V(G(R)_{SR})=V(G(R))$. Let $I=I_{1}\times\cdots \times I_{n}\in V(G(R)_{SR})$ and $NZC(I)$ be the number of zero components in $I$. Obviously, $1 \leq NZC(I) \leq n-1$. Assume that\
$A_{1}=\lbrace I \in V(G(R)_{SR})| NZC(I)=1\rbrace$,\
$A_{2}=\lbrace I \in V(G(R)_{SR})| NZC(I)=2\rbrace$,\
⋮\
and $A_{n-1}=\lbrace I \in V(G(R)_{SR})| NZC(I)=n-1\rbrace$.\
It is easily seen that $V(G(R))=\cup_{i=1}^{n-1}A_{i}$ and $A_{i}\cap A_{j}=\emptyset$, for every $i\neq j$ and so $\lbrace A_{1}, \ldots, A_{n-1}\rbrace$ is a partition of $V(G(R))$. Take the following facts into observation:\
**Fact 1.** Let $I,J\in A_i$, for some $1 \leq i\leq n-1$. If $I$ is not adjacent to $J$ in $G(R)_{SR}$, then by Lemma [**Lemma** 3](#lemma2g){reference-type="ref" reference="lemma2g"}, $I\sim J$ in $G(R)$.\
**Fact 2.** Let $1 \leq i\leq [\dfrac{n}{2}]-1$, for even $n$ and $1 \leq i \leq [\dfrac{n}{2}]$, otherwise. Then $S_i\subseteq A_i$ is the largest subset of $A_i$ such that $IJ\notin E(G(R)_{SR})$, for every $I,J\in S_i$ (indeed, $S_i$ is the largest independent subset of $A_i$ in $G(R)_{SR}[A_i]$). For every $I,J\in A_{i}$, we have $I\cap J \neq 0$, so by Fact 1, $I$ is not adjacent to $J$ in $G(R)_{SR}$. Thus $|S_i|=|A_{i}|= {n \choose i}$.\
**Fact 3.** Let $t=\dfrac{n}{2}$, where $n$ is even. Then for every $I \in A_t$, $I$ is only adjacent to $I^{c}$. Thus $|S_t|=\frac{|A_{t}|}{2}= \dfrac{{n \choose t}}{2}$, where $S_t\subseteq A_t$ is the largest subset of $A_t$ such that $IJ\notin E(G(R)_{SR})$, for every $I,J\in S_t$.\
Now let $S^{\prime}=\cup _{i=1}^{[t]}S_{i}$ and $[t] \leq i\leq n-1$. Then there exists $J\in S^{\prime}$ such that $I\cap J = 0$, for every $I\in A_{i}$. Thus $S^{\prime} \cap (\cup_{i=t+1}^{n-1}A_i)=\emptyset$. Furthermore, $|S^{\prime}|={n \choose 1} +\cdots+ {n \choose t}= 2^{n-1}-1$, where $n$ is odd and $|S^{\prime}|={n \choose 1} +\cdots+ {n \choose t-1}+ \dfrac{{n \choose t}}{2}= 2^{n-1}-1$, where $n$ is even. Hence $S^{\prime}$ is the largest independent subset of $V(G(R)_{SR})$ in $G(R)_{SR}$ and so $\beta(G(R)_{SR})=|S^{\prime}|= 2^{n-1}-1$. *
****Theorem** 1**. *Let $n\geq 2$ be a positive integer and $R\cong \prod_{i=1}^{n}\mathbb{F}_{i}$, where $\mathbb{F}_{i}$ is a field for every $1\leq i \leq n$. Then $sdim(G(R))=2^{n}- 2^{n-1}-1$.*
****Proof.** 4**. * The result follows from Lemmas [**Lemma** 1](#Oellermann){reference-type="ref" reference="Oellermann"}, [**Lemma** 4](#dimprod2){reference-type="ref" reference="dimprod2"}, Gallai's theorem and the fact that $|V(G(R)_{SR})|=2^{n}-2$.*
# $G(R)_{SR}$ and $sdim(G(R))$; $R$ is non-reduced
As it has been mentioned in Section $2$, we consider rings $R$ with finitely many ideals. Then there exists a positive integer $m$ such that $R\cong R_1\times\cdots \times R_m$, where $(R_{i},m_{i})$ is a local Artinian ring, for all $1\leq i \leq m$. If every $m_{i}$ is principal, then by [@ati Proposition 8.8], every $R_{i}$ is a principal ideal ring (PIR, for short) with finitely many ideals (**we suppose throughout this section that $|I(R_i)|=n_i$, for $1\leq i\leq m$**). Moreover, the ideals of every $R_{i}$ form an inclusion chain.
In this section, we study the structure of $G(R)_{SR}$ and compute $sdim(G(R))$ for such rings $R$.
First, the case in which no fields appear in decomposition of $R$ is investigated.
****Remark** 2**. *Suppose that $R\cong \prod_{i=1}^{m}{R}_{i}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$ and $m\geq 2$ is a positive integer. Assume that $I=I_{1}\times \cdots \times I_{m}$ and $J=J_{1}\times \cdots \times J_{m}$ are vertices of $G(R)$, where $I_{i}, J_{i} \in R_{i}$, for every $1\leq i\leq m$. Define the relation $\thicksim$ on $V(G(R))$ as follows: $I\thicksim J$, whenever "$I_{i}=0$ if and only if $J_{i}=0$", for every $1\leq i\leq m$. It is easily seen that $\thicksim$ is an equivalence relation on $V(G(R))$. By $[I]$, we mean the equivalence class of $I$. Let $X_{1}$ and $X_{2}$ be two elements of $[X]$. Since $X_{1}\thicksim X_{2}$, $X_{1}\cap X_{2}\neq 0$, i.e., $X_{1}$ and $X_{2}$ are adjacent. Moreover, $N(X_{1})=N(X_{2})$ and the number of these equivalence classes is $2^m-1$.*
****Lemma** 5**. *Suppose that $R\cong \prod_{i=1}^{m}{R}_{i}$, where $R_{i}$ is a PIR non-field, for every $1\leq i\leq m$ and $m\geq 2$ is a positive integer. Then the following statements hold:\
$1)$ $V(G(R)_{SR})=V(G(R))$.\
$2)$ For every $I, J \in V(G(R)_{SR})$, if $[I]=[J]$, then $IJ \in E(G(R)_{SR})$.\
$3)$ For every $I, J\in V(G(R)_{SR})$, if $[I]\neq [J]$, then $IJ \in E(G(R)_{SR})$ if and only if $IJ \notin E(G(R))$.*
****Proof.** 5**. *$1)$ It is enough to show that $V(G(R))\subseteq V(G(R)_{SR})$. Let $I=I_{1}\times\cdots \times I_{m}\in V(G(R))$, $NZC(I)$ be the number of zero components of $I$ and $A_{i}=\lbrace I \in V(G(R))| NZC(I)=i\rbrace$, for $0\leq i \leq m-1$. Then $V(G(R))=\cup _{i=0}^{m-1}A_{i}$. Suppose that $I=I_{1}\times\cdots \times I_{m}\in V(G(R))\setminus A_{0}$. Since $d(I,I^{c})=2=diam(G(R))$, we conclude that $I, I^{c}$ are mutually maximally distant and so $I\in V(G(R)_{SR})$. Now, let $I \in A_{0}$. Then $d(I, V)= d(J,V)=1$, for every $J \in A_{0}$ and $V\in V(G(R))\setminus \lbrace I, J \rbrace$. Thus $I, J$ are mutually maximally distant and so $I\in V(G(R)_{SR})$.\
$2)$ If $[I]=[J] \subset V(G(R)_{SR})$, then by Remark [**Remark** 2](#dimfin1){reference-type="ref" reference="dimfin1"}, $N(I)=N(J)$. Thus $I, J$ are mutually maximally distant and so $IJ \in E(G(R)_{SR})$.\
$3)$ If $IJ \notin E(G(R))$, then clearly $IJ \in E(G(R)_{SR})$. To prove the other side, suppose to the contrary, $IJ \in E(G(R))$. Since $[I]\neq [J]$, if $[I]=A_{0}$ or $[J]=A_{0}$, then $d(I,I^{c})=2> d(I,J)=1$ or $d(J,J^{c})=2> d(I,J)=1$. Thus $I, J$ are not mutually maximally distant and so $IJ \notin E(G(R)_{SR})$, else since $[I]\neq [J]$, we conclude that $I\sim J^{c}$ or $J \sim I^{c}$. Thus $d(I,I^{c})=2> d(I,J)=1$ or $d(J,J^{c})=2> d(I,J)=1$. Hence $I, J$ are not mutually maximally distant and $IJ\notin E(G(R)_{SR})$, a contradiction.*
****Lemma** 6**. *Suppose that $R\cong \prod_{i=1}^{m}{R}_{i}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$ and $m\geq 2$ is a positive integer. Then $G(R)_{SR}= K_{\Pi_{i=1}^{m}(n_{i}+1)-1}+ H$, where $H$ is a connected graph.*
****Proof.** 6**. *Using the notations in the proof of Lemma [**Lemma** 5](#dimprod4){reference-type="ref" reference="dimprod4"}, $V(G(R)_{SR})=V(G(R))$. If $I, J\in A_{0}$, then $[I]=[J]$ and so $IJ\in E(G(R)_{SR})$. Thus the induced subgraph $G(R)_{SR}[A_{0}]$ is complete. Also, by Lemma [**Lemma** 5](#dimprod4){reference-type="ref" reference="dimprod4"}, if $I\in A_{0}$ and $J\in V(G(R)_{SR})\setminus A_{0}$, then $IJ\notin E(G(R)_{SR})$. Furthermore, $|A_{0}|= \Pi_{i=1}^{m}(n_{i}+1)-1$. Thus $G(R)_{SR}[A_{0}]= K_{\Pi_{i=1}^{m}(n_{i}+1)-1}$.\
Next, we show that $H$ is a connected graph, where $V(H)=\cup _{i=1}^{m-1}A_{i}$. We have to find a path between arbitrary vertices $I=I_1\times \cdots \times I_m$ and $J=J_1\times \cdots \times J_m$ in $V(H)$. To see this, we consider the following cases:\
**Case 1.** $[I]= [J]$.\
If $[I]= [J]$, then by Lemma [**Lemma** 5](#dimprod4){reference-type="ref" reference="dimprod4"}, $I$ and $J$ are adjacent in $G(R)_{SR}$.\
**Case 2.** $[I]\neq[J]$.\
If $IJ\notin E(G(R))$, then by Lemma [**Lemma** 5](#dimprod4){reference-type="ref" reference="dimprod4"}, $IJ\in E(G(R)_{SR})$. Thus suppose that $IJ\in E(G(R))$, so $I \cap J\neq 0$. If $I \subset J$ or $J \subset I$, then there exists $1 \leq i \leq m$ such that $I_{i}=J_{i}=0$, as $I,J \notin A_0$. In this case $I \sim V \sim J$, where $V= 0\times \cdots \times 0 \times R_{i}\times 0 \times \cdots \times 0$. Thus we may assume that $I \nsubseteq J$ and $J \nsubseteq I$. Hence there exist $1\leq i\neq j \leq m$ such that $I_{i}\neq 0 \neq J_{j}$ and $I_{j}= 0 = J_{i}$. In this case $I \sim V_{1}\sim V_{2} \sim J$, where $V_{1}= 0\times \cdots \times 0 \times R_{j}\times 0 \times \cdots \times 0$ and $V_{2}= 0\times \cdots \times 0 \times R_{i}\times 0 \times \cdots \times 0$. Thus $H$ is a connected graph.*
The next example explains Lemma [**Lemma** 6](#dimprod5){reference-type="ref" reference="dimprod5"} in case $m=2$.
****Example** 2**. * Suppose that $R\cong R_{1}\times R_{2}$, where $R_{i}$ is a PIR non-field for $i=1, 2$. Let $I(R_{i})=\lbrace I_{i1}, I_{i2}\rbrace$, for $i=1,2$. Thus $|V(G(R))|=14$. Suppose that\
$$V_{1}=R_{1} \times 0, \,\,\,\,\ V_{2}= 0 \times R_{2}, \,\,\,\,\ V_{3}= I_{11} \times 0, \,\,\,\,\ V_{4}=I_{11} \times I_{21},$$\
$$V_{5}=I_{11}\times I_{22}, \,\,\,\,\ V_{6}=I_{11}\times R_{2}, \,\,\,\,\ V_{7}= I_{12} \times 0, \,\,\,\,\ V_{8}=I_{12} \times I_{21}, \,\,\,\,\ V_{9}=I_{12}\times I_{22},$$\
$$V_{10}=I_{12}\times R_{2}, \,\,\,\,\ V_{11}= 0 \times I_{21}, \,\,\,\,\ V_{12}=0 \times I_{22}, \,\,\,\,\ V_{13}=R_{1}\times I_{21}, \,\,\,\,\ V_{14}=R_{1}\times I_{22}$$.*
Then Figure [\[figure:fr1\]](#figure:fr1){reference-type="ref" reference="figure:fr1"} shows how $G(R)_{SR}$ is extracted from $G(R)$.
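The graphs of this example can also be generated by brute force. In the sketch below (networkx assumed), the ideals of each chain ring are encoded by the integers $0<1<2<3$, intersections are coordinatewise minima, and the edges of $G(R)_{SR}$ are taken from Lemma 5; the output agrees with Lemma 7 and Theorem 2 below, namely $\beta(G(R)_{SR})=2$ and $sdim(G(R))=12$.

```python
import itertools
import networkx as nx

chain = range(4)                                   # 0 = zero ideal, 1, 2 = I_i1, I_i2, 3 = R_i
vertices = [(a, b) for a in chain for b in chain if (a, b) not in [(0, 0), (3, 3)]]

def intersect(I, J):                               # non-zero intersection in R_1 x R_2
    return min(I[0], J[0]) > 0 or min(I[1], J[1]) > 0

def same_class(I, J):                              # same pattern of zero components (Remark 2)
    return (I[0] == 0) == (J[0] == 0) and (I[1] == 0) == (J[1] == 0)

SR = nx.Graph()
SR.add_nodes_from(vertices)
SR.add_edges_from((I, J) for I, J in itertools.combinations(vertices, 2)
                  if same_class(I, J) or not intersect(I, J))        # edge rule from Lemma 5

beta = len(max(nx.find_cliques(nx.complement(SR)), key=len))         # independence number of SR
print(len(vertices), beta, len(vertices) - beta)                     # prints: 14 2 12
```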
****Lemma** 7**. *Suppose that $R\cong \prod_{i=1}^{m}{R}_{i}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$ and $m\geq 2$ is a positive integer. Then $\beta(G(R)_{SR})=2^{m-1}$.*
****Proof.** 7**. *By Lemma [**Lemma** 6](#dimprod5){reference-type="ref" reference="dimprod5"}, $G(R)_{SR}= K_{\Pi_{i=1}^{m}(n_{i}+1)-1}+ H$, where $H=G(R)_{SR}[\cup_{i=1}^{m-1}A_{i}]$ is a connected graph. Thus $\beta(G(R)_{SR})=1+ \beta(H)$. We show that $\beta(H)=2^{m-1}-1$. Clearly, for every $I, J\in V(H)$ if $d_{G(R)}(I,J)=diam(G(R))$, then $IJ\in G(R)_{SR}$. Therefore, to find the largest independent set in $H$, we have to investigate cliques in $G(R)$. Let $1 \leq i \leq [\dfrac{m}{2}]-1$, for even $m$ and $1 \leq i \leq [\dfrac{m}{2}]$, for odd $m$ and $I, J \in A_{i}$. Then $I\cap J \neq 0$ and so $I$ and $J$ are adjacent, i.e., $G(R)[A_i]$ is a complete graph. Moreover, if $I\in A_{i}$ and $J\in A_{j}$ with $1 \leq i\neq j \leq [\dfrac{m}{2}]-1$, for even $m$ and $1 \leq i\neq j \leq [\dfrac{m}{2}]$, for odd $m$, then $I\cap J \neq 0$ and so $I$ and $J$ are adjacent. The above arguments show that $G(R)[A]$ is a complete graph, if one let $A=\cup_{i=1}^{[\frac{m}{2}]}A_{i}$, for odd $m$ and $A=\cup_{i=1}^{[\frac{m}{2}]-1}A_{i}$, for even $m$. Now, let $t=\frac{m}{2}$, where $m$ is even. Then $I$ and $J$ are adjacent in $G(R)$, for every $I\in A_{t}$ and $J\in A$. We note that if $I\in A_t$, then $I\cap V =0$ and so $IV\in E(G(R)_{SR})$, for every $V \in [I^c]$. This means that the largest independent set $P$ in $A_t$ contains exactly one element from either $[I]$ or $[I^c]$. Moreover, $|P|= \dfrac{{m\choose t}}{2}$.\
Now, we are ready to find the largest independent set in $H$. By Lemma [**Lemma** 5](#dimprod4){reference-type="ref" reference="dimprod4"}, if $[I]=[J]$, then $IJ \in E(G(R)_{SR})$, for all $I, J \in V(G(R)_{SR})$. Thus only one element of the equivalence class $[I]$ can be contained in the largest independent set in $G(R)_{SR}[A]$, for every $I\in A$. On the other hand, the number of equivalence classes in the subgraph induced by every $A_i$ is ${m\choose i}$. Consider the independent set*
*$$S= \lbrace I|\, I~is~ representative~ of~ equivalence~ class ~[I],~ for ~every ~I \in A \rbrace,$$*
*in $H$. Let $S^{\prime}=S$, for odd $m$ and $S^{\prime}=S\cup P$, for even $m$. Then $S^{\prime}$ is an independent set in $H$. Finally, if $m$ is odd (or even), then there exists $I\in S^{\prime}$ such that $I$ and $J$ are not adjacent in $G(R)$, for every $J\in V(H)\setminus A$ (or $J\in V(H)\setminus (A\cup A_{t})$). Hence $IJ\in E(G(R)_{SR})$ and so $S^{\prime}\cap (V(H)\setminus A)=\emptyset$ (or $S^{\prime}\cap V(H)\setminus (A\cup A_{t})=\emptyset$). Furthermore, $|S^{\prime}|={m \choose 1} +\cdots+ {m \choose t}= 2^{m-1}-1$, where $m$ is odd and $|S^{\prime}|={m \choose 1} +\cdots+ {m \choose t-1}+ \dfrac{{m \choose t}}{2}= 2^{m-1}-1$, where $m$ is even. Thus $S^{\prime}$ is the largest independent subset of $V(H)$ of order $2^{m-1}-1$ and so $\beta(H)=|S^{\prime}|= 2^{m-1}-1$.*
****Theorem** 2**. *Suppose that $R\cong \prod_{i=1}^{m}{R}_{i}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$ and $m\geq 2$ is a positive integer. Then $sdim(G(R))=\Pi_{i=1}^{m}(n_{i}+2)-2^{m-1}-2$.*
****Proof.** 8**. * By Lemma [**Lemma** 7](#dimprod6){reference-type="ref" reference="dimprod6"}, $\beta(G(R)_{SR})=2^{m-1}$. Since $|V(G(R)_{SR})|=\Pi_{i=1}^{m}(n_{i}+2)-2$, Gallai's theorem and Lemma [**Lemma** 1](#Oellermann){reference-type="ref" reference="Oellermann"} show that $sdim(G(R))=|V(G(R)_{SR})|-\beta(G(R)_{SR}) =\Pi_{i=1}^{m}(n_{i}+2)-2^{m-1}-2$. *
Finally, we investigate $sdim(G(R))$, where both of fields and non-fields appear in decomposition of $R$.
****Lemma** 8**. *Let $R\cong S\times T$ such that $S= \prod_{i=1}^{m}{R}_{i}$, $T=\prod_{j=1}^{n}\mathbb{F}_{j}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$, $\mathbb{F}_{j}$ is a field for every $1\leq j\leq n$ and $m,n\geq 1$ are positive integers. Then the following statements hold:\
$1)$ $V(G(R)_{SR})=V(G(R))$.\
$2)$ For every $I, J \in V(G(R)_{SR})$, if $[I]=[J]$, then $IJ \in E(G(R)_{SR})$.\
$3)$ For every $I, J\in V(G(R)_{SR})$, if $[I]\neq [J]$, then $IJ \in E(G(R)_{SR})$ if and only if $IJ \notin E(G(R))$.*
****Proof.** 9**. *It is enough to apply a similar argument to that of Lemma [**Lemma** 5](#dimprod4){reference-type="ref" reference="dimprod4"}.*
****Lemma** 9**. *Let $R\cong S\times T$ such that $S= \prod_{i=1}^{m}{R}_{i}$, $T=\prod_{j=1}^{n}\mathbb{F}_{j}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$, $\mathbb{F}_{j}$ is a field for every $1\leq j\leq n$ and $m,n\geq 1$ are positive integers. Then $G(R)_{SR}= K_{\Pi_{i=1}^{m}(n_{i}+1)2^{n}}+ H$, where $H$ is a connected graph.*
****Proof.** 10**. *By Lemma [**Lemma** 8](#dimprod7){reference-type="ref" reference="dimprod7"}, $V(G(R))=V(G(R)_{SR})$. Also, since $[I]=[J]$ for every $I, J\in A_{0}$, we have $IJ\in E(G(R)_{SR})$. Thus the induced subgraph $G(R)_{SR}[A_{0}]$ is a complete graph. Also, by Lemma [**Lemma** 8](#dimprod7){reference-type="ref" reference="dimprod7"}, for every $I\in A_{0}$ and for every $J\in V(G(R)_{SR})\setminus A_{0}$, $IJ\notin E(G(R)_{SR})$. Furthermore, $|A_{0}|= \Pi_{i=1}^{m}(n_{i}+1)2^{n}$. Thus $G(R)_{SR}[A_{0}]= K_{\Pi_{i=1}^{m}(n_{i}+1)2^{n}}$.\
To complete the proof, it is enough to apply a similar argument to that of Lemma [**Lemma** 6](#dimprod5){reference-type="ref" reference="dimprod5"}.*
****Lemma** 10**. *Let $R\cong S\times T$ such that $S= \prod_{i=1}^{m}{R}_{i}$, $T=\prod_{j=1}^{n}\mathbb{F}_{j}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$, $\mathbb{F}_{j}$ is a field for every $1\leq j\leq n$ and $m,n\geq 1$ are positive integers. Then $\beta(G(R)_{SR})=2^{m+n-1}$.*
****Proof.** 11**. *By Lemma [**Lemma** 9](#dimprod8){reference-type="ref" reference="dimprod8"}, $G(R)_{SR}= K_{\Pi_{i=1}^{m}(n_{i}+1)2^{n}} + H$, so $\beta(G(R)_{SR})= \beta(H)+ 1$. Also, by a similar argument to that of Lemma [**Lemma** 7](#dimprod6){reference-type="ref" reference="dimprod6"} and case $(1)$ of Lemma [**Lemma** 8](#dimprod7){reference-type="ref" reference="dimprod7"}, $S= \cup_{i=1}^{[\frac{m+n}{2}]}A_{i}$ when $m+n$ is odd, and $S= \cup_{i=1}^{[\frac{m+n}{2}]-1}A_{i}$ together with half of the members of $A_{\frac{m+n}{2}}$ when $m+n$ is even, is the largest independent subset of $V(H)$ and $|S|= 2^{m+n-1}-1$. Hence $\beta(G(R)_{SR})=|S| + 1= 2^{m+n-1}$.*
We close this paper with the following result.
****Theorem** 3**. *Let $R\cong S\times T$ such that $S= \prod_{i=1}^{m}{R}_{i}$, $T=\prod_{j=1}^{n}\mathbb{F}_{j}$, where $R_{i}$ is a PIR non-field for every $1\leq i\leq m$, $\mathbb{F}_{j}$ is a field for every $1\leq j\leq n$ and $m,n\geq 1$ are positive integers. Then $sdim(G(R))=\Pi_{i=1}^{m}(n_{i}+2)2^{n}- 2^{m+n-1}-2$.*
****Proof.** 12**. *By Lemma [**Lemma** 10](#dimprod9){reference-type="ref" reference="dimprod9"}, $\beta(G(R)_{SR})=2^{m+n-1}$. Since $|V(G(R)_{SR})|=\Pi_{i=1}^{m}(n_{i}+2)2^{n}-2$, Gallai's theorem and Lemma [**Lemma** 1](#Oellermann){reference-type="ref" reference="Oellermann"} show that $sdim(G(R))=|V(G(R)_{SR})|-\beta(G(R)_{SR}) =\Pi_{i=1}^{m}(n_{i}+2)2^{n}- 2^{m+n-1}-2$.*
G. Abrishami, M. A. Henning, M. Tavakoli, Local metric dimension for graphs with small clique numbers, Discrete Math. 345 (2022) 112763.
N. Abachi, M. Adlifard, M. Bakhtyiari, Strong metric dimension of a total graph of nonzero annihilating ideals, Bull. Aust. Math. Soc. 105 (2022) 431--439.
S. Akbari, R. Nikandish, M. J. Nikmehr, Some results on the intersection graphs of ideals of rings, J. Algebra Appl. 12 (2013) (04), 1250200.
F. Ali, M. Salman, S. Huang, On the commuting graph of dihedral group, Comm. Algebra 44 (2016) 2389--2401.
M. F. Atiyah, I. G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley Publishing Company (1969).
R. F. Bailey, P. Spiga, Metric dimension of dual polar graphs, Arch Math 120 (2023) 467---478.
I. Chakrabarty, S. Ghosh, T. K. Mukherjee, M. K. Sen, Intersection graphs of ideals of rings, Discrete Math. 309 (2009) 5381--5392.
D. Dolžan, The metric dimension of the total graph of a finite commutative ring, Canad. Math. Bull. 59 (2016) 748--759.
D. Dolžan, The metric dimension of the annihilating-ideal graph of a finite commutative ring, Bull. Aust. Math. Soc. 103 (2021) 362--368.
A. Ghalavand, S. Klavžar, M. Tavakoli, Graphs whose mixed metric dimension is equal to their order, 42 (2023) Article number: 210.
Z. Jiang, N. Polyanskii, On the metric dimension of cartesian powers of a graph, J. Combin. Theory Ser. A 165 (2019) 1--14.
X. Ma, L. Li, On the metric dimension of the reduced power graph of a finite group, Taiwanese J. Math. 26 (2022) 1--15.
A. Mahmoodi, A. Vahidi, R. Manaviyat, R. Alipour, Intersection graph of idealizations, Collectanea Mathematica (2023). https://doi.org/10.1007/s13348-023-00407-7
R. Nikandish, Investigating the metric dimension of an intersection graph in a commutative ring, Karafan 17 (2021) 35--44.
R. Nikandish, M. J. Nikmehr, M. Bakhtyiari, Strong resolving graph of a zero-divisor graph, Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A. Matemáticas 116 (2022) Article number: 116.
R. Nikandish, M. J. Nikmehr, M. Bakhtyiari, Metric and Strong Metric Dimension in Cozero-Divisor Graphs, Mediterr. J. Math. 18 (2021) Article number: 112.
O. R. Oellermann, J. Peters-Fransen, The strong metric dimension of graphs and digraphs, Discrete Appl. Math. 155 (2007) 356--364.
S. Pirzada, R. Raja, On the metric dimension of a zero-divisor graph, Comm. Algebra 45 (2017) 1399--1408.
R. Raja, S. Pirzada and S. P. Redmond, On Locating numbers and codes of zero-divisor graphs associated with commutative rings, J. Algebra Appl. 15 (2016) 1650014 (22 pages).
A. Sebö, E. Tannier, On metric generators of graphs, Math. Oper. Res. 29 (2004) 383--393.
T. Vetrík, The metric dimension of circulant graphs, Canad. Math. Bull. 60 (2017) 206--216.
D. B. West, Introduction to Graph Theory, 2nd ed., Prentice Hall, Upper Saddle River (2001).
J. Wu, L, Wang, W. Yang, Learning to compute the metric dimension of graphs, Appl. Mat. Comput. 432 (2022) 127350.
F. Xu, D. Wong, F. Tian, Automorphism group of the intersection graph of ideals over a matrix ring, Linear and Multilinear Algebra 70 (2022) 322--330.
L. Zhai, X. Ma, Y. Shao, G. Zhong, Metric and strong metric dimension in commuting graphs of finite groups, Comm. Algebra. 51 (2023) 1000--1010.
[^1]: Corresponding author
[^2]: *Key Words*: Strong resolving graph, Strong metric dimension, Intersection graph of ideals, Commutative ring.
| arxiv_math | {
"id": "2309.13284",
"title": "Strong resolving graph of the intersection graph in commutative rings",
"authors": "E. Dodongeh, A. Moussavi, R. Nikandish",
"categories": "math.CO",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
Let $(\Gamma,+,F)$ be a finitely generated ${\mathbb Z}[F]$-module where $F$ is an injective endomorphism of the abelian group $\Gamma$. We restrict ourselves to a finite automa presentable subclass, introduced by J. Bell and R. Moosa [@BM] and define an expansion containing the ${\mathcal F}$-sets defined by R. Moosa and T. Scanlon [@MS], where every automatic subset is definable.
address: |
Department of Mathematics (De Vinci)\
UMons\
20, place du Parc 7000 Mons, Belgium
author:
- Françoise Point
title: A decidable expansion of $(\Gamma,+,F)$ with the independence property.
---
# Introduction
Let $(\Gamma,+, F)$ be a finitely generated ${\mathbb Z}[X]$-module where $F$ is an endomorphism of $(\Gamma,+,0)$ and the action of $X$ is given by the action of $F$; we will denote these modules ${\mathbb Z}[F]$-modules. We will always assume that $F$ is injective.
Under the further assumption that these modules are finitely generated abelian groups, the class of finitely generated ${\mathbb Z}[F]$-modules we consider was introduced by R. Moosa and T. Scanlon in their work on the Mordell-Lang conjecture for semi-abelian varieties over finite fields. They enriched these finitely generated abelian groups $(\Gamma,+,F)$ by a collection of so-called $F$-sets (for instance, orbits under powers of $F$; see Definition [\[Fsets\]](#Fsets){reference-type="ref" reference="Fsets"}), retaining the stability of their first-order theory. Their motivation was to understand the nature of subsets of the form $X\cap \Gamma$, where $\Gamma$ is a finitely generated subgroup of a semiabelian variety $V$ over a finite field ${\mathbb F}_{q}$, $X$ is a closed subvariety of $V$ and $\Gamma$ is invariant under the action of the $q$-power Frobenius endomorphism $F$ [@MS].
Later J. Bell and R. Moosa showed that these $F$-sets ${\mathcal F}$ are automatic [@BM Theorem 6.9] and this aspect was further investigated by C. Hawthorne [@H].
We aim to define an expansion of $(\Gamma,+,F,{\mathcal F})$ which is finite-automaton (FA) presentable and in which definable subsets coincide with automatic ones. This expansion will automatically be decidable, but it will have the independence property.
When $\Gamma$ is the group of integers and $F$ the multiplication by a prime number, one recovers a classical result of Büchi (see section [3](#buchi){reference-type="ref" reference="buchi"}).
These finitely generated ${\mathbb Z}[F]$-modules have a finite generating set $\Sigma$ which is an $F^n$-spanning set, for some $n>0$, which allows them to be FA-presentable structures (see Definition [\[span\]](#span){reference-type="ref" reference="span"} and Fact [Fact 13](#rec){reference-type="ref" reference="rec"}). When $\Gamma$ is a finitely generated abelian group, C. Hawthorne has shown that this is equivalent to requiring that the eigenvalues of $F\otimes (1\restriction {\mathbb C})$ acting on the ${\mathbb C}$-vector space $\Gamma\otimes {\mathbb C}$ are strictly bigger than $1$ (see section [4](#main){reference-type="ref" reference="main"}).
We will first review the notion of FA-presentable structures in section [2](#FA){reference-type="ref" reference="FA"}, then placing ourselves in the special case where $\Gamma={\mathbb Z}$, we will recall the result of Büchi (section [3](#buchi){reference-type="ref" reference="buchi"}). Then we will describe some of the previous results on this subclass of finitely generated ${\mathbb Z}[F]$-modules $(\Gamma,+,F)$ (section [4](#main){reference-type="ref" reference="main"}). Finally we will define a sufficiently rich expansion of $(\Gamma,+,F)$ in order to use the fact that automatic sets coincide with regular languages [@PP].
# Finite-automaton presentable structures {#FA}
In order to be self-contained we briefly review below the notion of *automatic* structures. We begin by recalling the definition of finite automata.
## Finite automata {#aut}
There are several equivalent definitions of finite automata. A classical reference is the book of S. Eilenberg [@E] (see also [@PP chapter 1]).
A finite automaton ${\mathcal A}$ is a finite state machine given by the following data $(\Sigma, Q, q_0, F_a, T)$, where $\Sigma$ is a finite alphabet, $q_0$ the initial state, $Q$ a finite set of states, $F_a\subset Q$ a set of accepting states and $T$ a transition function from $Q\times \Sigma\to Q$ with the convention that $T(q,\lambda)=q$ for $\lambda$ the empty word.
Let $\Sigma^*$ be the set of all finite words on $\Sigma$. For $\sigma\in \Sigma^*$ and $a\in \Sigma$, one extends the transition function to a function from $Q\times \Sigma^*\to Q$, still denoting it by $T$, by setting $T(q,\sigma a):=T(T(q,\sigma),a)$.
The automaton ${\mathcal A}$ accepts $\sigma\in \Sigma^*$ if $T(q_0,\sigma)\in F_a$. (We will say that ${\mathcal A}$ works on $\Sigma$.)
A subset $L\subset \Sigma^*$ (or alternatively a language on $\Sigma$) is recognized/accepted by ${\mathcal A}$ if $L=\{\sigma\in\Sigma^*\colon T(q_0,\sigma)\in F_a\}$.
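To make the definitions concrete, here is a minimal Python sketch (an illustrative choice of automaton, not taken from the references) of a finite automaton over $\Sigma=\{0,1\}$ accepting exactly the words $10^k$, i.e. the binary expansions of the powers of $2$, a language that reappears in the next section.

```python
SIGMA = "01"
Q = {"q0", "q1", "dead"}
q0, F_a = "q0", {"q1"}
T = {("q0", "1"): "q1",  ("q0", "0"): "dead",
     ("q1", "0"): "q1",  ("q1", "1"): "dead",
     ("dead", "0"): "dead", ("dead", "1"): "dead"}

def accepts(word):
    state = q0
    for a in word:                     # T(q, sigma a) := T(T(q, sigma), a)
        state = T[(state, a)]
    return state in F_a

print([n for n in range(1, 70) if accepts(format(n, "b"))])   # [1, 2, 4, 8, 16, 32, 64]
```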
Now let us recall the notion of regular languages on $\Sigma$. Given two words $\sigma, \tau \in \Sigma^*$, we denote their concatenation by $\sigma^{\frown}\tau$.
**Definition 1**. *The class $\mathcal R$ of *regular* languages is the smallest class of languages on $\Sigma$ that contains all finite languages and is closed under the following operations (called regular operations): if $L_0, L_1, L_2\in \mathcal R$, then*
1. *$L_1\cup L_2\in \mathcal R$,*
2. *$L_1^{\frown}L_2:=\{\sigma^{\frown}\tau\colon \sigma\in L_1, \tau\in L_2\}\in \mathcal R$,*
3. *$L_0^*:=\{\sigma_1^{\frown}\ldots^{\frown}\sigma_n\colon n\in {\mathbb N}, \sigma_i\in L_0, 1\leq i\leq n\}\in \mathcal R.$*
*A regular language $L$ is of *complexity $\leq c$* if $L$ can be obtained starting from $\{a\}$, $a\in \Sigma$ and $\{\lambda\}$, using $c$ times these regular operations.*
In the following, we are going to use Kleene's theorem on the correspondence between subsets of $\Sigma^*$ accepted (or recognized) by a finite automaton and regular languages on $\Sigma$.
**Fact:** [@PP Chapter 1, Theorem 4.4] A subset $L\subset \Sigma^*$ is regular iff it is accepted by a finite automaton ${\mathcal A}$ working on $\Sigma$.
## Automatic structures {#automatic}
Before recalling the notion of automatic structures, we need to recall the operation of convolution [@Ho section 3]. Given two elements $u, v\in \Sigma^*$ and an additional symbol $\sharp$, with $u=u_{1}\ldots u_{k}$ and $v=v_{1}\ldots v_{m}$, $k\leq m$, we define $u\star v$ as the word of length $m$ on $(\Sigma\cup\{\sharp\})^2$ of the form $(u_{1},v_{1})\ldots (u_{k},v_{k})(\sharp,v_{k+1})\ldots (\sharp,v_{m})$.
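Concretely, the convolution reads the two words in parallel, padding with the extra symbol $\sharp$; a minimal sketch (which pads whichever word is shorter) is:

```python
PAD = "#"   # stands for the padding symbol

def convolution(u, v):
    """u * v: pad the shorter word with '#' and read the two words letterwise in parallel."""
    m = max(len(u), len(v))
    return list(zip(u.ljust(m, PAD), v.ljust(m, PAD)))

print(convolution("ab", "xyz"))   # [('a', 'x'), ('b', 'y'), ('#', 'z')]
```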
**Definition 2**. *[@Ho] Let ${\mathcal L}$ be a finite relational language. A first-order (countable) ${\mathcal L}$-structure ${\mathcal M}$ is automatic if there is a finite alphabet $\Sigma$ and an effective procedure to represent the domain of ${\mathcal M}$ by a regular language $D\subset \Sigma^*$ and using convolution, to represent the graph of each $n_{i}$-relation, $1\leq n_i$, $i\in I$, $I$ a finite set, by a regular subset of $((\Sigma\cup\{\sharp\})^{n_{i}})^*$.*
*When ${\mathcal L}$ is a finite but not a relational language, we will say that the countable ${\mathcal L}$-structure is automatic if the corresponding relational structure is automatic, namely we replace every function by its graph. (Of course it changes the notion of substructures but will not matter when considering decidability issues.)*
It follows that in an automatic structure, whether an atomic formula holds in ${\mathcal M}$ can be checked by finite automata [@Ho Lemma 3.6] (which is the definition of an FA-presentable structure given in [@N Section 2.1], where function symbols are allowed in ${\mathcal L}$). Note that being FA-presentable for a group or a ring is a strong restriction on its algebraic structure [@N sections 3, 4] (however any finitely generated abelian group is FA-presentable). This is reflected by the following result, which connects automaticity and decidability. (Hodgson attributes the paternity of that result to R. Büchi and C. Elgot).
**Fact 3**. *[@Ho Theorem 3.5] If a countable structure ${\mathcal M}$ is automatic, then it is decidable.*
The proof of the above result uses the closure of the class of finite automata by the operations corresponding to the boolean operations on formulas and the existential quantification. One then deduces that any definable subset in ${\mathcal M}$ is recognized by a finite automaton. The result follows from decidability of the emptiness problem for finite automata.
# Definability and finite automata in the natural numbers {#buchi}
Each natural number $n$ can be written in base $d\geq 2$, and hence as a finite word over the alphabet $\Sigma:=\{0,\ldots,d-1\}$. A subset of ${\mathbb N}$ is $d$-automatic if it is recognized by a finite automaton ${\mathcal A}$ working on $\Sigma$.
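For example, the set $P_2$ of powers of $2$ is $2$-automatic: the binary expansion of a natural number represents a power of $2$ if and only if it contains exactly one occurrence of the digit $1$, and the words on $\{0,1\}$ with this property form a regular language.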
Let $d=2$. The study of $2$-automatic subsets of ${\mathbb N}$ has been marked by a result of R. Büchi who showed that the sets definable in $({\mathbb N},+,V_2)$, where $V_2(n)$ is the highest power of $2$ dividing $n$, are exactly the $2$-automatic sets. (The original statement of R. Büchi is about $({\mathbb N},+,P_2)$, where $P_2$ is the set of powers of $2$, which is incorrect as observed, for instance, by Semenov. In his review of Büchi's paper, McNaughton suggested considering instead the structure $({\mathbb N},+,\epsilon_2)$, where $\epsilon_2(n,m)$ holds iff $n$ is a power of $2$ which occurs in the binary expansion of $m$. This last structure is interdefinable with $({\mathbb N},+,V_2)$; a modified proof of Büchi's result with $V_2$ can be found in [@Br].)
Note that $({\mathbb Z},+,<,V_2)$ has IP (one can define any finite subset of $P_2$ using $\epsilon_2$). But the expansion $({\mathbb Z},+,<,P_2)$ is NIP [@LP Corollary 2.33], [@H1 Theorem 6.1]. This can be shown using the following model-theoretic description of definable sets or a criterion due to Chernikov and Simon on NIP pairs of structures.
**Fact 4**. *[@D] The theory of $({\mathbb Z},+,<,P_2)$ is model-complete in the language $\{+,-,<,\mod_n,\lambda_2\colon n\in {\mathbb N}\setminus\{0,1\}\}$, where $\lambda_2(x)$ is the largest power of $2$ smaller than $x$.*
If one forgets about the order and considers the reduct $Th({\mathbb Z},+,P_2)$, we have the following result. (We postpone until the next section the definition of elementary $2$-sets and, more generally, of $F$-sets, but we recall here the notion of a stable subset.)
**Definition 5**. *[@H Definition 4.1], [@H1] A subset $A\subset {\mathbb Z}$ is stable if the formula $x+y\in A$ is stable in $Th({\mathbb Z},+,A)$.*
**Fact 6**. *[@MS] The theory $Th({\mathbb Z},+,P_2)$ is stable.*
*[@H] Let $A\subset {\mathbb Z}$. Then $A$ is $2$-automatic and stable in $({\mathbb Z},+)$ iff $A$ is a boolean combination of elementary $2$-sets and cosets of subgroups of ${\mathbb Z}$ iff $A$ is definable in $({\mathbb Z},+,P_2)$.*
# Finitely generated abelian groups with a distinguished injective endomorphism {#main}
In their work on the Mordell--Lang conjecture for semiabelian varieties over finite fields, R. Moosa and T. Scanlon studied certain finitely generated abelian groups $\Gamma$ endowed with an endomorphism $F$ [@MS Section 2] and equipped them with a collection of sets called $F$-sets, for instance orbits under powers of $F$. They called the resulting structure on the abelian group $\Gamma$ an $F$-structure and showed it admits quantifier elimination and is stable (see [@MS Theorem A] for a precise statement).
Let $(\Gamma,+,0)$ be a finitely generated ${\mathbb Z}[F]$-module (namely a ${\mathbb Z}[X]$-module with the action of $X$ given by an endomorphism $F$ of $(\Gamma,+,0)$). We further assume that $F$ is injective.
Note that the structure $(\Gamma,+,F)$, obtained by adding a function symbol for $F$, and $(\Gamma,+,0)$ endowed with the usual ${\mathbb Z}[F]$-module language (namely, adding a scalar multiplication for each element of ${\mathbb Z}[F]$) are bi-interpretable. In particular the theory of $(\Gamma,+,F)$ is stable.
Let $\Sigma$ be a finite symmetric generating subset of $\Gamma$ as a ${\mathbb Z}[F]$-module, containing $0$. These will be the assumptions on $(\Gamma,+,F)$ from now on.
We say that the finite word $\sigma:=a_0\cdots a_n$ represents $g\in \Gamma$ if $g=a_0+F(a_1)+\ldots+F^n(a_n)$. We use the notation $g=[\sigma]_F$.
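For illustration, take $\Gamma={\mathbb Z}$ and $F$ the multiplication by $2$, with $\Sigma=\{-1,0,1\}$ (a finite symmetric generating set containing $0$). Then $[\sigma]_F=a_0+2a_1+\cdots+2^na_n$, so for instance $$[1\,0\,1]_F=1+4=5,\qquad [1]_F=[(-1)\,1]_F=1,$$ the last equality showing that an element of $\Gamma$ may be represented by several different words.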
Here we adopt the definition of $F$-spanning subset given in [@H], which is slightly more general than the one given in [@BM Definition 5.1], or in [@H2 Definition 2.4].
**Definition 7**. *[@H Definition 2.12] [\[span\]]{#span label="span"} Let $\Sigma$ be as above. Then $\Sigma$ is an $F$-spanning set for $\Gamma$ if*
1. *(C1) any element of $\Gamma$ can be represented by an element of $\Sigma^*$,*
2. *(C2) if $a_1, a_2, a_3\in \Sigma$, then $a_1+a_2+a_3\in \Sigma+F(\Sigma)$,*
3. *(C3) if $a_1, a_2\in \Sigma$ are such that for some $b\in \Gamma$, $a_1+a_2=F(b)$, then $b\in \Sigma$.*
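In the illustration $\Gamma={\mathbb Z}$, $F(x)=2x$, the set $\Sigma=\{-1,0,1\}$ is an $F$-spanning set: (C1) holds since every integer can be written as $a_0+2a_1+\cdots+2^na_n$ with digits $a_i\in\{-1,0,1\}$; (C2) holds since $$\{-3,\ldots,3\}=\{-1,0,1\}+2\cdot\{-1,0,1\}=\Sigma+F(\Sigma);$$ and (C3) holds since the only even values of $a_1+a_2$ with $a_1, a_2\in \Sigma$ are $-2, 0, 2$, whose halves lie in $\Sigma$. (This is in line with the eigenvalue criterion recalled below, the unique eigenvalue of $F$ being $2$.)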
**Definition 8**. *[@BM Definition 6.4], [@H2 Definition 2.5] A subset $A\subset \Gamma$ is $F$-automatic if there is an $m>0$ and an $F^m$-spanning set $\Sigma$ such that $\{\sigma\in \Sigma^*\colon [\sigma]_F\in A\}$ is a regular language. (One easily extends the definition to subsets of $\Gamma^n$, $n>1$.)*
Then one shows that if a subset $A$ of $\Gamma$ is $F$-automatic, then for any $n>0$ and any $F^n$-spanning subset $\Sigma$, $\{\sigma\in \Sigma^*\colon [\sigma]_{F^n}\in A\}$ is a regular language [@H2 Proposition 2.6].
When $\Gamma$ is a finitely generated group, C. Hawthorne has shown the following characterization of the existence of a spanning subset. Let $F$ be an injective endomorphism of $\Gamma$. Then for some $n>0$ there is an $F^n$-spanning set for $\Gamma$ if and only if the eigenvalues of the endomorphism induced by $F$ on the ${\mathbb C}$-vector space $\Gamma\otimes {\mathbb C}$ are of modulus $>1$ [@H2 Theorem 3.12].
In the general case, C. Hawthorne gave another characterization in terms of existence of *well-behaved* length functions [@H Theorem 2.43] (that he will use to show that $F$-sets are $F$-automatic (see Fact [Fact 13](#rec){reference-type="ref" reference="rec"})).
**Remark 9**. *Note that in the definition of automatic structure, one chooses a set of representatives for the elements of the domain of the structure (and similarly for the predicates). In the definition above for a subset $A$ of $\Gamma$ to be $F$-automatic, one needs to check that all the finite words which represent an element of $A$ form a regular language. However, we have the following result of Bell and Moosa, which says that if $L$ is a regular language on $\Sigma$, then the subset $[L]_F$ of $\Gamma$ consisting of $\{[\sigma]_F\colon \sigma\in L\}$ is $F$-automatic [@BM Proposition 6.8 (2)]. This is a consequence of the fact that the equivalence relation $E$ defined between two words $\sigma, \tau$ (of the same length) by $E(\sigma,\tau)$ if and only if $[\sigma]_{F}=[\tau]_{F}$, is regular.*
*We have seen that we may represent an element $g\in \Gamma$ by words of different lengths; to compare such representations we use the extra symbol $\sharp$ and the operation of convolution.*
**Definition 10**. *[\[Fsets\]]{#Fsets label="Fsets"}[@MS Definition 2.3] Let $a\in \Gamma$, then set $K(a,F):=\{a+F(a)+\cdots+F^n(a)\colon n\in {\mathbb N}\}$.*
*An *elementary $F$-set* of $\Gamma$ is a subset of $\Gamma$ of the form $a_0+K(a_1,F^{n_1})+\ldots+K(a_m,F^{n_m})$, for some $a_0,\ldots,a_m\in \Gamma$ and $n_1,\ldots,n_m\in {\mathbb N}$. For instance, the $F$-orbit of $a$: $\{F^n(a): n\in {\mathbb N}\}=a+K(Fa-a,F)$ is an elementary $F$-set.*
*An *$F$-set* of $\Gamma$ is a finite union of sets which can be written as a sum of an elementary $F$-set of $\Gamma$ and an $F$-invariant subgroup of $\Gamma$ (namely ${\mathbb Z}[F]$-submodules).*
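In the illustration $\Gamma={\mathbb Z}$, $F(x)=2x$, we have $$K(1,F)=\{1+2+\cdots+2^n\colon n\in {\mathbb N}\}=\{2^{n+1}-1\colon n\in {\mathbb N}\},$$ so that $1+K(1,F)=\{2^{n+1}\colon n\in {\mathbb N}\}$: the set of powers of $2$ greater than $1$ is an elementary $F$-set of ${\mathbb Z}$.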
Let ${\mathcal F}(\Gamma)$ (respectively ${\mathcal F}(\Gamma^n)$) be the collection of $F$-sets of $\Gamma$ (respectively $\Gamma^n$). Consider the structure whose domain is $\Gamma$, equipped with a predicate interpreted by each element of $\bigcup_{n>0} {\mathcal F}(\Gamma^n)$. Denote the resulting structure by $(\Gamma, {\mathcal F})$. (Note that it expands $(\Gamma,+,F)$.) Assume $\Gamma$ is a finitely generated abelian group and $\bigcap_{i\in {\mathbb N}} (F^i)=\{0\}$, where $(F^i)$ denotes the ideal generated by $F^i$ in ${\mathbb Z}[F]$. Then Moosa and Scanlon showed that the theory of $(\Gamma,{\mathcal F})$ is stable [@MS Theorem 6.11] (see also [@MS Remark 6.12] where a proof of superstability is sketched).
Now we can state the result analogous to Fact [Fact 6](#stableZ){reference-type="ref" reference="stableZ"} for the ${\mathbb Z}[F]$-module $(\Gamma,+,F)$, where $\Gamma$ is finitely generated; but first we need to recall the notion of *sparse*. As before, a subset $A$ of $\Gamma$ is stable if the formula $x+y\in A$ is a stable formula.
**Definition 11**. *Let $L\subset \Sigma^*$; then $L$ is *sparse* if $L$ is regular and the number of words in $L$ of length at most $n$, $n\in {\mathbb N}$, is bounded by a polynomial function of $n$. (For instance the set of binary expansions of powers of $2$ is sparse.)*
*A subset $A\subset \Gamma$ is $F$-sparse if $A=[L]_{F^r}$ for some sparse $L\subset \Sigma^*$ with $\Sigma$ an $F^r$-spanning set, $r>0$.*
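For instance, with $\Gamma={\mathbb Z}$, $F(x)=2x$ and the $F$-spanning set $\Sigma=\{-1,0,1\}$ as above, the language $L=\{0^n1\colon n\in {\mathbb N}\}$ is sparse (it contains at most one word of each length) and $$[0^n1]_F=2^n,$$ so $P_2=[L]_F$ is an $F$-sparse subset of ${\mathbb Z}$.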
We denote the length of $\sigma\in \Sigma^*$ by $\ell(\sigma)$. Let $\ell_{\Sigma}:\Gamma\to {\mathbb N}$ be the map sending $g\in \Gamma$ to the length of the shortest word $\sigma=a_0\ldots a_n\in \Sigma^*$ such that $[\sigma]_F=g$. We will say that a word $\sigma$ has support $1$ if there is only one occurrence in $\sigma$ of elements of $\Sigma\setminus\{0\}$.
**Fact 12**. *[@MS], [@H2 Theorem 6.3] Assume that $\Gamma$ is a finitely generated group and that it has an $F^m$-spanning subset for some $m>0$. Let $A\subset \Gamma$ be $F$-sparse and stable in $\Gamma$. Then $A$ is a boolean combination of elementary $F$-sets.*
Now starting with the following result of Bell and Moosa ([@BM Lemma 6.7, Theorem 6.9]), revisited by C. Hawthorne, we aim to define an expansion of $(\Gamma,+,F)$ where the definable subsets are exactly the $F$-automatic ones.
**Fact 13**. *[@H Proposition 2.32, Theorem 2.54] Let $\Gamma$ be a finitely generated ${\mathbb Z}[F]$-module and suppose it has an $F^m$-spanning set for some $m>0$. Then $(\Gamma,+,F)$ is FA-presentable. Moreover every $F$-set of $\Gamma^n$, $n>0$, is $F$-automatic.*
First let us make a few observations on $\Sigma$. Note that if $a\in \Sigma\cap F(\Gamma)$, then by condition (C3), $a=F(b)$ for some $b\in \Sigma$. So w.l.o.g. we may assume that $\Sigma\subset (\Gamma\setminus F(\Gamma))\cup\{0\}$ **and it will be our assumption from now on**. Then we pick a subset $\Sigma_0$ of $\Sigma$ containing $0$ and exactly one representative of each coset of $F(\Gamma)$ met by $\Sigma$. Then for $a_0\in \Sigma$, there is $a\in \Sigma_0$ such that $a_0-a\in F(\Gamma)$. So by condition (C3), there is $b\in \Sigma$ such that $a_0+(-a)=F(b)$, namely $a_0=a+F(b)$. More generally, using this subset $\Sigma_0$, we want to represent any element of $\Gamma$ as follows.
**Lemma 14**. *Let $g=a_0+\ldots+F^n(a_n)\in \Gamma\setminus\{0\}$ with $a_i\in \Sigma$, $0\leq i\leq n$. Then there is another representation of $g$ of the form $F^m(b_m)+\ldots+F^{n}(b_n)+F^{n+1}(b)+F^{n+2}(b')$ with $b_m\in \Sigma_0\setminus\{0\}$, $b_i\in \Sigma$ for $m\leq i\leq n$, $b, b'\in \Sigma$, and where $m$ is the smallest index such that $a_i\neq 0$.*
Proof: Let $g=a_0+F(a_1)+\ldots+F^n(a_n)\neq 0$. We will show the following statement: $g=F^m(b_m)+\ldots+F^{n}(b_n)+F^{n+1}(b)+F^{n+2}(b')$ with $b_m\in \Sigma_0\setminus\{0\}$, $b_i\in \Sigma_0$ for $m\leq i\leq n$, $b, b'\in \Sigma$, where $m$ is the smallest index such that $a_i\neq 0$.
By the choice of $\Sigma_0$, there is $b_{0}\in \Sigma_0, a_1'\in \Sigma$ such that $a_0=b_0+F(a_1')$.
Then we rewrite $g$ as $b_0+F(a_1+a_1')+F^2(a_2)+\ldots+F^n(a_n)$. By condition (C2), there are $u_1, u_2\in \Sigma$ such that $a_1+a_{1}'=u_{1}+F(u_{2})$. Now let $b_{1}\in \Sigma_{0}$ be such that $u_{1}=b_1+F(a_2')$ with $a_2'\in \Sigma$.
Then we rewrite $g$ as $b_0+F(b_1)+F^2(a_2+u_{2}+a_{2}')+\ldots$. Then write $a_2+u_2+a_2'=u_3+F(u_4)$, $u_3, u_4\in \Sigma$, and proceed as before, namely replace $u_3$ by $b_2+F(a_3')$ with $b_{2}\in \Sigma_{0}$. So at stage $m\leq n$, we obtain $g=b_0+F(b_1)+\ldots+F^{m-1}(b_{m-1})+F^{m}(a_m+u+u')+ \ldots$, with $u, u'\in \Sigma$. Apply condition (C2) to write $a_m+u+u'$ as $v+F(w)$ for some $v, w\in \Sigma$, and $v$ as $b_{m}+F(v')$ with $b_{m}\in \Sigma_{0}$, $v'\in \Sigma$. So we have replaced $F^{m}(a_m+u+u')$ by $F^m(b_{m})+F^{m+1}(w+v')$. Either $m=n$ and we are left with a summand of the form $F^{n+1}(w+v')$ that we can write as $F^{n+1}(w')+F^{n+2}(w'')$, for some $w', w''\in \Sigma$, or $m<n$ and we have to deal with an expression of the same form, namely $F^{m+1}(a_{m+1}+w+v')$.$\quad\Box$
From now on, we will also make the convention that when we write $g=[\sigma]_F$ with $\sigma\in \Sigma^*$, then the rightmost letter of $\sigma$ is not equal to $0$. For ${\bold{c}}\in \Sigma\setminus\{0\}$, denote by $\Sigma^*{\bold{c}}$ the subset of $\Sigma^*$ of all finite words ending with ${\bold{c}}$. With this convention, we have that $\Gamma\setminus\{0\}=\bigcup_{{\bold{c}}\in \Sigma\setminus\{0\}} [\Sigma^*{\bold{c}}]_F$.
**Notation 15**. *Let $I_F:=\{F^n(a)\colon a\in \Sigma\setminus\{0\}, n\in \omega\}$. It is easily seen that this subset $I_{F}$ (of $\Gamma$) is automatic. Indeed in the $F$-representation of the elements of $\Gamma$, $I_{F}$ is the set of elements which can be represented by a (finite) word of support $1$.*
*For each $a\in \Sigma\setminus\{0\}$, we will also need to single out the orbit of $a$ under $F$: $I_{F,a}:=\{F^n(a)\colon n\in \omega\}$. This subset $I_{F,a}$ (of $\Gamma$) is again automatic and $I_{F}=\bigcup_{a\in \Sigma\setminus\{0\}} I_{F,a}$.*
Recall that we assumed that $F$ is injective and that $\Sigma\setminus\{0\}\subset \Gamma\setminus F(\Gamma)$, so one cannot have $F^n(a)=F^{m}(b)$ with $a, b \in \Sigma\setminus\{0\}$ and $m\neq n$. This allows us to define the following preorder $\preceq$ on $I_F$.
**Definition 16**. *We set $F^i(a)\preceq F^j(b)$ if $i\leq j$ and $F^i(a)\prec F^j(b)$ if $i<j$, with $a, b\in \Sigma\setminus\{0\}$, (equivalently $F^i(a)\prec F^j(b)$ if ($F^i(a)\preceq F^j(b)$ and $\neg (F^j(b)\preceq F^i(a))$)). Furthermore we set $I_F\prec 0$.*
We will use the notation $u\sim v$, for $u, v\in I_{F}$ to mean that $u\preceq v$ and $v\preceq u$.
For convenience, we also define the partial function $F^{-1}$ on $I_F\setminus \Sigma$ as follows: let $u\in I_F\setminus \Sigma$, then $F^{-1}(u)=v \leftrightarrow u=F(v)$.
**Definition 17**. *For $g\in \Gamma\setminus\{0\}$, define a (unary) function $V_F:\Gamma\setminus\{0\}\to I_F$ as follows. We use Lemma [Lemma 14](#unicity){reference-type="ref" reference="unicity"} to represent $g$ as $\sum_{i=m}^n F^i(b_i)$, with $b_m\neq 0$, $b_m\in \Sigma_0$, $b_i\in \Sigma$, $m\leq i\leq n$. Then define $V_{F}(g):=F^m(b_{m})$. We extend $V_{F}$ on the whole of $\Gamma$ by $V_{F}(0)=0$.*
Note that $V_F$ is well-defined by choice of $\Sigma_0$ and that the graph of $V_F$ is automatic.
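In the illustration $\Gamma={\mathbb Z}$, $F(x)=2x$, $\Sigma=\{-1,0,1\}$, we may take $\Sigma_0=\{0,1\}$, since $-1$ and $1$ lie in the same coset modulo $F(\Gamma)=2{\mathbb Z}$. If $g\neq 0$ is written as $\sum_{i=m}^n 2^ib_i$ with $b_i\in\{-1,0,1\}$ and $b_m\neq 0$, then $2^m$ is the largest power of $2$ dividing $g$, and the representation of Lemma [Lemma 14](#unicity){reference-type="ref" reference="unicity"} normalises $b_m$ to be $1$; hence $$V_F(g)=2^m,$$ the highest power of $2$ dividing $g$, which is the analogue over ${\mathbb Z}$ of the function $V_2$ of the previous section.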
Let $g, g'\in \Gamma\setminus\{0\}$, $u\in I_{F}$. Then, using condition (C2), we have that if $u\prec V_{F}(g)$ and $u\prec V_{F}(g')$, then $u\prec V_{F}(g-g')$.
Also, one cannot have $F^n(a)=F^{m}(b)+F^k(c)$ with $a, b, c \in \Sigma\setminus\{0\}$ and $m<k$, and $m\neq n$, using that $\Sigma\cap F(\Gamma)=\{0\}.$
**Definition 18**. *We define the binary relation $R$ on $(\Gamma\setminus\{0\})\times (\Gamma\setminus\{0\})$ as follows: $R(g,u)$ holds if $u\in I_F$ with $u=F^n(a)$, $a\in \Sigma\setminus\{0\}$, and $g=\sum_{i=m}^n F^i(a_i)$ with $a_{i}\in \Sigma$, $m\leq i\leq n$, $a_{m}\neq 0$ and $a_{n}=a$, namely $a$ occurs as the rightmost letter in $\sigma\in \Sigma^*$ with $g=[\sigma]_F$. (Equivalently, $R(g,u)$ holds if there is $\sigma\in \Sigma^*$ such that $[\sigma]_F=g$, the rightmost letter of $\sigma$ is $a$ and $u=F^{\ell(\sigma)-1}(a)$. We can extend it to $\Gamma\times\Gamma$ by adding $R(0,0)$.)*
*Then we define the relation $\epsilon_F(u,g)$, expressing that $u$ occurs in some representation of $g$, by $$u\in I_F\,\wedge\,\big(g=u\;\vee\;\exists h\,(R(h,u)\wedge u\prec V_F(g-h))\big).$$*
Note that both relations $R$ and $\epsilon_{F}$ are automatic.
**Lemma 19**. *Let $g\in \Gamma$ and $u\in I_F$ and suppose that $\epsilon_F(u,g)$ holds. Then there is a unique element $h\in \Gamma$ such that $$R(h,u)\wedge u\prec V_F(g-h).$$*
Proof: Let $g, u$ be as above with $u=F^m(a)$, $m\geq 0$, $a\in \Sigma$, and suppose that we have $h, h'$ such that $R(h,u)\wedge u\prec V_F(g-h)$ and $R(h',u)\wedge u\prec V_F(g-h')$. This implies that $u\prec V_F(h-h')$.
By definition, both $h, h'$ have a representation of the form $h=a_0+F(a_1)+\ldots+F^m(a)$, $h'=b_0+F(b_1)+\ldots+F^m(a)$, with $a_i, b_i\in \Sigma$, $0\leq i\leq m$. If $m=0$, then $h=h'$, so let us assume that $m\geq 1$. Let $i$ be the smallest index such that $a_i-b_i\neq 0$. Write $a_i+(-b_i)=d+F(d')$, with $d, d'\in \Sigma$, applying condition (C2). If $d\neq 0$, we get a contradiction (with $u\prec V_F(h-h')$). So assume that $a_i+(-b_i)=F(d')$ with $d'\neq 0$ and replace $a_i$ by $b_i+F(d')$. If $i+1=m$, then $V_F(h-h')\sim u$, a contradiction. If $i+1<m$, we compare $a_{i+1}+d'$ and $b_{i+1}$. By condition $(C2)$, we have $a_{i+1}+d'+(-b_{i+1})=v+F(w)$, $v, w\in \Sigma$. If $v\neq 0$, again we get a contradiction. If $v=0$, then we replace $a_{i+1}+d'$ by $b_{i+1}+F(w)$. If $w=0$, then we have obtained another expression for $h, h'$ which makes them equal up to index $i+1$. If there is another index $m>j>i+1$ where they differ, we proceed as before. If $w\neq 0$ and $i+2<m$, then we compare $a_{i+2}+w$ and $b_{i+2}$, until we reach a contradiction or the case where we found a witness for $h=h'$. We eventually reach the case where $a_{i+k}=a$ and either we get that $h=h'$ or that $V_F(h-h')\sim u$, a contradiction. $\quad\Box$
**Remark 20**. *Now suppose that $g\in \Gamma\setminus\{0\}$, $u\in I_F$ but $\neg \epsilon_F(u,g)$. Either there is a largest (in the preorder $\preceq$) element $\tilde u\in I_F$ with $\tilde u\preceq u$ such that $\epsilon_F(\tilde u,g)$ holds, in which case by the preceding lemma there is a unique element $h$ such that $R(h,\tilde u)\wedge \tilde u\prec V_F(g-h)$; otherwise we set $h=0$. Note that in both cases, we still have that $u\prec V_F(g-h)$ and that we have a unique such $h$ with $\forall \tilde u\in I_F\;(R(h,\tilde u)\rightarrow \tilde u \preceq u)$.*
**Notation 21**. *[\[interval\]]{#interval label="interval"} This allows us to define an auxiliary function $f:\Gamma\setminus\{0\}\times I_{F} \to \Gamma$ by $f(g,u)=h$ if ($V_F(g)\preceq u\;\wedge\;\exists u'\in I_{F}\;(u'\preceq u\,\wedge \,R(h,u')\,\wedge u\prec V_F(g-h))$ or $u\prec V_F(g)\;\wedge h=0$).*
*Let $u_1, u_2\in I_F$ with $u_1\prec u_2$. Then denote by $$g\restriction [u_1\;u_2]:=f(g,u_2)-f(g,F^{-1}(u_1)) \;{\rm and}\; g\restriction [u_1\;u_2[\;:=f(g,u_2)-f(g,u_1).$$*
Let ${\mathcal L}:=\{+, F, V_F, R,I_{F}, \preceq, I_{F,a}; a\in \Sigma\setminus\{0\}\}$ and
$\Gamma_V:=(\Gamma,+,F,V_F, R, I_{F}, \preceq, I_{F,a}; a\in \Sigma\setminus\{0\}).$
**Proposition 22**. *Let $(\Gamma,+,F)$ be a finitely generated ${\mathbb Z}[F]$-module, where $F$ is injective, with an $F$-spanning subset. Then $\Gamma_{V}$ is FA-presentable. In particular this structure is decidable. The structure $(\Gamma,{\mathcal F})$ is definable in $\Gamma_V$.*
Proof: The fact that $\Gamma_V$ is FA-presentable follows from [@H Proposition 2.32], where it is shown that the diagonal, the graph of $+$ and the graph of $F$ are $F$-automatic. In the discussion above we also noted that all the predicates $V_F, R, I_{F}, \preceq, I_{F,a}; a\in \Sigma\setminus\{0\}$ are automatic. Now let us show that any elementary $F$-set is definable in $\Gamma_V$. Note that this will give another proof that $F$-sets are automatic [@H Theorem 2.54].
First let us define the elementary F-set $K(a,F)$, $a\in \Sigma\setminus\{0\}$, as follows: $g\in K(a; F)$ if $$\exists u\in I_{F,a}\;(R(g,u)\;\wedge\;
\forall c\in I_{F}(V_F(g)\preceq c\prec u\;\rightarrow g\restriction [c\;F(c)[\in I_{F,a})).$$
Now let $a\in \Gamma\setminus\{0\}$, $a=\sum_{i=0}^m F^{n_i}(a_i)$, $a_i\in \Sigma\setminus\{0\}$, $0\leq n_{0}<\ldots<n_{m}$ and $k=n_{m}-n_{0}+1$. First define the orbit of $a$, namely $Orb(a):=\{F^j(a)\colon j\in {\mathbb N}\}$, as follows: $g\in Orb(a)$ iff $$\exists g_0\;\ldots\exists g_m\;\Big(\bigwedge_{i=0}^m g_i\in F^{n_i}(I_{F,a_i})\;\wedge\;g=\sum_{i=0}^m g_i\;\wedge\;\bigwedge_{i=0}^{m-1} F^{n_{i+1}-n_i}(g_{i})\sim g_{i+1}\Big).$$ Define $g\in K(a;F^k)$ as follows: $$\exists h\;\exists u\in I_F\;\Big(V_F(h)=V_F(g)\;\wedge\;R(g,u)\wedge R(h,F(u))\;\wedge\;\forall u_1\forall u_2\,\big[\big(u_1, u_{2}\in I_{F} \wedge u_{1}\prec u_{2} \wedge \bigwedge_{i=1}^2\epsilon_{F}(u_{i},h)\wedge \neg\exists u_3\in I_F\, (\epsilon_{F}(u_{3},h)\wedge u_1\prec u_3\prec u_2)\big)\rightarrow\;g\restriction [u_1\;u_2[\; \in Orb(F^k(a))\big]\Big).$$ Finally, define $g\in K(a;F)$ as follows: $$\exists g_0\;\ldots\exists g_{k-1}\;\Big(\bigwedge_{i=0}^{k-1} g_i\in K(F^i(a);F^k)\wedge g=\sum_{i=0}^{k-1} g_{i}\Big).$$
One can easily modify the above formulas to define any elementary F-set, as well as any elementary F-set of some power of $\Gamma$. $\quad\Box$
We have that, as a ${\mathbb Z}[F]$-module, the structure $(\Gamma,+,F)$ is stable and, as we have seen earlier, under certain conditions on $(\Gamma,+,F)$, $(\Gamma,{\mathcal F})$ is stable (even superstable) [@MS Theorem 6.11].
C. Hawthorne considered the question of which subsets $A$ of $\Gamma$ one may add in order to get an NIP expansion $(\Gamma,+,A)$ [@H2 section 7].
In our framework, it would be interesting to find an intermediate structure between $(\Gamma,+,F)$ and $\Gamma_V$ which is NIP, more specifically:
**Question** Is there a reduct of $\Gamma_V$ interdefinable with $(\Gamma,+, F,{\mathcal F}(\Gamma))$, as in the special case of $\Gamma={\mathbb Z}$ and $F$ the multiplication by a prime number?
Now we want to identify all automatic sets in some $\Gamma^n$, $n>0$, as the class of all ${\mathcal L}$-definable subsets in $\Gamma_{V}$. The proof presented here follows the same strategy as in [@MP]. We use Kleene's theorem on the correspondence between regular languages and languages accepted by a finite automaton (see section [2.1](#aut){reference-type="ref" reference="aut"}).
A well-known observation is that if a language $L$ is accepted by a finite automaton, then the language consisting of all words in $\Sigma^*$ of the form $\underbrace{0\ldots0}_n\omega$ with $\omega\in L$ is again recognizable. We will denote such a language by $0^nL$, $n>0$; when $n=0$, we set $0^{0}:=\lambda$, so that $0^0L=L$.
**Lemma 23**. *Let $L$ be a regular language on $\Sigma$ and let ${\bold{c}}\in \Sigma\setminus\{0\}$. Then there is an ${\mathcal L}$-formula $\rho_{L}(g,x)$ such that, for all $w\in \Sigma^*$ and $n\in {\mathbb N}$, $\Gamma_V\models \rho_{L}([w^{\frown}{\bold{c}}]_{F},F^n({\bold{c}}))$ iff $w\in 0^nL$.*
Proof: We work by induction on the complexity of regular languages. Let $L$, $L_1$, $L_2$ be languages of complexity $\leq c+1$. Since any element of $\Gamma\setminus\{0\}$ is represented by a word ending with an element of $\Sigma\setminus\{0\}$, we consider elements of $\Sigma^*$ concatenated with a letter ${\bold{c}}\in \Sigma\setminus\{0\}$. Moreover for each such letter ${\bold{c}}$, there is some $b\in \Sigma$ such that ${\bold{c}}={\bold{c}}_0+F(b)$ with ${\bold{c}}_0\in \Sigma_0$.
By induction we assume that for languages $L$ of complexity $\leq c$, there exist formulas $\rho_{L}(g,x)$ such that $\rho_{L}([w^{\frown}{\bold{c}}]_{F},F^n({\bold{c}}))$ holds iff $w\in 0^nL$.
$\bullet$ The case of complexity $0$ is the case of languages $L$ of the form $L=\{a\}$ with $a\in \Sigma$.
If $L=\{0\}$, then set $\rho_{L}(g, x):=(g=F(x))$;\
if $L=\{a\}$, $a\neq 0$, then set $\rho_{L}(g, x):=\exists u\,(g=u+F(x)\wedge u\in I_{F,a}\wedge F(u)\sim F(x)).$ So $\rho_{L}([w^{\frown} {\bold{c}}]_{F},F^n({\bold{c}}))$ holds iff $[w^{\frown} {\bold{c}}]_F=F^n(a)+F^{n+1}({\bold{c}})$ iff $w=0^na$.
$\bullet$ Then by induction we assume that we have formulas $\rho_{L_i}(g,x)$ such that $\rho_{L_i}([w^{\frown} {\bold{c}}]_F,F^n({\bold{c}}))$ holds iff $w\in 0^n(L_i)$, $0\leq i\leq 2$ and we show there exist formulas with the following property:
1. $\rho_{L_1\cup L_2}([w^{\frown} {\bold{c}}]_{F},F^n({\bold{c}}))$ holds iff $w\in 0^n(L_1\cup L_2)$,
2. $\rho_{L_1^{\frown} L_2}([w^{\frown} {\bold{c}}]_{F},F^n({\bold{c}}))$ holds iff $w\in 0^n(L_1^{\frown} L_2)$, $w{\bold{c}}=0^nv_{1}v_{2}{\bold{c}}, v_{1}\in L_{1}, v_{2}\in L_{2}$,
3. $\rho_{L_0^*}([w^{\frown} {\bold{c}}]_{F},F^n({\bold{c}}))$ holds iff $w\in 0^n L_0^*$, $w{\bold{c}}=0^nv_{1}v_{2}\ldots v_{m}{\bold{c}}, v_{i}\in L_{0}$.
Case $(1)$ is immediate since $w\in 0^n(L_1\cup L_2)$ iff ($w\in 0^nL_1$ or $w\in 0^nL_2$).
In case $(2)$, we let $\rho_{L_1^{\frown} L_2}(g,x)$ be the following formula: $$\exists v, u\in I_{F,{\bold{c}}}\,\exists v', v''\in I_{F,{\bold{c}}}\;\big((x \preceq v \preceq u\,\wedge \,S(v)\sim v'\,\wedge v''\sim S(u))\,\wedge\, x\preceq V_{F}(g)\,\wedge \,R(g,u)\wedge \,\rho_{L_1}(f(g,v)+v',x)\wedge \, \rho_{L_2}(g-f(g,v)+v'',v')\big).$$ Then we check that $\rho_{L_1^{\frown} L_2}([w^{\frown} {\bold{c}}]_F,F^n({\bold{c}}))$ holds iff $w\in 0^{n}(L_1^{\frown} L_2)$.
In case (3), we let $\rho_{L_0^*}(g,x)$ be the formula: $$g=x\;\vee\; \exists h\;\exists u\in I_{F,{\bold{c}}}\Big(R(h,u)\, \wedge R(g,u)\,\wedge (V_F(h)\sim x\sim V_F(g))\,\wedge\,\forall u_1\forall u_2\,\big[\big(u_1, u_{2}\in I_{F,{\bold{c}}} \wedge u_{1}\prec u_{2} \wedge \bigwedge_{i=1}^2\epsilon_{F, {\bold{c}}}(u_{i},h)\wedge \neg\exists u_3\in I_F\, (\epsilon_{F}(u_{3},h)\wedge u_1\prec u_3\prec u_2)\big)\rightarrow\; \rho_{L_{0}}(g\restriction [u_1\;u_2],u_1)\big]\Big)$$
Again, we check that $\rho_{L_0^*}([w^{\frown} {\bold{c}}]_F,F^n({\bold{c}}))$ holds iff $w\in 0^{n}L_0^*$.$\quad\Box$
**Theorem 24**. *Let $(\Gamma,+,F)$ be a finitely generated ${\mathbb Z}[F]$-module, where $F$ is injective, with an $F$-spanning subset. Then $\Gamma_V$ is FA-presentable and definable subsets in $\Gamma_{V}$ coincide with automatic sets.*
Proof: The first part of the statement is Proposition [Proposition 22](#F-def){reference-type="ref" reference="F-def"}. The second part follows from Lemma [Lemma 23](#regular){reference-type="ref" reference="regular"}. We will simply do the proof for subsets of $\Gamma$. For subsets of $\Gamma^n$, one uses the operation of convolution (and works with the alphabet $(\Sigma\cup\{\sharp\})^n$) (see section [2.2](#automatic){reference-type="ref" reference="automatic"}).
Let $\Sigma$ be an $F$-spanning set for $\Gamma$ and let $A$ be an automatic subset of $\Gamma$. So there is a regular language $L\subset \Sigma^*$ such that $A=\{[\omega]_{F}\colon \omega\in L\}$. Now the set of finite words ending with a nonzero element of $\Sigma$ is a regular language, say $\tilde L$. So $L\cap (\tilde L\cup \{\lambda\})$ is again a regular language, which we again denote by $L$. Let $g\in A$; then $g=[\omega]_F$ for some $\omega\in L$. First assume that $g\neq 0$; then by Lemma [Lemma 23](#regular){reference-type="ref" reference="regular"}, $\omega\in L$ iff $\rho_{L}([\omega^{\frown}{\bold{c}}]_{F},{\bold{c}})$ holds.
Now $[\omega^{\frown}{\bold{c}}]_{F}=g+F(u)$ with $u\in I_{F,{\bold{c}}}$ such that $R(g,u)$ holds. If $g=0$, then $[\lambda^{\frown}{\bold{c}}]_{F}={\bold{c}}$ and $\lambda\in L$ iff $\rho_L({\bold{c}},{\bold{c}})$ holds.
So, putting both cases together, we have $g\in A$ iff $$\Gamma_V\models\Big(\big(g\neq 0\wedge \exists u\;(u\in I_{F,{\bold{c}}}\wedge R(g,u)\wedge \rho_{L}(g+F(u),{\bold{c}}))\big)\vee \big(g=0\wedge \rho_{L}({\bold{c}},{\bold{c}})\big)\Big).$$ $\quad\Box$
**Lemma 25**. *The structure $\Gamma_V$ has the independence property. When $\Gamma$ is a finitely generated abelian group and $\bigcap_{i\in {\mathbb N}} (F^i)=\{0\}$, $\Gamma_V$ is a proper expansion of $(\Gamma,{\mathcal F})$.*
Proof: The structure $\Gamma_V$ has IP. As in the case of $({\mathbb Z},+,V_{2})$, one codes finite subsets of $I_{F}$ as follows. Let $u\in I_{F}$ and $g\in \Gamma\setminus\{0\}$. Then we interpret '$u\in g$' as: $\epsilon_{F}(u,g)$ holds.
When $\Gamma$ is a finitely generated ${\mathbb Z}$-module and $\bigcap_{i\in {\mathbb N}} (F^i)=\{0\}$, $(\Gamma,{\mathcal F})$ is stable [@MS Theorem 6.11]; since $(\Gamma,{\mathcal F})$ is then in particular NIP while $\Gamma_V$ has IP, $\Gamma_V$ cannot be definable in $(\Gamma,{\mathcal F})$, and so $\Gamma_V$ is a proper expansion of $(\Gamma,{\mathcal F})$. $\quad\Box$
## Acknowledgements {#acknowledgements .unnumbered}
I would like to thank P. D'Aquino, who gave me the opportunity to give a talk on this subject in the model theory seminar in May 2022 at the University "degli studi della Campania Luigi Vanvitelli". I would also like to thank R. Moosa and C. Hawthorne for helpful communications.
## References {#references .unnumbered}

- Bell J., Moosa R., F-sets and finite automata, J. Théor. Nombres Bordeaux 31 (2019), no. 1, 101--130.
- Bruyère V., Entiers et automates finis, Mémoire de fin d'études, Université de Mons (1985).
- van den Dries L., The field of reals with a predicate for the powers of two, Manuscripta Math. 54 (1985), no. 1-2, 187--195.
- Eilenberg S., Automata, Languages and Machines (vol. A), Academic Press, New York, 1974.
- Hawthorne C., A guide to F-automatic sets, Ph.D. thesis, University of Waterloo, 2021.
- Hawthorne C., Automata and tame expansions of $({\mathbb Z},+)$, Israel J. Math. 249 (2022), no. 2, 651--693.
- Hawthorne C., Contributions to the theory of F-automatic sets, J. Symb. Log. 87 (2022), no. 1, 127--158.
- Hodgson B.R., Théories décidables par automate fini, Ph.D. thesis, University of Montreal, 1976.
- Lambotte Q., Point F., On expansions of $({\mathbb Z},+,0)$, Annals of Pure and Applied Logic 171 (2020), Article no. 102809.
- Michaux C., Point F., Les ensembles $k$-reconnaissables sont définissables dans $({\mathbb N},+,V_{k})$, C.R. Acad. Sc. Paris, t. 303, Série I, no. 19, 1986.
- Moosa R., Scanlon T., F-structures and integral points on semiabelian varieties over finite fields, Am. J. Math. 126 (2004), no. 3, 473--522.
- Nies A., Describing groups, Bull. Symbolic Logic 13 (2007), no. 3, 305--339.
- Perrin D., Pin J.-E., Infinite words: Automata, Semigroups, Logic and Games, Pure and Applied Mathematics series 141, Elsevier Academic Press, 2004.
---
abstract: |
The primary tool for analysing groups acting on trees is Bass--Serre Theory. It is comprised of two parts: a decomposition result, in which an action is decomposed via a graph of groups, and a construction result, in which graphs of groups are used to build examples of groups acting on trees. The usefulness of the latter for constructing new examples of 'large' (e.g. nondiscrete) groups acting on trees is severely limited. There is a pressing need for new examples of such groups as they play an important role in the theory of locally compact groups. An alternative 'local-to-global' approach to the study of groups acting on trees has recently emerged, inspired by a paper of Marc Burger and Shahar Mozes, based on groups that are 'universal' with respect to some specified 'local' action. In recent work, the authors of this survey article have developed a general theory of universal groups of local actions, that behaves, in many respects, like Bass--Serre Theory. We call this the theory of local action diagrams. The theory is powerful enough to completely describe all closed groups of automorphisms of trees that enjoy Tits' Independence Property $(\mathrm{P}_{})$.
This article is an introductory survey of the local-to-global behaviour of groups acting on trees and the theory of local action diagrams. The article contains many ideas for future research projects.
address:
- "Colin D. Reid. The University of Newcastle, School of Mathematical and Physical Sciences, Callaghan, NSW 2308, Australia. Email: colin\\@reidit.net"
- "Simon M. Smith. Charlotte Scott Research Centre for Algebra, University of Lincoln, Lincoln, U.K. Email: sismith\\@lincoln.ac.uk"
author:
- Colin D. Reid
- Simon M. Smith
title: An introduction to the local-to-global behaviour of groups acting on trees and the theory of local action diagrams
---
# Introduction
Actions on trees have a significant role in the general theory of finite and infinite groups. In finite group theory such actions are, for example, a natural setting for questions about vertex-transitive groups $G$ of automorphisms of connected finite graphs. For such a pair $G \leq \mathop{\mathrm{Aut}}(\Gamma)$ with $G$ nontrivial, the universal cover of $\Gamma$ is an infinite regular tree $T$, and there is a natural projection $\pi$ of $T$ to $\Gamma$. The fundamental[^1] group $\Pi(\Gamma, v)$ of $\Gamma$ at any vertex $v$ can be identified with a subgroup of $\mathop{\mathrm{Aut}}(T)$ in such a way that $\Gamma$ can be identified with the quotient graph $\Pi(\Gamma, v) \backslash T$ and $G$ can be identified with the quotient $\tilde{G} / \Pi(\Gamma, v)$, where $\tilde{G}$ denotes the lift of $G$ along $\pi$. For a thorough description of this correspondence, see [@potocnik_spiga_19] for example.
In this context, important open problems in finite groups can translate into natural questions about groups acting on trees. Consider, for example, the well-known Weiss conjecture ([@weiss78]) due to Richard Weiss. In the language of finite groups, it states that there exists a function $f: \mathbb{N}\rightarrow \mathbb{N}$ such that if $G$ is vertex-transitive and locally primitive[^2] on a graph with finite valency $k$, then all vertex stabilisers satisfy $|G_v| < f(k)$. Equivalently, in the language of groups acting on trees, it states that for each $k \in \mathbb{N}_{\geq 3}$ the automorphism group of the $k$-regular tree $T_k$ contains only finitely many conjugacy classes of discrete, locally primitive and vertex transitive subgroups.
The role played by actions on trees in the general theory of infinite groups is, of course, well-known. Much of our understanding of groups acting on trees comes from the celebrated Bass--Serre Theory, described in detail in Jean-Pierre Serre's book [@Serre:trees]. We give an introductory overview to Bass--Serre Theory in Section [2.1](#Sec:BassSerre){reference-type="ref" reference="Sec:BassSerre"}. Serre's theory concerns a group $G$ *acting on a tree* $T$; by this we mean that $G$ acts on $T$ as a group of automorphisms of $T$. We will denote such an action by the pair $(T,G)$, and will often identify $(T,G)$ with its image in the group $\mathop{\mathrm{Aut}}(T)$ of automorphisms of $T$. In Bass--Serre theory, the algebraic structure of a group $G$ acting on a tree $T$ is 'decomposed' into pieces with the decomposition described via a combinatorial structure called a *graph of groups*. This graph of groups associated to the action $(T,G)$ is essentially a vertex- and edge-coloured graph, where each colour is in fact a group. The groups in this palette are called the *vertex groups* and *edge groups* of the graph of groups. Importantly, this decomposition process can be reversed, where one starts with a graph of groups $\Gamma$ and then constructs the universal cover $\tilde{T}$ (which is a tree) of $\Gamma$ and the fundamental group $\Pi$ of $\Gamma$. The group $\Pi$ acts on $\tilde{T}$ in such a way that its associated graph of groups is again $\Gamma$. This 'decomposition' of $(T,G)$ in Bass--Serre Theory hinges on the observation that if $\Gamma$ is the associated graph of groups for $(T,G)$, and $\tilde{T}$ and $\Pi$ are respectively the universal cover and fundamental group of $\Gamma$, then the actions $(T,G)$ and $(\tilde{T}, \Pi)$ can be identified.
Two algebraic constructions, the HNN extension and the amalgamated free product, play a foundational role in this theory: the graph of groups of an HNN extension is a single vertex with a loop, and the graph of groups of an amalgamated free product is a pair of vertices joined by an edge. Intuitively, edges and loops in the graph of groups of the action give information about how the action decomposes into HNN extensions and amalgamated free products of the vertex groups.
Naturally, there are areas in which the usefulness of Bass--Serre Theory is limited. One of the most significant limitations can be seen when attempting to construct a group acting on a tree with certain desired properties, by first writing down a graph of groups. We discuss this in Section [3.1](#subsection:bass_serre_limitations){reference-type="ref" reference="subsection:bass_serre_limitations"}. Essentially Bass--Serre Theory can be used to construct an action on a tree with a given graph of groups, but the action itself cannot, in general, be fully controlled. This limitation is the source of the intractability of several famous conjectures, including the aforementioned Weiss Conjecture and the related Goldschmidt--Sims Conjecture (see [@BurgerMozes], [@goldschmidt]). This limitation is particularly problematic when trying to construct nondiscrete actions on trees, and recent developments in the theory of locally compact groups have made the need to address this limitation more acute.
In the theory of locally compact groups, the structure theory of compactly generated locally compact groups is known to depend on the class $\mathscr{S}$ of nondiscrete, compactly generated, locally compact topologically simple groups; see Pierre-Emmanuel Caprace and Nicolas Monod's paper [@CapraceMonod]. For a survey of the class $\mathscr{S}$, placing it in a broad mathematical context, see Caprace's survey paper [@CapraceSimple]. Groups acting on trees are one of the main sources of examples of groups in $\mathscr{S}$. In fact the very first examples of nonlinear, nondiscrete, locally compact simple groups were constructed using groups acting on trees, by Jacques Tits (answering a question of J. P. Serre) in [@Tits70]. Tits used an independence condition he called Property $(\mathrm{P}_{})$ for groups acting on trees, and showed that for any group $G$ of automorphisms of a tree $T$, if $G$ has Property $(\mathrm{P}_{})$, fixes (setwise) no nonempty proper subtree of $T$ and fixes no end of $T$, then the subgroup $G^+$ of $G$ generated by arc stabilisers is (abstractly) simple[^3] (see Section [2.2](#Section:tits){reference-type="ref" reference="Section:tits"}). As an immediate consequence, for the infinite regular tree $T_n$ of (finite or infinite) valency $n$ we have that $\mathop{\mathrm{Aut}}(T_n)^+$ is simple for $n \geq 3$. This group is nondiscrete and when $n$ is finite it is easily seen to be compactly generated and locally compact (under the permutation topology).
The limitations of Bass--Serre Theory when constructing nondiscrete groups acting on trees can be avoided using a variety of techniques inspired by a 2000 article [@BurgerMozes] by Marc Burger and Shahar Mozes; these techniques might collectively be known as *local-to-global constructions of groups acting on trees*. In their paper, Burger and Mozes take a permutation group $F \leq S_n$ of finite degree $n$ and build a group $\mathbf{U}(F)$ of automorphisms of $T_n$ that has *local action* $F$ (or is *locally-$F$*); that is, vertex stabilisers in $\mathbf{U}(F)$ induce $F$ on the set of neighbours of the stabilised vertex. For example, $\mathbf{U}(S_n) = \mathop{\mathrm{Aut}}(T_n)$. These *Burger--Mozes* groups $\mathbf{U}(F)$ enjoy Tits' Independence Property $(\mathrm{P}_{})$, and when $F$ is transitive $\mathbf{U}(F)$ is 'universal' among subgroups of $\mathop{\mathrm{Aut}}(T_n)$ that have local action $F$; that is, $\mathbf{U}(F)$ contains an $\mathop{\mathrm{Aut}}(T_n)$-conjugate of every subgroup of $\mathop{\mathrm{Aut}}(T_n)$ that is locally-$F$. When $F$ is transitive and generated by its point stabilisers, $\mathbf{U}(F)$ contains a simple subgroup $\mathbf{U}(F)^+$ of index $2$. Since 2000, the majority of constructions of compactly generated, simple, locally compact groups have used the ideas of Tits [@Tits70] and Burger--Mozes [@BurgerMozes].
In [@SmithDuke], the second author generalised the Burger--Mozes construction to *biregular trees* $T_{m,n}$, where $m,n \geq 2$ are (possibly infinite) cardinals and $T_{m,n}$ is the infinite tree in which vertices in one part of its bipartition have valency $m$ and those in the other part have valency $n$. Given permutation groups $F_1 \leq S_m$ and $F_2 \leq S_n$, this generalisation is a group $\mathbf{U}(F_1, F_2) \leq \mathop{\mathrm{Aut}}(T_{m,n})$, called *the box product* of $F_1$ and $F_2$. The group $\mathbf{U}(F_1, F_2)$ has local actions $F_1$ (at vertices of $T_{m,n}$ in one part of the bipartition) and $F_2$ (at vertices in the other part of the bipartition); we say any such action on $T_{m,n}$ is *locally-$(F_1, F_2)$*. For $m \geq 3$ and $F \leq S_m$ it can be shown that the Burger--Mozes group $\mathbf{U}(F)$ is isomorphic as a topological group (but not as a permutation group) to $\mathbf{U}(F, S_2)$. The properties of $\mathbf{U}(F_1, F_2)$ mirror those of the Burger--Mozes groups: it has Tits' Independence Property $(\mathrm{P}_{})$, and when $F_1$ and $F_2$ are transitive, $\mathbf{U}(F_1, F_2)$ is 'universal' among subgroups of $\mathop{\mathrm{Aut}}(T_{m,n})$ that are locally-$(F_1, F_2)$, in that it contains an $\mathop{\mathrm{Aut}}(T_{m,n})$-conjugate of every locally-$(F_1,F_2)$ subgroup of $\mathop{\mathrm{Aut}}(T_{m,n})$. When $F_1$ and $F_2$ are generated by their point stabilisers and at least one group is nontrivial, $\mathbf{U}(F_1,F_2)$ is simple if and only if $F_1$ or $F_2$ is transitive.
The Burger--Mozes groups are automatically totally disconnected, locally compact (henceforth, "t.d.l.c.") and compactly generated groups because they are defined only on locally-finite trees. Despite admitting actions on non-locally finite trees, $\mathbf{U}(F_1, F_2)$ can still be a compactly generated t.d.l.c. group under mild conditions on $F_1$ and $F_2$. Indeed, if $F_1$ and $F_2$ are closed and compactly generated, with compact nontrivial point stabilisers and finitely many orbits, and either $F_1$ or $F_2$ is transitive (e.g. $F_1$ is closed, nonregular, subdegree-finite and primitive and $F_2$ is finite, nonregular and primitive), then $\mathbf{U}(F_1, F_2)$ is a nondiscrete compactly generated t.d.l.c. group. By taking $F_1$ to be infinite permutation representations of various Tarski--Ol'shanskiı̆ Monsters (see [@olshanski]), the second author used this box product construction to show that there are precisely $2^{\aleph_0}$ isomorphism types of nondiscrete, compactly generated simple locally compact groups, answering a well-known open question; the analogous result for discrete groups was proved in 1953 by Ruth Camm in [@Cam53].\
In light of the Burger--Mozes and box product constructions, the authors of this article wished to create a unified way of seeing $\mathbf{U}(F)$ and $\mathbf{U}(F_1, F_2)$, within a framework permitting yet more local actions to be specified. What emerged from this endeavour was something far deeper: a general theory of 'universal' groups of local actions, that behaves, in many respects, like Bass--Serre Theory. This work, which we call *the theory of local action diagrams*, is fully described in our paper [@ReidSmith].
To better understand this theory, let us define the *$(\mathrm{P}_{})$-closure* of an action $(T, G)$ of a group $G$ on a tree $T$ as being the smallest closed subgroup of $\mathop{\mathrm{Aut}}(T)$ with Tits Independence Property $(\mathrm{P}_{})$ that contains $(T,G)$; we denote it $G^{(\mathrm{P}_{})}$. That is, $G^{(\mathrm{P}_{})}$ is the smallest closed subgroup of $\mathop{\mathrm{Aut}}(T)$ with Tits Independence Property $(\mathrm{P}_{})$ that contains the subgroup of $\mathop{\mathrm{Aut}}(T)$ induced by the action of $G$ on $T$. If $G = G^{(\mathrm{P}_{})}$ we say that $G$ is *$(\mathrm{P}_{})$-closed*. [\[def:P-closure\]]{#def:P-closure label="def:P-closure"} In the theory, any group $G$ acting on a tree $T$ is 'decomposed' into its local actions with the decomposition described via a combinatorial structure called a *local action diagram*. This local action diagram is a graph decorated with sets and groups that codify the local actions of $G$. Importantly, this decomposition process can be reversed, where one starts with a local action diagram $\Delta$ and then constructs an arc-coloured tree ${\bf T}$ called the $\Delta$-tree and a group $\mathbf{U}(\Delta)$ called the *universal group of $\Delta$*. The group $\mathbf{U}(\Delta)$ acts on ${\bf T}$ in such a way that its local action diagram is again $\Delta$, and moreover it exhibits various desirable global properties that are often impossible to verify for groups arising from graphs of groups via Bass--Serre Theory. This 'decomposition' of $(T,G)$ into local actions hinges on the observation that if $\Delta$ is the associated local action diagram for $(T,G)$ and ${\bf T}$ and $\mathbf{U}(\Delta)$ are the $\Delta$-tree and the universal group of $\Delta$, then the actions $(T,G^{(\mathrm{P}_{})})$ and $({\bf T}, \mathbf{U}(\Delta))$ can be identified. Thus, viewing $\mathbf{U}(\Delta)$ as a subgroup of $\mathop{\mathrm{Aut}}(T)$, we see that $\mathbf{U}(\Delta)$ is 'universal' with respect to the local actions of $G$; that is, $\mathbf{U}(\Delta)$ contains an $\mathop{\mathrm{Aut}}(T)$-conjugate of every action $(T,H)$ whose local action diagram is also $\Delta$.
The Burger--Mozes groups $\mathbf{U}(F)$ and the box product construction $\mathbf{U}(F_1, F_2)$ play a foundational role in this new theory: the local action diagram of $\mathbf{U}(F)$ is a (suitably decorated) graph consisting of a single vertex with a set of loops, each of which is its own reverse; the local action diagram of $\mathbf{U}(F_1, F_2)$ when $F_1$ and $F_2$ are transitive is a (suitably decorated) graph consisting of a pair of vertices joined by an edge and no loops (c.f. the graphs of groups of HNN extensions and amalgamated free products).
The theory of local action diagrams is powerful enough to give a complete description of closed actions on trees with Tits' Independence Property $(\mathrm{P}_{})$: these actions are precisely the universal groups of local action diagrams. From this one immediately obtains a robust classification of all actions $(T,G)$ with Tits' Independence Property $(\mathrm{P}_{})$, by taking closures in the permutation topology. Note that $(T,G)$ and its closure in the permutation topology have the same orbits on all finite (ordered and unordered) sets of vertices.
Working with local action diagrams is remarkably easy; unlike for graphs of groups there are no embeddability issues to contend with. Moreover, all properties of faithful closed actions with property $(\mathrm{P}_{})$ can be read directly from the local action diagram; many can be read easily from the diagram, including --- surprisingly --- geometric density, compact generation and simplicity.\
The theory of local action diagrams is described in [@ReidSmith]. This note is intended to be an accessible introduction to the theory, placing it in its broad context and giving illustrative diagrams and examples, while omitting most proofs. It largely follows the contents of the second author's plenary talk at Groups St Andrews 2022 in Newcastle. Our intention with this note is to make the motivations and ideas in the theory accessible to non-specialists. To that end, we give an overview of Bass--Serre Theory, with enough depth so that the reader can understand its limitations, appreciate how the theory of local action diagrams mirrors the rich relationship between action and combinatorial description in Bass--Serre Theory, and to see how fundamentally different the two theories are. Following this we introduce the theory of local action diagrams, giving examples and some consequences, but largely omitting proofs (all proofs can be found in [@ReidSmith]). We conclude with some suggestions for future research projects.
Since this note is intended to be an introduction to local action diagrams, many topics from [@ReidSmith] have been omitted. The most significant omissions concern subgroups of universal groups of local action diagrams. We refer the interested reader to [@ReidSmith] for further details.
## Notation and conventions {#notation-and-conventions .unnumbered}
Unless otherwise stated we follow the definitions in our paper [@ReidSmith]. In particular, our graphs can have multiple distinct edges between two vertices, and each edge is comprised of two arcs (one in each direction). Loops are allowed, meaning that our graphs are graphs in the sense of Serre (except for us a loop $a$ may or may not equal its reverse $\overline{a}$, which is not the case for Serre). Our graphs $\Gamma$ have a (finite or infinite) vertex set $V = V\Gamma$, a (finite or infinite) arc set $A = A\Gamma$, an arc-reversal map $a \mapsto \overline{a}$ (also called an arc or edge *inversion*) together with an *origin* map $o:A \rightarrow V$ and a *terminal* map $t:A \rightarrow V$, so that $a \in A$ is an arc *from $o(a)$ to $t(a)$*. Edges are pairs $\{a, \overline{a}\}$ and are said to *contain* vertices $o(a)$ and $t(a)$, or to be *between* $o(a)$ and $t(a)$. We define *loops* to be arcs $a$ such that $o(a) = t(a)$. The *valency* of a vertex $v$ is $|o^{-1}(v)|$, sometimes denoted $|v|$; if this is finite for all vertices then $\Gamma$ is *locally finite*. A *leaf* in $\Gamma$ is a vertex with exactly one edge containing it, and that edge is not a loop. A graph is *simple* if it has no loops and there is at most one edge between any two vertices.
Our graphs are not simple so we must define paths with care. Given an interval $I \subseteq \mathbb{Z}$, let $\hat{I} = \{i \in I : i+1 \in I\}$; a *path* in $\Gamma$ indexed by $I$ is then a sequence of vertices $(v_i)_{i \in I}$ and edges $(\{a_i, \overline{a_i}\})_{i \in \hat{I}}$ such that $\{a_i, \overline{a_i}\}$ is an edge in $\Gamma$ between $v_i$ and $v_{i+1}$ for all $i \in \hat{I}$. For finite $I$ the path has *length* $|\hat{I}|$. A path is *simple* if all its vertices $v_i$ are distinct from one another. We can now define *directed paths* in the obvious way, as a sequence of vertices and arcs. For $n > 0$, if $I = \{0,\dots,n\}$ and $v_0 = v_{n}$ and vertices $\{v_0, \dots, v_{n-1}\}$ are distinct, then the path is called a *cycle* of length $n$. The *distance* between two vertices $v,w$, denoted $d(v,w)$, is the length of the shortest path between them if it exists, and is infinite otherwise. A graph is *connected* if there is a path between any two distinct vertices. For a vertex $v$ the set of vertices whose distance from $v$ is at most $k$ is called a *$k$-ball* and is denoted $B_v(k)$. We will sometimes write $B(v)$ for the set $B_v(1)$ of *neighbours* of $v$. An *orientation* of $\Gamma$ is a subset $O \subseteq A\Gamma$ such that for each $a \in A\Gamma$, either $a$ or $\overline{a}$ is in $O$, but not both. For graphs that are not simple, the graph subtraction operation is not well behaved and so we avoid it except for the following situation: for $a \in A\Gamma$ the graph $\Gamma \setminus \{a\}$ is obtained from $\Gamma$ by removing arcs $a, \overline{a}$. For a simple graph $\Gamma$ with subgraph $\Lambda$ we define graph subtraction $\Gamma \setminus \Lambda$ in the usual way: $\Gamma \setminus \Lambda$ is obtained from $\Gamma$ by removing all vertices that lie in $\Lambda$ and their incident edges.
A *tree* is a nonempty simple, connected graph that contains no cycles. In a simple graph $\Gamma$, a *ray* is a one-way infinite simple path and a *double ray* or *line* is a two-way infinite simple path. For us the *ends* (sometimes called *vertex-ends* for non-locally-finite graphs) of $\Gamma$ are equivalence classes on the set of rays, where rays $R_1, R_2$ lie in the same end if and only if there exists a ray $R$ in $\Gamma$ containing infinitely many vertices of $R_i$ for $i=1,2$. In a tree $T$, there is a unique shortest path between any two vertices $v$ and $w$, denoted $[v,w]$ (or $[v,w)$, for example, if we wish to exclude $w$). For an arc or edge $e$ in $T$ the graph $T \setminus \{e\}$ has two connected components; these are called the *half-trees* associated with $e$.
Actions of a group $G$ on a set $X$ are from the left, with $Gx$ denoting the orbit of $x \in X$ under the action of $G$. We denote the stabiliser of $x$ by $G_x$, and for a subset $Y \subseteq X$ we write $G_{\{Y\}}$ (resp. $G_{(Y)}$) for the setwise (resp. pointwise) stabiliser of $Y$ in $G$. The action is *transitive* on $X$ if $X = Gx$ for some $x \in X$. The group of all permutations of $X$ is denoted $\mathop{\mathrm{Sym}}(X)$. Subgroups of $\mathop{\mathrm{Sym}}(X)$ in which all orbits of point stabilisers are finite are called *subdegree-finite*. A group $G \leq \mathop{\mathrm{Sym}}(X)$ acts *freely* or *semiregularly* if the stabiliser $G_x$ of any $x \in X$ is trivial; if $G$ is transitive and semiregular we say it is *regular*. If $G$ is transitive, then it is *primitive* if and only if the only $G$-invariant equivalence relations on $X$ are the trivial relation (where equivalence classes are singletons) or the universal relation (where $X$ is an entire equivalence class).
There is a natural topology on $G$ that can be obtained from the action of $G$ on $X$, called the *permutation topology*, in which a neighbourhood basis of the identity is taken to be the pointwise stabilisers of finite subsets of $X$. If we think of $X$ as a discrete space with elements of $G$ as maps from $X$ to $X$, then the topology is equal to the topology of pointwise convergence and the compact-open topology. Permutational properties of $G$ have topological ramifications. For example, $G$ is totally disconnected if and only if the action on $X$ is faithful, and if $G \leq \mathop{\mathrm{Sym}}(X)$ is closed and subdegree-finite then all stabilisers $G_x$ are compact and open, so $G$ is a totally disconnected and locally compact group (henceforth, t.d.l.c.) with compact open stabilisers. See [@Moller:PermTDLC] for a thorough guide to this topology. Topological statements concerning $\mathop{\mathrm{Sym}}(X)$ will always appertain to the permutation topology. Note that a topological group is *compactly generated* if there is a compact subset that abstractly generates the group.
A *graph homomorphism* $\theta: \Gamma \rightarrow \Gamma'$ is a pair of maps $\theta_V: V\Gamma \rightarrow V\Gamma'$ and $\theta_A: A\Gamma \rightarrow A\Gamma'$ that respect origin vertices and edge reversal; if $\theta_V$ and $\theta_A$ are both bijections we say $\theta$ is an *isomorphism*. A group $G$ acting on $\Gamma$ gives rise to a *quotient graph* $G \backslash \Gamma$ whose vertex (resp. arc) set is the set of $G$-orbits on $V$ (resp. on $A$), and for an arc $Ga$ in $G \backslash \Gamma$ we have $o(Ga) = Go(a)$ and $t(Ga) = Gt(a)$ and $\overline{Ga} = G\overline{a}$. The group of automorphisms of $\Gamma$ is denoted $\mathop{\mathrm{Aut}}(\Gamma)$. When $\Gamma$ is a simple graph, $\mathop{\mathrm{Aut}}(\Gamma)$ acts faithfully on $V$ as those elements in $\mathop{\mathrm{Sym}}(V)$ that respect the arc relation in $V\times V$, and in this case we identify $\mathop{\mathrm{Aut}}(\Gamma)$ with the corresponding subgroup of $\mathop{\mathrm{Sym}}(V)$.
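For example, if $\Gamma$ is the line with vertex set ${\mathbb Z}$ (each vertex $i$ joined by an edge to $i+1$) and $G \cong \mathbb{Z}$ acts by translation, then $G \backslash \Gamma$ has a single vertex and a single edge $\{Ga, G\overline{a}\}$: the arcs pointing in each of the two directions form the two arc orbits, which are swapped by reversal, so the quotient consists of one loop $Ga$ together with its distinct reverse $G\overline{a}$.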
For a tree $T$ (which recall is always simple) and a line $L$ in $T$, a *translation* of $L$ is an orientation-preserving automorphism of $L$ that does not fix any point on the line. If $B$ is a subtree of $T$ and $G \leq \mathop{\mathrm{Aut}}(T)$, we say that $G$ leaves $B$ *invariant* if $G$ fixes setwise the vertices of $B$. Throughout, countable means finite or countably infinite. We say that an action on a tree $T$ is *geometrically dense* if the action does not leave invariant any nonempty proper subtree of $T$ and does not fix any end of $T$.
# Groups acting on trees
## An overview of Bass--Serre Theory {#Sec:BassSerre}
Traditionally, Bass--Serre Theory concerns groups acting on trees *without inversion*, meaning that no element of the group acts as an edge inversion. This is not a significant restriction, since given an action with inversion one can subdivide the edges of the tree and thus obtain an action without inversion. We abide by this tradition here and restrict our attention to inversion-free actions. Let $T$ be a tree and let $G$ act on $T$ without inversion. We closely follow Section 5 of Serre's book [@Serre:trees], so readers seeking a more complete description of the theory can consult this source. Our aim in this section is for readers to see that a local action diagram and its corresponding universal group are thoroughly dissimilar to a graph of groups and its universal cover, but nevertheless the beautiful correspondence in Bass--Serre Theory between the action and its description as a graph of groups is mirrored in our theory of local action diagrams. In Serre's work, a loop always admits an automorphism of order two which changes its orientation. We temporarily adopt this convention for Section [2.1](#Sec:BassSerre){reference-type="ref" reference="Sec:BassSerre"}.
The combinatorial structure at the heart of Bass--Serre Theory is called a *graph of groups*. This is a connected nonempty graph $\Gamma$, together with some groups that will be associated with the vertices and edges of $\Gamma$; this association can be thought of as the vertices and edges of $\Gamma$ being coloured with a palette of colours comprised of these groups. More precisely, for each vertex $v \in V\Gamma$ we have a group $\mathcal{G}_v$ (these are called the *vertex groups*) and for each arc $a \in A\Gamma$ we have a group $\mathcal{G}_a$ satisfying $\mathcal{G}_{\overline{a}} = \mathcal{G}_a$ (these are called the *edge groups*); we also have a monomorphism $\mathcal{G}_a \rightarrow \mathcal{G}_{t(a)}$ allowing us to view any edge group as a subgroup of the vertex groups of the vertices that comprise the edge. Let us denote this graph of groups as $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$.
This concise combinatorial structure admits two natural universal objects. The first is a tree called the *universal cover* of the graph of groups $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$; the second is the *fundamental group* of the graph of groups. Before formally defining these, let us first explore their significance. The first significant part of the Fundamental Theorem of Bass--Serre Theory (see Theorem [Theorem 4](#thm:FundThmBassSerre){reference-type="ref" reference="thm:FundThmBassSerre"} below) is essentially a decomposition of the action $(T,G)$ in the language of graphs of groups:
> ($\circledast$) *There is a graph of groups associated to $(T,G)$, and $G$ can be identified with the fundamental group of this graph of groups.*
The second significant part of the theorem is essentially a method of constructing inversion-free actions of groups on trees using graphs of groups:
> ($\circledcirc$) *Given a graph of groups $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$, its fundamental group acts on its universal cover (which is a tree) without inversion, and the graph of groups associated with this action via $\circledast$ is again $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$.*
These two components of the theorem are the foundation of Bass--Serre Theory. For our purposes they are significant for two reasons. The constructive part ($\circledcirc$) is not usable in general for constructing new examples of nondiscrete actions of groups on trees because of what might be thought of as a 'chicken or the egg' dilemma (see Section [3.1](#subsection:bass_serre_limitations){reference-type="ref" reference="subsection:bass_serre_limitations"}). This limitation can be overcome via the local-to-global theory of groups acting on trees, and this local-to-global approach provided the motivation for our theory of local action diagrams. The second reason is that the rich correspondence between the action $(T,G)$ and the combinatorial object (i.e. the graph of groups) is mirrored in our theory of local action diagrams, in which the combinatorial object is a local action diagram.
Let us now formally define the objects in Bass--Serre Theory, and then give a precise statement of the theorem.
### The associated graph of groups (see [@Serre:trees §5.4]) {#sec:Assoc_graph}
Suppose $G$ acts on a tree $T$ without inversion. We first describe its *associated graph of groups*. The underlying graph of the graph of groups is the quotient graph $\Gamma := G \backslash T$. We next use $\Gamma$ to choose a subtree of $T$ in a coherent way so that we can take the vertex and arc stabilisers of this subtree to be the vertex and edge groups of the graph of groups.
Choose an orientation $E^+$ of $\Gamma$, and for each $a \in A\Gamma$ set $e(a) = 0$ whenever $a \in E^+$ and set $e(a) = 1$ otherwise. Consider the subgraphs of $\Gamma$ that are trees; these form an ordered set (ordered by inclusion) and by Zorn's Lemma this set has a maximal element $M$, called a *maximal tree of $\Gamma$*, which is easily seen to contain every vertex of $\Gamma$ (see [@Serre:trees §2 Proposition 11] for example). One can lift $M$ to a subtree $T'$ of $T$; that is, $T'$ is isomorphic to $M$ via the natural map $\pi$ which takes each vertex $v \in VT'$ (resp. arc $a \in AT'$) to the vertex $Gv$ (resp. arc $Ga$) in $M \subseteq G \backslash T$.
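As an aside, for a finite connected graph no appeal to Zorn's Lemma is needed: a maximal tree can be grown greedily. The sketch below (an added illustration, with an arbitrarily chosen graph) does this by breadth-first search; note how the resulting tree contains every vertex.

```python
from collections import deque

# Illustrative sketch: a maximal (spanning) tree of a finite connected graph,
# grown by breadth-first search.  The graph below is an arbitrary example.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
adjacency = {}
for u, v in edges:
    adjacency.setdefault(u, []).append(v)
    adjacency.setdefault(v, []).append(u)

def maximal_tree(adjacency, root=0):
    """Return the edges of a spanning tree; it contains every vertex."""
    tree_edges, seen, queue = [], {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in seen:
                seen.add(v)
                tree_edges.append((u, v))
                queue.append(v)
    return tree_edges

print(maximal_tree(adjacency))   # e.g. [(0, 1), (0, 2), (2, 3), (2, 4)]
```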
Next, we construct a map $\varphi$ that takes arcs in $A\Gamma$ to arcs in $AT$ with the property that $\varphi(\overline{a}) = \overline{\varphi(a)}$ for all $a \in A\Gamma$. The map will also be defined on the vertices of $M$. On $M$ take $\varphi$ to be $\pi^{-1}: M \rightarrow T'$. For $a \in E^+ \setminus AM$ there is an arc $b \in AT$ such that $o(b) = \varphi(o(a))$ and $\pi(b) = a$ (such a $b$ exists because the arcs of $T$ issuing from $\varphi(o(a))$ project onto the arcs of $\Gamma$ issuing from $o(a)$); we set $\varphi(a) = b$ and $\varphi(\overline{a}) = \overline{b}$. In this way we have $o(\varphi(a)) = \varphi(o(a))$. Now $t(\varphi(a))$ and $\varphi(t(a))$ have the same image under $\pi$ since both project to $t(a) \in V\Gamma$. In particular, $t(\varphi(a))$ and $\varphi(t(a))$ lie in the same $G$-orbit, so we can choose $\gamma_a \in G$ such that $t(\varphi(a)) = \gamma_a \varphi(t(a))$. From this we obtain elements $\gamma_a \in G$ for all arcs $a \in A\Gamma$ by specifying that $\gamma_{\overline{a}} = \gamma_a^{-1}$ and that $\gamma_a$ is the identity whenever $a \in AM$.
We can now define the vertex and edge groups of our associated graph of groups. For each vertex $v$ of $\Gamma$ recall that $v$ is a vertex in $M$, so $\varphi(v)$ is defined and is equal to a vertex in $T$; take the vertex group $\mathcal{G}_v$ to be the vertex stabiliser $G_{\varphi(v)} \leq G$. Similarly, for an arc $a$ of $\Gamma$ recall that $\varphi(a)$ is an arc in $T$; take the edge group $\mathcal{G}_a$ to be the arc stabiliser $G_{\varphi(a)} \leq G$, with $\mathcal{G}_{\overline{a}} = \mathcal{G}_a$. Finally, we define the monomorphism $\mathcal{G}_a \rightarrow \mathcal{G}_{t(a)}$ as $g \mapsto \gamma_a^{e(a)-1} g \gamma_a^{1-e(a)}$.
**Example 1**. Let $T$ be the biregular tree $T_{m,n}$ for finite distinct $m, n > 2$. Notice that the action $(T, \mathop{\mathrm{Aut}}(T))$ has one edge orbit and two vertex orbits, and these vertex orbits correspond to the natural bipartition of $T_{m,n}$ into vertices with valency $m$, and vertices with valency $n$. Thus, the graph of groups associated with $(T, \mathop{\mathrm{Aut}}(T))$ is a pair of vertices connected by a single edge. The vertex groups are the stabilisers $\mathop{\mathrm{Aut}}(T)_v$ and $\mathop{\mathrm{Aut}}(T)_w$ where $v,w$ are adjacent vertices in $VT$ with $v$ having valency $m$ in $T$ and $w$ having valency $n$, and the edge group is $\mathop{\mathrm{Aut}}(T)_{(v,w)}$.
### The fundamental group of a graph of groups (see [@Serre:trees §5.1]) {#Sec:BassSerreFundGp}
Suppose we have a graph of groups $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$. For each arc $a \in A\Gamma$ we have a monomorphism $\mathcal{G}_a \rightarrow \mathcal{G}_{t(a)}$, and we denote the image of any $h \in \mathcal{G}_a$ under this monomorphism by $h^a$. We need extra generators for the fundamental group, one for each arc in $\Gamma$, so for all $a \in A\Gamma$ we introduce new letters $g_a$ not contained in any of the vertex or edge groups. Our generating set $\Omega$ is then the union of $\{g_a : a \in A\Gamma\}$ and the vertex groups $\bigcup_{v \in V\Gamma} \mathcal{G}_v$. Let $M$ be a maximal tree of $\Gamma$. Then we define the *fundamental group $\Pi_1(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$*, abbreviated to $\Pi_1$, to be $$\langle \Omega : g_{\overline{a}} = g_a^{-1}, \ g_a h^a g_a^{-1} = h^{\overline{a}} \ (\forall a \in A\Gamma, h \in \mathcal{G}_a), \ g_b = 1 \ (\forall b \in AM) \, \rangle.$$
One can show (see [@Serre:trees §5 Proposition 20]) that the definition of $\Pi_1$ is independent of the choice for $M$.
**Example 2**. If $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ is the graph of groups in Example [Example 1](#Ex:AssocGraphOfGroups){reference-type="ref" reference="Ex:AssocGraphOfGroups"}, then the definition above gives $\Pi_1$ to be the amalgamated free product $\mathop{\mathrm{Aut}}(T)_v \ast_{\mathop{\mathrm{Aut}}(T)_{(v,w)}} \mathop{\mathrm{Aut}}(T)_w$.
### Universal covering of a graph of groups (see [@Serre:trees §5.3]) {#Sec:UnivCoveringBassSerre}
Now suppose we are given a graph of groups $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$, a maximal tree $M$ of $\Gamma$ and an orientation $E^+$ of $\Gamma$. Let $\Pi_1$ be the fundamental group of $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$. Define the map $e : A\Gamma \rightarrow \{0,1\}$ as in Section [2.1.1](#sec:Assoc_graph){reference-type="ref" reference="sec:Assoc_graph"}, with $e(a) = 0$ whenever $a \in E^+$ and $e(a) = 1$ otherwise. For each arc $a \in A\Gamma$, let $\hat{a} \in \{a, \overline{a}\}$ be such that $\hat{a} \in E^+$, and let $\mathcal{G}_a^a$ be the image (in $\mathcal{G}_{t(a)}$) of the edge group $\mathcal{G}_a$ under the monomorphism $h \mapsto h^a$. Notice that $\hat{a} = \hat{\overline{a}}$.
The *universal cover* of $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ is a graph $\tilde{T}$ whose vertex set is the disjoint union of left cosets, $$V\tilde{T} := \bigsqcup_{v \in V\Gamma} \Pi_1 / \mathcal{G}_v,$$ and whose arc set is the disjoint union, $$\bigsqcup_{a \in A\Gamma} \Pi_1 / \mathcal{G}_{\overline{\hat{a}}}^{\overline{\hat{a}}}.$$ Arc inversion in $\tilde{T}$, and the maps $o$ and $t$, are defined as follows. For each arc $a \in A\Gamma$, let $\tilde{a}$ denote the trivial coset in $\Pi_1 / \mathcal{G}_{\overline{\hat{a}}}^{\overline{\hat{a}}}$ corresponding to $\mathcal{G}_{\overline{\hat{a}}}^{\overline{\hat{a}}}$, and for each vertex $v \in V\Gamma$, let $\tilde{v}$ denote the trivial coset in $\Pi_1 / \mathcal{G}_{v}$ corresponding to $\mathcal{G}_{v}$. Any $\tilde{T}$-arc lies in $\Pi_1 / \mathcal{G}_{\overline{\hat{a}}}^{\overline{\hat{a}}}$ for some arc $a \in A\Gamma$. Each $\tilde{T}$-arc in $\Pi_1 / \mathcal{G}_{\overline{\hat{a}}}^{\overline{\hat{a}}}$ is of the form $g \tilde{a}$, for some $g \in \Pi_1$. Recalling our elements $g_a \in \Pi_1$ from Section [2.1.2](#Sec:BassSerreFundGp){reference-type="ref" reference="Sec:BassSerreFundGp"} and setting $v_o := o(a)$ and $v_t := t(a)$, we then take $$\overline{g\tilde{a}} = g \tilde{\overline{a}}, \quad o(g\tilde{a}) = gg_a^{-e(a)} \widetilde{v_o}, \quad t(g\tilde{a}) = gg_a^{1-e(a)} \widetilde{v_t}.$$
One can then check (see [@Serre:trees §5.3]) that $\tilde{T}$ is a tree and that $\Pi_1$ acts on $\tilde{T}$ by left multiplication, with this action being by automorphisms of $\tilde{T}$. Under this action the quotient graph $\Pi_1 \backslash \tilde{T}$ is $\Gamma$, and for all vertices $v \in V\Gamma$ (resp. arcs $a \in A\Gamma$) the stabiliser $\Pi_{1 \tilde{v}}$ (resp. $\Pi_{1 \tilde{a}}$) is equal to $\mathcal{G}_v$ (resp. $\mathcal{G}_{\overline{\hat{a}}}^{\overline{\hat{a}}} \cong \mathcal{G}_a$). For any arc $a$ in the maximal tree $M$ of $\Gamma$ we have $g_a = 1$ and thus $\widetilde{o(a)} = o(\tilde{a})$ and $\widetilde{t(a)} = t(\tilde{a})$, so we have a lift of $M$ to a subtree $\tilde{T}'$ of the tree $\tilde{T}$ via $v \mapsto \tilde{v}$ and $a \mapsto \tilde{a}$ for $v \in VM$ and $a \in AM$. Thus, the associated graph of groups for $(\tilde{T}, \Pi_1)$ is $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$.
**Example 3**. Let us continue Examples [Example 1](#Ex:AssocGraphOfGroups){reference-type="ref" reference="Ex:AssocGraphOfGroups"}--[Example 2](#Ex:FundGp){reference-type="ref" reference="Ex:FundGp"}, resuming their notation. For the tree $T = T_{m,n}$ and the action $(T, \mathop{\mathrm{Aut}}(T))$, the graph of groups $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ consists of two vertices connected by a single edge, with vertex groups $\mathop{\mathrm{Aut}}(T)_v$ and $\mathop{\mathrm{Aut}}(T)_w$, and edge group $\mathop{\mathrm{Aut}}(T)_{(v,w)}$, where $v$ has valency $m$ in $T$ and $w$ has valency $n$. Recall that the fundamental group of this graph of groups is $\Pi_1=\mathop{\mathrm{Aut}}(T)_v \ast_{\mathop{\mathrm{Aut}}(T)_{(v,w)}} \mathop{\mathrm{Aut}}(T)_w$. Now $\mathop{\mathrm{Aut}}(T)_v$ is transitive on the edges in $T$ that are incident to $v$, so the index of the edge group $\mathop{\mathrm{Aut}}(T)_{(v,w)}$ in the vertex group $\mathop{\mathrm{Aut}}(T)_v$ is $m$. Similarly the index of $\mathop{\mathrm{Aut}}(T)_{(v,w)}$ in $\mathop{\mathrm{Aut}}(T)_w$ is $n$. The universal cover $\tilde{T}$ of $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ is a tree, and by definition we have $V\tilde{T} = \left ( \Pi_1 / \mathop{\mathrm{Aut}}(T)_v \right ) \sqcup \left ( \Pi_1 / \mathop{\mathrm{Aut}}(T)_w \right )$ with the two sets in this disjoint union naturally bipartitioning $\tilde{T}$.
### Fundamental Theorem of Bass--Serre Theory (see [@Serre:trees §5.4]) {#Sec:FundThmBassSerre}
We are now able to formally state the fundamental theorem.
**Theorem 4** ([@Serre:trees §5]). *Let $G$ be a group and $T$ be a tree.*
*($\circledast$) Suppose $G$ acts on $T$ without inversion. Let $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ be its associated graph of groups, and choose a maximal tree $M$ of $\Gamma$. Let $\Pi_1$ be the fundamental group of this graph of groups (with respect to $M$) and let $\tilde{T}$ be the universal covering (with respect to $M$). Then the map $\phi : \Pi_1 \rightarrow G$ defined by the inclusion $\mathcal{G}_v \leq G$ (for $v \in V\Gamma$) and $\phi(g_a) = \gamma_a$ (for $a \in A\Gamma$) is an isomorphism of groups. The map $\psi : \tilde{T} \rightarrow T$ given, for all $h \in \Pi_1$, by $\psi(h \tilde{v}) = \phi(h) \varphi(v)$ (for all $v \in V\Gamma$) and $\psi(h\tilde{a}) = \phi(h) \varphi(a)$ (for all $a \in A\Gamma$) is an isomorphism of graphs. Moreover, the isomorphism $\psi$ is $\phi$-equivariant, and so the actions $(\tilde{T}, \Pi_1)$ and $(T,G)$ can be identified.*
*($\circledcirc$) Suppose $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ is a graph of groups. Let $\Pi_1$ be its fundamental group and $\tilde{T}$ its universal cover. Then $(\tilde{T}, \Pi_1)$ is an inversion-free action on a tree whose associated graph of groups is $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$.*
**Example 5**. Let us continue Examples [Example 1](#Ex:AssocGraphOfGroups){reference-type="ref" reference="Ex:AssocGraphOfGroups"}--[Example 3](#Ex:UnivCover){reference-type="ref" reference="Ex:UnivCover"}, resuming their notation. We have the tree $T = T_{m,n}$ and a graph of groups $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ for the action $(T, \mathop{\mathrm{Aut}}(T))$. The universal cover of $(\Gamma, (\mathcal{G}_v), (\mathcal{G}_a))$ is $\tilde{T}$ and its fundamental group is $\mathop{\mathrm{Aut}}(T)_v \ast_{\mathop{\mathrm{Aut}}(T)_{(v,w)}} \mathop{\mathrm{Aut}}(T)_w$. By the Fundamental Theorem of Bass--Serre Theory, we have that the actions of $\mathop{\mathrm{Aut}}(T)_v \ast_{\mathop{\mathrm{Aut}}(T)_{(v,w)}} \mathop{\mathrm{Aut}}(T)_w$ on $\tilde{T}$ and of $\mathop{\mathrm{Aut}}(T)$ on $T$ are permutationally isomorphic.
## Jacques Tits' Independence Property $(\mathrm{P}_{})$ and simplicity {#Section:tits}
We describe [@Tits70 §4.2], where *Tits' Independence Property $(\mathrm{P}_{})$* (also called *property $(\mathrm{P}_{})$* or the *independence property*) is introduced. Suppose $G \leq \mathop{\mathrm{Aut}}(T),$ where $T$ is a tree. If $C$ is a (finite or infinite) nonempty simple path in $T$, then for each $v \in VT$ there is a unique vertex $\pi_{C}(v)$ in $C$ which is closest to $v$. This gives a well-defined map $v \mapsto \pi_{C}(v)$. For each $w \in VC$, the set $\pi^{-1}_{C}(w)$ is the vertex set of a subtree of $T$. Each of these subtrees is invariant under the action of the pointwise stabiliser $G_{(C)}$ of $C$, and so we define $G^w_{(C)}$ to be the subgroup of $\mathop{\mathrm{Sym}}(\pi^{-1}_{C}(w))$ induced by $G_{(C)}$. We therefore have homomorphisms $\varphi_w: G_{(C)} \rightarrow G_{(C)}^w$ for each $w \in VC$ from which we obtain the homomorphism, $$\label{equation:propP}
\varphi: G_{(C)} \rightarrow \prod_{w \in VC} G_{(C)}^w.$$ Now $G$ has the *independence property for $C$* if the homomorphism $\varphi$ is an isomorphism, and $G$ has *Tits' Independence Property $(\mathrm{P}_{})$* if it has the independence property for $C$ for every possible simple path $C$. Intuitively, $G$ has the independence property for a path $C$ if $G_{(C)}$ can act independently on all of the subtrees 'hanging' from $C$.
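The projection $\pi_C$ and the subtrees $\pi^{-1}_C(w)$ are easy to compute on a finite tree. The following sketch (an added illustration, with an arbitrarily chosen tree and path) finds $\pi_C$ by a breadth-first search started simultaneously from the vertices of $C$, and then groups vertices into the subtrees 'hanging' from $C$ on which $G_{(C)}$ must act.

```python
from collections import deque

# Illustrative sketch: for a finite tree and a path C in it, compute the
# projection pi_C (closest vertex of C) and the subtrees hanging off C.
edges = [(0, 1), (1, 2), (2, 3),          # the path C: 0 - 1 - 2 - 3
         (1, 4), (4, 5), (2, 6), (3, 7), (3, 8)]
C = [0, 1, 2, 3]

adjacency = {}
for u, v in edges:
    adjacency.setdefault(u, []).append(v)
    adjacency.setdefault(v, []).append(u)

# Multi-source BFS from the vertices of C: the source that first reaches a
# vertex v is the unique closest vertex pi_C(v) of C.
projection = {w: w for w in C}
queue = deque(C)
while queue:
    u = queue.popleft()
    for v in adjacency[u]:
        if v not in projection:
            projection[v] = projection[u]
            queue.append(v)

# Group vertices by their projection: these are the vertex sets pi_C^{-1}(w),
# each spanning a subtree that the pointwise stabiliser G_(C) leaves invariant.
subtrees = {}
for v, w in projection.items():
    subtrees.setdefault(w, set()).add(v)
print(subtrees)   # {0: {0}, 1: {1, 4, 5}, 2: {2, 6}, 3: {3, 7, 8}}
```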
**Theorem 6** ([@Tits70 Théorème 4.5]). *Let $T$ be a tree and suppose $G \leq \mathop{\mathrm{Aut}}(T)$ has property $(\mathrm{P}_{})$. Let $G^+$ be the group generated by the pointwise stabilisers in $G$ of edges in $T$. If $G$ is geometrically dense, then $G^+ \unlhd G$ and $G^+$ is trivial or abstractly simple.*
# Universal groups acting on trees
## Limitations to Bass--Serre Theory {#subsection:bass_serre_limitations}
In a graph of groups, we specify embeddings of edge groups into vertex groups. In doing so, the vertex stabiliser is specified *as an abstract group*, while the action on the $1$-ball is also specified. In many commonly encountered situations, this leads to complications that are intractable. We describe three of them here; they are related but different.
1\. In the action on a tree resulting from a graph of groups, the vertex stabiliser can only be a quotient of the vertex group. So, we can't obtain 'large' (e.g. nondiscrete) vertex stabilisers this way, unless the vertex group we started with already had an interesting action on a tree.
2\. Suppose we are given a graph of groups and in addition we are told that the action is in some sense interesting. Even though we have been given the vertex stabiliser, how it acts on the tree on a large scale is unintuitive and often abstruse. Even basic questions, like whether or not the action is faithful, can be difficult to answer. Restricting to locally finite trees does not sufficiently reduce the complexity: we are still in a sense required to understand the dynamics of sequences of virtual isomorphisms as we travel along all possible walks through the graph of groups. This is typically an insurmountable problem when the vertex stabilisers are infinite, or when one is interested in understanding ever larger finite vertex stabilisers as in the Weiss conjecture.
3\. The possible combinations of vertex groups are difficult to understand because they are not independent of one another: we need a common edge group for any pair of neighbouring vertices. If we wish to choose a collection of local actions independently of one another, there are very few ways to do this and all have significant limitations. For example, we could make all the vertex groups infinitely generated free groups, but then determining the action of the resulting vertex groups on the tree is hopelessly complicated, rendering the task of understanding the group's closure in $\mathop{\mathrm{Aut}}(T)$ as a topological group futile.
Readers who are familiar with Bass--Serre Theory will no doubt recognise these limitations. For readers unfamiliar with Bass--Serre Theory, we set an (intractable) exercise that highlights some of these issues.
**Exercise 1**. Let $T$ be the biregular tree $T_{7,5}$. Using only Bass--Serre Theory, attempt to construct a subgroup $G \leq \mathop{\mathrm{Aut}}(T)$ that has two vertex orbits on $T$, such that every vertex stabiliser $G_v$ is infinite and does not induce $S_7$ or $S_5$ on the set of neighbours of the vertex $v$. The point of this exercise is not to complete the construction, it is to make an attempt and in doing so experience the aforementioned limitations 1--3.
As we shall see, the exercise has an easy solution using the theory of local action diagrams. However, even when we take this solution and (pretending for a moment that we do not know it is a solution) use Bass--Serre Theory to analyse its action, our point (2) above becomes apparent.
## Burger--Mozes groups {#Section:BurgerMozes}
Here we largely follow Burger and Mozes' paper [@BurgerMozes §3.2] with one significant difference: we do not insist that the trees be locally finite. Let $d > 2$ be some finite or infinite cardinal and let $T$ be a regular tree of valency $d$. Fix some set $X$ such that $|X| = d$, and let $F \leq \mathop{\mathrm{Sym}}(X)$. We say that $G \leq \mathop{\mathrm{Aut}}(T)$ is *locally-$F$* if for all $v \in VT$ the permutation group induced on the set $B(v)$ of neighbours of $v$ by the vertex stabiliser $G_v$ is permutationally isomorphic to $F$. A *legal colouring* is a map $\mathcal{L}: AT \rightarrow X$ that satisfies the following:
1. For each vertex $v \in VT$, the restriction $\mathcal{L}\big|_{o^{-1}(v)}: o^{-1}(v) \rightarrow X$ is a bijection;
2. For all arcs $a \in AT$ we have $\mathcal{L}(a) = \mathcal{L}(\overline{a})$.
One can always construct a legal colouring of $T$. For $g \in \mathop{\mathrm{Aut}}(T)$ and $v \in VT$ we define the *$\mathcal{L}$-local action of $g$ at $v$* to be, $$\label{eq:LocalAction}
\sigma_{\mathcal{L}, v}(g):= \mathcal{L}\big|_{o^{-1}(gv)} g \mathcal{L}\big|^{-1}_{o^{-1}(v)}.$$ Notice that $\sigma_{\mathcal{L}, v}(g) \in \mathop{\mathrm{Sym}}(X)$. One might ask if we can constrain these bijections, so that they all lie in some common subgroup of $\mathop{\mathrm{Sym}}(X)$. Such a restriction gives rise to the Burger--Mozes universal groups.
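The following is a hedged computational sketch of the $\mathcal{L}$-local action, not taken from [@BurgerMozes]: on a radius-$2$ ball of $T_3$ we fix a legal colouring by hand, choose an automorphism $g$ of the ball swapping two of the subtrees at the centre, and compute $\sigma_{\mathcal{L},v}(g)$ at the interior vertices. Leaves of the truncated ball are skipped, since the bijection condition only makes sense at vertices whose full neighbourhood is present; all names and choices in the code are illustrative assumptions.

```python
# Illustrative sketch of the L-local action on a radius-2 ball of the
# 3-regular tree.  Vertex 0 is the centre; 1,2,3 its neighbours; 4..9 leaves.
edges = [(0, 1), (0, 2), (0, 3),
         (1, 4), (1, 5), (2, 6), (2, 7), (3, 8), (3, 9)]
neighbours = {}
for u, v in edges:
    neighbours.setdefault(u, []).append(v)
    neighbours.setdefault(v, []).append(u)

# A legal colouring with colour set X = {0,1,2}: colours are assigned to
# undirected edges (so L(a) = L(reverse of a)), and at every interior vertex
# the three incident edges carry three distinct colours.
colour = {frozenset(e): c for e, c in [
    ((0, 1), 0), ((0, 2), 1), ((0, 3), 2),
    ((1, 4), 1), ((1, 5), 2), ((2, 6), 0), ((2, 7), 2), ((3, 8), 0), ((3, 9), 1)]}

# An automorphism g of the ball swapping the subtrees below 1 and 2.
g = {0: 0, 1: 2, 2: 1, 3: 3, 4: 6, 5: 7, 6: 4, 7: 5, 8: 8, 9: 9}

def local_action(g, v):
    """sigma_{L,v}(g): send the colour of the arc (v,w) to the colour of (gv, gw)."""
    return {colour[frozenset((v, w))]: colour[frozenset((g[v], g[w]))]
            for w in neighbours[v]}

for v in (0, 1, 2, 3):            # interior vertices only
    print(v, local_action(g, v))
# Each value is a permutation of {0,1,2}; here every one lies in F = <(0 1)>,
# so g satisfies the constraint "sigma_{L,v}(g) in F" that defines the
# universal group introduced next.
```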
The *Burger--Mozes universal group of $F$* (with respect to $\mathcal{L}$) is the group, $$\mathbf{U}_\mathcal{L}(F) := \{g \in \mathop{\mathrm{Aut}}(T) : \sigma_{\mathcal{L}, v}(g) \in F \quad \forall v \in VT\}.$$ Two legal colourings $\mathcal{L}$ and $\mathcal{L}'$ give rise to universal groups $\mathbf{U}_\mathcal{L}(F)$ and $\mathbf{U}_{\mathcal{L}'}(F)$ that are conjugate in $\mathop{\mathrm{Aut}}(T)$; for this reason we replace $\mathbf{U}_\mathcal{L}(F)$ with $\mathbf{U}(F)$ and speak of *the* Burger--Mozes universal group $\mathbf{U}(F)$ of $F$. Further properties of $\mathbf{U}(F)$ described in [@BurgerMozes] are as follows. Note that some of these properties are expanded upon in Section [3.3](#Section:BoxProduct){reference-type="ref" reference="Section:BoxProduct"}.
1. $\mathbf{U}(F)$ is a vertex transitive and locally-$F$ subgroup of $\mathop{\mathrm{Aut}}(T)$, and if $F$ has finite degree then $\mathbf{U}(F)$ is closed;
2. $\mathbf{U}(F)$ enjoys Tits' Independence Property $(\mathrm{P}_{})$, and consequently by Theorem [Theorem 6](#theorem:Tits4_5){reference-type="ref" reference="theorem:Tits4_5"} the subgroup $\mathbf{U}(F)^+$ is trivial or simple;
3. If $F$ has finite degree then the index $|\mathbf{U}(F) : \mathbf{U}(F)^+|$ is finite if and only if $F \leq \mathop{\mathrm{Sym}}(X)$ is transitive and generated by point stabilisers; when this happens $\mathbf{U}(F)^+ = \mathbf{U}(F) \cap (\mathop{\mathrm{Aut}}(T))^+$ and therefore $|\mathbf{U}(F) : \mathbf{U}(F)^+| = 2$;
4. When $F$ is transitive on $X$, the group $\mathbf{U}(F)$ has the following universal property: $\mathbf{U}(F)$ contains an $\mathop{\mathrm{Aut}}(T)$-conjugate of every locally-$F$ subgroup of $\mathop{\mathrm{Aut}}(T)$.
Thus, in the situation when $F$ is transitive, we have a natural way to describe $\mathbf{U}(F)$: it is the largest locally-$F$ subgroup of $\mathop{\mathrm{Aut}}(T)$.
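Since properties 3 and 4 above are governed entirely by conditions on $F$, those conditions can be tested mechanically when $F$ has finite degree. The sketch below is an added illustration (the groups chosen are arbitrary); it checks whether a finite permutation group is transitive and whether it is generated by its point stabilisers.

```python
# Illustrative sketch: testing the local conditions on a finite F <= Sym(X)
# that control properties of U(F), namely transitivity and being generated
# by point stabilisers.  Permutations are dicts on the finite set X.

def generate(gens, X):
    """All elements of the subgroup of Sym(X) generated by gens."""
    identity = {x: x for x in X}
    elements = {tuple(sorted(identity.items())): identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = {x: h[g[x]] for x in X}
            key = tuple(sorted(gh.items()))
            if key not in elements:
                elements[key] = gh
                frontier.append(gh)
    return list(elements.values())

def is_transitive(F, X):
    x0 = next(iter(X))
    return {g[x0] for g in F} == set(X)

def generated_by_point_stabilisers(F, X):
    stab_elements = [g for g in F if any(g[x] == x for x in X)]
    return len(generate(stab_elements, X)) == len(F)

X = [0, 1, 2]
S3 = generate([{0: 1, 1: 0, 2: 2}, {0: 1, 1: 2, 2: 0}], X)   # S_3
C3 = generate([{0: 1, 1: 2, 2: 0}], X)                        # cyclic of order 3

for name, F in (("S_3", S3), ("C_3", C3)):
    print(name, "transitive:", is_transitive(F, X),
          "generated by point stabilisers:", generated_by_point_stabilisers(F, X))
# S_3 satisfies both conditions, so by property 3 the index |U(S_3):U(S_3)^+|
# is 2; C_3 is transitive but its point stabilisers are trivial, so by the
# same property the index is infinite.
```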
Burger and Mozes used $\mathbf{U}(F)$ to build towards a hoped-for classification of closed $2$-transitive actions on locally finite trees. Here we instead see these groups as a first step towards a local-to-global theory of groups acting on trees. When $T$ is locally finite, $\mathbf{U}(F)$ is obviously a t.d.l.c. subgroup of $\mathop{\mathrm{Aut}}(T)$ with compact vertex stabilisers. In fact, the group can be t.d.l.c. in more general situations; this was shown in [@SmithDuke], where the Burger--Mozes universal groups are viewed as a special case of a more general construction called the *box product*. In this more general setting, various global topological properties are easy to characterise using local conditions (i.e. conditions on $F$). We describe these in Section [3.3](#Section:BoxProduct){reference-type="ref" reference="Section:BoxProduct"}.
## The box product construction {#Section:BoxProduct}
Here we largely follow the second author's paper [@SmithDuke], where the box product was introduced as a product for permutation groups; as a permutational product it is, in some sense, the dual of the wreath product in its product action. In the same paper, the box product was used to produce the first examples of $2^{\aleph_0}$ distinct isomorphism classes of compactly generated locally compact groups that are nondiscrete and topologically simple, answering a long standing open question. The box product arises naturally in the structure theory of subdegree-finite primitive permutation groups due to the second author ([@SmithPrimitive]).
Let $d_1, d_2 > 1$ be two finite or infinite cardinal numbers and let $T = T_{d_1, d_2}$ be the biregular tree with valencies $d_1, d_2$. Fix disjoint sets $X_1, X_2$ such that $|X_1| = d_1$ and $|X_2|=d_2$. Thus there is a natural bipartition of $T$ as $VT = V_1 \sqcup V_2$ such that vertices in $V_1$ have valency $|X_1|$ and vertices in $V_2$ have valency $|X_2|$. Let $F_i \leq \mathop{\mathrm{Sym}}(X_i)$, for $i = 1,2$ with at least one of $F_1, F_2$ being nontrivial. We say $G \leq \mathop{\mathrm{Aut}}(T)$ is *locally-($F_1, F_2$)* if $G$ preserves setwise the parts $V_1$ and $V_2$, and for all $v \in VT$ the permutation group induced on $B(v)$ by the vertex stabiliser $G_v$ is permutationally isomorphic to $F_i$ whenever $v \in V_i$, for $i=1,2$.
In this new context we define a *legal colouring* to be a map $\mathcal{L}: AT \rightarrow X_1 \sqcup X_2$ that satisfies the following:
1. For each vertex $v \in V_i$, the restriction $\mathcal{L}\big|_{o^{-1}(v)}: o^{-1}(v) \rightarrow X_i$ is a bijection, for $i=1,2$;
2. For all $v \in VT$ the restriction $\mathcal{L}\big|_{t^{-1}(v)}$ is constant.
One can always construct a legal colouring of $T$. Using the same $\mathcal{L}$-local action defined in Equation [\[eq:LocalAction\]](#eq:LocalAction){reference-type="ref" reference="eq:LocalAction"}, we define the *topological box product of $F_1$ and $F_2$* to be the group $$\mathbf{U}_\mathcal{L}(F_1, F_2) := \left \{g \in \mathop{\mathrm{Aut}}(T)_{\{V_1\}} : \sigma_{\mathcal{L}, v}(g) \in F_i \quad \forall v \in V_i, \text{ for } i=1,2 \right \}.$$ The *permutational box product of $F_1$ and $F_2$* is the subgroup of $\mathop{\mathrm{Sym}}(V_2)$ induced by the action of $\mathbf{U}_\mathcal{L}(F_1, F_2)$ on $V_2$, and is denoted $F_1 \boxtimes_{\mathcal{L}} F_2$.
As with the Burger--Mozes groups, two legal colourings $\mathcal{L}$ and $\mathcal{L}'$ give rise to groups $\mathbf{U}_\mathcal{L}(F_1, F_2)$ and $\mathbf{U}_{\mathcal{L}'}(F_1, F_2)$ that are conjugate in $\mathop{\mathrm{Aut}}(T)$, and so we speak of *the* box product and write $\mathbf{U}(F_1, F_2)$ and $F_1 \boxtimes F_2$.
The Burger--Mozes groups arise as special cases of the box product construction: for any permutation group $F \leq \mathop{\mathrm{Sym}}(X)$ where $|X| \geq 3$, the Burger--Mozes group $\mathbf{U}(F)$ is permutationally isomorphic to $S_2 \boxtimes F$, where $S_2$ here denotes the symmetric group of degree $2$. To see why, write $d := |X|$ and note that $\mathbf{U}(F)$ as a group of automorphisms of the $d$-regular tree $T_d$ induces a faithful action on $T_{2,d}$ because $T_{2,d}$ is the barycentric subdivision of $T_d$. Now $\mathbf{U}(S_2, F)$ also acts as a group of automorphisms on $T_{2,d}$ and one can easily verify that the actions of $\mathbf{U}(F)$ and $\mathbf{U}(S_2, F)$ induced on the set of $d$-valent vertices of $T_{2,d}$ are permutationally isomorphic. From this we have that $\mathbf{U}(F)$ is topologically isomorphic to $S_2 \boxtimes F$, $\mathbf{U}(F, S_2)$, $F \boxtimes S_2$, and $\mathbf{U}(S_2, F)$.\
As a permutational product, the box product has the following striking similarity to the wreath product in its product action. Recall that $F_1 \mathop{\rm Wr}\nolimits F_2$ (in its product action) is a primitive permutation group if and only if $F_1$ is primitive and nonregular and $F_2$ is transitive and finite. Special cases of this fact were, according to Peter M. Neumann, first proved by W. A. Manning in the early 20th Century. For over a century, no other product of permutation groups was known to preserve primitivity in this kind of generality. Compare this with the box product: $F_1 \boxtimes F_2$ is a primitive permutation group if and only if $F_1$ is primitive and nonregular and $F_2$ is transitive.
With the benefit of hindsight, we can see hints of this local and global primitivity equivalence for the box product construction going back to the 1970s. Indeed, groups of automorphisms of simple graphs with connectivity one (see [@ReidSmith §7.2]) can be realised as faithful inversion-free actions on trees, and in 1977 H. A. Jung and M. E. Watkins in [@JungWatkins Theorem 4.2] proved that the automorphism group of a simple graph $\Gamma$ with connectivity one is (i) vertex primitive if and only if (ii) all its lobes are vertex primitive, have at least 3 vertices and are pairwise isomorphic. Now (i) implies that the induced faithful tree action is primitive on one part $P$ of the bipartition of the tree, and (ii) implies that the local actions at vertices in $T \setminus P$ are primitive. Arguments by Rögnvaldur G. Möller in the 1994 paper [@Moller:Primitive] (which used Warren Dicks and M. J. Dunwoody's powerful theory of structure trees from [@DicksDunwoody]) can be used to show all infinite, subdegree-finite primitive permutation groups with more than one end have a locally finite orbital digraph with connectivity one. In 2010 Jung and Watkins' result was generalised to directed graphs by the second author in [@SmithDigraph], and in this context the primitivity condition in (ii) becomes primitive but not regular. These observations inspired constructions in the second author's 2010 paper [@SmithSubdegrees] which, again with hindsight, can be seen as precursors to the permutational box product construction. As noted in [@SmithDuke], the box product $F_1 \boxtimes F_2$ can be constructed for closed groups $F_1, F_2$ using refinements of the arguments in [@SmithSubdegrees] (the arguments are based on countable relational structures).
We also see hints coming from the world of groups acting on locally finite trees: obviously, in Burger and Mozes' 2000 paper [@BurgerMozes], but also in Pierre-Emmanuel Caprace and Tom De Medts' 2011 result [@CapraceDeMedts Theorem 3.9], which states the following. Let $T$ be a locally finite tree and suppose $G \leq \mathop{\mathrm{Aut}}(T)$ is nondiscrete, noncompact, compactly generated, closed, t.d.l.c. and topologically simple with Tits' Independence Property $(\mathrm{P}_{})$. Then the following conditions are equivalent: (i) every proper open subgroup of $G$ is compact; and (ii) $G$ splits as an amalgamated free product $G \cong A \ast_C B$, where $A$ and $B$ are maximal compact open subgroups, $C = A \cap B$ and the $A$-action on $A/C$ (resp. the $B$-action on $B/C$) is primitive and noncyclic.
For a group $G$ satisfying Caprace and De Medts' result, condition (i) implies $G$ has a natural permutation representation that is primitive, and (ii) implies that the action of $G$ on its Bass--Serre tree is primitive on *both* parts of its bipartition and moreover the local action at vertices in both parts of the bipartition is primitive but not regular.\
The following are further properties of the box product construction. Since $\mathbf{U}(F)$ is topologically isomorphic to $\mathbf{U}(F, S_2)$ and permutationally isomorphic to $S_2\boxtimes F$, many of these properties expand the properties of Burger--Mozes groups given in Section [3.2](#Section:BurgerMozes){reference-type="ref" reference="Section:BurgerMozes"}.
1. $\mathbf{U}(F_1, F_2)$ is a locally-$(F_1, F_2)$ subgroup of $\mathop{\mathrm{Aut}}(T)$.
2. Any subset $Y \subseteq X_1 \cup X_2$ is an orbit of $F_1$ or $F_2$ if and only if $t(\mathcal{L}^{-1}(Y))$ is an orbit of $\mathbf{U}_\mathcal{L}(F_1, F_2)$. In particular, $F_1 \boxtimes F_2$ is transitive if and only if $F_1$ is transitive, and $\mathbf{U}(F_1, F_2)$ has precisely two vertex-orbits ($V_1$ and $V_2$) if and only if $F_1$ and $F_2$ are transitive.
3. If $F_i$ is a closed subgroup of $\mathop{\mathrm{Sym}}(X_i)$ for $i=1,2$ then $\mathbf{U}(F_1, F_2)$ is a closed subgroup of $\mathop{\mathrm{Aut}}(T)$.
4. $\mathbf{U}(F_1, F_2)$ enjoys Tits' Independence Property $(\mathrm{P}_{})$.
5. [\[item:BoxSimple\]]{#item:BoxSimple label="item:BoxSimple"} If $F_1, F_2$ are generated by point stabilisers, then $\mathbf{U}(F_1, F_2)$ is simple if and only if $F_1$ or $F_2$ is transitive.
6. When $F_1, F_2$ are transitive, the group $\mathbf{U}(F_1, F_2)$ has the following universal property: $\mathbf{U}(F_1, F_2)$ contains an $\mathop{\mathrm{Aut}}(T)$-conjugate of every locally-$(F_1, F_2)$ subgroup of $\mathop{\mathrm{Aut}}(T)$.
7. If $F_1, F_2$ are closed, then $\mathbf{U}(F_1, F_2)$ is locally compact (and hence t.d.l.c.) if and only if all point stabilisers in $F_1$ and $F_2$ are compact. Moreover, for all $v \in V_2$ the stabiliser $\mathbf{U}(F_1, F_2)_v$ is compact if and only if $F_2$ is compact and every point stabiliser in $F_1$ is compact.
8. If $F_1, F_2$ are closed with compact point stabilisers, then all point stabilisers in $\mathbf{U}(F_1, F_2)$ are compactly generated if and only if $F_1$ and $F_2$ are compactly generated. Moreover, if $F_1$ and $F_2$ are compactly generated with finitely many orbits, and at least one of the groups is transitive, then $\mathbf{U}(F_1, F_2)$ is compactly generated.
9. $\mathbf{U}(F_1, F_2)$ is discrete if and only if $F_1$ and $F_2$ are semiregular.
Thus, in the situation when $F_1, F_2$ are transitive, we again have a natural way to describe $\mathbf{U}(F_1, F_2)$: it is the largest locally-$(F_1, F_2)$ subgroup of $\mathop{\mathrm{Aut}}(T)$. Furthermore, we can, under mild *local* conditions (that is, conditions on $F_1, F_2$), ensure that $\mathbf{U}(F_1, F_2)$ is nondiscrete, compactly generated, t.d.l.c. and abstractly simple. An important point here is that one can use discrete local groups $F_1, F_2$ with certain topological properties, and obtain a nondiscrete topological group $\mathbf{U}(F_1, F_2)$ that inherits those local topological properties (except, of course, discreteness) as global topological properties, while also being guaranteed other desirable global properties such as Tits' Independence Property $(\mathrm{P}_{})$.
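Property 9 is another purely local condition, so when $F_1$ and $F_2$ are finite, discreteness of $\mathbf{U}(F_1,F_2)$ can be read off mechanically. Here is a small added sketch; the groups below are arbitrary illustrative choices.

```python
# Illustrative sketch: U(F_1, F_2) is discrete exactly when F_1 and F_2 are
# semiregular, i.e. only the identity fixes a point.  Groups are generated
# from permutations given as dicts on their finite domains.

def generate(gens, X):
    identity = {x: x for x in X}
    elements = {tuple(sorted(identity.items())): identity}
    frontier = [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = {x: h[g[x]] for x in X}
            key = tuple(sorted(gh.items()))
            if key not in elements:
                elements[key] = gh
                frontier.append(gh)
    return list(elements.values())

def is_semiregular(F, X):
    """No nonidentity element of F fixes a point of X."""
    return all(all(g[x] != x for x in X)
               for g in F if any(g[y] != y for y in X))

X1, X2 = [0, 1, 2], ["a", "b", "c", "d"]
F1 = generate([{0: 1, 1: 2, 2: 0}], X1)                          # C_3, regular
F2 = generate([{"a": "b", "b": "a", "c": "d", "d": "c"},
               {"a": "c", "c": "a", "b": "d", "d": "b"}], X2)    # Klein four, regular
F2_bad = generate([{"a": "b", "b": "a", "c": "c", "d": "d"}], X2)  # fixes c and d

print(is_semiregular(F1, X1), is_semiregular(F2, X2))   # True True -> discrete
print(is_semiregular(F2_bad, X2))                       # False    -> nondiscrete
```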
Constructing $2^{\aleph_0}$ distinct isomorphism classes of compactly generated locally compact groups that are nondiscrete and topologically simple is now relatively straightforward. For example, for a large enough prime $p$ (e.g. $p> 10^{75}$) let $\{\mathcal{O}_i\}_{i \in I}$ be a set of representatives from the $2^{\aleph_0}$ isomorphism classes of A. Yu. Ol'shanskiı̆'s $p$-Tarski Monsters ([@olshanski]). Each $\mathcal{O}_i$ is infinite and simple and any nontrivial proper subgroup $H_i < \mathcal{O}_i$ has finite order $p$. The groups $\mathcal{O}_i$ can therefore be viewed as (faithful) groups of permutations of $\mathcal{O}_i / H_i$. The groups $\mathbf{U}_i := \mathbf{U}(\mathcal{O}_i, S_3)$ are nondiscrete, compactly generated, t.d.l.c. and simple. It can then be shown (with a little work) that the $\mathbf{U}_i$ are pairwise nonisomorphic.
## Further generalisations {#Sec:FurtherGens}
In this section we survey a selection of generalisations of the Burger--Mozes groups, the box product construction, and Tits' Independence Property $(\mathrm{P}_{})$. These generalisations play no part in the theory of local action diagrams, but they suggest natural generalisations to the theory that we will revisit in Section [8](#Section:FutureWork){reference-type="ref" reference="Section:FutureWork"}.\
Adrien Le Boudec in [@LeBoudec] considered universal groups of the locally finite $d$-valent tree $T_d$, for $d \geq 3$, that have local action $F \leq S_d$ at all but finitely many vertices. (Earlier, the special case where $F$ is the alternating group of degree $d$ was examined in [@BCGM].) More precisely, for a Burger--Mozes legal colouring $\mathcal{L}$ of $T_d$ and a group $F \leq S_d$, the *Le Boudec group* $G_{\mathcal{L}}(F)$ is the group, $$\{g \in \mathop{\mathrm{Aut}}(T_d) : \sigma_{\mathcal{L}, v}(g) \in F \quad \text{for all but finitely many } v \in VT_d\}.$$ For a given $g \in G_{\mathcal{L}}(F)$, the finitely many vertices $v$ for which $\sigma_{\mathcal{L}, v}(g) \not \in F$ are called the *singularities of $g$*. Of course we have $\mathbf{U}_\mathcal{L}(F) \leq G_\mathcal{L}(F) \leq \mathop{\mathrm{Aut}}(T_d)$. There is a topology on $G_\mathcal{L}(F)$ such that the inclusion map of $\mathbf{U}_\mathcal{L}(F)$ into $G_\mathcal{L}(F)$ is continuous and open (in general this is not the topology inherited from $\mathop{\mathrm{Aut}}(T_d)$). Under this topology $G_\mathcal{L}(F)$ is a t.d.l.c. group, and $G_\mathcal{L}(F)$ is discrete if and only if $F$ is a semiregular permutation group.
For permutation groups $F \leq F' \leq S_d$, Le Boudec defines a group $G_\mathcal{L}(F, F')$ now called the *restricted universal group*, with $G_\mathcal{L}(F, F') := G_\mathcal{L}(F) \cap \mathbf{U}_\mathcal{L}(F')$ under the topology induced from $G_\mathcal{L}(F)$. Note that, while the groups $G_\mathcal{L}(F)$ and $G_\mathcal{L}(F, F')$ are subgroups of $\mathop{\mathrm{Aut}}(T_d)$, in general they are not closed in $\mathop{\mathrm{Aut}}(T_d)$.
The Le Boudec groups and restricted universal groups are an important source of examples of compactly generated locally compact simple groups that do not admit lattices.[^4] Prior to Le Boudec's work, the only known example of a lattice-free compactly generated locally compact simple group was Neretin's group $\mathop{\mathrm{AAut}}(T_d)$ of almost automorphisms (see [@BCGM]). In particular, Le Boudec uses these constructions to show the very nice result that there exist compactly generated abstractly simple t.d.l.c. groups $H\leq G$ with $H$ cocompact in $G$ such that $G$ contains lattices but $H$ does not.\
Waltraud Lederle in [@Lederle] simultaneously generalises Neretin's group $\mathop{\mathrm{AAut}}(T_d)$ and the Burger--Mozes groups for the locally finite tree $T_d$. Some of the terms defined in [@Lederle] have subsequently become known by different names, and we follow this more recent naming convention here.
Let us first define Neretin's group $\mathop{\mathrm{AAut}}(T_d)$. Finite subtrees of $T_d$ in which every vertex is either a leaf or of valency $d$ are called *complete*. An *almost automorphism* of $T_d$ is an isomorphism $g$ of rooted forests $g : T_d \setminus C_0 \rightarrow T_d\setminus C_1$ where $C_0, C_1$ are complete finite subtrees of $T_d$. If $g : T_d \setminus C_0 \rightarrow T_d\setminus C_1$ and $g' : T_d \setminus C'_0 \rightarrow T_d\setminus C'_1$ are almost automorphisms, we say they are equivalent if and only if they agree on $T_d \setminus C$ for some complete finite subtree $C$ satisfying $C_0 \cup C'_0 \subseteq C$. Equivalent almost automorphisms induce the same homeomorphism of the set $\partial T_d$ of ends of $T_d$; in other words they give rise to the same *spheromorphism* of $\partial T_d$. The spheromorphisms are thus the equivalence classes of almost automorphisms under this equivalence relation. To multiply two spheromorphisms $[g]$ and $[g']$, choose representatives $\overline{g} \in [g]$ and $\overline{g}' \in [g']$ for which the composition $\overline{g} \,\overline{g}'$ is defined (that is, the image forest of $\overline{g}'$ is the domain forest of $\overline{g}$; such representatives always exist), then take $[g][g']$ to be the equivalence class containing $\overline{g} \,\overline{g}'$. The set of spheromorphisms under this multiplication forms a group $\mathop{\mathrm{AAut}}(T_d)$ called *Neretin's group*. It was shown in [@BCGM] that Neretin's group can be given a group topology by taking the conjugates of vertex stabilisers in $\mathop{\mathrm{Aut}}(T_d) \leq \mathop{\mathrm{AAut}}(T_d)$ as a subbasis of neighbourhoods of the identity (this makes sense because $\mathop{\mathrm{AAut}}(T_d)$ commensurates each stabiliser $\mathop{\mathrm{Aut}}(T_d)_v$). Under this topology, $\mathop{\mathrm{AAut}}(T_d)$ is a compactly generated t.d.l.c. group that contains $\mathop{\mathrm{Aut}}(T_d)$ as an open subgroup. Moreover, $\mathop{\mathrm{AAut}}(T_d)$ is simple and contains no lattice (see [@BCGM Theorem 1]).
Given $G \leq \mathop{\mathrm{Aut}}(T_d)$, Waltraud Lederle's group $\mathcal{F}(G)$ consists of elements of $\mathop{\mathrm{AAut}}(T_d)$ that intuitively 'locally look like' $G$. When $G$ is closed and has Tits' Independence Property $(\mathrm{P}_{})$ (that is, when $G$ is $(\mathrm{P}_{})$-closed), there is a unique group topology on $\mathcal{F}(G)$ such that the inclusion of $G$ into $\mathcal{F}(G)$ is continuous and open. Since $G$ is t.d.l.c. the group $\mathcal{F}(G)$ is also a t.d.l.c. group. The focus of [@Lederle] is when $G$ is taken to be a Burger--Mozes group $\mathbf{U}(F)$.
To define $\mathcal{F}(G)$ precisely, we first define a *$G$-almost automorphism* of $T_d$ to be an almost automorphism $h : T_d \setminus C \rightarrow T_d \setminus C'$ such that for every component $\mathcal{T}$ of the forest $T_d \setminus C$ there is some $g \in G$ such that $g$ and $h$ agree on $\mathcal{T}$. The elements of $\mathcal{F}(G)$ are the equivalence classes of all $G$-almost automorphisms and one can easily see that $\mathcal{F}(G) \leq \mathop{\mathrm{AAut}}(T_d)$.
In [@Lederle] it is shown that if $F \leq S_d$ then $\mathcal{F}(\mathbf{U}(F))$ is a compactly generated t.d.l.c. group and its commutator subgroup is open, abstractly simple and has finite index in $\mathcal{F}(\mathbf{U}(F))$. Moreover, if $F$ is a Young subgroup with strictly fewer than $d$ orbits, then this commutator subgroup is a nondiscrete compactly generated simple group without lattices.\
Jens Bossaert and Tom De Medts in [@JensTom] generalise the box product construction by defining universal groups over right-angled buildings where the local actions are again arbitrary permutation groups. As with the box product, there is no requirement for these local actions to have finite degree. Bossaert and De Medts characterise when their universal groups are locally compact, abstractly simple or have primitive action on the residues of the building.\
An important generalisation of Tits' Independence Property $(\mathrm{P}_{})$ is described by Christopher Banks, Murray Elder and George A. Willis in [@BanksElderWillis], where it is called *Property $(\mathrm{P}_{k})$.* This definition is made only for locally finite trees. For groups $H \leq \mathop{\mathrm{Aut}}(T)$ satisfying Property $(\mathrm{P}_{k})$ there is a natural subgroup denoted $H^{+_k}$ that is trivial or simple whenever $H$ neither fixes an end nor setwise stabilises a proper subtree of $T$ (see [@BanksElderWillis Theorem 7.3]). In the same paper they define something they call the $k$-closure of a group. In a different context this notion was independently described by Sam Shepherd in [@SamShep]. The term $k$-closure has a different and well-established meaning in permutation group theory due to Wielandt, and so we do not use the term; we instead call this notion the *$(\mathrm{P}_{k})$-closure* and define it in the following paragraph. In both [@BanksElderWillis] and [@SamShep] the notion of $(\mathrm{P}_{k})$-closure pertains only to locally finite trees where issues surrounding the closure or non-closure of local actions do not occur, because they are always closed. In fact, the definition of $(\mathrm{P}_{k})$-closure can be adjusted slightly so as to work in a natural way for trees that are not locally finite, and this latter notion was introduced in our paper [@ReidSmith]. Here we follow [@ReidSmith], and note that for locally finite trees the definitions coincide.
Let $T$ be a tree that may or may not be locally finite, and let $G \leq \mathop{\mathrm{Aut}}(T)$. For $k \in \mathbb{N}$, the *$(\mathrm{P}_{k})$-closure* of $G$, denoted $G^{(\mathrm{P}_{k})}$, is the group consisting of all $g \in \mathop{\mathrm{Aut}}(T)$ such that for all $v \in VT$, and all finite subsets $\Phi \subseteq B_v(k)$, there exists $g_\Phi \in G$ such that $gw = g_\Phi w$ for every vertex $w \in \Phi$. We say that the group $G$ is *$(\mathrm{P}_{k})$-closed* if $G$ is equal to its $(\mathrm{P}_{k})$-closure. Intuitively, we can think of the $k$-local action of $G$ as being its action on $k$-balls $B_v(k)$, and the $(\mathrm{P}_{k})$-closure of $G$ is then the largest subgroup of $\mathop{\mathrm{Aut}}(T)$ whose $k$-local action is equal to the closure of the $k$-local action of $G$. Note that all of the groups described thus far in this subsection that arise as closed groups of automorphisms of trees are $(\mathrm{P}_{k})$-closed for $k=1$.
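To get a feel for the definition, here is an illustrative brute-force sketch on a small finite tree; it is only a finite analogue, since the definition above is aimed at infinite trees. It enumerates all automorphisms of the tree and keeps those that agree on every $1$-ball with some element of a chosen group $G$; the tree, the group and all names in the code are assumptions made for the illustration.

```python
from itertools import permutations

# Illustrative finite analogue of the (P_1)-closure.  T is the complete binary
# tree of depth 2 (7 vertices); G is generated by the single automorphism that
# swaps both bottom sibling pairs simultaneously.
V = list(range(7))
edges = {frozenset(e) for e in [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]}
neighbours = {v: [w for w in V if frozenset((v, w)) in edges] for v in V}

def is_automorphism(g):
    return all(frozenset((g[u], g[v])) in edges for e in edges for u, v in [tuple(e)])

G = [{v: v for v in V},
     {0: 0, 1: 1, 2: 2, 3: 4, 4: 3, 5: 6, 6: 5}]   # identity and (3 4)(5 6)

def in_P1_closure(g):
    """g agrees on every 1-ball B_v(1) with some element of G."""
    for v in V:
        ball = [v] + neighbours[v]
        if not any(all(g[w] == h[w] for w in ball) for h in G):
            return False
    return True

candidates = [dict(zip(V, img)) for img in permutations(V)]
closure = [g for g in candidates if is_automorphism(g) and in_P1_closure(g)]
print(len(G), "->", len(closure))   # 2 -> 4
```

In this toy example the $(\mathrm{P}_{1})$-closure is strictly larger than $G$: once the two bottom sibling swaps are allowed to act independently on the two subtrees below the root, the group grows from order $2$ to order $4$.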
In [@BanksElderWillis] the following properties of $G^{(\mathrm{P}_{k})}$ are established for $G \leq \mathop{\mathrm{Aut}}(T)$ where $T$ is locally finite; see [@ReidSmith] for trivial adjustments to the arguments so they continue to work in the non-locally finite case. Again let $T$ be a (not necessarily locally finite) tree, let $G \leq \mathop{\mathrm{Aut}}(T)$, and fix $k \in \mathbb{N}$.
1. $G^{(\mathrm{P}_{k})}$ is a closed subgroup of $\mathop{\mathrm{Aut}}(T)$.
2. $G^{(\mathrm{P}_{r})} = (G^{(\mathrm{P}_{k})})^{(\mathrm{P}_{r})}$ whenever $r \le k$. In particular, $G^{(\mathrm{P}_{k})}$ is $(\mathrm{P}_{k})$-closed.
3. If $G$ is closed then $G = G^{(\mathrm{P}_{1})}$ if and only if $G$ satisfies Tits' Independence Property $(\mathrm{P}_{})$. Consequently, to decide if $G$ satisfies Tits' Independence Property $(\mathrm{P}_{})$ it is sufficient to check only that the definition of Property $(\mathrm{P}_{})$ holds for paths in $T$ of length one (i.e. edges).
The $(\mathrm{P}_{k})$-closures of $G$ as $k \rightarrow \infty$ give a series of approximations to the action of $G$ on $T$. Moreover, the $(\mathrm{P}_{k})$-closures converge to the closure of the action of $G$ in $\mathop{\mathrm{Aut}}(T)$. Intuitively, then, the notion of $(\mathrm{P}_{k})$-closure gives a tool for understanding all actions of groups on trees as 'limits' of 'universal groups' of 'local actions' on $k$-balls. In our theory of local action diagrams we make this idea precise for $k=1$, creating an overarching theory for $(\mathrm{P}_{k})$-closed groups when $k=1$.\
Understanding $(\mathrm{P}_{k})$-closures for $k>1$ presents significant challenges, but it is the obvious direction in which to generalise our theory of local action diagrams (see Section [8](#Section:FutureWork){reference-type="ref" reference="Section:FutureWork"}). Stephan Tornier in [@Tornier] has generalised the Burger--Mozes construction (for locally finite trees $T_d$) in a way that allows the local action on balls of a given radius $k \geq 1$ to be prescribed, where the Burger--Mozes groups $\mathbf{U}(F)$ then arise precisely when $k=1$. Even in this setting, Tornier's work highlights how the global behaviour of the resulting 'universal' group is difficult to control: interactions between the prescribed local actions on overlapping $k$-balls are the source of this complexity.
Let $\mathcal{L}$ be a legal colouring in the sense of Burger--Mozes (see Section [3.2](#Section:BurgerMozes){reference-type="ref" reference="Section:BurgerMozes"}) of the locally finite $d$-regular tree $T_d$. Fix a subtree of the labelled tree $T_{d}$ arising as a ball of radius $k$ around some vertex, and denote this by $B_{d,k}$. For each vertex $v \in VT_d$ recall that $B_v(k)$ is the $k$-ball around $v$, which is isomorphic (as a labelled tree) to $B_{d,k}$; denote this label-respecting isomorphism (which is unique) by $l_{v}^{k}:B_v(k)\to B_{d,k}$. The *$k$-local action* of $g \in \mathop{\mathrm{Aut}}(T_d)$ is then the graph isomorphism $$\sigma_{k, v}(g): B_{d,k} \rightarrow B_{d,k} \quad \quad \sigma_{k, v}(g) = l_{gv}^{k}\circ g\circ (l_{v}^{k})^{-1}.$$ For any group $F \leq \mathop{\mathrm{Aut}}(B_{d,k})$ we define $\mathbf{U}_k(F)$ as follows, $$\mathbf{U}_{k}(F) := \{g \in \mathop{\mathrm{Aut}}(T_{d}) : \sigma_{k, v}(g) \in F \quad \forall v\in VT_d\}.$$
In [@Tornier] it is shown that $\mathbf{U}_{k}(F)$ is a closed, vertex transitive and compactly generated subgroup of $\mathop{\mathrm{Aut}}(T_d)$; however, $\mathbf{U}_{k}(F)$ can fail to have $k$-local action $F$. Despite this, the group retains a universal property: if $H \leq \mathop{\mathrm{Aut}}(T_d)$ is locally transitive and contains an involutive inversion (that is, an involution $g$ that maps some $e \in AT_d$ to $\overline{e}$), and $F$ is the $k$-local action of $H$, then $\mathbf{U}_k(F)$ contains an $\mathop{\mathrm{Aut}}(T_d)$-conjugate of $H$.
Tornier characterises precisely when $\mathbf{U}_{k}(F)$ has $k$-local action $F$ via a condition on $F$ that he calls Condition (C); this condition can be thought of as a 'compatibility' condition for the local action with itself as it interacts with an involutive automorphism. Another condition on $F$, called Condition (D), is sufficient for $\mathbf{U}_k(F)$ to be discrete, and when $F$ satisfies Condition (C), Condition (D) characterises precisely whether or not $\mathbf{U}_k(F)$ is discrete.\
Finally we mention Caprace and De Medts' paper [@CapraceDeMedts] which contains an abundance of interesting results. The paper largely concerns compactly generated locally compact groups which act on locally finite trees and satisfy Tits' independence property $(\mathrm{P}_{})$. In other words, compactly generated locally compact $(\mathrm{P}_{})$-closed actions on locally finite trees. The central theme of the paper is to consider the extent to which selected global properties of these groups (e.g. having every proper open subgroup compact) are determined by the local action of a stabiliser of a vertex on its neighbours. The theory of local action diagrams shows that *all* properties of $(\mathrm{P}_{})$-closed actions on trees are determined by these local actions, because every $(\mathrm{P}_{})$-closed action is the universal group of a local action diagram.
# The theory of local action diagrams
In this section we give an intuitive introduction to the various ingredients of our theory, and we describe the relationship between them. Everything here can be found in [@ReidSmith].
## Local action diagrams
At the heart of our theory is a combinatorial object we call a *local action diagram*. It plays the same role in our theory that a graph of groups plays in Bass--Serre Theory (see Section [2.1](#Sec:BassSerre){reference-type="ref" reference="Sec:BassSerre"}).
**Definition 7**. A *local action diagram* is comprised of three things:
1. A nonempty connected graph $\Gamma$.
2. A nonempty set $X_a$ of colours for each arc $a \in A\Gamma$, such that the colour sets of distinct arcs are disjoint. For each vertex $v \in V\Gamma$ let $X_v := \bigsqcup_{a \in o^{-1}(v)}X_a$.
3. A closed group $G(v) \leq \mathop{\mathrm{Sym}}(X_v)$ for each vertex $v \in V\Gamma$, such that the arc colour sets $X_a$ are the orbits of $G(v)$ on $X_v$.
We denote such a local action diagram as $\Delta = (\Gamma,(X_a),(G(v)))$. We call each $X_a$ the *colour set* of $a$, and each group $G(v)$ the *local action* at $v$.
Notice how easy it is to construct a local action diagram. One chooses any nonempty connected graph and some groups (that have the specified orbit structure) to play the role of local actions; there are no further compatibility conditions governing whether or not various local actions can be combined on a local action diagram. Nevertheless, we will see they provide a complete description of all $(\mathrm{P}_{})$-closed groups of automorphisms of trees.
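To emphasise how little data is involved, here is an added illustrative sketch (not from [@ReidSmith]) that encodes a local action diagram as plain data and checks the one condition that needs checking: at each vertex $v$, the orbits of $G(v)$ on $X_v$ must be exactly the colour sets of the arcs leaving $v$. The encoded diagram, with two vertices joined by two edges and local actions the trivial group and $\langle(3\,4),(5\,6)\rangle$, is an arbitrary choice; disjointness of colour sets across arcs is built into the data rather than checked.

```python
# Illustrative sketch: encoding a local action diagram and checking that, at
# each vertex v, the orbits of G(v) on X_v are exactly the colour sets X_a of
# the arcs leaving v.

# Arcs are named explicitly; reverse(a) and origin(a) give the graph structure.
arcs = ["a", "abar", "b", "bbar"]
reverse = {"a": "abar", "abar": "a", "b": "bbar", "bbar": "b"}
origin = {"a": "v", "b": "v", "abar": "w", "bbar": "w"}

colour_sets = {"a": {1}, "b": {2}, "abar": {3, 4}, "bbar": {5, 6}}

# Local actions, given by generators (permutations as dicts of X_v).
local_generators = {"v": [],                                   # trivial group
                    "w": [{3: 4, 4: 3, 5: 5, 6: 6},
                          {3: 3, 4: 4, 5: 6, 6: 5}]}

def orbits(gens, X):
    """Orbits on X of the group generated by gens (simple closure computation)."""
    remaining, result = set(X), []
    while remaining:
        x = remaining.pop()
        orbit, frontier = {x}, [x]
        while frontier:
            y = frontier.pop()
            for g in gens:
                if g[y] not in orbit:
                    orbit.add(g[y])
                    frontier.append(g[y])
        remaining -= orbit
        result.append(frozenset(orbit))
    return set(result)

def is_local_action_diagram():
    for v in set(origin.values()):
        out_arcs = [a for a in arcs if origin[a] == v]
        X_v = set().union(*(colour_sets[a] for a in out_arcs))
        expected = {frozenset(colour_sets[a]) for a in out_arcs}
        if orbits(local_generators[v], X_v) != expected:
            return False
    return True

print(is_local_action_diagram())   # True
```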
Two local action diagrams are isomorphic if the graphs are isomorphic and their local actions are permutationally isomorphic, and these two types of isomorphisms are compatible. More precisely, if we have two local action diagrams $\Delta = (\Gamma,(X_a),(G(v)))$ and $\Delta' = (\Gamma',(X'_a),(G'(v)))$, an isomorphism between them consists of two things:
1. an isomorphism $\theta: \Gamma \rightarrow \Gamma'$ of graphs; and
2. bijections $\theta_v: X_v \rightarrow X'_{\theta(v)}$ for each vertex $v \in V\Gamma$, that restrict to a bijection $X_a \rightarrow X'_{\theta(a)}$ for each $a \in o^{-1}(v)$, and such that $\theta_v G(v) \theta^{-1}_v = G'(\theta(v))$.
**Example 8**. In Figures [\[Fig:MundaneLAD\]](#Fig:MundaneLAD){reference-type="ref" reference="Fig:MundaneLAD"} and [\[Fig:FancyLAD\]](#Fig:FancyLAD){reference-type="ref" reference="Fig:FancyLAD"} we see examples of local action diagrams. For now they are nothing more than combinatorial objects, but we will soon see that they give rise to universal groups acting on trees. The second local action diagram shows how one can embed interesting permutation groups or topological groups as local actions: the automorphism group of the $3$-regular tree $T_3$ is a component of the local action of $v$, where $\mathop{\mathrm{Aut}}(T_3)$ is acting on the colour set (i.e. $VT_3$) of the arc $a$.
## The associated local action diagram {#Sec:AssocLad}
As we described in Section [2.1.1](#sec:Assoc_graph){reference-type="ref" reference="sec:Assoc_graph"}, for every group of automorphisms of a tree we can associate a graph of groups. The analogous situation arises here: for every group of automorphisms of a tree we can associate a local action diagram.
**Definition 9**. Suppose $G$ is a group of automorphisms of a tree $T$. The *associated local action diagram* is a local action diagram $(\Gamma, (X_a), (G(v)))$ with the following parameters.
- For our connected graph we take $\Gamma$ to be the quotient graph $G \backslash T$, with $\pi$ denoting the natural quotient map.
- For our arc colours we proceed as follows. The vertices of $\Gamma$ are orbits of $G$ on $VT$, so for each vertex $v \in V\Gamma$ we choose a representative vertex $v^*$ in $VT$ such that $\pi(v^*) = v$, and write $V^*$ for the set of all such representatives. Now the stabiliser $G_{v^*}$ permutes the arcs in $o^{-1}(v^*)$, and so we take $X_v := o^{-1}(v^*)$. The arcs in $\Gamma$ are the orbits of $G$ on $AT$, so this set $X_v$ breaks down into $G_{v^*}$-orbits as $X_a := \{b \in o^{-1}(v^*) : \pi(b) = a\}$ for each $a \in A\Gamma$ satisfying $o(a) = v$, and these sets $X_a$ become our arc colours.
- For the local actions, take $G(v)$ to be the closure of the permutation group induced on $X_v$ by the stabiliser $G_{v^*}$, and note that the orbits of $G(v)$ and $G_{v^*}$ on $X_v$ coincide.
There are many choices for the associated local action diagram, but they are all isomorphic as local action diagrams.
**Example 10**. Consider the diagram in Figure [\[Fig:AssocLAD\]](#Fig:AssocLAD){reference-type="ref" reference="Fig:AssocLAD"}, which at the top shows part of the infinite $(2,4)$-biregular tree $T:= T_{2,4}$ (ignoring for now the decorations). Let $G$ be the largest closed subgroup of $\mathop{\mathrm{Aut}}(T)$ with two orbits on vertices (indicated using black vertices and white vertices), and two orbits on edges (solid edges and dashed edges).
Thus the local action diagram has two vertices, say $v,w$, and two edges (each consisting of an arc in each direction), say $\{a, \overline{a}\}$ and $\{b, \overline{b}\}$. We pick vertex orbit representatives $v^*$ and $w^*$ in $VT$. The $T$-arcs leaving these vertices will become the arc colours in our local action diagram, so we name them as indicated: the two arcs leaving $v^*$ are named $1,2$ and the four arcs leaving $w^*$ are named $3$ to $6$. Now $o^{-1}(v^*) = \{1,2\}$ and $G_{v^*}$ induces the trivial subgroup of $S_2$ on this set. Thus, we take $G(v) = \langle (1)(2) \rangle$. Meanwhile, $o^{-1}(w^*) = \{3,4,5,6\}$ and $G_{w^*}$ induces $\langle (3 4), (5 6) \rangle \leq \mathop{\mathrm{Sym}}(\{3,4,5,6\})$ on $o^{-1}(w^*)$, so we take $G(w) = \langle (3 4), (5 6) \rangle$.
From these observations we obtain the local action diagram for $(T,G)$, shown at the bottom of Figure [\[Fig:AssocLAD\]](#Fig:AssocLAD){reference-type="ref" reference="Fig:AssocLAD"}.
**Example 11**. Let $d \geq 3$ be a finite or infinite cardinal and let $F \leq S_d$. Consider the Burger--Mozes group $\mathbf{U}(F) \leq \mathop{\mathrm{Aut}}(T_d)$. Fix an arc $a \in AT_d$ and notice that the condition $\mathcal{L}(a) = \mathcal{L}(\overline{a})$ ensures that there is an element $g \in \mathop{\mathrm{Aut}}(T_d)$ inverting $a$ (that is, mapping $a$ to $\overline{a}$ and vice versa) such that $\mathcal{L}(b) = \mathcal{L}(gb)$ for all $b \in AT_d$. For this element $g$ we have that $\sigma_{\mathcal{L}, v}(g)$ is trivial and therefore lies in $F$, for all $v \in VT_d$. Thus, $g \in \mathbf{U}(F)$. We have seen, then, that for any arc $a \in AT_d$, the arcs $a$ and $\overline{a}$ lie in the same orbit of $\mathbf{U}(F)$. Moreover, in Section [3.2](#Section:BurgerMozes){reference-type="ref" reference="Section:BurgerMozes"} we saw that $\mathbf{U}(F)$ is vertex transitive. Hence the associated local action diagram for $\mathbf{U}(F)$ consists of a single vertex with some loops, each of which is its own reverse. Each loop corresponds to an orbit of $F$.
For example, taking $F$ to be the group $\mathop{\mathrm{Aut}}(T_3) \times S_3$ acting on the disjoint union $VT_3 \sqcup \{1,2,3\}$ (with $\mathop{\mathrm{Aut}}(T_3)$ acting on $VT_3$ and $S_3$ acting on $\{1,2,3\}$), we have that $F$ has two orbits and therefore $\mathbf{U}(\mathop{\mathrm{Aut}}(T_3) \times S_3)$ has the associated local action diagram shown in Figure [\[Fig:AssocLADTwoLoops\]](#Fig:AssocLADTwoLoops){reference-type="ref" reference="Fig:AssocLADTwoLoops"}.
**Example 12**. Let $d_1, d_2 > 1$ be finite or infinite cardinals, let $X_1, X_2$ be sets of cardinality $d_1, d_2$ respectively, and let $F_1 \leq \mathop{\mathrm{Sym}}(X_1)$ and $F_2 \leq \mathop{\mathrm{Sym}}(X_2)$. Consider the box product group $\mathbf{U}(F_1, F_2) \leq \mathop{\mathrm{Aut}}(T)$, where $T$ is the biregular tree $T_{d_1, d_2}$.
In this general setting the local action diagram of $\mathbf{U}(F_1, F_2)$ can be significantly more complicated than those of the Burger--Mozes groups; despite this it is still tractable. Indeed, in [@SmithDuke Lemma 22] it is shown that the quotient graph $\mathbf{U}(F_1, F_2) \backslash T$ is the complete bipartite graph $K_{n_1,n_2}$, where $n_i$ is the number of orbits of $F_i$ on $X_i$ for $i=1,2$. In particular, between any two vertices in $K_{n_1,n_2}$ there is a single edge consisting of two arcs, one in each direction.
Identifying $\mathbf{U}(F_1, F_2) \backslash T$ and $K_{n_1,n_2}$ we can now form the associated local action diagram. Let $V_1, V_2$ be the two parts corresponding to the bipartition of $K_{n_1,n_2}$, with each part $V_i$ consisting of vertices with valency $n_i$ for $i=1,2$. Vertices in $V_i$ have local action $F_i$. Each arc $a$ from $V_1$ to $V_2$ represents an orbit $\Omega$ of $F_1$ and so we set $X_a := \Omega$. Similarly, each arc $b$ from $V_2$ to $V_1$ represents an orbit $\Omega'$ of $F_2$, and so we set $X_b := \Omega'$. We have a local action diagram.
In the special case where $F_1$ and $F_2$ are transitive, the local action diagram associated to $\mathbf{U}(F_1, F_2)$ is then two vertices connected by a single edge which consists of a pair of arcs, one in each direction. For example, the $2^{\aleph_0}$ pairwise nonisomorphic simple nondiscrete compactly generated t.d.l.c. groups that were constructed in the proof of [@SmithDuke Theorem 38] were of the form $\mathbf{U}(\mathcal{O}_i, S_3)$, where each group $\mathcal{O}_i$ is viewed as a permutation group $\mathcal{O}_i \leq \mathop{\mathrm{Sym}}(\mathcal{O}_i / H_i)$ as in Section [3.3](#Section:BoxProduct){reference-type="ref" reference="Section:BoxProduct"}. The local action diagrams of these groups are shown in Figure [\[Fig:SimpleLAD\]](#Fig:SimpleLAD){reference-type="ref" reference="Fig:SimpleLAD"}.
## Building a $\Delta$-tree {#Section:DeltaTree}
In Section [2.1.3](#Sec:UnivCoveringBassSerre){reference-type="ref" reference="Sec:UnivCoveringBassSerre"} we described how every graph of groups in Bass--Serre Theory gives rise to a universal cover that is a tree. An analogous situation arises here: every local action diagram gives rise to an arc-coloured tree we call a *$\Delta$-tree*. Intuitively this arc-coloured tree is built from taking 'coloured walks' around the local action diagram. Shortly, we will construct a natural universal group from the automorphism group of this arc-coloured tree.
**Definition 13**. Let $\Delta = (\Gamma, (X_a), (G(v)))$ be a local action diagram. A *$\Delta$-tree* consists of three things:
1. a tree $T$;
2. a surjective graph homomorphism $\pi: T \rightarrow \Gamma$; and
3. a colouring map $\mathcal{L}: AT \rightarrow \bigsqcup_{a \in A\Gamma}X_a$, such that for every vertex $v \in VT$, and every arc $a$ in $o^{-1}(\pi(v))$, the map $\mathcal{L}$ restricts to a bijection $\mathcal{L}_{v,a}$ from $\{b \in o^{-1}(v) : \pi(b) = a\}$ to $X_a$.
We denote such a $\Delta$-tree as $\mathbf{T} = (T, \mathcal{L}, \pi)$.
In [@ReidSmith Lemma 3.5] it is shown that for any local action diagram $\Delta$ there exists a $\Delta$-tree, and moreover, any two $\Delta$-trees $(T,\mathcal{L},\pi)$ and $(T',\mathcal{L}',\pi')$ are isomorphic in the following sense: there is a graph isomorphism $\alpha: T \rightarrow T'$ such that $\pi' \circ \alpha = \pi$.
The construction of a $\Delta$-tree is intuitive, but it requires careful notation to describe formally. We choose a root vertex $v_0 \in V\Gamma$. Then, for $v \in V\Gamma$ and $c \in X_v$, we define the *type* $p(c)$ of the colour $c$ to be the arc $a \in A\Gamma$ for which $c \in X_a$. A finite sequence of colours $(c_1,c_2,\dots,c_n)$ such that $o(p(c_{i+1})) = t(p(c_i))$ for all $1 \le i < n$ is called a *coloured path*, and we think of our vertices in $VT$ as being labelled by these coloured paths from the origin $v_0$.
Intuitively, we build $T$ inductively by taking all coloured paths around $\Gamma$ starting from $v_0$, but at each 'step' we will have a choice to make about the reverse colour for our step. We can choose arbitrarily, subject to two constraints: the colour must be of the correct type, and once we have chosen a reverse colour we cannot pick it for the coloured arc corresponding to our next step. To make this precise, we need more notation. If a vertex $v$ has label $(c_1,c_2,\dots,c_n)$ then we write the length of $v$ as $\ell(v) = n$, and we say any vertex with label $(c_1, c_2, \ldots, c_m)$ for $m \leq n$ is a *prefix* of $v$. If $v$ is a prefix of $w$ and $\ell(v) = \ell(w)-1$, then we write $v \ll w$. The *reverse label* of $v$ is $\overline{v} = (d_1,d_2,\dots,d_n)$, where each $d_i$ is a colour such that (i) $p(d_i) = \overline{p(c_i)}$, and (ii) whenever $v$ is a prefix of $w$, the label $\overline{v}$ is the corresponding prefix of $\overline{w}$.
We build the vertex set $VT$ of $T$ inductively, starting at a base vertex $()$. If we have already defined a vertex $v = (c_1,c_2,\dots,c_n)$ with $\overline{v} = (d_1,d_2,\dots,d_n)$, then we create new vertices $v_{+c_{n+1}} = (c_1,\dots,c_n,c_{n+1})$ by letting $c_{n+1}$ range over all colours satisfying $o(p(c_{n+1})) = t(p(c_n))$ and $c_{n+1} \neq d_n$. For each of these new vertices, choose $d_{n+1}$ arbitrarily from $X_{\overline{p(c_{n+1})}}$ and let $\overline{v_{+c_{n+1}}} = (d_1,d_2,\dots,d_{n+1})$.
Now that we have constructed the vertices of $T$, creating the remaining tree structure requires little effort. We define $AT_+ := \{(v,w) : v \ll w\}$ and $AT_- := \{(w,v) : (v,w) \in AT_+\}$ and $AT := AT_- \sqcup AT_+$, with functions $o, t$ and edge reversal defined in the obvious way. It is clear that from this we obtain a tree. Furthermore, $T$ is naturally arc-coloured by the following colouring map $\mathcal{L}$: for all $(v,w) \in AT_+$, take $\mathcal{L}(v,w)$ to be the last entry of $w$, and take $\mathcal{L}(w,v)$ to be the last entry of $\overline{w}$.
To define $\pi: T \rightarrow \Gamma$, we send the base vertex $()$ to our fixed root vertex $v_0 \in V\Gamma$, and we set $\pi(v)$ to be the vertex $t(p(c_n)) \in V\Gamma$ whenever $v \in VT$ has label $(c_1,\dots,c_n)$. For arcs $a \in AT$, we set $\pi(a) = p(\mathcal{L}(a))$. It is now routine to verify that $\pi$ is a surjective graph homomorphism, and that $\mathcal{L}$ restricts to a bijection from $\{b \in o^{-1}(v) : \pi(b) = a\}$ to $X_a$. Whence, we have our $\Delta$-tree $\mathbf{T} = (T, \mathcal{L}, \pi)$.
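The construction just described is effectively an algorithm, and for a finite local action diagram it can be carried out to any finite depth. The following Python sketch (ours and purely illustrative, with a hypothetical dictionary encoding of the diagram) does this for a toy diagram consisting of a single vertex carrying one self-reverse loop with three colours, so that the resulting $\Delta$-tree is the $3$-regular tree; the reverse colours are fixed by one of the arbitrary choices the construction permits, and the groups $G(v)$ play no role in building the tree itself.

```python
from collections import deque

# Toy local action diagram: a single vertex "v" with one loop "a" that is its
# own reverse and carries the colour set X_a = {"1", "2", "3"}.
arcs = {"a": {"origin": "v", "target": "v", "reverse": "a", "colours": ["1", "2", "3"]}}

def build_delta_tree(root_vertex, depth):
    """Generate the vertex labels of a Delta-tree up to the given depth.

    A vertex is stored as a pair (label, reverse_label) of colour tuples; the
    reverse colour at each step is chosen as the first admissible one, which is
    one of the arbitrary choices permitted by the construction."""
    root = ((), ())
    vertices = [root]
    queue = deque([(root, root_vertex)])      # remember pi(v) alongside each label
    while queue:
        (label, rev_label), current = queue.popleft()
        if len(label) == depth:
            continue
        forbidden = rev_label[-1] if rev_label else None
        for data in arcs.values():
            if data["origin"] != current:
                continue                      # only colours of the correct type
            for c in data["colours"]:
                if c == forbidden:
                    continue                  # cannot reuse the chosen reverse colour
                d = arcs[data["reverse"]]["colours"][0]   # arbitrary reverse colour
                child = (label + (c,), rev_label + (d,))
                vertices.append(child)
                queue.append((child, data["target"]))
    return vertices

# 1 + 3 + 3*2 + 3*2*2 = 22 vertices of the 3-regular tree within distance 3 of the root
print(len(build_delta_tree("v", 3)))
```

Replacing `arcs` by an encoding of any finite local action diagram produces, in the same way, the ball of radius `depth` around the base vertex $()$ of a corresponding $\Delta$-tree.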
**Remark 14**. Given a tree action $(T,G)$ and its associated local action diagram $\Delta$, we can equip $T$ with a natural colouring map $\mathcal{L}$ and projection map $\pi$ so that $(T, \mathcal{L}, \pi)$ is a $\Delta$-tree. Indeed, the associated local action diagram already has an appropriate map $\pi$, so we need only specify $\mathcal{L}$. Using the notation of Definition [Definition 9](#def:AssocLad){reference-type="ref" reference="def:AssocLad"} we choose, for each vertex $v \in VT$, an element $g_v \in G$ such that $g_v v \in V^*$. Then $g_v$ induces a bijection from $o^{-1}(v)$ to $X_v$, because $X_v$ was defined to be $o^{-1}(v^*)$ for $v^* := g_v v \in V^*$. We then set $\mathcal{L}(b) := g_v b$ for all $b \in o^{-1}(v)$. It transpires that this colouring map satisfies the requirements of Definition [Definition 13](#def:Delta_tree){reference-type="ref" reference="def:Delta_tree"}, ensuring that $(T, \mathcal{L}, \pi)$ is indeed a $\Delta$-tree.
## The universal group of a local action diagram
In Section [2.1](#Sec:BassSerre){reference-type="ref" reference="Sec:BassSerre"} we saw that every graph of groups in Bass--Serre Theory gives rise to a fundamental group that acts naturally on its universal cover. An analogous situation arises here: every local action diagram $\Delta$ gives rise to a universal group that acts naturally on a $\Delta$-tree.
**Definition 15**. Let $\Delta = (\Gamma, (X_a), (G(v)))$ be a local action diagram with $\Delta$-tree $\mathbf{T} = (T, \mathcal{L}, \pi)$. An *automorphism* of $\mathbf{T}$ is a graph automorphism $\phi$ of the tree $T$ such that $\pi \circ \phi = \pi$. The group of all automorphisms of $\mathbf{T}$ is $\mathop{\mathrm{Aut}}_{\pi}(T)$. For $g \in \mathop{\mathrm{Aut}}_{\pi}(T)$ and $v \in VT$ we define the $\mathcal{L}$-local action of $g$ at $v$ as in Equation [\[eq:LocalAction\]](#eq:LocalAction){reference-type="ref" reference="eq:LocalAction"}: $\sigma_{\mathcal{L},v}(g) := \mathcal{L}|_{o^{-1}(gv)} g \mathcal{L}|^{-1}_{o^{-1}(v)}$. Again we note that $\sigma_{\mathcal{L},v}(g) \in \mathop{\mathrm{Sym}}(X_{\pi(v)})$ for all $v \in VT$.
The *universal group* of $\Delta$ and $\mathbf{T}$ is the group consisting of all $\mathbf{T}$-automorphisms whose local action at any $v \in VT$ always lies in the corresponding local action $G(\pi(v))$ of the local action diagram. Formally, $$\mathbf{U}_{\mathbf{T}}(\Delta) := \{g \in \mathop{\mathrm{Aut}}_{\pi}(T) : \sigma_{\mathcal{L},v}(g) \in G(\pi(v)) \quad \text{ for all $v \in VT$}\}.$$ It transpires (see [@ReidSmith Theorem 3.12]) that for a fixed $\Delta$, different $\Delta$-trees give rise to the same universal group; that is, if $\mathbf{T}, \mathbf{T'}$ are $\Delta$-trees with underlying trees $T, T'$ respectively, then there is a graph isomorphism $\phi : T \rightarrow T'$ such that $\phi \mathbf{U}_{\mathbf{T}}(\Delta) \phi^{-1} = \mathbf{U}_{\mathbf{T'}}(\Delta)$. For this reason we typically omit the subscripts and speak of *the* universal group $\mathbf{U}(\Delta)$ of a local action diagram $\Delta$.
## The correspondence theorem {#Sec:LadCorrespondenceThm}
Recall that the $(\mathrm{P}_{})$-closure of an action $(T, G)$ of a group $G$ on a tree $T$ is the smallest closed subgroup of $\mathop{\mathrm{Aut}}(T)$ with Tits' Independence Property $(\mathrm{P}_{})$ that contains the action $(T,G)$. We denote this by $G^{(\mathrm{P}_{})}$, and if $G = G^{(\mathrm{P}_{})}$ then we say that $G$ is $(\mathrm{P}_{})$-closed.
In Section [2.1.4](#Sec:FundThmBassSerre){reference-type="ref" reference="Sec:FundThmBassSerre"} we described the Fundamental Theorem of Bass--Serre Theory, a correspondence theorem linking a tree action $(T,G)$ with the action of the fundamental group of its graph of groups on its universal cover. There is an analogous correspondence theorem for local action diagrams, linking the $(\mathrm{P}_{})$-closure of a tree action $(T,G)$ with the action of the universal group of its local action diagram $\Delta$ on its $\Delta$-tree.
Moreover, in the Fundamental Theorem of Bass--Serre Theory we have that the fundamental group of a graph of groups $\mathbf{\Gamma}$ in its action on the universal cover of $\mathbf{\Gamma}$ (which is a tree) has $\mathbf{\Gamma}$ as its associated graph of groups. An analogous statement holds for local action diagrams: the universal group of a local action diagram $\Delta$ in its action on a $\Delta$-tree has an associated local action diagram that is isomorphic to $\Delta$.
**Theorem 16** ([@ReidSmith Theorems 3.9 & 3.10]). *Let $G$ be a group and $T$ be a tree.*
*($\circledast$) Suppose $G$ acts on $T$ with associated local action diagram $\Delta$, universal group $\mathbf{U}(\Delta)$ and $\Delta$-tree $\mathbf{T}$. Then the actions $(T, G^{(\mathrm{P}_{})})$ and $(\mathbf{T}, \mathbf{U}(\Delta))$ can be identified.*
*($\circledcirc$) Suppose $\Delta$ is a local action diagram. Let $\mathbf{T}$ be a $\Delta$-tree and $\mathbf{U}(\Delta)$ its universal group. Then $\mathbf{U}(\Delta)$ is a $(\mathrm{P}_{})$-closed group of automorphisms of the $\Delta$-tree $\mathbf{T}$ and the associated local action diagram of $(\mathbf{T}, \mathbf{U}(\Delta))$ is isomorphic to $\Delta$.*
The first statement ($\circledast$) follows from the following observations. We have seen in Remark [Remark 14](#Rem:TurnTintoDeltaTree){reference-type="ref" reference="Rem:TurnTintoDeltaTree"} that we can equip $T$ with a colouring map $\mathcal{L}$ so that $T$ becomes a $\Delta$-tree $\mathbf{T'} = (T, \mathcal{L}, \pi)$. Using these we can construct the universal group $\mathbf{U}_{\mathbf{T'}}(\Delta) \leq \mathop{\mathrm{Aut}}(T)$. In [@ReidSmith Theorem 3.10] we show that in fact $\mathbf{U}_{\mathbf{T'}}(\Delta) = G^{(\mathrm{P}_{})}$. As noted previously, the possibly different $\Delta$-trees $\mathbf{T'}$ and $\mathbf{T}$ give rise to permutationally isomorphic universal groups, and via this relationship we can identify the actions of $(T, G^{(\mathrm{P}_{})})$ and $(\mathbf{T}, \mathbf{U}(\Delta))$. The second statement ($\circledcirc$) is [@ReidSmith Theorem 3.9].
From this correspondence theorem, we obtain the following universal property for $\mathbf{U}(\Delta)$, which holds because for any group $G \leq \mathop{\mathrm{Aut}}(T)$ with local action diagram $\Delta$ we have $G \leq G^{(\mathrm{P}_{})} = \mathbf{U}(\Delta)$. This property clarifies the nature of the universal properties for the Burger--Mozes and box product groups, which both needed local transitivity to hold.
**Corollary 17**. *Suppose $\Delta$ is a local action diagram, and form a $\Delta$-tree $\mathbf{T} = (T, \mathcal{L}, \pi)$. Then the universal group $\mathbf{U}(\Delta) \leq \mathop{\mathrm{Aut}}(T)$ contains a permutationally isomorphic copy of every group $G \leq \mathop{\mathrm{Aut}}(T)$ whose associated local action diagram is $\Delta$.*
# A classification of groups with Tits' Independence Property $(\mathrm{P}_{})$
## The classification
As we have already mentioned, Tits' Independence Property $(\mathrm{P}_{})$ plays an important role in the theory of infinite groups, because it gives a general tool for constructing infinite nonlinear simple groups and because of its obvious importance to the subject of groups acting on trees. Our theory of local action diagrams gives a complete and highly usable description of all closed actions that have Tits' Independence Property $(\mathrm{P}_{})$. From this one immediately obtains a classification of all groups that have Tits' Independence Property $(\mathrm{P}_{})$.
By Theorem [Theorem 16](#Thm:CorrespThmLad){reference-type="ref" reference="Thm:CorrespThmLad"} and the subsequent commentary, if we have a tree action $(T, G)$ where $G$ is closed with Tits' Independence Property $(\mathrm{P}_{})$, then $G = G^{(\mathrm{P}_{})}$ and therefore $G$ is equal to the universal group $\mathbf{U}_{\mathbf{T'}}(\Delta) \leq \mathop{\mathrm{Aut}}(T)$, where $\Delta$ is the local action diagram of $(T,G)$ and $\mathbf{T'}$ is the $\Delta$-tree obtained from $T$ as described in Remark [Remark 14](#Rem:TurnTintoDeltaTree){reference-type="ref" reference="Rem:TurnTintoDeltaTree"}. On the other hand, if we have the universal group $\mathbf{U}(\Delta)$ of a local action diagram then it is a group of automorphisms of the underlying tree $T$ of a $\Delta$-tree, and the action $(T, \mathbf{U}(\Delta))$ is closed with Tits' Independence Property $(\mathrm{P}_{})$. Thus, we have the following description of closed groups with $(\mathrm{P}_{})$.
> *Closed groups of automorphisms of trees with Tits' independence property $(\mathrm{P})$ are precisely the universal groups of local action diagrams.*
In our paper [@ReidSmith], we state the correspondence between $(\mathrm{P}_{})$-closed actions and local action diagrams as follows.
**Theorem 18** ([@ReidSmith Theorem 3.3]). *There is a natural one-to-one correspondence between isomorphism classes of $(\mathrm{P}_{})$-closed actions on trees and isomorphism classes of local action diagrams.*
This correspondence is easy for us to state explicitly. For a local action diagram $\Delta$, we have a corresponding pair $(T,\mathbf{U}(\Delta))$, where $T$ is the underlying tree of a $\Delta$-tree. As previously discussed, different $\Delta$-trees give rise to permutationally isomorphic universal groups. Hence the pair $(T,\mathbf{U}(\Delta))$ is unique up to isomorphism. Moreover, by construction we see that two isomorphic local action diagrams $\Delta$ and $\Delta'$ will produce isomorphic actions $(T,\mathbf{U}(\Delta))$ and $(T',\mathbf{U}(\Delta'))$. We have shown that there is a well-defined map $\theta$ from isomorphism classes of actions $(T,G)$, where $T$ is a tree and $G$ is a $(\mathrm{P}_{})$-closed group of automorphisms of $T$, to isomorphism classes of local action diagrams. Our correspondence theorem (Theorem [Theorem 16](#Thm:CorrespThmLad){reference-type="ref" reference="Thm:CorrespThmLad"}) shows that $\theta$ is a bijection. Thus $\theta$ is our claimed natural one-to-one correspondence.
Recall that a permutation group $G \leq \mathop{\mathrm{Sym}}(\Omega)$ and its closure in $\mathop{\mathrm{Sym}}(\Omega)$ have the same orbits on all $n$-tuples of $\Omega$, for all $n \in \mathbb{N}$. Our complete description of all $(\mathrm{P}_{})$-closed actions on trees immediately yields a useful classification of all (not just closed) actions $(T,G)$, where $T$ is a tree and $G$ has Tits' Independence Property $(\mathrm{P}_{})$, whereby such actions $(T,G)$ are classified according to the isomorphism type of their associated local action diagram. In such a classification, every such action $(T,G)$ lies in precisely one class and the classification gives a complete description of the closure of $G$. Thus, we now have a deep understanding of all groups with Tits' Independence Property $(\mathrm{P}_{})$.
## Some consequences {#Sec:SomeConsequences}
There are many consequences to our description of all $(\mathrm{P}_{})$-closed actions on trees. In this section we describe two (see [@ReidSmith] for more).
The first is that any $(\mathrm{P}_{})$-closed action on a tree can be described completely by drawing a local action diagram. This means that *all* properties of the action (e.g. whether it is simple, geometrically dense, etc) can be read directly from the local action diagram. We explore this further in Section [6](#Sec:SimplictyScoposEtc){reference-type="ref" reference="Sec:SimplictyScoposEtc"}.
The second is that for natural numbers $d, n$ there are only finitely many conjugacy classes of $(\mathrm{P}_{})$-closed actions $(T,G)$ such that $T$ is locally finite of bounded valency $d$ and $G$ has at most $n$ vertex orbits. Indeed, any such action arises as the universal group of a local action diagram $\Delta = (\Gamma, (X_a), (G(v)))$ where $\Gamma$ has at most $n$ vertices and all groups $G(v)$ are finite permutation groups of degree at most $d$. In a sense, our theory reduces the study of such groups to questions about finite graphs and finite permutation groups. In particular, for sensible choices for $d, n$ it would be possible to enumerate and describe (with the help of a computer) all such groups, for example by constructing all (finitely many) possible local action diagrams and then determining which are isomorphic.
In [@ReidSmith §7.1] we consider a special case of this: $(\mathrm{P}_{})$-closed actions $(T_d,G)$ where $T_d$ is the $d$-regular tree and $G$ is vertex-transitive. By our classification theorem we know that up to conjugacy, there are only finitely many such actions. To determine them, we define an *orbit pairing* for $H \leq S_d$ to be a permutation of the set $H\backslash [d]$ of $H$-orbits whose square is the identity, where $[d]$ here denotes the set $\{1,\ldots,d\}$. We consider pairs $(H, r)$, where $H \leq S_d$ and $r$ is an orbit pairing for $H$, and say two pairs $(H_1, r_1)$ and $(H_2, r_2)$ are equivalent whenever there exists $g \in S_d$ such that $gH_1g^{-1}= H_2$ and the map $g': H_1 \backslash [d] \rightarrow H_2 \backslash [d]$ induced by $g$ satisfies $g'r_1 = r_2g'$. The (finitely many) $\mathop{\mathrm{Aut}}(T_d)$-conjugacy classes for $(\mathrm{P}_{})$-closed vertex transitive actions $(T_d,G)$ are in one-to-one correspondence with the set of equivalence classes of pairs $(H,r)$.
Since each $G$ is vertex transitive, its associated local action diagram $\Delta = (\Gamma, (X_a), (G(v)))$ has only one vertex $v_0$, together with some loops that may or may not be their own reverse. For each pair $(H, r)$, the group $H$ is the group $G(v_0)$; the arcs of $\Delta$ correspond to orbits of $H$; and the edges of $\Delta$ correspond to orbits of $r$ on $H \backslash [d]$. Recall that the vertices, arcs and edges of $\Delta$ correspond to, respectively, the vertex-, arc- and edge-orbits of $\mathbf{U}(\Delta)$ on $T$. Those orbits of $H$ that are fixed by $r$ correspond to the arc-orbits of $\mathbf{U}(\Delta)$ on arcs that are reversed by some element in $\mathbf{U}(\Delta)$. The equivalence classes of these pairs $(H,r)$ give rise to all isomorphism classes of local action diagrams of $(\mathrm{P}_{})$-closed vertex transitive actions $(T_d,G)$, and we can enumerate these pairs for reasonable values of $d$.
In [@ReidSmith §7] we use this method to classify all such actions for $0 \leq d \leq 5$. The appendix to [@ReidSmith] is written by Stephan Tornier and contains a GAP ([@GAP4]) implementation that can perform this classification for values of $d$ greater than $5$. Even for $d=3$ we find examples of such actions that do not arise as Burger--Mozes groups; for larger values of $d$ the GAP implementation shows that the number of conjugacy classes of vertex transitive $(\mathrm{P}_{})$-closed actions on $T_d$ that do not arise as Burger--Mozes groups grows rapidly.
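To give the flavour of such computations outside GAP, the following Python sketch (ours and purely illustrative) uses SymPy's permutation groups to list the orbit pairings of a given subgroup $H \leq S_d$ in the sense described above; classifying the resulting pairs $(H,r)$ up to the equivalence defined earlier would in addition require a conjugacy test in $S_d$, which we omit.

```python
from itertools import permutations
from sympy.combinatorics import Permutation, PermutationGroup

def orbit_pairings(H):
    """All orbit pairings for H: permutations r of the set of H-orbits with r^2 = id."""
    orbits = [tuple(sorted(orb)) for orb in H.orbits()]
    n = len(orbits)
    pairings = []
    for img in permutations(range(n)):
        if all(img[img[i]] == i for i in range(n)):      # square is the identity
            pairings.append({orbits[i]: orbits[img[i]] for i in range(n)})
    return pairings

# Example with d = 3: H = <(0 1)> acting on {0, 1, 2} has the two orbits {0,1} and {2}.
H = PermutationGroup([Permutation([1, 0, 2])])
for r in orbit_pairings(H):
    print(r)
# Two pairings: the identity, and the pairing swapping the orbits {0,1} and {2}.
```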
# Reading simplicity from a local action diagram {#Sec:SimplictyScoposEtc}
Recall Tits' result, Theorem [Theorem 6](#theorem:Tits4_5){reference-type="ref" reference="theorem:Tits4_5"}: If $G \leq \mathop{\mathrm{Aut}}(T)$ has property $(\mathrm{P}_{})$ then the subgroup $G^+$ generated by arc stabilisers is trivial or simple whenever the action of $G$ is geometrically dense (i.e. $G$ leaves invariant no nonempty proper subtree of $T$ and $G$ fixes no end of $T$). Closed groups $G \leq \mathop{\mathrm{Aut}}(T)$ with property $(\mathrm{P}_{})$ are completely described by their local action diagrams, so it is not surprising that the simplicity of $G^+$ can be read directly from the local action diagram. What is surprising, however, is how easily this can be done. It happens that the invariant subtrees and fixed ends of a tree action $(T,G)$ correspond to combinatorial features of the local action diagram that we call *strongly confluent partial orientations* (or scopos). Again we note that $T$ does not need to be locally finite.
**Definition 19**. A *strongly confluent partial orientation* (or *scopo*) in a local action diagram $\Delta = (\Gamma,(X_a),(G(v)))$ is a subset $O$ of $A\Gamma$ satisfying:
1. For all $a \in O$ we have $\overline{a} \not\in O$ and $|X_a|=1$;
2. [\[item:scopo\]]{#item:scopo label="item:scopo"} For all $a \in O$ we have that $t^{-1}(o(a)) \setminus \{\overline{a}\} \subseteq O$.
The empty set is always a scopo; if the empty scopo is the only scopo of $\Delta$ then $\Delta$ is said to be *irreducible*.
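For a finite local action diagram, these conditions can be checked mechanically. The following minimal Python sketch does so, assuming a dictionary encoding of the arcs (origin, target, reverse arc, and the size of the colour set $X_a$) that is ours and purely illustrative; in the toy example, two vertices are joined by a single edge whose arc $a$ has $|X_a| = 1$, so $\{a\}$ is a nonempty scopo and the diagram is not irreducible.

```python
# Hypothetical encoding of a finite local action diagram: for each arc we store
# its origin and target vertex, its reverse arc, and the size of its colour set.
def is_scopo(O, arcs):
    """Check whether the arc subset O satisfies the two conditions of Definition 19."""
    for a in O:
        data = arcs[a]
        # (1) the reverse arc is not in O and |X_a| = 1
        if data["reverse"] in O or data["colour_set_size"] != 1:
            return False
        # (2) every arc b with t(b) = o(a), other than the reverse of a, lies in O
        incoming = [b for b, bd in arcs.items() if bd["target"] == data["origin"]]
        if any(b not in O for b in incoming if b != data["reverse"]):
            return False
    return True

# Two vertices u, w joined by a single edge {a, abar}, with |X_a| = 1 and |X_abar| = 2.
arcs = {
    "a":    {"origin": "u", "target": "w", "reverse": "abar", "colour_set_size": 1},
    "abar": {"origin": "w", "target": "u", "reverse": "a",    "colour_set_size": 2},
}
print(is_scopo(set(), arcs))      # True: the empty set is always a scopo
print(is_scopo({"a"}, arcs))      # True: a nonempty scopo, so this diagram is not irreducible
print(is_scopo({"abar"}, arcs))   # False: |X_abar| = 2 violates condition (1)
```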
In [@ReidSmith Theorem 1.4] it is noted that the invariant subtrees and fixed ends of $(T, G)$ correspond to scopos of the local action diagram $\Delta$. Under this natural correspondence, the empty scopo (which always exists) corresponds to $T$ (which is always invariant under $G$). Thus $(T,G)$ being geometrically dense is equivalent to $\Delta$ being irreducible.
We can completely characterise all types of scopos that correspond to invariant subtrees and fixed ends of faithful actions $(T,G)$ with Property $(\mathrm{P}_{})$. There are four types of these scopos, and we call them a *stray leaf*, a *focal cycle*, a *horocyclic end* and a *stray half-tree*. Before describing these types of scopos, we note the following.
**Theorem 20** ([@ReidSmith Corollary 1.5]). *If $G \leq \mathop{\mathrm{Aut}}(T)$ has Tits' Property $(\mathrm{P}_{})$ and local action diagram $\Delta$, then the following are equivalent:*
1. *$G$ is geometrically dense;*
2. *[\[item:NoFocalCycle\]]{#item:NoFocalCycle label="item:NoFocalCycle"} $\Delta$ is not a focal cycle and has no stray half-trees, no horocyclic ends and no stray leaves.*
*In particular, if [\[item:NoFocalCycle\]](#item:NoFocalCycle){reference-type="ref" reference="item:NoFocalCycle"} holds then $G^+$ is abstractly simple or trivial.*
This theorem gives a complete characterisation, in terms of the local action diagram, of when Tits' Theorem can be applied to an action with Tits' Property $(\mathrm{P}_{})$.
**Definition 21**. Let $\Delta = (\Gamma,(X_a),(G(v)))$ be a local action diagram. The following are the aforementioned scopos that correspond to fixed ends and proper invariant subtrees.
- If $\Gamma$ is a finite cycle with a cyclic orientation $O$, such that for all $a \in O$ we have $|X_a|=1$, then we say that $\Delta$ is a *focal cycle*.
- A *stray leaf* of $\Delta$ is a leaf $v$ of $\Gamma$ such that $|X_v| = 1$ (or equivalently such that $G(v)$ is trivial).
- A *horocyclic end* of $\Delta$ occurs only when $\Gamma$ is a tree. It is an end $\xi$ of the tree $\Gamma$ such that all the arcs $a \in A\Gamma$ that are directed towards $\xi$ satisfy $|X_a|=1$.
- If $\Gamma \setminus \{a, \overline{a}\}$ is not connected and $\Gamma_a$ is the connected component containing $t(a)$, then $\Gamma_a$ is a *stray half-tree* of $\Delta$ whenever $\Gamma_a$ is a tree that contains no leaves of $\Gamma$, and moreover within $\Gamma_a$ all arcs $b$ orientated towards $t(a)$ satisfy $|X_{b}|=1$.
This characterisation of scopos allows us to quickly determine whether or not $\Delta$ is irreducible in the frequently encountered case where $\Gamma$ is a finite graph that is not a cycle: $\Delta$ is irreducible if and only if $\Delta$ has no stray leaves. Thus, in this situation, if $\Delta$ has no stray leaves then $G^+$ is simple or trivial.
Whether or not $G^+$ is trivial is likewise easy to detect from the local action diagram.
**Proposition 22** ([@ReidSmith Lemma 5.11]). *If $T$ is a tree and $G \leq \mathop{\mathrm{Aut}}(T)$ with associated local action diagram $\Delta = (\Gamma,(X_a),(G(v)))$, then $G^+$ is trivial if and only if $G(v)$ acts freely (i.e. semiregularly) on $X_v$ for all $v \in V\Gamma$.*
Our discussion so far concerned the simplicity of $G^+$. However, in various naturally occurring situations (see for example property [\[item:BoxSimple\]](#item:BoxSimple){reference-type="ref" reference="item:BoxSimple"} of the box product construction in Section [3.3](#Section:BoxProduct){reference-type="ref" reference="Section:BoxProduct"}) it is the case that $G$ itself is simple. Regarding this, we have an almost complete characterisation of simplicity for faithful $(\mathrm{P}_{})$-closed actions. It is almost complete because we must exclude some degenerate cases and we must ensure that the action results in a closed induced action on any invariant subtree.
**Definition 23**. A group $G \leq \mathop{\mathrm{Aut}}(T)$ is *strongly closed* if for every $G$-invariant subtree $T'$ of $T$, the induced action of $G$ on $T'$ is closed.
Being strongly closed is easily achieved. For example, in [@ReidSmith Corollary 6.4], we show that a locally compact $(\mathrm{P}_{})$-closed subgroup of $\mathop{\mathrm{Aut}}(T)$ that acts with translation is always strongly closed.
**Theorem 24** ([@ReidSmith Theorem 1.8]). *If $(T,G)$ is a faithful $(\mathrm{P}_{})$-closed and strongly closed action on a tree $T$, then the following are equivalent:*
1. *$G$ is nondiscrete, abstractly simple, and acts with translation.*
2. *There exists an infinite $G$-invariant subtree $T'$ of $T$ (not necessarily proper) on which $G$ acts faithfully. Furthermore, if the associated local action diagram of $(T',G)$ is $\Delta = (\Gamma,(X_a),(G(v)))$, then $\Gamma$ is a tree; $\Delta$ is irreducible; all the groups $G(v)$ are closed and generated by point stabilisers; and at least one of the groups $G(v)$ is nontrivial.*
# Topological properties of universal groups
In this section we survey a selection of results in [@ReidSmith] concerning various topological properties of $(\mathrm{P}_{})$-closed subgroups of $\mathop{\mathrm{Aut}}(T)$. All statements are with respect to the permutation topology. Of course the topological properties of $(\mathrm{P}_{})$-closed subgroups of $\mathop{\mathrm{Aut}}(T)$ when $T$ is locally finite are well-understood. The novelty in our results is that (i) we make no assumptions about local finiteness, and (ii) our results are typically concerned with deducing 'global' topological statements from 'local' properties that can be found in the local action diagram.
To characterise local compactness and compact generation of 'nondegenerate' $(\mathrm{P}_{})$-closed groups via their local action diagrams, we first need to define a combinatorial feature of local action diagrams called a *cotree*. Let $\Gamma$ be a connected graph. Directed paths $(v_0,\dots,v_n)$ in $\Gamma$ are called *nonbacktracking* if $n = 0, 1$ or for $n \geq 2$ we have that $v_i \not = v_{i+2}$ for all $0 \leq i \leq n-2$. Given an induced subgraph $\Gamma'$ of $\Gamma$, a finite directed nonbacktracking path $(v_0,\dots,v_n)$ in $\Gamma$ such that $v_n \in V\Gamma'$ and $v_i \not\in V\Gamma'$ for $i < n$ is called a *projecting path* from $v_0 \in V\Gamma$ to $\Gamma'$. We say that $\Gamma'$ is a *cotree of $\Gamma$* if it is nonempty and for all $v \in V\Gamma \setminus V\Gamma'$ there is precisely one projecting path $(v_0,\dots,v_n)$ from $v$ to $\Gamma'$ (in particular this means that multiple arcs in $\Gamma$ from any $v_i$ to $v_{i+1}$ are not permitted). Note that if $\Gamma$ is a connected graph that is not a tree then there is always a (unique) smallest cotree of $\Gamma$ and cotrees are connected induced subgraphs that contain this smallest cotree. Given a cotree $\Gamma'$ of $\Gamma$, there is a scopo $O_{\Gamma'}$ associated with $\Gamma'$, consisting of all arcs $a$ satisfying (i) $o(a) \not\in V\Gamma'$ and (ii) $a$ lies on the projecting path from $o(a)$ to $\Gamma'$.
By excluding some degenerate cases, we can characterise local compactness of $(\mathrm{P}_{})$-closed actions via the local action diagram as follows.
**Proposition 25** ([@ReidSmith Proposition 6.3]). *Suppose $T$ is a tree and $G \leq \mathop{\mathrm{Aut}}(T)$. Let $\Delta = (\Gamma,(X_a),(G(v)))$ be the local action diagram for $(T,G)$ and let $\Gamma'$ be the unique smallest cotree of $\Delta$. Suppose further that there is a unique minimal $G$-invariant subtree $T'$ of $T$ that has at least $3$ vertices. Then the following are equivalent.*
1. *$G^{(\mathrm{P}_{})}$ is locally compact.*
2. *For all $a \in AT'$, the arc stabiliser $(G^{(\mathrm{P}_{})})_a$ is compact.*
3. *For all $a \in A\Gamma$ such that $\overline{a} \not\in O_{\Gamma'}$, and for all $x \in X_a$, the orbits of the stabiliser $(G(o(a)))_x$ in its action on $X_{o(a)}$ are finite.*
By excluding some degenerate cases, we can characterise the compact generation of $(\mathrm{P}_{})$-closed actions via their local action diagrams as follows.
**Proposition 26** ([@ReidSmith Proposition 6.5]). *Let $T$ be a tree and suppose $G \leq \mathop{\mathrm{Aut}}(T)$ is closed with all vertex-orbits having unbounded diameter. Let $\Delta = (\Gamma,(X_a),(G(v)))$ be the local action diagram of $(T,G)$. If some arc stabiliser in $G$ is compact, then $G$ and $G^{(\mathrm{P}_{})}$ are locally compact, and the following are equivalent.*
1. *$G$ is compactly generated;*
2. *$G^{(\mathrm{P}_{})}$ is compactly generated;*
3. *there is a unique smallest $G$-invariant subtree $T'$ such that $G$ has finitely many orbits on $VT' \sqcup AT'$ and $G_v$ is compactly generated for each $v \in VT'$;*
4. *there is a unique smallest cotree $\Gamma'$ of $\Delta$ such that $\Gamma'$ is finite and $G(v)$ is compactly generated for each $v \in V\Gamma'$.*
Recall the class $\mathscr{S}$ of nondiscrete, topologically simple, compactly generated, locally compact groups. Let $\mathscr{S}_{td}$ be the class of totally disconnected groups in $\mathscr{S}$. For constructing groups in the class $\mathscr{S}_{td}$, we are typically interested in the situation where the universal group is compactly generated, locally compact, and acts geometrically densely on its associated tree (allowing us to deduce simplicity via Tits' Theorem). For this situation we have the following.
**Theorem 27** ([@ReidSmith Theorem 1.9]). *Suppose $\Delta = (\Gamma,(X_a),(G(v)))$ is a local action diagram. Then $\mathbf{U}(\Delta)$ is compactly generated, locally compact, and acts geometrically densely on its associated tree if and only if $\Delta$ is irreducible, $\Gamma$ is finite, and all groups $G(v)$ are subdegree-finite and compactly generated.*
The following result is particularly useful, since it allows us to construct local action diagrams that immediately yield groups in $\mathscr{S}_{td}$. Again for faithful $(\mathrm{P}_{})$-closed actions it is an almost perfect characterisation of membership of the class $\mathscr{S}_{td}$.
**Theorem 28** ([@ReidSmith Corollary 1.10]). *If $(T,G)$ is a faithful $(\mathrm{P}_{})$-closed and strongly closed action on a tree $T$, then the following are equivalent:*
1. *$G \in \mathscr{S}_{td}$ and $G$ fixes no vertex of $T$.*
2. *[\[item:lad_description:comp_gen\]]{#item:lad_description:comp_gen label="item:lad_description:comp_gen"} There exists a unique smallest $G$-invariant subtree $T'$ of $T$ (not necessarily proper) on which $G$ acts faithfully. Furthermore, if $\Delta = (\Gamma,(X_a),(G(v)))$ is the associated local action diagram of $(T',G)$, then $\Gamma$ is a finite tree; all of the groups $G(v)$ are closed, compactly generated, subdegree-finite and generated by point stabilisers; and for every leaf $v$ of $\Gamma$ the group $G(v)$ is nontrivial.*
To conclude this section we give a theorem that establishes an entirely different universal property of the groups $\mathbf{U}(\Delta)$ within the class $\mathscr{S}_{td}$.
**Theorem 29** ([@ReidSmith Theorem 1.14]). *Let $G_1,\dots,G_n$ be a finite list of nontrivial compactly generated t.d.l.c. groups, such that for each $G_i$ there is a compact open subgroup $U_i$ such that $G_i = \langle g U_i g^{-1}: g \in G_i \rangle$ and $\bigcap_{g \in G_i}gU_ig^{-1}$ is trivial. For example, we can take $G_i \in \mathscr{S}_{td}$ and $U_i$ to be any compact open subgroup. Then there exists $\mathbf{U}(\Delta) \in \mathscr{S}_{td}$ acting continuously on a countable tree $T$, vertex stabilisers $O_1,\dots,O_n$ of $\mathbf{U}(\Delta)$ and compact normal subgroups $K_i$ of $O_i$, such that $O_i \cong K_i \rtimes G_i$ for $1 \le i \le n$.*
# Project ideas and open problems {#Section:FutureWork}
There are a number of directions in which our theory of local action diagrams can be generalised, and many areas where it can be applied; we describe some of them here.
**Project 1**. *The application of local action diagrams to understand the automorphism groups of various types of infinitely ended graphs.*
A connected, locally finite graph $\Gamma$ with infinitely many thin ends (i.e. ends that do not contain infinitely many disjoint rays) and only countably many thick ends (i.e. ends that contain infinitely many disjoint rays) is known by work of Carsten Thomassen and Wolfgang Woess (see [@ThomassenWoess]) to resemble a tree in some precise way. Thomassen and Woess' work relies heavily on Dicks and Dunwoody's theory of structure trees (see [@DicksDunwoody]), and it is via this theory that we can see the 'tree-like' nature of $\Gamma$. See [@RoggiSurvey] for an 'accessible' introduction to these ideas.
Now the automorphism group $\mathop{\mathrm{Aut}}(\Gamma)$ will act on the structure tree $T$ of $\Gamma$, and this action can be studied via local action diagrams. We give an example of this in [@ReidSmith §7.2], where we find a complete description of all automorphism groups of simple, nontrivial, vertex-transitive graphs with vertex connectivity one: they are precisely the universal groups of a certain type of local action diagram.
The automorphism groups of many other classes of graphs could be understood in this way.
**Project 2**. *Generalise the ideas behind local action diagrams to better understand $(\mathrm{P}_{k})$-closures of groups acting on trees.*
This project idea appears in our paper ([@ReidSmith §8 Question 2]). Such a project would be a significant undertaking, given the complexities of Tornier's generalisation of the Burger--Mozes groups (see Section [3.4](#Sec:FurtherGens){reference-type="ref" reference="Sec:FurtherGens"}). Nevertheless, it is our opinion that a usable theory could be developed, perhaps using a modified version of the local action diagram built around $k$-arcs rather than arcs.
**Project 3**. *Find further examples of locally determined global properties of $(T,G)$. [\[Q:find_more_lad_features\]]{#Q:find_more_lad_features label="Q:find_more_lad_features"}*
This is [@ReidSmith §8 Question 5]. A long term research theme could be built around continuing to find global properties of $G \leq \mathop{\mathrm{Aut}}(T)$ that are perfectly characterised by the associated local action diagram. Such properties could be found by looking for global properties of $G$ that are characterised by properties of $G^{(\mathrm{P}_{})}$ in its action on $T$. In [@ReidSmith] we call such properties *locally determined global properties of $(T,G)$*. As we have seen, geometrically dense actions are a locally determined global property. An important measure of success here will be that the locally determined global properties should be expressed in terms of features of the local action diagram.
**Project 4**. *Write software to search local action diagrams for known features that characterise the locally determined global properties of actions $(T,G)$.*
As we saw in Section [5.2](#Sec:SomeConsequences){reference-type="ref" reference="Sec:SomeConsequences"}, for given natural numbers $d,n$ there are only finitely many isomorphism classes of local action diagrams for actions $(T,G)$ where $G$ has at most $n$ vertex orbits on $T$, and $T$ is locally finite with every valency bounded by $d$. Constructing all possible local action diagrams for a given pair $(d,n)$ is then possible for reasonable choices of $(d,n)$, since these are finite graphs decorated with finite groups and finite colour sets. Algorithms can be created to search these constructions for known locally determined global properties. In this way, we could obtain lists of $\mathop{\mathrm{Aut}}(T_d)$-conjugacy classes of, for example, vertex transitive actions on $T_d$ that have some locally determined global property. As new properties come to light via Project [\[Q:find_more_lad_features\]](#Q:find_more_lad_features){reference-type="ref" reference="Q:find_more_lad_features"}, more software solutions will be needed.
In [@ReidSmith §8 Question 2] we give a somewhat related project, asking for the asymptotics of the number $N_d$ of conjugacy classes of vertex-transitive $(\mathrm{P}_{})$-closed subgroups of $\mathop{\mathrm{Aut}}(T_d)$ as a function of $d$.
**Project 5**. *Create analogies of the constructions in Section [3.4](#Sec:FurtherGens){reference-type="ref" reference="Sec:FurtherGens"}, for universal groups of local action diagrams.*
In Section [3.4](#Sec:FurtherGens){reference-type="ref" reference="Sec:FurtherGens"} we outlined some generalisations of either the Burger--Mozes groups or the box product construction. For each of these generalisations there are natural ways to modify their definitions so that they can be applied to local action diagrams. For example, inspired by Le Boudec groups one could allow finitely many (or boundedly many) singularities for local actions in local action diagrams, and the local action at these singularities could be restricted to give an analogy to Le Boudec's restricted universal groups.
For all of these generalisations, it would be interesting to understand what permutational and topological properties the resulting 'modified universal' groups possess.
**Further projects 1**. As this is an introductory note, we have omitted several aspects of the theory (e.g. local subaction diagrams as a tool to better understand subgroups of universal groups of local action diagrams). There are several interesting questions arising from these omitted topics. We refer the interested reader to [@ReidSmith §8], where further open questions and project ideas are given.
# References
Uri Bader, Pierre-Emmanuel Caprace, Tsachik Gelander and Shahar Mozes, Simple groups without lattices. *Bull. Lond. Math. Soc.* 44 (2012), 55--67.
Christopher Banks, Murray Elder and George A. Willis, Simple groups of automorphisms of trees determined by their actions on finite subtrees. *J. Group Theory* 18 (2015), 235--261.
Jens Bossaert and Tom De Medts, Topological and algebraic properties of universal groups for right-angled buildings. *Forum Math.* 33 (2021), 867--888.
Marc Burger and Shahar Mozes, Groups acting on trees: from local to global structure. *Publ. Math. IHÉS* 92 (2000), 113--150.
Pierre-Emmanuel Caprace and Nicolas Monod, Decomposing locally compact groups into simple pieces. *Math. Proc. Camb. Phil. Soc.* 150 (2011), 97--128.
Pierre-Emmanuel Caprace, Non-discrete simple locally compact groups. In: *European congress of mathematics. Proceedings of the 7th ECM (7ECM) congress, Berlin, Germany, July 18--22, 2016,* (European Mathematical Society (EMS), Zürich, 2018), pp. 333--354.
Pierre-Emmanuel Caprace and Tom De Medts, Simple locally compact groups acting on trees and their germs of automorphisms. *Transformation Groups*, 16 (2011) 375--411.
Ruth Camm, Simple free products. *J. London Math. Soc.* 28 (1953), 66--76.
Warren Dicks and M. J. Dunwoody, *Groups acting on graphs*. Cambridge Studies in Advanced Mathematics (Cambridge University Press, 1989).
The GAP Group, GAP -- Groups, Algorithms, and Programming, Version 4.8.7 (2017), <http://www.gap-system.org>
David M. Goldschmidt, Automorphisms of trivalent graphs. *Ann. Math.* 111 (1980), 377--406.
H. A. Jung and M. E. Watkins, On the structure of infinite vertex-transitive graphs. *Discrete Math.* 18 (1977), 45--53.
Adrien Le Boudec, Groups acting on trees with almost prescribed local action. *Commentarii Mathematici Helvetici* 91 (2016), 253--293.
Waltraud Lederle, Coloured Neretin groups. *Groups Geom. Dyn.* 13 (2019), 467--510.
Rögnvaldur G. Möller, Primitivity and ends of graphs. *Combinatorica* 14 (1994), 477--484.
Rögnvaldur G. Möller, Groups acting on locally finite graphs---a survey of the infinitely ended case. In: *Groups '93 Galway/St. Andrews. Proceedings of the international conference, Galway, Ireland, August 1--14, 1993. Volume 2.* Lond. Math. Soc. Lect. Note Ser. 212, (Cambridge University Press, 1995), pp. 426--456.
Rögnvaldur G. Möller, Structure theory of totally disconnected locally compact groups via graphs and permutations. *Canad. J. Math.* 54 (2002), 795--827.
Primož Potočnik and Pablo Spiga, Lifting a prescribed group of automorphisms of graphs. *Proc. Am. Math. Soc.* 147 (2019), 3787--3796.
A. Yu. Ol'shanskiı̆, *Geometry of defining relations in groups*. Mathematics and its Applications (Soviet Series) 70 (Kluwer Acad. Publ., Dordrecht, 1991).
Colin D. Reid and Simon M. Smith (with an appendix by Stephan Tornier), Groups acting on trees with Tits' independence property (P). *Preprint*, (2022) arXiv:2002.11766v2.
Jean-Pierre Serre, *Trees*. Springer Monogr. in Math. (Springer, Berlin, 2003).
Sam Shepherd (with appendix by Giles Gardam and Daniel J. Woodhouse), Two Generalisations of Leighton's Theorem. *Preprint*, (2019) to appear in *Groups, Geometry, and Dynamics.* arXiv:1908.00830.
Simon M. Smith, Infinite primitive directed graphs. *J. Algebr. Comb.* 31 (2010), 131--141.
Simon M. Smith, Subdegree growth rates of infinite primitive permutation groups. *J. Lond. Math. Soc.* 82 (2010), 526--548.
Simon M. Smith, A product for permutation groups and topological groups. *Duke Math. J.* 166 (2017), 2965--2999.
Simon M. Smith, The structure of primitive permutation groups with finite suborbits and t.d.l.c. groups admitting a compact open subgroup that is maximal. *Preprint*, (2019) arXiv:1910.13624v2.
Carsten Thomassen and Wolfgang Woess, Vertex-transitive graphs and accessibility. *J. Comb. Theory, Ser. B* 58 (1993), 248--268.
Stephan Tornier, Groups acting on trees with prescribed local action. *Preprint*, (2020) arXiv:2002.09876v3.
Jacques Tits, Sur le groupe des automorphismes d'un arbre. In: *Essays on topology and related topics (Mémoires dédiés à Georges de Rham)*, (Springer, New York, 1970), pp. 188--211.
Richard Weiss, s-Transitive graphs. *Algebraic methods in graph theory* 25 (1978), 827--847.
[^1]: We use the notation $\Pi(\Gamma, v)$ for the fundamental group, rather than the more common $\pi(\Gamma, v)$, because for us $\pi$ will be reserved for projection maps.
[^2]: For a group $G$ acting on a graph $\Gamma$, the *neighbours* of a vertex $v$ are the vertices in $\Gamma$ at distance one from $v$. Each vertex stabiliser $G_v$ induces a permutation group on the neighbours of $v$; if every such induced permutation group has some permutational property $\mathcal{P}$ (e.g. transitive, primitive), we say that $G$ is *locally $\mathcal{P}$*.
[^3]: We will always think of $\mathop{\mathrm{Aut}}(T)$ as a topological group under the *permutation topology*, defined below. A topological group is *topologically simple* if it has no nontrivial proper closed normal subgroups; it is *abstractly simple* if it has no nontrivial proper normal subgroups. When we write simple, we will always mean abstractly simple.
[^4]: A *lattice* $\Lambda$ in a locally compact group $G$ is a discrete subgroup such that the quotient space $G/\Lambda$ admits a $G$-invariant finite measure; the lattice is *uniform* if the quotient space $G/\Lambda$ is compact.
---
abstract: |
An interesting question in the field of martingale optimal transport, is to determine the martingale with prescribed initial and terminal marginals which is most correlated to Brownian motion. Under a necessary and sufficient irreducibility condition, the answer to this question is given by a *Bass martingale*. At an intuitive level, the latter can be imagined as an order-preserving and martingale-preserving space transformation of an underlying Brownian motion starting with an initial law $\alpha$ which is tuned to ensure the marginal constraints.
In this article we study how to determine the aforementioned initial condition $\alpha$. This is done by a careful study of what we dub the *Bass functional*. In our main result we show the equivalence between the existence of minimizers of the Bass functional and the existence of a Bass martingale with prescribed marginals. This complements the convex duality approach in a companion paper by the present authors together with M. Beiglböck, with a purely variational perspective. We also establish an infinitesimal version of this result, and furthermore prove the displacement convexity of the Bass functional along certain generalized geodesics in the $2$-Wasserstein space.
*Keywords:* Optimal transport, Brenier's theorem, Benamou--Brenier, Stretched Brownian motion, Bass martingale.
*Mathematics Subject Classification (2010):* Primary 60G42, 60G44; Secondary 91G20.
address:
- Faculty of Mathematics, University of Vienna
- Faculty of Mathematics, University of Vienna
- Faculty of Mathematics, University of Vienna
author:
- Julio Backhoff-Veraguas
- Walter Schachermayer
- Bertram Tschiderer
bibliography:
- joint_biblio.bib
title: The Bass functional of martingale transport
---
[^1]
# Introduction
## Martingale optimization problem
Let $\mu, \nu$ be elements of $\mathcal{P}_{2}(\mathbb{R}^{d})$, the space of probability measures on $\mathbb{R}^{d}$ with finite second moments. Assume that $\mu, \nu$ are in convex order, denoted by $\mu \preceq_{\textnormal{c}}\nu$, and meaning that $\int \phi \, d\mu \leqslant \int \phi \, d\nu$ holds for all convex functions $\phi \colon \mathbb{R}^{d}\rightarrow \mathbb{R}$. As in [@BaBeHuKa20; @BBST23] we consider the martingale optimization problem $$\label{MBMBB} \tag{MBB}
MT(\mu, \nu) \coloneqq
\inf_{\substack{M_{0} \sim \mu, \, M_{1} \sim \nu, \\ M_{t} = M_{0} + \int_{0}^{t} \sigma_{s} \, dB_{s}}}
\mathbb{E}\Big[\int_{0}^{1} \vert \sigma_{t} - I_{d} \vert^{2}_{\textnormal{HS}} \, dt\Big],$$ where $B$ is Brownian motion on $\mathbb{R}^{d}$ and $\vert \cdot \vert_{\textnormal{HS}}$ denotes the Hilbert--Schmidt norm. The abbreviation "MBB" stands for "Martingale Benamou--Brenier" and this designation is motivated from the fact that [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"} can be seen as a martingale counterpart of the classical formulation in optimal transport by Benamou--Brenier [@BeBr99], see [@BaBeHuKa20; @BBST23]. The problem [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"} is equivalent to maximizing the covariance between $M$ and Brownian motion subject to the marginal conditions $M_{0} \sim \mu$ and $M_{1} \sim \nu$, to wit $$\label{MBMBB2}
P(\mu, \nu) \coloneqq
\sup_{\substack{M_0 \sim \mu, \, M_1 \sim \nu, \\ M_t = M_0 + \int_0^t \sigma_s \, dB_s}}
\mathbb{E}\Big[\int_0^1 \textnormal{tr} (\sigma_t) \, dt\Big].$$ Both problems have the same optimizer and the values are related via $$MT(\mu,\nu) = d + \int \vert y \vert^{2} \, d\nu(y) - \int \vert x \vert^{2} \, d\mu(x) -2 P(\mu,\nu).$$ As shown in [@BaBeHuKa20], the problem [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"} admits a strong Markov martingale $\hat{M}$ as a unique optimizer, which is called the *stretched Brownian motion* from $\mu$ to $\nu$ in [@BaBeHuKa20].
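Let us briefly indicate where the identity relating $MT(\mu,\nu)$ and $P(\mu,\nu)$ comes from. Expanding the Hilbert--Schmidt norm gives $\vert \sigma_{t} - I_{d} \vert^{2}_{\textnormal{HS}} = \vert \sigma_{t} \vert^{2}_{\textnormal{HS}} - 2 \, \textnormal{tr}(\sigma_{t}) + d$, while the Itô isometry and the orthogonality of martingale increments yield $$\mathbb{E}\Big[\int_{0}^{1} \vert \sigma_{t} \vert^{2}_{\textnormal{HS}} \, dt\Big]
= \mathbb{E}\big[\vert M_{1} - M_{0} \vert^{2}\big]
= \mathbb{E}\big[\vert M_{1} \vert^{2}\big] - \mathbb{E}\big[\vert M_{0} \vert^{2}\big]
= \int \vert y \vert^{2} \, d\nu(y) - \int \vert x \vert^{2} \, d\mu(x).$$ Hence, among admissible martingales, only the term $-2 \, \mathbb{E}[\int_{0}^{1} \textnormal{tr}(\sigma_{t}) \, dt]$ varies, so that minimizing in [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"} amounts to maximizing in [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"}; this explains both the identity above and the fact that the two problems share the same optimizer.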
## Bass martingales and structure of stretched Brownian motion
Owing to the work [@BBST23] it is known that the optimality property of stretched Brownian motion is related to a structural / geometric description. For its formulation we start with the following definition.
**Definition 1**. For probability measures $\mu, \nu$ we say that the pair $(\mu,\nu)$ is *irreducible* if for all measurable sets $A, B \subseteq \mathbb{R}^{d}$ with $\mu(A), \nu(B)>0$ there is a martingale $X= (X_{t})_{0 \leqslant t \leqslant 1}$ with $X_{0} \sim \mu$, $X_{1} \sim \nu$ such that $\mathbb{P}(X_{0}\in A, X_{1}\in B) >0$.
We remark that in the classical theory of optimal transport one can always find couplings $(X_{0},X_{1})$ of $(\mu,\nu)$ such that $\mathbb{P}(X_{0}\in A, X_{1}\in B) > 0$, for all measurable sets $A, B \subseteq \mathbb{R}^{d}$ with $\mu(A),\nu(B) > 0$; e.g., by letting $(X_{0},X_{1})$ be independent. In martingale optimal transport this property may fail.
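For instance, take $\mu = \nu = \frac{1}{2}(\delta_{-1} + \delta_{1})$ on the real line: since the marginals coincide, the conditional Jensen inequality (applied to a strictly convex function) forces any martingale coupling to satisfy $X_{1} = X_{0}$ almost surely, so that $\mathbb{P}(X_{0} \in \{-1\}, \, X_{1} \in \{1\}) = 0$ and the pair $(\mu,\nu)$ is not irreducible.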
Next we recall the following concept from [@Ba83; @BaBeHuKa20; @BBST23].
**Definition 2**. Let $B = (B_{t})_{0 \leqslant t \leqslant 1}$ be Brownian motion on $\mathbb{R}^{d}$ with $B_{0} \sim \hat{\alpha}$, where $\hat{\alpha}$ is an arbitrary element of $\mathcal{P}(\mathbb{R}^{d})$, the space of probability measures on $\mathbb{R}^{d}$. Let $\hat{v} \colon \mathbb{R}^{d}\rightarrow \mathbb{R}$ be convex such that $\nabla \hat{v}(B_{1})$ is square-integrable. We call $$\label{def:BassMarti_intro_eq}
\hat{M}_{t} \coloneqq
\mathbb{E}[\nabla \hat{v}(B_{1}) \, \vert \, \sigma(B_{s} \colon s \leqslant t)]
= \mathbb{E}[\nabla \hat{v}(B_{1}) \, \vert \, B_{t}], \qquad 0 \leqslant t \leqslant 1$$ a *Bass martingale* with *Bass measure* $\hat{\alpha}$ joining $\mu = \operatorname{Law}(\hat{M}_{0})$ with $\nu = \operatorname{Law}(\hat{M}_{1})$.
The reason behind this terminology is that Bass [@Ba83] used this construction (with $d=1$ and $\hat{\alpha}$ a Dirac measure) in order to derive a solution of the Skorokhod embedding problem.
In [@BBST23 Theorem 1.3] it is shown that under the irreducibility assumption on the pair $(\mu,\nu)$ there is a unique Bass martingale $\hat{M}$ from $\mu$ to $\nu$, i.e., satisfying $\hat{M}_{0} \sim \mu$ and $\hat{M}_{1} \sim \nu$:
**Theorem 3**. *Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$ and assume that $(\mu,\nu)$ is irreducible. Then the following are equivalent for a martingale $\hat{M}=(\hat{M}_{t})_{0 \leqslant t \leqslant 1}$ with $\hat{M}_{0} \sim \mu$ and $\hat{M}_{1} \sim \nu$:*
1. *[\[MainTheorem_1\]]{#MainTheorem_1 label="MainTheorem_1"} $\hat{M}$ is stretched Brownian motion, i.e., the optimizer of [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"}.*
2. *[\[MainTheorem_2\]]{#MainTheorem_2 label="MainTheorem_2"} $\hat{M}$ is a Bass martingale.*
Since, for probability measures $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$, stretched Brownian motion always exists by [@BaBeHuKa20 Theorem 1.5], the above theorem states that the existence of a Bass martingale follows from --- and is in fact equivalent to --- the irreducibility assumption on the pair $(\mu,\nu)$.
Denoting by $\ast$ the convolution operator and by $\gamma$ the standard Gaussian measure on $\mathbb{R}^{d}$, we remark that the convex function $\hat{v}$ and the Bass measure $\hat{\alpha}$ from Definition [Definition 2](#def:BassMarti_intro){reference-type="ref" reference="def:BassMarti_intro"} satisfy the identities $$\label{eq_def_id_bm}
(\nabla \hat{v} \ast \gamma)(\hat{\alpha}) = \mu
\qquad \textnormal{ and } \qquad
\nabla \hat{v}(\hat{\alpha} \ast \gamma) = \nu.$$ We formalize the fundamental relations [\[eq_def_id_bm\]](#eq_def_id_bm){reference-type="eqref" reference="eq_def_id_bm"} and their correspondence with Bass martingales in Lemma [Lemma 10](#lem_eq_co_bm_13){reference-type="ref" reference="lem_eq_co_bm_13"} below.
Throughout we write $\gamma^{t}$ for the $d$-dimensional centered Gaussian distribution with covariance matrix $tI_{d}$ and set $\hat{v}_{t} \coloneqq \hat{v} \ast \gamma^{1-t} \colon \mathbb{R}^{d}\rightarrow \mathbb{R}$, for $0 \leqslant t \leqslant 1$. In these terms, [\[def:BassMarti_intro_eq\]](#def:BassMarti_intro_eq){reference-type="eqref" reference="def:BassMarti_intro_eq"} amounts to $$\hat{M}_{t} = \nabla \hat{v}_{t}(B_{t}), \qquad 0 \leqslant t \leqslant 1.$$
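This last identity follows from an elementary computation: writing $B_{1} = B_{t} + (B_{1} - B_{t})$, where $B_{1} - B_{t} \sim \gamma^{1-t}$ is independent of $B_{t}$, and differentiating under the integral sign, we obtain $$\mathbb{E}[\nabla \hat{v}(B_{1}) \, \vert \, B_{t} = x]
= \int \nabla \hat{v}(x+y) \, \gamma^{1-t}(dy)
= \nabla \big( \hat{v} \ast \gamma^{1-t} \big)(x)
= \nabla \hat{v}_{t}(x).$$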
## Main results
In the following we denote by $\textnormal{MCov}$ the *maximal covariance* between two probability measures $p_{1},p_{2} \in \mathcal{P}_{2}(\mathbb{R}^{d})$, defined as $$\label{eq_def_mcov}
\textnormal{MCov}(p_{1},p_{2}) \coloneqq \sup_{q \in \mathsf{Cpl}(p_{1},p_{2})} \int \langle x_{1},x_{2} \rangle \, q(dx_{1},dx_{2}),$$ where $\mathsf{Cpl}(p_{1},p_{2})$ is the set of all couplings $q \in \mathcal{P}(\mathbb{R}^{d}\times \mathbb{R}^{d})$ between $p_{1}$ and $p_{2}$, i.e., probability measures on $\mathbb{R}^{d}\times \mathbb{R}^{d}$ with first marginal $p_{1}$ and second marginal $p_{2}$. As is well known, maximizing the covariance between $p_{1}$ and $p_{2}$ is equivalent to minimizing their expected squared distance; see also [\[def_eq_was\]](#def_eq_was){reference-type="eqref" reference="def_eq_was"} below.
**Definition 4**. We introduce the *Bass functional* $$\label{def.bass.func}
\mathcal{P}_{2}(\mathbb{R}^{d}) \ni \alpha \longmapsto \mathcal{V}(\alpha)
\coloneqq \textnormal{MCov}(\alpha \ast \gamma, \nu) - \textnormal{MCov}(\alpha,\mu).$$
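For finitely supported measures, the Bass functional can be approximated numerically, which may help to build intuition. The following Python sketch (ours and purely illustrative, with hypothetical function names) evaluates the two maximal covariances in [\[def.bass.func\]](#def.bass.func){reference-type="eqref" reference="def.bass.func"} by solving the discrete transport linear program from [\[eq_def_mcov\]](#eq_def_mcov){reference-type="eqref" reference="eq_def_mcov"} with SciPy, approximating $\alpha \ast \gamma$ by a Gaussian-smoothed empirical sample of $\alpha$.

```python
import numpy as np
from scipy.optimize import linprog

def mcov(x, a, y, b):
    """MCov between the discrete measures sum_i a_i delta_{x_i} and sum_j b_j delta_{y_j},
    computed by solving the associated optimal transport linear program."""
    n, m = len(a), len(b)
    C = x @ y.T                                   # C[i, j] = <x_i, y_j>
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0          # row sums of the coupling equal a
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                   # column sums of the coupling equal b
    res = linprog(-C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return -res.fun

def bass_functional(alpha_pts, alpha_w, mu_pts, mu_w, nu_pts, nu_w, n_smooth=500, seed=0):
    """Monte Carlo approximation of V(alpha) = MCov(alpha * gamma, nu) - MCov(alpha, mu)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(alpha_w), size=n_smooth, p=alpha_w)
    smoothed = alpha_pts[idx] + rng.standard_normal((n_smooth, alpha_pts.shape[1]))
    weights = np.full(n_smooth, 1.0 / n_smooth)
    return mcov(smoothed, weights, nu_pts, nu_w) - mcov(alpha_pts, alpha_w, mu_pts, mu_w)

# Example in d = 1 with mu = delta_0, nu = (delta_{-1} + delta_1)/2 and alpha = delta_0:
# the value is roughly E|Z| ~ 0.80 for a standard Gaussian Z, up to Monte Carlo error.
mu_pts, mu_w = np.array([[0.0]]), np.array([1.0])
nu_pts, nu_w = np.array([[-1.0], [1.0]]), np.array([0.5, 0.5])
print(bass_functional(np.array([[0.0]]), np.array([1.0]), mu_pts, mu_w, nu_pts, nu_w))
```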
In our first main result we derive a novel formulation of problem [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"}, which characterizes the Bass measure $\hat{\alpha}$ in [\[eq_def_id_bm\]](#eq_def_id_bm){reference-type="eqref" reference="eq_def_id_bm"} as the optimizer of the Bass functional [\[def.bass.func\]](#def.bass.func){reference-type="eqref" reference="def.bass.func"}.
**Theorem 5**. *Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$. Then $$\label{eq:variational_alpha_mc}
P(\mu,\nu)
= \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \mathcal{V}(\alpha).$$ The right-hand side of [\[eq:variational_alpha_mc\]](#eq:variational_alpha_mc){reference-type="eqref" reference="eq:variational_alpha_mc"} is attained by $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$ if and only if there is a Bass martingale from $\mu$ to $\nu$ with Bass measure $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$.*
The proof of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"} is given in Section [3](#sec2avcob){reference-type="ref" reference="sec2avcob"}. In Section [4](#sec3aivot){reference-type="ref" reference="sec3aivot"} we will show the following infinitesimal version of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"}, which constitutes our second main result:
**Theorem 6**. *Let $(M_{t})_{0 \leqslant t \leqslant 1}$ be an $\mathbb{R}^{d}$-valued martingale bounded in $L^{2}$, which is given by the stochastic integral $$M_{t} = M_{0} + \int_{0}^{t} \sigma_{s} \, dB_{s}, \qquad 0 \leqslant t \leqslant 1,$$ where $(\sigma_{t})_{0 \leqslant t \leqslant 1}$ is a progressively measurable process. Denote by $\mu_{t}$ the law of $M_{t}$. For Lebesgue-a.e. $0 \leqslant t \leqslant 1$ we have, for each $\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})$, the inequality $$\label{eq_m_theo_ms2}
\mathbb{E}\big[ \textnormal{tr}(\sigma_{t})\big]
\leqslant \liminf_{h \rightarrow 0} \tfrac{1}{h}
\Big( \textnormal{MCov}(\alpha \ast \gamma^{h},\mu_{t+h}) - \textnormal{MCov}(\alpha,\mu_{t}) \Big).$$*
We note that, for a Bass martingale $(\hat{M}_{t})_{0 \leqslant t \leqslant 1}$ of the form $$d\hat{M}_{t} = \hat{\sigma}_{t}(\hat{M}_{t}) \, dB_{t},$$ with associated Bass measure $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$ and diffusion function $\hat{\sigma}_{t} \colon \mathbb{R}^{d}\rightarrow \mathbb{R}^{d \times d}$, we have, for Lebesgue-a.e. $0 \leqslant t \leqslant 1$, the equality $$\mathbb{E}\big[ \textnormal{tr}\big(\hat{\sigma}_{t}(\hat{M}_{t})\big)\big]
= \frac{d}{dt} \, \textnormal{MCov}(\hat{\alpha} \ast \gamma^{t},\hat{\mu}_{t}),$$ where $\hat{\mu}_{t} = \operatorname{Law}(\hat{M}_{t})$. This exhibits the sharpness of [\[eq_m\_theo_ms2\]](#eq_m_theo_ms2){reference-type="eqref" reference="eq_m_theo_ms2"} and shows that Theorem [Theorem 6](#theo_ms2){reference-type="ref" reference="theo_ms2"} is an infinitesimal analogue of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"}.
In our final main result we discuss convexity properties of the Bass functional $\alpha \mapsto \mathcal{V}(\alpha)$ defined in [\[def.bass.func\]](#def.bass.func){reference-type="eqref" reference="def.bass.func"}.
**Theorem 7**. *We have the following results:*
1. *If $d = 1$, then $\mathcal{V}$ is displacement convex, i.e., convex along the geodesics given by McCann interpolations [@McCgas].*
2. *If $d \geqslant 1$, then $\mathcal{V}$ is displacement convex along generalized geodesics with base $\mu$.*
The proof of this result, together with a discussion on the various forms of convexity stated therein (see e.g. [@AGS08; @Vi03; @McCgas]), and a treatment of the strict convexity of $\mathcal{V}$, are deferred to Section [5](#sec4dcotbf){reference-type="ref" reference="sec4dcotbf"}. We merely stress here that the Bass functional fails to be convex, and can even be concave, if we consider convex combinations of measures in the usual linear sense.
## Related literature
Optimal transport as a field in mathematics goes back to Monge [@Mo81] and Kantorovich [@Ka42], who established its modern formulation. The seminal results of Benamou, Brenier, and McCann [@Br87; @Br91; @BeBr99; @Mc94; @Mc95] form the basis of the modern theory, with striking applications in a variety of different areas, see the monographs [@Vi03; @Vi09; @AmGi13; @Sa15].
We are interested in transport problems where the transport plan satisfies an additional martingale constraint. This additional requirement arises naturally in finance (e.g. [@BeHePe12]), but is of independent mathematical interest. For example there are notable consequences for the study of martingale inequalities (e.g. [@BoNu13; @HeObSpTo12; @ObSp14]) and the Skorokhod embedding problem (e.g. [@BeCoHu14; @KaTaTo15; @BeNuSt19]). Early articles on this topic of *martingale optimal transport* include [@HoNe12; @BeHePe12; @TaTo13; @GaHeTo13; @DoSo12; @CaLaMa14]. The study of irreducibility of a pair of marginals $(\mu,\nu)$ was initiated by Beiglböck and Juillet [@BeJu16] in dimension one and extended in the works [@GhKiLi19; @DeTo17; @ObSi17] to multiple dimensions.
Continuous-time martingale optimal transport problems have received much attention in recent years; see e.g. [@BeHeTo15; @CoObTo19; @GhKiPa19; @GuLoWa19; @GhKiLi20; @ChKiPrSo20; @GuLo21]. In this paper we concern ourselves with the specific structure given by the martingale Benamou--Brenier problem, introduced in [@BaBeHuKa20] in probabilistic language and in [@HuTr17] in PDE language, and subsequently studied from the point of view of duality theory in [@BBST23]. In the context of market impact in finance, the same kind of problem appeared independently in a work by Loeper [@Lo18].
It was also shown in [@BaBeHuKa20] that the optimizer $\hat{M}$ of the problem [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"} is the process whose evolution follows the movement of Brownian motion as closely as possible with respect to an *adapted Wasserstein distance* (see e.g. [@BaBaBeEd19a; @Fo22a]) subject to the given marginal constraints.
# Preliminaries
In this short section we give a more detailed review of some of the main results in [@BBST23], which will be useful for the coming discussions and proofs.
## Dual viewpoint
As established in [@BBST23 Theorem 1.4], the problem [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"} admits a dual formulation with a particularly appealing structure:
**Theorem 8**. *Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$. The value $P(\mu,\nu)$ of the problem [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"} is equal to $$\label{WeakDual}
D(\mu,\nu) \coloneqq \inf_{\substack{\psi \in L^{1}(\nu), \\ \textnormal{$\psi$ convex}}} \Big( \int \psi \, d\nu - \int (\psi^{\ast} \ast \gamma)^{\ast} \, d \mu \Big)$$ and is attained by a convex function $\hat{\psi}$ if and only if $(\mu,\nu)$ is irreducible. In this case, the (unique) optimizer to [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"}, [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"} is given by the Bass martingale with associated convex function $\hat{v} = \hat{\psi}^{\ast}$ and Bass measure $\hat{\alpha} = \nabla (\hat{\psi}^{\ast} \ast \gamma)^{\ast}(\mu)$.*
Note that the symbol $\ast$ used as a superscript denotes the convex conjugate of a function. We also remark that attainment of $D(\mu,\nu)$ has to be understood in a "relaxed" sense, since the optimizer $\hat{\psi}$ is not necessarily $\nu$-integrable; see [@BBST23 Proposition 4.2].
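In dimension one, the dual objective can be evaluated numerically for any candidate $\psi$ by discretizing the Legendre transform on a grid and performing the Gaussian convolution by quadrature; since $D(\mu,\nu)$ is an infimum, every candidate yields an upper bound for $P(\mu,\nu)$. The sketch below is entirely ours (grid, quadrature order and the quadratic test function are arbitrary illustrative choices).

```python
import numpy as np

def legendre(grid, f_vals, eval_pts):
    """Discrete Legendre transform f*(y) = sup_x (x*y - f(x)), x restricted to grid."""
    return np.max(eval_pts[:, None] * grid[None, :] - f_vals[None, :], axis=1)

def dual_objective(psi, mu_samples, nu_samples, grid):
    """Evaluate int psi d(nu) - int (psi* conv gamma)* d(mu) in dimension one."""
    psi_star = legendre(grid, psi(grid), grid)              # psi* on the grid
    # (psi* * gamma)(x) = E[psi*(x + Z)], Z ~ N(0,1), by Gauss-Hermite quadrature;
    # the grid must be wide enough for the linear interpolation to be accurate.
    z, w = np.polynomial.hermite_e.hermegauss(40)
    w = w / w.sum()
    smoothed = np.array([np.dot(w, np.interp(x + z, grid, psi_star)) for x in grid])
    conj = legendre(grid, smoothed, mu_samples)             # (psi* * gamma)* at mu-points
    return psi(nu_samples).mean() - conj.mean()

# toy run with psi(y) = y^2/2, for which every transform above is explicit
grid = np.linspace(-10.0, 10.0, 2001)
rng = np.random.default_rng(5)
mu_s, nu_s = rng.normal(0, 1, 2000), rng.normal(0, 1.5, 2000)
print(dual_objective(lambda y: 0.5 * y ** 2, mu_s, nu_s, grid))
```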
## Static martingale optimal transport {#subs_smot}
We fix $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$ and consider a static / discrete-time version of the continuous-time martingale optimization problem [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"}, to wit $$\label{eq_primal}
\tilde{P}(\mu,\nu)
\coloneqq \sup_{\pi \in \mathsf{MT}(\mu,\nu)} \int \textnormal{MCov}(\pi_{x},\gamma) \, \mu(dx).$$ The collection of martingale transports $\mathsf{MT}(\mu,\nu)$ consists of those couplings $\pi \in \mathsf{Cpl}(\mu,\nu)$ that satisfy $\operatorname{bary}(\pi_{x}) \coloneqq \int y \, \pi_{x}(dy) = x$, for $\mu$-a.e. $x \in \mathbb{R}^{d}$. Here, the family of probability measures $\{\pi_{x}\}_{x\in\mathbb{R}^{d}} \subseteq \mathcal{P}_{2}(\mathbb{R}^{d})$ is obtained by disintegrating the coupling $\pi$ with respect to its first marginal $\mu$, i.e., $\pi(dx,dy) = \pi_{x}(dy) \, \mu(dx)$.
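The defining barycenter constraint of $\mathsf{MT}(\mu,\nu)$ is easy to test for a discrete candidate coupling; the following one-dimensional helper is a sketch of ours (the matrix `pi` carries the weights $\pi(\{x_i\}\times\{y_j\})$).

```python
import numpy as np

def is_martingale_coupling(pi, x, y, tol=1e-8):
    """Check the barycenter condition bary(pi_x) = x for a discrete coupling.

    pi: (n, m) array of nonnegative weights; x: (n,) support of mu; y: (m,) support of nu.
    """
    mu_weights = pi.sum(axis=1)                       # first marginal of pi
    bary = (pi @ y) / mu_weights                      # conditional barycenters of pi_x
    return np.allclose(bary, x, atol=tol)

# e.g. splitting mass from 0 evenly onto -1 and +1 is a martingale coupling
print(is_martingale_coupling(np.array([[0.5, 0.5]]), np.array([0.0]), np.array([-1.0, 1.0])))
```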
By [@BaBeHuKa20 Theorem 2.2] the value $\tilde{P}(\mu,\nu)$ of [\[eq_primal\]](#eq_primal){reference-type="eqref" reference="eq_primal"} is finite and equals $P(\mu,\nu)$, as defined in [\[MBMBB2\]](#MBMBB2){reference-type="eqref" reference="MBMBB2"}. Furthermore, there exists a unique optimizer $\hat{\pi} \in \mathsf{MT}(\mu,\nu)$ of [\[eq_primal\]](#eq_primal){reference-type="eqref" reference="eq_primal"} and if $(\hat{M}_{t})_{0 \leqslant t \leqslant 1}$ is the stretched Brownian motion from $\mu$ to $\nu$, then the law of $(\hat{M}_{0},\hat{M}_{1})$ equals $\hat{\pi}$.
As already alluded to, maximizing the maximal covariance is equivalent to minimizing the squared quadratic Wasserstein distance, modulo adding constants. More precisely, in the present setting we have the relation $$\inf_{\pi \in \mathsf{MT}(\mu,\nu)} \int \mathcal{W}_{2}^{2}(\pi_{x},\gamma) \, \mu(dx)
= d + \int \vert y \vert^{2} \, d\nu(y) -2 \tilde{P}(\mu,\nu),$$ where the quadratic Wasserstein distance $\mathcal{W}_{2}(\, \cdot \, , \, \cdot \,)$ between two probability measures $p_{1},p_{2} \in \mathcal{P}_{2}(\mathbb{R}^{d})$ is defined as $$\label{def_eq_was}
\mathcal{W}_{2}(p_{1},p_{2}) \coloneqq \sqrt{\inf_{q \in \mathsf{Cpl}(p_{1},p_{2})} \int \vert x_{1} - x_{2}\vert^{2} \, q(dx_{1},dx_{2})}.$$ In these terms, the value of [\[MBMBB\]](#MBMBB){reference-type="eqref" reference="MBMBB"} can be expressed as $$MT(\mu,\nu)
= \inf_{\pi \in \mathsf{MT}(\mu,\nu)} \int \mathcal{W}_{2}^{2}(\pi_{x},\gamma) \, \mu(dx)
- \int \vert x \vert^{2} \, d\mu(x).$$
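Concretely, the equivalence rests on the identity $\mathcal{W}_{2}^{2}(p_{1},p_{2}) = \int \vert x_{1}\vert^{2}\,dp_{1} + \int \vert x_{2}\vert^{2}\,dp_{2} - 2\,\textnormal{MCov}(p_{1},p_{2})$, which the following sketch of ours confirms numerically for empirical measures, reusing the `mcov` helper from above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_squared(x, y):
    """Squared 2-Wasserstein distance between equal-size empirical measures."""
    cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=2)
    row, col = linear_sum_assignment(cost)            # minimize the total squared cost
    return cost[row, col].mean()

rng = np.random.default_rng(2)
x, y = rng.standard_normal((300, 3)), 1.0 + rng.standard_normal((300, 3))
lhs = w2_squared(x, y)
rhs = (x ** 2).sum(1).mean() + (y ** 2).sum(1).mean() - 2 * mcov(x, y)
print(abs(lhs - rhs))                                 # zero up to floating-point error
```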
## Structure of optimizers
From [@BBST23 Theorem 6.6] we recall the following characterization of the dual optimizer $\hat{\psi}$ of [\[WeakDual\]](#WeakDual){reference-type="eqref" reference="WeakDual"} and of the primal optimizer $\hat{\pi} \in \mathsf{MT}(\mu,\nu)$ of [\[eq_primal\]](#eq_primal){reference-type="eqref" reference="eq_primal"}.
**Lemma 9**. *Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$. Suppose that a Bass martingale $(\hat{M}_{t})_{0 \leqslant t \leqslant 1}$ from $\mu$ to $\nu$ with Bass measure $\hat{\alpha} \in \mathcal{P}(\mathbb{R}^{d})$ and associated convex function $\hat{v}$ exists. Then the Legendre transform $\hat{v}^{\ast}$ is equal to the dual optimizer $\hat{\psi}$ of [\[WeakDual\]](#WeakDual){reference-type="eqref" reference="WeakDual"} and $\operatorname{Law}(\hat{M}_{0},\hat{M}_{1})$ is equal to the primal optimizer $\hat{\pi}$ of [\[eq_primal\]](#eq_primal){reference-type="eqref" reference="eq_primal"}. Furthermore, we have $\hat{\alpha} = \nabla \hat{\varphi}(\mu)$, where $$\label{eq_du_op_b_m_xi_fin}
\nabla \hat{\varphi}(x)
= (\nabla \hat{v} \ast \gamma)^{-1}(x)
= \nabla(\hat{v} \ast \gamma)^{\ast}(x),$$ for $\mu$-a.e. $x \in \mathbb{R}^{d}$, and $$\label{eq_id_pisbmx_v_xi}
\hat{\pi}_{x} = \operatorname{Law}(\hat{M}_{1} \, \vert \, \hat{M}_{0} = x)
= \nabla \hat{v}(\gamma_{\nabla \hat{\varphi}(x)}),$$ where $\gamma_{\nabla \hat{\varphi}(x)}$ denotes the $d$-dimensional Gaussian distribution with barycenter $\nabla \hat{\varphi}(x)$ and covariance matrix $I_{d}$.*
We set $\hat{u} \coloneqq \hat{v} \ast \gamma$, so that $\nabla \hat{u}(\hat{\alpha}) = \mu$. Recalling [\[eq_def_id_bm\]](#eq_def_id_bm){reference-type="eqref" reference="eq_def_id_bm"}, we summarize the relationships between the optimizers in the following diagram: $$\begin{tikzcd}[row sep=huge, column sep = huge]
\hat{\alpha} \ast \gamma \arrow[shift right, swap]{r}{\nabla \hat{v}} & \nu \arrow[shift right, swap]{l}{\nabla \hat{\psi}} \\
\hat{\alpha} \arrow[shift right, swap]{r}{\nabla \hat{u}} \arrow{u}{\ast} & \mu
\arrow[swap]{u}{\hat{\pi}_{\cdot}} \arrow[shift right, swap]{l}{\nabla \hat{\varphi}}
\end{tikzcd}$$
Finally, we prove the equivalence between the identities [\[eq_def_id_bm\]](#eq_def_id_bm){reference-type="eqref" reference="eq_def_id_bm"} and the existence of a Bass martingale from $\mu$ to $\nu$.
**Lemma 10**. *Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$. There is a Bass martingale $\hat{M}$ with Bass measure $\hat{\alpha} \in \mathcal{P}(\mathbb{R}^{d})$ from $\mu = \operatorname{Law}(\hat{M}_{0})$ to $\nu = \operatorname{Law}(\hat{M}_{1})$ if and only if there is a convex function $\hat{v} \colon \mathbb{R}^{d}\rightarrow \mathbb{R}$ satisfying the identities $$\label{eq_def_id_bm_rep}
(\nabla \hat{v} \ast \gamma)(\hat{\alpha}) = \mu
\qquad \textnormal{ and } \qquad
\nabla \hat{v}(\hat{\alpha} \ast \gamma) = \nu.$$ Moreover, the Bass martingale $\hat{M}$ can be expressed as $$\label{eq_def_id_bm_sv_rep}
\hat{M}_{t} = \nabla \hat{v}_{t}(B_{t}), \qquad 0 \leqslant t \leqslant 1.$$*
**Proof.* Let $\hat{M}$ be a Bass martingale in the sense of Definition [Definition 2](#def:BassMarti_intro){reference-type="ref" reference="def:BassMarti_intro"}. We first prove [\[eq_def_id_bm_sv_rep\]](#eq_def_id_bm_sv_rep){reference-type="eqref" reference="eq_def_id_bm_sv_rep"}. Let $A \subseteq \mathbb{R}^{d}$ be a Borel set. We have to show that $$\label{lem_eq_co_bm_13_i}
\mathbb{E}\big[\nabla \hat{v}(B_{1}) \, \boldsymbol{1}_{\{B_{t} \in A \}}\big]
= \mathbb{E}\big[(\nabla \hat{v} \ast \gamma^{1-t})(B_{t}) \, \boldsymbol{1}_{\{B_{t} \in A \}}\big].$$ Denote by $\varphi_{t}(x,y)$ the Gaussian kernel, for $t \in (0,1]$ and $x,y \in \mathbb{R}^{d}$. Then the left-hand side of [\[lem_eq_co_bm_13_i\]](#lem_eq_co_bm_13_i){reference-type="eqref" reference="lem_eq_co_bm_13_i"} can be expressed as $$\int \hat{\alpha}(dx_{0}) \int_{A} \varphi_{t}(x_{0},dx_{t}) \int \nabla \hat{v}(x_{1}) \, \varphi_{1-t}(x_{t},dx_{1}),$$ while the right-hand side is equal to $$\int \hat{\alpha}(dx_{0}) \int_{A} (\nabla \hat{v} \ast \gamma^{1-t})(x_{t}) \, \varphi_{t}(x_{0},dx_{t}).$$ Now we see that [\[lem_eq_co_bm_13_i\]](#lem_eq_co_bm_13_i){reference-type="eqref" reference="lem_eq_co_bm_13_i"} follows from $$\int \nabla \hat{v}(x_{1}) \, \varphi_{1-t}(x_{t},dx_{1})
= \int \nabla \hat{v}(x_{1}) \, \gamma_{x_{t}}^{1-t}(dx_{1})
= (\nabla \hat{v} \ast \gamma^{1-t})(x_{t}),$$ where $\gamma_{x_{t}}^{1-t}$ denotes the $d$-dimensional Gaussian distribution with barycenter $x_{t}$ and covariance matrix $(1-t)I_{d}$. This completes the proof of [\[eq_def_id_bm_sv_rep\]](#eq_def_id_bm_sv_rep){reference-type="eqref" reference="eq_def_id_bm_sv_rep"}.*
*In particular, at times $t=0$ and $t=1$ we obtain from [\[eq_def_id_bm_sv_rep\]](#eq_def_id_bm_sv_rep){reference-type="eqref" reference="eq_def_id_bm_sv_rep"} that $\hat{M}_{0} = (\nabla \hat{v} \ast \gamma)(B_{0})$ and $\hat{M}_{1} = \nabla \hat{v}(B_{1})$, respectively. If $\hat{M}$ is a Bass martingale from $\mu = \operatorname{Law}(\hat{M}_{0})$ to $\nu = \operatorname{Law}(\hat{M}_{1})$, this readily gives [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"}.*
*Conversely, suppose that $\mu,\nu,\hat{\alpha},\hat{v}$ satisfy the identities [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"}. Let $(B_{t})_{0 \leqslant t \leqslant 1}$ be Brownian motion on $\mathbb{R}^{d}$ with $\operatorname{Law}(B_{0}) = \hat{\alpha}$. We then define a process $(\hat{M}_{t})_{0 \leqslant t \leqslant 1}$ by [\[def:BassMarti_intro_eq\]](#def:BassMarti_intro_eq){reference-type="eqref" reference="def:BassMarti_intro_eq"}. In light of the previous argument, $\hat{M}$ is characterized by [\[eq_def_id_bm_sv_rep\]](#eq_def_id_bm_sv_rep){reference-type="eqref" reference="eq_def_id_bm_sv_rep"}. Since by assumption the identities [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"} are satisfied, we see that $\operatorname{Law}(\hat{M}_{0}) = \mu$ and $\operatorname{Law}(\hat{M}_{1}) = \nu$. Thus $\hat{M}$ is indeed a Bass martingale from $\mu$ to $\nu$. ◻*
# A variational characterization of Bass measures {#sec2avcob}
Throughout this section we fix $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$ with $\mu \preceq_{\textnormal{c}}\nu$ and provide the proof of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"}. This is done in several steps.
**Lemma 11**. *We have the weak duality $$\label{eq:variational_alpha_mc_step1}
P(\mu,\nu)
\leqslant \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \mathcal{V}(\alpha).$$*
**Proof.* Let $\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})$ be arbitrary. By Brenier's theorem [@Vi03 Theorem 2.12] there is a convex function $v$ such that $\nabla v(\alpha \ast \gamma) = \nu$. Hence from the Kantorovich duality [@Vi09 Theorem 5.10] it follows that $$\textnormal{MCov}(\alpha \ast \gamma,\nu)
= \int v \, d(\alpha \ast \gamma) + \int v^{\ast} \, d\nu
= \int ( v \ast \gamma) \, d\alpha + \int v^{\ast} \, d\nu.$$ Since $v \ast \gamma$ is convex, applying once more the Kantorovich duality yields $$\begin{aligned}
\textnormal{MCov}(\alpha \ast \gamma,\nu)
&= \int v^{\ast} \, d\nu - \int ( v \ast \gamma)^{\ast} \, d\mu
+ \int ( v \ast \gamma) \, d\alpha + \int ( v \ast \gamma)^{\ast} \, d\mu \\
&\geqslant \int v^{\ast} \, d\nu - \int ( v \ast \gamma)^{\ast} \, d\mu + \textnormal{MCov}(\alpha,\mu).\end{aligned}$$ Finally, from Theorem [Theorem 8](#theorem_new_duality){reference-type="ref" reference="theorem_new_duality"} we deduce that $$\begin{aligned}
\textnormal{MCov}(\alpha \ast \gamma,\nu)
&\geqslant \inf_{\textnormal{$\psi$ convex}} \Big( \int \psi \, d\nu - \int (\psi^{\ast} \ast \gamma)^{\ast} \, d \mu \Big) + \textnormal{MCov}(\alpha,\mu) \\
&= P(\mu,\nu) + \textnormal{MCov}(\alpha,\mu),\end{aligned}$$ which gives the inequality [\[eq:variational_alpha_mc_step1\]](#eq:variational_alpha_mc_step1){reference-type="eqref" reference="eq:variational_alpha_mc_step1"}. We remark that it is immaterial whether in [\[WeakDual\]](#WeakDual){reference-type="eqref" reference="WeakDual"} we optimize over convex functions $\psi$ which are elements of $L^{1}(\nu)$ or which are just $\mu$-a.s. finite, see [@BBST23 Section 4]. ◻*
**Lemma 12**. *Suppose that there exists a Bass martingale from $\mu$ to $\nu$ with Bass measure $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$. Then the right-hand side of [\[eq:variational_alpha_mc_step1\]](#eq:variational_alpha_mc_step1){reference-type="eqref" reference="eq:variational_alpha_mc_step1"} is attained by $\hat{\alpha}$ and is equal to $$\label{prop_alpha_vari_mc_step2_eq}
\mathcal{V}(\hat{\alpha})
= \int \textnormal{MCov}(\hat{\pi}_{x},\gamma) \, \mu(dx),$$ where $\hat{\pi} \in \mathsf{MT}(\mu,\nu)$ is the optimizer of [\[eq_primal\]](#eq_primal){reference-type="eqref" reference="eq_primal"}.*
**Proof.* By assumption there exists a Bass martingale from $\mu$ to $\nu$, with Bass measure $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$ and associated convex function $\hat{v}$ satisfying (recall Lemma [Lemma 10](#lem_eq_co_bm_13){reference-type="ref" reference="lem_eq_co_bm_13"}) the identities [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"}. According to Lemma [Lemma 9](#theo_du_op_b_m){reference-type="ref" reference="theo_du_op_b_m"}, we have that $\hat{\alpha} = \nabla \hat{\varphi}(\mu)$ and $$\hat{\pi}_{x}
= \nabla \hat{v}(\gamma_{\nabla \hat{\varphi}(x)}),$$ for $\mu$-a.e. $x \in \mathbb{R}^{d}$. Applying Brenier's theorem, we deduce that $$\begin{aligned}
&\int \textnormal{MCov}(\hat{\pi}_{x},\gamma) \, \mu(dx)
= \int \int \big\langle \nabla \hat{v}\big(\nabla\hat{\varphi}(x) + z \big), z \big\rangle \, \gamma(dz) \, \mu(dx) \\
& \qquad = \int \int \Big( \big\langle \nabla \hat{v}\big(\nabla\hat{\varphi}(x) + z \big), \nabla\hat{\varphi}(x) + z \big\rangle - \big\langle \nabla \hat{v}\big(\nabla\hat{\varphi}(x) + z \big), \nabla\hat{\varphi}(x) \big\rangle \Big) \, \gamma(dz) \, \mu(dx) \\
& \qquad = \int \int \big( \langle \nabla \hat{v}(a + z ), a + z \rangle - \langle \nabla \hat{v}(a + z ), a \rangle \big) \, \gamma(dz) \, \hat{\alpha}(da) \\
&\qquad = \int \langle \nabla \hat{v}, \operatorname{Id} \rangle \, d(\hat{\alpha} \ast \gamma) - \int \langle (\nabla\hat{v} \ast \gamma), \operatorname{Id} \rangle \, d\hat{\alpha} \\
&\qquad = \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha},\mu)
= \mathcal{V}(\hat{\alpha}),\end{aligned}$$ which shows [\[prop_alpha_vari_mc_step2_eq\]](#prop_alpha_vari_mc_step2_eq){reference-type="eqref" reference="prop_alpha_vari_mc_step2_eq"}. Together with the weak duality [\[eq:variational_alpha_mc_step1\]](#eq:variational_alpha_mc_step1){reference-type="eqref" reference="eq:variational_alpha_mc_step1"} of Lemma [Lemma 11](#prop_alpha_vari_mc_step1){reference-type="ref" reference="prop_alpha_vari_mc_step1"} above, and recalling from Subsection [2.2](#subs_smot){reference-type="ref" reference="subs_smot"} that the right-hand side of [\[prop_alpha_vari_mc_step2_eq\]](#prop_alpha_vari_mc_step2_eq){reference-type="eqref" reference="prop_alpha_vari_mc_step2_eq"} is equal to $\tilde{P}(\mu,\nu) = P(\mu,\nu)$, we conclude the assertion of Lemma [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"}. ◻*
**Lemma 13**. *We have the duality result $$\label{eq:variational_alpha_mc_step3}
P(\mu,\nu)
= \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \mathcal{V}(\alpha).$$*
**Proof.* For $\varepsilon > 0$ we define $\mu^{\varepsilon} \coloneqq \mu\ast \gamma^{\varepsilon}$ and $\nu^{\varepsilon} \coloneqq \nu \ast \gamma^{2\varepsilon}$. Then $\mu^{\varepsilon} \preceq_{\textnormal{c}}\nu^{\varepsilon}$ and the pair $(\mu^{\varepsilon} ,\nu^{\varepsilon})$ is irreducible. Hence by Theorem [Theorem 3](#MainTheorem){reference-type="ref" reference="MainTheorem"} there is a Bass martingale from $\mu^{\varepsilon}$ to $\nu^{\varepsilon}$, so that by Lemma [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"} we have $$\label{eq:variational_alpha_mc_e}
\sup_{\pi \in \mathsf{MT}(\mu^{\varepsilon},\nu^{\varepsilon})} \int \textnormal{MCov}(\pi_{x},\gamma) \, \mu^{\varepsilon}(dx)
= \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \Big( \textnormal{MCov}(\alpha \ast \gamma,\nu^{\varepsilon}) - \textnormal{MCov}(\alpha,\mu^{\varepsilon}) \Big).$$ By weak optimal transport arguments (see [@BeJoMaPa21b Theorem 2.3]) we know $$\limsup_{\varepsilon \rightarrow 0} \sup_{\pi \in \mathsf{MT}(\mu^{\varepsilon},\nu^{\varepsilon})}
\int \textnormal{MCov}(\pi_{x},\gamma) \, \mu^{\varepsilon}(dx)
\leqslant \sup_{\pi \in \mathsf{MT}(\mu,\nu)} \int \textnormal{MCov}(\pi_{x},\gamma) \, \mu(dx).$$ Therefore, if we can show that the right-hand side of [\[eq:variational_alpha_mc_e\]](#eq:variational_alpha_mc_e){reference-type="eqref" reference="eq:variational_alpha_mc_e"} converges to the right-hand side of [\[eq:variational_alpha_mc_step3\]](#eq:variational_alpha_mc_step3){reference-type="eqref" reference="eq:variational_alpha_mc_step3"}, we will obtain the inequality $$P(\mu,\nu)
\geqslant \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \mathcal{V}(\alpha)
= \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \Big( \textnormal{MCov}(\alpha \ast \gamma,\nu) - \textnormal{MCov}(\alpha,\mu) \Big),$$ which, together with the weak duality of Lemma [Lemma 11](#prop_alpha_vari_mc_step1){reference-type="ref" reference="prop_alpha_vari_mc_step1"}, establishes [\[eq:variational_alpha_mc_step3\]](#eq:variational_alpha_mc_step3){reference-type="eqref" reference="eq:variational_alpha_mc_step3"}. But this follows easily from $$\vert \textnormal{MCov}(\alpha,\mu^{\varepsilon}) -\textnormal{MCov}(\alpha,\mu) \vert
\leqslant c_{1} \varepsilon + \tfrac{1}{2} \vert \mathcal{W}_{2}^{2}(\alpha,\mu^{\varepsilon}) - \mathcal{W}_{2}^{2}(\alpha,\mu) \vert
\leqslant c_{2}(\varepsilon+\varepsilon^{2})$$ and a similar estimate for $\vert\textnormal{MCov}(\alpha \ast \gamma,\nu^{\varepsilon}) -\textnormal{MCov}(\alpha\ast\gamma,\nu)\vert$. ◻*
**Lemma 14**. *Suppose that the right-hand side of [\[eq:variational_alpha_mc_step3\]](#eq:variational_alpha_mc_step3){reference-type="eqref" reference="eq:variational_alpha_mc_step3"} is attained by $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$. Then there exists a Bass martingale from $\mu$ to $\nu$ with Bass measure $\hat{\alpha}$.*
**Proof.* By Brenier's theorem there is a convex function $\hat{v}$ such that $\nabla \hat{v}(\hat{\alpha} \ast \gamma) = \nu$. According to Lemma [Lemma 10](#lem_eq_co_bm_13){reference-type="ref" reference="lem_eq_co_bm_13"}, for the existence of a Bass martingale from $\mu$ to $\nu$, it remains to show the first equality in [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"}, i.e., $$\label{prop_alpha_vari_mc_step4_i}
(\nabla \hat{v} \ast \gamma)(\hat{\alpha}) = \mu.$$ Let $\hat{Z}$ and $X$ be random variables with laws $\hat{\alpha}$ and $\mu$, respectively, such that $$\label{prop_alpha_vari_mc_step4_eq_aa}
\textnormal{MCov}(\hat{\alpha},\mu) = \mathbb{E}\big[ \langle \hat{Z},X \rangle \big].$$ Denote by $\hat{q}(dz,dx)$ the law of the coupling $(\hat{Z},X)$. Let $\boldsymbol{w} \colon \mathbb{R}^{d}\times \mathbb{R}^{d}\rightarrow \mathbb{R}^{d}$ be a smooth function with compact support and define probability measures $(\alpha_{u})_{u \in \mathbb{R}} \subseteq \mathcal{P}_{2}(\mathbb{R}^{d})$ by $$\label{prop_alpha_vari_mc_step4_eq_ab}
\int f \, d\alpha_{u} \coloneqq \int \int f\big(z + u\boldsymbol{w}(z,x)\big) \, \hat{q}(dz,dx), \qquad f \in C_{b}(\mathbb{R}^{d}).$$ We claim that $$\label{prop_alpha_vari_mc_step4_cl_1}
\liminf_{u \rightarrow 0} \tfrac{1}{u} \Big( \textnormal{MCov}(\alpha_{u},\mu) - \textnormal{MCov}(\hat{\alpha},\mu) \Big)
\geqslant \mathbb{E}\big[ \big\langle \boldsymbol{w}(\hat{Z},X),X \big\rangle \big]$$ and $$\label{prop_alpha_vari_mc_step4_cl_2}
\lim_{u \rightarrow 0} \tfrac{1}{u} \Big( \textnormal{MCov}(\alpha_{u} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) \Big)
= \mathbb{E}\big[ \big\langle \boldsymbol{w}(\hat{Z},X), (\nabla \hat{v} \ast \gamma)(\hat{Z}) \big\rangle \big].$$ Using the optimality of $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$ for the right-hand side of [\[eq:variational_alpha_mc_step3\]](#eq:variational_alpha_mc_step3){reference-type="eqref" reference="eq:variational_alpha_mc_step3"} and admitting the two claims [\[prop_alpha_vari_mc_step4_cl_1\]](#prop_alpha_vari_mc_step4_cl_1){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_1"}, [\[prop_alpha_vari_mc_step4_cl_2\]](#prop_alpha_vari_mc_step4_cl_2){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_2"}, we deduce that $$\begin{aligned}
0 &\leqslant
\liminf_{u \rightarrow 0} \tfrac{1}{u} \bigg(
\Big( \textnormal{MCov}(\alpha_{u} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) \Big)
- \Big( \textnormal{MCov}(\alpha_{u},\mu) - \textnormal{MCov}(\hat{\alpha},\mu) \Big) \bigg) \\
&\leqslant \mathbb{E}\big[ \big\langle \boldsymbol{w}(\hat{Z},X) , (\nabla \hat{v} \ast \gamma)(\hat{Z}) - X \big\rangle \big]. \end{aligned}$$ Since $\boldsymbol{w}$ was arbitrary, it follows that the random variable $(\nabla \hat{v} \ast \gamma)(\hat{Z})$ has the same law as $X$, which readily gives [\[prop_alpha_vari_mc_step4_i\]](#prop_alpha_vari_mc_step4_i){reference-type="eqref" reference="prop_alpha_vari_mc_step4_i"}.*
*We now turn to the proof of the claim [\[prop_alpha_vari_mc_step4_cl_1\]](#prop_alpha_vari_mc_step4_cl_1){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_1"}. By the definition of $\alpha_{u}$ in [\[prop_alpha_vari_mc_step4_eq_ab\]](#prop_alpha_vari_mc_step4_eq_ab){reference-type="eqref" reference="prop_alpha_vari_mc_step4_eq_ab"}, the random variable $Z_{u} \coloneqq \hat{Z} + u \boldsymbol{w}(\hat{Z},X)$ has law $\alpha_{u}$. Consequently, $$\label{prop_alpha_vari_mc_step4_eq_ac}
\textnormal{MCov}(\alpha_{u},\mu) \geqslant \mathbb{E}\big[ \langle Z_{u},X \rangle \big].$$ Combining [\[prop_alpha_vari_mc_step4_eq_aa\]](#prop_alpha_vari_mc_step4_eq_aa){reference-type="eqref" reference="prop_alpha_vari_mc_step4_eq_aa"} and [\[prop_alpha_vari_mc_step4_eq_ac\]](#prop_alpha_vari_mc_step4_eq_ac){reference-type="eqref" reference="prop_alpha_vari_mc_step4_eq_ac"} yields [\[prop_alpha_vari_mc_step4_cl_1\]](#prop_alpha_vari_mc_step4_cl_1){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_1"}.*
*It remains to show the claim [\[prop_alpha_vari_mc_step4_cl_2\]](#prop_alpha_vari_mc_step4_cl_2){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_2"}. By analogy with the proof of [\[prop_alpha_vari_mc_step4_cl_1\]](#prop_alpha_vari_mc_step4_cl_1){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_1"}, we obtain the inequality "$\geqslant$" in [\[prop_alpha_vari_mc_step4_cl_2\]](#prop_alpha_vari_mc_step4_cl_2){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_2"}. For the reverse inequality, we note that by the Kantorovich duality we have $$\begin{aligned}
\textnormal{MCov}(\alpha_{u} \ast \gamma,\nu)
&= \inf_{\textnormal{$v$ convex}} \Big( \int v \, d(\alpha_{u} \ast \gamma) + \int v^{\ast} \, d \nu \Big) \\
&\leqslant \int \hat{v} \, d(\alpha_{u} \ast \gamma) + \int \hat{v}^{\ast} \, d \nu \\
&= \int (\hat{v} \ast \gamma) \, d\alpha_{u} + \int \hat{v}^{\ast} \, d \nu\end{aligned}$$ and $$\textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu)
= \int \hat{v} \, d(\hat{\alpha} \ast \gamma) + \int \hat{v}^{\ast} \, d\nu
= \int ( \hat{v} \ast \gamma) \, d\hat{\alpha} + \int \hat{v}^{\ast} \, d\nu.$$ Therefore $$\begin{aligned}
\textnormal{MCov}(\alpha_{u} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu)
&\leqslant \int (\hat{v} \ast \gamma) \, d\alpha_{u} - \int ( \hat{v} \ast \gamma) \, d\hat{\alpha} \\
&= \mathbb{E}\big[(\hat{v} \ast \gamma)\big(\hat{Z} + u \boldsymbol{w}(\hat{Z},X)\big) - (\hat{v} \ast \gamma)(\hat{Z})\big]\end{aligned}$$ Using the convexity of the function $\hat{v} \ast \gamma$, we deduce that $$\tfrac{1}{u} \Big( \textnormal{MCov}(\alpha_{u} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) \Big)
\leqslant \mathbb{E}\big[ \big\langle \boldsymbol{w}(\hat{Z},X), (\nabla \hat{v} \ast \gamma)\big(\hat{Z}+ u\boldsymbol{w}(\hat{Z},X)\big) \big\rangle \big].$$ Now observe that the expectation on the right-hand side of the above inequality is equal to the expectation of the random variable $$Y_{u} \coloneqq \Big\langle \boldsymbol{w}(\hat{Z},X) \, , \nabla \hat{v}(\hat{Z} + \Gamma) \exp\big(u \big\langle \Gamma , \boldsymbol{w}(\hat{Z},X) \big\rangle - \tfrac{u^{2}}{2} \vert \boldsymbol{w}(\hat{Z},X) \vert^{2}\big)\Big\rangle,$$ where $\Gamma$ is a standard Gaussian random vector on $\mathbb{R}^{d}$, independent of $\hat{Z}$ as well as of $X$. Clearly by continuity $$\lim_{u \rightarrow 0} Y_{u} =
\big\langle \boldsymbol{w}(\hat{Z},X) , \nabla \hat{v}(\hat{Z} + \Gamma) \big\rangle, \qquad \mathbb{P}\textnormal{-a.s.}$$ As $\boldsymbol{w}$ is smooth with compact support, for $\delta > 0$ we can find constants $c_{1},c_{2}$ such that $$\forall u \in [-\delta,\delta] \colon \quad
\vert Y_{u} \vert \leqslant c_{1} \, \vert \nabla \hat{v}(\hat{Z} + \Gamma) \vert \, \mathrm{e}^{c_{2} \vert \Gamma \vert}.$$ By the Cauchy--Schwarz inequality and since $\nabla \hat{v}(\hat{\alpha} \ast \gamma) = \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$, we have the bound $$\mathbb{E}\big[ \vert \nabla \hat{v}(\hat{Z} + \Gamma) \vert \, \mathrm{e}^{\vert \Gamma \vert} \big]
\leqslant \sqrt{\int \vert y \vert^{2} \, d\nu(y)} \ \sqrt{\mathbb{E}\big[ \mathrm{e}^{2 \vert \Gamma \vert} \big]} < + \infty.$$ Therefore we can apply the dominated convergence theorem and conclude that $$\limsup_{u \rightarrow 0} \tfrac{1}{u} \Big( \textnormal{MCov}(\alpha_{u} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) \Big)
\leqslant \mathbb{E}\big[ \big\langle \boldsymbol{w}(\hat{Z},X), (\nabla \hat{v} \ast \gamma)(\hat{Z}) \big\rangle \big],$$ which completes the proof of the claim [\[prop_alpha_vari_mc_step4_cl_2\]](#prop_alpha_vari_mc_step4_cl_2){reference-type="eqref" reference="prop_alpha_vari_mc_step4_cl_2"}. ◻*
*Proof of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"}.* The assertion of the theorem follows from Lemmas [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"} -- [Lemma 14](#prop_alpha_vari_mc_step4){reference-type="ref" reference="prop_alpha_vari_mc_step4"}. ◻
The reader has certainly noticed that the proof of Lemma [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"} was given in an analytic style while the proof of Lemma [Lemma 14](#prop_alpha_vari_mc_step4){reference-type="ref" reference="prop_alpha_vari_mc_step4"} was given in a more probabilistic language. In the remainder of this section we give an alternative probabilistic proof of Lemma [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"} and sketch how to translate the proof of Lemma [Lemma 14](#prop_alpha_vari_mc_step4){reference-type="ref" reference="prop_alpha_vari_mc_step4"} into a more analytic language.
The following probabilistic proof of Lemma [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"} does not require the duality results developed in [@BBST23], but only relies on the definition of Bass martingales.
*Probabilistic proof of Lemma [Lemma 12](#prop_alpha_vari_mc_step2){reference-type="ref" reference="prop_alpha_vari_mc_step2"}.* By assumption there exists a Bass martingale from $\mu$ to $\nu$, with Bass measure $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$ and associated convex function $\hat{v}$ satisfying (recall Lemma [Lemma 10](#lem_eq_co_bm_13){reference-type="ref" reference="lem_eq_co_bm_13"}) the identities [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"}. Let $\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})$ be arbitrary. We have to show that $$\label{eq:variational_alpha_mc_step1_alt_pro}
\begin{aligned}
\mathcal{V}(\hat{\alpha}) = & \, \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha},\mu) \leqslant \\
\leqslant & \, \textnormal{MCov}(\alpha \ast \gamma,\nu) - \textnormal{MCov}(\alpha,\mu) = \mathcal{V}(\alpha).
\end{aligned}$$ Take a random variable $\hat{Z}$ with law $\hat{\alpha}$ and define $$\label{eq_prop_alpha_vari_mc_prop_ii_pre_bb}
X \coloneqq (\nabla \hat{v} \ast \gamma)(\hat{Z}).$$ By Brenier's theorem the coupling $(\hat{Z},X)$ is optimal and according to [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"} the random variable $X$ has law $\mu$. Now choose a random variable $Z$ with law $\alpha$ such that the coupling $(Z,X)$ is optimal with respect to the maximal covariance (equivalently, with respect to the quadratic Wasserstein distance). Clearly $$\label{eq_prop_alpha_vari_mc_prop_ii_pre}
\textnormal{MCov}(\alpha,\mu) - \textnormal{MCov}(\hat{\alpha},\mu)
= \mathbb{E}\big[\langle Z - \hat{Z},X \rangle\big].$$ Take a standard Gaussian random vector $\Gamma$ on $\mathbb{R}^{d}$, independent of $Z$ as well as of $\hat{Z}$. The random variables $\hat{Z} + \Gamma$ and $$\label{eq_prop_alpha_vari_mc_prop_ii_pre_aa}
Y \coloneqq \nabla \hat{v}(\hat{Z} + \Gamma)$$ have laws $\hat{\alpha} \ast \gamma$ and $\nu$, respectively. As by Brenier's theorem the coupling $(\hat{Z} + \Gamma,Y)$ is optimal, we have $$\label{eq_prop_alpha_vari_mc_prop_iii_pre}
\textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu) = \mathbb{E}\big[\langle \hat{Z} + \Gamma,Y \rangle\big].$$ Since the random variable $Z + \Gamma$ has law $\alpha \ast \gamma$ we conclude that $(Z+\Gamma,Y)$ is some coupling between $\alpha \ast \gamma$ and $\nu$, i.e., $$\label{eq_prop_alpha_vari_mc_prop_iv_pre}
\textnormal{MCov}(\alpha \ast \gamma,\nu) \geqslant \mathbb{E}\big[ \langle Z+\Gamma, Y\rangle \big].$$ From [\[eq_prop_alpha_vari_mc_prop_ii_pre\]](#eq_prop_alpha_vari_mc_prop_ii_pre){reference-type="eqref" reference="eq_prop_alpha_vari_mc_prop_ii_pre"} -- [\[eq_prop_alpha_vari_mc_prop_iv_pre\]](#eq_prop_alpha_vari_mc_prop_iv_pre){reference-type="eqref" reference="eq_prop_alpha_vari_mc_prop_iv_pre"} we obtain the inequality $$\textnormal{MCov}(\alpha \ast \gamma,\nu) - \textnormal{MCov}(\hat{\alpha} \ast \gamma,\nu)
- \textnormal{MCov}(\alpha,\mu) + \textnormal{MCov}(\hat{\alpha},\mu) \geqslant \mathbb{E}\big[\langle Z - \hat{Z},Y - X \rangle\big].$$ Therefore, in order to establish the inequality [\[eq:variational_alpha_mc_step1_alt_pro\]](#eq:variational_alpha_mc_step1_alt_pro){reference-type="eqref" reference="eq:variational_alpha_mc_step1_alt_pro"}, it remains to show that $$\label{eq_prop_alpha_vari_mc_prop_v_pre}
\mathbb{E}\big[\langle Z - \hat{Z},Y - X \rangle\big] = 0.$$ For that purpose, we condition $Y - X$ on the random variables $Z$ as well as $\hat{Z}$, so that by [\[eq_prop_alpha_vari_mc_prop_ii_pre_bb\]](#eq_prop_alpha_vari_mc_prop_ii_pre_bb){reference-type="eqref" reference="eq_prop_alpha_vari_mc_prop_ii_pre_bb"} and [\[eq_prop_alpha_vari_mc_prop_ii_pre_aa\]](#eq_prop_alpha_vari_mc_prop_ii_pre_aa){reference-type="eqref" reference="eq_prop_alpha_vari_mc_prop_ii_pre_aa"} we obtain $$\mathbb{E}[ Y - X \, \vert \, Z, \hat{Z} ] = 0,$$ which implies [\[eq_prop_alpha_vari_mc_prop_v\_pre\]](#eq_prop_alpha_vari_mc_prop_v_pre){reference-type="eqref" reference="eq_prop_alpha_vari_mc_prop_v_pre"}. ◻
We finally give an alternative heuristic argument for Lemma [Lemma 14](#prop_alpha_vari_mc_step4){reference-type="ref" reference="prop_alpha_vari_mc_step4"}, which is based on differentiating the maximal covariance along a continuity equation.
*Alternative heuristic proof of Lemma [Lemma 14](#prop_alpha_vari_mc_step4){reference-type="ref" reference="prop_alpha_vari_mc_step4"}.* Suppose that the right-hand side of [\[eq:variational_alpha_mc_step3\]](#eq:variational_alpha_mc_step3){reference-type="eqref" reference="eq:variational_alpha_mc_step3"} is attained by $\hat{\alpha} \in \mathcal{P}_{2}(\mathbb{R}^{d})$. We want to show that there exists a Bass martingale from $\mu$ to $\nu$ with Bass measure $\hat{\alpha}$. The idea is to perturb $\hat{\alpha}$ along a continuity equation $$\partial_{t} \alpha_{t} + \operatorname{div}(\boldsymbol{v}_{t} \alpha_{t}) = 0, \qquad t \in (-h,h),$$ with $h > 0$, $\alpha_{0} \coloneqq \hat{\alpha}$, and where $\boldsymbol{v}_{t}$ is a velocity field. Observe that $$\begin{aligned}
\partial_{t} \vert_{t=0} \, \mathcal{V}(\alpha_{t})
&= \partial_{t} \vert_{t=0} \, \Big( \textnormal{MCov}(\alpha_{t} \ast \gamma,\nu) - \textnormal{MCov}(\alpha_{t},\mu) \Big) \\
&= \partial_{t} \vert_{t=0} \int \hat{v} \, d(\alpha_{t} \ast \gamma)
- \partial_{t} \vert_{t=0} \int \hat{u} \, d\alpha_{t},\end{aligned}$$ where $\nabla \hat{v}(\hat{\alpha}\ast\gamma)=\nu$ is optimal and likewise $\nabla\hat{u}(\hat{\alpha})=\mu$ is optimal. By the continuity equation we obtain $$\partial_{t} \vert_{t=0} \int \hat{u} \, d\alpha_{t}
= \int \langle \nabla \hat{u} , \boldsymbol{v}_{0} \rangle \, d\hat{\alpha}.$$ With similar computations we have $$\partial_{t} \vert_{t=0} \int \hat{v} \, d(\alpha_t\ast\gamma)
= \int \langle \nabla \hat{v} \ast \gamma , \boldsymbol{v}_{0} \rangle \, d\hat{\alpha} .$$ As $\boldsymbol{v}_{0}$ was arbitrary and $\hat{\alpha}$ was optimal, we conclude that $$0 = \int \big\langle \nabla \hat{v} \ast \gamma - \nabla \hat{u} , \boldsymbol{v}_{0} \big\rangle \, d\hat{\alpha},$$ so that $\nabla \hat{u}$, the optimal map from $\hat{\alpha}$ to $\mu$, is $\hat{\alpha}$-a.s. equal to $\nabla \hat{v} \ast\gamma$, where $\nabla \hat{v}$ is the optimal map from $\hat{\alpha}\ast\gamma$ to $\nu$. Recalling [\[eq_def_id_bm_rep\]](#eq_def_id_bm_rep){reference-type="eqref" reference="eq_def_id_bm_rep"} and Lemma [Lemma 10](#lem_eq_co_bm_13){reference-type="ref" reference="lem_eq_co_bm_13"}, this is precisely the structure of the Bass martingale. ◻
# An infinitesimal version of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"} {#sec3aivot}
We provide the proof of Theorem [Theorem 6](#theo_ms2){reference-type="ref" reference="theo_ms2"}, an infinitesimal version of Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"}.
*Proof of Theorem [Theorem 6](#theo_ms2){reference-type="ref" reference="theo_ms2"}.* For a partition $\Pi = \{t_{0}, t_{1}, \ldots, t_{n} \}$ of the interval $[0,1]$ with $$0 = t_{0} < t_{1} < \ldots < t_{n} = 1$$ we denote by $\Sigma^{\Pi}$ the collection of all progressively measurable and $L^{2}$-bounded processes $(\sigma_{t}^{\Pi})_{0 \leqslant t \leqslant 1}$ such that the stochastic integral $$M_{t}^{\Pi} \coloneqq M_{0} + \int_{0}^{t} \sigma_{s}^{\Pi} \, dB_{s}, \qquad 0 \leqslant t \leqslant 1$$ defines an $L^{2}$-bounded martingale with $\operatorname{Law}(M_{t_{k}}^{\Pi}) = \mu_{t_{k}}$, for $k = 0, \ldots, n$. We define $$\label{theo_ms2_eq_1}
m^{\Pi}([t_{k-1},t_{k}]) \coloneqq \sup_{\sigma^{\Pi} \in \Sigma^{\Pi}}
\mathbb{E}\Big[ \int_{t_{k-1}}^{t_{k}} \textnormal{tr}(\sigma_{s}^{\Pi}) \, ds \Big].$$ By [@BaBeHuKa20], we know that the optimizer of $$m^{\Pi}([0,1]) = \sup_{\sigma^{\Pi} \in \Sigma^{\Pi}}
\mathbb{E}\Big[ \int_{0}^{1} \textnormal{tr}(\sigma_{s}^{\Pi}) \, ds \Big]$$ is given, on each interval $[t_{k-1},t_{k}]$, by the stretched Brownian motion from $\mu_{t_{k-1}}$ to $\mu_{t_{k}}$. By Theorem [Theorem 5](#prop_alpha_vari_mc){reference-type="ref" reference="prop_alpha_vari_mc"} we have $$\label{theo_ms2_eq_3}
m^{\Pi}([t_{k-1},t_{k}]) = \inf_{\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})} \Big( \textnormal{MCov}(\alpha \ast \gamma^{t_{k}-t_{k-1}},\mu_{t_{k}}) - \textnormal{MCov}(\alpha,\mu_{t_{k-1}}) \Big).$$ For $t_{k} \in \Pi$ and a refinement $\Pi_{1}$ of $\Pi$ we have $$m^{\Pi}([0,t_{k}]) \geqslant m^{\Pi_{1}}([0,t_{k}]),$$ as the process $(\sigma_{t}^{\Pi_{1}})_{0 \leqslant t \leqslant 1}$ has to satisfy more requirements than the process $(\sigma_{t}^{\Pi})_{0 \leqslant t \leqslant 1}$. We therefore may pass to a limit $m \coloneqq \lim m^{\Pi}$ along the net of finite partitions $\Pi$ of the interval $[0,1]$, which extends to a finite measure on $[0,1]$, still denoted by $m$. Clearly the measure $m$ is absolutely continuous with respect to Lebesgue measure on $[0,1]$ and we denote the corresponding density by $g(t)$, for $0 \leqslant t \leqslant 1$. We claim that, for $0 \leqslant r \leqslant u \leqslant 1$, we have $$\label{theo_ms2_eq_2}
\mathbb{E}\Big[ \int_{r}^{u} \textnormal{tr}(\sigma_{s}) \, ds \Big]
\leqslant m([r,u]).$$ Indeed, otherwise we could find a partition $\Pi$ with $r,u \in \Pi$, such that $$\mathbb{E}\Big[ \int_{r}^{u} \textnormal{tr}(\sigma_{s}^{\Pi}) \, ds \Big] > m^{\Pi}([r,u]),$$ which yields a contradiction to the definition of $m^{\Pi}( \, \cdot \,)$ in [\[theo_ms2_eq_1\]](#theo_ms2_eq_1){reference-type="eqref" reference="theo_ms2_eq_1"}. Since [\[theo_ms2_eq_2\]](#theo_ms2_eq_2){reference-type="eqref" reference="theo_ms2_eq_2"} holds for all intervals $[r,u] \subseteq [0,1]$, we deduce that $$\label{theo_ms2_eq_4}
\mathbb{E}\big[ \textnormal{tr}(\sigma_{t})\big]
\leqslant g(t),$$ for Lebesgue-a.e. $0 \leqslant t \leqslant 1$. From [\[theo_ms2_eq_3\]](#theo_ms2_eq_3){reference-type="eqref" reference="theo_ms2_eq_3"} we conclude, for Lebesgue-a.e. $0 \leqslant t \leqslant 1$ and for each $\alpha \in \mathcal{P}_{2}(\mathbb{R}^{d})$, the inequality $$g(t) \leqslant \liminf_{h \rightarrow 0} \tfrac{1}{h}
\Big( \textnormal{MCov}(\alpha \ast \gamma^{h},\mu_{t+h}) - \textnormal{MCov}(\alpha,\mu_{t}) \Big).$$ Together with [\[theo_ms2_eq_4\]](#theo_ms2_eq_4){reference-type="eqref" reference="theo_ms2_eq_4"}, this finishes the proof of [\[eq_m\_theo_ms2\]](#eq_m_theo_ms2){reference-type="eqref" reference="eq_m_theo_ms2"}. ◻
Again we provide a more analytic argument for Theorem [Theorem 6](#theo_ms2){reference-type="ref" reference="theo_ms2"}, at least on a formal level.
*Alternative heuristic proof of Theorem [Theorem 6](#theo_ms2){reference-type="ref" reference="theo_ms2"}.* We will use the Kantorovich duality and the Fokker--Planck equations to get a hold of $\frac{d}{dh} \textnormal{MCov}(\alpha \ast \gamma^{h},\mu_{t+h})$. By a change of variables we then get an equivalent expression which, when minimized, gives the left-hand side of [\[eq_m\_theo_ms2\]](#eq_m_theo_ms2){reference-type="eqref" reference="eq_m_theo_ms2"}. We suppose here that $M$ is a strong solution of the stochastic differential equation $dM_{u} = \sigma_{u}(M_{u}) \, dB_{u}$, with $\sigma$ as benevolent as needed, so that in particular $\mu_{u}$ admits a density for each $u$.
We set $\rho_{h} \coloneqq \alpha \ast \gamma^{h}$, $\Sigma \coloneqq \sigma\sigma'$, and notice that for fixed $t$ we have $$\begin{aligned}
\partial_{h} \rho_{h}(x) &= \tfrac{1}{2} \Delta\rho_{h}(x), \qquad \rho_{0} = \alpha;\\
\partial_{h} \mu_{t+h}(x) &= \tfrac{1}{2} \sum_{i,k} \partial^{2}_{ik} \big(\Sigma_{ik}\mu_{t+h}(x)\big).\end{aligned}$$ By the Kantorovich duality we have $$\begin{aligned}
\textnormal{MCov}(\rho_h,\mu_{t+h})
&= \inf_{\textnormal{$\phi$ convex}} \int \phi \, d\rho_{h} + \int \phi^{\ast} \, d\mu_{t+h} \\
&= \int \phi_{\rho_{h}}^{\mu_{t+h}} \, d\rho_{h} + \int \phi^{\rho_{h}}_{\mu_{t+h}} \, d\mu_{t+h},\end{aligned}$$ where we denote by $\phi_{p}^{q}(\, \cdot \, )$ the convex function, which is unique up to a constant, such that $\nabla \phi_{p}^{q}(p)=q$. Using this, or more directly [@Vi09 Theorem 23.9], we have $$\begin{aligned}
\frac{d}{dh} \, \textnormal{MCov}(\rho_{h},\mu_{t+h})
&= \int \phi_{\rho_{h}}^{\mu_{t+h}} \, \partial_{h} \rho_{h} \, d\lambda
+ \int \phi^{\rho_{h}}_{\mu_{t+h}} \, \partial_{h} \mu_{t+h} \, d\lambda \\
&= \int \phi_{\rho_{h}}^{\mu_{t+h}} \, \tfrac{1}{2} \Delta \rho_{h} \, d\lambda
+ \int \phi^{\rho_{h}}_{\mu_{t+h}} \, \tfrac{1}{2} \sum_{i,k} \partial^{2}_{ik}(\Sigma_{ik}\mu_{t+h})\, d\lambda \\
&= \tfrac{1}{2} \int \sum_{i,k} \partial^{2}_{i,k}(\phi_{\rho_h}^{\mu_{t+h}}) \, I_{ik} \, d\rho_{h}
+ \tfrac{1}{2} \int \sum_{i,k} \partial^{2}_{i,k}(\phi^{\rho_{h}}_{\mu_{t+h}}) \, \Sigma_{ik} \, d\mu_{t+h} \\
&= \tfrac{1}{2} \int \textnormal{tr} \big(D^{2}(\phi_{\rho_{h}}^{\mu_{t+h}})\big) \, \rho_{h} \, d\lambda
+\tfrac{1}{2} \int \textnormal{tr} \big(D^{2}(\phi^{\rho_{h}}_{\mu_{t+h}})\Sigma\big) \, \mu_{t+h} \, d\lambda,\end{aligned}$$ where we denote by $D$ and $D^{2}$ the Jacobian and Hessian matrix, respectively. During this proof we will use the convention that if $x \mapsto a(x) \in \mathbb{R}^{d}$ is an invertible vector-valued function, then $a^{-1}(x)$ denotes the inverse function, whereas if $x \mapsto A(x) \in \mathbb{R}^{d \times d}$ is a matrix-valued function, then $[A(x)]^{-1}$ denotes the matrix inverse of $A(x)$. Now observe that $$D^{2}(\phi_{\rho_{h}}^{\mu_{t+h}})(x)
=D(\nabla \phi_{\rho_{h}}^{\mu_{t+h}})(x)
=D\big((\nabla \phi^{\rho_{h}}_{\mu_{t+h}})^{-1}\big)(x)
=[D\nabla \phi^{\rho_{h}}_{\mu_{t+h}} \circ \nabla \phi_{\rho_{h}}^{\mu_{t+h}}(x)]^{-1},$$ so that $$\begin{aligned}
\int \textnormal{tr}\big(D^{2}(\phi_{\rho_h}^{\mu_{t+h}})(x)\big) \, \rho_h \, d\lambda
&= \int \textnormal{tr}\big([D\nabla \phi^{\rho_{h}}_{\mu_{t+h}} \circ \nabla \phi_{\rho_{h}}^{\mu_{t+h}}(x)]^{-1}\big) \, \rho_{h} \, d\lambda \\
&= \int \textnormal{tr}\big([D^{2} \phi^{\rho_{h}}_{\mu_{t+h}}(y)]^{-1}\big) \, \mu_{t+h} \, d\lambda.\end{aligned}$$ Altogether we have $$\begin{aligned}
\frac{d}{dh} \, \textnormal{MCov}(\rho_{h},\mu_{t+h})
= \tfrac{1}{2} \int \Big( \textnormal{tr} \big( [D^{2} \phi^{\rho_{h}}_{\mu_{t+h}}]^{-1} \big) + \textnormal{tr}\big(D^{2}(\phi^{\rho_{h}}_{\mu_{t+h}})\Sigma \big) \Big) \, \mu_{t+h} \, d\lambda.\end{aligned}$$ Define now the functional on invertible, positive-semidefinite symmetric matrices $$A \mapsto J(A) \coloneqq \textnormal{tr}(A^{-1})+\textnormal{tr}(A\Sigma).$$ We remark that $J(A) \geqslant 2\textnormal{tr}(\Sigma^{1/2})$, since expanding the square shows that this is equivalent to the trivial statement $$\vert A^{-1/2}-A^{1/2}\Sigma^{1/2} \vert_{\textnormal{HS}}^{2} \geqslant 0,$$ with equality if and only if $A^{-1/2} = A^{1/2}\Sigma^{1/2}$, i.e., $A = \Sigma^{-1/2}$. Hence the minimum of $J(\, \cdot \,)$ is attained precisely at $A=\Sigma^{-1/2}$. We conclude that $$\frac{d}{dh} \Big\vert_{h=0} \, \textnormal{MCov}(\rho_{h},\mu_{t+h})
\geqslant \int \textnormal{tr}\big(\sigma_{t}(y)\big) \, \mu_{t}(dy)
= \mathbb{E}\big[ \textnormal{tr}\big(\sigma_{t}(M_{t})\big)\big],$$ which completes the proof of Theorem [Theorem 6](#theo_ms2){reference-type="ref" reference="theo_ms2"}. ◻
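The matrix inequality used in the last step is also easy to test numerically: the sketch below (ours) draws a random positive-definite $\Sigma$, confirms that $J(A) \geqslant 2\,\textnormal{tr}(\Sigma^{1/2})$ for random positive-definite trial points $A$, and checks that the bound is attained at $A = \Sigma^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
G = rng.standard_normal((d, d))
Sigma = G @ G.T + 0.1 * np.eye(d)                     # a random positive-definite Sigma

w, V = np.linalg.eigh(Sigma)                          # spectral calculus for Sigma^{1/2}
sqrt_S = V @ np.diag(np.sqrt(w)) @ V.T
inv_sqrt_S = V @ np.diag(w ** -0.5) @ V.T

def J(A):
    return np.trace(np.linalg.inv(A)) + np.trace(A @ Sigma)

lower = 2.0 * np.trace(sqrt_S)
print(J(inv_sqrt_S) - lower)                          # ~ 0: minimum attained at Sigma^{-1/2}
for _ in range(1000):                                 # J never falls below the bound
    H = rng.standard_normal((d, d))
    A = H @ H.T + 0.05 * np.eye(d)
    assert J(A) >= lower - 1e-9
```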
# Displacement convexity of the Bass functional {#sec4dcotbf}
We observe that the Bass functional $\alpha \mapsto \mathcal{V}(\alpha)$ provides a novel example of a convex functional with respect to the almost-Riemannian structure of the quadratic Wasserstein space $\mathcal{P}_{2}$. As mentioned in [@Vi03 Open Problem 5.17], there are only a few known examples of so-called displacement convex functionals (see [@Vi03 Definition 5.10], [@AGS08 Definition 9.1.1], [@McCgas]), and it is desirable to find new ones.
We shall state two versions of this result. The first one, Proposition [Proposition 15](#sec_4_dis_one){reference-type="ref" reference="sec_4_dis_one"}, pertains to the case $d=1$, while the second one, Proposition [Proposition 16](#sec_4_dis_d){reference-type="ref" reference="sec_4_dis_d"}, holds for general $d \in \mathbb{N}$. We also note that, contrary to the rest of this paper, we do not assume that $\mu \preceq_{\textnormal{c}}\nu$.
**Proposition 15**. *Suppose $d = 1$. Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R})$. The Bass functional $$\label{sec_4_dis_one_aa}
\mathcal{P}_{2}(\mathbb{R}) \ni \alpha \longmapsto \mathcal{V}(\alpha)
= \textnormal{MCov}(\alpha \ast \gamma, \nu) - \textnormal{MCov}(\alpha,\mu)$$ is displacement convex. Moreover, if a geodesic $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ in $\mathcal{P}_{2}(\mathbb{R})$ is such that $\alpha_{1}$ is not a translate of $\alpha_{0}$ and if $\nu$ is not a Dirac measure, the function $u \mapsto \mathcal{V}(\alpha_{u})$ is strictly convex.*
**Proof.* We start by noting that the Bass functional $\mathcal{V}(\, \cdot \,)$ of [\[sec_4\_dis_one_aa\]](#sec_4_dis_one_aa){reference-type="eqref" reference="sec_4_dis_one_aa"} can equivalently be defined in terms of the quadratic Wasserstein distance $\mathcal{W}_{2}(\, \cdot \, , \, \cdot \,)$ of [\[def_eq_was\]](#def_eq_was){reference-type="eqref" reference="def_eq_was"} rather than in terms of the maximal covariance $\textnormal{MCov}(\, \cdot \, , \, \cdot \,)$ of [\[eq_def_mcov\]](#eq_def_mcov){reference-type="eqref" reference="eq_def_mcov"}. Indeed, we have the identity $$\mathcal{V}(\alpha)
= \textnormal{MCov}(\alpha \ast \gamma, \nu) - \textnormal{MCov}(\alpha,\mu)
= \tfrac{1}{2} \mathcal{W}_{2}^{2}(\alpha,\mu) - \tfrac{1}{2} \mathcal{W}_{2}^{2}(\alpha \ast \gamma, \nu) + \textnormal{const},$$ where the constant $$\textnormal{const} = \tfrac{d}{2} + \tfrac{1}{2} \int \vert y \vert^{2} \, d\nu(y) - \tfrac{1}{2} \int \vert x \vert^{2} \, d\mu(x)$$ does not depend on $\alpha$. Therefore showing the (strict) displacement convexity of the Bass functional $\mathcal{V}(\, \cdot \,)$ is equivalent to showing the (strict) displacement convexity of the functional $$\label{eq_func_ualph}
\mathcal{U}(\alpha) \coloneqq \mathcal{W}_{2}^{2}(\alpha,\mu) - \mathcal{W}_{2}^{2}(\alpha \ast \gamma,\nu).$$*
*Fix $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R})$ and let $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ be a geodesic in the quadratic Wasserstein space $\mathcal{P}_{2}(\mathbb{R})$. Using the hypothesis $d = 1$ we can choose mutually comonotone random variables $Z_{0}$, $Z_{1}$ and $X$ with laws $\alpha_{0}$, $\alpha_{1}$ and $\mu$, respectively. As $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ is a geodesic, the random variable $Z_{u} \coloneqq (1-u) Z_{0} + u Z_{1}$ has law $\alpha_{u}$, for $0 \leqslant u \leqslant 1$. Also note that each $Z_{u}$ is comonotone with $X$. Let $u_{0},u \in [0,1]$. As regards the first Wasserstein distance in [\[eq_func_ualph\]](#eq_func_ualph){reference-type="eqref" reference="eq_func_ualph"}, a straightforward calculation yields $$\begin{aligned}
&\mathcal{W}_{2}^{2}(\alpha_{u},\mu) - \mathcal{W}_{2}^{2}(\alpha_{u_{0}},\mu) = \label{sec_4_dis_one_ai1} \\
& \qquad \qquad = \mathbb{E}[\vert Z_{u}-X\vert^{2}] - \mathbb{E}[\vert Z_{u_{0}}-X\vert^{2}] \label{sec_4_dis_one_ai2} \\
& \qquad \qquad = \mathbb{E}[\vert Z_{u}-Z_{u_{0}}\vert^{2}] - 2 \mathbb{E}[ \langle Z_{u}-Z_{u_{0}} , X - Z_{u_{0}}\rangle ] \label{sec_4_dis_one_ai3} \\
& \qquad \qquad = (u-u_{0})^{2} \mathbb{E}[\vert Z_{1}-Z_{0}\vert^{2}] - 2 (u-u_{0}) \mathbb{E}[ \langle Z_{1}-Z_{0} , X - Z_{u_{0}}\rangle]. \label{sec_4_dis_one_ai4}\end{aligned}$$ Passing to the second Wasserstein distance in [\[eq_func_ualph\]](#eq_func_ualph){reference-type="eqref" reference="eq_func_ualph"}, we take a standard Gaussian random variable $\Gamma$ on $\mathbb{R}$, independent of $Z_{0}$ as well as of $Z_{1}$. Next we choose a random variable $Y_{u_{0}}$ such that $(Z_{u_{0}}+\Gamma,Y_{u_{0}})$ is an optimal coupling of $(\alpha_{u_{0}} \ast \gamma,\nu)$. As $(Z_{u} + \Gamma,Y_{u_{0}})$ is a (typically sub-optimal) coupling of $(\alpha_{u} \ast \gamma, \nu)$, we obtain the inequality $$\begin{aligned}
&\mathcal{W}_{2}^{2}(\alpha_{u} \ast \gamma,\nu) - \mathcal{W}_{2}^{2}(\alpha_{u_{0}} \ast \gamma,\nu) \leqslant \label{sec_4_dis_one_ai5} \\
& \qquad \qquad \leqslant \mathbb{E}[\vert Z_{u} + \Gamma -Y_{u_{0}}\vert^{2}] - \mathbb{E}[\vert Z_{u_{0}} + \Gamma -Y_{u_{0}}\vert^{2}] \label{sec_4_dis_one_ai6} \\
& \qquad \qquad = \mathbb{E}[\vert Z_{u}-Z_{u_{0}}\vert^{2}] - 2 \mathbb{E}[ \langle Z_{u}-Z_{u_{0}} , Y_{u_{0}} - (Z_{u_{0}} + \Gamma) \rangle ] \label{sec_4_dis_one_ai7} \\
& \qquad \qquad = (u-u_{0})^{2} \mathbb{E}[\vert Z_{1}-Z_{0}\vert^{2}] - 2 (u-u_{0}) \mathbb{E}[ \langle Z_{1}-Z_{0} , Y_{u_{0}} - (Z_{u_{0}} + \Gamma) \rangle]. \label{sec_4_dis_one_ai8}\end{aligned}$$ Combining [\[sec_4\_dis_one_ai1\]](#sec_4_dis_one_ai1){reference-type="eqref" reference="sec_4_dis_one_ai1"} -- [\[sec_4\_dis_one_ai8\]](#sec_4_dis_one_ai8){reference-type="eqref" reference="sec_4_dis_one_ai8"}, we deduce that $$\begin{aligned}
&\mathcal{U}(\alpha_{u}) - \mathcal{U}(\alpha_{u_{0}}) =\\
& \qquad = \Big( \mathcal{W}_{2}^{2}(\alpha_{u},\mu) - \mathcal{W}_{2}^{2}(\alpha_{u} \ast \gamma,\nu) \Big)
- \Big( \mathcal{W}_{2}^{2}(\alpha_{u_{0}},\mu) - \mathcal{W}_{2}^{2}(\alpha_{u_{0}} \ast \gamma,\nu) \Big) \geqslant \label{sec_4_dis_one_ai9} \\
& \qquad \geqslant
2 (u-u_{0}) \mathbb{E}[ \langle Z_{1}-Z_{0} , Y_{u_{0}} - X - \Gamma \rangle] \label{sec_4_dis_one_ai10} \\
& \qquad =
2 (u-u_{0}) \mathbb{E}[ \langle Z_{1}-Z_{0} , Y_{u_{0}} - X \rangle], \label{sec_4_dis_one_ai11} \end{aligned}$$ where the last equation follows from conditioning on $Z_{0},Z_{1}$. The expression in [\[sec_4\_dis_one_ai11\]](#sec_4_dis_one_ai11){reference-type="eqref" reference="sec_4_dis_one_ai11"} defines a linear function in $u$, which lies below and touches the function $$\begin{aligned}
u \longmapsto & \, \mathcal{U}(\alpha_{u}) - \mathcal{U}(\alpha_{u_{0}}) = \\
& \qquad = \Big( \mathcal{W}_{2}^{2}(\alpha_{u},\mu) - \mathcal{W}_{2}^{2}(\alpha_{u} \ast \gamma,\nu) \Big)
- \Big( \mathcal{W}_{2}^{2}(\alpha_{u_{0}},\mu) - \mathcal{W}_{2}^{2}(\alpha_{u_{0}} \ast \gamma,\nu) \Big)\end{aligned}$$ at the point $u = u_{0}$. This readily implies the convexity of the function $$u \longmapsto \mathcal{U}(\alpha_{u}) = \mathcal{W}_{2}^{2}(\alpha_{u},\mu) - \mathcal{W}_{2}^{2}(\alpha_{u} \ast \gamma,\nu).$$*
*It remains to show the strict convexity assertion of Proposition [Proposition 15](#sec_4_dis_one){reference-type="ref" reference="sec_4_dis_one"}. If $\alpha_{1}$ is not a translate of $\alpha_{0}$, then $\alpha_{u}$ is not a translate of $\alpha_{u_{0}}$ either, provided that $u \neq u_{0}$. As $Z_{u_{0}} + \Gamma$ is comonotone with $Y_{u_{0}}$ and $Y_{u_{0}}$ is assumed to be non-constant, we may find $y_{0} \in \mathbb{R}$ and $z_{0} \in \mathbb{R}$ such that $\mathbb{P}[Y_{u_{0}} < y_{0}] \in (0,1)$ and $$\{ Z_{u_{0}} + \Gamma < z_{0} \} = \{ Y_{u_{0}} < y_{0} \}.$$ If $Z_{u} + \Gamma$ were also comonotone with $Y_{u_{0}}$, we could find $z \in \mathbb{R}$ such that $$\{ Z_{u_{0}} + \Gamma < z_{0} \}
= \{ Y_{u_{0}} < y_{0} \}
= \{ Z_{u} + \Gamma < z \},$$ where we have used that the law of $Z_{u} + \Gamma$ is continuous. Conditioning on $\Gamma = \zeta$ this implies that, for Lebesgue-a.e. $\zeta \in \mathbb{R}$, $$\{ Z_{u_{0}} < z_{0} - \zeta \}
= \{ Z_{u} < z - \zeta \},$$ so that $Z_{u_{0}}$ and $Z_{u}$ are translates. This gives the desired contradiction, showing that there is a strict inequality in [\[sec_4\_dis_one_ai5\]](#sec_4_dis_one_ai5){reference-type="eqref" reference="sec_4_dis_one_ai5"}, [\[sec_4\_dis_one_ai6\]](#sec_4_dis_one_ai6){reference-type="eqref" reference="sec_4_dis_one_ai6"} (thus also in [\[sec_4\_dis_one_ai9\]](#sec_4_dis_one_ai9){reference-type="eqref" reference="sec_4_dis_one_ai9"}, [\[sec_4\_dis_one_ai10\]](#sec_4_dis_one_ai10){reference-type="eqref" reference="sec_4_dis_one_ai10"}), which implies the strict convexity assertion of Proposition [Proposition 15](#sec_4_dis_one){reference-type="ref" reference="sec_4_dis_one"}. ◻*
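Since in dimension one sorted samples approximate quantile functions, the comonotone coupling attains $\textnormal{MCov}$, and the McCann interpolation is the linear interpolation of quantile functions, Proposition 15 can be illustrated numerically in a few lines. The sketch below is ours; the choices of $\mu$, $\nu$, $\alpha_{0}$, $\alpha_{1}$ are arbitrary, and the convolution with $\gamma$ is approximated by adding one fixed Gaussian sample (common random numbers).

```python
import numpy as np

def mcov1d(x, y):
    """MCov of two 1d empirical measures: the comonotone coupling is optimal."""
    return np.mean(np.sort(x) * np.sort(y))

rng = np.random.default_rng(4)
n = 20_000
mu = rng.standard_normal(n)                                   # an arbitrary mu
nu = np.concatenate([rng.normal(-2, 1, n // 2),               # an arbitrary nu, not a Dirac
                     rng.normal(+2, 1, n // 2)])
a0 = np.sort(rng.normal(0.0, 0.3, n))                         # quantiles of alpha_0
a1 = np.sort(rng.exponential(1.0, n))                         # alpha_1, not a translate of alpha_0
noise = rng.standard_normal(n)                                # fixed noise representing "* gamma"

us = np.linspace(0.0, 1.0, 11)
V = np.array([mcov1d((1 - u) * a0 + u * a1 + noise, nu)
              - mcov1d((1 - u) * a0 + u * a1, mu) for u in us])
print(np.all(np.diff(V, 2) >= -1e-8))                         # discrete convexity along the geodesic
```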
We now pass to the case of general $d \in \mathbb{N}$. In Proposition [Proposition 16](#sec_4_dis_d){reference-type="ref" reference="sec_4_dis_d"} below we formulate a convexity property of the Bass functional $\mathcal{V}(\, \cdot \,)$ pertaining to the notion of *generalized geodesics* as analyzed in [@AGS08 Definition 9.2.2]. Recall that $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ is a generalized geodesic with base $\mu$, joining $\alpha_{0}$ to $\alpha_{1}$, if there are random variables $Z_{0}$, $Z_{1}$ and $X$ with laws $\alpha_{0}$, $\alpha_{1}$ and $\mu$, respectively, such that $(Z_{0},X)$ and $(Z_{1},X)$ are optimal couplings and such that the random variable $Z_{u} \coloneqq uZ_{1} + (1-u) Z_{0}$ has law $\alpha_{u}$, for $0 \leqslant u \leqslant 1$.
**Proposition 16**. *Let $\mu, \nu \in \mathcal{P}_{2}(\mathbb{R}^{d})$. The Bass functional $$\mathcal{P}_{2}(\mathbb{R}^{d}) \ni \alpha \longmapsto \mathcal{V}(\alpha)
= \textnormal{MCov}(\alpha \ast \gamma, \nu) - \textnormal{MCov}(\alpha,\mu)$$ is convex along generalized geodesics $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ in $\mathcal{P}_{2}(\mathbb{R}^{d})$ with base $\mu$.*
We do not know whether the above assertion is also true along (non generalized) geodesics $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ in $\mathcal{P}_{2}(\mathbb{R}^{d})$, when $d > 1$.
*Proof of Proposition [Proposition 16](#sec_4_dis_d){reference-type="ref" reference="sec_4_dis_d"}.* We follow the lines of the proof of Proposition [Proposition 15](#sec_4_dis_one){reference-type="ref" reference="sec_4_dis_one"} and consider again the functional $$\mathcal{U}(\alpha) = \mathcal{W}_{2}^{2}(\alpha,\mu) - \mathcal{W}_{2}^{2}(\alpha \ast \gamma,\nu)$$ as in [\[eq_func_ualph\]](#eq_func_ualph){reference-type="eqref" reference="eq_func_ualph"}. Let $(\alpha_{u})_{0 \leqslant u \leqslant 1}$ be a generalized geodesic with base $\mu$, joining $\alpha_{0}$ to $\alpha_{1}$. Take $Z_{0}, Z_{1}, Z_{u}, X$ as above such that $(Z_{0},X)$ and $(Z_{1},X)$ are optimal couplings and by definition $Z_{u} \sim \alpha_{u}$. Note that $(Z_{u},X)$ is an optimal coupling of $(\alpha_{u},\mu)$ by [@AGS08 Lemma 9.2.1], for $0 \leqslant u \leqslant 1$. The equalities [\[sec_4\_dis_one_ai1\]](#sec_4_dis_one_ai1){reference-type="eqref" reference="sec_4_dis_one_ai1"} -- [\[sec_4\_dis_one_ai4\]](#sec_4_dis_one_ai4){reference-type="eqref" reference="sec_4_dis_one_ai4"} and the inequality in [\[sec_4\_dis_one_ai5\]](#sec_4_dis_one_ai5){reference-type="eqref" reference="sec_4_dis_one_ai5"} -- [\[sec_4\_dis_one_ai8\]](#sec_4_dis_one_ai8){reference-type="eqref" reference="sec_4_dis_one_ai8"} then carry over verbatim and we again arrive at [\[sec_4\_dis_one_ai9\]](#sec_4_dis_one_ai9){reference-type="eqref" reference="sec_4_dis_one_ai9"} -- [\[sec_4\_dis_one_ai11\]](#sec_4_dis_one_ai11){reference-type="eqref" reference="sec_4_dis_one_ai11"}, which shows the convexity of the function $[0,1] \ni u \mapsto \mathcal{U}(\alpha_{u})$. ◻
[^1]: We thank Ben Robinson for his valuable comments during the preparation of this paper. WS and BT acknowledge support by the Austrian Science Fund (FWF) through projects P 35197 and P 35519, and JB acknowledges support by the FWF through projects Y 00782 and P 36835.
| arxiv_math | {
"id": "2309.11181",
"title": "The Bass functional of martingale transport",
"authors": "Julio Backhoff-Veraguas and Walter Schachermayer and Bertram\n Tschiderer",
"categories": "math.PR",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
We present a new example for the Lavrentiev phenomenon in context of nonlinear elasticity, caused by an interplay of the elastic energy's resistance to infinite compression and the Ciarlet--Nečas condition, a constraint preventing global interpenetration of matter on sets of full measure.
address:
- University of Naples Federico II, Department of Mathematics and Applications R. Caccioppoli, via Cintia, Monte S. Angelo, 80126 Naples, Italy.
- The Czech Academy of Sciences, Institute of Information Theory and Automation, Pod vodárenskou věžı́ 4, 182 08 Praha 8, Czech Republic.
- University of Vienna, Faculty of Mathematics, Oskar-Morgenstern Platz 1, 1090 Vienna, Austria.
author:
- Stefano Almi
- Stefan Krömer
- Anastasia Molchanova
bibliography:
- NLEbib.bib
title: A new example for the Lavrentiev phenomenon in Nonlinear Elasticity
---
# Introduction[\[sec:intro\]]{#sec:intro label="sec:intro"}
Following the by-now classical theory of nonlinear elasticity [@Ant05B; @Ba77a; @Cia88B; @Sil97B], we consider an elastic body occupying in its reference configuration an open bounded set $\Omega\subseteq \mathbb R^{d}$ with Lipschitz boundary $\partial \Omega$, subject to a prescribed boundary condition on a part $\Gamma\subset \partial\Omega$ with positive surface measure, i.e., $\mathcal{H}^{d-1}(\Gamma)>0$. A possible deformation of the body is described by a mapping $y\colon \Omega \to \mathbb R^{d}$ such that $y = y_0$ on $\Gamma$, where $y_0$ is the imposed boundary data. Its associated internally stored elastic energy is given by the functional $$\label{def:energy}
E(y):=\int_{\Omega} W(\nabla y(x)) \, dx,$$ with a function $W$ representing material properties: the local energy density, which is assumed to be a function of the deformation gradient. A crucial aspect of this mathematical model [@Ball2010] is to define a suitable class of admissible deformations that capture relevant features, such as non-interpenetration of matter, which mathematically translates into injectivity of $y$. However, considering different admissible classes can lead to a Lavrentiev phenomenon, i.e., the functional infima differ when restricting the minimization of [\[def:energy\]](#def:energy){reference-type="eqref" reference="def:energy"} to more regular deformations, such as $W^{1, \infty}$ in place of $W^{1, p}$. Functionals demonstrating this behavior were first discovered in the early 20th century [@La1927a; @Man1934]. There the minimum value over $W^{1,1}$ is strictly less than the infimum over $W^{1,\infty}$. For an extensive survey on the Lavrentiev phenomenon in a broader context, we refer the interested reader to [@ButBel1995].
In the context of nonlinear elasticity, the Lavrentiev phenomenon was first observed with admissible deformations that allow cavitations, i.e., the formation of voids in the material [@Bal1982]. For the study of cavitations, we refer to [@BarDou2020; @HeMo15a] and references therein.
A natural question raised in [@BalMiz1985] and [@Ba02a] is:\
*Can the Lavrentiev phenomenon occur for elastostatics under growth conditions on the stored-energy function, ensuring that all finite-energy deformations are continuous?*\
This is indeed the case, and the first example of this kind was given in two dimensions [@Foss2003; @FosHruMiz2003b; @FoHruMi03a]. It features an energy density with desirable properties: $W$ is smooth, polyconvex, frame-indifferent, isotropic, $W(F) \gtrsim |F|^{p}$ with $p>2$, and $W(F) \to \infty$ as $\det F \to 0+$. Moreover, admissible deformations are almost everywhere (a.e.) injective. In this example, the reference configuration is represented by a disk sector $\Omega_{\alpha} := \{ r (\cos \theta,\sin \theta): 0<r<1, 0<\theta < \alpha\}$. A crucial aspect for the emergence of the Lavrentiev phenomenon in that example is the local behavior of (almost) minimizers near the tip at $r=0$, interacting with a particular choice of boundary conditions. The latter fix the origin, $y(0,0) = (0,0)$, and prescribe $y(1,\theta) = (1, \frac{\beta}{\alpha} \theta)$ and $y(\Omega_\alpha) \subset \Omega_\beta$, where $0 < \beta < \frac{3}{4} \alpha$.
In the current paper, we provide examples of the Lavrentiev phenomenon in elasticity both in two and three dimensions. The elastic energy is of a simple neo-Hookean form with physically reasonable properties as described above, and admissible deformations are continuous and a.e.-injective. In contrast to [@Foss2003; @FosHruMiz2003b; @FoHruMi03a], the Lavrentiev phenomenon in our example is not related to the local behavior of almost minimizers near prescribed boundary data, but to a possible global self-intersection of the material that still maintains a.e. injectivity by compressing two different material cross-sections to a single point (or line in 3D) of self-contact in the deformed configuration. Such a self-intersection turns out to be energetically favorable due to our particular choice of boundary conditions, but it is no longer possible if we restrict to a sufficiently smooth class of admissible deformations, which then leads to a higher energy infimum.
Throughout the paper, we consider *locally orientation preserving* deformations with $p$-Sobolev regularity $$\label{def:W1p+}
W_+^{1,p} (\Omega, \mathbb R^{d}) := \{ y \in W^{1,p} (\Omega; \mathbb R^{d})\mid \det\nabla y>0~\text{a.e.~in $\Omega$}\}\subset W^{1,p}(\Omega; \mathbb R^{d}).$$ If $p>d$, the Sobolev embedding theorems ensure the continuity of $W^{1,p}$-mappings. The question of injectivity of deformations, i.e., non-interpenetration of matter, is more delicate and it has been extensively studied. Let us mention just a few references. For local invertibility conditions, see [@BarHenMor2017; @FoGa95a; @Hen-Str-2020]. As for global injectivity one may ask some coercivity with respect to specific ratios of powers of a matrix $F$, its cofactor matrix $\operatorname{cof}F$, and its determinant $\det F$ combined with global topological information from boundary values [@Ba81a; @HeMoOl21a; @IwaOnn2009; @Kroe20a; @MolVod2020; @Sve88a] or second gradient [@HeaKroe09a], as well as other regularity [@CiaNe87a; @Tan1988; @Sve88a] and topological restrictions such as (INV)-condition [@BarHenMor2017; @ConDeLel2003; @HenMor2010; @MulSpe1995; @SwaZie2004] and considering limits of homeomorphisms [@BouHenMol2020; @DolHenMol2022; @IwaOnn2009; @MolVod2020]. In this paper, we adopt the approach from [@CiaNe87a], where the authors investigate a class of mappings $y\in W_+^{1,p} (\Omega; \mathbb R^{d})$ satisfying the *Ciarlet--Nečas condition*: $$\begin{aligned}
\label{CN}\tag{CN}
\int_\Omega \det(\nabla y (x)) \,dx \leq \left\vert y(\Omega)\right\vert,\end{aligned}$$ and prove that the mappings of this class are a.e.-injective.
In the examples we consider, $W(F) \gtrsim |F|^{p} + (\det F)^{-q}$, and the reference configuration $\Omega$ and the boundary data $y_0$ are chosen in such a way that the energy $E$ favors deformations that have nonempty sets of non-injectivity. In particular, we construct in Section [\[sec:Lav2d\]](#sec:Lav2d){reference-type="ref" reference="sec:Lav2d"} (resp. Section [4](#sec:Lav3d){reference-type="ref" reference="sec:Lav3d"}) a competitor $y \in W^{1, p}_{+} (\Omega; \mathbb R^{2})$ (resp. $y \in W^{1, p}_{+}(\Omega; \mathbb R^{3})$) satisfying the Ciarlet--Nečas condition [\[CN\]](#CN){reference-type="eqref" reference="CN"} and having a line (resp. a plane) of non-injectivity. The energy of such a deformation is shown to be strictly less than that of Lipschitz deformations, for which injectivity is ensured everywhere. The global injectivity in this case follows from the Reshetnyak theorem for mappings of finite distortion [@ManVill1998]. Specifically, a mapping $y\in W^{1,d}_{loc}( \Omega; \mathbb R^{d})$ with $\det \nabla y \geq 0$ a.e. has finite distortion if $|\nabla y (x)| = 0$ whenever $\det \nabla y (x) = 0$. If, in addition, the distortion $K_{y}:=\frac{|\nabla y |^d}{\det \nabla y} \in L^{\varkappa}$ with $\varkappa > d-1$, then $y$ is either constant or open and discrete. Furthermore, it is not difficult to see that an a.e.-injective and open mapping $y\in W^{1,d}_{loc} (\Omega; \mathbb R^{d})$ is necessarily injective everywhere, as pointed out in [@GraKruMaiSte19a Lemma 3.3]. For a general theory of mappings of finite distortion the reader is referred to [@HenKos2014].
Our example also shows that, depending on the precise properties of the energy density $W$, there can be an energy gap between the class of orientation preserving a.e. injective deformations (i.e., satisfying the Ciarlet--Nečas condition) on the one hand and the strong (or weak) closure of Sobolev homeomorphisms in the ambient Sobolev space on the other hand. If these classes do not coincide (which can certainly happen if there is not enough control of the distortion via the energy to apply the Reshetnyak theorem [@ManVill1998] as above), one has to carefully choose which constraint to use to enforce non-interpenetration of matter, even if $p>d$. In our example, the Ciarlet--Nečas condition does allow a "deep" self-interpenetration in such a scenario. As a matter of fact, this self-interpenetration is also topologically stable in the sense that all $C^0$-close deformations still self-intersect (see Figures [\[f:reference_configuration\]](#f:reference_configuration){reference-type="ref" reference="f:reference_configuration"} and [\[f:cross\]](#f:cross){reference-type="ref" reference="f:cross"} for reference and deformed configurations in the 2D case). To us, it seems doubtful that such a deformation corresponds to a physically meaningful state. This strongly speaks for preferring a closure of homeomorphisms as the admissible class in such cases. An open problem in this context is to find sharp conditions for the energy density so that all a.e.-injective orientation preserving Sobolev maps can be found as strong (or weak) limits of Sobolev homeomorphisms in $W^{1,p}$. In case $p\geq d$, having $K_{y}\in L^{\varkappa}$ with $\varkappa > d-1$ as above is clearly sufficient, but probably not necessary, at least not in dimension $d\geq 3$.
The plan of the paper is the following. Section [\[sec:ex\]](#sec:ex){reference-type="ref" reference="sec:ex"} is dedicated to the general setting of the problem and a few basic auxiliary results. In Sections [\[sec:Lav2d\]](#sec:Lav2d){reference-type="ref" reference="sec:Lav2d"} and [4](#sec:Lav3d){reference-type="ref" reference="sec:Lav3d"}, we discuss the Lavrentiev phenomenon in dimensions two and three for the energy $E$ in the class of deformations $y\in W_+^{1,p} (\Omega, \mathbb R^{d})$ satisfying the Ciarlet--Nečas condition [\[CN\]](#CN){reference-type="eqref" reference="CN"} as well as suitable Dirichlet boundary conditions on selected parts of $\partial \Omega$.
# General setting[\[sec:ex\]]{#sec:ex label="sec:ex"}
In dimension $d \geq 2$ we consider a Neo-Hookean nonlinear elastic material with energy density: $$\begin{aligned}
\label{W0}
W(F):=\begin{cases}
\left\vert F\right\vert^p+\gamma \dfrac{1}{(\det F)^q} & \text{if $\det F>0$},\\
+\infty & \text{else,}
\end{cases}
\qquad \text{for $F\in \mathbb R^{d\times d}$.}\end{aligned}$$ In [\[W0\]](#W0){reference-type="eqref" reference="W0"}, $\left\vert F\right\vert:=\big(\sum_{ij} F^2_{ij}\big)^{\frac{1}{2}}$ denotes the standard Euclidean matrix norm, $q>0$ is a constant, and $\gamma>0$ is chosen in such a way that $W$ is minimized at the identity matrix ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$, i.e., $$\label{def:gamma}
\gamma := \frac{p d^{\frac{p}{2}-1}}{q}.$$ Indeed, let $\det F >0$ and let $\lambda_1, \ldots, \lambda_d >0$ be the singular values of $F$; then $$W(F)=\mathcal{W}(\lambda_1,\dots,\lambda_d) = \Big(\sum_{k=1}^{d} \lambda_k^2\Big)^{\frac{p}{2}} + \gamma \Big(\prod_{k=1}^{d}\lambda_k\Big)^{-q}$$ and the equalities $\frac{\partial}{\partial \lambda_i}\mathcal{W}\mid_{\lambda_j=1}=0$ give us [\[def:gamma\]](#def:gamma){reference-type="eqref" reference="def:gamma"}. Moreover, $\lambda_i =1$, $i=1,\ldots,d$, is the global minimizer of $\mathcal{W}$. Indeed, if $(\lambda_1, \ldots, \lambda_d)$ is a local minimum, then for any $i = 1,\ldots,d$, $$\frac{\partial}{\partial \lambda_i}\mathcal{W}
=p\lambda_i \Big(\sum_{k=1}^{d} \lambda_k^2\Big)^{\frac{p}{2}-1} - \gamma q \frac{1}{\lambda_i} \Big(\prod_{k=1}^{d}\lambda_k\Big)^{-q}
=0.$$ This means that $\lambda_i^2 = \frac{\gamma q}{p}\, S^{2-p} P^{-q}$ with $S := \big(\sum_{k=1}^{d}\lambda_k^2\big)^{\frac{1}{2}}$ and $P := \prod_{k=1}^{d}\lambda_k$, so the right-hand side is independent of $i$ and all singular values coincide, $\lambda_1 = \dots = \lambda_d =: \lambda$. Inserting this into the stationarity condition and using [\[def:gamma\]](#def:gamma){reference-type="eqref" reference="def:gamma"} yields $\lambda^{p+qd}=1$, hence $\lambda_i = \lambda_j=1$ for all $i$, $j = 1,\ldots,d$. In other words, only rotation matrices $F \in SO(d)$ are minimizers of $W$.
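As a quick numerical sanity check of [\[def:gamma\]](#def:gamma){reference-type="eqref" reference="def:gamma"} (the parameter values below are chosen purely for illustration), take $d=3$, $p=4$ and $q=2$. Then $$\gamma = \frac{p\, d^{\frac{p}{2}-1}}{q} = \frac{4 \cdot 3}{2} = 6, \qquad W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) = d^{\frac{p}{2}} + \gamma = 9 + 6 = 15,$$ and the stationarity condition at $\lambda_1=\lambda_2=\lambda_3=1$ reduces to $p\, d^{\frac{p}{2}-1} = \gamma q$, i.e. $12 = 12$, confirming that the identity is a critical point of $\mathcal{W}$.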
Below we summarize some "good" [@Ball2010] properties of the energy density $W$.
**Proposition 1**. *For $W$ given by [\[W0\]](#W0){reference-type="eqref" reference="W0"} and [\[def:gamma\]](#def:gamma){reference-type="eqref" reference="def:gamma"}, we have that*
1. *$W \in C^{\infty}(\mathbb R^{d\times d}_{+}, \mathbb R)$,*
2. *$W(F) \to +\infty$ as $\det F \to 0{+}$,*
3. *$W$ is frame-indifferent and isotropic, i.e., $W(RF) = W(FR) = W(F)$ for all $R\in SO(d)$ and $F\in \mathbb R^{d\times d}_{+}$,*
4. *$W$ is polyconvex,*
5. *$W(F) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \geq 0$ for all $F\in \mathbb R^{d\times d}_{+}$,*
6. *there exist constants $c=c(d,p,q)>0$ and $b=b(d,p,q)\in \mathbb{R}$ such that $$\begin{aligned}
\label{Wcoercivity}
W(F) \geq c \left(\left\vert F\right\vert^p + \left\vert\operatorname{cof}F\right\vert^{\frac{p}{d-1}} + (\det F)^{\frac{p}{d}}\right) +b.
\end{aligned}$$*
For later use, we point out the following proposition, which expresses the minimality of the identity map in a quantitative form. From now on, we denote by $\| F\|_{2}$ the operator norm of $F \in \mathbb{R}^{d \times d}$, i.e., $\left\|F\right\|_2:=\sup\{ \left\vert Fe\right\vert:\left\vert e\right\vert=1\}$.
**Proposition 2**. *For $W$ given by [\[W0\]](#W0){reference-type="eqref" reference="W0"} and [\[def:gamma\]](#def:gamma){reference-type="eqref" reference="def:gamma"}, we have the following lower bound $$\begin{aligned}
\label{Wlowerbound}
W(F)-W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \geq c \left\vert\left\|F\right\|_2-1\right\vert^p+c(\left\|F\right\|_2-1)^2
\end{aligned}$$ with some constant $c=c(d,p,q)>0$.*
*Proof.* As before, we express $W(F)=\mathcal{W}(\lambda_1,\ldots,\lambda_d)$ and $\left\|F\right\|_2=\max\{\lambda_i:i=1,\ldots,d\}$ in terms of the singular values of $F$. Abbreviating $$\textstyle S:=\Big(\sum_{k=1}^{d} \lambda_k^2\Big)^{\frac{1}{2}},\quad P:=\prod_{k=1}^{d}\lambda_k,$$ we have that $\mathcal{W}(\lambda_1,\ldots,\lambda_d) = S^p+\gamma P^{-q}$. Thus, it holds $$\begin{aligned}
&\frac{\partial^2}{\partial \lambda_i \partial \lambda_j} \mathcal{W}(\lambda_1,\dots,\lambda_d)
%&~~~
= \left(p(p-2)\lambda_i\lambda_j+p\delta_{ij} S^{2}\right)
S^{p-4}
+\gamma q(q+\delta_{ij}) \frac{1}{\lambda_i\lambda_j} P^{-q}.
\end{aligned}$$ Notice that $\big(p(p-2)S^{p-4}\lambda_i\lambda_j\big)_{ij}$ and $\big(\gamma q^2 P^{-q} \lambda_i^{-1}\lambda_j^{-1}\big)_{ij}$ are positive semidefinite matrices of rank $1$, while the other contributions involving Kronecker's $\delta_{ij}$ give a diagonal matrix with positive coefficients that can be estimated for all $i=j$. Indeed, defining $$\mu=\mu(d,q):=\min \left\{
\lambda_j^{-2} P^{-q} \mid 1 \geq \lambda_1,\ldots,\lambda_d>0,~S=1
\right\}\geq d^{\frac{qd}{2}}>0$$ (due to symmetry, $\mu$ does not depend on $j$), we obtain that $$\begin{aligned}
\label{e:SP}
& p S^{p-2}+\gamma q \lambda_j^{-2}P^{-q}\geq p S^{p-2}+\gamma q \mu S^{-qd-2}.\end{aligned}$$ Choosing $\alpha \in (0, 1)$ such that $p \leq \frac{2- \alpha}{1 - \alpha}$ we may continue in [\[e:SP\]](#e:SP){reference-type="eqref" reference="e:SP"} with $$\begin{aligned}
& p S^{p-2}+\gamma q \lambda_j^{-2}P^{-q}\geq (1 - \alpha) p(p-1) S^{p-2}+ \big( (\alpha - 1) p^{2} + (2 - \alpha) p\big) S^{p-2} + \gamma q \mu S^{-qd-2},\end{aligned}$$ from which we infer the existence of a constant $\hat{c}=\hat{c}(\gamma,d,p,q,\mu)>0$ such that $$p S^{p-2}+\gamma q \lambda_j^{-2}P^{-q}\geq \hat{c} (p(p-1)S^{p-2}+2).$$ Altogether, we get $$\begin{aligned}
\label{D2Wc-posdef}
\xi\cdot D^2\mathcal{W}(\lambda_1,\ldots,\lambda_d)\xi\geq \hat{c} (p(p-1)S^{p-2}+2)\left\vert\xi\right\vert^2\quad\text{for $\xi\in \mathbb R^d$}.\end{aligned}$$
We now conclude for [\[Wlowerbound\]](#Wlowerbound){reference-type="eqref" reference="Wlowerbound"}. Without loss of generality, we may assume that $\left\|F\right\|_2=\lambda_1$. Let us define the curve $\lambda\colon[0,1]\to (0, +\infty)^d$ connecting $(\lambda_{1}, \ldots, \lambda_{d})$ to $(1, \ldots, 1)$ $$t\mapsto \lambda(t)=(\lambda_1(t),\ldots,\lambda_d(t)):=(t\lambda_1+1-t,\ldots,t\lambda_d+1-t).$$ Since $\frac{\partial}{\partial \lambda_j}\mathcal{W}(1,\ldots,1)=0$, integrating twice along $\lambda$ and using [\[D2Wc-posdef\]](#D2Wc-posdef){reference-type="eqref" reference="D2Wc-posdef"} we obtain $$\begin{aligned}
W(F)-W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})&=\int_0^1 \int_0^\tau \frac{d^2}{dt^2}\mathcal{W}\big(\lambda_1(t),\ldots,\lambda_d(t)\big)\,dt\,d\tau \\
&\geq \hat{c} \int_0^1 \int_0^\tau (p(p-1)\lambda_1(t)^{p-2}+2)\dot\lambda_1(t)^2\,dt\,d\tau \\
&= \hat{c} \int_0^1 \int_0^\tau \frac{d}{dt} \Big[p\lambda_1(t)^{p-1}+2\lambda_1(t)\Big] \dot\lambda_1(t)\,dt\,d\tau \\
& = \hat{c} \left( \lambda_1^{p}-1-p(\lambda_1-1)+(\lambda_1-1)^2 \right) \geq c \left(|\lambda_1-1|^{p}+(\lambda_1-1)^2\right),
\end{aligned}$$ with $c:=\min\big\{\hat{c},\frac{1}{2}\big\}$ (exploiting that $p\geq 2$). ◻
In both the examples we present in this paper, we fix as reference configuration an open bounded set $\Omega_{s} \subseteq \mathbb R^{d}$ with Lipschitz boundary $\partial \Omega_{s}$. Our set will always have two connected components whose precise shape will be chosen depending on the dimension $d$ and will further depend on a parameter $s>0$. For every $y \in W_+^{1,p} (\Omega_{s}, \mathbb R^{d})$ we define the energy functional $$\begin{aligned}
\label{Eelastic}
E_s(y):=\int_{\Omega_s} \left(W(\nabla y)-W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\right) \,dx.$$ In particular, notice that the energy $E_{s}$ is normalized to $0$ at $y = id$, since $W$ attains its minimum value on $SO(d)$. Let $\Gamma_{s}$ be a subset of $\partial\Omega_{s}$ with $\mathcal{H}^{d-1}(\Gamma_s)>0$, with imposed Dirichlet boundary data $y_{0}\in W^{1, p}_{+} (\Omega_{s}, \mathbb R^{d})$. The set of admissible deformations $\mathcal{Y}_s \subseteq W^{1, p}_{+} (\Omega_{s}, \mathbb R^{d})$ reads as $$\label{def:Ys}
\mathcal{Y}_s:=\left\{y\in W_+^{1,p}(\Omega_s;\mathbb R^d)\,\left|\,\text{\eqref{CN} holds and $y=y_0$ on $\Gamma_s$}\right.\right\}.$$
The existence of minimizers is nowadays classic and follows, e.g., from [@CiaNe87a Theorem 5] due to Proposition [Proposition 1](#p:properties-W){reference-type="ref" reference="p:properties-W"} since $p>d$.
**Theorem 3**. *If $\mathcal{Y}_{s}\neq \varnothing$ and $\inf\limits_{{y} \in \mathcal{Y}_s} E_s(y)<\infty$, then there exists $\hat{y}_s \in \mathcal{Y}_{s}$ such that $\inf\limits_{{y} \in \mathcal{Y}_s} E_s(y) = E_s(\hat{y}_s)$.*
# The Lavrentiev phenomenon in dimension two[\[sec:Lav2d\]]{#sec:Lav2d label="sec:Lav2d"}
In dimension $d=2$ we consider a reference configuration $\Omega_{s}$ consisting of two stripes of width $0<s<1$, given by (see also Fig. [\[f:reference_configuration\]](#f:reference_configuration){reference-type="ref" reference="f:reference_configuration"}) $$\label{def:omega_s}
\begin{aligned}
&\Omega_s:=S_1 \cup S_2,\quad\text{where}~~S_1:=(-1,1)\times (-s,s),~~S_2:=\xi+QS_1, \\
&~~Q:=\left(\begin{array}{rr}0 & -1\\1 & 0\end{array}\right)~~\text{and}~~\xi:=\left(\begin{array}{c} 4\\0 \end{array}\right).
\end{aligned}$$
We denote by $\Gamma_{s}$ the subset of $\partial\Omega_{s}$ given by $$\Gamma_s:= \big[ \{-1,1\}\times (-s,s) \big] \cup \big[ (4-s,4+s)\times \{-1,1\} \big] \subset \partial\Omega_s.$$ On $\Gamma_{s}$ we impose the following Dirichlet boundary condition $$y_0(x):=
\begin{cases}
x & \text{for $x\in \overline{S_1}$,}\\
x-\xi &\text{for $x\in \overline{S_2}$}
\end{cases}$$ Notice that on both pieces of $\Omega_s$ the function $y_{0}$ is such that $\nabla y_0={\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$, which minimizes $W$ pointwise. However, $y_0(\Omega_s)$ is cross-shaped with $y_0$ doubly-covering the center. Hence, $y_0$ is not globally injective and does not satisfy [\[CN\]](#CN){reference-type="eqref" reference="CN"}. It is not hard to see that $\mathcal{Y}_s$, defined by [\[def:Ys\]](#def:Ys){reference-type="eqref" reference="def:Ys"}, still contains many admissible functions as long as $s<1$.
**Theorem 4** (The Lavrentiev phenomenon occurs). *Let $p \in (2, +\infty)$ and $q \in (1, \frac{p}{p-2})$. Then, there exists $\overline{s} \in (0, 1)$ such that for every $s \in (0, \overline{s}]$ the following holds: $$\label{e:lav2d}
\inf_{y\in W^{1,\infty} (\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_s}E_s(y) >
\min_{y\in \mathcal{Y}_s}E_s(y).$$*
The proof of Theorem [Theorem 4](#thm:lav2d){reference-type="ref" reference="thm:lav2d"} is a consequence of the following two propositions, which determine the asymptotic behavior of the minimum problems in [\[e:lav2d\]](#e:lav2d){reference-type="eqref" reference="e:lav2d"}.
**Proposition 5**. *Let $p \in (2, +\infty)$ and $q \in (1, \frac{p}{p-2}]$. Then $\min_{y\in \mathcal{Y}_s} E_s(y)=o(s)$, i.e., we have that $$\label{e:step1}
\lim_{s\searrow 0} \frac{1}{s}\inf_{y \in \mathcal{Y}_{s}} \, E_{s}(y) =0.$$*
**Proposition 6**. *Let $p \in (2, +\infty)$. Then there exists $\overline{q} \in (1, +\infty)$ and $\overline{s} \in (0, 1)$ such that for every $q\in (1,\overline{q}]$ and every $s \in (0, \overline{s}]$, $$\label{e:step2}
ms\leq \inf_{y \in W^{1, \infty}(\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_{s}} \, E_{s}(y) \leq Ms$$ with constants $0<m<M<+\infty$ independent of $s$.*
We start with the proof of Proposition [Proposition 5](#p:lav-2d-1){reference-type="ref" reference="p:lav-2d-1"}.
*Proof of Proposition [Proposition 5](#p:lav-2d-1){reference-type="ref" reference="p:lav-2d-1"}.* In order to prove [\[e:step1\]](#e:step1){reference-type="eqref" reference="e:step1"} we explicitly construct a deformation $y_{\alpha,\beta}$ forming a cross with self-intersection. By squeezing two central cross-sections to a point at a suitable rate, which will be the only point of self-intersection in $y_{\alpha,\beta}(\Omega_s)$, we produce an almost-minimizer of $E_{s}$ in $\mathcal{Y}_s$.
We start with the case $q \in (1, \frac{p}{p-2})$. We divide $S_1$ into two pieces $S'_1=S_1\cap\{|x_1|\leq s\}$ and $S''_1=S_1\cap\{|x_1|\geq s\}$ and fix $\frac{p-1}{p}<\alpha<\beta\leq1$. For $x\in S'_1$ we set $$y_{\alpha,\beta}(x):=\left(\begin{array}{c} \displaystyle \frac{x_1}{\left\vert x_1\right\vert^{1-\alpha}} \\[5mm] \displaystyle \frac{\left\vert x_1\right\vert^\beta}{s^\beta} x_2 \end{array}\right),\quad\text{whence}\quad \nabla y_{\alpha,\beta}(x)=
\left(\begin{array}{cc}
\displaystyle \frac{\alpha}{\left\vert x_1\right\vert^{1-\alpha}} & 0 \\[5mm]
\displaystyle \frac{\beta}{s^\beta} {\left\vert x_1\right\vert^{\beta-2} x_1 x_2} & \displaystyle\frac{\left\vert x_1\right\vert^\beta}{s^\beta}
\end{array}\right).$$ For $x\in S''_1$ we connect $y_{\alpha, \beta}$ to the boundary datum $y_{0}$ as follows: $$y_{\alpha,\beta}(x):=\left(\begin{array}{c} \displaystyle \frac{x_{1}}{|x_{1}|} \, \left(\frac{1-s^\alpha}{1-s} ( |x_1| - 1) + 1\right) \\[4mm] x_2 \end{array}\right),\quad\text{whence}\quad \nabla y_{\alpha,\beta}(x)=
\left(\begin{array}{cc}
\displaystyle \frac{1-s^\alpha}{1-s} & 0 \\[4mm]
0 & 1
\end{array}\right).$$ In particular, $\det\nabla y_{\alpha,\beta}=\frac{1-s^\alpha}{1-s} >0$ in $S''_1$ and $\det\nabla y_{\alpha,\beta}=\frac{\alpha}{s^\beta} \left\vert x_1\right\vert^{\alpha+\beta-1} >0$ a.e. in $S'_1$ and $$\frac{1}{(\det\nabla y_{\alpha,\beta})^q}=
\begin{cases}
\displaystyle \frac{s^{\beta q}}{\alpha^q}\left\vert x_1\right\vert^{(1-\alpha-\beta) q}, &\text{if } x\in S'_1,\\[3mm]
\displaystyle\Big( \frac{1-s}{1-s^\alpha}\Big)^{q}, &\text{if } x\in S''_1.\\
\end{cases}$$ Moreover, for $x\in S'_1$ we have that $$|\nabla y_{\alpha,\beta}| = \left(\frac{\alpha^2}{\left\vert x_1\right\vert^{2(1-\alpha)}} + \frac{\beta^2}{s^{2\beta}}\left\vert x_1\right\vert^{2\beta - 2} \left\vert x_2\right\vert^2 + \frac{\left\vert x_1\right\vert^{2\beta}}{s^{2\beta}}\right)^{\frac{1}{2}}
\leq \frac{\alpha}{\left\vert x_1\right\vert^{1-\alpha}} + \beta \frac{ s^{1 - \beta }}{|x_{1}|^{1 - \beta }} +1.$$ Thus, $(\det\nabla y_{\alpha,\beta})^{-q}+|\nabla y_{\alpha,\beta}|^p\in L^1(S_1)$ as long as $$\begin{aligned}
\label{p-q-alpha}
(1-\alpha -\beta)q>-1, \quad p(\alpha-1)>-1, \quad p(\beta - 1) >-1. \end{aligned}$$ Such restrictions on $\alpha$ and $\beta$ can be satisfied whenever $q \in (1, \frac{p}{p-2})$ by choosing $\alpha\in(\frac{p-1}{p},1)$ and $\beta \in (\alpha,1]$ accordingly.
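For orientation, here is one admissible choice of parameters (purely illustrative; nothing in the sequel depends on it): for $p = 4$ and $q = \frac{3}{2} \in \big(1, \frac{p}{p-2}\big) = (1,2)$, take $\alpha = \frac{4}{5}$ and $\beta = \frac{17}{20}$. Then $$p(\alpha - 1) = -\tfrac{4}{5} > -1, \qquad p(\beta - 1) = -\tfrac{3}{5} > -1, \qquad (1-\alpha-\beta)\, q = -\tfrac{13}{20} \cdot \tfrac{3}{2} = -\tfrac{39}{40} > -1,$$ so that [\[p-q-alpha\]](#p-q-alpha){reference-type="eqref" reference="p-q-alpha"} holds, while $\alpha > \frac{p-1}{p} = \frac{3}{4}$ and $\beta \in (\alpha, 1]$.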
We now estimate the behavior of the energy $E_{s} (y_{\alpha, \beta})$ as $s\to 0$. Below, the symbol $\lesssim$ stands for an inequality up to a positive multiplicative constant independent of $s\in (0,1]$ and $x\in S_1$. We further write $\approx$ if such inequalities hold in both directions. By minimality of the identity matrix, by definition of $W$, and by construction of $y_{\alpha,\beta}$ on $S'_{1}$, we have that $$\label{W-upperboundcross}
\begin{aligned}
0& \leq W(\nabla y_{\alpha,\beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \lesssim |\nabla y_{\alpha,\beta}|^p+\frac{1}{(\det\nabla y_{\alpha,\beta})^q}
\\
&
\lesssim \left\vert x_1\right\vert^{p(\alpha-1)} + \beta s^{p(1 - \beta)} |x_{1} |^{p( \beta-1) } + 1 + s^{\beta q} |x_1|^{(1-\alpha-\beta) q} .
\end{aligned}$$ Moreover, since $0<\alpha<1$, the mean value theorem gives $$\frac{1-s^\alpha}{1-s}-1 \approx s^{\alpha}+ s \approx s^{\alpha}.$$ This means that on $S_1''$ it holds that $\left\vert\nabla y_{\alpha,\beta} -{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\right\vert\lesssim s^{\alpha}$ uniformly in $x$. By Taylor expansion of $W$ at ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}$ (where $DW({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})=0$ by definition of $\gamma$), we infer that $$\begin{aligned}
\label{W-upperboundrest}
0\leq W(\nabla y_{\alpha,\beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}})\lesssim s^{2\alpha}\quad\text{on}~S_1''.\end{aligned}$$
Combining [\[W-upperboundcross\]](#W-upperboundcross){reference-type="eqref" reference="W-upperboundcross"} and [\[W-upperboundrest\]](#W-upperboundrest){reference-type="eqref" reference="W-upperboundrest"}, we obtain the following upper bound for the energy for all sufficiently small $s$ as long as [\[p-q-alpha\]](#p-q-alpha){reference-type="eqref" reference="p-q-alpha"} holds: $$\begin{aligned}
\label{e:estimate-S1}
\int_{S_{1}} & W(\nabla y_{\alpha,\beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx
\\
& \lesssim \int_{S'_1}
\left(\left\vert x_1\right\vert^{p(\alpha-1)}+ s^{p(1 - \beta)} |x_{1} |^{p( \beta-1) } + s^{\beta q} |x_1|^{(1-\alpha-\beta) q} +1\right)\,dx + \int_{S''_1} s^{2\alpha} \,dx \nonumber\\
& = \frac{4s \cdot s^{p\alpha-p+1}}{1+p(\alpha-1)}+ \frac{4s^{2}}{1+p(\beta-1)}+ \frac{4s \cdot s^{\beta q} \cdot s^{1+(1-\alpha -\beta) q}}{1+(1-\alpha-\beta) q}+4s^2+4s(1-s)s^{2\alpha} \nonumber \\
&\vphantom{\int_{S'_{1}}} \approx s^{p\alpha-p+2}+s^2+s^{2+(1-\alpha) q}+s^{2\alpha+1}. \nonumber\end{aligned}$$ Setting $\gamma:=\min\{p\alpha - p +2, 2+(1-\alpha) q, 2\alpha+1\}$, since $\frac{p-1}{p}<\alpha<\beta \leq 1$ we have that $\gamma>1$. By [\[e:estimate-S1\]](#e:estimate-S1){reference-type="eqref" reference="e:estimate-S1"} we conclude that $$\label{e:energy-S1}
\int_{S_{1}} W(\nabla y_{\alpha,\beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx \lesssim s^\gamma.$$
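Continuing with the illustrative choice $p=4$, $q=\frac{3}{2}$, $\alpha=\frac{4}{5}$, $\beta=\frac{17}{20}$ from above, the three exponents entering $\gamma$ are $$p\alpha - p + 2 = \tfrac{6}{5}, \qquad 2 + (1-\alpha)q = \tfrac{23}{10}, \qquad 2\alpha + 1 = \tfrac{13}{5},$$ so $\gamma = \frac{6}{5} > 1$ and the contribution of $S_{1}$ is of order $s^{6/5} = o(s)$.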
For $x \in S_{2}$, we extend $y_{\alpha,\beta}$ with a suitable shifted copy. With a slight abuse of notation, we set $$y_{\alpha,\beta}(x):=Q y_{\alpha,\beta} (Q^{T}(x - \xi)),$$ where $Q$ and $\xi$ are given by [\[def:omega_s\]](#def:omega_s){reference-type="eqref" reference="def:omega_s"}. It is straightforward that $y_{\alpha,\beta}$ is injective on $\Omega_s\setminus (\{0\}\times (-s,s)\cup (4-s,4+s)\times \{0\})$ while $y_{\alpha,\beta}(\{0\}\times (-s,s)) = y_{\alpha,\beta}((4-s,4+s)\times \{0\}) = \{0\}$. By the change-of-variables formula for Sobolev mappings, $y_{\alpha,\beta}$ satisfies [\[CN\]](#CN){reference-type="eqref" reference="CN"}. Clearly, the estimate [\[e:energy-S1\]](#e:energy-S1){reference-type="eqref" reference="e:energy-S1"} holds true also on $S_{2}$. This concludes the proof of [\[e:step1\]](#e:step1){reference-type="eqref" reference="e:step1"} for $q \in (1, \frac{p}{p-2})$.
To cover $q=\frac{p}{p-2}$ we need to consider a slightly different example. With the same notation introduced above for $S_{1}'$ and $S_{1}''$, we set $$\hat{y}_{\alpha, \beta} (x):=\left(\begin{array}{c} \displaystyle\frac{x_1}{\left\vert x_1\right\vert^{1-\alpha} \left\vert\ln{ | x_1 | }\right\vert} \\[5mm]
\displaystyle \left\vert\frac{\ln{s}}{\ln{| x_1 | }}\right\vert^2 \cdot \frac{\left\vert x_1\right\vert^{\beta}}{s^{\beta}} x_2 \end{array}\right)
\quad \text{for} \: x\in S_1',$$ and for $x\in S_1''$ $$\hat{y}_{\alpha, \beta} (x):=\left(\begin{array}{c} \displaystyle \frac{x_{1}}{|x_{1}|} \, \left(\frac{1-\frac{s^\alpha}{\left\vert\ln s\right\vert}}{1-s} ( |x_1| - 1) + 1\right) \\[4mm] x_2 \end{array}\right).$$
It is straightforward to check that in $S'_1$ $$|\nabla \hat{y}_{\alpha, \beta} |^p \lesssim
\frac{\left\vert x_1\right\vert^{p(\alpha - 1)}}{ \left\vert\ln{ | x_1 | }\right\vert^p }
+ \frac{ \left\vert x_1\right\vert^{p(\alpha - 1)}}{ \left\vert\ln{ | x_1 | }\right\vert^{2p}} + \frac{|x_{1}|^{p(\beta-1)} | \ln s|^{2p} }{s^{(\beta-1) p} | \ln |x_{1}| |^{2p}} + \frac{|x_{1}|^{p(\beta-1)} | \ln s|^{2p} }{s^{(\beta-1) p} | \ln |x_{1}| |^{3p}}
+ \frac{|x_{1}|^{\beta p} | \ln s|^{2p}}{ s^{\beta p} | \ln | x_{1}| |^{2p}}$$ and $$(\det\nabla \hat{y}_{\alpha, \beta} )^{-q} \lesssim \frac{s^{q\beta}}{|\ln s|^{2q}} \left\vert x_1\right\vert^{q(1-\alpha-\beta)} | \ln |x_{1}| |^{3q}.$$ We have $(\det\nabla \hat{y}_{\alpha, \beta} )^{-q}+|\nabla \hat{y}_{\alpha, \beta} |^p\in L^1(S_1)$ if $\beta \geq \alpha$, $(1-\alpha -\beta)q\geq -1$ and $p(\alpha-1)\geq -1$. Moreover, if $\beta = \alpha = \frac{p-1}{p}$ and $q=\frac{p}{p-2}$, then $$\begin{aligned}
\int_{S_{1}} W(\nabla \hat{y}_{\alpha,\beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx &
= o(s) \qquad \text{as } s \searrow 0. \end{aligned}$$ On $S_{1}''$ we can repeat the argument of [\[W-upperboundrest\]](#W-upperboundrest){reference-type="eqref" reference="W-upperboundrest"}. This concludes the proof of the proposition. ◻
*Proof of Proposition [Proposition 6](#p:lav-2d-2){reference-type="ref" reference="p:lav-2d-2"}.* With an explicit construction of a competitor $y \in W^{1, \infty}(\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_{s}$ as $$y(x) := \left\{
\begin{array}{ll}
x & \text{for $x \in S_{1}$,}\\[2mm]
x - \xi +
\left(
\begin{matrix}
\frac{1+s}{1-2s}(1-|x_2|)\\
0
\end{matrix}
\right)
& \text{for $x \in S_{2}$},
\end{array}\right.$$ one can show that there exists $M>0$ such that $$\min_{y\in W^{1,\infty} (\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_s} E_s(y)\leq Ms.$$
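To make the origin of this linear upper bound explicit (a short verification sketch; the specific constant $M$ below is our own and not optimized), write $c_{s} := \frac{1+s}{1-2s}$ and note that $\nabla y$ equals the identity on $S_{1}$, while a.e. on $S_{2}$ $$\nabla y(x) = \begin{pmatrix} 1 & -c_{s}\operatorname{sgn}(x_{2}) \\ 0 & 1 \end{pmatrix}, \qquad \det \nabla y = 1, \qquad |\nabla y|^{2} = 2 + c_{s}^{2}.$$ Since the determinant terms in $W$ cancel, $W(\nabla y) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) = (2+c_{s}^{2})^{\frac{p}{2}} - 2^{\frac{p}{2}}$ on $S_{2}$, whence $E_{s}(y) = 4s \big( (2+c_{s}^{2})^{\frac{p}{2}} - 2^{\frac{p}{2}} \big) \leq Ms$, e.g. with $M := 4\big( (2+\tfrac{25}{4})^{\frac{p}{2}} - 2^{\frac{p}{2}} \big)$ for $s \leq \frac{1}{4}$. Moreover, the horizontal shift vanishes at $|x_{2}| = 1$, so $y = y_{0}$ on $\Gamma_{s}$, and one checks that $y$ is injective, so that [\[CN\]](#CN){reference-type="eqref" reference="CN"} holds.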
Let us now fix $y \in W^{1, \infty}(\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_{s}$ with finite energy $E_{s} (y)$. We claim that for a.e. $\sigma \in (-s, s)$ at least one of the following inequalities is satisfied:
[\[neq:stretching\]]{#neq:stretching label="neq:stretching"} $$\begin{aligned}
& \int_{-1}^{1} | \partial_{x_{1}} y(x_{1}, \sigma) | \, dx_{1} \geq 2\sqrt{2} ( 1 - s), \label{e:either-1}\\
& \int_{-1}^{1} | \partial_{x_{2}} y(4 + \sigma, x_{2}) | \, dx_{2} \geq 2\sqrt{2} ( 1 - s). \label{e:either-2}
\end{aligned}$$
For $\sigma \in (-s, s)$, let us denote by $T_1^{\sigma} := (-1, 1) \times \{\sigma\}$ and $T^{\sigma}_2:= \{4 + \sigma\} \times (-1, 1)$ the sections of each stripe. By the boundary conditions and continuity of $y$, for every $\sigma, \zeta \in (-s, s)$ the curve $y(T^{\sigma}_1)$ has to intersect the line $\{z\in\mathbb R^2\mid z_1=\zeta\}$. Similarly, $y(T^{\sigma}_{2})$ has to intersect $\{z\in\mathbb R^2\mid z_2 = \zeta\}$ (see also Fig. [\[f:1\]](#f:1){reference-type="ref" reference="f:1"}). For $\sigma \in (-s, s)$, we distinguish two cases:
1. [\[i\]]{#i label="i"}$y(T^{\sigma}_1)$ intersects $\{\sigma\} \times ( (-\infty, -1] \cup [1, +\infty))$ or $y(T^{\sigma}_2)$ intersects $( (-\infty, -1] \cup [1, +\infty)) \times \{\sigma\}$;
2. [\[ii\]]{#ii label="ii"} $y(T^{\sigma}_1)$ and $y(T^{\sigma}_2)$ only intersect $\{\sigma\} \times (-1, 1)$ and $(-1, 1) \times \{\sigma\}$, respectively.
Denoting by $K (x,y(x) )$ the distortion of $y\in W^{1, \infty}(\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_{s}$ at $x \in \Omega_{s}$, $$\label{distortion2d}
K (x,y(x) ) := \frac{| \nabla y (x) |^{2}}{\det \nabla y(x)},$$ we notice that $y$ satisfies $$\int_{\Omega_s} K^q (x,y) \, dx \lesssim \|\nabla y\|_{L^\infty}^{2q} \, E_s(y) <\infty,$$ with $q>d-1=1$. Since $y$ is nonconstant, due to the boundary data, by the Reshetnyak theorem [@ManVill1998] for mappings of finite distortion, $y$ is open and discrete. Moreover, any open map that is injective almost everywhere is indeed injective everywhere (as pointed out in [@GraKruMaiSte19a Lemma 3.3]). Hence, the case [\[ii\]](#ii){reference-type="eqref" reference="ii"} is impossible, and the general deformation is pictured in Fig. [\[f:around\]](#f:around){reference-type="ref" reference="f:around"}.
Therefore, for every $\sigma \in (-s, s)$ we are in the case [\[i\]](#i){reference-type="eqref" reference="i"}. For every $\sigma \in (-s, s)$ such that the integrals in [\[neq:stretching\]](#neq:stretching){reference-type="eqref" reference="neq:stretching"} are well defined, we may assume without loss of generality that $y(T^{\sigma}_{1}) \cap [ \{\sigma\} \times [1, +\infty) ] \neq \varnothing$ (the other cases can be treated similarly), and let $\overline{x}_{1} \in (-1, 1)$ be such that $y( \overline{x}_{1}, \sigma) \in y(T^{\sigma}_{1}) \cap [ \{\sigma\} \times [1, +\infty)]$. Since the shortest path connecting $(-1, \sigma)$ to the point $y( \overline{x}_{1}, \sigma)$ is the segment, by the boundary conditions of $y$ we have that $$\begin{aligned}
\label{e:length}
\int_{-1}^{\overline{x}_{1}} | \partial_{x_{1}} y(x_{1}, \sigma) | \, dx_{1} & \geq \sqrt{ 2} (1 - \sigma) \geq \sqrt{2} ( 1 - s). \end{aligned}$$ With the same argument, we deduce that $$\begin{aligned}
\label{e:length-2}
\int_{\overline{x}_{1}}^{1} | \partial_{x_{1}} y(x_{1}, \sigma) | \, dx_{1} & \geq \sqrt{2} ( 1 - s). \end{aligned}$$ Combining [\[e:length\]](#e:length){reference-type="eqref" reference="e:length"}--[\[e:length-2\]](#e:length-2){reference-type="eqref" reference="e:length-2"} we obtain [\[e:either-1\]](#e:either-1){reference-type="eqref" reference="e:either-1"}. If $y(T^{\sigma}_{2})$ intersects $((-\infty, -1] \cup [1, +\infty)) \times \{\sigma\}$, the same argument leads to [\[e:either-2\]](#e:either-2){reference-type="eqref" reference="e:either-2"}.
We are now in a position to conclude for [\[e:step2\]](#e:step2){reference-type="eqref" reference="e:step2"}. We define the sets $$\begin{aligned}
A &:= \left\{ x_2 \in (-s,s) \,\left|\, \text{\eqref{e:either-1} is satisfied}\right.\right\},\\
B & := \left\{ x_{1} \in (-s, s) \setminus A\,\left|\, \text{\eqref{e:either-2} is satisfied}\right.\right\}.\end{aligned}$$ In view of [\[neq:stretching\]](#neq:stretching){reference-type="eqref" reference="neq:stretching"}, we have that $A \cup B = (-s, s)$, up to a set of $\mathcal{L}^{1}$-measure zero. Moreover, $A \cap B = \varnothing$. By [\[Wlowerbound\]](#Wlowerbound){reference-type="eqref" reference="Wlowerbound"} we estimate (recall that $\| \cdot\|_{2}$ denotes the operator norm) $$\begin{aligned}
\label{e:en-estimate}
\int_{\Omega_{s}} & (W(\nabla y) - W( {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) )\, d x
\\
&
= \int_{-s}^{s} \int_{-1}^{1} (W(\nabla y) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) )\, d x_{1} \, dx_{2} + \int_{4-s}^{4+s} \int_{-1}^{1} (W(\nabla y) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) )\, d x_{2} \, dx_{1} \nonumber
\\
&
\geq c \int_{A} \int_{-1}^{1} \left\vert\left\|\nabla y\right\|_2-1\right\vert^p + (\left\|\nabla y\right\|_2-1)^2\, d x_{1} \, dx_{2} \nonumber
\\
&
\qquad + c \int_{4 + B} \int_{-1}^{1} \left\vert\left\|\nabla y\right\|_2-1\right\vert^p + (\left\|\nabla y\right\|_2-1)^2\,d x_{2} \, dx_{1}. \nonumber
\end{aligned}$$ Thanks to the Jensen inequality, to [\[neq:stretching\]](#neq:stretching){reference-type="eqref" reference="neq:stretching"}, and to the definition of $A$ and $B$, we continue in [\[e:en-estimate\]](#e:en-estimate){reference-type="eqref" reference="e:en-estimate"} with $$\begin{aligned}
\int_{\Omega_{s}} & (W(\nabla y) - W( {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) ) d x
\\
&
\geq c \int_{A} \int_{-1}^{1} \left\vert| \partial_{x_{1}} y| - 1\right\vert^p + (| \partial_{x_{1}} y| - 1)^2\, d x_{1} \, dx_{2} \nonumber
\\
&
\qquad + c \int_{4 + B} \int_{-1}^{1} \left\vert| \partial_{x_{2}} y| - 1\right\vert^p + (| \partial_{x_{2}} y| - 1)^2\,d x_{2} \, dx_{1} \nonumber
\\
&
\geq c \int_{A} \left\vert\int_{-1}^{1} | \partial_{x_{1}} y| \, dx_{1} - 1\right\vert^{p} + \Big( \int_{-1}^{1} | \partial_{x_{1}} y| \, dx_{1} - 1 \Big)^{2} \, dx_{2} \nonumber
\\
&
\qquad + c \int_{4 + B} \left\vert \int_{-1}^{1} | \partial_{x_{2}} y|\, dx_{2} - 1\right\vert^p + \Big(\int_{-1}^{1} | \partial_{x_{2}} y| \, dx_{2} - 1\Big)^2 \, dx_{1} \nonumber
\\
&\geq \,c\, (|A| + |B|) \big[ \big( 2\sqrt{2} ( 1 - s) - 1\big)^{p} +\big( 2\sqrt{2} ( 1 - s ) - 1 \big)^{2} \big] \geq m s + o(s)\end{aligned}$$ for some positive constant $m$ independent of $y$ and of $s$. This concludes the proof of [\[e:step2\]](#e:step2){reference-type="eqref" reference="e:step2"}. ◻
*Remark 7*. The Lavrentiev phenomenon still occurs if we replace $W^{1, \infty} (\Omega_{s}; \mathbb R^{2})$ with $W^{1,r} (\Omega_{s}; \mathbb R^{2})$ for $r>\frac{2q}{q-1}$. In this case, we have that for $y \in W^{1,r} (\Omega_{s}; \mathbb R^{2})\cap \mathcal{Y}_{s}$ with $E_{s} (y) <+\infty$, the distortion coefficient $K(x,y)$ defined in [\[distortion2d\]](#distortion2d){reference-type="eqref" reference="distortion2d"} belongs to $L^{\eta} (\Omega_{s})$ for $\eta:=\frac{rq}{2q+r}$, $\eta \in (1, q)$. Indeed, by the Hölder inequality it holds that $$\label{neq:distprtion-int}
\int_{\Omega_{s}} \left(\frac{|\nabla y |^2}{\det \nabla y }\right)^\eta \, dx \leq
\left(\int_{\Omega_{s}} |\nabla y |^{2\eta\frac{q}{q-\eta}} \, dx\right)^{\frac{q-\eta}{q}}
\left(\int_{\Omega_{s}} \frac{dx}{(\det \nabla y)^{q}}\right)^{\frac{\eta}{q}}
<\infty,$$ since $r=\frac{2q\eta}{q-\eta}$ and $E_{s} (y) <+\infty$. This implies that any competitor $y \in W^{1,r} (\Omega_{s}; \mathbb R^{2})\cap \mathcal{Y}_{s}$ with finite energy must satisfy [\[i\]](#i){reference-type="eqref" reference="i"} for every $\sigma \in (-s, s)$. Then, the proof of the lower bound of $E_{s}(y)$ proceeds as in the $W^{1,\infty}$-case.
*Remark 8*. The argument in Remark [Remark 7](#rem:1){reference-type="ref" reference="rem:1"} also shows that the two-dimensional example in Proposition [Proposition 5](#p:lav-2d-1){reference-type="ref" reference="p:lav-2d-1"} is optimal in the following sense: if $p>2$ and $q > \frac{p}{p-2}$, then every $y \in W^{1, p} (\Omega_{s}; \mathbb R^{2}) \cap \mathcal{Y}_{s}$ with finite energy satisfies $K_{y} \in L^{\eta} (\Omega_{s})$ for $\eta = \frac{pq}{2q+p} > 1$ (see [\[neq:distprtion-int\]](#neq:distprtion-int){reference-type="eqref" reference="neq:distprtion-int"}). Hence, $y$ has to be injective. This would rule out the example constructed in the proof of Proposition [Proposition 5](#p:lav-2d-1){reference-type="ref" reference="p:lav-2d-1"}.
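For orientation (the numbers below are chosen only to illustrate the thresholds): for $p = 4$ the critical value is $\frac{p}{p-2} = 2$. With $q = \frac{3}{2} < 2$ the construction of Proposition [Proposition 5](#p:lav-2d-1){reference-type="ref" reference="p:lav-2d-1"} applies, whereas with $q = 3 > 2$ every finite-energy competitor $y \in W^{1,p}(\Omega_s;\mathbb R^2) \cap \mathcal{Y}_s$ has distortion $K_{y} \in L^{\eta}(\Omega_{s})$ with $$\eta = \frac{pq}{2q+p} = \frac{12}{10} = \frac{6}{5} > 1,$$ and is therefore injective everywhere, so the self-intersecting competitor is excluded.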
# The Lavrentiev phenomenon in dimension three {#sec:Lav3d}
In this section, we show a three-dimensional generalization of the Lavrentiev phenomenon proven in Theorem [Theorem 4](#thm:lav2d){reference-type="ref" reference="thm:lav2d"}. The example is created by simply thickening the two-dimensional version in another direction, corresponding to the variable $x_1$ below, while $(x_2,x_3)$ correspond to the two variables of the 2D example.
For $s \in (0, 1)$, the reference configuration $\Omega_{s}$ consists now of the union of two thin cuboids of width $s$. Namely, we write $$\label{def:omega_s-3D}
\begin{split}
\Omega_s & :=S_1 \cup S_2,\\
S_1 & :=(-1,1)\times (-1, 1) \times (-s,s) , \qquad S_2:=\xi + Q S_1\, , \\
Q & :=\left(\begin{array}{rrr}1 & 0 & 0\\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{array}\right), \qquad \xi:=\left(\begin{array}{c} 0 \\ 4 \\ 0 \end{array}\right).
\end{split}$$ We consider the Dirichlet datum $$y_0(x):=\begin{cases}
x & \text{for $x\in \overline{S_1}$,}\\
x-\xi &\text{for $x\in \overline{S_2}$,}
\end{cases}$$ and the set of admissible deformations $$\mathcal{Y}_s:=\left\{y\in W_+^{1,p}(\Omega_s;\mathbb R^3)\,\left|\,\text{\eqref{CN} holds and $y=y_0$ on $\Gamma_s$}\right.\right\},$$ where $$\Gamma_s:= \big( [-1, 1] \times \{-1,1\}\times [-s,s] \big) \cup \big( [-1, 1] \times [4-s,4+s] \times \{-1,1\} \big) \subset \partial\Omega_s.$$
Similar to Theorem [Theorem 4](#thm:lav2d){reference-type="ref" reference="thm:lav2d"}, we have the Lavrentiev phenomenon in the following form.
**Theorem 9**. *For every $p \in (3, 4)$ and every $q \in (2, \frac{p}{p-2})$ there exists $\overline{s} \in (0, 1]$ such that for every $s \in (0, \overline{s}]$ the following holds: $$\begin{aligned}
\label{e:l2-3d}
\inf_{y \in W^{1, \infty}(\Omega_{s}; \mathbb R^{3}) \cap \mathcal{Y}_{s}} \, E_{s}(y) > \inf_{y \in \mathcal{Y}_{s}} \, E_{s}(y) .
\end{aligned}$$*
The proof of Theorem [Theorem 9](#thm:lav3d){reference-type="ref" reference="thm:lav3d"} is subdivided into two propositions given below. Compared to the two-dimensional case, we now have to face an additional difficulty, because "fully going around" (case [\[i\]](#i){reference-type="eqref" reference="i"} in the proof of Proposition [Proposition 6](#p:lav-2d-2){reference-type="ref" reference="p:lav-2d-2"}) is no longer the only way the two pieces can avoid each other after deformation. In principle, it should be possible to generalize our three-dimensional example to any dimension $d\geq 3$, but for the sake of simplicity, we will stick to $d=3$, the most relevant case in practice.
**Proposition 10**. *For every $p \in (3, 4)$ and every $q \in (2, \frac{p}{p-2})$, $\inf_{y \in \mathcal{Y}_{s}} \, E_{s}(y)=o(s)$, i.e., $$\label{e:3d-step1}
\lim_{s\searrow 0} \frac{1}{s}\inf_{y \in \mathcal{Y}_{s}} \, E_{s}(y) =0.$$*
**Proposition 11**. *For every $p \in (3, 4)$ and every $q \in (2, \frac{p}{p-2})$, there exists $\overline{s} \in (0, 1)$ such that for every $s \in (0, \overline{s}]$ $$\label{e:3d-step2}
ms\leq \inf_{y \in W^{1, \infty}(\Omega_{s}; \mathbb R^{3}) \cap \mathcal{Y}_{s}} \, E_{s}(y) \leq Ms$$ with constants $0<m<M<+\infty$ independent of $s$.*
We start with the proof of Proposition [Proposition 10](#p:lav3d-1){reference-type="ref" reference="p:lav3d-1"}.
*Proof of Proposition [Proposition 10](#p:lav3d-1){reference-type="ref" reference="p:lav3d-1"}.* As in the proof of Proposition [Proposition 5](#p:lav-2d-1){reference-type="ref" reference="p:lav-2d-1"}, it is enough to construct a sequence of competitors $y^{s} \in \mathcal{Y}_{s}$ satisfying $E_{s} (y^{s}) =o(s)$ as $s \searrow 0$. To this purpose, let us fix $\alpha, \beta \in (0, 1)$ (to be determined later on) and let us define $S'_{1}:= \{ x \in S_{1}: | x_{2} | \leq s\}$, $S'_{2}:= \{x \in S_{2}: | x_{3}| \leq s\}$, and $S''_{i} := S_{i} \setminus S'_{i}$ for $i = 1, 2$.
In order to prove the asymptotic [\[e:3d-step1\]](#e:3d-step1){reference-type="eqref" reference="e:3d-step1"} we define the map $y_{\alpha, \beta}\colon \Omega_{s} \to \mathbb R^{3}$ as $$\begin{aligned}
y_{\alpha, \beta} (x) & := \left( \begin{array}{ccc}
x_{1}
\\[1mm]
\displaystyle \frac{x_{2}}{| x_{2} |^{1 - \alpha}}
\\[5mm]
\displaystyle\frac{|x_{2}|^{\beta}}{s^{\beta}} \, x_{3}
\end{array}\right) \qquad \text{for $x \in S'_{1}$},\\[2mm]
y_{\alpha, \beta} (x) & := \left( \begin{array}{ccc}
x_{1}
\\[2mm]
\displaystyle \frac{x_{2}}{|x_{2}|} \, \left( \frac{1-s^\alpha}{1-s} ( | x_{2} | - 1 ) + 1\right)
\\[2mm]
x_{3}
\end{array}\right) \qquad \text{for $x \in S''_{1}$},\\[2mm]
y_{\alpha, \beta} (x) & :=Q y_{\alpha, \beta} (Q^{T}( x - \xi ) ) \qquad \text{for $x \in S_{2}$}.\end{aligned}$$ To show that $y_{\alpha, \beta} \in \mathcal{Y}_{s}$ for $s$ small, we have to show that $\nabla y_{\alpha, \beta} \in L^{p}(\Omega_{s}; \mathbb R^{3\times 3})$. We focus on $S_{1}$, as the definition of $y_{\alpha, \beta}$ leads to the same computations on $S_{2}$. By construction of $y_{\alpha, \beta}$, on $S_{1}$ we have that $$\begin{aligned}
\nabla y_{\alpha, \beta} (x) & = \left(
\begin{array}{ccc}
1 & 0 & 0\\
0 & \alpha | x_{2}|^{\alpha - 1} & 0\\
0 & \frac{\beta}{s^{\beta}} |x_{2}|^{\beta - 2} x_{2} x_{3} & \frac{|x_{2}|^{\beta}}{s^{\beta}}
\end{array}\right) \qquad \text{for $x \in S'_{1}$},\\[2mm]
\nabla y_{\alpha, \beta} (x) & = \left(
\begin{array}{ccc}
1 & 0 & 0\\
0 & \frac{1 - s^{\alpha}}{1 - s} & 0\\
0 & 0 & 1
\end{array}\right) \qquad \text{for $x \in S''_{1}$}.\\\end{aligned}$$ Imposing $\nabla y_{\alpha, \beta} \in L^{p}(S_{1};\mathbb R^{3\times 3})$ implies that $$\begin{aligned}
\label{e:alphabeta}
\alpha, \beta > 1 - \frac{1}{p}.\end{aligned}$$
We notice that $$\begin{aligned}
\det \nabla y_{\alpha, \beta} (x) & = \frac{\alpha | x_{2}|^{\alpha + \beta - 1} }{s^{\beta} }\qquad \text{in $S'_{1}$} ,\\
\det \nabla y_{\alpha, \beta}(x) & = \frac{ 1- s^{\alpha} }{1 - s}\qquad \text{in $S''_{1}$},\end{aligned}$$ so that $\det \nabla y_{\alpha, \beta} >0$ on $\Omega_{s}$. As in the proof of Theorem [Theorem 4](#thm:lav2d){reference-type="ref" reference="thm:lav2d"}, $y_{\alpha, \beta}$ is injective on $\Omega_{s} \setminus \big( (-1, 1) \times \{0 \} \times (-s,s) \cup (-1, 1) \times (4-s,4+s) \times \{0 \} \big)$ while $y_{\alpha, \beta} ((-1, 1) \times \{0 \}\times (-s,s)) = y_{\alpha, \beta} ((-1, 1) \times (4-s,4+s)\times \{0\}) = (-1, 1) \times \{0\} \times \{0\}$. Thus, $y_{\alpha, \beta}$ satisfies [\[CN\]](#CN){reference-type="eqref" reference="CN"} and $y_{\alpha, \beta} \in \mathcal{Y}_{s}$ for $\alpha, \beta \in (0, 1)$ such that [\[e:alphabeta\]](#e:alphabeta){reference-type="eqref" reference="e:alphabeta"} holds.
Imposing the integrability of $(\det \nabla y_{\alpha, \beta})^{-q}$ on $S_{1}$, we deduce that necessarily $$\begin{aligned}
\label{e:alphabeta-2}
( 1- \alpha - \beta ) q >-1.\end{aligned}$$ Combining [\[e:alphabeta\]](#e:alphabeta){reference-type="eqref" reference="e:alphabeta"} and [\[e:alphabeta-2\]](#e:alphabeta-2){reference-type="eqref" reference="e:alphabeta-2"}, we infer that for any choice of $p \in (3, 4)$ and of $q \in (2, \frac{p}{p-2})$, we can find $\alpha, \beta \in (0, 1)$ such that $y_{\alpha, \beta} \in \mathcal{Y}_{s}$ with $(\det \nabla y_{\alpha, \beta} )^{-q} \in L^{1}(\Omega_{s})$. A direct estimate of $W(\nabla y_{\alpha, \beta})$ on $S'_{1}$ yields that $$\begin{aligned}
\label{e:W3d}
0 &\leq \int_{S'_{1}} W(\nabla y_{\alpha, \beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx
\lesssim s^{2} + s^{(\alpha -1)p + 2} + s^{\beta q + 1} s^{(1 - \alpha - \beta) q + 1} . \end{aligned}$$ From [\[e:alphabeta\]](#e:alphabeta){reference-type="eqref" reference="e:alphabeta"}--[\[e:W3d\]](#e:W3d){reference-type="eqref" reference="e:W3d"} we deduce that there exists $\rho \in (0, 1)$ (depending on $\alpha, \beta$ but not on $s$) such that $$\begin{aligned}
\label{e:W3d-2}
0 \leq \int_{S'_{1}} W(\nabla y_{\alpha, \beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx & \lesssim s^{1 + \rho}.\end{aligned}$$ As for $S''_{1}$, we may use the estimate of [\[W-upperboundrest\]](#W-upperboundrest){reference-type="eqref" reference="W-upperboundrest"} and obtain that $$\label{e:W3d-3}
0 \leq \int_{S''_{1}} W(\nabla y_{\alpha, \beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx \lesssim s^{1 + 2\alpha}.$$ Defining $\delta := \min\{ \rho, 2\alpha\}$ we infer that $$\label{e:W3d-4}
0 \leq \int_{S_{1}} W(\nabla y_{\alpha, \beta}) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx \lesssim s^{1 + \delta}.$$ Arguing in the same way, estimate [\[e:W3d-4\]](#e:W3d-4){reference-type="eqref" reference="e:W3d-4"} can be obtained on $S_{2}$, leading to [\[e:3d-step1\]](#e:3d-step1){reference-type="eqref" reference="e:3d-step1"}. This concludes the proof of the proposition. ◻
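For instance (an illustrative choice of the parameters used in the proof above, again not needed in the sequel): for $p = \frac{7}{2}$ and $q = \frac{9}{4} \in \big(2, \frac{p}{p-2}\big) = \big(2, \frac{7}{3}\big)$ one may take $\alpha = \beta = \frac{18}{25}$, since then $$\alpha = \beta = \tfrac{18}{25} > 1 - \tfrac{1}{p} = \tfrac{5}{7}, \qquad (1 - \alpha - \beta)\, q = -\tfrac{11}{25} \cdot \tfrac{9}{4} = -\tfrac{99}{100} > -1,$$ so that [\[e:alphabeta\]](#e:alphabeta){reference-type="eqref" reference="e:alphabeta"} and [\[e:alphabeta-2\]](#e:alphabeta-2){reference-type="eqref" reference="e:alphabeta-2"} are satisfied.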
The following two lemmas show some properties of deformations $y \in W^{1, \infty} (\Omega_{s};\mathbb R^{3}) \cap \mathcal{Y}_{s}$ with low energy, which will be needed to establish [\[e:3d-step2\]](#e:3d-step2){reference-type="eqref" reference="e:3d-step2"}. In the sequel, we denote by $\pi\colon \mathbb R\to [-1, 1]$ the projection of $\mathbb R$ onto the interval $[-1, 1]$, defined as $$\pi(t):=
\begin{cases}
t, & \text{if } -1\leq t\leq 1,\\
-1, & \text{if } t< -1,\\
1, & \text{if } t> 1.\\
\end{cases}$$
**Lemma 12**. *There exists $M>0$ such that for every $s \in (0, 1)$ $$\label{e:rough-estimate}
\min_{y \in W^{1, \infty}(\Omega_{s};\mathbb R^{3}) \cap \mathcal{Y}_{s}} \, E_{s} (y) \leq Ms.$$*
*Proof.* The claim follows easily from a direct construction of a competitor $y \in W^{1, \infty}(\Omega_{s} ; \mathbb R^{3}) \cap \mathcal{Y}_{s}$. For instance, we define $$y(x) := \left\{ \begin{array}{ll}
x & \text{for $x \in S_{1}$,}\\[2mm]
x - \xi +
\left(
\begin{matrix}
0\\
\frac{1+s}{1-2s}(1-|x_3|)\\
0
\end{matrix}
\right)
& \text{for $x \in S_{2}$}.
\end{array}\right.$$ Then, it is clear that $E_{s} (y) \leq Ms$ for some $M>0$ independent of $s$. ◻
**Lemma 13**. *Let $s \in (0, 1)$, $N>0$, $\gamma := 1 - \frac{2}{p}$, $\sigma \in (-s, s)$, and $y \in W^{1, \infty} (\Omega_{s};\mathbb R^{3}) \cap \mathcal{Y}_{s}$ be such that $$\begin{aligned}
& \int_{-1}^{1} \int_{-1}^{1} W(\nabla y( x_{1}, x_{2}, \sigma)) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx_{1} \, dx_{2} \leq N \label{e:EN}.
\end{aligned}$$ Then, there exists $\overline{c}_{N, p}>0$ depending only on $p$ and $N$ (but not on $s$) such that for every $\varepsilon>0$ the following holds: if $$\begin{aligned}
& \int_{-1}^{1} \Bigg| \int_{-1}^{1} \big( | \partial_{2} y | (x_{1}, x_{2}, \sigma) - 1\big) dx_{2} \Bigg|^{p} dx_{1} \leq \varepsilon , \label{e:eps}
\end{aligned}$$ then for every $x_{1} , x_{2} \in [-1, 1]$ $$\label{e:distance-H}
\big| y (x_{1}, x_{2} , \sigma) - (x_{1}, \pi (y_{2} (x_{1}, x_{2} , \sigma) ) , \sigma) \big| \leq \overline{c}_{N, p} \, \varepsilon^{\frac{\gamma}{p+1}}.$$*
*Similarly, if $$\begin{aligned}
& \int_{-1}^{1} \int_{-1}^{1} W(\nabla y( x_{1}, \sigma + 4, x_{3})) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx_{1} \, dx_{3} \leq N, \label{e:EN-2} \\
& \int_{-1}^{1} \Bigg| \int_{-1}^{1} \big( | \partial_{3} y | (x_{1}, \sigma + 4, x_{3} ) - 1\big) dx_{3} \Bigg|^{p} dx_{1} \leq \varepsilon , \label{e:eps-2}
\end{aligned}$$ then, for every $x_{1}$, $x_{3} \in [-1, 1]$ $$\label{e:distance-H-2}
\big| y (x_{1}, \sigma + 4, x_{3} ) - (x_{1}, \sigma , \pi (y_{3} (x_{1}, \sigma + 4, x_{3}) ) ) \big| \leq \overline{c}_{N, p} \, \varepsilon^{\frac{\gamma}{p+1}}.$$*
*Proof.* As $p>3$, Morrey's embedding and [\[e:EN\]](#e:EN){reference-type="eqref" reference="e:EN"} imply that the map $(x_{1}, x_{2}) \mapsto y(x_{1}, x_{2}, \sigma)$ is Hölder-continuous. Precisely, there exists $\widetilde{c}_{N, p}>0$ depending only on $p$ and $N$ such that for every $x_{1}, x_{2}, \overline{x}_{1}, \overline{x}_{2} \in [-1, 1]$ $$\label{e:Holder}
| y(x_{1}, x_{2}, \sigma) - y(\overline{x}_{1}, \overline{x}_{2}, \sigma)| \leq \widetilde{c}_{N, p} \, | (x_{1}, x_{2}) - (\overline{x}_{1}, \overline{x}_{2})|^{\gamma}.$$
Let us define the set $D_{\varepsilon} \subseteq (-1, 1)$ as $$D_{\varepsilon}:= \Bigg\{ x_{1} \in (-1, 1) : \ \int_{-1}^{1} \big( | \partial_{2} y| (x_{1}, x_{2}, \sigma) - 1\big) \, dx_{2} \leq \varepsilon^{\frac{1}{p+1}}\Bigg\}.$$ In particular, we notice that, due to the boundary condition on $\Gamma_{s}$, we have that $$\int_{-1}^{1} \big( | \partial_{2} y| (x_{1}, x_{2}, \sigma) - 1\big) \, dx_{2} \geq 0 \qquad \text{for a.e.~$x_{1} \in (-1, 1)$}.$$ Indeed, for a.e. $x_{1}$ the curve $x_{2} \mapsto y(x_{1}, x_{2}, \sigma)$ connects $(x_{1}, -1, \sigma)$ to $(x_{1}, 1, \sigma)$, so its length $\int_{-1}^{1} | \partial_{2} y| (x_{1}, x_{2}, \sigma) \, dx_{2}$ is at least $2$. Then, by [\[e:eps\]](#e:eps){reference-type="eqref" reference="e:eps"} and by the Chebyshev inequality, $$\begin{aligned}
\label{e:Deps}
\mathcal{H}^{1} & ((-1, 1) \setminus D_{\varepsilon})
\leq \frac{1}{\varepsilon^{\frac{p}{p+1}}} \int_{-1}^{1} \Bigg| \int_{-1}^{1} \big( | \partial_{2} y | (x_{1}, x_{2}, \sigma) - 1 \big) \, dx_{2} \Bigg|^{p} dx_{1}
\leq \varepsilon^{\frac{1}{p+1}}. \end{aligned}$$
Let us now fix $x_{1} \in D_{\varepsilon}$ and $x_{2} \in [-1, 1]$, let us denote by $\theta_{x_{1}} \colon [-1, 1] \to \mathbb R^{3}$ the curve $\theta_{x_{1}} (t) := y(x_{1}, t, \sigma)$, and let us write $$y(x_{1}, x_{2}, \sigma) = (x_{1}, \pi ( y_{2} (x_{1}, x_{2}, \sigma)) , \sigma) + v$$ for a suitable $v = (v_{1}, v_{2}, v_{3}) \in \mathbb R^{3}$ depending on $(x_{1}, x_{2})$. In particular, we notice that one of the two following cases must hold:
1. [\[3di\]]{#3di label="3di"} $\pi ( y_{2} (x_{1}, x_{2}, \sigma)) = y_{2} (x_{1}, x_{2}, \sigma)$ and $v_{2} = 0$,
2. [\[3dii\]]{#3dii label="3dii"} $\pi ( y_{2} (x_{1}, x_{2}, \sigma)) \in \{1, -1\}$ and $|v_{2}| = \min \{ | 1 - y_{2} (x_{1}, x_{2}, \sigma)| ; | 1+ y_{2} (x_{1}, x_{2}, \sigma)|\}$.
In the case [\[3di\]](#3di){reference-type="eqref" reference="3di"}, by definition of $D_{\varepsilon}$ and by the boundary conditions on $y$ we have that $$\begin{aligned}
\label{e:something}
2 + \varepsilon^{\frac{1}{p+1}} & \geq \int_{-1}^{1} | \dot{\theta}_{x_{1}} (t)| \, dt \geq |(x_{1}, -1, \sigma) - y(x_{1}, x_{2}, \sigma)| + | y(x_{1}, x_{2}, \sigma) - (x_{1}, 1, \sigma)|
\\
&
= | (v_{1}, y_{2} (x_{1}, x_{2}, \sigma) + 1, v_{3})| + | (v_{1}, y_{2} (x_{1}, x_{2}, \sigma) - 1, v_{3})| \nonumber
\\
&
= \sqrt{(v_{1}^{2} + v_{3}^{2}) + ( y_{2} (x_{1}, x_{2}, \sigma) + 1)^{2}} + \sqrt{(v_{1}^{2} + v_{3}^{2}) + ( y_{2} (x_{1}, x_{2}, \sigma) - 1)^{2}}\nonumber
\\
&
\geq \min_{z \in [-1, 1]} \sqrt{(v_{1}^{2} + v_{3}^{2}) + ( z + 1)^{2}} + \sqrt{(v_{1}^{2} + v_{3}^{2}) + ( z- 1)^{2}} \nonumber
\\
&
= 2 \sqrt{ (v_{1}^{2} + v_{3}^{2}) + 1}, \nonumber\end{aligned}$$ which implies that $|v| \leq \varepsilon^{\frac{1}{p+1}} + \varepsilon^{\frac{1}{2 (p+1)}}$.
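For the reader's convenience, the last implication can be spelled out as follows (recall that in case [\[3di\]](#3di){reference-type="eqref" reference="3di"} we have $v_{2}=0$, so that $|v|^{2}= v_{1}^{2}+v_{3}^{2}$): $$\sqrt{(v_{1}^{2}+v_{3}^{2})+1} \leq 1 + \tfrac{1}{2}\,\varepsilon^{\frac{1}{p+1}}
\quad\Longrightarrow\quad
v_{1}^{2}+v_{3}^{2} \leq \varepsilon^{\frac{1}{p+1}} + \tfrac{1}{4}\,\varepsilon^{\frac{2}{p+1}} \leq \Big(\varepsilon^{\frac{1}{2(p+1)}} + \varepsilon^{\frac{1}{p+1}}\Big)^{2},$$ whence $|v| \leq \varepsilon^{\frac{1}{2(p+1)}} + \varepsilon^{\frac{1}{p+1}}$.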
If [\[3dii\]](#3dii){reference-type="eqref" reference="3dii"} holds and $y_{2}( x_{1}, x_{2}, \sigma) \notin [-1, 1]$, we may repeat the argument of the first two lines of [\[e:something\]](#e:something){reference-type="eqref" reference="e:something"} and obtain that $|v| \leq \varepsilon^{\frac{1}{p+1}}$. All in all, we have shown that for every $x_{1} \in D_{\varepsilon}$ and every $x_{2} \in [-1, 1]$ it holds $$\label{e:intermediate-proof2}
| y(x_{1}, x_{2}, \sigma) - (x_{1}, \pi (y_{2} (x_{1}, x_{2}, \sigma)), \sigma ) | \leq \varepsilon^{\frac{1}{p+1}} + \varepsilon^{\frac{1}{2 (p+1)}}.$$
To achieve [\[e:distance-H\]](#e:distance-H){reference-type="eqref" reference="e:distance-H"} it remains to consider $x_{1} \notin D_{\varepsilon}$. In this case, by [\[e:Deps\]](#e:Deps){reference-type="eqref" reference="e:Deps"} we may find $\overline{x}_{1} \in D_{\varepsilon}$ such that $| x_{1} - \overline{x}_{1}| \leq 2 \varepsilon^{\frac{1}{p+1}}$. Then, by the triangle inequality, by the Hölder continuity [\[e:Holder\]](#e:Holder){reference-type="eqref" reference="e:Holder"} of $y$, and by the previous step we have that for every $x_{2} \in [-1, 1]$ $$\begin{aligned}
\label{e:something-2}
|& y(x_{1}, x_{2}, \sigma) - (x_{1}, \pi (y_{2}(x_{1}, x_{2}, \sigma)), \sigma) |
\\
&
\leq | y(x_{1}, x_{2}, \sigma) - y( \overline{x}_{1}, x_{2}, \sigma)| + |y( \overline{x}_{1}, x_{2}, \sigma) - (\overline{x}_{1}, \pi (y_{2}(\overline{x}_{1}, x_{2}, \sigma)), \sigma) | \nonumber
\\
&
\quad + | (\overline{x}_{1}, \pi (y_{2}(\overline{x}_{1}, x_{2}, \sigma)), \sigma) - (x_{1}, \pi (y_{2}(x_{1}, x_{2}, \sigma)), \sigma)| \nonumber
\\
&
\leq 2 \, \widetilde{c}_{N, p} \, | x_{1} - \overline{x}_{1}|^{\gamma} + \varepsilon^{\frac{1}{p+1}} + \varepsilon^{\frac{1}{2 (p+1)}} + | x_{1} - \overline{x}_{1}| \nonumber
\\
&
\leq 2^{1+\gamma} \, \widetilde{c}_{N, p} \, \varepsilon^{\frac{\gamma}{p+1}} + 3 \varepsilon^{\frac{1}{p+1}} + \varepsilon^{\frac{1}{2 (p+1)}} \leq \overline{c}_{N, p} \varepsilon^{\frac{\gamma}{p+1}} \nonumber\end{aligned}$$ for a suitable constant $\overline{c}_{N, p}>0$ depending only on $\gamma$ and $\widetilde{c}_{N, p}$, and therefore only on $p$ and $N$. This concludes the proof of [\[e:distance-H\]](#e:distance-H){reference-type="eqref" reference="e:distance-H"}.
The same argument can be used to infer [\[e:distance-H-2\]](#e:distance-H-2){reference-type="eqref" reference="e:distance-H-2"} taking into account the boundary conditions on $\partial S_{2}$. ◻
We are now in a position to prove Proposition [Proposition 11](#p:lav3d-2){reference-type="ref" reference="p:lav3d-2"}.
*Proof of Proposition [Proposition 11](#p:lav3d-2){reference-type="ref" reference="p:lav3d-2"}.* In view of Lemma [Lemma 12](#p:proof2){reference-type="ref" reference="p:proof2"}, it remains to provide a lower bound for the minimum problem [\[e:3d-step2\]](#e:3d-step2){reference-type="eqref" reference="e:3d-step2"}. To this end, let $M >0$ be the constant determined in Lemma [Lemma 12](#p:proof2){reference-type="ref" reference="p:proof2"} and fix a deformation $y \in W^{1, \infty} (\Omega_{s};\mathbb R^{3}) \cap \mathcal{Y}_{s}$ such that $$\label{e:en-bound-M}
E_{s} (y) \leq (M+1)s.$$
Let us fix $N >0$ such that $\frac{M+1}{N} <\frac{1}{10}$ and let us set $$\begin{aligned}
A_{N} & := \Bigg\{ \sigma \in (-s, s) : \, \int_{-1}^{1} \int_{-1}^{1} W(\nabla y (x_{1}, x_{2}, \sigma) ) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx_{1} \, dx_{2} \leq N \Bigg\}, \\
B_{N} & := \Bigg\{ \sigma \in (-s, s) : \, \int_{-1}^{1} \int_{-1}^{1} W(\nabla y (x_{1}, \sigma + 4, x_{3}) ) - W({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}%
{\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}) \, dx_{1} \, dx_{3} \leq N \Bigg\}.\end{aligned}$$ Then, by the Chebyshev inequality and by [\[e:en-bound-M\]](#e:en-bound-M){reference-type="eqref" reference="e:en-bound-M"} we have that $$\begin{aligned}
\mathcal{H}^{1} ((-s, s) \setminus A_{N}) & \leq \frac{1}{N} E_{s} (y) \leq \frac{M+1}{N} \, s < \frac{s}{10}, \label{e:AL}\\
\mathcal{H}^{1} ((-s, s) \setminus B_{N}) & \leq \frac{1}{N} E_{s} (y) \leq \frac{M+1}{N} \, s < \frac{s}{10}. \label{e:BL}\end{aligned}$$ Hence, we deduce from [\[e:AL\]](#e:AL){reference-type="eqref" reference="e:AL"} and [\[e:BL\]](#e:BL){reference-type="eqref" reference="e:BL"} that $$\label{e:ALBL}
\mathcal{H}^{1} (A_{N} \cap B_{N}) \geq \frac{18}{10}\, s.$$ We further set $\gamma:= 1 - \frac{2}{p}$ and fix $\varepsilon>0$ such that $\overline{c}_{N, p} \, \varepsilon^{\frac{\gamma}{p+1}} \leq \frac{1}{3}$, where $\overline{c}_{N, p}>0$ is the constant given by Lemma [Lemma 13](#p:proof3){reference-type="ref" reference="p:proof3"}. We claim that for every $\sigma \in A_{N} \cap B_{N}$, at least one of the following inequalities must hold:
[\[e:claim-3d\]]{#e:claim-3d label="e:claim-3d"} $$\begin{aligned}
& \int_{-1}^{1} \Bigg| \int_{-1}^{1} \big( | \partial_{2} y| (x_{1}, x_{2}, \sigma) - 1\big) \, dx_{2} \Bigg|^{p} dx_{1} >\varepsilon, \label{e:claim-eps-1}\\
& \int_{-1}^{1} \Bigg| \int_{-1}^{1} \big( | \partial_{3} y| (x_{1}, \sigma + 4, x_{3}) - 1\big) \, dx_{3} \Bigg|^{p} dx_{1} >\varepsilon. \label{e:claim-eps-2}
\end{aligned}$$
By contradiction, let us assume that, for some $\sigma$, neither [\[e:claim-eps-1\]](#e:claim-eps-1){reference-type="eqref" reference="e:claim-eps-1"} nor [\[e:claim-eps-2\]](#e:claim-eps-2){reference-type="eqref" reference="e:claim-eps-2"} is satisfied. By Lemma [Lemma 13](#p:proof3){reference-type="ref" reference="p:proof3"}, we deduce that for every $x_{1}, x_{2}, x_{3} \in [-1, 1]$,
[\[e:y12\]]{#e:y12 label="e:y12"} $$\begin{aligned}
& | y(x_{1}, x_{2}, \sigma) - (x_{1}, \pi (y_{2}(x_{1}, x_{2}, \sigma)) , \sigma) | \leq \frac{1}{3},\label{e:y1}\\
& | y(x_{1}, \sigma + 4, x_{3}) - (x_{1}, \sigma , \pi (y_{3}(x_{1}, \sigma + 4 , x_{3})) ) | \leq \frac{1}{3}.\label{e:y2}
\end{aligned}$$
We now show that given [\[e:y1\]](#e:y1){reference-type="eqref" reference="e:y1"} and [\[e:y2\]](#e:y2){reference-type="eqref" reference="e:y2"}, $y\in \mathcal{Y}_{s}$ cannot be injective in $\Omega_s$. This immediately yields a contradiction, as we already know that any $y \in W^{1, \infty}(\Omega_{s}; \mathbb R^{3}) \cap \mathcal{Y}_{s}$ with finite energy must be a homeomorphism (as a consequence of the theory of mappings of finite distortion [@ManVill1998], as already outlined in the introduction).
To see that $y$ indeed cannot be injective, let us consider the function $$g \colon [-1,1]^3\to \mathbb R^3,\qquad g(x_1,\tau_1,\tau_2):=y(x_1,\tau_1,\sigma)-y(0,\sigma+4,-\tau_2).$$ Notice that if $g(\hat x_1,\hat\tau_1,\hat\tau_2)=0$ for some $(\hat x_1,\hat\tau_1,\hat\tau_2)\in(-1,1)^3$, then $y$ is not injective, since $(\hat x_1,\hat\tau_1,\sigma)\in S_1$, $(0,\sigma+4,-\hat\tau_2)\in S_2$ and the values of $y$ on these two points coincide. As a consequence of [\[e:y12\]](#e:y12){reference-type="eqref" reference="e:y12"} and of the boundary conditions of $y\in \mathcal{Y}_{s}$ on $\Gamma_s$, the vector field $g=(g_1,g_2,g_3)$ always points outwards on the boundary of the cube $[-1,1]^3$: $$\begin{alignedat}{2}
& g_1(-1,\tau_1,\tau_2)\leq -1+\frac{2}{3}<0,\\
& g_1(1,\tau_1,\tau_2)\geq 1-\frac{2}{3}>0,\\
& g_2(x_1,-1,\tau_2)=-1-y_2(0,\sigma+4,-\tau_2)
\leq -1-\sigma+\frac{1}{3}<0, \\
& g_2(x_1,1,\tau_2)=1-y_2(0,\sigma+4,-\tau_2)
\geq 1-\sigma-\frac{1}{3}>0,\\
& g_3(x_1,\tau_1,-1)=y_3(x_1,\tau_1,\sigma)-1 \leq \sigma+\frac{1}{3}-1<0,\\
& g_3(x_1,\tau_1,1)=y_3(x_1,\tau_1,\sigma)+1\geq 1+\sigma-\frac{1}{3}>0.\\
\end{alignedat}$$ (Above, we also used that $s$ is small enough so that $\left\vert\sigma\right\vert\leq s <\frac{1}{3}$.) As $g$ is also continuous, it thus satisfies the prerequisites of the Poincaré--Miranda theorem (see [@Ma13a], e.g.). The latter yields that $g$ attains the value $0\in\mathbb R^3$ in $[-1,1]^3$; actually even in $(-1,1)^3$, as the above rules out zeroes on the boundary. (Alternatively, this is also not hard to see directly, observing that the topological degree of $g$ satisfies $\operatorname{deg}(g;(-1,1)^3;0)
=\operatorname{deg}(\operatorname{id};(-1,1)^3;0)=1$ by homotopy invariance of the degree.) Consequently, $y$ is not injective on $\Omega_s$.
We are now in a position to conclude the proof of Proposition [Proposition 11](#p:lav3d-2){reference-type="ref" reference="p:lav3d-2"}. Let us define $$\begin{aligned}
\mathcal{A} & := \{ \sigma \in A_{N} \cap B_{N} | \, \text{\eqref{e:claim-eps-1} holds}\},\\
\mathcal{B} & := \{ \sigma \in A_{N} \cap B_{N} | \, \text{\eqref{e:claim-eps-2} holds}\} \setminus \mathcal{A}.\end{aligned}$$ Since one of the inequalities [\[e:claim-eps-1\]](#e:claim-eps-1){reference-type="eqref" reference="e:claim-eps-1"} or [\[e:claim-eps-2\]](#e:claim-eps-2){reference-type="eqref" reference="e:claim-eps-2"} holds for every $\sigma \in A_{N} \cap B_{N}$, we have that $\mathcal{A} \cup \mathcal{B} = A_{N} \cap B_{N}$, while by construction $\mathcal{A} \cap \mathcal{B} = \varnothing$. Arguing as in [\[e:en-estimate\]](#e:en-estimate){reference-type="eqref" reference="e:en-estimate"} and applying Proposition [Proposition 2](#p:lower-bound){reference-type="ref" reference="p:lower-bound"}, we estimate the energy $E_{s}(y)$ as $$\begin{aligned}
E_{s} (y) & \geq c \int_{\mathcal{A}} \int_{-1}^{1} \int_{-1}^{1} \big| \| \nabla y ( x_{1}, x_{2}, x_{3})\|_{2} - 1\big|^{p} dx_{1} \, dx_{2} \, dx_{3}
\\
&
\qquad + c \int_{\mathcal{A}} \int_{-1}^{1} \int_{-1}^{1} \big| \| \nabla y (x_{1}, x_{2}, x_{3}) \|_{2} - 1\big|^{2} dx_{1} \, dx_{2} \, dx_{3} \nonumber
\\
&
\qquad + c \int_{\mathcal{B} + 4} \int_{-1}^{1} \int_{-1}^{1} \big| \| \nabla y (x_{1}, x_{2}, x_{3}) \|_{2} - 1\big|^{p} dx_{1} \, dx_{3} \, dx_{2} \nonumber
\\
&
\qquad + c \int_{\mathcal{B} + 4} \int_{-1}^{1} \int_{-1}^{1} \big| \| \nabla y (x_{1}, x_{2}, x_{3}) \|_{2} - 1\big|^{2} dx_{1} \, dx_{3} \, dx_{2} \nonumber
\\
&
\geq c \int_{\mathcal{A}} \int_{-1}^{1} \int_{-1}^{1} \big| | \partial_{2} y ( x_{1}, x_{2}, x_{3})| - 1\big|^{p} dx_{1} \, dx_{2} \, dx_{3} \nonumber
\\
&
\qquad + c \int_{\mathcal{A}} \int_{-1}^{1} \int_{-1}^{1} \big| | \partial_{2} y (x_{1}, x_{2}, x_{3}) | - 1\big|^{2} dx_{1} \, dx_{2} \, dx_{3} \nonumber
\\
&
\qquad + c \int_{\mathcal{B} + 4} \int_{-1}^{1} \int_{-1}^{1} \big| | \partial_{3} y (x_{1}, x_{2}, x_{3}) | - 1\big|^{p} dx_{1} \, dx_{3} \, dx_{2} \nonumber
\\
&
\qquad + c \int_{\mathcal{B} + 4} \int_{-1}^{1} \int_{-1}^{1} \big| | \partial_{3} y (x_{1}, x_{2}, x_{3}) | - 1\big|^{2} dx_{1} \, dx_{3} \, dx_{2} \nonumber
\\
&
\geq 2^{1-p}c \int_{\mathcal{A}} \int_{-1}^{1} \Bigg| \int_{-1}^{1} ( | \partial_{2} y ( x_{1}, x_{2}, x_{3})| - 1)dx_{2} \Bigg|^{p} dx_{1} \, dx_{3} \nonumber
\\
&
\qquad + 2^{1-p} c \int_{\mathcal{B} + 4} \int_{-1}^{1} \Bigg| \int_{-1}^{1} ( | \partial_{3} y (x_{1}, x_{2}, x_{3}) | - 1) dx_{3} \Bigg|^{p} \, dx_{1} \, dx_{2} \nonumber
\\
&
\geq 2^{1-p} c \, \varepsilon \frac{18}{10} \, s . \nonumber\end{aligned}$$ All in all, we have shown that any deformation $y \in W^{1, \infty}(\Omega_{s};\mathbb R^{3}) \cap \mathcal{Y}_{s}$ satisfying [\[e:en-bound-M\]](#e:en-bound-M){reference-type="eqref" reference="e:en-bound-M"} has energy $E_{s}(y)\geq \delta s$ for some positive constant $\delta$ independent of $s$. Thus, [\[e:3d-step2\]](#e:3d-step2){reference-type="eqref" reference="e:3d-step2"} holds and the proof of the proposition is concluded. ◻
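For the record, the above chain also provides an explicit (by no means optimal) admissible value of the constant: competitors violating [\[e:en-bound-M\]](#e:en-bound-M){reference-type="eqref" reference="e:en-bound-M"} trivially satisfy $E_{s}(y) > (M+1)s$, while for the remaining ones the estimate above applies with $\varepsilon$ fixed so that $\overline{c}_{N, p} \, \varepsilon^{\frac{\gamma}{p+1}} \leq \frac{1}{3}$; hence one may take $$\delta := \min\Big\{ M+1, \ \tfrac{9}{5}\, 2^{1-p}\, c\, \varepsilon \Big\},$$ which depends on $p$, $N$, $M$, and the constant $c$ of Proposition [Proposition 2](#p:lower-bound){reference-type="ref" reference="p:lower-bound"}, but not on $s$.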
*Remark 14*. Similarly to Remark [Remark 7](#rem:1){reference-type="ref" reference="rem:1"}, we point out that the Lavrentiev phenomenon in dimension $d=3$ remains valid if we replace $W^{1, \infty} (\Omega_{s}; \mathbb R^{3})$ with $W^{1,r} (\Omega_{s}; \mathbb R^{3})$ for $r>\frac{6q}{q-2}$. As in [\[neq:distprtion-int\]](#neq:distprtion-int){reference-type="eqref" reference="neq:distprtion-int"}, we would indeed have that for $y \in W^{1,r} (\Omega_{s}; \mathbb R^{3})\cap \mathcal{Y}_{s}$ with $E_{s} (y) <+\infty$ the distortion coefficient $K_{y} = \frac{|\nabla y|^3}{\det \nabla y}$ belongs to $L^{\eta} (\Omega_{s})$ with $\eta:=\frac{rq}{3q+r} \in (2, q)$. This implies that any competitor $y \in W^{1,r} (\Omega_{s}; \mathbb R^{3})\cap \mathcal{Y}_{s}$ with energy $E_{s}(y) \approx s$ still fulfills [\[e:eps\]](#e:eps){reference-type="eqref" reference="e:eps"} and [\[e:eps-2\]](#e:eps-2){reference-type="eqref" reference="e:eps-2"} of Lemma [Lemma 13](#p:proof3){reference-type="ref" reference="p:proof3"}. Hence, the proof of the lower bound of $E_{s}(y)$ in Proposition [Proposition 11](#p:lav3d-2){reference-type="ref" reference="p:lav3d-2"} proceeds as in the $W^{1,\infty}$-case.
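For completeness, here is the elementary exponent count behind the choice of $\eta$, assuming (as in [\[neq:distprtion-int\]](#neq:distprtion-int){reference-type="eqref" reference="neq:distprtion-int"}) that $|\nabla y|^{3}\in L^{r/3}(\Omega_{s})$ and that the energy bound yields $(\det \nabla y)^{-1}\in L^{q}(\Omega_{s})$, and recalling that $q>2$: by Hölder's inequality, $$\frac{1}{\eta} = \frac{3}{r} + \frac{1}{q} = \frac{3q+r}{rq},
\qquad\text{and}\qquad
\eta > 2 \iff rq > 6q+2r \iff r(q-2) > 6q \iff r > \frac{6q}{q-2},$$ while $\eta < q$ holds automatically, being equivalent to $0 < 3q^{2}$.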
# Acknowledgments {#acknowledgments .unnumbered}
The work of S.A. was partially funded by the Austrian Science Fund (FWF) through the projects P35359-N and ESP-61. The work of S.K. was supported by the Czech-Austrian bilateral grants 21-06569K (GA ČR-FWF) and 8J23AT008 (MŠMT-WTZ mobility). A.M. was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 847693.
arXiv:2309.08288, *A new example for the Lavrentiev phenomenon in Nonlinear Elasticity*, by Stefano Almi, Stefan Krömer, and Anastasia Molchanova (math.AP, math.FA); license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/.
---
abstract: |
The notion of naı̈ve lifting of DG modules was introduced by the authors in [@NOY; @NOY1] for the purpose of studying problems in homological commutative algebra that involve self-vanishing of Ext. Our goal in this paper is to deeply study the naı̈ve lifting property using the tensor algebra of the shift of the diagonal ideal (or, diagonal tensor algebra, as is phrased in the title of this paper). Our main result provides several characterizations of naı̈ve liftability of DG modules under certain Ext vanishing conditions. As an application, we affirmatively answer [@NOY2 Question 4.10] under the same assumptions.
address:
- |
Department of Mathematical Sciences\
Georgia Southern University\
Statesboro, GA 30460, U.S.A.
- Institute for the Advancement of Higher Education, Okayama University of Science, Ridaicho, Kitaku, Okayama 700-0005, Japan
- Graduate School of Environmental, Life, Natural Science and Technology, Okayama University, Okayama 700-8530, Japan
author:
- Saeed Nasseh
- Maiko Ono
- Yuji Yoshino
title: Diagonal tensor algebra and naı̈ve liftings
---
[^1]
# Introduction {#sec20200314a}
Lifting theory of differential graded (DG) modules along certain DG algebra homomorphisms was initially studied by Nasseh and Sather-Wagstaff [@nasseh:lql] in order to gain a better understanding of a question of Vasconcelos; see [@Vasc]. Since then, this theory has been significantly developed by the authors in [@NOY; @NOY1; @NOY3; @nassehyoshino; @OY] for the purpose of studying the following long-standing conjecture in homological commutative algebra, known as the *Auslander-Reiten Conjecture*.
**Conjecture 1** ([@AR]). *A finitely generated module $M$ over a commutative noetherian local ring $R$ is free if $\operatorname{Ext}^{i}_R(M,M\oplus R)=0$ for all $i\geqslant 1$.*
While this conjecture is still open in general, it has been proven affirmatively for several classes of commutative noetherian local rings; see, for instance, [@AY; @auslander:lawlom; @avramov:svcci; @avramov:edcrcvct; @avramov:phcnr; @huneke:vtci; @huneke:voeatoscmlr; @jorgensen:fpdve; @nasseh:vetfp; @nasseh:lrqdmi; @nasseh:oeire; @MR1974627; @sega:stfcar], to name a few. The following discussion briefly explains a relation between this conjecture and lifting theory of DG modules.
** 2**. In Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"}, the ring $R$ can be assumed to be complete. Let $R\cong Q/I$ be a minimal Cohen presentation, i.e., $Q$ is a regular local ring and $I$ is an ideal of $Q$. Tate [@Tate] showed that there is a (finite or infinite) free extension of DG algebras $$\label{eq20230424c}
Q\hookrightarrow Q'=Q\langle X_n\mid n\leqslant\infty\rangle$$ such that $Q'$ is quasiisomorphic to $R$. Hence, there is an equivalence $\mathcal{D}(Q')\simeq \mathcal{D}(R)$ between the derived categories under which Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} can be translated to an equivalent problem for a DG module $N$ over the DG algebra $Q'$ with corresponding Ext vanishing assumptions; see [ 68](#para20230521a){reference-type="ref" reference="para20230521a"} for details. Since $Q$ is a regular ring, developing a *suitable* notion of lifting along the DG algebra extension [\[eq20230424c\]](#eq20230424c){reference-type="eqref" reference="eq20230424c"} that lifts $N$ to a complex over $Q$ will result in an affirmative answer to Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"}.
The classical notions of lifting for DG modules studied in [@NOY; @nasseh:lql; @nassehyoshino; @OY], namely, *lifting* and *weak lifting*, are generalizations of those for modules and complexes studied by Auslander, Ding, Solberg [@auslander:lawlom] and Yoshino [@yoshino]. As we mention in the introduction of [@NOY3], these versions of lifting are not suitable in the sense of [ 2](#para20230424s){reference-type="ref" reference="para20230424s"}, as they do not allow working with free extensions of DG algebras that are obtained by adjoining more than one variable, e.g., [\[eq20230424c\]](#eq20230424c){reference-type="eqref" reference="eq20230424c"}. To overcome this issue, the authors introduced the notion of *naı̈ve lifting* for DG modules in [@NOY; @NOY1] and proved the following result; see [ 54](#defn20230127a){reference-type="ref" reference="defn20230127a"} for the definition of naı̈ve lifting.
**Theorem 3** (). *Let $Q$ be a commutative ring (not necessarily regular). Assume that $Q'=Q \langle X_n\mid n<\infty \rangle$ is obtained by adjoining finitely many variables of positive degrees to $Q$. If $N$ is a bounded below semifree DG $Q'$-module with $\operatorname{Ext}_{Q'} ^i (N, N)=0$ for all $i\geqslant 1$, then $N$ is naïvely liftable to $Q$.*
While, by Theorem [Theorem 3](#thm20230127f){reference-type="ref" reference="thm20230127f"}, the naïve lifting property holds along *finite* free extensions of DG algebras under the above Ext vanishing assumptions, it is still an open problem whether it holds along *infinite* free extensions of DG algebras, even under stronger assumptions on the vanishing of Ext. However, the following statement holds true.
**Theorem 4** (). *If the naïve lifting property holds along the free extension of DG algebras [\[eq20230424c\]](#eq20230424c){reference-type="eqref" reference="eq20230424c"}, then Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} holds.*
Theorems [Theorem 3](#thm20230127f){reference-type="ref" reference="thm20230127f"} and [Theorem 4](#thm20230428a){reference-type="ref" reference="thm20230428a"} provide another proof for the following well-known fact; see the discussion in [@NOY1 7.4].
**Corollary 5**. *If $R$ is a complete intersection local ring, then Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} holds.*
Theorems [Theorem 3](#thm20230127f){reference-type="ref" reference="thm20230127f"} and [Theorem 4](#thm20230428a){reference-type="ref" reference="thm20230428a"} show that naïve lifting is a suitable version of lifting in the sense of [ 2](#para20230424s){reference-type="ref" reference="para20230424s"}. Hence, the goal of this paper is to study in depth the naïve lifting property of DG modules under certain Ext vanishing assumptions along a homomorphism $A\to B$ of DG algebras, where $B$ is a semifree DG module over $A$, e.g., where $B=A \langle X_i\mid i\in \mathbb{N} \rangle$ is an infinite free extension of DG algebras; see Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}. The main tool that we use for our study is the tensor algebra of the shift of the diagonal ideal (or, diagonal tensor algebra, as is phrased in the title of this paper).
The organization of this paper is as follows.
In Section [2](#sec20221213a){reference-type="ref" reference="sec20221213a"}, we specify some of the terminology and provide the background on the diagonal ideal, which we denote by $J$. We also introduce the diagonal tensor algebra, which is denoted by $T$, and explain that in addition to the usual grading as a DG algebra, $T$ has another grading, which we refer to as the tensor grading. Then, in Section [3](#sec20230425d){reference-type="ref" reference="sec20230425d"} we introduce the notions of tensor graded DG $T$-modules and tensor graded DG $T$-module homomorphisms. Furthermore, for a semifree DG $B$-module $N$, we construct a map, denoted by $\omega_N$, that is shown in Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"} to be the obstruction to naı̈ve lifting, and hence, to Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"}. The map $\omega_N$ is an element of a certain tensor graded endomorphism ring, which we denote by $\Gamma_{N \otimes _B T}$.
In Section [4](#sec20230425s){reference-type="ref" reference="sec20230425s"}, we discuss the action of shifts of $\omega_N$ as an element in the ring $\Gamma_{N \otimes _B T}$ on certain graded right $\Gamma_{N \otimes _B T}$-modules. Our main result in this section is Theorem [Theorem 37](#main){reference-type="ref" reference="main"} whose proof involves several steps, including an analysis of the mapping cone of $\omega_N$. In Section [5](#sec20230425r){reference-type="ref" reference="sec20230425r"}, we study the relationships among certain endomorphism rings. Our first main result in this section, namely Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"}, states that $\Gamma _{N \otimes _B T}$ can be described as a polynomial ring in the variable $\omega_N$ over another endomorphism ring under certain assumptions for DG modules. Our second main theorem in this section is Theorem [Theorem 53](#kernel of omega){reference-type="ref" reference="kernel of omega"} in which we determine the kernel and cokernel of the map that is defined by the left multiplication by $\omega_N$ on $\Gamma _{N \otimes _B T}$. These theorems are crucial in our study and are used in the subsequent section.
Section [6](#sec20230422a){reference-type="ref" reference="sec20230422a"} is where we prove our main result in this paper, namely Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}, in which we give several characterizations of the naïve lifting property under certain Ext vanishing conditions. The proof of this theorem uses our entire work from the previous sections. As an immediate application of Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}, we provide an affirmative answer to [@NOY2 Question 4.10] under the same assumptions; see Corollary [Corollary 61](#cor20230807a){reference-type="ref" reference="cor20230807a"}. Finally, we conclude this paper with Appendix [7](#sec20230521a){reference-type="ref" reference="sec20230521a"} in which we briefly discuss some details on the conditions that we impose on the DG modules throughout the paper and their relationships with the assumptions in Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"}.
# Background and setting {#sec20221213a}
We assume that the reader is familiar with differential graded (DG) homological algebra; general references on this subject include [@avramov:ifr; @avramov:dgha; @felix:rht; @GL]. The purpose of this section is to fix our notation, specify some of the terminology, and briefly provide the necessary background on the diagonal ideal and tensor algebra.
** 6**. Throughout the paper, $R$ is a commutative ring and $(A,d^A)$, or simply $A$, is a *DG $R$-algebra*. This means that $A = \bigoplus _{n \geqslant 0} A _n$ is a non-negatively graded $R$-algebra with a differential $d^A=\{d_n^A\colon A_n\to A_{n-1}\}_{n\geqslant 0}$ (i.e., each $d_n^A$ is an $R$-linear map with $d_n^Ad_{n+1}^A=0$) such that for all homogeneous elements $a \in A_{|a|}$ and $b \in A_{|b|}$
1. $ab = (-1)^{|a| |b|}ba$ and $a^2 =0$ if $|a|$ is odd;
2. $d^A(ab) = d^A(a) b + (-1)^{|a|}ad^A(b)$, that is, $d^A$ satisfies the *Leibniz rule*.
Here, $|a|$ is called the degree of the homogeneous element $a$.
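A minimal illustration of this definition (a standard example, not taken from the present paper) is the Koszul complex on a single element $x \in R$: $$A = R\langle X\rangle = R \oplus RX, \qquad |X| = 1, \qquad X^{2} = 0, \qquad d^{A}_{1}(X) = x, \qquad d^{A}_{n} = 0 \ \text{otherwise},$$ so that the Leibniz rule gives $d^{A}(aX) = ax$ for every $a \in R = A_{0}$, and both conditions above are immediate.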
** 7**. DG modules in this paper are *right* DG modules, unless otherwise specified. By a *DG $A$-module* $(M, \partial^M)$, or simply $M$, we mean a graded right $A$-module $M=\bigoplus_{n\in \mathbb{Z}}M_n$ with a differential $\partial^M=\{\partial^M_n\colon M_n\to M_{n-1}\}_{n\in \mathbb{Z}}$ that satisfies the *Leibniz rule*, i.e., for all homogeneous elements $a\in A_{|a|}$ and $m\in M_{|m|}$ the equality $\partial^M(ma) = \partial^M(m)\ a + (-1)^{|m|} m\ d^A(a)$ holds. (Left DG $A$-modules are defined similarly.) We often refer to $M=\bigoplus_{i\in \mathbb{Z}}M_i$ as the *underlying graded $A$-module*.
For an integer $i$, the *$i$-th shift* of a DG $A$-module $M$ is denoted by $\mathsf{\Sigma}^i M$. Note that $\left(\mathsf{\Sigma}^i M, \partial^{\mathsf{\Sigma}^i M}\right)$ is a DG $A$-module such that for all integers $j$ we have $\left(\mathsf{\Sigma}^i M\right)_j = M_{j-i}$ and $\partial_j^{\mathsf{\Sigma}^i M}=(-1)^i\partial_{j-i}^M$. We simply write $\mathsf{\Sigma}M$ instead of $\mathsf{\Sigma}^1 M$.
** 8**. We denote by $\mathcal{C}(A)$ the category of DG $A$-modules and DG $A$-module homomorphisms. For DG $A$-modules $M,N$, the set of DG $A$-module homomorphisms from $M$ to $N$ is denoted by $\operatorname{Hom}_{\mathcal{C}(A)}(M,N)$. Isomorphisms in $\mathcal{C}(A)$ are identified by $\cong$ and quasiisomorphisms in $\mathcal{C}(A)$ (i.e., DG $A$-module homomorphisms that induce bijections on all homology modules) are identified by $\simeq$.
The homotopy category of $A$ is denoted by $\mathcal{K}(A)$, which is a triangulated category. Objects of $\mathcal{K}(A)$ are DG $A$-modules and morphisms are the set of homotopy equivalence classes of DG $A$-module homomorphisms. In fact, for DG $A$-modules $M,N$ we have $\operatorname{Hom}_{\mathcal{K}(A)} (M, N)= \operatorname{Hom}_{\mathcal{C}(A)} (M, N)/ \sim$. Note that for $f,g\in \operatorname{Hom}_{\mathcal{C}(A)} (M, N)$ we write $f \sim g$ if and only if there exists a graded $A$-module homomorphism $h\colon M \to \mathsf{\Sigma}^{-1}N$ of underlying graded $A$-modules such that the equality $f - g = \partial ^N h + h \partial ^M$ holds.
** 9**. A DG $A$-module $P$ is called *semifree* if it has a *semifree basis*, i.e., if there is a well-ordered subset $F\subseteq P$ which is a basis for the underlying graded $A$-module $P$ such that for every $f\in F$ we have $\partial^P(f)\in \sum_{e<f}eA$. Equivalently, the DG $A$-module $P$ is semifree if there exists an increasing filtration $$0=P_{-1}\subseteq P_0\subseteq P_1\subseteq \cdots\subseteq P$$ of DG $A$-submodules of $P$ such that $P=\bigcup_{i\geqslant 0}P_i$ and each DG $A$-module $P_i/P_{i-1}$ is a direct sum of copies of $\mathsf{\Sigma}^n A$ with $n\in \mathbb{Z}$; see [@AH], [@AINSW A.2], or [@felix:rht] for details.
A *semifree resolution* of a DG $A$-module $M$ is a quasiisomorphism $P\xrightarrow{\simeq}M$, where $P$ is a semifree DG $A$-module.
For $i\in \mathbb{Z}$ and DG $A$-modules $M,N$, the extension $\operatorname{Ext}^i_A(M,N)$ is defined to be $\operatorname{H}_{-i}\left(\operatorname{Hom}_A(P,N)\right)$, where $P\xrightarrow{\simeq}M$ is a semifree resolution of $M$. Note that $$\operatorname{Ext}^i_A(M,N)= \operatorname{Hom}_{\mathcal{K}(A)}(P, \mathsf{\Sigma}^iN).$$
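For instance, since $A$ is semifree over itself (with semifree basis $\{1\}$), the identity map is a semifree resolution of $A$, and the definition gives $$\operatorname{Ext}^i_A(A,N) = \operatorname{H}_{-i}\left(\operatorname{Hom}_A(A,N)\right) \cong \operatorname{H}_{-i}(N)$$ for every DG $A$-module $N$ and every $i\in \mathbb{Z}$.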
** 10**. A DG $A$-module $M$ is called *perfect* if it is quasiisomorphic to a semifree DG $A$-module that has a finite semifree basis. Equivalently, a DG $A$-module $M$ is perfect if there is a finite sequence of DG $A$-submodules $$0=M_0\subseteq M_1\subseteq \cdots \subseteq M_{n-1} \subseteq M_{n} = M$$ of $M$ such that each $M_k/M_{k-1}$ is a finite direct sum of copies of $\mathsf{\Sigma}^{r_k}A$ with $r_k\geqslant 0$. Note that a direct summand of a perfect DG $A$-module is not necessarily perfect.
** 11**. Throughout the paper, $(B,d^B)$, or simply $B$, is another DG $R$-algebra that is semifree as a DG $A$-module and $\varphi\colon A\to B$ is a homomorphism of DG $R$-algebras. Note that $A$ is a DG $R$-subalgebra of $B$, and setting $\overline{B}= B/A$, we see that $\overline{B}$ is a semifree DG $A$-module as well. We further assume that $\overline{B}$ is positively graded, i.e., $\overline{B}_n=0$ for all $n\leqslant 0$.
Examples of DG $R$-algebras $A$ and $B$ that satisfy these conditions include the polynomial and free extensions $B=A[X_i\mid 1\leqslant i\leqslant n]$ and $B=A\langle X_i\mid 1\leqslant i\leqslant n\rangle$ of $A$ with $n\leqslant\infty$ and $|X_i|>0$ for all $1\leqslant i\leqslant n$, where in the latter case $A$ is assumed to be a divided power DG $R$-algebra; see [@NOY2 Example 2.5].
** 12**. Let $B^e=B \otimes_A B$, or more precisely $(B^e, d^{B^e})$, be the *enveloping DG $R$-algebra of $B$ over $A$*. The algebra structure on $B^e$ is given by the formula $$(b_1 \otimes_A b_2)( {b'}_1 \otimes_A {b'}_2) = (-1)^{|{b'}_1| |b_2|} b_1 {b'}_1 \otimes_A b_2{b'}_2$$ for all homogeneous elements $b_1, b_2, b'_1, b'_2\in B$ and the graded structure is given by the equalities $(B^e)_i = \sum_{j} B_j\otimes_R B_{i-j}$ for all $i\geqslant 0$ with the relations $b_j\otimes_R a b_{i-j} = b_j a\otimes_R b_{i-j}$ for all $b_j \in B_j$, $b_{i-j}\in B_{i-j}$, and $a\in A$. Note that $B$ is considered as a DG $R$-subalgebra of $B^e$ via the injective DG $R$-algebra homomorphism $B \to B^e$ defined by $b\mapsto b \otimes_A 1$.
Following [@NOY1], the kernel of the DG $R$-algebra homomorphism $\pi_B\colon B^e\to B$ defined by $\pi_B(b\otimes_A b')=bb'$ is called the *diagonal ideal* and is denoted by $J$. Hence, we have a short exact sequence $$\label{basic sequence}
0 \to J \xrightarrow{\iota} B^e \xrightarrow{\pi_B} B\to 0$$ of DG $B^e$-modules in which $\iota$ is the natural injection.
** 13**. For a DG $B$-ideal $I$ and an integer $n \geqslant 0$, let $I^{\otimes _B n}=I \otimes _ B \cdots \otimes _ B I$ be the $n$-fold tensor product of $I$ over $B$ with the convention that $I^{\otimes_B 0}=B$.
Note that $J$ is free as an underlying graded left $B$-module. Hence, applying the functor $- \otimes _ B J^{\otimes _B n}$ to [\[basic sequence\]](#basic sequence){reference-type="eqref" reference="basic sequence"}, we obtain the short exact sequence $$\label{eq20230113a}
0 \to J ^{\otimes _B (n+1)}\xrightarrow{\iota_n} B \otimes _A J^{\otimes _B n} \xrightarrow{\pi_n} J^{\otimes _B n} \to 0$$ of DG $B^e$-modules.
** 14**. The $A$-linear map $\delta\colon B\to J$ defined by $\delta(b)=b\otimes_A 1-1\otimes_A b$ for all $b\in B$ is well-defined and is called the *universal derivation*; see [@NOY3]. Under the setting of [ 11](#para20230425r){reference-type="ref" reference="para20230425r"} we see that $\inf J:=\inf\{n\mid J_n\neq 0\}>0$; see [@NOY2 Proposition 2.9].
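As a quick sanity check (not needed in the sequel, and using only the multiplication rule on $B^{e}$ recalled above), $\delta$ satisfies the expected derivation-type identity: for all homogeneous $b, b' \in B$, $$\delta(b)\,(1\otimes_A b') + (b\otimes_A 1)\,\delta(b')
= \big(b\otimes_A b' - 1\otimes_A bb'\big) + \big(bb'\otimes_A 1 - b\otimes_A b'\big)
= \delta(bb'),$$ where no signs appear because in each of the products involved one of the entries determining the sign is $1$, which has degree zero.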
The rest of this section is devoted to the tensor algebra and some of its properties.
** 15**. Let $T=\bigoplus _{n \geqslant 0} (\mathsf{\Sigma}J) ^{\otimes_B n}$ be the *tensor algebra of $\mathsf{\Sigma}J$ over $B$* (or the *diagonal tensor algebra*). The $R$-algebra structure of $T$ comes from taking the tensor product over $B$ as the multiplication. Note that $T$ is also a DG $B^e$-module with the differential $\partial^T$ defined by the Leibniz rule on $(\mathsf{\Sigma}J) ^{\otimes_B n}$ for all $n\geqslant 1$ as follows: $$\partial ^T (c_1 \otimes_B \cdots \otimes_B c_n ) = \sum _{i=1}^n (-1) ^{|c_1| + \cdots + |c_{i-1}|} c_1 \otimes_B \cdots \otimes_B \partial ^{\mathsf{\Sigma}J} (c_i) \otimes_B \cdots \otimes_B c_n$$ where $c_i \in \mathsf{\Sigma}J$ for all $1 \leqslant i \leqslant n$.
Throughout the paper, we simply write $T$ instead of $(T, \partial^T)$, which we regard as a DG $R$-algebra (but not necessarily commutative), and at the same time, as a DG $B^e$-module.
** 16**. By definition, a DG $T$-module $M$ is given by a DG $B$-module $M$ together with a DG $B$-module homomorphism $\mu\colon M \otimes_B \mathsf{\Sigma}J \to M$, which defines the action of $T$ on $M$ as $m \cdot c := \mu (m \otimes_B c)$, for all $m \in M$ and $c \in \mathsf{\Sigma}J$. Note that $$T^+ = T \otimes _B \mathsf{\Sigma}J = \bigoplus _{n\geqslant 1} (\mathsf{\Sigma}J)^{\otimes _B n}$$ is a two-sided DG ideal of $T$. The natural inclusion $T^+\hookrightarrow T$ defines a DG $T$-module structure on $T$. Note also that $T/T^+ \cong B$ and hence, every DG $B$-module is naturally a DG $T$-module.
** 17**. In addition to the usual grading, the DG $R$-algebra $T$ has another grading, which we refer to as the *tensor grading*. More precisely, we say that an element $t\in T$ is of tensor degree $n$ if and only if $t$ belongs to $(\mathsf{\Sigma}J) ^{\otimes _B n}$. If necessary, we call the ordinary grading that gives the DG $R$-algebra structure on $T$ the *DG grading*. Hence, the $n$-th tensor graded part of $T$ is $(\mathsf{\Sigma}J)^{\otimes _B n}$, which we denote by $T^n$, i.e., in the tensor grading we have $T= \bigoplus _{n\geqslant 0} T^n$. On the other hand, the $n$-th DG graded part of $T$ is $$B_n \oplus (\mathsf{\Sigma}J)_{n} \oplus \left((\mathsf{\Sigma}J)^{\otimes _B 2}\right)_{n} \oplus \cdots=B_n \oplus J_{n-1} \oplus (J^{\otimes _B 2})_{n-2} \oplus \cdots.$$
# Tensor graded DG modules {#sec20230425d}
The purpose of this section is to introduce a specific map $\omega_N$, where $N$ is a semifree DG $B$-module, which plays a crucial role in this paper; see [ 29](#para20230115e){reference-type="ref" reference="para20230115e"} for the definition. We start with generalizing our discussion in [ 17](#para20230114b){reference-type="ref" reference="para20230114b"} by defining the notions of tensor graded DG $T$-modules and tensor graded DG $T$-module homomorphisms.
** 18**. A *tensor graded DG $T$-module* $M$ is a direct sum $M = \bigoplus _{i \in \mathbb{Z}} M^i$ of DG $B$-modules with a collection $\{a_M ^i\colon M^i \otimes _B \mathsf{\Sigma}J \to M^{i+1}\}_{i \in \mathbb{Z}}$ of DG $B$-module homomorphisms. Notice that this collection $\{ a_M ^i\}_{i\in \mathbb{Z}}$ defines an action of $T$ on $M$ and makes it into a DG $T$-module; see [ 16](#para20230114a){reference-type="ref" reference="para20230114a"}.
Let $N = \bigoplus _{i \in \mathbb{Z}} N^i$ be another tensor graded DG $T$-module. A *tensor graded DG $T$-module homomorphism* $f\colon M \to N$ is a collection $\{ f^i\colon M^i\to N^i\}_{i \in \mathbb{Z}}$ of DG $B$-module homomorphisms which make the following squares commutative in $\mathcal{C}(B)$: $$\xymatrix{
M^i \otimes _B \mathsf{\Sigma}J \ar[r]^-{a_M^i} \ar[d] _-{f^i \otimes_B \operatorname{id}_{\mathsf{\Sigma}J} } & M^{i+1} \ar[d] ^-{f^{i+1}} \\
N^i \otimes _B \mathsf{\Sigma}J \ar[r]^-{a_N^i} &N ^{i+1}.
}$$ We denote by $\mathcal{C}^{gr}(T)$ the category of all tensor graded DG $T$-modules and tensor graded DG $T$-module homomorphisms.
** 19**. One can check that $\mathcal{C}^{gr}(T)$ is an abelian category. Note that $\mathcal{C}^{gr}(T)$ possesses two shift functors that are of different natures. For an object $M$ in $\mathcal{C}^{gr}(T)$ and an integer $n$, we have $\mathsf{\Sigma}^n M$, which is the $n$-th shift that is defined in [ 7](#para20221103a){reference-type="ref" reference="para20221103a"} using the DG $T$-module structure of $M$. On the other hand, using the notation from [ 18](#para20230114c){reference-type="ref" reference="para20230114c"}, the $n$-th shift of $M$ as a tensor graded DG $T$-module is defined to be $M[n]= \bigoplus_{i\in \mathbb{Z}} M^{i+n}$. Note that both of the functors $\mathsf{\Sigma}^n$ and $[n]$ are auto-functors on $\mathcal{C}^{gr}(T)$.
** 20**. Let $M$ be in $\mathcal{C}^{gr}(T)$ and use the notation from [ 18](#para20230114c){reference-type="ref" reference="para20230114c"}. For each integer $i$, the $i$-th tensor graded part $M^i$ is a DG $B$-submodule of $M$, and hence, $\partial^M (M^i) \subseteq M^i$. If $m \in M^i$ and $c \in T^1=\mathsf{\Sigma}J$, then $m\cdot c=a_M^i(m\otimes_Bc) \in M^{i+1}$. It then follows from the Leibniz rule that $$M^{i+1}\ni\partial^M (m \cdot c) = \partial^M(m) \cdot c \pm m \cdot d^T(c).$$
**Example 21**. Let $\mathbb{B}= B \otimes _A T$ and consider the DG $B^e$-module $(\mathbb{B}, \mathbb{D})$ constructed in [@NOY2 §3]. Note that the direct sum description $\mathbb{B} = \bigoplus_{i\in \mathbb{Z}} (B \otimes _A T^i)$ does not give a tensor graded structure on $(\mathbb{B}, \mathbb{D})$ because by the definition of $\mathbb{D}$ we have $\mathbb{D}(B \otimes _A T^i) \subseteq (B \otimes _A T^i) + (B \otimes _A T^{i-1})\not\subseteq B \otimes _A T^i$. Hence, $(\mathbb{B}, \mathbb{D}) \not\in \mathcal{C}^{gr}(T)$. On the other hand, $\partial ^{B \otimes _A T} (B \otimes _A T^i) \subseteq B \otimes _A T^i$ for all integers $i$ and we see that $(\mathbb{B}, \partial ^{B \otimes _A T})\in \mathcal{C}^{gr}(T)$.
**Example 22**. If $M$ is a DG $B$-module, then $M \otimes _B T$ is a tensor graded DG $T$-module by considering $(M\otimes_BT)^i:=M\otimes_BT^i$ with $a^i_{M\otimes_BT}:=\operatorname{id}_{M\otimes_BT^{i+1}}$ for all $i\geqslant 0$. (Here, we assume that $(M\otimes_BT)^i=0$ for all integers $i<0$.)
** 23**. The homotopy category $\mathcal{K}^{gr} (T)$ is defined in a natural way. In fact, the objects of $\mathcal{K}^{gr}(T)$ are the same objects as $\mathcal{C}^{gr} (T)$, i.e., all tensor graded DG $T$-modules. Also, assuming $M,N$ are tensor graded DG $T$-modules, we set $$\operatorname{Hom}_{\mathcal{K}^{gr} (T)} (M,N)= \operatorname{Hom}_{\mathcal{C}^{gr} (T)} (M,N) /\sim$$ where $\sim$ denotes the homotopy, i.e., for $f,g\in \operatorname{Hom}_{\mathcal{C}^{gr} (T)} (M,N)$ we have $f \sim g$ if and only if there exists a tensor graded $T$-linear map $h\colon M \to \mathsf{\Sigma}^{-1} N$ such that $f-g = h \partial^M + \partial ^N h$. It is straightforward to check that $\mathcal{K}^{gr}(T)$ is a triangulated category with the shift functor of the DG grading, i.e., with $\mathsf{\Sigma}$.
The derived category $\mathcal{D}^{gr}(T)$ is also defined naturally by taking the localization of $\mathcal{K}^{gr}(T)$ by the multiplicative system of all quasiisomorphisms.
** 24**. For tensor graded DG $T$-modules $M,N$ let $$\begin{gathered}
{}^*\!\operatorname{Hom}_{\mathcal{K}^{gr} (T)}(M,N)= \bigoplus _{n \in \mathbb{Z}} \operatorname{Hom}_{\mathcal{K}^{gr} (T)} (M, N[n])\\
\Gamma _M = {}^*\mathrm{End}_{\mathcal{K}^{gr} (T)}(M)\end{gathered}$$ where ${}^*\mathrm{End}_{\mathcal{K}^{gr} (T)}(M)$ denotes ${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr} (T)}(M,M)$. Note that $\Gamma _M$ is a tensor graded ring in which the multiplication is defined by the composition of morphisms and for an integer $n$, its $n$-th tensor graded part is $\Gamma ^n_M=\operatorname{Hom}_{\mathcal{K}^{gr} (T)}(M,M[n])$. We call $\Gamma _M = \bigoplus _{n\in \mathbb{Z}} \Gamma _M^n$ the *graded endomorphism ring of $M$ in $\mathcal{K}^{gr} (T)$*. Note that ${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr} (T)}(M,N)$ is naturally a graded right $\Gamma_M$- and left $\Gamma _N$-module.
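Concretely, the product of homogeneous elements of $\Gamma_{M}$ is composition after the appropriate shift in the tensor grading: $$\Gamma_{M}^{m} \times \Gamma_{M}^{n} \longrightarrow \Gamma_{M}^{m+n},
\qquad
(g, f) \longmapsto g\cdot f := g[n]\circ f
\qquad \text{for } f\colon M \to M[n] \ \text{ and } \ g\colon M \to M[m].$$ This matches the convention used for the powers $\text{\textsf{w}}_{N}^{\ell}$ in [ 29](#para20230115e){reference-type="ref" reference="para20230115e"} below.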
The following key lemma will be used frequently in the subsequent sections.
**Lemma 25** (Adjointness). *Assume that $M \in \mathcal{K}(B)$ and $N=\bigoplus_{i\in \mathbb{Z}}N^i \in \mathcal{K}^{gr}(T)$. Then, there exists a natural bijection $$\operatorname{Hom}_{\mathcal{C}(B)} (M, N^0) \xrightarrow{\zeta} \operatorname{Hom}_{\mathcal{C}^{gr}(T)} (M \otimes _B T, N)$$ defined by $\zeta(\alpha)(m \otimes_B t)=\alpha (m) t$ for all $\alpha\in \operatorname{Hom}_{\mathcal{C}(B)} (M, N^0)$, $m\in M$, and $t\in T$. Moreover, $\zeta$ induces the bijection $$\label{eq20230604a}
\operatorname{Hom}_{\mathcal{K}(B)} (M, N^0) \xrightarrow{\cong} \operatorname{Hom}_{\mathcal{K}^{gr}(T)} (M \otimes _B T, N).$$*
*Proof.* The inverse map $\zeta^{-1}$ is defined by $\zeta^{-1}(f)=f^0$, where $f^0\colon M \to N^0$ is the restriction of $f\in \operatorname{Hom}_{\mathcal{C}^{gr}(T)} (M \otimes _B T, N)$ to the $0$-th tensor graded part. It is straightforward to see that $\zeta$ and $\zeta^{-1}$ both preserve the homotopy relation. Therefore, the isomorphism [\[eq20230604a\]](#eq20230604a){reference-type="eqref" reference="eq20230604a"} follows. ◻
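For concreteness, the computations behind this bijection are the following: for $\alpha \in \operatorname{Hom}_{\mathcal{C}(B)}(M, N^{0})$, $m\in M$, and $t, t' \in T$, $$\zeta(\alpha)\big((m\otimes_B t)\cdot t'\big) = \alpha(m)\,(tt') = \big(\zeta(\alpha)(m\otimes_B t)\big)\cdot t',
\qquad
\zeta^{-1}(\zeta(\alpha))(m) = \zeta(\alpha)(m\otimes_B 1) = \alpha(m),$$ which show that $\zeta(\alpha)$ is $T$-linear and that $\zeta^{-1}\zeta = \operatorname{id}$; the remaining identity $\zeta\,\zeta^{-1} = \operatorname{id}$ holds because any $f \in \operatorname{Hom}_{\mathcal{C}^{gr}(T)}(M\otimes_B T, N)$ satisfies $f(m\otimes_B t) = f(m\otimes_B 1)\cdot t$.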
** 26**. Applying the shift functor $\mathsf{\Sigma}^n$ on the short exact sequence [\[eq20230113a\]](#eq20230113a){reference-type="eqref" reference="eq20230113a"} and taking the direct sum $\bigoplus_{n\geqslant 0}$, since $(T^+)^n=(\mathsf{\Sigma}J)^{\otimes_Bn}$, we obtain a short exact sequence $$\label{eq20230115a}
0 \to \mathsf{\Sigma}^{-1}(T^+ [1]) \to B \otimes _A T \to T \to 0$$ of DG $T$-modules in $\mathcal{C}^{gr}(T)$.
The following proposition is an immediate consequence of our discussion in [ 26](#para20230115d){reference-type="ref" reference="para20230115d"}.
**Proposition 27**. *Let $N$ be a semifree DG $B$-module. Then the following hold.*
1. *[\[triangles1\]]{#triangles1 label="triangles1"} There is a short exact sequence $$\label{eq20130118a}
0 \to N \otimes _B \mathsf{\Sigma}^{-1}(T^+ [1]) \to N \otimes _A T \xrightarrow{\theta_N} N \otimes _B T \to 0$$ in $\mathcal{C}^{gr}(T)$, where $\theta_N$ is the induced natural homomorphism.*
2. *[\[triangles2\]]{#triangles2 label="triangles2"} The following triangle exists in $\mathcal{K}^{gr}(T)$: $$N \otimes _B \mathsf{\Sigma}^{-1}(T^+ [1]) \to N \otimes _A T \xrightarrow{\theta_N} N \otimes _B T \xrightarrow{\omega^+_N} N \otimes _B T^+[1].$$*
3. *[\[triangles3\]]{#triangles3 label="triangles3"} The following triangle exists in $\mathcal{D}^{gr}(T)$: $$N \otimes^{\mathbf{L}}_B \mathsf{\Sigma}^{-1}(T^+ [1]) \to N \otimes^{\mathbf{L}}_A T \xrightarrow{\theta_N} N \otimes^{\mathbf{L}}_B T \xrightarrow{\omega^+_N} N \otimes^{\mathbf{L}}_B T^+[1].$$*
In the next result, we explicitly describe the connecting morphism $\omega^+_N$.
**Lemma 28**. *Let $N$ be a semifree DG $B$-module with a semifree basis $\{ e_{\lambda}\}_{\lambda \in \Lambda}$, and let $\partial ^N (e_{\lambda}) = \sum _{\mu < \lambda} e_{\mu} b_{\mu \lambda}$, where $b _{\mu \lambda} \in B$. Then, the tensor graded DG $T$-module homomorphism $\text{\textsf{w}}_N^+\colon N\otimes_BT\to N\otimes_BT^+[1]$ in $\mathcal{C}^{gr}(T)$ defined by $$\text{\textsf{w}}^+_N (e_{\lambda} \otimes _B t ) = \sum _{\mu < \lambda} e_{\mu} \otimes _B \delta (b_{\mu \lambda} ) \otimes _B t$$ for all $t\in T$ satisfies the equality $\omega_N^+=[\text{\textsf{w}}_N^+]$.*
*Proof.* Consider the right $B$-linear homomorphisms $\sigma_N\colon N\otimes_BB^e\to N\otimes_BJ$ and $\rho_N\colon N \to N\otimes_BB^e$ which are defined by the equalities $$\begin{gathered}
\sigma_N(e_{\lambda}\otimes_B (b_1\otimes_A b_2))=e_{\lambda}\otimes_B(b_1\otimes_A b_2-1\otimes_A b_1b_2)\\
\rho_N(e_{\lambda} b)=e_{\lambda}\otimes_B (1\otimes_A b)\end{gathered}$$ for a basis element $e_{\lambda}$ and homogeneous elements $b,b_1,b_2\in B$. We define $$\text{\textsf{w}}^+_N=(\sigma_N\partial^{N\otimes_BB^{e}}\rho_N)\otimes_B\operatorname{id}_T.$$ The assertion now follows from our computations in [@NOY3 Theorem 3.9]. ◻
Next, for a semifree DG $B$-module $N$, we define an element $\omega_N$ in the tensor graded endomorphism ring $\Gamma_{N\otimes _BT}$ that plays a crucial role in this paper.
** 29**. Let $N$ be a semifree DG $B$-module with a semifree basis $\mathcal{B}=\{ e_{\lambda}\}_{\lambda \in \Lambda}$. For $e_{\lambda}\in \mathcal{B}$ assume that $\partial ^N (e_{\lambda}) = \sum _{\mu < \lambda} e_{\mu} b_{\mu \lambda}$, where $b _{\mu \lambda} \in B$. Let $$\text{\textsf{w}}_N\colon N \otimes _B T \to N \otimes _B T [1]$$ be the composition $\varpi_N\text{\textsf{w}}^+_N$ in $\mathcal{C}^{gr}(T)$, where $\varpi_N\colon N \otimes _B T^+ [1]\to N \otimes _B T[1]$ is the natural inclusion map. Note that for all $e_{\lambda}\in \mathcal{B}$ and $t\in T$ we have $$\label{eq20230423a}
\text{\textsf{w}}_N (e_{\lambda} \otimes_B t) = \sum _{\mu < \lambda} e_{\mu} \otimes _B \delta (b_{\mu \lambda} ) \otimes _B t.$$ Therefore, $\omega_N:=[\text{\textsf{w}}_N] \in \Gamma_{N\otimes_BT} ^1= \operatorname{Hom}_{\mathcal{K}^{gr} (T)} (N \otimes _B T, N \otimes _B T[1]) \subseteq \Gamma_{N\otimes _BT}$.
For all integers $\ell\geqslant 1$, set $\text{\textsf{w}}_N^{\ell}:=\text{\textsf{w}}_N[\ell-1]\circ \cdots \circ \text{\textsf{w}}_N[1]\circ \text{\textsf{w}}_N$. Then, for all basis elements $e_{\lambda}$ and $t\in T$, one can compute that $$\label{eq20230822a}
\text{\textsf{w}}_N^{\ell}(e_{\lambda}\otimes_Bt)=\!\!\!\!\!\!\sum_{\mu_{\ell}<\cdots<\mu_{1}<\lambda}\!\!\!\!\!\!e_{\mu_{\ell}}\otimes_B\delta(b_{\mu_{\ell}\mu_{\ell-1}})\otimes_B\cdots\otimes_B\delta(b_{\mu_{2}\mu_{1}})\otimes_B\delta(b_{\mu_{1}\lambda})\otimes_Bt.$$
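As a minimal illustration of [\[eq20230423a\]](#eq20230423a){reference-type="eqref" reference="eq20230423a"} and [\[eq20230822a\]](#eq20230822a){reference-type="eqref" reference="eq20230822a"}, consider a hypothetical semifree $N$ (not one appearing in this paper) with semifree basis $e_{0} < e_{1}$, $\partial^{N}(e_{0}) = 0$, and $\partial^{N}(e_{1}) = e_{0}\, b$ for a homogeneous $b \in B$ of the appropriate degree. Then, for every $t\in T$, $$\text{\textsf{w}}_{N}(e_{0}\otimes_B t) = 0, \qquad \text{\textsf{w}}_{N}(e_{1}\otimes_B t) = e_{0}\otimes_B \delta(b)\otimes_B t, \qquad \text{\textsf{w}}_{N}^{2} = 0,$$ the last equality holding because there is no chain of indices $\mu_{2} < \mu_{1} < \lambda$ in this case.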
** 30**. The definition of $\text{\textsf{w}}_N$ from [ 29](#para20230115e){reference-type="ref" reference="para20230115e"} is independent of the choice of the semifree basis for the semifree DG $B$-module $N$ in the following sense: Let $\mathcal{B}'=\{e'_\lambda\}_{\lambda\in\Lambda}$ be another semifree basis of $N$ and $u\colon N\to N$ be the DG $B$-module automorphism such that $e'_\lambda=u(e_\lambda)$. Note that $\partial^N(e'_\lambda)=\sum_{\mu<\lambda}e'_{\mu}b_{\mu\lambda}$. Let $\text{\textsf{w}}_N=\text{\textsf{w}}_N^\mathcal{B}$ and consider $\text{\textsf{w}}_N^\mathcal{B'}$ which is defined by a similar formula as [\[eq20230423a\]](#eq20230423a){reference-type="eqref" reference="eq20230423a"}, i.e., $\text{\textsf{w}}_N^{\mathcal{B'}}(e'_\lambda \otimes_B t)= \sum_{\mu<\lambda} e'_{\mu}\otimes_B \delta(b_{\mu\lambda})\otimes_B t$. Then, we have $\text{\textsf{w}}_N^{\mathcal{B'}}=(u\otimes_B \operatorname{id}_T)\text{\textsf{w}}_N^{\mathcal{B}}u^{-1}$.
Also, it is important to note that $\text{\textsf{w}}_N$ preserves the DG degree and is not a map of DG degree $-1$.
In the following result, for a positive integer $n$, we denote by $B^{\oplus n}$ the direct sum of $n$ copies of the DG $R$-algebra $B$.
**Proposition 31**. *Let $n$ be a positive integer. Then $\omega_{B^{\oplus n}}=0$ in $\mathcal{K}^{gr}(T)$.*
*Proof.* Note that the short exact sequence [\[eq20130118a\]](#eq20130118a){reference-type="eqref" reference="eq20130118a"} is split for $N=B^{\oplus n}$; indeed, a splitting in $\mathcal{C}^{gr}(T)$ is induced componentwise by $t \mapsto 1 \otimes_A t$. Hence, $\omega_{B^{\oplus n}}^+=0$ in $\mathcal{K}^{gr}(T)$. This implies that $\omega_{B^{\oplus n}}=0$ in $\mathcal{K}^{gr}(T)$. ◻
**Proposition 32**. *Assume that $N$ is a semifree DG $B$-module that is bounded below (i.e., $N_i=0$ for all $i\ll 0$). Then, $\text{\textsf{w}}_N$ is locally nilpotent, that is, for each element $x \in N \otimes _B T$, there exists a positive integer $n_x$ such that $\text{\textsf{w}}_N^{n_x}(x)=0$, where $\text{\textsf{w}}_N^{n_x}$ is the composition morphism $\text{\textsf{w}}_N[n_x-1]\circ \cdots\circ \text{\textsf{w}}_N[1]\circ \text{\textsf{w}}_N$.*
Note that the locally nilpotent property does not mean that $\text{\textsf{w}}_N$ itself is nilpotent; compare this proposition with Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}.
*Proof.* Assume that $x\in N\otimes_B T$ is an element of DG degree $m$ and tensor degree $i$, that is, $x \in (N \otimes _B T^i)_m$. Then, for each integer $n$ we have $$\text{\textsf{w}}_N ^n(x) \in (N \otimes _B T^{i+n})_m = (N \otimes _B J^{\otimes_B (i+n)})_{-i-n+m}$$ which vanishes if $n \gg m-i$ since the lower bound of $N \otimes _B J^{\otimes_B (i+n)}$ is not less than that of $N$. Hence, such a positive integer $n_x$ with $\text{\textsf{w}}_N ^{n_x}(x)=0$ exists. ◻
The map $\omega_N$ can also be described functorially as follows.
**Proposition 33**. *Let $N,N'$ be semifree DG $B$-modules and $f\colon N \to N'$ be a DG $B$-module homomorphism. Then, there exists a commutative diagram $$\label{eq20230117a}
\xymatrix{
N \otimes _B T \ar[r]^-{\omega _N} \ar[d]_{[f\otimes_B \operatorname{id}_T]} & N \otimes _B T[1] \ar[d]^{[f\otimes_B \operatorname{id}_{T[1]}]} \\
N' \otimes _B T \ar[r]^-{\omega _{N'}} & N' \otimes _B T[1] \\
}$$ in $\mathcal{K}^{gr} (T)$.*
*Proof.* There is a commutative diagram $$\xymatrix{
0 \ar[r] & N \otimes _B (\mathsf{\Sigma}^{-1}T^+)[1] \ar[rr] \ar[d]^{f \otimes_B \operatorname{id}_{(\mathsf{\Sigma}^{-1}T^+)[1]}} && N \otimes _AT \ar[rr] \ar[d]^{f \otimes_A \operatorname{id}_T} && N \otimes _BT \ar[r] \ar[d]^{f \otimes _B \operatorname{id}_T}& 0 \\
0 \ar[r] & N' \otimes _B (\mathsf{\Sigma}^{-1}T^+)[1] \ar[rr] && N' \otimes _AT \ar[rr] && N' \otimes _BT \ar[r] & 0 \\
}$$ with exact rows in $\mathcal{C}^{gr}(T)$. This induces the commutative diagram $$\xymatrix{
N \otimes _B (\mathsf{\Sigma}^{-1}T^+)[1] \ar[r] \ar[d]^{[f \otimes _B \operatorname{id}_{(\mathsf{\Sigma}^{-1}T^+)[1]}]} & N \otimes _AT \ar[r] \ar[d]^{[f \otimes _A \operatorname{id}_T]} & N \otimes _BT \ar[rr]^-{\omega ^+ _N} \ar[d]^{[f \otimes _B \operatorname{id}_T]}&& N \otimes _B T^+[1] \ar[d]_{[f \otimes _B \operatorname{id}_{T^+[1]}]} \\
N' \otimes _B (\mathsf{\Sigma}^{-1}T^+)[1] \ar[r]& N' \otimes _AT \ar[r] & N' \otimes _BT \ar[rr]^-{\omega ^+_{N'}} && N' \otimes _B T^+[1] \\
}$$ with triangle rows in $\mathcal{K}^{gr}(T)$. On the other hand, the diagram $$\xymatrix{
N \otimes _B T^+[1] \ar[r] ^{[\varpi_N]} \ar[d] _{[f \otimes _B \operatorname{id}_{T^+[1]}]} & N \otimes _B T[1] \ar[d] ^{[f \otimes _B \operatorname{id}_{T[1]}]} \\
N' \otimes _B T^+[1] \ar[r] ^{[\varpi_{N'}]} & N \otimes _B T[1]
}$$ is also commutative. The existence of the commutative diagram [\[eq20230117a\]](#eq20230117a){reference-type="eqref" reference="eq20230117a"} now follows from the equalities $\omega _N=[\varpi_N]\circ \omega ^+_{N}$ and $\omega _{N'}=[\varpi_{N'}]\circ \omega ^+_{N'}$. ◻
The next result indicates that $\omega_N$ commutes with the elements of $\Gamma _{N \otimes _B T}$.
**Corollary 34**. *Let $N$ be a semifree DG $B$-module. If $f \in \mathrm{End}_{\mathcal{K}(B)} (N)$, then we have the equality $(f \otimes _B \operatorname{id}_{T[1]}) \omega_N = \omega_N (f \otimes _B\operatorname{id}_T)$ in $\mathcal{K}^{gr}(T)$.*
# Hom sets in the homotopy categories {#sec20230425s}
In this section, for a semifree DG $B$-module $N$ and integers $m$, we discuss the action of $\mathsf{\Sigma}^m\omega_N$ as an element in the ring $\Gamma_{N \otimes _B T}$ on the graded right $\Gamma_{N \otimes _B T}$-modules ${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N \otimes _B T, N \otimes _B (\mathsf{\Sigma}^m T))$, assuming that condition $\operatorname{\texttt{(AR1)}}$, defined in [ 35](#para20230424v){reference-type="ref" reference="para20230424v"} below, holds for $N$. Our main result in this section is Theorem [Theorem 37](#main){reference-type="ref" reference="main"} whose proof takes up the balance of this section and involves several steps, including an analysis of the mapping cone of $\omega_N$.
** 35**. We say that $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$ if:
1. [\[AR-1-1\]]{#AR-1-1 label="AR-1-1"} $N$ is a semifree DG $B$-module that is non-negatively graded;
2. [\[AR-1-2\]]{#AR-1-2 label="AR-1-2"} $B$ and $N$ are perfect considered as DG $A$-modules via $\varphi$; and
3. [\[AR-1-3\]]{#AR-1-3 label="AR-1-3"} $\operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^nB) = 0$ for all integers $n\geqslant 1$.
** 36**. Note that $\operatorname{\texttt{(AR1)}}$ is not necessarily preserved under degree shift. More precisely, if $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$ and $i$ is a non-zero integer, then $\mathsf{\Sigma}^iN$ may not satisfy conditions [\[AR-1-1\]](#AR-1-1){reference-type="eqref" reference="AR-1-1"} and [\[AR-1-3\]](#AR-1-3){reference-type="eqref" reference="AR-1-3"}.
**Theorem 37**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then the map $$\begin{aligned}
\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_B T, \mathsf{\Sigma}^m(N\otimes _BT[n])) &\xrightarrow{\mathsf{\Sigma}^m(\omega_N[n])\circ-}&\\
&\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_B T, \mathsf{\Sigma}^m(N\otimes _BT [n+1]))&\end{aligned}$$ which is defined by the left composition by $\mathsf{\Sigma}^m(\omega_N[n])$ is always surjective for all $n, m \geqslant 0$ and is bijective if either $m \geqslant 1$ and $n \geqslant 0$, or $m =0$ and $n\geqslant 1$. In particular, the left action of $\mathsf{\Sigma}^m\omega_N$ on ${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N \otimes _B T, N \otimes _B (\mathsf{\Sigma}^m T))$ is surjective for all $m \geqslant 0$.*
The proof of Theorem [Theorem 37](#main){reference-type="ref" reference="main"} will be given near the end of this section.
** 38**. For each integer $n\geqslant 0$, let $\overline{B} ^{\otimes_A n}= \overline{B} \otimes_A \cdots \otimes_A \overline{B}$ be the $n$-fold tensor product of $\overline{B}$ over $A$ with the convention that $\overline{B} ^{\otimes_A 0}=A$. Note that if $B$ is a perfect DG $A$-module via $\varphi$, then $\overline{B}$ is quasiisomorphic to a semifree DG $A$-module with a finite semifree basis that consists of basis elements with positive DG degrees. Hence, for each integer $n\geqslant 0$, the DG $A$-module $\overline{B}^{\otimes _A n}$ is quasiisomorphic to a semifree DG $A$-module with a finite semifree basis that consists of basis elements of DG degrees $\geqslant n$. It follows from [@NOY2 Proposition 2.9] that $J ^{\otimes _B n}$ is also quasiisomorphic to a semifree DG $B$-module with a finite semifree basis consisting of basis elements of degrees $\geqslant n$.
**Lemma 39**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then $$\label{eq20230124a}
\operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^m(J^{\otimes _B n})) = 0$$ for all integers $n \geqslant 0$ and $m > -n$.*
*Proof.* By our discussion in [ 38](#para20230116a){reference-type="ref" reference="para20230116a"} there is a finite filtration $$0=F_0 \subseteq F_1 \subseteq F_2 \subseteq \cdots \subseteq F_i \simeq J^{\otimes _B n}$$ of DG $B$-submodules such that each $F_k/F_{k-1}$ is a finite direct sum of copies of $\mathsf{\Sigma}^{r_k}B$ with $r_k \geqslant n$. Condition $\operatorname{\texttt{(AR1)}}$[\[AR-1-3\]](#AR-1-3){reference-type="eqref" reference="AR-1-3"} implies that $\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^m(\mathsf{\Sigma}^{r_k}B))=0$ for all $m > -n \geqslant-r_k$. Therefore, by induction on $k$ we have $\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^mF_k)=0$ for all $1\leqslant k\leqslant i$ and $m > -n$. In particular, the equality [\[eq20230124a\]](#eq20230124a){reference-type="eqref" reference="eq20230124a"} holds. ◻
**Lemma 40**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then*
1. *$\operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _A \mathsf{\Sigma}^m(J^{\otimes _B n})) = 0$ for all $n \geqslant 0$ and $m > -n$.*
2. *$\operatorname{Hom}_{\mathcal{K}^{gr}(T)} (N \otimes _B T, N \otimes _A \mathsf{\Sigma}^m(T[n])) =0$ for all $n \geqslant 0$ and $m > -2n$.*
*Proof.* (a) By $\operatorname{\texttt{(AR1)}}$[\[AR-1-2\]](#AR-1-2){reference-type="eqref" reference="AR-1-2"}, there is a finite filtration $$0=F_0 \subseteq F_1 \subseteq F_2 \subseteq \cdots \subseteq F_i \simeq N$$ of DG $A$-submodules, where each $F_k/F_{k-1}$ is a finite direct sum of copies of $\mathsf{\Sigma}^{r_k}A$ with $r_k \geqslant 0$. Thus, fixing an integer $n \geqslant 0$, the DG $B$-module $N \otimes _A J^{\otimes _B n}$ has a finite filtration $$0=F'_0 \subseteq F'_1 \subseteq F'_2 \subseteq \cdots \subseteq F'_i \simeq N\otimes _A J^{\otimes _B n}$$ of DG $B$-submodules, where $F'_k = F_k \otimes _A J^{\otimes _B n}$ with each $F'_k/F'_{k-1}$ being a finite direct sum of copies of $\mathsf{\Sigma}^{r_k}(J^{\otimes _B n})$ with $r_k \geqslant 0$. Lemma [Lemma 39](#first vanishing){reference-type="ref" reference="first vanishing"} implies that $\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^m(F'_k/F'_{k-1}))=0$ for all $1\leqslant k\leqslant i$ and $m >-n$. Now, by induction on $1 \leqslant k \leqslant i$, we see that $\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^m F'_k)=0$ for all $m >-n$, as well.
\(b\) By Lemma [Lemma 25](#lem20230116a){reference-type="ref" reference="lem20230116a"}, we only need to show that $\operatorname{Hom}_{\mathcal{K}(B)} (N, N \otimes _A \mathsf{\Sigma}^mT^n) =0$ for all $n \geqslant 0$ and $m > -2n$. Equivalently, we need to show that the equality $\operatorname{Hom}_{\mathcal{K}(B)} (N, N \otimes _A \mathsf{\Sigma}^{n+m}(J^{\otimes _B n})) =0$ holds for all $n \geqslant 0$ and $n+m > -n$. However, this follows from part (a) and the proof is complete. ◻
** 41**. For a semifree DG $B$-module $N$, the mapping cone $C(\omega_N)$ of $\omega_N$ is an object in $\mathcal{K}^{gr}(T)$. Moreover, the following triangle exists in $\mathcal{K}^{gr}(T)$: $$\label{cone triangle}
\mathsf{\Sigma}^{-1}C(\omega_N) \to N \otimes _B T \xrightarrow{\omega_N} N \otimes _B T[1] \to C(\omega_N).$$
**Lemma 42**. *Let $N$ be a semifree DG $B$-module. Then, there is a triangle $$\label{cone2}
\mathsf{\Sigma}^{-1}(N[1]) \to N \otimes _A\mathsf{\Sigma}T \to C(\omega_N) \to N[1]$$ in $\mathcal{K}^{gr}(T)$, where the DG $B$-module $N$ is regarded as a tensor graded DG $T$-module that is concentrated in tensor degree zero. In particular, for each integer $n$, taking the $n$-th tensor graded part, we have the following isomorphisms in $\mathcal{K}(B)$: $$C(\omega_N) ^n \cong
\begin{cases}
N \otimes _A\mathsf{\Sigma}T^n = N \otimes _A\mathsf{\Sigma}^{n+1}(J^{\otimes _Bn}) &\ \text{if}\ n \geqslant 0\\
N &\ \text{if}\ n= -1 \\
0 &\ \text{if}\ n\leqslant-2.
\end{cases}$$*
*Proof.* By the octahedron axiom, there exists a commutative diagram $$\xymatrix{ & N \otimes _B \mathsf{\Sigma}T \ar@{=}[r] & N \otimes _B \mathsf{\Sigma}T& \\
\mathsf{\Sigma}^{-1}(N[1]) \ar[r] & N \otimes _A\mathsf{\Sigma}T \ar[u] \ar[r] & C(\omega_N) \ar[r] \ar[u] & N[1] \\
\mathsf{\Sigma}^{-1}(N[1]) \ar@{=}[u] \ar[r] & N \otimes _B T^+ [1] \ar[r]^{[\varpi_N]} \ar[u] & N \otimes _B T [1] \ar[u] \ar[r] &N[1] \ar@{=}[u] \\
&N \otimes _B T \ar[u]^-{\omega_N^+} \ar@{=}[r] & N \otimes _B T \ar[u]^-{\omega_N} & }$$ in which all rows and columns are triangles in $\mathcal{K}^{gr}(T)$ and the second row is [\[cone2\]](#cone2){reference-type="eqref" reference="cone2"}. The last assertion is obtained by taking the $n$-th tensor graded part in [\[cone2\]](#cone2){reference-type="eqref" reference="cone2"} and noting that $N^n= 0$ for all $n \not= 0$ and $N \otimes _A \mathsf{\Sigma}T^n =0$ for all $n < 0$. ◻
**Proposition 43**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then $$\operatorname{Hom}_{\mathcal{K}^{gr}(T)} (N \otimes _BT, \mathsf{\Sigma}^m (C(\omega_N)[n]))\cong
\begin{cases}
0 & \text{if}\ n\geqslant 0\ \text{and}\ m \geqslant-2n,\ \text{or if}\ n\leqslant-2\\
\operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^m N) & \text{if}\ n= -1.
\end{cases}$$*
*Proof.* By Lemmas [Lemma 25](#lem20230116a){reference-type="ref" reference="lem20230116a"} and [Lemma 42](#cone){reference-type="ref" reference="cone"} we have $$\begin{aligned}
\operatorname{Hom}_{\mathcal{K}^{gr}(T)} (N \otimes _BT, \mathsf{\Sigma}^m(C(\omega_N)[n]))\!\!\!\!
&\cong&\!\!\!\! \operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^m(C(\omega_N)^n)) \\
&\cong&\!\!\!\!
\begin{cases}
\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^{m+1}(N \otimes _A T^n)) & \text{if}\ n \geqslant 0 \\
\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^m N) & \text{if}\ n=-1 \\
0 & \text{if}\ n \leqslant-2.
\end{cases}\end{aligned}$$ Now, the assertion follows from Lemma [Lemma 40](#vanishing){reference-type="ref" reference="vanishing"}. ◻
*Proof of Theorem [Theorem 37](#main){reference-type="ref" reference="main"}.* Applying the functor $\operatorname{Hom}_{\mathcal{K}^{gr}(T)} (N \otimes _BT, \mathsf{\Sigma}^m ((-)[n]))$ to the triangle [\[cone triangle\]](#cone triangle){reference-type="eqref" reference="cone triangle"} we obtain an exact sequence $$\begin{aligned}
\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_B T, \mathsf{\Sigma}^{m-1}(C(\omega_N)[n])) & \to \operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_B T, \mathsf{\Sigma}^m(N\otimes _BT[n]))\\ &\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \xrightarrow{\mathsf{\Sigma}^m(\omega_N[n])\circ-}
\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_B T, \mathsf{\Sigma}^m(N\otimes _BT [n+1]))\\
&\to \operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_B T, \mathsf{\Sigma}^m(C(\omega_N)[n]))\end{aligned}$$ of $R$-modules. By Proposition [Proposition 43](#cone vanishing){reference-type="ref" reference="cone vanishing"}, if $n \geqslant 0$, then $\operatorname{Ker}(\mathsf{\Sigma}^m(\omega_N[n])\circ-)=0$ for all $m \geqslant 1$ and $\operatorname{Coker}(\mathsf{\Sigma}^m(\omega_N[n])\circ-)=0$ for all $m \geqslant 0$. Moreover, if $n\geqslant 1$, then $\operatorname{Ker}(\mathsf{\Sigma}^m(\omega_N[n])\circ-)=0$ and $\operatorname{Coker}(\mathsf{\Sigma}^m(\omega_N[n])\circ-)=0$ for all $m \geqslant 0$. 0◻
We conclude this section with Theorem [Theorem 45](#thm20230425a){reference-type="ref" reference="thm20230425a"}, in which, in addition to $\operatorname{\texttt{(AR1)}}$, we consider another Ext vanishing condition on DG modules, defined next.
** 44**. We say that $\operatorname{\texttt{(AR2)}}$ holds for a DG $B$-module $N$ if $\operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^n N) = 0$ for all integers $n \geqslant 1$.
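We note in passing that, when $N$ is semifree, condition $\operatorname{\texttt{(AR2)}}$ can equivalently be read as an Ext vanishing condition, namely $$\operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^n N) \cong \operatorname{Ext}^n_B(N, N) = 0 \quad \text{for all integers } n \geqslant 1 ;$$ this is the form in which $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ reappear in the appendix.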
**Theorem 45**. *If $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ hold for a DG $B$-module $N$, then $${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^m T) =0$$ for all integers $m\geqslant 1$.*
*Proof.* Recall that ${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^m T)$ is a non-negatively graded module over the graded ring $\Gamma_{N\otimes _B T}$ with $${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^m T)^0\cong \operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^m N)$$ by Lemma [Lemma 25](#lem20230116a){reference-type="ref" reference="lem20230116a"}. Assuming $m\geqslant 1$, for an integer $n\geqslant 1$, by Theorem [Theorem 37](#main){reference-type="ref" reference="main"} we have $${}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^m T)^n=\omega_N ^n \cdot \operatorname{Hom}_{\mathcal{K}(B)}(N, \mathsf{\Sigma}^m N).$$ Now the assertion follows from the assumption that $\operatorname{\texttt{(AR2)}}$ holds for $N$. ◻
# The endomorphism rings {#sec20230425r}
In this section, for a semifree DG $B$-module $N$ that satisfies $\operatorname{\texttt{(AR1)}}$, we study the relationship between the endomorphism rings $\mathrm{End}_{\mathcal{K}(B)}(N)$ and $\Gamma_{N\otimes_BT}$. Our main results in this section are Theorems [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"} and [Theorem 53](#kernel of omega){reference-type="ref" reference="kernel of omega"}, which play an important role in Section [6](#sec20230422a){reference-type="ref" reference="sec20230422a"}.
Our first main result of this section, stated next, shows that if $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then $\Gamma _{N \otimes _B T}$ is an $\mathrm{End}_{\mathcal{K}(B)}(N)$-algebra generated by $\omega_N$, and if $\omega_N\neq 0$, then it is a non-zero divisor on $\Gamma _{N \otimes _B T}^+=\bigoplus _{n\geqslant 1} \Gamma _{N \otimes _B T}^n$.
**Theorem 46**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then we have the equality $$\Gamma _{N \otimes _B T} = \mathrm{End}_{\mathcal{K}(B)}(N) [\omega_N]$$ of rings, that is, every element of $\Gamma _{N \otimes _B T}$ can be written as a polynomial of the form $\alpha_0 + \omega_N\alpha_1+ \omega_N ^2 \alpha_2+ \cdots + \omega_N ^n\alpha_n$, where $\alpha _i \in \mathrm{End}_{\mathcal{K}^{gr}(T)}(N\otimes_BT)\cong\mathrm{End}_{\mathcal{K}(B)}(N)$ for all $0\leqslant i\leqslant n$. Moreover, the left multiplication by $\omega_N$ induces the bijections $\Gamma _{N \otimes _B T} ^n \xrightarrow{\omega_N \cdot-}\Gamma _{N \otimes _B T}^{n+1}$ for all $n\geqslant 1$.*
The proof of Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"} will be given after the following result, which is an immediate consequence of Lemma [Lemma 25](#lem20230116a){reference-type="ref" reference="lem20230116a"}.
**Proposition 47**. *Let $N \in \mathcal{K}(B)$. Then, there is an isomorphism $$\Gamma _{N\otimes_BT} ^0 \cong \mathrm{End}_{\mathcal{K}(B)} (N)$$ and $\Gamma ^n _{N \otimes _BT} =0$ for all integers $n<0$. In particular, $\Gamma _{N \otimes _BT}$ is a non-negatively graded ring and we have $$\Gamma _{N \otimes _BT} \cong
\bigoplus _{n\geqslant 0} \operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _B (\mathsf{\Sigma}J)^{\otimes_B n})
= \bigoplus _{n\geqslant 0} \operatorname{Ext}_B^n (N, N \otimes _B J^{\otimes_B n}).$$*
*Proof of Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"}.* By Proposition [Proposition 47](#cor to adjoint){reference-type="ref" reference="cor to adjoint"} we know that $\Gamma _{N \otimes _B T}$ is a non-negatively graded ring whose $0$-th graded part is $\mathrm{End}_{\mathcal{K}(B)}(N)$. Also, recall from [ 29](#para20230115e){reference-type="ref" reference="para20230115e"} that $\omega_N\in \Gamma^1_{N \otimes _B T}$. By Theorem [Theorem 37](#main){reference-type="ref" reference="main"}, for all $n\geqslant 0$, the $n$-th graded part of $\Gamma _{N \otimes _B T}$ is of the form $\omega_N ^n \cdot \mathrm{End}_{\mathcal{K}(B)}(N)$. Hence, we have the equality $$\label{direct decomp}
\Gamma _{N \otimes _B T} = \bigoplus _{n \geqslant 0} \omega_N ^n \cdot \mathrm{End}_{\mathcal{K}(B)}(N).$$ The last statement of this result follows from Theorem [Theorem 37](#main){reference-type="ref" reference="main"} as well. ◻
**Corollary 48**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then $\omega_N$ lies in the center of the ring $\Gamma _{N \otimes _B T}$.*
*Proof.* By Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"}, we have $\Gamma _{N \otimes _B T} = \mathrm{End}_{\mathcal{K}(B)}(N) [\omega_N]$. Now, the assertion follows from Corollary [Corollary 34](#cor20230117a){reference-type="ref" reference="cor20230117a"}. ◻
The rest of this section is devoted to our second main result, namely Theorem [Theorem 53](#kernel of omega){reference-type="ref" reference="kernel of omega"}. In this theorem, for a DG $B$-module $N$ that satisfies $\operatorname{\texttt{(AR1)}}$, we determine the kernel and cokernel of the map $\Gamma _{N \otimes _B T} \xrightarrow{\omega_N \cdot-} \Gamma _{N \otimes _B T}[1]$ induced by the left multiplication by $\omega_N$. For this purpose, we start with the following lemma.
**Lemma 49**. *Let $N$ be a DG $B$-module and $\mathfrak{p}$ be the ideal of the ring $\mathrm{End}_{\mathcal{K}(B)}(N)$ consisting of all morphisms that factor through a finite direct sum of copies of $B$. Considering $\mathfrak{p}$ as a subset of the ring $\Gamma _{N\otimes _B T}$ via the natural inclusion map $\mathrm{End}_{\mathcal{K}(B)}(N) \hookrightarrow \Gamma _{N\otimes _B T}$ we have the equality $\omega_N \cdot \mathfrak{p}=0$.*
*Proof.* Any morphism $f \in \mathfrak{p}$ is, by definition, a composition $N \xrightarrow{h} B ^{\oplus n}\xrightarrow{g} N$ for some integer $n \geqslant 1$. Thus, we obtain a commutative diagram $$\xymatrix{
N \otimes _BT \ar[rrrr]^-{f\otimes_B \operatorname{id}_T} \ar[rrd]_{h\otimes_B \operatorname{id}_T} \ar[ddd]_{\omega_N} &&&& N \otimes _BT \ar[ddd]^-{\omega_N} \\
&& B^{\oplus n} \otimes _B T\ar[rru]_{g \otimes_B \operatorname{id}_T} \ar[ddd]^-(0.3){\omega_{B^{\oplus n}} }&& \\ \\
N \otimes _BT[1] \ar[rrrr]^(0.3){f\otimes_B \operatorname{id}_{T[1]}} \ar[rrd]_{h\otimes_B \operatorname{id}_{T[1]}} &&&& N \otimes _BT[1] \\
&& B^{\oplus n} \otimes _B T[1]\ar[rru]_{g \otimes_B \operatorname{id}_{T[1]}} & \\
}$$ in which, by Proposition [Proposition 31](#cor20230124a){reference-type="ref" reference="cor20230124a"}, we have $\omega _{B ^{\oplus n}} =0$. Therefore, $\omega_N (f \otimes _B \operatorname{id}_T) =0$, which means that $\omega_N \cdot \mathfrak{p}= 0$ in $\Gamma _{N\otimes _B T}$. ◻
**Corollary 50**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then the ideal $\mathfrak{p}$ introduced in Lemma [Lemma 49](#lem20230118a){reference-type="ref" reference="lem20230118a"} is a graded (two-sided) ideal of the ring $\Gamma _{N\otimes _B T}$ that is concentrated in degree $0$.*
*Proof.* By Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"}, the ring $\Gamma _{N\otimes _B T}$ is generated by $\omega_N$ as an $\mathrm{End}_{\mathcal{K}(B)}(N)$-algebra. Hence, the ideal $\mathfrak{p}$ of $\mathrm{End}_{\mathcal{K}(B)}(N)$ is an ideal of $\Gamma _{N\otimes _B T}$ as well. ◻
** 51**. Assume that $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$. By Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"}, each morphism $\Gamma _{N \otimes _B T} ^n \xrightarrow{\omega_N \cdot-}\Gamma _{N \otimes _B T}^{n+1}$ with $n \geqslant 1$ is an isomorphism. On the other hand, by Theorem [Theorem 37](#main){reference-type="ref" reference="main"} the map $\Gamma _{N \otimes _B T}^{0} \xrightarrow{\omega_N \cdot-} \Gamma _{N \otimes _B T}^{1}$ is surjective, but in general it need not be injective, that is, it may have a non-trivial kernel. By Lemma [Lemma 25](#lem20230116a){reference-type="ref" reference="lem20230116a"}, note that the map $\Gamma _{N \otimes _B T}^{0} \xrightarrow{\omega_N \cdot-} \Gamma _{N \otimes _B T}^{1}$ is in fact the map $$\mathrm{End}_{\mathcal{K}(B)} (N) \to \operatorname{Hom}_{\mathcal{K}(B)}(N, N\otimes_B\mathsf{\Sigma}J)$$ which is induced by applying the functor $\operatorname{Hom}_{\mathcal{K}(B)} (N , -)$ to $\omega_N\colon N \to N \otimes _B \mathsf{\Sigma}J$. To be precise, the morphism $\omega_N\colon N \to N \otimes _B \mathsf{\Sigma}J$, which we are considering here, is in fact the restricted morphism $\omega_N|_N$, for which we use the same notation $\omega_N$. Since there is a triangle $N \otimes _B J \to N \otimes _A B \to N \xrightarrow{\omega_N} N \otimes _B \mathsf{\Sigma}J$ in $\mathcal{K}(B)$ which is obtained from the short exact sequence [\[basic sequence\]](#basic sequence){reference-type="eqref" reference="basic sequence"}, we have $$\operatorname{Ker}(\Gamma _{N \otimes _B T}^{0} \xrightarrow{\omega_N \cdot-} \Gamma _{N \otimes _B T}^{1})=\operatorname{Im}\left(\operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _A B) \to \mathrm{End}_{\mathcal{K}(B)}(N)\right).$$
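Spelling this out, only for the reader's convenience: applying $\operatorname{Hom}_{\mathcal{K}(B)}(N, -)$ to the triangle above yields the exact sequence $$\operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _A B) \to \mathrm{End}_{\mathcal{K}(B)}(N) \xrightarrow{\operatorname{Hom}_{\mathcal{K}(B)}(N, \omega_N)} \operatorname{Hom}_{\mathcal{K}(B)}(N, N\otimes_B \mathsf{\Sigma}J) ,$$ and the displayed description of the kernel is exactly the exactness of this sequence at the middle term.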
**Lemma 52**. *If $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, then for the ideal $\mathfrak{p}$ introduced in Lemma [Lemma 49](#lem20230118a){reference-type="ref" reference="lem20230118a"} we have a short exact sequence $$0 \to \mathfrak{p}\to \mathrm{End}_{\mathcal{K}(B)}(N) \xrightarrow{\operatorname{Hom}_{\mathcal{K}(B)}(N, \omega_N)} \operatorname{Hom}_{\mathcal{K}(B)}(N, N\otimes_B \mathsf{\Sigma}J) \to 0.$$*
*Proof.* The map $\operatorname{Hom}_{\mathcal{K}(B)}(N, \omega_N)$ is the multiplication (i.e., composition) by $\omega_N$ from the left. Hence, by Lemma [Lemma 49](#lem20230118a){reference-type="ref" reference="lem20230118a"} we have $\mathfrak{p}\subseteq \operatorname{Ker}(\operatorname{Hom}_{\mathcal{K}(B)}(N, \omega_N))$.
For the reverse containment, let $f \in \operatorname{Ker}(\operatorname{Hom}_{\mathcal{K}(B)}(N, \omega_N))$ be an arbitrary element. By [ 51](#par20230117a){reference-type="ref" reference="par20230117a"}, we have $f\in \operatorname{Im}\left(\operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _A B) \to \mathrm{End}_{\mathcal{K}(B)}(N)\right)$. Note that $N\otimes _AB$ is a perfect DG $B$-module, and hence, it has a finite filtration $$0=F_{0} \subseteq F_1 \subseteq \cdots \subseteq F_n \simeq N\otimes _A B$$ such that each $F_k /F_{k-1}$ is a direct sum of copies of $\mathsf{\Sigma}^{r_k}B$ with $r_k\geqslant k-1$. Since $\operatorname{Hom}_{\mathcal{K}(B)} (N, \mathsf{\Sigma}^i B)=0$ for all $i\geqslant 1$, we see that the natural inclusion $F_1 \hookrightarrow N \otimes _A B$ induces a surjective map $\operatorname{Hom}_{\mathcal{K}(B)} (N, F_1) \to \operatorname{Hom}_{\mathcal{K}(B)} (N, N \otimes_A B)$. Hence, we conclude that $f$ factors through $F_1$, which is a finite direct sum of copies of $B$. Thus, $f\in \mathfrak{p}$, and therefore we have the equality $\mathfrak{p}=\operatorname{Ker}(\operatorname{Hom}_{\mathcal{K}(B)}(N, \omega_N))$. ◻
The following result, which is our second main theorem of this section, follows from Corollary [Corollary 50](#cor20230118a){reference-type="ref" reference="cor20230118a"} and Lemma [Lemma 52](#lem20230118b){reference-type="ref" reference="lem20230118b"}.
**Theorem 53**. *Assume that $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$, and let $\mathfrak{p}$ be the ideal of $\mathrm{End}_{\mathcal{K}(B)}(N)$ introduced in Lemma [Lemma 49](#lem20230118a){reference-type="ref" reference="lem20230118a"}. Then, there is an exact sequence $$0 \to \mathfrak{p}\to \Gamma _{N \otimes _B T} \xrightarrow{\omega_N \cdot -} \Gamma _{N \otimes _B T}[1] \to \mathrm{End}_{\mathcal{K}(B)} (N)[1] \to 0$$ of graded $\Gamma _{N \otimes _B T}$-modules in which $\mathfrak{p}$ is regarded as a graded ideal of the ring $\Gamma _{N \otimes _B T}$ concentrated in degree $0$.*
# Naı̈ve liftings {#sec20230422a}
This section is devoted to the proof of Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}, which is our main result in this paper, and to its application, namely, Corollary [Corollary 61](#cor20230807a){reference-type="ref" reference="cor20230807a"}. The proof of this theorem draws on our entire work from the previous sections. We start by recalling the definition of naïve liftability, which was first introduced in [@NOY] and further studied in [@NOY1; @NOY3; @NOY2].
** 54**. Let $N$ be a semifree DG $B$-module and $N|_A$ denote $N$ regarded as a DG $A$-module via the DG $R$-algebra homomorphism $\varphi\colon A\to B$. We say $N$ is *naïvely liftable to $A$* if the map $\pi _N\colon N|_A\otimes_AB\to N$ defined by $\pi_N(n\otimes_Ab)=nb$ is a split DG $B$-module epimorphism. Equivalently, $N$ is naïvely liftable to $A$ if $\pi_N$ has a right inverse in the abelian category $\mathcal{C}(B)$.
** 55**. It is worth mentioning that if $B=A\langle X\rangle$ is a simple free extension of DG algebras, then naïve liftability is equivalent to the classical notions of lifting for bounded below semifree DG $B$-modules; see [@NOY Theorem 6.8].
**Theorem 56**. *Assume that $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$. Then, the following conditions are equivalent:*
1. *$N$ is naı̈vely liftable to $A$;*
2. *$\omega_N=0$ as an element of the ring $\Gamma_{N\otimes _B T}$;*
3. *$\omega_N$ is nilpotent in $\Gamma_{N\otimes _B T}$, i.e., there exists an integer $n\geqslant 1$ such that $\omega_N ^n =0$;*
4. *The natural inclusion $\mathrm{End}_{\mathcal{K}(B)}(N) \hookrightarrow \Gamma_{N\otimes _B T}$ is an isomorphism;*
5. *The ring $\Gamma_{N\otimes _B T}$ is finitely generated as a (right) $\mathrm{End}_{\mathcal{K}(B)}(N)$-module;*
6. *$\Gamma_{N\otimes _B T}^i = \operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _B T^i)=0$ for all integers $i\geqslant 1$;*
7. *$\Gamma_{N\otimes _B T}^i = \operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _B T^i)=0$ for some integer $i\geqslant 1$;*
8. *$\operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _B \mathsf{\Sigma}J)=0$;*
9. *$N$ is a direct summand of a finite direct sum of copies of $B$ in $\mathcal{D}(B)$.*
The proof of Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"} is given after the following lemma.
**Lemma 57**. *Let $N$ be a semifree DG $B$-module. Then, the following conditions are equivalent:*
1. *$N$ is naı̈vely liftable to $A$;*
2. *The short exact sequence [\[eq20130118a\]](#eq20130118a){reference-type="eqref" reference="eq20130118a"} is split in $\mathcal{C}^{gr}(T)$;*
3. *$\omega_N ^+ =0$ in $\mathcal{K}^{gr}(T)$;*
4. *$\omega_N =0$ in $\mathcal{K}^{gr}(T)$.*
*Proof.* The implication (i)$\implies$(ii) follows from the fact that $\theta_N=\pi_N\otimes_B\operatorname{id}_T$ in [\[eq20130118a\]](#eq20130118a){reference-type="eqref" reference="eq20130118a"}. The implications (ii) $\Longleftrightarrow$ (iii)$\implies$(iv) are trivial from the definition and Proposition [Proposition 27](#triangles){reference-type="ref" reference="triangles"}.
For (ii)$\implies$(i) assume that there is a map $\rho\colon N \otimes _B T \to N \otimes _A T$ in $\mathcal{C}^{gr}(T)$ such that $\theta_N \rho = \operatorname{id}_{N \otimes _BT}$. Note that $\theta_N$ and $\rho$ are tensor graded and $\pi_N=(\theta_N)^0$. Now we have $\pi_N (\rho)^0 = \operatorname{id}_{N}$, which means that $\pi_N$ splits and $N$ is naı̈vely liftable to $A$.
For (iv)$\implies$(iii) note that $T^+$ is a DG ideal of $T$ and $T/T^+ \cong B$. Hence, we obtain a short exact sequence $0 \to N \otimes _B T^+ \to N \otimes _B T \to N \to 0$ in $\mathcal{C}^{gr}(T)$ that yields a commutative diagram $$\xymatrix{
\mathsf{\Sigma}^{-1}N[1] \ar[r] & N \otimes _B T^+ [1] \ar[rr]^{[\varpi_N]} && N \otimes _B T [1] \ar[r] &N[1] \\
&N \otimes _B T \ar[u]_{\omega_N^+} \ar@{=}[rr] \ar@{.>}[ul]^{\gamma} && N \otimes _B T \ar[u]^-{\omega_N} & }$$ in $\mathcal{K}^{gr} (T)$ in which the first row is a triangle. If $\omega_N =0$ in $\mathcal{K}^{gr}(T)$, then $\omega_N^+$ factors through a morphism $\gamma\colon N \otimes _B T \to \mathsf{\Sigma}^{-1}N[1]$. On the other hand, we have $\gamma=0$ because $\operatorname{Hom}_{\mathcal{K}^{gr}(T)}(N\otimes_BT,\mathsf{\Sigma}^{-1}N[1])\cong \operatorname{Hom}_{\mathcal{K}(B)}(N,\left(\mathsf{\Sigma}^{-1}N[1]\right)^0)=0$. Therefore, $\omega_N^+=0$, as desired. ◻
*Proof of Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}.* The equivalence (i) $\Longleftrightarrow$ (ii) has been proven in Lemma [Lemma 57](#omega+){reference-type="ref" reference="omega+"}.
\(ii\) $\Longleftrightarrow$ (iii): The second assertion of Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"} assures that $\omega_N = 0$ if and only if $\omega_N ^n=0$ for some integer $n\geqslant 1$.
The equivalences (ii) $\Longleftrightarrow$ (iv) $\Longleftrightarrow$ (vi) $\Longleftrightarrow$ (vii) follow from Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"}.
The equivalence (ii) $\Longleftrightarrow$ (viii) follows from Theorem [Theorem 46](#omega generates end){reference-type="ref" reference="omega generates end"} as well because $$\operatorname{Hom}_{\mathcal{K}(B)}(N, N \otimes _B \mathsf{\Sigma}J) = \Gamma _{N\otimes _B T}^1 = \omega_N \cdot \Gamma _{N\otimes _B T}^0.$$
\(iv\) $\Longleftrightarrow$ (v) is trivial.
For (ix) $\Longleftrightarrow$ (ii), let $\mathfrak{p}$ be the ideal of the ring $\mathrm{End}_{\mathcal{K}(B)}(N)$ consisting of all morphisms that factor through a finite direct sum of copies of $B$. Then, we have
$\omega_N =0 \Longleftrightarrow \mathfrak{p}= \mathrm{End}_{\mathcal{K}(B)} (N) = \Gamma _{N \otimes _BT} \Longleftrightarrow \operatorname{id}_N\in \mathfrak{p}$
where the left equivalence follows from Theorem [Theorem 53](#kernel of omega){reference-type="ref" reference="kernel of omega"}. Note that $\operatorname{id}_N\in \mathfrak{p}$ means that $\operatorname{id}_N$ factors through $B^{\oplus n}$ for some $n\geqslant 0$, i.e., $N$ is a direct summand of $B^{\oplus n}$. ◻
** 58**. In Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}, if we further assume that $N$ is a semifree resolution of the $\operatorname{H}_0(B)$-module $\operatorname{H}_0(N)$, i.e., the natural augmentation map $N \to \operatorname{H}_0(N)$ is a quasiisomorphism, then condition (v) is equivalent to the following:
1. *The ring $\Gamma_{N\otimes _B T}$ is finitely generated as a (right) $\operatorname{H}_0(B)$-module.*
Note that since $N$ is a perfect DG $A$-module, $\operatorname{H}_0(N)$ is finitely generated over $\operatorname{H}_0(A)$ and we also have the ring homomorphism $\operatorname{H}_0(A) \to \operatorname{H}_0(B)$. Therefore, in this case, $\mathrm{End}_{\mathcal{K}(B)} (N) \cong \mathrm{End}_{\operatorname{H}_0(B)}(\operatorname{H}_0(N))$, where $\operatorname{H}_0(N)$ is finitely generated over $\operatorname{H}_0(B)$.
As an application of Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}, we can show Corollary [Corollary 61](#cor20230807a){reference-type="ref" reference="cor20230807a"} below which provides an affirmative answer to [@NOY2 Question 4.10] under the $\operatorname{\texttt{(AR1)}}$ condition. In order to explain this, we need the following preparations.
** 59**. Before our next discussion in [ 60](#para20230807s){reference-type="ref" reference="para20230807s"}, we need to make a special arrangement: for an integer $\ell$, a map $\omega_N^{\ell}\colon N\to N\otimes_B(\mathsf{\Sigma}J)^{\otimes_B\ell}$ was defined in [@NOY2 Remark 4.9]. This map is different from the $\ell$-th power of $\omega_N$ that we define in [ 29](#para20230115e){reference-type="ref" reference="para20230115e"} of the present paper as an element in $\Gamma_{N\otimes_BT}$. Hence, in order to avoid the confusion caused by using the same notation for two different objects, in this paper we use the notation $\chi^{\ell}_N$ instead of the notation $\omega_N^{\ell}$ defined in [@NOY2 Remark 4.9].
** 60**. Let $(\mathbb{B},\mathbb{D})$ be the semifree resolution of the DG $B^e$-module $B$ constructed in [@NOY2]. Following [@NOY2 §4], for each integer $\ell\geqslant 0$ we have that $(\mathbb{B}^{\leqslant\ell}, \mathbb{D}|_{\mathbb{B}^{\leqslant\ell}} )$ is a DG $B^e$-submodule of $(\mathbb{B}, \mathbb{D})$, where $$\mathbb{B} ^{\leqslant\ell} := B \otimes _A \left(\bigoplus _{n=0}^{\ell} (\mathsf{\Sigma}J) ^{\otimes _Bn}\right).$$ Note that $\mathbb{B} ^{\leqslant 0} = (B^e,\,d^{B^e})$. Moreover, let $\omega ^\ell\colon \mathbb{B} \to \mathbb{B}/\mathbb{B}^{\leqslant\ell}$ be the natural DG $B^e$-module homomorphism for each integer $\ell \geqslant 0$. For such integers, it follows from [\[eq20230822a\]](#eq20230822a){reference-type="eqref" reference="eq20230822a"} and [@NOY2 Remark 4.9] that $\text{\textsf{w}}_N^{\ell}=\chi_N^{\ell}\otimes_B\operatorname{id}_T$. Ignoring isomorphisms in $\mathcal{K}(B)$, we see that $\omega_N^{\ell}=[\chi_N^{\ell}\otimes_B\operatorname{id}_T]=[\operatorname{id}_N\otimes_B\omega^{\ell}\otimes_B\operatorname{id}_T]$.
Let $N$ be a semifree DG $B$-module. By [@NOY2 Theorem 4.8], the following conditions are equivalent.
1. $N$ is naı̈vely liftable to $A$;
2. The DG $B$-module homomorphism $\operatorname{id}_N \otimes _B \omega^0$ is null-homotopic;
3. The DG $B$-module homomorphisms $\operatorname{id}_N \otimes _B \omega^\ell$ are null-homotopic for *all* integers $\ell \geqslant 0$.
As we mention in [@NOY2 Question 4.10], it is natural to ask whether these conditions are equivalent to the following:
4. The DG $B$-module homomorphism $\operatorname{id}_N \otimes _B \omega^\ell$ is null-homotopic for *some* integer $\ell\geqslant 1$.
As we stated above, the following result provides an affirmative answer to this question under the $\operatorname{\texttt{(AR1)}}$ condition and follows immediately from [ 60](#para20230807s){reference-type="ref" reference="para20230807s"} along with the fact that conditions (ii) and (iii) in Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"} are equivalent.
**Corollary 61**. *Assume that $\operatorname{\texttt{(AR1)}}$ holds for a DG $B$-module $N$. Under the setting of [ 60](#para20230807s){reference-type="ref" reference="para20230807s"}, conditions (i) through (iv) are equivalent.*
** 62**. It is worth highlighting that if $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ hold for a DG $B$-module $N$, then by Theorem [Theorem 45](#thm20230425a){reference-type="ref" reference="thm20230425a"} we automatically have the vanishing of $\Gamma_{N\otimes_BT}$-modules $$\begin{gathered}
{}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^1 T) \\
{}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^2 T) \\
\vdots\end{gathered}$$ However, naı̈ve liftability of $N$ is independent of the above modules and by Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"}, it is detected only by the vanishing of $\omega_N$ as an element of the $\Gamma_{N\otimes_BT}$-module $$\Gamma_{N\otimes_BT}={}^*\!\operatorname{Hom}_{\mathcal{K}^{gr}(T) } (N \otimes _B T, N \otimes _B \mathsf{\Sigma}^0 T).$$
We conclude this section with the following conjecture, which we call the *Naïve Lifting Conjecture*; compare with the conjecture of the same name in [@NOY1].
**Conjecture 63**. *If $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ hold for a DG $B$-module $N$, then $N$ is naı̈vely liftable to $A$, i.e., one of the equivalent conditions in Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"} holds.*
# The Auslander-Reiten Conjecture and conditions $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ {#sec20230521a}
In this appendix, we explain that conditions $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ are derived from translating the assumptions in the [A]{.ul}uslander-[R]{.ul}eiten Conjecture (Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"}) into the DG setting; see [ 68](#para20230521a){reference-type="ref" reference="para20230521a"}. This is why we call these conditions "`AR`".
**Proposition 64**. *Let $N$ be a semifree DG $B$-module with $\operatorname{H}_i(N)=0$ for all $i\neq 0$. Then, $\operatorname{Hom}_{\mathcal{K}(B)}(N,\mathsf{\Sigma}^{\ell}N)=0$ for all $\ell<0$.*
*Proof.* Using the assumption that $\operatorname{H}_i(N)=0$ for all $i<0$, we may assume that $N$ has a semifree filtration $$0=F_{-1}\overset{\iota_{-1}}\hookrightarrow F_0\overset{\iota_0}\hookrightarrow \cdots\hookrightarrow F_{i-1}\overset{\iota_{i-1}}\hookrightarrow F_i\overset{\iota_{i}}\hookrightarrow\cdots \hookrightarrow N$$ such that for all $i\geqslant 0$, $$F_{i-1}\xrightarrow{\iota_{i-1}} F_i\to \bigoplus(\mathsf{\Sigma}^{i} B)\to \mathsf{\Sigma}F_{i-1}$$ is a triangle in $\mathcal{K}(B)$. It follows from the isomorphism $\operatorname{Hom}_{\mathcal{K}(B)}(\mathsf{\Sigma}^nB,N)\cong \operatorname{H}_n(N)$ that $\operatorname{Hom}_{\mathcal{K}(B)}(F_n,\mathsf{\Sigma}^{\ell}N)\cong \operatorname{Hom}_{\mathcal{K}(B)}(F_0,\mathsf{\Sigma}^{\ell}N)\cong \prod\operatorname{H}_{-\ell}(N)=0$ for all $n\geqslant-1$ and $\ell\leqslant-1$. Note that there is a triangle $$\bigoplus_{n=-1}^{\infty} F_n\xrightarrow{\psi}\bigoplus_{n=-1}^{\infty} F_n\to N\to \mathsf{\Sigma}\left(\bigoplus_{n=-1}^{\infty} F_n\right)$$ where $\psi$ is defined by $\psi|_{F_n}=\operatorname{id}_{F_n}-\iota_n$. This triangle induces the exact sequence $$\operatorname{Hom}_{\mathcal{K}(B)}(\mathsf{\Sigma}(\bigoplus_{n=-1}^{\infty} F_n),\mathsf{\Sigma}^{\ell}N)\to\operatorname{Hom}_{\mathcal{K}(B)}(N,\mathsf{\Sigma}^{\ell}N)\to
\operatorname{Hom}_{\mathcal{K}(B)}(\bigoplus_{n=-1}^{\infty} F_n,\mathsf{\Sigma}^{\ell}N).$$ Thus, since $\operatorname{Hom}_{\mathcal{K}(B)}(\mathsf{\Sigma}(\bigoplus_{n=-1}^{\infty} F_n),\mathsf{\Sigma}^{\ell}N)\cong \prod_{n=-1}^{\infty}\operatorname{Hom}_{\mathcal{K}(B)}(F_n,\mathsf{\Sigma}^{\ell-1}N)=0$ for all $\ell\leqslant 0$, we conclude that $\operatorname{Hom}_{\mathcal{K}(B)}(N,\mathsf{\Sigma}^{\ell}N)=0$ for all $\ell\leqslant-1$. ◻
** 65**. In contrast to Proposition [Proposition 64](#para20230520a){reference-type="ref" reference="para20230520a"}, one can construct an example of a semifree DG $B$-module $N$ such that $\operatorname{H}_i(N)\neq 0$ for some $i\neq 0$ and at the same time $\operatorname{Hom}_{\mathcal{K}(B)}(N,\mathsf{\Sigma}^{-1}N)\neq 0$. To see this, take $B=R$, let $a\neq 0$ be an element in the ring $R$ with $a^2=0$, and let $N$ be a semifree DG $R$-module with a semifree basis $\{e_0,e_1\}$, where $|e_0|=0$, $|e_1|=1$, and $\partial^N(e_1)=e_0a$. In other words, $N$ is the Koszul complex $K^R(a)$. Then, $\operatorname{H}_1(N)\neq 0$ because $a$ is a zero divisor in $R$. Now, considering the DG $R$-module homomorphism $f\colon N\to \mathsf{\Sigma}^{-1} N$ defined by $f(e_0)=e_1a$ and $f(e_1)=0$, we see that $f$ is a non-zero element in $\operatorname{Hom}_{\mathcal{K}(B)}(N,\mathsf{\Sigma}^{-1}N)$.
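For completeness, we sketch why $f$ is not null-homotopic; this is only a degree count, carried out with the convention $(\mathsf{\Sigma}^{-1}N)_i = N_{i+1}$ (signs play no role in the argument). A null-homotopy for $f$ would be a map $h$ of degree $1$, and since $N$ is concentrated in degrees $0$ and $1$, we would have $$h(N_i) \subseteq (\mathsf{\Sigma}^{-1}N)_{i+1} = N_{i+2} = 0 \quad (i = 0, 1) ,$$ so that $h = 0$; thus $f$ could only be null-homotopic if $f = 0$, whereas $f(e_0) = e_1a \neq 0$ because $e_1$ is a basis element and $a \neq 0$. Note also that the identities $f(\partial^N(e_1)) = f(e_0a) = e_1a^2 = 0$ and $\partial^{\mathsf{\Sigma}^{-1}N}(f(e_0)) = \pm e_0a^2 = 0$, which use the assumption $a^2 = 0$, confirm that $f$ is indeed a morphism of DG $R$-modules.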
The following is proved by the same technique as in the proof of Proposition [Proposition 64](#para20230520a){reference-type="ref" reference="para20230520a"}.
**Proposition 66**. *If $X$ is a DG $B$-module with $\operatorname{H}_i(X)=0$ for all $i\geqslant 1$ and $F$ is a semifree DG $B$-module with $\operatorname{H}_i(F)=0$ for all $i<0$, then $\operatorname{Hom}_{\mathcal{K}(B)}(F,\mathsf{\Sigma}^iX)=0$ for all $i<0$.*
**Corollary 67**. *If $\operatorname{H}_i(B)=0$ for all $i\geqslant 1$ and $N$ is a semifree DG $B$-module with $\operatorname{H}_i(N)=0$ for all $i<0$, then $\operatorname{Hom}_{\mathcal{K}(B)}(N,\mathsf{\Sigma}^iB)=0$ for all $i<0$.*
** 68**. We work in the setting of Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} and discussion [ 2](#para20230424s){reference-type="ref" reference="para20230424s"}. Note that the $R$-module $M$ is regarded as a DG $Q'$-module via the natural augmentation $Q' \to R$. Assume that $N\xrightarrow{\simeq} M$ is a semifree resolution of the DG $Q'$-module $M$. When we translate the Ext vanishing assumptions for $M$ in Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} into the DG setting, it follows from Proposition [Proposition 64](#para20230520a){reference-type="ref" reference="para20230520a"} and Corollary [Corollary 67](#para20230520b'){reference-type="ref" reference="para20230520b'"} (with $B=Q'$) that $\operatorname{Ext}^i_{Q'}(N,N\oplus Q')=0$ for all $i\neq 0$ (i.e., $\operatorname{\texttt{(AR1)}}$ and $\operatorname{\texttt{(AR2)}}$ hold for the DG $Q'$-module $N$). With this explanation, if Conjecture [Conjecture 63](#conj20230520a){reference-type="ref" reference="conj20230520a"} holds (i.e., if one of the equivalent conditions in Theorem [Theorem 56](#equivalent conditions){reference-type="ref" reference="equivalent conditions"} holds for $N$ under the Ext vanishing assumptions of Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} for $M$), then Conjecture [Conjecture 1](#conj20230122a){reference-type="ref" reference="conj20230122a"} holds.
10 T. Araya and Y. Yoshino, *Remarks on a depth formula, a grade inequality and a conjecture of Auslander*, Comm. Algebra **26** (1998), no. 11, 3793--3806.
M. Auslander, S. Ding, and Ø. Solberg, *Liftings and weak liftings of modules*, J. Algebra **156** (1993), 273--397.
M. Auslander, I. Reiten, *On a generalized version of the Nakayama conjecture*, Proc. Amer. Math. Soc. **52** (1975), 69--74.
L. L. Avramov, *Infinite free resolutions*, Six lectures on commutative algebra (Bellaterra, 1996), Progr. Math., vol. 166, Birkhäuser, Basel, 1998, pp. 1--118.
L. L. Avramov and R.-O. Buchweitz, *Support varieties and cohomology over complete intersections*, Invent. Math. **142** (2000), no. 2, 285--318.
L. L. Avramov, R.-O. Buchweitz, and L. M. Şega, *Extensions of a dualizing complex by its ring: commutative versions of a conjecture of Tachikawa*, J. Pure Appl. Algebra **201** (2005), no. 1-3, 218--239.
L. L. Avramov, H.-B. Foxby, and S. Halperin, *Differential graded homological algebra*, in preparation.
L. L. Avramov and S. Halperin, *Through the looking glass: a dictionary between rational homotopy theory and local algebra*, Algebra, algebraic topology and their interactions (Stockholm, 1983), 1--27, Lecture Notes in Math., 1183, Springer, Berlin, 1986.
L. L. Avramov, S. B. Iyengar, S. Nasseh, and S. Sather-Wagstaff, *Homology over trivial extensions of commutative DG algebras*, Comm. Algebra **47** (2019), 2341--2356.
L. L. Avramov, S. B. Iyengar, S. Nasseh, and K. Sather-Wagstaff, *Persistence of homology over commutative noetherian rings*, J. Algebra, **610** (2022), 463--490.
Y. Félix, S. Halperin, and J.-C. Thomas, *Rational homotopy theory*, Graduate Texts in Mathematics, vol. 205, Springer-Verlag, New York, 2001.
T. H. Gulliksen and G. Levin, *Homology of local rings*, Queen's Papers in Pure and Applied Mathematics, No. 20 (1969), Queen's University, Kingston, Ontario, Canada.
C. Huneke, D. A. Jorgensen, and R. Wiegand, *Vanishing theorems for complete intersections*, J. Algebra **238** (2001), no. 2, 684--702.
C. Huneke, L. M. Şega, and A. N. Vraciu, *Vanishing of Ext and Tor over some Cohen-Macaulay local rings*, Illinois J. Math. **48** (2004), no. 1, 295--317.
D. A. Jorgensen, *Finite projective dimension and the vanishing of ${\rm Ext}_R(M,M)$*, Comm. Algebra **36** (2008), no. 12, 4461--4471.
S. Nasseh, M. Ono, and Y. Yoshino, *The theory of $j$-operators with application to (weak) liftings of DG modules*, J. Algebra, **605** (2022), 199--225.
S. Nasseh, M. Ono, and Y. Yoshino, *Naı̈ve liftings of DG modules*, Math. Z., **301** (2022), no. 1, 1191--1210.
S. Nasseh, M. Ono, and Y. Yoshino, *Obstruction to naı̈ve liftability of DG modules*, preprint 2022 (`arXiv:2109.00607`).
S. Nasseh, M. Ono, and Y. Yoshino, *On the semifree resolutions of DG algebras over the enveloping DG algebras*, to appear in Comm. Algebra (`arXiv:2301.12267`).
S. Nasseh and S. Sather-Wagstaff, *Vanishing of Ext and Tor over fiber products*, Proc. Amer. Math. Soc. **145** (2017), no. 11, 4661--4674.
S. Nasseh and S. Sather-Wagstaff, *Liftings and Quasi-Liftings of DG modules*, J. Algebra, **373** (2013), 162--182.
S. Nasseh and S. Sather-Wagstaff, *Geometric aspects of representation theory for DG algebras: answering a question of Vasconcelos*, J. London Math. Soc., **96** (2017), no. 1, 271--292.
S. Nasseh and R. Takahashi, *Local rings with quasi-decomposable maximal ideal*, Math. Proc. Cambridge Philos. Soc. **168** (2020), no. 2, 305--322.
S. Nasseh and Y. Yoshino, *On Ext-indices of ring extensions*, J. Pure Appl. Algebra **213** (2009), no. 7, 1216--1223.
S. Nasseh and Y. Yoshino, *Weak liftings of DG modules*, J. Algebra, **502** (2018), 233--248.
M. Ono and Y. Yoshino, *A lifting problem for DG modules*, J. Algebra **566** (2021), 342--360.
L. M. Şega, *Vanishing of cohomology over Gorenstein rings of small codimension*, Proc. Amer. Math. Soc. **131** (2003), no. 8, 2313--2323.
L. M. Şega, *Self-tests for freeness over commutative Artinian rings*, J. Pure Appl. Algebra **215** (2011), no. 6, 1263--1269.
J. Tate, *Homology of Noetherian rings and local rings*, Illinois J. Math. **1** (1957), 14--27.
Y. Yoshino, *The theory of L-complexes and weak liftings of complexes*, J. Algebra **188** (1997), no. 1, 144--183.
[^1]: Y. Yoshino was supported by JSPS Kakenhi Grant 19K03448.
---
abstract: |
We study the formation of singularities for the curvature flow of networks when the initial data is symmetric with respect to a pair of perpendicular axes and has two triple junctions. We show that, in this case, the set of singular times is finite.
address:
- Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy.
- Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy.
author:
- Matteo Novaga
- Luciano Sciaraffia
title: Singularities of the network flow with symmetric initial data
---
# Introduction {#sec:intro}
The *Mean Curvature Flow* is one of the best studied geometric evolution equations, in particular its one-dimensional version, often called the *Curve Shortening Flow*. This last flow is completely understood thanks to the works of Gage--Hamilton and Grayson [@GageHamilton-Convex1986; @Grayson-RoundPoints1987]: a closed, embedded curve in the plane becomes convex in finite time and then shrinks to a *round point*. A natural and interesting generalisation of this flow is the *Network Flow*, also known as *Multiphase Mean Curvature Flow* in higher dimensions, where instead of considering a single curve the underlying geometric object is a *regular network*, that is, a finite union of embedded curves which can meet only at their endpoints, and at each multiple junction only three curves meet forming equal angles of $2\pi/3$ (more precise definitions will be provided in Section [2](#sec:setup){reference-type="ref" reference="sec:setup"}). This last condition, called after Herring, arises naturally because of the variational structure of the flow, since these triple junctions minimise length locally.
The network flow has been thoroughly studied, although a complete understanding as in the case of a single curve is far from being achieved. One of the first results in this line comes from Bronsard--Reitich [@Bronsard-Reitich-1993], where they showed short time existence for the flow of *triods*, i.e. networks consisting of three curves and one triple junction, and with Neumann boundary conditions. Subsequently, the works of Mantegazza--Novaga--Tortorelli [@MNT-NetsI-2004] and later Magni--Mantegazza--Novaga [@MMN-NetsII-2016] studied the singularity formation under the flow, with Dirichlet boundary conditions, stating in which cases the flow exists for all times and reaches in the limit the Steiner tree spanned by the three endpoints.
More recently, in [@Goesswein-Menzel-Pluda-Existence2023; @MNPS-Survey2018] a general proof of existence of a solution to the network flow with regular initial data was given. It was also shown that the flow can be extended to a maximal existence time at which, if finite, a singularity forms: either the $L^2$-norm of the curvature blows up, or the length of one of the curves goes to zero. Moreover, as in the case of the curve shortening flow, *geometric uniqueness* holds: every other solution starting at the same initial network is just a reparametrisation of the flow. Thus, to give a complete description of the flow, it becomes crucial to understand and classify the singularities that can arise.
In contrast to what happens in the case of a single curve, it could be possible during the network flow that the length of one or more curves goes to zero while the curvature of the network remains bounded. This kind of phenomenon is often called a *type-0 singularity*, and allows the flow to approach an irregular network with junctions of multiplicity greater than three. Because of this, it becomes a compelling question to understand if it is possible to start a regular flow when the initial network fails to satisfy the Herring condition. It turns out that, thanks to the results of [@Ilmanen-Neves-Schulze-Shortexistence-2019; @LMPS-2021], such an irregular network can serve as initial data for a regular flow, enabling its continuation beyond some singularities, albeit not in a unique way. Such flows are constructed by locally replacing an irregular junction by one of the self-similar, tree-like expanding solitons obtained in [@Mazzeo-Saez-2011] according to the number of curves concurring at said junction. In this way, new edges might "emerge" flowing out of the junction, and the nonuniqueness of the continuation is directly tied to the nonuniqueness of the expanders. In any case, the number of possible geometric solutions is classified by the number of expanders at each irregular junction [@LMPS-2021]\*Cor. 8.7. Thanks to these findings, it becomes possible to continue the flow past a type-0 singularity. We remark that it could also be possible to restart the flow in other potential scenarios where the curvature does blow up, but these are not within the scope of this discussion. The interested reader may consult the discussion in [@MNPS-Survey2018]\*Sec. 10.4.
Much of the analysis of singularities can be carried out conditionally on the so-called *multiplicity-one conjecture*, which states that every limit of parabolic rescalings of the flow around a fixed point is a flow of embedded networks with multiplicity one. Indeed, Mantegazza--Novaga--Pluda [@MNP-Type0trees2022] showed, conditionally on this statement, that if there are no loops in the initial network, or in other words, the initial datum is a tree, then only type-0 singularities can occur. Hence, in the case of trees the flow could in principle be continued indefinitely.
It turns out that the multiplicity-one conjecture is true for networks with at most two triple junctions, as it was shown in [@MNP-2juncs-2017]. As a result, a complete description of the possible singularities was obtained in this case. In particular, if no loop disappears at a singular time, there is only one sensible way to continue the flow, which the authors call the *standard transition* (cf. [@LMPS-2021]\*Cor. 8.8). This situation arises when the two triple junctions coalesce into a single point, forming a quadruple junction with equal opposing angles of $\pi/3$ and $2\pi/3$. However, the previous results only give a short time existence with no uniform control over the lifespan of the flow, which makes it difficult to rule out the possible accumulation of singular times. This is the only question remaining to be answered to give a complete description and a global time existence theorem in this case.
In this note we address this problem in the case of networks with two triple junctions which are symmetric with respect to a pair of perpendicular axes. We will refer to this class of networks simply as *symmetric*. With this condition, there are only four possible cases: the *tree*, the *lens*, the $\theta$-*network*, and the *eyeglasses*, which are illustrated in Figure [\[fig:net-types\]](#fig:net-types){reference-type="ref" reference="fig:net-types"}.
Before stating our main result, we mention that the case of the lens has already been studied and settled in a slightly more general case in [@Bellettini-Novaga-Lens2011; @Schnuerer-etal-Lens2011]. In complete analogy with [@GageHamilton-Convex1986; @Grayson-RoundPoints1987], it is proved that a (non-compact) lens-shaped network which is symmetric with respect to one axis eventually becomes convex and approaches a straight line in finite time, as the enclosed region disappears and the curvature blows up. The following theorem gives a complete description in the remaining cases.
**Theorem 1**. *Let $\Gamma_0$ be a symmetric regular network with two triple junctions. Then there is a maximal time $T>0$ and a unique network flow $\{ \Gamma(t) \}_{0 \leq t < T}$ with initial data $\Gamma_0$, such that the set of its singular times is a finite subset of $(0,T]$; in particular, there is no accumulation of singularities. Moreover:*
- *if $\Gamma_0$ is a tree, then $T = \infty$ (global existence) and $\lim_{t \to \infty} \Gamma(t)$ is either a (standard) cross or a Steiner tree;*
- *otherwise $T < \infty$, $\Gamma(t)$ becomes eyeglasses-shaped after the last type-0 singularity, and the curvature blows up as the enclosed regions vanish with $t \uparrow T$.*
**Remark 2**. *In the case of a tree, we cannot rule out a singularity at infinity, as the example in [@Pluda-Pozzetta-Lojasiewicz2023]\*Thm. 6.1 shows. There the authors construct a globally defined flow which stays regular for all times and converges to a cross in infinite time.*
Let us briefly describe the dynamics in this situation. Since the initial datum is symmetric, it is easy to see that the evolution also stays symmetric until the first singularity forms. If we encounter a type-0 singularity, i.e. the length of one of the curves goes to zero and the curvature of the network remains bounded, then again by symmetry the vanishing curve must be the straight edge passing through the origin. In this way, the two triple junctions collide, and we can then apply the results in [@Ilmanen-Neves-Schulze-Shortexistence-2019]\*Thm. 1.1 and [@LMPS-2021]\*Thm. 1.1 to restart the flow (cf. [@MNP-2juncs-2017]\*Thm. 6.1). Since there is a unique self-similar expanding soliton flowing out of a standard cross, and it has the same symmetries as the cross [@Mazzeo-Saez-2011]\*Prp. 2.2, we may conclude that the evolution remains symmetric as before. A tree transitions to a tree, and a $\theta$-network transitions to eyeglasses and vice-versa. This process can continue as long as the curvature remains bounded, and as stated, this can only happen a finite number of times, so any oscillatory behaviour is excluded.
The proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} relies on a result by Angenent [@AngParabolicII91]\*Thm 1.3, which we present as Proposition [Proposition 10](#prp:intersecs){reference-type="ref" reference="prp:intersecs"}, adapted to this singular case. Its proof is grounded in the *Sturmian theorem*, as stated by Angenent [@AngSing91]\*Thm. 2.1 (cf. [@AngZero88]\*Thms. C and D). In essence, this theorem asserts that if $u \in C^\infty(Q_T)$ is a solution to a linear parabolic equation in $Q_T := [0,1] \times [0,T]$, and if $u(x,t) \neq 0$ for all $0 \leq t \leq T$ and $x = 0,1$, then at any time $t \in (0,T]$, the number of zeroes of $u(\cdot,t)$ is finite. Furthermore, this number is nonincreasing in $t$ and strictly decreases whenever $u(\cdot,t)$ has a multiple zero. For additional applications, we refer to [@AngNodal91], and for a detailed proof, the reader may consult [@AngZero88].
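For concreteness, and with the caveat that we refer to [@AngSing91; @AngZero88] for the precise regularity and parabolicity hypotheses on the coefficients, the equations covered by this statement are of the form $$\partial_t u = a(x,t)\, \partial_x^2 u + b(x,t)\, \partial_x u + c(x,t)\, u , \qquad (x,t) \in Q_T ,$$ with $a > 0$. Roughly speaking, this is the class of equations satisfied by the local graph representation of the difference of two evolving curves, which is how results of Sturmian type are typically brought to bear on intersection numbers.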
Although we are dealing with a very special case of the network flow, it is reasonable to believe that Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} holds true in general for tree-like networks, with appropriate modifications.
**Conjecture 3**. *The number of singular times during the evolution of a tree is finite. If no boundary curve disappears during the evolution, then the flow exists for all positive times and converges to a (possibly degenerate) minimal network.*
The plan of the paper is the following: in Section [2](#sec:setup){reference-type="ref" reference="sec:setup"} we introduce the notion of network and network flow, and we recall some preliminary results on existence and uniqueness of solutions. In Section [3](#sec:proof){reference-type="ref" reference="sec:proof"} we prove our main result on the singularities of the flow of symmetric networks with two triple junctions. Finally, in Section [4](#sec:extension){reference-type="ref" reference="sec:extension"} we extend the result to the flow of symmetric networks on the 2-sphere.
# Notation and preliminary results {#sec:setup}
Before proceeding to the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, we shall first establish our notation, present the necessary definitions, and recapitulate the essential results.
Let $\gamma: [0,1] \to \mathbf{R}^2$ be a regular $C^2$ curve, meaning that $\gamma'(x) \neq 0$ for all $x \in [0,1]$. We denote its unit tangent as $\tau(x) := \gamma'(x)/|\gamma'(x)|$ and its unit normal as $\nu$, such that $\{ \tau, \nu\}$ is a positive basis of $\mathbf{R}^2$. The curvature of $\gamma$ with respect to $\nu$ is denoted as $\kappa$. Occasionally, we may use superscript indices to label curves, and when we do, we will also label their tangents, normals, and curvatures accordingly.
**Definition 4** (Network). *A *network* $\Gamma$ is a finite union of embedded, regular curves $\{ \gamma^j \}_{j=1}^n$ of class $C^2$, called *edges*, that meet only at their endpoints and nontangentially, and such that the union of their images $\bigcup_{j=1}^n \gamma^j([0,1])$ is a connected set. A network $\Gamma$ is said to be *regular* when its edges intersect solely at triple junctions, at which their interior tangents form equal angles of $2\pi/3$. The endpoints of curves that are not shared by other curves are referred to as *endpoints* of the network.*
Note that the regularity condition at triple junctions can be stated as follows: if three curves $\gamma^{j_k}$ ($k = 1,2,3$) intersect at, say, $x=0$, then $$\tau^{j_1}(0) + \tau^{j_2}(0) + \tau^{j_3}(0) = 0 .$$
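Indeed, for unit tangent vectors the two formulations are equivalent; we record one direction of this elementary computation. If $\tau^{j_1}(0) + \tau^{j_2}(0) + \tau^{j_3}(0) = 0$, then $\tau^{j_3}(0) = -\bigl( \tau^{j_1}(0) + \tau^{j_2}(0) \bigr)$, and taking squared norms gives $$1 = \bigl| \tau^{j_1}(0) + \tau^{j_2}(0) \bigr|^2 = 2 + 2 \, \langle \tau^{j_1}(0) , \tau^{j_2}(0) \rangle ,$$ so that $\langle \tau^{j_1}(0) , \tau^{j_2}(0) \rangle = -\tfrac{1}{2}$, that is, the angle between the two tangents equals $2\pi/3$; the same computation applies to the other two pairs.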
**Definition 5** (Network flow). *Let $\Gamma(t) = \{ \gamma^j(\cdot,t) \}_{j=1}^n$, with $t \in (a,b)$, be a one-parameter family of regular networks, with fixed endpoints $p^1, \ldots, p^r$, and time-dependent triple junctions $o^1(t), \ldots, o^s(t)$. Then $\{ \Gamma(t) \}_{a < t < b}$ is said to be a solution to the *network flow* if at every time $t \in (a,b)$, with possible curve relabelling, the following system is satisfied: $$\label{eq:netflow}
\left\{ \begin{alignedat}{3}
\langle \partial_t\gamma^j(x,t) , \nu^j(x,t) \rangle & = \kappa^j(x,t) , &\quad &x \in [0,1] , \quad j = 1, \ldots, n , \\
\gamma^k(1,t) & = p^k , &\quad &k = 1, \ldots, r ,\\
\tau^{l_1} + \tau^{l_2} + \tau^{l_3} & = 0 &\quad &\text{at the triple junction } o^l(t) , \quad l = 1, \ldots, s .
\end{alignedat}\right.$$*
We now state a version of short time existence for the network flow that suits our needs.
**Proposition 6** (Cf. [@Goesswein-Menzel-Pluda-Existence2023]\*Thms. 1.1--2). *Let $\Gamma_0$ be a regular network. Then there exists a smooth solution $\{ \Gamma(t) \}_{0 \leq t < T}$, unique up to reparametrisations, to the network flow [\[eq:netflow\]](#eq:netflow){reference-type="eqref" reference="eq:netflow"} starting at $\Gamma_0$ and fixing its endpoints. Moreover, the flow can be extended to a maximal time $T>0$, at which at least one of the following scenarios unfolds:*
- *$T = \infty$;*
- *the inferior limit as $t \uparrow T$ of the length of one of the edges of $\Gamma(t)$ is zero;*
- *the superior limit as $t \uparrow T$ of the $L^2$-norm of the curvature of $\Gamma(t)$ is infinite.*
Proposition [Proposition 6](#prp:reg-short-time){reference-type="ref" reference="prp:reg-short-time"} allows us to initiate the flow from any regular data and extend it to a maximal time while ensuring the flow remains smooth. Nevertheless, in the specific case we address here, where the initial network possesses only two triple junctions, we can provide additional insights.
**Proposition 7** (Cf. [@MNP-2juncs-2017]\*Thm. 1.1). *Let $\Gamma_0$ be a regular network with exactly two triple junctions, and let $\{ \Gamma(t) \}_{0 \leq t < T}$ be the maximal smooth flow starting at $\Gamma_0$. Suppose also that $T$ is finite and the length of no boundary edge goes to zero. Then, as $t \uparrow T$, one of the following occurs:*
- *the length of a curve joining the triple junctions goes to zero while the curvature remains bounded;*
- *the lengths of the curves composing a loop tend to zero and the $L^2$-norm of the curvature goes to infinity.*
*If the network $\Gamma_0$ is a tree, then only the first situation happens.*
As explained in the Introduction, when the first scenario in Proposition [Proposition 7](#prp:2juncs){reference-type="ref" reference="prp:2juncs"} occurs, the flow approaches an irregular network with a single quadruple junction, resulting in the development of a type-0 singularity. Nonetheless, a transition to a regular flow is made possible by the following proposition.
**Proposition 8** (Cf. [@LMPS-2021]\*Thm. 1.1, Prp. 8.5). *Let $\Gamma_0$ be an irregular network. Then there exists a solution $\{ \Gamma(t) \}_{0 \leq t < T}$ to the network flow [\[eq:netflow\]](#eq:netflow){reference-type="eqref" reference="eq:netflow"} such that $\Gamma(t)$ converges in the Hausdorff distance to $\Gamma_0$ as $t \downarrow 0$. Furthermore, all the solutions, accounting for possible reparametrizations, can be classified by the self-similar, tree-like expanding solitons described in [@Mazzeo-Saez-2011] at each irregular junction. In particular, if $\Gamma_0$ consists of a single quadruple junction with angles $\pi/3$ and $2\pi/3$, and no other junctions are present, then the flow admits a unique solution.*
While the convergence to the initial datum $\Gamma_0$ can be understood in a much stronger sense, as discussed in [@LMPS-2021], for our purposes, local uniform convergence suffices. In any case, it is worth noting that convergence remains smooth away from the irregular junctions, provided that $\Gamma_0$ itself is smooth.
Proposition [Proposition 8](#prp:irreg-short-time){reference-type="ref" reference="prp:irreg-short-time"} can be regarded as a restarting theorem after the formation of an irregular network, as previously explained. Furthermore, since the flow remains regular for positive times, we can employ Proposition [Proposition 6](#prp:reg-short-time){reference-type="ref" reference="prp:reg-short-time"} to extend it until the next singularity, if there is one. This leads us to the following definition, which is the primary focus of our discussion here.
**Definition 9** (Extended Network Flow). *An *extended network flow* with initial condition $\Gamma_0$ is a one-parameter family of networks $\{ \Gamma(t) \}_{0 \leq t < T}$ that satisfies the following conditions:*
- *there exists a finite number of times $0 = t_0 < t_1 < \cdots < t_m = T$, such that the restriction $\{ \Gamma(t) \}_{t_{k} < t <t_{k+1}}$, $k = 0, \ldots, m-1$, is a regular network flow in the sense of Definition [Definition 5](#def:netflow){reference-type="ref" reference="def:netflow"};*
- *at each $t_k$, $k = 1, \ldots, m-1$, a type-0 singularity forms, and we call these *singular times*;*
- *$\Gamma(t)$ converges to $\Gamma(t_k)$ in the Hausdorff distance as $t \downarrow t_k$.*
Thus, another way to rephrase Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} is that every solution to the network flow in the sense of Definition [Definition 9](#def:extnetflow){reference-type="ref" reference="def:extnetflow"} can be continued to a maximal extended flow such that either $T = \infty$, or else $T < \infty$ and the curvature increases without bound as $t \uparrow T$.
We now present the main tool used for the proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}.
**Proposition 10** (Cf. [@AngParabolicII91]\*Thm. 1.3). *Let $\gamma^1, \gamma^2: [0,1] \times [0,T) \to \mathbf{R}^2$ be two solutions to the curve shortening flow, and suppose that for each $(x,t) \in [0,1] \times [0,T)$ $$\gamma^1(0,t) , \gamma^1(1,t) \neq \gamma^2(x,t) \quad \text{and} \quad \gamma^2(0,t) , \gamma^2(1,t) \neq \gamma^1(x,t) .$$ Then the number of intersections of $\gamma^1(\cdot,t)$ and $\gamma^2(\cdot,t)$, $$i(t) := \#\{ (x_1,x_2) \in [0,1] \times [0,1] : \gamma^1(x_1,t) = \gamma^2(x_2,t) \} ,$$ is finite for every $t \in (0,T)$. Moreover, $i(\cdot)$ is a nonincreasing function of $t$, and decreases exactly when $\gamma^1$ and $\gamma^2$ become tangent at some point.*
We briefly remark that the presence of triple junctions is what makes it difficult to apply Proposition [Proposition 10](#prp:intersecs){reference-type="ref" reference="prp:intersecs"} to a general, non-symmetric network.
# Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} {#sec:proof}
We are now ready for the proof of our main result. Let $\Gamma_0$ be a symmetric regular network with two triple junctions, and let $\{ \Gamma(t) \}_{0 \leq t < T}$ be the evolution given by Proposition [Proposition 6](#prp:reg-short-time){reference-type="ref" reference="prp:reg-short-time"}. By our symmetry assumptions, from the system [\[eq:netflow\]](#eq:netflow){reference-type="eqref" reference="eq:netflow"} it follows that at each time $t \in (0,T)$ the network $\Gamma(t)$ is also symmetric with the same axes of symmetry as $\Gamma_0$. Therefore, modulo a rotation and translation, the flow is completely described by a single curve in the first quadrant of $\mathbf{R}^2$, where the coordinate axes coincide with the symmetry axes of $\Gamma(t)$. Let $\gamma: [0,1] \times [0,T) \to \mathbf{R}^2$ be the evolution of this defining curve, and call $(x_1,x_2)$ the rectangular coordinates of $\mathbf{R}^2$. After a reparametrisation, we may further suppose that $\gamma(0,t)$ is the triple junction, which lies in the $x_1$-axis, and $\tau(0,t) = ( \frac{1}{2}, \frac{\sqrt{3}}{2} )$ is the unit tangent, which is constant. Thus, we have the following boundary conditions for the evolution of $\gamma$, depending on the type of network; if $\Gamma_0$:
- is a tree, then $\gamma(1,t) = p$ is a fixed point;
- is a $\theta$-network, then $\gamma(1,t)$ is a free point in the $x_2$-axis such that $\tau(1,t) = (-1,0)$;
- is eyeglasses, then $\gamma(1,t)$ is a free point in the $x_1$-axis such that $\tau(1,t) = (0,-1)$.
As a consequence of Proposition [Proposition 7](#prp:2juncs){reference-type="ref" reference="prp:2juncs"}, the flow develops a type-0 singularity at $T$ if and only if $\lim_{t \uparrow T} \gamma(0,t) = (0,0)$.
*Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}.* To fix ideas, let us suppose that the initial network $\Gamma_0$ is a tree with fixed endpoints. Suppose also that at a finite time $T$ the flow develops a type-0 singularity. We can then apply Proposition [Proposition 8](#prp:irreg-short-time){reference-type="ref" reference="prp:irreg-short-time"} and extend the flow a little further to a time $\widehat{T} > T$. By symmetry, this can be viewed as extending the evolution of the defining curve to a map $\gamma: [0,1] \times [0, \widehat{T}) \to \mathbf{R}^2$, where the triple junction $\gamma(0,t)$ is in the $x_2$-axis for times $t \in (T,\widehat{T})$.
Now, consider a straight line $\ell$ through the origin, such that the endpoint $p \notin \ell$ and the angle between the $x_1$-axis and $\ell$ is in the range $( \frac{\pi}{6} , \frac{\pi}{3} )$. Note also that $\ell$ is a static solution to the curve shortening flow. Define the function $$i(t) := \# \{ x \in [0,1]: \gamma(x,t) \in \ell \} , \quad t \in (0, \widehat{T}).$$ We will show that $i$ is finite and nonincreasing in time, and decreases strictly across $t = T$.
Indeed, as long as the vertices of the network do not collide, i.e. as $\gamma(0,t)$ stays away from the origin, we can invoke Proposition [Proposition 10](#prp:intersecs){reference-type="ref" reference="prp:intersecs"} to see that $i(t)$ is not increasing in time. On the other hand, $\gamma(\cdot,t)$ converges smoothly to a curve $\gamma(\cdot,T)$ as $t \uparrow T$, such that $\gamma(0,T) = 0$, and $\tau(0,T) = ( \frac{1}{2} , \frac{\sqrt{3}}{2} )$. Therefore, there exist some small $\varepsilon, \delta> 0$ such that $\gamma(\cdot,t)$ crosses $\ell$ in the rectangle $[0,\varepsilon] \times [0,2\varepsilon]$ at a single point, for every $t \in (T - \delta, T]$. Note that $\gamma(\cdot,T)$ intersects $\ell$ exactly at the origin, and besides this point it lies completely above $\ell$. We will now show that, after we restart the flow, $\gamma(\cdot,t)$ remains at a positive distance above $\ell$ in $[0,\varepsilon] \times [0,2\varepsilon]$ for a short time.
Because of the invariance under reparametrisations, we can locally represent the evolution of $\gamma$ as a graph $u(x,t)$ over the $x_1$-axis, for $(x,t) \in [0,\varepsilon] \times [T, T+\delta)$, with $\delta$ possibly smaller. The function $u: [0,\varepsilon] \times [T,T+\delta) \to \mathbf{R}$ then satisfies the partial differential equation $$u_t = \frac{u_{xx}}{1+(u_x)^2} \quad \text{in} \quad [0,\varepsilon] \times (T,T+\delta)$$ with Cauchy-Neumann boundary conditions $$u(x,T) = u_T(x) , \quad u_x(0,t) = 1/\sqrt{3} , \quad (x,t) \in [0,\varepsilon] \times (T, T+\delta) ,$$ where $u_T: [0,\varepsilon] \to \mathbf{R}$ is a function parametrising $\gamma(\cdot,T)$. If we consider the function $w(x,t) := u(x,t) - mx$, with $m \in ( \frac{1}{\sqrt{3}} , \sqrt{3} )$ being the tangent of the angle $\ell$ forms with the $x_1$-axis, then $w$ solves $$w_t = \frac{w_{xx}}{1+(w_x+m)^2} ,$$ which is strictly parabolic. Thanks to the estimate on the shortest curve of the flow with singular initial data [@Ilmanen-Neves-Schulze-Shortexistence-2019]\*Thm. 1.1, there is a positive constant $c$ such that $$w(0,t) \geq c\sqrt{t-T} .$$ Furthermore, Proposition [Proposition 8](#prp:irreg-short-time){reference-type="ref" reference="prp:irreg-short-time"} implies that as $t$ approaches $T$ from above, $u(\cdot, t)$ uniformly converges to $u_T$. Therefore, for sufficiently small $\delta$, we have $w(\varepsilon, t) > 0$ for all $t \in [T, T+\delta)$. This, combined with the fact that $w(x,T) \geq 0$ for all $x \in [0, \varepsilon]$ and the application of the maximum principle, shows that $w$ remains greater than zero in $[0, \varepsilon] \times [T, T+\delta)$, which means $\gamma(\cdot, t)$ remains above the line $\ell$ during this interval. Hence, in a neighbourhood of the origin, the number of intersections between $\gamma(\cdot, t)$ and $\ell$ decreases by precisely one as $t$ crosses $T$. Outside this neighbourhood, we can once again employ Proposition [Proposition 10](#prp:intersecs){reference-type="ref" reference="prp:intersecs"} for $t$ in the range $(T, T+\delta)$. This analysis demonstrates that $i(t)$ decreases by at least one over the interval $(0, T+\delta)$. For illustration purposes, see Figure [\[fig:i-decrease-sing\]](#fig:i-decrease-sing){reference-type="ref" reference="fig:i-decrease-sing"}.
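For completeness, the equation satisfied by $w$ above follows from a direct substitution, using nothing beyond the definitions already given: since $u = w + mx$ we have $u_x = w_x + m$ and $u_{xx} = w_{xx}$, hence $$w_t = u_t = \frac{u_{xx}}{1+(u_x)^2} = \frac{w_{xx}}{1+(w_x+m)^2}.$$ In particular, the coefficient $\big(1+(w_x+m)^2\big)^{-1}$ is positive, which is the strict parabolicity used in the application of the maximum principle.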
Due to our symmetry assumptions and our choice of the line $\ell$, after reflecting with respect to the diagonal $x_1 = x_2$ we find ourselves back in the initial setup. As a consequence, we can repeat the previous reasoning every time the two triple junctions of $\Gamma(t)$ coalesce into the origin, and as $i(t)$ cannot decrease indefinitely, it must become constant for sufficiently large $t>0$, after which there are no more type-0 singularities. We can thus obtain an extended network flow as in Definition [Definition 9](#def:extnetflow){reference-type="ref" reference="def:extnetflow"}.
The steps described above carry over almost identically to the other types of networks, with the only difference that we do not need to be concerned with avoiding any particular point $p$, because there are no external endpoints.
We conclude the proof by referencing once again Proposition [Proposition 7](#prp:2juncs){reference-type="ref" reference="prp:2juncs"}, from which it follows that the flow of a tree can be extended indefinitely. In contrast, for the $\theta$-network and the eyeglasses cases, there must exist a time at which the two bounded regions collapse simultaneously, causing the $L^2$-norm of the curvature to blow up. This can only happen as an eyeglasses-shaped network, as there is no self-similar shrinking $\theta$-network [@BaldiHaussMantegazza-NoShrinkingTh2018].
Finally, in the case of a tree, there exists a sequence $t_n \to \infty$ such that $\Gamma(t_n)$ converges in $C^{1,\alpha} \cap W^{2,2}$ for every $\alpha\in (0,\frac{1}{2})$ to either a regular Steiner tree or a standard cross. If it converges to a Steiner tree, we can apply [@Pluda-Pozzetta-Lojasiewicz2023]\*Thm. 1.2 to establish the full smooth convergence of the flow as $t \to \infty$. Otherwise, regardless of the sequence of times, the limit is a standard cross. Thus, in this scenario as well, we observe full and smooth convergence. ◻
# Extension to the 2-sphere {#sec:extension}
We conclude this note by extending Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} to the network flow on the sphere $\mathbf{S}^2$ instead of $\mathbf{R}^2$. Thanks to the theory developed in [@Angenent-ParabolicI1990; @AngParabolicII91], this extension can be achieved almost effortlessly, albeit with a mild change in the flow's behaviour near the maximal time of existence. Since the outcome remains unchanged when considering a tree-like initial configuration, we will focus on the case of the $\theta$-network and eyeglasses. First, we state the analogous theorem, and then we explain how to adapt the arguments presented in Section [3](#sec:proof){reference-type="ref" reference="sec:proof"} to obtain the results.
**Theorem 11**. *Let $\Gamma_0$ be a symmetric, closed and regular network with two triple junctions on the sphere $\mathbf{S}^2$. Then there is a maximal time $T>0$ and a unique network flow $\{ \Gamma(t) \}_{0 \leq t < T}$ with initial data $\Gamma_0$, such that the set of its singular times is a finite subset of $(0,T)$, in particular there is no accumulation of singularities. Moreover, either:*
- *$T = \infty$ and $\lim_{t \to \infty} \Gamma(t)$ is a minimal $\theta$-network;*
- *or $T < \infty$ and the curvature blows up as one of the enclosed regions vanishes as $t \uparrow T$.*
In this context, "symmetric" means symmetry with respect to a reflection across two perpendicular great circles, which are the geodesics of $\mathbf{S}^2$.
It is worth noting that all the definitions presented in Section [2](#sec:setup){reference-type="ref" reference="sec:setup"} straightforwardly apply to this case, and all the propositions in that section remain valid.
Regarding the proof, the only change is that instead of counting intersections with a straight line, we use a great circle passing through the centre of symmetry of the network, making an angle greater than $\pi/6$ but less than $\pi/3$ with respect to a chosen great circle of symmetry. The analogue of Proposition [Proposition 10](#prp:intersecs){reference-type="ref" reference="prp:intersecs"} asserts that this number decreases during the evolution. Across a type-0 singularity, we again represent the evolution locally as a graph over the chosen great circle using the exponential map. The resulting equation for the evolution remains strictly parabolic, and the application of the maximum principle yields the desired strict monotonicity. We therefore conclude that the flow can be extended until the curvature becomes unbounded as an enclosed region vanishes, or it can be extended indefinitely and converges to a minimal network. In this case, the minimal network must be a $\theta$-network, as minimal eyeglasses or 8-figures do not exist on the 2-sphere. In particular, infinity is excluded as a singular time.
| arxiv_math | {
"id": "2310.02890",
"title": "Singularities of the network flow with symmetric initial data",
"authors": "Matteo Novaga, Luciano Sciaraffia",
"categories": "math.AP math.DG",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We discuss an approach towards the Hopf problem for aspherical smooth projective varieties recently proposed by Liu, Maxim, and Wang in [@LMW17b]. In complex dimension two, we point out that this circle of ideas suggests an intriguing conjecture regarding the geography of aspherical surfaces of general type.
author:
- |
Luca F. Di Cerbo[^1]\
University of Florida\
[ldicerbo\@ufl.edu]{.sans-serif}
- |
Rita Pardini [^2]\
Università di Pisa\
[rita.pardini\@unipi.it]{.sans-serif}
title: On the Hopf Problem and a Conjecture of Liu-Maxim-Wang
---
\
# Introduction
A long-standing and important problem in geometry is a conjecture of Hopf on the sign of the Euler characteristic of aspherical manifolds.
**Conjecture 1** (Hopf Conjecture). *If $X$ is a closed aspherical manifold of real dimension $2n$, then: $$(-1)^n\chi_{\rm top}(X)\geq 0.$$*
For more details about this problem, we refer to M. Berger's panoramic book on Riemannian geometry and to S.-T. Yau's authoritative list of problems in geometry, see in particular [@Berger Chapter 12, Note 12.3.1.1] and [@SchoenY Section VII, Problem 10]. Conjecture [Conjecture 1](#Hopf){reference-type="ref" reference="Hopf"} is true when $n=1$ thanks to the uniformization theorem for Riemann surfaces. Interestingly, this problem is still open when $n=2$. Moreover, Conjecture [Conjecture 1](#Hopf){reference-type="ref" reference="Hopf"} is still wide open even in the realm of aspherical, smooth, projective varieties! This is particularly surprising as, in the projective case, one can use a large variety of tools coming from algebraic geometry, on top of the usual differential geometry, geometric analysis, and geometric topology approaches to such conjecture.
The literature around the Hopf problem is vast and continues to grow; we refer the interested reader to [@Chern], [@DX84], [@Gro], [@Ko], [@JZ], [@PS13], [@LMW17b], [@DS22], [@Maxim] for a selection of diverse contributions on this topic over a period of seven decades. In an interesting recent paper [@LMW17b], Y. Liu, L. Maxim, and B. Wang connect the Hopf problem for smooth projective varieties with the well-known Shafarevich conjecture in algebraic geometry. More precisely, in [@LMW17b Theorem 1.8] they observe that if the Shafarevich conjecture is true then any aspherical, smooth, projective variety must have Stein topological universal cover. They also conjecture (see [@LMW17b Conjecture 6.3]) that varieties with Stein universal cover have *nef* cotangent bundle, and show that the sign of the Euler characteristic of a smooth variety with nef cotangent bundle satisfies the statement of Conjecture [Conjecture 1](#Hopf){reference-type="ref" reference="Hopf"} (see [@LMW17b Proposition 3.6]). Therefore, they suggest a new interesting approach to Conjecture [Conjecture 1](#Hopf){reference-type="ref" reference="Hopf"} via the Shafarevich conjecture and the study of the nefness properties of the holomorphic cotangent bundle.
In [@Yiyu], Y. Wang (a graduate student of B. Wang) addresses Conjecture 6.3 in [@LMW17b] and produces some examples of smooth projective varieties with Stein universal topological cover, but with *non-nef* holomorphic cotangent bundle. The goal of this paper is to show that such examples are *not* aspherical. The question of the asphericity of such examples was raised during the recent **American Mathematical Society** Special Session on "Singer-Hopf Conjecture in Geometry and Topology", that was part of the **Spring Southeastern Sectional Meeting**, Georgia Institute of Technology, Atlanta, GA, March 18-19, 2023. During this special session, L. Maxim gave a lecture on his program towards the Hopf conjecture contained in [@LMW17b], [@Maxim], and Y. Wang presented the examples contained in [@Yiyu].
We provide several proofs of the *non-asphericity* of the examples described in [@Yiyu]. All such proofs crucially rely upon results of Tovena and Pardini [@PT95] concerning the fundamental group of finite abelian branched covers. In complex dimension two, we also offer a proof entirely based on complex analysis. Indeed, we show that Y. Wang's examples are deformation equivalent to smooth projective varieties containing smooth rational curves. These spaces are then non-aspherical in a very strong sense, and they are deformation equivalent to smooth projective varieties with non-Stein universal cover.
In conclusion, the approach to Conjecture [Conjecture 1](#Hopf){reference-type="ref" reference="Hopf"} via the Shafarevich conjecture and the study of the nefness properties of the holomorphic cotangent bundle is still a very viable one. The next intriguing question to explore is whether smooth, aspherical, projective varieties have *nef* holomorphic cotangent bundle (see [@LMW17b Conjecture 6.4]). Even though Conjecture [Conjecture 1](#Hopf){reference-type="ref" reference="Hopf"} is known to be true for aspherical complex surfaces thanks to the Kodaira-Enriques classification, the nefness of their cotangent bundle is currently unknown. Indeed, in Section [3](#surfaces){reference-type="ref" reference="surfaces"} we point out several intriguing questions on the geography of aspherical surfaces of general type suggested by this circle of ideas.
**Acknowledgments**. The first named author thanks the participants of the AMS Special session on "Singer-Hopf Conjecture in Geometry and Topology" at the 2023 Spring Southeastern Sectional Meeting for a very stimulating exchange of ideas around the Hopf problem. He also thanks Yiyu Wang for a useful correspondence. The authors thank János Kollár for very useful bibliographical suggestions and for pertinent comments on the manuscript (see Remark [\[rem: kollar\]](#rem: kollar){reference-type="ref" reference="rem: kollar"}).\
# Main Result -- Several Proofs {#Proofs}
[\[sec: main\]]{#sec: main label="sec: main"}
A smooth closed manifold $X$ is said to be *aspherical* if its topological universal cover $\widetilde{X}$ is contractible. This is equivalent to the vanishing of the higher homotopy groups $\pi_i(X)$, for $i\geq 2$. Recall that for $i\geq 2$ we have the isomorphism $\pi_i(X) \cong \pi_i(\widetilde{X})$ (see [@Hat Proposition 4.1]). We refer in general to Chapter $4$ in Hatcher's book [@Hat] for the relevant definitions and the basic homotopy theory results.
The examples constructed in [@Yiyu] are easily described. Start with an abelian variety of dimension $n$ (a projective complex torus of real dimension $2n$), say $Y$. The topological universal cover is then given by $$\pi\colon \mathbb C^n\to Y, \quad Y\cong \mathbb C^n/ \Lambda, \quad \Lambda\cong \mathbb Z^{2n},$$ where $\widetilde{Y}=\mathbb C^n$ is clearly contractible and Stein. Next, one takes an ample line bundle, say $L$, on $Y$ and a smooth divisor $B$ in the linear system $|2L|$. One can define on $\mathcal O_Y\oplus L^{-1}$ a $\mathbb Z_2$-action by declaring the first summand to be invariant and the second one anti-invariant, and then an ${\mathcal O}_Y$-algebra structure compatible with this action using the map $L^{-1}\otimes L^{-1}\cong {\mathcal O}_Y(-B)\hookrightarrow {\mathcal O}_Y$. Setting $X:=\mathop{\mathrm{Spec}}(\mathcal O_Y\oplus L^{-1})$ defines a degree two finite morphism of projective varieties $f\colon X\to Y$, that we call the *double cover given by the relation $2L\sim B$* ($\sim$ denotes linear equivalence). The variety $X$ is smooth, since $B$ is, and standard formulae for double covers give $K_X=f^*(K_Y\otimes L)=f^*(L)$, where $K_X$ denotes as usual the canonical bundle. So $K_X$ is ample and $X$ is minimal of general type. As shown in [@Yiyu Corollary 2], the holomorphic cotangent bundle $\Omega^1_{X}$ is not nef, while the topological universal cover $\widetilde{X}$ is Stein.
**Remark 2**. The above examples have maximal Albanese dimension, namely their Albanese map is generically finite (it is easily seen to coincide with $f$, see [@geography §2.4 (d)]); they satisfy equality in the generalized Severi inequality $K_X^n\ge 2n! \chi(K_X)$ for varieties of general type and maximal Albanese dimension ([@barja-severi], [@zhang-severi]). Recall that $\chi(K_X)=(-1)^n\chi({\mathcal O}_X)$ by Serre duality and that $\chi(K_X)\ge 0$ if $X$ is of Albanese general type by the Generic Vanishing Theorem (Corollary of [@GL1 Theorem 1]). Thus, they satisfy the inequality $\chi(K_{X})>0$. Conjecturally, this inequality holds true for a general class of varieties, see for example [@KBook Conjecture 18.12.1].
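As a quick consistency check of the equality case and of the positivity of $\chi(K_X)$ mentioned above, using only the relations already established for the double cover $f$ (namely $K_X=f^*L$ and $f_*{\mathcal O}_X={\mathcal O}_Y\oplus L^{-1}$), the projection formula, and Riemann-Roch on the abelian variety $Y$: $$K_X^n=(f^*L)^n=\deg(f)\, L^n=2L^n, \qquad \chi(K_X)=\chi(f_*K_X)=\chi(L\oplus {\mathcal O}_Y)=\chi(L)=\frac{L^n}{n!}>0,$$ so that indeed $K_X^n=2\,n!\,\chi(K_X)$; here we used that $f$ is finite, so that the higher direct images of $K_X$ vanish, and that $\chi({\mathcal O}_Y)=0$.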
In complex dimension two, it is proven in [@BPS] that the minimal surfaces $S$ of general type with maximal Albanese dimension and $K^2_S=4\chi({\mathcal O}_S)$ have $h^1({\mathcal O}_S)=2$ and are precisely the (minimal resolutions of) double covers of their Albanese surface branched on an ample divisor $B$ with at most A-D-E singularities (see [@bpv § II.8] for the definition). The cover is smooth (and $K_S$ is ample) precisely when $B$ is smooth, otherwise it has rational double points and is the canonical model of a surface of general type.
**Theorem 3**. *If $n\ge 2$, the $n$-dimensional smooth projective variety $X$ is not aspherical.*
*Proof.* Assume by contradiction that $\widetilde{X}$ is contractible. Since $\widetilde{Y}=\mathbb C^n$ is contractible and Stein, by Corollary 3.4 in [@PT95] we know that $$f_{*}\colon \pi_1(X)\to\pi_1(Y)\cong \mathbb Z^{2n}$$ is an isomorphism. Moreover, since $X$ is aspherical by the contradiction hypothesis and $Y$ is aspherical, being a torus, all higher homotopy groups of $X$ and $Y$ vanish, so $f$ induces isomorphisms on all homotopy groups. By Whitehead's theorem (see [@Hat Theorem 4.5]), a map between connected CW complexes that induces isomorphisms on all homotopy groups is necessarily a homotopy equivalence. Thus, we conclude that $f\colon X\to Y$ is a homotopy equivalence. At this point there are several different arguments that can be used to obtain a contradiction, showing that $X$ is not aspherical:
- (Topological arguments) Assume by contradiction that $X$ is aspherical. $X$ and $Y$ have a choice of orientations that determines isomorphisms $H_{2n}(X; \mathbb Z)\cong \mathbb Z$ and $H_{2n}(Y; \mathbb Z)\cong \mathbb Z$. Since $f\colon X\to Y$ is a double branched cover, with these choices the induced map $f_*\colon H_{2n}(X; \mathbb Z)\to H_{2n}(Y; \mathbb Z)$ is multiplication by $2$. Thus, $f$ cannot be a homotopy equivalence. We have reached a contradiction and $X$ cannot be aspherical.
Alternatively, we could argue with the Euler characteristic instead of the degree of the map. Since $f$ is a homotopy equivalence and $Y$ is a $2n$-dimensional real torus, we have $\chi_{\rm top}(X)=\chi_{\rm top}(Y)=0$. On the other hand, we are going to show that $(-1)^n\chi_{\rm top}(X)>0$. First, the double cover $f\colon X\to Y$ induces an isomorphism of the ramification divisor with the branch divisor $B$ and restricts to a topological cover of degree 2 over $Y\setminus B$. By Mayer--Vietoris for the Euler characteristic, we then have: $$\chi_{\rm top}(X)=2\chi_{\rm top}(Y\setminus B)+\chi_{\rm top}(B)=2\chi_{\rm top}(Y)-\chi_{\rm top}(B)=-\chi_{\rm top}(B).$$ Since $B$ is an ample divisor in $Y$, by the Lefschetz hyperplane theorem (see for example [@Laz Theorem 3.1.17]) we have that the map in cohomology associated to the inclusion of $B$ in $Y$ $$H^{i}(Y; \mathbb Z)\to H^{i}(B; \mathbb Z)$$ is an isomorphism for $i\leq n-2$ and an injection for $i=n-1$. Since $Y$ is a torus of real dimension $2n$, we have $b_{i}(Y)=\binom{2n}{i}$ for $0\leq i\leq 2n$. We conclude $$(-1)^n\chi_{\rm top}(X)=(-1)^{n-1}\chi_{\rm top}(B)\geq \binom{2n}{n}-\binom{2n}{n-1}>0$$ (this computation is made explicit in the surface case $n=2$ right after the proof).
- (Hodge theoretic argument) The map $f^*\colon H^k(Y;\mathbb C)\to H^k(X; \mathbb C)$ is an isomorphism, since $f$ is a homotopy equivalence, and preserves the Hodge decomposition, since $f$ is a morphism. So $X$ and $Y$ have the same Hodge numbers. In particular $h^{n,0}(Y)=h^{n,0}(X)=1$. On the other hand, since $K_X=f^*(K_Y\otimes L)=f^*(L)$, the projection formula for a finite flat morphism gives $$f_*K_X=f_{*}f^{*}(L)=f_*({\mathcal O}_X)\otimes L=({\mathcal O}_Y\oplus L^{-1})\otimes L= L\oplus{\mathcal O}_{Y}.$$ Since $h^0(K_X)=h^0(f_*K_X)$, we then have $$h^{n,0}(X)=1+h^0(L)=1+\frac{L^n}{n!}>1,$$ where the last equality follows by Riemann-Roch and Kodaira vanishing, as $L$ is ample by assumption. We have reached a contradiction.
- (Rigidity argument) A classical rigidity result of Catanese ([@Catanese Proposition 4.8]) implies that $X$ is an abelian variety, since it is Kähler with $b_1(X)=2n$ and the cohomology algebra $H^{\bullet}(X;\mathbb Z)$ is isomorphic to the exterior algebra $\wedge^{\bullet}H^1(X;\mathbb Z)$. This is not possible for multiple reasons: a) $\Omega^1_X$ is not nef [@Yiyu Corollary 2]; b) the Hurwitz formula gives $K_X=f^*(L)$, hence $X$ is of general type; c) up to a translation, a surjective map of tori of the same dimension is an isogeny, so in particular it is étale, while $f$ is ramified by construction.
◻
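To illustrate the Euler characteristic argument above in the simplest case, take $n=2$ (this is just a worked instance of the formulas in the proof, under the same assumptions): the branch divisor $B\in |2L|$ is then a smooth curve in the abelian surface $Y$, and the adjunction formula, together with the triviality of $K_Y$, gives $2g(B)-2=B^2=(2L)^2=4L^2$. Hence $$\chi_{\rm top}(X)=-\chi_{\rm top}(B)=2g(B)-2=4L^2>0,$$ which is incompatible with $\chi_{\rm top}(X)=\chi_{\rm top}(Y)=0$ and consistent with the general lower bound $\binom{4}{2}-\binom{4}{1}=2$.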
**Remark 4**. The strategy of proof of Theorem [Theorem 3](#main){reference-type="ref" reference="main"} can be applied to the more general setting of a smooth abelian cover $f\colon X\to Y$ such that $Y$ is aspherical, the irreducible components of the branch divisor are ample, and $f$ is completely ramified, i.e., it does not factor through a non-trivial étale cover $Y'\to Y$. Indeed, if this is the case, by [@PT95 Proposition 3.3] the map $f_*\colon \pi_1(X)\to \pi_1(Y)$ is surjective with kernel a finite central subgroup of $\pi_1(X)$, say $N$. If $N \ne \{1\}$, then $X$ is not aspherical for elementary topological reasons (see [@Hat Proposition 2.45]). If $N$ is trivial, then arguing with the degree of the map $f$ as in the proof of Theorem [Theorem 3](#main){reference-type="ref" reference="main"} one can reach a contradiction.
**Remark 5**. After the appearance of this manuscript on the arXiv, János Kollár pointed out to us that [@Pardon Theorem 16] can be used to prove the non-asphericity of the examples considered in Theorem [Theorem 3](#main){reference-type="ref" reference="main"}. This approach is more general since it applies to certain analytic maps between compact complex analytic spaces where the codomain can be mildly singular. In our case one argues as follows: $f_*\colon \pi_1(X)\to \pi_1(Y)$ is an isomorphism by [@PT95 Corollary 3.4], so if $X$ is aspherical we can apply [@Pardon Theorem 16] to the map $f\colon X\to Y$ and deduce that it is a topological covering, contradicting the fact that by construction $f$ is branched on a divisor of $Y$. Note that also one of the arguments given in the proof by rigidity of Theorem [Theorem 3](#main){reference-type="ref" reference="main"} leads to the same contradiction.
# The Case of Surfaces {#surfaces}
Here we reprove Theorem [Theorem 3](#main){reference-type="ref" reference="main"} in case $n=2$ by a different argument. The advantage of this approach is that it can be used to prove non-asphericity for many examples of surfaces of general type. We start with a simple observation:
**Proposition 6**. *Let $X$ be a smooth $n$-dimensional complex projective variety and let $p\colon \widetilde X\to X$ be its universal cover. Let $B$ be a smooth curve and $g\colon B\to X$ a non-constant morphism. Then:*
1. *if $g_*( \pi_1(B))\subset \pi_1(X)$ is a finite subgroup, then $H_2(\widetilde X,\mathbb R)\ne 0$;*
2. *if $B=\mathbb P^1$, then $\pi_2(\widetilde X)$ is infinite.*
*In particular, in both cases $X$ is not aspherical.*
*Proof.* (1) Let $H$ be an ample line bundle on $X$. Since $g$ is non-constant we have $\deg_B(g^*H)>0$. If $\omega$ is a 2-form that represents the class of $H$ in $H^2(X, \mathbb R)$, then $\deg_B(g^*H)=\int_Bg^*\omega$, so $B$ defines a non-zero integral class in $H_2( X,\mathbb R)$. Let now $B_0$ be a connected component of $B\times_X\widetilde X$. The induced map $q\colon B_0\to B$ is a finite étale morphism of degree equal to the cardinality $\nu$ of $g_*( \pi_1(B))$ and $\int_{B_0}(g\circ q)^*\omega=\deg_{B_0}((g\circ q)^*H)=\nu \deg_B(g^*H)>0$, so $B_0$ defines a non-zero integral class in $H_2(\widetilde X,\mathbb R)$.
\(2\) By the proof of (1), the map $g$ gives a non-torsion class in $H_2(X, \mathbb Z)$. By the Hurewicz homomorphism this implies that $g$ gives an element of infinite order of $\pi_2(X)$. ◻
Let $\mathfrak M$ be a connected component of the moduli space of (canonical models of) surfaces of general type. It is well known that the minimal models of the surfaces parametrized by $\mathfrak M$ are all diffeomorphic (see [@tesi-manetti Ch. V] for a nice discussion of this and other related facts).
An immediate application of Proposition [Proposition 6](#prop: pi2){reference-type="ref" reference="prop: pi2"} is the following:
**Corollary 7**. *Let $\mathfrak M$ be a connected component of the moduli space of canonical models of surfaces of general type. If a point of $\mathfrak M$ corresponds to a singular surface, then the minimal models of the surfaces in $\mathfrak M$ have infinite $\pi_2$.*
*Proof.* Let $[\bar S]\in \mathfrak M$ be a point corresponding to a singular surface and let $S\to \bar S$ be the minimal resolution. Then $\bar S$ has rational double points and the exceptional curves of $S\to \bar S$ are $(-2)$-curves, namely they are isomorphic to $\mathbb P^1$ and have self-intersection $-2$. So $\pi_2(S)$ is infinite by Proposition [Proposition 6](#prop: pi2){reference-type="ref" reference="prop: pi2"}, (2). Since the minimal models of surfaces of $\mathfrak M$ are all diffeomorphic, this is enough to prove the statement. ◻
Variants of Corollary [Corollary 7](#cor: M){reference-type="ref" reference="cor: M"} can also be used when all surfaces in a connected component of the moduli space are smooth. Indeed, arguing exactly as in the proof of Corollary [Corollary 7](#cor: M){reference-type="ref" reference="cor: M"}, one can show that if a surface $S$ with $K_S$ ample belongs to a connected component $\mathfrak M$ of the moduli space that contains a surface $S'$ with a rational curve, then $S$ is not aspherical. We give an example below.
**Example 8**. Consider the symmetric square $S=S^2C$ with $C$ a curve of genus $g\ge 4$ (see [@geography §2.4] for the definition and properties of symmetric squares). The Albanese map of $S$ can be identified with the map $a\colon S^2C\to\mathop{\mathrm{Pic}}^{(2)}(C)$ that associates to a degree 2 effective divisor on $C$ its linear equivalence class. So if $C$ is not hyperelliptic then the Albanese map is injective and therefore $S$ contains no rational curve. If $C$ is hyperelliptic then the pairs of points in the canonical $g^1_2$ give a smooth rational curve $\Gamma$ on $S$, contracted by $a$, and $a$ is injective on $S\setminus \Gamma$. This implies that $\Gamma$ is the only rational curve contained in $S$. One can compute $$K_S\Gamma=g-3\ge 1, \quad K^2_{S}=(g-1)(4g-9).$$ Thus, Nakai's ampleness criterion ([@bpv Corollary 6.4]) implies that $K_S$ is ample in either case. Now, since for fixed $g$ all symmetric squares belong to the same irreducible component of the moduli space, and the hyperelliptic ones contain a rational curve, none of them is aspherical. Interestingly, since $$\chi({\mathcal O}_S)=\frac{g(g-3)}{2}+1,$$ we have that the ratio $\frac{K^2_S}{\chi({\mathcal O}_S)}$ tends to $8$ from below as $g\to \infty$.
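For the reader's convenience, the limit claimed at the end of the example follows from a short computation with the formulas above: since $\frac{g(g-3)}{2}+1=\frac{(g-1)(g-2)}{2}$, we get $$\frac{K^2_S}{\chi({\mathcal O}_S)}=\frac{2(g-1)(4g-9)}{(g-1)(g-2)}=\frac{2(4g-9)}{g-2}=8-\frac{2}{g-2},$$ which is strictly smaller than $8$, increasing in $g$, and tends to $8$ as $g\to \infty$.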
Using this circle of ideas, and in particular Corollary [Corollary 7](#cor: M){reference-type="ref" reference="cor: M"}, we can now give an alternative proof of Theorem [Theorem 3](#main){reference-type="ref" reference="main"} in dimension $n=2$.
**Proposition 9**. *Let $S$ be a minimal surface of general type and maximal Albanese dimension such that $K^2_S=4\chi({\mathcal O}_S)$. Then $S$ is not aspherical. In particular, the examples of Section [\[sec: main\]](#sec: main){reference-type="ref" reference="sec: main"} are not aspherical for $n=2$.*
*Proof.* As explained in Remark [Remark 2](#rem: severi-line){reference-type="ref" reference="rem: severi-line"}, a surface $S$ as in the assumption is the minimal resolution of a double cover of an abelian surface $Y$ branched on an ample effective divisor $B$ with at most A-D-E singularities. If $B$ is smooth then $S$ is one of the examples constructed in Section [\[sec: main\]](#sec: main){reference-type="ref" reference="sec: main"} while if $B$ is singular, then the double cover has canonical singularities and $S$ contains a rational curve. If $B$ is smooth, then by Corollary [Corollary 7](#cor: M){reference-type="ref" reference="cor: M"} it is enough to show that there is a singular surface in the connected component of the moduli space containing $S$. Let the double cover be given by the relation $2L \sim B$ and let $D:=(d_1, d_2)$ be the type of the polarization $L$. The moduli space $\mathcal A_{2, D}$ of polarized abelian surfaces of type $(d_1, d_2)$ is irreducible ([@BL-CAV Thm. 8.2.6]) and it has a finite covering carrying a universal family ([@mumford-GIT Thm. 7.9]). Using these facts it is a standard exercise to construct an irreducible flat family of surfaces containing all the double covers of abelian surfaces $Y$ with $L$ of type $D$ and $B$ with at most A-D-E singularities. This shows that for fixed $D$ all our examples lie in the same component of the moduli space. This component contains covers where $Y=E_1\times E_2$ is a product of elliptic curves, $L$ is a product polarization and $B=B_1+B_2$, with $B_1$ the pullback of a smooth divisor of degree $2d_1$ on $E_1$ and $B_2$ the pullback of a divisor of degree $2d_2$ on $E_2$. These covers have $A_1$ singularities at the intersection points of $B_1$ and $B_2$, so we can apply Corollary [Corollary 7](#cor: M){reference-type="ref" reference="cor: M"} and complete the proof. ◻
The case of surfaces is interesting for another reason. Indeed, the Liu-Maxim-Wang conjecture implies a tantalizing conjecture regarding the geography of aspherical surfaces of general type. By the work of Demailly-Peternell-Schneider (see [@DPS94 Theorem 2.5]), if $S$ is a Kähler surface with *nef* holomorphic cotangent bundle, then $c^2_1(S)\geq c_2(S)$. We therefore obtain the following statement.
**Proposition 10**. *Let $S$ be an aspherical surface of general type. If the Liu-Maxim-Wang conjecture is true, then $$K^2_{S}\geq 6\chi(\mathcal{O}_S).$$*
*Proof.* This follows by combining the standard identities $$K^2_{S}=c^2_{1}(S), \quad \chi(\mathcal{O}_S)=\frac{c^2_{1}(S)+c_2(S)}{12}$$ with the Demailly-Peternell-Schneider inequality $$c^2_{1}(S)\geq c_{2}(S).$$ ◻
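Explicitly, the combination used in the proof reads $$12\,\chi(\mathcal{O}_S)=c^2_{1}(S)+c_{2}(S)\leq 2\,c^2_{1}(S)=2K^2_{S},$$ which is exactly the claimed inequality; no assumptions beyond those stated in the proposition enter here.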
This proposition provides a concrete test of the plausibility of the Liu-Maxim-Wang conjecture (see [@LMW17b Conjecture 6.4]). Indeed, any aspherical surface of general type $S$ with $$K^2_{S}< 6\chi(\mathcal{O}_S)$$ would be a counterexample. Thus, it seems extremely interesting to ask whether such surfaces exist. Unfortunately, the list of known aspherical surfaces of general type does not seem to be very rich. The list includes ball quotients, surfaces isogenous to a product of curves, Kodaira fibrations, and Mostow-Siu surfaces; see the paper of Bauer-Catanese [@Fabrizio] for more details. In all of these examples, the stronger inequality $$\begin{aligned}
\label{bconjecture}
K^2_{S}\geq 8\chi(\mathcal{O}_S)\end{aligned}$$ is satisfied. Maybe somewhat less well-known, the list of aspherical surfaces of general type also includes the vast majority of smooth minimal toroidal compactifications of ball quotients, see [@tesi Theorem A]. These examples tend to satisfy the bound in Equation [\[bconjecture\]](#bconjecture){reference-type="ref" reference="bconjecture"}, and it is indeed an interesting problem to determine whether Equation [\[bconjecture\]](#bconjecture){reference-type="ref" reference="bconjecture"} is satisfied by all aspherical toroidal compactifications of ball quotient surfaces. More generally, this circle of ideas highlights the search for aspherical surfaces of general type as an extremely interesting problem. Unfortunately, the vast majority of known explicit constructions of surfaces of general type involve linear systems and naturally specialize to examples with rational double points, and so cannot be aspherical by Corollary [Corollary 7](#cor: M){reference-type="ref" reference="cor: M"}. In the same vein, if a surface has a positive number of moduli, it is reasonable to expect that in most cases it can be specialized to a surface with rational double points and thus is not aspherical. Since the expected number of moduli of a surface of general type, say $S$, is $10\chi({\mathcal O}_S)-2K^2_S$, it seems even harder to find examples of aspherical surfaces of general type such that $$K^2_S/\chi({\mathcal O}_S)<5.$$ Further exploration of these issues would seem to be one of the most compelling potential directions in the study of complex surfaces of general type and its interaction with low-dimensional topology. The many questions we highlighted here are a clear indication of the depth of our present ignorance and the frustrating lack of relevant examples.
M.A. Barja. Generalized Clifford-Severi inequality and the volume of irregular varieties. *Duke Math. J. 164* (2015), no. 3, 541-568.
M.A. Barja, R. Pardini, L. Stoppino. Surfaces on the Severi line. *J. Math. Pures Appl. (9) 105* (2016), no. 5, 734-743.
W. P. Barth, K. Hulek, C. A. M. Peters, A. Van de Ven. Compact complex surfaces, 2nd ed., *Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge*. A Series of Modern Surveys in Mathematics \[Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics\], vol. 4, Springer-Verlag, Berlin, 2004.
I. Bauer, F. Catanese. On rigid complex surfaces and manifolds. *Adv. Math. 333* (2018), 620-669.
M. Berger. A panoramic view of Riemannian geometry. *Springer-Verlag, Berlin*, 2003. xxiv+824 pp.
C. Birkenhake, H. Lange. Complex abelian varieties. 2nd augmented ed. *Grundlehren der Mathematischen Wissenschaften 302*. Berlin: Springer xii, 635 p. (2004).
F. Catanese. Deformation types of real and complex manifolds, Contemporary trends in algebraic geometry and algebraic topology (Tianjin, 2000), 195-238, Nankai Tracts Math., 5, World Sci. Publishing, River Edge, NJ, 2002.
S. S. Chern. On Curvature and Characteristic Classes of a Riemannian Manifold. *Abh. Math. Semin. Univ. Hamb. 20* (1955), 117-126.
J.- P. Demailly, T. Peternell, M. Schneider. Compact complex manifolds with numerically effective tangent bundles. *J. Algebraic Geom. 3* (1994), no. 2, 295--345.
L. F. Di Cerbo. Finite-volume complex-hyperbolic surfaces, their toroidal compactifications, and geometric applications. *Pacific J. Math. 255* (2012), no. 2, 305-315.
L. F. Di Cerbo, M. Stern. Price Inequalities and Betti numbers growth on manifolds without conjugate points. *Comm. Anal. Geom. 30* (2022), no. 2, 297-334.
H. Donnelly, F. Xavier. On the Differential Form Spectrum of Negatively Curved Riemannian Manifolds. *Amer. J. Math. 106* (1984), no. 1, 169-185.
M. Green, R. Lazarsfeld. Deformation theory, generic vanishing theorems and some conjectures of Enriques, Catanese and Beauville, *Invent. Math. 90* (1987), 389--407.
M. Gromov. Kähler hyperbolicity and $L_2$-Hodge theory. *J. Differential Geom. 33* (1991), no. 1, 263-292.
J. Jost, K. Zuo. Vanishing theorems for $L^2$-cohomology on infinite coverings of compact Kähler manifolds and applications in algebraic geometry. *Comm. Anal. Geom. 8* (2000), no. 1, 1-30.
A. Hatcher. Algebraic Topology. *Cambridge University Press, Cambridge*, 2002, xii+544 pp.
J. Kollár. Shafarevich maps and plurigenera of algebraic varieties. *Invent. Math. 113* (1993), no. 1, 177-215.
J. Kollár. Shafarevich maps and automorphic forms. M. B. Porter Lectures. *Princeton University Press, Princeton, NJ*, 1995, x+201 pp.
J. Kollár, J. Pardon. Algebraic varieties with semialgebraic universal cover. *J. Topol. 5* (2012), no. 1, 199-212.
R. Lazarsfeld. Positivity in algebraic geometry I, *Ergebnisse der Mathematik und ihrer Grenzgebiete*, vol. 48, Berlin: Springer 2004.
Y. Liu, L. Maxim, B. Wang. Aspherical manifolds, Mellin transformation and a question of Bobadilla-Kollár. *J. Reine Angew. Math. 781* (2021), 1-18.
M. Manetti. Degeneration of algebraic surfaces and applications to moduli problems, *edizioni Scuola Normale Superiore (EN) collana Tesi*, 1996. (available at https://www1.mat.uniroma1.it/people/manetti/dispense/tesiperf.pdf)
L. Maxim. On singular generalizations of the Singer--Hopf conjecture. *Math. Nachrichten* (2023), https://doi.org/10.1002/mana.202200322.
M. Mendes Lopes, R. Pardini, The geography of irregular surfaces, *Current developments in algebraic geometry, Math. Sci. Res. Inst. Publ. 59*, Cambridge Univ. Press (2012), 349--378.
D. Mumford, J. Fogarty, F. Kirwan, Geometric invariant theory. *3rd enl. ed. Ergebnisse der Mathematik und ihrer Grenzgebiete. 2. Folge. 34.* Berlin: Springer-Verlag. 320 p. (1994).
R. Pardini, F. Tovena. On the fundamental group of an abelian cover. *Internat. J. Math 6* (1995), no. 5, 767-789.
M. Popa, Ch. Schnell, Generic vanishing theory via mixed Hodge modules. *Forum Math. Sigma 1* (2013), e1, 60pp.
R. Schoen, S.-T. Yau. Lectures on differential geometry. Conference Proceedings and Lecture Notes in Geometry and Topology, I. *International Press, Cambridge, MA*, 1994.
Y. Wang. Ramified covers of varieties with nef cotangent bundle. *C. R. Math. Acad. Paris 360* (2022), 929-932.
T. Zhang. Severi inequality for varieties of maximal Albanese dimension. *Math. Ann. 359* (2014), no. 3-4, 1097-1114.
[^1]: Supported in part by NSF grant DMS-2104662
[^2]: Partially supported by PRIN 2017SSNZAW_004. Member of Gnsaga-INdAM.
| arxiv_math | {
"id": "2310.03129",
"title": "On the Hopf Problem and a Conjecture of Liu-Maxim-Wang",
"authors": "Luca F. Di Cerbo, Rita Pardini",
"categories": "math.AG math.DG math.GT",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We examine the problem of Bragg-edge elastic strain tomography from energy resolved neutron transmission imaging. A new approach is developed for two-dimensional plane-stress and plane-strain systems whereby elastic strain can be reconstructed from its Longitudinal Ray Transform (LRT) as two parts of a Helmholtz decomposition based on the concept of an Airy stress potential. The solenoidal component of this decomposition is reconstructed using an inversion formula based on a tensor filtered back projection algorithm whereas the potential part can be recovered using either Hooke's law or a finite element model of the elastic system. The technique is demonstrated for two-dimensional plane-stress systems in both simulation, and on real experimental data. We also demonstrate that application of the standard scalar filtered back projection algorithm to the LRT in these systems recovers the trace of the solenoidal component of strain and we provide physical meaning for this quantity in the case of 2D plane-stress and plane-strain systems.
author:
- CM Wensrich
- S Holman
- M Courdurier
- WRB Lionheart
- A Polyakova
- I Svetov
bibliography:
- References.bib
title: Direct inversion of the Longitudinal Ray Transform for 2D residual elastic strain fields
---
Strain tomography, Longitudinal Ray Transform, Bragg edge, Neutron transmission
# Introduction and context
Elastic strain imaging via energy-resolved neutron transmission measurement and Bragg-edge analysis forms a natural tensor-tomography problem aimed at reconstructing the full triaxial elastic strain field within a sample from a set of lower-dimensional strain images. This problem has been studied for more than a decade, with various experimental demonstrations on special cases such as axisymmetric systems and *in situ* applied loads (e.g. [@hendriks2017bragg; @abbey2012neutron; @kirkwood2015neutron]), and, more recently, solutions for general systems using Bayesian and least-squares techniques constrained by equilibrium (e.g. [@gregg2018tomographic; @hendriks2019tomographic]).
With reference to Figure [1](#LRTGeom){reference-type="ref" reference="LRTGeom"}, strain images of this type refer to projections of the average of elastic strain, $\epsilon$, along straight-line ray paths through a sample $\Omega$ of the form $$\frac{1}{L} \int_{-\infty}^\infty \epsilon_{ij}(x_0+s\xi)\xi_{i}\xi_{j}ds,
\label{BEStrain}$$ where $L$ is the path-length associated with a ray passing through the point $x_0 \in \Omega$, travelling in the direction $\xi$, and, as in the rest of the paper, we use the summation convention for repeated indices. For convenience, strain outside of the boundary of the sample is assigned a value of zero. From many measurements of this form, we wish to reconstruct the original strain field.
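To make the measurement model concrete, the following minimal Python sketch evaluates the normalised path average above for a single ray through a synthetic two-dimensional strain field supported on the unit disc. The strain field, the sample geometry, and the quadrature are assumptions made purely for illustration; this is not the processing chain used for real Bragg-edge data.

```python
import numpy as np

# Illustrative sketch only: the strain field, sample shape and quadrature
# below are assumptions for the example, not experimental values.

def strain_field(x, y):
    """Hypothetical smooth 2D strain field supported on the unit disc."""
    r2 = x**2 + y**2
    bump = np.where(r2 < 1.0, (1.0 - r2)**2, 0.0)  # vanishes outside the sample
    eps_xx = 1.0e-3 * bump
    eps_yy = -5.0e-4 * bump
    eps_xy = 2.0e-4 * bump * x * y
    return eps_xx, eps_yy, eps_xy

def bragg_edge_measurement(x0, xi, s_max=2.0, n_steps=2001):
    """Average normal strain (1/L) * int xi^T eps xi ds along the ray x0 + s*xi."""
    s = np.linspace(-s_max, s_max, n_steps)
    xs, ys = x0[0] + s * xi[0], x0[1] + s * xi[1]
    exx, eyy, exy = strain_field(xs, ys)
    integrand = exx * xi[0]**2 + 2.0 * exy * xi[0] * xi[1] + eyy * xi[1]**2
    inside = (xs**2 + ys**2) < 1.0              # indicator of the sample
    L = np.trapz(inside.astype(float), s)       # path length through the sample
    return np.trapz(integrand, s) / L if L > 0 else 0.0

theta = np.deg2rad(30.0)                        # ray direction xi
xi = np.array([np.cos(theta), np.sin(theta)])
x0 = np.array([0.1, -0.2])                      # a point on the ray
print(bragg_edge_measurement(x0, xi))
```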
[\[LRTGeom\]]{#LRTGeom label="LRTGeom"} ![Geometry of the Longitudinal Ray Transform and Bragg-edge strain measurements.](LRTGeom.jpg "fig:"){#LRTGeom width="0.3\\linewidth"}
Bragg-edge strain measurements are naturally related to the Longitudinal Ray Transform (LRT), $I$, which can be written for suitable $f \in L^2(\mathcal{S}^m;\mathbb{R}^n)$ as $$If(x_0,\xi)=\int_{-\infty}^\infty f_{{i_1}{i_2}...{i_m}}(x_0+s\xi)\xi_{i_1}\xi_{i_2}...\xi_{i_m} ds,$$ with the extension to all of $L^2(\mathcal{S}^m;\mathbb{R}^n)$ achieved in the usual way (see below for definitions and notation).
Unfortunately, the LRT has a large null space that creates a well-known issue with direct tomographic reconstruction of strain from Bragg-edge imaging and LRT measurements in general [@lionheart2015diffraction]. For $f \in L^2(\mathcal{S}^m;\mathbb{R}^n)$, this null space consists of potential fields of the form $du$ for any $u$ that vanishes at infinity.
The structure of this null space, particularly in the case of bounded support, is important for reconstruction. In this context, we will explore the mechanics of linear elastic systems, and tensor decompositions and inversion formulas related to the LRT. Before we do this, we will begin by introducing notations used throughout the paper.
# Notation and definitions
First, $\Omega$ will be a subset of $\mathbb{R}^n$ possibly with Lipschitz boundary and possibly equal to $\mathbb{R}^n$ in the following definitions, and we will write $\mathbb{S}^{n-1}$ for the unit sphere in $\mathbb{R}^n$. Given a vector $v \in
\mathbb{R}^2$, we write $v^\perp$ for the anti-clockwise rotation of $v$ by 90 degrees. If $v=(v_1,v_2)$, $v^\perp=(-v_2,v_1)$.
For function spaces we use:
- $L^2(\mathcal{S}^m;\Omega)$ -- The space of square-integrable $m$-rank symmetric tensor fields on $\Omega$. We will also use $L^2(\Omega) = L^2(\mathcal{S}^0;\Omega)$ for square-integrable scalar fields;
- $H^k(\mathcal{S}^m;\Omega)$ -- The Sobolev space of square-integrable $m$-rank symmetric tensor fields on $\Omega$ whose weak derivatives up to order $k$ are also square-integrable. For scalar fields we use $H^k(\Omega) = H^k(\mathcal{S}^0;\Omega)$;
- $\mathcal{C}^\infty(\mathcal{S}^m;\Omega)$ -- The space of smooth $m$-rank symmetric tensor fields on $\Omega$ with continuous derivatives of all orders;
- $\mathcal{C}_c^\infty(\mathcal{S}^m;\Omega)$ -- The subspace of $\mathcal{C}^\infty(\mathcal{S}^m;\Omega)$ comprising fields with compact support;
- $H^k_0(\mathcal{S}^m;\Omega)$ -- The closure of $\mathcal{C}_c^\infty(\mathcal{S}^m;\Omega)$ in $H^k(\mathcal{S}^m;\Omega)$.
We also use the following differential operators:
- $d$ -- Symmetric gradient operator. For $f \in \mathcal{C}^\infty(\mathcal{S}^m;\mathbb{R}^n)$, $df \in \mathcal{C}^\infty(\mathcal{S}^{m+1};\mathbb{R}^n)$ will be the symmetric derivative defined in [@sharafutdinov2012integral]. This coincides with the gradient when $m = 0$ and for $u \in \mathcal{C}^\infty(\mathcal{S}^1;\mathbb{R}^n)$ $$(du)_{ij} = \frac{1}{2}\left (\frac{\partial u_{i}}{\partial x_{j}} + \frac{\partial u_{j}}{\partial x_{i}} \right ),$$ or equivalently $du=\tfrac{1}{2}\big(\nabla \otimes u + (\nabla \otimes u)^T\big)$, where $\otimes$ refers to dyadic product and $(\cdot)^T$ refers to the transpose operation;
- $d^\perp$ -- Perpendicular symmetric gradient operator. Note this operator is only defined in dimension $n=2$. For $f \in \mathcal{C}^\infty(\mathcal{S}^m;\mathbb{R}^2)$, $d^{\perp}f \in \mathcal{C}^\infty(\mathcal{S}^{m+1};\mathbb{R}^2)$ is the symmetrisation of the perpendicular gradient of the components of $f$ introduced in [@derevtsov2015tomography].
For $\psi \in \mathcal{C}^\infty(\mathbb{R}^2)$ this is given by $$(d^\perp \psi)_{i} = \frac{\partial \psi}{\partial x^j} e_{ji3}$$ and for $u \in \mathcal{C}^\infty(\mathcal{S}^1,\mathbb{R}^2)$ $$(d^\perp u)_{ij} = \frac{1}{2}\left (\frac{\partial u_{i}}{\partial x_k} e_{kj3} + \frac{\partial u_{j}}{\partial x_k} e_{ki3} \right ),$$ where $e_{ijk}$ is the usual Levi-Civita permutation symbol. Equivalently $d^\perp\psi = \nabla^\perp \psi$ and $d^\perp u=\tfrac{1}{2}\big(\nabla^\perp \otimes u + (\nabla^\perp \otimes u)^T\big)$;
- $\text{Div}$ -- The divergence operator which is the formal adjoint of $-d$ and maps $\mathcal{C}^\infty(\mathcal{S}^{m+1};\mathbb{R}^n) \rightarrow \mathcal{C}^\infty(\mathcal{S}^{m};\mathbb{R}^n)$. This is the contraction of the gradient of a tensor field and for the general formula see [@sharafutdinov2012integral]. For $u \in \mathcal{C}^\infty(\mathcal{S}^1,\mathbb{R}^n)$, $\text{Div}(u)$ is the standard divergence of $u$;
- $\text{Div}^\perp$ -- The perpendicular divergence which is the formal adjoint of $-d^\perp$ and maps $\mathcal{C}^\infty(\mathcal{S}^{m+1};\mathbb{R}^2) \rightarrow \mathcal{C}^\infty(\mathcal{S}^{m};\mathbb{R}^2)$. This is the same as the operator $\delta^\perp$ in [@derevtsov2015tomography].
We additionally say that a tensor field is divergence-free if its divergence is zero. The differential operators are initially defined on smooth tensor fields, but can be extended to fields with distributional coefficients, and in particular for $k\geq 1$ provide continuous operators $d$, $d^\perp: H^k(\mathcal{S}^m;\mathbb{R}^2) \rightarrow H^{k-1}(\mathcal{S}^{m+1};\mathbb{R}^2)$ while $\text{Div}$, $\text{Div}^\perp: H^{k}(\mathcal{S}^{m+1};\mathbb{R}^2) \rightarrow H^{k-1}(\mathcal{S}^{m};\mathbb{R}^2)$.
We will mostly be concerned with tensors of rank either $m=1$ or $2$ and use the standard notations $f:g$ for contraction of 2-rank tensors and $f\cdot g$ for multiplication of a 2-rank tensor with a 1-rank tensor, or the dot product of 1-rank tensors.
We now return to the topic and begin with a review of Helmholtz decomposition and inversion of the LRT, both in general, and in the context of elastic strain in $\mathbb{R}^2$.
# Helmholtz decompositions and LRT inversion formulas
As per [@sharafutdinov2012integral] and others, the null space of the LRT forms part of an orthogonal Helmholtz decomposition of symmetric tensor fields of the form $$\label{decomp}
f=du+{^s}f,$$ where ${^s}f$ is the divergence-free 'solenoidal' component of $f \in L^2(\mathcal{S}^m;\mathbb{R}^n)$, $m\geq 1$, and $u \in H^1(\mathcal{S}^{m-1};\mathbb{R}^n)$ gives the 'potential' part $du$. Here the differential operators are understood to act in the sense of distributions on $\mathbb{R}^n$.
Various inversion formulas exist that can uniquely recover $^sf$ from $If$ (e.g. [@sharafutdinov2012integral; @derevtsov2015tomography; @louis2022inversion]). Sharafutdinov [@sharafutdinov2012integral] provides the general result for $f \in L^2(\mathcal{S}^m;\mathbb{R}^n)$ as $$\label{fullSharafutdinov}
^sf=(-\Delta)^{1/2}\Big[\sum_{k=0}^{[m/2]}c_k(i-\Delta^{-1}d^2)^kj^k\Big]\mu^m If,$$ where $c_k$ are specified scalar coefficients, powers of the Laplacian $(-\Delta)^{1/2}$ and $(-\Delta)^{-1}$ are defined via the Fourier transform, the operators $i$ and $j$ respectively refer to product and contraction with the Kronecker tensor, and $\mu^m$ is the formal adjoint of $I$ when the measure on $\mathbb{S}^{n-1}$ is normalised to one. In practical terms, $\mu^m$ is related to the adjoint of the X-ray transform[^1] (i.e. scalar back-projection), $\mathcal{R}^*$, acting component-wise with back-projections weighted by the dyadic product of $\xi$ with itself $m$ times; $$\mu^m_{i_1i_2...i_m}= \frac{1}{2 \pi^{n/2}}\Gamma\left (\tfrac{n}{2} \right )\mathcal{R}^*\xi_{i_1}\xi_{i_2}...\xi_{i_m}.$$ Note that the constant factor is present because of the normalisation of the measure on $\mathbb{S}^{n-1}$ in [@sharafutdinov2012integral].
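As a concrete illustration of how $\mu^2$ acts when $n=2$, the sketch below back-projects sampled LRT data weighted by $\xi_i\xi_j$ in a parallel-beam geometry. The discretisation (uniform angles on $[0,\pi)$, rays indexed by a signed offset along $\xi^\perp$) is our own assumption, and the overall constant in the definition of $\mu^2$ is deliberately omitted; it should be reinstated according to the normalisation above.

```python
import numpy as np

# Illustrative sketch only (assumed parallel-beam discretisation): the weighted
# back projection R*[xi_i xi_j I(eps)], i.e. mu^2 I(eps) up to the constant above.
# sinogram[k, j] = I(eps) for the ray with direction xi(thetas[k]) and signed
# offset ps[j] measured along xi_perp = (-sin, cos); thetas uniform on [0, pi).

def weighted_backprojection(sinogram, thetas, ps, X, Y):
    """Return the (xx, xy, yy) components of the weighted back projection on the grid X, Y."""
    g_xx = np.zeros_like(X)
    g_xy = np.zeros_like(X)
    g_yy = np.zeros_like(X)
    for k, th in enumerate(thetas):
        xi = np.array([np.cos(th), np.sin(th)])
        # signed offset of each grid point from the ray through the origin
        p_of_xy = -X * xi[1] + Y * xi[0]
        vals = np.interp(p_of_xy.ravel(), ps, sinogram[k],
                         left=0.0, right=0.0).reshape(X.shape)
        g_xx += vals * xi[0] * xi[0]
        g_xy += vals * xi[0] * xi[1]
        g_yy += vals * xi[1] * xi[1]
    dtheta = np.pi / len(thetas)   # uniform angular spacing assumed
    return g_xx * dtheta, g_xy * dtheta, g_yy * dtheta
```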
Inversion formula [\[fullSharafutdinov\]](#fullSharafutdinov){reference-type="eqref" reference="fullSharafutdinov"} recovers ${^s}f$ which is defined on $\mathbb{R}^n$ and in general has unbounded support. However, we are interested in reconstruction and solenoidal decomposition on a bounded domain. Indeed, let $\Omega \subset \mathbb{R}^n$ be a bounded domain with Lipschitz boundary and outward surface normal $n$ on $\partial\Omega$. Similar to ([\[decomp\]](#decomp){reference-type="ref" reference="decomp"}), there is a unique decomposition of $f \in L^2(\mathcal{S}^m;\mathbb{R}^n)$, $m \geq 1$, restricted to this set of the form $$\label{decomp_bd}
f = du_{\Omega} + {^s}f_\Omega + dh_{\Omega} \quad \mbox{on $\Omega$},$$ where $u_\Omega \in H^1_0(\mathcal{S}^{m-1};\Omega)$, $h_{\Omega} \in H^1(\mathcal{S}^{m-1};\Omega)$, known as the 'harmonic part', satisfies $$\text{Div} (d h_\Omega) = 0 \quad \mbox{on $\Omega$},$$ and ${^s}f_{\Omega} \in L^2(\mathcal{S}^m;\Omega)$ satisfies the weak equation $$\label{weakfomega}
\int_\Omega ({^s}f_{\Omega})_{i_1 \ ... \ i_m} \frac{\partial \varphi_{i_1 \ ... \ i_{m-1}}}{{\partial x^{i_m}}} \ \mathrm{d} x = 0 \quad \forall \varphi \in H^1(\mathcal{S}^{m-1};\Omega).$$ For suitable ${^s}f_\Omega$, this is equivalent to ${^s}f_\Omega$ being divergence-free on $\Omega$ and $({^s}f_{\Omega})_{i_1 \ ... \ i_{m}}n_{i_{m}}|_{\partial \Omega} = 0$. Also, it is clear from [\[weakfomega\]](#weakfomega){reference-type="eqref" reference="weakfomega"} that, in this case, ${^s}f_{\Omega}$ extended by zero to $\mathbb{R}^n$ is divergence-free. A key point for our result is that, for fields where the boundary trace makes sense, this extension by zero is only divergence-free when the boundary condition $({^s}f_{\Omega})_{i_1 \ ... \ i_{m}}n_{i_{m}}|_{\partial \Omega} = 0$ holds. For an in depth discussion of weak formulation of the Helmholtz decomposition in the case of $L^2$ vector fields, see [@schweizer].
When the harmonic part vanishes, the solenoidal decomposition on $\Omega$ is related to the one on $\mathbb{R}^n$ as in the following lemma.
**Lemma 1**. *Suppose that $\Omega$ contains the support of $f$. If $h_\Omega$ in [\[decomp_bd\]](#decomp_bd){reference-type="eqref" reference="decomp_bd"} is zero, then ${^s}f$ and $u$ in [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"} are equal to the extension by zero of ${^s}f_\Omega$ and $u_\Omega$ to $\mathbb{R}^n$. Conversely, if ${^s}f$ and $u$ in [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"} are supported in $\Omega$, then $h_\Omega = 0$.*
*Proof.* Assume that decomposition [\[decomp_bd\]](#decomp_bd){reference-type="eqref" reference="decomp_bd"} holds with $h_\Omega = 0$ and extend $u_\Omega$ and ${^s}f_\Omega$ to $u$ and ${^s}f$ on $\mathbb{R}^n$ by setting them equal to zero outside of $\Omega$. By [\[weakfomega\]](#weakfomega){reference-type="eqref" reference="weakfomega"}, ${^s}f$ is then divergence-free on $\mathbb{R}^n$. Since $u_\Omega \in H^1_0(\mathcal{S}^{m-1};\Omega)$ we also have, from the weak formulation, that $u \in H^1(\mathcal{S}^{m-1};\mathbb{R}^n)$ and $d u$ is $d u _\Omega$ extended by zero. Thus, by uniqueness of the decomposition, [\[decomp_bd\]](#decomp_bd){reference-type="eqref" reference="decomp_bd"} implies [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"} in this case.
Conversely, suppose that ${^s}f$ and $u$ in [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"} are supported in $\Omega$ and define $u_\Omega$ and ${^s}f_\Omega$ by restricting to $\Omega$. Then, since $u\in H^1(\mathcal{S}^{m-1};\mathbb{R}^n)$ with support contained in $\Omega$, its restriction $u_\Omega$ is in $H^1_0(\mathcal{S}^{m-1};\Omega)$. Additionally, we can see that [\[weakfomega\]](#weakfomega){reference-type="eqref" reference="weakfomega"} holds because the same must hold for ${^s}f$ on $\mathbb{R}^n$ for any $\varphi \in H^1(\mathcal{S}^{m-1};\mathbb{R}^n)$. By uniqueness of the decomposition we then see that [\[decomp_bd\]](#decomp_bd){reference-type="eqref" reference="decomp_bd"} holds with $h_\Omega = 0$ on $\Omega$ as claimed. ◻
For 2D elastic strain $\epsilon \in L^2(\mathcal{S}^2;\mathbb{R}^2)$, [\[fullSharafutdinov\]](#fullSharafutdinov){reference-type="eqref" reference="fullSharafutdinov"} simplifies to $${^s}\epsilon = \frac{1}{2\pi}(- \Delta)^{1/2}\Big[c_0 + c_1 (\text{\bf{I}}-\Delta^{-1}d^2) tr \Big] I^* I \epsilon,
\label{SharInv}$$ where $c_0 = 3/4, c_1 = -1/4$, $tr$ is the trace operator, $\text{\bf{I}}$ is the 2-rank identity and $I^*=\mathcal{R}^*\xi\otimes\xi$. In comparison, Derevtsov and Svetov [@derevtsov2015tomography] and Louis [@louis2022inversion] consider the case when $\Omega$ is the unit ball in $\mathbb{R}^2$, implicitly assuming that the harmonic part of the field is equal to zero so that ${^s}\epsilon = {^s}\epsilon_\Omega$. In this context, [@louis2022inversion] provides a much simpler inversion formula of the form $$^s\epsilon =
\frac{1}{4 \pi}(- \Delta)^{1/2} I^* I \epsilon,
\label{InvLRT}$$ while Derevtsov and Svetov [@derevtsov2015tomography] provide the same formula [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"} but, due to a typographical error, multiplied by a factor of $2$ on the right side. We should expect that [\[SharInv\]](#SharInv){reference-type="eqref" reference="SharInv"} and [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"} are the same formula and, by Lemma [Lemma 1](#lem_ext){reference-type="ref" reference="lem_ext"}, when the harmonic part $h_\Omega$ of $\epsilon$ is zero they both equal zero outside of $\Omega$. As we will see, when $\epsilon$ is the strain field arising from a stress tensor satisfying equilibrium for a homogeneous and isotropic elastic medium with no boundary traction, this will always be the case.
We now show that [\[SharInv\]](#SharInv){reference-type="eqref" reference="SharInv"} and [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"} are indeed equivalent, extending [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"} to a new setting.
**Lemma 2**. *For $m=n=2$, formulas [\[SharInv\]](#SharInv){reference-type="eqref" reference="SharInv"} and [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"} are equivalent.*
*Proof.* Taking the component-wise Fourier transform with spatial frequency vector $\kappa$, ([\[SharInv\]](#SharInv){reference-type="ref" reference="SharInv"}) can be written $${^s}\hat\epsilon = \frac{1}{2\pi}|\kappa|\Big[c_0 + c_1 \Big(\text{\bf{I}}- \frac{\kappa\kappa^T}{|\kappa|^2} \Big)tr \Big] \hat{g},
\label{FSharInv}$$ where $g = I^* I \epsilon$. Since $^s\epsilon$ is solenoidal ${^s}\hat \epsilon \kappa =0$ and we can write ${^s}\hat \epsilon=\alpha \kappa^\bot (\kappa^\bot)^T$ for some $\alpha \in L^2(\mathbb{R}^2)$. Hence ([\[FSharInv\]](#FSharInv){reference-type="ref" reference="FSharInv"}) becomes $$\alpha \kappa^\bot (\kappa^\bot)^T = \frac{1}{2\pi}|\kappa|\Big[c_0 + c_1 \Big(\text{\bf{I}}- \frac{\kappa\kappa^T}{|\kappa|^2} \Big)tr \Big] \hat{g}.$$ Multiplying by $\kappa^\bot (\kappa^\bot)^T$ and rearranging; $$\alpha \kappa^\bot (\kappa^\bot)^T |\kappa|^2 = {^s}\hat\epsilon |\kappa|^2 = \frac{1}{2\pi}\kappa^\bot (\kappa^\bot)^T |\kappa|\Big[c_0 + c_1 \Big(\text{\bf{I}}- \frac{\kappa\kappa^T}{|\kappa|^2} \Big)tr \Big] \hat g$$ which provides $${^s}\hat\epsilon = \frac{1}{2\pi}\frac{\kappa^\bot (\kappa^\bot)^T}{|\kappa|}\Big[c_0 + c_1 \text{\bf{I}} tr \Big] \hat{g}.$$ Now $g$ is also solenoidal and hence can also be written $\hat g=\beta \kappa^\bot (\kappa^\bot)^T$ for some $\beta \in L^2(\mathbb{R}^2)$; $$\begin{aligned}
{^s}\hat\epsilon &= \frac{1}{2\pi}\frac{\kappa^\bot (\kappa^\bot)^T}{|\kappa|}\Big[c_0 \kappa^\bot (\kappa^\bot)^T + c_1 \text{\bf{I}}|\kappa|^2 \Big] \beta \\
&=\frac{1}{2\pi}c_0 |\kappa| \beta \kappa^\bot (\kappa^\bot)^T +\frac{1}{2\pi} c_1 |\kappa| \beta \kappa^\bot (\kappa^\bot)^T \\
&=\frac{1}{2\pi}|\kappa|(c_0+c_1)\hat g.\end{aligned}$$ In the spatial domain this implies: $$\begin{aligned}
{^s}\epsilon &= \frac{1}{2\pi}(-\Delta)^{1/2}(c_0+c_1)I^*I\epsilon \\
&=\frac{1}{4\pi}(-\Delta)^{1/2}I^*I\epsilon,\end{aligned}$$ which is identical to ([\[InvLRT\]](#InvLRT){reference-type="ref" reference="InvLRT"}). ◻
Given Lemma [Lemma 2](#lem1){reference-type="ref" reference="lem1"}, we use only [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"} which provides a component-wise approach to reconstruction of the solenoidal component of strain of the form $$\label{TFBP}
{^s}\epsilon = \frac{1}{4\pi}\mathcal{R}^* \Lambda \xi \otimes \xi I\epsilon,$$ where $\Lambda$ is the Ram-Lak filter (or similar) used in standard scalar Filtered Back Projection (FBP). This inversion formula applies over all of $\mathbb{R}^2$; however, our primary concern is finite samples, i.e. strain fields with support contained within some bounded set $\Omega \subset \mathbb{R}^2$. Regardless, numerical computation can generally only occur over a bounded domain. Because of this, it is important to know when the harmonic component of the strain vanishes over the computational domain if we are to apply ([\[TFBP\]](#TFBP){reference-type="ref" reference="TFBP"}); in general the components of the Helmholtz decomposition [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"} do not have bounded support even if $f$ does. Before we address this, we first provide a brief review of the mechanics of stress and strain on the plane in the context of this work.
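As a minimal illustrative sketch (not a prescription), ([\[TFBP\]](#TFBP){reference-type="ref" reference="TFBP"}) can be realised component-wise with standard scalar FBP routines. The sketch below assumes the MATLAB Image Processing Toolbox, a sinogram `Ieps` with rows indexing ray offsets and columns indexing the angles `theta` (in degrees); the helper name `solenoidalFBP` and the array conventions are our own, and the discrete normalisation of `iradon` absorbs the analytic constant $1/4\pi$, so absolute scaling should always be checked against the conventions of the data.

```matlab
% Minimal sketch: component-wise FBP reconstruction of the solenoidal strain
% from an LRT sinogram Ieps (rows = ray offsets, columns = angles in theta).
% N is the side length of the square output grid, dx its pixel spacing.
function sEps = solenoidalFBP(Ieps, theta, N, dx)
    c = cosd(theta(:)'); s = sind(theta(:)');      % direction components xi = (c, s)
    % Weight the sinogram by xi_i xi_j, then apply scalar filtered back-projection.
    s11 = iradon(Ieps .* (c.^2), theta, 'linear', 'Ram-Lak', 1, N);
    s12 = iradon(Ieps .* (c.*s), theta, 'linear', 'Ram-Lak', 1, N);
    s22 = iradon(Ieps .* (s.^2), theta, 'linear', 'Ram-Lak', 1, N);
    % iradon assumes unit pixel spacing; divide by dx if Ieps holds physical
    % line integrals rather than pixel sums (set dx = 1 otherwise).
    sEps = cat(3, s11, s12, s22) / dx;
end
```

Applied to a sinogram produced by `radon` on the same grid (with `dx = 1`), this recovers the three unique components of $^s\epsilon$ up to discretisation error.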
# Elasticity theory and residual stress {#sec:elast}
Consider a sample consisting of an elastic body represented by the bounded domain $\Omega$ with outward surface normal $n$. Within $\Omega$ we can decompose the total strain at each point, $\epsilon_T$, into an elastic component, $\epsilon$ and an 'eigenstrain', $\epsilon^*$ (e.g. permanent strain introduced by plasticity, phase change, thermal expansion, etc.) [@korsunsky2017teaching; @mura1982micromechanics] $$\epsilon_T=\epsilon+\epsilon^*.
\label{strain_decomp}$$ The elastic component of strain is related to stress, $\sigma$, through Hooke's law, which in its most general form, can be written in terms of a 4-rank stiffness tensor; $\sigma_{ij}=C_{ijkl}\epsilon_{kl}$. In the isotropic case with Young's modulus $E$ and Poisson's ratio $\nu$ $$C_{ijkl}= \frac{E}{1+\nu} \Big(\frac{\nu}{1-2\nu}\delta_{ij} \delta_{kl} + \frac{1}{2}\left( \delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk}\right)\Big).$$
Governing equations can be assembled for this system on the basis of equilibrium, compatibility of strain and boundary conditions. In the absence of body forces (gravity, magnetism, etc.) mechanical equilibrium holds that $$\label{equilib}
\text{Div}(\sigma) = \text{Div}\big(C:\epsilon\big)=0.$$ The total strain physically originates as the symmetric gradient of a displacement field (i.e. is potential) and can be expressed as $\epsilon_T = du$ for some $u$, where, in general, $u\ne 0$ on $\partial\Omega$. This condition is known as strain 'compatibility' which for a simply connected domain can be expressed as a vanishing Saint-Venant operator[^2], $W(\epsilon_T)=0$, or $$\label{compat}
W(\epsilon)=-W(\epsilon^*).$$ The final ingredient is to specify boundary conditions experienced by the sample. These can vary, but in the case of 'residual stress' problems, the surface of the sample is typically free of any traction $$\label{BC}
\sigma \cdot n = \big(C:\epsilon\big) \cdot n=0 \text{ on } \partial\Omega.$$ Equations ([\[equilib\]](#equilib){reference-type="ref" reference="equilib"}), ([\[compat\]](#compat){reference-type="ref" reference="compat"}) and ([\[BC\]](#BC){reference-type="ref" reference="BC"}) together form an elliptic boundary value problem for $\epsilon$ based on a known eigen-strain $\epsilon^*$.
While $\sigma$ and $\epsilon$ are inherently three-dimensional in nature, there are two typical limiting assumptions on the plane that have practical utility:
1. Plane-strain conditions ($\epsilon_{i3}=0 \quad \forall i)$;
2. Plane-stress conditions ($\sigma_{i3}=0 \quad \forall i$).
In both of these cases, we can define $\sigma$ in terms of a scalar Airy stress potential, $\psi \in H^2(\Omega)$ in such a way that it automatically satisfies equilibrium: $$\label{Airy1}
\sigma=(d^\perp)^2\psi.$$ It follows from Hooke's law that, in the isotropic case, strain can also be written in terms of this same potential as $$\label{planestess}
\epsilon=\frac{1}{E}\Big((d^\perp)^2-\nu d^2\Big) \psi$$ for plane-stress conditions, or $$\label{planestrain}
\epsilon=\frac{1+\nu}{E}\Big((1-\nu)(d^\perp)^2-\nu d^2\Big) \psi$$ in the case of plane-strain. Our focus is the recovery of this tensor from its LRT.
# Helmholtz decomposition of strain in $\mathbb{R}^2$
Now we seek to connect the stress $\sigma$ and strain $\epsilon$ initially defined only on the bounded set $\Omega$ to the solenoidal decomposition [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"} on all of $\mathbb{R}^2$. Given that the stress $\sigma$ satisfies [\[equilib\]](#equilib){reference-type="eqref" reference="equilib"} in the classical sense (i.e. is twice differentiable) on $\Omega$ and satisfies the traction-free boundary condition [\[BC\]](#BC){reference-type="eqref" reference="BC"}, in fact $\sigma$ extended as zero outside of $\Omega$ is divergence free in the distributional sense and is therefore its own solenoidal part with no potential part if decomposed according to [\[decomp\]](#decomp){reference-type="eqref" reference="decomp"}. Our goal in this section is to use this fact, together with [\[planestess\]](#planestess){reference-type="eqref" reference="planestess"} or [\[planestrain\]](#planestrain){reference-type="eqref" reference="planestrain"} to find the solenoidal decomposition of $\epsilon$. The issue is that the Airy stress potential $\psi$ appearing in [\[Airy1\]](#Airy1){reference-type="eqref" reference="Airy1"} may not satisfy the same equation distributionally when extended as zero to $\mathbb{R}^2$. The next lemma shows that when the traction-free boundary condition [\[BC\]](#BC){reference-type="eqref" reference="BC"} is satisfied, in fact there is an Airy stress potential which extends as zero.
**Lemma 3**. *Suppose that $\sigma \in L^2(\mathcal{S}^m;\mathbb{R}^2)$ has support contained in a bounded and simply connected set $\Omega$ and satisfies [\[equilib\]](#equilib){reference-type="eqref" reference="equilib"} in the distributional sense on $\mathbb{R}^2$. Then there exists unique $\psi \in H^2(\mathbb{R}^2)$ such that $\mathrm{supp}(\psi) \subset \Omega$ and $$\label{sigma_pot}
\sigma = (d^\perp)^2 \psi \quad \mbox{on $\mathbb{R}^2$.}$$ Furthermore, $$\label{psi_cont}
\|\psi\|_{H^2(\mathbb{R}^2)} \leq M \|\sigma\|_{L^2(\mathcal{S}^2;\mathbb{R}^2)}.$$ for a constant $M>0$ which depends on $\Omega$ but not $\sigma$.*
*Proof.* First consider the case when $\sigma \in \mathcal{C}_c^\infty(\mathcal{S}^m;\mathbb{R}^2)$ satisfies [\[equilib\]](#equilib){reference-type="eqref" reference="equilib"} and has support contained in $\Omega$ which is itself inside an open ball $B_R$ of radius $R$ centred at the origin. The two columns of $\sigma$, $\sigma_{i1}$ and $\sigma_{i2}$, are divergence free vector fields on $\mathbb{R}^2$ and so the path integrals of $e_{ik3} \sigma_{ij} dx_k$ between any two points are independent of path due to Green's theorem. For $x_0 \in \partial B_R$ and any $x \in \mathbb{R}^2$, we define new functions via the path integrals $$\label{path1}
\phi_j(x) = \int_{x_0}^x e_{ik3} \sigma_{ij} dx_k$$ in which the path is left unspecified. Defining the vector field $\phi = (\phi_1, \phi_2)$ it follows, due to path independence and the fundamental theorem of calculus, that $$\label{dphi}
\frac{\partial \phi_j}{\partial x_k} = e_{ik3} \sigma_{ij}.$$ Additionally, since $\Omega$ is simply connected, for any $x \in \mathbb{R}^2 \setminus \Omega$ we can choose a path from $x_0$ to $x$ outside of $\Omega$ and by its path integral definition [\[path1\]](#path1){reference-type="eqref" reference="path1"}, we have $\phi(x) = 0$. Thus, we conclude that $\phi$ is also supported in $\Omega$.
Next, from [\[dphi\]](#dphi){reference-type="eqref" reference="dphi"} we obtain $$\frac{\partial \phi_1}{\partial x_1} + \frac{\partial \phi_2}{\partial x_2} = \text{Div}(\phi) = 0.$$ This implies as before that line integrals of $e_{ik3} \phi_i\ d x_k$ between two points are independent of path, and we define $$\psi(x) = \int_{x_0}^x e_{jl3} \phi_j \ d x_l.$$ Also as before, this implies that $\psi$ is supported in $\Omega$ and $$\frac{\partial \psi}{\partial x_l} = e_{jl3} \phi_j.$$ Putting together the previous construction and using path independence we see that $\psi$ is directly related to $\sigma$ by the formula $$\psi(x_1,x_2) = \iint_{\{s<x_1, \ t >x_2\}} \sigma_{12}(s,t) \ \mathrm{d}s \ \mathrm{d} t.$$ Since the support of $\sigma$ is bounded, we can restrict the area of integration in the previous integrals to bounded rectangles, and then use the Cauchy-Schwarz inequality to prove [\[psi_cont\]](#psi_cont){reference-type="eqref" reference="psi_cont"} where the constant $M$ depends only on the size of $\Omega$.
We have now proved the lemma for the case when $\sigma$ is smooth. For $\sigma \in L^2(\mathcal{S}^m;\mathbb{R}^2)$ we approximate by a sequence $\sigma_j \in \mathcal{C}_c^\infty(\mathcal{S}^m;\mathbb{R}^2)$ of divergence free fields such that $\sigma_j \rightarrow \sigma$ in $L^2(\mathcal{S}^m;\mathbb{R}^2)$ and each $\sigma_j$ is supported within $2^{-j}$ of $\Omega$. By [\[psi_cont\]](#psi_cont){reference-type="eqref" reference="psi_cont"} the corresponding potentials $\psi_j$ also converge in $H^2(\mathbb{R}^2)$ to a function $\psi$ and by continuity of the derivatives from $H^2$ to $L^2$ we see that [\[sigma_pot\]](#sigma_pot){reference-type="eqref" reference="sigma_pot"} also holds. The supports of the potentials will also shrink to $\Omega$ and so we see that the support of $\psi$ is contained in $\Omega$.
Finally, note that from [\[sigma_pot\]](#sigma_pot){reference-type="eqref" reference="sigma_pot"} the potential $\psi \in H^2(\mathbb{R}^2)$ satisfies the biharmonic equation $$\Delta^2 \psi = (\text{Div}^\perp)^2 \sigma.$$ This equation has a unique solution in $H^2(\mathbb{R}^2)$ and so the proof is complete. ◻
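As a practical aside, the explicit formula for $\psi$ appearing in the proof lends itself to a simple numerical construction when $\sigma_{12}$ is sampled on a regular grid. The sketch below is illustrative only; it assumes MATLAB, columns indexing $x_1$ and rows indexing $x_2$ (increasing), and treats the inclusive/exclusive boundary of the integration region as a discretisation detail.

```matlab
% Minimal sketch: Airy potential from
%   psi(x1,x2) = integral over {s < x1, t > x2} of sigma12(s,t) ds dt,
% with sigma12 on a regular grid of spacing dx (columns = x1, rows = x2).
function psi = airyFromSigma12(sigma12, dx)
    innerT = flipud(cumsum(flipud(sigma12), 1));   % integral over t > x2 (rows)
    psi    = cumsum(innerT, 2) * dx^2;             % integral over s < x1 (columns)
end
```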
From Lemma [Lemma 3](#SupportLemma){reference-type="ref" reference="SupportLemma"}, we can conclude the following:
If a two dimensional residual elastic strain field on the bounded domain $\Omega$ experiences no boundary traction, its extension by zero to all of $\mathbb{R}^2$ has a unique Helmholtz decomposition of the form $$\label{StrainDecomp}
\epsilon=d\omega + {^s\epsilon}$$ where ${^s\epsilon}$ and $d\omega$ are compactly supported within the complement of the unbounded component of the complement of $\Omega$ (when $\Omega$ is simply connected this is equal to $\Omega$). By uniqueness and comparison to [\[planestess\]](#planestess){reference-type="eqref" reference="planestess"} and [\[planestrain\]](#planestrain){reference-type="eqref" reference="planestrain"}, this decomposition can be written in terms of the Airy stress potential as $$\begin{aligned}
\omega&=-\frac{\nu}{E}d\psi \\
\label{PStressAsStress}
^s\epsilon&=\frac{1}{E}(d^\perp)^2\psi,\end{aligned}$$ for plane-stress, or $$\begin{aligned}
\omega&=-\frac{\nu(1+\nu)}{E}d\psi \\
\label{PStrainAsStress}
^s\epsilon&=\frac{1-\nu^2}{E}(d^\perp)^2\psi,\end{aligned}$$ in the case of plane-strain. Note that in each case $^s\epsilon$ is proportional to $\sigma$.
From a reconstructed solenoidal component, we would like to recover the full elastic strain tensor over a sample. Before we approach this task, we provide a brief comment on recent experimental work in this area.
# Isotropic strain and scalar Filtered Back Projection
Some recent work in Bragg-edge strain tomography has approached this problem through an assumption that strain is isotropic at all points within the sample; i.e. $\epsilon=\bar\epsilon \hspace{0.3ex} \text{\bf{I}}$ for some scalar mean strain $\bar\epsilon$. This assumption is plainly false in almost all cases; the only hydrostatic stress field (and hence strain field) that satisfies equilibrium is constant for all $x$. However, the assumption does allow for a direct means of reconstruction by standard scalar FBP since $I\epsilon=\mathcal{R}\bar\epsilon$ for this case.
For example, in Busi *et al* [@busi2022bragg] the authors perform a slice-by-slice FBP to recover an assumed isotropic strain within an additively manufactured stainless steel cube from a set of 19 Bragg-edge strain images. Similarly, Zhu *et al* [@zhu2023bragg] recover an assumed scalar isotropic strain in a laser welded steel sample using a similar technique.
Clearly the assumption of isotropic strain was invalid in both cases; however, the question remains: What has been recovered? How does the scalar FBP of the LRT relate to the strain field within the sample?
To answer this question, we examine the trace of the solenoidal component of elastic strain in ([\[TFBP\]](#TFBP){reference-type="ref" reference="TFBP"}) to obtain the following (note that $|\xi|=1$); $$\begin{aligned}
{^s\epsilon}_{kk} %&= I^*_{kk}\Lambda I \epsilon \\
&=\frac{1}{4 \pi} \mathcal{R}^*\xi_k\xi_k \Lambda I \epsilon \\
&= \frac{1}{4 \pi}\mathcal{R}^*\Lambda I \epsilon\end{aligned}$$ Hence the recovered scalar field stemming from an isotropic assumption is precisely the trace of the (in-plane) solenoidal component, and in general there are no further conclusions that can be made.
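Numerically, this observation amounts to the statement that scalar FBP applied directly to the LRT sinogram reproduces the sum of the diagonal components of the tensorial reconstruction. A brief MATLAB check, reusing the hypothetical `solenoidalFBP` sketch above (with `Ieps`, `theta` and `N` assumed in the workspace, and with `iradon` absorbing the analytic constants), might read:

```matlab
% Sketch: scalar FBP of the LRT sinogram versus the trace of the
% component-wise reconstruction (identical up to floating-point error,
% since the directional weights satisfy cos^2 + sin^2 = 1).
traceDirect = iradon(Ieps, theta, 'linear', 'Ram-Lak', 1, N);
sEps        = solenoidalFBP(Ieps, theta, N, 1);
traceCheck  = sEps(:,:,1) + sEps(:,:,3);
fprintf('max abs difference: %g\n', max(abs(traceDirect(:) - traceCheck(:))));
```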
However, if the strain field is inherently two-dimensional, we can extend this result by considering stress in terms of the Airy potential. As before, under plane-stress or plane-strain conditions, $^s\epsilon$ can be interpreted through the natural Helmholtz decompositions ([\[PStressAsStress\]](#PStressAsStress){reference-type="ref" reference="PStressAsStress"}) and ([\[PStrainAsStress\]](#PStrainAsStress){reference-type="ref" reference="PStrainAsStress"}). From this perspective, it follows that for plane-stress $$\frac{1}{4\pi}\mathcal{R}^*\Lambda I \epsilon=\frac{1}{E}\sigma_{kk}$$ and for plane-strain $$\frac{1}{4\pi}\mathcal{R}^*\Lambda I \epsilon=\frac{1-\nu^2}{E}\sigma_{kk}.$$
# Recovery of $\epsilon$ from $^s\epsilon$
We now turn our attention to the problem of recovering $\epsilon$ from $^s\epsilon$ using the constraints provided by elasticity theory. To this end, we present three approaches.
## Recovery of $\epsilon$ from compatibility
Applying the Saint-Venant operator to [\[StrainDecomp\]](#StrainDecomp){reference-type="eqref" reference="StrainDecomp"} implies $W(\epsilon)=-W(\epsilon^*)=W({^s}\epsilon)$ and we can replace the compatibility relation ([\[compat\]](#compat){reference-type="ref" reference="compat"}) to form a boundary value problem for $\epsilon$; $$\label{equilibrium}
\begin{cases}
\text{Div}(C:\epsilon)=0 & \text{(Equilibrium)} \\
W(\epsilon)=W({^s\epsilon}) & \text{(Compatibility)}\\
(C:\epsilon) n =0 \text{ on } \partial\Omega &\text{(Boundary condition)}
\end{cases}$$
Under two-dimensional plane-stress or plane-strain conditions we can satisfy equilibrium via [\[planestess\]](#planestess){reference-type="eqref" reference="planestess"} or [\[planestrain\]](#planestrain){reference-type="eqref" reference="planestrain"}, and the compatibility condition becomes a non-homogeneous bi-harmonic equation $$\Delta^2 \psi = \frac{\partial^4\psi}{\partial x_1^4}+\frac{\partial^4\psi}{\partial x_2^4}+2\frac{\partial^4\psi}{\partial x_1^2 \partial x_2^2} = E(\nabla^\perp)^T {^s\epsilon} \nabla^\perp,
\label{biharmonic}$$ subject to the boundary condition $$(d^\perp)^2\psi \cdot n = 0 \text{ on } \partial\Omega.$$ Potentially this provides a direct approach to recover $\epsilon$ through numerical solution. However, it should be recognised that computing the right hand side of [\[biharmonic\]](#biharmonic){reference-type="eqref" reference="biharmonic"} involves taking second order numerical derivatives. In the presence of experimental uncertainty, this is likely to be a very unstable process.
## Recovery of the potential component
An alternate approach involves the recovery of the potential part of $\epsilon$ using equilibrium. From ([\[StrainDecomp\]](#StrainDecomp){reference-type="ref" reference="StrainDecomp"}) and ([\[equilibrium\]](#equilibrium){reference-type="ref" reference="equilibrium"}), the equilibrium of the system implies $$\begin{aligned}
\text{Div}\big(C:(d\omega+{^s\epsilon})\big)&=0,\end{aligned}$$ which leads to an elliptic boundary value problem for $\omega$ of the form $$\begin{aligned}
\label{OmegaEqu}
\text{Div}(C:d\omega)=b \\
\label{OmegaBC}
\omega = 0 \text{ on } \partial \Omega\end{aligned}$$ where $b=-\text{Div}(C:{{^s}\epsilon})$.
This is in the form of a standard structural elasticity problem for $\omega$ as a displacement field resulting from a distributed body force and trivial Dirichlet boundary condition. For 2D plane-stress conditions $$\begin{aligned}
\label{bx}
b_1&=-\frac{E}{1-\nu^2}\Big(\frac{\partial{{^s}\epsilon_{11}}}{\partial x_1} + \nu \frac{\partial{{^s}\epsilon_{22}}}{\partial x_1} + (1-\nu) \frac{\partial{{^s}\epsilon_{12}}}{\partial x_2}\Big), \\
b_2&=-\frac{E}{1-\nu^2}\Big(\nu\frac{\partial{{^s}\epsilon_{11}}}{\partial x_2} + \frac{\partial{{^s}\epsilon_{22}}}{\partial x_2} + (1-\nu) \frac{\partial{{^s}\epsilon_{12}}}{\partial x_1}\Big).
\label{by}\end{aligned}$$ In contrast to the previous approach, calculation of $b$ only involves computing first derivatives, and hence is potentially a much more stable process.
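A sketch of this calculation is given below; it assumes MATLAB, periodic spectral derivatives on an $N\times N$ grid of spacing `dx`, and our own ordering of the strain components as $[{^s}\epsilon_{11},\,{^s}\epsilon_{12},\,{^s}\epsilon_{22}]$ along the third array dimension. The function name and conventions are illustrative rather than definitive.

```matlab
% Minimal sketch: body force b = -Div(C : sEps) for 2D plane stress,
% with first derivatives of the solenoidal strain computed via the FFT.
function [b1, b2] = bodyForce(sEps, dx, E, nu)
    N = size(sEps, 1);
    k = (2*pi/(N*dx)) * ifftshift(-floor(N/2):ceil(N/2)-1);  % FFT-ordered frequencies
    [k1, k2] = meshgrid(k, k);                 % k1 ~ x1 (columns), k2 ~ x2 (rows)
    D = @(f, kk) real(ifft2(1i*kk .* fft2(f)));
    d11_1 = D(sEps(:,:,1), k1);  d11_2 = D(sEps(:,:,1), k2);
    d22_1 = D(sEps(:,:,3), k1);  d22_2 = D(sEps(:,:,3), k2);
    d12_1 = D(sEps(:,:,2), k1);  d12_2 = D(sEps(:,:,2), k2);
    c  = E/(1 - nu^2);
    b1 = -c*(d11_1 + nu*d22_1 + (1 - nu)*d12_2);
    b2 = -c*(nu*d11_2 + d22_2 + (1 - nu)*d12_1);
end
```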
## Recovery of $\epsilon$ from Hooke's law
By far the most direct means for recovering $\epsilon$ from $^s\epsilon$ is through Hooke's law. Recognising that $\sigma=(d^\perp)^2\psi$ and applying Hooke's law to [\[PStressAsStress\]](#PStressAsStress){reference-type="eqref" reference="PStressAsStress"} and [\[PStrainAsStress\]](#PStrainAsStress){reference-type="eqref" reference="PStrainAsStress"}, we can write $$\begin{aligned}
\label{HookeRecon1}
\epsilon_{11}&={^s\epsilon}_{11}-\nu{^s\epsilon}_{22} \\
\label{HookeRecon2}
\epsilon_{22}&={^s\epsilon}_{22}-\nu{^s\epsilon}_{11} \\
\label{HookeRecon3}
\epsilon_{12}&=(1+\nu){^s\epsilon}_{12}\end{aligned}$$ for plane-stress conditions, or $$\begin{aligned}
\epsilon_{11}&={^s\epsilon}_{11}+\frac{\nu}{1-\nu}{^s\epsilon}_{22} \\
\epsilon_{22}&={^s\epsilon}_{22}+\frac{\nu}{1-\nu}{^s\epsilon}_{11} \\
\epsilon_{12}&=\frac{1}{1-\nu}{^s\epsilon}_{12}\end{aligned}$$ for plane-strain.
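In code, this final approach is a pointwise algebraic map. A minimal sketch under plane-stress conditions, using the same $[11,12,22]$ component ordering assumed in the earlier sketches, is:

```matlab
% Sketch: full elastic strain from the solenoidal component via Hooke's law
% (plane stress); use the plane-strain coefficients above for that case.
eps11 = sEps(:,:,1) - nu*sEps(:,:,3);
eps22 = sEps(:,:,3) - nu*sEps(:,:,1);
eps12 = (1 + nu)*sEps(:,:,2);
```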
# Numerical demonstration: Simulated data
## Strain fields
Numerical demonstrations of the above process were performed on three synthetic two-dimensional plane-stress strain fields. The first of these fields was generated over the unit disk from an Airy stress potential of the form $$\psi=e^{-\alpha((x+1/4)^2+y^2)}-e^{-\alpha((x-1/4)^2+y^2)},$$ with $\alpha=15$, and elastic properties $E=1$ and $\nu=0.34$. The three unique components of this strain field are shown in Figure [5](#AiryField){reference-type="ref" reference="AiryField"}a.
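For reference, a field of this kind can be generated with a few lines of MATLAB. The sketch below is illustrative: the grid matches the $400\times400$, 0.006-spacing description given later, the second derivatives of $\psi$ are taken numerically with `gradient`, and the truncation to the unit disk is a rough approximation justified by the rapid decay of the Gaussian potential.

```matlab
% Sketch: plane-stress strain field generated from the Airy potential
% psi = exp(-a((x+1/4)^2+y^2)) - exp(-a((x-1/4)^2+y^2)), a = 15, E = 1, nu = 0.34.
N = 400; dx = 0.006; a = 15; E = 1; nu = 0.34;
x = ((1:N) - (N + 1)/2) * dx;
[X, Y] = meshgrid(x, x);
psi = exp(-a*((X + 0.25).^2 + Y.^2)) - exp(-a*((X - 0.25).^2 + Y.^2));
[px,  py ] = gradient(psi, dx);          % px = d(psi)/dx1, py = d(psi)/dx2
[pxx, pxy] = gradient(px,  dx);
[~,   pyy] = gradient(py,  dx);
eps11 = (pyy - nu*pxx)/E;                % strain from sigma = (d_perp)^2 psi
eps22 = (pxx - nu*pyy)/E;                % via Hooke's law (plane stress)
eps12 = -(1 + nu)*pxy/E;
inDisk = (X.^2 + Y.^2) <= 1;             % extend by zero outside the unit disk
eps11 = eps11.*inDisk; eps22 = eps22.*inDisk; eps12 = eps12.*inDisk;
```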
![A reconstruction of a synthetic strain field computed from an Airy stress field. (a) The original strain field. (b) A reconstruction of the solenoidal component of this field from a simulated LRT consisting of 200 equally spaced projections over 360$^\circ$. (c) The recovered potential component from elastic finite element modelling. (d) The reconstructed strain field formed by the sum of the solenoidal and potential components.](EquiField3_Original.jpg "fig:"){#AiryField width="0.24\\linewidth"} ![(b)](EquiField3_Solenoidal.jpg "fig:"){width="0.24\\linewidth"} ![(c)](EquiField3_Potential.jpg "fig:"){width="0.24\\linewidth"} ![(d)](EquiField3_Reconstruction.jpg "fig:"){width="0.24\\linewidth"}
The second and third fields corresponded to finite element simulations of physical samples that were the focus of prior experimental work [@gregg2018tomographic]. All relevant details can be found in the reference; however, a brief description of each sample is as follows:
1. *Crushed Ring*: A sample formed by plastically deforming an initially stress-free steel ring along its diameter. The geometry of the sample and applied deformation is shown in Figure [6](#Samples){reference-type="ref" reference="Samples"}a. The residual strain field in this sample originates from a distributed eigen-strain related to plastic deformation (see Figure [10](#CRFEA){reference-type="ref" reference="CRFEA"}a)
2. *Offset Ring-and-Plug*: A cylindrical steel sample constructed by shrink-fitting an oversize cylindrical 'plug' into an undersize hole that is offset from the centreline (see Figure [6](#Samples){reference-type="ref" reference="Samples"}b). The strain field within this sample originates from the interference between the offset ring and the plug (see Figure [14](#RPFEA){reference-type="ref" reference="RPFEA"}a). In the context of ([\[strain_decomp\]](#strain_decomp){reference-type="ref" reference="strain_decomp"}), the interference imposes a discrete eigen-strain with localised support on the interface.
![Two samples representing strain fields used to perform numerical demonstrations of the reconstruction algorithm. (a) A crushed steel ring containing a distributed eigen-strain field. (b) An offset ring and plug system containing a discrete eigen-strain field generated through mechanical interference.](SampGeom.pdf "fig:"){#Samples width="0.6\\linewidth"}
![A reconstruction of a synthetic strain field computed from an elasto-plastic finite element model of the crushed ring. (a) The original strain field. (b) A reconstruction of the solenoidal component of this field from a simulated LRT consisting of 200 equally spaced projections over 360$^\circ$. (c) The recovered potential component from elastic finite element modelling. (d) The reconstructed strain field formed by the sum of the solenoidal and potential components.](CrushedRingFEA_Original.jpg "fig:"){#CRFEA width="0.24\\linewidth"} ![(b)](CrushedRingFEA_Solenoidal.jpg "fig:"){width="0.24\\linewidth"} ![(c)](CrushedRingFEA_Potential.jpg "fig:"){width="0.24\\linewidth"} ![(d)](CrushedRingFEA_Reconstruction.jpg "fig:"){width="0.24\\linewidth"}
![A reconstruction of a synthetic strain field computed from a linear-elastic finite element model of the offset ring and plug system. (a) The original strain field. (b) A reconstruction of the solenoidal component of this field from a simulated LRT consisting of 200 equally spaced projections over 360$^\circ$. (c) The recovered potential component from elastic finite element modelling. (d) The reconstructed strain field formed by the sum of the solenoidal and potential components.](RingPlugFEA_Original.jpg "fig:"){#RPFEA width="0.24\\linewidth"} ![(b)](RingPlugFEA_Solenoidal.jpg "fig:"){width="0.24\\linewidth"} ![(c)](RingPlugFEA_Potential.jpg "fig:"){width="0.24\\linewidth"} ![(d)](RingPlugFEA_Reconstruction.jpg "fig:"){width="0.24\\linewidth"}
Both samples were 14mm thick and were simulated as steel with $E=209$GPa, $\nu=0.34$ and a yield stress of 650MPa. The finite element model for the first sample required a non-linear solve based on an elasto-plastic material model, while the second sample was modelled using linear-elasticity. Both models were built and solved in the software package PTC/Creo.
All three strain fields were represented as three scalar components mapped to regular two-dimensional grids. The sizes and resolutions of these grids were as follows: Airy -- $400\times400$, spacing 0.006; Crushed Ring -- $500\times500$, spacing 48$\mu$m; Ring and Plug -- $521\times521$, spacing 50$\mu$m. In each case, all three strain components were extended by zero outside the sample boundaries.
What follows is a demonstration of the reconstruction of these fields from synthetic LRT data.
## Procedure
The demonstrations were carried out with the help of the Matlab '`radon`' and '`iradon`' functions. In this context, the implementation followed the process below (a condensed sketch of the main steps is given after the list):
1. Forward map the LRT of the strain field by successive application of the '`radon`' Matlab function for each individual projection angle. i.e. for a given projection angle $\theta$: $$I\epsilon(s,\theta)=\mathcal{R}[\cos^2\theta\epsilon_{11} +2 \cos\theta\sin\theta\epsilon_{12}+\sin^2\theta\epsilon_{22}]$$
2. Component-wise back-project the resulting strain-sinogram to compute the three unique components of $^s\epsilon$ using the FBP algorithm as implemented in the '`iradon`' intrinsic Matlab function (as per [\[InvLRT\]](#InvLRT){reference-type="eqref" reference="InvLRT"}).
3. Calculate a first reconstruction of $\epsilon$ from $^s\epsilon$ based on Hooke's law using [\[HookeRecon1\]](#HookeRecon1){reference-type="eqref" reference="HookeRecon1"}, [\[HookeRecon2\]](#HookeRecon2){reference-type="eqref" reference="HookeRecon2"} and [\[HookeRecon3\]](#HookeRecon3){reference-type="eqref" reference="HookeRecon3"}.
4. Calculate derivatives of $^s\epsilon$ by first transforming the individual components to the Fourier domain using the '`fft2`' and '`fftshift`' intrinsic Matlab functions. These transformed components are then multiplied by appropriate $\kappa$-space filters corresponding to $\partial/\partial x_1$ and $\partial/\partial x_2$ before transforming back to the spatial domain using '`fftshift`' and '`ifft2`'.
5. From these derivatives, calculate the two components of the vector $b$ using [\[bx\]](#bx){reference-type="eqref" reference="bx"} and [\[by\]](#by){reference-type="eqref" reference="by"}.
6. [\[FEAmodel\]]{#FEAmodel label="FEAmodel"}Using the Matlab PDE solver, calculate a finite element solution for the displacement field $\omega$ satisfying ([\[OmegaEqu\]](#OmegaEqu){reference-type="ref" reference="OmegaEqu"}) and ([\[OmegaBC\]](#OmegaBC){reference-type="ref" reference="OmegaBC"}) subject to the calculated vector field $b$.
7. Calculate a second reconstruction for $\epsilon$ as the sum $\epsilon={^s\epsilon}+d\omega$, where $d\omega$ is computed from the shape functions within the finite element solution.
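The condensed sketch referred to above illustrates steps 1-3 only; it assumes MATLAB, strain components `eps11`, `eps12`, `eps22` and Poisson's ratio `nu` already in the workspace (for example, as generated in the earlier Airy-field sketch), and 200 equally spaced projection angles. The finite element recovery of the potential component (steps 4-7) is omitted here.

```matlab
% Sketch of steps 1-3: angle-wise forward LRT, component-wise FBP, and the
% Hooke's-law (plane stress) reconstruction.
N     = size(eps11, 1);
theta = (0:199) * (360/200);                       % 200 angles over 360 degrees
nS    = numel(radon(eps11, 0));                    % projection length for this grid
Ieps  = zeros(nS, numel(theta));
for kk = 1:numel(theta)
    c = cosd(theta(kk)); s = sind(theta(kk));
    combo = c^2*eps11 + 2*c*s*eps12 + s^2*eps22;   % epsilon contracted with xi (x) xi
    Ieps(:, kk) = radon(combo, theta(kk));
end
c = cosd(theta); s = sind(theta);
sEps = cat(3, iradon(Ieps .* (c.^2), theta, 'linear', 'Ram-Lak', 1, N), ...
              iradon(Ieps .* (c.*s), theta, 'linear', 'Ram-Lak', 1, N), ...
              iradon(Ieps .* (s.^2), theta, 'linear', 'Ram-Lak', 1, N));
epsRec = cat(3, sEps(:,:,1) - nu*sEps(:,:,3), ...  % Hooke's-law reconstruction
                (1 + nu)*sEps(:,:,2), ...
                sEps(:,:,3) - nu*sEps(:,:,1));
```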
The target element size for the finite element model in step [\[FEAmodel\]](#FEAmodel){reference-type="ref" reference="FEAmodel"} was set to be 0.5% of the maximum sample dimensions. This was conservatively chosen through a standard mesh-independence investigation.
## Results
In all three cases the reconstructions based on Hooke's law and the finite element recovery of the potential component were visually indistinguishable from each other. However, the reconstruction based on Hooke's law was slightly more accurate in terms of a root-mean-square error.
Figures [5](#AiryField){reference-type="ref" reference="AiryField"}, [10](#CRFEA){reference-type="ref" reference="CRFEA"} and [14](#RPFEA){reference-type="ref" reference="RPFEA"} show the results of this process based on simulated LRT data from 200 equally spaced angular projections over 360$^\circ$. Each figure shows the original strain field together with the reconstructed solenoidal component, the recovered potential component, and the final reconstruction based on the sum of the two.
It was also interesting to note that, in each case, the reconstructed solenoidal component was approximately zero outside the sample boundary (as expected from Lemma [Lemma 3](#SupportLemma){reference-type="ref" reference="SupportLemma"}). This is examined further in Section [8.5](#Support){reference-type="ref" reference="Support"} below.
![The Saint-Venant operator as applied to the reconstructed solenoidal components (top row) compared to the same for the original strain fields (bottom row).](EquiField3_eta.jpg "fig:"){#Etaplot width="0.32\\linewidth"} ![](CrushedRingFEA_eta.jpg "fig:"){width="0.32\\linewidth"} ![](RingPlugFEA_eta.jpg "fig:"){width="0.32\\linewidth"}\
![](EquiField3_etaOriginal.jpg "fig:"){width="0.32\\linewidth"} ![](CrushedRingFEA_etaOriginal.jpg "fig:"){width="0.32\\linewidth"} ![](RingPlugFEA_etaOriginal.jpg "fig:"){width="0.32\\linewidth"}
The difference between the reconstructions and the original field was small; typically around 1-5% of the maximum value of the original components. However, it was observed that this did not significantly decrease along with the number of projections. The source of this persistent discrepancy was discretisation error related to minor deviations from the equilibrium relation introduced by various interpolations onto the regular grid. This is examined further in the following section.
Figure [20](#Etaplot){reference-type="ref" reference="Etaplot"} shows the computed Saint-Venant incompatibility of the reconstructed solenoidal components compared to that of the original strain fields for all three cases. These images were calculated using a similar transform-filter-transform approach in the Fourier domain.
The Airy stress field shows incompatibility distributed over the sample domain, whereas the other two samples show more localised support. In the case of the crushed-ring, this is likely to have originated from localised plastic shear within the elasto-plastic finite element model, while the offset ring-and-plug indicates a clear dipole around the circumference of the plug corresponding to the interference.
As expected, the incompatibility of the reconstructed solenoidal components is identical to that of the original fields to within a small amount of numerical noise.
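In two dimensions the Saint-Venant operator reduces to the single non-trivial scalar $\eta = \partial^2\epsilon_{11}/\partial x_2^2 + \partial^2\epsilon_{22}/\partial x_1^2 - 2\,\partial^2\epsilon_{12}/\partial x_1\partial x_2$. A sketch of the transform-filter-transform computation, under the same spectral conventions as the body-force sketch above (the function name is ours), is:

```matlab
% Sketch: the single non-trivial 2D incompatibility component
%   eta = d2(e11)/dx2^2 + d2(e22)/dx1^2 - 2 d2(e12)/dx1 dx2,
% evaluated by transforming, filtering and transforming back in Fourier space.
function eta = incompatibility2D(e11, e12, e22, dx)
    N = size(e11, 1);
    k = (2*pi/(N*dx)) * ifftshift(-floor(N/2):ceil(N/2)-1);
    [k1, k2] = meshgrid(k, k);                 % k1 ~ x1 (columns), k2 ~ x2 (rows)
    D2 = @(f, ka, kb) real(ifft2(-(ka.*kb) .* fft2(f)));
    eta = D2(e11, k2, k2) + D2(e22, k1, k1) - 2*D2(e12, k1, k2);
end
```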
## Reconstruction in the presence of measurement uncertainty
A further set of simulations was carried out in order to examine the behaviour of reconstructions in the presence of Gaussian noise. In this respect, both approaches were found to be quite stable and converged to the original field with an increasing number of projections (notwithstanding the discretisation error identified earlier).
Although not strictly necessary, slight improvement was found by limiting the order of terms in the numerical derivatives used to compute $b$. This was achieved by cutting-off the $\kappa$-space filters for frequencies above a certain threshold. A cut-off frequency equal to 0.7 times the maximum magnitude provided a good compromise between noise and fidelity.
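A sketch of such a cut-off, applied to the spectral derivative filters of the earlier body-force computation (frequency grids `k1`, `k2` as built there; whether the threshold is taken radially or per axis is a choice made here purely for illustration), is:

```matlab
% Sketch: hard low-pass cut-off at 0.7 of the maximum frequency magnitude,
% applied to the spectral derivative filters used when computing b.
kmax    = max(abs(k1(:)));
lowpass = double(sqrt(k1.^2 + k2.^2) <= 0.7*kmax);
Dlp     = @(f, kk) real(ifft2((1i*kk.*lowpass) .* fft2(f)));
```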
![The overall error in the reconstruction of the Airy stress field in the presence of 10% Gaussian measurement noise as a function of the number of projections. The relative error is computed as the root-mean-square of the residual divided by the root-mean-square of the original strain field over all components. Dotted lines show the minimum error possible for the given mesh density (calculated using 50,000 projections with no added noise). ](RelError.jpg){#RelErr width="0.95\\linewidth"}
For the Airy stress field, Figure [21](#RelErr){reference-type="ref" reference="RelErr"} shows the convergence of the reconstructed fields as a function of the number of projections in the presence of Gaussian random noise with a standard deviation of 10% of the maximum LRT value. Results from three systems are shown, corresponding to different spatial resolutions (i.e. grid sizes). In each case, the relative error is observed to decrease at $\mathcal{O}(n^{-1/2})$ until the lower limit corresponding to the discretisation error is reached.
Generally speaking, the reconstruction based on Hooke's law had a lower persistent error and the size of the persistent error was observed to be directly related to the resolution of the grid.
It should be noted that, in the presence of noise the calculation of the Saint-Venant operator was found to be inherently unstable regardless of any reasonable cut-off frequency used in the relevant filters.
## Boundary traction and compact support {#Support}
In order to examine the effect of the boundary conditions, a further set of simulations were carried out on the strain field specified in Appendix A of Gregg *et al* [@gregg2017tomographic] with $e_0=R=1$ (see Figure [27](#BCFigure){reference-type="ref" reference="BCFigure"}a). This is an axi-symmetric 'plane-stress' strain field on the unit disk originating from the hydrostatic eigen-strain $$\epsilon^*_{rr}=\epsilon^*_{\theta\theta}=(1-r)^2,$$ and subject to a zero traction boundary condition (i.e. $\sigma_{rr}(1)=0$). In polar coordinates it has the form $$\begin{aligned}
\epsilon_{rr}&=\frac{7+5\nu+(1+\nu)(9r-16)r}{12}-(1-r)^2 \\
\epsilon_{\theta\theta}&=\frac{7+5\nu+(1+\nu)(3r-8)r}{12}-(1-r)^2.\end{aligned}$$
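For completeness, a sketch generating this field (and the hydrostatically shifted version used below) on a Cartesian grid is given here; the polar components are converted to Cartesian ones using $\epsilon_{r\theta}=0$, and the grid size and variable names are our own choices.

```matlab
% Sketch: the axisymmetric plane-stress test field on the unit disk, in
% Cartesian components, plus a copy with a hydrostatic strain of 0.2 added.
N = 400; nu = 0.34;
x = linspace(-1, 1, N); [X, Y] = meshgrid(x, x);
r  = sqrt(X.^2 + Y.^2); th = atan2(Y, X); in = r <= 1;
err = (7 + 5*nu + (1 + nu)*(9*r - 16).*r)/12 - (1 - r).^2;   % eps_rr
ett = (7 + 5*nu + (1 + nu)*(3*r -  8).*r)/12 - (1 - r).^2;   % eps_theta_theta
e11 = (err.*cos(th).^2 + ett.*sin(th).^2) .* in;
e22 = (err.*sin(th).^2 + ett.*cos(th).^2) .* in;
e12 = ((err - ett).*cos(th).*sin(th))     .* in;
e11h = e11 + 0.2*in;  e22h = e22 + 0.2*in;  e12h = e12;      % violates traction-free BC
```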
![Demonstration of the effect of the no-traction boundary condition on reconstruction. (a) An axisymmetric residual 'plane-stress' strain field that satisfies the no-traction boundary condition (see [@gregg2017tomographic]), along with (b) its reconstruction, and (c) the residual LRT between these fields. (d) The same field with an additional hydrostatic component that violates the no-traction condition, along with (e) a failed reconstruction and (f) the non-zero residual.](AxiSym_Original.jpg "fig:"){#BCFigure width="0.24\\linewidth"} ![(b)](AxiSym_HookeReconstruction.jpg "fig:"){width="0.24\\linewidth"} ![(d)](AxiSymPlusHydro_Original.jpg "fig:"){width="0.24\\linewidth"} ![(e)](AxiSymPlusHydro_HookeReconstruction.jpg "fig:"){width="0.24\\linewidth"}\
![(c)](AxiSym_Sinogram.jpg "fig:"){width="0.49\\linewidth"} ![(f)](AxiSymPlusHydro_Sinogram.jpg "fig:"){width="0.49\\linewidth"}
A simulated reconstruction based on 1000 equally spaced LRT projections from a $400\times400$ Cartesian grid is shown in Figure [27](#BCFigure){reference-type="ref" reference="BCFigure"}b. As expected, the reconstructed strain matches the original field accurately and the support of the reconstruction is contained within the boundary of the sample. Outside of the boundary, the reconstructed solenoidal component was around three orders-of-magnitude smaller than the original field.
Figure [27](#BCFigure){reference-type="ref" reference="BCFigure"}c shows the residual between the LRT of the original field and the reconstruction.
Figure [27](#BCFigure){reference-type="ref" reference="BCFigure"}d shows the same field with the addition of a constant hydrostatic strain of magnitude $\bar\epsilon=0.2$. Like the original field, this altered version satisfies equilibrium at all points within the sample, however it clearly violates the traction-free boundary condition since $| \sigma \cdot n |=0.2$ on $\partial\Omega$.
An attempted reconstruction of this field based on the same process is shown in Figure [27](#BCFigure){reference-type="ref" reference="BCFigure"}e. A visual inspection of the result clearly indicates the reconstruction has failed to reproduce the original field.
It is also interesting to note that the reconstructed field is far from zero outside the boundary of the sample. This observation, together with Lemma [Lemma 1](#lem_ext){reference-type="ref" reference="lem_ext"} suggests that the apparent support of $^s\epsilon$ reconstructed from data gives a reliable indicator of the existence of a harmonic potential component, and hence the appropriateness of the traction-free assumption for a given experimental system.
It is also clear that the LRT of the reconstructed solenoid does not match that of the original field. Figure [27](#BCFigure){reference-type="ref" reference="BCFigure"}f shows the difference between these two sinograms computed with $^s\epsilon$ masked to zero outside the boundary. The residual is of a significant magnitude and appears to correspond directly to the added hydrostatic/harmonic component. This poses an interesting question: Given the harmonic component is compatible, can it be recovered through reconstruction of a non-zero boundary condition similar to the process carried out by Wensrich *et al* [@hendriks2017bragg]? This question will form the focus of future work in this area.
# Numerical demonstration: Experimental data
As a final demonstration, the reconstruction approach was applied to experimental data measured from the physical samples using the RADEN energy resolved imaging instrument within the Materials and Life Sciences institute at the J-PARC spallation neutron source in Japan [@shinohara2020energy]. All relevant details of this experiment are described in Gregg *et al* [@gregg2018tomographic]. The outcome of this experiment was measured strain-sinograms from the crushed-ring and offset ring-and-plug samples corresponding to a set of 50 golden-angle projections. As per [\[BEStrain\]](#BEStrain){reference-type="eqref" reference="BEStrain"}, these measurements correspond to average strain along ray-paths, which require multiplication by appropriate values of $L$ to compute the LRT (see Figure [33](#RealData){reference-type="ref" reference="RealData"}a and [33](#RealData){reference-type="ref" reference="RealData"}b).
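In practice, this conversion is a pointwise rescaling of the measured sinograms; a minimal sketch, assuming arrays `avgStrain` and `L` of matching size (ray offsets $\times$ projection angles), reads:

```matlab
% Sketch: LRT sinogram from path-averaged Bragg-edge strain measurements.
% L holds the irradiated path length of each ray through the sample.
Ieps = avgStrain .* L;
Ieps(~isfinite(Ieps) | L == 0) = 0;   % rays missing the sample carry no signal
```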
![Reconstruction of residual strain fields from real data. (a) and (b) Measured LRT data from the crushed-ring and offset ring-and-plug samples using Bragg-edge strain imaging on the RADEN energy-resolved neutron imaging instrument [@gregg2018tomographic]. (c) and (e) Reference measurements from each sample taken using traditional neutron diffraction based strain measurement techniques on the KOWARI engineering diffractometer (see [@hendriks2019robust]). (d) and (f) Reconstructed strain fields formed by the sum of the reconstructed solenoidal and recovered potential components.](CrushedRingRealData_Sinogram.jpg "fig:"){#RealData width="0.49\\linewidth"} ![(b)](RingPlugRealData_Sinogram.jpg "fig:"){width="0.49\\linewidth"}\
![(c)](CrushedRingKOWARI_Original.jpg "fig:"){width="0.24\\linewidth"} ![(d)](CrushedRingRealData_Reconstruction.jpg "fig:"){width="0.24\\linewidth"} ![(e)](RingPlugKOWARI_Original.jpg "fig:"){width="0.24\\linewidth"} ![(f)](RingPlugRealData_Reconstruction.jpg "fig:"){width="0.24\\linewidth"}
Figure [33](#RealData){reference-type="ref" reference="RealData"}d and [33](#RealData){reference-type="ref" reference="RealData"}f show the results of the reconstruction based on Hooke's law compared to traditional neutron diffraction based strain measurements from the KOWARI engineering diffractometer at the Australian Centre for Neutron Scattering within the Australian Nuclear Science and Technology Organisation [@kirstein2010kowari]. This reference data (Figure [33](#RealData){reference-type="ref" reference="RealData"}c and [33](#RealData){reference-type="ref" reference="RealData"}e) is in the form of interpolated/inferred fields computed from scattered measurements using a technique that guarantees equilibrium is satisfied at each point [@hendriks2019robust].
Overall, the reconstructions perform well in terms of magnitude and distribution within the limits of resolution. In particular, they show remarkable similarity to the results of previous work from the same data by Gregg *et al* [@gregg2018tomographic] based on constrained least squares optimisation of Fourier basis functions.
# Conclusion
A direct link has been established between the concept of Airy stress potentials in two-dimensional elastic systems and the standard Helmholtz decomposition at the heart of the LRT and its null space.
Through this lens, direct approaches for the reconstruction of two-dimensional elastic strain fields from LRT data have been developed and demonstrated. Using a tensorial version of standard FBP, a solenoidal (divergence free) component of the strain field can be recovered, which can then be used to recover the original field through the application of Hooke's law or a process involving the numerical solution of a standard elasticity problem. In simulation, both approaches were found to be robust to measurement noise. Both approaches also performed well on real experimental data.
From this perspective it was also possible to identify the result of standard scalar FBP when applied to LRT measurement as the trace of the solenoidal component. In some situations (e.g. plane-stress or plane-strain) this can be related to the trace of the stress tensor, however in general, more information is required to bring meaning to such a reconstruction in a three-dimensional system.
# Acknowledgements
This work is supported by the Australian Research Council through a Discovery Project Grant (DP170102324). Access to the RADEN and KOWARI instruments was made possible through the respective user access programs of J-PARC and ANSTO (J-PARC Long Term Proposal 2017L0101 and ANSTO Program Proposal PP6050).
Contributions from W Lionheart and S Holman were supported by the Engineering and Physical Sciences Research Council through grant EP/V007742/1.
Contributions from A Polyakova and I Svetov were supported by the framework of the government assignment of the Sobolev Institute of Mathematics, project FWNF-2022-0009.
Contributions from Matias Courdurier were partially supported by ANID Millennium Science Initiative Program through Millennium Nucleus for Applied Control and Inverse Problems NCN19-161.
The authors would also like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme Rich and Non-linear Tomography: A Multidisciplinary Approach when work on this paper was undertaken. This program was supported by EPSRC grant number EP/R014604/1.
While in Cambridge, all authors received support from the Simons Foundation. C Wensrich would also like to thank Clare Hall for their support and hospitality over this period.
[^1]: Equivalent to the Radon transform in 2D.
[^2]: The Saint-Venant operator is defined by $$W_{ijkl}(f)=\frac{\partial^2f_{ij}}{\partial x_k\partial x_l} + \frac{\partial^2f_{kl}}{\partial x_i\partial x_j} -\frac{\partial^2f_{il}}{\partial x_j\partial x_k} - \frac{\partial^2f_{jk}}{\partial x_i\partial x_l}.$$ In $\mathbb{R}^3$, this simplifies to six unique components specified by the 2-rank symmetric incompatibility tensor $Rf=\nabla \times (\nabla \times f)^T$, or component-wise $[Rf]_{ij}=e_{kpi}e_{lqj}\nabla_p\nabla_q f_{kl}$ where $e_{ijk}$ is the Levi-Civita permutation symbol.
In a simply connected domain in $\mathbb{R}^n$, $W(f)=0$ if and only if $f=du$ for some $u$. On a multiply connected domain with $k$ holes, $n(n+1)k/2$ additional integral constraints are required along with $W(f)=0$ to imply $f=du$ (see [@yavari2013compatibility Proposition 2.8]).
---
abstract: |
This paper mainly concerns the frequency-preserving Kolmogorov-Arnold-Moser (KAM) theorem under irregular continuity with respect to the parameter. Instead of digging out domains or requiring uniform weak convexity of the frequency mapping, we introduce the concept of relative singularity, which allows many explicit parameterized Hamiltonian systems that admit arbitrarily weak regularity. The KAM iterative scheme employed here is quasi-linear, differing from the usual linear one. We also construct a number of counterexamples to emphasize the indispensability of our new conditions for frequency-preserving. Moreover, we show the parallel applicability of our results to the partial frequency-preserving and infinite-dimensional cases.\
\
**Keywords:** Invariant torus of Hamiltonian system, frequency-preserving, partial frequency-preserving, weak regularity, relative singularity.\
**2020 Mathematics Subject Classification:** 37J40, 70H08, 70K43
author:
- Zhicheng Tong $^{\mathcal{z},1}$, Yong Li $^{*,\mathcal{x},1,2}$
title: "*A sharp frequency-preserving KAM theorem with continuous dependence on parameters and several counterexamples*"
---
# Introduction and the main KAM results {#FPKAMintro}
The celebrated KAM theory, which stems from the works of Kolmogorov and Arnold [@MR0140699; @MR0163025; @MR0170705; @MR0068687] and Moser [@MR4201442; @MR0147741; @MR0199523], is motivated by stability problems in celestial mechanics. To be more precise, it mainly concerns the preservation of quasi-periodic trajectories of Hamiltonian systems under small perturbations. Small divisors, the most difficult part, can be well controlled via a linear (Newtonian) iteration that admits super-exponential convergence. So far, KAM theory has been well developed and widely applied to a variety of dynamical systems and PDEs; we refer to Kuksin [@MR0911772], Eliasson [@MR1001032], Pöschel [@MR1022821], Wayne [@MR1040892], Bourgain [@MR1316975; @MR1345016], Kuksin and Pöschel [@MR1370761], among others, for some fundamental developments. We also mention the readable survey of Sevryuk [@MR2078576]. As to recent advancements and summaries in this direction, see Bambusi and Grébert [@MR2272975], Berti et al [@MR3112201], Khesin et al [@MR3269186], Eliasson et al [@MR3357183], Chierchia and Procesi [@MR4570702] and the references therein for instance.
Focusing on the question of how much of the dynamics can be preserved, it is well known since classic KAM theory that, for an analytic integrable Hamiltonian system, the majority of quasi-periodic tori survive small analytic perturbations. However, the toral frequencies generally drift when subjected to perturbations (see Cheng and Sun [@MR1302146], Xu et al [@MR1483538], etc.), unless the Hamiltonian system under consideration satisfies certain nondegeneracy (e.g., the Kolmogorov nondegeneracy condition in [@MR0163025; @MR0068687; @MR0147741]). In the absence of such nondegeneracy, finding a KAM torus possessing the same frequency as the unperturbed one is not a trivial issue, while being fundamental in KAM theory. Another instance of the same challenge is weakening the continuous dependence in parameterized settings, e.g., from bidirectional Lipschitz regularity (see Kuksin and Pöschel [@MR1370761], Berti and Biasco [@MR2819413], Grébert and Thomann [@MR2837120], Liu and Yuan [@MR2842962] for instance) to Hölder regularity, or even arbitrarily weak regularity, with respect to the parameter. On this aspect, by introducing a certain topological degree condition and a (uniform) weak convexity condition, Du et al [@DL] obtained a frequency-preserving KAM theorem under Hölder continuity with respect to the parameter, through a parameter translation technique and a quasi-linear (quasi-Newtonian) KAM iterative scheme. Building upon this contribution, very recently, Tong et al extended the result by assuming only a modulus of continuity for the Hamiltonian systems, and, simultaneously, Liu et al investigated mappings with the intersection property and established a frequency-preserving version of Moser's theorem.
It should be mentioned that requiring only the topological degree condition is not enough, as shown by the counterexample given in [@DL]. However, the uniformity of the weak convexity condition might not allow wide-ranging applications in dynamical systems. One of the main motivations of this paper is to weaken this condition to a *Relative singularity condition*, thereby enabling us to construct many explicit parameterized Hamiltonian systems admitting frequency-preserving KAM tori, while still requiring only continuous dependence on the parameter. One will also see later that our main KAM theorem (Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}) possesses a sharpness that is mainly reflected in the indispensability of the assumptions; see the counterexamples constructed in Section [2](#FPKAMCONTEREXAMPLES){reference-type="ref" reference="FPKAMCONTEREXAMPLES"}. Moreover, we provide further counterexamples aimed at discussing various fundamental problems related to frequency-preserving. As a consequence, we give a basically complete study of frequency-preserving KAM in the parameterized setting.
Let us now turn to the main topic. Consider the parameterized family of perturbed Hamiltonian systems $$\label{FPKAMHamilton}
\left\{ \begin{gathered}
H:G \times {\mathbb{T}^n} \times \mathcal{O} \to {\mathbb{R}^1}, \hfill \\
H\left( {y,x,\xi,\varepsilon } \right) = \left\langle {\omega \left( \xi \right),y} \right\rangle + \varepsilon P\left( {y,x,\xi } \right), \hfill \\
\end{gathered} \right.$$ where $x$ is the angle variable in the standard torus $\mathbb{T}^n : = {\mathbb{R}^n}/2\pi {\mathbb{Z}^n}$, $y$ is the action variable in $G\subset {\mathbb{R}^n}$, $\xi$ is a parameter in $\mathcal{O} \subset {\mathbb{R}^m}$, and $n,m \in \mathbb{N}^+$. Here $G$ and $\mathcal{O}$ are bounded connected closed domains with interior points. Moreover, $\omega \left( \cdot \right)$ is continuous about $\xi$ on $\mathcal{O}$, $P\left( { \cdot , \cdot ,\xi } \right)$ is real analytic about $y$ and $x$ on $G \times {\mathbb{T}^n}$, $P\left( {y,x, \cdot } \right)$ is only continuous about the parameter $\xi$, and $\varepsilon>0$ is sufficiently small. Such Hamiltonian systems are usually called parameterized ones.
For the sake of convenience, we introduce the following notations. Denote by $|\cdot|$ the sup-norm for vectors in $\mathbb{R}^n$, without causing ambiguity. In the relevant limit process, $f_1(x)=\mathcal{O}^{\#}\left(f_2(x)\right)$ means that there are absolute positive constants $\ell_1$ and $\ell_2$ such that ${\ell _1}{f_2}\left( x \right) \leqslant {f_1}\left( x \right) \leqslant {\ell _2}{f_2}\left( x \right)$, while $f_1(x)=\mathcal{O}\left(f_2(x)\right)$ (or $f_1(x) \lesssim f_2(x)$) means that there exists an absolute positive constant $\ell_3$ such that $|f_1(x)| \leqslant \ell_3 f_2(x)$ (or $f_1(x) \leqslant \ell_3 f_2(x)$). Finally, $f_1(x)=o\left(f_2(x)\right)$ is equivalent to $\lim f_1(x)/f_2(x)=0$.
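As a quick illustration of this notation (all limits taken as $x \to 0^+$): $$2x + x^2 = \mathcal{O}^{\#}(x), \qquad x^2 = \mathcal{O}(x), \qquad x^2 = o(x), \qquad \text{while } x \ne \mathcal{O}(x^2).$$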
Let us start with the following assumptions:
(H1) \[Internal condition\] For $\Upsilon \in \left(\omega(\mathcal{O})\right)^o$ given in advance, there exists $\xi_0 \in \mathcal{O}^o$ such that $\Upsilon = \omega \left( {{\xi _0}} \right)$ admits Diophantine nonresonance as $$\label{FPKAMDIO}
\left| {\left\langle {k,\omega \left( {{\xi _0}} \right)} \right\rangle } \right| \geqslant \gamma {\left| k \right|^{ - \tau }},\;\;\gamma > 0,\;\;\tau>\max\{n-1,1\},\;\;\forall 0 \ne k \in {\mathbb{Z}^n}.$$
(H2) \[Relative singularity condition\] There exists a neighborhood $\mathcal{V} \subset \mathcal{O}$ of $\xi_0$, such that the following holds (allowing the supremum to be continuously supplemented according to sup-limit) $$\mathop {\sup }\limits_{\xi \ne \zeta ,\xi ,\zeta \in \mathcal{V}} \frac{{\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n}} \left| {P\left( {y,x,\xi } \right) - P\left( {y,x,\zeta } \right)} \right|}}{{\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|}} <+\infty.$$
(H3) \[Controllability condition\] Assume that $\varphi \left( \delta \right): = \sup\left\{ { \left| {\xi - \zeta } \right|:\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right| \leqslant \delta } \right\}$ is continuously defined on $[0, \tau]$ with some $0<\tau\leqslant1/2$, and is monotonically increasing, and satisfies $\varphi(0)=0$ and $\varphi(x)>0$ for $x \in (0, \tau]$. Moreover, $\varphi$ has certain integrability as $$- \int_0^{{\tau}} {\frac{{\varphi (x)}}{{x\ln x}}dx} < + \infty .$$
Let us now make some comments on (H2) and (H3), which considerably weaken the usual requirements on the frequency mapping; they are labeled (C1) to (C4) below.
- (C1) In general, for perturbations $\varepsilon P(y,x,\xi)$ having Lipschitz continuity with respect to the parameter, it is always required that the frequency mapping $\omega(\xi)$ be bidirectionally Lipschitz, e.g., see [@MR1370761; @MR2837120; @MR2842962; @MR2819413] and so on. This brings strong limitations in practical applications because such a frequency mapping $\omega(\xi)$ is essentially locally linear.
At the same time, the results given in [@DL; @PRM; @TD] also cannot deal with a frequency mapping $\omega(\xi)$ of good regularity (or with degeneracy, i.e., locally smoother than linear), and sometimes $\omega(\xi)$ has to be nowhere differentiable. As an example, for $P(y,x,\xi)=\xi^3$ in the case $n=1$, the modulus of continuity $\varpi_1 (\sigma)$ of $P$ introduced there corresponds to the Lipschitz modulus $\sigma$, although $P$ is smoother at $0$. Then the (uniform) weak convexity modulus $\varpi_2(\sigma)$ of the frequency mapping $\omega(\xi)$ must be at least $\sigma$, which essentially forces local linearity, as mentioned before. Unfortunately, if the perturbation has weaker regularity with respect to $\xi$, e.g., the Hölder type $P(y,x,\xi)=|\xi|^{1/2}$, then the frequency mapping would have to be, at the least, nowhere differentiable.
Particularly, our (H2) and (H3) overcome this essential difficulty, allowing the frequency mapping to have degeneracy. Note that the translation of a mapping does not change its properties, so below we discuss only the asymptotic properties near $0$. (i) Consider the $n=1$ case with $P(y,x,\xi)=\xi^3$; then $\omega(\xi)=\xi^3$ satisfies (H2) and (H3) (a worked check of this example is given after these comments), and one observes that the inverse ${\omega ^{ - 1}}\left( \xi \right) = {\xi ^{1/3}}$ is not Lipschitz, but admits Hölder continuity with (optimal) Hölder exponent $1/3$. Therefore it is not a bidirectional Lipschitz mapping. (ii) Moreover, it is easy to see that (H2) does not require the frequency mapping to be too bad (e.g., nowhere differentiable), only that it has a relative singularity with respect to the perturbation. As illustrations, in the case $n=1$, take $P(y,x,\xi)={\rm{sign}(\xi)}|\xi|^{1/2}$ with $\omega(\xi)=\xi^{1/3}$, or even $P\left( {y,x,\xi } \right) = {\left| \xi \right|^\alpha }$ with $\omega \left( \xi \right) = {\rm{sign}(\xi)}{\left( { - \ln |\xi| } \right)^{ -\beta }}$ (here $\omega(0):=0$) for all $\alpha,\beta>0$. Leaving aside the singularities at $0$, both of these frequency mappings are $C^\infty$ (note that they are only defined on $[0,\tau]\subset[0,1/2]$).
- (C2) For example, the linear mapping $\omega(\xi)=a\xi +b \in \mathbb{R}^n$ admits $\varphi(\delta)=\mathcal{O}^{\#}(\delta)$ as $\delta \to 0^+$ whenever no component of $a$ is $0$ (we call this a non-degenerate linear mapping). But if some component of $a$ is $0$ (the degenerate case), then a continuous function $\varphi(\delta)$ satisfying $\varphi(0)=0$ as in (H3) does not exist; e.g., for $\omega(\xi)=(0,\xi_2)$ in the case $n=m=2$, $|\omega(\xi)-\omega(\zeta)|=|\xi_2-\zeta_2| \leqslant \delta \ll 1$ does not imply that $|\xi-\zeta|\geqslant |\xi_1-\zeta_1|$ is small. As to a general local (near $\xi_0$) injection $\omega(\xi)$, one way to determine the order of $\varphi(\delta)$ is to consider its local inverse $\omega^{-1}(\xi)$; then $\varphi(\delta)$ corresponds to the modulus of continuity of $\omega^{-1}(\xi)$, i.e., $\varphi \left( \delta \right) = {\sup _{\left| {\xi - \zeta } \right| \leqslant \delta }}\left| {{\omega ^{ - 1}}\left( \xi \right) - {\omega ^{ - 1}}\left( \zeta \right)} \right|$.
- (C3) In view of the integrability in (H3), a critical case in $\mathbb{R}^1$ is $$\notag
\omega \left( \xi \right) = \bar \omega + \text{sign}\left( \xi \right)\exp \left( { - {{\left| \xi \right|}^{ - \alpha }}} \right)$$ with $\xi \in \left[ { - 1,1} \right]$ and $\alpha > 0$, where $\bar \omega$ is Diophantine. Obviously, it is very smooth at $0$, i.e., all derivatives vanish at $0$. One can verify that $\varphi \left( \delta \right) =\mathcal{O}^\#({\left( { - \ln \delta } \right)^{ - {\alpha ^{ - 1}}}})$ as $\delta \to0^+$, thus (H3) is satisfied due to $$\label{FPKAMXING}
- \int_0^{{\tau}} {\frac{{\varphi (x)}}{{x\ln x}}dx} =\mathcal{O}^\#\left(\int_0^{{\tau}} {\frac{1}{{x{{\left( { - \ln x} \right)}^{1 + {\alpha ^{ - 1}}}}}}dx}\right) =\mathcal{O}^\#\left({\int_1^{ + \infty } {\frac{1}{{{t^{1 + {\alpha ^{ - 1}}}}}}dt} }\right) < + \infty ,$$ whenever $0<\tau \leqslant 1/2$. The criticality is mainly reflected in the convergence rate of the integrand in [\[FPKAMXING\]](#FPKAMXING){reference-type="eqref" reference="FPKAMXING"}. Actually, one can construct much more critical cases via the same approach.
- (C4) As previously shown, (H3) is mainly proposed to accommodate situations where the frequency mapping is too smooth. When the perturbations under consideration have weak regularity with respect to the parameter, such as at most Lipschitz or merely Hölder, then (H3) automatically holds in view of (H2). In other words, the corresponding KAM theorem does not require (H3) in this case (see [@TD; @DL; @PRM] for similar cases, although the conditions required there are much stronger than in this paper), e.g., Corollary [**Corollary** 1](#FPKAMCORO1){reference-type="ref" reference="FPKAMCORO1"} concerning Hölder regularity.
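As an illustration of how (H2) and (H3) are verified in practice, here is a worked check (a sketch, not part of the formal statements) for the example in Comment (C1), i.e., $n=1$, $P(y,x,\xi)=\xi^3$ and $\omega(\xi)=\bar\omega+\xi^3$ near $\xi_0=0$ with $\bar\omega$ Diophantine; the added constant $\bar\omega$ does not affect (H2) and (H3). For (H2), $$\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n}} \left| {P\left( {y,x,\xi } \right) - P\left( {y,x,\zeta } \right)} \right| = \left| {\xi ^3 - \zeta ^3} \right| = \left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|,$$ so the quotient in (H2) is identically $1$. For (H3), since $|\xi^3-\zeta^3| = |\xi-\zeta|\,|\xi^2+\xi\zeta+\zeta^2| \geqslant \tfrac{1}{4}|\xi-\zeta|^3$, one may take $\varphi(\delta)=\mathcal{O}^{\#}(\delta^{1/3})$ as $\delta \to 0^+$, and $$- \int_0^{\tau} {\frac{{\varphi (x)}}{{x\ln x}}dx} \lesssim - \int_0^{\tau} {\frac{1}{{{x^{2/3}}\ln x}}dx} \lesssim \int_0^{\tau} {\frac{1}{{{x^{2/3}}}}dx} < + \infty.$$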
Now, our main KAM results in this paper, namely frequency-preserving KAM in the parameterized settings, are stated as follows.
****Theorem** 1** (Frequency-preserving KAM). *Assume (H1) to (H3). Then there exists a sufficiently small ${\varepsilon _0} > 0$ such that, for any $0 < \varepsilon < {\varepsilon _0}$, one can find some ${\xi^* }$ near $\xi_0$ such that the perturbed Hamiltonian system $H\left( {y,x,{\xi^* },\varepsilon } \right)$ with parameter $\xi^*$ in [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} admits an analytic, quasi-periodic, invariant torus with toral frequency $\Upsilon=\omega \left( {{\xi _0}} \right)$. Moreover, ${\xi ^ * } = {\xi _0} +\mathcal{O}( { - \int_0^{\varepsilon } {\frac{{\varphi (x)}}{{x\ln x}}dx} } )$ as $\varepsilon \to 0^+$.*
****Remark** 1**. *In fact, beyond the neighborhood $\mathcal{V}$ (given in (H2)) of the parameter $\xi_0$ from (H1), the regularity of the frequency mapping and of the perturbation with respect to the parameter $\xi$ can be arbitrarily weak, allowing, e.g., nowhere differentiability, nowhere Hölder continuity, or even nowhere continuity. Within $\mathcal{V}$, they need only be continuous, and this point also reveals the sharpness of our result.*
Below we give some explicit applications of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}.
****Corollary** 1** (Hölder continuity). *Assume, besides (H1) and (H2), that the perturbation $P$ is $\alpha$-Hölder near $\xi_0$ with some $0<\alpha<1$, and is injective with respect to $\xi$ near $\xi_0$ for fixed $y,x$. Then there exists a sufficiently small ${\varepsilon _0} > 0$ such that, for any $0 < \varepsilon < {\varepsilon _0}$, one can find some ${\xi^* }$ near $\xi_0$ such that the perturbed Hamiltonian system $H\left( {y,x,{\xi^* },\varepsilon } \right)$ with parameter $\xi^*$ in [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} admits an analytic, quasi-periodic, invariant torus with toral frequency $\Upsilon=\omega \left( {{\xi _0}} \right)$. Moreover, ${\xi ^ * } = {\xi _0} +\mathcal{O}( { - \int_0^{\varepsilon } {\frac{{\varphi (x)}}{{x\ln x}}dx} } )$ as $\varepsilon \to 0^+$.*
****Remark** 2**. *One could obtain more general modulus of continuity cases from the notations introduced in [@MR0538680; @TD; @PRM]. As a consequence, we could deal with the case that perturbations only admit continuity (beyond Hölder) with respect to the parameter, e.g., Logarithmic Hölder type $$\begin{aligned}
H\left( {y,x,\xi ,\varepsilon } \right) = &\left\langle {\omega \left( {{\xi _0}} \right),y} \right\rangle + \sum\limits_{i = 1}^n {{\rm{sign}}\left( {{\xi _i}} \right){{\left( {\ln \left( { - \ln \left| {{\xi _i}} \right|} \right)} \right)}^{ - \beta }}{y_i}} \\
& + \varepsilon \sum\limits_{i = 1}^n {{{\left( { - \ln \left| {{\xi _i}} \right|} \right)}^{ - \gamma }}\left( {\sin {x_{n-i}} +(\sin {y_i})^2} \right)} ,
\end{aligned}$$ where $\beta,\gamma>0$, and $\xi \in \mathcal{O}:=[-1/4,1/4]^n$. Here we supplement the definitions in terms of limits.*
****Corollary** 2**. *Assume that the Hamiltonian system in [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} admits a non-degenerate linear frequency mapping. Then almost all (in the sense of full Lebesgue measure) frequencies in $\omega(\mathcal{O})$ can be preserved in the KAM sense by selecting appropriate parameters in $\mathcal{O}^o$, whenever $\varepsilon$ is sufficiently small and the perturbation $P$ is Lipschitz with respect to the parameter.*
****Remark** 3**. *In view of the Mean Value Theorem, this corollary could be generalized to the case that $\det \left( {D\omega \left( \xi \right)} \right) \ne 0$ on $\mathcal{O}$.*
****Remark** 4**. *Instead of digging out a set of small positive Lebesgue measure (and the rest of frequencies may drift), we preserve frequencies of full Lebesgue measure through the frequency translation technique.*
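For concreteness, here is a sketch of how the assumptions of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} can be checked in the setting of Corollary [**Corollary** 2](#FPKAMCORO2){reference-type="ref" reference="FPKAMCORO2"}, under the additional (illustrative) assumptions that $m=n$, that the frequency mapping takes the componentwise form $\omega(\xi)=\bar\omega+(a_1\xi_1,\ldots,a_n\xi_n)$ with all $a_i \ne 0$, and that $P$ is Lipschitz in $\xi$ with constant $L$, uniformly in $(y,x)$: $$\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n}} \left| {P\left( {y,x,\xi } \right) - P\left( {y,x,\zeta } \right)} \right| \leqslant L\left| {\xi - \zeta } \right| \leqslant \frac{L}{{\mathop {\min }\limits_i \left| {{a_i}} \right|}}\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|,$$ so (H2) holds, while $\varphi(\delta) \leqslant \delta/\min_i|a_i|$ and $$- \int_0^{\tau} {\frac{{\varphi (x)}}{{x\ln x}}dx} \leqslant \frac{1}{{\mathop {\min }\limits_i \left| {{a_i}} \right|}}\int_0^{\tau} {\frac{{dx}}{{ - \ln x}}} \leqslant \frac{\tau}{{\left( {\ln 2} \right)\mathop {\min }\limits_i \left| {{a_i}} \right|}} < + \infty,$$ so (H3) holds as well; finally, (H1) is available for almost every frequency in $\omega(\mathcal{O})$ because Diophantine vectors form a set of full Lebesgue measure.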
The remainder of the paper is organized as follows. Section [2](#FPKAMCONTEREXAMPLES){reference-type="ref" reference="FPKAMCONTEREXAMPLES"} provides a number of counterexamples that help us understand the frequency-preserving KAM in detail. Aside from this, we also show the further applicability of our results to partial frequency-preserving KAM, infinite-dimensional KAM, etc. Finally, the proofs of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} and Corollaries [**Corollary** 1](#FPKAMCORO1){reference-type="ref" reference="FPKAMCORO1"} and [**Corollary** 2](#FPKAMCORO2){reference-type="ref" reference="FPKAMCORO2"} are given in Sections [3](#FPKAMsec-4){reference-type="ref" reference="FPKAMsec-4"} and [4](#FPKAMsec-5){reference-type="ref" reference="FPKAMsec-5"}, following a parameter translation technique and a quasi-linear (quasi-Newtonian) type KAM iteration.
# Counterexamples, parallel applicability and further comments {#FPKAMCONTEREXAMPLES}
This section is mainly divided into two parts. The first part includes Sections [2.1](#FPKAMSEC2.1){reference-type="ref" reference="FPKAMSEC2.1"} and [2.2](#FPKAMSEC2.2){reference-type="ref" reference="FPKAMSEC2.2"}, and provides several counterexamples based on different perturbations, aiming to illustrate the frequency-preserving KAM almost completely. In particular, we show that our assumptions, namely the Internal condition (H1), the Relative singularity condition (H2) and the Controllability condition (H3), are indeed indispensable in the sense of frequency-preserving; therefore Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} admits sharpness. The second part involves Sections [2.3](#FPKAMLESS){reference-type="ref" reference="FPKAMLESS"}, [2.4](#FPKAMPAR){reference-type="ref" reference="FPKAMPAR"} and [2.5](#FPKAMINFINITE){reference-type="ref" reference="FPKAMINFINITE"}, and extends our KAM results to partial frequency-preserving KAM, infinite-dimensional KAM, etc.
## Counterexamples via perturbations in Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} {#FPKAMSEC2.1}
Consider a specific analytic Hamiltonian system [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} in the case $n=m=2$: $$\label{FPKAMCE1}
H\left( {y,\xi,\varepsilon } \right) = \left\langle {\omega \left( \xi \right),y} \right\rangle + \varepsilon P\left( {y,\xi } \right),$$ where $0<\varepsilon\ll 1$, and the frequency mapping $\omega \left( \xi \right) = \left( {{{\bar \omega }_1} + \tilde \omega_1 \left( {{\xi }} \right),{{\bar \omega }_2} + \tilde \omega_2 \left( {{\xi }} \right)} \right)$ with $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$ being Diophantine. Note that according to the symplectic structure, $\dot x (t,\xi,\varepsilon) ={H_y}\left( {y,\xi ,\varepsilon} \right)$ and $\dot y (t,\xi,\varepsilon) =-{H_x}\left( {y,\xi ,\varepsilon} \right) =0$. Then the unperturbed torus for fixed $y=y_0$ can be written as $x(t,\xi,0) = \omega \left( \xi \right)t + x_0$, associated with initial value $y_0$ and $x_0$, and the perturbed one for fixed $y=y_0$ is $x(t,\xi,\varepsilon) = \left( {\omega \left( \xi \right) + \varepsilon {\partial _y}P\left( {y,\xi } \right)} \right)t + x_0$. Therefore, to obtain a $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$-frequency-preserving KAM torus with parameter ${\xi ^ * } = \left( {\xi _1^ * ,\xi _2^ * } \right) \in \mathcal{O}$ (the parameter set $\mathcal{O} \subset \mathbb{R}^2$ will be specified later), the following must be satisfied: $$\label{FPKAMlianli}
\left\{ \begin{gathered}
\tilde \omega_1 \left( {\xi ^ * } \right) + \varepsilon {\partial _{y_1}}P\left( {y,{\xi ^ * }} \right) = 0, \hfill \\
\tilde \omega_2 \left( {\xi ^ * } \right) + \varepsilon {{\partial _{y_2}}P}\left( {y,{\xi ^ * }} \right) = 0. \hfill \\
\end{gathered} \right.$$
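To spell out how [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} arises from the preceding computation (nothing beyond what is stated above is used): $$\dot x\left( {t,{\xi ^ * },\varepsilon } \right) = \omega \left( {{\xi ^ * }} \right) + \varepsilon {\partial _y}P\left( {y,{\xi ^ * }} \right) = \left( {{{\bar \omega }_1} + {{\tilde \omega }_1}\left( {{\xi ^ * }} \right) + \varepsilon {\partial _{{y_1}}}P\left( {y,{\xi ^ * }} \right),\;{{\bar \omega }_2} + {{\tilde \omega }_2}\left( {{\xi ^ * }} \right) + \varepsilon {\partial _{{y_2}}}P\left( {y,{\xi ^ * }} \right)} \right),$$ and requiring this rotation vector to equal $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$, then cancelling $\bar\omega_1$ and $\bar\omega_2$ componentwise, gives exactly the two equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"}.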
### Non-uniqueness of the parameter corresponding to frequency-preserving
The parameter corresponding to the frequency-preserving KAM torus might not be unique; in other words, there could be many KAM tori with the prescribed frequency.
Let $\mathcal{O}:=[-1,1]\times[0,3\pi]$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = \sin {\xi _2}$, and $P\left( {y,\xi } \right) = - {y_2}$. Note that $\xi_0$ could be non-unique, i.e., ${\xi _0} = (0,\pi)$ or $(0,2\pi)$. Near these two points, (H1) to (H3) automatically hold (with $\varphi(\delta)=\mathcal{O}^\#(\delta)$). This is because (H2) is a local assumption; it does not require the frequency mapping $\omega(\xi)$ to be a global injection. Now, for $0< \varepsilon\ll 1$, the frequency-preserving equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} give $$\left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\sin {\xi _2^*} - \varepsilon = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * = \arcsin \varepsilon ,\pi - \arcsin \varepsilon ,2\pi + \arcsin \varepsilon ,3\pi - \arcsin \varepsilon , \hfill \\
\end{gathered} \right.$$ which shows the non-uniqueness of the parameter corresponding to frequency-preserving. It should be mentioned that $\xi _2^ * = \arcsin \varepsilon$ and $\xi _2^ * = 3\pi-\arcsin \varepsilon$ are not derived from our KAM theorem (Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}) because they are both at a positive distance from their respective $\xi_0$, i.e., $$\mathop {\lim }\limits_{\varepsilon \to {0^ + }} \left| {\left( {0,\arcsin \varepsilon } \right) - \left( {0,\pi } \right)} \right| = \mathop {\lim }\limits_{\varepsilon \to {0^ + }} \left| {\left( {0,3\pi - \arcsin \varepsilon } \right) - \left( {0,2\pi } \right)} \right| = \pi \ne 0.$$ But $\xi _2^ * = \pi-\arcsin \varepsilon$ and $\xi _2^ * = 2\pi+\arcsin \varepsilon$ are compatible with our KAM theorem by verifying (H1) to (H3), respectively.
The previous linear counterexample concerns the case that $\xi_0$ could be selected differently. Next, we give a nonlinear Hamiltonian system that admits infinitely (uncountable) many frequency-preserving KAM tori, and $\xi_0$ must be unique at this time. Let $\mathcal{O}:=[-1,1]^2$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = {\xi _2}$, and $P\left( {y,\xi } \right) = - \left( {y_1^2/2 + {y_2} + y_2^2/2} \right)$. Then (H1) to (H3) automatically hold (with $\varphi(\delta)=\mathcal{O}^\#(\delta)$) near $\xi_0 =(0,0)$, and the equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} turn to $$\left\{ \begin{gathered}
\xi _1^ *-\varepsilon y_1 = 0, \hfill \\
{\xi _2^*} - \varepsilon (1+y_2) = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * =\varepsilon y_1, \hfill \\
\xi _2^ * = \varepsilon (1+y_2) . \hfill \\
\end{gathered} \right.$$ If $y \in G:=[-1,1]^2$, then there are infinitely many admissible $\xi^*$, namely ${\xi ^ * } = \left( {\varepsilon {y_1},\varepsilon \left( {1 + {y_2}} \right)} \right)$ for any fixed initial value $y = \left( {{y_1},{y_2}} \right) \in G$ (recall that $\dot y = - {H_x}\left( {y,\xi ,\varepsilon } \right) = 0$). In other words, on any level set near the origin, we can adjust the parameter such that the prescribed frequency is preserved. Interestingly, if this counterexample carries no parameters, i.e., ${{\tilde \omega }_1}\left( \xi \right)= {{\tilde \omega }_2}\left( \xi \right) = 0$, then the $(\bar \omega_1, \bar \omega_2)$-frequency-preserving KAM torus may not exist, as long as we choose $G:=[-1/2,1/2]^2$ (because $y_2=-1 \notin [-1/2,1/2]$). If $G:=[-1,1]^2$, then a unique $(\bar \omega_1, \bar \omega_2)$-frequency-preserving KAM torus does exist on $\mathbb{T}^n \times \{(0,-1)\}$, which is completely different from the parameterized case.
### Non-differentiability of $\xi^*$ with respect to $\varepsilon$
The obtained parameter $\xi^*$ might be non-differentiable with respect to $\varepsilon$ at $0$.
Let $\mathcal{O}:=[-1,1]^2$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = \xi _2^3$, and $P\left( {y,\xi } \right) = - {y_2}$. Then (H1) to (H3) automatically hold. To be more precise, note that $\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right| \leqslant \delta$ for $0<\delta\ll1$ means $\left| {\left( {{\xi _1} - {\zeta _1},\xi _2^3 - \zeta _2^3} \right)} \right| \leqslant \delta$, then $\left| {{\xi _1} - {\zeta _1}} \right| \leqslant \delta /2$, and $\left| {\xi _2^3 - \zeta _2^3} \right| \leqslant \delta /2$. Therefore, one concludes that $\left| {{\xi _2} - {\zeta _2}} \right| = {\mathcal{O}^\# }\left( {{\delta ^{1/3}}} \right)$. Taking $\varphi \left( \delta \right) = \max \left\{ {{\mathcal{O}^\# }\left( \delta \right),{\mathcal{O}^\# }\left( {{\delta ^{1/3}}} \right)} \right\} = {\mathcal{O}^\# }\left( {{\delta ^{1/3}}} \right)$, it is easy to verify that for any $0<\tau\leqslant1/2$, it holds $$- \int_0^\tau {\frac{{\varphi (x)}}{{x\ln x}}dx} \lesssim - \int_0^\tau {\frac{1}{{{x^{2/3}}\ln x}}dx} \lesssim \int_0^\tau {\frac{1}{{{x^{2/3}}}}dx} < + \infty.$$ However, aiming to achieve $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$-frequency-preserving, the equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} turn to $$\left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
{(\xi _2^ * )^3} - \varepsilon = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * = {\varepsilon ^{1/3}} \hfill, \\
\end{gathered} \right.$$ which implies that $\xi^*$ is non-differentiable with respect to $\varepsilon$ at $0$.
### Nonuniformity of the distance from $\xi^*$ to $\xi_0$ about $\varepsilon$
As shown in Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}, the selected parameter $\xi^*$ corresponding to frequency-preserving converges to $\xi_0$ whenever $\varepsilon\to 0^+$, and the convergence rate could be dominated by $\mathcal{O}( { - \int_0^{\varepsilon} {\frac{{\varphi (x)}}{{x\ln x}}dx} } )$ with $\varphi$ defined in (H3). But this is an upper bound estimate and does not represent the real rate. It is worth mentioning that such a rate is not uniform with respect to $\varepsilon$: it can be logarithmically slow, or arbitrarily fast (for any rate given in advance), depending on the particular frequency mapping.
Recall the critical case in Comment (C3), i.e., $$\label{FPKAMcri}
\omega \left( \xi \right) = \bar \omega + \text{sign}\left( \xi \right)\exp \left( { - {{\left| \xi \right|}^{ - \alpha }}} \right)$$ with $\xi \in \left[ { - 1,1} \right]$ and $\alpha > 0$, where $\bar \omega$ is Diophantine. Now, let us consider a simple parameterized Hamiltonian system with $n=m=1$: $$\label{FPKAMcri2}
H\left( {y,\xi,\varepsilon } \right) = {\omega \left( \xi \right)y} - \varepsilon y.$$ Then (H1) to (H3) automatically hold, and it is easy to verify that $\xi^* =(\omega-\bar\omega)^{-1}(\varepsilon)=(-\ln \varepsilon)^{-\lambda}$ with $\lambda=\alpha^{-1} >0$, which provides the logarithmically slow rate. One can also construct a slower convergence rate by increasing the regularity of $\omega(\xi)$. As for any given fast rate $\rho(\varepsilon)$, where the monotone odd function $0<\rho(x)=\mathcal{O}(x)$, then modifying $\omega(\xi)$ in [\[FPKAMcri\]](#FPKAMcri){reference-type="eqref" reference="FPKAMcri"} to $\omega \left( \xi \right) = \bar \omega + \rho^{-1}(\xi)$ one deduces from [\[FPKAMcri2\]](#FPKAMcri2){reference-type="eqref" reference="FPKAMcri2"} that $\xi^*-\xi_0 =\rho(\varepsilon)$ (note that (H1) to (H3) still hold, because $\rho^{-1}(\xi)$ admits weak regularity at $0$, see Comment (C4)). It is indeed an interesting fact that the weak regularity of the frequency mapping actually makes the convergence rate faster.
### Nonexistence of the frequency-preserving KAM torus when $\mathcal{O}$ is disconnected
We have previously assumed that the parameter set $\mathcal{O}$ is connected. However, if the connectivity is removed, then the frequency-preserving KAM torus may not exist, even if $\mathcal{O}$ has full relative Lebesgue measure.
Let $\mathcal{O}:=\left[ { - 1,1} \right] \times \left( {\left[ { - 1,1} \right]\backslash \left( {\left[ { - 1,1} \right] \cap \mathbb{Q}} \right)} \right)$, and set ${{\tilde \omega }_1}\left( \xi \right) = \xi _1$, ${{\tilde \omega }_2}\left( \xi \right) = \xi _2$, and $P\left( {y,\xi } \right) = - ({\xi _2}+1){y_2}$. Then $\mathcal{O}$ admits full Lebesgue measure relative to $[-1,1]^2$, and (H1) to (H3) automatically hold (with $\varphi(\delta)=\mathcal{O}^\#(\delta)$). The equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} become $$\left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * - \varepsilon ({\xi _2^*}+1) = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * = \varepsilon (1-\varepsilon)^{-1}. \hfill \\
\end{gathered} \right.$$ Note that $\varepsilon (1-\varepsilon)^{-1} \in \mathbb{Q}$ whenever $\varepsilon\in \mathbb{Q}$ and $0< \varepsilon \ll 1$, and hence, such a parameter $\xi^* \notin \mathcal{O}$.
### Nonexistence of the frequency-preserving KAM torus when $\omega(\xi)$ is discontinuous {#FPKAMSUB213}
Note that all of our assumptions (H1) to (H3) are based on the premise that the frequency mapping and the perturbation are continuous with respect to the parameter. Below we show that if the frequency mapping is not continuous, then the frequency-preserving KAM torus may not exist (this of course violates the assumptions, but it does not conflict with the subsequent counterexamples, which do require continuity of the frequency mapping).
Let $\mathcal{O}:=[-1,1]^2$, and set ${{\tilde \omega }_1}\left( \xi \right) = \mathcal{M}\left( {{\xi _1}} \right)$, ${{\tilde \omega }_2}\left( \xi \right) = \xi _2$, and $P\left( {y,\xi } \right) = - {\xi _2}{y_1} - {y_2}$, where $\mathcal{M}\left( 0 \right) = 0$, and $\mathcal{M}\left( {{\xi _1}} \right) = \xi _1^{ - 1}$ for $\xi_1 \ne 0$. Then the frequency mapping is discontinuous with respect to $\xi$ at $(0,0) \in \mathcal{O}^o$, and the equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} read $$\left\{ \begin{gathered}
\mathcal{M}(\xi _1^ * ) - \varepsilon \xi _2^ * = 0, \hfill \\
\xi _2^ * - \varepsilon = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = {\varepsilon ^{ - 2}}, \hfill \\
\xi _2^ * = \varepsilon, \hfill \\
\end{gathered} \right.$$ which implies that $\xi_1^* \notin [-1,1]$ as long as $\varepsilon>0$ is sufficiently small, and therefore such a parameter solution $\xi^*$ does not exist.
### Nonexistence of the frequency-preserving KAM torus when $P(y,x, \xi)$ is discontinuous about $\xi$
Similar to the exposition in Section [2.1.5](#FPKAMSUB213){reference-type="ref" reference="FPKAMSUB213"}, we will show here that if perturbations are not continuous with respect to the parameter, then the frequency-preserving KAM torus may be destroyed.
Let $\mathcal{O}:=[-1,1]^2$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = \xi _2$, but $P\left( {y,\xi } \right) = - \mathcal{N}\left( {{\xi _2}} \right){y_1} - {y_2}$, where $\mathcal{N}\left( 0 \right) = 0$, and $\mathcal{N}\left( {{\xi _2}} \right) = \xi _2^{ - 2}$ for $\xi_2 \ne 0$. It is evident that the perturbation is discontinuous with respect to $\xi$ at $(0,0) \in \mathcal{O}^o$, and $\varepsilon P(y,\xi)$ is small for fixed $y,\xi$, whenever $0<\varepsilon \ll 1$ (but the smallness is nonuniform). Then the $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$-frequency-preserving equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} become $$\left\{ \begin{gathered}
\xi _1^ * - \varepsilon \mathcal{N}(\xi _2^ * ) = 0, \hfill \\
\xi _2^ * - \varepsilon = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = {\varepsilon ^{ - 1}}, \hfill \\
\xi _2^ * = \varepsilon , \hfill \\
\end{gathered} \right.$$ which implies that $\xi_1^* \notin [-1,1]$ whenever $0<\varepsilon \ll 1$, and therefore $\xi^*$ does not exist.
### Nonexistence of the frequency-preserving KAM torus in the absence of the Internal condition (H1)
Here we show a crucial fact that the Internal condition (H1) cannot be removed in the sense of frequency-preserving, by constructing the counterexample below.
Let $\mathcal{O}:=[-1,1]\times[0,1]$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = {\xi _2}$, and $P\left( {y,\xi } \right) = {y_2}$. Then (H1) fails because the unique parameter $\xi_0=(0,0) \notin \mathcal{O}^o$ ($\xi_0 \in \mathcal{O}$), but (H2) and (H3) automatically hold (with $\varphi(\delta)=\mathcal{O}^\#(\delta)$). Now, the equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} turn to $$\left\{ \begin{gathered}
\xi _1^ * = 0 ,\hfill \\
\xi _2^ * + \varepsilon = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0 ,\hfill \\
\xi _2^ * = - \varepsilon , \hfill \\
\end{gathered} \right.$$ which implies that $\xi_2^* \notin [0,1]$ due to $\varepsilon>0$, and therefore $\xi^*$ does not exist.
Next, we construct another, weaker counterexample, which contradicts both (H1) and (H3). Unlike the above counterexample, which concerns the interiority of the parameter $\xi_0$, this one concerns the interiority of the frequency value $\omega(\xi_0)$.
Let $\mathcal{O}:=[-1,1]^2$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = \xi _2^2$, and $P\left( {y,\xi } \right) = \xi_1 y_1+\left( {\xi _2^2 + 1} \right){y_2}$. Then (H1) fails because $\omega \left( \xi_0 \right)= \omega \left( {\left( {0,0} \right)} \right)=(\bar \omega_1, \bar \omega_2)$ is not an interior point of the set $\omega \left( \mathcal{O} \right) =[\bar \omega_1 -1,\bar \omega_1 +1]\times [\bar \omega_2,\bar \omega_2 +1]$ due to the non-negativity of $\xi_2^2$ at $0$, but the internal property of $\xi_0=(0,0) \in \mathcal{O}^o=(-1,1)^2$ holds. As for (H2), note that near $\xi_0$, we have $$\begin{aligned}
\mathop {\sup }\limits_{y,x \in G \times \mathbb{T}^n} \left| {P\left( {y,\xi } \right) - P\left( {y,\zeta } \right)} \right| &\leqslant \mathop {\sup }\limits_{y \in G} \left| {{\xi _1}{y_1} - {\zeta _1}{y_1}} \right| + \mathop {\sup }\limits_{y \in G} \left| {\xi _2^2{y_2} - \zeta _2^2{y_2}} \right|\\
& \leqslant {\text{diag}}G\left( {\left| {{\xi _1} - {\zeta _1}} \right| + \left| {\xi _2^2 - \zeta _2^2} \right|} \right)\\
& \leqslant 2{\text{diag}}G\left| {\left( {{\xi _1} - {\zeta _1},\xi _2^2 - \zeta _2^2} \right)} \right|\\
& = 2{\text{diag}}G\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|.\end{aligned}$$ Therefore, by supplementing the definition ($\omega$ and $P$ agree on the properties of the parameter simultaneously), we prove that (H2) holds. However, (H3) fails due to the non-monotonicity of $\tilde \omega_2$ at $0$. Note that the equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} take the form $$\left\{ \begin{gathered}
\xi _1^ * +\varepsilon\xi _1^ *=0, \hfill \\
{(\xi _2^ * )^2} + \varepsilon ( {{{(\xi _2^ * )}^2} + 1} ) = 0, \hfill \\
\end{gathered} \right.$$ then $\xi_2^*$ does not exist for any $\varepsilon>0$, so does $\xi^*$. This shows that the $(\bar{\omega}_1,\bar{\omega}_2)$-frequency-preserving KAM torus cannot survive.
### Nonexistence of the frequency-preserving KAM torus in the absence of the Relative singularity condition (H2) and the Controllability condition (H3) {#FPKAMSEC217}
Here we emphasize a significant point: If the Relative singularity condition (H2) and the Controllability condition (H3) fail, then the frequency-preserving KAM torus may not exist. Noticing that (H2) and (H3) are not completely independent, we will construct a non-injective frequency mapping counterexample below.
Let $\mathcal{O}:=[-1,1]^2$, and set ${{\tilde \omega }_1}\left( \xi \right) = {\xi _1}$, ${{\tilde \omega }_2}\left( \xi \right) = {\xi _1}{\xi _2}$, and $P\left( {y,\xi } \right) = - {y_2}$. It is evident that (H1) holds (with $\xi_0=(0,0)$), but (H2) and (H3) fail, because for any $j>1$, taking $a = \left( {0,1/j} \right)$ and $b = \left( {0, - 1/j} \right)$ yields $\left| {\omega \left( a \right) - \omega \left( b \right)} \right| \equiv 0$, but $\left| {a - b} \right| = \left| {\left( {0,2/j} \right)} \right| = 2/j > 0$. Therefore, (H2) fails because the supremum is undefined; and (H3) fails because $\varphi(\delta) \equiv \varphi(0)=\kappa>0$ for all $0<\delta\leqslant 2$, and the integrability condition also fails due to $$- \int_0^{{\tau}} {\frac{{\varphi (x)}}{{x\ln x}}dx} = -\kappa \int_0^{{\tau}} {\frac{1}{{x\ln x}}dx} = - \kappa \ln \left( { - \ln x} \right)\big|_0^\tau = + \infty ,$$ for any $0<\tau\leqslant1/2$. Now, if the $(\bar{\omega}_1,\bar{\omega}_2)$-frequency-preserving KAM torus exists, then the equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} become $$\left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _1^ * \xi _2^ * - \varepsilon = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\varepsilon = 0, \hfill \\
\end{gathered} \right.$$ which contradicts $\varepsilon>0$.
Another, much more trivial counterexample is given by $\omega(\xi)\equiv(\bar \omega_1,\bar \omega_2)$ (i.e., a constant mapping) and $P(y,\xi) =-y_2$.
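Spelling this out: with $\tilde\omega_1=\tilde\omega_2\equiv0$ and $P(y,\xi)=-y_2$, the second equation in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} reads $$\tilde \omega_2 \left( {{\xi ^ * }} \right) + \varepsilon {\partial _{{y_2}}}P\left( {y,{\xi ^ * }} \right) = 0 - \varepsilon \ne 0\;\;\text{for every } \xi^* \in \mathcal{O},$$ so no choice of parameter can restore the second component of the frequency.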
## Some strange phenomena via general small perturbations beyond Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} {#FPKAMSEC2.2}
Note that the whole perturbation in Hamiltonian system [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} is in the form $\tilde P\left( {y,x,\xi, \varepsilon} \right) =\varepsilon P\left( {y,x,\xi } \right)$. If it has some singularity with respect to $\varepsilon$, then some bad phenomenon may set in. Throughout this section, let us consider a specific analytic Hamiltonian system in the case $n=m=2$: $$H\left( {y,\xi,\varepsilon } \right) = \left\langle {\omega \left( \xi \right),y} \right\rangle + \tilde{P}\left( {y,\xi ,\varepsilon } \right),$$ where $\xi=(\xi_1,\xi_2) \in \mathcal{O}:=[-1,1]^2$, and $\omega \left( \xi \right) = \left( {{{\bar \omega }_1} + \tilde \omega_1 \left( {{\xi_1 }} \right),{{\bar \omega }_2} + \tilde \omega_2 \left( {{\xi_2 }} \right)} \right)$ with $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$ being Diophantine. The perturbation $\tilde P\left( {y,\xi,\varepsilon } \right)$ is still small whenever $0<\varepsilon \ll 1$. Now, the $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$-frequency-preserving equations in [\[FPKAMlianli\]](#FPKAMlianli){reference-type="eqref" reference="FPKAMlianli"} with parameter ${\xi ^ * } = \left( {\xi _1^ * ,\xi _2^ * } \right) \in \mathcal{O}$ become $$\label{FPKAMlianli2}
\left\{ \begin{gathered}
\tilde \omega_1 \left( {\xi_1 ^ * } \right) + {\partial _{y_1}}\tilde{P}\left( {y,{\xi ^ * },\varepsilon } \right) = 0, \hfill \\
\tilde \omega_2 \left( {\xi_2 ^ * } \right) + {\partial _{y_2}}{\tilde P}\left( {y,{\xi ^ * },\varepsilon } \right) = 0, \hfill \\
\end{gathered} \right.$$ because $H(y, \xi, \varepsilon)$ is independent of the angle variable $x$.
### Almost everywhere discontinuity of $\xi^*$ with respect to $\varepsilon$
If the small perturbation $\tilde{P}$ is discontinuous with respect to $\varepsilon$, then the obtained parameter $\xi^*$ corresponding to the frequency-preserving KAM torus may be discontinuous with respect to $\varepsilon$.
For example, let ${{\tilde \omega }_1}\left( {\xi _1 } \right) = \xi _1$, ${{\tilde \omega }_2}\left( {\xi _2} \right) = \xi _2$, and $\tilde{P}\left( {y,{\xi },\varepsilon } \right) = - \varepsilon D\left( \varepsilon \right)y_2$, where $D(x)$ denotes the Dirichlet function, i.e., $D(x)= 1$ when $x$ is irrational, and $D(x)= 0$ when $x$ is rational. Then one verifies that (H1) to (H3) automatically hold (with $\varphi(\delta)=\mathcal{O}^\#(\delta)$), and the perturbation is small due to $\left| {\varepsilon D\left( \varepsilon \right)} \right| \leqslant \varepsilon \to {0^ + }$ as $\varepsilon \to 0^+$, but the equations in [\[FPKAMlianli2\]](#FPKAMlianli2){reference-type="eqref" reference="FPKAMlianli2"} lead to $$\left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * - \varepsilon D\left( \varepsilon \right) = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * = \varepsilon D\left( \varepsilon \right), \hfill \\
\end{gathered} \right.$$ which implies that the parameter $\xi^*$ is almost everywhere discontinuous with respect to $\varepsilon$ in any neighborhood of $0$ (except for $\varepsilon=0$).
### Nowhere differentiability of $\xi^*$ with respect to $\varepsilon$
Even if the obtained parameter $\xi^*$ is continuous with respect to $\varepsilon$, it might be nowhere differentiable.
For example, let ${{\tilde \omega }_1}\left( {\xi _1 } \right) = \xi _1$, ${{\tilde \omega }_2}\left( {\xi _2 } \right) = \xi _2$, and $\tilde{P}\left( {y,{\xi },\varepsilon } \right) = - \varepsilon W(\varepsilon)y_2$, where $W(x)$ denotes the Weierstrass function, e.g., $W\left( x \right) = \sum\nolimits_{n = 0}^\infty {{2^{ - n}}\cos \left( {{{99}^n}\pi x} \right)}$. It is easy to check that (H1) to (H3) automatically hold (with $\varphi(\delta)=\mathcal{O}^\#(\delta)$), and the perturbation is small due to $\left| {\varepsilon W\left( \varepsilon \right)} \right| \leqslant 2\varepsilon \to {0^ + }$ as $\varepsilon \to 0^+$, but the equations in [\[FPKAMlianli2\]](#FPKAMlianli2){reference-type="eqref" reference="FPKAMlianli2"} yield $$\left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * - \varepsilon W\left( \varepsilon \right) = 0, \hfill \\
\end{gathered} \right. \Rightarrow \left\{ \begin{gathered}
\xi _1^ * = 0, \hfill \\
\xi _2^ * = \varepsilon W\left( \varepsilon \right), \hfill \\
\end{gathered} \right.$$ which means that $\xi^*$ is continuous but nowhere differentiable about $\varepsilon$.
## Utilizing extra parameters to adjust the Relative singularity condition (H2) {#FPKAMLESS}
As we will show in this section, extra parameters (relative to the frequency mapping) sometimes play an unexpected role in achieving frequency-preserving.
Let us first consider an interesting problem: if the frequency mapping depends on fewer parameters than the perturbation, can there be a corresponding frequency-preserving KAM theorem like Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}? Fortunately, we can answer this question positively by establishing Theorem [**Theorem** 2](#FPKAMT2){reference-type="ref" reference="FPKAMT2"} under the weaker assumptions below. To keep the statements of the main KAM results in Section [1](#FPKAMintro){reference-type="ref" reference="FPKAMintro"} simple, we prefer to present the weakened version here.
We still study the parameterized perturbed Hamiltonian system [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} with continuous $\omega \left( \cdot \right):{\mathcal{O}_1} \to {\mathbb{R}^n}$ and $P\left( {y,x, \cdot } \right):{\mathcal{O}_1} \times {\mathcal{O}_2} \to {\mathbb{R}^1}$, where ${\mathcal{O}_1} \subset {\mathbb{R}^{{m_1}}}$ and ${\mathcal{O}_2} \subset {\mathbb{R}^{{m_2}}}$ with $m_1,m_2 \in \mathbb{N}^+$ are closed parameter sets having interior points. Denote by $\xi = (\bar \xi ,\tilde \xi ) \in {\mathcal{O}_1} \times {\mathcal{O}_2}$. Then the corresponding assumptions read:
(H1\*) \[Internal condition\] For $\Upsilon \in \left(\omega(\mathcal{O}_1)\right)^o$ given in advance, there exists $\xi_0 \in (\mathcal{O}_1)^o$ such that $\Upsilon = \omega \left( {{\xi _0}} \right)$ admits Diophantine nonresonance as $$\label{FPKAMDIO2}
\left| {\left\langle {k,\omega \left( {{\xi _0}} \right)} \right\rangle } \right| \geqslant \gamma {\left| k \right|^{ - \tau }},\;\;\gamma > 0,\;\;\tau>\max\{n-1,1\},\;\;\forall 0 \ne k \in {\mathbb{Z}^n}.$$
(H2\*) \[Relative singularity condition\] There exists a neighborhood $\mathcal{V} \subset \mathcal{O}_1$ of $\xi_0$ and a continuous function $\tilde\xi (\bar\xi)$ with $\tilde\xi (\mathcal{V}) \subset (\mathcal{O}_2)^o$, such that the following holds (allowing the supremum to be continuously supplemented according to sup-limit) $$\mathop {\sup }\limits_{\bar\xi \ne \bar\zeta ,\bar\xi ,\bar\zeta \in \mathcal{V}} \frac{{\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n}} \left| {P\left( {y,x,\bar\xi, \tilde\xi (\bar\xi) } \right) - P\left( {y,x,\bar\zeta, \tilde\xi (\bar\zeta) } \right)} \right|}}{{\left| {\omega ( \bar\xi ) - \omega ( \bar\zeta )} \right|}} <+\infty.$$
Now we can state the following theorem.
****Theorem** 2**. *Assume (H1\*), (H2\*) and (H3). Then there exists a sufficiently small ${\varepsilon _0} > 0$ such that, for any $0 < \varepsilon < {\varepsilon _0}$, one can find some ${\xi^* }=(\bar\xi^*, \tilde\xi^*)\in ({\mathcal{O}_1} \times {\mathcal{O}_2})^o$ such that the perturbed Hamiltonian system $H\left( {y,x,{\xi^* },\varepsilon } \right)$ admits an analytic, quasi-periodic, invariant torus with toral frequency $\Upsilon=\omega \left( {{\xi _0}} \right)$. Moreover, ${\bar\xi^* } = {\xi _0} +\mathcal{O}( { - \int_0^{\varepsilon } {\frac{{\varphi (x)}}{{x\ln x}}dx} } )$ as $\varepsilon \to 0^+$.*
It should be mentioned that the proof of Theorem [**Theorem** 2](#FPKAMT2){reference-type="ref" reference="FPKAMT2"} is almost the same as that of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}: we fix the extra parameters of the perturbation relative to the frequency mapping, so that only the latter parameters are translated in each KAM step; see Section [4.1](#FPKAMProofT1){reference-type="ref" reference="FPKAMProofT1"} for details.
Let us take an explicit example to understand Theorem [**Theorem** 2](#FPKAMT2){reference-type="ref" reference="FPKAMT2"} more clearly. Consider the parameterized perturbed Hamiltonian system with parameter sets ${\mathcal{O}_1} = {\left[ { - 2,2} \right]^2}$ and ${\mathcal{O}_2} = \left[ { - 2,2} \right]^2$: $$\label{FPKAMHAM2}
H\left( {y,\xi,\varepsilon } \right) = \left\langle {\omega \left( \xi \right),y} \right\rangle + \varepsilon P\left( {y,\xi } \right),$$ where $0<\varepsilon\ll 1$, the frequency mapping is $\omega \left( \bar\xi \right) = \left( {{{\bar \omega }_1} + \xi_1,{{\bar \omega }_2} + \xi_2} \right)$ with $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$ being Diophantine, and $P( {y,\xi })=\langle ( {{{\left| {{\xi _1}} \right|}^{1/2}} + {\xi _3},{{\left| {{\xi _2}} \right|}^{1/2}} + {\xi _4}} ), y\rangle$. It is evident that (H2) fails near $(0,0,0,0)$, but taking $\tilde \xi = \left( {{\xi _3},{\xi _4}} \right) = ( { - {{\left| {{\xi _1}} \right|}^{1/2}} + {\xi _1}, - {{\left| {{\xi _2}} \right|}^{1/2}} + {\xi _2}} )$ one can verify that (H2\*) holds near $(0,0)$. In other words, we can adjust the relative singularity by compensating the weak regularity through the extra parameters (a sketch of the verification is given below).
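Here is the sketch of that verification; the only assumption used is that $G$ is bounded, as in [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"}. With $\tilde \xi ( \bar \xi ) = ( { - {{\left| {{\xi _1}} \right|}^{1/2}} + {\xi _1}, - {{\left| {{\xi _2}} \right|}^{1/2}} + {\xi _2}} )$ one has $P( {y,\bar \xi ,\tilde \xi ( \bar \xi )} ) = \left\langle {\left( {{\xi _1},{\xi _2}} \right),y} \right\rangle$, hence $$\mathop {\sup }\limits_{y \in G} \left| {P( {y,\bar \xi ,\tilde \xi ( \bar \xi )} ) - P( {y,\bar \zeta ,\tilde \xi ( \bar \zeta )} )} \right| \leqslant 2\mathop {\sup }\limits_{y \in G} \left| y \right|\,\left| {\bar \xi - \bar \zeta } \right| = 2\mathop {\sup }\limits_{y \in G} \left| y \right|\,\left| {\omega ( \bar \xi ) - \omega ( \bar \zeta )} \right|,$$ so the quotient in (H2\*) is bounded by $2\sup_{y \in G}|y|<+\infty$; moreover $\tilde \xi ( \bar \xi ) \in (\mathcal{O}_2)^o$ for $\bar \xi$ near $(0,0)$, as required.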
Following the same idea, we can also deal with the case that the frequency mapping has more parameters than the perturbation. Note that (H2) may not be satisfied at this time, but we have the opportunity to modify the relative singularity by a similar approach so that (H2\*) is satisfied. For example, consider the parameterized perturbed Hamiltonian system [\[FPKAMHAM2\]](#FPKAMHAM2){reference-type="eqref" reference="FPKAMHAM2"}, where the frequency mapping $\omega \left( \xi \right) = \left( {{{\bar \omega }_1} + \xi_1,{{\bar \omega }_2} + \xi _2^2 + {\xi _3}} \right)$ with $\left( {{{\bar \omega }_1},{{\bar \omega }_2}} \right)$ being Diophantine, and $P\left( {y,\xi } \right) = {\xi _2}{e^{{y_2}}}$. Here $\xi_i \in [-2,2]$ for $i=1,2,3$. Then if $y \in [-1,1]^2$, taking $\xi = \left( {0,{\xi _2},0} \right)$ and $\zeta = \left( {0,{\zeta _2},0} \right)$ with ${\xi _2} + {\zeta _2} = {j^{ - 1}}$ ($j \in \mathbb{N}^+$) yields that $$\begin{aligned}
\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n}} \left| {P\left( {y,\xi } \right) - P\left( {y,\zeta } \right)} \right| &= \mathop {\sup }\limits_{{y_2} \in \left[ { - 1,1} \right]} {e^{{y_2}}}\left| {{\xi _2} - {\zeta _2}} \right| = e\left| {{\xi _2} - {\zeta _2}} \right|\\
& = ej\left| {\xi _2^2 - \zeta _2^2} \right| = ej\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|,\end{aligned}$$ which contradicts (H2) near $(0,0,0)$, whenever $j$ is sufficiently large. However, taking $\xi_3$ as a function of $\xi_2$ may make (H2\*) hold near $(0,0)$ with the new parameter $\xi=(\xi_1,\xi_2)$ ($\zeta=(\zeta_1,\zeta_2)$) of two variables, e.g., $\xi_3:=- \xi_2^2+\xi_2$. Then one can similarly achieve frequency-preserving.
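Similarly, here is a sketch of why the choice $\xi_3 := -\xi_2^2+\xi_2$ restores (H2\*) in this second example (again for $y \in [-1,1]^2$): with this choice the frequency mapping becomes $\omega ( \bar \xi ) = \left( {{{\bar \omega }_1} + {\xi _1},{{\bar \omega }_2} + {\xi _2}} \right)$ in the remaining parameter $\bar\xi=(\xi_1,\xi_2)$, and $$\mathop {\sup }\limits_{{y_2} \in \left[ { - 1,1} \right]} {e^{{y_2}}}\left| {{\xi _2} - {\zeta _2}} \right| = e\left| {{\xi _2} - {\zeta _2}} \right| \leqslant e\left| {\omega ( \bar \xi ) - \omega ( \bar \zeta )} \right|,$$ so the quotient in (H2\*) is bounded by $e$, while (H3) clearly holds with $\varphi(\delta)=\mathcal{O}^{\#}(\delta)$.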
It should be mentioned that this approach is also valid for partial frequency-preserving KAM in Section [2.4](#FPKAMPAR){reference-type="ref" reference="FPKAMPAR"} and infinite-dimensional KAM in Section [2.5](#FPKAMINFINITE){reference-type="ref" reference="FPKAMINFINITE"}.
## Partial frequency-preserving KAM {#FPKAMPAR}
As is well known, in general some dynamics cannot be completely preserved if the dynamical system under consideration has certain degeneracy. Returning to our concern with frequency-preserving KAM, it is evident that the degeneracy of the Hamiltonian system may destroy the prescribed Diophantine frequency for the perturbed torus, even in the parameterized settings (see the counterexample constructed in Section [2.1.8](#FPKAMSEC217){reference-type="ref" reference="FPKAMSEC217"}). Partial preservation of frequencies, therefore, is a foundational problem in KAM theory, although it remains relatively unexplored. On this aspect, via a nondegeneracy condition of Rüssmann type on a submanifold, Chow et al [@MR1938331] proved that the perturbed toral frequencies can be partially preserved according to the maximal degeneracy of the Hessian matrix of the unperturbed Hamiltonian system. They also obtained KAM results concerning partial frequency-ratio-preserving. See also Sevryuk [@MR2221801; @MR2433684; @MR3783834] for partial frequency-preserving KAM under moderately degenerate integrable or partially integrable Hamiltonian systems. To be more precise, he proved that the unperturbed invariant $n$-tori with prescribed frequencies (or frequency ratios) do not persist in general, but the first $d<n$ frequencies (or ratios) can be preserved. We also mention the recent work of Zhao and Li [@MR4355926], which considers partial frequency-ratio-preserving KAM for unperturbed Hamiltonian systems starting from the same Riemannian manifold.
Interestingly, a new version of the partial frequency-preserving KAM theorem could also be obtained via a similar approach in this paper. Note that the nondegeneracy here is mainly reflected in the requirement of the frequency mapping $\omega(\xi)$, namely the Relative singularity condition (H2) and the Controllability condition (H3). As a consequence, one may consider the case where (H2) holds for some (not all) components of $\omega(\xi)$, i.e., $\omega(\xi)$ admits certain degeneracy. In this case, we could combine a succession of our parameter translation technique with the classic digging method to establish a partial frequency-preserving KAM theorem. Let us consider the simplest case, i.e., the parameterized Hamiltonian system [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"} with an uncoupled frequency mapping $\omega(\xi)=(\omega_1(\xi_1), \cdots, \omega_n(\xi_n))$ having nondegeneracy for components of indices $1 \leqslant {i_1} < \cdots < {i_a} \leqslant n$. Denote by $1\leqslant{\ell _1}< \cdots <{\ell _b}\leqslant n$ the other indices, where $a + b = n$. On these grounds, let $\hat\xi=(\xi_{i_1},...,\xi_{i_a})$ and $\check\xi=(\xi_{\ell_1},...,\xi_{\ell_b})$ for convenience. Now, the corresponding new assumptions are given below.
(H2$^\prime$) There exists a neighborhood $\mathcal{V}' \subset \cup _{j = 1}^a{\mathcal{O}_{{i_j}}}$ of $\hat\xi_0=({({\xi _0})_{{i_1}}}, \ldots, {({\xi _0})_{{i_a}}})$, such that the following holds (allowing the supremum to be continuously supplemented according to sup-limit) $$\mathop {\sup }\limits_{\hat\xi \ne \hat\zeta ,\hat\xi ,\hat\zeta \in \mathcal{V}'} \frac{{\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n},\;\check \xi \in \cup _{j = 1}^b{\mathcal{O}_{{\ell_j}}}} \left| {P\left( {y,x,\check\xi,\hat \xi } \right) - P\left( {y,x,\check\xi,\hat \zeta } \right)} \right|}}{\sum\limits_{j = 1}^a { {\left| {\omega_{i_j} \left( \xi_{i_j} \right) - \omega_{i_j} \left( \zeta_{i_j} \right)} \right|}}}<+\infty.$$ Moreover, $|\omega(\xi)-\omega(\zeta)| \geqslant L|\xi-\zeta|$ with some $L>0$ and all $\xi,\zeta \in \mathcal{O}$.
(H3$^\prime$) Assume that $\varphi \left( \delta \right): = \sup_{1\leqslant j \leqslant a}\left\{ { \left| {\xi_{i_j} - \zeta_{i_j} } \right|:\left| {\omega_{i_j} \left( \xi_{i_j} \right) - \omega_{i_j} \left( \zeta_{i_j} \right)} \right| \leqslant \delta } \right\}$ is continuously defined on $[0, \tau]$ with some $0<\tau\leqslant1/2$, and is monotonically increasing, and satisfies $\varphi(0)=0$ and $\varphi(x)>0$ for $x \in (0, \tau]$. Moreover, it holds $$- \int_0^{{\tau}} {\frac{{\varphi (x)}}{{x\ln x}}dx} < + \infty .$$
****Theorem** 3** (Partial frequency-preserving KAM). *Assume (H1), (H2$^\prime$) and (H3$^\prime$). Then there exists a sufficiently small ${\varepsilon _0} > 0$ such that, for any $0 < \varepsilon < {\varepsilon _0}$, one can find some ${\xi^* }$ in a family of Cantor sets ${\Pi _\varepsilon }: = \prod\nolimits_{j = 1}^b {\left( {{\mathcal{O}_{{\ell _j}}}\backslash {\mathcal{O}_{{\ell _j}}}\left( \varepsilon \right)} \right)} \times \left\{ {\xi _{{i_1}}^ * , \ldots ,\xi _{{i_a}}^ * } \right\} \subset \mathcal{O}^o$, such that the perturbed Hamiltonian system $H\left( {y,x,{\xi^* },\varepsilon } \right)$ with parameter $\xi^*$ admits an analytic, quasi-periodic, invariant torus with toral frequency $\omega(\xi^*)$, where $\omega_{i_j}(\xi^*)=\omega_{i_j} \left( {{\xi _0}} \right)$ for all $1 \leqslant j \leqslant a$. Moreover, $\mathop {\lim }\nolimits_{\varepsilon \to {0^ + }} (\xi _{{i_1}}^ * , \ldots ,\xi _{{i_a}}^ * ) = ({({\xi _0})_{{i_1}}}, \ldots ,{({\xi _0})_{{i_a}}})$, and $\mathop {\lim }\nolimits_{\varepsilon \to {0^ + }} {\rm meas}\prod\nolimits_{j = 1}^b {\left( {{\mathcal{O}_{{\ell _j}}}\backslash {\mathcal{O}_{{\ell _j}}}\left( \varepsilon \right)} \right)} = {\rm meas}\prod\nolimits_{j = 1}^b {{\mathcal{O}_{{\ell _j}}}}$.*
****Remark** 5**. *Note that $P$ and $\omega$ could still have weak regularity with respect to the parameter, e.g., Hölder continuity and even Logarithmic Hölder continuity, see Remark [**Remark** 2](#FPKAMRE1.2){reference-type="ref" reference="FPKAMRE1.2"}.*
For the sake of brevity, let us only give the key idea. We also recommend that the reader skip this part on a first reading and come back to it after following the whole proof of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} in Sections [3](#FPKAMsec-4){reference-type="ref" reference="FPKAMsec-4"} and [4.1](#FPKAMProofT1){reference-type="ref" reference="FPKAMProofT1"}. In this case, the semi-norm for $P$ similar to [\[FPKAMSEMI\]](#FPKAMSEMI){reference-type="eqref" reference="FPKAMSEMI"} should be slightly modified according to (H2$^\prime$). We shall first fix the components of the parameter $\xi_\nu$ with indices $1 \leqslant {i_1} < \cdots < {i_a} \leqslant n$ in the $\nu$-th KAM step, and dig out sets of small measure for the other components, i.e., those with indices $1\leqslant{\ell _1}< \cdots <{\ell _b}\leqslant n$. This is classical in traditional KAM theory; see Chow et al. [@MR1938331], Li and Yi [@MR1926285; @MR2003447], Zhao and Li [@MR4355926], etc. for the relevant technique and iteration settings (note that the property $|\omega(\xi)-\omega(\zeta)| \geqslant L|\xi-\zeta|$ in (H2$^\prime$) ensures this, and we may remove less measure if $\omega(\xi)$ has weaker regularity). Then we use (H1) to find the new parameter $\xi_{\nu+1}$ for partial frequency-preserving, i.e., $$\label{FPKAMpartial}
{\omega _{{i_j}}}({({\xi _{\nu + 1}})_{{i_j}}}) + \sum\limits_{l = 0}^\nu {{{(p_{01}^l)}_{{i_j}}}({\xi _{\nu + 1}})} = {\omega _{{i_j}}}({\xi _0}),\;\;1 \leqslant j \leqslant a.$$ Now, similar to [\[FPKAMCANSHU5\]](#FPKAMCANSHU5){reference-type="eqref" reference="FPKAMCANSHU5"}, (H2$^\prime$) and (H3$^\prime$) will provide the estimates for $({({\xi _{\nu+1} })_{{i_1}}}, \ldots ,{({\xi _{\nu+1} })_{{i_a}}})$, i.e., $$\left| {{{({\xi _{\nu + 1}})}_{{i_j}}} - {{({\xi _\nu })}_{{i_j}}}} \right| \leqslant \varphi ({\varepsilon ^{{q^\nu }}}),\;\;1 \leqslant j \leqslant a$$ for some $q>1$ and the function $\varphi$ in (H3$^\prime$), because the KAM errors are still super-exponentially small. Moreover, the integrability in (H3$^\prime$) enables us to prove that the partial parameter sequence ${\left\{ {({{({\xi _\nu })}_{{i_1}}}, \ldots ,{{({\xi _\nu })}_{{i_a}}})} \right\}_{\nu \in {\mathbb{N}^ + }}}$ is indeed a Cauchy sequence; therefore we can denote by $(\xi _{{i_1}}^ * , \ldots ,\xi _{{i_a}}^ * )$ the corresponding limit (as $\nu \to +\infty$). Obviously, it tends to $({({\xi _0})_{{i_1}}}, \ldots ,{({\xi _0})_{{i_a}}})$ as $\varepsilon \to 0^+$. Finally, after infinitely many KAM steps, the parameter set for indices ${\ell _1}, \ldots ,{\ell _b}$ becomes $\prod\nolimits_{j = 1}^b {\left( {{\mathcal{O}_{{\ell _j}}}\backslash {\mathcal{O}_{{\ell _j}}}\left( \varepsilon \right)} \right)}$ with asymptotically full Lebesgue measure, i.e., $\mathop {\lim }\nolimits_{\varepsilon \to {0^ + }} {\rm meas}\prod\nolimits_{j = 1}^b {\left( {{\mathcal{O}_{{\ell _j}}}\backslash {\mathcal{O}_{{\ell _j}}}\left( \varepsilon \right)} \right)} = {\rm meas}\prod\nolimits_{j = 1}^b {{\mathcal{O}_{{\ell _j}}}}$. We therefore obtain the KAM torus with partial frequency-preserving due to the equations in [\[FPKAMpartial\]](#FPKAMpartial){reference-type="eqref" reference="FPKAMpartial"}, i.e., the components ${\omega _{{i_1}}}({\xi _0}), \ldots ,{\omega _{{i_a}}}({\xi _0})$ of the toral frequency are preserved.
To see the above process more clearly, we provide a simple example when $n=m=2$, $i_1=1$ and $\ell _1=2$. Now, (H2$^\prime$) holds for $i_1=1$, i.e., we aim to preserve the first component $\omega_1(\xi_0)$ of the prescribed Diophantine frequency $\omega(\xi_0)=(\omega_1(\xi_0),\omega_2(\xi_0))$. Therefore, during our KAM iteration, [\[FPKAMpartial\]](#FPKAMpartial){reference-type="eqref" reference="FPKAMpartial"} becomes $$\label{FPKAMC2}
{\omega _{{1}}}({{(\xi_{\nu+1})_1}}) + \sum\limits_{l = 0}^\nu {{{(p_{01}^l)}_{{1}}}(({\xi _{\nu + 1}})_1, ({\xi _{\nu + 1}})_2)} = {\omega _{{1}}}({\xi _0})$$ for $\xi_{\nu + 1}=((\xi_{\nu + 1})_1,(\xi_{\nu + 1})_2)$. After digging out a set of small measure in the second component (with $({\xi_{\nu+1}})_1$ fixed), we can solve [\[FPKAMC2\]](#FPKAMC2){reference-type="eqref" reference="FPKAMC2"} by appropriately choosing a new $({\xi_{\nu+1}})_1$, with $({\xi_{\nu+1}})_2$ fixed in the remaining domain; in this way the first component of $\omega(\xi_0)$ is always unchanged.
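The mechanism of this translation step can be mimicked numerically. The following sketch is only illustrative: the frequency component `omega1`, the drift term `p01` and its decay rate are invented here, and only a single drift term per step is used instead of the cumulative sum in [\[FPKAMC2\]](#FPKAMC2){reference-type="eqref" reference="FPKAMC2"}. It merely shows how the first parameter component is re-solved at each step (here by bisection) while the second one stays frozen, so that the target value $\omega_1(\xi_0)$ is reproduced exactly.

```python
# Toy version of the translation step in the two-dimensional example above.
# The frequency component omega1, the drift p01 and its decay are all invented
# for illustration, and only a single drift term per step is used.
import math

def omega1(x1):
    return 1.0 + x1 + 0.1 * x1 ** 3          # strictly increasing on [-1, 1]

def bisect(f, a, b, tol=1e-12):
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

xi = [0.0, 0.3]                               # xi_0; the second component stays frozen
target = omega1(xi[0])                        # omega_1(xi_0), the value to be preserved
for nu in range(6):
    eps_nu = 0.1 ** (1.5 ** nu)               # mimics a super-exponentially small drift
    p01 = lambda x1, x2: eps_nu * math.cos(x1 + x2)
    # choose a new first component so that omega_1 + drift reproduces the target
    xi[0] = bisect(lambda x1: omega1(x1) + p01(x1, xi[1]) - target, -1.0, 1.0)
    print(nu, xi[0], omega1(xi[0]) + p01(xi[0], xi[1]) - target)
```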
It should be pointed out that our partial frequency-preserving KAM theorem is also sharp, since one can construct counterexamples similar to those in Section [2](#FPKAMCONTEREXAMPLES){reference-type="ref" reference="FPKAMCONTEREXAMPLES"} to show that the previous assumptions are indispensable.
## Applicability for infinite-dimensional Hamiltonian systems {#FPKAMINFINITE}
As mentioned in [@TD], this parameter translation technique is not dimensionally limited, e.g., the Internal condition (H1) can be directly generalized to the infinite-dimensional case ($m=n=+\infty$). Therefore, one can use the detailed spatial structure of Pöschel [@MR1037110] or Montalto and Procesi [@MR4201442] to establish the corresponding infinite-dimensional frequency-preserving KAM theorem via irregular continuity with respect to the parameter (note that we can also construct many explicit examples thanks to our new conditions, see Comment (C1) and Remark [**Remark** 2](#FPKAMRE1.2){reference-type="ref" reference="FPKAMRE1.2"}).
Let us take the spatial structure introduced by Pöschel in [@MR1037110] for instance. Let $\Lambda$ be an infinite-dimensional lattice with a weighted spatial structure $\mathcal{S}$, where $\mathcal{S}$ is a family of finite subsets $A$ of $\Lambda$. Namely, $\mathcal{S}$ is a spatial structure on $\Lambda$ characterized by the property that the union of any two sets in $\mathcal{S}$ is again in $\mathcal{S}$, if they intersect: $$A,B \in \mathcal{S},\;\;A \cap B \ne \emptyset \Rightarrow A\cup B \in \mathcal{S}.$$ Then we introduce a nonnegative weight function $\left[ \cdot \right]:A \to \left[ A \right]$ defined on $\mathcal{S}\cap \mathcal{S} = \left\{ {A\cap B :A,B \in \mathcal{S}} \right\}$ to reflect the size, the location and other features of the set $A$. The weight function satisfies monotonicity and subadditivity for all $A,B$ in $\mathcal{S}$: $$\begin{aligned}
A \subseteq B &\Rightarrow \left[ A \right] \leqslant \left[ B \right],\\
A \cap B \ne \emptyset &\Rightarrow \left[ {A \cup B} \right] + \left[ {A \cap B} \right] \leqslant \left[ A \right] + \left[ B \right].\end{aligned}$$ Next we define norms for $k$ running over all nonzero integer vectors in $\mathbb{Z}^\Lambda$ whose support $\operatorname{supp} k = \left\{ {\lambda :{k_\lambda } \ne 0} \right\}$ is a finite set: $$\left| k \right|: = \sum\limits_{\lambda \in \Lambda } {\left| {{k_\lambda }} \right|} ,\;\;\left[ {\left[ k \right]} \right] = \mathop {\min }\limits_{\operatorname{supp} k \subseteq A \in \mathcal{S}} \left[ A \right].$$ On these grounds, the infinite-dimensional nonresonant condition can be defined as follows.
****Definition** 1** (Infinite-dimensional nonresonant condition). *Given a nondecreasing approximation function $\Delta :\left[ {0, + \infty } \right) \to \left[ {1, + \infty } \right)$, i.e., $\Delta \left( 0 \right) = 1$, and $$\frac{{\ln \Delta \left( t \right)}}{t} \searrow 0 \;\;\text{as \;$ 0 \leqslant t \to + \infty $}, \;\;\int_1^{ + \infty } {\frac{{\ln \Delta \left( t \right)}}{{{t^2}}}dt} < + \infty .$$ Then for some $\alpha>0$ and every $0 \ne k \in {\mathbb{Z}^\Lambda }$ with finite support, the infinite-dimensional nonresonant condition reads $$\left| {\left\langle {k,\omega } \right\rangle } \right| \geqslant \frac{\alpha }{{\Delta \left( {\left[ {\left[ k \right]} \right]} \right)\Delta \left( {\left| k \right|} \right)}}.$$*
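For instance (an illustrative choice on our part), the polynomial approximation function $\Delta \left( t \right) = {\left( {1 + t} \right)^\tau }$ with $\tau > 0$ is admissible for the Rüssmann-type integrability condition above: it is nondecreasing, $\Delta(0)=1$, $\ln \Delta(t)/t = \tau \ln (1+t)/t \searrow 0$, and $\int_1^{ + \infty } {\tau \frac{{\ln \left( {1 + t} \right)}}{{{t^2}}}dt} < + \infty$. The resulting nonresonant condition $$\left| {\left\langle {k,\omega } \right\rangle } \right| \geqslant \frac{\alpha }{{{{\left( {1 + \left[ {\left[ k \right]} \right]} \right)}^\tau }{{\left( {1 + \left| k \right|} \right)}^\tau }}}$$ can then be regarded as a spatial-structure analogue of the classical Diophantine condition [\[FPKAMDIO\]](#FPKAMDIO){reference-type="eqref" reference="FPKAMDIO"}.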
Let $N = \left\langle {\omega ,y} \right\rangle$ be the unperturbed integrable Hamiltonian with $\omega \left( \xi \right):{\mathbb{R}^\Lambda } \supseteq \mathcal{Z} \to {\mathbb{R}^\Lambda }$, and $\varepsilon P=\varepsilon P(y,x,\xi)$ be the perturbation of the form $\varepsilon\sum\nolimits_{A \in \mathcal{S}} {{P_A}\left( {{y_A},{x _A};{\xi_A }} \right)}$, where $\varepsilon>0$ is sufficiently small, and ${y _A} = \left( {{y _\lambda }:\lambda \in A} \right)$, and similarly $x_A$ and $\xi_A$. Suppose that the perturbed Hamiltonian $$\label{FPKAMinfinteH}
H = N + \varepsilon P = \left\langle {\omega ,y} \right\rangle + \varepsilon\sum\limits_{A \in S} {{P_A}\left( {{y_A},{x _A};{\xi_A}} \right)}$$ is real analytic in the phase space variables $y,x$ on a complex neighbourhood $${\mathcal{D}_{r,s}}: {\left| y \right|_w} < s,\;\;{\left| {\operatorname{Im} x } \right|_\infty } < r$$ of the torus ${\mathcal{T}_0}: = \left\{ 0 \right\}\times {\mathbb{T}^\Lambda }$ with $w>0$, and continuous with respect to the parameter $\xi$ on $\mathcal{Z}$. The corresponding norms are defined by $${\left| y \right|_w} := \sum\limits_{\lambda \in \Lambda } {\left| {{y_\lambda }} \right|{e^{w\left[ \lambda \right]}}} ,\;\;\left[ \lambda \right] := \mathop {\min }\limits_{\lambda \in A \in \mathcal{S} \cap \mathcal{S}} \left[ A \right],\;\;{\left| x \right|_\infty } := \mathop {\sup }\limits_{\lambda \in \Lambda } \left| {{x _\lambda }} \right|.$$ Moreover, the size of the perturbation is measured in terms of the weighted norm $$|||P|||{_{m,r,s}}: = \sum\limits_{A \in S} {{{\left\| {{P_A}} \right\|}_{r,s}}{e^{m\left[ A \right]}}} ,$$ where ${P_A} = \sum\nolimits_k {{P_{A,k}}\left( {y,\xi } \right){e^{\sqrt { - 1} \left\langle {k,x } \right\rangle }}}$ is the Fourier series expansion, and $$\|{P_A}\|{_{r,s}}: = \sum\limits_{k \in {\mathbb{Z}^\Lambda }} {{{\left\| {{P_{A,k}}} \right\|}_s}{e^{r\left| k \right|}}} ,\;\;{\left\| {{P_{A,k}}} \right\|_s}: = \mathop {\sup }\limits_{{{\left| y \right|}_w} < s,\;\xi \in \mathcal{Z}} \left| {{P_{A,k}}\left( {y,\xi } \right)} \right|.$$ Since quantitative estimates of the smallness of the perturbation are not emphasized here, we omit some notations in [@MR1037110]. Similar to Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} considering the finite-dimensional case, we make the following assumptions.
(H1$^\#$) For $\Xi \in \left(\omega(\mathcal{Z})\right)^o$ given in advance, there exists $\xi_0 \in \mathcal{Z}^o$ such that $\Xi = \omega \left( {{\xi _0}} \right)$ admitting nonresonance in Definition [**Definition** 1](#FPKAMifnc){reference-type="ref" reference="FPKAMifnc"}.
(H2$^\#$) There exists a neighborhood $\mathcal{U} \subset \mathcal{Z}$ of $\xi_0$, such that the following holds (allowing the supremum to be continuously supplemented according to sup-limit) $$\mathop {\sup }\limits_{\xi \ne \zeta ,\xi ,\zeta \in \mathcal{U}} \frac{ ||| {P\left( {y,x,\xi } \right) - P\left( {y,x,\zeta } \right)} |||_{m,r,s}}{{\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|}} <+\infty.$$
The framework for infinite-dimensional KAM is the same as that in Sections [3](#FPKAMsec-4){reference-type="ref" reference="FPKAMsec-4"} and [4](#FPKAMsec-5){reference-type="ref" reference="FPKAMsec-5"}, i.e., instead of digging out domains, we use the technique of parameter translation to keep the prescribed nonresonant frequency unchanged, and consequently, no measure estimates are involved in our KAM approach. To be more precise, for some ${{ \xi }_0} = {\omega ^{ - 1}}\left( {\Xi} \right) \in \mathcal{Z} \subseteq {\mathbb{R}^\Lambda }$, we could obtain by (H1$^\#$) a parameter sequence ${\{ {{{ \xi }_\nu }} \}_{\nu \in \mathbb{N}^+} }$ near ${{ \xi }_0}$, thanks to the smallness of the KAM errors along the iteration and the continuity of $\omega$, such that $\omega( \xi_\nu)=\Xi$, i.e., we achieve frequency-preserving. In addition, similarly to [\[FPKAMCANSHU6\]](#FPKAMCANSHU6){reference-type="eqref" reference="FPKAMCANSHU6"}, ${\{ {{{ \xi }_\nu }} \}_{\nu \in \mathbb{N}^+} }$ can be proved to be a Cauchy sequence by applying (H2$^\#$). Since below we only consider the case that the perturbation possesses at most parametric Lipschitz continuity, we do not need an assumption like (H3), according to Comment (C4). Finally, the uniform convergence of the KAM iteration can be obtained directly. We therefore give the following infinite-dimensional frequency-preserving version without proof.
****Theorem** 4** (Infinite-dimensional frequency-preserving KAM). *Consider the Hamiltonian system [\[FPKAMinfinteH\]](#FPKAMinfinteH){reference-type="eqref" reference="FPKAMinfinteH"} with the perturbation having at most Lipschitz continuity with respect to the parameter $\xi$. Assume that (H1$^\#$) and (H2$^\#$) hold. Then, as long as $\varepsilon>0$ is sufficiently small, there exists at least one $\xi^*$ near $\xi_0$ such that [\[FPKAMinfinteH\]](#FPKAMinfinteH){reference-type="eqref" reference="FPKAMinfinteH"} admits an analytic, quasi-periodic, full-dimensional invariant torus with toral frequency $\Xi=\omega \left( {{\xi _0}} \right)$. Moreover, ${\xi ^ * } = {\xi _0} +\mathcal{O}( { - \int_0^{\varepsilon } {\frac{{\varphi (x)}}{{x\ln x}}dx} } )$ as $\varepsilon \to 0^+$.*
In addition, many counterexamples in the infinite-dimensional case can be similarly constructed. Indeed, we just need to adjust the infinite-dimensional part and emphasize the finite-dimensional part, see Sections [2.1](#FPKAMSEC2.1){reference-type="ref" reference="FPKAMSEC2.1"} and [2.2](#FPKAMSEC2.2){reference-type="ref" reference="FPKAMSEC2.2"} for details.
## Further comments on the Controllability condition (H3) {#FPKAMFCH3}
We conjecture that the Controllability condition (H3) is indispensable on its own due to a cumulative effect (its non-independent indispensability can be seen in Section [2.1.8](#FPKAMSEC217){reference-type="ref" reference="FPKAMSEC217"}). However, we do not have a rigorous counterexample to show this. Here we give an intuitive idea. Consider the parameterized Hamiltonian system [\[FPKAMHamilton\]](#FPKAMHamilton){reference-type="eqref" reference="FPKAMHamilton"}, where the perturbation is independent of the parameter. Then using the parameter translation technique, we obtain the frequency-preserving equations $- {p_{01}^\nu } = \omega \left( {{\xi _ {\nu+1} }} \right) - \omega \left( \xi_\nu \right)$ for all $\nu \in \mathbb{N}$ (see Section [4.1](#FPKAMProofT1){reference-type="ref" reference="FPKAMProofT1"}). We assume that all $p_{01}^\nu =- a_\nu\cdot (1, \ldots ,1):=-a_\nu I$, where $\{a_\nu\}_{\nu\in \mathbb{N}}$ is a strictly decreasing positive sequence. Note that this is not true in all cases, because the drift terms in KAM theory may not have a fixed sign, but we can choose special perturbations that make it true. Then one may construct some frequency mapping such that the parameter sequence diverges, i.e., $\xi_\infty =+\infty$; in other words, after some $N \gg 1$, one cannot find a parameter $\xi_{N+1} \in \mathcal{O}$ in the sense of frequency-preserving. To be more precise, let $\Delta \in (0, \min \left\{ {1,{\rm{diam}}\mathcal{O}/2} \right\} )$ and $m=n \in \mathbb{N}^+$ be given. Then we construct a $C^\infty$ frequency mapping $\omega(\xi)$ on $\mathbb{R}^n$ that satisfies $$\left\{ \begin{gathered}
\omega \left( {{\xi _0}} \right) = \bar \omega ,\;\; {\text{$ \bar \omega $ is Diophantine}}, \hfill \\
\omega \left( {{\xi _0} + \nu \Delta I } \right) = \bar \omega + \sum\limits_{j = 0}^{\nu - 1} {{a_j}} I,\;\;\forall\nu \in {\mathbb{N}^ + }, \hfill \\
{D^u}\omega \left( {{\xi _0} + \nu \Delta I } \right) = 0,\;\;\forall \nu ,u \in \mathbb{N}^+, \hfill \\
\omega \left( {{\xi _0} + s} \right) \ne \omega \left( {{\xi _0} + \nu \Delta I } \right),\;\;{\text{$ \forall s $ between $ \left( {\nu - 1} \right)\Delta I $ and $ \nu \Delta I , \;\;\nu \in {\mathbb{N}^ + }$}}. \hfill \\
\end{gathered} \right.$$ It is easy to verify that the $\bar \omega$-frequency-preserving equations give ${\xi _\nu } = {\xi _0} + \nu \Delta I$ for all $\nu \in \mathbb{N}^+$. If we restrict $\omega(\xi)$ to $\mathcal{O}$, then there exists a sufficiently large $N \in \mathbb{N}^+$ such that $\xi_{\nu} \notin \mathcal{O}$ for all $\nu \geqslant N+1$ (this also shows that (H3) fails). Even if $\mathcal{O}=\mathbb{R}^n$, $\{\xi_{\nu}\}_{\nu \in \mathbb{N}^+}$ does not converge, and therefore frequency-preserving cannot actually be achieved in the limit. The above analysis gives an intuitive conclusion, but it seems difficult to reverse-engineer an explicit parameterized Hamiltonian system to illustrate this point.
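For concreteness, here is one admissible choice of data in the above construction (the specific numbers are ours, purely for illustration): take $n=1$, $\Delta=1/2$, $a_\nu = 2^{-\nu-2}$ and any $\bar\omega \ne 0$ (which is automatically Diophantine when $n=1$). Then $\omega \left( {{\xi _0} + \nu /2} \right) = \bar \omega + \left( {1 - {2^{ - \nu }}} \right)/2$, and the $\bar\omega$-frequency-preserving equations force $\xi_\nu = \xi_0 + \nu/2 \to +\infty$, even though the frequency increments $a_\nu$ sum to the finite value $1/2$. In particular, $\left| {\omega \left( {{\xi _{\nu + 1}}} \right) - \omega \left( {{\xi _\nu }} \right)} \right| = {2^{ - \nu - 2}} \to 0$ while $\left| {{\xi _{\nu + 1}} - {\xi _\nu }} \right| = 1/2$, so any function $\varphi$ as in (H3) associated with this $\omega$ would have to satisfy $\varphi ({2^{ - \nu - 2}}) \geqslant 1/2$ for every $\nu$, contradicting $\varphi(0)=0$ together with continuity at $0$.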
We also believe that the integrability in the Controllability condition (H3) is strongly related to the irrationality of the prescribed frequency (e.g., Diophantine nonresonance in [\[FPKAMDIO\]](#FPKAMDIO){reference-type="eqref" reference="FPKAMDIO"}) and to the regularity of the perturbed Hamiltonian system with respect to the action-angle variables (e.g., analyticity in this paper). This is because the integrability plays a crucial role in our proof: together with the super-exponential decay of the KAM errors, it shows that the parameter sequence is indeed a Cauchy sequence. However, if one considers frequencies that are nearly rational, such as the Bruno type (see Hanßmann and Si [@MR2586370], Bounemoura [@MR3982280], etc.), or Hamiltonian systems that are less regular with respect to the action-angle variables, such as non-analytic Gevrey regularity (see Popov [@MR2104602], Lopes Dias and Gaivão [@MR4011043], Chen and Cheng [@MR4208447], Bounemoura [@MR4412075], etc.), non-Gevrey $C^\infty$ regularity (see Bounemoura and Féjoz [@MR4050197], etc.), or even finitely smooth regularity (see Pöschel [@MR668410], Salamon [@MR2111297], Albrecht [@MR2350326], Li and Shang [@MR3960504], Koudjinan [@MR4104457], etc.), then the KAM convergence rate may be slower (exponentially slow, or even arbitrarily slow), which would force us to require stronger integrability in (H3) within our approach.
# The KAM step {#FPKAMsec-4}
In this section, we give the lemmas for the standard KAM step. The proofs are omitted here, and we refer to [@DL] for more details of the calculations. It should be emphasized that we employ the quasi-linear KAM iterative scheme instead of the linear one; the former was first introduced by Li and Yi in [@MR2003447]. This technique can overcome some essential difficulties that the classical KAM iteration cannot handle. Although it might not be necessary under the settings of this paper, we prefer to follow this approach, which may allow one to extend such KAM results directly to other problems.
Let ${\rm{diam}}\mathcal{O} \leqslant 1$ without loss of generality. For a perturbation function $P\left( {y,x,\xi } \right)$ (which may vary from one KAM step to the next), analytic in $y$ and $x$ on some closed domain $D$ and merely continuous in $\xi$ on $\mathcal{V}\subset \mathcal{O}$, we define its norm as follows $${\left\| P \right\|_D}: = {\left| P \right|_D} + {\left[ P \right]_{\omega}},$$ where the sup-norm is given by $${\left| P \right|_D} := \mathop {\sup }\limits_{\xi \in \mathcal{V}} \mathop {\sup }\limits_{\left( {y,x} \right) \in D} \left| P \right|,$$ and the semi-norm is defined as $$\label{FPKAMSEMI}
[P]_\omega:=\mathop {\sup }\limits_{\xi \ne \zeta ,\xi ,\zeta \in \mathcal{V}} \frac{{\mathop {\sup }\limits_{y,x \in G \times {\mathbb{T}^n}} \left| {P\left( {y,x,\xi } \right) - P\left( {y,x,\zeta } \right)} \right|}}{{\left| {\omega \left( \xi \right) - \omega \left( \zeta \right)} \right|}}.$$ Note that this $\omega$-dependent semi-norm is well defined thanks to the Relative singularity condition (H2) and the Controllability condition (H3). Unlike the Hölder semi-norm in [@DL] or the weaker one in [@TD], this semi-norm still allows the KAM iteration, while greatly weakening the requirements on the regularity with respect to the parameter.
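As a simple illustration of what this semi-norm measures (the following toy situation is ours, not taken from [@DL] or [@TD]): if the parameter enters the perturbation only through the frequency map, say $P\left( {y,x,\xi } \right) = f\left( {\omega \left( \xi \right)} \right)g\left( {y,x} \right)$ with $f$ Lipschitz of constant $L_f$ and $g$ bounded on $G \times {\mathbb{T}^n}$, then $${\left[ P \right]_\omega } \leqslant {L_f}\mathop {\sup }\limits_{G \times {\mathbb{T}^n}} \left| g \right|,$$ no matter how irregular the map $\xi \mapsto \omega(\xi)$ (and hence $\xi \mapsto P$) may be; this illustrates how the semi-norm only sees the parameter dependence of $P$ relative to that of $\omega$.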
## Description of the $0$-th KAM step
Set $\rho = 1/10,\gamma = {\varepsilon ^{1/20}}$ and let $\eta > 0$ be an integer such that ${\left( {1 + \rho } \right)^\eta } > 2$. We first define the following parameters of the $0$-th KAM step: $$\begin{aligned}
\label{FPKAMCANSHU}&{r_0} = r,\;{\gamma _0} = \gamma ,\;{e_0} = 0,\;{\bar h _0} = 0,\;{\mu _0} = {\varepsilon ^{\frac{1}{{40\eta \left( {\tau + 1} \right)}}}},\;{s_0} = \frac{{s{\gamma _0}}}{{16\left( {{M^ * } + 2} \right)K_1^{\tau + 1}}}, \\
&{\mathcal{O}_0} = \left\{ {\xi \in \mathcal{O}:\left| {\xi - {\xi _0}} \right| < {\rm{dist}}\left( {{\xi _0},\partial \mathcal{O}} \right)} \right\},\notag \\
&D \left( {{s_0},{r_0}} \right) = \left\{ {\left( {y,x} \right): |y| < {s_0},\left| {\operatorname{Im} x} \right| < {r_0}} \right\} ,\notag\end{aligned}$$ where $0 < {s_0},{\gamma _0},{\mu _0} \leqslant 1,\tau > 0$ and ${M^ * } > 0$ is a constant defined as in Lemma [**Lemma** 3](#FPKAM3.3){reference-type="ref" reference="FPKAM3.3"}. Therefore, we can write $$\begin{aligned}
{H_0}: ={}& H\left( {y,x,{\xi _0}} \right) = {N_0} + {P_0},\notag \\
{N_0}: ={}& {N_0}\left( {y,{\xi _0},\varepsilon } \right) = {e_0} + \left\langle {\omega \left( {{\xi _0}} \right),y} \right\rangle + {\bar h _0},\notag \\
{P_0}: ={}& \varepsilon P\left( {y,x,{\xi _0} } \right).\notag\end{aligned}$$ According to the above parameters, we have the following estimate for $P_0$.
****Lemma** 1** (Lemma 3.1 in [@DL]). *It holds that $$\notag
{\left\| {{P_0}} \right\|_{D\left( {{s_0},{r_0}} \right)}} \leqslant \gamma _0^{5}s_0^4{\mu _0}.$$*
## The induction from the $\nu$-th KAM step
### Description of the $\nu$-th KAM step
We now define the parameters of the $\nu$-th KAM step, i.e., $$\notag
{r_\nu } = {r_{\nu - 1}}/2 + {r_0}/4,\;\;{s_\nu } = \mu _\nu ^{2\rho }{s_{\nu - 1}}/8,\;\;{\mu _\nu } = {8^4}\mu _{\nu - 1}^{1 + \rho }.$$ Now suppose that at the $\nu$-th step, we have arrived at the following real analytic Hamiltonian system $$\label{FPKAM1}
{H_\nu } = {N_\nu } + {P_\nu },\;\;{N_\nu } = {e_\nu } + \left\langle {\omega \left( {{\xi _0}} \right),y} \right\rangle + {\bar h _\nu }\left( {y,\xi } \right)$$ defined on $D\left( {{s_\nu },{r_\nu }} \right)$ with the perturbation estimate ${\left\| {{P_\nu }} \right\|_{D\left( {{s_\nu },{r_\nu }} \right)}} \leqslant \gamma _0^{5}s_\nu ^4{\mu _\nu }$. The equations of the motion associated to $H_\nu$ are $$\label{FPKAMmotion}
\left\{ \begin{gathered}
{\dot{y}_\nu } = - {\partial _{{x_\nu }}}{H_\nu }, \hfill \\
{\dot{x}_\nu } = {\partial _{{y_\nu }}}{H_\nu }. \hfill \\
\end{gathered} \right.$$
Unless otherwise specified, we omit the indices for all quantities of the present KAM step (the $\nu$-th step) and use $+$ to index all the quantities (Hamiltonians, domains, normal forms, perturbations, transformations, etc.) of the next KAM step (the $(\nu +1)$-th step). To simplify the notation, we will not specify the dependence of $P$, $P_+$, etc. All the constants $c_1$--$c_6$ below are positive and independent of the iteration process, and we also use $c$ to denote any intermediate positive constant that is independent of the iteration process.
Define $$\begin{aligned}
{r_ + } ={}& r/2 + {r_0}/4,\notag \\
{s_ + } ={}& \alpha s/8,\;\;\alpha = {\mu ^{2\rho }} = {\mu ^{1/{5}}},\notag \\
\label{FPKAMCANSHU2} {\mu _ + } ={}& {8^4}{c_0}{\mu ^{1 + \rho }},\;\;{c_0} = 1 + \mathop {\max }\limits_{1 \leqslant i \leqslant 6} {c_i}, \\
{K_ + } ={}& {\left( {\left[ { - \ln \mu } \right] + 1} \right)^{3\eta }},\notag \\
D\left( s \right) ={}& \left\{ {y \in {\mathbb{C}^n}:\left| y \right| < s} \right\},\notag \\
\hat D ={}& D\left( {s,{r_ + } + 7\left( {r - {r_ + }} \right)/8} \right),\notag \\
\tilde D ={}& D\left( {s/2,{r_ + } + 6\left( {r - {r_ + }} \right)/8} \right),\notag \\
{D_{i\alpha /8}} ={}& D\left( {i\alpha s/8,{r_ + } + \left( {i - 1} \right)\left( {r - {r_ + }} \right)/8} \right),\;1 \leqslant i \leqslant 8,\notag \\
{D_ + } ={}& {D_{\alpha /8}} = D\left( {{s_ + },{r_ + }} \right),\notag \\
\Gamma \left( {r - {r_ + }} \right) ={}& \sum\limits_{0 < \left| k \right| \leqslant {K_ + }} {{{\left| k \right|}^{3\tau + 5}}{e^{ - \left| k \right|\left( {r - {r_ + }} \right)/8}}}. \notag\end{aligned}$$
### Construct a symplectic transformation
As usual, we construct a symplectic coordinate transformation $\Phi _ +$ as $$\notag
{\Phi _ + }:\left( {{y_ + },{x_ + }} \right) \in D\left( {{s_ + },{r_ + }} \right) \to {\Phi _ + }\left( {{y_ + },{x_ + }} \right) = \left( {y,x} \right) \in D\left( {s,r} \right),$$ such that it transforms the Hamiltonian [\[FPKAM1\]](#FPKAM1){reference-type="eqref" reference="FPKAM1"} into the Hamiltonian of the next KAM cycle (at the $(\nu + 1)$-th step) $$\notag
{H_ + } = H \circ {\Phi _ + } = {N_ + } + {P_ + },$$ where $N_+$ and $P_+$ have similar properties to $N$ and $P$ respectively on $D(s_+,r_+)$, and the equations of motion [\[FPKAMmotion\]](#FPKAMmotion){reference-type="eqref" reference="FPKAMmotion"} are changed into $$\notag
\left\{ \begin{gathered}
{\dot{y}_ + } = - {\partial _{{x_ + }}}{H_ + }, \hfill \\
{\dot{x}_ + } = {\partial _{{y_ + }}}{H_ + }. \hfill \\
\end{gathered} \right.$$
### Truncation
To deal with the small divisors in homological equations, an efficient approach is truncation. Consider the truncation of Taylor-Fourier series of $P$ $$P = \sum\limits_{k \in {\mathbb{Z}^n},l \in \mathbb{Z}_ + ^n} {{p_{kl}}{y^l}{e^{\sqrt { - 1} \left\langle {k,x} \right\rangle }}} ,\;\;R = \sum\limits_{\left| k \right| \leqslant {K_ + },\left| l \right| \leqslant 4} {{p_{kl}}{y^l}{e^{\sqrt { - 1} \left\langle {k,x} \right\rangle }}} .$$
****Lemma** 2**. *Assume that $$\notag
\int_{{K_ + }}^{ + \infty } {{t^n}{e^{ - t\left( {r - {r_ + }} \right)/16}}dt} \leqslant \mu .$$ Then there exists a constant $c_1>0$ such that $$\label{FPKAMR}
{\left\| {P - R} \right\|_{{D_\alpha }}} \leqslant {c_1}\gamma _0^{5}{s^4}{\mu ^2},\;\;{\left\| R \right\|_{{D_\alpha }}} \leqslant {c_1}\gamma _0^{5}{s^4}\mu .$$*
****Remark** 6**. *The proof is indeed the same as Lemma 3.2 in [@DL]. To be more precise, for the Hölder semi-norm estimates in [@DL], the denominator part $|\xi-\zeta|^\beta$ remains unchanged, and only the numerator part is analyzed. Therefore, we just need to replace $|\xi-\zeta|^\beta$ by $\left|\omega(\xi)-\omega(\zeta)\right|$ to obtain estimates for the $\omega$-dependent semi-norm.*
### The quasi-linear homological equations
We construct a symplectic transformation as the time $1$-map $\phi _F^1$ of the flow generated by a Hamiltonian $F$ to eliminate all resonance terms in $R$. By setting $$\notag
F = \sum\limits_{0 < \left| k \right| \leqslant {K_ + },\left| l \right| \leqslant 4} {{f_{kl}}{y^l}{e^{\sqrt { - 1} \left\langle {k,x} \right\rangle }}} ,\;\;[R] = {\left( {2\pi } \right)^{ - n}}\int_{{\mathbb{T}^n}} {R\left( {y,x} \right)dx} ,$$ we require that $$\left\{ {N,F} \right\} + R - \left[ R \right] = 0,$$ which yields the so-called quasi-linear homological equations $$\label{FPKAMtdfc}
\sqrt { - 1} \left\langle {k,\omega \left( {{\xi _0}} \right) + {\partial _y}\bar h } \right\rangle {f_{kl}} = {p_{kl}},\;\;\left| l \right| \leqslant 4,\\
\;0 < \left| k \right| \leqslant {K_ + }.$$
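To make the role of the small divisors concrete, the following sketch solves a truncated system of this type coefficient-by-coefficient on $\mathbb{T}^2$. Everything in it is an invented toy: the frequency $(1,\sqrt{2})$, the decay profile of the data $p_k$ and the value of $\tau$ are our own choices, and the $y$-dependent correction $\partial_y \bar h$ is dropped; it only records the empirical Diophantine constant over the truncation range and a weighted sum of the resulting coefficients $|f_k|$.

```python
# Coefficient-wise solution of a truncated system of this type on T^2.
# The frequency (1, sqrt(2)), the decay of the data p_k and the value of tau
# are invented toy choices; the y-dependent correction from \bar h is dropped.
import itertools
import math

omega = (1.0, math.sqrt(2.0))
tau, K, r = 1.0, 30, 1.0

def p_coeff(k):                                # analytic-type decay |p_k| ~ e^{-r |k|_1}
    return math.exp(-r * (abs(k[0]) + abs(k[1])))

alpha_est = float("inf")                       # empirical Diophantine constant over the range
weighted_sum = 0.0                             # proxy for the size of F on the strip |Im x| < r/2
for k in itertools.product(range(-K, K + 1), repeat=2):
    norm1 = abs(k[0]) + abs(k[1])
    if norm1 == 0 or norm1 > K:
        continue
    divisor = abs(k[0] * omega[0] + k[1] * omega[1])   # the small divisor |<k, omega>|
    alpha_est = min(alpha_est, divisor * norm1 ** tau)
    f_k = p_coeff(k) / divisor                          # |f_k| = |p_k| / |<k, omega>|
    weighted_sum += f_k * math.exp(0.5 * r * norm1)

print(f"empirical Diophantine constant up to K={K}: {alpha_est:.4f}")
print(f"sum of |f_k| e^(r|k|/2) over the truncation: {weighted_sum:.4f}")
```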
****Lemma** 3** (Lemma 3.3 in [@DL]). *Assume that $$\notag
\mathop {\max }\limits_{\left| i \right| \leqslant 2} {\left\| {\partial _y^i\bar h - \partial _y^i{{\bar h }_0}} \right\|_{D\left( s \right)}} \leqslant \mu _0^{1/2},\;\;s < \frac{{{\gamma _0}}}{{\left( {2\left( {{M^ * } + 2} \right)K_ + ^{\tau + 1}} \right)}},$$ where $${M^ * } = \mathop {\max }\limits_{\left| i \right| \leqslant 2,y \in D\left( s \right)} \left| {\partial _y^i{{\bar h }_0}\left( {{\xi _0},y} \right)} \right|.$$ Then the quasi-linear homological equations in [\[FPKAMtdfc\]](#FPKAMtdfc){reference-type="eqref" reference="FPKAMtdfc"} can be uniquely solved on $D(s)$ to obtain a family of functions $f_{kl}$ which are analytic in $y$ and satisfy the following estimates: $$\notag
{\left\| {\partial _y^i{f_{kl}}} \right\|_{D\left( s \right)}} \leqslant {c_2}{\left| k \right|^{\left( {\left| i \right| + 1} \right)\tau + \left| i \right|}}\gamma _0^{4 - \left| i \right|}{s^{4 - \left| i \right|}}\mu {e^{ - \left| k \right|r}}$$ for $\left| l \right| \leqslant 4,0 < \left| k \right| \leqslant {K_ + }$ and $\left| i \right| \leqslant 4$, where $c_2>0$ is a constant.*
Applying the above transformation $\phi _F^1$ to Hamiltonian $H$, we obtain that $$H \circ \phi _F^1 = \left( {N + R} \right) \circ \phi _F^1: = {\bar N _ + } + {\bar P _ + },$$ where the new normal form turns to $${\bar N _ + } = N + \left[ R \right] = {e_ + } + \left\langle {\omega \left( \xi \right),y} \right\rangle + \Big\langle {\sum\limits_{j = 0}^\nu {p_{01}^j\left( \xi \right)} ,y} \Big\rangle + {\bar h _ + }\left( {y,\xi } \right)$$ with $${e_ + } = e + p_{00}^\nu ,\;\;{\bar h _ + }\left( {y,\xi } \right) = \bar h \left( {y,\xi } \right) + \left[ R \right] - p_{00}^\nu - \left\langle {p_{01}^\nu \left( \xi \right),y} \right\rangle ,$$ and the new perturbation becomes $${\bar P _ + } = \int_0^1 {\left\{ {{R_t},F} \right\} \circ \phi _F^tdt} + \left( {P - R} \right) \circ \phi _F^1$$ with $${R_t} = \left( {1 - t} \right)\left[ R \right] + tR.$$
### The parameter translation
In this section, we construct a translation $\phi$ so as to keep the frequency unchanged, where $$\phi :\;\;x \to x,\;\;y \to y,\;\;\widetilde \xi \to \widetilde \xi + {\xi _ + } - \xi ,$$ and ${\xi _ + }$ will be determined later. Let ${\Phi _ + } = \phi _F^1 \circ \phi$. Then $$\begin{aligned}
{}&H \circ {\Phi _ + } = {N_ + } + {P_ + },\\
{}&{N_ + } = {\bar N _ + } \circ \phi = {e_ + } + \left\langle {\omega \left( {{\xi _ + }} \right),y} \right\rangle + \Big\langle {\sum\limits_{j = 0}^\nu {p_{01}^j\left( {{\xi _ + }} \right)} ,y} \Big\rangle + {\bar h _ + }\left( {y,{\xi _ + }} \right),\\
{}&{P_ + } = {\bar P _ + } \circ \phi .\end{aligned}$$
### Frequency-preserving {#kkkk}
In this section, we will show that the prescribed Diophantine frequency can be preserved by a suitable parameter translation technique in the iteration process. To be more precise, the Internal condition (H1) ensures that the parameter ${{\xi _ \nu }}$ can be found in the internal parameter set $\mathcal{O}^o$ to keep the frequency unchanged at this KAM step, and the Relative singularity condition (H2) together with the Controllability condition (H3) assure that $\left\{ {{\xi _\nu }} \right\}_{\nu \in \mathbb{N}^+}$ is indeed a Cauchy sequence. The following lemma is crucial to our arguments.
****Lemma** 4**. *Assume that $$\notag
{\mathop {\sup }\limits_{\xi \in \mathcal{O}} \left| {\sum\limits_{j = 0}^\nu {p_{01}^j} } \right|} < \mu _0^{1/2}.$$ Then there exists at least a ${\xi _ + } \in \mathcal{V} \subset {\mathcal{O}^o}$ such that $$\label{FPKAMomega}
\omega \left( {{\xi _ + }} \right) + \sum\limits_{j = 0}^\nu {p_{01}^j\left( {{\xi _ + }} \right)} = \omega \left( {{\xi _0}} \right).$$ Moreover, ${\lim _{\nu \to + \infty }}{\xi _ + } : = {\xi ^ * } \in \mathcal{V}$, and $\left| {{\xi ^ * } - {\xi _0}} \right| = \mathcal{O}( { - \int_0^{\varepsilon } {\frac{{\varphi (x)}}{{x\ln x}}dx} } ) = o\left( 1 \right)$, where the function $\varphi$ is defined in (H3).*
*Proof.* The proof will be completed by an induction on $\nu$. We start with the case $\nu=0$. It is evident that $\omega \left( {{\xi _0}} \right) = \omega \left( {{\xi _0}} \right)$. Now assume that for some $\nu \geqslant 1$ we have arrived at $$\label{FPKAM4.25}
\omega \left( {{\xi _i}} \right) + \sum\limits_{j = 0}^{i - 1} {p_{01}^j\left( {{\xi _i}} \right)} = \omega \left( {{\xi _0}} \right),\;\;{\xi _i} \in \mathcal{V} \subset \mathcal{O}^o,\;\;1 \leqslant i \leqslant \nu .$$ Then in view of the smallness of $\sum\nolimits_{j = 0}^{i - 1} {p_{01}^j\left( {\cdot} \right)}$ and the Internal condition (H1), we can conclude that there exists at least a ${\xi _ + } \in \mathcal{O}^o$ such that the frequency-preserving equation [\[FPKAMomega\]](#FPKAMomega){reference-type="eqref" reference="FPKAMomega"} holds, whenever $\varepsilon$ is sufficiently small. Indeed, we could show that $\xi_+ \in \mathcal{V}$ by the following stronger convergence analysis.
Recall that [\[FPKAMR\]](#FPKAMR){reference-type="eqref" reference="FPKAMR"} in Lemma [**Lemma** 2](#FPKAM3.2){reference-type="ref" reference="FPKAM3.2"} implies that $$\label{FPKAMCANSHU4}
{|{p_{01}^j}|,[{p_{01}^j} ]_{\omega}} < c{\mu _j},\;\;0 \leqslant j \leqslant \nu,$$ which leads to $$\label{FPKAMlxm}
\left| {p_{01}^j\left( {{\xi _ + }} \right) - p_{01}^j\left( \xi \right)} \right| < c{\mu _j}\left|\omega(\xi_+)-\omega(\xi)\right|,\;\;0 \leqslant j \leqslant \nu.$$ Here we first provide a quantitative estimate for $\mu_j$, showing that it is super-exponentially small. Using [\[FPKAMCANSHU\]](#FPKAMCANSHU){reference-type="eqref" reference="FPKAMCANSHU"} and [\[FPKAMCANSHU2\]](#FPKAMCANSHU2){reference-type="eqref" reference="FPKAMCANSHU2"}, we obtain from the smallness of $\varepsilon$ that $$\begin{aligned}
{\mu _j} &\leqslant C\mu _{j - 1}^{1 + \rho } \leqslant {C^{1 + \left( {1 + \rho } \right)}}\mu _{j - 2}^{{{\left( {1 + \rho } \right)}^2}} \leqslant \cdots \leqslant {C^{\sum\limits_{l = 0}^{j - 1} {{{\left( {1 + \rho } \right)}^l}} }}\mu _0^{{{\left( {1 + \rho } \right)}^j}}\notag \\
\label{FPKAMCANSHU3}& = {C^{\frac{{{{\left( {1 + \rho } \right)}^{j}} - 1}}{\rho }}}{\varepsilon ^{\frac{{{{\left( {1 + \rho } \right)}^j}}}{{40\eta \left( {\tau + 1} \right)}}}} \leqslant \frac{{{\varepsilon ^{{q^j}}}}}{2c},\;\;\forall j \in {\mathbb{N}^ + },\end{aligned}$$ where $1<q<1+\rho$, and $C>0$ is a universal constant independent of $j$ and $\varepsilon$. Besides, applying L'Hospital's rule we get $$\begin{aligned}
\sum\limits_{j = 0}^\infty {{\mu _j}} &\leqslant \sum\limits_{j = 0}^\infty {\frac{{{\varepsilon ^{{q^j}}}}}{{2c}}} \lesssim \sum\limits_{j = 0}^\infty {\left( {{q^{j + 1}} - {q^j}} \right)\frac{{{\varepsilon ^{{q^j}}}}}{{{q^j}}}} \lesssim \int_1^{ + \infty } {\frac{{{\varepsilon ^x}}}{x}dx} \notag \\
\label{FPKAMLEIJIA}& = \int_{ - \ln \varepsilon }^{ + \infty } {\frac{1}{{x{e^x}}}dx} = {\mathcal{O}^\# }\left( {\frac{\varepsilon }{{ - \ln \varepsilon }}} \right) = o\left( 1 \right)\end{aligned}$$ as $0<\varepsilon \ll 1$. Now, with the Relative singularity condition (H2) we have $$\begin{aligned}
\left| {p_{01}^\nu \left( {{\xi _ + }} \right)} \right| ={}& \left| {\omega \left( {{\xi _ + }} \right) - \omega \left( \xi \right) + \sum\limits_{j = 0}^{\nu - 1} {\left( {p_{01}^j\left( {{\xi _ + }} \right) - p_{01}^j\left( \xi \right)} \right)} } \right|\notag \\
\geqslant{}& \left| {\omega \left( {{\xi _ + }} \right) - \omega \left( \xi \right)} \right| - \sum\limits_{j = 0}^{\nu - 1} {\left| {p_{01}^j\left( {{\xi _ + }} \right) - p_{01}^j\left( \xi \right)} \right|} \notag \\
\label{FPKAM1112} \geqslant{}& \left| {\omega \left( {{\xi _ + }} \right) - \omega \left( \xi \right)} \right| - c\left( {\sum\limits_{j = 0}^{\nu - 1} {{\mu _j}} } \right)\left| {\omega \left( {{\xi _ + }} \right) - \omega \left( \xi \right)} \right| \\
\label{FPKAM1111} \geqslant{}& \left| {\omega \left( {{\xi _ + }} \right) - \omega \left( \xi \right)} \right|/2,\end{aligned}$$ where [\[FPKAM1112\]](#FPKAM1112){reference-type="eqref" reference="FPKAM1112"} uses the smallness of the $\omega$-dependent semi-norm in [\[FPKAMlxm\]](#FPKAMlxm){reference-type="eqref" reference="FPKAMlxm"}, and [\[FPKAM1111\]](#FPKAM1111){reference-type="eqref" reference="FPKAM1111"} uses the smallness of the accumulated KAM errors in [\[FPKAMLEIJIA\]](#FPKAMLEIJIA){reference-type="eqref" reference="FPKAMLEIJIA"}, whenever $\varepsilon>0$ is sufficiently small. Then by [\[FPKAMCANSHU4\]](#FPKAMCANSHU4){reference-type="eqref" reference="FPKAMCANSHU4"}, [\[FPKAMCANSHU3\]](#FPKAMCANSHU3){reference-type="eqref" reference="FPKAMCANSHU3"} and the Controllability condition (H3), one can derive from [\[FPKAM1111\]](#FPKAM1111){reference-type="eqref" reference="FPKAM1111"} that $$\label{FPKAMCANSHU5}
\left| {{\xi _{+}} - {\xi }} \right| \leqslant \varphi (\varepsilon^{{q^\nu }}).$$ Note that $$\label{FPKAMCANSHU6}
\sum\limits_{\nu = 0}^\infty {\varphi (\varepsilon ^{{q^\nu }})} \lesssim \sum\limits_{\nu = 0}^\infty {\left( {{q^{\nu + 1}} - {q^\nu }} \right)\frac{{\varphi (\varepsilon^{{q^\nu }})}}{{{q^\nu }}}} \lesssim \int_1^{ + \infty } {\frac{{\varphi (\varepsilon ^x)}}{x}dx} \lesssim - \int_0^{\varepsilon} {\frac{{\varphi (x)}}{{x\ln x}}dx} =o(1)
\left| {{\xi ^ * } - {\xi _0}} \right| = \mathop {\lim }\limits_{\nu \to + \infty } \left| {{\xi _ {\nu+1} } - {\xi _0}} \right| \leqslant\mathop {\lim }\limits_{\nu \to + \infty } \sum\limits_{j = 1}^{\nu + 1} {\left| {{\xi _j} - {\xi _{j - 1}}} \right|} \leqslant \sum\limits_{\nu = 0}^\infty {\varphi (\varepsilon ^{{q^\nu }})},$$ i.e., is at most the order of $\mathcal{O}( { - \int_0^{\varepsilon } {\frac{{\varphi (x)}}{{x\ln x}}dx} } ) = o\left( 1 \right)$ by [\[FPKAMCANSHU6\]](#FPKAMCANSHU6){reference-type="eqref" reference="FPKAMCANSHU6"}, which gives the conclusion. ◻
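The two summability facts used above, namely the super-exponential decay $\mu_j \lesssim \varepsilon^{q^j}$ and the comparison of $\sum_\nu \varphi(\varepsilon^{q^\nu})$ with $-\int_0^{\varepsilon}\frac{\varphi(x)}{x\ln x}dx$, are easy to probe numerically. The following sketch uses sample values of $\varepsilon$, $q$ and the modulus $\varphi(x)=\sqrt{x}$ chosen by us purely for illustration; it only checks that the series and the integral are of comparable size (they agree up to a constant factor depending on $q$), not the precise constants of the proof.

```python
# Numerical probe of the two summability facts used in the proof, with sample
# values of eps, q and the modulus phi(x) = sqrt(x) chosen for illustration.
import math

eps, q = 1e-3, 1.2
phi = lambda x: math.sqrt(x)

# the series over the KAM steps (the terms underflow to 0 very quickly)
series = sum(phi(eps ** (q ** nu)) for nu in range(200))

# -int_0^eps phi(x)/(x ln x) dx, via a midpoint rule in the variable u = ln x
# (the lower limit is truncated at x = 1e-12; the remaining tail is negligible here)
N = 200_000
a, b = math.log(1e-12), math.log(eps)
h = (b - a) / N
integral = -sum(phi(math.exp(a + (i + 0.5) * h)) / (a + (i + 0.5) * h) * h for i in range(N))

print(f"sum_nu phi(eps^(q^nu))  = {series:.6f}")
print(f"-int_0^eps phi/(x ln x) = {integral:.6f}")
# The two quantities agree up to a constant factor depending on q, as in the proof.
```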
### The estimate on ${N_ + }$
****Lemma** 5** (Lemma 3.5 in [@DL]). *There is a constant $c_3>0$ such that the following hold: $$\notag
\left| {{\xi _ + } - \xi } \right| \leqslant {c_3}\mu ,\;\;\left| {{e_ + } - e} \right| \leqslant {c_3}{s^4}\mu ,\;\;{\left\| {{{\bar h }_ + } - \bar h } \right\|_{D\left( s \right)}} \leqslant {c_3}{s^4}\mu .$$*
### The estimate on ${\Phi _ + }$
****Lemma** 6** (Lemma 3.6 in [@DL]). *There is a constant $c_4>0$ such that for all $\left| i \right| + \left| j \right| \leqslant 4$, $$\notag
{\left\| {\partial _x^i\partial _y^jF} \right\|_{\hat D}} \leqslant {c_4}\gamma _0^{4}{s^{4 - \left| i \right|}}\mu \Gamma \left( {r - {r_ + }} \right).$$*
****Lemma** 7** (Lemma 3.7 in [@DL]). *Assume that $${c_4}{s^{ 3}}\mu \Gamma \left( {r - {r_ + }} \right) < \left( {r - {r_ + }} \right)/8,\;\;{c_4}{s^4}\mu \Gamma \left( {r - {r_ + }} \right) < \alpha s/8.$$ Then for all $0 \leqslant t \leqslant 1$, the maps $\phi _F^t:{D_{\alpha /4}} \to {D_{\alpha /2}}$ and $\phi :\mathcal{O} \to {\mathcal{O}_ + }$ are well defined, and ${\Phi _ + }:{D_ + } \to D\left( {s,r} \right)$. Moreover, there is a constant $c_5>0$ such that $$\begin{aligned}
{\left\| {\phi _F^t - \rm{id}} \right\|_{\tilde D}},\;{\left\| {D\phi _F^t - \rm{Id}} \right\|_{\tilde D}},\;{\left\| {{D^2}\phi _F^t} \right\|_{\tilde D}},&\\
{\left\| {{\Phi _ + } - \rm{id}} \right\|_{\tilde D}},\;{\left\| {D{\Phi _ + } - \rm{Id}} \right\|_{\tilde D}},\;{\left\| {{D^2}{\Phi _ + }} \right\|_{\tilde D}}& \leqslant {c_5}\mu \Gamma \left( {r - {r_ + }} \right).
\end{aligned}$$*
### The estimate on ${P _ + }$
****Lemma** 8** (Lemma 3.8 in [@DL]). *Assume that the previous assumptions hold. Then there is a constant $c_6>0$ such that $$\notag
{\left\| {{P_ + }} \right\|_{{D_ + }}} \leqslant {c_6}\gamma _0^{5}{s^4}{\mu ^2}\left( {{\Gamma ^2}\left( {r - {r_ + }} \right) + \Gamma \left( {r - {r_ + }} \right)} \right).$$ Moreover, if $$\notag
{\mu ^\rho }\left( {{\Gamma ^2}\left( {r - {r_ + }} \right) + \Gamma \left( {r - {r_ + }} \right)} \right) \leqslant 1,$$ then $$\notag
{\left\| {{P_ + }} \right\|_{{D_ + }}} \leqslant {c_6}\gamma _0^{5}s_ + ^4{\mu _ + }.$$*
# Proof of the main KAM results {#FPKAMsec-5}
## Proof of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} {#FPKAMProofT1}
### The iteration lemma
In this section, we give an iteration lemma which guarantees the inductive construction of the transformations in all the KAM steps.
Let ${r_0},{s_0},{\gamma _0},{\mu _0},{H_0},{e_0},{{\bar h}_0},{P_0}$ be given at the beginning of Section [3](#FPKAMsec-4){reference-type="ref" reference="FPKAMsec-4"}, and set ${D_0} = D\left( {{s_0},{r_0}} \right),{K_0} = 0,{\Phi _0} = \rm{id}$. We define the following sequence inductively for all $\nu \geqslant 1$: $$\begin{aligned}
{r_\nu } ={}& {r_0}\big( {1 - \sum\limits_{i = 1}^\nu {{2^{ - i - 1}}} } \big),\notag \\
{s_\nu } ={}& {\alpha _{\nu - 1}}{s_{\nu - 1}}/8,\notag \\
{\alpha _\nu } ={}& \mu _\nu ^{2\rho } = \mu _\nu ^{1/5},\notag \\
{\mu _\nu } ={}& {8^4}{c_0}\mu _{\nu - 1}^{1 + \rho },\notag \\
{K_\nu } ={}& {\left( {\left[ { - \ln {\mu _{\nu - 1}}} \right] + 1} \right)^{3\eta }},\notag \\
{{\tilde D}_\nu } ={}& D\left( {{s_\nu }/2,{r_\nu } + 6\left( {{r_{\nu - 1}} - {r_\nu }} \right)/8} \right).\notag\end{aligned}$$
****Lemma** 9** (Lemma 4.1 in [@DL]). *Denote ${\mu _ * } = {\mu _0}/\big( {{{\left( {{M^ * } + 2} \right)}^{3}}K_1^{5\left( {\tau + 1} \right)}} \big)$. If $\varepsilon >0$ is sufficiently small, then the KAM step described as above is valid for all $\nu \geqslant 0$, resulting in sequences ${H_\nu },{N_\nu },{e_\nu },{{\bar h}_\nu },{P_\nu },{\Phi _\nu }$ for $\nu \geqslant 1$ with the following estimates: $$\begin{aligned}
{}&\left| {{e_{\nu + 1}} - {e_\nu }} \right|,\;{\left\| {{{\bar h}_{\nu + 1}} - {{\bar h}_\nu }} \right\|_{D\left( {{s_\nu }} \right)}},\;{\left\| {{P_\nu }} \right\|_{D\left( {{s_\nu },{r_\nu }} \right)}},\;\left| {{\xi _{\nu + 1}} - {\xi _\nu }} \right| \leqslant \mu _ * ^{1/2}{2^{ - \nu }},\\
{}&\left| {{e_\nu } - {e_0}} \right|,\;{\left\| {{{\bar h}_\nu } - {{\bar h}_0}} \right\|_{D\left( {{s_\nu }} \right)}} \leqslant 2\mu _ * ^{1/2}.
\end{aligned}$$ In addition, ${\Phi _{\nu + 1}}:{{\tilde D}_{\nu + 1}} \to {{\tilde D}_\nu }$ is symplectic, and $$\label{FPKAMdafai}
{\left\| {{\Phi _{\nu + 1}} - {\rm{id}}} \right\|_{{{\tilde D}_{\nu + 1}}}} \leqslant \mu _ * ^{1/2}{2^{ - \nu }}.$$ Moreover, it holds on ${D_{\nu + 1}}$ that $${H_{\nu + 1}} = {H_\nu } \circ {\Phi _{\nu + 1}} = {N_{\nu + 1}} + {P_{\nu + 1}}.$$*
### Convergence
The convergence is standard in KAM theory. For the sake of completeness, we briefly give the framework of the proof. Let $${\Psi ^\nu }: = {\Phi _1} \circ {\Phi _2} \circ \cdots \circ {\Phi _\nu },\;\;\nu \geqslant 1.$$ Then by Lemma [**Lemma** 9](#Iteration lemma){reference-type="ref" reference="Iteration lemma"}, we have $${D_{\nu + 1}} \subset {D_\nu },\;\;{\Psi ^\nu }:{{\tilde D}_\nu } \to {{\tilde D}_0},\;\;{H_0} \circ {\Psi ^\nu } = {H_\nu } = {N_\nu } + {P_\nu }$$ and $$\label{FPKAMN+}
{N_\nu } = {e_\nu } + \Big\langle {\omega \left( {{\xi _\nu }} \right) + \sum\limits_{j = 0}^{\nu - 1} {p_{01}^j\left( {{\xi _\nu }} \right)} ,y} \Big\rangle + {{\bar h}_\nu }\left( {y,{\xi _\nu }} \right),\;\;\nu \geqslant 0,$$ where ${\Psi ^0} = {\rm{id}}$. Using [\[FPKAMdafai\]](#FPKAMdafai){reference-type="eqref" reference="FPKAMdafai"} and the identity $${\Psi ^\nu } = {\rm{id}} + \sum\limits_{j = 1}^\nu {\left( {{\Psi ^j} - {\Psi ^{j - 1}}} \right)} ,$$ we can verify that ${\Psi ^\nu }$ is uniformly convergent and denote the limit by ${\Psi ^\infty }$.
In view of Lemma [**Lemma** 9](#Iteration lemma){reference-type="ref" reference="Iteration lemma"} and Lemma [**Lemma** 4](#FPKAMcrucial){reference-type="ref" reference="FPKAMcrucial"}, it is clear that ${e_\nu },{{\bar h}_\nu }$ and ${\xi _\nu }$ converge uniformly as $\nu \to +\infty$, and we denote their limits by ${e_\infty },{{\bar h}_\infty },\xi _\infty:=\xi^*$, respectively. In addition, Lemma [**Lemma** 4](#FPKAMcrucial){reference-type="ref" reference="FPKAMcrucial"} gives the following frequency-preserving equations: $$\begin{aligned}
{}&\omega \left( {{\xi _1}} \right) + p_{01}^0\left( {{\xi _1}} \right) = \omega \left( {{\xi _0}} \right),\notag \\
{}&\omega \left( {{\xi _2}} \right) + p_{01}^0\left( {{\xi _2}} \right) + p_{01}^1\left( {{\xi _2}} \right) = \omega \left( {{\xi _0}} \right),\notag \\
{}&\vdots\notag \\
\label{FPKAMpinlvlie}{}&\omega \left( {{\xi _\nu }} \right) + p_{01}^0\left( {{\xi _\nu }} \right) + \cdots + p_{01}^{\nu - 1}\left( {{\xi _\nu }} \right) = \omega \left( {{\xi _0}} \right).\end{aligned}$$ Using the Cauchy property of $\{\xi_\nu\}_{\nu \in \mathbb{N}^+}$ and taking limits at both sides of [\[FPKAMpinlvlie\]](#FPKAMpinlvlie){reference-type="eqref" reference="FPKAMpinlvlie"}, we get $$\omega \left( {{\xi _\infty }} \right) + \sum\limits_{j = 0}^\infty {p_{01}^j\left( {{\xi _\infty }} \right)} = \omega \left( {{\xi _0}} \right).$$ Then on $D\left( {{s_0}/2} \right)$, we conclude from [\[FPKAMN+\]](#FPKAMN+){reference-type="eqref" reference="FPKAMN+"} that $N_\nu$ converges uniformly to $${N_\infty } = {e_\infty } + \left\langle {\omega \left( {{\xi _0}} \right),y} \right\rangle + {{\bar h}_\infty }\left( {y,{\xi _\infty }} \right).$$ Hence, ${P_\nu } = {H_0} \circ {\Psi ^\nu } - {N_\nu }$ converges uniformly to ${P_\infty } = {H_0} \circ {\Psi ^\infty } - {N_\infty }$ on $D\left( {{s_0}/2,{r_0}/2} \right)$. Since ${\left\| {{P_\nu }} \right\|_{{D_\nu }}} \leqslant c\gamma _0^{5}s_\nu ^4{\mu _\nu }$, we have that $P_\nu$ converges to $0$ as $\nu \to \infty$, and $J\nabla {P_\infty } = 0$ on $D\left( {0,{r_0}/2} \right)$. Thus, for ${\xi _0} \in \mathcal{O}^o$ given in advance (see (H1)), the Hamiltonian ${H_\infty } = {N_\infty } + {P_\infty }$ admits an analytic, quasi-periodic, invariant $n$-torus $\left\{ 0 \right\}\times {\mathbb{T}^n}$ with the prescribed Diophantine frequency $\omega \left( {{\xi _0}} \right)$, which completes the proof of Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}.
## Proof of Corollary [**Corollary** 1](#FPKAMCORO1){reference-type="ref" reference="FPKAMCORO1"} {#proof-of-corollary-fpkamcoro1}
In view of the comments below (H3), we deduce from (H2) that the function $\varphi(\delta) = \mathcal{O}^\#(\delta^{1/\alpha})$ as $\delta\to 0^+$ without loss of generality. Then (H3) automatically holds for any $0<\tau\leqslant1/2$: $$- \int_0^\tau {\frac{{\varphi (x)}}{{x\ln x}}dx} \lesssim - \int_0^\tau {\frac{1}{{{x^{1 - 1/\alpha }}\ln x}}dx} \lesssim \int_0^\tau {\frac{1}{{{x^{1 - 1/\alpha }}}}dx} < + \infty.$$ Therefore, by applying Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"} we prove this corollary directly.
## Proof of Corollary [**Corollary** 2](#FPKAMCORO2){reference-type="ref" reference="FPKAMCORO2"} {#proof-of-corollary-fpkamcoro2}
Note that for a non-degenerate linear frequency mapping $\omega(\xi)$ defined on the parameter set $\mathcal{O}\subset \mathbb{R}^n$, almost all frequencies in $(\omega(\mathcal{O}))^o$ are $\tau$-Diophantine with $\tau>\max\{n-1,1\}$, i.e., satisfy [\[FPKAMDIO\]](#FPKAMDIO){reference-type="eqref" reference="FPKAMDIO"}, and the corresponding parameters belong to $\mathcal{O}^o$. Therefore, recalling Comment (C2) and the Lipschitz regularity of the perturbation, we prove the desired conclusion by directly applying Theorem [**Theorem** 1](#FPKAMT1){reference-type="ref" reference="FPKAMT1"}.
# Acknowledgements {#acknowledgements .unnumbered}
This work was supported in part by the National Basic Research Program of China (Grant No. 2013CB834100), National Natural Science Foundation of China (Grant Nos. 12071175, 11171132 and 11571065), Project of Science and Technology Development of Jilin Province (Grant Nos. 2017C028-1 and 20190201302JC) and Natural Science Foundation of Jilin Province (Grant No. 20200201253JC).
J. Albrecht, On the existence of invariant tori in nearly-integrable Hamiltonian systems with finitely differentiable perturbations. Regul. Chaotic Dyn., 12 (2007), 281--320. <https://doi.org/10.1134/S1560354707030033>
V. Arnold, Small denominators. I. Mapping the circle onto itself. Izv. Akad. Nauk SSSR Ser. Mat. 25 (1961), 21--86.
V. Arnold, Proof of a theorem of A. N. Kolmogorov on the preservation of conditionally periodic motions under a small perturbation of the Hamiltonian. Uspehi Mat. Nauk 18 (1963), no. 5 (113), 13--40.
V. Arnold, Small denominators and problems of stability of motion in classical and celestial mechanics. Uspehi Mat. Nauk 18 (1963), no. 6 (114), 91--192.
D. Bambusi, B. Grébert, Birkhoff normal form for partial differential equations with tame modulus. Duke Math. J. 135 (2006), no. 3, 507--567. <https://doi.org/10.1215/S0012-7094-06-13534-2>
M. Berti, L. Biasco, Branching of Cantor manifolds of elliptic tori and applications to PDEs. (English summary) Comm. Math. Phys. 305 (2011), no. 3, 741--796. <https://doi.org/10.1007/s00220-011-1264-3>
M. Berti, L. Biasco, M. Procesi, KAM theory for the Hamiltonian derivative wave equation. Ann. Sci. Éc. Norm. Supér. (4) 46 (2013), no. 2, 301--373 (2013). <https://doi.org/10.24033/asens.2190>
A. Bounemoura, Some remarks on the optimality of the Bruno-Rüssmann condition. Bull. Soc. Math. France 147 (2019), no. 2, 341--353. <https://doi.org/10.24033/bsmf.2784>
A. Bounemoura, Optimal linearization of vector fields on the torus in non-analytic Gevrey classes. Ann. Inst. H. Poincaré C Anal. Non Linéaire 39 (2022), no. 3, 501--528. <https://doi.org/10.4171/aihpc/12>
A. Bounemoura, J. Féjoz, KAM, $\alpha$-Gevrey regularity and the $\alpha$-Bruno-Rüssmann condition. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 19 (2019), no. 4, 1225--1279.
J. Bourgain, Construction of quasi-periodic solutions for Hamiltonian perturbations of linear equations and applications to nonlinear PDE. Int. Math. Res. Not. 11 (1994), 475--497. <https://doi.org/10.1155/S1073792894000516>
J. Bourgain, Construction of periodic solutions of nonlinear wave equations in higher dimension. Geom. Funct. Anal. 5 (1995), no. 4, 629--639. <https://doi.org/10.1007/BF01902055>
C. Cheng, Y. Sun, Existence of KAM tori in degenerate Hamiltonian systems. J. Differential Equations 114 (1994), no. 1, 288--335. <https://doi.org/10.1006/jdeq.1994.1152>
Q. Chen, C. Cheng, Gevrey genericity of Arnold diffusion in a priori unstable Hamiltonian systems. Nonlinearity 34 (2021), no. 1, 455--508. <https://doi.org/10.1088/1361-6544/abb44f>
L. Chierchia, M. Procesi, Kolmogorov-Arnold-Moser (KAM) theory for finite and infinite dimensional systems. Perturbation theory---mathematics, methods and applications, 247--289, Encycl. Complex. Syst. Sci., Springer, New York, 2022. <https://doi.org/10.1007/978-1-0716-2621-4_302>
S. Chow, Y. Li, Y. Yi, Persistence of invariant tori on submanifolds in Hamiltonian systems. J. Nonlinear Sci. 12 (2002), no. 6, 585--617. <https://doi.org/10.1007/s00332-002-0509-x>
J. Du, Y. Li, H. Zhang, Kolmogorov's theorem for degenerate Hamiltonian systems with continuous parameters. <https://doi.org/10.48550/arXiv.2206.05461>
L. Eliasson, Perturbations of stable invariant tori for Hamiltonian systems. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 15 (1988), no. 1, 115--147 (1989). <http://www.numdam.org/item?id=ASNSP_1988_4_15_1_115_0>
L. Eliasson, B. Fayad, R. Krikorian, Around the stability of KAM tori. Duke Math. J. 164 (2015), no. 9, 1733--1775. <https://doi.org/10.1215/00127094-3120060>
B. Grébert, L. Thomann, KAM for the quantum harmonic oscillator. Comm. Math. Phys. 307 (2011), no. 2, 383--427. <https://doi.org/10.1007/s00220-011-1327-5>
H. Hanßmann, J. Si, Quasi-periodic solutions and stability of the equilibrium for quasi-periodically forced planar reversible and Hamiltonian systems under the Bruno condition. Nonlinearity 23 (2010), no. 3, 555--577. <https://doi.org/10.1088/0951-7715/23/3/007>
M. Herman, Sur la conjugaison différentiable des difféomorphismes du cercle à des rotations. Inst. Hautes Études Sci. Publ. Math. No. 49 (1979), 5--233. <http://www.numdam.org/item?id=PMIHES_1979__49__5_0>
B. Khesin, S. Kuksin, D. Peralta-Salas, KAM theory and the 3D Euler equation. Adv. Math. 267 (2014), 498--522. <https://doi.org/10.1016/j.aim.2014.09.009>
A. Kolmogorov, On conservation of conditionally periodic motions for a small change in Hamilton's function. Dokl. Akad. Nauk SSSR (N.S.) 98, (1954), 527--530.
C. Koudjinan, A KAM theorem for finitely differentiable Hamiltonian systems. J. Differential Equations 269 (2020), no. 6, 4720--4750. <https://doi.org/10.1016/j.jde.2020.03.044>
S. Kuksin, Hamiltonian perturbations of infinite-dimensional linear systems with imaginary spectrum. Funktsional. Anal. i Prilozhen. 21 (1987), no. 3, 22--37.
S. Kuksin, J. Pöschel, Invariant Cantor manifolds of quasi-periodic oscillations for a nonlinear Schrödinger equation. Ann. of Math. (2) 143 (1996), no. 1, 149--179. <https://doi.org/10.2307/2118656>
Y. Li, Y. Yi, Persistence of invariant tori in generalized Hamiltonian systems. Ergodic Theory Dynam. Systems 22 (2002), no. 4, 1233--1261. <https://doi.org/10.1017/S0143385702000743>
Y. Li, Y. Yi, A quasi-periodic Poincaré's theorem. Math. Ann. 326 (2003), 649--690. <https://doi.org/10.1007/s00208-002-0399-0>
X. Li, Z. Shang, On the existence of invariant tori in non-conservative dynamical systems with degeneracy and finite differentiability. Discrete Contin. Dyn. Syst. 39 (2019), no. 7, 4225--4257. <https://doi.org/10.3934/dcds.2019171>
C. Liu, Z. Tong, Y. Li, Moser's theorem with frequency-preserving. Proc. Roy. Soc. Edinburgh Sect. A (2023). <https://doi.org/10.1017/prm.2023.74>
J. Liu, X. Yuan, A KAM theorem for Hamiltonian partial differential equations with unbounded perturbations. Comm. Math. Phys. 307 (2011), no. 3, 629--673. <https://doi.org/10.1007/s00220-011-1353-3>
J. Lopes Dias, J. Gaivão, Linearization of Gevrey flows on $\Bbb T^d$ with a Brjuno type arithmetical condition. J. Differential Equations 267 (2019), no. 12, 7167--7212. <https://doi.org/10.1016/j.jde.2019.07.020>
R. Montalto, M. Procesi, Linear Schrödinger equation with an almost periodic potential. SIAM J. Math. Anal. 53 (2021), no. 1, 386--434. <https://doi.org/10.1137/20M1320742>
J. Moser, On invariant curves of area-preserving mappings of an annulus. Nachr. Akad. Wiss. Göttingen Math.-Phys. Kl. II 1962 (1962), 1--20.
J. Moser, A rapidly convergent iteration method and non-linear partial differential equations. I. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3) 20 (1966), 265--315.
J. Moser, A rapidly convergent iteration method and non-linear differential equations. II. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3) 20 (1966), 499--535.
G. Popov, KAM theorem for Gevrey Hamiltonians. Ergodic Theory Dynam. Systems 24 (2004), no. 5, 1753--1786. <https://doi.org/10.1017/S0143385704000458>
J. Pöschel, Integrability of Hamiltonian systems on Cantor sets. Comm. Pure Appl. Math. 35 (1982), no. 5, 653--696. <https://doi.org/10.1002/cpa.3160350504>
J. Pöschel, On elliptic lower-dimensional tori in Hamiltonian systems. Math. Z. 202 (1989), no. 4, 559--608. <https://doi.org/10.1007/BF01221590>
J. Pöschel, Small divisors with spatial structure in infinite-dimensional Hamiltonian systems. Comm. Math. Phys. 127 (1990), no. 2, 351--393. <http://projecteuclid.org/euclid.cmp/1104180143>
H. Rüssmann, Invariant tori in non-degenerate nearly integrable Hamiltonian systems. Regul. Chaotic Dyn. 6 (2001), no. 2, 119--204. <https://doi.org/10.1070/RD2001v006n02ABEH000169>
D. Salamon, The Kolmogorov-Arnold-Moser theorem. Math. Phys. Electron. J. 10 (2004), Paper 3, 37 pp.
M. Sevryuk, KAM-stable Hamiltonians. J. Dynam. Control Systems. 1 (1995), no. 3, 351--366. <https://doi.org/10.1007/BF02269374>
M. Sevryuk, The classical KAM theory at the dawn of the twenty-first century. Mosc. Math. J. 3 (2003), no. 3, 1113--1144, 1201--1202. <https://doi.org/10.17323/1609-4514-2003-3-3-1113-1144>
M. Sevryuk, Partial preservation of frequencies in KAM theory. Nonlinearity 19 (2006), no. 5, 1099--1140. <https://doi.org/10.1088/0951-7715/19/5/005>
M. Sevryuk, Partial preservation of frequencies and Floquet exponents in KAM theory. Tr. Mat. Inst. Steklova 259 (2007), Anal. i Osob. Ch. 2, 174--202; translation in Proc. Steklov Inst. Math. 259 (2007), 167--195. <https://doi.org/10.1134/S0081543807040128>
M. Sevryuk, Partial preservation of the frequencies and Floquet exponents of invariant tori in KAM theory reversible context 2. Sovrem. Mat. Fundam. Napravl. 63 (2017), no. 3, 516--541; translation in J. Math. Sci. (N.Y.) 253 (2021), no. 5, 730--753.
Z. Tong, J. Du, Y. Li, KAM theorem on the modulus of continuity about parameters. Sci. China Math (2023). <https://doi.org/10.1007/s11425-022-2102-5>
C. Wayne, Periodic and quasi-periodic solutions of nonlinear wave equations via KAM theory. Comm. Math. Phys. 127 (1990), no. 3, 479--528. <http://projecteuclid.org/euclid.cmp/1104180217>
J. Xu, J. You, Q. Qiu, Invariant tori for nearly integrable Hamiltonian systems with degeneracy. Math. Z. 226 (1997), no. 3, 375--387. <https://doi.org/10.1007/PL00004344>
X. Zhao, Y. Li, Iso-manifold KAM persistence. J. Differential Equations 310 (2022), 484--505. <https://doi.org/10.1016/j.jde.2021.10.059>
---
author:
-
-
bibliography:
- sn-bibliography.bib
title: Goldstein Stationarity in Lipschitz Constrained Optimization
---
# Introduction
This work considers a quite general family of constrained optimization problems where both the objective and constraint functions may be nonconvex and nonsmooth. The only structure assumed is Lipschitz continuity of these functions. Specifically, we consider problems of the following form $$p_\star = \begin{cases}
\min_{x \in \mathbb{R}^n} \quad &f(x) \\
\mathrm{s.t.} \quad & g_i(x) \leq 0, \qquad i=1,...,m
\label{originalmainproblem}
\end{cases}$$ with objective $f:\mathbb{R}^n \to \mathbb{R}$ and constraints $g_i:\mathbb{R}^n \to \mathbb{R}$. We assume these functions are $M$-Lipschitz continuous on a neighborhood of the feasible region $\{x \mid g_i(x)\leq 0\}$, but do not assume convexity or differentiability. We propose an iterative subgradient-type method that computes only one gradient (or subgradient-like vector) of one of these functions at each iteration. For now, we leave general the exact notion of our first-order oracle $\mathcal{G}_h(x)$ and denote the associated subdifferential by $\partial h(x)$. Throughout, this will either be the Clarke subdifferential or a certain nonstandard directional subdifferential.
For unconstrained minimization (i.e., $m=0$), in 1977, Goldstein [@goldstein1977optimization] proposed an idealized algorithm achieving descent by repeatedly moving in minimal norm subgradient directions, eventually reaching a related notion of approximate stationarity. Mahdavi-Amiri and Yousefpour [@mahdavi2012effective] revisited this idea, giving a more practical minimization algorithm approximating these descent directions. The groundbreaking work of Zhang et al. [@zhang2020complexity] showed that assuming only Lipschitz continuity is sufficient for provable convergence guarantees towards stationarity. Subsequently, Davis et al. [@davis2022gradient] and Kong and Lewis [@kong2022cost] built on this using the more standard Clarke subgradient oracle and avoiding the use of randomness, respectively. A key insight enabling these works to prove guarantees for such generic problems is considering a weakened notion of stationarity, seeking points where a convex combination of gradients computed nearby is small. Formally, we say $x$ is an $(\epsilon,\delta)$-Goldstein stationary point of $h$ if $\mathrm{dist}(0,\partial_\delta h(x))\leq \epsilon$ where $\partial_\delta h(x)$ denotes the Goldstein subdifferential defined as $$\partial_\delta h(x) = \mathrm{conv}\left\{ \bigcup_{y\in B(x,\delta)} \partial h(y)\right\}$$ where $B(x,\delta)$ is the closed ball of radius $\delta$ around $x$ and $\mathrm{conv}()$ denotes the convex hull. This condition is a natural relaxation of the optimality condition $0\in\partial h(x)$.
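To build intuition for this definition, the following toy one-dimensional sketch (our own illustration, not taken from the works above) estimates $\mathrm{dist}(0,\partial_\delta h(x))$ for $h(x)=|x|$ by sampling gradients in $B(x,\delta)$; in one dimension the convex hull of the sampled gradients is simply the interval between their minimum and maximum.

```python
import random

def sampled_goldstein_gap(grad, x, delta, n_samples=200, rng=random.Random(0)):
    # Convex hull of sampled 1-D gradients is the interval [min, max];
    # its minimal-norm element is 0 if the interval straddles 0,
    # otherwise the endpoint closest to 0.
    gs = [grad(x + rng.uniform(-delta, delta)) for _ in range(n_samples)]
    lo, hi = min(gs), max(gs)
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

# h(x) = |x| is nonsmooth at its minimizer: nearby gradients are +1 or -1,
# so 0 lies in the Goldstein subdifferential at x = 0 even though no single
# nearby gradient is small.
grad_abs = lambda z: 1.0 if z >= 0 else -1.0
print(sampled_goldstein_gap(grad_abs, x=0.0, delta=0.1))  # ~0.0: (eps, delta)-stationary
print(sampled_goldstein_gap(grad_abs, x=1.0, delta=0.1))  # 1.0: far from stationary
```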
Davis et al. [@davis2022gradient] showed for any Lipschitz $h$, using $O(1/\delta\epsilon^3)$ gradient evaluations, an $(\epsilon,\delta)$-Goldstein stationary point can be found in expectation (where the randomness arises from sampling a random point for each gradient evaluation). Kong and Lewis [@kong2022cost] showed an entirely deterministic method could find such a point in $O(1/\delta\epsilon^4)$ subgradient-like evaluations (although depending on a certain nonconvexity modulus). Cutkosky et al. [@CutkoskyMO23] derived a $O(1/\delta\epsilon^3)$ rate using only stochastic gradient evaluations (beyond the scope of this work) and matching lower bounds when $M=\Omega(\sqrt{\frac{\epsilon}{\delta}})$. Both Davis et al. and Kong and Lewis use a double-loop method, only differing in their inner loop's implementation, which approximates Goldstein's descent direction.
This work generalizes these prior works to the functionally constrained setting [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"}. We show that a simple modification of previous double-loop methods can solve generic constrained problems of the form [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"}. To analyze this, we propose new Goldstein-type generalizations of classic Fritz-John, KKT, and Constraint Qualification conditions (see Definitions [\[def_gfj\]](#def_gfj){reference-type="ref" reference="def_gfj"}-[\[def_gcq\]](#def_gcq){reference-type="ref" reference="def_gcq"}). Our Theorems [Theorem 1](#theorem_FJ){reference-type="ref" reference="theorem_FJ"} and [\[theorem_KKT\]](#theorem_KKT){reference-type="ref" reference="theorem_KKT"} show that using the same inner-loop as Davis et al. [@davis2022gradient], an approximate Goldstein Fritz John or KKT point is reached at the same rate, depending on whether constraint qualification holds, in expectation. Similarly, using the inner loop of Kong and Lewis [@kong2022cost] extends their deterministic rate to functional constraints. Associated dual multipliers can be extracted from our method, so stopping criteria and certificates of approximate stationarity follow immediately.
## Related Works and a Sketch of our Algorithm
We first review classic results on Lagrangian optimality conditions and provably good methods for weakly convex constrained problems and for Lipschitz unconstrained problems. Subsequently, Section [2](#section_algorithm&convergence){reference-type="ref" reference="section_algorithm&convergence"} formalizes all the necessary definitions and our algorithm. Section [3](#section_analysis){reference-type="ref" reference="section_analysis"} then states and proves our convergence theorems.
The Lagrangian of [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"} for any nonnegative Lagrange multipliers $\lambda\in\mathbb{R}^m$ is denoted by $L(x, \lambda)=f(x)+ \sum_{i=1}^m \lambda_i g_i(x)$. We say a feasible point $x^\star\in\mathbb{R}^n$ is KKT stationary if there exist such multipliers where $$\label{eq:KKT}
\begin{cases} \partial f(x^\star) + \sum_{i=1}^m\lambda_i \partial g_i(x^\star) \ni 0 \ ,\\
\lambda_i g_i(x^\star)=0, \quad \forall i=1\dots m \ . \end{cases}$$ Generally, this condition is not necessary for $x^\star$ to be a minimizer of [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"}. Instead, one can only ensure every minimizer $x^\star$ is Fritz-John (FJ) stationary, meaning that there exist nonnegative multipliers $\gamma_0,\dots \gamma_m$, not all zero, such that $$\label{eq:FJ}
\begin{cases}\gamma_0\partial f(x^\star) + \sum_{i=1}^m\gamma_i \partial g_i(x^\star) \ni 0 \ ,\\
\gamma_i g_i(x^\star)=0, \quad \forall i=1\dots m \ . \end{cases}$$ The above FJ condition implies the KKT condition whenever the "constraint qualification" (CQ) condition $0\not\in \partial \max\{g_i(x) \mid g_i(x^*)=0 \}(x^\star)$ holds at $x^\star$[^1]. Hence, when CQ holds, KKT is a necessary optimality condition. Our theory gives guarantees on the approximate attainment of FJ or KKT conditions, depending on whether a slightly strengthened constraint qualification condition holds.
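As a standard illustration of the gap between these conditions (our own example, not drawn from the cited works), consider minimizing $f(x)=x$ subject to the single constraint $g_1(x)=x^2\leq 0$. The only feasible point is the minimizer $x^\star=0$, where $\nabla f(0)=1$ and $\nabla g_1(0)=0$. No multiplier $\lambda_1\geq 0$ can satisfy $1+\lambda_1\cdot 0=0$, so the KKT condition fails, while FJ holds with $(\gamma_0,\gamma_1)=(0,1)$; consistently, CQ fails at $x^\star$ since $0\in\partial g_1(x^\star)$.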
Previous works [@ma2020quadratically; @boob2023stochastic; @jia2022first] have addressed nonsmooth nonconvex constrained problems under the additional assumption that $f,g_0,\dots,g_m$ are all $\rho$-weakly convex. A function $h$ is $\rho$-weakly convex if $h + \frac{\rho}{2}\|\cdot\|^2$ is convex. Under this additional condition, one can achieve an objective decrease on [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"} by solving the following strongly convex proximal subproblem $$\begin{aligned}
\label{eq:proximal-approach}
\begin{cases}
\min_{x \in \mathbb{R}^n} \quad &f(x)+\frac{\hat{\rho}}{2}\|x-x_k\|^2 \\
\mathrm{s.t.} \quad &g_i(x)+\frac{\hat{\rho}}{2}\|x-x_k\|^2 \leq 0, \qquad i=1,...,m
\end{cases}\end{aligned}$$ for any $\hat\rho > \rho$. The works [@ma2020quadratically; @boob2023stochastic; @jia2022first] all considered double-loop methods where the inner loop approximately solves this subproblem to produce the next iterate of the outer loop. In all these works, convergence rates on the order of $O(1/\epsilon^4)$ were proven towards approximate KKT stationarity holding near the iterates. Additionally, [@jia2022first] showed that without constraint qualification, approximate FJ stationarity is still reached at the same rate. A direct, single-loop approach was recently developed by [@huang2023oracle], attaining the same rates via a much simpler iteration.
Inspired by Goldstein [@goldstein1977optimization], previous works [@mahdavi2012effective; @zhang2020complexity; @davis2022gradient; @kong2022cost; @CutkoskyMO23] have proposed their algorithms for solving nonsmooth nonconvex unconstrained problems under Lipschitz continuity. These works considered double-loop methods, where the inner loop computes an approximate minimal norm element of the Goldstein subdifferential, either being close to zero or producing at least some descent on the objective function. These descent directions are then followed in an outer loop until a small Goldstein subgradient is found.
Without weak convexity, the proximal subproblem [\[eq:proximal-approach\]](#eq:proximal-approach){reference-type="eqref" reference="eq:proximal-approach"} becomes intractable. Instead, at each outer iteration $k$, we consider the unconstrained Lipschitz subproblem of minimizing $h_{x_k}(x) := \max\{f(x)-f(x_k), g_i(x)\}$. Using a subgradient norm minimizing inner loop, we either identify a descent on $h_{x_k}$ from $x_k$, yielding a feasible descent for [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"}, or certify that $x_k$ is approximately Goldstein stationary to $h_{x_k}$, yielding approximate FJ or KKT stationarity for [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"}.
# Goldstein KKT Stationarity and our Algorithm {#section_algorithm&convergence}
For a fixed notion of subdifferential $\partial h(x)$ (two such models are formalized in Sections [2.2](#subsec:Damek){reference-type="ref" reference="subsec:Damek"} and [2.3](#subsec:Lewis){reference-type="ref" reference="subsec:Lewis"}), we define Goldstein-type measures of approximate notions of Fritz-John [\[eq:FJ\]](#eq:FJ){reference-type="eqref" reference="eq:FJ"} and KKT stationarities [\[eq:KKT\]](#eq:KKT){reference-type="eqref" reference="eq:KKT"} for the constrained problem [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"}.
**Definition 1**. *We say a feasible point $x$ is $(\delta,\epsilon,\eta)$-Goldstein Fritz-John (GFJ) stationary for problem [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"} if there exists $(\gamma_0,\gamma)\geq 0$ with $\gamma_0+\sum_{i=1}^m\gamma_i=1$, such that $\mathrm{dist}(0, \gamma_0\partial_\delta f(x)+\sum_{i=1}^m\gamma_i\partial_\delta g_i(x)) \leq \epsilon$ and $\max_{y \in B(x,\delta)}|\gamma_ig_i(y)| \leq \eta, \forall i=1,...,m$. [\[def_gfj\]]{#def_gfj label="def_gfj"}*
**Definition 2**. *We say a feasible point $x$ is $(\delta,\epsilon,\eta)$-Goldstein KKT (GKKT) stationary for problem [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"} if there exists $\lambda\geq 0$ such that $\mathrm{dist}(0, \partial_\delta f(x)+\sum_{i=1}^m\lambda_i\partial_\delta g_i(x)) \leq \epsilon$ and $\max_{y \in B(x,\delta)}|\lambda_ig_i(y)| \leq \eta, \forall i=1,...,m$. [\[def_gkkt\]]{#def_gkkt label="def_gkkt"}*
GFJ stationarity implies GKKT stationarity when $\gamma_0>0$ in Definition [\[def_gfj\]](#def_gfj){reference-type="ref" reference="def_gfj"}. A larger value of $\gamma_0$ leads to stronger guarantees on approximate GKKT stationarity. In particular, GFJ stationarity is equivalent to GKKT stationarity if $\gamma_0=1$. Constraint Qualification (CQ) classically provides a means to bound $\gamma_0$ away from zero. Below, we generalize CQ to use Goldstein subdifferentials.
**Definition 3**. *We say a feasible point $x$ satisfies $(a, b, c)$-Goldstein Constraint Qualification (GCQ) if all $\gamma \geq 0$ with $\sum_{i=1}^m\gamma_i=1$ and $\gamma_i>0$ only if $g_i(x) \geq -c$ have $\mathrm{dist}(0, \sum_{i=1}^m\gamma_i\partial_ag_{i}(x)) \geq b$. [\[def_gcq\]]{#def_gcq label="def_gcq"}*
## The Proposed Constrained Goldstein Subgradient Method
We make the following pair of modeling assumptions throughout.
**Assumption 1**. *The optimal objective value $p_\star$ is finite and for some $\Delta>0$, $f$ and each $g_i$ are $M$-Lipschitz continuous within distance $\Delta$ of $\{x \mid g_i(x)\leq 0\}$.[\[assumption1\]]{#assumption1 label="assumption1"}*
**Assumption 2**. *An initial feasible point $x_0$ is known (i.e. $g(x_0) \leq 0$).[\[assumption2\]]{#assumption2 label="assumption2"}*
Without loss of generality, the $m$ constraints of [\[originalmainproblem\]](#originalmainproblem){reference-type="eqref" reference="originalmainproblem"} can be combined into one: $$p_\star = \begin{cases}
\min_{x \in \mathbb{R}^n} \quad &f(x) \\
\mathrm{s.t.} \quad &g(x):=\max_{i=1,...,m}g_i(x) \leq 0 \ .
\label{mainproblem}
\end{cases}$$ Our proposed subgradient method then maintains a feasible solution $x_k$ and proceeds by, in an inner loop, computing a descent direction for $$\begin{aligned}
\label{eq:subproblem}
\min h_{x_k}(z) := \max\{f(z)-f(x_k), g(z)\} \ ,\end{aligned}$$ and then moving in this direction a fixed distance $\delta<\Delta$ in an outer loop. To facilitate this, we require some subgradient-type oracle $\mathcal{G}$ be provided for $f$, $g$, and $h_x$. The two particular computational models we consider here are formalized in Assumptions [\[assumption3a\]](#assumption3a){reference-type="ref" reference="assumption3a"} and [\[assumption3b\]](#assumption3b){reference-type="ref" reference="assumption3b"}, with $\mathcal{G}$ as a gradient almost everywhere or as a specialized directional derivative matching subgradient, respectively. In general, all we require is the following.
**Assumption 3**. *For any $x,z \in \mathbb{R}^n$, subgradient-type oracles are known with $\mathcal{G}_f(z) \in \partial f(z), \mathcal{G}_g(z) \in \partial g(z), \mathcal{G}_{h_x}(z) \in \partial h_x(z)$ satisfying $$\mathcal{G}_{h_{x}}(z) \in \begin{cases}
\{\mathcal{G}_f(z)\} & \text{ if } f(z)-f(x) > g(z)\\
\{\mathcal{G}_f(z), \mathcal{G}_g(z)\} & \text{ if } f(z)-f(x) = g(z)\\
\{ \mathcal{G}_g(z)\} & \text{ if } f(z)-f(x) < g(z)\ . \end{cases}$$ [\[assumption3\]]{#assumption3 label="assumption3"}*
We require a subroutine that outputs an approximate minimal norm element of the Goldstein subdifferential of $h_{x_k}$ at $x_k$ in each outer iteration $k$. We consider any subroutine $\mathcal{A}(x,h,\delta,\epsilon)$ that, using $t \leq T_{\mathcal{A}}(M,\epsilon,\delta,\tau)$ subgradient oracle calls, produces with probability at least $1-\tau$ a direction $\zeta\in\mathbb{R}^n$ satisfying $$\|\zeta\| \leq \epsilon \qquad \mathrm{or} \qquad h(x)-h(x-\delta \zeta / \|\zeta\|) \geq C\delta \epsilon \ ,$$ with $\zeta =\sum_{i=1}^t w_i \mathcal{G}_{h_x}(z_i)$ for some points $z_i \in B(x, \delta)$ and weights $w\geq 0, \sum_{i=1}^t w_i=1$, where $C \in (0, 1)$ and $\tau \in [0, 1)$ are fixed constants.
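To fix ideas, here is a minimal Python sketch of such an outer loop (all names are ours and purely illustrative; the actual method and its guarantees are those formalized in the remainder of the paper).

```python
import math

def constrained_goldstein_descent(f, g, inner, x0, delta, eps, max_outer=1000):
    """Schematic outer loop (illustrative only): keep a feasible iterate x and
    repeatedly call a black-box inner subroutine on h_x(z) = max(f(z)-f(x), g(z)).

    `inner(h, x, delta, eps)` is assumed to return a vector zeta that is either
    small (||zeta|| <= eps) or whose normalized direction gives C*delta*eps
    descent on h, as in the contract described above."""
    x = list(x0)
    for _ in range(max_outer):
        fx = f(x)
        h = lambda z, fx=fx: max(f(z) - fx, g(z))   # subproblem objective h_{x_k}
        zeta = inner(h, x, delta, eps)
        norm = math.sqrt(sum(c * c for c in zeta))
        if norm <= eps:
            return x   # x is approximately Goldstein stationary for h_x (GFJ/GKKT)
        # descent on h_x gives descent on f while keeping g <= 0 (cf. Lemma 1 below)
        x = [xi - delta * zi / norm for xi, zi in zip(x, zeta)]
    return x
```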
The evaluation of such a subroutine forms the inner loop of our method. As two examples, Davis et al. [@davis2022gradient] gave a randomized searching subroutine, which we denote as $RandSearch(x, h, \delta, \epsilon)$, and Kong and Lewis [@kong2022cost] gave a deterministic linesearching subroutine defined below as $BisecSearch(x, h, \delta, \epsilon)$. Our proposed algorithm can use any such subroutine as a black-box inner loop, formalized in Algorithm [\[ouralgorithm\]](#ouralgorithm){reference-type="ref" reference="ouralgorithm"}. The following two lemmas show finite termination and a relationship between Goldstein stationarity on $h_{x_k}$ and GFJ and GKKT stationarity (proofs deferred to Section [3](#section_analysis){reference-type="ref" reference="section_analysis"}).
**Lemma 1**. *Algorithm [\[ouralgorithm\]](#ouralgorithm){reference-type="ref" reference="ouralgorithm"} always terminates with a feasible $x_k$ with $k \leq \left\lceil\frac{f(x_0)-p_\star}{C\delta \epsilon}\right\rceil$. [\[lemma_outeriterations\]]{#lemma_outeriterations label="lemma_outeriterations"}*
**Lemma 2**. *If a subroutine $\mathcal{A}$ reports a feasible $x$ is $(\delta,\epsilon)$-Goldstein stationary to $h_x$, then $x$ is $(\delta,\epsilon,3M\delta)$-GFJ stationary for problem [\[mainproblem\]](#mainproblem){reference-type="eqref" reference="mainproblem"}. If $(\delta, \hat{\epsilon}, M\delta)$-GCQ holds at $x$ with $\hat{\epsilon}>\epsilon$, then $x$ is $(\delta, \epsilon(\hat{\epsilon}+M)/(\hat{\epsilon}-\epsilon),3M\delta(\hat{\epsilon}+M)/(\hat{\epsilon}-\epsilon))$-GKKT stationary. [\[lemma_fjtokkt\]]{#lemma_fjtokkt label="lemma_fjtokkt"}*
The Lagrange multiplier $\lambda_k$ associated with each iteration $x_k$ can be easily computed: Let $z_i$ and $w_i$ denote the points and weights associated with the construction of $\zeta_k$. By Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}, letting $I_f =\{ i \mid \mathcal{G}_f(z_i)=\mathcal{G}_{h_{x_k}}(z_i)\}$, we have $\zeta_k=\sum_{i\in I_f} w_i\mathcal{G}_f(z_i)+\sum_{i\not\in I_f} w_i\mathcal{G}_g(z_i)$. Provided $\sum_{i\in I_f} w_i>0$, which GCQ ensures, $\lambda_k=\sum_{i\not\in I_f}w_i/\sum_{i\in I_f}w_i$ certifies approximate GKKT stationarity.
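In code, this extraction is a one-liner over the inner loop's output; a hypothetical sketch (the names and bookkeeping are ours):

```python
def extract_multiplier(weights, used_f):
    """weights[i] = w_i and used_f[i] = True iff the i-th sampled subgradient came
    from f; assumes the total f-weight is positive, as guaranteed under GCQ."""
    wf = sum(w for w, uf in zip(weights, used_f) if uf)
    wg = sum(w for w, uf in zip(weights, used_f) if not uf)
    return wg / wf   # lambda_k
```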
## Oracles and Minimum Norm Subroutine of Davis et al. [@davis2022gradient] {#subsec:Damek}
Note that the assumed uniform Lipschitz continuity ensures gradients of $f$, $g$, and $h_{x_k}$ exist almost everywhere (by Rademacher's theorem). Davis et al. [@davis2022gradient] leverage this by only requiring gradients at randomly sampled points, at which these functions are differentiable with probability one. Such an oracle can often be produced via automatic differentiation.
**Assumption 4**. *Oracles computing $\mathcal{G}_f(x) = \nabla f(x)$ and $\mathcal{G}_g(x) = \nabla g(x)$ almost everywhere in $\mathbb{R}^n$ are known. [\[assumption3a\]]{#assumption3a label="assumption3a"}*
Given these oracles, a gradient oracle for $h_{x_k}$ almost everywhere can be easily constructed, returning $\nabla f(x)$ if $f(x)-f(x_k) \geq g(x)$ and $\nabla g(x)$ otherwise. Using this oracle, the randomized search subroutine of [@davis2022gradient] finds an approximate minimum norm Goldstein subgradient as defined in Algorithm [\[davisalgorithm\]](#davisalgorithm){reference-type="ref" reference="davisalgorithm"}. The natural corresponding subdifferential is the set of all Clarke subgradients, given by $\partial h(x) = \mathrm{conv}\left\{ \lim_{i\rightarrow\infty} \nabla h(x_i) \mid x_i\rightarrow x,\ x_i\in\mathrm{dom}(\nabla h) \right\}$.
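For concreteness, the oracle for $h_{x_k}$ just described can be assembled as follows (a sketch assuming gradient callables `grad_f` and `grad_g` are available, e.g. from automatic differentiation):

```python
def make_grad_h(f, g, grad_f, grad_g, xk):
    # Almost-everywhere gradient oracle for h_{x_k}(z) = max(f(z) - f(x_k), g(z)):
    # return the gradient of whichever branch of the max is active at z.
    f_xk = f(xk)
    def grad_h(z):
        return grad_f(z) if f(z) - f_xk >= g(z) else grad_g(z)
    return grad_h
```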
## Oracles and Minimum Norm Subroutine of Kong and Lewis [@kong2022cost] {#subsec:Lewis}
Alternatively, Kong and Lewis [@kong2022cost] showed an entirely deterministic approach could be used at the cost of requiring additional mild assumptions and a stronger subgradient-type oracle. The directional derivative of any function $h:\mathbb{R}^n \to \mathbb{R}$ is defined by $$\begin{aligned}
\nabla_v h(x)=\lim_{t \to 0^+}\frac{h(x+tv)-h(x)}{t} \ ,\end{aligned}$$ for any point $x \in \mathbb{R}^n$ and any direction $v \in \mathbb{R}^n$. Kong and Lewis [@kong2022cost]'s subroutine requires an oracle able to compute subgradient-like vectors in agreement with this directional derivative and bounds on the following quantities. For any one-dimensional function $\ell \colon [0,\delta]\rightarrow \mathbb{R}$, we denote the "concave deviation" by $$c_\delta(\ell ) = \inf\{ M \geq 0 \mid \ell + s \text{ is convex for some convex $M$-Lipschitz } s\colon [0,\delta]\rightarrow \mathbb{R} \} \ .$$ This notion extends to multivariate functions $h\colon \mathbb{R}^n\rightarrow \mathbb{R}$ by considering each restriction of $h$ to a line segment $\ell_{h,x,y}(r)=h(x+r(y-x)/\delta)$ for $\|x-y\|\leq \delta$. We denote the nonconvexity modulus of $h$ by the largest concave deviation among these $$\Lambda_h(\delta) = \sup_{\|x-y\|\leq \delta}\{c_\delta(\ell_{h,x,y})\} \ .$$
**Assumption 5**. *The functions $f$ and $g$ are directionally differentiable, have oracles producing directional subgradient maps $F(x,v)$, $G(x,v)$ satisfying $\langle F(x,v),v\rangle=\nabla_v f(x)$, $\langle G(x,v),v\rangle=\nabla_v g(x)$, and have $\Lambda_f(\delta)$ and $\Lambda_g(\delta)$ both finite. [\[assumption3b\]]{#assumption3b label="assumption3b"}*
For any $x,z,v\in\mathbb{R}^n$, we define the associated directional subgradient map for $h_x$ as $$H_x(z,v) = \begin{cases} F(z,v) \text{ if } f(z)-f(x)>g(z) \text{, or } f(z)-f(x)=g(z) \text{ and } \nabla_v f(z) \geq \nabla_v g(z)\\
G(z,v) \text{ if } f(z)-f(x)<g(z) \text{, or } f(z)-f(x)=g(z) \text{ and } \nabla_v f(z) < \nabla_v g(z) \ . \end{cases}$$ This is justified by noting the directional derivative of $h_{x}$ at $z$ is given by $$\begin{aligned}
\nabla_v h_{x}(z)=\begin{cases}
\nabla_v f(z), &{f(z)-f(x)>g(z)}\\
\max\{\nabla_v f(z), \nabla_v g(z)\}, &{f(z)-f(x)=g(z)} \\
\nabla_v g(z), &{f(z)-f(x)<g(z)} \ .
\end{cases}\end{aligned}$$ Note $\Lambda_{h_x}(\delta) \leq \Lambda_f(\delta)+\Lambda_g(\delta)<\infty$ since any convex Lipschitz functions $s_f,s_g$ such that $\ell_{f,x,y}+s_f$ and $\ell_{g,x,y}+s_g$ are convex have $\ell_{h_z,x,y}+s_f+s_g$ convex.
Suppressing the dependence on the given direction $v$, we denote $\mathcal{G}_f(z) = F(z,v)$, $\mathcal{G}_g(z) = G(z,v)$, and $\mathcal{G}_{h_x}(z) = H_x(z,v)$. One corresponding subdifferential is the set of all directional subgradients, $\partial h(x)=\{\zeta \mid \exists v \in \mathbb{R}^n \ \mathrm{s.t.\ } \langle \zeta,v \rangle=\nabla_v h(x)\}$.
Using such oracles, the bisection search subroutine of [@kong2022cost] finds an approximate minimum norm Goldstein subgradient as defined in Algorithm [\[lewisalgorithm\]](#lewisalgorithm){reference-type="ref" reference="lewisalgorithm"}. For a given $z\in\mathbb{R}^n$, this subroutine constructs a sequence of Goldstein subgradients $\zeta_t$ of $h_z$ at $z$, eventually satisfying one of the needed conditions. Defining $\hat{\zeta}_t=\zeta_t/\|\zeta_t\|$, $z_t(r)=z+(r-\delta)\hat{\zeta}_t$ and $l_t(r)=h_z(z_t(r))-\epsilon r/2$ on $[0,\delta]$, knowing $l_t(0)>l_t(\delta)$ (an average rate of decrease), the critical step in this construction is the bisection [@kong2022cost Algorithm 3.1], which finds a location where the right derivative $l_{t+}'(r)=\lim_{\Delta r \to 0^+}(l_t(r+\Delta r)-l_t(r))/\Delta r$ of $l_t(r)$ is negative. A finite nonconvexity modulus guarantees termination.
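The following is a schematic one-dimensional sketch of that bisection step (our own simplification with a crude finite-difference stopping test; the actual test in [@kong2022cost Algorithm 3.1] differs and is what yields the stated guarantees):

```python
def bisect_negative_right_derivative(l, delta, n_steps=50, h=1e-9):
    """Given l(0) > l(delta), repeatedly keep a half interval [a, b] that still
    decreases on average, and stop once a finite-difference estimate of the
    right derivative of l at a is negative."""
    a, b = 0.0, delta
    for _ in range(n_steps):
        step = min(h, (b - a) / 2)
        if (l(a + step) - l(a)) / step < 0:
            return a   # right derivative at a is (numerically) negative
        m = (a + b) / 2
        if l(a) > l(m):
            b = m      # the left half still has average decrease
        else:
            a = m      # otherwise the right half must: l(m) >= l(a) > l(b)
    return a
```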
# Convergence Guarantees and Analysis {#section_analysis}
We provide two main theorems on FJ and KKT guarantees, using Algorithm [\[davisalgorithm\]](#davisalgorithm){reference-type="ref" reference="davisalgorithm"} or [\[lewisalgorithm\]](#lewisalgorithm){reference-type="ref" reference="lewisalgorithm"}, respectively, as the subroutines. These follow primarily from our Lemmas [\[lemma_outeriterations\]](#lemma_outeriterations){reference-type="ref" reference="lemma_outeriterations"} and [\[lemma_fjtokkt\]](#lemma_fjtokkt){reference-type="ref" reference="lemma_fjtokkt"}.
**Theorem 1**. *Given Assumptions [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"} and [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}, for any $\epsilon>0$ and $0<\delta<\Delta$, Algorithm [\[ouralgorithm\]](#ouralgorithm){reference-type="ref" reference="ouralgorithm"} with $\Tilde{\epsilon}=\epsilon$ will terminate with a $(\delta, \epsilon,3M\delta)$-GFJ stationary point. Under Assumption [\[assumption3a\]](#assumption3a){reference-type="ref" reference="assumption3a"} and using Algorithm [\[davisalgorithm\]](#davisalgorithm){reference-type="ref" reference="davisalgorithm"} as the subroutine, with probability $1-\tau$, this requires at most $$\left\lceil\frac{4(f(x_0)-p_\star)}{\delta \epsilon}\right\rceil \left\lceil\frac{64M^2}{\epsilon^2}\right\rceil \left\lceil2\log\left(\frac{4(f(x_0)-p_\star)}{\tau\delta \epsilon}\right)\right\rceil$$ first-order oracle calls. Under Assumption [\[assumption3b\]](#assumption3b){reference-type="ref" reference="assumption3b"} and using Algorithm [\[lewisalgorithm\]](#lewisalgorithm){reference-type="ref" reference="lewisalgorithm"}, this becomes $$\left\lceil\frac{3(f(x_0)-p_\star)}{\delta \epsilon}\right\rceil \left\lceil\frac{16M^2}{\epsilon^2}\right\rceil \left(1+\left\lfloor\frac{12(\Lambda_f(\delta)+\Lambda_g(\delta))}{\epsilon}\right\rfloor\right) \ .$$ [\[theorem_FJ\]]{#theorem_FJ label="theorem_FJ"}*
*Proof.* According to Lemma [\[lemma_outeriterations\]](#lemma_outeriterations){reference-type="ref" reference="lemma_outeriterations"}, we attain a $(\delta,\epsilon)$-Goldstein stationary point $x_k$ to $h_{x_k}$ with $g(x_k)\leq 0$ and $k \leq \left\lceil\frac{f(x_0)-p_\star}{C\delta \epsilon}\right\rceil.$ According to Lemma [\[lemma_fjtokkt\]](#lemma_fjtokkt){reference-type="ref" reference="lemma_fjtokkt"}, $x_k$ is a $(\delta,\epsilon,3M\delta)$-GFJ stationary solution for problem [\[mainproblem\]](#mainproblem){reference-type="eqref" reference="mainproblem"}. By Corollary 5 and Theorem 6 in [@davis2022gradient], each call to Algorithm [\[davisalgorithm\]](#davisalgorithm){reference-type="ref" reference="davisalgorithm"} uses at most $\lceil 64M^2/\epsilon^2\rceil\lceil2\log(1/\tau)\rceil$ gradient evaluations with probability at least $1-\tau$ and $C=1/4$. Likewise, by Theorem 4.3, Corollary 5.6 and Theorem 6.6 in [@kong2022cost], Algorithm [\[lewisalgorithm\]](#lewisalgorithm){reference-type="ref" reference="lewisalgorithm"} requires no more than $\lceil 16M^2/\epsilon^2\rceil (1+\lfloor12(\Lambda_f(\delta)+\Lambda_g(\delta))/\epsilon\rfloor)$ subgradient evaluations with $C=1/3$. ◻
**Theorem 2**. *Given Assumptions [\[assumption1\]](#assumption1){reference-type="ref" reference="assumption1"}, [\[assumption2\]](#assumption2){reference-type="ref" reference="assumption2"}, for any $\epsilon>0$ and $0<\delta<\Delta$ such that $(\delta, \sigma, M\delta)$-GCQ holds for all $x$, Algorithm [\[ouralgorithm\]](#ouralgorithm){reference-type="ref" reference="ouralgorithm"} with $\Tilde{\epsilon}=\sigma\epsilon/(\epsilon+\sigma+M)$ will terminate with an $(\delta,\epsilon,3M\delta(\epsilon+\sigma+M)/\sigma)$-Goldstein KKT point. Under Assumption [\[assumption3a\]](#assumption3a){reference-type="ref" reference="assumption3a"} and using Algorithm [\[davisalgorithm\]](#davisalgorithm){reference-type="ref" reference="davisalgorithm"} as the subroutine, with probability $1-\tau$, this requires at most $$\left\lceil\frac{4(f(x_0)-p_\star)}{\delta\Tilde{\epsilon}}\right\rceil \left\lceil\frac{64M^2}{\Tilde{\epsilon}^2}\right\rceil \left\lceil2\log\left(\frac{4(f(x_0)-p_\star)}{\tau\delta \Tilde{\epsilon}}\right)\right\rceil$$ first-order oracle calls. Under Assumption [\[assumption3b\]](#assumption3b){reference-type="ref" reference="assumption3b"} and using Algorithm [\[lewisalgorithm\]](#lewisalgorithm){reference-type="ref" reference="lewisalgorithm"}, this becomes $$\left\lceil\frac{3(f(x_0)-p_\star)}{\delta\Tilde{\epsilon}}\right\rceil \left\lceil\frac{16M^2}{\Tilde{\epsilon}^2}\right\rceil \left(1+\left\lfloor\frac{12(\Lambda_f(\delta)+\Lambda_g(\delta))}{\Tilde{\epsilon}}\right\rfloor\right) \ .$$ [\[theorem_KKT\]]{#theorem_KKT label="theorem_KKT"}*
*Proof.* By Lemmas [\[lemma_outeriterations\]](#lemma_outeriterations){reference-type="ref" reference="lemma_outeriterations"} and [\[lemma_fjtokkt\]](#lemma_fjtokkt){reference-type="ref" reference="lemma_fjtokkt"}, our choice $\Tilde{\epsilon}=\frac{\sigma\epsilon}{\epsilon+\sigma+M}<\sigma$ ensures $x_k$ is a feasible $(\delta,\Tilde{\epsilon},3M\delta)$-GFJ point. Then $(\delta,\sigma,M\delta)$-GCQ implies $x_k$ is $(\delta,\epsilon,3M\delta(\epsilon+\sigma+M)/\sigma)$-GKKT stationary, where $\epsilon$ is derived by $\epsilon=\Tilde{\epsilon}(\sigma+M)/(\sigma-\Tilde{\epsilon})$, and $3M\delta(\epsilon+\sigma+M)/\sigma=3M\delta(\sigma+M)/(\sigma-\Tilde{\epsilon})$. Using the same subroutine bounds from [@davis2022gradient; @kong2022cost] as in the proof of Theorem [Theorem 1](#theorem_FJ){reference-type="ref" reference="theorem_FJ"} finishes the proof. ◻
## Proof of Lemma [\[lemma_outeriterations\]](#lemma_outeriterations){reference-type="ref" reference="lemma_outeriterations"}
We prove this by inductively showing feasibility is maintained and the objective value decreases by at least $C\delta\epsilon$ each iteration until $\|\zeta_k\|\leq \epsilon$. Supposing $\|\zeta_k\|>\epsilon$, our black-box subroutine finds $h_{x_k}(x_k)-h_{x_k}(x_{k+1}) \geq C\delta \epsilon$. Since $h_x(x)=\max\{f(x)-f(x), g(x)\}=0$ whenever $g(x) \leq 0$, we know $\max\{f(x_{k+1})-f(x_k),g(x_{k+1})\} \leq -C\delta \epsilon.$ Thus we have descent $f(x_k)-f(x_{k+1}) \geq C\delta \epsilon$ and maintain feasibility $g(x_{k+1}) \leq -C\delta \epsilon$. Given an initial objective gap of $f(x_0)-p_\star$, this occurs at most $\left\lceil\frac{f(x_0)-p_\star}{C\delta \epsilon}\right\rceil$ times.
## Proof of Lemma [\[lemma_fjtokkt\]](#lemma_fjtokkt){reference-type="ref" reference="lemma_fjtokkt"}
If $x$ is $(\delta,\epsilon)$-Goldstein stationary to $h_x$, there exist $t$ points $z_i \in B(x, \delta)$ with (sub)gradients $\mathcal{G}_{h_x}(z_i) \in \partial h_x(z_i)$ and weights $w > 0$ with $\sum_{i=1}^tw_i=1$ such that $\zeta = \sum_{i=1}^t w_i\mathcal{G}_{h_x}(z_i)$ has $\|\zeta\| \leq \epsilon$. By Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}, letting $I_f =\{ i \mid \mathcal{G}_f(z_i)=\mathcal{G}_{h_{x}}(z_i)\}$, we have $\zeta=\sum_{i\in I_f} w_i\mathcal{G}_f(z_i)+\sum_{i\not\in I_f} w_i\mathcal{G}_g(z_i)$. This motivates selecting $\gamma_0 = \sum_{i\in I_f} w_i \geq 0$ and $\gamma = \sum_{i\not\in I_f} w_i \geq 0$. Indeed this selection has $\gamma_0+\gamma=1$ and $\mathrm{dist}(0, \gamma_0\partial_\delta f(x)+\gamma\partial_\delta g(x)) \leq \epsilon$. Lastly, for establishing approximate Goldstein Fritz-John stationarity, we verify complementary slackness. This trivially holds if $\gamma=0$. Otherwise let $i\not\in I_f$ and observe that $|g(z)| \leq 3M\delta$ for any $z\in B(x,\delta)$ since we have $$g(z) = g(z) - g(x) + g(x) \leq M\delta$$ using $M$-Lipschitz continuity of $g$ on $\|z-x\|\leq \delta$ and the feasibility of $x$, and we have $$g(z) = g(z) - g(z_i) + g(z_i) \geq -3M\delta$$ using $M$-Lipschitz continuity of $g$ on $\|z_i-z\|\leq 2\delta$ and $g(z_i) \geq f(z_i)-f(x) \geq -M\delta$. Since $\gamma\leq 1$, $\max_{z\in B(x,\delta)} |\gamma g(z)|\leq 3M\delta$ and so $x$ is a $(\delta,\epsilon,3M\delta)$-GFJ point.
Assuming $(\delta, \hat{\epsilon}, M\delta)$-GCQ holds, we first claim $\gamma_0 \geq \frac{\hat{\epsilon}-\epsilon}{\hat{\epsilon}+M} > 0$ since $$(1-\gamma_0) \hat{\epsilon}=\gamma \hat{\epsilon} \leq \|\sum_{i\not\in I_f} w_i\mathcal{G}_g(z_i)\| \leq \|\sum_{i=1}^t w_i\mathcal{G}_{h_{x}}(z_i)\|+\|\sum_{i\in I_f} w_i\mathcal{G}_f(z_i)\| \leq \epsilon+\gamma_0 M \ .$$ As a result, consider the Lagrange multiplier $0 \leq \lambda := \gamma/\gamma_0 \leq \frac{\hat{\epsilon} + M}{\hat{\epsilon}-\epsilon} - 1$. Then $(\delta, \epsilon(\hat{\epsilon}+M)/(\hat{\epsilon}-\epsilon),3M\delta(\hat{\epsilon}+M)/(\hat{\epsilon}-\epsilon))$-GKKT stationarity for problem [\[mainproblem\]](#mainproblem){reference-type="eqref" reference="mainproblem"} follows as $$\begin{aligned}
\mathrm{dist}(0, \partial_\delta f(x)+\lambda \partial_\delta g(x)) \leq \frac{\epsilon}{\gamma_0} \leq \frac{\epsilon(\hat{\epsilon}+M)}{\hat{\epsilon}-\epsilon} \ ,\\
\max_{z\in B(x,\delta)} |\lambda g(z)| \leq \frac{\max_{z\in B(x,\delta)} |\gamma g(z)|}{\gamma_0} \leq \frac{3M\delta (\hat{\epsilon}+M)}{\hat{\epsilon}-\epsilon} \ .\end{aligned}$$
Benjamin Grimmer was supported by the Air Force Office of Scientific Research under award number FA9550-23-1-0531. No data sets were used in or generated by this work. The authors have no competing interests to declare.
[^1]: In particular, constraint qualification ensures $\gamma_0$ is positive. Hence $\lambda_i = \gamma_i/\gamma_0$ satisfies KKT.
| arxiv_math | {
"id": "2310.03690",
"title": "Goldstein Stationarity in Lipschitz Constrained Optimization",
"authors": "Benjamin Grimmer, Zhichao Jia",
"categories": "math.OC",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
We study one specific version of the contact process on a graph. Here, we allow multiple infections carried by the nodes and include a probability of removing nodes in a graph. The removal probability is purely determined by the number of infections the node carries at the moment when it gets another infection. In this paper, we show that on any finite graph, any positive value of infection rate $\lambda$ will result in the death of the process almost surely. In the case of infinite trees, We also give a lower bound on the infection rate in order for the process to survive, and an upper bound for the process to die out.
address: |
Xu Huang: Dept. of Mathematics\
University of Rochester\
Rochester, NY 14627
author:
- Xu Huang
bibliography:
- refs.bib
title: An SIR Model for the Multi-Virus Contact Process
---
# Introduction
## Background
The contact process, first introduced by T. E. Harris in [@T.E.Harris], is a model describing the spread of infections on a graph. A general description of this model can be found in [@grimmett2018probability]. There are many beautiful papers in this area, such as [@nam2022critical] and [@durrett1988lecture]. In this paper, instead of working with the traditional contact process, which is a Susceptible-Infected-Susceptible (SIS) model, we reformulate the process as a Susceptible-Infected-Removed (SIR) model. The field of SIR models has recently drawn considerable attention because of the pandemic, and papers such as [@dauvergne2022sir] and [@candellero2021first] highlight some of the differences between SIS and SIR models. Our model is called the multi-virus contact process with a death rate acting on each node of the graph. Here is a verbal description of the model:
1. Each vertex in the graph is allowed to carry multiple infections at any given moment.
2. Each infection passes to its host's neighbors independently following a Poisson process with infection rate $\lambda$.
3. Each infection is healed following a Poisson process with rate 1.
4. Each infection or recovery happens independently in the process.
5. Each infection has a probability of killing its host at the moment it passes to the host. The probability mass function $\phi$ of death only depends on the number of infections that the host is carrying at the moment.
6. When a node dies, the node itself, all the infections on it, and all the edges connecting to it are removed from the graph.
There are two main differences between the classical contact process and our new model. The first is that we allow any node to carry multiple infections, and the second is that we add the probability of removing nodes and edges from the graph by including the probability of death.
Also, due to the fact that the probability of death only depends on the current number of viruses on the node, our model is related to the zero-range process. First introduced by Frank Spitzer in [@SPITZER1970246], the zero-range process (ZRP) is a type of stochastic interacting particle system in which indistinguishable particles can relocate from the site to site, with rates depending on the occupancy of the departure sites. Then one can view the viruses in the multi-virus contact process model as particles giving birth, dying, and moving among the nodes. The death of any node corresponds to the removal of a site from the zero-range process. The probability of removing such a site is purely determined by the number of particles (or viruses) on it. This paper does not use methods from ZRP, but the reader interested in such methods can consult [@10.1214/aop/1176996977] and [@liggett1985interacting].
Note that the probability mass function of death depends only on the number of infections a node currently carries, and not on time. This setup means that we assume the condition of a node only worsens at the moment the node gets a new infection. This is fairly close to what happens in practice: it often takes a doctor little time to judge whether a patient will die or not, and a patient who is expected to live keeps recovering until a new virus arrives.
## Mathematical Formulation
Here is the mathematical formulation of our contact model. On a given graph $G_0 = (V_0, E_0)$ with vertex set $V_0$ and edge set $E_0$ at the moment $t=0$, our contact process $(\xi_t)_{t \geq 0}$ is a continuous-time Markov process with infection rate $\lambda$ and recovery rate 1 on the state space $(\mathbb{N} \cup \{\emptyset\})^{V_0}$, where $\mathbb{N}$, from now on, is the set of natural numbers including 0. Here, on a given site $x$, we write $\xi_t (x) = 0$ if the node is healthy at moment $t$, $\xi_t (x) = i$ if it carries $i$ infections, and $\xi_t (x) = \emptyset$ if it is dead. If $\xi_t (x) \in \{0, \emptyset\}$ for all $x \in V_0$, then there are no infected nodes left in the graph at moment $t$, and we denote this case by $\xi_t = \emptyset$. Therefore, assuming that the state of the contact process at time $t$ is $\zeta$ with $\zeta\ne\emptyset$, we have $$\begin{split}
& \mathbb{P} (\xi_{t+h} (x) = \zeta(x) -1 \mid \xi_{t} = \zeta) = \zeta(x)h + o(h) \qquad \qquad \qquad \text{if} \hspace{0.1cm} \zeta (x) \ne 0 \hspace{0.1cm} \text{or} \hspace{0.1cm} \emptyset \\
&\mathbb{P} (\xi_{t+h} (x) = \zeta(x)+1 \mid \xi_{t} = \zeta) = (\lambda N^t_{\zeta}(x) h + o(h))(1-\Phi(\zeta(x)+1)) \qquad \hspace{0.3cm} \text{and} \\
&\mathbb{P} (\xi_{t+h} (x) = \emptyset \mid \xi_{t} = \zeta) = (\lambda N^t_{\zeta}(x) h + o(h))\Phi(\zeta(x)+1) \qquad \hspace{1.3cm} \text{if} \hspace{0.1cm} \zeta (x) \ne \emptyset
\end{split}$$ as $h \downarrow 0$. Here $N^t_{\zeta}(x)$ is the total number of infections carried by the neighbours of $x$ in the state $\zeta$ at moment $t$,
$$N^t_{\zeta}(x) = \sum_{y \in V_0,\, y \sim_t x} {\zeta(y)},$$ where $x \sim_t y$ means that there still exists an edge connecting $x$ and $y$ at moment $t$. $\Phi(j)$ is the cumulative distribution function associated with $\phi$: it is the probability that an arriving infection kills its host node at the moment of infection, as a function of the number $j$ of infections the node carries at that moment. Therefore, $\Phi$ is a non-decreasing function with domain $\mathbb{N}$, and we assume the function has the property that $$\begin{aligned}
\Phi(0) = 0, \hspace{0.1cm} \Phi(k) = 1 \text{ for all } k\geq M,\end{aligned}$$ where $M\in \mathbb{N}$ is the constant representing the maximum total number of infections that a node can carry at any moment.
Therefore, each virus is healed at rate 1, infects any neighbor at rate $\lambda$ independently, and kills a node with a probability that is a function of the number of infections the node carries at the moment when the virus arrives at the node.
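To make these dynamics concrete, here is a minimal Gillespie-style simulation sketch in Python (the data structures, function names, and the example parameters below are our own illustrative choices and are not part of the model specification):

```python
import random

def simulate(adj, xi0, lam, Phi, t_max=100.0, seed=0):
    """Simulate the multi-virus contact process on a finite graph.

    adj : dict node -> set of neighbours (edges of dead nodes get removed)
    xi0 : dict node -> initial number of infections
    Phi : Phi(k) = probability that an arriving infection kills a host that
          would then be carrying k infections
    """
    rng = random.Random(seed)
    adj = {v: set(ns) for v, ns in adj.items()}
    xi = dict(xi0)
    t = 0.0
    while t < t_max:
        nodes = list(adj)
        heal_w = [xi[v] for v in nodes]                      # healing rate 1 per infection
        inf_w = [lam * xi[v] * len(adj[v]) for v in nodes]   # rate lam per (infection, edge)
        total = sum(heal_w) + sum(inf_w)
        if total == 0:
            break                                            # xi_t = emptyset: process died out
        t += rng.expovariate(total)
        if rng.random() < sum(heal_w) / total:
            v = rng.choices(nodes, weights=heal_w)[0]        # one infection heals
            xi[v] -= 1
        else:
            v = rng.choices(nodes, weights=inf_w)[0]         # one infection attempts to spread
            w = rng.choice(sorted(adj[v]))
            if rng.random() < Phi(xi[w] + 1):                # the arrival kills the target node
                for u in adj[w]:
                    adj[u].discard(w)
                del adj[w], xi[w]
            else:
                xi[w] += 1                                   # target now carries one more infection
    return xi, adj

# Example: a path on 4 nodes, one infection at node 0, and M = 3.
Phi = lambda k: 0.0 if k <= 0 else (1.0 if k >= 3 else 0.2 * k)
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(simulate(adj, {0: 1, 1: 0, 2: 0, 3: 0}, lam=1.5, Phi=Phi))
```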
We also define the death and survival of the process as follows:
**Definition 1**. The multi-virus contact process is said to **die out** if $$P_{\lambda} (\xi_t \ne \emptyset \hspace{0.2cm} \forall t) = 0,$$ and to **survive** if $$P_{\lambda} (\xi_t \ne \emptyset \hspace{0.2cm} \forall t) > 0.$$
By this definition, it is clear that with any particular value of $\lambda$ the process can either die out or survive, but not both.
We also give the definition of phase transition for the multi-virus contact process.
**Definition 2**. The SIR contact process is said to have a **phase transition** if there exists a critical value $\lambda_c > 0$ such that the process will die out if the infection rate of the process is smaller than $\lambda_c$ and will survive if the infection rate is larger than $\lambda_c$.
# Finite Scenario
We start by proving that in any given finite graph, our contact process dies out almost surely.
**Theorem 1**. Assume the contact process starts on a finite graph $G_0 = (V_0, E_0)$, where $|V_0| < \infty$. Then we have $$P_{\lambda} (\xi_t \ne \emptyset \hspace{0.15cm} \forall t) = 0 \quad \forall \lambda > 0.$$
**Remark 1**. Theorem [Theorem 1](#th: 1){reference-type="ref" reference="th: 1"} shows a similarity between the SIR contact process and the classical SIS contact process: both processes have **no** phase transition in the finite setting.
Before proving this theorem, we first define a helpful concept and then prove an essential lemma.
**Definition 3**. Any node $\hat{v}$ in $G_0$ is said to have the immortal property, or to be immortal, if $\forall N \in \mathbb{N}$, $\exists t' \in [0, \infty)$ such that the total number of infections that $\hat{v}$ experienced before time $t'$ is larger than $N$.
**Remark 2**. It is clear that a node being immortal is a random event, since at time t = 0 one might not know if a node turns out to be immortal.
**Lemma 1**. No node can have the immortal property in any graph $G_0$, almost surely.
*Proof.* Let $\epsilon > 0$ be given and choose $N_\epsilon = \bigg \lceil \frac { \log \epsilon}{ \log(1-\Phi(1))} + 1 \bigg \rceil$. Choose an arbitrary vertex $\hat{v}$ from $V_0$. There are three possible situations for $\hat{v}$: it may never get $N_\epsilon$ infections and survive for all time, it may die before or upon getting its $N_\epsilon$-th infection, or it may survive for all time after the $N_\epsilon$ infections at site $\hat{v}$ have occurred. Moreover, the probability for the node to survive after getting $N_\epsilon$ infections is $$\label{neweq 1}
\mathbb{P} ( \xi(\hat{v}) \ne \emptyset) = \prod_{i=1}^{N_\epsilon} (1-\Phi(k_i)) \leq \prod_{i=1}^{N_\epsilon} (1-\Phi(1)) = (1-\Phi(1))^{N_{\epsilon}} < \epsilon,$$ where $k_i$ represents the number of infections $\hat{v}$ is carrying at the moment this site gets its $i^{th}$ infection (including the newly arriving one, so $k_i \geq 1$).
The first inequality in [\[neweq 1\]](#neweq 1){reference-type="eqref" reference="neweq 1"} holds by the monotonicity of the cumulative distribution function, as we have $\Phi(k_i) \geq \Phi(1)$ for all $k_i \geq 1$. [\[neweq 1\]](#neweq 1){reference-type="eqref" reference="neweq 1"} shows that for any positive value $\epsilon$, there always exists an $N_\epsilon$ such that the probability for $\hat{v}$ to survive after $N_\epsilon$ infections is smaller than $\epsilon$. Therefore, the probability that $\hat{v}$ either never gets $N_\epsilon$ infections or dies before or upon getting its $N_\epsilon$-th infection converges to 1 as we let $\epsilon$ go to 0. However, in either case $\hat{v}$ is not immortal. This shows that no vertex can be immortal almost surely. ◻
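As a numerical illustration of this choice (our own example): if $\Phi(1) = 0.1$ and $\epsilon = 0.01$, then $N_\epsilon = \lceil \log(0.01)/\log(0.9) + 1 \rceil = 45$ and $(1-\Phi(1))^{N_\epsilon} = 0.9^{45} \approx 0.0087 < \epsilon$.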
Now we prove **Theorem [Theorem 1](#th: 1){reference-type="ref" reference="th: 1"}**.
*Proof.* It is sufficient to show that for all $\lambda \in \mathbb{R}_+$, the contact process with infection rate $\lambda$ will die out almost surely at some finite time $T$. Take an arbitrary $\lambda \in \mathbb{R}_+$ to be the infection rate.
Since no node in $V_0$ is immortal, for any node $v_i \in V_0$ there exists, almost surely, an $N_i^{\lambda} \in \mathbb{N}$ such that the node will either die or never be infected again after getting $N_i^{\lambda}$ infections. Since each infection heals following a Poisson process with rate 1, if a node survives its $N_i^\lambda$ infections, the infections on it will eventually be cured almost surely, so there exists a finite time $t_i^{\lambda} \in [0, \infty)$ such that $v_i$ will either die or survive forever after $t_i^{\lambda}$ almost surely. Therefore, by taking $T^{\lambda} = \max_{i \leq |V_0|}{t_i^{\lambda}}$, all nodes have either died or will never be infected again starting from $T^{\lambda}$, which finishes the proof. ◻
Next we give an example of a finite tree that satisfies Theorem [Theorem 1](#th: 1){reference-type="ref" reference="th: 1"}. This example will be helpful in the infinite case.
## Finite tree with fixed offspring number
Let $T_{d,n}$ be a labeled finite tree with fixed offspring number $d \geq 2$ and fixed number of vertices $n$. Therefore, the degree of every node is $d + 1$, except for the leaf nodes, each of which has degree 1, and the root of the tree which has degree $d$. We denote the root of the tree as $\{0 \}$.
If we apply Theorem [Theorem 1](#th: 1){reference-type="ref" reference="th: 1"} to this finite tree, we know that the process will die out on $T_{d,n}$ for all $\lambda$ almost surely. Besides, one can observe from Figure 1 that whenever a site is removed from the graph, the graph is split into a finite union of finite sub-trees, and within each sub-tree the offspring number of each node is at most $d$. By the same token, when working with an infinite tree, the removal of nodes transforms the whole graph into a finite union of finite and infinite sub-trees. This observation will be useful later on in the infinite tree setting.
[\[fig:1\]]{#fig:1 label="fig:1"}
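A quick way to see this splitting numerically (an illustration only, assuming the `networkx` package is available):

```python
import networkx as nx

# A finite tree with offspring number d = 2 and height 4; the root is node 0.
T = nx.balanced_tree(r=2, h=4)
print(T.number_of_nodes())                      # 31 vertices

# Removing an internal vertex splits the tree into several finite sub-trees.
T.remove_node(1)                                # node 1 is a child of the root
print(len(list(nx.connected_components(T))))    # 3 sub-trees remain
```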
# Infinite Scenario
## Infinite tree with fixed degree
Now we move our focus to infinite trees.
Let $T_d$ be an infinite tree with fixed degree $d \in \mathbb{N}$, $d \geq 3$, i.e., each node is connected to its parent and to at least two offspring. We denote the root of the tree by $\{0 \}$. Let $\xi = ( \xi_{t} : t\geq 0)$ be the multi-virus contact process on $T_d$ with infection rate $\lambda$ and healing rate 1, as specified in the introduction, and initial state $\xi_0 = \{ 0 \}$.
In this case, we **cannot** assume monotonicity of survival of the process with respect to the infection rate $\lambda$. The reason is clear: with a higher infection rate the virus will spread faster, and therefore further in the graph, but the infected nodes will also receive more infections per unit time. The infected nodes are then likely to die faster, which is bad for the survival of the virus. Therefore, more work is needed to verify the existence of a critical value of $\lambda$ for the phase transition from death to survival of the process.
As a matter of fact, the survival of the process may also not be monotonic with respect to the healing rate. When the healing rate increases, infections are cured faster, but the nodes are also expected to live longer, which could be helpful for the survival of the virus. Therefore, more work is needed to prove the existence of a critical value of the healing rate as well.
We can still give lower and upper bounds for $\lambda$, which guarantee the survival or death of the process, respectively.
**Theorem 2**. Assume $(1-\Phi(1))(d-2) + 1 - M > 0$, where $M$ is the same constant as in the definition of cumulative distribution function of death, i.e. $\Phi(k) = 1 \text{ for all } k\geq M$. Then there exists $\lambda_* > 0$ such that if $\lambda \leq \lambda_*$ the contact process **dies out** almost surely. Further, $$\displaystyle \lambda_* \geq \frac{1}{ \displaystyle (1-\Phi(1))(d-2) + (1-2\Phi(2))}.$$
**Theorem 3**. Assume $1-\Phi(1) - \Phi(2) > 0$. Then there exists $\lambda^* > 0$ such that if $\lambda \geq \lambda^*$ the contact process **survives** almost surely. Further, $$\displaystyle \lambda^* \leq \frac{1}{1-\Phi(1)-\Phi(2)}.$$
### Upper bound
Let $\rho \in (0, 1)$, and let $\nu_{\rho}(A) = \rho^{|I_A|}$ for any finite subset $A$ of the vertex-set $V_0$ of $T_d$, where $|I_A|$ is the total number of infections carried by the vertices of $A$. At any given moment $t$, we define $V_t^*$ to be the set of infected but still alive vertices in $V_0$ at moment $t$, i.e. $V_t^* = \{v \in V_0 : \xi_t(v) \ne 0, \emptyset \}$. It is clear that $V_t^*$ is a finite set. Now we work with the process $\nu_{\rho}(V_t^*)$. Let $g^A_{\lambda}(t) = E^{A}_{\lambda}(\nu_{\rho}(V_t^*))$, which is the expected value of the process $\nu_{\rho}(V_t^*)$ when the infection starts on the set $A$ and the infection rate is $\lambda$.
Before the proof begins, here is a short outline of the reasoning. We first find the smallest value of the infection rate $\lambda$ for which $(g^A_\lambda(t))' \leq 0$. If $(g^{V_u^*}_\lambda(t))' \leq 0$ under all circumstances because of the $\lambda$ we choose, then by the Markov property, neither the conditional probability nor the expectation depends on the current time, so $$\label{markov}
\frac{d}{du}{g_{\lambda}^{A}(u)} =E^{A}_{\lambda} \bigg(\frac{d}{dt}{g_{\lambda}^{V_u^*}(t)} \bigg| _{t=0} \bigg) \leq 0,$$ implying that $g^A_\lambda(u)$ is non-increasing in $u$. Now, assume we start the infection at the root of the tree; then we have $g^{\{0\}}(0) = \rho < 1$, and therefore $\lim_{u \to \infty} g^A(u) < 1$. On the other hand, if the process dies out, all viruses eventually disappear and $\lim_{t \to \infty} g^A(t) = 1$; hence, when [\[markov\]](#markov){reference-type="eqref" reference="markov"} holds, the process will survive. The infection rate $\lambda$ must be smaller than the threshold at which $(g^A_\lambda(t))' \leq 0$ in order for the process to die out, so this threshold becomes the upper bound in Theorem [Theorem 2](#upperb){reference-type="ref" reference="upperb"}.
In the following proofs, we will discuss three special cases and generalize them to get our final results.
**Lemma 2**. Let $A$ be any finite subset of the vertex-set $V_0$ of $T_d$. Let all sites in $A$ have exactly 1 infection at $t=0$. Let $\lambda_1$ be an infection rate which leads to the die out of the process starting at $A$. Then
$$\displaystyle \lambda_1 \leq \frac{1}{ \displaystyle (1-\Phi(1))(d-2) + (1-2\Phi(2))}.$$
*Proof.* We have
$$\label{eq:1}
\begin{split}
g^{A}_{\lambda_1}(t) =& |A|t(\frac{\nu_{\rho}(A)}{\rho}) + \lambda_1 N_A t(1-\Phi(1))(\nu_{\rho}(A) \rho) \\
&+ \lambda_1|A|t(\frac{\nu_{\rho}(A)}{\rho})\Phi(2) +\lambda_1|A|t(\nu_{\rho}(A)\rho)(1-\Phi(2)) \\
&+ \nu_{\rho}(A)(1- |A|t -\lambda_1 N_A t(1-\Phi(1)) \\
&- \lambda_1|A|t \Phi(2)- \lambda_1|A|t(1-\Phi(2) ) \\
&+ o(t),
\end{split}$$
as $t \downarrow 0$, where
$N_{A} = |\{ \langle x, y\rangle : x \in A, y\notin A \}|$
is the number of edges of $T_d$ with exactly one end vertex in $A$.
Table 1 shows how each term in [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"} represents a case of interaction between $A$ and its surroundings.
[\[tab:table1\]]{#tab:table1 label="tab:table1"}
**Term** **Representation**
---------------------------------------------------- --------------------------
$|A|t(\frac{\nu_{\rho}(A)}{\rho})$ Healing
$\lambda_1 N_A t(1-\Phi(1))(\nu_{\rho}(A) \rho)$ Infecting surrounding
$\lambda_1|A|t(\nu_{\rho}(A)\rho)(1-\Phi(2))$ Infecting nodes in $A$
$\lambda_1|A|t(\frac{\nu_{\rho}(A)}{\rho})\Phi(2)$ Killing the nodes in $A$
The Rest No change
: Representation of Each Term in [\[eq:1\]](#eq:1){reference-type="eqref" reference="eq:1"}.
The number of neighbors of $A$ can be estimated in the following way. The minimum number of neighbors of $A$ with given cardinality is achieved when all nodes in $A$ are on the same big branch of the tree; that is, if we remove all other nodes and edges from $G_0$ and only keep the nodes in $A$ and the edges connecting them, they still form one finite tree, which we call $T^*_d$ = ($A$, $E^*$). For each node in $T^*_d$, we count its offspring number plus one, which means the overall sum over all nodes in $A$ is $d|A|$ at the moment $t=0$; here the 'plus one' represents the node itself. This sum counts three types of nodes. The first type consists of the offspring of nodes of $A$ that do not belong to $A$, which are counted once. The second type consists of the nodes in $A$ other than the root of $T^*_d$, which are counted twice. The third type is the root of $T^*_d$, which belongs to $A$ and is counted once. Now, what we want is the number of neighbors of $A$, namely $N_A$, which counts the offspring of nodes of $A$ lying outside $A$ together with the parent of the root of $T^*_d$, so we have to deduct $2(|A|-1)$ from the overall sum. That is, $$\label{ineq}
|\{ \langle x, y\rangle : x \in A, y\notin A \}| \geq d |A| - 2(|A|-1),$$ and we have $$\begin{split}
\frac{d}{dt}{g^{A}_{\lambda_1}(t)} \bigg| _{t=0} =& (1-\rho) \nu_{\rho}(A) \bigg\{ \frac{|A|}{\rho} + (\frac{\lambda_1|A|}{\rho})\Phi(2) \\ &- \lambda_1 N_A (1-\Phi(1)) - \lambda_1|A|(1-\Phi(2)) \bigg \} \\ &\leq (1-\rho) \nu_{\rho}(A) \bigg \{ \frac{|A|}{\rho} + (\frac{\lambda_1|A|}{\rho})\Phi(2) \\ &- \lambda_1 \bigg [d |A| - 2(|A|-1) \bigg ]
(1-\Phi(1)) - \lambda_1|A|(1-\Phi(2)) \bigg \} \\ &= (1-\rho) \nu_{\rho}(A) \bigg\{ |A| \{ \frac{1}{\rho} + \frac{\lambda_1 \Phi(2)}{\rho} \\ &-\lambda_1(1-\Phi(1))[ d-2 ] -\lambda_1(1-\Phi(2)) \} \\ &- 2\lambda_1 (1-\Phi(1)) \bigg \} \leq 0
\end{split}$$ when $$\frac{1}{\rho} + \frac{\lambda_1 \Phi(2)}{\rho} -\lambda_1(1-\Phi(1))[ d-2 ] -\lambda_1(1-\Phi(2)) \leq 0,$$ i.e. $$\rho \lambda_1 \bigg \{ (1-\Phi(1))(d-2) + (1-\Phi(2)) - \frac{\Phi(2)}{\rho} \bigg \} \geq 1.$$
By the argument we have in the outline, in order for the process to die out, we need $$\lambda_1 \bigg \{ \rho [(1-\Phi(1))(d-2) + (1-\Phi(2)) - \frac{\Phi(2)}{\rho} ] \bigg \} \leq 1 \quad \forall \rho \in (0, 1),$$ which leads to $$\displaystyle{ \lambda_1 \leq \frac{1}{ (1-\Phi(1))(d-2) + (1-2\Phi(2))} }$$ and finishes the proof. ◻
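As a quick numerical sanity check of the estimate [\[ineq\]](#ineq){reference-type="eqref" reference="ineq"} (an illustration only, again assuming `networkx` is available): take a connected set $A$ of three degree-3 vertices inside a large binary tree, so that $d = 3$ and $d|A| - 2(|A|-1) = 5$.

```python
import networkx as nx

T = nx.balanced_tree(r=2, h=5)     # inner vertices have degree 3 (= d)
A = {1, 3, 4}                      # node 1 and its two children, a sub-tree of T
boundary = list(nx.edge_boundary(T, A))
print(len(boundary))               # 5 edges leave A, matching d|A| - 2(|A| - 1)
```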
**Lemma 3**. Let $A$ be any finite subset of the vertex-set $V_0$ of $T_d$. Let all sites in $A$ have exactly $i$ infections at $t=0$, and $i$ satisfies the condition that $$(d-2) + 1 - (i+1)\Phi(i+1) > 0.$$ Let $\lambda_i$ be an infection rate which leads to the die out of the process starting at $A$. Then $$\label{new lem1}
\displaystyle{ \lambda_i \leq \frac{1}{ (1-\Phi(1))(d-2) + (1-\Phi(i+1)) - i \Phi(i+1) }}.$$
*Proof.* We have
$$\label{eq:2}
\begin{split}
g^{A}_{\lambda_i}(t) = & i|A|t(\frac{\nu_{\rho}(A)}{\rho}) + \lambda_i i N_A t(1-\Phi(1))(\nu_{\rho}(A) \rho) \\ &+ \lambda_i i|A|t (\frac{\nu_{\rho}(A)}{\rho^{i}})\Phi(i+1) + \lambda_i i|A|t(\nu_{\rho}(A)\rho)(1-\Phi(i+1)) \\ &+ \nu_{\rho}(A)\bigg [1- i|A|t -\lambda_i i N_A t(1-\Phi(1)) \\ &- \lambda_i i|A|t \Phi(i+1)- \lambda_i i|A|t(1-\Phi(i+1) \bigg] \\ &+ o(t),
\end{split}$$
as $t \downarrow 0$,
Table 2 shows how each term in [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"} represents a case of interaction between set $A$ and its surrounding.
[\[tab:table2\]]{#tab:table2 label="tab:table2"}
**Term** **Representation**
---------------------------------------------------------- --------------------------
$i|A|t(\frac{\nu_{\rho}(A)}{\rho})$ Healing
$i \lambda_i N_A t(1-\Phi(1))(\nu_{\rho}(A) \rho)$ Infecting surrounding
$i \lambda_i|A|t(\nu_{\rho}(A)\rho)(1-\Phi(i+1))$ Infecting nodes in $A$
$i \lambda_i|A|t(\frac{\nu_{\rho}(A)}{\rho^i})\Phi(i+1)$ Killing the nodes in $A$
The Rest No change
: Representation of Each Term in [\[eq:2\]](#eq:2){reference-type="eqref" reference="eq:2"}.
By taking the first derivative with respect to $t$, we have $$\begin{split}
\frac{d}{dt}{g^{A}_{\lambda_i}(t)} \bigg| _{t=0} =& (1-\rho) \nu_{\rho}(A) \bigg\{ \frac{i|A|}{\rho} + (\frac{\lambda_i i|A| (\rho^{i-1} + ... + 1)}{\rho^{i}})\Phi(i+1) \\ &- \lambda_i i N_A (1-\Phi(1)) - \lambda_i i|A|(1-\Phi(i+1)) \bigg \} \\ & \leq (1-\rho) \nu_{\rho}(A) \bigg \{ \frac{i|A|}{\rho} + (\frac{\lambda_i i|A| (\rho^{i-1} + ... + 1)}{\rho^{i}})\Phi(i+1) \\ &- \lambda_i i\bigg [d |A| - 2(|A|-1) \bigg ]
(1-\Phi(1)) - \lambda_i i|A|(1-\Phi(i+1)) \bigg \} \\ &= (1-\rho) \nu_{\rho}(A) \bigg\{ |A|i \bigg [ \frac{1}{\rho} + \frac{\lambda_i(\rho^{i-1} + ... + 1) \Phi(i+1)}{\rho^i} \\ &-\lambda_i(1-\Phi(1))[d-2] -\lambda_i(1-\Phi(i+1)) \bigg ] - 2\lambda_i i (1-\Phi(1)) \bigg \} \\ & \leq 0
\end{split}$$ when $$\frac{1}{\rho} + \frac{\lambda_i(\rho^{i-1} + ... + 1) \Phi(i+1)}{\rho^i} -\lambda_i(1-\Phi(1))[ d-2 ] -\lambda_i(1-\Phi(i+1)) \leq 0,$$ i.e. $$\rho \lambda_i \bigg \{ (1-\Phi(1))(d-2) + (1-\Phi(i+1)) - \frac{\Phi(i+1)(\rho^{i-1} + ... + 1)}{\rho^i} \bigg \} \geq 1.$$ In order for the process to die out, we must have $$\label{eq:7}
\lambda_i \bigg \{ \rho \bigg [ (1-\Phi(1))(d-2) + (1-\Phi(i+1)) - \frac{\Phi(i+1)(\rho^{i-1} + ... + 1)}{\rho^i} \bigg ] \bigg \} \leq 1 \quad \forall \rho.$$ Since the first derivative with respect to $\rho$ inside the big curly bracket is always larger than 0, the maximum is obtained when $\rho = 1$, which further leads to $$\label{new lem2}
\displaystyle{ \lambda_i \leq \frac{1}{ (1-\Phi(1))(d-2) + (1-\Phi(i+1)) - i \Phi(i+1) }}$$ and completes the proof. ◻
We can also see that the upper bound on the infection parameter $\lambda_i$ for $i > 1$ in [Lemma 2](#lem3.1){reference-type="ref" reference="lem3.1"} is strictly larger than the bound obtained for $i = 1$ in [\[lemma2\]](#lemma2){reference-type="eqref" reference="lemma2"}, so we can simply take the bound for $i = 1$ as the smallest upper bound when all nodes in $A$ carry the same number of infections.
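This comparison can be verified directly: since $\Phi$ is nondecreasing, for $i > 1$ we have $(i+1)\Phi(i+1) \geq 3\Phi(2) > 2\Phi(2)$ whenever $\Phi(2) > 0$, so $$(1-\Phi(1))(d-2) + 1 - (i+1)\Phi(i+1) < (1-\Phi(1))(d-2) + 1 - 2\Phi(2),$$ and hence, as long as both denominators are positive, the upper bound for $i > 1$ is strictly larger than the one for $i = 1$.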
These general cases will be helpful when dealing with more complicated situations. Now we focus on the case where the nodes in the set $A$ carry different numbers of infections. We define $n_0(v)$ to be the number of infections that the node $v \in A$ has at $t=0$, and we start with the case where $n_0(v)$ takes exactly two values, $i$ and $j$.
**Lemma 4**. Suppose the set of initial infections $A$ satisfies $A = B \cup C$, where each node in $B$ has $i$ infections and each node in $C$ has $j$ infections at $t=0$. Without loss of generality, we assume $j > i$. Then the process dies out only when
$$\displaystyle{ \lambda \leq \frac{1}{ (1-\Phi(1))(d-2) + (1-\Phi(i+1)) - i \Phi(i+1) }}.$$
*Proof.* We define $\hat{N}$ to be the number of nearest-neighbor pairs $x, y$ with $x \in B$ and $y \in C$ in the graph $G_0$, i.e. $\hat{N} = |\{ (x, y): x \in B, y\in C, x \sim_0 y \}|$. In this case, the expression for $g^{A}_{\lambda}(t)$ has more terms than in the previous two cases, because the interactions between $B$ and $C$ must also be accounted for. So we have $$\label{eq:3}
\begin{split}
g^{A}_{\lambda}(t) =& \lambda it(N_B - \hat{N})[1-\Phi(1)]\nu_\rho(A)\rho + \lambda i |B|t[1-\Phi(i+1)]\nu_\rho(A)\rho \\ &+ \frac{\lambda i |B|t\Phi(i+1)\nu_\rho(A)}{\rho^i} + \frac{|B|it\nu_\rho(A)}{\rho} + \lambda jt(N_C - \hat{N})[1-\Phi(1)]\nu_\rho(A)\rho \\ &+ \lambda j |C|t[1-\Phi(j+1)]\nu_\rho(A)\rho + \frac{\lambda j |C|t\Phi(j+1)\nu_\rho(A)}{\rho^j} + \frac{|C|jt\nu_\rho(A)}{\rho} \\ &+ \lambda i \hat{N} t [1-\Phi(j+1)]\nu_\rho(A) \rho + \lambda j \hat{N} t [1-\Phi(i+1)]\nu_\rho(A) \rho \\ &+ \frac{\lambda \hat{N} i t \Phi(j+1)\nu_\rho(A)}{\rho^j} + \frac{\lambda \hat{N} j t \Phi(i+1)\nu_\rho(A)}{\rho^i} \\ &+ \nu_\rho(A) \bigg [1 - \lambda it(N_B - \hat{N})[1-\Phi(1)] - \lambda i |B|t[1-\Phi(i+1)] \\ &- \lambda i |B|t\Phi(i+1) - |B|it - \lambda jt(N_C - \hat{N})[1-\Phi(1)] \\ &- \lambda j |C|t[1 -\Phi(j+1)] -\lambda j |C|t\Phi(j+1) - |C|jt \\ &- \lambda i \hat{N} t [1-\Phi(j+1)] - \lambda j \hat{N} t [1-\Phi(i+1)] \\ &- \lambda \hat{N} i t \Phi(j+1) - \lambda \hat{N} j t \Phi(i+1) \bigg ] \\ &+ o(t)
\end{split}$$ as $t \downarrow 0$, where $$\label{eq:BC}
N_{B} = |\{ \langle x, y\rangle : x \in B, y\notin B \}|$$ and $$N_{C} = |\{ \langle x, y\rangle : x \in C, y\notin C \}|$$ correspond to the respective number of neighbor nodes that set $B$ or set $C$ has.
Table 3 shows how each term in [\[eq:3\]](#eq:3){reference-type="eqref" reference="eq:3"} represents a case of interaction within set $A$ and between $A$ and its surroundings.
[\[tab:table3\]]{#tab:table3 label="tab:table3"}
**Term** **Representation**
----------------------------------------------------------- -------------------------
$\lambda it(N_B - \hat{N})[1-\Phi(1)]\nu_\rho(A)\rho$ $B$ infects surrounding
$\lambda i |B|t[1-\Phi(i+1)]\nu_\rho(A)\rho$ $B$ infects itself
$\frac{\lambda i |B|t\Phi(i+1)\nu_\rho(A)}{\rho^i}$ $B$ kills itself
$\frac{|B|it\nu_\rho(A)}{\rho}$ $B$ heals
$\lambda jt(N_C - \hat{N})[1-\Phi(1)]\nu_\rho(A)\rho$ $C$ infects surrounding
$\lambda j |C|t[1-\Phi(j+1)]\nu_\rho(A)\rho$ $C$ infects itself
$\frac{\lambda j |C|t\Phi(j+1)\nu_\rho(A)}{\rho^j}$ $C$ kills itself
$\frac{|C|jt\nu_\rho(A)}{\rho}$ $C$ heals
$\lambda i \hat{N} t [1-\Phi(j+1)]\nu_\rho(A) \rho$ $B$ infects $C$
$\lambda j \hat{N} t [1-\Phi(i+1)]\nu_\rho(A) \rho$ $C$ infects $B$
$\frac{\lambda \hat{N} i t \Phi(j+1)\nu_\rho(A)}{\rho^j}$ $B$ kills $C$
$\frac{\lambda \hat{N} j t \Phi(i+1)\nu_\rho(A)}{\rho^i}$ $C$ kills $B$
The Rest No change
: Representation of Each Term in [\[eq:3\]](#eq:3){reference-type="eqref" reference="eq:3"}.
We take the first derivative of the generating function with respect to $t$ and let $t = 0$. Then we have $$\label{eq:4}
\begin{split}
\frac{d}{dt}{g^{A}_{\lambda}(t)} \bigg| _{t=0} =& \lambda i(N_B - \hat{N})[1-\Phi(1)]\nu_\rho(A)\rho + \lambda i |B|[1-\Phi(i+1)]\nu_\rho(A)\rho \\ &+ \frac{\lambda i |B|\Phi(i+1)\nu_\rho(A)}{\rho^i} + \frac{|B|i\nu_\rho(A)}{\rho} \\ &+ \lambda j(N_C - \hat{N})[1-\Phi(1)]\nu_\rho(A)\rho + \lambda j |C|[1-\Phi(j+1)]\nu_\rho(A)\rho \\ &+ \frac{\lambda j |C|\Phi(j+1)\nu_\rho(A)}{\rho^j} + \frac{|C|j\nu_\rho(A)}{\rho} + \lambda i \hat{N} [1-\Phi(j+1)]\nu_\rho(A) \rho \\ &+ \lambda j \hat{N} [1-\Phi(i+1)]\nu_\rho(A) \rho + \frac{\lambda \hat{N} i \Phi(j+1)\nu_\rho(A)}{\rho^j} \\ &+ \frac{\lambda \hat{N} j \Phi(i+1)\nu_\rho(A)}{\rho^i} \\ &+ \nu_\rho(A) \bigg [ - \lambda i(N_B - \hat{N})[1-\Phi(1)] - \lambda i |B|[1-\Phi(i+1)] \\ &- \lambda i |B|\Phi(i+1) - |B|i - \lambda j(N_C - \hat{N})[1-\Phi(1)] \\ &- \lambda j |C|[1 -\Phi(j+1)] - \lambda j |C|\Phi(j+1) \\ &- |C|j - \lambda i \hat{N} [1-\Phi(j+1)] - \lambda j \hat{N} [1-\Phi(i+1)] \\ &- \lambda \hat{N} i \Phi(j+1) - \lambda \hat{N} j \Phi(i+1) \bigg ].
\end{split}$$ We take the common factor out, so [\[eq:4\]](#eq:4){reference-type="eqref" reference="eq:4"} leads to $$\label{eq:5}
\begin{split}
\nu_\rho(A)(1-\rho) & \bigg \{ - \lambda i(N_B - \hat{N})[1-\Phi(1)] -\lambda i |B|[1-\Phi(i+1)] \\ &+ \frac{\lambda i |B|\Phi(i+1)(\rho^{i-1} + ... + 1)}{\rho^i} + \frac{|B|i}{\rho} -\lambda j(N_C - \hat{N})[1-\Phi(1)] \\ &-\lambda j |C|[1-\Phi(j+1)] +\frac{\lambda j |C|\Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} + \frac{|C|j}{\rho} \\ &-\lambda i \hat{N} [1-\Phi(j+1)] - \lambda j \hat{N} [1-\Phi(i+1)] \\ &+\frac{\lambda \hat{N} i \Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} + \frac{\lambda \hat{N} j \Phi(i+1)(\rho^{i-1} + ... + 1)}{\rho^i} \bigg \} \\ &\leq \nu_\rho(A)(1-\rho) \bigg \{ \frac{|A|j}{\rho} + \frac{\lambda|A|j(\rho^{j-1} + ... + 1)}{\rho^j} \\ &+ \frac{2\lambda\hat{N}j\Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} - \lambda iN_B[1-\Phi(1)] \\ &-\lambda i |B|[1-\Phi(i+1)] -\lambda jN_C[1-\Phi(1)] -\lambda j |C|[1-\Phi(j+1)] \\ &+\lambda i \hat{N} [\Phi(j+1)-\Phi(1)] + \lambda j \hat{N} [\Phi(i+1)-\Phi(1)] \bigg \}
\end{split}$$ since $i \leq j$ and $|B| + |C| = |A|$. By [\[eq:BC\]](#eq:BC){reference-type="eqref" reference="eq:BC"} and [\[ineq\]](#ineq){reference-type="eqref" reference="ineq"} $$\begin{split}
N_B \geq [(d+1) |B| - 2(|B|-1)] \\
N_C \geq [(d+1) |C| - 2(|C|-1)],
\end{split}$$ so we have $$\begin{split}
\eqref{eq:5} \leq& \nu_\rho(A)(1-\rho) \bigg \{ \frac{|A|j}{\rho} + \frac{\lambda|A|j(\rho^{j-1} + ... + 1)}{\rho^j} \\ &+ \frac{2\lambda\hat{N}j\Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} \\ &- \lambda i \bigg [(d+1) |B| - 2(|B|-1) \bigg] [1-\Phi(1)] -\lambda i |B|[1-\Phi(i+1)] \\ &-\lambda j \bigg [(d+1) |C| - 2(|C|-1) \bigg][1-\Phi(1)] -\lambda j |C| [1-\Phi(j+1)] \\ &+\lambda i \hat{N} [\Phi(j+1)-\Phi(1)] + \lambda j \hat{N} [\Phi(i+1)-\Phi(1)] \bigg \} \leq 0
\end{split}$$
when $$\begin{split}
&\frac{|A|j}{\rho} + \frac{\lambda|A|j(\rho^{j-1} + ... + 1)}{\rho^j} + \frac{2\lambda\hat{N}j\Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} \\ &- \lambda i |B| (d-2) [1-\Phi(1)] -\lambda i |B|[1-\Phi(i+1)] \\&-\lambda j |C| (d-2)[1-\Phi(1)] -\lambda j |C| [1-\Phi(j+1)] \\&+\lambda i \hat{N} [\Phi(j+1)-\Phi(1)] + \lambda j \hat{N} [\Phi(i+1)-\Phi(1)] \leq 0,
\end{split}$$ and this is equivalent to $$\begin{split}
&- \frac{\lambda|A|j(\rho^{j-1} + ... + 1)}{\rho^j} - \frac{2\lambda\hat{N}j\Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} \\ &+ \lambda i |B| (d-2) [1-\Phi(1)] + \lambda i |B|[1-\Phi(i+1)] +\lambda j |C| (d-2)[1-\Phi(1)] \\ &+\lambda j |C| [1-\Phi(j+1)] -\lambda i \hat{N} [\Phi(j+1)-\Phi(1)] \\ &- \lambda j \hat{N} [\Phi(i+1)-\Phi(1)] \geq \frac{|A|j}{\rho}.
\end{split}$$ In order for the contact process to die out, we cannot let [\[eq:5\]](#eq:5){reference-type="eqref" reference="eq:5"} be smaller than 0, so a necessary condition is $$\label{eq:8}
\begin{split}
&- \frac{\lambda|A|j(\rho^{j-1} + ... + 1)}{\rho^j} - \frac{2\lambda\hat{N}j\Phi(j+1)(\rho^{j-1} + ... + 1)}{\rho^j} \\ &+ \lambda i |B| (d-2) [1-\Phi(1)] + \lambda i |B|[1-\Phi(i+1)] \\ &+\lambda j |C| (d-2)[1-\Phi(1)] +\lambda j |C| [1-\Phi(j+1)] \\ &-\lambda i \hat{N} [\Phi(j+1)-\Phi(1)] - \lambda j \hat{N} [\Phi(i+1)-\Phi(1)] \leq \frac{|A|j}{\rho} \quad \forall \rho \in (0, 1).
\end{split}$$ We can then split $|A|$ back into $|B|$ and $|C|$. Since the terms involving $\hat{N}$ are all negative, [\[eq:8\]](#eq:8){reference-type="eqref" reference="eq:8"} holds whenever $$\label{eq:9}
- \frac{\lambda|B|j(\rho^{j-1} + ... + 1)}{\rho^j} + \lambda i |B| (d-2) [1-\Phi(1)] + \lambda i |B|[1-\Phi(i+1)] \leq \frac{|B|j}{\rho}$$ and $$\label{eq:10}
- \frac{\lambda|C|j(\rho^{j-1} + ... + 1)}{\rho^j} +\lambda j |C| (d-2)[1-\Phi(1)] +\lambda j |C| [1-\Phi(j+1)] \leq \frac{|C|j}{\rho}.$$ Note that both [\[eq:9\]](#eq:9){reference-type="eqref" reference="eq:9"} and [\[eq:10\]](#eq:10){reference-type="eqref" reference="eq:10"} are satisfied when we employ Lemma [Lemma 3](#lemma2){reference-type="ref" reference="lemma2"} and use $\lambda_i$ as our sufficient upper bound. ◻
Having proved the case of two distinct infection numbers among the initial infectives, we can generalize it to the following lemma.
**Lemma 5**. Suppose the set of initial infections is $A =\bigcup_{i=1}^n A_i$, where each node in $A_i$ initially has $m_i$ infections. A sufficient condition for the process to die out is $$\displaystyle{ \lambda \leq \frac{1}{ (1-\Phi(1))(d-2) + (1-\Phi(k+1)) - k \Phi(k+1) }},$$ where $k = \min_i m_i$.
Now, one last thing we need to check is whether the statement $(g^A_\lambda(t))' \leq 0$ still holds when some nodes are killed after a finite time. That is, $$E^A_\lambda \bigg(\frac{d}{dt}{g_\lambda^{ \hat{V_u^*}}}(t) \bigg) \bigg| _{t=0} \leq 0,$$ where $g_\lambda^{\hat{V_u^*}}(t)$ is the corresponding function when some nodes of $T_d$ have already been killed by time $t$, i.e. $\xi(x_i) = \emptyset$ for some $i$. In this scenario, the death of a node leads to two changes in the graph: the removal of the infections at that site and of the edges connecting it to its neighbors. We already accounted for the decrease in the overall number of infections when calculating $g^A_\lambda(t)$, so the only remaining effect of killing a node is the change in the structure of the graph caused by removing edges.
Whenever a node is removed from $G_t$, we can see that $T_d$ is split into a finite combination of finite graphs and infinite trees. Since we already proved that the process dies out almost surely on any finite graph, it suffices, without loss of generality, to look at a single infinite tree. In this case, the removal of edges affects our reasoning by reducing the size of $N_A$. If $N_A = |\{ \langle x, y\rangle : x \in A, y \notin A \}| \leq d |A| - 2(|A|-1)$, this actually results in an upper bound which is strictly larger than the bound we use for $\lambda_1$, so it does not invalidate our reasoning. For example, if we look at the case in which there is one infection per site, then $$\begin{split}
\frac{d}{dt}{g^{A}_{\lambda}(t)} \bigg| _{t=0} =& (1-\rho) \nu_{\rho}(A) \bigg\{ \frac{|A|}{\rho} + (\frac{\lambda|A|}{\rho})\Phi(2) - \lambda N_A (1-\Phi(1)) \\ &- \lambda|A|(1-\Phi(2)) \bigg \} \\ &\leq (1-\rho) \nu_{\rho}(A) \bigg\{ \frac{|A|}{\rho} + (\frac{\lambda|A|}{\rho})\Phi(2) - \lambda|A|(1-\Phi(2)) \bigg \} \leq 0
\end{split}$$ when $$\rho\lambda \bigg \{ (1-\Phi(2)) -\frac{\Phi(2)}{\rho} \bigg \} \geq 1.$$ If we want the process to die out, we need $$\rho\lambda \bigg \{ (1-\Phi(2)) -\frac{\Phi(2)}{\rho} \bigg \} \leq 1 \quad \forall \rho,$$ so $\lambda \leq \frac{1}{1-2\Phi(2)}$. Since the denominator here omits the positive term $(1-\Phi(1))(d-2)$, this bound is strictly larger than the upper bound we use for $\lambda_1$, which resolves the concern.
Together with Lemma [Lemma 5](#lemma3.4){reference-type="ref" reference="lemma3.4"}, we complete the proof of Theorem [Theorem 2](#upperb){reference-type="ref" reference="upperb"}.
### Lower bound
We also give a short outline of the proof of Theorem [Theorem 3](#thm:3){reference-type="ref" reference="thm:3"}. We track the total number of infections carried by the whole graph over time and represent it by an integer-valued jump process. The jump process has an absorbing state at $0$, representing the death of the process, and positive probabilities of increasing and decreasing, representing the changes in the overall number of infections over time. Now, if we find the largest value of $\lambda$ for which the corresponding jump probabilities lead the jump process to its absorbing state almost surely, then $\lambda$ needs to be larger than this value in order for the contact process to survive.
*Proof.* Let $(N_t)_{t \geq 0}$ be the integer-valued process representing the number of infections on $V_t$, i.e. $(N_t) = |(\xi_t)| \hspace{0.2cm} \forall t$.
$N_t$ increases by 1 whenever the virus passes to a healthy node or to a site that is already infected. By the setup of our contact process, a virus is successfully passed to a healthy node with rate $\lambda (1-\Phi(1))$, and to a node that is already infected with rate $\lambda (1-\Phi(i+1))$, where $i$ denotes the number of infections the recipient is currently carrying. Since $\Phi(x)$ is monotonically increasing in $x$, the rate of infection is at most $\lambda (1-\Phi(1))$.
Also, $N_t$ decreases by 1 whenever an infection is healed, and when the virus kills its host by being passed to a site that is already infected, $N_t$ decreases by the number of infections on that host. Again because $\Phi(x)$ is monotonically increasing in $x$, the rate of decrease is at least $1+ \lambda \Phi(2)$.
Now, we couple $(N_t)_{t \geq 0}$ with a one-dimensional continuous-time random walk $(W_t)_{t \geq 0}$ on $\mathbb{N} \cup \{0\}$ with an absorbing state at 0. Note that when the walk reaches the absorbing state, there is no virus left on the graph, so the contact process dies out. In this case, $(N_t)$ is stochastically dominated by $(W_t)$ if the probability of increasing by 1 for $(W_t)$ is
$$\displaystyle p_{W} = \frac{\lambda (1-\Phi(1))}{1+ \lambda (1-\Phi(1)) + \lambda \Phi(2)}.$$
Since $(W_t)$ is a one-dimensional random walk, we have
$$\displaystyle \lim_{t \to \infty} P(W_t = 0) = 1 \quad \text{ if } \quad p_W \leq \frac{1}{2},$$ which is equivalent to saying that
$$\displaystyle\frac{\lambda (1-\Phi(1))}{1+ \lambda (1-\Phi(1)) + \lambda \Phi(2)} \leq \frac{1}{2},$$ i.e. $$\label{eq:last}
\displaystyle \lambda \leq \frac{1}{1-\Phi(1)-\Phi(2)}.$$
Any $\lambda$ smaller than the value in [\[eq:last\]](#eq:last){reference-type="eqref" reference="eq:last"} leads to the death of the process almost surely, as the number of infections in the graph converges to 0. Therefore, the infection rate needs to be larger than the bound in [\[eq:last\]](#eq:last){reference-type="eqref" reference="eq:last"} in order for the contact process to survive, which finishes the proof of Theorem [Theorem 3](#thm:3){reference-type="ref" reference="thm:3"}. ◻
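As a quick numerical sanity check of the algebra above (not part of the proof), the following snippet verifies that $p_W = 1/2$ exactly at the threshold in [\[eq:last\]](#eq:last){reference-type="eqref" reference="eq:last"}; the values of $\Phi(1)$ and $\Phi(2)$ are hypothetical, chosen only so that $1 - \Phi(1) - \Phi(2) > 0$.

```python
# Numerical sanity check of the survival threshold derived above.
# phi1 = Phi(1) and phi2 = Phi(2) are hypothetical values, chosen only so
# that 1 - phi1 - phi2 > 0; they are not dictated by the model.
phi1, phi2 = 0.2, 0.25

def p_W(lam):
    """Upward jump probability of the dominating walk (W_t)."""
    up = lam * (1 - phi1)        # upper bound on the rate of gaining an infection
    down = 1 + lam * phi2        # lower bound on the rate of losing infections
    return up / (up + down)

lam_star = 1 / (1 - phi1 - phi2)          # the bound in eq. (eq:last)

assert abs(p_W(lam_star) - 0.5) < 1e-12   # critical: p_W = 1/2 exactly at the bound
assert p_W(0.9 * lam_star) < 0.5          # below the bound the walk is absorbed a.s.
assert p_W(1.1 * lam_star) > 0.5          # above it, survival becomes possible
print("threshold lambda =", lam_star)
```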
# Acknowledgement {#acknowledgement .unnumbered}
The author would like to thank Professor Carl Mueller for his invaluable guidance, support, and patience throughout this work.
---
abstract: |
  A result of Honda, Kazez, and Matić states that a contact structure is tight if and only if all its supporting open books are right-veering. We give a combinatorial way of detecting left-veering arcs in open books, which implies the existence of an algorithm that detects the right-veering property for diffeomorphisms of compact surfaces with boundary.
author:
- Miguel Orbegozo Rodriguez
bibliography:
- main.bib
title: Detecting right-veering diffeomorphisms
---
# Introduction
Let $M$ be a closed, oriented $3$-manifold. A *contact structure* on $M$ is a plane field $\xi$ that is maximally non-integrable; equivalently, it can be expressed as the kernel of a $1$-form $\alpha$ such that $\alpha \wedge d \alpha > 0$. An *open book decomposition* is a pair $(B, \pi)$, where $B$, called the *binding*, is an oriented link in $M$; and $\pi : M \setminus B \rightarrow S^1$ is a fibration of the complement of the binding over the circle such that the fibres, called the *pages*, are (diffeomorphic) compact surfaces with common boundary $B$. This is determined up to diffeomorphism by the pair $(\Sigma, \varphi)$, where $\Sigma$ is a page and $\varphi$ is the mapping class of the return map of $\pi$. Thanks to the following theorem of Giroux, contact structures can be studied by studying open book decompositions.
**Theorem 1**. *[@Giroux] There exists a 1-1 correspondence between contact structures up to isotopy and open book decompositions up to positive stabilisation.*
Contact structures exhibit a fundamental dichotomy; those that admit an embedding of a disc $D^2$ such that the boundary of the disc is tangent to the contact structure are called *overtwisted*, while those that do not are called *tight*. Overtwisted contact structures were classified by Eliashberg, who showed in [@Eliashberg] that there is a unique isotopy class of overtwisted contact structures in each homotopy class of plane fields. Tight contact structures, on the other hand, are not as well understood. Thus being able to distinguish between tight and overtwisted contact structures becomes an important problem.
A result of Honda, Kazez, and Matić [@Right-veering] states that a contact structure is tight if and only if all its supporting open book decompositions are *right-veering*. An open book is right-veering if its monodromy sends every properly embedded arc to the right (a precise definition will be given in Section [2](#Preliminaries){reference-type="ref" reference="Preliminaries"}). Therefore the existence of a single arc that is not sent to the right (a *left-veering arc*) is enough to guarantee overtwistedness of the contact structure. However, it is often difficult to determine if such an arc exists. Indeed, if we take a *basis* of the surface (a collection of arcs which cut the surface into a disc), the images of these arcs under a diffeomorphism determine it up to isotopy; however, it is possible to find monodromies that are not right-veering and yet send each arc of some basis to the right. Looking at more arcs does not necessarily solve the problem, as similar counterexamples can be found, for example, when the collection of arcs we look at is a *complete set*, i.e. a maximal collection of pairwise disjoint and non-isotopic arcs.
Thus the usual approach to deciding whether an open book is right-veering is either to exhibit an arc that goes to the left, which is found in a non-systematic way, or to divide all arcs into different classes and then show that each class can only contain arcs sent to the right. The main issue is that, in both cases, the argument depends on the specific example at hand.
Our main Theorem shows that a left-veering arc can be detected combinatorially from a basis of arcs and its image.
**Theorem 1**. *Let $(\Sigma, \varphi)$ be an open book, and $\Gamma$ a basis for $\Sigma$ with all arcs duplicated. Suppose that there exists a left-veering arc $\gamma$, which we can assume to be minimal. Then there exists a collection of extended towers $\{\mathcal{T}_i\}_{i = 1}^{N}$ (where $N$ is the number of intersections between $\gamma$ and the basis) supported in (subcollections of) $\Gamma$ such that:*
- *$\mathcal{T}_1$ is a completed extended tower.*
- *$\mathcal{T}_i$ is a completed partial extended tower, whose starting point is the adjacent point to the connecting vertex of $\mathcal{T}_{i-1}$.*
- *$\mathcal{T}_N$ is an incomplete partial extended tower, whose starting point is the adjacent point to the connecting vertex of $\mathcal{T}_{N-1}$.*
*Conversely, if we have such a collection, then there exists a left-veering arc $\gamma$.*
The non-standard terminology will be defined in Sections [2](#Preliminaries){reference-type="ref" reference="Preliminaries"} and [3](#Towers){reference-type="ref" reference="Towers"}. For now, we provide an overview of the result. We show that, given an open book $(\Sigma, \varphi)$ and a basis $\mathcal{B}$ of $\Sigma$, a left-veering arc induces a collection of objects, called *extended towers*, that are constructed using $\mathcal{B}$. Conversely, the existence of such a collection implies the existence of a left-veering arc, which moreover can be constructed from the extended towers.
The strategy is as follows. We start with the basis $\mathcal{B}$ with all its arcs doubled, and assume there is a left-veering arc. We show that this implies the existence of another left-veering arc, which we call *minimal* (with respect to the basis), that is, we can divide the arc into arc segments with the property that the endpoints of each arc segment are either on the boundary or on a basis arc, and the basis and the arc segments are disjoint except for these endpoints; and these arc segments are all fixable except one, which is left-veering. Then we show that fixable arc segments are detected by *completed extended towers*, while left-veering arc segments are detected by *incomplete extended towers*.
Finally, we observe that the converse also holds. This effectively gives an algorithm to detect the right-veering property by running a search along the finitely many possibilities for extended towers.
The outline of the paper is as follows. In Section [2](#Preliminaries){reference-type="ref" reference="Preliminaries"} we give some preliminary definitions and we introduce regions, which will be the building blocks of our extended towers, and we show that they detect left-veering arcs and fixable arc segments in simple cases. In Section [3](#Towers){reference-type="ref" reference="Towers"} we define extended towers and their properties. The main results are proved in Section [4](#Results){reference-type="ref" reference="Results"}. We first show the existence of minimal left-veering arcs, and we prove by induction that each segment of such an arc is detected in a basis by an extended tower. Finally, putting all the arc segments together gives the collection of extended towers, which are connected by the fixed points that are the endpoints of the arc segments. To conclude, in Section [5](#Example){reference-type="ref" reference="Example"} we show that the collection of extended towers needed to detect a left-veering arc can be arbitrarily large.
A recent result of Baldwin, Ni, and Sivek [@BNS] characterises right-veering diffeomorphisms for open books with connected boundary in terms of an invariant defined using Knot Floer homology called $b(K)$ (where $K$ is the binding), originally defined in [@Baldwin-Vela-Vick]. This invariant counts the first level of the filtration induced by the knot where the contact class vanishes. Baldwin, Ni, and Sivek's result then says that the monodromy of a fibred knot is not right-veering if and only if $b(K) = 1$, that is, the contact class vanishes in the lowest possible filtration level. It seems possible to link this result with ours, this is the subject of work in progress.
## Acknowledgements
I would like to thank my supervisor Andy Wand for invaluable advice and support.
# Preliminaries {#Preliminaries}
We start with some definitions regarding open books, and we introduce the concept of a region.
**Definition 2**. Let $\Sigma$ be a compact surface with nonempty boundary. A *properly embedded arc* is the image of an embedding $\alpha: [0,1] \hookrightarrow \Sigma$ such that $\alpha(0), \alpha(1) \in \partial \Sigma$. An *arc segment* is the image of an embedding of the unit interval that is not necessarily proper, i.e. we do not require that $\alpha(0), \alpha(1) \in \partial \Sigma$.
**Definition 3**. Let $(\Sigma, \varphi)$ be an open book. An *arc collection* in $\Sigma$ is a set of pairwise disjoint properly embedded arcs. An arc collection $\mathcal{B}$ such that $\Sigma \setminus \mathcal{B}$ is a disc is called a *basis*.
**Definition 4**. Let $(\Sigma, \varphi)$ be an open book, and $\Gamma$ an arc collection in $\Sigma$. If $\Sigma \setminus \Gamma$ contains an $n$-gon component with exactly one edge on each element of $\Gamma$, we say $\Gamma$ *cuts out an $n$-gon*.
**Definition 5**. Let $\alpha_1, \alpha_2$ be disjoint properly embedded oriented arcs in a compact surface with boundary $\Sigma$, such that there is a boundary arc going from $\alpha_1(1)$ to $\alpha_2(0)$. The *arc-slide* of $\alpha_1$ and $\alpha_2$ is (the isotopy class of) the arc $\beta$ that starts at $\alpha_1(0)$ and ends at $\alpha_2(1)$, such that $\alpha_1, \alpha_2$, and $\beta$ cut out a $6$-gon from $\Sigma$ whose standard boundary orientation coincides with the orientation from $\alpha_1$, $\alpha_2$, and $-\beta$.
![The arc-slide $\beta$ of two arcs $\alpha_1$ and $\alpha_2$. Observe we need to reverse the orientation of $\beta$ to obtain an orientation of the boundary of the $6$-gon.](Arc-slide.pdf){#fig:Arc-slide width="4cm"}
Any two bases for $\Sigma$ can be related by a sequence of arc-slides (see [@Contact-class] for a proof of this fact). We can also extend this definition to larger families of arcs; if a collection of pairwise disjoint arcs $\{ \alpha_i \}_{i=1}^n$ is such that there is a boundary arc going from $\alpha_i(1)$ to $\alpha_{i+1}(0)$, then the arc $\beta$ that starts at $\alpha_1 (0)$ and ends at $\alpha_n(1)$ and cuts out a disc from $\Sigma$ (whose standard boundary orientation coincides with the orientation from $\alpha_i$ and $-\beta$) is called the *arc-sum* of $\{ \alpha_i \}_{i=1}^n$.
We now give the notion of right-veering arcs as introduced in [@Right-veering]. Our definition is phrased in a slightly different way, in order to be consistent with the orientation convention that we will use, but is equivalent to the one in [@Right-veering].
**Definition 6**. Let $(\Sigma, \varphi)$ be an open book decomposition, and let $\alpha$ be an oriented properly embedded arc with starting point $x$. We will adopt the convention that its image $\varphi(\alpha)$ is given the opposite orientation to $\alpha$. We then say that $\alpha$ is *right-veering* (with respect to $\varphi$) if $\varphi(\alpha)$ is isotopic to $\alpha$ or, after isotoping $\alpha$ and $\varphi(\alpha)$ so that they intersect transversely with the fewest possible number of intersections, $( \alpha'(0),\varphi(\alpha)'(1))$ define the orientation of $\Sigma$ at $x$. In this latter case we will say that $\alpha$ is *strictly right-veering*. If $\alpha$ is not right-veering we say it is *left-veering*.
In Figure [2](#fig:RVArc){reference-type="ref" reference="fig:RVArc"} we can see that intuitively a right-veering arc $\alpha$ is such that $\varphi(\alpha)$ is to the right of $\alpha$ near the starting point once we have isotoped them so that they have the fewest possible number of intersections.
![A (strictly) right-veering arc $\alpha$.](RVArc.PNG){#fig:RVArc width="5cm"}
Although this definition only refers to the starting point of the oriented arc, we will usually say that an arc is right-veering to mean that both the arc itself and the arc with the opposite orientation are right-veering (thus referring to both endpoints). However, when we say that an arc is left-veering we only refer to its starting point. This is because one left-veering oriented arc is enough for an open book to support an overtwisted contact structure, by [@Right-veering], so we only need to detect one.
**Definition 7**. Let $(\Sigma, \varphi)$ be an open book decomposition. We say that $\varphi$ is *right-veering* if every oriented properly embedded arc in $\Sigma$ is right-veering.
Sometimes we will say that the open book itself is right-veering when the monodromy is right-veering.
Figure [3](#fig:RVBasis){reference-type="ref" reference="fig:RVBasis"} shows that, to determine if an open book is right-veering, it is not enough to check that every arc of a basis is right-veering. The page is a planar surface with $4$ boundary components, and the monodromy is determined by the images of the arcs from the basis.
![A basis of right-veering arcs, and a (dotted) left-veering arc.](RVBasis.pdf){#fig:RVBasis width="6cm"}
**Definition 8**. Let $(\Sigma, \varphi)$ be an open book. We say that an arc $\gamma$ is *bigon free* with respect to an arc $\alpha$ if $\gamma$ does not form any bigons with $\alpha$. Similarly, we say that an arc collection $\Gamma$ is *bigon free* if for every $\alpha, \beta \in \Gamma$, $\alpha$ does not form any bigons with $\varphi (\beta)$.
Given an arc collection $\Gamma$, we can always isotope $\varphi(\Gamma)$ so that $\Gamma$ is bigon free, and so we will always assume this is the case.
We now recall some definitions and notation from [@DetectingTightness].
**Definition 9**. Let $(\Sigma, \varphi)$ be an open book, $\Gamma = \{\alpha_i\}_{i = 1}^n$ an arc collection, and $\varphi(\Gamma) = \{\varphi(\alpha_i) \}_{i=1}^n$ its image under the mapping class $\varphi$ (we orient $\varphi(\alpha_i)$ with the opposite orientation to the one induced by $\alpha_i$). A *region $R$ in $(\Sigma, \varphi,\Gamma)$* (or *supported in $(\Sigma, \varphi,\Gamma)$*) is the image of an immersed $2k$-gon such that:
- The edges are mapped to $\Gamma$ and $\varphi (\Gamma)$ alternately.
- The orientations of the arcs and their images orient $\partial R$.
- Every corner is acute, i.e. for every vertex $x = \alpha_i \cap \varphi(\alpha_j)$, in a neighbourhood of $x$, $R$ only intersects one of the $4$ quadrants defined by $\alpha_i$ and $\varphi(\alpha_j)$ at $x$.
- The immersion restricted to the vertices of the $2k$-gon is injective.
A point $\alpha_i \cap \varphi(\alpha_j)$ is *positive* if the tangent vectors of $\alpha_i$ and $\varphi(\alpha_j)$ (in that order) determine the orientation of $\Sigma$ at the intersection point, and *negative* otherwise. Moreover, we will say a region is *positive* if the boundary orientation given by the orientation of the arcs from $\Gamma$ and $\varphi(\Gamma)$ coincides with the usual counterclockwise orientation, and *negative* otherwise.
We will denote positive intersection points by $\bullet$-points and negative intersection points by $\circ$-points. See Figure [4](#fig:Region){reference-type="ref" reference="fig:Region"} for an example of a region with its positive and negative points labelled. We will also denote the set of $\bullet$-points (respectively $\circ$-points) of a region $A$ by Dot($A$) (respectively Circ($A$)), and the set of vertices $\textrm{Dot}(A) \cup \textrm{Circ}(A)$ as $\textrm{V}(A)$.
![A region $R$, which is positive because the arcs orient $\partial R$ counterclockwise. We will use the convention that straight lines represent arcs and curved lines represent their images under $\varphi$.](Region.PNG){#fig:Region width="5cm"}
**Definition 10**. Let $R$ be a region in $(\Sigma, \varphi,\Gamma)$. We say that $R$ is *completed* if there exists another region $R'$, such that $\textrm{Circ}(R') \subset \textrm{Circ}(R)$, but the induced orientation of $\partial R'$ is the opposite orientation to the one in $R$. We will call this region the *completion* of $R$. If no such region exists we say $R$ is *not completed*.
*Remark 1*. A completion of a region need not use all of the arcs used in the region.
![On the left, a region $R$ and its completion $R'$. On the right, a region $R$ which is not completed.](Completions.PNG){#fig:Completion width="7cm"}
Now our setup differs slightly from the standard version of consistency as defined in [@LegendrianSurgery] and [@DetectingTightness]. The reason for this is that, while Honda, Kazez, and Matić's result establishes a relationship between the right-veering property and tightness, the two concepts are not equivalent, since for any overtwisted contact manifold we can find a supporting open book decomposition that is right-veering. The technology in [@LegendrianSurgery] and [@DetectingTightness] aims to detect tightness, while we aim to detect the right-veering property, and so it is natural that there will be similarities as well as differences.
**Definition 11**. Let $(\Sigma, \varphi)$ be an open book, $\Gamma$ an arc collection in $\Sigma$, and $\alpha_1, \alpha_2 \in \Gamma$. Assume there exists a boundary component $B$ of $\Sigma$ which contains an endpoint of $\alpha_1$ and an endpoint of $\alpha_2$. Then assume that $\varphi(\alpha_1)$ is boundary parallel near $B$ until it intersects $\alpha_2$, and does not intersect any other $\alpha \in \Gamma$ before doing so. This creates a triangle (with sides an arc segment of $\varphi(\alpha_1)$, an arc segment of $\alpha_2$ and an arc segment of $B$). We can see such a triangle in Figure [6](#fig:BasepointTriangle){reference-type="ref" reference="fig:BasepointTriangle"}. We will refer to this triangle as a *basepoint triangle*. For an arc collection $\Gamma$, we denote the set of $\circ$-points that are vertices of basepoint triangles by $\textrm{Circ}_{\partial}(\Gamma)$.
![A basepoint triangle (shaded).](BasepointTriangle.PNG){#fig:BasepointTriangle width="3cm"}
*Remark 2*. If there is an arc image $\varphi(\alpha)$ that intersects a basepoint triangle disjoint from $\alpha$, it must do so forming a bigon. Note that since we assume that our collections are bigon free then this cannot happen. We can generalise this situation for the case where, instead of a basepoint triangle, we have a disc component of $\Sigma \setminus (\Gamma \cup \{ \alpha \})$ with a unique edge on an arc $\alpha \in \Gamma$ (a basepoint triangle is the simplest case of such a disc, with an edge on an arc, and the other two on the boundary and an arc image respectively).
**Definition 12**. Let $\Gamma$ be an arc collection in an open book $(\Sigma, \varphi)$, $\{\varphi(\beta_i)\}_{i = 1}^n$ a subcollection of $\varphi(\Gamma)$, and $a$ an arc segment of an arc $\alpha \in \Gamma$ such that $\{\varphi(\beta_i)\}_{i = 1}^n \cup \{a\}$ cut out a disc $D$ from $\Sigma$. Then we say $a$ is *restricted in $\Gamma$*.
As in the previous remark, for a restricted edge $a$ and associated disc $D$, images of arcs disjoint from $D$ cannot intersect the restricted edge, because they would then have to form a bigon. Clearly the edge of a basepoint triangle that lies on an arc is restricted, with the disc $D$ being the basepoint triangle itself.
Now we want to show that, in the case where the arc-slide of a pair of arcs --with the opposite orientation-- is left-veering, regions with an edge on a basepoint triangle can detect this left-veering arc. We will see later that this is a simple example of an extended tower, and it will be the base case of our induction.
This means that we want to understand what possibilities there are for regions when we look at three arcs $\{ \alpha_0, \alpha_1, \alpha_2 \}$ that cut out a $6$-gon from the surface. Call this $6$-gon $P$. Then $\varphi(P)$ must also be a $6$-gon. Now consider an arc collection $\mathcal{C}$ (which may also include some of the arcs cutting out $P$). When segments of two of the arc images $\varphi(\alpha_i), \varphi(\alpha_j)$ form opposite sides of a rectangle which is a connected component of $\varphi(P) \setminus \mathcal{C}$, we will say that they are *parallel (with respect to $\mathcal{C}$)* along those segments.
**Proposition 13**. *Let $\alpha_0, \alpha_1, \alpha_2$ be properly embedded arcs that cut out a $6$-gon $P$ from $\Sigma$, oriented counterclockwise, and assume $\alpha_1$ and $\alpha_2$ are right-veering. Then $\alpha_0$ is left-veering if and only if there exists a positive region $R$ in $\{\alpha_1, \alpha_2\}$ contained in $P$, with $\bullet$-points on $\partial \Sigma$ and where one of the edges is the edge of a basepoint triangle, that has no completion.*
*Proof.* First assume that $\alpha_0$ is left-veering, which means (since $\alpha_2$ is right-veering), that it leaves $P$ by intersecting $\alpha_1$ in a point $z$. Since $\alpha_2$ is right-veering, $\varphi(\alpha_2)$ must leave $P$ by intersecting $\alpha_1$ in a point $y$. This in turn means that $\varphi(\alpha_1)$ must leave $P$ by intersecting $\alpha_2$ (as it cannot intersect $\varphi(\alpha_2)$), creating the region $R$, with an edge being an edge of the basepoint triangle formed by $\alpha_2$ and $\varphi(\alpha_1)$. For a contradiction, suppose that this region can be completed with a region $R'$, which must necessarily be a rectangle (it cannot be a bigon because we are assuming our collections are bigon free, and it cannot have more than 4 vertices because $R$ only has $2$ $\circ$-points). Moreover, the edge of this rectangle on $\alpha_2$ is restricted, because it is an edge of a basepoint triangle. This in turn means that the edge of the rectangle on $\alpha_1$ is restricted. Then the $\bullet$-point of $R'$ on $\alpha_1$ cannot be between $z$ and $y$, because then $\varphi(\alpha_1)$ would have to form a bigon. This means that it would have to be between $z$ and the other endpoint of $\alpha_1$ (that is, $\alpha_1(0)$). But this means that $\varphi(\alpha_0)$ intersects the restricted edge --a contradiction.
Conversely, assume that there exists a region $R$ satisfying the above conditions, in particular, it has no completion. Suppose for a contradiction that $\alpha_0$ is right-veering. But then, since the image of $P$ must be a disc, there must be an arc segment on $\alpha_1$ cutting out a disc with $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$, so it must be a restricted edge with respect to $\{ \alpha_1, \alpha_2 \}$. However this in turn means that $R$ has a completion --a contradiction. So $\alpha_0$ must be left-veering, see Figure [7](#fig:TriangleLV){reference-type="ref" reference="fig:TriangleLV"}.
![The incomplete region $R$ when $\alpha_0$ is left-veering.](LV6-gon.PNG "fig:"){#fig:TriangleLV width="5cm"} ◻
For completeness, we include the case where all three arcs cutting out a $6$-gon are right-veering.
**Proposition 14**. *Let $\alpha_0, \alpha_1, \alpha_2$ be properly embedded arcs cutting out a $6$-gon $P$, oriented counterclockwise, and assume they are right-veering. Then we can find a non-empty collection of regions in $\{ \alpha_0, \alpha_1, \alpha_2 \}$ such that every positive region is completed by a negative region, and every interior $\bullet$-point is a vertex of two regions (one positive and one negative).*
*Proof.* Suppose first that the image of each arc leaves $P$ by forming a basepoint triangle. Then we have an initial positive region $R$ (a $6$-gon) where the $\bullet$-points are endpoints of the arcs and the $\circ$-points are the intersection points where the arc images leave $P$. To see that this is completed, observe that the image of $P$ is again a $6$-gon which is the union of a completing region and the three basepoint triangles, and so we must have a negative region with the same $\circ$-points as $R$ and whose $\bullet$-points are the other endpoints of the arcs.
Now suppose that one arc image (we can assume it is $\varphi(\alpha_2)$) leaves $P$ without forming a basepoint triangle (so in this case, by intersecting $\alpha_1$). This forces the image of $\alpha_1$ to leave $P$ by intersecting $\alpha_2$ and creating a basepoint triangle. This immediately gives a positive region $R_1$. This region has a completion $R'_1$, because if it did not, $\alpha_0$ would have to be left-veering by Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"}. One of the $\bullet$-points is a vertex of the basepoint triangle formed by $\varphi(\alpha_1)$ and the other one is either an endpoint of $\alpha_1$ or an interior point $\alpha_1 \cap \varphi(\alpha_1)$, let us denote it by $x$. In the first case we have that $\alpha_0$ is isotopic to $\varphi(\alpha_0)$, as it cannot be left-veering because it would intersect a restricted edge on $\alpha_1$ but it also cannot be strictly right-veering because $\alpha_0$ with the opposite orientation would also have to be strictly right-veering and it would have to intersect the restricted edge on $\alpha_1$.
For the second case we have that, after $x$, $\varphi(\alpha_1)$ intersects $\alpha_0$ and then it must be parallel to $\varphi(\alpha_0)$ until their other endpoint (because the image of $P$ must be a disc). This gives another two regions, a positive one $R_2$ where the $\bullet$-points are $x$ and an endpoint of $\alpha_0$, and its completion $R'_2$, the part of $\varphi(P)$ where $\varphi(\alpha_0)$ and $\varphi(\alpha_1)$ are parallel. ◻
We have seen what regions arise in a $6$-gon when all arcs are right-veering, and when one of the arcs is left-veering. Eventually we want to detect a left-veering arc by dividing it into segments that can be fixed by the monodromy and a segment that is left-veering. Thus, now we turn our attention to arc segments that can be fixed by the monodromy.
**Definition 15**. Let $(\Sigma, \varphi)$ be an open book. If two arcs $\alpha_1, \alpha_2$ that cut out a $6$-gon $P$ with a third arc $\alpha_0$ (oriented counterclockwise) support a pair of regions $\{R_1, R'_1\}$ as in the second case of Proposition [Proposition 14](#RV){reference-type="ref" reference="RV"} (i.e there is a positive region $R_1$ in $P$ where one of the sides is a side of the basepoint triangle, and it is completed by a region $R_1'$ with a $\bullet$-point in the interior of $\alpha_1$) we say that $\alpha_2$ is *$\varphi$-contained* in $\alpha_1$ and we call $\{ R_1, R'_1 \}$ a *positive splitting pair*. We also call $\{R_2,R'_2\}$ a *negative splitting pair*, and we also say that $\alpha_0$ is *$\varphi$-contained* in $\alpha_1$.
**Definition 16**. Let $\Gamma$ be an arc collection in an open book $(\Sigma, \varphi)$, and $x, y \in \Gamma \cap \varphi(\Gamma)$ be two points (which could be on the boundary or interior points) that are fixed by some representative of $\varphi$ . Let $\gamma$ be an arc segment starting in $x$ and ending in $y$. We say $\gamma$ is *fixable by $\varphi$* if there is a representative of $\varphi$ that fixes it relative to $\partial \gamma$.
This means that $\gamma$ and $\varphi(\gamma)$ bound a collection of bigons that intersect only on points $\gamma \cap \varphi(\gamma)$, see Figure [8](#fig:FixableArc){reference-type="ref" reference="fig:FixableArc"} for an example.
*Remark 3*. If both endpoints of an arc segment are fixed by $\varphi$, properties like being right- or left-veering can be defined as for properly embedded arcs.
![On the left, $\gamma$ is fixable because we can isotope $\varphi(\gamma)$ relative to its endpoints to coincide with $\gamma$. On the right, $\gamma$ is not fixable.](FixableArc.PNG){#fig:FixableArc width="7cm"}
In practice, we want our fixed points to come from intersections $\alpha \cap \varphi(\alpha)$ where $\alpha$ is an arc of a chosen basis.
Similarly to Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"}, if we have $3$ arcs cutting out a $6$-gon, we can use regions to detect a fixable arc inside this $6$-gon.
**Proposition 17**. *Let $(\Sigma, \varphi)$ be an open book, and let $\alpha_0, \alpha_1, \alpha_2$ be properly embedded arcs cutting out a $6$-gon $P$ from $\Sigma$, oriented counterclockwise, and assume they are strictly right-veering. Let $\gamma$ be an arc segment contained in $P$ starting on $\partial \Sigma$ between $\alpha_2$ and $\alpha_0$ and ending in an intersection point $x= \alpha_1 \cap \varphi(\alpha_1)$ in the interior of $\alpha_1$. Then $\gamma$ is fixable by $\varphi$ if and only if $\{\alpha_1, \alpha_2\}$ support a positive splitting pair $\{R,R'\}$ such that the $\bullet$-point of $R'$ in the interior of $\Sigma$ is $x$.*
*Proof.* First assume that $\gamma$ is fixable. Then the image of $\alpha_2$ must leave $P$ by intersecting $\alpha_1$, which means that the image of $\alpha_1$ leaves $P$ by intersecting $\alpha_2$ (and forming a basepoint triangle), giving the positive region $R$. Since we are assuming that $\alpha_0$ is strictly right-veering, this region must be completed by a region $R'$, giving the positive splitting pair.
![The regions $R$ and $R'$ when $\gamma$ is fixable.](Fixed6-gon.PNG){#fig:TriangleFixed width="5cm"}
Conversely, suppose that we have a positive splitting pair $\{R, R'\}$. Then $\gamma$ cannot be left-veering, because it would have to intersect the edge of $R'$ on $\alpha_1$, which is restricted. However, by Proposition [Proposition 14](#RV){reference-type="ref" reference="RV"}, after the splitting pair (that is, after the interior $\bullet$-point of $R'$) $\varphi(\alpha_1)$ and $\varphi(\alpha_0)$ must be parallel up to the boundary, which means that they form a rectangle with $\alpha_0$ and an edge of a basepoint triangle on $\alpha_1$. Thus the edge of this rectangle on $\alpha_0$ is also restricted. If $\gamma$ were strictly right-veering in its starting point, it would have to intersect this restricted edge. Thus $\gamma$ must be fixable by $\varphi$ (strictly speaking, we might not have that $\varphi(x) = x$, however, in this case, by the same argument, $\gamma, \varphi(\gamma)$, and $\varphi(\alpha_1)$ must bound a disc, and thus we may isotope $\varphi(x)$ by sliding it along $\varphi(\alpha_1)$ to coincide with $x$ and then $\gamma$ is fixable).
![Once we have $R$ and $R'$, the image of $\gamma$ cannot go to either side because the edges on $\alpha_0$ and $\alpha_1$ are restricted.](Split6-gon.PNG "fig:"){#fig:Split6-gon width="5cm"} ◻
# Extended towers {#Towers}
We now introduce extended towers, which will be our main tool for detecting left-veering arcs. They are inspired by the notion of towers introduced in [@DetectingTightness]. However, there are some differences. Towers were defined to detect tightness, and we aim to detect left-veering arcs. Both concepts are related but not equivalent, and the new features of extended towers reflect this. From now on, let $\Gamma$ be an arc collection in a surface with boundary $\Sigma$, and $\alpha_0$ a properly embedded arc, disjoint from $\Gamma$, such that $\Gamma \cup \{ \alpha_0 \}$ cuts out a disc $P$. Moreover, orient the arcs in the standard counterclockwise orientation of $\partial (P)$.
**Definition 18**. For a given arc collection $\Gamma$ in an open book $(\Sigma, \varphi)$, we denote the set of regions supported in $\Gamma$ by $\mathcal{R}(\Sigma, \varphi, \Gamma)$. Moreover, for any collection of regions $\mathcal{A}$, the set of positive regions in $\mathcal{A}$ is denoted by $\mathcal{A}^+$, and, similarly, the set of negative regions of $\mathcal{A}$ is denoted by $\mathcal{A}^-$.
**Definition 19**. An *extended tower* in $(\Sigma, \varphi, \Gamma)$ is a (nonempty) collection $\mathcal{T} \subset \mathcal{R}(\Sigma, \varphi, \Gamma)$ where $\textrm{Dot}(\mathcal{T}) \subset (\textrm{Dot}(\mathcal{T}^-) \cup \partial \Sigma)$, $\textrm{Circ}(\mathcal{T}) \subset (\textrm{Circ}(\mathcal{T}^+) \cup \textrm{Circ}_{\partial}(\Gamma))$, and for all pairs $A,B \in \mathcal{T}$, no corner of $A$ is contained in the interior of $B$. We say that $\Gamma$ *supports* $\mathcal{T}$.
**Definition 20**. An extended tower $\mathcal{T}$ in $(\Sigma, \varphi, \Gamma)$ is *replete* if whenever there is a region $A \in \mathcal{R}^-(\Sigma, \varphi, \Gamma)$ which satisfies $\textrm{Circ}(A) \subset \textrm{Circ}(\mathcal{T}) \cup \textrm{Circ}_{\partial}(\Gamma)$, and $\mathcal{T} \cup A$ is again an extended tower, then $A \in \mathcal{T}$. All extended towers will be assumed replete unless otherwise stated.
The main difference with towers from [@DetectingTightness] is that here we allow negative regions with $\circ$-points in $\textrm{Circ}_{\partial}(\Gamma)$ as well as $\textrm{Circ} (\mathcal{T}^+)$. We now impose some further restrictions on our extended towers so that they completely characterise fixable and left-veering arcs.
**Definition 21**. An extended tower $\mathcal{T}$ in $(\Sigma, \varphi, \Gamma)$, with $\Gamma$ an arc collection as above, is *nice* if for every region $A \in \mathcal{T}^+$ we have $A \subset P$ and for every region $B \in \mathcal{T}^-$ we have that $int(B)$ is disjoint from $\varphi(\Gamma)$. We will assume all extended towers are nice.
*Remark 4*. We want to consider only nice extended towers because extended towers that are not nice come from collections where there are no left-veering arcs and no fixable arcs. See for example the extended tower in Figure [11](#fig:NotNice){reference-type="ref" reference="fig:NotNice"}, it is not nice but conforms to the definition of completed extended tower that we will see next (which is the one we want to identify with fixable arcs, and indeed there is no fixable arc in Figure [11](#fig:NotNice){reference-type="ref" reference="fig:NotNice"}).
![The extended tower $\mathcal{T} = \{ R_1, R_2, R'_1, R'_2 \}$ supported in $\alpha_1, \alpha_2$ is not nice because $R_1$ is a positive region that is not contained in $P$.](NotNice.pdf){#fig:NotNice width="5cm"}
**Definition 22**. Let $\Gamma$ be an arc collection in an open book $(\Sigma, \varphi)$ and $\mathcal{T}$ an extended tower in $\Gamma$. We say $\mathcal{T}$ is *nested* if there exist nested subcollections of regions as follows.
- $\mathcal{T}^+_0 = \{ R \in \mathcal{T}^+ \mid \textrm{Dot}(R) \subset \partial \Sigma \}$.
- $\mathcal{T}^-_0 = \{ R' \in \mathcal{T}^- \mid \textrm{Circ}(R') \subset \mathcal{T}^+_0 \cup \textrm{Circ}_{\partial}(\Gamma) \}$.
- $\mathcal{T}^+_i = \{ R \in \mathcal{T}^+ \mid \textrm{Dot}(R) \subset \mathcal{T}^-_{i-1} \cup \partial \Sigma \}$.
- $\mathcal{T}^-_i = \{ R' \in \mathcal{T}^- \mid \textrm{Circ}(R') \subset \mathcal{T}^+_{i} \cup \textrm{Circ}_{\partial}(\Gamma) \}$.
- $\mathcal{T}^+ = \bigcup_{i} \mathcal{T}^+_i$ and $\mathcal{T}^- = \bigcup_{i} \mathcal{T}^-_i$.
We will assume all extended towers are nested, and we will refer to regions in $\mathcal{T}^{\pm}_i$ as *being in level i*.
*Remark 5*. For the arc collections $\Gamma$ as above, a necessary condition for an extended tower to be nested is the existence of a level $0$ positive region. Indeed, without a level $0$ positive region the only possibility for a level $0$ negative region would be one where all $\circ$-points are on basepoint triangles. However, this implies that the arcs supporting this negative region cut out a disc, contradicting the conditions we required for $\Gamma$.
Therefore, we can see that the extended tower from Figure [11](#fig:NotNice){reference-type="ref" reference="fig:NotNice"} is not nested as there is no level $0$ positive region. Observe that in terms of Heegaard Floer homology, a level $0$ positive region corresponds to a differential to the contact class. We can also see an example of a nice extended tower that is not nested in Figure [12](#fig:WeirdCompletedTower){reference-type="ref" reference="fig:WeirdCompletedTower"}.
![An extended tower that is not nested, because there is no level $0$ positive region.](WeirdCompletedTower.pdf){#fig:WeirdCompletedTower width="5cm"}
**Definition 23**. Let $\mathcal{T}$ be an extended tower in $(\Sigma,\varphi,\Gamma)$, and let $x$ be an interior point of some $\alpha \in \Gamma$ that is a vertex of a region in $\mathcal{T}$. We say that $x$ is *two-sided* if it is a vertex of exactly two regions of $\mathcal{T}$ (one positive and one negative).
**Definition 24**. Let $\mathcal{T}$ be an extended tower in $(\Sigma,\varphi, \Gamma = \{ \alpha_i \}_{i = 1} ^ n)$, where $\Gamma \cup \{\alpha_0\}$ cuts out a disc $P$ for some properly embedded arc $\alpha_0$ disjoint from $\Gamma$, and the arcs are oriented and labelled counterclockwise. We say that $\mathcal{T}$ is *completed* if every interior vertex of $\mathcal{T}$ is two-sided, with the exception of a single $\bullet$-point $y_0 \in \alpha_1 \cap \varphi(\alpha_1)$, which we call a *connecting vertex*.
![An example of a completed extended tower, which is also nice and replete, where the only interior vertex that is not two-sided is the $\bullet$-point $y_0 \in \alpha_1 \cap \varphi(\alpha_1)$.](CompletedTower.pdf){#fig:CompletedTower height="6cm"}
This is a much more restricted notion than that of completed tower in [@DetectingTightness]. The reason for this is we want completed extended towers to correspond exactly to fixable arc segments, and to determine them uniquely. It is also clear that completed extended towers are those where every point of every arc $\alpha_2, \dots, \alpha_n$ (as long as they are all strictly right-veering) belongs to a region, and every point on $\alpha_1$ from $y_0$ to $\alpha_1(1)$ also belongs to a region.
**Definition 25**. Let $\mathcal{T}$ be an extended tower in $\Gamma$. We say that $\mathcal{T}$ is *incomplete* if for every negative region $A \in \mathcal{T}^-$, there exists a vertex $x \in \textrm{Dot}(A)$ that is two-sided, i.e. there exists a positive region $B \in \mathcal{T}^+$ such that $x \in \textrm{Dot}(B)$.
![An example of an incomplete extended tower, because there is only one negative region $R'_1$, which shares a $\bullet$-point with $R_2$, and the $\circ$-points $x$ and $z$ are not two-sided.](IncompleteTower.pdf){#fig:IncompleteTower height="5cm"}
This mirrors the definition of incomplete tower from [@DetectingTightness], because the property it aims to detect, a left-veering arc, implies overtwistedness. The additional conditions imposed on extended towers are what distinguish this notion from that of an incomplete tower. We can see this in Figure [15](#fig:GlobalExample){reference-type="ref" reference="fig:GlobalExample"}, where $\{R\}$ forms an incomplete tower. Indeed, the open book is known to support an overtwisted contact structure (see [@Lisca] and [@Lekili]). However, as an extended tower, it is not replete, since we can add the region $R'$ because one of its $\circ$-points is a vertex of $R$ and the other one is a vertex of a basepoint triangle, and the extended tower $\{R, R' \}$ is not incomplete, because $R'$ does not share any $\bullet$-points with a positive region. Thus this extended tower does not imply the existence of a left-veering arc, and indeed this open book is right-veering by [@Lekili].
![The region $R$ forms an incomplete tower but is not a replete extended tower because we can add the region $R'$.](GlobalExample.pdf){#fig:GlobalExample width="5cm"}
*Remark 6*. As evidenced by Figure [15](#fig:GlobalExample){reference-type="ref" reference="fig:GlobalExample"}, we can have extended towers that are neither completed nor incomplete, for instance, if not every interior vertex is two-sided but there exists a negative region with no $\bullet$-points in common with any positive region. An extended tower where every interior $\bullet$-point is two-sided is also neither completed nor incomplete. It will follow from our discussion later that in this case the arc $\alpha_0$ is fixable, but we want to detect fixable arc segments rather than arcs and so we exclude this case from our definition of completed extended tower.
In Figure [16](#fig:TowerExceptions){reference-type="ref" reference="fig:TowerExceptions"} we can see two extended towers that are neither completed nor incomplete. On the left, $\mathcal{T}_1 = \{ R, R' \}$ is not incomplete because the negative region $R'$ does not have any $\bullet$-point in common with the unique positive region $R$, but is also not completed because there is a $\bullet$-point that is not two-sided and is not of the form $\alpha \cap \varphi(\alpha)$. On the right, $\mathcal{T}_2 = \{ R_1, R'_1, R_2, R'_2 \}$ is not completed nor incomplete because every interior vertex is two-sided, and $R'_2$ does not share a $\bullet$-point with a positive region.
![Two examples of extended towers which are neither completed nor incomplete.](TowerExceptions.pdf){#fig:TowerExceptions height="4cm"}
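Since a given triple $(\Sigma, \varphi, \Gamma)$ supports only finitely many regions, the conditions above can in principle be tested by an exhaustive search, which is the algorithm alluded to in the introduction. Purely as an illustration of how such a search could be organized, the sketch below enumerates subsets of a precomputed list of regions, each described only by its sign and its $\bullet$- and $\circ$-points, and tests the closure condition of Definition 19 together with incompleteness (Definition 25). The `Region` container and the point labels are our own simplifying assumptions, the geometric corner condition and the replete, nice, and nested requirements are assumed to be checked elsewhere, and the full criterion of Theorem 1 involves collections of completed and incomplete towers rather than a single tower; so this is a schematic sketch rather than an implementation of the results above.

```python
# Schematic brute-force search for an incomplete extended tower, assuming the
# finite set of regions of (Sigma, phi, Gamma) has been computed beforehand.
# Points are arbitrary hashable labels; boundary bullet-points and
# basepoint-triangle circ-points are passed in as separate sets.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Region:
    positive: bool       # True for a positive region, False for a negative one
    dots: frozenset      # its bullet-points (labels)
    circs: frozenset     # its circ-points (labels)

def _union(sets):
    out = set()
    for s in sets:
        out |= s
    return out

def is_extended_tower(T, boundary_dots, basepoint_circs):
    """Set-theoretic closure condition of Definition 19 (the corner condition,
    repleteness, niceness and nestedness are assumed checked elsewhere)."""
    dots = _union(R.dots for R in T)
    circs = _union(R.circs for R in T)
    neg_dots = _union(R.dots for R in T if not R.positive)
    pos_circs = _union(R.circs for R in T if R.positive)
    return dots <= neg_dots | boundary_dots and circs <= pos_circs | basepoint_circs

def is_incomplete(T):
    """Definition 25: every negative region shares a bullet-point with some positive region."""
    pos_dots = _union(R.dots for R in T if R.positive)
    return all(R.dots & pos_dots for R in T if not R.positive)

def find_incomplete_tower(regions, boundary_dots, basepoint_circs):
    """Exhaustive search over nonempty subsets of a finite list of regions."""
    for k in range(1, len(regions) + 1):
        for T in combinations(regions, k):
            if is_extended_tower(T, boundary_dots, basepoint_circs) and is_incomplete(T):
                return T      # candidate evidence towards a left-veering arc
    return None               # no incomplete extended tower in this collection
```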
# Results {#Results}
## Minimal left-veering arcs
**Definition 26**. Let $(\Sigma, \varphi)$ be an open book, and let $\mathcal{B}$ be a basis for $\Sigma$. Let $\gamma$ be a properly embedded arc, which we may assume intersects the basis. This divides $\gamma$ into a collection of arc segments $\gamma_1, \dots , \gamma_n$, labelled and oriented following the orientation of $\gamma$, which intersect $\mathcal{B}$ only on their endpoints. We say $\gamma$ is *shortened with respect to $\mathcal{B}$* if $\gamma_1, \dots, \gamma_{n-1}$ are fixable. If $\gamma$ is left-veering, we call it a *shortened left-veering arc with respect to $\mathcal{B}$*.
Note that a shortened arc $\gamma$ as in Definition [Definition 26](#ShortenedArc){reference-type="ref" reference="ShortenedArc"} is left-veering if and only if $\gamma_n$ is left-veering.
**Lemma 27**. *Let $(\Sigma, \varphi)$ be an open book, and let $\mathcal{B}$ be a basis for $\Sigma$. Suppose there exists a left-veering arc $\gamma$ in $(\Sigma, \varphi)$. Then there exists a shortened left-veering arc $\gamma'$ with respect to $\mathcal{B}$.*
*Proof.* We may assume $\varphi(\gamma)$ is bigon free with respect to $\mathcal{B} \cup \{\gamma\}$. We define the arc $\gamma'$ as follows. Let $x \in \gamma \cap \mathcal{B}$ be the first intersection point with the basis such that, after $x$, $\gamma$ and $\varphi(\gamma)$ exit the disc cut out by the basis by intersecting different arcs $\alpha_1$ and $\alpha_2$ respectively. Then take the arc $\gamma'$ that is the same as $\gamma$ up to $x$ and ends at the starting point of $\alpha_2$ without having any more intersections with $\mathcal{B}$ or with the arc segment of $\gamma$ up to $x$. Clearly $\gamma'$ is shortened with respect to $\mathcal{B}$. To show that it is left-veering, take the arc $\gamma'$ and isotope it slightly so that it lies to the left of $\gamma$. Then its image is fixable and to the left of the image of $\gamma$ up to $x$. Suppose for a contradiction that $\gamma'$ is right-veering. Then $\varphi(\gamma')$ must intersect $\varphi(\gamma)$ so that $\varphi(\gamma)$, $\varphi(\gamma')$, and $\partial \Sigma$ bound a disc. Moreover, this would have to be the image of a disc bounded by $\gamma$, $\gamma'$, and $\partial \Sigma$. However, this gives a contradiction because $\gamma$ and $\gamma'$ are (by construction) disjoint before $\gamma$ intersects $\alpha_1$, so they cannot bound a disc, as we can see in Figure [17](#fig:MinimalArc){reference-type="ref" reference="fig:MinimalArc"}.
![If $\gamma'$ were right-veering, the image of the darkly shaded subsurface would have to be the lightly shaded one, a contradiction. Note that in this case $\gamma$ is an arc and not an arc image even though it is not represented with a straight line.](MinimalArc.pdf "fig:"){#fig:MinimalArc height="5cm"} ◻
**Definition 28**. The *length* of an arc $\gamma$ with respect to a basis $\mathcal{B}$ is the (unsigned) number of intersections of $\gamma$ with the arcs of $\mathcal{B}$. Out of all shortened left-veering arcs, we call one which minimises this length a *minimal left-veering arc* with respect to $\mathcal{B}$.
Note that a minimal left-veering arc minimises this length over all left-veering arcs, not just over the shortened ones.
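In the same illustrative spirit, Definitions 26 and 28 only involve bookkeeping once the intersection pattern of an arc with the basis is known. The following Python sketch assumes that the fixability of each segment and the intersection counts are given as input; the function names are ours and purely illustrative.

```python
def is_shortened(segment_fixable):
    """Definition 26: gamma, cut by the basis into segments g_1, ..., g_n, is
    shortened exactly when g_1, ..., g_{n-1} are all fixable.  The input is a
    list of booleans, one per segment, in the order induced by gamma."""
    return all(segment_fixable[:-1])

def length(intersection_counts):
    """Definition 28: the (unsigned) total number of intersections of the arc
    with the arcs of the basis, stored here as a count per basis arc."""
    return sum(intersection_counts.values())

def minimal_left_veering(candidates):
    """Among shortened left-veering candidates, each given as a pair
    (intersection_counts, segment_fixable), return one of minimal length."""
    shortened = [c for c in candidates if is_shortened(c[1])]
    return min(shortened, key=lambda c: length(c[0]))
```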
We will be interested primarily in minimal left-veering arcs. Moreover, when we have a $2n$-gon $P$ cut out by $\{\alpha_i\}_{i=0}^n$ with $\alpha_0$ left-veering, we can assume there are no left-veering arcs contained in $P$, because otherwise we can find a subset of the arcs cutting out a $2m$-gon (for $m<n$) with a left-veering arc, and we can work with this subset instead. In particular, we can assume that the image of $\alpha_0$ leaves the $2n$-gon by intersecting $\alpha_1$. This is because if it leaves $P$ by intersecting some other arc, say $\alpha_k$, then the arc $\beta_0$ that is obtained by consecutive arcslides of $\alpha_0$ over $\alpha_1, \dots, \alpha_{k-1}$ must be left-veering, by the same argument we made in Lemma [Lemma 27](#ShortenedLV){reference-type="ref" reference="ShortenedLV"}, and then we can focus on the $2(n-k)$-gon cut out by $\alpha_{k}, \alpha_{k+1}, \dots, \alpha_n$, and $\beta_0$, which will have the property we want.
## Base Case
In this subsection we show using methods from Section [2](#Preliminaries){reference-type="ref" reference="Preliminaries"} that extended towers detect left-veering arcs and fixable arc segments in $6$-gons. This will be the base case of our induction.
**Proposition 29**. *Let $(\Sigma, \varphi)$ be an open book. Let $\alpha_0, \alpha_1, \alpha_2$ be properly embedded arcs cutting out a $6$-gon $P$, oriented counterclockwise, and assume $\alpha_1$ and $\alpha_2$ are right-veering. Then $\alpha_0$ is left-veering if and only if $\{\alpha_1, \alpha_2 \}$ support an incomplete extended tower that is nested, nice, and replete.*
*Proof.* First suppose that $\alpha_0$ is left-veering. Then Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"} gives a (positive) region $R$ whose interior is disjoint from $\{ \alpha_1, \alpha_2 \}$. Since again by Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"} there are no negative regions, $\mathcal{T} = \{ R \}$ forms an incomplete extended tower that moreover is replete and nice, and $R$ is on level zero since the $\bullet$-points of $R$ are on the boundary, so $\mathcal{T}$ is nested.
Now suppose that there exists an incomplete extended tower $\mathcal{T}$ supported in $\{\alpha_1, \alpha_2\}$. We want to show that $\mathcal{T} = \{ R \}$, with $R$ the region from Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"}, which shows that $\alpha_0$ is left-veering. There must be a region in $\mathcal{T}^+_0$, and for $\mathcal{T}$ to be nice it must be the region $R$ from Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"}. Now, if there exists a negative region $R'$ in $\mathcal{T}^-_0$, then again by Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"} every point on $\alpha_2$ belongs to a region, so there can be no more positive regions. But now $R'$ does not have any $\bullet$-points in common with a positive region, so $\mathcal{T}$ is not incomplete, a contradiction. So there does not exist such a negative region, and thus $\mathcal{T} = \{ R \}$, and then by Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"} $\alpha_0$ is left-veering. ◻
**Proposition 30**. *Let $(\Sigma, \varphi)$ be an open book. Let $\alpha_1, \alpha_2, \alpha_0$ be properly embedded strictly right-veering arcs cutting out a $6$-gon $P$, oriented counterclockwise. Let $\gamma$ be an arc segment contained in $P$ starting between $\alpha_2$ and $\alpha_0$ and ending in the interior of $\alpha_1$. Then $\gamma$ is fixable by $\varphi$ if and only if $\{\alpha_1, \alpha_2\}$ support a completed extended tower that is nested, nice and replete, and whose connecting vertex coincides with $\gamma \cap \alpha_1$.*
*Proof.* First suppose that $\gamma$ is fixable. Then we have the regions $R$ and $R'$ from Proposition [Proposition 17](#InitialFixed){reference-type="ref" reference="InitialFixed"} forming the splitting pair, and we can see that they form a completed extended tower which is nested, replete and nice, and the unique connecting vertex is $\gamma \cap \alpha_1$.
*Proof.* Conversely, suppose that there exists a completed extended tower $\mathcal{T}$ which is nice and replete. By the same reasoning as in Proposition [Proposition 29](#TriangleLV){reference-type="ref" reference="TriangleLV"}, the positive region $R$ from Proposition [Proposition 17](#InitialFixed){reference-type="ref" reference="InitialFixed"} must be in $\mathcal{T}$. Since $\mathcal{T}$ is completed, there must be a negative region $R'$ with the same $\circ$-points as $R$. As one of the $\circ$-points in this negative region is on the basepoint triangle on $\alpha_2$, one of the $\bullet$-points of $R'$ is on the boundary (because there can be no other intersection points between the $\circ$-point on the basepoint triangle and the boundary as our arc collections are bigon free). Now every point of $\alpha_2$ belongs to a region, so there can be no more regions in $\mathcal{T}$. This means that for $\mathcal{T}$ to be completed the other $\bullet$-point of $R'$ must be an interior point $\alpha_1 \cap \varphi(\alpha_1)$, which means that the regions in $\mathcal{T}$ are the regions from Proposition [Proposition 17](#InitialFixed){reference-type="ref" reference="InitialFixed"} (i.e., the splitting pair), so $\gamma$ is fixable. ◻
## Inductive Step
We now have that a left-veering arc is detected by an incomplete extended tower if it cuts out a $6$-gon (with the correct orientation) with two arcs from the basis. We want to extend this by induction to the case where the left-veering arc cuts out an $n$-gon with arcs from the basis. Similarly, we have that completed towers detect fixable arc segments when the arc segment is contained in a $6$-gon cut out by three arcs, two of which are from our basis, and we want to extend to the case where the arc segment is contained in an $n$-gon, with $n-1$ arcs in our basis. To show this, let us first introduce some notation to be used throughout this subsection.
Let $\alpha_0$, $\alpha_1$, and $\alpha_2$ be properly embedded arcs that cut out a $6$-gon $P$, where the arcs are labelled counterclockwise. Also let $\Gamma$ be an arc collection such that there exists an arc $\beta$ disjoint from $\Gamma$ with $\mathcal{C} = \Gamma \cup \{ \beta \}$ cutting out a disc $P'$ whose interior is disjoint from $P$, and $\alpha_0 \in \Gamma$. Again, we will orient this arc collection with the counterclockwise orientation. Moreover, assume that $\alpha_0$ is not the first arc in $\Gamma$ (i.e., the one next to $\beta$ as we go counterclockwise through the boundary of the disc cut out by $\mathcal{C}$). This is because we want to detect fixable arcs with an endpoint on the first arc of the collection $\Gamma$, so we do not want this point to change. Let $\Gamma' = (\Gamma \setminus \{ \alpha_0 \} ) \cup \{ \alpha_1, \alpha_2 \}$. Then $\mathcal{C}' = \Gamma' \cup \{ \beta \}$ cuts out a disc $P \cup P'$. Orient the arcs in this collection again counterclockwise (this agrees with the previous orientation).
The idea is that, given an extended tower $\mathcal{T}$ in $\Gamma$, we can slide its regions over $\alpha_0$ to $\alpha_1$ and $\alpha_2$ to obtain an extended tower $\mathcal{T}'$ supported in $\Gamma'$, that will have the same properties as $\mathcal{T}$. We can see a simple example with completed extended towers in Figure [18](#fig:SlideExample){reference-type="ref" reference="fig:SlideExample"}.
![The regions for the new extended tower are obtained by sliding the regions from the old extended tower.](InducedTower.pdf){#fig:SlideExample height="4cm"}
To make this operation more precise we now define two maps, which we will call *slide maps* and denote by $s^{\pm}$. Given an extended tower $\mathcal{T}$ supported in $\Gamma$, where $\Gamma$ is an arc collection as above, these maps will send the set of vertices of the positive (respectively negative) regions of $\mathcal{T}$ (denoted by $V(\mathcal{T}^{\pm})$) to intersection points in $\Gamma' \cap \varphi(\Gamma')$. These points will determine regions that form the extended tower $\mathcal{T}'$ supported in $\Gamma'$ that will have the same properties as $\mathcal{T}$. Note that most points will be two-sided and thus will be vertices of regions in both $\mathcal{T}^+$ and $\mathcal{T}^-$, which means that they will have an image under $s^+$ and an image under $s^-$. Most of the time these will agree. In fact, the only time they will not agree will be when a component of $P \cap \varphi(P)$ is a $6$-gon.
**Definition 31**. Let $x \in \textrm{V}(\mathcal{T}^+)$. Then $s^+(x)$ is defined as follows.
1. If $x$ does not lie on $\alpha_0$ or $\varphi(\alpha_0)$, $s^+(x) = x$.
2. If $x$ lies on the intersection of $\varphi(\alpha_0)$ with some other arc $\beta$ from $\Gamma$, then $s^+(x)$ is the intersection point $y$ of $\beta$ with $\varphi(\alpha_1)$ or $\varphi(\alpha_2)$ such that the segment between $x$ and $y$ is contained in $\beta \cap \varphi(P)$.
3. If $x$ lies on the intersection of $\alpha_0$ with the image of some other arc $\beta$ from $\Gamma$, then $s^+(x)$ is the intersection point $y$ of $\varphi(\beta)$ with $\alpha_1$ or $\alpha_2$ such that the segment between $x$ and $y$ is contained in $\varphi(\beta) \cap P$.
4. If $x$ lies on the intersection of $\alpha_0$ with its image, then $s^+(x)$ is the intersection point $y$ of $\alpha_m$ with $\varphi(\alpha_l)$ (where $m$ and $l$ can be $1$ or $2$ and not necessarily equal), obtained by first going along $\alpha_0$ to $\varphi(\alpha_l)$, and then along $\varphi(\alpha_l)$ to $y$, such that this path is contained in $P \cap \varphi(P)$.
We illustrate the different cases in Figure [19](#fig:SlideCases){reference-type="ref" reference="fig:SlideCases"}, where we can see that, while the definition may seem arbitrary, for an intersection point $x \in \Gamma \cap \varphi(\Gamma)$ we are essentially choosing "the closest point" to $x$ that belongs to $\Gamma' \cap \varphi(\Gamma')$.
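Purely as a summary of the case analysis (the geometric content is carried by the figures and the proof below), the dispatch in Definition 31 can be sketched as follows; the oracle `closest_point` is our own placeholder for the recipe of the corresponding case and is not defined in the paper.

```python
def slide_plus(x, on_alpha0, on_phi_alpha0, closest_point):
    """Schematic dispatcher for the four cases of Definition 31.  The booleans
    record whether x lies on alpha_0 and/or on phi(alpha_0); the assumed
    oracle closest_point(x, case) stands in for the geometric recipe of the
    corresponding case."""
    if not on_alpha0 and not on_phi_alpha0:
        return x                    # case 1: x is left unchanged
    if on_phi_alpha0 and not on_alpha0:
        return closest_point(x, 2)  # case 2: follow the arc beta inside phi(P)
    if on_alpha0 and not on_phi_alpha0:
        return closest_point(x, 3)  # case 3: follow the arc image inside P
    return closest_point(x, 4)      # case 4: x lies on alpha_0 and its image
```

The analogous dispatcher for $s^-$ would follow the recipes of Definition 33 instead.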
**Proposition 32**. *The map $s^+$ is well defined.*
*Proof.* Case 1 is immediate. Case 2 is well defined because if $\varphi(\alpha_0)$ intersects an arc then either $\varphi(\alpha_1)$ or $\varphi(\alpha_2)$ must also intersect that arc because $\varphi(P)$ is a disc. Moreover, there is a unique segment from $x$ to $y$ contained in $\beta \cap \varphi(P)$. Case 3 is the same as Case 2 but with the roles of the arcs and arc images reversed. Finally, in Case 4, the same argument as for Case 2 shows that either $\varphi(\alpha_1)$ or $\varphi(\alpha_2)$ intersects $\alpha_0$, and there is a unique segment between $x$ and $\varphi(\alpha_l)$ contained in $P \cap \varphi(P)$. Then, $\varphi(\alpha_l)$ must exit $P$ by intersecting either $\alpha_1$ or $\alpha_2$, and going along $\varphi(\alpha_l)$ gives $y$. ◻
![The different cases for $s^+$, where the point $x_i$ is of case $i$ in Definition [Definition 31](#SlidePositive){reference-type="ref" reference="SlidePositive"}. The dashed arrows indicate the action of $s^+$.](SlideCases.PNG){#fig:SlideCases width="6cm"}
Observe that for the first two cases, both $x$ and $s^+(x)$ belong to an arc that is not in $P$, and for the last two cases $x$ belongs to $\alpha_0$ and $s^+(x)$ belongs to either $\alpha_1$ or $\alpha_2$. We now define $s^-$ similarly.
**Definition 33**. Let $x \in \textrm{V}(\mathcal{T}^-)$. Then $s^-(x)$ is defined as:
1. If $x$ does not lie on $\alpha_0$ or $\varphi(\alpha_0)$, $s^-(x) = x$.
2. If $x$ lies on the intersection of $\alpha_0$ with the image of some other arc $\beta$ from $\Gamma$, then $s^-(x)$ is the intersection point $z$ of $\varphi(\beta)$ with $\alpha_1$ or $\alpha_2$ such that the segment between $x$ and $z$ is contained in $P$.
3. If $x$ lies on the intersection of $\varphi(\alpha_0)$ with some other arc $\beta$ from $\Gamma$, then $s^-(x)$ is the intersection point $z$ of $\beta$ with $\varphi(\alpha_1)$ or $\varphi(\alpha_2)$ such that the segment between $x$ and $z$ is contained in $\beta \cap \varphi(P)$.
4. If $x$ lies on the intersection of $\alpha_0$ with its image, then $s^-(x)$ is the intersection point $z$ of $\alpha_m$ with $\varphi(\alpha_l)$ (where $m$ and $l$ can be $1$ or $2$ and not necessarily equal), obtained by first going along $\varphi(\alpha_0)$ to $\alpha_m$, and then along $\alpha_m$ to $z$, such that this path is contained in $P \cap \varphi(P)$.
Observe that if we reverse the roles of the arcs and arc images, that is, we take our arc collections to be $\varphi(\Gamma)$ and $\varphi(\Gamma')$, and their images to be $\varphi^{-1}(\varphi(\Gamma))$ and $\varphi^{-1}(\varphi(\Gamma'))$, and we also reverse their orientation (so that the negative regions become positive regions), then the definition of $s^-$ is the same as the definition of $s^+$ using the original arc collections. This also means, by Proposition [Proposition 32](#WellDefined){reference-type="ref" reference="WellDefined"}, that $s^-$ is well defined.
Also note that, away from $P \cup \varphi(P)$, the slide map does not change the intersection point. Moreover, if $x$ is a positive (respectively negative) intersection point then $s^+(x)$ and $s^-(x)$ will also be positive (respectively negative).
Finally, the slide maps are injective, so they give a bijection onto their image, and then we can refer to the inverse of these maps. We will use this to show that extended towers in $\Gamma'$ also induce extended towers in $\Gamma$.
Now we want to show that using the maps $s^+$ and $s^-$ we can construct an extended tower $\mathcal{T}'$ supported in $\Gamma'$ that has the same properties as $\mathcal{T}$. The properties that will be preserved will be being nested, being completed/incomplete (or neither), being replete, and being nice. We will focus on the local effect of the slide maps on a neighbourhood of $P$, and a neighbourhood of $\varphi(P)$ (because outside these neighbourhoods the slide maps do not change anything). In particular, this means that if a positive (respectively negative) region $R$ is supported in $\Gamma$ and the image of each of its vertices under $s^+$ (resp. $s^-$) is itself, then $R$ is also supported in $\Gamma'$. If $R$ is not supported in $\Gamma'$, the local effect of the slide map on the vertices will induce one (or more) regions supported in $\Gamma'$.
We will need to check several things. First, that the induced regions form an extended tower, and then, that the properties of being nested, replete, nice, and completed or incomplete (or neither) are preserved.
First we will separate two cases: when $\alpha_2$ is $\varphi$-contained in $\alpha_1$, and when $\alpha_0$ is $\varphi$-contained in $\alpha_2$. The reason for this is that in these cases the image under the slide maps of a boundary point is an interior point (a boundary point is never two-sided, so we need to consider this separately). We will not consider the case where $\alpha_1$ is $\varphi$-contained in $\alpha_0$, because then the slide maps send an interior point to two boundary points (which are never two-sided). However, we do not need this case.
**Lemma 34**. *Suppose that $\alpha_2$ is $\varphi$-contained in $\alpha_1$. Then if $\mathcal{T}$ is an extended tower in $\Gamma$, there is an extended tower $\mathcal{T}'$ supported in $\Gamma'$ with the same properties as $\mathcal{T}$. Conversely, if $\mathcal{T}'$ is an extended tower supported in $\Gamma'$, there is an extended tower $\mathcal{T}$ supported in $\Gamma$ with the same properties as $\mathcal{T}'$.*
*Proof.* First let $\mathcal{T}$ be an extended tower in $\Gamma$. We will now see how each region in $\mathcal{T}$ induces a region supported in $\Gamma'$.
First, away from $P$ and $\varphi(P)$ any region $R \in \mathcal{T}$ is unchanged since it is already supported in $\Gamma'$, and so the region induced by the slide maps is $R$ itself. For any region with an edge on the interior of $\alpha_0$, observe that an arc image intersecting $\alpha_0$ must leave $P$ by intersecting $\alpha_1$, and so the images of any vertex on $\alpha_0$ under the slide maps coincide and lie on $\alpha_1$. Then, we obtain the region $R'$ by simply adding or removing rectangles. Similarly, for a region with an edge on $\varphi(\alpha_0)$, we can see, by reversing the roles of arcs and arc images, that any arc intersecting $\varphi(\alpha_0)$ must also intersect $\varphi(\alpha_1)$, and so the image under the slide maps of a vertex on $\varphi(\alpha_0)$ lies on $\varphi(\alpha_1)$, and again the induced region is obtained by simply adding or removing rectangles. We can see this in Figure [20](#fig:ArcSlideEquivalent){reference-type="ref" reference="fig:ArcSlideEquivalent"}. Observe that each region $R_i \in \mathcal{T}$ corresponds to a unique region $R_i'$ in $\Gamma'$ (sometimes $R_i = R_i'$), so we define $\mathcal{T}' = \{ R_i' \mid R_i \in \mathcal{T} \} \cup \{ R_1, R_2 \}$, where $\{ R_1, R_2 \}$ is the splitting pair of $\{ \alpha_1, \alpha_2 \}$.
![The regions obtained by adding or removing rectangles, when $\alpha_2$ is $\varphi$-contained in $\alpha_1$, and the splitting pair. The dashed arrows indicate the action of the slide maps.](ArcSlideEquivalent.pdf){#fig:ArcSlideEquivalent height="5cm"}
Let us now check the properties of $\mathcal{T}'$. The positive regions are by construction contained in $P$ and thus disjoint from $\Gamma'$ in their interior, and we can see by switching the role of arcs and arc images that the interior of the negative regions is disjoint from $\varphi(\Gamma')$, so the extended tower is nice. It is also replete, because we cannot add any negative regions with $\circ$-points in $\textrm{Circ}(\mathcal{T}') \cup \textrm{Circ}_{\partial}(\Gamma)$, since $\mathcal{T}$ is replete and we have already used the $\circ$-point in the basepoint triangle of $\alpha_1$ and $\alpha_2$ for the splitting pair of regions.
The positive region from the splitting pair is a level $0$ region, and so is the negative region from the splitting pair. This will imply that some of the regions may go up one level (for example, if there was a level $0$ region with a $\bullet$-point on $\alpha_0(1)$, the induced region is now a level $1$ region because it has an interior $\bullet$-point that is also a vertex of a level $0$ negative region), but if $\mathcal{T}$ was nested then so is $\mathcal{T}'$, as we can construct it level by level from the levels of $\mathcal{T}$.
Finally, being completed, incomplete, or neither depends on which vertices are two-sided. An interior point that is the image of an interior point $x$ is two-sided if and only if $x$ is two-sided. So we only need to check the boundary point $y = \alpha_0(1)$ (recall $\alpha_0$ is given its orientation from being an arc of $\Gamma$) because its image $z = s^+(y)$ is an interior point (the image of the other endpoint $\alpha_0(0)$ is not an interior point so we do not need to check if it is two-sided). But $z$ is two-sided because we have included the positive splitting pair of regions in $\{ \alpha_1, \alpha_2 \}$ in $\mathcal{T}'$. Thus if $\mathcal{T}$ is completed then $\mathcal{T}'$ is completed, and if $\mathcal{T}$ is incomplete then so is $\mathcal{T}'$.
For the converse, notice that the only points in an extended tower that do not have an inverse image are in the splitting pair of $\{\alpha_1, \alpha_2 \}$. Now, if we assume that $\mathcal{T}'$ is not supported in just $\{ \alpha_1, \alpha_2 \}$, then $z = s^+(\alpha_0(1))$ is an interior $\bullet$-point that must be two-sided, so there must be a region $R'$ that is not supported in $\{ \alpha_1, \alpha_2 \}$. But then we get a region $R$ supported in $\Gamma$. Moreover, note that every point in $\alpha_2$ belongs to a region, and so every other region in $\mathcal{T}'$ has vertices that have an inverse image on $\Gamma$. Furthermore, every vertex is two-sided if and only if its preimage is. Notice that we do get an extended tower by taking the regions induced by these points: even though the negative region which has $z$ as a vertex does not induce a negative region in $\mathcal{T}$, it does not need to, precisely because the preimage of $z$ is on the boundary. ◻
**Lemma 35**. *Suppose that $\alpha_0$ is $\varphi$-contained in $\alpha_2$. Then if $\mathcal{T}$ is an extended tower in $\Gamma$, there is an extended tower $\mathcal{T}'$ supported in $\Gamma'$ with the same properties as $\mathcal{T}$. Conversely, if $\mathcal{T}'$ is an extended tower supported in $\Gamma'$, there is an extended tower $\mathcal{T}$ supported in $\Gamma$ with the same properties as $\mathcal{T}'$.*
*Proof.* The argument is the same as for Lemma [Lemma 34](#Contained1){reference-type="ref" reference="Contained1"}, except now the arc images intersecting $\alpha_0$ all intersect $\alpha_2$, and the arcs intersecting $\varphi(\alpha_0)$ all intersect $\varphi(\alpha_2)$. Now we need to check the boundary point $y = \alpha_0(0)$, which is the one that is sent to an interior point. But now the regions that we include in $\mathcal{T}'$ are the negative splitting pair of $\{\alpha_1, \alpha_2 \}$ (if the $\bullet$-point $\alpha_0(0)$ belongs to a region), so its image is two-sided, and it does not affect the properties of the extended tower. Moreover, the regions from the splitting pair now belong to higher levels (specifically, if the positive region containing $\alpha_0(0)$ is on level $i$, then the regions from the splitting pair are on level $i+1$. Indeed, the positive region has a $\bullet$-point on the boundary and another one on a level $i$ region so it is on level $i+1$, and the negative region has a $\circ$-point on $\textrm{Circ}_{\partial}(\Gamma')$ and another one on a level $i+1$ positive region). The rest of the regions are on the same levels as before, so $\mathcal{T}'$ is also nested if $\mathcal{T}$ is nested.
For the converse, the argument is again the same, using the negative splitting pair instead of the positive one. ◻
Now suppose none of the arcs $\{\alpha_0, \alpha_1, \alpha_2 \}$ is $\varphi$-contained in any of the others, that is, the image of $\alpha_0$ leaves $P$ by intersecting $\alpha_1$ and the image of $\alpha_2$ leaves $P$ by intersecting $\alpha_0$. Again the slide maps are the identity away from a neighbourhood of $P$, and a neighbourhood of $\varphi(P)$. Therefore, we will distinguish three cases, depending on how $\varphi(P)$ intersects $P$.
As before, $\mathcal{T}'$ will be a collection of regions induced by the regions in $\mathcal{T}$. However, in the previous cases every region in $\mathcal{T}$ corresponded to a unique region supported in $\Gamma'$ that was obtained by adding and removing rectangles. Now, the slide maps might "break up" regions into several other regions, since vertices connected by an edge on $\alpha_0$, or on $\varphi(\alpha_0)$, could be mapped to different arcs, or arc images. In this case the following definition will be useful.
**Definition 36**. Let $\mathcal{T}$ be an extended tower in $(\Sigma, \varphi, \Gamma)$. Two regions $R_1,R_2 \in \mathcal{T}^{\pm}$ are said to be *connected by a region $R \in \mathcal{T}^{\mp}$* if there exist points $x \in \textrm{Dot}(R) \cap \textrm{Dot}(R_1)$ and $y \in \textrm{Circ}(R) \cap \textrm{Circ}(R_2)$. We will then refer to the region $R$ as a *connecting region*.
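In the toy encoding used after Definition 25 (a region recorded by its sign, $\bullet$-points, and $\circ$-points), the condition of Definition 36 is a simple check on shared vertices; again the function name is ours and purely illustrative.

```python
def connects(R, R1, R2):
    """Definition 36: R1 and R2 (regions of one sign) are connected by R (a
    region of the opposite sign) when R shares a bullet-point with R1 and a
    circle-point with R2."""
    assert R1.sign == R2.sign and R.sign == -R1.sign
    return bool(R.dots & R1.dots) and bool(R.circs & R2.circs)
```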
Connecting regions supported in $\{ \alpha_1, \alpha_2 \}$ might not have all of their vertices be images of a vertex in $\mathcal{T}$ (so they are not induced by regions in $\mathcal{T}$ the way the regions in the previous Lemmas were), but we will include them in some cases to preserve properties of $\mathcal{T}$. Similarly, we might need to add negative regions (whose $\bullet$-points are not images under the slide maps of vertices in $\mathcal{T}$) to ensure the resulting extended tower is replete.
**Lemma 37**. *Let $\mathcal{T}$ be an extended tower in $\Gamma$, and suppose that $P \cap \varphi(P)$ is just the basepoint triangles. Then, using the maps $s^+$ and $s^-$, we can construct regions giving an extended tower $\mathcal{T}'$ supported in $\Gamma'$ that has the same properties as $\mathcal{T}$.*
*Proof.* Unlike before, given a region $R \in \mathcal{T}$, the image of its vertices under the slide maps might not induce a unique region in $\Gamma'$. We construct the extended tower $\mathcal{T}'$ as follows. For any region $R \in \mathcal{T}^{\pm}$, if the set of vertices $s^{\pm}(V(R))$ induces a unique region $R'$ supported in $\Gamma'$, set $R' \in \mathcal{T}'$. So suppose that it does not, that is, points from $s^{\pm}(V(R))$ lie on two different regions $R'_1$ and $R'_2$ connected by a third region $R'_3$. First assume that $R$ is a negative region. Then if there are two-sided $\bullet$-points $x,y \in V(R)$ such that $s^-(x) \in R'_1$ and $s^-(y) \in R'_2$, we set $R'_1, R'_2, R'_3 \in \mathcal{T}'$. If every two-sided $\bullet$-point in $R$ has its image in $R'_1$, then set $R'_1 \in \mathcal{T}'$ (but not $R'_2$ or $R'_3$). Now assume that $R$ is a positive region. We do the same as before but with $\circ$-points. If there are two-sided $\circ$-points $x,y \in V(R)$ such that $s^+(x) \in R'_1$ and $s^+(y) \in R'_2$, we set $R'_1, R'_2, R'_3 \in \mathcal{T}'$. If every two-sided $\circ$-point in $R$ has its image in $R'_1$, then set $R'_1 \in \mathcal{T}'$, but not $R'_2$ or $R'_3$.
Finally, if after having done this there is a negative region $R'_1$ supported in $\Gamma'$ such that all its $\circ$-points are vertices of positive regions in $\mathcal{T}'$ or $\circ$-points on a basepoint triangle, set $R'_1 \in \mathcal{T}'$. This ensures that $\mathcal{T}'$ is replete. Further, if there exists a positive region $R'_3$ with its $\bullet$-points being on $\alpha_1(1)$ or $\bullet$-points of $R'_1$, set $R'_3 \in \mathcal{T}'$. This last case only happens if there is a $\circ$-point $x \in \alpha_0$ in a positive region $R \in \mathcal{T}$ that is not two-sided, but its image $s^+(x)$, which is a vertex of a positive region $R'$, is two-sided after applying this rule. We will see later (Figure [23](#fig:WeirdTwosided){reference-type="ref" reference="fig:WeirdTwosided"}) that this ensures that if $\mathcal{T}$ is incomplete then so is $\mathcal{T}'$.
We now show what the induced regions are.
If $P \cap \varphi(P)$ is just the basepoint triangles, for every interior intersection point $x$ of $\mathcal{T}$ on $\alpha_0$ (which will also be on an arc image $\varphi(\beta)$ with $\beta \neq \alpha_0$), we have that $s^+(x) = s^-(x)$ and moreover the image under the slide map is obtained by going along $\varphi(\beta)$ until it leaves $P$. We can then see that, for a positive region $R$ with vertices $x,y$ on $\alpha_0$, there are two options.
The first option is that $s^+(x)$ and $s^+(y)$ lie on the same arc, which means that the local effect of the slide map on $R$ is just extending it by a rectangle.
The second option is that $s^+(x)$ lies on $\alpha_1$ and $s^+(y)$ lies on $\alpha_2$. Then the local effect of the slide map on $R$ is extending it by a $6$-gon, where the extra sides are given by $\alpha_1$ from $s^+(x)$ to its endpoint, an edge of the basepoint triangle, and $\alpha_2$ from the $\circ$-point on the basepoint triangle to $s^+(y)$, see Figure [21](#fig:No_Intersection){reference-type="ref" reference="fig:No_Intersection"}. Moreover, observe that this will only happen once.
For a negative region $R$ with vertices $x,y$ on $\alpha_0$, there are again two options.
The first option is that $s^-(x)$ and $s^-(y)$ lie on the same arc, which means that the local effect of the slide map on $R$ is just removing a rectangle. We can see this, together with the positive regions, in Figure [21](#fig:No_Intersection){reference-type="ref" reference="fig:No_Intersection"}. Observe that in all these first cases the levels are preserved.
![Obtaining the new regions by adding or removing rectangles (or a $6$-gon in the case of $R_3$). The dashed arrows indicate the action of $s^{\pm}$. Here we do not a priori know if the $\circ$-point on the basepoint triangle is two-sided; we deal with this case later.](No_Intersection.pdf){#fig:No_Intersection width="7cm"}
The second option is that $s^-(x)$ lies on $\alpha_1$ and $s^-(y)$ lies on $\alpha_2$. In this case, $R$ must have boundary along another arc image intersecting $\alpha_1$ and $\alpha_2$ so that $R$ is a disc. Moreover, because $s^-(x)$ lies on $\alpha_1$ and $s^-(y)$ lies on $\alpha_2$, we do not have a unique induced region now, but two regions $R_1'$ and $R_2'$, one with an edge on $\alpha_2$ between $s^{-}(y)$ and a $\bullet$-point $b$ and another one with an edge on $\alpha_1$ between a $\circ$-point $a$ and $s^{-}(x)$, connected by a positive rectangle $R_3'$ with vertices $a,b$, the $\circ$-point on the basepoint triangle, and the boundary point on $\alpha_1$. We can see this in Figure [22](#fig:DividingNegative){reference-type="ref" reference="fig:DividingNegative"}. Notice that $a$, $b$, and the point on the basepoint triangle are not images of any point in $\mathcal{T}$ under the slide maps but these, together with the analogous case in $\varphi(P)$ instead of $P$, will be the only cases of this.
Because the rectangle $R'_3$ connects $R_1'$ and $R_2'$, both $a$ and $b$ are two-sided. Again, note that there can only be one region $R \in \mathcal{T}$ of this form, otherwise we would have vertices in the interior of an edge of another region, contradicting the definition of extended tower. We now consider the levels of the regions. Assume the region $R$ is on level $i$. Then, since not all $\circ$-points of $R'_1$ come from vertices of $R$, $R'_1$ will be on level $j$ with $j \leq i$. Then, $R'_3$ will be on level $j+1$, and $R'_2$ will be on level $\max \{ j+1, i \}$.
![Dividing a negative region into two negative regions $R'_1$ and $R'_2$ connected by a positive rectangle $R'_3$.](DividingNegative.PNG){#fig:DividingNegative width="8cm"}
As for the neighbourhood of $\varphi(P)$, we obtain regions in the same way by reversing the role of the arcs and arc images, and changing the orientation of the arcs. Note that this switches the roles of $\alpha_1$ and $\alpha_2$ (in particular, when we are extending a region by a $6$-gon the point on the boundary that we use is on $\alpha_2$ and not $\alpha_1$, but note that neither of these points comes as the image under the slide maps).
Finally, if there is a vertex $x \in \alpha_0$ in a positive region $R \in \mathcal{T}$ that is not two-sided, but $s^+(x)$ is two-sided because otherwise $\mathcal{T}'$ would not be replete, then $s^+(x)$ is a vertex of a negative region $R'_1$ supported in $\Gamma'$. Then we must have that $s^+(x)$ lies on $\alpha_2$, and $P$ divides what would otherwise be a region making $x$ two-sided (but is not one), while $R'_1$ is a region that makes $s^+(x)$ two-sided, see Figure [23](#fig:WeirdTwosided){reference-type="ref" reference="fig:WeirdTwosided"}. Then we also have that the region $R'_3$ has as its $\bullet$-points the point $\alpha_1(1)$, which is on the boundary, and the point $b$, which is a $\bullet$-point on a negative region of $\mathcal{T}'$, so $R'_3 \in \mathcal{T}'$. Notice that here the region $R'_1$ is on the same level as $R'$, and the region $R'_3$ is on the level above $R'_1$.
![The point $x$ is not two-sided but $s^+(x)$ is, because $R'_1$ is a region. However, this induces a positive region $R'_3$ and now $b$ is a two-sided $\bullet$-point in $R'_1$.](WeirdTwosided.PNG){#fig:WeirdTwosided width="7cm"}
We will now see how the properties of $\mathcal{T}$ are preserved.
As before, because of the way the induced regions are constructed, the result is an extended tower, which moreover is nice. It is also replete by construction.
Also, our discussion on the levels of the regions implies that if $\mathcal{T}$ is nested then so is $\mathcal{T}'$. Being completed (or incomplete or neither) will depend on which vertices are two-sided.
Let us now see how the vertices being two-sided (or not) determines whether their images are two-sided. Again we will focus on a neighbourhood of $P$ and the analogous result for a neighbourhood of $\varphi(P)$ will follow from reversing the role of arcs and arc images.
First suppose that $\mathcal{T}$ is completed. This means that every interior vertex is two-sided, except for the connecting vertex $y_0$. But by construction this means that the images of these interior vertices in $\Gamma'$ are two-sided, except for $y_0$ (we imposed that $\alpha_0$ is not the first arc in $\Gamma$, so $s^-(y_0) = y_0$). Moreover, the vertices $a$ and $b$ from the connecting region, if there is one, are also two-sided.
There only remains to show that the $\circ$-point on the basepoint triangle formed by $\varphi(\alpha_1)$ and $\alpha_2$ is two-sided. Since $\mathcal{T}$ is completed, $\alpha_0(1)$ is the vertex of a positive region in $\mathcal{T}$, which means that $\alpha_2(1) = s^+(\alpha_0(1))$ is the vertex of a positive region in $\mathcal{T}'$. Note that not every arc image intersecting $\alpha_0$ leaves $P$ by intersecting $\alpha_2$, since initially $\varphi(\alpha_0)$ intersects $\alpha_1$ by hypothesis. Since every point in $\alpha_0$ belongs to a region, because $\mathcal{T}$ is completed, there must be a region where the arc images forming the edge on $\alpha_0$ leave $P$ by intersecting different arcs. If this region is negative, it must split into two negative regions in $\mathcal{T}'$ connected by a (positive) rectangle which has the $\circ$-point on the basepoint triangle as one of its vertices (see Figure [22](#fig:DividingNegative){reference-type="ref" reference="fig:DividingNegative"}). If the region is positive, then it is extended by a $6$-gon as in Figure [21](#fig:No_Intersection){reference-type="ref" reference="fig:No_Intersection"}, and one of the vertices is immediately the $\circ$-point on the basepoint triangle. Therefore the $\circ$-point on the basepoint triangle is a vertex of a positive region. To see that it is also a vertex of a negative region, reverse the roles of arcs and arc images. Since $\mathcal{T}$ is completed, every point in $\varphi(\alpha_0)$ belongs to a region. But then there must be a region where the arcs forming the edge on $\varphi(\alpha_0)$ leave $\varphi(P)$ by intersecting different arc images. If this region is positive, it must split into two positive regions connected by a (negative) rectangle which has the $\circ$-point on the basepoint triangle as one of its vertices, see Figure [24](#fig:PositiveSplitRegion){reference-type="ref" reference="fig:PositiveSplitRegion"}.
![When there is a positive region such that the arcs forming the edge on $\varphi(\alpha_0)$ leave $\varphi(P)$ by intersecting different arc images, there are three induced regions $R'_1, R'_2, R'_3$ and the connecting region $R'_3$ uses the basepoint triangle.](PositiveSplitRegion.pdf){#fig:PositiveSplitRegion width="9cm"}
If the region is negative, then it is extended by a $6$-gon, and one of the vertices is immediately the $\circ$-point on the basepoint triangle. Therefore the $\circ$-point on the basepoint triangle is a vertex of a negative region.
![When there is a negative region such that the arcs forming the edge on $\varphi(\alpha_0)$ leave $\varphi(P)$ by intersecting different arc images, the induced region $R'$ uses the basepoint triangle.](NegativeExtendedRegion.pdf){#fig:NegativeExtendedRegion width="9cm"}
Now assume that $\mathcal{T}$ is incomplete. Let $R \in \mathcal{T}^-$. Then there exists a $\bullet$-point $x$ in $R$ that is two-sided. If the vertices of $R$ induce a unique region in $\mathcal{T}'$ then $s^-(x)$ is a two-sided $\bullet$-point. If they induce two regions $R'_1, R'_2$ connected by a positive region $R'_3$, as in Figure [22](#fig:DividingNegative){reference-type="ref" reference="fig:DividingNegative"}, there are three cases to consider.
First, if all the $\bullet$-points in $R$ that are two-sided have their images in $R'_1$, by construction we do not add $R'_2$ or $R'_3$ to $\mathcal{T}'$ and so the property that negative regions have a $\bullet$-point that is two-sided is preserved. The result is still a replete extended tower because the vertex $a$ is not a vertex of a positive region anymore, so we are not forced to add $R'_2$ to make $\mathcal{T}'$ replete, and there are no interior $\bullet$-points in $R'_2$ that are only vertices of positive regions, by hypothesis.
Second, if some $\bullet$-points in $R$ have their images in $R'_1$ and some in $R'_2$, the property that negative regions have a $\bullet$-point that is two-sided is immediately preserved.
Third, if all the $\bullet$-points in $R$ that are two-sided have their images in $R'_2$, note that the vertex $b$ which is a $\bullet$-point in $R'_1$ is now two-sided, and so the property that negative regions have a $\bullet$-point that is two-sided is preserved.
Finally, assume that a $\circ$-point in $\alpha_0$ is not two-sided, but its image is, as in Figure [23](#fig:WeirdTwosided){reference-type="ref" reference="fig:WeirdTwosided"}. Then recall that $R'_1, R'_3 \in \mathcal{T}'$. Therefore, again the point $b$ is a $\bullet$-point in the negative region that is two-sided, so the property that negative regions have a $\bullet$-point that is two-sided is preserved.
Therefore if $\mathcal{T}$ is incomplete $\mathcal{T}'$ is incomplete, and we are done. ◻
**Lemma 38**. *Let $\mathcal{T}$ be an extended tower in $\Gamma$, and suppose that $P \cap \varphi(P)$ consists of the basepoint triangles and a collection of rectangles. Then, using the maps $s^+$ and $s^-$, we can construct regions giving an extended tower $\mathcal{T}'$ supported in $\Gamma'$ that has the same properties as $\mathcal{T}$.*
*Proof.* This case is done in exactly the same way as Lemma [Lemma 37](#NoIntersection){reference-type="ref" reference="NoIntersection"}. Notice that now in the case where $x \in \varphi(\alpha_0)$, $s^{\pm}(x)$ are not obtained by following an arc image to its intersection with $\alpha_1$ or $\alpha_2$ but by following two sides of a rectangle (which is part of the intersection $P \cap \varphi(P)$). However, locally this only amounts to adding or removing this rectangle from the regions. Moreover, note that in this case we still have that $s^+(x) = s^-(x)$ for all points where both maps are defined. ◻
**Lemma 39**. *Let $\mathcal{T}$ be an extended tower in $\Gamma$, and suppose that the intersection $P \cap \varphi(P)$ contains a $6$-gon. Then, using the maps $s^+$ and $s^-$, we can construct regions giving an extended tower $\mathcal{T}'$ supported in $\Gamma'$ that has the same properties as $\mathcal{T}$.*
*Proof.* The construction of $\mathcal{T}'$ is done in the same way as Lemma [Lemma 37](#NoIntersection){reference-type="ref" reference="NoIntersection"}. Let $A$ be the $6$-gon contained in $P \cap \varphi(P)$. We only need to focus on the regions given by the intersection points in $A$, because all the others have been covered by Lemmas [Lemma 37](#NoIntersection){reference-type="ref" reference="NoIntersection"} and [Lemma 38](#RectangleIntersection){reference-type="ref" reference="RectangleIntersection"}. There are two cases: either $\varphi(\alpha_0)$ intersects $\alpha_0$ as an edge of $A$, or it does not.
First assume it does, and call this intersection point $x$. Then $s^+(x) \neq s^-(x)$ (if they are both defined), but for a region $R$ (either positive or negative) we will see that the induced region $R'$ is obtained by adding and removing rectangles, see Figure [26](#fig:SlideMap){reference-type="ref" reference="fig:SlideMap"} for an example.
![The case where $s^+(x) \neq s^-(x)$. Here $y = s^+(x)$ and $z = s^-(x)$. The dashed arrows indicate the action of $s^{\pm}$. The regions $R'_1, R'_2$ are obtained from $R_1, R_2$ by adding and removing rectangles.](SlideMap.pdf){#fig:SlideMap width="8cm"}
Now, this case is further divided into two cases. First, if $\varphi(\alpha_1)$ also intersects $\alpha_0$, then the orientation of the arc images forces $x$ to be a $\bullet$-point, as we can see in Figure [27](#fig:6-gonIntersection){reference-type="ref" reference="fig:6-gonIntersection"}. This means that $x$ is a vertex of a negative region $R_2$, and we get an induced negative region $R'_2$ supported in $\Gamma'$.
Now if $x$ is two-sided, it is also a vertex of a positive region $R_1$, which induces a positive region $R'_1$. We also have a positive rectangle $R'_3$ using $\alpha_1$, $\alpha_2$, a side of the basepoint triangle, and either $\varphi(\alpha_1)$ or $\varphi(\alpha_2)$ (depending on which one does not intersect $\alpha_0$). Moreover, one of the vertices of this region is $s^-(x)$. This region can be completed with a rectangle $R'_4$ using the part where $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$ are parallel, and one of the vertices of this region is $s^+(x)$. This means that both $s^+(x)$ and $s^-(x)$, together with the new points we have introduced (including the $\circ$-point on the basepoint triangle), are two-sided. In particular if $\mathcal{T}$ was completed, so is $\mathcal{T}'$, because all new points introduced are two-sided. Similarly, if $\mathcal{T}$ was incomplete, so is $\mathcal{T}'$, because both the negative regions that we have introduced have $\bullet$-points in common with a positive region. We can see these regions in Figure [27](#fig:6-gonIntersection){reference-type="ref" reference="fig:6-gonIntersection"}.
![The regions induced by a positive region $R_1$ and a negative region $R_2$ with a vertex on $x$. The dashed arrows indicate the action of $s^{\pm}$.](6-gonIntersection.PNG){#fig:6-gonIntersection width="7cm"}
If $x$ is not two-sided, then $\mathcal{T}$ cannot have been completed. So assume it was incomplete. Then $R_2$ has some $\bullet$-point $y$ in common with a positive region, which means that $R'_2$ has some $\bullet$-point in common with a positive region, and now by construction we do not include $R'_3$ and $R'_4$ in $\mathcal{T}'$, because $R'_4$ does not have any $\bullet$-points in common with a positive region in $\mathcal{T}'$. However the result is still an extended tower, which is nice and replete, using the same argument as in Lemma [Lemma 37](#NoIntersection){reference-type="ref" reference="NoIntersection"}. Moreover, the property that negative regions have a $\bullet$-point that is two-sided is preserved.
To see that $\mathcal{T}'$ is nested, suppose that $R_2$ is on level $i$. Then $R_1$ is on level $j$, with $j > i$ (because one of its $\bullet$-points is a vertex of a level $i$ region, but it might have other vertices belonging to regions on higher levels). Then, $R'_2$ is also on level $i$, $R'_3$ is on level $i+1$, and $R'_4$ is also on level $i+1$. Therefore, $R'_1$ is on level $\max \{j, i+2 \}$.
Second, if $\varphi(\alpha_2)$ intersects $\alpha_0$, then the orientation of the arc images forces $x$ to be a $\circ$-point. This means that it is a vertex of a positive region $R_1$, which induces a positive region $R'_1$ supported in $\Gamma'$. Now, $s^+(x)$ is two-sided because we can use the negative region $R'_4$ where $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$ are parallel, and this in turn gives a positive rectangle $R'_3$ as before, where one of the vertices is $s^-(x)$. This means that the interior $\bullet$-point that we introduced in $R'_4$ is two-sided, so the property that negative regions have a $\bullet$-point that is two-sided is preserved, so if $\mathcal{T}$ is incomplete then so is $\mathcal{T}'$. Moreover, if $\mathcal{T}$ is completed, every interior vertex, in particular $x$, is two-sided, which means that it is a vertex of a negative region $R_2$, which induces a negative region $R'_2$, which means that $s^-(x)$ is two-sided, and $\mathcal{T}'$ is completed. We can see this case in Figure [28](#fig:6-gon2){reference-type="ref" reference="fig:6-gon2"}. Similarly as before we can find the levels of these new regions, and thus $\mathcal{T}'$ is nested.
![The regions induced by a positive region $R_1$ and a negative region $R_2$ with a vertex on $x$. The dashed arrows indicate the action of $s^{\pm}$.](6-gon2.PNG){#fig:6-gon2 width="9cm"}
So now assume that $\varphi(\alpha_0)$ does not intersect $\alpha_0$ as an edge of $A$. This means that it intersects $\alpha_1$ and $\alpha_2$. Moreover, the segment of $\varphi(\alpha_0)$ that is an edge of $A$ can only be a part of a negative region $R$. But since this region cannot intersect $\varphi(P)$, the intersection with $P$ must be a rectangle. But since there are no intersection points with $\alpha_0$, there is no action of the slide maps, and the induced region is simply given by extending using a $6$-gon. We can see this region in Figure [29](#fig:LastCase){reference-type="ref" reference="fig:LastCase"}. There only remains to show that, if $\mathcal{T}$ is completed, the $\circ$-point on the basepoint triangle is two-sided, that is, it is the vertex of a positive region. But if $\mathcal{T}$ is completed, every point on $\alpha_0$ is part of a region, and thus there must be a region in $\mathcal{T}$ with a vertex between $\alpha_0(0)$ and the intersection between $\varphi(\alpha_2)$ and $\alpha_0$, and a vertex between the intersection point of $\varphi(\alpha_1)$ and $\alpha_0$ and $\alpha_0(1)$. Now notice that, because the region $R$ intersects $P$, such a region must necessarily be positive, otherwise we would have a corner of a region in the interior of a region, contradicting the definition of extended tower. But then the induced region supported in $\Gamma'$ uses the $\circ$-point on the basepoint triangle, and we are done.
Again as before we can assign a level to these regions, so $\mathcal{T}'$ is nested.
![If $\varphi(\alpha_0)$ does not intersect $\alpha_0$ we can extend this region using a $6$-gon (like we did for positive regions in $P$, see Figure [21](#fig:No_Intersection){reference-type="ref" reference="fig:No_Intersection"}).](LastCase.pdf "fig:"){#fig:LastCase width="7cm"} ◻
We are only left to show that if we have an extended tower $\mathcal{T}'$ supported in $\Gamma'$, there is an extended tower $\mathcal{T}$ supported in $\Gamma$ with the same properties as $\mathcal{T}'$.
Recall that the slide maps do not provide a bijection between vertices in $\mathcal{T}$ and vertices in $\mathcal{T}'$, because there are new vertices in $\mathcal{T}'$ that we have added. However, both $s^+$ and $s^-$ are injective, and so form a bijection with their image. Moreover, the vertices without an inverse are those that lie on endpoints of a segment disjoint from $\alpha_0$ contained in an arc image, or a segment disjoint from $\varphi(\alpha_0)$ contained in an arc. Also, in the definition of the slide maps the extended tower $\mathcal{T}$ is only used to specify the domain (we only consider vertices of regions in $\mathcal{T}$).
Thus, given an extended tower $\mathcal{T}'$ supported in $\Gamma'$, we can define a set of intersection points of $\Gamma \cap \varphi(\Gamma)$ as follows.
Let $R' \in \mathcal{T}'^+$ and $x$ a vertex of $R'$. If the inverse of the slide map makes sense, that is, there exists a vertex $y \in \Gamma \cap \varphi(\Gamma)$ such that $s^+(y) = x$, then define $y = (s^+)^{-1}(x)$ to be the *preimage* of $x$. We proceed analogously with negative regions and $s^-$. Now consider the set of all such preimages, i.e., $\mathcal{V}(\mathcal{T}') = \{ y \in \Gamma \cap \varphi(\Gamma) \mid s^{+}(y) \textrm{ or } s^-(y) \in V(\mathcal{T}') \}$.
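As a small bookkeeping sketch (and not part of the argument), if the slide maps were stored as Python dictionaries keyed by intersection points, with vertices treated as opaque labels, the set $\mathcal{V}(\mathcal{T}')$ could be computed as follows; the function name is ours.

```python
def preimage_set(tower_vertices, s_plus, s_minus):
    """All intersection points y of Gamma with phi(Gamma) whose image under
    s^+ or s^- is a vertex of the extended tower T'.  The slide maps are
    injective, so each is encoded as a dictionary from points of
    Gamma cap phi(Gamma) to points of Gamma' cap phi(Gamma')."""
    hits = set()
    for slide in (s_plus, s_minus):
        for y, image in slide.items():
            if image in tower_vertices:
                hits.add(y)
    return hits
```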
**Lemma 40**. *Let $\mathcal{T}'$ be a completed (respectively incomplete) extended tower in $\Gamma'$, which is not just supported in $\{ \alpha_1, \alpha_2 \}$. Then the set $\mathcal{V}(\mathcal{T}')$ induces an extended tower $\mathcal{T}$ supported in $\Gamma$ that is completed (respectively incomplete).*
*Proof.* For regions whose entire set of vertices has a preimage we are done in the same way as the previous lemmas. So we only need to focus on regions that have one or more vertices without a preimage under $s^{\pm}$. We distinguish several different cases.
The first case is when the region is supported in $\{ \alpha_1, \alpha_2 \}$. There are two options for this. The first one is when the region connects two regions which are not just supported in $\{ \alpha_1, \alpha_2 \}$; we will deal with this more generally in the second case. If it does not, then the same reasoning as in Proposition [Proposition 30](#TriangleFixed){reference-type="ref" reference="TriangleFixed"} implies that it must be part of a splitting pair (otherwise $\mathcal{T}$ would not be nested), but Proposition [Proposition 30](#TriangleFixed){reference-type="ref" reference="TriangleFixed"} already shows how to get the extended tower $\mathcal{T}$ in this case.
The second case is a vertex belonging to an arc image (different from $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$) that intersects $\alpha_1$ and $\alpha_2$. In this case we see that the arc image provides an edge for a positive region as in the previous case. Moreover, this means that the positive region must be connecting two negative regions, and the points of these regions that do have preimages will give a negative region in $\Gamma$. Essentially we are merging the two negative regions, which is the opposite operation to dividing a region into two connected by a third region as we had done in the previous Lemmas. The level of the new region will be the highest of the levels of the two original regions (this might also increase the level of positive regions with $\bullet$-points in the merged region), so $\mathcal{T}$ is still nested.
If there are not two regions but just one, then we have a vertex that is not two-sided, but then the preimage of a vertex from the positive region is not two-sided as in Figure [23](#fig:WeirdTwosided){reference-type="ref" reference="fig:WeirdTwosided"} (in particular neither of the extended towers can be completed, so assume that $\mathcal{T}'$ is incomplete). Moreover this means that we do not add a negative region to $\mathcal{T}$, so this would not affect whether every negative region in $\mathcal{T}$ has a two-sided $\bullet$-point. This could now result in a positive region with $\bullet$-points that do not belong to a negative region. In this case, we also do not add this region to $\mathcal{T}$ to make sure $\mathcal{T}$ is an extended tower. Now this could result in a negative region with $\circ$-points that do not belong to a positive region. We can carry on this procedure, but it must (at the latest) terminate when we reach level $0$, where we would have an extended tower $\mathcal{T} = \{R \}$, with $R$ a positive region, so $\mathcal{T}$ is nested, replete, nice, and incomplete.
The third case is analogous to the second one, and happens when both $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$ intersect an arc $\beta_j$. Observe that, as before, we can relate this case to the second one by interchanging the roles of $\Gamma'$ and $\varphi(\Gamma')$.
Because all the vertices are inverse images under the slide maps, the resulting collection of regions $\mathcal{T}$ is an extended tower which moreover must be nice. To see whether it is replete, let $A \in \mathcal{R}(\Sigma, \varphi, \Gamma)$ be such that $\textrm{Circ}(A) \subset \textrm{Circ}(\mathcal{T}) \cup \textrm{Circ}_{\partial}(\Gamma)$, and $\mathcal{T} \cup A$ is again an extended tower. Take $s^-(V(A))$. These vertices must also give negative region(s) in $\Gamma'$, so $V(A)$ is the inverse image of vertices of $\mathcal{T}'$, and thus $A$ is in $\mathcal{T}$, so $\mathcal{T}$ is replete.
Now suppose that $\mathcal{T}'$ is completed. Then all its interior vertices are two-sided, but this implies that all interior vertices of $\mathcal{T}$ are two-sided, so $\mathcal{T}$ is completed.
So suppose that $\mathcal{T}'$ is incomplete, and let $R$ be a negative region in $\mathcal{T}$. We want to show that it has a two-sided $\bullet$-point. Take $s^-(V(R))$. These points are by construction vertices of regions in $\mathcal{T}'$. The only case where a $\bullet$-point of a region in $\Gamma'$ being two-sided does not imply that its preimage is two-sided is when $P \cap \varphi(P)$ contains a $6$-gon. So suppose we have a negative region $R$ supported in $\Gamma$ with a $\bullet$-point $x \in \alpha_0 \cap \varphi(\alpha_0)$ that is not two-sided, and an induced region $R'$ supported in $\Gamma'$ such that $y = s^-(x)$ is two-sided. This means that $y$ is the vertex of a connecting region $R'_1$ that connects $R'$ to a negative region $R'_2$, which is a rectangle where $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$ are parallel. But now, because $x$ is not two-sided, $R'_2$ has no two-sided $\bullet$-points (it only has two, a boundary point, and the point $z$ that would be $s^+(x)$ if $x$ were two-sided, and none of them can be two-sided). This means that either $\mathcal{T}'$ was not incomplete, or $R'_2$ is not in $\mathcal{T}'$. But if $R'_2$ is not in $\mathcal{T}'$, since $\mathcal{T}'$ is replete $R'_1$ also cannot be in $\mathcal{T}'$. But this means that $s^-(x)$ is not two-sided, a contradiction. We can see this in Figure [30](#fig:IncompleteGoingBack){reference-type="ref" reference="fig:IncompleteGoingBack"}.
![If $x$ is not two-sided but $y = s^-(x)$ is, then $\mathcal{T}$ cannot have been incomplete as $R'_2$ does not have any $\bullet$-points in common with any positive region.](IncompleteGoingBack.pdf){#fig:IncompleteGoingBack height="4cm"}
There only remains to show that this procedure does not yield an empty extended tower. For a contradiction, assume that it does. Then, there is a positive region $R$ in $\mathcal{T}'$ that does not induce a positive region in $\Gamma$. At least one vertex of $R$ must not have an inverse image under $s^+$. Moreover, $R$ cannot be a connecting region contained in $P$: in this case $R$ does not come from a positive region in $\Gamma$ (it connects two regions induced by a negative region in $\Gamma$), but one of those is on a lower level than $R$, so we can find a positive region on a lower level that also cannot induce a positive region in $\Gamma$ (if we assume that the induced extended tower is empty). Thus the only way this can happen is if we have $R$ and a negative region $R'$ that is not a connecting region, that is, there is not another positive region connected to $R$ by $R'$ (notice that we have encountered this case before with the roles of positive and negative regions reversed). For this to happen, we must have that a vertex $x$ of this region lies either on $\varphi(\alpha_1)$ or $\varphi(\alpha_2)$ and the intersection of the discs cut out by $\Gamma \cup \{ \beta \}$ and $\varphi(P)$ contains a $6$-gon, see Figure [32](#fig:EmptyTower){reference-type="ref" reference="fig:EmptyTower"} for an example.
Suppose $x$ is a $\circ$-point. Then necessarily it lies on $\varphi(\alpha_2) \cap \beta_i$, for some $\beta_i \in \Gamma$. But now the negative region $R'$ does not have any common $\bullet$-points with any positive region, as it has its other $\bullet$-point on the boundary. Indeed, if it does not, we can show $\mathcal{T}'$ is not nested. Suppose that the other $\bullet$-point is not on the boundary. Then either it is not two-sided, in which case we are done because $\mathcal{T}$ is not incomplete, or it is the vertex of a positive region $R'_1$. Notice that the other $\circ$-point of $R'$ must also be a vertex of a positive region $R'_2$. Then, there must be another negative region $R'_3$ where $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$ are parallel, until the boundary (if it is not until the boundary we can repeat this argument). Now let us look at the levels of the regions. Let $i$ be the level of $R'$. Then $R'_1$ is on level $j$ with $j > i$, as they share a $\bullet$-point. Similarly, $R'_2$ is on level $k$ with $k \leq i$, as they share a $\circ$-point. Then the same argument shows that $R'_3$ is on level $l$ with $l > k$, and $j \leq l$. But then we have $k \leq i < j \leq l < k$, a contradiction. Thus, the other $\bullet$-point of $R'$ is on the boundary. We can see this situation in Figure [31](#fig:Levels){reference-type="ref" reference="fig:Levels"}.
![We cannot have multiple regions where $\varphi(\alpha_1)$ and $\varphi(\alpha_2)$ are parallel because then the extended tower would not be nested. Going from $R'$ to $R'_1$ and then $R'_3$ would increase the level, but going to $R'_2$ and then $R'_3$ would decrease it.](Levels.pdf){#fig:Levels height="6cm"}
This means that $R'$ has no common $\bullet$-point with any positive region so $\mathcal{T}'$ is not incomplete. Moreover, if $\mathcal{T}$ is completed, then it must be supported in $\{ \alpha_1, \alpha_2 \}$ because there is a point in $\varphi(\alpha_1)$ that is not two-sided --a contradiction. We can see this in Figure [32](#fig:EmptyTower){reference-type="ref" reference="fig:EmptyTower"}.
![If $\mathcal{T}'$ induces an empty extended tower, then it cannot have been incomplete because $R'$ has no common $\bullet$-point with any positive region but also cannot have been completed unless it is supported in $\{ \alpha_1, \alpha_2 \}$ because there is a point in $\varphi(\alpha_1)$ that is not two-sided.](EmptyTower.pdf){#fig:EmptyTower height="4cm"}
Now suppose that $x$ is a $\bullet$-point. Then the negative region $R'$ has a $\circ$-point $y$ that is not two-sided (and not on a basepoint triangle), contradicting the definition of extended tower, see Figure [33](#fig:EmptyTower2){reference-type="ref" reference="fig:EmptyTower2"}.
![The point $y$ is a $\circ$-point of a negative region but not of a positive region.](EmptyTower2.pdf "fig:"){#fig:EmptyTower2 height="4cm"} ◻
Recall that our setup is an arc collection $\Gamma$ such that there exists an arc $\beta$ with $\Gamma \cup \beta$ cutting out a disc from the basis. Now let $\alpha_0$ be an arc in $\Gamma$ that is not the next one to $\beta$ as we go along the boundary of the disc cut out by $\Gamma \cup \{ \beta \}$, and $\alpha_1, \alpha_2$ arcs disjoint from this disc such that together with $\alpha_0$ they cut out a $6$-gon $P$. Combining the previous Lemmas, we have proved the following Propositions.
**Proposition 41**. *Let $\mathcal{T}$ be an extended tower in $\Gamma$. If $\alpha_1$ is not $\varphi$-contained in $\alpha_0$, then there is a (nice and replete) extended tower $\mathcal{T}'$ in $\Gamma' = (\Gamma \setminus \{ \alpha_0 \}) \cup \{ \alpha_1, \alpha_2 \}$ which is completed (respectively incomplete) if $\mathcal{T}$ is.*
**Proposition 42**. *Let $\mathcal{T}'$ be an extended tower in $\Gamma'$. If $\alpha_1$ is not $\varphi$-contained in $\alpha_0$, then there is a (nice and replete) extended tower $\mathcal{T}$ in $\Gamma$ which is completed (respectively incomplete) if $\mathcal{T}'$ is.*
## Main Results
Using the base cases and the inductive step we can now show that a collection of arcs $\Gamma$ detects a left-veering arc $\beta$ if $\Gamma \cup \{\beta\}$ cuts out a disc (with the correct orientation), by which we mean that $\Gamma$ supports an incomplete extended tower if and only if $\beta$ is left-veering. Similarly, $\Gamma$ also detects fixable arc segments.
**Theorem 43**. *Let $\{\alpha_i\}_{i=0} ^ n$ be a collection of properly embedded arcs cutting out a $(2n + 2)$-gon $P$, oriented counterclockwise, and assume $\{\alpha_i\}_{i=1} ^ n$ are right-veering. Moreover, suppose no arc contained in $P$ is left-veering. Then $\alpha_0$ is left-veering if and only if $\Gamma = \{\alpha_i\}_{i=1} ^ n$ supports a replete and incomplete extended tower $\mathcal{T}$.*
*Proof.* We argue by induction on the number of arcs in our collection. The case $n=3$ is given by Proposition [Proposition 29](#TriangleLV){reference-type="ref" reference="TriangleLV"}. Now assume the result is true for $k$ arcs, with $k<n$. In the $2n$-gon at least one arc image $\varphi(\alpha_{i_0})$ will leave $P$ by intersecting $\alpha_{i_0+1}$ (because each arc image cuts a smaller subsurface inside $P$ so we can apply an innermost disc argument), creating a basepoint triangle. If $\alpha_{i_0} \neq \alpha_1$ then we can apply Propositions [Proposition 41](#Induction1){reference-type="ref" reference="Induction1"} and [Proposition 42](#Induction2){reference-type="ref" reference="Induction2"}, and there exists an incomplete extended tower $\mathcal{T}$ supported by $\Gamma$ if and only if there exists an incomplete extended tower $\mathcal{T}$ supported by $\{\alpha_1, \dots, \alpha_{n}\}\setminus (\{ \alpha_{i_0},\alpha_{i_0+1} \}) \cup \{ \beta \}$, that is nice and replete, where $\beta$ is the arc-sum of $\alpha_{i_0}$ and $\alpha_{i_0+1}$. But by induction this happens if and only if $\alpha_0$ is left-veering.
So now suppose that $\alpha_{i_0} = \alpha_1$, and there are no other cases where the arc images create a basepoint triangle. Then every arc image $\varphi(\alpha_i)$ must leave $P$ by intersecting $\alpha_1$ (an arc image intersecting another arc would cut a smaller disc that does not contain $\alpha_1$ and we could apply our innermost disc argument there). Then $\Gamma$ supports an extended tower $\mathcal{T}$ whose regions are all rectangles as follows. The level $0$ positive region $R_1$ and its completion $R'_1$ come from the fact that $\alpha_2$ is $\varphi$-contained in $\alpha_1$ (because otherwise the arc-slide of $\alpha_1$ and $\alpha_2$ would be left-veering by Proposition [Proposition 29](#TriangleLV){reference-type="ref" reference="TriangleLV"}). Then $\varphi(\alpha_1)$ enters $P$ again and must exit by intersecting $\alpha_3$, because $\varphi(\alpha_3)$ leaves $P$ by intersecting $\alpha_1$. This forms another rectangle $R_2$, which must be completed by a rectangle $R'_2$. To see this, suppose for a contradiction that $R_2$ is not completed. Then, $\{ R_1, R'_1, R_2 \}$ would form an incomplete extended tower supported in $\{ \alpha_1, \alpha_2, \alpha_3 \}$, and then by induction their arc-sum (strictly speaking, the arc-sum with opposite orientation) would be left-veering, which contradicts the assumption that no arc contained in $P$ is left-veering. The rest of the rectangles are obtained in the same fashion. This extended tower is clearly nice and replete. To see that it is nested, observe that $R_1$ is on level $0$ (and so is $R'_1$), and then the level increases by $1$ with each positive region.
However, if the extended tower is incomplete, the last (positive) rectangle cannot have a completion (otherwise the tower would be completed), and the same argument as in Proposition [Proposition 13](#InitialLV){reference-type="ref" reference="InitialLV"} shows that $\alpha_0$ is left-veering. Conversely, if $\alpha_0$ is left-veering, suppose for a contradiction that the extended tower is completed, that is, the last positive rectangle does have a completion. But then the edge on $\alpha_1$ must be restricted. However the fact that $\alpha_0$ is left-veering means that its image must intersect this edge --a contradiction. We can see this in Figure [34](#fig:LastLV){reference-type="ref" reference="fig:LastLV"}.
![On the left, if the incomplete tower is supported in a smaller collection of arcs then their arc-sum $\beta$ (with opposite orientation) is left-veering by induction. On the right, the incomplete extended tower.](LVLastCase.PNG "fig:"){#fig:LastLV width="12cm"} ◻
**Theorem 44**. *Let $\{\alpha_i\}_{i = 0} ^n$ be a collection of properly embedded right-veering arcs cutting out a $(2n + 2)$-gon $P$, oriented and indexed counterclockwise, and suppose no arc contained in $P$ is left-veering. Let $\gamma$ be an arc segment contained in $P$ starting between $\alpha_n$ and $\alpha_0$ and ending in the interior of $\alpha_1$. Then $\gamma$ is fixable by $\varphi$ if and only if $\Gamma =\{\alpha_i\}_{i = 1} ^n$ supports a completed extended tower $\mathcal{T}$ whose connecting vertex coincides with $\gamma \cap \alpha_1$.*
*Proof.* We argue by induction on the number of arcs in our collection. The case $n=3$ is given by Proposition [Proposition 30](#TriangleFixed){reference-type="ref" reference="TriangleFixed"}. Now assume the result is true for $k$ arcs, with $k < n$. In the $2n$-gon at least one arc image $\varphi(\alpha_{i_0})$ will leave $P$ by intersecting $\alpha_{i_0+1}$, creating a basepoint triangle (because each arc image cuts a smaller subsurface inside $P$). If $\alpha_{i_0} \neq \alpha_1$ then we can apply Propositions [Proposition 41](#Induction1){reference-type="ref" reference="Induction1"} and [Proposition 42](#Induction2){reference-type="ref" reference="Induction2"}, and there exists a completed extended tower $\mathcal{T}$ supported by $\Gamma$ if and only if there exists a completed extended tower $\mathcal{T}$ supported by $\{\alpha_1, \dots \alpha_{n}\}\setminus (\{ \alpha_{i_0},\alpha_{i_0+1} \}) \cup \{ \beta \}$, that is replete, where $\beta$ is the arc-slide of $\alpha_{i_0}$ and $\alpha_{i_0+1}$. But by induction this happens if and only if $\gamma$ is fixable by $\varphi$.
So now suppose that $\alpha_{i_0} = \alpha_1$, and there are no other cases where the arc images create a basepoint triangle. Then every arc image $\varphi(\alpha_i)$ must leave $P$ by intersecting $\alpha_1$. Then the extended tower is a collection of rectangles, obtained as in Theorem [Theorem 43](#TowerLV){reference-type="ref" reference="TowerLV"}, with the difference that now if the extended tower is completed then $\gamma$ must be fixable because it cannot go to either right or left, and conversely if $\gamma$ is fixable then the extended tower must be completed. We can see this last case in Figure [35](#fig:LastInductionCase){reference-type="ref" reference="fig:LastInductionCase"}.
![The extended tower when the only basepoint triangle is formed by $\alpha_1$ and $\alpha_2$.](LastInductionCase.PNG "fig:"){#fig:LastInductionCase width="8cm"} ◻
Our aim is to detect a left-veering arc with a collection of extended towers, each of which will detect a segment of the arc. However, in the setup we have so far, we only detect arcs (or arc segments) with a starting point on the boundary. To get around this, let $\mathcal{C} = \{\alpha_0, \dots ,\alpha_n\}$ be an arc collection cutting out a $(2n + 2)$-gon $P$, oriented and labelled counterclockwise. Now assume that there is a point $x \in \alpha_n$ that is the endpoint of a fixed arc segment disjoint from $P$, so by Theorem [Theorem 44](#TowerFixedArc){reference-type="ref" reference="TowerFixedArc"} there exists an extended tower with $x$ as its connecting vertex. Let $\alpha'_n$ be the (oriented) arc segment between $\alpha_n(0)$ and $x$, and let $\alpha'_0$ be the (oriented) arc segment that goes from $x$ to $\alpha_1(0)$. Then $\mathcal{C}' = \{\alpha_0', \alpha_1, \dots ,\alpha_{n-1}, \alpha_n'\}$ is a collection of arc segments that cut out a $(2n+1)$-gon $P'$. Moreover, at $x$, the tangent vector of $\alpha'_n$ followed by the tangent vector of $\varphi(\alpha'_n)$ define the orientation of $\Sigma$ (because $x$ is the connecting vertex of a completed extended tower) so in a slight abuse of notation we can say that the arc segment $\alpha'_n$ is right-veering, and we can adapt the terminology and methods of extended towers to $\mathcal{C}'$ (since the only properties we use in the results are that the arcs bound a disc and are disjoint and right-veering). In particular Theorems [Theorem 43](#TowerLV){reference-type="ref" reference="TowerLV"} and [Theorem 44](#TowerFixedArc){reference-type="ref" reference="TowerFixedArc"} still hold. We can see this situation in Figure [36](#fig:PartialTower){reference-type="ref" reference="fig:PartialTower"}.
**Definition 45**. Let $\mathcal{T}$ be an extended tower supported in the collection $\mathcal{C}' \setminus \{\alpha_0'\}$. We say $\mathcal{T}$ is a *partial extended tower*, and $x$ its *starting point*.
We will also say, in a slight abuse of notation, that such a partial extended tower $\mathcal{T}$ is *supported in $\Gamma = \{\alpha_1, \dots \alpha_n \}$*.
![The setup for a partial extended tower supported in $\{ \alpha_1, \alpha_2, \alpha_3, \alpha'_4 \}$. The fact that $x$ is the connecting vertex of an extended tower means that at $x$ the tangent vector of $\alpha'_4$ followed by the tangent vector of $\varphi(\alpha'_4)$ define the orientation of $\Sigma$.](PartialTower.pdf){#fig:PartialTower width="5cm"}
We are now almost ready to prove that we can detect the existence of a left-veering arc from a basis of the surface, we just need one more definition. Notice that for our results to work we need the arcs cutting out a disc to be distinct. However, if we simply take a basis as our collection of arcs, this does not necessarily happen. We solve this issue by duplicating every arc from the basis. This has the effect that when a left-veering arc intersects the basis it actually intersects two (isotopic) arcs $\alpha$ and $\beta$. Moreover, the segment of the left-veering arc before this intersection will cut out a disc with a collection containing one of the arcs (say, $\alpha$) and the segment after the intersection will cut out a disc with a collection containing the other arc (say, $\beta$). Then an extended tower in the first collection will have a connecting vertex on $\alpha \cap \varphi(\alpha)$ and an extended tower in the second collection will have a starting point on $\beta \cap \varphi(\beta)$. Because we want to construct the left-veering arc from the extended towers, we want to relate these two points.
*Remark 7*. We want to distinguish $\alpha$ and $\beta$ even though they are isotopic because $\alpha$ could also be an arc in the second collection, and we want each arc from the collection supporting an extended tower to be distinct. Note though that we only need to duplicate each arc from a basis because the disc cut out by a basis has exactly two copies of each arc, so each extended tower will have at most two isotopic arcs.
**Definition 46**. Let $\alpha$ and $\beta$ be two isotopic properly embedded arcs. Then $\varphi(\alpha)$ and $\varphi(\beta)$ are always parallel and, for an intersection $x \in \alpha \cap \varphi(\alpha)$, we have a small rectangle contained in the intersection of the thin strip between $\alpha$ and $\beta$ with its image, which has $x$ as a vertex. We call the vertex of this rectangle $y \in \beta \cap \varphi(\beta)$ the *adjacent point to $x$*.
![The point $y$ that is adjacent to the point $x$.](AdjacentPoint.PNG){#fig:AdjacentPoint width="4cm"}
**Theorem 47**. *Let $(\Sigma, \varphi)$ be an open book, and $\Gamma$ a basis for $\Sigma$ with all arcs duplicated. Suppose that there exists a left-veering arc $\gamma$, which we can assume to be minimal. Then there exists a collection of extended towers $\{\mathcal{T}_i\}_{i = 1}^{N}$ (where $N$ is the number of intersections between $\gamma$ and the basis) supported in (subcollections of) $\Gamma$ such that:*
- *$\mathcal{T}_1$ is a completed extended tower.*
- *$\mathcal{T}_i$ is a completed partial extended tower, whose starting point is the adjacent point to the connecting vertex of $\mathcal{T}_{i-1}$.*
- *$\mathcal{T}_N$ is an incomplete partial extended tower, whose starting point is the adjacent point to the connecting vertex of $\mathcal{T}_{N-1}$.*
*Conversely, if we have such a collection, then there exists a left-veering arc $\gamma$.*
*Proof.* Cut $\Sigma$ along the arcs $\Gamma$, making a disc, and orient them counterclockwise. Then $\gamma$ is fixable until it intersects one of the arcs (if it is disjoint from the basis then it will cut out a disc with a subcollection of arcs from $\Gamma$ and then Theorem [Theorem 43](#TowerLV){reference-type="ref" reference="TowerLV"} gives an incomplete extended tower). Then Theorem [Theorem 44](#TowerFixedArc){reference-type="ref" reference="TowerFixedArc"} gives the first completed extended tower $\mathcal{T}_1$. Then $\gamma$ is again fixable from the adjacent point to this point until the next intersection with $\Gamma$ (and also in the small rectangle between the adjacent points), and now modifying Theorem [Theorem 44](#TowerFixedArc){reference-type="ref" reference="TowerFixedArc"} for the case where we have an arc segment with fixed endpoints gives the first completed partial extended tower. Repeat until $\varphi(\gamma)$ goes to the left, and then modifying Theorem [Theorem 43](#TowerLV){reference-type="ref" reference="TowerLV"} gives the incomplete partial extended tower.
For the converse, observe that both Theorem [Theorem 44](#TowerFixedArc){reference-type="ref" reference="TowerFixedArc"} and Theorem [Theorem 43](#TowerLV){reference-type="ref" reference="TowerLV"} are if and only if statements, and the left-veering arc is constructed by joining all the fixed arc segments given by the completed extended towers and the left-veering arc segment given by the incomplete one. Observe that the small arc segments between adjacent points necessary to connect all of the arc segments given by Theorems [Theorem 44](#TowerFixedArc){reference-type="ref" reference="TowerFixedArc"} and [Theorem 43](#TowerLV){reference-type="ref" reference="TowerLV"} are fixable because their endpoints are fixed points and they lie in the thin strip between isotopic arcs. ◻
Notice that the number of extended towers $N$ in a collection detecting a left-veering arc in the construction described by Theorem [Theorem 47](#TowerCollection1){reference-type="ref" reference="TowerCollection1"} coincides with the number of intersections of the arc with the basis. Moreover, each of these points corresponds to a point of $\alpha \cap \varphi(\alpha)$ for some arc $\alpha$ in the basis. Once we fix a basis, the number of such points is finite and gives an upper bound for $N$. Moreover, each extended tower is also a finite collection of regions. Therefore, Theorem [Theorem 47](#TowerCollection1){reference-type="ref" reference="TowerCollection1"} implies the existence of an algorithm that takes as input a basis of arcs and their images and, in a finite number of steps, either produces a collection of extended towers giving a left-veering arc or certifies that no such collection exists, which means that the monodromy is right-veering.
# An example {#Example}
Theorem [Theorem 47](#TowerCollection1){reference-type="ref" reference="TowerCollection1"} does not make any assumptions on the number of extended towers needed to detect a left-veering arc, and a natural question is whether multiple extended towers are always needed, and, if the answer is affirmative, whether there is an upper bound on the number of extended towers that does not depend on the choice of basis. Example [Example 48](#LargeIntersection){reference-type="ref" reference="LargeIntersection"} shows that indeed some cases require multiple extended towers and the number of extended towers required can be made arbitrarily large. Recall that the arc segments that form a left-veering arc $\gamma$ are determined by the intersections of $\gamma$ with a basis, and each segment is detected with an extended tower. Therefore for any natural number $n$ we construct an open book which is not right-veering and a basis such that each arc has more than $n$ intersections with every left-veering arc.
**Example 48**. Let $\Sigma$ be a planar surface with $4$ boundary components $\{ C_i\}_{i = 1}^4$, and $\varphi = \tau_1\tau_2\tau_3\tau_a\tau_b^{-1}$ (where $\tau_i$ represents a positive Dehn twist around the boundary component $C_i$) as shown in Figure [38](#fig:Example){reference-type="ref" reference="fig:Example"}. Then $(\Sigma, \varphi)$ is not a right-veering open book, as the arc $\gamma$ in Figure [38](#fig:Example){reference-type="ref" reference="fig:Example"} is left-veering. However, any left-veering arc has to start in the boundary component $C_4$, and moreover has to intersect $b$ before it intersects $a$. In particular, this means that it has to intersect the arc $\delta$ going from $C_1$ to $C_3$, and thus the curve $c$ that separates $C_1$ and $C_3$ from $C_2$ and $C_4$. We choose a basis $\mathcal{B}$ of right-veering arcs such that all arcs intersect $c$. For every natural number $n$, let $\mathcal{B}_n = \tau_c^n(\mathcal{B})$. Then $\mathcal{B}_n$ is a basis which is also right-veering and every arc of $\mathcal{B}_n$ intersects every left-veering arc more than $n$ times. But this in turn implies that more than $n$ extended towers are needed for $\mathcal{B}_n$ to detect a left-veering arc.
![On the left, the surface $\Sigma$ with the curves involved in the monodromy, and a left-veering arc $\gamma$. On the right, a basis $\mathcal{B} = \{ \alpha_1, \alpha_2, \alpha_3 \}$ such that every arc in $\mathcal{B}$ intersects the curve $c$.](Example.PNG){#fig:Example width="8cm"}
---
abstract: |
We show that for every $k\ge 3$ there exist complex algebraic cones of dimension $k$ with isolated singularities, which are bi-Lipschitz and semi-algebraically equivalent but they have different degrees. We also prove that homeomorphic projective hypersurfaces with dimension greater than 2 have the same degree. In the final part of the paper, we classify links of real cones with base $\mathbb{P}^1\times \mathbb{P}^2.$ As an application we give an example of three four dimensional real algebraic cones in $\mathbb{R}^8$ with isolated singularity which are semi-algebraically and bi-Lipschitz equivalent but they have non-homeomorphic bases.
address:
- " Departamento de Matemática, Universidade Federal do Ceará, Rua Campus do Pici, s/n, Bloco 914, Pici, 60440-900, Fortaleza-CE, Brazil. E-mail: `[email protected]` E-mail: `[email protected]` "
- " Instytut Matematyczny, Polska Akademia Nauk, Śniadeckich 8, 00-656 Warszawa, Poland & Departamento de Matemática, Universidade Federal do Ceará, Rua Campus do Pici, s/n, Bloco 914, Pici, 60440-900, Fortaleza-CE, Brazil. E-mail: `[email protected]` "
author:
- Alexandre Fernandes
- Zbigniew Jelonek
- José Edson Sampaio
title: Bi-Lipschitz equivalent cones with different degrees
---
[^1]
# Introduction
In 1971, O. Zariski in [@Zariski:1971] proposed many questions and the most known among them is the following.
1. **Question A.** Let $f,g\colon(\mathbb{C}^n,0)\to (\mathbb{C},0)$ be two complex analytic functions. If there is a homeomorphism $\varphi\colon(\mathbb{C}^n,V(f),0)\to (\mathbb{C}^n,V(g),0)$, is it true that $m(V(f),0)=m(V(g),0)$?
This is still an open problem. The stated version of Question A is Zariski's famous Multiplicity Conjecture. Recently, Zariski's Multiplicity Conjecture for families was solved by Fernández-Bobadilla and Pełka in [@BobadillaP:2022].
Recently, there have also been some contributions to Zariski's Multiplicity Conjecture from the Lipschitz point of view. For instance, the following conjecture was proposed in [@BobadillaFS:2018]:
**Conjecture 1**. *Let $X\subset \mathbb{C}^n$ and $Y\subset \mathbb{C}^m$ be two complex analytic sets with $\dim X=\dim Y=d$. If their germs at zero are bi-Lipschitz homeomorphic, then their multiplicities $m(X,0)$ and $m(Y,0)$ are equal.*
Still in [@BobadillaFS:2018] the authors proved that Conjecture [Conjecture 1](#conj_local){reference-type="ref" reference="conj_local"} has positive answer for $d=2$. The positive answer for $d=1$ was already known, since Neumann and Pichon [@N-P], with previous contributions of Pham and Teissier [@P-T] and Fernandes [@F], proved that the Puiseux pairs of plane curves are invariant under bi-Lipschitz homeomorphisms, and as a consequence the multiplicity of complex analytic curves with any codimension is invariant under bi-Lipschitz homeomorphisms. In order to find other partial results to Conjecture [Conjecture 1](#conj_local){reference-type="ref" reference="conj_local"} see, e.g., [@BirbrairFLS:2016], [@Comte:1998], [@ComteMT:2002], [@FernandesS:2016], [@Jelonek:2021], [@Sampaio:2016], [@Sampaio:2019], [@Sampaio:2020b] and [@Sampaio:2022]. However, in dimension three, Birbrair, Fernandes, Sampaio and Verbitsky [@bfsv] have presented examples of complex algebraic cones $X$ and $Y$ with isolated singularity, which were bi-Lipschitz homeomorphic but with different multiplicities at the origin.
The first aim of this paper is to generalize the result from [@bfsv]. We show that for every $k\ge 3$ there exist complex algebraic cones of dimension $k$ with isolated singularities, which are bi-Lipschitz and semi-algebraically equivalent but they have different degrees (see Theorem [Theorem 4](#thm:complex_cones){reference-type="ref" reference="thm:complex_cones"}).
The main idea of the above-mentioned result is to find diffeomorphic smooth projective algebraic varieties with different degrees and with bi-Lipschitz homeomorphic affine cones. This is related to the converse of Question B in [@Zariski:1971], which is also related to Question A (see [@Sampaio:2020a]).
Before stating Question B, let us introduce some notation. Let $Bl_0(\mathbb{C}^n)=\{(x,[v])\in \mathbb{C}^n\times \mathbb{C}P^{n-1};x\wedge v=0\}$ and $\beta\colon Bl_0(\mathbb{C}^n)\to \mathbb{C}^n$ be the projection onto $\mathbb{C}^n$, where $x\wedge v=0$ means that there exists $\lambda \in \mathbb{C}$ such that $x=\lambda v$. If $X\subset \mathbb{C}^n$ is a complex analytic set, we define $Bl_0(X)=\overline{\beta^{-1}(X\setminus \{0\})}$ and $E_{0}(X)=Bl_0(X)\cap (\{0\}\times \mathbb{C}P^{n-1})$. Remark that $E_{0}(X)=\{0\}\times \mathbb{P}C(X,0)$, where $\mathbb{P}C(X,0)$ is the projectivized tangent cone of $X$. Now, we are ready to state Question B from [@Zariski:1971].
1. **Question B.** Let $f,g\colon(\mathbb{C}^n,0)\to (\mathbb{C},0)$ be two complex analytic functions. If there is a homeomorphism $\varphi\colon(\mathbb{C}^n,V(f),0)\to (\mathbb{C}^n,V(g),0)$, is there a homeomorphism $h\colon E_{0}(V(f))\to E_{0}(V(g))$ such that for each $p\in E_{0}(V(f))$ (1) the germs $(Bl_0(\mathbb{C}^n), Bl_0(V(f)), p)$ and $(Bl_0(\mathbb{C}^n),Bl_0(V(g)),h(p))$ are homeomorphic and (2) the germs $(E_0(\mathbb{C}^n), E_0(V(f)), p)$ and $(E_0(\mathbb{C}^n),$ $E_0(V(g)),$ $h(p))$ are homeomorphic?
This problem has a negative answer, as was shown by Fernández de Bobadilla in [@Bobadilla:2005]. However, some versions of Question B were presented in [@Sampaio:2020a], where it was shown that a positive answer to one of those versions implies a positive answer to Question A, at least in the case of hypersurfaces with isolated singularities. Other versions of Question B were recently presented in [@Bobadilla:2022]. Here we present two more versions of this question. The first one is the following:
1. **Question B$_{Lip}$.** Let $f,g\colon(\mathbb{C}^n,0)\to (\mathbb{C},0)$ be two complex analytic functions. If there is a bi-Lipschitz homeomorphism $\varphi\colon(\mathbb{C}^n,V(f),0)\to (\mathbb{C}^n,$ $V(g),0)$, is there a homeomorphism $h\colon E_{0}(V(f))\to E_{0}$ $(V(g))$?
The second question that we have in mind has the following more general statement:
1. **Question B$_{Abs-Lip}$.** Let $X\subset \mathbb{C}^n$ and $Y\subset \mathbb{C}^m$ be two complex analytic sets with $\dim X=\dim Y=d$. If $(X,0)$ and $(Y,0)$ are bi-Lipschitz homeomorphic, is there a homeomorphism $h\colon E_{0}(X)\to E_{0}(Y)$?
In [@kol2] (see the Conclusion on p.129) Kollar proved that if $X\subset \mathbb{C}\Bbb P^{n+1}$ is a smooth projective hypersurface of dimension greater than one, then the degree of $X$ is determined by the underlying topological space of $X$. In [@BarthelD:1994 Theorem 1] Barthel and Dimca proved that in the case of projective hypersurfaces (possibly with singularities) of dimension greater than one, the property of having degree one is a topological invariant. Here we generalize these results in dimension $n>2$. More precisely, we prove the following:
Let $V,V'\subset \mathbb{C}\Bbb P^{n+1}$ be two projective hypersurfaces. Assume $n>2.$ If $V$ is homeomorphic to $V'$, then $\deg V=\deg V'.$
As a consequence of this result, we obtain in Corollary [Corollary 8](#cor:question_b_imples_a){reference-type="ref" reference="cor:question_b_imples_a"} that a positive answer to Question B$_{Lip}$ implies a positive answer to the following metric version of Question A (see also [@Bobadilla:2022 Question 3.6.4]):
1. **Question A$_{Lip}$.** Let $f,g\colon(\mathbb{C}^n,0)\to (\mathbb{C},0)$ be two complex analytic functions. If there is a bi-Lipschitz homeomorphism $\varphi\colon(\mathbb{C}^n,V(f),0)\to (\mathbb{C}^n,V(g),0)$, is it true that $m(V(f),0)=m(V(g),0)$?
In the final part of this paper, we classify links of real cones with base $\Bbb P^1\times \Bbb P^2.$ As an application, we give an example of three four-dimensional real algebraic cones in $\mathbb{R}^8$ with isolated singularities which are semi-algebraically and bi-Lipschitz equivalent but have non-homeomorphic bases. In particular, the real version of Question $B_{Abs-Lip}$ has a negative answer.
# Preliminaries {#section:preliminaries}
**Definition 1**. *Let $X\subset \mathbb{R}^n$ and $Y\subset \mathbb{R}^m$ be two sets and let $h\colon X\to Y$.*
- *We say that $h$ is **Lipschitz** if there exists a positive constant $C$ such that $$\|h(x)-h(y)\|\leq C\|x-y\|, \quad \forall x, y\in X.$$*
- *We say that $h$ is **bi-Lipschitz** if $h$ is a homeomorphism, it is Lipschitz and its inverse is also Lipschitz. In this case, we say that $X$ and $Y$ are **bi-Lipschitz equivalent**. When $n=m$ and $h$ is the restriction of bi-Lipschitz homeomorphism $H\colon \mathbb{R}^n\to \mathbb{R}^n$, we say that $X$ and $Y$ are **ambient bi-Lipschitz equivalent**.*
**Definition 2**. *Let $X\subset \Bbb P^n$ be an algebraic variety. Then by an algebraic cone $\overline{C(X)}\subset \Bbb P^{n+1}$ with the base $X$ we mean the set $$\overline{C(X)}=\bigcup_{x\in X} \overline{O,x},$$ where $O$ is the center of coordinates in $\mathbb{R}^{n+1}$, and $\overline{O,x}$ means a projective line which goes through $O$ and $x.$ By an affine cone $C(X)$ we mean $\overline{C(X)}\setminus X.$*
**Proposition 3**. *Let $C(X)$ and $C(Y)$ be affine cones in $\mathbb{R}^N.$ Assume that their links are bi-Lipschitz (semi-algebraically) equivalent. Then they are bi-Lipschitz (semi-algebraically) equivalent. Moreover, if $\dim C(X)=\dim C(Y)=d$, $2d+2\le N$ and $C(X)$ is semi-algebraically bi-Lipschitz equivalent to $C(Y)$, then they are ambient semi-algebraically bi-Lipschitz equivalent.*
*Proof.* Let $X',Y'$ denote the links of $C(X)$ and $C(Y)$, respectively. We have $X',Y'\subset\Bbb{S}^{N-1}$, where $\Bbb{S}^{N-1}$ denotes the unit Euclidean sphere in $\mathbb{R}^{N}$. In this case, $$C(X) =\{ t\cdot x \ : \ x\in X' \ \mbox{and} \ t\geq 0\} \ \mbox{and} \ C(Y) =\{ t\cdot y \ : \ y\in Y' \ \mbox{and} \ t\geq 0\}.$$ By assumption, there is a bi-Lipschitz homeomorphism $f\colon X'\rightarrow Y'$; i.e. $\exists \lambda \geq 1$ such that $$\frac{1}{\lambda} \| x_1-x_2 \| \leq \| f(x_1)-f(x_2) \| \leq \lambda \| x_1 - x_2 \| \ \forall \ x_1,x_2\in X'.$$ Let us define $F\colon C(X)\rightarrow C(Y)$ by $F(t\cdot x) = t\cdot f(x)$ $\forall x\in X'$ and $t\geq 0$. We claim that $F$ is a bi-Lipschitz map. In fact, given $t\cdot x_1, s\cdot x_2\in C(X)$ (one may suppose that $t\leq s$), then: $$\begin{aligned}
\| F(t\cdot x_1) - F(s\cdot x_2) \| &\leq & \| F(t\cdot x_1) - F(t\cdot x_2)\| + \| F(t\cdot x_2) - F(s\cdot x_2)\| \\
&=& t \| f(x_1) - f(x_2) \| + |t-s|\| f(x_2) \| \\
&\leq& \lambda t \| x_1 - x_2 \| + |t-s| |x_2| \\
&=& \lambda \| t\cdot x_1 - t\cdot x_2 \| + \| t\cdot x_2 -s\cdot x_2\| \\
&\leq & \lambda \| t\cdot x_1 - s\cdot x_2 \| + \| t\cdot x_1 -s\cdot x_2\| \\
&=& (\lambda + 1) \| t\cdot x_1 - s\cdot x_2 \|.\end{aligned}$$
In a similar way, if we denote by $g\colon Y'\rightarrow X'$ the inverse map of $f$, we see that $G\colon C(Y) \rightarrow C(X)$, defined by $G(t\cdot y)= t\cdot g(y)$, $\forall y\in Y'$ and $t\geq 0$, is the inverse of $F$ and it is a Lipschitz map. This shows that $F$ is a bi-Lipschitz map.
Moreover, if $f$ is additionally semi-algebraic, we see that $F$ is also semi-algebraic by the construction. Similarly for $g.$ The last statement follows directly from [@bfj]. ◻
# Complex cones
We generalize here the result from [@bfsv]:
**Theorem 4**. *Let $C_k$ denote the Veronese embedding of degree $k$ of $\mathbb{C}\Bbb P^1$ into $\mathbb{C}\Bbb P^k.$ Let $n\ge 2$ and consider the varieties $X_{k,n}=\phi( C_k\times \mathbb{C}\Bbb P^{n-1})$, where $\phi$ is a Segre embedding. Then for fixed $n$ all varieties $X_{k,n}$ have different degrees, deg $X_{k,n}=kn$, and among the cones $C(X_{k,n})$ there are infinitely many cones which are bi-Lipschitz and semi-algebraically equivalent.*
*Proof.* Note that $C_k$ as a cycle is $k\mathbb{C}\Bbb P^1.$ Hence $C_k\times \mathbb{C}\Bbb P^{n-1}\sim k \mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^{n-1}.$ Since after the Segre embedding deg $\mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^{n-1}=n$, we have deg $X_{k,n}=kn.$ Using generic projection, we can assume that all $X_{k,n}$ are in $\mathbb{C}\Bbb P^{2n+1}.$ By the constructions $X_{k,n}$ is the union of projective $(n-1)$-planes $X_{k,n}=\bigcup_{a\in C_k} \phi(\{a\} \times \Bbb P^{n-1})$. This means that $\overline{C(X_{k,n})}$ is the union of $n$-planes which have at infinity the $(n-1)$-plane $\phi(\{a\}\times\Bbb P^{n-1})$ and go through the point $O=(0,...,0).$ Thus the link $L_{k,n}$ of this cone is a union of $(2n-1)$-spheres. In fact, using Ehresmann's Theorem, it is easy to observe that these links are sphere bundles over $C_k\cong S^2$ with the projection being the composition of the projection $p: \mathbb{C}\Bbb P^{2n+1} \setminus \{0\}\to \mathbb{C}\Bbb P^{2n}$ and the projection $q: C_k\times\mathbb{C}\Bbb P^{n-1}\to C_k.$ By the Steenrod Theorem (see [@steenrod 6.III, p. 300]), topologically there are only two such sphere bundles. On the other hand, it follows from [@KirbyS:1977 Classification Theorem, p. 155] that on a compact manifold of dimension different from four there is only a finite number of differential structures. This means that all manifolds $L_{k,n}$, $k=1,2,...$, can have only a finite number of different differential structures. By the Dirichlet box principle, among all $X_{k,n}$ there is an infinite family $\mathcal{S}$ whose members are diffeomorphic to each other.
By [@kol Corollary 11], all links from the family $\mathcal{S}$ are Nash diffeomorphic. In particular, they are bi-Lipschitz and semi-algebraically equivalent. By Proposition [Proposition 3](#alex){reference-type="ref" reference="alex"}, we see that all cones $C(X), X\in \mathcal{S}$ are bi-Lipschitz and semi-algebraically equivalent. But all members of the family $\{C(X), X\in \mathcal{S}\}$ have different degrees. ◻
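For the reader's convenience, here is a cross-check of the degree count in the above proof (in our notation, carried out before the generic projection, which preserves the degree). Writing $H_1,H_2$ for the pullbacks to $\mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^{n-1}$ of the hyperplane classes of the two factors, the hyperplane class of the Segre embedding restricts on $C_k\times \mathbb{C}\Bbb P^{n-1}\cong \mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^{n-1}$ to $kH_1+H_2$, so for instance for $k=2$ and $n=2$ $$\deg X_{2,2}=\int_{\mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^1}(2H_1+H_2)^2=\int_{\mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^1}4H_1H_2=4=kn,$$ since $H_1^2=H_2^2=0$ on $\mathbb{C}\Bbb P^1\times \mathbb{C}\Bbb P^1$.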
**Corollary 5**. *For every $n\ge 3$ there exist two analytic $n$-dimensional germs $V,V'\subset (\mathbb{C}^{2n},0)$ with isolated singularities, which are bi-Lipschitz and sub-analytically equivalent, but have different multiplicities at $0.$*
# Topological invariance of the degree of projective hypersurfaces
**Theorem 6**. *Let $V,V'\subset \mathbb{C}\Bbb P^{n+1}$ be two projective hypersurfaces. Assume $n>2.$ If $V$ is homeomorphic to $V'$, then $\deg V=\deg V'.$*
*Proof.* Let $V_1,...,V_r$ (resp. $V_1',...,V_s'$) be the irreducible components of $V$ (resp. $V'$). Let $\phi\colon V\to V'$ be a homeomorphism. By [@Gau-Lipman:1983 Lemma A.8], $\phi(V_j)$ is an irreducible component of $V'$ for all $j=1,...,r$. Then $r=s$ and by reordering the indices, if necessary, $\phi(V_j)=V'_j$ for all $j=1,...,r$.
Since $\deg V =\deg V_1 +...+\deg V_r$ and $\deg V' =\deg V'_1+...+\deg V'_r$, we may assume that $V$ and $V'$ are irreducible projective hypersurfaces.
Let us recall that the cohomology ring of $\mathbb{C}\Bbb P^{n+1}$ is isomorphic to $\Bbb Z[x]/(x^{n+2})$, see e.g. [@greenberg] and it is generated by the generator $\alpha$ of $H^2(\mathbb{C}\Bbb P^{n+1},\Bbb Z).$ Let $\iota: V\to \mathbb{C}\Bbb P^{n+1}$ be the inclusion. By the Lefschetz theorem and our assumption we have an isomorphism $\iota^*: H^2(\mathbb{C}\Bbb P^{n+1},\Bbb Z)\to H^2(V,\Bbb Z).$ In particular the element $\alpha_V=\iota^*(\alpha)$ is a generator of $H^2(V,\Bbb Z).$
Since $V$ is an irreducible projective variety, we have $H_{2n}(V,\Bbb Z)=\Bbb Z$ (it is generated by the fundamental class $[V]$). Moreover, by the relative exact sequence, $H^{2n} (V,\Bbb Z) = H^{2n} (V,Sing(V), \Bbb Z).$ By Lefschetz duality we have $$H^{2n} (V,Sing(V), \Bbb Z)=H_0(V \setminus Sing(V), \Bbb Z).$$ If $V$ is irreducible, $H_0(V \setminus Sing(V), \Bbb Z)=\Bbb Z.$ Consequently $H^{2n} (V,\Bbb Z) = \Bbb Z.$
Since we have a canonical epimorphism $H^{2n}(V,\Bbb Z) \to H_{2n}(V,\Bbb Z)^*$ we see that these spaces are isomorphic (see [@greenberg], 23.7). In fact the mapping $H^{2n}(\mathbb{C}\Bbb P^{n+1},\Bbb Z)\to H^{2n}(V, \Bbb Z)$ is dual to the mapping $H_{2n}(V, \Bbb Z)\to H_{2n}(\mathbb{C}\Bbb P^{n+1},\Bbb Z)$ ([@greenberg], 23.11). Since $V$ as a topological cycle is equivalent to $\deg V H,$ where $H$ is a hyperplane (i.e. a generator of $H_{2n}(\mathbb{C}\Bbb P^{n+1},\Bbb Z)$) we see that the mapping $H_{2n}(V, \Bbb Z)\to H_{2n}(\mathbb{C}\Bbb P^{n+1},\Bbb Z)$ is a multiplication by $\deg V.$ Hence also the mapping $H^{2n}(\mathbb{C}\Bbb P^{n+1},\Bbb Z)\to H^{2n}(V, \Bbb Z)$ is a multiplication by $\deg V.$ This means that $\iota^*(\alpha^n)=\alpha_V^n=\deg V [V]^*$ where $[V]^*$ is the (dual) fundamental class.
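As a sanity check of the last identity (ours, not needed for the argument): for a smooth quadric threefold $Q\subset \mathbb{C}\Bbb P^4$, so $n=3$ and $\deg Q=2$, one has $\iota_*[Q]=2[\mathbb{C}\Bbb P^3]$ in $H_{6}(\mathbb{C}\Bbb P^{4},\Bbb Z)$, hence $$\langle \alpha_Q^3,[Q]\rangle=\langle \alpha^3,\iota_*[Q]\rangle=2\langle \alpha^3,[\mathbb{C}\Bbb P^3]\rangle=2,$$ that is, $\alpha_Q^3=2[Q]^*=\deg Q\, [Q]^*.$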
Now let $\alpha_{V'}$ be a generator of $H^2(V',\Bbb Z)$ constructed in an analogous way as $\alpha_V.$ Hence by symmetry we have $\alpha_{V'}^n=\deg V' [V']^*$. Let $\phi: V\to V'$ be a homeomorphism. Hence $\phi^*(\alpha_{V'})=\pm \alpha_V.$ Thus $\pm \deg V' [V]^*=\phi^*(\deg V' [V']^*)=\phi^*(\alpha_{V'}^n)=\pm \alpha_V^n=\pm \deg V [V]^*.$ Hence $\deg V=\deg V'.$ ◻
**Corollary 7**. *Let $f,g\colon(\mathbb{C}^n,0)\to (\mathbb{C},0)$ be two complex analytic functions with $n>4$. Assume that there is a bi-Lipschitz homeomorphism $\varphi\colon(\mathbb{C}^n,V(f),0)\to (\mathbb{C}^n,V(g),0)$. If there is a homeomorphism $h\colon E_{0}(V(f))\to E_{0}(V(g))$ then $m(V(f),0)=m(V(g),0)$.*
*Proof.* By [@FernandesS:2016 Theorem 2.1], we may assume that $f$ and $g$ are irreducible homogeneous polynomials. In this case, $m(V(f),0)=\deg E_{0}(V(f))$ and $m(V(g),0)=\deg E_{0}(V(g))$. Since there is a homeomorphism $h\colon E_{0}(V(f))\to E_{0}(V(g))$, it follows from Theorem [Theorem 6](#thm:gen_kollar){reference-type="ref" reference="thm:gen_kollar"} that $m(V(f),0)=m(V(g),0)$. ◻
Thus, we obtain the following:
**Corollary 8**. *If Question B$_{Lip}$ has a positive answer then Question A$_{Lip}$ has a positive answer as well.*
# Real cones
In this section, we consider real algebraic varieties. We prove the following:
**Theorem 9**. *Let $\iota : \Bbb P^1\times \Bbb P^2\to \Bbb P^n$ be an algebraic embedding. Let $X=\iota( \Bbb P^1\times \Bbb P^2).$ If deg $X$ is odd then the link of a cone $C(X)$ is diffeomorphic to the twisted product $\tilde{S^1\times S^2}$. If deg $X$ is even, then every connected link of $C(X)$ is either diffeomorphic to $\Bbb P^1\times\Bbb P^2$ or to $\Bbb P^1\times S^2$ and both cases are possible.*
*Proof.* Denote by $A_k, B_l$ the Veronese embeddings of $\Bbb P^1$ and $\Bbb P^2$ of degrees $k$ and $l$, respectively. Now let $\phi: A_k\times B_l \to \Bbb P^{N(k,l)}$ be a suitable Segre embedding and denote by $W_{k,l}$ the image $\phi(A_k \times B_l)$. As in the previous section we see that deg $W_{k,l} = 3kl.$ Let $X_{k,l}=C(W_{k,l})$ be the cone with the base $W_{k,l}.$ Additionally denote by $L_{k,l}$ the link of this cone.
By the constructions every base $W_{k,1}$ is the union of planes $$W_{k,1}=\bigcup_{a\in A_k } \phi(\{a\} \times \Bbb P^2).$$ This means that $\overline{X_{k,1}}$ is the union of $3$-planes which have at infinity the plane $\phi(\{a\} \times \Bbb P^2)$ and go through the point $O=(0,...,0).$ Similarly $\overline{X_{1,l}}$ is the union of planes which have at infinity the line $\phi_1(\Bbb P^1\times \{a\})$ and go through the point $O=(0,...,0).$
Thus the link of $X_{k,1}$ is a union of spheres and the link of $X_{1,l}$ is a union of circles. In fact it is easy to observe that the first link is a sphere bundle over $\Bbb P^1$ whose projection is the composition of the projection $p: \mathbb{R}^8\setminus \{0\}\to \Bbb P^{7}$ and the projection $q: \Bbb P^1\times \Bbb P^2\to \Bbb P^1$. Similarly, the link of $X_{1,l}$ is a circle bundle over $\Bbb P^2$ whose projection is the composition of the projection $p: \mathbb{R}^8\setminus \{0\}\to \Bbb P^{7}$ and the projection $q: \Bbb P^1\times \Bbb P^2\to \Bbb P^2$. In particular both links are connected. Note that the link $L_{1,1}$ has a structure of a circle bundle over $\Bbb P^2$ and a structure of a sphere bundle over $\Bbb P^1.$ We have:
**Lemma 10**. *If the link over a cone with the base $\Bbb P^1\times \Bbb P^2$ is connected, then it is diffeomorphic either to $\Bbb P^1\times \Bbb P^2$, $\Bbb P^1\times S^2$ or to the twisted product $\tilde{S^1\times S^2}=S^1\times S^2/G$, where $G$ is the group generated by the involution $g: S^1\times S^2\ni (x,p)\mapsto (-x,-p)\in S^1\times S^2.$*
*Proof.* The link over a cone with base $\Bbb P^1\times \Bbb P^2$ is a Hopf fibration over the base of a cone, hence it is diffeomorphic to a double covering of $\Bbb P^1\times \Bbb P^2.$ We show that such a space $M$ is diffeomorphic to $\Bbb P^1\times \Bbb P^2$, to $\Bbb P^1\times S^2$, or to the twisted product $\tilde{S^1\times S^2}.$ Indeed, let $h : M \to \Bbb P^1\times \Bbb P^2$ be a double covering. Hence $h_*(\pi_1(M))$ has index two in $\pi_1(\Bbb P^1\times \Bbb P^2)=\Bbb Z\times \Bbb Z/2.$ Hence either $h_*(\pi_1(M))=\Bbb Z\times \{0\}$, $h_*(\pi_1(M))=(2)\times \Bbb Z/2$ or $h_*(\pi_1(M))$ is the group generated by $(1,1).$ The first case corresponds to $\Bbb P^1\times S^2.$ The second case corresponds to $\Bbb P^1\times \Bbb P^2.$ The third case corresponds to the twisted product $\tilde{S^1\times S^2}$. ◻
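For the count of double coverings used above (a short verification in our notation): the index-two subgroups of $\Bbb Z\times \Bbb Z/2$ are exactly the kernels of the nonzero elements of $$\operatorname{Hom}(\Bbb Z\times \Bbb Z/2,\ \Bbb Z/2)\cong \Bbb Z/2\times \Bbb Z/2,$$ and the three nonzero homomorphisms $(a,b)\mapsto b$, $(a,b)\mapsto a \bmod 2$ and $(a,b)\mapsto a+b \bmod 2$ have kernels $\Bbb Z\times \{0\}$, $(2)\times \Bbb Z/2$ and the subgroup generated by $(1,1)$, respectively, which matches the three cases in the proof.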
Since the link $L_{k,1}$ is a sphere bundle over $\Bbb P^1,$ using the long homotopy sequence of this fibration we see that the link $L_{k,1}$ has the first homotopy group equal to $\Bbb Z.$ We have an exact sequence $$0=\pi_1(S^2)\to \pi_1(L_{k,1}) \to \pi_1(\Bbb P^1)\to 0,$$ hence $\pi_1(L_{k,1})=\pi_1(\Bbb P^1)=\Bbb Z.$
Thus the link $L_{1,1}$ has to be diffeomorphic either to the twisted product $\tilde{S^1\times S^2}$ or to $\Bbb P^1\times S^2.$ Using the theory of Seifert manifolds we exclude the second possibility. Indeed, the following lemma is true (see [@seifert], Lemma 2.3.10):
**Lemma 11**. *If $M$ is an orientable Seifert fibered space with orbit surface $\Bbb P^2$ and less than two exceptional fibers, then $M$ is homeomorphic either to a Lens space $L(4n, 2n-1)$, or to a Seifert space with orbit space $S^2$ and three exceptional fibers with two of index two, or to the connected sum of two copies of $\Bbb P^3.$ All fundamental groups are finite with the exception of $\pi_1(\Bbb P^3\#\Bbb P^3)=\Bbb Z/2*\Bbb Z/2$.*
In particular, we see that the space $\Bbb P^1\times S^2$, with first homotopy group equal to $\Bbb Z$, cannot be the total space of a circle bundle over $\Bbb P^2.$ Thus $L_{1,1}$ is diffeomorphic to $\tilde{S^1\times S^2}$.
Now consider the link $L_{1,2}.$ Since it is a circle bundle over $\Bbb P^2$ it can be either diffeomorphic to $\Bbb P^1\times \Bbb P^2$ or to the twisted product $\tilde{S^1\times S^2}.$ If the second possibility holds then we can lift an analytic mapping $W_{1,1}\to W_{1,2}$ to obtain an analytic mapping $L_{1,1}\to L_{1,2}$ which preserves the Hopf fibration. This means, in the terminology of [@fjs], that there is an $a$-invariant subanalytic bi-Lipschitz mapping from $X_{1,1}$ to $X_{1,2}.$ But this mapping has an $a$-invariant graph and by [@fjs Theorem 3.2] we have $\deg X_{1,1}=\deg X_{1,2} \mod 2$, a contradiction. Hence $L_{1,2}=\Bbb P^1\times \Bbb P^2.$
Now consider the link $L_{2,1}.$ Its fundamental group is $\Bbb Z$, hence it is diffeomorphic either to the twisted product $\tilde{S^1\times S^2}$ or to $\Bbb P^1\times S^2.$ By the same argument as above, the first possibility is excluded. Hence $L_{2,1}=\Bbb P^1\times S^2.$
To finish our proof we need the following:
**Lemma 12**. *Let $C(X)\subset \mathbb{R}^n$ be an algebraic cone of dimension $d>1$ with connected base $X.$ If $\deg C(X)$ is odd, then the link of $C(X)$ is connected.*
*Proof.* We can assume $d<n.$ Assume that the link $L=A\cup B$ has two connected components. Then $A,B$ are Euler cycles, in particular, they are homological cycles $\mod 2.$ Let $\phi: S^{n-1} \ni x\mapsto [x]\in \Bbb P^{n-1}.$ The mapping $\phi$ is continuous and restricted to $A$ is a homeomorphism onto $X.$ Hence $\phi_*([A])=[X].$ Since the cycle $A$ is zero in $H_{d-1}(S^{n-1},\Bbb Z/(2))$ we have $[X]=\phi_*([A])=0$ in $H_{d-1} (\Bbb P^{n-1},\Bbb Z/(2)).$ But $\deg C(X)=\deg X \mod 2$, so $\deg X$ is odd and $[X]\neq 0$ in $H_{d-1} (\Bbb P^{n-1},\Bbb Z/(2))$, a contradiction. ◻
Let $X$ be as in the Theorem and assume deg $C(X)$ is odd. If the link $L$ of $C(X)$ is not equal to the twisted product $\tilde{S^1\times S^2}$, then either $L=L_{1,2}$ or $L=L_{2,1}$. Since the degrees of the cones $X_{1,2}$ and $X_{2,1}$ are even, arguing as above we get a contradiction. In the same way we can prove that if deg $C(X)$ is even, then the link $L$ cannot be diffeomorphic to $L_{1,1}=\tilde{S^1\times S^2}.$ ◻
**Theorem 13**. *There exist three semi-algebraically and bi-Lipschitz equivalent algebraic cones $C(X), C(Y), C(Z)\subset \mathbb{R}^8$ with non-homeomorphic smooth algebraic bases. In fact, $X\cong \Bbb P^1\times \Bbb P^2$, $Y\cong \Bbb P^1\times S^2$ and $Z\cong\tilde{S^1\times S^2}.$*
*Proof.* Consider the standard embedding of a sphere $\iota:S^2 \to \Bbb P^3$ and let $Q=\iota(S^2).$ Let $Y:=\phi_1(\Bbb P^1\times Q)$, where $\phi_1: \Bbb P^1\times \Bbb P^3\to \Bbb P^7$ is a Segre embedding. The variety $Y$ has degree $6.$ As in Lemma [Lemma 10](#lemma){reference-type="ref" reference="lemma"}, we can prove that the link of $C(Y)$ is $\Bbb P^1\times S^2.$ Hence $C(Y)$ has the same link as $X_{2,1}.$ Put $C(X):=X_{2,1}.$
Now consider the embedding $\iota: \tilde{S^1\times S^2}\ni [x,y]\mapsto [x:y]\in \Bbb P^4.$ Let $Z=\iota(\tilde{S^1\times S^2}).$ Hence $Z$ is a hyperquadric in $\Bbb P^4$ given by the equation $x_1^2+x_2^2=y_1^2+y_2^2+y_3^2.$ Since $C(Z)$ is a hypersurface it has an orientable link. Hence this link has to be connected and as in Lemma [Lemma 10](#lemma){reference-type="ref" reference="lemma"} we see that it is diffeomorphic to $S^1\times S^2$. Thus $C(Z)$ has the same link as $C(X)$ and $C(Y).$ By [@kol Corollary 11], we have that all these links are Nash diffeomorphic. In particular, they are bi-Lipschitz and semi-algebraically equivalent. ◻
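To make the hyperquadric step in the proof above explicit (a short verification in our notation, with $x\in S^1\subset \mathbb{R}^2$ and $y\in S^2\subset \mathbb{R}^3$ unit vectors): any homogeneous representative of $\iota([x,y])=[x:y]$ has the form $(\lambda x,\lambda y)$ with $\lambda\neq 0$, and $$(\lambda x_1)^2+(\lambda x_2)^2=\lambda^2\|x\|^2=\lambda^2=\lambda^2\|y\|^2=(\lambda y_1)^2+(\lambda y_2)^2+(\lambda y_3)^2,$$ so the image of $\iota$ lies on the hyperquadric. Conversely, every point of the hyperquadric has a representative with $\|x\|=\|y\|=1$, and $[x:y]=[x':y']$ with unit representatives forces $(x',y')=\pm(x,y)$, which is exactly the identification defining $\tilde{S^1\times S^2}$; hence $\iota$ is a homeomorphism onto $Z$.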
**Corollary 14**. *There exist four-dimensional algebraic cones $C(X),C(Y)\subset \mathbb{R}^8$ and a semi-algebraic bi-Lipschitz homeomorphism $\phi: C(X)\to C(Y)$ which transforms every ray $Ox$ into a ray $O\phi(x)$ isometrically, but there is no homeomorphism $C(X)\to C(Y)$ which transforms every generatrix onto a generatrix.*
*Proof.* Let $C(X), C(Y)$ be as in Theorem [Theorem 13](#main_thm){reference-type="ref" reference="main_thm"}. If such a homeomorphism $C(X)\to C(Y)$ exists, then it induces a homeomorphism $X\to Y$, a contradiction. ◻
**Theorem 15**. *1) The manifolds $S^1\times S^2$ and $\tilde{S^1\times S^2}$ cannot be diffeomorphic to projective varieties of odd degree.*
*2) Let $\iota: \Bbb P^n\to \Bbb P^N$ be an algebraic embedding. If deg $\iota(\Bbb P^n)$ is odd, then the link of $C(\iota(\Bbb P^n))$ is $S^n$; if deg $\iota(\Bbb P^n)$ is even, then the link of $C(\iota(\Bbb P^n))$ is disconnected.*
*3) A simply connected real projective variety of positive dimension cannot have odd degree.*
*Proof.* 1) Note that $\tilde{S^1\times S^2}$ has an embedding into $\Bbb P^4$ as a hyperquadric $Z$ and the link $L$ of the cone $C(Z)$ is connected (see the proof of Theorem [Theorem 13](#main_thm){reference-type="ref" reference="main_thm"}). Now assume that $\tilde{S^1\times S^2}$ has an algebraic embedding $Z'$ into some $\Bbb P^N$ with odd degree. Then its link $L'$ is connected by Lemma [Lemma 12](#lemma2){reference-type="ref" reference="lemma2"}, consequently it is $S^1\times S^2.$ Thus $L=L'.$ By the assumption the manifolds $Z$ and $Z'$ are diffeomorphic. Thus by [@kol Corollary 11] these manifolds are Nash diffeomorphic, in particular they are analytically equivalent. Since the fundamental group of the space $\tilde{S^1\times S^2}$ has only one subgroup of index two, the analytic mapping $Z\to Z'$ can be lifted to an analytic mapping $L\to L',$ which preserves the Hopf fibration. This means, in the terminology of [@fjs], that there is an $a$-invariant subanalytic bi-Lipschitz mapping from $C(Z)$ to $C(Z').$ But this mapping has an $a$-invariant graph and by [@fjs Theorem 3.2] we have $\deg C(Z)=\deg C(Z') \mod 2$, a contradiction.
In a similar way, we can prove that $S^1\times S^2$ does not admit an embedding into projective space with odd degree.
2\) Note that $\Bbb P^n$ has a trivial embedding into projective space with odd degree. Hence we can repeat the proof above.
3\) This follows directly from Lemma [Lemma 12](#lemma2){reference-type="ref" reference="lemma2"}. ◻
99
Barthel, G. and Dimca, A. *On Complex Projective Hypersurfaces which are Homology-Pn's*. In: Singularities, London Mathematical Society Lecture Note Series, vol. 201, Cambridge University Press, 1994.
Birbrair, L.; Fernandes, A.; Lê D. T. and Sampaio, J. E. *Lipschitz regular complex algebraic sets are smooth*. Proc. Amer. Math. Soc., vol. 144 (2016), no. 3, 983-987.
Birbrair, L.; Fernandes, A.; Sampaio, J. E. and Verbitsky, M. *Multiplicity of singularities is not a bi-Lipschitz invariant*. Math. Ann., vol. 377 (2020), 115-121.
Birbrair, L., Fernandes, A., Jelonek, Z., *On the extension of bi-Lipschitz mappings*, Selecta Mathematica, 27(2), (2021).
Fernández de Bobadilla, J.. *Answers to some equisingularity questions.* Invent. math., vol. 161 (2005), 657--675.
Fernández de Bobadilla, J.. *Topological Equisingularity: Old Problems from a New Perspective (with an Appendix by G.-M. Greuel and G. Pfister on SINGULAR)*. In: Cisneros-Molina, J.L., Dũng Tráng, L., Seade, J. (eds) Handbook of Geometry and Topology of Singularities III. Springer, Cham. (2022).
Bobadilla, J.F. de; Fernandes, A. and Sampaio, J. E. *Multiplicity and degree as bi-lipschitz invariants for complex sets*. Journal of Topology, vol. 11 (2018), 958-966.
Brin, M., Seifert Fibered Spaces, Notes for a course given in the Spring of 1993, arxiv:0711.1346v2, 2007.
Comte, G., *Multiplicity of complex analytic sets and bi-Lipschitz maps*. Real analytic and algebraic singularities (Nagoya/Sapporo/Hachioji, 1996) Pitman Res. Notes Math. Ser., vol. 381 (1998), 182-188.
Comte, G.; Milman, P. and Trotman, D. *On Zariski's multiplicity problem*. Proc. Amer. Math. Soc., vol 130 (2002), no. 7, 2045---2048.
Fernandes, A., *Topological equivalence of complex curves and bi-Lipschitz maps*. Michigan Math. J., vol. 51 (2003), 593-606.
Fernandes, A., Jelonek, Z., Sampaio, J., E. *On the Fukui---Kurdyka---Paunescu conjecture*, Compositio Mathematica , vol. 158 , Issue 6 , (2022) , pp. 1298 - 1313.
Fernandes, A. and Sampaio, J. E. *Multiplicity of analytic hypersurface singularities under bi-Lipschitz homeomorphisms*. Journal of Topology, vol. 9 (2016), 927-933.
Gau, Y.-N. and Lipman, J. *Differential invariance of multiplicity on analytic varieties*. Inventiones mathematicae, vol. 73 (1983), no. 2, 165--188.
Greenberg, M. J. Lectures on Algebraic Topology, W. A. Benjamin Advanced Bk Program, 1973.
Fernández de Bobadilla, J. Pełka, T. *Symplectic monodromy at radius zero and equimultiplicity of $\mu$-constant families*. Preprint (2022), arXiv:2204.07007 \[math.AG\].
Jelonek, Z., *On algebraic bi-Lipschitz homeomorphisms*, Proc. A.M.S, to appear.
Kirby, R. C. and Siebenmann, L. *Foundational essays on topological manifolds, smoothings and triangulations*. Annals of Mathematics Studies vol. 88, Princeton University Press, 1977.
Kollar, J., *Nash work in algebraic geometry*, Bull. A.M. S., vol. 54, (2017), 307-324.
Kollar, J., *The Topology of Real and Complex Algebraic Varieties*, Advanced Studies in Pure Mathematics 31, Taniguchi Conference on Mathematics Nara '98, (2001), 127-145.
Neumann, W. and Pichon, A. *Lipschitz geometry of complex curves*. Journal of Singularities, vol. 10 (2014), 225-234.
Pham, F. and Teissier, B. *Fractions lipschitziennes d'une algèbre analytique complexe et saturation de Zariski*. Prépublications du Centre de Mathématiques de l'Ecole Polytechnique (Paris), no. M17.0669, June (1969). Available at <https://hal.archives-ouvertes.fr/hal-00384928/>
Sampaio, J. E. *Bi-Lipschitz homeomorphic subanalytic sets have bi-Lipschitz homeomorphic tangent cones*. Selecta Math. (N.S.), vol. 22 (2016), no. 2, 553-559.
Sampaio, J. E. *On Zariski's multiplicity problem at infinity*. Proc. Amer. Math. Soc., vol. 147 (2019), 1367-1376.
Sampaio, J. E. *Some homeomorphisms that preserve tangent cones and multiplicity*. Contemporary Mathematics, vol. 742 (2020), 189--200.
Sampaio, J. E. *Multiplicity, regularity and blow-spherical equivalence of complex analytic set*. The Asian Journal of Mathematics, vol. 24 (2020), no. 5, 803--820.
Sampaio, J. E. *Multiplicity, regularity and Lipschitz geometry of real analytic hypersurfaces.* Israel Journal of Mathematics, vol. 246 (2021), no. 1, 371--394.
Sampaio, J. E. *Differential invariance of the multiplicity of real and complex analytic sets.* Publicacions Matemàtiques, vol. 66 (2022), 355--368.
Steenrod, N. E. *The classification of sphere bundles.* Annals of Mathematics, vol. 45 (1944), 294--311.
Zariski, O. *Some open questions in the theory of singularities*. Bull. Amer. Math. Soc., vol. 77 (1971), no. 4, 481-491.
[^1]: The first named author was partially supported by CNPq-Brazil grant 304221/2017-1. The second named author is partially supported by the grant of Narodowe Centrum Nauki number 2019/33/B/ST1/00755. The third author was partially supported by CNPq-Brazil grant 310438/2021-7 and by the Serrapilheira Institute (grant number Serra -- R-2110-39576).
---
abstract: |
A new notion of face relative interior for convex sets in topological real vector spaces is introduced in this work. Face relative interior is grounded in the facial structure, and may capture the geometry of convex sets in topological vector spaces better than other generalisations of relative interior.
We show that the face relative interior partitions convex sets into face relative interiors of their closure-equivalent faces (different to the partition generated by intrinsic cores), establish the conditions for nonemptiness of this new notion, compare the face relative interior with other concepts of convex interior and prove basic calculus rules.
author:
- "Reinier Díaz Millán[^1] and Vera Roshchina[^2]"
bibliography:
- references.bib
title: Face relative interior of convex sets in topological vector spaces
---
# Introduction
There are several notions generalising the relative interior of a convex set in the Euclidean setting to real vector spaces. Perhaps the most well-known are the purely algebraic notion of intrinsic core defined for convex sets in arbitrary real vector spaces and the quasi-relative interior defined for convex sets in topological vector spaces. The intrinsic core has a deep and beautiful connection with the facial structure of convex sets; however, it may be empty even for 'very large' sets (see our recent review [@mill-rosh] and references therein). In contrast to this, the quasi-relative interior doesn't seem to have a neat interpretation in terms of faces, but its nonemptiness in practically useful settings (e.g. separable Banach spaces) appears to be more important for applications.
We argue that an alternative to the quasi-relative interior can be defined intrinsically in terms of faces while being nonempty under similar conditions. We call this new notion face relative interior. While it remains to be seen how useful this new generalisation may be for applications, some of the properties of face relative interior (that other notions do not satisfy) suggest that it may be a more natural choice than the quasi-relative interior in some settings, especially when one is concerned with the facial structure of convex sets in general topological vector spaces.
The *face relative interior* of a convex set consists of all points for which the topological closure of their minimal face contains the entirety of this convex set. The face relative interior is sandwiched between the intrinsic core and the quasi-relative interior, and is different to both. We devote Section [2](#sec:facereldef){reference-type="ref" reference="sec:facereldef"} to the proof of this fact and provide several examples that demonstrate the differences between the face-relative interior and other notions.
Convex sets in separable Banach spaces have nonempty face-relative interiors (Corollary [Corollary 17](#cor:friBanach){reference-type="ref" reference="cor:friBanach"}). More generally, nonemptiness is guaranteed under general assumptions, similar to the well-known conditions for the nonemptiness of the quasi-relative interior. We discuss this in Section [3](#sec:nonemptiness){reference-type="ref" reference="sec:nonemptiness"}, where we also systematise the results on a key notion of CS-closed sets in topological vector spaces that are otherwise scattered in the literature.
Section [4](#sec:calculus){reference-type="ref" reference="sec:calculus"} is dedicated to exploring the calculus of face relative interiors, which resembles the calculus of quasi-relative interiors. Interestingly, we didn't need to invoke duality in our proofs.
Finally, in Section [5](#sec:equiv){reference-type="ref" reference="sec:equiv"} we argue that the face relative interior aligns more naturally with the facial structure of the convex set than the quasi-relative interior does. While any convex set is partitioned as the disjoint union of the intrinsic cores of its faces (see [@mill-rosh]), this is not true for the quasi-relative interior. However, identifying faces that have the same closure, we obtain a similar disjoint partition via the face relative interiors, based on the observation that faces that aren't closure-equivalent have non-intersecting face relative interiors. We show that this structural property is not available for the quasi-relative interiors: it may happen that faces with different closures have overlapping quasi-relative interiors (see Example [Example 35](#nonpartition){reference-type="ref" reference="nonpartition"}). Using explicit examples, we also show that the partition via the face relative interiors of faces is different to the partition via the intrinsic cores.
We finish the paper with conclusions, where we also review some open questions and topics for future investigation.
# Face relative interior: definition and examples {#sec:facereldef}
The definition of face relative interior is based on the facial structure of the convex set. For a self-contained exposition, we briefly review the relevant ideas first. We begin with a discussion on the facial structure of convex sets in real vector spaces, recap several useful results and definitions related to minimal faces, and then move on to defining the face relative interior, placing it in the context of other notions, and working out the conditions for its nonemptiness.
## Notions of relative interior
Faces are the main structural elements of convex sets: the faces of a convex set form a lattice with respect to set inclusion, and the facial structure is key for studying the geometry of convex sets.
For now, assume that $X$ is an arbitrary real vector space. Let $C\subseteq X$ be a convex set. A convex subset $F\subseteq C$ is called a *face* of $C$ if for every $x\in F$ and every $y,z \in C$ such that $x\in (y,z)$, we have $y,z\in F$.
The empty set is a face of $C$, and the set $C$ itself is its own face. Nonempty faces that don't coincide with $C$ are called proper. It is customary to write $F\unlhd C$ for a face $F$ of $C$.
Given a subset $S$ of a convex set $C$, the smallest face of $C$ (with respect to set inclusion) that contains $S$ is called the minimal face of $C$ containing $S$. The minimal face is well defined since any intersection of faces of a convex set is again a face. We denote the minimal face of a convex set $C$ containing $S\subseteq C$ by $F_{\min}(C,S)$. When $S$ is a singleton ($S=\{x\}$ for some $x\in X$), we abuse the notation and write $F_{\min}(x,C)$ meaning $F_{\min}(C,\{x\})$.
It was shown in [@mill-rosh] that a minimal face of a convex set $C$ containing $x\in X$ has an equivalent representation $$\label{eq:frcharlin}
F_{\min} (x,C) = C\cap (\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)+x),$$ where by $\mathop{\mathrm{lin}}K$ we denote the lineality space of a convex cone $K$: the largest linear subspace contained in $K$, and $\mathop{\mathrm{cone}}$ denotes the conic hull (for a convex set $C$ we have $\mathop{\mathrm{cone}}C = \mathbb{R}_{+} C = \{\alpha x\,|\, x\in C, \alpha \geq 0\}$). We abuse the notation and write $C+x$ for $C+\{x\}$, the Minkowski sum of the set $C$ and the singleton $\{x\}$.
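For a concrete finite-dimensional illustration of the characterisation [\[eq:frcharlin\]](#eq:frcharlin){reference-type="eqref" reference="eq:frcharlin"} (included only for orientation, and not used later), take $C=[0,1]^2\subseteq \mathbb{R}^2$ and let $x=(0,\tfrac{1}{2})$ be the midpoint of the left edge of the square. Then $$\mathop{\mathrm{cone}}(C-x)=\{u\in \mathbb{R}^2\,|\, u_1\geq 0\}, \qquad \mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)=\{u\in \mathbb{R}^2\,|\, u_1= 0\},$$ and [\[eq:frcharlin\]](#eq:frcharlin){reference-type="eqref" reference="eq:frcharlin"} gives $F_{\min}(x,C)=C\cap (\{0\}\times \mathbb{R}+x)=\{0\}\times[0,1]$, the left edge of the square. Similarly, for a vertex of the square the lineality space is $\{0\}$ and the minimal face is the vertex itself, while for an interior point $\mathop{\mathrm{cone}}(C-x)=\mathbb{R}^2$ and $F_{\min}(x,C)=C$.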
The intrinsic core (also known as pseudo-relative interior and the set of inner points) was studied extensively in the 1950s (see our recent review [@mill-rosh] for more information). The intrinsic core $\mathop{\mathrm{icr}}C$ of a convex set $C$ is the subset of $C$ such that $x\in \mathop{\mathrm{icr}}C$ if and only if for every $y\in C$ there exists $z\in C$ with $x\in (y,z)$.[\[page:deficr\]]{#page:deficr label="page:deficr"} One can also define the intrinsic core via minimal faces, $$\label{eq:icrminface}
\mathop{\mathrm{icr}}C=\{x\in C\,|\, F_{\min}(x,C)=C\},$$ and using the characterisation [\[eq:frcharlin\]](#eq:frcharlin){reference-type="eqref" reference="eq:frcharlin"}, $$\label{eq:icrcone}
\mathop{\mathrm{icr}}C=\{x\in C\,|\, \mathop{\mathrm{cone}}(C-x) \text{ is a linear subspace} \}.$$
The quasi-relative interior was introduced by Borwein and Lewis in 1992 [@BorweinLewisPartiallyFinite], and is a fairly well-studied notion, with many theoretical and practical applications. For instance, it was used in [@Dontchev] to formulate a Slater-like condition for the cone of nonnegative functions in $L^2$; in [@WeakEfficiency] the notion of quasi-relative interior is used to obtain necessary and sufficient conditions for the existence of saddle points of a Lagrangian function in a class of vector optimisation problems; in [@SetValuedSystems] the relation between linear separation and saddle points of Lagrangian functions in the context of variational inequalities is studied via the quasi-relative interior. Some other applications are discussed in [@InfDim; @RemarksInf; @Lagrange; @Regularity; @Subconvex; @EfficientCoderivatives; @MR3297972; @MR3735852]. A detailed overview of Borwein and Lewis' work with examples and additional results is given in [@Lindstrom]. Zalinescu [@ZalinescuThreePb; @ZalinescuOnTheUse] studied duality and separation in the context of quasi-relative interior and resolved a number of open questions related to the quasi-relative interior.
Following [@BorweinGoebel Definition 2.6], we define the *quasi-relative interior* $\mathop{\mathrm{qri}}C$ of a convex set $C\subseteq X$, where $X$ is a topological vector space, as $$\mathop{\mathrm{qri}}C=\{x\in C: \overline{\mathop{\mathrm{cone}}}(C-x) \text{ is a linear subspace} \}.$$
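As a point of reference, here is the standard example (included only to fix intuition) of a set with empty relative interior but large quasi-relative interior: the nonnegative cone $l_2^+=\{x\in l_2\,|\, x_i\geq 0 \;\forall\, i\in \mathbb{N}\}$. If $x\in l_2^+$ has $x_i>0$ for every $i$, then $\mathop{\mathrm{cone}}(l_2^+-x)$ contains all finitely supported sequences, so $$\overline{\mathop{\mathrm{cone}}}(l_2^+-x)=l_2,$$ which is a linear subspace; if instead $x_i=0$ for some $i$, then every element of $\overline{\mathop{\mathrm{cone}}}(l_2^+-x)$ has nonnegative $i$-th coordinate, and the closed cone is not a subspace. Hence $\mathop{\mathrm{qri}}l_2^+=\{x\in l_2^+\,|\, x_i>0\;\forall\, i\in \mathbb{N}\}$, even though the interior (and hence the relative interior) of $l_2^+$ in $l_2$ is empty.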
Some of the key advantages of the quasi-relative interior over the intrinsic core are that it is guaranteed to be nonempty under mild conditions (we discuss these in detail in Section [3](#sec:nonemptiness){reference-type="ref" reference="sec:nonemptiness"}), and that it has favourable properties when working with dual objects.
Finally, we mention another generalisation of relative interior, which is commonly referred to as the actual 'relative interior', denoted by $\mathop{\mathrm{ri}}C$. This is the interior of $C$ relative to $\overline{\mathop{\mathrm{aff}}C}$, the closed affine hull of $C$.
We are now ready to define the new notion of face relative interior.
**Definition 1** (Face relative interior). Let $C\subseteq X$ be a convex subset of a topological vector space $X$. We define the *face relative interior* of $C$ as $$\mathop{\mathrm{fri}}C:=\{x\in C\,|\, C\subseteq \overline{F_{\min}(x,C)}\}.$$
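As a quick finite-dimensional sanity check of this definition, consider again the unit square $C=[0,1]^2\subseteq \mathbb{R}^2$: every proper face of $C$ (a vertex or an edge) is closed and strictly smaller than $C$, so the inclusion $C\subseteq \overline{F_{\min}(x,C)}$ can only hold when $F_{\min}(x,C)=C$, and therefore $$\mathop{\mathrm{fri}}[0,1]^2=(0,1)^2,$$ in agreement with the fact that in finite dimensions all the notions discussed here reduce to the usual relative interior (cf. Remark 4 below).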
Our definition is analogous to the definition of the quasi-relative interior in that it also incorporates a topological closure. Curiously, all three key topological generalisations of the relative interior can be obtained by taking closures of appropriate objects, using three different but equivalent characterisations of the intrinsic core. The characterisations [\[eq:icrminface\]](#eq:icrminface){reference-type="eqref" reference="eq:icrminface"} and [\[eq:icrcone\]](#eq:icrcone){reference-type="eqref" reference="eq:icrcone"} correspond to face relative and quasi-relative interiors. Since the intrinsic core of a convex set $C$ can be seen as the core of $C$ with respect to its affine hull, relative interior can be seen as the 'topological closure' of this characterisation.
We will focus on the properties and calculus of the face relative interior in detail in Section [4](#sec:calculus){reference-type="ref" reference="sec:calculus"}. For now we note that the face relative interior is a convex set.
**Proposition 2**. *Let $C$ be a convex subset of a topological vector space $X$. Then $\mathop{\mathrm{fri}}C$ is a convex subset of $C$.*
*Proof.* Suppose that $x,y\in\mathop{\mathrm{fri}}C$ and $z\in (x,y)$. Since $C$ is convex, $z\in C$. Since $z\in (x,y)$, we must have $x,y\in F_{\min}(z,C)$ by the definition of a face; in particular, $F_{\min}(z,C)$ is a face of $C$ containing $x$, and hence $F_{\min}(x,C)\subseteq F_{\min}(z,C)$. Since $x\in \mathop{\mathrm{fri}}C$, it follows that $$C \subseteq \overline{F_{\min}(x,C)} \subseteq \overline{F_{\min}(z,C)},$$ proving that $z \in \mathop{\mathrm{fri}}C$. Hence $\mathop{\mathrm{fri}}C$ is a convex subset of $C$. ◻
## The sandwich theorem
The face relative interior is sandwiched between the intrinsic core and quasi-relative interior, as we show next in Theorem [Theorem 3](#thm:interiorsandwich){reference-type="ref" reference="thm:interiorsandwich"}. We prove this theorem first and then proceed with examples that show face relative interior to be different from the other three notions that we discussed (intrinsic core, quasi-relative and relative interiors).
**Theorem 3** (Sandwich). *Let $C$ be a convex subset of a topological vector space $X$. Then $$\label{eq:mainsandwich}
\mathop{\mathrm{ri}}C \subseteq \mathop{\mathrm{icr}}C \subseteq \mathop{\mathrm{fri}}C \subseteq \mathop{\mathrm{qri}}C.$$*
*Proof.* The first inclusion $\mathop{\mathrm{ri}}C\subseteq \mathop{\mathrm{icr}}C$ is well-known (e.g. see [@BorweinGoebel Theorem 2.12]).
Suppose that $C\subseteq X$ is convex, and let $x\in \mathop{\mathrm{icr}}C$. By [\[eq:icrminface\]](#eq:icrminface){reference-type="eqref" reference="eq:icrminface"} we have $F_{\min }(x,C) = C$, and hence $$C \subseteq \overline{F_{\min}(x,C)}.$$ By the definition of face relative interior, we conclude that $x\in \mathop{\mathrm{fri}}C$. We have therefore shown that $\mathop{\mathrm{icr}}C \subseteq \mathop{\mathrm{fri}}C$.
It remains to demonstrate that $\mathop{\mathrm{fri}}C \subseteq \mathop{\mathrm{qri}}C$. Let $x\in \mathop{\mathrm{fri}}C$. Then $$C \subseteq \overline{F_{\min}(x,C)}.$$
From [\[eq:frcharlin\]](#eq:frcharlin){reference-type="eqref" reference="eq:frcharlin"} we have $$C \subseteq \overline{F_{\min}(x,C)} = \overline {C\cap (\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)+x)}\subseteq \overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)+x}.$$ Since the topology of a topological vector space is translation invariant (see [@hitchhiker Section 5.1]), for any $A\subseteq X$ and $x\in X$ we have $\overline{A+x}\subseteq \overline{A}+x$.
Then we have that $\overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)+x}\subseteq \overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)}+x$, so $$\label{eq:technical045}
C - x\subseteq \overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)},$$
and since $\overline{\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)}$ is a linear subspace of $X$, we have $\mathop{\mathrm{cone}}\overline{\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)} = \overline{\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)}$. Using [\[eq:technical045\]](#eq:technical045){reference-type="eqref" reference="eq:technical045"} we obtain $$\overline{\mathop{\mathrm{cone}}(C - x)}\subseteq \overline {\mathop{\mathrm{cone}}\overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)}} = \overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)}.$$ Since $\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)\subseteq \mathop{\mathrm{cone}}(C-x)$, this yields $$\overline {\mathop{\mathrm{lin}}\mathop{\mathrm{cone}}(C-x)} = \overline{\mathop{\mathrm{cone}}(C - x)}.$$ Hence $\overline{\mathop{\mathrm{cone}}(C - x)}$ is a linear subspace, and by the definition of quasi-relative interior, we have $x\in \mathop{\mathrm{qri}}C$. We conclude that $\mathop{\mathrm{fri}}C \subseteq \mathop{\mathrm{qri}}C$. ◻
**Remark 4**. *If $X$ is finite-dimensional, or $\mathop{\mathrm{ri}}C\neq \emptyset$, we have $\mathop{\mathrm{ri}}C = \mathop{\mathrm{qri}}C$, and hence all four notions coincide (see [@BorweinGoebel Theorem 2.12]). Note, however, that $\mathop{\mathrm{icr}}C \neq \emptyset$ doesn't guarantee that $\mathop{\mathrm{icr}}C = \mathop{\mathrm{fri}}C$ (and hence that $\mathop{\mathrm{icr}}C = \mathop{\mathrm{qri}}C$) (see Example [Example 11](#eg:icrneqfri){reference-type="ref" reference="eg:icrneqfri"}).*
A consequence of Theorem [Theorem 3](#thm:interiorsandwich){reference-type="ref" reference="thm:interiorsandwich"} is the following result about minimal faces.
**Corollary 5**. *Let $C$ be a convex subset of a topological vector space $X$. Then for any $x\in C$ we have $x\in \mathop{\mathrm{icr}}F_{\min}(x,C)\subseteq \mathop{\mathrm{fri}}F_{\min}(x,C)\subseteq \mathop{\mathrm{qri}}F_{\min}(x,C)$.*
*Proof.* Since $F_{\min}(x,C)=F_{\min}\left(x,F_{\min}(x,C)\right)$, and by [@mill-rosh Corollary 3.14] the intrinsic core of a minimal face contains $x$ and is hence nonempty, we have $$x\in\mathop{\mathrm{icr}}F_{\min}(x,C)\subseteq \mathop{\mathrm{fri}}F_{\min}(x,C)\subseteq \mathop{\mathrm{qri}}F_{\min}(x,C).$$ ◻
Our next goal is to show that the face relative interior is a new notion that coincides with neither the intrinsic core nor the quasi-relative interior. We consider three examples: two of them are well-known from the works [@BorweinGoebel] and [@BorweinLewisPartiallyFinite]. In Example [Example 9](#eg:02a){reference-type="ref" reference="eg:02a"} the intrinsic core is empty, but the face relative interior isn't; in Example [\[exa1\]](#exa1){reference-type="ref" reference="exa1"} the face relative interior coincides with the intrinsic core, but is a proper subset of the quasi-relative interior. We also construct a set in $l_2$ for which the intrinsic core is nonempty but doesn't coincide with the face relative interior in Example [Example 11](#eg:icrneqfri){reference-type="ref" reference="eg:icrneqfri"}.
In these examples, our calculations are done directly from definitions, which may help further familiarise the reader with our ideas. We will need a couple of technical results to proceed: the first one (Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"}) is a characterisation of a minimal face that was proved in [@mill-rosh], and the second one is a generalisation of the delicate construction of slowly converging sequences from [@BorweinGoebel].
**Proposition 6**. *The minimal face $F_{\min}(x,C)$ for $x\in C$, where $C$ is convex, can be represented as $$\label{minface:union}
F_{\min}(x,C) = \bigcup \{ [y,z]\subseteq C, \, x\in (y,z)\}.$$*
The next statement is evident from Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"}.
**Corollary 7**. *If $A$ and $B$ are convex subsets of a vector space $X$, $A\subseteq B$ and $x\in A$, then $$F_{\min}(x,A)\subseteq F_{\min}(x,B).$$*
**Proposition 8**. *Let $X = l^d$ with $d\in [1,+\infty)$. Then for any $x\in X$ and any $\delta>0$ there exists $y\in l^d$ such that $\|y\|_d = \delta$ and for all $\varepsilon>0$ there is an $i \in \mathbb{N}$ with $$|x_i|<\varepsilon y_i.$$*
*Proof.* Fix an $x\in X = l^d$ and $\delta>0$. Let $z\in X$ be such that $z_k>0$ for every $k\in \mathbb{N}$ and $\|z\|_d = \delta$. We define a function $N(k)$ recursively as follows. Since $|x_n|\to 0$, for any $k\in \mathbb{N}$ we can find $N'(k)$ such that $$|x_n|<\frac{z_k}{k}\quad \forall n \geq N'(k).$$ Let $N(1) = N'(1)$, and define the remaining values of the function $N(k)$ recursively, as $$N(k) = \max\{N'(k), N(k-1)+1\}.$$
Observe that $N(k)$ is injective by construction. We define the sequence $y\in X$ such that for each $n\in \mathbb{N}$ $$y_n: = \begin{cases}
0,& n\notin N(\mathbb{N}),\\
z_k, & N(k) = n.
\end{cases}$$ Note that $\|y\|_d = \|z\|_d = \delta$.
Now choose any $\varepsilon>0$. There exists $k\in \mathbb{N}$ such that $\varepsilon >1/k$. Since $$|x_n|< \frac{z_k}{k}\quad \forall n \geq N(k)\geq N'(k),$$ we conclude that for $i=N(k)$ $$|x_i|< \frac{z_k}{k}\leq \varepsilon y_{i}.$$ ◻
The next example shows that the intrinsic core is different to the face relative interior. More precisely, we demonstrate that it is possible to have $\emptyset = \mathop{\mathrm{icr}}C \subsetneq \mathop{\mathrm{fri}}C$.
**Example 9** (From [@BorweinGoebel Example 2.4]). Let $C$ be a subset of the Hilbert space $l_2$, $$C = \{x\in l_2\,|\, \|x\|_2\leq 1, x_i\geq 0\; \forall\, i \in \mathbb{N}\}.$$ We will demonstrate that $$\label{eq:egicrnotfri}
\emptyset = \mathop{\mathrm{icr}}C \neq \mathop{\mathrm{fri}}C,$$ where $$\label{eq:friexample1}
\mathop{\mathrm{fri}}C = \{x\in C\, |\, \|x\|_2<1, x_i >0 \, \forall i \in \mathbb{N}\}.$$
Our first step is to show that $\mathop{\mathrm{icr}}C =\emptyset$. We will prove this by showing that for any $x\in C$ there is some $y\in C\setminus F_{\min}(x,C)$, and hence from [\[eq:icrminface\]](#eq:icrminface){reference-type="eqref" reference="eq:icrminface"} it follows that $\mathop{\mathrm{icr}}C = \emptyset$. Take any $x\in C$. By Proposition [Proposition 8](#prop:majorant){reference-type="ref" reference="prop:majorant"} there exists $y$ such that $\|y\|_2 = 1$ and for any $\varepsilon>0$ there is an $i\in \mathbb{N}$ with $\varepsilon y_i >|x_i|$. It is clear that $y\in C$. If $y\in F_{\min}(x,C)$, then from Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"} we know that there exists $z\in C$ such that $x\in (y,z)$. Explicitly we have $x = \alpha y + (1-\alpha) z$, $\alpha \in (0,1)$, and so $$z = \frac{1}{1-\alpha} (x - \alpha y).$$ By our construction of $y$, for $\varepsilon=\alpha$ there is some $i\in \mathbb{N}$ such that $x_i-\alpha y_i<0$, and hence $$z_i = \frac{1}{1-\alpha}(x_i - \alpha y_i) <0,$$ which contradicts $z\in C$. Hence $y\notin F_{\min}(x,C)$, and we have shown that $\mathop{\mathrm{icr}}C = \emptyset$.
Next we are going to calculate all minimal faces of $C$ explicitly. We already know that for any $x\in C$ the set $F_{\min}(x,C)$ contains no sequence that converges to zero more slowly than $x$, that is, no $y\in C$ for which every $\varepsilon>0$ admits an index $i$ with $\varepsilon y_i>|x_i|$. Any other $y\in C$ is such that there exists some $\varepsilon>0$ with $\varepsilon y_i\leq x_i$ for all $i\in \mathbb{N}$. We let $$z_\alpha = x+ \alpha(x-y) = \alpha x + x-\alpha y,$$ and so for $\alpha \leq \varepsilon$ $$(z_\alpha)_i = \alpha x_i + x_i - \alpha y_i \geq \alpha x_i \geq 0.$$ If $\|x\|_2<1$, we can choose $\alpha\leq \varepsilon$ such that $$\|z_\alpha\|_2\leq \|x\|_2 + \alpha \|x-y\|_2 \leq 1,$$ and hence $x\in (y,z_\alpha)$ with $y,z_\alpha \in C$.
We conclude that $$\label{eq:minface001}
F_{\min}(x,C) = \{y\in C\,|\, \exists \varepsilon >0, |x_i|\geq \varepsilon |y_i| \; \forall \, i \in \mathbb{N}\} \quad \forall x\in C, \; \|x\|_2 <1.$$
It remains to consider the case $\|x\|_2=1$. We will show that in this case, the minimal face of $x$ is the singleton $\{x\}$. Indeed, assume on the contrary that $x\in (x+u,x-u)$, where $x-u,x+u\in C$ and $u\neq 0$. Then using the parallelogram law and the fact that $\|x\|=1$, we have $$2 = 2 \|x\|_2^2 \leq 2 \|x\|_2^2+2 \|u\|_2^2 = \|x+u\|_2^2 + \|x-u\|_2^2 \leq 2,$$ and we conclude that $u = 0$. Hence $$\label{eq:minface002}
F_{\min}(x,C) = \{x\} \quad \forall x\in C, \|x\|_2=1.$$
It follows from [\[eq:minface001\]](#eq:minface001){reference-type="eqref" reference="eq:minface001"} that if for $x\in C$ we have $x_i =0$ for some $i\in \mathbb{N}$, then $$F_{\min}(x,C) \subseteq \{u\, |\, u_i = 0\},$$ and hence $\overline{F_{\min}(x,C)} \neq C$. Likewise, when $\|x\|_2=1$, we have $\overline{F_{\min}(x,C)} =\{x\} \neq C$. On the other hand, suppose that $x\in C$ is such that $\|x\|_2<1$ and $x_i >0$ for all $i\in \mathbb{N}$. Representing any $y\in C$ as the limit of the eventually zero sequences $y^k$ whose first $k$ entries coincide with those of $y$, we have $y^k \in F_{\min}(x,C)$, and since $y^k \to y$, we conclude that $y\in \overline{F_{\min}(x,C)}$. By the arbitrariness of $y$, we conclude that $x\in \mathop{\mathrm{fri}}C$, and therefore we have [\[eq:friexample1\]](#eq:friexample1){reference-type="eqref" reference="eq:friexample1"}. The rest of the relation [\[eq:egicrnotfri\]](#eq:egicrnotfri){reference-type="eqref" reference="eq:egicrnotfri"} also follows from this representation.
The next example demonstrates that the face relative interior is different to the quasi-relative interior.
**Example 10**. [@BorweinLewisPartiallyFinite Example 2.20][\[exa1\]]{#exa1 label="exa1"} Let $$C:= \{x\in l_2\,|\, \|x\|_1 \leq 1\}.$$ It was shown in [@BorweinLewisPartiallyFinite] that $$\label{eq:qriEx1}
\mathop{\mathrm{qri}}C = C \setminus \{x\in l_2\,|\, \|x\|_1=1 , \; x_n = 0 \; \forall n> \text{some } N\}.$$
We will calculate the face relative interior of $C$ explicitly and show that $\mathop{\mathrm{fri}}C\subsetneq \mathop{\mathrm{qri}}C$.
Observe that for every $x\in C$ such that $\|x\|_1<1$ we have $F_{\min}(x,C)= C$. Indeed, take any $y\in C$, and let $$z = x + \alpha (x-y), \quad \text{where } \; \alpha = \frac{1-\|x\|_1}{\|y-x\|_1}.$$ Then $z\in l_1$, $$\|z\|_1 \leq \|x\|_1 + \alpha \|x-y\|_1 = 1,$$ and hence $z\in C$. We have $$x = \frac{1}{1+\alpha} z+ \frac{\alpha}{1+\alpha} y\in (y,z),$$ and from Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"} it follows that $y\in F_{\min}(x,C)$. We conclude that $F_{\min}(x,C) = C$, and so $$\{x\,|\, \|x\|_1<1\} \subseteq \mathop{\mathrm{icr}}C \subseteq \mathop{\mathrm{fri}}C.$$
Now consider the case $\|x\|_1 = 1$. Suppose that $y\in F_{\min}(x,C)$. Then there is some $z\in C$ such that $$x\in (y,z).$$ Without loss of generality, we can assume that $x_i\geq 0$ for all $i$ (otherwise it is easy to see that the argument still holds by using the relation $x_i = -|x_i|$ instead of $x_i = |x_i|$ whenever necessary). We have for some $\alpha \in (0,1)$ $$1 = \sum_{i=1}^\infty |x_i| = \sum_{i=1}^\infty x_i = \sum_{i=1}^\infty (\alpha y_i+ (1-\alpha) z_i) = \alpha \sum_{i=1}^\infty y_i+ (1-\alpha) \sum_{i=1}^\infty z_i \leq \alpha \|y\|_1 + (1-\alpha) \|z\|_1\leq 1.$$ Note that $$\sum_{i=1}^{\infty}y_i\leq \sum_{i=1}^{\infty}|y_i|\leq 1, \qquad \sum_{i=1}^{\infty}z_i\leq \sum_{i=1}^{\infty}|z_i|\leq 1,$$ and using $$\alpha \sum_{i=1}^\infty y_i+ (1-\alpha) \sum_{i=1}^\infty z_i=1,$$ this implies that $\sum_{i=1}^{\infty}y_i=\sum_{i=1}^{\infty}z_i=1$, and hence also $\sum_{i=1}^{\infty}|y_i|=\sum_{i=1}^{\infty}|z_i|=1$. Then it follows that $$\sum_{i=1}^\infty |y_i|-\sum_{i=1}^\infty y_i=\sum_{i=1}^\infty (|y_i|-y_i)=0,$$ implying that $|y_i|=y_i\geq 0$ for all $i\in \mathbb{N}$.
Hence, $$F_{\min}(x,C) \subseteq C\cap \{u\,|\, \sum_{i=1}^\infty u_i = 1,\ u_i\geq 0\ \forall i\in \mathbb{N}\}.$$ This guarantees that $\overline{F_{\min}(x,C)} \neq C$, since $C$ contains elements with strictly negative entries.
The remaining cases when some of the entries of $x$ are negative can be considered similarly, by taking care of the signs of $y_i$ and $z_i$. We conclude that $\mathop{\mathrm{fri}}C = \{x\,|\, \|x\|_1<1\}$, which is strictly smaller than the quasi-relative interior. It is worth noting that in this case, the notion of face relative interior appears to better reflect the intuitive perception of what the 'relative interior' of the set $C$ should be.
To conclude, in this case we have $$\mathop{\mathrm{icr}}C = \mathop{\mathrm{fri}}C \subsetneq \mathop{\mathrm{qri}}C = \mathop{\mathrm{qi}}C.$$
Our last example shows that there exists a convex set with nonempty intrinsic core, that is also different to its face relative interior.
**Example 11**. Let $L=l_1$, and embed $L$ into $l_2$ as a subset. Take any point $u\in l_2\setminus l_1$ and let $C= \mathop{\mathrm{co}}(L\cup \{u\})$.
Every point $x\in C$ can be written as $x= (1-\alpha) l + \alpha u$ with $l\in L$ and $\alpha\in[0,1]$, and for $\alpha<1$ this representation is unique (since $u\notin L$). If $\alpha = 1$, then $x=u$, and $F_{\min}(x,C) =F_{\min} (u,C) = \{u\}$. If $\alpha = 0$, then $x\in L$, and $F_{\min}(x,C) = L$. Otherwise, for $\alpha \in (0,1)$, take any $y= (1-\beta)m + \beta u\in C$ with $m\in L$ and $\beta\in[0,1]$, and choose $\gamma\in(0,1)$ small enough that $\gamma\beta<\alpha$ and $\gamma(1-\beta)<1-\alpha$. Then $$z: = \frac{1}{1-\gamma}(x-\gamma y)=\frac{(1-\alpha)l-\gamma(1-\beta)m}{1-\gamma} + \frac{\alpha - \gamma \beta} {1-\gamma}\, u \in C,$$ since $L$ is a linear subspace and the coefficient of $u$ belongs to $(0,1)$, and hence we have $$x = (1-\gamma) z + \gamma y,$$ meaning that $y\in F_{\min}(x,C)$ for every $y\in C$. We conclude that $\mathop{\mathrm{icr}}C = C\setminus (L \cup \{u\})$.
Since for every point $x\in L$ the minimal face is $L$ itself, and the closure of $L = l_1$ in $l_2$ is the entirety of $l_2$, this means $L\subset \mathop{\mathrm{fri}}C$, and so $\emptyset \neq\mathop{\mathrm{icr}}C \neq \mathop{\mathrm{fri}}C$.
# The question of nonemptiness {#sec:nonemptiness}
While the intrinsic core can be empty (and in fact any infinite-dimensional vector space contains a convex set whose intrinsic core is empty, see [@Holmes]), the quasi-relative interior of a convex set $C$ is guaranteed to be nonempty under the assumption that $C$ is CS-closed, and some additional conditions imposed on the ambient space.
We discuss the notion of CS-closedness in Section [3.1](#ss:csclosed){reference-type="ref" reference="ss:csclosed"} and give a comprehensive list of conditions under which convex sets in general vector spaces are CS-closed. This list is similar to the one given in Proposition 6 in [@BorweinConvexRelations]. We then move on to the nonemptiness results in Section [3.2](#ss:nonempty){reference-type="ref" reference="ss:nonempty"}, and prove in Theorem [Theorem 16](#thm:frinonempty){reference-type="ref" reference="thm:frinonempty"} that the face relative interior of a CS-closed set is nonempty under the assumptions used in [@BorweinLewisPartiallyFinite] to prove that the quasi-relative interior is nonempty. In particular, it follows that the face relative interior of any convex set is nonempty in a separable Banach space.
## CS-closed sets {#ss:csclosed}
Recall that for any subset $S$ of a real vector space $X$ we define its convex hull $\mathop{\mathrm{co}}S$ as the set of all finite *convex combinations* of points from $S$. It may happen that the convex hull of a compact subset of a topological vector space $X$ is not closed, as shown in the following example.
**Example 12** (From [@hitchhiker Example 5.34]). Let $(u^n)_{n\in \mathbb{N}}$ be a sequence of points in $l_2$ such that each $u^n$ is a sequence of zeros except for the $n$-th coordinate, which is $1/n$. The set $A = \{0, u^1,u^2,\dots, u^n,\dots \}$ is compact, but its convex hull $\mathop{\mathrm{co}}A$ is not closed. Indeed, consider the sequence $(x^k)_{k\in \mathbb{N}}$, where $$x^k = \sum_{i=1}^k \frac{1}{2^i}u^i +\frac{1}{2^k}\,0,$$ the last term being the weight $2^{-k}$ assigned to the point $0\in A$, so that the weights add up to one. We have $x^k \in \mathop{\mathrm{co}}A$ by the definition of the convex hull; moreover, $$x^k \to x = \sum_{i=1}^{\infty}\frac{1}{2^i} u^i.$$ The series defining $x$ converges in $l_2$, and hence $x\in \overline{\mathop{\mathrm{co}}} A$, but $x\notin \mathop{\mathrm{co}}A$, since every element of $\mathop{\mathrm{co}}A$ has only finitely many nonzero coordinates, while every coordinate of $x$ is positive.
The point $x$ in Example [Example 12](#eg:hullnotclosed){reference-type="ref" reference="eg:hullnotclosed"} can be viewed as an infinite generalisation of a convex combination of points in $A$. Indeed, given an arbitrary set $S\subseteq X$, we can define the infinite convex hull (or $\sigma$-convex hull, see [@SigmaConvex]) as $$\label{eq:cs}
\tilde \mathop{\mathrm{co}}S = \left\{ \sum_{i\in \mathbb{N}}\lambda_i x_i \;\Bigr|\; \sum_{i\in \mathbb{N}}\lambda_i =1,\quad \lambda_i\geq 0,\, x_i\in S \;\, \forall \, i \in \mathbb{N}\right\},$$ where only convergent series $\sum_{i\in \mathbb{N}}\lambda_i x_i$ are taken into account.
A subset $S$ of a topological vector space $X$ is called convergent-series closed, or CS-closed for short (see [@Jameson]), if $S=\tilde\mathop{\mathrm{co}}S$, i.e. $S$ contains every convergent sum $\sum_{i\in \mathbb{N}}\lambda_i x_i$ as in [\[eq:cs\]](#eq:cs){reference-type="eqref" reference="eq:cs"}, and CS-compact if in addition every such series converges. Every CS-compact set is CS-closed, and every CS-closed set is convex, see [@fremlin_talagrand_1979]. Even though every closed convex set in a normed vector space $X$ is CS-closed, CS-closed sets do not need to be closed: for instance, the open unit ball in any normed space is CS-closed. In fact, *all* convex sets in completely metrizable Fréchet spaces are CS-closed due to [@FremlinTalagrand Corollary 5]. In particular, *all convex subsets of a Banach space are CS-closed*.
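To illustrate the definition, let us verify the claim above that the open unit ball of a normed space is CS-closed: if $x=\sum_{i\in \mathbb{N}}\lambda_i x_i$ is a convergent convex series with $\|x_i\|<1$, $\lambda_i\geq 0$ and $\sum_{i\in \mathbb{N}}\lambda_i=1$, then at least one $\lambda_i$ is strictly positive, and hence $$\|x\|\leq \sum_{i\in \mathbb{N}}\lambda_i \|x_i\| < \sum_{i\in \mathbb{N}}\lambda_i=1,$$ so the sum again belongs to the open unit ball.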
**Proposition 13**. *Let $X$ be a Hausdorff topological vector space and let $C$ be a convex subset of $X$. The following conditions are sufficient for $C$ to be CS-closed.*
1. *[\[it:openisCS\]]{#it:openisCS label="it:openisCS"} The set $C$ is open.*
2. *[\[it:closedisCS\]]{#it:closedisCS label="it:closedisCS"} The set $C$ is sequentially closed.*
3. *[\[it:intersection\]]{#it:intersection label="it:intersection"} The set $C$ is an intersection of CS-closed sets.*
4. *[\[it:separatedfinite\]]{#it:separatedfinite label="it:separatedfinite"} The set $C$ is finite-dimensional.*
5. *[\[it:Gdelta\]]{#it:Gdelta label="it:Gdelta"} The space $X$ is Fréchet (locally convex and metrizable) and the set $C$ is $G_\delta$ (a countable intersection of open sets).*
6. *[\[it:allCS\]]{#it:allCS label="it:allCS"} If $X$ is completely metrizable (that is, the topology of $X$ is induced by a metric under which $X$ is complete).*
*Proof.* For item [\[it:openisCS\]](#it:openisCS){reference-type="ref" reference="it:openisCS"} see Example (i) in [@JamesonConvexSeries], and item [\[it:closedisCS\]](#it:closedisCS){reference-type="ref" reference="it:closedisCS"} is shown in [@JamesonConvexSeries Proposition 1]. Item [\[it:intersection\]](#it:intersection){reference-type="ref" reference="it:intersection"} is obvious and is also mentioned in [@JamesonConvexSeries]. Item [\[it:separatedfinite\]](#it:separatedfinite){reference-type="ref" reference="it:separatedfinite"} is the classical fact that every finite-dimensional convex set is CS-closed. Items [\[it:Gdelta\]](#it:Gdelta){reference-type="ref" reference="it:Gdelta"} and [\[it:allCS\]](#it:allCS){reference-type="ref" reference="it:allCS"} are proved in [@FremlinTalagrand Corollaries 4 and 5]. ◻
## Nonemptiness of the face relative interior {#ss:nonempty}
The main goal of this section is to show that the face relative interior is nonempty under reasonable assumptions. We show that the same conditions that are known to guarantee the nonemptiness of quasi-relative interior ensure the nonemptiness of the face relative interior. Before recalling the key result from [@BorweinLewisPartiallyFinite] given in Theorem [Theorem 14](#thm:qrinonempty){reference-type="ref" reference="thm:qrinonempty"} below (that we then mirror in Theorem [Theorem 16](#thm:frinonempty){reference-type="ref" reference="thm:frinonempty"} for the face relative interiors), we note here that there is an earlier result [@zalinescubook Proposition 1.2.9.] that all nonempty CS-complete sets in first countable separable locally convex spaces have nonempty quasi-relative interiors. Together with item [\[it:allCS\]](#it:allCS){reference-type="ref" reference="it:allCS"} of Proposition [Proposition 13](#prop:CS){reference-type="ref" reference="prop:CS"} either of these results implies that every nonempty convex subset of a separable Banach space has nonempty quasi-relative interior (see [@BorweinGoebel Theorem 2.8]). We show that the same is true for the face relative interior (see Corollary [Corollary 17](#cor:friBanach){reference-type="ref" reference="cor:friBanach"}).
**Theorem 14** ([@BorweinLewisPartiallyFinite Theorem 2.19]). *Suppose $X$ is a topological vector space such that either*
1. *$X$ is a separable Fréchet space, or*
2. *$X = Y^*$ with $Y$ a separable normed space, and the topology on $X$ given by the weak\* topology of $Y^*$.*
*Suppose that $C\subseteq X$ is CS-closed. Then $\mathop{\mathrm{qri}}C\neq \emptyset$.*
We will need the following technical result from [@BorweinLewisPartiallyFinite].
**Lemma 15** ([@BorweinLewisPartiallyFinite Lemma 2.18]). *Suppose that $X$ is a topological vector space with either*
1. *$X$ separable and complete metrizable, or*
2. *$X=Y^*$ with $Y$ a separable normed space, and the topology on $X$ given by the weak\* topology of $Y^*$.*
*If $C\subseteq X$ is nonempty and CS-closed, then there exist sequences $(x_n)_{n=1}^\infty$ and $(\lambda_n)_{n=1}^\infty$, such that $\sum_{i=1}^\infty \lambda_i = 1$, $\lambda_i \geq 0$ and $x_i \in C$ for all $i \in \mathbb{N}$, the sequence $(x_n)_{n=1}^\infty$ is dense in $C$, and the convex series $\sum_{i=1}^\infty \lambda_i x_i$ converges to some $\bar x \in C$.*
**Theorem 16**. *Suppose that $X$ is a topological vector space with either*
1. *$X$ separable and complete metrizable, or*
2. *$X=Y^*$ with $Y$ a separable normed space, and the topology on $X$ given by the weak\* topology of $Y^*$.*
*If $C$ is a nonempty CS-closed subset of $X$, then $\mathop{\mathrm{fri}}C\neq \emptyset$.*
*Proof.* By Lemma [Lemma 15](#lem:218){reference-type="ref" reference="lem:218"} there exist a sequence $(\lambda_n)_{n\in \mathbb{N}}$ of nonnegative real numbers with $\sum_{n\in \mathbb{N}}\lambda_n=1$ and a sequence $(x_n)_{n=1}^\infty$ dense in $C$ such that $\bar{x}:=\sum_{n\in \mathbb{N}}\lambda_n x_n\in C$.
We will show that $\overline{F_{\min}(\bar{x},C)}=C$, hence $\bar x \in \mathop{\mathrm{fri}}C \neq \emptyset$. Indeed, for all $n\in \mathbb{N}$ we have $$\bar{x}=\lambda_n x_n+(1-\lambda_n)\sum_{k\in \mathbb{N},k\neq n}\frac{\lambda_k}{1-\lambda_n}x_k,$$ and by the CS-closedness of $C$ we have $\sum_{k\in \mathbb{N},k\neq n}\frac{\lambda_k}{1-\lambda_n}x_k\in C$, so $x_n\in F_{\min}(\bar{x},C)$ for all $n\in \mathbb{N}$. Taking into account that $(x_n)_{n\in\mathbb{N}}$ is dense in $C$, we have $C\subseteq \overline{F_{\min}(\bar{x},C)}$, hence $C=\overline{F_{\min}(\bar{x},C)}$, implying that $\bar{x}\in \mathop{\mathrm{fri}}C$. ◻
**Corollary 17**. *Any nonempty convex subset of a separable Banach space has nonempty face relative interior.*
*Proof.* It follows from Theorem [Theorem 16](#thm:frinonempty){reference-type="ref" reference="thm:frinonempty"} that any CS-closed subset of a separable Banach space has nonempty face relative interior, while item [\[it:allCS\]](#it:allCS){reference-type="ref" reference="it:allCS"} of Proposition [Proposition 13](#prop:CS){reference-type="ref" reference="prop:CS"} ensures that every convex subset of a separable Banach space is CS-closed. ◻
Since the face relative interior is guaranteed to be nonempty under the same (known) conditions as the quasi-relative interior, and we couldn't find any examples of convex sets for which the face relative interior is empty but the quasi-relative interior isn't, we conjecture that the nonemptiness of the quasi-relative interior must guarantee the nonemptiness of the face relative interior.
**Conjecture 18**. *Let $C$ be a convex subset of a topological vector space $X$. The face relative interior of $C$ is nonempty whenever the quasi-relative interior of $C$ is nonempty.*
# Calculus of face relative interiors {#sec:calculus}
The purpose of this section is to lay out some calculus rules and other properties of the face relative interior, and to compare these results with what is known for the quasi-relative interior. We draw explicit parallels between the two notions and their properties as we work through the relevant statements and proofs.
We begin with the most straightforward properties.
**Proposition 19** (Basic properties). *If $C$ is a convex subset of a topological vector space $X$, then the following properties hold.*
1. *[\[it:convex\]]{#it:convex label="it:convex"} The set $\mathop{\mathrm{fri}}C$ is a convex subset of $C$.*
2. *[\[it:segmentfri\]]{#it:segmentfri label="it:segmentfri"} Given $x\in \mathop{\mathrm{fri}}C$ and any $y\in C$, we have $[x,y)\subseteq \mathop{\mathrm{fri}}C$.*
3. *[\[it:closure\]]{#it:closure label="it:closure"} If $\mathop{\mathrm{fri}}C\neq \emptyset$, then $\overline{C}= \overline{\mathop{\mathrm{fri}}C}$.*
4. *[\[it:idempotent\]]{#it:idempotent label="it:idempotent"} The face relative interior operation is idempotent, that is, $\mathop{\mathrm{fri}}(\mathop{\mathrm{fri}}C)=\mathop{\mathrm{fri}}C$.*
*Proof.* The first assertion [\[it:convex\]](#it:convex){reference-type="ref" reference="it:convex"} is easy to prove, and we have already addressed it in Proposition [Proposition 2](#prop:friconvex){reference-type="ref" reference="prop:friconvex"}.
To demonstrate that [\[it:segmentfri\]](#it:segmentfri){reference-type="ref" reference="it:segmentfri"} is true, let $x\in \mathop{\mathrm{fri}}C$ and $y\in C$. By definition of a face for any $u\in (x,y)$ we have $x\in F_{\min}(u, C)$, therefore $$C\subseteq \overline{F_{\min}(x,C)} \subseteq \overline{F_{\min}(u,C)},$$ hence $u\in \mathop{\mathrm{fri}}C$. We conclude that $[x,y)\subseteq \mathop{\mathrm{fri}}C$.
To see that [\[it:closure\]](#it:closure){reference-type="ref" reference="it:closure"} is true, assume that $\mathop{\mathrm{fri}}C\neq \emptyset$. Then there exists some $x\in \mathop{\mathrm{fri}}C$. Now for any $y\in C$ by [\[it:segmentfri\]](#it:segmentfri){reference-type="ref" reference="it:segmentfri"} we have $[x,y)\subseteq \mathop{\mathrm{fri}}C$, and hence $y\in \overline {\mathop{\mathrm{fri}}C}$. Therefore $C\subseteq \overline {\mathop{\mathrm{fri}}C}$, from which [\[it:closure\]](#it:closure){reference-type="ref" reference="it:closure"} follows.
Finally, we focus on proving [\[it:idempotent\]](#it:idempotent){reference-type="ref" reference="it:idempotent"}. By [\[it:convex\]](#it:convex){reference-type="ref" reference="it:convex"} the set $\mathop{\mathrm{fri}}C$ is convex, therefore the composition $\mathop{\mathrm{fri}}(\mathop{\mathrm{fri}}C)$ is well-defined. Applying [\[it:convex\]](#it:convex){reference-type="ref" reference="it:convex"} to $\mathop{\mathrm{fri}}C$, we have $\mathop{\mathrm{fri}}(\mathop{\mathrm{fri}}C)\subseteq \mathop{\mathrm{fri}}C$. It remains to show that $\mathop{\mathrm{fri}}C\subseteq \mathop{\mathrm{fri}}(\mathop{\mathrm{fri}}C)$. For this it is sufficient to demonstrate that $F_{\min}(x,C)\subseteq \overline{F_{\min}(x,\mathop{\mathrm{fri}}C)}$ for any $x\in \mathop{\mathrm{fri}}C$ (indeed, if for any $x\in \mathop{\mathrm{fri}}C$ we have $F_{\min}(x,C)\subseteq\overline{F_{\min}(x,\mathop{\mathrm{fri}}C)}$ then $\mathop{\mathrm{fri}}C\subseteq C\subseteq \overline{F_{\min}(x,C)}\subseteq \overline{F_{\min}(x,\mathop{\mathrm{fri}}C)}$, and so $x\in \mathop{\mathrm{fri}}(\mathop{\mathrm{fri}}C)$).
Let $y\in F_{\min}(x,C)$. From Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"} we have $x\in (y,z)$, where $z$ is also in $F_{\min}(x,C)$. By [\[it:segmentfri\]](#it:segmentfri){reference-type="ref" reference="it:segmentfri"} we conclude that $(y,z)\subset \mathop{\mathrm{fri}}C$. Then by the definition of a face we have $(y,z)\subseteq F_{\min}(x,\mathop{\mathrm{fri}}C)$, and hence $y\in \overline{F_{\min}(x,\mathop{\mathrm{fri}}C)}$, proving the assertion. ◻
**Remark 20**. *Note that the quasi-relative interior satisfies all properties given in Proposition [Proposition 19](#prop:basic){reference-type="ref" reference="prop:basic"}. The relevant results can be found in Proposition 2.11 (for [\[it:convex\]](#it:convex){reference-type="ref" reference="it:convex"}), Lemma 2.9 (for [\[it:segmentfri\]](#it:segmentfri){reference-type="ref" reference="it:segmentfri"}), Proposition 2.12 (for [\[it:closure\]](#it:closure){reference-type="ref" reference="it:closure"}) of [@BorweinLewisPartiallyFinite] and Proposition 2.5 (vii) (for [\[it:idempotent\]](#it:idempotent){reference-type="ref" reference="it:idempotent"}) of [@Regularity]. In the case of the quasi-relative interior, these properties are proved under the assumption that the space $X$ is locally convex. This condition is needed to carry out the proofs that rely on dual objects, which is not required in our case.*
**Proposition 21** (More calculus rules). *Let $X$ and $Y$ be topological vector spaces.*
1. *[\[it:prod\]]{#it:prod label="it:prod"} For any pair of convex sets $C \subseteq X$ and $D\subseteq Y$ we have $$\mathop{\mathrm{fri}}(C\times D) = \mathop{\mathrm{fri}}C \times \mathop{\mathrm{fri}}D.$$*
2. *[\[it:linimage\]]{#it:linimage label="it:linimage"} For any continuous linear operator $T:X\to Y$ and any convex set $C\subseteq X$ we have $$\label{eq:linear}
T(\mathop{\mathrm{fri}}C) \subseteq \mathop{\mathrm{fri}}(T(C)).$$ Moreover, if $T$ is injective on $C$, then [\[eq:linear\]](#eq:linear){reference-type="eqref" reference="eq:linear"} becomes equality.*
*Proof.* The equality [\[it:prod\]](#it:prod){reference-type="ref" reference="it:prod"} is easy to see using the observation that $F_{\min}((c,d),C\times D) = F_{\min} (c,C)\times F_{\min}(d,D)$ for any $(c,d)\in C\times D$.
To show [\[it:linimage\]](#it:linimage){reference-type="ref" reference="it:linimage"}, notice that since the linear mapping $T$ is continuous, the preimages of closed sets must be closed, hence for any subset $S$ of $X$ we must have $T(\overline{S}) \subseteq \overline{T(S)}$. To complete the proof it remains to show that for any $x\in C$ we have $T(F_{\min}(x, C))\subseteq F_{\min}(T(x), T(C))$ (then for any $x\in \mathop{\mathrm{fri}}C$ we have $C\subseteq \overline{F_{\min}(x,C)}$, and hence $$T(C) \subseteq T(\overline{F_{\min}(x,C)}) \subseteq \overline{T(F_{\min}(x,C))}\subseteq \overline{F_{\min}(T(x), T(C))},$$ therefore $T(x) \in \mathop{\mathrm{fri}}T(C)$).
Now take any $x\in \mathop{\mathrm{fri}}C$, and suppose that $u\in F_{\min}(x,C)$. By Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"} there must exist some $v\in F_{\min}(x,C)$ such that $x\in (u,v)$. Since $T$ is a linear mapping, this means that $T(x) \in (T(u),T(v))$, therefore $T(u)\in F_{\min}(T(x), T(C))$.
Assume that $T$ is injective on $C$. Let $\hat{x}\in C$ be such that $T(\hat{x})\in \mathop{\mathrm{fri}}T(C)$; then $T(C)\subseteq \overline{F_{\min}\left(T(\hat{x}),T(C)\right)}$. Fix any $x\in C$ and any neighbourhood $U_x$ of $x$, and let $V_{tx}$ be a set such that $T(z)\in V_{tx}$ for all $z\in U_x$. Take $z\in U_x\cap C$ with $T(z)\in V_{tx}\cap F_{\min}(T(\hat{x}),T(C))$; such $z$ exists because $F_{\min}(T(\hat{x}),T(C))$ is dense in $T(C)$. Since $T(z)\in F_{\min}\left(T(\hat{x}),T(C)\right)$, by Proposition [Proposition 6](#prop:minfaceunion){reference-type="ref" reference="prop:minfaceunion"} there exist $y_z\in C$ with $T(y_z)\in F_{\min}\left(T(\hat{x}),T(C)\right)$ and $\alpha_z\in (0,1)$ such that $T(\hat{x})=\alpha_{z}T(z)+(1-\alpha_z)T(y_z)=T(\alpha_z z+(1-\alpha_z)y_z)$. By the injectivity of $T$ and the convexity of $C$ we have $\hat{x}=\alpha_z z+(1-\alpha_z)y_z$, which implies that $z\in F_{\min}(\hat{x},C)$, and hence $x\in \overline{F_{\min}(\hat{x},C)}$. Since $x\in C$ was arbitrary, $C\subseteq \overline{F_{\min}(\hat{x},C)}$, so $\hat{x}\in\mathop{\mathrm{fri}}C$ and $T(\hat{x})\in T(\mathop{\mathrm{fri}}C)$. This concludes the proof. ◻
**Remark 22**. *The product rule in Proposition [Proposition 21](#prop:prodandlin){reference-type="ref" reference="prop:prodandlin"} [\[it:prod\]](#it:prod){reference-type="ref" reference="it:prod"} also holds for the quasi-relative interior (see [@BorweinLewisPartiallyFinite Proposition 2.5]). The counterpart of inclusion [\[it:linimage\]](#it:linimage){reference-type="ref" reference="it:linimage"} for the quasi-relative interior is given in Proposition 2.21 of [@BorweinLewisPartiallyFinite].*
Our next goal is to address a crucial calculus question: Under what conditions does the sum of face relative interiors equal the face relative interior of the sum? It is not difficult to see that translations preserve face relative interiors, as we show next.
**Proposition 23**. *Let $C$ be a convex set in a topological vector space $X$ and let $a\in X$. Then $\mathop{\mathrm{fri}}(C+a)=\mathop{\mathrm{fri}}C +a$.*
*Proof.* Let $x\in C+a$; then $x=x_c+a$ with $x_c\in C$. By the translation invariance of the topology we have $$\label{minfa}
\overline{F_{\min}(x_c,C)+a}=\overline{F_{\min}(x_c,C)}+a.$$ For any face $F$ of $C$ we have that $F\unlhd C \Longleftrightarrow F+a\unlhd C+a$, which implies that $F_{\min}(x_c+a,C+a)=F_{\min}(x_c,C)+a$. Using the last fact in [\[minfa\]](#minfa){reference-type="eqref" reference="minfa"} we have $$\overline{F_{\min}(x_c+a,C+a)}=\overline{F_{\min}(x_c,C)+a}=\overline{F_{\min}(x_c,C)}+a.$$ Hence $C+a\subseteq \overline{F_{\min}(x_c+a,C+a)}$ if and only if $C\subseteq \overline{F_{\min}(x_c,C)}$, that is, $x\in \mathop{\mathrm{fri}}(C+a)$ if and only if $x_c\in \mathop{\mathrm{fri}}C$. Then $\mathop{\mathrm{fri}}(C+a)=\mathop{\mathrm{fri}}C +a$. ◻
**Proposition 24**. *Let $C$ and $D$ be convex subsets of a topological vector space $X$. Then $$\mathop{\mathrm{fri}}C+\mathop{\mathrm{fri}}D\subseteq \mathop{\mathrm{fri}}(C+D).$$*
*Proof.* Let $C$ and $D$ be convex subsets of a topological vector space $X$, and let $T:X\times X\to X$ be the addition mapping, $T(x,y) = x+y$. This mapping is linear and continuous.
Using Proposition [Proposition 21](#prop:prodandlin){reference-type="ref" reference="prop:prodandlin"} we obtain $$\begin{aligned}
\mathop{\mathrm{fri}}C + \mathop{\mathrm{fri}}D & = T(\mathop{\mathrm{fri}}C \times \mathop{\mathrm{fri}}D) \quad \text{by the definition of $T$}\\
& = T(\mathop{\mathrm{fri}}(C\times D)) \quad \text{by Proposition~\ref{prop:prodandlin}~\ref{it:prod}}\\
& \subseteq \mathop{\mathrm{fri}}T(C\times D) \quad \text{by Proposition~\ref{prop:prodandlin}~\ref{it:linimage}}\\
& = \mathop{\mathrm{fri}}(C+D)\quad \text{by the definition of $T$}.\end{aligned}$$ ◻
**Remark 25**. *An analogue of Proposition [Proposition 24](#sum){reference-type="ref" reference="sum"} holds in the case of the quasi-relative interior (see [@BorweinGoebel Lemma 3.6 (b)]). We refer the reader to [@ZalinescuThreePb] for a nuanced discussion of this property of quasi-relative interiors. We note that the sum of quasi-relative interiors doesn't necessarily coincide with the quasi-relative interior of the sum even when the individual quasi-relative interiors are nonempty, and the same is true for the face relative interior, as we show in what follows using an example from [@ZalinescuThreePb].*
**Example 26** (Example 2 in [@ZalinescuThreePb]). Let $X=\ell_2$, $\bar{x}=(n^{-1})_{n\geq 1}$, $C=[0,1]\bar{x}$ and $D=\ell_1^+$.
For any $y\in C+D$ and $n\in \mathbb{N}$ let $y^n$ be the truncated sequence whose first $n$ entries coincide with $y$, and the remaining entries are all zeros. Note that $y^n\in C+D$, since $0\in C$ and $y^n\in l_1^+$. It is also evident that $y^n\to y$.
For any $x\in l_1^{++}$ (the set of summable sequences with strictly positive entries) there is a sufficiently small $\alpha>0$ such that $x + \alpha (x-y^n)$ has strictly positive entries, hence $y^n\in F_{\min}(x,C+D)$ for every $n$. We conclude that $l_1^{++}\subseteq \mathop{\mathrm{fri}}(C+D)$.
On the other hand, observe that $\mathop{\mathrm{fri}}C = (0,\bar x)$, so every element of $\mathop{\mathrm{fri}}C+\mathop{\mathrm{fri}}D$ is of the form $t\bar x + d$ with $t\in(0,1)$ and $d\in l_1$, and hence does not belong to $l_1$ (since $\bar x\notin l_1$). Therefore $l_1^{++}\cap (\mathop{\mathrm{fri}}C+\mathop{\mathrm{fri}}D) = \emptyset$, while $\mathop{\mathrm{fri}}C+\mathop{\mathrm{fri}}D \subseteq \mathop{\mathrm{fri}}(C + D)$ by Proposition [Proposition 24](#sum){reference-type="ref" reference="sum"}. We conclude that $\mathop{\mathrm{fri}}C+\mathop{\mathrm{fri}}D$ is strictly smaller than $\mathop{\mathrm{fri}}(C+D)$.
The next example shows that the intersection rules that are true for the quasi-relative interiors do not hold for the face relative interior (see [@ZalinescuThreePb Proposition 3]). Specifically, there are convex sets for which $\mathop{\mathrm{fri}}(C\cap D)\not\subseteq \mathop{\mathrm{fri}}C \cap \mathop{\mathrm{fri}}D$ even under the condition $\mathop{\mathrm{fri}}C \cap \mathop{\mathrm{fri}}D\neq \emptyset$.
**Example 27**. We construct the sets $C$ and $D$ in $l_2$ such that $0\in \mathop{\mathrm{fri}}(C\cap D)$, and at the same time $0\notin \mathop{\mathrm{fri}}C \cap \mathop{\mathrm{fri}}D\neq \emptyset$.
Let $v = \left(1,\frac 1 2,\frac 1 3, \dots, \frac 1 n, \dots\right)$ and consider the following subsets of $l_2$: $$\begin{aligned}
A & = \{ t v + u \,| u\in l_1, t>0\} ,\\
B & = \{ x\in l_1\, | \, x_i \geq -1 \; \forall i \},\\
C & = \{ v+ s ( w-v)\, | \, w\in B , \, s> 0 \},\\
D & = A \cup B.\end{aligned}$$ First observe that all of these sets are convex, including $D$: both $A$ and $B$ are convex, while for any $x \in A$ and $y\in B$ and any $\alpha \in (0,1)$ we have $$(1-\alpha ) x+ \alpha y = (1-\alpha ) (u+ t v)+ \alpha y =
(1-\alpha ) t v + (1-\alpha ) u+ \alpha y \in A.$$ Another important observation is that $v\in \overline B$ and $v\notin l_1$.
It is also evident by considering the intersections $C\cap A$ and $C\cap B$ that $$C\cap D = \{ v + s (w-v)\, | \, w\in B, \, s\in (0,1]\}.$$
We will show that $0\in \mathop{\mathrm{fri}}(C\cap D)$, and at the same time $0\notin \mathop{\mathrm{fri}}C \cap \mathop{\mathrm{fri}}D\neq \emptyset$.
To show that $\mathop{\mathrm{fri}}C\cap \mathop{\mathrm{fri}}D\neq \emptyset$, we demonstrate that $\frac v 2 \in \mathop{\mathrm{fri}}C$ and $\frac v 2 \in\mathop{\mathrm{fri}}D$. Take any $x\in C$. Then $x= v+s (w-v)$ for some $w\in B$ and $s>0$. Choose $\varepsilon>0$ such that $$\frac 1 2 + \varepsilon \left ( \frac 1 2 -s \right ) > 0, \qquad \varepsilon \left ( s\left ( \|w\|_1+1\right)-\frac 1 2 \right) <\frac 1 2.$$ Then $$y = \frac 1 2 v + \varepsilon \left ( \frac 1 2 v -x \right ) = (1-t') v + t' w'\in C,$$ where $$t' = \frac 1 2 + \varepsilon \left ( \frac 1 2 -s \right ) , \quad w'= -\frac{\varepsilon s}{t'}\, w\in B.$$ Therefore $\frac v 2 \in (x, y)$, and hence $x\in F_{\min} (\frac v 2 , C)$. We conclude that $C = F_{\min} (v/2, C)$, hence $\frac v 2 \in \mathop{\mathrm{icr}}C \subseteq \mathop{\mathrm{fri}}C$ (by Theorem [Theorem 3](#thm:interiorsandwich){reference-type="ref" reference="thm:interiorsandwich"}).
Likewise, for any $x\in D$ we have $x = t v + u$, where $t\geq 0$ and $u\in l_1$. Choose a sufficiently small $\varepsilon>0$ such that $$\frac 1 2 + \varepsilon \left ( \frac 1 2 -t \right ) > 0;$$ then $$\frac 1 2 v + \varepsilon \left ( \frac 1 2 v -x \right ) = \left(\frac 1 2 + \varepsilon \left ( \frac 1 2 -t \right )\right) v - \varepsilon u \in A \subset D,$$ and using the same argument as before, we conclude that $\frac v 2 \in \mathop{\mathrm{fri}}D$.
To show that $0\in \mathop{\mathrm{fri}}(C\cap D)$, take any $x\in B$; then either $x=0$ or $-x/\|x\|_1\in B$ and $0\in (x,-x/\|x\|_1)$, so that $x\in F_{\min}(0,C\cap D)$ in either case. We conclude that $B\subseteq F_{\min} (0, C\cap D)$. Since $C\cap D \subseteq \overline B\subseteq \overline{F_{\min}(0,C\cap D)}$, we conclude that $0\in \mathop{\mathrm{fri}}(C\cap D)$.
To show that $0\notin \mathop{\mathrm{fri}}C\cap \mathop{\mathrm{fri}}D$, it is sufficient to demonstrate that $0\notin \mathop{\mathrm{fri}}D$. It is evident that for any $x\in A$ there is no $y\in D$ such that $0\in (x,y)$ (since such a $y$ would need to have a negative "$v$ coordinate"). Therefore $F_{\min} (0, D)\subseteq B$. Now every $x\in \overline B$ has all coordinates not smaller than $-1$, while the point $v+(-3,0,0,\dots)$, whose first coordinate equals $-2$, is in $D$ (it belongs to $A$) but is not in $\overline B$. We hence conclude that $D\not\subseteq \overline {F_{\min} (0, D)}$, and $0\notin \mathop{\mathrm{fri}}D$.
Our last calculus result relates the face relative interiors of sets and their closures in a more refined way than [\[it:closure\]](#it:closure){reference-type="ref" reference="it:closure"} of Proposition [Proposition 19](#prop:basic){reference-type="ref" reference="prop:basic"}.
**Proposition 28**. *Let $A,B$ be convex sets in a real topological vector space $X$ such that $A\subseteq B$, $\mathop{\mathrm{icr}}A=\mathop{\mathrm{fri}}A$ and $A\cap \mathop{\mathrm{fri}}B\neq \emptyset$. Then $\mathop{\mathrm{fri}}A\subseteq \mathop{\mathrm{fri}}B$.*
*Proof.* Consider $x_0\in \mathop{\mathrm{fri}}A=\mathop{\mathrm{icr}}A$. From the definition of the intrinsic core (see page [\[page:deficr\]](#page:deficr){reference-type="ref" reference="page:deficr"} or Definition 3.1 in [@mill-rosh]), for every $x\in A$ there exists $y\in A$ with $x_0\in(y,x)$. Take $\bar{x}\in A\cap \mathop{\mathrm{fri}}B$; then there exists $\bar{y}\in A\subseteq B$ such that $x_0\in (\bar{x},\bar{y})$. Now, since $\bar{x}\in\mathop{\mathrm{fri}}B$ and $\bar{y}\in B$, using Proposition [Proposition 19](#prop:basic){reference-type="ref" reference="prop:basic"}[\[it:segmentfri\]](#it:segmentfri){reference-type="ref" reference="it:segmentfri"} we have $[\bar{x},\bar{y})\subseteq\mathop{\mathrm{fri}}B$, which implies that $x_0\in \mathop{\mathrm{fri}}B$. Since $x_0\in\mathop{\mathrm{fri}}A$ was arbitrary, $\mathop{\mathrm{fri}}A\subseteq\mathop{\mathrm{fri}}B$. ◻
**Proposition 29** (Intersection property). *Let $C$ and $D$ be nonempty convex sets in a real topological vector space $X$ such that $\mathop{\mathrm{icr}}(C\cap D)=\mathop{\mathrm{fri}}(C\cap D)$. Then*
1. *If $C\cap \mathop{\mathrm{fri}}D\neq \emptyset$, then $\mathop{\mathrm{fri}}(C\cap D)\subseteq C\cap \mathop{\mathrm{fri}}D$.*
2. *If $\mathop{\mathrm{fri}}C\cap \mathop{\mathrm{fri}}D\neq \emptyset$, then $\mathop{\mathrm{fri}}(C\cap D)\subseteq \mathop{\mathrm{fri}}C\cap \mathop{\mathrm{fri}}D$.*
*Proof.*
1. This is a direct consequence of Proposition [Proposition 28](#icrfri){reference-type="ref" reference="icrfri"} with $A=C\cap D$ and $B=D$.
2. Since $\mathop{\mathrm{fri}}C\cap\mathop{\mathrm{fri}}D\neq \emptyset$, we have $C\cap\mathop{\mathrm{fri}}D\neq \emptyset$ and $D\cap\mathop{\mathrm{fri}}C\neq\emptyset$. Using the previous item twice, we obtain $\mathop{\mathrm{fri}}(C\cap D)\subseteq C\cap\mathop{\mathrm{fri}}D$ and $\mathop{\mathrm{fri}}(C\cap D)\subseteq D\cap\mathop{\mathrm{fri}}C$, which implies that $$\mathop{\mathrm{fri}}(C\cap D)\subseteq \big(C\cap\mathop{\mathrm{fri}}D\big)\cap \big(D\cap\mathop{\mathrm{fri}}C\big)=\mathop{\mathrm{fri}}C\cap\mathop{\mathrm{fri}}D.$$
◻
We finish this section with a characterisation of face relative interior that we then use to prove some relations between face relative interiors of convex sets and their closures in Proposition [Proposition 31](#prop:closures){reference-type="ref" reference="prop:closures"}.
**Theorem 30**. *Let $X$ be a topological vector space. Given a convex set $C\subset X$ and $x_0\in C$, we have $x_0\notin \mathop{\mathrm{fri}}C$ if and only if there exist $y\in C$ and a neighbourhood $V_y$ of $y$ such that for all $z\in V_y \cap C$ the direction $x_0 -z$ is not a feasible direction, that is, for all $\epsilon > 0$ we have $x_0+\epsilon(x_0-z)\notin C$.*
We provide a geometric illustration of Theorem [Theorem 30](#notinfri){reference-type="ref" reference="notinfri"} and its proof in Fig. [1](#fig:technical){reference-type="ref" reference="fig:technical"}.
![A technical construction in the proof of Theorem [Theorem 30](#notinfri){reference-type="ref" reference="notinfri"}.](pics/technical.pdf){#fig:technical width="50%"}
*Proof of Theorem [Theorem 30](#notinfri){reference-type="ref" reference="notinfri"}.* Suppose that $x_0\in C\setminus \mathop{\mathrm{fri}}C$. Then $C\nsubseteq \overline{F_{\min}(x_0,C)}$, and there exists $y\in C$ such that $y\notin \overline{F_{\min}(x_0,C)}$, which implies the existence of a neighbourhood $V_y$ of $y$ such that $V_y\cap F_{\min}(x_0,C)=\emptyset$. For any $z\in V_y \cap C$ and $\epsilon>0$ we have $x_0\in (z,x_0+\epsilon (x_0-z))$, which together with $z\notin F_{\min}(x_0,C)$ and [\[minface:union\]](#minface:union){reference-type="eqref" reference="minface:union"} yields that for all $\epsilon>0$ the point $x_0+\epsilon(x_0-z)\notin C$.
Conversely, for some $x_0\in C$ let $y\in C$ and a neighbourhood $V_y$ of $y$ be such that for all $\epsilon>0$ and $z\in V_y\cap C$ we have $x_0+\epsilon(x_0-z)\notin C$. Then for all $z\in V_y\cap C$ (as well as for $z\notin C$) we have $z\notin F_{\min}(x_0,C)$, and therefore $y\notin \overline{F_{\min}(x_0,C)}$, so that $C\nsubseteq\overline{F_{\min}(x_0,C)}$ and $x_0\notin\mathop{\mathrm{fri}}C$. ◻
**Proposition 31**. *Let $A, B$ be convex sets such that $A\subseteq B\subseteq \overline{A}$. Then we have:*
1. *[\[it:a\]]{#it:a label="it:a"} $\mathop{\mathrm{fri}}A \subseteq \mathop{\mathrm{fri}}B\subseteq \mathop{\mathrm{fri}}\overline{A}$.*
2. *[\[it:b\]]{#it:b label="it:b"} $\mathop{\mathrm{fri}}A=A\cap \mathop{\mathrm{fri}}B=A\cap \mathop{\mathrm{fri}}\overline{A}$.*
*Proof.* Note that given a point $x\in A$, since $A\subseteq B \subseteq \overline{A}$ we have by Corollary [Corollary 7](#cor:minfsubset){reference-type="ref" reference="cor:minfsubset"} that $F_{\min}(x,A) \subseteq F_{\min}(x,B)\subseteq F_{\min}(x,\overline{A})$.
To show [\[it:a\]](#it:a){reference-type="ref" reference="it:a"}, take $x\in \mathop{\mathrm{fri}}A$; then $A\subseteq \overline{F_{\min}(x,A)}$. Since $\overline{F_{\min}(x,A)}$ is closed, we have that $A\subseteq B \subseteq \overline{A}\subseteq \overline{F_{\min}(x,A)}\subseteq \overline{F_{\min}(x,B)} \subseteq \overline{F_{\min}(x,\overline{A})}$, proving that $x\in \mathop{\mathrm{fri}}B$ and $x\in \mathop{\mathrm{fri}}\overline{A}$.
To demonstrate [\[it:b\]](#it:b){reference-type="ref" reference="it:b"}, notice that due to [\[it:a\]](#it:a){reference-type="ref" reference="it:a"} we have that $\mathop{\mathrm{fri}}A \subseteq A\cap \mathop{\mathrm{fri}}B \subseteq A\cap \mathop{\mathrm{fri}}\overline{A}$. Now, we will prove that $A\cap \mathop{\mathrm{fri}}\overline{A}\subseteq \mathop{\mathrm{fri}}A$, which completes the proof. Take $x\in A\cap \mathop{\mathrm{fri}}\overline{A}$ and suppose that $x\notin \mathop{\mathrm{fri}}A$. Using Theorem [Theorem 30](#notinfri){reference-type="ref" reference="notinfri"}, there exist $y\in A$ and $r>0$ such that for all $z\in A\cap B(y,r)$ and all $\epsilon>0$ we have $x+\epsilon (x-z)\notin A$. This implies that $x+\epsilon(x-y)\notin \overline{A}$ for all $\epsilon>0$, and hence that $x\notin \mathop{\mathrm{fri}}\overline{A}$, which contradicts our assumption. We conclude that $\mathop{\mathrm{fri}}A=A\cap \mathop{\mathrm{fri}}\overline{A}$. ◻
# Equivalence classes of faces {#sec:equiv}
Recall that any convex set in a real vector space has a beautiful and useful representation as the disjoint union of the intrinsic cores of its faces. This result is tied to the fact that any point of a convex set belongs to the intrinsic core of its minimal face and that the intersection of the intrinsic cores of any pair of distinct faces is empty. Neither the quasi-relative interior nor the face relative interior satisfies these properties. In both cases, two distinct faces may have a nonempty intersection of their quasi- or face relative interiors.
It is possible, however, to obtain a disjoint decomposition of a convex set into the face relative interiors of what we call closure-equivalent classes of its faces. Such a decomposition is not available in terms of the quasi-relative interiors.
**Definition 32**. Given the convex sets $A,B\subseteq X$, where $X$ is a topological vector space, we say that $A$ is closure equivalent to $B$, denoted by $A \simeq B$ if $\overline{A}=\overline{B}$.
In this context we can consider the set $\mathcal{F}_C$ of closure-equivalence classes of faces of a convex set $C$: two faces $A$ and $B$ of $C$ belong to the same class $[F]$ if and only if $\overline{A}=\overline{B}$.
**Proposition 33**. *Let $C\subset X$ be a convex set in a separable Banach space $X$. Let $A,B\subseteq C$ be two faces of $C$ such that $A$ and $B$ are not closure equivalent, that is, $\overline{A}\neq \overline{B}$. Then $\mathop{\mathrm{fri}}A\cap \mathop{\mathrm{fri}}B=\emptyset$.*
*Proof.* Suppose on the contrary that there exist two faces $A$ and $B$ of $C$ such that $\bar A \neq \bar B$, but there is an $x\in \mathop{\mathrm{fri}}A\cap \mathop{\mathrm{fri}}B$. This implies, by definition, that $x\in A\cap B$. Now, since $\overline{A}\neq \overline{B}$, there exists $y\in A\cup B$ such that $y\notin \overline{A}\cap \overline{B}$. Note that $\overline{F_{\min}(x,A\cap B)}\subseteq \overline{A\cap B}\subseteq \overline{A}\cap \overline{B}$, hence $y\notin \overline{F_{\min}(x,A\cap B)}$. Suppose, without loss of generality, that $y\in A$. Since $F_{\min}(x,A\cap B)$ is a face of $A$ containing $x$, we have $F_{\min}(x,A)\subseteq F_{\min}(x,A\cap B)$, so $y\notin \overline{F_{\min}(x,A)}$ and therefore $A\nsubseteq \overline{F_{\min}(x,A)}$, contradicting that $x\in \mathop{\mathrm{fri}}A$. ◻
Note that Proposition [Proposition 33](#partition){reference-type="ref" reference="partition"} is not valid when the face relative interior is replaced by the quasi-relative interior, as we demonstrate in the next example. First we quote a well-known fact about the intrinsic cores of minimal faces (see [@mill-rosh Corollary 3.14]).
**Proposition 34**. *Let $C$ be a convex subset of a real vector space $X$, and let $F$ be a face of $C$. Then $F = F_{\min}(x,C)$ if and only if $x \in \mathop{\mathrm{icr}}F$.*
**Example 35**. Suppose $C\subset l_2$ is like in Example [\[exa1\]](#exa1){reference-type="ref" reference="exa1"}, that is, $C = \{x\, |\, \|x\|_1 \leq 1\}$. Take $\bar{x}\in C$ such that $\|\bar{x}\|_1=1$, and such that $\bar x$ has an infinite number of nonzero entries, and assume for simplicity that all entries of $\bar x$ are nonnegative. We have demonstrated earlier (see Example [\[exa1\]](#exa1){reference-type="ref" reference="exa1"}) that in this case $$F_{\min }(\bar x,C) \subseteq C\cap \left\{x\,|\, \sum_{i=1}^\infty x_i = 1,\; x_i \geq 0 \; \forall i \in \mathbb{N}\right\},$$ and that $\overline{F_{\min}(\bar x,C)}\neq C$. By Proposition [Proposition 34](#prop:icrminf){reference-type="ref" reference="prop:icrminf"} $\bar x\in \mathop{\mathrm{icr}}F_{\min}(\bar x,C)$, and $\mathop{\mathrm{icr}}F_{\min}(\bar x,C) \subseteq \mathop{\mathrm{qri}}F_{\min}(\bar x,C)$ by Theorem [Theorem 3](#thm:interiorsandwich){reference-type="ref" reference="thm:interiorsandwich"}. We hence have $\bar x \in \mathop{\mathrm{qri}}F_{\min}(\bar x,C)$. At the same time we know from the representation [\[eq:qriEx1\]](#eq:qriEx1){reference-type="eqref" reference="eq:qriEx1"} that $\bar x\in \mathop{\mathrm{qri}}C$. We conclude that $\bar{x}\in \mathop{\mathrm{qri}}F_{\min}(\bar{x},C)\cap \mathop{\mathrm{qri}}C$, hence the quasi-relative interiors of two faces that are not closure equivalent may have a nonempty intersection.
We can therefore obtain the representation of a convex set as the disjoint union of the face relative interiors of its closure-equivalent faces, as we demonstrate next.
**Theorem 36**. *Let $C$ be a convex subset of a topological vector space $X$, and let $\mathcal{F}_C$ be the set of closure-equivalent classes of faces of $C$. Then $$\label{eq:disjoint}
C = \mathop{\mathrm{\dot \bigcup}}_{[F]\in \mathcal F_C} \mathop{\mathrm{fri}}F,$$ where by dot over the union sign we denote that the union is disjoint.*
*Proof.* Since for any $x\in C$ we have $x\in \mathop{\mathrm{icr}}F_{\min}(x,C)$ by Proposition [Proposition 34](#prop:icrminf){reference-type="ref" reference="prop:icrminf"}, and $\mathop{\mathrm{icr}}F_{\min}(x,C) \subseteq \mathop{\mathrm{fri}}F_{\min}(x,C)$ by Theorem [Theorem 3](#thm:interiorsandwich){reference-type="ref" reference="thm:interiorsandwich"}, we conclude that $$C \subseteq \bigcup_{[F]\in \mathcal F_C} \mathop{\mathrm{fri}}F,$$ moreover, since for any face $F$ of $C$ we have $\mathop{\mathrm{fri}}F \subseteq F\subseteq C$, we sharpen this inclusion to $$\label{eq:uniontechnical}
C = \bigcup_{[F]\in \mathcal F_C} \mathop{\mathrm{fri}}F.$$ Finally, from Proposition [Proposition 33](#partition){reference-type="ref" reference="partition"} we conclude that the union in [\[eq:uniontechnical\]](#eq:uniontechnical){reference-type="eqref" reference="eq:uniontechnical"} is disjoint, arriving at [\[eq:disjoint\]](#eq:disjoint){reference-type="eqref" reference="eq:disjoint"}. ◻
Recall that any convex set $C$ in a real vector space $X$ decomposes into a disjoint union of the intrinsic cores of its faces. This decomposition may coincide with the one in Theorem [Theorem 36](#thm:decompose){reference-type="ref" reference="thm:decompose"} (for instance, in the finite-dimensional setting, when all interior notions coincide), but generally speaking these two decompositions are different. To see how they compare in the case when the intrinsic cores don't match the face relative interiors, we revisit Examples [Example 9](#eg:02a){reference-type="ref" reference="eg:02a"} and [Example 11](#eg:icrneqfri){reference-type="ref" reference="eg:icrneqfri"}. The former is a more intricate case than the latter, so we start with a simple case.
**Example 37**. Let $C$ be as in Example [Example 11](#eg:icrneqfri){reference-type="ref" reference="eg:icrneqfri"}. There are only three minimal faces: $\{u\}$, $L$, and the entire set $C$. We have $$C = \mathop{\mathrm{icr}}\{u\}\cup \mathop{\mathrm{icr}}L \cup \mathop{\mathrm{icr}}C = \{u\}\cup L \cup (C\setminus (L\cup \{u\})).$$ At the same time, since the only point $x\in C$ for which $C\nsubseteq \overline{F_{\min}(x,C)}$ is $x=u$, we conclude that the decomposition of Theorem [Theorem 36](#thm:decompose){reference-type="ref" reference="thm:decompose"} gives $$C = \{u\}\cup (C\setminus \{u\}).$$
**Example 38**. Let $C$ be as in Example [Example 9](#eg:02a){reference-type="ref" reference="eg:02a"}. We have determined that $$\mathop{\mathrm{fri}}C = \{x\in C \,|\, \|x\|_2 <1, \; x_i>0\; \forall\, i \in \mathbb{N}\}.$$ Observe that for any nonempty $N\subseteq \mathbb{N}$ the set $$F_N = \{x\in l_2\,|\, \|x\|_2\leq 1, x_i \geq 0 \;\forall i \in \mathbb{N}\setminus N, \, x_i = 0 \;\forall i \in N\}$$ is a face of $C$, and using the same argument based on truncated sequences as in the discussion of Example [Example 9](#eg:02a){reference-type="ref" reference="eg:02a"}, we conclude that for any nonempty $N\subseteq \mathbb{N}$ $$\mathop{\mathrm{fri}}F_N = \{x\in l_2\,|\, \|x\|_2< 1, x_i > 0 \;\forall i \in \mathbb{N}\setminus N, \, x_i = 0 \;\forall i \in N\}.$$ The only remaining points are the ones of unit norm, and we have already determined that in this case $\mathop{\mathrm{fri}}F_{\min}(x,C) = F_{\min}(x,C) = \{x\}$. We hence have a decomposition of $C$ as follows, $$C = \bigcup_{x\in C, \|x\|_2 =1} \{x\} \cup \bigcup_{N\subseteq \mathbb{N}} \{x\in l_2\,|\, \|x\|_2< 1, x_i > 0 \, \forall i \in \mathbb{N}\setminus N, \, x_i = 0 \, \forall i \in N\}.$$ Recall that in this case $\mathop{\mathrm{icr}}C = \emptyset$, and so $C$ is never the minimal face of $x\in C$. Based on the observation that minimal faces are determined by the comparative speed with which the sequences that define the points in $C$ converge, the corresponding decomposition of the set $C$ via the intrinsic cores is different from the one that we have just obtained via the face relative interiors.
# Conclusions
We have introduced a new notion of face relative interior for convex sets in topological vector spaces. Face relative interior can be seen as an alternative to the quasi-relative interior that is better aligned with the facial structure of convex sets.
Even though we have placed the new notion in the context of other research via the sandwich theorem and examples, and provided an extensive discussion of calculus rules, there is still much to be understood. For instance, due to Theorem [Theorem 3](#thm:interiorsandwich){reference-type="ref" reference="thm:interiorsandwich"} we always have $\mathop{\mathrm{fri}}C \subseteq \mathop{\mathrm{qri}}C$; moreover, in separable Banach spaces both $\mathop{\mathrm{fri}}C$ and $\mathop{\mathrm{qri}}C$ are nonempty for any convex set $C$. However, we do not know whether there exists a convex set in some topological vector space for which the face relative interior is empty and the quasi-relative interior is not. Another challenge is that the intersection property given in Proposition [Proposition 29](#prop:intersection){reference-type="ref" reference="prop:intersection"} is only proved under the unusual assumption that the face relative interior of the intersection coincides with the intrinsic core; since we know that without this assumption the result may fail (as seen in Example [Example 27](#eg:intersectionfails){reference-type="ref" reference="eg:intersectionfails"}), we wonder what the weakest condition is that guarantees this property to hold.
We have not touched on the topic of duality in this work, which is an exciting subject for future research. We anticipate that the face relative interior will find practical use whenever the facial structure of a convex set is easy to access; however, it remains to be seen whether any interesting applications emerge in convex geometry and optimisation.
# Acknowledgements
We are grateful to Prof. David Yost for pointing out some relevant references to us and for productive discussions in the initial stages of working on this project.
We are also grateful to the Australian Research Council for the financial support provided by means of Discovery Projects "An optimisation-based framework for non-classical Chebyshev approximation", DP180100602 and "Geometry in projection methods and fixed-point theory" DP200100124.
[^1]: Deakin University, Melbourne, Australia
[^2]: UNSW Sydney, Australia
---
abstract: |
In this paper, we study the nearly Gorenstein property of the projective closure of numerical semigroups. We also study the nearly Gorenstein property of the associated graded ring of simplicial affine semigroups. In addition, in the case of gluing of numerical semigroups, we answer the question posed by Herzog-Hibi-Stamate (Semigroup Forum 103:550--566, 2021).
address: Discipline of Mathematics, IISER Bhopal, Madhya Pradesh, India.
author:
- Pranjal Srivastava
date:
-
-
title: On Nearly Gorenstein Simplicial Semigroup Algebras
---
# Introduction
Let $\mathbb{N}$ denote the set of non-negative integers and $\mathbb{K}$ denote a field. Let $(A,\mathfrak{a},\mathbb{K})$ be a positively graded $\mathbb{K}$-algebra which is Cohen-Macaulay and possesses a canonical module $\omega_A$. We recall that if $N$ is any $A$-module, its trace is the ideal $\mathrm{tr}_A(N)=\sum_{\phi \in \mathrm{Hom}_A(N,A)}\phi(N)$ in $A$. In [@Trace], the ring $A$ is called nearly Gorenstein when $\mathfrak{a} \subseteq\mathrm{tr}_{A}(\omega_{A})$, and the residue of $A$ is defined as the length of the module $A/\mathrm{tr}_A(\omega_{A})$. The trace ideal of $\omega_A$ has been used to define the class of nearly Gorenstein rings. It is well known that $\mathrm{tr}(\omega_A)$ defines the non-Gorenstein locus of $A$ (see [@Trace Lemma 2.1]). In this paper, we study the trace ideal of the canonical module of some simplicial affine semigroup rings.
Numerical semigroup rings are one-dimensional domains and hence Cohen-Macaulay; these are the coordinate rings of affine monomial curves. A lot of interesting studies have been undertaken by several authors from the viewpoints of singularities, homology and also from purely semigroup-theoretic aspects. In [@Residue], Herzog et al. studied the trace of the canonical ideal of numerical semigroup rings and characterized nearly Gorenstein numerical semigroups using the residue of numerical semigroups. Moscariello and Strazzanti also study nearly Gorenstein affine monomial curves (see [@Nearly-Almost]). The projective closure of a numerical semigroup ring is defined by an affine semigroup in $\mathbb{N}^2$ (see Section [2](#PCN){reference-type="ref" reference="PCN"}). The Cohen-Macaulay property and many other properties of numerical semigroup rings are not preserved under the operation of projective closure. In this paper, we explore the trace of the canonical ideal of projective closures of numerical semigroup rings. We study when the projective closure of a nearly Gorenstein numerical semigroup is again nearly Gorenstein, using a Gröbner basis of the defining ideal of the projective closure. Recently, Miyashita studied nearly Gorenstein projective monomial curves of small codimension (see [@Miyashita]).
For a numerical semigroup $\Gamma$, we prove that projective closure $\mathbb{K}[\overline{\Gamma}]$ of numerical semigroup ring $\mathbb{K}[\Gamma]$ is nearly Gorenstein if and only if $\mathbb{K}[\Gamma]$ is nearly Gorenstein under some suitable Gröbner basis condition on defining ideal of $\mathbb{K}[\Gamma]$. An important class of numerical semigroups supporting this result is the one defined by an arithmetic sequence; in other words, if $\Gamma$ is a numerical semigroup, minimally generated by an arithmetic sequence, then $\mathbb{K}[\overline{\Gamma}]$ is nearly Gorenstein (See Corollary [Corollary 5](#BPAS){reference-type="ref" reference="BPAS"}).
Affine simplicial semigroups provide a natural generalization of numerical semigroups, whose theory has been developed mainly in connection with the study of curve singularities. Many authors have studied the properties of the affine semigroup ring $\mathbb{K}[\Gamma]$ in terms of the properties of the affine semigroup $\Gamma$; see [@Bruns-Herzog], [@Affine]. In this paper, we study the nearly Gorenstein property of the associated graded ring of simplicial affine semigroups. We prove that, under suitable conditions, $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ is nearly Gorenstein if and only if $\mathbb{K}[\Gamma]$ is nearly Gorenstein (see Theorem [Theorem 14](#CM){reference-type="ref" reference="CM"}).
Herzog et al. (see [@Residue]) defined the canonical trace ideal and the residue for numerical semigroup rings. They defined the residue of a numerical semigroup $\Gamma$, which measures how far $\Gamma$ is from being symmetric. They asked the following question.
**Question 1**. *([@Residue Question 2.3]) Let $A=\mathbb{K}[\Gamma]$, is it true that $$\begin{aligned}
l(A/\mathrm{tr}_A(\omega_{A}))\leq
g(\Gamma)-n(\Gamma)?
\end{aligned}$$ where $g(\Gamma)=|\mathbb{N}\setminus \Gamma|, n(\Gamma)=|\{s \in \Gamma : s < \mathrm{F}(\Gamma)\}|$ denote the number of gaps and the number of non-gaps, respectively, and $\mathrm{F}(\Gamma)$ denotes the Frobenius number of $\Gamma$, that is, the largest integer in $\mathbb{N}\setminus \Gamma$.*
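As a small illustration of these quantities (the semigroup $\langle 3,5,7\rangle$ is an assumed example, not taken from the paper), the following Python sketch computes the gaps, $g(\Gamma)$, $n(\Gamma)$ and $\mathrm{F}(\Gamma)$ by a direct membership test.

```python
# Gaps, genus, Frobenius number and n(Gamma) for Gamma = <3, 5, 7> (assumed example).
gens = [3, 5, 7]
bound = max(gens) ** 2                        # comfortably larger than the Frobenius number
member = [False] * (bound + 1)                # member[s] is True iff s lies in Gamma
member[0] = True
for s in range(1, bound + 1):
    member[s] = any(s >= g and member[s - g] for g in gens)

gaps = [s for s in range(bound + 1) if not member[s]]
F = max(gaps)                                 # Frobenius number F(Gamma)
g_Gamma = len(gaps)                           # genus g(Gamma) = |N \ Gamma|
n_Gamma = sum(member[s] for s in range(F))    # n(Gamma) = #{s in Gamma : s < F(Gamma)}
print(gaps, F, g_Gamma, n_Gamma)              # [1, 2, 4] 4 3 2
```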
They show that Question [Question 1](#Ques){reference-type="ref" reference="Ques"} holds for $3$-generated numerical semigroups (see [@Residue Proposition 3.2]). Recently, Herzog and Kumashiro in [@Upper; @bound] answered this question for all numerical semigroups $\Gamma$ with $\mathrm{type}(\Gamma)\leq 3$. They also give an example where this question has a negative answer (see [@Upper; @bound Example 2.5]). In this paper, we discuss this question for gluings of numerical semigroups (see Definition [Definition 15](#Gluing-Def){reference-type="ref" reference="Gluing-Def"}). We show that Question [Question 1](#Ques){reference-type="ref" reference="Ques"} holds for a gluing of numerical semigroups provided the glued numerical semigroups satisfy Question [Question 1](#Ques){reference-type="ref" reference="Ques"}. Rosales [@JCR] introduced the concept of gluing of numerical semigroups, which was motivated by Delorme's work on complete intersection numerical semigroups. This technique was used extensively to create many examples of set-theoretic complete intersections and ideal-theoretic complete intersections of affine and projective varieties. Arslan et al. [@GHM] introduced the concept of *nice gluing* to give infinitely many families of $1$-dimensional local rings with non-Cohen-Macaulay tangent cone and non-decreasing Hilbert function. An answer to Question [Question 1](#Ques){reference-type="ref" reference="Ques"} in the case of gluing helps us create families of numerical semigroups of arbitrary embedding dimension in which Question [Question 1](#Ques){reference-type="ref" reference="Ques"} holds.
Şahin in [@Sahin] defines an operation called "lifting of monomial curves\" (see Subsection [4.1](#lift){reference-type="ref" reference="lift"} for the definition) and uses it to spread a special property of a monomial curve within an infinite family of examples. In this paper, we prove that if $\Gamma$ satisfies inequality (1.1), then its $k$-lifting $\Gamma_k$ also satisfies inequality (1.1).
# Nearly Gorenstein property of projective closure of numerical semigroups {#PCN}
Let us define the projective closure of numerical semigroup rings. Let $e\geq 3$ and $\mathbf{\underline n} = (n_{1}, \ldots, n_{e})$ be a sequence of $e$ distinct positive integers with $\gcd(\mathbf{\underline n})=1$. Let us assume that the numbers $n_{1}, \ldots, n_{e}$ generate the numerical semigroup $\Gamma(n_1,\ldots, n_e) = \langle n_{1}, \ldots , n_{e} \rangle =
\lbrace\sum_{j=1}^{e}z_{j}n_{j}\mid z_{j}\in \mathbb{N}\rbrace$ minimally, that is, if $n_i=\sum_{j=1}^{e}z_{j}n_{j}$ for some non-negative integers $z_{j}$, then $z_{j}=0$ for all $j\neq i$ and $z_{i}=1$. We often write $\Gamma$ in place of $\Gamma(n_1,\ldots, n_e)$, when there is no confusion regarding the defining sequence $n_{1}, \ldots, n_{e}$. Let $\eta: R = \mathbb{K}[x_1,\,\ldots,\, x_e]\rightarrow \mathbb{K}[t]$ be the mapping defined by $\eta(x_i)=t^{n_i},\,1\leq i\leq e$. The ideal $\ker (\eta) =I(\Gamma)$ is called the defining ideal of $\Gamma(n_1,\ldots, n_e)$ and it defines the affine monomial curve $\{(u^{n_{1}},\ldots , u^{n_{e}})\in \mathbb{A}^{e}_{\mathbb{K}}\mid u\in \mathbb{K}\} =:
C(n_{1},\ldots,n_{e})$ (or simply $C(\Gamma)$). We write $\mathbb{K}[x_{1},\ldots,x_{e}]/I(\Gamma) =: \mathbb{K}[\Gamma(n_1,\ldots, n_e )]$ (or simply $\mathbb{K}[\Gamma]$), which is called the semigroup ring for the semigroup $\Gamma(n_1,\ldots, n_e )$. It is known that $I(\Gamma)$ is generated by the binomials $x^{a}-x^{b}$, where $a$ and $b$ are $e$-tuples of non-negative integers with $\eta(x^{a})=\eta(x^{b})$.
Let $n_{e}>n_{i}$ for all $i<e$, and $n_{0}=0$. We define the semigroup $\overline{\Gamma(n_{1},\ldots n_{e})}
= \langle \{(n_{i},n_{e}-n_{i})\mid 0\leq i\leq e\} \rangle = \lbrace \sum_{i=0}^{e}z_{i}(n_{i},n_{e}-n_{i}) \mid
z_{i}\in\mathbb{N}\rbrace$, often written as $\overline{\Gamma}$. Let $\eta^{h}:S=\mathbb{K}[x_{0},\ldots,x_{e}]\longrightarrow \mathbb{K}[s,t]$ be the $\mathbb{K}$-algebra map defined as $\eta^{h}(x_{i})=t^{n_{i}}s^{n_{e}-n_{i}}, 0\leq i \leq e$ and $\ker(\eta^{h}) = \overline{I(\Gamma)}$. The homogenization of $I(\Gamma)$ with respect to the variable $x_{0}$ is $\overline{I(\Gamma)}$, which defines $\{[(v^{n_{e}}: v^{n_{e}-n_{1}}u^{n_{1}}: \cdots: u^{n_{e}})]\in\mathbb{P}^{e}_{\mathbb{K}}\mid u, v\in \mathbb{K}\} =: \overline{C(n_{1},\ldots,n_{e})}$ (or simply $\overline{C(\Gamma)}$), the projective closure of the affine monomial curve $C(n_{1},\ldots,n_{e})$. The $\mathbb{K}$-algebra $\mathbb{K}[x_{0},\ldots,x_{e}]/\overline{I(\Gamma)} =:
\mathbb{K}[\overline{\Gamma(n_1,\ldots, n_e )}]$ (or simply $\mathbb{K}[\overline{\Gamma}]$) denotes the coordinate ring. It can be proved easily that $\overline{C(n_{1},\ldots,n_{e})}$ is a projective curve, which is said to be arithmetically Cohen-Macaulay if the vanishing ideal $\overline{I(\Gamma)}$ is a Cohen-Macaulay ideal.
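As a small illustration of the construction (the semigroup $\langle 3,4,5\rangle$ and the chosen binomial are assumed for this example, not taken from the paper), the following Python sketch lists the generators $(n_i, n_e-n_i)$ of $\overline{\Gamma}$ and checks that homogenising a non-homogeneous binomial of $I(\Gamma)$ with $x_0$ yields an element of $\ker(\eta^h)$.

```python
# Projective closure of Gamma = <3, 4, 5> (assumed example): generators of the
# semigroup Gamma-bar and a homogenisation check for eta^h.
n = [0, 3, 4, 5]                      # n_0 = 0 together with the generators; n_e = 5
ne = n[-1]
gamma_bar_gens = [(ni, ne - ni) for ni in n]
print(gamma_bar_gens)                 # [(0, 5), (3, 2), (4, 1), (5, 0)]

def eta_h(exponents):
    """Image of x_0^{a_0} ... x_e^{a_e} under eta^h, returned as the pair of exponents of t and s."""
    t = sum(a * ni for a, ni in zip(exponents, n))
    s = sum(a * (ne - ni) for a, ni in zip(exponents, n))
    return (t, s)

# x_1^3 - x_2 x_3 lies in I(Gamma) since 3*3 = 4 + 5; its homogenisation is x_1^3 - x_0 x_2 x_3
assert eta_h([0, 3, 0, 0]) == eta_h([1, 0, 1, 1])   # both map to t^9 s^6
```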
**Lemma 2**. *Let $(R,\mathfrak{m})$ be a Cohen-Macaulay graded ring with canonical module $\omega_{R}$. Let $\phi:(R,\mathfrak{m}) \rightarrow (S,\mathfrak{n})$ be a ring homomorphism of Cohen-Macaulay graded rings satisfying*
(i) *$\phi(R_i)\subset S_i$ for all $i \in \mathbb{Z}$,*
(ii) *$\phi(\mathfrak{m})\subset \mathfrak{n}$,*
(iii) *$S$ is a finite graded $R$-module.*
*Then $\omega_{S} \cong \mathrm{Ext}^{t}_{R}(S,\omega_{R})$, where $t=\mathrm{dim}(R)-\mathrm{dim}(S)$.*
*Proof.* See Proposition 3.6.12 in [@Bruns-Herzog] ◻
*Remark 3*. Note that a graded polynomial ring $R[x_0]=\mathbb{K}[x_0,\dots,x_e]$ over a field $\mathbb{K}$ is Gorenstein, hence by [@Bruns-Herzog Proposition 3.6.11] we have $\omega_{R[x_0]}\cong R[x_0]$. If $\mathbb{K}[\overline{\Gamma}]$ is Cohen-Macaulay, then by Lemma [Lemma 2](#Can-Exist){reference-type="ref" reference="Can-Exist"} it admits a canonical module $\omega_{\mathbb{K}[\overline{\Gamma}]}$ and $\omega_{\mathbb{K}[\overline{\Gamma}]}\cong \mathrm{Ext}_{R[x_0]}^{e-1}(R[x_0]/\overline{I(\Gamma)},R[x_0])$.
**Theorem 4**. *Let $\Gamma$ be a numerical semigroup such that $\mathbb{K}[\overline{\Gamma}]$ is arithmetically Cohen-Macaulay. Suppose there exists a Gröbner basis $G$ of the defining ideal $I(\Gamma)$ of $\mathbb{K}[\Gamma]$, with respect to the degree reverse lexicographic ordering with $x_{i}>x_{e}$ for $i=1,\dots,e-1$, such that $x_{e}$ belongs to the support of all non-homogeneous elements of $G$. Then $\mathbb{K}[\Gamma]$ is nearly Gorenstein if and only if $\mathbb{K}[\overline{\Gamma}]$ is nearly Gorenstein.*
*Proof.* Let $G=\{f_{1},\dots,f_{l},g_{1},\dots,g_{r}\}$ be a Gröbner basis of $I(\Gamma)$, with respect to the degree reverse lexicographic order $x_{i}>x_{e}$. Let $f_{1},\dots, f_{l}$ be homogeneous binomials and $g_{1},\dots, g_{r}$ be non-homogeneous elements. It follows from Lemma 2.1 in [@Herzog-Stamate] that $G^{h}$ is a Gröbner basis of $\overline{I(\Gamma)}$, with respect to the monomial order induced by $x_{i}>x_{0}$. Since $\mathbb{K}[\overline{\Gamma}]$ is arithmetically Cohen-Macaulay, $x_{e}$ does not divide any leading monomial of $G$. By our assumption, the terms of $g_{i}$ which are not the leading term are divisible by $x_{e}$, for $i=1,\dots,r$.
Consider the natural map $\pi : R[x_0]= \mathbb{K}[x_{0},\dots,x_{e}] \longrightarrow \mathbb{K}[x_{1},\dots,x_{e-1}]$, given by $\pi(x_{e})=0,\pi(x_{0})=0$ and $\pi(x_{i})=x_{i}$ for $i=1,\dots,e-1$. Note that $\frac{\mathbb{K}[x_{0},\dots,x_{e}]}{(G^{h},x_{0},x_{e})} \cong \frac{\mathbb{K}[x_{1},\dots,x_{e-1}]}{\pi(G^{h})}$ as $S/x_{0}$-module. Since $\pi(g_{i})$ is a monomial which is not divisible by $x_{0}$, $x_{e}$ and since any term of $f_{i}$ is not divisible by $x_{0}$, we have $\pi(G^{h})=\pi(G).$ Now, $x_{e}$ being regular in $\frac{\mathbb{K}[x_{0},\dots,x_{e}]}{G^{h}}$, we have $$\omega_{\mathbb{K}[\overline{\Gamma}]}\cong \mathrm{Ext}_{R[x_0]}^{e-1}(R[x_0]/G^h,R[x_0])= \mathrm{Ext}_{R[x_0]}^{e-1}\left(\frac{\mathbb{K}[x_{0},\dots,x_{e}]}{(G^{h},x_{e})}, R[x_0]\right).$$ Moreover, since $x_{0}$ is regular in $S/(x_{e},G^{h})$, we can write $$\begin{aligned}
\mathrm{Ext}_{R[x_0]}^{e-1}\left(\frac{\mathbb{K}[x_{0},\dots,x_{e}]}{(G^{h},x_{e})}, R[x_0]\right) \cong&
\mathrm{Ext}_{R}^{e-1}\left(\frac{\mathbb{K}[x_{0},\dots,x_{e}]}{(G^{h},x_{e},x_0)}, R[x_0]\right) \\ \cong&
\mathrm{Ext}_{R}^{e-1}\left(\frac{\mathbb{K}[x_{1},\dots,x_{e-1}]}{(\pi(G^{h}))}, R[x_0]\right)\\ \cong& \mathrm{Ext}_{R}^{e-1}\left(\frac{\mathbb{K}[x_{1},\dots,x_{e-1}]}{(\pi(G))}, R[x_0]\right) \\ \cong&\mathrm{Ext}_{R}^{e-1}\left(\frac{\mathbb{K}[x_{1},\dots,x_{e}]}{(G,x_e)}, R[x_0]\right)\\ \cong &\mathrm{Ext}_{R}^{e-1}\left(\frac{\mathbb{K}[x_{1},\dots,x_{e}]}{G}, R[x_0]\right)\cong \omega_{\mathbb{K}[\Gamma]}R[x_0].
\end{aligned}$$ which is true since $x_{e}$ is both $\mathbb{K}[x_{1},\dots,x_{e}]$-regular and $\mathbb{K}[x_{1},\dots,x_{e}]/G$-regular. Hence $\omega_{\mathbb{K}[\overline{\Gamma}]}\cong \omega_{\mathbb{K}[\Gamma]}R[x_0]$. Every element of $\mathrm{Hom}_{R[x_0]}(\omega_{\mathbb{K}[\overline{\Gamma}]},R[x_0])$ can be obtained from an element of $\mathrm{Hom}_{R}(\omega_{\mathbb{K}[\Gamma]},R)$ by composition with $f:R \rightarrow R[x_0]$. Hence, $$\mathrm{tr}_{R[x_0]}(\omega_{\mathbb{K}[\overline{\Gamma}]})=\sum_{\phi \in \mathrm{Hom}_{R[x_0]}(\omega_{\mathbb{K}[\overline{\Gamma}]},R[x_0])}\phi(\omega_{\mathbb{K}[\overline{\Gamma}]})\cong\sum_{\phi \in \mathrm{Hom}_{R}(\omega_{\mathbb{K}[\Gamma]},R)}\phi(\omega_{\mathbb{K}[\Gamma]})=\mathrm{tr}_{R}(\omega_{\mathbb{K}[\Gamma]})R[x_0].$$ Therefore, if $\mathbb{K}[\Gamma]$ is nearly Gorenstein, then $\mathfrak{m} \subseteq \mathrm{tr}_{R}(\omega_{\mathbb{K}[\Gamma]})$, and by the above equality $\mathfrak{m}[x_0] \subseteq \mathrm{tr}_{R[x_0]}(\omega_{\mathbb{K}[\overline{\Gamma}]})$, so $\mathbb{K}[\overline{\Gamma}]$ is nearly Gorenstein; the converse follows in the same way. ◻
**Corollary 5**. *Let $\Gamma=\Gamma(n_{1},\dots,n_{e})$ be a numerical semigroup, minimally generated by an arithmetic sequence $n_{1}<n_{2}<\dots<n_{e}$, such that $n_{i}=n_{1}+(i-1)d, 1 \leq i \leq e, e\leq n_1$ and $n_{1}=q(e-1)+r', r' \in [1,e]$. Then $\mathbb{K}[\overline{\Gamma}]$ is nearly Gorenstein.*
*Proof.* The set $G=\{x_{i}x_{j}-x_{i-1}x_{j+1} \vert 2 \leq i \leq j \leq e-1\} \cup \{x_{1}^{q+d}x_{i}-x_{r'+i}x_{e}^{q} \vert 1 \leq i \leq e-r'\}$ is a Gröbner basis of the defining ideal $I(\Gamma)$ of $C(\Gamma)$, with respect to the degree reverse lexicographic ordering induced by $x_{1}>x_{2}>\cdots>x_{e}$, and $in_{<}(G)=\{x_{i}x_{j} \vert 2 \leq i \leq j \leq e-1\} \cup \{x_{1}^{q+d}x_{i} \vert 1 \leq i \leq e-r'\}$ (see [@Bermejo]). Note that $x_{e}$ belongs to the support of all non-homogeneous elements of $G$. By Theorem [Theorem 4](#BSP){reference-type="ref" reference="BSP"}, and [@Residue Proposition 2.4], $\mathbb{K}[\overline{\Gamma}]$ is nearly Gorenstein. ◻
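The following Python sketch (an assumed example, not part of the paper) instantiates the set $G$ above for the arithmetic sequence $9,11,13,15$ and checks two facts used in the proof: each listed binomial lies in the toric ideal $I(\Gamma)$, i.e. its two monomials have the same $\Gamma$-degree, and $x_e$ appears in the support of every non-homogeneous binomial. It does not verify the Gröbner basis property itself, which is taken from [@Bermejo].

```python
# Sanity check of the binomials of G for an assumed arithmetic sequence.
e, n1, d = 4, 9, 2                      # generators 9, 11, 13, 15
n = [n1 + i * d for i in range(e)]      # n[i] is the generator n_{i+1}
q, r = divmod(n1, e - 1)                # write n1 = q(e-1) + r' with r' in [1, e-1]
if r == 0:
    q, r = q - 1, e - 1

def gamma_degree(expo):                 # expo[i] = exponent of x_{i+1}
    return sum(a * ni for a, ni in zip(expo, n))

def monomial(*pairs):                   # pairs are (variable index, power), 1-based
    expo = [0] * e
    for idx, power in pairs:
        expo[idx - 1] += power
    return expo

homogeneous = [(monomial((i, 1), (j, 1)), monomial((i - 1, 1), (j + 1, 1)))
               for i in range(2, e) for j in range(i, e)]
non_homogeneous = [(monomial((1, q + d), (i, 1)), monomial((r + i, 1), (e, q)))
                   for i in range(1, e - r + 1)]

for lhs, rhs in homogeneous + non_homogeneous:
    assert gamma_degree(lhs) == gamma_degree(rhs)     # the binomial lies in I(Gamma)
for _, rhs in non_homogeneous:
    assert rhs[e - 1] > 0                             # x_e supports the trailing term
print("all binomials lie in I(Gamma); x_e appears in every non-homogeneous one")
```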
# Nearly Gorenstein associated graded rings of simplicial affine semigroups
Let $\Gamma$ be an affine semigroup, i.e., a finitely generated semigroup which for some $r$ is isomorphic to a subsemigroup of $\mathbb{Z}^r$ containing zero. Suppose that $\Gamma$ is a simplicial affine semigroup, fully embedded in $\mathbb{N}^{d}$, minimally generated by $\{\mathbf{a}_{1},\dots,\mathbf{a}_{d},\mathbf{a}_{d+1},\dots,\mathbf{a}_{d+r}\}$ with the set of extremal rays $E=\{\mathbf{a}_{1},\dots,\mathbf{a}_{d}\}$. The semigroup algebra $\mathbb{K}[\Gamma]$ over a field $\mathbb{K}$ is generated by the monomials $\mathbf{x}^{\mathbf{a}}$, where $\mathbf{a} \in \Gamma$, with maximal ideal $\mathfrak{m}=(\mathbf{x}^{\mathbf{a}_{1}},\dots,\mathbf{x}^{\mathbf{a}_{d+r}})$.
Let $I(\Gamma)$ denote the defining ideal of $\mathbb{K}[\Gamma]$, which is the kernel of the $\mathbb{K}$-algebra homomorphism $\phi:R=\mathbb{K}[z_{1},\dots,z_{d+r}] \rightarrow \mathbb{K}[\mathbf{x}^{\mathbf{a}_{1}},\dots,\mathbf{x}^{\mathbf{a}_{d+r}}]$, such that $\phi(z_{i})=\mathbf{x}^{\mathbf{a}_{i}}$, $i=1,\dots,d+r$. Let us write $\mathbb{K}[\Gamma]\cong R/I(\Gamma)$. The defining ideal $I(\Gamma)$ is a binomial prime ideal ([@Herzog], Proposition 1.4). The associated graded ring $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])=\oplus_{i=0}^{\infty}\mathfrak{m}^{i}/\mathfrak{m}^{i+1}$ is isomorphic to $\frac{\mathbb{K}[z_{1},\dots,z_{d+r}]}{I(\Gamma)^{*}}$ (see [@Bruns-Herzog], Example 4.6.3), where $I(\Gamma)^{*}$ is the homogeneous ideal generated by the initial forms $f^{*}$ of the elements $f\in I(\Gamma)$, and $f^{*}$ is the homogeneous summand of $f$ of the least degree.
**Definition 6**. * Let $\Gamma$ be an affine semigroup in $\mathbb{N}^{d}$, minimally generated by $\mathbf{a}_{1},\dots,\mathbf{a}_{n}$. The *rational polyhedral cone* generated by $\Gamma$ is defined as $$\mathrm{cone}(\Gamma)=\big\{\sum_{i=1}^{n}\alpha_{i}\mathbf{a}_{i}: \alpha_{i} \in \mathbb{R}_{\geq 0}, \,i=1,\dots,n\big\}.$$ The *dimension* of $\Gamma$ is defined as the dimension of the subspace generated by $\mathrm{cone}(\Gamma)$. *
The $\mathrm{cone}(\Gamma)$ is the intersection of finitely many closed linear half-spaces in $\mathbb{R}^{d}$, each of whose bounding hyperplanes contains the origin. These bounding hyperplanes are called *support hyperplanes*.
**Definition 7**. * Suppose $\Gamma$ is an affine semigroup of dimension $r$ in $\mathbb{N}^{d}$. If $r=1$, $\mathrm{cone}(\Gamma)=\mathbb{R}_{\geq 0}$. If $r=2$, the support hyperplanes are one-dimensional vector spaces, which are called the *extremal rays* of $\mathrm{cone}(\Gamma)$. If $r >2$, intersection of any two adjacent support hyperplanes is a one-dimensional vector space, called an extremal ray of $\mathrm{cone}(\Gamma)$. An element of $\Gamma$ is called an extremal ray of $\Gamma$ if it is the smallest non-zero vector of $\Gamma$ in an extremal ray of $\mathrm{cone}(\Gamma)$. *
**Definition 8**. * An affine semigroup $\Gamma$ in $\mathbb{N}^{d}$, is said to be *simplicial* if the $\mathrm{cone}(\Gamma)$ has exactly $d$ extremal rays, i.e., if there exist a set with $d$ elements, say $\{\mathbf{a}_{1},\dots,\mathbf{a}_{d}\} \subset \{\mathbf{a}_{1},\dots,\mathbf{a}_{d},\mathbf{a}_{d+1},\dots,\mathbf{a}_{d+r}\}$, such that they are linearly independent over $\mathbb{Q}$ and $\Gamma \subset \sum\limits_{i=1}^{d}\mathbb{Q}_{\geq 0}\mathbf{a}_{i}$. *
**Definition 9**. * Let $(B,\mathcal{F})$ be a filtered, Noetherian ring. A sequence $g = g_{1},\dots,g_{n}$ in $B$ is called *super regular* if the sequence of initial forms $g^{*} = g_{1}^{*},\dots,g_{n}^{*}$ is regular in $\mathrm{gr}_{\mathcal{F}}(B)$. *
**Lemma 10**. *Let $(\mathbf{x}^{\mathbf{a}_{1}},\dots,\mathbf{x}^{\mathbf{a}_{d}})$ be a reduction ideal of $\mathfrak{m}$. Then the following statements are equivalent:*
(a) *$\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ is a Cohen-Macaulay ring.*
(b) *$(\mathbf{x}^{\mathbf{a}_{1}})^{*},\dots,(\mathbf{x}^{\mathbf{a}_{d}})^{*}$ provides a regular sequence in $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$.*
(c) *$\mathbb{K}[\Gamma]$ is Cohen-Macaulay and $(\mathbf{x}^{\mathbf{a}_{i}})^{*}$ is a non-zero divisor in $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma]),\, \text{for} \,\, i=1,\dots,d$.*
*Proof.* See Proposition 5.2 in [@Reduction]. ◻
Let $A$ be a filtered noetherian graded ring with homogeneous maximal ideal $\mathfrak{m}_{A}$ and suppose $B=A/xA$, where $x$ is not a zero-divisor on $A$. Let $\psi:A \rightarrow B$ be the canonical epimorphism.
**Lemma 11**. *If $x$ is a super regular in $A$ then $$\mathrm{gr}_{\mathfrak{m}_{A}}(A) \xrightarrow{(x)^{*}} \mathrm{gr}_{\mathfrak{m}_{A}}(A) \xrightarrow{\mathrm{gr}(\psi)} \mathrm{gr}_{\mathfrak{m}_{B}}(B) \rightarrow 0$$ is exact.*
*Proof.* See Lemma a in [@Super]. ◻
**Lemma 12**. *Consider a map $$\pi_{d}:(R:=)\mathbb{K}[z_{1},\dots,z_{d},\dots,z_{d+r}] \rightarrow \bar{R}=\mathbb{K}[z_{d+1},\dots,z_{d+r}]$$ such that $\pi_{d}(z_{j})=0, 1 \leq j \leq d$ and $\pi_{d}(z_{j})=z_{j}, d+1 \leq j \leq d+r$. If $z_{1},\dots,z_{d}$ is a super regular sequence in $R/I(\Gamma)$, then*
*$$\mathrm{gr}_{\bar{\mathfrak{m}}}\big(\bar{R}/\pi_{d}(I(\Gamma))\big) \cong \frac{\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))}{(z_{1},\dots,z_{d})\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))},$$ where $\bar{\mathfrak{m}}=\pi_{d}(\mathfrak{m})$.*
*Proof.* See Lemma 3.8 in [@Saha-Associated]. ◻
*Remark 13*. Assume $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ is Cohen-Macaulay and $\mathrm{dim}(\mathbb{K}[\Gamma])=d$. Then by Lemma [Lemma 2](#Can-Exist){reference-type="ref" reference="Can-Exist"}, $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ admits a canonical module $\omega_{\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])}$ and $\omega_{\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])} \cong \mathrm{Ext}^{r}_R(\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma]),R)$.
**Theorem 14**. *Let $(\mathbf{x}^{\mathbf{a}_{1}},\dots,\mathbf{x}^{\mathbf{a}_{d}})$ be a reduction ideal of $\mathfrak{m}$. Suppose $\mathbb{K}[\Gamma]$ and $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ are Cohen-Macaulay. Let $G=\{f_{1},\dots,f_{t},g_{1},\dots,g_{s}\}$ be a minimal Gröbner basis of the defining ideal $I(\Gamma)$, with respect to the negative degree reverse lexicographic ordering induced by $z_{d+r} > \dots > z_{d} > \dots >z_{1}$. We assume that $f_{1}, \dots,f_{t}$ are homogeneous and $g_{1},\dots,g_{s}$ are non-homogeneous, with respect to the standard gradation on the polynomial ring $\mathbb{K}[z_{1}, \ldots, z_{d+r}]$. Suppose there exists $j$, $1\leq j \leq d$, such that $z_{j}$ belongs to the support of $g_{l}$ for every $1 \leq l \leq s$. Then $\mathbb{K}[\Gamma]$ is nearly Gorenstein if and only if $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ is also nearly Gorenstein.*
*Proof.* Let $G=\{f_{1},\dots,f_{t},g_{1},\dots,g_{s}\}$ be a minimal Gröbner basis of the defining ideal $I(\Gamma)$, with respect to the negative degree reverse lexicographic ordering induced by $z_{d+r} > \dots > z_{d} > \dots >z_{1}$. When $s=0$, $I(\Gamma)$ is homogeneous ideal and from Remark 2.1 ([@Reduction]), $\mathbb{K}[\Gamma] \cong \mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$. Hence, the result follows directly.
When $s\geq 1$, we have $f_{1}, \dots, f_{t}$ are homogeneous, $g_{1},\dots g_{s}$ are non-homogeneous and $\mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ is Cohen-Macaulay, this implies that $z_{1},\dots,z_{d}$ do not divide the $\mathrm{LM}(f_{k})$ and $\mathrm{LM}(g_{l})$ for $k=1,\dots, t$, $l=1,\dots, s$. Moreover, $z_{j} \in \mathrm{supp}(\{g_{1},\dots,g_{s}\})$, for some $1 \leq j \leq d$. Therefore, $z_{j}$ divides a non-leading term of $g_{1},\dots,g_{s}$, for some $1 \leq j \leq d$.
We consider the map $$\pi_{d}:R=\mathbb{K}[z_{1},\dots,z_{d},\dots,z_{d+r}] \rightarrow \bar{R}=\mathbb{K}[z_{d+1},\dots,z_{d+r}]$$ such that $\pi_{d}(z_{j})=0$, $1 \leq j \leq d$ and $\pi_{d}(z_{j})=z_{j}, d+1 \leq j \leq d+r$. We note that $\pi_{d}(f_{1}),\dots,\pi_{d}(f_{t})$ are either monomials or homogeneous polynomials. Since $z_{j}$ divides a non-leading term of $\{g_{1},\dots,g_{s}\}$ for some $1 \leq j \leq d$, we must have that $\pi_{d}(g_{1}),\dots,\pi_{d}(g_{s})$ are the leading monomials of $g_{1},\dots,g_{s}$ respectively.
Therefore $\{\pi_{d}(f_{1}),\dots,\pi_{d}(f_{t}),\pi_{d}(g_{1}),\dots,\pi_{d}(g_{s})\}$ generates the homogeneous ideal $\pi_{d}(I(\Gamma))$. Hence $$\begin{aligned}
\mathrm{Ext}^{r}_{R}\big(\bar{R}/\pi_{d}(I(\Gamma)),R\big)&\cong \mathrm{Ext}^{r}_{R}\big(\mathrm{gr}_{\bar{\mathfrak{m}}}(\bar{R}/\pi_{d}(I(\Gamma))),R\big)
\end{aligned}$$ where $\bar{\mathfrak{m}}=\pi_{d}(\mathfrak{m})$. Since $\mathbb{K}[\Gamma],\, \mathrm{gr}_{\mathfrak{m}}(\mathbb{K}[\Gamma])$ are Cohen-Macaulay, by Lemma [Lemma 10](#Cond){reference-type="ref" reference="Cond"} $z_{1},\dots,z_{d}$ are regular in $\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))$, and therefore $z_{1},\dots,z_{d}$ form a super regular sequence in $(R/I(\Gamma))$. By Lemma [Lemma 12](#iso1){reference-type="ref" reference="iso1"}, $$\mathrm{gr}_{\bar{\mathfrak{m}}}\big(\bar{R}/\pi_{d}(I(\Gamma))\big) \cong \frac{\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))}{(z_{1},\dots,z_{d})\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))},$$ therefore $$\mathrm{Ext}^{r}_{R}\big(\mathrm{gr}_{\bar{\mathfrak{m}}}(\bar{R}/\pi_{d}(I(\Gamma))),R\big) \cong \mathrm{Ext}^{r}_{R}\bigg(\frac{\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))}{(z_{1},\dots,z_{d})\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))},R\bigg).$$ $\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))$ being Cohen-Macaulay, $z_{1},\dots,z_{d}$ form a regular sequence in $\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))$, hence $$\mathrm{Ext}^{r}_{\frac{R}{z_1,\dots,z_d}}\bigg(\frac{\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))}{(z_{1},\dots,z_{d})\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))},R\bigg)\cong\mathrm{Ext}^{r}_{R}\big(\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma)),R\big).$$ $R/I(\Gamma)$ being Cohen-Macaulay, $z_{1},\dots,z_{d}$ form a regular sequence in $R/I(\Gamma)$. Hence,
the canonical module of $\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))$ is given by $$\begin{aligned}
\omega_{\mathrm{gr}_{\mathfrak{m}}(\frac{R}{I(\Gamma)})} \cong & \mathrm{Ext}^{r}_R(\mathrm{gr}_{\mathfrak{m}}(\frac{R}{I(\Gamma)}),R)\cong \mathrm{Ext}^{r}_{\frac{R}{z_1,\dots,z_d}}(\frac{\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))}{(z_{1},\dots,z_{d})\mathrm{gr}_{\mathfrak{m}}(R/I(\Gamma))},R)\\
\cong & \mathrm{Ext}^{r}_{\frac{R}{z_1,\dots,z_d}}(\mathrm{gr}_{\bar{\mathfrak{m}}}(\bar{R}/\pi_{d}(I(\Gamma))),R)\\
\cong & \mathrm{Ext}^{r}_{\frac{R}{z_1,\dots,z_d}}(\bar{R}/\pi_{d}(I(\Gamma)),R)\\
\cong &
\mathrm{Ext}^{r}_{\frac{R}{z_1,\dots,z_d}}((R/(z_{1},\dots,z_{d},I(\Gamma)),R)\\
\cong& \mathrm{Ext}^{r}_{R}((R/I(\Gamma)),R)\\
\cong &
\omega_{\frac{R}{I(\Gamma)}},
\end{aligned}$$ and we have $\mathrm{tr}(\omega_{\frac{R}{I(\Gamma)}})\cong \mathrm{tr}(\omega_{\mathrm{gr}_{\mathfrak{m}}(\frac{R}{I(\Gamma)})})$. Therefore, $\mathrm{gr}_{\mathfrak{m}}(\frac{R}{I(\Gamma)})$ is nearly Gorenstein if and only if $\frac{R}{I(\Gamma)}$ is nearly Gorenstein. ◻
# Residue for gluing of numerical semigroups
A numerical semigroup $\Gamma$ is a submonoid of $\mathbb{N}$ which has a finite complement in $\mathbb{N}$. Say $\Gamma$ is minimally generated by $n_1<\dots <n_e$ with $e>1$. We write $\Gamma=\langle n_1,\dots,n_e \rangle$. The number $e$ is called the *embedding dimension* of $\Gamma$. The elements of the set $G(\Gamma)=\mathbb{N}\setminus \Gamma$ are called the *gaps* of $\Gamma$, and the cardinality of $G(\Gamma)$ is called the *genus* of $\Gamma$, denoted by $g(\Gamma)$. As $|G(\Gamma)|$ is finite, there exists a largest integer $\mathrm{F}(\Gamma)$, called the *Frobenius number* of $\Gamma$, such that $\mathrm{F}(\Gamma) \notin \Gamma$. Let $M:=\Gamma \setminus \{0\}$. The elements $\nu \in G(\Gamma)$ with $\nu +M \subseteq \Gamma$ are called *pseudo-Frobenius numbers*; their set is denoted by $\mathrm{PF}(\Gamma)$, and its cardinality is known as the *type* of $\Gamma$, denoted $\mathrm{type}(\Gamma)$. In the semigroup ring $\mathbb{K}[\Gamma]$, the canonical module $\omega_{\mathbb{K}[\Gamma]}$ is the fractionary $\mathbb{K}[\Gamma]$-ideal generated by the elements $t^{\nu}$ with $\nu \in \mathrm{PF}(\Gamma)$, see [@Eisenbud Exercise 21.11]. Therefore, the Cohen-Macaulay type of $\mathbb{K}[\Gamma]$ is equal to $\mathrm{type}(\Gamma)$. The anti-canonical ideal of $\mathbb{K}[\Gamma]$ is the fractionary ideal $\omega^{-1}_{\mathbb{K}[\Gamma]}=\{x \in Q(\mathbb{K}[\Gamma]): x \cdot \omega_{\mathbb{K}[\Gamma]} \subseteq \mathbb{K}[\Gamma]\}$, where $Q(\mathbb{K}[\Gamma])$ is the quotient field of $\mathbb{K}[\Gamma]$. Let $\Omega_{\Gamma}$ and $\Omega_{\Gamma}^{-1}$ be the sets of exponents of the monomials in $\omega_{\mathbb{K}[\Gamma]}$ and in $\omega_{\mathbb{K}[\Gamma]}^{-1}$, respectively. The *trace* of $\Gamma$ is defined as $\mathrm{tr}(\Gamma)=\Omega_{\Gamma}+\Omega_{\Gamma}^{-1}$. It is clear that $\mathrm{tr}(\Gamma)$ is an ideal in $\Gamma$ consisting of the exponents of the monomials in $\mathrm{tr}(\mathbb{K}[\Gamma])$. The quotient $\mathbb{K}[\Gamma]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma]})$ is a finite-dimensional vector space with a $\mathbb{K}$-basis given by $\{t^{\gamma}: \gamma \in \Gamma \setminus \mathrm{tr}(\Gamma)\}$. The *residue* of $\Gamma$ is defined as the residue of $\mathbb{K}[\Gamma]$, namely $$\mathrm{res}(\Gamma)=\mathrm{dim}_{\mathbb{K}}\mathbb{K}[\Gamma]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma]})=|\Gamma \setminus \mathrm{tr}(\Gamma)|.$$
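The following Python sketch (the semigroup $\langle 5,7,9\rangle$ and all choices below are assumptions made for illustration, not taken from the paper) computes the pseudo-Frobenius numbers, the trace and the residue of a concrete numerical semigroup, and checks the inequality of Question 1. For the trace it uses the standard canonical ideal $K(\Gamma)=\{\mathrm{F}(\Gamma)-z : z\notin\Gamma\}$ as a representative of the canonical module, so that $\mathrm{tr}(\Gamma)=K(\Gamma)+(\Gamma-K(\Gamma))$; the trace ideal does not depend on the chosen representative, and every integer $\geq 2\mathrm{F}(\Gamma)+2$ lies in the trace, so the finite search below is exact.

```python
# Trace and residue of Gamma = <5, 7, 9> (assumed example).
from itertools import product

gens = [5, 7, 9]
bound = 4 * max(gens) ** 2                    # crude bound, well beyond the Frobenius number

member = [False] * (bound + 1)                # member[s] is True iff s lies in Gamma
member[0] = True
for s in range(1, bound + 1):
    member[s] = any(s >= g and member[s - g] for g in gens)

gaps = [s for s in range(bound + 1) if not member[s]]
F = max(gaps)                                 # Frobenius number
genus = len(gaps)                             # g(Gamma)
n_gamma = sum(member[s] for s in range(F))    # n(Gamma) = #{s in Gamma : s < F}
PF = [v for v in gaps if all(member[v + g] for g in gens)]   # pseudo-Frobenius numbers

top = 2 * F + 2
K = [x for x in range(top + 1) if x > F or not member[F - x]]   # canonical ideal (truncated)
GmK = [z for z in range(top + 1) if member[z]
       and all(member[z + k] for k in K if z + k <= bound)]     # Gamma - K (truncated)
trace = {k + z for k, z in product(K, GmK) if k + z <= top}
res = sum(1 for s in range(top + 1) if member[s] and s not in trace)

print("PF =", PF, " res =", res, " g - n =", genus - n_gamma)   # PF = [11, 13]  res = 1  g - n = 2
print("inequality of Question 1 holds:", res <= genus - n_gamma)  # True
```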
Now, we are ready to answer Question [Question 1](#Ques){reference-type="ref" reference="Ques"} in case of gluing of numerical semigroups. Let us first recall Question [Question 1](#Ques){reference-type="ref" reference="Ques"} and definition of gluing of numerical semigroups.
**Question [Question 1](#Ques){reference-type="ref" reference="Ques"}** Given a numerical semigroup $\Gamma$, is it true that $$\mathrm{res}(\Gamma)\leq g(\Gamma)-n(\Gamma)?$$
**Definition 15** ([@JCR]). * Let $\Gamma_{1} = \Gamma(m_{1},\dots, m_{l})$ and $\Gamma_{2} = \Gamma(n_{1},\dots, n_{k})$ be two numerical semigroups, with $m_{1} < \cdots < m_{l}$ and $n_{1} < \cdots < n_{k}$. Let $\lambda =b_{1}m_{1} +\cdots +b_{l}m_{l} \in \Gamma_{1}$ and $\mu = a_{1}n_{1} +\cdots +a_{k}n_{k} \in\Gamma_{2}$ be two positive integers satisfying $\gcd(\lambda, \mu) = 1$, with $\lambda \notin \{m_{1}, \dots ,m_{l} \}$, $\mu \notin \{n_{1}, \ldots ,n_{k}\}$ and $\{\mu m_{1},\ldots , \mu m_{l}\}\cap \{\lambda n_{1},\ldots , \lambda n_{k}\} = \emptyset$. The numerical semigroup $\Gamma_{1}\#_{\lambda,\mu} \Gamma_{2}=\langle \mu m_{1},\ldots , \mu m_{l}, \lambda n_{1},\ldots, \lambda n_{k}\rangle$ is called a *gluing* of the semigroups $\Gamma_{1}$ and $\Gamma_{2}$ with respect to $\mu$ and $\lambda$. *
Section 3 in [@Sahin-Gluing] states that if $\Gamma$ is obtained by gluing $\Gamma_{1} = \Gamma(m_{1},\dots, m_{l})$ and $\Gamma_{2} = \Gamma(n_{1},\dots, n_{k})$ with respect to $\lambda = \sum_{i=1}^{l}b_{i}m_{i}$ and $\mu = \sum_{i=1}^{k}a_{i}n_{i}$, and if the defining ideals $I(\Gamma_{1}) \subset \mathbb{K}[x_{1},\ldots, x_{l}]$ and $I(\Gamma_{2}) \subset \mathbb{K}[y_{1},\ldots, y_{k}]$ are generated by the sets $G_{1} = \{f_{1},\ldots, f_{d}\}$ and $G_{2} = \{g_{1},\ldots, g_{r}\}$ respectively, then the defining ideal $I(\Gamma)\subset R = \mathbb{K}[x_{1},\ldots, x_{l}, y_{1},\ldots, y_{k}]$ is generated by the set $G = G_{1}\cup G_{2}\cup \{\rho\}$, where $\rho=x_{1}^{b_{1}}\dots x_{l}^{b_{l}}-y_{1}^{a_{1}}\dots y_{k}^{a_{k}}$.
**Theorem 16**. *Let $\Gamma_1$ and $\Gamma_2$ be numerical semigroups glued as above, with $\lambda$ and $\mu$ as in Definition [Definition 15](#Gluing-Def){reference-type="ref" reference="Gluing-Def"}. Then the following hold.*
(i) *[\[(i)\]]{#(i) label="(i)"} $\mathrm{tr}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)=\mu \mathrm{tr}(\Gamma_1)+\lambda \mathrm{tr}(\Gamma_2)$.*
(ii) *$\mathrm{res}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)=\mu \mathrm{res}(\Gamma_1)+ \lambda \mathrm{res}(\Gamma_2)$.*
*Proof.* Let $\mathbb{F}: 0 \rightarrow F_k \xrightarrow {\phi_{k}} \cdots \xrightarrow {\phi_2} F_1 \xrightarrow {\phi_1} F_0$ be a minimal $\Gamma_1$-graded free resolution of $I_{\Gamma_1}$ with $H_0= R/I_{\Gamma_1}$, $\mathbb{G}: 0 \rightarrow G_l \xrightarrow {\psi_{l}} \cdots \xrightarrow {\psi_2} G_1 \xrightarrow {\psi_1} G_0$ be a minimal $\Gamma_2$-graded free resolution of $I_{\Gamma_2}$ with $H_0= R/I_{\Gamma_2}$ and $\mathbb{H}_{\rho}: 0 \rightarrow R \xrightarrow {\rho} R \rightarrow 0$. From [@Sahin-Gluing Theorem 3.2] we know that $\mathbb{H}_{\rho} \otimes \mathbb{F} \otimes \mathbb{G}$ gives a minimal free resolution of $I_{\Gamma_1 \#_{\lambda,\mu} \Gamma_2 }$. By [@Bruns-Herzog Corollary 3.9], the dual complex $(\mathbb{H}_{\rho} \otimes \mathbb{F} \otimes \mathbb{G})^{\ast}$ is a minimal free resolution of $\omega_{\mathbb{K}[{\Gamma_1 \#_{\lambda,\mu} \Gamma_2 }]}$. From the isomorphism $\mathbb{H}_{\rho}^{\ast} \otimes \mathbb{F}^{\ast} \otimes \mathbb{G}^{\ast} \cong (\mathbb{H}_{\rho} \otimes \mathbb{F} \otimes \mathbb{G})^{\ast}$ we can deduce that $\mathbb{H}_{\rho}^{\ast} \otimes \mathbb{F}^{\ast} \otimes \mathbb{G}^{\ast}$ is exact and it resolves $\omega_{\rho R} \otimes \omega_{\mathbb{K}[\Gamma_1]} \otimes \omega_{\mathbb{K}[\Gamma_2]}$ minimally. By [@Trace Proposition 4.1], we have $\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1 \#_{\lambda,\mu} \Gamma_2 ]})=\mathrm{tr}(\omega_{\rho R}) \cdot \mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1]}) \cdot \mathrm{tr}(\omega_{\mathbb{K}[\Gamma_2]})$, and hence the exponents of the monomials in $\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1 \#_{\lambda,\mu} \Gamma_2 ]})$ are of the form $\mu \mathrm{tr}(\Gamma_1)+\lambda \mathrm{tr}(\Gamma_2)$, so the semigroup ideal $\mathrm{tr}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)=\mu \mathrm{tr}(\Gamma_1)+\lambda \mathrm{tr}(\Gamma_2)$. Note that $\mathbb{K}[H]/\mathrm{tr}(\omega_{\mathbb{K}[H]})$ forms a vector space over $\mathbb{K}$ with a $\mathbb{K}$-basis $\{t^h : h \in H \setminus \mathrm{tr}(H)\}$ for any numerical semigroup $H$, and $\mathrm{dim}_{\mathbb{K}}\mathbb{K}[H]/\mathrm{tr}(\omega_{\mathbb{K}[H]})=|H \setminus \mathrm{tr}(H)|$. Consider a vector space homomorphism $\eta :\mu \cdot \mathbb{K}[\Gamma_1]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1]}) \oplus \lambda \cdot \mathbb{K}[\Gamma_2]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_2]}) \rightarrow \mathbb{K}[\Gamma_1 \#_{\lambda,\mu}\Gamma_2]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1 \#_{\lambda,\mu}\Gamma_2]})$ defined by $\eta(\mu t^{s_1}+\lambda t^{s_2})=t^{\mu s_1+\lambda s_2}$. From ([\[(i)\]](#(i)){reference-type="ref" reference="(i)"}), it is clear that $\eta$ is a vector space isomorphism, and $\mathrm{dim}_{\mathbb{K}}\frac{\mathbb{K}[\Gamma_1 \#_{\lambda,\mu}\Gamma_2]}{\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1 \#_{\lambda,\mu}\Gamma_2]})}=\mu \mathrm{dim}_{\mathbb{K}}\frac{\mathbb{K}[\Gamma_1]}{\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_1]})}+ \lambda \mathrm{dim}_{\mathbb{K}}\frac{\mathbb{K}[\Gamma_2]}{\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_2]})}$. Therefore, $\mathrm{res}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)=\mu \mathrm{res}(\Gamma_1)+ \lambda \mathrm{res}(\Gamma_2)$. ◻
**Corollary 17**. *Let $\mathbb{K}[\Gamma_{1}]$ and $\mathbb{K}[\Gamma_{2}]$ be two nearly Gorenstein rings. Then $\mathbb{K}[\Gamma]$ is never nearly Gorenstein, where $\Gamma$ is a gluing of $\Gamma_1$ and $\Gamma_2$.*
*Proof.* The proof follows directly from Theorem [Theorem 16](#Gluing-Trace){reference-type="ref" reference="Gluing-Trace"} and the fact that $\mathbb{K}[\Gamma]$ is nearly Gorenstein if and only if $\mathrm{res}(\Gamma)\leq 1$. ◻
**Lemma 18**. *Assuming the notation of the current section, the following hold.*
(i) *$\mathrm{PF}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)=\{\mu g +\lambda g'+ \mu \lambda : g \in \mathrm{PF}(\Gamma_1), g' \in \mathrm{PF}(\Gamma_2)\}$.*
(ii) *$\mathrm{F}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)=\mu \mathrm{F}(\Gamma_1) +\lambda \mathrm{F}(\Gamma_2)+ \mu \lambda$.*
*Proof.* See Proposition 6.6 in [@PF-Gluing]. ◻
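The following Python sketch gives a numerical check of Lemma 18 on one concrete gluing (the semigroups $\Gamma_1=\langle 3,5\rangle$, $\Gamma_2=\langle 2,7\rangle$ and the choice $\lambda=8$, $\mu=9$ are assumptions made for this illustration; they satisfy the conditions of Definition 15, and neither $\lambda$ nor $\mu$ is a minimal generator).

```python
# Check of Lemma 18 for the gluing <9*3, 9*5, 8*2, 8*7> = <27, 45, 16, 56> (assumed example).
def invariants(gens, bound):
    member = [False] * (bound + 1)
    member[0] = True
    for s in range(1, bound + 1):
        member[s] = any(s >= g and member[s - g] for g in gens)
    gaps = [s for s in range(bound + 1) if not member[s]]
    F = max(gaps)                                              # Frobenius number
    PF = [v for v in gaps if all(member[v + g] for g in gens)] # pseudo-Frobenius numbers
    return F, PF

F1, PF1 = invariants([3, 5], 200)
F2, PF2 = invariants([2, 7], 200)
lam, mu = 8, 9
F_glued, PF_glued = invariants([mu * 3, mu * 5, lam * 2, lam * 7], 2000)

assert F_glued == mu * F1 + lam * F2 + mu * lam                    # Lemma 18 (ii)
assert sorted(PF_glued) == sorted(mu * g + lam * h + mu * lam      # Lemma 18 (i)
                                  for g in PF1 for h in PF2)
print(F_glued, PF_glued)                                           # 175 [175]
```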
Set $C_H=t^{\mathrm{F}(H)}\omega_{\mathbb{K}[H]}=\sum_{\alpha \in \mathrm{PF}(H)} \mathbb{K}[H]t^{\mathrm{F}(H)-\alpha}$ for any numerical semigroup $H$. Then we have $\mathbb{K}[H] \subseteq C_H \subseteq \overline{\mathbb{K}[H]}$, where $\overline{\mathbb{K}[H]}$ denotes the integral closure of $\mathbb{K}[H]$. By Lemma [Lemma 18](#PF-Gluing){reference-type="ref" reference="PF-Gluing"}, we have $C_{\Gamma_1 \#_{\lambda,\mu}\Gamma_2}=t^{\mathrm{F}(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)}\omega_{\mathbb{K}[{\Gamma_1 \#_{\lambda,\mu} \Gamma_2 }]}= \sum_{g\in \mathrm{PF}(\Gamma_1),g' \in \mathrm{PF}(\Gamma_2)}\mathbb{K}[{\Gamma_1 \#_{\lambda,\mu} \Gamma_2 }]t^{\mu \mathrm{F}(\Gamma_1)-\mu g+ \lambda \mathrm{F}(\Gamma_2)-\lambda g'}$. Hence $\mathrm{dim}_{\mathbb{K}} \frac{C_{\Gamma_1 \#_{\lambda,\mu}\Gamma_2}}{\mathbb{K}[\Gamma_1 \#_{\lambda,\mu}\Gamma_2]}=\mu \mathrm{dim}_{\mathbb{K}}\frac{C_{\Gamma_1}}{\mathbb{K}[\Gamma_1]}+\lambda \mathrm{dim}_{\mathbb{K}}\frac{C_{\Gamma_2}}{\mathbb{K}[\Gamma_2]}$. By [@Upper; @bound Lemma 2.2], $g(\Gamma_1\#_{\lambda,\mu} \Gamma_2)-n(\Gamma_1\#_{\lambda,\mu} \Gamma_2)=l_{\mathbb{K}[\Gamma_1\#_{\lambda,\mu} \Gamma_2]}(\frac{C_{\Gamma_1\#_{\lambda,\mu} \Gamma_2}}{\mathbb{K}[\Gamma_1\#_{\lambda,\mu} \Gamma_2]})=\mu l_{\mathbb{K}[\Gamma_1\#_{\lambda,\mu} \Gamma_2]}(\frac{C_{\Gamma_1}}{\mathbb{K}[\Gamma_1]})+
\lambda l_{\mathbb{K}[\Gamma_1\#_{\lambda,\mu} \Gamma_2]}(\frac{C_{\Gamma_2}}{\mathbb{K}[\Gamma_2]})$. Now if $\Gamma_1,\Gamma_2$ satisfies Question [Question 1](#Ques){reference-type="ref" reference="Ques"}, then $\mathrm{res}(\Gamma_1 \#_{\lambda,\mu} \Gamma_2)=\mu \mathrm{res}(\Gamma_1)+\lambda \mathrm{res}(\Gamma_2)\leq \mu(g(\Gamma_1)-n(\Gamma_1))+\lambda(g(\Gamma_2)-n(\Gamma_2))=g(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)-n(\Gamma_1 \#_{\lambda,\mu}\Gamma_2)$, hence $\Gamma_1 \#_{\lambda,\mu}\Gamma_2$ also satisfies Question [Question 1](#Ques){reference-type="ref" reference="Ques"}.
## Residue of lifting of monomial curve {#lift}
Let $\Gamma$ be a numerical semigroup minimally generated by $n_1 < \dots <n_r$. By a $k$-lifting $\Gamma_k$ of $\Gamma$ we mean the numerical semigroup minimally generated by $n_1,kn_2,\dots,kn_r$, where $k$ is a positive integer with $\mathrm{gcd}(k, n_1) = 1$. Let $A=\mathbb{K}[x_1,\dots,x_r]$ and $B=\mathbb{K}[y_1,\dots,y_r]$, and we set $\mathrm{deg}(x_i)=n_i$ for $1 \leq i \leq r$, $\mathrm{deg}(y_i)=kn_i$ for $2 \leq i \leq r$ and $\mathrm{deg}(y_1)=n_1$. In this subsection we denote the semigroup rings by $R=\mathbb{K}[\Gamma]$ and $R_k=\mathbb{K}[\Gamma_k]$.
**Theorem 19**. *$\mathrm{tr}(\Gamma_k)=k\cdot\mathrm{tr}(\Gamma)$*
*Proof.* Let $$\mathbb{F}: 0 \rightarrow F_p \rightarrow F_{p-1} \rightarrow \dots \rightarrow F_0 \rightarrow A/I(\Gamma) \rightarrow 0$$ be a minimal free $A$-resolution of $A/I(\Gamma)$. Then by [@Bruns-Herzog Corollary 3.9], the dual complex $\mathbb{F}^{\ast}=\mathrm{Hom}_A(\mathbb{F},A)$ is a minimal free resolution of $\omega_{\mathbb{K}[\Gamma]}$.
As indicated in [@Var], a minimal $\Gamma_k$-graded free resolution $\mathbb{F}_k$ of $\mathbb{K}[\Gamma_k]$ is obtained from a minimal $\Gamma$-graded free resolution of $\mathbb{K}[\Gamma]$ via the faithfully flat extension $f : A \rightarrow B$, defined by sending $x_1 \rightarrow y_1^k$ and $x_i \rightarrow y_i$ for all $i > 1$, and the dual complex $\mathbb{F}_k^{\ast} =\mathrm{Hom}_B(\mathbb{F}_k,B)=\mathrm{Hom}_B(\mathbb{F}\otimes_A B,B)\cong \mathrm{Hom}_A(\mathbb{F},A)\otimes_A B \cong \mathbb{F}^{\ast} \otimes_A B$. Hence $\omega_{\mathbb{K}[\Gamma_k]}=k\omega_{\mathbb{K}[\Gamma]}$. It is clear that any $\phi \in \mathrm{Hom}_B(\omega_{\mathbb{K}[\Gamma_k]},B)$ can be obtained by extending a map $\psi \in \mathrm{Hom}_A(\omega_{\mathbb{K}[\Gamma]},A)$ via $f$. Hence $\mathrm{tr}_B(\omega_{\mathbb{K}[\Gamma_k]})=\sum_{\phi\in \mathrm{Hom}_B(\omega_{\mathbb{K}[\Gamma_k]},B)}\phi(\omega_{\mathbb{K}[\Gamma_k]})=k\sum_{\psi\in \mathrm{Hom}_A(\omega_{\mathbb{K}[\Gamma]},A)}\psi(\omega_{\mathbb{K}[\Gamma]})=k \cdot \mathrm{tr}_{A}(\omega_{\mathbb{K}[\Gamma]})$. ◻
**Corollary 20**. *$\mathrm{res}(\Gamma_k)=k \cdot \mathrm{res}(\Gamma)$.*
*Proof.* Note that $\mathbb{K}[\Gamma]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma]})$ is a finite dimensional vector space with a $\mathbb{K}$-basis given by $\{t^\gamma:\gamma \in \Gamma\setminus \mathrm{tr}(\Gamma)\}$. Consider a map $\eta: k \cdot \mathbb{K}[\Gamma]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma]}) \rightarrow \mathbb{K}[\Gamma_k]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_k]})$ defined by $\eta(kt^h)=t^{kh}$. Due to Theorem [Theorem 19](#Lift-Can){reference-type="ref" reference="Lift-Can"}, $\eta$ is well defined, and it is easy to check that $\eta$ is a bijective homomorphism. Hence $\dim_{\mathbb{K}}(\mathbb{K}[\Gamma_k]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma_k]}))=k \cdot \dim_{\mathbb{K}}(\mathbb{K}[\Gamma]/\mathrm{tr}(\omega_{\mathbb{K}[\Gamma]}))$ and we have the required result. ◻
One direct consequence of Corollary [Corollary 20](#Lift-Res){reference-type="ref" reference="Lift-Res"} is the following.
**Corollary 21**. *If $\Gamma$ is a nearly Gorenstein numerical semigroup, then $\Gamma_k$ is never nearly Gorenstein for $k\geq 2$.*
Let $C= t^{F(\Gamma)}\omega_{R}=\sum_{\alpha \in \mathrm{PF}(\Gamma)}Rt^{F(\Gamma)-\alpha}$. Then we have $R\subseteq C \subseteq \overline{R}=\mathbb{K}[t]$, where $\overline{R}$ denotes the integral closure of $R$. By [@Sahin], we know that $\mathrm{PF}(\Gamma_k)=\{kf+(k-1)n_1 \mid f \in \mathrm{PF}(\Gamma)\}$ and $F(\Gamma_k)=kF(\Gamma)+(k-1)n_1$, hence $C_k=\sum_{\alpha \in \mathrm{PF}(\Gamma)}R_kt^{k(F(\Gamma)-\alpha)}$ and $R_k \subseteq C_k \subseteq \overline{R_k}=\mathbb{K}[t]$. Now consider a module homomorphism $\psi: k\frac{C}{R} \rightarrow \frac{C_k}{R_k}$ defined by $\psi(k(c+R))=kc+R_k$. It is clear that $\psi$ is a module isomorphism and hence $l_{R_{k}}(\frac{C_k}{R_k})=kl_R(\frac{C}{R})$. By [@Upper; @bound Lemma 2.2], $g(\Gamma_k)-n(\Gamma_k)=l_{R_{k}}(\frac{C_k}{R_k})=k l_R(\frac{C}{R})=k(g(\Gamma)-n(\Gamma))$. Now if $\Gamma$ satisfies Question [Question 1](#Ques){reference-type="ref" reference="Ques"}, then $\mathrm{res}(\Gamma_k)=k\mathrm{res}(\Gamma)\leq k(g(\Gamma)-n(\Gamma))=g(\Gamma_k)-n(\Gamma_k)$, hence $\Gamma_k$ also satisfies Question [Question 1](#Ques){reference-type="ref" reference="Ques"}.
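The following Python sketch checks the two formulas quoted above from [@Sahin] on a concrete example (the semigroup $\langle 3,5,7\rangle$ and $k=2$ are assumptions made for this illustration; note $\gcd(2,3)=1$).

```python
# Check of PF(Gamma_k) and F(Gamma_k) for the 2-lifting of <3, 5, 7> (assumed example).
def frobenius_and_pf(gens, bound):
    member = [False] * (bound + 1)
    member[0] = True
    for s in range(1, bound + 1):
        member[s] = any(s >= g and member[s - g] for g in gens)
    gaps = [s for s in range(bound + 1) if not member[s]]
    return max(gaps), [v for v in gaps if all(member[v + g] for g in gens)]

gens, k = [3, 5, 7], 2
lifted = [gens[0]] + [k * g for g in gens[1:]]            # the 2-lifting <3, 10, 14>
F, PF = frobenius_and_pf(gens, 200)
F_k, PF_k = frobenius_and_pf(lifted, 400)

assert F_k == k * F + (k - 1) * gens[0]                   # F(Gamma_k) = kF(Gamma) + (k-1)n_1
assert sorted(PF_k) == sorted(k * f + (k - 1) * gens[0] for f in PF)
print(F, PF, F_k, PF_k)                                    # 4 [2, 4] 11 [7, 11]
```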
Arslan, Feza; Mete, Pinar; Şahin, Mesut. Gluing and Hilbert functions of monomial curves. Proc. Amer. Math. Soc.137(2009), no.7, 2225--2232.
Bermejo, Isabel; García-Llorente, Eva; García-Marco, Ignacio. Algebraic invariants of projective monomial curves associated to generalized arithmetic sequences. J. Symbolic Comput. 81(2017), 1--19.
Bruns, Winfried; Herzog, Jürgen. Cohen-Macaulay rings. Cambridge Stud. Adv. Math., 39 Cambridge University Press, Cambridge, 1993.
Bruns, Winfried; Gubeladze, Joseph; Trung, N. V. Problems and algorithms for affine semigroups. Semigroup Forum 64(2002), no.2, 180--212.
D'Anna, Marco; Jafari, Raheleh; Strazzanti, Francesco. Simplicial affine semigroups with monomial minimal reduction ideals. Mediterr. J. Math.19(2022), no.2, Paper No. 84, 17 pp.
Eisenbud, D. Commutative algebra: with a view toward algebraic geometry (Vol. 150). Springer Science $\&$ Business Media, 2013.
Herzog, Jürgen. Generators and relations of abelian semigroups and semigroup rings. Manuscripta Math.3(1970), 175--193.
Herzog, Jürgen; Hibi, Takayuki; Stamate, Dumitru I. Canonical trace ideal and residue for numerical semigroup rings. Semigroup Forum 103(2021), no.2, 550--566.
Herzog, Jürgen; Stamate, Dumitru. Cohen-Macaulay criteria for projective monomial curves via Gröbner bases. Acta Math. Vietnam.44(2019), no.1, 51--64.
Herzog, J. When is a regular sequence super regular?. Nagoya Math. J.83(1981), 183--195.
Herzog, Jürgen; Kumashiro, Shinya. Upper bound on the colength of the trace of the canonical module in dimension one. Arch. Math. (Basel)119(2022), no.3, 237--246.
Herzog, Jürgen; Hibi, Takayuki; Stamate, Dumitru I. The trace of the canonical module. Israel J. Math. 233(2019), no.1, 133--165.
Miyashita, S. (2023). Nearly Gorenstein projective monomial curves of small codimension. arXiv preprint arXiv:2302.04027.
Moscariello, Alessio; Strazzanti, Francesco Nearly Gorenstein vs almost Gorenstein affine monomial curves. Mediterr. J. Math.18(2021), no.4, Paper No. 127, 14 pp.
Nari, Hirokatsu. Symmetries on almost symmetric numerical semigroups. Semigroup Forum 86(2013), no.1, 140--154.
Numata, T. A variation of gluing of numerical semigroup. Semigroup Forum (2016) 93:152--160.
Rosales, J. C. On presentations of subsemigroups of $\mathbb{N}^n$. Semigroup Forum 55(1997), no.2, 152--159.
Saha, Joydip; Sengupta, Indranath; Srivastava, Pranjal. On the associated graded ring of semigroup algebras. Comm. Algebra 51(2023), no.10, 4259--4270.
Şahin, Mesut. Liftings of a monomial curve. Bulletin of the Australian Mathematical Society 98.2 (2018): 230-238.
Şahin, Mesut; Stella, Leah Gold. Gluing semigroups and strongly indispensable free resolutions. Internat. J. Algebra Comput. 29(2019), no.2, 263--278.
---
abstract: |
The spectral clustering algorithm is often used as a binary clustering method for unclassified data by applying principal component analysis. In existing theoretical studies of the algorithm, the assumption of homoscedasticity is often imposed. However, this assumption is restrictive and often unrealistic in practice. Therefore, in this paper, we consider the allometric extension model, that is, the model in which the directions of the first eigenvectors of the two covariance matrices and the direction of the difference of the two mean vectors coincide, and we provide a non-asymptotic bound on the error probability of the spectral clustering algorithm for this model. As a byproduct of the result, we obtain the consistency of the clustering method in high-dimensional settings.
title: Spectral clustering algorithm for the allometric extension model
---
# Introduction {#sec:1}
## Spectral clustering algorithm {#subsec:11}
Clustering unclassified data is one of the typical problems in multivariate data analysis, and several clustering methods have been intensively developed. For example, $k$-means clustering [@RefP81; @RefP82], hierarchical clustering [@RefB14], and the method based on linear discriminant functions [@RefO78], to mention only a few, are commonly used. For a review of clustering methods, see [@RefA17] and references therein. Among these methods, spectral clustering based on principal component analysis is quite popular because of its low computational load, and it has various applications such as community detection and graph partitioning; see [@RefL21] for references and further applications. Following Section 4.7.1 of [@RefV18], the method is described as follows.
Let the dimension of the data and the sample size be $n$ and $m$, respectively, and let the $n$-dimensional (centered) data points be denoted by $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_m$. Calculate the realized sample covariance matrix $$\boldsymbol{S}_x = \frac{1}{m} \sum_{i=1}^m \boldsymbol{x}_i \boldsymbol{x}_i^\top$$ and the unit-length eigenvector $\boldsymbol{v}$ corresponding to the largest eigenvalue of $\boldsymbol{S}_x$, where $\top$ denotes the transpose of a vector. To classify the data points into two groups, we project them onto the space spanned by $\boldsymbol{v}$ and classify them by the signs of their principal component scores.
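For concreteness, the following is a minimal NumPy sketch of the procedure just described; the sketch and its toy data-generating step are illustrative assumptions and are not taken from [@RefV18], and the function name `spectral_binary_clustering` is ours.

```python
# A minimal sketch of the spectral clustering step described above.
# Assumption: the rows of X are already centered.
import numpy as np

def spectral_binary_clustering(X):
    """Cluster the rows of X (m x n) into two groups by the sign of their
    first principal component score."""
    m, _ = X.shape
    S = X.T @ X / m                  # realized sample covariance matrix
    _, eigvecs = np.linalg.eigh(S)   # eigenvalues returned in ascending order
    v = eigvecs[:, -1]               # unit eigenvector of the largest eigenvalue
    return np.sign(X @ v)            # +1 / -1 labels from the PC scores

# Toy usage: two Gaussian groups separated along the first coordinate.
rng = np.random.default_rng(0)
m, mu = 200, np.array([3.0, 0.0, 0.0])
theta = rng.choice([-1, 1], size=m)
X = theta[:, None] * mu + rng.standard_normal((m, 3))
labels = spectral_binary_clustering(X)
agreement = max(np.mean(labels == theta), np.mean(labels == -theta))
print(f"fraction clustered consistently with theta: {agreement:.2f}")
```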
To review theoretical properties of the spectral clustering algorithm, we define $\theta$, which indicates the group to which an individual belongs, as a symmetric Bernoulli variable (Rademacher variable), that is to say, $P(\theta=1)=P(\theta=-1)=1/2$, and let $\boldsymbol{g}^{(1)}$ and $\boldsymbol{g}^{(-1)}$ be $n$-dimensional centered normal variables with covariance matrices $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$, respectively. Define an $n$-dimensional random variable $\boldsymbol{X}$ as $$\boldsymbol{X} = \theta \boldsymbol{\mu} + \boldsymbol{g}^{(\theta)} ,$$ where $\boldsymbol{\mu}$ is an $n$-dimensional vector. Note that $E[\boldsymbol{X}]= \boldsymbol{0}_n$.
Let $\boldsymbol{X}_1,\ldots,\boldsymbol{X}_m$ be iid copies of $\boldsymbol{X}$, $\boldsymbol{S}_{m} = m^{-1} \sum_{i=1}^m \boldsymbol{X}_i \boldsymbol{X}_i^\top$ the sample covariance matrix, and $\boldsymbol{\gamma}_1(\boldsymbol{S}_{m})$ the unit eigenvector corresponding to the largest eigenvalue of $\boldsymbol{S}_{m}$. Under the assumption that $\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \boldsymbol{I}_n$, where $\boldsymbol{I}_n$ is the identity matrix of size $n$, @RefV18 [Section 4.7] evaluated $$\label{verp}
P\left( \{\mbox{The number of misclassifications of $X_1,\ldots,X_m$}\} \leq \varepsilon m \right),$$ where $\varepsilon$ is a positive constant satisfying appropriate conditions and "The number of misclassifications" will be formulated later in [\[kerp\]](#kerp){reference-type="eqref" reference="kerp"}. Moreover, under the assumption that $\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \boldsymbol{I}_n$ and $\| \boldsymbol{\mu} \|_2 \geq C_{\rm gap} n/m$ for some large constant ${C_{\rm gap}}$, [@RefCZ18] derived an upper bound for the expectation of the misclustering rate, which is a quantity of interest for clustering methods; "The misclustering rate" will be explained in [\[misclustering\]](#misclustering){reference-type="eqref" reference="misclustering"}. In this paper, we use the words "misclassification" and "misclustering" in different senses. [@RefDS13] proposed a clustering method based on the moment method for a mixture distribution of spherical normal distributions. [@RefAFW22] proposed a more sophisticated method using the singular value decomposition of the Gram matrix calculated with its diagonal components replaced by zero and evaluated the misclustering rate of this method. Recently, for a variant of Lloyd's iterative procedure in a two-component mixture of normal distributions, [@RefN22] derived conditions under which asymptotically correct results of the procedure are obtained.
When $\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2$, the first eigenvector of the covariance matrix $\boldsymbol{\Sigma}$ of the mixture distribution is different from that of $\boldsymbol{\Sigma}_1$ in general. However, if $\boldsymbol{\Sigma}_1$ is spherical, the direction of the first eigenvector of $\boldsymbol{\Sigma}$ is parallel to $\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2$, where $\boldsymbol{\mu}_1 = \boldsymbol{\mu}$ and $\boldsymbol{\mu}_2= -\boldsymbol{\mu}$. This serves as the basis for the spectral clustering algorithm. However, the assumption that $\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2$ with $\boldsymbol{\Sigma}_1$ spherical is sometimes restrictive and unlikely to hold in practice. Note that Section 5 of [@RefAFW22] evaluated eigenvalues and eigenvectors in a heteroscedastic situation for each individual. Therefore, in this paper, we suppose the more flexible assumption that $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$ follow the allometric extension relationship formulated in [@RefF97] and [@RefBFN99], that is, the leading eigenvector of $\boldsymbol{\Sigma}_1$ is parallel to that of $\boldsymbol{\Sigma}_2$ and is also parallel to $\boldsymbol{\mu}_1-\boldsymbol{\mu}_2$. Then, we evaluate the misclassification probability of the spectral clustering algorithm. This result allows us to show the consistency of the spectral clustering algorithm in a high-dimensional setting.
## Notation
In this subsection, we introduce some notations used in this paper. For vectors $\boldsymbol{a}$ and $\boldsymbol{b}$ which have the same dimension, $\langle \boldsymbol{a}, \boldsymbol{b} \rangle = {\boldsymbol{a}^{\top} \boldsymbol{b} }$ denotes the inner product of $\boldsymbol{a}$ and $\boldsymbol{b}$. For a vector $\boldsymbol{a}$, let $\| \boldsymbol{a} \|_2 = \sqrt{\langle \boldsymbol{a}, \boldsymbol{a} \rangle}$ be the $\ell^2$-norm of $\boldsymbol{a}$. For a square matrix $\boldsymbol{A}$, let $\lambda_k(\boldsymbol{A})$ be the $k$-th largest eigenvalue of $\boldsymbol{A}$, $\boldsymbol{\gamma}_k(\boldsymbol{A})$ the eigenvector corresponding to $\lambda_k(\boldsymbol{A})$, and $\| \boldsymbol{A} \|_{\mathrm{op}}$ the operator norm of $\boldsymbol{A}$. For a positive integer $n$, $\mathcal{N}_n(\boldsymbol{\mu},\boldsymbol{\Sigma})$ denotes the $n$-dimensional normal distribution with a mean vector $\boldsymbol{\mu}$ and a covariance matrix $\boldsymbol{\Sigma}$. Moreover, in general, $(\boldsymbol{\mu},\boldsymbol{\Sigma})$ denotes the distribution with a mean vector $\boldsymbol{\mu}$ and a covariance matrix $\boldsymbol{\Sigma}$. For a positive definite covariance matrix $\boldsymbol{\Sigma}$, $\boldsymbol{\Sigma}^{1/2}$ denotes the symmetric matrix satisfying $\boldsymbol{\Sigma}^{1/2} \boldsymbol{\Sigma}^{1/2} = \boldsymbol{\Sigma}$, and $\boldsymbol{\Sigma}^{- 1/2}$ denotes the inverse of $\boldsymbol{\Sigma}^{1/2}$.
## Organization of the paper
In Section [2](#sec:2){reference-type="ref" reference="sec:2"}, the allometric extension model is introduced and some properties of this model are derived. Section [3](#sec:3){reference-type="ref" reference="sec:3"} provides the main results of this paper. Theoretical results are proven in Section [4](#sec:4){reference-type="ref" reference="sec:4"}. Some concluding remarks are presented in Section [5](#sec:5){reference-type="ref" reference="sec:5"}.
# Allometric extension model {#sec:2}
In this section, we introduce the allometric extension model. Let $n$ be a positive integer, and let $\boldsymbol{\mu}_1$ and $\boldsymbol{\mu}_2$ be $n$-dimensional vectors satisfying $\boldsymbol{\mu}_1 \neq \boldsymbol{\mu}_2$. Also, let $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ be positive definite $n \times n$ symmetric matrices such that $\lambda_1(\boldsymbol{\Sigma}_i) > \lambda_2(\boldsymbol{\Sigma}_i)$ for $i=1,2$. We define the allometric extension relationship between two distributions $(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$ and $(\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2)$ as follows: there exists a $\beta \in \mathbb{R}$ such that $$\begin{aligned}
\label{aem}
\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) = \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_2) = \beta (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2) ,\end{aligned}$$ where the sign of $\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_2)$ is suitably chosen. Throughout the discussion, we assume that the two distributions are $n$-dimensional normal distributions.
The allometric extension model formulated in [@RefF97] and [@RefBFN99] is used as a model to express a typical relationship between two or more biotic groups. For this model, [@RefBFN99] proposed a test procedure for determining whether two groups are in the allometric extension relationship and analyzed carapace size data of turtles of different sexes as discussed in the paper by Jolicoeur and Mosimann; for this data, we refer to Table 1.4 in [@RefF97]. Moreover, [@RefTM23] proposed a test procedure for the allometric extension relationship when observations are high-dimensional. Furthermore, properties of mixtures of two or more distributions that form the allometric extension relationship are also discussed in @RefF97 [section 8.7], [@RefKHF08] and [@RefMK14].
Let us derive some properties associated with the mixture distribution of two normal distributions forming the allometric extension relationship. Let $n$ be a positive integer, and let $f_{1}(\cdot)$ and $f_{2}(\cdot)$ be the probability density functions of $\mathcal{N}_n(\boldsymbol{\mu}_1,\boldsymbol{\Sigma}_1)$ and $\mathcal{N}_n(\boldsymbol{\mu}_2,\boldsymbol{\Sigma}_2)$, respectively. We assume that $(\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1)$ and $(\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2)$ satisfy [\[aem\]](#aem){reference-type="eqref" reference="aem"} and that the $n$-dimensional random variable $\boldsymbol{X}$ follows a mixture distribution whose probability density function is given by $$f_X(\boldsymbol{x}) = \pi_1 f_{1}(\boldsymbol{x}) + \pi_2 f_{2}(\boldsymbol{x}) \quad (\boldsymbol{x} \in \mathbb{R}^n),$$ where $\pi_1$ and $\pi_2$ are positive values satisfying $\pi_1 + \pi_2 = 1$. In this case, the following properties about the covariance matrix $\boldsymbol{\Sigma} = V[\boldsymbol{X}]$ hold.
**Proposition 1**. *Under the conditions stated above, $$\begin{aligned}
\mbox{(i)} & \quad \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}) = \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1), \label{AE1} \\
\mbox{(ii)} & \quad \lambda_1(\boldsymbol{\Sigma} ) = \pi_1 \lambda_1(\boldsymbol{\Sigma}_1)+ \pi_2 \lambda_1(\boldsymbol{\Sigma}_2)+ \pi_1 \pi_2 \| \boldsymbol{\mu}_1 - \boldsymbol{\mu}_2 \|_2^2, \label{AE2} \\
\mbox{(iii)} & \quad \lambda_2(\boldsymbol{\Sigma}) \leq \pi_1 \lambda_2(\boldsymbol{\Sigma}_1) + \pi_2 \lambda_2(\boldsymbol{\Sigma}_2) , \label{AE3}\end{aligned}$$ where the sign of $\boldsymbol{\gamma}_1(\boldsymbol{\Sigma})$ is appropriately chosen in [\[AE1\]](#AE1){reference-type="eqref" reference="AE1"}.*
**Remark 1**. *The results [\[AE1\]](#AE1){reference-type="eqref" reference="AE1"} and [\[AE2\]](#AE2){reference-type="eqref" reference="AE2"} are given in Lemma 8.7.1 of [@RefF97], but [\[AE3\]](#AE3){reference-type="eqref" reference="AE3"} gives a better evaluation than the corresponding result in [@RefF97]. Therefore, we include the proof of Proposition [Proposition 1](#propae){reference-type="ref" reference="propae"}, although the proof is similar to the one in [@RefF97].*
**Remark 2**. *In this study, we consider the situation that $\lambda_1(\boldsymbol{\Sigma}_i) > \lambda_2(\boldsymbol{\Sigma}_i)$ for $i=1,2$. Therefore, we stress that the case where $\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2 = \boldsymbol{I}_n$ is not included in our setting.*
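Before moving on, the following small numerical sanity check of Proposition [Proposition 1](#propae){reference-type="ref" reference="propae"} may be helpful; it is not part of the paper's argument, and the diagonal covariance matrices, mixing weights, and mean vectors below are arbitrary illustrative choices.

```python
# Numerical check of Proposition 1 for one allometric extension pair.
# Assumptions: diagonal Sigma_1, Sigma_2 with leading eigenvector e_1,
# and mu_1 - mu_2 parallel to e_1.
import numpy as np

pi1, pi2 = 0.4, 0.6
Sigma1 = np.diag([5.0, 2.0, 1.0, 0.5])    # lambda_1(Sigma_1) = 5
Sigma2 = np.diag([7.0, 1.5, 1.0, 0.2])    # lambda_1(Sigma_2) = 7
mu1 = np.array([1.5, 0.0, 0.0, 0.0])
mu2 = -mu1
d = mu1 - mu2

# Covariance of the mixture: E[V[X|Y]] + V[E[X|Y]].
Sigma = pi1 * Sigma1 + pi2 * Sigma2 + pi1 * pi2 * np.outer(d, d)

eigvals, eigvecs = np.linalg.eigh(Sigma)
lam1, lam2, gamma1 = eigvals[-1], eigvals[-2], eigvecs[:, -1]

print(np.allclose(np.abs(gamma1), [1, 0, 0, 0]))                    # property (i)
print(np.isclose(lam1, pi1 * 5.0 + pi2 * 7.0 + pi1 * pi2 * d @ d))  # property (ii)
print(lam2 <= pi1 * 2.0 + pi2 * 1.5 + 1e-12)                        # property (iii)
```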
# Spectral clustering algorithm for the allometric extension model {#sec:3}
In this section, we derive an upper bound on the misclassification probability of the spectral clustering algorithm for the allometric extension model. Let $\theta$ be a symmetric Bernoulli variable, $n$ a positive integer, $\boldsymbol{\mu}$ an $n$-dimensional vector, and $\boldsymbol{\Sigma}_1, \boldsymbol{\Sigma}_2$ $n \times n$ positive definite symmetric matrices satisfying $\lambda_1(\boldsymbol{\Sigma}_i) > \lambda_2(\boldsymbol{\Sigma}_i)$ for $i=1,2$. Suppose that $$\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) = \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_2) = 2\beta \boldsymbol{\mu}$$ for some $\beta > 0$. We consider an $n$-dimensional random variable $\boldsymbol{X}$ defined as $$\begin{aligned}
\boldsymbol{X} = \theta \boldsymbol{\mu} + \boldsymbol{g}^{(\theta)},\end{aligned}$$ where $\boldsymbol{g}^{(1)} \sim \mathcal{N}_n(\boldsymbol{0}_n, \boldsymbol{\Sigma}_1)$ and $\boldsymbol{g}^{(-1)} \sim \mathcal{N}_n(\boldsymbol{0}_n, \boldsymbol{\Sigma}_2)$ that are independent of $\theta$. Then, the probability density function $f_X(\cdot)$ of $\boldsymbol{X}$ is given by $$f_{X}(\boldsymbol{x}) = \frac{1}{2} f_{1}(\boldsymbol{x}) + \frac{1}{2} f_{2}(\boldsymbol{x}) \quad (\boldsymbol{x} \in \mathbb{R}^n),$$ where $f_{1}(\cdot)$ and $f_{2}(\cdot)$ are the probability density functions of $\mathcal{N}_n(\boldsymbol{\mu},\boldsymbol{\Sigma}_1)$ and $\mathcal{N}_n(-\boldsymbol{\mu},\boldsymbol{\Sigma}_2)$, respectively. As shown in the following proposition, $\boldsymbol{X}$ is an $\mathbb{R}^n$-valued sub-gaussian random variable.
**Proposition 2**. *There exists a constant $K \geq 1$ such that $$\label{kthi}
\| \langle \boldsymbol{X}, \boldsymbol{x} \rangle \|_{\psi_2} \leq K \| \langle \boldsymbol{X} , \boldsymbol{x} \rangle \|_{L^2}$$ for any $\boldsymbol{x} \in \mathbb{R}^n$, where $\| \cdot \|_{\psi_2}$ and $\| \cdot \|_{L^2}$ denote the sub-gaussian norm and the $L^2$ norm, respectively. In particular, [\[kthi\]](#kthi){reference-type="eqref" reference="kthi"} holds for any $\boldsymbol{x} \in \mathbb{R}^n$ when $K = \sqrt{{32}/(4 - \mathrm{e} ) } (= 4.9966\cdots)$.*
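As a rough numerical illustration of Proposition [Proposition 2](#kex){reference-type="ref" reference="kex"} (a Monte Carlo sketch under illustrative diagonal covariance matrices, not a proof), one can use the standard characterization that $\| Y \|_{\psi_2} \leq t$ whenever $E[\exp(Y^2/t^2)] \leq 2$ and check that $t = K \| \langle \boldsymbol{X}, \boldsymbol{x} \rangle \|_{L^2}$ with $K = \sqrt{32/(4-\mathrm{e})}$ keeps this expectation below $2$ for a given direction $\boldsymbol{x}$.

```python
# Monte Carlo check that E[exp(<X,x>^2 / t^2)] <= 2 for t = K * ||<X,x>||_{L^2}.
# Assumptions: diagonal Sigma_1, Sigma_2 with leading eigenvector e_1, mu parallel to e_1.
import numpy as np

rng = np.random.default_rng(3)
n, N = 5, 200_000
mu = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
s1 = np.array([3.0, 1.0, 0.8, 0.5, 0.3])      # eigenvalues of Sigma_1
s2 = np.array([4.0, 0.9, 0.7, 0.4, 0.2])      # eigenvalues of Sigma_2
K = np.sqrt(32 / (4 - np.e))

theta = rng.choice([-1, 1], size=N)
sd = np.where((theta == 1)[:, None], np.sqrt(s1), np.sqrt(s2))
X = theta[:, None] * mu + sd * rng.standard_normal((N, n))

x = rng.standard_normal(n)                    # an arbitrary test direction
Y = X @ x
t2 = K**2 * np.mean(Y**2)                     # (K * ||<X,x>||_{L^2})^2, estimated
print(np.mean(np.exp(Y**2 / t2)))             # should stay below 2 (up to MC error)
```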
For a positive integer $m$, we observe random variables $\boldsymbol{X}_1,\ldots,\boldsymbol{X}_m$ following $$\boldsymbol{X}_i = \theta_i \boldsymbol{\mu} + \boldsymbol{g}_i^{(\theta_i)} \quad (i=1,\ldots,m),$$ where $\theta_1,\ldots,\theta_m$ are iid copies of $\theta$ and $\boldsymbol{g}^{(t)}_1,\ldots,\boldsymbol{g}^{(t)}_m$ are iid copies of $\boldsymbol{g}^{(t)}$ for $t =-1,1$. Note that $\boldsymbol{X}_1,\ldots,\boldsymbol{X}_m$ are iid copies of $\boldsymbol{X}$. We presume that there are two groups $\mathcal{N}_n(\boldsymbol{\mu},\boldsymbol{\Sigma}_1)$ and $\mathcal{N}_n(-\boldsymbol{\mu},\boldsymbol{\Sigma}_2)$ forming the allometric extension relationship and $\theta_i$ indicates the group to which an individual $\boldsymbol{X}_i$ belongs for $i=1,\ldots,m$. The sample covariance matrix $\boldsymbol{S}_{m}$ is defined as $$\boldsymbol{S}_{m} = \frac{1}{m} \sum_{i=1}^m \boldsymbol{X}_i \boldsymbol{X}_i^\top.$$ Here we note that $E[\boldsymbol{X}_i] = \boldsymbol{0}_n$ for $i=1,\ldots,m$. As an estimator of $$\label{aes}
\boldsymbol{\Sigma}= E[\boldsymbol{X} \boldsymbol{X}^\top]
= \boldsymbol{\mu} \boldsymbol{\mu}^\top + \frac{1}{2} \boldsymbol{\Sigma}_1 + \frac{1}{2} \boldsymbol{\Sigma}_2,$$ the estimation error $\boldsymbol{S}_m - \boldsymbol{\Sigma}$ of $\boldsymbol{S}_m$ is evaluated in the following proposition, which will be used to prove our main result.
**Proposition 3**. *Let $K (\geq 1)$ be a constant satisfying [\[kthi\]](#kthi){reference-type="eqref" reference="kthi"} for any $\boldsymbol{x} \in \mathbb{R}^n$. For any $u \geq 0$, $$\begin{aligned}
&P\left(\| \boldsymbol{ S }_m - \boldsymbol{ \Sigma } \|_{\mathrm{op}} \leq C K^2 \left( \sqrt{ \frac{n+u}{m} } + \frac{n+u}{m} \right) \left(\frac{\lambda_1(\boldsymbol{\Sigma}_1) + \lambda_1(\boldsymbol{\Sigma}_2)}{2} + \| \boldsymbol{\mu} \|_2^2 \right)
\right)\\
&\geq 1- 2 \mathrm{e} ^{-u},\end{aligned}$$ where $C$ is some positive absolute constant.*
**Remark 3**. *If we use the result of Exercise 4.7.3 of [@RefV18], Proposition [Proposition 3](#evs){reference-type="ref" reference="evs"} immediately follows from Propositions [Proposition 1](#propae){reference-type="ref" reference="propae"} and [Proposition 2](#kex){reference-type="ref" reference="kex"}. The proof of Proposition [Proposition 3](#evs){reference-type="ref" reference="evs"} is included for completeness.*
**Remark 4**. *It also holds that $$E[ \| \boldsymbol{ S }_m - \boldsymbol{ \Sigma } \|_{\mathrm{op}} ]
\leq C K^2 \left( \sqrt{\frac{n}{m}} + \frac{n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1) + \lambda_1(\boldsymbol{\Sigma}_2)}{2} + \| \boldsymbol{\mu} \|_2^2 \right) ,$$ which is a direct consequence of Theorem 4.7.1 of [@RefV18] and Propositions [Proposition 1](#propae){reference-type="ref" reference="propae"} and [Proposition 2](#kex){reference-type="ref" reference="kex"}.*
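A small Monte Carlo experiment can illustrate the $\sqrt{n/m}$ scaling of $\| \boldsymbol{S}_m - \boldsymbol{\Sigma} \|_{\mathrm{op}}$ suggested above; the sketch below uses illustrative diagonal covariance matrices satisfying the allometric extension relationship and is not part of the theoretical development.

```python
# Monte Carlo illustration of the operator-norm error of S_m around
# Sigma = mu mu^T + (Sigma_1 + Sigma_2)/2, compared with sqrt(n/m).
# Assumptions: diagonal Sigma_1, Sigma_2 with leading eigenvector e_1, mu parallel to e_1.
import numpy as np

rng = np.random.default_rng(1)
n = 20
mu = np.zeros(n); mu[0] = 2.0
Sigma1 = np.diag(np.linspace(4.0, 1.0, n))
Sigma2 = np.diag(np.linspace(6.0, 0.5, n))
Sigma = np.outer(mu, mu) + 0.5 * Sigma1 + 0.5 * Sigma2

def sample_X(m):
    theta = rng.choice([-1, 1], size=m)
    sd = np.where((theta == 1)[:, None], np.sqrt(np.diag(Sigma1)), np.sqrt(np.diag(Sigma2)))
    return theta[:, None] * mu + sd * rng.standard_normal((m, n))

for m in [200, 800, 3200]:
    errs = []
    for _ in range(20):
        X = sample_X(m)
        errs.append(np.linalg.norm(X.T @ X / m - Sigma, 2))  # spectral norm
    print(m, round(float(np.mean(errs)), 3), round(float(np.sqrt(n / m)), 3))
```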
If we regard binary clustering as a binary classification problem for unlabeled data, the main objective is to classify individuals in a random sample into the correct groups, where each individual belongs to one of the two groups. Let us assume that $\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}) \rangle >0$. In our problem setting, the spectral clustering algorithm classifies $\boldsymbol{X}_1,\ldots,\boldsymbol{X}_m$ into two clusters by the signs of $\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle$ $(i=1,\ldots,m)$. The misclassification probability of $\boldsymbol{X}_i$ can be expressed as $$P(\theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0)$$ for $i=1,\ldots,m$. As will be remarked later, evaluating the misclassification probability enables us to evaluate the misclustering rate; see Remark [Remark 6](#remmisclust){reference-type="ref" reference="remmisclust"}. The following theorem provides a non-asymptotic upper bound on this misclassification probability.
**Theorem 4**. *Let $C$, $K$, $K_g$, and $c$ be positive absolute constants which are independent of $m$, $n$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$, and let $c_1 = 1+K_g^2/\sqrt{c}$. Suppose that $m$, $n$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$ satisfy $$\begin{aligned}
&\sqrt{2} C K^2 \left ( \sqrt{\frac{2n}{m}} + \frac{2n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1) + \lambda_1(\boldsymbol{\Sigma}_2)}{ \| \boldsymbol{\mu} \|_2^2} + 2 \right)
\nonumber\\
&\leq \frac{\alpha \| \boldsymbol{\mu}\|_2}{c_1 \sqrt{n \max_{j=1,2}\{\lambda_1(\boldsymbol{\Sigma}_j)\}} + \| \boldsymbol{\mu} \|_2}
\label{tha}\end{aligned}$$ for some $\alpha \in (0,1)$. Then it holds that $$\begin{aligned}
P(\theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle <0) &\leq \Phi\left( \frac{-(1-\alpha)\|\boldsymbol{\mu}\|_2}{\sqrt{\max_{j=1,2} \{ \lambda_1(\boldsymbol{\Sigma}_j) \} }} \right) + 6 \mathrm{e} ^{-n}\end{aligned}$$ for $i=1,\ldots,m$, where $\Phi(\cdot)$ is the distribution function of the standard normal distribution.*
**Remark 5**. *The constant $K$ is given in Proposition [Proposition 2](#kex){reference-type="ref" reference="kex"}, and $C$ in Proposition [Proposition 3](#evs){reference-type="ref" reference="evs"}. Moreover, the constant $K_g$ is the sub-gaussian norm of a standard normal variable; in particular, $K_g = \sqrt{8/3}$. Letting $\boldsymbol{g} \sim \mathcal{N}_n (\boldsymbol{0}_n, \boldsymbol{I}_n)$, we have $$\label{v33}
P( \bigl| \| \boldsymbol{g}\|_2 -\sqrt{n} \bigr| \geq t) \leq 2 \exp\left( - \frac{ct^2}{K_g^4} \right)$$ for all $t\geq 0$; see, e.g., Equation (3.3) in @RefV18 [p.40]. The constant $c$ in Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"} appears in the inequality [\[v33\]](#v33){reference-type="eqref" reference="v33"}.*
Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"} also leads to a non-asymptotic lower bound for the probability concerning the number of misclassifications announced in Section [1](#sec:1){reference-type="ref" reference="sec:1"}. In our setting, [\[verp\]](#verp){reference-type="eqref" reference="verp"} is formulated as $$\begin{aligned}
&P\left( \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0 \} \leq \varepsilon m \right)
\nonumber\\
&= 1 - P\left( \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0 \} > \varepsilon m \right).
\label{kerp}\end{aligned}$$ By applying the Markov inequality, we have $$\begin{aligned}
&P\left( \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0 \} > \varepsilon m \right) \\
&\leq \frac{1}{\varepsilon m} \sum_{i=1}^m E\left[1\left\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0 \right\} \right]
= \frac{1}{\varepsilon} P\left( \theta_1 \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_1 \rangle < 0 \right),\end{aligned}$$ because $(\theta_1 \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_1 \rangle, \ldots, \theta_m \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_m \rangle )$ is exchangeable. Thus, we obtain the following corollary to Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"}.
**Corollary 5**. *Consider constants $C$, $K$, and $c_1$ in Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"}. Suppose that $m$, $n$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$ satisfy [\[tha\]](#tha){reference-type="eqref" reference="tha"} for some $\alpha \in (0,1)$. Then it holds that $$P\left( \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0 \} \leq \varepsilon m \right)
\geq 1 - \frac{1}{\varepsilon}\Phi\left( \frac{-(1-\alpha)\|\boldsymbol{\mu}\|_2}{\sqrt{\max_{j=1,2} \{\lambda_1(\boldsymbol{\Sigma}_j)\}}} \right) - \frac{6}{\varepsilon} \mathrm{e} ^{-n} .$$*
**Remark 6**. *In our setting, the misclustering rate stated in Section [1](#sec:1){reference-type="ref" reference="sec:1"} is expressed as $$\label{misclustering}
\frac{1}{m} \min \left\{ \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_m), \boldsymbol{X}_i \rangle < 0\}, \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_m), \boldsymbol{X}_i \rangle > 0\} \right\}.$$ It holds that $$\begin{aligned}
& P\left( \frac{1}{m} \min \left\{ \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_m), \boldsymbol{X}_i \rangle < 0\}, \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_m), \boldsymbol{X}_i \rangle > 0\} \right\} \leq \varepsilon \right) \\
& \geq P\left( \sum_{i=1}^m 1\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_m), \boldsymbol{X}_i \rangle < 0\} \leq \varepsilon m \right).\end{aligned}$$ We can evaluate the right-hand side by using Corollary [Corollary 5](#co0){reference-type="ref" reference="co0"}. In addition, the expectation of [\[misclustering\]](#misclustering){reference-type="eqref" reference="misclustering"} may be similarly evaluated. Note that [\[misclustering\]](#misclustering){reference-type="eqref" reference="misclustering"} is called "misclassification rate" in [@RefCZ18] and [@RefAFW22].*
We define a signal-to-noise ratio $\eta$ for our problem as $$\eta = \frac{\|\boldsymbol{\mu}\|_2^2}{\max_{j=1,2} \{ \lambda_1(\boldsymbol{\Sigma}_j)\}},$$ which plays an important role in evaluating performance. The numerator $\|\boldsymbol{\mu}\|_2^2$ and the denominator $\max_{j=1,2} \{\lambda_1(\boldsymbol{\Sigma}_j)\}$ correspond to the strength of the signal and of the noise, respectively. If the noise is large compared to the signal, the signal is hidden by the noise, which makes detection difficult. In this sense, $\eta$ can be interpreted as a measure of the difficulty of classification when the allometric extension model is considered. If $\eta$ is small (large), it is difficult (easy) to classify individuals in a random sample into the correct groups. It is easy to see that [\[tha\]](#tha){reference-type="eqref" reference="tha"} is fulfilled when $m$, $n$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$ satisfy $$\label{thas}
2^{ 3/2 } C K^2 \left ( \sqrt{\frac{2n}{m}} + \frac{2n}{m} \right) \left( \frac{1}{\eta} + 1 \right)
\leq \frac{\alpha }{c_1 \sqrt{n/\eta} + 1}.$$ Hence, if $n/\eta = O(1)$, then [\[tha\]](#tha){reference-type="eqref" reference="tha"} holds for sufficiently large $m$ compared to $n$.
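As a toy computation (the numerical values below are arbitrary assumptions, and the absolute constants $C$, $K$, and $c_1$ are not evaluated), the following shows a regime where $\|\boldsymbol{\mu}\|_2^2$ grows linearly in $n$ while $\max_{j=1,2}\lambda_1(\boldsymbol{\Sigma}_j)$ stays bounded, so that $n/\eta$ remains constant and [\[thas\]](#thas){reference-type="eqref" reference="thas"} eventually holds once $m$ is sufficiently large relative to $n$.

```python
# Signal-to-noise ratio eta and the quantity n/eta that drives condition (thas).
# Assumption: ||mu||_2^2 grows linearly in n while lambda_1 stays bounded.
import numpy as np

lam1_max = 3.0                              # max_j lambda_1(Sigma_j), kept fixed
for n in [10, 100, 1000]:
    mu_norm_sq = 0.25 * n                   # ||mu||_2^2 proportional to n
    eta = mu_norm_sq / lam1_max
    print(n, eta, n / eta)                  # n/eta = lam1_max / 0.25 = 12 for every n
```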
**Remark 7**. *[@RefAFW22] and [@RefN22] discuss this kind of topic in a similar context.*
**Remark 8**. *By the Mills inequality, Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"} leads to $$\begin{aligned}
&P(\theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle \leq 0)
\nonumber\\
&\leq
\frac{1}{\sqrt{2\pi (1-\alpha)^{2}\eta}} \exp\left(- \frac{(1-\alpha)^2 \eta}{2} \right) + 6 \mathrm{e} ^{-n}
\quad (i=1,\ldots,m).
\label{millsa}\end{aligned}$$ The inequality [\[millsa\]](#millsa){reference-type="eqref" reference="millsa"} shows that the misclassification probability for an individual decays exponentially with the signal-to-noise ratio $\eta$ (multiplied by a constant) and the dimension $n$.*
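For a sense of scale, the sketch below evaluates the right-hand side of [\[millsa\]](#millsa){reference-type="eqref" reference="millsa"} numerically; the choices $\alpha = 0.5$, $n = 50$, and the values of $\eta$ are arbitrary illustrative assumptions.

```python
# Numerical evaluation of the upper bound in (millsa) as a function of eta.
import numpy as np

def mills_bound(eta, n, alpha=0.5):
    a = (1 - alpha) ** 2 * eta
    return np.exp(-a / 2) / np.sqrt(2 * np.pi * a) + 6 * np.exp(-n)

for eta in [4, 16, 64]:
    print(eta, mills_bound(eta, n=50))      # decays quickly as eta grows
```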
Finally, we consider the probability of the event $$\left\{ \bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle > 0 \bigr\} \right\} \cup \left\{ \bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle <0 \bigr\} \right\}$$ indicating all individuals $\boldsymbol{X}_1,\ldots,\boldsymbol{X}_m$ are clustered correctly, that is to say, the misclustering rate equals zero. Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"} implies the consistency of clustering in the sense of [\[consc\]](#consc){reference-type="eqref" reference="consc"} under a high-dimensional regime.
**Corollary 6**. *As $m,n \to \infty$ with $$\label{ARh}
\frac{n}{m} \to 0, \quad \frac{\log{m}}{n} \to 0 ,$$ if $n/\eta = O(1)$, then $$\label{consc}
P \left( \left\{ \bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle > 0 \bigr\} \right\} \cup \left\{ \bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle <0 \bigr\} \right\} \right) \to 1.$$*
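The following Monte Carlo sketch illustrates this consistency statement under simplifying, illustrative assumptions (diagonal covariance matrices with leading eigenvector $\boldsymbol{e}_1$, and $\|\boldsymbol{\mu}\|_2^2$ proportional to $n$ so that $n/\eta = O(1)$); along a sequence with $n/m \to 0$ and $\log m / n \to 0$, the estimated probability of clustering every individual correctly should tend toward one.

```python
# Estimated probability that all m individuals are clustered correctly,
# along a sequence with n/m -> 0 and log(m)/n -> 0.
# Assumptions: diagonal Sigma_1, Sigma_2 with leading eigenvector e_1,
# and ||mu||_2^2 = n (so n/eta stays bounded).
import numpy as np

rng = np.random.default_rng(2)

def perfect_clustering_freq(m, n, reps=20):
    mu = np.zeros(n); mu[0] = np.sqrt(n)
    s1 = np.linspace(3.0, 1.0, n)               # eigenvalues of Sigma_1
    s2 = np.linspace(4.0, 0.5, n)               # eigenvalues of Sigma_2
    hits = 0
    for _ in range(reps):
        theta = rng.choice([-1, 1], size=m)
        sd = np.where((theta == 1)[:, None], np.sqrt(s1), np.sqrt(s2))
        X = theta[:, None] * mu + sd * rng.standard_normal((m, n))
        v = np.linalg.eigh(X.T @ X / m)[1][:, -1]   # leading eigenvector of S_m
        signs = np.sign(X @ v) * theta
        hits += bool(np.all(signs > 0) or np.all(signs < 0))
    return hits / reps

for m, n in [(200, 10), (1600, 40), (12800, 160)]:
    print(m, n, perfect_clustering_freq(m, n))
```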
# Proofs {#sec:4}
## Proof of Proposition [Proposition 1](#propae){reference-type="ref" reference="propae"} {#proof-of-proposition-propae}
For simplicity, denote $\boldsymbol{\gamma}_1 (\boldsymbol{\Sigma }_1) = \boldsymbol{\gamma}_1 ( \boldsymbol{\Sigma}_2)$ by $\boldsymbol{\gamma}_1$, where the sign of $\boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_2)$ is appropriately chosen. Let us construct a random variable $Y$ taking values in $\{1,2\}$ such that $P(Y = i) = \pi_i$ and $\boldsymbol{X} | \{Y=i\} \sim \mathcal{N}_n (\boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i)$ conditionally for $i=1,2$. Then, we have $E[ \boldsymbol{X} \arrowvert Y = i] = \boldsymbol{\mu}_i$ and $V[\boldsymbol{X} \arrowvert Y = i] = \boldsymbol{ \Sigma }_i$ for $i=1,2$. These formulae give $$E[V[\boldsymbol{X}\arrowvert Y]] = \sum_{i=1}^2 \pi_i \boldsymbol{ \Sigma }_i, \quad
V[ E[ \boldsymbol{X} \arrowvert Y]]= \sum_{i=1}^2 \pi_i \left( \boldsymbol{\mu}_i - \bar{\boldsymbol{\mu}} \right) \left( \boldsymbol{\mu}_i - \bar{\boldsymbol{\mu}} \right)^\top,$$ where $$\bar{\boldsymbol{\mu}} = \sum_{i=1}^2 \pi_i\boldsymbol{\mu}_i = \boldsymbol{\mu}_1 - \frac{\pi_2}{\beta} \boldsymbol{\gamma}_1
=\boldsymbol{\mu}_2 + \frac{\pi_1}{\beta} \boldsymbol{\gamma}_1 .$$ It follows that $$\begin{aligned}
&\boldsymbol{ \Sigma }
= E[V[\boldsymbol{X}\arrowvert Y]] + V[ E[ \boldsymbol{X} \arrowvert Y]]
= \sum_{i=1}^2 \pi_i \boldsymbol{ \Sigma }_i + \sum_{i=1}^2 \pi_i \left( \boldsymbol{\mu}_i - \bar{\boldsymbol{\mu}} \right) \left( \boldsymbol{\mu}_i - \bar{\boldsymbol{\mu}} \right)^\top
\\
&= \sum_{i=1}^2 \pi_i \boldsymbol{ \Sigma }_i + \frac{\pi_1 \pi_2}{\beta^2} \boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top ,\end{aligned}$$ because $$\begin{aligned}
&\sum_{i=1}^2 \pi_i \left( \boldsymbol{\mu}_i - \bar{\boldsymbol{\mu}} \right) \left( \boldsymbol{\mu}_i - \bar{\boldsymbol{\mu}} \right)^\top
= \pi_1 \left(\frac{\pi_2}{\beta}\boldsymbol{\gamma}_1 \right) \left(\frac{\pi_2}{\beta}\boldsymbol{\gamma}_1 \right)^\top
+ \pi_2 \left(-\frac{\pi_1}{\beta}\boldsymbol{\gamma}_1 \right) \left( -\frac{\pi_1}{\beta}\boldsymbol{\gamma}_1 \right)^\top \\*
&=\frac{\pi_1\pi_2^2}{\beta^2} \boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top +
\frac{\pi_1^2 \pi_2}{\beta^2}\boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top
= \frac{\pi_1 \pi_2}{\beta^2} (\pi_1 + \pi_2)\boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top
= \frac{\pi_1 \pi_2}{\beta^2} \boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top,\end{aligned}$$ which is due to the definition of the allometric extension model. From $$\boldsymbol{ \Sigma } \boldsymbol{\gamma}_1
= \sum_{i=1}^2 \pi_i \boldsymbol{ \Sigma }_i \boldsymbol{\gamma}_1 + \frac{\pi_1 \pi_2}{\beta^2} \boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top \boldsymbol{\gamma}_1
= \left( \sum_{i=1}^2 \pi_i \lambda_1 ( \boldsymbol{ \Sigma }_i ) + \frac{ \pi_1 \pi_2}{\beta^2} \right) \boldsymbol{\gamma}_1 ,$$ we see that $\boldsymbol{\gamma}_1$ is an eigenvector of $\boldsymbol{ \Sigma }$ corresponding to the eigenvalue $$\sum_{i=1}^2 \pi_i \lambda_1 ( \boldsymbol{ \Sigma }_i ) + \frac{\pi_1 \pi_2}{\beta^2}.$$ Let $\boldsymbol{ \xi }$ be another unit-length eigenvector of $\boldsymbol{\Sigma}$ orthogonal to $\boldsymbol{\gamma}_1$. As $\boldsymbol{ \xi }$ and $\boldsymbol{\gamma}_1$ are orthogonal, it holds that $$\boldsymbol{ \xi }^\top \boldsymbol{ \Sigma } \boldsymbol{ \xi }
= \sum_{i=1}^2 {\pi_i} \boldsymbol{ \xi }^\top \boldsymbol{ \Sigma }_i \boldsymbol{ \xi } + \frac{ \pi_1 \pi_2 }{\beta^2} \boldsymbol{ \xi }^\top \boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top \boldsymbol{ \xi }
= \sum_{i=1}^2{ \pi_i } \boldsymbol{ \xi }^\top \boldsymbol{ \Sigma }_i \boldsymbol{ \xi } .$$ By using the spectral decomposition $$\boldsymbol{\Sigma }_i = \lambda_1( \boldsymbol{\Sigma }_i ) \boldsymbol{\gamma}_1 \boldsymbol{\gamma}_1 ^\top + \sum_{j=2}^n
\lambda_j( \boldsymbol{\Sigma}_i ) \boldsymbol{\gamma}_j ( \boldsymbol{\Sigma}_i ) \boldsymbol{\gamma}_j( \boldsymbol{\Sigma}_i ) ^\top
\quad (i = 1,2),$$
we have $$\begin{aligned}
&\boldsymbol{ \xi }^\top \boldsymbol{ \Sigma }_i \boldsymbol{ \xi }
= \boldsymbol{ \xi }^\top \left( \sum_{j=2}^n \lambda_j( \boldsymbol{ \Sigma }_i ) \boldsymbol{\gamma}_j( \boldsymbol{ \Sigma }_i ) \boldsymbol{\gamma}_j( \boldsymbol{ \Sigma }_i )^\top \right) \boldsymbol{ \xi }
= \sum_{j=2}^n \lambda_j ( \boldsymbol{ \Sigma }_i ) \langle \boldsymbol{ \xi } , \boldsymbol{\gamma}_j( \boldsymbol{ \Sigma }_i ) \rangle ^2 \\
& \leq \lambda_2( \boldsymbol{ \Sigma }_i ) \sum_{j=2}^n \langle \boldsymbol{ \xi } , \boldsymbol{\gamma}_j( \boldsymbol{ \Sigma }_i ) \rangle ^2
= \lambda_2( \boldsymbol{ \Sigma }_i )\end{aligned}$$ for $i=1,2$. Consequently, we deduce that $$\boldsymbol{ \xi }^\top \boldsymbol{ \Sigma } \boldsymbol{ \xi } \leq \sum_{i=1}^2 \pi_i \lambda_2(\boldsymbol{ \Sigma }_i)
< \sum_{i=1}^2 \pi_i \lambda_1 ( \boldsymbol{ \Sigma }_i ) + \frac{ \pi_1 \pi_2}{\beta^2}$$ for any unit-length eigenvector $\boldsymbol{\xi}$ of $\boldsymbol{\Sigma}$ orthogonal to $\boldsymbol{\gamma}_1$, hence $$\sum_{i=1}^2 \pi_i \lambda_1 ( \boldsymbol{ \Sigma }_i ) + \frac{\pi_1 \pi_2}{\beta^2}$$ is the unique largest eigenvalue $\lambda_1(\boldsymbol{\Sigma})$ of $\boldsymbol{\Sigma}$ and $$\lambda_2(\boldsymbol{ \Sigma }) \leq \sum_{i=1}^2 \pi_i \lambda_2(\boldsymbol{ \Sigma }_i)
.$$ This completes the proof. ◻
## Proof of Proposition [Proposition 2](#kex){reference-type="ref" reference="kex"} {#proof-of-proposition-kex}
When $\boldsymbol{x} = \boldsymbol{0}_n$, [\[kthi\]](#kthi){reference-type="eqref" reference="kthi"} holds for any $K (\geq 1)$. Hereafter, we consider $\boldsymbol{x} \neq \boldsymbol{0}_n$.
We first observe that $$\| \langle \boldsymbol{X}, \boldsymbol{x} \rangle \|_{L^2}
= E [\langle \boldsymbol{X}, \boldsymbol{x} \rangle^2]
= E [\boldsymbol{x}^{\top} \boldsymbol{X}\boldsymbol{X}^{\top} \boldsymbol{x}]
= \boldsymbol{x}^{\top} E[\boldsymbol{X}\boldsymbol{X}^{\top}] \boldsymbol{x} = \langle \boldsymbol{\Sigma} \boldsymbol{x}, \boldsymbol{x} \rangle.$$ Next, we evaluate $\| \langle \boldsymbol{X}, \boldsymbol{x} \rangle \|_{\psi_2}$. It holds that $$\begin{aligned}
& E \biggl[\exp \left( \frac{ { \langle \boldsymbol{X}, \boldsymbol{x} \rangle }^2 }{ t^2 } \right) \biggr]
= E \biggl[ \exp \left( \frac{ { \langle \boldsymbol{\mu} + \boldsymbol{g}^{(\theta)}, \boldsymbol{x} \rangle }^2 }{ t^2 } \right) \biggr] \nonumber \\
&= E \biggl[ E\biggl[ \exp \left( \frac{ { \langle \boldsymbol{\mu} + \boldsymbol{g}^{(\theta)}, \boldsymbol{x} \rangle }^2 }{ t^2 } \right) \biggl| \theta \biggr] \biggr] \nonumber \\
&= \frac{1}{2} E \biggl[ \exp{ \left( \frac{ { \langle \boldsymbol{\mu} + \boldsymbol{g}^{(1)}, \boldsymbol{x} \rangle }^2 }{ t^2 } \right)} \biggr]
+ \frac{1}{2} E \biggl[ \exp{ \left( \frac{ { \langle -\boldsymbol{\mu} + \boldsymbol{g}^{(-1)}, \boldsymbol{x} \rangle }^2 }{ t^2 } \right)} \biggr] ,
\label{kthp1}\end{aligned}$$ where $t$ will be specified later. As for the first term on the right-hand side of [\[kthp1\]](#kthp1){reference-type="eqref" reference="kthp1"}, it follows from $\langle \boldsymbol{\Sigma}_1^{- 1/2} \boldsymbol{g}^{(1)} , { \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} }/{\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2} \rangle \sim \mathcal{N}(0,1)$ that $$\begin{aligned}
& \frac{1}{2} E \left[ \exp{ \left( \frac{ \langle \boldsymbol{\mu} + \boldsymbol{g}^{(1)}, \boldsymbol{x} \rangle ^2 }{ t^2 } \right) } \right] \\
&\leq \frac{1}{2} E \left[ \exp { \left( \frac{ {2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle }^2 + 2 {\langle \boldsymbol{g}^{(1)}, \boldsymbol{x} \rangle }^2 } { t^2 } \right)}
\right] \\
&= \frac{1}{2} \exp{ \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right) } E \left[ \exp{ \left( \frac{ 2\langle \boldsymbol{\Sigma}_1^{- 1/2} \boldsymbol{g}^{(1)} , \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \rangle^2 }{ t^2 } \right) } \right] \\
&= \frac{1}{2} \exp{ \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right) } E \left[ \exp{ \left( \frac{ 2 \| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2 \left\langle \boldsymbol{\Sigma}_1^{- 1/2} \boldsymbol{g}^{(1)} ,
{ \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} }/{\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2} \right\rangle^2 }{ t^2 } \right) } \right] \\
&= \frac{1}{2} \exp{ \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right) } E \left[ \exp \left( \frac{ Z^2 }{ { t^2 }/{( 2\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2 )} } \right) \right], \end{aligned}$$ where $Z$ is a standard normal random variable. The expectation on the right-hand side is finite when $t^2 > 4\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2$ and is given by $$E \left[ \exp \left( \frac{ Z^2 }{ { t^2 }/{( 2\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2 )} } \right) \right]
= \frac{1}{ \sqrt{ 1 - {4\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2 }/{ t^2 }} },$$ which yields that $$\frac{1}{2} E \biggl[ \exp{ \left( \frac{ \langle \boldsymbol{\mu} + \boldsymbol{g}^{(1)}, \boldsymbol{x} \rangle ^2 }{ t^2 } \right) } \biggr] \leq \frac{1}{2} \exp{ \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right)} \frac{1}{ \sqrt{ 1 - {4\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2 }/{ t^2 }} }.$$ Similarly, as for the second term on the right-hand side of [\[kthp1\]](#kthp1){reference-type="eqref" reference="kthp1"}, when $t^2 > 4\| \boldsymbol{\Sigma}_2^{ 1/2}\boldsymbol{x} \|_2^2$, it holds that $$\frac{1}{2} E \biggl[ \exp{ \left( \frac{ \langle -\boldsymbol{\mu} + \boldsymbol{g}^{(-1)}, \boldsymbol{x} \rangle ^2 }{ t^2 } \right) } \biggr] \leq \frac{1}{2} \exp{ \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right) } \frac{1}{ \sqrt{ 1 - {4\| \boldsymbol{\Sigma}_1^{ 1/2}\boldsymbol{x} \|_2^2 }/{ t^2 }} }.$$ Letting $M = \max\{ \| \boldsymbol{\Sigma}_1^{1/2} \boldsymbol{x} \|_2 , \| \boldsymbol{\Sigma}_2^{1/2} \boldsymbol{x} \|_2\}$, we have $$E \biggl[\exp \left( \frac{ { \langle \boldsymbol{X}, \boldsymbol{x} \rangle }^2 }{ t^2 } \right) \biggr] \leq \exp{ \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right) } \frac{1}{ \sqrt{ 1 - {4 M^2 }/{ t^2 }} }$$ when $t^2> 4M^2$. Hence, from the definition of the sub-gaussian norm, it suffices to show that there exists a constant $K$ satisfying this inequality with $t = K \sqrt{ \langle \boldsymbol{\Sigma} \boldsymbol{x}, \boldsymbol{x} \rangle }$ for all $\boldsymbol{x} (\neq \boldsymbol{0}_n)$. It follows from [\[aes\]](#aes){reference-type="eqref" reference="aes"} that $$\langle \boldsymbol{\Sigma} \boldsymbol{x} , \boldsymbol{x} \rangle
= \langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2+ \frac{1}{2} \| \boldsymbol{\Sigma}_1^{1/2} \boldsymbol{x} \|_2^2 + \frac{1}{2} \| \boldsymbol{\Sigma}_2^{1/2} \boldsymbol{x} \|_2^2
\geq \langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2.$$ Moreover, plugging $t = K \sqrt{ \langle \boldsymbol{\Sigma} \boldsymbol{x}, \boldsymbol{x} \rangle }$ into $\exp ( { 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }/{ t^2 })$, we have $$\exp \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ t^2 } \right)
= \exp \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ K^2 \langle \boldsymbol{\Sigma} \boldsymbol{x}, \boldsymbol{x} \rangle } \right)
\leq
\exp \left( \frac{ 2\langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 }{ K^2 \langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2 } \right)
= \exp \left( \frac{2}{K^2} \right).$$ Furthermore, if $K \geq 2$, then $\exp({2}/{K^2}) \leq \sqrt{ \mathrm{e} }$. Therefore, what is left is to show that there exists a constant $K \geq 2$ such that $$\label{kthp2}
\frac{1}{ \sqrt{ 1 - {4 M^2 }/(K^2 \langle \boldsymbol{\Sigma} \boldsymbol{x}, \boldsymbol{x} \rangle )} }
\leq \frac{2}{ \sqrt{ \mathrm{e} } }$$ for all $\boldsymbol{x} (\neq 0)$. The inequality [\[kthp2\]](#kthp2){reference-type="eqref" reference="kthp2"} is equivalent to $$K^2 \geq \frac{16}{ \mathrm{e} } \left( \frac{4}{ \mathrm{e} } - 1 \right)^{-1} \frac{M^2}{ \langle \boldsymbol{\Sigma} \boldsymbol{x} , \boldsymbol{x} \rangle } .$$ From $$\frac{M^2}{ \langle \boldsymbol{\Sigma} \boldsymbol{x} , \boldsymbol{x} \rangle }
= \frac{M^2}{ \langle \boldsymbol{\mu} , \boldsymbol{x} \rangle^2+ \| \boldsymbol{\Sigma}_1^{1/2} \boldsymbol{x} \|_2^2 / 2 + \| \boldsymbol{\Sigma}_2^{1/2} \boldsymbol{x} \|_2^2 / 2 }
\leq \frac{2M^2}{ \| \boldsymbol{\Sigma}_1^{1/2} \boldsymbol{x} \|_2^2 + \| \boldsymbol{\Sigma}_2^{1/2} \boldsymbol{x} \|_2^2 }
\leq 2,$$ it follows that $$K = \sqrt{\frac{32}{ \mathrm{e} } \left( \frac{4}{ \mathrm{e} } - 1 \right)^{-1}} = \sqrt{\frac{32}{4 - \mathrm{e} } }$$ satisfies [\[kthp2\]](#kthp2){reference-type="eqref" reference="kthp2"} for all $\boldsymbol{x} (\neq \boldsymbol{0}_n)$. 0◻
## Proof of Proposition [Proposition 3](#evs){reference-type="ref" reference="evs"} {#proof-of-proposition-evs}
Fix $u \geq 0$. Let $$\boldsymbol{Z}_i = \boldsymbol{\Sigma}^{-1/2} \boldsymbol{X}_i \quad (i=1,\ldots,m), \quad \mbox{and} \quad
\boldsymbol{R} = \frac{1}{m} \sum_{i=1}^m \boldsymbol{Z}_i \boldsymbol{Z}_i^\top - \boldsymbol{I}_n.$$ It follows from Equation (4.25) of @RefV18 [p.94] that $$\| \boldsymbol{S}_m - \boldsymbol{\Sigma} \|_{\mathrm{op}} \leq \| \boldsymbol{R} \|_{\mathrm{op}} \|\boldsymbol{\Sigma} \|_{\mathrm{op}}.$$ By using Proposition [Proposition 2](#kex){reference-type="ref" reference="kex"} and Equation (4.22) of @RefV18 [p.91] with $t=\sqrt{u}$, we have $$P\left( \| \boldsymbol{R} \|_{\mathrm{op}} \leq K^2 \max\{\delta,\delta^2\} \right) \geq
1 - 2 \mathrm{e} ^{-u},$$ where $\delta = \tilde{C}(\sqrt{n} + \sqrt{u})/\sqrt{m}$. Note that $\tilde{C}$ is the absolute constant $C$ in Equation (4.22) of @RefV18. We see that $$\delta = \tilde{C} \biggl( \frac{ \sqrt{n} + \sqrt{u}}{ \sqrt{m}} \biggr) \leq \tilde{C} \sqrt{ \frac{2(n + u)}{m} } .$$ Letting $C = \max \{ \sqrt{2} \tilde{C}, 2 \tilde{C}^2 \}$, we have $$K^2 \max\{\delta,\delta^2\}
\leq K^2(\delta + \delta^2)\leq C K^2 \biggl( \sqrt{ \frac{n+u}{m}} + \frac{n+u}{m}\biggr).$$ Consequently, it follows that $$\begin{aligned}
& P \left( \| \boldsymbol{S}_m - \boldsymbol{\Sigma} \|_{\mathrm{op}} \leq C K^2 \biggl( \sqrt{ \frac{n+u}{m}} + \frac{n+u}{m} \biggr) \|\boldsymbol{\Sigma} \|_{\mathrm{op}}
\right) \\
&\geq
P \left( \| \boldsymbol{R} \|_{\mathrm{op}} \leq C K^2 \biggl( \sqrt{ \frac{n+u}{m}} + \frac{n+u}{m} \biggr) \right) \\
& \geq
P \left( \| \boldsymbol{R} \|_{\mathrm{op}} \leq K^2 \max\{\delta,\delta^2\} \right)
\geq 1 - 2 \mathrm{e} ^{-u}.\end{aligned}$$ Finally, [\[AE2\]](#AE2){reference-type="eqref" reference="AE2"} implies the assertion. ◻
## Proof of Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"} {#proof-of-theorem-mthm}
We see that $$\begin{aligned}
& P\biggl( \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) , \boldsymbol{X}_i\rangle < 0 \biggr) \\*
&= P \biggl( \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) , \boldsymbol{X}_i \rangle<0 \biggl\arrowvert \theta_i = 1 \biggr) P\biggl( \theta_i = 1 \biggr) \\*
& \quad + P \biggl( \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) , \boldsymbol{X}_i \rangle>0 \biggl\arrowvert \theta_i = -1 \biggr) P\biggl(\theta_i = -1 \biggr) \\*
&= \frac{1}{2} P \biggl( \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) , \boldsymbol{X}_i\rangle<0 \biggl\arrowvert \theta_i = 1 \biggr) +
\frac{1}{2} P \biggl( \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) , \boldsymbol{X}_i \rangle>0 \biggl\arrowvert \theta_i = -1 \biggr) \\*
&\leq \frac{1}{2} \left( \Phi\left( \frac{-(1-\alpha) \| \boldsymbol{\mu} \|_2 }{\sqrt {\lambda_1(\boldsymbol{\Sigma}_1)}} \right) + \Phi\left( \frac{-(1-\alpha) \| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_2)}} \right) \right) + 6 \mathrm{e} ^{-n}
\\*
& \leq \Phi\left( \frac{-( 1 - \alpha ) \| \boldsymbol{\mu} \|_2}{\sqrt{\max_{j=1,2} \{ \lambda_1(\boldsymbol{\Sigma}_j) \}}} \right) + 6 \mathrm{e} ^{-n} ,\end{aligned}$$ where the second-to-last inequality is a consequence of Lemma [Lemma 7](#lemk){reference-type="ref" reference="lemk"} presented below. ◻
**Lemma 7**. *Consider constants $C$, $K$, and $c_1$ in Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"}. If $m$, $n$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$ satisfy $$\sqrt{2} C K^2 \left ( \sqrt{\frac{2n}{m}} + \frac{2n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1) + \lambda_1(\boldsymbol{\Sigma}_2)}{ \| \boldsymbol{\mu} \|_2^2} + 2 \right)
\leq \frac{\alpha \| \boldsymbol{\mu} \|_2}{c_1 \sqrt{n \lambda_1(\boldsymbol{\Sigma}_1)} + \| \boldsymbol{\mu} \|_2}$$ for some $\alpha \in (0,1)$, then it holds that $$\label{lem2a}
P(\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle < 0 | \theta_i = 1)
\leq \Phi\left( \frac{-(1-\alpha)\|\boldsymbol{\mu}\|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) + 6 \mathrm{e} ^{-n}$$ for $i=1,\ldots,m$. If $m$, $n$, $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}_1$, and $\boldsymbol{\Sigma}_2$ satisfy $$\sqrt{2} C K^2 \left ( \sqrt{\frac{2n}{m}} + \frac{2n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1) + \lambda_1(\boldsymbol{\Sigma}_2)}{ \| \boldsymbol{\mu} \|_2^2} + 2 \right)
\leq \frac{\alpha \| \boldsymbol{\mu}\|_2}{c_1 \sqrt{n \lambda_1(\boldsymbol{\Sigma}_2)} + \| \boldsymbol{\mu} \|_2}$$ for some $\alpha \in (0,1)$, then it holds that $$\label{lem2b}
P(\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle > 0 | \theta_i = - 1)
\leq \Phi\left( \frac{-(1-\alpha)\|\boldsymbol{\mu}\|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_2)}} \right) + 6 \mathrm{e} ^{-n}$$ for $i=1,\ldots,m$.*
This lemma provides an evaluation of the conditional misclassification probability, which is the key to proving Theorem [Theorem 4](#mthm){reference-type="ref" reference="mthm"}. Lemma [Lemma 7](#lemk){reference-type="ref" reference="lemk"} is proved in the next subsection.
## Proof of Lemma [Lemma 7](#lemk){reference-type="ref" reference="lemk"} {#subsec;lem3}
We shall only prove [\[lem2a\]](#lem2a){reference-type="eqref" reference="lem2a"}, because the proof of [\[lem2b\]](#lem2b){reference-type="eqref" reference="lem2b"} is similar.
Fix $i \in \{1,\ldots,m \}$. Let us denote the event $\{ \theta_i = 1\}$ by $\mathrm{A}_i$ for simplicity. We have $$\begin{aligned}
&P \biggl(\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) , \boldsymbol{X}_i \rangle<0 \biggl\arrowvert \mathrm{A}_i\biggr) \nonumber \\*
&= P \biggl(\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) + \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) , \boldsymbol{X}_i - \boldsymbol{\mu} + \boldsymbol{\mu}\rangle<0 \biggl\arrowvert \mathrm{A}_i \biggr) \nonumber \\*
&= P\biggl( \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_1), \boldsymbol{X}_i - \boldsymbol{\mu} \rangle + \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) -\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1),\boldsymbol{\mu}\rangle +\langle \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) ,\boldsymbol{X}_i - \boldsymbol{\mu}\rangle \nonumber \\
& \qquad +\langle\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1),\boldsymbol{\mu}\rangle <0 \biggl\arrowvert \mathrm{A}_i\biggr) \nonumber \\*
&\leq
P\biggl(\langle\boldsymbol{\Sigma}_1^{1/2}(\boldsymbol{\gamma}_1(\boldsymbol{S}_{m})- \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)),\boldsymbol{\Sigma}_1^{-1/2}(\boldsymbol{X}_i - \boldsymbol{\mu})\rangle - \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2\cdot \| \boldsymbol{\mu} \|_2 \nonumber \\*
&\qquad + \|\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \biggl\langle \frac{\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)}{\| \boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 },\boldsymbol{\Sigma}_1^{-1/2}(\boldsymbol{X}_i - \boldsymbol{\mu})\biggr\rangle
+ \| \boldsymbol{\mu}\|_2 < 0\biggl\arrowvert \mathrm{A}_i \biggr) \nonumber \\*
&\leq
P\biggl(
-\|\boldsymbol{\Sigma}_1^{1/2} \|_{\mathrm{op}} \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_1)\|_2 \|\boldsymbol{\Sigma}_1^{- 1/2}(\boldsymbol{X}_i - \boldsymbol{\mu})\|_2
\nonumber \\*
&\qquad
- \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2\cdot \| \boldsymbol{\mu} \|_2 \nonumber \\
& \qquad
+ \|\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \biggl\langle \frac{\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)}{\| \boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 },\boldsymbol{\Sigma}_1^{-1/2}(\boldsymbol{X}_i - \boldsymbol{\mu})\biggr\rangle
+ \| \boldsymbol{\mu}\|_2 < 0\biggl\arrowvert \mathrm{A}_i \biggr)
, \label{lem2p1}\end{aligned}$$ where $\langle \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1),\boldsymbol{\mu} \rangle = \| \boldsymbol{\mu} \|_2$ and $$\begin{aligned}
&\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_1), \boldsymbol{X}_i - \boldsymbol{\mu} \rangle
= \langle \boldsymbol{\Sigma}_1^{1/2} (\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_1)), \boldsymbol{\Sigma}_1^{-1/2}(\boldsymbol{X}_i - \boldsymbol{\mu} )\rangle \\
&\geq -
\|\boldsymbol{\Sigma}_1^{1/2} (\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_1))\|_2 \|\boldsymbol{\Sigma}_1^{- 1/2}(\boldsymbol{X}_i - \boldsymbol{\mu} )\|_2 \\
&\geq -\|\boldsymbol{\Sigma}_1^{1/2} \|_{\mathrm{op}} \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1 (\boldsymbol{\Sigma}_1)\|_2 \|\boldsymbol{\Sigma}_1^{- 1/2}(\boldsymbol{X}_i - \boldsymbol{\mu})\|_2
\end{aligned}$$ are used. Let $\boldsymbol{g}_i = \boldsymbol{\Sigma}_1^{-1/2}(\boldsymbol{X}_i - \boldsymbol{\mu})$ and $$Z = \frac{1}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \|\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \biggl\langle \frac{\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)}{\|\boldsymbol{\Sigma}_1^{1/2}\boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 },\boldsymbol{g}_i \biggr\rangle.$$ Then, it can be shown that $\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} Z | \mathrm{A}_i \sim \mathcal{N}(0, \lambda_1(\boldsymbol{\Sigma}_1))$ conditionally because $\boldsymbol{g}_i | \mathrm{A}_i \sim \mathcal{N}_{n}(0, \boldsymbol{I}_n)$ conditionally and the normal distribution enjoys the property of reproductivity. The right-hand side of [\[lem2p1\]](#lem2p1){reference-type="eqref" reference="lem2p1"} is equal to $$\begin{aligned}
&P\Biggl( - \| \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \left(\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}\| \boldsymbol{g}_i\|_2 + \|\boldsymbol{\mu} \|_2\right) + \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} \cdot Z + \|\boldsymbol{\mu} \|_2 \\
&\qquad
<0\biggl\arrowvert \mathrm{A}_i \Biggr)\\
&=P\Biggl(Z< \| \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \left(\| \boldsymbol{g}_i \|_2 + \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}}\biggl\arrowvert \mathrm{A}_i \Biggr).\end{aligned}$$ Let $\delta$ be a constant satisfying $$\sqrt{2} C K^2 \biggl( \sqrt{ \frac{2n}{m} } + \frac{2n}{m} \biggr) \biggl( \frac{\lambda_1(\boldsymbol{\Sigma}_1)}{\| \boldsymbol{\mu} \|_2^2} + \frac{\lambda_1(\boldsymbol{\Sigma}_2)}{\| \boldsymbol{\mu} \|_2^2} + 2 \biggr)
\leq \delta \leq
\frac{\alpha \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}}}{ c_1 \sqrt{n} + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}}}.$$ Then, we see that $$\begin{aligned}
& P\Biggl(Z< \| \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \left(\| \boldsymbol{g}_i \|_2 + \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \biggl\arrowvert \mathrm{A}_i \Biggr) \nonumber \\
&= P\Biggl( \Biggl\{Z< \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 \left(\| \boldsymbol{g}_i \|_2 + \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}}\Biggr\} \nonumber \\
& \qquad
\cap \Biggl\{ \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 \leq \delta \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr) \nonumber \\
& \quad + P\Biggl( \Biggl\{Z< \| \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1)\|_2 \left(\| \boldsymbol{g}_i \|_2 + \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \Biggr\} \nonumber\\
& \qquad \cap \Biggl\{ \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 > \delta \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr) \nonumber \\
&\leq P\Biggl( \Biggl\{Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \Biggr\} \nonumber \\
& \qquad \cap \Biggl\{ \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 \leq \delta \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr)
+ P \Biggl( \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 > \delta \biggl\arrowvert \mathrm{A}_i \Biggr) \nonumber \\
& \leq
P\Biggl( Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \biggl\arrowvert \mathrm{A}_i \Biggr) \nonumber \\
& \quad + P \Biggl( \|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 > \delta \biggl\arrowvert \mathrm{A}_i \Biggr)\label{lem2p2}.\end{aligned}$$ As for the first term on the right-hand side of [\[lem2p2\]](#lem2p2){reference-type="eqref" reference="lem2p2"}, we have $$\begin{aligned}
& P\Biggl( Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} } \biggl\arrowvert \mathrm{A}_i \Biggr) \\
&=
P\Biggl( \Biggl\{Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}}\right) - \frac{\| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} } \Biggr\} \cap \Biggl\{ \| \boldsymbol{g}_i \|_2 \leq c_1 \sqrt{n} \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr) \\
&\quad + P\Biggl( \Biggl\{ Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} } \Biggr\} \cap \Biggl\{ \| \boldsymbol{g}_i \|_2 > c_1 \sqrt{n} \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr) \\
&\leq P\Biggl( \Biggl\{Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} } \Biggr\} \cap \Biggl\{ \| \boldsymbol{g}_i \|_2 \leq c_1 \sqrt{n} \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr) \\
& \quad+ P \Biggl( \| \boldsymbol{g}_i \|_2 > c_1 \sqrt{n} \biggl\arrowvert \mathrm{A}_i \Biggr) {.}\end{aligned}$$ The definition of $\delta$ yields $$\begin{aligned}
& P \Biggl( \Biggl\{Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \Biggr\} \cap \Biggl\{\|\boldsymbol{g}_i \|_2 \leq c_1 \sqrt{n} \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr) \\
&\leq
P \Biggl( \Biggl\{Z< \delta \left(c_1 \sqrt{n} + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \Biggr\} \cap \Biggl\{\|\boldsymbol{g}_i \|_2 \leq c_1 \sqrt{n} \Biggr\} \biggl\arrowvert \mathrm{A}_i \Biggr)
\\
&\leq
P \Biggl( Z< \delta \left(c_1 \sqrt{n} + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \biggl\arrowvert \mathrm{A}_i \Biggr)
\\
&\leq P \Biggl(Z< \frac{-(1-\alpha) \| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \biggl\arrowvert \mathrm{A}_i \Biggr) .\end{aligned}$$ Moreover, recalling that $c_1 = 1 + { K_g^2}/{\sqrt{c}}$, we have $$\begin{aligned}
&P(\|\boldsymbol{g}_i\|_2 > c_1 \sqrt{n} \arrowvert \mathrm{A}_i )
= P( | \| \boldsymbol{g}_i \|_2 - \sqrt{n} + \sqrt{n} | > c_1 \sqrt{n} \arrowvert \mathrm{A}_i ) \\
& \leq P( | \| \boldsymbol{g}_i \|_2 - \sqrt{n} | > (c_1-1) \sqrt{n} \arrowvert \mathrm{A}_i ) \\
& \leq 2 \mathrm{e} ^{-n}, \end{aligned}$$ where [\[v33\]](#v33){reference-type="eqref" reference="v33"} is used in the last inequality. It follows that $$\begin{aligned}
&P\Biggl( Z< \delta \left(\| \boldsymbol{g}_i \|_2 + \frac{ \| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \right) - \frac{\| \boldsymbol{\mu} \|_2}{ \sqrt{\lambda_1(\boldsymbol{\Sigma}_1)} } \biggl\arrowvert \mathrm{A}_i \Biggr) \\
& \leq P\Biggl(Z< \frac{-(1-\alpha) \| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \biggl\arrowvert \mathrm{A}_i \Biggr) + 2 \mathrm{e} ^{-n} .\end{aligned}$$ As for the second term on the right-hand side of [\[lem2p2\]](#lem2p2){reference-type="eqref" reference="lem2p2"}, by using [\[AE1\]](#AE1){reference-type="eqref" reference="AE1"}, [\[AE2\]](#AE2){reference-type="eqref" reference="AE2"}, [\[AE3\]](#AE3){reference-type="eqref" reference="AE3"} and the Davis--Kahan theorem (see, e.g., @RefV18 [p.89]), we have $$\begin{aligned}
& P\left(\|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}_1) \|_2 > \delta \biggl\arrowvert \mathrm{A}_i \right)
=
P\left(\|\boldsymbol{\gamma}_1(\boldsymbol{S}_{m}) - \boldsymbol{\gamma}_1(\boldsymbol{\Sigma}) \|_2 > \delta \biggl\arrowvert \mathrm{A}_i \right) \\
&\leq
P\left( \frac{2^{{3}/2}\| \boldsymbol{S}_{m} - \boldsymbol{\Sigma}\|_{\mathrm{op}}}{\lambda_1(\boldsymbol{\Sigma})- \lambda_2(\boldsymbol{\Sigma})}> \delta \biggl\arrowvert \mathrm{A}_i \right) \\
&\leq
P\left( \frac{2^{{3}/2}\| \boldsymbol{S}_{m} - \boldsymbol{\Sigma}\|_{\mathrm{op}}}{\|\boldsymbol{\mu}\|_2^2 }> \delta \biggl\arrowvert \mathrm{A}_i \right) =
P \left( \| \boldsymbol{S}_{m} - \boldsymbol{\Sigma} \|_{\mathrm{op}} > \frac{\delta \|\boldsymbol{\mu} \|_2^2}{2^{{3}/{2}}} \biggl\arrowvert \mathrm{A}_i \right).\end{aligned}$$ The definition of $\delta$ and Proposition [Proposition 3](#evs){reference-type="ref" reference="evs"} with $u = n$ yield $$\begin{aligned}
& P \left( \| \boldsymbol{S}_{m} - \boldsymbol{\Sigma} \|_{\mathrm{op}} > \frac{\delta \|\boldsymbol{\mu} \|_2^2}{2^{{3}/{2}}} \biggl\arrowvert \mathrm{A}_i \right)\\*
&\leq
P \biggr( \| \boldsymbol{S}_{m} - \boldsymbol{\Sigma} \|_{\mathrm{op}} > C K^2 \left(\sqrt{\frac{2n}{m}}+\frac{2n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1)}{2} + \frac{\lambda_1(\boldsymbol{\Sigma}_2)}{2} + \| \boldsymbol{\mu} \|_2^2 \right) \biggl\arrowvert \mathrm{A}_i \biggr) \\*
&=
\frac{1}{P \left( \mathrm{A}_i \right)} \\
& \quad \cdot P \left( \| \boldsymbol{S}_{m} - \boldsymbol{\Sigma} \|_{\mathrm{op}} > C K^2 \left(\sqrt{\frac{2n}{m}}+\frac{2n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1)}{2} + \frac{\lambda_1(\boldsymbol{\Sigma}_2)}{2} + \| \boldsymbol{\mu} \|_2^2 \right) \cap \mathrm{A}_i \right) \\*
&\leq
2 P \biggr( \| \boldsymbol{S}_{m} - \boldsymbol{\Sigma} \|_{\mathrm{op}} > C K^2 \left(\sqrt{\frac{2n}{m}}+\frac{2n}{m} \right) \left( \frac{\lambda_1(\boldsymbol{\Sigma}_1)}{2} + \frac{\lambda_1(\boldsymbol{\Sigma}_2)}{2} + \| \boldsymbol{\mu} \|_2^2 \right) \biggr)\\
&
\leq 4 \mathrm{e} ^{-n} .\end{aligned}$$ From what has already been proved, we conclude that $$P \biggl(\langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i\rangle<0 \biggl\arrowvert \mathrm{A}_i \biggr) \\
\leq P\biggl(Z< \frac{-(1-\alpha) \| \boldsymbol{\mu} \|_2}{\sqrt{\lambda_1(\boldsymbol{\Sigma}_1)}} \biggl\arrowvert \mathrm{A}_i \biggr) + 6 \mathrm{e} ^{-n}.$$ This completes the proof. ◻
## Proof of Corollary [Corollary 6](#col){reference-type="ref" reference="col"} {#proof-of-corollary-col}
Consider $m$, $n$, and $\eta$ satisfying [\[thas\]](#thas){reference-type="eqref" reference="thas"}. Then, by using the Bonferroni inequality and [\[millsa\]](#millsa){reference-type="eqref" reference="millsa"}, we have $$\begin{aligned}
&P \left( \left\{ \bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle > 0 \bigr\} \right\} \cup \left\{ \bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle <0 \bigr\} \right\} \right)
\nonumber \\
&\geq
P \left(\bigcap_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle > 0 \bigr\} \right)
= 1 - P \left(\bigcup_{i=1}^m \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle \leq 0 \bigr\} \right)
\nonumber \\
&\geq
1 - \sum_{i=1}^m P \left( \bigl\{ \theta_i \langle \boldsymbol{\gamma}_1(\boldsymbol{S}_{m}), \boldsymbol{X}_i \rangle \leq 0 \bigr\} \right)
\nonumber \\
&\geq
1 - \frac{m}{\sqrt{2\pi (1-\alpha)^{2}\eta}} \exp\left(- \frac{(1-\alpha)^2 \eta}{2} \right) - 6 m \mathrm{e} ^{-n}.
\label{col:eq}\end{aligned}$$ The second and third terms on the right-hand side of [\[col:eq\]](#col:eq){reference-type="eqref" reference="col:eq"} converge to 0 as $n,m \to\infty$ with [\[ARh\]](#ARh){reference-type="eqref" reference="ARh"} under the assumption that $n/\eta=O(1)$. ◻
# Concluding remarks {#sec:5}
In this paper, we derived non-asymptotic bounds on the error probability of the spectral clustering algorithm for the mixture of two multivariate normal distributions that form an allometric extension relationship. As future directions, it would be interesting to relax the normality assumption to sub-Gaussian distributions and to consider mixture weights other than $\pi_1=\pi_2=1/2$.
This study was supported in part by Japan Society for the Promotion of Science KAKENHI Grant Numbers 21K13836 and 23K16851.
Abbe, E., Fan, J., Wang, K. (2022). An $\ell_p$ theory of PCA and spectral clustering. *Ann. Statist.* **50**, no.4, 2359--2385.

Saxena, A., Prasad, M., Gupta, A., Bharill, N., Patel, O. P., Tiwari, A., Er, M. J., Ding, W., Lin, C.-T. (2017). A review of clustering techniques and developments. *Neurocomputing* **267**, 664--681.

Bartoletti, S., Flury, B. D., Nel, D. G. (1999). Allometric extension. *Biometrics* **55**, no.4, 1210--1214.

Borysov, P., Hannig, J., Marron, J. S. (2014). Asymptotics of hierarchical clustering for growing dimension. *J. Multivariate Anal.* **124**, 465--479.

Cai, T. T., Zhang, A. (2018). Rate-optimal perturbation bounds for singular subspaces with applications to high-dimensional statistics. *Ann. Statist.* **46**, no.1, 60--89.

Flury, B. (1997). *A First Course in Multivariate Statistics*. Springer-Verlag, New York.

Hills, M. (2006). Allometry. In *Encyclopedia of Statistical Sciences* (eds S. Kotz, C.B. Read, N. Balakrishnan, B. Vidakovic and N.L. Johnson). https://doi.org/10.1002/0471667196.ess0033.pub2 (Last access: 2023/06/01)

Hsu, D., Kakade, S. (2013). Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In *ITCS'13---Proceedings of the 2013 ACM Conference on Innovations in Theoretical Computer Science*. ACM, New York, pp. 11--19.

Kurata, H., Hoshino, T., Fujikoshi, Y. (2008). Allometric extension model for conditional distributions. *J. Multivariate Anal.* **99**, no.9, 1985--1998.

Löffler, M., Zhang, A. Y., Zhou, H. H. (2021). Optimality of spectral clustering in the Gaussian mixture model. *Ann. Statist.* **49**, no.5, 2506--2530.

Matsuura, S., Kurata, H. (2014). Principal points for an allometric extension model. *Statist. Papers* **55**, no.3, 853--870.

Ndaoud, M. (2022). Sharp optimal recovery in the two component Gaussian mixture model. *Ann. Statist.* **50**, no.4, 2096--2126.

O'Neill, T. J. (1978). Normal discrimination with unclassified observations. *J. Amer. Statist. Assoc.* **73**, no.364, 821--826.

Pollard, D. (1981). Strong consistency of $k$-means clustering. *Ann. Statist.* **9**, no.1, 135--140.

Pollard, D. (1982). A central limit theorem for $k$-means clustering. *Ann. Probab.* **10**, no.4, 919--926.

Tsukuda, K., Matsuura, S. (2023). High-dimensional hypothesis testing for allometric extension model. *J. Multivariate Anal.* **197**, 105208.

Vershynin, R. (2018). *High-Dimensional Probability. An Introduction with Applications in Data Science*. Cambridge University Press, Cambridge.
---
abstract: |
We consider stochastic forced Navier--Stokes equations on $\mathbb R^{3}$ starting from zero initial condition. The noise is linear multiplicative and the equations are perturbed by an additional body force. Based on the ideas of Albritton, Brué and Colombo [@ABC22], we prove non-uniqueness of local-in-time Leray--Hopf solutions as well as joint non-uniqueness in law for solutions on $\mathbb R^{+}$. In the deterministic setting, we show that the set of forces, for which Leray--Hopf solutions are non-unique, is dense in $L^{1}_{t}L^{2}_{x}$. In addition, by a simple controllability argument we show that for every divergence-free initial condition in $L^{2}_{x}$ there is a force so that non-uniqueness of Leray--Hopf solutions holds.
address:
- Fakultät für Mathematik, Universität Bielefeld, D-33501 Bielefeld, Germany
- Department of Mathematics, Beijing Institute of Technology, Beijing 100081, China
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
author:
- Martina Hofmanová
- Rongchan Zhu
- Xiangchan Zhu
title: Non-uniqueness of Leray--Hopf solutions for stochastic forced Navier--Stokes equations
---
[^1]
# Introduction
We are concerned with the question of non-uniqueness of Leray--Hopf solutions[^2] to stochastic forced Navier--Stokes equations on $\mathbb R^{3}$ of the form $$\label{1:intro}
\aligned
{\mathord{{\rm d}}}u+\mathord{{\rm div}}(u\otimes u){\mathord{{\rm d}}}t+\nabla p{\mathord{{\rm d}}}t&=\Delta u {\mathord{{\rm d}}}t+ G(u){\mathord{{\rm d}}}W+f{\mathord{{\rm d}}}t,
\\\mathord{{\rm div}}u&=0,
\endaligned$$ where $W$ is a Wiener process on some stochastic basis $(\Omega,{\mathcal F}, ({\mathcal F}_t)_{t\geqslant 0}, {\mathbf P})$. The stochastic Itô differential $G(u){\mathord{{\rm d}}}W$ represents a stochastic force acting on the fluid. Additionally, the equations are perturbed by an external body force $f$. The coefficient $G$ satisfies appropriate assumptions specified in Section [2.2](#s:p){reference-type="ref" reference="s:p"}. We particularly focus on the iconic examples of linear multiplicative $G(u)=u$ as well as on the deterministic situation $G=0$.
Already without the presence of the stochastic forcing, (non)uniqueness of Leray--Hopf solutions had long been one of the landmark open problems in fluid dynamics; it was only solved very recently in the outstanding work [@ABC22]. More precisely, the authors proved that for the initial condition $u_{0}=0$ there exist a time $T>0$ and a force $f\in L^{1}(0,T;L^{2})$ which gives rise to two distinct Leray--Hopf solutions to [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} on $[0,T]\times\mathbb R^{3}$ with $G=0$. In fact, the constructed solutions even satisfy a local energy inequality and are therefore called suitable Leray--Hopf solutions. The proof relies on the construction of a smooth, compactly supported steady state of the Navier--Stokes equations in similarity variables, which is linearly unstable for the dynamics, meaning that the corresponding linearized operator has an unstable eigenvalue. The force $f$ is defined implicitly so that the Navier--Stokes equations are satisfied. The second solution is then a trajectory on an unstable manifold, and the two solutions differ in their decay as $t\to0^{+}$.
Prior to [@ABC22], non-uniqueness of weak solutions without the energy inequality to the Navier--Stokes equations [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} with $G=0$, $f=0$ was proved in [@BV19a] by convex integration. However, the addition of the energy inequality could only be achieved for the $p$-Navier--Stokes equations, where the Laplacian is replaced by the $p$-Laplacian with $p\in (1,6/5)$, see [@BMS20], and for the fractional Navier--Stokes equations, see [@CDD18]. These works were the basis for various non-uniqueness results for [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} in the stochastic setting, such as non-uniqueness in law [@HZZ19], non-uniqueness of Markov solutions [@HZZ21markov], non-uniqueness of ergodic stationary solutions [@HZZ22], as well as global existence and non-uniqueness when the system is driven by space-time white noise [@HZZ21]. Stochastic power law fluids were treated in [@LZ23].
In the present work, we study the stochastic Navier--Stokes equations [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} with the additional force $f$. We show that the construction of [@ABC22] can be adapted to this setting and we establish the following local-in-time result. We refer to Theorem [Theorem 11](#th:2){reference-type="ref" reference="th:2"} for precise assumptions and full statements.
**Theorem 1**. *Consider the case of linear multiplicative noise. There exists an $(\mathcal{F}_{t})$-stopping time $T>0$ and an $(\mathcal{F}_{t})$-adapted $f\in L^{1}(0,T;L^{2})$ $\bf{P}$-a.s. such that there are two distinct $(\mathcal{F}_{t})$-adapted Leray--Hopf solutions to the Navier--Stokes equations [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} on $[0,T]\times\mathbb R^{3}$ with zero initial datum.*
In addition, we are able to extend the above solutions to the full time interval $\mathbb R^{+}$ by the probabilistic extension introduced in [@HZZ19]. The method was originally developed to extend convex integration solutions, which exist up to a stopping time, to global solutions. At the stopping time, the convex integration solutions were connected to Leray--Hopf solutions, which exist for every divergence-free initial condition in $L^{2}$. Such a connection is immediate in the deterministic setting. The subtlety in the stochastic setting originates in the fact that Leray--Hopf solutions are only probabilistically weak: they cannot be constructed on every probability space with a given Wiener process, but these probabilistic elements become part of the solution. The probabilistic extension method from [@HZZ19] was frequently used ever since, especially when convex integration was applied in the stochastic setting (see e.g. [@Ya20a; @Ya20b]).
Unlike Leray--Hopf solutions, the convex integration solutions are probabilistically strong, that is, adapted to the given Wiener process. Remarkably, the solutions obtained in Theorem [Theorem 1](#th:1intro){reference-type="ref" reference="th:1intro"} are also probabilistically strong. In order to apply the probabilistic extension from [@HZZ19], we first lift these solutions to probability measures on the canonical space of trajectories up to a stopping time. The canonical space is generated by both the solution and the noise. These probability measures are then extended by laws of classical Leray--Hopf solutions to probability measures on trajectories on $\mathbb R^{+}$. The resulting solutions fulfill the energy inequality on the whole of $\mathbb R^{+}$ and are therefore Leray--Hopf. This in turn permits us to obtain joint non-uniqueness in law in this class, that is, non-uniqueness of the joint laws of $(u,W)$. The result is proved in Theorem [Theorem 20](#th:1law){reference-type="ref" reference="th:1law"}.
**Theorem 2**. *Consider the case of linear multiplicative noise. There exists $f$, a measurable functional of the driving Wiener process $W$, such that joint non-uniqueness in law holds true in the class of Leray--Hopf solutions.*
Especially for the linear multiplicative noise, it is interesting to reconcile the above results with the available well-posedness theory. Indeed, this form of noise admits a certain regularization by noise phenomena as shown in [@RZZ14]. Here, the following Navier--Stokes system is considered $$\begin{aligned}
\label{eq:p}
{\mathord{{\rm d}}}u+\mathord{{\rm div}}(u\otimes u){\mathord{{\rm d}}}t+\nabla p{\mathord{{\rm d}}}t=\Delta u{\mathord{{\rm d}}}t+\beta u{\mathord{{\rm d}}}W,\quad \mathord{{\rm div}}u=0,\end{aligned}$$ with $W$ being a Wiener process and $\beta\in \mathbb R$. Local strong solutions exist and are unique for initial conditions in $H^{s}$ with $s>3/2$. Moreover, for every $\varepsilon>0$ there exists $\kappa=\kappa(\beta^{2},\varepsilon)$ satisfying $\lim_{\beta\to\infty} \kappa(\beta^{2},\varepsilon)=\infty$ such that whenever $\|u_0\|_{H^s}\leqslant\kappa$ for some $s>3/2$, the solution is global with probability greater than $1-\varepsilon$. The basic idea is that, letting $v=e^{-\beta W}u$, Itô's formula implies $$\begin{aligned}
\label{eq:po}
\partial_t v+\frac{\beta^2}2v+e^{\beta W}\mathord{{\rm div}}(v\otimes v)+\nabla p_v=\Delta v.\end{aligned}$$ Here, the second term $\frac{\beta^2}2v$ provides enough dissipation to give global solutions with large probability.
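For the reader's convenience, let us sketch the computation behind [\[eq:po\]](#eq:po){reference-type="eqref" reference="eq:po"}; the argument is formal in the sense that it assumes enough regularity to apply Itô's formula pointwise in the space variable. Since $${\mathord{{\rm d}}}(e^{-\beta W})=-\beta e^{-\beta W}{\mathord{{\rm d}}}W+\frac{\beta^{2}}{2}e^{-\beta W}{\mathord{{\rm d}}}t,\qquad {\mathord{{\rm d}}}\langle e^{-\beta W},u\rangle=-\beta^{2} e^{-\beta W}u\,{\mathord{{\rm d}}}t,$$ the product rule yields $${\mathord{{\rm d}}}v=e^{-\beta W}\big(\Delta u-\mathord{{\rm div}}(u\otimes u)-\nabla p\big){\mathord{{\rm d}}}t+\beta v\,{\mathord{{\rm d}}}W-\beta v\,{\mathord{{\rm d}}}W+\frac{\beta^{2}}{2}v\,{\mathord{{\rm d}}}t-\beta^{2}v\,{\mathord{{\rm d}}}t,$$ and writing $u=e^{\beta W}v$ as well as $p_{v}=e^{-\beta W}p$, the martingale terms cancel and [\[eq:po\]](#eq:po){reference-type="eqref" reference="eq:po"} follows.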
If an additional force $f$ is included in [\[eq:p\]](#eq:p){reference-type="eqref" reference="eq:p"}, the result of [@RZZ14] remains valid under the condition $f\in L^{2}(0,T;H^{s-1})$. However, this regularity does not hold for the force obtained in Theorem [Theorem 1](#th:1intro){reference-type="ref" reference="th:1intro"} and Theorem [Theorem 2](#th:2intro){reference-type="ref" reference="th:2intro"} (see in particular (1.24) in [@ABC22]). We rather obtain that $f\in L^{1}(0,T;L^{p})$ for every $p<3$, which is sharp, as in the deterministic setting: for $f\in L^{1}(0,T;L^{3})$, uniqueness holds for the corresponding Navier--Stokes equations [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"}.
Our proof of Theorem [Theorem 1](#th:1intro){reference-type="ref" reference="th:1intro"} and Theorem [Theorem 2](#th:2intro){reference-type="ref" reference="th:2intro"} in the case of linear multiplicative noise relies on a new transformation of the Navier--Stokes system [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} to a random PDE. In particular, we do not employ the usual exponential transformation $v=e^{-W}u$ leading to an analog of [\[eq:po\]](#eq:po){reference-type="eqref" reference="eq:po"}. Instead, we proceed via a two-step transformation which permits us to shift the random and time-dependent coefficient away from the nonlinearity and to the dissipative term, where it appears in the form of a viscosity. To be more precise, we let $w(t):=e^{-(W(t)-t/2)}u(t)$ and $w(t)=:v(\int_{0}^{t}e^{W(s)-s/2}d s)=:(v\circ\theta)(t)$. Observing that $\theta$ is invertible, Itô's formula leads to (see Section [3.1](#sec:loc){reference-type="ref" reference="sec:loc"} for more details) $$\label{eq:v}
\partial_t v+\mathord{{\rm div}}(v\otimes v)+\nabla \pi=h(\theta^{-1}(t))\Delta v+g,$$ for an appropriately defined pressure $\pi$, force $g$ and viscosity $h(\theta^{-1}(t))$.
Roughly speaking, we apply the method of [@ABC22] on the level of the transformed equation [\[eq:v\]](#eq:v){reference-type="eqref" reference="eq:v"}. It is convenient that the modification only appears in the dissipative term, which is anyway treated as a perturbation. To deal with the perturbation, we prove a new regularity estimate for the semigroup generated by the linearized operator (i.e. $e^{\tau L_{ss}}$ in the notation of Section [2.3](#s:2.3){reference-type="ref" reference="s:2.3"}) by using a Littlewood--Paley decomposition and paraproducts. This way we obtain two different Leray--Hopf solutions to [\[eq:v\]](#eq:v){reference-type="eqref" reference="eq:v"}. In order to obtain meaningful solutions after transforming back to the original equation [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"}, it is necessary to guarantee their adaptedness with respect to the filtration $(\mathcal{F}_{t})_{t\geqslant 0}$ generated by the Wiener process $W$. This is rather delicate due to the random time rescaling $\theta$ and we need to establish several auxiliary results concerning stopping times, filtrations and adaptedness.
Finally, we present several results in the deterministic setting.
**Theorem 3**. *Let $G=0$. Then non-uniqueness of Leray--Hopf solutions holds in the following situations.*
1. *For every $g\in L^{1}(0,1;L^{2})$ and every $\varepsilon>0$ there exists $f\in L^{1}(0,1;L^{2})$ satisfying $$\|g-f\|_{L^{1}(0,1;L^{2})}\leqslant\varepsilon$$ such that Leray--Hopf solutions with zero initial condition to [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} are non-unique.*
2. *For every divergence-free initial condition $u_{0}\in L^{2}$ there exists $f\in L^{1}(0,1;L^{2})$ such that Leray--Hopf solutions to [\[1:intro\]](#1:intro){reference-type="eqref" reference="1:intro"} with the initial condition $u_{0}$ are non-unique.*
The result in *(1)* is achieved through Theorem [Theorem 25](#th:1){reference-type="ref" reference="th:1"} and Corollary [Corollary 27](#c:2){reference-type="ref" reference="c:2"} by a modification of the construction from [@ABC22]. The result in *(2)* is proved in Theorem [Theorem 28](#thm:6){reference-type="ref" reference="thm:6"} by a simple controllability argument in the spirit of [@Fla97] and no implicit modification of the proof of [@ABC22] is necessary.
The paper is organized as follows. In Section [2](#sec:2){reference-type="ref" reference="sec:2"} we introduce the notation, set the basic general assumptions on the operator $G$ and recall the key elements of the construction from [@ABC22]. The case of linear multiplicative noise is treated in Section [3](#s:3){reference-type="ref" reference="s:3"}. Section [4](#sec:det){reference-type="ref" reference="sec:det"} and Section [5](#s:6){reference-type="ref" reference="s:6"} then focus on the deterministic setting.
# Preliminaries {#sec:2}
Throughout the paper, we use the notation $a\lesssim b$ if there exists a constant $c>0$ such that $a\leqslant cb$, and we write $a\simeq b$ if $a\lesssim b$ and $b\lesssim a$.
## Function spaces
Given a Banach space $E$ with a norm $\|\cdot\|_E$ and $T>0$, we write $C_TE=C([0,T];E)$ for the space of continuous functions from $[0,T]$ to $E$, equipped with the supremum norm $\|f\|_{C_TE}=\sup_{t\in[0,T]}\|f(t)\|_{E}$. We also use $C([0,\infty);E)$ to denote the space of continuous functions from $[0,\infty)$ to $E$. For $p\in [1,\infty]$ we write $L^p_TE=L^p(0,T;E)$ for the space of $L^p$-integrable functions from $[0,T]$ to $E$, equipped with the usual $L^p$-norm. We also use $L^p_{\text{loc}}(\mathbb{R}^+;E)$ to denote the space of functions $f$ from $[0,\infty)$ to $E$ satisfying $f|_{[0,T]}\in L^p_TE$ for all $T>0$. Similar notation is used for $C^\alpha_{\text{loc}}(\mathbb{R}^+;E)$. Set $L^{2}_{\sigma}=\{u\in L^2; \mathord{{\rm div}}u=0\}$. We also denote by $L^2_{\text{loc}}$ the space $L^2$ with the topology of $L^2$-convergence on compact subsets of ${\mathbb R}^3$. We use $(\Delta_{i})_{i\geqslant-1}$ to denote the Littlewood--Paley blocks corresponding to a dyadic partition of unity. Besov spaces $B^{\alpha}_{p,q}$ on ${\mathbb R}^{d}$ with general indices $\alpha\in \mathbb R$, $p,q\in[1,\infty]$ are defined as the completion of $C^\infty_c(\mathbb{R}^{d})$ with respect to the norm $$\|u\|_{B^\alpha_{p,q}}:=\left(\sum_{j\geqslant-1}2^{j\alpha q}\|\Delta_ju\|_{L^p}^q\right)^{1/q},$$ where $C^\infty_c(\mathbb{R}^{d})$ denotes the space of smooth functions on ${\mathbb R}^d$ with compact support. Set $H^\alpha=B^\alpha_{2,2}$ for $\alpha\in{\mathbb R}$.
Paraproducts were introduced by Bony in [@Bon81] and they permit us to decompose a product of two distributions into three parts which behave differently in terms of regularity. More precisely, using the Littlewood--Paley blocks, the product $fg$ of two Schwartz distributions $f,g\in\mathcal{S}'(\mathbb R^{d})$ can be formally decomposed as $$fg=f\prec g+f\circ g+f\succ g,$$ with $$f\prec g=g\succ f=\sum_{j\geqslant-1}\sum_{i<j-1}\Delta_if\Delta_jg, \quad f\circ g=\sum_{|i-j|\leqslant 1}\Delta_if\Delta_jg.$$ Here, the paraproducts $\prec$ and $\succ$ are always well-defined and the critical term is the resonant product, denoted by $\circ$. In general, it is only well-defined provided the sum of the regularities of $f$ and $g$ in terms of Besov spaces is strictly positive. Moreover, we have the following paraproduct estimates from [@Bon81] (see also [@GIP15 Lemma 2.1]).
**Lemma 4**. *Let $\beta\in\mathbb R$, $p, p_1, p_2, q\in [1,\infty]$ such that $\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$. Then it holds $$%\label{eq:para1}
\|f\prec g\|_{B^\beta_{p,q}}\lesssim\|f\|_{L^{p_1}}\|g\|_{B^{\beta}_{p_2,q}},$$ and if $\alpha<0$ then $$%\label{eq:para2}
\|f\prec g\|_{B^{\alpha+\beta}_{p,q}}\lesssim\|f\|_{B^{\alpha}_{p_1,q}}\|g\|_{B^{\beta}_{p_2,q}}.$$ If $\alpha+\beta>0$ then it holds $$%\label{eq:para3}
\|f\circ g\|_{B^{\alpha+\beta}_{p,q}}\lesssim\|f\|_{B^{\alpha}_{p_1,q}}\|g\|_{B^{\beta}_{p_2,q}}.$$*
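As a simple illustration of how these bounds are combined (recorded only for orientation; it is not used in this exact form below): if $f\in B^{\alpha}_{\infty,\infty}$ with $\alpha<0$, $g\in B^{\beta}_{2,\infty}$ and $\alpha+\beta>0$, then the product $fg$ is well-defined and $$\|fg\|_{B^{\alpha}_{2,\infty}}\lesssim \|f\prec g\|_{B^{\alpha+\beta}_{2,\infty}}+\|f\circ g\|_{B^{\alpha+\beta}_{2,\infty}}+\|g\prec f\|_{B^{\alpha}_{2,\infty}}\lesssim \|f\|_{B^{\alpha}_{\infty,\infty}}\|g\|_{B^{\beta}_{2,\infty}},$$ where we used the embedding $B^{\alpha+\beta}_{2,\infty}\subset B^{\alpha}_{2,\infty}$, the first estimate of Lemma [Lemma 4](#lem:para){reference-type="ref" reference="lem:para"} for the term $f\succ g=g\prec f$ together with $\|g\|_{L^{2}}\lesssim\|g\|_{B^{\beta}_{2,\infty}}$ (recall $\beta>-\alpha>0$), and the remaining two estimates of Lemma [Lemma 4](#lem:para){reference-type="ref" reference="lem:para"} for $f\prec g$ and $f\circ g$.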
## Probabilistic elements {#s:p}
For a Hilbert space ${\mathbb U}$, let $L_2({\mathbb U},L^2_\sigma)$ be the space of all Hilbert--Schmidt operators from ${\mathbb U}$ to $L^2_\sigma$ with the norm $\|\cdot\|_{L_2({\mathbb U},L^2_\sigma)}$. Let $G: L^2_\sigma\rightarrow L_2({\mathbb U},L^2_\sigma)$ be $\mathcal{B}(L^2_\sigma)/\mathcal{B}(L_2({\mathbb U},L^2_\sigma))$ measurable. In the following, we assume $$\|G(x)\|_{L_2({\mathbb U},L^2_{\sigma})}\leqslant C(1+\|x\|_{L^2}),$$ for every $x\in C_c^\infty\cap L^2_{\sigma}$, and whenever $y_n\rightarrow y$ in $L^2_{\text{loc}}$ we require $$\lim_{n\rightarrow \infty}\|G(y_n)^*x-G(y)^*x\|_{{\mathbb U}}=0,$$ where the asterisk denotes the adjoint operator.
Suppose there is another Hilbert space ${\mathbb U}_1$ such that the embedding ${\mathbb U}\subset {\mathbb U}_1$ is Hilbert--Schmidt. We also use $H^{-3}_{\text{loc}}$ to denote the space $H^{-3}$ with a topology defined by the seminorms $$\|g\|_{H^{-3}_{R}}=\sup\{\langle g,v\rangle, v\in C^\infty_c, \|v\|_{H^3}\leqslant 1, \mathop{\mathrm{supp}}v\subset B_R\},\quad 0<R<\infty,$$ with $B_R=\{x:|x|<R\}$. Let $\Omega:=C([0,\infty);H^{-3}_{\text{loc}}\times {\mathbb U}_1)\cap L^2_{\mathrm{loc}}([0,\infty);L^2_\sigma\times {\mathbb U}_1)$ and let $\mathscr{P}({\Omega})$ denote the set of all probability measures on $({\Omega},{\mathcal{B}})$ with ${\mathcal{B}}$ being the Borel $\sigma$-algebra coming from the topology of locally uniform convergence on $\Omega$. Let $(x,y):{\Omega}\rightarrow C([0,\infty);H_{\rm loc}^{-3}\times {\mathbb U}_{1})$ denote the canonical process on ${\Omega}$ given by $$(x_t(\omega),y_t(\omega))=\omega(t).$$ For $t\geqslant 0$ we define $\sigma$-algebra ${\mathcal{B}}^{t}=\sigma\{ (x(s),y(s)),s\geqslant t\}$. Finally, we define the canonical filtration ${\mathcal{B}}_t^0:=\sigma\{ (x(s),y(s)),s\leqslant t\}$, $t\geqslant 0$, as well as its right-continuous version ${\mathcal{B}}_t:=\cap_{s>t}{\mathcal{B}}^0_s$, $t\geqslant 0$.
## Useful results from [@ABC22] {#s:2.3}
A recent breakthrough result from [@ABC22] is the non-uniqueness of Leray--Hopf solutions to the following forced Navier--Stokes system: $$\begin{aligned}
\label{eq:fns}
\partial_t u&=\Delta u+\bar f- u\cdot \nabla u+\nabla p,\quad \mathord{{\rm div}}u=0,
\\u(0)&=0.\nonumber\end{aligned}$$ More precisely, there exist $T>0$ and $\bar f\in L^1(0,T;L^2)$ and two distinct Leray--Hopf solutions on $[0,T]\times {\mathbb R}^3$ to the Navier--Stokes system [\[eq:fns\]](#eq:fns){reference-type="eqref" reference="eq:fns"} with force $\bar f$ and initial data $0$.
We then recall the following similarity variables from [@ABC22]: $$\begin{aligned}
\xi=\frac{x}{\sqrt t}&,\quad \tau=\log t,\nonumber
\\ u(t,x)=\frac1{\sqrt t}U(\tau,\xi),&\quad \bar f(t,x)=\frac1{t^{3/2}}\bar F(\tau,\xi). \label{tr:1}\end{aligned}$$ In these variables, the forced Navier--Stokes system [\[eq:fns\]](#eq:fns){reference-type="eqref" reference="eq:fns"} becomes $$\begin{aligned}
\label{e:U}
\partial_\tau U-\frac12(1+\xi\cdot\nabla)U-\Delta U+U\cdot \nabla U+\nabla P=\bar F,\quad \mathord{{\rm div}}U=0.\end{aligned}$$
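For completeness, we record the elementary computations behind this change of variables; here the pressure is rescaled as $p(t,x)=\frac1t P(\tau,\xi)$, which is consistent with the scaling of the remaining terms. A direct calculation gives $$\aligned
\partial_{t}u(t,x)&=\frac1{t^{3/2}}\Big(\partial_{\tau}U-\frac12(1+\xi\cdot\nabla)U\Big)(\tau,\xi),\qquad \Delta u(t,x)=\frac1{t^{3/2}}\Delta U(\tau,\xi),
\\ (u\cdot\nabla u)(t,x)&=\frac1{t^{3/2}}(U\cdot\nabla U)(\tau,\xi),\qquad \nabla p(t,x)=\frac1{t^{3/2}}\nabla P(\tau,\xi),
\endaligned$$ so that multiplying [\[eq:fns\]](#eq:fns){reference-type="eqref" reference="eq:fns"} by $t^{3/2}$ yields [\[e:U\]](#e:U){reference-type="eqref" reference="e:U"}.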
Suppose that $\bar U$ is the linearly unstable solution for the dynamics of [\[e:U\]](#e:U){reference-type="eqref" reference="e:U"} obtained in [@ABC22 Theorem 1.3], that is, there exists an unstable eigenvalue for the linearized operator $L_{ss}$ defined by $$\begin{aligned}
\label{def:Lss}
-L_{ss}U=-\frac12(1+\xi\cdot\nabla)U-\Delta U+\mathbb{P}(\bar U\cdot \nabla U+U\cdot \nabla \bar U),\end{aligned}$$ where $\mathbb{P}$ is the Leray projector. Set $$\begin{aligned}
\bar F:=-\frac12(1+\xi\cdot\nabla)\bar U-\Delta \bar U+\bar U\cdot \nabla \bar U.\end{aligned}$$ Using [@ABC22 Theorem 1.3] we know that $\bar u(t,x)=\frac1{\sqrt{t}}\bar U(\xi)$ is a Leray--Hopf solution to the forced Navier--Stokes equations [\[eq:fns\]](#eq:fns){reference-type="eqref" reference="eq:fns"} with $\bar f(t,x)=\frac1{t^{3/2}}\bar F(\tau,\xi)$. By [@ABC22 Theorem 4.1], the linear operator $L_{ss}:D(L_{ss})\subset L^2_\sigma\to L^2_\sigma$ with $D(L_{ss}):=\{U\in L^2_\sigma:U\in H^2({\mathbb R}^3),\xi\cdot \nabla U\in L^2({\mathbb R}^3)\}$, has an unstable eigenvalue $\lambda$. Then $\lambda$ can be chosen to be maximally unstable, that is $$\begin{aligned}
\label{def:a}
a:= \text{Re}\lambda=\sup_{z\in \sigma(L_{ss})}\text{Re} z>0,\end{aligned}$$ with $\sigma(L_{ss})$ being the spectrum of the operator $L_{ss}$. Let $\eta$ be a non-trivial smooth eigenfunction associated with $\lambda$, which belongs to $H^k({\mathbb R}^3)$ for all $k\geqslant 0$. Define $$\begin{aligned}
\label{def:Ulin}
U^{lin}(\tau):=\text{Re}(e^{\lambda\tau} \eta),\end{aligned}$$ which is a solution to the linearized PDE $$\partial_\tau U^{lin}=L_{ss}U^{lin}.$$ We also recall from [@ABC22 (4.16)] that $$\begin{aligned}
\label{r:Ulin}
\|U^{lin}\|_{H^k}=C(k)e^{a\tau},\quad \tau\in{\mathbb R}, \ k\in{\mathbb N}.\end{aligned}$$
We also recall the following result from [@ABC22 Lemma 4.4], which will be used frequently in the sequel.
**Lemma 5**. *For any $\sigma_2\geqslant\sigma_1\geqslant 0$ and $\delta>0$, it holds $$\begin{aligned}
\|e^{\tau L_{ss}}U_0\|_{H^{\sigma_2}}\lesssim \tau^{-(\sigma_2-\sigma_1)/2}e^{\tau(a+\delta)}\|U_0\|_{H^{\sigma_1}},\quad \tau>0,\end{aligned}$$ for any $U_0\in L^2_\sigma\cap H^{\sigma_1}$.*
# Linear multiplicative noise {#s:3}
In this section, we prove non-uniqueness of Leray--Hopf solutions to the following Navier--Stokes equations driven by linear multiplicative noise $$\begin{aligned}
\label{eql}{\mathord{{\rm d}}}u+\mathord{{\rm div}}(u\otimes u){\mathord{{\rm d}}}t+\nabla p{\mathord{{\rm d}}}t=\Delta u{\mathord{{\rm d}}}t+u{\mathord{{\rm d}}}W+f{\mathord{{\rm d}}}t,\quad \mathord{{\rm div}}u=0,\end{aligned}$$ where $W$ is a one-dimensional Brownian motion on a stochastic basis $(\Omega,{\mathcal F},({\mathcal F}_t)_{t\geqslant 0},{\mathbf P})$ and $(\mathcal{F}_t)_{t\geqslant 0}$ is the normal filtration generated by $W$, that is, the canonical right-continuous filtration augmented by all the $\mathbf{P}$-negligible sets.
## Construction of non-unique local-in-time solutions {#sec:loc}
In the first step, we introduce a new variable $v$ so that the stochastic forced Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"} can be rewritten as a forced Navier--Stokes system with a random and time-dependent viscosity. This transformation is different from the one usually used in the case of linear multiplicative noise, see e.g. [@HZZ19]. In particular, we define $$\begin{aligned}
\label{tr}
w(t):=u(t)e^{-(W(t)-t/2)},\quad w(t)=v\Big(\int_0^te^{W(s)-s/2}{\mathord{{\rm d}}}s\Big)=:(v\circ \theta)(t).
\end{aligned}$$ By Itô's formula we find that $w$ solves $$\partial_tw+e^{W-t/2}\mathord{{\rm div}}(w\otimes w)-\Delta w+e^{-(W-t/2)}\nabla p=e^{-(W-t/2)}f,\quad \mathord{{\rm div}}w=0.$$ In this equation, we have already eliminated the stochastic integral, but the factor in front of the nonlinear term is not so convenient, which is the reason for the time rescaling leading to the definition of $v$. We have $$\partial_tw(t)=(\partial_tv)(\theta(t))e^{W(t)-t/2},$$ which leads to $$(\partial_tv)(\theta(t))=e^{-W(t)+t/2}\partial_tw(t)=-\mathord{{\rm div}}(w\otimes w)+e^{-W(t)+t/2}\Delta w-e^{-2W(t)+t}\nabla p+e^{-2W(t)+t}f.$$ We observe that $\theta$ is continuous and strictly increasing and so there exists an inverse $\theta^{-1}:{\mathbb R}^+\to{\mathbb R}^+$. Thus, we obtain that $v$ satisfies $$\begin{aligned}
\label{eql1}
\partial_tv+\mathord{{\rm div}}(v\otimes v)+\nabla \pi =h(\theta^{-1}(t))\Delta v+g,\quad \mathord{{\rm div}}v=0,\end{aligned}$$ where $$h(t)=e^{-W(t)+t/2},\quad g(t)=h^2(\theta^{-1}(t))f(\theta^{-1}(t)),\quad \pi(t)=h^2(\theta^{-1}(t))p(\theta^{-1}(t)).$$ In other words, the random and time dependent factor now appears in [\[eql1\]](#eql1){reference-type="eqref" reference="eql1"} in place of viscosity. This is very helpful because the dissipative term will be treated as a perturbation.
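As for [\[eq:po\]](#eq:po){reference-type="eqref" reference="eq:po"}, let us record the Itô computation behind the equation for $w$; the computation is formal and assumes enough regularity to apply Itô's formula pointwise in the space variable. From $${\mathord{{\rm d}}}\big(e^{-(W-t/2)}\big)=e^{-(W-t/2)}\big({\mathord{{\rm d}}}t-{\mathord{{\rm d}}}W\big),\qquad {\mathord{{\rm d}}}\big\langle e^{-(W-\cdot/2)},u\big\rangle=-e^{-(W-t/2)}u\,{\mathord{{\rm d}}}t,$$ the product rule gives $${\mathord{{\rm d}}}w=e^{-(W-t/2)}{\mathord{{\rm d}}}u+u\,{\mathord{{\rm d}}}\big(e^{-(W-t/2)}\big)+{\mathord{{\rm d}}}\big\langle e^{-(W-\cdot/2)},u\big\rangle=e^{-(W-t/2)}\big(\Delta u-\mathord{{\rm div}}(u\otimes u)-\nabla p+f\big){\mathord{{\rm d}}}t,$$ since both the martingale terms and the remaining ${\mathord{{\rm d}}}t$-corrections cancel. Writing $u=e^{W-t/2}w$ then yields the equation for $w$ above.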
However, due to the above random time change we need to carefully trace adaptedness in order to guarantee that the final solutions to [\[eql\]](#eql){reference-type="eqref" reference="eql"} are adapted to $(\mathcal{F}_{t})_{t\geqslant 0}$. This turns out to be rather subtle. To this end, we first adjust the filtration and show several auxiliary results on adaptedness of stochastic processes and on stopping times.
**Lemma 6**. *For every $s\geqslant 0$, $\theta^{-1}(s)$ is an $({\mathcal F}_t)$-stopping time. Define $$\hat{{\mathcal F}}_t:={\mathcal F}_{\theta^{-1}(t)}:=\{A\in {\mathcal F};\,A\cap \{\theta^{-1}(t)< s\}\in{\mathcal F}_{s} \text{ for all }s\geqslant 0\},\quad t\geqslant 0.$$ Then $(\hat{{\mathcal F}}_t)_{t\geqslant 0}$ is a right-continuous filtration. Moreover, if a stochastic process $X$ is $(\mathcal{F}_{t})$-progressively measurable then $X\circ\theta^{-1}$ is $(\hat{{\mathcal F}}_t)$-adapted.*
*Proof.* For any $s,t\geqslant 0$, $\{\theta^{-1}(s)\leqslant t\}=\{\theta(t)\geqslant s\}\in {\mathcal F}_t$. Hence, the first result follows. Since $\{\theta^{-1}(t),t\geqslant 0\}$ is an increasing family of $({\mathcal F}_t)$-stopping times, the $\sigma$-algebras $\hat{\mathcal F}_t$, $t\geqslant 0$, are also increasing, hence they form a filtration. For $A\in \cap_{n=1}^\infty \hat{\mathcal F}_{t+\frac1n}$, by the continuity and monotonicity of $\theta^{-1}$ we have for any $u\geqslant 0$ $$A\cap \{\theta^{-1}(t)< u\}=\bigcup_{n}A\cap \big\{\theta^{-1}(t+\frac1n)< u\big\}\in {\mathcal F}_u.$$ Hence, $A\in \hat{\mathcal F}_{t}$ and $(\hat{{\mathcal F}}_t)_{t\geqslant 0}$ is then a right-continuous filtration. The last result follows from the well-known fact that a progressively measurable process evaluated at a stopping time is measurable with respect to the associated stopped $\sigma$-algebra. ◻
**Lemma 7**. *Suppose that $v$ is an $(\hat{\mathcal{F}}_{t})$-adapted $H^{\gamma}$-valued stochastic process with $\mathbf{P}$-a.s. continuous trajectories for some $\gamma\in \mathbb R$. Then $v\circ\theta$ is $(\mathcal{F}_{t})$-adapted.*
*Proof.* Using the continuity of $v$ with respect to $t$, we find $$\begin{aligned}
v(\theta(t))=\lim_{n\to\infty}\sum_{k=0}^\infty v\Big(\frac{k}{2^n}\Big){\mathbf{1}}_{\{\frac{k}{2^n}\leqslant\theta(t)<\frac{k+1}{2^n}\}}.
\end{aligned}$$ Now it suffices to prove that $v(\frac{k}{2^n}){\mathbf{1}}_{\{\frac{k}{2^n}\leqslant\theta(t)<\frac{k+1}{2^n}\}}\in {\mathcal F}_t$. For any open set $A\subset H^{{\gamma}}$ $$\begin{aligned}
\Big\{v\Big(\frac{k}{2^n}\Big){\mathbf{1}}_{\{\frac{k}{2^n}\leqslant\theta(t)<\frac{k+1}{2^n}\}}\in A\Big\}
&=\Big\{v\Big(\frac{k}{2^n}\Big)\in A\Big\}\cap \Big\{\frac{k}{2^n}\leqslant\theta(t)<\frac{k+1}{2^n}\Big\}
\\&=\Big\{v\Big(\frac{k}{2^n}\Big)\in A\Big\}\cap \Big\{\theta^{-1}\Big(\frac{k}{2^n}\Big)\leqslant t\Big\}\cap \Big\{\theta(t)<\frac{k+1}{2^n}\Big\}\in {\mathcal F}_t,
\end{aligned}$$ where we used that $\{v(\frac{k}{2^n})\in A\}\in \hat{\mathcal F}_{\frac{k}{2^n}}={\mathcal F}_{\theta^{-1}(\frac{k}{2^n})}$. Hence, the result follows. ◻
We also prove the following result for stopping times.
**Lemma 8**. *Suppose that $T$ is an $(\hat{\mathcal F}_t)$-stopping time. Then $\theta^{-1}\circ T$ is an $({\mathcal F}_t)$-stopping time.*
*Proof.* We have $$\begin{aligned}
\{\theta^{-1}\circ T< t\}=\{T< \theta(t)\}&=\bigcup_{s\in{\mathbb Q}}\{T< s\}\cap\{s< \theta(t)\}
=\bigcup_{s\in{\mathbb Q}}\{T< s\}\cap\{\theta^{-1}(s)< t\}\in {\mathcal F}_t.\end{aligned}$$ ◻
By Lemma [Lemma 6](#lem:1){reference-type="ref" reference="lem:1"} we find that the random coefficient $h\circ\theta^{-1}$ in [\[eql1\]](#eql1){reference-type="eqref" reference="eql1"} is $(\hat{\mathcal F}_t)$-adapted. Our aim is to find two $(\hat {\mathcal F}_t)$-adapted Leray--Hopf solutions to [\[eql1\]](#eql1){reference-type="eqref" reference="eql1"} with the same $(\hat {\mathcal F}_t)$-adapted force $g$ before some strictly positive $(\hat {\mathcal F}_t)$-stopping time $\hat T$. Then using Lemma [Lemma 7](#lem:2){reference-type="ref" reference="lem:2"} and Lemma [Lemma 8](#lem:3){reference-type="ref" reference="lem:3"} and the transform in [\[tr\]](#tr){reference-type="eqref" reference="tr"}, we construct two $( {\mathcal F}_t)$-adapted Leray--Hopf solutions to [\[eql\]](#eql){reference-type="eqref" reference="eql"} with the same $( {\mathcal F}_t)$-adapted force $f$ before some strictly positive $( {\mathcal F}_t)$-stopping time $T_{0}=\theta^{-1}\circ \hat T$.
As the next step, we concentrate on [\[eql1\]](#eql1){reference-type="eqref" reference="eql1"} and find the force $g$ which gives rise to two Leray--Hopf solutions. The transformation to similarity variables as in [\[tr:1\]](#tr:1){reference-type="eqref" reference="tr:1"}, i.e. $$\begin{aligned}
\label{tr2}
v(t,x)=\frac1{\sqrt t}U(\tau,\xi), \quad g(t,x)=\frac1{t^{3/2}}{H}(\tau,\xi),\end{aligned}$$ leads to $$\begin{aligned}
\label{eql2}\partial_\tau U-\frac12(1+\xi\cdot\nabla)U+U\cdot \nabla U+\nabla P=h(\theta^{-1}(e^\tau))\Delta U+{H},\quad \mathord{{\rm div}}U=0.\end{aligned}$$ This has a structure similar to [\[e:U\]](#e:U){reference-type="eqref" reference="e:U"}, except for the viscosity. Then the background solution $\bar U$ defined in Section [2](#sec:2){reference-type="ref" reference="sec:2"} based on the construction from [@ABC22 Theorem 1.3] is a solution to [\[eql2\]](#eql2){reference-type="eqref" reference="eql2"} with $H=\bar H$ given by $$\bar H=-\frac12(1+\xi\cdot\nabla)\bar U+\bar U\cdot \nabla \bar U-h(\theta^{-1}(e^\tau))\Delta \bar U.$$ As $\bar U$ is deterministic, using Lemma [Lemma 6](#lem:1){reference-type="ref" reference="lem:1"} and the transform [\[tr2\]](#tr2){reference-type="eqref" reference="tr2"} it follows that $$\begin{aligned}
\label{def:g}
g(t,x)=\frac1{t^{3/2}}\bar H(\tau,\xi)\end{aligned}$$ is $(\hat{{\mathcal F}}_t)$-adapted. Hence, using Lemma [Lemma 7](#lem:2){reference-type="ref" reference="lem:2"} $$\begin{aligned}
\label{def:f}
f(t,x):=h(t)^{-2}g(\theta(t),x)\end{aligned}$$ is $({{\mathcal F}}_t)$-adapted, where $h(t)^{-2}=1/h(t)^2$. Note, however, that here $g$ is not continuous at $t=0$. But we can still apply Lemma [Lemma 7](#lem:2){reference-type="ref" reference="lem:2"} to the function $\tilde{g}(t,x)=t^{3/2}g(t,x)$, which is continuous, to deduce that $\tilde{g}(\theta)$ is $({{\mathcal F}}_t)$-adapted. At $t=0$, $f$ is deterministic and hence measurable with respect to ${\mathcal F}_0$, which implies the adaptedness of $f$.
In the following, we fix the above $H=\bar H$, which belongs to $C([\tau_1,\tau_2];L^2)$ for any $\tau_1,\tau_2\in{\mathbb R}$ ${\mathbf P}$-a.s. and we aim to construct the second solution to [\[eql2\]](#eql2){reference-type="eqref" reference="eql2"}. Similarly to [@ABC22], we make use of the following ansatz for the second solution $$\begin{aligned}
\label{defU}
U=\bar U+U^{lin}+U^{per},
\end{aligned}$$ with $\bar U$ and $U^{lin}$ as in Section [2](#sec:2){reference-type="ref" reference="sec:2"} and $U^{per}$ solves the following equation $$\begin{aligned}
\label{eqper1}
&\partial_\tau U^{per}-L_{ss}U^{per}+\mathbb{P}\Big(U^{lin}\cdot \nabla U^{per}+U^{per}\cdot \nabla U^{lin}+U^{lin}\cdot \nabla U^{lin}+U^{per}\cdot \nabla U^{per}\Big)\nonumber\\
&=(h(\theta^{-1}(e^\tau))-1)\Delta (U^{per}+U^{lin}).\end{aligned}$$ Here $L_{ss}$ was defined in [\[def:Lss\]](#def:Lss){reference-type="eqref" reference="def:Lss"}. For $U^{per}$, we additionally require a suitable decay estimate which, compared to [\[r:Ulin\]](#r:Ulin){reference-type="eqref" reference="r:Ulin"}, implies that $U^{lin}\neq -U^{per}$ and consequently $U\neq \bar U$, leading to non-uniqueness.
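For the reader's convenience, we indicate how [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"} arises from this ansatz; the computation is formal and is performed after applying the Leray projector $\mathbb{P}$, which removes the pressure gradient. Setting $V:=U^{lin}+U^{per}$ and subtracting from [\[eql2\]](#eql2){reference-type="eqref" reference="eql2"} (with $H=\bar H$) the identity satisfied by the background solution $\bar U$, the definition of $L_{ss}$ in [\[def:Lss\]](#def:Lss){reference-type="eqref" reference="def:Lss"} yields $$\partial_\tau V-L_{ss}V+\mathbb{P}(V\cdot\nabla V)=\big(h(\theta^{-1}(e^\tau))-1\big)\Delta V.$$ Subtracting the linear equation $\partial_\tau U^{lin}=L_{ss}U^{lin}$ from this identity and expanding $\mathbb{P}(V\cdot\nabla V)$ gives precisely [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"}.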
Compared to [@ABC22], the only difference in [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"} comes from the right hand side. The idea is to view it as a perturbation of the equation. To obtain the necessary estimates, we use the Besov space $B^N_{2,\infty}$ (to replace $H^N$ in [@ABC22]) in the definition of the following Banach space $X$, i.e. for some $\varepsilon>0$ and $N>5/2$, $N\in\mathbb{N}$ we let $$X:=\{U\in C((-\infty,T];B^N_{2,\infty}): \|U\|_X<\infty\},$$ with the norm $$\|U\|_X:=\sup_{\tau<T}e^{-(a+\varepsilon)\tau}\|U(\tau)\|_{B^N_{2,\infty}}.$$
By the time change in [\[tr:1\]](#tr:1){reference-type="eqref" reference="tr:1"} we also need to define the following filtration ${\mathcal G}_\tau:=\hat {\mathcal F}_{t}$ for $\tau=\log t\in {\mathbb R}$. Since the time change is deterministic, it holds that $v$ is $(\hat {\mathcal F}_t)$-adapted if and only if $U$ is $( {\mathcal G}_\tau)$-adapted for $v$, $U$ in [\[tr2\]](#tr2){reference-type="eqref" reference="tr2"}. Furthermore, $T$ is a $( {\mathcal G}_\tau)$-stopping time if and only if $e^T$ is a $(\hat {\mathcal F}_t)$-stopping time. Or in other words, $T$ is a $(\hat {\mathcal F}_t)$-stopping time if and only if $\log T$ is a $( {\mathcal G}_\tau)$-stopping time.
In the following, we consider [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"} with the random coefficient $h(\theta^{-1}(e^\tau))-1$ adapted to the filtration $({\mathcal G}_\tau)_{\tau\in\mathbb R}$ and we want to find one solution adapted to the same filtration.
**Lemma 9**. *For an integer $N>5/2$, there exist $\varepsilon>0$, a $({\mathcal G}_{\tau})$-stopping time $T=\tau_0 \wedge \log\tau_R \in{\mathbb R}$ and a $({\mathcal G}_\tau)$-adapted stochastic process $U^{per}\in C((-\infty,T];B^{N}_{2,\infty})$, which is a solution to [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"} and satisfies $$\|U^{per}(\tau)\|_{B^{N}_{2,\infty}}\leqslant e^{(a+\varepsilon)\tau}, \quad \tau\leqslant T.$$ Here $\tau_R$ and $\tau_0$ are given in the proof.*
*Proof.* Choose $0<\varepsilon< a\wedge\frac14$. Define the following $(\hat{\mathcal F}_t)$-stopping time for an arbitrary $R>0$ $$\tau_R:=\tau_R^1\wedge \tau_R^2,$$ $$\tau_R^1:=\inf\{t>0, \|W_{\theta^{-1}}\|_{C_t^{1/4}}\geqslant R\},\qquad \tau_R^2:=\inf\{t>0, |\theta^{-1}(t)|\geqslant R\}.$$ In the following we consider times $\tau\leqslant\log\tau_R$ and intend to apply a fixed point argument in a small ball of the Banach space $X$ as in [@ABC22]. The mild formulation of [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"} reads as $$\begin{aligned}
U^{per}=\int_{-\infty}^\tau e^{(\tau-s)L_{ss}}(I_1(s)+I_2(s)){\mathord{{\rm d}}}s,\end{aligned}$$ with $$\label{de:I}
\aligned
I_1&:=-\mathbb{P}\Big(U^{lin}\cdot \nabla U^{per}+U^{per}\cdot \nabla U^{lin}+U^{lin}\cdot \nabla U^{lin}+U^{per}\cdot \nabla U^{per}\Big),
\\I_2&:=(h(\theta^{-1}(e^s))-1)\Delta (U^{per}+U^{lin}).
\endaligned$$ We shall particularly focus on $I_2$ which comes from the right hand side of [\[eqper1\]](#eqper1){reference-type="eqref" reference="eqper1"}, whereas $I_{1}$ will be bounded below using the estimates from [@ABC22 Section 4.2].
For the second term in $I_2$ we apply Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} and [\[r:Ulin\]](#r:Ulin){reference-type="eqref" reference="r:Ulin"} to have for $0<\delta<\varepsilon$ $$\begin{aligned}
&\bigg\|\int_{-\infty}^\tau e^{(\tau-s)L_{ss}}(h(\theta^{-1}(e^s))-1)\Delta U^{lin}{\mathord{{\rm d}}}s\bigg\|_{B^N_{2,\infty}}
\\&\lesssim\int_{-\infty}^\tau e^{(\tau-s)(a+\delta)}e^{\frac14s}\|U^{lin}(s)\|_{H^{N+2}}{\mathord{{\rm d}}}s\lesssim\int_{-\infty}^\tau e^{(\tau-s)(a+\delta)}e^{as+\frac14s}{\mathord{{\rm d}}}s\lesssim e^{a\tau+\frac14\tau}.
\end{aligned}$$ Here, we used the definition of the stopping time $\tau_{R}$ to get $$\begin{aligned}
\label{estb}
|h(\theta^{-1}(e^s))-1|\lesssim_{R} \Big|\frac{\theta^{-1}(e^s)}2-W_{\theta^{-1}(e^s)}\Big|\lesssim_{R} |\theta^{-1}(e^s)|^{1/4}\lesssim_{R} e^{s/4}, \quad e^s\leqslant\tau_R,
\end{aligned}$$ since $\theta^{-1}$ has bounded derivatives before $\tau_R$, which was used in the last step.
Next, we concentrate on the first term in $I_{2}$. We have to use an improved bound for the operator $L_{ss}$, which is stated in Lemma [Lemma 10](#lem:im){reference-type="ref" reference="lem:im"} below. By this result, Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} and [\[estb\]](#estb){reference-type="eqref" reference="estb"} for $N>5/2, j\geqslant 0$ $$\begin{aligned}
&2^{jN}\bigg\|\Delta_j\int_{-\infty}^\tau e^{(\tau-s)L_{ss}}(h(\theta^{-1}(e^s))-1)\Delta U^{per}{\mathord{{\rm d}}}s\bigg\|_{L^2}
\\&\lesssim\bigg\|\Delta_j\nabla^N\int_{-\infty}^{\tau} e^{(\tau-s)L_{ss}}(h(\theta^{-1}(e^s))-1)\Delta U^{per}{\mathord{{\rm d}}}s\bigg\|_{L^2}
\\&\lesssim\int_{-\infty}^{\tau-2} e^{(\tau-s)(a+\delta)}e^{(a+\varepsilon)s+\frac14s}{\mathord{{\rm d}}}s\|U^{per}\|_X+\int_{\tau-2}^{\tau} e^{(a+\varepsilon)s+\frac14s}\Big(2^{2j}e^{-2^{2j}(\tau-s)}+1\Big){\mathord{{\rm d}}}s\|U^{per}\|_X
\\&\lesssim e^{(a+\varepsilon)\tau+\frac14\tau}\|U^{per}\|_X,
\end{aligned}$$ where $(\Delta_j)_{j\geqslant-1}$ denotes the Littlewood--Paley blocks corresponding to a dyadic partition of unity and we used Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} for $s\in (-\infty,\tau-2]$ and Lemma [Lemma 10](#lem:im){reference-type="ref" reference="lem:im"} for $s\in [\tau-2,\tau]$. For $j=-1$ we directly apply Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"}. Hence, we derive $$\begin{aligned}
&\Big\|\int_{-\infty}^\tau e^{(\tau-s)L_{ss}}(h(\theta^{-1}(e^s))-1)\Delta U^{per}{\mathord{{\rm d}}}s\Big\|_{B^N_{2,\infty}}\lesssim e^{(a+\varepsilon)\tau+\frac14\tau}\|U^{per}\|_X.
\end{aligned}$$ Combining the above calculation we obtain $$\begin{aligned}
\Big\|\int_{-\infty}^\tau e^{(\tau-s)L_{ss}}I_2(s){\mathord{{\rm d}}}s\Big\|_{B^N_{2,\infty}}
\lesssim e^{(a+\varepsilon)\tau+\frac14\tau}\|U^{per}\|_X+e^{a\tau+\frac14\tau}.\end{aligned}$$
As mentioned above, we apply the approach of [@ABC22 Section 4.2] to control $I_{1}$ as follows $$\begin{aligned}
\Big\|\int_{-\infty}^\cdot e^{(\cdot-s)L_{ss}}I_1(s){\mathord{{\rm d}}}s\Big\|_{X}
\lesssim e^{(a+\varepsilon)T}\|U^{per}\|_X^2+e^{(a-\varepsilon)T}+e^{aT}\|U^{per}\|_X.\end{aligned}$$
Altogether, this leads to $$\begin{aligned}
&\Big\|\int_{-\infty}^\cdot e^{(\cdot-s)L_{ss}}(I_1(s)+I_2(s)){\mathord{{\rm d}}}s\Big\|_{X}
\\&\leqslant C\Big(e^{(a+\varepsilon)T}\|U^{per}\|_X^2+e^{(a-\varepsilon)T}+e^{aT}\|U^{per}\|_X\Big)+C\Big(e^{(\frac14-\varepsilon)T}+e^{\frac14T}\|U^{per}\|_X\Big).\end{aligned}$$ We choose a deterministic $\tau_0$ negative enough such that $$2C(e^{(a+\varepsilon)\tau_0}+e^{(a-\varepsilon)\tau_0}+e^{a\tau_0})+C(e^{(\frac14-\varepsilon)\tau_0}+e^{\frac14\tau_0})\leqslant 1/2,$$ which is possible since all the exponents are positive by the choice of $\varepsilon$. Then it is standard to apply the fixed point argument as in [@ABC22] to find the desired solution in $X$ with $T=\tau_0\wedge \log \tau_R$. ◻
**Lemma 10**. *For $N\in\mathbb{N}$, $j\geqslant 0$, $0<\tau<2$ there exists a constant $c>0$ such that $$\|\Delta_j \nabla^{N+2}e^{\tau L_{ss}}U_0\|_{L^2}\lesssim 2^{2j}e^{-c2^{2j}\tau}\|U_0\|_{B^N_{2,\infty}}+\|U_0\|_{L^2},$$ for any $U_0\in B^N_{2,\infty}$.*
*Proof.* We use a transformation similar to the one in the proof of [@ABC22 Lemma 4.4], i.e. we set $U(\tau)=e^{\tau L_{ss}}U_0$ and $$u(t,x)=\frac1{\sqrt{t+1}}U\Big(\log(t+1),\frac{x}{\sqrt{t+1}}\Big),\quad \bar u(t,x)=\frac1{\sqrt{t+1}}\bar U\Big( \frac{x}{\sqrt{t+1}}\Big).$$ This leads to $$\partial_t u-\Delta u=-\mathbb{P}(\bar u\cdot \nabla u+u\cdot \nabla \bar u),\quad u(0)=U_0.$$ Since $\bar U$ is smooth, we obtain that $\bar u$ is also smooth, and in this transformation $t=e^\tau-1\simeq \tau$ when $\tau\in(0,2)$. By Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} we know that $$\sup_{t\in (0,e^2-1]}\|u(t)\|_{L^2}\leqslant C\|U_0\|_{L^2}.$$ By the Duhamel formula, the paraproduct decomposition with implicit summation over $i=1,2,3$ and the smoothness of $\bar u$, we obtain for $t\in (0,e^2-1)$ $$\begin{aligned}
\label{estg}&\|\Delta_j \nabla^N u(t)\|_{L^2}\nonumber
\\&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla^NU_{0}\|_{L^2}+\int_0^te^{-2^{2j}(t-s)}\|\nabla^{N}\Delta_j(\bar u^{i} \prec \partial_{i} u +\bar u^{i} \succ \partial_{i} u +\bar u^{i} \circ \partial_{i} u)\|_{L^2}{\mathord{{\rm d}}}s \nonumber\\
&\qquad\qquad+\int_0^te^{-2^{2j}(t-s)}\|\nabla^{N}\Delta_j( u^{i} \prec \partial_{i}\bar u + u^{i} \succ \partial_{i}\bar u + u^{i} \circ \partial_{i} \bar u)\|_{L^2}{\mathord{{\rm d}}}s \nonumber
\\&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla^NU_{0}\|_{L^2}+\sum_{l\sim j}\int_0^te^{-2^{2j}(t-s)}2^{2j}\|\nabla^{N-1}\Delta_l u\|_{L^2}{\mathord{{\rm d}}}s+\sup_{t\in(0,e^2-1)}\|u(t)\|_{L^2}.
\end{aligned}$$ Indeed, in the last step we used paraproduct estimates Lemma [Lemma 4](#lem:para){reference-type="ref" reference="lem:para"}, smoothness of $\bar u$ and $$\|\Delta_j\nabla^{N}(\bar u\prec \nabla u)\|_{L^2}\lesssim\sum_{l\sim j}(\|\nabla^{N+1}\Delta_l u\|_{L^2}+\|\nabla\Delta_l u\|_{L^2})\lesssim 2^{2j}\sum_{l\sim j}\|\nabla^{N-1}\Delta_l u\|_{L^2}+2^j\| u\|_{L^2},$$ $$\|\Delta_j\nabla^{N}(\nabla\bar u\prec u)\|_{L^2}\lesssim\sum_{l\sim j}(\|\nabla^{N}\Delta_l u\|_{L^2}+\|\Delta_l u\|_{L^2})\lesssim 2^{j}\sum_{l\sim j}\|\nabla^{N-1}\Delta_l u\|_{L^2}+\| u\|_{L^2},$$ and we used $\sup_{t\in(0,e^2-1)}\|u(t)\|_{L^2}$ to directly control the remaining terms as $\bar u$ is smooth.
Choosing $N=1$ in [\[estg\]](#estg){reference-type="eqref" reference="estg"}, the second term on the right hand side gives $\|u\|_{L^2}$ hence we obtain for $t\in (0,e^2-1)$ $$\begin{aligned}
\label{eqest1}\|\Delta_j \nabla u(t)\|_{L^2}&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla U_0\|_{L^2}+\|U_0\|_{L^2}.
\end{aligned}$$ Choosing $N=2$ in [\[estg\]](#estg){reference-type="eqref" reference="estg"}, we apply [\[eqest1\]](#eqest1){reference-type="eqref" reference="eqest1"} to control the second term and obtain for $t\in (0,e^2-1)$ $$\begin{aligned}
\|\Delta_j \nabla^2 u(t)\|_{L^2}&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla^2 U_0\|_{L^2}+\int_0^te^{-2^{2j}(t-s)}e^{-2^{2j}s}2^{2j}{\mathord{{\rm d}}}s\sum_{l\sim j}\|\Delta_l \nabla U_0\|_{L^2}+\|U_0\|_{L^2}
\\&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla^2 U_0\|_{L^2}+e^{-c2^{2j}t}\sum_{l\sim j}\|\Delta_l \nabla U_0\|_{L^2}+\|U_0\|_{L^2}.
\end{aligned}$$ For a general $N\in\mathbb N$, we iterate the above argument to have for $t\in (0,e^2-1)$ $$\begin{aligned}
\|\Delta_j \nabla^N u(t)\|_{L^2}&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla^N U_0\|_{L^2}+\sum_{l\sim j}\int_0^te^{-c2^{2j}t}2^{2j}{\mathord{{\rm d}}}s\|\Delta_l \nabla^{N-1} U_0\|_{L^2}+\|U_0\|_{L^2}
\\&\lesssim e^{-2^{2j}t}\|\Delta_j \nabla^N U_0\|_{L^2}+\sum_{l\sim j}e^{-c2^{2j}t}\|\Delta_l \nabla^{N-1} U_0\|_{L^2}+\|U_0\|_{L^2},
\end{aligned}$$ which implies the desired result. ◻
Going back to the Navier--Stokes equations in the physical variables [\[eql\]](#eql){reference-type="eqref" reference="eql"}, we deduce the following.
**Theorem 11**. *There exist an $(\mathcal{F}_t)$-stopping time $T_0>0$ and an $({\mathcal F}_t)$-adapted $f\in L^1(0,T_0;L^2)$ ${\mathbf P}$-a.s. such that there exist two distinct $(\mathcal{F}_t)$-adapted Leray--Hopf solutions in $L^\infty(0,T_0;L^2)\cap L^2(0,T_0;H^1)\cap C_w([0,T_0];L^2)$ ${\mathbf P}$-a.s. to the Navier--Stokes equations [\[eql\]](#eql){reference-type="eqref" reference="eql"} on $[0,T_0]\times{\mathbb R}^3$ with initial data $u_0\equiv0$, i.e. for all $t\in [0,T_0]$ and all divergence-free $\psi\in C^\infty_c({\mathbb R}^3)$ $$\begin{aligned}
\langle u(t),\psi\rangle=\int_0^t\langle u,\Delta\psi\rangle{\mathord{{\rm d}}}r-\int_0^t \langle u\cdot \nabla u,\psi\rangle{\mathord{{\rm d}}}r+\int_0^t \langle f,\psi\rangle{\mathord{{\rm d}}}r+\int_0^t\langle\psi,u{\mathord{{\rm d}}}W\rangle,\end{aligned}$$ and the following energy inequality holds true for all $t\in{\mathbb R}^+$ $$\begin{aligned}
\label{energy1}\mathbf E\|u(t\wedge T_0)\|_{L^2}^2+2\mathbf E\int_0^{t\wedge T_0}\|\nabla u\|_{L^2}^2{\mathord{{\rm d}}}s\leqslant 2\mathbf E\int_0^{t\wedge T_0}\langle f,u\rangle{\mathord{{\rm d}}}s+\mathbf E\int_0^{t\wedge T_0}\|u\|_{L^2}^2{\mathord{{\rm d}}}s.\end{aligned}$$ Moreover, the mapping $W\mapsto f=f(W)$ is continuous from $C([0,T])$ to $L^1_{T}L^2$.*
*Proof.* As a consequence of Lemma [Lemma 9](#lem:per){reference-type="ref" reference="lem:per"} and [\[r:Ulin\]](#r:Ulin){reference-type="eqref" reference="r:Ulin"}, we deduce that [\[eql2\]](#eql2){reference-type="eqref" reference="eql2"} with $H=\bar H$ on $(-\infty,T]$, $T=\tau_{0}\wedge\log\tau_{R}$, admits two $(\mathcal{G}_{\tau})$-adapted solutions $\bar U$ and $U$ defined in [\[defU\]](#defU){reference-type="eqref" reference="defU"}. The transform [\[tr2\]](#tr2){reference-type="eqref" reference="tr2"} then permits to go back to the physical variables, and there exist two distinct $(\hat {\mathcal F}_t)$-adapted Leray--Hopf solutions $v_{1}$, $v_{2}$ to the Navier--Stokes equations [\[eql1\]](#eql1){reference-type="eqref" reference="eql1"} with $g$ given in [\[def:g\]](#def:g){reference-type="eqref" reference="def:g"} on $[0,\hat T]$ with $\hat T={e^{\tau_{0}}\wedge\tau_{R}}$ where $$v_{1}(t,x)=\frac1{\sqrt t}\bar U(\xi), \quad v_{2}(t,x)=\frac1{\sqrt t} U(\tau,\xi).$$ By Lemma [Lemma 9](#lem:per){reference-type="ref" reference="lem:per"}, $\hat T$ is a $(\hat{\mathcal F}_t)$-stopping time.
Finally, we define $$u_1(t)=e^{W(t)-t/2}(v_1\circ \theta)(t),\quad u_2(t)=e^{W(t)-t/2} (v_2\circ\theta)(t)$$ which gives two distinct $({\mathcal F}_t)$-adapted solutions to the original Navier--Stokes equations [\[eql\]](#eql){reference-type="eqref" reference="eql"} on $[0,T_{0}]$ with $T_{0}=\theta^{-1}\circ\hat T$ and $f$ given in [\[def:f\]](#def:f){reference-type="eqref" reference="def:f"}. Indeed, by Lemma [Lemma 8](#lem:3){reference-type="ref" reference="lem:3"}, $T_0$ is an $({\mathcal F}_t)$-stopping time. In view of Lemma [Lemma 7](#lem:2){reference-type="ref" reference="lem:2"}, both $u_1$ and $u_2$ are $({\mathcal F}_t)$-adapted. Based on change of variables and regularity of $\bar U$, $U$ we obtain $$\begin{aligned}
\label{re:u_i}
\|u_i(t)\|_{L^2}\lesssim \theta(t)^{1/4}\lesssim t^{1/4},\quad \|\nabla u_i(t)\|_{L^2}\lesssim \theta(t)^{-1/4}\lesssim t^{-1/4},\qquad i=1,2,
\end{aligned}$$ where we used that $\theta(t)\sim t$ before $T_0$. In fact, if $t\leqslant T_0$ then $\theta(t)\leqslant\tau_R$, which implies that $|W_t|\leqslant R$ and $|t|\leqslant R$. Thus $\theta(t)\sim t$. This implies that $u_i\in C([0,T_0];L^2)\cap L^2(0,T_0;H^1)$ and the integral $\int_0^{T_0} \langle \mathord{{\rm div}}(u_{i}\otimes u_{i}),u_{i}\rangle{\mathord{{\rm d}}}t$ is finite. By Lemma [Lemma 9](#lem:per){reference-type="ref" reference="lem:per"} and Itô's formula, we find that $u_1$ and $u_2$ have the desired regularity on $[0,T_0]$ ${\mathbf P}$-a.s., satisfy the Navier--Stokes equations [\[eql\]](#eql){reference-type="eqref" reference="eql"} in the analytically weak sense with $f$ given in [\[def:f\]](#def:f){reference-type="eqref" reference="def:f"}, and also satisfy the energy inequality on $[0,T_0]$. From Lemma [Lemma 9](#lem:per){reference-type="ref" reference="lem:per"} and [\[r:Ulin\]](#r:Ulin){reference-type="eqref" reference="r:Ulin"} we find that $u_1$ and $u_2$ are different. By [\[def:f\]](#def:f){reference-type="eqref" reference="def:f"}, the continuity of $\theta$ with respect to $W$ and uniform integrability, we know that $W\mapsto f$ is a continuous functional of $W$ from $C([0,T])$ to $L^1_{T}L^2$. The result follows. ◻
From [\[re:u_i\]](#re:u_i){reference-type="eqref" reference="re:u_i"} and Itô's formula we further obtain that the above solutions satisfy for any $q\geqslant 1$ $$\mathbf E\left(\sup_{r\in [0,t\wedge T_0]}\|u(r)\|_{L^2}^{2q}+\int_{0}^{t\wedge T_0}\|\nabla u(r)\|^2_{L^2}{\mathord{{\rm d}}}r\right)\lesssim\mathbf E\left(\int_0^{t\wedge T_{0}}\|f(r)\|_{L^2}{\mathord{{\rm d}}}r\right)^{2q}+1<\infty.$$
**Remark 12**. As in [@ABC22 (1.24)], we obtain that $f\in L^1_TL^p$ for any $p<3$. On the other hand, by a similar argument as in the deterministic setting, we know that if $f\in L^1_TL^3$ then there exists at most one solution in $C_TL^3$ to [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} starting from $0$. Indeed, if $f\in L^1_TL^3$ then by the mild formulation the solution belongs to $C_TL^3$, which is a space where uniqueness holds true. For the equation of $w$ we could use a similar argument as in the deterministic case [@LR02 Chapter 27] to deduce uniqueness of solutions to [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} in $C_TL^3$ when $f\in L^1_TL^3$.
## Non-uniqueness in law {#sec:3.2}
We consider the following general stochastic Navier--Stokes equations $$\label{1}
\aligned
{\mathord{{\rm d}}}u-\Delta u {\mathord{{\rm d}}}t+\mathord{{\rm div}}(u\otimes u){\mathord{{\rm d}}}t+\nabla p{\mathord{{\rm d}}}t&=G(u){\mathord{{\rm d}}}W+f(W){\mathord{{\rm d}}}t,
\\\mathord{{\rm div}}u&=0.
\endaligned$$ In the above, $W$ is a Wiener process on a stochastic basis $(\Omega,{\mathcal F}, ({\mathcal F}_t)_{t\geqslant 0}, {\mathbf P})$, $G(u){\mathord{{\rm d}}}W$ represents a stochastic force acting on the fluid with $G$ satisfying the assumptions in Section [2.2](#s:p){reference-type="ref" reference="s:p"} and additionally $f(W)$ is a given random force where $f$ is a continuous map from $C(\mathbb R^{+};{\mathbb U}_{1})$ to $L^{1}_{\rm loc}(\mathbb R^{+};L^{2})$ and $f(W)(t)\in\sigma\{W(s);s\leqslant t\}$. We give the definition of probabilistically weak solutions to the above system.
**Definition 13**. *Let $s\geqslant 0$ and $x_{0}\in L^{2}_{\sigma}$, $y_0\in {\mathbb U}_1$. A probability measure $P\in \mathscr{P}({\Omega})$ is a probabilistically weak Leray--Hopf solution to the Navier--Stokes system [\[1\]](#1){reference-type="eqref" reference="1"} with the initial value $(x_0,y_0)$ at time $s$ provided*
*(M1) $P(x(t)=x_0, y(t)=y_0, 0\leqslant t\leqslant s)=1$, for any $n\in\mathbb{N}$ $$P\left\{(x,y)\in {\Omega}: \int_0^n\|G(x(r))\|_{L_2({\mathbb U};L_2^\sigma)}^2{\mathord{{\rm d}}}r<+\infty\right\}=1.$$*
*(M2) Under $P$, $y$ is a cylindrical $({\mathcal{B}}_{t})_{t\geqslant s}$-Wiener process on ${\mathbb U}$ starting from $y_0$ at time $s$ and for every $\psi\in C^\infty_c({\mathbb R}^3)\cap L^2_\sigma$, and for $t\geqslant s$ $$\langle x(t)-x(s),\psi\rangle+\int^t_s\langle \mathord{{\rm div}}(x(r)\otimes x(r))-\Delta x(r),\psi\rangle {\mathord{{\rm d}}}r=\int_s^t \langle \psi, G(x(r)) {\mathord{{\rm d}}}y(r)\rangle+\int_s^t\langle\psi,f(y)(r)\rangle{\mathord{{\rm d}}}r.$$*
*(M3) It holds for all $t\geqslant s$ $$E^P\|x(t)\|_{L^2}^2+2E^P\int_s^{t}\|\nabla x\|_{L^2}^2{\mathord{{\rm d}}}r\leqslant\|x(s)\|_{L^2}^2+2E^P\int_s^{t}\langle f(y)(r),x(r)\rangle{\mathord{{\rm d}}}r+E^P\int_s^{t}\|G(x(r))\|_{L^2({\mathbb U},L^2_\sigma)}^2{\mathord{{\rm d}}}r,$$ and for any $q\in \mathbb{N}$ there exists a positive real function $t\mapsto C_{t,q}$ such that for all $t\geqslant s$ $$E^P\left(\sup_{r\in [0,t]}\|x(r)\|_{L^2}^{2q}+\int_{s}^t\|\nabla x(r)\|^2_{L^2}{\mathord{{\rm d}}}r\right)\leqslant C_{t,q}(\|x_0\|_{L^2}^{2q}+1+E^P\|f(y)\|_{L^1_tL^2}^{2q})<\infty,$$*
For our purposes, we also require a definition of probabilistically weak solutions defined up to a stopping time $\tau$. To this end, we set $${\Omega}_{\tau}:=\{\omega(\cdot\wedge\tau(\omega));\omega\in {\Omega}\}.$$
**Definition 14**. *Let $s\geqslant 0$ and $x_{0}\in L^{2}_{\sigma}$, $y_0\in {\mathbb U}_1$. Let $\tau\geqslant s$ be a $({\mathcal{B}}_{t})_{t\geqslant s}$-stopping time. A probability measure $P\in \mathscr{P}({\Omega}_\tau)$ is a probabilistically weak solution to the Navier--Stokes system [\[1\]](#1){reference-type="eqref" reference="1"} on $[s,\tau]$ with the initial value $(x_0,y_0)$ at time $s$ provided*
*(M1) $P(x(t)=x_0, y(t)=y_0, 0\leqslant t\leqslant s)=1$ and for any $n\in\mathbb{N}$ $$P\left\{(x,y)\in \Omega: \int_0^{n\wedge \tau}\|G(x(r))\|_{L_2({\mathbb U};L_2^\sigma)}^2{\mathord{{\rm d}}}r<+\infty\right\}=1.$$*
*(M2) Under $P$, $\langle y(\cdot\wedge \tau),l\rangle_{{\mathbb U}}$ is a continuous square integrable $({\mathcal{B}}_{t})_{t\geqslant s}$-martingale starting from $y_0$ at time $s$ with quadratic variation process given by $(t\wedge \tau-s)\|l\|_{{\mathbb U}}^2$ for $l\in U$. For every $\psi\in C^\infty_c(\mathbb{R}^3)\cap L^2_\sigma$, and for $t\geqslant s$ $$\begin{aligned}
&\langle x(t\wedge \tau)-x(s),\psi\rangle+\int^{t\wedge \tau}_s\langle \mathord{{\rm div}}(x(r)\otimes x(r))-\Delta x(r),\psi\rangle {\mathord{{\rm d}}}r
\\&\qquad=\int_s^{t\wedge\tau} \langle \psi, G(x(r)) {\mathord{{\rm d}}}y(r)\rangle+\int_s^{t\wedge \tau}\langle f(y)(r),\psi\rangle{\mathord{{\rm d}}}r.
\end{aligned}$$*
*(M3) It holds for all $t\geqslant s$ $$\begin{aligned}
& E^P\|x(t\wedge \tau)\|_{L^2}^2+2E^P\int_s^{t\wedge \tau}\|\nabla x\|_{L^2}^2{\mathord{{\rm d}}}r
\\&\qquad\leqslant\|x(s)\|_{L^2}^2+2E^P\int_s^{t\wedge \tau}\langle f(y)(r),x(r)\rangle{\mathord{{\rm d}}}r+E^P\int_s^{t\wedge \tau}\|G(x(r))\|_{L^2({\mathbb U},L^2_\sigma)}^2{\mathord{{\rm d}}}r,\end{aligned}$$ and for any $q\in \mathbb{N}$ there exists a positive real function $t\mapsto C_{t,q}$ such that for all $t\geqslant s$ $$\begin{aligned}
&E^P\left(\sup_{r\in [0,t\wedge\tau]}\|x(r)\|_{L^2}^{2q}+\int_{s}^{t\wedge\tau}\|\nabla x(r)\|^2_{L^2}{\mathord{{\rm d}}}r\right)
\\&\qquad\leqslant C_{t,q}\Big(\|x_0\|_{L^2}^{2q}+E^P\Big(\int_0^{t\wedge\tau}\|f(y(r))\|_{L^2}d r\Big)^{2q}+1\Big)<\infty.
\end{aligned}$$*
First, we show that probabilistically weak solutions in the sense of Definition [Definition 13](#weak solution){reference-type="ref" reference="weak solution"} exist and are stable with respect to approximations of the initial time and the initial condition.
**Theorem 15**. *For every $(s,x_0,y_0)\in [0,\infty)\times L_{\sigma}^2\times {\mathbb U}_1$, there exists $P\in\mathscr{P}({\Omega})$ which is a probabilistically weak solution to the Navier--Stokes system [\[1\]](#1){reference-type="eqref" reference="1"} starting at time $s$ from the initial condition $(x_0,y_0)$ in the sense of Definition [Definition 13](#weak solution){reference-type="ref" reference="weak solution"}. The set of all such probabilistically weak solutions with the same implicit constant $C_{t,q}$ in Definition [Definition 13](#weak solution){reference-type="ref" reference="weak solution"} is denoted by $\mathscr{W}(s,x_0,y_0, C_{t,q})$.*
*Let $(s_n,x_n,y_n)\rightarrow (s,x_0,y_0)$ in $[0,\infty)\times L_{\sigma}^2\times {\mathbb U}_1$ as $n\rightarrow\infty$ and let $P_n\in \mathscr{W}(s_n,x_n,y_n, C_{t,q})$. Then there exists a subsequence $n_k$ such that the sequence $(P_{n_k})_{k\in\mathbb{N}}$ converges weakly to some $P\in\mathscr{W}(s,x_0,y_0, C_{t,q})$.*
*Proof.* The proof follows the lines of the proof of [@HZZ19 Theorem 5.1]. The main difference is the function space for tightness since we are now on the whole space $\mathbb R^{3}$ instead of the torus $\mathbb{T}^{3}$. In this case we could use [@MR Lemma 2.7] to deduce that the family of probability measures $P_{n}$, $n\in\mathbb N$, is tight on $$\begin{aligned}
C({\mathbb R}^+;H^{-3}_{\text{loc}})\cap \big(L^2_{\text{loc}}({\mathbb R}^+;H^1),w\big)\cap L^2_{\text{loc}}({\mathbb R}^+;L^2_{\text{loc}}),
\end{aligned}$$ where $\big(L^2_{\text{loc}}({\mathbb R}^+;H^1),w\big)$ denotes the weak topology on $L^2_{\rm loc}(\mathbb R^{+};H^1)$. The convergence of $f(y)$ follows from continuity of $y\mapsto f(y)$ from $C({\mathbb R}^+,{\mathbb U}_1)\to L^1(0,T;L^2)$ for every $T\geqslant 0$. ◻
As the next step, we shall extend probabilistically weak solutions defined up to a $({\mathcal{B}}_{t})_{t\geqslant 0}$-stopping time $\tau$ to the whole interval $[0,\infty)$. We denote by ${\mathcal{B}}_{\tau}$ the $\sigma$-field associated with $\tau$.
We recall the following result from [@HZZ19 Proposition 5.2, Proposition 5.3].
**Proposition 16**. *Let $\tau$ be a bounded $({\mathcal{B}}_{t})_{t\geqslant 0}$-stopping time. Then for every $\omega\in {\Omega}$ there exists $Q_{\omega}\in\mathscr{P}({\Omega})$ such that for $\omega\in \{x(\tau)\in L^2_\sigma\}$ $$\label{qomega 1}
Q_\omega\big(\omega'\in\Omega; (x,y)(t,\omega')=(x,y)(t,\omega) \textrm{ for } 0\leqslant t\leqslant\tau(\omega)\big)=1,$$ and $$\label{qomega2 1}
Q_\omega(A)=R_{\tau(\omega),x(\tau(\omega),\omega),y(\tau(\omega),\omega)}(A)\qquad\text{for all}\ A\in \mathcal{B}^{\tau(\omega)}.$$ where $R_{\tau(\omega),x(\tau(\omega),\omega),y(\tau(\omega),\omega)}\in\mathscr{P}({\Omega})$ is a probabilistically weak solution to the Navier--Stokes system [\[1\]](#1){reference-type="eqref" reference="1"} starting at time $\tau(\omega)$ from the initial condition $(x(\tau(\omega),\omega), y(\tau(\omega),\omega))$. Furthermore, for every $B\in{\mathcal{B}}$ the mapping $\omega\mapsto Q_{\omega}(B)$ is ${\mathcal{B}}_{\tau}$-measurable.*
**Proposition 17**. *Let $x_{0}\in L^{2}_{\sigma}$. Let $P$ be a probabilistically weak solution to the Navier--Stokes system [\[1\]](#1){reference-type="eqref" reference="1"} on $[0,\tau]$ starting at the time $0$ from the initial condition $(x_{0},0)$. In addition to the assumptions of Proposition [Proposition 16](#prop:1 1){reference-type="ref" reference="prop:1 1"}, suppose that there exists a Borel set $\mathcal{N}\subset{\Omega}_{\tau}$ such that $P(\mathcal{N})=0$ and for every $\omega\in \mathcal{N}^{c}$ it holds $$\label{Q1 1}
\aligned
&Q_\omega\big(\omega'\in\Omega; \tau(\omega')=
\tau(\omega)\big)=1.
\endaligned$$ Then the probability measure $P\otimes_{\tau}R\in \mathscr{P}({\Omega})$ defined by $$%\label{eq:PR 1}
P\otimes_{\tau}R(\cdot):=\int_{{\Omega}}Q_{\omega} (\cdot)\,P({\mathord{{\rm d}}}\omega)$$ satisfies $P\otimes_{\tau}R= P$ on $\sigma\{x(t\wedge\tau),y(t\wedge \tau),t\geqslant 0\}$ and is a probabilistically weak solution to the Navier--Stokes system [\[1\]](#1){reference-type="eqref" reference="1"} on $[0,\infty)$ with initial condition $(x_{0},0)$.*
*Proof.* The proof follows the lines of [@HZZ19 Proposition 5.3]. The main difference is that we have to verify (M3) holds for $P\otimes_{\tau}R$. We have $$\aligned
&E^{P\otimes_{\tau}R}\Big(\|x(t)\|_{L^2}^{2}+2\int_0^t\|\nabla x(r)\|_{L^2}^2{\mathord{{\rm d}}}r\Big)\\
&= E^{P\otimes_{\tau}R}\Big(\|x(t\wedge \tau)\|_{L^2}^{2}+2\int_0^{t\wedge\tau}\|\nabla x(r)\|_{L^2}^2{\mathord{{\rm d}}}r\Big)
\\&\qquad+E^{P\otimes_{\tau}R}\Big(\|x(t)\|_{L^2}^{2}-\|x(t\wedge\tau)\|_{L^2}^{2}+2\int_{t\wedge\tau}^t\|\nabla x(r)\|_{L^2}^2{\mathord{{\rm d}}}r\Big).
\endaligned$$ For the first term on the right hand side, we have $$\begin{aligned}
&E^{P\otimes_{\tau}R}\Big(\|x(t\wedge \tau)\|_{L^2}^{2}+2\int_0^{t\wedge\tau}\|\nabla x(r)\|_{L^2}^2{\mathord{{\rm d}}}r\Big)
\\ & \leqslant\|x(0)\|_{L^2}^2+2E^{P}\int_0^{t\wedge \tau}\langle f(y),x\rangle{\mathord{{\rm d}}}r+E^P\int_0^{t\wedge \tau}\|G(x(r))\|_{L^2({\mathbb U},L^2_\sigma)}^2{\mathord{{\rm d}}}r.\end{aligned}$$ For the second term we note that under $Q_\omega$ $$\begin{aligned}
&E^{Q_\omega}\Big(\|x(t)\|_{L^2}^{2}-\|x(t\wedge\tau(\omega))\|_{L^2}^{2}+2\int_{t\wedge\tau(\omega)}^t\|\nabla x(r)\|_{L^2}^2{\mathord{{\rm d}}}r\Big)
\\&\leqslant 2E^{Q_\omega}\int_{t\wedge \tau(\omega)}^t\langle f(y),x\rangle{\mathord{{\rm d}}}r+E^{Q_\omega}\int_{t\wedge \tau(\omega)}^t\|G(x(r))\|_{L^2({\mathbb U},L^2_\sigma)}^2{\mathord{{\rm d}}}r.\end{aligned}$$ Integrating with respect to $P$ and using [\[Q1 1\]](#Q1 1){reference-type="eqref" reference="Q1 1"}, we deduce $$\begin{aligned}
&E^{P\otimes_{\tau}R}\Big(\|x(t)\|_{L^2}^{2}-\|x(t\wedge\tau)\|_{L^2}^{2}+2\int_{t\wedge\tau}^t\|\nabla x(r)\|_{L^2}^2{\mathord{{\rm d}}}r\Big)
\\&\leqslant 2E^{P\otimes_{\tau}R}\int_{t\wedge \tau}^t\langle f(y),x\rangle{\mathord{{\rm d}}}r+E^{P\otimes_{\tau}R}\int_{t\wedge \tau}^t\|G(x(r))\|_{L^2({\mathbb U},L^2_\sigma)}^2{\mathord{{\rm d}}}r.\end{aligned}$$ Hence, the first inequality in (M3) holds for $P\otimes_{\tau}R$ and the second one is similar. ◻
From now on, we restrict ourselves to the setting of a linear multiplicative noise as in Section [3.1](#sec:loc){reference-type="ref" reference="sec:loc"}. In particular, the driving Wiener process is real-valued and consequently ${\mathbb U}={\mathbb U}_{1}=\mathbb{R}$. Furthermore, we choose $f$ as defined in [\[def:f\]](#def:f){reference-type="eqref" reference="def:f"}. By definition, it belongs to $L^{1}_{\rm loc}(\mathbb R^{+};L^{2})$ and in Theorem [Theorem 11](#th:2){reference-type="ref" reference="th:2"} we only used its restriction to $[0,T_{0}]$. In addition, $f$ is a continuous functional of the driven Brownian motion $W$, namely, $W\mapsto f(W)$ is a continuous map from $C({\mathbb R}^+;{\mathbb U}_1)$ to $L^1_{\text{loc}}({\mathbb R}^+;L^2)$ satisfying $f(y)(t)\in {\mathcal B}_t^0(y)$ for $y\in C({\mathbb R}^+;{\mathbb U}_1)$ and every $t\geqslant 0$, where ${\mathcal B}_t^0(y)=\sigma\{y(s),s\leqslant t\}$. Furthermore, we have for every $t\geqslant 0$ and $q\geqslant 1$ $$\begin{aligned}
\label{bd:f}
\mathbf E\|f\|_{L_t^1L^2}^{2q}<\infty.\end{aligned}$$ Indeed, since $h(t)^{-1}=e^{W(t)-t/2}$ is an exponential martingale, we have for any $p\geqslant 1$ $$\mathbf E\big(\max_{s\in [0,t]}h(s)^{-1}\big)^p\lesssim1.$$ Hence, we consider $g(\theta(t))$. By definition of $g$, we could write $$\begin{aligned}
g(\theta(t))=g_1(\theta(t))+h(t)g_2(\theta(t)),\end{aligned}$$ for some $g_1=\frac1{t^{3/2}}H_1(\frac{x}{\sqrt t})$ and $g_2=\frac1{t^{3/2}}H_2(\frac{x}{\sqrt t})$ with $H_1, H_2$ smooth functions with compact support. By change of variables we have $$\begin{aligned}
\int_0^t\|g_1(\theta(s))\|_{L^2}{\mathord{{\rm d}}}s\lesssim \int_0^t|\theta(s)|^{-3/4}{\mathord{{\rm d}}}s.\end{aligned}$$ Since $$\theta(t)=\int_0^te^{W(s)-s/2}{\mathord{{\rm d}}}s\geqslant{\mathord{{\rm min}}}_{s\in [0,t]} e^{W(s)-s/2}t,$$ we have $$\begin{aligned}
\int_0^t\|g_1(\theta(s))\|_{L^2}{\mathord{{\rm d}}}s\lesssim \int_0^ts^{-3/4}{\mathord{{\rm d}}}s \max_{s\in [0,t]}(e^{-\frac34W(s)+\frac{3s}8}).\end{aligned}$$ Hence, we have for any $p\geqslant 1$ $$\begin{aligned}
\mathbf E\Big(\int_0^t\|g_1(\theta(s))\|_{L^2}{\mathord{{\rm d}}}s\Big)^p\lesssim 1.\end{aligned}$$ Similarly, we have $$\begin{aligned}
\mathbf E\Big(\int_0^t\|h(s)^{-1}g_2(\theta(s))\|_{L^2}{\mathord{{\rm d}}}s\Big)^p\lesssim 1.\end{aligned}$$ Hence, [\[bd:f\]](#bd:f){reference-type="eqref" reference="bd:f"} holds.
Next, we shall introduce the stopping time as in Section [3.1](#sec:loc){reference-type="ref" reference="sec:loc"}, i.e. we define $$\bar\theta(t):=\int_0^te^{y(s)-s/2}{\mathord{{\rm d}}}s,\quad t\geqslant 0,$$ which is also positive for $t>0$, strictly increasing and continuous for every $y$. Hence we also have the inverse of $\bar \theta$ denoted as $\bar \theta^{-1}$. For $n\in\mathbb{N}$, $R>1$ we define $$\aligned\bar\tau^n(\omega)&:=\inf\left\{t> 0, |\bar\theta^{-1}(t,\omega)|>R-\frac{1}{n}\right\}\bigwedge \inf\left\{t>0,\|y(\bar\theta^{-1}(t),\omega)\|_{C_t^{\frac{1}{4}}}>R-\frac{1}{n}\right\}\bigwedge e^{\tau_0},
\endaligned$$ with $\tau_0$ being the deterministic constant given in the proof of Lemma [Lemma 9](#lem:per){reference-type="ref" reference="lem:per"}. Set $$\bar T^n:=\bar \theta^{-1}\circ \bar\tau^n.$$ Then the sequence $\{\bar T^{n}\}_{n\in\mathbb{N}}$ is nondecreasing and we define $$\label{eq:tauL 1}
\bar T:=\lim_{n\rightarrow\infty}\bar T^n.$$ Without additional regularity of the process $y$, it may happen that $\bar\tau^{n}(\omega)=0$. By [@HZZ19 Lemma 3.5] and Lemma [Lemma 8](#lem:3){reference-type="ref" reference="lem:3"} we obtain that $\bar T^{n}$ is a $({\mathcal{B}}_t)_{t\geqslant 0}$-stopping time and consequently also $\bar T$ is a $({\mathcal{B}}_t)_{t\geqslant 0}$-stopping time, as an increasing limit of stopping times.
Now, we fix a real-valued Wiener process $W$ defined on a probability space $(\Omega, \mathcal{F},\mathbf{P})$ and we denote by $(\mathcal{F}_{t})_{t\geqslant 0}$ its normal filtration. On this stochastic basis, we apply Theorem [Theorem 11](#th:2){reference-type="ref" reference="th:2"} and denote by $u_1$ and $u_2$ the corresponding solution to the Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"} on $[0,T_0]$, where the stopping time $T_{0}$ is defined in the proof of Theorem [Theorem 11](#th:2){reference-type="ref" reference="th:2"}. We recall that $u_i, i=1,2,$ is adapted with respect to $(\mathcal{F}_{t})_{t\geqslant 0}$. We denote by $P_i$ the law of $(u_i,W)$ and obtain the following result by similar arguments as in the proof of [@HZZ19 Proposition 5.4].
**Proposition 18**. *The probability measure $P_i, i=1,2,$ is a probabilistically weak solution to the Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"} on $[0,\bar T]$ in the sense of Definition [Definition 14](#weak solution 1){reference-type="ref" reference="weak solution 1"}, where $\bar T$ was defined in [\[eq:tauL 1\]](#eq:tauL 1){reference-type="eqref" reference="eq:tauL 1"}.*
**Proposition 19**. *The probability measure $P_i\otimes_{\bar T}R, i=1,2,$ is a probabilistically weak solution to the Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"} on $[0,\infty)$ in the sense of Definition [Definition 13](#weak solution){reference-type="ref" reference="weak solution"}.*
*Proof.* The proof follows from a similar argument as in [@HZZ19 Proposition 3.8]. In light of Proposition [Proposition 16](#prop:1 1){reference-type="ref" reference="prop:1 1"} and Proposition [Proposition 17](#prop:2 1){reference-type="ref" reference="prop:2 1"}, it only remains to establish [\[Q1 1\]](#Q1 1){reference-type="eqref" reference="Q1 1"}. We know that $$\begin{aligned}
\begin{aligned}
{P_i}\left(\omega:y(\bar\theta^{-1}(\cdot\wedge \bar T(\omega)))\in C^{{\frac13}}_{\mathrm{loc}}({\mathbb R}^+;{\mathbb R})\right)=1.
\end{aligned}
\end{aligned}$$ This means that there exists a ${P_i}$-measurable set $\mathcal{N}\subset \Omega_{0,\bar T}$ such that ${P_i}(\mathcal{N})=0$ and for $\omega\in \mathcal{N}^c$ $$\label{continuity1}
y(\bar\theta^{-1}(\cdot\wedge \bar T(\omega)))\in C^{{\frac13}}_{\mathrm{loc}}({\mathbb R}^+;{\mathbb R}).$$ Similar as in [@HZZ19 Proposition 3.8] for all $\omega\in \mathcal{N}^c\cap \{x(\tau)\in L^2_\sigma\}$ $$Q_\omega\left(\omega'\in\Omega_{0}; y(\bar\theta^{-1})\in C^{{\frac13}}_{\mathrm{loc}}({\mathbb R}^+;{\mathbb R})\right)=1.$$
As a consequence, for all $\omega\in \mathcal{N}^c\cap \{x(\tau)\in L^2_\sigma\}$ there exists a measurable set $N_\omega$ such that $Q_\omega(N_\omega)=0$ and for all $\omega'\in N_\omega^c$ the trajectory $t\mapsto y(\bar\theta^{-1})(t,\omega')$ belongs to $C^{{\frac{1}{3}}}_{\mathrm{loc}}({\mathbb R}^+;{\mathbb R})$. Therefore, by [\[eq:tauL 1\]](#eq:tauL 1){reference-type="eqref" reference="eq:tauL 1"} and $\bar\theta^{-1}\in C^1_{\text{loc}}({\mathbb R}^+;{\mathbb R})$ for all $\omega'\in \Omega$ we obtain that $\bar T(\omega')=\widetilde T(\omega')$ for all $\omega'\in N_\omega^c$ where $\widetilde T=\bar\theta^{-1}\circ \widetilde \tau$ $$\widetilde{\tau}(\omega'):=\inf\left\{t\geqslant 0, |\bar\theta^{-1}(t)|\geqslant R\right\}\bigwedge\inf\left\{t\geqslant 0,\|y(\bar \theta^{-1})\|_{C_t^{1/4}}\geqslant R\right\}\bigwedge e^{\tau_0}.$$ This implies that for $t>0$ $$\label{mea}\aligned
\left\{\omega'\in N_\omega^c,\widetilde T(\omega')\leqslant t\right\}
&=\left\{\omega'\in N_\omega^c,\widetilde \tau(\omega')\leqslant\bar\theta(t)\right\}
\\ &=\left\{\omega'\in N_\omega^c, \sup_{s\in\mathbb{Q},s\leqslant\bar\theta(t)}
|\bar\theta^{-1}(s)|\geqslant R\right\}
\\&\qquad\bigcup\left\{\omega'\in N_\omega^c, \sup_{s_1\neq s_2\in \mathbb{Q}\cap [0,\bar\theta(t)]}\frac{|(y(\bar \theta^{-1}))(s_1)-(y(\bar \theta^{-1}))(s_2)|}{|s_1-s_2|^{\frac{1}{4}}}\geqslant R\right\}
\\&\qquad\bigcup\left\{\omega'\in N_\omega^c, e^{\tau_0}\leqslant\bar\theta(t)\right\}
\\&=\left\{\omega'\in N_\omega^c, \sup_{s_1\neq s_2\in \mathbb{Q}\cap [0,t]}\frac{|y(s_1)-y(s_2)|}{|\bar\theta(s_1)-\bar\theta(s_2)|^{\frac{1}{4}}}\geqslant R\right\}
\\&\qquad\bigcup\left\{\omega'\in N_\omega^c,t\geqslant R\right\}\bigcup\left\{\omega'\in N_\omega^c, e^{\tau_0}\leqslant\bar\theta(t)\right\}
\\&=: N^c_\omega \cap A_t.\endaligned$$ Finally, we deduce that for all $\omega\in\mathcal{N}^c\cap \{x(\tau)\in L^2_\sigma\}$ with $P_i(x(\tau)\in L^2_\sigma)=1$ $$\label{Q}
\aligned
&Q_\omega\big(\omega'\in\Omega; \bar T (\omega')=\bar T(\omega)\big)=Q_\omega\big(\omega'\in N_\omega^c; \bar T (\omega')=\bar T(\omega)\big)
\\&\quad=Q_\omega\big(\omega'\in N_\omega^c; \omega'(s)=\omega(s), 0\leqslant s\leqslant\bar T(\omega), \bar T (\omega')=\bar T(\omega)\big)=1,
\endaligned$$ where we used [\[qomega 1\]](#qomega 1){reference-type="eqref" reference="qomega 1"} and the fact that ([\[mea\]](#mea){reference-type="ref" reference="mea"}) implies $$\{\omega'\in N_\omega^c; \bar T (\omega')=\bar T(\omega)\}=N_\omega^c\cap (A_{\bar T(\omega)}\backslash (\cup_{n=1}^\infty A_{\bar T(\omega)-\frac1n}))\in N_\omega^c\cap \mathcal{B}_{\bar T(\omega)}^0,$$ and $Q_\omega(A_{\bar T(\omega)}\backslash (\cup_{n=1}^\infty A_{\bar T(\omega)-\frac1n}))=1$. This verifies the condition [\[Q1 1\]](#Q1 1){reference-type="eqref" reference="Q1 1"} in Proposition [Proposition 17](#prop:2 1){reference-type="ref" reference="prop:2 1"} and as a consequence ${P_i}\otimes_{\bar T}R$ is a probabilistically weak solution to the Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"} on $[0,\infty)$ in the sense of Definition [Definition 13](#weak solution){reference-type="ref" reference="weak solution"}. ◻
**Theorem 20**. *There exists a force $f$, which is a measurable functional of the driving Brownian motion $W$, such that there exist two distinct probabilistically weak Leray--Hopf solutions ${\mathbf P}_1$ and ${\mathbf P}_2$ to the Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"} and $f\in L^1_{\rm loc}({\mathbb R}^+;L^2)$ ${\mathbf P}_i$-a.s.*
*Proof.* Define ${\mathbf P}_i=P_i\otimes_{\bar T}R$, $i=1,2$, for $P_i\otimes_{\bar T}R$ from Proposition [Proposition 19](#prp:ext2 1){reference-type="ref" reference="prp:ext2 1"}. Using Theorem [Theorem 11](#th:2){reference-type="ref" reference="th:2"}, the laws of these two probabilistically weak solutions are distinct. Indeed, before $\bar T$ we see that the rates with which the two solutions converge to zero are different, which implies that the laws of the two solutions are different. In fact, we have $$P_1\otimes_{\bar T}R\left(x(t)=\frac{e^{y(t)-t/2}}{\sqrt{\theta(t)}}\bar{U}\bigg(\frac{\cdot}{\sqrt{\theta(t)}}\bigg),\ t\leqslant\bar T\right)=1,$$ $$P_2\otimes_{\bar T}R\left(x(t)=\frac{e^{y(t)-t/2}}{\sqrt{\theta(t)}}\bar{U}\bigg(\frac{\cdot}{\sqrt{\theta(t)}}\bigg),\ t\leqslant\bar T\right)=0.$$ As a consequence, joint non-uniqueness in law, i.e. non-uniqueness of probabilistically weak solutions, holds for the Navier--Stokes system [\[eql\]](#eql){reference-type="eqref" reference="eql"}. ◻
**Remark 21**. It is also possible to construct a force $f$ such that the Leray--Hopf solutions to the forced Navier--Stokes system driven by additive noise are not unique. More precisely, we may decompose the equations into the linear equations and the nonlinear equations as in [@HZZ19]. We can then use similar arguments as in Lemma [Lemma 9](#lem:per){reference-type="ref" reference="lem:per"} to construct non-unique solutions for the nonlinear equations (see [@BJLZ23]).
# Deterministic force in a dense set {#sec:det}
The aim of this section is to prove that for any given $f$ in a suitable function space the following deterministic forced Navier--Stokes equations on $[0,1]\times\mathbb R^{3}$ $$\begin{aligned}
\label{eq1}\partial_t u&=\Delta u+\bar f+f- u\cdot \nabla u+\nabla p,\quad \mathord{{\rm div}}u=0,
\\u(0)&=0,\nonumber\end{aligned}$$ admits two Leray--Hopf solutions, where $\bar f$ is the force from Section [2](#sec:2){reference-type="ref" reference="sec:2"}. Note that it is enough to construct these solutions on some time interval $[0,T_{0}]$, $T_{0}>0$, as these can be always extended to $[0,1]$ by a Leray--Hopf solution obtained by the usual argument. As a matter of fact, the solutions obtained in [@ABC22] are even suitable Leray--Hopf solutions, that is, they additionally satisfy a local energy inequality (see [\[eq:locen\]](#eq:locen){reference-type="eqref" reference="eq:locen"} below). Based on the discussion in [@LR02 Chapter 30] suitable Leray--Hopf solutions exist for every initial condition in $L^{2}$. Accordingly, the solutions à la [@ABC22] can be extended to $[0,1]$ by suitable Leray--Hopf solutions and the resulting solutions remain suitable Leray--Hopf.
Next, we recall the notion of Leray--Hopf solution in this setting.
**Definition 22**. *Let $u_0\in L^2$ be a divergence-free vector field, and $f+\bar f\in L^1(0,1;L^2)$. A Leray--Hopf solution to the Navier--Stokes system [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} on $[0,1]\times {\mathbb R}^3$ with initial data $u_0$ and force $f+\bar f$ is a divergence-free vector field $u\in L^\infty(0,1;L^2)\cap L^2(0,1;H^1)\cap C_w([0,1];L^2)$ such that $u(0)=u_0$ and for all $t\in [0,1]$ and all divergence-free $\psi\in C^\infty_c({\mathbb R}^3)$ $$\begin{aligned}
\langle u(t),\psi\rangle-\langle u(0),\psi\rangle=\int_0^t\langle u,\Delta\psi\rangle{\mathord{{\rm d}}}r-\int_0^t \langle u\cdot \nabla u,\psi\rangle{\mathord{{\rm d}}}r+\int_0^t \langle f+\bar f,\psi\rangle{\mathord{{\rm d}}}r,\end{aligned}$$ and the following energy inequality holds true for all $t\in(0,1]$ $$\begin{aligned}
\label{energy}\|u(t)\|_{L^2}^2+2\int_0^t\|\nabla u\|_{L^2}^2{\mathord{{\rm d}}}s\leqslant\|u(0)\|_{L^2}^2+2\int_0^t\langle f+\bar f,u\rangle{\mathord{{\rm d}}}s.\end{aligned}$$ We say that the solution is suitable provided it satisfies the local energy inequality $$\label{eq:locen}
(\partial_{t}-\Delta)\frac12|u|^{2}+|\nabla u|^{2}+\mathord{{\rm div}}\left[\left(\frac12|u|^{2}+p\right)u\right]\leqslant(f+\bar f)\cdot u$$ in the sense of distributions on $(0,1)\times\mathbb R^{3}$, where $p\in L^{1}((0,1)\times\mathbb R^{3})$ is the associated pressure.*
As the first step, we solve the following equation $$\begin{aligned}
\label{eq4}\partial_t u&=\Delta u+f-\mathbb{P}[\bar u\cdot \nabla u+u\cdot \nabla \bar u+u\cdot \nabla u],
\\u(0)&=0,\nonumber
\end{aligned}$$ where $\bar u(t,x)=\frac1{\sqrt{t}}\bar U(\xi)$ with the notation of Section [2](#sec:2){reference-type="ref" reference="sec:2"}. For an integer $N>5/2$ and $\varepsilon>0$, define a Banach space $$Y:=\{f\in C((0,1),H^N), \|f\|_{Y}<\infty\},$$ with $$\|f\|_{Y}:=\sup_{t\in[0,1]} \sum_{k=0}^Nt^{\frac34-a-\varepsilon+\frac{k}2}\|\nabla^k f(t)\|_{L^2}.$$ It is easy to see that $C_c^\infty((0,1)\times {\mathbb R}^3)\subset Y\subset L^1(0,1;L^2)$.
**Proposition 23**. *Assume that $N>5/2$ is an integer and consider $f\in Y$. Then there exist $T\in\mathbb{R}$ and $u\in C([0,e^T];L^2)\cap L^2(0,e^T;H^1)$ a solution to [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"} satisfying for any $p\in[2,\infty)$ and $k\leqslant N$, $k\in{\mathbb N}$, $t\in [0,e^T]$ $$\begin{aligned}
t^{k/2}\|\nabla^k u(t)\|_{L^p}\lesssim t^{\frac12(\frac3p-1)}.\end{aligned}$$*
*Proof.* If we consider [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"} directly and try to use fixed point argument we will see a problem coming from $\bar u$. Instead, we perform the following transform as in [\[tr:1\]](#tr:1){reference-type="eqref" reference="tr:1"} $$U(\tau,\xi)=U \left(\log t,\frac{x}{\sqrt{t}}\right)=\sqrt{t}u(t,x), \quad f(t,x)=\frac1{t^{3/2}} F \left(\log t,\frac{x}{\sqrt{t}}\right).$$ Then it follows that $U$ satisfies the following equations $$\begin{aligned}
\label{eq6}
\begin{aligned}
\partial_\tau U&=L_{ss}U+\mathbb{P}(F-U\cdot \nabla U),\\
U(-\infty)&=0,
\end{aligned}
\end{aligned}$$ where $L_{ss}$ was defined in [\[def:Lss\]](#def:Lss){reference-type="eqref" reference="def:Lss"}. This problem can be solved by a fixed point argument. By the Duhamel formula, we have $$\begin{aligned}
U(\tau)=\int_{-\infty}^{\tau} e^{(\tau-s)L_{ss}}\mathbb{P}[F-(U\cdot \nabla U)]{\mathord{{\rm d}}}s
\end{aligned}$$ and we define the norm $$\|U\|_{X_T}:=\sup_{\tau<T}e^{-(a+\varepsilon)\tau}\|U(\tau)\|_{H^N}.$$ with some $\varepsilon>0$, $a$ given in [\[def:a\]](#def:a){reference-type="eqref" reference="def:a"} and $T\in\mathbb R$. In view of Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} we have for $\tau\in {\mathbb R}$, $0<\delta<\varepsilon$ $$\begin{aligned}
\|U(\tau)\|_{H^N}&\lesssim \int_{-\infty}^{\tau} \Big(e^{(\tau-s)(a+\delta)+s(a+\varepsilon)}\|F\|_{X_\tau}+\frac{e^{(\tau-s)(a+\delta)}e^{2s(a+\varepsilon)}}{(\tau-s)^{1/2}}\|U\|_{X_\tau}^2\Big){\mathord{{\rm d}}}s
\\&\lesssim e^{\tau(a+\varepsilon)}\|F\|_{X_\tau}+e^{\tau(2a+2\varepsilon)}\|U\|_{X_\tau}^2.
\end{aligned}$$ Then we could use fixed point argument in a small ball in $X_T$ by choosing $T$ very negative and obtain $$\|U\|_{X_T}\lesssim \|F\|_{X_T}.$$ Now, we find a suitable solution $U$ for [\[eq6\]](#eq6){reference-type="eqref" reference="eq6"} provided $\|F\|_{X_T}<\infty$. Define $$u(t,x)=\frac{1}{\sqrt{t}}U \left(\log t, \frac{x}{\sqrt{t}}\right)$$ and it is easy to see that $u$ is a solution to [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"}. Since $f\in Y$, we have $\|F\|_{X_T}<\infty$. In fact, we have $$F(\tau,\xi)=t^{3/2}f(t,x)=e^{3\tau/2} f(e^\tau,\xi e^{\tau/2}),$$ and $$\nabla^kF(\tau,\xi)=e^{3\tau/2} e^{k\tau/2}\nabla^kf(e^\tau,\xi e^{\tau/2}).$$ The last claim is obtained by change of variables. The proof is complete. ◻
Proposition [Proposition 23](#pro:1){reference-type="ref" reference="pro:1"} and [@ABC22 Theorem 1.3] imply that $u+\bar u$ is a Leray--Hopf solution to the forced Navier--Stokes equations [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"}.
As the next step, we construct another Leray--Hopf solution. First, we consider $U(\tau,\xi)=\sqrt{t}u(t,x)$ and observe that by the proof of Proposition [Proposition 23](#pro:1){reference-type="ref" reference="pro:1"} it holds that $$\begin{aligned}
\label{est}
\|U(\tau)\|_{H^N}\lesssim e^{(a+\varepsilon)\tau},\quad \tau\in(-\infty, T].\end{aligned}$$ Next, we make the following ansatz for the second Leray--Hopf solution: $$\tilde{U}=\bar U+U+U^{lin}+U^{per},$$ where $U^{lin}$ is defined in [\[def:Ulin\]](#def:Ulin){reference-type="eqref" reference="def:Ulin"}. Consequently, $U^{per}$ shall satisfy $$\begin{aligned}
\label{eqper}
&\partial_\tau U^{per}-L_{ss}U^{per}+\mathbb{P}\Big(( U+U^{lin})\cdot \nabla U^{per}+U^{per}\cdot \nabla ( U+U^{lin})\nonumber
\\&+U\cdot \nabla U^{lin} +U^{lin}\cdot \nabla U +U^{lin}\cdot \nabla U^{lin}+U^{per}\cdot \nabla U^{per}\Big)=0.\end{aligned}$$ The latter problem can be solved by a similar argument as in [@ABC22 Proposition 4.5] as follows.
**Proposition 24**. *Assume $N>5/2$ is an integer. Then there exist $T\in\mathbb{R}$, $0<\varepsilon_0<a$ and $U^{per}\in C((-\infty,T];H^N)$ a solution to [\[eqper\]](#eqper){reference-type="eqref" reference="eqper"} such that $$\|U^{per}(\tau)\|_{H^N}\leqslant e^{(a+\varepsilon_0)\tau}, \quad \tau\leqslant T.$$*
*Proof.* We apply a fixed point argument in Banach space $$X:=\{U\in C((-\infty,T];H^N): \|U\|_X<\infty\},$$ with the norm $$\|U\|_X:=\sup_{\tau<T}e^{-(a+\varepsilon_0)\tau}\|U(\tau)\|_{H^N},$$ with $\varepsilon_0>\delta$ in order to guarantee convergence of a time integral below. By the proof of [@ABC22 Proposition 4.5], we know that $$\begin{aligned}
&
\Big\|\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}}\mathbb{P}\Big(U^{lin}\cdot \nabla U^{per}+U^{per}\cdot \nabla U^{lin}+U^{lin}\cdot\nabla U^{lin}+U^{per}\cdot\nabla U^{per} \Big){\mathord{{\rm d}}}s\Big\|_{X}
\\&\lesssim e^{T(a+\varepsilon_0)}\|U^{per}\|_X^2+e^{Ta}\|U^{per}\|_X+e^{T(a-\varepsilon_0)}.\end{aligned}$$ Hence, it is sufficient to estimate the following terms $$\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}}\mathbb{P}\Big(U\cdot \nabla U^{per}+U^{per}\cdot \nabla U
+U\cdot \nabla U^{lin} +U^{lin}\cdot \nabla U \Big){\mathord{{\rm d}}}s.$$ Using [\[est\]](#est){reference-type="eqref" reference="est"} and Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} it holds for $0<\delta<\varepsilon_0$ $$\begin{aligned}
&
\Big\|\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}}\mathbb{P}(U\cdot \nabla U^{per}+U^{per}\cdot \nabla U ){\mathord{{\rm d}}}s\Big\|_{H^N}
\\&\lesssim \int_{-\infty}^{\tau}\frac{ e^{(\tau-s)(a+\delta)+s(2a+\varepsilon_0+\varepsilon)}}{(\tau-s)^{{1/2}}}\|U^{per}\|_X{\mathord{{\rm d}}}s
\lesssim e^{\tau(2a+\varepsilon_0+\varepsilon)}\|U^{per}\|_X.\end{aligned}$$ Using [\[r:Ulin\]](#r:Ulin){reference-type="eqref" reference="r:Ulin"}, Lemma [Lemma 5](#lem:Lss){reference-type="ref" reference="lem:Lss"} and [\[est\]](#est){reference-type="eqref" reference="est"} we derive $$\begin{aligned}
&
\Big\|\int_{-\infty}^{\tau}e^{(\tau-s)L_{ss}}\mathbb{P}(U\cdot \nabla U^{lin} +U^{lin}\cdot \nabla U ){\mathord{{\rm d}}}s\Big\|_{H^N}
\\&\lesssim \int_{-\infty}^{\tau}\frac{e^{(\tau-s)(a+\delta)}e^{s(2a+\varepsilon)}}{(\tau-s)^{{1/2}}}{\mathord{{\rm d}}}s
\lesssim e^{\tau(2a+\varepsilon)}.\end{aligned}$$ Combining the above estimates, we can choose $T$ very negative and apply a fixed point argument in a small ball in $X$ to construct a solution to [\[eqper\]](#eqper){reference-type="eqref" reference="eqper"} (see [@ABC22 Proposition 4.5] for more details). ◻
By [\[r:Ulin\]](#r:Ulin){reference-type="eqref" reference="r:Ulin"} and Proposition [Proposition 24](#pro:2){reference-type="ref" reference="pro:2"} we find that $U^{lin}+U^{per}\neq0$ as the convergence rate to $-\infty$ is different. Then $\tilde u(t,x)=\frac1{\sqrt{t}}\tilde U(\tau,\xi)$ gives the second Leray solution to [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} as the regularity of $\tilde u$ is the same as in [@ABC22].
As noted above, the solutions $\tilde u$ and $u+\bar u$ can be extended to Leray--Hopf solutions on $[0,1]$. Hence, we deduce the following results.
**Theorem 25**. *Let $\bar f$ be the force obtained in [@ABC22 Theorem 1.3] and let $f\in Y$ be arbitrary. There exist two distinct suitable Leray--Hopf solutions $\tilde u$ and $u+\bar u$ to the Navier--Stokes equations on $[0,1]\times{\mathbb R}^3$ with body force $\bar f+f$ and initial data $u_0\equiv0$.*
**Corollary 26**. *For any $\varepsilon>0$ there exist $h$ with $\|h\|_{L^1((0,1);L^2)}\leqslant\varepsilon$, and two distinct suitable Leray--Hopf solutions $\tilde u_1$ and $\tilde u_2$ to the Navier--Stokes equations on $[0,1]\times{\mathbb R}^3$ with body force $h$ and initial data $u_0\equiv0$.*
*Proof.* For any $\varepsilon>0$ there is $f_\varepsilon\in Y$ obtained by convolution with a mollifier and a suitable cut-off near $t=0$ such that $$\begin{aligned}
\|f_\varepsilon+\bar f\|_{L^1(0,1;L^2)}\leqslant\varepsilon.\end{aligned}$$ Choosing $h=f_\varepsilon+\bar f$, the result follows from Theorem [Theorem 25](#th:1){reference-type="ref" reference="th:1"}. ◻
**Corollary 27**. *For any $f\in L^1(0,1;L^2)$ and $\varepsilon>0$ there exist $g\in L^1(0,1;L^2)$ with $$\|g-f\|_{L^1(0,1;L^2)}\leqslant\varepsilon,$$ and two distinct suitable Leray--Hopf solutions $\tilde u_1$ and $\tilde u_2$ to the Navier--Stokes equations on $[0,1]\times{\mathbb R}^3$ with body force $g$ and initial data $u_0\equiv0$.*
*Proof.* For any $\varepsilon>0$ we could find $g_\varepsilon\in Y$ by convolution with a mollifier and a suitable cut-off near $t=0$ such that $$\begin{aligned}
\|g_\varepsilon- f\|_{L^1(0,1;L^2)}\leqslant\varepsilon/2.\end{aligned}$$ Choosing $g=f_{\varepsilon/2}+\bar f+g_\varepsilon$, the result is a consequence of Theorem [Theorem 25](#th:1){reference-type="ref" reference="th:1"} and the fact that $f_{\varepsilon/2}+g_\varepsilon\in Y$, where $f_{\varepsilon/2}$ was defined in the proof of Corollary [Corollary 26](#c:1){reference-type="ref" reference="c:1"}. ◻
# General initial conditions {#s:6}
In this final section, we show a simple extension of the main result of [@ABC22] to general initial conditions in $L^{2}$ based on an approximate controllability argument from [@Fla97]. More precisely, we prove the following.
**Theorem 28**. *Let $u_{0}\in L^{2}_{\sigma}$. There exists a body force $f$ so that the deterministic forced Navier--Stokes equations on $[0,1]\times\mathbb R^{3}$ admit two distinct Leray--Hopf solutions with initial condition $u_{0}$.*
*Proof.* The idea is as follows. First, we show that there is a force $\tilde f$, a time $2T^{*}>0$ and a Leray--Hopf solution $\tilde u$ to the deterministic forced Navier--Stokes equations with this force on the time interval $[0,2T^{*}]$ such that $\tilde u(2T^{*})=0$. Second, taking the final value as the initial condition on the next time interval, we employ the technique of [@ABC22] on $[2T^{*},2T^{*}+T]$ to obtain two different Leray--Hopf solutions on $[0,2T^{*}+T]$ starting from the initial condition $u_{0}$.
In the first step, we take an arbitrary Leray--Hopf solution $\tilde u$ to the Navier--Stokes equations with zero force and starting from $u_{0}$. We observe that there is a time for which the solution belongs to $H^{2}$. Indeed, there is a time $T_{1}>0$ such that $\tilde u(T_{1})\in H^{1}$, so that on some interval $[T_{1},T^{*}]$ the solution is the unique strong solution by weak-strong uniqueness, in particular, $\tilde u \in C ([T_{1},T^{*}]
; H^1) \cap L^2 (T_{1},T^{*} ; H^2)$. In other words, making $T^{*}$ smaller if necessary, we may assume without loss of generality that $\tilde{u} (T^{*}) \in H^2$.
Next, we intend to find a force $\tilde{f}$ so that $\tilde{u}$ extends to a solution of the forced Navier--Stokes equations on $[0, 2T^{\ast}]$, so that $\tilde{u} (2 T^{\ast}) = 0$. We simply define by linear interpolation $$\tilde{u} (t) : = \frac{2 T^{\ast} - t}{T^{\ast}} \tilde{u} (T^{\ast}),
\qquad t \in [T^{*}, 2 T^{\ast}] .$$ Clearly, $\tilde{u} \in C ([T^{\ast}, 2 T^{\ast}] ; H^2)$ and since $$\| \mathbb{P} (\tilde{u} \cdot\nabla \tilde{u}) \|_{L^2} \lesssim \|
\tilde{u} \|_{L^{\infty}} \| \tilde{u} \|_{H^1} \lesssim \| \tilde{u}
\|_{H^2} \| \tilde{u} \|_{H^1}$$ we deduce $$\tilde{f} :=\partial_t \tilde{u} - \Delta \tilde{u} +\mathbb{P}
(\tilde{u} \cdot\nabla \tilde{u}) \in C ([T^{\ast}, 2 T^{\ast}] ; L^2) .$$ Letting $\tilde{f} = 0$ on $[0, T^{\ast}]$, we therefore found a solution $\tilde{u}$ to the forced Navier--Stokes equations with force $\tilde{f}$ on $[0, 2 T^{\ast}]$ which satisfies the energy inequality on $[0,T_{1}]$, belongs to $C ([T_{1}, 2 T^{\ast}] ; H^1) \cap L^2 (T_{1}, 2
T^{\ast} ; H^2)$ and has the terminal value $\tilde{u} (2 T^{\ast}) = 0$. The regularity of the strong solution in particular implies that the energy inequality holds true on the full time interval $[0,2T^{*}]$.
Finally, the construction from [@ABC22] permits to find a force and to extend the solution in a non-unique manner to some interval $[2
T^{\ast}, 2 T^{\ast} + T]$. Further extension to $[0,1]$ by usual Leray--Hopf solutions is immediate. The proof is complete. ◻
D. Albritton, E. Brué, M. Colombo, Non-uniqueness of Leray solutions of the forced Navier-Stokes equations, *Annals of Mathematics*, 196, 415-455, 2022.
J.-M. Bony, Calcul symbolique et propagation des singularités pour les équations aux dérivées partielles non linéaires, *Ann. Sci. École Norm. Sup.* (4) 14, no. 2, 209-246, 1981.
E. Brué, R. Jin, Y. Li, D. Zhang, Non-uniqueness in law of Leray solutions to 3D forced stochastic Navier--Stokes equations, preprint, 2023.
J. Burczak, S. Modena, L. Székelyhidi, Non uniqueness of power-law flows, *Communications in Mathematical Physics* 388, 199-243, 2021.
T. Buckmaster, V. Vicol, Nonuniqueness of weak solutions to the Navier--Stokes equation, *Annals of Mathematics*, 189(1):101--144, 2019.
M. Colombo, C. De Lellis, L. De Rosa, Ill-posedness of Leray solutions for the hypodissipative Navier--Stokes equations, *Communications in Mathematical Physics* 362, 659-688, 2018.
A. Debussche, Ergodicity results for the stochastic Navier-Stokes equations: an introduction, In *Topics in mathematical fluid mechanics*, volume 2073 of *Lecture Notes in Math.*, pages 23--108. Springer, Heidelberg, 2013.
F. Flandoli, Irreducibility of the 3-D stochastic Navier--Stokes equation. *Journal of functional analysis*, 149(1), 160-177, 1997.
M. Gubinelli, P. Imkeller, N. Perkowski, Paracontrolled distributions and singular PDEs, *Forum Math.* Pi 3 no. 6, 2015.
M. Hofmanová, R. Zhu, X. Zhu, Global existence and non-uniqueness for 3D Navier--Stokes equations with space-time white noise, , 247, 46, 2023.
M. Hofmanová, R. Zhu, X. Zhu, Global-in-time probabilistically strong and Markov solutions to stochastic 3D Navier--Stokes equations: existence and non-uniqueness, , Vol. 51, No. 2, 524--579, 2023.
M. Hofmanová, R. Zhu, X. Zhu, Non-unique ergodicity for deterministic and stochastic 3D Navier--Stokes and Euler equations, *arXiv:2208.08290*, 2022.
M. Hofmanová, R. Zhu, X. Zhu, Non-uniqueness in law of stochastic 3D Navier--Stokes equations, , 2019.
P.G. Lemarié-Rieusset, Recent developments in the Navier-Stokes problem, Chapman and Hall/CRC, 2022.
H. Lü, X. Zhu, Global-in-time probabilistically strong solutions to stochastic power-law equations: existence and non-uniqueness, *Stochastic Processes and their Applications*, 164, 62-98, 2023.
R. Mikulevicius, B.L. Rozovskii, Global $L^2$-solution of Stochastic Navier-Stokes Equations, *Ann. of Prob.*, 2005, Vol.33, No.1, 137-176.
M. Röckner, R. Zhu, X. Zhu, Local existence and non-explosion of solutions for stochastic fractional partial differential equations driven by multiplicative noise, *Stochastic Processes and their Applications* 124, 1974-2002, 2014.
K. Yamazaki, Non-uniqueness in law for two-dimensional Navier-Stokes equations with diffusion weaker than a full Laplacian, Vol. 54, 4, 2022.
K. Yamazaki, Remarks on the non-uniqueness in law of the Navier-Stokes equations up to the J.-L. Lions' exponent, Volume 147, Pages 226--269, May 2022.
[^1]: M.H. is grateful for funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 949981) and the financial supports by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 317210226--SFB 1283. R.Z. and X.Z. are grateful to the financial supports by National Key R&D Program of China (No. 2022YFA1006300). R.Z. is grateful to the financial supports of the NSFC (No. 12271030). X.Z. is grateful to the financial supports in part by National Key R&D Program of China (No. 2020YFA0712700) and the NSFC (No. 12090014, 12288201) and the support by key Lab of Random Complex Structures and Data Science, Youth Innovation Promotion Association (2020003), Chinese Academy of Science. Rongchan Zhu is the corresponding author.
[^2]: Leray--Hopf solutions are weak solutions satisfying an energy inequality.
| arxiv_math | {
"id": "2309.03668",
"title": "Non-uniqueness of Leray-Hopf solutions for stochastic forced\n Navier-Stokes equations",
"authors": "Martina Hofmanov\\'a, Rongchan Zhu, Xiangchan Zhu",
"categories": "math.PR math.AP",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
Asymptotic behavior of the point process of high and medium values of a Gaussian stationary process with discrete time is considered. An approximation by a Poisson cluster point process is given for the point process.
author:
- "Vladimir I. Piterbarg[^1]"
title: Poisson limit theorem for the number of excursions above high and medium levels by Gaussian stationary sequences
---
Consider a Gaussian stationary sequence $X(k),$ $k\in\mathbb{Z},$ with $EX(k)=0,$ $\operatorname*{Var}X(k)\equiv1,$ $\operatorname*{Cov}(X(0),X(k))=EX(k)X(0)=r(k).$ Let $\mathcal{B}$ be the algebra of bounded Borel subsets of $\mathbb{R}$. Consider on $\mathcal{B}$ a family of Bernoulli processes $$\mathfrak{B}_{u,n}(B):=\sum_{k\in nB}\mathbf{I}\left\{ X(k)>u\right\}
,\ \ B\in\mathcal{B},\ u>0,\ n\in\mathbb{N}. \label{BPP}%$$ We study the limit behavior of $\mathfrak{B}_{u,n}(\cdot)\ $as $u,n\rightarrow
\infty.$ Theorem [Theorem 1](#Mittal Ylvisaker){reference-type="ref" reference="Mittal Ylvisaker"} below says that if $r(k)$ tends to zero sufficiently fast, $\mathfrak{B}_{u,n}(\cdot)$ tends as $u,n\rightarrow
\infty$ weakly to a Poisson point process $\mathfrak{P}_{\lambda}(B),$ $B\in\mathcal{B}$, with intensity $\lambda>0$ provided the *natural normalization* is fulfilled, that is, $n,u\rightarrow\infty$ such that $$\lim_{u,n\rightarrow\infty}np(u)=\lambda,\ p(u)=P(X(1)>u). \label{norm0}%$$ In the case of independent $X(k)$'s this means that the (binomial) distribution of $\mathfrak{B}_{u,n}([0,1])$ tends to a Poisson distribution with parameter $\lambda$ (Poisson Limit Theorem). Since for the standard Gaussian distribution function $\Phi$, $$\Psi(u)\geq p(u)=1-\Phi(u)\geq(1-u^{-2})\Psi(u),\ \ u>0,\ \text{with }\Psi(u)=\frac{1}{\sqrt{2\pi}u}e^{-u^{2}/2}, \label{p(u)}%$$ it is analytically more convenient to use an equivalent normalization, with $\Psi(u)$ instead of $p(u).$ From relations ([\[norm0\]](#norm0){reference-type="ref" reference="norm0"},[\[p(u)\]](#p(u)){reference-type="ref" reference="p(u)"}) one has the asymptotic solution $$u=u_{n}=\sqrt{2\log n}-\frac{\frac{1}{2}\log\log n+\log(2\lambda\sqrt{\pi})}{\sqrt{2\log n}}+O(1/\log n),\ n\rightarrow\infty. \label{norm}%$$
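As a quick numerical illustration of the normalization ([\[norm0\]](#norm0){reference-type="ref" reference="norm0"})--([\[norm\]](#norm){reference-type="ref" reference="norm"}), the following minimal Python sketch (the value $\lambda=2$ is an illustrative choice) computes $u_{n}$ from the expansion above and prints $np(u_{n})$; the convergence to $\lambda$ is logarithmically slow.

```python
# Minimal numerical sketch of the natural normalization (the choice lambda = 2 is illustrative).
import numpy as np
from scipy.stats import norm

lam = 2.0
for n in [10**4, 10**8, 10**16, 10**32]:
    L = np.log(float(n))
    u_n = np.sqrt(2 * L) - (0.5 * np.log(L) + np.log(2 * lam * np.sqrt(np.pi))) / np.sqrt(2 * L)
    print(f"log10(n) = {np.log10(float(n)):.0f}: u_n = {u_n:.3f}, n*p(u_n) = {n * norm.sf(u_n):.3f}")
```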
From the results of [@mittal; @ylv] the following theorem follows.
**Theorem 1**. *Let $$r(k)\log k\rightarrow0\ \text{as }k\rightarrow\infty, \label{logCond}%$$ and ([\[norm0\]](#norm0){reference-type="ref" reference="norm0"}) be fulfilled. Then $$\mathfrak{B}_{u,n}(B)\Rightarrow\mathfrak{P}_{\lambda}(B),\ B\in\mathcal{B},
\label{poisson}%$$ weakly as $u\rightarrow\infty.$*
Mittal and Ylvisaker in [@mittal; @ylv] proved a limit theorem for the maximum of a Gaussian sequence on $[0,n]$ as $n\rightarrow\infty.$ Theorem [Theorem 1](#Mittal Ylvisaker){reference-type="ref" reference="Mittal Ylvisaker"} easily follows from their result; see [@book] or [@lectures] for details, definitions, etc. Necessary and sufficient conditions on $r(k)$ for ([\[poisson\]](#poisson){reference-type="ref" reference="poisson"}) are also given in these books.
The main question we consider here is how to approximate the point process $\mathfrak{B}_{u,n}(B)$ by a family of Poisson processes $\mathfrak{P}_{np}(B)$ when the level $u$ tends to infinity more slowly than in ([\[norm0\]](#norm0){reference-type="ref" reference="norm0"}, [\[norm\]](#norm){reference-type="ref" reference="norm"}), that is, if $np(u)\rightarrow\infty$ as $u,n\rightarrow\infty$, for any $B$ of positive measure (length). To this end, using one of Prokhorov's celebrated theorems, [@prokhorov], on the distance in variation between Bernoulli and Poisson distributions, we consider the limit behavior of the distributions of the random variables $\mathfrak{B}_{u,n}(B)$ for such behaviors of $u$, that is, for various medium rates at which $u$ tends to infinity. First we consider independent $X(k)$'s, and then, using the comparison technique for Gaussian distributions, pass to dependent ones.
# A sequence of independent Gaussian variables
Assume first that $r(k)=0$ for all $k>0.$ We repeat, in the above notation and under the above conditions, a result of Yu. V. Prokhorov on the approximation of binomial distributions. In [@prokhorov] the quality of approximation of the binomial distribution by both the Poisson and the normal distribution is considered. Here we are interested only in the Poisson approximation, therefore we formulate only the corresponding result from [@prokhorov].
Denote by $|B|$ the measure (length) of $B$, and by $$\ \rho_{u,n}(B)=\sum_{k=0}^{n|B|}|P(\mathfrak{B}_{u,n}(B)=k)-P(\mathfrak{P}%
_{np}(B)=k)|, \label{Pr1}%$$ $B\in\mathcal{B}$, the distance in variation between the distributions.
**Theorem 2**. *(Theorem 2, [@prokhorov]). Assume that $r(k)=0$ for all $k>0.$ Then for all $u\geq0$ and $B\in\mathcal{B}$, $|B|>0,$ $$\rho_{u,n}(B)=\lambda_{1}p(u)+p(u)O\left( \min\left(1,(np(u))^{-1/2}\right)\right)
,\ n,u\rightarrow\infty,\ \label{th1}%$$ with $\lambda_{1}=\sqrt{2}/\sqrt{\pi e}=0.483...\ .$*
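The following minimal Python sketch (the pairs $(n,p)$ are illustrative) evaluates the distance ([\[Pr1\]](#Pr1){reference-type="ref" reference="Pr1"}) for $B=[0,1]$ directly and compares it with the leading term $\lambda_{1}p(u)$ of ([\[th1\]](#th1){reference-type="ref" reference="th1"}).

```python
# Direct evaluation of rho_{u,n}([0,1]) for the binomial/Poisson pair versus the
# leading term lambda_1 * p of Theorem 2; the pairs (n, p) below are illustrative.
import numpy as np
from scipy.stats import binom, poisson

lam1 = np.sqrt(2.0 / (np.pi * np.e))          # lambda_1 = 0.483...
for n, p in [(10**3, 1e-1), (10**4, 1e-2), (10**5, 1e-3)]:
    k = np.arange(n + 1)
    rho = np.abs(binom.pmf(k, n, p) - poisson.pmf(k, n * p)).sum()
    print(f"n = {n:>6}, p = {p:.0e}: rho = {rho:.3e}, lambda_1*p = {lam1 * p:.3e}")
```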
From this theorem an estimate of the distance in variation between the point processes $\mathfrak{B}_{u,n}(\cdot)$ and $\mathfrak{P}_{np}(\cdot)$ immediately follows.
**Corollary 1**. *If $r(k)=0$ for all $k>0,$ then $$\sup_{B\in\mathcal{B}}|\mathfrak{B}_{u,n}(B)-\mathfrak{P}_{np}(B)|\leq Cp(u).
\label{invar-0}%$$*
Indeed, $O(\cdot)$ in ([\[th1\]](#th1){reference-type="ref" reference="th1"}) is equal to $O(1)$ and does not depend on $|B|.$
We are interested here in the approximation of $\mathfrak{B}_{u,n}(B)$ for levels $u$ lower than the standard one ([\[norm\]](#norm){reference-type="ref" reference="norm"}). Namely, starting from the above Prokhorov theorem, we consider levels $u$ tending to infinity with $n$ but such that $p(u)n\rightarrow\infty$ as $n\rightarrow\infty.$
Consider an example.
**Example 1**. ***Power scale.** Let $p(u)n^{a}\rightarrow
c\in(0,\infty).$ For $a=1$ we have the Poisson Limit Theorem, even for dependent $X(k)$'s, that is, Theorem [Theorem 1](#Mittal Ylvisaker){reference-type="ref" reference="Mittal Ylvisaker"}. For $a=0,$ we have the Bernoulli Theorem. A generalization (normal approximation) for dependent $X(k)$'s can be found in [@book], [@lectures]. Taking in ([\[norm\]](#norm){reference-type="ref" reference="norm"}) $n^{a},$ $a>0,$ instead of $n,$ we get that $$u=u(a)=\sqrt{2a\log n}-\frac{\frac{1}{2}\log\log n+\log(2c\sqrt{a\pi})}%
{\sqrt{2a\log n}}+O(1/\log n),\ n\rightarrow\infty. \label{example}%$$*
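A minimal numerical sketch for the power scale (the values $a=1/2$, $c=1$ are illustrative): the level $u(a)$ obtained by solving $p(u)n^{a}=c$ numerically is compared with the expansion ([\[example\]](#example){reference-type="ref" reference="example"}); the agreement improves only logarithmically in $n$.

```python
# Power scale of Example 1 with illustrative a = 0.5, c = 1: exact medium level
# (root of p(u)*n**a = c) versus the asymptotic expansion (example).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

a, c = 0.5, 1.0
for n in [10**4, 10**6, 10**8]:
    u_exact = brentq(lambda u: norm.sf(u) * n**a - c, 0.0, 20.0)
    L = np.log(n)
    u_asym = np.sqrt(2 * a * L) - (0.5 * np.log(L) + np.log(2 * c * np.sqrt(a * np.pi))) / np.sqrt(2 * a * L)
    print(f"n = {n:.0e}: exact u = {u_exact:.4f}, expansion u = {u_asym:.4f}, n^a*p(u_asym) = {n**a * norm.sf(u_asym):.3f}")
```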
In [@prokhorov] the normal approximation (Bernoulli Theorem) for $\mathfrak{B}_{u,n}(B)$ is considered as well, and the two approximations are compared. In future publications, using other results from that paper, we will consider the normal approximation combined with the Poisson one for the number of level exceedances as well.
## Thinning. Clusters.
Consider the following scheme of Poisson approximation of the point process $\mathfrak{B}_{u,n}(\cdot)$ when $np(u)\rightarrow\infty,$ $n,u\rightarrow
\infty.$ Let an integer $l$ depend on $n,$ $l=l(n),$ in such a way that for some $\lambda\in(0,\infty),$ $$\lim_{u,n\rightarrow\infty}n(1-p(u))^{l(n)}p(u)=\lambda. \label{tau1}%$$ Introduce the thinned point process of *$l$-points of exceedances*, $$\mathfrak{B}_{u,n,l}(B):=\sum_{k\in nB}\mathbf{I}\{\max_{i=1,...,l}X(k-i)\leq u,X(k)>u\},\ B\in\mathcal{B}. \label{thinned}%$$ We call it the *cluster center process*, see [@Daley; @V-J], Section 6.3.
Taking the logarithm in ([\[tau1\]](#tau1){reference-type="ref" reference="tau1"}) and using a Taylor expansion we get that $$l(n)=\frac{\log np(u)-\log(\lambda+o(1))}{p(u)+o(1)}=\frac{\log np(u)}{p(u)}(1+o(1)),\ \ n,u\rightarrow\infty. \label{tau3}%$$ Denote $n_{1}:=l(n)/\log np,$ so that $p(u)n_{1}\rightarrow1$ as $n,u\rightarrow\infty.$ Denote also by $$...y_{k}<y_{k+1}<...,$$ the points of $\mathfrak{B}_{u,n,l}(B)$ and associate with a point $y_{k}$ the *component process*, that is, the *cluster*, $$\mathfrak{B}_{u,n_{1}}(B|y_{k})=\sum_{j\in n_{1}B}\mathbf{I}%
\{X(j)>u\}\mathbf{I}\{[y_{k},y_{k+1})\},\ B\in\mathcal{B},\ k\in\mathbb{Z}.$$ That is we consider independent groups of points of $\mathfrak{B}_{u,n}(B)$ located between $l(n)$-points, say, $l(n)$*-packs*, in another normalization. Thus we write for any $B\in\mathcal{B},$ $$\mathfrak{B}_{u,n}(B)=\int_{\mathbb{R}}\mathfrak{B}_{u,n_{1}}(B|y)\mathfrak{B}%
_{u,n,l}(dy)=\sum_{y_{k}\in\mathfrak{B}_{u,n,l}(B)}\mathfrak{B}_{u,n_{1}%
}(B|y_{k})<\infty. \label{cluster PP}%$$
Two normalizations are used above. The first one, $B\rightarrow nB,$ under its natural ("correct") choice ([\[norm0\]](#norm0){reference-type="ref" reference="norm0"}), gives convergence to the Poisson process. But sometimes such a choice of normalization does not capture the entire time interval of interest, and therefore we need $np(u)\rightarrow\infty.$ For this reason we first divide all points of interest into clusters by the $l$-points, and consider the points within the clusters "under a magnifying glass", using the corresponding natural normalization $B\rightarrow n_{1}B$. Then we shall see that the number of clusters is approximately Poisson and that the number of points in a cluster is also approximately Poisson, but in a different normalization.
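A minimal simulation sketch of the thinning ([\[thinned\]](#thinned){reference-type="ref" reference="thinned"}) for independent standard Gaussian $X(k)$ (the values of $n$, $u$ and $\lambda$ below are illustrative): while $np(u)$ is large, the empirical mean number of $l$-points stays close to $np(u)(1-p(u))^{l}\approx\lambda$.

```python
# Simulation sketch of the cluster center process (thinned) for independent X(k);
# n, u and lam below are illustrative, with n*p(u) large ("medium" level).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, u, lam = 100_000, 2.0, 5.0
p = norm.sf(u)                                             # p(u) = P(X(1) > u)
l = int(np.ceil(np.log(lam / (n * p)) / np.log(1.0 - p)))  # so that n*p(u)*(1-p(u))**l is close to lam

counts = []
for _ in range(500):
    x = rng.standard_normal(n + l) > u                     # exceedance indicators, with l extra points of history
    S = np.concatenate(([0], np.cumsum(x)))
    k = np.arange(l, n + l)
    # an l-point: X(k) > u and no exceedance among the l preceding indices
    counts.append(np.count_nonzero(x[k] & (S[k] - S[k - l] == 0)))
print(f"l = {l}, n*p = {n * p:.0f}, n*p*(1-p)^l = {n * p * (1 - p)**l:.2f}, empirical mean = {np.mean(counts):.2f}")
```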
Another situation where such a clusterization is natural is the case of strong dependence, that is, when $r(k)$ is close to one for the first several $k=1,2,...$. This is especially evident for Gaussian stationary processes in continuous time $X(t)$ with non-smooth trajectories and the corresponding sequences $X(k\Delta)$ with $\Delta\rightarrow0$ as $u\rightarrow\infty$, when the points $t$ of exceedance of a high level appear in the form of short groups (packs), the number of which is asymptotically Poisson. See details in [@lectures], Lecture 17, and [@book], Section 4. Let us note that sometimes a normal approximation of the clusters can be more natural.
## Convergence to cluster Poisson process.
Now we introduce the cluster Poisson process. The cluster center process is a Poisson point process $\mathfrak{P}_{\lambda}(B),$ $B\in\mathcal{B}$, with intensity $\lambda,$ see ([\[tau1\]](#tau1){reference-type="ref" reference="tau1"}), so that by Theorem [Theorem 2](#Th2){reference-type="ref" reference="Th2"}, for any $B\in\mathcal{B}$, taking $(1-p(u))^{l(n)}p(u)$ instead of $p(u),$ for some constant $C,$ similarly to Corollary [Corollary 1](#invar0){reference-type="ref" reference="invar0"}, $$\begin{aligned}
\sup_{B\in\mathcal{B}}|\mathfrak{B}_{u,n,l}(B)-\mathfrak{P}_{np(u,l)}(B)| &
\leq Cp(u,l)\ \label{invar-1}\\
\text{with }p(u,l) & :=(1-p(u))^{l(n)}p(u).\nonumber\end{aligned}$$ Similarly, denoting by $$...z_{k}<z_{k+1}<...,$$ the points of $\mathfrak{P}_{np(u,l)}$, we associate with a point $z_{k}$ the Poisson cluster $\mathfrak{P}_{n_{1}p(u)}(B|z_{k}),$ $k\in\mathbb{Z}$, where the clusters are independent Poisson point processes with equal intensities $n_{1}p(u).$ Write, for any $B\in\mathcal{B},$ $$\mathfrak{P}_{np}(B)=\int_{\mathbb{R}}\mathfrak{P}_{n_{1}p(u)}%
(B|z)\mathfrak{P}_{np(u,l)}(dz)=\sum_{z_{k}\in\mathfrak{P}_{np(u,l)}%
(B)}\mathfrak{P}_{n_{1}p(u)}(B|z_{k})<\infty, \label{cluster PPP}%$$ the corresponding cluster Poisson process.
Since all the $X(k)$ are independent, the proof of the following theorem follows directly from the Poisson Limit Theorem and Kallenberg's theorem, [@kallenberg].
**Theorem 3**. *Let $r(k)=0$ for all $k>0.$ The cluster point process $\mathfrak{B}_{u,n}(B),$ $B\in\mathcal{B}$, ([\[cluster PP\]](#cluster PP){reference-type="ref" reference="cluster PP"}) converges weakly as $n,u\rightarrow\infty$ with $np(u)\rightarrow\infty$ to the cluster Poisson process $\mathfrak{P}_{np}(B)$, $B\in\mathcal{B}$, ([\[cluster PPP\]](#cluster PPP){reference-type="ref" reference="cluster PPP"}).*
Moreover, from Corollary ([Corollary 1](#invar0){reference-type="ref" reference="invar0"}) it follows
**Theorem 4**. *Let $r(k)=0$ for all $k>0.$ The cluster point process $\mathfrak{B}_{u,n}(B),$ $B\in\mathcal{B}$, ([\[cluster PP\]](#cluster PP){reference-type="ref" reference="cluster PP"}) converges in variation as $n,u\rightarrow\infty$ with $np(u)\rightarrow\infty$ to the cluster Poisson process $\mathfrak{P}_{np}(B)$, $B\in\mathcal{B}$, ([\[cluster PPP\]](#cluster PPP){reference-type="ref" reference="cluster PPP"}).*
Let us continue Example [Example 1](#power scale){reference-type="ref" reference="power scale"}. From ([\[tau3\]](#tau3){reference-type="ref" reference="tau3"}) it follows that $$\begin{aligned}
l(n) & =\frac{(a-1)\log(cn/(\lambda+o(1)))}{\log(1-p(u_{n}^{a}))}\\
& =\frac{1-a}{c}n^{a}\log\frac{cn}{\lambda}\left( 1+o\left( \frac{1}{\log
n}\right) \right) ,\ n\rightarrow\infty.\end{aligned}$$ Remark that $l(n)=0$ for $a=1.$ Remark as well that $\lambda$ increases to infinity as $l$ decreases to zero, which seems quite natural.
# Dependent Gaussian variables.
Now we turn to a dependent Gaussian stationary sequence $X(k)$. We intend to use the comparison technique for Gaussian distributions, see [@berman], [@lectures], [@book]. The following statement is Corollary 2.3.1 of [@lectures], which is a generalization of the Berman inequality, [@berman], [@lectures], [@book]. For any sequence of real numbers $x_{k},$ $k\in\mathbb{Z}$, denote by $\mathcal{A}_{u}$ the algebra of sets generated by the sets $\{x_{k}>u\},$ $k\in\mathbb{Z}$. Denote for shortness $\mathbf{X}=\{X(k),k=1,...,n\}$ and $\mathbf{X}_{0}=\{X_{0}(k),k=1,...,n\},$ where $X_{0}(k)$ are independent standard Gaussian variables. Let ([\[logCond\]](#logCond){reference-type="ref" reference="logCond"}) be fulfilled, but in contrast with Theorem [Theorem 1](#Mittal Ylvisaker){reference-type="ref" reference="Mittal Ylvisaker"} assume that $np\rightarrow\infty$ as $u\rightarrow\infty.$ Consider now how fast $np$ may tend to infinity so that the assertion of Theorem [Theorem 4](#var conv){reference-type="ref" reference="var conv"} still holds. Similarly to the proof of Theorem [Theorem 1](#Mittal Ylvisaker){reference-type="ref" reference="Mittal Ylvisaker"}, the main tool is the following comparison inequality.
**Proposition 1**. *For any $A\in\mathcal{A}_{u}$ and any $u,$ $$|P(\mathbf{X}\in A)-P(\mathbf{X}_{0}\in A)|\leq\frac{1}{\pi}\sum_{k=1}%
^{n}\frac{(n-k)|r(k)|}{\sqrt{1-r^{2}(k)}}\exp\left( -\frac{u^{2}}%
{1+r(k)}\right) . \label{3.*}%$$*
Now we derive bounds on $u$ under which the right-hand side of this inequality still tends to zero as $n\rightarrow\infty.$ Notice that, since we are interested in the case $np\rightarrow\infty,$ it can be seen from ([\[norm\]](#norm){reference-type="ref" reference="norm"}) and the proof of Mittal's result, [@mittal; @ylv], see also Theorem 3.7, [@lectures], that $$\limsup_{n\rightarrow\infty}\frac{u}{\sqrt{2\log n}}\leq1.$$ Denote $\rho(k):=\sup_{l\geq k}|r(l)|.$
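For orientation, this bound can also be seen directly from the standard Gaussian tail asymptotics (a sketch only, assuming, as throughout this section, that $X(1)$ is standard Gaussian): $$p(u)=P(X(1)>u)=\frac{1+o(1)}{u\sqrt{2\pi}}e^{-u^{2}/2},\qquad u\rightarrow\infty,$$ so $np(u)\rightarrow\infty$ forces $\log n-\frac{u^{2}}{2}-\log u\rightarrow\infty$, hence $u^{2}\leq2\log n$ for all sufficiently large $n$, which gives the stated bound on the $\limsup$.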
**Lemma 1**. *Assume that $$r(k)k^{1-\rho(1)}\rightarrow0\ \text{as }k\rightarrow\infty; \label{powerCond}%$$ and $$\liminf_{n\rightarrow\infty}\frac{u}{\sqrt{2\log n}}>\sqrt{1-\rho(1)}.
\label{lowerbound}%$$ Then the sum in ([\[3.\*\]](#3.*){reference-type="ref" reference="3.*"}) tends to zero as $n\rightarrow\infty.$*
**Proof.** Notice that from ([\[powerCond\]](#powerCond){reference-type="ref" reference="powerCond"}) it follows that $\rho(1)<1.$ Denote $\gamma=1-\rho(1).$ Take some $\alpha\in(0,\gamma)$ and break the sum in ([\[3.\*\]](#3.*){reference-type="ref" reference="3.*"}) into two parts, up to $[n^{\alpha}]$ and from $[n^{\alpha}]+1$ to $n.$ For the first part we have $$\begin{aligned}
& \sum_{k=1}^{[n^{\alpha}]}(n-k)|r(k)|\int_{0}^{1}\frac{1}{\sqrt{1-h^{2}%
r^{2}(k)}}\exp\left( -\frac{u^{2}}{1+hr(k)}\right) dh\\
& \leq nn^{\alpha}\rho(1)\frac{1}{\sqrt{1-\rho^{2}(1)}}\exp\left(
-\frac{u^{2}}{1+\rho(1)}\right) .\end{aligned}$$ By taking logarithms, we see that the right-hand side tends to zero as $n\rightarrow\infty$ if $$u^{2}-(1+\rho(1))(1+\alpha)\log n\ \rightarrow\infty$$ as $n\rightarrow\infty.$ Since $\alpha$ can be chosen arbitrarily small, there exists $\alpha\in(0,\gamma)$ such that the first part of the sum ([\[3.\*\]](#3.*){reference-type="ref" reference="3.*"}) tends to zero as $n\rightarrow\infty$ if $$\liminf_{n\rightarrow\infty}\frac{u}{\sqrt{2\log n}}>\sqrt{\frac{1+\rho(1)}%
{2}}.$$
For the second part we have, $$\begin{aligned}
& \sum_{k=[n^{\alpha}]+1}^{n}(n-k)|r(k)|\int_{0}^{1}\frac{1}{\sqrt
{1-h^{2}r^{2}(k)}}\exp\left( -\frac{u^{2}}{1+hr(k)}\right) dh\\
& \leq\frac{n}{\sqrt{1-\rho^{2}(1)}}\sum_{k=[n^{\alpha}]+1}^{n}%
|r(k)|\exp\left( -\frac{u^{2}}{1+|r(k)|}\right) \\
& \leq\frac{n}{\sqrt{1-\rho^{2}(1)}}\exp\left( -\frac{u^{2}}{1+\rho
([n^{\alpha}])}\right) \sum_{k=[n^{\alpha}]+1}^{n}|r(k)|.\end{aligned}$$ By condition ([\[powerCond\]](#powerCond){reference-type="ref" reference="powerCond"}), the latter sum is, for any $\varepsilon>0$ and all sufficiently large $n$, at most $\varepsilon n^{1-\gamma}.$ Using this, we get that the second part of the sum is, for the same $\varepsilon$ and $n$, at most $$\frac{\varepsilon n^{2-\gamma}}{\sqrt{1-\rho^{2}(1)}}\exp\left( -\frac{u^{2}%
}{1+\rho([n^{\alpha}])}\right) .$$ Since $\varepsilon$ is arbitrarily small, by taking logarithms we get that the second part tends to zero if $$\frac{u^{2}}{1+n^{-\alpha\gamma}}-(2-\gamma)\log n\rightarrow\infty
\ \ \text{as }n\rightarrow\infty.$$ In turn, this follows from the inequality $$\liminf_{n\rightarrow\infty}\frac{u}{\sqrt{(2-\gamma)\log n}}>1.$$ Now just remark that $$2-\gamma=1+\rho(1).$$ This completes the proof of the Lemma.
Thus we have proved the following.
**Theorem 5**. *Let the covariance function of the Gaussian sequence $X(k)$ satisfy relation ([\[powerCond\]](#powerCond){reference-type="ref" reference="powerCond"}). Let $n$ and $u$ both tend to infinity in such a way that $nP(X(1)>u)\rightarrow\infty$ and ([\[lowerbound\]](#lowerbound){reference-type="ref" reference="lowerbound"}) is fulfilled. Then the assertion of Theorem [Theorem 3](#weak conv){reference-type="ref" reference="weak conv"} holds in the same notation.*
# Further considerations and extensions.
**1. Strong mixing condition.** Remark that condition ([\[lowerbound\]](#lowerbound){reference-type="ref" reference="lowerbound"}) means that $\sup_{k\geq1}|r(k)|<1/2.$ If $\sup_{k\geq1}|r(k)|\geq1/2$, Lemma [Lemma 1](#compLemma){reference-type="ref" reference="compLemma"} does not apply. But one can apply Theorem 3.5, [@lectures], see also Theorem 2.1, [@book], from which it follows that if $$\sum_{k=1}^{\infty}k|r(k)|<\infty,$$ then for any $u$ and any normalization the point process $\eta_{u}(\cdot)$ satisfies the Rosenblatt strong mixing condition. Hence the assertion of Theorem [Theorem 5](#final){reference-type="ref" reference="final"} can be proved by this approach for any normalization of the mark processes $\zeta_{u}^{k}(\cdot).$ This is a subject of future work.
**2. Brownian motion clusters.** If some applications require the normalization of $\zeta_{u}^{k}(\cdot)$ with $pn_{1}\rightarrow\infty,$ one can prove convergence to a cluster Poisson process with independent Wiener processes with trends as clusters, or marks.
**3. Random energy model by Derrida,** [@derrida].
For independent $X(k),$ the classical Derrida random energy model is $$S_{N}(\beta):=\sum_{k=1}^{[2^{N}]}e^{\beta\sqrt{N}X(k)},\ \beta>0.$$ Standard problems here are to study the limiting behavior of $S_{N}(\beta)/N$ as $N\rightarrow\infty,$ in dependence on $\beta,$ as well as the limiting distribution of the suitably normalized $S_{N}(\beta)$ (a limit theorem). Since large values of the sequence $X(k)$ play the main role, the above results can be applied. Our approach also allows one to consider dependent $X(i)$'s. Moreover, Prokhorov's theorems may allow one to quantify the accuracy of these limit approximations and even to derive asymptotic expansions.
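As a minimal numerical illustration of the independent case (not part of the analysis; the size `N` and the values of `beta` below are arbitrary illustrative choices), one can sample $S_{N}(\beta)$ by direct summation for moderate $N$:

```python
import numpy as np

rng = np.random.default_rng(1)

def S_N(beta: float, N: int) -> float:
    """Derrida's random energy sum S_N(beta) for 2**N independent standard Gaussians X(k)."""
    X = rng.standard_normal(2**N)
    return float(np.exp(beta * np.sqrt(N) * X).sum())

N = 16  # 2**N = 65536 summands
for beta in (0.5, 1.0, 2.0):
    s = S_N(beta, N)
    print(beta, s, s / N)  # the sum and the normalization S_N(beta)/N discussed above
```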
Berman S. (1992). Sojourns and extremes of stochastic processes. CBC Press, 199.
Ben Arous G., Bogachev L., Molchanov S. (2005). Limit theorems for sums of random exponentials. Probability Theory and Related Fields, 132(4), 579--612.
Daley D., Vere-Jones D. (2003). An Introduction to the Theory of Point Processes, Volume I: Elementary Theory and Methods. Springer.
Kallenberg O. (1986) Random Measures, 4th edition. Academic Press, New York, London; Akademie-Verlag, Berlin.
Kozulyaev P.A. (1939) Asymptotic analysis of a fundamental formula of Probability Theory. --- Acad. Notes Moscow Univ., v. 15, 179--182. In Russian.
Mittal Y. and Ylvisaker D. (1975). Limit distribution for the maxima of stationary Gaussian processes. Stochastic Processes and their Applications 3, 1--18.
Piterbarg V. I. (2015). Twenty Lectures about Gaussian Processes. Atlantic Financial Press, London, New York.
Piterbarg V. I. (2012). Asymptotic Methods in Theory of Gaussian Random Processes and Fields. American Mathematical Society, Providence, Translations of Mathematical Monographs, **148**.
Prokhorov Yu. V. (1953) Asymptotic behavior of the binomial distribution. Uspekhi Mat. Nauk, 8:3(55), 135--142.
Resnick S. I. (1987). Extreme Values, Regular Variation, and Point Processes. Springer-Verlag.
[^1]: Lomonosov Moscow state university, Moscow, Russia; Federal State Institution \"Scientific-Research Institute for System Analysis of the Russian Academy of Sciences\"; International Laboratory of Stochastic Analysis and its Applications, National Research University Higher School of Economics, Russian Federation. *piter\@mech.math.msu.su*
| arxiv_math | {
"id": "2309.00925",
"title": "Poisson limit theorem for the number of excursions above high and medium\n levels by Gaussian stationary sequences",
"authors": "Vladimir I. Piterbarg",
"categories": "math.PR",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We prove a quantitative convergence result of the nonlocal Allen--Cahn equation to volume-preserving mean curvature flow. The proof uses gradient flow calibrations and the relative entropy method, which has been used in the recent literature to prove weak-strong uniqueness results for mean curvature flow and convergence of the Allen--Cahn equation. A crucial difference in this work is a new notion of gradient flow calibrations. We add a tangential component to the velocity field in order to prove the Gronwall estimate for the relative energy. This allows us to derive the optimal convergence rate without having to show the closeness of the Lagrange-multipliers.
**Keywords:** Mean curvature flow, volume-preservation, constrained gradient flows, reaction-diffusion equations, relative entropy method, calibrated geometry, gradient-flow calibrations.
**Mathematical Subject Classification**: 53E10; 35K57
address: Milan Kroemer, Tim Laux, Hausdorff Center for Mathematics, University of Bonn, Villa Maria, Endenicher Allee 62, 53115 Bonn, Germany
author:
- Milan Kroemer
- Tim Laux
bibliography:
- lit.bib
title: Quantitative convergence of the nonlocal Allen--Cahn equation to volume-preserving mean curvature flow
---
# Introduction
We consider the nonlocal Allen--Cahn equation $$\begin{aligned}
\label{eq:AC}
\partial_t u_\varepsilon= \Delta u_\varepsilon- \frac1{\varepsilon^2} W'(u_\varepsilon) + \lambda_\varepsilon\sqrt{2W(u_\varepsilon)}
\end{aligned}$$ which was first introduced by Golovaty [@Golovaty]. Here $\lambda_\varepsilon=\lambda_\varepsilon(t)$ is a Lagrange multiplier which is given explicitly by $$\begin{aligned}
\label{eq:lambdaeps}
\lambda_\varepsilon(t) \coloneqq -\frac{\int_{\mathbb{R}^d} (\Delta u_\varepsilon-\frac1{\varepsilon^2} W'(u_\varepsilon)) \sqrt{2W(u_\varepsilon)}\,\mathrm{d}x}{\int_{\mathbb{R}^d} 2W(u_\varepsilon) \,\mathrm{d}x}.
\end{aligned}$$ This is a natural choice since then the mass of $\psi_\varepsilon\coloneqq\phi \circ u_\varepsilon$, where $\phi(u) =\int_0^{u} \sqrt{2W(z)}\,\mathrm{d}z$, is preserved: $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\int (\phi \circ u_\varepsilon)(x,t)\,\mathrm{d}x = \int_{\mathbb{R}^d} \sqrt{2W(u_\varepsilon(x,t))} \partial_t u_\varepsilon(x,t) \,\mathrm{d}x =0.
\end{aligned}$$ This change of variables $\phi:u_\varepsilon\mapsto\psi_\varepsilon$ is crucial in studying the Allen--Cahn equation and was discovered by Modica--Mortola [@modicamortola] and independently by Bogomol'nyi [@bogomolnyi]. In the present paper, we derive an optimal quantitative convergence result in the sharp interface limit, see Theorem [Theorem 1](#thm:ACtoMCF){reference-type="ref" reference="thm:ACtoMCF"} below.
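To make the role of the Lagrange multiplier concrete, here is a minimal numerical sketch (not a scheme analyzed in this paper) of one explicit Euler step of [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"}, with $\lambda_\varepsilon$ evaluated by discretizing [\[eq:lambdaeps\]](#eq:lambdaeps){reference-type="eqref" reference="eq:lambdaeps"}. It assumes a periodic one-dimensional domain, a standard finite-difference Laplacian, and the double-well $W(u)=18u^2(u-1)^2$ used later for normalization; the grid size and time step are illustrative choices, and $\int\psi_\varepsilon\,\mathrm{d}x$ is conserved only up to time-discretization error.

```python
import numpy as np

def W(u):
    return 18.0 * u**2 * (u - 1.0)**2

def W_prime(u):
    return 36.0 * u * (u - 1.0) * (2.0 * u - 1.0)

def laplacian_periodic(u, dx):
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def nonlocal_ac_step(u, dx, dt, eps):
    """One explicit Euler step of the nonlocal Allen-Cahn equation, with the
    Lagrange multiplier computed from the discrete analogue of its explicit formula."""
    drive = laplacian_periodic(u, dx) - W_prime(u) / eps**2  # Delta u - W'(u)/eps^2
    g = np.sqrt(2.0 * W(u))                                  # sqrt(2 W(u))
    lam = -np.sum(drive * g) / max(np.sum(g * g), 1e-14)     # discrete Lagrange multiplier
    return u + dt * (drive + lam * g), lam

# usage sketch: a diffuse bump on the periodic domain [0, 1)
eps, N = 0.02, 512
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = 0.5 * (1.0 + np.tanh((0.25 - np.abs(x - 0.5)) / eps))
u, lam = nonlocal_ac_step(u, dx=1.0 / N, dt=1e-7, eps=eps)
print(lam)
```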
Nonlocal versions of the Allen--Cahn equation were first introduced by Rubinstein and Sternberg [@RubinsteinSternberg] as a basic model for coarsening processes which conserve the phase volume. The original model by Rubinstein and Sternberg is $$\begin{aligned}
\partial_t u_\varepsilon= \Delta u_\varepsilon- \frac1{\varepsilon^2} W'(u_\varepsilon) + \frac1\varepsilon\lambda_\varepsilon,
\end{aligned}$$ where $\lambda_\varepsilon=\lambda_\varepsilon(t)$ is the Lagrange multiplier associated to the mass constraint $\int_{\mathbb{R}^d} u_\varepsilon(x,t) \,\mathrm{d}x = \int_{\mathbb{R}^d} u_\varepsilon(x,0)\,\mathrm{d}x$ and is explicitly given by $\lambda_\varepsilon(t) = \int_{\mathbb{R}^d} \frac1\varepsilon W'(u_\varepsilon(x,t)) \,\mathrm{d}x$. Equation [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"} has several advantages over the classical Rubinstein--Sternberg model as the effect of the Lagrange multiplier is amplified close to the diffuse interface.
The nonlocal Allen--Cahn equation [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"} is the $L^2$-gradient flow of the Cahn--Hilliard energy $$\begin{aligned}
E_\varepsilon[u] = \int_{\mathbb{R}^d} \left( \frac\varepsilon 2 |\nabla u|^2 +\frac1\varepsilon W(u) \right) \,\mathrm{d}x
\end{aligned}$$ restricted to the mass-constrained "submanifold" $\left\lbrace u \colon \int_{\mathbb{R}^d} \phi \circ u\,\mathrm{d}x = m\right\rbrace \subset L^2(\mathbb{R}^d)$ and sped up by the factor $\frac1\varepsilon$. This gradient-flow structure can be read off from the optimal energy dissipation relation which holds for any classical solution of [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"}: $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}E_\varepsilon[u_\varepsilon(\cdot,t)] = -\int_{\mathbb{R}^d} \varepsilon(\partial_t u_\varepsilon(x,t))^2 \,\mathrm{d}x.
\end{aligned}$$
The investigation of the sharp-interface limit $\varepsilon\to0$ of nonlocal versions of the Allen--Cahn equation [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"}--[\[eq:lambdaeps\]](#eq:lambdaeps){reference-type="eqref" reference="eq:lambdaeps"} started with the matched asymptotic expansion by Golovaty [@Golovaty]. His formal argument suggests that the limit evolves by the nonlocal evolution equation $$\begin{aligned}
\label{eq:introVPMCF}
V=-H+\lambda \quad \text{on $\Sigma(t)$,}
\end{aligned}$$ where $V$ and $H$ denote the normal velocity and the mean curvature of the evolving surface $\Sigma(t)=\partial\Omega(t)$, respectively, and $\lambda = \lambda(t)$ is the Lagrange multiplier corresponding to the volume constraint $|\Omega(t)|=|\Omega(0)|$. Also this equation, the volume-preserving mean curvature flow, has a gradient-flow structure as is seen at the energy dissipation relation $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}E[\Sigma(t)]
= \int_{\Sigma(t)} V(x,t)H(x,t) \,\mathrm{d}\mathcal{H}^{d-1}(x)
=- \int_{\Sigma(t)} V^2 \,\mathrm{d}\mathcal{H}^{d-1}(x),
\end{aligned}$$ which holds for sufficiently regular solutions of [\[eq:introVPMCF\]](#eq:introVPMCF){reference-type="eqref" reference="eq:introVPMCF"}. Again the evolution is restricted to a "submanifold" $\left\lbrace\Sigma=\partial \Omega \subset \mathbb{R}^d \colon |\Omega| = m\right\rbrace$ which incorporates the volume constraint. Takasao showed under very mild assumptions that solutions to [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"}--[\[eq:lambdaeps\]](#eq:lambdaeps){reference-type="eqref" reference="eq:lambdaeps"} converge to a weak solution of volume-preserving mean curvature flow in the sense of Brakke [@Brakke]; first for ambient dimensions $d=2,3$ [@Takasao] and most recently, for a slight perturbation of [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"}--[\[eq:lambdaeps\]](#eq:lambdaeps){reference-type="eqref" reference="eq:lambdaeps"} in all dimensions [@TakasaoHigherD]. Another approach is inspired by the work of Luckhaus and Sturzenhecker [@LucStu]: the second author and Simon [@LauxSimon] showed that, under a natural energy-convergence assumption as in [@LucStu], the limit is a distributional solution to volume-preserving mean curvature flow, which holds in all spatial dimensions and also in the case of multiple phases, any selection of which may carry a volume constraint.
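For later reference, note that the volume constraint determines the Lagrange multiplier in [\[eq:introVPMCF\]](#eq:introVPMCF){reference-type="eqref" reference="eq:introVPMCF"}: assuming sufficient regularity, $$0=\frac{\mathrm{d}}{\mathrm{d}t}|\Omega(t)|=\int_{\Sigma(t)}V\,\mathrm{d}\mathcal{H}^{d-1}=\int_{\Sigma(t)}(-H+\lambda)\,\mathrm{d}\mathcal{H}^{d-1},\qquad\text{hence}\qquad\lambda(t)=\frac{\int_{\Sigma(t)}H\,\mathrm{d}\mathcal{H}^{d-1}}{\mathcal{H}^{d-1}(\Sigma(t))}.$$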
For our proof, we use the relative energy method. In the context of the convergence of phase field models this method was introduced by Fischer, Simon and the second author in [@FischerLauxSimon], but the relative energy is very closely related to the diffuse tilt-excess introduced by Simon and the second author in [@LauxSimon]. It can also be used to incorporate boundary contact, as was shown by Hensel and Moser [@HenselMoserContact], and Hensel and the second author [@HenselLauxContact]. As the method does not rely on the maximum principle, it can also be applied for vectorial problems. Liu and the second author [@LauxLiu] combined the relative energy method with weak convergence methods to derive the scaling limit of transitions between the isotropic and the nematic phase in liquid crystals. Fischer and Marveggio [@FischerMarveggio] showed that the method can also be used for the vectorial Allen--Cahn equation, at least in ambient dimensions $d=2,3$ and for a prototypical potential with three wells.
The nonlocal Allen--Cahn equation is a physically motivated model, which is why its sharp interface limit is of high interest. But it can also be viewed as an approximation scheme to construct (numerically or theoretically) solutions to volume preserving mean curvature flow. Other methods to construct solutions include PDE methods which can be used for short time [@EscherSimonett]; versions of the minimizing movements scheme by Almgren, Taylor and Wang [@Almgren1993], as was first done by Mugnai, Seis and Spadaro [@MugnaiSeisSpadaro] and later by Julin and Niinikoski [@Julin2022]; and the thresholding scheme, which is also numerically efficient, see the work of Swartz and the second author [@LauxSwartz].
## Notation
The Landau symbol $O$ will be used frequently. Precisely, by $a=O(b)$ we mean that there exists a constant $C$ depending on $d$, $T$, and $\Sigma=(\Sigma(t))_{t\in[0,T]}$, such that $|a| \leq C |b|$. The signed distance function to $\Sigma(t)$ will be denoted by $$\begin{aligned}
\label{eq:defsdist}
\mathbf{s}(x,t)\coloneqq \mathop \textup{dist} \nolimits(x,\Omega(t))- \mathop \textup{dist} \nolimits(x,\mathbb{R}^d\setminus\Omega(t)),
\end{aligned}$$ where $\Omega(t)$ is the region enclosed by $\Sigma(t)$. The gradient and divergence on $\mathbb{R}^d$ will be denoted by $\nabla$ and $\mathrm{div}$, respectively. In the neighborhood of a surface $\Sigma$ the tangential gradient and divergence will be denoted by $\nabla_{\Sigma}$ and $\mathrm{div}_{\Sigma}$, and are explicitly given by $$\nabla_\Sigma=(\mathrm{Id}-\nu\otimes\nu)\nabla\quad\text{and}\quad\mathrm{div}_\Sigma=(\mathrm{Id}-\nu\otimes\nu):\nabla.$$ These operators can also be defined intrinsically on $\Sigma$ so that we can apply them to functions and vector fields only defined on the surface.
# Main results
The main result of this work states that solutions to the nonlocal Allen--Cahn equation with well-prepared initial conditions converge to solutions of volume-preserving mean curvature flow before the onset of singularities. In addition, the theorem provides the optimal convergence rate $O(\varepsilon)$. For simplicity we assume that the two wells of $W$ are $0$ and $1$ and that the induced surface tension is normalized to $\sigma \coloneqq \phi(1)=\int_0^1 \sqrt{2W(z)} \,\mathrm{d}z =1$. This is for example the case if $W(z)= 18 z^2 (z-1)^2$.
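Indeed, for this choice of $W$ the normalization can be checked directly: $$\sqrt{2W(z)}=\sqrt{36\,z^{2}(z-1)^{2}}=6z(1-z)\quad\text{for }z\in[0,1],\qquad\text{so}\qquad\sigma=\int_0^1 6z(1-z)\,\mathrm{d}z=6\Big(\frac12-\frac13\Big)=1.$$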
**Theorem 1**. *Let $\Sigma=(\Sigma(t)=\partial\Omega(t))_{t\in [0,T]}$ be a smooth solution to volume-preserving mean curvature flow according to Definition [Definition 1](#def:strong){reference-type="ref" reference="def:strong"} below and let $u_\varepsilon$ be a solution of the nonlocal Allen--Cahn equation [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"} with well-prepared initial conditions according to Definition [Definition 3](#def:wellprepared){reference-type="ref" reference="def:wellprepared"} below. Then there exists a constant $C=C(d,\Sigma,T)<\infty$ such that $$\begin{aligned}
\sup_{t\in[0,T]}\int_{\mathbb{R}^d}|\psi_\varepsilon(x,t)-\chi_{\Omega(t)}(x)| \,\mathrm{d}x \leq C\varepsilon.
\end{aligned}$$*
We note that well-prepared initial data can easily be constructed by gluing the optimal profile around $\Sigma(0)$:
**Lemma 1**. *If $\Sigma(0)$ is $C^{3,\alpha}$ for some $\alpha\in(0,1)$, then there exist constants $(a_\varepsilon)_{\varepsilon>0}$ with $a_\varepsilon=O(\varepsilon)$ as $\varepsilon\downarrow 0$ such that $$\begin{aligned}
\label{eq:init_data_optimal_profile}
u_{\varepsilon}(x,0)\coloneqq U\left(\frac{-\mathbf{s}(x,\Sigma(0))-a_\varepsilon}{\varepsilon}\right)
\quad\text{is well-prepared in the sense of Definition~\ref{def:wellprepared},}
\end{aligned}$$ where $U$ is the unique solution to $U''=W'(U)$ with $U(-\infty)=0,\,U(+\infty)=1$ and $U(0)=\frac{1}{2}$.*
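For the prototypical potential $W(z)=18z^2(z-1)^2$ mentioned above, the optimal profile is explicit (a direct computation, included only for illustration): the first-order equipartition reduction $U'=\sqrt{2W(U)}=6U(1-U)$ is a logistic equation, so $$U(s)=\frac{1}{1+e^{-6s}},$$ which indeed satisfies $U''=W'(U)$, $U(-\infty)=0$, $U(+\infty)=1$, and $U(0)=\frac{1}{2}$.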
**Definition 1**. We call a family of surfaces $\Sigma=(\Sigma(t))_{t\in[0,T]}$ a smooth solution to volume-preserving mean curvature flow if there exists $\alpha\in(0,1)$ such that $\Sigma(t)$ is $C^{3,\alpha}$ for all $t$ and $\Sigma(t)$ evolves by [\[eq:introVPMCF\]](#eq:introVPMCF){reference-type="eqref" reference="eq:introVPMCF"}, i.e., $V=-H+\lambda$, and the normal velocity $V(t)$ is of class $C^{1,\alpha}$ in space.
Before we give a precise definition of well-preparedness, we need to introduce some definitions. The key tool in our proof is a suitable gradient flow calibration.
**Definition 2**. Let $\Sigma=(\Sigma(t))_{t\in[0,T]}$ be a one-parameter family of closed surfaces $\Sigma(t) = \partial \Omega(t) \subset \mathbb{R}^d$. Let $\xi,B \colon \mathbb{R}^d\times[0,T]\to \mathbb{R}^d$ be two vector fields, let $\vartheta \colon \mathbb{R}^d\times[0,T]\to \mathbb{R}$ and let $\lambda \colon [0,T]\to \mathbb{R}$. We call the tuple $(\xi,B, \vartheta, \lambda)$ a *gradient-flow calibration for volume-preserving mean curvature flow* if the following statements hold true.
(i) *Regularity.*[\[item:reg\]]{#item:reg label="item:reg"} The vector field $\xi$ and the function $\vartheta$ satisfy $$\begin{aligned}
\xi \in
C^{0,1}(\mathbb{R}^d\times[0,T];\mathbb{R}^d) \quad\text{and} \quad \vartheta \in C^{0,1}(\mathbb{R}^d\times[0,T]).
\end{aligned}$$ Furthermore, for each $t\in[0,T]$ it holds $$\begin{aligned}
B(\cdot,t) \in C^{0,1}(\mathbb{R}^d;\mathbb{R}^d).
\end{aligned}$$
(ii) *Normal extension and shortness.*[\[item:normal\]]{#item:normal label="item:normal"} The vector field $\xi$ extends the exterior unit normal vector field $$\begin{aligned}
\xi(\cdot,t) = \nu(\cdot,t) \quad \text{on $\Sigma(t)$}
\end{aligned}$$ and it is short away from $\Sigma$: There exists a constant $c>0$ such that $$\begin{aligned}
\label{eq:xishort}
|\xi(\cdot,t)| \leq (1-c \mathop \textup{dist} \nolimits^2(x,\Sigma(t)))_+,
\end{aligned}$$ where $(\cdot)_+$ denotes the positive part.
(iii) *Divergence constraint.* [\[item:divB\]]{#item:divB label="item:divB"} There exists a bounded function $c:[0,T]\rightarrow\mathbb{R}$ such that the vector fields $B(\cdot,t)$ satisfy, for each $t\in [0,T]$, $$\begin{aligned}
\label{eq:divB}
\nabla \cdot B (\cdot,t)-c(t)=O\big( \mathop \textup{dist} \nolimits(\cdot, \Sigma(t))\big),
\end{aligned}$$ and $$\begin{aligned}
\label{eq:xixi_nablaB}
\xi\otimes\xi:\nabla B(\cdot,t)=O( \mathop \textup{dist} \nolimits(\cdot,\Sigma(t))).
\end{aligned}$$
(iv) *Approximate transport equations.*[\[item:transport\]]{#item:transport label="item:transport"} The weight $\vartheta$ is transported to first order $$\begin{aligned}
\label{eq:transp_weight}
\left(\partial_t \vartheta + (B\cdot \nabla)\vartheta \right)(\cdot,t) = O\big( \mathop \textup{dist} \nolimits(\cdot,\Sigma(t))\big),
\end{aligned}$$ and the length of $\xi$ to second order $$\begin{aligned}
\label{eq:transp_absxi}
\left(\partial_t |\xi|^2 + (B\cdot \nabla) |\xi|^2\right)(\cdot,t) = O\big( \mathop \textup{dist} \nolimits^2(\cdot,\Sigma(t))\big).
\end{aligned}$$ Furthermore $$\begin{aligned}
\label{eq:transp_xi}
\left(\partial_t \xi + (B\cdot \nabla ) \xi + (\nabla B)^{\mathsf{T}} \xi \right)(\cdot,t)
=O\big( \mathop \textup{dist} \nolimits(\cdot,\Sigma(t))\big).
\end{aligned}$$
(v) *Geometric evolution equation.*[\[item:GEE\]]{#item:GEE label="item:GEE"}
$$\begin{aligned}
\label{eq:extGEE}
B(\cdot,t)\cdot\xi(\cdot,t)+\nabla \cdot \xi(\cdot,t)-\lambda(t)=O\big( \mathop \textup{dist} \nolimits(\cdot,\Sigma(t))\big).
\end{aligned}$$
(vi) *Coercivity of the transported weight.* [\[item:signweights\]]{#item:signweights label="item:signweights"} It holds $$\begin{aligned}
\vartheta(\cdot,t)&>0\quad\text{on $\mathbb{R}^d\setminus\Omega(t)$,} \\
\vartheta(\cdot,t)&<0\quad\text{in $\Omega(t)$,} \\
\sup_{(x,t)\in\mathbb{R}^d\times[0,T]}|\vartheta(x,t)|&<\infty,
\end{aligned}$$ and there exist constants $0<c,C<\infty$ such that, on $\mathop \textup{supp}\xi$, $$\begin{aligned}
\label{eq:weight-bilip-estimate}
c \mathop \textup{dist} \nolimits(\cdot,\Sigma(t))\leq|\vartheta(\cdot,t)|\leq C \mathop \textup{dist} \nolimits(\cdot,\Sigma(t)).
\end{aligned}$$
In case such a gradient-flow calibration exists for $\Sigma$, we call $\Sigma$ a *calibrated flow*.
The main difficulty in this work, compared to previous works using relative energy methods, lies in the divergence constraints [\[eq:divB\]](#eq:divB){reference-type="eqref" reference="eq:divB"} and [\[eq:xixi_nablaB\]](#eq:xixi_nablaB){reference-type="eqref" reference="eq:xixi_nablaB"} on $B$, which require a particular construction. These divergence constraints are natural in the following sense. In view of [@Laux2022], it is useful to choose $B$ such that its divergence is controlled, since $\nabla\cdot B=0$ is the localized version of the preservation of the total volume. There, it was chosen such that $\nabla \cdot B =O( \mathop \textup{dist} \nolimits(\cdot,\Sigma))$. Here, we need to relax this constraint to [\[eq:divB\]](#eq:divB){reference-type="eqref" reference="eq:divB"} as we additionally want to fix the $\nu\otimes \nu$ component of the Jacobian $\nabla B$. Then $\nabla\cdot B = (\mathrm{Id}-\nu \otimes \nu ) \colon \nabla B = \mathrm{div}_{\Sigma} B$ on $\Sigma$. And since the rate of change of the total surface area is dictated by the PDE, we cannot set $c(t)=0$.
Our ansatz is to add a tangential part to the velocity field, say $X$. Then $B=V\nu+X$ on $\Sigma$ and, since $\mathrm{div}_{\Sigma}(V\nu)=VH$, the divergence constraint $\nabla\cdot B=c$ on $\Sigma$ becomes $$\mathrm{div}_{\Sigma}X=c-V H.$$ Hence, integrating over the closed surface $\Sigma$, we see that necessarily $$\begin{aligned}
c(t) &= \frac{\int_{\Sigma} VH\,\mathrm{d}\mathcal{H}^{d-1}}{\mathcal{H}^{d-1}(\Sigma)}.
\end{aligned}$$ This PDE is underdetermined, so we make the ansatz that $X$ is a gradient field, i.e., $X=\nabla_{\Sigma}\varphi$ for some potential $\varphi$. Then $\varphi$ solves the Poisson equation $$\begin{aligned}
-\Delta_{\Sigma}\varphi&=V H-c\quad\text{on $\Sigma$,}
\end{aligned}$$ where $\Delta_{\Sigma}=\mathrm{div}_{\Sigma}\nabla_{\Sigma}$ is the Laplace--Beltrami operator on $\Sigma$.
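In passing, note that, by the energy dissipation relation recalled in the introduction (with $E[\Sigma(t)]=\mathcal{H}^{d-1}(\Sigma(t))$ the surface area), $c(t)$ is precisely the relative rate of change of the total surface area, which quantifies why it cannot be set to zero in general: $$c(t)=\frac{\int_{\Sigma(t)}VH\,\mathrm{d}\mathcal{H}^{d-1}}{\mathcal{H}^{d-1}(\Sigma(t))}=\frac{1}{\mathcal{H}^{d-1}(\Sigma(t))}\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}^{d-1}(\Sigma(t))=-\frac{\int_{\Sigma(t)}V^{2}\,\mathrm{d}\mathcal{H}^{d-1}}{\mathcal{H}^{d-1}(\Sigma(t))}\leq0.$$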
Now Theorem [Theorem 1](#thm:ACtoMCF){reference-type="ref" reference="thm:ACtoMCF"} rests on the following two propositions. The first one guarantees the existence of a calibration, the second shows that, given a calibration, the Allen--Cahn equation converges.
**Proposition 1**. *If $\Sigma$ is a smooth solution to volume-preserving mean curvature flow in the sense of Definition [Definition 1](#def:strong){reference-type="ref" reference="def:strong"}, then $\Sigma$ is a calibrated flow.*
**Proposition 2**. *Let $u_\varepsilon$ be a solution to the nonlocal Allen--Cahn equation [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"} and let $(\Sigma(t))_{t\in[0,T]}$ be a calibrated flow according to Definition [Definition 2](#def:GF_Cal){reference-type="ref" reference="def:GF_Cal"}. Suppose further that $\int_{\mathbb{R}^d}\psi_\varepsilon(x,0)\,\mathrm{d}x=|\Omega(0)|$. Then there exists a constant $C=C(d,T,\Sigma)$ such that, for all $t\in(0,T)$, it holds $$\begin{aligned}
\label{eq:ddtEeps}
\frac{\mathrm{d}}{\mathrm{d}t}\big(\mathcal{E}_\varepsilon(t) +\mathcal{F}_\varepsilon(t)\big) \leq C\big(\mathcal{E}_\varepsilon(t)+\mathcal{F}_\varepsilon(t)\big),
\end{aligned}$$ where $\mathcal{E}_\varepsilon$ and $\mathcal{F}_\varepsilon$ are defined below in [\[eq:defEeps\]](#eq:defEeps){reference-type="eqref" reference="eq:defEeps"} and [\[eq:defFeps\]](#eq:defFeps){reference-type="eqref" reference="eq:defFeps"}, respectively.*
We work with the relative energy $$\begin{aligned}
\label{eq:defEeps}
\mathcal{E}_\varepsilon(t)\coloneqq\mathcal{E}_\varepsilon[u_\varepsilon,\Sigma](t)
\coloneqq&E_\varepsilon[u_\varepsilon(\cdot,t)] + \int_{\mathbb{R}^d} \xi(x,t) \cdot \nabla \psi_\varepsilon(x,t) \,\mathrm{d}x
\\\notag=& \int_{\mathbb{R}^d} \left( \frac\varepsilon 2 |\nabla u_\varepsilon(x,t)|^2 + \frac1\varepsilon W(u_\varepsilon(x,t)) - |\nabla \psi_\varepsilon(x,t)|\right)\,\mathrm{d}x
\\\notag&+ \int_{\mathbb{R}^d} \left( 1 - \xi(x,t) \cdot \nu_\varepsilon(x,t) \right) |\nabla \psi_\varepsilon(x,t)| \,\mathrm{d}x,
\end{aligned}$$ where $\psi_\varepsilon(x,t) \coloneqq \int_0^{u_\varepsilon(x,t)} \sqrt{2W(z)} \,\mathrm{d}z$ and $\nu_\varepsilon(x,t) \coloneqq -\frac{\nabla \psi_\varepsilon(x,t)}{|\nabla \psi_\varepsilon(x,t)|}$ if $\nabla \psi_\varepsilon(x,t)\neq0$ and $\nu_\varepsilon(x,t)\coloneqq e$ for some arbitrary $e\in S^{d-1}$ if $\nabla \psi_\varepsilon(x,t)=0$. It is already clear that the relative energy $\mathcal{E}_\varepsilon$ controls both the discrepancy between the two terms in the energy and the tilt-excess.
Furthermore, we define the volume error functional $$\begin{aligned}
\label{eq:defFeps}
\mathcal{F}_\varepsilon(t)\coloneqq\mathcal{F}_\varepsilon[u_\varepsilon,\Sigma](t)
&\coloneqq \int_{\mathbb{R}^d} | \psi_\varepsilon(x,t)- \chi_{\Omega(t)}(x)| |\vartheta(x,t)| \,\mathrm{d}x
\\\notag &= \int_{\mathbb{R}^d} ( \psi_\varepsilon(x,t)- \chi_{\Omega(t)}(x))\vartheta(x,t) \,\mathrm{d}x .
\end{aligned}$$
**Definition 3**. We call initial conditions $u_\varepsilon(\cdot,0)$ *well-prepared* if they satisfy the following assumptions:
(i) *Mass constraint.* [\[item:mass\]]{#item:mass label="item:mass"} $\int_{\mathbb{R}^d} \psi_\varepsilon(x,0) \,\mathrm{d}x = |\Omega(0)|$.
(ii) *Optimal convergence rate.* [\[item:optimalrate\]]{#item:optimalrate label="item:optimalrate"} $\mathcal{E}_\varepsilon(0)+\mathcal{F}_\varepsilon(0)=O(\varepsilon^2)$.
The proofs of Proposition [Proposition 1](#prop:strong_is_calibrated){reference-type="ref" reference="prop:strong_is_calibrated"} and [Proposition 2](#prop:AC){reference-type="ref" reference="prop:AC"} are deferred to the next sections. Now, based on the propositions we are able to prove Theorem [Theorem 1](#thm:ACtoMCF){reference-type="ref" reference="thm:ACtoMCF"} similarly to [@FischerLauxSimon].
*Proof of Theorem [Theorem 1](#thm:ACtoMCF){reference-type="ref" reference="thm:ACtoMCF"}.* By Gronwall's lemma, [\[eq:ddtEeps\]](#eq:ddtEeps){reference-type="eqref" reference="eq:ddtEeps"} implies $$\begin{aligned}
\label{eq:pf_gronwall}
\mathcal{E}_\varepsilon(t)+\mathcal{F}_\varepsilon(t)
\leq C \big(\mathcal{E}_\varepsilon(0)+\mathcal{F}_\varepsilon(0) \big)
\quad\text{for all $t\in[0,T]$.}
\end{aligned}$$ Now, for $\delta>0$ and $f\in L^\infty(0,\delta)$, we split the square $[0,\delta]^2$ into two triangles and apply Fubini's theorem: $$\begin{aligned}
\left(\int_0^\delta|f(r)|\,\mathrm{d}r\right)^2
=2\int_0^\delta|f(r)|\int_0^r|f(s)|\,\mathrm{d}s\,\mathrm{d}r
\leq 2\|f\|_\infty\int_0^\delta|f(r)|r\,\mathrm{d}r.
\end{aligned}$$ Let $\mathcal{U}_r(t)\coloneqq\left\lbrace x: \mathop \textup{dist} \nolimits(x,\Sigma(t))<r\right\rbrace$ denote the tubular neighborhood of $\Sigma(t)$ with radius $r$ and let $\pi_{\Sigma(t)}\coloneqq\mathrm{Id}-\mathbf{s}\nabla\mathbf{s}\otimes\nabla\mathbf{s}$ denote the orthogonal projection onto $\Sigma(t)$, where $\mathbf{s}$ is the signed distance function defined in [\[eq:defsdist\]](#eq:defsdist){reference-type="eqref" reference="eq:defsdist"}. Now let $\delta>0$ be sufficiently small such that $\pi_{\Sigma(t)}$ is well defined on $\mathcal{U}_\delta(t)$ and injective for all $t\in[0,T]$. We compute $$\begin{aligned}
&\left(
\int_{\mathcal{U}_\delta(t)}|\psi_\varepsilon(\cdot,t)-\chi_{\Omega(t)}|\,\mathrm{d}x
\right)^2 \\
\leq\,\,&C\bigg(
\int_{\Sigma(t)}\int_0^\delta|\psi_\varepsilon-\chi_{\Omega}|(y+r\nu(y,t),t)r\,\mathrm{d}r\,\mathrm{d}\mathcal{H}^{d-1}(y) \\
&+\int_{\Sigma(t)}\int_0^\delta|\psi_\varepsilon-\chi_{\Omega}|(y-r\nu(y,t),t)r\,\mathrm{d}r\,\mathrm{d}\mathcal{H}^{d-1}(y)
\bigg) \\
=\,\,&C\int_{\Sigma(t)}\int_{-\delta}^\delta|\psi_\varepsilon-\chi_{\Omega(t)}|(y+r\nu(y,t),t) \mathop \textup{dist} \nolimits(y+r\nu(y,t),\Sigma(t))\,\mathrm{d}r\,\mathrm{d}\mathcal{H}^{d-1}(y) \\
\leq\,\,&C\int_{\mathcal{U}_\delta(t)}|\psi_\varepsilon(x,t)-\chi_{\Omega(t)}(x)| \mathop \textup{dist} \nolimits(x,\Sigma(t))\,\mathrm{d}x
\leq\,\,C\mathcal{F}_\varepsilon(t).
\end{aligned}$$ In view of [\[eq:pf_gronwall\]](#eq:pf_gronwall){reference-type="eqref" reference="eq:pf_gronwall"} and the well-preparedness condition [\[item:optimalrate\]](#item:optimalrate){reference-type="eqref" reference="item:optimalrate"} we obtain Theorem [Theorem 1](#thm:ACtoMCF){reference-type="ref" reference="thm:ACtoMCF"}. ◻
# Construction of calibration: Proof of Proposition [Proposition 1](#prop:strong_is_calibrated){reference-type="ref" reference="prop:strong_is_calibrated"} {#construction-of-calibration-proof-of-proposition-propstrong_is_calibrated}
*Proof of Proposition [Proposition 1](#prop:strong_is_calibrated){reference-type="ref" reference="prop:strong_is_calibrated"}.* Let $(\Sigma(t))_{t\in[0,T]}$ be a smooth solution to volume-preserving mean curvature flow, and let $\delta>0$ be sufficiently small such that $\pi_\Sigma$, with the notation of the proof of Theorem [Theorem 1](#thm:ACtoMCF){reference-type="ref" reference="thm:ACtoMCF"}, is well defined, injective and of class $C^2$ on $\mathcal{U}_{2\delta}$. Define a smooth cutoff function $\zeta:\mathbb{R}\rightarrow[0,\infty)$ such that $\zeta(r)=1-r^2$ for $|r|<\delta/2$, $\zeta=0$ for $|r|>\delta$ and define $$\xi(\cdot,t)\coloneqq\zeta(\mathbf{s}(\cdot,\Sigma(t)))\nabla\mathbf{s}(\cdot,\Sigma(t)).$$ Next, let $\theta$ be a smooth truncation of the identity, i.e., $\theta(r)=-\theta(-r),\,\theta(r)=r$ for $|r|<\delta/2$ and $\theta(r)=\delta$ for $r\geq\delta$. Now we define $\vartheta(x,t)\coloneqq\theta(\mathbf{s}(x,\Sigma(t)))$.
Finally we construct the vector field $B$. Let $V(\cdot,t)$ denote the normal velocity of the interface $\Sigma(t)=\left\lbrace\vartheta(\cdot,t)=0\right\rbrace$ and let $\eta$ be a cutoff function such that $\eta(r)=1$ for $|r|<\delta$ and $\eta(r)=0$ for $r>2\delta$. Now consider the ansatz $$B(x,t)\coloneqq\eta(\mathbf{s}(x,\Sigma(t)))((V\nu+X)\circ\pi_{\Sigma(t)}(x))$$ for some tangent vector field $X(\cdot,t):\Sigma(t)\rightarrow T\Sigma(t)$. Then $\nu\otimes\nu:\nabla B=0$ and hence $$\begin{aligned}
\nabla\cdot B
&=(\mathrm{Id}-\nu\otimes\nu):\nabla B+\nu\otimes\nu:\nabla B \\
&=\mathrm{div}_{\Sigma(t)}B \\
&=\mathrm{div}_{\Sigma(t)}(V\nu+X) \\
&=V\mathrm{div}_{\Sigma(t)}\nu+\mathrm{div}_{\Sigma(t)}X.
\end{aligned}$$ We can construct such an $X(\cdot,t)$ by solving the PDE $$-\Delta_{\Sigma(t)}\varphi=VH-c\quad\text{on $\Sigma(t)$,}$$ where $c(t)=\fint_{\Sigma(t)} VH\,\mathrm{d}\mathcal{H}^{d-1}$. Then the right-hand side satisfies the compatibility condition $$\begin{aligned}
\int_{\Sigma(t)}V H-c\,\mathrm{d}\mathcal{H}^{d-1}
&=0,
\end{aligned}$$ and existence and uniqueness of weak solutions in $H^1_{(0)}(\Sigma(t))$ can easily be shown with the Lax--Milgram lemma, cf. [@Kroemer2022 Lemma 4]. Since $\Sigma(t)$ is $C^{3,\alpha}$ and the normal velocity $V$ is of class $C^{1,\alpha}$, the regularity of $\varphi$ can be improved to $C^{3,\alpha}$ using Schauder estimates, cf. [@Laux2022 Proof of Thm. 1]. Now set $X\coloneqq\nabla_{\Sigma(t)}\varphi$. Then $B$ is of class $C^{1,\alpha}$, in particular $C^{0,1}$, and satisfies the required properties: $$\begin{aligned}
\label{eq:B1}\nu\otimes\nu:\nabla B&=0\qquad\text{on $\Sigma(t)$,}
\\ \label{eq:B2}\mathrm{div}B&=c\qquad\text{on $\Sigma(t)$,}
\end{aligned}$$ and hence by Lipschitz continuity the divergence constraints [\[eq:divB\]](#eq:divB){reference-type="eqref" reference="eq:divB"} and [\[eq:xixi_nablaB\]](#eq:xixi_nablaB){reference-type="eqref" reference="eq:xixi_nablaB"}.
Now we compute, on $\Sigma$, $$\begin{aligned}
\partial_t\mathbf{s}+B\cdot\nabla\mathbf{s}
&=-V+B\cdot\nu
=-V+V=0.
\end{aligned}$$ Since both $|\xi|^2=(\zeta\circ\mathbf{s})^2$ and $\vartheta=\theta\circ\mathbf{s}$ are functions of the signed distance and Lipschitz, we immediately obtain [\[eq:transp_weight\]](#eq:transp_weight){reference-type="eqref" reference="eq:transp_weight"} and [\[eq:transp_absxi\]](#eq:transp_absxi){reference-type="eqref" reference="eq:transp_absxi"}.
It remains to show [\[eq:transp_xi\]](#eq:transp_xi){reference-type="eqref" reference="eq:transp_xi"} and [\[eq:extGEE\]](#eq:extGEE){reference-type="eqref" reference="eq:extGEE"}. Since $\zeta'(0)=0$, we have, on $\Sigma$, $$\begin{aligned}
B\cdot\xi+\nabla\cdot\xi-\lambda
&=B\cdot\nu+|\nabla\mathbf{s}|^2\zeta'(0)+\zeta(0)\nabla\cdot\nu-\lambda
=V+H-\lambda
=0.
\end{aligned}$$ By Lipschitz continuity of $B$ and $\xi$ we get [\[eq:extGEE\]](#eq:extGEE){reference-type="eqref" reference="eq:extGEE"}. Finally we compute $$\begin{aligned}
&\quad(\partial_t\xi+(B\cdot\nabla)\xi+(\nabla B)^\mathsf{T}\xi)(\cdot,t) \\
&=\zeta'(\mathbf{s})(\partial_t\mathbf{s}+B\cdot\nabla\mathbf{s})\nabla\mathbf{s}
+\zeta(\mathbf{s})(\partial_t\nabla\mathbf{s}+(B\cdot\nabla)\nabla\mathbf{s}+(\nabla B)^\mathsf{T}\nabla\mathbf{s})
\end{aligned}$$ As before, the first term is $O( \mathop \textup{dist} \nolimits(\cdot,\Sigma(t)))$. Thus it remains to compute the second term. We have, on $\Sigma$, $$\begin{aligned}
0&=\nabla(\partial_t\mathbf{s}+(B\cdot\nabla)\mathbf{s}) \\
&=\partial_t\nabla\mathbf{s}+(B\cdot\nabla)\nabla\mathbf{s}+(\nabla B)^T\nabla\mathbf{s}\\
&=\partial_t\xi+(B\cdot\nabla)\xi+(\nabla B)^T\xi.
\end{aligned}$$ This concludes the proof of Proposition [Proposition 1](#prop:strong_is_calibrated){reference-type="ref" reference="prop:strong_is_calibrated"}. ◻
# Relative energy estimate: Proof of Proposition [Proposition 2](#prop:AC){reference-type="ref" reference="prop:AC"} {#relative-energy-estimate-proof-of-proposition-propac}
This section is devoted to the proof of the relative energy estimate in Proposition [Proposition 2](#prop:AC){reference-type="ref" reference="prop:AC"}. We will need an appropriate weak formulation of the nonlocal Allen--Cahn equation, which we will later test with the extended velocity field $B$. It is easy to check that, testing [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"} with $B\cdot \varepsilon\nabla u_\varepsilon$, any solution $u_\varepsilon$ of the nonlocal Allen--Cahn equation [\[eq:AC\]](#eq:AC){reference-type="eqref" reference="eq:AC"} satisfies $$\begin{aligned}
&\int (\nabla \cdot B) \Big( \frac\varepsilon 2 |\nabla u_\varepsilon|^2 + \frac1\varepsilon W(u_\varepsilon)\Big) \,\mathrm{d}x
- \int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon|\nabla \psi_\varepsilon| \,\mathrm{d}x
\\& =-\int \left(V_\varepsilon-\lambda_\varepsilon\sqrt{2W(u_\varepsilon)}\right) \nu_\varepsilon\cdot B |\nabla u_\varepsilon| \,\mathrm{d}x
+\int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x,
\label{eq:weakAC}\end{aligned}$$ where we omitted the domain of integration $\mathbb{R}^d\times\left\lbrace t\right\rbrace$, for $t\in(0,T)$, cf. [@LauxSimon Section 3.2].
The following simple lemma, cf. [@FischerLauxSimon Lemma 4], states the basic coercivity properties of the relative energy $\mathcal{E}_\varepsilon$.
**Lemma 2**. *There exist constants $0<c,C<\infty$ such that $$\begin{aligned}
\label{eq:Ereleps1} \int \Big(\sqrt{\varepsilon} |\nabla u_\varepsilon| -\frac1{\sqrt{\varepsilon}} \sqrt{2W(u_\varepsilon)}\Big)^2\,\mathrm{d}x
&\leq 2 \mathcal{E}_\varepsilon[u_\varepsilon,\Sigma],
\\ \label{eq:Ereleps2}\int |\nu_\varepsilon-\xi|^2 |\nabla \psi_\varepsilon|\,\mathrm{d}x
&\leq 2 \mathcal{E}_\varepsilon[u_\varepsilon,\Sigma],
\\ \label{eq:Ereleps3}\int |\nu_\varepsilon-\xi|^2\varepsilon|\nabla u_\varepsilon|^2 \,\mathrm{d}x
&\leq 12 \mathcal{E}_\varepsilon[u_\varepsilon,\Sigma],
\\\label{eq:Ereleps4} \int\min\left\lbrace \mathop \textup{dist} \nolimits^2(\cdot,\Sigma),c\right\rbrace\Big( \frac\varepsilon 2 |\nabla u_\varepsilon|^2 +\frac1\varepsilon W(u_\varepsilon) \Big) \,\mathrm{d}x
&\leq C(\Sigma) \mathcal{E}_\varepsilon[u_\varepsilon,\Sigma].
\end{aligned}$$*
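For the reader's convenience, we recall the short argument behind [\[eq:Ereleps2\]](#eq:Ereleps2){reference-type="eqref" reference="eq:Ereleps2"}; the remaining estimates are obtained similarly, cf. [@FischerLauxSimon Lemma 4]. By Young's inequality, $|\nabla\psi_\varepsilon|=\sqrt{2W(u_\varepsilon)}\,|\nabla u_\varepsilon|\leq\frac\varepsilon2|\nabla u_\varepsilon|^2+\frac1\varepsilon W(u_\varepsilon)$, so the first integral in [\[eq:defEeps\]](#eq:defEeps){reference-type="eqref" reference="eq:defEeps"} is nonnegative; and since $|\nu_\varepsilon|=1$ and $|\xi|\leq1$ by [\[eq:xishort\]](#eq:xishort){reference-type="eqref" reference="eq:xishort"}, $$\frac12|\nu_\varepsilon-\xi|^2=\frac12\big(1+|\xi|^2\big)-\nu_\varepsilon\cdot\xi\leq1-\nu_\varepsilon\cdot\xi,$$ so that $\int|\nu_\varepsilon-\xi|^2|\nabla\psi_\varepsilon|\,\mathrm{d}x\leq2\,\mathcal{E}_\varepsilon[u_\varepsilon,\Sigma]$.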
Now we are in the position to prove the proposition.
*Proof of Proposition [Proposition 2](#prop:AC){reference-type="ref" reference="prop:AC"}.* We compute using Gauss' theorem $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_\varepsilon(t)
=& \frac{\mathrm{d}}{\mathrm{d}t}E_\varepsilon[u_\varepsilon(\cdot,t)] + \frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{R}^d\times\left\lbrace t\right\rbrace} \xi \cdot \nabla \psi_\varepsilon\,\mathrm{d}x
\\=& -\int_{\mathbb{R}^d\times\left\lbrace t\right\rbrace} \frac1{\varepsilon} (\varepsilon\partial_tu_\varepsilon)^2 \,\mathrm{d}x
- \int_{\mathbb{R}^d\times\left\lbrace t\right\rbrace} (\nabla \cdot \xi) \sqrt{2W(u_\varepsilon)} \partial_t u_\varepsilon\,\mathrm{d}x
+ \int_{\mathbb{R}^d\times\left\lbrace t\right\rbrace} \partial_t \xi \cdot \nabla \psi_\varepsilon\,\mathrm{d}x .
\end{aligned}$$ In the following, we again omit the domain of integration $\mathbb{R}^d\times\left\lbrace t\right\rbrace$. We set $V_\varepsilon\coloneqq \varepsilon\partial_t u_\varepsilon$. Then we see, using that $\int V_\varepsilon\sqrt{2W(u_\varepsilon)} \,\mathrm{d}x = \frac{\mathrm{d}}{\mathrm{d}t}\int \psi_\varepsilon\,\mathrm{d}x =0$, $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_\varepsilon(t) = \int \left( -\frac1\varepsilon V_\varepsilon^2 -\frac1\varepsilon V_\varepsilon\big( \nabla\cdot \xi - \lambda \big)\sqrt{2W(u_\varepsilon)} - \partial_t\xi \cdot \nu_\varepsilon|\nabla \psi_\varepsilon| \right)\,\mathrm{d}x.
\end{aligned}$$ We add the weak formulation [\[eq:weakAC\]](#eq:weakAC){reference-type="eqref" reference="eq:weakAC"}, tested with the velocity field $B$, to obtain $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_\varepsilon(t)
=& \int \left(- \frac1\varepsilon V_\varepsilon^2 -\frac1\varepsilon V_\varepsilon\big(\nabla \cdot\xi - \lambda\big) \sqrt{2W(u_\varepsilon)}
+ \left(V_\varepsilon-\lambda_\varepsilon\sqrt{2W(u_\varepsilon)}\right) \nu_\varepsilon\cdot B |\nabla u_\varepsilon|\right) \,\mathrm{d}x
\\&+\int (\nabla \cdot B) \Big( \frac\varepsilon 2 |\nabla u_\varepsilon|^2 + \frac1\varepsilon W(u_\varepsilon)\Big) \,\mathrm{d}x
- \int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon|\nabla \psi_\varepsilon| \,\mathrm{d}x
\\&- \int \nu_\varepsilon\cdot \partial_t \xi |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\&-\int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon\big(\varepsilon|\nabla u_\varepsilon|^2- |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x.
\end{aligned}$$ Decomposing the vector field $B= (B\cdot \xi)\xi + (\mathrm{Id} -\xi \otimes \xi)B$, completing squares, and adding zero to make the transport term for $\xi$ appear, we get $$\label{eq:dissipation}
\begin{split}
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_\varepsilon(t)
&+\frac12 \int \frac1\varepsilon\Big(V_\varepsilon+ (\nabla \cdot \xi- \lambda)\sqrt{2W(u_\varepsilon)} \Big)^2\,\mathrm{d}x
+ \frac12 \int \frac1\varepsilon\Big| V_\varepsilon\nu_\varepsilon- \varepsilon|\nabla u_\varepsilon|(B\cdot \xi) \xi\Big|^2 \,\mathrm{d}x
\\=&\frac12 \int \Big(( \nabla \cdot \xi- \lambda )^2 \frac1\varepsilon 2W(u_\varepsilon)
+ (B\cdot \xi )^2 |\xi |^2 \varepsilon|\nabla u_\varepsilon|^2 \Big)\,\mathrm{d}x
\\&+ \int \big(V_\varepsilon\nu_\varepsilon(\mathrm{Id}-\xi\otimes \xi) B -\lambda_\varepsilon\sqrt{2W(u_\varepsilon)} \nu_\varepsilon\cdot B \big) |\nabla u_\varepsilon| \,\mathrm{d}x
\\& +\int \big(\nabla \cdot B -\nu_\varepsilon\cdot \nabla B \nu_\varepsilon+ \nu_\varepsilon\cdot (B\cdot \nabla) \xi + \xi (\nu_\varepsilon\cdot \nabla) B\big) |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\&-\int \nu_\varepsilon\cdot(\partial_t \xi+ (B\cdot \nabla ) \xi +(\nabla B)^\mathsf{T}\xi) |\nabla \psi_\varepsilon|\,\mathrm{d}x
\\&+\int (\nabla \cdot B) \Big( \frac\varepsilon 2 |\nabla u_\varepsilon|^2 + \frac1\varepsilon W(u_\varepsilon) - |\nabla \psi_\varepsilon| \Big) \,\mathrm{d}x
\\&-\int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x.
\end{split}$$ Completing another square and using $\xi \otimes \nu_\varepsilon+ \nu_\varepsilon\otimes \xi = -(\nu_\varepsilon-\xi)\otimes (\nu_\varepsilon-\xi) + \nu_\varepsilon\otimes \nu_\varepsilon+ \xi \otimes \xi$, we may write the right-hand side of [\[eq:dissipation\]](#eq:dissipation){reference-type="eqref" reference="eq:dissipation"} as $$\begin{aligned}
&\notag\frac12 \int \frac1\varepsilon\Big( ( \nabla \cdot \xi- \lambda ) \sqrt{2W(u_\varepsilon)}
+ \varepsilon|\nabla u_\varepsilon| B\cdot \xi \Big)^2 \,\mathrm{d}x
\\&\notag+\frac12 \int \big(|\xi|^2-1\big) (B\cdot \xi)^2 \varepsilon|\nabla u_\varepsilon|^2 \,\mathrm{d}x
\\&\notag-\int (\nabla \cdot \xi -\lambda) B\cdot \xi |\nabla \psi_\varepsilon|\,\mathrm{d}x
\\&\notag+ \int \big(V_\varepsilon\nu_\varepsilon\cdot (\mathrm{Id}-\xi\otimes \xi) B -\lambda_\varepsilon\sqrt{2W(u_\varepsilon)} \nu_\varepsilon\cdot B\big)|\nabla u_\varepsilon| \,\mathrm{d}x
\\&\notag+\int (\nabla \cdot B) (1-\xi \cdot \nu_\varepsilon) |\nabla \psi_\varepsilon| \,\mathrm{d}x + \int (\nabla \cdot B) \xi \cdot \nu_\varepsilon|\nabla \psi_\varepsilon| \,\mathrm{d}x
\\&\notag- \int \nabla B \colon (\nu_\varepsilon- \xi) \otimes( \nu_\varepsilon-\xi) |\nabla \psi_\varepsilon|\,\mathrm{d}x
\\&\notag-\int \nu_\varepsilon\cdot (\xi \cdot \nabla ) B |\nabla \psi_\varepsilon|\,\mathrm{d}x+ \int \nu_\varepsilon\cdot (B\cdot \nabla) \xi |\nabla \psi_\varepsilon|\,\mathrm{d}x
\\&\notag-\int (\nu_\varepsilon-\xi) \cdot \left( \partial_t \xi +(B\cdot \nabla) \xi + (\nabla B)^{\mathsf{T}} \xi \right) |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\& \notag-\int \xi \cdot \left( \partial_t \xi +(B\cdot \nabla) \xi \right) |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\&\notag+\int (\nabla \cdot B) \Big( \frac\varepsilon 2 |\nabla u_\varepsilon|^2 + \frac1\varepsilon W(u_\varepsilon) - |\nabla \psi_\varepsilon| \Big) \,\mathrm{d}x
\\&-\int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x.
\label{eq:pfACbeforesymmetry}
\end{aligned}$$ Two integrations by parts and the symmetry of the Hessian $\nabla^2\psi_\varepsilon$ imply $$\begin{aligned}
\int (B\cdot \nabla) \xi \cdot \nabla \psi_\varepsilon\,\mathrm{d}x
=\int (\xi \cdot \nabla )B \cdot \nabla \psi_\varepsilon\,\mathrm{d}x
+ \int \big( (\nabla \cdot \xi) B - (\nabla \cdot B) \xi \big) \cdot \nabla \psi_\varepsilon\,\mathrm{d}x.
\end{aligned}$$ Combining this with $\nabla \psi_\varepsilon= -\nu_\varepsilon|\nabla \psi_\varepsilon|$, we may again replace three terms in [\[eq:pfACbeforesymmetry\]](#eq:pfACbeforesymmetry){reference-type="eqref" reference="eq:pfACbeforesymmetry"} by the term $(\nabla \cdot \xi) B\cdot \nu_\varepsilon|\nabla \psi_\varepsilon|$ so that we get $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}&\notag\mathcal{E}_\varepsilon(t)
+\frac12 \int \frac1\varepsilon\Big(V_\varepsilon+ (\nabla \cdot \xi- \lambda)\sqrt{2W(u_\varepsilon)} \Big)^2\,\mathrm{d}x
+\frac12 \int \frac1\varepsilon\Big| V_\varepsilon\nu_\varepsilon- \varepsilon|\nabla u_\varepsilon|(B\cdot \xi) \xi\Big|^2 \,\mathrm{d}x
\\\notag=&\frac12 \int \frac1\varepsilon\Big( ( \nabla \cdot \xi- \lambda ) \sqrt{2W(u_\varepsilon)}
+ \varepsilon|\nabla u_\varepsilon| B\cdot \xi \Big)^2 \,\mathrm{d}x
+\frac12 \int \big(|\xi|^2-1\big) (B\cdot \xi)^2 \varepsilon|\nabla u_\varepsilon|^2 \,\mathrm{d}x
\\\notag&-\int (\nabla \cdot \xi -\lambda) (1-\xi \cdot \nu_\varepsilon) B\cdot \xi |\nabla \psi_\varepsilon|\,\mathrm{d}x
+\int (\lambda-\lambda_\varepsilon) \nu_\varepsilon\cdot B |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\\notag&+ \int \Big(V_\varepsilon+(\nabla \cdot \xi -\lambda)\sqrt{2W(u_\varepsilon)} \Big) \nu_\varepsilon\cdot (\mathrm{Id}-\xi\otimes \xi) B |\nabla u_\varepsilon|\,\mathrm{d}x
\\\notag&+\int (\nabla \cdot B) (1-\xi \cdot \nu_\varepsilon) |\nabla \psi_\varepsilon| \,\mathrm{d}x
- \int(\nu_\varepsilon- \xi) \cdot \nabla B ( \nu_\varepsilon-\xi) |\nabla \psi_\varepsilon|\,\mathrm{d}x
\\\notag&-\int (\nu_\varepsilon-\xi) \cdot \left( \partial_t \xi +(B\cdot \nabla) \xi + (\nabla B)^{\mathsf{T}} \xi \right) |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\\notag& -\frac12 \int \left( \partial_t |\xi|^2 +(B\cdot \nabla) |\xi|^2 \right) |\nabla \psi_\varepsilon| \,\mathrm{d}x
\\\notag&+\int (\nabla \cdot B) \Big( \frac\varepsilon 2 |\nabla u_\varepsilon|^2 + \frac1\varepsilon W(u_\varepsilon) - |\nabla \psi_\varepsilon| \Big) \,\mathrm{d}x
\\ &-\int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x.
\label{eq:relEneps}
\end{aligned}$$ We argue term-by-term that the right-hand side can be controlled suitably. By and large, the argument is similar to the one in the sharp-interface case in [@Laux2022], here based on the coercivity properties of $\mathcal{E}_\varepsilon$ collected in Lemma [Lemma 2](#lem:Ereleps){reference-type="ref" reference="lem:Ereleps"}. Let us first estimate the terms that are analogous to [@Laux2022]. For the first term, by Young's inequality we have $$\begin{aligned}
&\frac1{2\varepsilon} \Big( ( \nabla \cdot \xi- \lambda ) \sqrt{2W(u_\varepsilon)}
+ \varepsilon|\nabla u_\varepsilon| B\cdot \xi \Big)^2
\\&\leq ( \nabla \cdot \xi- \lambda +B\cdot \xi )^2 \varepsilon|\nabla u_\varepsilon|^2
+ (\nabla \cdot \xi -\lambda)^2 \Big(\sqrt{\varepsilon} |\nabla u_\varepsilon| -\frac1{\sqrt{\varepsilon}} \sqrt{2W(u_\varepsilon)}\Big)^2.
\end{aligned}$$ The contributions of these two terms are controlled by $\mathcal{E}_\varepsilon(t)$: the first one using [\[eq:extGEE\]](#eq:extGEE){reference-type="eqref" reference="eq:extGEE"} in conjunction with [\[eq:Ereleps4\]](#eq:Ereleps4){reference-type="eqref" reference="eq:Ereleps4"}, the second one using [\[eq:Ereleps1\]](#eq:Ereleps1){reference-type="eqref" reference="eq:Ereleps1"}. The second term in [\[eq:relEneps\]](#eq:relEneps){reference-type="eqref" reference="eq:relEneps"} is controlled by [\[eq:xishort\]](#eq:xishort){reference-type="eqref" reference="eq:xishort"} in conjunction with [\[eq:Ereleps4\]](#eq:Ereleps4){reference-type="eqref" reference="eq:Ereleps4"}. The third term is directly controlled by $\|\nabla \cdot \xi -\lambda\|_\infty \|B\cdot \xi\|_\infty \mathcal{E}_\varepsilon(t)$. The analogous argument holds for the sixth term. For the fifth term we use Young's inequality: $$\begin{aligned}
&\int\left(V_\varepsilon+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_\varepsilon)}\right)\nu_\varepsilon\cdot(\mathrm{Id}-\xi\otimes\xi)B|\nabla u_\varepsilon|\,\mathrm{d}x \\
&\leq\frac{1}{4}\int\frac{1}{\varepsilon}\left(V_\varepsilon+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_\varepsilon)}\right)^2\,\mathrm{d}x
+\int(\nu_\varepsilon\cdot(\mathrm{Id}-\xi\otimes\xi)B)^2\varepsilon|\nabla u_\varepsilon|^2\,\mathrm{d}x \\
&\leq\frac{1}{4}\int\frac{1}{\varepsilon}\left(V_\varepsilon+(\nabla\cdot\xi-\lambda)\sqrt{2W(u_\varepsilon)}\right)^2\,\mathrm{d}x
+\|B\|_\infty^2\int|\nu_\varepsilon-(\nu_\varepsilon\cdot\xi)\xi|^2\varepsilon|\nabla u_\varepsilon|^2\,\mathrm{d}x.
\end{aligned}$$ The first term is absorbed in the first term on the left-hand side of [\[eq:relEneps\]](#eq:relEneps){reference-type="eqref" reference="eq:relEneps"}. The second term is estimated by [\[eq:Ereleps3\]](#eq:Ereleps3){reference-type="eqref" reference="eq:Ereleps3"}. The seventh term is controlled by [\[eq:Ereleps2\]](#eq:Ereleps2){reference-type="eqref" reference="eq:Ereleps2"}, since $(\nu_\varepsilon-\xi)\cdot\nabla B(\nu_\varepsilon-\xi)\leq\|\nabla B\|_\infty|\nu_\varepsilon-\xi|^2$. For the eighth term we have, using [\[eq:transp_xi\]](#eq:transp_xi){reference-type="eqref" reference="eq:transp_xi"} and Young's inequality, $$\begin{aligned}
&(\nu_\varepsilon-\xi)\cdot\left( \partial_t \xi +(B\cdot \nabla) \xi + (\nabla B)^{\mathsf{T}} \xi \right)|\nabla \psi_\varepsilon| \\
\leq\,\,&\frac{1}{2}|\nu_\varepsilon-\xi|^2|\nabla\psi_\varepsilon|+\frac{1}{2}C\min\left\lbrace \mathop \textup{dist} \nolimits^2(\cdot,\Sigma(t)),c\right\rbrace|\nabla\psi_\varepsilon|.
\end{aligned}$$ Since $|\nabla\psi_\varepsilon|\leq\frac{1}{2}\varepsilon|\nabla u_\varepsilon|^2+\frac{1}{\varepsilon}W(u_\varepsilon)$, the eighth term is controlled by [\[eq:Ereleps2\]](#eq:Ereleps2){reference-type="eqref" reference="eq:Ereleps2"} and [\[eq:Ereleps4\]](#eq:Ereleps4){reference-type="eqref" reference="eq:Ereleps4"}. The ninth term is controlled by [\[eq:transp_absxi\]](#eq:transp_absxi){reference-type="eqref" reference="eq:transp_absxi"}. The second to last term is controlled by $\|\nabla\cdot B\|_\infty\mathcal{E}_\varepsilon(t)$. Thus it remains to estimate the fourth term and the last term.
For the last term in [\[eq:relEneps\]](#eq:relEneps){reference-type="eqref" reference="eq:relEneps"} we observe that, using $|\nu_\varepsilon\cdot\nabla B\nu_\varepsilon-\xi\cdot\nabla B\xi|\leq\|\nabla B\|_\infty|\nu_\varepsilon-\xi|$ and Young's inequality, $$\begin{aligned}
&\quad\int \nu_\varepsilon\cdot \nabla B \nu_\varepsilon\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x \\
&\leq\int\xi\cdot\nabla B\xi\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x \\
&\quad+\|\nabla B\|_\infty\int|\nu_\varepsilon-\xi|\sqrt{\varepsilon}|\nabla u_\varepsilon|\left(\sqrt{\varepsilon}|\nabla u_\varepsilon|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_\varepsilon)}\right)\,\mathrm{d}x \\
&\leq\int\xi\cdot\nabla B\xi\big(\varepsilon|\nabla u_\varepsilon|^2 - |\nabla\psi_\varepsilon|\big) \,\mathrm{d}x
+\|\nabla B\|_\infty\int|\nu_\varepsilon-\xi|^2\varepsilon|\nabla u_\varepsilon|^2\,\mathrm{d}x \\
&\quad+\|\nabla B\|_\infty\int\left(\sqrt{\varepsilon}|\nabla u_\varepsilon|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_\varepsilon)}\right)^2\,\mathrm{d}x.
\end{aligned}$$ Here, the last two terms are bounded by [\[eq:Ereleps1\]](#eq:Ereleps1){reference-type="eqref" reference="eq:Ereleps1"} and [\[eq:Ereleps3\]](#eq:Ereleps3){reference-type="eqref" reference="eq:Ereleps3"}, respectively. We compute, using Young's inequality, $$\begin{aligned}
\int\xi\otimes\xi:\nabla B(\varepsilon|\nabla u_\varepsilon|^2-|\nabla\psi_\varepsilon|)\,\mathrm{d}x
\leq\,\,&\frac{1}{2}\int\left(\xi\otimes\xi:\nabla B\right)^2\varepsilon|\nabla u_\varepsilon|^2\,\mathrm{d}x \\
&+\frac{1}{2}\int\left(\sqrt{\varepsilon}|\nabla u_\varepsilon|-\frac{1}{\sqrt{\varepsilon}}\sqrt{2W(u_\varepsilon)}\right)^2\,\mathrm{d}x.
\end{aligned}$$ Using the coercivity estimate [\[eq:weight-bilip-estimate\]](#eq:weight-bilip-estimate){reference-type="eqref" reference="eq:weight-bilip-estimate"} and [\[eq:Ereleps1\]](#eq:Ereleps1){reference-type="eqref" reference="eq:Ereleps1"}, the second summand is bounded by $\mathcal{E}_\varepsilon$. For the first term we have, by [\[eq:xixi_nablaB\]](#eq:xixi_nablaB){reference-type="eqref" reference="eq:xixi_nablaB"}, $$\begin{aligned}
\frac{1}{2}\int\left(\xi\otimes\xi:\nabla B\right)^2\varepsilon|\nabla u_\varepsilon|^2\,\mathrm{d}x
&\leq\frac{1}{2}\|\nabla B\|_\infty^2\int_{\mathop \textup{supp}\xi}|\mathbf{s}|^2\varepsilon|\nabla u_\varepsilon|^2\,\mathrm{d}x
\end{aligned}$$ which is bounded by $\mathcal{E}_\varepsilon$ by [\[eq:Ereleps4\]](#eq:Ereleps4){reference-type="eqref" reference="eq:Ereleps4"}.
Next we estimate the fourth term in [\[eq:relEneps\]](#eq:relEneps){reference-type="eqref" reference="eq:relEneps"}. Since by Gauss' theorem $$\int_{\Omega(t)}\nabla\cdot B\,\mathrm{d}x
=\int_{\Sigma(t)} B\cdot\nu\,\mathrm{d}\mathcal{H}^{d-1}=\int_{\Sigma(t)}V\,\mathrm{d}\mathcal{H}^{d-1}=\frac{\mathrm{d}}{\mathrm{d}t}|\Omega(t)|=0,$$ we have $$\begin{aligned}
\int(\lambda-\lambda_\varepsilon)\nu_\varepsilon\cdot B|\nabla\psi_\varepsilon|\,\mathrm{d}x
&=-(\lambda-\lambda_\varepsilon)\int(\nabla\cdot B)\psi_\varepsilon\,\mathrm{d}x \\
&=-(\lambda-\lambda_\varepsilon)\int(\nabla\cdot B)(\psi_\varepsilon-\chi_{\Omega(t)})\,\mathrm{d}x.
\end{aligned}$$ Furthermore, since $\frac{\mathrm{d}}{\mathrm{d}t}\int\psi_\varepsilon\,\mathrm{d}x=0=\frac{\mathrm{d}}{\mathrm{d}t}|\Omega(t)|$, we also have $$\begin{aligned}
&\quad\int(\nabla\cdot B)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x \\
&=\int(\nabla\cdot B-c)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
+\int c(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x \\
&=\int(\nabla\cdot B-c)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
+c\left(\int_{\mathbb{R}^d}\psi_\varepsilon(x,t)\,\mathrm{d}x-|\Omega(t)|\right) \\
&=\int(\nabla\cdot B-c)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
+c\left(\int_{\mathbb{R}^d}\psi_\varepsilon(x,0)\,\mathrm{d}x-|\Omega(0)|\right).
\end{aligned}$$ By the well-preparedness assumption [\[item:mass\]](#item:mass){reference-type="eqref" reference="item:mass"}, the second summand vanishes. By [\[eq:divB\]](#eq:divB){reference-type="eqref" reference="eq:divB"} we have $$\begin{aligned}
(|\lambda|+|\lambda_\varepsilon|)\int(\nabla\cdot B-c)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
&\leq C(|\lambda|+|\lambda_\varepsilon|)\int|\vartheta||\psi_\varepsilon-\chi_{\Omega}|\,\mathrm{d}x \\
&=C(|\lambda|+|\lambda_\varepsilon|)\mathcal{F}_\varepsilon.
\end{aligned}$$ Therefore we have in total $$\begin{aligned}
\label{eq:finalestimateEeps}
\notag\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_\varepsilon(t)
+\frac14 \int \frac1\varepsilon\Big(V_\varepsilon+ (\nabla \cdot \xi- \lambda)\sqrt{2W(u_\varepsilon)} \Big)^2\,\mathrm{d}x
&+ \frac12 \int \frac1\varepsilon\Big| V_\varepsilon\nu_\varepsilon- \varepsilon|\nabla u_\varepsilon|(B\cdot \xi) \xi\Big|^2 \,\mathrm{d}x \\
&\leq C(\mathcal{E}_\varepsilon(t)+\mathcal{F}_\varepsilon(t)).
\end{aligned}$$
Finally we estimate $\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_\varepsilon$ by decomposing it into a term which is bounded by $\mathcal{E}_\varepsilon+\mathcal{F}_\varepsilon$ and a small dissipation term which can be absorbed on the left-hand side of [\[eq:finalestimateEeps\]](#eq:finalestimateEeps){reference-type="eqref" reference="eq:finalestimateEeps"}.
We smuggle in $\int(B\cdot\nabla\vartheta)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
=-\int\vartheta B\cdot\nabla\psi_\varepsilon\,\mathrm{d}x-\int(\nabla\cdot B)\vartheta(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x$ and obtain $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_\varepsilon(t)
&=\int\partial_t\vartheta(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
+\int\vartheta\partial_t\psi_\varepsilon\,\mathrm{d}x-\int_{\Sigma}\vartheta V\,\mathrm{d}\mathcal{H}^{d-1} \\
&=\int(\partial_t\vartheta+B\cdot\nabla\vartheta)(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x
+\int(\nabla\cdot B)\vartheta(\psi_\varepsilon-\chi_{\Omega})\,\mathrm{d}x \\
&\quad+\int\vartheta(\partial_t\psi_\varepsilon+B\cdot\nabla\psi_\varepsilon)\,\mathrm{d}x.
\end{aligned}$$ Since $(\partial_t\vartheta+B\cdot\nabla\vartheta)=O( \mathop \textup{dist} \nolimits(\cdot,\Sigma))$ and $B$ is Lipschitz, the first two summands are bounded by $\mathcal{F}_\varepsilon$. It only remains to estimate the last integral, which amounts to estimating the error in the transport equation for $\psi_\varepsilon$.
Indeed, decomposing the vector field $B=(B\cdot \xi)\xi + (\mathrm{Id}-\xi \otimes \xi)B$ once more and applying Young's inequality, we compute $$\begin{aligned}
\int\vartheta(\partial_t\psi_\varepsilon+B\cdot\nabla\psi_\varepsilon)\,\mathrm{d}x
&\leq\int\vartheta\left(
\frac{1}{\varepsilon}\sqrt{2W(u_\varepsilon)}V_\varepsilon-\frac{1}{\varepsilon}\sqrt{2W(u_\varepsilon)}\varepsilon|\nabla u_\varepsilon|\nu_\varepsilon\cdot(B\cdot\xi)\xi
\right)\,\mathrm{d}x \\
&\quad+\int\vartheta|\nabla\psi_\varepsilon|B\cdot(\nu_\varepsilon-(\xi\cdot\nu_\varepsilon)\xi)\,\mathrm{d}x \\
&\leq\int\vartheta\frac{1}{\varepsilon}\sqrt{2W(u_\varepsilon)}\left(
V_\varepsilon-\varepsilon|\nabla u_{\varepsilon}|\nu_\varepsilon\cdot(B\cdot\xi)\xi
\right)\,\mathrm{d}x \\
&\quad+\|B\|_\infty\int|\vartheta||\nu_\varepsilon-(\nu_\varepsilon\cdot\xi)\xi||\nabla\psi_\varepsilon|\,\mathrm{d}x \\
&\leq 2\int\vartheta^2\frac{1}{\varepsilon}W(u_\varepsilon)\,\mathrm{d}x
+\frac{1}{4}\int\frac{1}{\varepsilon}\left(
V_\varepsilon-\varepsilon|\nabla u_{\varepsilon}|\nu_\varepsilon\cdot(B\cdot\xi)\xi
\right)^2\,\mathrm{d}x \\
&\quad+\|B\|_\infty\int|\vartheta||\nu_\varepsilon-(\nu_\varepsilon\cdot\xi)\xi||\nabla\psi_\varepsilon|\,\mathrm{d}x.
\end{aligned}$$ The first term is estimated by [\[eq:Ereleps4\]](#eq:Ereleps4){reference-type="eqref" reference="eq:Ereleps4"}. The second term is absorbed in the dissipation [\[eq:finalestimateEeps\]](#eq:finalestimateEeps){reference-type="eqref" reference="eq:finalestimateEeps"} after adding $\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}_\varepsilon$ and $\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}_\varepsilon$ together. For the last term we apply Young's inequality once more to obtain $$\begin{aligned}
\int|\vartheta||\nu_\varepsilon-(\nu_\varepsilon\cdot\xi)\xi||\nabla\psi_\varepsilon|\,\mathrm{d}x
&\leq\frac{1}{2}\int\vartheta^2|\nabla\psi_\varepsilon|\,\mathrm{d}x
+\int(1-\nu_\varepsilon\cdot\xi)|\nabla\psi_\varepsilon|\,\mathrm{d}x \\
&\leq\frac{1}{2}\int\vartheta^2\left(\frac{\varepsilon}{2}|\nabla u_\varepsilon|^2+\frac{1}{\varepsilon}W(u_\varepsilon)\right)\,\mathrm{d}x
+\mathcal{E}_\varepsilon.
\end{aligned}$$ This is again estimated by $\mathcal{E}_\varepsilon(t)$. Therefore $$\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathcal{E}_\varepsilon(t)+\mathcal{F}_\varepsilon(t)\right)
\leq C\left(\mathcal{E}_\varepsilon(t)+\mathcal{F}_\varepsilon(t)\right).\qedhere$$ ◻
Finally we give a short proof of Lemma [Lemma 1](#lem:existence_init_data){reference-type="ref" reference="lem:existence_init_data"}.
*Proof of Lemma [Lemma 1](#lem:existence_init_data){reference-type="ref" reference="lem:existence_init_data"}.* The proof is similar to the unconstrained case in [@FischerLauxSimon]. For $a\in\mathbb{R}$ define $u_\varepsilon^{a}(x)\coloneqq U(-\varepsilon^{-1}\mathbf{s}(x)-a)$, where $U$ is as in [\[eq:init_data_optimal_profile\]](#eq:init_data_optimal_profile){reference-type="eqref" reference="eq:init_data_optimal_profile"}, and let $\psi_{\varepsilon}^a\coloneqq\phi\circ u_\varepsilon^a$. Since $a\mapsto\int\psi_\varepsilon^a\,\mathrm{d}x$ is continuous and $\int\psi_\varepsilon^a\,\mathrm{d}x\rightarrow 0$ as $a\rightarrow+\infty$ and $\int\psi_\varepsilon^a\,\mathrm{d}x\rightarrow\infty$ as $a\rightarrow-\infty$, there exists, for each $\varepsilon>0$, $a_\varepsilon\in\mathbb{R}$ such that $\int\psi_\varepsilon^{a_\varepsilon}\,\mathrm{d}x=|\Omega(0)|$. Furthermore $$\begin{aligned}
\frac{d}{da}\bigg|_{a=0}\int\phi\circ u_\varepsilon^a\,\mathrm{d}x
&=\int\frac{1}{\varepsilon}\sqrt{2W(u_\varepsilon^0)}U'(-\varepsilon^{-1}\mathbf{s})\,\mathrm{d}x
=\int\frac{2}{\varepsilon}W(u_\varepsilon^0)\,\mathrm{d}x \\
&=E_\varepsilon(u_\varepsilon^0)\rightarrow\mathcal{H}^{d-1}(\Sigma(0))\neq 0,
\end{aligned}$$ and $\int\phi\circ U(-\varepsilon^{-1}\mathbf{s})\,\mathrm{d}x=|\Omega(0)|(1+O(\varepsilon))$. Hence $a_\varepsilon=O(\varepsilon)$. For simplicity we write $u_\varepsilon=u_\varepsilon^{a_\varepsilon}$. Now we compute, using $U'(s)=\sqrt{2W(U(s))}$ and $1-\nabla\mathbf{s}\cdot\xi\leq 1-|\xi|^2\leq c \mathop \textup{dist} \nolimits^2(\cdot,\Sigma(0))$, $$\begin{aligned}
\mathcal{E}_\varepsilon(0)
&=\int\left(\frac{\varepsilon}{2}|\nabla u_\varepsilon|^2+\frac{1}{\varepsilon}W(u_\varepsilon)\right)\,\mathrm{d}x
+\int\xi\cdot\nabla\psi_\varepsilon\,\mathrm{d}x \\
&=\int\left(\frac{1}{2\varepsilon}\left|U'\left(-\varepsilon^{-1}\mathbf{s}(x)-a_\varepsilon\right)\right|^2+\frac{1}{\varepsilon}W(u_\varepsilon(x))\right)\,\mathrm{d}x
-\int\frac{2}{\varepsilon}W(u_\varepsilon)\xi\cdot\nabla\mathbf{s}(x)\,\mathrm{d}x \\
&=\int(1-\nabla\mathbf{s}\cdot\xi)\frac{2}{\varepsilon}W(u_\varepsilon)\,\mathrm{d}x \\
&\leq c\varepsilon^2\int\left(\frac{ \mathop \textup{dist} \nolimits(x,\Sigma(0))}{\varepsilon}\right)^2\frac{2}{\varepsilon}W(u_\varepsilon)\,\mathrm{d}x.
\end{aligned}$$ Hence $\mathcal{E}_\varepsilon(0)=O(\varepsilon^2)$. The bulk error $$\begin{aligned}
\mathcal{F}_\varepsilon(0)
&=\int\vartheta(x)(\phi(U(-\varepsilon^{-1}\mathbf{s}(x)-a_\varepsilon))-\chi_{\Omega(0)}(x))\,\mathrm{d}x
\end{aligned}$$ is also $O(\varepsilon^2)$, since $c \mathop \textup{dist} \nolimits(\cdot,\Sigma(0))\leq|\vartheta|\leq C \mathop \textup{dist} \nolimits(\cdot,\Sigma(0))$ and $U(s)\rightarrow 0$ as $s\rightarrow-\infty$ and $U(s)\rightarrow 1$ as $s\rightarrow+\infty$. Hence $u_\varepsilon$ satisfies condition [\[item:optimalrate\]](#item:optimalrate){reference-type="eqref" reference="item:optimalrate"}. ◻
# Acknowledgments {#acknowledgments .unnumbered}
This project has received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2047/1 -- 390685813.
---
abstract: |
We prove the following. Let $\mu_{1},\ldots,\mu_{n}$ be Borel probability measures on $[-1,1]$ such that $\mu_{j}$ has finite $s_j$-energy for certain indices $s_{j} \in (0,1]$ with $s_{1} + \ldots + s_{n} > 1$. Then, the multiplicative convolution of the measures $\mu_{1},\ldots,\mu_{n}$ has power Fourier decay: there exists a constant $\tau = \tau(s_{1},\ldots,s_{n}) > 0$ such that $$\left| \int e^{-2\pi i \xi \cdot x_{1}\cdots x_{n}} \, d\mu_{1}(x_{1}) \cdots \, d\mu_{n}(x_{n}) \right| \leq |\xi|^{-\tau}$$ for sufficiently large $|\xi|$. This verifies a suggestion of Bourgain from 2010.
address:
- |
Department of Mathematics and Statistics\
University of Jyväskylä, P.O. Box 35 (MaD)\
FI-40014 University of Jyväskylä\
Finland
- CNRS, Institut Galilée, Université Sorbonne Paris Nord, 99 avenue J.-B. Clément, 93430 Villetaneuse
- |
Department of Mathematics\
The University of British Columbia\
1984 Mathematics Road, Vancouver, BC\
Canada
author:
- Tuomas Orponen, Nicolas de Saxcé, and Pablo Shmerkin
bibliography:
- references2.bib
title: On the Fourier decay of multiplicative convolutions
---
# Introduction
In 2010, Bourgain [@Bo2 Theorem 6] proved the following remarkable Fourier decay property for multiplicative convolutions of Frostman measures on the real line.
**Theorem 1** (Fourier decay for multiplicative convolutions). *For all $s>0$, there exists $\epsilon>0$ and $n\in\mathbb{Z}_+$ such that the following holds for every $\delta>0$ sufficiently small.\
If $\mu$ is a probability measure on $[-1,1]$ satisfying $$\forall r\in[\delta,\delta^{\epsilon}],\quad \sup_{a\in [-1,1]} \mu(B(a,r)) < r^s$$ then for all $\xi\in\mathbb{R}$ with $\delta^{-1} \leq \lvert\xi\rvert \leq 2\delta^{-1}$, $$\label{fd}
\left\lvert\int e^{2\pi i\xi x_1\dots x_n} d\mu(x_1)\dots d\mu(x_n)\right\rvert \leq \delta^{\epsilon}.$$*
This result found striking applications in the Fourier decay of fractal measures and resulting spectral gaps for hyperbolic surfaces [@BourgainDyatlov17; @SahlstenStevens22]. It was recently generalised to higher dimensions by Li [@Li21].
At the end of the introduction of [@Bo2], Bourgain proposes to study the optimal relation between $s$ and $n$. Our goal here is to show that, as suggested by Bourgain, Theorem [Theorem 1](#bourgain_decay){reference-type="ref" reference="bourgain_decay"} holds under the condition $n>1/s$, which is optimal up to the endpoint, as we shall see in Example [Example 1](#hi){reference-type="ref" reference="hi"} below.
The statement we obtain applies more generally to multiplicative convolutions of different measures, and our proof also allows us to replace the Frostman condition by a slightly weaker condition. Precisely, for a finite Borel measure $\mu$ on $\mathbb{R}$, given $s\in(0,1]$ and $\delta>0$, the *$s$-energy* of $\mu$ is defined as $$I_s(\mu) := \iint \lvert x-y\rvert^{-s}\,d\mu(x)\,d\mu(y).$$ We refer the reader to [@mattila] for the basic properties of the energy of a measure. As in Bourgain's theorem, we shall be mostly interested in the properties of measures up to some fixed small scale $\delta$; for that reason, we also define the $s$-energy of $\mu$ at scale $\delta$ by $$I^{\delta}_{s}(\mu) = I_s(\mu_\delta),$$ where $\mu_\delta=\mu*P_\delta$ is the regularisation of $\mu$ at scale $\delta$, and $P_\delta$ a smooth approximate unit of size $\delta$. The main result of the present article is the following.
**Theorem 1** (Fourier decay under optimal entropy condition). *Let $n \geq 2$, and $\{s_{j}\}_{j = 1}^{n} \subset (0,1]$ such that $\sum s_{j} > 1$. Then, there exist $\delta_{0},\epsilon,\tau \in (0,1]$, depending only on the parameters above, such that the following holds for $\delta \in (0,\delta_{0}]$. Let $\mu_{1},\ldots,\mu_{n}$ be Borel probability measures on $[-1,1]$ satisfying the energy conditions $$\label{frostman}
I_{s_{j}}^{\delta}(\mu_{j}) \leq \delta^{-\epsilon}, \qquad 1 \leq j \leq n.$$ Then, for all $\xi$ satisfying $\delta^{-1} \leq |\xi| \leq 2\delta^{-1}$, $$\label{form9}
\left| \int e^{-2\pi i \xi x_{1}\dots x_{n}} \, d\mu_{1}(x_{1}) \dots \, d\mu_{n}(x_{n}) \right| \leq |\xi|^{-\tau}.$$*
*Remark 1*. It is not difficult to check that the Frostman condition $\mu(B(a,r))\leq r^s$ from Bourgain's Theorem [Theorem 1](#bourgain_decay){reference-type="ref" reference="bourgain_decay"} is stronger than the assumption on the $s$-energy at scale $\delta$ used above. The reader is referred to Lemma [Lemma 1](#frostman-energy){reference-type="ref" reference="frostman-energy"} for a detailed argument.
*Remark 1*. The values of the parameters $\delta_{0},\epsilon > 0$ stay bounded away from $0$ as long as $\min \{s_{1},\ldots,s_{n}\} > 0$ stays bounded away from $0$, and $\sum_{j} s_{j} > 1$ stays bounded away from $1$, and $n$ ranges in a bounded subset of $\mathbb{N}$.
The following corollary is immediate:
**Corollary 1**. *Let $n\ge 2$, and $\{ s_j\}_{j=1}^{n}\subset (0,1]$ such that $\sum s_j>1$. There exists $\tau=\tau(n,\{s_j\})>0$ such that the following holds. Let $\mu_1,\dots,\mu_n$ be Borel probability measures on $\mathbb{R}$ such that $I_{s_j}(\mu_j)<+\infty$. Then there is $C=C(\{\mu_j\})>0$ such that $$\label{decay}
\left| \int e^{-2\pi i \xi x_{1}\dots x_{n}} \, d\mu_{1}(x_{1}) \dots \, d\mu_{n}(x_{n}) \right| \leq C\cdot |\xi|^{-\tau}, \quad\xi\in\mathbb{R}.$$*
Writing $\mu_1\boxtimes\cdots\boxtimes\mu_n$ for the image of the measure $\mu_1\times\dots\times\mu_n$ under the product map $(x_1,\dots,x_n)\mapsto x_1\dots x_n$, the Fourier decay condition [\[decay\]](#decay){reference-type="eqref" reference="decay"} implies that additive convolution powers of $\mu_1\boxtimes\cdots\boxtimes\mu_n$ become absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$, with arbitrarily smooth densities. In particular, if $A_i$ denotes the support of the measure $\mu_i$, for $i=1,\dots,n$, a sumset of the product set $$A_1A_2\dots A_n = \{a_1a_2\dots a_n\ :\ a_i\in A_i\}$$ must contain a non-empty interval. This observation, together with the example below, shows that the condition $\sum s_j>1$ used in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is essentially optimal.
*Example 1*. Given $s\in(0,1)$ and an increasing sequence of integers $(n_k)_{k\geq 1}$, define a subset $H_s$ in $\mathbb{R}$ by $$H_s = \{x\in [0,1]\ : \ \forall k\geq 1,\ d(x,n_k^{-s}\mathbb{Z})\leq n_k^{-1}\}.$$ If $(n_k)$ grows fast enough, then both $H_s$ and the additive subgroup it generates will have Hausdorff dimension $s$.
Now assume that the parameters $s_1,\dots,s_n$ satisfy $\sum s_i<1$. Fixing $s_i'>s_i$ such that one still has $\sum s_i'<1$, Frostman's lemma yields probability measures $\mu_i$ supported on $H_{s_i'}$ and satisfying $\mu_i(B(a,r))<r^{s_i}$ for all $r>0$ sufficiently small. However, since the support $A_i$ of $\mu_i$ satisfies $A_i\subset H_{s_i'}$, one has $$A_1\dots A_n \subset \left\{x\in[0,1]\ : \forall k\geq 1,\ d\left(x,n_k^{-\sum s_i'}\mathbb{Z}\right)\leq n\cdot n_k^{-1}\right\}.$$ This shows that the subgroup generated by $A_1\dots A_n$ has dimension bounded above by $\sum s_i'<1$ and so is not equal to $\mathbb{R}$. So the measure $\mu_1\boxtimes\cdots\boxtimes\mu_n$ cannot have polynomial Fourier decay.
An analogue of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} in the prime field setting was obtained by Bourgain in [@bourgain_primefield], and our proof follows a similar general strategy, based on sum-product estimates and flattening for additive-multiplicative convolutions of measures. Example [Example 1](#hi){reference-type="ref" reference="hi"} above shows that there exist compact sets $A$ and $B$ in $\mathbb{R}$ such that the additive subgroup $\langle AB\rangle$ generated by the product set $AB$ satisfies $\dim_{\mathrm{H}}\langle AB\rangle \leq \dim_{\mathrm{H}}A + \dim_{\mathrm{H}}B.$ Conversely, it was shown in [@2023arXiv230110199O] as a consequence of the discretised radial projection theorem [@osw_radialprojection] that for Borel sets $A,B \subset \mathbb{R}$, one has $$\label{2ab}
\dim_{\mathrm{H}}(AB+AB - AB-AB) \geq \min\{\dim_{\mathrm{H}}A + \dim_{\mathrm{H}}B,1\}.$$ The main ingredient in the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is a discretised version of this inequality; the precise statement is given below as Proposition [Proposition 1](#prop:expansion){reference-type="ref" reference="prop:expansion"} and is taken from [@2023arXiv230110199O Proposition 3.7]. It can be understood as a precise version of the discretised sum-product theorem, which allows us to improve on the strategy used by Bourgain in [@Bo2] and obtain Fourier decay of multiplicative convolutions under optimal entropy conditions. Before turning to the detailed proof, let us give a general idea of the argument.
## Notation {#notation .unnumbered}
We fix for the rest of the article a standard, $L^1$-normalized approximate identity $\{P_{\delta}\}_{\delta > 0} = \{\delta^{-1}P(\cdot/\delta)\}_{\delta > 0}$. Given a measure $\mu$ on $\mathbb{R}$, recall that we write $\mu_\delta$ for the density of $\mu$ at scale $\delta$, or equivalently, $\mu_\delta=\mu*P_\delta$.
Below, we shall use both additive and multiplicative convolution of measures. To avoid any confusion, we write $\mu\boxplus\nu$, $\mu\boxminus\nu$ and $\mu\boxtimes\nu$ to denote the image of $\mu\times\nu$ under the maps $(x,y)\mapsto x+y$, $(x,y)\mapsto x-y$, and $(x,y)\mapsto xy$, respectively. Similarly, we denote additive and multiplicative $k$-convolution powers of measures by $\mu^{\boxplus k}$ and $\mu^{\boxtimes k}$, respectively.
The push-forward of a Borel measure $\mu$ on the real line under a Borel map $g:\mathbb{R}\to\mathbb{R}$ is denoted $g_{\sharp}\mu$, that is, $$\int f\,d(g_{\sharp}\mu) = \int f\circ g\,d\mu.$$
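These objects are straightforward to approximate numerically, which is convenient when experimenting with concrete measures. The following minimal sketch (ours, for illustration only: the helper names are invented, $P_\delta$ is replaced by a crude box kernel, and the Fourier transform of $\mu\boxtimes\nu$ is estimated by plain Monte Carlo) computes the energy $I_s^{\delta}(\mu)$ of a discretised measure and $\widehat{\mu\boxtimes\nu}(\xi)$ at a single frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_at_scale(points, weights, s, delta, grid=2**11):
    """Estimate I_s^delta(mu) = I_s(mu * P_delta) for a measure on [-1, 1] given by
    atoms `points` with masses `weights`, using a box kernel of width 2*delta for P_delta."""
    xs = np.linspace(-1.0, 1.0, grid)
    dens = np.zeros(grid)
    for p, w in zip(points, weights):
        mask = np.abs(xs - p) <= delta
        dens[mask] += w / max(mask.sum(), 1)      # spread each atom over its delta-window
    diff = np.abs(xs[:, None] - xs[None, :])
    np.fill_diagonal(diff, (xs[1] - xs[0]) / 2)   # crude regularisation of the diagonal
    return float(dens @ diff ** (-s) @ dens)

def fourier_mult_conv(sample_mu, sample_nu, xi, n=200_000):
    """Monte Carlo estimate of the Fourier transform of mu x nu (multiplicative convolution) at xi."""
    x, y = sample_mu(n), sample_nu(n)
    return np.exp(-2j * np.pi * xi * x * y).mean()

# Example: mu = nu = (a discretisation of) the uniform measure on [-1, 1], delta = 0.01.
pts, wts, delta = np.linspace(-1, 1, 400), np.full(400, 1 / 400), 0.01
print("I_{1/2}^delta(mu) ~", energy_at_scale(pts, wts, 0.5, delta))   # finite, roughly 1.9
unif = lambda n: rng.uniform(-1, 1, n)
print("|hat(mu x nu)(1/delta)| ~", abs(fourier_mult_conv(unif, unif, 1 / delta)))
# the second number is small: the multiplicative convolution already decays at this frequency,
# up to a Monte Carlo error of order n^{-1/2}
```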
## Sketch of proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} {#sketch-of-proof-of-theorem-main .unnumbered}
The $n=2$ case of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is classical and already appears in Bourgain's paper [@Bo2 Theorem 7]: If $\mu$ and $\nu$ are two probability measures on $[-1,1]$ such that $\lVert\mu_\delta\rVert_2^2\leq\delta^{-1+s}$ and $\lVert\nu_\delta\rVert_2^2\leq\delta^{-1+t}$, then the multiplicative convolution $\mu\boxtimes\nu$ satisfies $$\lvert\widehat{\mu\boxtimes\nu}(\xi)\rvert\lesssim\delta^{\frac{s+t-1}{2}},\quad \delta^{-1}\leq\lvert\xi\rvert\leq 2\delta^{-1}.$$ For the reader's convenience we record the detailed argument below, see Section [2](#sec:n2){reference-type="ref" reference="sec:n2"}.
We want to use induction to reduce to this base case. To explain the induction step, we focus on the case $n=3$. The main point is to translate equation [\[2ab\]](#2ab){reference-type="eqref" reference="2ab"} into a flattening statement for additive-multiplicative convolutions of measures. For simplicity, assume we knew that if $\mu$ and $\nu$ are probability measures on $[-1,1]$, then the measure $$\eta := (\mu\boxtimes\nu)\boxplus(\mu\boxtimes\nu) \boxminus(\mu\boxtimes\nu)\boxminus(\mu\boxtimes\nu)$$ satisfies, for $\epsilon>0$ arbitrarily small, $$\label{ab_measures}
\lVert\eta_\delta\rVert_2^2 \leq \delta^{1-\epsilon} \lVert\mu_\delta\rVert_2^2\lVert\nu_\delta\rVert_2^2.$$ (Note that this is the exact analogue of [\[2ab\]](#2ab){reference-type="eqref" reference="2ab"} for $L^2$-dimensions of measures at scale $\delta$.) If $\mu_1$, $\mu_2$ and $\mu_3$ satisfy $\lVert(\mu_i)_\delta\rVert_{2}^2\leq\delta^{-1+s_i}$ for some parameters $s_i$ with $s_1+s_2+s_3>1$, we apply the above inequality to $\mu_1$ and $\mu_2$ to obtain $$\lVert\eta_\delta\rVert_2^2 \leq \delta^{-1-\epsilon+s_1+s_2},$$ where $\eta=(\mu_1\boxtimes\mu_2)\boxplus(\mu_1\boxtimes\mu_2) \boxminus(\mu_1\boxtimes\mu_2)\boxminus(\mu_1\boxtimes\mu_2)$. If $\epsilon$ is chosen small enough, we have $(s_1+s_2-\epsilon)+s_3>1$, and so we may apply the $n=2$ case to the measures $\eta$ and $\mu_3$ to get, for $\delta^{-1}<\lvert\xi\rvert<2\delta^{-1}$, $$\lvert\widehat{\eta\boxtimes\mu_3}(\xi)\rvert<\delta^{\frac{s_1+s_2+s_3-\epsilon-1}{2}}.$$ To conclude, one observes from the Cauchy-Schwarz inequality that for any two probability measures $\mu$ and $\nu$, one always has $\lvert\widehat{\mu\boxtimes\nu}(\xi)\rvert^2\leq\widehat{(\mu\boxminus\mu)\boxtimes\nu}(\xi)$. This elementary observation applied twice yields $$\lvert\widehat{\mu_1\boxtimes\mu_2\boxtimes\mu_3}(\xi)\rvert^4 \leq \widehat{\eta\boxtimes\mu_3}(\xi) < \delta^{\frac{1}{2}(s_1+s_2+s_3-\epsilon-1)}$$ which is the desired Fourier decay, with parameter $\tau=\frac{1}{8}(s_1+s_2+s_3-\epsilon-1)$.
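For a concrete instance of this bookkeeping (the numerical values below are ours, chosen only to illustrate the size of the exponent): taking $s_1=s_2=s_3=\tfrac{2}{5}$ and $\epsilon=\tfrac{1}{100}$ in the sketch gives $$\tau=\frac{1}{8}\left(\frac{6}{5}-\frac{1}{100}-1\right)=\frac{19}{800}\approx 0.024,$$ a small but genuinely positive power of decay.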
Unfortunately, the assumptions on the $L^2$-norms of $\mu$ and $\nu$ are not sufficient to ensure inequality [\[ab_measures\]](#ab_measures){reference-type="eqref" reference="ab_measures"} in general. One also needs some kind of non-concentration condition on $\mu$ and $\nu$, and for that purpose we use the notion of energy of the measure at scale $\delta$, which gives information on the behaviour of the measure at all scales between $\delta$ and $1$. The precise statement we use for the induction is given as Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} below. It is also worth noting that to obtain the correct bound on the energy, we need to use a large number $k$ of additive convolutions, whereas $k=4$ was sufficient in the analogous statement [\[2ab\]](#2ab){reference-type="eqref" reference="2ab"} for Hausdorff dimension of sum-product sets. We do not know whether Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} holds for $k=4$, or even for $k$ bounded by some absolute constant.
We conclude this introduction by an example showing that for $n \geq 3$, the assumption $I_{s_{j}}^{\delta}(\mu_{j}) \leq\delta^{-\epsilon}$ in Theorem [Theorem 1](#main){reference-type="ref" reference="main"} cannot be replaced by the \"single-scale\" $L^{2}$-bound $\|\mu_{\delta}\|_{2}^{2} \leq \delta^{s_{j} - 1-\epsilon}$. This is mildly surprising, because the situation is opposite in the case $n = 2$, as shown by Proposition [Proposition 1](#basecase){reference-type="ref" reference="basecase"} below. We only write down the details of the example in the case $n = 3$, but it is straightforward to generalise to $n \geq 3$.
*Example 1*. For every $s \in (0,\tfrac{1}{2})$ and $\delta_{0} > 0$ there exists a scale $\delta \in (0,\delta_{0}]$ and a Borel probability measure $\mu = \mu_{\delta,s}$ on $[1,2]$ with the following properties:
- $\|\mu\|_{2}^{2} \sim \delta^{s - 1}$.
- $|\widehat{\mu\boxtimes\mu\boxtimes\mu}(\delta^{-1})| \sim 1$.
The building block for the construction is the following. For $r \in 2^{-\mathbb{N}}$, and a suitable absolute constant $c > 0$, let $\mathcal{I} = \mathcal{I}_{r}$ be a family of $r^{-1}$ intervals of length $cr$, centred around the points $r\mathbb{Z}\cap [0,1]$. Then, if $c > 0$ is small enough, we have $$\cos(2\pi x/r) \geq \tfrac{1}{2}, \qquad \forall x \in \cup \mathcal{I}.$$ Consequently, if $\boldsymbol\rho$ is any probability measure supported on $\cup \mathcal{I}$, then $|\widehat{\boldsymbol\rho}(r^{-1})| \geq \tfrac{1}{2}$.
Fix $s \in (0,\tfrac{1}{2})$, $\delta > 0$, and let $\boldsymbol\rho$ be the uniform probability measure on the intervals $\mathcal{I}_{\delta^{s}}$. As we just discussed, $|\widehat{\boldsymbol\rho}(\delta^{-s})| \sim 1$. Next, let $\mu = \mu_{\delta,s}$ be a rescaled copy of $\boldsymbol\rho$ inside the interval $[1,1 + \delta^{1 - s}] \subset [1,2]$. More precisely, $\mu = \tau_{\sharp} \lambda_{\sharp} \boldsymbol\rho$, where $\tau(x) = x + 1$ and $\lambda(x) = \delta^{1 - s}x$. Now $\mu$ is a uniform probability measure on a collection of $\delta^{-s}$ intervals of length $\delta$, and consequently $\|\mu\|_{2}^{2} \sim \delta^{s - 1}$.
We next investigate the Fourier transform of $\mu\boxtimes\mu\boxtimes\mu$. Writing $\mu' := \lambda_{\sharp} \boldsymbol\rho$, we have $$\widehat{\mu\boxtimes\mu\boxtimes\mu}(\delta^{-1}) = \iiint e^{-2\pi i \delta^{-1}(x + 1)(y + 1)(z + 1)} \, d\mu'(x) \, d\mu'(y) \, d\mu'(z).$$ We expand $$\delta^{-1}(x + 1)(y + 1)(z + 1) = \delta^{-1}xyz + \delta^{-1}(xy + xz + yz) + \delta^{-1}(x + y + z) + \delta^{-1}.$$ Now the key point: since $x,y,z \in \operatorname{spt}\mu' \subset [0,\delta^{1 - s}]$, we have both $|\delta^{-1} xyz| \leq \delta^{2 - 3s}$ and $|\delta^{-1}(xy + xz + yz)| \lesssim \delta^{1 - 2s}$. Since $s < \tfrac{1}{2}$, both exponents $2 - 3s$ and $1 - 2s$ are strictly positive and consequently, $$e^{-2\pi i \delta^{-1}(x + 1)(y + 1)(z + 1)} = e^{-2\pi i \delta^{-1}(x + y + z + 1)} + o_{\delta \to 0}(1).$$
Using this, and also that $\hat{\mu}'(\xi) = \widehat{\boldsymbol\rho}(\delta^{1 - s}\xi)$, we find $$\begin{aligned}
\widehat{\mu\boxtimes\mu\boxtimes\mu}(\delta^{-1}) & = e^{-2\pi i \delta^{-1}}\iiint e^{-2\pi i \delta^{-1}(x + y + z)} \, d\mu'(x) \, d\mu'(y) \, d\mu'(z) + o_{\delta \to 0}(1)\\
& = e^{-2\pi i \delta^{-1}} (\widehat{\boldsymbol\rho}(\delta^{-s}))^{3} + o_{\delta \to 0}(1).\end{aligned}$$ In particular, $|\widehat{\mu\boxtimes\mu\boxtimes\mu}(\delta^{-1})| \sim 1$ for $\delta > 0$ sufficiently small.
If we allow $\mu$ to be supported on $[-1,1]$, as in Theorem [Theorem 1](#main){reference-type="ref" reference="main"}, an even simpler example is available, namely $\mu=\delta^{-1+s}\mathbf{1}_{[0,c\delta^{1-s}]}$, where $s<2/3$. Then $\mu\boxtimes\mu\boxtimes\mu$ is supported on $[0,c^3\delta^{3-3s}]\subset [0,c\delta]$, so it satisfies $\lvert\widehat{\mu\boxtimes\mu\boxtimes\mu}(\delta^{-1})\rvert\gtrsim 1$ if $c>0$ is chosen small enough.
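Both assertions of the example are easy to probe numerically. The following Monte Carlo sketch (ours; the values $s=0.4$, $\delta=10^{-8}$, $c=0.1$ and the sample size are arbitrary illustrative choices) samples from the measure $\mu=\mu_{\delta,s}$ constructed above and checks that $|\widehat{\mu\boxtimes\mu\boxtimes\mu}(\delta^{-1})|$ remains of order one, close to the idealised prediction $|\widehat{\boldsymbol\rho}(\delta^{-s})|^{3}=(\sin(\pi c)/(\pi c))^{3}$.

```python
import numpy as np

rng = np.random.default_rng(1)
s, delta, c = 0.4, 1e-8, 0.1           # illustrative parameters with s < 1/2
M = int(round(delta ** (-s)))           # ~ delta^{-s} intervals

def sample_mu(n):
    """Sample from mu = mu_{delta,s}: uniform on M intervals of length c*delta,
    centred at the points 1 + k*delta, k = 0, ..., M-1."""
    k = rng.integers(0, M, size=n)
    return 1.0 + k * delta + c * delta * (rng.random(n) - 0.5)

n = 400_000
x, y, z = sample_mu(n), sample_mu(n), sample_mu(n)
emp = np.exp(-2j * np.pi * (x * y * z) / delta).mean()
pred = (np.sin(np.pi * c) / (np.pi * c)) ** 3       # |rho_hat(delta^{-s})|^3 for the idealised profile

print(f"|hat(mu x mu x mu)(1/delta)| ~ {abs(emp):.3f}   (idealised prediction {pred:.3f})")
print(f"||mu||_2^2 = 1/(M*c*delta) = {1/(M*c*delta):.2e}   vs   delta^(s-1) = {delta**(s-1):.2e}")
```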
# The base case $n=2$ {#sec:n2}
In the $n=2$ case, Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is proved by a direct elementary computation. In fact, to obtain the desired Fourier decay, one only needs an assumption on the $L^2$-norms of the measures at scale $\delta$.
**Proposition 1** (Base case $n=2$). *Let $\delta \in (0,1]$ and let $\mu,\nu$ be Borel probability measures on $[-1,1]$ and $s,t \in [0,1]$ such that $$\lVert\mu_\delta\rVert_2^2\leq\delta^{-1+s}
\quad\mbox{and}\quad
\lVert\nu_\delta\rVert_2^2\leq\delta^{-1+t}.$$ Then, for all $\xi$ with $\delta^{-1} \leq |\xi| \leq 2\delta^{-1}$, $$\left| \iint e^{-2\pi i \xi \cdot xy} \, d\mu(x) \, d\nu(y) \right|
\lesssim \delta^{\frac{s+t-1}{2}}.$$*
*Remark 1*. If $\mu$ and $\nu$ are equal to the normalized Lebesgue measure on balls of size $\delta^{1-s}$ and $\delta^{1-t}$, respectively, the assumptions of the proposition are satisfied. In that case, the multiplicative convolution $\mu\boxtimes\nu$ is supported on a ball of size $\delta^{1-\max(s,t)}$, so the Fourier decay cannot hold for $\lvert\xi\rvert\leq\delta^{-1+\max(s,t)}$.
The above proposition is an easy consequence of the lemma below, which is essentially [@Bo2 Theorem 7], except that we keep slightly more careful track of the constants. We include the proof for completeness.
**Lemma 1**. *Let $\delta \in (0,1]$, and let $\mu,\nu$ be Borel probability measures on $[-1,1]$ with $$A := \int_{|\xi| \leq 2\delta^{-1}} |\hat{\mu}(\xi)|^{2} \, d\xi \quad \text{and} \quad B := \int_{|\xi| \leq 2\delta^{-1}} |\hat{\nu}(\xi)|^{2} \, d\xi.$$ Then, for all $\xi$ with $1\leq\lvert\xi\rvert\leq\delta^{-1}$, $$\label{form3}
\left| \iint e^{-2\pi i \xi \cdot xy} \, d\mu(x) \, d\nu(y) \right| \lesssim \sqrt{AB/|\xi|} + \delta.$$*
*Proof.* Let $\varphi \in C^{\infty}_{c}(\mathbb{R})$ be an auxiliary function with the properties $\mathbf{1}_{[-1,1]} \leq \varphi \leq \mathbf{1}_{[-2,2]}$ (thus $\varphi \equiv 1$ on $\operatorname{spt}\mu$) and $\widehat{\varphi} \geq 0$. Fixing $1 \leq |\xi| \leq \delta^{-1}$, the left-hand side of [\[form3\]](#form3){reference-type="eqref" reference="form3"} can be estimated by $$\begin{aligned}
\left| \iint e^{-2\pi i \xi \cdot xy} \, d\mu(x) \, d\nu(y) \right|
& = \left| \int \widehat{\varphi \mu}(y\xi) \, d\nu(y)\right| \leq \iint \widehat{\varphi}(x - y \xi) |\hat{\mu}(x)| \, dx \, d\nu(y)\\
& = \int |\hat{\mu}(x)| \left( \int \widehat{\varphi}(x - y \xi) \, d\nu(y) \right) \, dx.\end{aligned}$$ We split the right-hand side as the sum $$\int_{|x| \leq 2\delta^{-1}} |\hat{\mu}(x)| \left( \int \widehat{\varphi}(x - y\xi) \, d\nu(y) \right) \, dx + \iint_{|x| \geq 2\delta^{-1}} |\hat{\mu}(x)| \widehat{\varphi}(x - y\xi) \, dx \, d\nu(y) =: I_{1} + I_{2}.$$ For the term $I_{2}$, we use that $\widehat{\varphi}(x - y\xi) \lesssim |x - y\xi|^{-2}$, $|\hat{\mu}(x)| \leq 1$, and $\nu(\mathbb{R}) = 1$: $$I_{2} \lesssim \max_{y \in [-1,1]} \int_{|x| \geq 2\delta^{-1}} \frac{dx}{|x - y\xi|^{2}} \lesssim \int_{|x| \geq \delta^{-1}} \frac{dx}{|x|^{2}} \lesssim \delta.$$ For the term $I_{1}$, we first use the Cauchy-Schwarz inequality and the definition of $A$ to deduce $$I_{1} \leq \sqrt{A}\left( \int \left[ \int \widehat{\varphi}(x - y\xi) \, d\nu(y) \right]^{2} \, dx \right)^{1/2}.$$ Finally, for the remaining factor, assume $\xi > 0$ without loss of generality, and write $\widehat{\varphi}(x - y\xi) = \widehat{\varphi_{\xi}}(x/\xi - y)$, where $\varphi_{\xi} = \xi^{-1}\varphi(\cdot/\xi)$. With this notation, and by Plancherel's formula, $$\begin{aligned}
\int \left[ \int \widehat{\varphi}(x - y\xi) \, d\nu(y) \right]^{2} \, dx
& = \int (\widehat{\varphi_{\xi}} \ast \nu)(x/\xi)^{2} \, dx
= \xi \int (\widehat{\varphi_{\xi}} \ast \nu)(z)^{2} \, dz\\
& = \xi \int \varphi_{\xi}(u)^{2}|\hat{\nu}(u)|^{2} \, du
\lesssim \xi^{-1} \int_{\operatorname{spt}\varphi_{\xi}} |\hat{\nu}(u)|^{2} \, du.\end{aligned}$$ Finally, recall that $\operatorname{spt}\varphi \subset [-2,2]$, so $\operatorname{spt}\varphi_{\xi} \subset [-2\xi,2\xi] \subset [-2\delta^{-1},2\delta^{-1}]$. This shows that $I_{1} \lesssim \sqrt{AB/\xi}$, and the proof of [\[form3\]](#form3){reference-type="eqref" reference="form3"} is complete. ◻
*Proof of Proposition [Proposition 1](#basecase){reference-type="ref" reference="basecase"}.* Observe that by Plancherel's formula $$A = \int_{\lvert\xi\rvert\leq 4\delta^{-1}} \lvert\hat{\mu}(\xi)\rvert^2\,d\xi
\leq \lVert\mu_{\frac{\delta}{10}}\rVert_2^2
\lesssim \lVert\mu_\delta\rVert_2^2
\leq \delta^{-1+s}$$ and similarly $$B = \int_{\lvert\xi\rvert\leq 4\delta^{-1}} \lvert\hat{\nu}(\xi)\rvert^2\,d\xi
\lesssim \delta^{-1+t}.$$ So Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} applied at scale $\delta/2$ implies that for $\delta^{-1}\leq\lvert\xi\rvert\leq 2\delta^{-1}$, $$\begin{aligned}
\left| \iint e^{-2\pi i \xi \cdot xy} \, d\mu(x) \, d\nu(y) \right|
& \lesssim \sqrt{AB/|\xi|} + \delta\\
& \lesssim \delta^{\frac{s+t-1}{2}}.\end{aligned}$$ ◻
# Dimension and energy of additive-multiplicative convolutions
This section is the central part of the proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}. Its goal is to derive Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} below, whose statement can be qualitatively understood in the following way: If $\mu$ and $\nu$ are two Borel probability measures on $\mathbb{R}$ with respective dimensions $s$ and $t$, then there exists some additive convolution of $\mu\boxtimes\nu$ with dimension at least $s+t-\epsilon$, where $\epsilon>0$ can be arbitrarily small. The precise formulation in terms of the energies of the measures at scale $\delta$ will be essential in our proof of Fourier decay for multiplicative convolutions.
**Lemma 1**. *For all $s,t \in (0,1]$ with $s + t \leq 1$, and for all $\kappa > 0$, there exist $\epsilon = \epsilon(s,t,\kappa) > 0$, $\delta_{0} = \delta_{0}(s,t,\kappa,\epsilon) > 0$, and $k_{0} = k_{0}(s,t,\kappa) \in \mathbb{N}$ such that the following holds for all $\delta \in (0,\delta_{0}]$ and $k \geq k_{0}$. Let $\mu,\nu$ be Borel probability measures on $[-1,1]$ satisfying $$\label{form5} I^{\delta}_{s}(\mu) \leq \delta^{-\epsilon} \quad \text{and} \quad I^{\delta}_{t}(\nu) \leq \delta^{-\epsilon}.$$ Then, with $\Pi := (\mu\boxminus\mu)\boxtimes(\nu\boxminus\nu)$, we have $$I^{\delta}_{s + t}(\Pi^{\boxplus k}) \leq \delta^{-\kappa}.$$ Moreover, the value of $k_{0}$ stays bounded as long as $\min\{s,t\} > 0$ stays bounded away from zero.*
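Before turning to the proof, the phenomenon can be observed numerically. The sketch below (ours; the Cantor-type measure, the scale and the sample sizes are arbitrary illustrative choices, and the collision-count statistic is only a rough proxy for the quantity $1+\log_{\delta}\lVert\eta_{\delta}\rVert_{2}^{2}$) estimates this "$L^2$-dimension at scale $\delta$" for $\mu$, for $\Pi=(\mu\boxminus\mu)\boxtimes(\nu\boxminus\nu)$ with $\nu=\mu$, and for a few additive convolution powers $\Pi^{\boxplus k}$.

```python
import numpy as np

rng = np.random.default_rng(2)
J, N = 7, 1_000_000
delta = 5.0 ** (-J)

def sample_cantor(n):
    """Sample from a base-5 Cantor-type measure with digits {0, 4}; its dimension is log2/log5 ~ 0.43."""
    digits = 4 * rng.integers(0, 2, size=(n, J))
    return digits @ (5.0 ** -np.arange(1, J + 1)) + delta * rng.random(n)

def l2_dim(samples):
    """Collision-count proxy for 1 + log_delta ||eta_delta||_2^2 (an 'L^2-dimension at scale delta')."""
    _, counts = np.unique(np.floor(samples / delta).astype(np.int64), return_counts=True)
    n = len(samples)
    collisions = (counts * (counts - 1.0)).sum() / (n * (n - 1.0))   # unbiased estimate of sum_i p_i^2
    return np.log(collisions) / np.log(delta)

def sample_Pi(n):
    """Sample from Pi = (mu - mu) x (nu - nu) with mu = nu the Cantor-type measure."""
    return (sample_cantor(n) - sample_cantor(n)) * (sample_cantor(n) - sample_cantor(n))

print("dim_2(mu)      ~", round(l2_dim(sample_cantor(N)), 3))   # ~ 0.43, so s = t ~ 0.43 and s + t ~ 0.86
for k in (1, 2, 4):
    eta = sum(sample_Pi(N) for _ in range(k))
    # the lemma predicts a value close to (at least) s + t ~ 0.86 once k is large enough
    print(f"dim_2(Pi^(+{k})) ~", round(l2_dim(eta), 3))
```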
The main component of the proof of Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} will be a combinatorial result from [@2023arXiv230110199O] which we will apply in the following form. Let $N(E,\delta)$ denote the smallest number of $\delta$-balls required to cover $E$.
**Lemma 1**. *For all $s,t \in (0,1]$ with $s + t \leq 1$, and for all $\kappa > 0$, there exist $\epsilon = \epsilon(s,t,\kappa) > 0$ and $\delta_{0} = \delta_{0}(s,t,\kappa,\epsilon) > 0$ such that the following holds for all $\delta \in (0,\delta_{0}]$ and $k \geq 2$. Let $\mu,\nu$ be Borel probability measures on $[-1,1]$ satisfying $$\label{form20}
I_{s}^{\delta}(\mu) \leq \delta^{-\epsilon}
\quad \text{and} \quad
I_{t}^{\delta}(\nu) \leq \delta^{-\epsilon}.$$ Let $\Pi := (\mu \boxminus\mu) \boxtimes(\nu \boxminus\nu)$ and assume that $E\subset\mathbb{R}$ is a set with $\Pi^{\boxplus k}(E) \geq \delta^{\epsilon}$. Then, $$\label{form13}
N(E,\delta) \geq \delta^{-s - t + \kappa}.$$*
Since this lemma does not explicitly appear in [@2023arXiv230110199O], we now briefly explain how to derive it from the results of that paper. Recall that a Borel measure $\mu$ on $\mathbb{R}$ is said to be *$(s,C)$-Frostman* if it satisfies $$\mu(B(x,r))\leq Cr^s \quad\text{for all }x\in\mathbb{R}, r>0.$$ The precise statement we shall need is [@2023arXiv230110199O Proposition 3.7], see also [@2023arXiv230110199O Remark 3.11], which reads as follows.
**Proposition 1**. *Given $s,t\in (0,1]$ and $\sigma\in [0,\min\{s+t,1\})$, there exist $\epsilon = \epsilon(s,t,\sigma) > 0$ and $\delta_{0} = \delta_{0}(s,t,\sigma,\epsilon) > 0$ such that the following holds for all $\delta \in (0,\delta_{0}]$.\
Let $\mu_{1},\mu_{2}$ be $(s,\delta^{-\epsilon})$-Frostman probability measures, let $\nu_{1},\nu_{2}$ be $(t,\delta^{-\epsilon})$-Frostman probability measures, all four measures supported on $[-1,1]$, and let $\boldsymbol\rho$ be an $(s + t,\delta^{-\epsilon})$-Frostman probability measure supported on $[-1,1]^{2}$. Then there is a set $\mathbf{Bad} \subset \mathbb{R}^{4}$ with $$(\mu_{1} \times \mu_{2} \times \nu_{1} \times \nu_{2})(\mathbf{Bad}) \le \delta^{\epsilon},$$ such that for every $(a_1,a_2,b_1,b_2)\in \mathbb{R}^{4} \, \setminus \, \mathbf{Bad}$ and every subset $G \subset \mathbb{R}^{2}$ satisfying $\boldsymbol\rho(G) \geq \delta^{\epsilon}$, one has $$\label{eq:robust-sum-prod}
N(\{(b_1- b_2)a + (a_1- a_2) b : (a,b) \in G\},\delta) \ge \delta^{-\sigma}.$$*
The derivation of Lemma [Lemma 1](#lemma2){reference-type="ref" reference="lemma2"} from Proposition [Proposition 1](#prop:expansion){reference-type="ref" reference="prop:expansion"} is relatively formal; it mostly uses the link between the Frostman condition and the energy at scale $\delta$, and the pigeonhole principle to construct large fibres in product sets. Let us first record an elementary statement about the energy at scale $\delta$ of a Frostman measure.
**Lemma 1** (Frostman condition and $s$-energy). *Fix $C\geq 1$, $s\in(0,d)$, and $\epsilon \in (0,\tfrac{1}{2}]$. Then the following holds for all $\delta > 0$ small enough. Let $\mu$ be a probability measure on $B(1) \subset \mathbb{R}^{d}$.*
1. *If $\mu$ satisfies $\mu(B(x,r))\leq Cr^s$ for all $x\in \mathbb{R}^{d}$ and all $r\in[\delta,\delta^\epsilon]$, then $I_s^\delta(\mu) \leq C\delta^{-d \epsilon }$.*
2. *Conversely, if $I_s^\delta(\mu)\leq\delta^{-\epsilon}$, there exists a set $A$ such that $\mu(A)\geq 1-(\log1/\delta)\delta^{\epsilon}$ and for every $r\in[\delta,1]$, $\mu|_A(B(x,r))\leq \delta^{-2\epsilon}r^s$.*
*Proof.* Assume first that $\mu$ satisfies the Frostman condition $\mu(B(x,r))\leq Cr^s$ for $r\in[\delta,\delta^\epsilon]$. It is not difficult to check that the measure $\mu_{\delta}$ with density $\mu \ast P_{\delta}$ satisfies $$\mu_{\delta}(B(x,r)) \lesssim \begin{cases} C\delta^{s - d}r^{d}, & 0 < r \leq \delta, \\ Cr^{s}, & \delta \leq r \leq \delta^{\epsilon}, \\ 1, & r \geq \delta^{\epsilon}. \end{cases}$$ Consequently, for $x \in \mathbb{R}^{d}$ fixed, $$\int |x - y|^{-s} \, d\mu_{\delta}(y)
\lesssim \sum_{2^{k} \leq \delta} C\delta^{s - d}2^{(d - s)k} + \sum_{\delta \leq 2^{k} \leq \delta^{\epsilon}} C + \sum_{2^{k} \geq \delta^{\epsilon}} 2^{-ks}
\lesssim_{s} C\delta^{-s\epsilon}.$$ Since $\mu_{\delta}$ is a probability measure, and $s < d$, this implies $I_{s}^{\delta}(\mu) \leq C\delta^{-d\epsilon}$ for $\delta > 0$ small enough.
For the converse, we observe that $$\int\lvert x-y\rvert^{-s}\,d\mu_\delta(y) = s\int \mu_\delta(B(x,r)) r^{-s-1}\,dr,$$ and so, with a change of variables, $$I_s^\delta(\mu) = s\iint \mu_\delta(B(x,r)) r^{-s}\,\frac{dr}{r}\,d\mu(x)
= (\log 2)\cdot s \iint \mu_\delta(B(x,2^{-u})) 2^{su}\,du\,d\mu(x).$$ If $I_s^\delta(\mu)\leq\delta^{-\epsilon}$, letting $$E_u=\{x\ : \ 2^{su}\mu_\delta(B(x,2^{-u}))> \delta^{-2\epsilon}\},$$ one gets $\mu(E_u)\leq\delta^{\epsilon}$. So, for $E=\bigcup E_u$, where $u=0,1,\dots,\lfloor\log1/\delta\rfloor$, we find $\mu(E) \leq (\log1/\delta)\delta^{\epsilon}.$ Thus, letting $A=\mathbb{R}^{d} \, \setminus \, E$, one indeed has $$\mu(A)\geq 1-(\log1/\delta)\delta^{\epsilon}$$ and for all $x$ in $A$, for all $r\in[\delta,1]$, $\mu(B(x,r))\lesssim \delta^{-2\epsilon}r^s$. ◻
*Proof of Lemma [Lemma 1](#lemma2){reference-type="ref" reference="lemma2"}.* First of all, we may assume that $k = 2$, since if $k > 2$, we may write $$\Pi^{\boxplus k}(E) = \int \Pi^{\boxplus 2}(E - x_{3} - \ldots - x_{k}) \, d\Pi(x_{3}) \cdots \, d\Pi(x_{k}),$$ and in particular there exists a vector $(x_{3},\ldots,x_{k})$ such that $\Pi^{\boxplus 2}(E - x_{3} - \ldots - x_{k}) \geq \delta^{\epsilon}$. After this, it suffices to prove [\[form13\]](#form13){reference-type="eqref" reference="form13"} with $E - x_{3} - \ldots - x_{k}$ in place of $E$.
Second, we may assume that the measures $\mu,\nu$ satisfy the Frostman conditions $$\mu(B(x,r)) \leq \delta^{-6\epsilon}r^{s} \quad \text{and} \quad \nu(B(x,r)) \leq \delta^{-6\epsilon}r^{t}$$ for $\delta \leq r \leq 1$ and all $x \in \mathbb{R}$. Indeed, since $I_s^\delta(\mu)\leq\delta^{-3\epsilon}$, Lemma [Lemma 1](#frostman-energy){reference-type="ref" reference="frostman-energy"} shows that there exists a Borel set $A \subset \mathbb{R}$ of measure $\mu(A) \geq 1 - \delta^{2\epsilon}$ with the property $(\mu|_{A})(B(x,r)) \leq \delta^{-6\epsilon}r^{s}$ for all $x\in\mathbb{R}$ and all $r\in[\delta,1]$. Similarly, we may find a Borel set $B \subset \mathbb{R}$ of measure $\nu(B) \geq 1 - \delta^{2\epsilon}$ with the property $(\nu|_{B})(B(x,r)) \leq \delta^{-6\epsilon}r^{t}$. Now, we still have $\overline{\Pi}^{\boxplus 2}(E) \geq \tfrac{1}{2}\delta^{\epsilon}$, where $$\overline{\Pi} := (\mu|_{A} \boxminus\mu|_{A}) \boxtimes(\nu|_{B} \boxminus\nu|_B).$$ Therefore, we may proceed with the argument, with $\mu,\nu$ replaced by $\mu|_{A},\nu|_{B}$.
Let us rewrite the condition $\Pi^{\boxplus 2}(E) \geq \delta^{\epsilon}$ as $(\mu \times \nu)^{4}(G_8) \geq \delta^{\epsilon}$, where $$G_8 := \{(a_{1},b_{1},\ldots,a_{4},b_{4}) \in \mathbb{R}^{8} : (a_{1} - a_{2})(b_{3} - b_{4}) + (b_{1} - b_{2})(a_{3} - a_{4}) \in E\}.$$ In particular, there exists a subset $G_{6} \subset \mathbb{R}^{6}$ of measure $(\mu \times \nu)^{3}(G_{6}) \geq \delta^{2\epsilon}$ such that for every $(a_1,b_1,a_2,b_2,a_3,b_3)$ in $G_6$, one has $(\mu \times \nu)(G_{2}) \geq \delta^{2\epsilon}$, where $$\label{form14}
G_{2} := \{(a_{4},b_{4}) \in \mathbb{R}^{2} : (a_{1},b_{1},\ldots,a_{4},b_{4}) \in G_{8}\}.$$ Next, we plan to apply Proposition [Proposition 1](#prop:expansion){reference-type="ref" reference="prop:expansion"}. To make this formally correct, let us \"freeze\" two of the variables, say $(a_{3},b_{3})$: more precisely, fix $(a_{3},b_{3})$ in such a way that $(\mu \times \nu)^{2}(G_{4}) \geq \delta^{2\epsilon}$, where $$G_4 := \{(a_{1},b_{1},a_{2},b_{2}) \in \mathbb{R}^{4} : (a_{1},b_1,a_{2},b_{2},a_3,b_{3}) \in G_{6}\}.$$ If $\epsilon$ is chosen small enough in terms of $s$, $t$ and $\sigma := s+t-\kappa$, we may apply Proposition [Proposition 1](#prop:expansion){reference-type="ref" reference="prop:expansion"} with $\mu_{1} = \mu_{2} = \mu$ and $\nu_{1} = \nu_{2} = \nu$, and $\boldsymbol\rho= \mu \times \nu$, using $6\epsilon$ instead of $\epsilon$. Then, one has $$(\mu \times \nu)^{2}(\mathbf{Bad})
< \delta^{6\epsilon} \leq (\mu \times \nu)^{2}(G_{4}),$$ and [\[eq:robust-sum-prod\]](#eq:robust-sum-prod){reference-type="eqref" reference="eq:robust-sum-prod"} holds for all $(a_{1},b_{1},a_2,b_{2}) \in \mathbb{R}^{4} \, \setminus \, \mathbf{Bad}$. Consequently, we may find a $4$-tuple $(a_{1},b_1,a_{2},b_{2}) \in G_{4} \, \setminus \, \mathbf{Bad}$, and eventually a $6$-tuple $$(a_{1},b_1,a_{2},b_{2},a_3,b_{3}) \in G_{6}$$ such that whenever $G \subset \mathbb{R}^{2}$ is a Borel set with $(\mu \times \nu)(G) = \boldsymbol\rho(G) \geq \delta^{6\epsilon}$, then $$\begin{aligned}
& N(\{(a_{1} - a_{2})(b_{3} - b_{4}) + (b_{1} - b_{2})(a_{3} - a_{4}) : (a_{4},b_{4}) \in G\},\delta)\\
& = N(\{(a_{1} - a_{2})b_{4} + (b_{1} - b_{2})a_{4} : (a_{4},b_{4}) \in G\},\delta) \geq \delta^{-\sigma} = \delta^{-s - t + \kappa}.\end{aligned}$$ In particular, by [\[form14\]](#form14){reference-type="eqref" reference="form14"}, this can be applied to the set $G := G_{2}$, and the conclusion is that $$\label{form15}
N(\{(a_{1} - a_{2})(b_{3} - b_{4}) + (b_{1} - b_{2})(a_{3} - a_{4}) : (a_{4},b_{4}) \in G_{2}\},\delta) \geq \delta^{-s - t + \kappa}.$$ However, since $(a_{1},b_{1},a_{2},b_{2},a_{3},b_{3}) \in G_{6}$, we have $(a_{1},b_{1},\ldots,a_{4},b_{4}) \in G_8$ for all $(a_{4},b_{4}) \in G_{2}$, and consequently $$\forall (a_{4},b_{4}) \in G_{2},\qquad (a_{1} - a_{2})(b_{3} - b_{4}) + (b_{1} - b_{2})(a_{3} - a_{4}) \in E.$$ Therefore, [\[form15\]](#form15){reference-type="eqref" reference="form15"} implies [\[form13\]](#form13){reference-type="eqref" reference="form13"}. ◻
We now want to go from the combinatorial conclusion of Lemma [Lemma 1](#lemma2){reference-type="ref" reference="lemma2"} to the more measure theoretic statement of Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} involving energies at scale $\delta$. For that, our strategy is similar in flavour to the one used by Bourgain and Gamburd [@bg_su2] to derive their flattening lemma, decomposing the measures into dyadic level sets.
*Proof of Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"}.* Let $\Pi_{r} := \Pi \ast P_{r}$, where we recall that $\{P_{r}\}_{r > 0} = \{r^{-1}P(\cdot/r)\}_{r > 0}$ is a standard approximate identity. The goal will be to show that if $k \geq 1$ is sufficiently large (depending on $s,t,\kappa$), then, for all $r \in [\delta,1]$, $$\label{form16}
J_{r}(k) := \lVert\Pi_{r}^{\boxplus 2^k}\rVert_{2} \leq \delta^{-\kappa/2}r^{(s + t - 1)/2}.$$ This implies in a standard manner (using for example [@mattila Lemma 12.12], Plancherel and a dyadic frequency decomposition) that $$I_{s + t}^\delta(\Pi^{\boxplus 2^k}) \lesssim \delta^{-2\kappa}.$$ Note that the sequence $\{J_{r}(k)\}_{k \in \mathbb{N}}$ is decreasing in $k$, since by Young's inequality $$J_{r}(k + 1) = \lVert\Pi_r^{\boxplus 2^k}\boxplus\Pi_r^{\boxplus 2^k}\rVert_2
\leq \lVert\Pi_r^{\boxplus 2^k}\rVert_1\lVert\Pi_r^{\boxplus 2^k}\rVert_2
= \lVert\Pi_r^{\boxplus 2^k}\rVert_2
= J_r(k).$$ Therefore, in order to prove [\[form16\]](#form16){reference-type="eqref" reference="form16"}, the value of $k$ may depend on $r$, as long as it is uniformly bounded in terms of $s,t,\kappa$. Eventually, the maximum of all possible values for $k$ will work for all $\delta \leq r \leq 1$.
Let us start by disposing of large $r$, i.e. $r\geq\delta^{\kappa/2}$. For that, we have the trivial bound (recalling also that we assumed $s+t\leq 1$) $$J_{r}(k) \leq J_{r}(0) \lesssim r^{-1} \leq \delta^{-\kappa/2}r^{(s + t - 1)/2}.$$
So, it remains to treat the case $r \in[\delta, \delta^{\kappa/2}]$. We now fix such a scale $r$. By the pigeonhole principle, given a small parameter $\epsilon \in (0,\tfrac{\kappa}{4})$ to be fixed later (the choice will roughly be determined by applying Lemma [Lemma 1](#lemma2){reference-type="ref" reference="lemma2"} to the parameters $s,t,\kappa$), there exists $k \lesssim 1/\epsilon$, depending on $r$, such that $$\label{form17}
\|\Pi_{r}^{\boxplus 2^{k+1}}\|_{2} = J_{r}(k + 1) \geq r^{\epsilon}J_{r}(k) = r^{\epsilon} \|\Pi_{r}^{\boxplus 2^k}\|_{2}.$$ This index $k$ is fixed for the rest of the argument, so we will not display it in (all) subsequent notation. We may assume that $\|\Pi_{r}^{\boxplus 2^{k}}\|_{2} \geq 1$, otherwise [\[form16\]](#form16){reference-type="eqref" reference="form16"} is clear.
Let $\mathcal{D}_{r}$ be the dyadic intervals of $\mathbb{R}$ of length $r$. For each $I \in \mathcal{D}_{r}$, we set $$a_{I} := \sup_{x \in I} \Pi_{r}^{\boxplus 2^{k}}(x).$$ Next, we fix an absolute constant $C_{0} \geq 1$ to be specified momentarily, and we define the collections $$\mathcal{A}_{j} := \begin{cases} \{I \in \mathcal{D}_{r} : a_{I} \leq C_{0}\}, & j = 0, \\ \{I \in \mathcal{D}_{r} : C_{0}2^{j - 1} < a_{I} \leq C_{0}2^{j}\}, & j \geq 1. \end{cases}$$ We also define the sets $A_{j} := \cup \mathcal{A}_{j}$; note that the sets $A_{j}$ are disjoint for distinct $j$ indices. Since $\Pi$ is a probability measure, $\Pi_{r}^{\boxplus 2^{k}} \lesssim 1/r$ for all $k \geq 1$. Therefore $A_{j} = \emptyset$ for $j \geq C\log(1/r)$, and evidently $$\label{form18a} \Pi_{r}^{\boxplus 2^{k}} \lesssim \sum_{j = 0}^{C\log(1/r)} 2^{j} \cdot \mathbf{1}_{A_{j}}.$$ Here the implicit constants may depend on $C_{0}$. Conversely, we claim that $$\label{form18b}
\sum_{j = 1}^{C\log(1/r)} 2^{j} \cdot \mathbf{1}_{A_{j}} \lesssim \Pi^{\boxplus 2^{k}}_{r}.$$ To see this, fix $x \in A_{j}$ with $j \geq 1$, and let $I = I(x) \in \mathcal{D}_{r}$ be the dyadic $r$-interval containing $x$. Then $a_{I} \geq C_{0}2^{j - 1}$, which means that there exists another point $x' \in I$ with $\Pi_{r}^{\boxplus 2^{k}}(x') \geq C_{0}2^{j - 1}$. Now the key point: the function $\Pi_{r}^{\boxplus 2^{k}}$ is $C_{1}/r$-Lipschitz for some absolute constant $C_{1} > 0$. Therefore, $\Pi_{r}^{\boxplus 2^{k}}(x) \geq C_{0}2^{j - 1} - C_{1}|x - x'|/r \sim 2^{j}$, provided that $C_{0} \geq 2C_{1}$. This proves [\[form18b\]](#form18b){reference-type="eqref" reference="form18b"}.
Based on [\[form18b\]](#form18b){reference-type="eqref" reference="form18b"} (and our hypothesis $\|\Pi_{r}^{\boxplus 2^{k}}\|_{2} \geq 1$ to treat the case $j = 0$) we may deduce, in particular, that $$\label{form19}
2^j \lVert\mathbf{1}_{A_j}\rVert_2
\lesssim \lVert\Pi_r^{\boxplus 2^k}\rVert_2, \qquad j \geq 0.$$ Next, using [\[form18a\]](#form18a){reference-type="eqref" reference="form18a"}, we may pigeonhole an index $j \geq 0$ and a set $A:=A_j$ with the property $$\|\Pi_{r}^{\boxplus 2^{k+1}}\|_{2} \leq \|\Pi_{r}^{\boxplus 2^k} \boxminus\Pi_{r}^{\boxplus 2^k}\|_{2}
\lesssim (\log1/r)\cdot 2^j\cdot\lVert\mathbf{1}_{A} \boxminus\Pi_{r}^{\boxplus 2^k}\rVert_{2}.$$ Since further, by Plancherel and Cauchy-Schwarz, $$\|\mathbf{1}_{A} \boxminus\Pi_{r}^{\boxplus 2^k}\|_{2}
\leq \|\mathbf{1}_{A} \boxminus\mathbf{1}_{A}\|_{2}^{1/2}\|\Pi_{r}^{\boxplus 2^{k+1}}\|_{2}^{1/2}$$ we deduce that $$\begin{aligned}
r^{\epsilon}\|\Pi_{r}^{\boxplus 2^k}\|_{2}
\stackrel{\eqref{form17}}{\leq} \|\Pi_{r}^{\boxplus 2^{k+1}}\|_{2}
& \lesssim (\log1/r)^2\cdot 2^{2j}\cdot\lVert\mathbf{1}_{A}\boxminus\mathbf{1}_{A}\rVert_2 \notag\\
&\label{form21} \lesssim r^{-\epsilon}\cdot 2^{2j}\cdot\lVert\mathbf{1}_{A}\rVert_1\lVert\mathbf{1}_{A}\rVert_2 \stackrel{\eqref{form19}}{\lesssim} r^{-\epsilon}\cdot 2^j\cdot\lVert\mathbf{1}_A\rVert_1\lVert\Pi_r^{\boxplus 2^k}\rVert_2.\end{aligned}$$ At this point we note that if $j = 0$, then the preceding inequality shows that $\|\Pi_{r}^{\boxplus 2^{k}}\|_{2} \lesssim r^{-2\epsilon}$, which is better than [\[form16\]](#form16){reference-type="eqref" reference="form16"}, since we declared that $\epsilon \leq \kappa/4$. So, we may and will assume that $j \geq 1$ in the sequel.
In (i) below we combine [\[form21\]](#form21){reference-type="eqref" reference="form21"} and [\[form18b\]](#form18b){reference-type="eqref" reference="form18b"}, whereas in (ii) below we combine [\[form21\]](#form21){reference-type="eqref" reference="form21"} with $2^{j}\|\mathbf{1}_{A}\|_{1} \lesssim \|\Pi_{r}^{\boxplus 2^{k}}\|_{1} = 1$:
- $r^{2\epsilon} \lesssim 2^j\lVert\mathbf{1}_A\rVert_1 \lesssim \Pi_r^{\boxplus 2^k}(A)$,
- $r^{2\epsilon}\|\Pi_{r}^{\boxplus 2^k}\|_{2} \lesssim 2^j\lVert\mathbf{1}_A\rVert_2 \lesssim \lVert\mathbf{1}_A\rVert_1^{-1}\lVert\mathbf{1}_A\rVert_2$.
Since $A$ is a union of intervals in $\mathcal{D}_{r}$, one has $$\lVert\mathbf{1}_A\rVert_1\sim rN(A,r) \quad\text{and}\quad \lVert\mathbf{1}_A\rVert_2\sim r^{1/2}N(A,r)^{1/2},$$ so item (ii) yields $$\lVert\Pi_r^{\boxplus 2^k}\rVert_2 \lesssim r^{-\frac{1}{2}-2\epsilon} N(A,r)^{-1/2}.$$ On the other hand, since $$I^{r}_{s}(\mu) \leq I^{\delta}_{s}(\mu) \leq \delta^{-\epsilon} \leq r^{-\epsilon/\kappa} \quad \text{and} \quad I^{r}_{t}(\nu) \leq r^{-\epsilon/\kappa},$$ Lemma [Lemma 1](#lemma2){reference-type="ref" reference="lemma2"} applied at scale $r$ (and recalling (i) above) shows that if $\epsilon$ is chosen small enough in terms of $s,t,\kappa$, then $$N(A,r) \geq r^{-s-t+\kappa}.$$ We thus obtain what we claimed in [\[form16\]](#form16){reference-type="eqref" reference="form16"}: $$\|\Pi_{r}^{\boxplus 2^k}\|_{2} \lesssim \delta^{-\kappa/2}r^{(s + t - 1)/2}.$$ This completes the proof of Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"}. ◻
# The induction step {#sec:induction}
The proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is by induction on $n$, starting from the $n=2$ case, already studied in Section [2](#sec:n2){reference-type="ref" reference="sec:n2"}. The induction step is based on the flattening results for additive-multiplicative convolutions developed in the previous section. It will be essential in the argument to be able to switch the order of addition and multiplication. For that we record the following lemma, which is a simple application of the Cauchy-Schwarz inequality.
**Lemma 1**. *Given two Borel probability measures $\mu$ and $\nu$ on $\mathbb{R}$, one has, for all $\xi$ in $\mathbb{R}$, $$\lvert\widehat{\mu\boxtimes\nu}(\xi)\rvert^2 \leq \widehat{(\mu\boxminus\mu)\boxtimes\nu}(\xi).$$*
*Proof.* Writing the Fourier transforms explicitly, and applying the Cauchy-Schwarz inequality with respect to the probability measure $\nu$, one gets $$\left\lvert\iint e^{-2\pi i \xi xy}\,d\mu(x)\,d\nu(y)\right\rvert^2
\leq \int\left\lvert\int e^{-2\pi i\xi xy}\,d\mu(x)\right\rvert^2\,d\nu(y)
= \iiint e^{-2\pi i\xi (x_1-x_2)y}\,d\mu(x_1)\,d\mu(x_2)\,d\nu(y),$$ and the right-hand side is exactly $\widehat{(\mu\boxminus\mu)\boxtimes\nu}(\xi)$. ◻
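In concrete terms, the inequality can be sanity-checked by Monte Carlo simulation. The sketch below (ours; the toy measures, the frequency and the sample size are arbitrary) estimates both sides at a single frequency; the right-hand side is real and nonnegative, and should dominate up to a sampling error of order $n^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, xi = 500_000, 37.0

# toy probability measures on [-1, 1]: mu = uniform, nu = pushforward of uniform under t -> t^2
x, x1, x2 = rng.uniform(-1, 1, (3, n))
y = rng.uniform(-1, 1, n) ** 2

lhs = abs(np.exp(-2j * np.pi * xi * x * y).mean()) ** 2        # |hat(mu x nu)(xi)|^2
rhs = np.exp(-2j * np.pi * xi * (x1 - x2) * y).mean().real     # hat((mu - mu) x nu)(xi)
print(f"|hat(mu x nu)(xi)|^2 = {lhs:.4f}  <=  hat((mu-mu) x nu)(xi) = {rhs:.4f}")
```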
*Proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"}.* For the base case $n=2$, we may apply Proposition [Proposition 1](#basecase){reference-type="ref" reference="basecase"}. Indeed, assuming $I_{s_i}^\delta(\mu_i)\leq\delta^{-\epsilon}$ for $i=1,2$, one has $$\lVert\mu_{i,\delta}\rVert_2^2
= \int \lvert\widehat{\mu_{i,\delta}}(\xi)\rvert^2\,d\xi
\lesssim \delta^{-1+s_i} \int \lvert\xi\rvert^{s_i-1}\lvert\widehat{\mu_{i,\delta}}(\xi)\rvert^2\,d\xi
\lesssim \delta^{-1+s_i-2\epsilon},$$ since $\widehat{\mu_{i,\delta}}$ is essentially concentrated on the frequencies $\lvert\xi\rvert\lesssim\delta^{-1}$, where $\lvert\xi\rvert^{1-s_i}\lesssim\delta^{-1+s_i}$, and the remaining integral is comparable to $I_{s_i}^{\delta}(\mu_i)\leq\delta^{-\epsilon}$. So if $s_1+s_2>1$, taking $\epsilon$ small enough to ensure $s_1+s_2-4\epsilon>1$, one finds, for every $\delta^{-1}\leq\lvert\xi\rvert\leq 2\delta^{-1}$, $$\lvert\widehat{\mu_1\boxtimes\mu_2}(\xi)\rvert \leq \delta^{\frac{s_1+s_2-4\epsilon-1}{2}},$$ which is the desired Fourier decay.
Now let $n \geq 3$, and assume that we have already established the case $n - 1$ with the collection of parameters $\mathcal{S}_{n - 1} := \{s_{1} + s_{2},s_{3},\ldots,s_{n}\}$, and some constants $$\label{form6}
\epsilon_{n - 1}(\mathcal{S}_{n - 1}) > 0, \quad \tau_{n - 1}(\mathcal{S}_{n - 1}) > 0 \quad \text{and} \quad \delta_{0} := \delta_{0}(\mathcal{S}_{n - 1}) > 0.$$ It is easy to reduce to the case $s_{1} + s_{2} \leq 1$, so we assume this in the sequel.
Given $\xi$ with $\delta^{-1}\leq\lvert\xi\rvert\leq 2\delta^{-1}$, our goal is to bound $$\mathcal{F}(\xi) := (\mu_1\boxtimes\dots\boxtimes\mu_n)^\wedge(\xi).$$ Applying Lemma [Lemma 1](#order){reference-type="ref" reference="order"} twice, first with $\mu=\mu_1$ and $\nu=\mu_2\boxtimes\dots\boxtimes\mu_n$ and then with $\mu=\mu_2$ and $\nu=(\mu_1\boxminus\mu_1)\boxtimes\mu_3\boxtimes\dots\boxtimes\mu_n$, yields $$\lvert\mathcal{F}(\xi)\rvert^4\leq (\Pi\boxtimes\mu_3\boxtimes\dots\boxtimes\mu_n)^\wedge(\xi),$$ where $\Pi=(\mu_1\boxminus\mu_1)\boxtimes(\mu_2\boxminus\mu_2)$. Using the same lemma again $k$ times, we further get $$\lvert\mathcal{F}(\xi)\rvert^{2^{k+2}}\leq (\Pi^{\boxplus 2^k}\boxtimes\mu_3\boxtimes\dots\boxtimes\mu_n)^\wedge(\xi).$$ Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"} applied with constants $s := s_{1}$ and $t := s_{2}$, and $\kappa := \epsilon_{n - 1} := \epsilon_{n - 1}(\mathcal{S}_{n - 1})$, shows that if $\epsilon = \epsilon(s_{1},s_{2},\epsilon_{n - 1}) > 0$ is sufficiently small, $k = k(s_{1},s_{2},\epsilon_{n - 1})$ is sufficiently large, and $\mu_1$, $\mu_2$ satisfy $I_{s_j}^\delta(\mu_j)\leq\delta^{-\epsilon}$ for $j=1,2$, then $$I_{s_{1} + s_{2}}^{\delta}(\Pi^{\boxplus 2^k}) \leq \delta^{-\kappa} = \delta^{-\epsilon_{n - 1}}.$$ We apply our induction hypothesis to the collection of $n - 1$ probability measures $$\{\bar{\mu}_{1},\ldots,\bar{\mu}_{n - 1}\} = \{\Pi^{\boxplus 2^k},\mu_{3},\ldots,\mu_{n}\}$$ with exponents $\{s_{1} + s_{2},s_{3},\ldots,s_{n}\}$ to get $$|\mathcal{F}(\xi)|^{2^{k+2}} \leq |\xi|^{-\tau_{n - 1}}.$$ (To be precise, since the measure $\Pi^{\boxplus 2^k}$ is not supported on $[-1,1]$ but on $[-2^{k+2},2^{k+2}]$, one rather needs to consider the rescaled measure $\bar{\mu}_1=(2^{-k-2})_*\Pi^{\boxplus 2^k}$, which satisfies $I_{s_1+s_2}^{\delta}(\bar{\mu}_1)\sim_k I_{s_1+s_2}^\delta(\Pi^{\boxplus 2^k})$, and the involved constant depending on $k$ is harmless.) This shows that the Fourier decay property holds for $n$, with constants $\epsilon_n:= \min\{\epsilon,\epsilon_{n - 1}\}$ and $\tau_n=\frac{\tau_{n-1}}{2^{k+2}}$. The necessary size of $k$ is determined by the application of Lemma [Lemma 1](#lemma3){reference-type="ref" reference="lemma3"}, so it depends only on $\min\{s_{1},s_{2}\} > 0$. The proof of Theorem [Theorem 1](#main){reference-type="ref" reference="main"} is complete. ◻
---
abstract: |
In this work, it is shown that there is a continuous function $\mu:[0,1] \longrightarrow \mathbb{C}$ such that $\mathsf{P}^\mathsf{I}(\mu(\alpha)) = 0$, $\forall \alpha \in [0,1]$, where $\mathsf{P}^\mathsf{I}$ is a Type I reduced Ito polynomial. In addition, demonstrations are given for basic properties of the Karpelevič region, including a rigorous, but elementary argument showing that points on the boundary of the Karpelevič region are *extremal* whenever $n> 3$.
author:
- Devon N. Munger
- Andrew L. Nickerson
- Pietro Paparella
bibliography:
- karparcs.bib
title: Demystifying the Karpelevič theorem
---
Karpelevič arc, Karpelevič region, stochastic matrix
15A18, 15B51, 30C15
# Introduction
A *stochastic* matrix is an entrywise nonnegative matrix whose rows sum to one. In 1938[^1], Kolmogorov posed the problem of characterizing the subset of the complex plane, which we denote by $\Theta_n$, that comprises all eigenvalues of all $n$-by-$n$ *stochastic* matrices [@s1972 p. 2].
Dmitriev and Dynkin [@ams140] obtained a partial solution, and Karpelevič [@ams140 Theorem B] solved the problem by showing that the boundary of $\Theta_n$ consists of curvilinear *arcs* (hereinafter, *Karpelevič arcs* or *K-arcs*), whose points satisfy a polynomial equation that depends on the endpoints of the arc.
The statement of the Karpelevič theorem is long and complicated, but simplifications of the statement were given by Đoković [@d1990 Theorem 4.5] and subsequently by Ito [@i1997 Theorem 2].
For $n \in \mathbb{N}$, let $F_n \coloneqq \{ p/q \mid 0\leq p \le q \leq n,~\gcd(p,q)=1 \}$. The following is Karpelevič's theorem in a form due to Ito.
**Theorem 1** (Karpelevič theorem). *The region $\Theta_n$ is symmetric with respect to the real axis, is included in the unit-disc $\{ z \in \mathbb{C} \mid |z| \leq 1\}$, and intersects the unit-circle $\{ z \in \mathbb{C} \mid |z| = 1\}$ at the points $\left\{ e^{\frac{2\pi p}{q}\mathsf{i}} \mid p/q \in F_n \right\}$. The boundary of $\Theta_n$ consists of these points and of curvilinear arcs connecting them in circular order.*
*Let the endpoints of an arc be $e^{\frac{2\pi p}{q}\mathsf{i}}$ and $e^{\frac{2\pi r}{s}\mathsf{i}}$ ($q \le s$). Each of these arcs is given by the following parametric equation: $$\label{itoequation}
t^{s} \left( t^{q} - \beta \right)^{\left\lfloor n/q \right\rfloor} = \alpha^{\left\lfloor n/q \right\rfloor} t^{q\left\lfloor n/q \right\rfloor},\ \alpha \in [0,1], ~\beta\coloneqq 1-\alpha.$$*
Following Johnson and Paparella [@jp2017], equation [\[itoequation\]](#itoequation){reference-type="eqref" reference="itoequation"} is called the *Ito equation* and the polynomial $$\label{itopolynomial}
\mathsf{P}_\alpha(t)\coloneqq t^s(t^q-\beta)^{\left\lfloor n/q \right\rfloor}-\alpha^{\left\lfloor n/q \right\rfloor}t^{q\left\lfloor n/q \right\rfloor},\ \alpha\in[0,1],\ \beta \coloneqq 1-\alpha$$ is called the *Ito polynomial*.
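As a quick illustration of how the Ito polynomial can be explored in practice, the following numerical sketch (ours, not part of the original text; it assumes `numpy`, and the helper name `ito_roots` is hypothetical) samples $\alpha\in[0,1]$ for a chosen Farey pair and computes the zeros of $\mathsf{P}_\alpha$; the zeros lying in the sector between the endpoints trace the corresponding K-arc.

```python
# Sample the Ito polynomial P_alpha(t) = t^s (t^q - beta)^{floor(n/q)}
#   - alpha^{floor(n/q)} t^{q*floor(n/q)} for a given Farey pair of order n.
import numpy as np

def ito_roots(n, q, s, alpha):
    m = n // q
    beta = 1.0 - alpha
    # coefficients of (t^q - beta)^m, highest degree first
    binom = np.poly1d([1.0] + [0.0] * (q - 1) + [-beta]) ** m
    poly = np.concatenate([binom.coeffs, np.zeros(s)])   # multiply by t^s
    deg = len(poly) - 1                                  # = q*m + s
    poly[deg - q * m] -= alpha ** m                      # subtract alpha^m t^{q m}
    return np.roots(poly)

# example: n = 8 and the Farey pair (1/4, 2/7), so q = 4, s = 7
for alpha in np.linspace(0.0, 1.0, 5):
    roots = ito_roots(8, 4, 7, alpha)
    print(round(alpha, 2), np.round(roots[np.abs(roots) > 1e-8], 3))
```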
If $\mathsf{P}_\alpha(t) = 0$, $t \ne 0$, and $\left\lfloor n/q \right\rfloor = n$, then $t$ is a zero of the *Type 0 (Ito) polynomial* $$\label{type_zero_ito}
\mathsf{P}_\alpha^{\mathsf{0}} (t) = (t-\beta)^n - \alpha^n,~\alpha\in[0,1],\ \beta \coloneqq 1-\alpha.$$ The Type 0 polynomial corresponds to the *Farey pair* $(0/1,1/n)$.
If $\mathsf{P}_\alpha(t) = 0$, $t \ne 0$, and $\left\lfloor n/q \right\rfloor = 1$, then $t$ is a zero of the *Type I (Ito) polynomial* $$\label{type_one_ito}
\mathsf{P}_\alpha^{\mathsf{I}}(t) = t^s-\beta t^{s-q} - \alpha,\ \alpha\in[0,1],\ \beta \coloneqq 1-\alpha.$$
If $\mathsf{P}_\alpha(t) = 0$, $t \ne 0$, $1 < \left\lfloor n/q \right\rfloor < n$, and $s < q\left\lfloor n/q \right\rfloor$, then $t$ is a zero of the *Type II (Ito) polynomial* $$\label{type_two_ito}
\mathsf{P}_\alpha^{\mathsf{II}} (t) = (t^q-\beta)^{\left\lfloor n/q \right\rfloor}-\alpha^{\left\lfloor n/q \right\rfloor}t^{q\left\lfloor n/q \right\rfloor-s},\ \alpha\in[0,1],\ \beta \coloneqq 1-\alpha.$$
If $\mathsf{P}_\alpha(t) = 0$, $t \ne 0$, $1 < \left\lfloor n/q \right\rfloor < n$, and $s > q\left\lfloor n/q \right\rfloor$, then $t$ is a zero of the *Type III (Ito) polynomial* $$\label{type_tre_ito}
\mathsf{P}_\alpha^{\mathsf{III}} (t) = t^{s-q\left\lfloor n/q \right\rfloor}(t^q-\beta)^{\left\lfloor n/q \right\rfloor}-\alpha^{\left\lfloor n/q \right\rfloor},\ \alpha\in[0,1],\ \beta \coloneqq 1-\alpha.$$
The polynomials given in equations [\[type_zero_ito\]](#type_zero_ito){reference-type="eqref" reference="type_zero_ito"}--[\[type_tre_ito\]](#type_tre_ito){reference-type="eqref" reference="type_tre_ito"} are called the *reduced Ito polynomials*. (Note that $s \ne q\left\lfloor n/q \right\rfloor$ since $\gcd{(q,s)} = 1$.)
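For concreteness, a small helper (ours; the function name is hypothetical) that assigns the reduced-polynomial type to a Farey pair of order $n$, following the case distinction above, might look as follows; here $q \le s$ are the denominators of the two endpoints, as in the convention of Theorem 1.

```python
# Classify the reduced Ito polynomial attached to a Farey pair of order n.
def ito_type(n, q, s):
    m = n // q
    if m == n:
        return "0"                        # q = 1, i.e. the pair (0/1, 1/n)
    if m == 1:
        return "I"
    return "II" if s < q * m else "III"   # s != q*m since gcd(q, s) = 1

# reproduces, e.g., two rows of the "Type" column of Table 1 for n = 8
print(ito_type(8, 4, 7))   # II  for the pair (1/4, 2/7)
print(ito_type(8, 3, 7))   # III for the pair (2/7, 1/3)
```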
The continuity of the Karpelevič arcs is justified by a remarkable result due to Kato [@k1995 Theorem 5.2] and the fact that the zeros of a polynomial depend continuously on its coefficients (see, e.g., Horn and Johnson [@hj2013 Appendix D]).
**Theorem 2** (Kato). *Let $\Lambda(\alpha)$ be an unordered $N$-tuple of complex numbers, depending continuously on a real variable $\alpha$ in a (closed or open) interval $I$. Then there exist $N$ single-valued, continuous functions $\mu_n(\alpha), n = 1,\ldots,N$, the values of which constitute the $N$-tuple $\Lambda(\alpha)$ for each $\alpha \in I$.*
If $\mu: [0,1] \longrightarrow \mathbb{C}$ and $\mu$ is continuous, then $\mu$ is called an *arc* (or *path*). By Theorem 2, there are, including repetitions, $N \coloneqq \deg \mathsf{P}_\alpha$ arcs $\mu_i: [0,1] \longrightarrow \mathbb{C}$, $i \in \{1,\ldots,N \}$, such that $\mathsf{P}_\alpha(\mu_i(\alpha)) = 0$, $\forall \alpha \in [0,1]$.
However, what cannot be asserted from Theorem 1 or Theorem 2 is the conclusion that there *is* an arc in the sector $$\left\{ z \in \mathbb{C} \mid \min \left\{ \frac{p}{q}, \frac{r}{s} \right\} \le \frac{\mathop{\mathrm{Arg}}z}{2\pi} \le \max \left\{ \frac{p}{q}, \frac{r}{s} \right\} \right\}.$$ Hence, we offer the following.
**Conjecture 3**. *If $\mathsf{P}_\alpha$ is defined as in [\[itopolynomial\]](#itopolynomial){reference-type="eqref" reference="itopolynomial"}, then there is a continuous function $\mu: [0,1] \longrightarrow \mathbb{C}$ such that $\mu(0) = e^{\frac{2\pi p}{q} \mathsf{i}}$, $\mu(1) = e^{\frac{2\pi r}{s} \mathsf{i}}$, and $\mathsf{P}_\alpha (\mu(\alpha)) = 0, \forall \alpha \in [0,1]$. Furthermore, $$\min \left\{ \frac{p}{q}, \frac{r}{s} \right\} \le \frac{\mathop{\mathrm{Arg}}\mu(\alpha)}{2\pi} \le \max \left\{ \frac{p}{q}, \frac{r}{s} \right\}, \forall \alpha \in [0,1].$$*
With the above in mind, it is possible to give a precise definition of what is meant by a *K-arc*.
**Definition 4**. Let $n \ge 4$, $\mathsf{X} \in \{ \mathsf{0},\mathsf{I},\mathsf{II}, \mathsf{III} \}$, and let $\frac{p}{q}$ and $\frac{r}{s}$ be Farey neighbors. If there is a continuous function $\mu: [0,1] \longrightarrow \mathbb{C}$ such that $\mu(0) = \omega_q^p$, $\mu(1) = \omega_{s}^{r}$, $\mathsf{P}_\alpha^\mathsf{X} (\mu(\alpha)) = 0, \forall \alpha \in [0,1]$, and $$\min \left\{ \frac{p}{q}, \frac{r}{s} \right\} \le \frac{\mathop{\mathrm{Arg}}\mu(\alpha)}{2\pi} \le \max \left\{ \frac{p}{q}, \frac{r}{s} \right\}, \forall \alpha \in [0,1],$$ then $$K_n\left( \min \left\{ \frac{p}{q}, \frac{r}{s} \right\}, \max \left\{ \frac{p}{q}, \frac{r}{s} \right\} \right) \coloneqq \left\{ \lambda \in \mathbb{C} \mid \lambda = \mu(\alpha),\ \alpha \in [0,1] \right\}$$ is called the *K-arc with respect to $\frac{p}{q}$ and $\frac{r}{s}$ (of order $n$)*.
Conjecture 3 is assumed, most notably, in the proof that the K-arcs are differentiable (see Kim and Kim [@kk2020 Theorem 3] for the general case and Johnson and Paparella [@jp2017 Corollary 4.4] for a special case of Type I arcs).
Conjecture 3 is trivial for the Type 0 polynomial (see Subsection [4.1](#subsect:typezero){reference-type="ref" reference="subsect:typezero"}). Đoković [@d1990 pp. 175--181] indirectly established Conjecture 3 for Type I polynomials by examining the boundary of so-called *Farey tiles*---as such, his argument is somewhat lengthy and involved.
In this work, a direct and elementary argument is given to establish Conjecture 3 for Type I polynomials (see Subsection [4.2](#subsect:typeone){reference-type="ref" reference="subsect:typeone"}). Although this is a special case, many Type II and Type III arcs of a given order are Minkowski powers of certain Type I arcs (see Section [5](#sect:karcpowers){reference-type="ref" reference="sect:karcpowers"}). The case for Type II and Type III arcs that are not pointwise powers is left open.
In addition, this work also contains demonstrations of assertions concerning $\Theta_n$ made in the aforementioned works that are nebulous and, in some cases, incorrect. In particular, and most saliently, if $\lambda \in \Theta_n$, then $\lambda$ is called *extremal* if $\gamma \lambda \notin \Theta_n$, $\forall \gamma > 1$. If $E_n$ denotes the collection of extremal numbers, then it is clear that $E_n \subseteq \partial\Theta_n$. Karpelevič [@ams140 p. 81] asserted that $\partial \Theta_n = E_n$ because $\Theta_n$ is closed. The case when $n=1$ is trivial, but Karpelevič's assertion is false when $n=2$ and $n=3$---indeed, notice that $\partial \Theta_2 \setminus E_2 = (-1,1)$ and $\partial\Theta_3 \setminus E_3 = (-1,-1/2]$ (see Figure [\[fig:thetan\]](#fig:thetan){reference-type="ref" reference="fig:thetan"}).
In this work, it is shown that $\partial\Theta_n \subseteq E_n$ when $n>3$ via elementary means.
# Notation & Background
If $z$ is a nonzero complex number, then $\arg z \coloneqq \{ \theta \in \mathbb{R} \mid z = |z|(\cos \theta + \mathsf{i}\sin \theta) \}$ and $\mathop{\mathrm{Arg}}z$ denotes the unique element of $\arg z$ that lies in $(-\pi,\pi]$. For $n \in \mathbb N$, let $\omega_n \coloneqq \cos(2\pi/n) + \mathsf{i}\sin(2\pi/n)$.
Given $n \in \mathbb{N}$, elements of the set $F_n \coloneqq \{ p/q \mid 0\leq p \le q \leq n,\gcd(p,q) = 1 \}$ are called the *Farey fractions of order n*. If $p/q$ and $r/s$ are elements of $F_n$ such that $p/q < r/s$, then $(p/q,r/s)$ is called a *Farey pair (of order $n$)* if $x \not\in F_n$ whenever $p/q < x < r/s$. The Farey fractions $p/q$ and $r/s$ are called *Farey neighbors* if $(p/q,r/s)$ or $(r/s, p/q)$ is a Farey pair.
The following result is well-known (see, e.g., LeVeque [@l2002 Theorem 8.14]).
**Theorem 5**. *If $p/q,r/s \in F_n$, then $p/q$ and $r/s$ are Farey neighbors of order $n$ if and only if $ps - qr = \pm 1$ and $q + s > n$.*
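Theorem 5 is easy to confirm experimentally; the short sketch below (ours, assuming only the Python standard library) enumerates the Farey fractions of order $n$ and checks that consecutive elements of the sorted list satisfy the stated criterion.

```python
# Check the Farey-neighbor criterion: p/q < r/s in F_n are neighbors
# iff q*r - p*s = 1 and q + s > n.
from fractions import Fraction
from math import gcd

def farey(n):
    return sorted({Fraction(p, q) for q in range(1, n + 1)
                   for p in range(0, q + 1) if gcd(p, q) == 1})

def neighbors_by_criterion(x, y, n):
    p, q, r, s = x.numerator, x.denominator, y.numerator, y.denominator
    return q * r - p * s == 1 and q + s > n

n = 8
F = farey(n)
# consecutive elements of the sorted list are exactly the Farey pairs
assert all(neighbors_by_criterion(F[i], F[i + 1], n) for i in range(len(F) - 1))
print(len(F), "Farey fractions of order", n)
```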
# Basic Results of the Karpelevič theorem
**Proposition 6**. *"The region $\Theta_n$ is symmetric with respect to the real axis."*
*Proof.* This is the least controversial aspect of Theorem 1, given that the characteristic polynomial of a real matrix has real coefficients and thus nonreal zeros occur in complex conjugate pairs. ◻
**Proposition 7**. *"The region $\Theta_n$ is included in the unit-disc $\{ z \in \mathbb{C} \mid |z| \leq 1\}$."*
**Remark 8**. The Perron--Frobenius theorem [@40320] has often been used incorrectly to justify this result. However, there is a tidy matricial explanation, as follows.
*Proof.* If $A \in \textsf{M}_n (\mathbb{C})$, then $$\begin{Vmatrix} A \end{Vmatrix}_\infty \coloneqq \max\limits_{1\le i \le n} \sum_{j=1}^n \vert a_{ij} \vert$$ is a matrix norm [@hj2013 Example 5.6.5]. If $A$ is stochastic, then $\begin{Vmatrix} A \end{Vmatrix}_\infty = 1$. Furthermore, if $\begin{Vmatrix} \cdot \end{Vmatrix}$ is a matrix norm, then $\rho(A) \le \begin{Vmatrix} A \end{Vmatrix}$ [@hj2013 Theorem 5.6.9]. Thus, if $\lambda$ is an eigenvalue of a stochastic matrix $A$, then $\vert \lambda \vert \le \rho(A) \le \begin{Vmatrix} A \end{Vmatrix}_\infty = 1$. ◻
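The norm bound above can be sanity-checked numerically; the snippet below (ours, assuming `numpy`) samples random stochastic matrices and verifies that their spectra stay in the closed unit disc.

```python
# Eigenvalues of random stochastic matrices never leave the closed unit disc.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = rng.integers(2, 8)
    A = rng.random((n, n))
    A /= A.sum(axis=1, keepdims=True)      # normalize rows -> stochastic
    assert np.abs(np.linalg.eigvals(A)).max() <= 1 + 1e-10
print("all sampled spectra lie in the closed unit disc")
```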
**Proposition 9**. *"The region $\Theta_n$ intersects the unit-circle $\{ z \in \mathbb{C} \mid |z| = 1\}$ at the points $\{ \omega_q^p \mid p/q \in F_n \}$."*
*Proof.* This result was initially established by Romanovsky [@r1936]. ◻
# K-arcs
In this section, Conjecture 3 is established for Type 0 and Type I polynomials.
## Type 0 {#subsect:typezero}
It is known that if $\lambda = a + \mathsf{i}b$ is an eigenvalue of a stochastic matrix, then $a + \vert{b}\vert \tan(\pi/n) \le 1$ (for references, see Laffey [@l1995 p. 85]). If $\mathsf{P}_\alpha^\mathsf{0}$ is the polynomial defined in [\[type_zero_ito\]](#type_zero_ito){reference-type="eqref" reference="type_zero_ito"}, then $\mathsf{P}_\alpha^\mathsf{0} (\beta + \alpha\omega_n) = 0$, so the map $\mu(\alpha) \coloneqq \beta + \alpha\omega_n$ traces the segment $\mathop{\mathrm{conv}}{(1,\omega_n)}$ with $\mu(0)=1$ and $\mu(1)=\omega_n$. Thus, $\partial\Theta_n \cap \{ z \in \mathbb{C} \mid 0 \le \mathop{\mathrm{Arg}}z \le 2\pi/n \} = \mathop{\mathrm{conv}}{(1,\omega_n)}$, settling Conjecture 3 for the Type 0 polynomial.
## Type I {#subsect:typeone}
**Observation 10**. If $n \ge 4$ and $\left\lfloor n/q \right\rfloor = 1$, then $q > 2$.
*Proof.* For contradiction, if $q \le 2$, then $$\left\lfloor n/q \right\rfloor = 1 \implies 1 \le \frac{n}{q} < 2,$$ i.e., $q \le n < 2q \le 4$. Thus, $n < 4$, which yields the desired contradiction. ◻
In what follows, it is assumed that $n \ge 4$ and $p/q$ and $r/s$ are Farey neighbors such that $\left\lfloor n/q \right\rfloor = 1$ and $$0 < \min \left\{ \frac{p}{q},\frac{r}{s} \right\} < \max \left\{ \frac{p}{q},\frac{r}{s} \right\} < \frac{1}{2}.$$
We start with some preliminary observations: Since $ps - qr = \pm 1$, it follows that $\gcd(q,s) = 1$. Thus, for the Type I polynomial, $$\label{typeonepoly}
\mathsf{P}_\alpha^\mathsf{I} (t) = t^{s} - \beta t^{s-q} - \alpha,\ \alpha \in [0,1],\ \beta \coloneqq 1 - \alpha,\ \gcd (q,s) = 1,\ 2 < q < s \le n.$$ If $\alpha = 0$, then $\mathsf{P}_0^\mathsf{I}(t) = t^{s-q} (t^q - 1)$ and $\mathsf{P}_0^\mathsf{I}$ has zeros $$\overbrace{0,\ldots,0}^{s-q}, 1, \omega_q,\ldots,\omega_q^{q-1}.$$ If $\alpha = 1$, then $\mathsf{P}_1^\mathsf{I}(t) = t^s - 1$ and $\mathsf{P}_1^\mathsf{I}$ has zeros $1,\omega_s,\ldots,\omega_s^{s-1}$.
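These two boundary cases are easily confirmed numerically; the sketch below (ours, assuming `numpy`; the sample pair is arbitrary) prints the zero sets of $\mathsf{P}_0^\mathsf{I}$ and $\mathsf{P}_1^\mathsf{I}$.

```python
# Zeros of the Type I polynomial t^s - beta t^{s-q} - alpha at alpha = 0, 1.
import numpy as np

q, s = 7, 8        # e.g. the pair (1/8, 1/7) of order 8
coeffs = lambda a: ([1.0] + [0.0] * (q - 1) + [-(1.0 - a)]
                    + [0.0] * (s - q - 1) + [-a])
r0 = np.sort_complex(np.roots(coeffs(0.0)))   # 0 (s-q times) and the q-th roots of unity
r1 = np.sort_complex(np.roots(coeffs(1.0)))   # the s-th roots of unity
print(np.round(r0, 3))
print(np.round(r1, 3))
```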
**Lemma 11** (Euclid's Lemma). *If $\gcd(a,b) = 1$ and $a \mid b c$, then $a \mid c$.*
**Theorem 12**. *Let $k \in \mathbb{Z}$, $\rho \in [0,\infty)$, and $\alpha \in (0,1)$. If $\Im \left(\omega_{2s}^k\right) \ne 0$, then $\mathsf{P}_\alpha^\mathsf{I} (\rho \omega_{2s}^k) \neq 0$. Similarly, if $\Im\left(\omega_{2q}^k\right) \ne 0$, then $\mathsf{P}_\alpha^\mathsf{I} (\rho \omega_{2q}^k) \neq 0$.*
*Proof.* Since $\omega_{2s}^k$ is a zero of $t^{2s} - 1 = (t^s + 1)(t^s - 1)$, it follows that $(\omega_{2s}^k)^s = \pm 1$. For contradiction, if $\exists \rho \in [0,\infty)$ such that $$\begin{aligned}
0 = \mathsf{P}_\alpha^\mathsf{I} (\rho \omega_{2s}^k)
&= \rho^s \omega_{2s}^{ks} - \beta \left(\rho^{s-q} \omega_{2s}^{ks-kq} \right) - \alpha \\
&= \pm \rho^s \mp \beta \rho^{s-q} \omega_{2s}^{-kq} - \alpha \\
&= \left[ \pm \rho^s \mp \beta \rho^{s-q}\cos (\frac{-\pi kq}{s}) - \alpha \right] + \mathsf{i}\left[\mp \beta \rho^{s-q}\sin(\frac{-\pi k q}{s}) \right] \\
&= \left[ \pm \rho^s \mp \beta \rho^{s-q}\cos (\frac{\pi kq}{s}) - \alpha \right] + \mathsf{i}\left[\pm\beta \rho^{s-q}\sin(\frac{\pi k q}{s}) \right],\end{aligned}$$ then $$\pm \rho^s \mp \beta \rho^{s-q}\cos (\frac{\pi kq}{s}) - \alpha = 0 \label{realpart}$$ and $$\beta \rho^{s-q}\sin(\frac{\pi k q}{s}) = 0. \label{imagpart}$$ If $\rho = 0$, then, by [\[realpart\]](#realpart){reference-type="eqref" reference="realpart"}, $\alpha = 0$, a contradiction. If $\rho \ne 0$, then, by [\[imagpart\]](#imagpart){reference-type="eqref" reference="imagpart"}, $$\sin(\frac{\pi k q}{s}) = 0.$$ Thus, $s \mid k q$ and since $\gcd(q,s) = 1$, by Lemma 11, it follows that $s \mid k$. Consequently, $$\begin{aligned}
\Im \left( \omega_{2s}^k \right) = \sin(\frac{\pi k}{s}) = 0,\end{aligned}$$ a contradiction.
Since $\omega_{2q}^k$ is a zero of $t^{2q} - 1 = (t^q + 1)(t^q - 1)$, it follows that $(\omega_{2q}^k)^q = \pm 1$ and $(\omega_{2q}^k)^{-q} = \pm 1$. For contradiction, if $\exists \rho \in [0,\infty)$ such that $$\begin{aligned}
0 = \mathsf{P}_\alpha^\mathsf{I} (\rho \omega_{2q}^k)
&= (\rho \omega_{2q}^k)^s - \beta (\rho \omega_{2q}^k)^{s-q} - \alpha \\
&= \rho^s \omega_{2q}^{ks} \pm \beta \rho^{s-q} \omega_{2q}^{ks} - \alpha \\
&= \left[\rho^{s-q} (\rho^q \pm \beta)\cos (\frac{\pi k s}{q}) - \alpha \right] + \mathsf{i}\left[ \rho^{s-q}(\rho^q \pm \beta) \sin( \frac{\pi k s}{q}) \right],\end{aligned}$$ then, $$\label{realpart2}
\rho^{s-q} (\rho^q \pm \beta)\cos (\frac{\pi k s}{q}) - \alpha = 0$$ and $$\label{imagpart2}
\rho^{s-q}(\rho^q \pm \beta) \sin( \frac{\pi k s}{q}) = 0.$$ If $\rho = 0$ or $\rho^q \pm \beta = 0$, then, by [\[realpart2\]](#realpart2){reference-type="eqref" reference="realpart2"}, $\alpha = 0$, a contradiction. If $\rho\ne 0$ and $\rho^q \pm \beta \ne 0$, then, by [\[imagpart2\]](#imagpart2){reference-type="eqref" reference="imagpart2"}, $$\sin (\frac{\pi k s}{q}) = 0.$$ Thus, $q \mid ks$ and since $\gcd(q,s) = 1$, by Lemma 11, it follows that $q \mid k$. Consequently, $$\begin{aligned}
\Im \left( \omega_{2q}^k \right) = \sin(\frac{\pi k}{q}) = 0,\end{aligned}$$ a contradiction. ◻
**Observation 13**. If $2 < q < s$, then $$\frac{r-1}{s} < \frac{2r-1}{2s} < \min \left\{ \frac{p}{q}, \frac{r}{s} \right\} < \max \left\{ \frac{p}{q}, \frac{r}{s} \right\} < \frac{2r+1}{2s} < \frac{r+1}{s}.$$
*Proof.* We distinguish the following cases:
1. $\frac{p}{q} < \frac{r}{s}$. In this case, $qr-ps=1$ and it suffices to show that $$\frac{2r-1}{2s} < \frac{p}{q}$$ given that the inequalities $$\frac{r-1}{s} < \frac{2r-1}{2s}$$ and $$\frac{r}{s} < \frac{2r+1}{2s} < \frac{r+1}{s}$$ are obvious; to this end, notice that $$\frac{2r-1}{2s} < \frac{p}{q} \Longleftrightarrow 2(qr-ps) < q \Longleftrightarrow 2 < q.$$
2. $\frac{r}{s} < \frac{p}{q}$. In this case, $ps - qr = 1$ and it suffices to show that $$\frac{p}{q} < \frac{2r+1}{2s}$$ given that the inequalities $$\frac{r-1}{s} < \frac{2r-1}{2s} < \frac{r}{s}$$ and $$\frac{2r+1}{2s} < \frac{r+1}{s}$$ are obvious; to this end, notice that $$\frac{p}{q} < \frac{2r+1}{2s} \Longleftrightarrow 2(ps - qr) < q \Longleftrightarrow 2 < q. \qedhere$$
◻
**Theorem 14**. *If $\mathsf{P}_\alpha^\mathsf{I}$ is the polynomial defined in [\[type_one_ito\]](#type_one_ito){reference-type="eqref" reference="type_one_ito"}, then there is a continuous function $\mu: [0,1] \longrightarrow \mathbb{C}$ such that $\mu(0) = \omega_q^p$, $\mu(1) = \omega_{s}^{r}$, and $\mathsf{P}_\alpha^\mathsf{I} (\mu(\alpha)) = 0, \forall \alpha \in [0,1]$. Furthermore, $$\min \left\{ \frac{p}{q}, \frac{r}{s} \right\} \le \frac{\mathop{\mathrm{Arg}}\mu(\alpha)}{2\pi} \le \max \left\{ \frac{p}{q}, \frac{r}{s} \right\}, \forall \alpha \in (0,1).$$*
*Proof.* By Theorem 2, there are $s$ continuous functions $\mu_i: [0,1] \longrightarrow \mathbb{C}$, $i=1,\ldots,s$, such that $\mathsf{P}_\alpha^\mathsf{I} (\mu_i(\alpha)) = 0$, $\forall \alpha \in [0,1]$. Notice that $\exists k \in \{1,\ldots,s\}$ such that $\mu_k (0) = \omega_q^p$. By Theorem 12 and Observation 13, $\mu_k (1) = \omega_s^r$ (otherwise, $\mu_k$ would cross a "forbidden" ray). Furthermore, the arc must be entirely contained in the stated sector since, otherwise, it would cross one of the forbidden rays $\{ \rho \omega_q^p \mid \rho \ge 0 \}$ and $\{ \rho \omega_s^r \mid \rho \ge 0 \}$ (see Figure [\[fig:schematics\]](#fig:schematics){reference-type="ref" reference="fig:schematics"}). ◻
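The statement of Theorem 14 can be illustrated numerically by following the relevant root of $\mathsf{P}_\alpha^\mathsf{I}$ as $\alpha$ increases; the sketch below (ours, assuming `numpy`; nearest-root tracking is a heuristic illustration, not a proof) checks, for the pair $(1/8,1/7)$ of order $8$, that the traced root starts at $\omega_q^p$, stays in the stated sector, and ends at $\omega_s^r$.

```python
# Follow the root of P_alpha(t) = t^s - (1-alpha) t^{s-q} - alpha that starts
# at omega_q^p and check the sector condition of Theorem 14.
import numpy as np

p, q, r, s = 1, 7, 1, 8                     # Farey neighbours 1/7 and 1/8 (order 8)
lo, hi = 2 * np.pi * min(p / q, r / s), 2 * np.pi * max(p / q, r / s)

def roots(alpha):
    c = np.zeros(s + 1)
    c[0], c[q], c[s] = 1.0, -(1.0 - alpha), -alpha
    return np.roots(c)

z = np.exp(2j * np.pi * p / q)              # mu(0) = omega_q^p
for alpha in np.linspace(0.0, 1.0, 2001)[1:]:
    cand = roots(alpha)
    z = cand[np.argmin(np.abs(cand - z))]   # discrete continuation of the arc
    assert lo - 1e-6 <= np.angle(z) <= hi + 1e-6
print(np.round(z, 6), np.round(np.exp(2j * np.pi * r / s), 6))   # mu(1) ~ omega_s^r
```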
**Lemma 15**. *If $\mathsf{P}_\alpha^\mathsf{I}(\lambda) = 0$, with $\alpha \in (0,1)$ and $\lambda \ne 1$, then $\vert \lambda \vert < 1$.* (Note that $\lambda = 1$ is a zero of $\mathsf{P}_\alpha^\mathsf{I}$ for every $\alpha \in [0,1]$, so the hypothesis $\lambda \ne 1$ cannot be dropped.)
*Proof.* Suppose, for contradiction, that $\vert \lambda \vert \ge 1$. Since $0 < s-q < s$, we have $\vert \lambda \vert^{s-q} \le \vert \lambda \vert^{s}$ and $1 \le \vert \lambda \vert^{s}$, so $$\vert \lambda \vert^s = \vert \lambda^s \vert = \vert \beta \lambda^{s-q} + \alpha \vert \le \beta \vert \lambda \vert^{s-q} + \alpha \le \beta \vert \lambda \vert^{s} + \alpha \vert \lambda \vert^{s} = \vert \lambda \vert^{s}.$$ Hence equality holds throughout. Equality in $\alpha \le \alpha\vert \lambda \vert^{s}$ forces $\vert \lambda \vert = 1$, and equality in the triangle inequality forces $\beta\lambda^{s-q}$ to be a nonnegative real number, i.e., $\lambda^{s-q} = 1$. Consequently, $\lambda^{s} = \beta\lambda^{s-q} + \alpha = 1$, so $\lambda^{q} = \lambda^{s}/\lambda^{s-q} = 1$ and, since $\gcd(q,s) = 1$, $\lambda = 1$, a contradiction. ◻
**Theorem 16**. *If $\left\lfloor n/q \right\rfloor = 1$, then the K-arc $$K_n\left( \min \left\{ \frac{p}{q}, \frac{r}{s} \right\}, \max \left\{ \frac{p}{q}, \frac{r}{s} \right\} \right)$$ is simple.*
*Proof.* For contradiction, if $\mu(\alpha) = \lambda = \mu(\hat{\alpha})$ and $\mathsf{P}_\alpha(\lambda) = 0 = \mathsf{P}_{\hat{\alpha}}(\lambda)$, with $\alpha \ne \hat{\alpha}$, then $\beta \ne \hat{\beta}$ and $$\lambda^s - \beta \lambda^{s-q} - \alpha =
\lambda^s - \hat{\beta} \lambda^{s-q} - \hat{\alpha} \implies \lambda^{s-q} =
\frac{\hat{\alpha} - \alpha}{\beta - \hat{\beta}}.$$ Since $\beta - \hat{\beta} = (1-\alpha)-(1-\hat{\alpha}) = \hat{\alpha} - \alpha$, it follows that $\lambda^{s-q} = 1$. Consequently, $\vert \lambda \vert = 1$. However, if $\alpha \in (0,1)$ or $\hat{\alpha} \in (0,1)$, then, since $\lambda$ lies on the K-arc and hence $0 < \mathop{\mathrm{Arg}}\lambda < \pi$, we have $\lambda \ne 1$, so Lemma 15 gives $\vert \lambda \vert < 1$, a contradiction. Otherwise, if $\alpha = 0$ and $\hat{\alpha} = 1$, then $\omega_q^p = \mu(0) = \mu(1) = \omega_s^r$, a contradiction since $\omega_q^p \ne \omega_s^r$. The case when $\alpha = 1$ and $\hat{\alpha} = 0$ yields a similar contradiction. ◻
# Powers of K-Arcs {#sect:karcpowers}
The purpose of this section is to simplify and expand upon results by Johnson and Paparella [@jp2017 Theorem 5.4] and by Kim and Kim [@kk2020 Theorem 4] concerning point-wise powers of special Type I K-arcs.
Given $S \subseteq \mathbb{C}$ and $p \in \mathbb{N}$, let $S^p \coloneqq \{ z^p \mid z \in S \}$. Generalizing the Minkowski sum and product of sets, we call $S^p$ the *$p$th Minkowski power of $S$*.
In this section, $d$, $m$, and $n$ are integers such that $1 < d < m \le n$ and $\left( \frac{1}{m}, \frac{1}{m-1} \right)$ is a Farey pair of order $n$ (as a consequence of Theorem 5, the latter holds if and only if $n < 2m -1$). Notice that the reduced Ito polynomial is given by $$\mathsf{P}_\alpha^\mathsf{I} (t) = t^m - \beta t - \alpha,\ \alpha \in [0,1],\ \beta \coloneqq 1 - \alpha.$$ Let $\mu: [0,1] \longrightarrow \mathbb{C}$ be the continuous function guaranteed by Theorem 14 such that $\mathsf{P}_\alpha^\mathsf{I} (\mu(\alpha)) = 0$ ($\forall \alpha \in [0,1]$), $\mu(0) = \omega_{m-1}$, $\mu(1) = \omega_{m}$, and $\mathop{\mathrm{Arg}}\mu(\alpha) \in \left[ \frac{2\pi}{m}, \frac{2\pi}{m-1} \right]$ ($\forall \alpha \in [0,1]$).
The following result is also a facile consequence of Theorem 5.
**Lemma 17**. *Suppose that $d$, $m$, and $n$ are integers such that $1 < d < m \le n$.*
1. *If $m = dk$, then $\left( \frac{1}{m}, \frac{1}{m-1} \right)$ and $\left( \frac{1}{k}, \frac{d}{m-1} \right)$ are Farey pairs of order $n$ if and only if $n < m + k - 1$.*
2. *If $m-1 = dk$, then $\left( \frac{1}{m}, \frac{1}{m-1} \right)$ and $\left( \frac{d}{m}, \frac{1}{k} \right)$ are Farey pairs of order $n$ if and only if $n < m + k$.*
**Theorem 18**. *Suppose that $d$, $m$, and $n$ are integers such that $1 < d < m \le n$.*
1. *If $m = dk$ and $n < m + k - 1$, then Conjecture 3 holds for $\left( \frac{1}{k}, \frac{d}{m-1} \right)$.*
2. *If $m - 1 = dk$ and $n < m + k - 1$, then Conjecture 3 holds for $\left( \frac{d}{m}, \frac{1}{k} \right)$.*
*Proof.* (i) By Lemma 17, $\left( \frac{1}{m}, \frac{1}{m-1} \right)$ and $\left( \frac{1}{k}, \frac{d}{m-1} \right)$ are Farey pairs of order $n$. Since $k > 1$ and $$d = \frac{m}{k} \le \frac{n}{k} < \frac{m + k - 1}{k} = d + 1 - \frac{1}{k} < d + 1,$$ it follows that $\left\lfloor n/k \right\rfloor = d$. Thus, the Ito equation for $\left( \frac{1}{k}, \frac{d}{m-1} \right)$ is given by $$\begin{aligned}
t^{m-1}(t^k - \delta)^d = \gamma^d t^m,\ \gamma \in [0,1],\ \delta \coloneqq 1 -\gamma,\end{aligned}$$ and any nonzero root satisfies the reduced Ito equation $$(t^k - \delta)^d = \gamma^d t,\ \gamma \in [0,1],\ \delta \coloneqq 1 -\gamma. \label{reditopower}$$ If $\lambda \coloneqq \mu(\delta)$, then $\lambda^m - \delta = \gamma \lambda$ and $$((\lambda^d)^k - \delta)^d = (\lambda^m - \delta)^d = (\gamma \lambda)^d = \gamma^d \lambda^d,$$ i.e., $\lambda^d$ satisfies [\[reditopower\]](#reditopower){reference-type="eqref" reference="reditopower"}. Thus, the function $\hat{\mu}: [0,1] \longrightarrow\mathbb{C}$, defined by $\hat{\mu}(\delta) = \mu(\delta)^d$, is the required function.
Suppose that $m-1 = dk$, $\left( \frac{1}{m}, \frac{1}{m-1} \right)$ and $\left( \frac{d}{m}, \frac{1}{k} \right)$ are Farey pairs of order $n$, and $n < m + k - 1$. Since $k > 1$ and $$d = \frac{m-1}{k} < \frac{n}{k} < \frac{m + k - 1}{k} = d + \frac{1}{k} < d + 1,$$ it follows that $\left\lfloor n/k \right\rfloor = d$. Thus, the Ito equation for $\left( \frac{d}{m}, \frac{1}{k} \right)$ is given by $$\begin{aligned}
t^m(t^k - \delta)^d = \gamma^d t^{m-1},\ \gamma \in [0,1],\ \delta \coloneqq 1 -\gamma,\end{aligned}$$ and any nonzero root satisfies the reduced Ito equation $$t(t^k - \delta)^d = \gamma^d,\ \gamma \in [0,1],\ \delta \coloneqq 1 -\gamma. \label{reditopower2}$$ If $\lambda \coloneqq \mu(\gamma)$, then $\gamma = \lambda(\lambda^{m-1} - \delta)$ and $$\gamma^d = (\lambda(\lambda^{m-1} - \delta))^d = \lambda^d ((\lambda^d)^k - \delta)^d,$$ i.e., $\lambda^d$ satisfies [\[reditopower2\]](#reditopower2){reference-type="eqref" reference="reditopower2"}. Thus, the function $\hat{\mu}: [0,1] \longrightarrow\mathbb{C}$, defined by $\hat{\mu}(\gamma) = \mu(\gamma)^d$, is the required function. ◻
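The algebraic identity at the heart of case (i) is easy to verify numerically; the snippet below (ours, assuming `numpy`) checks, for $n=8$, $m=8$, $d=2$, $k=4$, that the $d$th powers of the roots of $t^m-\gamma t-\delta$ satisfy the reduced Ito equation $(t^k-\delta)^d=\gamma^d t$.

```python
# If lambda^m - delta = gamma*lambda, then mu_hat = lambda^d satisfies
# (mu_hat^k - delta)^d = gamma^d * mu_hat  (here m = d*k).
import numpy as np

m, d, k = 8, 2, 4
for delta in np.linspace(0.1, 0.9, 9):
    gamma = 1.0 - delta
    lam = np.roots([1.0] + [0.0] * (m - 2) + [-gamma, -delta])   # t^m - gamma t - delta
    res = (lam**d) ** k - delta
    assert np.allclose(res**d, gamma**d * lam**d, atol=1e-8)
print("powered roots satisfy the reduced Ito equation")
```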
**Remark 19**. If $n = 2\ell$, with $\ell \ge 2$, then $$K_{2\ell} \left( \frac{\ell - 1}{2\ell - 1}, \frac{1}{2} \right)
= \overline{K_{2\ell} \left( \frac{1}{2}, \frac{\ell}{2\ell - 1} \right)}
= \overline{K_{2\ell}^\ell \left( \frac{1}{2\ell}, \frac{1}{2\ell - 1} \right)}.$$
**Example 20**. If $n=8$, then there are eleven K-arcs in the upper-half plane. Table [1](#tab:theta_eight){reference-type="ref" reference="tab:theta_eight"} lists the corresponding Farey pairs with endpoints not exceeding one-half. Figure [\[fig:enter-label\]](#fig:enter-label){reference-type="ref" reference="fig:enter-label"} displays the K-arcs (solid) of $\Theta_8$ in the upper-half plane guaranteed by Theorem 14, Theorem 18, and Remark 19.
Farey Pair Type Powered Type I
------------ ------ ------------------------------
(0/1, 1/8) 0
(1/8, 1/7) I
(1/7, 1/6) I
(1/6, 1/5) I
(1/5, 1/4) II
(1/4, 2/7) II $K_8^2(1/8, 1/7)$
(2/7, 1/3) III $K_8^2(1/7, 1/6)$
(1/3, 3/8) III
(3/8, 2/5) I
(2/5, 3/7) I
(3/7, 1/2) II $\overline{K_8^4(1/8, 1/7)}$
: Arcs and types for $\Theta_8$.
# Extremal Points and Star Centers
In this section, we provide an elementary proof that $\partial\Theta_n \subseteq E_n$ whenever $n > 3$. Elementary demonstrations are also included to show that $\Theta_n$ is star-convex with star centers at $0$ ($n > 1$) and $1$ ($n \ge 1$) (see Section [8](#starcenters){reference-type="ref" reference="starcenters"} for matricial proofs).
**Lemma 21**. *If $\lambda \in \Theta_n$, then $C_p(\lambda) \coloneqq \mathop{\mathrm{conv}}(1,\lambda,\ldots,\lambda^p) \subseteq \Theta_n,\forall p \in \mathbb{N}$.*
*Proof.* If $x \in \mathop{\mathrm{conv}}(1,\lambda,\ldots,\lambda^p)$, then there are nonnegative scalars $\alpha_0,\alpha_1,\ldots,\alpha_p$ such that $\sum_{k=0}^p \alpha_k = 1$ and $$x = \sum_{k=0}^p \alpha_k \lambda^k.$$ By hypothesis, there is a stochastic matrix $M$ such that $\lambda \in \sigma (M)$. The matrix $$\hat{M} \coloneqq \sum_{k=0}^p \alpha_k M^k$$ is stochastic and $$x = \sum_{k=0}^p \alpha_k \lambda^k \in \sigma\left( \hat{M} \right). \qedhere$$ ◻
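The construction in the proof can be checked numerically; the following snippet (ours, assuming `numpy`) verifies, for a random stochastic $M$, that a convex combination of its powers is stochastic and has the corresponding combination of eigenvalue powers in its spectrum.

```python
# For stochastic M with eigenvalue lam, sum_k a_k M^k is stochastic and has
# sum_k a_k lam^k as an eigenvalue.
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((5, 5)); M /= M.sum(axis=1, keepdims=True)
lam = np.linalg.eigvals(M)[1]                     # some eigenvalue of M
a = rng.random(4); a /= a.sum()                   # convex weights a_0, ..., a_3
M_hat = sum(a[k] * np.linalg.matrix_power(M, k) for k in range(4))
x = sum(a[k] * lam**k for k in range(4))
assert np.allclose(M_hat.sum(axis=1), 1) and np.all(M_hat >= 0)
assert np.min(np.abs(np.linalg.eigvals(M_hat) - x)) < 1e-8
print("convex combination of powers is stochastic and contains", np.round(x, 4))
```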
**Corollary 22**. *If $n \in \mathbb{N}$, then $\Theta_n$ is star-convex with star-center $1$.*
**Remark 23**. More can be asserted: if $\lambda \in \{ z \in \mathbb{C} : \vert{z}\vert < 1 \}$, and $\Re \lambda \le 0$ or $\Im\lambda \ne 0$, then $\exists q \in \mathbb{N}$ such that $C_p(\lambda) \subseteq C_q(\lambda),\ \forall p \ge q$ (see Dubuc and Malik [@dm1992 Lemma 2.1]). The integer $q = q(\lambda)$ is called the *color* of $\lambda$.
**Theorem 24**. *If $n > 1$, then $\Theta_n$ is star-convex with star-center $0$.*
*Proof.* If $\lambda \in \Theta_n$ and $\Im \lambda = 0$, then the result follows since $\Theta_n \supseteq [-1, 1]$. Otherwise, assume that $\Im \lambda \ne 0$. In view of Proposition 6, without loss of generality, it may be assumed that $\Im \lambda > 0$. Writing $\lambda = r (\cos\theta + \mathsf{i}\sin\theta)$ with $\theta = \mathop{\mathrm{Arg}}\lambda \in (0,\pi)$, notice that there is an integer $k > 1$ such that $\theta \in \left[ \frac{\pi}{k}, \frac{\pi}{k-1} \right)$. It can be shown that $\lambda$ is a zero of the polynomial $$Q_j (t) \coloneqq t^k - \frac{\sin k \theta}{\sin j \theta}r^{k-j} t^j + \frac{\sin(k-j)\theta}{\sin j \theta} r^k,\ 1 \le j < k$$ (for full details, see Munger and Paparella [@mp2023 Theorem 3.2]). Furthermore, $Q_j$ has nonnegative coefficients and $Q_j(0) \ne 0$. Dividing the equation $Q_j(\lambda) = 0$ by $Q_j(1) > 0$ exhibits $0$ as a convex combination of $1$, $\lambda^j$, and $\lambda^k$, so $0 \in C_k(\lambda)$. Since $\lambda \in C_k(\lambda)$ and $C_k(\lambda)$ is convex, it follows that $[0,\lambda] \subseteq C_k(\lambda) \subseteq \Theta_n$ by Lemma 21. ◻
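The properties of $Q_j$ used above are straightforward to verify numerically; the sketch below (ours, assuming `numpy`; the sample values of $r$, $\theta$, $k$, $j$ are arbitrary) checks that $Q_j(\lambda)=0$, that its coefficients are nonnegative, and that dividing by $Q_j(1)$ exhibits $0$ as a convex combination of $1$, $\lambda^j$, and $\lambda^k$.

```python
# Verify the polynomial Q_j used in the proof of Theorem 24.
import numpy as np

r, k, j = 0.8, 4, 2
theta = np.pi / k + 0.05                       # lies in [pi/4, pi/3)
lam = r * np.exp(1j * theta)
c_j = -np.sin(k * theta) / np.sin(j * theta) * r ** (k - j)
c_0 = np.sin((k - j) * theta) / np.sin(j * theta) * r ** k
assert c_j >= 0 and c_0 > 0
assert abs(lam**k + c_j * lam**j + c_0) < 1e-12           # Q_j(lambda) = 0
w = np.array([c_0, c_j, 1.0]) / (1.0 + c_j + c_0)         # weights of 1, lam^j, lam^k
assert abs(w @ np.array([1.0, lam**j, lam**k])) < 1e-12   # 0 in conv(1, lam^j, lam^k)
print("Q_j vanishes at lambda and certifies 0 in C_k(lambda)")
```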
**Theorem 25**. *If $n > 3$, then $\partial\Theta_n \subseteq E_n$.*
*Proof.* Since $\partial\Theta_n \cap \{ z \in \mathbb{C} \mid \Im z = 0 \} = \{-1,1\}$, it suffices to show that $\lambda \in E_n$ whenever $\lambda \in \partial\Theta_n$ and $\Im \lambda \ne 0$.
To this end, let $\lambda = a + \mathsf{i} b \in \partial\Theta_n$ and suppose, for contradiction, that $\exists \gamma \in (1,\infty)$ such that $\gamma\lambda\in\Theta_n$. By Proposition 6, it can be assumed, without loss of generality, that $b = \Im \lambda > 0$.
Recall that the distance from a point $(x_0,y_0) \in \mathbb{R}^2$ to the line passing through the points $p_1 \coloneqq (x_1,y_1)$ and $p_2 \coloneqq (x_2,y_2)$ is given by $$d \left( \overleftrightarrow{p_1 p_2},(x_0,y_0) \right) = \frac{\vert (x_2 - x_1)(y_1 - y_0) - (x_1 - x_0)(y_2 - y_1) \vert}{\vert\vert P_1 - P_2 \vert\vert_2}.$$
A tedious, but straightforward calculation reveals that if $(x_0,y_0) = (a,b)$, $p_1 = (1,0)$, and $p_2 = (\gamma a,\gamma b)$, then $$d_1 \coloneqq d \left( \overleftrightarrow{p_1 p_2},(x_0,y_0) \right) = \frac{b(\gamma - 1)}{\vert\vert P_1 - P_2 \vert\vert_2} > 0.$$ Similarly, if $(x_0,y_0) = (a,b)$, $p_1 = (\gamma a,\gamma b)$, and $p_2 = (\gamma^2 (a^2 - b^2), 2\gamma^2 ab)$, then $$d_2 \coloneqq d \left(\overleftrightarrow{p_1 p_2},(x_0,y_0)\right) = \frac{\gamma^2(\gamma - 1)b(a^2 + b^2)}{\vert\vert P_1 - P_2 \vert\vert_2} > 0.$$ Thus, if $0 < \varepsilon < \min\{ d_1, d_2 \}$, then $N_\varepsilon (\lambda) \coloneqq \{ z \in \mathbb{C} \mid \vert z - \lambda \vert < \varepsilon \} \subset C_q(\gamma\lambda) \subseteq \Theta_n$, a contradiction. ◻
# Implications for Further Inquiry
In addition to establishing Conjecture 3 for Type II and Type III K-arcs that are not Minkowski powers of special Type I arcs, it would be of interest to show that points on K-arcs are extremal via elementary means.
# Star Convexity {#starcenters}
In this section, we give matricial proofs that $\Theta_n$ is star-convex with star centers at $0$ and $1$.
**Lemma 26**. *If $A$ and $B$ are stochastic, $\alpha \in [0,1]$, and $\beta \coloneqq 1 - \alpha$, then $\alpha A + \beta B$ is stochastic.*
*Proof.* Recall that if $A \ge 0$, then $A$ is stochastic if and only if $Ae = e$, where $e$ denotes the all-ones vector. Notice that $\alpha A + \beta B \ge 0$ and $$\left(\alpha A + \beta B \right)e = \alpha Ae + \beta Be = \alpha e + \beta e = e,$$ which establishes the result. ◻
The following eigenvalue-perturbation result is due to Brauer (for proofs, see McDonald and Paparella [@mp2021] and references therein).
**Theorem 27** (Brauer [@b1952 Theorem 27]). *Let $A \in \mathsf{M}_n(\mathbb{C})$ and suppose that $\sigma(A) = \{ \lambda_1, \ldots, \lambda_k, \dots, \lambda_n \}$ (including multiplicities). If $x$ is an eigenvector associated with $\lambda_k$ and $y \in \mathbb{C}^n$, then the matrix $A + xy^*$ has eigenvalues $\{ \lambda_1,\ldots, \lambda_{k-1}, \lambda_k + y^*x, \lambda_{k+1}, \dots, \lambda_n \}$.*
**Theorem 28**. *If $n>1$, then the set $\Theta_n$ is star-convex with star center zero.*
*Proof.* If $\lambda \in \Theta_n$, then there is a stochastic matrix $M$ such that $\lambda \in \sigma (M)$. Suppose that $\sigma(M)=\{1,\lambda_2,\ldots,\lambda_n\}$, where $\lambda = \lambda_k$. If $\alpha \in \mathbb{C}$, then $\sigma(\alpha M)=\{\alpha,\alpha\lambda_2,\ldots,\alpha\lambda_n\}$. Since $\alpha Me = \alpha e$, by Theorem 27, it follows that $$\begin{aligned}
\sigma \left( \alpha M+\frac{1-\alpha}{n}ee^\top \right)
&= \sigma \left( \alpha M + e \left( \frac{1-\alpha}{n} e^\top\right) \right) \\
&= \left\{\alpha + \frac{1-\alpha}{n} (e^\top e), \alpha\lambda_2,\ldots,\alpha\lambda_n \right\} \\
&= \{\alpha + 1-\alpha, \alpha\lambda_2,\ldots,\alpha\lambda_n\} \\
&=\{1, \alpha\lambda_2,\ldots,\alpha\lambda_n\}. \end{aligned}$$ Thus, $$\alpha\lambda\in \sigma \left( \alpha M+\frac{1-\alpha}{n}ee^\top \right), \forall\alpha\in[0,1].$$ By Lemma 26, the matrix $\alpha M+\frac{1-\alpha}{n}ee^\top$ is stochastic. Hence, $\alpha\lambda\in \Theta_n$, $\forall \alpha \in [0,1]$. ◻
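The spectral computation above is easy to confirm numerically; the snippet below (ours, assuming `numpy`) checks that $\alpha M+\frac{1-\alpha}{n}ee^\top$ is stochastic and that its spectrum consists of $1$ together with $\alpha$ times the non-unit eigenvalues of $M$.

```python
# Verify the rank-one update used in the proof of Theorem 28.
import numpy as np

rng = np.random.default_rng(2)
n, alpha = 5, 0.6
M = rng.random((n, n)); M /= M.sum(axis=1, keepdims=True)
B = alpha * M + (1 - alpha) / n * np.ones((n, n))
assert np.allclose(B.sum(axis=1), 1) and np.all(B >= 0)
ev_B = np.linalg.eigvals(B)
for z in np.linalg.eigvals(M):
    target = 1.0 if np.isclose(z, 1) else alpha * z
    assert np.min(np.abs(ev_B - target)) < 1e-8
print("spectrum of B is {1} together with alpha * (non-unit eigenvalues of M)")
```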
**Lemma 29**. *If $n\ge1$, then the set $\Theta_n$ is star-convex with star center one.*
*Proof.* If $\lambda \in \Theta_n$, then there is a stochastic matrix $M$ such that $\lambda \in \sigma (M)$. Suppose that $\sigma(M)=\{1,\lambda_2,\ldots,\lambda_n\}$, where $\lambda = \lambda_k$. Let $S$ be an invertible matrix such that $J = S^{-1} M S$ is a Jordan canonical form of $M$. If $\alpha \in [0,1]$ and $\beta \coloneqq 1 - \alpha$, then the matrix $\alpha M + \beta I$ is stochastic (Lemma 26) and since $S^{-1}\left(\alpha M + \beta I\right)S = \alpha J + \beta I$ is upper-triangular with spectrum $\{1,\alpha \lambda_2 + \beta,\ldots,\alpha\lambda_n + \beta\}$, it follows that $\sigma(\alpha M + \beta I) = \{1,\alpha \lambda_2 + \beta,\ldots,\alpha\lambda_n + \beta\}$, i.e., $\alpha\lambda + \beta \in \Theta_n$, $\forall \alpha \in [0,1]$. ◻
[^1]: Many authors misattribute Kolmogorov's proposal to a 1937 paper, which is perhaps due to a misattribution by Gantmacher in 1959 (for full details, see Swift [@s1972 p. 2]).
| arxiv_math | {
"id": "2309.03849",
"title": "Demystifying the Karpelevi{\\v{c}} theorem",
"authors": "Devon N. Munger and Andrew L. Nickerson and Pietro Paparella",
"categories": "math.SP math.CV",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
  Virtual constraints are relations imposed on a control system that become invariant via feedback control, as opposed to physical constraints acting on the system. Nonholonomic systems are mechanical systems with non-integrable constraints on the velocities. In this work, we introduce the notion of *virtual nonlinear nonholonomic constraints* in a geometric framework, defined as a controlled invariant submanifold, and we show the existence and uniqueness of a control law preserving this submanifold. We illustrate the theory with various examples and present simulation results for an application.
author:
- "Efstratios Stratoglou, Alexandre Anahory Simoes, Anthony Bloch , Leonardo J. Colombo. [^1] [^2] [^3] [^4] [^5]"
title: On the Geometry of Virtual Nonlinear Nonholonomic Constraints
---
Virtual constraints, Nonholonomic systems, Geometric control, feedback control, Nonlinear constraints.
# Introduction
Virtual constraints are relations on the configuration variables of a control system which are imposed through feedback control and the action of actuators, instead of through physical connections such as gears or contact conditions with the environment. The advantage of working with virtual constraints is that they can be re-programmed instantaneously without any change to the connections of the links of a robot or its environment. As a consequence, one may achieve a desired prescribed motion by imposing virtual constraints. Virtual constraints extend the application of zero dynamics to feedback design (see e.g., [@Isidori], [@Westervelt; @et; @al; @2018]).
Virtual holonomic constraints have been studied over the past few years in a variety of contexts, such as motion planning and control [@Freidovich; @et; @al], [@Shiriaev; @et; @al], [@Mohammadi; @et; @al], [@Westerberg; @et; @al] and biped locomotion, where they were used to achieve a desired walking gait [@Chevallereau; @et; @al], [@Westervelt; @et; @al]. Virtual nonholonomic constraints are a class of virtual constraints that depend on velocities rather than only on the configurations of the system. Those virtual constraints were introduced in [@griffin2015nonholonomic] to design a velocity-based swing foot placement in bipedal robots. In particular, this class of virtual constraints was used in [@hamed2019nonholonomic; @horn2020nonholonomic; @horn2018hybrid; @horn2021nonholonomic] to encode velocity-dependent stable walking gaits via momenta conjugate to the unactuated degrees of freedom of legged robots and prosthetic legs.
The recent work [@moran2021energy] (see also [@moran2023]) introduces an approach to rigorously defining virtual nonholonomic constraints, but it is not set in the most appropriate geometric setting to study this kind of constraint: that of tangent bundles. In their study, the authors make no distinction between a constraint being invariant under the closed-loop system and being stabilized by it. In our work, we only consider a constraint to be a virtual constraint when it is invariant under the controlled motion.
In the paper [@virtual], we developed a geometric description of linear virtual nonholonomic constraints, i.e., constraints that are linear in the velocities, while in [@affine] we addressed the problem of affine virtual nonholonomic constraints, but the nonlinear case was not addressed because the nature of the constraints makes a thorough mathematical analysis difficult. In the present work, we extend the latest outcomes by laying the geometric foundations of virtual nonlinear nonholonomic constraints and studying their properties. We ensure the existence and uniqueness of a control law that makes the constraints invariant and we explore some consequences for the corresponding closed-loop system. In addition, we check under which conditions the closed-loop dynamics coincides with the nonholonomic dynamics under the nonlinear constraints. Lastly, we give an explicit application of the theory to the motion of particles moving with an alignment on the velocities and we test our results with numerical simulations.
The remainder of the paper is structured as follows. In Section II we present the necessary background for mechanical systems on Riemannian manifolds and recall the equations of motion for a nonlinear nonholonomic mechanical system. In Section III we give a geometric construction of virtual nonholonomic constraints. The main result of the paper is Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"}, where, under some assumptions, we prove the existence and uniqueness of a control law making the constraints invariant under the closed-loop system. Additionally, we examine the geometric properties of the closed-loop dynamics and illustrate them with examples. Next, in Section IV we present an application of our results to the motion of particles moving with an alignment on the velocities, where we enforce a virtual nonlinear nonholonomic constraint satisfying the assumptions of Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"} and simulate its behavior.
# Mechanical Systems on Riemannian Manifolds
In this section, we will review the equations of motion for mechanical systems subject to nonlinear constraints.
Suppose $Q$ is a differentiable manifold of dimension $n$. Throughout the text, $q^{i}$ will denote a particular choice of local coordinates on this manifold and $TQ$ denotes its tangent bundle, with $T_{q}Q$ denoting the tangent space at a specific point $q\in Q$ generated by the coordinate vectors $\frac{\partial}{\partial q^{i}}$. Usually $v_{q}$ denotes a vector in $T_{q}Q$ and, in addition, the coordinate chart $q^{i}$ induces a natural coordinate chart on $TQ$ denoted by $(q^{i},\dot{q}^{i})$. There is a canonical projection $\tau_{Q}:TQ \rightarrow Q$, sending each vector $v_{q}$ to the corresponding base point $q$. Note that in coordinates $\tau_{Q}(q^{i},\dot{q}^{i})=q^{i}$. The tangent map of the canonical projection is given by $T\tau_Q:TTQ\to TQ.$ The cotangent bundle of $Q$ is denoted by $T^*Q$ and for $q\in Q$ the cotangent space $T^*_qQ$ is generated by cotangent vectors $dq^i$, which satisfy the dual pairing $\langle dq^i,\frac{\partial}{\partial q^j}\rangle=\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.
A vector field $X$ on $Q$ is a map assigning to each point $q\in Q$ a vector tangent to $q$, that is, $X(q)\in T_{q}Q$. In the context of mechanical systems, we find a special type of vector fields that are always defined on the tangent bundle $TQ$, considered as a manifold itself. A second-order vector field (SODE) $\Gamma$ on the tangent bundle $TQ$ is a vector field on the tangent bundle satisfying the property that $T\tau_{Q}\left(\Gamma (v_{q})\right) = v_{q}$. The expression of any SODE in coordinates is the following: $$\Gamma(q^{i},\dot{q}^{i})= \dot{q}^{i}\frac{\partial}{\partial q^{i}} + f^{i}(q^{i},\dot{q}^{i}) \frac{\partial}{\partial \dot{q}^{i}},$$ where $f^{i}:TQ \rightarrow \mathbb{R}$ are $n$ smooth functions. We denote the set of all vector fields on $Q$ by $\mathfrak{X}(Q)$.
A one-form $\alpha$ on $Q$ is a map assigning to each point $q$ a cotangent vector at $q$, that is, $\alpha(q)\in T^{*}_{q}Q$. Cotangent vectors act linearly on vector fields according to $\alpha(X) = \alpha_{i}X^{i}\in \mathbb{R}$ if $\alpha = \alpha_{i}dq^{i}$ and $X = X^{i} \frac{\partial}{\partial q^{i}}$. In the following, we will refer to two-forms or $(0,2)$-tensor fields, which are bilinear maps that act on a pair of vector fields to produce a number, and also to $(1,1)$-tensor fields, which are linear maps that act on a vector field to produce a new vector field.
A symplectic form $\omega$ on a manifold $M$ is a $(0,2)$-type tensor field that is skew-symmetric and non-degenerate, i.e., $\omega(X,Y)=-\omega(Y,X)$ for all vector fields $X$ and $Y$, and if $\omega(X,Y)=0$ for all vector fields $X$ then $Y=0$.
The symplectic form induces a linear isomorphism $\flat_{\omega}:\mathfrak{X}(M)\rightarrow \Omega^{1}(M)$, given by $\langle\flat_{\omega}(X),Y\rangle=\omega(X,Y)$ for any vector fields $X, Y$. The inverse of $\flat_{\omega}$ will be denoted by $\sharp_{\omega}$.
In the following, we will use the canonical almost tangent structure $J:TTQ \rightarrow TTQ$. This is a type $(1,1)$-tensor field on $TQ$ whose expression in local coordinates is $J=dq^{i}\otimes \frac{\partial}{\partial \dot{q}^{i}}$, where $\otimes$ stands for the tensor product. For instance, if $\Gamma$ is a SODE vector field, then $J(\Gamma) = \dot{q}^{i}\frac{\partial}{\partial \dot{q}^{i}}$.
Given a Lagrangian function $L:TQ\rightarrow \mathbb{R}$, the associated energy $E_{L}$ is the function defined by $E_{L}(q,\dot{q})=\dot{q}\frac{\partial L}{\partial \dot{q}} - L(q,\dot{q})$ and we may write a symplectic form on $TQ$, denoted by $\omega_{L}$, defined by $\omega_L = -d(J^{*}dL)$. In natural coordinates of $TQ$, $\omega_{L}=\frac{\partial^{2} L}{\partial \dot{q}^{i} \partial q^{j}} dq^{i}\wedge dq^{j} + \frac{\partial^{2} L}{\partial \dot{q}^{i} \partial \dot{q}^{j}} dq^{i}\wedge d\dot{q}^{j}$. This geometric construction is used to write Euler-Lagrange dynamics as the integral curves of the vector field $\Gamma_{L}$ solving the equation $i_{\Gamma_{L}}\omega_{L}=dE_{L}$, where $i_{\Gamma_{L}}\omega_{L}$ denotes the contraction of $\Gamma_L$ and $\omega_L$ (see [@B]). In fact, this is the geometric equation defining Hamiltonian vector fields in general symplectic manifolds.
Before proceeding, we will recall the definition of Riemannian metric. A Riemannian metric is a generalization of the inner product on a vector space to arbitrary manifolds. In fact, one can describe it as an inner product in each tangent space $T_{q}Q$ that varies smoothly with the base point $q$. In particular, since the metric will be an inner product on each tangent space, as we will see below, it will be defined on the space $TQ\times_{Q}TQ$, composed of pairs of tangent vectors lying in the same tangent space. In this way, we avoid defining the inner product between two vectors that are tangent at different points. More precisely,
**Definition 1**. *A Riemannian metric $\mathcal{G}$ on a manifold $Q$ is a $(0,2)$-tensor, i.e., a bilinear map $\mathcal{G}:TQ\times_{Q} TQ \rightarrow \mathbb{R}$, satisfying the following properties:*
1. *symmetric: $\mathcal{G}(v_{q},w_{q})=\mathcal{G}(w_{q},v_{q})$ for all $q\in Q$ and $v_{q}$,$w_{q} \in T_{q}Q$.*
2. *non-degenerate: $\mathcal{G}(v_{q},w_{q})=0$ for all $w_{q}\in TQ$ if and only if $v_{q}=0$.*
3. *positive-definite: $\mathcal{G}(v_{q},v_{q})\geqslant 0$, with equality holding only if $v_{q}=0$.*
*Accordingly, if $\mathcal{G}$ is a Riemannian metric then the pair $(Q,\mathcal{G})$ is called a Riemannian manifold.*
If $(q^{1},\dots,q^{n})$ are local coordinates on $Q$, then the local expression of the Riemannian metric $\mathcal{G}$ is $\displaystyle{
\mathcal{G}=\mathcal{G}_{i j} dq^{i} \otimes dq^{j}}$ with $\displaystyle{\mathcal{G}_{i j}=\mathcal{G}\left(\frac{\partial}{\partial q^{i}},\frac{\partial}{\partial q^{j}}\right)}$.
In the following, we will make use of a special technique to lift a Riemannian metric on a manifold to a metric on the tangent bundle $TQ$. The complete lift of a Riemannian metric $\mathcal{G}$ on $Q$ is denoted by $\mathcal{G}^{c}$ and it is almost a Riemannian metric, since it does not satisfy property (iii) from the definition above, i.e., it is not positive-definite, which is similar to what happens in special relativity (see for instance [@schutz]) where the metric is indefinite. Given natural bundle coordinates on $TQ$, its local expression is $\displaystyle{\mathcal{G}^{c}=\dot{q}^{k}\frac{\partial \mathcal{G}_{ij}}{\partial q^{k}} dq^{i} \otimes dq^{j} + \mathcal{G}_{i j} dq^{i} \otimes d\dot{q}^{j} + \mathcal{G}_{i j} d\dot{q}^{i} \otimes dq^{j}}$.
For the Riemannian metric $\mathcal{G}$ on $Q$, we can use its non-degeneracy property to define the musical isomorphism $\flat_{\mathcal{G}}:\mathfrak{X}(Q)\rightarrow \Omega^{1}(Q)$ given by $\flat_{\mathcal{G}}(X)(Y)=\mathcal{G}(X,Y)$ for any $X, Y \in \mathfrak{X}(Q)$. Also, denote by $\sharp_{\mathcal{G}}:\Omega^{1}(Q)\rightarrow \mathfrak{X}(Q)$ the inverse musical isomorphism, i.e., $\sharp_{\mathcal{G}}=\flat_{\mathcal{G}}^{-1}$.
**Definition 2**. *The vertical lift of a vector field $X\in \mathfrak{X}(Q)$ to $TQ$ is defined by $$X_{v_{q}}^{V}=\left. \frac{d}{dt}\right|_{t=0} (v_{q} + t X(q)).$$ The complete lift of a vector field, $X$, which in local coordinates is given by $X=X^i\frac{\partial}{\partial q^i}$ is $$X^c=X^i\frac{\partial}{\partial q^i} + \dot{q}^j\frac{\partial X^i}{\partial q^j}\frac{\partial}{\partial \dot{q}^i}.$$ The vertical lift of a one-form $\alpha\in\Omega^1(Q)$ is defined as the pullback of $\alpha$ to $TQ$, i.e. $$\alpha^V=(\tau_Q)^*\alpha,$$ which locally is $\alpha^V=\alpha_idq^i$ and its complete lift is $$\alpha^c=\dot{q}^j\frac{\partial\alpha_i}{\partial q^j}dq^i + \alpha_id\dot{q}^i.$$*
**Proposition 1**. *For a Riemannian metric $\mathcal{G}$ on $Q$, vector fields $X,Y\in\mathfrak{X}(Q)$ and a one-form $\alpha\in\Omega^1(Q)$ we have $$(\alpha(X))^V=\alpha^c(X^V),$$ $$\mathcal{G}^c(X^V,Y^c) = \mathcal{G}^c(X^c,Y^V) = [\mathcal{G}(X,Y)]^V,$$ $$\mathcal{G}^c(X^V,Y^V)=0.$$*
For details on the preceding constructions, see [@Leon_Rodrigues].
The complete lift of a Riemannian metric possesses useful properties such as the one described in the following lemma.
**Lemma 1**. *Let $(Q,\mathcal{G})$ be a Riemannian manifold and $\alpha\in \Omega^{1}(Q)$ a one-form. Then, $$\left[ \sharp_{\mathcal{G}}(\alpha)\right]^{V} = \sharp_{\mathcal{G}^{c}}(\alpha^{V}).$$*
*Proof.* Given any $Y\in\mathfrak{X}(Q)$, it is enough to prove the equality using the inner product with the lifts $Y^{c}$ and $Y^{V}$, because if $\{Y^{a}\}$ was a local basis of vector fields, then $\{(Y^{a})^{c}, (Y^{a})^{V}\}$ would also be a local basis of vector fields on $TQ$.
On one hand, $$\mathcal{G}^{c}\left(\left[ \sharp_{\mathcal{G}}(\alpha)\right]^{V},Y^{V} \right) = 0 = \alpha^{V}(Y^{V}) = \mathcal{G}^{c}\left(\sharp_{\mathcal{G}^{c}}(\alpha^{V}),Y^{V} \right).$$
On the other hand, $$\mathcal{G}^{c} \left(\left[ \sharp_{\mathcal{G}}(\alpha)\right]^{V}, Y^{c} \right) = \left[ \mathcal{G}(\sharp_{\mathcal{G}}(\alpha), Y)\right]^{V} = \left[ \alpha(Y)\right]^{V}$$
$$= \alpha^{V}(Y^{c})=\mathcal{G}^{c} \left( \sharp_{\mathcal{G}^{c}}(\alpha^{V}),Y^{c} \right).$$ Hence, the results follows by non-degeneracy of $\mathcal{G}^{c}$. ◻
Finally, we recall the concept of a linear connection on a manifold, which generalizes to manifolds the directional derivative of a vector field along another. Formally, a linear connection on a manifold $Q$ is any map of the form $\nabla:\mathfrak{X}(Q)\times \mathfrak{X}(Q) \rightarrow \mathfrak{X}(Q)$ which is $C^{\infty}(Q)$-linear on the first factor, $\mathbb{R}$-linear in the second factor, and if we denote the image of $X, Y \in \mathfrak{X}(Q)$ by $\nabla_{X} Y$, then $\nabla$ satisfies the Leibniz differentiation rule, i.e., $\nabla_{X} (f Y)=X(f)\cdot Y+f\cdot \nabla_{X} Y$ for every $f\in C^{\infty}(Q)$. In local coordinates, connections are fully described by the Christoffel symbols, which are real-valued functions on $Q$ given by $$\nabla_{\frac{\partial}{\partial q^{i}}}\frac{\partial}{\partial q^{j}}=\Gamma_{i j}^{k}\frac{\partial}{\partial q^{k}}.$$ Thus if $X$ and $Y$ are vector fields whose coordinate expressions are $X=X^{i}\frac{\partial}{\partial q^{i}}$ and $Y=Y^{i}\frac{\partial}{\partial q^{i}}$, then $$\nabla_{X} Y=\left(X^{i}\frac{\partial Y^{k}}{\partial q^{i}}+X^{i}Y^{j} \Gamma_{i j}^{k}\right)\frac{\partial}{\partial q^{k}}.$$
In a Riemannian manifold, there is a special linear connection--the *Levi-Civita connection*--associated to the Riemannian metric $\mathcal{G}$. This is the unique connection $\nabla^{\mathcal{G}}:\mathfrak{X}(Q)\times \mathfrak{X}(Q) \rightarrow \mathfrak{X}(Q)$ satisfying the following two additional properties:
1. $[ X,Y]=\nabla_{X}^{\mathcal{G}}Y-\nabla_{Y}^{\mathcal{G}}X$ (symmetry)
2. $X(\mathcal{G}(Y,Z))=\mathcal{G}(\nabla_{X}^{\mathcal{G}}Y,Z)+\mathcal{G}(Y,\nabla_{X}^{\mathcal{G}}Z)$ (compatibility of the metric).
We might also introduce the covariant derivative of a vector field along a curve. The covariant derivative of a vector field $X\in \mathfrak{X}(Q)$ along a curve $q:I\rightarrow Q$, where $I$ is an interval of $\mathbb{R}$, is given by the local expression $$\nabla_{\dot{q}}X (t)=\left( \dot{X}^{k}(t)+\dot{q}^{i}(t) X^{j}(t)\Gamma_{i j}^{k}(q(t)) \right)\frac{\partial}{\partial q^{k}}.$$ A geodesic in a Riemannian manifold is a curve that locally minimizes the length between nearby points. Geodesics are characterized by the equation $\nabla_{\dot{q}} \dot{q}=0$.
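As a concrete illustration of these notions, the following symbolic sketch (ours, using `sympy`; it relies on the standard coordinate formula for the Levi-Civita Christoffel symbols, which is not written out in the text) computes $\Gamma^{k}_{ij}$ for the Euclidean metric in polar coordinates.

```python
# Christoffel symbols of the Levi-Civita connection via
# Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij}),
# here for g = diag(1, r^2) in polar coordinates (r, phi).
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
q = [r, phi]
g = sp.Matrix([[1, 0], [0, r**2]])
g_inv = g.inv()

def christoffel(k, i, j):
    return sp.simplify(sum(sp.Rational(1, 2) * g_inv[k, l] *
                           (sp.diff(g[j, l], q[i]) + sp.diff(g[i, l], q[j])
                            - sp.diff(g[i, j], q[l]))
                           for l in range(2)))

print(christoffel(0, 1, 1))   # Gamma^r_{phi phi} = -r
print(christoffel(1, 0, 1))   # Gamma^phi_{r phi} = 1/r
```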
## Nonlinear nonholonomic mechanics
A nonlinear nonholonomic constraint on a mechanical system is a submanifold $\mathcal{M}$ of the tangent bundle $TQ$ which the velocity of the system cannot leave. Mathematically, the constraint may be written as the set of points where a function of the type $\phi:TQ \rightarrow \mathbb{R}^{m}$ vanishes, where $m < n=\dim Q$. That is, $\mathcal{M}=\phi^{-1}(\{0\})$. If every point in $\mathcal{M}$ is regular, i.e., the tangent map $T_{p}\phi$ is surjective for every $p\in \mathcal{M}$, then $\mathcal{M}$ is a submanifold of $TQ$ with dimension $2n-m$ by the regular level set theorem.
Now let $\phi = (\phi^{1}, \dots, \phi^{m})$ denote the coordinate functions of the constraint $\phi$. Considering the dual of the canonical almost tangent structure $J$, we have that $J^{*}(d\phi^{a}) = \frac{\partial \phi^{a}}{\partial \dot{q}^{i}}dq^{i}$. Notice also that $J^{*}(d\phi^{a})(X^{V}) = 0$. The equations of motion are integral curves of a vector field $\Gamma_{nh}$ defined by the equations $$\label{noneq}
\begin{split}
& i_{\Gamma_{nh}}\omega_{L} - dE_{L} = \lambda_{a}J^{*}(d\phi^{a}) \\
& \Gamma_{nh} \in T\mathcal{M},
\end{split}$$ where $\lambda_{a}$ are Lagrange multipliers to be determined.
These equations have a well-defined solution if $\sharp_{\omega_{L}}(J^{*}(d\phi^{a}))\cap T\mathcal{M} = \{0\}$. Moreover, in coordinates, the integral curves of $\Gamma_{nh}$ satisfy Chetaev's equations: $$\begin{split}
& \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right)-\frac{\partial L}{\partial q}=\lambda_{a} \frac{\partial \phi^{a}}{\partial \dot{q}} \\
& \phi^{a}(q,\dot{q}) = 0
\end{split}$$ and they are the equations of motion for systems with nonlinear constraints (see [@paula], [@cendra], [@MdLeon] for more details).
In the following, we will consider a slight generalization of the concept of distribution. We will consider a mapping that to each point $v_{q}$ on the submanifold $\mathcal{M}$ assigns a vector subspace of $T_{v_{q}}(TQ)$. This map is a distribution on $TQ$ restricted to $\mathcal{M}$. From now on, let $S$ be a distribution on $TQ$ restricted to $\mathcal{M}$, whose annihilator is spanned by the one-forms $J^{*}(d\phi^{a})$, i.e.,$$S^{o}=\left\langle \{J^{*} (d\phi^{a})\}\right\rangle$$
Chetaev's equations may be written in Riemannian form using a geodesic-like equation according to the following theorem:
**Theorem 1**. *A curve $q:I\rightarrow Q$ is a solution of Chetaev's equations for a mechanical type Lagrangian if and only if $\phi(q,\dot{q})=0$ and it satisfies the equation $$\label{Chetaev's eqns}
\left( \nabla_{\dot{q}}\dot{q} + \text{grad} V \right)^{V} \in S^{\bot},$$ where $S^{\bot}$ is the orthogonal distribution to $S$ with respect to the semi-Riemannian metric $\mathcal{G}^{c}$.*
*Proof.* Suppose the Lagrangian $L$ is determined by a Riemannian metric $\mathcal{G}$ on $Q$ and a potential function $V$, so that its local expression is $$L(q,\dot{q})=\frac{1}{2}\mathcal{G}_{ij}\dot{q}^{i}\dot{q}^{j} - V(q).$$
Chetaev's equations consist of Euler-Lagrange equations plus a reaction force term responsible for enforcing the constraints. In local coordinates we eventually get $$\ddot{q}^{i} - \mathcal{G}^{ij}\left[ \frac{1}{2}\frac{\partial \mathcal{G}_{lk}}{\partial q^{j}}\dot{q}^{l}\dot{q}^{k}-\frac{\partial \mathcal{G}_{lj}}{\partial q^{k}}\dot{q}^{l}\dot{q}^{k} - \frac{\partial V}{\partial q^{j}}\right] = \lambda_{a}\mathcal{G}^{ij}\frac{\partial \phi^{a}}{\partial \dot{q}^{j}},$$ where $\mathcal{G}^{ij}$ is the inverse matrix of $\mathcal{G}_{ij}$. The left-hand side can be recognized to be the coordinate expression of the vector field $$\nabla_{\dot{q}}\dot{q} + \text{grad} V$$ (see [@B&L] for details). We will show that the right-hand side is the coordinate expression of the vector field $\sharp_{\mathcal{G}^{c}}(J^{*}(d\phi^{a}))$.
Given a one-form $\alpha$ on $TQ$, the inverse musical isomorphism $\sharp_{\mathcal{G}^{c}}(\alpha)$ is characterized by $$\mathcal{G}^{c}(\sharp_{\mathcal{G}^{c}}(\alpha), X) = \langle \alpha, X \rangle, \quad \text{for any } X\in \mathfrak{X}(TQ).$$
Using this property, and taking into account the coordinate expression of $J^{*}(d\phi^{a})$, we can deduce from $$\begin{cases}
\langle J^{*}(d\phi^{a}), \frac{\partial}{\partial q^{j}}\rangle & = \frac{\partial \phi^{a}}{\partial \dot{q}^{j}}\\
\langle J^{*}(d\phi^{a}), \frac{\partial}{\partial \dot{q}^{j}}\rangle & = 0
\end{cases}$$ that $\sharp_{\mathcal{G}^{c}} (J^{*}(d\phi^{a})) = \mathcal{G}^{ij}\frac{\partial \phi^{a}}{\partial \dot{q}^{j}}\frac{\partial}{\partial \dot{q}^{i}} \in \sharp_{\mathcal{G}^{c}}(S^{o})$. In addition, we have that $S^{\bot}$ satisfies $S^{\bot} = \sharp_{\mathcal{G}^{c}}(S^{o})$.
Thus, using the coordinate expression of the vertical lift we deduce that $$(\nabla_{\dot{q}}\dot{q} + \text{grad} V)^{V} = \lambda_{a} \sharp_{\mathcal{G}^{c}}J^{*}(d\phi^{a}),$$ which finishes the proof. ◻
**Remark 1**. *Notice that $S^{\bot}$ is spanned by vertical vectors in $TQ$. This observation will be relevant later in the paper.$\diamond$*
# Virtual nonholonomic constraints {#sec:controler}
Next, we present the rigorous construction of virtual nonholonomic constraints. In contrast to the case of standard constraints on mechanical systems, the concept of virtual constraint is always associated with a controlled system and not just with a submanifold defined by the constraints.
Given an external force $F^{0}:TQ\rightarrow T^{*}Q$ and a control force $F:TQ\times U \rightarrow T^{*}Q$ of the form $$F(q,\dot{q},u) = \sum_{a=1}^{m} u_{a}f^{a}(q)$$ where $f^{a}\in \Omega^{1}(Q)$ with $m<n$, $U\subset\mathbb{R}^{m}$ the set of controls and $u_a\in\mathbb{R}$ with $1\leq a\leq m$ the control inputs, consider the associated mechanical control system of the form $$\label{mechanical:control:system}
\nabla_{\dot{q}}\dot{q} =Y^0(q,\dot{q})+u_{a}Y^{a}(q),$$ where $Y^0(q,\dot{q})=\sharp_{\mathcal{G}} (F^0(q, \dot{q}))$ and $Y^{a}=\sharp_{\mathcal{G}} (f^{a}(q)).$
Hence, the solutions of the previous equation are the trajectories of a vector field of the form $$\label{SODE}\Gamma(q, \dot{q}, u)=G(q,\dot{q})+u_{a}(Y^{a})_{(q,\dot{q})}^{V}.$$ We call each $Y^{a}=\sharp_{\mathcal{G}}(f^{a})$ a control force vector field, and $G$ is the vector field determined by the unactuated forced mechanical system $$\nabla_{\dot{q}}\dot{q} =Y^0(q,\dot{q}).$$
**Definition 3**. *The distribution $\mathcal{F}\subseteq TQ$ generated by the vector fields $Y^{a}=\sharp_{\mathcal{G}}(f^{a})$ is called the *input distribution* associated with the mechanical control system [\[mechanical:control:system\]](#mechanical:control:system){reference-type="eqref" reference="mechanical:control:system"}.*
Now we will define the concept of virtual nonholonomic constraint.
**Definition 4**. *A *virtual nonholonomic constraint* associated with the mechanical control system [\[mechanical:control:system\]](#mechanical:control:system){reference-type="eqref" reference="mechanical:control:system"} is a controlled invariant submanifold $\mathcal{M}\subseteq TQ$ for that system, that is, there exists a control function $\hat{u}:\mathcal{M}\rightarrow \mathbb{R}^{m}$ such that the solution of the closed-loop system satisfies $\psi_{t}(\mathcal{M})\subseteq \mathcal{M}$, where $\psi_{t}:TQ\rightarrow TQ$ denotes its flow.*
**Definition 5**. *Two subspaces $W_1$ and $W_2$ of a vector space $V$ are transversal if*
1. *$V = W_1+W_2$*
2. *$\dim V =\dim W_1 + \dim W_2$, i.e. the dimensions of $W_1$ and $W_2$ are complementary with respect to the ambient space dimension.*
**Theorem 2**. *If the tangent space, $T_{v_{q}}\mathcal{M}$, of the manifold $\mathcal{M}$ and the vertical lift of the control input distribution $\mathcal{F}$ are transversal and $T_{v_{q}}\mathcal{M}\cap \mathcal{F}^V=\{0\}$, then there exists a unique control function making $\mathcal{M}$ a virtual nonholonomic constraint associated with the mechanical control system [\[mechanical:control:system\]](#mechanical:control:system){reference-type="eqref" reference="mechanical:control:system"}.*
*Proof.* Suppose that $TTQ=T\mathcal{M}\oplus \mathcal{F}^V$ and that trajectories of the control system [\[mechanical:control:system\]](#mechanical:control:system){reference-type="eqref" reference="mechanical:control:system"} may be written as the integral curves of the vector field $\Gamma$ defined by [\[SODE\]](#SODE){reference-type="eqref" reference="SODE"}. For each $v_{q}\in \mathcal{M}_{q}$, we have that $$\Gamma(v_{q})\in T_{v_{q}}(TQ)=T_{v_{q}}\mathcal{M}\oplus \hbox{span}\Big{\{}(Y^{a})_{v_{q}}^{V} \Big{\}},$$ with $Y^{a}=\sharp(f^{a})$. Using the uniqueness decomposition property arising from transversality, we conclude there exists a unique vector $\tau^{*}(v_{q})=(\tau_{1}^{*}(v_{q}),\cdots, \tau_{m}^{*}(v_{q}))\in \mathbb{R}^{m}$ such that $$\Gamma(v_{q})=G(v_{q})+\tau_{a}^{*}(v_{q})(Y^{a})_{v_{q}}^{V}\in T_{v_{q}}\mathcal{M}.$$ If $\mathcal{M}$ is defined by $m$ constraints of the form $\phi^{b}(v_{q})=0$, $1\leq b\leq m$, then the condition above may be rewritten as $$d\phi^{b}(G(v_{q})+\tau_{a}^{*}(v_{q})(Y^{a})_{v_{q}}^{V})=0,$$ which is equivalent to $$\tau_{a}^{*}(v_{q})d\phi^{b}((Y^{a})_{v_{q}}^{V})=-d\phi^{b}(G(v_{q})).$$ Note that, the equation above is a linear equation of the form $A(v_{q})\tau=b(v_{q})$, where $b(v_{q})$ is the vector $(-d\phi^{1}(G(v_{q})), \dots, -d\phi^{m}(G(v_{q})))\in \mathbb{R}^{m}$ and $A(v_{q})$ is the $m\times m$ matrix with entries $A^{b}_{a}(v_{q})=d\phi^{b}((Y^{a})_{v_{q}}^{V})= \frac{\partial\phi^b}{\partial\dot{q}}(q,\dot{q})(Y^{a})$, where the last equality may be deduced by computing the expressions in local coordinates. That is, if $(q^{i}, \dot{q}^{i})$ are natural bundle coordinates for the tangent bundle, then $$\begin{split}
d\phi^{b}((Y^{a})_{v_{q}}^{V}) & = \left(\frac{\partial \phi^{b}}{\partial q^{j}}dq^{j} + \frac{\partial\phi^b}{\partial \dot{q}^i}d\dot{q}^{i}\right)\left(Y^{a,k}\frac{\partial}{\partial \dot{q}^{k}}\right) \\
& = \frac{\partial\phi^b}{\partial \dot{q}^i}Y^{a,i} = \frac{\partial\phi^b}{\partial \dot{q}}(q,\dot{q})(Y^{a}).
\end{split}$$ In addition, $A(v_{q})$ has full rank, since its columns are linearly independent. In fact suppose that $$c_{1}\begin{bmatrix} \frac{\partial\phi^1}{\partial \dot{q}}(Y^{1}) \\
\vdots \\
\frac{\partial\phi^m}{\partial \dot{q}}(Y^{1}) \end{bmatrix} + \cdots + c_{m}\begin{bmatrix} \frac{\partial\phi^1}{\partial \dot{q}}(Y^{m}) \\
\vdots \\
\frac{\partial\phi^m}{\partial \dot{q}}(Y^{m}) \end{bmatrix}= 0,$$ which is equivalent to $$\begin{bmatrix} \frac{\partial\phi^1}{\partial \dot{q}}(c_{1}Y^{1}+\cdots + c_{m}Y^{m}) \\
\vdots \\
\frac{\partial\phi^m}{\partial \dot{q}}(c_{1}Y^{1}+\cdots + c_{m}Y^{m}) \end{bmatrix}=0.$$ The last equation says that $(c_{1}Y^{1}+\cdots + c_{m}Y^{m})^{V}\in T_{v_q}\mathcal{M}$; since this vector also lies in $\mathcal{F}^V$, the transversality assumption $T_{v_q}\mathcal{M}\cap \mathcal{F}^V = \{0\}$ implies that $c_{1}Y^{1}+\cdots + c_{m}Y^{m}=0$. Since the $Y^{a}$ are linearly independent, we conclude that $c_{1}=\cdots=c_{m}=0$ and $A$ has full rank. Being a full-rank $m\times m$ matrix, $A$ is invertible. Therefore, there is a unique vector $\tau^{*}(v_{q})$ satisfying the matrix equation, and $\tau^{*}:\mathcal{M}\rightarrow \mathbb{R}^{m}$ is smooth since it is the solution of a matrix equation depending smoothly on $v_{q}$. ◻
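In coordinates, the feedback of the proof above is obtained pointwise by solving the linear system $A(v_{q})\tau=b(v_{q})$. The following short numerical sketch is our own illustration of this recipe (all function and argument names are ours); it assumes the user supplies the partial gradients of the constraints, the acceleration part of the drift $G$, and the control vector fields $Y^{a}$.

```python
import numpy as np

def virtual_constraint_feedback(q, qdot, dphi_dq, dphi_dqdot, G_accel, Y_fields):
    """Solve A(v_q) tau = b(v_q) for the unique feedback tau*(v_q).

    dphi_dq, dphi_dqdot : lists of m callables returning the gradients of each
                          constraint phi^b with respect to q and qdot at (q, qdot)
    G_accel             : callable returning the acceleration part of the drift
                          vector field G (the unactuated forced dynamics)
    Y_fields            : list of m callables, Y^a(q) = sharp(f^a)
    """
    m = len(Y_fields)
    A = np.zeros((m, m))
    b = np.zeros(m)
    accel = G_accel(q, qdot)
    for i in range(m):
        gq = dphi_dq[i](q, qdot)
        gv = dphi_dqdot[i](q, qdot)
        b[i] = -(gq @ qdot + gv @ accel)   # b_i = -d phi^b(G)
        for a, Y in enumerate(Y_fields):
            A[i, a] = gv @ Y(q)            # A^b_a = d phi^b((Y^a)^V)
    return np.linalg.solve(A, b)
```

For a single constraint and a single input ($m=1$) this reduces to $\tau^{*}=-\,d\phi(G)/d\phi\big(Y^{V}\big)$, which is how the feedback laws in the examples below can be obtained.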
**Remark 2**. *In previous studies, virtual nonholonomic constraints were defined in somewhat different ways. The most general one, which contains all the others as particular cases, is given in [@moran2021energy], where a virtual nonholonomic constraint is a set of the form $\mathcal{M}=\{(q,p)\in Q\times \mathbb{R}^{n} \ | \ h(q,p)=0\}$, for which there exists a control law making it invariant under the flow of the closed-loop controlled Hamiltonian equations. This constraint may be rewritten using the cotangent bundle $T^{*}Q$ and $h$ may be seen as a function $h:T^{*}Q\rightarrow \mathbb{R}^{m}$. In addition, $h$ should satisfy $\text{rank } dh(q,p) = m$ for all $(q,p)\in \mathcal{M}$.*
*Our definition falls under this general definition. In order to see this, we must rewrite the virtual nonholonomic constraints and the control system on the cotangent bundle.*
*Indeed, consider the Hamiltonian function $H:T^{*}Q \rightarrow \mathbb{R}$ obtained from a Lagrangian function in the following way $$H(q,p)=p\dot{q}(q,p)-L(q,\dot{q}(q,p)),$$ where $\dot{q}(q,p)$ is a function of $(q,p)$ given by the inverse of the Legendre transformation $p=\frac{\partial L}{\partial \dot{q}}$. The controlled Hamiltonian equations are given by $$\dot{q}=\frac{\partial H}{\partial p}, \quad \dot{p}=-\frac{\partial H}{\partial q} + F^{0}(q,\dot{q}(q,p)) + u_{a}f^{a}(q),$$ where $F^{0}$ is an external force map. Now, any submanifold $\mathcal{M}\subseteq TQ$ might be defined as the set $$\mathcal{M}= \{ (q,\dot{q})\in TQ \ | \ \phi^{a}(q,\dot{q}) = 0\},$$ where $d\phi^{a}$ with $1 \leqslant a\leqslant m$ are $m$ linearly independent constraints. The cotangent version of the constraint manifold is the set $$\tilde{\mathcal{M}}= \{ (q,p) \ | \ \phi^{a}(q,\dot{q}(q,p)) = 0 \}.$$ Therefore, we set $$h(q,p)=(\phi^{1}(q,\dot{q}(q,p)),\cdots, \phi^{m}(q,\dot{q}(q,p))).\hfill\diamond$$*
**Example 1**. *Consider a particle moving in three dimensional space and subject to the gravitational potential. Its configuration space is $Q=\mathbb{R}^3$ with $q=(x,y,z)\in Q.$ The Lagrangian $L:TQ\rightarrow\mathbb{R}$, is given by $$L(q,\dot{q})=\frac{m}{2}\left(\dot{x}^2+\dot{y}^2+\dot{z}^2\right)-mgz,$$ and we consider the constraint that is imposed by $\Phi(q,\dot{q})=0$ with $$\Phi(q,\dot{q})=a^2\left(\dot{x}^2+\dot{y}^2\right)-\dot{z}^2$$ and the constraint manifold defined as $$\mathcal{M}=\{(q,\dot{q})\in TQ \;: \; \Phi(q,\dot{q})=0\}.$$ Consider also the control force $F:TQ\times U\to T^*Q$ $$F(q,\dot{q},u)=uf=u\left(xdx+ydy+dz\right).$$ The controlled Euler-Lagrange equations are $$m\ddot{x}=ux, \quad m\ddot{y}=uy, \quad m\ddot{z}=-gm +u.$$ The tangent space of the constraint manifold $\mathcal{M}$ is given by $$\begin{aligned}
T_{(q,\dot{q})}\mathcal{M}&=\{v\in TTQ\; :\; d\Phi(v)=0\}\\
& =\mathop{\mathrm{span}}\{X_1, X_2, X_3, X_4, X_5\},
\end{aligned}$$ where $(q,\dot{q})\in\mathcal{M}$ and $$X_1=\frac{\partial}{\partial x}, \quad X_2=\frac{\partial}{\partial y}, \quad X_3=\frac{\partial}{\partial z},$$ $$X_4=\dot{z}\frac{\partial}{\partial \dot{y}}+a^2\dot{y}\frac{\partial}{\partial \dot{z}}, \quad X_5=\dot{z}\frac{\partial}{\partial \dot{x}}+a^2\dot{x}\frac{\partial}{\partial \dot{z}},$$ and the input distribution $\mathcal{F}$ is generated by the vector field $$Y=\frac{x}{m}\frac{\partial}{\partial x}+\frac{y}{m}\frac{\partial}{\partial y}+\frac{1}{m}\frac{\partial}{\partial z}.$$*
*The control law that makes the constraint manifold invariant is given by $$\hat{u}=-\frac{mg\dot{z}}{a^2x\dot{x}+a^2y\dot{y}-\dot{z}}.$$*
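Indeed, a direct computation (ours) recovers $\hat{u}$: differentiating $\Phi$ along the controlled Euler-Lagrange equations and imposing $\frac{d}{dt}\Phi=0$ gives $$0=\frac{d}{dt}\Phi=2a^2\left(\dot{x}\ddot{x}+\dot{y}\ddot{y}\right)-2\dot{z}\ddot{z}=\frac{2u}{m}\left(a^2x\dot{x}+a^2y\dot{y}-\dot{z}\right)+2g\dot{z},$$ and solving the last equality for $u$ yields the expression for $\hat{u}$ above.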
In the following, we characterize the closed-loop dynamics as solutions of the nonholonomic equations [\[noneq\]](#noneq){reference-type="eqref" reference="noneq"}.
**Theorem 3**. *A curve $q:I\rightarrow Q$ is a trajectory of the closed-loop system for the Lagrangian control system [\[mechanical:control:system\]](#mechanical:control:system){reference-type="eqref" reference="mechanical:control:system"} making $\mathcal{M}$ invariant if and only if it satisfies $$\label{constrained:equation}
(\nabla_{\dot{q}}\dot{q} + \text{grad} V)^{V} =-\tau^*_a ( \sharp_{\mathcal{G}^{c}}(f^a)^{V}),$$ or, in other words, $$(\nabla_{\dot{q}}\dot{q} + \text{grad} V )^{V} \in \mathcal{F}^V$$*
*where $\mathcal{F}^V$ is the distribution on $TQ$ spanned by the vector fields $\{\sharp_{\mathcal{G}^{c}}(f^a)^{V}\}$ and $\tau^*_a$ the unique control from Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"}.*
*Proof.* Let $q:I\to Q$ be the trajectory of the mechanical system ([\[mechanical:control:system\]](#mechanical:control:system){reference-type="ref" reference="mechanical:control:system"}), hence it is an integral curve of the vector field $\Gamma(v_q)$ with $v_q\in TQ$, of the form ([\[SODE\]](#SODE){reference-type="ref" reference="SODE"}) $$\Gamma(v_{q})=G(v_{q})+u_{a}(Y^{a})_{v_{q}}^{V}.$$ From Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"} there exists a unique control function $\tau^*_a$ that makes $\mathcal{M}$ a virtual nonholonomic constraint, i.e., $$\Gamma(v_{q}) = G(v_{q})+\tau_{a}^{*}(v_{q})(Y^{a})_{v_{q}}^{V}\in T_{v_{q}}\mathcal{M}.$$
By the observations preceding equation [\[SODE\]](#SODE){reference-type="eqref" reference="SODE"}, the trajectories of $\Gamma$ satisfy the equation $\nabla_{\dot{q}}\dot{q} + \text{grad} V + \tau^*_a Y^a = 0$, since $Y^a=\sharp_\mathcal{G}(f^a)$. Vertically lifting this equation yields
$$(\nabla_{\dot{q}}\dot{q} + \text{grad} V)^{V} = - \tau^{*}_a( \sharp_\mathcal{G}(f^a))^{V}.$$
By Lemma [Lemma 1](#completemetricLemma){reference-type="ref" reference="completemetricLemma"}, applied to the complete lift $\mathcal{G}^c$ of the Riemannian metric $\mathcal{G}$, we have $(\nabla_{\dot{q}}\dot{q} + \text{grad} V)^{V} = - \tau^{*}_a ( \sharp_{\mathcal{G}^{c}}(f^a)^{V})$. ◻
The next proposition shows that if the vertical lift of the input distribution is orthogonal to the tangent space $T_{v_q}\mathcal{M}$ of the virtual nonholonomic constraint manifold $\mathcal{M}$ then the constrained dynamics is precisely the nonholonomic dynamics with respect to the original Lagrangian function.
**Proposition 2**. *If $\mathcal{F}^V$ is equal to $S^{\bot}$ then the trajectories of the feedback controlled mechanical system [\[constrained:equation\]](#constrained:equation){reference-type="eqref" reference="constrained:equation"} are the nonholonomic equations of motion [\[Chetaev\'s eqns\]](#Chetaev's eqns){reference-type="eqref" reference="Chetaev's eqns"}.*
*Proof.* From Chetaev's equations ([\[Chetaev\'s eqns\]](#Chetaev's eqns){reference-type="ref" reference="Chetaev's eqns"}) we have that the vector field $(\nabla_{\dot{q}}\dot{q} + \text{grad} V)^V$ is a linear combination of the $\sharp_{\mathcal{G}^c}(J^*(d\phi^a))$, the generators of $S^\perp$. If $S^\perp$ equals $\mathcal{F}^V$, then $(\nabla_{\dot{q}}\dot{q} + \text{grad} V)^V\in\mathcal{F}^V$, which yields equation ([\[constrained:equation\]](#constrained:equation){reference-type="ref" reference="constrained:equation"}). ◻
**Remark 3**. *Notice that, given a mechanical system with nonlinear constraints, there always exists a distribution $\mathcal{F}$ such that $\mathcal{F}^{V}=S^{\bot}$, since $S^{\bot}$ is spanned by vertical lifts of vector fields on $Q$.$\diamond$*
The next example illustrates Proposition [Proposition 2](#orthogonal:input:distribution){reference-type="ref" reference="orthogonal:input:distribution"}.
**Example 2**. *Consider, as in example [Example 1](#ex 1){reference-type="ref" reference="ex 1"}, a particle moving in three dimensional space and subject to the gravitational potential, with the same Lagrangian $L:TQ\to\mathbb{R}$, $$L(q,\dot{q})=\frac{m}{2}\left(\dot{x}^2+\dot{y}^2+\dot{z}^2\right)-mgz$$ but consider now a constraint that makes the magnitude of the velocity constant, namely, $\Phi(q,\dot{q})=0$ with $$\Phi=\dot{x}^2+\dot{y}^2+\dot{z}^2-c=0, \quad c>0.$$ The constraint manifold is given by $$\mathcal{M}=\{(q,\dot{q})\in TQ \;: \; \Phi(q,\dot{q})=0\}$$ and consider the control force $F:TQ\times U\to T^*Q$ $$F(q,\dot{q},u)=uf=u(\dot{x}dx+\dot{y}dy+\dot{z}dz).$$ The controlled Euler-Lagrange equations are $$m\ddot{x}=u\dot{x}, \quad m\ddot{y}=u\dot{y}, \quad m\ddot{z}=-gm +u\dot{z}.$$ The input distribution, $\mathcal{F}$, is generated by the vector field $$Y=\frac{\dot{x}}{m}\frac{\partial}{\partial x}+\frac{\dot{y}}{m}\frac{\partial}{\partial y}+\frac{\dot{z}}{m}\frac{\partial}{\partial z},$$ thus the vertical lift of the input distribution, $\mathcal{F}^V$, is generated by $$Y^V=\frac{\dot{x}}{m}\frac{\partial}{\partial \dot{x}}+\frac{\dot{y}}{m}\frac{\partial}{\partial \dot{y}}+\frac{\dot{z}}{m}\frac{\partial}{\partial\dot{z}}.$$*
*The control that makes the constraint manifold invariant is $$\hat{u}=\frac{mg\dot{z}}{c}.$$ For $S^\perp$, the orthogonal to $S$, we write the differential of $\Phi$, namely, $d\Phi=2\dot{x}d\dot{x}+2\dot{y}d\dot{y}+2\dot{z}d\dot{z}$ and its image through the dual of the canonical almost tangent structure $J=dq\otimes\frac{\partial}{\partial\dot{q}}$, $$J^*(d\Phi)=2\dot{x}dx+2\dot{y}dy+2\dot{z}dz.$$ Hence, $S^\perp$ is generated by $$\sharp_{\mathcal{G}^c}(J^*(d\Phi))=\frac{\dot{x}}{m}\frac{\partial}{\partial\dot{x}}+\frac{\dot{y}}{m}\frac{\partial}{\partial\dot{y}}+\frac{\dot{z}}{m}\frac{\partial}{\partial\dot{z}}.$$ Notice that the vertical lift of the input distribution, $\mathcal{F}^V$, is equal to $S^\perp$, and from Proposition [Proposition 2](#orthogonal:input:distribution){reference-type="ref" reference="orthogonal:input:distribution"}, the local expression of the equations ([\[Chetaev\'s eqns\]](#Chetaev's eqns){reference-type="ref" reference="Chetaev's eqns"}) and ([\[constrained:equation\]](#constrained:equation){reference-type="ref" reference="constrained:equation"}) will be the same. Indeed, the equations of the corresponding nonholonomic systems are $$\begin{cases}
m\ddot{x}=\lambda\dot{x} \\
m\ddot{y}=\lambda\dot{y} \\
m\ddot{z}+mg=\lambda\dot{z} \\
\dot{x}^2+\dot{y}^2+\dot{z}^2-c=0
\end{cases},$$ where $\lambda\in\mathbb{R}$ is a Lagrange multiplier to be determined using the constraints, while the equations of the controlled system with control force determined by $F$ are given by $$\begin{cases}
m\ddot{x}=-\tau^*\dot{x} \\
m\ddot{y}=-\tau^*\dot{y} \\
m\ddot{z}+mg=-\tau^*\dot{z}
\end{cases},$$ where $\tau^{*}$ is the unique feedback control making the constraints invariant under the flow. The two systems are equivalent on the submanifold $\mathcal{M}$ i.e. the trajectories of the constrained mechanical system [\[constrained:equation\]](#constrained:equation){reference-type="eqref" reference="constrained:equation"} and the nonholonomic equations of motion [\[Chetaev\'s eqns\]](#Chetaev's eqns){reference-type="eqref" reference="Chetaev's eqns"} coincide on the constraint manifold.*
**Remark 4**. *Note that, in the example above, we are enforcing as a constraint a constant value of the kinetic energy, as in a thermostat system; see [@Rojo; @Bloch] for a complementary analysis of this problem.$\diamond$*
**Remark 5**. *When $\mathcal{M}$ is a linear distribution on $Q$, the assumption made in the previous proposition reduces to the assumption considered in [@virtual], i.e., $\mathcal{F}$ is orthogonal to $\mathcal{M}$.*
*Indeed, suppose that $\mathcal{M}$ is a linear distribution $\mathcal{D}$. There exist one-forms $\{\mu^{a}\}$, $a=1, \dots, m$ such that $(\mu^{a})^{V}= J^{*}(d\phi^{a})$. Hence, $(\mathcal{D}^{o})^{V}=S^{o}$. Therefore, $S=\{X\in T(TQ) | (\mu^{a})^{V}(X)=0 \}$ is a rank $2n-m$ distribution on $TQ$ spanned by vector fields of the form $X^{c}$, $X^{V}$ and $Y^{V}$ where $X\in\Gamma(\mathcal{D})$ and $Y\in \Gamma(\mathcal{D}^{\bot})$, and $S^{\bot}$ is a rank $m$ distribution on $TQ$ spanned by $Y^{V}$, so that $S^{\bot}$ is actually contained in $S$. Therefore $S^{\bot}=(\mathcal{D}^{\bot})^{V}$. Thus, the assumption $\mathcal{F}^{V}= S^{\bot}$ reduces to $\mathcal{F}=\mathcal{D}^{\bot}$ or, equivalently, the input distribution $\mathcal{F}$ must be orthogonal to the linear distribution $\mathcal{D}$. This is precisely the assumption made in [@virtual].$\diamond$*
# Application and Simulation Results {#application section}
The previous results on virtual nonlinear nonholonomic constraints can be used to enforce a desired relation between state variables through a linear control force whenever the interplay between forces and constraints satisfies our assumptions. In the following, we give a particular application showing how a desired constraint can be enforced in the problem of the motion of particles moving with aligned velocities, as in [@Bloch; @Rojo]. This application can be useful in imposing virtual constraints for flocking motion in multi-agent systems [@flocking; @flocking2].
Consider two particles moving under the influence of gravity, which we wish to constrain to move with parallel velocities. Suppose that the motion of the two particles evolves in a plane parametrized by $(x,z)$. The positions of the particles are given by $q_1=(x_1,0,z_1)$ and $q_2=(x_2,0,z_2)$, respectively, so the configuration space can be considered as $Q=\mathbb{R}^4$ with $q=(q_1,q_2)\in Q$.
The Lagrangian $L:TQ\to\mathbb{R},$ is given by $$L(q,\dot{q})=\frac{1}{2}m_1\dot{q}_1^2 + \frac{1}{2}m_2\dot{q}_2^2 - G(q)$$ where $G(q)=m_1gz_1 + m_2gz_2$ is the potential energy due to gravity and $m_i, i=1,2$ are the masses of the particles, respectively. The constraint is given by the equation $\Phi:TQ\to\mathbb{R},$ $$\Phi(q,\dot{q})=\dot{x_1}\dot{z_2} - \dot{x_2}\dot{z_1}$$ and the control force is just $F:TQ\times\mathbb{R}\to T^*Q$ given by $$F(q,\dot{q},u)=u(f_1dx_1 + f_2dz_1 + f_3dx_2 + f_4dz_2).$$ The controlled Euler-Lagrange equations are $$\begin{split}
m_1 \ddot{x}_1 &= uf_1, \quad
m_1\ddot{z}_1 +m_1g = uf_2, \\
m_2\ddot{x}_2 &= uf_3, \quad
m_2 \ddot{z}_2 + m_2g = uf_4.
\end{split}$$ The constraint manifold is $\mathcal{M}=\{(q,\dot{q})\in TQ \; :\; \Phi(q,\dot{q})=0\}$ and its tangent space, at every point $(q,\dot{q})\in\mathcal{M}$, is given by $T_{(q,\dot{q})}\mathcal{M}=\{v\in TTQ\; :\; d\Phi(v)=0\}
=\mathop{\mathrm{span}}\{X_1, X_2, X_3, X_4, X_5, X_6, X_7\}$, with $$X_1=\frac{\partial}{\partial x_1}, \quad X_2=\frac{\partial}{\partial z_1}, \quad X_3=\frac{\partial}{\partial x_2}, \quad X_4=\frac{\partial}{\partial z_2},$$ $$X_5=\dot{x}_2\frac{\partial}{\partial \dot{x}_1} + \dot{z}_2\frac{\partial}{\partial\dot{z_1}} + \dot{x}_1\frac{\partial}{\partial\dot{x_2}} + \dot{z}_1\frac{\partial}{\partial\dot{z_2}},$$ $$X_6=\dot{z}_1\frac{\partial}{\partial \dot{x}_1} + \dot{x}_1\frac{\partial}{\partial\dot{z_1}} + \dot{z}_2\frac{\partial}{\partial\dot{x_2}} + \dot{x}_2\frac{\partial}{\partial\dot{z_2}},$$ $$X_7=\dot{x}_1\frac{\partial}{\partial \dot{x}_1} + \dot{z}_1\frac{\partial}{\partial\dot{z_1}} - \dot{x}_2\frac{\partial}{\partial\dot{x_2}} - \dot{z}_2\frac{\partial}{\partial\dot{z_2}}.$$ The input distribution $\mathcal{F}$ is generated by the vector field $\displaystyle{Y=\frac{f_1}{m_1}\frac{\partial}{\partial x_1} + \frac{f_2}{m_1}\frac{\partial}{\partial z_1} + \frac{f_3}{m_2}\frac{\partial}{\partial x_2} + \frac{f_4}{m_2}\frac{\partial}{\partial z_2}}.$ Note here that the vertical lift of the input distribution, $\mathcal{F}^V$, which is generated by $\displaystyle{Y^V=\frac{f_1}{m_1}\frac{\partial}{\partial \dot{x}_1} + \frac{f_2}{m_1}\frac{\partial}{\partial \dot{z}_1} + \frac{f_3}{m_2}\frac{\partial}{\partial \dot{x}_2} + \frac{f_4}{m_2}\frac{\partial}{\partial \dot{z}_2}},$ is transversal to the tangent space of the constraint manifold, $T\mathcal{M}$. By Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"} there is a unique control law making the constraint manifold a virtual nonholonomic constraint. The control law that makes the constraint manifold invariant is $$\hat{u}=\left(\dot{z}_2f_1-\dot{z}_1f_3 + \dot{x}_1f_4-\dot{x}_2f_2\right)^{-1}\left( g\dot{x}_1 - g\dot{x}_2\right).$$ For $f_1=f_2=1$ and $f_3=f_4=0$ we get $F(q,\dot{q},u)=u(dx_1 + dz_1)$ and $$\hat{u}=\left(\dot{z}_2 -\dot{x}_2 \right)^{-1}\left( g\dot{x}_1 - g\dot{x}_2\right).$$
We have simulated the closed-loop control system with the above feedback control law using a standard fourth-order Runge-Kutta method, with initial positions $(x_{1},x_{2}, z_{1}, z_{2}) = (1, 40, 0, 0)$ and initial velocities $(\dot{x}_{1}, \dot{x}_{2}, \dot{z}_{1}, \dot{z}_{2})=(80, 20, 40, 10)$. In Fig. [1](#traj){reference-type="ref" reference="traj"} we show the controlled trajectories of both particles, where the compliance of the velocities with the constraint can be seen. The total energy of the system is depicted in Fig. [2](#energy){reference-type="ref" reference="energy"}, while the preservation of the constraint during the simulation time is shown in Fig. [3](#constraint){reference-type="ref" reference="constraint"}. Fluctuations of the values of the constraint function are due to numerical error in the simulation and remain within a small interval, as expected. The control function is depicted in Fig. [4](#controls){reference-type="ref" reference="controls"}; it tends to zero since the motion tends to become vertical and gravity takes over.
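A minimal sketch reproducing this simulation could read as follows; this is our own reconstruction, in which the unit masses $m_1=m_2=1$, the value $g=9.81$, the step size and the time horizon are assumptions not specified in the text, and the singular set $\dot{z}_2=\dot{x}_2$ of the feedback is not handled.

```python
import numpy as np

g = 9.81  # gravitational acceleration (assumed value)

def rhs(s):
    # State s = (x1, z1, x2, z2, vx1, vz1, vx2, vz2); unit masses assumed.
    x1, z1, x2, z2, vx1, vz1, vx2, vz2 = s
    u = g * (vx1 - vx2) / (vz2 - vx2)   # feedback law for f1 = f2 = 1, f3 = f4 = 0
    return np.array([vx1, vz1, vx2, vz2,
                     u, -g + u,          # particle 1, actuated by F = u(dx1 + dz1)
                     0.0, -g])           # particle 2, unactuated free fall

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * h * k1)
    k3 = rhs(s + 0.5 * h * k2)
    k4 = rhs(s + h * k3)
    return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Initial positions (x1, x2, z1, z2) = (1, 40, 0, 0) and
# initial velocities (80, 20, 40, 10), as in the text.
s = np.array([1.0, 0.0, 40.0, 0.0, 80.0, 40.0, 20.0, 10.0])
h, T = 1e-3, 2.0  # step size and horizon (assumed)
traj = [s.copy()]
for _ in range(int(T / h)):
    s = rk4_step(s, h)
    traj.append(s.copy())
traj = np.array(traj)

# Monitor the constraint Phi = vx1*vz2 - vx2*vz1 along the trajectory.
phi = traj[:, 4] * traj[:, 7] - traj[:, 6] * traj[:, 5]
print("max |Phi| along the trajectory:", np.abs(phi).max())
```

The printed quantity gives a rough numerical analogue of the small fluctuations of $\Phi$ reported in Fig. [3](#constraint){reference-type="ref" reference="constraint"}.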
![Controlled trajectory of the two particles](trajectory.png){#traj}
![Total energy of the system](energy.png){#energy}
![The value of the constraint function $\Phi$ during the simulation time](constraint.png){#constraint}
![Control function during simulation time.](control.png){#controls}
# Conclusions and Future Work
In this paper, we have extended our results in [@virtual] and [@affine] to the case of nonlinear constraints on the velocities. Our results guarantee that linear and affine control forces might be used to enforce desired constraints on the velocities and positions, provided they meet the assumptions in the statement of Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"}, that is, that the tangent bundle of the constraint submanifold is transversal to the vertical lift of the input distribution. As future work, there is a clear need to extend the range of applicability of our results to the cases in which the above assumptions are not met. In some of these cases a control law might still exist, though it is possible that it is no longer unique.
One of our objectives for future work is to adapt these results to applications in bipedal robot locomotion. To this end, we will extend our results to the setting of the type of virtual nonlinear constraints appearing in [@griffin2015nonholonomic]. These constraints are of the form $\phi(q_{a}, q_{u}, \dot{q}_{a}, \dot{q}_{u}) = q_{a} - h(q, \dot{q}_{u})$, where $q=(q_{a}, q_{u})$ are local coordinates of $Q$, with respect to which the actuated coordinate vector fields $\frac{\partial}{\partial q_{a}}$ are the control force vector fields $Y^{a}$, i.e., the controlled equations are of the type $\nabla_{\dot{q}_{u}}\dot{q}_{u} =Y^0(q,\dot{q}), \text{ and } \nabla_{\dot{q}_a}\dot{q}_a=Y^0(q,\dot{q})+u_{a}\frac{\partial}{\partial q_{a}}$. Under this assumption, the vertical lift of $Y^{a}$ belongs to $T\mathcal{M}$, since $\langle d\phi, (Y^{a})^V \rangle = \langle d\phi, \frac{\partial}{\partial \dot{q}_{a}} \rangle=\frac{\partial \phi}{\partial \dot{q}_{a}}=0$. Therefore, $\mathcal{F}^{V}\subseteq T\mathcal{M}$. Hence, this set of virtual nonholonomic constraints does not fall under the assumptions of Theorem [Theorem 2](#main:theorem){reference-type="ref" reference="main:theorem"}. In addition, we will also address a related problem: instead of enforcing a constraint, we will study from the geometric point of view the stabilization properties of the virtual nonholonomic constraint.
R. Abraham, J. E. Marsden, *Foundations of Mechanics,* Addison-Wesley, New York, 2nd edition, (1978).
A. Anahory Simoes, E. Stratoglou, A. Bloch, L. Colombo. Virtual Nonholonomic Constraints: A Geometric Approach. *Automatica*. Vol 155, 111166, (2023).
P. Balseiro, M. de León, J.C. Marrero, D. Martín de Diego. The ubiquity of the symplectic Hamiltonian equations in mechanics. Journal of Geometric Mechanics, 1(1), 1-34, (2009).
A. M. Bloch, *Nonholonomic mechanics and control,* Springer-Verlag New York, (2015).
A. M. Bloch, A. G. Rojo. Optical mechanical analogy and nonlinear nonholonomic constraints, Physical Review E 93, 023005, (2016).
F. Bullo, A.D. Lewis. Geometric Control of Mechanical Systems: Modeling, Analysis, and Design for Simple Mechanical Systems, number 49 in Texts in Applied Mathematics, Springer-Verlag, (2005).
H. Cendra, A. Ibort, M. de León, D. Martín de Diego. A generalization of Chetaev's principle for a class of higher-order nonholonomic constraints. Journal of mathematical physics, 45(7), 2785-2801, (2004).
C. Chevallereau, G. Abba, Y. Aoustin, F. Plestan, E. Westervelt, C. C. De Wit, J. Grizzle. Rabbit: A testbed for advanced control theory. IEEE Control Systems Magazine, 23(5), 57--79, (2003).
M. de León. A historical review on nonholonomic mechanics. RACSAM, 106, 191-224 (2012).
M. de León, P.R. Rodrigues. Methods of Differential Geometry in Analytical Mechanics, volume 158, Elsevier, Amsterdam, (1989).
L. Freidovich, A. Robertsson, A. Shiriaev, R. Johansson. Periodic motions of the pendubot via virtual holonomic constraints: Theory and experiments. Automatica 44(3), 785--791, (2009).
B. Griffin, J. Grizzle. Nonholonomic virtual constraints for dynamic walking. 54th IEEE Conference on Decision and Control, 4053--4060 (2015).
K. Hamed, A. Ames. Nonholonomic hybrid zero dynamics for the stabilization of periodic orbits: Application to underactuated robotic walking. IEEE Transactions on Control Systems Technology, 28(6), 2689--2696 (2019).
J. Horn, A. Mohammadi, K. Hamed, R. Gregg. Nonholonomic virtual constraint design for variable-incline bipedal robotic walking. IEEE Robotics and Automation Letters, 5(2), 3691--3698 (2020).
J. Horn, A. Mohammadi, K. Hamed, R. Gregg. Hybrid zero dynamics of bipedal robots under nonholonomic virtual constraints. IEEE Control Systems Letters, 3(2), 386--391 (2018).
J. Horn, R.Gregg. Nonholonomic Virtual Constraints for Control of Powered Prostheses Across Walking Speeds. IEEE Transactions on Control Systems Technology. (2021).
A. Isidori. Nonlinear control systems, Springer Science & Business Media, (2013).
A. Mohammadi, M. Maggiore, L. Consolini. Dynamic virtual holonomic constraints for stabilization of closed orbits in underactuated mechanical systems. Automatica 94, 112--124, (2018).
A. Moran-MacDonald. Energy injection for mechanical systems through the method of Virtual Nonholonomic Constraints. University of Toronto (2021).
A. Moran-MacDonald, M. Maggiore and X. Wang. From Gymnastics to Virtual Nonholonomic Constraints: Energy Injection, Dissipation, and Regulation for the Acrobot. IEEE Transactions on Control Systems Technology, doi: 10.1109/TCST.2023.3294065.
C. W. Reynolds. Flocks, herds and schools: A distributed behavioral model, in: 14th Annual Conference on Computer Graphics and Interactive Techniques, 25--34, 1987.
A. G. Rojo, A. M. Bloch. Nonholonomic double-bracket equations and the Gauss thermostat, Physical Review E 80, 025601(R), (2009).
B. Schutz. A First Course in General Relativity (2nd ed.). Cambridge University Press (2009).
A. S. Shiriaev, L. Freidovich, S. V. Gusev. Transverse linearization for controlled mechanical systems with several passive degrees of freedom. IEEE Transactions on Automatic Control 55(4), 893--906, (2010).
E. Stratoglou, A. Anahory Simoes, A. Bloch, L. Colombo. Virtual Affine Nonholonomic Constraints. International Conference on Geometric Science of Information, 89-96, 2023.
H. Tanner, A. Jadbabaie, G. Pappas. Flocking in fixed and switching networks, IEEE Trans. on Automatic Control 52(5), 863--868, 2007.
E. Westervelt, J. Grizzle, C. Chevallereau, J. Choi, B. Morris. Feedback control of dynamic bipedal robot locomotion. CRC press, (2018).
E. Westervelt, J. Grizzle, D. E. Koditschek. Hybrid zero dynamics of planar biped walkers. IEEE Transactions on Automatic Control, 48(1), 42-56, (2003).
S. Westerberg, U. Mettin, A. S. Shiriaev, L. B. Freidovich, Y. Orlov. Motion planning and control of a simplified helicopter model based on virtual holonomic constraints. IEEE 2009 International Conference on Advanced Robotics, pp. 1--6, (2009).
[^1]: E. Stratoglou is with Universidad Politécnica de Madrid (UPM), José Gutiérrez Abascal, 2, 28006 Madrid, Spain. (e-mail: ef.stratoglou\@alumnos.upm.es).
[^2]: A. Anahory Simoes is with the School of Science and Technology, IE University, Spain. (e-mail: alexandre.anahory\@ie.edu).
[^3]: A. Bloch is with Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA. (e-mail: abloch\@umich.edu)
[^4]: L. Colombo is with Centre for Automation and Robotics (CSIC-UPM), Ctra. M300 Campo Real, Km 0,200, Arganda del Rey - 28500 Madrid, Spain. (e-mail: leonardo.colombo\@csic.es)
[^5]: The authors acknowledge financial support from Grant PID2022-137909NB-C21 funded by MCIN/AEI/ 10.13039/501100011033 and the LINC Global project from CSIC \"Wildlife Monitoring Bots\" INCGL20022. A.B. was partially supported by NSF grant DMS-2103026, and AFOSR grants FA 9550-22-1-0215 and FA 9550-23-1-0400
| arxiv_math | {
"id": "2310.01849",
"title": "On the Geometry of Virtual Nonlinear Nonholonomic Constraints",
"authors": "Efstratios Stratoglou, Alexandre Anahory Simoes, Anthony Bloch and\n Leonardo J. Colombo",
"categories": "math.OC cs.SY eess.SY math-ph math.MP",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
We construct a linear system on a general curve in a totally geodesic subvariety of the moduli space of curves. As a consequence, we obtain rank bounds for totally geodesic subvarieties of dimension at least two. Furthermore, we classify totally geodesic subvarieties of dimension at least two in strata with at most two zeros.
author:
- Frederik Benirschke
title: Totally geodesic subvarieties of the moduli space of curves and linear systems
---
# Introduction
Let ${\mathcal M}_{g,n}$ be the moduli space of genus $g$ curves with $n$ marked points. A *totally geodesic subvariety* (for the Teichmüller metric) of ${\mathcal M}_{g,n}$ is an algebraic subvariety $M\subseteq {\mathcal M}_{g,n}$ such that any Teichmüller geodesic passing through a general point and tangent to $M$ is contained in $M$. Totally geodesic subvarieties are closely related to $\operatorname{GL}(2,{\mathbb{R}})$-orbit closures in strata of quadratic differentials. Let $QM$ be the subbundle of the bundle of quadratic differentials ${\mathcal Q}_{{\mathcal M}_{g,n}}$ containing pairs $(X,q)$ where $X\in M$ and $q$ is a quadratic differential generating a Teichmüller geodesic contained in $M$. The bundle $QM$ is stratified by the order of zeros and poles, and we let $QM(\mu)\subseteq {\mathcal Q}(\mu)$ be the stratum of maximal dimension. Then $QM(\mu)$ is a $\operatorname{GL}(2,{\mathbb{R}})$-orbit closure of dimension$$\dim QM = \dim QM(\mu) = 2\dim M$$ in some stratum of quadratic differentials ${\mathcal Q}(\mu)\subseteq {\mathcal Q}_{{\mathcal M}_{g,n}}$.
In this paper, we mostly work with the orbit closure $QM(\mu)$ instead of the totally geodesic subvariety $M$. The following definition allows us to switch between the two points of view.
**Definition 1**. Let $N\subseteq{\mathcal Q}(\mu)$ be an orbit closure in a stratum of quadratic differentials. We say $N$ is a *totally geodesic orbit closure* if $$\dim N = 2\dim \pi(N),$$ where $\pi:{\mathcal Q}(\mu)\to{\mathcal M}_{g,n}$ is the forgetful map.
There is a $1$-to-$1$ correspondence between totally geodesic subvarieties and totally geodesic orbit closures given by $$N = QM(\mu),\, M = \overline{\pi(N)};$$ see [@goujard; @wright-totally-geodesic]. The rank of a totally geodesic orbit closure is $$\operatorname{rank}(N):= \tfrac{\dim N}{2}.\footnote{By taking square roots of quadratic differentials in $N$, one obtains an orbit closure $N'$ in a stratum of Abelian differentials. The rank of $N'$ is by definition half of the dimension of the projection of the tangent space $p(T_{(X,\omega)}N')\subseteq H^1(X,Z(\omega),{\mathbb{C}})$ at a generic point $(X,\omega)\in N'$.
By \cite[Thm 1.3]{wright-totally-geodesic} the rank of $N'$ is $\tfrac{\dim(N)}{2}$. Thus with our definition $\operatorname{rank}(N)=\operatorname{rank}(N').$
}$$
Our first result is a rank bound for totally geodesic orbit closures, depending only on the number of simple zeros in the partition $\mu$. Any partition $\mu$ can be written as $$\mu = (-1^n,1^m, b_1,\ldots,b_k),\, b_i>1,\ i=1,\ldots, k.$$ We call $m$ the *number of simple zeros*.
**Theorem 2**. *Let ${\mathcal Q}(\mu)$ be a stratum of quadratic differentials in genus $g$ with $m$ simple zeros. Suppose $N\subseteq {\mathcal Q}(\mu)$ is a totally geodesic orbit closure of $\operatorname{rank}(N)\geq 2.$ Then*
*$$\operatorname{rank}(N) \leq \begin{cases} m-g+1 & \text{ if $m\geq 2g-1$},\\
\tfrac{m}{2}+1 &\text{ if $m\leq 2g-1$}.
\end{cases}$$ In particular, if $\operatorname{rank}(N)= m+1$, then $g(X) = 0$ and $\operatorname{rank}(N)\leq n-3$.*
The special case $m=0$ rules out the existence of totally geodesic subvarieties of rank at least $2$ in strata with only higher-order zeros.
**Corollary 3**. *There do not exist totally geodesic orbit closures of rank at least $2$ in strata ${\mathcal Q}(-1^n,b_1,\ldots,b_k)$ with $b_i>1$ for $i=1,\ldots,k$.*
We prove this by constructing a linear system of degree $m$ and dimension $\operatorname{rank}(N)$ and applying results from the theory of special divisors; see Section 2 for a more precise statement. In the special case of a totally geodesic surface $N$, we obtain an upper bound on the gonality of curves in $\pi(N)$; see Corollary 11.
A basic construction for producing totally geodesic subvarieties is the covering construction, obtained by pulling back differentials along branched coverings. The resulting orbit closures are called *loci of covers*. *Primitive* totally geodesic subvarieties are the totally geodesic subvarieties that do not arise from a covering construction. It is an important open problem to classify all primitive totally geodesic subvarieties. Mirzakhani initially conjectured that all totally geodesic subvarieties of dimension at least $2$ are covering constructions. However, three examples of primitive totally geodesic surfaces were recently found by McMullen, Mukamel, Wright, and Eskin [@mmw; @emmw].
We rule out the existence of primitive totally geodesic orbit closures in strata of quadratic differentials with at most two zeros, that is, strata of the form ${\mathcal Q}(-1^n,a,b), a,b \geq 0$ or ${\mathcal Q}(-1^n,a), a\geq 0$.
**Theorem 4**. *Let ${\mathcal Q}(\mu)$ be a stratum of quadratic differentials with at most two zeros. Suppose $N\subseteq {\mathcal Q}(\mu)$ is a totally geodesic orbit closure of rank at least $2$. Then $N$ is one of the following strata $${\mathcal Q}(-1^5,1), {\mathcal Q}(-1^6,1^2), {\mathcal Q}(-1^2,1^2).$$*
*In particular, there do not exist primitive totally geodesic orbit closures in ${\mathcal Q}(\mu)$.*
The known examples of primitive totally geodesic orbit closures from [@emmw; @mmw] are totally geodesic surfaces in the strata $${\mathcal Q}(-1^3,1^3), {\mathcal Q}(-1^4,1^4), {\mathcal Q}(-1,1^5).$$
Thus, Theorem 4 cannot be extended to strata of quadratic differentials with more than two zeros without further assumptions.
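As an illustration of the bound (our computation), for the stratum ${\mathcal Q}(-1,1^5)$ the orders sum to $-1+5=4=4g-4$, so $g=2$ and $m=5\geq 2g-1$; Theorem 2 then only gives $$\operatorname{rank}(N)\;\le\; m-g+1\;=\;4,$$ which indeed does not exclude the rank two examples above.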
## Acknowledgements {#acknowledgements .unnumbered}
We thank Paul Apisa, Alex Wright, Carlos Servàn, Dawei Chen and Aaron Calderon for helpful conversations.
# Linear systems of totally geodesic orbit closures
Let $M$ be a totally geodesic subvariety of ${\mathcal M}_{g,n}$. Our main tool is the construction of a linear series for a general curve in $M$. Recall that a linear series $\mathscr{D}$ on a curve $X$ is a linear subspace $V\subseteq H^0({\mathcal O}_X(D))$ for a divisor $D$ on $X$. A linear series of *dimension* $r+1 := \dim(V)$ and *degree* $d := \deg(D)$ is called a $\mathfrak{g}^r_d.$ We refer the reader to [@acgh Chap. I + III] for an introduction to the theory of linear series. The construction of the linear series starts with the following observation, which is a variation of [@FredCarlos Prop. 3.1]. Recall that $QM\subseteq {\mathcal Q}_{{\mathcal M}_{g,n}}$ is the bundle of all quadratic differentials generating Teichmüller geodesics that are contained in $M$.
**Proposition 5**. *Suppose $M\subseteq{\mathcal M}_{g,n}$ is a totally geodesic subvariety of dimension $d$ and $X\in M^{reg}$ a regular point. Then, the fiber of $QM$ over $X$ is a linear subspace of dimension $d$.*
*Proof.* Consider the map $$\phi: Q_XM\hookrightarrow {\mathcal Q}_{{\mathcal M}_{g,n},X} = T^*_X{{\mathcal M}_{g,n}}\to T^*_XM,$$ where $Q_XM$ is the fiber of $QM$ over $X$ and $T^*_XM$ is the cotangent space to $M$.
Following the argument in [@FredCarlos Prop. 3.1], the map $\phi$ is injective, proper, and homogeneous. The homogeneity follows from $\operatorname{GL}(2,{\mathbb{R}})$-invariance of $QM$, while injectivity and properness follow from $\phi$ being norm-preserving for the restriction of the $L^1$-norm on $Q_XM$ and the quotient $L^1$-norm on $T^*_XM$. Since $QM$ is an algebraic variety, $\phi$ is also holomorphic.
Let $$U:=(Q_XM)^{reg},\, Z:= Q_XM \setminus U.$$ Since $\phi$ is proper, $\phi(Z)$ is a proper closed subvariety and $T^*_XM\setminus \phi(Z)$ is connected. By invariance of domain, $\phi_{|U}$ is a homeomorphism onto $T^*_XM\setminus \phi(Z)$ and hence $\phi$ is surjective. We conclude that $\phi$ is a homeomorphism. Restricted to $T^*_XM\setminus \phi(Z)$, the inverse $\phi^{-1}$ is holomorphic, and by the Riemann extension theorem $\phi^{-1}$ is holomorphic on all of $T^*_XM$. Additionally, $\phi^{-1}$ is homogeneous. Being homogeneous and differentiable at the origin, $\phi^{-1}$ is linear. It follows that $$Q_XM = \phi^{-1}(T^*_XM)$$ is a linear subspace of dimension $$\dim T^*_XM = \dim M=d.$$ ◻
We now switch from totally geodesic subvarieties to totally geodesic orbit closures. Let $N\subseteq {\mathcal Q}(\mu)$ be a totally geodesic orbit closure of rank at least $2$ in a stratum of quadratic differentials. Suppose $$(X,q)\in N\subseteq{\mathcal Q}(\mu).$$For the rest of this section, we write $$\begin{gathered}
\mu = (-1^n,1^m, b_1,\ldots,b_k), b_i>1,\\
g= g(X),\, r = \operatorname{rank}(N)-1,\\
(q) = B + \sum_{i=1}^m x_i - P,\\
B= \sum_{i=1}^k b_i y_i, P = \sum_{i=1}^n p_i \text{ for some points $x_i, p_i,y_i\in X$}.
\end{gathered}$$
Let $$\mathscr{D}_X := \{ P + (q),\,\, (X,q)\in N\}$$ be the associated $\mathfrak{g}^r_{4g-4+n}$. The next result estimates the base locus of $\mathscr{D}_X$.
**Proposition 6**. *Suppose $N\subseteq{\mathcal Q}(\mu)$ is a totally geodesic orbit closure with $\operatorname{rank}(N)\geq 2$ and $X\in \pi(N)$ is generic. Then $B$ is contained in the base locus of $\mathscr{D}_X$ and thus $$\mathscr{D}_X = B + \mathscr{D}'_X,$$ where $\mathscr{D}'_X$ is a $\mathfrak{g}^r_m$. In particular, a generic curve $X\in \pi(N)\subseteq{\mathcal M}_{g,n}$ admits a $\mathfrak{g}^r_m$.*
*Proof.* Let $B'$ be the base locus of $\mathscr{D}_X$. The goal is to show that $B'\geq B$. A general divisor in $\mathscr{D}_X$ is of the form $$P + (q) = B' + \sum_{i=1}^{4g-4+n-\deg(B')} z_i$$ where $z_i\in X$ are pairwise distinct and not contained in $B'$. On the other hand, a generic quadratic differential $q\in V$ is contained in $${\mathcal Q}(-1^n,1^m, b_1,\ldots,b_k), b_i>1.$$ In particular each point $z_i$ is one of the $m$ simple zeros. Hence $B\leq B'$. ◻
Theorem 2 now follows from the existence of a $\mathfrak{g}^r_m$, together with Riemann-Roch and Clifford's theorem.
*Proof of Theorem 2.* By Proposition 6, there exists a $\mathfrak{g}^r_m$ on a generic curve $X$ in $\pi(N)$. It follows from Riemann-Roch that if $m\geq 2g-1$, one has $r\leq m-g$. The case $m\leq 2g-1$ follows from Clifford's theorem [@acgh III.1]. In the case $r=m$ there exists a divisor $D$ on $X$ with $h^0(D) = \deg(D)+1$. Hence, there exists a degree one map from $X$ to ${\mathbb{P}}^1$. Thus $g(X) =0$. ◻
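Concretely, the Riemann-Roch step reads as follows (a standard computation): for an effective divisor $D$ of degree $m\ge 2g-1$ one has $h^1(D)=0$, so $$h^0({\mathcal O}_X(D))=\deg(D)-g+1=m-g+1,$$ and hence $r\le h^0({\mathcal O}_X(D))-1=m-g$.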
**Remark 7**. *Another consequence of Clifford's theorem is that if $$m<2g-2,\, 2r = m,$$ then $\pi(N)\subseteq {\mathcal M}_{g,n}$ is contained in the hyperelliptic locus. We will see below that the same conclusion holds in the case $m <g, \, 2r = m$.*
We now turn to the proof of Theorem 4.
*Proof of Theorem 4.* First, assume that ${\mathcal Q}(-1^n,a)$ is a stratum with only one zero. By Corollary 3, $m=1$, and thus $N$ has to be one of the following strata ${\mathcal Q}(-1^5,1), {\mathcal Q}(-1,1)$. The former has rank $2$, while the second stratum is empty.
Next, we address strata ${\mathcal Q}(-1^n,a, b)$ with two zeros. The first step is to show that $N$ is a locus of covers in a stratum in genus zero or one.
By Corollary 3 and Theorem 2, the case $m=0$ is impossible, and $m=1$ is only possible if $g=0$. If $m=1$ and $g=0$, it follows from [@Apisa Thm. 1.5] that $N$ is a locus of covers.
The remaining case is $m=2$. This can only occur in the two strata $${\mathcal Q}(-1^6,1^2), {\mathcal Q}(-1^2,1^2).$$ The first case is a stratum in genus zero with two zeros, in which case we can apply [@Apisa Thm. 1.5] again to conclude that $N$ is a locus of covers. The second case would lead to a totally geodesic orbit closure $N$ with $\pi(N)\subseteq {\mathcal M}_{1,2}$. For dimension reasons this is only possible if $\pi(N) = {\mathcal M}_{1,2}$ and $N$ is the generic stratum in genus $1.$
So far, we have shown that either $g=0$ and $N$ is a locus of covers or $g=1$ and $N={\mathcal Q}(-1^2,1^2)$. It remains to analyze the cases with $g=0$. By Theorem 2 it follows that $\operatorname{rank}(N)\leq 3$. If $\operatorname{rank}(N)=3$ this implies $m=2, \dim N=6$ and thus $N= {\mathcal Q}(-1^6,1^2)$. The remaining case is $\operatorname{rank}(N)=2$ and ${\mathcal Q}(\mu)= {\mathcal Q}(-1^{a+5},1,a)$. Since $N$ is a full locus of covers, it has to be a cover of a $4$-dimensional stratum in genus zero. The only possibilities are $${\mathcal Q}(-1^4,0^2), {\mathcal Q}(-1^5,1).$$ The first one is not possible since, in this case, the orbit closure has non-zero rel, and totally geodesic orbit closures always have zero rel, see [@wright-totally-geodesic Thm. 1.3]. In the second case, we will reach a contradiction from Riemann-Hurwitz. The zero of order $a$ has to lie over the simple zero in ${\mathcal Q}(-1^5,1)$. In particular, a pullback under a degree $d$ map has at least $5d-(2+k)$ simple poles, where $k$ is the number of simple branch points lying over the simple pole. The additional $2$ corresponds to a potential ramification point of multiplicity $3$, in case the simple zero in ${\mathcal Q}(-1^{a+5},1,a)$ lies over a simple pole. By Riemann-Hurwitz we have $k\leq 2d-2$. Thus we obtain $$5 \geq 5d-2-k \geq 3d,$$ which is impossible for $d\geq 2.$ ◻
It follows from Theorem 2 that $2r\leq m$ if $m\leq 2g-1$. The following result gives further restrictions in the range $2r\leq m \leq 3r-2$.
**Corollary 8**. *Let ${\mathcal Q}(\mu)$ be a stratum of quadratic differentials with $m$ zeros and $N\subseteq {\mathcal Q}(\mu)$ a totally geodesic orbit closure of rank $r+1\geq 2$. Let $\alpha = m-2r$ and assume $0\leq \alpha \leq r-2$. Then, one of the following is true*
- *$g\leq r+2\alpha+1$ or,*
- *a generic curve $X$ in $\pi(N)$ is a double cover of a curve of genus at most $\tfrac{\alpha}{2}$.*
*Proof.* By [@acgh Chap. 3, Exc. B-7] the existence of a $$\mathfrak{g}^{r}_{2r+\alpha},\, 0\leq \alpha\leq r-2$$ implies that either $g\leq r+2\alpha+1$ or that a generic curve $X$ in $N$ is a double cover of a curve of genus $g'\leq \tfrac{\alpha}{2}$. ◻
**Remark 9**. *For example, the statement for $\alpha =0$ in the above corollary says that if $$2\leq \operatorname{rank}(N)= \tfrac{m}{2}+1 < g,$$ then $\pi(N)$ is in the hyperelliptic locus.*
**Remark 10**. *Even if $m$ is not in the range between $2r$ and $3r-2$, the linear system $\mathscr{D}_X$ induces a map $\phi:X\to {\mathbb{P}}^d, d\leq m$, after removing the base locus. There are two cases. Either $\phi$ is birational, in which case one can bound the genus of $X$ only in terms of $r$ and $m$ using Castelnuovo's bound (see [@acgh Chapt. III.2]). The second case is that $X$ covers another curve with a degree of at least two. It would be interesting to find a numerical criterion for when the totally geodesic orbit closure is a locus of covers in the second case.*
## Gonality bounds for totally geodesic surfaces {#gonality-bounds-for-totally-geodesic-surfaces .unnumbered}
Recall that the gonality $\operatorname{gon}(X)$ of a curve $X$ is the smallest degree of a non-constant holomorphic map to ${\mathbb{P}}^1$. The gonality of a general curve $X$ of genus $g$ is $$\operatorname{gon}(X) = \lfloor \tfrac{g+3}{2}\rfloor.$$ In [@bud], Bud showed that in many strata of quadratic differentials, for example, if the partition has only positive entries, a generic curve still has gonality $\lfloor \tfrac{g+3}{2}\rfloor$. On the other hand, we obtain gonality bounds for totally geodesic surfaces only in terms of the number of simple zeros of the partition $\mu$.
**Corollary 11**. *Suppose $N\subseteq {\mathcal Q}(\mu)$ is a totally geodesic orbit closure of rank $2$ and $(X,\omega) \in N$. Then $$\operatorname{gon}(X) \leq m,$$ where $m$ is the number of simple zeros of $\mu$. In particular, if $m=1$, then $g(X)=0$ and, if $m=2$, then $X$ is hyperelliptic.*
*Proof.* If $\operatorname{rank}(N)=2$, the linear system $\mathscr{D}'_X$ from Proposition 6 is a $\mathfrak{g}^1_m$, and hence $$\operatorname{gon}(X)\leq m.$$ ◻
**Remark 12**. *For any rank $2$ orbit closure $M$ in a stratum of Abelian differentials, one can construct a map to ${\mathbb{P}}^1$ similarly. Let $(X,\omega)\!~\in~\!\pi(M)$. Consider the projection of the tangent space of $M$ to absolute cohomology $H^1(X;{\mathbb{C}})$. The intersection with $H^{1,0}(X)$ is a $2$-dimensional subspace and hence defines a map to ${\mathbb{P}}^1$ after removing the base locus. If one can prove a lower bound on the size of the base locus, similar to the bound we have for totally geodesic orbit closures in terms of the number of simple zeros, then one would also obtain gonality bounds for curves in $M$.*
E. Arbarello, M. Cornalba, P. A. Griffiths and J. Harris. Geometry of Algebraic Curves. Vol. I, volume 267 of Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, New York, 1985.
P. Apisa. . arXiv Preprint [arXiv:2110.07540](https://arxiv.org/abs/2110.07540)
A. Bud. . arXiv Preprint [arXiv:2008.02183](https://arxiv.org/abs/2008.02183) (2020)
F. Benirschke and C. Serván. . arXiv Preprint [arXiv:2305.04153](https://arxiv.org/abs/2305.04153)
A. Eskin, C. McMullen, R. Mukamel, A.Wright. .
J. Amer. Math. Soc. 33, pp. 1039-1086, 2020.
S. Filip, . Ann. of Math. vol. 183 (2), pp. 681--713, 2016
E. Goujard. <https://www.bourbaki.fr/TEXTES/Exp1178-Goujard.pdf> (2021).
C. McMullen, R. Mukamel, A. Wright. . Annals of Mathematics, Volume 185(3), pp. 757-990, (2017)
A. Wright. . Journal of Differential Geometry 115.3 , pp. 565-575, 2020.
| arxiv_math | {
"id": "2310.05345",
"title": "Totally geodesic subvarieties of the moduli space of curves and linear\n systems",
"authors": "Frederik Benirschke",
"categories": "math.AG math.DS",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
*In 1973, Katok constructed a non-degenerate (also called bumpy) Finsler metric on $S^3$ with exactly four prime closed geodesics. Anosov then conjectured that four should be the optimal lower bound of the number of prime closed geodesics on every Finsler $S^3$. In this paper, we prove this conjecture for bumpy Finsler $S^{3}$ if the Morse index of any prime closed geodesic is nonzero.*
author:
- |
Huagui Duan[^1], Zihao Qi[^2]\
\
School of Mathematical Sciences and LPMC, Nankai University,\
Tianjin 300071, P. R. China\
title: Multiple closed geodesics on Finsler $3$-dimensional sphere
---
**Key words**: Closed geodesic, Finsler metric, $3$-dimensional sphere, Index theory
**2000 Mathematics Subject Classification**: 53C22, 58E05, 58E10.
# Introduction and main result
A closed curve on a Finsler manifold is a closed geodesic if it is locally the shortest path connecting any two nearby points on this curve. As usual, on any Finsler manifold $(M, F)$, a closed geodesic $c:S^1={\bf R}/{\bf Z}\to M$ is *prime* if it is not a multiple covering (i.e., iteration) of any other closed geodesic. Here the $m$-th iteration $c^m$ of $c$ is defined by $c^m(t)=c(mt)$. The inverse curve $c^{-1}$ of $c$ is defined by $c^{-1}(t)=c(1-t)$ for $t\in {\bf R}$. Note that, unlike on a Riemannian manifold, the inverse curve $c^{-1}$ of a closed geodesic $c$ on an irreversible Finsler manifold need not be a geodesic. We call two prime closed geodesics $c$ and $d$ *distinct* if there is no ${\theta}\in (0,1)$ such that $c(t)=d(t+{\theta})$ for all $t\in{\bf R}$. On a reversible Finsler (or Riemannian) manifold, two closed geodesics $c$ and $d$ are called *geometrically distinct* if $c(S^1)\neq d(S^1)$, i.e., their image sets in $M$ are distinct. We shall omit the word *distinct* when we talk about more than one prime closed geodesic.
For a closed geodesic $c$ on an $n$-dimensional manifold $(M,\,F)$, denote by $P_c$ the linearized Poincaré map of $c$. Then $P_c\in {\rm Sp}(2n-2)$ is symplectic. For any $P\in {\rm Sp}(2k)$, we define the *elliptic height* $e(P)$ of $P$ to be the total algebraic multiplicity of all eigenvalues of $P$ on the unit circle ${\bf U}=\{z\in{\bf C}|\; |z|=1\}$ in the complex plane ${\bf C}$. Since $P$ is symplectic, $e(P)$ is even and $0\le e(P)\le 2k$. A closed geodesic $c$ is called *elliptic* if $e(P_c)=2(n-1)$, i.e., all the eigenvalues of $P_c$ are located on ${\bf U}$; *non-degenerate* if $1$ is not an eigenvalue of $P_c$. A Finsler manifold $(M,\,F)$ is called *bumpy* if all the closed geodesics (including iterations) on it are non-degenerate.
There is a famous conjecture in Riemannian geometry which claims there exist infinitely many closed geodesics on any compact Riemannian manifold. This conjecture has been proved except for CROSS's (compact rank one symmetric spaces). The results of Franks [@Fra] in 1992, Hingston [@Hin2] and Bangert [@Ban] in 1993 imply this conjecture is true for any Riemannian $S^2$.
For Finsler metrics, the closed geodesic problem is completely different. It was quite surprising when Katok [@Kat] in 1973 found some irreversible bumpy Finsler metrics on CROSS's with only finitely many closed geodesics, all of which are non-degenerate and elliptic (cf. [@Zil]). Based on Katok's examples, in 1974 Anosov [@Ano] conjectured that the optimal lower bound of the number of prime closed geodesics on any Finsler $S^n$ should be $2[\frac{n+1}{2}]$. We refer readers to the two excellent survey papers [@Lon4] and [@BuK] for more interesting questions.
The index iteration theory of closed geodesics (cf. [@Bot] and [@Lon3]) has been an important and powerful tool in studying the closed geodesic problem on Finsler manifolds. For example, Bangert and Long in [@BaL] (finished in 2005) showed that there exist at least two prime closed geodesics on every Finsler $S^2$, which solved the Anosov conjecture in this case. Since then, a great number of multiplicity and stability results about closed geodesics established by the index iteration theory have appeared (cf. [@DuL1]-[@DuL2], [@DLW], [@Lon4], [@Rad3], [@Wan1]-[@Wan3] and the references therein).
This paper is mainly devoted to studying the Anosov conjecture for bumpy Finsler $S^3$. In this direction, Duan and Long in [@DuL1] and Rademacher in [@Rad3] proved the existence of at least two prime closed geodesics on bumpy Finsler $S^n$, respectively. Wang in [@Wan2] solved the Anosov conjecture for any positively curved bumpy Finsler $S^n$. Recently, the Anosov conjecture on bumpy Finsler CROSS's (including spheres) has been proved by Duan, Long and Wang in [@DLW] under much weaker curvature or index conditions; especially for $S^3$ the index condition is $i(c)\ge 2$ for any prime closed geodesic $c$, which, to the authors' knowledge, is the weakest condition for the Anosov conjecture on bumpy Finsler $S^3$. We also notice that in Katok's example, the Morse index of every prime closed geodesic on $S^3$ is greater than or equal to $2$.
Motivated by Katok's examples and the above results, in this paper we establish the following result, which resolves the Anosov conjecture for bumpy $S^3$ under a much weaker restriction.
**Theorem 1.1.** *Assume that $F$ is a bumpy Finsler metric on $S^{3}$, then there exist at least four prime closed geodesics on $(S^3,F)$ if the Morse index of every prime closed geodesic is nonzero.*
Under the assumption of Theorem 1.1, it follows from Remark 3.7 of [@DuL2] and Theorem 1.4 of [@DLW] that there exist at least three prime closed geodesics on bumpy Finsler $S^3$. Then, assuming the existence of exactly three prime closed geodesics, we will use a contradiction argument to complete the proof of Theorem 1.1.
Since here we only assume that the Morse index $i(c)\ge 1$ for any prime closed geodesic $c$, in general the sequence $\{i(c^m)\}$ has no monotonicity with respect to $m$, which invalidates the methods of [@DLW] and [@Wan2]. On the other hand, under the assumption that there exist exactly three prime closed geodesics, the methods of [@DuL2], which rely on some complicated classification arguments and the mean index identity, cannot give any valuable information in this situation.
To overcome the above difficulties, the main novelty of this paper is to establish a *variant of the generalized common index jump theorem* (see Theorem 3.8 below), which, compared with Theorem 3.6 of [@DLLW] (cf. Theorem 3.6 below), gives more information about some parameters, including the common integer $N$ and the iterates $m_k$. Then we use Morse theory together with some precise analysis and estimates of common iteration indices to derive a contradiction. Because this method does not use a special classification of closed geodesics, we hope that it can be applied to the corresponding problems on higher-dimensional Finsler manifolds.
When the assumption of Theorem 1.1 is not satisfied, i.e., such an $(S^3,F)$ possesses at least one prime closed geodesic with zero Morse index, one suspects the existence of infinitely many prime closed geodesics. A related result is Corollary 2 of [@BaK], which shows that for a compact manifold $M$ with finite fundamental group, if there exists a closed geodesic $c$ on $M$ such that $c^m$ is a local minimum of $E$ for infinitely many $m\in{\bf N}$, then there exist infinitely many prime closed geodesics on $M$. So we further propose the following conjecture.
**Conjecture 1.2.** *Let $F$ be a bumpy Finsler metric on the $n$-dimensional sphere $S^n$. If there exist only finitely many prime closed geodesics on $(S^n,F)$, then none of the prime closed geodesics is a local minimum of the energy functional $E$, which implies $i(c)\ge 1$ for any prime closed geodesic $c$.*
In this paper, let ${\bf N}$, ${\bf N}_0$, ${\bf Z}$, ${\bf Q}$, ${\bf R}$, and ${\bf C}$ denote the sets of natural integers, non-negative integers, integers, rational numbers, real numbers, and complex numbers respectively. We use only singular homology modules with ${\bf Q}$-coefficients. For an $S^1$-space $X$, we denote by $\overline{X}$ the quotient space $X/S^1$. We define the functions $$\left\{\matrix{[a]=\max\{k\in{\bf Z}\,|\,k\le a\}, &
E(a)=\min\{k\in{\bf Z}\,|\,k\ge a\} , \cr
\varphi(a)=E(a)-[a], &\{a\}=a-[a]. \cr}\right. \label{1.1}$$ Especially, $\varphi(a)=0$ if $a\in{\bf Z}\,$, and $\varphi(a)=1$ if $a\notin{\bf Z}\,$.
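For instance, these conventions give $$\Big[\tfrac{3}{2}\Big]=1,\quad E\big(\tfrac{3}{2}\big)=2,\quad \varphi\big(\tfrac{3}{2}\big)=1,\quad \big\{\tfrac{3}{2}\big\}=\tfrac{1}{2}, \qquad [2]=E(2)=2,\quad \varphi(2)=0,\quad \{2\}=0.$$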
# Critical modules of closed geodesics and Morse Theory
Let $M=(M,F)$ be a compact Finsler manifold. The space $\Lambda=\Lambda M$ of $H^1$-maps $\gamma:S^1\rightarrow M$ has a natural structure of a Riemannian Hilbert manifold on which the group $S^1={\bf R}/{\bf Z}$ acts continuously by isometries, cf. [@Kli], Chapters 1 and 2. This action is defined by $(s\cdot\gamma)(t)=\gamma(t+s)$ for all $\gamma\in{\Lambda}$ and $s,
t\in S^1$. For any $\gamma\in\Lambda$, the energy functional is defined by $$E(\gamma)=\frac{1}{2}\int_{S^1}F(\gamma(t),\dot{\gamma}(t))^2dt.
\label{2.1}$$ It is $C^{1,1}$ and invariant under the $S^1$-action. The critical points of $E$ of positive energies are precisely the closed geodesics $\gamma:S^1\to M$. The index form of the functional $E$ is well defined along any closed geodesic $c$ on $M$, which we denote by $E''(c)$. As usual, we denote by $i(c)$ and $\nu(c)$ the Morse index and nullity of $E$ at $c$. In the following, we denote by $${\Lambda}^\kappa=\{d\in {\Lambda}\;|\;E(d)\le\kappa\},\quad {\Lambda}^{\kappa-}=\{d\in {\Lambda}\;|\; E(d)<\kappa\},
\quad \forall \kappa\ge 0. \nonumber$$ For a closed geodesic $c$ we set ${\Lambda}(c)=\{{\gamma}\in{\Lambda}\mid E({\gamma})<E(c)\}$.
For $m\in{\bf N}$ we denote the $m$-fold iteration map $\phi_m:\Lambda\rightarrow\Lambda$ by $\phi_m({\gamma})(t)={\gamma}(mt)$, for all $\,{\gamma}\in{\Lambda}, t\in S^1$, as well as ${\gamma}^m=\phi_m(\gamma)$. If $\gamma\in\Lambda$ is not constant then the multiplicity $m(\gamma)$ of $\gamma$ is the order of the isotropy group $\{s\in S^1\mid s\cdot\gamma=\gamma\}$. For a closed geodesic $c$, the mean index $\hat{i}(c)$ is defined as usual by $\hat{i}(c)=\lim_{m\to\infty}i(c^m)/m$. Using singular homology with rational coefficients we consider the following critical ${\bf Q}$-module of a closed geodesic $c\in\Lambda$: $$\overline{C}_*(E,c)
= H_*\left(({\Lambda}(c)\cup S^1\cdot c)/S^1,{\Lambda}(c)/S^1\right). \label{2.3}$$
The following results of Rademacher will be used in our proofs below.
**Proposition 2.1.** (cf. Satz 6.11 of [@Rad2] ) *Let $c$ be a prime closed geodesic on a bumpy Finsler manifold $(M,F)$. Then there holds* $$\overline{C}_q( E,c^m) = \left\{\matrix{
{\bf Q}, &\quad {\it if}\;\; i(c^m)-i(c)\in 2{\bf Z}\;\;{\it and}\;\;
q=i(c^m),\; \cr
0, &\quad {\it otherwise}. \cr}\right.$$
**Definition 2.2.** (cf. Definition 1.6 of [@Rad1]) *For a closed geodesic $c$, let ${\gamma}_c\in\{\pm\frac{1}{2},\pm1\}$ be the invariant defined by ${\gamma}_c>0$ if and only if $i(c)$ is even, and $|{\gamma}_c|=1$ if and only if $i(c^2)-i(c)$ is even.*
Let $(X,Y)$ be a space pair such that the Betti numbers $b_i=b_i(X,Y)=\dim H_i(X,Y;{\bf Q})$ are finite for all $i\in {\bf Z}$. As usual the Poincaré series of $(X,Y)$ is defined by the formal power series $P(X, Y)=\sum_{i=0}^{\infty}b_it^i$. We need the following well known results on Betti numbers and the Morse inequality for $\overline{{\Lambda}}\equiv
\overline{{\Lambda}} S^3$ and $\overline{{\Lambda}}^0=\overline{\Lambda}^0S^3
=\{{\rm constant\;point\;curves\;in\;}S^3\}\cong S^3$.
**Proposition 2.3.** (cf. Remark 2.5 of [@Rad1] or [@Hin1]) *The Poincaré series is given by $$\begin{aligned}
P(\overline{{\Lambda}}S^3,\overline{{\Lambda}}^0S^3)(t)
&=&t^2\left(\frac{1}{1-t^2}+\frac{t^2}{1- t^2}\right) \nonumber\\
&=& t^2(1+t^2)(1+t^2+t^4+\cdots) = t^2+2t^4+2t^6+\cdots, \nonumber\end{aligned}$$ which yields* $${b}_q = {b}_q(\overline{{\Lambda}}S^3,\overline{{\Lambda}}^0 S^3)\;
= {\rm rank}H_q(\overline{{\Lambda}} S^3,\overline{{\Lambda}}^0 S^3 )
= \left\{\begin{array}{ll}
1,&\quad {\it if}\quad q=2, \\
2,&\quad {\it if}\quad q=2k+2,\quad k\in {\bf N}, \\
0,&\quad {\it otherwise}. \end{array}\right. \label{2.4}$$
**Proposition 2.4.** (cf. Theorem I.4.3 of [@Cha], Theorem 6.1 of [@Rad2]) *Suppose that there exist only finitely many prime closed geodesics $\{c_j\}_{1\le j\le k}$ on a Finsler $3$-sphere $(S^3, F)$. Set $$M_q =\sum_{1\le j\le k,\; m\ge 1}\dim{\overline{C}}_q(E, c^m_j), \quad \forall q\in{\bf Z}.$$ Then for every integer $q\ge 0$ there holds* $$\begin{aligned}
M_q - M_{q-1} + \cdots +(-1)^{q}M_0
&\ge& b_q - b_{q-1}+ \cdots + (-1)^{q}b_0, \label{2.5}\\
M_q &\ge& b_q. \label{2.6}\end{aligned}$$
# The common index jump theorem for symplectic paths
In [@Lon1] of 1999, Y. Long established the basic normal form decomposition of symplectic matrices. Based on this result, he further established the precise iteration formulae of indices of symplectic paths in [@Lon2] of 2000. Since every closed geodesic on a sphere is orientable, by Theorem 1.1 of [@Liu] the Morse index of a closed geodesic on $S^n$ coincides with the Maslov-type index of a corresponding symplectic path.
As in [@Lon2], the basic normal forms are denoted by $$\begin{aligned}
N_1({\lambda}, b) &=& \left(\begin{array}{ll}{\lambda}& b\\
0 & {\lambda}\end{array}\right), \qquad {\rm for\;}{\lambda}=\pm 1, \; b\in{\bf R}, \label{3.1}\\
D({\lambda}) &=& \left(\begin{array}{ll}{\lambda}& 0\\
0 & {\lambda}^{-1} \end{array}\right), \qquad {\rm for\;}{\lambda}\in{\bf R}\setminus\{0, \pm 1\}, \label{3.2}\\
R({\theta}) &=& \left(\begin{array}{ll}\cos{\theta}& -\sin{\theta}\\
\sin{\theta}& \cos{\theta}\end{array}\right), \qquad {\rm for\;}{\theta}\in (0,\pi)\cup (\pi,2\pi), \label{3.3}\\
N_2(e^{\sqrt{-1}{\theta}}, B) &=& \left(\begin{array}{ll} R({\theta}) & B \\
0 & R({\theta}) \end{array} \right), \qquad {\rm for\;}{\theta}\in (0,\pi)\cup (\pi,2\pi)\;\; {\rm and}\; \nonumber\\
&& \quad B=\left(\begin{array}{lll} b_1 & b_2\\
b_3 & b_4 \end{array}\right)\; {\rm with}\; b_j\in{\bf R}, \;\;
{\rm and}\;\; b_2\not= b_3. \label{3.4}\end{aligned}$$
As in [@Lon2], the $\diamond$-sum (direct sum) of any two real matrices is defined by $$\left(\begin{array}{lll}A_1 & B_1\\ C_1 & D_1 \end{array}\right)_{2i\times 2i}\diamond
\left(\begin{array}{lll}A_2 & B_2\\ C_2 & D_2 \end{array}\right)_{2j\times 2j}
=\left(\begin{array}{llll}A_1 & 0 & B_1 & 0 \\
0 & A_2 & 0& B_2\\
C_1 & 0 & D_1 & 0 \\
0 & C_2 & 0 & D_2\end{array}\right).$$
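To make the block convention in this definition explicit (the blocks of the two summands are interleaved according to the symplectic splitting rather than placed along the diagonal), we record a small worked example; it is ours and serves only as an illustration. Taking $N_1(1,1)$ from ([\[3.1\]](#3.1){reference-type="ref" reference="3.1"}) and $R({\theta})$ from ([\[3.3\]](#3.3){reference-type="ref" reference="3.3"}), so that all four blocks of each summand are $1\times 1$, the definition gives $$N_1(1,1)\diamond R({\theta})
= \left(\begin{array}{llll}
1 & 0 & 1 & 0 \\
0 & \cos{\theta}& 0 & -\sin{\theta}\\
0 & 0 & 1 & 0 \\
0 & \sin{\theta}& 0 & \cos{\theta}\end{array}\right) \in {\rm Sp}(4).$$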
For every $M\in{\rm Sp}(2n)$, the homotopy set $\Omega(M)$ of $M$ in ${\rm Sp}(2n)$ is defined by $${\Omega}(M)=\{N\in{\rm Sp}(2n)\,|\,{\sigma}(N)\cap{\bf U}={\sigma}(M)\cap{\bf U}\equiv\Gamma,
\;\nu_{{\omega}}(N)=\nu_{{\omega}}(M),\, \forall{\omega}\in\Gamma\},$$ where ${\sigma}(M)$ denotes the spectrum of $M$, $\nu_{{\omega}}(M)\equiv\dim_{{\bf C}}\ker_{{\bf C}}(M-{\omega}I)$ for ${\omega}\in{\bf U}$. The component ${\Omega}^0(M)$ of $M$ in ${\rm Sp}(2n)$ is defined as the path-connected component of ${\Omega}(M)$ containing $M$.
**Lemma 3.1.** (cf. [@Lon2], Lemma 9.1.5 and List 9.1.12 of [@Lon3])
*For $M\in{\rm Sp}(2n)$ and ${\omega}\in{\bf U}$, the splitting number $S_M^\pm({\omega})$ (cf. Definition 9.1.4 of [@Lon3]) satisfies $$\begin{aligned}
S_M^{\pm}({\omega}) &=& 0, \qquad {\rm if}\;\;{\omega}\not\in{\sigma}(M). \label{3.5}\\
S_{N_1(1,a)}^+(1) &=& \left\{\begin{array}{lll}1, &\quad {\rm if}\;\; a\ge 0, \\
0, &\quad {\rm if}\;\; a< 0. \end{array}\right. \label{3.6}\end{aligned}$$*
For any $M_i\in{\rm Sp}(2n_i)$ with $i=0$ and $1$, there holds
$$S^{\pm}_{M_0\diamond M_1}({\omega}) = S^{\pm}_{M_0}({\omega}) + S^{\pm}_{M_1}({\omega}),
\qquad \forall\;{\omega}\in{\bf U}. \label{3.7}$$
We have the following decomposition theorem.
**Theorem 3.2.** (cf. [@Lon2] and Theorem 1.8.10 of [@Lon3]) *For any $M\in{\rm Sp}(2n)$, there is a path $f:[0,1]\to{\Omega}^0(M)$ such that $f(0)=M$ and $$f(1) = M_1\diamond\cdots\diamond M_k, \label{3.8}$$ where each $M_i$ is a basic normal form listed in ([\[3.1\]](#3.1){reference-type="ref" reference="3.1"})-([\[3.4\]](#3.4){reference-type="ref" reference="3.4"}) for $1\leq i\leq k$.*
For every ${\gamma}\in\mathcal{P}_\tau(2n)\equiv\{{\gamma}\in C([0,\tau],Sp(2n))\ |\ {\gamma}(0)=I_{2n}\}$, we extend ${\gamma}(t)$ to $t\in [0,m\tau]$ for every $m\in{\bf N}$ by $$\begin{aligned}
{\gamma}^m(t)={\gamma}(t-j\tau){\gamma}(\tau)^j \quad \forall\;j\tau\le t\le (j+1)\tau \;\;
{\rm and}\;\;j=0, 1, \ldots, m-1, \label{3.9}\end{aligned}$$ as in p.114 of [@Lon1]. As in [@LoZ] and [@Lon3], we denote the Maslov-type indices of ${\gamma}^m$ by $(i({\gamma},m),\nu({\gamma},m))$.
Then by Theorem 3.2 and some index computations of basic normal forms, the following iteration formula from [@LoZ] and [@Lon3] can be obtained.
**Theorem 3.3.** (cf. Theorem 9.3.1 of [@Lon3]) *For any path ${\gamma}\in\mathcal{P}_\tau(2n)$, let $M={\gamma}(\tau)$ and $C(M)=\sum_{0<{\theta}<2\pi}S_M^-(e^{\sqrt{-1}{\theta}})$. We extend ${\gamma}$ to $[0,+\infty)$ by its iterates. Then for any $m\in{\bf N}$ we have $$\begin{aligned}
&&i({\gamma},m)
= m(i({\gamma},1)+S^+_{M}(1)-C(M))\nonumber\\
&&\qquad+ 2\sum_{{\theta}\in(0,2\pi)}E\left(\frac{m{\theta}}{2\pi}\right)S^-_{M}(e^{\sqrt{-1}{\theta}}) - (S_M^+(1)+C(M)) \label{3.10}\end{aligned}$$ and $$\hat{i}({\gamma},1) = i({\gamma},1) + S^+_{M}(1) - C(M) + \sum_{{\theta}\in(0,2\pi)}\frac{{\theta}}{\pi}S^-_{M}(e^{\sqrt{-1}{\theta}}). \label{3.11}$$*
**Theorem 3.4.** (cf. Theorem 4.1 of [@LoZ] and Theorem 3.4 of [@DLLW]) *Fix an integer $q>0$. Let $\mu_i\ge 0$ and ${\beta}_i$ be integers for all $i=1,\cdots,q$. Let ${\alpha}_{i,j}$ be positive numbers for $j=1,\cdots,\mu_i$ and $i=1,\cdots,q$. Let ${\delta}\in(0,\frac{1}{2})$ be such that ${\delta}\max\limits_{1\le i\le q}\mu_i<\frac{1}{2}$. Suppose $D_i \equiv {\beta}_i+\sum\limits_{j=1}^{\mu_i}{\alpha}_{i,j}\neq 0$ for $i=1,\cdots,q$. Then there exist infinitely many $(N, m_1,\cdots,m_q)\in{\bf N}^{q+1}$ such that $$\begin{aligned}
&& m_i{\beta}_i+\sum_{j=1}^{\mu_i}E(m_i{\alpha}_{i,j}) =
\varrho_i N+\Delta_i, \qquad \forall\ 1\le i\le q. \label{3.12}\\
&& \min\{\{m_i{\alpha}_{i,j}\}, 1-\{m_i{\alpha}_{i,j}\}\} < {\delta},\qquad \forall\ j=1,\cdots,\mu_i, 1\le i\le q, \label{3.13}\\
&& m_i{\alpha}_{i,j}\in{\bf N},\ {\rm if} \ {\alpha}_{i,j}\in{\bf Q}, \label{3.14}\end{aligned}$$ where $$\begin{aligned}
\varrho_i=\left\{\begin{array}{cc}1, &{\rm if}\ D_i>0, \cr
-1, &{\rm if}\ D_i<0, \end{array}\right.\quad \Delta_i=\sum_{0<\{m_i{\alpha}_{i,j}\}<{\delta}}1,\quad \forall\ 1\le i\le q.\label{3.15}\end{aligned}$$*
**Remark 3.5.** (i) When $D_i>0$ for all $1\le i\le q$, this is precisely Theorem 4.1 of [@LoZ] (also cf. Theorem 11.1.1 of [@Lon3]). When $D_i\neq0$ for all $1\le i\le q$, this has been proved in [@DLLW] (cf. Theorem 3.4 of [@DLLW]).
\(ii\) According to Theorem 3.4 and its proof, it is easy to see that for any two small enough numbers ${\delta}_1$ and ${\delta}_2$ satisfying ${\delta}_k \in(0,\frac{1}{2})$ and ${\delta}_k\max\limits_{1\le i\le q}\mu_i<\frac{1}{2}$ for $k=1,2$, Theorem 3.4 holds with $$\sum_{0<\{m_i{\alpha}_{i,j}\}<{\delta}_1}1=\sum_{0<\{m_i{\alpha}_{i,j}\}<{\delta}_2}1,\quad \forall\ 1\le i\le q.$$
In 2002, Y. Long and C. Zhu [@LoZ] established the common index jump theorem for symplectic paths, which has become one of the main tools to study the periodic orbit problem in Hamiltonian and symplectic dynamics. In [@DLW] of 2016, H. Duan, Y. Long and W. Wang further improved this theorem to an enhanced version which gives more precise index properties of ${\gamma}_k^{2m_k}$ and ${\gamma}_k^{2m_k\pm m}$ with $1\le m \le \bar{m}$ for any fixed $\bar{m}$. With the help of Theorem 3.4 and following the proof of Theorem 3.5 in [@DLW], this result has been further generalized to allow symplectic paths with negative mean indices.
**Theorem 3.6.** (**Generalized common index jump theorem**, cf. Theorem 3.6 of [@DLLW])
*Let $\gamma_i\in\mathcal{P}_{\tau_i}(2n)$ for $i=1,\cdots,q$ be a finite collection of symplectic paths with nonzero mean indices $\hat{i}({\gamma}_i,1)$. Let $M_i={\gamma}_i(\tau_i)$. We extend ${\gamma}_i$ to $[0,+\infty)$ by ([\[3.9\]](#3.9){reference-type="ref" reference="3.9"}) inductively.*
Then for any fixed $\bar{m}\in {\bf N}$, there exist infinitely many $(q+1)$-tuples $(N, m_1,\cdots,m_q) \in {\bf N}^{q+1}$ such that the following hold for all $1\le i\le q$ and $1\le m\le \bar{m}$, $$\begin{aligned}
\nu({\gamma}_i,2m_i-m) &=& \nu({\gamma}_i,2m_i+m) = \nu({\gamma}_i, m), \label{3.16}\\
i({\gamma}_i,2m_i+m) &=& 2\varrho_i N+i({\gamma}_i,m), \label{3.17}\\
i({\gamma}_i,2m_i-m) &=& 2\varrho_i N-i({\gamma}_i,m)-2(S^+_{M_i}(1)+Q_i(m)), \label{3.18}\\
i({\gamma}_i, 2m_i)&=& 2\varrho_i N -(S^+_{M_i}(1)+C(M_i)-2\Delta_i), \label{3.19}\end{aligned}$$ where $$\begin{aligned}
&&\varrho_i=\left\{\begin{array}{cc}1, &{\rm if}\ \hat{i}({\gamma}_i,1)>0, \cr
-1, &{\rm if}\ \hat{i}({\gamma}_i,1)<0, \end{array}\right.\qquad
\Delta_i = \sum_{0<\{m_i{\theta}/\pi\}<\delta}S^-_{M_i}(e^{\sqrt{-1}{\theta}}),\nonumber\\
&&\ Q_i(m) = \sum_{e^{\sqrt{-1}{\theta}}\in{\sigma}(M_i),\atop \{\frac{m_i{\theta}}{\pi}\}
= \{\frac{m{\theta}}{2\pi}\}=0}S^-_{M_i}(e^{\sqrt{-1}{\theta}}). \label{3.20}\end{aligned}$$ Moreover we have $$\begin{aligned}
\min\left\{\left\{\frac{m_i\theta}{\pi}\right\},1-\left\{\frac{m_i\theta}{\pi}\right\}\right\}<{\delta},\label{3.21}\end{aligned}$$ whenever $e^{\sqrt{-1}\theta}\in\sigma(M_i)$ and ${\delta}$ can be chosen as small as we want. More precisely, by (3.17) in [@DLLW] and (4.40), (4.41) in [@LoZ], we have $$\begin{aligned}
m_i=\left(\left[\frac{N}{M|\hat i(\gamma_i, 1)|}\right]+\chi_i\right)M,\quad\forall\ 1\le i\le q,\label{3.22}\end{aligned}$$ where $\chi_i=0$ or $1$ for $1\le i\le q$ and $\frac{M\theta}{\pi}\in{\bf Z}$ whenever $e^{\sqrt{-1}\theta}\in\sigma(M_i)$ and $\frac{\theta}{\pi}\in{\bf Q}$ for some $1\le i\le q$. Furthermore, given $M_0$, from the proof of Theorem 4.1 of [@LoZ], we may further require $N$ to be a multiple of $M_0$, i.e., $M_0|N$.
**Remark 3.7.** In fact, let $\mu_i=\sum_{0<{\theta}<2\pi}S_{M_i}^-(e^{\sqrt{-1}{\theta}})$, $\alpha_{i,j}=\frac{{\theta}_j}{\pi}$ where $e^{\sqrt{-1}{\theta}_j}\in\sigma(M_i)$ for $1\le j\le\mu_i$ and $1\le i\le q$. Let $l=q+\sum_{i=1}^q \mu_i$ and $$\begin{aligned}
v=\left(\frac{1}{M|\hat{i}(\gamma_1,1)|},\cdots,\frac{1}{M|\hat{i}(\gamma_1,1)|},
\frac{{\alpha}_{1,1}}{|\hat{i}(\gamma_1,1)|},\cdots,
\frac{{\alpha}_{1,\mu_1}}{|\hat{i}(\gamma_1,1)|},\cdots,\frac{{\alpha}_{q,1}}{|\hat{i}(\gamma_q,1)|},\cdots,
\frac{{\alpha}_{q,\mu_q}}{|\hat{i}(\gamma_q,1)|}\right)\in{\bf R}^l.\label{3.23}\end{aligned}$$ Then Theorem 3.6 is equivalent to finding a vertex $$\begin{aligned}
\chi=(\chi_1,\cdots,\chi_q,\chi_{1,1},\cdots,\chi_{1,\mu_1},\cdots,\chi_{q,1},\cdots,\chi_{q,\mu_q})\in\{0,1\}^l\end{aligned}$$ of the cube $[0,1]^l$ and infinitely many $N\in{\bf N}$ such that for any small enough $\epsilon\in(0,\frac{1}{2})$ there holds $$\begin{aligned}
|\{Nv\}-\chi|<\epsilon.\label{3.24}\end{aligned}$$
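As a purely numerical illustration of this reformulation (a toy computation of ours, not tied to any geodesic data and not used in any proof), one may search by brute force for integers $N$ whose fractional-part vector $\{Nv\}$ is close, coordinatewise, to a vertex of the unit cube; a minimal Python sketch:

```python
import math

# Toy illustration of (3.24): for a sample vector v (arbitrary irrational
# entries chosen only for illustration), search for integers N such that
# every fractional part {N v_i} is close to 0 or to 1, i.e. {Nv} is close
# to some vertex chi of the unit cube.
v = [1 / (2 * math.sqrt(2)), math.sqrt(3) / 5, math.sqrt(5) / 7]

def deviation(N):
    fracs = [N * x - math.floor(N * x) for x in v]
    chi = tuple(1 if f > 0.5 else 0 for f in fracs)   # nearest cube vertex
    return max(min(f, 1 - f) for f in fracs), chi

best = sorted((deviation(N), N) for N in range(1, 200001))[:3]
for (d, chi), N in best:
    print(f"N = {N:7d}, chi = {chi}, max coordinatewise deviation = {d:.6f}")
```

Of course, Theorem 3.6 asserts much more, namely that one fixed vertex $\chi$ works for infinitely many $N$; the sketch merely illustrates the approximation phenomenon.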
Next we replace $N$ in Theorem 3.6 by a suitable multiple $\hat{N}$ of $N$ and obtain the corresponding iterates $\hat{m}_k$ such that equalities similar to ([\[3.16\]](#3.16){reference-type="ref" reference="3.16"})-([\[3.19\]](#3.19){reference-type="ref" reference="3.19"}) hold. At the same time, we also obtain some relations between these integers, which will play a crucial role in the proof of Theorem 1.1 in Section 4.
**Theorem 3.8.** (A variant of Theorem 3.6)
*Let $\gamma_i\in\mathcal{P}_{\tau_i}(2n)$ for $i=1,\cdots,q$ be a finite collection of symplectic paths with nonzero mean indices $\hat{i}({\gamma}_i,1)$. Let $M_i={\gamma}_i(\tau_i)$. Fix a positive integer $\hat{p}$, replace $\delta$ in ([\[3.21\]](#3.21){reference-type="ref" reference="3.21"}) by $\frac{\delta}{\hat{p}}$ and $\epsilon$ in ([\[3.24\]](#3.24){reference-type="ref" reference="3.24"}) by $\frac{\epsilon}{\hat{p}}$, and then let $N$ be an integer satisfying ([\[3.16\]](#3.16){reference-type="ref" reference="3.16"})-([\[3.22\]](#3.22){reference-type="ref" reference="3.22"}).*
Then for $\hat{N}=\hat{p}N$, there exists a corresponding $q$-tuple $(\hat{m}_1,\cdots,\hat{m}_q) \in {\bf N}^q$ such that the following hold for all $1\le i\le q$ and $1\le m\le \bar{m}$: $$\begin{aligned}
\nu({\gamma}_i,2\hat{m}_i-m) &=& \nu({\gamma}_i,2\hat{m}_i+m) = \nu({\gamma}_i, m), \label{3.25}\\
i({\gamma}_i,2\hat{m}_i+m) &=& 2\varrho_i \hat{p} N+i({\gamma}_i,m), \label{3.26}\\
i({\gamma}_i,2\hat{m}_i-m) &=& 2\varrho_i \hat{p} N-i({\gamma}_i,m)-2(S^+_{M_i}(1)+Q_i(m)), \label{3.27}\\
i({\gamma}_i, 2\hat{m}_i)&=& 2\varrho_i \hat{p} N -(S^+_{M_i}(1)+C(M_i)-2\hat{\Delta}_i), \label{3.28}\end{aligned}$$ where $\varrho_i$ and $Q_i(m)$ are the same as those in ([\[3.20\]](#3.20){reference-type="ref" reference="3.20"}) and $$\begin{aligned}
\hat{m}_i=\left(\left[\frac{\hat{p}N}{M|\hat i(\gamma_i, 1)|}\right]+\hat{\chi}_i\right)M,\quad
\hat{\Delta}_i = \sum_{0<\{\hat{m}_i{\theta}/\pi\}<\delta}S^-_{M_i}(e^{\sqrt{-1}{\theta}}),\quad\forall\ 1\le i\le q.\label{3.29}\end{aligned}$$ Furthermore, comparing with some corresponding integers in Theorem 3.6, there holds $$\begin{aligned}
\hat{\chi}_i=\chi_i,\quad \hat{m}_i=\hat{p}m_i,\quad \hat{\Delta}_i=\Delta_i,\quad \forall\ 1\le i\le q.\label{3.30}\end{aligned}$$
**Proof.** Set $v_i=\frac{1}{|M\hat{i}(\gamma_i,1)|},\forall 1\le i\le q$. Firstly notice that $$\begin{aligned}
\{\hat{p}Nv_i\}=\{\hat{p}([Nv_i]+\{Nv_i\})\}=\{\hat{p}[Nv_i]+\hat{p}\{Nv_i\}\}=\{\hat{p}\{Nv_i\}\}.\end{aligned}$$
When $\chi_i=0$, it follows from ([\[3.24\]](#3.24){reference-type="ref" reference="3.24"}) with $\epsilon$ replaced by $\frac{\epsilon}{\hat{p}}$ (i.e. $\{Nv_i\}<\frac{\epsilon}{\hat{p}}$) that $|\{\hat{p}Nv_i\}-\chi_i|=\{\hat{p}\{Nv_i\}\}=\hat{p}\{Nv_i\}<\epsilon$. When $\chi_i=1$, it follows from ([\[3.24\]](#3.24){reference-type="ref" reference="3.24"}) that $0<1-\{Nv_i\}<\frac{\epsilon}{\hat{p}}$. Therefore there holds $0<\hat{p}-\hat{p}\{Nv_i\}<\epsilon$, which yields $0<1-\{\hat{p}\{Nv_i\}\}=1-\{\hat{p}Nv_i\}<\epsilon$. In a word, there always holds $$\begin{aligned}
|\{\hat{p}Nv_i\}-\chi_i|=|\{\hat{p}\{Nv_i\}\}-\chi_i|<\epsilon,\qquad 1\le i\le q.\label{3.32}\end{aligned}$$ Roughly speaking, when $\{Nv_i\}$ is close enough to $\chi_i$, $\{\hat{p}Nv_i\}$ is also close enough to the same $\chi_i$. Therefore $\hat{\chi}_i=\chi_i, \forall\ 1\le i\le q$.
Now we prove the relation between $\hat{m}_i$ and $m_i$ for $1\le i\le q$. By arguments similar to those in the proof of Theorem 3.6 (i.e., the proof of the analogue of ([\[3.22\]](#3.22){reference-type="ref" reference="3.22"})), we have $$\begin{aligned}
\hat{m}_i&=&\left(\left[\frac{\hat{p}N}{M|\hat i(\gamma_i, 1)|}\right]+\hat{\chi}_i\right)M=\left(\left[\hat{p}Nv_i\right]+\chi_i\right)M\nonumber\\
&=&\left(\left[\hat{p}([Nv_i]+\{Nv_i\})\right]+\chi_i\right)M=\left(\hat{p}[Nv_i]+\left[\hat{p}\{Nv_i\}\right]+\chi_i\right)M.\label{3.33}\end{aligned}$$
When $\chi_i=0$, then $\{Nv_i\}<\frac{\epsilon}{\hat{p}}$, and thus $\hat{p}\{Nv_i\}<\epsilon$. By ([\[3.33\]](#3.33){reference-type="ref" reference="3.33"}) and ([\[3.22\]](#3.22){reference-type="ref" reference="3.22"}) we obtain $\hat{m}_i=\hat{p}[Nv_i]M=\hat{p}m_i$. When $\chi_i=1$, it follows from ([\[3.24\]](#3.24){reference-type="ref" reference="3.24"}) that $0<1-\{Nv_i\}<\frac{\epsilon}{\hat{p}}$. Therefore there holds $\hat{p}-1<\hat{p}-\epsilon<\hat{p}\{Nv_i\}<\hat{p}$, which yields $[\hat{p}\{Nv_i\}]=\hat{p}-1$. Again by ([\[3.33\]](#3.33){reference-type="ref" reference="3.33"}) and ([\[3.22\]](#3.22){reference-type="ref" reference="3.22"}) we obtain $\hat{m}_i=(\hat{p}[Nv_i]+(\hat{p}-1)+1)M=\hat{p}([Nv_i]+1)M=\hat{p}([Nv_i]+\chi_i)M=\hat{p}m_i$. In a word, there always holds $\hat{m}_i=\hat{p}m_i$, $\forall 1\le i\le q$.
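For a purely numerical illustration of the two cases above (toy numbers of ours), take $\hat{p}=4$ and $\epsilon=0.01$: if $\{Nv_i\}=0.002<\frac{\epsilon}{\hat{p}}$ (so $\chi_i=0$), then $\{\hat{p}Nv_i\}=\{4\cdot 0.002\}=0.008<\epsilon$ and $[\hat{p}\{Nv_i\}]=0$; if instead $\{Nv_i\}=0.998$, i.e. $1-\{Nv_i\}=0.002<\frac{\epsilon}{\hat{p}}$ (so $\chi_i=1$), then $\hat{p}\{Nv_i\}=3.992$, hence $\{\hat{p}Nv_i\}=0.992$ with $1-0.992=0.008<\epsilon$ and $[\hat{p}\{Nv_i\}]=3=\hat{p}-1$, exactly as used in ([\[3.33\]](#3.33){reference-type="ref" reference="3.33"}).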
Finally, the proofs of the identities ([\[3.25\]](#3.25){reference-type="ref" reference="3.25"})-([\[3.28\]](#3.28){reference-type="ref" reference="3.28"}) are completely the same as those of ([\[3.16\]](#3.16){reference-type="ref" reference="3.16"})-([\[3.19\]](#3.19){reference-type="ref" reference="3.19"}) in Theorem 3.6, so we omit the details. $\square$
# Proof of Theorem 1.1
In this section, let $F$ be any bumpy Finsler metric on $S^3$, i.e., all closed geodesics (including iterations) on $(S^3,F)$ are non-degenerate. To prove Theorem 1.1, we first assume that the Morse index $i(c)\ge 1$ for any prime closed geodesic $c$ on $(S^3,F)$.
On the one hand, Theorem 1.4 in [@DLW] showed that if every prime closed geodesic $c$ on every bumpy $S^3$ satisfies $i(c)\ge 2$, then there exist at least four prime closed geodesics. On the other hand, Remark 3.7 in [@DuL2] showed that if there exist exactly two prime closed geodesics $c_1$ and $c_2$ on a bumpy Finsler $S^3$, then only two cases can occur: (i) $i(c_1)=0$ and $i(c_2)=1$; (ii) $i(c_k)\ge 2$ for $k=1,2$.
Thus, it follows from Remark 3.7 in [@DuL2] and the assumption of nonzero Morse index that there exist at least 3 prime closed geodesics on $S^3$. Therefore, in order to prove Theorem 1.1, by the above assumption and Theorem 1.4 of [@DLW], we only need to consider the case when there exist exactly 3 prime closed geodesics $c_k$ with $k=1,2,3$ on $(S^3,F)$ and at least one prime closed geodesic among them has Morse index $1$. We then derive a contradiction.
Furthermore, by Propositions 2.1, 2.3 and 2.4, among the three prime closed geodesics there exists at least one with even Morse index such that these closed geodesics can generate non-trivial Morse-type numbers $M_{2j}\ge b_{2j}\ge 1$ for any $j\ge 1$. In fact, under the assumption of the existence of exactly three prime closed geodesics, it can further be shown that there exist exactly two prime closed geodesics with even Morse indices (cf. Lemma 4.2 and the Assumption below).
It is well-known that the Morse index sequence $\{i(c^m)\}$ either tends to $+\infty$ asymptotically linearly or $i(c^m)=0$ for all $m\ge1$ (cf. Proposition 1.3 of [@Bot]). Therefore $\hat{i}(c)=0$ if and only if $i(c^m)=0$ for all $m\ge1$. Since $i(c_k)\ge 1$ for $k=1,2,3$ by the above assumption, the mean index satisfies $\hat{i}(c_k)>0$. Hence $\varrho_k=1$ in Theorem 3.6 by the definition ([\[3.20\]](#3.20){reference-type="ref" reference="3.20"}), and $i(c_k^m)\rightarrow +\infty$ as $m\rightarrow +\infty$. Consequently, the positive integer $\overline{m}$ defined by $$\begin{aligned}
\overline{m}=\max\limits_{1\leq k\leq q}\min\{m_0\in{\bf N}\,|\,i(c_{k}^{m+m_0})\geq i(c_{k})+4,\ \forall\, m\ge 1\}\label{4.1}\end{aligned}$$ is well defined and finite.
**Lemma 4.1.** *With the above $\overline{m}$, Theorem 3.6 yields $(q+1)$-tuples $(N, m_1,\cdots,m_q) \in {\bf N}^{q+1}$ such that the following inequalities hold* $$\begin{aligned}
i(c_{k}^{2m_{k}-m})&\leq& 2N-i(c_{k}),\quad\forall\ 1\leq m< 2m_{k}, \label{4.2}\\
i(c_{k}^{2m_{k}+m})&\geq& 2N+i(c_{k}),\quad \forall\ m\geq 1.\label{4.3}\end{aligned}$$
**Proof.** Note that there always holds $S^+_{M_i}(1)=Q_i(m)=0$ by the non-degeneracy of $F$ and Lemma 3.1, and $i(c^m)\ge i(c)\ge 1$ for any $m\ge 1$ by the Bott formulae (cf. Theorem 9.2.1 of [@Lon3]). So it follows from ([\[3.17\]](#3.17){reference-type="ref" reference="3.17"}) and ([\[3.18\]](#3.18){reference-type="ref" reference="3.18"}) of Theorem 3.6 that the above inequalities hold for $m\leq \overline{m}$.
Now assume $m\geq \overline{m}+1$. Similar to the proof of Theorem 3.6 (cf. equalities (3.33) and (3.36) in [@DLW] for more details), by Theorem 3.3 and (3.12), we have $$\begin{aligned}
i(c_{k}^{2m_{k}-m})&=&-i(c_{k}^{m})+2N+2\Delta_{k}+2\sum\limits_{\theta}\left(E\left(\frac{2m_{k}-m}{2\pi}\theta\right)+\right.\nonumber\\ &&\qquad\left. E\left(\frac{m}{2\pi}\theta\right)-E\left(\frac{m_{k}}{\pi}\theta\right)\right)S_{M_{k}}^{-}(e^{\sqrt{-1}\theta})-2C(M_{k}),\label{4.4}\\
i(c_{k}^{2m_{k}+m})&=&i(c_{k}^{m})+2N+2\Delta_{k}+2\sum\limits_{\theta}\left(E\left(\frac{2m_{k}+m}{2\pi}\theta\right)-\right.\nonumber\\ &&\qquad\qquad \left. E\left(\frac{m}{2\pi}\theta\right)-E\left(\frac{m_{k}}{\pi}\theta\right)\right)S_{M_{k}}^{-}(e^{\sqrt{-1}\theta}).\label{4.5}\end{aligned}$$
Note that $E(a)+E(b)-E(a+b)\leq 1$. Then by the definition of $\overline{m}$, there holds $$i(c_{k}^{2m_{k}-m})\leq -i(c_{k}^{m})+2N+2\Delta_{k}\leq -i(c_{k})-4+2N+2\Delta_{k}\leq 2N-i(c_{k}),$$ $$i(c_{k}^{2m_{k}+m})\geq i(c_{k}^{m})+2N+2\Delta_{k}-2C(M_{k})\geq i(c_{k})+4+2N+2\Delta_{k}-2C(M_{k})\geq 2N+i(c_{k}),$$ where we use the fact $0\le\Delta_{k}\le C(M_k)\le 2$ in the case of $S^3$. $\square$
**Lemma 4.2.** *It is impossible that two of $c_{1},c_{2},c_{3}$ have odd indices and one of them has an even index.*
**Proof.** Without loss of generality, suppose that $c_{1},c_{2}$ have odd indices and $c_{3}$ has an even index, then by Lemma 4.1, there exist positive integers $N$ and $m_{3}$ such that $$\begin{aligned}
i(c_{3}^{2m_{3}-m})&\le& 2N-i(c_{3})\le 2N-1,\quad \forall\ 1\leq m< 2m_{3}, \label{4.8}\\
i(c_{3}^{2m_{3}+m})&\ge& 2N+i(c_{3})\ge 2N+1,\quad \forall\ m\geq 1.\label{4.9}\end{aligned}$$ Then by Proposition 2.1, $c_{1}$ and $c_{2}$ have no contribution to $M_{2N}$ while $c_{3}$ contributes at most 1 to $M_{2N}$. Hence there holds $M_{2N}\leq 1<b_{2N}$, which contradicts Proposition 2.4. $\square$
According to the above arguments and Lemma 4.2, we make the following assumption and will get a contradiction to complete the proof of Theorem 1.1.
**Assumption.** *There exist exactly 3 prime closed geodesics $c_{1},c_{2},c_{3}$ on bumpy Finsler 3-sphere $(S^3,F)$ with $i(c_{1})=1, i(c_{2})\in2{\bf N}, i(c_{3})\in2{\bf N}$.*
Now we use Lemma 4.1 to get the following two crucial inequalities.
**Lemma 4.3.** *Under the assumption, with the above $\overline{m}$, Theorem 3.6 yields $(q+1)$-tuples $(N, m_1,\cdots,m_q) \in {\bf N}^{q+1}$ such that the following inequalities hold $$\begin{aligned}
\sum_{k=1}^{3}2m_{k}\gamma_{c_k}-(2N)_{+}^{e}+(2N)_{+}^{o}&\ge& 2N-1, \label{4.10}\\
\sum_{k=1}^{3}2m_{k}\gamma_{c_k}-(2N-1)_{+}^{e}+(2N-1)_{+}^{o}&\le& 2N-3, \label{4.11}\end{aligned}$$ where the counting notations are defined by $n_{+}^{e}=\#\{k\ |\ i(c_{k}^{2m_{k}})>n,i(c_{k}^{2m_{k}})\equiv i(c_{k})\equiv 0\left(\rm mod 2\it\right)\}$ and $n_{+}^{o}=\#\{k\ |\ i(c_{k}^{2m_{k}})>n,i(c_{k}^{2m_{k}})\equiv i(c_{k})\equiv 1\left(\rm mod 2\it\right)\}$.*
**Proof.** It follows from Proposition 2.1, Definition 2.2 and Theorem 3.3 that $$\begin{aligned}
\sum_{m=1}^{2m_k} (-1)^{i(c_k^m)} \dim \overline{C}_{i(c_k^{m})}(E,c_k^m)
&=& \sum_{i=0}^{m_k-1} \sum_{m=2i+1}^{2i+2} (-1)^{i(c_k^m)} \dim \overline{C}_{i(c_k^{m})}(E,c_k^m) \nonumber\\
&=& \sum_{i=0}^{m_k-1} \sum_{m=1}^{2} (-1)^{i(c_k^m)} \dim \overline{C}_{i(c_k^{m})}(E,c_k^m) \nonumber\\
&=& m_k \sum_{m=1}^{2} (-1)^{i(c_k^m)} \dim \overline{C}_{i(c_k^{m})}(E,c_k^m) \nonumber\\
&=& 2m_k{\gamma}_{c_k},\qquad \forall\ 1\le k\le 3, \label{4.12}\end{aligned}$$ where the second equality follows from Proposition 2.1 and the fact $i(c_k^{m+2})-i(c_k^m)\in 2{\bf Z}$ for all $m\ge 1$ from Theorem 3.3, and the last equality follows from Proposition 2.1 and Definition 2.2.
By Lemma 4.1 and Proposition 2.1, the sum of the left side of ([\[4.12\]](#4.12){reference-type="ref" reference="4.12"}) with respect to $k$ is $$\begin{aligned}
\sum_{k=1}^{3}\sum_{m=1}^{2m_{k}}(-1)^{i(c_{k}^{m})}\dim\overline{C}_{i(c_k^{m})}(E,c_{k}^{m})&=&\sum_{i=1}^{2N}(-1)^{i}M_{i}+(2N)_{+}^{e}-(2N)_{+}^{o},\label{4.13}\\
\sum_{k=1}^{3}\sum_{m=1}^{2m_{k}}(-1)^{i(c_{k}^{m})}\dim\overline{C}_{i(c_k^{m})}(E,c_{k}^{m})&=&\sum_{i=1}^{2N-1}(-1)^{i}M_{i}+(2N-1)_{+}^{e}-(2N-1)_{+}^{o}.\label{4.14}\end{aligned}$$
On the other hand, Propositions 2.3 and 2.4 give the following inequalities $$\begin{aligned}
\sum_{i=1}^{2N}(-1)^{i}M_{i}\ge \sum_{i=1}^{2N}(-1)^{i}b_{i}=2N-1,\qquad
\sum_{i=1}^{2N-1}(-1)^{i}M_{i}\le \sum_{i=1}^{2N-1}(-1)^{i}b_{i}=2N-3.\label{4.15}\end{aligned}$$ Now the inequalities ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}) and ([\[4.11\]](#4.11){reference-type="ref" reference="4.11"}) follow from ([\[4.12\]](#4.12){reference-type="ref" reference="4.12"})-([\[4.15\]](#4.15){reference-type="ref" reference="4.15"}). $\square$
Again by Lemma 4.1 and Proposition 2.1, $c_{2}^{2m_{2}}$ and $c_{3}^{2m_{3}}$ each contribute at most 1 to $M_{2N}$. Note that Propositions 2.3 and 2.4 yield $M_{2N}\geq b_{2N}=2$. So $c_{2}^{2m_{2}}$ and $c_{3}^{2m_{3}}$ must each contribute exactly 1 to $M_{2N}$; that is, there must hold $$\begin{aligned}
i(c_{2}^{2m_{2}})=2N,\qquad i(c_{3}^{2m_{3}})=2N. \label{4.16}\end{aligned}$$
Hence, by the definitions of $n_{+}^{e}$ and $n_{+}^{o}$ we have $(2N)_{+}^{e}=0$, $(2N-1)_{+}^{e}=2$, $(2N)_{+}^{o}\le 1$ and $0\le (2N-1)_{+}^{o}$. It then follows from ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}) and ([\[4.11\]](#4.11){reference-type="ref" reference="4.11"}) of Lemma 4.3 that $$\begin{aligned}
2N-2&\leq&2N-1-(2N)_{+}^{o}\nonumber\\
&\leq&\sum_{k=1}^{3}2m_{k}\gamma_{c_k}\nonumber\\
&\leq& 2N-1-(2N-1)_{+}^{o}\nonumber\\
&\leq&2N-1.\label{4.17}\end{aligned}$$
Fix a small enough $\epsilon$ in (3.24) and suppose $N$ satisfies (3.24) for $\frac{\epsilon}{4}$ (i.e., take $\hat{p}=4$ there). Then by Theorem 3.8, the integers $\hat{N}=4N$ and $\hat{m}_{k}=\left(\left[\frac{\hat{N}}{M\hat{i}(c_k)}\right]+\chi_{k}\right)M=4m_k, k=1,\cdots,q$ yield the $(q+1)$-tuple $(\hat{N},\hat{m}_1,\cdots,\hat{m}_q) \in {\bf N}^{q+1}$ such that Theorem 3.4 and ([\[3.25\]](#3.25){reference-type="ref" reference="3.25"})-([\[3.28\]](#3.28){reference-type="ref" reference="3.28"}) in Theorem 3.8 hold. Then, by arguments similar to those above, it can be shown that Lemma 4.1, Lemma 4.3 and ([\[4.17\]](#4.17){reference-type="ref" reference="4.17"}) also hold with $N$ replaced by $4N$. That is to say, we have $$\begin{aligned}
8N-2\leq \sum_{k=1}^{3}2\hat{m}_{k}\gamma_{c_k}=4\sum_{k=1}^{3}2 m_{k}\gamma_{c_k}\leq 8N-1,\label{4.18}\end{aligned}$$ where we use the fact $\hat{m}_{k}=4m_{k}$ in ([\[3.30\]](#3.30){reference-type="ref" reference="3.30"}) of Theorem 3.8. Then the integer in the middle is a multiple of $4$ and lies between $8N-2$ and $8N-1$. Clearly this is a contradiction.
Therefore the Assumption is not true and the proof of Theorem 1.1 is finished. $\square$
**Acknowledgements** The first author would like to thank Professor Yiming Long for his valuable communications and constant encouragement.
D. V. Anosov, Geodesics in Finsler geometry. *Proc. I.C.M.* (Vancouver, B.C. 1974), Vol. 2. 293-297 Montreal (1975).

V. Bangert, On the existence of closed geodesics on two-spheres. *Inter. J. Math.* 4, 1-10 (1993).

V. Bangert and Y. Long, The existence of two closed geodesics on every Finsler 2-sphere. *Math. Ann.* 346, 335-366 (2010).

V. Bangert, W. Klingenberg, Homology generated by iterated closed geodesics. *Topology*. 22, 379-388 (1983).

R. Bott, On the iteration of closed geodesics and the Sturm intersection theory. *Commun. Pure Appl. Math.* 9, 171-206 (1956).

K. Burns, V. Matveev, Open problems and questions about geodesics. *Ergod. Th. & Dynam. Sys.* 41, 641-684 (2021).

K. C. Chang, Infinite Dimensional Morse Theory and Multiple Solution Problems. Birkhäuser, Boston (1993).

H. Duan and Y. Long, Multiple closed geodesics on bumpy Finsler $n$-spheres. *J. Diff. Equa.* 233, 221-240 (2007).

H. Duan and Y. Long, Multiplicity and stability of closed geodesics on bumpy Finsler $3$-spheres. *Cal. Variations and PDEs*. 31, 483-496 (2008).

H. Duan, Y. Long and W. Wang, The enhanced common index jump theorem for symplectic paths and non-hyperbolic closed geodesics on Finsler manifolds. *Calc. Var. and PDEs.* 55, Art. 145, 28 pp (2016).

H. Duan, H. Liu, Y. Long and W. Wang, Generalized common index jump theorem with applications to closed characteristics on star-shaped hypersurfaces and beyond. http://arxiv.org/abs/2205.07082, preprint (2022).

N. Hingston, Equivariant Morse theory and closed geodesics. *J. Diff. Geom.* 19, 85-116 (1984).

N. Hingston, On the growth of the number of closed geodesics on the two-sphere. *Inter. Math. Research Notices.* 9, 253-262 (1993).

J. Franks, Geodesics on $S^2$ and periodic points of annulus diffeomorphisms. *Invent. Math.* 108, 403-418 (1992).

A. B. Katok, Ergodic properties of degenerate integrable Hamiltonian systems. *Izv. Akad. Nauk SSSR.* 37 (1973) (Russian), *Math. USSR-Isv.* 7, 535-571 (1973).

W. Klingenberg, Lectures on Closed Geodesics. Springer, Berlin (1978).

C. Liu, The relation of the Morse index of closed geodesics with the Maslov-type index of symplectic paths. *Acta Math. Sinica.* 21, 237-248 (2005).

Y. Long, Bott formula of the Maslov-type index theory. *Pacific J. Math.* 187, 113-149 (1999).

Y. Long, Precise iteration formulae of the Maslov-type index theory and ellipticity of closed characteristics. *Adv. Math.* 154, 76-131 (2000).

Y. Long, Index Theory for Symplectic Paths with Applications. Progress in Math. 207, Birkhäuser (2002).

Y. Long, Multiplicity and stability of closed geodesics on Finsler 2-spheres. *J. Euro. Math. Soc.* 8, 341-353 (2006).

Y. Long, C. Zhu, Closed characteristics on compact convex hypersurfaces in ${\bf R}^{2n}$. *Ann. of Math.* 155, 317-368 (2002).

H.-B. Rademacher, On the average indices of closed geodesics. *J. Diff. Geom.* 29, 65-83 (1989).

H.-B. Rademacher, Morse Theorie und geschlossene Geodatische. *Bonner Math. Schriften* Nr. 229 (1992).

H.-B. Rademacher, The second closed geodesic on Finsler spheres of dimension $n>2$. *Trans. Amer. Math. Soc.* 362, 1413-1421 (2010).

W. Wang, Closed geodesics on positively curved Finsler spheres. *Adv. Math.* 218, 1566-1603 (2008).

W. Wang, On a conjecture of Anosov. *Adv. Math.* 230, 1597-1617 (2012).

W. Wang, On the average indices of closed geodesics on positively curved Finsler spheres. *Math. Ann.* 355, 1049-1065 (2013).

W. Ziller, Geometry of the Katok examples. *Ergod. Th. & Dynam. Sys.* 3, 135-157 (1982).
[^1]: Partially supported by National Key R&D Program of China (No. 2020YFA0713300) and NNSFC (Nos. 12271268, 11671215 and 11790271)). E-mail: duanhg\@nankai.edu.cn.
[^2]: Partially supported by NNSFC (No. 12271268). E-mail: 2120210039\@mail.nankai.edu.cn
| arxiv_math | {
"id": "2309.00211",
"title": "Multiple closed geodesics on Finsler $3$-dimensional sphere",
"authors": "Huagui Duan, Zihao Qi",
"categories": "math.SG math.DG math.DS",
"license": "http://creativecommons.org/publicdomain/zero/1.0/"
} |
---
abstract: |
It is a well known result due to Korshunov and Sapozhenko that the hypercube in $n$ dimensions has $(1 + o(1)) \cdot 2 \sqrt e \cdot 2^{2^{n-1}}$ independent sets. Jenssen and Keevash investigated in depth Cartesian powers of cycles of fixed even lengths far beyond counting independent sets. They wonder to which extent their results extend to cycles of odd length, where not even the easiest case, counting independent sets in Cartesian powers of the triangle, is known. In this paper, we make progress on their question by providing a lower bound, which we believe to be tight. We also obtain a less precise lower bound for the number of independent sets in Cartesian powers of arbitrary odd cycles and show how to approach this question both with the cluster expansion method as well as more directly with isoperimetric inequalities.
address: Universität Heidelberg, Institut für Informatik, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany
author:
- Patrick Arras and Felix Joos
bibliography:
- indepsets.bib
title: |
Independent sets\
in discrete tori of odd sidelength
---
# Introduction
The hypercube $\mathbb{Z}_2^n$ is arguably among the most well-investigated graphs because it is one of the very few explicitly constructable graphs that are (very) sparse. In this paper, we focus on counting independent sets. Korshunov and Sapozhenko [@KS83] showed that there are $(1+o(1)) \cdot 2 \sqrt e \cdot 2^{2^{n-1}}$ independent sets in the hypercube. Observe that hypercubes are bipartite graphs where each partition class has $2^{n-1}$ vertices. Hence there are $2\cdot 2^{2^{n-1}}-1$ independent sets that are a subset of one of the partition classes, which already reveals the majority of all independent sets. Roughly a $(\sqrt e -1)/\sqrt e$-fraction of all independent sets contain vertices from both partition classes. Among those, essentially all contain only very few vertices from one class and many from the other. This is due to the fact that selecting some vertices in one class excludes many more vertices (the neighbours of these vertices) in the other class from being in the independent set. This structural fact plays a dominant role in essentially all considerations regarding the number of independent sets or colourings in Cartesian powers of graphs. Estimating the number of neighbours of sets of vertices is another prominent topic on its own and is captured under the umbrella of *vertex-isoperimetric inequalities*.
The problem of calculating the number of independent sets in $\mathbb{Z}_2^n$ has been revisited and extended by many researchers [@BGL21; @EG12; @Gal11; @JP20; @JPP22; @KP22; @Par22]; in particular, there are also results regarding the number of proper $q$-colourings in $\mathbb{Z}_2^n$ [@Gal03; @KP20]. Most recently, Jenssen and Keevash [@JK20] investigated the topic in great depth. Instead of the hypercube only, they consider Cartesian powers of even cycles, where they treat the complete graph on two vertices as the cycle on two vertices. Then their results contain the hypercube. These graphs are usually known as $n$-dimensional discrete tori, which we denote as $\mathbb{Z}_m^n$ (if the base graph is a cycle of length $m$). Their results include a way to calculate asymptotically sharp formulas for both the number of independent sets and the number of proper $q$-colourings in tori $\mathbb{Z}_m^n$ with $m$ even. To this end, they utilize the cluster expansion approach from statistical physics. This method is well-established in the field and has been exploited to answer many similar questions, most recently in [@CDF+22; @JPP23].
Jenssen and Keevash [@JK20] ask whether their results extend to tori that stem from cycles of odd length. They however note that not even the number of independent sets is asymptotically known for Cartesian powers of a triangle. Here we make progress in answering their question and also point out why these cases may be much more complex than the previously investigated ones.
**Theorem 1**. *There are at least $(1-o(1)) \cdot 3 \cdot 2^{n-1} \cdot 2^{3^{n-1}} \cdot \exp( (3/2)^{n-1} )$ independent sets in $\mathbb{Z}_3^n$.*
The bound in [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"} deserves some explanation. There are $3 \cdot 2^{n-1}$ maximum independent sets (this is nontrivial; see [Lemma 8](#lem: no further maxindepsets){reference-type="ref" reference="lem: no further maxindepsets"}) of order $3^{n-1}$. Hence there are about $3 \cdot 2^{n-1} \cdot 2^{3^{n-1}}$ subsets of these sets (again, this is nontrivial, because these sets overlap; see [Lemma 9](#lem: intersection maxindepsets){reference-type="ref" reference="lem: intersection maxindepsets"}). For the hypercube, the subsets of the two maximum independent sets give the correct count up to a factor of $\sqrt{e}$. For $\mathbb{Z}_3^n$, there is a correction term of (at least) $\exp( (3/2)^{n-1})$; that is, for $\mathbb{Z}_3^n$ only a doubly exponentially small fraction of the independent sets are a subset of some maximum independent set. We conjecture that the bound in [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"} is asymptotically tight.
The method we use for the proof of [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"} does not work for odd $m \geqslant 5$. Nonetheless, we are able to determine the number of maximum independent sets in $\mathbb{Z}_m^n$ for all odd $m$. Additionally, we provide an alternative approach that yields the following lower bound.
**Theorem 2**. *There are at least $2^{\lfloor m/2 \rfloor m^{n-1}} \cdot \exp( (1-o(1)) (m/2)^{n-1} )$ independent sets in $\mathbb{Z}_m^n$ for $m \geqslant 3$ odd.*
In order to arrive at a sensible conjecture of how higher order terms in this asymptotic behaviour might look like, we also adapt the cluster expansion approach to our setting and calculate some initial terms.
As indicated already before, a key tool in determining the number of independent sets in discrete tori are appropriate isoperimetric inequalities for independent sets. We prove the following inequality, which appears to us to be of general interest. For simplicity, consider the tori $\mathbb{Z}_3^n$. As it turns out, these graphs are $3$-colourable and each colour class of each $3$-colouring is of size $3^{n-1}$. Denote the three colour classes (of some 3-colouring) by $\mathcal{E}(0), \mathcal{E}(1), \mathcal{E}(2)$. We consider $A\subseteq \mathcal{E}(0)$ and give a lower bound for the number of neighbours of $A$ in $\mathcal{E}(1)$ (or $\mathcal{E}(2)$). Similar considerations can also be made for $\mathbb{Z}_m^n$ with $m \geqslant 5$ odd.
To this end, define $\mathcal{E}_m^n(p)$ as the set of all vertices $v = (v_1, \ldots, v_n)$ in $\mathbb{Z}_m^n$ that satisfy $\sum_{i = 1}^n v_i \equiv p$ mod $m$. Our isoperimetric inequalities then take the following form.
**Theorem 3**. *For $m, n \in \mathbb{N}$ with $m \geqslant 3$ odd and $\ell \coloneqq \lfloor m/2 \rfloor$, consider $p \in \mathbb{Z}_m$ and $A \subseteq\mathcal{E}_m^n(p)$. Setting $\alpha \coloneqq \abs A / m^{n-1}$, we then have $$\abs{N_n(A, \mathcal{E}_m^n(p \pm 1))}
\geqslant\abs A \left( 1 + \frac {1-\alpha}{\sqrt{\ell^3mn}} \right)
\,.$$*
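Although the proof of [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"} is deferred to Section [5](#sec:isoperimetry){reference-type="ref" reference="sec:isoperimetry"}, the inequality can be checked by brute force for very small parameters. The following minimal Python sketch (ours, purely a sanity check and not part of any argument) verifies the stated bound for every nonempty $A \subseteq\mathcal{E}_m^n(0)$ with $(m,n) \in \{(3,2), (3,3), (5,2)\}$; by symmetry it only treats the neighbourhood in $\mathcal{E}_m^n(1)$.

```python
import itertools

# Brute-force sanity check of the isoperimetric inequality for tiny tori:
# for every nonempty A in E_m^n(0), compare |N(A, E_m^n(1))| with the bound
# |A| * (1 + (1 - alpha) / sqrt(ell^3 * m * n)).  Only feasible while
# |E_m^n(0)| = m^(n-1) is very small.
def check(m, n):
    ell = m // 2
    E0 = [v for v in itertools.product(range(m), repeat=n) if sum(v) % m == 0]
    E1 = {v for v in itertools.product(range(m), repeat=n) if sum(v) % m == 1}

    def neighbours(v):
        for j in range(n):
            for d in (1, -1):
                w = list(v); w[j] = (w[j] + d) % m
                yield tuple(w)

    for r in range(1, len(E0) + 1):
        for A in itertools.combinations(E0, r):
            alpha = r / m ** (n - 1)
            boundary = {w for v in A for w in neighbours(v) if w in E1}
            bound = r * (1 + (1 - alpha) / (ell ** 3 * m * n) ** 0.5)
            assert len(boundary) >= bound - 1e-9, (m, n, A)

for m, n in [(3, 2), (3, 3), (5, 2)]:
    check(m, n)
print("isoperimetric bound verified on these small instances")
```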
We remark that our inequalities are of the same type as needed for bipartite tori. Unfortunately, for $\mathbb{Z}_3^n$ it seems crucial to prove an appropriate lower bound for the number of neighbours in $\mathcal{E}(2)$ of an independent set $A\subseteq \mathcal{E}(0) \cup \mathcal{E}(1)$. Here, it is important that $A$ is independent in $\mathbb{Z}_3^n$, and incorporating this condition appears to us as a significant complication. Finding such an inequality would be very desirable.
The paper is structured as follows. We first fix some further notation in Section [2](#sec:nota){reference-type="ref" reference="sec:nota"}. In Section [3](#sec:lower-bounds){reference-type="ref" reference="sec:lower-bounds"} we prove [\[thm: lowerboundK3,thm: lowerboundKm\]](#thm: lowerboundK3,thm: lowerboundKm){reference-type="ref" reference="thm: lowerboundK3,thm: lowerboundKm"}. The more general approach via cluster expansion is introduced in Section [4](#sec:clusterexpansion){reference-type="ref" reference="sec:clusterexpansion"}, and Section [5](#sec:isoperimetry){reference-type="ref" reference="sec:isoperimetry"} deals with the proof of [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"}.
# Preliminaries {#sec:nota}
Let $\mathcal{I}(G)$ denote the set of all *independent* sets in the graph $G$, that is all $I \subseteq V(G)$ such that $E(G[I]) = \emptyset$. For $k \in \mathbb{N}_0$, we also define $\mathcal{I}_k(G) \coloneqq \{ I \in \mathcal{I}(G) \colon \abs I = k \}$. Moreover, let $\mathcal{I}^*(G) \coloneqq \{ I \in \mathcal{I}(G) \colon \abs I \geqslant\abs {I'} \text{ for all } I' \in \mathcal{I}(G) \}$ be the set of all *maximum* independent sets in $G$. For a vertex set $X \subseteq V(G)$, we write $N_G(X) \coloneqq \bigcup_{x \in X} N_G(x) \setminus X$ for the *neighbourhood* of $X$. If $Y \subseteq V(G)$ is another set, then we denote the neighbourhood of $X$ in $Y$ as $N_G(X, Y) \coloneqq N_G(X) \cap Y$.
Let $[k] \coloneqq \{ 1, \ldots, k \}$. We are interested in $n$-dimensional discrete tori $\mathbb{Z}_m^n$ with odd sidelength $m$. These graphs can be defined by $V(\mathbb{Z}_m^n) \coloneqq \{ 0, 1, \ldots, m-1 \}^n$ and $$\begin{aligned}
E(\mathbb{Z}_m^n) \coloneqq
\{ uv \mid \text{there is } j \in [n] \text{ such that } u_j &= v_j \pm 1 \text{ mod } m \\
\text{ and } u_i &= v_i \text{ for all } i \in [n] \setminus \{ j \} \}
\,.\end{aligned}$$
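Before we proceed, we note that for very small parameters the quantities we study can be checked by brute force. The following minimal Python sketch (ours, intended only as a sanity check; it is hopeless once $m^n$ exceeds a dozen or so vertices) counts all independent sets of $\mathbb{Z}_m^n$ as well as the maximum ones. For instance, $\mathbb{Z}_3^2$ is the $3 \times 3$ rook's graph, which has $34$ independent sets, exactly $6 = 3 \cdot 2^{n-1}$ of which are maximum.

```python
import itertools

# Brute-force count of independent sets in the discrete torus Z_m^n
# (feasible only for very small m**n).
def independent_set_counts(m, n):
    V = list(itertools.product(range(m), repeat=n))
    adj = {v: set() for v in V}
    for v in V:
        for j in range(n):
            for d in (1, -1):
                w = list(v); w[j] = (w[j] + d) % m
                adj[v].add(tuple(w))
    total, best, num_best = 1, 0, 1          # the empty set is independent
    for r in range(1, len(V) + 1):
        for S in itertools.combinations(V, r):
            chosen = set(S)
            if all(adj[v].isdisjoint(chosen) for v in S):
                total += 1
                if r > best:
                    best, num_best = r, 1
                elif r == best:
                    num_best += 1
    return total, best, num_best

print(independent_set_counts(3, 1))   # (4, 1, 3):  the triangle
print(independent_set_counts(5, 1))   # (11, 2, 5): the 5-cycle
print(independent_set_counts(3, 2))   # (34, 3, 6): the 3x3 rook's graph
```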
All our arguments will eventually examine the limit as $n \to \infty$. For the sake of brevity, we utilize the Landau notation $o(h(n))$ to denote any function $j(n)$ satisfying $j(n) / h(n) \to 0$ as $n \to \infty$. This is mainly used in statements of the form $f(n) \geqslant(1 - o(1)) g(n)$ to express that $g$ is an asymptotic lower bound for $f$ or as $f(n) = (1 \pm o(1)) g(n)$ to say that both $f(n) \geqslant(1 - o(1)) g(n)$ and $f(n) \leqslant(1 + o(1)) g(n)$ hold. Similarly, we write $O(h(n))$ to denote any $j(n)$ with $\limsup_{n \to \infty} j(n) / h(n) < \infty$.
We will also make use of a standard probability tool that, depending on the context, is known as either the FKG inequality [@FKG71] or the Harris inequality [@Har60]. In its most basic form, it asserts the following.
**Lemma 4**. *Let $k \in \mathbb{N}$ and consider the probability space $(\{ 0, 1 \}^k, \mathcal F, \mathbb{P})$, where $\mathbb{P}$ is a product measure. Define the partial order $\leqslant$ on $\{ 0, 1 \}^k$ by $x' \leqslant x$ if $x'_i \leqslant x_i$ for all $i \in [k]$. An event $A \in \mathcal F$ is called *decreasing* if $x' \leqslant x$ and $x \in A$ imply $x' \in A$. Then for any two decreasing events $A, B \in \mathcal F$, we have $\mathbb{P}[A \cap B] \geqslant\mathbb{P}[A] \mathbb{P}[B]$.*
# Lower bounds {#sec:lower-bounds}
In this section, we prove the asymptotic lower bounds on $\abs {\mathcal{I}(\mathbb{Z}_m^n)}$ in [\[thm: lowerboundK3,thm: lowerboundKm\]](#thm: lowerboundK3,thm: lowerboundKm){reference-type="ref" reference="thm: lowerboundK3,thm: lowerboundKm"}, where $\mathcal{I}(G)$ refers to the set of independent sets in a graph $G$. We begin with the more detailed estimate for $\abs {\mathcal{I}(\mathbb{Z}_3^n)}$ in [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"}. The principal idea behind our argument is a modification of the simplified approach used by Sapozhenko [@Sap89] for proving the lower bound of $\abs {\mathcal{I}(\mathbb{Z}_2^n)} \geqslant(1-o(1)) \cdot 2 \sqrt e \cdot 2^{2^{n-1}}$ in the hypercube, as presented by Galvin [@Gal19] in his expository note. Recall that the hypercube possesses a unique bipartition into two maximum independent sets. Now choose one of them as the *majority side* $I$ of the independent set to be constructed and select a relatively small independent set $A$ of $k$ *defect vertices* on the *minority side* $\overline I$. Then combine $A$ with a relatively large set $B \subseteq I \setminus N_{\mathbb{Z}_2^n}(A, I)$ to form an independent set $A \cup B \in \mathcal{I}(\mathbb{Z}_2^n)$. Essentially, the argument then comes down to proving that this process produces all but a negligible fraction of independent sets in the hypercube.
We would like to follow a similar strategy for $\mathbb{Z}_m^n$ with $m$ odd. We again start by selecting a maximum independent set $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ as the majority side. For odd $m$, however, there are more than two choices: we observe in [\[lem: generate maxindepsets,lem: no further maxindepsets\]](#lem: generate maxindepsets,lem: no further maxindepsets){reference-type="ref" reference="lem: generate maxindepsets,lem: no further maxindepsets"} that all maximum independent sets in $\mathbb{Z}_m^n$ look the same, partitioning $V(\mathbb{Z}_m^n)$ into $m$ partition classes and selecting $\ell \coloneqq \lfloor m/2 \rfloor$ pairwise non-adjacent ones of these classes. This part is actually true for all odd $m$ and thus is stated in full generality.
In order to leave the maximum number of vertices in $I \setminus N_{\mathbb{Z}_m^n}(A, I)$ as potential members of $B$, we would like to select the $k$ defect vertices of $A$ in a way that minimizes $\abs {N_{\mathbb{Z}_m^n}(A, I)}$. One quickly finds that since $I$ comprises $\ell$ of the $m$ partition classes, the minority side $\overline I$ must contain two adjacent classes. As the vertices in these classes have the least neighbours in $I$, choosing $A$ as an independent set in the graph $H_n \subseteq\mathbb{Z}_m^n$ induced by these two classes is optimal.
Finally, this is again combined with a set $B \subseteq I \setminus N_{\mathbb{Z}_m^n}(A, I)$ to form an independent set $A \cup B \in \mathcal{I}(\mathbb{Z}_m^n)$. When calculating the exact numbers, our lower bound suggests that for $m = 3$, the number of defects $k$ is essentially Poisson-distributed with parameter $\lambda = (m/2)^{n-1}$. Additionally, prescribing a minimum size for $B$ guarantees that starting out with different maximum independent sets $I$ leads to different independent sets $A \cup B$, so no set in $\mathcal{I}(\mathbb{Z}_m^n)$ is counted multiple times. We then obtain the desired asymptotic lower bound by summing over a sufficiently large range of $k$ around $\lambda$.
As our first step, [\[lem: max size of maxindepset,lem: generate maxindepsets,lem: only disjoint if shifted,lem: no further maxindepsets\]](#lem: max size of maxindepset,lem: generate maxindepsets,lem: only disjoint if shifted,lem: no further maxindepsets){reference-type="ref" reference="lem: max size of maxindepset,lem: generate maxindepsets,lem: only disjoint if shifted,lem: no further maxindepsets"} examine the size, number, and structure of maximum independent sets in $\mathbb{Z}_m^n$ for $m$ odd.
**Lemma 5**. *For $m, n \in \mathbb{N}$ with $m \geqslant 2$ and $\ell \coloneqq \lfloor m/2 \rfloor$, every $I \in \mathcal{I}(\mathbb{Z}_m^n)$ satisfies $\abs I \leqslant m^{n-1} \ell$.*
*Proof.* We proceed by induction on $n$. For $n = 1$, the statement is trivial. Suppose that for some $n > 1$, every $I' \in \mathcal{I}(\mathbb{Z}_m^{n-1})$ satisfies $\abs {I'} \leqslant m^{n-2} \ell$, and let $I \in \mathcal{I}(\mathbb{Z}_m^n)$ be arbitrary. Partition $V(\mathbb{Z}_m^n)$ into $V_q \coloneqq \{ v \in V(\mathbb{Z}_m^n) \colon v_n = q \}$ for $q \in \mathbb{Z}_m$ and observe that each $\mathbb{Z}_m^n[V_q]$ is isomorphic to $\mathbb{Z}_m^{n-1}$. In particular, this partitions $I$ into $m$ subsets $I_q = I \cap V_q$, which inherit the independence of $I$ and are thus isomorphic to independent sets $I_q' \in \mathcal{I}(\mathbb{Z}_m^{n-1})$. By the induction hypothesis, these satisfy $\abs {I_q} = \abs {I_q'} \leqslant m^{n-2} \ell$ and as there are $m$ of them, we obtain $\abs I = \sum_{q \in \mathbb{Z}_m} \abs {I_q} \leqslant m^{n-1} \ell$. ◻
**Lemma 6**. *For $m, n \in \mathbb{N}$ with $m \geqslant 3$ odd, the map $\iota_n \colon \mathbb{Z}_m \times \{ \pm 1 \}^{n-1} \to \mathcal{I}^*(\mathbb{Z}_m^n)$ defined by $$\begin{aligned}
\iota_n(q, \varepsilon_1, \ldots, {}&\varepsilon_{n-1}) \\ \coloneqq \Bigg\lbrace &v \in V(\mathbb{Z}_m^n) \colon v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1} \in \{ q, q+2, \ldots, q+m-3 \} \mod m \Bigg\rbrace\end{aligned}$$ is well-defined and injective.*
*Proof.* We start by showing that $\iota_n$ is well-defined. For this, let $uv$ be an arbitrary edge in $\mathbb{Z}_m^n$. Note that this means that $u, v$ only differ in one coordinate, and only by 1 mod $m$. Therefore, the sums $u_1 + \sum_{i = 1}^{n-1} \varepsilon_i u_{i+1}$ and $v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1}$ only differ by 1 mod $m$ and as the values $q, q+2, \ldots, q+m-3$ have pairwise differences of at least 2 mod $m$, any $I \in \operatorname{im}(\iota_n)$ can contain at most one of $u, v$. Since $uv \in E(\mathbb{Z}_m^n)$ was arbitrary, this proves $\operatorname{im}(\iota_n) \subseteq\mathcal{I}(\mathbb{Z}_m^n)$.
To see that each set in $\operatorname{im}(\iota_n)$ is indeed a maximum independent set, partition $\mathbb{Z}_m^n$ into $V_w \coloneqq \{ v \in V(\mathbb{Z}_m^n) \colon (v_2, \ldots, v_n) = w \}$ for all $w \in V(\mathbb{Z}_m^{n-1})$. Observe that for any $(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \in \mathbb{Z}_m \times \{ \pm 1 \}^{n-1}$, all vertices $v \in V_w$ produce the same sum $\sum_{i = 1}^{n-1} \varepsilon_i v_{i+1}$, so $v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1}$ will cycle through all possible values $0, \ldots, m-1$ mod $m$ as $v$ cycles through $V_w$. Irrespective of $q$, exactly $\ell \coloneqq \lfloor m/2 \rfloor$ of these values belong to $\{ q, q+2, \ldots, q+m-3 \}$, which shows that $\abs {\iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \cap V_w} = \ell$ for all $w \in V(\mathbb{Z}_m^{n-1})$. It is now easy to see that $$\abs {\iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1})}
= \sum_{w \in V(\mathbb{Z}_m^{n-1})} \abs {\iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \cap V_w}
= m^{n-1} \ell$$ and $\iota(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \in \mathcal{I}^*(\mathbb{Z}_m^n)$ follows by [Lemma 5](#lem: max size of maxindepset){reference-type="ref" reference="lem: max size of maxindepset"}.
To see that $\iota_n$ is injective, let $(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \neq (q', \varepsilon'_1, \ldots, \varepsilon'_{n-1})$. If $q \neq q'$, then $q - q' \mod m \in \{ 1, \ldots, m-1 \}$. Without loss of generality, let $q - q' \mod m$ be odd, otherwise swap the roles of $(q, \varepsilon_1, \ldots, \varepsilon_{n-1})$ and $(q', \varepsilon'_1, \ldots, \varepsilon'_{n-1})$ and observe that $m$ being odd implies that $q' - q \equiv m - (q - q') \mod m \in \{ 1, \ldots, m-1 \}$ has a different parity than $q - q' \mod m$. Then $(q, 0, \ldots, 0) \in \iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1})$, but $(q, 0, \ldots, 0) \notin \iota_n(q', \varepsilon'_1, \ldots, \varepsilon'_{n-1})$, and so $\iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \neq \iota_n(q', \varepsilon'_1, \ldots, \varepsilon'_{n-1})$.
If $q = q'$, we may assume by symmetry that $\varepsilon_1 \neq \varepsilon'_1$. Consider the vertex $v \coloneqq (q-1, 1, 0, \ldots, 0) \in V(\mathbb{Z}_m^n)$. Then $v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1} = q-1 + \varepsilon_1$, while $v_1 + \sum_{i = 1}^{n-1} \varepsilon'_i v_{i+1} = q-1 + \varepsilon'_1$. As $\varepsilon_1 \neq \varepsilon'_1 \in \{ \pm 1 \}$, one of these values is $q \in \{ q, q+2, \ldots, q+m-3 \}$ and the other is $q-2 \notin \{ q, q+2, \ldots, q+m-3 \}$. Consequently $v$ belongs to exactly one of the sets $\iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1})$ and $\iota_n(q', \varepsilon'_1, \ldots, \varepsilon'_{n-1})$, which shows that $\iota_n(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \neq \iota_n(q', \varepsilon'_1, \ldots, \varepsilon'_{n-1})$. ◻
**Lemma 7**. *For $m, n \in \mathbb{N}$ with $m \geqslant 3$ odd, consider $x = (q, \varepsilon_1, \ldots, \varepsilon_{n-1}), x' = (q', \varepsilon'_1, \ldots, \varepsilon'_{n-1}) \in \mathbb{Z}_m \times (\pm 1)^{n-1}$. Then $\iota_n(x) \cap \iota_n(x') = \emptyset$ implies $q' \equiv q \pm 1 \mod m$ and $\varepsilon_j = \varepsilon_j'$ for all $j \in [n-1]$.*
*Proof.* Suppose $\iota_n(x) \cap \iota_n(x') = \emptyset$. Without loss of generality, assume that $q - q' \mod m \in \{ 0, \ldots, m-1 \}$ is even, otherwise swap the roles of $x$ and $x'$ and observe that $m$ being odd implies that $q' - q \mod m \in \{ 0, \ldots, m-1 \}$ has a different parity than $q - q' \mod m$. Now consider the vertex $u \coloneqq (q, 0, \ldots, 0) \in V(\mathbb{Z}_m^n)$. Since it obviously belongs to $\iota_n(x)$, it must not belong to $\iota_n(x')$ by assumption, so $u_1 + \sum_{i = 1}^{n-1} \varepsilon'_i u_{i+1} = q = q' + (q - q') \notin \{ q', q'+2, \ldots, q'+m-3 \}$ mod $m$. Subtracting $q'$, this excludes $0, 2, \ldots, m-3$ as possible values for the even number $q - q' \mod m \in \{ 0, \ldots, m-1 \}$, leaving only $q - q' \equiv m-1$ mod $m$ and thus, $q' \equiv q + 1 \mod m$.
Now if $\varepsilon_1 \neq \varepsilon'_1$, consider the vertex $v \coloneqq (q-1, \varepsilon_1, 0, \ldots, 0) \in V(\mathbb{Z}_m^n)$. Then $v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1} = q - 1 + \varepsilon_1^2 = q \in \{ q, q+2, \ldots, q+m-3 \}$ mod $m$ and $v_1 + \sum_{i = 1}^{n-1} \varepsilon'_i v_{i+1} = q - 1 - \varepsilon_1^2 = q - 2 \in \{ q+1, q+3, \ldots, q+m-2 \} = \{ q', q'+2, \ldots, q'+m-3 \}$ mod $m$. This means that $v \in \iota_n(x) \cap \iota_n(x')$. By symmetry, the same conclusion also holds if $\varepsilon_j \neq \varepsilon'_j$ for some $j \in [n-1]$. Hence $\iota_n(x) \cap \iota_n(x') = \emptyset$ implies that $\varepsilon_j = \varepsilon'_j$ for all $j \in [n-1]$. ◻
**Lemma 8**. *For $m, n \in \mathbb{N}$ with $m \geqslant 3$ odd, the map $\iota_n$ from [Lemma 6](#lem: generate maxindepsets){reference-type="ref" reference="lem: generate maxindepsets"} is a bijection.*
*Proof.* We again write $\ell \coloneqq \lfloor m/2 \rfloor$. It only remains to show that $\iota_n$ is surjective. Recall that while proving its well-definedness in [Lemma 6](#lem: generate maxindepsets){reference-type="ref" reference="lem: generate maxindepsets"}, we already showed that $\abs I = m^{n-1} \ell$ for every $I \in \operatorname{im}(\iota_n) \subseteq\mathcal{I}^*(\mathbb{Z}_m^n)$, so the upper bound on the size of independent sets in [Lemma 5](#lem: max size of maxindepset){reference-type="ref" reference="lem: max size of maxindepset"} is indeed attained for every $n$. We thus know that every $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ has $\abs I = m^{n-1} \ell$.
We proceed by induction on $n$. For $n = 1$, the statement is trivial. Suppose that for some $n > 1$, the function $\iota_{n-1}$ is a bijection, and let $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ be arbitrary. Partition $V(\mathbb{Z}_m^n)$ into $V_q \coloneqq \{ v \in V(\mathbb{Z}_m^n) \colon v_n = q \}$ for $q \in \mathbb{Z}_m$ and define the isomorphisms $\phi_q \colon \mathbb{Z}_m^n[V_q] \to \mathbb{Z}_m^{n-1} ,\, v \mapsto (v_1, \ldots, v_{n-1})$. We denote the independent set $\phi_q(I \cap V_q)$ as $I_q \in \mathcal{I}(\mathbb{Z}_m^{n-1})$ and observe that trivially, $\abs I = \sum_{q \in \mathbb{Z}_m} \abs {I_q}$. However, the sets $I_q$ can have at most $m^{n-2} \ell$ vertices by [Lemma 5](#lem: max size of maxindepset){reference-type="ref" reference="lem: max size of maxindepset"}, so in order to achieve $\abs I = m^{n-1} \ell$, they must all be maximum independent sets in $\mathbb{Z}_m^{n-1}$. The induction hypothesis therefore guarantees that each $I_q$ has a preimage under $\iota_{n-1}$, which we will denote as $x_q \coloneqq \iota_{n-1}^{-1}(I_q) \in \mathbb{Z}_m \times \{ \pm 1 \}^{n-2}$. We refer to its first component as $p_q \in \mathbb{Z}_m$.
We now claim the following:
**Claim.** Except for the first component, all $x_q$ are identical. The first components satisfy $p_q = p_0 + q(p_1 - p_0)$ mod $m$ for all $q \in \mathbb{Z}_m$.
Let us first prove that $\{ p_{q-1}, p_{q+1} \} = \{ p_q-1, p_q+1 \}$ mod $m$ for every $q \in \mathbb{Z}_m$. It is easy to see that $\iota_{n-1}(x_q) = I_q$ and $\iota_{n-1}(x_{q - 1}) = I_{q - 1}$ must be disjoint as otherwise, $u \in I_q \cap I_{q - 1}$ for some $u \in V(\mathbb{Z}_m^{n-1})$ implies $(u, q), (u, q - 1) \in I$ and contradicts $I$ being independent. Similarly, $I_q \cap I_{q + 1} = \emptyset$. So [Lemma 7](#lem: only disjoint if shifted){reference-type="ref" reference="lem: only disjoint if shifted"} guarantees that all $x_q$ are identical except for their first component, for which $\{ p_{q-1}, p_{q+1} \} \subseteq\{ p_q - 1, p_q + 1 \}$ mod $m$ must hold.
For a proof by contradiction, assume that $p_{q-1} = p_{q+1} = p_q - 1$ mod $m$. Partition $V(\mathbb{Z}_m^n)$ into $V_w \coloneqq \{ v \in V(\mathbb{Z}_m^n) \colon (v_1, \ldots, v_{n-1}) = w \}$ for all $w \in V(\mathbb{Z}_m^{n-1})$. Observe that $\mathbb{Z}_m^n[V_w] \cong\mathbb{Z}_m^1$ for all $w \in V(\mathbb{Z}_m^{n-1})$. Consider now $\tilde w \coloneqq (p_q - 2, 0, \ldots, 0)$. We immediately observe that $p_q - 2 \notin \{ p_q, p_q + 2, \ldots, p_q + m - 3 \}$, so $\tilde w \notin \iota_{n-1}(x_q) = I_q = \phi_q(I \cap V_q)$ and $(\tilde w, q) \notin I$. However, we also see that $p_q - 2 \notin \{ p_q - 1, p_q + 1, \ldots, p_q + m - 4 \}$, so $\tilde w \notin \iota_{n-1}(x_{q \pm 1}) = I_{q \pm 1} = \phi_{q \pm 1}(I \cap V_{q \pm 1})$ and neither $(\tilde w, q-1)$ nor $(\tilde w, q+1)$ belong to $I$. But then $I \cap V_{\tilde w}$ must be an independent set in the path $\mathbb{Z}_m^n[V_{\tilde w}] \setminus \{ (\tilde w, q-1), (\tilde w, q), (\tilde w, q+1) \}$ on $m-3$ vertices and can thus have at most $\abs {I \cap V_{\tilde w}} \leqslant(m-3)/2 < \ell$ vertices. Summing $\abs {I \cap V_w}$ over all $w \in V(\mathbb{Z}_m^{n-1})$ then yields $\abs I = \sum_{w \in V(\mathbb{Z}_m^{n-1})} \abs {I \cap V_w} < m^{n-1} \ell$ in contradiction to $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$. The same contradiction arises when assuming $p_{q-1} = p_{q+1} = p_q + 1$ mod $m$ and considering $\tilde w \coloneqq (p_q - 1, 0, \ldots, 0) \in V(\mathbb{Z}_m^{n-1})$ instead. This proves that $\{ p_{q-1}, p_{q+1} \} = \{ p_q-1, p_q+1 \}$ mod $m$ for every $q \in \mathbb{Z}_m$.
The equality $p_q = p_0 + q(p_1 - p_0)$ mod $m$ is trivially true for $q \in \{ 0, 1 \}$. In general, it follows by induction on $q$: Suppose that for some $q > 0$ the claim is true for $q-1$ and $q$. Then $\{ p_{q-1}, p_{q+1} \} = \{ p_q-1, p_q+1 \} = \{ p_0 + q(p_1 - p_0) - 1, p_0 + q(p_1 - p_0) + 1 \}$. Recalling that $p_1 - p_0 \in \{ \pm 1 \}$ by [Lemma 7](#lem: only disjoint if shifted){reference-type="ref" reference="lem: only disjoint if shifted"}, this set is exactly $\{ p_0 + (q-1)(p_1 - p_0), p_0 + (q+1)(p_1 - p_0) \}$ and as $p_{q-1} = p_0 + (q-1)(p_1 - p_0)$ by the induction hypothesis, we must have $p_{q+1} = p_0 + (q+1)(p_1 - p_0)$. This concludes the proof of the claim above.
Finally, we show in the following that for $x_0 = (p_0, \varepsilon_1, \ldots, \varepsilon_{n-2})$ and $\varepsilon_{n-1} \coloneqq p_0 - p_1 \in \{ \pm 1 \}$ mod $m$, we have $\iota_n(p_0, \varepsilon_1, \ldots, \varepsilon_{n-1}) = I$. We prove this equality for the respective intersections with $V_q = \{ v \in V(\mathbb{Z}_m^n) \colon v_n = q \}$ for all $q \in \mathbb{Z}_m$. So let $q \in \mathbb{Z}_m$ and $v \in V_q$ be arbitrary. Then by definition of $\iota_n$, we have $v \in \iota_n(p_0, \varepsilon_1, \ldots, \varepsilon_{n-1}) \cap V_q$ if and only if $$\begin{aligned}
\label{eq: iota(new x)}
&v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1} \notag\\
={}&v_1 + \sum_{i = 1}^{n-2} \varepsilon_i v_{i+1} + q(p_0 - p_1) \in \{ p_0, p_0 + 2, \ldots, p_0 + m - 3 \} \mod m
\,.\end{aligned}$$ On the other hand, we have $v \in I \cap V_q$ if and only if $(v_1, \ldots, v_{n-1}) \in \phi_q(I \cap V_q) = I_q = \iota_{n-1}(x_q)$ by definition of $\phi_q$. Also note that by the claim above, $x_0$ and $x_q$ share all but their first components, so $x_q = (p_q, \varepsilon_1, \ldots, \varepsilon_{n-2})$. This means that $v \in I \cap V_q$ is equivalent to $$v_1 + \sum_{i = 1}^{n-2} \varepsilon_i v_{i+1} \in \{ p_q, p_q + 2, \ldots, p_q + m - 3 \} \mod m
\,.$$ We can now use our claim to replace $p_q$ by $p_0 + q(p_1 - p_0) = p_0 - q(p_0 - p_1)$. Adding $q(p_0 - p_1)$ to both sides of this relation, we obtain exactly the condition in ([\[eq: iota(new x)\]](#eq: iota(new x)){reference-type="ref" reference="eq: iota(new x)"}). This shows that $\iota_n(x_0, \varepsilon_{n-1}) \cap V_q = I \cap V_q$ for all $q \in \mathbb{Z}_m$ and so indeed, $\iota_n(x_0, \varepsilon_{n-1}) = I$ holds. As $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ was chosen arbitrarily, this concludes the proof that $\iota_n$ is surjective and thus, a bijection. ◻
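For a concrete illustration (a toy case, not needed for the argument), take $m = 3$ and $n = 2$, where $\ell = 1$ and the set $\{ q, q + 2, \ldots, q + m - 3 \}$ reduces to $\{ q \}$. Then $$\iota_2(q, \varepsilon) = \{ v \in V(\mathbb{Z}_3^2) \colon v_1 + \varepsilon v_2 = q \mod 3 \},$$ so for instance $\iota_2(0, 1) = \{ (0,0), (1,2), (2,1) \}$. Any two of these vertices differ in both coordinates and are therefore non-adjacent, each such set has $m^{n-1} \ell = 3$ vertices, and there are $m \cdot 2^{n-1} = 6$ of them.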
For $m = 3$, we can establish an upper bound on the size of the intersection of two maximum independent sets. Limiting this overlap is crucial to ensure that the process described above yields different independent sets $A \cup B$ when starting with different maximum independent sets $I$, which leads to the factor of $3 \cdot 2^{n-1}$ in the lower bound.
**Lemma 9**. *For $n \in \mathbb{N}$, let $I, I' \in \mathcal{I}^*(\mathbb{Z}_3^n)$ be distinct. Then $\abs {I \cap I'} \leqslant 3^{n-2}$.*
*Proof.* Let $(q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \coloneqq \iota_n^{-1}(I)$ and $(q', \varepsilon'_1, \ldots, \varepsilon'_{n-1}) \coloneqq \iota_n^{-1}(I')$. Suppose $v \in I \cap I'$, then adding and subtracting the conditions for $v \in I$ and $v \in I'$ yields $$2v_1 + \sum_{i = 1}^{n-1} (\varepsilon_i + \varepsilon'_i) v_{i+1}
= q + q' \mod 3
\quad \text{and} \quad
\sum_{i = 1}^{n-1} (\varepsilon_i - \varepsilon'_i) v_{i+1}
= q - q' \mod 3
\,.$$ Let $J_+ \coloneqq \{ 0 \} \cup \{ i \in [n-1] \colon \varepsilon_i = \varepsilon'_i \}$ and $J_- \coloneqq \{ i \in [n-1] \colon \varepsilon_i = -\varepsilon'_i \}$. Then this is equivalent to $2 \sum_{i \in J_+} v_{i+1} = q + q'$ mod $3$ and $2 \sum_{i \in J_-} v_{i+1} = q - q'$ mod $3$. Multiplying both equations by $2 = -1$ mod $3$, we get $$\sum_{i \in J_+} v_{i+1}
= -q - q' \mod 3
\quad \text{and} \quad
\sum_{i \in J_-} v_{i+1}
= -q + q' \mod 3
\,.$$ Observe that trivially, $J_+ \neq \emptyset$ and let $j_+ \coloneqq \max J_+$. Now suppose that $J_- = \emptyset$. This immediately implies $\varepsilon_i = \varepsilon'_i$ for all $i$ and, by the second equation above, also that $q = q'$. Thus, $\iota_n^{-1}(I) = \iota_n^{-1}(I')$ in contradiction to $I \neq I'$. So $J_-$ is also nonempty and we can let $j_- \coloneqq \max J_-$. Then for each of the $3^{n-2}$ ways to choose the entries $v_{i+1}$ for $i \in \{ 0, 1, \ldots, n-1 \} \setminus \{ j_+, j_- \}$, there is at most one choice for $v_{j_++1}$ and $v_{j_-+1}$ (namely $v_{j_++1} = -q - q' - \sum_{i \in J_+ \setminus \{ j_+ \}}v_{i+1}$ mod $3$ and $v_{j_-+1} = -q + q' - \sum_{i \in J_- \setminus \{ j_- \}}v_{i+1}$ mod $3$) such that $v$ is in $I \cap I'$. This proves that $\abs {I \cap I'} \leqslant 3^{n-2}$. ◻
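As a quick illustrative check of [Lemma 9](#lem: intersection maxindepsets){reference-type="ref" reference="lem: intersection maxindepsets"} in the smallest case, take $n = 2$ and the two maximum independent sets $I = \iota_2(0, 1)$ and $I' = \iota_2(0, -1)$. A vertex $v \in I \cap I'$ satisfies both $v_1 + v_2 = 0$ and $v_1 - v_2 = 0$ mod $3$, which forces $v = (0, 0)$, so $\abs {I \cap I'} = 1 = 3^{n-2}$ and the bound is attained.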
This concludes our analysis of maximum independent sets in $\mathbb{Z}_m^n$. We now turn our attention to the selection of defects from the subgraph $H_n$. Here, we observe that we can essentially assume the defects to be chosen independently as long as their number is small compared to the square root of the number of vertices available in $H_n$.
**Lemma 10**. *For every $n \in \mathbb{N}$, let $M_n, K_n \in \mathbb{N}$ and $H_n$ be a graph on $M_n$ vertices such that $(\Delta(H_n)+1) K_n^2 \leqslant o(M_n)$. Then for every $k \leqslant K_n$, we have $$\abs {\mathcal{I}_k(H_n)}
\geqslant(1 - o(1)) \frac {M_n^k}{k!}
\,.$$*
*Proof.* Let $\varepsilon> 0$ and choose $n$ large enough to guarantee $(\Delta(H_n) + 1)K_n^2 \leqslant\varepsilon M_n$. Now select $k$ vertices from $H_n$ one after the other, excluding from the choices for the $(i+1)$-th vertex the $i$ vertices chosen in previous steps as well as the at most $\Delta(H_n)i$ vertices adjacent to any vertex chosen in previous steps. This ensures that the set of all chosen vertices lies in $\mathcal{I}_k(H_n)$, but every such set is produced exactly $k!$ times. Having at least $M_n - (\Delta(H_n) + 1)i$ choices for the $(i+1)$-th vertex, we find that $$\abs {\mathcal{I}_k(H_n)}
\geqslant\frac 1{k!} \prod_{i = 0}^{k - 1} (M_n - (\Delta(H_n) + 1)i)
= \frac {M_n^k}{k!} \prod_{i = 0}^{k - 1} \left( 1 - \frac {(\Delta(H_n) + 1)i}{M_n} \right)
\,.$$ Now we use the fact that $i \leqslant k \leqslant K_n$ and $(\Delta(H_n) + 1)K_n / M_n \leqslant\varepsilon/K_n$ to obtain $$\prod_{i = 0}^{k - 1} \left( 1 - \frac {(\Delta(H_n) + 1)i}{M_n} \right)
\geqslant\prod_{i = 0}^{k - 1} \left( 1 - \frac {\varepsilon}{K_n} \right)
\geqslant\left( 1 - \frac {\varepsilon}{K_n} \right)^{K_n}
\geqslant 1 - \varepsilon$$ by the Bernoulli inequality. ◻
Finally, we will also use the following two immediate consequences of Chebyshev's inequality, applying it to a symmetric binomial and a Poisson distribution, respectively. Together, they allow us to bring the final lower bound into a closed form.
**Lemma 11**. *For all $\varepsilon> 0$, there exists an $n_0 \in \mathbb{N}$ such that for all $n \geqslant n_0$ and all $y \in \mathbb{N}$, the following holds: $$2^{-y} \sum_{b = y/2 - n\sqrt{y/4}}^{y/2 + n\sqrt{y/4}} \begin{pmatrix}y \\ b\end{pmatrix}
\geqslant 1 - \varepsilon
\,.$$*
**Lemma 12**. *For all $\varepsilon> 0$, there exists an $n_0 \in \mathbb{N}$ such that for all $n \geqslant n_0$ and all $\lambda > 0$, the following holds: $$\exp(-\lambda) \sum_{k = \lambda - n \sqrt {\lambda}}^{\lambda + n \sqrt {\lambda}} \frac {\lambda^k}{k!}
\geqslant 1 - \varepsilon
\,.$$*
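Both lemmas are direct applications of Chebyshev's inequality: the sums are the probabilities that a binomial variable with parameters $y$ and $\tfrac 12$ (mean $y/2$, variance $y/4$), respectively a Poisson variable with parameter $\lambda$ (mean and variance $\lambda$), deviates from its mean by at most $n$ standard deviations, and Chebyshev's inequality bounds the complementary probability by $1/n^2$, which is at most $\varepsilon$ for $n \geqslant n_0$.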
We can now combine all of this to prove the desired asymptotic lower bound of at least $(1-o(1)) \cdot 3 \cdot 2^{n-1} \cdot 2^{3^{n-1}} \cdot \exp( (3/2)^{n-1} )$ independent sets in $\mathbb{Z}_3^n$.
*Proof of [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"}.* Let $x \coloneqq (q, \varepsilon_1, \ldots, \varepsilon_{n-1}) \in \mathbb{Z}_3 \times \{ \pm 1 \}^{n-1}$ be arbitrary and set $I \coloneqq \iota_n(x)$ as well as $U \coloneqq \{ v \in V(\mathbb{Z}_3^n) \colon v_1 + \sum_{i = 1}^{n-1} \varepsilon_i v_{i+1} \in \{ q-2, q-1 \} \mod 3 \} = V(\mathbb{Z}_3^n) \setminus I$. Now consider the graph $H_n \coloneqq \mathbb{Z}_3^n[U]$ on $M_n \coloneqq 2 \cdot 3^{n-1}$ vertices. It is easy to see that $H_n$ is $n$-regular, so we have $\Delta(H_n) = n$. Let $\lambda \coloneqq (3/2)^{n-1}$ and $K_n \coloneqq \lambda + n \sqrt{\lambda}$. Then we immediately observe that $(\Delta(H_n) + 1)K_n^2 \leqslant O(n\lambda^2) = O(n(9/4)^n) = o(3^n) = o(M_n)$. Selecting any $k \in [\lambda - n \sqrt{\lambda}, K_n]$, we thus have $\abs {\mathcal{I}_k(H_n)} \geqslant(1 - o(1)) \frac {M_n^k}{k!}$ by [Lemma 10](#lem: defect selection){reference-type="ref" reference="lem: defect selection"}.
Now consider some set $A \in \mathcal{I}_k(H_n)$. As every vertex in $A \subseteq U$ has exactly $n$ edges to $I$, we have $\abs {N_{\mathbb{Z}_3^n}(A, I)} \leqslant n \abs A = nk$. Let $y \coloneqq 3^{n-1} - nk$ and select some $B \subseteq I \setminus N_{\mathbb{Z}_3^n}(A, I)$ with $\abs B \in [y/2 - n \sqrt{y/4}, y/2 + n \sqrt{y/4}]$. Then $A \cup B \in \mathcal{I}(\mathbb{Z}_3^n)$ by construction and the number of distinct such $A \cup B$ is at least $$\begin{aligned}
\sum_{k = \lambda - n\sqrt{\lambda}}^{\lambda + n\sqrt{\lambda}} \abs {\mathcal{I}_k(H_n)} \sum_{b = y/2 - n\sqrt{y/4}}^{y/2 + n\sqrt{y/4}} \begin{pmatrix}y \\ b\end{pmatrix}
&\geqslant(1 - o(1)) \sum_{k = \lambda - n\sqrt{\lambda}}^{\lambda + n\sqrt{\lambda}} \frac {M_n^k \cdot 2^{y}}{k!} \\
&= (1 - o(1)) \cdot 2^{3^{n-1}} \sum_{k = \lambda - n\sqrt{\lambda}}^{\lambda + n\sqrt{\lambda}} \frac {(2 \cdot 3^{n-1})^k \cdot 2^{-nk}}{k!} \\
&= (1 - o(1)) \cdot 2^{3^{n-1}} \sum_{k = \lambda - n\sqrt{\lambda}}^{\lambda + n\sqrt{\lambda}} \frac {\lambda^k}{k!} \\
&\geqslant(1 - o(1)) \cdot 2^{3^{n-1}} \exp \left( \left( \tfrac 32 \right)^{n-1} \right)\end{aligned}$$ by [Lemma 11](#lem: binomial){reference-type="ref" reference="lem: binomial"} and [Lemma 12](#lem: Poisson){reference-type="ref" reference="lem: Poisson"}.
As $I \in \mathcal{I}^*(\mathbb{Z}_3^n)$ was arbitrary, we can actually obtain the $\abs {\mathcal{I}^*(\mathbb{Z}_3^n)}$-fold of this bound, which is exactly the desired statement, if we can show that an independent set $A \cup B$ produced by the above process starting from $I \in \mathcal{I}^*(\mathbb{Z}_3^n)$ cannot also be written as $A' \cup B'$ produced starting from $I' \in \mathcal{I}^*(\mathbb{Z}_3^n) \setminus \{ I \}$. For a proof by contradiction, suppose this were false and $A \cup B = A' \cup B'$. Then we would have $$\abs {I \cap I'}
\geqslant\abs {B \cap B'}
\geqslant\abs B - \abs {A'}
\geqslant y/2 - n\sqrt{y/4} - K_n
\geqslant 3^{n-1}/2 - o(3^n)$$ and could ensure $\abs {I \cap I'} \geqslant 3^{n-1}/3 = 3^{n-2}$ by choosing $n$ sufficiently large. This, however, contradicts [Lemma 9](#lem: intersection maxindepsets){reference-type="ref" reference="lem: intersection maxindepsets"} and thus finishes the proof. ◻
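The bounds in this section are asymptotic; as a purely illustrative sanity check, exact values of $\abs {\mathcal{I}(\mathbb{Z}_m^n)}$ for very small parameters can be obtained by brute force. The following minimal Python sketch (the function name and structure are ours and not part of the argument) encodes $\mathbb{Z}_m^n$ as the Cartesian power of the $m$-cycle.

```python
from itertools import combinations, product

def count_independent_sets(m, n):
    """Count independent sets in Z_m^n, the Cartesian power of the m-cycle.

    Vertices are tuples in {0,...,m-1}^n; two vertices are adjacent iff they
    differ in exactly one coordinate, and in that coordinate by +-1 mod m.
    The running time is exponential in the number m**n of vertices, so this
    is only feasible for tiny cases.
    """
    vertices = list(product(range(m), repeat=n))

    def adjacent(u, v):
        diffs = [(a - b) % m for a, b in zip(u, v) if a != b]
        return len(diffs) == 1 and diffs[0] in (1, m - 1)

    count = 0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all(not adjacent(u, v) for u, v in combinations(subset, 2)):
                count += 1
    return count

if __name__ == "__main__":
    print(count_independent_sets(3, 1))  # 4: the empty set and three singletons
    print(count_independent_sets(5, 1))  # 11: independent sets in the 5-cycle
    print(count_independent_sets(3, 2))  # 34: non-attacking rook placements on a 3x3 board
```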
A first natural guess would be that the cases $m \geqslant 5$ can be treated similarly to the case $m = 3$. However, for most independent sets, the number of defects becomes so large that when choosing them as an independent subset of the graph $H_n$ induced by two adjacent partition classes, we can no longer essentially ignore the edges between these two classes. In the terminology of [Lemma 10](#lem: defect selection){reference-type="ref" reference="lem: defect selection"}, this graph $H_n$ has $M_n = 2m^{n-1}$ vertices, while we need to allow defect sets of size up to $K_n \geqslant(m/2)^{n-1}$. This violates the assumption $(\Delta(H_n) + 1)K_n^2 \leqslant o(M_n)$ of [Lemma 10](#lem: defect selection){reference-type="ref" reference="lem: defect selection"} for $m \geqslant 5$.
For general odd $m$, we therefore only derive a less precise lower bound. Its proof relies on the following lemma, which functions as a rough lower bound on the number of independent sets $A$ in the graph $H = H_n$.
**Lemma 13**. *Let $H$ be a graph and $\lambda \in (0, 1/2]$. Then for $p \coloneqq \lambda/(1 + \lambda)$, we have $$\sum_{A \in \mathcal{I}(H)} \lambda^{\abs A}
\geqslant\frac {(1 - p^2)^{\abs {E(H)}}} {(1-p)^{\abs {V(H)}}}
\geqslant\exp \left( p \abs {V(H)} - 2p^2 \abs {E(H)} \right)
\,.$$*
*Proof.* Consider the random subset $X \subseteq V(H)$ that arises from selecting every vertex of $H$ independently with probability $p$. We calculate the probability of $X$ being independent in two different ways. On the one hand, we have $$\begin{aligned}
\mathbb{P}[X \in \mathcal{I}(H)]
= \sum_{A \in \mathcal{I}(H)} \mathbb{P}[X = A]
&= \sum_{A \in \mathcal{I}(H)} p^{\abs A} (1-p)^{\abs {V(H)} - \abs A} \\
&= (1-p)^{\abs {V(H)}} \sum_{A \in \mathcal{I}(H)} \left( \frac p{1-p} \right)^{\abs A}
\,.\end{aligned}$$ Observe that $p/(1-p) = \lambda$. On the other hand, we can also consider all the edges $uv \in E(H)$ and calculate the probability that for none of them, both endpoints belong to $X$. This yields $$\begin{aligned}
\mathbb{P}[X \in \mathcal{I}(H)]
= \mathbb{P}\left[ \bigcap_{uv \in E(H)} \{ u \in X \wedge v \in X \}^c \right]
&\geqslant\prod_{uv \in E(H)} (1 - \mathbb{P}[u \in X \wedge v \in X]) \\
&= (1 - p^2)^{\abs {E(H)}}
\,,\end{aligned}$$ where we have fixed an enumeration of $V(H) = \{ v_1, \ldots, v_k \}$ and identified $X \subseteq V(H)$ with $x \in \{ 0, 1 \}^k$ defined by $x_i = 1$ if $v_i \in X$, so we can repeatedly apply [Lemma 4](#lem: FKG){reference-type="ref" reference="lem: FKG"}. Note that the partial order $\leqslant$ on $\{ 0, 1 \}^k$ is just the subset relation and for all $E \subseteq E(H)$, the event $\bigcap_{uv \in E} \{ u \in X \wedge v \in X \}^c$ is decreasing. Combining both observations proves the first statement claimed.
In order to obtain the second inequality, we note that $p, p^2 \in (0, 1/2)$ since $0 < p < \lambda$. This allows us to use the geometric series to calculate $$\frac 1{1 - p}
= \sum_{i = 0}^\infty p^i
\geqslant\sum_{i = 0}^\infty \frac {p^i}{i!}
= e^p$$ and also bound $\ln \left( (1 - p^2)^{\abs {E(H)}} \right) = \abs {E(H)} \ln (1 - p^2)$ from below using $$\ln (1 - p^2)
= \sum_{k = 1}^\infty (-1)^{k+1} \frac {(-p^2)^k}k
= -\sum_{k = 1}^\infty \frac {p^{2k}}k
\geqslant-\sum_{k = 1}^\infty p^{2k}
= - \frac {p^2}{1 - p^2}
\geqslant-2p^2
\,.$$ This finishes the proof. ◻
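As a small illustration, for the graph $H$ consisting of a single edge and $\lambda = 1/2$, so that $p = 1/3$, the left-hand side equals $1 + 2\lambda = 2$, the first bound equals $(1 - 1/9)/(1 - 1/3)^2 = 2$ as well, and the final bound evaluates to $\exp(2p - 2p^2) = \exp(4/9) \approx 1.56$.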
We are now ready to prove [Theorem 2](#thm: lowerboundKm){reference-type="ref" reference="thm: lowerboundKm"}. Recall that we need to show that for $n, m \in \mathbb{N}$ with $m$ odd and $\ell \coloneqq \lfloor m/2 \rfloor$, we have $$\abs {\mathcal{I}(\mathbb{Z}_m^n)}
\geqslant 2^{\ell m^{n-1}} \exp \left( (1 - o(1)) \left( \frac m2 \right)^{n-1} \right)
\,.$$
*Proof of [Theorem 2](#thm: lowerboundKm){reference-type="ref" reference="thm: lowerboundKm"}.* Fix a maximum independent set $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ and let $H \subseteq\mathbb{Z}_m^n$ be the subgraph induced by the two adjacent partition classes in $\overline I$. We can obviously obtain a set of pairwise distinct independent sets in $\mathbb{Z}_m^n$ by considering all combinations $A \cup B$ of $A \in \mathcal{I}(H)$ and $B \subseteq I \setminus N_{\mathbb{Z}_m^n}(A, I)$. As $\abs {N_{\mathbb{Z}_m^n}(A, I)} \leqslant\sum_{a \in A} \abs {N_{\mathbb{Z}_m^n}(a, I)} = n \abs A$, we obtain $$\abs {\mathcal{I}(\mathbb{Z}_m^n)}
\geqslant\sum_{A \in \mathcal{I}(H)} 2^{\abs {I \setminus N_{\mathbb{Z}_m^n}(A, I)}}
\geqslant\sum_{A \in \mathcal{I}(H)} 2^{\ell m^{n-1} - n \abs A}
= 2^{\ell m^{n-1}} \sum_{A \in \mathcal{I}(H)} (2^{-n})^{\abs A}
\,.$$ We now apply [Lemma 13](#lem: estimate partition function){reference-type="ref" reference="lem: estimate partition function"} with $\lambda \coloneqq 2^{-n}$ and $p \coloneqq 2^{-n}/(1 + 2^{-n})$, which satisfies $(1 - o(1)) 2^{-n} \leqslant p \leqslant 2^{-n}$. We observe that $\abs {V(H)} = 2m^{n-1}$ and $\abs {E(H)} = nm^{n-1}$, so we obtain $$\sum_{A \in \mathcal{I}(H)} (2^{-n})^{\abs A}
\geqslant\exp \left( p \abs {V(H)} - 2p^2 \abs {E(H)} \right)
= \exp \left( 2pm^{n-1} (1 - pn) \right)
\,.$$ The statement follows from $2pm^{n-1} \geqslant(1 - o(1))(m/2)^{n-1}$ and $pn \leqslant n2^{-n} \leqslant o(1)$. ◻
Note that in contrast to [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"}, the lower bound in [Theorem 2](#thm: lowerboundKm){reference-type="ref" reference="thm: lowerboundKm"} does not contain a factor of $m \cdot 2^{n-1}$ representing the choice of $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ anymore. This is due to the fact that $m \cdot 2^{n-1} = \exp ( o ( (m/2)^{n-1} ) )$, so the lost factor is anyway smaller than the error allowed in [Theorem 2](#thm: lowerboundKm){reference-type="ref" reference="thm: lowerboundKm"}.
# Cluster Expansion {#sec:clusterexpansion}
One of the most powerful tools for counting independent sets is the cluster expansion from statistical physics. In particular, it is often able to yield much more detailed asymptotic formulas for the number of independent sets in a graph. Since the method is quite well-established, we omit the technical details and instead focus on how to employ this method in the current setting. We first introduce the polymer model $\mathcal{P}(\mathbb{Z}_m^n, I)$ and establish its connection to $\abs {\mathcal{I}(\mathbb{Z}_m^n)}$, before we direct our attention to the clusters and actually calculate some initial terms of the cluster expansion.
## The polymer model
Let $G$ be a graph and fix an independent set $I \in \mathcal{I}(G)$. We say that a set $S \subseteq\overline I \coloneqq V(G) \setminus I$ is *$(2, I)$-linked* if $G[S \cup N_G(S, I)]$ is connected. We define the following *polymer model* $\mathcal{P}(G, I)$: A *polymer* is a $(2, I)$-linked subset of $\overline I$ that is independent in $G$. Two polymers $S, T$ are *compatible* if $S \cup T$ is independent in $G$, but not $(2, I)$-linked. We also write this as $S \sim T$. The *weight* of a polymer $S$ is defined as $w(S) \coloneqq 2^{-\abs {N_G(S, I)}}$.
For every polymer model $\mathcal{P}$, one can consider the associated *partition function* $Z_{\mathcal{P}}
\coloneqq \sum_{\mathcal S} \prod_{S \in \mathcal S} w(S)$, where the sum is over all sets $\mathcal S$ of pairwise compatible polymers. In the case of the polymer model $\mathcal{P}= \mathcal{P}(G, I)$ defined above, this partition function is actually counting independent sets in $G$.
**Lemma 14**. *Let $G$ be a graph and $I \in \mathcal{I}(G)$. Then $\abs {\mathcal{I}(G)} = 2^{\abs I} Z_{\mathcal{P}(G, I)}$.*
*Proof.* For every independent set in $G$, there is exactly one way to write it as the union $A \cup B$ of $A \in \mathcal{I}(G[\overline I])$ and $B \subseteq I \setminus N_G(A, I)$. This yields $$\abs {\mathcal{I}(G)}
= \sum_{A \in \mathcal{I}(G[\overline I])} 2^{\abs I - \abs {N_G(A, I)}}
= 2^{\abs I} \sum_{A \in \mathcal{I}(G[\overline I])} 2^{- \abs {N_G(A, I)}}
\,.$$
Our next goal is to show that there is a one-to-one correspondence between independent sets $A \in \mathcal{I}(G[\overline I])$ and sets $\mathcal S$ of pairwise compatible polymers, which enables us to sum over all such sets $\mathcal S$ instead. If $G$ is bipartite and $I, \overline I$ are its partition classes (as is usually the case in the literature on $\mathbb{Z}_m^n$ with even $m$), this is straightforward. Meanwhile, our setting allows edges inside of $\overline I$ and thus requires the more delicate polymer definition above. Therefore, a formal proof of the desired correspondence seems warranted to us.
To this end, consider the following two functions that map independent sets $A \in \mathcal{I}(G[\overline I])$ to sets $\mathcal S$ of pairwise compatible polymers and vice-versa: Given an independent set $A \in \mathcal{I}(G[\overline I])$, decompose the graph $G[A \cup N_G(A, I)]$ into its $\ell \geqslant 0$ connected components $C_1, \ldots, C_\ell$ and define $\Phi(A) \coloneqq \{ V(C_i) \cap \overline I \mid i \in [\ell] \}$. Given a set $\mathcal S$ of pairwise compatible polymers, define $\Psi(\mathcal S) \coloneqq \bigcup_{S \in \mathcal S} S$.
**Claim 15**. *The functions $\Phi$ and $\Psi$ are well-defined and the inverse of each other.*
*Proof of the claim.* Let $A \in \mathcal{I}(G[\overline I])$ be arbitrary and $C_1, \ldots, C_\ell$ be the connected components of $G[A \cup N_G(A, I)]$. Since $(A \cup N_G(A, I)) \cap \overline I = A$, we find that the $S_i \coloneqq V(C_i) \cap \overline I$ partition $A$. On the one hand, this immediately proves that $\Psi (\Phi(A)) = A$. On the other hand, this also guarantees that the $S_i$ inherit from $A$ that they are subsets of $\overline I$ that are independent in $G$. By construction, each $G[S_i \cup N_G(S_i, I)] = C_i$ is connected, so the $S_i$ are indeed $(2, I)$-linked and thus polymers.
In order to see that the $S_i$ are also pairwise compatible, note that $S_i \cup S_j \subseteq A$ must be independent in $G$, so suppose that it is still $(2, I)$-linked. This means that $G[S_i \cup N_G(S_i, I) \cup S_j \cup N_G(S_j, I)]$ is connected and thus belongs to a single connected component of $G[A \cup N_G(A, I)]$, which forces $C_i = C_j$ and, as desired, $i = j$. This shows that $\Phi$ is indeed well-defined.
For the inverse direction, let $\mathcal S$ be a set of pairwise compatible polymers. Then each $S \in \mathcal S$ is a subset of $\overline I$ that is independent in $G$. Their union must therefore also be a subset of $\overline I$. If it were not independent, there would be $u, v$ in distinct $S, T \in \mathcal S$ with $uv \in E(G)$. However, then $S \cup T$ would not be independent and $S, T$ would therefore not be compatible, a contradiction. This proves that $\Psi$ is well-defined.
In order to see that $\Phi(\Psi(\mathcal S)) = \mathcal S$, first note that by compatibility, both $\Phi(\Psi(\mathcal S))$ and $\mathcal S$ partition $A \coloneqq \Psi(\Phi(\Psi(\mathcal S))) = \Psi(\mathcal S)$. It therefore suffices to show that every $S \in \mathcal S$ is a subset of some $S' \in \Phi(A)$ and vice-versa:
- Let $S \in \mathcal S$ be arbitrary. By its $(2, I)$-linkedness, $G[S \cup N_G(S, I)]$ is a connected subgraph of $G[A \cup N_G(A, I)]$ and must therefore belong to a single connected component of $G[A \cup N_G(A, I)]$. This shows that the intersection $(S \cup N_G(S, I)) \cap \overline I = S$ is contained in some element of $\Phi(A)$.
- On the other hand, consider some connected component $C$ of $G[A \cup N_G(A, I)]$ and suppose $C \cap \overline I$ intersects with multiple polymers in $\mathcal S$. Let $S, T \in \mathcal S$ be two such polymers, which by connectedness of $C$, can be chosen such that $S \cup N_G(S, I)$ and $T \cup N_G(T, I)$ are adjacent in $G$. As $I$ is independent, any connecting edge is either between $S$ and $T$ or establishes an intersection of $N_G(S, I)$ and $N_G(T, I)$. Either way, it contradicts $S$ and $T$ being compatible.
This shows that $\Phi(\Psi(\mathcal S)) = \mathcal S$ and thus finishes the proof of the claim. ◻
The $(2, I)$-linkedness of every $S \in \Phi(A)$ now guarantees that the $N_G(S, I)$ partition $N_G(A, I)$. This allows us to write $$2^{- \abs {N_G(A, I)}}
= 2^{- \sum_{S \in \Phi(A)} \abs {N_G(S, I)}}
= \prod_{S \in \Phi(A)} 2^{- \abs {N_G(S, I)}}
= \prod_{S \in \Phi(A)} w(S)
\,.$$ Having established in [Claim 15](#claim: indepsets<->polymers){reference-type="ref" reference="claim: indepsets<->polymers"} that $\Phi$ bijectively maps independent sets $A \in \mathcal{I}(G[\overline I])$ to sets $\mathcal S$ of pairwise compatible polymers, we conclude that $$\sum_{A \in \mathcal{I}(G[\overline I])} 2^{- \abs {N_G(A, I)}}
= \sum_{A \in \mathcal{I}(G[\overline I])} \prod_{S \in \Phi(A)} w(S)
= \sum_{\mathcal S} \prod_{S \in \mathcal S} w(S)
= Z_{\mathcal{P}(G, I)}
\,,$$ which finishes the proof. ◻
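A minimal example may clarify the bookkeeping: let $G = \mathbb{Z}_3^1$ be the triangle and $I$ a single vertex, so that $\overline I$ consists of the two remaining, adjacent vertices. Each of them forms a polymer of weight $2^{-1}$, their union is not independent and hence not a polymer, and the two single-vertex polymers are incompatible, so $Z_{\mathcal{P}(G, I)} = 1 + \tfrac 12 + \tfrac 12 = 2$ and indeed $2^{\abs I} Z_{\mathcal{P}(G, I)} = 4 = \abs {\mathcal{I}(G)}$.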
## Cluster expansion {#cluster-expansion}
Let $\mathcal{P}$ be a polymer model with compatibility relation $\sim$. A *cluster* of $\mathcal{P}$ is a vector $\Gamma = (S_1, S_2, \ldots, S_k)$ of $k \geqslant 1$ (not necessarily distinct) polymers of $\mathcal{P}$ such that the corresponding *incompatibility graph* $H_\Gamma = ([k], \{ ij \colon S_i \not \sim S_j \})$ is connected. The *size* of a cluster $\Gamma = (S_1, S_2, \ldots, S_k)$ is defined as $\Abs \Gamma \coloneqq \sum_{i = 1}^k \abs {S_i}$. Let $\mathcal{C}(\mathcal{P})$ denote the (infinite) set of all clusters and write $\mathcal{C}_r(\mathcal{P}) \coloneqq \{ \Gamma \in \mathcal{C}(\mathcal{P}) \colon \Abs \Gamma = r \}$. We now define the *Ursell function* of a graph $H$ as $$\phi(H)
\coloneqq \frac 1{\abs {V(H)}!} \sum_{\substack{\text{spanning, connected}\\\text{subgraphs $F \subseteq H$}}} (-1)^{\abs {E(F)}}
\,.$$ With this, we can write the logarithm of the partition function $Z_{\mathcal{P}}$ as the following formal power series, which is also known as the *cluster expansion* of $\mathcal{P}$: $$\log Z_{\mathcal{P}}
= \sum_{r = 1}^\infty L_r(\mathcal{P})
\qquad \text{with} \qquad
L_r(\mathcal{P}) \coloneqq \sum_{\substack{\Gamma \in \mathcal{C}_r(\mathcal{P}) \\ \Gamma = (S_1, \ldots, S_k)}} \phi(H_\Gamma) \prod_{i = 1}^k w(S_i)
\,.$$ We apply this to $\mathcal{P}= \mathcal{P}(\mathbb{Z}_m^n, I)$, choosing an arbitrary maximum independent set $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$. This requires us to figure out the different types of clusters that exist in this model as well as calculate their contribution to the cluster expansion. For small $r$, this is straightforward enough to do.
**Theorem 16**. *For $n, m \in \mathbb{N}$ with $m \geqslant 5$ odd and $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$, we have $$\begin{aligned}
L_1(\mathcal{P}(\mathbb{Z}_m^n, I)) = m^{n-1} \bigg( & 2 \cdot 2^{-n} + \frac {m-3}2 \cdot 2^{-2n} \bigg) \\
L_2(\mathcal{P}(\mathbb{Z}_m^n, I)) = m^{n-1} \bigg( &\big( n^2 - 2n - 1 \big) \cdot 2^{-2n} + \big( 3n^2 - n \big) \cdot 2^{-3n} \\
&{} + \bigg( \frac {3m - 12}2 n^2 + \frac{7 - 2m}2 n - \frac {m-3}4 \bigg) \cdot 2^{-4n} \bigg)
\,.\end{aligned}$$*
*Proof.* By symmetry, it suffices to consider $I = \iota_n(1, 1, \ldots, 1)$. Let $\mathcal{P}\coloneqq \mathcal{P}(\mathbb{Z}_m^n, I)$ be the corresponding polymer model and write $\mathcal{E}(p) \coloneqq \{ v \in V(\mathbb{Z}_m^n) \colon \sum_{i = 1}^n v_i = p \mod m \}$ for $p \in \mathbb{Z}_m$. This means that $I = \mathcal{E}(1) \cup \mathcal{E}(3) \cup \ldots \cup \mathcal{E}(m-2)$, while $\overline I = \mathcal{E}(0) \cup \mathcal{E}(2) \cup \ldots \cup \mathcal{E}(m-1)$. Note that vertices in $X \coloneqq \mathcal{E}(0) \cup \mathcal{E}(m-1)$ have $n$ neighbours in $I$, whereas vertices in $Y \coloneqq \overline I \setminus X$ have $2n$ neighbours in $I$.
For size $1$, each cluster $\Gamma = (\{ v \}) \in \mathcal{C}_1(\mathcal{P})$ consists of a single one-vertex polymer and has $\phi(H_\Gamma) = 1$. Consequently, there are $2 \cdot m^{n-1}$ clusters with $v \in X$ and weight $1 \cdot 2^{-n}$ as well as $\frac {m-3}2 \cdot m^{n-1}$ clusters with $v \in Y$ and weight $1 \cdot 2^{-2n}$.
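Summing $\phi(H_\Gamma) w(\{ v \})$ over these size-one clusters already yields the first formula, $$L_1(\mathcal{P}) = m^{n-1} \left( 2 \cdot 2^{-n} + \frac {m-3}2 \cdot 2^{-2n} \right).$$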
For size $2$, there are nine different types of clusters $\Gamma = (\{ v \}, \{ w \}) \in \mathcal{C}_2(\mathcal{P})$ that consist of two single-vertex polymers. Their incompatibility graph $H_\Gamma$ is a single edge, so $\phi(H_\Gamma) = -\frac 12$ for all of them.
1. $v \in X$ and $w = v$: $2 \cdot m^{n-1}$ clusters of weight $-\frac 12 \cdot 2^{-2n}$.
2. $v \in Y$ and $w = v$: $\frac {m-3}2 \cdot m^{n-1}$ clusters of weight $-\frac 12 \cdot 2^{-4n}$.
3. $v \in \mathcal{E}(0)$ and $w = v - e_i$ or $v \in \mathcal{E}(m-1)$ and $w = v + e_i$ for $i \in [n]$: $2 \cdot m^{n-1} \cdot n$ clusters of weight $-\frac 12 \cdot 2^{-2n}$.
4. $v \in X$ and $w = v + e_i - e_j$ for $i \neq j \in [n]$: $2 \cdot m^{n-1} \cdot n(n-1)$ clusters of weight $-\frac 12 \cdot 2^{-2n}$.
5. $v \in Y$ and $w = v + e_i - e_j$ for $i \neq j \in [n]$: $\frac {m-3}2 \cdot m^{n-1} \cdot n(n-1)$ clusters of weight $-\frac 12 \cdot 2^{-4n}$.
6. $v \in \mathcal{E}(0) \cup \mathcal{E}(m-3)$ and $w = v + 2e_i$ or $v \in \mathcal{E}(2) \cup \mathcal{E}(m-1)$ and $w = v - 2e_i$ for $i \in [n]$: $4 \cdot m^{n-1} \cdot n$ clusters of weight $-\frac 12 \cdot 2^{-3n}$.
7. $v \in \mathcal{E}(2) \cup \ldots \cup \mathcal{E}(m-5)$ and $w = v + 2e_i$ or $v \in \mathcal{E}(4) \cup \ldots \cup \mathcal{E}(m-3)$ and $w = v - 2e_i$ for $i \in [n]$: $(m - 5) \cdot m^{n-1} \cdot n$ clusters of weight $-\frac 12 \cdot 2^{-4n}$.
8. $v \in \mathcal{E}(0) \cup \mathcal{E}(m-3)$ and $w = v + e_i + e_j$ or $v \in \mathcal{E}(2) \cup \mathcal{E}(m-1)$ and $w = v - e_i - e_j$ for $i \neq j \in [n]$: $4 \cdot m^{n-1} \cdot \frac {n(n-1)}2$ clusters of weight $-\frac 12 \cdot 2^{-3n}$.
9. $v \in \mathcal{E}(2) \cup \ldots \cup \mathcal{E}(m-5)$ and $w = v + e_i + e_j$ or $v \in \mathcal{E}(4) \cup \ldots \cup \mathcal{E}(m-3)$ and $w = v - e_i - e_j$ for $i \neq j \in [n]$: $(m - 5) \cdot m^{n-1} \cdot \frac {n(n-1)}2$ clusters of weight $-\frac 12 \cdot 2^{-4n}$.
The remaining clusters $\Gamma = (\{ v, w \}) \in \mathcal{C}_2(\mathcal{P})$ consist of a single two-vertex polymer and thus satisfy $\phi(H_\Gamma) = 1$. In fact, all such polymers $\{ v, w \}$ correspond to two clusters $(\{ v \}, \{ w \})$ and $(\{ w \}, \{ v \})$ of the same type in the list above. It is easy to see that for clusters $(\{ v \}, \{ w \})$ of type (1), (2), and (3), the union of their vertices is not a two-vertex polymer in $\mathcal{P}$. For all the other types, however, $(\{ v, w \})$ is indeed a valid cluster in $\mathcal{C}_2(\mathcal{P})$. We thus obtain the number of these clusters by dividing the number above by $2$. The weights can be calculated by replacing $-\frac 12$ by $1$ and adding a factor of $2$ for every shared neighbour of $v$ and $w$ in $I$. The following list runs through the remaining types (4) to (9), in the same order.
1. $v$ and $w$ share $v + e_i$ if $v \in \mathcal{E}(0)$ or $v - e_j$ if $v \in \mathcal{E}(m-1)$: $m^{n-1} \cdot n(n-1)$ clusters of weight $2 \cdot 2^{-2n}$.
2. $v$ and $w$ share $v + e_i$ and $v - e_j$: $\frac {m-3}4 \cdot m^{n-1} \cdot n(n-1)$ clusters of weight $4 \cdot 2^{-4n}$.
3. $v$ and $w$ share $v + e_i$ if $v \in \mathcal{E}(0) \cup \mathcal{E}(m-3)$ or $v - e_i$ if $v \in \mathcal{E}(2) \cup \mathcal{E}(m-1)$: $2 \cdot m^{n-1} \cdot n$ clusters of weight $2 \cdot 2^{-3n}$.
4. $v$ and $w$ share $v + e_i$ if $v \in \mathcal{E}(2) \cup \ldots \cup \mathcal{E}(m-5)$ or $v - e_i$ if $v \in \mathcal{E}(4) \cup \ldots \cup \mathcal{E}(m-3)$: $\frac {m-5}2 \cdot m^{n-1} \cdot n$ clusters of weight $2 \cdot 2^{-4n}$.
5. $v$ and $w$ share $v + e_i$ and $v + e_j$ if $v \in \mathcal{E}(0) \cup \mathcal{E}(m-3)$ or $v - e_i$ and $v - e_j$ if $v \in \mathcal{E}(2) \cup \mathcal{E}(m-1)$: $2 \cdot m^{n-1} \cdot \frac {n(n-1)}2$ clusters of weight $4 \cdot 2^{-3n}$.
6. $v$ and $w$ share $v + e_i$ and $v + e_j$ if $v \in \mathcal{E}(2) \cup \ldots \cup \mathcal{E}(m-5)$ or $v - e_i$ and $v - e_j$ if $v \in \mathcal{E}(4) \cup \ldots \cup \mathcal{E}(m-3)$: $\frac {m-5}2 \cdot m^{n-1} \cdot \frac {n(n-1)}2$ clusters of weight $4\cdot 2^{-4n}$.
Grouping by exponent of the $2^{-n}$-term and simplifying yields both formulas. ◻
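To illustrate the grouping, consider the coefficient of $2^{-3n}$: the clusters of two single-vertex polymers with weight $-\frac 12 \cdot 2^{-3n}$ contribute $-\frac 12 \big( 4n + 2n(n-1) \big) m^{n-1}$, the corresponding two-vertex polymers contribute $\big( 2 \cdot 2n + 4 \cdot n(n-1) \big) m^{n-1}$, and the total is $\big( 2n + 3n(n-1) \big) m^{n-1} = \big( 3n^2 - n \big) m^{n-1}$, as claimed; the other coefficients arise in the same way.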
Note that every vertex in $\overline I$ has either $n$ or $2n$ edges to $I$, so every polymer $S$ has weight $w(S) \leqslant 2^{- \abs S n}$ and every cluster $\Gamma \in \mathcal{C}_r(\mathcal{P})$ contributes at most $O(2^{-rn})$ to $\log Z_\mathcal{P}$. For small $r$, the number of clusters in $\mathcal{C}_r(\mathcal{P})$ is obviously bounded by $O(p_r(n) \cdot m^n)$ for some polynomial $p_r$, so the terms $L_r(\mathcal{P}(\mathbb{Z}_m^n, I))$ with $m/2^r < 1$ become negligible as $n \to \infty$. In order to establish convergence of the cluster expansion, however, one has to make the same argument for arbitrarily large $r$. Since we are unable to achieve this, the approach above only yields a conjecture on which clusters are relevant, but falls short of a proof.
In the case of $m \in \{ 5, 7 \}$, we have $m/2^3 < 1$, so only clusters of size at most $2$ should be relevant. It is also sensible to assume that starting with distinct maximum independent sets $I, I' \in \mathcal{I}^*(\mathbb{Z}_m^n)$ will again result in $A \cup B \neq A' \cup B'$ for all but a negligible fraction of combinations of $A \in \mathcal{I}(G[\overline I])$ and $B \subseteq I \setminus N_G(A, I)$ as well as $A' \in \mathcal{I}(G[\overline {I'}])$ and $B' \subseteq I' \setminus N_G(A', I')$. Therefore, the calculation in [Theorem 16](#thm: L1L2){reference-type="ref" reference="thm: L1L2"} together with [Lemma 8](#lem: no further maxindepsets){reference-type="ref" reference="lem: no further maxindepsets"} and [Lemma 14](#lem: indepsets polymer model){reference-type="ref" reference="lem: indepsets polymer model"} naturally leads to the following conjecture.
**Conjecture 17**. *$$\begin{aligned}
\abs {\mathcal{I}(\mathbb{Z}_5^n)}
&= (1 \pm o(1)) \cdot 5 \cdot 2^{n-1} \cdot 2^{2 \cdot 5^{n-1}} \cdot \exp \left( \left( \frac 52 \right)^{n-1} + \frac {n^2 - 2n}4 \left( \frac 54 \right)^{n-1} \right) \\
\abs {\mathcal{I}(\mathbb{Z}_7^n)}
&= (1 \pm o(1)) \cdot 7 \cdot 2^{n-1} \cdot 2^{3 \cdot 7^{n-1}} \cdot \exp \left( \left( \frac 72 \right)^{n-1} + \frac {n^2 - 2n + 1}4 \left( \frac 74 \right)^{n-1} \right)
\,.\end{aligned}$$*
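The constants in these expressions can be read off from [Theorem 16](#thm: L1L2){reference-type="ref" reference="thm: L1L2"}. For $m = 5$, for instance, $$L_1 = \left( \tfrac 52 \right)^{n-1} + \tfrac 14 \left( \tfrac 54 \right)^{n-1}
\qquad \text{and} \qquad
L_2 = \tfrac {n^2 - 2n - 1}4 \left( \tfrac 54 \right)^{n-1} + o(1),$$ so that $L_1 + L_2 = \left( \tfrac 52 \right)^{n-1} + \tfrac {n^2 - 2n}4 \left( \tfrac 54 \right)^{n-1} + o(1)$, which is exactly the exponent in the first formula; the case $m = 7$ is analogous.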
# Isoperimetric inequalities {#sec:isoperimetry}
The reasoning behind the proof of the upper bound in [@Gal19] is actually quite similar to the initial argument of [Lemma 14](#lem: indepsets polymer model){reference-type="ref" reference="lem: indepsets polymer model"}: Having fixed a (maximum) independent set $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$, every independent set in $\mathbb{Z}_m^n$ can be partitioned into its intersections $A, B$ with $\overline I$ and $I$, respectively. Since both inherit independence in $\mathbb{Z}_m^n$, the former is a set $A \in \mathcal{I}(\mathbb{Z}_m^n[\overline I])$. By independence of $I$, any $B \subseteq I$ is automatically independent, but once $A$ is known, $B$ must also satisfy $B \subseteq I \setminus N_{\mathbb{Z}_m^n}(A, I)$, leaving exactly $2^{\abs I - \abs {N_{\mathbb{Z}_m^n}(A, I)}}$ choices. For $m \geqslant 3$ odd and $\ell \coloneqq \lfloor m/2 \rfloor$, we have already established in [3](#sec:lower-bounds){reference-type="ref" reference="sec:lower-bounds"} that all $I \in \mathcal{I}^*(\mathbb{Z}_m^n)$ have the same structure, so it suffices to only look at one representative. With the notation of [Lemma 6](#lem: generate maxindepsets){reference-type="ref" reference="lem: generate maxindepsets"}, we choose $I_0 \coloneqq \iota_n(0, 1, \ldots, 1)$. Also using $\abs {\mathcal{I}^*(\mathbb{Z}_m^n)} = m \cdot 2^{n-1}$ and $\abs I = \ell m^{n-1}$ from [3](#sec:lower-bounds){reference-type="ref" reference="sec:lower-bounds"}, we therefore immediately observe that $$\abs {\mathcal{I}(\mathbb{Z}_m^n)}
\leqslant\sum_{I \in \mathcal{I}^*(\mathbb{Z}_m^n)} \sum_{A \in \mathcal{I}(\mathbb{Z}_m^n[\overline I])} 2^{\abs I - \abs {N_{\mathbb{Z}_m^n}(A, I)}}
\leqslant m \cdot 2^{n-1} \cdot 2^{\ell m^{n-1}} \cdot \sum_{A \in \mathcal{I}(\mathbb{Z}_m^n[\overline {I_0}])} 2^{-\abs {N_{\mathbb{Z}_m^n}(A, I_0)}}
\,.$$
It remains to bound the sum on the right from above. For this, we require a lower bound on the neighbourhood size of certain independent sets $A$. At its core, this is asking for an isoperimetric inequality in the graph $\mathbb{Z}_m^n$, that is some lower bound on $\abs {N_{\mathbb{Z}_m^n}(A)}$ in terms of $\abs A$. Fixing all but one coordinate, $\abs {N_{\mathbb{Z}_m^n}(A)} \geqslant\abs A$ is trivial to obtain. In order to make progress towards an upper bound, however, guaranteeing that this neighbourhood is actually slightly larger than $A$ (for $A$ not too large) seems necessary. It is worth noting that proving such an inequality is also the crucial step in establishing convergence of the cluster expansion, since we again need to limit how many independent sets $A$ can contribute a weight of $w(A) = 2^{-\abs {N_{\mathbb{Z}_m^n}(A, I)}}$ to $\log Z_{\mathcal{P}(\mathbb{Z}_m^n, I)}$.
In the following, we will prove an isoperimetric inequality that constitutes first progress towards an upper bound for $\mathcal{I}(\mathbb{Z}_m^n)$ with $m \geqslant 3$ odd, but unfortunately is not strong enough yet. Our proof adapts the approach of [@JK20 Lemma 6.1] to the case of odd sidelength. Since in the construction of the abovementioned maximum independent set $I_0 = \iota_n(0, 1, \ldots, 1) \in \mathcal{I}^*(\mathbb{Z}_m^n)$, vertices $v \in V(\mathbb{Z}_m^n)$ are classified according to the value of $\sum_{i = 1}^n v_i$ mod $m$, it is helpful to refer to this number as the *class* $\operatorname{cls}(v)$ of $v$. This partitions $V(\mathbb{Z}_m^n)$ into $\mathcal{E}_m^n(p) \coloneqq \{ v \in V(\mathbb{Z}_m^n) \colon \operatorname{cls}(v) = p \}$ for $p \in \mathbb{Z}_m$. Note that with this notation, we have $I_0 = \mathcal{E}_m^n(0) \cup \mathcal{E}_m^n(2) \cup \ldots \cup \mathcal{E}_m^n(m-3)$.
For the sake of simplicity, we write $N_n$ instead of $N_{\mathbb{Z}_m^n}$. For arbitrary subsets $A \subseteq V(\mathbb{Z}_m^n)$ and $q \in \mathbb{Z}_m$, we define $A_q \coloneqq \{ v \in V(\mathbb{Z}_m^{n-1}) \colon (v, q) \in A \}$ and hence obviously $\abs A = \sum_{q \in \mathbb{Z}_m} \abs {A_q}$. The following lemma is the central step in our induction.
**Lemma 18**. *For $m, n \in \mathbb{N}$ with $m \geqslant 3$ odd, consider $p, q \in \mathbb{Z}_m$ and $A \subseteq\mathcal{E}_m^n(p)$. Then $$\abs {N_n(A, \mathcal{E}_m^n(p \pm 1))_q}
\geqslant\max \left\lbrace \abs {N_{n-1}(A_q, \mathcal{E}_m^{n-1}(p - q \pm 1))}, \abs {A_{q \mp 1}} \right\rbrace
\,.$$*
*Proof.* By symmetry, it suffices to prove the statement for $N_n(A, \mathcal{E}_m^n(p + 1))_q$. We show that in fact, this set contains both $N_{n-1}(A_q, \mathcal{E}_m^{n-1}(p - q + 1))$ and $A_{q - 1}$.
For the first part, let $v \in N_{n-1}(A_q, \mathcal{E}_m^{n-1}(p - q + 1))$ be arbitrary. Then there is $u \in A_q$ such that $uv \in E(\mathbb{Z}_m^{n-1})$. This means that $(u, q)(v, q) \in E(\mathbb{Z}_m^n)$, so $(v, q) \in N_n(A)$. Furthermore, $\operatorname{cls}(v) = p - q + 1$ implies $\operatorname{cls}((v, q)) = \operatorname{cls}(v) + q = p + 1$, so $(v, q) \in N_n(A, \mathcal{E}_m^n(p + 1))$ and $v \in N_n(A, \mathcal{E}_m^n(p + 1))_q$ follows.
For the second part, let $v \in A_{q - 1}$ be arbitrary. Then $(v, q - 1) \in A$, so $(v, q) \in N_n(A)$. Furthermore, since $(v, q - 1) \in A \subseteq\mathcal{E}_m^n(p)$, we have $\operatorname{cls}((v, q - 1)) = p$, which implies $\operatorname{cls}((v, q)) = \operatorname{cls}((v, q - 1)) + 1 = p + 1$, so $(v, q) \in N_n(A, \mathcal{E}_m^n(p + 1))$ and $v \in N_n(A, \mathcal{E}_m^n(p + 1))_q$ follows. ◻
We shall also use two further easy observations.
**Lemma 19**. *Let $m \in \mathbb{N}$ and $\delta > 0$. Suppose $\alpha_0, \ldots, \alpha_{m-1} \in \mathbb{R}$ such that $\alpha_{q - 1} \leqslant\alpha_q + \delta$ for all $q \in \mathbb{Z}_m$. Then their mean $\alpha \coloneqq \sum_{q \in \mathbb{Z}_m} \alpha_q / m$ satisfies $\alpha_q \in [\alpha - \frac{m-1}2 \delta, \alpha + \frac{m-1}2 \delta]$ for all $q \in \mathbb{Z}_m$. The same conclusion also holds if $\alpha_{q + 1} \leqslant\alpha_q + \delta$ for all $q \in \mathbb{Z}_m$.*
*Proof.* It suffices to only prove the case $\alpha_{q - 1} \leqslant\alpha_q + \delta$ as for $\alpha_{q + 1} \leqslant\alpha_q + \delta$, we can consider the sequence defined by $\alpha'_q \coloneqq \alpha_{-q}$ instead. So let $q \in \mathbb{Z}_m$ be arbitrary and observe that inductively, $\alpha_{q-r} \leqslant\alpha_q + r\delta$ holds for all $r \geqslant 0$ (reading the indices of $\alpha_i$'s modulo $m$). Now calculate $$\alpha
= \sum_{\tilde q \in \mathbb{Z}_m} \frac {\alpha_{\tilde q}}m
= \sum_{r = 0}^{m-1} \frac {\alpha_{q-r}}m
\leqslant\sum_{r = 0}^{m-1} \frac {\alpha_q + r\delta}m
= \alpha_q + \frac \delta m \sum_{r = 0}^{m-1} r
= \alpha_q + \frac {m-1}2 \delta
\,.$$
Reordering the assumption also guarantees that $\alpha_{q + 1} \geqslant\alpha_q - \delta$ for all $q \in \mathbb{Z}_m$, which inductively yields $\alpha_{q+r} \geqslant\alpha_q - r\delta$ for all $r \geqslant 0$. This allows us to obtain the inverse estimate $\alpha = \sum_{r = 0}^{m-1} \frac {\alpha_{q+r}}m \geqslant\alpha_q - \frac \delta m \sum_{r = 0}^{m-1} r = \alpha_q - \frac {m-1}2 \delta$ as well. Taken together, we have shown that $\abs {\alpha - \alpha_q} \leqslant\frac {m-1}2 \delta$ for all $q \in \mathbb{Z}_m$, which is equivalent to $\alpha_q \in [\alpha - \frac{m-1}2 \delta, \alpha + \frac{m-1}2 \delta]$. Since $q \in \mathbb{Z}_m$ was chosen arbitrarily, this finishes the proof. ◻
**Lemma 20**. *Let $a < b$ and $g \colon [a, b] \to \mathbb{R}$ be concave. Then among all multisets $X$ with $m \in \mathbb{N}$ elements in $[a, b]$ and $\sum_{x \in X} x = m \frac {a + b}2$, the minimum value of $\sum_{x \in X} g(x)$ is achieved by $\ell \coloneqq \lfloor m/2 \rfloor$ copies of $a$, $\ell$ copies of $b$, and at most one copy of $\frac {a + b}2$.*
*Proof.* Concavity implies that whenever $X$ contains two values $x, y$ with $a < x \leqslant y < b$, letting $\varepsilon\coloneqq \min \{ x-a, b-y \}$ and replacing $x, y$ by $x - \varepsilon$ and $y + \varepsilon$ does not increase $\sum_{x \in X} g(x)$, while leaving $\sum_{x \in X} x$ unchanged. Inductively, we arrive at a minimizer $X$ that contains at most one value that is neither $a$ nor $b$. Straightforward calculation then shows that it must be the one claimed in the statement. ◻
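For later use we note that for the concave function $g(\alpha) \coloneqq \alpha(1 - \alpha)$ and any $t$, a direct computation gives $$g(\alpha + t) + g(\alpha - t) = 2g(\alpha) - 2t^2,$$ so that $\ell$ copies of $\alpha - t$, $\ell$ copies of $\alpha + t$, and one copy of $\alpha$ sum to $(2\ell + 1)g(\alpha) - 2\ell t^2$ under $g$.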
We are now ready to prove the isoperimetric inequality. It verifies that when we consider a set $A$ in one partition class $\mathcal{E}_m^n(p)$ and its neighbours in one of the adjacent partition classes, then there are at least $\frac {1 - \alpha}{C_m \sqrt n} \abs A$ additional neighbours apart from the trivial $\abs A$ many, where $\alpha \coloneqq \abs A / m^{n-1}$ is the relative size of $A \subseteq\mathcal{E}_m^n(p)$ and the constant $C_m \coloneqq \sqrt{\ell^3m}$ with $\ell \coloneqq \lfloor m/2 \rfloor$ does not depend on $A$ or $n$.
*Proof of [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"}.* By symmetry, it suffices to prove the statement for $N_n(A, \mathcal{E}_m^n(p + 1))$. We proceed by induction on $n$. For $n = 1$, every class consists of a single vertex, so either $\abs A = \abs {N_1(A, \mathcal{E}_m^n(p + 1))} = 0$ or $\abs A = 1$, $\abs{N_1(A, \mathcal{E}_m^n(p + 1))} = 1$, and $\alpha = 1$. In both cases, the inequality holds. So let $n > 1$ and define $\delta \coloneqq m\alpha(1 - \alpha) / \sqrt{\ell^3mn}$. It is easy to see that $A_q \subseteq\mathcal{E}_m^{n-1}(p - q)$ for every $q \in \mathbb{Z}_m$ because of $A \subseteq\mathcal{E}_m^n(p)$. In order to apply the induction hypothesis to $A_0, \ldots, A_{m-1}$, let $\alpha_q \coloneqq \abs {A_q} / m^{n-2}$ and distinguish two cases:
*Case 1:* There is $\tilde q \in \mathbb{Z}_m$ such that $\alpha_{\tilde q - 1} \geqslant\alpha_{\tilde q} + \delta$.
We use [Lemma 18](#lem: slice-est){reference-type="ref" reference="lem: slice-est"} and the induction hypothesis to obtain the estimate $$\abs {N_n(A, \mathcal{E}_m^n(p + 1))_q}
\geqslant\abs{N_{n-1}(A_q, \mathcal{E}_m^{n-1}(p - q + 1))}
\geqslant\abs {A_q}$$ for all $q \in \mathbb{Z}_m \setminus \{ \tilde q \}$. For $\tilde q$ itself, we calculate $$\abs {N_n(A, \mathcal{E}_m^n(p + 1))_{\tilde q}}
\geqslant\abs {A_{\tilde q - 1}}
= \alpha_{\tilde q - 1} m^{n-2}
\geqslant(\alpha_{\tilde q} + \delta)m^{n-2}
= \abs {A_{\tilde q}} + \delta m^{n-2}
\,.$$ Summing up all these inequalities and plugging in the definition of $\delta$, we obtain as desired $$\begin{aligned}
\abs {N_n(A, \mathcal{E}_m^n(p + 1))}
= \sum_{q \in \mathbb{Z}_m} \abs {N_n(A, \mathcal{E}_m^n(p + 1))_q}
&\geqslant\sum_{q \in \mathbb{Z}_m} \abs {A_q} + \frac {m\alpha (1 - \alpha)}{\sqrt{\ell^3mn}} m^{n-2} \\
&= \abs A \left( 1 + \frac {1-\alpha}{\sqrt{\ell^3mn}} \right)
\,.\end{aligned}$$
*Case 2:* We have $\alpha_{q - 1} < \alpha_q + \delta$ for all $q \in \mathbb{Z}_m$.
Here, [Lemma 19](#lem: circular){reference-type="ref" reference="lem: circular"} implies that all $\alpha_q$ are within at most $\ell \delta$ of their mean, which is precisely $\sum_{q \in \mathbb{Z}_m} \alpha_q / m = \sum_{q \in \mathbb{Z}_m} \abs {A_q} / m^{n-1} = \abs A / m^{n-1} = \alpha$. So, $\alpha_q \in [\alpha - \ell \delta, \alpha + \ell \delta]$ holds for all $q \in \mathbb{Z}_m$. According to [Lemma 18](#lem: slice-est){reference-type="ref" reference="lem: slice-est"} and the induction hypothesis, we can now bound $$\begin{aligned}
\label{eq: case-2}
\abs {N_n(A, \mathcal{E}_m^n(p + 1))}
= \sum_{q \in \mathbb{Z}_m} \abs {N_n(A, \mathcal{E}_m^n(p + 1))_q}
&\geqslant\sum_{q \in \mathbb{Z}_m} \abs {N_{n-1}(A_q, \mathcal{E}_m^{n-1}(p - q + 1))} \notag\\
&\geqslant\sum_{q \in \mathbb{Z}_m} \abs {A_q} \left( 1 + \frac {1 - \alpha_q}{\sqrt{\ell^3m(n-1)}} \right) \notag\\
&= \abs A + \frac {m^{n-2}}{\sqrt{\ell^3m(n-1)}} \sum_{q \in \mathbb{Z}_m} \alpha_q(1 - \alpha_q)
\,.\end{aligned}$$
Defining the function $g(\alpha) \coloneqq \alpha(1 - \alpha)$ on the interval $[\alpha - \ell \delta, \alpha + \ell \delta]$, we have to minimize the sum $\sum_{q \in \mathbb{Z}_m} g(\alpha_q)$ subject to the condition $\sum_{q \in \mathbb{Z}_m} \alpha_q = m \alpha$. As $g$ is concave, [Lemma 20](#lem: concavity){reference-type="ref" reference="lem: concavity"} guarantees that choosing $\alpha_0, \ldots, \alpha_{m-1}$ as $\ell$ copies of $\alpha - \ell \delta$, $\ell$ copies of $\alpha + \ell \delta$, and one copy of $\alpha$ itself yields the minimal value, which we calculate as $$\sum_{q \in \mathbb{Z}_m} g(\alpha_q)
\geqslant\ell g(\alpha - \ell \delta) + g(\alpha) + \ell g(\alpha + \ell \delta)
= mg(\alpha) - 2\ell^3\delta^2
\,.$$ In order to determine the relative significance of the error term $-2\ell^3\delta^2$, we continue as follows: $$\frac {mg(\alpha) - 2\ell^3\delta^2}{mg(\alpha)}
= 1 - \frac {2\ell^3m^2\alpha^2(1 - \alpha)^2}{mg(\alpha)\ell^3mn}
= 1 - \frac {2g(\alpha)}{n}
\geqslant 1 - \frac 1{2n}
\geqslant\sqrt {1 - \frac 1n}
= \sqrt { \frac {n-1}n }
\,.$$ Here, the two inequalities follow from the fact that the maximum of $g$ on $[0, 1] \ni \alpha$ is $g(1/2) = 1/4$ as well as the fact that $(1 - 1/(2n))^2 = 1 - 1/n + 1/(4n^2) \geqslant 1 - 1/n$. Finally, we plug this into inequality ([\[eq: case-2\]](#eq: case-2){reference-type="ref" reference="eq: case-2"}) to obtain as desired $$\begin{aligned}
\abs {N_n(A, \mathcal{E}_m^n(p + 1))}
&\overset {\mathclap{(\ref{eq: case-2})}} \geqslant\abs A + \frac {m^{n-2}}{\sqrt{\ell^3m(n-1)}} \sum_{q \in \mathbb{Z}_m} g(\alpha_q) \\
&\geqslant\abs A + \frac {m^{n-2} \cdot mg(\alpha)}{\sqrt{\ell^3m(n-1)}} \sqrt { \frac {n-1}n } \\
&= \abs A \left( 1 + \frac {1 - \alpha}{\sqrt{\ell^3mn}} \right)
\,.\end{aligned}$$ This concludes the proof. ◻
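In the smallest case $m = 3$, where $\ell = 1$ and $C_3 = \sqrt 3$, the inequality reads $\abs {N_n(A, \mathcal{E}_3^n(p \pm 1))} \geqslant\abs A \big( 1 + \frac {1 - \alpha}{\sqrt{3n}} \big)$ for every $A \subseteq\mathcal{E}_3^n(p)$ with relative size $\alpha = \abs A / 3^{n-1}$.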
Obviously, this also establishes a lower bound on the total number of neighbours of $A$, irrespective of their class, as applying [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"} to both $p+1$ and $p-1$ yields double the bound. More generally, we can choose $A$ as an arbitrary subset of pairwise non-adjacent partition classes, which are automatically independent. This leads to the following corollary.
**Corollary 21**. *Let $P \in \mathcal{I}(\mathbb{Z}_m^1)$ be an independent set in the $m$-cycle and $A \subseteq\bigcup_{p \in P} \mathcal{E}_m^n(p)$. Setting $\alpha_p \coloneqq \abs {A \cap \mathcal{E}_m^n(p)} / m^{n-1}$, we have $$\abs{N_{\mathbb{Z}_m^n}(A)}
\geqslant\abs A + \max_{p \in P} \abs {A \cap \mathcal{E}_m^n(p)} + \frac {m^{n-1}}{\sqrt{\ell^3mn}} \sum_{p \in P} \alpha_p (1 - \alpha_p)
\,.$$*
*Proof.* It is easy to see that $\abs {N_{\mathbb{Z}_m^1}(P)} \geqslant\abs P + 1$. Now let $Q \coloneqq N_{\mathbb{Z}_m^1}(P)$ and for each $q \in Q$ choose $p_q \in N_{\mathbb{Z}_m^1}(q, P)$. This can obviously be done such that $\{ p_q \colon q \in Q \} = P$, choosing the $p \in P$ with maximal $\alpha_p$ twice. We then apply [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"} to find that $$\begin{aligned}
\abs{N_{\mathbb{Z}_m^n}(A \cap \mathcal{E}_m^n(p_q), \mathcal{E}_m^n(q))}
&\geqslant\abs {A \cap \mathcal{E}_m^n(p_q)} \left( 1 + \frac {1-\alpha_{p_q}}{\sqrt{\ell^3mn}} \right) \\
&= \abs {A \cap \mathcal{E}_m^n(p_q)} + \frac {m^{n-1}}{\sqrt{\ell^3mn}} \alpha_{p_q}(1-\alpha_{p_q})\end{aligned}$$ for every $q \in Q$. As these sets are disjoint, adding up the inequalities yields $$\begin{aligned}
\label{eq: two-sums}
\abs{N_{\mathbb{Z}_m^n}(A)}
&\geqslant\sum_{q \in Q} \abs{N_{\mathbb{Z}_m^n}(A \cap \mathcal{E}_m^n(p_q), \mathcal{E}_m^n(q))} \notag\\
&\geqslant\sum_{q \in Q} \abs {A \cap \mathcal{E}_m^n(p_q)} + \frac {m^{n-1}}{\sqrt{\ell^3mn}} \sum_{q \in Q} \alpha_{p_q}(1-\alpha_{p_q})
\,.\end{aligned}$$ We recall that the $p_q$ were chosen in a way that guarantees that every $p \in P$ is chosen at least once. This means that $\sum_{q \in Q} \alpha_{p_q}(1 - \alpha_{p_q}) \geqslant\sum_{p \in P} \alpha_p(1 - \alpha_p)$. Moreover, the $p \in P$ with largest $\alpha_p$ is chosen twice. This means that $\sum_{q \in Q} \abs {A \cap \mathcal{E}_m^n(p_q)}$ contains $\max_{p \in P} \abs {A \cap \mathcal{E}_m^n(p)}$ twice, and so $\sum_{q \in Q} \abs {A \cap \mathcal{E}_m^n(p_q)} \geqslant\abs A + \max_{p \in P} \abs {A \cap \mathcal{E}_m^n(p)}$. Plugging both observations into ([\[eq: two-sums\]](#eq: two-sums){reference-type="ref" reference="eq: two-sums"}) yields the desired statement. ◻
In order to see how this is different from the isoperimetric inequality needed to deduce an upper bound on $\abs {\mathcal{I}(\mathbb{Z}_m^n)}$, consider the arguably easiest case $m = 3$. While [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"} examines sets $A$ in, say, $\mathcal{E}_3^n(1)$, we would actually need to consider independent sets $A \subseteq\overline {I_0} = \mathcal{E}_3^n(1) \cup \mathcal{E}_3^n(2)$ and maintain an isoperimetric inequality of the form $$\abs {N_{\mathbb{Z}_3^n}(A, \mathcal{E}_3^n(0))}
\geqslant\abs A \big( 1 + f(n, \abs A / \abs {\overline {I_0}}) \big)
\,.$$
Using the independence of $A \subseteq\overline {I_0}$, one easily observes that $\abs A / \abs {\overline {I_0}} \leqslant 1/2$ and might thus be tempted to hope that $f(n, \abs A / \abs {\overline {I_0}})$ depends on $\abs A / \abs {\overline {I_0}}$ only by involving a factor of $1 - 2 \abs A / \abs {\overline {I_0}}$. There is, however, the following counterexample: Let $A \coloneqq \overline {I_0} \cap I_1$ be the intersection of $\overline {I_0}$ with a different maximum independent set, for example $I_1 \coloneqq \iota_n(0, -1, 1, \ldots, 1)$. Then $\abs A / \abs {\overline {I_0}} = 1/3$ by the argument of [Lemma 9](#lem: intersection maxindepsets){reference-type="ref" reference="lem: intersection maxindepsets"}, but the neighbourhood of $A$ in $I_0$ does not include $I_0 \cap I_1$, so $\abs {N_{\mathbb{Z}_3^n}(A, I_0)} = 2 \cdot 3^{n-2} = \abs A$. Yet, we still believe there is a constant $C$ such that every independent set $A \subseteq\overline {I_0}$ satisfies $$\abs {N_{\mathbb{Z}_3^n}(A, \mathcal{E}_3^n(0))}
\geqslant\abs A \left( 1 + \frac {1 - 3 \abs A / \abs {\overline {I_0}}}{C \sqrt n} \right)
\,.$$ Unfortunately, we did not succeed in proving such a statement with the approach outlined in [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"}.
# Concluding remarks
In this paper, we make progress on a question of Jenssen and Keevash [@JK20] about the number of independent sets in Cartesian powers of the triangle. We elaborate on several properties which illustrate that estimating this number may be much harder than in the bipartite cases (including the hypercube) that have been considered so far; one reason is the more complex isoperimetric inequality that is needed.
We establish in Theorem [Theorem 1](#thm: lowerboundK3){reference-type="ref" reference="thm: lowerboundK3"} a lower bound on the number of independent sets in $\mathbb{Z}_3^n$, which we conjecture to be asymptotically tight. Clearly, it would be desirable to prove that this bound is indeed tight, but even finding an isoperimetric inequality as described after Theorem [Theorem 3](#thm: isoperimetry){reference-type="ref" reference="thm: isoperimetry"} would be interesting.
For Cartesian powers of larger odd cycles, we provide a less precise lower bound on the number of independent sets. Moreover, we show how to approach this question with the cluster expansion method by calculating initial terms for $\mathbb{Z}_5^n$ and $\mathbb{Z}_7^n$. More precise asymptotics and further progress towards an upper bound would again be highly desirable.
# Acknowledgements {#acknowledgements .unnumbered}
We would like to thank Matthew Jenssen for many valuable ideas and discussions as well as for introducing us to the cluster expansion method.
| arxiv_math | {
"id": "2310.02747",
"title": "Independent sets in discrete tori of odd sidelength",
"authors": "Patrick Arras and Felix Joos",
"categories": "math.CO math.PR",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
Let $A= \Bbbk Q/I$ be a finite dimensional gentle algebra. In this article, under some hypothesis on the quiver $Q$, we give conditions for nilpotency of the $L_\infty$-structure on the shifted Bardzell's complex $B(A)[1]$. For nilpotent cases, we describe Maurer-Cartan elements.
address:
- "$^1$ Departamento de Matemática e Estatística, Universidade Federal de São João del-Rei, Praça Frei Orlando, 170, Centro, São João del-Rei, Minas Gerais, Brazil, CEP: 36307-352 "
- $^2$ Instituto de Matemática (INMABB), Departamento de Matemática, Universidad Nacional del Sur (UNS)-CONICET, Bahı́a Blanca, Argentina
- $^3$ Guangdong Technion Israel Institute of Technology, Shantou, Guangdong Province, China
- $^4$ Centro marplatense de Investigaciones Matemáticas, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Mar del Plata, Argentina
author:
- Monique Müller $^1$
- Marı́a Julia Redondo $^{2, 3}$
- Fiorela Rossi Bertone $^2$
- Pamela Suarez $^4$
title: Maurer-Cartan equation for gentle algebras
---
# Introduction
Given an associative algebra $A$, it is well known that the set of equivalence classes of deformations of $A$, in the sense of Gerstenhaber [@G1], is in one-to-one correspondence with the set of Maurer-Cartan elements modulo gauge equivalence, see for instance [@Man]. In order to find Maurer-Cartan elements, one can use the shifted Hochschild complex $C(A)[1]$ which, endowed with the Gerstenhaber bracket, admits the structure of a dg-Lie algebra.
The natural generalization of dg-Lie algebras is the concept of $L_\infty$-algebras. Moreover, two different deformation problems are equivalent if the corresponding dg-Lie algebras are equivalent as $L_\infty$-algebras.
When $A$ is a monomial algebra, Bardzell's complex $B(A)$ has been shown to be more efficient when dealing with computations of the Hochschild cohomology groups. Comparison morphisms between $C(A)$ and $B(A)$ have been described explicitly in [@RR1]. In [@RRB] the authors explicitly translate the dg-Lie algebra structure from $C(A)[1]$ to $B(A)[1]$ using the existence of a contraction. Hence, Maurer-Cartan elements can be computed using the generalized Maurer-Cartan equation $$l_1 (f) - \sum_{n \geq 2} (-1)^{\frac{(n+1)n}{2}} \frac{1}{n!} l_n(f, \dots , f) =0,$$ for $f = \sum_{i \geq 1} f_i t^i$ with $f_i \in B^1(A)[1]$. In general, this equation contains an infinite sum and its convergence may be guaranteed by some nilpotence condition, see for instance [@Get; @Yalin]. We are interested in finding conditions on $A$ such that $l_n(f_1, \dots , f_n) =0$ for all $f_i \in B^1(A)[1]$ and all $n\gg 0$.
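For orientation, note that if the higher brackets vanish, that is, $l_n = 0$ for all $n \geq 3$ as in the dg-Lie algebra case, then with these sign conventions the equation above reduces to the classical Maurer-Cartan equation $$l_1(f) + \frac{1}{2} l_2(f,f) = 0.$$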
Gentle algebras constitute a large and well-studied class of quadratic monomial algebras. They first appeared as iterated tilted algebras of Dynkin type $A_n$ and $\tilde{A}_n$, see [@AH; @AS], so they can be seen as generalizations of algebras of Dynkin type $A_n$.
In this paper, we deal with gentle algebras $A=\Bbbk Q/I$ whose quiver $Q$ has no parallel arrows and no oriented cycles. For them, in Theorem [Theorem 24](#teo ln=0){reference-type="ref" reference="teo ln=0"}, we give conditions on the quiver that ensure the nilpotency of the brackets $l_n(f_1,\dots,f_n)$ for elements $f_i\in B^1(A)[1]$. Moreover, we find a sufficient condition for the vanishing of $l_n(f_1,\dots,f_n)$ for $n\geq 5$, and a necessary condition for the nonvanishing of $l_4(f_1, f_2,f_3,f_4)$, for all $f_i\in B^1(A)[1]$. Furthermore, for nilpotent brackets, we describe the Maurer-Cartan elements in Theorem [Theorem 26](#teo MC elements){reference-type="ref" reference="teo MC elements"} and prove that, in fact, they are just given by $2$-cocycles.
The paper is organized as follows. Section [2](#sec 2){reference-type="ref" reference="sec 2"} contains definitions and notation about gentle algebras, Hochschild and Bardzell's complexes, $L_\infty$-algebras, and Maurer-Cartan elements. In Section [3](#sec 3){reference-type="ref" reference="sec 3"} we give the needed formulae for quadratic algebras. Section [4](#sec 4){reference-type="ref" reference="sec 4"} is devoted to the study of gentle algebras and the description of the subquivers that may imply the nonvanishing of the brackets $l_n(f_1,\dots,f_n)$ for elements $f_i\in B^1(A)[1]$. Finally, in Section [5](#sec 5){reference-type="ref" reference="sec 5"} we present our main results concerning Maurer-Cartan elements.
# Preliminaries {#sec 2}
Let $\Bbbk$ be a field of characteristic zero.
## Quivers, relations and gentle algebras
Consider a finite dimensional associative algebra $A$ which is a quotient $\Bbbk Q/I$ of a path algebra, for a quiver $Q$ and an admissible ideal $I$. We write $Q_0$ and $Q_1$ for the sets of vertices and arrows, respectively. Also, $s, t: Q_1\to Q_0$ denote the source and target maps of an arrow. A path $w = \alpha_1 \cdots \alpha_n$ in $Q$ of length $n \geq 1$ is a sequence of arrows $\alpha_1, \ldots, \alpha_n$ such that $t(\alpha_i) = s(\alpha_{i+1})$ for $1 \leq i < n$. Two paths in $\Bbbk Q$ are said to be parallel if they share starting and ending points. The ideal $I$ is generated by a set $\mathcal{R}$ of paths that are minimal with respect to inclusion of paths, and we set $E=\Bbbk Q_0$. If $I$ is generated by paths of length two, the algebra is called quadratic. By abuse of notation we use the same letters to refer to paths in the path algebra $\Bbbk Q$ or in the quotient algebra $A$.
**Definition 1**. A quadratic algebra $A=\Bbbk Q/I$ is gentle if it satisfies the following conditions:
1. $Q$ is finite;
2. If $\alpha \in Q_1$, then there exists at most one arrow $\beta$ with $s(\beta)=t(\alpha)$ such that $\alpha \beta \in I$ and at most one arrow $\gamma$ with $t(\gamma) = s(\alpha)$ such that $\gamma \alpha \in I$;
3. If $\alpha \in Q_1$, then there exists at most one arrow $\beta$ such that $\alpha \beta \not \in I$ and at most one arrow $\gamma$ such that $\gamma \alpha \not \in I$;
4. The admissible ideal $I$ is generated by the relations in (2).
Notice that from (2) and (3) we have that there are at most two incoming and at most two outgoing arrows at each vertex of the quiver $Q$.
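For instance, the quadratic monomial algebra $A=\Bbbk Q/I$ given by the quiver $$\xymatrix{1 \ar[r]_{\alpha} & 2 \ar[r]_{\beta} & 3}$$ with $I=\langle \alpha\beta \rangle$ is gentle: conditions (2) and (3) hold trivially since every vertex has at most one incoming and at most one outgoing arrow, and $I$ is generated by the single quadratic relation $\alpha\beta$.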
## Hochschild and Bardzell's complexes
The *bar resolution* of $A$ $$C_*(A)= (A \otimes A^{\otimes n} \otimes A, d_n)_{n \geq 0}$$ is a standard free resolution of the $A$-bimodule $A$, where the differential is given by $$\begin{aligned}
d_n (a_0 \otimes \cdots \otimes a_{n+1}) = \sum_{i=0}^{n} (-1)^{i} a_0 \otimes \cdots \otimes a_{i-1} \otimes & a_i a_{i+1} \otimes a_{i+2} \otimes \cdots \otimes a_{n+1}.
\end{aligned}$$ The $\Bbbk$-$A$-map $$s_n :A \otimes A^{\otimes n} \otimes A \to A \otimes A^{\otimes n+1} \otimes A$$ defined by $s_n(x) = 1 \otimes x$ for any $x \in A \otimes A^{\otimes n} \otimes A$ is a homotopy contraction, that is, $$\textsl{Id}= s_{n-1}d_n + d_{n+1} s_n.$$ The *Hochschild complex* $C^*(A)= ( \operatorname{Hom}_\Bbbk(A^{\otimes n}, A), d^n)_{n \geq 0}$ is obtained by applying the functor $\operatorname{Hom}_{A-A}(-,A)$ to the bar resolution, and using the isomorphism $$\begin{aligned}
\operatorname{Hom}_{A-A}(A\otimes V \otimes A,A) & \simeq \operatorname{Hom}_\Bbbk(V,A), \qquad
\hat f \mapsto f
\end{aligned}$$ given by $f (v) = \hat f (1 \otimes v \otimes 1)$. The differential of an $n$-cochain is the $(n+1)$-cochain given by $$\begin{aligned}
&(-1)^n (d^nf)(a_0 \otimes \cdots \otimes a_{n}) = \hat f d_{n+1} (1 \otimes a_0 \otimes \cdots \otimes a_{n} \otimes 1) = a_0 f(a_1 \otimes \cdots \otimes a_n) \\
&\, - \sum_{i=0}^{n-1} (-1)^{i} f(a_0 \otimes \cdots \otimes a_{i-1}\otimes a_i a_{i+1} \otimes a_{i+2} \otimes \cdots \otimes a_n)
+ (-1)^{n+1} f(a_0 \otimes \cdots \otimes a_{n-1})a_n.
\end{aligned}$$ Its cohomology is the *Hochschild cohomology* $\ensuremath{\mathsf{HH}}(A)$ of $A$ with coefficients in $A$.
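For instance, for a $1$-cochain $f \in \operatorname{Hom}_\Bbbk(A, A)$ the previous formula reads $$-(d^1 f)(a_0 \otimes a_1) = a_0 f(a_1) - f(a_0 a_1) + f(a_0) a_1,$$ so $f$ is a $1$-cocycle precisely when $f(a_0 a_1) = a_0 f(a_1) + f(a_0) a_1$ for all $a_0, a_1 \in A$, that is, when $f$ is a derivation of $A$.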
It is well known that the Hochschild cohomology groups $\ensuremath{\mathsf{HH}}^n(A)$ can be identified with the groups $\operatorname{Ext}^n_{A-A}(A,A)$ since $A$ is $\Bbbk$-projective. Thus, we can replace the bar resolution with any convenient resolution of the $A$-bimodule $A$. When $A$ is a monomial algebra, Bardzell's resolution, see [@B], has been mainly used to compute the Hochschild cohomology. The complex obtained from it is $$B^*(A) = (\operatorname{Hom}_{E-E} ({\Bbbk} AP_n, A), (-1)^n \delta^n)_{n \geq 0}$$ where $AP_n$ is the set of supports of $n$-concatenations associated to a presentation $(Q,I)$ of $A$, $E=\Bbbk Q_0$, and $\delta^n (f) = \hat f \delta_{n+1}$; see [@RR1 Section 2.3] for precise definitions.
## Gerstenhaber bracket
Let $f\in \operatorname{Hom}_\Bbbk(A^{\otimes n}, A)$ and $g\in \operatorname{Hom}_\Bbbk(A^{\otimes m}, A)$. The Gerstenhaber bracket $[f, g] \in \operatorname{Hom}_\Bbbk(A^{\otimes n+m-1}, A),$ is given by $$[f, g] = f \circ g - (-1)^{(n-1)(m-1)} g \circ f$$ where $\circ$ denotes the Gerstenhaber product $$\begin{aligned}
f \circ g= \sum_{i=0}^{n-1} (-1)^{i(m+1)} f \circ_i g =\sum_{i=0}^{n-1} (-1)^{i(m+1)} f (\textsl{Id}^{\otimes i} \otimes g \otimes \textsl{Id}^{\otimes n-i-1}).
\end{aligned}$$ In particular, for $n=m=2$, $$\begin{aligned}
[f, g] = f(g \otimes \textsl{Id}- \textsl{Id}\otimes g) + g(f \otimes \textsl{Id}- \textsl{Id}\otimes f).
\end{aligned}$$ It is well known that the shifted Hochschild complex $C^*(A)[1]$ endowed with the Gerstenhaber bracket is a dg-Lie algebra, see [@Tam; @TT].
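On elements, the formula for $n=m=2$ reads $$[f, g](a \otimes b \otimes c) = f(g(a \otimes b) \otimes c) - f(a \otimes g(b \otimes c)) + g(f(a \otimes b) \otimes c) - g(a \otimes f(b \otimes c))$$ for all $a, b, c \in A$.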
## $L_{\infty}$-structure {#subsec l inf}
Following [@RRB], we recall the structure of $L_{\infty}$-algebra on the shifted Bardzell's complex and the weak $L_{\infty}$-equivalence with the shifted Hochschild's complex.
**Definition 2**. An $L_\infty$-algebra is a $\mathbb{Z}$-graded vector space $L$ together with linear maps $$l_n: \otimes^n L \to L$$ of degree $2-n$ subject to the following axioms:
- for every $n \in \mathbb{N}$, every homogeneous $v_1, \ldots, v_n \in L$, and every $\sigma \in \mathbb{S}_n$, $$l_n (v_{\sigma(1)},\ldots, v_{\sigma(n)}) = \chi(\sigma)\ l_n(v_1, \dots, v_n);$$
- for every $n \in \mathbb{N}$, and every homogeneous $v_1, \ldots, v_n \in L$, $$\sum_{i+j=n+1} \sum_{\sigma \in \mathbb{S}_{i,n-i}} (-1)^{i {(j-1)}} \chi(\sigma) \
l_j ( l_i (v_{\sigma(1)}, \ldots, v_{\sigma(i)}), v_{\sigma(i+1)},\ldots, v_{\sigma(n)}) =0$$ where $\chi(\sigma)$ denotes the antisymmetric Koszul sign.
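In low arities these axioms recover familiar notions: for $n=1$ the second axiom reads $l_1(l_1(v))=0$, so $(L, l_1)$ is a cochain complex, while for $n=2$ it says that $l_1$ is a graded derivation with respect to the binary bracket $l_2$. In particular, a dg-Lie algebra is exactly an $L_\infty$-algebra with $l_n=0$ for all $n\geq 3$.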
Let $B=B^*(A)[1]$ and $C=C^*(A)[1]$ be the shifted Bardzell's and Hochschild's complexes, respectively. Let $F_*, G_*$ be the comparison morphisms between the corresponding projective resolutions described explicitly in [@RR1] and in [@BW]. Let $H_*$ be the homotopy map between $\textsl{Id}$ and $F_*G_*$ given by the $A$-$A$-maps $$H_n: C_n(A) \to C_{n+1}(A)$$ defined by $H_0=0$ and $$H_n= \textsl{Id}\otimes
F_n G_n s_{n-1} - \textsl{Id}\otimes H_{n-1} d_n s_{n-1}.$$ Applying the functor $\operatorname{Hom}_{A-A}(-, A)$ we get the induced maps $F^*, G^*, H^*$, and we define recursively the linear maps $$l_n: \otimes^n B\to B, \, v_n: \otimes^n B\to C, \mbox{ and } \phi_n: \otimes^n B\to C$$ by $l_1= (-1)^{*+1}\delta^*$, $v_1 = 0$, $\phi_1= G^*$; and, for $n \geq 2$ and homogeneous $f_1, \dots, f_n \in B$, $$\begin{aligned}
\label{eq: vn}
v_n & = \sum_{t=1}^{n-1} \sum_{\tau \in \mathbb{S}^{-}_{t,n-t}} \chi(\tau){ \kappa (\tau)_{t}} \ [ \phi_t , \phi_{n-t}] \hat \tau \\ \notag
l_n & = F^* v_n, \\
\notag
\phi_n & = H^* v_n
\end{aligned}$$ where $$\begin{aligned}
\hat \tau (f_1 \otimes \cdots \otimes f_n) & = f_{\tau(1)} \otimes \cdots \otimes f_{\tau(n)}, \mbox{ and} \\
[ \phi_t , \phi_{n-t}] (f_1, \ldots, f_n) & = [ \phi_t (f_1, \ldots, f_t), \phi_{n-t}(f_{t+1}, \ldots, f_n)].
\end{aligned}$$
**Theorem 3**. *[@RRB][\[infinito-bardzell\]]{#infinito-bardzell label="infinito-bardzell"} With the notation above, the maps $l_n:\otimes^n B\to B$, $n\in \mathbb{N}$, give $B$ a $L_\infty$-structure, and the quasi-isomorphism $G$ extends to a weak $L_\infty$-equivalence $\phi: B \to C$.*
## Maurer-Cartan elements
Let $L$ be an $L_\infty$-algebra. The set $\mathcal{MC}(L)$ of Maurer-Cartan elements consists of all $f \in L^1$ satisfying the generalized Maurer-Cartan equation $$l_1 (f) - \sum_{n \geq 2} (-1)^{\frac{(n+1)n}{2}} \frac{1}{n!} {l_n(f, \ldots, f)} =0.$$ It is well known that the formal deformations of an algebra $A$ over $\Bbbk [[t]]$ are in one-to-one correspondence with equivalence classes of Maurer-Cartan elements in $\mathcal{MC}( C^*(A)[1] \otimes ((t)))$, see for instance [@DMZ §5].
When $A$ is a monomial algebra and $$B^*(A)[1]= (B^{n+1}(A), {(-1)^{n}} \delta^{n+1}, l_n)$$ is the $L_\infty$-algebra defined in Subsection [2.4](#subsec l inf){reference-type="ref" reference="subsec l inf"}, Theorem [\[infinito-bardzell\]](#infinito-bardzell){reference-type="ref" reference="infinito-bardzell"} and [@Y Theorem 8.13] imply that $$\mathcal{MC} (\overline C^*(A)[1] \otimes ((t))) \simeq \mathcal{MC} (B^*(A)[1] \otimes ((t)))$$ and $f = \sum_{i \geq 1} f_i t^i$ with $f_i \in B^1(A)[1]$ satisfies the generalized Maurer-Cartan equation if and only if $$\label{MC}
- \delta^2(f_i) - \sum_{n \geq 2} \ \sum_{j_1 + \cdots + j_n=i} (-1)^{\frac{(n+1)n}{2}} \frac{1}{n!} {l_n (f_{j_1} , \ldots , f_{j_n})} =0.$$
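For instance, comparing the coefficients of $t$ and $t^2$, the first two equations are $$\delta^2(f_1)=0 \qquad \mbox{ and } \qquad -\delta^2(f_2) + \tfrac{1}{2}\, l_2(f_1,f_1)=0,$$ so the leading term $f_1$ of a Maurer-Cartan element is always a $2$-cocycle.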
# Formulae for quadratic algebras {#sec 3}
From now on, unless explicitly stated, all paths will be considered in $A$, not in $Q$. For any quadratic algebra $A=\Bbbk Q/I$ we present all the needed formulae that will be used throughout this article. Recall that $AP_0= Q_0, AP_1= Q_1$, and $$AP_n= \{ \omega=(\alpha_1, \ldots, \alpha_n): \alpha_i \alpha_{i+1} =0 \mbox{ for every $i$} \}.$$ The $A^e$-linear maps $F_n : A\otimes \Bbbk AP_n \otimes A \to A^{\otimes n+2}$ and $G_n : A^{\otimes n+2} \to A\otimes \Bbbk AP_n \otimes A$ are defined by $$\begin{aligned}
&F_0(1\otimes e\otimes 1)=e\otimes 1, \qquad F_1(1\otimes \alpha\otimes 1)= 1\otimes \alpha\otimes 1, \\
&G_0(1\otimes 1)=1\otimes 1 \otimes 1, \qquad G_1(1\otimes \beta_1\cdots \beta_s \otimes 1)= \sum_{i=1}^s \beta_1\cdots\beta_{i-1}\otimes \beta_i\otimes \beta_{i+1}\cdots\beta_s,
\end{aligned}$$ if $\alpha \in Q_1$, $\beta_1\cdots \beta_s$ a nonzero path in $A$, and $$\begin{aligned}
& F_n(1\otimes (\alpha_1, \ldots, \alpha_n) \otimes 1)
= 1 \otimes \alpha_1 \otimes \cdots \otimes \alpha_n \otimes 1 \\
& G_n(1\otimes u \alpha_1\otimes \alpha_2 \otimes \cdots \otimes \alpha_{n-1} \otimes \alpha_n v \otimes 1)=
u \otimes (\alpha_1, \ldots, \alpha_n)
\otimes v\end{aligned}$$ if $\alpha_i\alpha_{i+1} =0$ for any $i$, $u \alpha_1, \alpha_n v$ nonzero paths in $A$, and it is zero otherwise. For $n=1,2$, the $A^e$-map $H_n: C_n(A) \to C_{n+1}(A)$ is given by $$\begin{aligned}
H_1 (1 \otimes \beta_1 \cdots \beta_m \otimes 1) & = \sum_{i=2}^m 1 \otimes \beta_1 \cdots \beta_{i-1} \otimes \beta_i \otimes \beta_{i+1} \cdots \beta_m
\end{aligned}$$ $$\begin{aligned}
H_2 (1 \otimes \beta_1 \cdots \beta_m \otimes \gamma_1\cdots \gamma_r \otimes 1) =&
\ \Theta(\beta_m, \gamma_1) \
1 \otimes \beta_1 \cdots \beta_{m-1} \otimes \beta_m \otimes \gamma_1 \otimes \gamma_2 \cdots \gamma_r
\\
& - \sum_{i=2}^r 1 \otimes \beta_1 \cdots \beta_{m} \otimes \gamma_1 \cdots \gamma_{i-1} \otimes \gamma_i \otimes \gamma_{i+1} \cdots \gamma_r.
\end{aligned}$$ with $\Theta(\beta_m, \gamma_1)=1$ if $\beta_m \gamma_1 =0$, and zero otherwise.
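Unwinding the definitions, for any $f \in B^{1}(A)[1]$ and nonzero paths $u\alpha_1$ and $\alpha_2 v$ in $A$ with $\alpha_1, \alpha_2 \in Q_1$, we have $$\phi_1(f)(1 \otimes u\alpha_1 \otimes \alpha_2 v \otimes 1) = u\, f(\alpha_1\alpha_2)\, v$$ if $\alpha_1 \alpha_2 = 0$, and $\phi_1(f)(1 \otimes u\alpha_1 \otimes \alpha_2 v \otimes 1)=0$ otherwise. This computation is used repeatedly in the sequel.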
A basis of $B^1(A)[1] = \operatorname{Hom}_{E-E}(\Bbbk AP_2,A)$ is given by the set of $\Bbbk$-linear maps $(\alpha_1\alpha_2||u)$ with $\alpha_1\alpha_2=0$, $u$ a nonzero path parallel to $\alpha_1\alpha_2$, and $$(\alpha_1\alpha_2||u)(\omega) =
\begin{cases}
u, & \mbox{if $\omega = (\alpha_1,\alpha_2)$};\\
0, & \mbox{otherwise.}
\end{cases}$$
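To illustrate this notation, let $A= \Bbbk Q/I$ be the gentle algebra with quiver $$\xymatrix{1 \ar[r]_{\alpha} \ar@/^1pc/[rr]^{\gamma} & 2 \ar[r]_{\beta} & 3}$$ and $I=\langle \alpha\beta \rangle$. Then $AP_2=\{(\alpha,\beta)\}$ and the only nonzero path parallel to $\alpha\beta$ is the arrow $\gamma$, so $B^1(A)[1]$ is one-dimensional with basis $\{(\alpha\beta||\gamma)\}$.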
Since we are interested in computing Maurer-Cartan elements, we have to study the behaviour of $$l_n(f_1, \dots, f_n)(\omega)$$ for any $f_i \in B^{1}(A)[1]$ and $\omega= (\alpha_1, \alpha_2, \alpha_3) \in AP_3$. For $n=1$ we have $$l_1(f)(\omega)= - \delta^2(f)(\alpha_1,\alpha_2, \alpha_3 )= f( \alpha_1,\alpha_2) \alpha_3-\alpha_1f( \alpha_2,\alpha_3),$$ and, for any $n \geq 2$, $$\begin{aligned}
l_n(f_1,\ldots, f_n)(\omega) &=v_n(f_1, \ldots, f_n)F_3(1 \otimes (\alpha_1,\alpha_2,\alpha_3) \otimes 1)\\
&=v_n(f_1, \ldots, f_n)(1\otimes \alpha_1\otimes \alpha_2\otimes \alpha_3\otimes 1).
\end{aligned}$$ Hence, using [\[eq: vn\]](#eq: vn){reference-type="eqref" reference="eq: vn"} and the fact that $$\phi_k (f_{i_1} , \dots , f_{i_k})(1 \otimes \alpha_i \otimes \alpha_{j} \otimes 1) = v_k (f_{i_1} , \dots , f_{i_k}) H_2 (1 \otimes \alpha_i \otimes \alpha_{j}\otimes 1) =0$$ for any $k >1$, we obtain that $$\begin{aligned}
\label{l_n}
l_n(f_1,\dots, f_n)(\omega)
= &(-1)^n\sum\limits_{i=1}^n \phi_{n-1}(f_1,\ldots, \hat{f}_i, \ldots f_n)(1\otimes f_i(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1)\\ &- (-1)^n\sum\limits_{i=1}^n \phi_{n-1}(f_1,\ldots, \hat{f}_i, \dots f_n)(1\otimes \alpha_1\otimes f_i(\alpha_2\alpha_3)\otimes 1). \notag \end{aligned}$$
The following lemma is a direct generalization of [@RRB Lemma 4.5].
**Lemma 4**. *Let $A=\Bbbk Q/I$ be a quadratic algebra, $u, v$ paths in $\operatorname{rad}A$ such that $uv \neq 0$. Then $$\phi_n (f_1, \dots, f_n) (1 \otimes u \otimes v \otimes 1)=0$$ for all $n \geq 1$ and for any $f_i \in B^{1}(A)[1]$.*
*Proof.* The statement is clear if $n=1$ since $\phi_1(f)=\hat{f} \circ G_2$ and $G_2(1 \otimes u \otimes v \otimes 1)=0$. Now we proceed by induction. Let $u= \beta_1 \cdots \beta_m$ and $v=\gamma_1 \cdots \gamma_r$. For $n \geq 2$, $$\phi_{n}(f_1,\ldots,f_n)(1\otimes u\otimes v \otimes 1)=v_{n}(f_1,\ldots,f_n)H_2(1\otimes u\otimes v \otimes 1)$$ and $$H_2 (1 \otimes \beta_1 \cdots \beta_m \otimes \gamma_1\cdots \gamma_r \otimes 1) = - \sum_{i=2}^r 1 \otimes \beta_1 \cdots \beta_{m} \otimes \gamma_1 \cdots \gamma_{i-1} \otimes \gamma_i \otimes \gamma_{i+1} \cdots \gamma_r.$$ The result follows since $v_n = \sum_{t=1}^{n-1} \sum_{\tau \in \mathbb{S}^{-}_{t,n-t}} \chi(\tau){ \kappa (\tau)_{t}} \ [ \phi_t , \phi_{n-t}] \hat \tau$ with $$[ \phi_t , \phi_{n-t}] = \phi_t (\phi_{n-t} \otimes \textsl{Id}- \textsl{Id}\otimes \phi_{n-t}) +\phi_{n-t} (\phi_{t} \otimes \textsl{Id}- \textsl{Id}\otimes \phi_{t})$$ and, for any $k$ with $1 \leq k <n$, $$\begin{aligned}
\phi_k (f_{i_1}, \ldots, f_{i_k}) (1 \otimes \beta_1 \cdots \beta_{m} \otimes \gamma_1 \cdots \gamma_{i-1} \otimes 1) & =0, \mbox{ and} \\
\phi_k (f_{i_1}, \ldots, f_{i_k}) (1 \otimes \gamma_1 \cdots \gamma_{i-1} \otimes \gamma_i \otimes 1) &= 0
\end{aligned}$$ by inductive hypothesis. ◻
**Remark 5**. Let $f_1, \dots, f_n \in B^{1}(A)[1]$, $\alpha, \beta \in Q_1$, $v \in \operatorname{rad}A$. If $v\alpha\neq 0$, we have that $$\begin{aligned}
v_n (f_1, \dots, f_n)(1 \otimes v \otimes \alpha \otimes & \beta \otimes 1)= \\
& (-1)^{n+1} \sum_{i=1}^{n} \phi_{n-1}(f_1,\dots,\hat{f}_i,\dots,f_n)(1\otimes v\otimes f_i(\alpha\beta)\otimes 1)
\end{aligned}$$ since $\phi_t(f_{i_1},\dots,f_{i_t})(1\otimes v \otimes \alpha \otimes 1)=0$ for all $t\geq 1$ by Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} and $\phi_t(f_{i_1},\dots,f_{i_t})(1\otimes \alpha\otimes \beta\otimes 1) = v_t (f_{i_1},\dots,f_{i_t}) H_2(1\otimes \alpha\otimes \beta\otimes 1) =0$ for $t\geq 2$. Then, arguing as in the proof of [@RRB Proposition 4.6], the result follows.
**Lemma 6**. *Let $A=\Bbbk Q/I$ be a quadratic algebra, $u=\beta_1\cdots\beta_m$, $v= \gamma_1\cdots \gamma_r$ with $\beta_m \gamma_1 =0$, and $f_1,\dots, f_n \in B^{1}(A)[1]$. If, for all $i$, there is no nonzero path $w$ appearing as a summand of $f_i(\beta_m \gamma_1)$ such that $\beta_{m-1} w=0$ or $w\gamma_2=0$, then $$\phi_n(f_1, \ldots, f_n)(1\otimes u \otimes v \otimes 1)=0$$ for all $n\geq 2$.*
*Proof.* For any $n\geq 2$, by definition, $\phi_n(f_1, \dots, f_n)(1\otimes u \otimes v \otimes 1)$ equals $$\begin{aligned}
& v_n (f_1, \ldots, f_n) (1 \otimes \beta_1 \cdots \beta_{m-1} \otimes \beta_m \otimes \gamma_1 \otimes 1) \gamma_2 \cdots \gamma_r \\
& - \sum_{i=2}^r v_n (f_1, \ldots, f_n) (1\otimes \beta_1 \cdots \beta_{m} \otimes \gamma_1 \cdots \gamma_{i-1} \otimes \gamma_i \otimes 1)\gamma_{i+1} \cdots \gamma_r.
\end{aligned}$$ We proceed by induction on $r$. If $r=1$, from Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"}, for any $t \geq 1$ we get that $\phi_t (f_{i_1}, \ldots, f_{i_t})(1 \otimes \beta_1 \cdots \beta_{m-1} \otimes \beta_m \otimes 1)=0$. Moreover, for any $t>1$, $$\phi_t (f_{i_1}, \ldots, f_{i_t})(1 \otimes \beta_m \otimes \gamma_1 \otimes 1) = v_t (f_{i_1}, \ldots, f_{i_t}) H_2 (1 \otimes \beta_m \otimes \gamma_1 \otimes 1) =0,$$ and $$\begin{aligned}
\phi_{n-1}(f_1, \ldots, \hat{f}_j, \dots, f_n)(1 \otimes \beta_1 \cdots \beta_{m-1} \otimes {f_j}(\beta_m \gamma_1) \otimes 1)=0
\end{aligned}$$ since there is no nonzero path $w$ appearing as a summand in $f_j(\beta_m \gamma_1)$ such that $\beta_{m-1} w=0$.
For $r>1$, arguing as above, it is clear that $$v_n (f_1, \ldots, f_n) (1 \otimes \beta_1 \cdots \beta_{m-1} \otimes \beta_m \otimes \gamma_1 \otimes 1)=0.$$ Now, from Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"}, we have $$\phi_t (f_{i_1}, \ldots, f_{i_t})(1 \otimes \gamma_1 \cdots \gamma_{i-1} \otimes \gamma_i \otimes 1)=0, \qquad \forall t \geq 1.$$ Also, by inductive hypothesis $$\phi_t (f_{i_1}, \ldots, f_{i_t})(1 \otimes \beta_1 \cdots \beta_m \otimes \gamma_1 \cdots \gamma_{i-1} \otimes 1) = 0, \qquad \forall {t\geq 2}.$$ Then, $\phi_n(f_1, \ldots, f_n)(1\otimes u \otimes v \otimes 1)$ is equal to a linear combination of terms of the form $$\begin{aligned}
\phi_{n-1}(f_1, \ldots, \hat{f}_j, \dots, f_n)(1 \otimes \beta_1 \cdots \beta_{m-1} f_j(\beta_m \gamma_1)\gamma_2 \cdots \gamma_{i-1} \otimes \gamma_i \otimes 1).
\end{aligned}$$ Finally, this vanishes by Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"}. In fact, for $i>2$ we use that $\gamma_{i-1}\gamma_i\neq 0$, and, for $i=2$ we have $f_j(\beta_m \gamma_1) \gamma_2 \neq 0$ by hypothesis. ◻
**Proposition 7**. *Let $A=\Bbbk Q/I$ be a quadratic algebra and let $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$ such that*
1. *there is no nonzero path $u$ parallel to $\alpha_1\alpha_2$ such that $u \alpha_3=0$, and*
2. *there is no nonzero path $u$ parallel to $\alpha_2\alpha_3$ such that $\alpha_1 u=0$.*
*Then $l_n(f_1, \ldots, f_n)(\omega)=0$ for all $n \geq 2$ and for any $f_i \in B^{1}(A)[1]$.*
*Proof.* From equation [\[l_n\]](#l_n){reference-type="eqref" reference="l_n"} we have that $$\begin{aligned}
l_n(f_1,\dots, f_n)(\omega)=& (-1)^n\sum\limits_{i=1}^n \phi_{n-1}(f_1,\ldots, \hat{f}_i, \dots f_n)(1\otimes f_i(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1)\\ &- (-1)^n\sum\limits_{i=1}^n \phi_{n-1}(f_1,\ldots, \hat{f}_i, \ldots f_n)(1\otimes \alpha_1\otimes f_i(\alpha_2\alpha_3)\otimes 1). \end{aligned}$$ From Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} we get that the previous expression vanishes if there is no nonzero path $u$ parallel to $\alpha_1 \alpha_2$ such that $u\alpha_3=0$, and no nonzero path $u$ parallel to $\alpha_2 \alpha_3$ such that $\alpha_1 u=0$. ◻
# The case of gentle algebras {#sec 4}
In order to prove our main results for gentle algebras in the next section, we now present a series of lemmas that refine the conditions guaranteeing the nilpotency of the brackets $l_n(f_1, \ldots, f_n)$ appearing in the generalized Maurer-Cartan equation. We also give some relevant counterexamples showing that the hypotheses cannot be weakened.

We next outline the steps we follow in order to obtain the results needed for proving our main results in Theorems [Theorem 24](#teo ln=0){reference-type="ref" reference="teo ln=0"}, [Theorem 25](#theo l_4 no 0 parte2){reference-type="ref" reference="theo l_4 no 0 parte2"} and [Theorem 26](#teo MC elements){reference-type="ref" reference="teo MC elements"}.
We start with a gentle algebra $A=\Bbbk Q/I$ and prove that $l_2(f_1, f_2)=0$, for any $f_1, f_2 \in B^1(A)[1]$;
Adding the assumption that $Q$ has no double arrows and no oriented cycles, we prove that $l_3(f_1, f_2, f_3)=0$, for any $f_1, f_2, f_3 \in B^1(A)[1]$;
If $l_n(f_1, \ldots, f_n) \neq 0$ for some $n \geq 4$ and for some $f_i \in B^1(A)[1], 1 \leq i \leq n$, we prove that $Q$ has a subquiver of the form $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.7pc/[rr]^{u} & 2 \ar[r]_{\alpha_2} & 3 \ar[r]_{\alpha_3} \ar@{.>}@/^0.5pc/[r]^{w'} & 4 & \mbox{ or} &
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[rr]^{u} & 3 \ar[r]_{\alpha_3} & 4;
}$$
More precisely, we prove that the subquivers appearing in the previous step must be of one of the following forms: $$\xymatrix{
1 \ar[r]_{\alpha_1} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[r]^{u'} & 3 \ar[r]_{\alpha_3} \ar@{.>}@/^0.5pc/[r]^{w'} & 4,
& &
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[r]^{u'} & 3 \ar[r]_{\alpha_3} & 4, \mbox{ or}
\\
& 5 \ar[rd]^{\beta_2} \ar[rr]^{\lambda}& & 6 \ar@{.>}@/^1pc/[rrrd]^{\epsilon_2 \cdots \epsilon_p}& &\\
1 \ar[rr]_{\alpha_1}\ar[ru]^{\beta_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\epsilon_1} & & 3 \ar[rr]_{\alpha_3} & & 4;
}$$
We prove that if $Q$ has no subquiver of the form $$\xymatrix{
& & & 5 \ar[rd]^{\gamma_2} \ar[rr]^{\rho} & & 6 \ar@{.>}@/^1pc/[rd]^{\delta_2\cdots\delta_s} &\\
1 \ar[rr]_{\alpha_1} \ar@{.>}@/^1pc/[rr]^{v'} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\gamma_1} & & 3 \ar[ru]_{\delta_1} \ar[rr]_{\alpha_3} & & 4,
}$$ then $l_n(f_1, \ldots, f_n)=0$ for all $n\geq 5$, for all $f_i \in B^1(A)[1]$;
We prove that if $l_4(f_1, f_2, f_3, f_4) \neq 0$ for some $f_i \in B^1(A)[1], 1 \leq i \leq 4$, then $Q$ contains a subquiver of the form $$\xymatrix{
& 5 \ar[rd]^{\beta_2} \ar[rr]^{\lambda}& & 6 \ar@{.>}@/^1pc/[rrrd]^{\epsilon_2\cdots\epsilon_p}& &\\
1 \ar[rr]_{\alpha_1}\ar[ru]^{\beta_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\epsilon_1} & & 3 \ar[rr]_{\alpha_3} & & 4.
}$$
**Remark 8**. Let $A=\Bbbk Q/I$ be a gentle algebra and let $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$ such that $l_n(f_1, \ldots, f_n)(\omega) \neq 0$ for some $n \geq 2$ and for some $f_i \in B^{1}(A)[1], 1 \leq i \leq n$. Then, applying Proposition [Proposition 7](#Prop. 2.13){reference-type="ref" reference="Prop. 2.13"}, since $A$ is gentle, we have
1. there is a nonzero path $u=v' \alpha_2$ parallel to $\alpha_1\alpha_2$, or
2. there is a nonzero path $u=\alpha_2 w'$ parallel to $\alpha_2\alpha_3$.
**Lemma 9**. *Let $A=\Bbbk Q/I$ be a gentle algebra and let $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$. Then $l_2(f_1, f_2)(\omega) =0$ for all $f_1, f_2 \in B^{1}(A)[1]$.*
*Proof.* We will show that $$\begin{aligned}
l_2(f_1, f_2)(\omega)
= &\phi_{1}(f_1)(1\otimes f_2(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1) + \phi_{1}(f_2)(1\otimes f_1(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1)\\
&- \phi_{1}(f_1)(1\otimes \alpha_1\otimes f_2(\alpha_2\alpha_3)\otimes 1) - \phi_{1}(f_2)(1\otimes \alpha_1\otimes f_1(\alpha_2\alpha_3)\otimes 1)
\end{aligned}$$ is zero. Using Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} we get that each summand $\phi_{1}(f_i)(1\otimes f_j(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1)$ vanishes if there is no nonzero path $v' \alpha_2$ appearing as a summand in $f_j(\alpha_1\alpha_2)$. If it appears, say with coefficient $a_j \in \Bbbk$, then $$\phi_{1}(f_i)(1\otimes f_j(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1) = a_j \phi_{1}(f_i) (1 \otimes v' \alpha_2 \otimes \alpha_3 \otimes 1)
= a_j v' f_i (\alpha_2 \alpha_3)$$ which vanishes except for the case of a nonzero path $\alpha_2 w'$ appearing as a summand in $f_i(\alpha_2 \alpha_3)$. The same conclusion holds for $\phi_{1}(f_i)(1\otimes \alpha_1\otimes f_j(\alpha_2\alpha_3)\otimes 1)$. Finally, if $v' \alpha_2$ appears as a summand in $f_k(\alpha_1\alpha_2)$ with coefficient $a_k$ and $\alpha_2 w'$ appears as a summand in $f_k(\alpha_2 \alpha_3)$ with coefficient $b_k$, for $k=1,2$, we get that $$\begin{aligned}
\phi_{1}(f_i)(1\otimes f_j(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1) & = a_j b_i v' \alpha_2 w', \mbox{ and} \\
\phi_{1}(f_i)(1\otimes \alpha_1\otimes f_j(\alpha_2\alpha_3)\otimes 1) & = a_i b_j v' \alpha_2 w',
\end{aligned}$$ so the four summands in the expression above cancel and the proof follows. ◻
**Lemma 10**. *Let $A=\Bbbk Q/I$ be a gentle algebra and let $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$. If $Q$ has no oriented cycles and no parallel arrows then $l_3(f_1, f_2, f_3)(\omega)=0$, for all $f_1, f_2, f_3 \in B^{1}(A)[1]$.*
*Proof.* We claim that $\phi_1(f) (1 \otimes x \otimes y \otimes 1)$ is zero if $x,y$ are both paths of length greater than $1$. Indeed, set $x=x' \nu_1, y=\nu_2 y'$, $\nu_1, \nu_2 \in Q_1$. The result is clear if $\nu_1 \nu_2 \neq 0$. In the other case, $$\phi_1(f)(1 \otimes x' \nu_1 \otimes \nu_2 y' \otimes 1) =
x' f(\nu_1 \nu_2) y'$$ can be nonzero only if at least one summand of $f(\nu_1 \nu_2)$ is a nonzero path starting with $\nu_1$ and ending with $\nu_2$, which is not possible since $Q$ has no oriented cycles.
Now, we have that $$\begin{aligned}
l_3(f_1, f_2, f_3)(\omega)
=&-\sum_{i=1}^3 \phi_{2}(f_1, \ldots, \hat{f}_i, \ldots, f_3)(1\otimes f_i(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1)\\
& + \sum_{i=1}^3 \phi_{2}(f_1, \ldots, \hat{f}_i, \ldots, f_3)(1\otimes \alpha_1\otimes f_i(\alpha_2\alpha_3)\otimes 1).
\end{aligned}$$ Using Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} we get that the first summands vanish if $v' \alpha_2$ does not appear as a summand in $f_i (\alpha_1 \alpha_2)$, and the last summands vanish if $\alpha_2 w'$ does not appear as a summand in $f_i(\alpha_2 \alpha_3)$. If $v' \alpha_2$ appears as a summand in $f_i (\alpha_1 \alpha_2)$ with coefficient $a_i$ then $$\begin{aligned}
\phi_{2}&(f_j, f_k)(1\otimes f_i(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1) = a_i \phi_{2}(f_j, f_k) (1 \otimes v' \alpha_2 \otimes \alpha_3 \otimes 1) \\
& = a_i \phi_1(f_j) (1 \otimes v' \otimes f_k (\alpha_2 \alpha_3) \otimes 1) + a_i \phi_1(f_k) (1 \otimes v' \otimes f_j (\alpha_2 \alpha_3) \otimes 1).
\end{aligned}$$ As $Q$ has no parallel arrows, and $v'$ is parallel to $\alpha_1$, then $v'=v''\beta$ with $v''$ a path of positive length and $\beta\in Q_1$. So $\phi_1(f_k) (1 \otimes v' \otimes f_j (\alpha_2 \alpha_3) \otimes 1)$ vanishes except for $f_j(\alpha_2\alpha_3)$ having an arrow $\mu$ as a summand, with $\beta \mu =0$. Hence, $$\phi_1(f_k) (1 \otimes v''\beta \otimes \mu \otimes 1)= v'' f_k(\beta \mu)$$ is zero since there is no nonzero path parallel to $\beta\mu$ starting with $\beta$. Therefore, $\phi_{2}(f_j, f_k)(1\otimes f_i(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1)=0$.
If $\alpha_2 w'=\alpha_2\delta_1\cdots\delta_s$ appears as a summand in $f_i (\alpha_2 \alpha_3)$ with coefficient $b_i$, then $$\begin{aligned}
\phi_{2}(f_j, f_k)(1\otimes \alpha_1\otimes& f_i(\alpha_2\alpha_3)\otimes 1) = b_i \phi_{2}(f_j, f_k) (1 \otimes \alpha_1 \otimes \alpha_2 w' \otimes 1) \\
= &-\sum_{r=1}^s b_i v_{2}(f_j, f_k) (1 \otimes \alpha_1 \otimes \alpha_2 \delta_1\cdots\delta_{r-1}\otimes \delta_r \otimes 1)\delta_{r+1}\cdots \delta_s \\
=& - b_i \phi_{1}(f_j) (1 \otimes f_k(\alpha_1 \alpha_2)\otimes \delta_1 \otimes 1)\delta_{2}\cdots \delta_s \\
&- b_i \phi_{1}(f_k) (1 \otimes f_j(\alpha_1 \alpha_2)\otimes \delta_1 \otimes 1)\delta_{2}\cdots \delta_s.
\end{aligned}$$ Now, $\phi_{1}(f_k) (1 \otimes f_j(\alpha_1 \alpha_2)\otimes \delta_1 \otimes 1)\delta_{2}\cdots \delta_s$ vanishes except for $f_j(\alpha_1\alpha_2)$ having a path $\beta_1\cdots\beta_m$ as a summand, with $\beta_m \delta_1 =0$. In that case we obtain $$\beta_1\cdots\beta_{m-1}f_k(\beta_m\delta_1)\delta_2\cdots\delta_s$$ which can be nonzero only if $\beta_1\cdots\beta_{m-1}f_k(\beta_m\delta_1)$ has a summand ending with $\delta_1$, since $s>1$. This is not possible because $Q$ has no oriented cycles. Then $\phi_{2}(f_j, f_k)(1\otimes \alpha_1\otimes f_i(\alpha_2\alpha_3)\otimes 1)=0$ and the lemma follows. ◻
If $Q$ has oriented cycles or parallel arrows, the previous result does not hold, as we can see in the following examples.
**Example 11**. [@RRB Example 4.3]. Let $A= \Bbbk Q/I$ with quiver $$\xymatrix{1 \ar[r]_{\alpha_1}&2\ar@<1ex>[r]^{\gamma}\ar[r]_{\alpha_2}&3\ar@<1ex>[r]^{\delta}\ar[r]_{\alpha_3}&4}$$ and $I=<\alpha_1\alpha_2, \alpha_2\alpha_3, \gamma\delta >$. Let $f=(\alpha_1\alpha_2||\alpha_1\gamma)+(\alpha_2\alpha_3|| \alpha_2\delta)+(\gamma\delta||\gamma\alpha_3+\alpha_2\delta)$. One can check that $$l_n (f, \ldots, f) {(\alpha_1,\alpha_2,\alpha_3)} = \begin{cases}
(-1)^{\frac{n-1}{2}}n! \ ( (\alpha_1 \alpha_2, \alpha_2 \alpha_3) || \alpha_1 \gamma \alpha_3), & \mbox{if $n$ is odd;} \\
0, & \mbox{otherwise.} \\
\end{cases}$$
**Example 12**. Let $A= \Bbbk Q/I$ with quiver $$\xymatrix{
& 7 \ar[rd]^{\lambda_2} & & \\
& 5 \ar[u]_{\lambda_1} \ar[rd]^
{\beta_2}&6 \ar[rd]^{\delta_2}&\\
1\ar[r]_{\alpha_1}\ar[ru]^{\beta_1}&2 \ar@/^1.5pc/[uu]^{\nu_2}\ar[r]_{\alpha_2}&3\ar[u]_{\delta_1}\ar[r]_{\alpha_3} &4 \ar@/^1.5pc/[ll]^{\nu_1}}$$ and $I=<\alpha_1\alpha_2, \alpha_2\alpha_3, \nu_1\nu_2, \beta_2\delta_1, \beta_1\lambda_1, \delta_2\nu_1, \nu_2\lambda_2, \lambda_2\delta_2>$. Let $$\begin{aligned}
f=(\alpha_1\alpha_2||\beta_1\beta_2+ & \beta_1\beta_2\alpha_3\nu_1\alpha_2)+(\alpha_2\alpha_3||\alpha_2\delta_1\delta_2) \\
+&(\beta_2 \delta_1||\beta_2\alpha_3\nu_1\alpha_2\delta_1+\lambda_1\lambda_2)
+(\beta_1\lambda_1||\alpha_1\nu_2)+(\nu_2\lambda_2||\alpha_2\delta_1).
\end{aligned}$$ One can check that $$l_n (f, \ldots, f) {(\alpha_1,\alpha_2,\alpha_3)} = \begin{cases}
-n! \beta_1\beta_2\alpha_3\nu_1\alpha_2\delta_1\delta_2, & \mbox{if $3$ divides $n$;} \\
0, & \mbox{otherwise.} \\
\end{cases}$$
Now we present a result concerning the nonvanishing of $l_n$ for $n \geq 4$.
**Proposition 13**. *Let $A=\Bbbk Q/I$ be a gentle algebra and let $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$ such that $l_n(f_1, \ldots, f_n)(\omega)\neq 0$ for some $n \geq 4$ and for some $f_i \in B^{1}(A)[1], 1 \leq i \leq n$. Then*
1. *there is a nonzero path $u$ parallel to $\alpha_1\alpha_2$ and there is a nonzero path $w=\alpha_2 w'$ parallel to $\alpha_2\alpha_3$ such that $uw'=0$ $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.7pc/[rr]^{u} & 2 \ar[r]_{\alpha_2} & 3 \ar[r]_{\alpha_3} \ar@{.>}@/^0.5pc/[r]^{w'} & 4
} , \mbox{ or}$$*
2. *there is a nonzero path $v= v' \alpha_2$ parallel to $\alpha_1\alpha_2$ and there is a nonzero path $u$ parallel to $\alpha_2\alpha_3$ such that $v' u =0$ $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[rr]^{u} & 3 \ar[r]_{\alpha_3} & 4.
}$$*
*Proof.* From equation [\[l_n\]](#l_n){reference-type="eqref" reference="l_n"} we have that $l_n(f_1, \dots, f_n)(\omega)\neq 0$ implies that, for some $i$, $$\begin{aligned}
\label{caso a}
\phi_{n-1}(f_1,\ldots, \hat{f}_i, \ldots, f_n)(1\otimes \alpha_1\otimes f_i(\alpha_2\alpha_3)\otimes 1) \neq 0, \mbox{ or } \\ \label{caso b}
\phi_{n-1}(f_1,\ldots, \hat{f}_i, \ldots , f_n)(1\otimes f_i(\alpha_1\alpha_2)\otimes \alpha_3\otimes 1) \neq 0.
\end{aligned}$$ If [\[caso a\]](#caso a){reference-type="eqref" reference="caso a"} holds, we can affirm that a nonzero path $w= \alpha_2w'$ appears as a summand in $f_i(\alpha_2\alpha_3)$. Applying Lemma [Lemma 6](#lemma: no parallel){reference-type="ref" reference="lemma: no parallel"} to $$\begin{aligned}
\phi_{n-1}&(f_1,\ldots, \hat{f}_i, \ldots, f_n)(1\otimes \alpha_1 \otimes \alpha_2w'\otimes 1)
\end{aligned}$$ we get that there exists a nonzero path $u$ parallel to $\alpha_1\alpha_2$ such that $uw'=0$. We proceed analogously for [\[caso b\]](#caso b){reference-type="eqref" reference="caso b"}. ◻
In the next example we show that the previous results cannot be extended to the case of string algebras.
**Example 14**. Let $A=\Bbbk Q/I$ be the string algebra given by the quiver $$\xymatrix{
& 5\ar[rd]^{\beta_2}\ar[rr]^{\lambda_2}& & 6\ar[d]^{\lambda_3}&\\
1\ar[ru]^{\beta_1}\ar[r]_{\alpha_1}& 2\ar[u]^{\lambda_1}\ar[r]_{\alpha_2}& 3\ar[r]_{\alpha_3}\ar[ru]^{\mu}&4 }$$ with $I=<\alpha_1\alpha_2, \alpha_2\alpha_3, \lambda_1\beta_2, \beta_1\lambda_2, \beta_2\alpha_3, \alpha_2\mu, \mu\lambda_3>$. Set $$f=(\alpha_1\alpha_2||\beta_1\beta_2)+(\beta_2\alpha_3||\lambda_2\lambda_3)+(\beta_1\lambda_2||\beta_1\beta_2\mu+\alpha_1\lambda_1\lambda_2)+(\mu\lambda_3||\alpha_3).$$ Then, one can check that $$l_{3k}(f, \ldots, f)(\alpha_1,\alpha_2,\alpha_3)=c_k \, \alpha_1\lambda_1\lambda_2\lambda_3\neq 0$$ for some nonzero $c_k \in \Bbbk$.
We have seen in Examples [Example 11](#parallel arrows){reference-type="ref" reference="parallel arrows"} and [Example 12](#oriented cycles){reference-type="ref" reference="oriented cycles"} that gentle algebras with non-nilpotent $L_\infty$-structure can be found when the quiver has parallel arrows or oriented cycles.
Now, under the assumption that $Q$ has no oriented cycles and no parallel arrows, we look for conditions ensuring nilpotency.
**Remark 15**. In the subsequent proofs, we will use the following argument to show that $$\phi_n(f_1,\ldots, f_n)(1\otimes x_0 \otimes y_0\otimes 1)=0$$ for any $n\geq 3$. The recursive definition of $\phi_n$ implies that $\phi_n(f_1,\ldots, f_n)(1\otimes x_0 \otimes y_0\otimes 1)$ will be expressed in terms of $\phi_r(f_{i_1},\ldots, f_{i_r}) (1\otimes x_t \otimes y_t \otimes 1)$ for $1 \leq r <n$ and for several possible arguments $1\otimes x_t \otimes y_t \otimes 1$. The desired vanishing will follow from the equalities $$\phi_1(f_j)(1\otimes x_t \otimes y_t \otimes 1)=0
\qquad \mbox{ and } \qquad \phi_2(f_{i_1},f_{i_2})(1\otimes x_t\otimes y_t\otimes 1)=0$$ for all possible arguments. We explain this procedure in detail in the following example.
**Example 16**. Let $Q$ be the quiver $$\xymatrix{
& \bullet\ar[dr]^{\beta_m}\ar@{.>}@/^1.5pc/[drrr]^z &&\\
1 \ar@{.>}@/^.5pc/[ru]^{\beta_1 \cdots \beta_{m-1}} \ar[rr]_{\alpha_1} & & 2 \ar[rr]_{\alpha_2} \ar@/_1.5pc/[rrrr]_{\delta} & & 3 \ar[rr]_{\alpha_3} & & 4
}$$ with $\beta_1 \cdots \beta_m \alpha_2$ and $z \alpha_3$ nonzero paths, $\delta$ an arrow and $(\alpha_1,\alpha_2, \alpha_3)\in AP_3$. For any $n\geq 4$, we will see that $\phi_{n}(f_1, \ldots, f_n) (1\otimes \beta_1\cdots\beta_m\alpha_2\otimes \alpha_3\otimes 1)=0$. We have $$\begin{aligned}
\label{argumento1} \phi_{n}(f_1, \ldots, f_n) &(1\otimes \beta_1\cdots\beta_m\alpha_2\otimes \alpha_3\otimes 1)
\\ \notag
&=v_{n}(f_1,\ldots, f_n) (1\otimes \beta_1\cdots\beta_m\otimes \alpha_2\otimes \alpha_3\otimes 1) \\
\notag
&=(-1)^{n} \sum_{j=1}^n \phi_{n-1}(f_1,\ldots,\hat{f}_j, \dots, f_n) (1\otimes \beta_1\cdots\beta_m\otimes f_j(\alpha_2 \alpha_3)\otimes 1).
\end{aligned}$$ Since the unique nonzero path parallel to $\alpha_2\alpha_3$ is $\delta$, the only argument that can appear for $\phi_{n-1}(f_1,\dots,\hat{f}_j, \dots, f_n)$ is $$\label{argumento2}
1\otimes \beta_1\cdots\beta_m\otimes \delta \otimes 1.$$ Now we proceed, $$\begin{aligned}
\phi_{n-1}&(f_1,\ldots,\hat{f}_j, \ldots, f_n) (1\otimes \beta_1\cdots \beta_m\otimes \delta\otimes 1)\\
\notag &= v_{n-1}(f_1,\ldots,\hat{f}_j, \ldots, f_n) (1\otimes \beta_1\cdots\beta_{m-1}\otimes \beta_m\otimes \delta\otimes 1)\\
&= (-1)^{n-1} \sum_{\substack{k=1 \\k \neq j}}^n \phi_{n-2}(f_1,\ldots,\hat{f}_j,\ldots,\hat{f}_k, \dots, f_n) (1\otimes \beta_1\cdots\beta_{m-1}\otimes f_k(\beta_m \delta) \otimes 1)
\end{aligned}$$ and the unique nonzero path parallel to $\beta_m\delta$ is $z \alpha_3$, so the only argument that can appear for $\phi_{n-2}(f_1,\ldots,\hat{f}_j,\ldots,\hat{f}_k, \ldots, f_n)$ is $$\label{argumento3}
1\otimes \beta_1\cdots\beta_{m-1}\otimes z\alpha_3 \otimes 1.$$ Since $Q$ has no oriented cycles, there is no nonzero path from $1$ to $4$ ending with $\alpha_3$. Hence, $$\begin{aligned}
\phi_{n-2}&(f_1,\ldots,\hat{f}_j,\ldots,\hat{f}_k, \ldots, f_n) (1\otimes \beta_1\cdots\beta_{m-1}\otimes z\alpha_3 \otimes 1)\\
&= v_{n-2}(f_1,\ldots,\hat{f}_j,\ldots,\hat{f}_k, \dots, f_n) (1\otimes \beta_1\cdots\beta_{m-1}\otimes z\otimes \alpha_3 \otimes 1).
\end{aligned}$$ By the definition of $v_{n-2}$ this is a linear combination of elements of the form $$\begin{aligned}
\phi_{n-2-t}(f_{i_1},\ldots, f_{i_{n-2-t}})(1\otimes \phi_t(f_{i_{n-1-t}},\ldots, f_{i_{n-2}})(1\otimes \beta_1\cdots\beta_{m-1}\otimes z\otimes 1)\otimes \alpha_3\otimes 1).
\end{aligned}$$ The only nonzero path parallel to $\beta_1\cdots\beta_{m-1}z$ is $\beta_1\cdots\beta_m\alpha_2$. Then, the unique nonzero elements that may appear are of the form $$\begin{aligned}
\phi_{n-2-t}(f_{i_1},\ldots, f_{i_{n-2-t}})(1\otimes \beta_1\cdots\beta_m\alpha_2 \otimes \alpha_3\otimes 1).
\end{aligned}$$ As the argument of the last equation coincides with the argument of [\[argumento1\]](#argumento1){reference-type="eqref" reference="argumento1"}, we can affirm that the only possible arguments appearing in $\phi_l (f_{j_1},\ldots, f_{j_l})$ are $$1\otimes \beta_1\cdots\beta_m\alpha_2\otimes \alpha_3\otimes 1, \, 1\otimes \beta_1\cdots\beta_m\otimes \delta\otimes 1,\, \mbox{ and } \, 1\otimes \beta_1\cdots\beta_{m-1}\otimes z \alpha_3\otimes 1.$$ One can check that $\phi_1(f_i)$ and $\phi_2(f_i,f_j)$ vanish in all of them, therefore $$\phi_{n}(f_1, \ldots, f_n) (1\otimes \beta_1\cdots\beta_m\alpha_2\otimes \alpha_3\otimes 1)=0.$$
From Proposition [Proposition 13](#prop 1){reference-type="ref" reference="prop 1"} we need to study in detail those algebras whose quivers contain subquivers of one of the following two cases:
- $(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, nonzero paths $u, w'$ such that $uw' =0$ $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.7pc/[rr]^{u} & 2 \ar[r]_{\alpha_2} & 3 \ar[r]_{\alpha_3} \ar@{.>}@/^0.5pc/[r]^{w'} & 4
}, \mbox{ or}$$
- $(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, nonzero paths $v',w$ such that $v'w =0$ $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[rr]^{u} & 3 \ar[r]_{\alpha_3} & 4.
}$$
## Case A
In this case we get the following result.
**Proposition 17**. *Let $A=\Bbbk Q/I$ be a gentle algebra. Assume $Q$ has no oriented cycles, no parallel arrows and it contains a subquiver of the form $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.7pc/[rr]^{u} & 2 \ar[r]_{\alpha_2} & 3 \ar[r]_{\alpha_3} \ar@{.>}@/^0.5pc/[r]^{w'} & 4
}$$ with $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, $u,w'$ nonzero paths and $uw'=0$. If $l_n(f_1, \dots, f_n)(\omega)\neq 0$ for some $n \geq 4$ and for some $f_i \in B^{1}(A)[1], 1 \leq i \leq n$, then $u= \alpha_1 u'$.*
*Proof.* Write $u= \beta_1 \cdots \beta_m$, $\beta_1 \neq \alpha_1$ and $w'= \delta_1 \cdots \delta_s$ with $\beta_m \delta_1=0$. Since $Q$ has no oriented cycles, the unique nonzero path from $1$ to $4$ is $u \alpha_3$. From [\[l_n\]](#l_n){reference-type="eqref" reference="l_n"} and Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"}, we get that $l_n(f_1, \dots, f_n)(\omega)\neq 0$ implies that $$\label{phi n-1 Ai}
\phi_{n-1}(f_1,\dots, \hat{f}_i, \dots, f_n)(1\otimes \alpha_1 \otimes \alpha_2w' \otimes 1)$$ is not zero for some $i$. Observe that [\[phi n-1 Ai\]](#phi n-1 Ai){reference-type="eqref" reference="phi n-1 Ai"} equals $$- v_{n-1}(f_1,\dots, \hat{f}_i, \dots, f_n)(1 \otimes \alpha_1 \otimes \alpha_2 \delta_1 \cdots \delta_{s-1} \otimes \delta_s \otimes 1)$$ since all the other terms do not end with $\alpha_3$. By definition, this is a linear combination of terms of the form $$\begin{aligned}
\label{phi_n-1-t A}
\phi_{n-1-t}(f_{i_1},\dots, f_{i_{n-1-t}})(1\otimes\phi_t (f_{i_{n-t}},\dots, f_{i_{n-1}})(1\otimes\alpha_1\otimes \alpha_2 \delta_1\cdots \delta_{s-1}\otimes 1)\otimes \delta_s\otimes 1).
\end{aligned}$$ Since $Q$ has no oriented cycles, from Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} we get that the unique possibility for the nonvanishing of $\phi_{n-1-t}$ is that there exists a nonzero path $\alpha_1\mu_1\cdots\mu_r$ parallel to $\alpha_1\alpha_2 \delta_1\cdots \delta_{s-1}$ with $\mu_r\neq\delta_{s-1}$. $$\xymatrix{
& & & & & \bullet \ar[rd]^{\delta_s} \\
1 \ar[rr]_{\alpha_1} \ar@{.>}@/^2pc/[rrrr]^u & & 2 \ar[rr]_{\alpha_2} \ar@{.>}@/^2pc/[rrru]^{\mu_1 \cdots \mu_r} & & 3 \ar[rr]_{\alpha_3} \ar@{.>}@/^.5pc/[ru]^{\delta_1 \cdots \delta_{s-1}} & & 4.
}$$ In this case, as the only nonzero path from $1$ to $4$ starts with $\beta_1$, [\[phi_n-1-t A\]](#phi_n-1-t A){reference-type="eqref" reference="phi_n-1-t A"} vanishes for $n-1-t=1$, and, for $n-1-t \geq 2$, it can only be of the form $$\begin{aligned}
\notag
\phi_{n-1-t}&{(f_{j_{1}},\dots, f_{j_{n-t-1}})}(1\otimes \alpha_1\mu_1\cdots\mu_r\otimes\delta_s\otimes 1)
\\
&=v_{n-1-t}{(f_{j_{1}},\dots, f_{j_{n-t-1}})}(1\otimes \alpha_1\mu_1\cdots\mu_{r-1} \otimes\mu_r\otimes\delta_s\otimes 1)\label{case A n-1-t} .
\end{aligned}$$ For $r>1$, [\[case A n-1-t\]](#case A n-1-t){reference-type="eqref" reference="case A n-1-t"} is zero because there are no nonzero paths parallel to $\mu_r\delta_s$ since $Q$ has no oriented cycles. Finally, for $r=1$, the only nonzero path parallel to $\mu_1\delta_s$ is $\alpha_2w'$. Therefore, [\[case A n-1-t\]](#case A n-1-t){reference-type="eqref" reference="case A n-1-t"} can only be a linear combination of terms of the form $$\begin{aligned}
\phi_{n-2-t}(f_{k_1},\dots, f_{k_{n-2-t}})(1\otimes\alpha_1\otimes \alpha_2 w'\otimes 1)
\end{aligned}$$ whose arguments coincide with the one of [\[phi n-1 Ai\]](#phi n-1 Ai){reference-type="eqref" reference="phi n-1 Ai"}. Hence, the only arguments appearing in $\phi_p (f_{t_1},\dots, f_{t_p})$ are $1\otimes \alpha_1\otimes \alpha_2 w' \otimes 1$ and $1\otimes \alpha_1\mu_1\otimes \delta_s \otimes 1$. Applying Remark [Remark 15](#loop argument){reference-type="ref" reference="loop argument"} we get that $l_n(f_1,\dots,f_n)({\omega})$ vanishes. ◻
## Case B
Assume that each quiver in this subsection has no oriented cycles, no parallel arrows, and contains a subquiver $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[rr]^{u} & 3 \ar[r]_{\alpha_3} & 4
}$$ with $\omega =(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, $v'= \beta_1 \dots \beta_m$ and $u=\delta_1 \dots \delta_s$ nonzero paths, and $\beta_m \delta_1 =0$. Observe that, since $Q$ has no double arrows, $m >1$. We start this subsection with a series of technical lemmas concerning the vanishing of $v_n$.
**Lemma 18**. *With the notation above, if $\delta_s\neq \alpha_3$, then $$v_n(f_1,\dots,f_n)(1\otimes \beta_1\cdots\beta_{m-1}\otimes \beta_m\otimes \delta_1\otimes 1)\delta_2\cdots\delta_s =0$$ for any $n \geq 3$ or $m \geq 3$. Moreover, for $n=m=2$, if $s>1$ and $\lambda$ is an arrow in $Q$ parallel to $\beta_2\delta_1$, then there exist $f_1,f_2\in B^{1}(A)[1]$ such that $$v_2(f_1, f_2)(1\otimes \beta_1\otimes \beta_2\otimes \delta_1\otimes 1)\delta_2\cdots\delta_s \neq 0.$$*
*Proof.* From Remark [Remark 5](#rmk 4.6){reference-type="ref" reference="rmk 4.6"} we have that $$\begin{aligned}
v_n (f_1, \dots, &f_n)(1 \otimes \beta_1\cdots\beta_{m-1} \otimes \beta_m \otimes \delta_1\otimes 1) \delta_2\cdots\delta_s = \notag \\ \label{vn A}
& (-1)^{n+1} \sum_{i=1}^{n} \phi_{n-1}(f_1,\dots,\hat{f}_i,\dots,f_n)(1\otimes \beta_1\cdots\beta_{m-1}\otimes f_i(\beta_m\delta_1)\otimes 1) \delta_2\cdots\delta_s .
\end{aligned}$$ Since $\delta_s \neq \alpha_3$, the unique nonzero path from $1$ to $4$ is $\alpha_1 u$. The case $s=1$ was considered in some of the steps developed in Example [Example 16](#ejemplo s=1){reference-type="ref" reference="ejemplo s=1"}.
Assume $s>1$. If $n\geq 3$, equation [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} could be nonzero only if there exists a path $\lambda_1\dots\lambda_h$ parallel to $\beta_m\delta_1$.
Consider $h=1$. For $m=2$, [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} vanishes. For $m \geq 3$, arguing as in Example [Example 16](#ejemplo s=1){reference-type="ref" reference="ejemplo s=1"} one can check that the only arguments appearing in $\phi_t (f_{j_1},\dots, f_{j_t})$ are $1\otimes \beta_1\dots \beta_{m-1}\otimes \lambda \otimes 1$ and $1\otimes \beta_1\dots \beta_{m-2}\otimes z \lambda \otimes 1$, where $z$ is a nonzero path parallel to $\beta_{m-1}$. Applying Remark [Remark 15](#loop argument){reference-type="ref" reference="loop argument"} we get that [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} vanishes.
Assume $h>1$. In this case, [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} is a linear combination of terms of the form $$\begin{aligned}
\label{h>1}
\phi_{n-1-t}(f_{j_1},\dots, f_{j_{n-1-t}})( 1 \otimes x_t \otimes \lambda_h \otimes 1)\delta_2\cdots\delta_s, \mbox{ with }\\
\notag
x_t= \phi_t(f_{j_{n-t}},\dots, f_{j_{n-1}}) (1\otimes \beta_1 \cdots \beta_{m-1} \otimes \lambda_1\cdots \lambda_{h-1}\otimes 1).
\end{aligned}$$ If $\lambda_1=\beta_m$, they vanish. Assume $\lambda_1\neq \beta_m$. Since $Q$ has no oriented cycles, the only nonzero path parallel to $\beta_1 \cdots \beta_{m-1} \lambda_1\cdots \lambda_{h-1}$ could be $\beta_1\cdots\beta_m\alpha_2 \nu_1\cdots\nu_p$: $$\xymatrix{
& & \bullet \ar[rd]^{\lambda_h}\\
& \bullet \ar[rd]^{\beta_m} \ar@{.>}@/^1pc/[ru]^{\lambda_1 \cdots \lambda_{h-1}} & & \bullet \ar@{.>}@/^1pc/[rrrd]^{\delta_2 \cdots \delta_s}\\
1 \ar[rr]_{\alpha_1} \ar@{.>}@/^1pc/[ru]^{\beta_1 \cdots \beta_{m-1}} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\delta_1}& & 3 \ar@{.>}@/_2pc/[lluu]_{\nu_1\cdots\nu_p}\ar[rr]_{\alpha_3}& & 4.
}$$ If $\nu_p=\lambda_{h-1}$ or $n-1-t=1$, then [\[h\>1\]](#h>1){reference-type="eqref" reference="h>1"} vanishes. If not, then [\[h\>1\]](#h>1){reference-type="eqref" reference="h>1"} can only have terms of the form $$v_d(f_{j_1},\dots, f_{j_d})(1\otimes\beta_1\cdots\beta_m\alpha_2\nu_1\cdots\nu_{p-1}\otimes \nu_p\otimes\lambda_h\otimes 1)\delta_2\cdots\delta_s$$ with $d\geq 2$, which is equal to a sum of terms of the form $$(-1)^{d+1}\phi_{d-1}(f_{k_1},\dots, f_{k_{d-1}})(1\otimes\beta_1\cdots\beta_m\alpha_2\nu_1\cdots\nu_{p-1}\otimes f_k(\nu_p\lambda_h)\otimes 1)\delta_2\cdots\delta_s.$$ Now, whether $f_k(\nu_p\lambda_h)$ ends with $\lambda_h$ or with $\delta_1$, we have cycles, which is a contradiction. Hence [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} vanishes for $n \geq 3$.
If $n=2$ and $m>2$, then [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} is zero since there is no nonzero path from $1$ to $4$ starting with $\beta_{1}$. Finally, if $n=2$ and $m=2$, then [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} is zero unless there exists a path $\lambda_1\dots \lambda_h$ appearing as a summand in $f_i(\beta_2\delta_1)$. It also vanishes if $h >1$ since $$\phi_1(f_{j})(1\otimes \beta_1\otimes \lambda_1\dots \lambda_h\otimes 1)=f_j(\beta_1\lambda_1)\lambda_2\cdots\lambda_h$$ and $\lambda_h\delta_2=0$. If $h=1, f_1=(\beta_1 \lambda_1||\alpha_1 \delta_1)$ and $f_2=(\beta_2 \delta_1||\lambda_1)$, then $$v_2(f_1, f_2)(1\otimes \beta_1\otimes \beta_2\otimes \delta_1
\otimes 1)\delta_2\cdots\delta_s = \alpha_1 u \neq 0$$ and the existence of $\lambda, \delta_s$ and $\alpha_3$ implies $s>1$. We have proved that, except for $n=m=2$, [\[vn A\]](#vn A){reference-type="eqref" reference="vn A"} vanishes and the lemma follows. ◻
**Lemma 19**. *With the notation above, if $s=2$, then $$v_n(f_1,\dots,f_n)(1\otimes \beta_1\cdots\beta_{m}\otimes \delta_1 \otimes \delta_2\otimes 1)=0$$ for any $n\geq2$.*
*Proof.* We proceed by induction on $n$. Since $Q$ has no double arrows and no oriented cycles, $\delta_2 \neq \alpha_3$ and $\alpha_1 \delta_1 \delta_2$ is the unique nonzero path from $1$ to $4$. By definition $$v_n(f_1,\ldots,f_n)(1\otimes \beta_1\cdots\beta_{m}\otimes \delta_1 \otimes \delta_2\otimes 1)$$ is a linear combination of terms of the form $$\begin{aligned}
\label{vn s2}
\phi_{n-t}(f_{i_1},\ldots,f_{i_{n-t}})(1\otimes \phi_t(f_{i_{n-t+1}}, \ldots , f_{i_n})(1 \otimes \beta_1\cdots\beta_{m}\otimes \delta_1 \otimes 1) \otimes \delta_2\otimes 1).
\end{aligned}$$ For $n=2$, we have that [\[vn s2\]](#vn s2){reference-type="eqref" reference="vn s2"} starts with $\beta_1$, thus it vanishes. For $n\geq 3$, the possible nonzero paths parallel to $\beta_1\cdots\beta_m\delta_1$ are $\alpha_1\delta_1$ and $\beta_1\cdots\beta_m\alpha_2\mu_1\cdots\mu_h$, where $\mu_1\cdots\mu_h$ is a path from $3$ to $s(\delta_2)$: $$\xymatrix{
& \bullet \ar[rd]^{\beta_m} & & & \bullet \ar[rrd]^{\delta_2}\\
1 \ar[rr]_{\alpha_1} \ar@{.>}@/^1pc/[ru]^{\beta_1 \cdots \beta_{m-1}} & & 2 \ar[rr]_{\alpha_2} \ar[rru]^{\delta_1}& & 3 \ar@{.>}[u]_{\mu_1 \cdots \mu_h}\ar[rr]_{\alpha_3}& & 4.
}$$ Since $\delta_1\delta_2\neq 0$, then [\[vn s2\]](#vn s2){reference-type="eqref" reference="vn s2"} can only be a linear combination of terms of the form $$\phi_{n-t}(f_{j_1}, \dots,f_{j_{n-t}})(1\otimes \beta_1\cdots\beta_m\alpha_2\mu_1\cdots\mu_h \otimes \delta_2\otimes 1).$$ This equals zero for $n-t=1$ and, for $n-t >1$, it equals $$v_{n-t}(f_{j_1}, \dots,f_{j_{n-t}})(1\otimes \beta_1\cdots\beta_m\alpha_2\mu_1\cdots\mu_{h-1}\otimes \mu_h \otimes \delta_2\otimes 1).$$ Now, concerning the existence of nonzero paths parallel to $\mu_h\delta_2$, the only possibility is $\alpha_3$ when $h=1$, since $Q$ has no oriented cycles. For $n-t=2$, we get a path starting in $\beta_1$, hence it vanishes. For $n-t>2$, only the terms of the form $$\begin{aligned}
\phi_{n-t-1}&(f_{k_1}, \dots,f_{k_{n-t-1}})(1\otimes \beta_1\cdots\beta_m\alpha_2\otimes \alpha_3\otimes 1)\\
&= v_{n-t-1}(f_{k_1}, \dots,f_{k_{n-t-1}})(1\otimes \beta_1\cdots\beta_m \otimes \alpha_2\otimes \alpha_3\otimes 1)
\end{aligned}$$ may remain. Since $\phi_{p}(f_{l_1}, \ldots, f_{l_{p}}) (1 \otimes \alpha_2 \otimes \alpha_3 \otimes 1)=0$ for any $p>1$, and $\delta_1 \delta_2$ is the only nonzero path parallel to $\alpha_2 \alpha_3$, the previous expression can only be a linear combination of terms of the form $$\phi_{n-t-2}(f_{l_1}, \ldots, f_{l_{n-t-2}}) (1 \otimes \beta_1\cdots\beta_m \otimes \delta_1 \delta_2 \otimes 1).$$ This is zero if $n-t-2=1$, and it is equal to $$\begin{aligned}
&v_{n-t-2} (f_{l_1}, \dots,f_{l_{n-t-2}})(1\otimes \beta_1\cdots\beta_{m-1}\otimes \beta_m\otimes \delta_1\otimes 1)\delta_2 \\
& - v_{n-t-2} (f_{l_1}, \dots,f_{l_{n-t-2}})(1\otimes \beta_1\cdots\beta_m \otimes \delta_1\otimes \delta_2\otimes 1)
\end{aligned}$$ if $n-t-2>1$. The first term vanishes because the only nonzero path parallel to $\beta_m\delta_1$ starts with $\beta_m$, since $t(\beta_m)=t(\alpha_1)$ and $t(\delta_1) = t(\mu_1)$. The second term vanishes by inductive hypothesis. Hence the result follows. ◻
**Lemma 20**. *With the notation above, if $n\geq2$ and $s>2$, then $$v_n(f_1,\dots,f_n)(1\otimes v'\otimes \delta_1\cdots \delta_{i-1} \otimes \delta_i\otimes 1)\delta_{i+1}\cdots\delta_s =0$$ for every $i$.*
*Proof.* We proceed by induction on $i$. Let $i=2$. By definition, $$v_n(f_1,\dots,f_n)(1\otimes v'\otimes \delta_1\otimes \delta_2\otimes 1)\delta_{3}\cdots\delta_s$$ is a linear combination of terms of the form $$\begin{aligned}
\label{eq term BB}
\phi_{n-t}(f_{j_1},\dots, f_{j_{n-t}})(1\otimes \phi_t(f_{j_{n+1-t}},\dots, f_{j_{n}})(1\otimes v'\otimes\delta_1\otimes 1)\otimes \delta_2\otimes 1)\delta_{3}\cdots\delta_s.
\end{aligned}$$ The only possible nonzero paths parallel to $v'\delta_1$ are $\alpha_1\delta_1$ and $v'\alpha_2\mu_1\cdots\mu_h$, where $\mu_1\cdots\mu_h$ is a nonzero path from $3$ to $s(\delta_2)$. Since $\delta_1 \delta_2 \neq 0$, [\[eq term BB\]](#eq term BB){reference-type="eqref" reference="eq term BB"} can only be a linear combination of terms of the form $$\phi_{n-t}(f_{j_1},\dots, f_{j_{n-t}})(1\otimes v'\alpha_2\mu_1\cdots\mu_h \otimes \delta_2\otimes 1)\delta_{3}\cdots\delta_s.$$ For $n-t=1$, this term vanishes since there is no nonzero path from $1$ to $4$ starting on $\beta_1$. For $n-t>1$, it is equal to $$v_{n-t}(f_{j_1},\dots, f_{j_{n-t}})(1\otimes v'\alpha_2\mu_1\cdots\mu_{h-1}\otimes \mu_h \otimes \delta_2\otimes 1)\delta_{3}\cdots\delta_s.$$ Now, one can check that for $h=1$, this last equation equals zero since the only nonzero path parallel to $\mu_1\delta_2$ starts with $\mu_1$. For $h>1$, we can only have terms of the form $$\phi_{n-1-t}(f_{k_1},\dots, f_{k_{n-1-t}})(1\otimes v'\alpha_2\mu_1\cdots\mu_{h-1} \otimes \nu_1\cdots\nu_p \otimes 1)\delta_{3}\dots\delta_s$$ where $\nu_1\cdots\nu_p$ is a nonzero path from $s(\mu_h)$ to $t(\delta_2)$: $$\xymatrix{ & & & &\bullet \ar[r]^{\delta_2} & \bullet \ar@{.>}@/^1pc/[rdd]^{\delta_3 \cdots \delta_s} & \\
& & & & \bullet \ar[u]^{\mu_h} \ar@{.>}@/_1pc/[ru]_{\nu_1\cdots\nu_p}& & &\\
1 \ar@{.>}@/^1pc/[rr]^{v'} \ar[rr]_{\alpha_1} & & 2 \ar[rr]_{\alpha_2} \ar@/^1pc/[rruu]^{\delta_1} & & 3 \ar@{.>}[u]^{\mu_1 \cdots \mu_{h-1}} \ar[rr]_{\alpha_3} & & 4.}$$ Arguing as in Example [Example 16](#ejemplo s=1){reference-type="ref" reference="ejemplo s=1"} one can check that, for $p=1$, the only possible arguments appearing in $\phi_c (f_{l_1},\dots, f_{l_c})$ are $1\otimes v'\alpha_2\mu_1\cdots\mu_{h-1}\otimes \nu_1 \otimes 1$ and $1\otimes v'\alpha_2\mu_1\cdots\mu_{h-2}\otimes y \nu_1 \otimes 1$, where $y$ is a nonzero path parallel to $\mu_{h-1}$, and for $p>1$, the only possible arguments are $1\otimes v'\alpha_2\mu_1\cdots\mu_{h-1}\otimes \nu_1 \ldots \nu_p \otimes 1$, $1\otimes v'\alpha_2\mu_1\cdots\mu_{h}z\otimes \nu_p \otimes 1$, and $1\otimes v'\alpha_2\mu_1\cdots\mu_{h}\otimes \delta_2 \otimes 1$, where $z$ is a nonzero path from $t(\mu_h)$ to $s(\nu_p)$. Applying Remark [Remark 15](#loop argument){reference-type="ref" reference="loop argument"} we get that [\[eq term BB\]](#eq term BB){reference-type="eqref" reference="eq term BB"} vanishes.
Let $i>2$. By definition, $$v_n(f_1,\dots,f_n)(1\otimes v'\otimes \delta_1\cdots \delta_{i-1} \otimes \delta_i\otimes 1)\delta_{i+1}\cdots\delta_s$$ is a linear combination of terms of the form $$\begin{aligned}
\label{eq term BBB}
\phi_{n-t}(f_{j_1},\dots, f_{j_{n-t}})(1\otimes \phi_t(f_{j_{n+1-t}},\dots, f_{j_{n}})(1\otimes v'\otimes\delta_1 \cdots \delta_{i-1}\otimes 1)\otimes \delta_i\otimes 1)\delta_{i+1}\cdots\delta_s.
\end{aligned}$$ If $t=1$, $\phi_1(f_{k_{1}})(1\otimes v'\otimes\delta_1 \cdots \delta_{i-1}\otimes 1)$ is a path that ends with $\delta_{i-1}$, and in this case [\[eq term BBB\]](#eq term BBB){reference-type="eqref" reference="eq term BBB"} vanishes by Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"}. If $t>1$, the terms in $$\phi_t(f_{j_{n+1-t}},\dots, f_{j_{n}})(1\otimes v'\otimes\delta_1 \cdots \delta_{i-1}\otimes 1)\\
$$ end with $\delta_{i-1}$, or are equal to $$v_t(f_{j_{n+1-t}},\dots, f_{j_{n-2}})(1\otimes v' \otimes \delta_1 \cdots \delta_{i-2} \otimes \delta_{i-1}\otimes 1)$$ which is zero by the inductive hypothesis. ◻
**Proposition 21**. *Let $A=\Bbbk Q/I$ be a gentle algebra. Assume $Q$ has no oriented cycles, no parallel arrows, and contains a subquiver of the form $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[rr]^{u} & 3 \ar[r]_{\alpha_3} & 4
}$$ with $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, $v', u$ nonzero paths, and $v'u=0$. If $l_n(f_1, \dots, f_n)(\omega)\neq 0$ for some $n \geq 4$ and for some $f_i \in B^{1}(A)[1], 1 \leq i \leq n$ then*
1. *$u=u' \alpha_3$, or*
2. *[\[itemtwo\]]{#itemtwo label="itemtwo"} $v'=\beta_1 \beta_2, u= \epsilon_1 \cdots \epsilon_p$ with $p >1$, $\epsilon_p \neq \alpha_3$, and there exists an arrow $\lambda$ parallel to $\beta_2 \epsilon_1$.*
*Proof.* Assume $v'= \beta_1 \cdots \beta_m$, $u= \epsilon_1 \cdots \epsilon_p$, $\beta_m \epsilon_1=0$ and $\epsilon_p \neq \alpha_3$. By assumption, $Q$ has no double arrows so $m>1$, and since $\epsilon_p \neq \alpha_3$, the unique nonzero path from $1$ to $4$ is $\alpha_1 u$. From [\[l_n\]](#l_n){reference-type="eqref" reference="l_n"} and Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} we have that $l_n(f_1, \dots, f_n)(\omega)\neq 0$ implies that $$\begin{aligned}
\phi_{n-1}&(f_1,\dots, \hat{f}_{i_1}, \dots, f_n)(1\otimes v'\alpha_2 \otimes \alpha_3 \otimes 1) \\
= & v_{n-1}(f_1,\dots, \hat{f}_{i_1}, \dots, f_n)(1\otimes v' \otimes \alpha_2 \otimes \alpha_3 \otimes 1) \neq 0
\end{aligned}$$ for some $i_1$. By Remark [Remark 5](#rmk 4.6){reference-type="ref" reference="rmk 4.6"}, it follows that $$\begin{aligned}
\phi_{n-2}(f_1,\dots, \hat{f}_{i_1}, \dots, \hat{f}_{i_2}, \dots, f_n)(1\otimes v' \otimes u \otimes 1) \neq 0
\end{aligned}$$ for some $i_2$. This term equals $$\begin{aligned}
v_{n-2}&(f_1,\dots, \hat{f}_{i_1}, \dots, \hat{f}_{i_2}, \dots, f_n)
(1 \otimes \beta_1 \cdots \beta_{m-1} \otimes \beta_m \otimes \epsilon_1 \otimes 1) \epsilon_2 \cdots \epsilon_p \\
&- \sum_{i=2}^{p}v_{n-2}(f_1,\dots, \hat{f}_{i_1}, \dots, \hat{f}_{i_2}, \dots, f_n)
(1 \otimes \beta_1 \cdots \beta_{m} \otimes \epsilon_1 \cdots \epsilon_{i-1} \otimes \epsilon_i \otimes 1)\epsilon_{i+1}\cdots\epsilon_p.
\end{aligned}$$ Using Lemmas [Lemma 19](#lema s=2){reference-type="ref" reference="lema s=2"} and [Lemma 20](#lema B){reference-type="ref" reference="lema B"}, we get that the unique possible nonzero summand is the first one. Moreover, the first summand also vanishes when $m\geq 3$, see Lemma [Lemma 18](#lema A){reference-type="ref" reference="lema A"}. Hence $l_n(f_1, \dots, f_n)(\omega) =0$ for any $n\geq 4$ when $m\geq 3$. If $m=2$, using again Lemma [Lemma 18](#lema A){reference-type="ref" reference="lema A"}, we get that the first summand vanishes when $n\geq 5$, and its nonvanishing in the case $n=4$ implies the existence of a subquiver as described in [\[itemtwo\]](#itemtwo){reference-type="ref" reference="itemtwo"}. ◻
**Remark 22**. Let $A=\Bbbk Q/I$ be a gentle algebra. Assume $Q$ has no oriented cycles, no parallel arrows, and it contains a subquiver of the form described in [\[itemtwo\]](#itemtwo){reference-type="ref" reference="itemtwo"}, that is, $$\xymatrix{
& 5 \ar[rd]^{\beta_2} \ar[rr]^{\lambda}& & 6 \ar@{.>}@/^1pc/[rrrd]^{\epsilon_2\cdots\epsilon_p}& &\\
1 \ar[rr]_{\alpha_1}\ar[ru]^{\beta_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\epsilon_1} & & 3 \ar[rr]_{\alpha_3} & &4
}$$ with $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, $\epsilon_p\neq\alpha_3$, $\beta_1\beta_2\neq 0$, and $\epsilon_1\cdots\epsilon_p\neq 0$. Then, by the proof of Proposition [Proposition 21](#proposition B){reference-type="ref" reference="proposition B"} and Lemma [Lemma 18](#lema A){reference-type="ref" reference="lema A"}, $l_n(f_1, \ldots , f_n)(\omega)=0$ for $n\geq 5$, and $l_4(f_1, \ldots, f_4)(\omega)$ could be nonzero only when there exist $i\neq j$ such that $\lambda$ appears as a summand in $f_i(\beta_2 \epsilon_1)$ and $f_j(\beta_1 \lambda)\epsilon_2 \cdots \epsilon_p$ is nonzero. Notice that this still holds if $\epsilon_p=\alpha_3$ as we will show in Theorem [Theorem 25](#theo l_4 no 0 parte2){reference-type="ref" reference="theo l_4 no 0 parte2"}.
# Main results {#sec 5}
From Propositions [Proposition 13](#prop 1){reference-type="ref" reference="prop 1"}, [Proposition 17](#proposition A){reference-type="ref" reference="proposition A"} and [Proposition 21](#proposition B){reference-type="ref" reference="proposition B"}, we conclude that if $A=\Bbbk Q/I$ is a gentle algebra whose quiver $Q$ has no oriented cycles and no parallel arrows, and $l_n(f_1, \dots, f_n)(\omega)\neq 0$ for some $n \geq 4$, for some $f_i \in B^{1}(A)[1], 1 \leq i \leq n$, and for some $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, then $Q$ contains a subquiver of the form
1. $$\xymatrix{
1 \ar[r]_{\alpha_1} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[r]^{u'} & 3 \ar[r]_{\alpha_3} \ar@{.>}@/^0.5pc/[r]^{w'} & 4
}$$ with nonzero paths $u',w'$ such that $\alpha_1 u' \alpha_3 \neq 0$,
2. $$\xymatrix{
1 \ar[r]_{\alpha_1} \ar@{.>}@/^0.5pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^0.7pc/[r]^{u'} & 3 \ar[r]_{\alpha_3} & 4
}$$ with nonzero paths $v',u'$ such that $\alpha_1 u' \alpha_3 \neq 0$, or
3. $$\xymatrix{
& 5 \ar[rd]^{\beta_2} \ar[rr]^{\lambda}& & 6 \ar@{.>}@/^1pc/[rrrd]^{\epsilon_2 \cdots \epsilon_p}& &\\
1 \ar[rr]_{\alpha_1}\ar[ru]^{\beta_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\epsilon_1} & & 3 \ar[rr]_{\alpha_3} & & 4
}$$ with nonzero paths $\beta_1\beta_2 \alpha_2$ and $\alpha_1 \epsilon_1 \cdots \epsilon_p$, where $\epsilon_p \neq \alpha_3$.
The following proposition will allow us to find a nilpotence condition on a gentle algebra $A$ whose quiver has no parallel arrows and no oriented cycles. More precisely, we find conditions on the quiver $Q$ that ensure the vanishing of $l_n(f_1, \ldots, f_n)$ for all $n \geq 5$ and for all $f_i \in B^1(A)[1]$.
**Proposition 23**. *Let $A=\Bbbk Q/I$ be the gentle algebra given by the quiver $$\xymatrix{
& & & 5 \ar[rd]^{\gamma_2} \ar[rr]^{\rho} & & 6 \ar@{.>}@/^1pc/[rd]^{\delta_2\cdots\delta_s} &\\
1 \ar[rr]_{\alpha_1} \ar@{.>}@/^1pc/[rr]^{v'} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\gamma_1} & & 3 \ar[ru]_{\delta_1} \ar[rr]_{\alpha_3} & & 4,
} \label{caso 2} \tag{$\star$}$$ with $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, and with $v'$, $\gamma_1 \gamma_2$ and $w'=\delta_1 \cdots \delta_s$ nonzero paths. Then there exists $f \in B^1(A)[1]$ such that $l_n(f, \ldots, f)(\omega) \neq 0$ for any $n=2+3k$, $k \geq 1$.*
*Proof.* Let $v'=\beta_1\dots \beta_m$ and $n\geq 4$. Consider $$f=(\alpha_1\alpha_2||v'\alpha_2+\alpha_1\gamma_1\gamma_2)+(\alpha_2\alpha_3||\gamma_1\gamma_2\alpha_3+\alpha_2w')+(\gamma_2\delta_1||\rho) +(\gamma_1\rho||\alpha_2\delta_1).$$ Since $\alpha_1 \gamma_1 \gamma_2 \alpha_3$ is a nonzero path, by equation [\[l_n\]](#l_n){reference-type="eqref" reference="l_n"} and Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} we have $$\begin{aligned}
l_n(f, \ldots, f)(\omega)=(-1)^n n[&\phi_{n-1}(f, \ldots, f) (1\otimes v'\alpha_2 \otimes \alpha_3\otimes 1) \\
- &
\phi_{n-1}(f, \ldots, f)(1\otimes \alpha_1\otimes \alpha_2w'\otimes 1)]
\end{aligned}$$ which is, by definition, equal to $$\begin{aligned}
&(-1)^n n [ v_{n-1}(f, \ldots, f)(1\otimes v'\otimes \alpha_2 \otimes \alpha_3\otimes 1)
\\
&+ v_{n-1}(f, \ldots, f)(1\otimes \alpha_1\otimes \alpha_2 \otimes \delta_1\otimes 1)\delta_2\cdots\delta_s \\
&+ \sum_{i=2}^s v_{n-1}(f, \ldots, f)(1\otimes \alpha_1\otimes \alpha_2 \delta_1\cdots \delta_{i-1}\otimes \delta_i\otimes 1)\delta_{i+1}\cdots \delta_s
].
\end{aligned}$$ Since the only nonzero path parallel to $\alpha_1\alpha_2\delta_1\cdots \delta_{i-1}$ is $v'\alpha_2\delta_1\cdots \delta_{i-1}$, then $$v_{n-1}(f, \ldots, f)(1\otimes \alpha_1\otimes \alpha_2 \delta_1\cdots \delta_{i-1}\otimes \delta_i\otimes 1)\delta_{i+1}\cdots \delta_s=0$$ for all $i$. Also, as $f(\beta_m\gamma_1)=0$, by Lemma [Lemma 6](#lemma: no parallel){reference-type="ref" reference="lemma: no parallel"} we get that $$v_{n-1}(f, \ldots, f)(1\otimes v'\otimes \alpha_2 \otimes \alpha_3\otimes 1)= (-1)^n \phi_{n-2}(f, \ldots, f)(1\otimes v' \otimes \gamma_1\gamma_2\alpha_3\otimes 1)=0.$$ Hence, $$\begin{aligned}
l_n(f, \ldots, f)(\omega)
= (-1)^n n \ v_{n-1}(f, \ldots, f)(1\otimes \alpha_1\otimes \alpha_2 \otimes \delta_1\otimes 1)\delta_2\dots\delta_s \label{top row} \end{aligned}$$ which is equal to $-n(n-1)v_{n-2}(f, \ldots, f)(1\otimes \alpha_1\gamma_1\otimes \gamma_2\otimes \delta_1\otimes 1)\delta_2\cdots\delta_s$. This term vanishes if $n=4$ and, for $n \geq 5$, it is equal to $$\begin{aligned}
(-1)^nn(n-1)&(n-2) v_{n-3}(f, \ldots, f)(1\otimes \alpha_1\otimes \gamma_1\otimes \rho\otimes 1)\delta_2\cdots\delta_s
\\
=&
\begin{cases}
-\frac{n!}{(n-4)!}v_{n-4}(f, \ldots, f)(1\otimes \alpha_1\otimes \alpha_2 \otimes \delta_1\otimes 1)\delta_2\cdots\delta_s, & \mbox{ if $n\geq 6$,}\\
5!\, v'\alpha_2 w', & \mbox{ if $n=5$.}
\end{cases}
\end{aligned}$$ Now, using [\[top row\]](#top row){reference-type="eqref" reference="top row"}, we get that, for $n\geq 6$, $$l_n(f, \ldots, f) (\omega) = (-1)^{n} \frac{n!}{(n-3)!} l_{n-3}(f, \ldots, f)(\omega).$$ Therefore, one can check that $$l_n(f, \ldots, f)(\omega)= -(-1)^{\frac{k(k+1)}{2}} n! \,v'\alpha_2w'\neq 0$$ for $n=2+3k$ and $k\geq1$. ◻
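For instance, unrolling the recursion for the first two admissible values of $n$ is consistent with this closed formula: $$\begin{aligned}
k=1\ (n=5):&\quad l_5(f, \ldots, f)(\omega) = 5!\, v'\alpha_2 w' = -(-1)^{\frac{1\cdot 2}{2}}\, 5!\, v'\alpha_2 w'\,,\\
k=2\ (n=8):&\quad l_8(f, \ldots, f)(\omega) = (-1)^{8}\,\tfrac{8!}{5!}\, l_5(f, \ldots, f)(\omega) = 8!\, v'\alpha_2 w' = -(-1)^{\frac{2\cdot 3}{2}}\, 8!\, v'\alpha_2 w'\,.
\end{aligned}$$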
In the next two theorems we find a sufficient condition for the vanishing of $l_n(f_1, \ldots, f_n)$ for $n \geq 5$, and a necessary condition for the nonvanishing of $l_4(f_1, f_2, f_3, f_4)$, for $f_i \in B^1(A)[1]$. To this end, in the next proof we will also keep track of those terms that vanish already for $n=4$.
**Theorem 24**. *Let $A=\Bbbk Q/I$ be a gentle algebra. Assume $Q$ has no oriented cycles and no parallel arrows. If $Q$ does not contain subquivers of the form [\[caso 2\]](#caso 2){reference-type="eqref" reference="caso 2"}, then $l_n(f_1, \ldots, f_n)=0$ for all $n \geq 5$, for all $f_i \in B^1(A)[1], 1 \leq i \leq n$.*
*Proof.* Suppose that there exist $\omega=(\alpha_1, \alpha_2, \alpha_3)\in AP_3$ and $f_1, \ldots, f_n\in B(A)[1]$ such that $l_n(f_1, \dots, f_n)(\omega)\neq 0$ and $n\geq 5$. It follows from Propositions [Proposition 13](#prop 1){reference-type="ref" reference="prop 1"}, [Proposition 17](#proposition A){reference-type="ref" reference="proposition A"} and [Proposition 21](#proposition B){reference-type="ref" reference="proposition B"}, and Remark [Remark 22](#rmk l_4 no 0){reference-type="ref" reference="rmk l_4 no 0"} that the quiver $Q$ has a subquiver of the form $$\xymatrix{
& 1 \ar[r]_{\alpha_1} \ar@{.>}@/^1pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^1pc/[r]^{u'} & \ar@{.>}@/^1pc/[r]^{w'} 3 \ar[r]_{\alpha_3} & 4
}$$ where the nonzero paths $v'$ and $w'$ need not both appear. Set $v'=\beta_1\cdots\beta_m$, $u'=\gamma_1\cdots\gamma_r$, and $w'=\delta_1\cdots\delta_s$, where $m,r,s >1$. By definition, $l_n(f_1, \dots, f_n)(\omega)$ is a linear combination of terms of the form $$\begin{aligned}
\label{eqn1}
\phi_{n-1}(f_1, \dots, \hat{f}_i,\dots, f_n)(1\otimes v'\alpha_2\otimes \alpha_3\otimes 1), \mbox{ and }\\ \label{eqn1'}
\phi_{n-1}(f_1, \dots, \hat{f}_i,\dots, f_n)(1\otimes\alpha_1\otimes \alpha_2w'\otimes 1).
\end{aligned}$$ We first consider the term [\[eqn1\]](#eqn1){reference-type="eqref" reference="eqn1"}, which is equal to $$v_{n-1}(f_1, \dots, \hat{f}_i,\dots, f_n)(1\otimes v'\otimes \alpha_2\otimes \alpha_3\otimes 1),$$ and can only be written as a linear combination of terms of the form $$\begin{aligned}
\label{termino 2}
& v_{n-2}(f_1, \dots, \hat{f}_i,\dots, \hat{f}_k,\dots, f_n)(1\otimes v'\otimes \gamma_1\cdots \gamma_{j-1}\otimes \gamma_j\otimes 1)\gamma_{j+1}\cdots\gamma_r\alpha_3, \\ \label{termino 3}
& v_{n-2}(f_1, \dots, \hat{f}_i,\dots, \hat{f}_k,\dots, f_n)(1\otimes v'\otimes u'\otimes\alpha_3\otimes 1), \mbox{ and } \\
\label{termino 1}
& v_{n-2}(f_1, \dots, \hat{f}_i,\dots, \hat{f}_k,\dots, f_n)(1\otimes\beta_1\cdots \beta_{m-1}\otimes \beta_m\otimes \gamma_1\otimes 1)\gamma_2\cdots \gamma_r\alpha_3.
\end{aligned}$$ Since $Q$ has no oriented cycles, the only nonzero path parallel to $v'\gamma_1\cdots\gamma_{j-1}$ is $\alpha_1\gamma_1\cdots \gamma_{j-1}$, then [\[termino 2\]](#termino 2){reference-type="eqref" reference="termino 2"} vanishes by Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"} for any $n\geq 4$. By definition, [\[termino 3\]](#termino 3){reference-type="eqref" reference="termino 3"} is a linear combination of terms of the form $$\begin{aligned}
\phi_{n-3}(f_{i_1}, \dots,f_{i_{n-3}} )&(1\otimes \beta_1\cdots\beta_{m-1} f_{i_{n-2}}(\beta_m \gamma_1)\gamma_2\cdots \gamma_r\otimes \alpha_3\otimes 1), \mbox{ and }\\
\phi_{n-2-t}(f_{k_1}, \dots,f_{k_{n-2-t}})&(1\otimes \phi_t(f_{k_{n-1-t}}, \dots,f_{k_{n-2}})(1\otimes v' \otimes u' \otimes 1)\otimes \alpha_3\otimes 1),
\end{aligned}$$ for $2\leq t < n-2$. Since $\gamma_r\alpha_3 \neq 0$, and $$\begin{aligned}
\phi_t(f_{k_{n-1-t}},& \dots,f_{k_{n-2}})(1\otimes v' \otimes u' \otimes 1)\\
= &v_t(f_{k_{n-1-t}}, \dots,f_{k_{n-2}})(1\otimes \beta_1\cdots\beta_{m-1}\otimes \beta_m\otimes \gamma_1\otimes 1)\gamma_2\cdots\gamma_r \\
& - \sum_{j=2}^r v_t(f_{k_{n-1-t}}, \dots,f_{k_{n-2}})(1\otimes v' \otimes \gamma_1\cdots \gamma_{j-1}\otimes \gamma_j\otimes 1)\gamma_{j+1}\cdots\gamma_r,
\end{aligned}$$ we have that [\[termino 3\]](#termino 3){reference-type="eqref" reference="termino 3"} is zero for $n=4$ and the only nonzero terms which can appear in [\[termino 3\]](#termino 3){reference-type="eqref" reference="termino 3"} for $n\geq 5$ are $$\begin{aligned}
\phi_{n-2-t}(f_{k_1}, \dots,f_{k_{n-2-t}})&(1\otimes v_t(f_{k_{n-1-t}}, \dots,f_{k_{n-2}})(1\otimes v' \otimes \gamma_1\cdots \gamma_{r-1}\otimes \gamma_r\otimes 1) \otimes \alpha_3\otimes 1).
\end{aligned}$$ By definition of $v_t$, we have to study the terms of the form $$\begin{aligned}
\phi_{t-c}(f_{p_1}, \dots,f_{p_{t-c}})(1\otimes \phi_c(f_{p_{t-c+1}}, \dots,f_{p_{t}})(1\otimes v'\otimes \gamma_1\cdots\gamma_{r-1} \otimes 1 ) \otimes \gamma_r\otimes 1)
\end{aligned}$$ for all $1 \leq c <t$. Since the only nonzero path parallel to $v' \gamma_1\cdots\gamma_{r-1}$ is $\alpha_1 \gamma_1\cdots\gamma_{r-1}$ and $\gamma_{r-1}\gamma_r\neq 0$, we get that [\[termino 3\]](#termino 3){reference-type="eqref" reference="termino 3"} vanishes for all $n \geq 4$.
Now we will see that [\[termino 1\]](#termino 1){reference-type="eqref" reference="termino 1"} equals zero. Since $\phi_t(1 \otimes \beta_m \otimes \gamma_1 \otimes 1)=0$ when $t>1$, we have that [\[termino 1\]](#termino 1){reference-type="eqref" reference="termino 1"} is a linear combination of terms of the form $$\phi_{n-3}(f_{q_1}, \dots,f_{q_{n-3}})(1\otimes \beta_1\cdots \beta_{m-1}\otimes f_{q_{n-2}}(\beta_m \gamma_1)\otimes 1)\gamma_2\cdots\gamma_r \alpha_3.$$ These terms vanish for all $n \geq 5$, unless there exists a nonzero path $\lambda_1\cdots \lambda_h$ parallel to $\beta_m\gamma_1$: $$\xymatrix{
& \bullet \ar[rd]^{\beta_m} \ar@{.>}@/^.5pc/[rr]^{\lambda_1\cdots\lambda_h} & & \bullet \ar@{.>}@/^.5pc/[rd]^{\gamma_2\cdots\gamma_r} & &\\
1 \ar@{.>}@/^.5pc/[ru]^{\beta_1\cdots\beta_{m-1}} \ar[rr]_{\alpha_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\gamma_1} & & 3 \ar@{.>}@/^1pc/[rr]^{w'} \ar[rr]_{\alpha_3} & & 4.
}$$ Notice that $\lambda_h\neq \gamma_1$ since $A$ is gentle without oriented cycles, and hence $\lambda_h \gamma_2 =0$. In this case, it vanishes if $h=1$, $m=2$, and $n\geq 5$, and otherwise it can only be a linear combination of terms of the form $$\begin{aligned}
v_{n-3}&(f_{q_1}, \dots,f_{q_{n-3}})(1\otimes \beta_1\cdots \beta_{m-1}\otimes \lambda_1\cdots \lambda_{h-1}\otimes \lambda_h \otimes 1)\gamma_2\cdots\gamma_r \alpha_3, & \mbox{if $h>1$,} \\
v_{n-3}&(f_{q_1}, \dots,f_{q_{n-3}})(1\otimes \beta_1\cdots \beta_{m-2}\otimes \beta_{m-1}\otimes \lambda_1\otimes 1)\gamma_2\cdots\gamma_r \alpha_3, & \mbox{ if $h=1$}.
\end{aligned}$$ The first term vanishes since there are no nonzero paths parallel to $\beta_1\cdots \beta_{m-1} \lambda_1\cdots \lambda_{h-1}$ since $Q$ has no oriented cycles, thus [\[termino 1\]](#termino 1){reference-type="eqref" reference="termino 1"} vanishes for $h>1$. The second term, for $h=1$ and $m>2$, vanishes if there is no nonzero path $y$ parallel to $\beta_{m-1}\lambda_1$ such that $\beta_{m-2} y=0$. Since $Q$ has no oriented cycles, the unique possibility for such $y$ is $y=z \lambda_1$, for $z$ a nonzero path parallel to $\beta_{m-1}$. Arguing as in Example [Example 16](#ejemplo s=1){reference-type="ref" reference="ejemplo s=1"}, one can check that the only arguments appearing in the next steps of the computation are $1\otimes \beta_1\cdots \beta_{m-1}\otimes \lambda_1 \otimes 1$ and $1\otimes \beta_1\cdots \beta_{m-2}\otimes z \lambda_1 \otimes 1$. Applying Remark [Remark 15](#loop argument){reference-type="ref" reference="loop argument"} we get that [\[termino 1\]](#termino 1){reference-type="eqref" reference="termino 1"} vanishes. Therefore, [\[eqn1\]](#eqn1){reference-type="eqref" reference="eqn1"} vanishes for $n\geq 5$.
It remains to show that [\[eqn1\'\]](#eqn1'){reference-type="eqref" reference="eqn1'"} vanishes. By definition, it is equal to $$\begin{aligned}
\label{eq v n-1 a}
-v_{n-1}&(f_1, \dots, \hat{f}_i,\dots, f_n)(1\otimes\alpha_1\otimes \alpha_2\otimes \delta_1\otimes 1)\delta_2\cdots \delta_s\\ \notag
&-\sum_{j=2}^s v_{n-1}(f_1, \dots, \hat{f}_i,\dots, f_n)(1\otimes\alpha_1\otimes \alpha_2\delta_1\cdots \delta_{j-1}\otimes \delta_j\otimes 1)\delta_{j+1}\cdots \delta_s.
\end{aligned}$$ By Lemma [Lemma 4](#lemma:phi=0-RBR){reference-type="ref" reference="lemma:phi=0-RBR"}, all the last terms vanish for all $n \geq 3$ since, as $Q$ has no oriented cycles, $v'\alpha_2\delta_1\cdots\delta_{j-1}$ is the only nonzero path parallel to $\alpha_1\alpha_2\delta_1\cdots\delta_{j-1}$. The term [\[eq v n-1 a\]](#eq v n-1 a){reference-type="eqref" reference="eq v n-1 a"} can only be a linear combination of terms of the form $$\begin{aligned}
\phi_{n-2}&(f_1, \dots, \hat{f}_i, \dots, \hat{f}_j, \dots, f_n)(1\otimes\alpha_1\gamma_1\cdots\gamma_r\otimes \delta_1\otimes 1)\delta_2\cdots \delta_s\\
&= v_{n-2}(f_1, \dots, \hat{f}_i, \dots, \hat{f}_j, \dots, f_n)(1\otimes\alpha_1\gamma_1\cdots\gamma_{r-1}\otimes \gamma_r\otimes \delta_1\otimes 1)\delta_2\cdots \delta_s.
\end{aligned}$$ Let $\rho_1\cdots\rho_c$ be a nonzero path parallel to $\gamma_r\delta_1$: $$\xymatrix{
& & & \bullet \ar[rd]^{\gamma_r} \ar@{.>}@/^.5pc/[rr]^{\rho_1\cdots\rho_c} & & \bullet \ar@{.>}@/^.5pc/[rd]^{\delta_2\cdots\delta_s} &\\
1 \ar[rr]_{\alpha_1} \ar@{.>}@/^1pc/[rr]^{v'} & & 2 \ar[rr]_{\alpha_2} \ar@{.>}@/^.5pc/[ru]^{\gamma_1\cdots\gamma_{r-1}} & & 3 \ar[ur]^{\delta_1} \ar[rr]_{\alpha_3} & & 4.}$$ Thus, the only terms which can appear in [\[eq v n-1 a\]](#eq v n-1 a){reference-type="eqref" reference="eq v n-1 a"} are multiples of $$\label{eq c}\phi_{n-3}(f_{t_1}, \dots, f_{t_{n-3}})(1\otimes\alpha_1\gamma_1\cdots\gamma_{r-1}\otimes \rho_1\cdots \rho_c\otimes 1)\delta_2\cdots \delta_s$$ for $n \geq 5$. If $c>1$, using that $\rho_c \delta_2=0$, we get that [\[eq c\]](#eq c){reference-type="eqref" reference="eq c"} is equal to $$\begin{aligned}
-v_{n-3}(f_{t_1}, \dots, f_{t_{n-3}})(1\otimes\alpha_1\gamma_1\cdots\gamma_{r-1}\otimes \rho_1\cdots \rho_{c-1}\otimes \rho_c\otimes 1)\delta_2\cdots \delta_s.
\end{aligned}$$ This is zero since there are no nonzero paths parallel to $\alpha_1\gamma_1\cdots\gamma_{r-1}\rho_1\cdots\rho_{c-1}$. If $c=1$, [\[eq c\]](#eq c){reference-type="eqref" reference="eq c"} is equal to $$\begin{aligned}
& v_{n-3}(f_{t_1}, \dots, f_{t_{n-3}})(1\otimes\alpha_1\gamma_1\cdots\gamma_{r-2}\otimes \gamma_{r-1} \otimes \rho_1\otimes 1)\delta_2\cdots \delta_s.
\end{aligned}$$ Notice that for $c=1$, we have that $r>2$ since by assumption the quiver $Q$ does not contain a subquiver of the form [\[caso 2\]](#caso 2){reference-type="eqref" reference="caso 2"}. In this case, this term vanishes if there is no nonzero path $y$ parallel to $\gamma_{r-1}\rho_1$ such that $\gamma_{r-2} y=0$. Since $Q$ has no oriented cycles, the unique possibility for such $y$ is $y = z\rho_1$, for $z$ a nonzero path parallel to $\gamma_{r-1}$. Arguing as in Example [Example 16](#ejemplo s=1){reference-type="ref" reference="ejemplo s=1"}, one can check that the only arguments appearing in the next steps of the computation are $1\otimes\alpha_1\gamma_1\cdots\gamma_{r-1}\otimes \rho_1 \otimes 1$ and $1\otimes \alpha_1\gamma_1\cdots\gamma_{r-2}\otimes z\rho_1 \otimes 1$. Applying Remark [Remark 15](#loop argument){reference-type="ref" reference="loop argument"} we get that [\[eq c\]](#eq c){reference-type="eqref" reference="eq c"} vanishes for all $n \geq 5$. Therefore, [\[eqn1\'\]](#eqn1'){reference-type="eqref" reference="eqn1'"} vanishes and the proof is completed. ◻
**Theorem 25**. *Let $A=\Bbbk Q/I$ be a gentle algebra. Assume $Q$ has no oriented cycles and no parallel arrows. If $l_4(f_1, f_2, f_3, f_4) \neq 0$ for some $f_i \in B^1(A)[1]$, then $Q$ contains a subquiver of the form $$\begin{aligned}
\label{diamond} \tag{$\diamond$}
\xymatrix{
& 5 \ar[rd]^{\beta_2} \ar[rr]^{\lambda}& & 6 \ar@{.>}@/^1pc/[rrrd]^{\epsilon_2\cdots\epsilon_p}& &\\
1 \ar[rr]_{\alpha_1}\ar[ru]^{\beta_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\epsilon_1} & & 3 \ar[rr]_{\alpha_3} & & 4
}
\end{aligned}$$ with $\omega=(\alpha_1,\alpha_2, \alpha_3)\in AP_3$, $\beta_1\beta_2\neq 0$, $\epsilon_1\cdots\epsilon_p\neq 0$, and $l_4(f_1,f_2,f_3,f_4)(\omega)\neq 0$ for some $f_i\in B^1(A)[1]$, $1 \leq i \leq 4$.*
*Proof.* From Propositions [Proposition 13](#prop 1){reference-type="ref" reference="prop 1"}, [Proposition 17](#proposition A){reference-type="ref" reference="proposition A"} and [Proposition 21](#proposition B){reference-type="ref" reference="proposition B"}, we know that $Q$ contains a subquiver of the form [\[diamond\]](#diamond){reference-type="eqref" reference="diamond"} with $\epsilon_p\neq\alpha_3$, or a subquiver of the form $$\xymatrix{
& 1 \ar[r]_{\alpha_1} \ar@{.>}@/^1pc/[r]^{v'} & 2 \ar[r]_{\alpha_2} \ar@{.>}@/^1pc/[r]^{u'} & \ar@{.>}@/^1pc/[r]^{w'} 3 \ar[r]_{\alpha_3} & 4
}$$ where the nonzero paths $v'$ and $w'$ need not both appear. Assume we are in the second case, with the same notation as in the proof of Theorem [Theorem 24](#teo ln=0){reference-type="ref" reference="teo ln=0"}. So, $l_4(f_1,f_2,f_3,f_4)(\omega)$ is a linear combination of terms of the form [\[termino 1\]](#termino 1){reference-type="eqref" reference="termino 1"} and [\[eq c\]](#eq c){reference-type="eqref" reference="eq c"}: $$\begin{aligned}
& v_{2}(f_{i_1}, f_{i_2})(1\otimes\beta_1\cdots \beta_{m-1}\otimes \beta_m\otimes \gamma_1\otimes 1)\gamma_2\cdots \gamma_r\alpha_3, \mbox{ and} \\
&\phi_1(f_{t})(1\otimes\alpha_1\gamma_1\cdots\gamma_{r-1}\otimes \rho_1\cdots \rho_c\otimes 1)\delta_2\cdots \delta_s.
\end{aligned}$$ The second one is zero since, as $Q$ has no oriented cycles, there is no nonzero path starting with $\alpha_1$ and ending with $\delta_s$. The first one vanishes when $m>2$ since there is no nonzero path starting with $\beta_1$ and ending with $\alpha_3$. Assume $m=2$. In this case, the nonvanishing of the term $$\phi_1(f_{j})(1\otimes\beta_1\otimes f_k(\beta_2 \gamma_1)\otimes 1)\gamma_2\cdots \gamma_r\alpha_3$$ implies the existence of a nonzero path $\lambda_1 \cdots \lambda_h$ parallel to $\beta_2 \gamma_1$, with $\lambda_1 \neq \beta_2$. Since $Q$ has no oriented cycles, this means that $\lambda_h \neq \gamma_1$. Therefore $\lambda_h \gamma_2=0$, and hence the last term can be nonzero only if $h=1$. In this case we have a subquiver of the form [\[diamond\]](#diamond){reference-type="eqref" reference="diamond"} (with $\epsilon_1 \cdots \epsilon_p=\gamma_1 \cdots \gamma_r \alpha_3$).
Finally, if $Q$ has a subquiver of the form [\[diamond\]](#diamond){reference-type="eqref" reference="diamond"} then, for instance, for $$\begin{aligned}
& f_1=(\alpha_1 \alpha_2|| \beta_1 \beta_2 \alpha_2), & & f_2=(\alpha_2 \alpha_3 || \epsilon_1 \cdots \epsilon_p), \\ & f_3=( \beta_2\epsilon_1|| \lambda), \mbox{ and } & &f_4=(\beta_1 \lambda ||\alpha_1 \epsilon_1),
\end{aligned}$$ we have that $l_4(f_1, f_2, f_3, f_4)(\omega) = -\alpha_1 \epsilon_1 \cdots \epsilon_p\neq 0$. ◻
**Theorem 26**. *Let $A=\Bbbk Q/I$ be a gentle algebra. Assume $Q$ has no oriented cycles, no parallel arrows, and no subquivers of the form [\[caso 2\]](#caso 2){reference-type="eqref" reference="caso 2"}. Then $f= \sum_{i=1}^m f_i t^i$ is a Maurer-Cartan element if and only if $f_i$ is a $2$-cocycle for all $i$.*
*Proof.* From Lemmas [Lemma 9](#l_2 =0){reference-type="ref" reference="l_2 =0"} and [Lemma 10](#l_3 =0){reference-type="ref" reference="l_3 =0"}, and Theorem [Theorem 24](#teo ln=0){reference-type="ref" reference="teo ln=0"}, we have that $l_n(f_{i_1}, \ldots, f_{i_n})\neq 0$ only for $n=1,4$. Therefore, $f= \sum_{i=1}^m f_i t^i$ is a Maurer-Cartan element if and only if it satisfies the equations [\[MC\]](#MC){reference-type="eqref" reference="MC"} $$\delta^2(f_k)+ {\frac{1}{4!}} \sum_{i_1+i_2+i_3+i_4=k}l_4(f_{i_1}, f_{i_2}, f_{i_3}, f_{i_4})=0$$ for all $k\geq 1$. If $l_4=0$, the result is clear.
Assume $l_4 (f_{i_1}, f_{i_2}, f_{i_3}, f_{i_4}) \neq 0$. By Theorem [Theorem 25](#theo l_4 no 0 parte2){reference-type="ref" reference="theo l_4 no 0 parte2"}, we know that $Q$ has a subquiver of the form [\[diamond\]](#diamond){reference-type="eqref" reference="diamond"}. We will prove first that if $g_j$ is a $2$-cocycle for all $j$ with $1 \leq j \leq 4$, then $l_4(g_1,g_2,g_3,g_4) = 0$. In fact, $l_4(g_1,g_2,g_3,g_4)\neq 0$ implies that $l_4(g_1,g_2,g_3,g_4)(\omega) \neq 0$ for $\omega=(\alpha_1, \alpha_2, \alpha_3)$ in the subquiver of the form [\[diamond\]](#diamond){reference-type="eqref" reference="diamond"}, see Theorem [Theorem 25](#theo l_4 no 0 parte2){reference-type="ref" reference="theo l_4 no 0 parte2"}. Moreover, by Remark [Remark 22](#rmk l_4 no 0){reference-type="ref" reference="rmk l_4 no 0"}, $g_j(\beta_1\lambda )\epsilon_2$ is nonzero, for some $j$, and hence $g_j$ is not a $2$-cocycle since $$\begin{aligned}
\delta^2(g_j)(\beta_1,\lambda,\epsilon_2)= - g_j(\beta_1\lambda)\epsilon_2 + \beta_1 g_j(\lambda \epsilon_2)
\end{aligned}$$ and $g_j(\beta_1\lambda ) \epsilon_2$ is not a path starting with $\beta_1$ in [\[diamond\]](#diamond){reference-type="eqref" reference="diamond"}.
Now it is clear that if all $f_i$ are $2$-cocycles, then $f=\sum_{i=1}^m f_i t^i$ is a Maurer-Cartan element. For the converse, suppose that not all $f_i$ are $2$-cocycles, and let $i_0$ be minimal such that $f_{i_0}$ is not a $2$-cocycle, so that $f_j$ is a $2$-cocycle for all $j$ with $1 \leq j<i_0$. As we noted above, $l_4(f_{i_1}, f_{i_2}, f_{i_3}, f_{i_4})= 0$ when $i_1 + i_2 + i_3 + i_4=i_0$, since in this case each index $i_j$ is strictly smaller than $i_0$ and hence all the corresponding $f_{i_j}$ are $2$-cocycles. Therefore the Maurer-Cartan equation for $k=i_0$ reduces to $\delta^2(f_{i_0})=0$, which is a contradiction. ◻
The previous result is not true if the quiver has parallel arrows or oriented cycles, as we can see in the following examples.
**Example 27**. Let $A=\Bbbk Q/I$ be the algebra with quiver $$\xymatrix{
& & & 5 \ar[rd]^{\gamma_2}& &\\
1 \ar@<1ex>[rr]^{\beta} \ar[rr]_{\alpha_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\gamma_1} & & 3 \ar[rr]_{\alpha_3} & & 4,
}$$ and $I=\langle\alpha_1\alpha_2, \alpha_2\alpha_3, \beta\gamma_1\rangle$. Let $f_1=(\alpha_1\alpha_2||\beta\alpha_2 +\alpha_1\gamma_1\gamma_2) + (\alpha_2\alpha_3||\gamma_1\gamma_2\alpha_3) + (\beta\gamma_1||\alpha_1\gamma_1)$ and $f_3= (\alpha_1\alpha_2||\alpha_1\gamma_1\gamma_2)$. One can check that $l_n(g_1,\dots,g_n)(\alpha_1,\alpha_2,\alpha_3)=0$ for all $g_j\in B^1(A)[1]$ and for all $n\neq 1,3$. Moreover, $f_1$ is a $2$-cocycle and $l_3(f_{i_1}, f_{i_2},f_{i_3})(\alpha_1,\alpha_2,\alpha_3)=0$ except for $$l_3(f_1, f_1, f_1)(\alpha_1,\alpha_2,\alpha_3)= 6 \alpha_1\gamma_1\gamma_2\alpha_3 \neq 0.$$ As $l_1(f_3)(\alpha_1,\alpha_2,\alpha_3)= \alpha_1\gamma_1\gamma_2\alpha_3 \neq 0$ we have that $f_3$ is not a $2$-cocycle, and $\displaystyle l_1(f_3) - \frac{1}{6}l_3(f_1, f_1, f_1)=0$, hence $f=f_1 t+f_3 t^3$ is a Maurer-Cartan element.
**Example 28**. Let $A=\Bbbk Q/I$ be the algebra with quiver $$\xymatrix{
& 5\ar[rd]^{\beta_2} \ar@/_0.5pc/[ld]_{\lambda} & & 6 \ar[rd]^{\gamma_2}& &\\
1 \ar@/_0.5pc/[ru]_{\beta_1} \ar[rr]_{\alpha_1} & & 2 \ar[rr]_{\alpha_2} \ar[ru]^{\gamma_1} & & 3 \ar[rr]_{\alpha_3} & & 4,
}$$ and $I=\langle\alpha_1\alpha_2, \alpha_2\alpha_3, \beta_2\gamma_1, \beta_1\lambda, \lambda\beta_1 \rangle$. Let $f_1=(\alpha_1\alpha_2||\beta_1\beta_2\alpha_2 +\alpha_1\gamma_1\gamma_2) + (\alpha_2\alpha_3||\gamma_1\gamma_2\alpha_3) + (\beta_2\gamma_1||\lambda\alpha_1\gamma_1) + (\beta_1\lambda|| e_1) + (\lambda\beta_1|| e_5)$ and $f_4= -(\alpha_1\alpha_2||\alpha_1\gamma_1\gamma_2)$. One can check that $l_n(g_1,\dots,g_n)(\omega)=0$ for all $g_j\in B^1(A)[1]$, for all $\omega\in AP_3$, and for all $n\neq 1,4$. Moreover, $f_1$ is a $2$-cocycle and $l_4(f_{i_1}, f_{i_2},f_{i_3}, f_{i_4})(\omega)=0$ except for $$l_4(f_1, f_1, f_1, f_1)(\alpha_1,\alpha_2,\alpha_3)=-24 \alpha_1\gamma_1\gamma_2\alpha_3 \neq 0.$$ Since $l_1(f_4)(\alpha_1,\alpha_2,\alpha_3)= -\alpha_1\gamma_1\gamma_2\alpha_3 \neq 0$ we have that $f_4$ is not a $2$-cocycle and $f=f_1 t+f_4 t^4$ is a Maurer-Cartan element because $\displaystyle l_1(f_4) - \frac{1}{24}l_4(f_1, f_1, f_1, f_1)=0$.
# Acknowledgements {#acknowledgements .unnumbered}
This work began at the Women in Noncommutative Algebra and Representation Theory (WINART3) workshop, held at the Banff International Research Station (BIRS) in April 2022. The authors would like to thank the organizers of WINART3 for this collaboration opportunity. M. Müller gratefully acknowledges the support of the Association for Women in Mathematics (AWM), through an AWM Mathematical Endeavors Revitalization Program (MERP) grant.
---
abstract: |
  We introduce a new class of Discontinuous Galerkin (DG) methods for solving nonlinear conservation laws on unstructured Voronoi meshes that use a nonconforming Virtual Element basis defined within each polygonal control volume. The basis functions are actually evaluated as an $L_2$ projection of the virtual basis which remains unknown, along the lines of the Virtual Element Method (VEM). In contrast to the VEM approach, the new basis functions lead to a nonconforming representation of the solution with discontinuous data across the element boundaries, as typically employed in DG discretizations. To improve the condition number of the resulting mass matrix, an orthogonalization of the full basis is proposed. The discretization in time is carried out following the ADER (Arbitrary order DERivative Riemann problem) methodology, which yields one-step fully discrete schemes that make use of a coupled space-time representation of the numerical solution. The space-time basis functions are constructed as a tensor product of the virtual basis in space and a one-dimensional Lagrange nodal basis in time. The resulting space-time stiffness matrix is stabilized by an extension of the dof--dof stabilization technique adopted in the VEM framework, hence allowing an element-local space-time Galerkin finite element predictor to be evaluated. The novel methods are referred to as VEM-DG schemes, and they are arbitrarily high order accurate in space and time.
  The new VEM-DG algorithms are rigorously validated against a series of benchmarks in the context of compressible Euler and Navier--Stokes equations. Numerical results are verified with respect to literature reference solutions and compared in terms of accuracy and computational efficiency to those obtained using a standard modal DG scheme with Taylor basis functions. An analysis of the condition number of the mass and space-time stiffness matrices is also provided.
address:
- Department of Mathematics and Computer Science, University of Ferrara, Via Niccolò Machiavelli 30, 44121 Ferrara, Italy
- Department of Environmental and Prevention Sciences, University of Ferrara, Corso Ercole I d'Este 32, 44121 Ferrara, Italy
author:
- Walter Boscheri$^*$
- Giulia Bertaglia
bibliography:
- paper_vem-dg.bib
title: Nonconforming Virtual Element basis functions for space-time Discontinuous Galerkin schemes on unstructured Voronoi meshes
---
Discontinuous Galerkin, Virtual Element Method, High order in space and time, ADER schemes, Unstructured meshes, Compressible flows
# Introduction {#sec.intro}
Discontinuous Galerkin (DG) methods were first introduced in [@reed] for the solution of neutron transport equations, and subsequently applied to general nonlinear systems of hyperbolic conservation laws in [@cbs0; @cbs1; @cbs2; @cbs3]. In the DG framework, the numerical solution is represented by piecewise polynomials within each mesh control volume, allowing jumps of the discrete solution across element boundaries. The main advantage of the DG approach is that it automatically provides high order of accuracy *locally*, so that no reconstruction stencil is required, unlike in finite volume solvers. Furthermore, DG schemes are typically more accurate than finite volume or finite difference methods, because the entire high order polynomial is evolved in time for each computational cell.
The discrete solution is represented in terms of an expansion that involves a set of basis functions and the associated degrees of freedom, also referred to as expansion coefficients. The basis can be either modal or nodal, provided that it achieves the formal order of accuracy of the method. In the nodal approach, the degrees of freedom correspond to the value of the numerical solution at the nodal points, while in the modal approach the expansion coefficients give the modes of the polynomial basis.
Efficient DG schemes can be devised with *nodal basis* if the nodes are carefully chosen, hence leading to nodal DG schemes with Gauss-Lobatto nodes [@GassnerDG_LES] or Gauss-Legendre nodes [@Exahype]. These schemes are typically referred to as spectral element methods (SEM) [@Kopriva2009], which fit the Summation-By-Parts Simultaneous-Approximation-Term (SBP-SAT) framework [@CARPENTER199674; @GassnerSIAM2013]. The nodal basis functions are defined on a reference element, to which the physical control volume is mapped. This means that the usage of nodal basis is restricted to either Cartesian meshes with quadrilaterals and hexahedra, or unstructured grids made of simplex control volumes, namely triangles and tetrahedra. To overcome this limitation, in [@GassnerPoly] a nodal basis ansatz on more general unstructured meshes is proposed, and recently in [@ADERAFEDG] an agglomerated continuous finite element basis is devised at the sub-grid level of a Voronoi mesh. The nonlinear stability of nodal DG schemes is guaranteed by employing either artificial viscosity techniques [@PerssonAV; @VegtAV; @TavelliCNS; @GassnerMunzAV] or sub-cell finite volume limiters [@DGLimiter1; @DGsubcell_Gassner].
Alternatively, *modal basis* like Taylor basis can be easily employed on very general control volumes [@ArepoTN; @DGBoltz], although the associated computational cost increases since no reference element is available. Nevertheless, the adoption of a hierarchical modal approach is very convenient for designing slope and moment limiters in order to ensure the stability of DG schemes [@cbs4; @Biswas_94; @Burbeau_2001; @Kri07; @Kuzmin2013].
With the aim of dealing with general polygonal and polyhedral elements, the Virtual Element Method (VEM), which belongs to the class of continuous finite element methods, has recently emerged [@vem1; @vem2; @vem3; @vem4; @vem5; @vem6]. In the VEM framework, the discrete solution is approximated by a set of basis functions that do not need to be explicitly determined, hence making them only *virtually* defined. Indeed, the numerical solution is accessed through suitable operators that project the basis functions onto polynomial spaces of any degree, allowing for the discretization and appropriate approximation of the continuous linear functional and the bilinear form resulting from the variational formulation only through the knowledge of the degrees of freedom on the boundary of the mesh elements. VEM strategies have been widely adopted in solid mechanics, especially for elliptic problems of linear elasticity [@vem3; @Gain2014], linear elastodynamics [@Antonietti2021], elastic problems with singularities and discontinuities [@BenvenutiChiozzi2019; @BenvenutiChiozzi2022], and fracture mechanics [@Hussein2019; @Nguyen2018]. In more recent years, applications of the VEM in the field of fluid dynamics have been put forward, such as flow problems in porous media [@Borio2022; @Borio2021] or the solution of the steady Navier--Stokes equations [@Beirao-Lovadina2018; @Chernov2021; @BeiraoStokes22; @vem6; @Antonietti22]. To account for discontinuous solutions across the vertexes of the computational mesh, *nonconforming* virtual element methods have been introduced in [@VEM_nc_org; @vem_nc_2; @vem_nc_3; @vem_nc_4; @vem_nc_5]. In this case, the numerical solution is still computed by solving a linear system on the entire computational mesh, but at the global level the conformity requirement between elements is relaxed, so that the functions of the global discrete space are no longer continuous.
In this work, we further extend the application of VEM to solve nonlinear time-dependent Partial Differential Equations (PDEs), introducing an innovative class of high-order numerical methods which relate the Virtual Element Method to the compact framework of Discontinuous Galerkin Finite Element methods on unstructured Voronoi meshes, focusing on the solution of compressible viscous flows governed by the Navier--Stokes equations. The VEM projection operators are used to define the *local* solution space within each control volume, hence using the virtual basis functions to approximate the discrete solution in the cell. Since the numerical scheme belongs to the family of DG methods, the solution remains discontinuous across element boundaries, leading to a numerical method based on nonconforming virtual element basis functions. The new basis is a mixed nodal/modal basis, since it involves high order internal moments starting from third order approximations. However, the interpolation property exhibited by nodal bases holds even for the degrees of freedom associated with the internal moments, making the new nonconforming VEM basis potentially able to yield a quadrature-free DG scheme [@Exahype; @ADERAFEDG]. The time discretization is performed relying on the ADER (Arbitrary order DERivative Riemann problem) approach [@Toro2006; @Dumbser2008], which, in contrast to Runge-Kutta schemes based on temporal sub-stage evaluations, allows high order in time to be obtained with a one-step fully discrete method. This is achieved through the computation of a space-time predictor solution with the aid of an element-local finite element predictor. Therefore, we construct the space-time basis functions by performing a tensor product between the nonconforming VEM basis in space and a standard nodal Lagrange basis in time. As analyzed in [@Mascotto2018], the mass and stiffness matrices arising from the VEM paradigm are highly ill-conditioned, thus requiring special care if they have to be inverted. Consequently, an *ad hoc* stabilization technique is also introduced for the inversion of the space-time stiffness matrix needed in the ADER predictor strategy. Inspired by [@Berrone2017], to further reduce the condition number of the VEM spatial mass matrix, an orthogonalization technique is proposed for the full basis.
The rest of the paper is structured as follows. In Section [2](#sec.pde){reference-type="ref" reference="sec.pde"} we introduce the governing equations. Section [3](#sec.numscheme){reference-type="ref" reference="sec.numscheme"} presents the numerical method, including the description of the novel nonconforming space-time VEM basis as well as the fully discrete DG scheme. The accuracy and the robustness of the proposed approach are demonstrated in Section [4](#sec.validation){reference-type="ref" reference="sec.validation"}, which contains a set of benchmark test problems in the field of compressible inviscid and viscous flows. Finally, Section [5](#sec.concl){reference-type="ref" reference="sec.concl"} provides some concluding remarks and gives an outlook to future work.
# Mathematical model {#sec.pde}
The mathematical model is given by a nonlinear system of conservation laws, reading in general form as $$\label{eqn.PDE}
\frac{\partial \mathbf{U}}{\partial t} + \nabla \cdot \mathbf{F}(\mathbf{U},\nabla \mathbf{U}) = \mathbf{0}\,, \qquad
\mathbf{x}\in \Omega \subset \mathds{R}^d\,, \quad
t \in \mathds{R}_0^+\,, \quad
\mathbf{U}\in \Omega_{\mathbf{U}} \subset \mathds{R}^{\gamma}\,,$$ where $\Omega$ is a bounded domain in $d=2$ space dimensions, $\mathbf{x}= (x,y)$ is the vector of spatial coordinates, and $t$ is the time. Here, $\mathbf{U}$ denotes the vector of conserved variables defined in the space of admissible states $\Omega_{\mathbf{U}}\subset \mathds{R}^{\gamma}$, while $\mathbf{F}(\mathbf{U},\nabla \mathbf{U}) = (\mathbf{f}(\mathbf{U},\nabla \mathbf{U}), \,\mathbf{g}(\mathbf{U},\nabla \mathbf{U}))$ is the conservative non-linear flux tensor ($\mathbf{f}$ in $x$-direction, $\mathbf{g}$ in $y$-direction), which depends not only on the conserved state $\mathbf{U}$ but also on its gradient $\nabla \mathbf{U}$. In particular, we are interested in the compressible Navier--Stokes equations for a Newtonian fluid with heat conduction, which are based on the physical principle of conservation of mass, momentum and total energy. They can be written in the conservative form [\[eqn.PDE\]](#eqn.PDE){reference-type="eqref" reference="eqn.PDE"}, being $$\label{eqn.NS}
\begin{aligned}
\mathbf{U}= \begin{pmatrix}
\, \rho\\
\rho \mathbf{v}\\
\rho E \,
\end{pmatrix}\,,\qquad
\mathbf{F}(\mathbf{U},\nabla \mathbf{U}) = \begin{pmatrix}
\, \rho \mathbf{v}\\
\rho (\mathbf{v}\otimes \mathbf{v}) + \boldsymbol{\sigma}(\mathbf{U}, \nabla \mathbf{U}) \\
\mathbf{v}\cdot \left(\rho E \mathbf{I}+ \boldsymbol{\sigma}(\mathbf{U}, \nabla \mathbf{U})\right) - \kappa \nabla T\,
\end{pmatrix}\,.
\end{aligned}$$ In the above equations, $\rho(\mathbf{x}, t)$ is the fluid density, $\mathbf{v}(\mathbf{x}, t) = (u,v)$ denotes the fluid velocity vector, $p(\mathbf{x},t)$ is the fluid pressure, $E(\mathbf{x},t)=\varepsilon+\frac1{2}|\mathbf{v}|^2$ defines the specific total energy as the sum of the specific internal energy $\varepsilon(\mathbf{x},t)$ and the specific kinetic energy $k(\mathbf{x},t)=\frac1{2}|\mathbf{v}|^2$, and $\mathbf{I}$ is the identity $d \times d$ matrix. The stress tensor $\boldsymbol{\sigma}(\mathbf{U}, \nabla \mathbf{U})$ is defined under Stokes hypothesis as follows: $$\label{stressT}
\boldsymbol{\sigma}(\mathbf{U}, \nabla \mathbf{U}) = \left( p + \frac{2}{3}\mu\, \nabla \cdot \mathbf{v}\right) \mathbf{I}- \mu\left( \nabla\mathbf{v}+ \nabla\mathbf{v}^T \right)\,,$$ where $\mu$ is the dynamic viscosity, assumed to be constant. In the energy flux, $T(\mathbf{x},t)$ represents the fluid temperature, while $\kappa=\mu \gamma c_v \Pr^{-1}$ is the thermal conductivity coefficient, which depends on the viscosity $\mu$, the Prandtl number $\Pr$ and the specific heats $c_v$ and $c_p$, at constant volume and pressure, respectively, where $\gamma = c_p/c_v$ is the adiabatic index. The specific heat at constant volume $c_v = R/(\gamma - 1)$ depends on the gas constant $R$.
## Equations of state
To close the system defined by [\[eqn.NS\]](#eqn.NS){reference-type="eqref" reference="eqn.NS"}, it is necessary to introduce two equations of state (EOS), a thermal one for the pressure $p = p(T,\rho)$ and a caloric one for the specific internal energy $\varepsilon=\varepsilon(T,\rho)$. In this work, we consider an ideal gas with the thermal and caloric EOS provided, respectively, by $$\label{EOS}
\frac{p}{\rho}=RT\,, \qquad \varepsilon=c_v T\,.$$ With this assumption, substituting the temperature obtained from the thermal EOS into the caloric EOS allows the temperature to be eliminated, so that we can work with a single EOS of the form $\varepsilon(p,\rho)$, which defines the following linear relationship between pressure $p$ and internal energy $\varepsilon$: $$\label{EOS1}
\varepsilon(p,\rho)=\frac{p}{\rho(\gamma-1)}\,.$$
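For illustration purposes only, a minimal Python sketch of these ideal-gas closure relations might look as follows (all function and variable names are ours and purely illustrative):

```python
# Minimal sketch of the ideal-gas closure relations (thermal and caloric EOS).
# gamma is the adiabatic index, R the gas constant; names are illustrative.

def temperature(p, rho, R):
    """Thermal EOS: p / rho = R * T."""
    return p / (rho * R)

def internal_energy(p, rho, gamma):
    """Single EOS obtained by eliminating T: eps = p / (rho * (gamma - 1))."""
    return p / (rho * (gamma - 1.0))

def pressure(eps, rho, gamma):
    """Inverse relation: p = rho * (gamma - 1) * eps."""
    return rho * (gamma - 1.0) * eps
```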
## Eigenvalues of the system
The compressible Navier--Stokes equations [\[eqn.NS\]](#eqn.NS){reference-type="eqref" reference="eqn.NS"} involve a convective-acoustic sub-system, which corresponds to the compressible Euler equations, and a viscous sub-system. The eigenvalues can be analyzed separately for each sub-system [@ADERAFEDG; @BosPar2021], thus the convective-acoustic eigenvalues $\boldsymbol{\lambda}^c=\{\lambda_i^c\}_{i=1}^4$ are given by $$\label{eig_c}
\lambda^c_1 = |\mathbf{v}| - c \,, \qquad \lambda^c_2 = \lambda^c_3 = |\mathbf{v}|\,, \qquad \lambda^c_4 = |\mathbf{v}| + c\,,$$ where $c = \sqrt{\gamma RT}$ is the sound speed, while the viscous eigenvalues $\boldsymbol{\lambda}^v=\{\lambda_i^v\}_{i=1}^4$, which depend on the viscosity and heat conduction properties of the fluid, read $$\label{eig_v}
\lambda^v_1 = \lambda^v_2 = 0 \,, \qquad
\lambda^v_3 = \frac{4\mu}{3\rho} \,, \qquad \lambda^v_4 = \frac{\gamma \mu}{\rho \Pr}\,.$$
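A small Python sketch of these eigenvalue computations for a given primitive state could read as follows (names are illustrative and do not refer to any existing implementation):

```python
import numpy as np

# Convective-acoustic and viscous eigenvalues for a primitive state (rho, v, p).
# gamma: adiabatic index, R: gas constant, mu: dynamic viscosity, Pr: Prandtl number.

def eigenvalues(rho, v, p, gamma, R, mu, Pr):
    T = p / (rho * R)                       # thermal EOS
    c = np.sqrt(gamma * R * T)              # sound speed
    vnorm = np.linalg.norm(v)
    lam_c = np.array([vnorm - c, vnorm, vnorm, vnorm + c])
    lam_v = np.array([0.0, 0.0, 4.0 * mu / (3.0 * rho), gamma * mu / (rho * Pr)])
    return lam_c, lam_v
```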
# Numerical scheme {#sec.numscheme}
In this work, we propose to solve numerically the governing equations [\[eqn.PDE\]](#eqn.PDE){reference-type="eqref" reference="eqn.PDE"}-[\[eqn.NS\]](#eqn.NS){reference-type="eqref" reference="eqn.NS"} using a high order fully discrete one-step scheme which embeds the Virtual Element Method (VEM) [@vem2] in the definition of the basis functions of a Discontinuous Galerkin (DG) approach for the discretization in space on unstructured Voronoi meshes. To attain high accuracy in time, the ADER (Arbitrary order DERivative Riemann problem) methodology [@Dumbser2008] is suitably adapted to deal with the novel spatial basis functions.
## Discretization of the space-time computational domain {#ssec.domain}
In this section we present the details of the discretization of the computational domain in space and time.
### Space discretization
The spatial domain $\Omega \subset \mathds{R}^2$ is discretized in $N_P$ non-overlapping polygonal control volumes $P_i$, $i = 1,...,N_P$, with boundary $\partial P_i$ and surface area $|P_i|$, defined by the following Voronoi-type tessellation: $$\mathcal{T}_{\Omega} = \bigcup \limits_{i=1}^{N_P}{P_i}\,.$$ Each Voronoi element $P_i$ is characterized by $N_{e_i}$ vertexes $r$ (oriented counter-clockwise) with coordinates $\mathbf{x}_{r}=(x_{r},y_{r})$, and $N_{e_i}$ straight edges $e$ that compose the boundary $\partial P_i$. For each edge $e$ we also define the outward pointing unit normal vector $\mathbf{n}_{e}=(n_x,n_y)_e$. The barycenter ${\mathbf{x}_{P_i}}=(x_{P_i},y_{P_i})$ of each element is defined so that $$\label{eqn.xi}
\mathbf{x}_{P_i} = \frac1{N_{e_i}} \sum_{k=1}^{N_{e_i}}\mathbf{x}_{r_k}\,.$$ Each Voronoi cell is associated to a characteristic length size $h_{P_i}$ computed as follows: $$\label{eqn.h}
h_{P_i} = \frac{2 |P_i|}{\sum_{k=1}^{N_{e_i}} |\partial P_{i \, e_k}|}\,,$$ where $|\partial P_{i \, e_k}|$ is the length of the edge $e_k$, $k=1,...,N_{e_i}$, of the element $P_i$.
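The element geometry quantities introduced above can be computed, for instance, with the following Python sketch (array layouts and names are our own assumptions):

```python
import numpy as np

# 'vertices' is an (N_e, 2) array with the counter-clockwise ordered vertex
# coordinates of a Voronoi cell P_i.

def barycenter(vertices):
    """Vertex-averaged barycenter x_P."""
    return vertices.mean(axis=0)

def polygon_area(vertices):
    """Surface area |P| via the shoelace formula."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def characteristic_length(vertices):
    """Characteristic size h_P = 2 |P| / perimeter."""
    edges = np.roll(vertices, -1, axis=0) - vertices
    perimeter = np.linalg.norm(edges, axis=1).sum()
    return 2.0 * polygon_area(vertices) / perimeter
```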
To construct the Voronoi grid we start by a primary Delaunay triangulation, whose vertexes provide the generators of the Voronoi cells. The interested reader is referred to [@ADERAFEDG] for all the details. Although we use a Voronoi mesh, this is not mandatory for the numerical scheme presented in this work. Indeed, we remark that any unstructured mesh can be adopted, provided that the following mesh regularity assumptions [@SIFVVEM; @Mascotto2018] are fulfilled: there exists a constant $\varrho > 0$ such that for every polygonal element $P_i \in\mathcal{T}_\Omega$ it holds that
$(i)$
: $P_i$ is star-shaped with respect to a disk with radius $R_i \geq \varrho h_{P_i}$;
$(ii)$
: for every edge $e_k \in\partial P_i$ it holds that $|\partial P_{i \, e_k}| \geq \varrho h_{P_i}$.
This implies that the number of edges and vertexes of each element is uniformly bounded over the whole computational mesh, and that degenerate edges of vanishing length cannot occur.
### Time discretization
The time coordinate is bounded in the interval $[0,t_f]$, where $t_f \in \mathds{R}_0^+$ identifies the final time of the simulation. A sequence of time steps ${\Delta t}= t^{n+1}-t^n$ is used to discretize the time interval, such that $t \in [t^n;t^{n+1}]$: $$\label{eqn.dt}
t = t^n + \tau {\Delta t}\,, \qquad \tau \in [0;1]\,,$$ where $\tau$ defines a reference time coordinate which maps the physical time interval $[t^n;t^{n+1}]$ into the range $[0;1]$. The time step size is computed at each temporal iteration in order to respect a $\textnormal{CFL}$ (Courant--Friedrichs--Lewy) stability condition for explicit DG schemes [@ADERNSE; @stedg2], thus $$\label{CFL}
{\Delta t}= \frac{\textnormal{CFL}}{2N+1}\, \frac{\min \limits_{\mathcal{T}_\Omega} h_{P}}{\max \limits_{\mathcal{T}_\Omega} \left( |\lambda^c_{\max}| + \frac{2(2N+1)}{h_P} \cdot |\lambda^v_{\max}| \right)_{P}} \,,$$ where $\lambda^c_{\max}$ and $\lambda^v_{\max}$ represent the maximum eigenvalues defined in [\[eig_c\]](#eig_c){reference-type="eqref" reference="eig_c"}-[\[eig_v\]](#eig_v){reference-type="eqref" reference="eig_v"} of the element $P$, and $N$ is the degree of the chosen piecewise polynomial data representation, as detailed in the following.
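As an illustration, the resulting time step restriction can be evaluated with a few lines of Python (the per-cell arrays below are an assumption of this sketch):

```python
import numpy as np

# h: characteristic lengths of all cells; lam_c_max, lam_v_max: per-cell maxima
# of the convective and viscous eigenvalues; N: polynomial degree.

def compute_dt(h, lam_c_max, lam_v_max, N, cfl=0.5):
    denom = np.abs(lam_c_max) + 2.0 * (2 * N + 1) / h * np.abs(lam_v_max)
    return cfl / (2 * N + 1) * np.min(h) / np.max(denom)
```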
## Data representation {#ssec.numsol}
For ease of reading, from now on we omit the subscript $i$ referring to polygon $P_i$, hence simply writing $P$, bearing in mind that we address local quantities within $P_i$. In each computational cell $P$, the discrete representation of the conserved quantity $\mathbf{U}(\mathbf{x},t)$ at the current time $t^n$ is referred to as $\mathbf{u}_h^n$, and it is given in terms of piecewise polynomials of degree $N \geq 0$ defined in the space $\mathbbm{P}_{N}$ as $$\mathbf{u}_h^n:=\mathbf{U}\left(\mathbf{x}|_{P},t^n \right) = \sum \limits_{\ell=1}^{\mathcal{N}} \phi_\ell(\mathbf{x}) \, \hat{\mathbf{u}}^{n}_{\ell}
:= \phi_\ell(\mathbf{x}) \, \hat{\mathbf{u}}^{n}_{\ell} , \qquad \mathbf{x}\in {P},
\label{eqn.uh}$$ with $\phi_\ell(\mathbf{x})$ denoting the spatial basis functions used to span the space of piecewise polynomials $\mathbbm{P}_{N}$ up to degree $N$, and $\mathcal{N}$ being the total number of degrees of freedom $\hat{\mathbf{u}}^{n}_{\ell}$ for the cell $P$. Classical tensor index notation based on the Einstein summation convention is adopted, which implies summation over two equal indexes.
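In code, the evaluation of this expansion reduces to a contraction over the degrees of freedom, as in the following minimal sketch (the callable `phi` returning the basis values is an assumption here, since the actual basis is defined in the next sections):

```python
import numpy as np

# u_hat: (N_dof, n_vars) array of expansion coefficients of the cell P;
# phi(x): callable returning the N_dof basis function values at the point x.

def evaluate_uh(phi, u_hat, x):
    """Discrete solution u_h(x) = sum_l phi_l(x) * u_hat_l."""
    return phi(x) @ u_hat
```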
To compute the quantity $\mathbf{u}_h^n$, an explicit definition of the basis functions $\phi_\ell(\mathbf{x})$ is needed. Due to the unstructured nature of the polygonal mesh, where, in principle, no reference element can be adopted (neither the unit triangle nor the unit square), a common solution consists in employing modal basis functions (like Taylor basis). Recently in [@ADERAFEDG], an alternative ansatz has been put forward for defining the basis functions, using local agglomerated finite element spaces. In the sequel, we present a novel strategy to approximate the discrete solution fitting the general form [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"}.
## The local Virtual Element space
We define the *local* Virtual Element space of degree $N$, $V^{h}_N(P)$, on each Voronoi element $P \in \mathcal{T}_\Omega$ to be $$\label{eq:Vh:def}
V^{h}_N(P) = \Big\{\,
\varv\in H^1(P)\,:\, \Delta\varv\in\mathbbm{P}_{N-2}(P) \,, \quad
\varv|_{\partial P}\in C^{0}(\partial P)\,, \quad
\varv|_{{e}}\in\mathbbm{P}_{N}({e})\,\,\forall{e}\in\partial P \, \Big\}\,,$$ where $\varv\in V^{h}_N(P)$ is a function of the local Virtual Element space and $\mathbbm{P}_{N}(P)$ denotes the space of polynomials of degree up to $N$ on ${P}$. Hence, $\mathbbm{P}_{N}(P)$ is a subspace of $V^{h}_N(P)$ with dimension $$\label{eq:nk}
n_N := \textrm{dim} (\mathbbm{P}_{N}(P)) = \frac{(N+1)(N+2)}{2}.$$
Let us first define a basis for $\mathbbm{P}_{N}({P})$. We introduce the multi-index $\bm{\ell}=(\ell_1,\ell_2)$, so that, if $\bm{x}=(x_1,x_2)$, then $\bm{x}^{\bm{\ell}}=x_1^{\ell_1}\,x_2^{\ell_2}$. We define the scaled monomials $m_{\bm{\ell}}$ as $$\begin{aligned}
\label{eq:m_alpha}
m_{\bm{\ell}} = \Big(\frac{\mathbf{x}-\mathbf{x}_{{P}}}{h_{{P}}}\Big)^{\bm{\ell}}, \qquad 0 \leq \ell_1+\ell_2 \leq N.\end{aligned}$$ A basis for $\mathbbm{P}_{N}({P})$ can then be provided by the set $\mathcal{M}_N({P})$ of scaled monomials of degree less than or equal to $N$: $$\begin{aligned}
\label{eq:M_k}
\mathcal{M}_N({P}) := \{ m_{\bm{\ell}} : 0 \leq \ell_1+\ell_2 \leq N \}.\end{aligned}$$ We also use the $\alpha$ subscript to denote the $\alpha$-th scaled monomial $m_{\alpha}$ of $\mathcal{M}_N({P})$.
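A possible Python sketch for enumerating the multi-indices and evaluating the scaled monomial basis at a point is given below (the ordering of the multi-indices is an arbitrary choice of ours):

```python
import numpy as np

def multi_indices(N):
    """All (l1, l2) with 0 <= l1 + l2 <= N; their number is (N+1)(N+2)/2."""
    return [(l1, l2) for l1 in range(N + 1) for l2 in range(N + 1 - l1)]

def scaled_monomials(x, x_P, h_P, N):
    """Evaluate m_l(x) = ((x1-xP1)/h_P)**l1 * ((x2-xP2)/h_P)**l2 for all l."""
    xi = (np.asarray(x, dtype=float) - np.asarray(x_P, dtype=float)) / h_P
    return np.array([xi[0] ** l1 * xi[1] ** l2 for (l1, l2) in multi_indices(N)])
```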
Next, let us focus on the definition of a canonical basis for the local virtual space. Each function $\varv$ belonging to $V^{h}_N(P)$ is uniquely defined by the following degrees of freedom (dof) [@vem2]:
- the $N_{e}$ point values of $\varv$ at the vertexes of $P$;
- the $(N-1)N_{e}$ point values of $\varv$ at the $(N-1)$ internal nodes of the $(N+1)$-point Gauss-Lobatto quadrature rule on each edge $e$;
- the scaled internal moments up to degree $(N-2)$ of $\varv$ in $P$, defined as $$\label{eqn.dof_mom}
\frac1{|P|} \int_{P} \varv\,m_{\alpha} \, d\mathbf{x}, \qquad \alpha = 1,...,n_{N-2}\,,$$ where $n_{N-2}$ is the dimension of $\mathbbm{P}_{N-2}(P)$ evaluated according to [\[eq:nk\]](#eq:nk){reference-type="eqref" reference="eq:nk"}.
The dimension of $V^{h}_N({P})$, and therefore the number of degrees of freedom of each function $\varv$, is then $$\label{eq:dim_Vh}
N^{\textrm{dof}}_{{P}}:= \textrm{dim}(V^{h}_N({P})) = N \cdot N_e + \frac{N(N-1)}{2}\,.$$ We denote by $\hat{\varv}_{k}$ the value of the $k$-th degree of freedom of $\varv$ and by $\{\varphi_\ell\}_{\ell=1}^{N^{\textrm{dof}}_{{P}}}$ the local canonical basis for $V^{h}_N({P})$, defined such that for every $k,\ell=1, ..., N^{\textrm{dof}}_{{P}}$ it holds $$\label{eqn.interp_prop}
\left(\hat{\varphi_\ell}\right)_{k} = \delta_{k \ell},$$ with $\delta_{k \ell}$ being the Kronecker delta function. Thus, we can represent each function $\varv$ of the *local* Virtual Element space in terms of its degrees of freedom using the following Lagrange interpolation: $$\label{eq:vh_repr}
\varv= \sum_{\ell=1}^{N^{\textrm{dof}}_{{P}}} \hat{\varv}_{\ell} \,\varphi_\ell\,.$$ Let us notice that the basis functions $\{\varphi_\ell\}_{\ell=1}^{N^{\textrm{dof}}_{{P}}}$ are *virtual*, meaning that they are not explicitly known but they are ensured to span the local Virtual Element space defined by [\[eq:Vh:def\]](#eq:Vh:def){reference-type="eqref" reference="eq:Vh:def"}. Only the degrees of freedom $\left(\hat{\varphi_\ell}\right)_{k}$ of the virtual basis functions are known.
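The dof bookkeeping implied by these definitions can be summarized by the following small Python helpers (a sketch, with our own naming):

```python
def n_poly(N):
    """Dimension of the polynomial space P_N(P) in two space dimensions."""
    return (N + 1) * (N + 2) // 2

def n_dof(N, N_e):
    """Dimension of V^h_N(P): N*N_e boundary values plus N(N-1)/2 internal moments."""
    return N * N_e + N * (N - 1) // 2
```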
### Elliptic projection operator
Since the explicit definition of the virtual basis functions is not available, we need to design an appropriate projection operator $\Pi^\nabla_{{P},N}: V^{h}_N({P}) \rightarrow \mathbbm{P}_{N}({P})$, which maps functions from the Virtual Element space to the known polynomial space of degree $N$. Furthermore, such an operator must be computable relying only on the *known* degrees of freedom of the virtual basis. According to [@vem2], the projector is defined from the orthogonality condition $$\label{eq:vem_proj}
\int_{{P}} \nabla p \cdot \nabla (\Pi^\nabla_{{P},N}\varv- \varv) \, d\mathbf{x}= 0\,, \qquad \forall p \in \mathbbm{P}_{N}({P})\,,\quad \forall \varv\in V^{h}_N({P})\, ,$$ where $p$ is a function of the polynomial space $\mathbbm{P}_{N}({P})$. The above relation is uniquely determined by prescribing an additional projection operator that acts onto constants, namely $P_0: V^{h}_N({P}) \rightarrow \mathbbm{P}_{0}({P})$, fulfilling the following relation: $$\label{eq:vem_proj_const}
P_0(\Pi^\nabla_{{P},N}\varv- \varv) = 0\,.$$ Following [@vem2], we choose $P_0$ so that $$\begin{aligned}
\label{eq:vem_proj_0}
P_0 \varv= \begin{cases}
\frac1{N_e} \sum_{k=1}^{N_e} \varv(\mathbf{x}_{r_k}) \quad &\textrm{for} \enskip N = 1\,, \\
\frac1{|{P}|}\int_{{P}} \varv\, d\mathbf{x}\quad &\textrm{otherwise}\,,
\end{cases}\end{aligned}$$ with $\mathbf{x}_{r_k}$ being the coordinates of all vertexes of cell ${P}$. The basis $\mathcal{M}_N({P})$ given by [\[eq:M_k\]](#eq:M_k){reference-type="eqref" reference="eq:M_k"} is a basis of $\mathbbm{P}_{N}(P)$ expressed in terms of the scaled monomials [\[eq:m_alpha\]](#eq:m_alpha){reference-type="eqref" reference="eq:m_alpha"}, thus we can rewrite the orthogonality condition [\[eq:vem_proj\]](#eq:vem_proj){reference-type="eqref" reference="eq:vem_proj"} as $$\label{eq:vem_proj2}
\int_{{P}} \nabla m_\alpha \cdot \nabla (\Pi^\nabla_{{P},N}\varv- \varv) \, d\mathbf{x}= 0\,, \qquad \alpha = 1,...,n_N\,.$$ Moreover, since $\Pi^\nabla_{{P},N}\varv$ is also an element of $\mathbbm{P}_{N}({P})$, we can use the basis $m_\alpha$ to compute it as well, that is $$\label{eq:vem_proj2b}
\Pi^\nabla_{{P},N}\varv= \sum_{\beta=1}^{n_N} s^\beta m_\beta\,,$$ where $s^\beta$ are the unknowns which define the projector operator. The above equation inserted into [\[eq:vem_proj2\]](#eq:vem_proj2){reference-type="eqref" reference="eq:vem_proj2"} and [\[eq:vem_proj_const\]](#eq:vem_proj_const){reference-type="eqref" reference="eq:vem_proj_const"} finally leads to the following linear system: $$\begin{aligned}
\label{eq:vem_proj3}
&\sum_{\beta=1}^{n_N} s^\beta \int_{{P}} \nabla m_\alpha \cdot \nabla m_\beta \, d\mathbf{x}= \int_{{P}} \nabla m_\alpha \cdot \nabla \varv\, d\mathbf{x}\,, \qquad \alpha = 1,...,n_N\,,\\
\label{eq:vem_proj_aux}
&\sum_{\beta=1}^{n_N} s^\beta P_0 m_\beta = P_0 \varv\,.\end{aligned}$$ The terms on the left side of [\[eq:vem_proj3\]](#eq:vem_proj3){reference-type="eqref" reference="eq:vem_proj3"} can be straightforwardly computed since they involve the integration of known polynomials over ${P}$, while integration by parts is performed on the right hand side yielding $$\label{eq:vem_proj4}
\int_{{P}} \nabla m_\alpha \cdot \nabla \varv\, d\mathbf{x}= -\int_{{P}}\Delta m_\alpha \varv\, d\mathbf{x}+ \int_{\partial {P}} \frac{\partial m_\alpha}{\partial n} \varv\, dS.$$ We notice that the first term can be directly computed with the sole aid of the internal degrees of freedom of $\varv$, while the second one (which contains the derivative of the basis $m_\alpha$ in normal direction $n$ with respect to the cell boundary $\partial {P}$) can be exactly computed through Gauss-Lobatto quadrature rules along each edge of the element. The values of $\varv$ at these points are indeed known, and they coincide with some degrees of freedom of the local Virtual Element space. Once the solution vector $s^\beta$ of system [\[eq:vem_proj3\]](#eq:vem_proj3){reference-type="eqref" reference="eq:vem_proj3"}-[\[eq:vem_proj_aux\]](#eq:vem_proj_aux){reference-type="eqref" reference="eq:vem_proj_aux"} is obtained, it is employed to evaluate the projection $\Pi^\nabla_{{P},N}\varphi_\ell$ of the virtual basis function $\varv= \varphi_\ell$ recalling equation [\[eq:vem_proj2b\]](#eq:vem_proj2b){reference-type="eqref" reference="eq:vem_proj2b"}: $$\Pi^\nabla_{{P},N}\varphi_\ell = \sum_{\beta=1}^{n_N} s^\beta_\ell m_\beta, \qquad \ell = 1,...,N^{\textrm{dof}}_{{P}}\,.$$
System [\[eq:vem_proj3\]](#eq:vem_proj3){reference-type="eqref" reference="eq:vem_proj3"}-[\[eq:vem_proj_aux\]](#eq:vem_proj_aux){reference-type="eqref" reference="eq:vem_proj_aux"} can be expressed in matrix form as $$\label{eq:proj_system}
\mathbf{G} \accentset{\star}{\mathbf{\Pi}}^{\nabla}_N = \mathbf{B}\,,$$ where $\mathbf{G}$ and $\mathbf{B}$ are the following $n_N \times n_N$ and $n_N \times N^{\textrm{dof}}_{{P}}$ matrices, respectively: $$\begin{aligned}
\label{gb_matrices}
(\mathbf{G})_{\alpha \beta} &= P_0 m_\beta\,, \quad &&\textrm{for} \enskip \alpha = 1, \enskip \beta = 1,...,n_N\,, \\
(\mathbf{G})_{\alpha \beta} &= \int_{{P}}\nabla m_\alpha \cdot \nabla m_\beta \, d\mathbf{x}\,, \quad &&\textrm{for} \enskip \alpha \geq 2, \enskip \beta = 1,...,n_N\,,\\
(\mathbf{B})_{\alpha \ell} &= P_0 \varphi_\ell\,, \quad &&\textrm{for} \enskip \alpha = 1, \enskip \ell = 1,...,N^{\textrm{dof}}_{{P}}\,, \\
(\mathbf{B})_{\alpha \ell} &= \int_{{P}} \nabla m_\alpha \cdot \nabla \varphi_\ell \, d\mathbf{x}\,, \quad &&\textrm{for} \enskip \alpha \geq 2, \enskip \ell = 1,...,N^{\textrm{dof}}_{{P}}\,,\end{aligned}$$ while $\accentset{\star}{\mathbf{\Pi}}^{\nabla}_N$ is the $n_N \times N^{\textrm{dof}}_{{P}}$ matrix representation of the projection operator $\Pi^\nabla_{{P},N}$ in the basis set $\mathcal{M}_N({P})$, i.e., $(\accentset{\star}{\mathbf{\Pi}}^{\nabla}_N)_{\alpha \ell} = s_\ell^\alpha$.
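For illustration purposes, a minimal Python sketch that solves the linear system [\[eq:proj_system\]](#eq:proj_system){reference-type="eqref" reference="eq:proj_system"} for the matrix representation of the elliptic projector is reported below. The matrices $\mathbf{G}$ and $\mathbf{B}$ are assumed to be assembled elsewhere according to [\[gb_matrices\]](#gb_matrices){reference-type="eqref" reference="gb_matrices"}; the routine name and the dimensions used in the usage example are purely illustrative.

```python
import numpy as np

def elliptic_projector(G, B):
    """Solve G * Pi_nabla = B for the matrix representation of the elliptic
    projector (size n_N x N_dof), cf. the system (eq:proj_system).

    G : (n_N, n_N) matrix built from P_0 m_beta and the grad-grad products.
    B : (n_N, N_dof) matrix built from P_0 phi_l and grad m_alpha . grad phi_l.
    """
    # One linear solve with N_dof right-hand sides, one per virtual basis function.
    return np.linalg.solve(G, B)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_N, N_dof = 6, 13                       # purely illustrative dimensions
    G = np.eye(n_N) + 0.1 * rng.standard_normal((n_N, n_N))
    B = rng.standard_normal((n_N, N_dof))
    Pi_nabla = elliptic_projector(G, B)
    print(Pi_nabla.shape)                    # -> (6, 13)
```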
### $L_2$ projection operator {#ssec.L2proj}
Similarly to what was done in the previous section, we introduce here the $L_2$ projection operator $\Pi^0_{{P},N}: V^{h}_N({P}) \rightarrow \mathbbm{P}_{N}({P})$, again mapping functions from the virtual to the polynomial space of degree $N$. This projector arises from the following orthogonality condition: $$\label{eq:vem_projL1}
\int_{{P}} p (\Pi^0_{{P},N}\varv- \varv) \, d\mathbf{x}= 0\,, \qquad \forall p \in \mathbbm{P}_{N}({P})\,,\quad \forall \varv\in V^{h}_N({P})\, .$$ Once more, it is possible to compute the projector operator only using the known degrees of freedom of $\varv$. Indeed, since $\Pi^0_{{P},N}\varv$ is an element of $\mathbbm{P}_{N}({P})$, we have that $$\label{eq:vem_projL2}
\Pi^0_{{P},N}\varv= \sum_{\beta=1}^{n_N} r^\beta m_\beta\,,$$ which allows us to reformulate the condition [\[eq:vem_projL1\]](#eq:vem_projL1){reference-type="eqref" reference="eq:vem_projL1"} as $$\label{eq:vem_projL3}
\sum_{\beta=1}^{n_N} r^\beta \int_{{P}} m_\alpha m_\beta \, d\mathbf{x}= \int_{{P}} m_\alpha \varv\, d\mathbf{x}\,, \qquad \alpha = 1,...,n_N\,.$$ Also in this case, the term on the left side of the above equation can be directly evaluated from the monomials [\[eq:m_alpha\]](#eq:m_alpha){reference-type="eqref" reference="eq:m_alpha"}, while the integral on the right side is still not computable. To solve the problem, we approximate the function $\varv$ with its elliptic projection $\Pi^\nabla_{{P},N}\varv$ only for monomials of degree $N$ and $(N-1)$, since the moments for $m_\alpha \in \mathbbm{P}_{N-2}({P})$ are readily known as degrees of freedom of the virtual basis. For $\varv= \varphi_\ell$, the resulting system can be written in the following matrix notation: $$\label{eq:proj_system2}
\mathbf{H} \accentset{\star}{\mathbf{\Pi}}^0_N = \mathbf{C}\,,$$ where $\mathbf{H}$ and $\mathbf{C}$ are an $n_N \times n_N$ and $n_N \times N^{\textrm{dof}}_{{P}}$ matrix, respectively, defined as $$\begin{aligned}
\label{h_matrix}
(\mathbf{H})_{\alpha \beta} &=\int_{{P}}m_\alpha m_\beta \, d\mathbf{x}\,, \quad\qquad\qquad \alpha,\beta = 1,...,n_N\,, \\
\label{c_matrix}
(\mathbf{C})_{\alpha \ell} &=
\begin{cases}
\int_{{P}}m_\alpha \varphi_\ell \, d\mathbf{x}\,, \qquad &\alpha= 1,...,n_{N-2}, \enskip \ell = 1,...,N^{\textrm{dof}}_{{P}}\,, \\
\int_{{P}}m_\alpha \Pi^\nabla_{{P},N}\varphi_\ell \, d\mathbf{x}\,, \qquad &n_{N-2}+1 \leq \alpha \leq n_N, \enskip \ell = 1,...,N^{\textrm{dof}}_{{P}}\,.
\end{cases} \end{aligned}$$ The matrix representation of the $L_2$ projection operator $\Pi^0_{{P},N}$ is given by $\accentset{\star}{\mathbf{\Pi}}^0_N$ with dimension $n_N \times N^{\textrm{dof}}_{{P}}$.
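A minimal sketch of the assembly and solution of system [\[eq:proj_system2\]](#eq:proj_system2){reference-type="eqref" reference="eq:proj_system2"} is given below, assuming that the scaled monomials are ordered by increasing degree and that the internal moments and the elliptic projector are available from the previous steps; names and signatures are illustrative only.

```python
import numpy as np

def l2_projector(H, moments, Pi_nabla, n_Nm2):
    """Sketch of eq. (proj_system2): solve H * Pi0 = C for the L2 projector.

    H        : (n_N, n_N) mass matrix of the scaled monomials on the cell P.
    moments  : (n_Nm2, N_dof) internal-moment dofs, i.e. int_P m_alpha phi_l dx
               for alpha <= n_{N-2}, known as degrees of freedom.
    Pi_nabla : (n_N, N_dof) matrix of the elliptic projector (eq:proj_system).
    n_Nm2    : dimension of the polynomial space P_{N-2}(P).
    """
    n_N, N_dof = Pi_nabla.shape
    C = np.empty((n_N, N_dof))
    # Rows associated with monomials up to degree N-2: exact internal moments.
    C[:n_Nm2, :] = moments
    # Remaining rows: int_P m_alpha (Pi^nabla phi_l) dx = (H @ Pi_nabla)[alpha, l].
    C[n_Nm2:, :] = H[n_Nm2:, :] @ Pi_nabla
    return np.linalg.solve(H, C)
```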
## Nonconforming Virtual Element basis functions {#ssec.basis}
The local Virtual Element space can be described by the elliptic and $L_2$ projection operators previously introduced (see definitions [\[eq:proj_system\]](#eq:proj_system){reference-type="eqref" reference="eq:proj_system"} and [\[eq:proj_system2\]](#eq:proj_system2){reference-type="eqref" reference="eq:proj_system2"}, respectively). The Virtual Element Method [@vem2] is then built upon the *global* Virtual Element space that is constructed by gluing together all the local spaces $V^{h}_N(P_i)$ for $i=1,...,N_P$. In this sense, the VEM methodology belongs to the class of *continuous* finite element solvers (FEM). By contrast, the nonconforming Virtual Element Method, originally presented in [@VEM_nc_org], allows the solution to be discontinuous at element vertexes by defining nonconforming local virtual basis functions along the edges of each cell, meaning that continuity along the boundary $\partial {P}$ is no longer guaranteed. A global Virtual Element space is still defined by formally collecting all the local virtual spaces, hence the problem is again solved at the *global* level.
Here, we aim at designing a new ansatz for the numerical representation of the solution within each control volume $P$, thus we are interested in the definition of local spaces, and corresponding basis functions, in the framework of discontinuous Galerkin methods. In practice, we need to provide an explicit formulation for the basis functions $\phi_\ell(\mathbf{x})$ appearing in the ansatz [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"}. To this end, the idea is to use the *local* Virtual Element space in [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"} to approximate the numerical solution in the polynomial space $\mathbbm{P}_{N}$, bearing in mind that we can actually employ the projectors previously described. Specifically, the basis functions $\phi_\ell(\mathbf{x})$ in [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"} are then *approximated* using the $L_2$ projector [\[eq:proj_system2\]](#eq:proj_system2){reference-type="eqref" reference="eq:proj_system2"} as $$\label{eqn.vem_basis}
\phi_\ell(\mathbf{x}) := \varphi_\ell(\mathbf{x}) \simeq \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,\ell} \cdot m_{\alpha}, \qquad \ell=1,...,N^{\textrm{dof}}_{{P}},$$ where $N^{\textrm{dof}}_{{P}}$ is the total number of degrees of freedom, hence implying $\mathcal{N}=N^{\textrm{dof}}_{{P}}$ in [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"}. The *spatial* discrete approximation of the generic quantity $q(\mathbf{x},t)$ writes then $$q_h^n:=q\left(\mathbf{x}|_{P},t^n \right) = \sum \limits_{\ell=1}^{N^{\textrm{dof}}_{{P}}} \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,\ell} \cdot m_{\alpha} \, \hat{q}^{n}_{\ell}, \qquad \mathbf{x}\in {P}.
\label{eqn.uhvem}$$ We refer to this ansatz as *nonconforming Virtual Element basis*, since we admit jumps in the definition of the global solution space along the entire edges of the computational elements, like in classical Godunov-type DG or finite volume solvers. The numerical solution remains locally defined within each cell, and it spans the local Virtual Element space [\[eq:Vh:def\]](#eq:Vh:def){reference-type="eqref" reference="eq:Vh:def"}.
In order to deal with a monolithic space-time discretization, we also need to construct a consistent space-time basis. This is simply achieved by a tensor product of the nonconforming Virtual Element basis in space [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"} with one-dimensional nodal basis functions in time. In particular, the time nodal basis consists of $(N+1)$ linearly independent Lagrange interpolating polynomials belonging to the space $\mathbbm{P}_{N}$, i.e. $\left\{\psi_k\right\}_{k=1}^{N+1}$, passing through a set of $(N+1)$ nodal points $\left\{\tau_k\right\}_{k=1}^{N+1}$, which are assumed to be the Gauss-Legendre nodes [@stroud] defined on the unit time interval given by the mapping [\[eqn.dt\]](#eqn.dt){reference-type="eqref" reference="eqn.dt"}. The resulting space-time basis functions $\theta_r(\mathbf{x},\tau)$ are expressed as $$\label{eqn.vem_basis_st}
\theta_r(\mathbf{x},\tau) := \varphi_\ell(\mathbf{x}) \cdot \psi_k(\tau) \simeq \left( \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,\ell} \cdot m_{\alpha} \right) \cdot \psi_k, \qquad \ell=1,...,N^{\textrm{dof}}_{{P}}, \quad k=1,\,...,N+1, \quad r=1,...,N^{\textrm{dofst}}_{{P}},$$ where the total number of space-time degrees of freedom is $N^{\textrm{dofst}}_{{P}}=N^{\textrm{dof}}_{{P}}\cdot (N+1)$. Consequently, the numerical representation of the generic *space-time* quantity $q(\mathbf{x},t)$ explicitly reads $$q_h:=q\left(\mathbf{x}|_{P},t \in {\Delta t}\right) = \sum \limits_{\ell=1}^{N^{\textrm{dofst}}_{{P}}} \theta_\ell(\mathbf{x},\tau) \, \hat{q}_{\ell} , \qquad \mathbf{x}\in {P}, \quad \tau \in [0;1].
\label{eqn.qhvem}$$
We underline that both spatial [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"} and space-time [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"} basis functions are actually computed at the aid of the $L_2$ projector detailed in Section [3.3.2](#ssec.L2proj){reference-type="ref" reference="ssec.L2proj"}, hence implying an *approximation* of the virtual basis functions $\{\varphi_\ell\}_{\ell=1}^{N^{\textrm{dof}}_{{P}}}$ that remain unknown.
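The tensor-product construction of the space-time basis can be sketched in a few lines of Python; the spatial basis values are assumed to be provided by the $L_2$ projector of Section [3.3.2](#ssec.L2proj){reference-type="ref" reference="ssec.L2proj"}, and the function names are illustrative.

```python
import numpy as np

def lagrange_time_basis(N):
    """Lagrange polynomials through the (N+1) Gauss-Legendre nodes mapped to [0,1]."""
    x, _ = np.polynomial.legendre.leggauss(N + 1)
    tau_nodes = 0.5 * (x + 1.0)                       # map [-1,1] to [0,1]

    def psi(k, tau):
        val = 1.0
        for j, tj in enumerate(tau_nodes):
            if j != k:
                val *= (tau - tj) / (tau_nodes[k] - tj)
        return val

    return tau_nodes, psi

def spacetime_basis(phi_vals, N, tau):
    """Tensor product theta_r(x,tau) = phi_l(x) * psi_k(tau), cf. (eqn.vem_basis_st).

    phi_vals : values of the N_dof spatial basis functions at a point x,
               obtained through the L2 projector as in (eqn.vem_basis).
    """
    _, psi = lagrange_time_basis(N)
    psi_vals = np.array([psi(k, tau) for k in range(N + 1)])
    # r runs over all (l, k) pairs, hence N_dofst = N_dof * (N+1) values.
    return np.outer(phi_vals, psi_vals).ravel()
```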
**Remark 1** (Orthogonalization of the Virtual Element basis). *If the shape of the polygonal cell is very irregular, meaning that the mesh regularity assumptions defined in Section [3.1](#ssec.domain){reference-type="ref" reference="ssec.domain"} are barely satisfied, the construction of the projection matrices for the elliptic and $L_2$ operators might lead to very ill-conditioned linear systems [@Berrone2017]. This poses serious limitations to the practical usage of the local Virtual Element space and the associated basis, especially for degree $N>2$, as analyzed in [@Mascotto2018]. To overcome this problem, in [@Berrone2017] an orthogonalization of the projectors is proposed, showing a remarkable improvement in the condition number of the algebraic systems related to the construction of the VEM projectors, namely [\[eq:proj_system\]](#eq:proj_system){reference-type="eqref" reference="eq:proj_system"} and [\[eq:proj_system2\]](#eq:proj_system2){reference-type="eqref" reference="eq:proj_system2"}.*
*Therefore, for $N>2$ we propose a Gram-Schmidt orthogonalization along the lines of [@Berrone2017]. However, the entire basis is orthogonalized up to degree $N$, and not only up to degree $N-1$ as forwarded in [@Berrone2017]. To that aim, let us introduce matrix $\mathbf{R}$ such that $$\label{eqn.L_matrix}
\mathbf{R} \, \mathbf{H} \, \mathbf{R}^{\top} = \boldsymbol{\Lambda},$$ where matrix $\mathbf{H}$ is given by [\[h_matrix\]](#h_matrix){reference-type="eqref" reference="h_matrix"} and $\boldsymbol{\Lambda}$ is the diagonal matrix with the eigenvalues of $\mathbf{H}$. It follows that the rows of matrix $\mathbf{R}$ contain the eigenvectors of $\mathbf{H}$. The orthogonalization matrix $\mathbf{Z}$ is then computed as $$\label{eqn.T_matrix}
\mathbf{Z} = \sqrt{\mathbf{\boldsymbol{\Lambda}}^{-1}} \, \mathbf{R}.$$ In this way, the set of orthonormal polynomials $\{\mathbf{z_\alpha}\}_{\alpha=1}^{n_N}$ spanning the polynomial space $\mathbbm{P}_{N}$ are defined by $$z_{\alpha} = (\mathbf{Z})_{\alpha \beta} \, m_{\beta}, \qquad \textrm{for} \enskip \alpha = 1,...,n_N, \enskip \beta = 1,...,n_N,$$ where $m_{\beta}$ are the monomials introduced in [\[eq:m_alpha\]](#eq:m_alpha){reference-type="eqref" reference="eq:m_alpha"}. It is easy to verify that the orthonormal polynomials yield an identity mass matrix $\tilde{\mathbf{H}}$ of dimension $n_N \times n_N$, indeed $$\begin{aligned}
\label{eqn.mass_matrix_orth}
(\tilde{\mathbf{H}})_{\alpha \beta} &=& \int_{{P}}z_\alpha z_\beta \, d\mathbf{x}\nonumber \\
&=& \int_{{P}}(\mathbf{Z})_{\alpha k} m_k \, (\mathbf{Z})_{\beta \ell} m_\ell \, d\mathbf{x}\nonumber \\
&=& \int_{{P}}(\mathbf{Z})_{\alpha k} m_k \, m_\ell (\mathbf{Z})_{\beta \ell} \, d\mathbf{x}\nonumber \\
&=& (\mathbf{Z})_{\alpha k} \, (\mathbf{H})_{k \ell} \, (\mathbf{Z})_{\beta \ell} \nonumber \\
&=& \sqrt{\mathbf{(\Lambda)}_{\alpha \alpha}^{-1}} \, (\mathbf{R})_{\alpha k} \, (\mathbf{H})_{k \ell} \, (\mathbf{R})_{\beta \ell} \, \sqrt{\mathbf{(\Lambda)}_{\beta \beta}^{-1}} \nonumber \\
&=& \sqrt{\mathbf{(\Lambda)}_{\alpha \alpha}^{-1}} \, \mathbf{(\Lambda)}_{\alpha \beta} \, \sqrt{\mathbf{(\Lambda)}_{\beta \beta}^{-1}} = \delta_{\alpha \beta}.\end{aligned}$$ The nonconforming Virtual Element basis can thus be orthogonalized by modifying the definition [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"} as follows: $$\label{eqn.vem_basis_orth}
\phi_\ell(\mathbf{x}) = \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,\ell} \cdot \left( (\mathbf{Z})_{\alpha \beta} \, m_{\beta} \right), \qquad \ell=1,...,N^{\textrm{dof}}_{{P}}, \quad \alpha,\beta=1,...,n_N.$$ The space-time basis functions are always constructed as a tensor product of the orthogonalized spatial basis [\[eqn.vem_basis_orth\]](#eqn.vem_basis_orth){reference-type="eqref" reference="eqn.vem_basis_orth"} and the one-dimensional Lagrange nodal basis $\left\{\psi_k\right\}_{k=1}^{N+1}$ according to [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"}.*
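A minimal Python sketch of the orthogonalization discussed in Remark 1, based on the eigendecomposition of the monomial mass matrix $\mathbf{H}$, reads as follows; the surrogate matrix used in the usage example is only meant to keep the snippet self-contained.

```python
import numpy as np

def orthonormalize_monomials(H):
    """Build Z such that Z H Z^T = I from the eigendecomposition of the
    symmetric positive definite monomial mass matrix H, cf. (eqn.T_matrix)."""
    eigval, eigvec = np.linalg.eigh(H)   # columns of eigvec are eigenvectors of H
    R = eigvec.T                         # rows of R hold the eigenvectors: R H R^T = Lambda
    return np.diag(1.0 / np.sqrt(eigval)) @ R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 6))
    H = A @ A.T + 6.0 * np.eye(6)        # surrogate SPD mass matrix (illustrative only)
    Z = orthonormalize_monomials(H)
    print(np.allclose(Z @ H @ Z.T, np.eye(6)))   # -> True
```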
**Remark 2** (Derivatives of the Virtual Element basis). *The discrete spatial derivatives of the nonconforming Virtual Element basis [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"} are evaluated as follows: $$\label{eqn.vem_basis_der}
\frac{\partial \phi_\ell}{\partial x} = \left( \accentset{\star}{\mathbf{\Pi}}^{0,x}_{N-1}\right)_{\alpha,\ell} \cdot m_{\alpha}, \quad \frac{\partial \phi_\ell}{\partial y} = \left( \accentset{\star}{\mathbf{\Pi}}^{0,y}_{N-1}\right)_{\alpha,\ell} \cdot m_{\alpha}, \qquad \ell=1,...,N^{\textrm{dof}}_{{P}}, \quad \alpha=1, ..., n_{N-1}.$$ The $L_2$ operators $\accentset{\star}{\mathbf{\Pi}}^{0,x}_{N-1}$ and $\accentset{\star}{\mathbf{\Pi}}^{0,y}_{N-1}$ project the derivative of the solution in $x$ and $y$ direction from the local Virtual Element space $V^{h}_{N-1}({P})$ to the polynomial space $\mathbbm{P}_{N-1}$, thus they are the matrix representation of the projectors $(\Pi^{0,x}_{{P},N-1},\Pi^{0,y}_{{P},N-1}): V^{h}_{N-1}({P}) \rightarrow \mathbbm{P}_{N-1}$. They are constructed following the same methodology used for the $L_2$ projector described in Section [3.3.2](#ssec.L2proj){reference-type="ref" reference="ssec.L2proj"}. However, they refer to a polynomial degree $N-1$ and they project the derivatives of the virtual basis functions. In particular, the derivative operators introduced in [\[eqn.vem_basis_der\]](#eqn.vem_basis_der){reference-type="eqref" reference="eqn.vem_basis_der"} are computed as $$\begin{aligned}
\accentset{\star}{\mathbf{\Pi}}^{0,x}_{N-1} &= \hat{\mathbf{H}}^{-1}\mathbf{E}^x, \\
\accentset{\star}{\mathbf{\Pi}}^{0,y}_{N-1} &= \hat{\mathbf{H}}^{-1}\mathbf{E}^y, \end{aligned}$$ where matrix $\hat{\mathbf{H}}$ is obtained by taking the first $n_{N-1}$ rows and columns of matrix $\mathbf{H}$ defined in [\[h_matrix\]](#h_matrix){reference-type="eqref" reference="h_matrix"}. Matrices $\mathbf{E}^x$ and $\mathbf{E}^y$ account for the derivatives of the virtual basis and they write $$\begin{aligned}
(\mathbf{E}^x)_{k\alpha} = \int_{{P}} \varphi_{k,x}m_\alpha \, d\mathbf{x}, \quad\quad (\mathbf{E}^y)_{k\alpha} = \int_{{P}} \varphi_{k,y}m_\alpha \, d\mathbf{x}, \quad \quad \alpha = 1,...,n_{N-1}.\end{aligned}$$ In practice, they can be computed performing integration by parts: $$\label{eqn.Ex}
\int_{{P}} \varphi_{k,x}m_\alpha \, d\mathbf{x}= -\int_{{P}} \varphi_{k} m_{\alpha,x} \, d\mathbf{x}+ \int_{\partial {P}} \varphi_{k} m_\alpha n_x \, dS, \quad \quad \alpha = 1,...,n_{N-1}.$$ The first integral on the right hand side of [\[eqn.Ex\]](#eqn.Ex){reference-type="eqref" reference="eqn.Ex"} can be computed relying on the internal degrees of freedom of the local Virtual Element space (scaled internal moments which are available up to degree $(N-2)$), while the second integral can also be readily evaluated using the degrees of freedom lying on the boundary $\partial {P}$, which coincide with the Gauss-Lobatto quadrature nodes along each edge of the cell. The same holds true for the matrix $\mathbf{E}^y$.*
*The time derivative of the space-time Virtual Element basis [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"} does not pose any problem, and it can be simply evaluated by taking the derivative of the Lagrange basis in time. Therefore the space-time derivatives are computed at the aid of the definitions [\[eqn.vem_basis_der\]](#eqn.vem_basis_der){reference-type="eqref" reference="eqn.vem_basis_der"} as follows: $$\frac{\partial \theta_r}{\partial x} = \frac{\partial \phi_\ell}{\partial x} \cdot \psi_k, \quad \frac{\partial \theta_r}{\partial y} = \frac{\partial \phi_\ell}{\partial y} \cdot \psi_k, \quad \frac{\partial \theta_r}{\partial \tau} = \phi_\ell \cdot \frac{\partial \psi_k}{\partial \tau}.$$*
## Space-time VEM-DG scheme
We discretize the Navier--Stokes equations defined by system [\[eqn.PDE\]](#eqn.PDE){reference-type="eqref" reference="eqn.PDE"}-[\[eqn.NS\]](#eqn.NS){reference-type="eqref" reference="eqn.NS"} by designing an ADER VEM-DG scheme, which allows arbitrary accuracy in space and time to be reached. This numerical scheme consists of two main steps.
1. The computation of a local space-time ADER *predictor* starting from the known solution at the current time step $t^n$ for each mesh element ${P}$. This step gives a high order space-time approximation of the discrete solution that is valid locally inside each cell within the time interval $[t^n;t^{n+1}]$. There is no need for information exchanges between neighbor cells.
2. A *corrector* step that leads to the effective solution at the new time step $t^{n+1}$ by directly integrating the system of equations over the space-time control volume ${P}\times [t^n;t^{n+1}]$. The spatial discretization is built upon the novel nonconforming Virtual Element basis used to devise a discontinuous Galerkin scheme. Thanks to the usage of Riemann solvers at the cell interfaces, the corrector step will also take into account the interaction between the predictors of neighboring cells.
### ADER local space-time predictor {#ssec.ader}
The ADER methodology was first introduced in [@toro3; @titarevtoro] with the aim of computing a predictor solution $\mathbf{q}_h(\mathbf{x},t)$ by solving the generalized Riemann problem "in the small", hence neglecting interactions between neighbor cells. To evolve the solution in time, time derivatives must be known. They can be evaluated relying on an element-local *weak formulation* of the governing equations according to [@Dumbser2008].
We start by multiplying the PDE [\[eqn.PDE\]](#eqn.PDE){reference-type="eqref" reference="eqn.PDE"} by a set of test functions $\theta_k(\mathbf{x},\tau)$ of the same form as the space-time basis [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"}, and then we integrate the governing system over a space-time control volume given by ${P}\times [0;1]$, obtaining $$\label{eqn.ader1}
\int_{0}^{1} \int_{{P}} \theta_k \, \frac{\partial \mathbf{U}}{\partial \tau} \, d\mathbf{x}\, d\tau = - {\Delta t}\, \int_{0}^{1} \int_{{P}} \theta_k \, \left( \frac{\partial \mathbf{f}}{\partial x} + \frac{\partial \mathbf{g}}{\partial y} \right) \, d\mathbf{x}\, d\tau, \qquad k = 1, ..., N^{\textrm{dofst}}_{{P}}.$$ Let us notice that the PDE has been reformulated in the *reference time coordinate* $\tau$, hence implying the presence of the Jacobian $1/{\Delta t}$ of the transformation according to the mapping [\[eqn.dt\]](#eqn.dt){reference-type="eqref" reference="eqn.dt"}. The unknown discrete solution $\mathbf{q}_h$ as well as the fluxes $(\mathbf{f}_h,\mathbf{g}_h)$ are approximated using the nonconforming Virtual Element space-time basis [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"}, thus $$\label{eqn.qh}
\mathbf{q}_h(\mathbf{x},t) = \sum \limits_{\ell=1}^{N^{\textrm{dofst}}_{{P}}} \theta_\ell(\mathbf{x},\tau) \, \hat{\mathbf{q}}_{\ell}, \quad \mathbf{f}_h(\mathbf{x},t) = \sum \limits_{\ell=1}^{N^{\textrm{dofst}}_{{P}}} \theta_\ell(\mathbf{x},\tau) \, \hat{\mathbf{f}}_{\ell}, \quad \mathbf{g}_h(\mathbf{x},t) = \sum \limits_{\ell=1}^{N^{\textrm{dofst}}_{{P}}} \theta_\ell(\mathbf{x},\tau) \, \hat{\mathbf{g}}_{\ell}.$$ Thanks to the interpolation property [\[eqn.interp_prop\]](#eqn.interp_prop){reference-type="eqref" reference="eqn.interp_prop"}, it holds that $$\label{eqn.f_interp}
\hat{\mathbf{f}}_\ell=\mathbf{f}(\hat{\mathbf{q}}_\ell) \quad \textrm{and} \quad \hat{\mathbf{g}}_\ell=\mathbf{g}(\hat{\mathbf{q}}_\ell),$$ so the degrees of freedom of the fluxes can be simply evaluated *pointwise* from $\mathbf{q}_h$. The above definitions are inserted into the weak formulation [\[eqn.ader1\]](#eqn.ader1){reference-type="eqref" reference="eqn.ader1"}, hence obtaining $$\label{eqn.ader2}
\int_{0}^{1} \int_{{P}} \theta_k \, \frac{\partial \theta_\ell}{\partial \tau} \hat{\mathbf{q}}_\ell \, d\mathbf{x}\, d\tau = - {\Delta t}\, \int_{0}^{1} \int_{{P}} \theta_k \, \left( \frac{\partial \theta_\ell}{\partial x} \hat{\mathbf{f}}_\ell + \frac{\partial \theta_\ell}{\partial y} \hat{\mathbf{g}}_\ell \right) \, d\mathbf{x}\, d\tau, \qquad k = 1, ..., N^{\textrm{dofst}}_{{P}}.$$ To satisfy the causality principle accounting only for information coming from the past in each cell, the term on the left hand side of [\[eqn.ader2\]](#eqn.ader2){reference-type="eqref" reference="eqn.ader2"} is integrated by parts in time, yielding $$\begin{aligned}
\label{eqn.ader3}
&& \left( \int_{{P}} \theta_k(\mathbf{x},1) \theta_\ell(\mathbf{x},1) \, d\mathbf{x}- \int_{0}^{1} \int_{{P}} \frac{\partial \theta_k}{\partial \tau} \, \theta_\ell \, d\mathbf{x}\, d\tau \right) \hat{\mathbf{q}}_\ell =\nonumber \\
&& \int_{{P}} \theta_k(\mathbf{x},0) \phi_\ell(\mathbf{x},0) \, \hat{\mathbf{u}}^n_\ell \, d\mathbf{x}-
{\Delta t}\, \int_{0}^{1} \int_{{P}} \theta_k \, \left( \frac{\partial \theta_\ell}{\partial x} \hat{\mathbf{f}}_\ell + \frac{\partial \theta_\ell}{\partial y} \hat{\mathbf{g}}_\ell \right) \, d\mathbf{x}\, d\tau, \qquad k = 1, ..., N^{\textrm{dofst}}_{{P}},\end{aligned}$$ where the numerical solution at the current time $t^n$ (i.e. $\tau=0$) is expressed with its definition given by [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"} in terms of the spatial basis $\phi_\ell$ defined in [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"}. We can rewrite the weak formulation [\[eqn.ader3\]](#eqn.ader3){reference-type="eqref" reference="eqn.ader3"} in compact matrix-vector notation as $$\label{eqn.ader4}
\mathbf{K}_1 \, \hat{\mathbf{q}}_\ell = \mathbf{F}_0 \, \mathbf{u}^n_\ell - {\Delta t}\left( \mathbf{K}_x\, \mathbf{f}(\hat{\mathbf{q}}_\ell) + \mathbf{K}_y \,\mathbf{g}(\hat{\mathbf{q}}_\ell)\right) ,$$ with the definitions $$\begin{aligned}
\label{eqn.st_matrices}
&& \mathbf{K}_1 = \int_{{P}} \theta_k(\mathbf{x},1) \theta_\ell(\mathbf{x},1) \, d\mathbf{x}- \int_{0}^{1} \int_{{P}} \frac{\partial \theta_k}{\partial \tau} \, \theta_\ell \, d\mathbf{x}\, d\tau, \qquad \mathbf{F}_0 = \int_{{P}} \theta_k(\mathbf{x},0) \phi_\ell(\mathbf{x},0) \, d\mathbf{x}, \nonumber \\
&& \mathbf{K}_x = \int_{0}^{1} \int_{{P}} \theta_k \, \frac{\partial \theta_\ell}{\partial x} \, d\mathbf{x}\, d\tau, \qquad \mathbf{K}_y = \int_{0}^{1} \int_{{P}} \theta_k \, \frac{\partial \theta_\ell}{\partial y} \, d\mathbf{x}\, d\tau.\end{aligned}$$ The nonlinear algebraic equation system [\[eqn.ader4\]](#eqn.ader4){reference-type="eqref" reference="eqn.ader4"} is solved locally for the unknown space-time expansion coefficients $\hat{\mathbf{q}}_\ell$ with a simple fixed-point iterative scheme: $$\label{eqn.ader5}
\hat{\mathbf{q}}_\ell^{r+1} = (\mathbf{K}_1)^{-1} \left( \mathbf{F}_0 \, \mathbf{u}^n_\ell - {\Delta t}\left( \mathbf{K}_x \, \mathbf{f}(\hat{\mathbf{q}}_\ell^r) + \mathbf{K}_y \, \mathbf{g}(\hat{\mathbf{q}}_\ell^r) \right) \right),$$ where the superscript $r$ indicates the iteration number. The iteration stops when the residual of [\[eqn.ader5\]](#eqn.ader5){reference-type="eqref" reference="eqn.ader5"} is less than a prescribed tolerance that guarantees precision, typically set to $10^{-12}$. However, a smaller number of iterations may also be sufficient to achieve at least the formal order of accuracy, as recently investigated in [@AdaADER23].
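A minimal Python sketch of this fixed-point iteration is reported below; the space-time matrices, the dofs at time $t^n$ and the pointwise flux functions are assumed to be provided by the rest of the implementation, and names and shapes are illustrative.

```python
import numpy as np

def ader_predictor(K1, F0, Kx, Ky, u_n, f, g, dt, tol=1e-12, max_iter=100):
    """Picard iteration (eqn.ader5) for the element-local space-time predictor.

    K1, Kx, Ky : (N_dofst, N_dofst) space-time matrices of (eqn.st_matrices).
    F0         : (N_dofst, N_dof) matrix coupling the predictor to the data at t^n.
    u_n        : (N_dof, n_vars) dofs of the solution at time t^n.
    f, g       : pointwise flux functions acting on the space-time dof array.
    """
    rhs0 = F0 @ u_n
    q = np.linalg.solve(K1, rhs0)            # initial guess: fluxes neglected
    for _ in range(max_iter):
        q_new = np.linalg.solve(K1, rhs0 - dt * (Kx @ f(q) + Ky @ g(q)))
        if np.linalg.norm(q_new - q) < tol:  # stop on the residual of (eqn.ader5)
            return q_new
        q = q_new
    return q
```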
**Remark 3** (Computation of the space-time stiffness matrices). *The space-time matrices given by [\[eqn.st_matrices\]](#eqn.st_matrices){reference-type="eqref" reference="eqn.st_matrices"} involve integration of the nonconforming Virtual Element space-time basis functions [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"} over the space-time control volume ${P}\times [0;1]$. They are directly computed by means of very efficient quadrature rules on general polygonal cells forwarded in [@SOMMARIVA2009886; @SOMMARIVA2020]. This gives the so-called consistency term of the matrices, which is enough for those matrices which do not need to be inverted, namely $\mathbf{F_0}$, $\mathbf{K_x}$ and $\mathbf{K_y}$. However, the space-time stiffness matrix $\mathbf{K_1}$ has to be inverted in [\[eqn.ader5\]](#eqn.ader5){reference-type="eqref" reference="eqn.ader5"}, and a stabilization term must be added as usually done in the Virtual Element framework [@vem2; @vem1]. Indeed, mass and stiffness VEM matrices can become highly ill-conditioned, especially when $N>2$ (see [@Mascotto2018]).*
*There is no analysis in the literature for space-time Virtual Element matrices, thus we mimic and modify what has been done in the purely spatial setting [@vem2] in order to account for novel space-time VEM operators. Let us write matrix $\mathbf{K_1}$ using the space-time virtual basis functions $\tilde{\varphi}_{ik}$, which are unknown: $$\label{eqn.K1_virt}
(\mathbf{K_1})_{k \ell} = \int_{0}^{1} \int_{{P}} \tilde{\varphi}_{ik} \frac{\partial \tilde{\varphi}_{j\ell}}{\partial \tau} \, d\mathbf{x}d\tau, \qquad \tilde{\varphi}_{ik} = \varphi_i \, \psi_k, \quad \tilde{\varphi}_{j\ell} = \varphi_j \, \psi_\ell,$$ where $\psi_k,\psi_\ell$ are the time basis functions described in Section [3.4](#ssec.basis){reference-type="ref" reference="ssec.basis"}. We then introduce the expansions $$\begin{aligned}
\label{eqn.expansion_st}
\tilde{\varphi}_{ik} &=& \Pi^0_{{P},N}\varphi_i \, \psi_k + (I-\Pi^0_{{P},N}) \varphi_i \, \psi_k \\
\frac{\partial \tilde{\varphi}_{j\ell}}{\partial \tau} &=& \frac{\partial}{\partial \tau} \left( \Pi^0_{{P},N}\varphi_j \, \psi_\ell + (I-\Pi^0_{{P},N}) \varphi_j \, \psi_\ell \right),\end{aligned}$$ with $I$ being the identity matrix of dimension $N^{\textrm{dof}}_{{P}}\times N^{\textrm{dof}}_{{P}}$. The above expressions are plugged into the definition [\[eqn.K1_virt\]](#eqn.K1_virt){reference-type="eqref" reference="eqn.K1_virt"} leading to $$\begin{aligned}
\label{eqn.K1_virt2}
\int_{0}^{1} \int_{{P}} \tilde{\varphi}_{ik} \frac{\partial \tilde{\varphi}_{j\ell}}{\partial \tau} \, d\mathbf{x}d\tau &=& \int_{0}^{1} \int_{{P}} \Pi^0_{{P},N}\varphi_i \, \psi_k \cdot \frac{\partial}{\partial \tau} \left( \Pi^0_{{P},N}\varphi_j \, \psi_\ell \right) \, d\mathbf{x}d\tau \nonumber \\
&+& \int_{0}^{1} \int_{{P}} \Pi^0_{{P},N}\varphi_i \, \psi_k \cdot \frac{\partial}{\partial \tau} \left( (I-\Pi^0_{{P},N}) \varphi_j \, \psi_\ell \right) \, d\mathbf{x}d\tau \nonumber \\
&+& \int_{0}^{1} \int_{{P}} (I-\Pi^0_{{P},N}) \varphi_i \, \psi_k \cdot \frac{\partial}{\partial \tau} \left( \Pi^0_{{P},N}\varphi_j \, \psi_\ell \right) \, d\mathbf{x}d\tau \nonumber \\
&+& \int_{0}^{1} \int_{{P}} (I-\Pi^0_{{P},N}) \varphi_i \, \psi_k \cdot \frac{\partial}{\partial \tau} \left( (I-\Pi^0_{{P},N}) \varphi_j \, \psi_\ell \right) \, d\mathbf{x}d\tau.\end{aligned}$$ Since the basis functions $\psi_k,\psi_\ell$ only depend on time, and $\varphi_i,\varphi_j$ only depend on space, we can rearrange the above equation as $$\begin{aligned}
\label{eqn.K1_virt3}
\int_{0}^{1} \int_{{P}} \tilde{\varphi}_{ik} \frac{\partial \tilde{\varphi}_{j\ell}}{\partial \tau} \, d\mathbf{x}d\tau &=& \int_{0}^{1} \int_{{P}} \Pi^0_{{P},N}\varphi_i \, \psi_k \cdot \frac{\partial}{\partial \tau} \left( \Pi^0_{{P},N}\varphi_j \, \psi_\ell \right) \, d\mathbf{x}d\tau \nonumber \\
&+& \int_{0}^{1} \psi_k \cdot \frac{\partial \psi_\ell}{\partial \tau} \, d\tau \, \int_{{P}} \Pi^0_{{P},N}\varphi_i \cdot (I-\Pi^0_{{P},N}) \varphi_j \, d\mathbf{x}\nonumber \\
&+& \int_{0}^{1} \psi_k \cdot \frac{\partial \psi_\ell}{\partial \tau} \, d\tau \, \int_{{P}} (I-\Pi^0_{{P},N}) \varphi_i \cdot \Pi^0_{{P},N}\varphi_j \, d\mathbf{x}\nonumber \\
&+& \int_{0}^{1} \psi_k \cdot \frac{\partial \psi_\ell}{\partial \tau} \, d\tau \, \int_{{P}} (I-\Pi^0_{{P},N}) \varphi_i \cdot (I-\Pi^0_{{P},N}) \varphi_j \, d\mathbf{x}.\end{aligned}$$ The first term on the right hand side ensures consistency, while the second and the third term vanish because of the orthogonality condition [\[eq:vem_projL1\]](#eq:vem_projL1){reference-type="eqref" reference="eq:vem_projL1"} related to the construction of the projector $\Pi^0_{{P},N}$. The last term ensures stability and the spatial integral is approximated using the dof--dof stabilization [@vem2]. Thus, recalling the definition of the space-time Virtual Element basis [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"}, the space-time stiffness matrix $\mathbf{K}_1$ can eventually be computed by $$\begin{aligned}
\label{eqn.K1_final}
\int_{0}^{1} \int_{{P}} \tilde{\varphi}_{ik} \frac{\partial \tilde{\varphi}_{j\ell}}{\partial \tau} \, d\mathbf{x}d\tau &\simeq& \int_{0}^{1} \int_{{P}} \left( \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,i} m_{\alpha} \right) \cdot \psi_k \cdot \frac{\partial}{\partial \tau} \left( \left( \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,j} m_{\alpha} \right) \cdot \psi_\ell \right) \, d\mathbf{x}d\tau \nonumber \\
&+& \int_{0}^{1} \psi_k \cdot \frac{\partial \psi_\ell}{\partial \tau} \, d\tau \, \, \left((\mathbf{I}-\accentset{\star}{\mathbf{\Pi}}^0_N)_{\alpha,i}\right)^{\top} \cdot (\mathbf{I}-\accentset{\star}{\mathbf{\Pi}}^0_N)_{\alpha,j}.\end{aligned}$$ This formulation provides both consistency and stability in a fully space-time setting, so that matrix $\mathbf{K}_1$ can be inverted and used in the iterative scheme [\[eqn.ader5\]](#eqn.ader5){reference-type="eqref" reference="eqn.ader5"}.*
### Fully discrete one-step VEM-DG corrector
The predictor solution is used to carry out the corrector step, which accounts for the numerical fluxes between neighbor cells. The corrector is based on a discontinuous Galerkin scheme that is directly applied to the integrated form of the governing equations [\[eqn.PDE\]](#eqn.PDE){reference-type="eqref" reference="eqn.PDE"} in space and time. Since the predictor solution has already been computed within the local space-time control volumes, the corrector involves a one-step time integration, yielding a fully discrete scheme. This improves the parallel efficiency over multi-step schemes, such as Runge-Kutta DG methods, because only one communication among the parallel processes is needed to exchange the predictor solution. Furthermore, since the predictor is evaluated locally, no information exchange is needed within the predictor step itself.
The variational formulation is obtained upon multiplication of the PDE [\[eqn.PDE\]](#eqn.PDE){reference-type="eqref" reference="eqn.PDE"} by a spatial test function $\phi_k(\mathbf{x})$ of the same form as the basis functions $\phi_\ell(\mathbf{x})$ used to approximate the numerical solution [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"}, followed by integration on the space-time control volume ${P}\times [t^n;t^{n+1}]$ for each ${P}_i$, $i=1,...,N_{{P}}$: $$\label{eqn.pdeweak}
\int_{t^n}^{t^{n+1}} \int_{{P}} \phi_k \,\partial_t \mathbf{U}\,d\mathbf{x}dt + \int_{t^n}^{t^{n+1}} \int_{{P}} \phi_k \,\nabla \cdot \mathbf{F}\,d\mathbf{x}dt = \mathbf{0}\,.$$ Integration by parts in space of the second term leads to $$\label{eqn.pdeweak_int_parts}
\int_{t^n}^{t^{n+1}} \int_{{P}} \phi_k \,\partial_t \mathbf{U}\,d\mathbf{x}dt + \int_{t^n}^{t^{n+1}} \int_{\partial {P}} \phi_k \,\mathbf{F}\cdot \mathbf{n}\,dS dt - \int_{t^n}^{t^{n+1}} \int_{{P}} \nabla \phi_k \cdot \mathbf{F}\,d\mathbf{x}dt = \mathbf{0}\,,$$ where $\mathbf{n}$ is the outward pointing unit normal vector of the element boundary $\partial {P}$. Using the solution representation [\[eqn.uh\]](#eqn.uh){reference-type="eqref" reference="eqn.uh"} as well as the predictor solution $\mathbf{q}_h(\mathbf{x},t)$ and its gradient, the weak form [\[eqn.pdeweak_int_parts\]](#eqn.pdeweak_int_parts){reference-type="eqref" reference="eqn.pdeweak_int_parts"} becomes $$\begin{aligned}
\int_{{P}_i} \phi_k \phi_\ell \, d\mathbf{x}\, \mathbf{u}_{\ell}^{n+1} &=& \int_{{P}} \phi_k \phi_\ell \, d\mathbf{x}\, \mathbf{u}_{\ell}^{n} - \int_{t^n}^{t^{n+1}} \int_{\partial {P}} \phi_k \,\mathcal{G}\left( (\mathbf{q}_h^-,\nabla \mathbf{q}_h^-), (\mathbf{q}_h^+,\nabla \mathbf{q}_h^+) \right) \cdot \mathbf{n}\,dS dt \nonumber \\
&+& \int_{t^n}^{t^{n+1}} \int_{{P}} \nabla \phi_k \cdot \mathbf{F}(\mathbf{q}_h,\nabla \mathbf{q}_h) \,d\mathbf{x}dt, \label{eqn.DGscheme}\end{aligned}$$ where $\mathcal{G}\left( (\mathbf{q}_h^-,\nabla \mathbf{q}_h^-), (\mathbf{q}_h^+,\nabla \mathbf{q}_h^+) \right) \cdot \mathbf{n}$ is the numerical flux function, which involves the left $(\mathbf{q}_h^-,\nabla \mathbf{q}_h^-)$ and right $(\mathbf{q}_h^+,\nabla \mathbf{q}_h^+)$ high order boundary-extrapolated data and gradients with respect to the cell boundary $\partial {P}$. To compute it, the Rusanov flux [@Rusanov:1961a] is employed, modified to simultaneously include both the convective and the viscous terms: $$\mathcal{G}\left( (\mathbf{q}_h^-,\nabla \mathbf{q}_h^-), (\mathbf{q}_h^+,\nabla \mathbf{q}_h^+) \right) \cdot \mathbf{n}= \frac{1}{2} \left( \mathbf{F}(\mathbf{q}_h^-,\nabla \mathbf{q}_h^-) + \mathbf{F}(\mathbf{q}_h^+,\nabla \mathbf{q}_h^+) \right) \cdot \mathbf{n}- \frac{1}{2} \left( |\lambda^c_{\max}| + 2 \eta |\lambda^v_{\max}| \right) \left( {\mathbf{q}}_h^+ - {\mathbf{q}}_h^- \right).
\label{eqn.rusanov}$$ The factor $\eta$ in the numerical dissipation is estimated from the solution of the generalized Riemann problem (GRP) for the diffusion equation [@MunzDiffusionFlux] and it is given by $$\eta = \frac{2N+1}{(h_{P^-}+h_{P^+})\sqrt{\frac{\pi}{2}}},$$ where $h_{P^-}$ and $h_{P^+}$ are the characteristic length sizes of the two mesh elements (left and right, respectively) that share the interface for which the flux is being evaluated.
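A minimal sketch of the numerical flux [\[eqn.rusanov\]](#eqn.rusanov){reference-type="eqref" reference="eqn.rusanov"} evaluated at a single interface quadrature point is given below; the physical normal fluxes and the maximum convective and viscous eigenvalues are assumed to be computed elsewhere.

```python
import numpy as np

def rusanov_flux(qm, qp, Fn_m, Fn_p, lam_c_max, lam_v_max, h_m, h_p, N):
    """Rusanov-type flux (eqn.rusanov) with combined convective/viscous dissipation.

    qm, qp     : boundary-extrapolated states from the left/right cell.
    Fn_m, Fn_p : physical normal fluxes F(q^-, grad q^-).n and F(q^+, grad q^+).n.
    lam_c_max  : maximum convective signal speed at the interface.
    lam_v_max  : maximum eigenvalue of the viscous operator.
    h_m, h_p   : characteristic sizes of the two cells sharing the interface.
    N          : polynomial degree of the scheme.
    """
    eta = (2 * N + 1) / ((h_m + h_p) * np.sqrt(np.pi / 2))
    dissipation = 0.5 * (abs(lam_c_max) + 2.0 * eta * abs(lam_v_max)) * (qp - qm)
    return 0.5 * (Fn_m + Fn_p) - dissipation
```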
The nonconforming Virtual Element basis functions [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"} are used to define both $\phi_k$ and $\phi_\ell$, thus the one-step fully discrete DG scheme [\[eqn.DGscheme\]](#eqn.DGscheme){reference-type="eqref" reference="eqn.DGscheme"} is referred to as VEM-DG scheme.
**Remark 4** (Computation of the mass matrix). *The VEM-DG scheme [\[eqn.DGscheme\]](#eqn.DGscheme){reference-type="eqref" reference="eqn.DGscheme"} requires the computation of the mass matrix for the generic element ${P}$, which is assumed to be defined relying on the unknown virtual basis functions $\varphi(\mathbf{x})$, that is $$\label{eqn.mass_matrix}
(\mathbf{M})_{k \ell} := \int_{{P}} \phi_k \phi_\ell \, d\mathbf{x}=\int_{{P}} \varphi_k \varphi_\ell \, d\mathbf{x}.$$ Following [@vem2], we can introduce the expansion $$\varphi_k = \Pi^0_{{P},N}\varphi_k + (I-\Pi^0_{{P},N}) \varphi_k,$$ that is plugged into the definition [\[eqn.mass_matrix\]](#eqn.mass_matrix){reference-type="eqref" reference="eqn.mass_matrix"}, hence obtaining $$\begin{aligned}
\label{eqn.proj_M}
(\mathbf{M})_{k \ell} &=&\int_{{P}} \Pi^0_{{P},N}\varphi_k \Pi^0_{{P},N}\varphi_\ell \, d\mathbf{x}+ \int_{{P}} (I-\Pi^0_{{P},N})\varphi_k (I-\Pi^0_{{P},N}) \varphi_\ell \, d\mathbf{x}\nonumber \\
&+& \int_{{P}} \Pi^0_{{P},N}\varphi_k (I-\Pi^0_{{P},N}) \varphi_\ell \, d\mathbf{x}+ \int_{{P}} (I-\Pi^0_{{P},N})\varphi_k \Pi^0_{{P},N}\varphi_\ell \, d\mathbf{x}.\end{aligned}$$ The last two terms on the right hand side are exactly zero because of the orthogonality condition [\[eq:vem_projL1\]](#eq:vem_projL1){reference-type="eqref" reference="eq:vem_projL1"}, while the second term accounts for the stabilization, and it is approximated again with a dof--dof stabilization [@vem2] as for the space-time stiffness matrix in [\[eqn.K1_final\]](#eqn.K1_final){reference-type="eqref" reference="eqn.K1_final"}, thus yielding $$(\mathbf{M})_{k \ell} \simeq \int_{{P}} \left( \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,k} m_{\alpha} \right) \, \left( \left( \accentset{\star}{\mathbf{\Pi}}^0_N\right)_{\alpha,\ell} m_{\alpha} \right) \, d\mathbf{x}+ |{P}| \, \left((\mathbf{I}-\accentset{\star}{\mathbf{\Pi}}^0_N)_{\alpha,k}\right)^{\top} \cdot (\mathbf{I}-\accentset{\star}{\mathbf{\Pi}}^0_N)_{\alpha,\ell}.
\label{eqn.proj_M3}$$ The first term in [\[eqn.proj_M3\]](#eqn.proj_M3){reference-type="eqref" reference="eqn.proj_M3"} is the consistency term, while the second term is responsible for stabilization, which is crucial for the inversion of the mass matrix in the VEM-DG scheme [\[eqn.DGscheme\]](#eqn.DGscheme){reference-type="eqref" reference="eqn.DGscheme"}.*
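A compact sketch of the approximate mass matrix [\[eqn.proj_M3\]](#eqn.proj_M3){reference-type="eqref" reference="eqn.proj_M3"} follows. The dof--dof stabilization requires the projector expressed in the dof basis; in the sketch this is obtained through a matrix $\mathbf{D}$ collecting the degrees of freedom of the scaled monomials, which is a common choice in VEM implementations but is an additional assumption with respect to the compact notation used above.

```python
import numpy as np

def vem_mass_matrix(H, Pi0, D, area):
    """Sketch of the VEM-DG mass matrix (eqn.proj_M3): consistency + stabilization.

    H    : (n_N, n_N) monomial mass matrix on the cell.
    Pi0  : (n_N, N_dof) matrix of the L2 projector.
    D    : (N_dof, n_N) dof matrix of the scaled monomials (assumed available),
           so that D @ Pi0 is the projector written dof-to-dof.
    area : cell area |P| scaling the dof-dof stabilization term.
    """
    N_dof = Pi0.shape[1]
    consistency = Pi0.T @ H @ Pi0          # int_P (Pi0 phi_k)(Pi0 phi_l) dx
    S = np.eye(N_dof) - D @ Pi0            # (I - Pi) expressed in the dof basis
    stabilization = area * S.T @ S
    return consistency + stabilization
```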
## Runge-Kutta VEM-DG scheme
Alternatively, the nonconforming Virtual Element basis [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"} can be used to devise a standard Runge-Kutta scheme, that is a multi-step algorithm based on the method of lines. A Runge-Kutta method with a total number of $s$ sub-stages is described by a Butcher tableau of the form shown in Table [1](#tab.BTRK){reference-type="ref" reference="tab.BTRK"}.
------------ -------------- -------------- ---------- ------------------- --------------------------------------------------
0
$\alpha_2$ $\beta_{21}$
$\alpha_3$ $\beta_{31}$ $\beta_{32}$
$\vdots$ $\vdots$ $\vdots$ $\ddots$
$\alpha_s$ $\beta_{s1}$ $\beta_{s2}$ $...$ $\beta_{s (s-1)}$
$c_1$ $c_2$ $...$ $c_{s-1}$ $c_s$ [\[tab.BTRK\]]{#tab.BTRK label="tab.BTRK"}
------------ -------------- -------------- ---------- ------------------- --------------------------------------------------
: Butcher tableau for Runge-Kutta explicit methods.
The semi-discrete scheme for the governing PDE is then given by $$\mathbf{M} \, \frac{\textrm{d} \mathbf{U}}{\textrm{d} t} = - \mathcal{L}_h(\mathbf{u}_h),$$ where $\mathbf{M}$ is the Virtual Element spatial mass matrix [\[eqn.proj_M3\]](#eqn.proj_M3){reference-type="eqref" reference="eqn.proj_M3"} and the term $\mathcal{L}_h(\mathbf{u}_h)$ contains the spatial discretization: $$\mathcal{L}_h(\mathbf{u}_h) = \int_{\partial {P}} \phi_k \,\mathcal{G}\left( (\mathbf{u}_h^-,\nabla \mathbf{u}_h^-), (\mathbf{u}_h^+,\nabla \mathbf{u}_h^+) \right) \cdot \mathbf{n}\,dS - \int_{{P}} \nabla \phi_k \cdot \mathbf{F}(\mathbf{u}_h,\nabla \mathbf{u}_h) \,d\mathbf{x}.$$ Here, the test functions $\phi_k$ are assumed to be of the form of [\[eqn.vem_basis\]](#eqn.vem_basis){reference-type="eqref" reference="eqn.vem_basis"}, and the numerical flux term is computed according to the definition [\[eqn.rusanov\]](#eqn.rusanov){reference-type="eqref" reference="eqn.rusanov"}. The numerical solution is determined at the next time step as $$\mathbf{U}^{n+1} = \mathbf{U}^{n} + \mathbf{M}^{-1} \, {\Delta t}\, \sum \limits_{i=1}^s c_i \cdot \kappa_i,$$ with the generic Runge-Kutta stage $\kappa_i$ evaluated at time level $t^{(i)}=t^n+\alpha_i {\Delta t}$ by $$\kappa_i = -\mathcal{L}_h\left( \mathbf{u}_h^n + {\Delta t}\sum \limits_{j=1}^{i-1} \beta_{ij} \cdot \kappa_j \right).$$ In this case, the space-time basis functions [\[eqn.vem_basis_st\]](#eqn.vem_basis_st){reference-type="eqref" reference="eqn.vem_basis_st"} are no longer needed, and the only matrix that must be inverted is the spatial mass matrix $\mathbf{M}$.
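For completeness, a minimal Python sketch of one explicit Runge-Kutta step is given below. In this sketch the mass matrix is inverted at every stage, which corresponds to the standard method-of-lines form of the update written above, and the classical fourth-order tableau shown at the end is only one possible choice.

```python
import numpy as np

def explicit_rk_step(u, t_n, dt, M, L_h, alpha, beta, c):
    """One explicit Runge-Kutta VEM-DG step based on a Butcher tableau (Table 1).

    u     : (N_dof, n_vars) dofs at time t^n.
    M     : Virtual Element mass matrix (eqn.proj_M3).
    L_h   : spatial residual operator, L_h(u, t) -> array shaped like u.
    alpha : stage nodes; beta : strictly lower-triangular stage coefficients; c : weights.
    """
    M_inv = np.linalg.inv(M)
    kappa = []
    for i in range(len(c)):
        u_stage = u + dt * sum(beta[i][j] * kappa[j] for j in range(i))
        kappa.append(-M_inv @ L_h(u_stage, t_n + alpha[i] * dt))
    return u + dt * sum(c[i] * kappa[i] for i in range(len(c)))

# Classical fourth-order Runge-Kutta tableau as an illustrative example.
alpha = [0.0, 0.5, 0.5, 1.0]
beta  = [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]]
c     = [1/6, 1/3, 1/3, 1/6]
```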
## VEM-DG limiter with artificial viscosity
The DG scheme [\[eqn.DGscheme\]](#eqn.DGscheme){reference-type="eqref" reference="eqn.DGscheme"} is linear in the sense of Godunov [@Godunov1959], thus spurious oscillations might arise when dealing with shock waves and other discontinuities. To limit these instabilities, we rely on a simple artificial viscosity method, see for instance [@PerssonAV; @TavelliCNS; @Hesthaven_LimiterAV2011; @Bassi_LimiterAV2018; @ADERAFEDG]. The limiter does not act on the entire mesh, but only on those cells which are crossed by discontinuities. Therefore, one first needs to detect the so-called *troubled* cells, and then to limit the numerical solution only there.
At each time step, the troubled elements are detected with the flattener indicator $\beta_{{P}}$ proposed in [@BalsaraFlattener], which is computed for each element as $$\beta_{{P}}= \min {\left[ 1, \max {\left(0, -\frac{\nabla \cdot \mathbf{v}+ \bar{g} c_{\min}}{\bar{g} c_{\min}}\right)_{{P}}}\right]},
\label{eqn.flattener}$$ with the coefficient $\bar{g}=0.1$ according to [@BalsaraFlattener]. The minimum of the sound speed $c_{\min}$ is evaluated considering the element ${P}$ itself and its neighborhood, that is $$c_{\min} = \min_{\partial {P}} (c^+,c^-), \qquad c^{\pm}=\sqrt{\gamma R T^{\pm}},$$ while the divergence of the velocity field $\nabla \cdot \mathbf{v}$ is estimated as $$\nabla \cdot \mathbf{v}= \frac{1}{|{P}|}\sum \limits_{\partial {P}}{ |\partial {P}|^{\pm} \left(\mathbf{v}^+ - \mathbf{v}^- \right) \cdot \mathbf{n}},
\label{eqn.divV}$$ where the velocity vector is computed as a cell average quantity from the numerical solution $\mathbf{u}_h$, and the quantity $|\partial {P}|^{\pm}$ is the length of the edge shared by the left ${P}^-$ and the right ${P}^+$ cell. A cell is marked as troubled if $\beta_{{P}}>10^{-10}$, and some artificial viscosity $\mu_{a}$ is added to the physical viscosity $\mu$, thus obtaining an effective viscosity $\tilde{\mu}_{{P}}=\mu_{a}+\mu$ which is then used in the Navier-Stokes fluxes [\[eqn.NS\]](#eqn.NS){reference-type="eqref" reference="eqn.NS"}. The additional viscosity $\mu_{a}$ is determined so that a unit mesh Reynolds number results in the troubled cells, that is $$\text{Re}= \frac{\rho |\lambda_{\max}^c| h_{{P}}}{\tilde{\mu}_{{P}}}=1.$$ To account also for heat conduction, a unit Prandtl number must be set, hence imposing an artificial heat conduction coefficient $\tilde{\kappa}_{{P}}$ such that $\Pr = \tilde{\mu}_{{P}} \gamma c_v/\tilde{\kappa}_{{P}} = 1$.
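The detection of troubled cells and the assignment of the artificial viscosity can be summarized by the following Python sketch; the divergence of the velocity and the minimum sound speed are assumed to be evaluated beforehand according to the formulas above, and the max() safeguard ensuring $\tilde{\mu}_{{P}} \geq \mu$ is a small implementation choice.

```python
import numpy as np

def flattener(div_v, c_min, g_bar=0.1):
    """Flattener indicator beta_P of (eqn.flattener)."""
    return min(1.0, max(0.0, -(div_v + g_bar * c_min) / (g_bar * c_min)))

def effective_viscosity(beta_P, rho, lam_c_max, h_P, mu, kappa, gamma, c_v):
    """Effective viscosity and heat conduction for troubled cells, enforcing
    unit mesh Reynolds and Prandtl numbers; untroubled cells keep (mu, kappa)."""
    if beta_P > 1e-10:                                 # troubled cell
        mu_eff = max(mu, rho * abs(lam_c_max) * h_P)   # so that Re = rho*|lambda|*h/mu_eff = 1
        kappa_eff = mu_eff * gamma * c_v               # so that Pr = mu_eff*gamma*c_v/kappa_eff = 1
        return mu_eff, kappa_eff
    return mu, kappa
```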
# Numerical results {#sec.validation}
The new numerical schemes are tested against benchmarks for compressible gas dynamics to properly assess their accuracy and robustness. The label VEM-DG is used to refer to the methods presented in this work, with ADER time discretization unless otherwise indicated. Moreover, if not stated otherwise, we set the ratio of specific heats to $\gamma=1.4$ and the gas constant to $R=1$, hence retrieving a specific heat capacity at constant volume of $c_v = 2.5$. The initial condition of the flow field is typically given in terms of the vector of primitive variables $\mathbf{{P}}(\mathbf{x},t) = (\rho,u,v,p)$. The simulations are run on 64 CPUs with MPI parallelization.
## Numerical convergence study
We consider the smooth isentropic vortex test case proposed in [@HuShuTri] to study the numerical convergence of the VEM-DG schemes. The computational domain is given by $\Omega=[0;10] \times [0;10]$ with periodic boundaries. The fluid is inviscid and heat conduction is neglected, thus we set $\mu=\kappa=0$, and some perturbations are initially assigned on top of a homogeneous background field: $$\mathbf{{P}}(\mathbf{x},0) = (1+\delta \rho, 1+\delta u, 1+\delta v, 1+\delta p).
\label{eq.ConvEul-IC}$$ The perturbations for temperature $\delta T$, density $\delta \rho$ and pressure $\delta p$ read $$\label{ShuVortDelta1}
\delta T = -\frac{(\gamma-1)\varepsilon^2}{8\gamma\pi^2}e^{1-r^2}, \quad
\delta \rho = (1+\delta T)^{\frac{1}{\gamma-1}}-1, \quad
\delta p = (1+\delta T)^{\frac{\gamma}{\gamma-1}}-1,$$ where the radius of the vortex has been defined as $r=\sqrt{(x-5)^2+(y-5)^2}$ and the vortex strength has been set to $\varepsilon=5$. The velocity perturbations are given by $$\label{ShuVortDelta2}
\left(\begin{array}{c} \delta u \\ \delta v \\ \end{array}\right) = \frac{\varepsilon}{2\pi}e^{\frac{1-r^2}{2}} \left(\begin{array}{c} -(y-5) \\ \phantom{-}(x-5) \end{array}\right).$$ The final time of the simulation is chosen to be $t_f=0.1$, and the exact solution $\mathbf{{P}}_e(\mathbf{x},t)$ can be simply computed as the time-shifted initial condition by the convective velocity $\mathbf{v}_c=(1,1)$, that is $\mathbf{{P}}_e(\mathbf{x},t_f)=\mathbf{{P}}_e(\mathbf{x}-\mathbf{v}_c \, t_f,0)$. The error is measured at the final time in $L_2$ norm as $$\epsilon_{L_2}= \sqrt{\int \limits_{\Omega} \left( \mathbf{{P}}_e(\mathbf{x},t_f)-\mathbf{u}_h(\mathbf{x},t_f)\right)^2 \, \text{d}\mathbf{x}}.$$
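The exact solution used for the error measurement can be evaluated with the short sketch below, where the initial condition is shifted by the convective velocity $(1,1)$; the periodic wrap of the domain is ignored, which is harmless for the short final time considered here, and the discrete $L_2$ error routine assumes that suitable quadrature weights are available.

```python
import numpy as np

def vortex_primitive(x, y, t, eps=5.0, gamma=1.4):
    """Exact primitive variables (rho, u, v, p) of the isentropic vortex at time t,
    obtained by shifting the initial condition with the convective velocity (1,1)."""
    xs, ys = x - t - 5.0, y - t - 5.0
    r2 = xs**2 + ys**2
    dT = -(gamma - 1.0) * eps**2 / (8.0 * gamma * np.pi**2) * np.exp(1.0 - r2)
    rho = (1.0 + dT) ** (1.0 / (gamma - 1.0))
    p = (1.0 + dT) ** (gamma / (gamma - 1.0))
    u = 1.0 - eps / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2)) * ys
    v = 1.0 + eps / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2)) * xs
    return rho, u, v, p

def l2_error(exact, numeric, weights):
    """Discrete L2 error given quadrature weights over the computational domain."""
    return np.sqrt(np.sum(weights * (exact - numeric) ** 2))
```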
A sequence of successively refined unstructured Voronoi meshes of characteristic mesh size $h_{\Omega}$ is used to perform the convergence analysis. The VEM-DG schemes are compared against two different space-time DG methods, namely: i) the classical modal DG (M-DG) schemes, which use the monomials [\[eq:m_alpha\]](#eq:m_alpha){reference-type="eqref" reference="eq:m_alpha"} as modal basis functions, and ii) the Agglomerated Finite Element DG (AFE-DG) methods recently forwarded in [@ADERAFEDG], which are based on a local finite element basis within each control volume. We use a CFL number of $\textnormal{CFL}=0.25$.
The convergence results are collected in Table [2](#tab.conv){reference-type="ref" reference="tab.conv"}, and they demonstrate that the formal order of convergence is attained by the VEM-DG schemes. Figure [2](#fig.comparison_efficiency){reference-type="ref" reference="fig.comparison_efficiency"} compares the three different methods, showing that the VEM-DG schemes are more efficient than the standard M-DG methods, while being less efficient and less accurate than the AFE-DG schemes. This is expected since the AFE-DG schemes are built upon a sub-triangulation of the Voronoi elements, thus allowing each sub-triangle to be mapped to a reference element and ultimately to design a fully quadrature-free DG scheme, as detailed in [@ADERAFEDG]. Nevertheless, compared to the commonly used M-DG methods, the novel VEM-DG schemes are more efficient and, potentially, they also easily allow mixed elements and nonconforming grids to be handled.
-------------- ------------------------------------- --------------------------- ----------- -------------- --------------------------- ----------- -------------- --------------------------- -----------
VEM-DG M-DG AFE-DG
$h_{\Omega}$ $\rho_{L_2}$ $\mathcal{O}(\rho_{L_2})$ CPU time $\rho_{L_2}$ $\mathcal{O}(\rho_{L_2})$ CPU time $\rho_{L_2}$ $\mathcal{O}(\rho_{L_2})$ CPU time
Order of accuracy: $\mathcal{O}(2)$
4.428E-01 1.315E-02 \- 4.438E+00 1.277E-02 \- 9.359E+00 8.239E-03 \- 2.172E+00
3.557E-01 7.039E-03 2.85 9.953E+00 6.892E-03 2.81 1.814E+01 4.161E-03 3.12 4.797E+00
2.311E-01 3.178E-03 1.85 3.894E+01 3.130E-03 1.83 6.086E+01 1.936E-03 1.78 2.247E+01
1.762E-01 1.751E-03 2.20 6.930E+01 1.728E-03 2.19 2.467E+02 1.132E-03 1.98 4.278E+01
Order of accuracy: $\mathcal{O}(3)$
4.428E-01 1.646E-03 \- 1.090E+02 1.447E-03 \- 1.390E+02 4.102E-04 \- 2.172E+00
3.557E-01 7.524E-04 3.57 1.740E+02 6.141E-04 3.91 3.813E+02 1.985E-04 3.31 4.797E+00
2.311E-01 2.615E-04 2.45 8.972E+02 2.009E-04 2.59 1.214E+03 7.785E-05 2.17 2.247E+01
1.762E-01 1.128E-04 3.10 1.575E+03 8.547E-05 3.15 2.606E+03 3.783E-05 2.66 4.278E+01
Order of accuracy: $\mathcal{O}(4)$
4.428E-01 1.184E-04 \- 3.699E+03 1.184E-04 \- 4.809E+03 1.814E-05 \- 1.409E+03
3.557E-01 4.113E-05 4.83 8.167E+03 4.111E-05 4.83 1.552E+04 5.451E-06 5.49 5.301E+03
2.311E-01 1.009E-05 3.26 4.088E+04 1.005E-05 3.27 4.906E+04 1.305E-06 3.32 1.397E+04
1.762E-01 3.417E-06 3.99 5.692E+04 3.390E-06 4.00 8.539E+04 4.790E-07 3.69 2.857E+04
-------------- ------------------------------------- --------------------------- ----------- -------------- --------------------------- ----------- -------------- --------------------------- -----------
: Numerical convergence results for the compressible Euler equations using VEM-DG, M-DG and AFE-DG schemes from second up to fourth order of accuracy in space and time. The errors are measured in the $L_2$ norm and refer to density ($\rho$) at time $t_f=0.1$. The absolute CPU time of each simulation is also reported in seconds $[s]$. The characteristic mesh size is given by $h_{\Omega}=\max \limits_{\mathcal{T}_\Omega} h_{{P}}$.
[\[tab.conv\]]{#tab.conv label="tab.conv"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Comparison between VEM-DG (squares), M-DG (circles) and AFE-DG (diamonds) schemes from second up to fourth order of accuracy. Left: dependency of the error norm on the mesh size. Right: dependency of the error norm on the CPU time.](Figures/h_vs_L2.pdf){#fig.comparison_efficiency width="47%"} ![Comparison between VEM-DG (squares), M-DG (circles) and AFE-DG (diamonds) schemes from second up to fourth order of accuracy. Left: dependency of the error norm on the mesh size. Right: dependency of the error norm on the CPU time.](Figures/time_vs_L2.pdf){#fig.comparison_efficiency width="47%"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Table [3](#tab.IC){reference-type="ref" reference="tab.IC"} reports the errors related to the $L_2$ projection of the initial condition [\[eq.ConvEul-IC\]](#eq.ConvEul-IC){reference-type="eqref" reference="eq.ConvEul-IC"} for the three methods, showing that the VEM-DG schemes behave quite similarly to the M-DG methods. Figure [8](#fig.ShuVortex-IC){reference-type="ref" reference="fig.ShuVortex-IC"} depicts the initial condition on a very coarse mesh of characteristic mesh size $h_{\Omega}=5/6$ for all the DG methods considered here, which qualitatively confirms the results obtained from the analysis of Table [3](#tab.IC){reference-type="ref" reference="tab.IC"}.
----- -------------- ----------- ----------- -------------- ----------- ----------- -------------- ----------- -----------
VEM-DG M-DG AFE-DG
$N$ $\rho_{L_2}$ $u_{L_2}$ $p_{L_2}$ $\rho_{L_2}$ $u_{L_2}$ $p_{L_2}$ $\rho_{L_2}$ $u_{L_2}$ $p_{L_2}$
2 1.145E-02 2.348E-02 1.726E-02 9.575E-03 1.787E-02 1.448E-02 2.041E-03 3.872E-03 2.990E-03
3 2.119E-03 5.276E-03 3.460E-03 1.506E-03 3.669E-03 2.493E-03 1.573E-04 3.751E-04 2.433E-04
----- -------------- ----------- ----------- -------------- ----------- ----------- -------------- ----------- -----------
: Errors related to the $L_2$ projection of the initial condition for the isentropic vortex test case measured in $L_2$ norm for density ($\rho$), horizontal velocity ($u$) and pressure ($p$). The characteristic mesh size is $h_{\Omega}=10/12$ and the errors are reported for third and fourth order VEM-DG, M-DG and AFE-DG schemes.
[\[tab.IC\]]{#tab.IC label="tab.IC"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![$L_2$ projection of the density distribution of the initial condition for the isentropic vortex test case with $N=2$ (top row) and $N=3$ (bottom row) on a mesh with characteristic size $h_{\Omega}=10/12$. Left: VEM-DG. Middle: M-DG. Right: AFE-DG.](Figures/ShuVortex_IC_P2-VEM-DG.pdf){#fig.ShuVortex-IC width="32%"} ![$L_2$ projection of the density distribution of the initial condition for the isentropic vortex test case with $N=2$ (top row) and $N=3$ (bottom row) on a mesh with characteristic size $h_{\Omega}=10/12$. Left: VEM-DG. Middle: M-DG. Right: AFE-DG.](Figures/ShuVortex_IC_P2-Modal-DG.pdf){#fig.ShuVortex-IC width="32%"} ![$L_2$ projection of the density distribution of the initial condition for the isentropic vortex test case with $N=2$ (top row) and $N=3$ (bottom row) on a mesh with characteristic size $h_{\Omega}=10/12$. Left: VEM-DG. Middle: M-DG. Right: AFE-DG.](Figures/ShuVortex_IC_P2-AFE-DG.pdf){#fig.ShuVortex-IC width="32%"}
![$L_2$ projection of the density distribution of the initial condition for the isentropic vortex test case with $N=2$ (top row) and $N=3$ (bottom row) on a mesh with characteristic size $h_{\Omega}=10/12$. Left: VEM-DG. Middle: M-DG. Right: AFE-DG.](Figures/ShuVortex_IC_P3-VEM-DG.pdf){#fig.ShuVortex-IC width="32%"} ![$L_2$ projection of the density distribution of the initial condition for the isentropic vortex test case with $N=2$ (top row) and $N=3$ (bottom row) on a mesh with characteristic size $h_{\Omega}=10/12$. Left: VEM-DG. Middle: M-DG. Right: AFE-DG.](Figures/ShuVortex_IC_P3-Modal-DG.pdf){#fig.ShuVortex-IC width="32%"} ![$L_2$ projection of the density distribution of the initial condition for the isentropic vortex test case with $N=2$ (top row) and $N=3$ (bottom row) on a mesh with characteristic size $h_{\Omega}=10/12$. Left: VEM-DG. Middle: M-DG. Right: AFE-DG.](Figures/ShuVortex_IC_P3-AFE-DG.pdf){#fig.ShuVortex-IC width="32%"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## Analysis of the condition number of the VEM matrices {#ssec.cond_num}
Next, we analyze the condition number of the matrices that need to be inverted in the novel VEM-DG schemes. Specifically, we consider the space mass matrix $\mathbf{M}$ in the corrector step given by [\[eqn.proj_M3\]](#eqn.proj_M3){reference-type="eqref" reference="eqn.proj_M3"} and the space-time stiffness matrix $\mathbf{K}_1$ in the predictor step defined in [\[eqn.K1_final\]](#eqn.K1_final){reference-type="eqref" reference="eqn.K1_final"}. The condition number $\kappa(\mathbf{A})$ of a generic matrix $\mathbf{A}$ of size $n \times n$ is defined as $$\label{eqn.cond_num}
\kappa(\mathbf{A}) = ||\mathbf{A}||_F \, ||\mathbf{A}^{-1}||_F,$$ where we use the Frobenius norm $$||\mathbf{A}||_F = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2},$$ with $a_{ij}$ denoting the entries of matrix $\mathbf{A}$. Two meshes are used, taken from the previous convergence test case, with characteristic mesh sizes $h_{\Omega,1}=1/3$ and $h_{\Omega,2}=1/6$, and the condition numbers are computed for virtual bases of degree $N=\{1,2,3\}$. The results are gathered in Table [4](#tab.cond_num){reference-type="ref" reference="tab.cond_num"}, where it is evident that the higher the polynomial degree of the virtual basis, the worse the condition number of the matrices. In particular, the space-time stiffness matrix is always worse conditioned than the space mass matrix, by approximately two orders of magnitude. Furthermore, for $N=3$ the condition numbers become very large, hence making the corresponding VEM-DG schemes less robust. These results are in agreement with the findings reported in [@Berrone2017; @Mascotto2018], where it is indeed shown that the condition number of the virtual basis deteriorates very rapidly starting from $N=3$. In Table [4](#tab.cond_num){reference-type="ref" reference="tab.cond_num"} we report the minimum, maximum and average condition number of each matrix, the average being the arithmetic mean of the condition number over all the cells of the computational domain. Figure [14](#fig.cond_num){reference-type="ref" reference="fig.cond_num"} depicts the condition number for each cell of the coarser mesh employed for this analysis (we plot the logarithm of the condition number to improve readability, given the very large scales involved). It is interesting to notice that the worst ill-conditioning is mainly located at boundary elements or at internal cells with a rather irregular shape. In both cases, the characteristic mesh size $h_{P}$, as defined in equation [\[eqn.h\]](#eqn.h){reference-type="eqref" reference="eqn.h"}, is no longer a suitable measure of the size of the cell, since the aspect ratio is quite far from unity, hence yielding locally poor mesh quality. On the other hand, we notice that the internal cells which exhibit a bad condition number are characterized by large differences in the lengths of their boundary edges, hence involving polygons with more than six edges or highly stretched control volumes with only four sides.
This simple analysis suggests that: i) further investigation is needed of the stabilization terms of the VEM matrices, i.e. the space mass matrix $\mathbf{M}$ and the space-time stiffness matrix $\mathbf{K}_1$; ii) the aspect ratio of the control volume strongly affects the condition number of the VEM matrices, hence requiring novel approaches to incorporate this geometric information in the definition of the local VEM space.
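For reference, the Frobenius-based condition number [\[eqn.cond_num\]](#eqn.cond_num){reference-type="eqref" reference="eqn.cond_num"} is straightforward to evaluate once the local matrices have been assembled. The following minimal NumPy sketch gathers the minimum, maximum and average values over a list of cell-local matrices, in the spirit of Table [4](#tab.cond_num){reference-type="ref" reference="tab.cond_num"}; all names are illustrative and the random matrices only stand in for the actual VEM matrices.

```python
import numpy as np

def frobenius_condition_number(A):
    """kappa(A) = ||A||_F * ||A^{-1}||_F, using the Frobenius norm."""
    return np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.inv(A), 'fro')

def condition_number_statistics(local_matrices):
    """Minimum, maximum and arithmetic average of kappa over all cells."""
    kappas = np.array([frobenius_condition_number(A) for A in local_matrices])
    return kappas.min(), kappas.max(), kappas.mean()

# Placeholder matrices standing in for the cell-local mass (or stiffness) matrices.
rng = np.random.default_rng(0)
fake_local_matrices = [np.eye(6) + 0.1 * rng.random((6, 6)) for _ in range(100)]
print(condition_number_statistics(fake_local_matrices))
```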
-------------- ---------------------- ----------------- ---------------------- ------------------------ ----------------- ----------------------
$N=1$
$\kappa(\mathbf{M})$ $\kappa(\mathbf{K}_1)$
$h_{\Omega}$ $\kappa_{\min}$ $\kappa_{\max}$ $\kappa_{\text{av}}$ $\kappa_{\min}$ $\kappa_{\max}$ $\kappa_{\text{av}}$
1/3 1.835E+01 6.059E+01 3.774E+01 8.788E+02 4.687E+03 1.301E+03
1/6 1.812E+01 6.424E+01 3.751E+01 2.938E+03 2.093E+04 5.012E+03
$N=2$
$\kappa(\mathbf{M})$ $\kappa(\mathbf{K}_1)$
$h_{\Omega}$ $\kappa_{\min}$ $\kappa_{\max}$ $\kappa_{\text{av}}$ $\kappa_{\min}$ $\kappa_{\max}$ $\kappa_{\text{av}}$
1/3 3.616E+02 2.802E+03 5.666E+02 1.855E+04 3.151E+05 4.382E+04
1/6 3.584E+02 2.001E+04 5.107E+02 6.902E+04 3.563E+06 1.448E+05
$N=3$
$\kappa(\mathbf{M})$ $\kappa(\mathbf{K}_1)$
$h_{\Omega}$ $\kappa_{\min}$ $\kappa_{\max}$ $\kappa_{\text{av}}$ $\kappa_{\min}$ $\kappa_{\max}$ $\kappa_{\text{av}}$
1/3 4.599E+04 2.566E+08 5.548E+05 2.899E+06 6.082E+10 1.183E+08
1/6 4.295E+04 1.777E+09 6.075E+05 1.010E+07 1.622E+12 5.079E+08
-------------- ---------------------- ----------------- ---------------------- ------------------------ ----------------- ----------------------
: Minimum $\kappa_{\min}$, maximum $\kappa_{\max}$ and average $\kappa_{\text{av}}$ condition number of the space mass matrix $\mathbf{M}$ and of the space-time stiffness matrix $\mathbf{K}_1$ for two different meshes of characteristic size $h_{\Omega,1}=1/3$ and $h_{\Omega,2}=1/6$. We consider Virtual Element bases of degree $N=\{1,2,3\}$.
[\[tab.cond_num\]]{#tab.cond_num label="tab.cond_num"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Analysis of the condition number of the VEM matrices employed in the new VEM-DG schemes for $N=1$ (top row), $N=2$ (middle row) and $N=3$ (bottom row) for a mesh with characteristic size $h_{\Omega}=1/3$. Left column: logarithm of the condition number of the space mass matrix $\mathbf{M}$. Right column: logarithm of the condition number of the space-time stiffness matrix $\mathbf{K}_1$. ](Figures/KM_P1_h30.pdf){#fig.cond_num width="47%"} ![Analysis of the condition number of the VEM matrices employed in the new VEM-DG schemes for $N=1$ (top row), $N=2$ (middle row) and $N=3$ (bottom row) for a mesh with characteristic size $h_{\Omega}=1/3$. Left column: logarithm of the condition number of the space mass matrix $\mathbf{M}$. Right column: logarithm of the condition number of the space-time stiffness matrix $\mathbf{K}_1$. ](Figures/KK_P1_h30.pdf){#fig.cond_num width="47%"}
![Analysis of the condition number of the VEM matrices employed in the new VEM-DG schemes for $N=1$ (top row), $N=2$ (middle row) and $N=3$ (bottom row) for a mesh with characteristic size $h_{\Omega}=1/3$. Left column: logarithm of the condition number of the space mass matrix $\mathbf{M}$. Right column: logarithm of the condition number of the space-time stiffness matrix $\mathbf{K}_1$. ](Figures/KM_P2_h30.pdf){#fig.cond_num width="47%"} ![Analysis of the condition number of the VEM matrices employed in the new VEM-DG schemes for $N=1$ (top row), $N=2$ (middle row) and $N=3$ (bottom row) for a mesh with characteristic size $h_{\Omega}=1/3$. Left column: logarithm of the condition number of the space mass matrix $\mathbf{M}$. Right column: logarithm of the condition number of the space-time stiffness matrix $\mathbf{K}_1$. ](Figures/KK_P2_h30.pdf){#fig.cond_num width="47%"}
![Analysis of the condition number of the VEM matrices employed in the new VEM-DG schemes for $N=1$ (top row), $N=2$ (middle row) and $N=3$ (bottom row) for a mesh with characteristic size $h_{\Omega}=1/3$. Left column: logarithm of the condition number of the space mass matrix $\mathbf{M}$. Right column: logarithm of the condition number of the space-time stiffness matrix $\mathbf{K}_1$. ](Figures/KM_P3_h30.pdf){#fig.cond_num width="47%"} ![Analysis of the condition number of the VEM matrices employed in the new VEM-DG schemes for $N=1$ (top row), $N=2$ (middle row) and $N=3$ (bottom row) for a mesh with characteristic size $h_{\Omega}=1/3$. Left column: logarithm of the condition number of the space mass matrix $\mathbf{M}$. Right column: logarithm of the condition number of the space-time stiffness matrix $\mathbf{K}_1$. ](Figures/KK_P3_h30.pdf){#fig.cond_num width="47%"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## First problem of Stokes
The first problem of Stokes deals with the time-evolution of an infinite incompressible shear layer with a flow dominated by viscous effects, and for this problem an exact analytical solution of the unsteady Navier-Stokes equations is available [@Schlichting]. The computational domain is the channel $\Omega=[-0.5,0.5]\times[-0.05,0.05]$, which is discretized with a coarse grid made of $N_P=358$ cells. The initial condition reads $$\label{eqn.StokesExact}
\rho = 1, \quad u=0, \quad v=\left\{ \begin{array}{rl}
v_0 & x \leq 0 \\ -v_0 & x > 0
\end{array} \right. , \quad p=\frac{1}{\gamma}, \qquad v_0 = 0.1.$$ This initial condition is used to enforce boundary conditions in the $x-$direction, while periodic boundaries are prescribed in the $y-$direction. The flow is characterized by a low Mach number $\textnormal{Ma}=10^{-1}$ and heat conduction is neglected ($\kappa=0$). The final time is chosen to be $t_f=1$ and we set $\textnormal{CFL}=0.5$. The simulation is carried out with the third order VEM-DG schemes for two different values of viscosity, namely $\mu=10^{-3}$ and $\mu=10^{-4}$. The comparison between the reference solution and the numerical results is presented in Figure [16](#fig.Stokes){reference-type="ref" reference="fig.Stokes"}, showing excellent agreement.
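The reference profile for this test can also be tabulated directly, assuming the classical error-function solution of the diffused shear layer, $v(x,t) = -v_0 \, \mathrm{erf}\!\left( x/(2\sqrt{\nu t}) \right)$ with $\nu=\mu/\rho$; the short sketch below (illustrative names, not part of the solver) evaluates it on the one-dimensional cut used in Figure [16](#fig.Stokes){reference-type="ref" reference="fig.Stokes"}.

```python
import numpy as np
from math import erf, sqrt

def stokes_first_problem_v(x, t, v0=0.1, mu=1.0e-3, rho=1.0):
    """Vertical velocity of the diffused shear layer: v = -v0 * erf(x / (2*sqrt(nu*t)))."""
    nu = mu / rho
    return -v0 * erf(x / (2.0 * sqrt(nu * t)))

# One-dimensional cut of 200 equidistant points along the x-direction at y = 0.
x_cut = np.linspace(-0.5, 0.5, 200)
v_ref = np.array([stokes_first_problem_v(x, t=1.0) for x in x_cut])
```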
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![First problem of Stokes at time $t_f=1$. Third order numerical results for the vertical component of the velocity obtained with the VEM-DG scheme and compared against the reference solution by extracting a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0$. Viscosity $\mu=10^{-3}$ (left) and $\mu=10^{-4}$ (right).](Figures/Stokes1_1e-3.pdf){#fig.Stokes width="47%"} ![First problem of Stokes at time $t_f=1$. Third order numerical results for the vertical component of the velocity obtained with the VEM-DG scheme and compared against the reference solution by extracting a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0$. Viscosity $\mu=10^{-3}$ (left) and $\mu=10^{-4}$ (right).](Figures/Stokes1_1e-4.pdf){#fig.Stokes width="47%"}
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## Explosion problem
The explosion problem is a benchmark for the compressible Euler equations, hence we consider again an inviscid fluid with no heat conduction. The computational domain is the square $\Omega=[-1;1]^2$ with transmissive boundaries that is paved with a Voronoi mesh of characteristic size $h_{\Omega}=1/128$. The initial condition consists of two different states, separated by the circle of radius $R=0.5$: $$\mathbf{P}(\mathbf{x},0) = \left\{ \begin{array}{clcc} \mathbf{P}_i = & (1.0, 0.0, 0.0, 1.0) & \textnormal{ if } & r \leq R, \\
\mathbf{P}_o = & (0.125, 0.0, 0.0, 0.1) & \textnormal{ if } & r > R,
\end{array} \right.$$ where $\mathbf{P}_i$ and $\mathbf{P}_o$ denote the inner and the outer state, respectively, while $r=\sqrt{x^2+y^2}$ is the generic radial position. As done in [@TavelliCNS], the initial discontinuity in the $L_2$ projection of the DG solution is slightly smoothed, so that nonphysical oscillations at the initial time are eliminated: $$\mathbf{P}(\mathbf{x},0) = \frac{1}{2} \left(\mathbf{P}_o+\mathbf{P}_i\right) + \frac{1}{2} \left(\mathbf{P}_o-\mathbf{P}_i\right) \textnormal{erf} \left( \frac{r-R}{\alpha_0} \right), \qquad \alpha_0=10^{-2}.$$
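The smoothed two-state initialization above is simple to evaluate pointwise; a minimal sketch in terms of the primitive variables $(\rho,u,v,p)$, with illustrative names, reads:

```python
import numpy as np
from math import erf, sqrt

# Inner and outer primitive states (rho, u, v, p) and smoothing parameters.
P_i = np.array([1.0, 0.0, 0.0, 1.0])
P_o = np.array([0.125, 0.0, 0.0, 0.1])
R, alpha0 = 0.5, 1.0e-2

def explosion_initial_condition(x, y):
    """Smoothed initial condition P(x, 0) of the explosion problem."""
    r = sqrt(x * x + y * y)
    return 0.5 * (P_o + P_i) + 0.5 * (P_o - P_i) * erf((r - R) / alpha0)

print(explosion_initial_condition(0.0, 0.0))  # recovers the inner state P_i
print(explosion_initial_condition(0.9, 0.0))  # recovers the outer state P_o
```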
The solution involves three different types of waves, namely an outward traveling shock front, an inward moving rarefaction fan and a contact wave in between. We set $\textnormal{CFL}=0.5$ and the final time $t_f=0.25$. Figure [20](#fig.EP2D){reference-type="ref" reference="fig.EP2D"} depicts the numerical results obtained with the third order version of the novel VEM-DG schemes, which are compared against the reference solution, computed as an equivalent one-dimensional problem in radial direction $r$ with a geometric source term (see [@ToroBook]). Excellent agreement can be appreciated, and we notice that the numerical solution preserves its symmetry even on general unstructured meshes, as is evident from the three-dimensional view of the density distribution in Figure [20](#fig.EP2D){reference-type="ref" reference="fig.EP2D"}.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Explosion problem at time $t_f=0.25$. Third order numerical results with VEM-DG scheme for density, horizontal velocity and pressure compared against the reference solution extracted with a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0$. Three-dimensional view of the density distribution is shown in the top left panel.](Figures/EP2D_rho3D.pdf){#fig.EP2D width="47%"} ![Explosion problem at time $t_f=0.25$. Third order numerical results with VEM-DG scheme for density, horizontal velocity and pressure compared against the reference solution extracted with a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0$. Three-dimensional view of the density distribution is shown in the top left panel.](Figures/EP2D_rho.pdf){#fig.EP2D width="47%"}
![Explosion problem at time $t_f=0.25$. Third order numerical results with VEM-DG scheme for density, horizontal velocity and pressure compared against the reference solution extracted with a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0$. Three-dimensional view of the density distribution is shown in the top left panel.](Figures/EP2D_u.pdf){#fig.EP2D width="47%"} ![Explosion problem at time $t_f=0.25$. Third order numerical results with VEM-DG scheme for density, horizontal velocity and pressure compared against the reference solution extracted with a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0$. Three-dimensional view of the density distribution is shown in the top left panel.](Figures/EP2D_p.pdf){#fig.EP2D width="47%"}
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Since the solution exhibits a shock wave, the artificial viscosity limiter is activated. Figure [22](#fig.EP2Dlim){reference-type="ref" reference="fig.EP2Dlim"} shows the limited cells, which are correctly detected only across the shock wave. The time evolution of the total number of limited cells is also reported. We notice that very few cells need to be supplemented with artificial viscosity: they are mainly concentrated along the shock profile and involve less than $4\%$ of the total number of elements.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Explosion problem with third order VEM-DG scheme. Troubled cell map (left) with the limited cells highlighted in red and the unlimited cells colored in blue; percentage of limited cells at each time step (right).](Figures/EP2D_lim.pdf){#fig.EP2Dlim width="47%"} ![Explosion problem with third order VEM-DG scheme. Troubled cell map (left) with the limited cells highlighted in red and the unlimited cells colored in blue; percentage of limited cells at each time step (right).](Figures/EP2D_badcells.pdf){#fig.EP2Dlim width="47%"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## Viscous shock profile
We consider an analytical solution of the compressible Navier-Stokes equations given by an isolated viscous shock wave which is traveling into a medium at rest with a shock Mach number of $\mathrm{M_s} > 1$. The exact solution is derived in [@Becker1923], where the compressible Navier-Stokes equations are solved for the special case of a stationary shock wave at Prandtl number $\Pr= 0.75$ with constant viscosity. This test case is particularly interesting because it allows all terms contained in the Navier-Stokes system to be properly verified, since advection, thermal conduction and viscous stresses are all present. The computational domain is given by $\Omega=[0;1]\times [0;0.2]$, which is discretized by a coarse grid made of $N_P=1120$ unstructured polygons, shown in the top panel of Figure [27](#fig.ViscousShock){reference-type="ref" reference="fig.ViscousShock"}. At $x=0$, inflow boundary conditions are imposed, while outflow conditions are considered at $x=1$. Periodic boundaries are set in the $y-$direction. The details for the setup of this test case can be found in [@ADERNSE; @ADERAFEDG], thus we briefly recall that the initial condition is given by a shock wave centered at $x=0.25$, traveling at Mach $\mathrm{M_s}=2$ from left to right, with a Reynolds number $\text{Re}=100$. The viscosity coefficient is $\mu=2 \cdot 10^{-2}$ and the final time of the simulation is $t_f=0.2$, with the shock front located at $x=0.65$. We use a $\textnormal{CFL}$ number of $\textnormal{CFL}=0.5$ and the third order VEM-DG schemes. The results are depicted in Figure [27](#fig.ViscousShock){reference-type="ref" reference="fig.ViscousShock"}, where a comparison against the analytical solution is provided, showing excellent agreement. We compare the exact solution and the numerical solution for density, horizontal velocity component, pressure and heat flux $q_x=\kappa \, \frac{\partial T}{\partial x}$, with $T=p/(R\rho)$ being the temperature, as given in the thermal EOS [\[EOS\]](#EOS){reference-type="eqref" reference="EOS"}. We also notice an excellent symmetry preservation of the solution along the $y-$direction, despite the use of unstructured Voronoi meshes, where in general the edges of the control volumes are not aligned with the main flow field in the $x-$direction.
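For post-processing, the heat flux along the one-dimensional cut can be recovered from the discrete fields according to the definition above; a minimal finite-difference sketch, where the gas constant `R_gas` and the conductivity `kappa` are assumed input parameters and all names are illustrative:

```python
import numpy as np

def heat_flux_along_cut(x, p, rho, kappa, R_gas):
    """Heat flux q_x = kappa * dT/dx with temperature T = p / (R_gas * rho)."""
    T = p / (R_gas * rho)
    return kappa * np.gradient(T, x)  # second-order finite differences on the cut
```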
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Viscous shock profile with shock Mach number $\mathrm{M_s}=2$ and Prandtl number $\Pr=0.75$ at time $t_f=0.2$. Top panel: Voronoi tessellation and pressure contours. Middle and bottom panel: third order numerical solution with VEM-DG scheme compared against the reference solution for density, horizontal velocity, pressure and heat flux (from middle left to bottom right panel). We show a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0.1$.](Figures/ShockNS_p2D.pdf){#fig.ViscousShock width="90%"}
![Viscous shock profile with shock Mach number $\mathrm{M_s}=2$ and Prandtl number $\Pr=0.75$ at time $t_f=0.2$. Top panel: Voronoi tessellation and pressure contours. Middle and bottom panel: third order numerical solution with VEM-DG scheme compared against the reference solution for density, horizontal velocity, pressure and heat flux (from middle left to bottom right panel). We show a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0.1$.](Figures/ShockNS_rho.pdf){#fig.ViscousShock width="47%"} ![Viscous shock profile with shock Mach number $\mathrm{M_s}=2$ and Prandtl number $\Pr=0.75$ at time $t_f=0.2$. Top panel: Voronoi tessellation and pressure contours. Middle and bottom panel: third order numerical solution with VEM-DG scheme compared against the reference solution for density, horizontal velocity, pressure and heat flux (from middle left to bottom right panel). We show a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0.1$.](Figures/ShockNS_u.pdf){#fig.ViscousShock width="47%"}
![Viscous shock profile with shock Mach number $\mathrm{M_s}=2$ and Prandtl number $\Pr=0.75$ at time $t_f=0.2$. Top panel: Voronoi tessellation and pressure contours. Middle and bottom panel: third order numerical solution with VEM-DG scheme compared against the reference solution for density, horizontal velocity, pressure and heat flux (from middle left to bottom right panel). We show a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0.1$.](Figures/ShockNS_p.pdf){#fig.ViscousShock width="47%"} ![Viscous shock profile with shock Mach number $\mathrm{M_s}=2$ and Prandtl number $\Pr=0.75$ at time $t_f=0.2$. Top panel: Voronoi tessellation and pressure contours. Middle and bottom panel: third order numerical solution with VEM-DG scheme compared against the reference solution for density, horizontal velocity, pressure and heat flux (from middle left to bottom right panel). We show a one-dimensional cut of 200 equidistant points along the $x-$direction at $y=0.1$.](Figures/ShockNS_qx.pdf){#fig.ViscousShock width="47%"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## 2D Taylor-Green vortex
The Taylor-Green vortex problem is a benchmark for the incompressible Navier-Stokes equations, and an exact solution is known in two space dimensions: $$\begin{aligned}
u(\mathbf{x},t)&=&\phantom{-}\sin(x)\cos(y) \, e^{-2\nu t}, \nonumber \\
v(\mathbf{x},t)&=&-\cos(x)\sin(y) \, e^{-2\nu t}, \nonumber \\
p(\mathbf{x},t)&=& C + \frac{1}{4}(\cos(2x)+\cos(2y)) \, e^{-4\nu t},
\label{eq:TG_ini}\end{aligned}$$ with $\nu=\mu/\rho$ being the kinematic viscosity of the fluid. For modeling the low Mach regime of the compressible Navier-Stokes equations, we set $C=100/\gamma$ as an additive constant for the pressure field. Heat conduction is neglected, the viscosity coefficient is chosen to be $\mu=10^{-2}$, and the initial density is $\rho(\mathbf{x},0)=1$. The computational domain is defined by $\Omega=[0;2\pi]^2$ with periodic boundaries everywhere. The mesh consists of $N_P=2916$ Voronoi cells. The final time of the simulation is $t_f=1$ and we set $\text{CFL}=0.5$. Figure [31](#fig.TGV2D){reference-type="ref" reference="fig.TGV2D"} shows the numerical results obtained running the third order version of the new VEM-DG schemes, where a comparison against the exact solution is depicted. Here, one can appreciate a very good agreement between the VEM-DG method in the low Mach number regime and the exact solution of the incompressible Navier-Stokes equations, both for velocity and pressure profiles. The stream-traces of the velocity field and the pressure distribution are also plotted.
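The exact fields in [\[eq:TG_ini\]](#eq:TG_ini){reference-type="eqref" reference="eq:TG_ini"} used as reference in Figure [31](#fig.TGV2D){reference-type="ref" reference="fig.TGV2D"} can be sampled with a few lines of code; the sketch below uses the parameters of this run and assumes $\gamma=1.4$ for the additive pressure constant (names are illustrative).

```python
import numpy as np

def taylor_green_exact(x, y, t, mu=1.0e-2, rho=1.0, gamma=1.4):
    """Exact 2D Taylor-Green vortex solution, assuming gamma = 1.4."""
    nu = mu / rho                 # kinematic viscosity
    C = 100.0 / gamma             # additive pressure constant (low Mach regime)
    u = np.sin(x) * np.cos(y) * np.exp(-2.0 * nu * t)
    v = -np.cos(x) * np.sin(y) * np.exp(-2.0 * nu * t)
    p = C + 0.25 * (np.cos(2.0 * x) + np.cos(2.0 * y)) * np.exp(-4.0 * nu * t)
    return u, v, p

# One-dimensional cut of 200 equidistant points along the x-axis at the final time.
s = np.linspace(0.0, 2.0 * np.pi, 200)
u_cut, v_cut, p_cut = taylor_green_exact(s, 0.0 * s, 1.0)
```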
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![2D Taylor-Green vortex at time $t_f=1$ with viscosity $\mu=10^{-2}$. Exact solution of the Navier-Stokes equations and third order numerical solution with VEM-DG scheme. Top: mesh configuration with pressure distribution (left) and vorticity magnitude with stream-traces (right). Bottom: one-dimensional cut of 200 equidistant points along the $x$-axis and the $y-$axis for the velocity components $u$ and $v$ (left) and for the pressure $p$ (right).](Figures/TGV_p2D.pdf){#fig.TGV2D width="47%"} ![2D Taylor-Green vortex at time $t_f=1$ with viscosity $\mu=10^{-2}$. Exact solution of the Navier-Stokes equations and third order numerical solution with VEM-DG scheme. Top: mesh configuration with pressure distribution (left) and vorticity magnitude with stream-traces (right). Bottom: one-dimensional cut of 200 equidistant points along the $x$-axis and the $y-$axis for the velocity components $u$ and $v$ (left) and for the pressure $p$ (right).](Figures/TGV_vorticity.pdf){#fig.TGV2D width="47%"}
![2D Taylor-Green vortex at time $t_f=1$ with viscosity $\mu=10^{-2}$. Exact solution of the Navier-Stokes equations and third order numerical solution with VEM-DG scheme. Top: mesh configuration with pressure distribution (left) and vorticity magnitude with stream-traces (right). Bottom: one-dimensional cut of 200 equidistant points along the $x$-axis and the $y-$axis for the velocity components $u$ and $v$ (left) and for the pressure $p$ (right).](Figures/TGV_velocity.pdf){#fig.TGV2D width="47%"} ![2D Taylor-Green vortex at time $t_f=1$ with viscosity $\mu=10^{-2}$. Exact solution of the Navier-Stokes equations and third order numerical solution with VEM-DG scheme. Top: mesh configuration with pressure distribution (left) and vorticity magnitude with stream-traces (right). Bottom: one-dimensional cut of 200 equidistant points along the $x$-axis and the $y-$axis for the velocity components $u$ and $v$ (left) and for the pressure $p$ (right).](Figures/TGV_pressure.pdf){#fig.TGV2D width="47%"}
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## Compressible mixing layer {#ssec.MixLayer}
The study of an unsteady compressible mixing layer, as proposed in [@Colonius], is widely used to test numerical methods for compressible viscous flows. The two-dimensional computational domain is the rectangular box $\Omega=[-200,200] \times [-50,50]$, discretized with a total number of $N_{{P}}=15723$ Voronoi cells. At the left side of the domain ($x=-200$), a time-dependent inflow boundary condition is prescribed in terms of a function $\delta(y,t)$: $$\begin{aligned}
\rho(y,t) &=& \rho_0 + 0.05 \, \delta(y,t), \nonumber \\
\mathbf{v}(y,t) &=& \mathbf{v}_0 + \left( \begin{array}{c}
1.0 \\ 0.6
\end{array} \right)\, \delta(y,t), \\
\quad p(y,t) &=& p_0 + 0.2 \, \delta(y,t), \nonumber \end{aligned}$$ with the background state $$\rho_0= 1, \quad \mathbf{v}_0 = \left( \begin{array}{c}
\frac{1}{8} \tanh(2y) + \frac{3}{8} \\ 0
\end{array} \right), \quad p_0 = \frac{1}{\gamma},$$ and the perturbation $$\begin{aligned}
\delta(y,t) &=& -10^{-3} \exp(-0.25 y^2) \cdot \nonumber \\
&\phantom{=}&\left[ \cos(\omega t) + \cos\left(\frac{1}{2}\omega t -0.028\right) + \cos\left(\frac{1}{4}\omega t +0.141\right) + \cos\left(\frac{1}{8}\omega t +0.391\right) \right], \end{aligned}$$ with the fundamental frequency of the mixing layer $\omega = 0.3147876$. Initially, the background state is assigned to the flow, hence $$\rho(\mathbf{x},0)=\rho_0, \quad \mathbf{v}(\mathbf{x},0)=\mathbf{v}_0, \quad p(\mathbf{x},0)=p_0.$$
On the right boundary we set outflow conditions, while the free stream velocities are imposed in the $y-$direction, thus we set $u_{+\infty}=0.5$ and $u_{-\infty}=0.25$ for $y \to + \infty$ and $y \to -\infty$, respectively. Heat conduction is neglected, and the viscosity coefficient is set to $\mu=10^{-3}$. For this final test, the simulation is run with the third order VEM-DG scheme with Runge-Kutta time stepping up to the final time $t_f=1596.8$. We show the vorticity of the flow field at three different output times in Figure [34](#fig.MixLayer){reference-type="ref" reference="fig.MixLayer"}. The resolution of the numerical results makes it possible to appreciate the vortical structures generated by the perturbation at the inflow boundary, demonstrating the capability of the novel methods to capture complex vortical patterns in the flow field. We also notice that these results are qualitatively in good agreement with those obtained with the method presented in [@ADERAFEDG].
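The time-dependent inflow state at $x=-200$ can be sampled directly from the formulas above; a minimal sketch evaluating the perturbation $\delta(y,t)$ and the perturbed primitive variables, with illustrative names and assuming $\gamma=1.4$ in the background pressure $p_0=1/\gamma$:

```python
import numpy as np

OMEGA = 0.3147876  # fundamental frequency of the mixing layer
GAMMA = 1.4        # assumed ratio of specific heats

def delta(y, t):
    """Inflow perturbation delta(y, t) of the compressible mixing layer."""
    envelope = -1.0e-3 * np.exp(-0.25 * y**2)
    phases = (np.cos(OMEGA * t)
              + np.cos(0.5 * OMEGA * t - 0.028)
              + np.cos(0.25 * OMEGA * t + 0.141)
              + np.cos(0.125 * OMEGA * t + 0.391))
    return envelope * phases

def inflow_state(y, t):
    """Primitive variables (rho, u, v, p) imposed at the left boundary x = -200."""
    d = delta(y, t)
    rho = 1.0 + 0.05 * d
    u = 0.125 * np.tanh(2.0 * y) + 0.375 + 1.0 * d
    v = 0.6 * d
    p = 1.0 / GAMMA + 0.2 * d
    return rho, u, v, p
```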
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
![Compressible mixing layer at time $t=500$, $t=1000$ and $t=1596.8$ (from top to bottom row). Third order numerical results with Runge-Kutta VEM-DG for $z-$vorticity. 51 contour levels in the range $[-0.12,0.12]$ have been used for plotting the vorticity distribution on the sub-domain $[-200,100]\times[-20,20]$.](Figures/MixingLayer_t500.pdf){#fig.MixLayer width="90%"}
![Compressible mixing layer at time $t=500$, $t=1000$ and $t=1596.8$ (from top to bottom row). Third order numerical results with Runge-Kutta VEM-DG for $z-$vorticity. 51 contour levels in the range $[-0.12,0.12]$ have been used for plotting the vorticity distribution on the sub-domain $[-200,100]\times[-20,20]$.](Figures/MixingLayer_t1000.pdf){#fig.MixLayer width="90%"}
![Compressible mixing layer at time $t=500$, $t=1000$ and $t=1596.8$ (from top to bottom row). Third order numerical results with Runge-Kutta VEM-DG for $z-$vorticity. 51 contour levels in the range $[-0.12,0.12]$ have been used for plotting the vorticity distribution on the sub-domain $[-200,100]\times[-20,20]$.](Figures/MixingLayer_t1596.pdf){#fig.MixLayer width="90%"}
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
# Conclusions {#sec.concl}
In this article, a new family of discontinuous Galerkin finite element methods has been introduced for the solution of nonlinear systems of hyperbolic PDEs on unstructured meshes composed of Voronoi cells. The numerical solution is approximated by means of novel Virtual Element basis functions, borrowing the $L_2$ projection operators defined in the Virtual Element Method [@vem2]. The resulting basis leads to a mixed nodal/modal approach that allows the discrete solution to be represented with arbitrary order of accuracy within each control volume. The time marching is carried out relying on the ADER strategy, which yields a fully discrete one-step DG scheme in space and time based on a variational formulation of the governing PDEs. The global solution space turns out to be discontinuous, thus the novel Virtual Element basis functions are said to be nonconforming, in accordance with the definition of nonconforming VEM spaces in [@VEM_nc_org]. The space-time basis functions are used to construct mass and stiffness matrices that are stabilized relying on an extension, proposed here for the first time, of the dof--dof VEM stabilization to space-time polynomial spaces. An orthogonalization of the basis functions is also put forward, following the ideas outlined in [@Berrone2017], which allows the condition number of the mass and stiffness matrices to be improved. Applications to compressible inviscid and viscous flows demonstrate the accuracy, robustness and capabilities of the novel VEM-DG schemes.
By exploiting the nodal interpolation property of the VEM basis, future work will concern the investigation of quadrature-free VEM-DG schemes, aiming at improving the computational efficiency of the algorithm. Furthermore, we remark that an implicit treatment of the viscous fluxes [@BDT_cns] would lead to a remarkable relaxation of the $\textnormal{CFL}$ stability condition on the maximum admissible time step, thus contributing to making the scheme more efficient as well. Finally, we also foresee the employment of the novel nonconforming Virtual Element basis for the construction of structure-preserving differential operators [@SPDGdivcurl] to be used in magnetohydrodynamics or in elasticity problems in solid mechanics. Stabilization-free techniques devised in the VEM framework [@BERRONE2023108641] will also be part of further developments in the definition of the space-time matrices.
# Acknowledgments {#acknowledgments .unnumbered}
WB received financial support by Fondazione Cariplo and Fondazione CDP (Italy) under the project No. 2022-1895 and by the Italian Ministry of University and Research (MUR) with the PRIN Project 2022 No. 2022N9BM3N. GB has been partially funded by the University of Ferrara under the call "Bando Giovani anno 2022". Both authors are members of GNCS--INdAM (*Gruppo Nazionale per il Calcolo Scientifico* of the Italian *Istituto Nazionale di Alta Matematica*). This work was partially carried out at the Institut de Mathématiques de Bordeaux (IMB, Bordeaux, France) during the visiting program of WB. Finally, the authors also wish to acknowledge the great opportunity for the exchange of ideas provided by the SHARK-FV 2023 workshop, where the main lines of research presented in this paper were initially conceptualized.
---
abstract: |
We obtain a generalization of the ABC Theorem on locally nilpotent derivations to the case of polynomials with $m$ monomials such that each variable appears in exactly one monomial. As applications of this result we provide constructions of rigid and semi-rigid algebras and describe the Makar-Limanov invariant of algebras of a special form.
address: HSE University, Faculty of Computer Science, 11 Pokrovsky Bulvar, Moscow, 109028, Russia
author:
- Veronika Kikteva
title: Generalization of the ABC theorem on locally nilpotent derivations
---
# Introduction
Suppose $\mathbb{K}$ is an algebraically closed field of characteristic zero. Let $B$ be a commutative $\mathbb{K}$-domain. A *derivation* on $B$ is a linear map $D$: $B\to B$ which satisfies the Leibniz rule. A derivation $D$ on $B$ is *locally nilpotent* if for each $b\in B$ there exists a non-negative integer $n$ such that $D ^n (b)=0$. We denote the set of locally nilpotent derivations on $B$ by $\mathrm{LND}(B)$.
We say that $b_1,b_2\in B$ are *relatively prime* in $B$ if $(b_1) \cap (b_2) = (b_1 b_2)$. This generalizes the definition of relative primeness for nonzero elements in a UFD.
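For example, in $B=\mathbb{K}[x,y]$ the elements $x$ and $y$ are relatively prime, since $(x) \cap (y) = (xy)$, whereas $x$ and $xy$ are not, since $(x) \cap (xy) = (xy) \neq (x^2 y)$.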
The following theorem is an important result in the theory of locally nilpotent derivations. It states that if some elements satisfy a polynomial equation of a special form, and additional conditions are imposed on them, then these elements are contained in the kernel of every locally nilpotent derivation.
**Theorem 1**. *[\[ABC\]]{#ABC label="ABC"} Suppose that $x,y,z\in B$ are pairwise relatively prime and satisfy $x^a+y^b+z^c=0$ for some integers $a, b, c \geq 2$. If $a^{-1}+b^{-1}+c^{-1}\leq 1$, then $\mathbb{K}[x,y,z]\subseteq \mathrm{Ker}\ D$ for all $D\in \mathrm{LND}(B)$.*
The ABC Theorem follows from the Mason --- Stothers Theorem. Let $q$ be a polynomial in one variable over the field $\mathbb{K}$. Denote by $N(q)$ the number of distinct roots of $q$. The Mason --- Stothers Theorem states the following.
**Theorem 2**. *[\[MSABC\]]{#MSABC label="MSABC"} Let $a(t),b(t),c(t)$ be relatively prime polynomials in one variable over $\mathbb{K}$, not all constant. If $a(t)+b(t)+c(t)=0$, then $$\mathrm{max}\{ \mathrm{deg}\ a(t), \mathrm{deg}\ b(t),\mathrm{deg}\ c(t)\}\leq N (a(t)b(t)c(t))-1.$$*
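For instance, take $a(t)=t^2$, $b(t)=1$ and $c(t)=-(t^2+1)$. These polynomials are relatively prime, not all constant, and sum to zero, while $a(t)b(t)c(t)=-t^2(t^2+1)$ has exactly three distinct roots $0, i, -i$. The bound of Theorem [\[MSABC\]](#MSABC){reference-type="ref" reference="MSABC"} then reads $2=\mathrm{max}\{ \mathrm{deg}\ a(t), \mathrm{deg}\ b(t),\mathrm{deg}\ c(t)\}\leq N(a(t)b(t)c(t))-1=2$, so in this case it is attained with equality.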
Theorem [\[genMSABC\]](#genMSABC){reference-type="ref" reference="genMSABC"} generalizes the result of Mason and Stothers.
**Theorem 3**. *[\[genMSABC\]]{#genMSABC label="genMSABC"} Let $n\geq 3$ and $f_1,\dots,f_n$ be not all constant polynomials over $\mathbb{K}$ such that $f_1+\dots+f_n=0$. Assume furthermore that for all $1\leq i_1<\dots<i_s\leq n$ we have $$f_{i_1}+\dots+f_{i_s}=0 \Longrightarrow \mathrm{gcd}(f_{i_1},\dots,f_{i_s})=1.$$ Then $$\max_{1\leq k\leq n} \mathrm{deg}(f_k) \leq (n-2) (N(f_1)+\dots+N(f_n)-1).$$*
The *Makar-Limanov invariant* of a commutative $\mathbb{K}$-domain $B$ is the subalgebra of $B$ defined by $$\mathrm{ML}(B)=\bigcap_{D\in \text{LND}(B)} \text{Ker}\ D.$$ This invariant was introduced by Makar-Limanov in [@ML], called the ring of absolute constants or the absolute kernel, and denoted by $\mathrm{AK}(B)$. This invariant has many remarkable properties, see [@fr]. It is easy to see that $$\mathrm{ML}(\mathbb{K}[X_1,\dots,X_n])=\mathbb{K}.$$ The Makar-Limanov invariant can be used to prove that some varieties are not isomorphic. In particular, this invariant was used to prove that the Koras-Russell cubic threefold is not isomorphic to the affine space, see [@ML]. It provided a new idea that led to solving the Linearization Problem for $\mathbb{C}^{\times}$-actions on $\mathbb{C}^3$.
A commutative $\mathbb{K}$-domain $B$ is called *rigid* if $\mathrm{ML}(B)=B$. Equivalently, there are no nonzero locally nilpotent derivations on $B$. An affine algebraic variety $X$ is *rigid* if the coordinate ring of $X$ is rigid. A commutative $\mathbb{K}$-domain $B$ is *semi-rigid* if there exists $D\in \text{LND}(B)$ such that $\mathrm{ML}(B)=\text{Ker}\ D$. Recall that two locally nilpotent derivations are equivalent if their kernels coincide. A semi-rigid algebra is either rigid or admits only one nonzero locally nilpotent derivation up to equivalence, see [@fr].
The Mason --- Stothers Theorem and the ABC Theorem are helpful in dealing with polynomials over algebraically closed fields. In particular, Fermat's Last Theorem for polynomials follows from Theorem [\[MSABC\]](#MSABC){reference-type="ref" reference="MSABC"}, see [@lang §7].
Using these theorems, the automorphism groups of certain factorial varieties were investigated in [@FinstonMaubach], and the rigidity of some varieties was proved in [@Chitayat; @Crachiola; @FinstonMaubachAlmostRigid]. For further applications and generalizations, see [@IV; @Gundersen; @langDiophantine] and the chapter "Another generalization of Mason's ABC-theorem" of [@genABC].
The aim of this paper is to prove a generalization of the ABC Theorem to the case of polynomials in which each variable appears in exactly one monomial, and to consider some applications of this result. The generalization of the ABC Theorem is stated in Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"}. To prove this statement we use Theorem [\[genMSABC\]](#genMSABC){reference-type="ref" reference="genMSABC"}, which provides a generalization of the Mason --- Stothers Theorem.
Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"} has several applications, some of which are given in the last section of this paper. We construct new classes of rigid and semi-rigid algebras, and describe the Makar-Limanov invariant of algebras of a special form.
The second result of this paper is Theorem [Theorem 10](#mysumneq0){reference-type="ref" reference="mysumneq0"} which generalizes the following result.
**Theorem 4**. *Let $D\in \mathrm{LND}(B)$ be nonzero. Suppose $u,v\in \mathrm{Ker}\ D$ and $x,y\in B$ are nonzero, while $a,b\in \mathbb{Z}_{\geq 2}$. Assume $ux^a+vy^b\neq 0$. Then the condition $D(ux^a+vy^b)=0$ implies $D(x)=D(y)=0$.*
Note that Theorem [Theorem 10](#mysumneq0){reference-type="ref" reference="mysumneq0"} allows us to answer the following question in a particular case.
**Question 5**. *Let $B$ be an affine $\mathbb{C}$-domain, and let $p\in \mathbb{C}^{[m]}$, with ${m\geq 2}$, be such that $\delta(p)\neq 0$ for all nonzero $\delta \in \mathrm{LND}(\mathbb{C}^{[m]})$. Suppose that there exist algebraically independent $a_1,\dots,a_m\in B$ and nonzero $D\in \mathrm{LND}(B)$ such that $D(p(a_1,\dots,a_m))=0$. Does this imply $D(a_i)=0$ for each $i=1,\dots,m$?*
The author is grateful to Sergey Gaifullin and to Ivan Arzhantsev for constant attention to this work.
# Preliminaries
Let $D$ be a locally nilpotent derivation on a commutative $\mathbb{K}$-domain $B$. Then the derivation $D$ induces the degree function on $B$: $$\mathrm{deg}_D(b)=\mathrm{min} \{n\in \mathbb{N}\ |\ D^{n+1}(b)=0 \}$$ for nonzero $b\in B$, and $\mathrm{deg}_D(0)= - \infty$.
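For example, if $B=\mathbb{K}[X,Y]$ and $D=\frac{\partial}{\partial X}$, then $\mathrm{deg}_D(X^3Y)=3$ and $\mathrm{deg}_D(Y)=0$; in this case $\mathrm{deg}_D$ coincides with the usual degree in $X$.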
Recall that a subalgebra $A\subseteq B$ is called *factorially closed* if for all nonzero $x,y\in B$ the condition $xy\in A$ implies $x,y\in A$. The kernel of every locally nilpotent derivation is factorially closed, see [@fr Principle 1 (a)].
An element $t\in B$ is a *local slice* of a derivation $D\in \mathrm{LND}(B)$ if $D^2(t)=0$ and $D(t)\neq 0$. Note that if a derivation $D$ is locally nilpotent and nonzero, then it has a local slice. Indeed, it suffices to choose $b\in B$ such that $D(b)\neq 0$ and put $n:=\mathrm{deg}_D (b)\geq 1$. We have $D^n(b)\neq 0$ and $D^{n+1}(b)=0$. Therefore, $D^{n-1}(b)$ is a local slice for $D$.
Denote by $\mathrm{Quot}(B)$ the quotient field of $B$. Let $S\subseteq B \setminus \{ 0 \}$ be a multiplicatively closed subset. Consider the ring $$S^{-1}B:=\{ ab^{-1}\in \mathrm{Quot}(B) \ | \ a\in B, b\in S \}.$$ The ring $S^{-1}B$ is called the *localization* of $B$ at $S$. If $S=\{ f^i \}_{i\geq 0}$ for some nonzero $f\in B$ then $B_f$ denotes $S^{-1}B$.
**Theorem 6**. *[\[thmslice\]]{#thmslice label="thmslice"} Let $D\in\mathrm{LND}(B)$ be given, and let $t\in B$ be a local slice of $D$. Set $A=\mathrm{Ker}\ D$. Then $B_{D(t)}=A_{D(t)}[t]$.*
Note that if there is a nonzero locally nilpotent derivation $D$ on $B$ then it is possible to embed $B$ in the polynomial ring in a local slice $t$ of $D$ over the quotient field of the kernel of $D$. The following lemma claims that if elements are pairwise relatively prime in $B$ then they are pairwise relatively prime as polynomials in $t$.
**Lemma 7**. *Let $A$ be a subalgebra of $B$ and let $t\in B$ be a transcendental element over $A$ such that $B\subseteq \mathrm{Quot}(A)[t]$. If elements $g_1,\dots,g_m$ are pairwise relatively prime as elements of $B$ then they are pairwise relatively prime as elements of $\mathrm{Quot}(A)[t]$.*
*Proof.* Assume that $g_i$ and $g_j$ are relatively prime in $B$ and have a nontrivial common divisor $f$ in $\mathrm{Quot}(A)[t]$. Since $f$ divides $g_i$ in $\mathrm{Quot}(A)[t]$, it follows that $\frac{g_i}{f}$ has the form $$\frac{g_i}{f}=\frac{c_n}{a}t^n+\dots +\frac{c_1}{a}t +\frac{c_0}{a},$$ where $a, c_0,\dots ,c_n\in A$. Similarly, the element $\frac{g_j}{f}$ has the form $$\frac{g_j}{f}=\frac{d_k}{b}t^k+\dots +\frac{d_1}{b}t +\frac{d_0}{b}$$ for some $b,d_0,\dots, d_k\in A$.
Note that $\frac{a g_i}{f},\frac{b g_j}{f}\in B$. Therefore, the element $$q:= \frac{abg_i g_j}{f}=g_i a \frac{b g_j}{f}=g_j b \frac{a g_i}{f}$$ is contained in the intersection of the ideals generated by $g_i$ and $g_j$, i.e., $q\in (g_i)\cap (g_j)$.
Since $g_i$ and $g_j$ are relatively prime in $B$, we have $(g_i) \cap (g_j)=(g_i g_j)$, hence there exists an element $h\in B$ such that $$q=\frac{abg_i g_j}{f}=g_i g_j h.$$ Therefore, we obtain $$\mathrm{deg}_t(a)+\mathrm{deg}_t(b)=0=\mathrm{deg}_t(h)+\mathrm{deg}_t(f).$$ Thus we have $\mathrm{deg}_t(f)=0$, because the degrees of elements of $\mathrm{Quot}(A)[t]$ as polynomials in $t$ cannot be negative. This contradicts the assumption that $f$ is not a constant in $\mathrm{Quot}(A)[t]$, which proves the lemma. ◻
# The main results
Let $D$ be a locally nilpotent derivation on the commutative $\mathbb{K}$-domain $B$. Suppose $m,n_1,\dots,n_m$ are positive integers, where $m\geq 3$, while $a_{i}\in \text{Ker}\ D$ are nonzero, $k_{ij}$ are positive integers and elements $b_{ij}\in B$, where $i=1,\dots,m$, and $j=1,\dots,n_i$.
Furthermore, $$f_i:=a_i b_{i1}^{k_{i1}}\dots b_{in_i}^{k_{in_i}} \in B,$$ where $i=1,\dots,m.$
**Theorem 8**. *Suppose that $f_1+\dots+f_m=0$ and elements $f_1,\dots,f_m$ are pairwise relatively prime. Assume that $$\sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}}\leq \frac{1}{m-2}.$$*
*Then the element $b_{ij}$ is contained in the kernel of $D$ for all $i=1,\dots,m$ and $j=1,\dots,n_i$.*
*Proof.* Denote by $A$ the kernel of $D$ and by $t\in B$ a local slice of $D$. By Theorem [\[thmslice\]](#thmslice){reference-type="ref" reference="thmslice"} it follows that there exists an embedding $B\subseteq K[t]=K^{[1]}$, where $K=\mathrm{Quot}(A)$, and the degree of elements of $B$ as polynomials in $t$ coincides with the degree function induced by the locally nilpotent derivation $D$: ${\mathrm{deg}_t=\mathrm{deg}_D=:\mathrm{deg}}$. In particular, an element of $B$ is contained in the kernel of $D$ if and only if it is constant as a polynomial in $t$.
Without loss of generality it can be assumed that $$\mathrm{deg}(f_1)\geq \dots \geq \mathrm{deg}(f_m).$$ Then for all $i=1,\dots,m$ and $j=1,\dots,n_i$ we have $$\mathrm{deg}(b_{ij})=\frac{k_{ij} \mathrm{deg}(b_{ij})}{k_{ij}}\leq \frac{\sum_{\iota=1}^{n_i} k_{i\iota} \mathrm{deg}(b_{i\iota})}{k_{ij}}=\frac{\mathrm{deg}(f_i)}{k_{ij}}\leq \frac{\mathrm{deg}(f_1)}{k_{ij}}.$$
The proof is by reductio ad absurdum. Assume that there exists a positive integer $i$, $1\leq i\leq m$, such that $\mathrm{deg}(f_i)>0$. From the conditions of Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"}, elements of any subset $\{f_{i_1},\dots,f_{i_s}\} \subseteq \{f_1,\dots,f_m\}$ are pairwise relatively prime. By Lemma [Lemma 7](#vzprost){reference-type="ref" reference="vzprost"} they are pairwise relatively prime as elements of $K[t]$. Then $\mathrm{gcd}(f_{i_1},\dots,f_{i_s})=1$ for each subset ${\{f_{i_1},\dots,f_{i_s}\} \subseteq \{f_1,\dots,f_m\}}$.
By Theorem [\[genMSABC\]](#genMSABC){reference-type="ref" reference="genMSABC"} we obtain: $$\mathrm{deg}(f_1)\leq (m-2)(N(f_1)+\dots+N(f_m)-1)\leq$$ $$\leq (m-2)\left(\sum_{i=1}^{m} \sum_{j=1}^{n_i} N(b_{ij})-1\right)\leq (m-2)\left( \sum_{i=1}^{m} \sum_{j=1}^{n_i}\mathrm{deg}(b_{ij})-1\right)\leq$$ $$\leq (m-2)\left( \mathrm{deg}(f_1) \sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}} -1\right).$$ Therefore, we have $$\mathrm{deg}(f_1)\left( \frac{1}{m-2} - \sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}} \right)\leq -1<0.$$ This is a contradiction. We have that the element $f_i$ belongs to the kernel of $D$ for all $i=1,\dots,m$. Since the kernel of a locally nilpotent derivation is factorially closed, we obtain that the element $b_{ij}$ is contained in the kernel of $D$ for any $i=1,\dots,m$ and $j=1,\dots,n_i$. ◻
*Remark 9*. Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"} generalizes Theorem [\[ABC\]](#ABC){reference-type="ref" reference="ABC"}: put $m=3,\ n_1=n_2=n_3 = 1$ and $a_1=a_2=a_3= 1$. Then the condition $$\sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}}\leq \frac{1}{m-2}$$ becomes the condition $$a^{-1}+b^{-1}+c^{-1}\leq 1$$ of Theorem [\[ABC\]](#ABC){reference-type="ref" reference="ABC"}.
**Theorem 10**. *Suppose that $f_1+\dots+f_m\in \mathrm{Ker}\ D \backslash \{ 0\}$ and for all $s\leq m$ and ${1\leq i_1<\dots<i_s\leq m}$ we have $$f_{i_1}+\dots+f_{i_s}=0 \Longrightarrow f_{i_1},\dots,f_{i_s}\ \textit{are\ pairwise\ relatively\ prime}.$$ Assume that $$\sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}} \leq \frac{1}{m-1}.$$*
*Then the element $b_{ij}$ is in the kernel of $D$ for all $i=1,\dots,m$ and $j=1,\dots,n_i$.*
*Proof.* We apply the argument utilized in the proof of Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"}. There exists an embedding $B\subseteq K[t]$, where $K$ is the quotient field of the kernel of the locally nilpotent derivation $D$ and $t$ is a local slice of $D$.
As above, we assume that $$\mathrm{deg}(f_1)\geq \dots \geq \mathrm{deg}(f_m),$$ therefore, $$\mathrm{deg}(b_{ij})=\frac{k_{ij} \mathrm{deg}(b_{ij})}{k_{ij}}\leq \frac{\sum_{\iota=1}^{n_i} k_{i\iota} \mathrm{deg}(b_{i\iota})}{k_{ij}}=\frac{\mathrm{deg}(f_i)}{k_{ij}}\leq \frac{\mathrm{deg}(f_1)}{k_{ij}}$$ for all $i=1,\dots,m$ and $j=1,\dots,n_i$.
We denote $f_{m+1}:=-(f_1+\dots+f_m)$. Note that $f_{m+1}$ is a nonzero element of $\mathrm{Ker}\ D\subseteq K$. Hence, $\mathrm{deg}(f_{m+1})=N(f_{m+1})=0$. We assume again that there exists a positive integer $i$ such that $\mathrm{deg}(f_i)>0$. From the conditions of Theorem [Theorem 10](#mysumneq0){reference-type="ref" reference="mysumneq0"}, if $f_{i_1}+\dots+f_{i_s}=0$ then $f_{i_1},\dots,f_{i_s}$ are pairwise relatively prime in $B$. By Lemma [Lemma 7](#vzprost){reference-type="ref" reference="vzprost"}, for every subset $\{f_{i_1},\dots,f_{i_s}\}\subseteq \{f_1,\dots, f_m\}$ such that $f_{i_1}+\dots+f_{i_s}=0$ we have $\mathrm{gcd}(f_{i_1},\dots,f_{i_s})=1$ when these are considered as polynomials in $t$. By Theorem [\[genMSABC\]](#genMSABC){reference-type="ref" reference="genMSABC"}, applied to $f_1+\dots+f_m+f_{m+1}=0$, we have $$\mathrm{deg}(f_1)\leq (m-1)(N(f_1)+\dots+N(f_m)+N(f_{m+1})-1)\leq$$ $$\leq (m-1)\left(\sum_{i=1}^{m} \sum_{j=1}^{n_i} N(b_{ij})-1\right)\leq (m-1)\left( \sum_{i=1}^{m} \sum_{j=1}^{n_i}\mathrm{deg}(b_{ij})-1\right)\leq$$ $$\leq(m-1)\left( \mathrm{deg}(f_1) \sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}} -1\right).$$ Therefore, we obtain $$\mathrm{deg}(f_1)\left( \frac{1}{m-1} - \sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}} \right)\leq -1<0.$$ This is a contradiction, since by assumption the left-hand side is nonnegative. We get that $f_1,\dots,f_m$ are in the kernel of $D$. Since the kernel of a locally nilpotent derivation is factorially closed, we get that every $b_{ij}$ is contained in the kernel of $D$. ◻
*Remark 11*. In case $B$ is a factorial algebra, the condition on $f_1,\dots,f_m$ to be pairwise relatively prime in Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"} can be replaced by the following two conditions: $\mathrm{gcd}(f_1,\dots,f_m)=1$ and, for any positive integer $s \leq m$ and all $1\leq i_1<\dots<i_s\leq m$, $$f_{i_1}+\dots+f_{i_s}=0 \Longrightarrow \mathrm{gcd}(f_{i_1},\dots,f_{i_s})=1,$$ because these conditions suffice for applying Theorem [\[genMSABC\]](#genMSABC){reference-type="ref" reference="genMSABC"}. This applies to Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"} if $B/(F)$ is a UFD, and to Corollary [Corollary 22](#MLmysumeq0){reference-type="ref" reference="MLmysumeq0"} if $B$ is a UFD.
In Theorem [Theorem 10](#mysumneq0){reference-type="ref" reference="mysumneq0"}, if $B$ is a factorial algebra then the condition that $f_{i_1},\dots,f_{i_s}$ are pairwise relatively prime can be replaced by $\mathrm{gcd}(f_{i_1},\dots,f_{i_s})=1$.
# Applications
## Construction of rigid algebras
Suppose $m,n_1,\dots,n_m$ are positive integers, where $m\geq 3$, $a_{i}\in \mathbb{K}^{\times}$, $k_{ij}$ are positive integers and $b_{ij}\in B$, where $i=1,\dots,m$ and $j=1,\dots,n_i$. Assume that elements $b_{ij}$ generate $B$ as an algebra.
We denote $$F_i:=a_i b_{i1}^{k_{i1}}\dots b_{in_i}^{k_{in_i}} \in B,$$ where $i=1,\dots,m.$
Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"} implies
**Corollary 12**. *Let $F:=F_1+\dots+F_m$ be a prime element of $B$. Denote by $f_i$ the image of $F_i$ in the factor-algebra $B/(F)$. Assume that $f_1,\dots,f_m$ are pairwise relatively prime and $$\sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}}\leq \frac{1}{m-2}.$$*
*Then the algebra $B/(F)$ is rigid.*
To apply Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"} it is necessary to check the pairwise relative primeness of those elements whose sum in the factor-algebra equals zero. The following lemma can be used to prove this.
**Lemma 13**. *Under the previous notation, if elements $b_{ij}$ are algebraically independent, then $f_1,\dots ,f_m$ are pairwise relatively prime.*
*Proof.* Without loss of generality we prove that the elements $f_1$ and $f_2$ are relatively prime. It suffices to show that there is an inclusion $(f_1) \cap (f_2)\subseteq (f_1 f_2)$. Take an element $g\in (f_1) \cap (f_2)$. Let $G$ be a preimage of $g$ in the polynomial ring $\mathbb{K}[b_{11},\dots, b_{mn_m}]$, so that $$G=H_1 F_1 +P_1 F= H_2 F_2 +P_2 F$$ for some polynomials $H_1,H_2,P_1,P_2\in \mathbb{K}[b_{11},\dots, b_{mn_m}].$ Hence, $$H_1 F_1 + (P_1 - P_2) F = H_2 F_2 = G - P_2 F.\eqno{(1)}$$
The monomials of the polynomial $P_1-P_2$ can be divided into two parts: the first part contains the terms divisible by $F_2$, and the second part consists of the remaining monomials. The same is true for the polynomial $H_1$.
Therefore, these polynomials can be represented in the following form: $$P_1 - P_2 = P_0 F_2 + \widetilde{P},\ H_1 = H_0 F_2 + \widetilde{H}, \eqno{(2)}$$ where $\widetilde{P}$ and $\widetilde{H}$ have no monomials divisible by $F_2$.
Combining (2) and (1), we obtain $$F_2\ |\ (H_0 F_2 + \widetilde{H}) F_1 + (P_0 F_2 + \widetilde{P})F.$$ Subtracting elements divisible by $F_2$, we have $$F_2\ |\ \widetilde{H} F_1 + \widetilde{P} (F_1+F_3+\dots+F_m),$$ while $F_1$ and $F_1+F_3+\dots+F_m$ do not contain variables from $F_2$. Thus, we get $$\widetilde{H} F_1 + \widetilde{P} (F_1+F_3+\dots+F_m)=0,$$ and $F_1\ |\ \widetilde{P}$. Let $\widetilde{P}=\widehat{P}_0 F_1$. Then from decomposition (2) we obtain $P_1-P_2 = \widehat{P}_0 F_1 + P_0 F_2$.
Using (1) we get that the polynomial $F_2$ divides the polynomial $$H_1 F_1 + (\widehat{P}_0 F_1 + P_0 F_2) F = H_2 F_2 = G - P_2 F=$$ $$= (H_1 + \widehat{P}_0 F)F_1 + P_0 F_2 F,$$ hence, $F_2\ |\ H_1 + \widehat{P}_0 F$. So, $G - P_2 F - P_0 F_2 F \in (F_1 F_2)$ and $g\in (f_1 f_2)$. This completes the proof of the relative primeness of $f_1$ and $f_2$. ◻
*Remark 14*. In case $m=3$ the statement of Lemma [Lemma 13](#mylemma){reference-type="ref" reference="mylemma"} follows from [@HH]. Recall the necessary definitions. Let $R$ be an algebra graded by a finitely generated abelian group $K$. An element $r\in R$ is called a $K$*-prime* if $r$ is a homogeneous nonzero nonunit such that if $r$ divides a product of some homogeneous elements, then it divides one of the factors. An algebra $R$ is called *factorially graded* if every homogeneous nonzero nonunit is a product of $K$-primes. This decomposition is unique up to multiplication by a unit and permutation of factors.
Let us prove Lemma [Lemma 13](#mylemma){reference-type="ref" reference="mylemma"} in case $m=3$ using results of [@HH]. We denote by $\widetilde{b}_{ij}$ the image of $b_{ij}$ in the factor-algebra $B/(F)$. Note that $B/(F)$ is factorially graded with respect to the abelian finitely generated group $K$ (see [@HH Theorem 10.1(i)]), and $\widetilde{b}_{ij}$ are pairwise nonassociated $K$-primes (see [@HH Theorem 10.4(i)]). If a homogeneous element $g\in B/(F)$ is contained in the intersection of the ideals $(f_i)$ and $(f_j)$, then it has the form $$g=a_i \widetilde{b}_{i1}^{k_{i1}}\dots \widetilde{b}_{in_i}^{k_{in_i}} h_i = a_j \widetilde{b}_{j1}^{k_{j1}}\dots \widetilde{b}_{jn_j}^{k_{jn_j}} h_j$$ for some elements $h_i, h_j\in B/(F)$. Since a decomposition into $K$-primes is unique, we obtain $f_i|h_j$ and $f_j|h_i$. Therefore, the element $g$ belongs to the ideal $(f_i f_j)$. Every element of the factor-algebra $B/(F)$ can be represented as the sum of homogeneous elements, hence, $(f_i)\cap (f_j)=(f_i f_j)$. The relative primeness of $f_i$ and $f_j$ is proved.
**Example 15**. (Trinomial hypersurfaces) Suppose $n_0,n_1$ and $n_2$ are positive integers and $l_{ij}$ are positive integers, where $0\leq i \leq 2$ and $1\leq j \leq n_i$. We denote $$T_i^{l_i}:=T_{i1}^{l_{i1}}\dots T_{in_i}^{l_{in_i}}\in \mathbb{K}[T_{ij};\ 0\leq i\leq 2,\ 1\leq j\leq n_i],$$ where $i=0,1,2$. Let $X$ be a trinomial hypersurface with coordinate ring $$\mathbb{K}[X] \simeq \mathbb{K}[T_{01},\dots, T_{2n_2}]/(T_0^{l_0}+T_1^{l_1}+T_2^{l_2}).$$ We assume that $\sum l_{ij}^{-1}\leq 1$. By Lemma [Lemma 13](#mylemma){reference-type="ref" reference="mylemma"} we have that the images of $T_0^{l_0}$, $T_1^{l_1}$ and $T_2^{l_2}$ in the factor-algebra are pairwise relatively prime. Hence, the algebra $\mathbb{K}[X]$ satisfies the conditions of Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"}. By Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"} we obtain that the hypersurface $X$ is rigid.
In this example our result confirms an already known fact. We denote ${d_i:=\text{gcd}(l_{i1},\dots,l_{in_i})}$. If the numbers $d_i$ are pairwise relatively prime, then $X$ is a factorial trinomial hypersurface, and its rigidity follows from [@Ar16]. Otherwise the rigidity of $X$ follows from [@SA].
In particular, a hypersurface given by the equation $$\{X_1^6X_2^7+Y_1^8Y_2^9+Z_1^{10}Z_2^{11}=0\}$$ in the 6-dimensional affine space with coordinates $X_1$, $X_2$, $Y_1$, $Y_2$, $Z_1$ and $Z_2$ is rigid by Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"}. The relative primeness of the images of $X_1^6X_2^7,\ Y_1^8Y_2^9$ and $Z_1^{10}Z_2^{11}$ in the factor-algebra $$\mathbb{K}[X_1,X_2,Y_1,Y_2,Z_1,Z_2]/(X_1^6X_2^7+Y_1^8Y_2^9+Z_1^{10}Z_2^{11})$$ follows from Lemma [Lemma 13](#mylemma){reference-type="ref" reference="mylemma"}. It is easy to see that the condition $$\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10}+\frac{1}{11}\leq 1$$ holds.
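For the reader's convenience, here is one way to verify this inequality explicitly, reducing to the common denominator $27720=\mathrm{lcm}(6,7,8,9,10,11)$: $$\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10}+\frac{1}{11}=\frac{4620+3960+3465+3080+2772+2520}{27720}=\frac{20417}{27720}<1.$$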
**Example 16**. (Trinomial varieties) The class of trinomial varieties is more general than the class of trinomial hypersurfaces. Trinomial varieties are algebraic varieties given by an appropriate system of trinomials. We use the notation of [@HH].
For a positive integer $r\geq 1$ let $A=(a_0,\dots,a_r)$ be a sequence of vectors $a_i=(b_i,c_i)\in\mathbb{K}^2$ such that the pair of vectors $(a_i,a_k)$ is linearly independent for any $i\neq k$. Suppose $\mathfrak{n}=(n_0,\dots, n_r)$ is a sequence of positive integers, $L=(l_{ij})$ is a set of positive integers, where $0\leq i\leq r$ and $1\leq j\leq n_i$. For any $0\leq i \leq r$ we define a monomial $$T_i^{l_i}:= T_{i1}^{l_{i1}}\dots T_{in_i}^{l_{in_i}}\in S:= \mathbb{K}[T_{ij};\ 0\leq i \leq r, 1\leq j\leq n_i].$$ Given any pair of indices $0\leq i,j \leq r$ put $\alpha_{ij}:=\mathrm{det}(a_i,a_j)=b_i c_j - b_j c_i$, and for any triple of indices $0\leq i< j< k\leq r$ let $$g_{i,j,k}:=\alpha_{jk}T_i^{l_i} + \alpha_{ki}T_j^{l_j} +\alpha_{ij}T_k^{l_k}\in S.$$ A *trinomial variety* is an affine algebraic variety $X$ with coordinate ring $$\mathbb{K}[X] \simeq R(A,\mathfrak{n}, L):= \mathbb{K}[T_{ij},\ 0\leq i\leq r,\ 1\leq j \leq n_i]/(g_{i,i+1,i+2};\ 0\leq i\leq r-2).$$ We denote by $t_{ij}$ the image of $T_{ij}$ in the factor-algebra $R(A, \mathfrak{n}, L)$. In addition we assume that, for every trinomial, the sum of the reciprocals of the exponents of all variables occurring in this trinomial is at most $1$. To apply Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"} it remains to show the pairwise relative primeness of the elements $t_i^{l_i}=t_{i1}^{l_{i1}}\dots t_{in_i}^{l_{in_i}}$. This follows from the fact that $\mathbb{K}[X]$ is factorially graded, see [@HH Theorem 1.1(i)].
A criterion for a trinomial variety to be factorial is that the numbers $d_i=\text{gcd}(l_{i1},\dots,l_{in_i})$ are pairwise relatively prime, see [@HH Theorem 1.1(ii)]. A criterion for a factorial trinomial variety to be rigid is obtained in [@SA]. No criterion for a non-factorial trinomial variety to be rigid is known yet.
It is easy to construct a non-factorial trinomial variety that is rigid by Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"}. For example, a variety $X$ with coordinate ring $$\mathbb{K}[X]\simeq \mathbb{K}[X_1,X_2,Y,Z_1,Z_2,W_1,W_2]/(X_1^6 X_2^{12}+Y^7 +Z_1^6 Z_2^9,\ X_1^6 X_2^{12}+2Y^7 +W_1^8 W_2^9)$$ satisfies these conditions. Denote by $x_1,x_2,y,z_1,z_2,w_1$ and $w_2$ the images of the elements $X_1,X_2,Y,Z_1,Z_2,W_1$ and $W_2$ in the factor-algebra. Since $\mathbb{K}[X]$ is factorially graded, the elements $$x_1^6 x_2^{12}, y^7, z_1^6 z_2^9\ \mathrm{and}\ x_1^6 x_2^{12}, 2y^7, w_1^8 w_2^9$$ are pairwise relatively prime. It is clear that $$\frac{1}{6}+\frac{1}{12}+\frac{1}{7}+\frac{1}{6}+\frac{1}{9}\leq 1 \ \mathrm{and}\ \frac{1}{6}+\frac{1}{12}+\frac{1}{7}+\frac{1}{8}+\frac{1}{9}\leq 1.$$
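Let us also indicate one way to see that this variety is not factorial, using the criterion of [@HH Theorem 1.1(ii)] quoted above. Grouping the variables according to the monomials of the defining trinomials as $(X_1,X_2)$, $(Y)$, $(Z_1,Z_2)$ and $(W_1,W_2)$, the corresponding numbers are $$d_0=\gcd(6,12)=6,\quad d_1=7,\quad d_2=\gcd(6,9)=3,\quad d_3=\gcd(8,9)=1,$$ and $\gcd(d_0,d_2)=3\neq 1$, so the $d_i$ are not pairwise relatively prime.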
**Example 17**. ($m$-term hypersurfaces) An affine algebraic variety is called an *$m$-term hypersurface* if its coordinate ring is isomorphic to a polynomial ring factorized by an ideal generated by a polynomial with $m$ monomials.
In case $b_{ij}$ are algebraically independent and $B$ is a polynomial ring, elements $f_1,\dots,f_m$ are pairwise relatively prime by Lemma [Lemma 13](#mylemma){reference-type="ref" reference="mylemma"}. Assume that $$\sum k_{ij}^{-1}\leq \frac{1}{m-2}.$$ Then the $m$-term hypersurface with coordinate ring isomorphic to $B/(F)$ is rigid by Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"}.
For example, the factor-algebra $$B=\mathbb{K}[X,Y,Z,V,W]/(X^{10} + Y^{10} Z^{11} + V^{10}+W^{10})$$ is rigid. The pairwise relative primeness of the images of the monomials $X^{10},Y^{10}Z^{11},V^{10}$ and $W^{10}$ in the factor-algebra follows from Lemma [Lemma 13](#mylemma){reference-type="ref" reference="mylemma"}. The condition $$\frac{1}{10}+\frac{1}{10}+\frac{1}{11}+\frac{1}{10}+\frac{1}{10}\leq \frac{1}{2}$$ holds.
**Example 18**. Suppose $F_1,F_2,F_3$ and $F$ are polynomials in variables $X,Y,Z,V$ and $W$ over the field $\mathbb{K}$: $$F_1:=XY+Z,\ F_2:=Y^6 +Z^6 +W^6+ V,\ F_3:=Y^6 +Z^6 +W^6- V,$$ we denote $F:=F_1^3+F_2^3+F_3^3$. An algebra $B$ is given by $$B=\mathbb{K}[X,Y,Z,V,W]/(F).$$ Let $f_1, f_2$ and $f_3$ be the images of $F_1, F_2$ and $F_3$ in the factor-algebra.
Let us prove that the algebra $B$ is rigid. It is necessary to check the pairwise relative primeness of the elements $f_1^3, f_2^3$ and $f_3^3$. It follows from the pairwise relative primeness of the elements $f_1, f_2$ and $f_3$, because if some elements $a,b\in B$ are relatively prime and $m,n \in \mathbb{Z}_{\geq 1}$, then elements $a^m$ and $b^n$ are relatively prime too, see [@fr].
Let $g\in (f_1) \cap (f_2)$. Consider a preimage of $g$ in the polynomial ring. It has the form $$G=H_1 F_1 + P_1 F=H_2 F_2 + P_2 F$$ for some polynomials $H_1, H_2, P_1$ and $P_2$.
Therefore, we have $$G-P_2 F=H_1 F_1 + (P_1-P_2)F = H_2 F_2.\eqno{(3)}$$
We reduce all parts of equation (3) by the polynomial $F_2$ according to the lexicographic order $V>W>X>Y>Z$. This means that we replace $V$ with $-Y^6-Z^6-W^6$. We get that $F_1$ does not change, $F_2$ reduces to zero, and $F$ changes to $F_1^3 + 8(Y^6+Z^6+W^6)^3$. Denote by $\widetilde{H}_1,\ \widetilde{H}_2,\ \widetilde{P}$ and $\widetilde{G}$ the images of the polynomials $H_1,\ H_2,\ P_1-P_2$ and $G$ under the reduction. From (3) it follows that $$\widetilde{H}_1 F_1 + \widetilde{P}(F_1^3 + 8(Y^6+Z^6+W^6)^3) = 0,$$ hence $F_1\ |\ \widetilde{P}$. The polynomial $\widetilde{P}$ is obtained from $P_1-P_2$ by the reduction using $F_2$. Therefore, the polynomial $P_1-P_2$ has the form $\widehat{P}_1 F_1+\widehat{P}_2 F_2$ for some polynomials $\widehat{P}_1$ and $\widehat{P}_2$.
Combining with (3), we get $$G-P_2 F=H_1 F_1 + (\widehat{P}_1 F_1+\widehat{P}_2 F_2)F = H_2 F_2,$$ whence $F_2\ |\ H_1 F_1+ \widehat{P}_1 F_1 F$. Therefore, $F_2\ |\ H_1+ \widehat{P}_1 F$ and $G-P_2 F - \widehat{P}_2 F_2 F \in (F_1 F_2)$. We obtain $g\in (f_1 f_2)$. The relative primeness of the elements $f_1$ and $f_2$ is proved. The relative primeness of $f_1$ and $f_3$ can be proved in the same way.
It remains to show that the elements $f_2$ and $f_3$ are relatively prime. We consider an element $g$ belonging to the intersection of the ideals generated by $f_2$ and $f_3$, i.e., ${g\in (f_2)\cap (f_3)}$. Any preimage of $g$ in the polynomial ring has the form $$G=H_2 F_2 + P_2 F = H_3 F_3 + P_3 F \eqno{(4)}$$for some polynomials $H_2,H_3,P_2$ and $P_3$. Again we reduce all parts of equation (4) by the polynomial $F$ according to the lexicographic order ${X>Y>Z>V>W}$. We obtain that the polynomials $F_2$ and $F_3$ do not change, because these polynomials are independent of the variable $X$, and $F$ reduces to zero. Denote by $\widetilde{H}_2,\widetilde{H}_3$ and $\widetilde{G}$ the images of the polynomials $H_2,H_3$ and $G$ under the reduction. Therefore, from (4) we have $$\widetilde{G}=\widetilde{H}_2 F_2= \widetilde{H}_3 F_3.$$
The polynomials $F_2$ and $F_3$ are relatively prime. Indeed, since $F_2+F_3=2(Y^6+Z^6+W^6)$ and $F_2-F_3=2V$, a nontrivial common divisor of $F_2$ and $F_3$ would also be a nontrivial common divisor of the elements $Y^6+Z^6+W^6$ and $V$, which is impossible.
Hence, from $\widetilde{H}_2 F_2= \widetilde{H}_3 F_3$ it follows that $\widetilde{H}_2 \in (F_3)$, whence $\widetilde{G}\in (F_2 F_3)$. Therefore, the image of $\widetilde{G}$ in the factor-algebra is $\widetilde{g}\in (f_2 f_3)$. Note also that $\widetilde{g}=g$, because $\widetilde{G}$ is obtained from $G$ by the reduction using $F$. This concludes the proof of the relative primeness of the elements $f_2$ and $f_3$.
It follows from Theorem [\[ABC\]](#ABC){reference-type="ref" reference="ABC"} that for every derivation $D\in \text{LND}(B)$ we have $$xy+z,\ y^6 +z^6 +w^6+ v,\ y^6 +z^6 +w^6-v\in \text{Ker}\ D.$$ Taking the difference and the sum of the last two elements, we obtain $v,\ y^6 +z^6 +w^6\in \text{Ker}\ D$. By Theorem [Theorem 10](#mysumneq0){reference-type="ref" reference="mysumneq0"} we have $y,z,w\in \text{Ker}\ D$. Since $xy=(xy+z)-z\in \text{Ker}\ D$ and the kernel of a locally nilpotent derivation is factorially closed, we get $x\in \text{Ker}\ D$. All generators of the algebra $B$ are in the kernel of each locally nilpotent derivation on $B$. It follows that the algebra $B$ is rigid.
## Construction of semi-rigid algebras
Some examples of semi-rigid algebras can be obtained by the following theorem, see [@fr Theorem 2.24] and [@MLLND].
**Theorem 19** (Semi-Rigidity Theorem, L. Makar-Limanov). *If $A$ is a rigid commutative $\mathbb{K}$-domain of finite transcendence degree over $\mathbb{K}$, then $A^{[1]}$ is semi-rigid, where $A^{[1]}$ denotes a polynomial ring in one variable over $A$.*
**Example 20**. Consider the algebra $$B=\mathbb{K}[X,Y,Z,V,W]/((X-Y)^4 + V^4 W^5 +Z^4).$$ Let us change variables by $\widetilde{X}=X-Y,\ \widetilde{Y}=X+Y$. Then $$B=\mathbb{K}[\widetilde{X},\widetilde{Y},Z,V,W]/(\widetilde{X}^4+V^4 W^5 + Z^4)=\left( \mathbb{K}[\widetilde{X},Z,V,W]/(\widetilde{X}^4+V^4 W^5 + Z^4) \right)[\widetilde{Y}].$$ By Corollary [Corollary 12](#RIGIDmysumeq0){reference-type="ref" reference="RIGIDmysumeq0"}, the algebra $$\mathbb{K}[\widetilde{X},Z,V,W]/(\widetilde{X}^4+V^4 W^5 + Z^4)$$ is rigid. Hence, by Theorem [Theorem 19](#semirid){reference-type="ref" reference="semirid"}, the algebra $B$ is semi-rigid.
Also, some semi-rigid algebras can be obtained by Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"}.
**Example 21**. Consider the polynomials $$F_1:=XW-YV,\ F_2:=V^4 W^5,\ F_3:= Z,\ F:=F_1^4 + F_2 +F_3^4 \in \mathbb{K}[X,Y,Z,V,W]$$ and the factor-algebra $$B=\mathbb{K}[X,Y,Z,V,W]/(F).$$ Let $f_1,f_2$ and $f_3$ be the images of the polynomials $F_1,F_2$ and $F_3$ in the factor-algebra $B$. Let us show that the algebra $B$ is semi-rigid. To prove the pairwise relative primeness of $f_1^4,f_2$ and $f_3^4$ it is sufficient to prove that $f_1,f_2$ and $f_3$ are pairwise relatively prime.
Let us prove that $f_1$ and $f_2$ are relatively prime. Let $g\in (f_1)\cap (f_2)$. Then every preimage $G$ of $g$ in the polynomial ring has the form $$G=H_1 F_1 + P_1 F = H_2 F_2 + P_2 F$$ for some polynomials $H_1,H_2,P_1$ and $P_2$. We reduce this equation by $F$, assuming that $Z$ is the maximal variable in the lexicographic order. We denote the images of polynomials under this reduction by a tilde. We obtain $$\widetilde{G}=\widetilde{H}_1 F_1=\widetilde{H}_2 F_2.$$ The polynomials $F_1$ and $F_2$ are relatively prime, therefore $\widetilde{G} \in (F_1 F_2)$. The image of $\widetilde{G}$ in the factor-algebra $B$ belongs to the ideal $(f_1 f_2)$ and coincides with $g$. Hence $f_1$ and $f_2$ are relatively prime. Similarly, the relative primeness of $f_2$ and $f_3$ can be proved by a reduction according to a lexicographic order with $X$ as the maximal variable.
To prove that $f_1$ and $f_3$ are relatively prime take an element $g\in (f_1)\cap (f_3)$ and its preimage $G$ in the polynomial ring: $$G=H_1 F_1 + P_1 F = H_3 F_3 + P_3 F$$ for some polynomials $H_1,H_3,P_1,P_3$. We have $$H_1 F_1 +(P_1 - P_3) F = H_3 F_3.$$ We consider the residues modulo $F_3 = Z$ of all parts of this equation. The residues of $H_1$ and $P_1-P_3$ we denote by $\widetilde{H}_1$ and $\widetilde{P}$. We obtain $$\widetilde{H}_1 F_1 + \widetilde{P} (F_1^4 + F_2) =0,$$ whence $F_1\ |\ \widetilde{P}$. Therefore, for some polynomials $\widehat{P}_1,\widehat{P}_3$ we get $$P_1-P_3 = \widehat{P}_1 F_1 + \widehat{P}_3 F_3.$$ Hence, $$G-P_3 F= H_1 F_1 + (\widehat{P}_1 F_1 + \widehat{P}_3 F_3)F = H_3 F_3$$ and $H_1+\widehat{P}_1 F \in (F_3)$. We get $G-P_3 F - \widehat{P}_3 F_3 F\in (F_1 F_3)$ and $g\in(f_1 f_3)$, which concludes the proof of the relative primeness of $f_1$ and $f_3$.
We denote by $x,y,z,v$ and $w$ the images of $X,Y,Z,V$ and $W$ in the factor-algebra $B$. By Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"} for every $D\in \text{LND}(B)$ we have $xw-yv,v,w,z\in \text{Ker}\ D$. Therefore, we obtain $$\mathbb{K}[xw-yv,v,w,z]\subseteq \mathrm{ML}(B).$$ There exists a derivation $\widetilde{D}$ with kernel $\mathbb{K}[xw-yv,v,w,z]$: $$\widetilde{D}(x)=v,\ \widetilde{D}(y)=w,\ \widetilde{D}(v)=\widetilde{D}(w)=\widetilde{D}(z)=0.$$ Hence, $\text{Ker}\ \widetilde{D}=\mathrm{ML}(B)$, and the algebra $B$ is semi-rigid by definition.
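For completeness, let us verify that $\widetilde{D}$ is indeed well defined on $B$; the computation of its kernel is taken as asserted above. Define a derivation $D'$ of the polynomial ring $\mathbb{K}[X,Y,Z,V,W]$ by $D'(X)=V$, $D'(Y)=W$ and $D'(Z)=D'(V)=D'(W)=0$. Then $$D'(XW-YV)=VW-WV=0 \quad\text{and}\quad D'(F)=4F_1^3\,D'(F_1)+D'(V^4W^5)+4Z^3\,D'(Z)=0,$$ so $D'$ maps the ideal $(F)$ into itself and hence induces the derivation $\widetilde{D}$ on $B$; it is locally nilpotent since $\widetilde{D}^2(x)=\widetilde{D}^2(y)=0$ and the remaining generators lie in its kernel.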
## Stable Makar-Limanov invariant
The above results can be reformulated in terms of the Makar-Limanov invariant. Suppose $m,n_1,\dots,n_m$ are positive integers, where $m\geq 3$, $a_{i}\in \mathbb{K}$ are nonzero elements, $k_{ij}$ are positive integers and $b_{ij}\in B$, where $i=1,\dots,m$ and $j=1,\dots,n_i$.
We denote $$f_i:=a_i b_{i1}^{k_{i1}}\dots b_{in_i}^{k_{in_i}} \in B,$$ where $i=1,\dots,m.$
By Theorem [Theorem 8](#mysumeq0){reference-type="ref" reference="mysumeq0"} we have
**Corollary 22**. *Assume that $f_1+\dots+f_m=0$, and elements $f_1,\dots,f_m$ are pairwise relatively prime. Suppose that $$\sum_{i=1}^{m} \sum_{j=1}^{n_i} \frac{1}{k_{ij}}\leq \frac{1}{m-2}.$$*
*Then $\mathbb{K}[b_{11},\dots,b_{mn_m}] \subseteq \mathrm{ML}(B)$.*
Let $X$ be a rigid affine variety. Theorem [Theorem 19](#semirid){reference-type="ref" reference="semirid"} states that $\mathrm{ML}(X\times \mathbb{K})=\mathbb{K}[X]$. It follows easily that $\mathrm{ML}(X\times \mathbb{K}^n)\subseteq \mathbb{K}[X]$. An example of a rigid surface $X$ such that ${\mathrm{ML}(X\times \mathbb{K}^2)\neq \mathbb{K}[X]}$ is given in [@D].
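One way to see the inclusion $\mathrm{ML}(X\times \mathbb{K}^n)\subseteq \mathbb{K}[X]$ (a sketch, with $y_1,\dots,y_n$ denoting the coordinates on the $\mathbb{K}^n$-factor): each partial derivative $\partial/\partial y_i$ is a locally nilpotent derivation of $\mathbb{K}[X\times\mathbb{K}^n]=\mathbb{K}[X][y_1,\dots,y_n]$, and therefore $$\mathrm{ML}(X\times \mathbb{K}^n)\subseteq \bigcap_{i=1}^{n}\mathrm{Ker}\,\frac{\partial}{\partial y_i}=\bigcap_{i=1}^{n}\mathbb{K}[X][y_j\mathrel{|}\allowbreak j\neq i]=\mathbb{K}[X].$$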
Consider the chain of subalgebras $$\mathrm{ML}(X)\supseteq \mathrm{ML}(X\times \mathbb{K})\supseteq \mathrm{ML}(X\times \mathbb{K}^2)\supseteq \dots .\eqno{(5)}$$ Define the *stable Makar-Limanov invariant* as the subalgebra equal to the intersection of the elements of (5). We denote it by $\mathrm{SML}(X)$. This term was suggested by Sergey Gaifullin.
*Remark 23*. Since $\mathrm{ML}(X\times\mathbb{K}^n)$ is an algebraically closed subalgebra of the algebra ${\mathbb{K}[X\times\mathbb{K}^n]}$, the chain (5) stabilizes after finitely many steps, i.e., there is a positive integer $r$ such that $\mathrm{ML}(X\times\mathbb{K}^r)=\mathrm{SML}(X)$.
Let us remark that the elements shown to belong to $\mathrm{ML}(X)$ by Corollary [Corollary 22](#MLmysumeq0){reference-type="ref" reference="MLmysumeq0"} are in fact contained in $\mathrm{SML}(X)$.
**Example 24**. Let $X=\{X_1^6X_2^7+Y_1^8Y_2^9+Z_1^{10}Z_2^{11}=0\}$, see Example [Example 15](#trinom){reference-type="ref" reference="trinom"}. By Corollary [Corollary 22](#MLmysumeq0){reference-type="ref" reference="MLmysumeq0"} it follows that $$\mathrm{SML}(X)=\mathbb{K}[X].$$
*Arzhantsev I. V.* On rigidity of factorial trinomial hypersurfaces // Int. J. Algebra Comput. 2016. V. 26, N 5. P. 1061--1070.
*Arzhantsev I. V.* Polynomial curves on trinomial hypersurfaces // Acta Arith. 2018. V. 186, N 1. P. 87--99.
*de Bondt M.* Homogeneous Keller maps. Ph.D. thesis, University of Nijmegen, 2009.
*Chitayat M., Daigle D.* On the rigidity of certain Pham--Brieskorn rings // J. Algebra. 2020. V. 550. P. 290--308.
*Crachiola A. J., Maubach S.* Rigid rings and Makar-Limanov techniques // Comm. Algebra. 2013. V. 41, N 11. P. 4248--4266.
*Dubouloz A.* Rigid affine surfaces with isomorphic $\mathbb{A}^2$-cylinders // Kyoto J. Math. 2019. V. 59, N 1. P. 182--193.
*Finston D., Maubach S.* The automorphism group of certain factorial threefolds and a cancellation problem // Isr. J. Math. 2008. V. 163. P. 369--381.
*Finston D., Maubach S.* Constructing (almost) rigid rings and a UFD having infinitely generated Derksen and Makar-Limanov invariants // Canad. Math. Bull. 2010. V. 53, N 1. P. 77--86.
*Freudenburg G.* Algebraic theory of locally nilpotent derivations. Encyclopaedia of Mathematical Sciences 136. Berlin: Springer, 2017.
*Gaifullin S. A.* On rigidity of trinomial hypersurfaces and factorial trinomial varieties. 2019. 20 p. arXiv:1902.06136.
*Gundersen G. G., Hayman W. K.* The strength of Cartan's version of Nevanlinna theory // Bull. Lond. Math. Soc. 2004. V. 36. P. 433--454.
*Hausen J., Herppich E.* Factorially graded rings of complexity one // Torsors, étale homotopy and applications to rational points. Cambridge: Camb. Univ. Press, 2013. V. 405. P. 414--428.
*Lang S.* Old and new conjectured Diophantine inequalities // Bull. Amer. Math. Soc. 1990. V. 23. P. 37--75.
*Lang S.* Algebra. Graduate texts in mathematics 211. New York: Springer-Verlag, 2002.
*Makar-Limanov L. G.* On the hypersurface $x+x^2 y+z^2 +t^3=0$ in $\mathbb{K}^4$ or a $\mathbb{K}^3$-like threefold which is not $\mathbb{K}^3$ // Israel J. Math. 1996. V. 96. P. 419--429.
*Makar-Limanov L. G.* Locally nilpotent derivations, a new ring invariant and applications. Bar-Ilan University, 1998. (Lecture notes).
*Stothers W. W.* Polynomial identities and hauptmoduln // The Q. J. Math. 1981. V. 32, N 3. P. 349--370.
---
abstract: |
The property of countable metacompactness of a topological space gets its importance from Dowker's 1951 theorem that the product of a normal space $X$ with the unit interval $[0,1]$ is again normal iff $X$ is countably metacompact. In a recent paper, Leiderman and Szeptycki studied $\Delta$-spaces, which are a subclass of the class of countably metacompact spaces. They proved that a single Cohen real introduces a ladder system $\vec L$ over the first uncountable cardinal for which the corresponding space $X_{\vec L}$ is not a $\Delta$-space, and asked whether there is a ZFC example of a ladder system $\vec L$ over some cardinal $\kappa$ for which $X_{\vec L}$ is not countably metacompact, in particular, not a $\Delta$-space. We prove that an affirmative answer holds for the cardinal $\kappa=\mathop{\mathrm{cf}}(\beth_{\omega+1})$.
address:
- Department of Mathematics, Bar-Ilan University, Ramat-Gan 5290002, Israel.
- Department of Mathematics, Ben-Gurion University of the Negev, P.O.B. 653, Be'er Sheva, 84105 Israel
- Department of Mathematics, Bar-Ilan University, Ramat-Gan 5290002, Israel.
author:
- Rodrigo Carvalho
- Tanmay Inamdar
- Assaf Rinot
date: "Preprint as of September 23, 2023. For the latest version, visit [http://p.assafrinot.com/63]{.sans-serif}."
title: Ladder systems and countably metacompact topological spaces
---
# Introduction
Throughout, $\kappa$ denotes a regular uncountable cardinal. A *ladder system* over a stationary subset $S$ of $\kappa$ is a sequence ${\vec L}=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ such that each $A_\delta$ is a cofinal subset of $\delta$. It is *$\xi$-bounded* iff $\mathop{\mathrm{otp}}(A_\delta)\le\xi$ for all $\delta\in S$. The corresponding topological space $X_{\vec L}$ has underlying set $(\kappa\times\{0\})\cup(S\times\{1\})$ with all points in $(\kappa\times\{0\})$ being isolated and, for every $\delta\in S$, the neighborhoods of $(\delta,1)$ consisting of sets of the form $(A\times\{0\})\cup\{(\delta,1)\}$ for some $A$ an end segment of $A_\delta$.
A topological space $X$ is a *$\Delta$-space* (resp. *countably metacompact*) iff for every decreasing sequence $\langle D_n\mathrel{|}\allowbreak n<\omega\rangle$ of subsets of $X$ (resp. closed subsets of $X$) with empty intersection, there is a decreasing sequence $\langle U_n\mathrel{|}\allowbreak n<\omega\rangle$ of open subsets of $X$ with empty intersection such that $D_n\subseteq U_n$ for all $n<\omega$.
It is well-known that the product of two normal topological spaces need not be normal, but what about the product of a normal space $X$ and the unit interval $[0,1]$? It is a classical theorem of Dowker [@dowker] that the product $X\times[0,1]$ is again normal iff $X$ is countably metacompact, hence the importance of this notion. The notion of a $\Delta$-space is due to Knight [@MR1196219].
In a recent paper by Leiderman and Szeptycki [@leiderman2023deltaspaces], a systematic study of $\Delta$-spaces is carried out, motivated by the $C_p$-theory of such spaces (see [@MR4214339 Theorem 2.1]). Section 5 of [@leiderman2023deltaspaces] is dedicated to the study of spaces of the form $X_{\vec L}$. It is proved that in $\textsf{\textup{ZFC}}$ there is an $\omega$-bounded ladder system ${\vec L}$ over $\aleph_1$ for which $X_{\vec L}$ is countably metacompact, that under Martin's axiom all $\omega$-bounded ladder systems ${\vec L}$ over $\aleph_1$ satisfy that $X_{\vec L}$ is countably metacompact, and that in the forcing extension after adding a single Cohen real, there exists an $\omega$-bounded ladder system ${\vec L}$ over $\aleph_1$ for which the space $X_{\vec L}$ is not a $\Delta$-space. At the end of that section, Problem 5.11 asks whether there is a $\textsf{\textup{ZFC}}$ example of a ladder system ${\vec L}$ over some cardinal $\kappa$ whose corresponding space $X_{\vec L}$ is not countably metacompact, hence not a $\Delta$-space. We answer this question in the affirmative, as follows.
**Theorem 1**. *For $\kappa:=\mathop{\mathrm{cf}}(\beth_{\omega+1})$ there are co-boundedly many regular cardinals $\mu<\beth_\omega$ such that $E^\kappa_\mu:=\{\delta<\kappa\mathrel{|}\allowbreak\mathop{\mathrm{cf}}(\delta)=\mu\}$ carries a $\mu$-bounded ladder system ${\vec L}$ such that $X_{\vec L}$ is not countably metacompact.*
Our result fits into a well-known program of obtaining analogues in $\textsf{\textup{ZFC}}$ of statements which are undecidable at small cardinals. Oftentimes, the price is that these results concern higher cardinals, which suggests the fruitfulness of an asymptotic viewpoint on statements in infinite combinatorics. So, here the Leiderman-Szeptycki consistency result for $\kappa=\aleph_1$ is obtained in $\textsf{\textup{ZFC}}$ at $\kappa=\mathop{\mathrm{cf}}(2^\lambda)$, where $\lambda:=\sup\{2^{\aleph_0},2^{2^{\aleph_0}},2^{2^{2^{\aleph_0}}},\ldots\}$. The proof builds heavily on Shelah's contributions to this program, where he previously showed that refined forms of Jensen's results for Gödel's constructible universe [@MR0309729] hold asymptotically in any universe of set theory. This includes refined forms of the $\textsf{\textup{GCH}}$ [@Sh:460], of the square principle [@Sh:420] and of the diamond principle [@Sh:775].
Ultimately, our proof of Theorem [Theorem 1](#thma){reference-type="ref" reference="thma"} goes through a diamond-type principle on ladder systems studied by Shelah under various names ([@Sh:775; @Sh:829]): middle diamond, super black box, $\mathop{\mathrm{Ps}}_1$. We opt for the following nomenclature.
**Definition 1**. For a ladder system $\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ over some stationary $S\subseteq\kappa$ and a cardinal $\theta$, $\diamondsuit(\vec L,\theta)$ asserts the existence of a sequence $\langle f_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ such that:
- for every $\delta\in S$, $f_\delta$ is a function from $A_\delta$ to $\theta$;
- for every function $f:\kappa\rightarrow\theta$, there are stationarily many $\delta\in S$ such that $f\mathbin\upharpoonright A_\delta=f_\delta$.
Note that Jensen's diamond principle $\diamondsuit(S)$ is simply $\diamondsuit(\vec L,2)$ for the degenerate ladder system $\vec L=\langle \delta\mathrel{|}\allowbreak\delta\in S\rangle$.
In [@Sh:775], Shelah proved that $\diamondsuit(\vec L,\mu)$ holds in $\textsf{\textup{ZFC}}$ for various ladder systems $\vec L$ and cardinals $\mu$. Here, we shall reproduce a proof of the following special case.
**Theorem 2** (Shelah). *Suppose that $\Lambda\le\lambda$ is a pair of uncountable cardinals such that $\Lambda$ is a strong limit. Denote $\kappa := \mathop{\mathrm{cf}}(2^\lambda)$. Then, for co-boundedly many regular cardinals $\mu<\Lambda$, there exists a $\mu$-bounded ladder system $\vec C=\langle C_\delta\mathrel{|}\allowbreak\delta\in E^\kappa_\mu\rangle$ with each $C_\delta$ a club in $\delta$ such that $\diamondsuit(\vec C, \mu)$ holds.[^1]*
Deriving Theorem [Theorem 1](#thma){reference-type="ref" reference="thma"} from Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"} is easy, but all the details will be given. At the end of the paper, we shall also give two simple sufficient conditions (beyond $\textsf{\textup{ZFC}}$) for the existence of ladder systems of interest.
**Theorem 3**. *If $\aleph_\omega$ is a strong limit, then for every $n<\omega$, there are infinitely many $m<\omega$ such that $E^{\aleph_{m+1}}_{\aleph_{n+1}}$ carries an $\aleph_{n+1}$-bounded ladder system $\vec L$ for which $X_{\vec L}$ is not countably metacompact.*
**Theorem 4**. *If there exists a $\kappa$-Souslin tree, then there exists a ladder system $\vec L$ over some stationary subset of $\kappa$ for which $X_{\vec L}$ is not countably metacompact.*
## Notation and conventions {#conventions}
$\mathop{\mathrm{Reg}}(\kappa)$ stands for the set of all infinite regular cardinals below $\kappa$. For a set $X$, we write $[X]^{\kappa}$ for the collection of all subsets of $X$ of size $\kappa$. The collections $[X]^{\le\kappa}$ and $[X]^{<\kappa}$ are defined similarly. For a set of ordinals $A$, we write $\mathop{\mathrm{acc}}(A) := \{\alpha\in A \mathrel{|}\allowbreak\sup(A \cap \alpha) = \alpha > 0\}$ and $\mathop{\mathrm{nacc}}(A) := A \setminus \mathop{\mathrm{acc}}(A)$. For cardinals $\theta$ and $\mu$, $\theta^{+\mu}$ denotes the $\mu^{\text{th}}$ cardinal after $\theta$: so if $\theta = \aleph_\alpha$, then $\theta^{+\mu}= \aleph_{\alpha+\mu}$. The map $\alpha\mapsto\beth_\alpha$ is defined by recursion on the class of ordinals, setting $\beth_0:=\aleph_0$, $\beth_{\alpha+1}:=2^{\beth_\alpha}$ and $\beth_\alpha:=\bigcup_{\beta<\alpha}\beth_\beta$ for every infinite limit ordinal $\alpha$.
# Diamonds on ladder systems {#sec2}
In this section, we reproduce some results from Shelah's [@Sh:775] with the goal of proving Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"}, aiming for a simpler exposition. We start by considering two generalisations of $\diamondsuit(\vec L,\theta)$ and a generalisation of the Devlin-Shelah weak diamond principle $\Phi$ [@MR469756].
**Definition 1**. Suppose that $\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ is a ladder system over some stationary $S\subseteq\kappa$, and that $\mu,\theta$ are cardinals greater than $1$.
- $\diamondsuit^*(\vec L,\mu,\theta)$ asserts the existence of a sequence $\langle\mathcal P_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ such that:
- for every $\delta\in S$, $|\mathcal P_\delta|<\mu$;
- for every function $f:\kappa\rightarrow\theta$, there are club many $\delta\in S$ such that $f\mathbin\upharpoonright A_\delta\in\mathcal P_\delta$.
- $\diamondsuit(\vec L,\mu,\theta)$ asserts the existence of a sequence $\langle\mathcal P_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ such that:
- for every $\delta\in S$, $|\mathcal P_\delta|<\mu$;
- for every function $f:\kappa\rightarrow\theta$, there are stationarily many $\delta\in S$ such that $f\mathbin\upharpoonright A_\delta\in\mathcal P_\delta$.
- $\Phi(\vec L,\mu,\theta)$ asserts that for every function $F:(\bigcup_{\delta\in S}{}^{A_\delta}\mu)\rightarrow\theta$, there exists a function $g:S\rightarrow\theta$ such that, for every function $f:\kappa\rightarrow\mu$, there are stationarily many $\delta\in S$ such that $F(f\mathbin\upharpoonright A_\delta)= g(\delta)$.
We encourage the reader to determine the monotonicity properties of the above principles; another easy exercise is to verify that for every ladder system $\vec L$ over a subset of $\kappa$ and every cardinal $\mu$, $\diamondsuit^*(\vec L, \mu^\kappa, \mu)$ holds.
Following on from a previous remark, for $\kappa$ a successor cardinal, the principle $\diamondsuit^*(S)$ is simply $\diamondsuit^*(\vec L,\kappa,2)$ for the degenerate ladder system $\vec L=\langle \delta\mathrel{|}\allowbreak\delta\in S\rangle$. Also note that $\diamondsuit^*(\vec L,\mu,\theta)\implies \diamondsuit(\vec L,\mu,\theta)$ and $\diamondsuit(\vec L,\theta)\iff\diamondsuit(\vec L,2,\theta)$. Less immediate from these two observations, but clear after expanding the definitions is that $\diamondsuit(\vec L,\mu)\implies\Phi(\vec L,\mu,\theta)$ for any cardinal $\theta$. This implication admits a converse, as follows.
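For instance, one way to expand the definitions for the last implication: given a sequence $\langle f_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ witnessing $\diamondsuit(\vec L,\mu)$ and a function $F:(\bigcup_{\delta\in S}{}^{A_\delta}\mu)\rightarrow\theta$, define $g:S\rightarrow\theta$ via $g(\delta):=F(f_\delta)$. Then, for every function $f:\kappa\rightarrow\mu$, there are stationarily many $\delta\in S$ with $f\mathbin\upharpoonright A_\delta=f_\delta$, and for any such $\delta$, $$F(f\mathbin\upharpoonright A_\delta)=F(f_\delta)=g(\delta).$$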
**Lemma 2**. *Suppose that $\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ is a $\xi$-bounded ladder system over some stationary $S\subseteq\kappa$. If $\Phi(\vec L,\mu,\mu^{|\xi|})$ holds, then so does $\diamondsuit( \vec L, \mu)$.*
*Proof.* Denote $\theta:=\mu^{|\xi|}$. Let $\vec h = \langle h_\tau \mathrel{|}\allowbreak\tau< \theta\rangle$ be some enumeration of ${}^\xi\mu$. For every $\delta \in S$, fix an injection $\psi_\delta: A_\delta\rightarrow\xi$. Fix a function $F:(\bigcup_{\delta \in S}{}^{A_\delta}\mu)\rightarrow \theta$ such that for all $\delta \in S$ and $\bar f: A_\delta \rightarrow \mu$, $$(F(\bar f) = \tau)\implies(h_\tau\circ\psi_\delta=\bar f).$$ Now, assuming that $\Phi(\vec L, \mu, \theta)$ holds, we may fix a function $g: S \rightarrow \theta$ such that for every function $f:\kappa\rightarrow\mu$, the set $\{\delta\in S\mathrel{|}\allowbreak F(f\mathbin\upharpoonright A_\delta)= g(\delta)\}$ is stationary in $\kappa$.
For every $\delta\in S$, let $f_\delta:=h_{g(\delta)}\circ \psi_\delta$. We claim that $\langle f_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ witnesses that $\diamondsuit(\vec L,\mu)$ holds. Indeed, given $f: \kappa \rightarrow \mu$, consider the stationary set $S':=\{\delta\in S\mathrel{|}\allowbreak F(f\mathbin\upharpoonright A_\delta)= g(\delta)\}$. For every $\delta\in S'$, it is the case that $$f_\delta=h_{F(f\mathbin\upharpoonright A_\delta)}\circ \psi_\delta=f\mathbin\upharpoonright A_\delta,$$ as sought. ◻
We can at this stage describe the structure of the proof of Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"} given below. To start, in Lemma [Lemma 6](#countinglemma){reference-type="ref" reference="countinglemma"} we establish an instance of the principle $\diamondsuit^*(\vec L, \ldots)$. The caveat here is that the second parameter, the width of the diamond sequence, will be rather large. Using this very wide diamond and an instance of a colouring principle which is established in Lemma [Lemma 5](#lemma114){reference-type="ref" reference="lemma114"}, we will then in Lemma [Lemma 7](#lemma115){reference-type="ref" reference="lemma115"} derive an instance of the principle $\Phi(\vec L, \mu, \theta)$. Crucially for us here, the parameter $\theta$, the number of colours, will be large. Finally, we will use Lemma [Lemma 2](#l210){reference-type="ref" reference="l210"} to obtain a narrow diamond sequence on the ladder system. The details are in Corollary [Corollary 9](#cor28){reference-type="ref" reference="cor28"}.
The upcoming proof of Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"} will make multiple uses of Shelah's revised GCH theorem [@Sh:460] that was briefly mentioned in the paper's introduction. To state it, we shall need the following definition.
**Definition 3**. For cardinals $\theta\le\lambda$:
- $\lambda^{[\theta]}$ stands for the least size of a subfamily $\mathcal A\subseteq[\lambda]^{\le\theta}$ satisfying that every element of $[\lambda]^\theta$ is the union of less than $\theta$ many sets from $\mathcal A$;
- $m(\lambda,\theta)$ stands for the least size of a subfamily $\mathcal A\subseteq[\lambda]^{\theta}$ satisfying that for every $b\in [\lambda]^\theta$, there is an $a\in\mathcal A$ with $|a\cap b|=\theta$.
Note that for $\theta$ a regular cardinal, $m(\lambda,\theta)\le\lambda^{[\theta]}$.
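Here is a quick sketch of the reason: if $\mathcal A\subseteq[\lambda]^{\le\theta}$ witnesses the value of $\lambda^{[\theta]}$, then $\mathcal A\cap[\lambda]^{\theta}$ witnesses $m(\lambda,\theta)\le|\mathcal A|$. Indeed, given $b\in[\lambda]^{\theta}$, write $b=\bigcup_{i<\nu}a_i$ with $\nu<\theta$ and each $a_i\in\mathcal A$; as $\theta$ is regular, some $a_i$ has size $\theta$, and then $a_i\subseteq b$ gives $|a_i\cap b|=\theta$.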
**Fact 4** (Shelah's RGCH, [@Sh:460]). *For every pair $\Lambda\le\lambda$ of uncountable cardinals such that $\Lambda$ is a strong limit, for co-boundedly many $\theta\in\mathop{\mathrm{Reg}}(\Lambda)$, $\lambda^{[\theta]}=\lambda$.*
As a warm up, we prove the following lemma that may be extracted from the proof of [@Sh:775 Claim 1.11]. It concerns the principle $\mathop{\mathrm{{\sf onto}}}(\ldots)$ from [@paper47].
**Lemma 5**. *Suppose that $\theta,\lambda$ are infinite cardinals such that $2^\theta\le m(\lambda,\theta)=\lambda$.*
*Then $\mathop{\mathrm{{\sf onto}}}(\{\lambda\},[2^\lambda]^{\le\lambda},\theta)$ holds, i.e., there is a colouring $c:\lambda\times 2^\lambda\rightarrow\theta$ such that, for every $B\in[2^\lambda]^{\lambda^+}$,[^2] there exists an $\alpha<\lambda$ such that $c[\{\alpha\}\times B]=\theta$.*
*Proof.* Let $\mathcal A$ be a witness for $m(\lambda,\theta)=\lambda$. The next claim is of independent interest. It may be proved using elementary submodels, but we give a more elementary proof, avoiding elementary submodels, due to Ido Feldman.
**Claim 1**. *For every $\mathcal H\subseteq{}^\lambda2$ of size $\lambda^+$, there exists an $a\in\mathcal A$ such that the set $\{h \mathbin\upharpoonright a \mathrel{|}\allowbreak h \in \mathcal H\}$ has size at least $\theta$.*
For two distinct functions $g,h\in {}^\lambda2$, denote $$\Delta(g,h):=\min\{\delta<\lambda\mathrel{|}\allowbreak g(\delta)\neq h(\delta)\}.$$ Now, let a family $\mathcal H\subseteq{}^\lambda2$ of size $\lambda^+$ be given.
$\blacktriangleright$ If there exists a function $g:\lambda\rightarrow2$ such that $D(g):=\{ \Delta(g,h)\mathrel{|}\allowbreak h\in\mathcal H\setminus\{g\}\}$ has size greater than or equal to $\theta$, then pick $a\in\mathcal A$ such that $|a\cap D(g)|=\theta$, and for each $\delta\in a\cap D(g)$, pick some $h_\delta\in\mathcal H$ such that $\Delta(g,h_\delta)=\delta$. Then $\delta\mapsto h_\delta\mathbin\upharpoonright a$ is injective over $a\cap D(g)$.
$\blacktriangleright$ Otherwise, for every $g:\lambda\rightarrow2$, let $\pi_g:\mathop{\mathrm{otp}}(D(g))\rightarrow D(g)$ be the increasing enumeration of $D(g)$, so that $\bar g:=g\circ \pi_g$ is an element of ${}^{<\theta}2$. As $2^{<\theta}\le2^\theta\le\lambda$, we may find $g\neq h$ in $\mathcal H$ such that $\bar g=\bar h$. Consider $\delta:=\Delta(g,h)$. Then $\delta\in D(g)\cap D(h)$. In addition, since $g\mathbin\upharpoonright\delta=h\mathbin\upharpoonright\delta$, $D(g)\cap\delta=D(h)\cap\delta$. In particular, for $\xi:=\mathop{\mathrm{otp}}(D(g)\cap\delta)$, we get that $\pi_g(\xi)=\delta=\pi_h(\xi)$ and hence $g(\delta)=\bar g(\xi)=\bar h(\xi)=h(\delta)$, contradicting the definition of $\delta$. So this case is impossible, and the claim follows from the first case.
For each $a \in\mathcal A$, let $\mathcal G^a$ be the collection of all functions $g:{}^a\theta \rightarrow \theta$ such that $$|\{ f\in{}^a\theta \mathrel{|}\allowbreak g(f)\neq 0\}| \leq \theta.$$ Clearly, $|\mathcal G^a|=2^\theta\le\lambda$. For each $g\in\mathcal G^a$, we lift $g$ to a function $\hat g:{}^\lambda\theta\rightarrow\theta$ by letting $$\hat g(h):= g(h\mathbin\upharpoonright a).$$
Now, let $\langle g_\alpha \mathrel{|}\allowbreak\alpha< \lambda\rangle$ be an injective enumeration of $\{\hat g\mathrel{|}\allowbreak a \in \mathcal A,\, g \in\mathcal G^a\}$, and let $\langle h_\beta\mathrel{|}\allowbreak\beta<2^\lambda\rangle$ be an injective enumeration of ${}^\lambda2$. Define a colouring $c:\lambda\times 2^\lambda\rightarrow\theta$ via $c(\alpha,\beta):=g_\alpha(h_\beta)$. To see that $c$ is as sought, let $B\in[2^\lambda]^{\lambda^+}$. By Claim [Claim 1](#invclaim){reference-type="ref" reference="invclaim"}, pick $a\in\mathcal A$ such that $\{ h_\beta\mathbin\upharpoonright a\mathrel{|}\allowbreak\beta\in B\}$ has size at least $\theta$. Pick $B'\subseteq B$ of ordertype $\theta$ on which $\beta\mapsto h_\beta\mathbin\upharpoonright a$ is injective. It follows that we may define a function $g:{}^a\theta\rightarrow\theta$ in $\mathcal G^a$ via $$g(f):=\begin{cases}\mathop{\mathrm{otp}}(B'\cap\beta),&\text{if }\beta \in B' \text{ and }f= h_\beta\mathbin\upharpoonright a;\\
0,&\text{otherwise}.\end{cases}$$
Pick $\alpha<\lambda$ such that $\hat g=g_\alpha$. Then, $c[\{\alpha\}\times B']=\theta$. ◻
Our next step is proving the following lemma that is easily extracted from the beginning of the proof of [@Sh:775 Claim 1.10].
**Lemma 6**. *Suppose that $\Lambda\le\lambda$ is a pair of uncountable cardinals such that $\Lambda$ is a strong limit. Denote $\kappa := \mathop{\mathrm{cf}}(2^\lambda)$. Then, for co-boundedly many $\mu \in \mathop{\mathrm{Reg}}(\Lambda)$, there is a $\mu$-bounded $C$-sequence $\vec C$ over some stationary $S\subseteq E^\kappa_\mu$ such that $\diamondsuit^*(\vec C, 2^\lambda, 2^\lambda)$ holds.*
*Proof.* We start with the following claim which guides our choice of $\mu$.
**Claim 2**. *There is a co-bounded set of $\mu \in \mathop{\mathrm{Reg}}(\Lambda)$ such that for every cardinal $\varkappa < 2^\lambda$, $\varkappa^{[\mu]} < 2^\lambda$.*
By Fact [Fact 4](#rgch){reference-type="ref" reference="rgch"}, for every $\varkappa\in[\Lambda,2^\lambda)$, there is a cardinal $\epsilon_\varkappa < \Lambda$ such that for every $\mu\in\mathop{\mathrm{Reg}}(\Lambda)\setminus \epsilon_\varkappa$, $\varkappa^{[\mu]}=\varkappa$. Now, as $\mathop{\mathrm{cf}}(2^\lambda)>\lambda\ge\Lambda$, it follows that there is $\Gamma$ an unbounded subset of cardinals in $2^\lambda$ and a cardinal $\epsilon< \Lambda$ such that for every $\varkappa \in \Gamma$, $\epsilon_\varkappa < \epsilon$. In particular, for every $\mu \in \mathop{\mathrm{Reg}}(\Lambda) \setminus \epsilon$, for every $\varkappa \in \Gamma$, $\varkappa^{[\mu]} = \varkappa < 2^\lambda$. This, combined with the observation that for any cardinals $\varkappa_0< \varkappa_1$ and cardinal $\mu$, $\varkappa_0^{[\mu]} \leq \varkappa_1^{[\mu]}$, verifies the claim.
Let $\mu$ be any cardinal in the co-bounded subset of $\mathop{\mathrm{Reg}}(\Lambda)$ given by the claim. Hereafter, all we shall need to assume about $\mu$ is that it is a regular cardinal smaller than $\lambda$ and $m(\varkappa,\mu)<2^\lambda$ for all $\varkappa<2^\lambda$. As $\mu^+\le\lambda<\kappa$, by [@Sh:420 Claim 1.2 and Lemma 1.4], there exists a stationary $S\subseteq E^\kappa_\mu$ that lies in $I[\kappa]$. By possibly intersecting $S$ with some club, this means that there exists a $\mu$-bounded $C$-sequence $\vec C=\langle C_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ satisfying the following weak coherence property: for every pair $\gamma<\delta$ of ordinals from $S$, for all $\beta\in\mathop{\mathrm{nacc}}(C_{\gamma})\cap\mathop{\mathrm{nacc}}(C_\delta)$, $C_{\gamma}\cap\beta=C_\delta\cap\beta$. We shall prove that $\diamondsuit^*(\vec C, 2^\lambda, 2^\lambda)$ holds.
To this end, we fix the following objects.
(i) Let $h: \kappa \rightarrow 2^\lambda$ be increasing and cofinal.
(ii) Let $\mathcal C:= \{C_\delta \cap \beta \mathrel{|}\allowbreak\delta \in S, \beta\in \mathop{\mathrm{nacc}}(C_\delta)\}$, so that $|\mathcal C| =\kappa$ and it consists of sets of size less than $\mu$.
(iii) Let $T:= \{f \mathrel{|}\allowbreak\exists C \in \mathcal C\, [f \in {}^C(2^\lambda)]\}$. For each $C \in \mathcal C$, since $|C|<\mu<\lambda$, it is the case that $2^\lambda \leq |{}^C(2^\lambda)| \leq (2^\lambda)^{|C|} = 2^\lambda$. As $|\mathcal C| = \kappa$, we conclude that $|T| =2^\lambda$.
(iv) Let $\langle f_i \mathrel{|}\allowbreak i< 2^\lambda\rangle$ be an enumeration of $T$.
(v) For $\delta< \kappa$, denote $T_{< \delta}:= \{f_i \mathrel{|}\allowbreak i < h(\delta)\}$, so that $|T_{<\delta}|<2^\lambda$.
For every $\delta \in S$, let $$\mathcal P_\delta:= \{f \in {}^{C_\delta}h(\delta) \mathrel{|}\allowbreak\forall \beta \in \mathop{\mathrm{nacc}}(C_\delta)\, [f \mathbin\upharpoonright(C_\delta \cap \beta) \in T_{< \delta}]\}.$$ We shall show that the sequence $\langle \mathcal P_\delta \mathrel{|}\allowbreak\delta \in S\rangle$ is a witness for $\diamondsuit^*(\vec C, 2^\lambda, 2^\lambda)$. We begin by estimating its width. Implicit in the upcoming proof are the 'tree powers' from the hypotheses of [@Sh:775 Claim 1.10].
**Claim 3**. *Let $\delta \in S$. Then $|\mathcal P_\delta| < 2^\lambda$.*
Consider the set $Q_\delta:= \{g \in T_{<\delta} \mathrel{|}\allowbreak\exists \beta \in \mathop{\mathrm{nacc}}(C_\delta)\,[g \in {}^{C_\delta \cap \beta}h(\delta)]\}$. For every $f\in\mathcal P_\delta$, $b_f:=\{q \in Q_\delta\mathrel{|}\allowbreak q \subseteq f\}$ is nothing but $\{ f\mathbin\upharpoonright\beta\mathrel{|}\allowbreak\beta\in\mathop{\mathrm{nacc}}(C_\delta)\}$. So from $\mathop{\mathrm{otp}}(C_\delta)=\mathop{\mathrm{cf}}(\delta)=\mu$, we infer that $(b_f,{\subseteq})$ is order-isomorphic to $(\mu,{\in})$, satisfying that $\bigcup b=f$ for every $b\in[b_f]^\mu$. In particular, $|b_f\cap b_{f'}|<\mu$ for all $f\neq f'$ from $\mathcal P_\delta$.
Set $\varkappa:=|Q_\delta|$. As $\varkappa\le|T_{<\delta}|<2^\lambda$, the choice of $\mu$ ensures that $m(\varkappa,\mu)<2^\lambda$. In particular, we may fix a subfamily $\mathcal A_\delta\subseteq[Q_\delta]^\mu$ of size less than $2^\lambda$ such that, for every $f\in\mathcal P_\delta$, there exists $a_f\in\mathcal A_\delta$ with $|a_f\cap b_f|=\mu$. Then $f\mapsto a_f$ forms an injection from $\mathcal P_\delta$ to $\mathcal A_\delta$, so that $|\mathcal P_\delta|<2^\lambda$.
We are left with verifying that $\langle \mathcal P_\delta \mathrel{|}\allowbreak\delta \in S\rangle$ has the required guessing property. So let $f: \kappa\rightarrow{}2^\lambda$. Let $D\subseteq\kappa$ be a club such that $\delta \in D$ implies that
(i) for every $\beta< \delta$, $f(\beta) < h(\delta)$;
(ii) for every $\beta< \delta$, for every $\delta'\in S$ such that $\beta \in \mathop{\mathrm{nacc}}(C_{\delta'})$, we have $f \mathbin\upharpoonright(C_{\delta'}\cap \beta) \in T_{< \delta}$.
Here we use the weak coherence property of $\vec C$ to ensure that the requirement in (ii) can indeed be satisfied. Now suppose that $\delta \in D$. Then for every $\beta \in \mathop{\mathrm{nacc}}(C_\delta)$, $f\mathbin\upharpoonright(C_\delta \cap \beta) \in T_{< \delta}$ and $\mathop{\mathrm{Im}}(f \mathbin\upharpoonright(C_\delta \cap \beta)) \subseteq h(\delta)$. So indeed $f \mathbin\upharpoonright C_\delta \in \mathcal P_\delta$. ◻
The last step is proving the next lemma that we extracted from the end of the proof of [@Sh:775 Claim 1.10].
**Lemma 7**. *Suppose that:*
- *$\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ is a ladder system over some stationary $S\subseteq\kappa$;*
- *$\diamondsuit(\vec L,2^\lambda,2^\lambda)$ holds with $\lambda<\kappa$;*
- *$\mathop{\mathrm{{\sf onto}}}(\{\lambda\},[2^\lambda]^{\le\lambda},\theta)$ holds.*
*Then $\Phi(\vec L,2^\lambda,\theta)$ holds.*
*Proof.* Fix a bijection $\psi:{}^\lambda(2^\lambda)\leftrightarrow2^\lambda$. For every $\alpha<\lambda$, define a map $\psi_\alpha:2^\lambda\rightarrow2^\lambda$ via $$\psi_\alpha(\tau):=\psi^{-1}(\tau)(\alpha).$$ The point is that for every function $\sigma:\lambda\rightarrow2^\lambda$ and every $\alpha<\lambda$, $$\psi_\alpha(\psi(\sigma))=\sigma(\alpha).$$
For every $x\subseteq\kappa$, for every map $\eta:x\rightarrow2^\lambda$, for every $\alpha<\lambda$, we let $\eta^\alpha:=\psi_\alpha\circ\eta$, so $\eta^\alpha: x \rightarrow 2^\lambda$ as well.
Now, let $F:(\bigcup_{\delta\in S}{}^{A_\delta}2^\lambda)\rightarrow\theta$ be given. Let $\langle\mathcal P_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ be a witness for $\diamondsuit(\vec L,2^\lambda,2^\lambda)$. Without loss of generality, we may assume that for every $\delta\in S$, each element of $\mathcal P_\delta$ is a function from $A_\delta$ to $2^\lambda$. In particular, for all $\delta\in S$, $\eta\in\mathcal P_\delta$, and $\alpha<\lambda$, $\eta^\alpha$ is a map from $A_\delta$ to $2^\lambda$ so that $F(\eta^\alpha)$ is a well-defined ordinal less than $\theta$. In other words, for all $\delta\in S$ and $\eta\in\mathcal P_\delta$, $$h_\eta:=\langle F(\eta^\alpha)\mathrel{|}\allowbreak\alpha<\lambda\rangle$$ is a map from $\lambda$ to $\theta$.
**Claim 4**. *Let $\delta\in S$. There exists a function $g_\delta:\lambda\rightarrow\theta$ such that, for every $\eta\in\mathcal P_\delta$, there exists an $\alpha<\lambda$ with $h_\eta(\alpha)= g_\delta(\alpha)$.*
Fix a witness $c:\lambda\times2^\lambda\rightarrow\theta$ to $\mathop{\mathrm{{\sf onto}}}(\{\lambda\},[2^\lambda]^{\le\lambda},\theta)$. Notice that for every $\eta\in\mathcal P_\delta$, the set $B_\eta:=\{\beta<2^\lambda\mathrel{|}\allowbreak\forall\alpha<\lambda\,[h_\eta(\alpha)\neq c(\alpha,\beta)]\}$ has size no more than $\lambda$, since otherwise, by the choice of $c$, we may pick an $\alpha<\lambda$ such that $c[\{\alpha\}\times B_\eta]=\theta$, and in particular, $h_\eta(\alpha) \in c[\{\alpha\}\times B_\eta]$, contradicting the definition of $B_\eta$. Now, as $|\mathcal P_\delta|<2^\lambda$, it follows that we may pick $\beta\in 2^\lambda\setminus\bigcup_{\eta\in\mathcal P_\delta}B_\eta$. Define $g_\delta:\lambda\rightarrow\theta$ via $g_\delta(\alpha):=c(\alpha,\beta)$. Since $\beta\notin B_\eta$ for every $\eta\in\mathcal P_\delta$, the function $g_\delta$ is as sought.

Switching the roles of $\delta$ and $\alpha$ in the preceding claim, we may fix a sequence $\langle g_\alpha:S\rightarrow\theta\mathrel{|}\allowbreak\alpha<\lambda\rangle$ such that, for every $\delta\in S$, for every $\eta\in\mathcal P_\delta$, there exists an $\alpha<\lambda$ with $h_\eta(\alpha)= g_\alpha(\delta)$.
**Claim 5**. *There exists an $\alpha<\lambda$ such that for every function $f:\kappa\rightarrow2^\lambda$, the following set is stationary in $\kappa$: $$\{\delta\in S\mathrel{|}\allowbreak F(f\mathbin\upharpoonright A_\delta)=g_\alpha(\delta)\}.$$*
Suppose not. Then, for every $\alpha<\lambda$, we may fix a function $f_\alpha:\kappa\rightarrow2^\lambda$ and a club $D_\alpha\subseteq\kappa$ disjoint from $\{\delta\in S\mathrel{|}\allowbreak F(f_\alpha\mathbin\upharpoonright A_\delta)=g_\alpha(\delta)\}$. Define a map $\eta:\kappa\rightarrow2^\lambda$ via: $$\eta(\gamma):=\psi(\langle f_i(\gamma)\mathrel{|}\allowbreak i<\lambda\rangle).$$
Now, as $\langle\mathcal P_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ witnesses $\diamondsuit(\vec L,2^\lambda,2^\lambda)$, we may pick $\delta\in\bigcap_{\alpha<\lambda}D_\alpha\cap S$ such that $\bar\eta:=\eta\mathbin\upharpoonright A_\delta$ is in $\mathcal P_\delta$. For every $\alpha<\lambda$, recall that $\bar\eta^\alpha$ is defined as $\psi_\alpha\circ\bar\eta$, so that, for every $\gamma\in A_\delta$, $$\bar\eta^\alpha(\gamma)=\psi_\alpha(\psi(\langle f_i(\gamma)\mathrel{|}\allowbreak i<\lambda\rangle))=f_\alpha(\gamma).$$ That is, for every $\alpha<\lambda$, $\bar\eta^\alpha=f_\alpha\mathbin\upharpoonright A_\delta$. As $\bar\eta\in\mathcal P_\delta$, we may pick some $\alpha<\lambda$ such that $h_{\bar\eta}(\alpha)=g_\alpha(\delta)$. But, by the definition of $h_{\bar\eta}$, $$h_{\bar\eta}(\alpha)=F(\bar\eta^\alpha).$$ Altogether, $$F(f_\alpha\mathbin\upharpoonright A_\delta)=F(\bar\eta^\alpha)=h_{\bar\eta}(\alpha)=g_\alpha(\delta),$$ contradicting the fact that $\delta\in D_\alpha\cap S$. This proves the claim. Finally, for $\alpha$ as in the claim, the function $g:=g_\alpha$ witnesses $\Phi(\vec L,2^\lambda,\theta)$, which completes the proof. ◻
**Remark 8**. The conclusion of the preceding lemma remains valid after relaxing its third hypothesis to "$\mathop{\mathrm{{\sf onto}}}(\{\lambda\},J,\theta)$ holds for some $2^\lambda$-complete ideal $J$ over $2^\lambda$". On another front, note that the principles $\diamondsuit(\vec L, \mu, \theta)$ and $\Phi(\vec L, \mu, \theta)$ can be strengthened by adding an extra parameter $I$, an ideal on $\kappa$ extending $\mathop{\mathrm{NS}}_\kappa \mathbin\upharpoonright S$. In each case, the set of good guesses $\delta$ is now required to be a set in $I^+$ instead of merely a stationary subset of $S$. We leave it to the interested reader to verify that most of the results in this section hold for these strengthenings for $I$ any $\lambda^+$-complete ideal on $\kappa$ extending $\mathop{\mathrm{NS}}_\kappa \mathbin\upharpoonright S$. The only change that needs to be made is that the second hypothesis of Lemma [Lemma 7](#lemma115){reference-type="ref" reference="lemma115"} will now require $\diamondsuit^*(\vec L, 2^\lambda, 2^\lambda)$ instead of $\diamondsuit(\vec L, 2^\lambda, 2^\lambda)$, which is what Lemma [Lemma 6](#countinglemma){reference-type="ref" reference="countinglemma"} produces anyway.
We are now in a position to prove Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"}.
**Corollary 9**. *Suppose that $\Lambda\le\lambda$ is a pair of uncountable cardinals such that $\Lambda$ is a strong limit. Denote $\kappa := \mathop{\mathrm{cf}}(2^\lambda)$. Then, for co-boundedly many regular cardinals $\mu<\Lambda$, there exists a $\mu$-bounded $C$-sequence $\vec C$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec C, \mu)$ holds.*
*Proof.* By Lemma [Lemma 6](#countinglemma){reference-type="ref" reference="countinglemma"}, for co-boundedly many $\mu \in \mathop{\mathrm{Reg}}(\Lambda)$, there is a $\mu$-bounded $C$-sequence $\vec C$ over some stationary $S\subseteq E^\kappa_\mu$ such that $\diamondsuit^*(\vec C, 2^\lambda, 2^\lambda)$ holds. In particular, there is a $\mu$-bounded $C$-sequence $\vec C$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec C, 2^\lambda, 2^\lambda)$ holds. By Fact [Fact 4](#rgch){reference-type="ref" reference="rgch"}, we may fix $\theta\in\mathop{\mathrm{Reg}}(\Lambda)$ above $2^\mu$ such that $\lambda^{[\theta]}=\lambda$. As $\Lambda$ is a strong limit, $2^\theta<\Lambda\le\lambda$. Thus, by Lemma [Lemma 5](#lemma114){reference-type="ref" reference="lemma114"}, $\mathop{\mathrm{{\sf onto}}}(\{\lambda\},[2^\lambda]^{\le\lambda},\theta)$ holds. Then, by Lemma [Lemma 7](#lemma115){reference-type="ref" reference="lemma115"}, $\Phi(\vec C,2^\lambda,\theta)$ holds. In particular, $\Phi(\vec C,\mu,2^\mu)$ holds. Then, by Lemma [Lemma 2](#l210){reference-type="ref" reference="l210"}, $\diamondsuit( \vec C, \mu)$ holds. ◻
We conclude this section by briefly describing some other configurations which provide narrow diamonds over a ladder system. Above, we used Shelah's beautiful revised GCH theorem, Fact [Fact 4](#rgch){reference-type="ref" reference="rgch"}, to obtain instances of cardinals $\theta \leq \lambda$ such that $m(\lambda, \theta) = \lambda$. The following fact provides other instances (see [@MR4529976 Lemma 2.3(vi)] for a proof).
**Fact 10**. *For all infinite cardinals $\theta\le\lambda < \theta^{+\mathop{\mathrm{cf}}(\theta)}$, $m(\lambda, \theta) = \lambda$ holds.*
Using Fact [Fact 10](#meetingfact){reference-type="ref" reference="meetingfact"} we can trace through the proofs of this section to obtain the following theorem. The reader may first consider Corollary [Corollary 12](#cor212){reference-type="ref" reference="cor212"} below which deals with the simplest case of the theorem, where $\mu:= \aleph_0$, in which case $\mu^{+\mu}= \aleph_\omega$.
**Theorem 11**. *Suppose that $\mu$ is an infinite regular cardinal, and $\lambda$ is a cardinal such that $\mu<2^\mu<2^{2^\mu}\le\lambda<2^\lambda<\mu^{+\mu}$. Denote $\kappa:= \mathop{\mathrm{cf}}(2^\lambda)$. Then there exists a $\mu$-bounded $C$-sequence $\vec C$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec C, \mu)$ holds.*
*Proof.* As $2^\lambda<\mu^{+\mu}$, for every cardinal $\varkappa\in[\mu,2^\lambda)$, it is the case that $\mu\le\varkappa<\mu^{+\mu}$, and so Fact [Fact 10](#meetingfact){reference-type="ref" reference="meetingfact"} implies that $m(\varkappa, \mu) = \varkappa<2^\lambda$. As made clear right after Claim [Claim 2](#c261){reference-type="ref" reference="c261"}, we then get a $\mu$-bounded $C$-sequence $\vec C$ over some stationary $S\subseteq E^\kappa_\mu$ such that $\diamondsuit^*(\vec C, 2^\lambda, 2^\lambda)$ holds. In particular, there is a $\mu$-bounded $C$-sequence $\vec C$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec C, 2^\lambda, 2^\lambda)$ holds. Now let $\theta:= 2^\mu$. As $\mu<\theta<\lambda<\mu^{+\mu}$, it is the case that $\theta<\lambda<\theta^{+\theta}$, so Fact [Fact 10](#meetingfact){reference-type="ref" reference="meetingfact"} implies that $m(\lambda,\theta)=\lambda$. Then, by Lemma [Lemma 5](#lemma114){reference-type="ref" reference="lemma114"}, $\mathop{\mathrm{{\sf onto}}}(\{\lambda\},[2^\lambda]^{\le\lambda},\theta)$ holds. Then, by Lemma [Lemma 7](#lemma115){reference-type="ref" reference="lemma115"}, $\Phi(\vec C,2^\lambda,\theta)$ holds. In particular, $\Phi(\vec C,\mu,2^\mu)$ holds. Then, by Lemma [Lemma 2](#l210){reference-type="ref" reference="l210"}, $\diamondsuit( \vec C, \mu)$ holds. ◻
**Corollary 12**. *If $2^{2^{2^{\aleph_0}}}< \aleph_\omega$, then for $\kappa:=2^{2^{2^{\aleph_0}}}$, there is an $\omega$-bounded $C$-sequence $\vec C$ over $E^\kappa_\omega$ such that $\diamondsuit(\vec C, \omega)$ holds. ◻*
**Corollary 13**. *If $\aleph_\omega$ is a strong limit, then for every $\mu\in\mathop{\mathrm{Reg}}(\aleph_\omega)$, there are infinitely many $\kappa<\aleph_\omega$ for which there exists a $\mu$-bounded $C$-sequence $\vec C$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec C, \mu)$ holds. ◻*
**Remark 14**. With a bit more work, one can show that if $\aleph_\omega$ is a strong limit, then for every uncountable $\mu<\aleph_\omega$, there exists a $\mu$-bounded $C$-sequence $\vec C$ over $E^{\aleph_{\omega+1}}_\mu$ such that $\diamondsuit(\vec C, \mu)$ holds.
# Ladder systems and topological spaces {#sec1}
While not stated explicitly, we shall want the topological spaces constructed in this paper to be Hausdorff. Thus, we shall need the following folklore fact.
**Fact 15**. *For a ladder system $\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$, all of the following are equivalent:*
1. *$X_{\vec L}$ is Hausdorff;*
2. *$X_{\vec L}$ is Hausdorff and regular;*
3. *for every pair $\gamma<\delta$ of ordinals from $S$, $\sup(A_\gamma\cap A_\delta)<\gamma$.*
In particular, if $\vec L$ is $\omega$-bounded, then $X_{\vec L}$ is Hausdorff and regular. More generally, for every $\mu$-bounded ladder system $\vec L$ over a subset of $E^\kappa_\mu$, it is the case that $X_{\vec L}$ is Hausdorff and regular. A second basic fact will be needed. Namely, by [@leiderman2023deltaspaces Proposition 4.1] and a straightforward generalisation of [@MR2099600 Claim 1], we have the following characterisation.
**Fact 16**. *For a ladder system $\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$, all of the following are equivalent:*
1. *$X_{\vec L}$ is a $\Delta$-space;*
2. *$X_{\vec L}$ is countably metacompact;*
3. *For every function $g:S \rightarrow \omega$, there is a function $f: \kappa \rightarrow \omega$, such that, for every $\delta \in S$, $\sup\{\alpha\in A_\delta \mathrel{|}\allowbreak f(\alpha) \leq g(\delta)\} <\delta$.*
**Remark 17**. The above characterisation makes it clear that an $\omega$-bounded ladder system $\vec L$ over $\omega_1$ for which $X_{\vec L}$ is not countably metacompact can be constructed from a gallery of hypotheses. To mention just two, by [@paper23 Theorem 3.7] such a ladder system exists assuming $\clubsuit$, and by the proof of [@paper54 Corollary 4.6] such a ladder system exists assuming $\diamondsuit(\mathfrak b)$.
**Lemma 18**. *Suppose that $\mu<\kappa$ is a pair of regular uncountable cardinals, and that $\vec L$ is a $\mu$-bounded ladder system over $E^\kappa_\mu$ such that $\Phi(\vec L,\omega,\omega)$ holds. Then $X_{\vec L}$ is a regular Hausdorff space that is not countably metacompact.*
*Proof.* Since $\vec L$ is a $\mu$-bounded ladder system over $E^\kappa_\mu$, Fact [Fact 15](#fact32){reference-type="ref" reference="fact32"} implies that $X_{\vec L}$ is regular and Hausdorff. Write $\vec L$ as $\langle A_\delta \mathrel{|}\allowbreak\delta \in S\rangle$, where $S:=E^\kappa_\mu$. Define a function $F:(\bigcup_{\delta\in S}{}^{A_\delta}\omega)\rightarrow\omega$ by letting for all $\delta\in S$ and $f:A_\delta\rightarrow\omega$, $$F( f):=\min\{n<\omega\mathrel{|}\allowbreak\sup\{\alpha\in A_\delta \mathrel{|}\allowbreak f(\alpha) =n\} =\delta\}.$$ Note that $F$ is well defined: as $\mathop{\mathrm{cf}}(\delta)=\mu>\omega$ and $\sup(A_\delta)=\delta$, for every $f:A_\delta\rightarrow\omega$ at least one of the countably many fibres $f^{-1}\{n\}$ must be cofinal in $\delta$. Since $\Phi(\vec L,\omega,\omega)$ holds, we may now fix a function $g:S\rightarrow\omega$ such that, for every function $f:\kappa\rightarrow\omega$, there are stationarily many $\delta\in S$ such that $F(f\mathbin\upharpoonright A_\delta)= g(\delta)$. In particular, for every function $f:\kappa\rightarrow\omega$ there are stationarily many $\delta\in S$ such that $\sup\{\alpha\in A_\delta \mathrel{|}\allowbreak f(\alpha) =g(\delta)\} =\delta$. So, by Fact [Fact 16](#thm31){reference-type="ref" reference="thm31"}, $X_{\vec L}$ is not countably metacompact. ◻
We are now ready to prove Theorem [Theorem 1](#thma){reference-type="ref" reference="thma"}. Indeed, it follows by taking $\Lambda=\lambda=\beth_\omega$ in the next result.
**Corollary 19**. *Suppose that $\Lambda\le\lambda$ is a pair of uncountable cardinals such that $\Lambda$ is a strong limit. Denote $\kappa := \mathop{\mathrm{cf}}(2^\lambda)$. Then there are co-boundedly many $\mu\in\mathop{\mathrm{Reg}}(\Lambda)$ such that $E^\kappa_\mu$ carries a $\mu$-bounded ladder system ${\vec L}$ such that $X_{\vec L}$ is a regular Hausdorff space that is not countably metacompact.*
*Proof.* By Corollary [Corollary 9](#cor28){reference-type="ref" reference="cor28"}, there are co-boundedly many uncountable $\mu\in\mathop{\mathrm{Reg}}(\Lambda)$, for which there exists a $\mu$-bounded ladder system $\vec L$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec L, \omega)$ holds, in particular, $\Phi(\vec L, \omega,\omega)$ holds. Now, appeal to Lemma [Lemma 18](#midapp){reference-type="ref" reference="midapp"}. ◻
We now prove an expanded version of Theorem [Theorem 3](#thmc){reference-type="ref" reference="thmc"}.
**Theorem 20**.
1. *If $\aleph_\omega$ is a strong limit, then for every $n<\omega$, there are infinitely many $m<\omega$ such that $E^{\aleph_{m+1}}_{\aleph_{n+1}}$ carries an $\aleph_{n+1}$-bounded ladder system $\vec L$ for which $X_{\vec L}$ is not countably metacompact.*
2. *If $2^{2^{\aleph_1}}<\aleph_{\omega_1}$, then for $\kappa:=2^{2^{\aleph_1}}$, there exists an $\omega_1$-bounded ladder system $\vec L$ over $E^\kappa_{\omega_1}$ for which $X_{\vec L}$ is a regular Hausdorff space that is not countably metacompact.*
*Proof.* (1) The proof is similar to that of Corollary [Corollary 19](#thm34){reference-type="ref" reference="thm34"}, just using Corollary [Corollary 13](#cor29){reference-type="ref" reference="cor29"} instead of Corollary [Corollary 9](#cor28){reference-type="ref" reference="cor28"}.
\(2\) Denote $\mu:=\aleph_1$ and $\lambda:=2^\mu$. Note that $\aleph_1<\lambda<\mathop{\mathrm{cf}}(\kappa)\le\kappa<\aleph_{\omega_1}$ and hence $\kappa$ is regular. As $2^\lambda<\mu^{+\mu}$, for every cardinal $\varkappa\in[\mu,2^\lambda)$, it is the case that $\mu\le\varkappa<\mu^{+\mu}$, and so Fact [Fact 10](#meetingfact){reference-type="ref" reference="meetingfact"} implies that $m(\varkappa, \mu) = \varkappa<2^\lambda$. As made clear right after Claim [Claim 2](#c261){reference-type="ref" reference="c261"}, we then get a $\mu$-bounded $C$-sequence $\vec C$ over some stationary $S\subseteq E^\kappa_\mu$ such that $\diamondsuit^*(\vec C, 2^\lambda, 2^\lambda)$ holds. In particular, there is a $\mu$-bounded $C$-sequence $\vec C$ over $E^\kappa_\mu$ such that $\diamondsuit(\vec C, 2^\lambda, 2^\lambda)$ holds. As $\lambda^\mu=\lambda$, Lemma [Lemma 5](#lemma114){reference-type="ref" reference="lemma114"} implies that $\mathop{\mathrm{{\sf onto}}}(\{\lambda\},[2^\lambda]^{\le\lambda},\mu)$ holds. Then, by Lemma [Lemma 7](#lemma115){reference-type="ref" reference="lemma115"}, $\Phi(\vec C,2^\lambda,\mu)$ holds. Now, appeal to Lemma [Lemma 18](#midapp){reference-type="ref" reference="midapp"}. ◻
We conclude this paper by providing a proof of Theorem [Theorem 4](#thmd){reference-type="ref" reference="thmd"}.
**Theorem 21**. *If there exists a $\kappa$-Souslin tree, then there exists a ladder system $\vec L$ over some stationary subset of $\kappa$ for which $X_{\vec L}$ is a regular Hausdorff space that is not countably metacompact.*
*Proof.* By [@paper48 Theorem 2.29], the existence of a $\kappa$-Souslin tree implies that $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,1,1)$ holds for some $\kappa$-sized pairwise disjoint family $\mathcal S$ of stationary subsets of $\kappa$. In particular, from a $\kappa$-Souslin tree one obtains a ladder system $\vec L=\langle A_\delta\mathrel{|}\allowbreak\delta\in S\rangle$ over some stationary $S\subseteq\kappa$, and a partition $S=\biguplus_{n<\omega}S_n$ such that the following two hold:
(i) for every cofinal $A\subseteq\kappa$, for every $n<\omega$, there exists $\delta\in S_n$ such that $\sup(A_\delta\cap A)=\delta$;
(ii) for every pair $\gamma<\delta$ of ordinals from $S$, $\sup(A_{\gamma}\cap A_{\delta})<\gamma$.
Now letting $g:S\rightarrow\omega$ describe the partition of $S$, we get that for every function $f: \kappa \rightarrow \omega$, by picking $n<\omega$ such that $A:=f^{-1}\{n\}$ is cofinal in $\kappa$, we may find $\delta\in S_n$ such that $\sup(A_\delta\cap A)=\delta$, and hence $\sup\{\alpha\in A_\delta \mathrel{|}\allowbreak f(\alpha) =g(\delta)\} =\delta$. So Clause (i) implies that $X_{\vec L}$ is not countably metacompact by Fact [Fact 16](#thm31){reference-type="ref" reference="thm31"}, and Clause (ii) ensures that $X_{\vec L}$ is a regular Hausdorff space by Fact [Fact 15](#fact32){reference-type="ref" reference="fact32"}. ◻
# Acknowledgments
We thank Ido Feldman for the combinatorial proof of Claim [Claim 1](#invclaim){reference-type="ref" reference="invclaim"}. The first author was supported by the European Research Council (grant agreement ERC-2018-StG 802756). The second author was supported by the Israel Science Foundation (grant agreement 665/20). The third author was partially supported by the Israel Science Foundation (grant agreement 203/22) and by the European Research Council (grant agreement ERC-2018-StG 802756).
Zoltán Balogh, Todd Eisworth, Gary Gruenhage, Oleg Pavlov, and Paul Szeptycki. Uniformization and anti-uniformization properties of ladder systems. , 181(3):189--213, 2004.
Ari Meir Brodsky and Assaf Rinot. A microscopic approach to Souslin-tree constructions. Part II. , 172(5):Paper No. 102904, 65, 2021.
C. H. Dowker. On countably paracompact spaces. , 3:219--224, 1951.
Keith J. Devlin and Saharon Shelah. A weak version of $\diamondsuit$ which follows from $2^{\aleph_0}<2^{\aleph_1}$. , 29(2-3):239--247, 1978.
Tanmay C. Inamdar. On strong chains of sets and functions. , 69(1):286--301, 2023.
Tanmay Inamdar and Assaf Rinot. Was Ulam right? I: Basic theory and subnormal ideals. , 323(C):Paper No. 108287, 53pp, 2023.
R. Björn Jensen. The fine structure of the constructible hierarchy. , 4:229--308; erratum, ibid. 4 (1972), 443, 1972. With a section by Jack Silver.
Jerzy Ka̧kol and Arkady Leiderman. A characterization of $X$ for which spaces $C_p(X)$ are distinguished and its applications. , 8:86--99, 2021.
R. W. Knight. $\Delta$-sets. , 339(1):45--60, 1993.
Arkady Leiderman and Paul Szeptycki. On $\Delta$-spaces. , to appear. arXiv:2307.16047.
Assaf Rinot and Roy Shalev. A guessing principle from a Souslin tree, with applications to topology. , 323(C):Paper No. 108296, 29pp, 2023.
Assaf Rinot, Roy Shalev, and Stevo Todorčević. A new small Dowker space. , to appear. `https://doi.org/10.1007/s10998-023-00541-6`
Saharon Shelah. Advances in cardinal arithmetic. In *Finite and infinite combinatorics in sets and logic (Banff, AB, 1991)*, volume 411 of *NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci.*, pages 355--383. Kluwer Acad. Publ., Dordrecht, 1993.
Saharon Shelah. The generalized continuum hypothesis revisited. , 116:285--321, 2000.
Saharon Shelah. Middle diamond. , 44:527--560, 2005.
Saharon Shelah. More on the revised GCH and the black box. , 140(1-3):133--160, 2006.
[^1]: *A ladder system as above, i.e., consisting of closed sets, is called a *$C$-sequence*.*
[^2]: *This is not a typo. The second parameter of the principle $\mathop{\mathrm{{\sf onto}}}$ is the ideal $J=[2^\lambda]^{\le\lambda}$, and the quantification here is over all sets $B$ that are $J$-positive, hence, the focus on $[2^\lambda]^{\lambda^+}$.*
---
abstract: |
Let $f:[0,1]^d\to\mathbb{R}$ be a completely monotone integrand as defined by Aistleitner and Dick (2015) and let points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ have a non-negative local discrepancy (NNLD) everywhere in $[0,1]^d$. We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of $f$ over $[0,1]^d$. An analogous non-positive local discrepancy (NPLD) property provides a computable lower bound. It has been known since Gabai (1967) that the two dimensional Hammersley points in any base $b\geqslant 2$ have non-negative local discrepancy. Using the probabilistic notion of associated random variables, we generalize Gabai's finding to digital nets in any base $b\geqslant 2$ and any dimension $d\geqslant 1$ when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when $d\geqslant 3$. As a consequence the computable absolutely sure bounds we provide come with less accurate estimates than the usual digital net estimates do in high dimensions. We are also able to construct high dimensional rank one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension $d\geqslant 2$ either fails to be projection regular or has all its points on the main diagonal.
author:
- |
Michael Gnewuch\
University of Osnabrück
- |
Peter Kritzer\
Austrian Academy of Sciences
- |
Art B. Owen\
Stanford University
- |
Zexin Pan\
Stanford University
bibliography:
- qmc.bib
date: August 2023
title: Computable error bounds for quasi-Monte Carlo using points with non-negative local discrepancy
---
**Keywords:** Associated random variables, Digital nets, Rank one lattices
# Introduction
Quasi-Monte Carlo (QMC) sampling [@dick:pill:2010; @nied92] can have much better asymptotic accuracy than plain Monte Carlo (MC), but it does not come with the usual statistical error estimates that MC has. Those estimates can be recovered by randomized QMC (RQMC) [@lecu:lemi:2002; @Owen:2023practical] based on independent replicates of QMC. In this paper we consider an alternative approach to uncertainty quantification for QMC. For some special sampling points with a non-negative local discrepancy (NNLD) property described later and a suitably monotone integrand $f$, we can compute upper and lower bounds on the integral $\mu$ of $f$ over the unit cube in $d$ dimensions. Methods based on random replication can provide confidence intervals for $\mu$ that attain a desired level such as 95% or 99% asymptotically, as the number of replicates diverges. The method we consider attains 100% coverage for finite $n$.
Unlike the well-known bounds derived via the Koksma-Hlawka inequality [@hick:2014], these bounds can be computed by practical algorithms. Convex optimization [@boyd:vand:2004] has the notion of a certificate: a computable bound on the minimum value of the objective function. The methods we present here provide certificates for multidimensional integration of a completely monotone function.
This improved uncertainty quantification comes at some cost. Our versions of the method will be more accurate than MC for dimensions $d\leqslant 3$, as accurate as MC (apart from logarithmic factors) for $d=4$ and less accurate than MC for $d\geqslant 5$. They also require some special knowledge of the integrand.
The problem is trivial and the solution is well known for $d=1$. If $f:[0,1]\to\mathbb{R}$ is nondecreasing then $$\begin{aligned}
\label{eq:onedimcase}
\frac1n\sum_{i=0}^{n-1}f\Bigl(\frac{i}n\Bigr)
\leqslant\int_0^1f(x)\,\mathrm{d}x
\leqslant
\frac1n\sum_{i=1}^{n}f\Bigl(\frac{i}n\Bigr).\end{aligned}$$ These bracketing inequalities hold even if some of the quantities in them are $\pm \infty$. This works because $f$ is nondecreasing, the evaluation points in the left hand side are 'biased low' and those in the right hand side are 'biased high'.
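To see [\[eq:onedimcase\]](#eq:onedimcase){reference-type="eqref" reference="eq:onedimcase"} in action, here is a minimal Python sketch (ours, not part of the original text); the integrand $x\mapsto x^2$ and the helper name are illustrative only.

```python
import numpy as np

def one_dim_brackets(f, n):
    """Lower and upper bounds on int_0^1 f(x) dx for a nondecreasing f."""
    grid = np.arange(n + 1) / n          # 0, 1/n, ..., 1
    lower = np.mean(f(grid[:-1]))        # left endpoints: 'biased low'
    upper = np.mean(f(grid[1:]))         # right endpoints: 'biased high'
    return lower, upper

lo, hi = one_dim_brackets(lambda x: x ** 2, n=1000)
# lo <= 1/3 <= hi, and hi - lo = (f(1) - f(0)) / n = 1/1000
```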
To get a multivariate version of [\[eq:onedimcase\]](#eq:onedimcase){reference-type="eqref" reference="eq:onedimcase"}, we generalize the notion of points biased low to points biased towards the origin in terms of a non-negative local discrepancy (NNLD) property of the points. This property was shown to hold for two dimensional Hammersley points by Gabai [@gaba:1967] in 1967. We couple the NNLD property with a multivariate notion of monotonicity called complete monotonicity [@aist:dick:2015].
This paper is organized as follows. Section [2](#sec:defs){reference-type="ref" reference="sec:defs"} gives some notation and then defines the properties of point sets and functions that we need. Theorem [Theorem 1](#thm:basic){reference-type="ref" reference="thm:basic"} there establishes the bracketing property we need. Section [3](#sec:nnldproperties){reference-type="ref" reference="sec:nnldproperties"} gives fundamental properties of NNLD point sets with an emphasis on projection regular point sets. Only very trivial lattice rules, confined to the diagonal in $[0,1]^d$, can be both projection regular and NNLD. Cartesian products preserve the NNLD property as well as an analogous non-positive local discrepancy property. Section [4](#sec:kh){reference-type="ref" reference="sec:kh"} compares our bounds to those obtainable from the Koksma-Hlawka inequality. Section [5](#sec:netconstructions){reference-type="ref" reference="sec:netconstructions"} shows that digital nets whose generator matrices are permutation matrices produce NNLD point sets. Section [6](#sec:rankone){reference-type="ref" reference="sec:rankone"} gives a construction of rank one lattice rules that are NNLD. We conclude with a discussion and some additional references in Section [7](#sec:disc){reference-type="ref" reference="sec:disc"}.
# Definitions and a bound {#sec:defs}
Here we define a non-negative local discrepancy (NNLD) property of the points we use as well as a complete monotonicity criterion for the integrand. We then establish bounds analogous to [\[eq:onedimcase\]](#eq:onedimcase){reference-type="eqref" reference="eq:onedimcase"}. First we introduce some notation.
## Notation
For integer $b\geqslant 1$, let $\mathbb{Z}_b=\{0,1,\dots,b-1\}$. The set $\{1,2,\dots,d\}$ of variable indices is denoted by $[d]$. For $u\subseteq [d]$, we use $|u|$ for the cardinality of $u$ and $-u$ for the complement $[d]\setminus u$, especially in subscripts and superscripts. The singleton $\{j\}$ may be abbreviated to just $j$ and $-\{j\}$ to $-j$. For points $\boldsymbol{x},\boldsymbol{z}\in[0,1]^d$ and a set $u\subseteq [d]=\{1,2,\dots,d\}$ let $\boldsymbol{x}_u{:}\boldsymbol{z}_{-u}$ be the hybrid point with $j$'th component $x_j$ for $j\in u$ and $j$'th component $z_j$ for $j\not\in u$.
The points with all coordinates $0$ or all coordinates $1$ are denoted by $\boldsymbol{0}$ and $\boldsymbol{1}$ respectively. When it is necessary to specify their dimension we use $\boldsymbol{0}_d$ and $\boldsymbol{1}_d$. The notation $\mathbbm{1}\{A\}$ is for an indicator variable equal to $1$ when $A$ is true and $0$ otherwise.
For integer $d\geqslant 1$ we will use the following precedence notion on $[0,1]^d$. For $\boldsymbol{x},\boldsymbol{z}\in\mathbb{R}^d$ we say that $\boldsymbol{x}\leqslant\boldsymbol{z}$ when $x_j\leqslant z_j$ holds for all $j=1,\dots,d$.
## Non-negative local discrepancy
A QMC rule is given by a list of points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ and it yields the estimate $$\hat\mu = \frac1n\sum_{i=0}^{n-1}f(\boldsymbol{x}_i)$$ of $\mu$. We refer to these points as a point set, $P_n$, though in any setting where some $\boldsymbol{x}_i$ are duplicated we actually treat $P_n$ as a multiset, counting multiplicity of the points. The local discrepancy of $P_n$ at $\boldsymbol{z}\in[0,1]^d$ is given by $$\delta(\boldsymbol{z}) = \delta(\boldsymbol{z};P_n)=\widehat\mathrm{VOL}([\boldsymbol{0},\boldsymbol{z}))-\mathrm{VOL}([\boldsymbol{0},\boldsymbol{z}))$$ where $\mathrm{VOL}$ is Lebesgue measure and $\widehat\mathrm{VOL}$ is the empirical measure with $$\widehat\mathrm{VOL}([\boldsymbol{0},\boldsymbol{z}))=\frac1n\sum_{i=0}^{n-1}1_{\boldsymbol{x}_i\in[\boldsymbol{0},\boldsymbol{z})}.$$ That is, $\mathrm{VOL}$ is $\mathbb{U}[0,1]^d$ while $\widehat\mathrm{VOL}$ is $\mathbb{U}(P_n)$. The quantity $D_n^*=\sup_{\boldsymbol{z}\in[0,1]^d}|\delta(\boldsymbol{z})|$ is called the star discrepancy of the point set $P_n$.
**Definition 1**. The point set $P_n$ with points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}$ has non-negative local discrepancy (NNLD) if $$\begin{aligned}
\label{eq:defnnld}
\delta(\boldsymbol{z})\geqslant 0\end{aligned}$$ for all $\boldsymbol{z}\in[0,1]^d$.
A distribution for $\boldsymbol{x}\in\mathbb{R}^d$ is positively lower orthant dependent [@shak:1982] if $$\Pr( \boldsymbol{x}\leqslant\boldsymbol{z})\geqslant\prod_{j=1}^d\Pr(x_j\leqslant z_j)$$ for all $\boldsymbol{z}\in\mathbb{R}^d$. A sufficient condition for NNLD is that the $\mathbb{U}(P_n)$ distribution on $[0,1]^d$ is positively lower orthant dependent and that the marginal distributions $\mathbb{U}\{x_{0,j},\dots,x_{n-1,j}\}$ for each $j=1,\dots,d$ are stochastically smaller than $\mathbb{U}[0,1]$. The random variable $X$ is stochastically smaller than the random variable $Y$ if $\Pr( X\leqslant z)\geqslant\Pr(Y\leqslant z)$ for all $z\in\mathbb{R}$ and in that case we also say that the distribution of $X$ is stochastically smaller than that of $Y$. There is a related notion of positive upper orthant dependence as well as two related notions of negative orthant dependence, both upper and lower.
In one dimension, the points $0,1/n,\dots,(n-1)/n$ are NNLD. As mentioned earlier, $n=b^m$ Hammersley points in base $b\geqslant 2$ and dimension $d=2$ are NNLD [@gaba:1967]. Those Hammersley points are constructed as follows. For $0\leqslant i<n$ write $i=\sum_{k=1}^ma_i(k)b^{k-1}$ for digits $a_i(k)\in\{0,1,\dots,b-1\}$ and set $i' =\sum_{k=1}^ma_i(m-k+1)b^{k-1}$. Then the $i$'th such Hammersley point is $\boldsymbol{x}_i=\bigl(i/n,i'/n\bigr)$ for $i=0,1,\dots,n-1$. Some further properties of the Hammersley points, related to the work of [@gaba:1967], are given by [@declerck:1986].
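To make the digit reversal concrete, the following Python sketch (ours; the helper name is not from the literature) builds the two dimensional base $b$ Hammersley points and verifies $\delta(\boldsymbol{z})\geqslant0$ at the grid corners $\boldsymbol{z}=(j_1/n,j_2/n)$, which is where the infimum of $\delta$ over each grid cell is attained.

```python
def hammersley_2d(b, m):
    """Two dimensional Hammersley points (i/n, i'/n), i' the base-b digit reversal of i."""
    n = b ** m
    pts = []
    for i in range(n):
        digits = [(i // b ** k) % b for k in range(m)]            # a_i(1), ..., a_i(m)
        i_rev = sum(a * b ** (m - 1 - k) for k, a in enumerate(digits))
        pts.append((i / n, i_rev / n))
    return pts

pts = hammersley_2d(b=2, m=4)
n = len(pts)
for j1 in range(1, n + 1):
    for j2 in range(1, n + 1):
        z1, z2 = j1 / n, j2 / n
        emp = sum(1 for (x, y) in pts if x < z1 and y < z2) / n   # empirical volume of [0, z)
        assert emp >= z1 * z2 - 1e-12                             # NNLD: delta(z) >= 0
```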
We will also make use of a complementary property: non-positive local discrepancy.
**Definition 2**. The point set $P_n$ with points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}$ has non-positive local discrepancy (NPLD) if $$\begin{aligned}
\label{eq:defpnld}
\delta(\boldsymbol{z})\leqslant 0\end{aligned}$$ for all $\boldsymbol{z}\in[0,1]^d$.
One of our techniques is to take NNLD points $\boldsymbol{x}_i$ and reflect them to $\boldsymbol{1}-\boldsymbol{x}_i$ to get points that oversample rectangular regions near $\boldsymbol{1}$. In doing so we will need to take care of two issues. One is that for $d\geqslant 2$, the complement of a hyperrectangle $[\boldsymbol{0},\boldsymbol{a})$ under this transformation is not another hyperrectangle. The other is that even for $d=1$, the complement of a half open interval $[0,a)$ is a closed interval $[a,1]$.
To handle these issues we make two observations below. First, for an $n$-point set $P_n \subset [0,1]^d$ let us additionally define the local discrepancy with respect to closed boxes: $$\overline{\delta}(\boldsymbol{z}) = \overline{\delta}(\boldsymbol{z}; P_n)
=\widehat\mathrm{VOL}([\boldsymbol{0},\boldsymbol{z}])-\mathrm{VOL}([\boldsymbol{0},\boldsymbol{z}]).$$
**Observation 1**. *The point set $P_n$ has the NNLD property if and only if $$\label{closed_NNLD}
\overline{\delta}(\boldsymbol{z}) \geqslant 0
\hspace{3ex}
\text{for all $\boldsymbol{z}\in [0,1]^d$.}$$ This is due to the following reasoning: First, we always have $\overline{\delta}(\boldsymbol{z}) \geqslant\delta(\boldsymbol{z})$ for all $\boldsymbol{z}\in [0,1]^d$. Thus the NNLD property of $P_n$ implies [\[closed_NNLD\]](#closed_NNLD){reference-type="eqref" reference="closed_NNLD"}. For the converse, we assume that $P_n$ satisfies [\[closed_NNLD\]](#closed_NNLD){reference-type="eqref" reference="closed_NNLD"} and consider two cases. If $z_j = 0$ for some $j\in [d]$ then $\delta(\boldsymbol{z})=0$. If instead $\min_{j\in[d]}z_j>0$ then $$\delta(\boldsymbol{z}) = \lim_{\varepsilon \downarrow 0} \overline{\delta}(\boldsymbol{z}- \varepsilon \boldsymbol{1}).$$ Either way, [\[eq:defnnld\]](#eq:defnnld){reference-type="eqref" reference="eq:defnnld"} holds, i.e., $P_n$ is NNLD.*
**Observation 2**. *The condition $$\label{closed_NPLD}
\overline{\delta}(\boldsymbol{z}) \leqslant 0
\hspace{3ex}
\text{for all $\boldsymbol{z}\in [0,1]^d$}$$ implies that $P_n$ has the NPLD property, since $\delta(\boldsymbol{z}) \leqslant\overline{\delta}(\boldsymbol{z})$ for all $\boldsymbol{z}\in [0,1]^d$. As a partial converse, if $P_n\subset[0,1)^d \cup \{\boldsymbol{1}\}$, then the NPLD property also implies condition [\[closed_NPLD\]](#closed_NPLD){reference-type="eqref" reference="closed_NPLD"}. Indeed, in that case we have $\overline{\delta}(\boldsymbol{1}) = 0$ and $$\overline{\delta}(\boldsymbol{z}) = \lim_{\varepsilon \downarrow 0} \delta(\boldsymbol{z}+ \varepsilon \boldsymbol{1}) \leqslant 0
\hspace{3ex}\text{for all $\boldsymbol{z}\in [0,1)^d$}.$$ Now consider for any $\boldsymbol{z}\in [0,1)^d$ and any $\varnothing\neq u\subsetneq [d]$ the closed anchored box $[\boldsymbol{0}, (\boldsymbol{z}_u{:}\boldsymbol{1}_{-u})]$. Due to $P_n\subset[0,1)^d \cup \{\boldsymbol{1}\}$, it contains exactly the same number of points from $P_n$ as the anchored box $[\boldsymbol{0}, (\boldsymbol{z}_u{:}\boldsymbol{z}^*_{-u})]$, where $\boldsymbol{z}^*$ is defined by $z_j^* := \max( \{x_{0,j}, \ldots, x_{n-1,j}\}\setminus \{1\})$ for $j=1, \ldots,d$ taking $z_j^*=0$ in case it is $\max(\varnothing)$. Consequently, we have $$\overline{\delta}(\boldsymbol{z}_u{:}\boldsymbol{1}_{-u}) \leqslant\overline{\delta}(\boldsymbol{z}_u{:}\boldsymbol{z}^*_{-u}) \leqslant 0.
$$ Hence for $d=1$ we have equivalence of [\[closed_NPLD\]](#closed_NPLD){reference-type="eqref" reference="closed_NPLD"} and NPLD for all $P_n \subset [0,1]$. But if $d\geqslant 2$, then for arbitrary $P_n \subset [0,1]^d$ not contained in $[0,1)^d \cup \{\boldsymbol{1}\}$ the NPLD property does not necessarily imply condition [\[closed_NPLD\]](#closed_NPLD){reference-type="eqref" reference="closed_NPLD"}, as a trivial example with $d=2$, $n=1$, $P_n= \{(1,1/2)\}$ shows: $\delta(\boldsymbol{z}) = - \mathrm{VOL}([\boldsymbol{0},\boldsymbol{z})) \leqslant 0$ for all $\boldsymbol{z}\in [0,1]^d$, but $\overline{\delta}((1,1/2)) = 1-1/2 =1/2 >0$.*
For $d=1$, if the points in $\tilde P_n$ are $1-x_i$ for the points $x_i$ of $P_n$, then $$\overline{\delta}(z;P_n)+\delta(1-z;\tilde P_n) = 0,$$ i.e., $\overline{\delta}(z;P_n)=-\delta(1-z;\tilde P_n)$ for all $z\in [0,1]$. Then due to Observations [Observation 1](#obs:nnldandclosesdiscrep){reference-type="ref" reference="obs:nnldandclosesdiscrep"} and [Observation 2](#obs:npldalmostiff){reference-type="ref" reference="obs:npldalmostiff"}, reflections of NNLD points are NPLD points and vice versa for $d=1$.
In addition to reflection, we consider another useful transformation. Let $\tilde\boldsymbol{x}_i$ be the base $b$ Hammersley points for $i=0,\dots,n-1$ where $n=b^m$ and $d=2$. Then [@dick:krit:2006] show that $$\begin{aligned}
\label{eq:npldpts}
\boldsymbol{x}_i = (1/n+\tilde x_{i,1},1-\tilde x_{i,2})\end{aligned}$$ are NPLD.
## Completely monotone functions
Here we define completely monotone functions, describing them in words before giving the formal definition. If $\boldsymbol{x}\leqslant\boldsymbol{z}$, then a completely monotone function can increase but not decrease if any $x_j$ is replaced by $z_j$. That is $f(\boldsymbol{x}_{-j}{:}\boldsymbol{z}_j)-f(\boldsymbol{x})\geqslant 0$ always holds. Next, the size of this difference can only be increasing as some other component $x_k$ is increased to $z_k$, so certain differences of differences must also be non-negative. This condition must hold for anywhere from $1$ to $d$ applications of differencing. The $|u|$-fold differences of differences are alternating sums of the form $$\Delta_u(\boldsymbol{x},\boldsymbol{z})=
\sum_{v\subseteq u}(-1)^{|u-v|}f(\boldsymbol{x}_{-v}{:}\boldsymbol{z}_v).$$ Note that the coefficient of $f(\boldsymbol{x}_{-u}{:}\boldsymbol{z}_u)$ in $\Delta_u(\boldsymbol{x},\boldsymbol{z})$ is positive.
**Definition 3**. The function $f:[0,1]^d\to\mathbb{R}$ is completely monotone if $\Delta_u(\boldsymbol{x},\boldsymbol{z})\geqslant 0$ for all non-empty $u$ and all $\boldsymbol{x},\boldsymbol{z}\in[0,1]^d$ with $\boldsymbol{x}_u\leqslant\boldsymbol{z}_u$.
In [@aist:dick:2015], Aistleitner and Dick use completely monotone functions to analyze the total variation of $f$ in the sense of Hardy and Krause, denoted by $V_{\mathrm{HK}}(f)$. See [@variation] for an account. From Theorem 2 of [@aist:dick:2015], if $V_{\mathrm{HK}}(f)<\infty$ then we can write $$f(\boldsymbol{x}) = f(\boldsymbol{0})+f^+(\boldsymbol{x})-f^-(\boldsymbol{x})$$ where $f^+$ and $f^-$ are completely monotone functions with $f^+(\boldsymbol{0})=f^-(\boldsymbol{0})=0$. They call $f^+-f^-$ the Jordan decomposition of $f$. The functions $f^\pm$ are uniquely determined.
If $f$ is right-continuous and $V_{\mathrm{HK}}(f)<\infty$ then $f(\boldsymbol{x})=\nu([\boldsymbol{0},\boldsymbol{x}])$ for a uniquely determined signed Borel measure $\nu$, by Theorem 3 of [@aist:dick:2015]. Let this signed measure have Jordan decomposition $\nu=\nu^+-\nu^-$ for ordinary (unsigned) Borel measures $\nu^\pm$. Then $f^\pm(\boldsymbol{x})=\nu^{\pm}([\boldsymbol{0},\boldsymbol{x}]\setminus\{\boldsymbol{0}\})$.
The completely monotone functions that we study take the form $$\begin{aligned}
\label{eq:ourcmf}
f(\boldsymbol{x})=f(\boldsymbol{0}) + \lambda\, \nu([\boldsymbol{0},\boldsymbol{x}])\end{aligned}$$ where $\nu$ is an arbitrary probability measure on $[0,1]^d$ (or, more precisely, on the Borel $\sigma$-algebra of $[0,1]^d$) and $\lambda\geqslant 0$. Note that every right-continuous completely monotone function $f$ on $[0,1]^d$ can be represented in that way, see, e.g., [@elstrodt:2018 II.5.11 Korrespondenzsatz, p. 67].
If $\nu$ is absolutely continuous with respect to the Lebesgue measure, then we may represent $f$, due to the Radon-Nikodym theorem, as $$\begin{aligned}
\label{eq:ourcmf_abs_cont}
f(\boldsymbol{x})=f(\boldsymbol{0}) + \lambda \int_{[\boldsymbol{0},\boldsymbol{x}]}
g(\boldsymbol{z})\,\mathrm{d}\boldsymbol{z}\end{aligned}$$ where $g$ is a probability density on $[0,1]^d$, i.e., a non-negative Lebesgue integrable function on $[0,1]^d$ with integral equal to one.
## Basic result
Here we present the basic integration bounds. To bracket $\mu$ we use up to $2n$ function evaluations using $n$ each for the lower and upper limits. For some constructions it is possible that some function evaluations might be usable in both limits, reducing the cost of computation. For $d=1$ we only need $n+1$ evaluations.
**Theorem 1**. *Let $f$ be a completely monotone function of the form [\[eq:ourcmf\]](#eq:ourcmf){reference-type="eqref" reference="eq:ourcmf"}. Let $P_n= \{\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\} \subset [0,1]^d$, and put $\widetilde{P}_n = \{\boldsymbol{1}- \boldsymbol{x}_0,\dots,\boldsymbol{1}- \boldsymbol{x}_{n-1}\}$.*
- *Let $\widetilde{P}_n$ have non-negative local discrepancy. Then $$\begin{aligned}
\label{eq:upperbound}
\overline\mu =\hat\mu = \frac1n\sum_{i=0}^{n-1}f(\boldsymbol{x}_i)
\geqslant\int_{[0,1]^d}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}.\end{aligned}$$*
- *Let $P_n$ have non-positive local discrepancy. If additionally either $P_n \subset [0,1)^d \cup \{\boldsymbol{1}\}$ or $\nu$ is absolutely continuous with respect to the Lebesgue measure, then $$\begin{aligned}
\label{eq:lowerbound}
\underline\mu=\frac1n\sum_{i=0}^{n-1}f(\boldsymbol{1}-\boldsymbol{x}_i)
\leqslant\int_{[0,1]^d}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}.\end{aligned}$$*
*Proof.* Without loss of generality take $f(\boldsymbol{0})=0$ and $\lambda =1$. Consequently, $f(\boldsymbol{x}) = \nu([\boldsymbol{0}, \boldsymbol{x}])$ for all $\boldsymbol{x}\in [0,1]^d$. We obtain $$\begin{aligned}
\mu &= \int_{[0,1]^d} \nu([\boldsymbol{0}, \boldsymbol{x}]) \,\mathrm{d}\boldsymbol{x}
= \int_{[0,1]^d}\int_{[0,1]^d} 1_{\boldsymbol{z}\leqslant\boldsymbol{x}}
\,\mathrm{d}\nu(\boldsymbol{z})\,\mathrm{d}\boldsymbol{x}.
\end{aligned}$$ Reversing the order of integration, $$\begin{aligned}
\label{eq:mubasic}
\mu&= \int_{[0,1]^d}\int_{[0,1]^d}1_{\boldsymbol{z}\leqslant\boldsymbol{x}}\,\mathrm{d}\boldsymbol{x}\,\mathrm{d}\nu(\boldsymbol{z})
=\int_{[0,1]^d}\mathrm{VOL}([\boldsymbol{z},\boldsymbol{1}])\,\mathrm{d}\nu(\boldsymbol{z}).\end{aligned}$$ Similarly, $$\begin{aligned}
\hat\mu
=\frac1n\sum_{i=0}^{n-1}\nu([\boldsymbol{0}, \boldsymbol{x}_i])
=\frac1n\sum_{i=0}^{n-1}\int_{[0,1]^d}1_{\boldsymbol{z}\leqslant\boldsymbol{x}_i}\,\mathrm{d}\nu(\boldsymbol{z})\end{aligned}$$ from which $$\begin{aligned}
\label{eq:muhatbasic}
\hat \mu&=\int_{[0,1]^d} \frac1n\sum_{i=0}^{n-1}1_{\boldsymbol{z}\leqslant\boldsymbol{x}_i}\,\mathrm{d}\nu(\boldsymbol{z})
=\int_{[0,1]^d}\widehat\mathrm{VOL}([\boldsymbol{z},\boldsymbol{1}])\,\mathrm{d}\nu(\boldsymbol{z}).\end{aligned}$$ Combining [\[eq:mubasic\]](#eq:mubasic){reference-type="eqref" reference="eq:mubasic"} and [\[eq:muhatbasic\]](#eq:muhatbasic){reference-type="eqref" reference="eq:muhatbasic"} the integration error now satisfies $$\begin{aligned}
\label{difference_mu_hat_mu}
\hat\mu-\mu&=\int_{[0,1]^d}
\Bigl(\widehat\mathrm{VOL}([\boldsymbol{z},\boldsymbol{1}])-\mathrm{VOL}([\boldsymbol{z},\boldsymbol{1}])\Bigr) \,\mathrm{d}\nu(\boldsymbol{z})\notag\\
&=\int_{[0,1]^d}\overline{\delta}(\boldsymbol{1}- \boldsymbol{z}; \widetilde{P}_n)\,\mathrm{d}\nu(\boldsymbol{z}),\end{aligned}$$ where $\overline{\delta}(\boldsymbol{1}- \boldsymbol{z}; \widetilde{P}_n)$ is the local discrepancy of $\widetilde{P}_n$ with respect to the anchored closed box $[\boldsymbol{0}, \boldsymbol{1}- \boldsymbol{z}]$. Recall that $\nu$ is a positive measure.
For part (i), let $\widetilde{P}_n$ have the NNLD property. Due to Observation [Observation 1](#obs:nnldandclosesdiscrep){reference-type="ref" reference="obs:nnldandclosesdiscrep"} we have $\overline{\delta}(\boldsymbol{1}- \boldsymbol{z}; \widetilde{P}_n) \geqslant 0$ for all $\boldsymbol{z}\in [0,1]^d$. Hence $\hat\mu\geqslant\mu$, establishing [\[eq:upperbound\]](#eq:upperbound){reference-type="eqref" reference="eq:upperbound"}.
For part (ii), let $\widetilde{P}_n$ have the NPLD property. If additionally $\widetilde{P}_n \subset [0,1)^d \cup \{\boldsymbol{1}\}$, then Observation [Observation 2](#obs:npldalmostiff){reference-type="ref" reference="obs:npldalmostiff"} ensures that $\overline{\delta}(\boldsymbol{1}- \boldsymbol{z}; \widetilde{P}_n) \leqslant 0$ for all $\boldsymbol{z}\in [0,1]^d$, establishing $\hat\mu \leqslant\mu$. If instead $\nu$ is absolutely continuous with respect to the Lebesgue measure, then we can replace $\overline{\delta}(\boldsymbol{1}- \boldsymbol{z}; \widetilde{P}_n)$ in [\[difference_mu_hat_mu\]](#difference_mu_hat_mu){reference-type="eqref" reference="difference_mu_hat_mu"} by $\delta(\boldsymbol{1}- \boldsymbol{z}; \widetilde{P}_n)$ without changing the integral. Hence we get again $\hat\mu \leqslant\mu$. In any case, exchanging the roles of $P_n$ and $\widetilde{P}_n$ establishes [\[eq:lowerbound\]](#eq:lowerbound){reference-type="eqref" reference="eq:lowerbound"}. ◻
Theorem [Theorem 1](#thm:basic){reference-type="ref" reference="thm:basic"} provides an upper bound for $\mu$ when sampling from reflected NNLD points. This bound will approach $\mu$ as $n\to\infty$ if those points also satisfy $D_n^*\to0$ as $n\to\infty$. To get a lower bound we can use reflected NPLD points, provided that either $\nu$ is absolutely continuous or those points all belong to $[0,1)^d\cup\{\boldsymbol{1}\}$. The NPLD points could be those given by equation [\[eq:npldpts\]](#eq:npldpts){reference-type="eqref" reference="eq:npldpts"}. We find in Section [5](#sec:netconstructions){reference-type="ref" reference="sec:netconstructions"} that NPLD points are not as simple to construct as NNLD points.
## Example
Here is a simple example to illustrate these bounds. The integrand is known to be completely monotone because it is a multivariate cumulative distribution function (CDF). For $\boldsymbol{x}\in[0,1]^2$ we take $$\label{ex:f1}
f(\boldsymbol{x}) = \Pr( X_1\leqslant x_1, X_2\leqslant x_2)$$ for $\boldsymbol{X}\sim\mathcal{N}(0,\Sigma)$ with $\Sigma=
\bigl(\begin{smallmatrix}1&\rho\\\rho&1\end{smallmatrix}\bigr)$ using $\rho=0.7$. Due to [\[eq:upperbound\]](#eq:upperbound){reference-type="eqref" reference="eq:upperbound"}, we can compute an upper bound for $\mu=\int_{[0,1]^2}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ by sampling at points $\boldsymbol{1}-\boldsymbol{x}_i$ where $\boldsymbol{x}_i\in[0,1]^2$ are the first $n=b^m$ Hammersley points in any base $b\geqslant 2$. We can compute a lower bound for $\mu$ by first transforming Hammersley points via [\[eq:npldpts\]](#eq:npldpts){reference-type="eqref" reference="eq:npldpts"} to get NPLD points $\boldsymbol{x}_i$ and then sampling at $\boldsymbol{1}-\boldsymbol{x}_i$. Note that the point sets in these bounds are not extensible in that the points for $n=b^m$ are not necessarily reused for $n=b^{m+1}$.
Figure [1](#fig:example1){reference-type="ref" reference="fig:example1"} shows the results for $n=2^m$ and $1\leqslant m\leqslant 13$. Over the given range, $n(\overline\mu-\underline\mu)$ increases with $n$ while $n(\overline\mu-\underline\mu)/\log(n)$ decreases with $n$. The computed upper and lower bounds for $n=2^{13}$ show that $$0.5618735\leqslant\mu\leqslant 0.5619890.$$ This function is so smooth and the dimension is so small that comparable accuracy could be attained by standard low dimensional integration methods with many fewer function evaluations. However, these computations took approximately five seconds in R on a MacBook Air M2 laptop, using the `mvtnorm` package [@mvtnorm:book; @mvtnorm] to compute $f$. A more efficient integration could save only about five seconds and it would not come with guaranteed bounds.
![[\[fig:example1\]]{#fig:example1 label="fig:example1"} The top panel shows upper and lower bounds for $\mu=\int_{[0,1]^2}f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}$ using transformations of the Hammersley points and $n=2^m$ for $1\leqslant m\leqslant 13$. The bottom panel plots the difference between those upper and lower bounds versus $n$, on a logarithmic scale. ](figexample1.pdf){#fig:example1 width=".9\\hsize"}
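The computations above were done in R with the `mvtnorm` package. As a rough Python counterpart (ours; we assume `scipy.stats.multivariate_normal` provides the bivariate normal CDF), the following sketch carries out the same recipe; it is meant to show the mechanics of the two bounds rather than to reproduce Figure [1](#fig:example1){reference-type="ref" reference="fig:example1"}.

```python
import numpy as np
from scipy.stats import multivariate_normal

rho, m = 0.7, 8
n = 2 ** m
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

# base-2 Hammersley points (NNLD) via digit reversal
i = np.arange(n)
digits = (i[:, None] >> np.arange(m)) & 1                        # binary digits of i
i_rev = (digits * (2 ** np.arange(m))[::-1]).sum(axis=1)
ham = np.column_stack([i / n, i_rev / n])

upper = mvn.cdf(1.0 - ham).mean()                                # sample at 1 - x_i, x_i NNLD
npld = np.column_stack([1.0 / n + ham[:, 0], 1.0 - ham[:, 1]])   # NPLD points (1/n + h1, 1 - h2)
lower = mvn.cdf(1.0 - npld).mean()                               # sample at 1 - x_i, x_i NPLD
# lower <= mu <= upper
```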
# More about NNLD points {#sec:nnldproperties}
Here we collect some observations about properties that any $n\geqslant 1$ NNLD points in $[0,1]^d$ must necessarily have. Then we use those properties to describe constraints that the NNLD property imposes on customary QMC constructions (lattices and digital nets). Finally we show that the NNLD and NPLD properties are preserved by tensor products.
The first and most obvious property of NNLD points is that $\boldsymbol{0}$ must be one of those points or else there is a box $B=[\boldsymbol{0},\boldsymbol{a})$ with $0=\widehat\mathrm{VOL}(B)<\mathrm{VOL}(B)$ so that $\delta(\boldsymbol{a})<0$. Next it must be true that all $n$ points belong to $[0,1-1/n]^d$. Suppose to the contrary that $x_{i1}>1-1/n$ for some $0\leqslant i <n$. Then for some $\epsilon>0$ there exists $B=[0,1-1/n+\epsilon)\times[0,1)^{d-1}$ with $\widehat\mathrm{VOL}(B) \leqslant(n-1)/n<\mathrm{VOL}(B)$ so that the point set is not NNLD. The same argument applies if $x_{ij}>1-1/n$ for any $i$ and any $j$.
Trivial constructions of NNLD points have $\boldsymbol{x}_i=(i/n)\boldsymbol{1}\in[0,1]^d$ for $0\leqslant i<n$. We observe that these points as well as the Hammersley points for $d=2$ have variables that are positively correlated. We will use a general positive dependence property in Sections [5](#sec:netconstructions){reference-type="ref" reference="sec:netconstructions"} and [6](#sec:rankone){reference-type="ref" reference="sec:rankone"} to construct more NNLD point sets. The NPLD construction in [\[eq:npldpts\]](#eq:npldpts){reference-type="eqref" reference="eq:npldpts"} creates a negative lower orthant dependence property for the components of $\boldsymbol{x}_i\in[0,1]^2$.
Many of the constructions $P_n$ we consider are projection regular by which we mean that the projections of $P_n$ onto each single coordinate are equal to the full set $\{0,1/n,2/n,\dots,(n-1)/n\}$. Projection regularity is usually considered advantageous in QMC, as it guarantees a certain structure and even distribution of the integration node set, and simplifies the derivation of error bounds. However, combined with the NNLD property, it imposes a constraint on the point set that we will use to rule out certain constructions.
**Proposition 1**. *Let $P_n$ be a point set with $n$ points in $[0,1)^d$ that is projection regular. If $P_n$ has the NNLD property, then $P_n$ must contain the point $$\boldsymbol{x}_*=\left(\frac{n-1}{n},\frac{n-1}{n},\ldots,\frac{n-1}{n}\right).$$*
*Proof.* Suppose that $P_n$ is projection regular and does not contain $\boldsymbol{x}_*$. Then there must exist at least one two dimensional projection $Q_n$ of $P_n$ which does not contain the point $\boldsymbol{y}_*:=(\frac{n-1}{n},\frac{n-1}{n})$. Without loss of generality, assume that $Q_n$ is the projection of $P_n$ onto the first and second coordinates.
By projection regularity, exactly one point of $Q_n$ has first coordinate $\frac{n-1}{n}$ and exactly one point has second coordinate $\frac{n-1}{n}$; since $\boldsymbol{y}_*\notin Q_n$, these are two distinct points, and neither of them lies in the box $[\boldsymbol{0},\boldsymbol{y}_*)$. Thus, $$\delta(\boldsymbol{y}_*) = \widehat\mathrm{VOL}([\boldsymbol{0},\boldsymbol{y}_*))-\mathrm{VOL}([\boldsymbol{0},\boldsymbol{y}_*)) \leqslant\frac{n-2}{n}-\frac{(n-1)^2}{n^2}=-\frac{1}{n^2}.$$ Therefore, $P_n$ has negative local discrepancy for the box $[\boldsymbol{0},\boldsymbol{y}_*)\times [0,1)^{d-2}$. ◻
Proposition [Proposition 1](#lem:upper_diag_point){reference-type="ref" reference="lem:upper_diag_point"} has some consequences for well known QMC points. We will consider digital nets and integration lattices. The most widely used and studied integration lattices are rank one lattices. Given a generating vector $\boldsymbol{g}=(g_1,\dots,g_d)\in\mathbb{N}^d$ and a sample size $n\geqslant 1$, a rank one lattice uses points $$\boldsymbol{x}_i =\Bigl( \frac{g_1i}n,\frac{g_2i}n,\dots,\frac{g_di}n\Bigr) \bmod 1$$ for $0\leqslant i<n$ where the modulus operation above takes the fractional part of its argument. These $n$ points form a group under addition modulo 1. More general integration lattices having ranks between $1$ and $d$ can also be constructed [@dick:krit:pill:2022; @nied92; @sloanjoe]. Lattice rules with ranks larger than $1$ are seldom used. They also have the group structure.
**Corollary 1**. *For fixed $d,n \geqslant 1$ there is only one projection regular lattice point set in $[0,1)^d$ that consists of $n$ points and has the NNLD property, namely the lattice point set $$\left\{ \boldsymbol{0}, \frac{1}{n} \boldsymbol{1}, \frac{2}{n} \boldsymbol{1}, \ldots, \frac{n-1}{n}\boldsymbol{1}\right\},$$ whose points all lie on the main diagonal of the $d$-dimensional unit cube $[0,1)^d$.*
*Proof.* Let $P_n$ be a projection regular lattice point set, consisting of $n$ points in $[0,1)^d$, that has NNLD. Due to Proposition [Proposition 1](#lem:upper_diag_point){reference-type="ref" reference="lem:upper_diag_point"}, $P_n$ has to contain the point $\boldsymbol{x}_*= \frac{n-1}{n} \boldsymbol{1}$. Due to the additive group structure of $P_n$, we have $$k \boldsymbol{x}_* \bmod 1 = \frac{n-k}{n} \boldsymbol{1}\in P_n \hspace{3ex}\text{for $k=0,1,\ldots, n-1$}.$$ The set above has $n$ distinct points, so they must be all of $P_n$. ◻
From Corollary [Corollary 1](#cor:pro_reg_lattice_NNLD){reference-type="ref" reference="cor:pro_reg_lattice_NNLD"} we see, in particular, that the only projection regular rank one lattices that are NNLD are trivial, and equivalent to taking all $g_j=1$. If we also consider lattices that are not projection regular, then we can find constructions that are NNLD and do not only consist of points on the main diagonal of the unit cube $[0,1)^d$. See Theorem [Theorem 3](#thm:non_pro_reg_lattice_NNLD){reference-type="ref" reference="thm:non_pro_reg_lattice_NNLD"}.
Now we look at $(t,m,d)$-nets [@dick:pill:2010; @nied92]. The most widely used $(t,m,d)$-nets are those of Sobol' in base $b=2$. Sobol' points require one to choose parameters known as direction numbers, with those of [@joe:kuo:2008] being especially prominent. By considering the point $\boldsymbol{x}_*=\boldsymbol{1}(1-1/n)$, we often find that such Sobol' points cannot be NNLD. The projection of the points $\boldsymbol{x}_i\in[0,1]^d$ for $d\geqslant 3$ onto the first and third coordinates is projection regular but, for $2\leqslant m\leqslant 20$, it fails to contain the point $(1-1/n,1-1/n)$. Therefore the projection of the Sobol' points onto those two dimensions fails to be NNLD and hence the $d$ dimensional point set is not NNLD either.
Like lattice point sets, digital $(t,m,d)$-nets in base $b\geqslant 2$ have a group structure; this time it is based on the digitwise addition modulo $b$, which is performed in each component separately. Using this group structure and Proposition [Proposition 1](#lem:upper_diag_point){reference-type="ref" reference="lem:upper_diag_point"}, we obtain a corollary with a similar flavor to Corollary [Corollary 1](#cor:pro_reg_lattice_NNLD){reference-type="ref" reference="cor:pro_reg_lattice_NNLD"}, although with less dramatic consequences.
**Corollary 2**. *Let $d,m \geqslant 1$ and $b\geqslant 2$. Let $$\alpha_{b,m} = \sum_{\nu =1}^m b^{-\nu} = \frac{1-b^{-m}}{b-1}.$$ On the one hand, any digital $(t,m,d)$-net in base $b\geqslant 2$ that is projection regular and has the NNLD property contains the cyclic subgroup $$\{ \boldsymbol{0}, \alpha_{b,m} \boldsymbol{1},2 \alpha_{b,m} \boldsymbol{1}, \ldots, (b-1) \alpha_{b,m}\boldsymbol{1}\},$$ which consists of $b$ points on the main diagonal.*
*On the other hand, any $(t,m,d)$-net in base $b\geqslant 2$ has at most $b^{t+ \lceil \frac{m-t}{d} \rceil}$ points on the main diagonal.*
*Proof.* Let $n=b^m$, and let $P_n$ be a projection regular digital $(t,m,d)$-net, consisting of $n$ points in $[0,1)^d$, that has NNLD. Due to Proposition [Proposition 1](#lem:upper_diag_point){reference-type="ref" reference="lem:upper_diag_point"}, $P_n$ has to contain the point $\boldsymbol{x}_*= \frac{n-1}{n} \boldsymbol{1}= (b-1)\alpha_{b,m} \boldsymbol{1}$. Using the specific commutative group addition of $P_n$, we see that adding up $\boldsymbol{x}_*$ $k$ times yields $$k \boldsymbol{x}_* = (b-k)\alpha_{b,m} \boldsymbol{1}\in P_n$$ for $k=0,1,\ldots, b-1$.
Now let $P_n$ be an arbitrary $(t,m,d)$-net in base $b$. Put $k:=\lceil \frac{m-t}{d} \rceil$. We may partition the half-open unit cube $[0,1)^d$ into $b^{m-t}$ half-open axis-parallel boxes (of the same shape and of volume $b^{t-m}$) with side length $b^{-k}$ and, possibly, side length $b^{1-k}$. Due to the net property, each of these boxes contains exactly $b^t$ points of $P_n$, and at most $b^k$ of the boxes have a non-trivial intersection with the main diagonal. ◻
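As a small added sanity check of the second bound: for $b=2$, $m=4$, $d=2$ and $t=0$ we get $k=\lceil (m-t)/d\rceil=2$, so a $(0,4,2)$-net in base $2$ has at most $b^{t+k}=4$ of its $16$ points on the main diagonal. The two dimensional Hammersley net of this size attains the bound: its diagonal points are exactly the $\boldsymbol{x}_i=(i/n,i'/n)$ with $i=i'$, i.e., with $i$ a four digit binary palindrome, namely $i\in\{0,6,9,15\}$.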
The next result shows that Cartesian products of finitely many NNLD (or NPLD) point sets are also NNLD (respectively NPLD).
**Lemma 1**. *For positive integers $d_1$, $d_2$, $n_1$ and $n_2$, let $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n_1-1}\in[0,1]^{d_1}$ and $\tilde \boldsymbol{x}_0,\dots,\tilde\boldsymbol{x}_{n_2-1}\in[0,1]^{d_2}$ be NNLD point sets. Let $\boldsymbol{z}_0,\dots,\boldsymbol{z}_{N-1}\in[0,1]^{d_1+d_2}$ for $N=n_1n_2$ be the Cartesian product of those two point sets. Then $\boldsymbol{z}_0,\dots,\boldsymbol{z}_{N-1}$ are NNLD points. If both $\boldsymbol{x}_i$ and $\tilde \boldsymbol{x}_i$ are NPLD then $\boldsymbol{z}_i$ are also NPLD.*
*Proof.* For any $\boldsymbol{z}\in[0,1]^{d_1+d_2}$ define $\boldsymbol{x}=\boldsymbol{z}_{[d_1]}$ and $\tilde \boldsymbol{x}=\boldsymbol{z}_{-[d_1]}$. Let $\mathrm{VOL}_1$, $\mathrm{VOL}_2$ and $\mathrm{VOL}$ denote Lebesgue measure on $[0,1]^{d_1}$, $[0,1]^{d_2}$ and $[0,1]^d$ for $d=d_1+d_2$, respectively. Let $\widehat\mathrm{VOL}_1$, $\widehat\mathrm{VOL}_2$ and $\widehat\mathrm{VOL}$ be empirical measures for $\boldsymbol{x}_i$, $\tilde\boldsymbol{x}_i$ and $\boldsymbol{z}_i$ respectively. If $\boldsymbol{x}_i$ and $\tilde \boldsymbol{x}_i$ are NNLD then $$\begin{aligned}
\widehat\mathrm{VOL}([\boldsymbol{0}_d,\boldsymbol{z}))&=
\widehat\mathrm{VOL}_1([\boldsymbol{0}_{d_1},\boldsymbol{x}))
\widehat\mathrm{VOL}_2([\boldsymbol{0}_{d_2},\tilde\boldsymbol{x}))\\
&\geqslant\mathrm{VOL}_1([\boldsymbol{0}_{d_1},\boldsymbol{x}))
\mathrm{VOL}_2([\boldsymbol{0}_{d_2},\tilde\boldsymbol{x}))\\
&=\mathrm{VOL}([\boldsymbol{0}_d,\boldsymbol{z})).\end{aligned}$$ Therefore $\delta(\boldsymbol{z})\geqslant 0$ and $\boldsymbol{z}_i$ are NNLD. The same argument, with the inequalities reversed, applies to the NPLD case. ◻
# Comparison to Koksma-Hlawka bounds {#sec:kh}
The Koksma-Hlawka inequality is $$\begin{aligned}
\label{eq:koksmahlawka}
|\hat\mu-\mu|\leqslant D_n^* V_{\mathrm{HK}}(f)\end{aligned}$$ where $D_n^*$ denotes again the star discrepancy and $V_{\mathrm{HK}}(f)$ is the total variation of $f$ in the sense of Hardy and Krause. We can be sure that $$\hat\mu-D_n^*V_{\mathrm{HK}}(f)\leqslant\mu \leqslant\hat\mu +D_n^*V_{\mathrm{HK}}(f)$$ but the endpoints of this interval are in general far harder to compute than $\mu$ is. One difficulty is that $V_{\mathrm{HK}}(f)$ is a sum of $2^d-1$ Vitali variations (see [@variation]) that in general are harder to compute than $f$ itself is. However when $\tilde{f}$, defined by $\tilde f(\boldsymbol{x})=f(\boldsymbol{1}-\boldsymbol{x})$ for every $\boldsymbol{x}$, is completely monotone then it is useful to work with an alternative definition of total variation $V_{\mathrm{HK}\boldsymbol{0}}$ (see [@aist:dick:2015]). For this definition, $V_{\mathrm{HK}\boldsymbol{0}}(\tilde f) = V_{\mathrm{HK}}(f)$, and $V_{\mathrm{HK}\boldsymbol{0}}(\tilde{f})=\tilde{f}(\boldsymbol{1})- \tilde{f}(\boldsymbol{0}) = f(\boldsymbol{0})-f(\boldsymbol{1})$, see [@aist:dick:2015].
With an expression for total variation we still need a value or a bound for $D_n^*$. The computation of $D_n^*$ is expensive, but in some instances it might be worth doing, and for a given set of points we could pre-compute $D_n^*$. It is possible to compute $D_n^*$ exactly at cost $O(n^{d/2+1})$ for fixed $d$ as $n\to\infty$, see [@dobk:epps:mitc:1996]. The cost to compute $D_n^*$ is exponential in the dimension $d$. If $n=d\to\infty$ together then computation of $D_n^*$ is NP-complete, see [@gnew:sriv:winz:2008; @gia:kna:wah:wer:2012]. Nevertheless, there are algorithms known that provide either upper and lower bounds for $D_n^*$ in moderate dimension, see [@Thie:2001], or lower bounds for $D_n^*$ even in high dimensions, see [@gnew:wahl:winz:2012]. For these and other facts about computing $D_n^*$, cf. [@doer:gnew:wals:2014].
If we have computed a value $\varepsilon \geqslant D_n^*(P_n)$, then we get an interval $$\hat\mu \pm \varepsilon (f(\boldsymbol{0})-f(\boldsymbol{1}))$$ that is sure to contain $\mu$, when $f(\boldsymbol{1}-\boldsymbol{x})$ is completely monotone, whether or not $P_n$ is NNLD.
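For completeness, a tiny helper (ours) that turns a precomputed bound $\varepsilon\geqslant D_n^*$ into the certificate interval above; it assumes $f(\boldsymbol{1}-\boldsymbol{x})$ is completely monotone, so that the variation equals $f(\boldsymbol{0})-f(\boldsymbol{1})$.

```python
def koksma_hlawka_interval(f_values, f_at_zero, f_at_one, eps):
    """Interval mu_hat +/- eps * (f(0) - f(1)) that is guaranteed to contain mu."""
    mu_hat = sum(f_values) / len(f_values)
    halfwidth = eps * (f_at_zero - f_at_one)
    return mu_hat - halfwidth, mu_hat + halfwidth
```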
# Digital net constructions {#sec:netconstructions}
The NNLD points of [@declerck:1986; @gaba:1967] are two dimensional Hammersley points which are a special kind of digital nets [@dick:pill:2010] in which the generator matrices are permutation matrices. In this section we show that digital nets constructed with permutation matrices can be used to get NNLD points with $n=b^m$ points for any integer base $b\geqslant 2$ in any dimension $d\geqslant 1$. This generalizes the result of [@declerck:1986; @gaba:1967] which holds for $d=2$. We obtain this generalization by a probabilistic argument using the notion of associated random variables from reliability theory [@esar:pros:walk:1967]. We also show that there is a limit to how good digital nets can be when their generator matrices are permutation matrices.
## Permutation digital nets
Here we describe how permutation digital nets are constructed. We won't need the more general definition of digital nets until we study them more closely in Section [5.3](#sec:morenets){reference-type="ref" reference="sec:morenets"}.
For a dimension $d\geqslant 1$, an integer base $b\geqslant 2$ and an integer $m\geqslant 1$ we choose $d$ matrices $C^{(j)}\in\mathbb{Z}_b^{m\times m}$. For $n=b^m$ and indices $i=0,1,\dots,n-1$, write $i=\sum_{k=1}^ma_{i,k}b^{k-1}$ for $a_{i,k}\in\mathbb{Z}_b$ and put $\vec{i}=(a_{i,1},\dots,a_{i,m})^\mathsf{T}$. Now let $$\vec{x}_{ij}=C^{(j)}\vec{i}\ \bmod b$$ have components $\vec{x}_{ij}(k)\in\mathbb{Z}_b$. Then $\boldsymbol{x}_i$ has $j$'th component $$x_{ij} = \sum_{k=1}^m\vec{x}_{ij}(k)b^{-k}\in[0,1).$$ Here we use arithmetic modulo $b$ to define the digital nets. It is customary to only use arithmetic modulo $b$ when $b$ is a prime number and to use a generalization based on finite fields when $b=p^r$ for a prime number $p$ and some power $r\geqslant 2$. Our proofs of NNLD properties exploit a monotonicity of integers modulo $b$ whether or not $b$ is a prime.
As an illustration, the first $16$ Hammersley points in base $b\geqslant 2$ for $d=2$ are constructed this way with $$\begin{aligned}
\label{eq:hammillust}
C^{(1)} = \begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
\end{pmatrix}
\quad\text{and}\quad
C^{(2)} = \begin{pmatrix}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
\end{pmatrix}.\end{aligned}$$ Hammersley points for $d=2$ and general $m\geqslant 1$ are constructed similarly, with $C^{(1)}=I_m$ and $C^{(2)}$ a 'reversed' identity matrix as in [\[eq:hammillust\]](#eq:hammillust){reference-type="eqref" reference="eq:hammillust"}. The Hammersley points for $d\geqslant 3$ are constructed using different bases for different components [@hamm:1960].
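The construction is easy to implement. The following sketch (Python; the helper names are ours) generates a permutation digital net in base $b$ directly from the permutations $\pi_j$, and with $b=2$, $m=4$, $\pi_1$ the identity and $\pi_2$ the reversal it reproduces the $16$ Hammersley points displayed above.

```python
import numpy as np

def base_b_digits(i, b, m):
    """Digits a_{i,1},...,a_{i,m} of i in base b, least significant first."""
    return np.array([(i // b**k) % b for k in range(m)])

def permutation_net(perms, b):
    """Digital net whose generator matrices are the permutation matrices
    encoded by perms: perms[j][k] is the (0-based) digit of i that becomes
    the k-th output digit of x_{ij}."""
    d, m = len(perms), len(perms[0])
    n = b**m
    scale = np.array([float(b)**-(k + 1) for k in range(m)])  # b^{-1},...,b^{-m}
    pts = np.empty((n, d))
    for i in range(n):
        a = base_b_digits(i, b, m)
        for j in range(d):
            pts[i, j] = np.dot(a[perms[j]], scale)
    return pts

m = 4
identity = list(range(m))
reversal = list(reversed(range(m)))
hammersley16 = permutation_net([identity, reversal], b=2)  # the 16 points above
print(hammersley16[:4])
```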
## Associated random variables
The settings with $d=1$ or with $n=1$ are trivial so we work with $d\geqslant 2$ and $n>1$. The key ingredient in constructing a short proof of the NNLD property is the notion of associated random variables [@esar:pros:walk:1967] that originated in reliability theory.
**Definition 4**. Random variables $T_1,\dots,T_m$ are associated if, for $\boldsymbol{T}=(T_1,\dots,T_m)$ we have $\mathrm{Cov}( g_1(\boldsymbol{T}),g_2(\boldsymbol{T}))\geqslant 0$ for all pairs of functions $g_1,g_2:\mathbb{R}^m\to\mathbb{R}$ that are nondecreasing in each argument individually and for which $\mathbb{E}(g_1(\boldsymbol{T}))$, $\mathbb{E}(g_2(\boldsymbol{T}))$ and $\mathbb{E}( g_1(\boldsymbol{T})g_2(\boldsymbol{T}))$ all exist.
The next theorem uses points that are a digital net with permutation matrix generators, followed by shifting every component of each point to the right by a distance $1/n$. It shows that they oversample sets of the form $(\boldsymbol{z},\boldsymbol{1}]$.
**Theorem 2**. *For integers $m\geqslant 1$, $b\geqslant 2$ and $d\geqslant 2$, let $\pi_1,\dots,\pi_d$ be permutations of $\{1,\dots,m\}$, not necessarily distinct. For $n=b^m$ and $i=0,\dots,n-1$ and $k=1,\dots,m$ define $a_i(k)\in\mathbb{Z}_b$ via $i=\sum_{k=1}^ma_i(k)b^{k-1}$. If $\boldsymbol{x}_i\in(0,1]^d$ has components $$\begin{aligned}
\label{eq:shiftedpermnet}
x_{ij} = \frac1n+\sum_{k=1}^mb^{-k}a_i(\pi_j(k)),\quad j=1,\dots,d\end{aligned}$$ then for any $\boldsymbol{z}\in[0,1]^d$ $$\label{eq:perm_implies_nnld}
\frac1n\sum_{i=0}^{n-1}\prod_{j=1}^d\mathbbm{1}\{ x_{ij}>1-z_j\}
\geqslant\prod_{j=1}^dz_j.$$*
*Proof.* We define a random index $i\sim \mathbb{U}\{0,1,\dots,n-1\}$ which then implies that for each index $j$ the digits $a_i(\pi_j(k))\sim\mathbb{U}(\mathbb{Z}_b)$ independently for $k=1,\dots,m$. For each $j=1,\dots,d$ we have $x_{ij}\sim\mathbb{U}\{
1/n,2/n,\dots,1\}$. Therefore for any $z_j\in[0,1]$, $\Pr( x_{ij}>1-z_j)\geqslant z_j$.
Let $T_j$ be the value of the random variable $x_{ij}$ where $i$ is random and $j$ is not. Letting $\gamma_j$ be the inverse of the permutation $\pi_j$, we may write $$T_j=x_{ij}=\frac1n+\sum_{k=1}^mb^{-\gamma_j(k)} a_i(k).$$ Independent random variables $a_i(k)$ are associated by Theorem 2.1 of [@esar:pros:walk:1967]. Then $T_1,\dots,T_d$ are associated by result P4 of [@esar:pros:walk:1967] because they are nondecreasing functions of $a_i(1),\dots,a_i(m)$.
For $d=2$, let $g_1(\boldsymbol{T})=\mathbbm{1}\{x_{i1}>1-z_1\}$ and $g_2(\boldsymbol{T})=\mathbbm{1}\{x_{i2}>1-z_2\}$. These are nondecreasing functions of associated random variables and so by the definition of associated random variables $$\Pr( x_{i1}>1-z_1, x_{i2}>1-z_2) \geqslant\Pr(x_{i1}>1-z_1)\Pr(x_{i2}>1-z_2).$$ Next, for $2< r\leqslant d$ let $g_1(\boldsymbol{T})= \prod_{j=1}^{r-1}\mathbbm{1}\{x_{ij}>1-z_j\}$ and $g_2(\boldsymbol{T})=\mathbbm{1}\{x_{ir}>1-z_r\}$. Using induction we conclude that with our random $i$, $$\Pr( x_{ij}>1-z_j,\ j=1,\dots,d)\geqslant\prod_{j=1}^d\Pr(x_{ij}>1-z_j)
\geqslant\prod_{j=1}^dz_j$$ which is equivalent to [\[eq:perm_implies_nnld\]](#eq:perm_implies_nnld){reference-type="eqref" reference="eq:perm_implies_nnld"}. ◻
**Corollary 3**. *For integer $b\geqslant 2$ and dimension $d\geqslant 2$ let $\tilde\boldsymbol{x}_0,\dots,\tilde\boldsymbol{x}_{n-1}\in[0,1]^d$ be points of a digital net constructed in base $b$ using permutation matrices as generators. Then the points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}\in[0,1]^d$ with $x_{ij} = 1-(1/n+\tilde x_{ij})$ are NNLD.*
*Proof.* Pick $\boldsymbol{z}\in[0,1]^d$. Now $\mathbbm{1}\{x_{ij}<z_j\}=\mathbbm{1}\{\tilde x_{ij} + 1/n>1-z_j\}$ and so $$\begin{aligned}
\widehat\mathrm{VOL}([\boldsymbol{0},\boldsymbol{z}))=\frac{1}{n}\sum_{i=0}^{n-1}\prod_{j=1}^d\mathbbm{1}\{x_{ij} <z_j\}
=\frac{1}{n} \sum_{i=0}^{n-1}\prod_{j=1}^d\mathbbm{1}\{\tilde x_{ij} +1/n>1-z_j\}
\geqslant\prod_{j=1}^dz_j\end{aligned}$$ by Theorem [Theorem 2](#thm:perm_implies_nnld){reference-type="ref" reference="thm:perm_implies_nnld"}. ◻
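Because $\widehat{\mathrm{VOL}}([\boldsymbol{0},\boldsymbol{z}))$ is constant when each $z_j$ moves between consecutive coordinate values while $\prod_jz_j$ is largest at the upper end of each such cell, the NNLD property can be certified by testing only the finitely many $\boldsymbol{z}$ whose coordinates are point coordinates or $1$. A small numerical sanity check of Corollary 3 along these lines (Python, reusing the `permutation_net` helper from the previous sketch; practical only for small $n$ and $d$) is given below.

```python
import itertools
import numpy as np

def is_nnld(points, tol=1e-12):
    """Certify non-negative local discrepancy of a finite point set.

    It suffices to test z whose coordinates are point coordinate values or 1:
    between consecutive such values the empirical volume of [0, z) is constant
    while prod(z) is maximized at the upper corner.  Exact for dyadic points."""
    n, d = points.shape
    grids = [np.unique(np.concatenate([points[:, j], [1.0]])) for j in range(d)]
    for z in itertools.product(*grids):
        z = np.array(z)
        vol_hat = np.mean(np.all(points < z - tol, axis=1))
        if vol_hat < np.prod(z) - tol:
            return False
    return True

# Corollary 3: reflect a 1/n-shifted permutation digital net
b, m, d = 2, 3, 3
n = b**m
perms = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
tilde = permutation_net(perms, b)   # permutation_net from the previous sketch
x = 1.0 - (1.0 / n + tilde)
print(is_nnld(x))                   # True, as guaranteed by Corollary 3
```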
For $d=2$ it was possible to turn an NNLD point set into an NPLD point set in [\[eq:npldpts\]](#eq:npldpts){reference-type="eqref" reference="eq:npldpts"} which includes a reflection $x_{i,2}=1-\tilde x_{i,2}$. If we were to reflect two or more components of an NNLD point set, then those components would take on a positive upper orthant dependence, which does not generally provide the negative lower orthant dependence we want for NPLD points. For projection regular NNLD points the reflection of $s\geqslant 2$ components will contain $\boldsymbol{1}_s/n$ and there will be a box $B=[\boldsymbol{0}_s,\boldsymbol{1}_s(1/n+\epsilon))$ with $\delta(B)=1/n-(1/n +\epsilon)^s >0$ for small enough $\epsilon>0$.
## Quality of permutation digital nets {#sec:morenets}
It is clear on elementary grounds that a permutation digital net with two identical permutations among $\pi_1,\dots,\pi_d$ would be very bad. The resulting points would satisfy $x_{ij}=x_{ij'}$ for $0\leqslant i<n$ and some $1\leqslant j<j'\leqslant d$. Here we show that our restriction to permutation digital nets rules out the best digital nets when $d\geqslant 3$. We begin with the definitions of these nets.
**Definition 5**. For integers $d\geqslant 1$, $b\geqslant 2$, and vectors $\boldsymbol{k},\boldsymbol{a}\in\mathbb{N}^d$ with $a_j\in\mathbb{Z}_{b^{k_j}}$ for $j=1,\dots,d$ the Cartesian product $$\mathcal{E}(\boldsymbol{k},\boldsymbol{a})=\prod_{j=1}^d\Bigl[ \frac{a_j}{b^{k_j}},\frac{a_j+1}{b^{k_j}}
\Bigr)$$ is an elementary interval in base $b$.
**Definition 6**. For integers $b\geqslant 2$, $d\geqslant 1$ and $0\leqslant t\leqslant m$, the $n$ points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}$ are a $(t,m,d)$-net in base $b$ if $$\widehat\mathrm{VOL}(\mathcal{E}(\boldsymbol{k},\boldsymbol{a}))=\mathrm{VOL}(\mathcal{E}(\boldsymbol{k},\boldsymbol{a}))$$ holds for all elementary intervals in base $b$ for which $\sum_{j=1}^dk_j\leqslant m-t$.
Digital nets are $(t,m,d)$-nets. Other things being equal, smaller values of $t$ denote better equidistribution of the points $\boldsymbol{x}_i$ which translates into a smaller upper bound on $D_n^*$ and hence a smaller upper bound in the Koksma-Hlawka inequality. From Theorem 4.10 of [@nied92] $$\begin{aligned}
\label{eq:niedrate}
D_n^* = O\Bigl( \frac{b^t\log(n)^{d-1}}n\Bigr)
+O\Bigl( \frac{\log(n)^{d-2}}n\Bigr)\end{aligned}$$ where the implied constants depend only on $d$ and $b$. The powers of $\log(n)$ are not negligible but they are also not seen in examples of integration errors [@wherearethelogs].
The quality parameter of a permutation digital net can be very bad. For $d=2$, taking the Hammersley construction yields $t=0$ which is the best possible value. Here we show that for $d\geqslant 3$, the best available values of $t$ are far from optimal.
The following definition and result are based on [@mato:1998 Sect. 2.3].
**Construction 1** (Digital Construction of $(t,m,d)$-Nets). *For prime $b$, and $C^{(1)},\dots,C^{(d)} \in (\mathbb{F}_b)^{m\times m}$, let $\mathcal{C} = \{ C^{(1)}, \ldots, C^{(d)} \}$. For $h\in \mathbb{F}_b^m$ define $p(h) \in [0,1)^d$ componentwise by its $b$-adic digit expansion $$p(h)_j =
\delta^{(j)}_1(h) b^{-1} + \delta^{(j)}_2 (h) b^{-2} + \cdots + \delta^{(j)}_m(h) b^{-m}
\in [0,1), \hspace{3ex} j=1,\ldots,d,$$ where $\delta^{(j)}(h) = (\delta^{(j)}_1(h), \ldots, \delta^{(j)}_m(h) )$ is simply the vector $C^{(j)}h\in \mathbb{F}^m_b$. We define the point set $$\begin{aligned}
\label{eq:pointset}
P(\mathcal{C}) = (p(h))_{h\in \mathbb{F}^m_b}.\end{aligned}$$ Clearly, $|P(\mathcal{C})| = b^m$.*
*To assess the quality of $P(\mathcal{C})$, we define the *quality criterion $\rho(\mathcal{C})$*: For $\boldsymbol{m}=(m_1,m_2, \ldots, m_d)\in \{0,1,\ldots,m\}^d$ with $|\boldsymbol{m}|=\sum_{j=1}^dm_j$ let $$\mathcal{C}^{(\boldsymbol{m})}
= \begin{pmatrix}
C^{(1)}(1{:}m_1, \cdot )\\
C^{(2)}(1{:}m_2,\cdot)\\
\vdots\\
C^{(d)}(1{:}m_d,\cdot)
\end{pmatrix}\in \mathbb{F}_b^{|\boldsymbol{m}|\times m}$$ where $C^{(j)}(1{:}m_j,\cdot)\in\mathbb{F}_b^{m_j\times m}$ represents the first $m_j$ rows of $C^{(j)}$. Now $\rho(\mathcal{C})$ is the maximum number $\rho \in \{0,1,\ldots,m\}$ such that for all $\boldsymbol{m}\in \{0,1,\ldots,m\}^d$ with $|\boldsymbol{m}| = \rho$ we have $\mathop{\mathrm{rank}}(\mathcal{C}^{(\boldsymbol{m})}) = \rho$.*
**Proposition 2**. *Let $b,m,\mathcal{C}$, and $P(\mathcal{C})$ be as in Construction [Construction 1](#Cons_6){reference-type="ref" reference="Cons_6"}. Then $P(\mathcal{C})$ is a $(t,m,d)$-net for $t= m - \rho(\mathcal{C})$.*
**Observation 3**. *The proposition shows that the best possible $t$-value $t(\mathcal{C})$ of $P(\mathcal{C})$ is at most $m - \rho(\mathcal{C})$. But similar arguments as in the corresponding proof of [@mato:1998 Proposition 2.7] show that actually $$t(\mathcal{C}) = m - \rho(\mathcal{C}).$$*
**Proposition 3**. *Let $V:= \{v_1, \ldots, v_m \}$ be a set of linearly independent vectors in $\mathbb{F}_b^m$. Let $m= \ell d + r$, where $\ell\in \mathbb{N}_0$ and $0 \leqslant r < d$. If the rows $C_k^{(j)}$, $k=1, \ldots,m$, of the matrices $C^{(j)}$, $j=1,\ldots,d$, are all contained in $V$, then $\rho(\mathcal{C}) \leqslant 2 \lfloor m/d \rfloor + 1$. Therefore, the smallest $t$-value $t(\mathcal{C})$ of $P(\mathcal{C})$ satisfies $$t(\mathcal{C}) \geqslant(d-2) \lfloor m/d \rfloor + r - 1.$$*
*Proof.* Consider the $m$ row vectors $$C^{(1)}_1, C^{(2)}_1, \ldots, C^{(d)}_1, \quad C^{(1)}_2, C^{(2)}_2, \ldots,C^{(d)}_2,\quad\ldots\quad, C^{(1)}_{\ell+1},C^{(2)}_{\ell+1},\ldots,C^{(r)}_{\ell +1}.$$
*Case 1*: Two of these row vectors are equal. Assume these rows are $C^{(j)}_k$ and $C^{(j')}_{k'}$. If $j=j'$, then we consider the matrix $C:= \mathcal{C}^{(\boldsymbol{m})}$ with $m_j = \max\{k, k'\}$ and $m_\nu = 0$ for all $\nu \neq j$. Obviously, $\mathop{\mathrm{rank}}(C) \leqslant\max\{k, k'\} - 1$. Hence it follows that $\rho(\mathcal{C}) \leqslant\max\{k, k'\} - 1 \leqslant\lceil m/d \rceil - 1$. If $j\neq j'$, then we consider the matrix $C:= \mathcal{C}^{(\boldsymbol{m})}$ with $m_j = k$, $m_{j'} = k'$, and $m_\nu = 0$ for all $\nu \notin \{j,j'\}$. Obviously, $\mathop{\mathrm{rank}}(C) \leqslant k + k' - 1$. Hence it follows that $\rho(\mathcal{C}) \leqslant k + k' - 1 \leqslant 2\lceil m/d \rceil - 1$.
*Case 2*: All of these row vectors are different. Consider $C^{(d)}_{\ell+1}$. Then there exist $1\leqslant j < d$ and $1\leqslant h \leqslant\ell+1$ or $j = d$ and $1\leqslant h \leqslant\ell$ such that $C^{(d)}_{\ell+1} = C^{(j)}_h$.
Now we argue similarly as in case 1: If $j=d$, then it is easy to see that $\rho(\mathcal{C}) \leqslant\ell= \lfloor m/d \rfloor$. If $j\neq d$, then $\rho(\mathcal{C}) \leqslant h+ \ell \leqslant 2\ell + 1 \leqslant 2 \lfloor m/d \rfloor+1$.
In any case, we have shown that $\rho(\mathcal{C}) \leqslant 2 \lfloor m/d \rfloor+1$. ◻
**Corollary 4**. *Let $m= \ell d + r$, where $\ell\in \mathbb{N}$ and $0 \leqslant r < d$. If $C^{(1)}, \ldots, C^{(d)} \in \mathbb{F}_b^{m\times m}$ are all permutation matrices, then the smallest $t$-value $t(\mathcal{C})$ of $P(\mathcal{C})$ satisfies $$t(\mathcal{C}) \geqslant(d-2) \lfloor m/d \rfloor + r - 1.$$*
*Proof.* This follows directly from Proposition [Proposition 3](#Prp:V_t_value){reference-type="ref" reference="Prp:V_t_value"}, since the rows of the matrices $C^{(1)}, \ldots, C^{(d)}$ are all in $\{ e_1, \ldots, e_m\}$, where $e_i$ denotes the $i$-th standard unit vector of $\mathbb{F}_b^m$. ◻
Let us represent the permutation matrix where row $k$ has a one in column $\pi(k)$ as simply the column vector with entries $\pi(k)$. Then we can represent our permutation nets with an $m\times d$ matrix $\Pi$ with $j$'th column $\pi_j$. For example the Hammersley points with generator matrices $I_m$ and reversed $I_m$ are represented this way by $$\begin{aligned}
\label{eq:hammerpi}
\Pi=\begin{pmatrix}
1 & m\\
2 & m-1\\
\vdots & \vdots\\
m & 1
\end{pmatrix}.\end{aligned}$$ For $d=3$ we want $\Pi\in\{1,\dots,m\}^{m\times 3}$ with the largest possible value of $$\rho=\min\bigl\{ k+k' \mid \Pi_{k,j}=\Pi_{k',j'}, 1\leqslant j<j'\leqslant 3\bigr\}-1.$$ Then we get quality parameter $t=m-\rho$. If we simply adjoin a third column to $\Pi$ in [\[eq:hammerpi\]](#eq:hammerpi){reference-type="eqref" reference="eq:hammerpi"} the best $\rho$ we can get is $m/2$ if $m$ is even and $(m+1)/2$ if $m$ is odd. These lead to $t\geqslant m/2$ if $m$ is even and $t\geqslant(m-1)/2$ if $m$ is odd, which is much worse than the bound in Corollary [Corollary 4](#cor:tlowerbound){reference-type="ref" reference="cor:tlowerbound"}. For $t=m/2$ the first term in [\[eq:niedrate\]](#eq:niedrate){reference-type="eqref" reference="eq:niedrate"} is $O(b^{m/2}\log(n)^2/n)=O(\log(n)^2/\sqrt{n})$ because $b=n^{1/m}$.
If $m=3\ell$, then we can choose the first $\ell$ rows of $\Pi$ to be $$\begin{pmatrix}
1 & 2 & 3\\
4 & 5 & 6\\
\vdots & \vdots &\vdots\\
3\ell-2 &3\ell-1 & 3\ell\\
\end{pmatrix}.$$ Let us label these first $\ell$ rows of $\Pi$ by $\boldsymbol{r}_1,\boldsymbol{r}_2,\dots,\boldsymbol{r}_\ell\in\mathbb{N}^3$. Now, for $\boldsymbol{r}= (a,b,c)$ let $\boldsymbol{r}'=(b,c,a)$ and $\boldsymbol{r}''=(c,a,b)$ be one and two rotations of the elements of $\boldsymbol{r}$ to the left with wraparound. By taking the rows of $\Pi$ in this order $$\boldsymbol{r}_1,\boldsymbol{r}_2,\dots,\boldsymbol{r}_\ell,
\enspace \boldsymbol{r}_{\ell}',\boldsymbol{r}_{\ell-1}',\dots,\boldsymbol{r}_1',
\enspace \boldsymbol{r}_{\ell}'',\boldsymbol{r}_{\ell-1}'',\dots,\boldsymbol{r}_1''$$ we get $\rho = 2\ell$ and hence $t=m/3$. This is very close to the bound $\lfloor m/d\rfloor+0-1=m/3-1$ from Corollary [Corollary 4](#cor:tlowerbound){reference-type="ref" reference="cor:tlowerbound"}. We prefer the ordering $$\boldsymbol{r}_1,\boldsymbol{r}_2,\dots,\boldsymbol{r}_\ell,\enspace
\boldsymbol{r}'_\ell,\boldsymbol{r}''_\ell,\enspace
\boldsymbol{r}_{\ell-1}',\boldsymbol{r}_{\ell-1}'',\enspace
\boldsymbol{r}'_{\ell-2},\boldsymbol{r}''_{\ell-2},\enspace
\enspace \dots\enspace
\boldsymbol{r}_2',\boldsymbol{r}_2'',\enspace \boldsymbol{r}_1', \boldsymbol{r}_1''$$ because while it attains the same value of $t$ it has fewer pairs of columns for which $k+k'=2\ell+1$. With $t=m/3$ for $d=3$ the first term in [\[eq:niedrate\]](#eq:niedrate){reference-type="eqref" reference="eq:niedrate"} is $O(b^t\log(n)^2/n)=O(n^{-2/3}\log(n)^2)$.
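The value of $\rho$, and hence $t=m-\rho$, can be read off from $\Pi$ directly. The short script below (Python; the helper names are ours) builds the preferred rotation ordering for $d=3$ and $m=3\ell$ and evaluates $\rho=\min\{k+k'\mid \Pi_{k,j}=\Pi_{k',j'},\ j<j'\}-1$; it should return $\rho=2\ell$ and $t=\ell=m/3$ for small $\ell$.

```python
import itertools

def rho_of_pi(Pi):
    """rho = min{k + k' : Pi[k][j] == Pi[k'][j'], j < j'} - 1, with k, k' 1-based."""
    m, d = len(Pi), len(Pi[0])
    best = 2 * m   # collisions always exist here, so this is only an initial value
    for j, jp in itertools.combinations(range(d), 2):
        for k in range(m):
            for kp in range(m):
                if Pi[k][j] == Pi[kp][jp]:
                    best = min(best, (k + 1) + (kp + 1))
    return best - 1

def rotation_pi(ell):
    """Rows r_1,...,r_ell followed by (r_ell', r_ell''), ..., (r_1', r_1'')."""
    base = [(3*q + 1, 3*q + 2, 3*q + 3) for q in range(ell)]
    rows = list(base)
    for q in range(ell - 1, -1, -1):
        a, b, c = base[q]
        rows += [(b, c, a), (c, a, b)]     # one and two left rotations
    return rows

for ell in range(1, 5):
    rho = rho_of_pi(rotation_pi(ell))
    print(ell, rho, 3 * ell - rho)         # expect rho = 2*ell and t = ell = m/3
```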
Using the same method for $d=4$ and $m=4\ell$ we can get $\rho=2\ell =m/2$, implying that $t=m/2$, and yielding a rate of $O(b^t\log(n)^3/n)=O(n^{-1/2}\log(n)^3)$. This result for $d=4$ matches the rate for plain MC apart from the power of $\log(n)$. So the 100% error bounds available from NNLD sampling come with a logarithmic accuracy penalty in comparison to plain MC.
A second choice for $d=4$ is to use a Cartesian product of two Hammersley point sets with $\sqrt{n}$ points each. The error of such a Cartesian product would ordinarily be the same as that of the individual Hammersley rules in two dimensions with their reduced sample sizes. That is $O( n^{-1/2}\log(n))$ which is then a better logarithmic factor than the $4$ dimensional permutation nets attain.
For $d=3$ we could also use a Cartesian product of Hammersley points with $n=b^2$ points and a one dimensional grid $\{0,1/n,\dots,1-1/n\}$. This then uses $N=n^2$ points and we expect an error of $O(\log(n)/n)=O(\log(N)/N^{1/2})$ which is a worse rate than we can get with the permutation net in $[0,1]^3$.
## Other generator matrices
Permutation matrices are not the only generator matrices that can produce points with the NNLD property. For digital nets in base $2$, we know from Proposition [Proposition 1](#lem:upper_diag_point){reference-type="ref" reference="lem:upper_diag_point"} that if $C^{(1)}=I_m$ then we must have $C^{(j)}\boldsymbol{1}_m=\boldsymbol{1}_m \bmod 2$. This in turn implies that every row of $C^{(j)}$ must have an odd number of $1$s in it. A numerical search shows there are 221 choices of nonsingular $C^{(2)}$ when $m=4$ and $C^{(1)}=I_4$. Below are some examples: $$\begin{aligned}
C^{(2)} = \begin{pmatrix}
1 & 0 & 0 & 0\\
1 & 1 & 0 & 1\\
0 & 1 & 1 & 1\\
1 & 1 & 1 & 0\\
\end{pmatrix}
\quad \text{or}\quad
\begin{pmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
1 & 0 & 1 & 1\\
1 & 1 & 1 & 0\\
\end{pmatrix}
\quad \text{or}\quad
\begin{pmatrix}
0 & 0 & 1 & 0\\
1 & 0 & 0 & 0\\
1 & 1 & 0 & 1\\
0 & 1 & 0 & 0\\
\end{pmatrix}
.\end{aligned}$$
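A brute-force search of this kind is feasible at this size. The sketch below (Python) reuses the `is_nnld` check introduced after Corollary 3; we assume here that the quantity being counted is the number of nonsingular $C^{(2)}$ for which the resulting two-dimensional net is NNLD, so the reported count should be recovered only under that reading.

```python
import itertools
import numpy as np

def digital_net_b2(C_list, m):
    """Points of a base-2 digital net with the given m-by-m binary generator matrices."""
    n = 2**m
    scale = np.array([2.0**-(k + 1) for k in range(m)])
    pts = np.zeros((n, len(C_list)))
    for i in range(n):
        a = np.array([(i >> k) & 1 for k in range(m)])       # digits of i
        for j, C in enumerate(C_list):
            pts[i, j] = np.dot((C @ a) % 2, scale)
    return pts

def rank_f2(C):
    """Rank of a square binary matrix over F_2 by Gaussian elimination."""
    A = [list(map(int, row)) for row in C]
    m, rank = len(A), 0
    for col in range(m):
        piv = next((r for r in range(rank, m) if A[r][col]), None)
        if piv is None:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        for r in range(m):
            if r != rank and A[r][col]:
                A[r] = [(x + y) % 2 for x, y in zip(A[r], A[rank])]
        rank += 1
    return rank

m = 4
I4 = np.eye(m, dtype=int)
odd_rows = [r for r in itertools.product([0, 1], repeat=m) if sum(r) % 2 == 1]
count = 0
for rows in itertools.product(odd_rows, repeat=m):   # only rows of odd weight, as required
    C2 = np.array(rows)
    if rank_f2(C2) == m and is_nnld(digital_net_b2([I4, C2], m)):
        count += 1
print(count)   # 221 is reported in the text, under the reading described above
```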
Nevertheless, it is hard to find an example where non-permutation matrices perform better than permutation matrices with respect to the $t$-value. When $d=3$, one can verify, either by lengthy reasoning or brute-force enumeration, that NNLD digital nets constructed by non-permutation matrices cannot attain a better $t$-value than those constructed by permutation matrices for $m\leqslant 7$ and $b=2$.
# Non-trivial Rank 1 lattices that are NNLD {#sec:rankone}
Here we consider special cases of rank-1 lattice rules that are suboptimal in terms of discrepancy, but produce NNLD points. While they can be defined in any dimension $d\geqslant 2$ it is only for dimension $1$ that they are projection regular. Therefore the conclusions from Proposition [Proposition 1](#lem:upper_diag_point){reference-type="ref" reference="lem:upper_diag_point"} and Corollary [Corollary 1](#cor:pro_reg_lattice_NNLD){reference-type="ref" reference="cor:pro_reg_lattice_NNLD"} do not hold for them when $d>1$.
**Theorem 3**. *For integers $m\geqslant d$ and $b\geqslant 2$ and $0\leqslant i<n=b^m$, let $$\boldsymbol{x}_{i} =
\Bigl(
\frac{i}{n},
\frac{ib}{n},
\dots,
\frac{ib^{j-1}}n,
\dots,
\frac{ib^{d-1}}{n}
\Bigr) \quad\bmod 1.$$ Then points $\boldsymbol{x}_0,\dots,\boldsymbol{x}_{n-1}$ are NNLD.*
Before proving this theorem we note that these points are quite poor for integration; however, the structure of the points can be useful for showing good integration bounds in suitably weighted spaces, see [@DKLP15]. There are only $b^{m-j+1}$ unique values of $x_{ij}$. Further, when $|j-j'|$ is small the points $(x_{ij},x_{ij'})$ lie within at most $b^{|j-j'|}$ lines in $[0,1)^2$ and have a large discrepancy.
*Proof.* We write $i=\sum_{k=1}^ma_i(k)b^{k-1}$ and then $$\begin{aligned}
nx_{ij} &= b^{j-1}\sum_{k=1}^ma_i(k)b^{k-1}
\ \bmod b^m
= \sum_{k=1}^{m+1-j}a_i(k)b^{j+k-2}.\end{aligned}$$ For $i\sim\mathbb{U}\{0,1,\dots,n-1\}$ the digits $a_i(1),\dots,a_i(m)$ are independent $\mathbb{U}(\mathbb{Z}_b)$ random variables. Hence they are associated random variables which makes $nx_{i1},\dots,nx_{id}$ and hence $x_{i1},\dots,x_{id}$ into associated random variables. Finally, $x_{ij}$ has the uniform distribution on $\{0,1/n_j,2/n_j,\dots,1-1/n_j\}$ where $n_j=n/b^{j-1}$. This distribution is stochastically smaller than $\mathbb{U}[0,1]$ and so $\boldsymbol{x}_i$ are NNLD. ◻
The values $x_{ij}$ for $0\leqslant i<b^m$ in these lattices take $n_j=b^{m-j+1}$ distinct values $\ell/n_j$ for $0\leqslant\ell<n_j$ with each of those values appearing $n/n_j$ times. As such they constitute a left endpoint integration rule on $n_j$ points and so for nonperiodic smooth integrands we anticipate an error rate of $O(n_j^{-1})$. For this to be better than plain MC we require $n_j\geqslant\sqrt{n}$ or $j\leqslant m/2$. While a better rate is available for periodic integrands, those cannot be completely monotone unless they are constant.
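These points are trivial to generate, and for small parameters both the count of distinct values per coordinate and the NNLD property of Theorem 3 can be confirmed numerically (Python; `is_nnld` is the check introduced after Corollary 3).

```python
import numpy as np

def rank_one_nnld_lattice(b, m, d):
    """Points x_i = (i/n, i*b/n, ..., i*b^{d-1}/n) mod 1 with n = b**m (m >= d)."""
    n = b**m
    i = np.arange(n)
    return np.stack([(i * b**j % n) / n for j in range(d)], axis=1)

b, m, d = 2, 4, 3
pts = rank_one_nnld_lattice(b, m, d)
# the j-th coordinate takes n_j = n / b**(j-1) distinct values, j = 1,...,d
print([len(np.unique(pts[:, j])) for j in range(d)])   # [16, 8, 4]
print(is_nnld(pts))                                    # True by Theorem 3
```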
# Discussion and further references {#sec:disc}
We find that it is possible to get computable bounds on some integrals by using points with a suitable bias property (non-negative local discrepancy (NNLD)) on integrands with a suitable monotonicity property (complete monotonicity). The method of associated random variables is useful for showing that a given point set is NNLD.
There are several generalizations of multivariate monotonicity in [@mosl:scar:1991]. They include the complete monotonicity discussed here as well as the more commonly considered monotonicity in each of the $d$ inputs one at a time. The complexity of integrating coordinate-wise monotone functions has been studied by [@novak:1992; @papa:1993]. Scrambled $(t,m,d)$-nets have been shown to be negatively orthant dependent if and only if $t=0$ [@wiar:lemi:dong:t2021]. Similarly, it was shown in [@WnuGne:2020] that randomly shifted and jittered (RSJ) rank-$1$ lattices based on a random generator are also negatively orthant dependent and that, in some sense, one cannot achieve this result by employing less randomness. Using the NLOD property of the distribution of these RQMC points, it follows from [@lemi:2018] that for functions which are monotone in each variable scrambled nets and RSJ rank-1 lattices cannot increase variance over plain Monte Carlo in any dimension $d$.
While complete monotonicity is a very special property, its applicability can be widened by the method of control variates. If $h(\cdot)$ is completely monotone with known integral $\theta$, we will in some settings be able to find $\lambda_+>0$ for which $f+\lambda_+ h$ is a completely monotone function of $\boldsymbol{x}$. Then by Theorem [Theorem 1](#thm:basic){reference-type="ref" reference="thm:basic"} we can compute an upper bound $B_+\geqslant\mu+\lambda_+\theta$ and conclude that $\mu\leqslant B_+-\lambda_+\theta$. Similarly a lower bound can be found by choosing $\lambda_-$ such that $\lambda_-h-f$ is a completely monotone function of $\boldsymbol{x}$, using Theorem [Theorem 1](#thm:basic){reference-type="ref" reference="thm:basic"} to get an upper bound $\lambda_-\theta-\mu\leqslant B_-$ and then concluding that $\mu\geqslant\lambda_-\theta -B_-$. Details on how to choose $h$ and find $\lambda_\pm$ are beyond the scope of this article.
The customary way to quantify uncertainty in QMC is to use RQMC replicates with statistically derived asymptotic confidence intervals. For a recent thorough empirical evaluation of RQMC, see [@lecu:naka:owen:tuff:2023], who found the usual confidence intervals based on the central limit theorem to be even more reliable than sophisticated bootstrap methods. Here we have found an alternative computable non-asymptotic approach with 100% coverage, but so far it does not give very good accuracy for high dimensions.
# Acknowledgments {#acknowledgments .unnumbered}
We thank Josef Dick, David Krieg, Frances Kuo, Dirk Nuyens and Ian Sloan for discussions. Much of this work took place at the MATRIX Institute's location in Creswick Australia as part of their research program on 'Computational Mathematics for High-Dimensional Data in Statistical Learning', in February 2023, and the paper was finalized during the Dagstuhl Seminar 23351 'Algorithms and Complexity for Continuous Problems', in Schloss Dagstuhl, Wadern, Germany, in August 2023. We are grateful to MATRIX and to the Leibniz Center Schloss Dagstuhl. The contributions of ABO and ZP were supported by the U.S. National Science Foundation under grant DMS-2152780. Peter Kritzer is supported by the Austrian Science Fund (FWF) Project P34808. For the purpose of open access, the authors have applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.
| arxiv_math | {
"id": "2309.04209",
"title": "Computable error bounds for quasi-Monte Carlo using points with\n non-negative local discrepancy",
"authors": "Michael Gnewuch, Peter Kritzer, Art B. Owen, Zexin Pan",
"categories": "math.NA cs.NA math.ST stat.TH",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
In this paper, we present an algebro-geometric construction of the Hitchin connection in the parabolic setting for a fixed determinant line bundle, following [@pauly2023hitchin]. Our strategy is based on Hecke modifications, where we provide a decomposition formula for the parabolic determinant line bundle and the canonical line bundle of the moduli space of parabolic bundles. As a special case, we construct a Hitchin connection on the moduli space of vector bundles with fixed non-trivial determinant.
nocite: "[@*]"
---
**Parabolic Hitchin connection**\
[Zakaria Ouaras]{.smallcaps}\
Laboratoire J.-A. Dieudonné\
Université Côte d'Azur\
Parc Valrose, 06108 Nice Cedex 02, France\
[ouaras\@unice.fr]([email protected])\
# Introduction
The Hitchin connection is a projectively flat connection introduced by Hitchin [@hitchin1990flat] in the context of geometric quantization of real symplectic manifolds, motivated by the work of Witten [@witten1989quantum] on quantum Chern-Simons theory. To a given Lie group $G$, one associates the character variety of irreducible representations of the fundamental group of a topological surface $X$. This variety is canonically a symplectic manifold $(M,\omega)$ via the Atiyah-Bott-Goldman form [@Atiyah-Bott], and a conformal structure on $X$ induces a complex structure on $M$. Hitchin showed that the conditions for the existence of the projective connection are satisfied in this case for $G=\mathrm{SU}(r)$ when the closed oriented surface $X$ has genus $g \geq 2$ (except for $r$=$g$=$2$). By the Narasimhan-Seshadri theorem, $M$ is identified with the moduli space of stable rank-$r$ vector bundles with trivial determinant over $X$ equipped with a complex structure, thus a quasi-projective variety, where the symplectic form $\omega$ is a Kähler form and the inverse of the determinant line bundle provides a pre-quantum line bundle (see [@MR783704] for details).\
Scheinost-Schottenloher [@schottenloher1995metaplectic] generalized Hitchin's construction to higher dimensions, namely to the relative moduli space of semistable holomorphic rank-r vector bundles $E$ with total Chern class one and trivial determinant over a family of Kähler varieties, and also dealt with the case of parabolic vector bundles in dimension one. This space is a compact complex variety equipped with a natural ample line bundle $\mathcal{L}$ that generalizes the determinant line bundle. They show, under the hypothesis that a square root of the canonical bundle $K_{\mathcal{M}/B}^{1/2}$ exists, that the pushforward sheaf $\mathcal{W}_k\footnote{The twist by $ K_{\mathcal{M}}^{1/2}$ is called the metaplectic correction}:= \varpi_*\left( \mathcal{L}^k \otimes K_{\mathcal{M}/B}^{1/2} \right)$ is locally free and equipped with a projectively flat connection. A special case is when $X$ is an *elliptic surface*: Bauer [@bauer1989parabolic] gave a description of the space of semistable rank-r bundles with total Chern class one and trivial determinant over $X$ in terms of parabolic bundles over the Riemann surface $C$ associated to $X$. Hence, by this identification and under the above assumption, they get a projectively flat connection over the pushforward of the generalized determinant line bundle with a metaplectic correction in the parabolic setting [^1]. Bjerre [@bjerre2018hitchin] removed the restriction requiring the existence of a square root of the canonical line bundle, and proved the existence of the Hitchin connection over the space $\mathcal{W}^0_k:= \varpi_*\left( \mathcal{L}^k \right)$, using a general construction of Hitchin connections in geometric quantization, as done in [@andersen2012hitchinT], and Hitchin connections in metaplectic quantization, as done in [@andersen2012hitchin].\
The constructions above were done using differential and Kähler geometric methods. Van Geemen-de Jong [@van1998hitchin] gave a purely algebraic approach for the construction of the Hitchin connection over a family $\mathcal{M}/S$ equipped with a line bundle $\mathcal{L}$. One of the conditions is the equality $\mu_L \circ \rho=-\kappa_{\mathcal{M}/S},$ where $\kappa_{\mathcal{M}/S}$ is the Kodaira-Spencer map of a family $\mathcal{M}/S$, $\mu_L$ is a map associated to a line bundle $L$ over $\mathcal{M}$ and $\rho$ is a symbol map. We refer to several works related to algebro-geometric constructions of the Hitchin connection, such as Faltings [@faltings1993stable], Axelrod-Witten-della-Pietra [@axelrod1991geometric] and Ran [@ran2006jacobi].\
Baier-Bolognesi-Martens-Pauly [@pauly2023hitchin] provide an algebro-geometric construction of the Hitchin connection over the relative moduli space of semi-stable rank-r vector bundles with trivial determinant over a family of complex projective curves of genus $g \geq 2$ (except $r$=$g$=2) parameterized by a variety $S$. They use the so-called trace complex theory [@beilinson1988determinant], the Bloch-Esnault quasi-isomorphism [@bloch1999relative], and the Sun-Tsai isomorphism [@sun2004hitchin]. They give a description of the Atiyah class of the determinant line bundle, and the symbol map is a rational multiple of the quadratic part of the Hitchin system composed with the Kodaira-Spencer map of the family of curves.
#### Setting of the problem:
Let $\pi_s: \mathcal{C} \longrightarrow S$ be a smooth versal family of projective curves of genus $g \geq 2$, parameterized by a projective variety $S$, and let $(\sigma_i)_{i\in I=\{1,2,...,N\}}$ be $N$ disjoint sections of $\pi_s$, i.e. for all $i \neq j \in I$ and all $s \in S$ we have $\sigma_i(s) \neq \sigma_j(s)$; we denote by $D=\sum_{i=1}^N \sigma_i(S)$ the associated divisor. For a fixed rank-$r$ parabolic type $\alpha_*=(k,\vec{a},\vec{m})$ with respect to the parabolic divisor $D$ and a relative line bundle $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$, we denote by $\pi_e:\mathcal{SM}^{par}_{\mathcal{C}/S}:=\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta) \longrightarrow S$ the relative moduli space of parabolic rank-$r$ vector bundles of determinant $\delta$ and type $\alpha_*$, equipped with the parabolic determinant line bundle $\Theta_{par} \in \mathrm{Pic}\left(\mathcal{SM}^{par}_{\mathcal{C}/S} /S \right)$. We have the fibre product $$\def\cartesien{%
\ar@{-}[]+R+<6pt,-1pt>;[]+RD+<6pt,-6pt>%
\ar@{-}[]+D+<1pt,-6pt>;[]+RD+<6pt,-6pt>%
}
\xymatrix{
\mathcal{C}\times_S\mathcal{SM}^{par}_{\mathcal{C}/S} \ar[rr]^{\pi_{n}} \ar[d]_{\pi_{w}} \cartesien && \mathcal{SM}^{par}_{\mathcal{C}/S} \ar[d]^{\pi_{e}} & \\
\ \left( \mathcal{C},D \right) \ar[rr]_{\pi_{s}} && \ar@/^1pc/[ll]^{ \sigma_{i}} S
}
\label{fiber-product}$$
#### Question:
Is there a projective flat connection on the vector bundle $\mathcal{V}^{par}(\alpha_*,\delta,\nu):=\pi_{e_*}\left( \Theta_{par}^{\nu} \right)$ over $S$ associated to a heat operator on the parabolic determinant line bundle?
#### Motivation:
In the theory of conformal blocks, Tsuchiya-Ueno-Yamada [@tsuchiya1989conformal] constructed, for the Lie algebra $sl_{r}(\mathbb{C})$, an integer $k$ and an $N$-tuple $\vec {\lambda}=\{\lambda_x\}_{x\in D}$ of dominant weights for $sl_{r}(\mathbb{C})$ over a marked curve $(C,D)$, a vector space $\mathbb{V}_C(I,\vec{\lambda},k)$, which can be glued to a vector bundle $\mathbb{V}(I,\vec{\lambda},k)$ over the moduli space of marked curves $\mathfrak{M}_{g,N}$ parameterizing $N$-pointed curves of genus $g$, called the vector bundle of conformal blocks. In [@tsuchiya1989conformal] they constructed a flat projective connection. Moreover, Beauville-Laszlo [@beauville1994conformal] proved that the vector spaces $V_J$ in Hitchin's constructions are canonically identified with $\mathbb{V}_C(D,\vec{\lambda},k)$ over a curve with one point with trivial weights. It is natural to ask whether the connections of Hitchin [@hitchin1990flat] and Tsuchiya-Ueno-Yamada [@tsuchiya1989conformal] coincide; this was proven by Laszlo [@laszlo1998hitchin]. Pauly [@pauly1996espaces] gave a generalization of Beauville-Laszlo's identification of the space of non-abelian theta functions $\mathrm{H}^0\left( \mathcal{SM}_C^{par},\Theta_{par}\right)$ with the conformal blocks $\mathbb{V}(D,\vec{\lambda},k)$. By [@tsuchiya1989conformal] there is a projective flat connection on the conformal block side, hence it is natural to ask the above question and to compare the two connections via Pauly's isomorphism.\
Biswas-Mukhopadhyay-Wentworth [@biswas2021ginzburg] gave a proof of the existence of the Hitchin connection for parabolic $G$-bundles following [@pauly2023hitchin] (we present here the case $G=SL_r$ [^2]). Their strategy uses Seshadri's identification [@seshadri1977moduli] of the moduli space of parabolic bundles with $\mathcal{SU}_{X/S }^{\Gamma}(r,d)$, the moduli space of $\Gamma$-bundles for a Galois group $\Gamma$ over a family of Galois covers $X/S$ of the family of curves $\mathcal{C}/S$ [^3]. They apply Proposition 4.7.1 of [@pauly2023hitchin] to the space $\mathcal{SU}_{X/S} (r)$ and to the determinant line bundle $\widehat{\mathcal{L}}$; to conclude, they prove that the map $\mu_{\widehat{\mathcal{L}}_Q}$ is an isomorphism, hence the modified symbol map for the line bundle $(\widehat{\mathcal{L}}_Q)^{\nu}$, for a positive integer $\nu$, is given by $$\rho^{Hit}_{par,\Gamma}(\nu):=\mu^{-1}_{(\widehat{\mathcal{L}}_Q)^{\nu}} \circ \left(\cup[\widehat{\mathcal{L}}_Q] \circ \rho_{par} \circ \kappa_{\mathcal{C}/S} \right)
\label{BMWsymbol}$$ where $\rho_{par}$ is the quadratic part of the parabolic Hitchin system and $\kappa_{\mathcal{C}/S}$ (resp. $\kappa_{\mathcal{SM}^{par}_{\mathcal{C}}(r)/S}$) is the Kodaira-Spencer map of the family of curves (resp. of the family of relative moduli spaces, which depends on the Galois cover). In a second paper [@biswas2021geometrization], they prove that the symbol map $\rho^{Hit}_{par,\Gamma}(\nu)$ is independent of the parabolic weights in the full flag case, using abelianization of parabolic Higgs bundles. Now consider the equality $\widehat{\mathcal{L}}_Q:=Q^*(\widehat{\mathcal{L}})\cong \Theta_{par}^{\vert \Gamma \vert /k}$ given in [@BR93] (Proposition 4.14), where $\vert \Gamma \vert$ is the order of $\Gamma$, and let $\Theta$ denote the pull-back of the determinant line bundle by the forgetful map to $\mathcal{SU}_{\mathcal{C}/S}(r)$. Then the symbol can be written as follows $$\rho^{Hit}_{par,\Gamma}(\nu):=\vert \Gamma \vert \ \mu^{-1}_{\Theta_{par}^{\nu}} \circ \left(\cup[\Theta] \circ \rho_{par} \circ \kappa_{\mathcal{C}/S} \right).$$
Our work is independent of the work of [@biswas2021ginzburg]. The objects that we define are intrinsically attached to the marked curve and the quasi-parabolic type. Our proof is over $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$, the relative moduli space of parabolic rank-r vector bundles with fixed parabolic type $\alpha_*$ and fixed determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$, whereas [@biswas2021ginzburg] works with $\delta= \mathcal{O}_{\mathcal{C}}$. We prove the following results:\
**Theorem [Theorem 58](#Main Theorem 1){reference-type="ref" reference="Main Theorem 1"}:** *Let $\nu$ be a positive integer. There exists a unique projective flat connection on the vector bundle ${\pi_e}_*(\Theta_{par}^{\nu})$ of non-abelian parabolic theta functions, induced by a heat operator with symbol map $$\rho^{Hit}_{par}(\nu):= \frac{1}{(\nu k+r)} \left( \rho_{par}\circ \kappa_{\mathcal{C}/S} \right).$$*
For $D=\emptyset$, we have $\alpha_*$ = $k \in \mathbb{N}^*$, the trivial parabolic type, and the space $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ coincides with $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$, the moduli space of semi-stable rank-r bundles with determinant $\delta$. Hence, $\Theta_{par}^{r/n}\equiv \mathcal{L}^k$ for $n:=\gcd(r,\mathrm{deg}(\delta))$, where $\mathcal{L}$ is the determinant line bundle over $p_e:\mathcal{SU}_{\mathcal{C}/S}(r,\delta) \rightarrow S$, and $\rho_{par}=\rho^{Hit}$ is the Hitchin symbol map.\
**Theorem [Theorem 59](#Main Theorem 2){reference-type="ref" reference="Main Theorem 2"}:** *Let $\mathcal{C}/S$ be a smooth versal family of complex projective curves of genus $g \geq 2$ (and $g \geq 3$ if $r=2$ and $\mathrm{deg}(\delta)$ is even). Let $k$ a positive integer. There exists a unique projective flat connection on the vector bundle $p_{e_*}(\mathcal{L}^{k})$ of non-abelian theta functions, induced by a heat operator with symbol map $$\rho(k):= \frac{n}{r(k+n)} \left( \rho^{Hit}\circ \kappa_{\mathcal{C}/S} \right).$$*
Note that for $\delta$=$\mathcal{O}_{\mathcal{C}}$, we recover the classical case proved in [@hitchin1990flat].\
Our strategy is based on what we call Hecke maps, which are rational maps given by associating to a parabolic bundle its Hecke modifications with respect to the elements of the filtrations; in this way we get vector bundles with fixed determinants. Using these maps and the forgetful map over the moduli space of parabolic bundles, we give in Proposition [Proposition 52](#Parabolic determinant bundle and Hecke modifications){reference-type="ref" reference="Parabolic determinant bundle and Hecke modifications"} a decomposition formula for the parabolic determinant line bundle $\Theta_{par}$ $$\Theta_{par}^r:= \lambda_{par}(\mathcal{E}_*)^r=\Theta^{n k} \otimes \bigotimes\limits_{i=1}^N\bigotimes\limits_{j=1}^{\ell_i-1} \left( \Theta^n \otimes \Theta_{j}(i)^{n_j(i)} \right)^{p_j(i)}$$ and in Proposition [Proposition 53](#Canonical bundle){reference-type="ref" reference="Canonical bundle"} a decomposition formula for the relative canonical line bundle $$K_{\mathcal{SM}^{par}_{\mathcal{C}/S}/S}=\Theta^{-2n} \otimes \bigotimes\limits_{i=1}^N \bigotimes\limits_{j=1}^{\ell_i-1} \left( \Theta^n \otimes \Theta_{j}(i)^{n_j(i)} \right)$$ where $\Theta$ and $\Theta_j(i)$ are the pull-backs of the ordinary determinant line bundles under the forgetful map and Hecke maps, respectively. We also prove a Beilinson-Schechtman type Theorem [^4] [Theorem 37](#Parabolic BSBE-2){reference-type="ref" reference="Parabolic BSBE-2"} over the moduli space of parabolic bundles, where we express the Atiyah classes of the line bundles $\Theta$ and $\Theta_j(i)$ as rational multiples of extension classes of exact sequences associated to a strongly-parabolic exact sequence (see Definition [Definition 30](#SQPA){reference-type="ref" reference="SQPA"}) of a parabolic bundle and its Hecke modifications. We also define the parabolic Atiyah algebroid (see Definition [Definition 29](#QPA){reference-type="ref" reference="QPA"}) and we show in Theorem [Theorem 43](#infinitesimal deformations){reference-type="ref" reference="infinitesimal deformations"} that the first cohomology group with values in this sheaf parameterizes the infinitesimal deformations of a marked curve equipped with a quasi-parabolic bundle.\
Finally, by proving the invariance of the parabolic Hitchin symbol $\rho_{par}$ under the Hecke modifications, we obtain the van-Geemen-de Jong equation for any positive power of the parabolic determinant line bundle. Additionally, we establish the equation $\mu_{\Theta_{par}^{\nu}}= \left( \frac{\nu k+r}{k} \right) \cup [\Theta_{par} ]$ and we show that the symbol map $\rho^{Hit}_{par}(\nu)$ is independent of the parabolic weights. To conclude the proof, we verify the remaining conditions for the existence and flatness of the connection using Singh's Theorem [Theorem 55](#Singh){reference-type="ref" reference="Singh"} on Hitchin varieties.
#### Acknowledgements:
I extend my sincere gratitude to my thesis supervisor Christian Pauly, for his invaluable guidance and constant encouragement throughout the completion of this work. Additionally, I would like to thank the reviewers of my thesis, Xiaotao Sun and Richard Wentworth, for their valuable feedback and suggestions. Finally, I am grateful to Michele Bolognesi and Johan Martens for engaging discussions.
# Parabolic vector bundles and their moduli spaces
Let $C$ be a smooth projective complex curve of genus $g \ge 2$ and $D$=$\{ x_1,x_2, ...,x_N \}$ a finite subset of points of $C$. The set $D$ will also be called a parabolic divisor. Set $I=\{1,2,...,N\},$ where $N=\mathrm{deg}(D)$.
## Parabolic vector bundles
#### Parabolic type of a vector bundle {#parabolic-type-of-a-vector-bundle .unnumbered}
A parabolic type for a rank-$r$ vector bundle over the curve $C$ with respect to the parabolic divisor $D$ is the following numerical data $\alpha_*=(k,\vec{a},\vec{m})$ consisting of:
- A quasi-parabolic type $\vec{m}=(\ell_i,m(i))_{i \in I}$, where the integers $\ell_i \in \mathbb{N}^*$ are called the lengths, and
1. A sequence of integers called the flag type at $x_i\in D$ $$m(i)=(m_1(i),m_2(i), ...,m_{\ell_ i}(i)), \quad \ \mathrm{with} \quad m_j(i) \in \mathbb{N}^*.$$
2. We have for every $i \in I$ the relation: $\sum\limits_{j=1}^{\ell_i} m_j(i)=r.$
- A system of parabolic weights $(k,\vec{a})$, where $k \in \mathbb{N}^*$ and $\vec{a}=(a_j(i))_{\substack{i \in I \\ 1 \leq j \leq \ell_i}}$ a sequence of integers satisfying: $$0 \leq a_1(i) < a_2(i) <...<a_{\ell_i}(i)< k.$$
We say that $x_i \in D$ is a trivial point if $\ell_i=1$, which implies that $m_1(i)=r$ and $m(i)=(r)$.\
We say that $\alpha_*$ is a full flag parabolic type if $\ell_i=r$ for all $i\in I$; thus $m_j(i)=1$ for all $i,j$.
**Definition 1** (Parabolic vector bundles $\cite{seshadri1977moduli}$). Let $E$ be a rank-r vector bundle over $C$. A quasi-parabolic structure of quasi-parabolic type $\vec{m}=(\ell_i,m(i))_{i\in I}$ on $E$ with respect to the parabolic divisor $D$, is given for each $i \in I$, by a linear filtration of length $\ell_i$ on the fibre $E_{x_i}$ $$F^*_*(E): \ \ \ \ E_{x_i} = F^1_i(E) \supset F^2_i(E) \supset \cdot\cdot\cdot\supset F^{\ell_i}_i(E) \supset F^{\ell_i+1}_i(E)=\{0\}$$ such that for $j \in \{1,2...,\ell_i\}$ we have $\mathrm{dim}_{\mathbb{C}}\left(F^{j}_i(E)/F^{j+1}_i(E)\right)$=$m_j(i)$. A parabolic structure on $E$ with respect to the parabolic divisor $D$ is the data $(E,F^*_*(E),\alpha_*)$ where $\alpha_*=(k,\vec{a},\vec{m})$ is a fixed parabolic type, and $(E,F^*_*)$ is a quasi-parabolic structure over $E$ of type $\vec{m}$ with respect to the parabolic divisor $D$. We denote a parabolic vector bundle by $E_*$.
For all $i \in I$ and $j \in \{1,2,...,\ell_i \}$, we define the following quotients $Gr^j_i(E):=\left( F^{j}_{i}(E)/F^{j+1}_{i}(E) \right)$ and $Q^j_i(E):=\left( E_{x_i}/F^{j+1}_{i}(E) \right)$, with dimensions $m_j(i)$ and $r_j(i)=\sum\limits_{q=1}^{j} m_q(i)$, respectively.
**Definition 2** (parabolic degree, slope and Euler characteristic). Let $E_*$ be a parabolic bundle over $C$. We define:
1. The parabolic degree: $\mathrm{pardeg}(E)=\mathrm{deg}(E)+\frac{1}{k} \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i} m_j(i) a_j(i).$
2. The parabolic slope: $\mu_{par}(E)=\mathrm{pardeg}(E)/\mathrm{rank}(E).$
3. The parabolic Euler characteristic: $\chi_{par}(E)= \mathrm{pardeg}(E)+\mathrm{rank}(E)(1-g).$
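As a simple worked example (ours, not taken from the references), take $r=2$, a single parabolic point $x_1$ with full flag, i.e. $\ell_1=2$ and $m_1(1)=m_2(1)=1$, weights $(a_1(1),a_2(1))=(0,1)$ with $k=2$, and $\mathrm{deg}(E)=0$. Then $$\mathrm{pardeg}(E)=0+\tfrac{1}{2}\left(1\cdot 0+1\cdot 1\right)=\tfrac{1}{2}, \qquad \mu_{par}(E)=\tfrac{1}{4}, \qquad \chi_{par}(E)=\tfrac{1}{2}+2(1-g).$$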
**Definition 3** (parabolic and strongly parabolic endomorphisms). Let $E_*$ be a parabolic bundle over $C$ and let $f \in \mathrm{End}(E)$. Then $f$ is a parabolic (resp. strongly parabolic) endomorphism if for all $i \in I$ and $j \in \{1,2,...,\ell_i \}$ one has $$f_{x_i} \left(F^{j}_i(E)\right) \subset F^{j}_i(E) \quad \text{resp.} \quad f_{x_i} \left(F^{j}_i(E)\right) \subset F^{j+1}_i(E) .$$ We denote the sheaves of parabolic endomorphisms and strongly parabolic endomorphisms by $\mathrm{parEnd}(E)$ and $\mathrm{SparEnd}(E)$, respectively; both are locally free. By definition, we have the inclusions $$\mathrm{SparEnd}(E) \hookrightarrow \mathrm{parEnd}(E) \hookrightarrow \mathrm{End}(E).
\label{SPE}$$
**Proposition 4**. *[@yokogawa1991moduli] Let $E_*$ be a parabolic vector bundle over $C$ with respect to the parabolic divisor $D$. Then, we have a canonical isomorphism of locally free sheaves $$\mathrm{parEnd}(E)^{\vee} \cong \mathrm{SparEnd}(E) \otimes \mathcal{O}_C(D).$$ This isomorphism is given by the non-degenerate trace pairing: $$\begin{array}{ccccc}
\mathrm{Tr} & : & \mathrm{parEnd}(E)\otimes \mathrm{SparEnd}(E)& \longrightarrow & \mathcal{O}_C(-D) \\
& &\phi \otimes \psi & \longmapsto & \mathrm{Tr}(\phi\circ \psi).
\end{array}$$*
By dualizing [\[SPE\]](#SPE){reference-type="eqref" reference="SPE"} we get $$\mathrm{End}(E)^{\vee}\cong \mathrm{End}(E) \hookrightarrow \mathrm{SparEnd}(E)(D) \hookrightarrow \mathrm{parEnd}(E)(D) .$$
## Moduli spaces of parabolic bundles
To construct the moduli space of parabolic vector bundles over a curve $C$, we need a notion of semi-stability and stability which will depend on the parabolic type $\alpha_*$. Mehta-Seshadri constructed the moduli space of semistable parabolic vector bundles over a smooth projective complex curve $C$. In this subsection, we recall the existence theorem of a coarse moduli space parameterizing parabolic bundles. The main reference is [@seshadri1982fibers].
**Theorem 5** (Mehta-Seshadri [@mehta1980moduli]). *For a fixed parabolic type $\alpha_*$, there is a coarse moduli space $\mathcal{M}^{par}:=\mathcal{M}^{par}(r,\alpha_*,d)$ which is a projective irreducible normal variety, parameterizing $\alpha_*$-semi-stable parabolic rank-r vector bundles of degree-d up to $S$-equivalence over the curve $C$. Moreover, the subspace $\mathcal{M}_s^{par}\subset \mathcal{M}^{par}$ of $\alpha_*$-stable parabolic bundles is an open smooth subset.*
For a line bundle $\delta \in \mathrm{Pic}^d(C)$, we define $\mathcal{SM}^{par}_C(r,\alpha_*,\delta)=\{ E_* \in \mathcal{M}^{par}_C(r,\alpha_*,d) \ / \ \det(E)\cong \delta\},$ the moduli space of $\alpha_*$-semistable parabolic bundles with determinant $\delta$, which is also a projective irreducible normal variety.
## Relative moduli spaces {#Relative moduli spaces}
In this subsection, we will recall the existence of a relative version of the moduli spaces of semi-stable parabolic vector bundles over a family of smooth projective complex curves equipped with a family of parabolic divisors.\
Let $\pi_s: \mathcal{C} \longrightarrow S$ be a smooth family of projective curves of genus $g \ge 2$, parameterized by an algebraic variety $S$ over $\mathbb{C}$, and let $(\sigma_i : S \rightarrow \mathcal{C})_{ i \in I}$ be $N$ sections such that for all $i \neq j \in I$ and all $s \in S$ we have $\sigma_i(s) \neq \sigma_j(s)$. We denote by $D:= \sum_{i\in I} \sigma_i(S)$ the associated divisor (as the relative dimension of the map $\pi_s$ is one), which will be seen as a family of parabolic divisors of degree $N$ parameterized by the variety $S$, and let $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$[^5].\
Let $\pi_e: \mathcal{T}\longrightarrow S$ be a $S$-variety. A relative family of $\alpha_*$-parabolic rank-r vector bundles of degree $d$ (resp. determinant $\delta$) over $\mathcal{C}/S$ parameterized by $\mathcal{T}/S$ is a locally free sheaf $\mathcal{E}$ over $\mathcal{C}\times_S \mathcal{T}$ together with the following data:
- For each $i \in I$, we give a filtration of the vector bundle $\mathcal{E}_{\sigma_i}:=\mathcal{E}\vert_{\sigma_i(S)\times_S \mathcal{T}}$ over $\sigma_i(S)\times_S \mathcal{T}\cong \mathcal{T}$ by subbundles as follows $$\mathcal{E}_{\sigma_i}=F^1_{i}(\mathcal{E}) \supset F^2_{i}(\mathcal{E}) \supset \cdot\cdot\cdot \supset F^{\ell_i}_{i}(\mathcal{E}) \supset F^{\ell_i+1}_{i}(\mathcal{E})=\{0\},$$ with weights $$0 \leq a_1(i) < a_2(i)<...<a_{\ell_i}(i)<k,$$ such that for each $j\in \{1,2,...,\ell_i\}$ we have $\mathrm{rank}\left( F^j_{i}(\mathcal{E})/F^{j+1}_{i}(\mathcal{E})\right)=m_j(i)$. Thus we get a parabolic structure on $\mathcal{E}$, denoted by $\mathcal{E}_*$.
- For each $t \in \mathcal{T}$ we set $\mathcal{C}_t:=\pi_s^{-1}\left( \pi_e(t)\right)$. Then the vector bundle $\mathcal{E}_* \vert_{\mathcal{C}_t}$ is an $\alpha_*$-semistable parabolic bundle of degree $d$ (resp. with determinant $\delta_t:=\delta \vert_{\mathcal{C}_t} \in \mathrm{Pic}^d(\mathcal{C}_t)$) with respect to the parabolic divisor $D_t:=\sum_{i\in I} \sigma_i(\pi_e(t)).$
Two relative families $\mathcal{E}_*$ and $\mathcal{E}'_*$ are said to be equivalent if there is a line bundle $L$ on $\mathcal{T}$ such that $\mathcal{E}_* \cong \mathcal{E}'_* \otimes \pi_e^*(L)$. Hence, we define a functor $$\begin{array}{ccccc}
\underline{\mathcal{M}^{par}_{\mathcal{C}}}:= \underline{\mathcal{M}^{par}_{\mathcal{C}}(r,\alpha_*,d)} & : & S-schemes & \longrightarrow & Set \\
& & \mathcal{T}& \longmapsto & \underline{\mathcal{M}^{par}_{\mathcal{C}}}(\mathcal{T}), \\
\end{array}$$ which associates to a Noetherian $S$-scheme $\mathcal{T}$ the set of equivalence classes of families of parabolic rank-r vector bundles of parabolic type $\alpha_*$ and degree $d$ over $\mathcal{C}/S$ parameterized by the scheme $\mathcal{T}/S$. We define a subfunctor $$\underline{\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)} \ \subset \ \underline{\mathcal{M}^{par}_{\mathcal{C}}(r,\alpha_*,d)},$$ parameterizing parabolic rank-r vector bundles over $\mathcal{C}/S$ of type $\alpha_*$ with determinant $\delta$.\
Maruyama and Yokogawa constructed a relative version of the moduli space of semistable parabolic vector bundles over a smooth family of projective curves in [@yokogawa1993compactification; @maruyama1992moduli] and [@yokogawa1995infinitesimal].
**Theorem 6** ([@mehta1980moduli]). *The functors defined above are representable by proper $S$-schemes denoted by*
*$\tilde{\pi_e}: \mathcal{M}^{par}_{\mathcal{C}/S}(r,\alpha_*,d) \longrightarrow S,$ and $\pi_e: \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta) \longrightarrow S.$*
*Their closed points parameterize relative $S$-equivalence classes of rank-r semi-stable parabolic vector bundles of fixed type $\alpha_*$ and degree $d$ (resp. fixed determinant $\delta$) over the family of marked curves $\mathcal{C}/S$.*
Let us denote by $\chi^{par}:= \mathcal{C}\times_S \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ the fibre product over $S$ (see diagram [\[fiber-product\]](#fiber-product){reference-type="eqref" reference="fiber-product"}).
**Definition 7** (Universal family). A universal parabolic vector bundle over $\mathcal{C} \times_S \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,d)$ is a family $\mathcal{E}_*$ of parabolic vector bundles of rank-r with determinant $\delta$ and parabolic type $\alpha_*$ over the family of curves $\mathcal{C}/S$, such that for all $\left[E_*\right] \in \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,d)$, the restriction $\mathcal{E}_* \vert_{\mathcal{C}_{E_*}}$ is $S$-equivalent to $E_*$ over the curve $\mathcal{C}_{E_*}=\pi_s^{-1}\left( \pi_e \left( \left[E_*\right] \right) \right).$
**Remark 8**.
1. A universal parabolic bundle, if it exists, is unique modulo equivalence of families.
2. In fact, the existence of a universal family is equivalent to the isomorphism of functors $$\underline{\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,d)}(-) \simeq \mathcal{H}om\left(-,\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,d)\right),$$ in which case we say that the moduli space is a fine moduli space.
**Proposition 9** ($\cite{boden1999rationality}$, Proposition. 3.2). *The moduli space of $\alpha_*$-parabolic-stable bundles is fine if and only if we have: $\gcd \{d,m_j(i) \vert i \in I, 1 \leq j \leq \ell_i \}=1$.*
**Remark 10**. The space $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ is a good quotient of a Hilbert quotient scheme, denoted $\mathcal{Z}^{ss}$, and there is a universal bundle $\mathcal{E}_*$ on $\mathcal{C}\times_S\mathcal{Z}^{ss}$. The bundle $\mathcal{E}_*$ may not descend to $\mathcal{C}\times_S \mathcal{SM}^{par}$, but objects such as $\mathrm{End}^0(\mathcal{E})$, $\mathrm{parEnd}^0(\mathcal{E})$ and $\mathcal{A}^0(\mathcal{E})$, etc., do descend. Recall that a sheaf $\mathcal{F}$ on $\mathcal{C}\times_S\mathcal{Z}^{ss}$ descends to $\mathcal{C}\times_S \mathcal{SM}^{par}$ if the action of the scalar automorphisms of $\mathcal{E}$ (relative to $\mathcal{Z}^{ss}$) on $\mathcal{F}$ is trivial. Thus, without confusion, we may pretend that a universal bundle $\mathcal{E}_*$ exists over $\mathcal{C}\times_S \mathcal{SM}^{par}_{\mathcal{C}/S}$; we call it the virtual universal bundle.
# Hecke modifications and filtered vector bundles {#Hecke Modification}
Let $E_* \longrightarrow C$ be a rank-r parabolic vector bundle of parabolic type $\alpha_*$ with respect to a parabolic divisor $D$, with determinant $\delta \in \mathrm{Pic}^d(C)$. We associate to it, for all $i \in I$ and $j \in \{1,2,3,...,\ell_i\}$, the following exact sequences $$0\longrightarrow \mathcal{H}_i^j(E) \hookrightarrow E \longrightarrow Q_i^j(E):= E_{x_i}/F_i^{j+1}(E) \longrightarrow 0$$ where the quotient sheaf $Q_i^j(E)$ is supported at $x_i$ and has length $r_{j}(i)=\sum\limits_{q=1}^{j} m_q(i).$ The sub-sheaves $\mathcal{H}_i^j(E)$ are locally free of rank-r and their determinants are given by $\delta_{j}(i):=\delta \otimes \mathcal{O}_C\left(-r_{j}(i)x_{i}\right)$; we denote their degrees by $d_j(i):=\mathrm{deg}\ \delta_j(i)=d-r_j(i),$ and we set the integers $n_j(i)=\mathrm{gcd}(r,d_{j}(i))$ and $n=\mathrm{gcd}(r,d)$.
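For instance (a worked example of ours), if $r=2$ and the flag at $x_i$ is full, i.e. $\ell_i=2$ and $m_1(i)=m_2(i)=1$, then $r_1(i)=1$, $r_2(i)=2$ and $$\delta_1(i)=\delta \otimes \mathcal{O}_C(-x_i),\quad d_1(i)=d-1, \qquad \delta_2(i)=\delta \otimes \mathcal{O}_C(-2x_i),\quad d_2(i)=d-2,$$ while $\mathcal{H}^{2}_i(E)=\ker\left(E \rightarrow E_{x_i}\right)=E(-x_i)$.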
**Definition 11** (Hecke modifications). We call the vector bundle $\mathcal{H}_i^j(E)$ the Hecke modification of the parabolic bundle $E_*$ with respect to the subspace $F_i^{j+1}(E)\subset E_{x_i}$. We set $\mathcal{H}_i^{0}(E)=E$.
**Proposition 12** (Hecke filtrations). *Let $E_*$ be a parabolic rank-r vector bundle with respect to the parabolic divisor $D$. Then, for each $i\in I$, the Hecke modifications over $x_i \in D$ satisfy, for all $j \in \{1,2,...,\ell_i\}$, the following inclusions $$E(-x_i)= \mathcal{H}^{\ell_i}_i(E) \subset \mathcal{H}^{\ell_i-1}_i(E) \subset \cdot\cdot\cdot \subset \mathcal{H}^2_i(E) \subset \mathcal{H}^{1}_i(E) \subset \mathcal{H}^0_i(E)=E.$$*
*Proof.* Consider the Hecke modifications over a point $x_i \in D$ for $i\in I$, and for $j \in \{1,2,...,\ell_i \}$ consider the $j$-th Hecke exact sequence $$\xymatrix{
0 \ar[r] & \mathcal{H}^{j}_i(E) \ar[r] & E \ar[r] & Q_i^{j}(E) \ar[r]& 0,
}$$ where the last arrow is given by the composition $$\xymatrix{
E \ar[rr]^{ev_{x_i}} && E_{x_i} \ar@{->>}[r] & Q_i^j(E)= E_{x_i}/F^{j+1}_i(E).
}$$ The inclusions $$F^{j+1}_i(E) \supset F^{j+2}_i(E)$$ give a surjective map $Q_i^{j+1}(E) \twoheadrightarrow Q_i^j(E).$ Then we get $$\xymatrix{
0 \ar[r] & \mathcal{H}^{j+1}_i(E) \ar[rr] \ar@{^{(}->}[rrd]^q && E \ar[rr] \ar[d]^{id} & & Q_i^{j+1}(E) \ar[r] \ar@{->>}[d]&0 \\
0 \ar[r] & \mathcal{H}^j_i(E) \ar[rr] && E \ar[rr]_p&& Q_i^j(E) \ar[r] & 0
}$$ As the right square commutes and $p \circ q=0$, the image of the map $q$ lies in the sub-sheaf $\mathcal{H}^j_i(E)$. As a conclusion, we get a filtration by rank-r locally free sub-sheaves $$E(-x_i)= \mathcal{H}^{\ell_i}_i(E) \subset \mathcal{H}^{\ell_i-1}_i(E) \subset \cdot\cdot\cdot \subset \mathcal{H}^2_i(E) \subset \mathcal{H}^{1}_i(E) \subset \mathcal{H}^0_i(E)=E.$$ $\square$
**Remark 13**. By the last proposition a rank-r parabolic structure with respect to a parabolic divisor $D$ is equivalent to the following data: $(E,\mathcal{H}^*_*(E),\alpha_*)$ such that
- $E$ a rank-r vector bundle over $C$.
- $\alpha_*=(k,\vec{a},\vec{m})$ is a parabolic type with respect to the divisor $D$.
- for all $i \in I$, we give a filtration by rank-r locally free subsheaves $$E(-x_i)= \mathcal{H}^{\ell_i}_i(E) \subset \mathcal{H}^{\ell_i-1}_i(E) \subset \cdot\cdot\cdot \subset \mathcal{H}^2_i(E) \subset \mathcal{H}^{1}_i(E) \subset \mathcal{H}^0_i(E)=E,$$ such that the torsion sheaves $\mathcal{H}^{j}_i(E)/\mathcal{H}^{j+1}_i(E)$ are supported at $x_i \in D$ and\
$\mathrm{length}\left(\mathcal{H}^{j}_i(E)/\mathcal{H}^{j+1}_i(E)\right)=m_j(i).$
#### Classifying maps {#section Classifying maps}
Let $\mathcal{E}_*$ be a family of rank-r parabolic vector bundles of fixed parabolic type $\alpha_*$ over $(\mathcal{C},D)/S$ with fixed determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$ parameterized by $\mathcal{T}/S$. As the semi-stability is an open condition, we get the following rational maps from $\mathcal{T}$:
- To the relative moduli space of parabolic semistable rank-r vector bundles of parabolic type $\alpha_*$ $$\begin{array}{ccccc}
\psi_{\mathcal{T}} & : & \mathcal{T}& \dashrightarrow & \mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta) \\
& & t & \longmapsto & [ \mathcal{E}_{t_*} ]:=[ \mathcal{E}_* \vert_{\mathcal{C}_t}]
\end{array}$$ where $[\mathcal{E}_{t_*}]$ is the $S$-equivalence class of the semi-stable parabolic bundle $\mathcal{E}_{t_*}$.
- To the relative moduli spaces of the semistable rank-r vector bundles with fixed determinant $\delta_{j}(i) \in \mathrm{Pic}^{d_j(i)}(\mathcal{C}/S)$ for all $i \in I$ and $j \in \{1,2,...,\ell_i \}$ by associating Hecke modifications (see subsection [3](#Hecke Modification){reference-type="ref" reference="Hecke Modification"}) $$\begin{array}{ccccc}
\phi^{\mathcal{T}}_{i,j} & : & \mathcal{T}& \dashrightarrow & \mathcal{SU}_{\mathcal{C}}(r,\delta_{j}(i)) \\
& & t & \longmapsto & \mathcal{H}^j_i(\mathcal{E}_t)
\end{array}$$ where $\mathcal{H}^j_i(\mathcal{E}_t):=\mathrm{ker}\{\mathcal{E}\longrightarrow Q^j_i(\mathcal{E})\}$ and $\delta_j(i):=\delta \left(-r_j(i)\sigma_i(S)\right).$
- The forgetful rational map (we forget the parabolic structure) $$\begin{array}{ccccc}
\phi_{\mathcal{T}} &: & \mathcal{T}& \dashrightarrow& \mathcal{SU}_{\mathcal{C}}(r,\delta) \\
& &t & \longmapsto & \mathcal{E}_t
\end{array}$$
We call these maps the classifying morphisms.
## Yokogawa-Maruyama point of view
In this subsection, we give Yokogawa's point of view on parabolic vector bundles and their moduli space. Simpson $\cite{simpson1990harmonic}$ gives another description of parabolic vector bundles as filtered bundles, which can be generalized to higher dimensions. Maruyama and Yokogawa [@maruyama1992moduli; @yokogawa1991moduli; @yokogawa1993compactification] construct the relative moduli space of semistable parabolic vector bundles using this description and prove that the two moduli spaces are isomorphic as algebraic varieties.\
Let $C$ be a smooth projective complex curve and $D= \sum_{i=1}^N x_i$ a reduced divisor on $C$.
**Definition 14** (Filtered vector bundles [@simpson1990harmonic]). A filtered rank-r bundle over the marked curve $(C,D)$ is a rank-r vector bundle $E$ over $C$ together with filtrations $E_{\bullet}=(E_{\lambda,i})_{i\in I,\, \lambda \in \mathbb{R}}$ satisfying, for all $i \in I$, the following conditions
1. Local freeness: $E_{\lambda,i}$ are locally free of rank-r, $\forall \lambda \in \mathbb{R}$ and $E_{0,i}=E$.
2. Decreasing: $E_{\lambda,i} \subset E_{\beta,i}$ for all $\lambda \geq \beta$.
3. Left continuity: for every $\lambda \in \mathbb{R}$ and every sufficiently small real number $\varepsilon>0$, we have $E_{\lambda-\varepsilon,i}=E_{\lambda,i}$.
4. Finiteness: the length of the filtration for $0 \leq \lambda \leq 1$ is finite.
5. Periodicity: for every real number $\lambda$, we have $E_{\lambda+1,i}=E_{\lambda,i}(-x_i)$.
#### System of weights {#system-of-weights .unnumbered}
Let $(E_{\lambda,i})_{i\in I, \lambda \in \mathbb{R}}$ be a filtered vector bundle with respect to the divisor $D$. For each $i \in I$, we define the system of weights at $x_i$ as the ordered jumping numbers in the real interval $[0,1]$, i.e., the $0 \leq \lambda \leq 1$ such that $E_{\lambda,i} \neq E_{\lambda+\varepsilon,i}$ for every $\varepsilon>0$ small enough. We will assume that the jumping numbers are rational. So we get for each $i \in I$ an ordered sequence of rational numbers $0 \leq \lambda_1(i) < \lambda_2(i) <...<\lambda_{\ell_i}(i)< 1,$ where $\ell_i$ is the number of jumps at the point $x_i$. The multiplicity of $\lambda_j(i)$ is defined as $\mathrm{length} \left( E_{\lambda_j(i),i}/E_{\lambda_j(i)+\varepsilon,i} \right)$ for $\varepsilon>0$ small enough.
**Remark 15**. Setting $E_{\lambda}=\bigcap\limits_{i=1}^N E_{\lambda,i}$, we get a filtration $E_{\bullet}:=(E_{\lambda})_{\lambda \in \mathbb{R}}$ that satisfies the first four conditions of Definition [Definition 14](#Filtred-bundles){reference-type="ref" reference="Filtred-bundles"}, while the periodicity becomes $E_{\lambda+1}=E_{\lambda}(-D)$ for each $\lambda \in \mathbb{R}$.
**Definition 16**. Let $E_{\bullet}=(E_{\lambda})_{\lambda \in \mathbb{R}}$ be a filtered rank-r bundle over the marked curve $(C,D)$. Then we define
1. Filtered degree: $\mathrm{deg}(E_{\bullet})=\int_0^1 \mathrm{deg}(E_{\lambda}) \mathrm{d}\lambda.$
2. Filtered slope: $\mu(E_{\bullet})=\mathrm{deg}(E_{\bullet})/r.$
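As a simple rank-one illustration (a toy case, with data chosen only for concreteness): suppose $N=1$ and the filtration has a single jump at $\lambda_1 \in (0,1)$, say $E_{\lambda}=L$ for $0 \leq \lambda \leq \lambda_1$ and $E_{\lambda}=L(-x_1)$ for $\lambda_1 < \lambda \leq 1$. Then $$\mathrm{deg}(E_{\bullet})=\int_0^1 \mathrm{deg}(E_{\lambda}) \,\mathrm{d}\lambda=\lambda_1\,\mathrm{deg}(L)+(1-\lambda_1)\left(\mathrm{deg}(L)-1\right)=\mathrm{deg}(L)-(1-\lambda_1),$$ and $\mu(E_{\bullet})=\mathrm{deg}(E_{\bullet})$ since $r=1$.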
**Proposition 17** (Filtered bundles as parabolic bundles). *Over a smooth marked curve $(C,D)$, the notion of filtered rank-r vector bundle is equivalent to that of parabolic rank-r vector bundle with respect to the same divisor $D$, and the stability conditions coincide.*
**Remark 18**. In $\cite{maruyama1992moduli}$ the following equality is proved: $\mathrm{deg}(E_{\bullet})=\mathrm{pardeg}(E_{\bullet})+\mathrm{rank}(E) \ \mathrm{deg}(D).$
#### Classifying maps for filtered vector bundles:
Let $\mathcal{E}_{\bullet}$ be a family of filtered rank-r bundles over the smooth family of marked curves $(\mathcal{C},D)$ over $S$, parameterized by an $S$-variety $\mathcal{T}$, with fixed determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$ and fixed weights. For each $\lambda \in \mathbb{R}$ we get a rational map to the moduli space of semi-stable rank-r vector bundles of fixed determinant $$\begin{array}{ccccc}
\phi^{\mathcal{T}}_{\lambda} &: & \mathcal{T}& \dashrightarrow& \mathcal{SU}_{\mathcal{C}/S}(r,\delta(\lambda)) \\
& &t & \longmapsto & \mathcal{E}_{\lambda}\vert_{\mathcal{C}_t}
\end{array}$$ where for each $t \in \mathcal{T}$ we associate the curve $\mathcal{C}_t:=\pi_n^{-1}(t)=\pi_s^{-1}(\pi_e(t))$ and for each $\lambda \in \mathbb{R}$ we associate the line bundle $\delta(\lambda):= \det(\mathcal{E}_{\lambda}) \in \mathrm{Pic}^{d(\lambda)}(\mathcal{C}/S)$ of degree $d(\lambda)$ and denote $n(\lambda)=\gcd(r,d(\lambda))$.
# Line bundles over the moduli spaces of parabolic bundles
In this section, we recall the description of the Picard group of the relative moduli space of semi-stable vector bundles of fixed rank and determinant and also its ample generator and its canonical line bundle.\
Let $\pi_s: \mathcal{C}\longrightarrow S$ be a smooth family of projective complex curves of genus $g \geq 2$ ($g \geq 3$ if the rank is 2 and the degree is even). Suppose that the parabolic divisor is empty and that the parabolic type is trivial. Note that the trivial parabolic structure is just the structure of a vector bundle, and in this case parabolic semi-stability (resp. stability) coincides with semistability (resp. stability) of vector bundles. Thus the relative moduli space $\mathcal{SM}^{par}_{\mathcal{C}}(r,0_*,d)$ of Theorem [Theorem 5](#Mehta_seshadri){reference-type="ref" reference="Mehta_seshadri"} of rank-r parabolic bundles with determinant $\delta$ coincides with the coarse relative moduli space of semistable rank-r vector bundles with determinant $\delta$; we denote it by $$\mathcal{SU}_{\mathcal{C}/S}(r,d):=\mathcal{SM}^{par}_{\mathcal{C}/S}(r,0_*,d),$$ and it is an irreducible normal variety over $S$.
## Determinant line bundle
Let $\mathcal{E}$ be a family of semistable rank-r vector bundles with fixed determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$ parametrized by an $S$-variety $\mathcal{T}$. We then have the cartesian diagram $$\def\cartesien{%
\ar@{-}[]+R+<6pt,-1pt>;[]+RD+<6pt,-6pt>%
\ar@{-}[]+D+<1pt,-6pt>;[]+RD+<6pt,-6pt>%
}
\xymatrix{
\mathcal{C}\times_S \mathcal{T}\ar[rr]^{p_n} \ar[d]_{ p_w} \cartesien & & \mathcal{T}\ar[d]^{p_e} & \\
\mathcal{C} \ar[rr]_{\pi_s=p_s} & & S
}$$
**Definition 19** (Determinant line bundle $\cite{knudsen1976projectivity}$). Let $\mathcal{E}\rightarrow \mathcal{C}\times_S \mathcal{T}$ be a family of vector bundles. We define $$\mathrm{det} R^{\bullet}p_{n_*}\left(\mathcal{E}\right):=\left(\det p_{n_*}(\mathcal{E})\right)^{-1} \otimes \det R^1p_{n_*}(\mathcal{E}),$$ which is an element of $\mathrm{Pic}(\mathcal{T}/S)$; we call it the determinant line bundle associated to $\mathcal{E}$ with respect to the map $p_n:\mathcal{C}\times_S \mathcal{T}\longrightarrow \mathcal{T}$.
Drezet and Narasimhan described the ample generator of the relative Picard group $\mathrm{Pic}(\mathcal{SU}_{C}(r,\delta))$ and the canonical bundle of the moduli space $\mathcal{SU}_{C}(r,\delta)$, for any line bundle $\delta$ over the curve $\mathcal{C}$.
**Theorem 20** ( $\cite{drezet1989groupe}$, Theorems B & F). *We have the following properties*
1. *The relative Picard group $\mathrm{Pic}(\mathcal{SU}_{\mathcal{C}/S}(r,\delta)/S)$ is isomorphic to $\mathbb{Z}\mathcal{L}$, where $\mathcal{L}$ is an ample line bundle.*
2. *Set $n=\gcd(r,d)$, where $d=\mathrm{deg}(\delta)$. Then the dualizing sheaf of $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$ is $K_{\mathcal{SU}_{\mathcal{C}/S}(r,\delta)/S} \cong \mathcal{L}^{-2n}.$*
Let $\mathcal{E}$ be a virtual universal bundle over $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$. Then
- The relative ample generator of the Picard group is expressed as follows: $\mathcal{L}= \lambda(\mathcal{E}\otimes p_w^*(F)),$ where $F$ is a vector bundle over $\mathcal{C}$ such that $\mathrm{rank}(F)=\frac{r}{n}$ and $\mathrm{deg}(F)=-\frac{\chi(E)}{n}$.
- The canonical bundle satisfies the equalities $\cite{laszlo1997line}$ $$\mathcal{L}^{-2n}=K_{\mathcal{SU}_{\mathcal{C}/S}(r,\delta)/S}=\lambda \left( \mathrm{End}^0(\mathcal{E}) \right)^{-1}.$$
If $\mathcal{T}=\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ and $k$ is large enough, the pullbacks under the classifying morphisms ${\phi_{i,j}}, \phi$ (we drop the reference to the parameter space) of the ample generators of $\mathrm{Pic} (\mathcal{SU}_{\mathcal{C}/S}(r,\delta_{j}(i))/S)$ and $\mathrm{Pic}( \mathcal{SU}_{\mathcal{C}/S}(r,\delta)/S)$, respectively, extend to all the space $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$. We denote them by $\Theta_{j}(i)$ and $\Theta$, respectively.
**Theorem 21** ([@narasimhan1993factorisation]). *Let $\mathcal{E}$ be a relative family of rank-r vector bundles of fixed determinant $\delta \in \mathrm{Pic}^{d}(\mathcal{C}/S)$ parameterized by an $S$-scheme $\mathcal{T}$ over the family $p_s : \mathcal{C} \longrightarrow S$. Then we have $$\phi^*_{\mathcal{T}} \left( \mathcal{L}\right)=\lambda(\mathcal{E})^{\frac{r}{n}}\otimes \det \left( \mathcal{E}_{\sigma}\right)^{\aleph},$$ where $\phi_{\mathcal{T}}$ is the classifying morphism to $\mathcal{SU}_{\mathcal{C}}(r,\delta)$, the moduli space of semi-stable rank-r bundles with determinant $\delta$, $\sigma:S \longrightarrow \mathcal{C}$ is any section of the map $p_s$, and $$\aleph=\frac{d+r(1-g)}{n} \ \ \ \mathrm{and} \ \ \ n=\gcd(r,d).$$*
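For instance (with purely illustrative numbers), if $r=2$ and $d=\mathrm{deg}(\delta)=1$, then $n=\gcd(2,1)=1$, so that $$\aleph=\frac{1+2(1-g)}{1}=3-2g \qquad \mathrm{and} \qquad \phi^*_{\mathcal{T}} \left( \mathcal{L}\right)=\lambda(\mathcal{E})^{2}\otimes \det \left( \mathcal{E}_{\sigma}\right)^{3-2g}.$$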
## Parabolic determinant line bundle
Let $\mathcal{E}_*$ be a relative family of parabolic rank-r vector bundles of determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$ and fixed parabolic type $\alpha_*$ over a smooth family of curves $(\mathcal{C},D)/S$, parameterized by $\mathcal{T}/S$. Let $\pi_n: \mathcal{C} \times_S \mathcal{T}\longrightarrow \mathcal{T}$ be the projection map.
Assume the following condition: $kd+ \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i} m_j(i)a_j(i) \in r \mathbb{Z}\label{star}$.
**Definition 22**. $\cite{BR93}$ [\[parabolic-determinant-by-Bisas\]]{#parabolic-determinant-by-Bisas label="parabolic-determinant-by-Bisas"} We define the parabolic determinant line bundle as follows: $$\lambda_{par}(\mathcal{E}_*):= \lambda(\mathcal{E})^{k} \otimes \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i} \left\lbrace \det \left( F^{j}_i(\mathcal{E})/F^{j+1}_i(\mathcal{E})\right)^{-a_j(i)} \right\rbrace \otimes \det(\mathcal{E}_{\sigma})^{ \frac{k \chi_{par}}{r}},$$ which is a line bundle over $\mathcal{T}/S$, where
- $\mathcal{E}_{\sigma}:=\mathcal{E}\vert_{ \sigma(S) \times_S \mathcal{T}}$ for some section $\sigma$ of the map $\pi_s: \mathcal{C} \longrightarrow S$.
- The determinant line bundle: $\lambda(\mathcal{E}):=\det R^{\bullet}\pi_{n_*}(
\mathcal{E}):=\left( \det \pi_{n_*}\mathcal{E} \right)^{-1} \otimes \det R^1\pi_{n_*}(\mathcal{E}).$
- $\chi_{par}=d+r(1-g)+ \frac{1}{k}\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i} m_j(i)a_j(i) .$
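As a sanity check of the integrality condition above (the numerical values are chosen only for illustration): take $r=2$, $d=0$, a single marked point with $\ell_1=2$, multiplicities $m_1(1)=m_2(1)=1$, weights $a_1(1)=1$, $a_2(1)=3$ and $k=4$. Then $$kd+ \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i} m_j(i)a_j(i)=0+1+3=4 \in 2\mathbb{Z}, \qquad \chi_{par}=0+2(1-g)+\frac{4}{4}=3-2g,$$ so the exponent $\frac{k \chi_{par}}{r}=\frac{4(3-2g)}{2}=2(3-2g)$ is indeed an integer.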
# Hitchin connection in algebraic geometry {#chapter3}
In this section, we introduce the van Geemen-de Jong approach to constructing connections on the push-forward of a line bundle by producing heat operators on that line bundle. We will define connections, heat operators and the relation between them, and give the van Geemen-de Jong theorem, which is an algebro-geometric analogue of Hitchin's theorem in Kähler geometry. We follow [@van1998hitchin].\
Throughout this section, we take $\pi: \mathcal{M} \longrightarrow S$ a smooth surjective morphism of regular $\mathbb{C}$-schemes. We have the natural exact sequence associated to the differential map $$\xymatrix{
0 \ar[r] & T_{\mathcal{M}/S} \ar[r] & T_{\mathcal{M}} \ar[r]^{d\pi} & \pi^* \left( T_S \right) \ar[r] & 0.
}
\label{dpi}$$ Let $E$ be a locally free sheaf over $\mathcal{M}$. We denote by $\mathcal{D}^{(q)}_{\mathcal{M}}(E)$ the sheaf of differential operators of order at most $q$ on $E$. For each $q \in \mathbb{N}$, we have a natural inclusion $\mathcal{D}^{(q-1)}_{\mathcal{M}}(E) \hookrightarrow \mathcal{D}^{(q)}_{\mathcal{M}}(E).$ Hence we get the short exact sequence $$\xymatrix{
0 \ar[r] & \mathcal{D}^{(q-1)}_{\mathcal{M}}(E) \ar[r] & \mathcal{D}^{(q)}_{\mathcal{M}}(E) \ar[rr]^{\nabla_q} && \mathrm{Sym}^q(T_{\mathcal{M}})\otimes \mathrm{End}(E) \ar[r] & 0,
}$$ where $\mathrm{Sym}^q(T_{\mathcal{M}})$ is the $q$-th symmetric power and the natural quotient map $\nabla_q$ is called the symbol map of order $q$. We define the sheaf $\mathcal{D}^{(q)}_{\mathcal{M}/S}(E)$ of relative differential operators with respect to the map $\pi: \mathcal{M} \rightarrow S$ as the sub-sheaf of operators that are $\pi^{-1}(\mathcal{O}_S)$-linear. By restriction to this sub-sheaf we get a map $$\nabla_q: \mathcal{D}^{(q)}_{\mathcal{M}/S}(E) \longrightarrow \mathrm{Sym}^q(T_{\mathcal{M}})\otimes \mathrm{End}(E)$$ whose image lies in the sub-sheaf $\mathrm{Sym}^q( T_{\mathcal{M}/S})\otimes \mathrm{End}(E)$. Hence we obtain a relative symbol map $$\nabla_q: \mathcal{D}^{(q)}_{\mathcal{M}/S}(E) \longrightarrow \mathrm{Sym}^q( T_{\mathcal{M}/S})\otimes \mathrm{End}(E) .$$
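Concretely, here is a local illustration (with $E=\mathcal{O}_{\mathcal{M}}$ trivial and $\mathcal{M}$ of relative dimension one, with fibre coordinate $x$): a relative operator $D=a(x)\partial_x^2+b(x)\partial_x+c(x) \in \mathcal{D}^{(2)}_{\mathcal{M}/S}(\mathcal{O}_{\mathcal{M}})$ has second-order symbol $$\nabla_2(D)=a(x)\, \partial_x \otimes \partial_x \in \mathrm{Sym}^2( T_{\mathcal{M}/S}),$$ so the symbol records exactly the leading coefficient of $D$.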
## Atiyah classes and connections on vector bundles
We follow Atiyah's description of Atiyah algebroids and exact sequences $\cite{atiyah1957complex}$ in the context of vector bundles rather than working with principal bundles.
**Definition 23** (Atiyah Class). Let $E$ be a vector bundle over $\mathcal{M}$. Then the Atiyah exact sequence associated to $E$ is given by the following pull-back $$\xymatrix{
0 \ar[r] & \mathrm{End}(E) \ar[r] \ar@{=}[d] & \mathcal{A}_{\mathcal{M}}(E) \ar[r]^{\nabla_1} \ar@{^{(}->}[d] & T_{\mathcal{M}} \ar[r] \ar@{^{(}->}[d]^{-\otimes id} & 0 \\
0 \ar[r] & \mathrm{End}(E) \ar[r] & \mathcal{D}^{(1)}_{\mathcal{M}}(E) \ar@{->>}[r]^{\nabla_1} & T_{\mathcal{M}} \otimes \mathrm{End}(E) \ar[r] & 0 }$$ The sheaf $\mathcal{A}_{\mathcal{M}}(E)$ is called the Atiyah algebroid of $E$, we denote its extension class by $at_{\mathcal{M}}(E)\in \mathrm{Ext}^1(T_{\mathcal{M}},\mathrm{End}(E)) \simeq \mathrm{H}^1(\mathcal{M},\Omega^1_{\mathcal{M}} \otimes \mathrm{End}(E))$[^6]. For a line bundle $L$ over $\mathcal{M}$ the Atiyah sequence coincides with $$\xymatrix{
0 \ar[r] & \mathcal{O}_{\mathcal{M}} \ar[r] & \mathcal{D}^{(1)}_{\mathcal{M}}(L) \ar[r]^{\nabla_1} & T_{\mathcal{M}} \ar[r] & 0 }
\label{atiyah-classe}$$
Note that the Atiyah class can be obtained by tensoring the Atiyah exact sequence of Definition [Definition 23](#Atiyah-definition){reference-type="ref" reference="Atiyah-definition"} with the cotangent sheaf $\Omega^1_{\mathcal{M}}$ $$\xymatrix{
0 \ar[r] & \mathrm{End}(E) \otimes \Omega^1_{\mathcal{M}} \ar[r] & \mathcal{A}_{\mathcal{M}}(E) \otimes \Omega^1_{\mathcal{M}} \ar[r]^{\nabla_1} & T_{\mathcal{M}}\otimes \Omega^1_{\mathcal{M}} \ar[r] & 0
}$$ and taking the connecting morphism in the long exact sequence in cohomology $$\delta_1: \mathrm{H}^0 \left( \mathcal{M},\mathrm{End}( T_{\mathcal{M}}) \right) \longrightarrow \mathrm{H}^1 \left(\mathcal{M}, \mathrm{End}(E)\otimes \Omega^1_{\mathcal{M}} \right)$$ the class $at_{\mathcal{M}}(E)$ is given by $\delta_1(\mathrm{Id})$. We have the following lemma of Atiyah [@atiyah1957complex].
**Lemma 24**. *Let $X$ be a smooth algebraic variety, $L$ a line bundle and $k$ a nonzero integer. Then we have an isomorphism of short exact sequences $$\xymatrix{
0 \ar[r] & \mathcal{O}_X \ar[r] \ar[d] & \mathcal{A}_X(L^k) \ar@{->>}[r]^{\nabla_1} \ar[d] & T_X \ar[r] \ar[d]^{\mathrm{id}} & 0 \\
0 \ar[r] &\mathcal{O}_X \ar[r]^{1/k} & \mathcal{A}_X(L) \ar[r]^{\nabla_1} & T_X \ar[r] & 0 }$$*
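In particular, comparing extension classes in $\mathrm{H}^1(X,\Omega^1_{X})$, the lemma encodes the standard multiplicativity of the Atiyah class, which is the form in which it will be used below (with $L=\mathcal{L}$ and $k=-2n$): $$at_{X}(L^k)=k \cdot at_{X}(L).$$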
We define a relative version of Atiyah algebroid denoted by $\mathcal{A}_{\mathcal{M}/S}(E)$, given by taking the pull-back $$\xymatrix{
0 \ar[r] & \mathrm{End}(E )\ar[r] \ar@{=}[d] & \mathcal{A}_{\mathcal{M}/S}(E) \ar[r] \ar@{^{(}->}[d] & T_{\mathcal{M}/S} \ar[r] \ar@{^{(}->}[d]^{\iota} & 0 \\
0 \ar[r] & \mathrm{End}(E) \ar[r] & \mathcal{A}_{\mathcal{M}}(E) \ar[r]^{\nabla_1} & T_{\mathcal{M}} \ar[r] & 0 }$$ as an extension, we have that $at_{\mathcal{M}/S}(E)$ in a section of $\mathcal{E}xt^{1}(T_{\mathcal{M}/S},\mathrm{End}(E))\simeq R^1 \pi_*(\Omega^1_{\mathcal{M}/S} \otimes \mathrm{End}(E)).$ For a line bundle $L \in \mathrm{Pic}(\mathcal{M})$, we denote this class by $[L] \in \mathrm{H}^0 \left( S, R^1\pi_*(\Omega^1_{\mathcal{M}/S}) \right)$.\
For our purpose, we need the trace-free Atiyah algebroid of vector bundles with fixed determinant. We have a direct sum decomposition $\mathrm{End}(E)=\mathrm{End}^0(E) \oplus \mathcal{O}_{\mathcal{M}}$, and we denote by $q: \mathrm{End}(E)\rightarrow \mathrm{End}^0(E)$ the first projection map. Then the trace-free Atiyah algebroid is given by the push-out of the standard Atiyah sequence by the map $q$ as follows $$\xymatrix{
0 \ar[r] & \mathrm{End}(E)\ar[r] \ar[d]^q & \mathcal{A}_{\mathcal{M}}(E) \ar[r] \ar[d] & T_{\mathcal{M}} \ar[r] \ar[d] & 0 \\
0 \ar[r] & \mathrm{End}^0(E)\ar[r] & \mathcal{A}^0_{\mathcal{M}}(E) \ar[r] & T_{\mathcal{M}} \ar[r] & 0,
}$$ In the same way, we define the relative trace-free Atiyah algebroid $\mathcal{A}_{\mathcal{M}/S}^0(E)$.
## Heat operators
Let $L$ be a line bundle over $\mathcal{M}$ such that $\pi_*L$ is a locally free vector bundle over $S$. We are interested in the subsheaf of the differential operators of degree 2 given by $$\mathcal{W}_{\mathcal{M}/S}(L):= \mathcal{D}^{(1)}_{\mathcal{M}}(L)+\mathcal{D}^{(2)}_{\mathcal{M}/S}(L).$$ We still denote by $\nabla_2$ the restriction of the symbol map to this sub-sheaf. We define the subprincipal symbol $$\sigma_S: \mathcal{W}_{\mathcal{M}/S}(L) \longrightarrow \pi^* T_S,$$ characterized by the property that for every local section $s$ of $L$, every local section $f$ of $\mathcal{O}_S$ and every $H \in \mathcal{W}_{\mathcal{M}/S}(L)$ we have $$\langle \sigma_S(H),d(\pi^*f) \rangle \, s =H(\pi^*f \, s)-\pi^*f \, H(s).$$ The elements of the sheaf $\mathcal{W}_{\mathcal{M}/S}(L)$ satisfy the Leibniz rule (which follows from the properties of the second symbol map) $$H(fgs)= \left\langle \nabla_2(H),df \otimes dg \right\rangle s +fH(gs)+gH(fs)-fgH(s).$$ Thus, we get a short exact sequence $$0 \longrightarrow \mathcal{D}^{(1)}_{\mathcal{M}/S}(L) \longrightarrow
\mathcal{W}_{\mathcal{M}/S}(L) \overset{\sigma_S \oplus \nabla_2}{\longrightarrow} \pi^* ( T_{S})\oplus \mathrm{Sym}^2(T_{\mathcal{M}/S} ) \longrightarrow 0.
\label{heat-operator-sequence}$$
**Definition 25** (Heat operator $\cite{van1998hitchin}$). A heat operator $H$ on $L$ is an $\mathcal{O}_S$-linear map of coherent sheaves $$H : T_S \longrightarrow \pi_*\mathcal{W}_{\mathcal{M}/S}(L)$$ such that $\sigma_S \circ \tilde{H}=Id,$ where $\tilde{H}$ is the $\mathcal{O}_{\mathcal{M}}$-linear map associated to $H$ by adjunction $$\tilde{H} : \pi^* T_S \longrightarrow \mathcal{W}_{\mathcal{M}/S}(L).$$ A projective heat operator $H$ on $L$ is defined in the same way, but with values in the sheaf $\left( \pi_* \mathcal{W}_{\mathcal{M}/S}(L)\right)/\mathcal{O}_S$.
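A minimal model example, outside the moduli-theoretic setting of this paper: take $S=\mathbb{A}^1$ with coordinate $t$, $\mathcal{M}=\mathbb{A}^1 \times S$ with fibre coordinate $x$, and $L=\mathcal{O}_{\mathcal{M}}$. Then $$H(\partial_t):= \partial_t-\partial_x^2 \in \mathcal{W}_{\mathcal{M}/S}(L)$$ defines a heat operator: its subprincipal symbol is $\sigma_S\left(H(\partial_t)\right)=\partial_t$, its second-order symbol is $\nabla_2\left(H(\partial_t)\right)=-\,\partial_x \otimes \partial_x \in \mathrm{Sym}^2(T_{\mathcal{M}/S})$, and the equation $H(\partial_t)s=0$ is the classical heat equation $\partial_t s=\partial_x^2 s$.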
#### Symbol of heat operators:
The symbol map of a (projective) heat operator $H$ is the map $$\rho_H:=\pi_*(\nabla_2) \circ H : T_S \longrightarrow \pi_* \mathrm{Sym}^2(T_{\mathcal{M}/S}).$$
#### The map $\mu_L$:
To any line bundle $L$ over $\mathcal{M}$ we associate the exact sequence $$0 \longrightarrow T_{\mathcal{M}/S} \longrightarrow \mathcal{D}_{\mathcal{M}/S}^{(2)} (L)/\mathcal{O}_{\mathcal{M}} \longrightarrow \mathrm{Sym}^2(T_{\mathcal{M}/S}) \longrightarrow 0,$$ and the first connecting morphism in cohomology with respect to the map $\pi$ gives rise to a map $$\mu_{L}: \pi_* \mathrm{Sym}^2( T_{\mathcal{M}/S}) \longrightarrow R^1 \pi_{*} \left( T_{\mathcal{M}/S} \right).$$
We have the following description.
**Proposition 26** ($\cite{welters1983polarized,pauly2023hitchin}$). *The first connecting morphism is given by the following formula: $\mu_{L}=\cup \left[ L \right]-\cup \left( \frac{1}{2} \left[ K_{\mathcal{M}/S} \right] \right)$, where $K_{\mathcal{M}/S}$ is the relative canonical line bundle.*
## A heat operator for a candidate symbol
As in Hitchin's theorem, van Geemen-de Jong gave conditions under which a candidate symbol $$\rho: T_S \longrightarrow \pi_* \left( \mathrm{Sym}^2(T_{\mathcal{M}/S})\right)$$ can be lifted to a (projective) heat operator, i.e., conditions under which there is a (projective) heat operator $H$ with $\rho_H=\rho$. The answer is given in the following theorem.
**Theorem 27** (van Geemen-de Jong, $\cite{van1998hitchin}$, §2.3.7). *Let $L \in \mathrm{Pic}(\mathcal{M})$ and $\pi: \mathcal{M} \longrightarrow S$ be as before, and let $\rho: T_S \longrightarrow \pi_* \mathrm{Sym}^2 T_{\mathcal{M}/S}$ be a map such that*
1. *$\kappa_{\mathcal{M}/S}+\mu_{L}\circ \rho=0$,*
2. *The map $\cup [L]:\pi_* T_{\mathcal{M}/S} \longrightarrow R^1\pi_* \mathcal{O}_{\mathcal{M}}$, is an isomorphism, and*
3. *$\pi_* \mathcal{O}_{\mathcal{M}}=\mathcal{O}_S$.*
*Then, there exists a unique projective heat operator $H$ whose symbol is $\rho$.*
**Theorem 28** (Flatness criterion, [@pauly2023hitchin] Theorem 3.5.1). *Under the assumptions of Theorem [Theorem 27](#van Geemen and De Jong){reference-type="ref" reference="van Geemen and De Jong"}, the projective connection associated to the symbol $\rho$ is projectively flat if the following conditions hold.*
1. *The symbol Poisson-commutes with respect to the natural symplectic form on the relative cotangent bundle $T^{\vee}_{\mathcal{M}/S}$, i.e., for all local sections $\theta, \theta'$ of $T_S$, we have $\{ \rho(\theta),\rho(\theta')\}_{T^{\vee}_{\mathcal{M}/S}}=0.$*
2. *The morphism $\mu_L$ is injective.*
3. *There are no vertical vector fields, $\pi_* \left( T_{\mathcal{M}/S}\right)=0$.*
# Parabolic and strongly parabolic Atiyah sequences
In this section, we prove the main theorem, which generalises the algebro-geometric construction in $\cite{pauly2023hitchin}$ of the Hitchin connection over $\mathcal{SU}_{\mathcal{C}}(r,\mathcal{O}_{\mathcal{C}})$, the relative moduli space of rank-r vector bundles with trivial determinant over a smooth family of complex projective curves of genus $g \geq 2$, to $\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)$, the relative moduli space of parabolic rank-r vector bundles of fixed determinant $\delta \in \mathrm{Pic}(\mathcal{C}/S)$ and of fixed parabolic type $\alpha_*$.\
Let $S$ be a smooth complex algebraic variety. We take a smooth family over $S$ of projective marked curves $(\mathcal{C},D)$, where the divisor $D$ is given by $N$ sections of the map $\pi_s$, so that its relative degree is $N$, and we let $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$ be a relative line bundle over the family of curves. Let $\mathcal{U}$ be a family of rank-r vector bundles with fixed determinant $\delta$, and let $\mathcal{E}_*$ be a family of rank-r parabolic vector bundles of fixed parabolic type $\alpha_*$ and fixed determinant $\delta$ over $(\mathcal{C},D)/S$ parameterized by an $S$-scheme $\mathcal{T}$, such that we have the following fibre product $$\xymatrix{
\mathcal{X}:=\mathcal{C} \times_S \mathcal{T}\ar[rr]^{\pi_n} \ar[d]_{\pi_w} && \mathcal{T}\ar[d]^{\pi_{e}} & \\
\ \left( \mathcal{C},D \right) \ar[rr]_{\pi_s} && \ar@/^1pc/[ll]^{ \sigma_{i}} S
}$$ Set $\mathcal{D}:=\pi_w^{-1}(D)=D \times_S \mathcal{T}$. We define the quasi-parabolic and strongly quasi-parabolic Atiyah sequences and algebroids, which we use to study deformations of marked curves equipped with quasi-parabolic vector bundles and to establish the existence of a Kodaira-Spencer map in the parabolic case.
**Definition 29** (Quasi-parabolic Atiyah algebroid (QPA)). Taking the pushout of the relative Atiyah exact sequence of the parabolic bundle $\mathcal{E}_*$ by the inclusion $\mathrm{End}^0(\mathcal{E})\hookrightarrow \mathrm{SparEnd}^0(\mathcal{E})^{ \vee}$, we get $$\xymatrix{
0 \ar[r] & \mathrm{End}^0(\mathcal{E}) \ar[r] \ar@{^{(}->}[d] & \mathcal{A}^{0}_{\mathcal{X}/ \mathcal{T}}(\mathcal{E}) \ar[r] \ar[r] \ar[d] & T_{\mathcal{X}/\mathcal{T}} \ar[r] \ar@{=}[d] & 0 \\
0 \ar[r] & \mathrm{SparEnd}^0(\mathcal{E})^{\vee} \ar[r]& \mathcal{A}_{1} \ar[r] & \pi_w^* T_{\mathcal{C}/S} \ar[r] & 0}$$ The QPA sequence is then given by tensoring the bottom exact sequence with $\mathcal{O}_{\mathcal{X}}\left(-\mathcal{D} \right)$: $$\xymatrix{
0 \ar[r] & \mathrm{parEnd}^0(\mathcal{E})\ar[r] & \mathcal{A}^{0,par}_{\mathcal{X}/\mathcal{T}} (\mathcal{E})\ar[r] & \pi_w^* T_{\mathcal{C}/S}(-D) \ar[r] & 0,
}$$ and the QPA algebroid is given by $\mathcal{A}^{0,par}_{\mathcal{X}/\mathcal{T}} (\mathcal{E}):=\mathcal{A}_{1}\otimes \mathcal{O}_{\mathcal{X}}\left(-\mathcal{D} \right).$
**Definition 30** (Strongly quasi-parabolic Atiyah algebroid (SQPA)). Taking the pushout of the Atiyah exact sequence of the parabolic bundle $\mathcal{E}_*$ by the inclusion $\mathrm{End}^0(\mathcal{E})\hookrightarrow \mathrm{parEnd}^0(\mathcal{E})^{ \vee}$, we get $$\xymatrix{
0 \ar[r] & \mathrm{End}^0(\mathcal{E}) \ar[r] \ar@{^{(}->}[d] & \mathcal{A}^{0}_{\mathcal{X}/\mathcal{T}}(\mathcal{E}) \ar[r] \ar[d] & T_{\mathcal{X}/\mathcal{T}} \ar[r] \ar@{=}[d]&0 \\
0 \ar[r] & \mathrm{parEnd}^0(\mathcal{E})^{\vee} \ar[r]& \mathcal{A}_{2} \ar[r] & \pi_w^* T_{\mathcal{C}/S} \ar[r] & 0
}$$ The SQPA sequence is then given by tensoring the bottom exact sequence with $\mathcal{O}_{\mathcal{X}}\left(-\mathcal{D} \right)$: $$\xymatrix{
0 \ar[r] & \mathrm{SparEnd}^0(\mathcal{E})\ar[r] & \mathcal{A}^{0,par,St}_{\mathcal{X}/\mathcal{T}} (\mathcal{E})\ar[r] & \pi^*_w T_{\mathcal{C}/S} \left(-D \right) \ar[r] & 0,
}$$ and the SQPA algebroid is given by $\mathcal{A}^{0,par,St}_{\mathcal{X}/\mathcal{T}} (\mathcal{E}):=\mathcal{A}_{2}\otimes \mathcal{O}_{\mathcal{X}} \left(-\mathcal{D} \right).$
# Trace complex theory over $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$
## Trace complex theory over $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$
The main ingredient in [@pauly2023hitchin] is a description of the Atiyah class of the determinant line bundle $\mathcal{L}$ over the moduli space $\mathcal{SU}_{\mathcal{C}/S}(r)$ in terms of the Atiyah class of a universal family of rank-r vector bundles. In this subsection, we establish the same relation on the moduli space $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$, based on Beilinson-Schechtman's trace-complex theory, the Sun-Tsai isomorphism (Theorem [Theorem 31](#BEA){reference-type="ref" reference="BEA"} below) and the Bloch-Esnault complex (Theorem [Theorem 32](#BSBE){reference-type="ref" reference="BSBE"}). Here we do not need the definition of the trace complex; we use Sun-Tsai's characterization of the (-1)-Bloch-Esnault term as a definition. We recall the following fibre product $$\xymatrix{
\mathcal{X}:=\mathcal{C}\times_S \mathcal{SU}_{\mathcal{C}/S}(r,\delta) \ar[rr]^{p_n} \ar[d]_{p_w} & & \mathcal{SU}_{\mathcal{C}/S}(r,\delta) \ar[d]^{p_e} & \\
\mathcal{C} \ar[rr]_{p_s= \pi_{s}} & & S
}$$
Let $\mathcal{U}$ be a universal vector bundle over $\mathcal{C}\times_S \mathcal{SU}_{\mathcal{C}/S}(r,\delta)$. The following theorem gives a characterization of the (-1)-Bloch-Esnault algebra $^{0}\mathcal{B}^{-1}_{\mathcal{SU}_{\mathcal{C}/S}/S}(\mathcal{U}),$ which we will use as a definition.
**Theorem 31** ([@sun2004hitchin]). *There is a canonical isomorphism of short exact sequences $$\xymatrix{
0 \ar[r] & T^{\vee}_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}/S}} \ar[r] \ar[d]^{\cong} & \mathcal{A}^{0}_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}}(\mathcal{U})^{\vee} \ar[r] \ar[d]^{\cong} & \mathrm{End}(\mathcal{U})^{\vee}\ar[r] \ar[d]^{\cong}_{-Tr}&0 \\
0 \ar[r] & K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}/S}} \ar[r] & ^{0}\mathcal{B}^{-1}_{\mathcal{SU}_{\mathcal{C}/S}/S}(\mathcal{U}) \ar[r] & \mathrm{End}(\mathcal{U}) \ar[r] & 0
}$$ where $K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}/S}}$ is the relative canonical bundle with respect to the map $p_n$.*
**Theorem 32** (Beilinson- Schechtman [@beilinson1988determinant], Bloch-Esnault [@esnault2000determinant]). *There is a canonical isomorphism of exact sequences over $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$ $$\xymatrix{
0 \ar[r] & R^1 p_{n_*} (K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}/S}}) \ar[r] \ar[d]_{2r \cdot \mathrm{id}}^{\cong} & R^1 p_{n_*} ( ^0\mathcal{B}^{-1}(\mathcal{U})) \ar[r] \ar[d]^{\cong} & {R^1 p_{n}}_* (\mathrm{End}^{0}(\mathcal{U})^{\vee})\ar[r] \ar[d]^{-Tr}_{\cong} &0 \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0 \ar[r] & \mathcal{O}_{\mathcal{SU}_{\mathcal{C}/S}}\ar[r] & \mathcal{A}_{\mathcal{SU}_{\mathcal{C}/S}/S}\left(\lambda(\mathrm{End}^0 \left(\mathcal{U} \right)\right) \ar[r] & T_{\mathcal{SU}_{\mathcal{C}/S}/S} \ar[r] & 0
}$$*
Combining these two results, we get the following theorem, proven in [@pauly2023hitchin] for $\delta=\mathcal{O}_{\mathcal{C}}$; their proof works for any relative line bundle $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$.
**Theorem 33**. *There is a canonical isomorphism of exact sequences over $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$ $$\xymatrix{
0\ar[r] & R^1 p_{n_*} (K_{\mathcal{X}/ \mathcal{SU}}) \ar[r] \ar[d]_{\frac{r}{n}\mathrm{Id}}^{\cong} & R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}}(\mathcal{U})^{\vee} \right) \ar[r] \ar[d]_{\cong} & R^1 p_{n_*} \left( \mathrm{End}^0(\mathcal{U})^{\vee} \right) \ar[d]^{\mathrm{Id}}_{\cong} \ar[r] & 0 \\
0\ar[r] & \mathcal{O}_{\mathcal{SU}_{\mathcal{C}}}\ar[r] & \mathcal{A}_{\mathcal{SU}_{\mathcal{C}}/S}(\mathcal{L}) \ar[r] ^{\nabla_1} & T_{\mathcal{SU}_{\mathcal{C}}/S} \ar[r] & 0
}$$ where $\mathcal{L}$ is the relative ample generator of the group $\mathrm{Pic}\left(\mathcal{SU}_{\mathcal{C}}(r,\delta)/S \right)$ and $n=\gcd(r,\mathrm{deg}(\delta))$.*
By Theorems [Theorem 31](#BEA){reference-type="ref" reference="BEA"} and [Theorem 32](#BSBE){reference-type="ref" reference="BSBE"}, one has the following isomorphism of short exact sequences over $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$ $$\xymatrix{
0\ar[r] & R^1 p_{n_*}\left( K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}} \right) \ar[r] \ar[d]_{2r\cdot \mathrm{id}}^{\cong} & R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}}(\mathcal{U})^{\vee} \right) \ar[r] \ar[d]^{\cong} & R^1 p_{n_*} \left(\mathrm{End}^0(\mathcal{U})^{\vee} \right) \ar[d]_{\cong}^{-Tr} \ar[r] & 0 \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0\ar[r] & \mathcal{O}_{\mathcal{SU}_{\mathcal{C}}}\ar[r] & \mathcal{A}_{\mathcal{M}/S}\left(\lambda(\mathrm{End}^0(\mathcal{
U})) \right) \ar[r] ^{\nabla_1} & T_{\mathcal{SU}_{\mathcal{C}}/S} \ar[r] & 0
}$$ By Drezet-Narasimhan theorem [Theorem 20](#Drezet_Narasimhan){reference-type="ref" reference="Drezet_Narasimhan"} and [@laszlo1997line], we have $\lambda(\mathrm{End}^0(\mathcal{
U}))=K_{\mathcal{SU}_{\mathcal{C}}}=\mathcal{L}^{-2n}.$ Hence, we get the following isomorphism $$\xymatrix{
0\ar[r] & R^1 p_{n_*}\left( K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}} \right) \ar[r] \ar[d]_{2r\cdot \mathrm{id}}^{\cong} & R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}}(\mathcal{U})^{\vee} \right) \ar[r] \ar[d]^{\cong} & R^1 p_{n_*} \left(\mathrm{End}^0(\mathcal{U})^{\vee} \right) \ar[d]_{\cong}^{-Tr} \ar[r] & 0 \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0\ar[r] & \mathcal{O}_{\mathcal{SU}_{\mathcal{C}}}\ar[r] & \mathcal{A}_{\mathcal{M}/S}\left(\mathcal{L}^{-2n} \right) \ar[r] ^{\nabla_1} & T_{\mathcal{SU}_{\mathcal{C}}/S} \ar[r] & 0
}$$ and by applying Lemma [Lemma 24](#multiplicity){reference-type="ref" reference="multiplicity"} (for $k=-2n$ and $L=\mathcal{L}$), we get $$\xymatrix{
0\ar[r] & R^1 p_{n_*}\left( K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}} \right) \ar[r] \ar[d]_{2r\cdot \mathrm{id}}^{\cong} & R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}}(\mathcal{U})^{\vee} \right) \ar[r] \ar[d]^{\cong} & R^1 p_{n_*} \left(\mathrm{End}^0(\mathcal{U})^{\vee} \right) \ar[d]_{\cong}^{-Tr} \ar[r] & 0 \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0\ar[r] & \mathcal{O}_{\mathcal{SU}_{\mathcal{C}}}\ar[r]^{\frac{1}{2n}} & \mathcal{A}_{\mathcal{M}/S}\left(\mathcal{L}\right) \ar[r] ^{-\nabla_1} & T_{\mathcal{SU}_{\mathcal{C}}/S} \ar[r] & 0
}$$ The right vertical map is $-Tr$, the left vertical map is $2r \,\mathrm{Id}$, and the extension class of the last exact sequence is $-2n \left[\mathcal{L}\right]$ in $\mathrm{H}^0\left(S, R^1 \pi_*\left(\Omega^1_{\mathcal{SU}_{\mathcal{C}/S}(r,\delta)/S}\right)\right)$. Hence we conclude that the extension class of the exact sequence $$\xymatrix{
0 \ar[r] & R^1 p_{n_*}\left( K_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}} \right) \ar[r] & R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}}(\mathcal{U})^{\vee} \right) \ar[r] & R^1 p_{n_*} \left(\mathrm{End}^0(\mathcal{U})^{\vee} \right) \ar[r] & 0
}$$ equals $\frac{n}{r} \left[\mathcal{L}\right]$. This concludes the proof. $\square$
## Parabolic Bloch-Esnault complex
Now, we work over $\mathcal{SM}^{par}_{\mathcal{C}/S}:=\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$. We denote by $\mathcal{E}_*$ a virtual universal parabolic bundle over $\mathcal{X}^{par}=\mathcal{C}\times_S \mathcal{SM}^{par}_{\mathcal{C}/S}$. For our needs, we define the $(-1)$-term of the parabolic Bloch-Esnault complex as follows.
**Definition 34**. We define the (-1)-term of the parabolic Bloch-Esnault complex $^0\mathcal{P}^{-1}(\mathcal{E})$ as the pullback of the (-1)-term of the Bloch-Esnault complex $^0\mathcal{B}^{-1}(\mathcal{E})$ by the natural inclusion $\mathrm{parEnd}^0(\mathcal{E}) \hookrightarrow \mathrm{End}^0(\mathcal{E})$, as follows $$\xymatrix{
0 \ar[r] & K_{\mathcal{X}^{par}/\mathcal{SM}^{par}} \ar[r] \ar@{=}[d] & ^0\mathcal{P}^{-1}(\mathcal{E}) \ar[r] \ar[d] & \mathrm{parEnd}^0(\mathcal{E}) \ar[r]\ar[d] & 0 \\
0 \ar[r] & K_{\mathcal{X}^{par}/\mathcal{SM}^{par}} \ar[r] & ^0\mathcal{B}^{-1}(\mathcal{E}) \ar[r] & \mathrm{End}^0(\mathcal{E}) \ar[r] &0
}$$ where $K_{\mathcal{X}^{par}/\mathcal{SM}^{par}}$ is the relative canonical line bundle with respect to the map $\pi_n$.
We apply $R^1\pi_{n_*}$ to the (-1)-Bloch-Esnault term exact sequence $$\xymatrix{
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r] \ar@{=}[d] & R^1 \pi_{n_*} \left( ^0\mathcal{P}^{-1}(\mathcal{E}) \right) \ar[r] \ar[d] & R^1 \pi_{n_*} \left( \mathrm{parEnd}^{0}(\mathcal{E}) \right) \simeq T_{\mathcal{SM}^{par}_{\mathcal{C}} / S} \ar[r]\ar[d] & 0 \\
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r] & R^1 \pi_{n_*}( ^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r] & R^1 \pi_{n_*}( \mathrm{End}^{0}(\mathcal{E})) \ar[r] &0
}$$ The bottom exact sequence is the pullback of the Bloch-Esnault exact sequence of the vector bundle $\mathcal{E}$, seen as a family over the space $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$, by the forgetful morphism $$\phi: \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta) \longrightarrow \mathcal{SU}_{\mathcal{C}}(r,\delta)$$ which can be lifted to a map on the fibre product $$\phi: \mathcal{C}\times_S\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta) \longrightarrow \mathcal{C}\times_S\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$$
We can choose a virtual universal bundle $\mathcal{U}$ over $\mathcal{C}\times_S\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$ such that $\phi^*\left( \mathcal{U}\right) \cong \mathcal{E}$. Moreover, the differential map $$d \phi: T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \longrightarrow \phi^*\left(T_{\mathcal{SU}_{\mathcal{C}}/S} \right)$$ is given by applying $R^1 \pi_{n_*}$ to the natural inclusion $\mathrm{parEnd}^0 \left(\mathcal{E}\right) \hookrightarrow \mathrm{End}^0\left( \phi^* \left(\mathcal{U}\right)\right).$\
By construction, we get the following parabolic analogue of Theorem [Theorem 31](#BEA){reference-type="ref" reference="BEA"}.
**Proposition 35**. *Let $\mathcal{E}_*$ be a virtual universal parabolic bundle over $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$. Then, there is a canonical isomorphism $^0\mathcal{P}^{-1}(\mathcal{E}) \simeq \left[ \mathcal{A}^{0,par,St}_{\mathcal{X}^{par}/\mathcal{SM}^{par}} (\mathcal{E}) \left(\mathcal{D} \right)\right]^{\vee}$, such that $$\xymatrix{
0 \ar[r] & K_{\mathcal{X}^{par}/\mathcal{SM}^{par}} \ar[r] \ar[d]^{\cong} & ^0\mathcal{P}^{-1}(\mathcal{E}) \ar[r] \ar[d]^{\cong} & \mathrm{parEnd}^0(\mathcal{E}) \ar[r]\ar[d]^{\cong} & 0 \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0 \ar[r] & K_{\mathcal{X}^{par}/\mathcal{SM}^{par}} \ar[r] & \left[ \mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{SM}^{par}} (\mathcal{E}) \left(\mathcal{D} \right) \right]^{\vee} \ar[r] & \mathrm{parEnd}^0(\mathcal{E}) \ar[r] &0
}$$*
Hence, we get the following parabolic version of [@pauly2023hitchin] Theorem 4.4.1.
**Theorem 36**. *Let $\mathcal{E}_{\bullet}=(\mathcal{E}_{\lambda})_{\lambda \in \mathbb{R}}$ be a virtual universal filtered bundle over $\mathcal{M}_{\bullet}\simeq \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$. Then for each $\lambda \in \mathbb{R}$, we have the following isomorphism of short exact sequences over $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ $$\xymatrix{
R^1 \pi_{n_*} \left( K_{\mathcal{X}^{par}/\mathcal{M}{\bullet}} \right) \ar@{^{(}->}[r] \ar[d]^{\simeq}_{{\frac{r}{n(\lambda)}}} & R^1 \pi_{n_*} \left( \left[ \mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{M}_{\bullet}} (\mathcal{E}_{\lambda} )(\mathcal{D} )\right]^{\vee} \right)\ar@{->>}[r] \ar[d]^{\simeq} & R^1 \pi_{n_*} \left(\mathrm{parEnd}^0(\mathcal{E}_{\lambda})\right) \ar[d]^{\cong} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathcal{O}_{\mathcal{M}_{\bullet}} \ar@{^{(}->}[r] & \mathcal{A}_{\mathcal{M}_{\bullet}/S}\left( \Theta(\lambda) \right) \ar@{->>}[r] ^{\nabla_1} & T_{\mathcal{M}_{\bullet}/S}
}$$ where $\Theta(\lambda)$ is the pullback of the ample generator of the group $\mathrm{Pic}\left( \mathcal{SU}_{\mathcal{C}/S}(r,\delta_{\lambda})/S \right)$ by the classifying maps $$\begin{array}{ccccc}
\phi_{\lambda} &: & \mathcal{M}_{\bullet} & \longrightarrow & \mathcal{SU}_{\mathcal{C}/S}(r,\delta(\lambda)) \\
& & \mathcal{E}_{\bullet} & \longmapsto & \mathcal{E}_{\lambda}
\end{array}$$ where we set $d(\lambda)=\mathrm{deg}\ \delta(\lambda)$ and $n(\lambda)=\gcd \left(r, d(\lambda) \right)$. The theorem is equivalent to the equality $$\frac{r}{n(\lambda)} \Delta_{\lambda}= [ \Theta(\lambda)] \in \mathrm{H}^0\left( S,R^1 \pi_{e_*} \left( \Omega^1_{\mathcal{M}_{\bullet}/S} \right)\right),$$ where we denote by $\Delta_{\lambda}$ the extension class of the first exact sequence.*
This theorem is equivalent, in the parabolic setting, to the following theorem using Hecke modification.
**Theorem 37**. *Under the same hypotheses, let $\mathcal{E}_*$ be a virtual parabolic universal bundle. Then we have the following isomorphism of short exact sequences over $\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)$ $$\xymatrix{
R^1 \pi_{n_*} \left(K_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} \right) \ar@{^{(}->}[r] \ar[d]^{\cong}_{{\frac{r}{n_j(i)}}} & R^1 \pi_{n_*} \left(\left[ \mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} (\mathcal{H}^j_i\left(\mathcal{E}\right))(\mathcal{D})\right]^{\vee} \right)\ar@{->>}[r] \ar[d]^{\cong} & R^1 \pi_{n_*} (\mathrm{parEnd}^0(\mathcal{H}^j_i\left(\mathcal{E}\right))) \ar[d]^{\cong} \\
%R^1 \pi_{n_*} \left(K_{\mathcal{X}^{par}/\smp_{\mathcal{C}}} \right) \ar@{^{(}->}[r] \ar[d]^{\cong}_{{\frac{r}{n_j(i)}}} & R^1 \pi_{n_*} \left( \mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\smp_{\mathcal{C}}} (\mathcal{E})(\mathcal{D})^{\vee} \right)\ar@{->>}[r] \ar[d]^{\cong} & R^1 \pi_{n_*} (\pe^0(\eta)) \ar[d]^{\cong} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar@{^{(}->}[r] & \mathcal{A}_{\mathcal{SM}^{par}_{\mathcal{C}}/S}\left( \Theta_{j}(i) \right) \ar@{->>}[r] ^{\nabla_1} & T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}
}$$ where $n_j(i):=\mathrm{gcd}(r,\mathrm{deg}(\delta_j(i)))$ is as defined before. We denote the extension class of the first exact sequence by $\Delta_{j}(i)$ and the Atiyah class of a line bundle $L$ by $[L]$. Then the theorem is equivalent to the equality of global sections $$\frac{r}{n_j(i)} \Delta_{j}(i)= [ \Theta_{j}(i) ] \in \mathrm{H}^0\left(S,R^1 \pi_{e_*}(\Omega^1_{\mathcal{M}^{par}/S}) \right).$$ With the same hypotheses we have $$\xymatrix{
{R^1 \pi_{n}}_* \left(K_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}}\right) \ar@{^{(}->}[r] \ar[d]^{{\frac{r}{n}}}_{\cong } & R^1 \pi_{n_*} \left( \left[\mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} (\mathcal{E})(\mathcal{D})\right]^{\vee}\right) \ar@{->>}[r] \ar[d]^{\cong} & {R^1 \pi_{n}}_* (\mathrm{parEnd}(\mathcal{E})) \ar[d]^{\cong} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar@{^{(}->}[r] & \mathcal{A}_{\mathcal{SM}^{par}_{\mathcal{C}}/S}(\Theta) \ar@{->>}[r]^{\nabla_1} & T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}
}$$ which is equivalent to the equality $$\frac{r}{n} \Delta= [ \Theta] \in \mathrm{H}^0\left(S, R^1 \pi_{e_*}(\Omega^1_{\mathcal{SM}^{par}_{\mathcal{C}}/S})\right).$$*
Modulo a shift by a rational number $\lambda$ in the filtered configuration, which corresponds to Hecke modifications in the parabolic setting, it is sufficient to prove the theorem for $\lambda=0$. Take the forgetful map $\phi : \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta) \rightarrow \mathcal{SU}_{\mathcal{C}/S}(r,\delta)$, which can be lifted to the fibre product over $S$, and let $\mathcal{E}_*$ be a universal parabolic bundle over $\mathcal{C} \times_S \mathcal{SM}^{par}_{\mathcal{C}/S}$. Denote by $\mathcal{U}$ a virtual universal bundle over $\mathcal{C}\times_S \mathcal{SU}_{\mathcal{C}/S}$ such that $\phi^* \left( \mathcal{U} \right) \cong \mathcal{E}$. Taking the pullback of the exact sequence given in Theorem [Theorem 33](#BS-generalized){reference-type="ref" reference="BS-generalized"} by the forgetful map $\phi$, we get $$\xymatrix{
0\ar[r] & \phi^* \left( R^1 p_{n_*} (K_{\mathcal{X}/ \mathcal{SU}})\right) \ar[r] \ar[d]_{\frac{r}{n}\mathrm{Id}}^{\cong} & \phi^* \left( R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}}(\mathcal{U})^{\vee} \right)\right) \ar[r] \ar[d]_{\cong} & \phi^* \left( R^1 p_{n_*} \left( \mathrm{End}^0(\mathcal{U})^{\vee} \right) \right) \ar[d]^{\mathrm{Id}}_{\cong} \ar[r] & 0 \\
%%%%%%%%%%%%%%%%%%
0\ar[r] & \mathcal{O}_{\mathcal{SM}^{par}}\ar[r] & \phi^*\left( \mathcal{A}_{\mathcal{SU}_{\mathcal{C}}/S}(\mathcal{L})\right) \ar[r] ^{\nabla_1} & \phi^* \left( T_{\mathcal{SU}_{\mathcal{C}}/S}\right) \ar[r] & 0
}$$ Consider the differential map $\mathrm{d} \phi: T_{\mathcal{SM}^{par}/S} \longrightarrow \phi^*\left( T_{\mathcal{SU}_{\mathcal{C}}/S} \right),$ which corresponds to taking the first direct image $R^1 \pi_{n_*}$ of the natural inclusion of sheaves $$\mathrm{parEnd}^0(\mathcal{E}) \hookrightarrow \mathrm{End}^0(\mathcal{E})=\phi^* \left(\mathrm{End}^0(\mathcal{U}) \right).$$ Now, we take the pull-back of this isomorphism of exact sequences by $\mathrm{d}\phi$, in its two realisations, as follows $$\xymatrix{
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r] \ar@{=}[d] & R^1 \pi_{n_*} ( ^0\mathcal{P}^{-1}(\mathcal{E})) \ar[r] \ar[d] & R^1 \pi_{n_*} (\mathrm{parEnd}^{0}(\mathcal{E})) \ar[r]\ar[d]^{d\phi} & 0 \\
%%%%%%%%%%%%%%%%%%
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r] \ar@{=}[d] & R^1 \pi_{n_*}( ^0\mathcal{B}^{-1}(\mathcal{E})) \ar[r] \ar@{=}[d] & R^1 \pi_{n_*}( \mathrm{End}^{0}(\mathcal{E})) \ar[r] \ar@{=}[d]&0 \\ %%%%%%
0\ar[r] & \phi^* \left( R^1 p_{n_*} (K_{\mathcal{X}/ \mathcal{SU}_{\mathcal{C}}})\right) \ar[r] \ar[d]_{\frac{r}{n}\mathrm{Id}}^{\cong} & \phi^* \left( R^1 p_{n_*} \left( \mathcal{A}^0_{\mathcal{X}/\mathcal{SU}_{\mathcal{C}}}(\mathcal{U})^{\vee} \right)\right) \ar[r] \ar[d]_{\cong} & \phi^* \left( R^1 p_{n_*} \left( \mathrm{End}^0(\mathcal{U})^{\vee} \right) \right) \ar[d]^{\mathrm{Id}}_{\cong} \ar[r] & 0 \\
%%%%%%%%%%%%%%%%%%
0\ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}}\ar[r] \ar@{=}[d] & \phi^*\left( \mathcal{A}_{\mathcal{SU}_{\mathcal{C}}/S}(\mathcal{L})\right) \ar[r] ^{\nabla_1} & \phi^* \left( T_{\mathcal{SU}_{\mathcal{C}}/S}\right) \ar[r] & 0 \\
%%%%%%%%%%%%%
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r] & \mathcal{A}_{\mathcal{SM}^{par}/S}(\phi^*(\mathcal{L})) \ar[r] \ar[u] & T_{\mathcal{SM}^{par}/S} \ar[r] \ar[u]_{d\phi} & 0
}$$ By construction the first and the last exact sequences are isomorphic as they are pullbacks of isomorphic exact sequences by the differential map $$\xymatrix{
0 \ar[r] & R^1 \pi_{n_*} (K_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}}) \ar[r] \ar@{=}[d]_{\frac{r}{n} \mathrm{Id}} & R^1 \pi_{n_*} ( ^0\mathcal{P}^{-1}(\mathcal{E})) \ar[r] \ar[d]^{\cong} & R^1 \pi_{n_*} (\mathrm{parEnd}^{0}(\mathcal{E})) \ar[r] \ar[d]^{\cong} & 0 \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}} \ar[r] & \mathcal{A}_{\mathcal{SM}^{par}/S}(\phi^*(\mathcal{L})) \ar[r] & T_{\mathcal{SM}^{par}/S} \ar[r] & 0
}$$ where $\mathcal{L}$ is the ample generator of the Picard group of the space $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$, whose pullback $\phi^*(\mathcal{L})$ we denote by $\Theta$. Note that we have the equalities $$K_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} \cong \pi_w^*\left( K_{\mathcal{C}/S} \right) \cong \phi^* \left( K_{\mathcal{X}/ \mathcal{SU}_{\mathcal{C}}} \right) .$$ We conclude the proof by applying Proposition [Proposition 35](#sun-tsai-parabolic){reference-type="ref" reference="sun-tsai-parabolic"} to obtain the following isomorphism of exact sequences $$\xymatrix{
R^1 \pi_{n_*} (K_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}}) \ar@{^{(}->}[r] \ar[d]^{{\frac{r}{n}}}_{\cong } & R^1 \pi_{n_*} \left( \left[\mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} (\mathcal{E})(\mathcal{D})\right]^{\vee}\right) \ar@{->>}[r] \ar[d]^{\cong} & {R^1 \pi_{n}}_* (\mathrm{parEnd}(\mathcal{E})) \ar[d]^{\cong} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar@{^{(}->}[r] & \mathcal{A}_{\mathcal{SM}^{par}_{\mathcal{C}}/S}(\Theta) \ar@{->>}[r]^{\nabla_1} & T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}
}$$ This concludes the proof. $\square$
# Parabolic Hitchin symbol map
Let $\mathcal{E}_*$ be a virtual universal parabolic vector bundle of fixed parabolic type $\alpha_*$ over $\mathcal{C} \times_S \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$. We want to define a parabolic version of the Hitchin symbol map, as given in $\cite{pauly2023hitchin}$ in section 4.3. We will use the following notation: $\mathcal{SM}^{par}_{\mathcal{C}} := \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$.\
**First approach:** We take the trace map which we denote by $B$ as follows $$\small{
\xymatrix{
\pi_{n_*} \left( \mathrm{End}^0(\mathcal{E}) \otimes \pi_w^* K_{\mathcal{C}/S} (D ) \right) \otimes \pi_{n_*} \left(\mathrm{End}^0(\mathcal{E}) \otimes \pi_w^* K_{\mathcal{C}/S} \left( D \right) \right) \ar[d]_B & (\phi,\psi) \ar@{|->}[d] \\
\pi_{n_*} \pi_w^* \left( K^{\otimes 2 }_{\mathcal{C}/S} \left( 2D \right) \right) & B(\phi,\psi)=\mathrm{Trace}(\phi \circ \psi)
}
}$$ We take its restriction to the subbundle $\mathrm{SparEnd}^0(\mathcal{E}) \subset \mathrm{End}^0(\mathcal{E})$, so that the left-hand side becomes the cotangent bundle $T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S}$, and we get $$\xymatrix{
B : T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \otimes T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \ar[r] & \pi_{n_*} \pi_w^* \left( K^{\otimes 2 }_{\mathcal{C}/S} ( 2D ) \right)
}$$ A simple calculation gives $$\mathrm{Image} ( B ) \subset \pi_{n_*} \pi_w^* \left( K^{\otimes 2}_{\mathcal{C}/S} ( D ) \right),$$ and we still denote by $B$ the resulting restriction $$\xymatrix{
B : T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \otimes T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \ar[r] & \pi_{n_*} \pi_w^* \left( K^{\otimes 2 }_{\mathcal{C}/S} (D) \right)
}$$ We dualize, and by Serre duality relative to $\pi_{n}$ we get $$B^{\vee} : \pi^*_{e} \left( R^1 \pi_{s_*} \left( T_{\mathcal{C}/S}\left( -D \right) \right) \right) \rightarrow T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}\otimes T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}.$$
**Definition 38** (Parabolic Hitchin symbol). The parabolic Hitchin symbol $\rho_{par}$ is the morphism given by $$\rho_{par}:=\pi_{e_*}(B^{\vee}): R^1 \pi_{s_*} \left( T_{\mathcal{C}/S} ( -D) \right) \longrightarrow \pi_{e_*} \mathrm{Sym}^2 \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right).$$
**Second approach:** We consider the evaluation map of the sheaf $\mathrm{SparEnd}^{0}(\mathcal{E})\otimes \pi^*_w \left( K_{\mathcal{C}/S} ( D) \right)$ composed with the injection map $\mathrm{SparEnd}(\mathcal{E}) \subset \mathrm{parEnd}(\mathcal{E})$; we get $$\pi_n^* \ \pi_{n_*} \left( \mathrm{SparEnd}^{0}(\mathcal{E})\otimes \pi^*_w K_{\mathcal{C}/S} ( D) \right) \overset{ev}{\longrightarrow} \mathrm{parEnd}^{0}(\mathcal{E})\otimes\pi^*_w \left( K_{\mathcal{C}/S}( D) \right).$$ Dualizing, we obtain $$\mathrm{parEnd}^0(\mathcal{E})^{\vee} \otimes \pi^*_w \left( T_{\mathcal{C}/S} (-D) \right)\overset{ev^{\vee}}{\longrightarrow} \pi_n^* \left( \pi_{n_*} \left( \mathrm{SparEnd}^0(\mathcal{E}) \otimes \pi^*_w \ K_{\mathcal{C}/S} (D) \right)\right)^{\vee}.$$ This morphism gives a map, which we also denote by $ev^{\vee}$, $$\pi^*_w \left( T_{\mathcal{C}/S} (-D) \right) \overset{ev^{\vee}}{\longrightarrow} \mathrm{parEnd}^0(\mathcal{E}) \otimes \pi_n^* \left( \pi_{n_*} \left( \mathrm{SparEnd}^{0}(\mathcal{E}) \otimes \pi^*_w \left( K_{\mathcal{C}/S} (D) \right) \right) \right)^{\vee},$$ and by Serre duality relative to $\pi_n$ $$\pi^*_w \left( T_{\mathcal{C}/S}(-D) \right) \overset{ev^{\vee}}{\longrightarrow} \mathrm{parEnd}^0(\mathcal{E})\otimes \pi_n^* \left( R^1 \pi_{n_*} \left( \mathrm{parEnd}^0(\mathcal{E}) \right) \right).$$ We apply $\pi_{e_*} \circ R^1 \pi_{n_*}$ and, by the projection formula, we get $$\pi_{e_*}\left( R^1 \pi_{n_*}\left( ev^{\vee} \right) \right): R^1 \pi_{s_*} \left( T_{\mathcal{C}/S}(-D) \right) \longrightarrow \pi_{e_*} \mathrm{Sym}^2 \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right).$$
**Lemma 39**. *This map coincides with the parabolic Hitchin symbol $\rho_{par}$, i.e., $\rho_{par}=\pi_{e_*}\left( R^1 \pi_{n_*} \left( ev^{\vee} \right) \right)$.*
**Proposition 40**. *The symbol map $\rho_{par}$ is invariant under Hecke modifications.*
The proposition is a consequence of the following observation. Let $E_*$ be a parabolic vector bundle over a curve $C$ of parabolic type $\alpha_*$ with respect to a divisor $D$, and let $g \in \mathrm{parEnd}(E)$ be a parabolic endomorphism. Then
**Lemma 41**. *The trace is invariant under Hecke modifications, i.e., $tr(\mathcal{H}_i^j(g))=tr(g)$ for all $i\in I$ and $j \in \{1,2,...,\ell_i \}$.*
For $i\in I$ and $j \in \{1,2,...,\ell_i \}$, take the Hecke modification of $E$ with respect to the subspace $F^{j+1}_i(E)$; we get a subsheaf $$f: \mathcal{H}_i^j(E) \hookrightarrow E$$ which is an isomorphism over $C\setminus\{x_i\}$, and thus
$tr(\mathcal{H}_i^j(g))=tr(g)$ over $C\setminus\{x_i\}$.
The vector bundle $\mathcal{H}_i^j(E)$ inherits a parabolic structure, and $\mathcal{H}_i^j(g)$ is a parabolic endomorphism with respect to this parabolic structure. $$\xymatrix{
E \ar[rr]^{g} && E \\
\mathcal{H}_i^j(E) \ar[rr]_{ \mathcal{H}_i^j(g)} \ar[u]^f && \mathcal{H}_i^j(E) \ar[u]_f &
}$$ Now, let us describe the map $\mathcal{H}_i^j(g)_{x_i}: \mathcal{H}_i^j(E)_{x_i} \rightarrow \mathcal{H}_i^j(E)_{x_i}$. We have the decomposition of the map $g$ with respect to the quotient exact sequence $$\xymatrix{
0 \ar[rr] && F_i^{j+1}(E) \ar[rr] \ar[d]^{g\vert_{ F_i^{j+1}(E)}} && E_{x_i} \ar[rr] \ar[d]^{g_{x_i}} && Q_i^j(E):= E_{x_i}/F_i^{j+1}(E) \ar[rr] \ar[d]^{\overline{g}} && 0 \\
0 \ar[rr] && F_i^{j+1}(E) \ar[rr] && E_{x_i} \ar[rr] & & Q_i^j(E):=E_{x_i}/F_i^{j+1}(E) \ar[rr] &&0
}$$ Thus we have $$g_{x_i}=
\begin{pmatrix}
g\vert_{ F_i^{j+1}(E)} & \ast \\
0 & \overline{g}
\end{pmatrix} \Longrightarrow tr(g_{x_i})=tr(g\vert_{ F_i^{j+1}(E)})+tr(\overline{g}).$$ The fibre of the Hecke modification $\mathcal{H}_i^j(E)$ at $x_i$ fits into a similar diagram $$\xymatrix{
0 \ar[rr] && Q_i^j(E) \ar[rr] \ar[d]^{\overline{g}} && \mathcal{H}_i^j(E) \ar[rr] \ar[d]^{\mathcal{H}_i^j(g)_{x_i}} && F_i^{j+1}(E) \ar[rr] \ar[d]^{g\vert_{ F_i^{j+1}(E)}} && 0 \\
0 \ar[rr] && Q_i^j(E) \ar[rr] && \mathcal{H}_i^j(E)_{x_i} \ar[rr] & & F_i^{j+1}(E) \ar[rr] &&0
}$$ Hence we get $$\mathcal{H}_i^j(g)_{x_i}=
\begin{pmatrix}
\overline{g} & 0 \\
\ast & g\vert_{ F_i^{j+1}(E)}
\end{pmatrix} \Longrightarrow tr(\mathcal{H}_i^j(g)_{x_i})=tr(\overline{g})+tr(g\vert_{ F_i^{j+1}(E)})= tr(g_{x_i}).$$ Therefore one has the global equality $tr(g)=tr(\mathcal{H}_i^j(g)) \in \mathcal{O}_{C}.$ This ends the proof. $\square$
**Proposition 42**. *The parabolic Hitchin symbol map $\rho_{par}$ is an isomorphism.*
Denote by $q: T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \longrightarrow \mathcal{SM}^{par}_{\mathcal{C}}$ the projection from the relative cotangent bundle. One gets the isomorphism $$(\pi_e \circ q)_* \left( \mathcal{O}_{T^{\vee}_{\mathcal{SM}^{par}_{\mathcal{C}}/S}} \right) \cong \bigoplus\limits_{m \geq 0} \pi_{e_*} \mathrm{Sym}^m \left( T_{\mathcal{SM}^{par}/S} \right),$$ and we consider the $\mathbb{G}_m$-action on the moduli space of parabolic Higgs bundles $\mathcal{H}iggs^P(\alpha_*)$, which contains the cotangent space $T^{\vee}_{\mathcal{SM}^{par}/S}$ as a big open subspace. Thus elements of $\pi_{e_*} \mathrm{Sym}^2 \left( T_{\mathcal{SM}^{par}/S} \right)$ can be seen as regular functions on $T^{\vee}_{\mathcal{SM}^{par}/S}$ of degree 2 with respect to the action of $\mathbb{G}_m$, which extend by Hartogs' theorem to the whole space $\mathcal{H}iggs^P(\alpha_*)$ of strongly-parabolic Higgs bundles. As the parabolic Hitchin system is equivariant under the $\mathbb{G}_m$-action, these functions are obtained from the quadratic part of the parabolic Hitchin base, given by the space $$\pi_{s_*} \left( K_{\mathcal{C}/S}^{\otimes2}\left(D \right) \right) \cong R^1 \pi_{s_*} \left( T_{\mathcal{C}/S}\left( -D \right) \right).$$ $\square$
# Kodaira-Spencer map
## Infinitesimal deformations of $(C,D,F^*_*(E))$
In this subsection, we study the infinitesimal deformations of $\mathbb{E}:=(C,D,E_*)$, where $C$ is a smooth projective curve of genus $g \geqslant 2$, $D$ a reduced divisor of degree $N$, and $E_*$ a quasi-parabolic rank-r vector bundle of fixed quasi-parabolic type $\vec{m}$ on the marked curve $(C,D)$.
**Theorem 43**. *The infinitesimal deformations of $\mathbb{E}=(C,D,E_*)$ are parametrized by $\mathrm{H}^1 \left( C, \mathcal{A}^{par}_{C} (E) \right)$.*
Let $\mathcal{U}=\{U_{\lambda}\}_{\lambda}$ be an affine cover of the curve $C$ such that any open affine set contains at most one point of the divisor $D$, and set $U_{\lambda,\mu}=\mathrm{Spec}(A_{\lambda,\mu})$. Let $\mathbb{E}_{\varepsilon}$ be an infinitesimal deformation of $\mathbb{E}$ given by $(C_{\varepsilon},D_{\varepsilon}, {E_*}_{\varepsilon})$, where the deformation $(C_{\varepsilon},D_{\varepsilon})$ is given by the 1-cocycle $\{\vartheta_{\lambda,\mu}\}$ in $\mathrm{H}^1(C,T_C(-D))$, and the vector bundle $E_{\varepsilon}$ is given by the 1-cocycle $\{ \xi_{\lambda,\mu} \}$ with values in $\widehat{ \mathcal{A}_C(E)}$ given by the pull-back $$\xymatrix{
0 \ar[r] & \mathrm{End}(E) \ar[r] \ar[d] & \mathcal{A}_C(E) \ar[r] & T_C \ar[r] & 0 \\
0 \ar[r] & \mathrm{End}(E) \ar[r] & \widehat{\mathcal{A}_C(E)} \ar[r] \ar@{^{(}->}[u] & T_C(-D) \ar[r] \ar@{^{(}->}[u] & 0
}
\label{the-first-subsheaf}$$ By definition, a parabolic bundle is given for all $i \in I$ by the Hecke filtration (see Proposition [Proposition 12](#Hecke-filtrations){reference-type="ref" reference="Hecke-filtrations"}) $$\mathcal{H}_i^{\ell_i}(E) \subset \mathcal{H}_i^{\ell_i-1}(E) \subset \cdot\cdot\cdot \subset \mathcal{H}_i^2(E) \subset \mathcal{H}_i^{1}(E) \subset \mathcal{H}_i^0(E)=E.
\label{Filtration-in-the lemma}$$ Hence the parabolic vector bundle ${E_*}_{\varepsilon}$ is likewise given, for all $i \in I$, by filtrations of locally free sheaves $$\mathcal{H}_i^{\ell_i}(E_{\varepsilon}) \subset \mathcal{H}_i^{\ell_i-1}(E_{\varepsilon}) \subset \cdot\cdot\cdot \subset \mathcal{H}_i^2(E_{\varepsilon}) \subset \mathcal{H}_i^{1}(E_{\varepsilon}) \subset \mathcal{H}_i^0(E_{\varepsilon})=E_{\varepsilon},$$ and the 1-cocycle $\{\tau_{\lambda,\mu} \}$ must preserve these filtrations. Locally, the sheaf $\mathcal{H}_i^{j}(E_{\varepsilon})$ is identified with an $A_{\lambda,\mu}[\varepsilon]$-submodule denoted by $M^{i,j}_{\lambda,\mu}[\varepsilon] \subset M^0_{\lambda,\mu}[\varepsilon]=M_{\lambda,\mu}[\varepsilon]$, $$\newcommand{\incl}[1][r]
{\ar@<-0.3pc>@{^(-}[#1] \ar@<+0.2pc>@{-}[#1]}
\xymatrix{
M_{\lambda,\mu}[\varepsilon] \ar[rr]^{\tau_{\lambda,\mu}} && M_{\lambda,\mu}[\varepsilon] \\
M^{i,j}_{\lambda,\mu}[\varepsilon] \ar[rr] \incl[u] && M^{i,j}_{\lambda,\mu}[\varepsilon] \incl[u] &
}$$ The commutativity of this diagram is equivalent to the fact that the 1-cocycle $\{\xi_{\lambda,\mu} \}$ preserves the filtration given by the $A_{\lambda,\mu}$-modules $\{M^{i,j}_{\lambda,\mu}\}$ associated to the filtration [\[Filtration-in-the lemma\]](#Filtration-in-the lemma){reference-type="eqref" reference="Filtration-in-the lemma"}. Hence the 1-cocycle $\{\xi_{\lambda,\mu} \}$ takes values in the sheaf $\widehat{\mathcal{A}^{par}_C(E)}$, defined as the subsheaf of $\widehat{\mathcal{A}_C(E)}$ given locally by differential operators preserving the subsheaves $\mathcal{H}_i^j(E)$, and the infinitesimal deformations of $\mathbb{E}=(C,D,E_*)$ are therefore given by the cohomology group $\mathrm{H}^1(C,\widehat{\mathcal{A}^{par}_C(E)} )$. Note that the sheaf $\widehat{\mathcal{A}^{par}_C(E)}$ fits into an exact sequence $$\xymatrix{
0 \ar[r] & \mathrm{parEnd}(E) \ar[r] & \widehat{\mathcal{A}^{par}_C(E)} \ar[r]^{\nabla_1} & T_C(-D)
}$$ where the map $\nabla_1$ is the restriction of the natural map $\widehat{\mathcal{A}_C(E)} \rightarrow T_C(-D)$ appearing in the exact sequence [\[the-first-subsheaf\]](#the-first-subsheaf){reference-type="eqref" reference="the-first-subsheaf"}.\
To conclude the proof we need to show the following isomorphism $\widehat{\mathcal{A}^{par}_C(E)} \cong\mathcal{A}^{par}_C(E)$. Note that by definition of $\mathcal{A}^{par}_C(E)$ as push-out we have $$\mathcal{A}^{par}_C(E) :=\{ (f,\partial)\ / \ \ f \in \mathrm{parEnd}(E) \, \ \partial \in \mathcal{A}_C(E)(-D) \ \mathrm{and} \ (f,0) \sim (0,f) \ \mathrm{if} \ f \in \mathrm{End}(E)(-D)\}.$$ Thus we can define an $\mathcal{O}_C$-linear map $\varrho$ as follows $$\begin{array}{ccccc}
\varrho & : & \mathcal{A}^{par}_C(E) & \longrightarrow & \widehat{\mathcal{A}^{par}_C(E)} \\
& & (f, \partial) & \longmapsto & f+\partial.
\end{array}$$ Clearly the map $\varrho$ induces the identity map on $\mathrm{parEnd}(E)$. Let us prove that $\varrho$ is an isomorphism:
1. Injectivity: let $(f,\partial) \in \mathcal{A}^{par}_C(E)$ be such that $\varrho(f,\partial)=0$, i.e. $f+\partial=0$, so that $\partial=-f$. Hence $\partial, f \in \mathrm{End}(E)(-D)$, and by definition of $\mathcal{A}^{par}_C(E)$ we have $(f,\partial)=(f,-f)\sim(f-f,0)=0$ in $\mathcal{A}^{par}_C(E)$.
2. Surjectivity: let $\partial \in \widehat{\mathcal{A}^{par}_C(E)}$ and consider its symbol $\nabla_1(\partial)\in T_C(-D)$. Take a lifting $\widehat{\nabla_1(\partial)} \in \mathcal{A}^{par}_C(E)$ (modulo $\mathrm{parEnd}(E)$), which can be written $\widehat{\nabla_1(\partial)}=(f,\widehat{\partial})$, where $\widehat{\partial} \in \mathcal{A}_C(E)(-D)$ satisfies $\nabla_1(\widehat{\partial})=\nabla_1(\partial)$ and $f$ is any element of $\mathrm{parEnd}(E)$. Note that $\partial, \widehat{\partial} \in \mathcal{A}_C(E)(-D) \Rightarrow \partial-\widehat{\partial} \in \mathrm{parEnd}(E)$. For $f=\partial-\widehat{\partial}$, one has $\varrho(\partial-\widehat{\partial},\widehat{\partial})=\partial$.
Hence, we get an isomorphism of exact sequences $$\xymatrix{
0 \ar[r] & \mathrm{parEnd}(E) \ar[r] \ar@{=}[d]^{\mathrm{Id}} & \mathcal{A}^{par}_C(E) \ar[r] \ar[d]_{\varrho}^{\cong} & T_C(-D) \ar[r] \ar@{=}[d]^{\mathrm{Id}} & 0 \\
0 \ar[r] & \mathrm{parEnd}(E) \ar[r] & \widehat{\mathcal{A}^{par}_C(E)} \ar[r] & T_C(-D) \ar[r] & 0
}$$ This concludes the proof. $\square$
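As an illustration of Theorem 43 (a standard consequence, which we only sketch), the short exact sequence $$0 \longrightarrow \mathrm{parEnd}(E) \longrightarrow \mathcal{A}^{par}_{C}(E) \longrightarrow T_C(-D) \longrightarrow 0$$ gives, on cohomology, a surjection $\mathrm{H}^1 \left( C, \mathcal{A}^{par}_{C} (E) \right) \twoheadrightarrow \mathrm{H}^1(C,T_C(-D))$, since $\mathrm{H}^2$ vanishes on a curve: every infinitesimal deformation of the marked curve $(C,D)$ lifts to a deformation of the triple $(C,D,E_*)$. Moreover, as $\mathrm{deg}\, T_C(-D)=2-2g-N<0$, Riemann-Roch gives $\mathrm{dim}\,\mathrm{H}^1(C,T_C(-D))=3g-3+N$, the expected number of moduli of the marked curve.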
**Remark 44**. Note that [@biswas2022infinitesimal] studied the infinitesimal deformations of $\mathbb{E}=(C,D,E_*)$. Their definition of the parabolic Atiyah algebroid $At(E_*)$ coincides with the definition of the sheaf $\widehat{\mathcal{A}^{par}_C(E)}$, which is therefore isomorphic to the parabolic Atiyah algebroid $\mathcal{A}^{par}_C(E)$.
## Parabolic Kodaira-Spencer map
Let $\pi_s : (\mathcal{C},D) \longrightarrow S$ be a smooth family of projective marked curves parametrized by an algebraic variety $S$, and let $\pi_e :\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta) \longrightarrow S$ be the relative moduli space of parabolic rank-$r$ vector bundles of fixed parabolic type $\alpha_*$ with determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$. Let $\mathcal{E}_*$ be a virtual universal parabolic vector bundle over $\mathcal{C}\times_S \mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)$ $$\def\cartesien{%
\ar@{-}[]+R+<6pt,-1pt>;[]+RD+<6pt,-6pt>%
\ar@{-}[]+D+<1pt,-6pt>;[]+RD+<6pt,-6pt>%
}
\xymatrix{
\mathcal{X}^{par} \ar[rr]^{\pi_{n}} \ar[d]_{\pi_{w}} \cartesien && \mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta) \ar[d]^{\pi_{e}} & \\
\ \left( \mathcal{C},D \right) \ar[rr]_{\pi_{s}} && \ar@/^1pc/[ll]^{ \sigma_{i}} S
}$$ We use the notation $\mathcal{SM}^{par}_{\mathcal{C}}:=\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)$. We have two fundamental maps:
- The Kodaira-Spencer of the family of marked curves: $\kappa_{\mathcal{C}/S}: T_S \longrightarrow R^1 \pi_{s_*} \left( T_{\mathcal{C}/S} (-D) \right),$\
given as the first connecting morphism on cohomology of the short exact sequence $$0 \longrightarrow T_{\mathcal{C}/S}(-D) \longrightarrow \mathcal{T}_{\mathcal{C}}(-D) \longrightarrow \pi_s^* T_S \longrightarrow 0.$$
- The Kodaira-Spencer of the family of moduli spaces: $\kappa_{\mathcal{SM}^{par}_{\mathcal{C}}/S}: T_S \longrightarrow R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right)$, where $$T_{\mathcal{SM}^{par}/S} \cong R^1 \pi_{n_*} \left( \mathrm{parEnd}^0(\mathcal{E})\right),$$ given as the first connecting morphism on cohomology of the short exact sequence $$0 \longrightarrow T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \longrightarrow T_{\mathcal{SM}^{par}_{\mathcal{C}}} \longrightarrow \pi_e^* T_S \longrightarrow 0.$$
Take the QPA sequence of the bundle $\mathcal{E}_*$ over $\mathcal{X}^{par}$ $$\xymatrix{
0 \ar[r] & \mathrm{parEnd}^{0}(\mathcal{E})\ar[r] & \mathcal{A}^{0,par}_{\mathcal{X}^{par}/\mathcal{M}^{par}} (\mathcal{E}) \ar[r] & \pi^{*}_{w}\left( T_{\mathcal{C}} \left(-D \right) \right) \ar[r] & 0}$$ As $\pi_{n_*} \left( \pi^*_w\left( T_{\mathcal{C}} \left(-D \right) \right) \right) =0$ and $R^2 \pi_{n_*} \left( \mathrm{parEnd}^{0}(\mathcal{E})\right)=0$ (the relative dimension of $\pi_n$ is 1), applying $R^1 \pi_{n_*}$ we get an exact sequence on $\mathcal{M}^{par}$: $$\small {
\xymatrix{
0 \ar[r] & T_{\mathcal{M}^{par}/S} \ar[r] & R^1 \pi_{n_*} \left(\mathcal{A}^{0,par}_{\mathcal{X}^{par}/\mathcal{M}^{par}} (\mathcal{E}) \right)\ar[r] & R^1 \pi_{n_*} \left( \pi^{*}_{w} \left( T_{\mathcal{C}} \left(-D \right) \right) \right) \ar[r] & 0
}}$$ The first connecting homomorphism of this sequence with respect to $\pi_{e}$ is denoted by $\Phi^{par}$ and is called the parabolic Kodaira-Spencer map.
**Proposition 45**. *The map $\Phi^{par}$ is compatible with the Kodaira-Spencer maps of the two families: $\Phi_{par}\circ \kappa_{\mathcal{C}/S}=\kappa_{\mathcal{SM}^{par}_{\mathcal{C}}/S}$.*
Let $\partial=\cup \Delta$ be the first connecting homomorphism, with respect to $\pi_e$, of the long exact sequence associated to the sequence $$\xymatrix{
0 \ar[r] & \mathcal{O}_{\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r]& R^1 \pi_{n_*}\left( \left[\mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} (\mathcal{E})\left(\mathcal{D} \right)\right]^{\vee} \right) \ar[r] & T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \ar[r] & 0}$$ given by applying $R^1\pi_{n_*}$ to the dual of the SQPA sequence tensorized by $\mathcal{O}_{\mathcal{X}^{par}}(\mathcal{D})$ $$\xymatrix{
0 \ar[r] & \Omega^1_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} \ar[r] & \left[\mathcal{A}^{0,par,st}_{\mathcal{X}^{par}/\mathcal{SM}^{par}_{\mathcal{C}}} (\mathcal{E})\left(\mathcal{D} \right)\right] ^{\vee} \ar[r] & \mathrm{parEnd}^0(\mathcal{E}) \ar[r] & 0}$$
**Theorem 46** (Parabolic version of Proposition 4.7.1 in [@pauly2023hitchin]). *The following diagram commutes $$\xymatrix{
R^1 \pi_{s_*} \left( T_{\mathcal{C}/\mathcal{S}} (-D ) \right) \ar[rr]^{-\Phi_{par}} \ar[rd]_ {\rho_{par}} & & R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right)& \\
& \pi_{e_*} \mathrm{Sym}^2 \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right) \ar[ru]_{\partial}
}$$ i.e., $\Phi_{par}+\partial \circ \rho_{par}=0.$*
We need the following lemma
**Lemma 47** ([@pauly2023hitchin], Lemma 4.5.1, page 23). *Let $X$ be a scheme, and let $V$ and $L$ be, respectively, a vector bundle and a line bundle on $X$. Moreover, let $F \in \mathrm{Ext}^1 \left( L,V \right)$ be an extension $$\xymatrix{
0 \ar[r] & V \ar[r]^{\mathsf{i}} & F \ar[r]^{\pi} & L \ar[r] & 0
}$$ By taking the dual and tensorizing with $V \otimes L$, we get $$0 \longrightarrow V \longrightarrow F^* \otimes V \otimes L \longrightarrow V^* \otimes V \otimes L \longrightarrow 0.$$ Consider the injection $$\begin{array}{ccccc}
\psi & : & L & \longrightarrow & V^* \otimes V \otimes L \\
& & t & \longmapsto & Id_V\otimes t \\
\end{array}$$ Then there exists a canonical injection $\phi: F \longrightarrow F^* \otimes V \otimes L$ such that the following diagram commutes $$\xymatrix{
0 \ar[r] & V \ar[r] \ar@{=}[d] & F \ar@{^{(}->}[d]^{\phi} \ar[r]^{-\pi} & L \ar@{^{(}->}[d]^{\psi} \ar[r] & 0 \\
0 \ar[r] & V \ar[r] & F^* \otimes V \otimes L \ar[r] & V^* \otimes V \otimes L \ar[r] & 0 \\
}$$*
Now, we prove the theorem. Take the parabolic Atiyah sequence on $\mathcal{X}^{par}$ of the universal bundle $\mathcal{E}$ relative to $\pi_n$. We write $\mathcal{A}^{par}:= \mathcal{A}^{0,par}_{\mathcal{X}^{par}/\mathcal{M}^{par}} \left( \mathcal{E} \right)$ and $\mathcal{A}^{str}:= \mathcal{A}^{0,par,str}_{\mathcal{X}^{par}/\mathcal{M}^{par}} \left( \mathcal{E} \right)$, and take the evaluation map composed with the inclusion $\mathrm{SparEnd}(\mathcal{E}) \hookrightarrow \mathrm{parEnd}(\mathcal{E})$ $$\pi_n^* \ \pi_{n_*} \left( \mathrm{SparEnd}^{0}(\mathcal{E})\otimes \pi^*_w K_{\mathcal{C}/S}\left( D \right) \right) \overset{ev}{\longrightarrow} \mathrm{parEnd}^{0}(\mathcal{E})\otimes \pi^*_w K_{\mathcal{C}/S} \left( D \right).$$ Dualizing, we get $$\mathrm{parEnd}^{0}(\mathcal{E})^{\vee} \otimes \pi^*_w \ T_{\mathcal{C}/S} \left( -D \right)\overset{ev^*}{\longrightarrow} \pi_n^*\ \pi_{n_*} \left( \mathrm{SparEnd}^{0}(\mathcal{E}) \otimes \pi^*_w K_{\mathcal{C}/S}\left( D \right)\right)^{\vee}.$$ We obtain the following morphism of exact sequences $$\xymatrix{
0 \ar[d] & 0 \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathrm{parEnd}^{0}(\mathcal{E}) \ar@{=}[r]\ar[d] & \mathrm{parEnd}^{0}(\mathcal{E}) \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathrm{parEnd}^{0}(\mathcal{E})\otimes \mathcal{A}^{{par}^{\vee}} \otimes \pi^*_w T_{\mathcal{C}/S} \left(-D \right) \ar[d] \ar[r]^{q} & \mathrm{parEnd}^{0}(\mathcal{E})\otimes \pi_n^* \pi_{n_*} \left( \mathcal{A}^{str} \otimes \pi^*_w K_{\mathcal{C}/S}\left( D \right)\right)^{\vee} \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\mathrm{parEnd}^{0}(\mathcal{E}) \otimes \mathrm{parEnd}^{0}(\mathcal{E})^{\vee} \otimes \pi^*_w T_{\mathcal{C}/S} \left(-D \right)\ar[r]^{ev^{\vee}} \ar[d] & \mathrm{parEnd}^{0}(\mathcal{E}) \otimes \pi_n^* \ \pi_{n_*} \left( \mathrm{SparEnd}^{0}(\mathcal{E}) \otimes \pi^*_w K_{\mathcal{C}/S}\left( D \right)\right)^{\vee} \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
0 & 0
}$$ The map $q$ is given by taking the dual of the evaluation map $$ev: \pi_n^* \pi_{n_*} \left( \mathcal{A}^{par} \otimes \pi^*_w K_{\mathcal{C}/S}\left( D \right)\right) \longrightarrow \mathcal{A}^{par} \otimes \pi^*_w K_{\mathcal{C}/S} \left(D \right)$$ and composing it with the natural inclusion $\mathcal{A}^{str} \hookrightarrow \mathcal{A}^{par}$. We apply Lemma [Lemma 47](#lemma){reference-type="ref" reference="lemma"} to the left exact sequence (for $V= \mathrm{parEnd}^0(\mathcal{E})$, $L= \pi^*_w \ T_{\mathcal{C}/S} (-D )$ and $F= \mathcal{A}^{par}$) and Serre duality relative to $\pi_n$ to the right exact sequence; we get the morphism of exact sequences $$\xymatrix{
0 \ar[d] & & & 0 \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%
\mathrm{parEnd}^{0}(\mathcal{E}) \ar@{=}[rrr]\ar[d] & & & \mathrm{parEnd}^{0}(\mathcal{E}) \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%
\mathcal{A}^{par} \ar[d] \ar[rrr] & & & \mathrm{parEnd}^{0}(\mathcal{E})\otimes \pi_n^* \ R^1 \pi_{n_*} \left( \mathcal{A}^{{str}}(\mathcal{D})^{\vee} \right) \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%
\pi^*_w \ T_{\mathcal{C}/S} (-D ) \ar[rrr] \ar[d] & & & \mathrm{parEnd}^{0}(\mathcal{E}) \otimes \pi_n^* \ R^1 \pi_{n_*} \left( \mathrm{parEnd}^{0}(\mathcal{E}) \right) \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%
0 & & &0
}$$ The left exact sequence is the parabolic Atiyah sequence where we multiply the map $\mathcal{A}^{par} \longrightarrow \pi^*_w( \ T_{\mathcal{C}/S} \left(-D \right))$ by $-1$, as shown in Lemma [Lemma 47](#lemma){reference-type="ref" reference="lemma"}.\
Applying $R^1 \pi_{n_*}$, we get $$\xymatrix{
0 \ar[d] & 0 \ar[d] \\
T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}\ar@{=}[r]\ar[d] & T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}\ar[d] \\
R^1 \pi_{n_*} \left( \mathcal{A}^{par} \right) \ar[d] \ar[r] &T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \otimes R^1 \pi_{n_*} \left( \mathcal{A}^{{str}}( \mathcal{D})^{\vee} \right) \ar[d] \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
R^1 \pi_{n_*} \left( \pi^*_w \left( T_{\mathcal{C}/S} \left(-D \right) \right) \right)\ar[r] \ar[d] & T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}\otimes T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}\ar[d] \\
0 & 0
}$$ The right-hand exact sequence is $R^1\pi_{n_*}$ applied to the dual of the strongly parabolic Atiyah sequence tensorized by $T_{\mathcal{SM}^{par}_{\mathcal{C}}/S}$, and its first connecting homomorphism in cohomology with respect to $\pi_{e}$ is given by cup product with the class $\Delta$. Taking the first connecting homomorphism of the long exact sequence with respect to the map $\pi_e$, we get $$\xymatrix{ \ar @{} [ddrr] |{\square}
\pi_{e_*} R^1 \pi_{n_*} \left( \pi^*_w \left(T_{\mathcal{C}/S} \left(-D \right) \right)\right) \ar[rr] \ar[dd] & & \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \otimes T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right) \ar[dd]^{\cup \Delta} \\ \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right) \ar@{=}[rr] & & R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right)
}$$ Since $$\pi_{e_*} R^1 \pi_{n_*} \left( \pi^*_w \left( T_{\mathcal{C}/S} \left(-D \right) \right)\right) \simeq R^1 \pi_{s_*} \left( T_{\mathcal{C}/S} \left(-D \right) \right),$$ we get the following commutative diagram $$\xymatrix{ \ar @{} [ddrr] |{\square}
R^1 \pi_{s_*} \left( \mathcal{T}_{\mathcal{C}/S} \left(-D \right) \right) \ar[rr]^ {\rho_{par}} \ar[dd]_{ - \Phi^{par}} & & \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \otimes T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right) \ar[dd]^{ \partial} \\ \\
R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right) \ar@{=}[rr] & & R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}_{\mathcal{C}}/S} \right)
}$$ This concludes the proof. $\square$
## Some equalities and consequences
We recall the equalities given in Theorem [Theorem 37](#Parabolic BSBE-2){reference-type="ref" reference="Parabolic BSBE-2"}. For all $i \in I$ and $j \in \{1,2,...,\ell_i\}$ one has
1. $\frac{r}{n} \Delta= [ \Theta] \in \mathrm{H}^0\left(S, R^1 \pi_{e_*} (\Omega^1_{\mathcal{SM}^{par}/S})\right)$, and
2. $\frac{r}{n_j(i)} \Delta_{j}(i)= [ \Theta_{j}(i) ] \in \mathrm{H}^0\left(S, R^1 \pi_{e_*} (\Omega^1_{\mathcal{SM}^{par}/S})\right)$.
We denote the associated maps, given by contraction with the classes $\Delta$ and $\Delta_{j}(i)$ respectively, by $$\partial := \cup \Delta : \pi_{e_*} \left( \mathrm{Sym}^2 \left( T_{\mathcal{SM}^{par}/S} \right) \right) \longrightarrow R^1 \pi_{e_*} \left( T_{\mathcal{SM}^{par}/\mathcal{S}} \right),$$ and $$\partial_j(i):= \cup \Delta_j(i): \pi_{e_*} \left( \mathrm{Sym}^2 \left( T_{\mathcal{M}^{par}/S} \right) \right) \longrightarrow R^1 \pi_{e_*} \left( T_{\mathcal{M}^{par}/\mathcal{S}} \right).$$
Combining the above equalities, we get the following result.
**Theorem 48**. *Assume that the family $(\pi_s: (\mathcal{C},D) \rightarrow S)$ is versal [^7]. Then for all $i \in I$ and $j \in \{1,2,...,\ell_i \}$ we have the equalities over the moduli space $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$*
*$\cup [\Theta]\circ \rho_{par}=- \frac{r}{n}\cdot \Phi_{par}$ and $\cup [\Theta_j(i)]\circ \rho_{par}=- \frac{r}{n_j(i)} \cdot \Phi_{par}$.*
The first equality is a direct consequence of Theorem [Theorem 46](#parabolic version of proposition 4.7.1){reference-type="ref" reference="parabolic version of proposition 4.7.1"}, which gives $\partial \circ \rho_{par}= -\Phi_{par}$; multiplying this equality by $\frac{r}{n}$ and using the first equality above, we get $$\frac{r}{n} \partial \circ \rho_{par}=\cup [\Theta]\circ \rho_{par}= -\frac{r}{n}\Phi_{par}.$$ Let us fix $i \in I$ and $j \in \{1,2,...,\ell_i \}$. We take the Hecke isomorphism over $S$ $$\def\commutatif{\ar@{}[u]|{\circlearrowleft}}\xymatrix{ \mathcal{H}^j_i : \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta) \ar[rr]^{\cong} \ar[dr]_{\pi_e} & & \mathcal{SM}^{par}_{\mathcal{C}/S} \left (r,\mathcal{H}_i^j(\alpha_*),\mathcal{H}_i^j(\delta)\right) \ar[dl]^{\pi_e^{i,j}} \\& S \commutatif
}$$ where the map $\mathcal{H}^j_i$ is given by Hecke modification equipped with its natural parabolic structure. Then by Proposition [Proposition 45](#Kodaira-spencer decomposition){reference-type="ref" reference="Kodaira-spencer decomposition"} applied over $\mathcal{SM}^{par}:=\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ and $\mathcal{SM}^{par}_{i,j}:=\mathcal{SM}^{par}_{\mathcal{C}/S} \left (r,\mathcal{H}_i^j(\alpha_*),\mathcal{H}_i^j(\delta) \right)$, the following diagram commutes under the assumption that the map $\kappa_{\mathcal{C}/S}$ is an isomorphism $$\xymatrix{
&&&& R^1\pi_{e_*}\left( T_{\mathcal{SM}^{par}/S}\right) \ar[dd]_{\cong}^{\mathcal{H}_i^j} \\
T_S \ar[rrr]_{\cong}^{\kappa_{\mathcal{C}/S}} \ar@/^1pc/[rrrru]^{\kappa_{\pi_e}} \ar@/_1pc/[rrrrd]_{\kappa_{\pi^{i,j}_e}} &&& R^1\pi_{s_*} \left( T_{\mathcal{C}/S}(-D) \right) \ar[ru]^{\Phi_{par}} \ar[rd]_{\Phi^{i,j}_{par}} \\
&&&& R^1\pi^{i,j}_{e_*}\left( T_{\mathcal{SM}^{par}_{i,j}/S}\right)
}$$ In fact, Proposition [Proposition 45](#Kodaira-spencer decomposition){reference-type="ref" reference="Kodaira-spencer decomposition"} applied over $\mathcal{SM}^{par}$ and $\mathcal{SM}^{par}_{i,j}$ gives $\Phi_{par} \circ \kappa_{\mathcal{C}/S}= \kappa_{\pi_e}$ and $\Phi^{i,j}_{par} \circ \kappa_{\mathcal{C}/S}= \kappa_{\pi^{i,j}_e}$, while the compatibility of the Kodaira-Spencer maps with the isomorphism $\mathcal{H}_i^j$ gives $\mathcal{H}_i^j \circ \kappa_{\pi_e}=\kappa_{\pi^{i,j}_e}$; since $\kappa_{\mathcal{C}/S}$ is an isomorphism, one has $$\mathcal{H}_i^j \circ \Phi_{par}= \Phi^{i,j}_{par}.
\label{Kodaira-Spencer-Hecke}$$ Now, we define the parabolic Hitchin symbol map $\rho^{i,j}_{par}$ over the moduli space $\mathcal{SM}^{par}_{i,j}$ (see Definition [Definition 38](#def-Parabolic-Hitchin-Symbol-map){reference-type="ref" reference="def-Parabolic-Hitchin-Symbol-map"}). We have the following commutative diagram $$\def\commutatif{\ar@{}[rrdd]|{\circlearrowleft}}
\xymatrix{
\commutatif && \pi_{e_*} \left( \mathrm{Sym}^2 T_{\mathcal{SM}^{par}/S} \right) \ar[rr]^{\cup [\Theta_j(i)]} \ar[dd]_{\cong}^{\mathcal{H}_i^j} \commutatif && R^1\pi_{e_*}\left( T_{\mathcal{SM}^{par}/S}\right) \ar[dd]_{\cong}^{\mathcal{H}_i^j} \\
%----------------------------------------
R^1\pi_{s_*} \left( T_{\mathcal{C}/S}(-D) \right) \ar@/^1pc/[rru]^{\rho_{par}} \ar@/_1pc/[rrd]_{\rho^{i,j}_{par}} \
\\
&& \pi_{e_*} \left( \mathrm{Sym}^2 T_{\mathcal{SM}^{par}_{i,j}/S} \right) \ar[rr]_{\cup [\Theta_j(i)]} && R^1\pi^{i,j}_{e_*}\left( T_{\mathcal{SM}^{par}_{i,j}/S} \right)
}$$ The first diagram commutes by Proposition [Proposition 40](#symbol-invariance){reference-type="ref" reference="symbol-invariance"}. Hence, by the above diagram, one has $$\begin{aligned}
\cup [\Theta_j(i)] \circ \rho_{par} &=\left( (\mathcal{H}_i^j)^{-1} \circ \cup [\Theta_j(i)] \circ \mathcal{H}_i^j \right) \circ \rho_{par}= (\mathcal{H}_i^j)^{-1} \circ \left( \cup [\Theta_j(i)] \circ \rho^{i,j}_{par} \right) \end{aligned}$$ we apply Proposition [Theorem 46](#parabolic version of proposition 4.7.1){reference-type="ref" reference="parabolic version of proposition 4.7.1"}, we get $$\begin{aligned}
\cup [\Theta_j(i)] \circ \rho_{par} &=(\mathcal{H}_i^j)^{-1} \circ \left( -\frac{r}{n_j(i)} \Phi_{par}^{i,j} \right)=-\frac{r}{n_j(i)} \Phi_{par},\end{aligned}$$ where the last equality is given by equation [\[Kodaira-Spencer-Hecke\]](#Kodaira-Spencer-Hecke){reference-type="eqref" reference="Kodaira-Spencer-Hecke"}. This concludes the proof. $\square$
# Some line bundles over $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$
#### Parabolic determinant bundle:
Let $\mathcal{E}_*$ be a family of parabolic rank-$r$ vector bundles of fixed parabolic type $\alpha_*$ over a smooth family of curves $\mathcal{C}/S$ parametrized by an $S$-variety $\mathcal{T}$. Let $p : \mathcal{C} \times_S \mathcal{T}\longrightarrow \mathcal{T}$ be the projection map. We recall the definition of the parabolic determinant line bundle under the hypothesis [\[star\]](#star){reference-type="eqref" reference="star"} $$\lambda_{par}(\mathcal{E}_*):= \lambda(\mathcal{E})^{k} \otimes \left( \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i} \left\lbrace \det \left( F^{j}_i(\mathcal{E})/F^{j+1}_i(\mathcal{E})\right) ^{ -a_j(i)} \right\rbrace \right) \otimes \det(\mathcal{E}_{\sigma})^{\frac{k \chi_{par}}{r}},$$ which is a line bundle over $\mathcal{T}$; see Definition [\[parabolic-determinant-by-Bisas\]](#parabolic-determinant-by-Bisas){reference-type="ref" reference="parabolic-determinant-by-Bisas"}. Pauly [@pauly1996espaces] gives another definition, as follows.
**Definition 49** (Parabolic determinant bundle). Let $\mathcal{E}_*$ be a family of parabolic rank-$r$ vector bundles of parabolic type $\alpha_*$ over a smooth family of curves $\pi_s: \mathcal{C} \longrightarrow S$ parameterized by an $S$-variety $\mathcal{T}$; then we set $$\Theta_{par}(\mathcal{E}_*):= \lambda(\mathcal{E})^{k} \otimes \left( \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \left\lbrace \det \left( \mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right)^{ p_{j}(i)} \right\rbrace \right) \otimes \det(\mathcal{E}_{\sigma})^{e}$$ where the determinant is taken with respect to the projection $\mathcal{C} \times_S \mathcal{T}\longrightarrow \mathcal{T}$ and, for all $i \in I$ and $j \in \{1,2,...,\ell_i-1\}$,
- $\mathcal{E}_{\sigma_i}:=\mathcal{E}\vert_{\sigma_i(S)\times_S \mathcal{T}}$, where $\sigma_i:S \longrightarrow \mathcal{C}$ is the $i$-th parabolic section of $\pi_s$.
- $p_j(i)=a_{j+1}(i)-a_j(i)$ and $r_j(i):=\sum\limits_{q=1}^{j} m_q(i)=\mathrm{dim}_{\mathbb{C}}(\mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E}))$ (a small illustrative computation is given after this definition).
- $re= k\chi -\sum\limits_{i=1}^N \sum\limits _{j=1}^{\ell_i-1} p_j(i) r_j(i)$, where $\chi=d+ r(1-g)$.
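To fix ideas, here is a minimal illustrative computation (our own example, not taken from the references): take $r=2$ and a single marked point ($N=1$) with a full flag, i.e. $\ell_1=2$ and $m_1(1)=m_2(1)=1$, and weights $a_1(1)<a_2(1)$. Then $$p_1(1)=a_2(1)-a_1(1), \qquad r_1(1)=m_1(1)=1, \qquad \chi=d+2(1-g),$$ and the last relation reads $2e=k\chi-\left(a_2(1)-a_1(1)\right)$, which determines the integer $e$ whenever the right-hand side is even.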
**Theorem 50**. *([@pauly1996espaces]) Let $\mathcal{E}_*$ be a relative family of rank-$r$ parabolic vector bundles of fixed determinant $\delta \in \mathrm{Pic}^{d}(\mathcal{C}/S)$ and of parabolic type $\alpha_*$ parameterized by an $S$-scheme $\mathcal{T}$ over a family of smooth projective curves $\mathcal{C}/S$. Then there is an ample line bundle $\Theta_{par}$ over $\mathcal{SM}^{par}(r,\alpha,\delta)/S$ such that $\psi_{\mathcal{T}}^*(\Theta_{par})= \lambda_{par}(\mathcal{E}_*),$ where $\psi_{\mathcal{T}}$ is the classifying morphism to the relative moduli space $\mathcal{SM}^{par}(r,\alpha,\delta)/S$.*
We prove in the following proposition that the two definitions coincide.
**Proposition 51**. *Let $\mathcal{E}_*$ be a family of parabolic rank-$r$ vector bundles of parabolic type $\alpha_*$ over a smooth family of curves $\pi_s: \mathcal{C} \longrightarrow S$ parameterized by an $S$-variety $\mathcal{T}$. Then, $\Theta_{par}(\mathcal{E}_*)\cong \lambda_{par}(\mathcal{E}_*).$*
To prove the equality of the line bundles over $\mathcal{T}$, we begin by replacing $\det\left(F^{j}_i(\mathcal{E})/F^{j+1}_i(\mathcal{E})\right)$ by $\det\left(\mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right)$. In fact we have, for all $i \in I$ and $j \in \{1,2,...,\ell_i\}$, the equality $$\det\left(F^{j}_i(\mathcal{E})/F^{j+1}_i(\mathcal{E})\right)=\left(\det\left(\mathcal{E}_{\sigma_i}/F^{j}_i(\mathcal{E})\right)\right)^{-1}\otimes \det\left(\mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right).
\label{transformation}$$ For the proof, we take, for all $i \in I$ and $j \in \{1,2,...,\ell_i\}$, the quotient exact sequences $$\xymatrix{
0 \ar[r] & F^{j}_i(\mathcal{E}) \ar[r] & \mathcal{E}_{\sigma_i} \ar[r] & Q_i^{j-1}(\mathcal{E}):= \mathcal{E}_{\sigma_i}/F^{j}_i(\mathcal{E}) \ar[r]& 0, \\
0 \ar[r] & F^{j+1}_i(\mathcal{E}) \ar[r] & \mathcal{E}_{\sigma_i} \ar[r] & Q_i^{j}(\mathcal{E}):= \mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E}) \ar[r]& 0,
}$$ We compute the determinant line bundles $$\begin{aligned}
\det F^{j}_i(\mathcal{E}) = \det ( \mathcal{E}_{\sigma_i}) \otimes \left(\det Q_i^{j-1}(\mathcal{E}) \right)^{-1} \quad \mathrm{and} \quad
\det F^{j+1}_i(\mathcal{E})= \det ( \mathcal{E}_{\sigma_i}) \otimes \left( \det Q_i^{j}(\mathcal{E}) \right)^{-1}.\end{aligned}$$ Computing the quotient determinant using the above equalities, we get $$\begin{aligned}
\det \left( F^{j}_i(\mathcal{E})/ F^{j+1}_i(\mathcal{E}) \right)&= \left( \det \left(\mathcal{E}_{\sigma_i}/F^{j}_i(\mathcal{E})\right) \right)^{-1} \otimes \det\left(\mathcal{E}_{\sigma_i}/F^{j+1}_i (\mathcal{E})\right).\end{aligned}$$ Now we can prove the proposition: $$\begin{aligned}
\lambda_{par}(\mathcal{E}_*)& := \lambda(\mathcal{E})^{k} \otimes \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i} \left\lbrace \det \left( F^{j}_i(\mathcal{E})/F^{j+1}_i(\mathcal{E})\right) \right\rbrace^{-a_j(i)} \otimes \det(\mathcal{E}_{\sigma})^{ \frac{k}{r}\chi_{par}} \\
&= \lambda(\mathcal{E})^{k} \otimes \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i} \left\lbrace \det \left(\mathcal{E}_{\sigma_i}/F^{j}_i(\mathcal{E})\right)^{-1} \otimes \det\left(\mathcal{E}_{\sigma_i}/F^{j+1}_i (\mathcal{E})\right) \right\rbrace ^{-a_j(i)}\otimes \det(\mathcal{E}_{\sigma})^{ \frac{k}{r} \chi_{par}}\end{aligned}$$ by rearranging the terms, we get $$\begin{aligned}
\lambda_{par}(\mathcal{E}_*)= \lambda(\mathcal{E})^{k} \otimes \bigotimes\limits_{i=1}^N\left\lbrace \det(\mathcal{E}_{\sigma_i})^{-a_{\ell_i}(i)} \otimes \bigotimes_{j=1}^{\ell_i-1} \det \left( \mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right)^{ p_{j}(i)} \right\rbrace \otimes \det(\mathcal{E}_{\sigma})^{ \frac{k}{r}\chi_{par}} \end{aligned}$$ As $\det(\mathcal{E}_{\sigma})$ is independent of the section $\sigma$, we get $$\begin{aligned}
\lambda_{par}(\mathcal{E}_*)= \lambda(\mathcal{E})^{k} \otimes \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \det \left( \mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right)^{ p_{j}(i)} \otimes \det(\mathcal{E}_{\sigma})^{\left( \frac{k}{r} \chi_{par}-\sum\limits_{i=1}^N a_{\ell_i}(i)\right)}\end{aligned}$$
Now we observe the following equality: $$\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i} a_j(i)m_j(i)=-\sum\limits_{i=1}^N \sum\limits _{j=1}^{\ell_i-1} p_j(i) r_j(i)+r \sum\limits_{i=1}^N a_{\ell_i}(i)
\label{equality}$$ So the exponent becomes $$\begin{aligned}
\frac{k}{r} \chi_{par}-\sum\limits_{i=1}^N a_{\ell_i}(i)&=\frac{k}{r} \chi+ \frac{1}{r} \left( \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i} a_j(i) m_j(i)-r\sum\limits_{i=1}^N a_{\ell_i}(i) \right)=\frac{k }{r}\chi- \frac{1}{r}\sum\limits_{i=1}^N \sum\limits _{j=1}^{\ell_i-1} p_j(i) r_j(i).
\end{aligned}$$ which is equivalent to the equality $k \chi_{par}-r \sum\limits_{i=1}^N a_{\ell_i}(i)=re$, i.e. the exponent equals $e$. This concludes the proof. $\square$\
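As a quick sanity check of [\[equality\]](#equality){reference-type="eqref" reference="equality"} (our own verification), take $r=2$ with a full flag at each marked point, so that $\ell_i=2$ and $m_1(i)=m_2(i)=1$: the left-hand side equals $\sum\limits_{i=1}^N \left(a_1(i)+a_2(i)\right)$, while the right-hand side equals $\sum\limits_{i=1}^N \left(-(a_2(i)-a_1(i))+2a_2(i)\right)=\sum\limits_{i=1}^N \left(a_1(i)+a_2(i)\right)$, as expected.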
We now give another description of the parabolic determinant line bundle.
**Proposition 52** (Parabolic determinant bundle and Hecke modifications). *Let $\mathcal{E}_*$ be a family of parabolic rank-$r$ vector bundles of parabolic type $\alpha_*$ over a smooth family of curves $\mathcal{C}/S$ parameterized by an $S$-variety $\mathcal{T}$. Then $$\lambda_{par}(\mathcal{E}_*)^r=\Theta^{a} \otimes \left( \bigotimes\limits_{i=1}^N\bigotimes\limits_{j=1}^{\ell_i-1} \Theta_{j}(i)^{n_j(i)p_j(i)} \right),
\label{decomposition}$$ where, for all $i \in I$ and $j \in \{1,2,...,\ell_i-1 \}$*
- *$\Theta$ is the pullback of the ample generator of $\mathrm{Pic}(\mathcal{SU}_{\mathcal{C}}(r,\delta)/S)$ by the classifying map $\phi_{\mathcal{T}}$ and $n=\mathrm{gcd}(r,d)$.*
- *$\Theta_{j}(i)$ is the pullback of the ample generators of $\mathrm{Pic}(\mathcal{SU}_{\mathcal{C}}(r,\delta_j(i))/S)$ by the classifying maps $\phi^{\mathcal{T}}_{i,j}$ and $n_j(i)=\mathrm{gcd}(r,d_j(i))$, where $d_j(i)=\mathrm{deg}(\delta_j(i))$.*
- *$p_j(i)=a_{j+1}(i)-a_j(i)$ and $a= n \left( k- \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} p_j(i) \right)$.*
By Proposition [Proposition 51](#B=P){reference-type="ref" reference="B=P"}, the proposition is equivalent to the equality $\Theta_{par}(\mathcal{E}_*)^r=\Theta^{a} \bigotimes\limits_{i=1}^N\bigotimes\limits_{j=1}^{\ell_i-1} \Theta_{j}(i)^{n_j(i)p_j(i)}.$ By Definition [Definition 49](#parabolic_determinant_2){reference-type="ref" reference="parabolic_determinant_2"} we have $$\Theta_{par}(\mathcal{E}_*)= \lambda(\mathcal{E})^{k} \otimes \left(\bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \left\lbrace \det \left( \mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right)^{ p_{j}(i)} \right\rbrace \right) \otimes \det(\mathcal{E}_{\sigma})^{e}.$$ Take the Hecke exact sequences $$\xymatrix{
0 \ar[r] & G^{j}_i(\mathcal{E}) \ar[r] & \mathcal{E}\ar[r] & Q_i^{j}(\mathcal{E}):=\mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E}) \ar[r]& 0,
}$$ By Lemma 3.3 of [@pauly1998fibres], we get $\lambda(\mathcal{E})=\lambda( G^{j}_i(\mathcal{E})) \otimes \left( \det \left(\mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E}) \right) \right)^{-1}.$\
We rearrange the terms using the above equality and take the $r$-th power: $$\begin{aligned}
\Theta_{par}(\mathcal{E}_*)^r&= \lambda(\mathcal{E})^{rk} \otimes \left( \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \left\lbrace \lambda \left( G^{j}_i (\mathcal{E}) \right) \otimes \lambda(\mathcal{E})^{-1} \right\rbrace^{rp_{j}(i)} \right)\otimes \det(\mathcal{E}_{\sigma})^{re} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&= \left\lbrace \lambda(\mathcal{E})^{k} \bigotimes\limits_{i=1}^N \bigotimes\limits_{j=1}^{\ell_i-1} \lambda(\mathcal{E})^{-p_j(i)} \right\rbrace^r \otimes \left( \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \lambda \left( G^{j}_i ( \mathcal{E}) \right)^{rp_{j}(i)}\right) \otimes \det(\mathcal{E}_{\sigma})^{re} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&= \left\lbrace \lambda(\mathcal{E})^{\frac{r}{n}} \otimes \det(\mathcal{E}_{\sigma})^{\aleph} \right\rbrace^{a} \otimes \left( \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \left\lbrace \lambda\left( G^{j}_i ( \mathcal{E})\right)^{\frac{r}{n_j(i)}} \otimes \det( \mathcal{E}_{\sigma})^{\aleph_j(i)} \right\rbrace^{n_j(i) p_{j}(i)}\right) \otimes \det(\mathcal{E}_{\sigma})^q,\end{aligned}$$ where $n \aleph=\chi:=d+r(1-g)$ for $n=\gcd(r,d)$, and $n_j(i) \aleph_j(i)=\chi_j(i):=d_j(i)+r(1-g)$ for $n_j(i)=\gcd(r,d_j(i))$. Hence $$\Theta_{par}(\mathcal{E}_*)^r= \Theta^{a} \otimes \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \Theta_j(i)^{n_j(i) p_{j}(i)}\otimes \det(\mathcal{E}_{\sigma})^q,$$ where $a =n \left( k- \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} p_j(i) \right) \ \ \mathrm{and} \ \
re= k\chi -\sum\limits_{i=1}^N \sum\limits _{j=1}^{\ell_i-1} p_j(i) r_j(i).$\
Hence, we get $q =re- \aleph a- \left( \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} \aleph_j(i) p_j(i) n_j(i)\right)=0$. This concludes the proof. $\square$
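For completeness, here is the verification that $q=0$ (a short computation; it uses that $d_j(i)=\mathrm{deg}(\delta_j(i))=d-r_j(i)$, i.e. that the Hecke modification lowers the degree by $r_j(i)$, as in the degree computation of the Grassmannian case below): one has $$\aleph a=\frac{\chi}{n}\cdot n\left(k-\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1}p_j(i)\right)=k\chi-\chi\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1}p_j(i) \quad \mathrm{and} \quad \aleph_j(i)n_j(i)=\chi_j(i)=\chi-r_j(i),$$ so that $q=re-\aleph a-\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1}\left(\chi-r_j(i)\right)p_j(i)=re-k\chi+\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1}p_j(i)r_j(i)=0$ by the definition of $e$.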
#### Canonical bundle
We compute the canonical bundle of the relative moduli space $\mathcal{SM}^{par}_{\mathcal{C}}:=\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)/S$ of rank-$r$ semi-stable parabolic bundles of fixed parabolic type over a smooth family of marked projective curves parameterized by a scheme $S$.
#### Canonical bundle in the Grassmannian case:
We suppose that the divisor is of degree one and that the flag type is of length one. Let $\mathcal{E}_*$ be a virtual universal parabolic vector bundle $$\def\cartesien{%
\ar@{-}[]+R+<6pt,-1pt>;[]+RD+<6pt,-6pt>%
\ar@{-}[]+D+<1pt,-6pt>;[]+RD+<6pt,-6pt>%
}
\xymatrix{
\mathcal{X}^{par} \ar[rr]^{\pi_{n}} \ar[d]_{\pi_{w}} \cartesien && \mathcal{SM}^{par}_{\mathcal{C}} \ar[d]^{\pi_{e}} \ar[rr]^{\phi} && \mathcal{SU}_{\mathcal{C}}(r,\delta) \ar[dll]^{p_e} \\
\ \left( \mathcal{C},D \right) \ar[rr]_{\pi_{s}} && \ar@/^1pc/[ll]^{ \sigma_{i}} S
where $D=\sigma(S)$. In this case the map $\phi$ is a Grassmannian bundle over the stable locus of $\mathcal{SU}_{\mathcal{C}}$ (the relative moduli space of semi-stable rank-$r$ vector bundles of determinant $\delta$). We set $\mathcal{D}:=\pi_w^* \left(D \right)=D \times_S \mathcal{SM}^{par}_{\mathcal{C}}$; then we have the Hecke exact sequence $$0\longrightarrow \mathcal{H}(\mathcal{E}) \longrightarrow \mathcal{E} \longrightarrow Q(\mathcal{E})=\mathcal{E}\vert_{\mathcal{D}}/F(\mathcal{E}) \longrightarrow 0
\label{B}$$ and the natural exact sequence supported over $\mathcal{D}$ $$0\longrightarrow F(\mathcal{E}) \longrightarrow \mathcal{E}\vert_{\mathcal{D}} \longrightarrow Q(\mathcal{E}) \longrightarrow 0
\label{A}$$ The relative tangent bundle of the fibration $\phi$ is given as follows $$T_{\phi}=\mathrm{Hom} \left( F(\mathcal{E}),Q(\mathcal{E}) \right)=F(\mathcal{E})^{\vee}\otimes Q(\mathcal{E}).$$ We set $r':=\mathrm{rank}(Q(\mathcal{E}))=r-\mathrm{rank}(F(\mathcal{E}))$; hence the relative canonical bundle is given as follows $$K_{\phi}=\det (T_{\phi})^{-1}= \det(F(\mathcal{E}))^{r'} \otimes \det( Q(\mathcal{E}))^{-(r-r')}.$$ The short exact sequence [\[A\]](#A){reference-type="eqref" reference="A"} gives $$\det(\mathcal{E}_D)=\det(F(\mathcal{E}))\otimes \det(Q(\mathcal{E})),$$ and substituting into the previous equation we get $$K_{\phi}=\det(\mathcal{E}_D)^{r'} \otimes \det(Q(\mathcal{E}))^{-r}.$$ Lemma 3.3 of [@pauly1998fibres], applied to the Hecke modification sequence [\[B\]](#B){reference-type="eqref" reference="B"}, gives the equality $$\lambda(\mathcal{E})=\lambda(\mathcal{H}(\mathcal{E}))\otimes \det(Q(\mathcal{E}))^{-1}$$ which implies that $$K_{\phi}=\det(\mathcal{E}_D)^{r'} \otimes \lambda(\mathcal{H}(\mathcal{E}))^{-r} \otimes \lambda(\mathcal{E})^r$$ $$K_{\phi}=\left[ \lambda(\mathcal{H}(\mathcal{E}))^{\frac{r}{n'}} \otimes \det(\mathcal{E}_D)^{\frac{\chi'}{n'}} \right]^{-n'} \otimes \left[ \lambda(\mathcal{E})^{\frac{r}{n}} \otimes \det(\mathcal{E}_{D})^{\frac{\chi}{n} } \right]^{n} \otimes \det(\mathcal{E}_D)^{r'-\chi+\chi'}$$ where $n=\mathrm{gcd}(r,\mathrm{deg}(\mathcal{E}))$, $n'=\mathrm{gcd}(r,\mathrm{deg}(\mathcal{H}(\mathcal{E})))$, $\chi=\chi(\mathcal{E})$ and $\chi'=\chi(\mathcal{H}(\mathcal{E}))$. We have $$\chi'-\chi=\mathrm{deg}(\mathcal{H}(\mathcal{E}))-\mathrm{deg}(\mathcal{E})=-r' \Longrightarrow r'-\chi+\chi'=0.$$ If we denote by $\Theta_D$ the pull-back of the ample generator of $\mathrm{Pic}(\mathcal{SU}_{\mathcal{C}}(r,\delta')/S)$, we get $K_{\phi}=\Theta^n \otimes \Theta_D^{-n'},$ hence $$K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}=K_{\mathcal{SU}_{\mathcal{C}}(r,\delta)/S}\otimes K_{\phi}=\Theta^{-2n} \otimes \left( \Theta^n \otimes \Theta_D^{-n'} \right)=\Theta^{-n} \otimes \Theta_D^{-n'}.
\label{grassmanian-canonical-bundle}$$
#### General case:
Now we can calculate the relative canonical bundle $K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}$ of the moduli space $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$.
**Proposition 53**. *Let $b=-n \left( 2+\mathrm{deg}(D)- \sum\limits_{i=1}^N \ell_i \right).$ Then $K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}=\Theta^{b} \otimes \left( \bigotimes\limits_{i=1}^N \bigotimes\limits_{j=1}^{\ell_i-1} \Theta_{j}(i)^{-n_{j}(i)}\right).$*
Let $\phi:\mathcal{SM}^{par}_{\mathcal{C}} \rightarrow \mathcal{SU}_{\mathcal{C}}(r,\delta)$ be the forgetful map and denote its relative canonical bundle by $K_{\phi}$; then we have $K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}=K_{\mathcal{SU}_{\mathcal{C}}(r,\delta) /S}\otimes K_{\phi}$, and by Drezet-Narasimhan [Theorem 20](#Drezet_Narasimhan){reference-type="ref" reference="Drezet_Narasimhan"} we get $K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}=\Theta^{-2n}\otimes K_{\phi}$, where $\Theta$ is the pullback of the relative ample generator of $\mathrm{Pic}(\mathcal{SU}_{\mathcal{C}}(r,\delta)/S)$ by $\phi$ and $n=\gcd(r,\mathrm{deg}(\delta))$. Now, as the map $\phi$ is generically a product of flag varieties, we can decompose the relative canonical bundle $K_{\phi}=\bigotimes\limits_{i=1}^N K_{\phi(i)}$, where for all $i\in I$ the bundle $K_{\phi(i)}$ is the canonical bundle of a flag variety. Hence, as the flag variety embeds canonically into a product of Grassmannians and its canonical bundle is given by the product of the canonical bundles coming from the Grassmannians, the equality [\[grassmanian-canonical-bundle\]](#grassmanian-canonical-bundle){reference-type="eqref" reference="grassmanian-canonical-bundle"} gives $$K_{\phi(i)}= \bigotimes_{j=1}^{\ell_i-1} \left( \Theta^{n} \otimes \Theta_{j}(i)^{-n_{j}(i)} \right).$$ Replacing and rearranging the terms in the equation $K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}=K_{\mathcal{SU}_{\mathcal{C}}(r,\delta) /S}\otimes K_{\phi}$, we get $$\begin{aligned}
K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}&= \Theta^{-2n}\otimes \bigotimes\limits_{i=1}^N K_{\phi(i)} = \Theta^{-2n}\otimes \bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \left( \Theta^{n} \otimes \Theta_{j}(i)^{-n_{j}(i)} \right). \end{aligned}$$ Collecting the powers of $\Theta$ and using $\sum\limits_{i=1}^N(\ell_i-1)=\sum\limits_{i=1}^N \ell_i-\mathrm{deg}(D)$, the exponent of $\Theta$ equals $-2n+n\sum\limits_{i=1}^N(\ell_i-1)=b$. This proves the formula. $\square$
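As a consistency check (our own remark), in the Grassmannian case of the previous paragraph one has $\mathrm{deg}(D)=1$ and $\ell_1=2$, so that $b=-n(2+1-2)=-n$ and the formula of Proposition [Proposition 53](#Canonical bundle){reference-type="ref" reference="Canonical bundle"} reduces to $$K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}=\Theta^{-n}\otimes \Theta_{1}(1)^{-n_{1}(1)}=\Theta^{-n}\otimes \Theta_D^{-n'},$$ in agreement with [\[grassmanian-canonical-bundle\]](#grassmanian-canonical-bundle){reference-type="eqref" reference="grassmanian-canonical-bundle"}.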
# Hitchin connection for parabolic non-abelian theta-functions
**Theorem 54**. *Under the same hypotheses, the parabolic symbol map $\rho_{par}$ and the parabolic determinant line bundle $\Theta_{par}$ satisfy the van Geemen-de Jong equation, i.e. $\mu_{\Theta_{par}} \circ \rho_{par}=- ( k+r) \Phi_{par}$.*
By Proposition [Proposition 26](#Welters){reference-type="ref" reference="Welters"}, the Theorem is equivalent to the following points
1. We prove the equality $\cup [\Theta_{par} ]\circ \rho_{par}=- k\ \Phi^{par}$ (the so-called metaplectic correction). By Proposition [Proposition 52](#Parabolic determinant bundle and Hecke modifications){reference-type="ref" reference="Parabolic determinant bundle and Hecke modifications"}, Theorem [Theorem 48](#equalities){reference-type="ref" reference="equalities"} and linearity with respect to the tensor product, we get $$\begin{aligned}
\cup [ \Theta_{par}^r ] \circ \rho_{par} &= \cup \left[ \Theta^{a} \otimes \bigotimes\limits_{i=1}^{N}\bigotimes\limits_{j=1}^{\ell_i-1} \Theta_{j}(i)^{n_j(i)p_j(i)} \right] \circ \rho_{par} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&= a \left( \cup \left[ \Theta \right] \circ \rho_{par} \right)+ \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} n_j(i) p_j(i) \left( \cup \left[ \Theta_j(i) \right] \circ \rho_{par} \right) \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
& = - \left( \frac{r}{n}a+ \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} n_j(i) p_j(i) \left( \frac{r}{n_j(i)} \right)\right) \Phi_{par}
\end{aligned}$$ Thus, dividing by $r$, $$\cup [ \Theta_{par} ] \circ \rho_{par}=-\left( \frac{a}{n}+ \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} p_j(i) \right) \Phi_{par} =-k \ \Phi_{par}
\label{1-part-VJ}$$ where the last equality follows from the identity $$\frac{a}{n}+\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} p_j(i) = \left( k-\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} p_j(i) \right)+\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} p_j(i)=k.$$
2. We prove the equality: $\cup [K_{\mathcal{M}^{par}/S}] \circ \rho_{par}= 2r \ \Phi_{par}$.
By Proposition [Proposition 53](#Canonical bundle){reference-type="ref" reference="Canonical bundle"}, Theorem [Theorem 48](#equalities){reference-type="ref" reference="equalities"} and linearity with respect to the tensor product, we have $$\begin{aligned}
\cup [K_{\mathcal{SM}^{par}_{\mathcal{C}}/S}] \circ \rho_{par} &= \cup \left[ \Theta^{b} \otimes \bigotimes\limits_{i=1}^N \bigotimes\limits_{j=1}^{\ell_i-1} \Theta_j(i)^{-n_j(i)} \right] \circ \rho_{par} \\
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
&=b \left( \cup [ \Theta] \circ \rho_{par} \right) + \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1} \left(-n_j(i)\right) \left( \cup [ \Theta_j(i) ] \circ \rho_{par} \right) \\
&= r \left( 2+\mathrm{deg}(D)- \sum\limits_{i=1}^N \ell_i+ \sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1}1 \right)\ \Phi_{par}= 2r \ \Phi_{par},\end{aligned}$$ where the last equality uses $\sum\limits_{i=1}^N \sum\limits_{j=1}^{\ell_i-1}1=\sum\limits_{i=1}^N \ell_i-\mathrm{deg}(D)$.
Combining the two equalities, we get $$\mu_{\Theta_{par}} \circ \rho_{par} =-(k+r) \cdot \Phi_{par}.
\label{1+2-part-VJ}$$ $\square$
We observe that the composition $\mu_{\Theta_{par}}\circ \rho_{par}$ does not depend on the parabolic weights but only on the level $k$: in some sense, only the factor $\Theta^{nk}$ contributes in the decomposition [\[decomposition\]](#decomposition){reference-type="eqref" reference="decomposition"}. We rearrange the terms as follows $$\begin{aligned}
\Theta_{par}^r =\Theta^{a} \bigotimes\limits_{i=1}^N\bigotimes\limits_{j=1}^{\ell_i-1} \Theta_j(i)^{n_j(i)p_j(i)}
=\Theta^{nk} \otimes \left( \bigotimes\limits_{i=1}^N\bigotimes\limits_{j=1}^{\ell_i-1} \left( \Theta^{-n} \otimes \Theta_j(i)^{n_j(i)} \right)^{p_j(i)}\right).\end{aligned}$$ By Definition [Definition 49](#parabolic_determinant_2){reference-type="ref" reference="parabolic_determinant_2"} and Propositions [Proposition 51](#B=P){reference-type="ref" reference="B=P"} and [Proposition 52](#Parabolic determinant bundle and Hecke modifications){reference-type="ref" reference="Parabolic determinant bundle and Hecke modifications"}, we get the identification $$\bigotimes\limits_{i=1}^N \bigotimes_{j=1}^{\ell_i-1} \left\lbrace \det \left( \mathcal{E}_{\sigma_i}/F^{j+1}_i(\mathcal{E})\right)^{ rp_{j}(i)} \right\rbrace \otimes \det(\mathcal{E}_{\sigma})^{(re-k\chi)}=\bigotimes\limits_{i=1}^N\bigotimes\limits_{j=1}^{\ell_i-1} \left( \Theta^{-n} \otimes \Theta_j(i)^{n_j(i)} \right)^{p_j(i)},$$ which we call the flag part of the determinant line bundle and denote by $\mathcal{F}(\alpha_*)$. By Theorem [Theorem 48](#equalities){reference-type="ref" reference="equalities"}, for all $i \in I$ and $j \in \{1,2,...,\ell_i \}$ we have $$\left( \cup \left[ \Theta^{-n} \right]+ \cup \left[ \Theta_j(i)^{n_j(i)} \right] \right) \circ \rho_{par} = 0;$$ thus, since by Proposition [Proposition 42](#rho_isomorphism){reference-type="ref" reference="rho_isomorphism"} the map $\rho_{par}$ is an isomorphism, we get $$\cup \left[ \mathcal{F}(\alpha_*)\right] =0.
\label{flag-equation}$$
**Remark 56**.
1. In the general case (see [@singh2021differential]): if $X$ is a Hitchin variety and $L$ a line bundle over $X$, then there is a map $$\cup \left[L \right]: \mathrm{H}^0(X, \mathrm{Sym}^q \ T_X) \longrightarrow \mathrm{H}^1(X, \mathrm{Sym}^{q-1}T_X),$$ which can be seen as the first connecting map on cohomology of the short exact sequence $$0 \longrightarrow \mathrm{Sym}^{q-1}\left( \mathcal{D}^{(1)}_X(L)\right) \longrightarrow \mathrm{Sym}^q( \mathcal{D}^{(1)}_X(L)) \longrightarrow \mathrm{Sym}^q(T_X) \longrightarrow 0,$$ the $q$-th symmetric power of the Atiyah sequence [\[atiyah-classe\]](#atiyah-classe){reference-type="eqref" reference="atiyah-classe"}, and we have the following theorem
**Theorem 55** ([@singh2021differential], Theorem 2.2). *If $L$ is an ample line bundle then the map above is an isomorphism.*
2. The varieties $\mathcal{SU}_{\mathcal{C}/S}(r,\delta)$ and $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ are Hitchin varieties in the sense of [@singh2021differential], and by Theorem [Theorem 50](#ampleness of Theta_par){reference-type="ref" reference="ampleness of Theta_par"} the parabolic determinant line bundle $\Theta_{par}$ is ample; thus the map $$\cup \left[ \Theta_{par} \right]: \pi_{e_*} \mathrm{Sym}^2 \left( T_{\mathcal{SM}^{par}/S} \right) \longrightarrow R^1\pi_{e_*} \left( T_{\mathcal{SM}^{par}/S}\right)$$ is an isomorphism.
By equalities [\[1+2-part-VJ\]](#1+2-part-VJ){reference-type="eqref" reference="1+2-part-VJ"} and [\[flag-equation\]](#flag-equation){reference-type="eqref" reference="flag-equation"}, one has, for every positive integer $\nu$, the equalities $$\begin{aligned}
\mu_{\Theta_{par}^{\nu}}&= \frac{n \left( \nu k+r \right)}{r} \cdot \cup \left[\Theta \right]=\left( \frac{\nu k+r}{k} \right) \cup \left[ \Theta_{par} \right].\end{aligned}$$
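For the reader's convenience, here is a short justification of the second equality (our own remark): by the decomposition $\Theta_{par}^r=\Theta^{nk}\otimes \mathcal{F}(\alpha_*)$ and equation [\[flag-equation\]](#flag-equation){reference-type="eqref" reference="flag-equation"}, one has $$\cup \left[ \Theta_{par} \right]=\frac{1}{r}\cup \left[ \Theta_{par}^{r} \right]=\frac{nk}{r}\cup \left[ \Theta \right],$$ so that $\frac{n(\nu k+r)}{r}\cup \left[ \Theta \right]=\left( \frac{\nu k+r}{k} \right)\cup \left[ \Theta_{par} \right]$.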
Thus, by the previous remark, we have the following proposition.
**Proposition 57**. *For any positive integer $\nu$, the map $\mu_{\Theta_{par}^{\nu}}$ is an isomorphism.*
We thus get the van Geemen-de Jong equation of Theorem [Theorem 54](#VJ-equation){reference-type="ref" reference="VJ-equation"} for any positive power of the parabolic theta line bundle: $$\mu_{\Theta_{par}^{\nu}} \circ \rho_{par}=- ( \nu k+r) \Phi_{par}.$$
**Theorem 58**. *Consider a smooth versal family $(\pi_s:(\mathcal{C},D) \longrightarrow S)$ of complex projective marked curves of genus $g \geq 2$, and $D$ a reduced divisor of relative degree $N$. Take $\alpha_*=(k,\vec{a},\vec{m})$ a fixed rank-$r$ parabolic type with respect to the divisor $D$. We denote by $(\pi_e: \mathcal{SM}^{par}(r,\alpha_*,\delta) \rightarrow S)$ the relative moduli space of parabolic rank-$r$ vector bundles over $(\mathcal{C},D)/S$ with determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$, equipped with the parabolic determinant line bundle $\Theta_{par}$. Let $\nu$ be a positive integer. Then, there exists a unique projective flat connection on the vector bundle ${\pi_e}_*(\Theta_{par}^{\nu})$ of non-abelian parabolic theta functions, induced by a heat operator with symbol $$\rho^{Hit}_{par}(\nu):= \frac{1}{(\nu k+r)} \left( \rho_{par}\circ \kappa_{\mathcal{C}/S} \right).$$*
Let us prove the theorem for $\nu=1$; we denote $\rho^{Hit}_{par}:= \rho^{Hit}_{par}(1)$.
- First we prove the existence of the connection: we apply the van Geemen-de Jong Theorem [Theorem 27](#van Geemen and De Jong){reference-type="ref" reference="van Geemen and De Jong"} to $L=\Theta_{par}$ over $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$. By Theorem [Theorem 54](#VJ-equation){reference-type="ref" reference="VJ-equation"} and Theorem [Theorem 36](#Parabolic BSBE){reference-type="ref" reference="Parabolic BSBE"}, we get the first condition of Theorem [Theorem 27](#van Geemen and De Jong){reference-type="ref" reference="van Geemen and De Jong"} $$\begin{aligned}
\mu_{\Theta_{par}} \circ \left( \rho_{par} \circ \kappa_{\mathcal{C}/S} \right) &= -(k+r) \Phi_{par} \circ \kappa_{\mathcal{C}/S} =-(k+r)\ \kappa_{\mathcal{SM}^{par}_{\mathcal{C}}/S}.
\end{aligned}$$ The second condition follows from Theorem [Theorem 55](#Singh){reference-type="ref" reference="Singh"} for $q=1$, i.e. the map $$\cup \left[ \Theta_{par} \right]: \pi_{e_*} (T_{\mathcal{SM}^{par}/S}) \longrightarrow R^1\pi_{e_*}( \mathcal{O}_{\mathcal{SM}^{par}/S})$$ is an isomorphism. As the relative Picard group $\mathrm{Pic}(\mathcal{SM}^{par}_{\mathcal{C}}/S)$ is discrete, the infinitesimal deformations of any line bundle $L$ over $\mathcal{SM}^{par}$ are trivial; since these deformations are parameterized by the sheaf $\mathrm{R}^1\pi_{e_*}(\mathcal{O}_{\mathcal{SM}^{par}})$ over $S$, we get $\mathrm{R}^1\pi_{e_*}( \mathcal{O}_{\mathcal{SM}^{par}})\cong 0$, and as a consequence $\pi_{e_*} T_{\mathcal{SM}^{par}/S} \cong 0$: there are no global relative vector fields.
The third condition follows from the algebraic Hartogs' theorem and the facts that the space $\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)$ is a normal variety, proper over $S$, and that the smooth locus is a big open subset of $\mathcal{SM}^{par}_{\mathcal{C}}(r,\alpha_*,\delta)$.
- Flatness of the connection: we apply the flatness criterion of Theorem [Theorem 28](#Flatness criterion){reference-type="ref" reference="Flatness criterion"} to the parabolic symbol map $\rho_{par}$. The first condition holds since, by definition, the image of the parabolic symbol map corresponds to functions on $T^{\vee}_{\mathcal{SM}^{par}/S}$ that are homogeneous of degree two under the action of $\mathbb{G}_m$ and come from the parabolic Hitchin system, hence they Poisson-commute. The second point is given in Proposition [Proposition 57](#mu-isomorphism){reference-type="ref" reference="mu-isomorphism"} and the third point is given in the first part of the proof.
$\square$
Take $D=\emptyset$ and $\alpha_*=k \in \mathbb{N}^*$ the trivial parabolic type. We have the identification $\mathcal{SM}^{par}_{\mathcal{C}}(r,0_*,\delta)=\mathcal{SU}_{\mathcal{C}}(r,\delta)$, the moduli space of semistable rank-$r$ vector bundles with fixed determinant $\delta \in \mathrm{Pic}^d(\mathcal{C}/S)$; hence $\Theta_{par}^{r/n}=\mathcal{L}^k$ for $n:=\gcd(r,\mathrm{deg}(\delta))$ and $\rho_{par}\equiv \rho^{Hit}$. As a corollary we obtain the following theorem.
**Theorem 59**. *We consider $\mathcal{C}/S$ a smooth family of complex projective curves of genus $g \geq 2$ (and $g \geq 3$ if $r=2$ and $\mathrm{deg}(\delta)$ is even). Let $\mathcal{L}$ be the determinant line bundle over $p_e: \mathcal{SU}_{\mathcal{C}/S}(r,\delta) \rightarrow S$. Let $k$ be a positive integer. Then, there exists a unique projective flat connection on the vector bundle $p_{e_*}(\mathcal{L}^{k})$ of non-abelian theta functions, induced by a heat operator with symbol map $$\rho(k):= \frac{n}{r(k+n)} \left( \rho^{Hit}\circ \kappa_{\mathcal{C}/S} \right).$$*
#### Some comments:
We apply Theorem [Theorem 59](#Main Theorem 2){reference-type="ref" reference="Main Theorem 2"} for $\delta=\mathcal{O}_{\mathcal{C}}$, so that $n=r$ and $\Theta_{par}=\mathcal{L}^k$, and we get $$\rho(k):= \frac{1}{(k+r)} \left( \rho^{Hit}\circ \kappa_{\mathcal{C}/S} \right).$$ Hence, we recover Theorems 4.8.1 and 4.8.2 in [@pauly2023hitchin], which were generalized in [@biswas2021ginzburg; @biswas2021geometrization] to $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*)$, the space of rank-$r$ parabolic bundles with trivial determinant and parabolic type $\alpha_*$. The symbol map is given, for a positive integer $\nu$, by $$\rho^{Hit}_{par, \Gamma}(\nu):=\vert \Gamma \vert \ \mu^{-1}_{\Theta^{\nu}} \circ \left(\cup[\Theta] \circ \rho_{par} \circ \kappa_{\mathcal{C}/S} \right).$$ By [Proposition 57](#mu-isomorphism){reference-type="ref" reference="mu-isomorphism"} and [\[flag-equation\]](#flag-equation){reference-type="eqref" reference="flag-equation"}, we get the equality $\vert \Gamma\vert (\mu^{-1}_{\Theta_{par}^{\nu}} \circ \cup[\Theta]) = \frac{\vert \Gamma \vert}{(\nu k+r)} \mathrm{Id}$, hence $$\rho^{Hit}_{par, \Gamma}(\nu):= \frac{\vert \Gamma \vert}{(\nu k+r)} \left( \rho_{par} \circ \kappa_{\mathcal{C}/S} \right)=\vert \Gamma \vert \rho^{Hit}_{par}(\nu).$$ The factor $\vert \Gamma \vert$ appears because they work over $\mathcal{SU}_{X/S}^{\Gamma}(r)$, the moduli space of $\Gamma$-linearised bundles for a family of Galois coverings $h:X \longrightarrow (\mathcal{C},D)$ parameterized by the variety $S$.
**Remark 61**. If the system of weights $\alpha_*$ is not generic in the sense of Yokogawa, then the moduli space $\mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)$ is not smooth, and its Picard group is not maximal. In other words, not all line bundles on the Quot scheme descend to the moduli space. In fact, we can choose the weights $\alpha_*$ in such a way that we have the following isomorphism: $$\mathrm{Pic}\left( \mathcal{SM}^{par}_{\mathcal{C}/S}(r,\alpha_*,\delta)/S \right) \simeq \mathbb{Z} \Theta_{par}.$$
To prove Theorem [Theorem 58](#Main Theorem 1){reference-type="ref" reference="Main Theorem 1"} in the case of non-generic weights, we must work over the stack of quasi-parabolic vector bundles, where the Picard group is maximal and the Hecke maps $\mathcal{H}_i^j$ and the forgetful map are morphisms of stacks (no stability conditions are involved). Note that the decompositions of the parabolic determinant line bundle [Proposition 52](#Parabolic determinant bundle and Hecke modifications){reference-type="ref" reference="Parabolic determinant bundle and Hecke modifications"} and of the canonical line bundle [Proposition 53](#Canonical bundle){reference-type="ref" reference="Canonical bundle"} still hold. Hence, we get the van Geemen-de Jong equation over the stack, and as the morphisms that appear in this equation are well defined over the moduli space of parabolic bundles for any system of weights, we get the existence theorem for the Hitchin connection.
**Example 60** (Non-generic weights. See [@pauly1998fibres] for the details). Let us consider the rank-two case with $D$ a parabolic divisor of degree $N=2m \geq 4$. For all $i\in I$, we choose the following system of weights:
$a_1(i)=0$, $a_2(i)=1$ and $k=2$.
In this case, the Picard group of the moduli space $\mathcal{SM}^{par}_{\mathcal{C}/S}(2,\alpha_*,\mathcal{O}_{\mathcal{C}})$ is generated by the line bundle $\Theta_{par}$.
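For instance (our own computation, under the convention that the Hecke modification lowers the degree by $r_j(i)$, as in the Grassmannian case above), in this example one has $p_1(i)=1$ and $r_1(i)=1$ for all $i\in I$, $n=\gcd(2,0)=2$, and $\mathrm{deg}(\delta_1(i))=-1$, hence $n_1(i)=1$; the decomposition [\[decomposition\]](#decomposition){reference-type="eqref" reference="decomposition"} then reads $$\lambda_{par}(\mathcal{E}_*)^2=\Theta^{2(2-2m)}\otimes \bigotimes\limits_{i=1}^{2m}\Theta_{1}(i).$$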
M. Atiyah. Complex analytic connections in fibre bundles. , 85(1):181--207, 1957.
M. Atiyah and R. Bott. The Yang-Mills equations over Riemann surfaces. , 308(1505):523--615, 1983.
J. Andersen, N. Gammelgaard, and M. Lauridsen. Hitchin's connection in metaplectic quantization. , 3(3):327--357, 2012.
J. Andersen. Hitchin's connection, Toeplitz operators, and symmetry invariant deformation quantization. , 3(3):293--325, 2012.
S. Axelrod. . Princeton University, 1991.
T. Baier, M. Bolognesi, J. Martens, and C. Pauly. The Hitchin connection in arbitrary characteristic. , 22(1):449--492, 2023.
V. Balaji, I. Biswas, and D. S. Nagaraj. Ramified $G$-bundles as parabolic bundles. , 18(2):123--138, 2003.
S. Bauer. Parabolic bundles, elliptic surfaces and SU(2)-representation spaces of genus zero Fuchsian groups. .
A. Beauville and Y. Laszlo. Conformal blocks and generalized theta functions. , 164:385--419, 1994.
A. Beilinson and V. Schechtman. Determinant bundles and Virasoro algebras. , 118:651--701, 1988.
A. Bertram. Generalized SU(2)-theta functions. , 113(1):351--372, 1993.
U. Bhosle. Parabolic vector bundles on curves. , 27(1-2):15--22, 1989.
S. Bloch and H. Esnault. Relative algebraic differential characters. , 1999.
O. Biquard. Fibrés paraboliques stables et connexions singulieres plates. , 119(2):231--257, 1991.
I. Biswas, S. Mukhopadhyay, and R. Wentworth. Geometrization of the TUY/WZW/KZ connection. , 2021.
I. Biswas, S. Dumitrescu, S. Heller, and C. Pauly. Infinitesimal deformations of parabolic connections and parabolic opers. , 2022.
I. Biswas, S. Mukhopadhyay, and R. Wentworth. A Hitchin connection on nonabelian theta functions for parabolic G-bundles. (0), 2023.
I. Biswas, S. Mukhopadhyay, and R. Wentworth. A parabolic analog of a theorem of Beilinson and Schechtman. , 2023.
I. Biswas and N. Raghavendra. Determinants of parabolic bundles on Riemann surfaces. , 103(1):41--71, 1993.
H. Boden and K. Yokogawa. Rationality of moduli spaces of parabolic bundles. , 59(2):461--478, 1999.
M. Bjerre. The Hitchin connection for the quantization of the moduli space of parabolic bundles on surfaces with marked points. , 2018.
M. Crampin and D. J. Saunders. Projective connections. , 57(2):691--727, 2007.
J.M. Drezet and M.S. Narasimhan. Groupe de Picard des variétés de modules de fibrés semi-stables sur les courbes algébriques. , 97:53--94, 1989.
H. Esnault and I-Hsun Tsai. Determinant bundle in a family of curves after Beilinson and Schechtman. , 211(2):359--363, 2000.
G. Faltings. Stable G-bundles and projective connections. , 2(3):507--568, 1993.
B. Fantechi, L. Göttsche, L. Illusie, S. Kleiman, N. Nitsure, and A. Vistoli. , volume 123. American Mathematical Society Providence, RI, 2005.
B. Van Geemen and A.J de Jong. On Hitchin's connection. , 11(1):189--228, 1998.
R. Hartshorne. , volume 52. Springer Science & Business Media, 2013.
N. Hitchin. Stable bundles and integrable systems. , 54(1):91--114, 1987.
N. Hitchin. Flat connections and geometric quantization. , 131:347--380, 1990.
N.J. Hitchin. The symplectic geometry of moduli spaces of connections and geometric quantization. , 102:159--174, 1990.
F. Knudsen and D. Mumford. The projectivity of the moduli space of stable curves I: Preliminaries on "det" and "div". , 39(1):19--55, 1976.
S. Lang. A. Grothendieck, Éléments de géométrie algébrique. , 67(6):239--246, 1961.
Y. Laszlo. Hitchin's and WZW connections are the same. , 49(3):547--576, 1998.
Y. Laszlo and C. Sorger. The line bundles on the moduli of parabolic $G$-bundles over curves and their sections. In *Annales scientifiques de l'Ecole normale supérieure*, volume 30, pages 499--525, 1997.
E. Martinengo. . PhD thesis, Sapienza Universita di Roma, 2009.
M. Maruyama and K. Yokogawa. Moduli of parabolic stable sheaves. , 293:77--99, 1992.
V. Mehta and C. Seshadri. Moduli of vector bundles on curves with parabolic structures. , 248:205--239, 1980.
M. Narasimhan and T. Ramadas. Factorisation of generalised theta functions. i. , 114(1):565--623, 1993.
C. Pauly. Espaces de modules de fibrés paraboliques et blocs conformes. , 85(1):217--235, 1996.
C. Pauly. Fibrés paraboliques de rang 2 et fonctions thêta généralisées. , 228:31--50, 1998.
D. Quillen. Determinants of Cauchy-Riemann operators on Riemann surfaces. , 19(1):37--41, 96, 1985.
Z. Ran. Jacobi cohomology, local geometry of moduli spaces, and Hitchin connections. , 92(3):545--580, 2006.
L. Schaposnik. Spectral data for G-Higgs bundles. , 2013.
P. Scheinost and M. Schottenloher. Metaplectic quantization of the moduli spaces of flat and parabolic bundles. , 466:145--219, 1995.
E. Sernesi. , volume 334. Springer Science & Business Media, 2007.
C. Seshadri. Moduli of vector bundles with parabolic structures. , 83, 1977.
C. Seshadri. Fibrés vectoriels sur les courbes algébriques. , 96:1--209, 1982.
C. Seshadri. Moduli of $\pi$-vector bundles over an algebraic curve. , pages 139--260, 2011.
C. Simpson. Harmonic bundles on noncompact curves. , 3(3):713--770, 1990.
A. Singh. Differential operators on Hitchin variety. , 566:361--373, 2021.
X. Sun and I. Tsai. Hitchin's connection and differential operators with values in the determinant bundle. , 67(2):335--376, 2004.
A. Tsuchiya, K. Ueno, and Y. Yamada. Conformal field theory on universal family of stable curves with gauge symmetries. In *Integrable Sys Quantum Field Theory*, pages 459--566. Elsevier, 1989.
G.E Welters. Polarized abelian varieties and the heat equations. , 49(2):173--194, 1983.
E. Witten. Quantum field theory and the Jones polynomial. , 121(3):351--399, 1989.
K. Yokogawa. Moduli of stable pairs. , 31(1):311--327, 1991.
K. Yokogawa. Compactification of moduli of parabolic sheaves and moduli of parabolic Higgs sheaves. , 33(2):451--504, 1993.
K. Yokogawa. Infinitesimal deformation of parabolic Higgs sheaves. , 6(1):125, 1995.
[^1]: Note that we don't know how the generalized determinant bundle and the parabolic determinant bundle are related.
[^2]: This corresponds to parabolic bundles with trivial determinant.
[^3]: See also [@mehta1980moduli], [@bhosle1989parabolic] and [@BR93].
[^4]: See [@biswas2023BS] for a parabolic analog using another definition of parabolic Atiyah algebroid.
[^5]: See [@fantechi2005fundamental] for the definition of relative Picard groups.
[^6]: The isomorphism holds because we deal with locally free sheaves.
[^7]: *The Kodaira-Spencer map of the family of marked curves is an isomorphism.*
| arxiv_math | {
"id": "2310.02813",
"title": "Parabolic Hitchin connection",
"authors": "Zakaria Ouaras",
"categories": "math.AG",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We consider the stochastic control problem of the *shallow lake* and continue the work of [@KLS] in three directions. First, we generalise the characterisation of the value function as the viscosity solution of a well-posed problem to include more general recycling rates. Then, we prove approximate optimality under bounded controls and we establish quantitative estimates. Finally, we implement a convergent and stable numerical scheme for the computation of the value function to investigate properties of the optimally controlled stochastic shallow lake. This approach allows us to derive tail asymptotics for the invariant distribution and to extend results of [@GKW] beyond the small noise limit.\
*AMS 2010 Mathematics Subject Classification: 93E20, 60H30, 49L25*\
*Keywords: Shallow Lake, Viscosity solution, Optimal stochastic control, Skiba point*
author:
- Angeliki Koutsimpela
- Michail Loulakis
bibliography:
- reference.bib
title: On the optimally controlled stochastic shallow lake
---
# Introduction
The shallow lake problem is a well-known problem in environmental economics with great mathematical interest. Pollution of shallow lakes is caused by human activity, e.g. the use of fertilisers and the increased inflow of waste water from industries and human settlements, and is usually quantified by the concentration of phosphorus. The amount of phosphorus in algae, $x(t)$, is usually modelled by the non-linear stochastic differential equation: $$\label{sldyn}
\begin{cases} dx(t)= \left( u(t)-b x(t)+r\big(x(t)\big)\right) dt+\sigma x(t)dW_t, & \\
x(0) = x\ge 0. & \end{cases}$$ The first term, $u:\; [0,\infty)\rightarrow (0,\infty)$, in the drift part of the dynamics, represents the exterior load of phosphorus as a result of human activities. The second term is the rate of loss $b x(t)$, which is due to sedimentation, outflow and sequestration in other biomass. The third term, $r\big(x(t)\big)$, is the rate of recycling of phosphorus on the bed of the lake. This term is assumed to be a sigmoid function (see [@carp99]) and the typical choice in the literature is the function $x\mapsto \frac{x^2}{x^2+1}$. Uncertainty in the rate of loss enters the model through a linear multiplicative Gaussian white noise with intensity $\sigma$.
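For concreteness, the dynamics above can be simulated with a straightforward Euler--Maruyama discretisation. The following minimal Python sketch (not part of the original paper) assumes the typical recycling rate $r(x)=x^2/(1+x^2)$, a constant exterior loading, and illustrative parameter values; the clipping at zero is only a numerical safeguard for the discretised paths, since the continuous paths are nonnegative.

```python
import numpy as np

def simulate_lake(x0, u, b=0.65, sigma=0.1, T=50.0, dt=1e-3, seed=0):
    """Euler-Maruyama discretisation of dx = (u(x) - b*x + r(x)) dt + sigma*x dW
    with the typical recycling rate r(x) = x^2/(1+x^2)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = u(x[k]) - b * x[k] + x[k]**2 / (1.0 + x[k]**2)
        noise = sigma * x[k] * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = max(x[k] + drift * dt + noise, 0.0)   # clip: numerical safeguard only
    return x

# A constant loading kept over the whole horizon (illustrative value).
path = simulate_lake(x0=0.1, u=lambda x: 0.05)
print("final state:", path[-1])
```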
The economics of the lake arise from its conflicting services to the community. On the one hand, a clear lake is an ecological resource of recreational activities. On the other hand, the lake can serve as a sink for agricultural and industrial waste. When the users of the lake cooperate, the loading strategy, $u$, can be used as a control to maximise the benefit from the lake. Assuming an infinite horizon, this benefit is typically chosen as $$\label{Jxu}
J(x;u)=\mathbb{E}_x \left[\int_0^\infty e^{-\rho t}\big(\ln u(t)-cx^2(t)\big)\, dt \right],$$ where $\rho >0$ is the discount rate and $x(\cdot)$ is the solution to ([\[sldyn\]](#sldyn){reference-type="ref" reference="sldyn"}), for a given exterior loading (control) $u(\cdot)$, and initial state, $x\ge 0$. The total benefit of the lake increases with the loading of phosphorus as $\ln u(t)$, but at the same time decreases with the existing amount of phosphorus as $-cx^2(t)$, due to implied deterioration of its ecological services. The positive parameter $c$ reflects the relative weight of this component.
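The total benefit of a *given* control can be estimated by Monte Carlo. The sketch below (again with illustrative parameter values, not taken from the paper) computes $J(x;u)$ for a constant loading, truncating the infinite horizon at a finite time $T$.

```python
import numpy as np

def benefit_estimate(x0, u_const, b=0.65, c=0.5, rho=0.03, sigma=0.1,
                     T=200.0, dt=0.01, n_paths=500, seed=1):
    """Monte Carlo estimate of J(x; u) for the constant loading u(t) = u_const,
    with the infinite horizon truncated at T (here e^{-rho*T} ~ 2.5e-3)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.full(n_paths, float(x0))
    J = np.zeros(n_paths)
    for k in range(n):                      # one Euler-Maruyama step for all paths at once
        J += np.exp(-rho * k * dt) * (np.log(u_const) - c * x**2) * dt
        drift = u_const - b * x + x**2 / (1.0 + x**2)
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = np.maximum(x + drift * dt + sigma * x * dW, 0.0)
    return J.mean()

# The benefit of this particular constant loading is, up to truncation and Monte Carlo
# error, a lower bound for the value function defined below.
print(benefit_estimate(x0=0.5, u_const=0.1))
```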
For the optimal management of the lake when the initial state is $x$, we need to maximise the total benefit over a set of admissible controls, $\mathfrak{U}_x$. Thus, the value function of the problem is $$\label{ihvf}
V(x) = \underset{u\in \mathfrak{U}_x} {\sup}\ J(x;u).$$
Therefore, the shallow lake problem becomes a problem of control theory or, when the users of the lake compete, a differential game ([@carp99; @Brock; @Xep]).
The deterministic ($\sigma=0$) version of the problem has been extensively studied, not only as an interesting problem in environmental economics but also as a prototype example where a sigmoid term in the dynamics may lead to the existence of multiple equilibria and associated basins of attraction. Depending on the parameters, the shallow lake problem may have two different equilibria and a Skiba point, i.e., an initial state for which distinct optimal solutions to [\[ihvf\]](#ihvf){reference-type="eqref" reference="ihvf"} exist. The leftmost (oligotrophic) equilibrium point of the lake system corresponds to a lake with low concentration of phosphorus, while the rightmost one (eutrophic) corresponds to a lake with high concentration of phosphorus. At the Skiba point, there are two different optimal strategies, each one driving the system to a different equilibrium, and the value function is not differentiable there.
The range of parameters for which Skiba points appear has been explored in [@W03] and [@WBif]. Properties of the value function of the deterministic shallow lake problem have been proved in [@KosZoh]. The existence of an optimal control is usually taken as a hypothesis in the literature and the optimal dynamics of the lake is studied mostly through the necessary conditions determined by the Pontryagin maximum principle, and the equilibrium points of the corresponding dynamical system (see [@W03; @Xep]). A rigorous answer to the existence question was given in [@Bar20] and [@Bar21], albeit under restrictions that do not fully cover the range of the parameters for which Skiba points are present.
Over the last decade, there has been increasing interest in the stochastic ($\sigma\neq 0$) version of the problem. Deterministic systems with two equilibrium points and one Skiba point have a fundamentally different behaviour from their stochastic counterparts. Specifically, random fluctuations drive the stochastic system from one equilibrium point to the other (metastability). In the context of the shallow lake, this phenomenon is studied numerically in [@GKW], where the value function in [\[ihvf\]](#ihvf){reference-type="eqref" reference="ihvf"} is approximated for small $\sigma$ based on heuristic methods of perturbation analysis. In [@KLS] the authors characterise the value function of the stochastic shallow lake problem as the unique (in a suitable class) state-constraint viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation $$\label{OHJB}
\rho V-\left(r(x)-bx\right)V_x+\left( \ln(-V_x)+cx^{2}+1\right)-\dfrac{1}{2}\sigma^{2}x^{2}V_{xx}=0,$$ and analytically derive properties of the value function in the case $r(x)=x^2/(1+x^2)$.
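The logarithmic term in the HJB equation above records the pointwise maximisation over the loading: for $p=V_x<0$ one has $\sup_{u>0}\{u\,p+\ln u\}=-1-\ln(-p)$, attained at $u^*=-1/p$, which is the optimal feedback appearing later in the optimally controlled dynamics. The following check (not from the paper; the value of $p$ is arbitrary) verifies this elementary identity numerically.

```python
import numpy as np

# Pointwise maximisation behind the logarithmic term of the HJB:
#   for p = V_x < 0,  sup_{u>0} { u*p + ln(u) } = -1 - ln(-p),  attained at u* = -1/p.
p = -0.7                                    # illustrative value of V_x
u = np.linspace(1e-4, 20.0, 2_000_000)      # fine grid containing u* = -1/p
print(np.max(u * p + np.log(u)), -1.0 - np.log(-p))   # the two numbers agree
```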
The present work continues the work of [@KLS]. First, it extends the results therein to include much more general recycling rates as made precise in Assumption [Assumption 1](#r){reference-type="ref" reference="r"}, as well as the penalty parameter $c$ in [\[Jxu\]](#Jxu){reference-type="eqref" reference="Jxu"}, which cannot be scaled away with a suitable change of variables. Then, it adds some new analytical results regarding the approximate optimality of bounded controls (Lemma [Lemma 2](#Napprox){reference-type="ref" reference="Napprox"}) and the tail behaviour of the invariant distribution of the optimally controlled stochastic lake (Proposition [Proposition 5](#asympt){reference-type="ref" reference="asympt"}). Finally, we use a convergent, monotone, Barles-Souganidis scheme to compute the value function of [\[ihvf\]](#ihvf){reference-type="eqref" reference="ihvf"} as the relevant viscosity solution of [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}. This approach relies heavily on the aforementioned rigorous results and, because it does not invoke perturbation expansions, it makes it possible to investigate numerically properties of the optimally controlled stochastic shallow lake beyond the small noise regime. As an example, we present the effect of noise intensity, $\sigma$, of the penalty parameter $c$, and of the discount rate, $\rho$, on the number and location of modes of the invariant distribution. We also present typical paths of the optimally controlled stochastic shallow lake showing the transitions between oligotrophic and eutrophic states, as well as statistics of the transition times. Analytical results are collected in the following section and numerical results are presented in Section [3](#numerics){reference-type="ref" reference="numerics"} of this article.
# Analytical Results
In this section, we introduce the details of the model, some necessary notation, and we present the rigorous part of this work. The section contains two kinds of results: generalisations of those in [@KLS] and new ones. Regarding the proofs of the former, naturally, some arguments do not depend on the precise form of the recycling rate and go through verbatim simply by substituting $x^2/(1+x^2)$ by $r(x)$, while others need to be modified to a lesser or greater extent. For the sake of completeness, we transcribe here most of the results in [@KLS] in their generalised form but, to avoid cumbersome repetitions, we only present those proofs that differ from the original ones. To keep the flow of results uninterrupted, statements are collected in subsection [2.1](#statement){reference-type="ref" reference="statement"}, while all proofs provided are collected in subsection [2.2](#proofs){reference-type="ref" reference="proofs"}.
We assume that there exists a filtered probability space $( \Omega, \mathcal{F},\{ \mathcal{F}_t\}_{t\ge 0},\mathbb{P})$ satisfying the usual conditions, and a Brownian motion $\{W_t: t\ge 0\}$ defined on that space. An admissible control $u(\cdot)\in\mathfrak{U}_x$ is an $\mathcal{F}_t$-adapted, $\mathbb{P}$-a.s. locally integrable process with values in $U=(0,\infty)$, satisfying $$\label{ac_constraint}
\mathbb{E} \left[ \int_{0}^{\infty}e^{-\rho t}\ln u(t)dt \right] < \infty,$$ such that the problem ([\[sldyn\]](#sldyn){reference-type="ref" reference="sldyn"}) has a unique strong solution $x(\cdot)$. We will also find useful the positive processes $$\label{ZM}
Z_t=e^{\sigma W_t-\left(b+\sigma^2/2\right)t} \quad \text{ and } \quad M_t(u)=\int \limits_{0}^{t}\frac{Z_t}{Z_s}u(s)ds.$$ Regarding the recycling rate, $r$, the following assumptions are made throughout the paper. Note that they are satisfied by most sigmoid functions that are typically used in applications; a quick numerical check for the typical choice is sketched right after the assumption.
**Assumption 1**. *The rate of recycling $r(x)$ satisfies the following:*
1. *$r$ is locally Lipschitz and nondecreasing.*

2. *$r(0)=0$ and $r(x)<(b+\rho)x$ close to $0$.*

3. *$a:=\lim \limits_{x\rightarrow \infty}r(x)<\infty$.*
4. *The limit $\lim \limits_{x\rightarrow \infty}(a-r(x))x=:C$ exists and is a finite, necessarily nonnegative, real number.*
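As a quick sanity check (not part of the paper), the sketch below verifies Assumption 1 numerically for the typical choice $r(x)=x^2/(1+x^2)$, with illustrative values of $b$ and $\rho$; in this case $a=1$ and $C=0$.

```python
import numpy as np

# Numerical check of Assumption 1 for the typical recycling rate r(x) = x^2/(1+x^2),
# with illustrative values b = 0.65 and rho = 0.03.
r = lambda x: x**2 / (1.0 + x**2)
b, rho = 0.65, 0.03

x_small = np.linspace(1e-6, 0.5, 1000)
print("r(x) < (b+rho)*x near 0:", bool(np.all(r(x_small) < (b + rho) * x_small)))

a = 1.0                                     # a = lim r(x) for this choice of r
x_large = np.array([1e2, 1e4, 1e6])
print("(a - r(x)) * x for large x:", (a - r(x_large)) * x_large)   # tends to C = 0
```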
## Statements {#statement}
Proposition [Proposition 1](#propdyn){reference-type="ref" reference="propdyn"} collects some useful facts about the set of admissible controls and path properties of the controlled system [\[sldyn\]](#sldyn){reference-type="eqref" reference="sldyn"}. Precisely, the set of admissible controls does not depend on the initial state, while paths of [\[sldyn\]](#sldyn){reference-type="eqref" reference="sldyn"} remain nonnegative and satisfy a comparison principle.
**Proposition 1**.
1. *[\[positive\]]{#positive label="positive"} If $x\geq 0$, $u\in \mathfrak{U}_x,$ and $x(\cdot)$ is the solution to ([\[sldyn\]](#sldyn){reference-type="ref" reference="sldyn"}), then $\mathbb{P}\big[x(t)\ge 0,\ \forall t\ge 0\big]=1.$ In particular, $\mathbb{P}\big[x(t)\ge M_t(u),\ \forall t\ge 0\big]=1.$*
2. *[\[uxisu\]]{#uxisu label="uxisu"} For all $x,y\ge 0$, $\mathfrak{U}_x=\mathfrak{U}_y=:\mathfrak{U}.$*
3. *[\[monotone\]]{#monotone label="monotone"} Suppose $x(\cdot),\ y(\cdot)$ satisfy ([\[sldyn\]](#sldyn){reference-type="ref" reference="sldyn"}) with controls $u_1,\ u_2\in\mathfrak{U}$, respectively, and $x(0)=x$, $y(0)=y$. If $x\le y$ and $\mathbb{P}\big[u_1(t)\le u_2(t),\ \forall t\ge 0\big]=1$, then $$\mathbb{P}\big[y(t)-x(t)\ge (y-x)Z_t,\ \forall t\ge 0\big]=1.$$*
Propositions [Proposition 2](#propv){reference-type="ref" reference="propv"}, [Proposition 3](#vprop){reference-type="ref" reference="vprop"}, and [Proposition 4](#st_eq){reference-type="ref" reference="st_eq"} describe properties of the value function, $V$. Note that these properties are derived directly from the definition of $V$ in [\[ihvf\]](#ihvf){reference-type="eqref" reference="ihvf"} by means of stochastic analysis, so they are not a consequence of any differential equation, such as [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}, that $V$ may satisfy. On the contrary, they are used as a crucial input in the characterisation of the value function as a viscosity solution to [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}, as they ensure that the associated Hamiltonian of the control problem is finite, and they outline a class of functions among which there is uniqueness of solutions to [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}, thus singling out the relevant one. Notice also that Propositions [Proposition 2](#propv){reference-type="ref" reference="propv"} and [Proposition 3](#vprop){reference-type="ref" reference="vprop"} do not require $\sigma>0$, so they can be used in the deterministic problem, as well. Let $$\label{K}
A:=\frac{c}{\rho+2b-\sigma^2}.$$
**Proposition 2**.
1. *[\[restr1\]]{#restr1 label="restr1"} $V(x)> -\infty$ if and only if $\sigma^2< \rho+2b$.*
2. *[\[Vdec\]]{#Vdec label="Vdec"} The function $x\mapsto V(x)+Ax^2$, where $A$ is defined in [\[K\]](#K){reference-type="eqref" reference="K"}, is decreasing on $[0,+\infty)$.*
3. *[\[V0fin\]]{#V0fin label="V0fin"} The value function at zero satisfies $V(0) \le\frac{1}{\rho}\ln\left(\frac{b+\rho}{\sqrt{2ec}}\right).$*
4. *[\[specialdpp\]]{#specialdpp label="specialdpp"} Fix $x_1,x_2\in[0,\infty)$ with $x_1<x_2$, and, for $u\in\mathfrak{U}$, let $x(\cdot)$ be the solution to ([\[sldyn\]](#sldyn){reference-type="ref" reference="sldyn"}) with control $u$ and $x(0)=x_1$. If $\tau_u$ is the hitting time of $x(\cdot)$ on $[x_2,+\infty)$, that is, $\tau_u=\inf\{t\ge 0: x(t)\ge x_2\},$ then $$\label{babydpp}
V(x_1)=\sup_{u\in\mathfrak{U}}\mathbb{E}\left[\int_0^{\tau_u} e^{-\rho t}\big(\ln u(t)-cx^2(t)\big)\, dt+e^{-\rho\tau_u}V(x_2)\right].$$*
**Proposition 3**. *Suppose $0\leq \sigma^2<\rho+2b$.*
1. *[\[vprop1\]]{#vprop1 label="vprop1"} There exist constants $K_1, K_2>0$, such that, for any $x\ge 0$, we have $$\label{Vbounds}
K_1\ \le\ V(x)+A\left(x+\frac{a}{b+\rho}\right)^2+\frac{1}{\rho}\ln \left(x+\frac{a}{b+\rho}\right)\ \le\ K_2.$$*
2. *[\[vprop2\]]{#vprop2 label="vprop2"} There exist a constant $C_1>0$ and a function $c: \; [0,+\infty)\rightarrow (0,\infty)$ with $\lim \limits_{x\rightarrow 0}c(x)=e^{-(\rho V(0)+1)}$ such that, for any $x_1,x_2\in [0,+\infty)$ with $x_1<x_2$, $$\label{DVbounds}
\frac{V(x_2)-V(x_1)}{x_2-x_1}\le -c(x_2) \le -C_1.$$*
**Proposition 4**. *Suppose $0< \sigma^2<\rho+2b$.\
There exists an increasing function $L_{\sigma}: [0,\infty) \rightarrow \mathbb{R}$ with $\lim \limits_{x\rightarrow 0} L_{\sigma}(x)=e^{-(\rho V(0)+1)}$ such that, for any $x_1,x_2\in [0,\infty)$ with $x_1<x_2,$ $$\frac{V(x_2)-V(x_1)}{x_2-x_1}\geq -L_{\sigma}(x_2)$$*
An immediate consequence of Propositions [Proposition 3](#vprop){reference-type="ref" reference="vprop"}(ii) and [Proposition 4](#st_eq){reference-type="ref" reference="st_eq"} is the following corollary.
**Corollary 1**. *If $0< \sigma^2<\rho+2b$, $V$ is differentiable at zero and $$\ln\big(-V'(0)\big)+\rho V(0)+1=0.
\label{bc}$$*
**Remark 1**. *Equation [\[bc\]](#bc){reference-type="eqref" reference="bc"} can be perceived as a nonlinear mixed boundary condition satisfied by the value function. It remains true even when $\sigma=0$, but the proof is more involved and is part of a forthcoming work on the deterministic shallow lake problem. A discretised version of it is used in the numerical scheme of Section [3](#numerics){reference-type="ref" reference="numerics"}.*
Theorem [Theorem 1](#constrained_vs){reference-type="ref" reference="constrained_vs"} states that the value function of the stochastic shallow lake problem is a constrained viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"} on $[0,+\infty)$, i.e. a viscosity subsolution on $[0,+\infty)$ and a viscosity supersolution on $(0,+\infty)$. Theorem [Theorem 2](#comparison){reference-type="ref" reference="comparison"} is a comparison principle that guarantees uniqueness of such solutions in a suitable class of functions. In view of Proposition [Proposition 3](#vprop){reference-type="ref" reference="vprop"} and Corollary [Corollary 1](#coro1){reference-type="ref" reference="coro1"}, Theorems [Theorem 1](#constrained_vs){reference-type="ref" reference="constrained_vs"} and [Theorem 2](#comparison){reference-type="ref" reference="comparison"} characterise the value function as the unique constrained viscosity solution of [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"} on $[0,+\infty)$ that satisfies all the conditions of Theorem [Theorem 2](#comparison){reference-type="ref" reference="comparison"}.
**Theorem 1**. *If $0<\sigma^2 < \rho +2b$, the value function $V$ is continuous on $[0,\infty)$ and a constrained viscosity solution of equation [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"} in $[0,\infty)$.*
**Theorem 2**. *Let $0<\sigma^2 < \rho +2b$ and assume that $u\in C([0,\infty))$ is a bounded from above strictly decreasing subsolution of [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"} in $[0, \infty)$ and $v\in C([0,\infty))$ is a bounded from above strictly decreasing supersolution of [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"} in $(0, \infty)$ such that $v \geq -c_1(1+x^2)$ and $Du\leq -\frac{1}{c_2}$ in the viscosity sense, for $c_1$, $c_2$ positive constants. Then $u \leq v$ in $[0, \infty)$.*
An important ingredient for the proof of Theorem [Theorem 2](#comparison){reference-type="ref" reference="comparison"} is the following lemma [Lemma 1](#u_v){reference-type="ref" reference="u_v"}.
**Lemma 1**. *Suppose $0\leq \sigma^2 <\rho+2b$ and $u$, $v$ satisfy the assumptions of Theorem [Theorem 2](#comparison){reference-type="ref" reference="comparison"}. Then $\psi=u-v$ is a subsolution of $$\label{u-v}
\rho \psi +bxD\psi-\left(a+c^*\right)|D\psi| -\frac{1}{2}\sigma^2x^2 D^2 \psi = 0 \,\, \mbox{ in } \,\, [0, \infty).$$*
**Remark 2**. *Theorems [Theorem 1](#constrained_vs){reference-type="ref" reference="constrained_vs"} and [Theorem 2](#comparison){reference-type="ref" reference="comparison"} remain valid even when $\sigma=0$. In fact, uniqueness can be established in a slightly larger class. However, the proofs in the deterministic case are quite different and, like Corollary [Corollary 1](#coro1){reference-type="ref" reference="coro1"}, they are part of a forthcoming work.*
The stability property of viscosity solutions yields the following corollary.
**Corollary 2**. *As $\sigma \rightarrow 0$, the value function $V=V_{\sigma}$ defined by ([\[ihvf\]](#ihvf){reference-type="ref" reference="ihvf"}) converges locally uniformly to the constrained viscosity solution $V_0$ of the deterministic shallow lake equation in $[0, \infty)$, $$\rho V_0=\left( r(x)-bx\right)V'_0-\Big( \ln(-V'_0)+cx^{2}+1\Big).$$*
Finally, the next theorem describes the exact asymptotic behaviour of $V$ at $+\infty$. In Section [3](#numerics){reference-type="ref" reference="numerics"}, we present and implement a monotone numerical scheme approximating ([\[ihvf\]](#ihvf){reference-type="ref" reference="ihvf"}). Relation ([\[asympto_behav\]](#asympto_behav){reference-type="ref" reference="asympto_behav"}) is crucial for the accurate computation of $V$ in this setting, because it suggests the boundary condition at the right end of the computational domain.
**Theorem 3**. *As $x\rightarrow \infty$, $$\label{asympto_behav}
V(x)= -A\left(x+\frac{a}{b+\rho}\right)^2-\frac{1}{\rho}\ln \left[2A(x+\frac{a}{b+\rho})\right]+K+o(1),$$ where $$\label{K1}
K=\frac{1}{\rho}\left(\frac{2b+\sigma^2}{2\rho}-\frac{Aa^2(\rho+2b)}{(b+\rho)^2} -1+2AC\right).$$*
We now proceed to the statement of two new results. Lemma [Lemma 2](#Napprox){reference-type="ref" reference="Napprox"} states that we may essentially achieve the optimal total benefit using a bounded control $u$. For $N>0$, define the set of $N-$bounded admissible controls $$\mathfrak{U}_N=\{u\in \mathfrak{U}: \; |u(t,\omega)|\leq N, \; \forall t\geq 0, \; \omega \in \Omega\},$$ and the associated value function $$V_N(x)=\sup_{u\in \mathfrak{U}_N} \mathbb{E} \left[ \int \limits_0^{\infty} e^{-\rho t} \left( \ln u(t)-cx^2(t) \right)dt \right]$$ where $x(\cdot)$ is the solution to [\[sldyn\]](#sldyn){reference-type="eqref" reference="sldyn"} with $x(0)=x$.
**Lemma 2**. *For all $x\geq 0$ we have $$0\leq V(x)-V_N(x)\leq \frac{(\rho+b)^2}{4\rho cN^2}.$$*
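To get a sense of scale, the bound of the lemma is easy to evaluate; the parameter values in the sketch below are illustrative and not taken from the paper.

```python
# Size of the suboptimality gap from Lemma 2 for illustrative parameters.
b, rho, c = 0.65, 0.03, 0.5
for N in (1, 5, 10, 50):
    gap = (rho + b) ** 2 / (4 * rho * c * N ** 2)
    print(f"N = {N:3d}:  V - V_N <= {gap:.4f}")
```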
For $\sigma>0$, the ellipticity of the HJB equation [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"} in $(0, \infty)$ induces extra regularity for the function V in $(0, \infty)$. Hence, the optimal dynamics for the shallow lake problem are described by $$\label{opt}
\begin{cases}
dx_*(t)=\left(-\frac{1}{V_\sigma'(x_*(t))}-bx_*(t)+r\big(x_*(t)\big)\right)dt+\sigma x_*(t)dW_t & \\
x_*(0)=x &
\end{cases}$$ Standard results in diffusion theory (e.g., Lemma 23.18 in [@kallenberg]) imply that the process $\{x_*(t):\ t\ge 0\}$ has an invariant distribution with density $f$. In particular, for any initial state, $x\ge 0$, and any Borel subset of $\mathbb{R}$, $A$, we have $$\lim_{t\to\infty} \mathbb{P}\big[x_*(t)\in A\big]=\int_A f(x)\ dx.$$ The last proposition describes the invariant distribution of $x_*$ and the precise asymptotics of its tails, as $x\to 0$ and as $x\to\infty$.
**Proposition 5**. *The density, $f$, of the stationary distribution of the optimal dynamics [\[opt\]](#opt){reference-type="eqref" reference="opt"} is $$\label{inv1}
f(x)=\frac{1}{Z} x^{-2\big(1+\frac{b}{\sigma^2}\big)}\, e^{-\frac{2}{\sigma^2}\Phi_\sigma (x)},$$ where $Z$ is a normalising constant and $$\Phi_{\sigma}(x)=\int \limits_{x}^{\infty}\Big(-\frac{1}{V_\sigma'(u)} + {r(u)}\Big)\frac{du}{u^2}, \qquad x>0.$$ In particular, $$\label{tails}
\lim_{x\to 0} x\ \Phi_{\sigma}(x) = \frac{1}{ |V_\sigma'(0)|}, \qquad\text{and}\qquad \lim \limits_{x\rightarrow \infty}\Phi_\sigma (x)=0.$$*
## Proofs
In this subsection, we present the proofs that are not straightforward modifications of those in [@KLS].\
**Proof of Proposition [\[V0fin\]](#V0fin){reference-type="ref" reference="V0fin"}**\
Based on Proposition [Proposition 1](#propdyn){reference-type="ref" reference="propdyn"}(i), for any $u\in\mathfrak{U}$, $x(t)\ge M_t(u)$. In addition, for any positive $\mathbb{P}$-a.s. locally integrable $\mathcal{F}_t$-adapted process $f$, by Lemma A.1(i) in [@KLS] we have that $$\label{Mt}
\mathbb{E}\left[\int \limits_0^{\infty} e^{-\rho t}M_t(f)dt\right]=\frac{1}{\rho+b}\mathbb{E}\left[\int \limits_0^{\infty}e^{-\rho t}f(t)dt\right].$$ Therefore, using first Jensen's inequality and then [\[Mt\]](#Mt){reference-type="eqref" reference="Mt"}, we find $$\begin{aligned}
\frac{\ln c}{2\rho}+\mathbb{E}\left[\int_0^\infty e^{-\rho t}\ln u(t)\ dt\right] &\le\frac{1}{\rho}\ln\mathbb{E}\left[\int_0^\infty \rho e^{-\rho t}\sqrt{c}u(t)\ dt\right] \\
&=\frac{1}{\rho}\ln\mathbb{E}\left[\int_0^\infty \rho(\rho+b) e^{-\rho t}\sqrt{c}M_t(u)\ dt\right] \\
&\le \frac{\ln(b+\rho)}{\rho}+\frac{1}{\rho}\ln\mathbb{E}\left[\int_0^\infty \rho e^{-\rho t}\sqrt{c}x(t)\ dt\right] \\
&\le \frac{\ln(b+\rho)}{\rho}+\frac{1}{2\rho}\ln\mathbb{E}\left[\int_0^\infty \rho e^{-\rho t}cx^2(t)\ dt\right].\end{aligned}$$
In view of ([\[ac_constraint\]](#ac_constraint){reference-type="ref" reference="ac_constraint"}), it suffices to consider $u\in\mathfrak{U}$ such that $D=\mathbb{E}\left[\int_0^\infty e^{-\rho t}cx^2(t)\ dt\right]<\infty$. Then, $$\mathbb{E} \left\{ \int_{0}^{\infty}e^{-\rho t}\big[\ln u(t)-cx^{2}(t)\big]dt \right\}\le \frac{\ln(b+\rho)}{\rho}+\frac{\ln(\rho D)}{2\rho}-D-\frac{\ln c}{2\rho}\le\frac{1}{\rho}\ln\left(\frac{b+\rho}{\sqrt{2ec}}\right).$$ The assertion follows by taking the supremum over admissible controls.$\Box$\
**Proof of Proposition [\[vprop1\]](#vprop1){reference-type="ref" reference="vprop1"}**\
*Proof of the lower bound:* It suffices to produce an admissible control that achieves a benefit $J(x;u)$ greater than or equal to the bound. For $x,y\in\mathbb{R}$, let us denote by $x\vee y=\max\{x,y\}$ and consider the feedback control $$\label{feedcon}
u(t)=\frac{1\vee x(t)}{1+x^2(t)}+a-r(x(t)).$$ An elementary variation of parameters argument for stochastic differential equations yields that $$\begin{aligned}
\label{xZ}
x(t)&=xZ_t+\int \limits_0^t \frac{Z_t}{Z_s} \left(u(s)+r(x(s)) \right)ds \\ &=xZ_t+\int_0^t \frac{Z_t}{Z_s}\ \Big(a+\frac{1\vee x(s)}{1+x^2(s)}\Big)ds\nonumber \\
&=xZ_t+aM_t(1)+\int_0^t \frac{Z_t}{Z_s}\ \frac{1\vee x(s)}{1+x^2(s)}ds.
\label{testX}\end{aligned}$$ We now square both sides to get $$\begin{aligned}
x^2(t)&=x^2Z_t^2+2axZ_tM_t(1)+a^2M_t^2(1)+\left(\int_0^t \frac{Z_t}{Z_s}\ \frac{1\vee x(s)}{1+x^2(s)}ds\right)^2\\
&\quad+2aM_t(1)\int_0^t \frac{Z_t}{Z_s}\ \frac{1\vee x(s)}{1+x^2(s)}ds+2\int_0^t \frac{Z_t^2}{Z_s}\ \frac{x \cdot 1\vee x(s)}{1+x^2(s)}ds.\end{aligned}$$ For the fourth and fifth term on the right-hand-side we simply use the inequality $\frac{1\vee x(s)}{1+x^2(s)}\le 1$. For the rightmost term we note that by ([\[testX\]](#testX){reference-type="ref" reference="testX"}) we have $x\le x(s)Z_s^{-1}$. It follows that $$x^2(t)\le x^2Z_t^2+2axZ_tM_t(1)+(a+1)^2M_t^2(1)+2\int_0^t \frac{Z_t^2}{Z_s^2}\ ds.
\label{testX2}$$ It is straightforward to check that $\mathbb{E}\big[Z_t^2\big]=\exp((\sigma^2-2b)t)$, while by Lemma A.1 (ii) in [@KLS], we have $$\label{ZMint}
\mathbb{E}\big[\int_0^\infty e^{-\rho t}Z_tM_t(1)dt\big]=\frac{A}{c(\rho+b)}.$$ Hence, we can find some constant $B$, such that $$\int_0^\infty e^{-\rho t}\mathbb{E}\big[cx^2(t)\big]\ dt \le A\big(x+\frac{a}{\rho+b}\big)^2+B.
\label{Xener}$$
On the other hand, using that for all $x\ge 0$, we have $a\ge r(x)$ and $\displaystyle{\frac{1\vee x}{1+x^2}\ge \frac{1}{1+x} }$, we find $$\begin{aligned}
\int_0^\infty e^{-\rho t}\mathbb{E}\big[\ln u(t)\big]\ dt &\ge-\int_0^\infty e^{-\rho t}\mathbb{E}\big[\ln\big(1+x(t)\big)\big]\ dt \\
&\ge -\frac{1}{\rho}\ln\left(\int_0^\infty \rho e^{-\rho t} \Big(1+\mathbb{E}\big[x(t)\big]\Big) dt\right)\\
&=-\frac{1}{\rho}\ln\left(1+\rho\int_0^\infty e^{-\rho t} \mathbb{E}\big[x(t)\big] dt\right),\end{aligned}$$ where in the penultimate step we have used Jensen's inequality.\
By ([\[testX\]](#testX){reference-type="ref" reference="testX"}) it follows that $$\displaystyle
\mathbb{E}\big[x(t)\big]\le x\mathbb{E}\big[Z_t\big]+(a+1)\mathbb{E}\big[M_t(1)\big]=xe^{-bt}+(a+1)\mathbb{E}\big[M_t(1)\big].$$ Hence, using Lemma A.1(i) in [@KLS], we obtain that the control $u$ in [\[feedcon\]](#feedcon){reference-type="eqref" reference="feedcon"} satisfies $$\int_0^\infty e^{-\rho t}\mathbb{E}\big[\ln u(t)\big]\ dt \ge -\frac{1}{\rho}\ln\left(1+\frac{\rho x}{\rho+b}+\frac{a+1}{\rho+b}\right).$$
The preceding estimate and ([\[Xener\]](#Xener){reference-type="ref" reference="Xener"}) together imply that, for some suitable constant $K_1$, $$\begin{aligned}
V(x)&\ge J(x;u) =\mathbb{E}\big[\int_0^\infty e^{-\rho t}\big(\ln u(t)-cx^2(t)\big)\ dt \big]\\
&\ge -A\left(x+\frac{a}{b+\rho}\right)^2-\frac{1}{\rho}\ln \left(x+\frac{a}{b+\rho}\right)+K_1.\end{aligned}$$
*Proof of the upper bound:* Fix $u\in\mathfrak{U}$. Equation [\[xZ\]](#xZ){reference-type="eqref" reference="xZ"} gives $$\begin{aligned}
x^2(t)&\ge x^2Z_t^2+2xZ_t^2\int_0^t\frac{1}{Z_s}\left(u(s)+r(x(s))\right) ds\\&=x^2Z_t^2+2xZ_tM_t(a+u)-\int_0^t\frac{Z_t^2}{Z_s}2x\left(a-r(x(s))\right) ds\\
&\geq x^2Z_t^2+2axZ_tM_t(1)+2xZ_tM_t(u)-\int_0^t\frac{Z_t^2}{Z_s^2}2x(s)\left(a-r(x(s))\right) ds.\end{aligned}$$
Since $\lim \limits_{x\rightarrow \infty} x(a-r(x))=C\in \mathbb{R}$, the continuous function $x\mapsto x(a-r(x))$ is bounded, say $x(a-r(x))\leq K$ for some positive constant $K$, so
we can further estimate $x^2(t)$ from below by $$\begin{aligned}
\label{mtuest}
x^2(t)&\ge x^2Z_t^2+2axZ_tM_t(1)+2xZ_tM_t(u)-2K\int_0^t \frac{Z_t^2}{Z_s^2} ds.\end{aligned}$$
Using the elementary inequality that $\ln a\le ab-\ln b-1$, which holds for all $a,b>0$, and Lemma A.1(ii) in [@KLS], we obtain, for some constant B, $$\begin{aligned}
&\int_0^\infty e^{-\rho t}\mathbb{E}\big[\ln u(t)\big]dt \le \mathbb{E}\left[\int_0^\infty e^{-\rho t} \Big\{2AxZ_tu(t)-\ln\big(2AxZ_t\big)\Big\}\ dt\right]\nonumber\\
&\qquad=\mathbb{E}\left[\int_0^\infty\hspace{-2mm} e^{-\rho t} 2cxZ_tM_t(u)\, dt\right]-\frac{\ln(2Ax) }{\rho}+\frac{2b+\sigma^2}{2\rho^2}\nonumber\\
&\qquad\le \mathbb{E}\left[\int_0^\infty \hspace{-2mm}e^{-\rho t} \Big(cx^2(t)-cx^2Z_t^2-2acxZ_tM_t(1)+2Kc\int_0^t\frac{Z_t^2}{Z_s^2}\, ds\Big)\, dt\right]-\frac{\ln x + B}{\rho}\nonumber,\end{aligned}$$ where in the last step we have used [\[mtuest\]](#mtuest){reference-type="eqref" reference="mtuest"} to estimate $Z_tM_t(u)$. By another application of [\[ZMint\]](#ZMint){reference-type="eqref" reference="ZMint"} we conclude that there exists $K_2>0$ such that, for every $u\in\mathfrak{U}$, $$J(x;u)\le -A\left(x+\frac{a}{b+\rho}\right)^2-\frac{1}{\rho}\ln \big(x+\frac{a}{\rho+b}\big)+K_2.$$
The assertion now follows by taking the supremum over $u\in\mathfrak{U}$. $\Box$\
**Proof of Proposition [\[vprop2\]](#vprop2){reference-type="ref" reference="vprop2"}**\
By Assumption [Assumption 1](#r){reference-type="ref" reference="r"}, there exists $\varepsilon>0$ such that $r(x)<(b+\rho)x,\; \forall x\in\; (0,\varepsilon]$. In view of Proposition [\[Vdec\]](#Vdec){reference-type="ref" reference="Vdec"}, it suffices to assume that $x_2\le \varepsilon$, since otherwise we have $$V(x_2)-V(x_1)\le -A(x_2^2-x_1^2)< -A\varepsilon(x_2-x_1).$$
For a positive constant $d$, choose a $u_d\in\mathfrak{U}$ that is constant and equal to $d$ up to time $\tau_d=\tau_{u_d}$. Then, Proposition [\[specialdpp\]](#specialdpp){reference-type="ref" reference="specialdpp"} yields $$V(x_1)\ge \frac{\ln d-cx_2^2}{\rho}(1-\mathbb{E}\big[e^{-\rho\tau_d}\big])+\mathbb{E}\big[{e^{-\rho\tau_d}}\big] V(x_2),$$ or equivalently, $$\big(V(x_2)-V(x_1)\big)\mathbb{E}\big[e^{-\rho\tau_d}\big]\le -\big(\ln d-\rho V(x_1)-cx_2^2)\ \mathbb{E}\left[\int_0^{\tau_d}e^{-\rho t}\, dt\right].
\label{ub}$$
Consider now the solution $x_d(\cdot)$ to ([\[sldyn\]](#sldyn){reference-type="ref" reference="sldyn"}) with $x(0)=x_1$ and control $u_d$. We can apply Itô's formula to $e^{-\rho t}x_d(t)$, followed by the optional stopping theorem for the bounded stopping time $\tau_N=\tau_d\wedge N$, and we get $$\begin{aligned}
\mathbb{E}\big[e^{-\rho \tau_N}x_d(\tau_N)\big]-x_1&=\mathbb{E}\left[\int_0^{\tau_N} e^{-\rho t}\big(d-(b+\rho)x_d(t)+r(x_d)\big)\, dt\right].
\label{lastone}\end{aligned}$$
The leftmost term of [\[lastone\]](#lastone){reference-type="eqref" reference="lastone"} is equal to $x_2\mathbb{E}\big[e^{-\rho \tau_d};\tau_d\le N\big]+e^{-\rho N}\mathbb{E}\big[x_d(\tau_N); \tau_d>N\big]$.

On the other hand, since we have assumed that $x_2\le \varepsilon$, we have $x_d(t)\le \varepsilon$ up to time $\tau_d$. Thus, the right hand side of [\[lastone\]](#lastone){reference-type="eqref" reference="lastone"} is bounded by $\mathbb{E}\left[\int_0^{\tau_N}e^{-\rho t} d\ dt\right]$.
Letting $N\to\infty$ in $\eqref{lastone}$, by the monotone convergence theorem, we have $$x_2 \mathbb{E}\big[e^{-\rho \tau_d}\big]-x_1\le d\,\mathbb{E}\big[\int_0^{\tau_d} e^{-\rho t}dt\big] \Longleftrightarrow (x_2-x_1)\mathbb{E}\big[e^{-\rho\tau_d}\big]\le (d+\rho x_1)\ \mathbb{E}\big[\int_0^{\tau_d} e^{-\rho t}dt\big].$$
Substituting this in ([\[ub\]](#ub){reference-type="ref" reference="ub"}) and choosing $\ln d=\rho V(x_1)+1+cx_2^2$, we find $$\label{lowb}
V(x_2)-V(x_1)\le -(x_2-x_1)\left(e^{\rho V(x_1)+1+cx_2^2}+\rho x_1\right)^{-1} \!\!\!.$$
The assertion now follows setting $c(x_2)=A\varepsilon\mathbf{1}\{x_2>\varepsilon\}+ \left(e^{\rho V(0)+1+cx_2^2}+\rho x_2\right)^{-1} \mathbf{1}\{x_2\leq \varepsilon\}$ and $C_1=A\varepsilon\wedge \left(e^{\rho V(0)+1+c\varepsilon^2}+\rho \varepsilon\right)^{-1}>0$. $\Box$\
**Proof of Theorem [Theorem 3](#asymptotic){reference-type="ref" reference="asymptotic"}** \
We define an auxiliary function $v$ by $$v(x):=V(x)+A\left(x+\frac{a}{b+\rho}\right)^2+\frac{1}{\rho}\ln \left(2A(x+\frac{a}{b+\rho})\right)-K, \quad x\in\mathbb{R}.$$
Straightforward calculations yield that $v$ is a viscosity solution in $(0, \infty)$ of the equation $$\begin{gathered}
\rho v+\left(bx-r(x)\right)v'+\ln\left(1+\frac{1-\rho\big(x+\frac{a}{b+\rho}\big)v'}{2A\rho\left(x+\frac{a}{b+\rho}\right)^2} \right)-\dfrac{1}{2}\sigma^{2}x^{2}v''+f=0,\end{gathered}$$ where $$f(x)=\frac{a(b+\frac{\sigma^2}{2})+(b+\rho)r(x)}{\rho\big(a+x(b+\rho)\big)}+\frac{\sigma^2x(b+\rho)}{2\rho\big(a+x(b+\rho)\big)^2}-\frac{2A}{b+\rho}(a-r(x))(a+x(b+\rho))+2AC.$$ Note that $f$ is smooth on $[0,\infty)$ and vanishes as $x\to\infty$.
Let $v_\lambda(y)=v(\frac{y}{\lambda})$ and observe that, if $v_\lambda(1) \rightarrow 0$ as $\lambda \rightarrow 0$, then $v(x) \rightarrow 0$ as $x
\rightarrow \infty$. It turns out that $v_\lambda$ solves $$\begin{gathered}
\rho v_\lambda+ \left(bx-\lambda r(\frac{x}{\lambda})\right)v'_\lambda+\ln\left(1+\frac{\lambda^2\big(1-\rho\big(x+\frac{\lambda a}{b+\rho}\big)v'_\lambda\big)}{2A\rho\left(x+\frac{\lambda a}{b+\rho}\right)^2} \right)-\dfrac{1}{2}\sigma^{2}x^{2}v''_\lambda+f\big(\frac{x}{\lambda}\big)=0.\end{gathered}$$
Since, by ([\[Vbounds\]](#Vbounds){reference-type="ref" reference="Vbounds"}), $v_\lambda$ is uniformly bounded, we consider the half-relaxed limits $v^*(y)=\limsup_{x \rightarrow y, \lambda\rightarrow 0}v_\lambda(x)$ and $v_*(y)=\liminf_{x \rightarrow y, \lambda\rightarrow 0}v_\lambda(x)$ in $(0, \infty)$. By [@BP], $v^*$ and $v_*$ are respectively sub- and super-solutions of $$\rho w +byw'-\frac{1}{2} \sigma^2 y^2 w''=0.$$
It is easy to check that for any $y>0$ we have $v^*(y)=\limsup_{x\to\infty}v(x)$ and $v_*(y)=\liminf_{x\to\infty}v(x)$. The subsolution property of $v^*$ and the supersolution property of $v_*$ give $$\limsup_{x\to\infty}v(x)\le 0\le\liminf_{x\to\infty}v(x)\le \limsup_{x\to\infty}v(x),$$ so $\lim_{x\to\infty}v(x)=0$, which is the assertion. $\Box$
**Proof of Lemma [Lemma 2](#Napprox){reference-type="ref" reference="Napprox"}:** The lower bound is evident since the supremum in the definition of $V_N$ is taken over a subset of admissible controls. Let now $\varepsilon >0$, and consider a control $u\in \mathfrak{U}$ such that $$V(x)<\mathbb{E}_x \left[ \int \limits_0^{\infty} e^{-\rho t} (\ln u(t) -c x^2(t))dt \right]+\varepsilon$$ Let us now define $u_N=u\wedge N=\min\{u,N\}\in \mathfrak{U}_N,$ and $x_N(\cdot)$ the solution to [\[sldyn\]](#sldyn){reference-type="eqref" reference="sldyn"} with $u_N$ as control. Clearly, $$V_N(x)\geq \mathbb{E}_x \left[ \int \limits_0^{\infty}e^{-\rho t} (\ln u_N(t) -c x_N^2(t))dt \right]$$ so that $$\label{VVN}
V(x)-V_N(x)\leq \varepsilon +\mathbb{E}_x \left[ \int \limits_0^{\infty}e^{-\rho t} (\ln\left( \frac{u(t)}{u_N(t)}\right) -cx^2(t)+cx_N^2(t))dt \right].$$ If we denote $\Delta u=(u-N)^+=\max\{u-N,0\}$, we can write $u=u_N+\Delta u$, with $\Delta u \neq 0$ if and only if $u>N$. Thus,
$$\begin{gathered}
\label{lnDu}
\mathbb{E} \left[ \int \limits_0^{\infty}e^{-\rho t} \ln\left( \frac{u(t)}{u_N(t)}\right)dt \right]=\mathbb{E} \left[ \int \limits_0^{\infty}e^{-\rho t} \ln\left(1+ \frac{\Delta u(t)}{N}\right)dt \right] \\ \leq \frac{1}{N} \mathbb{E} \left[ \int \limits_0^{\infty} e^{-\rho t} \Delta u(t)dt \right]=\frac{\rho+b}{N} \mathbb{E} \left[ \int \limits_0^{\infty} e^{-\rho t} M_t( \Delta u )dt \right],
\end{gathered}$$ where the final equality is due to [\[Mt\]](#Mt){reference-type="eqref" reference="Mt"}. After application of [\[xZ\]](#xZ){reference-type="eqref" reference="xZ"} for both controls $u$ and $u_N$, we get $$x(t)-x_N(t)=M_t(\Delta u )+\int \limits_0^t \frac{Z_t}{Z_s} \left( r(x(s))-r(x_N(s)) \right)ds.$$ In view of Propositions [\[positive\]](#positive){reference-type="ref" reference="positive"} and [\[monotone\]](#monotone){reference-type="ref" reference="monotone"}, we have $0\leq x_N(\cdot) \leq x(\cdot),$ $\mathbb{P}$-a.s., and since $r$ is nondecreasing, we have $$x^2(t)-x^2_N(t)\geq (x(t)-x_N(t))^2\geq M^2_t(\Delta u), \; \forall t \geq 0, \mathbb{P}-\text{a.s.}$$ The preceding estimate and [\[lnDu\]](#lnDu){reference-type="eqref" reference="lnDu"} can further strengthen inequality [\[VVN\]](#VVN){reference-type="eqref" reference="VVN"} to $$V(x)-V_N(x)\leq \varepsilon + \mathbb{E} \left[ \int \limits_0^{\infty} e^{-\rho t}\left( \frac{\rho+b}{N} M_t(\Delta u)-c M^2_t(\Delta u) \right)dt \right]\leq \varepsilon +\frac{1}{\rho} \left(\frac{\rho+b}{2N\sqrt{c}}\right)^2.$$ Since $\varepsilon>0$ was arbitrary, the assertion follows. $\Box$\
**Proof of Proposition [Proposition 5](#asympt){reference-type="ref" reference="asympt"}** \
To calculate the stationary density $f$ of an Itô diffusion $y_t$, we may solve the stationary Fokker-Planck equation, $$\label{FPe}
\mathcal{L}^*(f)=0,$$ where $\mathcal{L}^*$ is the adjoint of the generator $\mathcal{L}$ of the process, and demand that the solution satisfy the constraints $f\geq 0$ and $\int f=1$.
Equation [\[FPe\]](#FPe){reference-type="eqref" reference="FPe"} takes a convenient form, when the corresponding process $y_t$ has a constant diffusion coefficient. Specifically, if $dy_t=-g(y_t)dt+\sigma dW_t$, then $\mathcal{L}(f)=\frac{\sigma^2}{2}f''- gf'$ and equation [\[FPe\]](#FPe){reference-type="eqref" reference="FPe"} becomes $$\frac{\sigma^2}{2} \frac{d^2f}{dy^2}+\frac{d(gf)}{dy}=0.$$ Hence, if $\{y_t: t\ge 0\}$ has a unique invariant distribution, its density, $f_Y$, will be given by $$\label{invdist}
f_Y(y)=\frac{1}{Z} e^{-\frac{2}{\sigma^2}G(y)},\quad y\in\mathbb{R},$$ where $G$ is an antiderivative of $g$, and $Z$ is a normalising constant. We can reduce the dynamics of the optimally controlled stochastic lake to the preceding form via the transformation $y_t=\ln (x_{*}(t))$. Indeed, applying Itô's rule to $y_t$, we find $$\label{con_dif}
dy_t=\left(e^{-y_t}h(e^{y_t})-\frac{\sigma^2}{2}\right)dt+\sigma dW_t$$ where $h(x)=-\frac{1}{V'(x)}-bx+r(x)$ is the drift of the optimally controlled lake, $x_*$. Assumption [Assumption 1](#r){reference-type="ref" reference="r"} and Proposition [Proposition 3](#vprop){reference-type="ref" reference="vprop"}(ii) ensure that $y$ in [\[con_dif\]](#con_dif){reference-type="eqref" reference="con_dif"} has a unique invariant distribution, whose density is given by [\[invdist\]](#invdist){reference-type="eqref" reference="invdist"}, with $$G(y)=\big(b+\frac{\sigma^2}{2}\big)y + \Phi_\sigma(e^y).$$
Hence, $x_*(t)=e^{y_t}$ itself has a unique invariant distribution, whose density is given by $$f(x)=\frac{f_Y(\ln x)}{x}=\frac{1}{Z} x^{-2\big(1+\frac{b}{\sigma^2}\big)}\, e^{-\frac{2}{\sigma^2}\Phi_\sigma (x)},\quad x>0.$$ The limits in [\[tails\]](#tails){reference-type="eqref" reference="tails"} that determine the tail asymptotics of the invariant distribution are an immediate consequence of Assumption [Assumption 1](#r){reference-type="ref" reference="r"} and Proposition [Proposition 3](#vprop){reference-type="ref" reference="vprop"}(ii). $\Box$\
# Numerical Investigation {#numerics}
In this section we implement the monotone numerical scheme suggested in [@KLS] to numerically compute the value function of [\[ihvf\]](#ihvf){reference-type="eqref" reference="ihvf"}. We do so for parameter values in ranges that correspond to distinct qualitative behaviour, as well as for recycling rates approximating a step function. Computation of the value function provides access to the invariant distribution of the optimally controlled stochastic shallow lake [\[opt\]](#opt){reference-type="eqref" reference="opt"} through equation [\[inv1\]](#inv1){reference-type="eqref" reference="inv1"} and we investigate the shape of the invariant density. Finally, we can simulate paths of [\[opt\]](#opt){reference-type="eqref" reference="opt"} and explore the statistics of the transition times between oligotrophic and eutrophic states.
More precisely, we consider a computational domain $[0,l]$, for a sufficiently large $l$, and a uniform partition $0 = x_0 < x_1 < \ldots < x_{N-1} < x_N = l$, whose mesh size we denote by $\Delta x$. Having in mind ([\[DVbounds\]](#DVbounds){reference-type="ref" reference="DVbounds"}), if $V_{i}$ is the approximation of $V$ at $x_i$, we employ a backward finite difference discretisation to approximate the first derivative in the linear term of [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}, a forward finite difference discretisation for the derivative in the logarithmic term, and a central finite difference scheme to approximate the second derivative. These considerations yield, for $i = 1,\ldots, N-1$, the approximate equations $$\begin{gathered}
\label{DHJB}
V_i - \frac{1}{\rho} \Big(r(x_i) - bx_i \Big) \frac{ V_{i} - V_{i-1} }{ \Delta x} + \frac{1}{\rho}
\left[cx_i^2 + 1 +\ln \left( - \frac{V_{i+1} - V_i}{\Delta x}\right)\right] \\
- \frac{\sigma^2}{2\rho}x_i^2
\frac{ V_{i+1} + V_{i-1} - 2 V_i}{(\Delta x)^2}=0.
\end{gathered}$$
In addition, we impose a boundary condition at the right endpoint, $l$, which we assumed to be sufficiently large, taking advantage of the asymptotic behaviour of $V$ described in Theorem [Theorem 3](#asymptotic){reference-type="ref" reference="asymptotic"}. That is, $$\label{rboundary}
V_N=-A\left(l+\frac{a}{b+\rho}\right)^2-\frac{1}{\rho}\ln \left(2A(l+\frac{a}{b+\rho})\right)+K.$$
Finally, another approximate equation is provided by the boundary condition at $x=0$, as given in Corollary [Corollary 1](#coro1){reference-type="ref" reference="coro1"}, i.e., $$\label{lboundary}
V_0+\frac{1}{\rho}
\left[1 +\ln \left( - \frac{V_{1} - V_0}{\Delta x}\right)\right]=0.$$
In fact, equation [\[lboundary\]](#lboundary){reference-type="eqref" reference="lboundary"} coincides with [\[DHJB\]](#DHJB){reference-type="eqref" reference="DHJB"} for $i=0$, and we have in total a system of $N$ nonlinear equations for the $N$ unknowns, $V_0,\ldots,V_{N-1}$. In this paper, we approximate the solution of this system using the Newton-Raphson method. As an initial estimate for the Newton-Raphson algorithm, we consider a quadratic function $V^0$, such that $V^0(x_N)=V_N,\; V^0(x_0)=V^0(0)=\frac{1}{\rho}\ln\left(\frac{b+\rho}{\sqrt{2ec}}\right)$ (the upper bound in Proposition [\[V0fin\]](#V0fin){reference-type="ref" reference="V0fin"}) and $V^0(x_1)=V^0(0)-\Delta x e^{-(\rho V^0(0)+1)}$, so that [\[lboundary\]](#lboundary){reference-type="eqref" reference="lboundary"} is initially satisfied.
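A minimal sketch of this procedure (not the authors' code) is given below. It assembles the residuals of [\[DHJB\]](#DHJB){reference-type="eqref" reference="DHJB"} (which reduce to [\[lboundary\]](#lboundary){reference-type="eqref" reference="lboundary"} at $i=0$) with the Dirichlet value [\[rboundary\]](#rboundary){reference-type="eqref" reference="rboundary"} at the right end, builds the quadratic initial guess just described and, for brevity, hands the nonlinear system to `scipy.optimize.fsolve` instead of coding the Newton-Raphson iteration by hand. The parameter values, domain length $l$ and mesh are illustrative (the right boundary condition is only accurate for sufficiently large $l$), the clipping of the forward difference is merely a safeguard against non-decreasing intermediate iterates, and convergence is not guaranteed for every choice of mesh and parameters.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters and discretisation (not the exact values used for the figures).
b, c, rho, sigma = 0.65, 0.5, 0.03, 0.1
r = lambda x: x**2 / (1.0 + x**2)
a, C = 1.0, 0.0                             # a = lim r(x); C = lim (a - r(x)) x for this r
A = c / (rho + 2 * b - sigma**2)
K = ((2 * b + sigma**2) / (2 * rho) - A * a**2 * (rho + 2 * b) / (b + rho)**2 - 1 + 2 * A * C) / rho

l, N = 50.0, 1000                           # domain [0, l]; l should be large for (rboundary)
x = np.linspace(0.0, l, N + 1)
dx = x[1] - x[0]
xi = x[:-1]                                 # nodes carrying the unknowns V_0, ..., V_{N-1}

# Right boundary value (rboundary) from the asymptotics of Theorem 3.
V_right = -A * (l + a / (b + rho))**2 - np.log(2 * A * (l + a / (b + rho))) / rho + K

def residual(V_unknown):
    """Residuals of the discretised HJB (DHJB); at i = 0 they reduce to (lboundary)."""
    V = np.append(V_unknown, V_right)
    fwd = (V[1:] - V[:-1]) / dx                                  # forward difference
    bwd = np.zeros(N); bwd[1:] = (V[1:N] - V[:N-1]) / dx         # backward difference
    sec = np.zeros(N); sec[1:] = (V[2:] + V[:N-1] - 2 * V[1:N]) / dx**2
    fwd = np.minimum(fwd, -1e-12)                                # safeguard for the log
    return (V[:N] - (r(xi) - b * xi) * bwd / rho
            + (c * xi**2 + 1 + np.log(-fwd)) / rho
            - sigma**2 * xi**2 * sec / (2 * rho))

# Quadratic initial guess V^0 satisfying the three conditions described above.
V00 = np.log((b + rho) / np.sqrt(2 * np.e * c)) / rho
slope0 = -np.exp(-(rho * V00 + 1.0))
beta, gamma = np.linalg.solve([[dx, dx**2], [l, l**2]], [slope0 * dx, V_right - V00])
V_init = V00 + beta * xi + gamma * xi**2

V_num = fsolve(residual, V_init)
print("V(0) =", V_num[0], "   V'(0) ~", (V_num[1] - V_num[0]) / dx)
```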
The proof of convergence of the numerical scheme to the value function follows a general argument proposed in [@Barles] to prove convergence of monotone schemes for viscosity solutions of fully nonlinear second-order elliptic or parabolic, possibly degenerate, partial differential equations. By Proposition 2 in [@KLS], the numerical scheme described above is consistent and monotone, provided $$\label{num_condition}
\Delta x \big(r(x) - bx\big)\leq \frac{\sigma^2}{2}.$$ In addition, as $\Delta x\to 0$, the continuous function that interpolates the solution at the points $x_0,\ldots,x_N$ converges locally uniformly to the unique constrained viscosity solution of [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}, i.e., the actual value function. Condition [\[num_condition\]](#num_condition){reference-type="eqref" reference="num_condition"} is always satisfied for the stochastic shallow lake problem, provided $\Delta x$ is chosen suitably small. Note also that for the usual choice of recycling function, $r(x)=x^2/(x^2+1),$ condition [\[num_condition\]](#num_condition){reference-type="ref" reference="num_condition"} is satisfied if e.g. $b\geq 0.5$, even if $\sigma=0$.
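For a given recycling rate and mesh, condition [\[num_condition\]](#num_condition){reference-type="eqref" reference="num_condition"} can be checked directly; a minimal sketch with illustrative values:

```python
import numpy as np

# Check of the monotonicity condition  dx * (r(x) - b*x) <= sigma^2 / 2  on a grid.
b, sigma, dx, l = 0.65, 0.1, 0.05, 50.0
x = np.linspace(0.0, l, int(l / dx) + 1)
lhs = dx * (x**2 / (1.0 + x**2) - b * x)
print("condition satisfied everywhere:", bool(np.all(lhs <= sigma**2 / 2)))
```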
An advantage of this methodology for the computation of the value function is that we are free to choose any value of the parameter $\sigma$, as long as $\sigma^2<\rho+2b$ (see Proposition [\[restr1\]](#restr1){reference-type="ref" reference="restr1"}) and condition [\[num_condition\]](#num_condition){reference-type="ref" reference="num_condition"} are satisfied. In this way, we are not restricted to small values of the noise parameter $\sigma$. For instance, in Figure [7](#steady_iv){reference-type="ref" reference="steady_iv"} we extend to higher noise intensities the bifurcation diagram in Figure 5 of [@GKW], revealing new features.
In the first part of our numerical investigation, we study the problem with the typical choice of the function $r$, i.e. $r(x)=x^2/(x^2+1)$, while in the second part we study the properties of the value function $V$ when the recycling rate is a hyperbolic tangent function.
## The value function V and the optimal policy
In order to gain some first insight into the problem, we begin by exploring the properties of the value function $V$ for various values of the parameters $b,c,\rho, \sigma$. Our choice of parameters is based on the bifurcation analysis in [@kslv].
Figures [1](#vfsec1){reference-type="ref" reference="vfsec1"} and [3](#vf2sec1){reference-type="ref" reference="vf2sec1"} show the graph of the value function for the fixed parameters $(b,c,\rho)=(0.65,1,0.03)$ and $(b,c,\rho)=(0.65,0.5,0.03)$ respectively with the noise $\sigma$ varying. Notice that these graphs also depict the value function in the deterministic case ($\sigma =0$). In Figures [2](#usec1){reference-type="ref" reference="usec1"} and [4](#u2sec1){reference-type="ref" reference="u2sec1"}, the corresponding optimal management policies are shown. For the choice of parameters $(b,c,\rho)=(0.65,1,0.03)$, the optimal policies are smooth functions. On the other hand, when $(b,c,\rho)=(0.65,0.5,0.03)$, the system exhibits a Skiba point and the (deterministic) optimal policy is discontinuous at this point.
![](b065_r003_c1_section1.jpg){#vfsec1 width="100%"}
![](b065_r003_c1_u_section1.jpg){#usec1 width="100%"}
![](b065_r003_c05_section1.jpg){#vf2sec1 width="100%"}
![](b065_r003_c05_u_section1.jpg){#u2sec1 width="100%"}
## Invariant distribution
In this section, we numerically investigate the properties of the equilibrium distribution of the optimally controlled lake for different combinations of the parameters of the problem.
Apart from the invariant density, $f,$ and cumulative distribution, $F,$ of the optimally controlled lake, we also present bifurcation diagrams based on its transformation invariant function, $I:=\sigma x f$. The main advantage of this object is its invariance under diffeomorphic coordinate transformations, which makes the transformation invariant function a suitable basis of bifurcation theory (see e.g. [@Zeeman] and [@GKW]). Following the definitions introduced in [@GKW], the local maximisers of the transformation invariant $I$ are called *stochastic attractors* of the process, while the local minimiser of $I$ is called the *regime switching threshold*. The stochastic attractors are the natural analogue of the attracting steady states of the deterministic problem and the *regime switching threshold* is the analogue of the indifference point (the Skiba point).
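Given a value function computed on a grid, the invariant density of Proposition [Proposition 5](#asympt){reference-type="ref" reference="asympt"}, its cumulative distribution and the transformation invariant $I=\sigma x f$ can be evaluated by numerical quadrature. The sketch below (not from the paper) works in log-scale to avoid overflow from the factor $x^{-2(1+b/\sigma^2)}$; the tail of $\Phi_\sigma$ beyond the grid is a constant in $x$ and is absorbed by the normalisation. For the sake of a runnable example it is applied to a *placeholder* decreasing function standing in for the computed $V$; in practice one passes the output of the scheme described earlier in this section.

```python
import numpy as np

def invariant_quantities(x, V, b, sigma, r):
    """Density f, cdf F and transformation invariant I = sigma*x*f of Proposition 5,
    computed from a value function V given on a grid x (the node x[0] = 0 is dropped)."""
    Vp = np.gradient(V, x)                       # numerical V'
    xi, Vp = x[1:], Vp[1:]
    integrand = (-1.0 / Vp + r(xi)) / xi**2
    dPhi = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xi)
    Phi = np.concatenate([np.cumsum(dPhi[::-1])[::-1], [0.0]])   # Phi(x) over [x, x_max]
    logf = -2.0 * (1.0 + b / sigma**2) * np.log(xi) - 2.0 * Phi / sigma**2
    f = np.exp(logf - logf.max())                # stable up to normalisation
    f /= np.trapz(f, xi)
    F = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(xi))])
    return xi, f, F, sigma * xi * f

# Demo on a placeholder decreasing function standing in for the computed value function.
b, c, rho, sigma = 0.65, 0.5, 0.03, 0.1
A = c / (rho + 2 * b - sigma**2)
x = np.linspace(0.0, 4.0, 801)
V_placeholder = -30.0 - x - A * x**2             # decreasing, V' <= -1 (placeholder only)
xi, f, F, I = invariant_quantities(x, V_placeholder, b, sigma,
                                   r=lambda u: u**2 / (1.0 + u**2))
print("location of the largest mode of I:", xi[np.argmax(I)])
```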
For the fixed parameters $(b,c,\rho)=(0.65, 0.5,0.03)$, Figure [\[IFiv\]](#IFiv){reference-type="ref" reference="IFiv"} shows the invariant density and cumulative distribution functions for several values of the noise parameter $\sigma$. For this set of parameters, the deterministic problem has a Skiba point. In the presence of small noise, the lake spends most of the time in the eutrophic state. As noise increases, the mode of the invariant distribution shifts to cleaner states, i.e. to lower concentrations of phosphorus. A detailed presentation of this shift from a bimodal distribution (with a peak at the eutrophic state) to a unimodal one (with a peak at oligotrophic phosphorus concentrations) due to the increase of noise is depicted in Figure [7](#steady_iv){reference-type="ref" reference="steady_iv"}. This bifurcation diagram illustrates the locations of the modes and the antimodes of the transformation invariant function, $I$, with respect to $\sigma$.
![](b065_r003_c05_f_section1.jpg){#Iiv width="100%"}
![](b065_r003_c05_FF_section1.jpg){#Fiv width="100%"}
In the case of the fixed parameters $(b,c,\rho)=(0.8, 0.5,0.03)$, the deterministic problem exhibits a unique equilibrium in the eutrophic state (see [@W03]). Therefore, we have qualitatively different dynamics compared to the preceding case. In the presence of small noise, the (transformation) invariant function is unimodal with a peak at the eutrophic state, but the location of the mode moves to cleaner states as the noise intensity increases. These results are summarised in Figure [8](#steady_viii){reference-type="ref" reference="steady_viii"}. The same behaviour for large values of noise, as in the previous cases, is also present for combinations of parameters for which the deterministic problem exhibits a unique equilibrium in the oligotrophic state. Based on the above observations, one could argue that noise seems to 'clean' the lake, in the sense that the lake spends more time in states corresponding to low phosphorus concentrations. One should bear in mind, however, that, at the same time, extremely polluted states become more likely at high noise intensities. Notice that if we were limited to small values of noise, e.g. $\sigma <0.2$ (see Figures [7](#steady_iv){reference-type="ref" reference="steady_iv"} and [8](#steady_viii){reference-type="ref" reference="steady_viii"}), we would not be able to observe this behaviour.
![](steady_states_s_sos.jpg){#steady_iv width="100%"}
![](steady_states_s_sos_08_05_v2.jpg){#steady_viii width="100%"}
Figure [9](#steady_c_065){reference-type="ref" reference="steady_c_065"} illustrates a bifurcation diagram for the fixed parameters $(b,\rho)=(0.65,0.03)$ and noise $\sigma=0.1$ with respect to the cost of pollution $c$. As expected from the definition of the total benefit [\[Jxu\]](#Jxu){reference-type="eqref" reference="Jxu"}, large values of $c$ attribute more weight to the ecological services of the lake, thus cleaning the optimally controlled lake. This is not the case for the bifurcation diagram with respect to the discount factor $\rho$. In Figure [10](#steady_r_065){reference-type="ref" reference="steady_r_065"} the bifurcation diagram with respect to $\rho$ for the fixed parameters $(b,c)=(0.65,0.8)$ and noise $\sigma=0.1$ is depicted. In this diagram, we observe that as the discount factor $\rho$ increases and the benefit of future generations is discounted, the stochastic attractors of the system move towards eutrophic states.
![](steady_states_c_sos.jpg){#steady_c_065 width="100%"}
![](steady_states_r_065_08.jpg){#steady_r_065 width="100%"}
## The optimal paths and escape times
In the deterministic version of the problem, the optimally controlled system asymptotically approaches one of its attracting equilibrium states. On the other hand, noise introduces fluctuations around the stochastic attractors of the process. When the stochastic system has more than one attractor, noise eventually induces fluctuations that are large enough to drive the system beyond the regime switching threshold and into the basin of attraction of a different attractor. The process thermalises there until a new large fluctuation causes another regime switch, and so on. The invariant density $f$ and one simulated path of the optimally controlled stochastic lake for the choice of parameters $(b,c,\rho, \sigma)=(0.65,0.512,0.03,0.1)$ are depicted in Figure [\[troxies2\]](#troxies2){reference-type="ref" reference="troxies2"}.
If we consider a diffusion $x_t$ in a double-well potential, $G$, with constant diffusion coefficient, i.e., $dx_t=-G'(x_t)dt+\sigma dW_t$, and we denote by $x_{\pm}$ the stochastic attractors of the process, with $x_-<x_+$ and by $x_*$ the regime switching threshold, then the time $T_{x_+}$ it takes the system to hit $x_+$ when it starts at $x_-$ is asymptotically exponential, in the sense that $$\label{EXP}
T_{x_+}/ \mathbb{E}_{x_-} [T_{x_+}] \overset{d}{\rightarrow} Exp(1) \text{\; as \;} \sigma \rightarrow 0$$ and it is described by the Arrhenius law $$\label{AL}
\displaystyle{\lim \limits_{\sigma \rightarrow 0} \frac{\sigma^2}{2}\log \mathbb{E}_{x_-} [T_{x_+}] =G_0(x_*)-G_0(x_-). }$$ The preceding results assume that the potential $G$ does not depend on the noise intensity, $\sigma$. This is not true in our case, as the optimal policy, hence the drift of the optimally controlled lake, depends on $\sigma$ through the HJB equation [\[OHJB\]](#OHJB){reference-type="eqref" reference="OHJB"}. The derivation of an expression analogous to [\[AL\]](#AL){reference-type="ref" reference="AL"} in this context is the subject of future work.\
Figure [13](#esctime){reference-type="ref" reference="esctime"} illustrates a histogram of 1000 realisations of the random variable $T_{x_+}/ \mathbb{E}_{x_-} [T_{x_+}]$ for the optimally controlled lake. The expected time in the denominator was estimated by the sample mean of the computed times. The (red) curve corresponds to the exponential distribution with mean 1. The fit to the exponential distribution is very good for $\sigma = 0.08$.
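The transition times themselves are obtained by simulating [\[opt\]](#opt){reference-type="eqref" reference="opt"} with the drift $-1/V_\sigma'(x)-bx+r(x)$ and recording the first hitting time of the other attractor. The sketch below (not from the paper) shows this machinery with a *placeholder* monotone $V'$, so the metastable structure of the true optimal dynamics is absent; in practice one interpolates the derivative of the value function computed by the scheme, e.g. with `np.interp` on the grid, and then compares the normalised times with the Exp(1) distribution as in the figure.

```python
import numpy as np

def transition_times(Vprime, x_start, x_target, b=0.65, sigma=0.1,
                     r=lambda x: x**2 / (1.0 + x**2),
                     n_samples=50, dt=1e-3, t_max=1e4, seed=2):
    """Hitting times of x_target for Euler-Maruyama paths of the controlled dynamics
    dx = (-1/V'(x) - b*x + r(x)) dt + sigma*x dW started at x_start
    (times are censored at t_max if the level is not reached)."""
    rng = np.random.default_rng(seed)
    times = np.empty(n_samples)
    for i in range(n_samples):
        xv, t = x_start, 0.0
        while xv < x_target and t < t_max:
            drift = -1.0 / Vprime(xv) - b * xv + r(xv)
            xv = max(xv + drift * dt + sigma * xv * np.sqrt(dt) * rng.standard_normal(), 1e-8)
            t += dt
        times[i] = t
    return times

# Demo with a *placeholder* decreasing V' (hypothetical stand-in; in practice use, e.g.,
# Vprime = lambda y: np.interp(y, x_grid, np.gradient(V_num, x_grid)) from the scheme).
T = transition_times(Vprime=lambda y: -1.0 / (0.1 + y), x_start=0.4, x_target=1.5)
print("mean hitting time:", T.mean(), "  normalised sample std:", (T / T.mean()).std())
```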
![](f_01_065_003_0512.jpg){#I0 width="100%"}
![](troxies_01_065_003_0512_v2.jpg){#troxies201 width="100%"}
![Histogram of 1000 samples of the normalised transition time from the oligotrophic ($x_-$) to the eutrophic ($x_+$) state. Parameters: $(b,c,\rho,\sigma)=(0.65,0.5,0.03,0.08).$](exp_sos.jpg){#esctime width="60%"}
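The exponential fit in Figure [13](#esctime){reference-type="ref" reference="esctime"} can be illustrated qualitatively with a short Monte Carlo experiment for the generic double-well diffusion $dx_t=-G'(x_t)dt+\sigma dW_t$ discussed above, rather than for the optimally controlled lake itself, whose drift depends on the numerically computed optimal policy. The following minimal Python sketch assumes the quartic potential $G(x)=x^4/4-x^2/2$, with attractors $x_\pm=\pm1$ and threshold $x_*=0$; all names and parameter values are illustrative and are not taken from our implementation.

```python
import numpy as np

# Minimal Monte Carlo sketch (illustrative, not the code used for the figures):
# exit times of dx = -G'(x) dt + sigma dW for the quartic double-well
# G(x) = x^4/4 - x^2/2, with attractors x_- = -1, x_+ = +1 and threshold x_* = 0.
rng = np.random.default_rng(0)

def hitting_time(sigma, dt=1e-3, x_start=-1.0, x_target=1.0):
    """Euler-Maruyama simulation of the first time the path reaches x_target."""
    x, t = x_start, 0.0
    while x < x_target:
        x += -(x**3 - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

sigma = 0.7   # moderately small noise, chosen so the sketch runs in seconds
times = np.array([hitting_time(sigma) for _ in range(100)])
normalised = times / times.mean()

# For Exp(1): standard deviation 1, and P(T < E[T]) = 1 - 1/e ~ 0.632.
print("std of T/E[T]:", normalised.std())
print("fraction below the mean:", (normalised < 1).mean())
```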
## The rate of recycling
In this section, we present some numerical results, when a hyperbolic tangent function is used as the rate of recycling, $r$. We initially consider $r(x)=\tanh(x-3)+\tanh(3)$. In Figure [\[Iu21\]](#Iu21){reference-type="ref" reference="Iu21"}, we show the value function, the invariant density functions and the corresponding optimal policies for different combinations of the parameters $(b,c,\rho)$ and different values of $\sigma$. We observe that the lake has two attractors for small values of noise, when $(b,c,\rho)=(0.8,0.06,0.5)$, while it has only one when $(b,c,\rho)=(0.5,0.5,0.01)$. Nevertheless, in both cases, noise shifts the modes to cleaner states of the lake. In Figure [\[change\]](#change){reference-type="ref" reference="change"}, we illustrate the changes induced to the value function by small changes in the rate of recycling, $r$. In particular, we numerically approximate the value functions $V$ that correspond to the rate of recycling $r(x)=\frac{1}{2}(\tanh(a(x-3))+\tanh(3a))$ for various values of the parameter $a$, as well as the step function $\mathbf{1}\{x>3\}$ in the stochastic $(\sigma = 0.1)$ case.
![](tanh.jpg){#tanh3 width="100%"}
![](section2_tanh01.jpg){#Va21 width="100%"}
# Acknowledgement {#acknowledgement .unnumbered}
The authors are grateful to Emmanuil Georgoulis for his suggestions on the implementation of the numerical algorithm.
# Funding {#funding .unnumbered}
This work has been supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant," project HFRI-FM17-1034 (SCALINCS).
![](b050_r001_c05_section2.jpg){#vfsec22 width="100%" height="5cm"}
![](b08_r05_c006_section2.jpg){#vfsec23 width="100%" height="5cm"}
![](b050_r001_c05_u_section2.jpg){#u22 width="100%"}
![](b08_r05_c006_u_section2.jpg){#u23 width="100%"}
![](b050_r001_c05_I_section2.jpg){#I22 width="100%"}
![](b08_r05_c006_I_section2.jpg){#I23 width="100%"}
| arxiv_math | {
"id": "2309.02885",
"title": "On the optimally controlled stochastic shallow lake",
"authors": "Angeliki Koutsimpela and Michail Loulakis",
"categories": "math.OC math.PR",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We consider a remote sensing system in which fixed sensors are placed in a region, and a drone flies over the region to collect information from cluster heads. We assume that the drone has a fixed maximum range, and that the energy consumption for information transmission from the cluster heads increases with distance according to a power law. Given these assumptions, we derive local optimum conditions for a drone path that either minimizes the total energy or the maximum energy required by the cluster heads to transmit information to the drone. We show how a homotopy approach can produce a family of solutions for different drone path lengths, so that a locally optimal solution can be found for any drone range. We implement the homotopy solution in python, and demonstrate the tradeoff between drone range and cluster head power consumption for several geometries. Execution time is sufficiently rapid for the computation to be performed real time, so the drone path can be recalculated on the fly. The solution is shown to be globally optimal for sufficiently long drone path lengths. For future work, we indicate how the solution can be modified to accommodate moving sensors.
author:
- Ramkumar Ganapathy
- "Christopher Thron[^1]"
bibliography:
- DroneInfoHarvest.bib
title: Optimal Real Time Drone Path Planning for Harvesting Information from a Wireless Sensor Network
---
Keywords: Wireless sensor network, drone, path planning, information collection, power-efficient, extended lifetime, optimization
# Introduction {#sec:intro}
There are many critical applications for remote sensing in low-resourced areas. Some of these include agricultural monitoring of crops and/or forests for fire and/or disease; wildlife monitoring, to track wildlife movement and detect poachers; free-range livestock monitoring, to track herd movement and to prevent cattle rustling; and early warning of terrorist or bandit activity. In low-resourced situations especially, the system cost is of huge importance, and often determines whether the system can be implemented at all. Sensors in area-monitoring networks typically are battery-powered, and must be replaced when their batteries are exhausted. This can be both difficult and costly, particularly in inaccessible locations. For this reason, reducing the power consumption of sensors in the field is a key factor in designing such remote sensing systems.
Previous authors have investigated practical use cases for using drones to collect information from WSN's. [@nguyen2021uav] has provided a comprehensive survey of drone-assisted data collection from wireless sensor networks. Several previous papers have dealt specifically with drone path planning in such scenarios. [@ho2013heuristic] gives a heuristic algorithm to decide which node within a cluster to fly to in order to gather information from the cluster. [@liu2018performance] uses a boustrophedon-type flight path for a sensor network located in region divided into square cells. [@gong2018flight] considers the problem of minimizing flight time of an information-gathering drone in the case where sensors are in a straight line, and the time required for information transfer from each sensor is an important consideration. [@skiadopoulos2020impact] considers the impact of drone path shape on information transmission from sensors to drone. [@6825191] uses particle-swarm optimization to arrive at an optimal path.
In this paper we consider a hybrid WSN-drone system for remote sensing. A single drone is used to collect information from wireless sensors which have fixed locations in the field (see Figure [1](#fig:schematic){reference-type="ref" reference="fig:schematic"}). The drone harvests information from all sensors during a single tour. The system design is posed as a constrained optimization problem. Our novel solution approach involves using a Lagrange multiplier to construct a system of differential equations in which the Lagrange multiplier appears as an additional unknown. Our approach has reduced computational complexity, so that real-time solution within seconds is possible even for very large systems.
![Schematic of remote sensing system in which a drone harvests information from fixed sensors (copied from Reference [@nguyen2021uav]).](figures/Picture1.png){#fig:schematic width="4.5 in"}
# Methodology {#sec:method}
## System assumptions
We consider a remote sensing system that satisfies the following assumptions:
1. The system includes a wireless sensor network with fixed cluster heads. Each sensor in the field transmits its information (either directly or indirectly) to one of the cluster heads.
2. The system also includes a drone that flies on a specific trajectory (to be determined), so that each cluster head transmits its information to the drone when the drone is nearby.
3. The drone's energy consumption is determined entirely by the length of the drone's trajectory. (This is the case for example if the drone flies at constant velocity, and there is no wind or other conditions that would affect the flight of the drone.)
4. The drone has a fixed maximum path length, which is determined by the energy capacity of the battery and the rate of energy consumption.
5. The drone's location is approximately constant while receiving information from a particular cluster head. This assumption is satisfied in either of the following scenarios: (a) the information transmission from the cluster head occupies only a brief period of time, so that the distance the drone moves during the transmission period is negligible; or (b) the drone lands during transmission, and consumes no energy during the transmission period.
6. The energy expended by the cluster head in information transmission is proportional to the distance between the cluster head and drone raised to a power law exponent.
We also consider two alternative criteria for optimization:
1. Minimize the total energy expended by cluster heads in information transmission;
2. Minimize the maximum of the energies expended by cluster heads in information transmission.
Under assumptions $A_1$-$A_6$ and criteria $O_1$-$O_2$, it follows that an optimum path for the drone will consist of straight-line segments joining the drone's locations where it harvests information from the different cluster heads. This conclusion is reflected in the mathematical description in subsequent sections.
## System parameters and variables
The system parameters include:
- $(x_{j},y_{j}), j = 1...J$: positions of the cluster heads that are sending the information;
- $L$: Maximum path length for the drone;
- $p$: Power loss exponent.
The system variables include:
- $(u_{j},v_{j}), j = 1...J$: Drone positions for harvesting information from cluster heads. Here the order of points the drone takes is assumed to be $(u_{j}, v_{j})$ to $(u_{j+1}, v_{j+1})$ for $j = 1\ldots J-1$.
- $(u_0,v_0)$, $(u_{J+1},v_{J+1})$: Drone starting and ending position, respectively.
## Mathematical formulation of the optimization problem
Information transmission from cluster head $(x_j,y_j)$ takes place when the drone is at position $(u_j,v_j)$. The total energy consumption required for information transfer is the sum of the energies from the $J$ cluster heads, which following assumption $(A_6)$ can be represented as a function $f(\vec{u}, \vec{v})$ where:
$$\label{eq:def_f}
f(\vec{u}, \vec{v}) = \sum_{j=1}^J \left((x_j-u_j)^2 + (y_j - v_j)^2\right)^{p/2},$$ where $\vec{u}:= [u_1,\ldots u_J]$ and $\vec{v}:= [v_1,\ldots v_J]$.
The distance the drone travels is given by $g(\vec{u}, \vec{v})$, where $$\label{eq:def_g}
g(\vec{u}, \vec{v}) = \sum_{j=0}^J \sqrt{ (u_j-u_{j+1})^2 + (v_j-v_{j+1})^2 }$$
The system's constraint is given by $g(\vec{u}, \vec{v})\le L$, where $L$ is the maximum distance the drone can travel. In the case where $L$ is large enough so that the drone is capable of making a complete tour of the cluster heads, then any such complete tour will give $f(\vec{u},\vec{v}) = 0$ and is thus an optimal solution to the problem. So we consider instead the more difficult problem where $L$ is smaller than the smallest tour distance, so $f(\vec{u}, \vec{v}) = 0$ is impossible. In this case we may replace the inequality with an equality constraint. This can be seen as follows. Suppose a drone path includes the point $(u_j,v_j)$ where $|(u_j,v_j) - (x_j,y_j)|>0$. For simplicity we define: $$\label{eq:wzDef}
\vec{w}_j := (u_j,v_j) \text{~and~} \vec{z}_j := (x_j,y_j).$$ Then for any $\delta$ with $0<\delta \le 1$ we have $$|\vec{w}_j + \delta(\vec{z}_j - \vec{w}_j) - \vec{z}_j| = (1-\delta)|\vec{w}_j - \vec{z}_j| < |\vec{w}_j - \vec{z}_j|,$$ so replacing $\vec{w}_j$ with $\vec{w}_j + \delta(\vec{z}_j - \vec{w}_j)$ will reduce the energy function [\[eq:def_f\]](#eq:def_f){reference-type="eqref" reference="eq:def_f"}. Now if the drone path length is less than $L$, then $\delta>0$ can be chosen sufficiently small such that replacing $\vec{w}_j$ with $\vec{w}_j + \delta(\vec{z}_j - \vec{w}_j)$ still yields total drone path length less than $L$. Thus any drone path with length less than $L$ can be improved: so no drone path with length less than $L$ can be optimal.
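For concreteness, the objective $f$ and the path length $g$ of eqs. [\[eq:def_f\]](#eq:def_f){reference-type="eqref" reference="eq:def_f"} and [\[eq:def_g\]](#eq:def_g){reference-type="eqref" reference="eq:def_g"} can be written down directly. The following minimal Python sketch (the variable names are ours, not part of the reference implementation) takes the cluster head coordinates and the drone's harvesting points as `numpy` arrays, with the fixed start and end positions passed separately.

```python
import numpy as np

def total_energy(u, v, x, y, p=2):
    """f(u, v) of eq. (def_f): sum of (distance from cluster head j to drone point j)^p."""
    return float(np.sum(((x - u) ** 2 + (y - v) ** 2) ** (p / 2)))

def path_length(u, v, start, end):
    """g(u, v) of eq. (def_g): length of the polyline start -> (u_1,v_1) -> ... -> end."""
    pts_x = np.concatenate([[start[0]], u, [end[0]]])
    pts_y = np.concatenate([[start[1]], v, [end[1]]])
    return float(np.sum(np.hypot(np.diff(pts_x), np.diff(pts_y))))

# Example with four cluster heads; placing the drone points at the cluster heads gives f = 0,
# while g equals the length of the corresponding tour.
x = np.array([2.0, 2.0, 6.0, 6.0]); y = np.array([1.0, 4.0, 4.0, 1.0])
print(total_energy(x, y, x, y), path_length(x, y, (0.0, 0.0), (0.0, 0.0)))
```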
We may now formulate the optimization problem. In case $O_1$, we have $$\label{eq:fundOpt}
\begin{aligned}
\text{Minimize } f(\vec{u},\vec{v})
\text{ subject to }
g(\vec{u},\vec{v}) = L.
\end{aligned}$$ In case $O_2$, minimizing the maximum transmission energy is equivalent to minimizing $\max_j|\vec{w}_j -\vec{z}_j|$. Using the equality $$\lim_{p\rightarrow \infty} (a_1^p + \ldots + a_J^p)^{1/p} = \max_j a_j\quad \text{ if }a_j>0~\forall j,$$ with $a_j := |\vec{w}_j -\vec{z}_j|$, we conclude that by taking $p$ sufficiently large we can approach the solution to $O_2$ with arbitrary precision.
## Local optimization conditions {#sec:locOpt}
The minimization problem [\[eq:fundOpt\]](#eq:fundOpt){reference-type="eqref" reference="eq:fundOpt"} leads to the following Lagrange multiplier condition for a local minimum: $$\label{eq:Lagrange}
\nabla f(\vec{u}, \vec{v}) = \lambda \nabla(g(\vec{u}, \vec{v}) - L),$$ where $\nabla h$ denotes the gradient of $h$ with respect to $\vec{u},\vec{v}$: $$\nabla h(\vec{u}, \vec{v}) := \left(\frac{\partial h}{\partial u_1}, \ldots, \frac{\partial h}{\partial u_J}, \frac{\partial h}{\partial v_1},\ldots,\frac{\partial h}{\partial v_J}\right),$$ and $\lambda$ is a Lagrange multiplier.
Writing the vector equation [\[eq:Lagrange\]](#eq:Lagrange){reference-type="eqref" reference="eq:Lagrange"} out in terms of components, we have:
$$\label{eq:opt1}
\frac{\partial f}{\partial u_j} = \lambda\frac{\partial g}{\partial u_j};\qquad
\frac{\partial f}{\partial v_j} = \lambda\frac{\partial g}{\partial v_j}
\qquad (j=1 \ldots J).$$
For notational simplicity we define new variables. Let $(j=1\ldots J)$ $$\label{eq:def_simp_nots}
\begin{aligned}
&{a_j} := {u_j - x_j}; \qquad &{b_j} := {v_j - y_j}; \qquad
&A_j := {a_j}^2 + {b_j}^2,
\end{aligned}$$ and let $(j=0\ldots J)$ $$\label{eq:def_simp_nots2}
\begin{aligned}
&{m_j} = u_{j} - u_{j+1}; \qquad &{n_j} := v_{j} - v_{j+1};\qquad
&M_j := ({m_j}^2 + {n_j}^2)^{1/2}.
\end{aligned}$$ Then we have: $$\label{eq:def_f_g}
f(\vec{u}, \vec{v}) = \sum_{j=1}^J A_j^{p/2}~~\text{and}~~
g(\vec{u}, \vec{v}) = \sum_{j=0}^J M_j.$$ Using this notation, we have: $$\begin{aligned}
\frac{\partial f}{\partial u_j} &= \frac{\partial f}{\partial A_j} \cdot\frac{\partial A_j}{\partial a_j} \cdot \frac{\partial a_j}{\partial u_j}
= (p/2)A_j^{p/2-1} \cdot 2a_j \\
&= pa_j A_j^q,~~ \text{where }q:=p/2-1,
\end{aligned}$$ and $$\label{eq:pdu}
\begin{aligned}
\frac{\partial g}{\partial u_j} &= \frac{\partial g}{\partial M_{j-1}} \cdot \frac{\partial M_{j-1}}{\partial m_{j-1}} \cdot \frac{\partial m_{j-1}}{\partial u_j} + \frac{\partial g}{\partial M_j} \cdot \frac{\partial M_j}{\partial m_j} \cdot \frac{\partial m_j}{\partial u_j}\\
&= (1/2)M_{j-1}^{-1}(2m_{j-1})(-1) + (1/2)M_j^{-1}(2m_j)\\
&= \frac{m_{j}}{M_{j}} - \frac{m_{j-1}}{M_{j-1}}.
\end{aligned}$$ Similarly we have: $$\label{eq:pdv}
\frac{\partial f}{\partial v_j} = pb_j A_j^q;\qquad
\frac{\partial g}{\partial v_j} = \frac{n_{j}}{M_{j}} - \frac{n_{j-1}}{M_{j-1}}.$$ It follows that we may rewrite the equations in [\[eq:opt1\]](#eq:opt1){reference-type="eqref" reference="eq:opt1"} as: $$\label{eq:opt2}
pa_j A_j^q = \lambda\left( \frac{m_{j}}{M_{j}} - \frac{m_{j-1}}{M_{j-1}}\right); \quad
pb_j A_j^q = \lambda\left( \frac{n_{j}}{M_{j}} - \frac{n_{j-1}}{M_{j-1}}\right)~~(j=1\ldots J).$$ Equations [\[eq:opt2\]](#eq:opt2){reference-type="eqref" reference="eq:opt2"} are required for local optimality. Hence they are necessary for global optimality, but not sufficient. So the solutions that we provide in this paper may or may not be globally optimal. We return to this issue in Section [5](#sec:future){reference-type="ref" reference="sec:future"}.
## Homotopy approach to local optimization for a given path length {#sec:homotopy}
Solutions of the fundamental optimization problem [\[eq:fundOpt\]](#eq:fundOpt){reference-type="eqref" reference="eq:fundOpt"} for different values of $L$ correspond to solutions of [\[eq:opt2\]](#eq:opt2){reference-type="eqref" reference="eq:opt2"} with different values of $\lambda$. As $L$ is varied continuously, the value of $\lambda$ will also vary continuously, and the components of $\vec{u}$ and $\vec{v}$ will vary continuously as well. This suggests that if we have a solution to [\[eq:fundOpt\]](#eq:fundOpt){reference-type="eqref" reference="eq:fundOpt"} for a given value of $L$, then we may be able to perturb that solution differentially to obtain solutions for different values of $L$. In fact, we do have a solution for a particular value of $L$: namely $f(\vec{u},\vec{v})=0$ when $u_j=x_j, v_j=y_j$, which solves the constraint $g(\vec{u},\vec{v}) = L$ when $L$ is equal to the length of the shortest tour that visits all sensors (the so-called "travelling salesman" tour).
In practice, we want to find solutions corresponding to values of $L$ that are smaller than the length of a full tour. So we start with the full tour, and successively "nudge" the solutions so they correspond to smaller and smaller values of $L$. This is an example of a *homotopy approach*: start with a solution that is optimal for a different set of conditions, and generate a smooth curve of solutions that joins this solution to a solution that satisfies desired conditions. In practice this smooth curve of solutions is often specified parametrically as the solution to a set of ordinary differential equations in a given parameter. In the next section, we will derive differential equations that can be used to find the parametrized curve that leads to our desired optimal solution.
## Derivation of differential equations for parametrized homotopy curve
First we parametrize solutions to [\[eq:opt2\]](#eq:opt2){reference-type="eqref" reference="eq:opt2"} using the parameter $s$, so that $a_j, b_j, A_j, m_j, M_j,$ $\lambda$ are all functions of $s$. As $s$ changes, the equalities in [\[eq:opt2\]](#eq:opt2){reference-type="eqref" reference="eq:opt2"} must continue to hold, so we have $$\label{eq:optDiff}
\begin{aligned}
p \frac{d}{ds} \left(a_j A_j^q\right) = \frac{d}{ds} \left( \lambda\frac{m_{j}}{M_{j}} - \lambda\frac{m_{j-1}}{M_{j-1}}\right) \\
p \frac{d}{ds} \left(b_j A_j^q\right) = \frac{d}{ds} \left( \lambda\frac{n_{j}}{M_{j}} - \lambda\frac{n_{j-1}}{M_{j-1}}\right)
\end{aligned}$$
Since we want to find solutions for smaller values of $L$, we want to ensure that $L$ decreases as $s$ increases. We may ensure this by adding the condition: $$\label{eq:gDeriv}
\frac{dg}{ds} = -1 \implies \sum_{j=0}^J \frac{dM_j}{ds} = -1.$$ Recall that the variables $a_j, b_j, A_j, m_j, M_j,$ in [\[eq:optDiff\]](#eq:optDiff){reference-type="eqref" reference="eq:optDiff"} are all functions of $\vec{u}$ and $\vec{v}$, so the derivatives in [\[eq:optDiff\]](#eq:optDiff){reference-type="eqref" reference="eq:optDiff"} can all be expressed in terms of the derivatives $\frac{du_j}{ds}$ and $\frac{dv_j}{ds}$ for $j=1,\ldots J$. As a result, we obtain a system of $2J+1$ differential equations for the variables $\{u_j,v_j\}, j=1\ldots J$ and $\lambda$.
We may rewrite the first equation in [\[eq:optDiff\]](#eq:optDiff){reference-type="eqref" reference="eq:optDiff"} as $(j=1\ldots J)$: $$\label{eq:expandEq}
\begin{aligned}
0 = &p A_j^q \frac{da_j}{ds} + pqa_jA_j^{q-1}\frac{dA_j}{ds} -\frac{\lambda} {M_j}\frac{dm_j}{ds} + \frac{\lambda}{ M_{j-1}}\frac{dm_{j-1}}{ds}\\
&+\frac{\lambda m_j} {M_j^2}\frac{dM_j}{ds} -\frac{\lambda m_{j-1}}{M_{j-1}^2}\frac{dM_{j-1}}{ds}
+\frac{m_j}{M_j}\frac{d\lambda}{ds} -\frac{m_{j-1}}{M_{j-1}}\frac{d\lambda}{ds}
\end{aligned}$$ The derivatives in [\[eq:expandEq\]](#eq:expandEq){reference-type="eqref" reference="eq:expandEq"} may be expressed in terms of $\left\{\frac{du_k}{ds}\right\}$ and $\left\{\frac{dv_k}{ds}\right\}$ using the formulas $(j=0\ldots J)$: $$\begin{aligned}
\frac{da_j}{ds} &= \frac{du_j}{ds};\\
\frac{dA_j}{ds} &= 2a_j\frac{du_j}{ds} + 2b_j\frac{dv_j}{ds};\\
\frac{dm_j}{ds} &= \frac{du_j}{ds} - \frac{du_{j+1}}{ds};\\
\frac{dM_j}{ds} &= M_j^{-1} \left( m_j \left(\frac{du_j}{ds} - \frac{du_{j+1}}{ds}\right) + n_j \left(\frac{dv_j}{ds} - \frac{dv_{j+1}}{ds}\right)\right);\\
\end{aligned}$$ It follows that [\[eq:expandEq\]](#eq:expandEq){reference-type="eqref" reference="eq:expandEq"} with index $j$ will depend on the derivatives $\frac{du_k}{ds}$, $\frac{dv_k}{ds}$ and $\frac{d\lambda}{ds}$ for $k=j-1,j,j+1$ (noting that $\frac{du_0}{ds} = \frac{dv_0}{ds} = \frac{du_{J+1}}{ds} = \frac{dv_{J+1}}{ds} = 0$). By collecting terms corresponding to each derivative, we find $(j=1\ldots J)$: $$\label{eq:expandEq1}
\begin{aligned}
0 = &\frac{du_j}{ds}\left(p A_j^q + pqa_jA_j^{q-1}(2a_j) -\frac{\lambda} {M_j} - \frac{\lambda}{ M_{j-1}}
+\frac{\lambda m_j^2} {M_j^3} +\frac{\lambda m_{j-1}^2}{M_{j-1}^3}\right)\\
&+\frac{du_{j-1}}{ds}\left( \frac{\lambda} {M_{j-1}} -\frac{\lambda m_{j-1}^2} {M_{j-1}^3}\right)+\frac{du_{j+1}}{ds}\left( \frac{\lambda} {M_j} -\frac{\lambda m_j^2} {M_j^3}\right)\\
&+\frac{dv_j}{ds}\left(pqa_jA_j^{q-1}(2b_j)
+\frac{\lambda m_jn_j} {M_j^3} +\frac{\lambda m_{j-1}n_{j-1}}{M_{j-1}^3}\right)\\
&+\frac{dv_{j-1}}{ds}\left(-\frac{\lambda m_{j-1}n_{j-1}} {M_{j-1}^3}\right)+\frac{dv_{j+1}}{ds}\left( -\frac{\lambda m_j n_j} {M_j^3}\right)\\
&+ \frac{d\lambda}{ds}\left(\frac{ m_{j}} {M_{j}} - \frac{m_{j-1}} {M_{j-1}}\right)\\
\end{aligned}$$ Using various algebraic manipulations and the fact that: $$1-\frac{m_j^2}{M_j^2} = \frac{n_j^2}{M_j^2},$$ the equations [\[eq:expandEq\]](#eq:expandEq){reference-type="eqref" reference="eq:expandEq"} may be simplified to: $$\label{eq:expandEq2}
\begin{aligned}
0 = &\frac{du_j}{ds}\left(p A_j^q\left(1 + \frac{2qa_j^2}{A_j}\right) - \frac{\lambda n_{j-1}^2}{ M_{j-1}^3} -\frac{\lambda n_j^2} {M_j^3} \right)\\
&+\frac{du_{j-1}}{ds}\left( \frac{\lambda n_{j-1}^2} {M_{j-1}^3} \right)+\frac{du_{j+1}}{ds}\left( \frac{\lambda n_j^2} {M_j^3} \right)\\
&+\frac{dv_j}{ds}\left(2pqa_jb_jA_j^{q-1}
+ \frac{\lambda m_{j-1}n_{j-1}}{M_{j-1}^3}
+\frac{\lambda m_jn_j} {M_j^3} \right)\\
&-\frac{dv_{j-1}}{ds}\left(\frac{\lambda m_{j-1}n_{j-1}} {M_{j-1}^3}\right)-\frac{dv_{j+1}}{ds}\left( \frac{\lambda m_j n_j} {M_j^3}\right)\\
&+ \frac{d\lambda}{ds}\left(\frac{ m_{j}} {M_{j}} - \frac{m_{j-1}} {M_{j-1}}\right), \quad j=1\ldots J.
\end{aligned}$$ The $J$ equations associated with the second equation in [\[eq:optDiff\]](#eq:optDiff){reference-type="eqref" reference="eq:optDiff"} may be obtained from [\[eq:expandEq2\]](#eq:expandEq2){reference-type="eqref" reference="eq:expandEq2"} by exchanging $u_k\leftrightarrow v_k$, $a_k\leftrightarrow b_k$, and $m_k\leftrightarrow n_k$.
Finally, [\[eq:gDeriv\]](#eq:gDeriv){reference-type="eqref" reference="eq:gDeriv"} can be expressed in terms of derivatives of $\{u_j,v_j\}$ as: $$\sum_{j=1}^J \left[\left(\frac{m_j}{M_j} - \frac{m_{j-1}}{M_{j-1}}\right)\frac{du_j}{ds} + \left(\frac{n_j}{M_j} - \frac{n_{j-1}}{M_{j-1}}\right)\frac{dv_j}{ds}\right]=-1.$$
The entire system of $2J+1$ equations can be represented in matrix form as: $$\label{eq:H}
H\frac{d\mathbf{u}}{ds} = \mathbf{z} ~\implies \frac{d\mathbf{u}}{ds} = H^{-1}\mathbf{z},$$ where $H$ is a $(2J+1)\times(2J+1)$ matrix, and $\mathbf{u},\mathbf{z}$ are column vectors of length $2J+1$, given by the formulas: $$\begin{aligned}
\mathbf{u} &:= \left[\vec{u}, \vec{v}, \lambda\right]^T;\\
\mathbf{z} &:= \left[0,\ldots,0,-1\right]^T
\end{aligned}$$ (i.e. $\mathbf{z}$ has a single nonzero entry of -1 in the last component).
The matrix $H$ can be decomposed into blocks: $$\label{eq:Hblock}
H =
\begin{bmatrix}
H_{11} & H_{12} & \vec{h_1}\\
H_{21} & H_{22} & \vec{h_2}\\
\vec{h}_1^T & \vec{h}_2^T & 0
\end{bmatrix},$$ The blocks $H_{ij}$ can be expressed in terms of diagonal and subdiagonal matrices. First we introduce some abbreviated notations. Let $\mathop{\mathrm{diag}}(c_j)$ denote the $J \times J$ diagonal matrix with entries $c_1,\ldots c_J$ on the diagonal. Let $D$ denote the discrete forward derivative matrix with $-1$'s on the diagonal and 1's on the first superdiagonal. Then we have: $$\label{eq:Hspec}
\begin{aligned}
H_{11} &= \mathop{\mathrm{diag}}\left( pA_j^q\left(1 + \frac{2qa_j^2}{A_j}\right) \right) + \lambda \mathop{\mathrm{diag}}\left( \frac{n_j^2}{M_j^3} \right) D + \lambda \mathop{\mathrm{diag}}\left( \frac{n_{j-1}^2}{M_{j-1}^3} \right)D^T\\
H_{12} &= \mathop{\mathrm{diag}}\left( 2pqa_jb_jA_j^{q-1}\right) - \lambda \mathop{\mathrm{diag}}\left( \frac{m_jn_j}{M_j^3} \right) D - \lambda \mathop{\mathrm{diag}}\left( \frac{m_{j-1}n_{j-1}}{M_{j-1}^3}\right)D^T \\
H_{21} &= H_{12}\\
H_{22} &= \mathop{\mathrm{diag}}\left( pA_j^q\left(1 + \frac{2qb_j^2}{A_j}\right) \right) + \lambda \mathop{\mathrm{diag}}\left( \frac{m_j^2}{M_j^3} \right) D + \lambda \mathop{\mathrm{diag}}\left( \frac{m_{j-1}^2}{M_{j-1}^3} \right)D^T\\
\vec{h}_1 &= \left(\frac{m_1}{M_1} - \frac{m_{0}}{M_{0}}~,~ \ldots~,~ \frac{m_J}{M_J} - \frac{m_{J-1}}{M_{J-1}} \right)^T \\
\vec{h}_2 &= \left(\frac{n_1}{M_1} - \frac{n_{0}}{M_{0}}~,~ \ldots ~,~\frac{n_J}{M_J} - \frac{n_{J-1}}{M_{J-1}} \right)^T.
\end{aligned}$$
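To make the construction concrete, the following minimal `numpy` sketch (not the authors' implementation; the function name, variable names, and the handling of the fixed endpoints are ours) assembles $H$ and $\mathbf{z}$ from eqs. [\[eq:H\]](#eq:H){reference-type="eqref" reference="eq:H"} and [\[eq:Hspec\]](#eq:Hspec){reference-type="eqref" reference="eq:Hspec"} and returns the homotopy direction $d\mathbf{u}/ds=H^{-1}\mathbf{z}$.

```python
import numpy as np

def homotopy_direction(u, v, x, y, start, end, lam, p=2):
    """Assemble H and z of eq. (H) from the blocks in eq. (Hspec) and return
    d(u, v, lambda)/ds = H^{-1} z.  u, v, x, y are length-J arrays; start and
    end are the fixed endpoints (u_0, v_0) and (u_{J+1}, v_{J+1})."""
    J = len(u)
    q = p / 2 - 1
    a, b = u - x, v - y
    A = a**2 + b**2
    ue = np.concatenate([[start[0]], u, [end[0]]])      # u_0, ..., u_{J+1}
    ve = np.concatenate([[start[1]], v, [end[1]]])
    m, n = ue[:-1] - ue[1:], ve[:-1] - ve[1:]           # m_0, ..., m_J
    M = np.sqrt(m**2 + n**2)
    mj, nj, Mj = m[1:], n[1:], M[1:]                    # index j   (j = 1..J)
    mp, np_, Mp = m[:-1], n[:-1], M[:-1]                # index j-1 (j = 1..J)

    D = -np.eye(J) + np.eye(J, k=1)                     # forward-difference matrix
    dg = np.diag
    H11 = dg(p * A**q * (1 + 2 * q * a**2 / A)) \
        + lam * dg(nj**2 / Mj**3) @ D + lam * dg(np_**2 / Mp**3) @ D.T
    H22 = dg(p * A**q * (1 + 2 * q * b**2 / A)) \
        + lam * dg(mj**2 / Mj**3) @ D + lam * dg(mp**2 / Mp**3) @ D.T
    H12 = dg(2 * p * q * a * b * A**(q - 1)) \
        - lam * dg(mj * nj / Mj**3) @ D - lam * dg(mp * np_ / Mp**3) @ D.T
    h1 = mj / Mj - mp / Mp                              # partial g / partial u_j
    h2 = nj / Mj - np_ / Mp                             # partial g / partial v_j

    H = np.block([[H11, H12, h1[:, None]],
                  [H12, H22, h2[:, None]],
                  [h1[None, :], h2[None, :], np.zeros((1, 1))]])
    z = np.zeros(2 * J + 1)
    z[-1] = -1.0
    return np.linalg.solve(H, z)
```

A single call gives the derivatives $(du_j/ds, dv_j/ds, d\lambda/ds)$ at the current state, which can then be fed to any standard ODE stepper.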
## Initial conditions for differential equation {#sec:initialConditions}
In Section [2.5](#sec:homotopy){reference-type="ref" reference="sec:homotopy"}, we suggested that by starting with a complete tour by the drone that visits all sensors and "nudging" the drone's path, we can find optimal solutions for shorter and shorter path lengths. Unfortunately, the matrix $H$ in [\[eq:H\]](#eq:H){reference-type="eqref" reference="eq:H"} is singular when $(u_j,v_j) = (x_j,y_j), j=1 \ldots J$, which corresponds exactly to the case of a complete tour. So we cannot use this solution as an initial condition. Fortunately we can address this problem by choosing a solution where the $(u_j,v_j)$ are a small offset from $(x_j,y_j)$ for all $j$, i.e. $$\label{eq:uvInit}
(u_j,v_j) = (x_j,y_j) + \vec{\epsilon}_j.$$ We need to choose $\vec{\epsilon}_j$ in such a way that the Lagrange multiplier conditions [\[eq:Lagrange\]](#eq:Lagrange){reference-type="eqref" reference="eq:Lagrange"} are satisfied. We also need to choose $\vec{\epsilon}_j$ such that the path joining the $(u_j,v_j)$ is smaller than the full tour.
The gradient of $f(\vec{u},\vec{v})$ with respect to $(u_j,v_j)$, evaluated at the initial point given by [\[eq:uvInit\]](#eq:uvInit){reference-type="eqref" reference="eq:uvInit"}, is given by:
$$\label{eq:initial_cond}
\begin{aligned}
\nabla{f(u_{j}, v_{j})} &= p\left((x_{j}-u_{j})^{2} + (y_{j}-v_{j})^{2}\right)^{(\frac{p}{2}-1)}\bigl[(u_{j}-x_{j}), (v_{j}-y_{j})\bigr]\\
&= p\left| \vec{\epsilon}_j \right|^{p-2}\vec{\epsilon}_j\\
&= p\left| \vec{\epsilon}_j \right|^{p-1}\hat{\epsilon}_j,
\end{aligned}$$ where $\hat{\epsilon}_j$ is the unit vector in the direction of $\vec{\epsilon}_j$. The Lagrange multiplier condition [\[eq:Lagrange\]](#eq:Lagrange){reference-type="eqref" reference="eq:Lagrange"} gives: $$\label{eq:eps_g}
\begin{aligned}
p\left| \vec{\epsilon}_j \right|^{p-1}\hat{\epsilon}_j = \lambda \nabla_j g(\vec{u},\vec{v}),
\end{aligned}$$ where $$\nabla_j g(\vec{u},\vec{v}) := \left( \frac{\partial g}{\partial u_j}, \frac{\partial g}{\partial v_j} \right).$$ Since $g(\vec{u},\vec{v})$ is smooth in the vicinity of $(\vec{u},\vec{v})\approx (\vec{x},\vec{y})$, we may approximate $$\label{eq:gj}
\nabla_j g(\vec{u},\vec{v}) \approx
\left. \nabla_j g(\vec{u},\vec{v}) \right|_{\vec{u} = \vec{x},\vec{v} = \vec{y}},$$ which may be evaluated using [\[eq:pdu\]](#eq:pdu){reference-type="eqref" reference="eq:pdu"} and [\[eq:pdv\]](#eq:pdv){reference-type="eqref" reference="eq:pdv"}. Denoting the right-hand side of [\[eq:gj\]](#eq:gj){reference-type="eqref" reference="eq:gj"} as $\vec{g}_j$, we have from [\[eq:eps_g\]](#eq:eps_g){reference-type="eqref" reference="eq:eps_g"} $$\label{eq:eps_g2}
\begin{aligned}
&p\left| \vec{\epsilon}_j \right|^{p-1}\hat{\epsilon}_j = \lambda \vec{g}_j\\
\implies & \hat{\epsilon}_j = \pm \hat{g}_j \text{ and } \left|\epsilon_{j}\right| = \left|\frac{\lambda}{p}\right|^{(\frac{1}{p-1})}|\vec{g}_{j}|^{(\frac{1}{p-1})}.
\end{aligned}$$ Since we are interested in solutions for which $g$ is reduced, we choose the negative sign in [\[eq:eps_g2\]](#eq:eps_g2){reference-type="eqref" reference="eq:eps_g2"}. Then $\vec{\epsilon}_j$ is uniquely determined by the values of $\vec{g}_j$ and $\lambda$. By choosing a small value of $\lambda$, we may obtain initial values for $(\vec{u},\vec{v})$ that are very close to $(\vec{x},\vec{y})$, so that the approximation in [\[eq:gj\]](#eq:gj){reference-type="eqref" reference="eq:gj"} holds to a very high degree of accuracy. In this way, we obtain initial conditions for our differential equation [\[eq:H\]](#eq:H){reference-type="eqref" reference="eq:H"} for which $H$ is not singular, and can thus apply standard numerical methods for solution.
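A possible implementation of this initialisation (again a sketch under our own naming conventions, with a small positive $\lambda$) evaluates $\nabla_j g$ at the cluster head positions and applies eq. [\[eq:eps_g2\]](#eq:eps_g2){reference-type="eqref" reference="eq:eps_g2"} with the negative sign:

```python
import numpy as np

def initial_points(x, y, start, end, lam=1e-3, p=2):
    """Initial (u_j, v_j) = (x_j, y_j) + eps_j of eq. (uvInit), with eps_j from
    eq. (eps_g2) using the negative sign (so the path shrinks).  Assumes that
    no cluster head lies exactly on the segment joining its neighbours, so
    that grad_j g is nonzero."""
    xe = np.concatenate([[start[0]], x, [end[0]]])
    ye = np.concatenate([[start[1]], y, [end[1]]])
    m, n = xe[:-1] - xe[1:], ye[:-1] - ye[1:]
    M = np.sqrt(m**2 + n**2)
    gu = m[1:] / M[1:] - m[:-1] / M[:-1]     # partial g / partial u_j at (x, y)
    gv = n[1:] / M[1:] - n[:-1] / M[:-1]
    gnorm = np.hypot(gu, gv)
    scale = (lam / p) ** (1 / (p - 1)) * gnorm ** (1 / (p - 1))
    return x - scale * gu / gnorm, y - scale * gv / gnorm
```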
## Global optimality
In Section [2.4](#sec:locOpt){reference-type="ref" reference="sec:locOpt"} we posed the local optimality condition [\[eq:Lagrange\]](#eq:Lagrange){reference-type="eqref" reference="eq:Lagrange"} from which the vector differential equation [\[eq:H\]](#eq:H){reference-type="eqref" reference="eq:H"} is derived. To show that a locally optimal solution is globally optimal, we would need to show that it improves over all other locally optimal solutions. We may establish this for the solutions described in Sections [2.5](#sec:homotopy){reference-type="ref" reference="sec:homotopy"}-[2.7](#sec:initialConditions){reference-type="ref" reference="sec:initialConditions"}.
Every local optimal solution for a given drone path length will be part of a homotopy of solutions that may be parametrized by path length. As path length increases in the homotopy, the energy consumption decreases until a minimum energy consumption is reached for the entire homotopy. This minimum energy is necessarily nonnegative--and if it is 0, then the homotopy must terminate at a tour that joins all of the cluster heads. The shortest cluster head tour (i.e. the traveling salesman solution) will be the unique optimum in the case when the drone path length is equal to this shortest tour. It follows that for sufficiently long drone path lengths, the solution that belongs to the homotopy that includes the traveling salesman solution will be the unique global optimum. This justifies our claim that the solution of [\[eq:fundOpt\]](#eq:fundOpt){reference-type="eqref" reference="eq:fundOpt"} calculated using the homotopy approach outlined in Sections [2.5](#sec:homotopy){reference-type="ref" reference="sec:homotopy"}-[2.7](#sec:initialConditions){reference-type="ref" reference="sec:initialConditions"} is optimal for sufficiently long drone path lengths.
## Implementation in python {#sec:implementation}
As described above, the algorithm has two phases. First, an initial ordering of cluster heads is determined, such that the length of a tour joining the cluster heads in this order is minimized among all possible orderings. Once the ordering is set, the homotopy of solutions is computed, using the initial conditions and system of differential equations described in Sections [2.7](#sec:initialConditions){reference-type="ref" reference="sec:initialConditions"} and [2.5](#sec:homotopy){reference-type="ref" reference="sec:homotopy"}. The final path with desired path length is selected from the homotopy.
For the first phase, the shortest tour joining the cluster heads is found using the travelling salesman algorithm as implemented in the `python-tsp` package. This gives a set of ordered cluster head points, that is, ($x_{j}$,$y_{j}$) re-arranged appropriately.
For the second phase, the homotopy solution is computed according to the following steps:
- Initialize cluster head locations, initial value of $\lambda$, and step size $h$ (typically on the order of 0.1).
- Find a minimal initial path joining the cluster head points using travelling salesman algorithm.
- Determine the initial points of closest approach to be a small distance away from the cluster head points, according to the algorithm described in Section [2.7](#sec:initialConditions){reference-type="ref" reference="sec:initialConditions"}.
- Solve the homotopy equations [\[eq:H\]](#eq:H){reference-type="eqref" reference="eq:H"} numerically. Initially we used the `odeint` solver from the `scipy` package, but encountered instabilities when the path vertices began to merge. We had better success using a Runge-Kutta 4 code obtained from the web and modified for our purpose. (A minimal fixed-step driver of this kind is sketched after this list.)
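As a rough illustration of the second phase, the following fixed-step Runge-Kutta 4 driver uses the `homotopy_direction` and `initial_points` sketches given earlier (both of which are our own hypothetical helpers, not the authors' code) and integrates until the path has been shortened by a prescribed amount.

```python
import numpy as np

def run_homotopy(x, y, start, end, target_defect, h=0.1, p=2, lam0=1e-3):
    """Fixed-step RK4 integration of d(u, v, lambda)/ds until the drone path has
    been shortened by approximately target_defect (recall dg/ds = -1)."""
    J = len(x)
    u0, v0 = initial_points(x, y, start, end, lam=lam0, p=p)
    state = np.concatenate([u0, v0, [lam0]])

    def rhs(w):
        return homotopy_direction(w[:J], w[J:2 * J], x, y, start, end, w[-1], p=p)

    s = 0.0
    while s < target_defect:
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * h * k1)
        k3 = rhs(state + 0.5 * h * k2)
        k4 = rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return state[:J], state[J:2 * J], state[-1]
```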
The Appendix gives a more detailed description of the code as well as a link to a Github page where the code can be downloaded.
## Specification of test cases {#sec:testCases}
The code was tested with the following configurations:
Case 1:
Cluster head positions: \[(2, 1), (2, 4), (6, 4), (6, 1)\]
Starting position of the drone: (0, 0)
Transmission power loss exponent: 2
Case 2:
Cluster head positions: \[(2, 1), (2, 4), (8, 2), (6, 4), (6, 1)\]
Starting position of the drone: (0, 0)
Transmission power loss exponent: 2
Case 3:
Cluster head positions: \[(2, 1), (2, 4), (8, 2), (6, 4), (6, 1)\]
Starting position of the drone: (3, 1)

Transmission power loss exponent: 2
Case 4:
Cluster head positions: \[(2, 1), (2, 4), (8, 2), (6, 4), (6, 1), (7, 3.5), (1, 2.5)\]
Starting position of the drone: (0, 0)
Transmission power loss exponent: 2
All cases used a step size of 0.1. The execution time was extremely rapid, and all scenarios were computed within seconds.
# Results
Figures [2](#fig:chart-1){reference-type="ref" reference="fig:chart-1"}-[4](#fig:chart-3){reference-type="ref" reference="fig:chart-3"} display results from simulations of the test cases described in Section [2.10](#sec:testCases){reference-type="ref" reference="sec:testCases"} using the python code described in Section [6](#sec:python){reference-type="ref" reference="sec:python"}. To simplify the discussion, we will define the *path defect* as the difference between the length of the minimum tour joining the cluster heads and the length of the drone's path.
Figure [2](#fig:chart-1){reference-type="ref" reference="fig:chart-1"} contains a $2\times 2$ grid of plots that shows several results from each of the above test scenarios. The charts correspond to the drone paths generated by the homotopy solution, starting from the traveling-salesman path that joins the cluster heads and shrinking down to a minimal straight-line path that joins the starting and ending points. We can see that initially the paths roughly preserve the original shape as they shrink. (In fact, it can be shown that as the path shrinks the path vertices always move in the direction of the angle bisectors of the path polygon.) After several iterations, path vertices merge: for example, the first figure in the first row starts with a path containing 5 segments, which eventually becomes 4 and then 3 segments as the path length shrinks. Similar changes occur in the other three figures.
The left-hand graph in Figure [3](#fig:chart-2){reference-type="ref" reference="fig:chart-2"} shows the path defect as a function of step number for each of the 4 test cases. The initial path length is equal to the tour length and hence the graphs start from 0. All graphs start out with the same upward slope, which is equal to 1/(step size), since $dg/ds = -1$ according to Eq.[\[eq:gDeriv\]](#eq:gDeriv){reference-type="eqref" reference="eq:gDeriv"}. However, when the drone's information-gathering points merge, the slope changes. This indicates that there may be numerical problems with the solution following point mergers, because eq. [\[eq:gDeriv\]](#eq:gDeriv){reference-type="eqref" reference="eq:gDeriv"} should guarantee a constant slope of 1/(step size). This phenomenon will be further investigated in the future (see Section [4](#sec:conclusions){reference-type="ref" reference="sec:conclusions"}).
The right-hand graph in Figure [3](#fig:chart-2){reference-type="ref" reference="fig:chart-2"} shows the total cluster head transmission energy as a function of step number. As expected, this increases as the path length decreases.
The tradeoff between drone path length and transmission loss is more clearly shown in Figure [4](#fig:chart-3){reference-type="ref" reference="fig:chart-3"}, which plots the cluster head energy consumption on the vertical axis versus cluster head tour length minus drone path length on the horizontal axis. In all cases, the $p$'th root of energy initially depends nearly linearly on path defect. The $p$'th root of energy is the $\ell^p$ norm of the vector $[|\vec{z}_1-\vec{w}_1|, \ldots, |\vec{z}_J-\vec{w}_J| ]$ consisting of distances between cluster heads and corresponding drone path vertices. In the case where all distances $|\vec{z}_j-\vec{w}_j|$ are approximately equal, the $\ell^p$ norm is nearly proportional to the mean distance between cluster head and corresponding drone vertex. The linear sections of the graphs in Figure [4](#fig:chart-3){reference-type="ref" reference="fig:chart-3"} indicate an initial linear dependence of the distance between cluster heads and drone vertices as the path defect increases. However, as the drone range continues to decrease, the dependence becomes convex, so that the energy grows at an increasing rate.
![Drone path homotopy solutions for 4 different test scenarios.](figures/1.png){#fig:chart-1 width="4.5in"}
![*(left)* Iteration number versus path defect and *(right)* iteration number versus total energy for the four test scenarios shown in Figure [2](#fig:chart-1){reference-type="ref" reference="fig:chart-1"}.](figures/2.png){#fig:chart-2 width="4.5in"}
![Path defect vs $\text{(total energy)}^{1/p}$](figures/3.png){#fig:chart-3 width="4.5in"}
# Conclusions {#sec:conclusions}
The solution described above for finding optimal drone information-harvesting paths is locally optimal, and globally optimal for sufficiently long drone ranges. The numerical solution algorithm executes quickly enough that it can be implemented in real time. The solution can be used to optimize either the total transmission energy or the maximum transmission energy of the cluster heads. However, there are limitations in that the speed of execution slows considerably when the drone range decreases below a certain point (i.e. when path vertices begin to merge), and the characteristics of the numerical solution indicate that the solution may no longer be optimal.
# Future work {#sec:future}
The following future work is proposed in order to improve the solution.
- Further investigations may be made into the algorithm's behavior after drone vertex mergers. One possibility is to modify the algorithm so that when path vertices merge, the two associated cluster heads may be replaced by a single virtual cluster head that produces the same power loss. Then the homotopy solution can be continued with vectors $\vec{u},\vec{v}$ that each have one less component.
- The algorithm can be modified to accommodate moving cluster heads, by making $\vec{x}$ and $\vec{y}$ in [\[eq:def_f\]](#eq:def_f){reference-type="eqref" reference="eq:def_f"} to be functions of $\vec{u}$ and $\vec{v}$.
- Further explorations of the question of global optimality may be pursued, for example by comparing different homotopies associated with different orderings of the cluster heads.
- The numerical homotopy solution code can be improved in a number of ways to decrease execution times. In particular, existing solver packages and/or variable step-size solution algorithms may be explored.
# Appendix: Python implementation {#sec:python}
## Link to code
The python code used to generate the figures shown in Section [\[sec:results\]](#sec:results){reference-type="ref" reference="sec:results"} is available at: <https://github.com/ganap-ram/drone>.
## Implemented equations
This section describes the differential equations as they are implemented in the Python code. Although the implementation is mathematically equivalent to the description in Section [2.5](#sec:homotopy){reference-type="ref" reference="sec:homotopy"}, the notation used is somewhat different.
For notational and coding simplicity we define some intermediate variables: $$\begin{aligned}
{u_j}, {v_j} &= j^{th} \textrm{ turning point of drone, }j = {1..J}\\
{a_j} &= {u_j - x_j}\\
{b_j} &= {v_j - y_j}\\
A_j &= {a_j}^2 + {b_j}^2 \\
{m_j} &= u_{j} - u_{j+1}\\
{n_j} &= v_{j} - v_{j+1}\\
M_j &= {m_j}^2 + {n_j}^2 \\
H_j &= {M_j}^{3/2} \\
q &= \frac{p}{2} - 1\\
r &= 1/2\\
\end{aligned}$$
The following are the various coefficient values of the derivatives in the matrix: $$\begin{aligned}
s_{j1} &= \lambda\frac{M_{j-1} - m_{j-1}^{2}}{H_{j-1}}\\
s_{j2} &= -\lambda\frac{m_{j-1}n_{j-1}}{H_{j-1}}\\
s_{j3} &= 2pqa_{j}^{2}A_{j}^{q-1} + pA_{j}^{q} - \lambda\frac{H_{j}(M_{j-1} - m_{j-1}^{2}) + H_{j-1}(M_{j} - m_{j}^{2})}{H_{j-1}H_{j}}\\
s_{j4} &= 2pqa_{j}b_{j}A_{j}^{q-1} + \lambda\frac{m_{j-1}n_{j-1}H_{j} + m_{j}n_{j}H_{j-1}}{H_{j-1}H_{j}}\\
s_{j5} &= \lambda\frac{M_{j} - m_{j}^{2}}{H_{j}}\\
s_{j6} &= -\lambda\frac{m_{j}n_{j}}{H_{j}}\\
w_{j} &= \frac{-m_{j-1}\sqrt{M_{j}} + m_{j}\sqrt{M_{j-1}}}{\sqrt{M_{j-1}M_{j}}}
\end{aligned}$$
$$\begin{aligned}
t_{j1} &= -\lambda\frac{m_{j-1}n_{j-1}}{H_{j-1}}\\
t_{j2} &= \lambda\frac{M_{j-1} - n_{j-1}^{2}}{H_{j-1}}\\
t_{j3} &= 2pqa_{j}b_{j}A_{j}^{q-1} + \lambda\frac{m_{j-1}n_{j-1}H_{j} + m_{j}n_{j}H_{j-1}}{H_{j-1}H_{j}}\\
t_{j4} &= 2pqb_{j}^{2}A_{j}^{q-1} + pA_{j}^{q} - \lambda\frac{H_{j}(M_{j-1} - n_{j-1}^{2}) + H_{j-1}(M_{j} - n_{j}^{2})}{H_{j-1}H_{j}}\\
t_{j5} &= -\lambda\frac{m_{j}n_{j}}{H_{j}}\\
t_{j6} &= \lambda\frac{M_{j} - n_{j}^{2}}{H_{j}}\\
z_{j} &= \frac{-n_{j-1}\sqrt{M_{j}} + n_{j}\sqrt{M_{j-1}}}{\sqrt{M_{j-1}M_{j}}}
\end{aligned}$$
And the differential equations have the form ($j=1\ldots J$): $$\label{eq:DEs}
\begin{aligned}
& s_{j1}\frac{du_{j-1}}{ds} + s_{j2}\frac{dv_{j-1}}{ds} + s_{j3}\frac{du_{j}}{ds} + s_{j4}\frac{dv_{j}}{ds} + s_{j5}\frac{du_{j+1}}{ds} + s_{j6}\frac{dv_{j+1}}{ds} - w_{j}\frac{d\lambda}{ds} &= 0\\
& t_{j1}\frac{du_{j-1}}{ds} + t_{j2}\frac{dv_{j-1}}{ds} + t_{j3}\frac{du_{j}}{ds} + t_{j4}\frac{dv_{j}}{ds} + t_{j5}\frac{du_{j+1}}{ds} + t_{j6}\frac{dv_{j+1}}{ds} - z_{j}\frac{d\lambda}{ds} &= 0\\
\frac{dg}{ds} &= -1 \implies \sum_{j=1}^J \left(\frac{\partial g}{\partial u_j}\frac{du_j}{ds} + \frac{\partial g}{\partial v_j}\frac{dv_j}{ds}\right) &= -1.
\end{aligned}$$
This is represented in the matrix notation as: $$\begin{aligned}
K D = C,
\end{aligned}$$ where: $$\begin{aligned}
K &=
\begin{bmatrix}
\begin{matrix}
s_{13} & s_{14} & s_{15} & s_{16} & 0 & 0 & 0 & 0 & \cdots & 0 & 0 & -w_{0}\\
t_{13} & t_{14} & t_{15} & t_{16} & 0 & 0 & 0 & 0 & \cdots & \cdots & 0 & -z_{0}\\
s_{21} & s_{22} & s_{23} & s_{24} & s_{25} & s_{26} & 0 & 0 & \cdots & \cdots & 0 & -w_{1}\\
t_{21} & t_{22} & t_{23} & t_{24} & t_{25} & t_{26} & 0 & 0 & \cdots & \cdots & 0 & -z_{1}\\
0 & 0 & s_{31} & s_{32} & s_{33} & s_{34} & s_{35} & s_{36} & \cdots & \cdots & \cdots & \vdots\\
0 & 0 & t_{31} & t_{32} & t_{33} & t_{34} & t_{35} & t_{36} & \cdots & \cdots & \cdots & \vdots\\
0 & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots &\vdots & \vdots\\
0 & 0 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots &0 & \vdots\\
0 & 0 &\cdots & \cdots & \cdots & 0 & 0 & s_{J1} & s_{J2} & s_{J3} & s_{J4} & -w_{J}\\
0 & 0 & 0 & \cdots & \cdots & 0 & 0 & t_{J1} & t_{J2} & t_{J3} & t_{J4}& -z_{J}\\
w_{0} & z_{0} & w_{1} & z_{1} & \cdots & \cdots & \cdots & \cdots & \cdots & w_{J} & z_{J}& 0\\
\end{matrix}
\end{bmatrix}\\\end{aligned}$$ $$\begin{aligned}
D &=
\begin{bmatrix}
{du}_{1}/{ds} \\
{dv}_{1}/{ds} \\
{du}_{2}/{ds} \\
{dv}_{2}/{ds} \\
{du}_{3}/{ds} \\
{dv}_{3}/{ds} \\
\vdots \\
\vdots \\
\vdots \\
{du}_{J}/{ds} \\
{dv}_{J}/{ds} \\
{d\lambda}/{ds} \\
\end{bmatrix}\\\end{aligned}$$ And $$\begin{aligned}
C &=
\begin{bmatrix}
0\\
0\\
0\\
0\\
0\\
0\\
\vdots \\
\vdots \\
\vdots \\
0\\
0\\
-1\\
\end{bmatrix}\end{aligned}$$
[^1]: Corresponding author: thron\@tamuct.edu
| arxiv_math | {
"id": "2309.01604",
"title": "Optimal Real Time Drone Path Planning for Harvesting Information from a\n Wireless Sensor Network",
"authors": "Ramkumar Ganapathy, Christopher Thron",
"categories": "math.OC",
"license": "http://creativecommons.org/licenses/by-nc-sa/4.0/"
} |
---
abstract: |
A *vertex ordering* of a graph $G$ is a bijection $\pi\colon\{1,\dots,|V(G)|\}\to V(G)$. It is *successive* if the induced subgraph $G[v_{\pi(1)},\dots,v_{\pi(k)}]$ is connected for each $k$. Fang et al. \[*J. Comb. Theory* **A199** (2023), 105776\] gave formulas for counting the number of successive vertex orderings for a class of graphs they called *fully regular*, and conjectured that these formulas could be written as certain products involving differences or ratios of binomial coefficients in two cases: When the graph is the line graph $L(K_n^{(3)})$ of the complete $3$-uniform hypergraph, or when it is the line graph $L(K_{m,n}^{(1,2)})$ of a complete "bipartite" $3$-uniform hypergraph. In this paper, we confirm both of these conjectures.
address: College of Design and Engineering, National University of Singapore
author:
- Boon Suan Ho
title: |
Two product formulas for counting\
successive vertex orderings
---
A *vertex ordering* of a graph $G$ is a bijection $\pi\colon\{1,\dots,|V(G)|\}\to V(G)$. It is *successive* if the induced subgraph $G[v_{\pi(1)},\dots,v_{\pi(k)}]$ is connected for each $k$. Fang et al. [@fang] defined the class of *fully regular graphs*, and proved that for such graphs, the probability of a random vertex ordering being successive is given by a sum of products involving certain invariants of the graph. In two special cases, they conjectured that this sum admits a nice product expression. By manipulating certain generalized hypergeometric identities, we prove those conjectures in this paper.
**Theorem 1** (Fang et al. [@fang], Conjecture 5.2). *We have, for $n\ge3$, $$\sum_{i=0}^{\lfloor n/3\rfloor-1}\prod_{j=1}^i\frac{-\binom{n-3j}{3}}{\binom{n}{3}-\binom{n-3j}{3}}
=\Bigl\lfloor\frac{n}{3}\Bigr\rfloor\prod_{\substack{j=n+1\\3\nmid j}}^{\lfloor3n/2\rfloor-2}c_j\Bigg/\prod_{\substack{j=3\\3\mid j}}^{n-3}c_j,$$ where $c_j=\frac{6}{j}(\binom{n}{3}-\binom{n-j}{3})=j^2 + (3-3n)j + 3n^2-6n+2$.*
By Theorem 1.4 of [@fang], the expression on the left is equal to the probability that a vertex ordering of $L(K_n^{(3)})$ is successive; equivalently, the expression multiplied by $|V|!$ counts the number of successive vertex orderings of $L(K_n^{(3)})$. (We write $L(G)$ for the *line graph* of $G$, which is a graph whose vertices are the edges of $G$, where two edges are considered adjacent if they share a common vertex.)
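Before turning to the proof, both sides of the conjectured identity can be compared for small $n$ with exact rational arithmetic. The following short Python check (ours, not part of the proof, using only the standard library) confirms [Theorem 1](#theorem:1){reference-type="ref" reference="theorem:1"} for $3\le n<30$.

```python
from fractions import Fraction
from math import comb

def lhs1(n):
    """Left-hand side of Theorem 1: sum over i of the partial products."""
    total, prod = Fraction(0), Fraction(1)
    for i in range(n // 3):
        if i > 0:
            prod *= Fraction(-comb(n - 3 * i, 3),
                             comb(n, 3) - comb(n - 3 * i, 3))
        total += prod
    return total

def rhs1(n):
    """Right-hand side: floor(n/3) times the product of c_j over n < j <= 3n/2 - 2
    with 3 not dividing j, divided by the product of c_j over multiples of 3 up to n - 3."""
    c = lambda j: j * j + (3 - 3 * n) * j + 3 * n * n - 6 * n + 2
    out = Fraction(n // 3)
    for j in range(n + 1, 3 * n // 2 - 1):
        if j % 3 != 0:
            out *= c(j)
    for j in range(3, n - 2, 3):
        out /= c(j)
    return out

assert all(lhs1(n) == rhs1(n) for n in range(3, 30))
```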
**Theorem 2** (Fang et al. [@fang], Conjecture 5.3). *We have, for $m\ge1$ and $n\ge2$, $$\sum_{i=0}^{\min\{m,\lfloor n/2\rfloor\}}\prod_{j=1}^i\frac{-(m-j)\binom{n-2j}{2}}{m\binom{n}{2}-(m-j)\binom{n-2j}{2}}
=m\prod_{j=1}^{m-1}\frac{mn-\binom{m+1}{2}+\binom{j}{2}}{d_j},$$ where $d_j=\frac{1}{j}(m\binom{n}{2}-(m-j)\binom{n-2j}{2})$, and where any numerators or denominators on the right that are equal to zero are ignored.*
Similarly, by Theorem 1.4 of [@fang], the expression on the left gives the probability of getting a successive vertex ordering of $L(K_{m,n}^{(1,2)})$, where $K_{m,n}^{(1,2)}$ is the $3$-uniform hypergraph on vertex set $V=V_1\cup V_2$ with $|V_1|=m$ and $|V_2|=n$, whose edges are all of the subsets $e\subseteq V$ such that $|e\cap V_1|=1$ and $|e\cap V_2|=2$.
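An analogous exact-arithmetic check (again ours, not part of the proof) confirms the identity of Theorem 2 for small $m$ and $n$. Here $\binom{k}{2}$ is interpreted as the polynomial $k(k-1)/2$, so that the factors $d_j$ with $2j>n$ are well defined, and zero numerators or denominators on the right are skipped, as in the statement.

```python
from fractions import Fraction

def binom2(k):
    """Polynomial binomial coefficient C(k, 2) = k(k-1)/2, valid for any integer k."""
    return k * (k - 1) // 2

def lhs2(m, n):
    total, prod = Fraction(0), Fraction(1)
    for i in range(min(m, n // 2) + 1):
        if i > 0:
            prod *= Fraction(-(m - i) * binom2(n - 2 * i),
                             m * binom2(n) - (m - i) * binom2(n - 2 * i))
        total += prod
    return total

def rhs2(m, n):
    out = Fraction(m)
    for j in range(1, m):
        num = m * n - binom2(m + 1) + binom2(j)
        den = Fraction(m * binom2(n) - (m - j) * binom2(n - 2 * j), j)  # d_j
        if num != 0:
            out *= num
        if den != 0:
            out /= den
    return out

assert all(lhs2(m, n) == rhs2(m, n) for m in range(1, 12) for n in range(2, 12))
```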
Recall that the *generalized hypergeometric function* is defined by $$%
{}_{p}F_{q}{\left(\genfrac..{0pt}{}{a_1,\dots,a_p}{b_1,\dots,b_q};z\right)}
\coloneqq\sum_{k=0}^\infty\frac{(a_1)_k\dots(a_p)_k}{(b_1)_k\dots(b_q)_k}\frac{z^k}{k!},$$ where $(a)_k\coloneqq a(a+1)\dots(a+k-1)$ denotes the *rising factorial* (with $(a)_0\coloneqq1$). Notice that if one of the upper parameters is a nonpositive integer $-a$, then this sum has only $a+1$ terms, since the rest of the terms vanish (we call such hypergeometric sums *terminating*). In particular, if $b>a\ge0$ are integers and we have a ${}_3F_2(1)$ sum where one of the upper parameters is $-a$ and one of the lower parameters is $-b$, this sum is well-defined. We will only be interested in the terminating ${}_3F_2(1)$ case.
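Terminating ${}_3F_2(1)$ sums of this kind are finite sums, so identities between them can be spot-checked directly. The following small Python sketch (ours, purely illustrative) evaluates such sums with exact arithmetic and verifies, for one choice of rational parameters, the transformation of Sheppard quoted in the next section.

```python
from fractions import Fraction

def rising(a, k):
    """Rising factorial (a)_k = a (a+1) ... (a+k-1)."""
    out = Fraction(1)
    for i in range(k):
        out *= a + i
    return out

def f32(a1, a2, a3, b1, b2):
    """Terminating 3F2(a1, a2, a3; b1, b2; 1), where a1 is a nonpositive integer."""
    N = -int(a1)
    return sum(rising(a1, k) * rising(a2, k) * rising(a3, k)
               / (rising(b1, k) * rising(b2, k) * rising(Fraction(1), k))
               for k in range(N + 1))

# Spot check of Sheppard's transformation for (N, a, b, d, e) = (3, 1/2, 1/3, 5, 7):
N, a, b, d, e = 3, Fraction(1, 2), Fraction(1, 3), Fraction(5), Fraction(7)
left = f32(Fraction(-N), a, b, d, e)
right = (rising(d - a, N) * rising(e - a, N) / (rising(d, N) * rising(e, N))
         * f32(Fraction(-N), a, a + b - N - d - e + 1, a - N - d + 1, a - N - e + 1))
assert left == right
```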
# Proof of Theorem 1 {#sec:theorem1}
Let $X_n\coloneqq\sqrt{1+6n-3n^2}$. Then $$\begin{cases}
-\binom{n-3j}{3}&=-\frac{1}{6}(n-3j)(n-1-3j)(n-2-3j),\\
\binom{n}{3}-\binom{n-3j}{3}&=\frac{9}{2}j\bigl(j-\frac{1}{6}(3n-3+X_n)\bigr)\bigl(j-\frac{1}{6}(3n-3-X_n)\bigr);
\end{cases}$$ and so we have $$\begin{aligned}
&\sum_{i=0}^{\lfloor n/3\rfloor-1}\prod_{j=1}^i\frac{-\binom{n-3j}{3}}{\binom{n}{3}-\binom{n-3j}{3}} \\
&\quad=\sum_{i=0}^{\lfloor n/3\rfloor-1}\prod_{j=1}^i
\frac{-\frac{1}{6}(n-3j)(n-1-3j)(n-2-3j)}
{\frac{9}{2}j\bigl(j-\frac{1}{6}(3n-3+X_n)\bigr)\bigl(j-\frac{1}{6}(3n-3-X_n)\bigr)}\\
&\quad=\sum_{i=0}^{\lfloor n/3\rfloor-1}\prod_{j=1}^i
\frac{(-\frac{n}{3}+j)(\frac{1}{3}-\frac{n}{3}+j)(\frac{2}{3}-\frac{n}{3}+j)}
{j\bigl(j-\frac{1}{6}(3n-3+X_n)\bigr)\bigl(j-\frac{1}{6}(3n-3-X_n)\bigr)}\\
&\quad=%
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{1-\frac{n}{3}, \frac{4}{3}-\frac{n}{3}, \frac{5}{3}-\frac{n}{3}}{\frac{1}{6}(9-3n+X_n), \frac{1}{6}(9-3n-X_n)};1\right)}
\eqqcolon F(1).\end{aligned}$$ Sheppard (Equation (18) in [@sheppard]; see Corollary 3.3.4 in [@specialfunctions] for a textbook treatment, as well as [Theorem A1](#appendixtheorem:A1) in [Appendix A](#sec:appendix) for a self-contained proof) proved the identity $$%
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{-N,a,b}{d,e};1\right)}
=\frac{(d-a)_N(e-a)_N}{(d)_N(e)_N}
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{-N,a,a+b-N-d-e+1}{a-N-d+1,a-N-e+1};1\right)}
,$$ whenever $N\ge0$ is a nonnegative integer and $a,b,d,e\in\mathbf C$ are such that both sides converge. In our sum $F(1)$, one of the upper parameters is a nonpositive integer, though exactly which parameter that is depends on the value of $n\bmod 3$. *In the rest of this section, we will assume that $n\bmod3=0$*, as the other two cases are proven analogously (see [Appendix A](#sec:appendix) for those cases). Set $N=n/3-1$, $a=4/3-n/3$, $b=5/3-n/3$, $d=\frac{1}{6}(9-3n+X_n)$, and $e=\frac{1}{6}(9-3n-X_n)$ in Sheppard's identity to get $$\begin{aligned}
F(1)&=\frac{\bigl(\frac{1}{6}(1-n+X_n)\bigr)_{n/3-1}
\bigl(\frac{1}{6}(1-n-X_n)\bigr)_{n/3-1}}
{\bigl(\frac{1}{6}(9-3n+X_n)\bigr)_{n/3-1}
\bigl(\frac{1}{6}(9-3n-X_n)\bigr)_{n/3-1}}\\
&\qquad\times%
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{1-\frac{n}{3},\frac{4}{3}-\frac{n}{3},2}{\frac{1}{6}(11-n+X_n),\frac{1}{6}(11-n-X_n)};1\right)}
.\end{aligned}$$ The denominators and lower parameters are never nonpositive integers, since $X_n$ contributes a nonzero imaginary part, so we may proceed without worrying about division by zero. Now $(a)_k(b)_k=\prod_{j=0}^{k-1}(a+j)(b+j)$, so we may simplify $$\begin{aligned}
&\Bigl(\frac{1}{6}\bigl(1-n+X_n\bigr)\Bigr)_{n/3-1}
\Bigl(\frac{1}{6}\bigl(1-n-X_n\bigr)\Bigr)_{n/3-1} \\
&\quad=\prod_{j=0}^{n/3-2}\Bigl(j^2-\frac{n-1}{3}j+\frac{n^2-2n}{9}\Bigr)\end{aligned}$$ and $$\begin{aligned}
&\Bigl(\frac{1}{6}\bigl(9-3n+X_n\bigr)\Bigr)_{n/3-1}
\Bigl(\frac{1}{6}\bigl(9-3n-X_n\bigr)\Bigr)_{n/3-1} \\
&\quad=\prod_{j=0}^{n/3-2}\Bigl(j^2+(3-n)j+\frac{3n^2-15n+20}{9}\Bigr).\end{aligned}$$ As for the ${}_3F_2(1)$ sum on the right-hand side of Sheppard's identity, we can use Gosper's algorithm to prove that $$%
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{1-\frac{n}{3},\frac{4}{3}-\frac{n}{3},2}{\frac{1}{6}(11-n+X_n),\frac{1}{6}(11-n-X_n)};1\right)}
=\frac{n^2-4n+6}{3n-6}.$$ (See [Section 3](#sec:gosper) for details.) Putting it all together, our task is now to prove that $$\frac{\prod_{j=0}^{n/3-2}(9j^2-3(n-1)j+n^2-2n)}{\prod_{j=0}^{n/3-2}(9j^2+9(3-n)j+3n^2-15n+20)}
\frac{n^2-4n+6}{3n-6}
=\frac{n/3}{\prod_{j=1}^{n/3-1}c_{3j}}
\frac{\prod_{j=n+1}^{\lfloor3n/2\rfloor-2}c_j}
{\prod_{j=n/3+1}^{\lfloor n/2\rfloor-1}c_{3j}}.$$ Since $c_{3j+3}=9j^2+9(3-n)j+3n^2-15n+20$, the bottom left product is equal to the product $\prod_{j=1}^{n/3-1}c_{3j}$ on the bottom right. Thus it suffices for us to prove that $$\begin{aligned}
\prod_{j=0}^{n/3-2}\bigl(9j^2-3(n-1)j+n^2-2n\bigr)
\prod_{j=n/3+1}^{\lfloor n/2\rfloor-1}c_{3j}
=\frac{n(n-2)}{n^2-4n+6}\prod_{j=n+1}^{\lfloor3n/2\rfloor-2}c_j.\end{aligned}$$ Denote by $b_j\coloneqq9j^2-3(n-1)j+n^2-2n$ the $j$-th factor of the product on the left. Since $b_0/c_{n+1}=n(n-2)/(n^2-4n+6)$, we are left to prove that $$\prod_{j=1}^{\lceil n/6\rceil-1}b_j\prod_{j=\lceil n/6\rceil}^{n/3-2}b_j\prod_{j=n/3+1}^{\lfloor n/2\rfloor-1}c_{3j}
=\prod_{j=n+2}^{\lfloor3n/2\rfloor-2}c_j.$$ Since $c_k=c_{3n-3-k}$, the result follows by using the fact that $b_j=c_{n-1+3j}$ in the first product and $b_j=c_{2n-2-3j}$ in the second product. The idea is that the product on the right splits into the three products on the left, where each product on the left comprises only terms with the same index value modulo $3$.
# Proof of Theorem 2 {#sec:theorem2}
The main ideas in this proof are very similar to those in the proof of [Theorem 1](#theorem:1), though the details get rather messy near the end. *In this section, we will assume that $n$ is even*; the reader is referred to [Appendix A](#sec:appendix) for the (similar) odd case. Let $Y_{m,n}\coloneqq\sqrt{(2m+1)^2-8mn}$. Since $-(m-j)\binom{n-2j}{2}=2(j-\frac{n}{2})(j+\frac{1}{2}-\frac{n}{2})(j-m)$ and $m\binom{n}{2}-(m-j)\binom{n-2j}{2}
=2j\bigl(j+\frac{1}{4}(1-2m-2n+Y_{m,n})\bigr)\bigl(j+\frac{1}{4}(1-2m-2n-Y_{m,n})\bigr)$, we have $$\begin{aligned}
&\sum_{i=0}^{\min\{m,\lfloor n/2\rfloor\}}\prod_{j=1}^i\frac{-(m-j)\binom{n-2j}{2}}{m\binom{n}{2}-(m-j)\binom{n-2j}{2}} \\
&\quad=%
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{1-\frac{n}{2},\frac{3}{2}-\frac{n}{2},1-m}{\frac{1}{4}(5-2m-2n+Y_{m,n}), \frac{1}{4}(5-2m-2n-Y_{m,n})};1\right)}
\\
&\quad=\frac{\bigl(\frac{1}{4}(-1-2m+Y_{m,n})\bigr)_{n/2-1}
\bigl(\frac{1}{4}(-1-2m-Y_{m,n})\bigr)_{n/2-1}}
{\bigl(\frac{1}{4}(5-2m-2n+Y_{m,n})\bigr)_{n/2-1}
\bigl(\frac{1}{4}(5-2m-2n-Y_{m,n})\bigr)_{n/2-1}}\\
&\qquad\times%
{}_{3}F_{2}{\left(\genfrac..{0pt}{}{1-\frac{n}{2},\frac{3}{2}-\frac{n}{2},2}{\frac{1}{4}(9+2m-2n+Y_{m,n}), \frac{1}{4}(9+2m-2n-Y_{m,n})};1\right)}
,\end{aligned}$$ where we have applied Sheppard's identity with $N=n/2-1$, $a=3/2-n/2$, $b=1-m$, $d=\frac{1}{4}(5-2m-2n+Y_{m,n})$, and $e=\frac{1}{4}(5-2m-2n-Y_{m,n})$. Now the lower parameters of the resulting ${}_3F_2(1)$ sum are never nonpositive integers, and while the numerators and denominators of the fractions involved can have factors that are negative integers, they never include a zero factor (see [Lemmas A2](#lemma:A2) and [A3](#lemma:A3) in [Appendix A](#sec:appendix) for proofs of these facts). Thus we need not worry about division by zero in this case either.
The expression above simplifies to $$\frac{\prod_{j=0}^{n/2-2}\bigl(j^2-\frac{2m+1}{2}j+\frac{mn}{2}\bigr)}
{\prod_{j=0}^{n/2-2}\bigl(j^2+\frac{5-2m-2n}{2}j+\frac{6-6m-5n+4mn+n^2}{4}\bigr)}
\frac{6+4m-5n+n^2}{4m}.$$ (Once again, see [Section 3](#sec:gosper) for details on evaluating this ${}_3F_2(1)$ sum using Gosper's algorithm.) We must prove that this expression is equal to $$m\prod_{j=1}^{m-1}\frac{mn-\binom{m+1}{2}+\binom{j}{2}}{d_j},$$ where $d_j\coloneqq\frac{1}{j}(m\binom{n}{2}-(m-j)\binom{n-2j}{2})$. Since $d_{j+1}=2j^2+(5-2m-2n)j+\frac{1}{2}(6-6m-5n+4mn+n^2)$, our task is to prove that $$\label{eq:sus}
\frac{\prod_{j=0}^{n/2-2}p_j}{\prod_{j=0}^{n/2-2}d_{j+1}}
=\frac{4m^2}{6+4m-5n+n^2}\frac{\prod_{j=1}^{m-1}q_j}{\prod_{j=1}^{m-1}d_j},$$ where we have defined $p_j\coloneqq 2j^2-(2m+1)j+mn$ and $q_j\coloneqq mn-\binom{m+1}{2}+\binom{j}{2}$. *The products in this section are to be interpreted as ignoring zero factors.* (Formally, we could define such a product by viewing $\prod_j a_j$ as shorthand for $\prod_j (a_j+[a_j=0])$, where $[P]$ denotes the *Iverson bracket*, which is equal to $1$ if the proposition $P$ is true, and is equal to $0$ if $P$ is false.) Note that in [\[eq:sus\]](#eq:sus){reference-type="eqref" reference="eq:sus"}, only the two products on the right may contain zero factors, as mentioned earlier in our discussion concerning division by zero.
Since we have the identities $$\label{eq:refl}
q_j=q_{1-j},$$ $$p_j=q_{m-2j+1}=q_{2j-m},$$ $$d_j=p_{m+n/2-j}=q_{m+n-2j},$$ and $$\frac{4m^2}{6+4m-5n+n^2}
=\frac{q_mq_{m+1}}{q_{m-n+1}q_{m-n+3}},$$ which are easily checked by direct calculation, we are left to prove the following: $$\label{eq:thm2identity}
\prod_{j=0}^{n/2}q_{m-2j+1}
\prod_{j=1}^{m-1}q_{m+n-2j}
=\prod_{j=1}^{m+1}q_j
\prod_{j=1}^{n/2-1}q_{m+n-2j}.$$ To this end, we consider three cases, depending on the relative ordering of $n/2$, $m$, and $n$.
**Case 1** ($n/2<m$). We may divide both sides of [\[eq:thm2identity\]](#eq:thm2identity){reference-type="eqref" reference="eq:thm2identity"} by $\prod_{j=1}^{n/2-1}q_{m+n-2j}$, so our task is to prove that $$\label{eq:thm2case1}
\prod_{j=0}^{n/2}q_{m-2j+1}
\prod_{j=n/2}^{m-1}q_{m+n-2j}
=\prod_{j=1}^{m+1}q_j.$$
**Case 1A** ($n\le m$). We simplify the second product on the left of [\[eq:thm2case1\]](#eq:thm2case1){reference-type="eqref" reference="eq:thm2case1"}, using [\[eq:refl\]](#eq:refl){reference-type="eqref" reference="eq:refl"}: $$\begin{aligned}
\prod_{j=n/2}^{m-1}q_{m+n-2j}
&=\prod_{j=n/2}^{n/2+\lceil m/2\rceil-1}q_{m+n-2j}
\prod_{j=n/2+\lceil m/2\rceil}^{m-1}q_{m+n-2j}\\
&=\prod_{j=0}^{\lceil m/2\rceil-1}q_{m-2j}
\prod_{j=n/2+\lceil m/2\rceil}^{m-1}q_{1-m-n+2j}\\
&=\prod_{j=0}^{\lfloor m/2\rfloor-n/2-1}q_{m-n-1-2j}
\prod_{j=0}^{\lceil m/2\rceil-1}q_{m-2j}.\end{aligned}$$ The left-hand side of [\[eq:thm2case1\]](#eq:thm2case1){reference-type="eqref" reference="eq:thm2case1"} then becomes $$\begin{aligned}
&\prod_{j=0}^{n/2}q_{m-2j+1}
\prod_{j=0}^{\lfloor m/2\rfloor-n/2-1}q_{m-n-1-2j}\prod_{j=0}^{\lceil m/2\rceil-1}q_{m-2j}\\
&\quad=\prod_{j=0}^{\lfloor m/2\rfloor}q_{m-2j+1}\prod_{j=0}^{\lceil m/2\rceil-1}q_{m-2j}\\
&\quad=\prod_{j=1}^{m+1}q_j,\end{aligned}$$ which completes the proof of case 1A.
**Case 1B** ($n>m$). We simplify the first product on the left of [\[eq:thm2case1\]](#eq:thm2case1){reference-type="eqref" reference="eq:thm2case1"}, using [\[eq:refl\]](#eq:refl){reference-type="eqref" reference="eq:refl"}: $$\begin{aligned}
\prod_{j=0}^{n/2}q_{m-2j+1}
&=\prod_{j=0}^{\lfloor m/2\rfloor}q_{m-2j+1}
\prod_{j=\lfloor m/2\rfloor+1}^{n/2}q_{m-2j+1}\\
&=\prod_{j=0}^{\lfloor m/2\rfloor}q_{m-2j+1}
\prod_{j=\lfloor m/2\rfloor+1}^{n/2}q_{2j-m}\\
&=\prod_{j=0}^{\lfloor m/2\rfloor}q_{m-2j+1}
\prod_{j=0}^{n/2-\lfloor m/2\rfloor-1}q_{n-m-2j}.\end{aligned}$$ The left-hand side of [\[eq:thm2case1\]](#eq:thm2case1){reference-type="eqref" reference="eq:thm2case1"} then becomes $$\begin{aligned}
&\prod_{j=0}^{\lfloor m/2\rfloor}q_{m-2j+1}
\prod_{j=n/2}^{m-1}q_{m+n-2j}
\prod_{j=0}^{n/2-\lfloor m/2\rfloor-1}q_{n-m-2j}\\
&\quad=\prod_{j=0}^{\lfloor m/2\rfloor}q_{m-2j+1}
\prod_{j=0}^{\lceil m/2\rceil-1} q_{m-2j}\\
&\quad=\prod_{j=1}^{m+1}q_j.\end{aligned}$$ This completes the proof of case 1B, and consequently case 1.
**Case 2** ($n/2\ge m$). We may divide both sides by $\prod_{j=1}^{m-1}q_{m+n-2j}$, so our task is to prove that $$\prod_{j=0}^{n/2}q_{m-2j+1}
=\prod_{j=1}^{m+1}q_j\prod_{j=m}^{n/2-1}q_{m+n-2j}.$$ Split the left product into three parts: $$\prod_{j=0}^{n/2}q_{m-2j+1}=
\prod_{j=0}^{\lceil(m-1)/2\rceil}q_{m-2j+1}
\prod_{j=\lceil(m+1)/2\rceil}^{m}q_{m-2j+1}
\prod_{j=m+1}^{n/2}q_{m-2j+1}.$$ Then [\[eq:refl\]](#eq:refl){reference-type="eqref" reference="eq:refl"} implies that $$\prod_{j=m+1}^{n/2}q_{m-2j+1}=\prod_{j=m}^{n/2-1}q_{m+n-2j},$$ so it remains to be shown that $$\prod_{j=0}^{\lceil(m-1)/2\rceil}q_{m-2j+1}
\prod_{j=\lceil(m+1)/2\rceil}^{m}q_{m-2j+1}
=\prod_{j=1}^{m+1}q_j.$$ Applying [\[eq:refl\]](#eq:refl){reference-type="eqref" reference="eq:refl"} to the second product on the left, we get $$\begin{aligned}
\prod_{j=0}^{\lceil(m-1)/2\rceil}q_{m-2j+1}
\prod_{j=\lceil(m+1)/2\rceil}^{m}q_{m-2j+1}
&=\prod_{j=0}^{\lceil(m-1)/2\rceil}q_{m-2j+1}
\prod_{j=0}^{\lfloor(m-1)/2\rfloor}q_{m-2j}\\
&=\prod_{j=1}^{m+1}q_j.\end{aligned}$$ This completes the proof of Theorem 2.
# Using Gosper's algorithm {#sec:gosper}
Here we give proofs of the identities $${}_{3}F_{2}\!\left(\begin{matrix}1-\frac{n}{3},\;\frac{4}{3}-\frac{n}{3},\;2\\ \frac{1}{6}(11-n+X_n),\;\frac{1}{6}(11-n-X_n)\end{matrix};1\right)
=\frac{n^2-4n+6}{3n-6}$$ and $${}_{3}F_{2}\!\left(\begin{matrix}1-\frac{n}{2},\;\frac{3}{2}-\frac{n}{2},\;2\\ \frac{1}{4}(9+2m-2n+Y_{m,n}),\;\frac{1}{4}(9+2m-2n-Y_{m,n})\end{matrix};1\right)
=\frac{6+4m-5n+n^2}{4m}$$ that were obtained using Gosper's algorithm. Since an understanding of how Gosper's algorithm works is not required for following the proof below, the reader is referred to [@AeqB] for such details. We can write the first sum as $\sum_{k=0}^{n/3-1} t_k$, where $$t_k=\frac{(1-\frac{n}{3})_k(\frac{4}{3}-\frac{n}{3})_k(2)_k}
{\bigl(\frac{1}{6}(11-n+X_n)\bigr)_k\bigl(\frac{1}{6}(11-n-X_n)\bigr)_k}\frac{1}{k!}.$$ Gosper's algorithm finds a rational function $R(k)$ such that $t_k=R(k+1)t_{k+1}-R(k)t_k$, so that the sum telescopes, giving $$\sum_{k=0}^{n/3-1} t_k
=R(n/3)t_{n/3}-R(0)t_0.$$ Verifying such a proof amounts to computing the rational function $\frac{1}{t_k}(R(k+1)t_{k+1}-R(k)t_k)$ and checking that it is equal to one, then computing and simplifying the expression $R(n/3)t_{n/3}-R(0)t_0$. It is routine to do this using computer algebra systems such as Mathematica (though it can also be done by hand). The author obtained these identities using Paule and Schorn's Mathematica package `fastZeil` [@fastZeil]; see [Appendix B](#sec:mathematica) for a guide on how to follow the computations in this paper using Mathematica.
For the first identity, we have $$R_1(k)=-\frac{6+9k^2-3k(n-5)+n(n-4)}{3(k+1)(n-2)}.$$
Similarly, we can write the second sum as $\sum_{k=0}^{n/2}u_k$, where $$u_k=\frac{(1-\frac{n}{2})_k(\frac{3}{2}-\frac{n}{2})_k(2)_k}
{\bigl(\frac{1}{4}(9+2m-2n+Y_{m,n})\bigr)_k\bigl(\frac{1}{4}(9+2m-2n-Y_{m,n})\bigr)_k}\frac{1}{k!};$$ the proof then follows from setting $$R_2(k)=-\frac{2(1+k)(3+2k+2m)-(5+4k)n+n^2}{4(1+k)m},$$ so that $$\sum_{k=0}^{n/2}u_k=R_2(n/2+1)u_{n/2+1}-R_2(0)u_0.$$
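As an independent sanity check (not part of the proofs, and separate from the Mathematica computations in Appendix B), both closed forms can be confirmed numerically for small admissible values of $n$ and $m$ by summing the terminating series directly; the short Python script below, which uses only the standard library, does this.

```python
import cmath
from math import factorial

def poch(a, k):
    """Rising factorial (a)_k, allowing complex a."""
    out = 1
    for i in range(k):
        out *= a + i
    return out

def first_sum(n):
    # terminating 3F2 from the first identity; n divisible by 3, n > 3
    X = cmath.sqrt(1 + 6*n - 3*n**2)
    d, e = (11 - n + X) / 6, (11 - n - X) / 6
    s = sum(poch(1 - n/3, k) * poch(4/3 - n/3, k) * poch(2, k)
            / (poch(d, k) * poch(e, k) * factorial(k))
            for k in range(n // 3))          # (1-n/3)_k vanishes for k >= n/3
    return s.real

def second_sum(n, m):
    # terminating 3F2 from the second identity; n even, n >= 4, m >= 1
    Y = cmath.sqrt((2*m + 1)**2 - 8*m*n)
    d, e = (9 + 2*m - 2*n + Y) / 4, (9 + 2*m - 2*n - Y) / 4
    s = sum(poch(1 - n/2, k) * poch(3/2 - n/2, k) * poch(2, k)
            / (poch(d, k) * poch(e, k) * factorial(k))
            for k in range(n // 2))          # (1-n/2)_k vanishes for k >= n/2
    return s.real

for n in (6, 9, 12, 15):
    assert abs(first_sum(n) - (n**2 - 4*n + 6) / (3*n - 6)) < 1e-9
for n, m in ((4, 3), (6, 5), (8, 2), (10, 7)):
    assert abs(second_sum(n, m) - (6 + 4*m - 5*n + n**2) / (4*m)) < 1e-9
print("both closed forms confirmed numerically")
```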
# Acknowledgements {#acknowledgements .unnumbered}
The author thanks Hao Huang for his helpful comments on a draft of this paper, as well as Zhi Tat Ang for numerous stimulating discussions. The author would also like to express his gratitude to Aryan Jain for his warm hospitality during a brief night's stay, where a rejuvenating beverage provided invaluable inspiration while working on the proof of Theorem 2.
Lixing Fang, Hao Huang, János Pach, Gábor Tardos, and Junchi Zuo, "Successive vertex orderings of fully regular graphs," *Journal of Combinatorial Theory* **A199** (2023), 105776 (14 pages). [`doi:10.1016/j.jcta.2023.105776`](https://doi.org/10.1016/j.jcta.2023.105776)
George E. Andrews, Richard Askey, and Ranjan Roy, "Special Functions," *Encyclopedia of Mathematics and Its Applications* **71** (Cambridge University Press, 1999).
William F. Sheppard, "Summation of the coefficients of some terminating hypergeometric series," *Proceedings of the London Mathematical Society* (2) **10** (1912), 469--478.
Marko Petkovšek, Herbert S. Wilf, and Doron Zeilberger, $\mathit{A=B}$ (A. K. Peters, 1996).
Peter Paule and Markus Schorn, "A Mathematica Version of Zeilberger's Algorithm for Proving Binomial Coefficient Identities," *Journal of Symbolic Computation* **20** (1995), 673--698. (See <https://risc.jku.at/sw/fastzeil/> for how to download and use these computer programs.)
# Appendix A: Technical lemmas & calculations for similar cases {#sec:appendix .unnumbered}
For convenience, we reproduce Sheppard's proof from [@sheppard] in our notation.
**Theorem 1** (Sheppard [@sheppard], Equation (18)). *Suppose $N$ is a nonnegative integer and $a,b,d,e\in\mathbf C$. Then $${}_{3}F_{2}\!\left(\begin{matrix}-N,\;a,\;b\\ d,\;e\end{matrix};1\right)
=\frac{(d-a)_N(e-a)_N}{(d)_N(e)_N}\,
{}_{3}F_{2}\!\left(\begin{matrix}-N,\;a,\;a+b-N-d-e+1\\ a-N-d+1,\;a-N-e+1\end{matrix};1\right)
,$$ whenever both sides converge.*
*Proof.* By induction on $s$ with the base case $s=1$ given by $$\begin{aligned}
&1+\frac{(-N)(a)(-a+N+d+e-1)}{de}-\frac{(-a+N+d-1)(-a+N+e-1)}{de}\\
&\quad=\frac{(1-N)(a+1)(-a+N+d+e-1)}{de},\end{aligned}$$ we have $$\begin{aligned}
&\sum_{k=0}^s\frac{(-N)_k(a)_k(-a+N+d+e-1)_k}{(d)_k(e)_k\;k!}\\
&\qquad-\frac{(-a+N+d-1)(-a+N+e-1)}{de}\\
&\qquad\quad\times\sum_{k=0}^{s-1}
\frac{(1-N)_k(a+1)_k(-a+N+d+e-1)_k}{(d+1)_k(e+1)_k\;k!}\\
&\quad=\frac{(1-N)_s(a+1)_s(-a+N+d+e-1)_s}{(d)_s(e)_s\;s!}\end{aligned}$$ for $s\ge1$; applying this identity with $s=N$ gives $$\label{eqn:magic}
\begin{aligned}
&{}_{3}F_{2}\!\left(\begin{matrix}-N,\;a,\;-a+N+d+e-1\\ d,\;e\end{matrix};1\right)\\
&\quad=\frac{(-a+N+d-1)(-a+N+e-1)}{de}\\
&\quad\qquad\times{}_{3}F_{2}\!\left(\begin{matrix}1-N,\;a+1,\;-a+N+d+e-1\\ d+1,\;e+1\end{matrix};1\right)
.
\end{aligned}$$ Repeated applications of [\[eqn:magic\]](#eqn:magic){reference-type="eqref" reference="eqn:magic"} with $s=N,\dots,1$ then yield $$\label{eqn:hill}
{}_{3}F_{2}\!\left(\begin{matrix}-N,\;a,\;-a+N+d+e-1\\ d,\;e\end{matrix};1\right)
=\frac{(d-a)_N(e-a)_N}{(d)_N(e)_N}.$$ Now, let $$\begin{aligned}
\Phi(N,a,b,d,e)
&\coloneqq(d)_N(e)_N\;{}_{3}F_{2}\!\left(\begin{matrix}-N,\;a,\;b\\ d,\;e\end{matrix};1\right)\\
&=\sum_{k=0}^N\frac{1}{k!}(-N)_k(a)_k(b)_k(d+k)_{N-k}(e+k)_{N-k},
\end{aligned}$$ $$\begin{aligned} \label{eqn:vanish}
&\Psi(N,a,b,d,e)
\coloneqq(d-a)_N(e-a)_N\;{}_{3}F_{2}\!\left(\begin{matrix}-N,\;a,\;a+b-N-d-e+1\\ a-N-d+1,\;a-N-e+1\end{matrix};1\right)\\
&\qquad=\sum_{k=0}^N\frac{1}{k!}(-N)_k(a)_k(a+b-N-d-e+1)_k(d-a)_{N-k}(e-a)_{N-k},
\end{aligned}$$ and $$f_N(b)\coloneqq\Phi(N,a,b,d,e)-\Psi(N,a,b,d,e).$$ It suffices to prove that $f_N=0$ for $N\ge0$; we will induct on $N$. Clearly $f_0=0$, so suppose that $f_j=0$ for $0\le j<N$. Then $$\Phi(N,a,b+1,d,e)-\Phi(N,a,b,d,e)
=-Na\,\Phi(N-1,a+1,b+1,d+1,e+1)$$ and $$\Psi(N,a,b+1,d,e)-\Psi(N,a,b,d,e)
=-Na\,\Psi(N-1,a+1,b+1,d+1,e+1),$$ which implies that $f_N(b)=f_N(b+1)=f_N(b+2)=\cdots$. Thus $f_N$ is a constant, since it is a polynomial. Choosing $b=-a+N+d+e-1$, so that $a+b-N-d-e+1=0$, we see that $\Psi$ reduces to the first term in the sum [\[eqn:vanish\]](#eqn:vanish){reference-type="eqref" reference="eqn:vanish"}, so that $\Psi=(d-a)_N(e-a)_N$ for this choice of $b$. It now follows from [\[eqn:hill\]](#eqn:hill){reference-type="eqref" reference="eqn:hill"} that $\Phi=\Psi$ for this $b$, so that $f_N(b)=0$ and consequently $f_N=0$. This closes the induction and completes the proof. ◻
Now we prove some auxiliary results for [Section 2](#sec:theorem2) that ensure that we never have to worry about dividing by zero. Recall that $Y_{m,n}\coloneqq\sqrt{(2m+1)^2-8mn}$.
**Lemma 1**. *The terms $\frac{1}{4}(9+2m-2n\pm Y_{m,n})$ are never equal to nonpositive integers.*
*Proof.* If $m<2n-1$, then $(2m+1)^2-8mn<0$, so the terms have a nonzero imaginary part. If $m\ge 2n-1$, then $(2m+1)^2-8mn>0$, and so $Y_{m,n}>0$. Thus it suffices to prove that $\frac{1}{4}(9+2m-2n-Y_{m,n})\ge2$, or $2(m-n)+1\ge Y_{m,n}$, which reduces to $4n(n-1)\ge0$. ◻
The following lemma guarantees that the rising factorials in the denominator of the expression obtained from applying Sheppard's identity in [Section 2](#sec:theorem2) never contain zero factors.
**Lemma 2**. *The terms $\frac{1}{4}(5-2m-2n\pm Y_{m,n})$ and $\frac{1}{4}(-1-2m\pm Y_{m,n})$ are never nonpositive integers in the range $\{2-\frac{n}{2}, 3-\frac{n}{2}, \dots, 0\}$.*
*Proof.* If $m<2n-1$, all the terms have a nonzero imaginary part. If $m\ge2n-1$, then $Y_{m,n}>0$. For the terms $\frac{1}{4}(5-2m-2n\pm Y_{m,n})$, it suffices to prove that $\frac{1}{4}(5-2m-2n+Y_{m,n})\le\frac{3}{2}-n$, which reduces to $2(n-m)-1\le-Y_{m,n}$, an inequality established in the proof of [Lemma A2](#lemma:A2).
Similarly, when $m\ge2n-1$, we can prove that $\frac{1}{4}(-1-2m\pm Y_{m,n})\le-\frac{n}{2}$, since it also reduces to $2(n-m)-1\le-Y_{m,n}$. ◻
Now we give some details on the cases $n\equiv1,2\pmod3$ for [Theorem 1](#theorem:1) and $n\equiv1\pmod2$ for [Theorem 2](#theorem:2).
When $n\equiv1\pmod3$ in [Theorem 1](#theorem:1), we apply Sheppard's identity with $N=n/3-4/3$, $a=1-n/3$, $b=5/3-n/3$, $d=\frac{1}{6}(9-3n+X_n)$, $e=\frac{1}{6}(9-3n-X_n)$ to get $$\begin{aligned}
&\frac{\bigl(\frac{1}{6}(3-n+X_n)\bigr)_{n/3-4/3}
\bigl(\frac{1}{6}(3-n-X_n)\bigr)_{n/3-4/3}}
{\bigl(\frac{1}{6}(9-3n+X_n)\bigr)_{n/3-4/3}
\bigl(\frac{1}{6}(9-3n-X_n)\bigr)_{n/3-4/3}}\\
&\quad\times{}_{3}F_{2}\!\left(\begin{matrix}\frac{4}{3}-\frac{n}{3},\;1-\frac{n}{3},\;2\\ \frac{1}{6}(11-n+X_n),\;\frac{1}{6}(11-n-X_n)\end{matrix};1\right)\\
&\quad=\frac{\prod_{j=0}^{n/3-7/3}\bigl(j^2+\frac{3-n}{3}j+\frac{2-3n+n^2}{9}\bigr)}
{\prod_{j=0}^{n/3-7/3}\bigl(j^2+(3-n)j+\frac{3n^2-15n+20}{9}\bigr)}
\frac{n^2-4n+6}{3n-6}.\end{aligned}$$ (Notice that we obtain the exact same ${}_3F_2(1)$ sum as in [Section 1](#sec:theorem1), since the ordering of parameters does not change a hypergeometric series.) From here the task is then to prove that this expression is equal to $$\frac{\lfloor n/3\rfloor}{\prod_{j=1}^{\lfloor n/3\rfloor-1}c_{3j}}
\frac{\prod_{j=n+1}^{\lfloor3n/2\rfloor-2}c_j}{\prod_{j=\lfloor n/3\rfloor+1}^{\lfloor n/2\rfloor-1}c_{3j}},$$ which proceeds in essentially the same way as before, except now things are slightly messier because of the additional floor functions.
When $n\equiv2\pmod3$ in [Theorem 1](#theorem:1), we apply Sheppard's identity with $N=n/3-5/3$, $a=1-n/3$, $b=4/3-n/3$, $d=\frac{1}{6}(9-3n+X_n)$, $e=\frac{1}{6}(9-3n-X_n)$ to obtain $$\begin{aligned}
&\frac{\bigl(\frac{1}{6}(3-n+X_n)\bigr)_{n/3-5/3}
\bigl(\frac{1}{6}(3-n-X_n)\bigr)_{n/3-5/3}}
{\bigl(\frac{1}{6}(9-3n+X_n)\bigr)_{n/3-5/3}
\bigl(\frac{1}{6}(9-3n-X_n)\bigr)_{n/3-5/3}}\\
&\quad\times{}_{3}F_{2}\!\left(\begin{matrix}\frac{5}{3}-\frac{n}{3},\;1-\frac{n}{3},\;2\\ \frac{1}{6}(13-n+X_n),\;\frac{1}{6}(13-n-X_n)\end{matrix};1\right)\\
&\quad=\frac{\prod_{j=0}^{n/3-8/3}\bigl(j^2+\frac{3-n}{3}j+\frac{n^2-3n+2}{9}\bigr)}
{\prod_{j=0}^{n/3-8/3}\bigl(j^2+(3-n)j+\frac{3n^2-15n+20}{9}\bigr)}
\frac{n^2-5n+12}{3n-3}.\end{aligned}$$ Notice that the fraction with two products on the left is identical to the one that arises in the $n\equiv1\pmod3$ case, but the hypergeometric series is no longer the same! Nevertheless, the sum succumbs to Gosper's algorithm, with $$R(k)=-\frac{12+9k^2-3k(n-7)+n(n-5)}{3(1+k)(n-1)};$$ the same manipulations then follow, except we now have $$\frac{b_0}{c_{n+2}}
=\frac{\lfloor n/3\rfloor(3n-3)}{n^2-5n+12}
=\frac{(n-2)(n-1)}{n^2-5n+12}.$$
Finally, when $n\equiv1\pmod2$ in [Theorem 2](#theorem:2), we apply Sheppard's identity with $N=n/2-3/2$, $a=1-n/2$, $b=1-m$, $d=\frac{1}{4}(9+2m-2n+Y_{m,n})$, and $e=\frac{1}{4}(9+2m-2n-Y_{m,n})$ to get $$\begin{aligned}
&\frac{\bigl(\frac{1}{4}(1-2m+Y_{m,n})\bigr)_{n/2-3/2}
\bigl(\frac{1}{4}(1-2m-Y_{m,n})\bigr)_{n/2-3/2}}
{\bigl(\frac{1}{4}(5-2m-2n+Y_{m,n})\bigr)_{n/2-3/2}
\bigl(\frac{1}{4}(5-2m-2n-Y_{m,n})\bigr)_{n/2-3/2}}\\
&\qquad\times{}_{3}F_{2}\!\left(\begin{matrix}\frac{3}{2}-\frac{n}{2},\;1-\frac{n}{2},\;2\\ \frac{1}{4}(9+2m-2n+Y_{m,n}),\;\frac{1}{4}(9+2m-2n-Y_{m,n})\end{matrix};1\right)\\
&\quad=\frac{\prod_{j=0}^{n/2-5/2}\bigl(j^2-\frac{2m-1}{2}j+\frac{m(n-1)}{2}\bigr)}
{\prod_{j=0}^{n/2-5/2}\bigl(j^2+\frac{5-2m-2n}{2}j+\frac{6-6m-5n+4mn+n^2}{4}\bigr)}
\frac{6+4m-5n+n^2}{4m}.\end{aligned}$$ The resulting hypergeometric series is identical to that given in [Section 2](#sec:theorem2), and the products are almost identical, except that the $mn/2$ in the original numerator is now $m(n-1)/2$. The arguments that follow are also essentially the same.
# Appendix B: Using Mathematica to follow this paper {#sec:mathematica .unnumbered}
Most of the computations in this paper were done and verified using Wolfram Mathematica. The computations involving Gosper's algorithm were done using Paule and Schorn's Mathematica package `fastZeil` [@fastZeil]. For example, the first few lines of [Section 1](#sec:theorem1) written in Mathematica would look something like this:\
Input X\[n\_\] := Sqrt\[1+6n-3n^2\]
Input -Binomial\[n-3j,3\] == -(n-3j)(n-1-3j)(n-2-3j)/6
Output True
Input FullSimplify\[Binomial\[n,3\]-Binomial\[n-3j,3\]
== (9j/2)(j-(3n-3+X\[n\])/6)(j-(3n-3-X\[n\])/6)\]
Output True
And the proof of the first identity in [Section 3](#sec:gosper) would look like this:\
Input \<\< RISC\`fastZeil\` (\* import fastZeil \*)
Input Gosper\[, {k,0,-1+n/3}\]
Print If -1+n/3 is a natural number and (-3+n)/3 is no negative integer and -2+n != 0, then:
Output {Sum\[,{k,0,-1+n/3}\]
== -}
Input Prove\[\]
Input FullSimplify\[-\]
Output
Here, the `Prove[]` function from the `fastZeil` package generates a proof of the identity in a separate Mathematica notebook, which amounts to giving the rational function $R(k)$ as discussed in [Section 3](#sec:gosper).
If the reader owns a copy of Mathematica and wishes to follow the computations in this paper, a Mathematica notebook containing the essential computations is available from <https://boonsuan.github.io/orderings.nb>.
---
author:
- Anne Marie Svane, Christophe Biscio, Rasmus Waagepetersen
bibliography:
- ../lit.bib
title: A functional central limit theorem for the $K$-function with an estimated intensity function
---
**Abstract**
The $K$-function is arguably the most important functional summary statistic for spatial point processes. It is used extensively for goodness-of-fit testing and in connection with minimum contrast estimation for parametric spatial point process models. It is thus pertinent to understand the asymptotic properties of estimates of the $K$-function. In this paper we derive the functional asymptotic distribution for the $K$-function estimator. In contrast to previous papers on functional convergence, we consider the case of an inhomogeneous intensity function. We moreover handle the fact that practical $K$-function estimators rely on plugging in an estimate of the intensity function. This removes two serious limitations of the existing literature.
*Keywords:* estimated intensity; functional central limit theorem; goodness-of-fit test; inhomogeneous $K$-function; point processes; Ripley's $K$-function
*2020 Mathematics Subject Classification:* 60F17; 60G55; 60F05
# Introduction
Ripley's $K$-function [@ripley:77] is the most popular summary of the second moment structure of a spatial point process. Asymptotic properties of the $K$-function are of interest when the $K$-function is used for goodness-of-fit testing [@heinrich:91] of a proposed point process model or for deriving asymptotic properties of parameter estimates obtained from minimum contrast estimating functions based on the $K$-function [@heinrich:92; @guan:sherman:07; @waagepetersen:guan:09].
A complete characterization of the functional asymptotic distribution of the $K$-function for wide classes of stationary Gibbs point processes and stationary so-called conditionally $m$-dependent point processes was obtained in [@biscio:svane:22]. The results were derived assuming known intensity and are hence applicable for goodness-of-fit testing of a specific type of model with a given intensity. In practice, however, it is often desired to test the goodness-of-fit of a given type of model while treating the intensity as an unknown parameter to be estimated. Then the asymptotic distribution of the $K$-function needs to be adjusted for the effect of replacing the true intensity by an estimate. Functional convergence of statistics related to the $K$-function for both known and unknown intensity was considered in [@heinrich:91]. However, the approach in that paper relied heavily on the setting of a stationary Poisson process.
Assuming a constant intensity is often restrictive and [@baddeley:moeller:waagepetersen:00] extended the $K$-function to a wide class of point processes with inhomogeneous intensity functions. This made it possible to study residual second-order structure after filtering out large-scale variation due to a non-constant intensity function. In practice an estimate of the intensity function is plugged in for the typically unknown intensity function. Parametric estimates can be obtained using for example composite likelihood or quasi-likelihood, e.g. [@schoenberg:05; @waagepetersen:07; @guan:loh:07; @waagepetersen:guan:09; @guan:jalilian:waagepetersen:15]. Consistency and asymptotic normality of parameter estimates are studied in detail in these papers. For example, [@waagepetersen:guan:09] used a framework of non-stationary $\alpha$-mixing spatial point processes and established joint asymptotic normality of composite likelihood estimates for the intensity and minimum-contrast estimates of clustering parameters where the contrast was based on the $K$-function with estimated intensity. However, [@waagepetersen:guan:09] did not consider functional convergence.
In this paper we address limitations of the existing literature by establishing a functional central limit theorem for the $K$-function in the case of an unknown possibly non-constant intensity function. Our main focus is establishing functional convergence when the intensity function is replaced by a parametric estimate. We assume as our starting point the availability of a multivariate central limit theorem for a random vector consisting of intensity function parameter estimates and estimates of the $K$-function with known intensity function for a range of spatial distances. This assumption is justified by the aforementioned references.
# The $K$-function {#sec:K}
Let $\mathcal{P}\subseteq \mathbb{R}^d$ be a simple point process with intensity function $\rho(\cdot)>0$ and let $A\subseteq \mathbb{R}^d$ be a set of positive and finite volume $|A|$. Assuming further that $\mathcal{P}$ is second-order intensity reweighted stationary [@baddeley:moeller:waagepetersen:00], the $K$-function is defined for any $r>0$ as $$K(r) = \frac{1}{|A|}\mathbb{E}\sum_{x\in \mathcal{P}\cap A}
\sum_{y\in \mathcal{P}} \frac{\mathds{1}_{\{0<\|x-y\|\le r\}}}{\rho(x)
\rho(y)},$$ where the right hand side does not depend on the particular choice of $A$. The $K$-function can also be expressed as a Palm expectation $$K(r)= \mathbb{E}_u \sum_{y\in \mathcal{P}}\frac{\mathds{1}_{\{0<\|y-u\|\le r\}}}{\rho(y)},$$ for any $u \in \mathbb{R}^d$, where $\mathbb{E}_u$ denotes the Palm expectation given $u\in \mathcal{P}$. These definitions agree with the definition of Ripley's $K$-function in the stationary case where $\rho(\cdot)$ is constant.
In practice $\mathcal{P}$ is only observed inside a bounded observation window. Throughout this paper, we will consider a square observation window $W_n =
[-\frac{1}{2}n^{1/d},\frac{1}{2}n^{1/d}]^d$ of volume $n$ and write $\mathcal{P}_n = \mathcal{P}\cap W_n$. Assuming for a moment that the intensity function is known, an unbiased estimator of $K(r)$ is given by $$\label{eq:Kestknown}
\hat{K}_n(r) = \sum_{x\in \mathcal{P}_n}
\sum_{y\in \mathcal{P}_n}
\frac{\mathds{1}_{\{0<\|x-y\|\le r\}}}{\rho(x)\rho(y)} e_n(x,y),$$ where $e_n(x,y)$ is an edge correction factor ensuring unbiasedness. A popular choice is $e_n(x,y)= |W_n \cap W_{n,x-y}|^{-1}$ where $W_{n,x-y}$ is $W_n$ translated by $x-y$.
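For concreteness, a minimal planar ($d=2$) implementation of this estimator with the translation edge correction on a square window could look as follows; the function name and interface are our own illustration and are not taken from any particular software package.

```python
import numpy as np

def k_hat(points, rho_at, r, side):
    """Translation-corrected estimate of K(r) (the estimator above) for a planar
    point pattern observed in a square window of side length `side`.
    `rho_at` maps an (N, 2) array of locations to intensity values.
    Illustrative sketch; names and interface are not from the paper."""
    pts = np.asarray(points, dtype=float)
    rho = rho_at(pts)
    total = 0.0
    for i in range(len(pts)):
        d = pts - pts[i]
        dist = np.hypot(d[:, 0], d[:, 1])
        keep = (dist > 0) & (dist <= r)
        # e_n(x, y) = |W_n intersect W_{n, x-y}|^{-1} for a square window
        overlap = (side - np.abs(d[keep, 0])) * (side - np.abs(d[keep, 1]))
        total += np.sum(1.0 / (rho[i] * rho[keep] * overlap))
    return total

# Example: for a homogeneous Poisson pattern with intensity 100 on the unit square,
# k_hat(pts, lambda x: np.full(len(x), 100.0), 0.1, 1.0) should be close to pi * 0.1**2.
```

Passing the constant function $u\mapsto \#(\mathcal{P}_n)/|W_n|$ in place of the true intensity yields the plug-in estimator $\hat K_{n,\hat\beta_n}(r)$ discussed below.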
In practice $\rho(\cdot)$ is unknown and must be estimated. We assume that $\rho(\cdot)$ belongs to a parametric model $\rho_\beta(\cdot)$, $\beta \in
\mathbb{R}^p$, so that $\rho(\cdot)=\rho_{\beta^*}(\cdot)$ for some fixed parameter value $\beta^* \in \mathbb{R}^p$. A common example is the log-linear model $\rho_\beta(u)=\exp( z(u)^{\sf T}\beta)$ where for $u \in \mathbb{R}^d$, $z(u)$ is a $p$-dimensional covariate vector assumed to be observed within $W_n$. Methods for obtaining consistent and asymptotically normal estimates of $\beta$ include composite likelihood and quasi-likelihood, e.g. [@schoenberg:05; @waagepetersen:07; @guan:loh:07; @waagepetersen:guan:09; @guan:jalilian:waagepetersen:15; @choiruddin:coeurjolly:waagepetersen:21]. We denote by $\hat \beta_n$ an estimate of $\beta$ obtained from an observation of $\mathcal{P}\cap W_n$. Define $$\hat{K}_{n,\beta}(r) = \sum_{x\in \mathcal{P}_n}
\sum_{y\in \mathcal{P}_n}
\frac{\mathds{1}_{\{0<\|x-y\|\le r\}}}{\rho_{\beta}(x)\rho_{\beta}(y)} e_n(x,y).$$ Then $\hat K_n(r)=\hat K_{n,\beta^*}(r)$. In practice we estimate $K(r)$ by $\hat K_{n,\hat \beta_n}(r)$.
We will assume throughout that $$\label{eq:consistency}
\hat \beta_n \text{ and } \hat K_n \text{ are consistent. }$$ Moreover, we will often make the following assumption on $\rho(\cdot)$: There is an $\varepsilon>0$ and constants $c_1,\ldots,c_4>0$ such that for all $(\beta,x)\in
B_\varepsilon(\beta^*)\times \mathbb{R}^{d}$, $$\label{eq:rhoassumption}
c_1<\rho_\beta(x)<c_2,\qquad \left\|\frac{{\mathrm d}}{{\mathrm d}
\beta}\rho_{\beta}(x)\right\|<c_3, \qquad \left\|\frac{{\mathrm d}^2}{{\mathrm d}\beta^{\sf T}{\mathrm d}
\beta}\rho_{\beta}(x)\right\|<c_4.$$ Here $B_{r}(x)$ denotes the Euclidean ball in $\mathbb{R}^d$ of radius $r$ centered at $x$.
We assume existence of joint intensity functions $\rho^{(l)}$, $l=2,3,4$ [e.g. @rasmus] and define normalized joint intensities $g^{(l)}(u_1,\ldots,u_l)=\rho^{(l)}(u_1,\ldots,u_l)/\prod_{i=1}^l\rho(u_i)$, $l=2,3,4$. In particular, $g^{(2)}$ is the pair correlation function, which will simply be denoted by $g$. Assuming that $g$ is translation-invariant, $g(u,v)=g(u-v)$, the $K$-function can be written as $K(r)=\int_{B_r(0)}g(h) {\mathrm d}h$. We will also assume translation invariance of $g^{(3)}$ and $g^{(4)}$. We note that there are wide classes of point processes for which translation invariant normalized joint intensities exist such as log Gaussian Cox processes [@moeller:syversveen:waagepetersen:98], inhomogeneous Neyman-Scott processes [@waagepetersen:07], and determinantal point processes [@lavancier:moeller:rubak:15].
We say that $\mathcal{P}$ has fast decaying correlations if for any $p,q \ge 1$ with $p+q \le 4$, there exists a function $\phi_{p,q}:[0,\infty[
\rightarrow [0,\infty[$ such that $\int_{0}^\infty
\phi_{p,q}(r)r^{d-1} {\mathrm d}r < \infty$ and $$\label{eq:fastdecay} | g^{(p+q)}({\mathbf{x}},{\mathbf{x}}')-g^{(p)}({\mathbf{x}})g^{(q)}({\mathbf{x}}')| \le
\phi_{p,q}(d({\mathbf{x}},{\mathbf{x}}'))$$ where ${\mathbf{x}}$ and ${\mathbf{x}}'$ are point configurations of cardinality $p$ and $q$, respectively, and $d({\mathbf{x}},{\mathbf{x}}')=\min_{u \in {\mathbf{x}},v \in {\mathbf{x}}'}\|u-v\|$, and we define $g^{(1)}(u)=1$, $u \in \mathbb{R}^d$. If $\mathcal{P}$ has fast decaying correlations [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"} it follows easily that $$\label{eq:decayg}
\int_{\mathbb{R}^d}|g(h)-1| {\mathrm d}h < \infty,$$ $$\label{eq:decayg3}
\sup_{x\in\mathbb{R}^d}\int_{\mathbb{R}^d}|g^{(3)}(0,x,z)-g(x)| {\mathrm d}z < \infty$$ and $$\label{eq:decayg4}
\sup_{u_1,u_2\in\mathbb{R}^d}\int_{\mathbb{R}^d}|g^{(4)}(0,u_1,u_4,u_2+u_4)-g(u_1)g(u_2)| {\mathrm d}u_4 < \infty.$$ We finally need a condition of bounded normalized densities: $$\label{eq:boundedg} g^{(k)} \text{ are bounded for } k=2,3,4.$$
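To illustrate how [\[eq:decayg\]](#eq:decayg){reference-type="eqref" reference="eq:decayg"} follows from [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"}, take $p=q=1$ so that $|g(h)-1|\le \phi_{1,1}(\|h\|)$ and integrate in polar coordinates, $$\int_{\mathbb{R}^d}|g(h)-1| {\mathrm d}h \le \int_{\mathbb{R}^d}\phi_{1,1}(\|h\|) {\mathrm d}h = \sigma_{d-1}\int_{0}^\infty \phi_{1,1}(r)r^{d-1} {\mathrm d}r < \infty,$$ where $\sigma_{d-1}$ denotes the surface area of the unit sphere in $\mathbb{R}^d$. The bounds [\[eq:decayg3\]](#eq:decayg3){reference-type="eqref" reference="eq:decayg3"} and [\[eq:decayg4\]](#eq:decayg4){reference-type="eqref" reference="eq:decayg4"} follow in the same manner with $(p,q)=(2,1)$ and $(2,2)$, using additionally that $\phi_{p,q}(\min(a,b))\le \phi_{p,q}(a)+\phi_{p,q}(b)$ since $\phi_{p,q}\ge0$.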
# Central limit theorem with estimated intensity function {#sec:clt}
In this section we demonstrate how central limit theorems for $K$-functions with unknown intensity can be deduced from analogous results with known intensity under suitable assumptions. We postpone a discussion of point process types satisfying the various assumptions to Section [4](#sec:pp){reference-type="ref" reference="sec:pp"}.
## Finite dimensional distributions
Consider for any $k \ge 1$ and $0 < r_1<r_2<\cdots<r_k< \infty$, the vector $$U_n=((\hat \beta_n- \beta^*)^{\sf T},\hat
K_n(r_1)-K(r_1),\ldots,\hat
K_n(r_k)-K(r_k))^{\sf T}.$$ Throughout this paper we assume asymptotic normality, $$\label{eq:clt}
|W_n|^{1/2}C_{n}U_n \rightarrow N(0,I_{p+k}),$$ for a sequence of matrices $C_n$ where $C_n^{-1}(C_n^{-1})^{{\sf T}}/|W_n|$ approximates $\mathop{\mathrm{\mathsf{Var}}}U_n$. We discuss these assumptions in more detail in Section [4](#sec:pp){reference-type="ref" reference="sec:pp"}.
The objective in this section is to obtain a central limit theorem for $$V_n=((\hat \beta_n-\beta^*)^{\sf T},\hat K_{n,\hat \beta_n}(r_1)-K
(r_1),\ldots,\hat K_{n,\hat \beta_n}(r_k)-K
(r_k))^{\sf T}$$ as well as consistency of $\hat K_{n,\hat \beta_n}(r)$ for $r \ge 0$. For this we employ a first order Taylor expansion to obtain $$\label{eq:KTaylor} \hat K_{n,\hat \beta_n}-K(r) = H_{n,\tilde \beta_{n,r}}(r) (\hat
\beta_n-\beta^*)+ \hat K_{n}(r)-K(r),$$ where $\|\tilde \beta_{n,r}- \beta^*\| \le \|\hat \beta_{n}- \beta^* \|$ and $$\label{eq:Hn}
H_{n,\beta}(r)= -\sum_{x\in \mathcal{P}_n}
\sum_{y\in \mathcal{P}_n}
\frac{\mathds{1}_{\{0<\|x-y\|\le
r\}}}{\rho_{\beta}(x)\rho_{\beta}(y)} \frac{{\mathrm d}}{{\mathrm d}\beta^{\sf T}}\log[\rho_{\beta}(x)\rho_{\beta}(y)]
e_n(x,y).$$
Let $\tilde B_n$ denote the matrix with columns $\tilde \beta_{n,r_i}$ and let $H_n(\tilde B_n)$ denote the $k \times p$ matrix with rows $H_{n,\tilde \beta_{n,r_i}}(r_i)$, $i=1,\ldots,k$. Further let $$A_n= \begin{bmatrix}I_p & 0_{p \times k}\\ H_n(\tilde B_n) & I_k\end{bmatrix}$$ where $0_{p\times k}$ is a $p \times k$ matrix of zeros. Then, since $A_n$ is invertible, $$|W_n|^{1/2}C_n A_n^{-1} V_n = |W_n|^{1/2}C_{n} U_n .$$ Thus, by [\[eq:clt\]](#eq:clt){reference-type="eqref" reference="eq:clt"}, $|W_n|^{1/2}C_n A_n^{-1} V_n$ is asymptotically $N(0,I_{p+k})$. This can be used to construct a joint confidence ellipsoid for $((\beta^*)^{\sf T},K(r_1),\ldots,K(r_k))^{\sf T}$.
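As a small illustration of this construction (not code from the paper; the helper name and arguments are hypothetical), the plug-in covariance matrix of $V_n$ can be formed from an estimate of the asymptotic covariance of $|W_n|^{1/2}U_n$ and estimated rows $H_{n,\hat\beta_n}(r_i)$, the latter replacement being justified by the proposition below.

```python
import numpy as np

def plugin_covariance(H_hat, Sigma_hat, window_volume):
    """Approximate covariance matrix of V_n = ((beta_hat - beta*)^T, Khat(r_1)-K(r_1), ...)^T.

    H_hat         : (k, p) array whose i-th row estimates H_{n, beta*}(r_i)
    Sigma_hat     : (p + k, p + k) estimate of the asymptotic covariance of |W_n|^{1/2} U_n
    window_volume : |W_n|
    Illustrative sketch based on the identity V_n = A_n U_n; not code from the paper."""
    k, p = H_hat.shape
    A = np.eye(p + k)
    A[p:, :p] = H_hat          # lower-left block of A_n
    return A @ Sigma_hat @ A.T / window_volume
```

The diagonal entries corresponding to the $K$-components then provide pointwise variances for $\hat K_{n,\hat\beta_n}(r_i)$, which can be used for the confidence ellipsoid mentioned above.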
We can estimate $H_{n,\tilde \beta_{n,r}}(r)$ by $H_{n,\hat \beta_{n}}(r)$ according to the following proposition, which is also useful for establishing consistency of $\hat K_{n,\hat \beta_n}$.
**Proposition 1**. *Assume that [\[eq:consistency\]](#eq:consistency){reference-type="eqref" reference="eq:consistency"}, [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"} and [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"} are satisfied. Then $H_{n,\tilde \beta_{n,r}}(r)-H_{n,\beta^*}(r)$ and $H_{n,\tilde \beta_{n,r}}(r)-H_{n,\hat \beta_{n}}(r)$ converge to zero in probability. Further, $\bar H_{n}(r)=\mathbb{E}H_{n,\beta^*}(r)$ is bounded. Finally, $\mathop{\mathrm{\mathsf{Var}}}H_{n,\beta^*}(r)$ is $O(n^{-1})$.*
*Proof.* Define $h_{x,y}(\beta)=\rho_{\beta}(x)\rho_{\beta}(y)
\frac{{\mathrm d}}{{\mathrm d}\beta^{\sf T}}\log[\rho_{\beta}(x)\rho_{\beta}(y)]$ and let $h_{i,x,y}(\beta)$ be the $i$th component of $h_{x,y}$. Then $$h_{i,x,y}(\tilde \beta_{n,r})-h_{i,x,y}(\beta^*)= h'_{i,x,y}(\bar
\beta_{i,x,y,n})(\tilde \beta_{n,r}-\beta^*)$$ with $h'_{i,x,y}(\beta)= \frac{{\mathrm d}}{{\mathrm d}\beta^{\sf T}}
h_{i,x,y}(\beta)$ and $\|\bar \beta_{i,x,y,n}-\beta^*\| \le
\|\tilde{\beta}_{n,r}-\beta^*\|$. Next, $h_{x,y}(\tilde \beta_{n,r})-h_{x,y}(\beta^*)= (\tilde \beta_{n,r}-\beta^*)^{\sf T}h'_{x,y}(\bar
B_{x,y,n})^{\sf T}$ where $h'_{x,y}(\bar
B_{x,y,n})$ is $p \times p$ with rows $h'_{i,x,y}(\bar
\beta_{i,x,y,n})$. Further, $$\label{eq:Hn_assump}
\| H_{n, \tilde \beta_{n,r}}(r)-H_{n, \beta^*}(r) \| \le \|\tilde
\beta_{n,r}-\beta^*\| \sup_{x,y}\|h_{x,y}'(\bar B_{x,y,n} )\| \bigg| \sum_{x,y\in
\mathcal{P}_n}\mathds{1}_{\{0<\|x-y\|\le r\}} e_n(x,y)\bigg| .$$ On the right hand side, the first factor is bounded by $\|\hat
\beta_n-\beta^*\|$ and hence converges to zero in probability. The second is bounded in probability since eventually $\|
\bar \beta_{i,x,y,n} - \beta^*\| \le \|
\hat \beta_n - \beta^*\| \le \varepsilon$ with high probability. The last factor is bounded by $c_2^{2}\hat K_{n}(r)$, where $c_2$ is the constant from [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"}, and this is bounded in probability by the consistency assumption on $\hat K_{n}(r)$.
The convergence of $H_{n,\tilde
\beta_{n,r}}(r)- H_{n,\hat \beta_n}(r)$ follows from the previous result, the analogous result with $\tilde \beta_{n,r}$ replaced by $\hat \beta_n$, and the decomposition $$H_{n,\tilde \beta_{n,r}}(r)-H_{n,\hat \beta_{n}}(r)= H_{n,\tilde
\beta_{n,r}}(r)- H_{n,\beta^*}(r)+H_{n, \beta^*}(r)-H_{n,\hat
\beta_{n}}(r).$$ By application of the Campbell formula, $$\|\bar H_{n}(r)\| \le \int_{W_n^2} \frac{\mathds{1}_{\{0<\|x-y\|\le
r\}}}{\rho_{\beta^*}(x)\rho_{\beta^*}(y)|W_n\cap W_{n,x-y}|}g(x-y) \left\|\frac{{\mathrm d}}{{\mathrm d}\beta^{\sf T}}(\rho_{\beta}(x)\rho_{\beta}(y))\Big|_{\beta=\beta^*}\right\|
{\mathrm d}x {\mathrm d}y \le c K(r)$$ for some $c>0$. That $\mathop{\mathrm{\mathsf{Var}}}H_{n,\beta^*}(r)$ is $O(n^{-1})$ follows from Lemma 1 in [@waagepetersen:guan:09]. ◻
**Corollary 2**. *Assume that [\[eq:consistency\]](#eq:consistency){reference-type="eqref" reference="eq:consistency"}, [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"} and [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"} hold. Then $\hat K_{n,\hat
\beta_n}(r)$ is consistent.*
*Proof.* Combining [\[eq:KTaylor\]](#eq:KTaylor){reference-type="eqref" reference="eq:KTaylor"} with Proposition [Proposition 1](#prop:H_est){reference-type="ref" reference="prop:H_est"}, we have $$\begin{aligned}
&\hat K_{n,\hat \beta_n}(r)-K(r) = H_{n,\hat \beta_{n}}(r) (\hat
\beta_n-\beta^*)+ \hat K_{n, \beta^*}(r)-K(r) +o_P(1) \\
= & H_{n,\beta^*}(r) (\hat
\beta_n-\beta^*)+ \hat K_{n}(r)-K(r) +o_P(1)\\
= &\bar H_{n}(r) (\hat
\beta_n-\beta^*)+ (H_{n, \beta^*}(r) - \bar H_{n})(\hat
\beta_n-\beta^*) +\hat K_{n}(r)-K(r) +o_P(1)
= o_P(1)\end{aligned}$$ by consistency of $(\hat \beta_n^{\sf T},\hat K_{n}(r))$, boundedness of $\bar H_{n}(r)$, and since $H_{n,\beta^*}(r)-\bar H_{n}$ is bounded in probability. ◻
## Functional convergence {#sec:functional}
Fix $0\le r_0 < R <\infty$. We will prove functional convergence of the process $\{\sqrt{n}(\hat{K}_{n,\hat
\beta_n}(r)-K(r))\}_{r\in[r_0,R]}$ with estimated intensity under the assumption that functional convergence holds for the process $\{\sqrt{n}(\hat{K}_{n}(r)-K(r))\}_{r\in [r_0,R]}$ with known intensity.
Define $\bar H_{n}(r)=\mathbb{E}H_{n,\beta^*}(r)$ as in Proposition [Proposition 1](#prop:H_est){reference-type="ref" reference="prop:H_est"}. We make the following further assumptions:
1. [\[itemi\]]{#itemi label="itemi"} The matrices $C_n$ converge to a fixed invertible matrix $C$.
2. [\[itemii\]]{#itemii label="itemii"} The matrices $\bar H_{n}(r)$ converge uniformly to a function $H(r)$.
3. [\[itemiii\]]{#itemiii label="itemiii"} The process $\{\sqrt{n}(\hat{K}_{n}(r)-K(r))\}_{r\in[r_0,R]}$ converges in Skorokhod topology to a Gaussian process with a limiting covariance function $c(s,t)$, $s,t \ge 0$.
Note that by Proposition [Proposition 1](#prop:H_est){reference-type="ref" reference="prop:H_est"}, [\[itemii\]](#itemii){reference-type="ref" reference="itemii"} together with [\[eq:consistency\]](#eq:consistency){reference-type="eqref" reference="eq:consistency"} and [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"} implies that $A_n \to A$ in probability where $A$ is defined as $A_n$, but with $H_n(\tilde B_n)$ replaced by the matrix with rows $H(r_i)$, $i=1,\ldots,k$. Combining further with [\[eq:clt\]](#eq:clt){reference-type="eqref" reference="eq:clt"} we obtain $$\label{eq:obs}
\sqrt{n}V_n \rightarrow N(0,A \Sigma A^T)$$ where $\Sigma=C^{-1}(C^{-1})^{{\sf T}}$.
Our main result is the following functional central limit theorem. The proof is postponed to Section [6](#sec:proof){reference-type="ref" reference="sec:proof"}.
**Theorem 3**. *Suppose that [\[eq:consistency\]](#eq:consistency){reference-type="eqref" reference="eq:consistency"}, [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"}, [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"}, [\[eq:boundedg\]](#eq:boundedg){reference-type="eqref" reference="eq:boundedg"}, [\[eq:clt\]](#eq:clt){reference-type="eqref" reference="eq:clt"}, and [\[itemi\]](#itemi){reference-type="ref" reference="itemi"}-[\[itemiii\]](#itemiii){reference-type="ref" reference="itemiii"} hold. Then $\big\{\sqrt{n}\big(\hat{K}_{n,\hat
\beta_n}(r)-K(r)\big)\big\}_{r\in[r_0,R]}$ converges in Skorokhod topology to a Gaussian process with limiting covariance function given by $$\label{eq:lim_cov}
\tilde{c}(s,t)=H(s) \Sigma_{11} H(t)^{\sf T}+H(s) \Sigma_{2,t}+H(t) \Sigma_{2,s} +c(s,t),$$ where $\Sigma_{11}$ is the limiting covariance matrix of $\sqrt{n}(\hat \beta_{n}-\beta^*)$ and $\Sigma_{2,r}$ is the limiting covariance vector between $\sqrt{n}(\hat \beta_{n}-\beta^*)$ and $\sqrt{n}\hat K_{n}(r)$, $r \in [r_0,R]$.*
# Point processes satisfying the assumptions {#sec:pp}
The most complete characterization of asymptotic normality is obtained in the case of constant intensity, see Section [4.1](#sec:constantintensity){reference-type="ref" reference="sec:constantintensity"}. We consider further the case of the commonly used [e.g. @Baddeley:Rubak:Wolf:15] log-linear model for the intensity function in Section [4.2](#sec:loglinear){reference-type="ref" reference="sec:loglinear"}.
## Constant intensity {#sec:constantintensity}
In the case of constant intensity $\rho_\beta(x) =
\beta$, the standard estimator for $\beta$ is $\hat{\beta}_n = \#(\mathcal{P}_n)/|W_n|$, where $\#$ denotes cardinality, see e.g. [@chiu]. Functional convergence with known intensity was established in [@biscio:svane:22] for the class of stationary conditionally $m$-dependent point processes having exponential decay of correlations and satisfying two extra conditions [@biscio:svane:22 Cond. **(M)**] and [@biscio:svane:22 Cond. **(R)**]. Exponential decay of correlations means that [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"} holds for all $p$ and $q$ with $\phi_{p,q}$ being an exponential function, see [@yogesh Sec. 1.1] for the precise definition.
Assumption [\[eq:clt\]](#eq:clt){reference-type="eqref" reference="eq:clt"} can be verified assuming only exponential decay of correlations and [@biscio:svane:22 Cond. **(M)**], and hence applies to all point processes discussed in [@yogesh Sec. 2.2.2] (and also [@yogesh Sec. 2.2.1]). Assuming additionally conditional $m$-dependence, [\[itemi\]](#itemi){reference-type="ref" reference="itemi"} can be verified when replacing [@biscio:svane:22 Cond. **(R)**] by the following: For any $r_0\le r_1<r_2 \le R$, define events $$\begin{aligned}
F_{1} ={}& \lbrace \mathcal{P}_{(5\tilde{R})^{d}} = \emptyset \rbrace \\
F_{2} ={}& \left\lbrace \forall x,y \in \mathcal{P}_{(5\tilde{R})^{d}}: x=y\vee \|x-y\|> r_1 \right \rbrace \cap \left\lbrace \exists x,y \in \mathcal{P}_{(3\tilde{R})^{d}}: 0<\|x-y\|\le r_2 \right\rbrace\\
F_{3} ={}& \left\lbrace \mathcal{P}_{(5\tilde{R})^{d}}\backslash \mathcal{P}_{(3\tilde{R})^{d}} = \emptyset \right \rbrace \cap \left\lbrace \# (\mathcal{P}_{(3\tilde{R})^{d}}) = 1 \right\rbrace
\end{aligned}$$ where $\tilde R = \max\{m,R\}$. Then we require
1. **(R1)** $$\mathbb{E}\big[\min_{i \in \{1,2\}} P\big( \mathcal{P}_{(5\tilde{R})^d} \in F_i\,|\, \sigma(\Lambda , \mathcal{P}\setminus W_{\tilde{R}^d})\big) \big] > 0,$$
2. **(R2)** $$\mathbb{E}\big[\min_{i \in \{1,3\}} P\big( \mathcal{P}_{(5\tilde{R})^d} \in F_i\,|\, \sigma(\Lambda , \mathcal{P}\setminus W_{\tilde{R}^d})\big) \big] > 0,$$
where $\Lambda$ is the random measure from the definition of conditional $m$-dependence. Examples of conditionally $m$-dependent point processes satisfying all assumptions are log-Gaussian Cox processes with exponentially decaying covariance function and Matérn cluster processes, see [@bchs20 Sec. 4].
Condition [\[eq:clt\]](#eq:clt){reference-type="eqref" reference="eq:clt"} can also be shown for the class of Gibbs processes considered in [@biscio:svane:22]. Recent developments [@otto Thm. 3] (or [@benes Thm. 4.11]) allow a generalization to Gibbs processes satisfying [@otto Cond. **(A)**], which is more general than the assumption in [@biscio:svane:22], c.f. the discussion in [@benes Rem. 3.5]. Positive definiteness of $\Sigma$ can be obtained as in [@biscio:svane:22 Prop. 6.2] if the process satisfies **(R1)** and **(R2)** with $\tilde{R}$ larger than the interaction radius and $\Lambda$ trivial. Verifying this is often straightforward for a given Papangelou intensity.
The functional convergence [\[itemiii\]](#itemiii){reference-type="ref" reference="itemiii"} established for the point processes considered in [@biscio:svane:22] generalizes to Gibbs processes satisfying [@otto Cond. **(A)**] since fast decay of correlations can be derived from [@benes Thm. 3.4], and condition **(M)** can be derived from [@benes Lem. 2.6]. Condition [\[itemii\]](#itemii){reference-type="ref" reference="itemii"} is easily verified, noting that $$\label{eq:H}
H_{n,\beta}(r) = -2\beta^{-1} \hat{K}_{n,\beta}(r), \qquad H(r) =-2(\beta^*)^{-1} {K}(r).$$ The condition [\[itemi\]](#itemi){reference-type="ref" reference="itemi"} follows from conditional $m$-dependence and **(R1)** and **(R2)**. We then obtain the following corollary.
**Corollary 4**. *For all point processes considered in [@biscio:svane:22] and Gibbs processes satisfying [@otto Cond. **(A)**], if additionally **(R1)** and **(R2)** are satisfied, then $\big\{\sqrt{n}\big(\hat{K}_{n,\hat \beta_n}(r)-K(r)\big)\big\}_{r\in[r_0,R]}$ converges in Skorokhod topology to a centered Gaussian process with covariance structure given by [\[eq:lim_cov\]](#eq:lim_cov){reference-type="eqref" reference="eq:lim_cov"}.*
For any point process with fast decay of correlations, the limiting covariance with known intensity $\Sigma$ can be computed from Campbell's formulae. The computations are omitted, but are very similar to those in Appendix [7](#sec:convergence){reference-type="ref" reference="sec:convergence"}. This yields $$\begin{aligned}
\nonumber
\lim_{n\to \infty} n \mathop{\mathrm{\mathsf{Var}}}(\hat \beta_n) =& \beta^2\int_{\mathbb{R}^{d}} (g(x) - 1) {\mathrm{d}}x + \beta,\\ \nonumber
\lim_{n\to \infty} n \mathop{\mathrm{Cov}}(\hat \beta_n, \hat K_n (r)) =& \beta\int_{\mathbb{R}^{2d}}(g^{(3)}(o,x,y)- g(x) )\mathds{1}_{\{\|x\|\le r\}} {\mathrm{d}}x {\mathrm{d}}y + 2 K(r),\\
\nonumber
\lim_{n\to \infty} n \mathop{\mathrm{Cov}}( \hat{K}_{n}(r_1), \hat{K}_{n}(r_2)) =&\int_{\mathbb{R}^{3d}} \left(g^{(4)}(o,x,y,z) - g(x)g(y-z)\right)\mathds{1}_{\{\|x\|\le r_1,\|y-z\|\le r_2\}} {\mathrm{d}}x {\mathrm{d}}y {\mathrm{d}}z\\ \nonumber
&+ \frac{4}{ \beta}\int_{\mathbb{R}^{2d}} g^{(3)}(o,x,y) \mathds{1}_{\{\|x\|\le r_1,\|y\|\le r_2\}} {\mathrm{d}}x {\mathrm{d}}y\\ \label{eq:cov_known}
&+ \frac{2}{ \beta^2}\int_{\mathbb{R}^d} g(x) \mathds{1}_{\{\|x\|\le r_1\wedge r_2\} } {\mathrm{d}}x.\end{aligned}$$ Note that all integrals converge due to the fast decay of correlations assumption.
With unknown intensity, the limiting covariance structure is [\[eq:lim_cov\]](#eq:lim_cov){reference-type="eqref" reference="eq:lim_cov"}. Using this, we obtain from [\[eq:cov_known\]](#eq:cov_known){reference-type="eqref" reference="eq:cov_known"}, $$\begin{aligned}
\nonumber
&\lim_{n\to \infty} n \mathop{\mathrm{Cov}}( \hat{K}_{n,\hat\beta_n}(r_1), \hat{K}_{n,\hat \beta_n}(r_2)) = \lim_{n\to \infty} n \mathop{\mathrm{Cov}}( \hat{K}_{n}(r_1), \hat{K}_{n}(r_2))\\ \nonumber
&- {2} \int_{\mathbb{R}^{2d}} \left(g^{(3)}(o,x,y)-g(x) \right) \left( K(r_1)\mathds{1}_{\{\|x\|\le r_2\}} +K(r_2) \mathds{1}_{\{\|x\|\le r_1\}}\right) {\mathrm{d}}x {\mathrm{d}}y\\
&+{4}K(r_1)K(r_2)\left( \int_{\mathbb{R}^d} (g(x) - 1) {\mathrm{d}}x - \beta^{-1}\right). \label{cov_estimated}\end{aligned}$$
For Poisson processes in $\mathbb{R}^2$, these formulas reduce to the covariance formulas given in [@heinrich:91]: $$\begin{aligned}
\nonumber
&\lim_{n\to \infty} n \mathop{\mathrm{Cov}}( \hat{K}_{n}(r_1), \hat{K}_{n}(r_2)) = 2\pi (r_1\wedge r_2)^2/\rho^2 + 4\pi^2r_1^2 r_2^2/ \rho,\\
&\lim_{n\to \infty} n \mathop{\mathrm{Cov}}( \hat{K}_{n,\hat\beta_n}(r_1), \hat{K}_{n,\hat \beta_n}(r_2)) = 2\pi (r_1\wedge r_2)^2/\rho^2. \label{eq:poisson_cov}\end{aligned}$$ For general point processes, the explicit covariance formulas are more complicated but may be evaluated numerically after plugging in estimates of the normalized joint intensities. For instance, in case of log-Gaussian Cox processes and Neyman-Scott processes, explicit parametric expressions for the normalized joint intensities are available [@moeller:syversveen:waagepetersen:98; @jalilian:16], enabling parametric estimation of these.
Note that by [\[eq:poisson_cov\]](#eq:poisson_cov){reference-type="eqref" reference="eq:poisson_cov"}, in case of a Poisson process, it is actually beneficial to estimate the intensity since the asymptotic variance is smaller in the unknown intensity case. We expect this to hold for more general point processes. For example, for point processes satisfying $g^{(3)}(0,x,y)
\ge g(x)$ for all $x,y\in \mathbb{R}^d$, the middle term in [\[cov_estimated\]](#cov_estimated){reference-type="eqref" reference="cov_estimated"} gives a negative contribution to the variance. This holds for instance for Poisson processes, shot noise Cox processes and log-Gaussian Cox processes with non-negative covariance, see formulas for $\rho^{(k)}$ in [@CMW]. Moreover, if $g$ is sufficiently close to $1$, the last term in [\[cov_estimated\]](#cov_estimated){reference-type="eqref" reference="cov_estimated"} is also negative, reducing further the point-wise variance when the intensity is estimated.
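The Poisson reductions in [\[eq:poisson_cov\]](#eq:poisson_cov){reference-type="eqref" reference="eq:poisson_cov"} can be checked by a small simulation experiment. The sketch below (illustrative only; it uses a brute-force $O(N^2)$ translation-corrected estimator) compares empirical variances of $\hat K_n(r)$ and $\hat K_{n,\hat\beta_n}(r)$ on the unit square with the two limiting expressions; exact agreement is not expected on such a small window, but the variance reduction obtained by estimating the intensity is clearly visible.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, side, r, reps = 100.0, 1.0, 0.1, 2000
k_known, k_est = [], []
for _ in range(reps):
    n_pts = rng.poisson(rho * side**2)
    if n_pts < 2:
        continue
    pts = rng.uniform(0.0, side, size=(n_pts, 2))
    total = 0.0                                   # sum of translation-corrected pair weights
    for i in range(n_pts):
        d = pts - pts[i]
        keep = (np.hypot(d[:, 0], d[:, 1]) <= r)
        keep[i] = False
        overlap = (side - np.abs(d[keep, 0])) * (side - np.abs(d[keep, 1]))
        total += np.sum(1.0 / overlap)
    k_known.append(total / rho**2)                # known intensity
    k_est.append(total / (n_pts / side**2)**2)    # estimated (constant) intensity

w = side**2                                        # |W_n|
print("known    :", w * np.var(k_known), " limit:", 2*np.pi*r**2/rho**2 + 4*np.pi**2*r**4/rho)
print("estimated:", w * np.var(k_est),   " limit:", 2*np.pi*r**2/rho**2)
```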
## Log-linear intensity {#sec:loglinear}
The log-linear model $\rho_\beta(u)=\exp(z(u)^{\sf T}\beta)$ is the default model when covariates $z(u)$, $u \in W_n,$ are available for explaining variation in the intensity function. We consider here the case where $\hat \beta_n$ is the first order composite likelihood estimate obtained by maximizing the likelihood function of a Poisson process with intensity function $\rho_\beta$ [see @schoenberg:05; @waagepetersen:07; @guan:loh:07; @waagepetersen:guan:09; @choiruddin:coeurjolly:waagepetersen:21 for theoretical and practical details]. Following for example [@waagepetersen:guan:09], under certain conditions, $$|W_n|\bar S_n (\hat \beta_n - \beta^*) = e_n(\beta^*)+ o_{P}(1)$$ where $\bar S_n$ is the normalized sensitivity $$\label{eq:sensitivity}
\bar S_n = \frac{1}{|W_n|}\int_{W_n} z(u)z(u)^{\sf T}\exp(z(u)^{\sf T}\beta^*) {\mathrm d}
u$$ and $$\label{eq:en}
e_n(\beta^*)=\sum_{u \in \mathcal{P}_n \cap W_n} z(u) - \int_{W_n}z(u)
\rho_{\beta^*}(u) {\mathrm d}u$$ is the Poisson composite likelihood score. Let $\Delta K= (\hat K_n(r_i)-K(r_i))_{i=1}^k$ and $$\Sigma_n=\mathop{\mathrm{\mathsf{Var}}}((|W_n|^{-1/2}e_n(\beta^*)^{\sf T},|W_n|^{1/2}\Delta K^{\sf T})^{\sf T}).$$ Then, e.g. assuming $\alpha$-mixing [@waagepetersen:guan:09], $$\Sigma_{n}^{-1/2} (|W_n|^{-1/2}e_n(\beta^*),|W_n|^{1/2} \Delta
K^{\sf T})^{\sf T}\rightarrow N(0,I_{p+k}).$$ Hence, letting $B_n$ be block diagonal with blocks $\bar S_n$ and $I_k$, $|W_n|^{1/2}(\Sigma_n/|W_n|)^{-1/2} B_n U_n$ converges to $N(0,I_{p+k})$. We can thus take $C_n= (\Sigma_n/|W_n|)^{-1/2} B_n$.
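For concreteness, the score [\[eq:en\]](#eq:en){reference-type="eqref" reference="eq:en"} and the normalized sensitivity [\[eq:sensitivity\]](#eq:sensitivity){reference-type="eqref" reference="eq:sensitivity"} can be approximated on a quadrature grid as in the sketch below, where the integrals over $W_n$ are replaced by Riemann sums; the function and argument names are our own and the snippet is not the estimation code of the cited papers.

```python
import numpy as np

def cl_score_and_sensitivity(points_z, grid_z, cell_volume, beta):
    """Poisson composite likelihood score e_n(beta) and normalized sensitivity S_n
    for the log-linear model rho_beta(u) = exp(z(u)^T beta).

    points_z    : (N, p) covariates z(u) evaluated at the observed points
    grid_z      : (M, p) covariates at quadrature points covering W_n
    cell_volume : volume represented by each quadrature point
    Integrals over W_n are replaced by Riemann sums; illustrative sketch only."""
    rho = np.exp(grid_z @ beta)                                        # rho_beta on the grid
    score = points_z.sum(axis=0) - cell_volume * (grid_z * rho[:, None]).sum(axis=0)
    outer = grid_z[:, :, None] * grid_z[:, None, :] * rho[:, None, None]
    window_volume = cell_volume * len(grid_z)
    sens = cell_volume * outer.sum(axis=0) / window_volume
    return score, sens
```

The estimate $\hat\beta_n$ solves $e_n(\beta)=0$ and can, for example, be computed by Fisher scoring, iterating $\beta \leftarrow \beta + (|W_n|\bar S_n)^{-1}e_n(\beta)$ with $\bar S_n$ evaluated at the current $\beta$.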
The validity of assumption [\[itemi\]](#itemi){reference-type="ref" reference="itemi"} depends on the behaviour of $z$ on the infinite domain $\mathbb{R}^d$. The normalized sensitivity $\bar S_n$ is, for example, a spatial average that has a limiting value $\bar S$ when $z$ is a realization of an ergodic process. In this situation, we can also show (Appendix [7](#sec:convergence){reference-type="ref" reference="sec:convergence"}) under [\[eq:decayg\]](#eq:decayg){reference-type="eqref" reference="eq:decayg"}-[\[eq:decayg4\]](#eq:decayg4){reference-type="eqref" reference="eq:decayg4"} that $\bar H_n$ and $\Sigma_n$ have limits $H$ and $\Sigma$. Hence $C_n$ has a limit $C$ too. Specifically, $\bar{H}_{n}(r)$ converges to $-2 K(r)
\bar z$ where $\bar z= \lim_{n \rightarrow \infty}
{|W_n|}^{-1}\int_{W_n}z(v) {\mathrm d}v$ and [\[itemii\]](#itemii){reference-type="ref" reference="itemii"} is satisfied. The invertibility of $C$ remains an assumption.
Having established [\[eq:clt\]](#eq:clt){reference-type="eqref" reference="eq:clt"} and [\[itemi\]](#itemi){reference-type="ref" reference="itemi"}, [\[itemiii\]](#itemiii){reference-type="ref" reference="itemiii"} holds since the tightness proof from [@biscio:svane:22] goes through in the case of inhomogeneous point processes with exponential decay of correlations satisfying [@biscio:svane:22 Cond. **(M)**] and intensity function satisfying [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"}. Indeed, one must show Lemmas 6.2, 7.2 and 7.3 of [@biscio:svane:22]. Lemmas 6.2 and 7.2 are proved in exactly the same way using the lower bound on $\rho_{\beta^*}$. Lemma 7.3 also follows in the same way by noting that the proof of [@yogesh Thm. 1.11] carries over to the case of non-stationary processes.
# Simulation study
To emphasize the importance of taking into account the effect of estimating the intensity, we conduct a small scale simulation study for a Poisson process and a Matérn cluster process on a sequence of increasing square observation windows of sidelengths 1, 2, 4 and 8. We use an intensity of 200 for both types of point processes. As in [@biscio:svane:22] we consider a Kolmogorov-Smirnov goodness-of-fit test statistic $\sup_{r \in [0,R]}|\hat K_{n,\hat \beta_n}(r)- \pi r^2|$ for the null hypothesis of a homogeneous Poisson process. We choose $R=0.05$ to reflect a reasonable upper lag on the unit square window. For the Matérn process we use a parent intensity of 25, on average 8 offspring for each parent, and a uniform dispersal density on a disc of radius 0.2.
Table [1](#simstudy){reference-type="ref" reference="simstudy"} reports rejection probabilities for the Kolmogorov-Smirnov test at the nominal 5% level. The rejection probabilities are computed over 10000 simulations where, for each simulation, the critical value of the test is determined from the asymptotic distribution of $\sqrt{n}(\hat K_{n,\hat \beta_n}- \pi r^2)$ under the null distribution. Here we consider both results obtained with the asymptotic variance formula for estimated intensity and results obtained with the asymptotic variance formula pretending the estimated intensity is the true intensity (see [\[eq:poisson_cov\]](#eq:poisson_cov){reference-type="eqref" reference="eq:poisson_cov"}). In both asymptotic variance formulas, the unknown intensity is replaced by the estimated intensity. In case of the Poisson process, the actual levels of the test are close to the nominal level for all window sizes when the correct asymptotic variance is used. However, when known intensity is erroneously assumed, the rejection probabilities are far too small, completely invalidating the goodness-of-fit test. For the Matérn process, using the variance formula for known intensity leads to a loss of power of the goodness-of-fit test.
Window side length 1 2 4 8
------------------------------------ -------- -------- -------- --------
Poisson (assuming estm. intensity) 0.053 0.053 0.052 0.051
Poisson (assuming known intensity) 0.0015 0.0011 0.0008 0.0008
Matérn (assuming estm. intensity) 0.63 1.00 1.00 1.00
Matérn (assuming known intensity) 0.31 0.96 1.00 1.00
: Rejection probabilities for Kolmogorov-Smirnov test for Poisson process (upper two rows) and Matérn process (lower two rows) on increasing observation windows. For each type of process, the first row is using asymptotic variance for estimated intensity and the second row is using asymptotic variance assuming known intensity.
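To indicate how such a test can be carried out in practice, the sketch below computes the Kolmogorov-Smirnov statistic over a grid of distances and simulates the critical value from the limiting covariance in [\[eq:poisson_cov\]](#eq:poisson_cov){reference-type="eqref" reference="eq:poisson_cov"}, using that a centred Gaussian process with covariance $2\pi\min(s,t)^2/\rho^2$ can be represented as $\sqrt{2\pi}\rho^{-1}B(r^2)$ for a standard Brownian motion $B$. This is an illustrative reading of the procedure, not the code used to produce Table [1](#simstudy){reference-type="ref" reference="simstudy"}.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_statistic(pts, side, r_grid):
    """sup_r |Khat_{n,beta_hat}(r) - pi r^2| over a grid, with the constant
    intensity estimated by #points / |W_n| (illustrative sketch)."""
    n_pts = len(pts)
    beta_hat = n_pts / side**2
    k_hat = np.zeros_like(r_grid)
    for i in range(n_pts):
        d = pts - pts[i]
        dist = np.hypot(d[:, 0], d[:, 1])
        w = 1.0 / (beta_hat**2 * (side - np.abs(d[:, 0])) * (side - np.abs(d[:, 1])))
        w[i] = 0.0                                   # exclude the point itself
        for j, r in enumerate(r_grid):
            k_hat[j] += np.sum(w[dist <= r])
    return np.max(np.abs(k_hat - np.pi * r_grid**2))

def critical_value(rho, r_grid, level=0.05, nsim=10000):
    """Quantile of sup_r |G(r)| where G has covariance 2*pi*min(s,t)^2 / rho^2,
    i.e. G(r) has the same law as sqrt(2*pi)/rho * B(r^2) for Brownian motion B."""
    incr_sd = np.sqrt(np.diff(np.concatenate(([0.0], r_grid**2))))
    bm = np.cumsum(incr_sd * rng.standard_normal((nsim, len(r_grid))), axis=1)
    sups = np.max(np.abs(np.sqrt(2.0 * np.pi) / rho * bm), axis=1)
    return np.quantile(sups, 1.0 - level)

side, rho0 = 1.0, 200.0
r_grid = np.linspace(0.002, 0.05, 25)
pts = rng.uniform(0.0, side, size=(rng.poisson(rho0 * side**2), 2))
T = ks_statistic(pts, side, r_grid)
crit = critical_value(len(pts) / side**2, r_grid)
print("reject H0:", np.sqrt(side**2) * T > crit)
```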
# Proof of Theorem [Theorem 3](#tight){reference-type="ref" reference="tight"} {#sec:proof}
We will need the following uniform version of Proposition [Proposition 1](#prop:H_est){reference-type="ref" reference="prop:H_est"}.
**Lemma 5**. *Under the assumptions [\[eq:consistency\]](#eq:consistency){reference-type="eqref" reference="eq:consistency"}, [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"}, [\[eq:fastdecay\]](#eq:fastdecay){reference-type="eqref" reference="eq:fastdecay"}, and [\[eq:boundedg\]](#eq:boundedg){reference-type="eqref" reference="eq:boundedg"}, $\|H_{n,\beta^*}-\bar H_n \|_\infty=\sup_{r\in[r_0,R]}\|H_{n,\beta^*}(r)-\bar H_n(r)\|$ goes to zero in probability. Moreover, $\bar H_n$ is continuous, and so is $H$ if [\[itemii\]](#itemii){reference-type="ref" reference="itemii"} is satisfied.*
*Proof of Lemma [Lemma 5](#lem:Hunif){reference-type="ref" reference="lem:Hunif"}.* Let $\delta>0$ be given and let $\eta>0$ be a parameter to be chosen later. Choose $r_0 < r_1 < ... <r_k=R$ such that $|r_i-r_{i+1}|\leq \eta$ and $k\eta \le 2(R-r_0)$. Then $$\begin{aligned}
\label{term1}
P\Big(\sup_{r\in [r_0,R]} \|H_{n,\beta^*}(r) - \bar H_n(r) \| \ge \delta \Big) \le & P\Big( \sup_{i=0,\ldots,k-1}\sup_{r\in [r_i,r_{i+1}]} \| H_{n,\beta^*}(r) - H_{n,\beta^*}(r_i) \| \ge \delta/3 \Big) \\ \label{term2}
&+ P\Big(\sup_{i=0,\ldots,k} \|H_{n,\beta^*}(r_i) - \bar H_n(r_i) \| \ge \delta/3 \Big)\\ \label{term3}
&+ P\Big(\sup_{i=0,\ldots,k-1}\sup_{{r\in [r_i,r_{i+1}]}} \|\bar H_{n}(r_i) - \bar H_{n}(r) \| \ge \delta/3 \Big).
\end{aligned}$$ With $r_i<r$, $$\label{eq:something}
H_{n,\beta^*}(r) - H_{n,\beta^*}(r_i) = -\sum_{x,y\in \mathcal{P}_n} \frac{\mathds{1}_{A(r_i,r)}(x-y)}{|W_n\cap W_{n,x-y}| \rho_{\beta^*} (x)^2 \rho_{\beta^*} (y)^2}\frac{{\mathrm{d}}}{{\mathrm{d}}\beta}( \rho_{\beta^*} (x) \rho_{\beta^*} (y))$$ where $A(r,t)$ for $r<t$ denotes the annulus $B_t(0) \backslash B_r(0)$ with volume $|A(r,t)|\leq C_1 |t-r|$ where $C_1$ is independent of $t$ as long as $t\leq R$. It follows that $$\label{eq:varHdiff}
\sup_{r\in [r_i,r_{i+1}]} \|H_{n,\beta^*}(r) - H_{n,\beta^*}(r_i) \| \leq \sum_{x,y\in \mathcal{P}_n} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y)}{|W_n\cap W_{n,x-y}| \rho_{\beta^*} (x)^2 \rho_{\beta^*} (y)^2}\left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta}( \rho_{\beta^*} (x) \rho_{\beta^*} (y)) \right\|.$$ Let $X_{i,n}$ denote the right-hand side. Campbell's formula shows that $\mathbb{E}X_{i,n}\le C_2 \eta$, where $C_2$ is independent of $n$ and $r_i$. Moreover, the computation in Appendix [8](#sec_Hvar){reference-type="ref" reference="sec_Hvar"} shows that $\mathop{\mathrm{\mathsf{Var}}}(X_{i,n}) \le C_3\eta n^{-1}$. Choose $\eta <\delta/(6C_2)$. Then by Chebyshev's inequality, $$P\Big(\sup_{r\in [r_i,r_{i+1}]} \| H_{n,\beta^* }(r) - H_{n,\beta^* }(r_i) \| \ge \delta/3 \Big) \le P(|X_{i,n}| \ge \delta/3) \le P(|X_{i,n}-\mathbb{E}X_{i,n}| \ge \delta/6 ) \le \frac{36 C_3 \eta }{n \delta^2},$$ so [\[term1\]](#term1){reference-type="eqref" reference="term1"} is bounded by $36 k C_3\eta/(n\delta^2)$, which tends to zero as $n\to \infty$ since $k\eta$ is bounded.
Proposition [Proposition 1](#prop:H_est){reference-type="ref" reference="prop:H_est"} shows that $\mathop{\mathrm{\mathsf{Var}}}H_{n,\beta^*}(r)$ is $O(n^{-1})$, and it is easily verified (by computations similar to Appendix [8](#sec_Hvar){reference-type="ref" reference="sec_Hvar"}) that the upper bound is uniform in $r$ for $r\in [r_0,R]$. By Chebyshev's inequality, there is a $C_4>0$ such that $$P\Big(\sup_{i=0,\ldots,k} \|H_{n,\beta^*}(r_i) - \bar H_n(r_i) \| \geq \delta/3\Big) \leq \frac{9(k+1)C_4}{n \delta^2},$$ so [\[term2\]](#term2){reference-type="eqref" reference="term2"} goes to zero as $n\to \infty$ for any fixed $\eta$.
Taking expectations and applying Campbell's formula in [\[eq:something\]](#eq:something){reference-type="eqref" reference="eq:something"}, we get $$\begin{aligned}
\nonumber
&\sup_{r\in [r_i,r_{i+1}]} \| \bar H_{n }(r_i) - \bar H_{n }(r) \| \\ \nonumber
&\le \int_{W_n^2} \frac{\mathds{1}_{A(r_i,r_{i+1})}(u) }{|W_n\cap W_{n,u}| \rho_{\beta^*} (x) \rho_{\beta^*} (x-u)}\left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta}( \rho_{\beta^*} (x) \rho_{\beta^*} (x-u)) \right\| g(u) {\mathrm{d}}x{\mathrm{d}}u\\
&\le C_5\eta \label{eq:Hcont}
\end{aligned}$$ for some $C_5$ independent of $r_i$ and $n$. Thus, choosing $\eta < \delta/(3C_5)$, [\[term3\]](#term3){reference-type="eqref" reference="term3"} vanishes.
Finally, [\[eq:Hcont\]](#eq:Hcont){reference-type="eqref" reference="eq:Hcont"} shows that $\bar H_n$ is continuous and if uniform convergence in [\[itemii\]](#itemii){reference-type="ref" reference="itemii"} is satisfied, $H$ must also be continuous. ◻
The proof of Theorem [Theorem 3](#tight){reference-type="ref" reference="tight"} uses some definitions related to Skorokhod space, which we briefly recall here; see [@billingsley Sec. 12] for details. The Skorokhod space $D[r_0,R]$ of cadlag functions on $[r_0,R]$ is a separable metric space with metric $\mu$ given by $$\mu(f_1,f_2) = \inf_{\lambda} \{|\lambda - I|_{\infty} \vee |f_1-f_2\circ \lambda|_{\infty}\},$$ where the infimum runs over all strictly increasing, continuous bijections $\lambda: [r_0,R] \to [r_0,R]$, $I$ is the identity map, and $|\cdot|_\infty$ is the sup norm. In the rest of this section, unless otherwise mentioned, functions will be restricted to the domain $[r_0,R]$, i.e. $|f|_{\infty}= \sup_{r \in [r_0,R]}|f(r)|$.
The tightness condition we apply makes use of the cadlag modulus of continuity $\omega_{f}'(\delta)$ defined by $$\label{eq:def_omega}
\omega_{f}'(\delta) = \inf_{\substack{t_1<\dotsm<t_k\\ |t_i - t_{i-1}|>\delta}}\max_{i=1,\ldots,k} \sup_{s,t\in [t_{i-1},t_i)} |f(s)-f(t)|.$$ We will need the following property: If $f_2$ is continuous on $[r_0,R]$, then it is uniformly continuous. Hence there is a function $g_{f_2}(\delta)$ with $\lim_{\delta\to 0}g_{f_2}(\delta) = 0$ such that $|s-t|\le \delta$ implies $|f_2(s)-f_2(t)|\le
g_{f_2}(\delta)$. Since it is enough to take the infimum in [\[eq:def_omega\]](#eq:def_omega){reference-type="eqref" reference="eq:def_omega"} over all partitions with $|t_i-t_{i-1}|\le
2\delta$, we get for any cadlag $f_1$ $$\label{eq:sum_omega}
\omega_{f_1+f_2}'(\delta) \le \omega_{f_1}'(\delta) + g_{f_2}(2\delta).$$
*Proof of Theorem [Theorem 3](#tight){reference-type="ref" reference="tight"}.* Recall the decomposition $$\hat K_{n,\hat \beta_n}-K(r) = H_{n,\tilde \beta_{n,r}}(r) (\hat
\beta_n-\beta^*)+ \hat K_{n}(r)-K(r).$$ The proof relies on the observation that $H_{n,\tilde \beta_{n,r}}(r)$ converges to a continuous deterministic function $H(r)$, $\sqrt{n}(\hat
\beta_n-\beta^*)$ is constant in $r$ and converges in distribution, and $\sqrt{n} (\hat K_{n}(r)-K(r))$ converges in Skorokhod space. Thus, the proof below can be viewed as a stochastic process analogue to Slutsky's theorem.
To simplify, we write $$\label{eq:slutsky}
\sqrt{n}(\hat K_{n,\hat \beta_n}-K(r) ) = H_{n,\tilde \beta_{n,r}}(r)Y_n + \sqrt{n}(\hat K_{n}(r)- K(r)) =(H_{n,\tilde \beta_{n,r}}(r)-{H}(r) ) Y_n + Z_n(r)$$ where $Y_n = \sqrt{n} (\hat \beta_n-\beta^*)$ and converges in distribution to a Gaussian vector $Y$ by assumption [\[eq:obs\]](#eq:obs){reference-type="eqref" reference="eq:obs"}, and $$Z_n(r) = {H}(r) Y_n + \sqrt{n}(\hat K_{n}(r) - K(r)).$$
By [@billingsley Thm. 3.1], in order to show functional convergence, it is enough to show:
a.  Convergence of $\mu\big(Z_n,\sqrt{n}\big(\hat K_{n,\hat \beta_n}-K\big) \big)$ to zero in probability.
b.  Convergence of $Z_n$ in distribution in the Skorokhod topology to a Gaussian process with the covariance structure given by [\[eq:lim_cov\]](#eq:lim_cov){reference-type="eqref" reference="eq:lim_cov"}.
To show a., note that $\mu(f_1,f_2)\le |f_1-f_2|_\infty$, so $$\mu\big(Z_n, \sqrt{n}\big(\hat K_{n,\hat \beta_n}-K \big)\big)\le \big| \big(H_{n,\tilde \beta_{n,r}}- H \big) Y_n \big|_\infty \leq \big\| H_{n,\tilde \beta_{n,r}}- H\big\|_\infty \|Y_n\|.$$ Since $Y_n\to Y$ in distribution, $\limsup_n P(\|Y_n\|\ge M) \le P(\|Y\|\ge
M)$ and hence $\|Y_n\|$ is bounded in probability. It remains to show that $\|H_{n,\tilde \beta_{n,r}}- H\|_\infty$ goes to zero in probability. We write $$\label{eq:oldiii}
\|H_{n, \tilde{\beta}_{n,r}}- H\|_\infty \le \|H_{n, \tilde{\beta}_{n,r}}-H_{n,\beta^*} \|_\infty + \|H_{n,\beta^*}-\bar H_n \|_\infty + \|\bar H_{n}- H \|_\infty .$$ The first term goes to zero in probability because the bound in [\[eq:Hn_assump\]](#eq:Hn_assump){reference-type="eqref" reference="eq:Hn_assump"} is uniform in $r$ since $\hat{K}_n(r) \le \hat K_n(R)$, the middle term goes to zero in probability by Lemma [Lemma 5](#lem:Hunif){reference-type="ref" reference="lem:Hunif"}, and the third term goes to zero by [\[itemii\]](#itemii){reference-type="ref" reference="itemii"}.
To show b., note that convergence of finite dimensional distributions follows from the observation [\[eq:obs\]](#eq:obs){reference-type="eqref" reference="eq:obs"}. It remains to show tightness. According to [@billingsley Thm. 13.2], tightness of a sequence $Z_n$ is equivalent to the following two conditions:
1.  $\lim_{a\to \infty}\limsup_n P(|Z_n|_{\infty}\ge a) = 0$.
2.  For any $\varepsilon>0$: $\lim_{\delta\to 0} \limsup_n P(\omega_{Z_n}'(\delta)\ge \varepsilon) = 0$.
To show 1., note $$\begin{aligned}
&P(|H Y_n + \sqrt{n}(\hat K_n- K) |_{\infty} \ge a) \\
&\le P(|H Y_n|_{\infty} \ge {a}/{2}) + P(|\sqrt{n}(\hat K_n- K)|_{\infty} \ge {a}/{2}).
\end{aligned}$$ Taking $\lim_{a\to \infty}\limsup_n$, the latter term vanishes by tightness of $\sqrt{n}(\hat K_n(r)- K(r))$. The first term satisfies $$\begin{aligned}
P\left(|H Y_n|_{\infty}\ge a/2\right) \le P\left(\|Y_n\| \ge \sqrt{a/2}\right) + P\left(\|H\|_\infty \ge \sqrt{a/2}\right).
\end{aligned}$$ Clearly, $\lim_{a\to \infty} P(\|H\|_\infty \ge \sqrt{a/2}) =0$ since $H$ is continuous and hence bounded on $[r_0,R]$ by Lemma [Lemma 5](#lem:Hunif){reference-type="ref" reference="lem:Hunif"}. Moreover, since $Y_n\to Y$ in distribution, $$\lim_{a\to \infty}\limsup_n P(\|Y_n\| \ge \sqrt{a/2})\le\lim_{a\to \infty}P(\|Y\| \ge \sqrt{a/2})= 0.$$
To show 2., we use [\[itemii\]](#itemii){reference-type="ref" reference="itemii"} and [\[eq:sum_omega\]](#eq:sum_omega){reference-type="eqref" reference="eq:sum_omega"} to obtain a ($p$-dimensional) function $g_{H}$ such that $$\omega_{\sqrt{n} (\hat K_n-K)+H Y_n}'(\delta) \le \omega_{ \sqrt{n} (\hat K_n-K)}'(\delta) + \|Y_n\|\|g_{ H}(2\delta)\|.$$ It follows that $$\begin{gathered}
P\left(\omega_{H Y_n+\sqrt{n} (\hat K_n-K)}'(\delta)\ge \varepsilon\right)
\le P\left( \omega_{ \sqrt{n} (\hat K_n-K)}'(\delta)\ge \varepsilon/2 \right) + P\left(\|Y_n\| \ge (\varepsilon/2) \|g_{ H}(2\delta)\|^{-1}\right).
\end{gathered}$$ Taking $\lim_{\delta\to 0} \limsup_n$ yields 0 in both terms because of tightness of $\sqrt{n} (\hat K_n-K)$ and convergence in distribution of $Y_n$. ◻
# Convergence results for the log-linear model {#sec:convergence}
Throughout this appendix $z$ is a realization from a stationary, bounded, and ergodic random field on $\mathbb{R}^d$.
## Convergence of $\bar H_{n}$
By [\[eq:Hn\]](#eq:Hn){reference-type="eqref" reference="eq:Hn"} and Campbell's formula, $$\begin{aligned}
\bar H_{n}(r) &= - \mathbb{E}\sum_{x\in \mathcal{P}_n}
\sum_{y\in \mathcal{P}_n}
\frac{\mathds{1}_{\{0<\|x-y\|\le
r\}}}{\rho_{\beta^*}(x)\rho_{\beta^*}(y)} \frac{{\mathrm d}}{{\mathrm d}\beta^{\sf T}}\log(\rho_{\beta}(x)\rho_{\beta}(y))|_{\beta=\beta^*}
e_n(x,y) \\
&= -\int_{W_{n}^2} \frac{\mathds{1}_{\{0<\|x-y\|\le
r\}}}{|W_n \cap W_{n,x-y}|}g(x-y) [z(x)+z(y)] {\mathrm d}x {\mathrm d}y\\
&= - 2 \int_{B_r(0)} g(u) \frac{1}{|W_n \cap W_{n,u}|}\int_{W_n \cap W_{n,u}} z(v) {\mathrm d}v {\mathrm d}u.\end{aligned}$$ The inner integral is a spatial average of $z(v)$ over $W_n\cap W_{n,u}$. By the Pointwise Ergodic Theorem [@kallenberg Thm. 10.14], this converges to a limiting value $\bar z$ since $z(\cdot)$ is a realization of a stationary ergodic process. It then follows by boundedness of $z$ and dominated convergence that $\bar
H_{n,\beta^*}(r)$ converges to $-2 K(r) \bar z$. Clearly, the convergence is uniform since $$\begin{aligned}
\|\bar H_{n}(r) - H(r) \| \leq 2 \int_{B_R(0)} |g(u)| \left\| \frac{1}{|W_n \cap W_{n,u}|}\int_{W_n \cap W_{n,u}} z(v) {\mathrm d}v - \bar z \right\|{\mathrm d}u,\end{aligned}$$ where the right hand side is independent of $r$.
## Convergence of $\bar S_n$
The normalized sensitivity $\bar S_n$ in [\[eq:sensitivity\]](#eq:sensitivity){reference-type="eqref" reference="eq:sensitivity"} is obviously a spatial average that has a limiting value $\bar S$ when $z$ is ergodic.
## Upper left block of $\Sigma_n$
The variance of $|W_n|^{-1/2} e_n(\beta^*)$ is by [\[eq:en\]](#eq:en){reference-type="eqref" reference="eq:en"} $$\frac{1}{|W_n|} \mathop{\mathrm{\mathsf{Var}}}\bigg( \sum_{x \in \mathcal{P}_n \cap
W_n} z(x)\bigg) = \bar S_n + \frac{1}{|W_n|}\int_{W_n^2} z(x)z(y)^{\sf T}\rho_{\beta^*}(x)\rho_{\beta^*}(y) [g(x-y)-1] {\mathrm d}x {\mathrm d}y,$$ where the first term on the right hand side converges to a limiting value $\bar S$. The double integral can be rewritten as $$\int_{\mathbb{R}^d} [g(v)-1] \frac{1}{|W_{n}|} \int_{W_{n}\cap W_{n,v}} z(u) z(u-v)^{\sf T}\rho_{\beta^*}(u)\rho_{\beta^*}(u-v) {\mathrm d}u {\mathrm d}v.$$ Since ${|W_{n}\cap W_{n,v}|}/{|W_{n}|} \to 1$, the inner integral converges pointwise to a spatial average which is uniformly bounded in $n$ and $v$ by the assumptions on $z$. Dominated convergence and [\[eq:decayg\]](#eq:decayg){reference-type="eqref" reference="eq:decayg"} then shows that the limit of the double integral is $$\Sigma_{11} = \int_{\mathbb{R}^d} [g(v)-1] \lim_{n\to \infty} \bigg(\frac{1}{|W_{n}|} \int_{W_{n}} z(u) z(u-v)^{\sf T}\rho_{\beta^*}(u)\rho_{\beta^*}(u-v){\mathrm d}u \bigg) {\mathrm d}v.$$
## Lower right block of $\Sigma_n$
The lower right block of $\Sigma_n$ has entries $|W_n|\mathop{\mathrm{Cov}}[\hat
K_{n}(r_i),\hat K_n(r_j)]$. Assume $r_i\le r_j$. Following the proof of Lemma 1 in [@waagepetersen:guan:09] this is the sum $$\begin{aligned}
&2 |W_n| \int_{W_n^2} \frac{\mathds{1}_{\{\|x-y\| \le r_i\}}}{|W_{n,x-y}|^2
\rho_{\beta^*}(x)\rho_{\beta^*}(y)}g(x-y) {\mathrm d}x {\mathrm d}y\\
&+ 4 |W_n|\int_{W_n^3} \frac{\mathds{1}_{\{\|x-y\| \le r_i,\|x-z\| \le
r_j\}}}{|W_{n,x-y}||W_{n,x-z}|\rho_{\beta^*}(x)}g^{(3)}(x,y,z)
{\mathrm d}x {\mathrm d}y {\mathrm d}z,\\
&+ |W_n| \int_{W_n^4} \frac{\mathds{1}_{\{\|x-y\| \le r_i,\|z-w\| \le
r_j\}}}{|W_{n,x-y}||W_{n,z-w}|} [g^{(4)}(x,y,z,w)-g(x-y)g(z-w)]
{\mathrm d}x {\mathrm d}y {\mathrm d}z {\mathrm d}w.
\end{aligned}$$ The first term equals $$\begin{aligned}
2 \int_{\mathbb{R}^d} g(w)\mathds{1}_{\{\|w\| \le r_i\}} \frac{|W_n|}{|W_{n,w}|^2} \int_{W_n \cap W_{n,w}}
\frac{1}{\rho_{\beta^*}(u)\rho_{\beta^*}(u-w)} {\mathrm d}u {\mathrm d}w,
\end{aligned}$$ which converges to $$\begin{aligned}
2 \int_{\mathbb{R}^d} g(w)\mathds{1}_{\{\|w\| \le r_i\}} \lim_{n\to \infty}\bigg( \frac{1}{|W_{n}|} \int_{W_n } \frac{1}{\rho_{\beta^*}(u)\rho_{\beta^*}(u-w)} {\mathrm d}u\bigg) {\mathrm d}w.
\end{aligned}$$ The second term is $$\begin{aligned}
4 \int_{B_{r_i}(0)\times B_{r_j}(0)} g^{(3)}(0,v,w)\frac{|W_n|}{|W_{n,-v}||W_{n,-w}|}\int_{W_n \cap W_{n,-v} \cap W_{n,-w} }\frac{1}{\rho_{\beta^*}(u)}
{\mathrm d}u {\mathrm d}v {\mathrm d}w.\end{aligned}$$ Ergodicity and dominated convergence yield that the limit exists and is $$\begin{aligned}
4 \int_{B_{r_i}(0)\times B_{r_j}(0)} g^{(3)}(0,v,w)\lim_{n\to \infty}\bigg(\frac{1}{|W_{n}|}\int_{W_n} \frac{1}{\rho_{\beta^*}(u)}
{\mathrm d}u \bigg) {\mathrm d}v {\mathrm d}w.\end{aligned}$$ After change of variables $u_1=x-y$, $u_2=z-w$, $u_3=z$, $u_4=w$, the third term is $$\int_{W_n \times B_{r_i}(0)\times B_{r_j}(0)} \frac{|W_n|}{|W_{n,u_1}||W_{n,u_2}|}\int_{W_n}
[g^{(4)}(0,u_1,u_4,u_4+u_2)-g(u_1)g(u_2)] {\mathrm d}u_3 {\mathrm d}u_1 {\mathrm d}u_2
{\mathrm d}u_4,$$ which by [\[eq:decayg4\]](#eq:decayg4){reference-type="eqref" reference="eq:decayg4"} converges to $$\int_{B_{r_i}(0) \times B_{r_j}(0)} \int_{\mathbb{R}^d}
[g^{(4)}(0,u_1,u_4,u_4+u_2)-g(u_1)g(u_2)]{\mathrm d}u_4 {\mathrm d}u_1 {\mathrm d}u_2
.$$ Hence the lower right block of $\Sigma_n$ converges to a matrix $\Sigma_{22}$.
## Lower left block of $\Sigma_n$
The $ij$th entry in the lower left block of $\Sigma_n$ is $\mathop{\mathrm{Cov}}[ \hat
K_n(r_i),e_n(\beta^*)_j]$. This covariance is $$\begin{aligned}
&\int_{W_n^3} \frac{\mathds{1}_{\{\|x-y\|\le r_i\}}}{|W_{n,x-y}|}z_j(w)
\rho_{\beta^*}(w)[g^{(3)}(x,y,w)-g(x-y)] {\mathrm d}x {\mathrm d}y {\mathrm d}w\\
& + 2 \int_{W_n^2} \frac{\mathds{1}_{\{\|x-y\|\le r_i\}}}{|W_{n,x-y}|}z_j(x) g(x-y) {\mathrm d}x
{\mathrm d}y .
\end{aligned}$$ The first term equals $$\int_{\mathbb{R}^{d}\times B_{r_i}(0) } [g^{(3)}(0,v_1,v_2)-g(v_1)] \frac{1}{|W_{n,-v_1}|}\int_{W_n\cap W_{n,-v_1}\cap W_{n,-v_2}} z_j(u+v_2)
\rho_{\beta^*}(u+v_2){\mathrm d}u {\mathrm d}v_1 {\mathrm d}v_2,$$ which by dominated convergence, ergodicity, and [\[eq:decayg3\]](#eq:decayg3){reference-type="eqref" reference="eq:decayg3"} converges to $$\int_{\mathbb{R}^{d}\times B_{r_i}(0) } [g^{(3)}(0,v_1,v_2)-g(v_1)] \lim_{n\to \infty}\bigg(\frac{1}{|W_{n}|}\int_{W_n} z_j(u+v_2) \rho_{\beta^*}(u+v_2){\mathrm d}u \bigg){\mathrm d}v_1 {\mathrm d}v_2.$$ The last term is asymptotically equivalent to $$2 \int_{\mathbb{R}^d} g(v) \frac{\mathds{1}_{\{\|v\|\le r_i\}}}{|W_{n,v}|} \int_{W_n\cap W_{n,v}}z_j(u) {\mathrm d}u
{\mathrm d}v,$$ which converges to $2K(r_i)\bar z_j$. Hence, the lower left block of $\Sigma_n$ converges to a fixed matrix $\Sigma_{21}$.
# Variance of [\[eq:varHdiff\]](#eq:varHdiff){reference-type="eqref" reference="eq:varHdiff"} {#sec_Hvar}
By Campbell's formula, $\mathop{\mathrm{\mathsf{Var}}}(X_{i,n})$ is $$\begin{aligned}
&2 \int_{W_n^2} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y)}{|W_n\cap W_{n,x-y}|^2
\rho_{\beta^*}(x)^3\rho_{\beta^*}(y)^3} g(x-y) \left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta} (\rho_{\beta^*}(x)\rho_{\beta^*}(y)) \right\|^2{\mathrm d}x {\mathrm d}y\\
&+ 4 \int_{W_n^3} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y) \mathds{1}_{A(r_i,r_{i+1})}(x-z)}{|W_n\cap W_{n,x-y}|| W_n\cap W_{n,x-z}|\rho_{\beta^*}(x)^3\rho_{\beta^*}(y)\rho_{\beta^*}(z)}g^{(3)}(x,y,z) \\
&\times \left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta} (\rho_{\beta^*}(x)\rho_{\beta^*}(y)) \right\| \left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta} (\rho_{\beta^*}(x)\rho_{\beta^*}(z)) \right\|
{\mathrm d}x {\mathrm d}y {\mathrm d}z,\\
&+ \int_{W_n^4} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y) \mathds{1}_{A(r_i,r_{i+1})}(z-w) }{|W_n\cap W_{n,x-y}||W_n\cap W_{n,z-w}|\rho_{\beta^*}(x)\rho_{\beta^*}(y)\rho_{\beta^*}(z)\rho_{\beta^*}(w)} \\
& \times [g^{(4)}(x,y,z,w)-g(x-y)g(z-w)] \left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta} (\rho_{\beta^*}(x)\rho_{\beta^*}(y)) \right\| \left\|\frac{{\mathrm{d}}}{{\mathrm{d}}\beta} (\rho_{\beta^*}(w)\rho_{\beta^*}(z)) \right\|
{{\mathrm d}x {\mathrm d}y} {\mathrm d}w {\mathrm d}z. \end{aligned}$$ Using the assumption [\[eq:rhoassumption\]](#eq:rhoassumption){reference-type="eqref" reference="eq:rhoassumption"} on $\rho$, boundedness of $g^{(l)}$ [\[eq:boundedg\]](#eq:boundedg){reference-type="eqref" reference="eq:boundedg"} and [\[eq:decayg4\]](#eq:decayg4){reference-type="eqref" reference="eq:decayg4"}, this is bounded by $$\begin{aligned}
&2C \int_{W_n^2} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y)}{|W_n\cap W_{n,x-y}|^2
} {\mathrm d}x {\mathrm d}y\\
&+ 4 C \int_{W_n^3} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y) \mathds{1}_{A(r_i,r_{i+1})}(x-z)}{|W_n\cap W_{n,x-y}||W_n\cap W_{n,x-z}| }
{\mathrm d}x {\mathrm d}y {\mathrm d}z,\\
&+ C \int_{W_n^4} \frac{\mathds{1}_{A(r_i,r_{i+1})}(x-y) \mathds{1}_{A(r_i,r_{i+1})}(z-w)}{|W_n\cap W_{n,x-y}||W_n\cap W_{n,z-w}|} [g^{(4)}(x,y,z,w)-g(x-y)g(z-w)] {{\mathrm d}x {\mathrm d}y} {\mathrm d}w {\mathrm d}z. \end{aligned}$$ which is clearly of the right order.
---
abstract: |
  We propose a new continuum model for a random genetic drift problem by employing a dynamical boundary condition approach. The new model can be viewed as a regularized Kimura equation, which admits a continuous solution and recovers the original system in the limit. The existence and uniqueness of the strong solution of the regularized system are shown. Finally, we present some numerical results for the regularized model, which indicate that the model can capture the main features of the original model, including gene fixation and conservation of the first moment.
author:
- "Chun Liu[^1]"
- "Jan-Eric Sulzbach[^2]"
- "Yiwei Wang[^3]"
bibliography:
- Ref.bib
title: "On a Continuum Model for Random Genetic Drift: A Dynamical Boundary Condition Approach"
---
# Introduction
Genetic drift is a fundamental process in molecular evolution [@ewens2004mathematical]. It refers to the random fluctuations in allele frequencies within a population over time. Typical mathematical models for genetic drift include the Wright--Fisher model [@fisher1923xxi; @wright1931evolution] and its diffusion limit, known as the Kimura equation [@wright1945differential; @kimura1962probability].
Without mutation, migration, and selection process, the Kimura equation can be written as [@ewens2004mathematical] $$\label{Kimura_eq}
\partial_t \rho = \partial_{xx}^2 ( x(1 - x) \rho) \ , \quad x \in (0, 1) \ , \ t >0 \ .$$
Here, the variable $x$ is the fraction of the focal allele $A_1$ in the population and $(1 - x)$ is that of $A_2$; the function $\rho(x, t)$ is the probability density of finding a relative composition $x \in [0,1]$ of gene $A_1$ at time $t$. Although ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}) is a linear PDE for $\rho(x, t)$, due to the degeneracy of $x(1 - x)$ at the boundaries $x = 0$ and $x = 1$ [@epstein2010wright; @epstein2013degenerate], it is difficult to impose a suitable boundary condition [@feller1951diffusion]. In [@Shiryayev1992], Kolmogorov suggested that the equation ([\[Kimura_eq\]](#Kimura_eq){reference-type="eqref" reference="Kimura_eq"}) is only reasonable for $x$ not too close to $0$ and $1$.
To maintain its biological meaning, $\rho(x, t)$ satisfies the mass conservation $$\int_{0}^1 \rho(x, t) \mathrm{d}x = 1 \ ,$$ which suggests imposing a no-flux boundary condition on ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}), given by $$\partial_x(x(1 - x) \rho) = 0 \ , \quad x = 0 ~\text{or}~ 1 \ , \quad \forall t \ .$$ However, such a boundary condition excludes the existence of a regular solution of ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}). In [@mckane2007singular; @chalub2009non], the authors prove that for a given $\rho_0 \in \mathcal{BM}^+([0, 1])$, there exists a unique solution to ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}) with $\rho(x, t) \in L^{\infty} ([0, \infty), \mathcal{BM}^+([0, 1]))$, and the solution $\rho(x, t)$ can be expressed as $$\label{measure_solution}
\rho(x, t) = q(x, t) + a(t) \delta_0 + b(t) \delta_1 \ .$$ Here $\mathcal{BM}^+([0, 1])$ is the space of all (positive) Radon measures on $[0,1]$, $\delta_0$ and $\delta_1$ are Dirac delta functions at $0$ and $1$ respectively, and $q(x, t) \in C^{\infty} (\mathbb{R}^+; C^{\infty}([0, 1]))$ is a classical solution to ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}). Moreover, it is proved in [@chalub2009non] that, as $t \rightarrow \infty$, $q (x, t) \rightarrow 0$ uniformly, and $a(t)$ and $b(t)$ are monotonically increasing functions such that $$\label{eq_a_b}
\begin{aligned}
& a^{\infty} = \lim_{t \rightarrow \infty } a(t) = \int_{0}^1 (1 - x) \rho_0(x) \mathrm{d}x \ ,\\
& b^{\infty} = \lim_{t \rightarrow \infty } b(t) = \int_{0}^1 x \rho_0(x) \mathrm{d}x \ . \\
\end{aligned}$$ The equilibrium ([\[eq_a\_b\]](#eq_a_b){reference-type="ref" reference="eq_a_b"}) is determined by the fact that the Kimura equation ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}) preserves the first moment of the probability density [@chalub2009non]. Indeed, a direct computation shows that $$\begin{aligned}
& \frac{\mathrm{d}}{\mathrm{d}t} \int_0^1 x \rho(x, t) \mathrm{d}x = \int_0^1 x \partial_{xx}^2 ( x(1 - x) \rho) \mathrm{d}x \\
& = \int_0^1 \partial_{xx} (x) (x (1-x) \rho) \mathrm{d}x - \partial_x (x) (x (1-x) \rho) \Big|_{0}^1 + x (\partial_x (x(1 - x) \rho )) \Big|_{0}^1 = 0. \\
\end{aligned}$$ In biology, the function $\psi(x) = x$ is known as the fixation probability function, which describes the probability of allele $A_1$ fixing in a population while allele $A_2$ goes extinct, under the condition of starting from an initial composition of $x$.
The measure-valued solution ([\[measure_solution\]](#measure_solution){reference-type="ref" reference="measure_solution"}) imposes difficulties in studying the Kimura equation both numerically [@duan2019numerical] and theoretically [@casteras2022hidden]. For instance, although many different numerical methods for the Kimura equation have been developed, including finite volume methods [@xu2019behavior; @zhao2013complete], finite Lagrangian methods [@duan2019numerical], an optimal mass transportation method [@carrillo2022optimal], and SDE-based simulation methods [@dangerfield2012boundary; @jenkins2017exact], it is difficult to obtain a good numerical approximation to ([\[measure_solution\]](#measure_solution){reference-type="ref" reference="measure_solution"}), which contains Dirac delta functions, and to accurately capture the dynamics of $a(t)$ and $b(t)$.
The purpose of this paper is to propose a new continuum model for random genetic drift by incorporating a dynamical boundary condition into the Kimura equation. The new model is given by $$\begin{cases}
& \rho_t = \partial_{xx}^2 (x (1 - x) \rho), \quad x \in (\delta, 1 - \delta), \\
& \rho u(\delta, t) = - a'(t), \quad \rho u(1 - \delta, t) = b'(t) \\
& a'(t) = - ( \epsilon a - \rho(\delta , t) ) \\
& b'(t) = - ( \epsilon b - \rho(1 - \delta , t) ). \\
\end{cases}$$ Here $\rho u = - \partial_x (x (1 - x) \rho)$ denotes the probability flux, and $\delta$ and $\epsilon$ are artificial small parameters. The key idea is to introduce an additional jump process between the bulk and the boundary, which leads to a dynamical boundary condition. Such a regularized Kimura equation admits a classical solution for fixed $\epsilon$ and $\delta$, and can be solved by a standard numerical scheme. Formally, the original Kimura equation can be viewed as the singular limit $\epsilon \rightarrow 0$ and $\delta \rightarrow 0$. Numerical tests show that the qualitative behaviors of the original Kimura equation, such as gene fixation and conservation of the first moment, can be well captured with small $\epsilon$ and $\delta$.
The rest of this paper is organized as follows. In Section 2, we introduce some background, including the Wright-Fisher model, the variational structure of the Kimura equation, and the dynamical boundary approach for generalized diffusions. The new continuum model is presented in Section 3. The existence and uniqueness of the strong solution of the regularized system is shown in Section 4. Finally, we perform a numerical study of the regularized system and demonstrate the effects of $\delta$ and $\epsilon$, as well as the ability of the new model to capture the key features of the original Kimura equation.
# Background
## From a Wright-Fisher model to Kimura equation
We first briefly review the formal derivation of the Kimura equation from the Wright-Fisher model. Consider two competing alleles, denoted by $A_1$ and $A_2$, in a diploid population with fixed size $N$ (i.e., $2N$ alleles in total). Assume $u$ and $v$ are the probabilities of the mutations $A_1 \rightarrow A_2$ and $A_2 \rightarrow A_1$ respectively. Let $X_k$ be the proportion of individuals of type $A_1$ in generation $k$. The original Wright-Fisher model describes random fluctuations in the genetic expression by a discrete-time, discrete-state Markov chain. The discrete state space is defined as $I = \{0, \frac{1}{2N}, \ldots, 1\}$, and the transition probability is given by [@feller1951diffusion; @ethier1977error] $$\mathbb{P} \left( X_{k+1} = \frac{j}{2N} ~\Big| X_k = \frac{i}{2 N} \right) = \tbinom{2N}{j} p_i^j (1 - p_i)^{2N - j}\ ,$$ where $$p_i = (1 - u) \frac{i}{2 N} + v (1 - \frac{i}{2 N})\ .$$ In the case that $u = v = 0$ (without mutations), we have $$\mathbb{P} \left( X_{k+1} = \frac{j}{2N} ~\Big| X_k = \frac{i}{2 N} \right) = \tbinom{2N}{j} (\frac{i}{2N})^j (1 - \frac{i}{2N})^{2N - j}\ .$$[\[Kimura_0\]]{#Kimura_0 label="Kimura_0"} This is known as the pure drift case. Notice that in the pure drift case, the transition probability to any other state becomes 0 if $X_k = 0$ or $X_k = 1$, which are therefore absorbing states of the system. There is no possibility of fixation if $u > 0$ and $v > 0$.
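The absorbing behavior of the pure drift chain is easy to observe numerically. The short Python sketch below (an illustration of ours, not taken from the cited references) simulates the chain by repeated binomial sampling and empirically recovers the classical fact, recalled in the introduction via the fixation probability function $\psi(x) = x$, that the probability of $A_1$ becoming fixed equals its initial frequency.

```python
import numpy as np

rng = np.random.default_rng(1)


def wright_fisher_pure_drift(N, x0, n_generations):
    """One trajectory of the pure-drift Wright-Fisher chain:
    given X_k = i/(2N), the next state is Binomial(2N, i/(2N)) / (2N)."""
    x = x0
    for _ in range(n_generations):
        x = rng.binomial(2 * N, x) / (2 * N)
        if x in (0.0, 1.0):  # absorbing states: the allele is lost or fixed
            break
    return x


N, x0 = 50, 0.3
n_rep, n_generations = 2000, 2000
finals = np.array([wright_fisher_pure_drift(N, x0, n_generations) for _ in range(n_rep)])
print("fraction of runs with A_1 fixed:", np.mean(finals == 1.0))  # close to x0
print("fraction of runs with A_1 lost :", np.mean(finals == 0.0))  # close to 1 - x0
```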
When the population size $N$ is large, the Wright-Fisher model can be approximated, at least formally, by a continuous-state, continuous-time process $X(t)$, which represents the proportion of alleles of type $A_1$. The dynamics of $X(t)$ is described by an SDE [@dangerfield2012boundary] $$\mathrm{d}X_t = (u - (u + v) X_t) \mathrm{d}t + \sqrt{X_t (1 - X_t)} \mathrm{d}W_t \ .$$ The corresponding Fokker-Planck equation is given by [@ethier1977error; @tran2013introduction] $$\rho_t + \partial_x ((u(1 - x) - v x ) \rho) = \frac{1}{2} \partial_{xx}^2 ( x (1 - x) \rho).$$ In the pure drift case, we obtain a Kimura equation $$\label{Kimura_1}
\partial_t \rho = \frac{1}{2} \partial_{xx}^2 ( x(1 - x) \rho), \quad x \in (0, 1), \ t >0.$$ The corresponding SDE can be written as $$\mathrm{d}X_t = \sqrt{X_t (1 - X_t)} \mathrm{d}W_t.$$ It is convenient to re-scale time by letting $t' = t/2$, and the Kimura equation [\[Kimura_1\]](#Kimura_1){reference-type="eqref" reference="Kimura_1"} then becomes [\[Kimura_eq\]](#Kimura_eq){reference-type="eqref" reference="Kimura_eq"}.
One can further rescale time and define $t = \frac{k}{2 N}$ [@tran2013introduction]. Writing $\delta X_t$ for the increment of $X$ over a time step $\delta t$, it is straightforward to show that $$\begin{aligned}
& \mathbb{E} (\delta X_t) = 0 \\
& \mathbb{E} ( (\delta X_t)^2) = X_t (1 - X_t) \delta t \\
& \mathbb{E} ( (\delta X_t)^k) = o (\delta t), \quad k \geq 3 \\
\end{aligned}$$
## Formal Variational Structure
Formally, the Kimura equation ([\[Kimura_eq\]](#Kimura_eq){reference-type="ref" reference="Kimura_eq"}) can be viewed as a generalized diffusion, derived from the energy-dissipation law [@duan2019numerical] $$\label{Energy-Dissipation}
\frac{\mathrm{d}}{\mathrm{d}t} \int_{0}^1 \rho \ln \left( x(1 - x) \rho \right) \mathrm{d}x = - \int_{0}^1 \frac{\rho}{x(1 - x)} |u|^2 \mathrm{d}x\ ,$$ where the distribution $\rho$ satisfies the kinematics $$\label{kinemtics}
\begin{aligned}
& \partial_t \rho + \partial_x (\rho u) = 0\ , \quad x \in (0, 1) \\
& \rho u = 0, \quad x = 0~\text{or}~1 \ . \\
\end{aligned}$$ Here the no-flux boundary condition $\rho u = 0$ guarantees the mass conservation.
To obtain the Kimura equation from the energy-dissipation law ([\[Energy-Dissipation\]](#Energy-Dissipation){reference-type="ref" reference="Energy-Dissipation"}), one needs to introduce a flow map denoted as $x(X, t): [0,1] \rightarrow [0, 1]$ associated with the velocity field $u(x, t)$. For given $X$, the flow map satisfies the following ordinary differential equation: $$\frac{\mathrm{d}}{\mathrm{d}t} x(X, t) = u(x(X, t), t), \quad x(X, 0) = X,$$ where $X$ represents the Lagrangian coordinate, and $x$ represents the Eulerian coordinate. Due to the mass conservation, $\rho(x(X, t), t)$ satisfies the kinematics $$\rho(x(X, t), t) = \rho_0(X) / \det F(X, t)\ , \quad \det F(X, t) = \partial_X x(X, t)\ , \quad X \in (0, 1)$$ in Lagrangian coordinates, where $\rho_0(X)$ is the initial density. Recall $u(\bm{x}(X, t), t) = x_t(X, t)$, the energy-dissipation law can be written as $$\label{ED_Lagrangian}
\frac{\mathrm{d}}{\mathrm{d}t} \int_{0}^1 \rho_0 \ln(x(1 - x)) + \rho_0 \ln (\rho_0 / \partial_X x) \mathrm{d}X = - \int_{0}^1 \frac{\rho_0}{x(1 - x)} |x_t|^2 \mathrm{d}X$$ in Lagrangian coordinates, which can be interpreted as an $L^2$-gradient flow in terms of $x(X, t)$. By a standard energetic variation procedure (see [@duan2019numerical] for details), we can derive the force balance equation $$\frac{\rho}{x(1 - x)} u = - \rho \partial_x( \ln (\rho x(1 - x)))\ ,$$ which can be simplified as $$\label{velocity_a}
\rho u = - \partial_x (x (1 - x) \rho)\ .$$ Combining the velocity equation with the kinematics ([\[kinemtics\]](#kinemtics){reference-type="ref" reference="kinemtics"}), we can recover the original Kimura equation. The variational structure ([\[ED_Lagrangian\]](#ED_Lagrangian){reference-type="ref" reference="ED_Lagrangian"}) naturally leads to the Lagrangian algorithms developed in [@duan2019numerical; @carrillo2022optimal].
Alternatively, ([\[Energy-Dissipation\]](#Energy-Dissipation){reference-type="ref" reference="Energy-Dissipation"}) can be interpreted as a Wasserstein-type gradient flow, with the transport distance defined by $$\label{WS_distance}
\begin{aligned}
& d^2(\rho_1, \rho_2) = \mathop{\min}_{(\rho, \mathbf{u})} \int_{0}^1 \int_0^1 \frac{\rho}{x(1 - x)} |u|^2 \mathrm{d}x \mathrm{d}t~, \\
& \text{subject to}~ \rho_t + \partial_x (\rho u) = 0~,\quad \rho(x, 0) = \rho_1~, \quad \rho(x, 1) = \rho_2. \\
\end{aligned}$$ The distance ([\[WS_distance\]](#WS_distance){reference-type="ref" reference="WS_distance"}) is known as the Wasserstein-Shahshahani distance [@chalub2021gradient; @casteras2022hidden].
As pointed out in [@casteras2022hidden], the variational structure presented in this subsection is rather formal. Although we can define $U(x) = \ln (x (1 - x))$ [@casteras2022hidden; @tran2015free], which plays the role of the internal energy as in a standard Fokker-Planck equation, the system does not admit an equilibrium distribution of the form $\rho^{\rm eq} \propto \exp( - U)$, as $\int_{0}^1 \frac{1}{x (1 - x)} \mathrm{d}x$ is unbounded.
## Dynamical boundary condition
As suggested in [@casteras2022hidden], a natural attempt at compensating for the singularities is to relax the boundary condition by taking the bulk/surface interaction into account in the model.
Before we present the application of the dynamical boundary condition approach to the Kimura model, we first briefly review the dynamical boundary approach for generalized diffusions in this subsection. Consider a bounded domain $\Omega$, and let $\Gamma = \partial\Omega$ be the boundary of $\Omega$. Classical PDE models on $\Omega$ often impose Dirichlet, Neumann, or Robin boundary conditions on a physical variable $\rho$. The basic assumption behind this is that $\rho$ is regular enough (e.g. $\rho \in C^m (\bar{\Omega})$ for some $m$), so that the trace $\rho |_{\Gamma}$ is well defined and the boundary condition can be formulated in terms of $\rho |_{\Gamma}$. However, as in the classical Kimura equation, it may not always be possible to take the trace.
The idea of the dynamical boundary condition approach is to introduce another function $\sigma \in C(\Gamma)$ to describe the surface density on the boundary, and to view the exchange between the bulk and surface densities as a chemical reaction $\rho \ce{<=>} \sigma$ [@knopf2021phase; @wang2022some]. Due to the mass conservation, $\rho$ and $\sigma$ satisfy $$\frac{\mathrm{d}}{\mathrm{d}t} \left(\int_{\Omega} \rho \mathrm{d}\bm{x}+ \int_{\Gamma} \sigma \mathrm{d}S \right) = 0,$$ which leads to the kinematics in Eulerian coordinates $$\begin{aligned}
& \rho_t + \nabla \cdot (\rho \mathbf{u}) = 0, ~~ \bm{x}\in \Omega, \\
& \rho \mathbf{u}\cdot {\bm \nu} = R_t, \quad \sigma_t + \nabla_{\Gamma} \cdot (\sigma {\bm v}_{\Gamma}) = R_t, ~~ \bm{x}\in \Gamma \\
\end{aligned}$$ where ${\bm \nu}$ is the outer normal of $\Omega$ and $R$ is the reaction trajectory for the chemical reaction $\rho \ce{<=>} \sigma$ that represents the density exchange between bulk and surface [@wang2020field].
In general, systems with dynamical boundary condition can be modeled through an energy-dissipation law $$\frac{\mathrm{d}}{\mathrm{d}t} \left( \mathcal{F}_b(\rho) + \mathcal{F}_s(\sigma) \right) = - \left( \int_{\Omega} \eta_b (\rho) |\mathbf{u}|^2 \mathrm{d}\bm{x}+ \int_{\partial\Omega} \eta_s (\sigma) |{\bm v}|^2 + R_t \Psi(R, R_t) \mathrm{d}S \right)$$ Here $\mathcal{F}_b(\rho)$ and $\mathcal{F}_s(\sigma)$ are free energies in the bulk and surface respectively, $\eta_b (\rho) > 0$ and $\eta_{s} (\sigma) > 0$ are friction coefficients for bulk and surface diffusions, $R_t \Psi(R, R_t) \geq 0$ is the dissipation due to the bulk/surface interaction, which in general is non-quadratic in terms of $R_t$ [@wang2020field]. An energetic variational procedure leads to the force balance equations for the mechanical and chemical parts $$\begin{cases}
& \eta_{b} (\rho) \mathbf{u}= - \rho \nabla \mu_b (\rho), \quad \mu_b = \frac{\delta \mathcal{F}_b}{\delta \rho} \quad \bm{x}\in \Omega \\
& \eta_s (\sigma) {\bm v} = - \sigma \nabla_{\Gamma} \mu_s (\sigma), \quad \mu_s = \frac{\delta \mathcal{F}_s}{\delta \sigma}, \quad \bm{x}\in \Gamma \\
& \Psi(R, R_t) = - (\mu_s(\sigma) - \mu_b(\rho)), \quad \bm{x}\in \Gamma \\
\end{cases}$$ where $\mu_s(\sigma) - \mu_b(\rho)$ is the affinity of the bulk-surface reaction. Different choices of the free energy and the dissipation lead to different systems [@wang2022some].
# A regularized Kimura equation
In this section, we propose a regularized Kimura equation by applying the dynamical boundary condition approach in this specific one-dimensional setting.
For a given $\delta > 0$, let $\rho(x, t)$ represent the probability density at $x$ for $x \in (\delta, 1 - \delta)$. We denote the probabilities at $x = 0$ and $x = 1$ as $a(t)$ and $b(t)$, respectively. Due to the conservation of mass, we have $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t} \left( \int_{\delta}^{1-\delta} \rho(x, t) \mathrm{d}x + a(t) + b(t) \right) = 0
\end{aligned}$$ which is the consequence of the following kinematics conditions $$\label{kinemtics_DB}
\begin{aligned}
& \partial_t \rho + \partial_x (\rho u) = 0, \quad x \in (\delta, 1 - \delta) \\
& \rho u(\delta, t) = - \dot{R}_0, \quad \rho u(1 - \delta, t) = \dot{R}_1 (t) \\
& a'(t) = \dot{R}_0, \quad b'(t) = \dot{R}_1(t) \\
\end{aligned}$$ Here $R_0(t)$ and $R_1(t)$ denote the reaction trajectories from $x = \delta$ to $x = 0$ and from $x = 1 - \delta$ to $x = 1$, respectively. It is worth mentioning that $\delta$ is a rather artificial regularization parameter.
The overall system can be modeled through the energy-dissipation law $$\label{ED_RK}
\begin{aligned}
& \frac{\mathrm{d}}{\mathrm{d}t} \int_{\delta}^{1 - \delta} \rho \ln \left( x(1 - x) \rho \right)\mathrm{d}x + G_0(a) + G_1(b) \\
& = - \int_{\delta}^{1 - \delta} \frac{\rho}{x(1 - x)} |u|^2 \mathrm{d}x - \Gamma_0(R_0, \dot{R_0}) - \Gamma_1(R_1, \dot{R_1})
\end{aligned}
$$ where $G_0(a)$ and $G_1(b)$ are free energies on the boundary, and $\Gamma_0(R_0, \dot{R}_0)$ and $\Gamma_1(R_1, \dot{R}_1)$ are dissipations due to the bulk/surface jump. The energy-dissipation law in the bulk region ($x \in (\delta, 1- \delta)$) is exactly the same as the one for the original Kimura equation ([\[Energy-Dissipation\]](#Energy-Dissipation){reference-type="ref" reference="Energy-Dissipation"}). The remaining question is how to choose $G_i$ $(i = 0, 1)$ and $\Gamma_i$ $(i = 0, 1)$ to capture the qualitative behavior of the original Kimura equation. In the current work, we take $$\label{Form_G}
G_0(q) = G_1(q) = G(q) = q \ln (\epsilon \delta (1 - \delta) q )\ ,$$ and $$\Gamma_0(R, \dot{R}) = \dot{R} \ln \left( \frac{\dot{R}}{\gamma_0 a} + 1 \right), \quad \Gamma_1(R, \dot{R}) = \dot{R} \ln \left( \frac{\dot{R}}{\gamma_1 b} + 1 \right) \ .$$ Here $\gamma_0$ and $\gamma_1$ are the reaction rates from the surface to the bulk. In the current study, we take $\gamma_0 = \gamma_1 = \epsilon$. We will study the effects of $\gamma_0$ and $\gamma_1$ in future work.
By an energetic variational procedure [@wang2022some], we can obtain the velocity equation $$\label{velocity}
\rho u = - \partial_x (x (1 - x) \rho), \quad x \in (\delta, 1 - \delta),$$ and the equations for the reaction rates $$\label{Eq_R}
\begin{aligned}
& \ln \left( \frac{\dot{R}_0}{\epsilon a} + 1 \right) = - (\ln (\epsilon a) - \ln \rho(\delta, t)) \\
& \ln \left( \frac{\dot{R}_1}{\epsilon b} + 1 \right) = - (\ln (\epsilon b) - \ln \rho(1 - \delta, t) ) \\
\end{aligned}$$ One can rewrite ([\[Eq_R\]](#Eq_R){reference-type="ref" reference="Eq_R"}) as $$\label{Eq_RR}
\dot{R}_0 = \rho(\delta, t) - \epsilon a, \quad \dot{R}_1 = \rho(1 - \delta, t) - \epsilon b \ .$$ Combining ([\[velocity\]](#velocity){reference-type="ref" reference="velocity"}) and ([\[Eq_RR\]](#Eq_RR){reference-type="ref" reference="Eq_RR"}) with the kinematics ([\[kinemtics_DB\]](#kinemtics_DB){reference-type="ref" reference="kinemtics_DB"}), one arrives at the final equation $$\label{Eq_Final_LMA}
\begin{cases}
& \partial_t \rho = - \partial_x (\rho u), \quad x \in [\delta, 1 - \delta] \\
& \rho u = - \partial_x (x (1 - x) \rho), \quad x \in (\delta, 1 - \delta), \\
& \rho u(\delta, t) = - a'(t), \quad \rho u(1 - \delta, t) = b'(t) \\
& a'(t) = \rho(\delta, t) - \epsilon a \\
& b'(t) = \rho(1 - \delta, t) - \epsilon b \\
\end{cases}$$
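To make the statement about standard numerical schemes concrete, the following Python sketch discretizes ([\[Eq_Final_LMA\]](#Eq_Final_LMA){reference-type="ref" reference="Eq_Final_LMA"}) with a simple conservative finite-volume scheme in space and explicit Euler in time, coupling the bulk flux to the two boundary ODEs. The grid, time step, parameter values, and initial data are illustrative choices of ours and are not taken from the paper; the printed diagnostics make it easy to monitor mass conservation and the approximate conservation of the first moment discussed in Remark 1 below.

```python
import numpy as np

# Illustrative parameters (our choices, not taken from the paper)
delta, eps = 1e-2, 1e-3
M = 100                                  # number of cells in (delta, 1 - delta)
h = (1.0 - 2.0 * delta) / M
x = delta + (np.arange(M) + 0.5) * h     # cell centers
dt = 0.2 * h ** 2                        # explicit time step (stability)
T = 1.0

rho = np.ones(M) / (1.0 - 2.0 * delta)   # uniform initial density with unit mass
a, b = 0.0, 0.0

t = 0.0
while t < T:
    f = x * (1.0 - x) * rho
    flux = np.empty(M + 1)               # rho*u at the cell interfaces
    flux[1:-1] = -(f[1:] - f[:-1]) / h   # rho*u = -d/dx (x(1-x) rho)
    da = rho[0] - eps * a                # a'(t) = rho(delta, t) - eps*a
    db = rho[-1] - eps * b               # b'(t) = rho(1 - delta, t) - eps*b
    flux[0], flux[-1] = -da, db          # dynamic boundary conditions
    rho = rho - dt * (flux[1:] - flux[:-1]) / h
    a, b = a + dt * da, b + dt * db
    t += dt

mass = rho.sum() * h + a + b             # should remain equal to 1
first_moment = (x * rho).sum() * h + b   # psi(x) = x, so psi(0)*a + psi(1)*b = b
print(f"a(T) = {a:.4f}, b(T) = {b:.4f}, mass = {mass:.6f}, first moment = {first_moment:.4f}")
```

By construction, the discrete boundary fluxes exactly balance the increments of $a$ and $b$, so the total discrete mass is conserved up to round-off, mirroring the continuous model.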
**Remark 1**. *One important feature of the original Kimura equation is that it preserves the first moment, i.e., $\frac{\mathrm{d}}{\mathrm{d}t} \int x \rho \mathrm{d}x = 0$. For the regularized system ([\[Eq_Final_LMA\]](#Eq_Final_LMA){reference-type="ref" reference="Eq_Final_LMA"}), a direct calculation shows that for $\psi(x) = x$, $$\begin{aligned}
& \frac{\mathrm{d}}{\mathrm{d}t} \left( \int_{\delta}^{1 - \delta} \psi(x) \rho(x, t) \mathrm{d}x + a \psi(0) + b \psi(1) \right) \\
& = \int_{\delta}^{1 - \delta} - \partial_x \psi(x) \partial_x ( x(1 - x) \rho ) \mathrm{d}x - \psi(x) (\rho u) \Big |_{\delta}^{ 1 - \delta} + a'(t) \psi(0) + b'(t) \psi(1) \\
& = \int_{\delta}^{1 - \delta} \partial_{xx} \psi(x) ( x(1 - x) \rho ) \mathrm{d}x - \partial_x \psi(x) (x(1 - x) \rho ) \Big|_{\delta}^{1 - \delta} + \delta (b'(t) - a'(t)) \\
& = - \delta (1 - \delta) (\rho(1 - \delta) - \rho(\delta)) + \delta (b'(t) - a'(t)).
\end{aligned}$$ The regularized system ([\[Eq_Final_LMA\]](#Eq_Final_LMA){reference-type="ref" reference="Eq_Final_LMA"}) no longer preserves the first moment for $\delta > 0$. However, numerical simulations show that the change in the first moment is very small if $\delta$ is small.*
For the regularized system ([\[Eq_Final_LMA\]](#Eq_Final_LMA){reference-type="ref" reference="Eq_Final_LMA"}), since $$a'(t) + \epsilon a = \rho(\delta, t)\ ,$$ we have $$a(t) = e^{-\epsilon t} \int_{0}^t e^{\epsilon s} \rho(\delta, s) \mathrm{d}s$$ with the initial condition $a(0) = 0$. As a consequence, $$a'(t) = -\epsilon e^{-\epsilon t} \int_{0}^t e^{\epsilon s} \rho(\delta, s) \mathrm{d}s + \rho(\delta, t)\ .$$ Hence, the boundary condition can be interpreted as a **delayed boundary condition** $$\label{eq_delayed_bc}
\rho u (\delta, t) = \epsilon e^{-\epsilon t} \int_{0}^t e^{\epsilon s} \rho(\delta, s) \mathrm{d}s - \rho(\delta, t)\ .$$
Formally, if we let $\epsilon \rightarrow 0$, the boundary conditions become $$\rho u (\delta, t) = - \rho(\delta, t), \quad \rho u (1 - \delta, t) = \rho(1 - \delta, t)\ ,$$ or equivalently, $$\partial_x (x (1 - x) \rho) = \rho ~~\text{at}~~ x = \delta\ , \qquad \partial_x (x (1 - x) \rho) = - \rho ~~\text{at}~~ x = 1 - \delta\ .$$
**Remark 2**. *If $\epsilon = 0$, the equation becomes $$\label{Eq_Final_LMA_epsilon_0}
\begin{cases}
& \partial_t \rho = - \partial_x (\rho u), \quad x \in (\delta, 1 - \delta) \\
& \rho u = - \partial_x (x (1 - x) \rho), \quad x \in (\delta, 1 - \delta), \\
& \rho u(\delta, t) = - \rho(\delta, t), \quad \rho u(1 - \delta, t) = \rho(1 - \delta, t) \\
& a'(t) = \rho(\delta, t) \\
& b'(t) = \rho(1 - \delta, t) \ . \\
\end{cases}$$ But the energy-dissipation law ([\[ED_RK\]](#ED_RK){reference-type="ref" reference="ED_RK"}) with ([\[Form_G\]](#Form_G){reference-type="ref" reference="Form_G"}) is no longer valid in this limiting case.*
Next, we perform some formal analysis of the regularized model. At equilibrium, we have $$\label{ab_inf}
\rho^{\infty}(\delta) = \epsilon a^{\infty}, \quad \rho^{\infty} (1 - \delta) = \epsilon b^{\infty},$$ and $\rho^{\infty}$ satisfies $$
\partial_{xx}^2 ( x(1 - x) \rho^{\infty}) = 0, \quad x \in (\delta, 1 - \delta)\ .$$
Given $\epsilon > 0$, $a^{\infty} > 0$, and $b^{\infty} > 0$, the equation $$\label{Eq_steady_state}
\begin{aligned}
& \partial_{xx} ( x(1 - x) \rho^{\infty}) = 0, \quad x \in (\delta, 1 - \delta) \\
& \rho(\delta) = \epsilon a^{\infty}, \quad \rho(1 - \delta) = \epsilon b^{\infty} \\
\end{aligned}$$ has a unique solution for any fixed $\delta > 0$.
It is straightforward to show that the classical solution to ([\[Eq_steady_state\]](#Eq_steady_state){reference-type="ref" reference="Eq_steady_state"}) is given by $$\rho^{\infty}(x) = \frac{Ax + B}{x (1 - x)}\ .$$ Then according to ([\[ab_inf\]](#ab_inf){reference-type="ref" reference="ab_inf"}), $A$ and $B$ satisfy $$A \delta + B = \delta (1 - \delta) \epsilon a^{\infty}, \quad A (1 - \delta) + B = \delta (1 - \delta) \epsilon b^{\infty} \ .$$ One can solve for $A$ and $B$ in terms of $a^{\infty}$ and $b^{\infty}$, that is, $$\label{value_A_B}
\begin{aligned}
& A = \epsilon \frac{\delta (1 - \delta)}{1 - 2 \delta} (b^{\infty} - a^{\infty})\ , \quad B = \epsilon \frac{\delta (1 - \delta)}{1 - 2 \delta} ( (1 - \delta) a^{\infty} - \delta b^{\infty})\ . \\
\end{aligned}$$ Hence, for fixed $\delta$, by letting $\epsilon \rightarrow 0$, the equilibrium solution $\rho^{\infty}$ goes to $0$. For fixed $\epsilon$, by letting $\delta \rightarrow 0$, we also have $A, B \rightarrow 0$.
Moreover, recalling the mass conservation $$\int_{\delta}^{1 - \delta} \rho^{\infty} \mathrm{d}x + a^{\infty} + b^{\infty} = 1 \,$$ and using $$\begin{aligned}
\int_{\delta}^{1 - \delta} \rho^{\infty} \mathrm{d}x & = \int_{\delta}^{1 - \delta} \frac{B}{x} + \frac{A + B}{1 - x} \mathrm{d}x = B \ln x \Big|_{\delta}^{1 - \delta} - (A + B) \ln (1 - x) |_{\delta}^{1 - \delta } \\
& = (2 B + A) (\ln (1 - \delta) - \ln \delta) = \epsilon \delta (1 - \delta) (a^{\infty} + b^{\infty}) (\ln (1 - \delta) - \ln \delta)\ , \\
\end{aligned}$$ we can obtain $$a^{\infty} + b^{\infty} = \frac{1}{1 + \epsilon \delta (1 - \delta) (\ln (1 - \delta) - \ln \delta) }.$$ Therefore, for fixed $\delta$, by taking $\epsilon \rightarrow 0$, $a^{\infty} + b^{\infty} \rightarrow 1$. Moreover, for fixed $\epsilon$, since $\delta (1 - \delta) (\ln (1 - \delta) - \ln \delta) \rightarrow 0$ as $\delta \rightarrow 0$, we also have $a^{\infty} + b^{\infty} \rightarrow 1$.
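As a quick sanity check (ours, for illustration only), the snippet below evaluates this closed-form expression for a few parameter values and confirms that $a^{\infty} + b^{\infty}$ approaches $1$ as either $\epsilon$ or $\delta$ tends to zero.

```python
import numpy as np


def a_plus_b_infinity(eps, delta):
    """Evaluate the closed-form expression for a^inf + b^inf derived above."""
    return 1.0 / (1.0 + eps * delta * (1.0 - delta) * (np.log(1.0 - delta) - np.log(delta)))


for eps in (1e-1, 1e-2, 1e-3):
    for delta in (1e-1, 1e-2, 1e-3):
        print(f"eps = {eps:g}, delta = {delta:g}: a_inf + b_inf = {a_plus_b_infinity(eps, delta):.6f}")
```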
If the first moment were exactly conserved (which holds approximately for small $\delta$, cf. Remark 1), the equilibrium state would also satisfy $$\int_{\delta}^{1 - \delta} x \rho^{\infty} \mathrm{d}x + b^{\infty} = \int_{\delta}^{1 - \delta} x \rho_0 \mathrm{d}x + b_0\ .$$

For a uniform initial condition, we have $a^{\infty} = b^{\infty}$, $A = 0$ and $$B = \epsilon \delta (1 - \delta) a^{\infty}\ .$$ We can easily compute $$\int_{\delta}^{1 - \delta} \rho^{\infty} \mathrm{d}x = 2 B \left( \ln (1 - \delta) - \ln \delta \right) = 2 \epsilon \delta (1 - \delta) a^{\infty} \left( \ln (1 - \delta) - \ln \delta \right)\ .$$

In the limit of $\epsilon \rightarrow 0$ and $\delta \rightarrow 0$, we therefore expect $\rho^{\infty} \rightarrow 0$ and $a^{\infty} = b^{\infty} \rightarrow \frac{1}{2}$, which is consistent with ([\[eq_a\_b\]](#eq_a_b){reference-type="ref" reference="eq_a_b"}) for a uniform initial density.
## Convergence to equilibrium
In this subsection, we prove the well-posedness of the regularized system by utilizing its variational structure.

Let $N$ be an arbitrary positive integer and let $\tau = \frac{T}{N}$ denote the time step size. We define $\{ \rho^n, a^n, b^n \}$ by the following minimizing movement scheme $$\{ \rho^{n+1}, a^{n+1}, b^{n+1} \} = \mathop{\arg\min}_{\rho, a, b} \frac{1}{2 \tau} d^2 (\{\rho, a, b \}, \{ \rho^{n}, a^{n}, b^{n} \} ) + \mathcal{F}_{\epsilon, \delta}[\rho, a, b]\ ,$$
where $$\mathcal{F}_{\epsilon, \delta}[\rho, a, b] = \int_{\delta}^{1 - \delta} \rho \ln ( x (1 - x) \rho ) \mathrm{d}x + a \ln (\epsilon \delta (1 - \delta) a) + b \ln (\epsilon \delta (1 - \delta) b)\ ,$$ which is bounded from below for fixed $\epsilon$ and $\delta$. (A uniform bound on $\rho$, $a$ and $b$ may be needed here.)
We can show that there exists a constant $C(\epsilon, \delta)$ that depends on $\epsilon$ and $\delta$ such that $\mathcal{F}_{\epsilon, \delta}[\rho, a, b] + C(\epsilon, \delta)$ is uniformly bounded from below.
# Existence of Solutions
The goal of this section is to study and prove the existence of solutions to the regularized Kimura equation $$\label{Eq_reg_Kimura}
\begin{aligned}
\partial_t \rho &= \partial_{xx}^2 \big(x(1-x)\rho\big), \qquad x\in (\delta, 1-\delta)\\
\partial_x x(1-x)\rho &= a'(t), \quad a'(t)= \rho(\delta)-\epsilon a\qquad x=\delta\\
-\partial_x x(1-x)\rho &= b'(t), \quad b'(t)= \rho(1-\delta)-\epsilon b\qquad x=1-\delta\\
\rho(0,x)&= \rho_0(x)\qquad x\in (\delta,1-\delta),\quad a(0)=a_0,\quad b(0)=b_0.
\end{aligned}$$ As a first step, we show the non-negativity of solutions and provide a priori energy estimates. Then, the main part of this section is to prove the existence and uniqueness of strong solutions of the regularized Kimura equation.
## A Priori estimates
As $\rho$ represents a probability density in the Kimura equation it is reasonable to consider only non-negative solutions. Therefore, let $\rho\in C^0([0,T]\times [\delta,1-\delta])\cap C^2((0,T)\times (\delta,1-\delta))$ be a classical solution with initial data $\rho_0>0$. Then, by the continuity of the classical solution it follows that there exists a time $T=T_{pos}$ such that $\rho(t)\geq 0$ in $[\delta,1-\delta]$ for all $t\in [0,T_{pos}]$. Once we have shown the existence of solutions in the next part, it follows that $T_{pos}$ can be chosen arbitrarily large. To obtain additional information we rewrite the first equation of [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"} as follows $$\label{Eq_linear_bulk}
\begin{aligned}
\partial_t \rho &= x(1-x) \partial_{xx}^2\rho + 2(1-2x)\partial_x \rho -2\rho, \qquad x\in (\delta, 1-\delta).
\end{aligned}$$ As this system is a linear parabolic equation in the bulk $(\delta,1-\delta)$, with smooth and non-degenerate coefficients, we can apply the classical strong maximum principle to obtain that any classical solution $\rho\in C^0([0,T]\times [\delta,1-\delta])\cap C^2((0,T)\times (\delta,1-\delta))$ attains its maximum on the parabolic boundary. In addition, we have that if the initial data $\rho_0$ is positive then $\rho\geq 0$ in $(0,T)\times (\delta,1-\delta)$. Now, suppose there exists a $(t_0,x_0)\in (0,T)\times (\delta,1-\delta)$ such that $\rho(t_0,x_0)=0$, then it follows that $\rho=0$ on $[0,t_0]\times (\delta,1-\delta)$, see [@walter1986strong] for the details. Hence, we have shown that for positive initial data the positivity of classical solutions in the bulk persists. As a consequence we observe that the boundary values $a(t)$ and $b(t)$ are non-negative. In addition, from the conservation of mass $$\begin{aligned}
\int_\delta^{1-\delta}\rho(t) \mathrm{d}x+ a(t)+b(t)= \int_\delta^{1-\delta}\rho_0 \mathrm{d}x+ a_0+b_0=1
\end{aligned}$$ it follows that $a(t)$ and $b(t)$ are bounded for all $t\in [0,T]$. Now, assume that $\rho(\delta,t)=0$ and $a(t)>0$. That is, we have $a'(t)= \rho(\delta,t)-\epsilon a(t) = -\epsilon a(t)<0$. Then, by rewriting the dynamic boundary condition we obtain $$\begin{aligned}
\delta(1-\delta)\partial_x \rho(x,t)\big|_\delta= a'(t)<0.
\end{aligned}$$ Then, as $\rho$ is continuous in $x$, there would exist a negative minimum in the interior, which contradicts the parabolic maximum and comparison principle. Hence, we can assume that there exists an $\epsilon>0$ small enough such that $a'(t)\geq0$ and $b'(t)\geq0$ for all $t\in (0,T]$. See Section 5 for a numerical example in which $a'(t) < 0$ when $\epsilon$ is not small enough. With these observations we can now turn our focus to finding suitable a priori energy estimates for system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"}.\
First, we test the equation with $x(1-x)\rho$ in weak formulation. Then, we have $$\begin{aligned}
\int_\delta^{1-\delta} x(1-x)\rho_t \rho \mathrm{d}x+ \int_\delta^{1-\delta} |\partial_x x(1-x) \rho|^2\mathrm{d}x &= -\delta(1-\delta) \big( b'(t) \rho(1-\delta,t)+ a'(t) \rho(\delta,t)\big).
\end{aligned}$$ Using the non-negativity of $a'$ and $b'$ we obtain $$\begin{aligned}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\rho\|^2_{L^2(\delta,1-\delta)}+\|\partial_x x(1-x) \rho\|^2_{L^2(\delta,1-\delta)} +b'(t) \rho(1-\delta,t)+ a'(t) \rho(\delta,t) &\leq 0
\end{aligned}$$ for all $t\in (0,T]$, where $T>0$. Thus, $$\begin{aligned}
\frac{1}{2}\|\rho(T)\|^2_{L^2(\delta,1-\delta)}+\int_0^T \|\partial_x x(1-x) \rho\|^2_{L^2(\delta,1-\delta)} \mathrm{d}t\leq \|\rho_0\|^2_{L^2(\delta,1-\delta)}.
\end{aligned}$$ However, the above estimate can be improved by using the explicit boundary condition [\[eq_delayed_bc\]](#eq_delayed_bc){reference-type="eqref" reference="eq_delayed_bc"} where we have $$\begin{aligned}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\rho\|^2_{L^2(\delta,1-\delta)}&+\|\partial_x x(1-x) \rho\|^2_{L^2(\delta,1-\delta)} + \rho(1-\delta,t)^2+ \rho(\delta,t) ^2\\
&\leq \epsilon e^{-\epsilon t} \int_0^t e^{\epsilon s} \big(\rho(\delta,s)\rho(\delta,t)+\rho(1-\delta,s)\rho(1-\delta,t)\big)\mathrm{d}s
\end{aligned}$$ Integrating this inequality from $0$ to $T$ and using further estimates we obtain $$\begin{aligned}
&\frac{1}{2}\|\rho(T)\|^2_{L^2(\delta,1-\delta)}+\int_0^T\|\partial_x x(1-x) \rho\|^2_{L^2(\delta,1-\delta)} \mathrm{d}t + \int_0^T \rho(1-\delta,t)^2+ \rho(\delta,t) ^2 \mathrm{d}t\\
&\leq \epsilon\int_0^T e^{-\epsilon t} \int_0^t e^{\epsilon s} \big(\rho(\delta,s)\rho(\delta,t)+\rho(1-\delta,s)\rho(1-\delta,t)\big)\mathrm{d}s \mathrm{d}t +\frac{1}{2} \|\rho_0\|^2_{L^2(\delta,1-\delta)}\\
&\leq \epsilon \bigg[\bigg(\int_0^T \rho(\delta,t)\mathrm{d}t\bigg)^2+ \bigg(\int_0^T \rho(1-\delta,t)\mathrm{d}t\bigg)^2\bigg]+ \frac{1}{2}\|\rho_0\|^2_{L^2(\delta,1-\delta)}\\
&\leq \epsilon T \int_0^T \big(\rho(\delta,t) ^2+\rho(1-\delta,t)^2\big) \mathrm{d}t + \frac{1}{2}\|\rho_0\|^2_{L^2(\delta,1-\delta)}.
\end{aligned}$$ Then, for $\epsilon T< 1$, the boundary term can be absorbed on the left-hand side of the inequality and this yields the improved estimate $$\begin{aligned}
\|\rho(T)\|^2_{L^2(\delta,1-\delta)}&+\int_0^T\|\partial_x x(1-x) \rho\|^2_{L^2(\delta,1-\delta)} \mathrm{d}t + C(\epsilon,T)\int_0^T \rho(1-\delta,t)^2+ \rho(\delta,t) ^2 \mathrm{d}t\\ &\leq \|\rho_0\|^2_{L^2(\delta,1-\delta)}.
\end{aligned}$$ This implies that $\rho \in L^\infty(0,T;L^2(\delta,1-\delta))$ and $\rho(\delta), \rho(1-\delta)\in L^2(0,T)$.\
Next, we test the equation with $\rho$. This then yields $$\begin{aligned}
\int_{\delta}^{1-\delta} \rho_t \rho \mathrm{d}x&= - \int_{\delta}^{1-\delta} \partial_x (x(1-x)\rho) \partial_x \rho \mathrm{d}x - b'(t) \rho(1-\delta) -a'(t)\rho(\delta).
\end{aligned}$$ where we again applied the dynamic boundary conditions. Using the information from above we have $$\begin{aligned}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\rho\|_{L^2(\delta,1-\delta)}^2 +\| \sqrt{x(1-x)} \partial_x \rho\|^2_{L^2(\delta,1-\delta)} \leq &-\int_{\delta}^{1-\delta} (1-2x)\rho \partial_x\rho \mathrm{d}x.
\end{aligned}$$ This can be further estimated as $$\begin{aligned}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\rho\|_{L^2(\delta,1-\delta)}^2 + \frac{1}{2}\| \sqrt{x(1-x)} \partial_x \rho\|^2_{L^2(\delta,1-\delta)} \leq \int_{\delta}^{1-\delta} \frac{(1-2x)^2}{x(1-x)}\rho ^2 \mathrm{d}x.
\end{aligned}$$ Integrating the equation yields $$\begin{aligned}
\|\rho(T)\|_{L^2(\delta,1-\delta)}^2 + \delta \int_0^T\|\partial_x \rho\|_{L^2(\delta,1-\delta)}^2\mathrm{d}t &\leq C(\delta) \int_0^T \|\rho\|_{L^2(\delta,1-\delta)}^2 \mathrm{d}t\\
&\leq C(\delta) T \|\rho_0\|_{L^2(\delta,1-\delta)}^2,
\end{aligned}$$ where we used the previous estimate. Thus we obtain $\rho \in L^\infty([0,T];L^2(\delta,1-\delta))\cap L^2([0,T];H^1(\delta,1-\delta))$ for any finite $T>0$.
**Remark 3**. *We note that the constant $C(\delta)$ grows with $\delta^{-1}$. Thus, in the limit $\delta\to 0$ it is no longer possible to control the $H^1$-norm of $\rho$.*
To obtain higher order estimates we differentiate the equation with respect to time. As the equation is linear this yields $$\begin{aligned}
\partial_t v&= \partial_{xx} x(1-x) v\\
\partial_x x(1-x)v&= v(\delta)-\epsilon a'(t)\\
- \partial_x x(1-x)v&= v(1-\delta)-\epsilon b'(t),
\end{aligned}$$ where we set $v=\partial_t \rho$. Following the same ideas as before we have $$\begin{aligned}
& \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\sqrt{x(1-x)} v\|^2 +\frac{1}{2}\delta(1-\delta)\big( v(1-\delta)^2+v(\delta)^2\big) \leq\delta(1-\delta)\epsilon^2 \big( b'(t)^2+a'(t)^2\big).
\end{aligned}$$ Using the delayed boundary condition and the $L^2$-bound on $\rho(\delta),\rho(1-\delta)$ obtained in the previous estimates this can be estimated as $$\begin{aligned}
& \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\sqrt{x(1-x)} v\|^2 +\frac{1}{2}\delta(1-\delta)\big( v(1-\delta)^2+v(\delta)^2\big) \leq \epsilon^2 \delta(1-\delta) C(t)\|\rho_0\|^2.
\end{aligned}$$ Note that this procedure only works for $\delta > 0$. Then, if the initial data is regular enough, this yields the higher order estimates for $\rho$.
## Strong solutions
We say a function $\rho$ is a weak solution to the regularized Kimura system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"} with initial data $\rho_0\in L^2(\delta,1-\delta)$ if
1. $\rho\in L^\infty(0,T;L^2(\delta,1-\delta))\cap L^2(0,T;H^1(\delta,1-\delta))$ for any $T>0$
2. $\rho$ satisfies $$\begin{aligned}
& - \int_0^t \int_\delta^{1-\delta} \rho \partial_t\zeta \mathrm{d}x\mathrm{d}s + \int_0^t \int_\delta^{1-\delta} \partial_x x(1-x)\rho \partial_x \zeta \mathrm{d}x\mathrm{d}s \\
& + \int_0^t \big( b(s)\partial_t\zeta(1-\delta,s)+ a(s)\partial_t\zeta(\delta,s)\big) \mathrm{d}s= \int_\delta^{1-\delta} \rho_0 \zeta(0)- \rho(t)\zeta(t) \mathrm{d}x\\
&+b_0\zeta(1-\delta,0)- b(t)\zeta(1-\delta,t)+a_0\zeta(\delta,0)- a(t)\zeta(\delta,t)
\end{aligned}$$ for all smooth test functions $\zeta$ and almost every $t>0$.
3. The boundary terms satisfy the ODEs $$\begin{aligned}
a'(t)&= \rho(\delta,t)-\epsilon a(t),\quad a(0)=a_0,\\
b'(t)&= \rho(1-\delta,t)-\epsilon b(t),\quad b(0)=b_0.
\end{aligned}$$
We say a function $\rho$ is a strong solution to [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"} if it is a weak solution and satisfies in addition $\rho\in L^\infty(0,T;H^1(\delta,1-\delta))\cap L^2(0,T;H^2(\delta,1-\delta))$.
Let $0<\epsilon\ll 1$ be sufficiently small and let $\rho_0\in H^1_0(\delta,1-\delta)$. Then, for all $T>0$ there exists a unique strong solution to the regularized Kimura equation [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"}.
**Remark 4**. *The requirement on the initial condition $\rho_0 \in H^1_0(\delta,1-\delta)$ to be zero on the boundary may seem like a restriction. However, if $\rho_0\neq 0$ on the boundary in some application, we can consider a slightly bigger domain $(\tilde \delta, 1-\tilde \delta)$, where $\tilde \delta<\delta$, and extend the initial data by zero up to the new boundary. This works for all possible domains $(\delta,1-\delta)$, as the regularization parameter $\delta>0$ of the domain can be chosen arbitrarily small.*
*Proof.* The idea of the proof is to use Rothe's method (a backward-in-time Euler scheme) to construct approximate solutions. As a reference we refer to [@roubivcek2013nonlinear] for an overview of the method and to [@rulla1996optimal] for an application to degenerate parabolic problems.\
*Step 1:* Constructions of solutions. Let $\rho_0\in H^1_0(\delta,1-\delta)$. Then, as we are on a one-dimensional bounded interval it follows that $\rho_0\in L^\infty$, and moreover, that $\rho_0$ is uniformly continuous. Now, we consider the time interval $[0,T]$ and divide it into $n$ subintervals of length $\lambda_n=\frac{T}{n}$ and set $0=t_0^n\leq t_1^n\leq \dots\leq t_n^n= T$. Then, we replace the system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"} with a time-discretized approximate version in the bulk $$\label{Eq_discrete}
\begin{aligned}
\frac{\rho_{k+1}^n-\rho_k^n}{\lambda_n}&= \partial_{xx}^2(x(1-x)\rho_{k+1}^n), \quad x\in (\delta, 1-\delta), \\
\partial_x x(1-x)\rho_{k+1}^n &= a'(t^n_{k+1}),\quad x=\delta,\\
-\partial_x x(1-x)\rho_{k+1}^n &=b'(t^n_{k+1}), \quad x=1-\delta,\\
\rho_0^n&= \rho_0\qquad x\in (\delta,1-\delta),
\end{aligned}$$ for $0\leq k\leq n-1$, where the dynamic boundary condition is now formulated as a Robin-type boundary condition. To obtain the right-hand side of the boundary conditions we solve $$\begin{aligned}
a'(t)&= \rho_{k+1}^n(\delta)-\epsilon a(t),\quad b'(t)= \rho_{k+1}^n(1-\delta)-\epsilon b(t)\\
a(t_k)&=a_k,\quad b(t_k)=b_k,
\end{aligned}$$ for $t\in [t_k^n,t_{k+1}^n]$ and for all $0\leq k\leq n-1$. Thus, we obtain $$\begin{aligned}
a(t)&= a_k e^{-\epsilon (t-t_k)}+ \int_{t_k}^t e^{-\epsilon(t-s)}\rho_{k+1}^n(\delta)\mathrm{d}s =a_k e^{-\epsilon (t-t_k)} +\frac{1}{\epsilon}\rho_{k+1}^n(\delta)( 1-e^{-\epsilon(t-t_k)})
\end{aligned}$$ and a similar equation can be obtained for $b(t)$. Then, using the assumption that $a_0=0$ we can compute $a(t_1^n)$ by solving the ODE for $k=0$. Hence, $$\begin{aligned}
a(t_1^n)=a_1&= a_0e^{-\epsilon (t_1-t_0)} + \frac{1}{\epsilon}\rho_1^n(\delta)( 1-e^{-\epsilon(t_1-t_0)}) = \frac{1}{\epsilon}\rho_1^n(\delta)( 1-e^{-\epsilon \lambda_n})\\
&= \rho_1^n(\delta) \big(\lambda_n -\tfrac{1}{2}\epsilon \lambda_n^2 + h.o.t.\big) \simeq \lambda_n \rho_1^n(\delta).
\end{aligned}$$ Solving each equation iteratively and neglecting higher order terms in $\epsilon$ this yields $$a_k= \lambda_n \sum_{i=1}^{k} \rho_i(\delta)\quad \text{and}\quad b_k= \lambda_n \sum_{i=1}^{k} \rho_i(1-\delta)$$ and then $$a'(t_{k+1}^n)= \rho_{k+1}^n(\delta) - \epsilon\lambda_n \sum_{i=1}^{k+1} \rho_i^n(\delta),\quad b'(t_{k+1}^n)=\rho_{k+1}^n(1-\delta) - \epsilon\lambda_n \sum_{i=1}^{k+1} \rho_i^n(1-\delta).$$ To show the existence of solutions to the approximate system we have to solve the linear elliptic equation. However, as we are in space dimension one, we can interpret the equation also as a linear ODE of degree two in the $x$-variable with boundary values determined by the solution of the boundary ODEs $a(t)$ and $b(t)$ evaluated at $t=t^n_{k+1}$. Then, the existence of solutions follows by the classical Picard-Lindelöf theorem as the forcing term is continuous in $x$. Moreover, we have that $\rho_k^n\in C^2(\delta,1-\delta)\cap C([\delta,1-\delta])$. To see that the solutions $\rho_k^n$ remain positive for positive initial data $\rho_0$, we rewrite the bulk equation as $$\begin{aligned}
-x(1-x) \partial_{xx}^2\rho_{k+1}^n -(1-2x)\partial_x \rho_{k+1}^n +(2+\frac{1}{\lambda_n})\rho_{k+1}^n= \frac{\rho_k^n}{\lambda_n}.
\end{aligned}$$ Then, applying a strong maximum principle together with a comparison principle, see [@evans2010partial] for details, yields that $\rho_k^n > 0$ in the interval $(\delta,1-\delta)$. On the boundary we observe that for $0<\epsilon\ll 1$ sufficiently small the expression $a'(t_1^n)= (1- \epsilon\lambda_n )\rho_1^n(\delta)\geq 0$ and hence either $\rho_1^n(\delta)=0$ or $\rho_1^n(\delta)>0$. Then, $$a'(t_2^n)= (1- \epsilon\lambda_n )\rho_2^n(\delta) -\epsilon\lambda_n \rho_1^n(\delta)= (1-2\delta)\rho_2^n(\delta)+ \delta(1-\delta)\partial_x \rho_2^n(\delta)$$ and thus $$\delta(1-\delta)\partial_x \rho_2^n(\delta)= (2\delta -\epsilon\lambda_n) \rho_2^n(\delta) -\epsilon\lambda_n \rho_1^n(\delta).$$ Now, for the case that $\rho_1^n(\delta)>0$ assume that $\rho_2^n(\delta)=0$. Then $\partial_x \rho_2^n(\delta)<0$, which implies that there exists a negative minimum in the bulk $(\delta,1-\delta)$. However, this is a contradiction to the maximum principle and thus for this case $\rho_2^n(\delta)>0$. Therefore, in both cases for $0<\epsilon\ll 1$ sufficiently small $a'(t_2^n)\geq 0$ and by iteration it follows that $a'(t_k^n)\geq 0$ for all $k=0,\dots, n-1$. A similar argument holds for $b'(t_k^n)$.
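As a quick sanity check of the explicit formula for $a(t)$ and of the expansion behind the approximation $a_1 \simeq \lambda_n \rho_1^n(\delta)$ used above, the computation can be verified symbolically. The following SymPy snippet is only an illustrative sketch and not part of the proof; the symbol `rho` stands for the frozen boundary value $\rho_{k+1}^n(\delta)$.

```python
import sympy as sp

t, tk, lam, eps, ak, rho = sp.symbols('t t_k lambda_n epsilon a_k rho', positive=True)

# explicit solution of a'(t) = rho - eps*a(t) with a(t_k) = a_k and frozen rho = rho_{k+1}^n(delta)
a = ak * sp.exp(-eps * (t - tk)) + (rho / eps) * (1 - sp.exp(-eps * (t - tk)))

assert sp.simplify(sp.diff(a, t) - (rho - eps * a)) == 0   # solves the boundary ODE
assert sp.simplify(a.subs(t, tk) - ak) == 0                # matches the initial condition

# expansion of a(t_1^n) for a_0 = 0: (rho/eps)*(1 - exp(-eps*lam)) = rho*(lam - eps*lam^2/2 + ...)
print(sp.series((rho / eps) * (1 - sp.exp(-eps * lam)), lam, 0, 3))
```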
Now, we can define the Rothe sequence as $$\begin{aligned}
P^n(t)= \begin{cases}
\rho_0,& \text{if}~~t=0,\\
\rho_{k}^n+\frac{1}{\lambda_n}(t-t_k^n)(\rho_{k+1}^n-\rho_k^n),& \text{if}~~t\in (t_k^n,t_{k+1}^n],
\end{cases}
\end{aligned}$$ where the idea is to show that the Rothe sequence converges to the solution of the continuous equation.
*Step 2:* Uniform bounds. To obtain uniform bounds for $\rho_k^n$ and $\frac{\rho_{k+1}^n-\rho_k^n}{\lambda_n}$ we follow a similar approach as in the a priori estimates for the system. Thus, we test the discretized equation [\[Eq_discrete\]](#Eq_discrete){reference-type="eqref" reference="Eq_discrete"} with $x(1-x)\rho_{k+1}^n$. This then yields for all $0\leq k\leq n-1$ $$\begin{aligned}
&\lambda_n \int_\delta^{1-\delta} |\partial_x( x(1-x)\rho_{k+1}^n)|^2 \mathrm{d}x -\lambda_n[x(1-x)\rho_{k+1}^n\partial_x x(1-x)\rho_{k+1}^n]_\delta^{1-\delta}\\
&+\int_\delta^{1-\delta} x(1-x) |\rho_{k+1}^n|^2 \mathrm{d}x=\int_\delta^{1-\delta} x(1-x)\rho_k^n \rho_{k+1}^n\mathrm{d}x.
\end{aligned}$$ Using the boundary condition this can be further estimated as $$\begin{aligned}
\lambda_n \|\partial_x x(1-x) \rho_{k+1}^n\|^2 &+ \frac{1}{2} \|\sqrt{x(1-x)}\rho_{k+1}^n\|^2 +
\lambda_n b'(t_{k+1}^n)\rho_{k+1}^n(1-\delta)\\
&+\lambda_n a'(t_{k+1}^n)\rho_{k+1}^n(\delta) \leq \frac{1}{2} \|\sqrt{x(1-x)} \rho_{k}^n\|^2,
\end{aligned}$$ where we note that the third and fourth terms are non-negative by the observations in Step 1. This can be further estimated as $$\label{estimate1}
\begin{aligned}
\lambda_n \|\partial_x x(1-x) \rho_{k+1}^n\|^2 &+ \frac{1}{2} \|\sqrt{x(1-x)}\rho_{k+1}^n\|^2 +
\lambda_n (1-2\epsilon\lambda_n)(\rho_{k+1}^n(1-\delta)^2 +\rho_{k+1}^n(\delta)^2) \\
&\leq \frac{1}{2} \|\sqrt{x(1-x)} \rho_{k}^n\|^2 +\epsilon\lambda_n \sum_{i=1}^k\rho_i^n(\delta)^2 +\epsilon\lambda_n \sum_{i=1}^k\rho_i^n(1-\delta)^2.
\end{aligned}$$ Iterating this estimate and noting that $\rho_{k+1}^n(\delta),\rho_{k+1}^n(1-\delta)>0$ if $\rho_{k}^n(\delta),\rho_{k}^n(1-\delta)>0$ yields $$\begin{aligned}
\|\sqrt{x(1-x)}\rho_{k+1}^n\|^2 +& \lambda_n (1-2\epsilon\lambda_n)(\rho_{k+1}^n(1-\delta)^2 +\rho_{k+1}^n(\delta)^2) \\
\leq& \|\sqrt{x(1-x)}\rho_{k}^n\|^2 +\lambda_n (1-2\epsilon\lambda_n)(\rho_{k}^n(1-\delta)^2 +\rho_{k}^n(\delta)^2)\\
& +\epsilon\lambda_n \sum_{i=1}^{k-1}(\rho_i^n(\delta)^2+\rho_i^n(1-\delta)^2)\\
\leq& \|\sqrt{x(1-x)}\rho_{k-1}^n\|^2 +\lambda_n (1-3\epsilon\lambda_n)(\rho_{k-1}^n(1-\delta)^2 +\rho_{k-1}^n(\delta)^2)\\
& +\epsilon\lambda_n \sum_{i=1}^{k-2}(\rho_i^n(\delta)^2+\rho_i^n(1-\delta)^2)\\
\leq & \dots \leq \|\sqrt{x(1-x)}\rho_0\|^2 +(\rho_0(\delta)^2+\rho_0(1-\delta)^2)
\end{aligned}$$ Hence, $\rho_{k}^n$ is uniformly bounded in $L^2(\delta,1-\delta)$. Coming back to estimate [\[estimate1\]](#estimate1){reference-type="eqref" reference="estimate1"} we note that the estimate can also be written as $$\begin{aligned}
&\lambda_n \|\partial_x x(1-x) \rho_{k+1}^n\|^2 +\lambda_n \|\partial_x x(1-x) \rho_{k}^n\|^2+ \frac{1}{2} \|\sqrt{x(1-x)}\rho_{k+1}^n\|^2\\
&+ \lambda_n (1-2\epsilon\lambda_n)(\rho_{k+1}^n(1-\delta)^2 +\rho_{k+1}^n(\delta)^2) \\
&\leq \frac{1}{2} \|\sqrt{x(1-x)} \rho_{k}^n\|^2 +\epsilon\lambda_n \sum_{i=1}^k\rho_i^n(\delta)^2 +\epsilon\lambda_n \sum_{i=1}^k\rho_i^n(1-\delta)^2 +\lambda_n \|\partial_x x(1-x) \rho_{k}^n\|^2.
\end{aligned}$$ Hence, we obtain $$\begin{aligned}
\sum_{i=0}^k \lambda_n \|\partial_x x(1-x) \rho_{i+1}^n\|^2 \leq \|\sqrt{x(1-x)}\rho_0\|^2 +(\rho_0(\delta)^2+\rho_0(1-\delta)^2),
\end{aligned}$$ which yields the uniform bounds for $\sum_{i=0}^k \lambda_n \|\partial_x x(1-x) \rho_{i+1}^n\|^2$. This term is the discrete counterpart of the bound $\partial_x x(1-x) \rho \in L^2((0,T)\times (\delta,1-\delta))$.\
The next estimate is for the discrete time derivative, where we consider the difference between two solutions. $$\begin{aligned}
\partial_{xx}x(1-x)(\rho_{k+1}^n -\rho_k^n)= \frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}- \partial_{xx}x(1-x)\rho_k^n,
\end{aligned}$$ where the last term can be written as $-\frac{\rho_{k}^n -\rho_{k-1}^n}{\lambda_n}$ for $k\geq 1$. Then testing the equation with $x(1-x)\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}$ yields $$\begin{aligned}
& \lambda_n\int_\delta^{1-\delta}\bigg |\partial_x x(1-x) \frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\bigg |^2 \mathrm{d}x- \lambda_n \bigg[ x(1-x)\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n} \partial_x x(1-x)\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\bigg]_\delta^{1-\delta} \\
&+\int_\delta^{1-\delta} \bigg|\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\bigg|^2\mathrm{d}x= \int_\delta^{1-\delta} \frac{\rho_{k}^n -\rho_{k-1}^n}{\lambda_n} \frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\mathrm{d}x.
\end{aligned}$$ By similar methods as in the previous computations, this can be estimated as follows $$\label{estimate2_0}
\begin{aligned}
& \lambda_n\big \|\partial_x x(1-x) \frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\big \|^2 + \delta(1-\delta)\frac{(\rho_{k+1}^n(1-\delta) -\rho_k^n(1-\delta))^2}{\lambda_n}\\
&+\delta(1-\delta)\frac{(\rho_{k+1}^n(\delta) -\rho_k^n(\delta))^2}{\lambda_n} + \frac{1}{2}\bigg\|\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\bigg\|^2\ \\
&\leq \frac{1}{2} \|\frac{\rho_{k}^n -\rho_{k-1}^n}{\lambda_n}\|^2 + \epsilon\lambda_n \delta(1-\delta)\frac{\rho_{k+1}^n(1-\delta) -\rho_k^n(1-\delta)}{\lambda_n}\rho_{k+1}^n(1-\delta)\\
&+\epsilon\lambda_n \delta(1-\delta)\frac{\rho_{k+1}^n(\delta) -\rho_k^n(\delta)}{\lambda_n}\rho_{k+1}^n(\delta).
\end{aligned}$$ We observe that for the case $\rho_{k+1}^n(\delta) -\rho_k^n(\delta)\leq 0$ and $\rho_{k+1}^n(1-\delta) -\rho_k^n(1-\delta)\leq 0$ the boundary terms can be absorbed in the left-hand side of the inequality. Now, the case $\rho_{k+1}^n(\delta) -\rho_k^n(\delta)> 0$ or $\rho_{k+1}^n(1-\delta) -\rho_k^n(1-\delta)> 0$ implies that there exists a sufficiently small $\epsilon>0$ such that $\rho_{k+1}^n(\delta) -\rho_k^n(\delta)-\epsilon\lambda_n \rho_{k+1}^n(\delta)> 0$ or $\rho_{k+1}^n(1-\delta) -\rho_k^n(1-\delta)- \epsilon\lambda_n \rho_{k+1}^n(1-\delta)> 0$. Thus, all boundary terms can be absorbed in the left-hand side of the inequality and we obtain $$\label{estimate2_1}
\begin{aligned}
\frac{1}{2}\bigg\|\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\bigg\|^2 &+ C\frac{|\rho_{k+1}^n(\delta)-\rho_k^n(\delta)|}{\lambda_n}+ C\frac{|\rho_{k+1}^n(1-\delta)-\rho_k^n(1-\delta)|}{\lambda_n}\\
&\leq \frac{1}{2} \|\frac{\rho_{k}^n -\rho_{k-1}^n}{\lambda_n}\|^2 .
\end{aligned}$$ For the special case $k=0$ we have $$\begin{aligned}
&\int_\delta^{1-\delta} |\partial_x x(1-x)\frac{(\rho_1^n-\rho_0)}{\lambda_n}|^2 \mathrm{d}x -\big[x(1-x)\frac{(\rho_1^n-\rho_0)}{\lambda_n}\partial_x x(1-x)(\rho_1^n-\rho_0)\big]_\delta^{1-\delta} \\
&+ \int_\delta^{1-\delta} \bigg|\frac{\rho_{1}^n -\rho_0}{\lambda_n}\bigg|^2\mathrm{d}x= -\int_\delta^{1-\delta} \partial_{x}( x(1-x)\rho_0) \partial_x x(1-x)\frac{(\rho_1^n-\rho_0)}{\lambda_n}\mathrm{d}x\\
&+\big[x(1-x)\frac{(\rho_1^n-\rho_0)}{\lambda_n}\partial_x x(1-x)\rho_0 \big]_\delta^{1-\delta} .
\end{aligned}$$ This yields $$\begin{aligned}
&\frac{1}{2} \bigg\|\partial_x x(1-x)\frac{(\rho_1^n-\rho_0)}{\lambda_n}\bigg\|^2 +\delta(1-\delta)(1-\epsilon\lambda_n)\bigg(\frac{\rho_1^n(1-\delta)^2}{\lambda_n}+\frac{\rho_1^n(\delta)^2}{\lambda_n}\bigg) \\
&+ \bigg\|\frac{\rho_{1}^n -\rho_0}{\lambda_n}\bigg\|^2 \leq \frac{1}{2}\| \partial_{x}( x(1-x)\rho_0)\|^2,
\end{aligned}$$ where we have used the fact that the initial data satisfies $\rho_0(\delta),\rho_0(1-\delta)=0$. Then by iteration we obtain $$\begin{aligned}
&\frac{1}{2}\bigg\|\frac{\rho_{k+1}^n -\rho_k^n}{\lambda_n}\bigg\|^2 + C\frac{|\rho_{k+1}^n(\delta)-\rho_k^n(\delta)|}{\lambda_n}+ C\frac{|\rho_{k+1}^n(1-\delta)-\rho_k^n(1-\delta)|}{\lambda_n}\\
&\leq \frac{1}{2} \|\frac{\rho_{k}^n -\rho_{k-1}^n}{\lambda_n}\|^2 + C\frac{|\rho_{k}^n(\delta)-\rho_{k-1}^n(\delta)|}{\lambda_n}+ C\frac{|\rho_{k}^n(1-\delta)-\rho_{k-1}^n(1-\delta)|}{\lambda_n}\\
&\leq \bigg\|\frac{\rho_{1}^n -\rho_0}{\lambda_n}\bigg\|^2+C\frac{\rho_1^n(1-\delta)^2}{\lambda_n}+C\frac{\rho_1^n(\delta)^2}{\lambda_n}\\
&\leq \frac{1}{2}\| \partial_{x}( x(1-x)\rho_0)\|^2 ,
\end{aligned}$$ which yields the uniform estimate for the approximate time derivative. Again, observe that from estimate [\[estimate2_0\]](#estimate2_0){reference-type="eqref" reference="estimate2_0"} we can deduce the uniform bounds of the discretized $L^2$-norm with respect to time of $\sum_{i=0}^k \lambda_n\big \|\partial_x x(1-x) \frac{\rho_{i+1}^n -\rho_i^n}{\lambda_n}\big \|^2$.\
The last estimate is for the higher order derivatives. Now, testing the equation with $x(1-x)\partial_{xx}x(1-x) \rho_{k+1}^n$ we have $$\begin{aligned}
&\lambda_n \int_\delta^{1-\delta} x(1-x) |\partial_{xx}^2 x(1-x) \rho_{k+1}^n|^2 \mathrm{d}x+ \int_{\delta}^{1-\delta}|\partial_x x(1-x)\rho_{k+1}^n |^2 \mathrm{d}x\\
&-\big[x(1-x)(\rho_{k+1}^n-\rho_k^n) \partial_x x(1-x)\rho_{k+1}^n \big]_{\delta}^{1-\delta} =\int_{\delta}^{1-\delta} \partial_x x(1-x)\rho_k^n \partial_x x(1-x)\rho_{k+1}^n \mathrm{d}x.
\end{aligned}$$ Using the boundary condition this yields $$\begin{aligned}
& \lambda_n \|\sqrt{x(1-x)}\partial_{xx}^2 x(1-x) \rho_{k+1}^n\|^2 +\frac{1}{2}\|\partial_x x(1-x)\rho_{k+1}^n \|^2 \leq\\
&\leq \frac{1}{2}\|\partial_x x(1-x)\rho_{k}^n \|^2 -\delta(1-\delta)\big(\rho_{k+1}^n(\delta)-\rho_k^n(\delta)\big)a'(t_{k+1}^n) \\
&\quad - \delta(1-\delta)\big(\rho_{k+1}^n(1-\delta)-\rho_k^n(1-\delta)\big)b'(t_{k+1}^n),
\end{aligned}$$ where the boundary terms can be estimated by estimate [\[estimate2_1\]](#estimate2_1){reference-type="eqref" reference="estimate2_1"} $$\begin{aligned}
\rho_{k+1}^n(1-\delta)-\rho_k^n(1-\delta)\leq \lambda_n\frac{|\rho_{k+1}^n(1-\delta)-\rho_k^n(1-\delta)|}{\lambda_n}\leq \lambda_n \| \partial_{x}( x(1-x)\rho_0)\|^2
\end{aligned}$$ and similar for $\rho_{k+1}^n(\delta)-\rho_k^n(\delta)$. Therefore, we conclude that $$\label{Eq_estimate3}
\begin{aligned}
\frac{1}{2}\|\partial_x x(1-x)\rho_{k+1}^n \|^2 &\leq \frac{1}{2}\|\partial_x x(1-x)\rho_{k}^n \|^2 + \lambda_n \| \partial_{x}( x(1-x)\rho_0)\|^2\\
&\leq \frac{1}{2}\|\partial_x x(1-x)\rho_{k-1}^n \|^2 +2 \lambda_n \| \partial_{x}( x(1-x)\rho_0)\|^2\\
&\leq \dots\leq \frac{1}{2} \|\partial_x x(1-x)\rho_{0} \|^2 +T \| \partial_{x}( x(1-x)\rho_0)\|^2,
\end{aligned}$$ which yields the desired uniform bounds. Similar to the first estimates we can also obtain a uniform bound on $\sum_{i=0}^k \lambda_n \|\sqrt{x(1-x)}\partial_{xx}^2 x(1-x) \rho_{i+1}^n\|^2$.\
From these uniform bounds we can extract a converging subsequence and pass to the limit as $n \to \infty$. The last step is to show that the limit is indeed a solution of system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"}.\
*Step 3:* Limit as $n\to \infty$. From the uniform bounds we can conclude that $P^n(t)$ is uniformly Lipschitz continuous. Moreover, we can define a sequence of step functions $\tilde P^n(t)$ by $$\begin{aligned}
\tilde P^n(t)= \begin{cases}
\rho_0 & \text{if} \quad t=0,\\
\rho_k^n &\text{if} \quad t\in (t_{k-1}^n,t_k^n].
\end{cases}
\end{aligned}$$ It follows that $P^n(t)-\tilde P^n(t)\to 0$ as $n\to \infty$. Then, the discrete equation can be written as $$\label{Eq_sequence_limit}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t} P^n(t)& = \partial_{xx}x(1-x) \tilde P^n(t)\\
\partial_x x(1-x) \tilde P^n(t)&= a'(t)\\
-\partial_x x(1-x) \tilde P^n(t)&= b'(t).
\end{aligned}$$ We claim that there exists a $\rho \in C([0,T];L^2(\delta,1-\delta))\cap L^2(0,T;H^1(\delta,1-\delta))$ such that $P^n\to \rho$ in $C([0,T];L^2)\cap L^2(0,T;H^1)$ as $n\to \infty$. Indeed, considering the system for $P^n(t)-P^m(t)$ and using the energy estimates obtained in Step 2 we get $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t} \|P^n(t)-P^m(t)\|^2 \leq C_{n,m},
\end{aligned}$$ where the constant only depends on the initial data $\rho_0$ and satisfies $C_{n,m}\to 0$ as $n,m\to \infty$. This implies that $$\begin{aligned}
\sup_{t\in (0,T]} \|P^n(t)-P^m(t)\|^2 \leq T C_{n,m}.
\end{aligned}$$ Therefore, the Cauchy sequence converges and we can conclude that $P^n\to \rho$ in $C([0,T];L^2)\cap L^2(0,T;H^1)$. Moreover, it follows from the uniform Lipschitz continuity of $P^n(t)$ that $\rho$ is Lipschitz continuous in time as well. In addition, we note that as the sequence $P^n(t)-\tilde P^n(t)\to 0$ as $n\to \infty$ it follows that $\tilde P^n(t)\to \rho(t)$ in $L^2(\delta,1-\delta)$.\
*Step 4:* Existence of weak solutions. We finally show that the limit $\rho$ obtained in the previous step is indeed a weak solution of the system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"}. From the convergence result and the regularity of the limit we can pass to the limit in the weak formulation of [\[Eq_sequence_limit\]](#Eq_sequence_limit){reference-type="eqref" reference="Eq_sequence_limit"}. This shows the existence of a weak solution.\
*Step 5:* Uniqueness of solutions. Suppose that $\rho_1$ and $\rho_2$ are two weak solutions to the system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"} corresponding to the same initial data. Then, we denote the difference by $w=\rho_1-\rho_2$. Testing the difference between the solutions with $x(1-x)w$ yields $$\begin{aligned}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\sqrt{x(1-x)}w\|^2 +\|\partial_x x(1-x) w\|^2 &= \big[x(1-x)w \partial_x x(1-x)w \big]_\delta^{1-\delta}.
\end{aligned}$$ Using the dynamic boundary condition this can be written as $$\begin{aligned}
& \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\sqrt{x(1-x)}w\|^2 +\|\partial_x x(1-x) w\|^2 +\delta(1-\delta)\big( w(t,1-\delta)^2+w(t,\delta)^2\big)\\
&= \delta(1-\delta)\epsilon e^{-\epsilon t}\int_0^te^{\epsilon s}(w(t,\delta)w(s,\delta)+w(t,1-\delta)w(s,1-\delta))\mathrm{d}s.
\end{aligned}$$ Using a combination of Young's and Jensen's inequality this can be estimated as $$\begin{aligned}
& \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t} \|\sqrt{x(1-x)}w\|^2 +\|\partial_x x(1-x) w\|^2 +\frac{1}{2}\delta(1-\delta)\big( w(t,1-\delta)^2+w(t,\delta)^2\big)\\
&\leq \frac{1}{2}\delta(1-\delta)\bigg(\epsilon e^{-\epsilon t}\int_0^te^{\epsilon s}(w(s,\delta)+w(s,1-\delta))\mathrm{d}s\bigg)^2\\
&\leq \frac{1}{2}\delta(1-\delta)\bigg(\epsilon\int_0^t |w(s,\delta)+w(s,1-\delta)|\mathrm{d}s\bigg)^2\\
&\leq \delta(1-\delta)\epsilon^2 t \int_0^t w(s,\delta)^2+w(s,1-\delta)^2\mathrm{d}s.
\end{aligned}$$ Integrating this inequality with respect to time $t$ yields $$\begin{aligned}
& \frac{1}{2} \|\sqrt{x(1-x)}w(\tau)\|^2 +\int_0^\tau\|\partial_x x(1-x) w\|^2\mathrm{d}t +\frac{1}{2}\delta(1-\delta)\int_0^\tau\big( w(t,1-\delta)^2+w(t,\delta)^2\big)\mathrm{d}t\\
&\leq \delta(1-\delta)\epsilon^2\int_0^\tau t \int_0^t w(s,\delta)^2+w(s,1-\delta)^2\mathrm{d}s \mathrm{d}t +\frac{1}{2} \|\sqrt{x(1-x)}w_0\|^2,
\end{aligned}$$ where the last term equals zero as the solutions have the same initial data. Hence, focusing on the important terms gives rise to $$\begin{aligned}
& \frac{1}{2} \|\sqrt{x(1-x)}w(\tau)\|^2 +\frac{1}{2}\delta(1-\delta)\int_0^\tau\big( w(t,1-\delta)^2+w(t,\delta)^2\big)\mathrm{d}t\\
&\leq \delta(1-\delta)\epsilon^2\int_0^\tau t \int_0^t \big(w(s,\delta)^2+w(s,1-\delta)^2\big)\mathrm{d}s \mathrm{d}t\\
&\leq \delta(1-\delta)\epsilon^2 \tau^2 \int_0^\tau \big(w(t,\delta)^2+w(t,1-\delta)^2\big) \mathrm{d}t
\end{aligned}$$ for all $\tau \in (0,T]$, where the last term can be absorbed on the left-hand side for sufficiently small $\epsilon$. This shows the uniqueness of the weak solution.\
*Step 6:* Existence of strong solutions. So far in this proof we have not used the fact that the solutions to the discretized system [\[Eq_discrete\]](#Eq_discrete){reference-type="eqref" reference="Eq_discrete"} indeed satisfy higher order energy estimates. Let $\rho\in L^\infty(0,T;L^2)\cap L^2(0,T;H^1)$ be the unique solution to system [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"}. Then, by estimate [\[Eq_estimate3\]](#Eq_estimate3){reference-type="eqref" reference="Eq_estimate3"} we recall that the discrete approximate sequence $\tilde P^n(t)$ satisfies $$\begin{aligned}
\|\partial_x x(1-x) \tilde P^n(t_k^n)\|^2+ \sum_{i=0}^k \lambda_n \|\sqrt{x(1-x)}\partial_{xx}^2 x(1-x) \tilde P^n(t_i^n)\|^2\leq C(T)\|\partial_x x(1-x) \rho_0\|^2.
\end{aligned}$$ Moreover, we recall that $\tilde P^n\to \rho$ in $L^\infty(0,T;L^2)\cap L^2(0,T;H^1)$ as $n\to \infty$. Hence, by the uniqueness of the weak solution we obtain that $\tilde P^n\to \rho$ in $L^\infty(0,T;H^1)\cap L^2(0,T;H^2)$ as $n\to \infty$ and thus $\rho$ is indeed a strong solution.\
This completes the proof of the existence of a unique strong solution to the regularized Kimura equation [\[Eq_reg_Kimura\]](#Eq_reg_Kimura){reference-type="eqref" reference="Eq_reg_Kimura"}. ◻
# Numerics
In this section, we perform a numerical study of the regularized Kimura equation, considering various initial conditions as well as different values of $\delta$ and $\epsilon$. These numerical results demonstrate that the regularized equation can capture the key properties of the original Kimura equation.
## Numerical schemes
We first propose a numerical scheme for the regularized Kimura equation [\[Eq_Final_LMA\]](#Eq_Final_LMA){reference-type="eqref" reference="Eq_Final_LMA"} based on a finite volume approach. For a given $\delta$, we can divide $(\delta, 1 - \delta)$ into $N$ subintervals of size $h = (1 - 2 \delta) / N$. Let $x_i = \delta + ih$ ($i = 0, 1, 2, \ldots, N$), and define $$\rho_0 = \frac{2}{h} \int_{\delta}^{x_{0} + h/2} \rho \mathrm{d}x, \quad \rho_i = \frac{1}{h} \int_{x_i - h/2}^{x_i + h/2} \rho \mathrm{d}x, \quad i = 1, \ldots, N - 1, \quad \rho_N = \frac{2}{h} \int_{x_N - h/2}^{x_N} \rho \mathrm{d}x.$$ Due to the mass conservation $\frac{\mathrm{d}}{\mathrm{d}t} \left(\int_{\delta}^{1 -\delta} \rho \mathrm{d}x + a(t) + b(t) \right) = 0$, we have $$\label{Dis_MC}
a(t) + b(t) + h( \frac{1}{2} \rho_0(t) + \sum_{i = 1}^{N-1} \rho_i(t) + \frac{1}{2} \rho_N(t) ) = 1, \quad \forall t$$ in the semi-discrete sense. Let ${\bm p}(t) = (a, \rho_0, \ldots, \rho_N, b)^{\rm T}$, and ${\bm w} = (1, h/2, h, \ldots, h, h/2, 1) \in \mathbb{R}^{N+3}$. It is convenient to define the inner product in the discrete space $$\langle f, g \rangle_{w} = \sum_{i=1}^{N+3} w_i f_i g_i.$$ Then the mass conservation ([\[Dis_MC\]](#Dis_MC){reference-type="ref" reference="Dis_MC"}) can be written as $\langle {\bm p}, {\bf 1} \rangle_w = 1$. We also define the discrete free energy as $$\begin{aligned}
\mathcal{F}_h({\bm p}) & = h \left( \frac{1}{2}\rho_0 \ln (\delta (1 - \delta) \rho_0) + \sum_{i=1}^{N-1} \rho_i \ln ( x_i (1 - x_i) \rho_i) + \frac{1}{2} \rho_N \ln (\delta (1 - \delta) \rho_N) \right) \\
& + a \ln (\epsilon \delta (1 - \delta) a) + b \ln (\epsilon \delta (1 - \delta) b) = \langle {\bm \mu}, {\bm p} \rangle_{w}, \\
\end{aligned}$$ where ${\bm \mu} = (\ln (\epsilon \delta (1 - \delta) a), \ln (x_0 (1 - x_0) \rho_0), \ldots, \ln (x_N (1 - x_N) \rho_N), \ln (\epsilon \delta (1 - \delta) b))^{\rm T}$ is the discrete chemical potential.
The semi-discrete system associated with ([\[Eq_Final_LMA\]](#Eq_Final_LMA){reference-type="ref" reference="Eq_Final_LMA"}) can be written as $$\label{Eq_Final_semi}
\begin{cases}
& \frac{\mathrm{d}a}{\mathrm{d}t} = (\rho_0 - \epsilon a) \\
& \frac{\mathrm{d}\rho_0}{\mathrm{d}t} = \frac{2}{h} ( - (\rho u)_{1/2} + (\epsilon a - \rho_0) ) \\
& \frac{\mathrm{d}\rho_i}{\mathrm{d}t} = \frac{1}{h} ( - (\rho u)_{i + 1/2} + (\rho u)_{i-1/2}), \quad i = 1, \ldots N -1 \\
& \frac{\mathrm{d}\rho_N}{\mathrm{d}t} = \frac{2}{h} ( (\epsilon b - \rho_N) + (\rho u)_{N - 1/2}) \\
& \frac{\mathrm{d}b}{\mathrm{d}t} = (\rho_N - \epsilon b) \ , \\
\end{cases}$$ where $$(\rho u)_{i + 1/2} = - \frac{1}{h} (x_{i+1} (1 - x_{i+1}) \rho_{i+1} - x_{i} (1 - x_{i}) \rho_{i} ) \ .$$ It is clear that ([\[Eq_Final_semi\]](#Eq_Final_semi){reference-type="ref" reference="Eq_Final_semi"}) is a linear system in ${\bm p}$, which can be written as $$\frac{\mathrm{d}{\bm p}}{\mathrm{d}t} = L_h {\bm p}, \quad L_h \in \mathbb{R}^{(N+3) \times (N+3)}.$$
Moreover, since $\langle \frac{\mathrm{d}{\bm p}}{\mathrm{d}t}, {\bf 1} \rangle_{w} = \langle L_h {\bm p}, {\bf 1} \rangle_w = 0$, the semi-discrete system ([\[Eq_Final_semi\]](#Eq_Final_semi){reference-type="ref" reference="Eq_Final_semi"}) preserves the discrete mass conservation ([\[Dis_MC\]](#Dis_MC){reference-type="ref" reference="Dis_MC"}).
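To make the structure of $L_h$ concrete, the following NumPy sketch (our illustration, not the implementation used for the results below) assembles the matrix for the state ordering ${\bm p}=(a,\rho_0,\dots,\rho_N,b)^{\rm T}$ and checks the weighted column-sum identity behind the discrete mass conservation.

```python
import numpy as np

def build_Lh(N, delta, eps):
    """Assemble the (N+3)x(N+3) matrix L_h of the semi-discrete scheme,
    with state ordering p = (a, rho_0, ..., rho_N, b) and x_i = delta + i*h."""
    h = (1.0 - 2.0 * delta) / N
    x = delta + h * np.arange(N + 1)
    d = x * (1.0 - x)                       # mobility coefficients x_i (1 - x_i)
    L = np.zeros((N + 3, N + 3))
    A, B = 0, N + 2                         # positions of a and b in p
    R = lambda i: 1 + i                     # position of rho_i in p

    # boundary ODEs: a' = rho_0 - eps*a,  b' = rho_N - eps*b
    L[A, A], L[A, R(0)] = -eps, 1.0
    L[B, B], L[B, R(N)] = -eps, 1.0

    # half cell at x = delta: rho_0' = (2/h) * ( -(rho u)_{1/2} + eps*a - rho_0 )
    L[R(0), R(0)] = (2.0 / h) * (-d[0] / h - 1.0)
    L[R(0), R(1)] = (2.0 / h) * (d[1] / h)
    L[R(0), A] = (2.0 / h) * eps

    # interior cells: rho_i' = (d_{i+1} rho_{i+1} - 2 d_i rho_i + d_{i-1} rho_{i-1}) / h^2
    for i in range(1, N):
        L[R(i), R(i - 1)] = d[i - 1] / h ** 2
        L[R(i), R(i)] = -2.0 * d[i] / h ** 2
        L[R(i), R(i + 1)] = d[i + 1] / h ** 2

    # half cell at x = 1 - delta: rho_N' = (2/h) * ( eps*b - rho_N + (rho u)_{N-1/2} )
    L[R(N), R(N)] = (2.0 / h) * (-d[N] / h - 1.0)
    L[R(N), R(N - 1)] = (2.0 / h) * (d[N - 1] / h)
    L[R(N), B] = (2.0 / h) * eps

    return L

# discrete mass conservation: the weighted column sums of L_h vanish
N, delta, eps = 100, 1e-3, 1e-3
h = (1.0 - 2.0 * delta) / N
w = np.concatenate(([1.0], [h / 2.0], np.full(N - 1, h), [h / 2.0], [1.0]))
assert np.allclose(w @ build_Lh(N, delta, eps), 0.0)
```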
Next, we introduce a temporal discretization of ([\[Eq_Final_semi\]](#Eq_Final_semi){reference-type="ref" reference="Eq_Final_semi"}). Since we are interested in investigating the effect of $\epsilon$ and $\delta$ on the solution, a high-order temporal discretization is required. In the current study, we adopt a second-order Crank--Nicolson scheme, given by $$\label{CN}
\frac{{\bm p}^{n+1} - {\bm p}^n}{\tau} = \frac{1}{2} (L_h {\bm p}^{n} + L_h {\bm p}^{n+1}),$$ for the temporal discretization. Since the scheme is linear, we can solve for ${\bm p}^{n+1}$ directly, given by $${\bm p}^{n+1} = \left( \frac{1}{\tau} {\sf I} - \frac{1}{2} L_h \right)^{-1} \left( \frac{1}{\tau} {\sf I} + \frac{1}{2}L_h \right) {\bm p}^n.$$ Although it is not straightforward to prove, numerical simulations show that the numerical scheme ([\[CN\]](#CN){reference-type="ref" reference="CN"}) is positive-preserving and energy stable. We will address this in future work.
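A corresponding time stepper for ([\[CN\]](#CN){reference-type="ref" reference="CN"}) can be sketched as follows; since $L_h$ is constant in time, the left-hand side matrix is factorized only once. This is again only an illustrative sketch.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def crank_nicolson_stepper(L, tau):
    """Return a map p^n -> p^{n+1} solving (I/tau - L_h/2) p^{n+1} = (I/tau + L_h/2) p^n."""
    n = L.shape[0]
    lu_piv = lu_factor(np.eye(n) / tau - 0.5 * L)    # L_h is constant, so factorize once
    M_right = np.eye(n) / tau + 0.5 * L
    return lambda p: lu_solve(lu_piv, M_right @ p)
```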
## Numerical results
In this subsection, we present some numerical results for the regularized Kimura equation with different initial conditions.
We first consider a uniform initial condition, given by $$\label{IC1}
\rho(x, 0) = 1 / (1 - 2 \delta), \quad x \in (\delta, 1 - \delta), \quad a(0) = b(0) = 0.$$
Fig. [4](#Fig1){reference-type="ref" reference="Fig1"}(a) shows the numerical solution $\rho(x, t)$ for $\delta = 0.001$ and $\epsilon = 0.001$ at $t = 10$, with $h = 10^{-4}$ and $\tau = 10^{-4}$. The time evolution of $a(t)$ and $b(t)$ is shown in Fig. [4](#Fig1){reference-type="ref" reference="Fig1"} (b). Due to the symmetry of the initial condition, the dynamics of $a(t)$ and $b(t)$ are exactly the same. It is clear that nearly all the mass is moving to the boundary. The evolution of the discrete energy and the first moment is shown in Fig. [4](#Fig1){reference-type="ref" reference="Fig1"} (c) and (d). The discrete energy is decreasing with respect to time, while the first moment is conserved numerically. We need to mention that due to the symmetry of the initial condition, we have $a'(t) = b'(t)$ and $\rho(1 - \delta, t) = \rho(\delta, t)$, so the first moment is also conserved in theory.
[Figure: (a) bulk solution $\rho(x,t)$ for the uniform initial condition, (b) time evolution of $a(t)$ and $b(t)$, (c) discrete energy, (d) first moment; $\epsilon = \delta = 10^{-3}$.]
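The quantities reported in the figure can be monitored with the sketches above. The following snippet is only an illustration (the grid is deliberately coarser than the one used for the figures to keep the check cheap): the discrete free energy uses the convention $0\ln 0 = 0$, and the first moment is taken as $\delta a + (1-\delta) b + \sum_i w_i x_i \rho_i$, which is our reading of the plotted quantity.

```python
import numpy as np
from scipy.special import xlogy            # xlogy(0, 0) = 0, i.e. the convention 0*ln(0) = 0

def quadrature_weights(N, delta):
    h = (1.0 - 2.0 * delta) / N
    w = np.full(N + 1, h); w[0] = w[-1] = h / 2.0
    return h, delta + h * np.arange(N + 1), w

def discrete_energy(p, N, delta, eps):
    h, x, w = quadrature_weights(N, delta)
    a, rho, b = p[0], np.clip(p[1:N + 2], 0.0, None), p[N + 2]   # clip tiny undershoots
    return (np.sum(w * xlogy(rho, x * (1.0 - x) * rho))
            + xlogy(a, eps * delta * (1.0 - delta) * a)
            + xlogy(b, eps * delta * (1.0 - delta) * b))

def first_moment(p, N, delta):
    h, x, w = quadrature_weights(N, delta)
    return delta * p[0] + (1.0 - delta) * p[N + 2] + np.sum(w * x * p[1:N + 2])

N, delta, eps, tau, T = 400, 1e-3, 1e-3, 1e-3, 10.0
p = np.zeros(N + 3); p[1:N + 2] = 1.0 / (1.0 - 2.0 * delta)      # uniform initial condition, a = b = 0
step = crank_nicolson_stepper(build_Lh(N, delta, eps), tau)      # helpers from the sketches above
E, M = [discrete_energy(p, N, delta, eps)], [first_moment(p, N, delta)]
for _ in range(round(T / tau)):
    p = step(p)
    E.append(discrete_energy(p, N, delta, eps))
    M.append(first_moment(p, N, delta))
print("max energy increase:", float(np.max(np.diff(E))))   # expected non-positive (energy stability)
print("first-moment drift:", float(max(M) - min(M)))       # expected at round-off level (symmetry)
```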
Next, we conducted numerical experiments to investigate the influence of $\epsilon$ and $\delta$ on the numerical solution. We take $h = 10^{-4}$ and $\tau = 10^{-4}$ to make the effects of numerical discretization errors negligible. Figure [2](#Fig_delta_epsilon){reference-type="ref" reference="Fig_delta_epsilon"}(a) shows the numerical solutions $\rho(x, t)$ at $t = 10$ for various values of $\delta$, while keeping $\epsilon$ fixed at $0.001$. Similarly, Figure [2](#Fig_delta_epsilon){reference-type="ref" reference="Fig_delta_epsilon"}(b) depicts the numerical solutions at $t = 10$ for different values of $\epsilon$ with $\delta$ fixed at $0.001$. The obtained results clearly demonstrate that the value at $x = \delta$ and $1-\delta$ is determined by $\epsilon$. Additionally, as $\epsilon$ approaches zero, the bulk solution $\rho(x, t)$ at $t = 10$ approaches zero. One can notice that $\| \rho \|_{L^2}$ is proportional to $\epsilon$, which is consistent with the formal analysis ([\[value_A\_B\]](#value_A_B){reference-type="ref" reference="value_A_B"}).
![(a) Numerical solution in the bulk ($\rho(x, t)$) at $t = 10$ for fixed $\epsilon = 1e-3$ and different $\delta$. (b) Numerical solution in the bulk ($\rho(x, t)$) at $t = 10$ for fixed $\delta = 1e-3$ and different $\epsilon$. $\Delta t = 1e-4$ and $h = (1 - 2\delta) / 10^4$.](./fixed_epsilon_1e-3_different_delta.pdf "fig:"){#Fig_delta_epsilon width="0.48 \\linewidth"} ![(a) Numerical solution in the bulk ($\rho(x, t)$) at $t = 10$ for fixed $\epsilon = 1e-3$ and different $\delta$. (b) Numerical solution in the bulk ($\rho(x, t)$) at $t = 10$ for fixed $\delta = 1e-3$ and different $\epsilon$. $\Delta t = 1e-4$ and $h = (1 - 2\delta) / 10^4$.](./LMA_delta_1e-3_different_epsilon.pdf "fig:"){#Fig_delta_epsilon width="0.48 \\linewidth"}
Fig. [4](#Fig1){reference-type="ref" reference="Fig1"} shows the numerical solutions for $\delta = 0.01$ at $T = 10$ with $\epsilon = 1/100$ and $1/1000$, respectively.
![Numerical solutions at $T = 10$ for $\delta = 0.01$ with $\epsilon = 1/100$ (left) and $1/1000$ (right) respectively. The insets show $\rho(x, t)$ for $x \in (\delta, 1 - \delta)$. ](./Code/epsilon_100_all_case1_T_10 "fig:"){#Fig1 width="0.48 \\linewidth"} ![Numerical solutions at $T = 10$ for $\delta = 0.01$ with $\epsilon = 1/100$ (left) and $1/1000$ (right) respectively. The insets show $\rho(x, t)$ for $x \in (\delta, 1 - \delta)$. ](./Code/epsilon_1000_all_case1_T_10 "fig:"){#Fig1 width="0.48 \\linewidth"}
In this subsection, we consider a non-uniform initial condition $$\label{IC2}
\rho(x, 0) =
\begin{cases}
& 0.5 / ( 1 - 2 \delta), \quad \delta < x < 0.5, \\
& 1.5 / ( 1 - 2 \delta), \quad 0.5 \leq x < 1 - \delta, \\
\end{cases} \quad a(0) = b(0) = 0.$$
[Figure: (a) bulk solution $\rho(x,t)$ at $t = 10$ for the non-uniform initial condition, (b) time evolution of $a(t)$ and $b(t)$, (c) discrete energy, (d) first moment; $\epsilon = \delta = 10^{-3}$.]
Fig. [\[Fig_nonuniform\]](#Fig_nonuniform){reference-type="ref" reference="Fig_nonuniform"}(a) shows the numerical solution $\rho(x, t)$ for $\delta = 0.001$ and $\epsilon = 0.001$ at $t = 10$, with $h = 10^{-4}$ and $\tau = 10^{-4}$. The time evolution of $a(t)$ and $b(t)$ is shown in Fig. [\[Fig_nonuniform\]](#Fig_nonuniform){reference-type="ref" reference="Fig_nonuniform"} (b). The evolution of the discrete energy and the first moment is shown in Fig. [\[Fig_nonuniform\]](#Fig_nonuniform){reference-type="ref" reference="Fig_nonuniform"} (c) and (d). In this case, we also observe energy stability, and the first moment is almost constant.
![Numerical solutions at $T = 10$ for $\delta = 0.01$ with $\epsilon = 1/100$ (left) and $1/1000$ (right) respectively \[non-uniform initial condition ([\[IC2\]](#IC2){reference-type="ref" reference="IC2"})\]. The insets show $\rho(x, t)$ for $x \in (\delta, 1 - \delta)$. ](./Code/epsilon_100_all_case2_T_10 "fig:"){#Fig3 width="0.58 \\linewidth"} ![Numerical solutions at $T = 10$ for $\delta = 0.01$ with $\epsilon = 1/100$ (left) and $1/1000$ (right) respectively \[non-uniform initial condition ([\[IC2\]](#IC2){reference-type="ref" reference="IC2"})\]. The insets show $\rho(x, t)$ for $x \in (\delta, 1 - \delta)$. ](./Code/epsilon_1000_all_case2_T_10 "fig:"){#Fig3 width="0.4 \\linewidth"}
Lastly, we take the initial condition as a Gaussian distribution $$\rho_0(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp( - \frac{(x - x_0)^2}{2 \sigma^2}), \quad a(0) = b(0) = 0,$$ with $\sigma = 0.1$ and $x_0 = 0.4$. Fig. [\[Fig_Gaussian\]](#Fig_Gaussian){reference-type="ref" reference="Fig_Gaussian"} shows the numerical results for $\delta = 10^{-4}$ and $\epsilon = 10^{-4}$. Fig. [\[Fig_Gaussian\]](#Fig_Gaussian){reference-type="ref" reference="Fig_Gaussian"}(a) shows $\rho(x, t)$ for $t = 0$, $0.1$, $0.5$ and $2$, respectively, while Fig. [\[Fig_Gaussian\]](#Fig_Gaussian){reference-type="ref" reference="Fig_Gaussian"}(b) shows the evolution of $a(t)$ and $b(t)$ with respect to $t$. The evolution of the discrete energy and the first moment is shown in Fig. [\[Fig_Gaussian\]](#Fig_Gaussian){reference-type="ref" reference="Fig_Gaussian"} (c) and (d). Again, the numerical scheme is energy stable and the first moment is also constant.
[Figure: (a) $\rho(x,t)$ at several times for the Gaussian initial condition, (b) time evolution of $a(t)$ and $b(t)$, (c) discrete energy, (d) first moment.]
A crucial assumption in proving the existence of strong solutions is $a'(t) \geq 0$ and $b'(t) \geq 0$. As discussed earlier, this requires $\epsilon$ to be small enough. In this subsection, we present some numerical results showing that if $\epsilon$ is not small enough, we may have $a'(t) < 0$ or $b'(t) < 0$. To this end, we consider the initial value $$\label{IC_large}
\rho(x, 0) = 0.02 / (1 - 2 \delta), \quad x \in (\delta, 1 - \delta), \quad a(0) = b(0) = 0.49.$$ Fig. [7](#Fig_at){reference-type="ref" reference="Fig_at"} shows $a(t)$ for $t \in (0, 0.1)$ for fixed $\delta = 10^{-4}$ and different $\epsilon$. One can see that for relatively large $\epsilon$, $a(t)$ is not a monotonic function. But interestingly, $a(t)$ will be a monotonic function for $t > t^*$, where $t^*$ is a time that depends on $\epsilon$.
![ Numerical solution of $a(t)$ for fixed $\delta = 10^{-4}$ and different $\epsilon$. ](./at_a_0_0_49_different_epsilon.pdf){#Fig_at width="0.55 \\linewidth"}
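Reusing the hypothetical helpers from the sketches above, the qualitative effect can be examined by monitoring whether $a(t)$ is non-decreasing; the grid and the values of $\epsilon$ below are illustrative choices, not the ones used for the figure.

```python
import numpy as np

N, delta, tau, T = 400, 1e-4, 1e-3, 0.1
for eps in (1e-3, 1e-1, 1e1):                       # hypothetical values of epsilon
    p = np.zeros(N + 3)
    p[1:N + 2] = 0.02 / (1.0 - 2.0 * delta)         # rho(x, 0) = 0.02 / (1 - 2*delta)
    p[0] = p[N + 2] = 0.49                          # a(0) = b(0) = 0.49
    step = crank_nicolson_stepper(build_Lh(N, delta, eps), tau)
    a_hist = [p[0]]
    for _ in range(round(T / tau)):
        p = step(p)
        a_hist.append(p[0])
    print(eps, "a(t) non-decreasing:", bool(np.all(np.diff(a_hist) >= -1e-12)))
```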
# Concluding remarks
We proposed a new continuum model for a random genetic drift problem by incorporating a dynamic boundary condition approach. The dynamic boundary condition compensates for singularities on the boundary in the original Kimura equation. We have demonstrated the existence and uniqueness of a strong solution for the regularized system. Finally, we presented some numerical results for the regularized model, which indicate that the model can capture the main features of the original model. As future work, we will further study the long-term behavior of the new model. Additionally, we plan to extend the current approach to multi-allele genetic drift problems.
# Acknowledgements {#acknowledgements .unnumbered}
C. L. is partially supported by NSF grants DMS-1950868 and DMS-2118181. J.-E. S. would like to thank C. L. and the Illinois Institute of Technology for the invitation to a research visit. Moreover, J.-E. S. acknowledges the support of the DFG under grant No. 456754695. Y. W. is partially supported by NSF DMS-2153029.
[^1]: Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL. Email: cliu124\@iit.edu
[^2]: Department of Mathematics, Technical University Munich, Munich, Germany. Email: janeric.sulzbach\@ma.tum.de
[^3]: Department of Mathematics, University of California, Riverside, CA. Email: yiweiw\@ucr.edu Corresponding author.
| arxiv_math | {
"id": "2309.09484",
"title": "On a Continuum Model for Random Genetic Drift: A Dynamical Boundary\n Condition Approach",
"authors": "Chun Liu, Jan-Eric Sulzbach and Yiwei Wang",
"categories": "math.AP cs.NA math.NA",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
In [@FR23], it was shown that the set of Nash equilibria for any non-cooperative $N$ player game coincides with the set of Pareto optimal points of a certain vector optimization problem with non-convex ordering cone. To avoid dealing with a non-convex ordering cone, an equivalent characterization of the set of Nash equilibria as the intersection of the Pareto optimal points of $N$ multi-objective problems (i.e. with the natural ordering cone) is proven. So far, algorithms to compute the exact set of Pareto optimal points of a multi-objective problem exist only for the class of linear problems, which reduces the possibility of finding the true set of Nash equilibria by those algorithms to linear games only.
In this paper, we will consider the larger class of convex games. As, typically, only approximate solutions can be computed for convex vector optimization problems, we first show, in total analogy to the result above, that the set of $\epsilon$-approximate Nash equilibria can be characterized by the intersection of $\epsilon$-approximate Pareto optimal points for $N$ convex multi-objective problems. Then, we propose an algorithm based on results from vector optimization and convex projections that allows for the computation of a set that, on one hand, contains the set of all true Nash equilibria, and is, on the other hand, contained in the set of $\epsilon$-approximate Nash equilibria. In addition to the joint convexity of the cost function for each player, this algorithm works provided the players are restricted by either shared polyhedral constraints or independent convex constraints.
author:
- "Zachary Feinstein [^1]"
- "Niklas Hey[^2]"
- "Birgit Rudloff [^3]"
bibliography:
- biblio-Nash.bib
title: Approximating the set of Nash equilibria for convex games
---
# Introduction
The concept of a *Nash equilibrium* is an important concept in game theory that was first introduced by John Nash in his works [@nash1950; @nash1951]. Considering a non-cooperative game with $N$ players ($N \geq 2$), where each player $i$ tries to minimize her individual cost function $f_i$, the Nash equilibrium describes a joint strategy of all players at which each player $i$ cannot reduce her cost $f_i$ assuming the strategies of the other players remain fixed. Hence, the Nash equilibrium provides stability in a non-cooperative game setting.
In this paper, we will mainly focus on convex games with a shared constraint set $\mathbb{X}$. Usually, when considering convex games, the focus is on proving the existence of a unique equilibrium point and developing methods for finding this particular equilibrium, see e.g. [@rosen65], where additional strong convexity conditions are assumed to guarantee uniqueness of the Nash equilibrium. Here, we will consider convex games without such additional assumptions, hence allowing for games with a unique, several, or infinitely many equilibria. Our aim is to approximate the set of Nash equilibria for any desired error bound $\epsilon>0$.
In [@FR23], it was shown that the set of Nash equilibria can be equivalently characterized by the set of Pareto optimal points of a specific vector optimization problem with a non-convex ordering cone, or equivalently, by the intersection of the Pareto optimal points of $N$ specific multi-objective problems. This result holds true in general for any possible non-cooperative game without making assumptions on cost functions or constraint sets. In order to use this result for numerical computations, one would need to compute the set of Pareto optimal points for multi-objective problems. So far, algorithms that provide the set of Pareto optimal points exist only for the special case of linear multi-objective optimization, see e.g. [@Armand93; @VanTu17; @tohidi2018adjacency]. This restricts computational methods of finding the set of Nash equilibria via that characterization to linear games so far.
The goal of this paper is to introduce a method which approximates the set of all Nash equilibria of a convex game. Hence, $\epsilon$-approximate solution concepts are considered for both Nash equilibria and Pareto optimality. Similar to the characterizations proven in [@FR23], the set of $\epsilon$-Nash equilibria for any possible $N$-player game can be characterized by the intersection of $\epsilon$-Pareto optimal points of $N$ multi-objective problems for any $\epsilon>0$. For convex games these multi-objective problems are convex. In general, the set of Pareto optimal points as well as the set of $\epsilon$-Pareto optimal points are not finitely generated and thus cannot be computed exactly. However, due to the Lipschitz continuity of the convex cost functions and by making additional assumptions on the structure of the convex constraint set, we will (for each of the $N$ specific convex problems) be able to compute a finitely generated set which, on one hand, contains all Pareto optimal points and, on the other hand, is a subset of the $\epsilon$-Pareto optimal points for this problem for some specific $\epsilon>0$. As a consequence, taking the intersection of these $N$ sets yields a finitely generated set $X$ which contains the set of true Nash equilibria $\operatorname{NE}(f,\mathbb X)$ for the convex game while being contained in the set of $\epsilon$-approximate Nash equilibria $\epsilon\operatorname{NE}(f,\mathbb X)$, i.e. $$\begin{aligned}
\operatorname{NE}(f,\mathbb X) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb X).\end{aligned}$$ This result provides some advantages: It is guaranteed that each element of interest, namely each equilibrium point, is contained in the computed set. Furthermore, each element in the found set is guaranteed to be at least almost at equilibrium, i.e. each player's cost exceeds the cost of her best response by at most $\epsilon$. In addition to the Lipschitz continuity of the convex cost functions, we distinguish between two different assumptions on the structure of the shared constraint set: In the first case we assume that the shared constraint set is a polytope. In the second case we consider for each player $i$ a convex constraint set that is independent from the constraints of the other players. For both cases, the proposed algorithm is proven to work correctly. Computational examples are provided which illustrate the sandwich result.
# Definitions
Let us consider the following non-cooperative shared constraint games for $N \geq 2$ players. Each player $i$ (for $i = 1,...,N$) considers strategies in the linear space $\mathcal{X}_i= \mathbb{R}^{n_i}$, where $n_i\in\mathbb{N}$, and where we do not impose any condition that $\mathcal{X}_i$ and $\mathcal{X}_j$ are equal. Furthermore, each player $i$ has a cost function $f_i: \prod_{j = 1}^N \mathcal{X}_j \to \mathbb{R}$ she seeks to minimize. This cost $f_i(x)$ for $x \in \prod_{j = 1}^N \mathcal{X}_j$ may depend on player $i$'s strategy $x_i \in \mathcal{X}_i$ as well as the strategy chosen by all other players $x_{-i} \in \prod_{j \neq i} \mathcal{X}_j$. The vector of cost functions is denoted by $f=(f_1,...,f_N)$ with $f: \prod_{i = 1}^N \mathcal{X}_i \to \mathbb{R}^N$. Assume further a shared constraint set $\mathbb{X}\subseteq \prod_{j = 1}^N \mathcal{X}_j$ for the joint strategy $x \in \prod_{j = 1}^N \mathcal{X}_j$ of all players. This shared constraint condition is common in the literature (see, e.g., [@rosen65; @facchinei2007generalized]). In a non-cooperative shared constraint game each player $i$ minimizes her cost function given all other players fix their strategies $x_{-i}^* \in \prod_{j \neq i} \mathcal{X}_j$. That is, for all $i = 1,...,N$ the following optimization problem is considered $$\label{eq:game}
x_i^* \in \mathop{\mathrm{arg\,min}}\{f_i(x_i,x_{-i}^*) \; | \; (x_i,x_{-i}^*) \in \mathbb{X}\}.$$
**Definition 1**. *A joint strategy $x^* \in \mathbb{X}$ satisfying [\[eq:game\]](#eq:game){reference-type="eqref" reference="eq:game"} for all $i = 1,...,N$ is called a ***Nash equilibrium***. Thus, $x^* \in \mathbb{X}$ is a Nash equilibrium if, for any player $i$, $f_i(x_i,x_{-i}^*) \geq f_i(x^*)$ for all strategies $x_i \in \mathcal{X}_i$ with $(x_i,x_{-i}^*) \in \mathbb{X}$. The set of all Nash equilibria is denoted by $\operatorname{NE}(f,\mathbb{X})$.*
When numerical methods are used, one has to deal with approximation errors. Therefore, we will now introduce the notion of approximate Nash equilibria. They are also needed for, e.g., sensitivity analysis of Nash equilibria, see [@feinstein2022].
**Definition 2**. *[@nisan2007algorithmic Section 2.6.6] The joint strategy $x^* \in \mathbb{X}$ is called an $\bm{\epsilon}$-***approximate Nash equilibrium*** for the game if, for any player $i$, $$f_i(x_i,x_{-i}^*) \leq f_i(x^*) \; \Rightarrow \; f_i(x_i,x_{-i}^*) + \epsilon \geq f_i(x^*)$$ for any $x_i \in \mathcal{X}_i$ such that $(x_i,x_{-i}^*) \in \mathbb{X}$. The set of all $\epsilon$-approximate Nash equilibria will be denoted by $\epsilon\operatorname{NE}(f,\mathbb{X})$. That is, $$\epsilon\operatorname{NE}(f,\mathbb{X}) := \{x^* \in \mathbb{X}\; | \; \forall i: \; f_i(x^*)\leq\inf\{f_i(x_i,x_{-i}^*) \; | \; (x_i,x_{-i}^*) \in \mathbb{X}\} + \epsilon\}.$$*
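As a small illustration of this definition (and not of the algorithm proposed in this paper), the following Python sketch checks the $\epsilon$-approximate equilibrium condition for a hypothetical two-player convex game with independent box constraints by numerically computing each player's best-response cost.

```python
from scipy.optimize import minimize_scalar

# hypothetical 2-player convex game on the box [0, 1]^2 with independent box constraints
f = [lambda x: (x[0] - x[1]) ** 2,      # player 1 wants to match player 2
     lambda x: (x[1] - 0.5) ** 2]       # player 2 wants to play 0.5

def is_eps_nash(x_star, f, eps, bounds=(0.0, 1.0)):
    """Check f_i(x*) <= inf_{x_i feasible} f_i(x_i, x_{-i}^*) + eps for every player i."""
    for i, fi in enumerate(f):
        def cost_of_own_strategy(xi):
            x = list(x_star)
            x[i] = xi                    # vary player i's strategy, keep x_{-i}^* fixed
            return fi(x)
        best = minimize_scalar(cost_of_own_strategy, bounds=bounds, method="bounded").fun
        if fi(x_star) > best + eps:
            return False
    return True

print(is_eps_nash([0.5, 0.5], f, eps=1e-6))   # the unique Nash equilibrium of this game
print(is_eps_nash([0.3, 0.3], f, eps=1e-6))   # not an equilibrium: player 2 can improve by 0.04
print(is_eps_nash([0.3, 0.3], f, eps=0.05))   # but it is a 0.05-approximate Nash equilibrium
```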
Let us now introduce the notion of Pareto optimality. Consider a linear space $\mathcal{X}$ as well as the space $\mathbb{R}^m$ with the natural ordering cone $\mathbb{R}^m_+$. Recall that a set $C \subseteq \mathbb{R}^m$ is called a cone if $\alpha C\subseteq C$ for all $\alpha \geq 0$. The convex cone $\mathbb{R}^m_+$ introduces a pre-order $\leq$ on $\mathbb{R}^m$ via $x\leq y \iff y-x\in \mathbb{R}^m_+,$ for $x,y\in \mathbb{R}^m$. A multi-objective optimization problem is an optimization problem of the form $$\label{eq:VOPg}
\min\{g(x) \; | \; x \in \mathbb{X}\}$$ for some feasible region $\mathbb{X}\subseteq \mathcal{X}$, a vector function $g: \mathbb{X}\to \mathbb{R}^m$ and ordering cone $\mathbb{R}^m_+$. To minimize this vector-valued function, and thus to solve the multi-objective optimization problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"}, means to compute the set of minimizers, also called Pareto optimal points, or efficient points, which are defined as follows.
**Definition 3**. *[@GJN06][\[defn:pareto\]]{#defn:pareto label="defn:pareto"} An element $x^* \in \mathbb X$ is called **Pareto optimal** for problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} if $$\big(g(x^*) - \mathbb{R}^m_+\setminus{\{0\}}\big)\cap g[\mathbb{X}] =\emptyset.$$ The set of all Pareto optimal points of problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} is denoted by $$\mathop{\mathrm{argMin}}\{g(x) \; | \; x \in \mathbb{X}\}.$$*
We use the notation $g[\mathbb{X}]:=\{g(x) \; | \; x\in \mathbb{X}\} \subseteq \mathbb{R}^m$ for the image of the feasible set $\mathbb{X}$. Note that the set $\mathop{\mathrm{argMin}}\{g(x) \; | \; x \in \mathbb{X}\}$ can equivalently be characterized as the set of feasible points $x^*$ that map to minimal elements of $g[\mathbb{X}]$, i.e., the set of points $x^* \in \mathbb{X}$ such that if $g(x) \leq g(x^*)$ for some $x \in \mathbb{X}$ then $g(x^*) \leq g(x)$ holds, see [@Jahn11 Def. 3.1(c), Def. 4.1(a) and Def. 7.1(a)].
Denote by $\mathcal P:=\operatorname{cl}(g[\mathbb{X}] +\mathbb{R}^m_+)$ the upper image of problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"}. As neither $\mathcal P$, nor the set of all Pareto optimal points of problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} can be computed exactly in general, except for linear multi-objective optimization problems, one usually computes a (polyhedral) inner approximation of $\mathcal P$ for a given error level $\epsilon>0$ and a fixed direction $c \in \mathbb{R}^m_+\setminus{\{0\}}$, see e.g. [@LRU14] for the case of convex problems. This motivated the following definition of $\epsilon$-Pareto optimal points.
**Definition 4**. *[@K79][@GJN06 Def. 2.2] [\[def:Pareto-c-solution\]]{#def:Pareto-c-solution label="def:Pareto-c-solution"} Let $c \in \mathbb{R}^m_+\setminus{\{0\}}$. An element $x^* \in \mathbb X$ is called $\bm{\epsilon}$-**Pareto optimal** with respect to $c$ for problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} if $$\big(g(x^*)- \epsilon c - \mathbb{R}^m_+\setminus{\{0\}}\big)\cap g[\mathbb{X}] =\emptyset.$$ The set of all $\epsilon$-Pareto optimal points of problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} with respect to $c$ is denoted by $$\epsilon\mathop{\mathrm{argMin}}\{g(x) \; | \; x \in \mathbb{X}\} .$$*
Often, the notion of weakly Pareto optimal points and weakly $\epsilon$-Pareto optimal points are important.
**Definition 5**. *[@GJN06] [\[def:wPareto-c-solution\]]{#def:wPareto-c-solution label="def:wPareto-c-solution"} An element $x^* \in \mathbb X$ is called **weakly Pareto optimal** for problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} if $$\big(g(x^*) - \operatorname{int}\mathbb{R}^m_+\big)\cap g[\mathbb{X}] =\emptyset.$$ Let $c \in \mathbb{R}^m_+\setminus{\{0\}}$. An element $x^* \in \mathbb X$ is called weakly $\epsilon$-Pareto optimal with respect to $c$ for problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} if $$\big(g(x^*)- \epsilon c - \operatorname{int}\mathbb{R}^m_+\big)\cap g[\mathbb{X}] =\emptyset.$$*
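The definition of $\epsilon$-Pareto optimal points can be illustrated on a finite sample of image points: a sampled point is flagged as $\epsilon$-Pareto optimal within the sample if no other sampled point lies in $y - \epsilon c - \mathbb{R}^m_+\setminus\{0\}$. The snippet below is only such a finite-sample illustration and not a solver; for $\epsilon=0$ it reduces to the notion of Pareto optimality, and the direction $(0,1)^\top$ mimics the directions $\bar c_i$ used later.

```python
import numpy as np

def eps_pareto_mask(Y, eps, c):
    """For rows of Y (sampled image points), flag those that are eps-Pareto optimal
    with respect to direction c within the sample."""
    Y = np.asarray(Y, dtype=float)
    c = np.asarray(c, dtype=float)
    mask = np.ones(len(Y), dtype=bool)
    for k, y in enumerate(Y):
        shifted = y - eps * c
        dominated = np.all(Y <= shifted, axis=1) & np.any(Y < shifted, axis=1)
        mask[k] = not dominated.any()
    return mask

Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5]])
print(eps_pareto_mask(Y, eps=0.0, c=np.array([0.0, 1.0])))   # [ True  True  True False]: (2.5, 2.5) is dominated by (2, 2)
print(eps_pareto_mask(Y, eps=1.0, c=np.array([0.0, 1.0])))   # all True: every sampled point is 1-Pareto optimal here
```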
# Equivalence of Nash and Pareto and their approximations
Consider now for each player $i$ the following convex multi-objective optimization problem $$\label{eq:vop}
\min\{(x_{-i},-x_{-i},f_i(x)) \; | \; x \in \mathbb{X}\}.$$ Note that the ordering cone of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} is $\mathbb{R}^{m_i}_+$ with $m_i=2 \sum_{j=1, j \neq i}^N n_j +1$. This objective space dimension can be further reduced by considering the equivalent convex multi-objective optimization problem $$\label{eq:vop2}
\min\left\{\left(x_{-i},-\sum_{j=1, j \neq i}^N \sum_{k = 1}^{n_j} x_{jk},f_i(x)\right) \; \bigg\vert \; x \in \mathbb{X}\right\}$$ with natural ordering cone in dimension $\sum_{j=1, j \neq i}^N n_j+2$. This is important for computational aspects and should be kept in mind. For purely aesthetic reasons and since it simplifies explanations we will work with problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} instead of problem [\[eq:vop2\]](#eq:vop2){reference-type="eqref" reference="eq:vop2"} in our exposition.
Note that minimizing the multi-objective function $(x_{-i},-x_{-i},f_i(x))$ with respect to the natural ordering cone in [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} fixes, for player $i$, the strategy $x_{-i}$ of the other players while minimizing her objective function $f_i(x)$. This clearly corresponds to solving the optimization problem [\[eq:game\]](#eq:game){reference-type="eqref" reference="eq:game"} for any given strategy $x_{-i}=x_{-i}^*$. Doing this for each player, i.e. taking the intersection of the Pareto optimal points of problems [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} over all players, computes the fixed points, i.e. the Nash equilibria. The following theorem is immediate.
**Theorem 6**. *[@FR23][\[thm:nash\]]{#thm:nash label="thm:nash"} The set of Nash equilibria of any non-cooperative game [\[eq:game\]](#eq:game){reference-type="eqref" reference="eq:game"} coincides with the intersection of the Pareto optimal points of problems [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} over all $i\in\{1,...,N\}$, i.e., $$\begin{aligned}
\label{eq:shared}
\operatorname{NE}(f,\mathbb{X}) = \bigcap_{i = 1}^N \mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x)) \; | \; x \in \mathbb{X}\}.\end{aligned}$$*
The above theorem was given in [@FR23] in a more general framework and in a different formulation using a non-convex ordering cone (see [@FR23 Th. 2.6, Cor. 2.8]). We provide here the above intuitive and simplified formulation that was already mentioned in Remark 3.5 in [@FR23] for the linear case, but holds of course also in general.
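To see the mechanism of Theorem [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} at work on a toy example, one can restrict a game to a finite grid of strategies: there, Pareto optimality of $(x_{-i},-x_{-i},f_i(x))$ forces the opponents' strategies to coincide, so it reduces to a grid-restricted best-response test, and intersecting over the players recovers the grid equilibria. The game below is a hypothetical illustration; because of the grid restriction this does not, in general, reproduce the exact equilibrium set of the continuous game.

```python
import itertools

grid = [i / 20 for i in range(21)]                  # 21 grid points in [0, 1] per player
f = [lambda x: (x[0] - x[1]) ** 2,                  # player 1 wants to match player 2
     lambda x: (x[1] - 0.5) ** 2]                   # player 2 wants to play 0.5
strategies = list(itertools.product(grid, grid))

def grid_pareto_set(i):
    """Grid points that are Pareto optimal for (x_{-i}, -x_{-i}, f_i(x)) within the grid,
    i.e. grid points at which player i plays a grid-restricted best response."""
    best = {}
    for x in strategies:
        best[x[1 - i]] = min(best.get(x[1 - i], float('inf')), f[i](x))
    return {x for x in strategies if f[i](x) <= best[x[1 - i]]}

nash_on_grid = grid_pareto_set(0) & grid_pareto_set(1)
print(nash_on_grid)                                 # {(0.5, 0.5)}: the unique Nash equilibrium of this game
```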
In the case of linear games the set of all Pareto optimal points of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} can be computed exactly and Theorem [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} can be used to numerically compute the set of all Nash equilibria of such games, see [@FR23]. If the game is not linear, approximations need to be considered. In the following, we will therefore relate the set of $\epsilon$-approximate Nash equilibria with the $\epsilon$-Pareto optimal points of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. To do so we will fix the directions $\bar{c}_i=(0,...,0,1)^\top\in \mathbb{R}^{m_i}_+\setminus{\{0\}}$ for all $i\in\{1,...,N\}$. The choice of this direction ensures that no $\epsilon$-deviation is allowed in the other player's strategies, so for each player $i$ the strategy $x_{-i}$ of the other players stays fixed, while an $\epsilon$-deviation is allowed for the objective $f_i(x)$ of player $i$.
**Theorem 7**. *The set of $\epsilon$-approximate Nash equilibria of any non-cooperative game [\[eq:game\]](#eq:game){reference-type="eqref" reference="eq:game"} coincides with the intersection of the $\epsilon$-Pareto optimal points of problems [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} for direction $\bar{c}_i=(0,...,0,1)^\top$ over all $i\in\{1,...,N\}$, i.e., $$\begin{aligned}
\label{eq:shared_eps}
\epsilon\operatorname{NE}(f,\mathbb{X}) = \bigcap_{i = 1}^N \epsilon\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x)) \; | \; x \in \mathbb{X}\}.\end{aligned}$$*
*Proof.* The choice of the direction $\bar{c}_i=(0,...,0,1)^\top\in \mathbb{R}^{m_i}_+\setminus{\{0\}}$ and the choice of the objective function $g(x)=(x_{-i},-x_{-i},f_i(x))$ ensure that by Definition [\[def:Pareto-c-solution\]](#def:Pareto-c-solution){reference-type="ref" reference="def:Pareto-c-solution"}, $x^*\in \epsilon\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))|x \in \mathbb X \}$ is equivalent to: there does not exist an $x\in \mathbb X$ such that $x_{-i} = x_{-i}^*$ and $f_i(x)< f_i(x^*)-\epsilon$. This is equivalent to $f_i(x_i,x_{-i}^*) + \epsilon \geq f_i(x^*)$ for any $x_i \in \mathcal{X}_i$ such that $(x_i,x_{-i}^*) \in \mathbb{X}$. Since this has to hold for all $i\in\{1,...,N\}$, the equivalence to Definition [Definition 2](#defn:nash-approx){reference-type="ref" reference="defn:nash-approx"} follows. ◻
**Remark 8**. *Theorems [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} and [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"} can also be stated as:*
- *For generalized games, where each player can have an individual constraint set, i.e., when problem [\[eq:game\]](#eq:game){reference-type="eqref" reference="eq:game"} in Definition [Definition 1](#defn:nash-shared){reference-type="ref" reference="defn:nash-shared"} is replaced by $x_i^* \in \mathop{\mathrm{arg\,min}}\{f_i(x_i,x_{-i}^*) \; | \; (x_i,x_{-i}^*) \in \mathbb{X}_i\}$. Then, the constraint sets $\mathbb{X}$ in the Pareto problems in Theorems [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} and [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"} have to be replaced by the individual constraint sets $\mathbb{X}_i \subseteq \prod_{j = 1}^N \mathcal{X}_j$ for $i\in\{1,...,N\}$.*
- *For general linear spaces $\mathcal{X}_i$ for the strategies. Then the ordering cone $\mathbb{R}^{m_i}_+$ in problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} is replaced by the natural ordering cone in the corresponding space.*
- *For vector games, i.e. when the objective functions of the players are vector functions. In this case, the ordering cone $\mathbb{R}^{m_i}_+$ in problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} has to be adapted accordingly, see also Section 4 in [@FR23].*
*Within this paper, we are mainly interested in using Theorems [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} and [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"} to numerically approximate the set of all Nash equilibria for certain convex games as detailed in Section [4](#sec:convex){reference-type="ref" reference="sec:convex"} below. Thus, we will only work with Theorems [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} and [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"} as stated above.*
**Remark 9**. *[\[rem:weak_opt\]]{#rem:weak_opt label="rem:weak_opt"} Note that even though in multi-objective optimization one often works with weakly Pareto optimal points or weakly $\epsilon$-Pareto optimal points, for our particular multi-objective optimization problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} the results in Theorems [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} and [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"} only hold for the stronger concepts of Pareto optimal points, respectively $\epsilon$-Pareto optimal points. In particular, note that every feasible point $x\in \mathbb X$ is weakly Pareto optimal (and hence also weakly $\epsilon$-Pareto optimal) for problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. This follows easily from the fact that for any arbitrary $\Bar{x} \in \mathbb{X}$ it is $((\Bar{x}_{-i},-\Bar{x}_{-i})^\top -\operatorname{int}\mathbb R^{m_i-1}_+ )\cap \{(x_{-i},-x_{-i})\; | \; x \in \mathbb{X}\}=\emptyset$. Thus, the concepts of weakly ($\epsilon$-) Pareto optimality are not meaningful for problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}.*
# Convex games {#sec:convex}
The aim of this paper is to develop an algorithm that uses representations [\[eq:shared\]](#eq:shared){reference-type="eqref" reference="eq:shared"} and [\[eq:shared_eps\]](#eq:shared_eps){reference-type="eqref" reference="eq:shared_eps"} to approximate the set of Nash equilibria of a non-cooperative game. To do so, we will focus for the remaining part of this paper on convex games.
**Assumption 10**. *[\[ass:convex\]]{#ass:convex label="ass:convex"}*
1. *The shared constraint set $\mathbb{X}\subseteq \prod_{j=1}^N \mathcal{X}_j$ is convex and compact.*
2. *Each cost function $f_i: \prod_{j=1}^N \mathcal{X}_j \to \mathbb R$ is convex.*
The first assumption is typical for convex games. However, in the second assumption, the joint convexity of each player's cost function is assumed; this is in contrast to the usual assumption of convexity only in the decision variable of player $i$ (see, e.g., [@nikaido1955note; @rosen65]). Note that both assumptions imply that each function $f_i$ is Lipschitz continuous on the compact set $\mathbb{X}$. Let us denote by $L>0$ the largest of the corresponding Lipschitz constants. Thus, we have for all $i = 1,...,N$ that $|f_i(x_1)-f_i(x_2)| \leq L \left\Vert x_1-x_2\right\Vert$ for $x_1,x_2 \in \mathbb{X}$ where $\left\Vert \cdot\right\Vert$ is the $L_1$ norm on $\prod_{j=1}^N \mathcal{X}_j$. This will be useful to relate an approximation error made in the preimage space to the corresponding error in the image space using the Lipschitz constant.
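Since the constant $L$ enters the error level of our main result below, it is worth noting how it can be obtained in practice. For a differentiable $f_i$, the Lipschitz constant with respect to the $L_1$ norm on a convex set equals the supremum of $\left\Vert \nabla f_i\right\Vert_\infty$ over that set; for simple (e.g. quadratic) costs this supremum can be read off the gradients at the vertices of a box, while sampling only yields a lower bound. The following is a minimal sketch of such a sampling check; the two quadratic costs and the box $[-1,1]^2$ are those of Example [\[ex:1\]](#ex:1){reference-type="ref" reference="ex:1"} below, for which $L=3$.

```python
import numpy as np

# Hedged sketch: lower-bound the L1-norm Lipschitz constant by sampling the
# sup-norm of the gradients on the box [-1, 1]^2.  The quadratic costs below
# are those of the first illustrative example in Section 5; any differentiable
# convex costs with known gradients could be plugged in instead.

def grad_f1(x):  # gradient of f_1(x) = 0.5*x1^2 - x1*(x2 + 0.5) + x2^2
    x1, x2 = x
    return np.array([x1 - x2 - 0.5, -x1 + 2.0 * x2])

def grad_f2(x):  # gradient of f_2(x) = 0.5*x2^2 + x1*x2 + x1^2
    x1, x2 = x
    return np.array([2.0 * x1 + x2, x1 + x2])

def sampled_lipschitz_lower_bound(grads, lo, hi, n=20_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n, len(lo)))
    return max(max(np.abs(g(p)).max() for p in pts) for g in grads)

print(sampled_lipschitz_lower_bound([grad_f1, grad_f2], [-1, -1], [1, 1]))
# prints a value slightly below 3; the exact constant on this box is L = 3
```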
Under these convexity assumptions and Theorem [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"}, we can hence use convex multi-objective optimization methods to compute an approximation of the set of Nash equilibria. The following three difficulties appear, which we will refer to as (i), (ii) and (iii) in the order listed:
- In convex multi-objective optimization one usually focuses on weakly Pareto optimal points (or weakly $\epsilon$-Pareto optimal points) since they can equivalently be characterized as solutions to the weighted sum scalarization. That is, a point $x^* \in \mathbb X$ is weakly Pareto optimal for the convex problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} if and only if $x^*$ is a solution to the scalar problem $\min_{x\in \mathbb{X}}\sum_{i=1}^m w_i g_i(x)$ for some $w\in\mathbb{R}^m_+\setminus{\{0\}}$ (Corollary 5.29 of [@Jahn11]). However, as stated in Remark [\[rem:weak_opt\]](#rem:weak_opt){reference-type="ref" reference="rem:weak_opt"}, the concept of weak Pareto optimality is not meaningful for our problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} and we have to work with Pareto optimal points instead. For Pareto optimal points there is not a one-to-one correspondence to solutions of weighted sum scalarizations, but only the following implication: a point $x^* \in \mathbb X$ is Pareto optimal for a convex problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"} if $x^*$ is a solution to the scalar problem $\min_{x\in \mathbb{X}}\sum_{i=1}^m w_i g_i(x)$ for some $w\in\mathbb{R}^m_{++}$ (Theorem 5.18(b) of [@Jahn11]). The absence of an equivalent characterization of the set of Pareto optimal points through scalarizations will make it impossible to solve our problem in full generality for convex games. However, despite this issue we will be able to compute a set which contains the set of all Pareto optimal points of the convex problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} (and is included in the set of all $\epsilon$-Pareto optimal points) if we make additional assumptions on the structure of the constraint set $\mathbb{X}$. We will consider two different structures of $\mathbb{X}$, one in Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} and one in Remark [\[rem:inconstr\]](#rem:inconstr){reference-type="ref" reference="rem:inconstr"}.
- To the best of our knowledge, there is no algorithm so far that computes or approximates the set of all Pareto optimal points or weakly Pareto optimal points for a convex multi-objective optimization problem. In the linear case, such algorithms exist, see [@Armand93; @VanTu17; @tohidi2018adjacency], and have been used to compute the set of Nash equilibria for linear games in [@FR23]. For convex multi-objective optimization problems, it is often not necessary to know the set of all (weakly) Pareto optimal points, as one is usually satisfied in finding finitely many weakly Pareto optimal points that approximate the upper image $\mathcal P$. In detail: one usually computes a finite set of weakly Pareto optimal points whose images provide a polyhedral inner approximation $\mathcal{P}^{In}$ of $\mathcal P$ that is in $\varepsilon$-distance to $\mathcal P$ for a given error level $\varepsilon>0$ and a fixed direction $c \in \operatorname{int}\mathbb{R}^m_+$ in the following sense $$\begin{aligned}
{\label{innerappr1}}
\mathcal{P}^{In} \subseteq \mathcal{P} \subseteq \mathcal{P}^{In}-\varepsilon \{{c}\},\end{aligned}$$ see, e.g., [@LRU14]. This is not sufficient for our purposes as we need on one hand not weakly, but Pareto optimal points, and on the other hand, we need the set of all Pareto optimal points (or $\epsilon$-Pareto optimal points). However, we will see that such a polyhedral approximation [\[innerappr1\]](#innerappr1){reference-type="eqref" reference="innerappr1"} will be a first step to reach our goal.
- The algorithm in [@LRU14], providing a polyhedral inner approximation of $\mathcal P$ satisfying [\[innerappr1\]](#innerappr1){reference-type="eqref" reference="innerappr1"}, works under the assumption that the direction $c \in \operatorname{int}\mathbb{R}^m_+$. This assumption is clearly violated for our problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} as we need the directions to be $\bar{c}_i=(0,...,0,1)^\top\in \mathbb{R}^{m_i}_+\setminus{\{0\}}$ to obtain the representation [\[eq:shared\]](#eq:shared){reference-type="eqref" reference="eq:shared"}.
As mentioned above in issue (i), we try to cover the set of all Pareto optimal points and therefore make additional assumptions on the structure of the constraint set $\mathbb{X}$. In the following, we will consider constraint sets $\mathbb{X}$ that are polytopes, whereas in Remark [\[rem:inconstr\]](#rem:inconstr){reference-type="ref" reference="rem:inconstr"}, we consider the case where each player $i$ has an independent convex constraint set $\mathbb{X}_i \subseteq \mathcal{X}_i$. Problem (iii) can be handled by a small modification of the algorithm in [@LRU14] that is possible because of the particular structure of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}: its objective function is linear in all but the last component and the direction $\bar{c}_i=(0,...,0,1)^\top$ has zero entries in exactly these linear components.
Let us now discuss problem (ii) in detail. The reason why approximations are considered for convex problems is that it is in general not possible to compute the upper image, but only a polyhedral approximation of it. For the same reason it will also not be possible to compute the exact set of Pareto optimal points $\mathop{\mathrm{argMin}}\{g(x) \; | \; x \in \mathbb{X}\}$ or the exact set of $\epsilon$-Pareto optimal points $\epsilon\mathop{\mathrm{argMin}}\{g(x) \; | \; x \in \mathbb{X}\}$ of a convex (but not linear) problem [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"}, as one could only expect to compute a finitely generated set. Hence, it is hopeless to expect to compute the set $\operatorname{NE}(f,\mathbb{X})$ or $\epsilon\operatorname{NE}(f,\mathbb{X})$ via [\[eq:shared\]](#eq:shared){reference-type="eqref" reference="eq:shared"}, respectively [\[eq:shared_eps\]](#eq:shared_eps){reference-type="eqref" reference="eq:shared_eps"}, in Theorems [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} and [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"}, exactly. However, below we propose Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} that computes a finitely generated set $X$ satisfying $$\begin{aligned}
\label{eq:sandwich}
\operatorname{NE}(f,\mathbb X) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb X),\end{aligned}$$ as proven in Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"}. Thus, one obtains an even better approximation of the set of Nash equilibria than the set of all $\epsilon$-approximate Nash equilibria would provide. This will be illustrated in Section [5](#sec:ex){reference-type="ref" reference="sec:ex"}, see in particular Figure [2](#fig:ex1alltogether){reference-type="ref" reference="fig:ex1alltogether"} and [3](#fig:ex2alltogether){reference-type="ref" reference="fig:ex2alltogether"}.
In the following, we will make the following assumption concerning the structure of the constraint set in addition to considering convex games (Assumption [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"}).
**Assumption 11**. *[\[asspolycase\]]{#asspolycase label="asspolycase"} The shared constraint set $\mathbb{X}\subseteq \prod_{j=1}^N \mathcal{X}_j$ is a nonempty polytope of the form $\mathbb{X}=\mathop{\mathrm{conv}}\{\bar{x}^1,...,\Bar{x}^k \}$ for some $k\in\mathbb{N}$.*
**Remark 12**.
- *Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} can be equivalently formulated with linear inequalities, i.e., there is some $A \in \mathbb R^{p \times \sum_{j=1}^N n_j}$ and $b \in \mathbb R^p, p \in \mathbb N$ such that $\mathbb{X}=\{x \in \mathbb R^{\sum_{j=1}^N n_j}| Ax \leq b\}$; a small computational sketch of this conversion is given after this remark.*
- *Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} is satisfied for many games found in the literature (see, e.g., [@nabetani2011parametrized GNEP (21)] and [@braouezec2021economic Proposition 3]). Included are also special cases like mixed strategies (without further constraints) as then $\mathbb{X}=[0,1]^N$ or, more generally, box constraints.*
- *In Remark [\[rem:inconstr\]](#rem:inconstr){reference-type="ref" reference="rem:inconstr"}, we will drop Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} and consider instead the case where each player $i$ has an independent convex constraint set $\mathbb{X}_i \subseteq \mathcal{X}_i$.*
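In practice one may need to pass between the two descriptions of $\mathbb{X}$ mentioned in the first item above, since Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} provides the vertices of $\mathbb{X}$ while the scalarizations in Subsection [4.1](#subsec:mod){reference-type="ref" reference="subsec:mod"} below use the inequality form $Ax \leq b$. The following is a minimal sketch of this conversion, assuming scipy is available and $\mathbb{X}$ is full-dimensional; the square used as $\mathbb{X}$ is an illustrative placeholder.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hedged sketch: pass from the vertex description of Assumption 11 to the
# inequality description A x <= b of the first item of the remark above.
# Qhull requires X to be full-dimensional; lower-dimensional polytopes would
# need e.g. a double-description (cdd) based tool instead.

vertices = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
hull = ConvexHull(vertices)
A, b = hull.equations[:, :-1], -hull.equations[:, -1]   # facets give A x <= b

# sanity check: every vertex satisfies the inequalities up to a tolerance
assert np.all(vertices @ A.T <= b + 1e-9)
```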
For the overall goal to algorithmically compute a set $X$ satisfying [\[eq:sandwich\]](#eq:sandwich){reference-type="eqref" reference="eq:sandwich"}, one has, by equations [\[eq:shared\]](#eq:shared){reference-type="eqref" reference="eq:shared"} and [\[eq:shared_eps\]](#eq:shared_eps){reference-type="eqref" reference="eq:shared_eps"}, to find for each player $i$ a set $X_i$ satisfying $$\begin{aligned}
\label{eq:sandwich2}
\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x)) \; | \; x \in \mathbb{X}\} \subseteq X_i \subseteq \epsilon\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x)) \; | \; x \in \mathbb{X}\}.\end{aligned}$$ Then, the desired $X$ is obtained by setting $X=\bigcap_{i=1}^N X_i$. The sets $X_i$ will be constructed in three steps. Recall that each player $i$ considers the multi-objective problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. Let us denote by $$\mathcal{P}_i:=\{(x_{-i},-x_{-i},f_i(x)) \; | \;x \in \mathbb X\}+\mathbb R^{m_i}_+$$ the upper image of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. Note that by the assumptions of this section, the set $\mathcal{P}_i$ is closed, so the closure in the definition of the upper image is not needed here. In a first step one needs to compute a polyhedral inner approximation $\mathcal{P}^{In}_i$ of $\mathcal{P}_i$ such that $$\begin{aligned}
{\label{innerappr}}
\mathcal{P}_i^{In} \subseteq \mathcal{P}_i \subseteq \mathcal{P}^{In}_i-\varepsilon \{c\}\end{aligned}$$ for $c=\bar{c}_i:=(0,....,0,1)^\top \in\mathbb R^{m_i}_+$ and $\varepsilon>0$. As $\bar{c}_i\notin \operatorname{int}\mathbb{R}^{m_i}_+$, the algorithm in [@LRU14] cannot be applied directly, but can be modified to our setting. The details of this modification are given in Subsection [4.1](#subsec:mod){reference-type="ref" reference="subsec:mod"}; this addresses problem (iii) above. The second step is to sort out faces of this polyhedral approximation that are only weakly efficient in a certain sense; this addresses problem (i) above. Details are given in Subsection [4.2](#sec:maxefficientFaces){reference-type="ref" reference="sec:maxefficientFaces"}. The third step is then to approximate the set of preimages that lie below the remaining maximal efficient faces yielding a set of $\epsilon$-Pareto optimal points $X_i$ satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"}. This addresses problem (ii) above. Details will be given in Subsection [4.3](#sec:Proj){reference-type="ref" reference="sec:Proj"}.
## Computing a polyhedral approximation of the upper image {#subsec:mod}
Algorithm 1 of [@LRU14] allows for the computation of a polyhedral $\varepsilon$-approximation [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"} to problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} for a direction $c_i\in \operatorname{int}\mathbb{R}^{m_i}_+$. Since, by Theorem [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"}, we have to use $c_i=\bar{c}_i:=(0,....,0,1)^\top \notin \operatorname{int}\mathbb{R}^{m_i}_+$ instead, we cannot apply [@LRU14 Algorithm 1] directly. A careful inspection of [@LRU14 Algorithm 1] reveals that the assumption $c_i\in \operatorname{int}\mathbb{R}^{m_i}_+$ is only used in [@LRU14 Proposition 4.4] to show that the so-called Pascoletti-Serafini scalarization of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}, that is, $$\begin{aligned}
{\tag{$P_2(v)$}}
\min \{z \in \mathbb R\; | \; (x_{-i},-x_{-i},f_i(x))-z \bar{c}_i-v \leq 0,\; x \in \mathbb{X}\},\end{aligned}$$ for $v\in\mathbb{R}^{m_i}$, is feasible and strong duality is satisfied between $(P_2(v))$ and its Lagrange dual problem. This in turn is used to prove the correctness of [@LRU14 Algorithm 1] in [@LRU14 Theorem 4.9]. For the direction $\bar{c}_i=(0,....,0,1)^\top$, feasibility and strong duality might fail for the points $v$ considered in the course of the algorithm. We will now introduce a modification to [@LRU14 Algorithm 1] that allows for the specific problem of interest, problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}, to restore feasibility and strong duality for $(P_2(v))$ and its dual. This modification takes advantage of the fact that the zero entries of $\bar{c}_i$ correspond to linear components in the objective of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}.
Let us now explain the details of this modification. Algorithm 1 of [@LRU14] starts with an initialization phase where an initial outer approximation $\mathcal{P}_0$ of $\mathcal{P}_i$ is computed via solving weighted sum scalarizations. The solutions of the corresponding weighted sum scalarizations are added to an initial set $\bar{\mathcal{X}}$. This is done in line 2 and 3 of [@LRU14 Algorithm 1] and remains the same in the modified version. It is shown in [@LRU14] that the initial outer approximation $\mathcal{P}_0$ is pointed.
However, before entering the iteration phase in line 4 of [@LRU14 Algorithm 1], we will add the following two lines to the algorithm, where we denote by $a_i:=\frac{{m_i}-1}{2}$: $$\begin{aligned}
{\label{outerapprupdate}}
\mathcal{P}_0=\mathcal{P}_0 \cap \{y \in \mathbb R^{m_i}\; | \; y_{1:a_i}=-y_{a_i+1:m_i-1} \} \cap \{y \in \mathbb R^{m_i}\; | \; y_{1:a_i} \in \mathop{\mathrm{conv}}\{\Bar{x}^1_{-i},...,\Bar{x}^k_{-i} \} \}
,\end{aligned}$$ $$\begin{aligned}
{\label{barXupdate}}
\bar{\mathcal{X}}=\bar{\mathcal{X}} \cup \{\bar{x}^1,...,\Bar{x}^k \}.\end{aligned}$$ Recall that, by Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"}, $\mathbb{X}=\mathop{\mathrm{conv}}\{\bar{x}^1,...,\Bar{x}^k \}$ for $k\in\mathbb{N}$. This completes the now modified initialization phase of the algorithm. The rest of the algorithm (i.e. the iteration phase of [@LRU14 Algorithm 1]) remains unchanged.
Let us now comment on these two modifications. Line [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"} makes additional cuts to the outer approximation $\mathcal{P}_0$. The first cut with the linear subspace $\{y \in \mathbb R^{m_i}\; | \; y_{1:a_i}=-y_{a_i+1:m_i-1} \}$ is using the particular structure of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}, utilizing that the first $a_i$ components of the objective function are the negative of the next $a_i$ components. Thus, the upper image has to lie in this linear subspace and this fact can be used for the initial outer approximation $\mathcal{P}_0$ already. The second cut with the polyhedron $\{y \in \mathbb R^{m_i}\; | \; y_{1:a_i} \in \mathop{\mathrm{conv}}\{\Bar{x}^1_{-i},...,\Bar{x}^k_{-i} \} \}$ is using that, by Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"}, $\mathbb{X}=\mathop{\mathrm{conv}}\{\bar{x}^1,...,\Bar{x}^k \}$ for $k\in\mathbb{N}$ and the fact that the first $a_i$ components of the objective function of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} are $x_{-i}$. Since both, the linear space $\{y \in \mathbb R^{m_i}\; | \; y_{1:a_i}=-y_{a_i+1:m_i-1} \}$ and the polyhedron $\{y \in \mathbb R^{m_i}\; | \; y_{1:a_i} \in \mathop{\mathrm{conv}}\{\Bar{x}^1_{-i},...,\Bar{x}^k_{-i} \} \}$, are supersets of the image set $\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\}$ of [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}, the updated set $\mathcal{P}_0$ remains still an outer approximation of this image set. (Note that one could add $\mathbb R^{m_i}_+$ to $\mathcal{P}_0$ obtained in [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"} in order to obtain an $\mathcal{P}_0$ that would still be an outer approximation of $\mathcal{P}_i$, but since it is enough for it to be an outer approximation of the image set, the addition of $\mathbb R^{m_i}_+$ is not needed for the algorithm to work correctly.) Further, the updated set $\mathcal{P}_0$ is still a pointed polyhedron. Line [\[barXupdate\]](#barXupdate){reference-type="eqref" reference="barXupdate"} updates the set $\bar{\mathcal{X}}$ by adding the vertices of the feasible set $\mathbb{X}$. This does not affect [@LRU14 Algorithm 1] or its correctness and only changes the set $\bar{\mathcal{X}}$ outputted by the algorithm. It is however crucial for step 3 of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} introduced below, see also Section [4.3](#sec:Proj){reference-type="ref" reference="sec:Proj"}, as it has an important consequence as detailed in Remark [\[rem1\]](#rem1){reference-type="ref" reference="rem1"}.
In the second phase of [@LRU14 Algorithm 1] the outer approximation $\mathcal{P}_0$ is iteratively updated until after termination of the algorithm, [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"} is satisfied. We prove in the following that the iteration phase of [@LRU14 Algorithm 1] works also correctly for problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} with respect to the boundary direction $\bar{c}_i$, if the modification of the initialization phase, i.e. the one with the two additional lines [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"} and [\[barXupdate\]](#barXupdate){reference-type="eqref" reference="barXupdate"}, is used.
The idea of the iteration phase of [@LRU14 Algorithm 1] is that in each iteration it is checked if the distance of the vertices of the current outer approximation $\mathcal{P}_0$ to the upper image $\mathcal{P}_i$ is less or equal than $\varepsilon$. If for each vertex the distance is less or equal than $\varepsilon$ then the algorithm stops. If a vertex of $\mathcal{P}_0$ is found to have a distance larger than $\varepsilon$, then the set $\mathcal{P}_0$ is updated. To check for each vertex $v \in \mathcal{P}_0$ the distance to $\mathcal{P}_i$, the Pascoletti-Serafini scalarization $(P_2(v))$ is considered. Since $v$ is always chosen among the vertices of $\mathcal{P}_0$, we can, using [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"}, rewrite $(P_2(v))$ as $$\begin{aligned}
\min \{z \in \mathbb R\; | \; x_{-i}=v_{1:a_i}, f_i(x)-z-v_{m_i} \leq 0, Ax \leq b \}.\end{aligned}$$ Note that since $\mathbb{X}$ is a polyhedral set, the constraint $f_i(x)-z-v_{m_i} \leq 0$ is the only nonlinear constraint for $(P_2(v))$. The Lagrange dual problem to $(P_2(v))$ is given by $$\begin{aligned}
{\tag{$D_2(v)$}}
\max \{ \inf_{x \in \mathbb{X}} \{u^\top(Ax-b)+w^\top(x_{-i},-x_{-i},f_i(x))^\top \}-w^\top v\; | \; u \geq 0,\; w^\top \bar{c}_i=1, \; w \geq 0\}.\end{aligned}$$ To update the outer approximation $\mathcal{P}_0$, optimal solutions of both $(P_2(v))$ and $(D_2(v))$ are required. In the following lemma, the existence of optimal solutions for both problems and strong duality is proven.
**Lemma 13**. *[\[PSduality\]]{#PSduality label="PSduality"} Let Assumptions [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} and [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} be satisfied. Let $\mathcal{P}_0$ be an outer approximation of the image set $\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\}$ of [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} satisfying [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"}. For every $v \in \mathcal{P}_0$ there exist optimal solutions $(x^v,z^v)$ and ($u^v,w^v)$ to $(P_2(v))$ and $(D_2(v))$ respectively, and the optimal values coincide.*
*Proof.* Consider $v \in \mathcal{P}_0$. By [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"} we know that $v_{1:a_i} \in \mathop{\mathrm{conv}}\{\Bar{x}^1_{-i},...,\Bar{x}^k_{-i} \}$. Thus, there are $\lambda^1,...,\lambda^k \geq 0$ with $\sum_{j=1}^k \lambda^j=1$ such that $v_{1:a_i}=\sum_{j=1}^k \lambda^j \Bar{x}^j_{-i}$. Set $\bar{x}=\sum_{j=1}^k \lambda^j \Bar{x}^j\in\mathbb{X}$ and choose $\bar{z} \in \mathbb R$ such that $f_i(\bar{x})-\bar{z}-v_{m_i}\leq 0$. Then $(\bar{x},\bar{z})$ is feasible for $(P_2(v))$. Since the feasible set of $(P_2(v))$ is compact (by the same arguments as in the proof of [@LRU14 Proposition 4.4]), an optimal solution exists. In order to guarantee strong duality we need to find a feasible element of $(P_2(v))$ that satisfies the weak Slater condition, i.e. for all nonaffine constraints the inequality constraint needs to be strict. However, since the only nonaffine constraint of $(P_2(v))$ is $f_i({x})-{z}-v_{m_i}\leq 0$ we can always find $z^* \in \mathbb R$ such that $f_i(\bar{x})-z^*-v_{m_i}<0$. Then $(\bar{x},z^*)$ satisfies the weak Slater condition which implies strong duality. ◻
This result guarantees (by replacing, in the proof of [@LRU14 Theorem 4.9], the statement of [@LRU14 Proposition 4.4] by Lemma [\[PSduality\]](#PSduality){reference-type="ref" reference="PSduality"}) that the iteration phase works correctly under the changes done in the initialization phase. Thus, the modification of [@LRU14 Algorithm 1] proposed here outputs, at termination, a finite set $\bar{\mathcal{X}}$ such that $\mathcal{P}^{In}_i:=\mathop{\mathrm{conv}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \Bar{\mathcal{X}} \}+\mathbb R^{m_i}_+$ satisfies [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"}.
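To make the scalarization step concrete, the following is a minimal sketch of solving the rewritten problem $(P_2(v))$ for one player with a quadratic cost. The modeling layer cvxpy, the particular cost and the box constraints are our own illustrative choices and are not prescribed by [@LRU14]; the Lagrange multipliers reported by the solver are the dual information from which a solution of $(D_2(v))$ can be recovered.

```python
import cvxpy as cp
import numpy as np

# Hedged sketch: solve (P_2(v)) for player i = 1 in a two-player game with
# scalar strategies, so x = (x1, x2), x_{-1} = x2, a_1 = 1 and m_1 = 3.
# The quadratic cost f_1 (with PSD matrix Q) and the box X = [-1, 1]^2,
# encoded as A x <= b, are illustrative placeholders.

Q = np.array([[1.0, -1.0], [-1.0, 2.0]])   # f_1(x) = 0.5*x'Qx + q'x
q = np.array([-0.5, 0.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)

def solve_P2(v):
    # v must have v[0] in the projection of X onto x_{-i} for feasibility
    x, z = cp.Variable(2), cp.Variable()
    f1 = 0.5 * cp.quad_form(x, Q) + q @ x
    cons = [x[1] == v[0],            # x_{-i} = v_{1:a_i}
            f1 - z - v[2] <= 0,      # the only nonlinear constraint
            A @ x <= b]              # x in X
    prob = cp.Problem(cp.Minimize(z), cons)
    prob.solve()
    duals = [c.dual_value for c in cons]   # multipliers; (D_2(v)) data
    return x.value, z.value, duals

x_opt, z_opt, duals = solve_P2(np.array([0.0, 0.0, -1.0]))
print(z_opt)   # minimal shift of v along c_bar needed to reach the upper image
```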
**Remark 14**. *[\[rem1\]]{#rem1 label="rem1"} Denote the elements of the finite set $\Bar{\mathcal{X}}$ by $\{{x}^1,...,{x}^s\}$ for $s\in\mathbb{N}$ and consider $\mathcal{P}^{In}_i:=\mathop{\mathrm{conv}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \Bar{\mathcal{X}} \}+\mathbb R^{m_i}_+$. Since for any vertex $\bar{x}$ of $\mathbb{X}$, we have, due to [\[barXupdate\]](#barXupdate){reference-type="eqref" reference="barXupdate"}, $(\bar{x}_{-i},-\bar{x}_{-i},f_i(\bar{x})) \in \mathcal{P}^{In}_i$, we conclude that for any $x \in \mathbb{X}$ it holds $x_{-i} \in \mathop{\mathrm{conv}}\{{x}^1_{-i},...,{x}^s_{-i}\}$. As such, for any Pareto optimal element $x^*$ of [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} we can express $x^*_{-i}$ via convex combinations of ${x}^1_{-i},...,{x}^s_{-i}$. This is necessary to prove the correctness of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} proposed in Subsection [\[Algosection\]](#Algosection){reference-type="ref" reference="Algosection"}.*
## Computing the maximal efficient faces {#sec:maxefficientFaces}
In Subsection [4.1](#subsec:mod){reference-type="ref" reference="subsec:mod"} we obtained a polyhedral $\varepsilon$-approximation $\mathcal{P}^{In}_i$ of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} satisfying [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"} for the direction $\bar{c}_i$. Note that the boundary of $\mathcal{P}^{In}_i$ is the set of all weakly minimal elements of $\mathcal{P}^{In}_i$. The next step is to sort out all weakly minimal elements of $\mathcal{P}^{In}_i$ that are not minimal. This addresses problem (i) above. To do so, we apply the concept of maximal efficient faces in vector optimization. For some general multi-objective problem of the form [\[eq:VOPg\]](#eq:VOPg){reference-type="eqref" reference="eq:VOPg"}, a face $F$ of $\mathbb{X}$ is called efficient if it contains Pareto optimal elements only. An efficient face $F$ of $\mathbb{X}$ is called maximal efficient if there is no efficient face $G$ of $\mathbb{X}$ such that $F \subsetneq G$. Now consider the following linear multi-objective optimization problem $$\begin{aligned}
\label{faces}
\min\{y \; | \; y \in \mathcal{P}_i^{In}\}\end{aligned}$$ with ordering cone $\mathbb R^{m_i}_{+}$. One can easily see that the union of all maximal efficient faces of [\[faces\]](#faces){reference-type="eqref" reference="faces"} equals the set of minimal elements of $\mathcal{P}^{In}_i$ since $\mathcal{P}^{In}_i$ is the upper image of [\[faces\]](#faces){reference-type="eqref" reference="faces"}. As this is now a linear problem due to the polyhedral structure of $\mathcal{P}^{In}_i$, one can use existing methods, see e.g. [@Armand93; @VanTu17; @tohidi2018adjacency], to compute all maximal efficient faces $F^1,...,F^{k_i}$ of $\mathcal{P}_i^{In}$, where $k_i\in\mathbb{N}$.
**Remark 15**. *[\[rem:maxfacesbd\]]{#rem:maxfacesbd label="rem:maxfacesbd"} Note that any maximal efficient face $F$ for problem [\[faces\]](#faces){reference-type="eqref" reference="faces"} is a polytope: Since $F$ is a face of the polyhedral feasible set $\mathcal{P}^{In}_i$ it is also a polyhedron. Assume now that $F$ is not bounded, i.e. $F=\mathop{\mathrm{conv}}V+\mathop{\mathrm{cone}}D$ where $V \subseteq \mathcal{P}^{In}_i$ and $D \subseteq \mathbb R^{m_i}_+ \setminus \{0\}$ for finite sets $V$ and $D$. Then there is some $y \in F$ with $y=\Bar{y}+d$ where $\Bar{y} \in \mathop{\mathrm{conv}}V$ and $d \in \mathbb R^{m_i}_+ \setminus \{0\}$. Then it must be that $\Bar{y} \leq y$ and $\Bar{y} \neq y$ which is a contradiction to $y$ being Pareto optimal for problem [\[faces\]](#faces){reference-type="eqref" reference="faces"}.*
## Approximate the set of Pareto optimal points {#sec:Proj}
The next step is to approximate the set of Pareto optimal points of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. In detail, we want to approximate the set of preimages whose images lie below the maximal efficient faces $F^j$, $j=1,...,k_i$, computed in Section [4.2](#sec:maxefficientFaces){reference-type="ref" reference="sec:maxefficientFaces"}, yielding a set of $\epsilon$-Pareto optimal points $X_i$ satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"}.
The aim is to compute $$\begin{aligned}
\label{cp}
\bar{X}^j_i:=\{x \in \mathbb{X}\; | \; \exists y \in F^j: (x_{-i},-x_{-i},f_i(x)) \leq_{\mathbb R^{m_i}_{+}} y\}\end{aligned}$$ for each maximal efficient face $F^j$, where $j=1,...,k_i$. Note that this is a bounded convex projection problem as considered in [@kr22; @kr23] as it is of the form: $\text{compute } \{ x \in \mathbb{X} \; | \; \exists y \in F^j : (x,y) \in S \}$ for $S=\{(x,y)\in \mathbb{X}\times F^j \; | \; (x_{-i},-x_{-i},f_i(x)) \leq_{\mathbb R^{m_i}_{+}} y\}$. The set $S$ is bounded due to Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} and Remark [\[rem:maxfacesbd\]](#rem:maxfacesbd){reference-type="ref" reference="rem:maxfacesbd"}. As the set $S$ is convex (and not polyhedral in general), the sets $\bar{X}^j_i$ cannot be computed exactly, but can only be approximated by the methods in e.g. [@SZC18; @LZS21; @kr22] or [@kr23 Algorithm 5.1]. Note that [@SZC18; @LZS21; @kr22] solve an associated convex multi-objective problem in dimension $d+1$ for $d=\sum_{j=1}^N n_j$, whereas [@kr23 Algorithm 5.1] works in dimension $d$. For a fixed error level $\varepsilon_2>0$ one obtains a finite set $\hat{X}^j_i \subseteq\mathbb{X}$ satisfying $$\begin{aligned}
\label{innerproj}
\mathop{\mathrm{conv}}\hat{X}^j_i \subseteq\bar{X}^j_i\subseteq \mathop{\mathrm{conv}}\hat{X}^j_i+B_{\varepsilon_2},
\end{aligned}$$ see e.g. [@kr22 Theorem 3.11] and [@kr23 Theorem 5.4], where $B_{\varepsilon_2}$ is a closed $\varepsilon_2$-ball around the origin (in the $L_1$ norm). In detail, [@kr23 Algorithm 5.1] outputs a finite $\varepsilon_2$-solution $\bar S\subseteq S$ of the bounded convex projection problem [\[cp\]](#cp){reference-type="eqref" reference="cp"} such that $\hat{X}^j_i:=\{x \in \mathbb{X} \; | \; (x,y)\in \bar S\}$ (i.e. the collection of the $x$ components of the finitely many vectors $(x,y)\in \bar S$) satisfies [\[innerproj\]](#innerproj){reference-type="eqref" reference="innerproj"}.
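While the computation of the sets $\bar{X}^j_i$ is delegated to the cited projection algorithms, the defining condition of $\bar{X}^j_i$ is easy to check for a single candidate point: since $F^j$ is a polytope, membership reduces to a small convex feasibility problem over the convex weights of its vertices. The following sketch, using cvxpy and purely illustrative data, is meant only to make this condition concrete and is not a substitute for [@SZC18; @LZS21; @kr22; @kr23].

```python
import cvxpy as cp
import numpy as np

# Hedged sketch: check whether a candidate x in X lies in
#   X_bar^j_i = {x in X | exists y in F^j : (x_{-i}, -x_{-i}, f_i(x)) <= y},
# with the polytope F^j given by its vertices (rows of F_vertices).
# Two players with scalar strategies and a made-up face serve as placeholders.

def in_Xbar(x, f_i, minus_i, F_vertices, tol=1e-8):
    x = np.asarray(x, dtype=float)
    g = np.concatenate([x[minus_i], -x[minus_i], [f_i(x)]])  # image point of x
    lam = cp.Variable(F_vertices.shape[0], nonneg=True)      # weights on F^j
    y = F_vertices.T @ lam
    prob = cp.Problem(cp.Minimize(0), [cp.sum(lam) == 1, y + tol >= g])
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

f1 = lambda x: 0.5 * x[0] ** 2 - x[0] * (x[1] + 0.5) + x[1] ** 2
F = np.array([[-0.25, 0.25, f1([0.25, -0.25])],   # two image points spanning
              [ 0.00, 0.00, f1([0.50,  0.00])]])  # an illustrative "face"
print(in_Xbar([0.25, -0.25], f1, minus_i=[1], F_vertices=F))   # True
```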
We will show in Lemma [\[sandwich\]](#sandwich){reference-type="ref" reference="sandwich"} below that the union of the sets $\bar{X}^j_i$ from the convex projection satisfies [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"} with respect to the error level $\varepsilon_1$. As the convex projections $\bar{X}^j_i$ cannot be computed exactly, we will then show that the union of the approximate sets, i.e., $$\begin{aligned}
X_i:=\bigcup_{j=1}^{k_i} ( \mathop{\mathrm{conv}}\hat{X}^j_i+B_{\varepsilon_2})\cap \mathbb{X}\end{aligned}$$ is the desired set of $\epsilon$-Pareto optimal points of problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"} satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"} for an adapted error level $\epsilon$, that involves the two error levels $\varepsilon_1, \varepsilon_2>0$ and the Lipschitz constant of the function $f_i$. Hence the set $$X=\bigcap_{i=1}^N X_i$$ will be the desired set yielding [\[eq:sandwich\]](#eq:sandwich){reference-type="eqref" reference="eq:sandwich"}: $$\begin{aligned}
\operatorname{NE}(f,\mathbb X) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb X).\end{aligned}$$
## Algorithm and main result
[\[Algosection\]]{#Algosection label="Algosection"} We are now ready to combine the three steps described in the last subsections, present our algorithm, and prove the main result of this paper.
**Algorithm 16**. *[\[alg1\]]{#alg1 label="alg1"}*
Input
: *convex cost functions $f_1,...,f_N$ and shared polyhedral constraint set $\mathbb{X}$ satisfying Assumptions [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} and [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"}, approximation levels $\varepsilon_1, \varepsilon_2>0$*
For each
: *$i=1,...,N$ do*
step 1:
: *Consider the convex multi-objective optimization problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}: $$\min\{(x_{-i},-x_{-i},f_i(x)) \; | \; x \in \mathbb{X}\}.$$ Compute an inner approximation $\mathcal{P}^{In}_i$ such that [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"} is satisfied with approximation level $\varepsilon_1>0$ using [@LRU14 Algorithm 1] with the modified initialization using [\[outerapprupdate\]](#outerapprupdate){reference-type="eqref" reference="outerapprupdate"} and [\[barXupdate\]](#barXupdate){reference-type="eqref" reference="barXupdate"} as described in Section [4.1](#subsec:mod){reference-type="ref" reference="subsec:mod"}.*
step 2:
: *Consider the linear multi-objective optimization problem [\[faces\]](#faces){reference-type="eqref" reference="faces"}: $$\begin{aligned}
\min\{y \; | \; y \in \mathcal{P}_i^{In}\}\end{aligned}$$ with ordering cone $\mathbb R^{m_i}_{+}$ and compute the maximal efficient faces $F^1,...,F^{k_i}$ (e.g. with algorithm from [@Armand93; @VanTu17; @tohidi2018adjacency], see also Section [4.2](#sec:maxefficientFaces){reference-type="ref" reference="sec:maxefficientFaces"}).*
step 3:
: *For each $j=1,...,k_i$ and given approximation level $\varepsilon_2>0$ compute a finite $\varepsilon_2$-solution $\bar S$ of the convex projection problem [\[cp\]](#cp){reference-type="eqref" reference="cp"}: $$\begin{aligned}
\text{compute } \bar{X}^j_i:=\{x \in \mathbb{X}\; | \; \exists y \in F^j: (x_{-i},-x_{-i},f_i(x)) \leq_{\mathbb R^{m_i}_{+}} y\}.\end{aligned}$$ This can be done by e.g. [@SZC18; @LZS21; @kr22] or [@kr23 Algorithm 5.1], see also Section [4.3](#sec:Proj){reference-type="ref" reference="sec:Proj"}. Then, $\hat{X}^j_i:=\{x \in \mathbb{X} \; | \; (x,y)\in \bar S\}$ yields an $\varepsilon_2$-approximation of the set $\bar{X}^j_i$ in the sense of [\[innerproj\]](#innerproj){reference-type="eqref" reference="innerproj"}.*
*Set $$\begin{aligned}
X_i=\bigcup_{j=1}^{k_i} ( \mathop{\mathrm{conv}}\hat{X}^j_i+B_{\varepsilon_2})\cap \mathbb{X}.
\end{aligned}$$*
Output:
: *$X=\bigcap_{i=1}^N X_i$*
We can now state the main theorem of this paper.
**Theorem 17**. *[\[sandwicheps\]]{#sandwicheps label="sandwicheps"} Let Assumptions [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} and [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} be satisfied. Let $X \subseteq \mathbb R^{\sum_{i = 1}^N n_i}$ be the set computed by Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}. Then it holds $$\begin{aligned}
\label{eq:sandwichth}
\operatorname{NE}(f,\mathbb X) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb X)\end{aligned}$$ for $\epsilon=\varepsilon_1+2L \varepsilon_2$.*
In order to prove Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"} we need the following lemmata. The first one shows that the union of the exact sets $\bar{X}^j_i$ of the convex projection [\[cp\]](#cp){reference-type="eqref" reference="cp"} satisfies [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"} with respect to the error level $\varepsilon_1$.
**Lemma 18**. *[\[sandwich\]]{#sandwich label="sandwich"} Let Assumptions [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} and [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} be satisfied. For any $i=1,...,N$ it holds $$\begin{aligned}
\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\} \subseteq \bigcup_{j=1}^{k_i} \bar{X}^j_i \subseteq \varepsilon_1\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\}.
\end{aligned}$$*
*Proof.* For the left inclusion consider $x^* \in \mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\}$.\
Let $\mathcal{P}^{In}_i=\mathop{\mathrm{conv}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \Bar{\mathcal{X}} \}+\mathbb R^{m_i}_+$ for $\Bar{\mathcal{X}}=\{{x}^1,...,{x}^s\}\subseteq \mathbb{X}$ with $s\in\mathbb{N}$ be the inner approximation of $\mathcal{P}_i$ computed as described in Section [4.1](#subsec:mod){reference-type="ref" reference="subsec:mod"}. Due to Remark [\[rem1\]](#rem1){reference-type="ref" reference="rem1"}, we know there are $\lambda^1,...,\lambda^s \geq 0$ with $\sum_{l=1}^s \lambda^l=1$ such that $x^*_{-i}=\sum_{l=1}^s \lambda^l {x}^l_{-i}$. Thus, $(x^*_{-i},-x^*_{-i},\sum_{l=1}^s \lambda^l f_i({x}^l)) \in \mathcal{P}^{In}_i$. Then there is $\hat{y}$ such that $(x^*_{-i},-x^*_{-i},\hat{y})$ is minimal in $\mathcal{P}^{In}_i$: Assume there would be no such $\hat{y}$. Then for all $\alpha>0$ it would be $(x^*_{-i},-x^*_{-i},\sum_{l=1}^s \lambda^l f_i({x}^l))-\alpha \bar{c}_i \in \mathcal{P}^{In}_i$. In that case $\mathcal{P}^{In}_i$ would contain the line $\{(x^*_{-i},-x^*_{-i},\sum_{l=1}^s \lambda^l f_i({x}^l))+\alpha \bar{c}_i\; | \; \alpha \in \mathbb R\}$ which is a contradiction to $\mathcal{P}^{In}_i$ being pointed. Thus, there exists $\hat{y}$ such that $(x^*_{-i},-x^*_{-i},\hat{y})$ is minimal in $\mathcal{P}^{In}_i$ and therefore exists a maximal efficient face $F^j$ of $\mathcal{P}^{In}_i$ with $(x^*_{-i},-x^*_{-i},\hat{y}) \in F^j$. So there are $\hat{\lambda}^1,...,\hat{\lambda}^s \geq 0$ with $\sum_{l=1}^s \hat{\lambda}^l=1$ such that $(x^*_{-i},-x^*_{-i},\hat{y})=(\sum_{l=1}^s \hat{\lambda}^l {x}^l_{-i},-\sum_{l=1}^s \hat{\lambda}^l {x}^l_{-i},\sum_{l=1}^s \hat{\lambda}^l f_i({x}^l))$. Since $f_i(x^*) \leq f_i(\sum_{l=1}^s \hat{\lambda}^l {x}^l) \leq \hat{y}$ it is $x^* \in \bar{X}^j_i$.
For the right inclusion let $x^* \in \bigcup_{j=1}^{k_i} \bar{X}^j_i$. Then $x^* \in \bar{X}^j_i$ for some $j$, i.e., $(x^*_{-i},-x^*_{-i},f_i(x^*)) \leq y$ for some $y \in F^j$. Note that $y$ is a minimal element in $\mathcal{P}^{In}_i$. So, $(x^*_{-i},-x^*_{-i},f_i(x^*)-\varepsilon_1) \leq y-\varepsilon_1\bar{c}_i$ and $y-\varepsilon_1\bar{c}_i$ is a minimal element in $\mathcal{P}^{Out}_i:=\mathcal{P}^{In}_i-\varepsilon_1\{\bar{c}_i\}$, which contains $\mathcal{P}_i$ by [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"}. This means either $(x^*_{-i},-x^*_{-i},f_i(x^*)-\varepsilon_1) \notin \mathcal{P}_i$ or $(x^*_{-i},-x^*_{-i},f_i(x^*)-\varepsilon_1)$ is a minimal element in $\mathcal{P}_i$. In any case there is no $x \in \mathbb X$ with $x_{-i}=x^*_{-i}$ and $f_i(x)+\varepsilon_1<f_i(x^*)$. ◻
As the convex projections $\bar{X}^j_i$ cannot be computed exactly, we will now prove the corresponding result for the approximate sets $X_i=\bigcup_{j=1}^{k_i} ( \mathop{\mathrm{conv}}\hat{X}^j_i+B_{\varepsilon_2})\cap \mathbb{X}$, where the error level has to be adjusted by the error of the approximate convex projection $\varepsilon_2$. As this error is made in the preimage space, it needs to be translated into the image space of the convex multi-objective optimization problem [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. For this step the Lipschitz continuity of $f_i$ implied by Assumption [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} is necessary.
**Lemma 19**. *[\[epsbound\]]{#epsbound label="epsbound"} Let Assumptions [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} and [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} be satisfied. For any $i=1,...,N$ it holds $$\begin{aligned}
\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\} \subseteq X_i \subseteq \epsilon\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\}
\end{aligned}$$ for $\epsilon=\varepsilon_1+2L \varepsilon_2$.*
*Proof.* The first inclusion follows from Lemma [\[sandwich\]](#sandwich){reference-type="ref" reference="sandwich"} together with the right inclusion in [\[innerproj\]](#innerproj){reference-type="eqref" reference="innerproj"} and the fact that all Pareto optimal points lie in $\mathbb{X}$. For the second inclusion, fix $i\in\{1,...,N\}$ and let $\Bar{x} \in X_i$. Then, there is some $j\in\{1,...,k_i\}$ such that $\Bar{x} \in \mathop{\mathrm{conv}}\hat{X}^j_i+B_{\varepsilon_2}$. So, $\bar{x}=x^*+b^*$ for $x^* \in \mathop{\mathrm{conv}}\hat{X}^j_i$ and $\left\Vert b^*\right\Vert \leq \varepsilon_2$. Since $\mathop{\mathrm{conv}}\hat{X}^j_i$ is an inner approximation of $\bar{X}^j_i$, see [\[innerproj\]](#innerproj){reference-type="eqref" reference="innerproj"}, it follows by Lemma [\[sandwich\]](#sandwich){reference-type="ref" reference="sandwich"} that $x^*$ is $\varepsilon_1$-Pareto optimal for [\[eq:vop\]](#eq:vop){reference-type="eqref" reference="eq:vop"}. Thus, $f_i(x^*) \leq f_i(x)+\varepsilon_1$ for any $x \in \mathbb{X}$ with $x_{-i}=x^*_{-i}$ and $f_i(x) \leq f_i(x^*)$. Now let $x \in \mathbb X$ with $x_{-i}=\bar{x}_{-i}=x^*_{-i}+b^*_{-i}$ and $f_i(x) \leq f_i(\bar{x})$. It holds $$\begin{aligned}
f_i(x)&+\varepsilon_1+2L\varepsilon_2-f_i(\bar{x})=f_i(x_i,x^*_{-i}+b^*_{-i})+\varepsilon_1+2L \varepsilon_2-f_i(x^*+b^*)\\
&=f_i(x_i,x^*_{-i}+b^*_{-i})-f_i(x_i,x^*_{-i})+L \varepsilon_2+f_i(x^*)-f_i(x^*+b^*) + L \varepsilon_2+f_i(x_i,x^*_{-i})-f_i(x^*)+\varepsilon_1 \geq 0,
\end{aligned}$$ where we used that $|f_i(x_i,x^*_{-i}+b^*_{-i})-f_i(x_i,x^*_{-i})| \leq L \varepsilon_2$ and $|f_i(x^*)-f_i(x^*+b^*)| \leq L \varepsilon_2$ due to Lipschitz continuity of $f_i$ as well as $f_i(x_i,x^*_{-i})+\varepsilon_1 \geq f_i(x^*)$ due to $x^*$ being $\varepsilon_1$-Pareto optimal. Thus, $f_i(x)+(\varepsilon_1 +2L\varepsilon_2) \geq f_i(\bar{x})$ which shows that $\Bar{x} \in (\varepsilon_1 +2L\varepsilon_2)\mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \;x \in \mathbb{X}\}$. ◻
We are now ready to prove Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"}.
*Proof of Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"}.* Let us first prove the left hand side inclusion. Let $x^* \in \operatorname{NE}(f,\mathbb{X})$. By Theorem [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"}, it follows that $x^* \in \bigcap_{i=1}^N \mathop{\mathrm{argMin}}\{(x_{-i},-x_{-i},f_i(x))\; | \; x \in \mathbb{X}\}$. Then, for each $i=1,...,N$ there is by Lemma [\[sandwich\]](#sandwich){reference-type="ref" reference="sandwich"} a $j=j(i)$ such that $x^* \in \bar{X}^j_i$. This implies, by [\[innerproj\]](#innerproj){reference-type="eqref" reference="innerproj"}, $x^* \in \mathop{\mathrm{conv}}\hat{X}^j_i+B_{\varepsilon_2}$. As $x^* \in \operatorname{NE}(f,\mathbb{X})$ also implies $x^* \in \mathbb{X}$, this proves the claim by the definition of the sets $X_i$ and $X$. Let us now prove the right hand side inclusion. Let $x^* \in X$, thus $x^* \in X_i$ for all $i=1,...,N$. By Lemma [\[epsbound\]](#epsbound){reference-type="ref" reference="epsbound"} and Theorem [Theorem 7](#thm:nash-approx){reference-type="ref" reference="thm:nash-approx"} the claim follows. ◻
Let us conclude the section with two further remarks.
**Remark 20**. *[\[rem:inconstr\]]{#rem:inconstr label="rem:inconstr"} Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"} also holds if we replace Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} by the following condition, keeping Assumption [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} unchanged: Assume each player $i$ has an independent constraint set $\mathbb{X}_i=\{x_i \in \mathcal{X}_i\; | \; g_i(x_i) \leq 0\}$, where $g_i: \mathcal{X}_i \to \mathbb R$ is a convex function and $\mathbb{X}_i$ has nonempty interior. Thus, the shared constraint set is $\mathbb{X}=\mathbb{X}_1 \times ... \times \mathbb{X}_N=\{x \in \prod_{j = 1}^N \mathcal{X}_j\; | \; g_1(x_1) \leq 0,...,g_N(x_N) \leq 0\}$. Constraints of this type are also frequently considered in the literature: they are called 'orthogonal constraint sets' in [@rosen65] and 'classical games' in [@braouezec2021economic].*
*When replacing the polyhedral constraint Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} by the independent constraint assumption $\mathbb{X}=\mathbb{X}_1 \times ... \times \mathbb{X}_N$, one needs to adapt step $1$ of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} as follows*
step 1:
: *Compute a polyhedral outer approximation $P^{Out}_{-i}$ of $\mathbb{X}_{-i}:=\mathbb{X}_1 \times ... \times \mathbb{X}_{i-1} \times \mathbb{X}_{i+1} \times ... \times \mathbb{X}_N$. Denote the vertices of $P^{Out}_{-i}$ by $p^1,...,p^{l_i}$. Consider the convex multi-objective optimization problem $$\begin{aligned}
\min \{(x_{-i},-x_{-i},f_i(x))\; | \;x_i \in \mathbb{X}_i,\; x_{-i} \in P^{Out}_{-i} \}.\end{aligned}$$ Compute an inner approximation $\mathcal{P}^{In}_i$ of its upper image such that [\[innerappr\]](#innerappr){reference-type="eqref" reference="innerappr"} is satisfied with approximation level $\varepsilon_1>0$ using [@LRU14 Algorithm 1] with the modified initialization, similar to Section [4.1](#subsec:mod){reference-type="ref" reference="subsec:mod"}, but using $$\begin{aligned}
\mathcal{P}_0&=\mathcal{P}_0 \cap \{y \in \mathbb R^{m_i}\; | \; y_{1:a_i}=-y_{a_i+1:m_i-1} \} \cap \{y \in \mathbb R^{m_i}\; | \; y_{1:a_i} \in \mathop{\mathrm{conv}}\{p^1_{-i},...,p^{l_i}_{-i} \} \}
\\
\Bar{\mathcal{X}}&=\Bar{\mathcal{X}} \cup \{(x_i,p^j)\; | \;j=1,...,{l_i}\},\end{aligned}$$ where $x_i \in \mathbb{X}_i$ is chosen arbitrarily.*
*The rest of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} remains unchanged. The set $X$ outputted by this modification of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} will satisfy $$\begin{aligned}
\operatorname{NE}(f,\mathbb X) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb X),\end{aligned}$$ for $\epsilon=\varepsilon_1+2L \varepsilon_2$ for a game with an independent constraint set as above and satisfying Assumption [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"}. The proofs are in analogy to the proof of Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"}. The key is that the independent constraint assumption and the above stated modification of step $1$ of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} ensure that $\mathcal{P}_i^{In}$ has the same property as in Remark [\[rem1\]](#rem1){reference-type="ref" reference="rem1"}.*
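Concerning the polyhedral outer approximation $P^{Out}_{-i}$ required in the modified step $1$: for a concrete smooth set such as a Euclidean ball it can be obtained by intersecting finitely many tangent halfspaces. The following is a minimal sketch for a disc; the radius, the number of facets and the set itself are illustrative choices, and other convex sets would require their own tangent construction.

```python
import numpy as np

# Hedged sketch: vertices p^1,...,p^k of a circumscribed regular k-gon, i.e. a
# polyhedral outer approximation of the disc {x in R^2 : ||x||_2 <= r} obtained
# from k tangent halfspaces (tangent points at angles 2*pi*j/k).

def circumscribed_polygon_vertices(r=1.0, k=16):
    # adjacent tangent lines meet at the bisecting angles, at distance
    # r / cos(pi/k) from the origin
    angles = 2.0 * np.pi * np.arange(k) / k + np.pi / k
    return (r / np.cos(np.pi / k)) * np.column_stack([np.cos(angles),
                                                      np.sin(angles)])

P_out_vertices = circumscribed_polygon_vertices()
# sanity check: every vertex satisfies each tangent halfspace u . x <= r
normals = np.column_stack([np.cos(2.0 * np.pi * np.arange(16) / 16),
                           np.sin(2.0 * np.pi * np.arange(16) / 16)])
assert np.all(P_out_vertices @ normals.T <= 1.0 + 1e-9)
```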
**Remark 21**. *Note however, that for a more general constraint set $\mathbb{X}$ than that considered in Assumption [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} or Remark [\[rem:inconstr\]](#rem:inconstr){reference-type="ref" reference="rem:inconstr"}, the property stated in Remark [\[rem1\]](#rem1){reference-type="ref" reference="rem1"} is not enough to prove that Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} outputs a set $X$ that satisfies [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"}. In the independent constraint case of Remark [\[rem:inconstr\]](#rem:inconstr){reference-type="ref" reference="rem:inconstr"}, the set $\Bar{\mathcal{X}}:=\{x^1,...,x^s\}$ that is computed in step 1 of the algorithm has for each $i$ the following two properties:*
- *$\bar{\mathcal{X}}_{-i}:=\mathop{\mathrm{conv}}\{x^1_{-i},...,x^s_{-i} \} \supseteq \mathbb{X}_{-i}:=\{ x_{-i} \in \prod_{j \neq i} \mathcal{X}_j\; | \; \exists \; x_i \in \mathcal{X}_i: (x_i,x_{-i}) \in \mathbb{X}\}$*
- *$\bar{\mathcal{X}}_i:=\mathop{\mathrm{conv}}\{x^1_i,...,x^s_i\} \subseteq \mathbb{X}_i:=\{x_i \in \mathcal{X}_i\; | \;\exists \; x_{-i} \in \prod_{j \neq i} \mathcal{X}_j: (x_i,x_{-i}) \in \mathbb{X}\}$.*
*Due to the independent constraint sets, one obtains that for each $\Tilde{x}_i \in \bar{\mathcal{X}}_i$ and for each $\hat{x}_{-i} \in \Bar{\mathcal{X}}_{-i}$ it holds $(\Tilde{x}_i,\hat{x}_{-i}) \in \mathbb{X}$, which is needed in the proof of Lemma [\[sandwich\]](#sandwich){reference-type="ref" reference="sandwich"} under the assumption of Remark [\[rem:inconstr\]](#rem:inconstr){reference-type="ref" reference="rem:inconstr"}. However, this property can no longer be guaranteed if constraints are added that couple the strategies of different players.*
# Examples {#sec:ex}
We will start in Section [5.1](#sec:ill){reference-type="ref" reference="sec:ill"} with simple illustrative two player examples where the set of Nash equilibria, as well as the set of $\epsilon$-approximate Nash equilibria, can be computed explicitly. As such we directly investigate the sandwich principle [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"}. In Section [5.2](#sec:pollution){reference-type="ref" reference="sec:pollution"}, we will consider more complex examples from environmental economics involving two or three players. There, the set of Nash equilibria or $\epsilon$-approximate Nash equilibria is in general not known and Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} is used to compute a set $X$ that by Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"} is known to lie between the set of true Nash equilibria and the set of $\epsilon$-approximate Nash equilibria.
## Illustrative examples {#sec:ill}
**Example 22**. *[\[ex:1\]]{#ex:1 label="ex:1"} Consider a two player game, where each player chooses a real-valued strategy, thus $\mathcal{X}_1=\mathcal{X}_2=\mathbb R$. Let the shared constraint set be $\mathbb{X}=\{(x_1,x_2) \in \mathbb R^2\; | \; x_1,x_2 \in [-1,1] \}$. Clearly, the set $\mathbb{X}$ is a compact polyhedral set. Consider for player $1$ the convex cost function $f_1(x_1,x_2)=\frac{1}{2}x^2_1-x_1(x_2+\frac{1}{2})+x^2_2$ and for player $2$ the convex cost $f_2(x_1,x_2)=\frac{1}{2}x^2_2+x_1 x_2+x^2_1$. One can show that $f_1$ and $f_2$ are Lipschitz continuous with constant $L=3$ on $\mathbb{X}$. Thus, Assumptions [\[ass:convex\]](#ass:convex){reference-type="ref" reference="ass:convex"} and [\[asspolycase\]](#asspolycase){reference-type="ref" reference="asspolycase"} are satisfied. Due to the simple structure and low dimension of this example it is possible to find the set of Nash equilibria easily by hand. It consists of the unique Nash equilibrium $(\frac{1}{4},-\frac{1}{4})$. However, we will apply Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} to illustrate the sandwich result [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"} of Theorem [\[sandwicheps\]](#sandwicheps){reference-type="ref" reference="sandwicheps"} and thus find an approximation of the unique Nash equilibrium. We choose error levels $\varepsilon_1=0.01, \varepsilon_2=0.001$. Hence, the overall error is $\epsilon=\varepsilon_1+2L \varepsilon_2=0.016$. Due to the simple nature of the problem we are also able to formulate the exact set of $\epsilon$-approximate Nash equilibria for this example, which is given by $$\begin{aligned}
\epsilon\operatorname{NE}(f,\mathbb{X})=&\{(x_1,x_2) \in \mathbb R^2| x_1 \in [\frac{1}{4}-\sqrt{0.032},\frac{1}{4}], x_2 \in [-x_1-\sqrt{0.032},x_1-\frac{1}{2}+\sqrt{0.032}] \}\\
&\cup \{(x_1,x_2) \in \mathbb R^2| x_1 \in [\frac{1}{4},\frac{1}{4}+\sqrt{0.032}], x_2 \in [x_1-\frac{1}{2}-\sqrt{0.032},-x_1+\sqrt{0.032}] \}.\end{aligned}$$ Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} computes a set $X\subseteq \mathbb R^2$ satisfying $\operatorname{NE}(f,\mathbb{X}) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb{X})$ for $\epsilon=0.016$. The three sets are depicted in Figure [2](#fig:ex1alltogether){reference-type="ref" reference="fig:ex1alltogether"}. Furthermore, Figure [1](#figure1){reference-type="ref" reference="figure1"} shows the sets $X_i$ of both players $i=1,2$ satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"} for $\varepsilon_1$. Their intersection $X_1 \cap X_2$ is the desired set $X$ which fulfills [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"} for $\epsilon=0.016$.*
![Computed sets $X_1$ for player $1$ (blue) and $X_2$ for player $2$ (yellow) for Example [\[ex:1\]](#ex:1){reference-type="ref" reference="ex:1"}. The intersection for both players yields the set $X$ that satisfies $\operatorname{NE}(f,\mathbb{X}) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb{X})$ for $\epsilon=0.016$.](ex1_pareto.eps){#figure1 width="0.4\\linewidth"}
*[\[figure1\]]{#figure1 label="figure1"}*
![The *singleton* set $\operatorname{NE}(f,\mathbb{X})$ (red), the set $X$ computed by Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} (blue) and the set $\epsilon\operatorname{NE}(f,\mathbb{X})$ (yellow) for Example [\[ex:1\]](#ex:1){reference-type="ref" reference="ex:1"}.](ex1_alltogether.eps){#fig:ex1alltogether width="0.4\\linewidth"}
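As an independent consistency check of Example [\[ex:1\]](#ex:1){reference-type="ref" reference="ex:1"} (and not part of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}): since the unique equilibrium lies in the interior of $\mathbb{X}$, it solves the stacked unconstrained first-order conditions $\partial f_1/\partial x_1=0$ and $\partial f_2/\partial x_2=0$, which here form a small linear system. A minimal sketch:

```python
import numpy as np

# Hedged check for the equilibrium of this example: stack the first-order
# conditions x1 - x2 - 1/2 = 0 (player 1) and x1 + x2 = 0 (player 2); the box
# constraints are inactive at the solution, so the linear solve suffices here.
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])
b = np.array([0.5, 0.0])
print(np.linalg.solve(A, b))   # -> [ 0.25 -0.25], i.e. (1/4, -1/4)
```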
**Example 23**. *[\[ex:2\]]{#ex:2 label="ex:2"} Consider now the example provided within [@feinstein2022 Section 2]. This is a parametric game which permits unique, multiple, and infinite Nash equilibria depending on the parameter chosen.*
*Specifically, consider a two player game in which each player has real-valued strategies in the compact set $[0,1]$. Thus, the shared constraint set is a box-constraint $\mathbb{X}=\{(x_1,x_2) \in \mathbb R^2\; | \; x_1,x_2 \in [0,1] \}$. In order to introduce the convex cost functions, consider the compact parameter space $\mathcal{Y}:=[0,2]$. The cost functions for player $1$ and player $2$ are given by $f_1(x_1,x_2)=x_1(x_1-2yx_2)+x^2_2$ and $f_2(x_1,x_2)=(x_1-x_2)^2$ respectively, where $y \in \mathcal{Y}$. Note that the Lipschitz constant for this problem depends on the parameter $y$ and is given by $L_y=\max \{4,2+2(y^2+10^{-4})\}$. For each $y \in [0,2]$, a Nash equilibrium exists and the set of Nash equilibria depending on $y$ is of the form $$\begin{aligned}
\operatorname{NE}(f,\mathbb{X})= \begin{cases}
\{(0,0)\}, \hspace{0.2cm} &y<1\\
\{x \in [0,1]^2\; | \;x_1=x_2\}, \hspace{0.2cm} &y=1\\
\{(0,0), (1,1)\}, \hspace{0.2cm} &y>1.
\end{cases}\end{aligned}$$ For $y \in [0,1)$ we obtain a unique Nash equilibrium. If $y=1$ the set of Nash equilibria is a line. For $y \in (1,2]$ the set of Nash equilibria consists of two isolated points.*
*In order to demonstrate that our methods work independently of the set of Nash equilibria being a singleton, a finite set or an infinite set, we apply Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} for different values of $y$ given by $y \in \{0.5,1,1.5\}$. The chosen error levels are $\varepsilon_1=\varepsilon_2=0.001$. Figure [\[figure2\]](#figure2){reference-type="ref" reference="figure2"} shows, for both players, the computed sets $X_1$ and $X_2$ satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"}. The intersection $X_1 \cap X_2$ is the desired set $X$ satisfying [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"} for $\epsilon=0.018$ if $y=0.5$ or $y=1$, and $\epsilon=0.027$ if $y=1.5$. It is worth mentioning that the computed set $X$ is much smaller than the set $\epsilon\operatorname{NE}(f,\mathbb{X})$ in all three cases. This can be seen in detail in Figure [3](#fig:ex2alltogether){reference-type="ref" reference="fig:ex2alltogether"} for the case $y=1.5$.*
*[\[figure2\]]{#figure2 label="figure2"}*
![Example [\[ex:2\]](#ex:2){reference-type="ref" reference="ex:2"}: the set $\epsilon\operatorname{NE}(f,\mathbb{X})$ (yellow) as well as the computed set $X$ (blue) around the zoomed in regions of the true equilibria $(0,0)$ and $(1,1)$.](ex2x15_zoom.eps){#fig:ex2alltogether width="0.4\\linewidth"}
*[\[fig:ex2alltogether\]]{#fig:ex2alltogether label="fig:ex2alltogether"}*
**Example 24**. *[\[ex:3\]]{#ex:3 label="ex:3"} Consider the example provided within Figure 2 of [@rosen65]. For this two-player game, the convex cost functions are given by $f_1(x_1,x_2)=\frac{1}{2}x^2_1-x_1 x_2+x^2_2$ and $f_2(x_1,x_2)=x^2_2+x_1 x_2+x^2_1$ for $x_1,x_2\in\mathbb{R}$. A polyhedral shared constraint set is given by $\mathbb{X}=\{(x_1,x_2) \in \mathbb R^2\; | \; x_1+x_2 \geq 1, x_1 \geq 0, x_2 \geq 0\}$. Note that the set $\mathbb{X}$ is not compact. Nevertheless, this game possesses infinitely many Nash equilibria, and the equilibrium set is given by $\operatorname{NE}(f,\mathbb{X})=\{(x,1-x)\; | \;x \in [\frac{1}{2},1] \}$.*
*In order to apply Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, we consider the compact subset $\bar{\mathbb{X}}=\mathbb{X}\cap [0,2]^2$. Since $\operatorname{NE}(f,\mathbb{X}) \subseteq \bar{\mathbb{X}}$, there is no loss of information in using this smaller constraint set for the computations. The Lipschitz constant of the joint cost function $(f_1,f_2)$ over $\Bar{\mathbb{X}}$ is $L=8$. We apply Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} for chosen error levels $\varepsilon_1=0.01, \varepsilon_2=0.001$. Thus, the resulting set $X \subseteq \mathbb R^2$ satisfies [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"} for $\epsilon=0.027$. Figure [4](#figure3){reference-type="ref" reference="figure3"} shows the computed sets $X_1$ and $X_2$ satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"} for both players and their intersection $X$.*
*![Example [\[ex:3\]](#ex:3){reference-type="ref" reference="ex:3"}: Computed sets $X_1$ for player 1 (blue) and $X_2$ for player 2 (yellow). The intersection for both players yields the set $X$ that satisfies $\operatorname{NE}(f,\mathbb{X}) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb{X})$ for $\epsilon=0.027$.](ex3_pareto.eps "fig:"){#figure3 width="0.4\\linewidth"}*
*[\[figure3\]]{#figure3 label="figure3"}*
*The exact set of $\epsilon$-approximate Nash equilibria for this example is given by $$\begin{aligned}
\epsilon\operatorname{NE}(f,\mathbb{X})&=\{(x_1,x_2) \in \mathbb{R}^2_+ \; | \; x_1 + x_2 \geq 1 , \; x_1 \in [x_2 - \sqrt{x_2^2 + 2(v_1(x_2) + \epsilon)}, \\
&\quad x_2 + \sqrt{x_2^2 + 2(v_1(x_2) + \epsilon)}] , \; x_2 \in [1 - x_1 , \frac{1}{2}(-x_1 + \sqrt{x_1^2 + 4(v_2(x_1) + \epsilon)})]\}
\end{aligned}$$ for $v_1(x_2) = \frac{1}{2}(x_2 \vee (1-x_2))^2 - (x_2 \vee (1-x_2))x_2$ and $v_2(x_1) = (1-x_1)^+$. Figure [5](#fig:ex3_alltogether){reference-type="ref" reference="fig:ex3_alltogether"} shows the three sets of the sandwich result $\operatorname{NE}(f,\mathbb{X}) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb{X})$. Similar to before, the computed set $X$ is much smaller than the set $\epsilon\operatorname{NE}(f,\mathbb{X})$ and approximates the set of true Nash equilibria $\operatorname{NE}(f,\mathbb{X})$ quite well.*
![Example [\[ex:3\]](#ex:3){reference-type="ref" reference="ex:3"}: the set $\epsilon\operatorname{NE}(f,\mathbb{X})$ (yellow), the computed set $X$ (blue) and the set $\operatorname{NE}(f,\mathbb{X})$ (red) as well as a zoomed in region.](ex3_alltogether.eps){#fig:ex3_alltogether width="0.4\\linewidth"}
*[\[fig:ex3_alltogether\]]{#fig:ex3_alltogether label="fig:ex3_alltogether"}*
## Pollution control game {#sec:pollution}
Let us consider the $N$-player pollution control game from environmental economics studied in [@TZ2005]. There, each player represents a country that is, on the one hand, seeking to maximize the net revenue from the production of goods and services and, on the other hand, minimizing the environmental damage due to pollution. Pollution is assumed to be a proportional by-product of production; therefore, the net revenue (gross revenue minus production cost) of each player can be expressed as a function of its emissions. The strategy $x_i$ of country $i$, which is chosen from the space $\mathcal{X}_i = \mathbb{R}$, can be seen as the emissions of country $i$. Each country has its own revenue function $g_i: \mathcal{X}_i \to \mathbb R_+$ which is nonnegative and concave and depends only on the emissions of country $i$. However, the environmental damage due to emissions depends on the combined emissions of all countries, which is modeled by a convex damage cost function $d$. Each country tries to minimize its cost function $f_i$ (i.e. to maximize its total welfare, given as the difference between the net revenue and the damage cost), which is given by $f_i(x):=d(x_1+...+x_N)-g_i(x_i)$. Additionally, two types of constraints can be considered. Firstly, each country $i$ can face an environmental constraint $x_i \in [0,T_i]$, where $T_i \geq 0$ is an exogenously given upper bound on emissions, e.g. from an international treaty. Secondly, the countries can agree to jointly fix an upper bound on emissions, which is stated as a polyhedral constraint of the form $\sum_{i=1}^N \alpha_i x_i \leq T$, where $\alpha_i \in \mathbb R_+$ for $i=1,...,N,$ are positive coefficients and $T>0$ is an upper bound. In [@TZ2005], only pollution control games with a unique Nash equilibrium are considered. In contrast, with the help of Theorem [\[thm:nash\]](#thm:nash){reference-type="ref" reference="thm:nash"} (see also [@FR23 Th. 2.6]), we can characterize the set of Nash equilibria as the set of all Pareto solutions of a certain vector optimization problem, regardless of uniqueness or finiteness of the set of equilibria. Furthermore, with Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}, we are able to numerically approximate the set of Nash equilibria. In the following, we present examples of games with $2$ or $3$ players in which the set of Nash equilibria is nontrivial.
**Example 25**. *[\[ex:4\]]{#ex:4 label="ex:4"} Let us start with the case of two players. Assume each player has a linear revenue function given by $g_i(x_i)=\beta_i x_i$ with values $\beta_1=1.1$ and $\beta_2=2$. Consider a convex damage function $d(x)=\frac{1}{2}(x_1+x_2)^2$. Each player tries to minimize its cost $f_i(x)=d(x)-\beta_i x_i$. We assume that each player has its own box constraint as well as a joint upper bound on the sum of their strategies (emissions). This leads to a compact polyhedral shared constraint set given by $\mathbb{X}=\{(x_1,x_2) \in [0,1]^2| x_1+0.4 x_2 \leq 1\}$. The Lipschitz constant of the cost functions over $\mathbb{X}$ is $L=2.1$. We apply Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} with chosen error levels $\varepsilon_1=\varepsilon_2=0.01$. Then the set $X$ computed by Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} satisfies [\[eq:sandwich\]](#eq:sandwich){reference-type="eqref" reference="eq:sandwich"} for $\epsilon=0.052$. Figure [6](#figure4){reference-type="ref" reference="figure4"} shows the computed sets $X_1$ and $X_2$ satisfying [\[eq:sandwich2\]](#eq:sandwich2){reference-type="eqref" reference="eq:sandwich2"} for both players. The intersection is the set $X$. It contains a line segment as well as an isolated point, which together comprise the set of Nash equilibria $\{(\frac{1}{10},1)\} \cup \{(x_1,\frac{5}{2}[1-x_1]) \; | \; x_1 \in [\frac{14}{15},1]\}$.*
*![Example [\[ex:4\]](#ex:4){reference-type="ref" reference="ex:4"}: Computed sets $X_1$ for player 1 (blue) and $X_2$ for player 2 (yellow) as well as the true set $\operatorname{NE}(f,\mathbb{X})$ (red). The intersection for both players yields the set $X$ that satisfies $\operatorname{NE}(f,\mathbb{X}) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb{X})$ for $\epsilon=0.052$.](ex4_pareto_nash.eps "fig:"){#figure4 width="0.4\\linewidth"}*
*[\[figure4\]]{#figure4 label="figure4"}*
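As a quick numerical sanity check of the equilibrium set reported in Example [\[ex:4\]](#ex:4){reference-type="ref" reference="ex:4"}, the following brute-force sketch (independent of Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"}; the grid resolution and the tolerance are arbitrary illustrative choices) verifies that a candidate point admits no profitable unilateral deviation within $\mathbb{X}$:

``` python
import numpy as np

# Brute-force check of the Nash property for the two-player pollution game of
# Example 25: beta_1 = 1.1, beta_2 = 2, d(x) = (x_1 + x_2)^2 / 2 and
# X = {x in [0,1]^2 : x_1 + 0.4 x_2 <= 1}.  Grid size and tolerance are
# illustrative choices only.

beta = (1.1, 2.0)

def cost(i, x1, x2):
    # f_i(x) = d(x) - beta_i x_i
    return 0.5 * (x1 + x2) ** 2 - beta[i] * (x1 if i == 0 else x2)

def feasible(x1, x2):
    return 0 <= x1 <= 1 and 0 <= x2 <= 1 and x1 + 0.4 * x2 <= 1 + 1e-12

def is_nash(x1, x2, tol=1e-4, grid=np.linspace(0, 1, 2001)):
    assert feasible(x1, x2)
    # best feasible unilateral deviations of player 1 (in x1) and player 2 (in x2)
    dev1 = min(cost(0, z, x2) for z in grid if feasible(z, x2))
    dev2 = min(cost(1, x1, z) for z in grid if feasible(x1, z))
    return cost(0, x1, x2) <= dev1 + tol and cost(1, x1, x2) <= dev2 + tol

print(is_nash(1 / 10, 1))                        # isolated equilibrium (1/10, 1)
print(is_nash(14 / 15, 5 / 2 * (1 - 14 / 15)))   # endpoint of the equilibrium segment
print(is_nash(0.5, 0.5))                         # a non-equilibrium point, for contrast
```

With these illustrative choices, the first two candidates pass the check while the last one fails, as expected.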
**Example 26**. *[\[ex:5\]]{#ex:5 label="ex:5"} We now extend this pollution game to include a third player. As in the two-player case above, each player has a linear revenue function of the form $g_i(x_i)=\beta_i x_i$, where we set the values $\beta_1=1.1, \beta_2=1.3$ and $\beta_3=3.2$. Furthermore, we define a convex quadratic environmental damage function $d(x)=\frac{1}{2}(x_1+x_2+x_3)^2$. Each player tries to minimize its cost function $f_i(x)=d(x)-\beta_i x_i$. Let the constraint set for this game be polyhedral with $\mathbb{X}=\{(x_1,x_2,x_3) \in [0,1]^3| x_1+0.6 x_2+0.4 x_3 \leq 1\}$. The Lipschitz constant of the cost over $\mathbb{X}$ takes the value $L=9.8$. We apply Algorithm [\[alg1\]](#alg1){reference-type="ref" reference="alg1"} with error levels $\varepsilon_1=\varepsilon_2=0.01$ to compute a set $X$ which satisfies [\[eq:sandwichth\]](#eq:sandwichth){reference-type="eqref" reference="eq:sandwichth"} for $\epsilon=0.1325$. This set $X$ is visualized in Figure [7](#figure5){reference-type="ref" reference="figure5"}. Similarly to the two-player example, we obtain a line segment together with an isolated region.*
*![Example [\[ex:5\]](#ex:5){reference-type="ref" reference="ex:5"}: Set $X$ satisfying $\operatorname{NE}(f,\mathbb{X}) \subseteq X \subseteq \epsilon\operatorname{NE}(f,\mathbb{X})$ for $\epsilon=0.1325$.](ex5_X.eps "fig:"){#figure5 width="0.4\\linewidth"}*
*[\[figure5\]]{#figure5 label="figure5"}*
[^1]: Stevens Institute of Technology, School of Business, Hoboken, NJ 07030, USA, zfeinste\@stevens.edu.
[^2]: Vienna University of Economics and Business, Institute for Statistics and Mathematics, Vienna A-1020, AUT, nihey\@wu.ac.at.
[^3]: Vienna University of Economics and Business, Institute for Statistics and Mathematics, Vienna A-1020, AUT, brudloff\@wu.ac.at.
| arxiv_math | {
"id": "2310.04176",
"title": "Approximating the set of Nash equilibria for convex games",
"authors": "Zachary Feinstein, Niklas Hey, Birgit Rudloff",
"categories": "math.OC econ.GN q-fin.EC",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
In dimensions $d\geq 4$, by choosing a suitable scaling parameter, we show that the rescaled spatial SIR epidemic process converges to a super-Brownian motion with drift, thus complementing the previous results by Lalley [@L09] and Lalley-Zheng [@LZ10] on the convergence of SIR epidemics in $d\leq 3$. The scaling parameters we choose also agree with the corresponding asymptotics for the critical probability $p_c$ of the range-$R$ bond percolation on $\mathbb{Z}^d$ as $R\to \infty$.
author:
- Jieliang Hong
date: |
*Department of Mathematics, Southern University of Science and Technology,\
Shenzhen, China\
E-mail: `[email protected]`*
title: Rescaled SIR epidemic processes converge to super-Brownian motion in four or more dimensions
---
# Introduction
The celebrated works of Durrett-Perkins [@DP99] and Mueller-Tribe [@MT95] prove that the rescaled long-range contact processes converge to a super-Brownian motion with drift in $d\geq 2$, and to a super-Brownian motion with killing of the density in $d=1$. In the contact process setting, all individuals are either susceptible or infected, meaning that infection confers no immunity upon recovery. By introducing a natural third type of individual, recovered, to this model, we get the well-known susceptible/infected/recovered (SIR) epidemic process, in which recovered individuals are immune to future infections. Lalley [@L09] and Lalley-Zheng [@LZ10] established the convergence of rescaled SIR epidemic processes for $d \leq 3$, but no such convergence results exist in higher dimensions $d\geq 4$. The results of this paper close this gap.
Define $\mathbb{Z}_u^d=\mathbb{Z}^d/u=\{x/u: x\in \mathbb{Z}^d\}$ for any $u>0$. Our SIR epidemic process takes place on $\mathbb{Z}^d_R$ where $R\in \mathbb{N}$ represents the infection range. Call $x,y\in \mathbb{Z}^d_R$ neighbors if $0<\| x-y\| _\infty\leq 1$ where $\| \cdot \| _\infty$ denotes the $l^\infty$ norm on $\mathbb{R}^d$. The state of the epidemic process at time $t$ can be characterized by a function $\psi_t: \mathbb{Z}_R^d\to \{-1,0,1\}$. For each vertex $x\in \mathbb{Z}^d_R$, we say $x$ at time $t$ is infected if $\psi_t(x)=1$; susceptible if $\psi_t(x)=0$; recovered if $\psi_t(x)=-1$. Define the continuous time SIR epidemic process $(\psi_t, t\geq 0)$ as follows:\
(a) Each particle at infected sites dies at rate $1$ and gives birth to one new particle at rate $\beta$.\
(b) When a birth event occurs at some site $x$, the new particle is sent to a site $y$ chosen randomly from $\mathcal{N}(x):=\{y\in \mathbb{Z}_R^d: 0<\| x-y\| _\infty\leq 1\}$, the set of neighbors of $x$.\
(c) If $y$ is susceptible, the birth is valid, the new particle establishes there and $y$ becomes infected. If $y$ is recovered or infected, such a birth will be suppressed.\
(d) When a death occurs at $x$, the site $x$ becomes recovered and no new particle may establish itself there afterwards.
One may think of the particles as the virus spreading over individuals positioned on $\mathbb{Z}_R^d$. The infected individual at $x$ recovers at rate $1$ (the virus is removed and the particle dies) while it transmits the disease (gives birth to a new particle) at rate $\beta$ to its neighbors in $\mathcal{N}(x)$. If $x$ attempts to infect a susceptible neighbor $y$, then $y$ becomes infected and a new particle (the virus) establishes itself there; otherwise the infection attempt fails. Once an individual has recovered, it is immune to future infections. We note that multi-occupancy of particles is not allowed, i.e. there is at most one particle at each site. In fact, if the assumption of long-lasting immunity upon recovery is removed, one immediately obtains the contact process model considered in [@DP99]. Our $d\geq 4$ setting for the SIR epidemic here corresponds to their $d\geq 2$ setting for the contact process in [@DP99]. These dimension settings are essentially related to the fact that a super-Brownian motion has no fixed-time density iff $d\geq 2$ and no occupation density, or local time, iff $d\geq 4$ (see, e.g., Chp. III of [@Per02]).
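For concreteness, rules (a)-(d) can be simulated with a standard event-driven (Gillespie-type) loop. The following minimal sketch is purely illustrative; the dimension, the range $R$, the infection rate $\beta$, the initial condition and the time horizon are arbitrary choices and play no role in the scaling limit studied below.

``` python
import random
from itertools import product

# Event-driven simulation sketch of the SIR epidemic on Z^d_R (rules (a)-(d)).
# Sites are stored by their integer coordinates on Z^d, i.e. R times their
# position on Z^d_R; all parameter values below are illustrative only.

random.seed(0)
d, R, beta, t_max = 2, 3, 1.2, 5.0

# neighbours of x in Z^d_R: all y with 0 < ||y - x||_inf <= 1, i.e. nonzero
# integer offsets of sup-norm at most R on the underlying lattice Z^d
offsets = [w for w in product(range(-R, R + 1), repeat=d) if any(w)]

infected = {(0,) * d}        # one infected site at the origin
recovered = set()
t = 0.0

while infected:
    # each infected site carries death rate 1 and birth rate beta
    dt = random.expovariate((1.0 + beta) * len(infected))
    if t + dt > t_max:
        break
    t += dt
    x = random.choice(tuple(infected))               # site of the next event
    if random.random() < 1.0 / (1.0 + beta):         # death: x becomes recovered
        infected.remove(x)
        recovered.add(x)
    else:                                            # birth attempt towards a uniform neighbour
        w = random.choice(offsets)
        y = tuple(x[i] + w[i] for i in range(d))
        if y not in infected and y not in recovered: # only susceptible sites accept the birth
            infected.add(y)

print("time:", round(t, 3), "infected:", len(infected), "recovered:", len(recovered))
```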
Notice that if we ignore rules $(c)$ and $(d)$, the SIR epidemic process becomes a branching random walk (BRW) in which each particle dies at rate $1$ and gives birth, at rate $\beta$, to one offspring located at a uniformly chosen neighboring site. To obtain a nontrivial scaling limit, we set $\beta=1+\theta/N$ where $N>0$ satisfies $$\begin{aligned}
\label{9e1.1}
R^d=
\begin{cases}
N,&d\geq 5\\
N\log N, &d=4\\
N^{3-d/2}, &d\leq 3.
\end{cases}\end{aligned}$$ To justify such a choice of $N$, by following the contact process arguments in [@DP99], we consider a typical particle in the branching random walk at time $N$ and count the number, denoted by $\chi_N$, of its neighboring sites that have been visited before time $N$. The expected value of $\chi_N$ is approximately $$\begin{aligned}
\mathbb{E}(\chi_N)\approx C\int_1^N \int_0^{t} (t+s)^{-d/2}ds dt=
\begin{cases}
C,&d\geq 5\\
C\log N, &d=4\\
CN^{2-d/2}, &d\leq 3.
\end{cases}\end{aligned}$$ Here we trace backward in time: $t$ is the time at which the nearby relative branches off the family tree of the typical particle, and $0< s< t$ is the extra time the relative takes to reach the neighboring site, which happens with probability at most $C(t+s)^{-d/2}$. To ensure that the epidemic survives, we require the fraction of sites available for birth in each neighborhood to be at least $1-\theta/N$, so we need $\mathbb{E}(\chi_N)/R^d=O(1/N)$, giving $N$ as in [\[9e1.1\]](#9e1.1){reference-type="eqref" reference="9e1.1"}.
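For the reader's convenience, the stated order of magnitude follows from the elementary estimate (here for $d\geq 3$; the cases $d\leq 2$ are handled similarly) $$\begin{aligned}
\int_0^{t} (t+s)^{-d/2}\, ds=\frac{2}{d-2}\Big(t^{1-d/2}-(2t)^{1-d/2}\Big)\leq C\, t^{1-d/2},\end{aligned}$$ so that $\int_1^N\int_0^t (t+s)^{-d/2}\,ds\,dt\leq C\int_1^N t^{1-d/2}\,dt$, which is bounded by a constant for $d\geq 5$, is of order $\log N$ for $d=4$, and is of order $N^{2-d/2}$ for $d=3$.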
**Connection with the range-$R$ bond percolation**. The above asymptotics for $N$ as in [\[9e1.1\]](#9e1.1){reference-type="eqref" reference="9e1.1"} also appear in the corresponding asymptotics for the critical probability $p_c(R)$ of the range-$R$ bond percolation on $\mathbb{Z}^d$ as $R\to \infty$: in higher dimensions $d>6$, these asymptotics were confirmed by van der Hofstad and Sakai [@HS05] using the lace expansion. When $d\leq 6$, the corresponding bounds for $p_c(R)$ were proved recently in [@FP16], [@Hong21] and [@Hong23], where the authors used the connection between bond percolation and discrete-time SIR epidemics to study the critical probability $p_c(R)$. This connection is another motivation for studying the convergence of SIR epidemics in this paper.
Now that the scaling parameter is determined, we apply the usual Brownian scaling to scale time by $N$, space by $\sqrt{N}$, and define the rescaled SIR epidemic process $(\psi_t, t\geq 0)$ as follows:\
(a) Each particle at infected sites dies at rate $N$ and gives birth to one new particle at rate $N+\theta$.\
(b) When a birth occurs at $x$, the new particle is sent to a site $y$ chosen uniformly at random among the sites $y\in \mathbb{Z}^d_{N^{1/2} R}$ with $0<\| y-x\| _\infty\leq N^{-1/2}$.\
(c) If $y$ is susceptible, the birth is valid, the new particle establishes there and $y$ becomes infected. If $y$ is recovered or infected, the birth will be suppressed.\
(d) When a death occurs at $x$, the site $x$ becomes recovered and no new particle may establish itself there afterwards.
To describe the convergence result, we introduce some notation. Let $M_F(\mathbb{R}^d)$ be the space of finite measures on $(\mathbb{R}^d,\mathfrak{B}(\mathbb{R}^d))$ equipped with the topology of weak convergence of measures. Write $$\mu(\phi)=\int \phi(x) \mu(dx)$$ for any measure $\mu$ and any integrable function $\phi$. Set $\Omega_X=D([0,\infty), M_F(\mathbb{R}^d))$ to be the Skorohod space of càdlàg paths on $M_F(\mathbb{R}^d)$ with the Skorohod topology, the space where the convergence will occur. For each $n\geq 1$, let $C_b^n(\mathbb{R}^d)$ denote the space of bounded continuous functions whose partial derivatives of order less than $n+1$ are also bounded and continuous. A super-Brownian motion $X=(X_t,t\geq 0)$ is a continuous $M_F(\mathbb{R}^d)$-valued strong Markov process defined on some complete filtered probability space $(\Omega, \mathcal F, \mathcal F_t, \mathbb{P})$. We say $X$ starts at $X_0\in M_F(\mathbb{R}^d)$ with branching rate $\gamma_0>0$, diffusion coefficient $\sigma_0^2>0$ and drift $\theta_0\in \mathbb{R}$ if it satisfies the following martingale problem: $$\begin{aligned}
\label{9e10.25}
(MP)^{\gamma_0, \sigma_0^2, \theta_0}_{X_0}: &\text{ For any } \phi \in C_b^2(\mathbb{R}^d),\nonumber\\
&M_t(\phi)=X_t(\phi)-X_0(\phi)-\int_0^t X_s\Big(\frac{\sigma_0^2}{2} \Delta\phi\Big) ds-\theta_0\int_0^t X_s(\phi) ds\nonumber\\
&\text{ is an $(\mathcal F_t)$-martingale with } \langle M(\phi)\rangle_t=\int_0^t X_s(\gamma_0 \phi^2)ds.\end{aligned}$$ The above martingale problem uniquely characterizes the law of super-Brownian motion on $\Omega_{X, C}=C([0,\infty), M_F(\mathbb{R}^d))$, the space of continuous $M_F(\mathbb{R}^d)$-valued paths furnished with the compact-open topology.
Now we are ready to present our main theorem. For simplicity, we assume that there are no recovered sites at the beginning of the epidemic. Let $$\begin{aligned}
\label{9e10.13}
\xi_t=\psi_t^{-1}(1):=\{x\in \mathbb{Z}^d_{N^{1/2} R}: \psi_t(x)=1\}\end{aligned}$$ denote the set of infected sites at time $t$. Then the set of recovered sites is given by $$\begin{aligned}
\psi_t^{-1}(-1):=\{x\in \mathbb{Z}^d_{N^{1/2} R}: \psi_t(x)=-1\}=\Big(\bigcup_{s\leq t} \xi_s\Big)-\xi_t.\end{aligned}$$ One may conclude from the above that the infected sets $\{\xi_t, t\geq 0\}$ uniquely determine the SIR epidemic process $(\psi_t, t\geq 0)$. By assigning mass $1/N$ to each infected individual, we define the measure-valued process by $$\begin{aligned}
\label{9ec10.25}
X_t^N:=\frac{1}{N}\sum_{x\in\xi_t} \delta_x.\end{aligned}$$ It follows that $(X_t^N, t\geq 0)\in \Omega_X$. When $d\geq 5$, we let $Y_1,Y_2, \cdots$ be i.i.d. random variables on $\mathbb{R}^d$ so that $Y_1=0$ or $Y_1$ is uniform on $[-1,1]^d$, each with probability $1/2$. Set $V_n=Y_1+\cdots+Y_n$ for $n\geq 1$ and $V_0=0$. In addition, we let $W_0$ be uniform on $[-1,1]^d$, independent of $\{Y_1, Y_2,\cdots\}$. Define $$\begin{aligned}
\label{9ec10.64}
b_d= 2^{-d} \sum_{l=0}^\infty \mathbb{P}(V_{l+1}\in [-1,1]^d \backslash \{0\}) +2^{-d} \sum_{l=0}^\infty \sum_{j=0}^\infty \frac{1}{2^{l+j+1}} \frac{(l+j)!}{l!j!} \sum_{m=0}^j \mathbb{P}(W_{0}+V_{l+m}\in [-1,1]^d).\end{aligned}$$ In $d=4$, we let $b_4={9}/(2\pi^2)$.
**Theorem 1**. *Let $d\geq 4$. If $X_0^N\to X_0$ in $M_F(\mathbb{R}^d)$ as $N\to \infty$ where $X_0$ has no point masses, then the rescaled SIR epidemic processes $(X_t^N, t\geq 0)$ defined in [\[9ec10.25\]](#9ec10.25){reference-type="eqref" reference="9ec10.25"} converge weakly to $(X_t, t\geq 0)$ on $\Omega_X$ as $N\to \infty$ where $X$ is a super-Brownian motion satisfying $(MP)^{\gamma_0, \sigma_0^2, \theta_0}_{X_0}$ with branching rate $\gamma_0=2$, diffusion coefficient $\sigma_0^2=1/3$ and drift $\theta_0=\theta-b_d$.*
**Remark 2**. *A careful reader may have noticed that the measure-valued epidemic process $\{X_t^N\}_{t\geq 0}$ is not Markovian: the suppressed births onto the recovered sites depend on the history of the process. However, its weak limit, the super-Brownian motion $X$, is a Markov process. This may seem contradictory at first sight; we will show later in the proof that the suppressed births due to collisions between distant relatives can be ignored, making $\{X_t^N\}_{t\geq 0}$ "almost" Markovian for $N$ large. Therefore, when passing to the limit, we obtain a Markov process. We refer the reader to Section [4](#9s4){reference-type="ref" reference="9s4"} for more details.*
Turning to the lower dimensions $d\leq 3$, the distant relatives can no longer be ignored since the occupation density exists. In this regime, the scaling limit of the (discrete-time) spatial SIR epidemics has been studied in Lalley [@L09] and Lalley-Zheng [@LZ10]. Instead of a finer lattice, their epidemic model takes place on the integer lattice $\mathbb{Z}^d$ with a large population of size $N$ located at each site, the so-called "Village" model. It is shown that after suitable scaling, which agrees with our argument in [\[9e1.1\]](#9e1.1){reference-type="eqref" reference="9e1.1"}, the SIR epidemic process converges to a super-Brownian motion with killing of the local time. More specifically, the limiting process $X$ is the solution to the martingale problem $$\begin{aligned}
\label{9ec4.38}
X_t(\phi)=X_0(\phi)+M_t(\phi)+\int_0^t X_s\Big(\frac{\Delta}{6}\phi+\theta \phi\Big) ds-\int_0^t X_s(L_s \phi) ds, \forall \phi\in C_b^2(\mathbb{R}^d),\end{aligned}$$ where $X$ is a continuous $M_F(\mathbb{R}^d)$-valued process, $L_s$ is the local time of $X$, i.e. the density function of the occupation measure $\int_0^s X_u(\cdot) du$, and $M(\phi)$ is a continuous martingale with $\langle M(\phi)\rangle_t=\int_0^t X_s(2\phi^2)ds$. We refer the reader to Theorem 2.2 of Lalley-Perkins-Zheng [@LPZ14] for the existence and uniqueness of the above martingale problem. Although the settings are slightly different, it has been conjectured in Frei-Perkins [@FP16] that the same conclusion holds for the discrete-time long-range SIR epidemics (see Conjecture 1.2 therein). We also conjecture that the same conclusion holds under our continuous-time setting.
**Conjecture 3**. *Let $d\leq 3$. If $X_0^N$ converges to some appropriate compactly supported $X_0$ in $M_F(\mathbb{R}^d)$ as $N\to \infty$, then the rescaled SIR epidemic processes $(X_t^N, t\geq 0)$ defined in [\[9ec10.25\]](#9ec10.25){reference-type="eqref" reference="9ec10.25"} converge weakly to $(X_t, t\geq 0)$ on $\Omega_X$ as $N\to \infty$ where $X$ is as in [\[9ec4.38\]](#9ec4.38){reference-type="eqref" reference="9ec4.38"}.*
We briefly describe the idea of the proof of Theorem [Theorem 1](#9t0){reference-type="ref" reference="9t0"} for $d\geq 4$, which is inspired by the contact process proof in [@DP99], with some necessary adjustments: we define a sequence of approximating processes $X_t^{k, N}$ that give upper and lower bounds for the rescaled SIR epidemic process $X_t^N$. Recall from [\[9e10.13\]](#9e10.13){reference-type="eqref" reference="9e10.13"} that $\xi_t$ is the set of infected sites of the SIR epidemic process. First, we define $\xi_t^0$ to be the branching random walk obtained by ignoring the collision rules $(c),(d)$; multi-occupancy of particles is then allowed, and we may view $\xi_t^0$ as a multiset, i.e. a set in which repetitions of elements are allowed. For $k\geq 1$, let $\xi_t^k$ be the branching random walk $\xi_t^0$ modified so that births onto sites that have been visited by $\xi_t^{k-1}$ are suppressed. Since $\xi_t^0$ is an overestimate of the set of infected sites $\xi_t$, we get that $\xi_t^1$ is an underestimate of $\xi_t$, as the set of forbidden sites is larger for $\xi_t^1$. On the other hand, $\xi_t^2$ is an overestimate since it only removes particles that are born onto the smaller set visited by $\xi_t^1$. We may continue this procedure to define $\xi_t^k$ for $k\geq 3$, but we only need $k=0,1,2$ for our proof. From these approximating branching random walks $\xi_t^k$ with $k=0,1,2$, we define their rescaled versions by $$\begin{aligned}
X_t^{k,N}:=\frac{1}{N}\sum_{x\in \xi_t^k} \delta_x.\end{aligned}$$ Write $\mu\geq \nu$ for measures $\mu, \nu$ if $\mu(A) \geq \nu(A)$ for any Borel set $A\subset \mathbb{R}^d$. By construction we have $$\begin{aligned}
\label{9e1.7}
X_t^{1,N}\leq X_t^N\leq X_t^{2,N}\leq X_t^{0,N}.
\end{aligned}$$
**Proposition 4**. *Let $d\geq 4$. Under the hypothesis of Theorem [Theorem 1](#9t0){reference-type="ref" reference="9t0"}, we have for all $T>0$, $$\begin{aligned}
\mathbb{E}\Big(\sup_{t\leq T} \Big\vert X_t^{2,N}(1)-X_t^{1,N}(1)\Big\vert \Big)\to 0 \text{ as } N\to \infty.\end{aligned}$$*
Together with [\[9e1.7\]](#9e1.7){reference-type="eqref" reference="9e1.7"}, the above proposition implies that it suffices to find the limit of $X^1$ or $X^2$, so Theorem [Theorem 1](#9t0){reference-type="ref" reference="9t0"} is immediate from the following result.
**Proposition 5**. *Let $d\geq 4$. Under the hypothesis of Theorem [Theorem 1](#9t0){reference-type="ref" reference="9t0"}, we have $\{X_t^{1,N}\}_{t\geq 0}$ converges weakly to $X$ on $\Omega_X$ as $N\to \infty$ where $X$ is a super-Brownian motion with branching rate $2$, diffusion coefficient $1/3$ and drift $\theta-b_d$.*
# Proof of the main result {#9s2}
Let $\theta\in \mathbb{R}$ and consider $N\in \mathbb{N}$ such that $N>2\vert \theta\vert$. For simplicity, we may set $\theta\geq 0$ as all the calculations below work similarly for $\theta<0$. Define the rescaled fine lattice on which all our processes live by $$\begin{aligned}
\label{9e1.3}
\mathcal{L}_N=
\begin{cases}
N^{-1/2} \cdot N^{-1/d} \cdot \mathbb{Z}^d,&d\geq 5\\
N^{-1/2}\cdot N^{-1/4}(\log N)^{-1/4} \cdot \mathbb{Z}^4, &d=4.
\end{cases}\end{aligned}$$ The first scaling factor $N^{-1/2}$ is from the usual space-time scaling while the second scaling factor is from $\mathbb{Z}_R^d$ and the relation between $N$ and $R$ as in [\[9e1.1\]](#9e1.1){reference-type="eqref" reference="9e1.1"}. We use the same labeling system from [@DP99] to study our SIR epidemic process. Define $$\begin{aligned}
\label{9e1.16}
\mathcal{I}=\bigcup_{n=0}^\infty \mathbb{N}\times \{0,1\}^n=\{(\beta_0, \beta_1, \cdots, \beta_n): \beta_0\in \mathbb{N}, \beta_i \in \{0,1\}, 1\leq i\leq n\},\end{aligned}$$ where $\beta_0$ labels the ancestor of a particle $\beta$. If $\beta=(\beta_0, \beta_1, \cdots, \beta_n)$ for some $n\geq 0$, then we let $\vert \beta\vert =n$ be the generation of $\beta$ and write $\beta\vert i=(\beta_0, \cdots, \beta_i)$ for $0\leq i\leq n$. Let $\pi \beta=(\beta_0, \beta_1, \cdots, \beta_{n-1})$ be the parent of $\beta$ and set $\pi \beta=\emptyset$ if $\vert \beta\vert =0$. For $j=0$ or $1$, denote by $\beta \vee j=(\beta_0, \beta_1, \cdots, \beta_n, j)$ the offspring of $\beta$. If $\gamma_0=\beta_0$, use $\beta\wedge \gamma$ to denote the most recent common ancestor of $\beta$ and $\gamma$, that is, if we let $k_{max}=\max\{0\leq k\leq |\beta|\wedge |\gamma|: \beta|k=\gamma|k\}$, then $\gamma\wedge \beta=\beta|k_{max}=\gamma|k_{max}$. Set $\beta\wedge \gamma=\emptyset$ if $\gamma_0\neq \beta_0$. Write $\beta\geq \gamma$ if $\beta$ is an offspring of $\gamma$ and $\beta>\gamma$ if it is strict.
The set of initially infected particles is given by $\{x_i: 1\leq i\leq M_N\} \subseteq \mathcal{L}_N$. We further assume that $X_0^{N}=\frac{1}{N}\sum_{i=1}^{M_N} \delta_{x_i}$ converges to some $X_0\neq 0$ in $M_F(\mathbb{R}^d)$. For any $i>M_N$, set $x_i$ to be the cemetery state $\Delta$. **Unless otherwise noted, we only consider particles $\beta$ with $1\leq \beta_0\leq M_N$ below.** Let $K_0^N$ be a finite subset of $\mathcal{L}_N$ disjoint from $\{x_i\}$, denoting the set of initial recovered sites at time $0$.
On some complete probability space $(\Omega, \mathcal F, \mathbb{P})$, we define the following independent collections of random variables:
- $\{t_\beta: \beta \in \mathcal{I}\}$ are i.i.d. with distribution $Exp(2N+\theta)$.
- $\{\delta_\beta: \beta \in \mathcal{I}\}$ are i.i.d. with $\mathbb{P}(\delta_\beta=-1)= \frac{N}{2N+\theta}$, and $\mathbb{P}(\delta_\beta=1)= \frac{N+\theta}{2N+\theta}$.
- $\{e_\beta: \beta \in \mathcal{I}\}$ are i.i.d. with distribution $\mathbb{P}(e_\beta=0)= \mathbb{P}(e_\beta=1)=\frac{1}{2}$.
- $\{W^\beta: \beta \in \mathcal{I}\}$ are i.i.d. that are uniform on $\mathcal{N}_N=\{y\in \mathcal{L}_N: 0<\| y\| _\infty\leq N^{-1/2}\}$.
Here we use $t_\beta$ to measure the time until a birth or death event occurs ($2N+\theta$ is the total rate). A death event occurs if $\delta_\beta=-1$ and a birth event if $\delta_\beta=1$. When a birth event occurs, the new particle is labeled by $\beta \vee e_\beta$ and is displaced from its parent $\beta$ by an amount of $W^\beta$. Relabel the parent particle $\beta$ by $\beta \vee (1-e_\beta)$. We use $e_\beta$ to record the change of the family line of $\beta$. The lifetime of a particle $\beta$ is given by $$\begin{aligned}
\label{9ea6.95}
T_\beta=\sum_{m=0}^{\vert \beta\vert } t_{\beta\vert m}.\end{aligned}$$ By convention, we let $T_\emptyset=-\infty$. The lifetime is not the death time of the particle--it might already be dead before $T_\beta$ if $\delta_{\beta\vert m}=-1$ for some $m<\vert \beta\vert$. We further define the death time of $\beta$ by $$\begin{aligned}
\label{e6.95}
\zeta_{\beta}^0=T_\beta \wedge \inf\{T_{\beta\vert m}: \delta_{\beta\vert m}=-1, m< \vert \beta\vert \},\end{aligned}$$ where $\inf \emptyset =\infty$. Hence we get $\zeta_{\beta}^0=T_\beta$ if no death occurs along the family line of $\beta$.
Recall we use $\beta \vee e_\beta$ to label the new particle displaced from its parent $\beta$. The family line of $\beta$ is changing iff $e_{\beta\vert m}=\beta_{m+1}$, so the position of the family line of $\beta$ is given by $$\begin{aligned}
\label{ea3.22}
B_t^\beta=
\begin{cases}
x_{\beta_0}+\sum_{m=0}^{\vert \beta\vert -1} W^{\beta\vert m} 1(e_{\beta\vert m}=\beta_{m+1}, T_{\beta\vert m}\leq t), &\text{ if } t<\zeta_\beta^0\\
\Delta, &\text{ if } t\geq \zeta_\beta^0.
\end{cases}\end{aligned}$$ The current location, $B^\beta$, of $\beta$ is given by $$\begin{aligned}
B^\beta=B_{T_\beta^{-}}^\beta=B_{T_{\pi \beta}}^\beta,\end{aligned}$$ where $T_\beta^{-}$ denotes a time infinitesimally close to, and before, $T_\beta$. Define the $\sigma$-field by $$\begin{aligned}
\mathcal F_t=\sigma\{1(T_\beta\leq t)(T_\beta, \delta_\beta, e_\beta, W_\beta): \beta\in \mathcal{I}\}\vee \{\mathbb{P}-\text{ null sets}\}.\end{aligned}$$ Write $\beta \sim t$ if $T_{\pi \beta}\leq t<T_\beta$, meaning the particle $\beta$ might be alive at time $t$. The particles that are dead will be identified by setting their location to be $\Delta$ and letting $\phi (\Delta)=0$ for any function $\phi$. Define our first measure-valued process, the branching random walk $X_t^0$, by $$\begin{aligned}
\label{9e9.01}
X_t^0(\phi)=X_t^{0,N}(\phi)=\frac{1}{N}\sum_{\beta \sim t} \phi(B_t^\beta).\end{aligned}$$ In particular, when $t=0$, by assumption we get $$\begin{aligned}
X_0^{0}=X_0^{0,N}=\frac{1}{N}\sum_{i=1}^{M_N} \delta_{x_i}\to X_0\in M_F(\mathbb{R}^d) \text{ as } N\to \infty,\end{aligned}$$ so that $X_0^0(1)$ is bounded uniformly in $N$ (most of the time we will suppress the dependence of $X_t^{0, N}$ on $N$ and simply write $X_t^0$, but when necessary we will write $X_t^{0, N}$). The following elementary result on the first moment of the branching random walk is from Lemma 2.9 of [@DP99].
**Lemma 6**. *If $\phi: \mathbb{R}^d\to \mathbb{R}$ is a bounded Borel measurable function, then $$\begin{aligned}
\mathbb{E}(X_t^0(\phi))=e^{\theta t} \int E_x^N(\phi(B_t^N)) X_0^0(dx),\end{aligned}$$ where $B_t^N$ is a continuous time random walk starting from $x$ under the law $P_x^N$ that takes a step uniformly over $\mathcal{N}_N$ at rate $N+\theta$.*
Now we turn to the SIR epidemic process. Denote by $\text{Supp}(\mu)$ the closed support of a measure $\mu$. Recall that $K_0^N \subset \mathcal{L}_N$ is the set of recovered sites at time $0$. If $t\to \mu_t$ is a càdlàg measure-valued path, we define $$\begin{aligned}
\label{9e10.93}
\mathcal{R}_t^\mu:=\bigcup_{s\leq t} \text{Supp}(\mu_s) \quad \text{ and } \quad \overline{\mathcal{R}}_t^\mu:=\mathcal{R}_t^\mu\cup K_0^N.\end{aligned}$$ Assume $\{K_0^N\}_{N\geq 1}$ satisfies $$\begin{aligned}
\label{9e0.93}
\lim_{N\to \infty} \frac{1}{N} \mathbb{E}\Big( \sum_{\beta} 1(T_\beta\leq t, B^\beta\neq \Delta) 1(B^\beta+W^\beta\in K_0^N)\Big)=0.\end{aligned}$$ The above ensures that the suppressed births onto the initial recovered sites can be ignored, which of course holds if $K_0^N=\emptyset$ for all $N\geq 1$. We will show below that the above is also necessary to obtain the convergence result as in Theorem [Theorem 1](#9t0){reference-type="ref" reference="9t0"}. Define $$\begin{aligned}
\label{9ea3.01}
\zeta_\beta(\mu)=\zeta_\beta^0\wedge \inf \Big\{T_{\beta\vert m}: m<\vert \beta\vert , e_{\beta\vert m}=\beta_{m+1}, B_{T_{_{\beta\vert m}}}^\beta\in \overline{\mathcal{R}}_{T_{\beta\vert m}^{-}}^\mu\Big\},\end{aligned}$$ to characterize the first time the family line of $\beta$ hits a site that has already been occupied by $\mu$. Define our rescaled SIR epidemic by $$\begin{aligned}
\label{9ec3.01}
X_t(\phi)=X_t^N(\phi)=\frac{1}{N} \sum_{\beta \sim t} \phi(B_t^\beta) 1(\zeta_\beta(X)>t).\end{aligned}$$ In the definition of $\zeta_\beta(X)$, we use $\overline{\mathcal{R}}_t^X$ to denote the set of recovered and infected sites at each time $t$, which are exactly the locations where births are forbidden. Similarly to (2.6) of [@DP99], the existence and uniqueness of $X_t$ are trivial if the initial mass is finite. Following the discussion below Conjecture [Conjecture 3](#c1.2){reference-type="ref" reference="c1.2"}, we define $$\begin{aligned}
\label{9e0.03}
X_t^n(\phi)=X_t^{n,N}(\phi)=\frac{1}{N} \sum_{\beta \sim t} \phi(B_t^\beta) 1(\zeta_\beta^n>t) \text{ where } \zeta_\beta^n=\zeta_\beta({X^{n-1}})\text{ for } n=1, 2.\end{aligned}$$ Recall $X_t^0$ from [\[9e9.01\]](#9e9.01){reference-type="eqref" reference="9e9.01"}. One may easily see that $X_t^0\geq X_t$ for any $t$ (recall $\mu\geq \nu$ for measures $\mu,\nu$ on $\mathbb{R}^d$ if $\mu(\phi)\geq \nu(\phi)$ for all functions $\phi\geq 0$). Thus it is immediate that $\overline{\mathcal{R}}_t^X\subseteq \overline{\mathcal{R}}_t^{X^0}$. It follows that $\zeta_\beta^1=\zeta_\beta({X^{0}})\leq \zeta_\beta(X)$ and hence $X_t^1\leq X_t$. Similar reasoning gives $X_t^2\geq X_t$. One may conclude $$\begin{aligned}
\label{9e10.03}
X_t^1\leq X_t \leq X_t^2 \leq X_t^0, \quad \forall t\geq 0.\end{aligned}$$ Let $n=1$ or $2$. Consider a particle $\beta$ born at time $T_{\pi \beta}$. For such a particle $\beta$ to be alive in $X^n$, we require $\zeta_\beta^n>T_{\pi\beta}$. At the end of its lifetime $T_\beta$, either a death event or a birth event happens to $\beta$: on the event of a death, i.e. $\delta_\beta=-1$, the particle $\beta$ is lost; on the event of a birth, i.e. $\delta_\beta=1$, the particle at $B^\beta$ is relabeled and a new particle is sent to the neighboring site with displacement $W^\beta$ as long as the site $B^\beta+W^\beta$ has never been occupied before by $X^{n-1}$. Collecting the information, we may write $$\begin{aligned}
&X_{T_\beta}^n(\phi)-X_{T_\beta^-}^n(\phi)\\
&= \frac{1}{N} 1_{\{\zeta_\beta^n>T_{\pi \beta}\}} \Big\{ -\phi(B^\beta)1_{\{\delta_\beta=-1\}} + 1_{\{\delta_\beta=1\}} \phi(B^\beta+W^\beta) 1(B^\beta+W^\beta\notin \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-})\Big\}\\
&=\frac{1}{N} 1_{\{\zeta_\beta^n>T_{\pi \beta}\}}\Bigg\{ \phi(B^\beta)\delta_\beta +1_{\{\delta_\beta=1\}} \times \\
&\quad \quad\quad\quad \quad\quad\Big[ \big(\phi(B^\beta+W^\beta) -\phi(B^\beta)\big)-\phi(B^\beta+W^\beta) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-}) \Big]\Bigg\},\end{aligned}$$ where the second equality uses $\delta_\beta\in \{1,-1\}$. Summing the above over $\beta$ with $T_\beta\leq t$ and introducing $$\begin{aligned}
\label{9e0.94}
a_\beta^n(t)\equiv 1(T_\beta\leq t, \zeta_\beta^n>T_{\pi \beta})=1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}),\end{aligned}$$ we get $$\begin{aligned}
X_{t}^n(\phi)=X_{0}^n(\phi)&+\frac{1}{N} \sum_{\beta} a_\beta^n(t) \phi(B^\beta)\delta_\beta +\frac{1}{N} \sum_{\beta} a_\beta^n(t) 1_{\{\delta_\beta=1\}}\Big(\phi(B^\beta+W^\beta) -\phi(B^\beta)\Big) \\
&-\frac{1}{N} \sum_{\beta} a_\beta^n(t) 1_{\{\delta_\beta=1\}} \phi(B^\beta+W^\beta) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-}).
\end{aligned}$$ Centering $\delta_\beta$ and $1_{\{\delta_\beta=1\}}$ by their expectations, we arrive at $$\begin{aligned}
\label{9e2.1}
X_{t}^n(\phi)=X_{0}^n(\phi)&+\frac{1}{N} \sum_{\beta} a_\beta^n(t) \phi(B^\beta) (\delta_\beta-\frac{\theta}{2N+\theta}) \\
&+\frac{\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} a_\beta^n(t) \phi(B^\beta)\nonumber\\
&+\frac{N+\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} a_\beta^n(t) \Big(\phi(B^\beta+W^\beta) -\phi(B^\beta)\Big)\nonumber\\
&-\frac{N+\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} a_\beta^n(t) \phi(B^\beta+W^\beta) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-})\nonumber\\
&+\frac{1}{N} \sum_{\beta} a_\beta^n(t)\Big(1_{\{\delta_\beta=1\}}-\frac{N+\theta}{2N+\theta}\Big)\times\nonumber\\
&\quad\quad\quad\Big(\phi(B^\beta+W^\beta)1(B^\beta+W^\beta\notin \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-}) -\phi(B^\beta) \Big).\nonumber\end{aligned}$$ Rewrite the above by giving a notation for each term on the right-hand side to get $$\begin{aligned}
\label{9ea3.67}
X_{t}^n(\phi)&=X_{0}^n(\phi)+M_t^n(\phi)+D_t^{n,1}(\phi)+D_{t}^{n,2}(\phi)-K_{t}^{n}(\phi)+E_{t}^{n}(\phi),\end{aligned}$$ where $M_t^n(\phi)$ represents the martingale term, $D_t^{n,1}(\phi)$ the drift term, $D_t^{n,2}(\phi)$ the diffusion term, $K_{t}^{n}(\phi)$ the collision term and $E_t^{n}(\phi)$ the error term. Set $\varepsilon_N=\theta/(2N+\theta)$. Throughout the rest of the paper, we always pick $$\begin{aligned}
\phi \in C_b^3(\mathbb{R}^d) \text{ and } \| \phi\| _\infty=\sup_{x\in \mathbb{R}^d} \vert \phi(x)\vert .\end{aligned}$$
**Lemma 7**. *Let $n=1$ or $2$. We have $M_t^n(\phi)$ is an $\mathcal F_t$-martingale with $$\begin{aligned}
\langle M^n(\phi)\rangle_t=\Big(2+\frac{\theta}{N}\Big)(1-\varepsilon_N^2) \int_0^t X_r^n(\phi^2) dr.\end{aligned}$$ Moreover, $$\begin{aligned}
\langle M^2(\phi)-M^1(\phi)\rangle_t=\Big(2+\frac{\theta}{N}\Big)(1-\varepsilon_N^2) \int_0^t (X_r^2(\phi^2)-X_r^1(\phi^2)) dr.\end{aligned}$$*
**Lemma 8**. *Let $n=1$ or $2$. For all $t>0$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert D_s^{n,1}(\phi)-\theta \int_0^s X_r^n(\phi) dr\Big\vert \Big)=0.\end{aligned}$$*
**Lemma 9**. *Let $n=1$ or $2$. For all $t>0$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert D_s^{n,2}(\phi)- \int_0^s X_r^n(\frac{\Delta}{6}\phi) dr\Big\vert \Big)=0.\end{aligned}$$*
**Lemma 10**. *Let $n=1$ or $2$. For all $t>0$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \vert E_s^{n}(\phi)\vert \Big)=0.\end{aligned}$$*
${\bf Convention\ on\ Constants.}$ Constants whose value is unimportant and may change from line to line are denoted $C$. All these constants may depend on the dimension $d$, the drift $\theta$, the time $t$, and the test function $\phi$. All these parameters, $d,\theta, t, \phi$, will be fixed before picking $C$. Our constants $C$ will never depend on $N$ or the initial condition $X_0^0$.
**Lemma 11**. *For any $t>0$, there is some constant $C>0$ such that for $N$ large, $$\begin{aligned}
\mathbb{E}\Big(\frac{1}{N} \sum_{\beta} a_{\beta}^0(t) 1(B^\beta+W^\beta \in \overline{\mathcal{R}}_{T_\beta^-}^{X^0})\Big)\leq C (X_0^0(1)+X_0^0(1)^2).\end{aligned}$$*
**Lemma 12**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n}(\phi)-b_d \int_0^s X_r^1(\phi) dr\Big\vert \Big)=0.\end{aligned}$$*
The proofs of Lemmas [Lemma 7](#9l2.1){reference-type="ref" reference="9l2.1"}-[Lemma 10](#9l2.4){reference-type="ref" reference="9l2.4"} follow similarly to those in [@DP99], where the two authors refer to them as "the four easy convergences". The only difference lies in the definition of $\zeta_\beta^n$, which, fortunately, does not play an important role in the proof, so we omit the details (for interested readers, we have included a version of the proofs in Appendix [10](#9a1){reference-type="ref" reference="9a1"}). The proof of Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"} is given in Section [3](#9s3){reference-type="ref" reference="9s3"}. Proving Lemma [Lemma 12](#9l2.6){reference-type="ref" reference="9l2.6"} is the main part of the paper; this is done in Section [5](#9s5){reference-type="ref" reference="9s5"}.
Let $\phi\equiv 1$. Apply Lemmas [Lemma 7](#9l2.1){reference-type="ref" reference="9l2.1"}-[Lemma 12](#9l2.6){reference-type="ref" reference="9l2.6"} in [\[9ea3.67\]](#9ea3.67){reference-type="eqref" reference="9ea3.67"} and collect all the error terms to get $$\begin{aligned}
&X_t^2(1)-X_t^1(1)= M_t^2(1)-M_t^1(1)+\theta \int_0^{t} (X_s^2(1)-X_s^1(1)) ds+ \tilde{E}_t,\nonumber\\
&\text{ where $M_t^1(1)$ and $M_t^2(1)$ are as in Lemma \ref{9l2.1} and } \lim_{N\to \infty} \mathbb{E}(\sup_{s\leq t} \vert \tilde{E}_s\vert )=0.\end{aligned}$$ Having established the above, the proof of Proposition [Proposition 4](#9p1.1){reference-type="ref" reference="9p1.1"} is quite straightforward and follows similarly to that of Proposition 1 in [@DP99]. We omit the details.
Next, to prove Proposition [Proposition 5](#9p1.2){reference-type="ref" reference="9p1.2"}, again we apply Lemmas [Lemma 7](#9l2.1){reference-type="ref" reference="9l2.1"}-[Lemma 12](#9l2.6){reference-type="ref" reference="9l2.6"} in [\[9ea3.67\]](#9ea3.67){reference-type="eqref" reference="9ea3.67"} and collect all the error terms to see that for any $\phi\in C_b^3(\mathbb{R}^d)$, $$\begin{aligned}
\label{9ea7.80}
&X_t^1(\phi)=X_0^1(\phi)+M_t^1(\phi)+(\theta-b_d)\int_0^{t} X_s^1(\phi) ds+\int_0^t X_s^1(\frac{\Delta}{6}\phi) ds+\hat{E}_t^1(\phi),\nonumber\\
&\text{ where } M_t^1(\phi) \text{ is as in Lemma \ref{9l2.1} and } \lim_{N\to \infty} \mathbb{E}(\sup_{s\leq t} \vert \hat{E}_s^1(\phi)\vert )=0.\end{aligned}$$ Hence we arrive at the same conclusion as in (2.18) of [@DP99]. Using their proof for Proposition 2 in [@DP99], we may conclude that Proposition [Proposition 5](#9p1.2){reference-type="ref" reference="9p1.2"} holds.\
It remains to prove Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"} and Lemma [Lemma 12](#9l2.6){reference-type="ref" reference="9l2.6"}.
# Upper bounds for the collision term {#9s3}
In this section, we will give the proof of Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"}. To explain the section name, we recall from [\[9e2.1\]](#9e2.1){reference-type="eqref" reference="9e2.1"} the collision term $K_t^n(\phi)$ given by $$\begin{aligned}
\label{9ea5.69}
K_{t}^n(\phi)=&\frac{N+\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} a_\beta^n(t) \phi(B^\beta+W^\beta) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-})\nonumber\\
\leq &\| \phi\| _\infty \frac{1}{N} \sum_{\beta} a_\beta^0(t) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{0}}_{T_\beta^-}).\end{aligned}$$ Therefore Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"} indeed gives an upper bound for $\mathbb{E}(K_t^n(\phi))$.
Recall from [\[9e0.94\]](#9e0.94){reference-type="eqref" reference="9e0.94"} that $a_\beta^0(t)=1{(T_\beta\leq t, B^\beta\neq \Delta)}$, where we have replaced $\zeta_{\beta}^0>T_{\pi \beta}$ by $B^\beta\neq \Delta$ since $B^\beta=B^\beta_{T_{\pi\beta}}$. Recall from [\[9e10.93\]](#9e10.93){reference-type="eqref" reference="9e10.93"} that $\overline{\mathcal{R}}^{X^{0}}_{T_\beta^-}:= {\mathcal{R}}^{X^{0}}_{T_\beta^-}\cup K_0^N$. The assumption on $K_0^N$ from [\[9e0.93\]](#9e0.93){reference-type="eqref" reference="9e0.93"} implies $$\begin{aligned}
\| \phi\| _\infty \frac{1}{N}\mathbb{E}\Big( \sum_{\beta} 1{(T_\beta\leq t, B^\beta\neq \Delta)} 1(B^\beta+W^\beta\in K_0^N) \Big)\to 0 \text{ as } N\to \infty.\end{aligned}$$ Define $$\begin{aligned}
\label{9e9.02}
J(t):=& \frac{1}{N} \sum_{\beta}1{(T_\beta\leq t, B^\beta \neq \Delta)} 1(B^\beta+W^\beta\in \mathcal{R}^{X^{0}}_{T_\beta^-}).\end{aligned}$$ It suffices to bound $\mathbb{E}(J(t))$ for the proof of Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"}.
For $n=0,1,2$, by [\[9e9.01\]](#9e9.01){reference-type="eqref" reference="9e9.01"} and [\[9e0.03\]](#9e0.03){reference-type="eqref" reference="9e0.03"} we have $$\begin{aligned}
\text{Supp}(X_t^n)=\{B^\gamma_t: \gamma \sim t, \zeta_\gamma^n>t\}=\{B^\gamma: T_{\pi \gamma}\leq t <T_\gamma, \zeta_\gamma^n>t\}.\end{aligned}$$ Using the definition of $\zeta_\gamma^n$, we claim that the above can be rewritten as $$\begin{aligned}
\text{Supp}(X_t^n)=\{B^\gamma: T_{\pi \gamma}\leq t <T_\gamma, \zeta_\gamma^n>T_{\pi \gamma}\}.\end{aligned}$$ To see this, we note if $\zeta_\gamma^n>T_{\pi \gamma}$, then the particle $\gamma$ is alive in $X^n$. By the definition of $\zeta_\gamma^n$ from [\[9ea3.01\]](#9ea3.01){reference-type="eqref" reference="9ea3.01"}, we get $\zeta_\gamma^n=\zeta_{\gamma}^0$ and so $\zeta_{\gamma}^0>T_{\pi \gamma}$, in which case the definition of $\zeta_{\gamma}^0$ from [\[e6.95\]](#e6.95){reference-type="eqref" reference="e6.95"} implies $\zeta_{\gamma}^0=T_\gamma>t$, thus giving $\zeta_\gamma^n=\zeta_{\gamma}^0>t$. The other direction is immediate. It follows that $$\begin{aligned}
\label{9e3.1}
\mathcal{R}_t^{X^n}=&\bigcup_{s\leq t}\{B^\gamma: T_{\pi \gamma}\leq s <T_\gamma,\quad \zeta_\gamma^n>T_{\pi \gamma}\}\nonumber\\
=&\{B^\gamma: T_{\pi \gamma}\leq t, \quad \zeta_\gamma^n>T_{\pi \gamma}\}.\end{aligned}$$ Let $t=T_\beta^-$ and $n=0$ to get $$\begin{aligned}
\mathcal{R}^{X^{0}}_{T_\beta^-}=\{B^\gamma: T_{\pi \gamma}\leq T_\beta^-, \zeta_\gamma^0>T_{\pi \gamma}\}=\{B^\gamma: T_{\pi \gamma}<T_\beta, B^\gamma\neq \Delta\}.\end{aligned}$$ Apply the above in [\[9e9.02\]](#9e9.02){reference-type="eqref" reference="9e9.02"} to see $$\begin{aligned}
\label{9e9.03}
\mathbb{E}(J(t)) \leq &\frac{1}{N} \sum_{\beta,\gamma} \mathbb{E}\Big( 1(T_\beta\leq t,T_{\pi \gamma}<T_\beta) 1(B^\beta+W^\beta=B^\gamma \neq \Delta)\Big).\end{aligned}$$ For each particle $\alpha \in \mathcal{I}$, we denote by $\mathcal{H}_\alpha$ the $\sigma$-field of all the events in the family line of $\alpha$ strictly before $T_\alpha$, plus the value of $t_\alpha$, which is given by $$\begin{aligned}
\mathcal{H}_\alpha=\sigma\{t_{\alpha\vert m}, \delta_{\alpha\vert m}, e_{\alpha\vert m}, W^{\alpha\vert m}, m<\vert \alpha\vert \} \vee \sigma(t_\alpha).\end{aligned}$$ Then $T_\beta\in \mathcal{H}_\beta$ and $W^\beta$ is independent of $\mathcal{H}_\beta$. We may also conclude from $T_{\pi \gamma}<T_\beta$ that $\gamma$ is not a strict descendant of $\beta$, thus giving $W^\beta$ is independent of $\mathcal{H}_\gamma$. Recall $W^\beta$ is a uniform random variable on $\mathcal{N}_N=\{y\in \mathcal{L}_N: 0<\| y\| _\infty\leq N^{-1/2}\}$. Define $$\begin{aligned}
\label{9e9.04}
\psi(N)=\vert \mathcal{N}_N\vert =
\begin{dcases}
(2[N^{1/d}]+1)^d-1 \sim 2^d N, &\text{ in } d\geq 5,\\
(2[(N\log N)^{1/4}]+1)^4-1\sim 2^4 N\log N, &\text{ in } d=4,
\end{dcases}\end{aligned}$$ where $f(N)\sim g(N)$ if $f(N)/g(N) \to 1$ as $N\to \infty$. Calculate the expectation in [\[9e9.03\]](#9e9.03){reference-type="eqref" reference="9e9.03"} by conditioning on $\mathcal{H}_\beta \vee \mathcal{H}_\gamma$ to see $$\begin{aligned}
\label{9e2.23}
\mathbb{E}(J(t)) \leq &\frac{1}{N}\frac{1}{\psi(N)} \sum_{\beta,\gamma} \mathbb{E}\Big(1(T_\beta\leq t,T_{\pi \gamma}<T_\beta ) 1(B^\beta-B^\gamma \in \mathcal{N}_N)\Big).\end{aligned}$$ We note that $B^\beta \neq \Delta$ and $B^\gamma \neq \Delta$ are implicitly included in the event $\{B^\beta-B^\gamma \in \mathcal{N}_N\}$.
**Lemma 13**. *Let $\Psi: [0,\infty)\times \Omega\to \mathbb{R}$ be a bounded and $\mathcal F_t$-predictable function and let $\beta\in \mathcal{I}$. Then $$\begin{aligned}
\Psi(T_\beta,\omega) 1(T_\beta\leq t)-(2N+\theta)\int_0^t 1(T_{\pi\beta}<r\leq T_\beta) \Psi(r,\omega)dr\end{aligned}$$ is an $\mathcal F_t$-martingale whose predictable quadratic variation is given by $$\begin{aligned}
\label{a2.23}
(2N+\theta) \int_0^t 1(T_{\pi\beta}<r\leq T_\beta) \Psi^2 (r,\omega) dr.\end{aligned}$$*
The martingale proof follows from Lemma 3.2 of [@DP99] while the predictable quadratic variation [\[a2.23\]](#a2.23){reference-type="eqref" reference="a2.23"} can be easily derived from the stochastic integral concerning the compensated Poisson process therein. Also, the restriction on $\vert \beta\vert >0$ in Lemma 3.2 of [@DP99] can be removed.
Define $$\begin{aligned}
\label{9e2.24}
\text{nbr}_{\beta,\gamma}(r)=1(T_{\pi \beta}<r\leq T_\beta, T_{\pi \gamma}<r, B^\beta-B^\gamma \in \mathcal{N}_N).\end{aligned}$$ Apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} in [\[9e2.23\]](#9e2.23){reference-type="eqref" reference="9e2.23"} to get $$\begin{aligned}
\label{9e2.31}
\mathbb{E}(J(t))\leq &\frac{1}{N\psi(N)} \sum_{\beta,\gamma} (2N+\theta) \mathbb{E}\Big(\int_0^t \text{nbr}_{\beta,\gamma}(r) dr\Big)\nonumber\\
\leq&\frac{C}{\psi(N)} \int_0^t \mathbb{E}\Big( \sum_{\beta,\gamma}\text{nbr}_{\beta,\gamma}(r)\Big) dr.\end{aligned}$$ It suffices to get bounds for $\mathbb{E}(\sum_{\beta,\gamma} \text{nbr}_{\beta,\gamma}(r))$. Define $$\begin{aligned}
\psi_0(N)={\psi(N)}/{N} \text{ and } \ I(t)=1+\int_0^t (1+s)^{1-d/2} ds, \forall t\geq 0.\end{aligned}$$ It is easy to check by [\[9e9.04\]](#9e9.04){reference-type="eqref" reference="9e9.04"} that there exist constants $C>c>0$ so that $$\begin{aligned}
\label{a2.31}
c\psi_0 (N)\leq I(N)\leq C\psi_0(N), \quad \forall N\text{ large}.\end{aligned}$$ Recall we only consider particles $\beta,\gamma$ with $1\leq \beta_0, \gamma_0\leq M_N=NX_0^0(1)$.
**Lemma 14**. *There is some constant $C>0$ so that for any $0<r<t$, $$\begin{aligned}
\text{(i)} \quad &\mathbb{E}\Big(\sum_{\beta,\gamma: \gamma_0\neq \beta_0} \text{nbr}_{\beta,\gamma}(r)\Big)\leq C(NX_0^0(1))^2 \frac{1}{(1+(2N+2\theta)r)^{d/2-1}}.\\
\text{(ii)} \quad &\mathbb{E}\Big(\sum_{\beta,\gamma: \gamma_0= \beta_0} \text{nbr}_{\beta,\gamma}(r)\Big)\leq CNX_0^0(1) I((2N+2\theta)r).\end{aligned}$$*
Assuming the above lemma, we may finish the proof of Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"}.
Apply Lemma [Lemma 14](#9l4.1){reference-type="ref" reference="9l4.1"} in [\[9e2.31\]](#9e2.31){reference-type="eqref" reference="9e2.31"} to get $$\begin{aligned}
\mathbb{E}(J(t)) \leq & \frac{C}{\psi(N)} (NX_0^0(1))^2 \int_0^t \frac{1}{(1+(2N+2\theta)r)^{d/2-1}} dr\\
&+\frac{C}{\psi(N)} NX_0^0(1) \int_0^t I((2N+2\theta)r)dr\\
\leq & \frac{C}{\psi_0(N)} (X_0^0(1))^2 I((2N+2\theta)t) +\frac{C}{\psi_0(N)} X_0^0(1) I((2N+2\theta)t) \\
\leq & C(X_0^0(1))^2 +CX_0^0(1),\end{aligned}$$ where the last inequality is by [\[a2.31\]](#a2.31){reference-type="eqref" reference="a2.31"}. The proof is complete.
It remains to prove Lemma [Lemma 14](#9l4.1){reference-type="ref" reference="9l4.1"}. We first give some elementary estimates. Let $Y_0^N,Y_1^N, \cdots$ be i.i.d. random variables on $\mathbb{R}^d$ so that $Y_0^N=0$ or $Y_0^N$ is uniform on $N^{1/2} \mathcal{N}_N$, each with probability $1/2$. Set $V_n^N=Y_1^N+\cdots+Y_n^N$ for $n\geq 1$ so that $\{V_n^N\}$ is a random walk that with probability $1/2$ stays put, and with probability $1/2$ takes a jump uniformly on $N^{1/2} \mathcal{N}_N$. The following result is from Lemma 4.2 of [@DP99].
**Lemma 15**. *There is some constant $C>0$ independent of $N$ so that $$\begin{aligned}
\mathbb{P}(V_m^N\in x+[-1,1]^d)\leq C (1+m)^{-d/2}, \quad \forall m\geq 0, x\in \mathbb{R}^d.\end{aligned}$$*
We also need estimates for the probability concerning the Poisson and Gamma random variables. For any $\lambda\geq 0$ and $m\geq 0$, define $$\begin{aligned}
\label{e4.2}
\pi(\lambda, m)=e^{-\lambda} \frac{\lambda^m}{m!}, \quad \text{ and } \quad \Pi(\lambda, m)=\sum_{k=m}^\infty \pi(\lambda, k).\end{aligned}$$ Let $\Pi(\lambda, m)=1$ if $m<0$. Define $\pi(\lambda, m)=0$ if $\lambda< 0$. Let $\xi_1,\xi_2,\cdots$ be i.i.d. random variables with distribution $Exp(1)$. Define $\Gamma_m=\xi_1+\cdots+\xi_m$ for $m\geq 1$. Then $\Gamma_m$ has a $\Gamma(m,1)$ distribution. Set by convention $\Gamma_m=0$ if $m\leq 0$. Recall $T_\beta$ from [\[9ea6.95\]](#9ea6.95){reference-type="eqref" reference="9ea6.95"}. By the scaling property of the gamma distribution, we have $$\begin{aligned}
\label{9ea2.00}
\mathbb{P}(T_\beta\leq t)=\mathbb{P}(\Gamma_{\vert \beta\vert +1}\leq (2N+\theta)t)=\Pi((2N+\theta)t, \vert \beta\vert +1),
\end{aligned}$$ where the last equality comes from the fact that $\Gamma_m \leq r$ iff at least $m$ events occur before time $r$ in a Poisson process with rate $1$. Similarly, $$\begin{aligned}
\label{9ea6.84}
\mathbb{P}(T_{\pi \beta}\leq t<T_\beta)=\mathbb{P}(\Gamma_{\vert \beta\vert }\leq (2N+\theta)t<\Gamma_{\vert \beta\vert +1})=\pi((2N+\theta)t, \vert \beta\vert ).
\end{aligned}$$
**Lemma 16**. *Let $\lambda\geq 0$.\
(i) For any $p>0$, there is some constant $C=C(p)>0$ so that $$\begin{aligned}
\sum_{m=0}^\infty \pi(\lambda, m) (1+m)^{-p}\leq C (1+\lambda)^{-p}.\end{aligned}$$ (ii) There is some constant $A>0$ so that for any $p\geq 0$ and $0< r\leq t$, $$\begin{aligned}
\label{9ea6.78}
\mathbb{E}\Big(\sum_{m>ANr} (\frac{2N+2\theta}{2N+\theta})^m 1(\Gamma_{m}<(2N+\theta)r) m^p\Big)\leq C_p 2^{-Nr},\end{aligned}$$ for some constant $C_p>0$.\
(iii) There is some constant $C>0$ so that for any $0< r\leq t$, $$\begin{aligned}
\sum_{m=0}^\infty \pi((2N+\theta)r, m) I(m) \leq CI(N).\end{aligned}$$*
The proof of (i) is immediate from Lemma 4.3 of [@DP99]. The inequality in (ii) is a revised version of Lemma 10.3 of [@DP99], which also follows easily from standard large deviation estimates (see, e.g., Theorem 5.4 of [@MU05]). For the proof of (iii), we note that the sum over $m<ANr$ gives at most $I(ANr)\leq CI(N)$ while for $m>ANr$, we use $I(m)\leq m$ and $$\begin{aligned}
\pi((2N+\theta)r, m)\leq \Pi((2N+\theta)r, m)=\mathbb{P}(\Gamma_{m}<(2N+\theta)r)\end{aligned}$$ to see that the sum over $m>ANr$ is at most $C_1 2^{-Nr}$ by (ii). So the inequality holds.
Since we only consider particles $\beta,\gamma$ with $1\leq \beta_0, \gamma_0\leq NX_0^0(1)$, we may let $\beta_0=i$ and $\gamma_0=j$ for some $1\leq i\neq j\leq NX_0^0(1)$. Set $0< r<t$. Recall $\text{nbr}_{\beta,\gamma}(r)$ from [\[9e2.24\]](#9e2.24){reference-type="eqref" reference="9e2.24"} to get $$\begin{aligned}
\sum_{\beta: \beta_0=i}\sum_{\gamma: \gamma_0=j} \mathbb{E}(\text{nbr}_{\beta,\gamma}(r))\leq &\sum_{\beta: \beta_0=i}\sum_{\gamma: \gamma_0=j} \mathbb{P}(T_{\pi \beta}<r< T_\beta, T_{\pi \gamma}<r, B^\beta\neq \Delta, B^\gamma \neq \Delta)\\
&\times \mathbb{P}(N^{1/2}(x_j-x_i)+V_{\vert \beta\vert +\vert \gamma\vert }^N \in [-1,1]^d),\end{aligned}$$ where the last probability follows from [\[ea3.22\]](#ea3.22){reference-type="eqref" reference="ea3.22"} which implies that conditioning on $\{B^\beta, B^\gamma \neq \Delta\}$, $$\begin{aligned}
\label{eb3.22}
B^\beta-B^\gamma=x_{i}-x_j+\sum_{s=0}^{\vert \beta\vert -1} W^{\beta\vert s} 1(e_{\beta\vert s}=\beta_{s+1})-\sum_{t=0}^{\vert \gamma\vert -1} W^{\gamma\vert t} 1(e_{\gamma\vert t}=\gamma_{t+1}).\end{aligned}$$ We note $B^\beta\neq \Delta$ is equivalent to $\delta_{\beta\vert m}=1$ for all $0\leq m\leq \vert \beta\vert -1$. Similarly for $B^\gamma \neq \Delta$. These two events are both independent of $T_{\pi \beta}, T_\beta, T_{\pi \gamma}$. By considering $\vert \beta\vert =l$ and $\vert \gamma\vert =m$ for $m\geq 0$ and $l\geq 0$, we get $$\begin{aligned}
\label{9e2.22}
\sum_{\beta: \beta_0=i}\sum_{\gamma: \gamma_0=j}& \mathbb{E}(\text{nbr}_{\beta,\gamma}(r))
\leq \sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{N+\theta}{2N+\theta})^{l+m} \cdot 2^l \cdot 2^m \cdot \pi((2N+\theta) r, l)\nonumber\\
&\times \Pi((2N+\theta) r, m) \cdot \mathbb{P}(N^{1/2}(x_j-x_i)+V_{l+m}^N \in [-1,1]^d), \end{aligned}$$ where the first factor on the right-hand side gives the probability that $B^\beta\neq \Delta$ and $B^\gamma \neq \Delta$, i.e. no death along both family lines. The $2^l$ and $2^m$ count the number of $\beta, \gamma$. The fourth and fifth terms are the probability of $T_{\pi \beta}<r< T_\beta$ and $T_{\pi \gamma}<r$ (see [\[9ea2.00\]](#9ea2.00){reference-type="eqref" reference="9ea2.00"} and [\[9ea6.84\]](#9ea6.84){reference-type="eqref" reference="9ea6.84"}). Use Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the last term on the right-hand side to conclude $$\begin{aligned}
\label{9e2.21}
\sum_{\beta: \beta_0=i}\sum_{\gamma: \gamma_0=j} \mathbb{E}(\text{nbr}_{\beta,\gamma}(r))\leq &\sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \pi((2N+\theta) r, l)\nonumber\\
&\times \Pi((2N+\theta) r, m) \cdot \frac{C}{(1+l+m)^{d/2}}.\end{aligned}$$ The following lemma will be used frequently below.
**Lemma 17**. *There is some constant $C>0$ such that for any $0< r\leq t$ and $l\geq 0$, we have $$\begin{aligned}
\label{e4.21}
& \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta) r, m) \frac{1}{(1+l+m)^{d/2}}\leq C \frac{1}{(1+l)^{d/2-1}}.\end{aligned}$$*
Let $A>0$ be as in Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii) so that $$\begin{aligned}
&\sum_{m>ANr} (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta) r, m) \frac{1}{(1+l+m)^{d/2}}\\
&\leq \frac{1}{(1+l)^{d/2}} \sum_{m>ANr} (\frac{2N+2\theta}{2N+\theta})^{m} \mathbb{P}(\Gamma_{m}<(2N+\theta)r) \\
&\leq \frac{1}{(1+l)^{d/2}}\times C_0 2^{-Nr} \leq C\frac{1}{(1+l)^{d/2-1}},\end{aligned}$$ where in the first inequality we write $\Pi((2N+\theta)r, m)$ as $\mathbb{P}(\Gamma_{m}< (2N+\theta)r)$ by [\[9ea2.00\]](#9ea2.00){reference-type="eqref" reference="9ea2.00"}. The second inequality is by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii). Next, the sum for $m\leq ANr$ is at most $$\begin{aligned}
&\sum_{m=0}^{ANr} e^{A\theta r} \frac{1}{(1+l+m)^{d/2}}\leq C \frac{1}{(1+l)^{d/2-1}}.\end{aligned}$$ We conclude [\[e4.21\]](#e4.21){reference-type="eqref" reference="e4.21"} holds.
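Purely as a sanity check on the uniformity of [\[e4.21\]](#e4.21){reference-type="eqref" reference="e4.21"} in $N$, $r$ and $l$, one can evaluate the left-hand side numerically; the parameter values in the sketch below (a fixed $d$ and $\theta$, and a handful of $N$, $r$ and $l$) are arbitrary, and the check is not part of the proof.

```python
# Numerical probe of (e4.21): with c = (2N+2*theta)/(2N+theta) and lam = (2N+theta)*r,
#   sum_m c^m * Pi(lam, m) * (1 + l + m)^{-d/2}  <=  C * (1 + l)^{-(d/2 - 1)},
# where Pi(lam, m) = P(Pois(lam) >= m) = poisson.sf(m - 1, lam).
import numpy as np
from scipy.stats import poisson

def lhs(N, theta, r, l, d):
    c = (2 * N + 2 * theta) / (2 * N + theta)
    lam = (2 * N + theta) * r
    m = np.arange(0, int(5 * lam) + 200)        # the Poisson tail beyond this range is negligible
    tail = poisson.sf(m - 1, lam)               # Pi(lam, m); note sf(-1, lam) = 1
    return np.sum(c ** m * tail * (1.0 + l + m) ** (-d / 2))

d, theta = 5, 1.0
for N in [20, 100, 500]:
    for r in [0.1, 1.0]:
        for l in [0, 10, 100]:
            ratio = lhs(N, theta, r, l, d) * (1.0 + l) ** (d / 2 - 1)
            print(f"N={N:4d}  r={r:4.1f}  l={l:4d}   ratio = {ratio:6.2f}")
```

The printed ratios stay of order one across the chosen parameters, which is all the uniform constant $C$ asserts.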
The above lemma implies that [\[9e2.21\]](#9e2.21){reference-type="eqref" reference="9e2.21"} can be bounded above by $$\begin{aligned}
& \sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \pi((2N+\theta) r, l) \cdot C \frac{1}{(1+l)^{d/2-1}}\nonumber\\
&= Ce^{\theta r} \sum_{l=0}^\infty \pi((2N+2\theta) r, l) \frac{1}{(1+l)^{d/2-1}}\leq C \frac{1}{(1+(2N+2\theta) r)^{d/2-1}},\end{aligned}$$ where the last inequality is by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(i). The proof of Lemma [Lemma 14](#9l4.1){reference-type="ref" reference="9l4.1"}(i) is complete by noticing that there are at most $(NX_0^0(1))^2$ such pairs of $i$ and $j$.
Now we proceed to the case when $\beta_0=\gamma_0=i$ for some $1\leq i\leq NX_0^0(1)$. By translation invariance, we get $$\begin{aligned}
&\mathbb{E}\Big(\sum_{\beta,\gamma: \gamma_0= \beta_0} \text{nbr}_{\beta,\gamma}(r)\Big)= NX_0^0(1) \mathbb{E}\Big(\sum_{\beta,\gamma\geq 1} \text{nbr}_{\beta,\gamma}(r)\Big).\end{aligned}$$ In the above, we have replaced $\beta_0=\gamma_0=1$ by $\beta,\gamma\geq 1$, where $1\in \mathcal{I}$ is the first individual in generation $0$. It suffices to prove that for all $0<r<t$, $$\begin{aligned}
\label{9e9.51}
I_0:= \mathbb{E}\Big(\sum_{\beta,\gamma\geq 1} \text{nbr}_{\beta,\gamma}(r)\Big)\leq C I((2N+2\theta) r).\end{aligned}$$ Recall $\text{nbr}_{\beta,\gamma}(r)$ from [\[9e2.24\]](#9e2.24){reference-type="eqref" reference="9e2.24"}. By letting $\alpha=\beta\wedge \gamma$, we may write the sum of $\beta,\gamma$ as $$\begin{aligned}
\label{9e6.11}
I_0\leq \sum_{k=0}^\infty \sum_{\alpha\geq 1, \vert \alpha\vert =k} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =k+m}} \mathbb{E}\Big(\text{nbr}_{\beta,\gamma}(r) \Big).\end{aligned}$$ Let $\alpha, \beta, \gamma$ be as in the summation and then condition on $\mathcal{H}_{\alpha}$ to get $$\begin{aligned}
\label{9e2.62}
\mathbb{E}(\text{nbr}_{\beta,\gamma}(r)\vert \mathcal{H}_{\alpha})\leq & 1{\{T_{\alpha}<r, B^{\alpha}\neq \Delta\}}\cdot (\frac{N+\theta}{2N+\theta})^{l+m}\nonumber\\
&\times \mathbb{P}(T_{\pi \beta}-T_{\alpha}<r-T_{\alpha}\leq T_\beta-T_{\alpha}\vert \mathcal{H}_{\alpha})\nonumber\\
&\times \mathbb{P}(T_{\pi \gamma}-T_{\alpha}<r-T_{\alpha}\vert \mathcal{H}_{\alpha})\nonumber\\
&\times \mathbb{P}(B^\beta-B^\gamma \in \mathcal{N}_N \vert \mathcal{H}_{\alpha}),\end{aligned}$$ where the second term is the upper bound for the probability that no death events occur on the family line after $\beta,\gamma$ split. Apply [\[9ea2.00\]](#9ea2.00){reference-type="eqref" reference="9ea2.00"} and [\[9ea6.84\]](#9ea6.84){reference-type="eqref" reference="9ea6.84"} in [\[9e2.62\]](#9e2.62){reference-type="eqref" reference="9e2.62"} to see $$\begin{aligned}
\label{e2.63}
\mathbb{E}(\text{nbr}_{\beta,\gamma}(r)\vert \mathcal{H}_{\alpha})\leq & 1{\{T_{\alpha}<r, B^{\alpha}\neq \Delta\}} \cdot (\frac{N+\theta}{2N+\theta})^{l+m} \pi((2N+\theta) (r-T_\alpha), l-1) \nonumber\\
&\times \Pi((2N+\theta) (r-T_\alpha), m-1) \cdot \frac{C}{(1+l+m)^{d/2}},\end{aligned}$$ where we have also used Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the last probability in [\[9e2.62\]](#9e2.62){reference-type="eqref" reference="9e2.62"}. It follows that $$\begin{aligned}
\label{9e2.640}
I_0\leq C\mathbb{E}\Big(&\sum_{k=0}^\infty \sum_{\alpha\geq 1, \vert \alpha\vert =k} 1{\{T_{\alpha}<r, B^{\alpha}\neq \Delta\}}\nonumber\\
&\sum_{l=0}^\infty \sum_{m=0}^\infty 2^l \cdot 2^{m} \cdot (\frac{N+\theta}{2N+\theta})^{l+m} \pi((2N+\theta) (r-T_\alpha), l-1)\nonumber\\
&\times \Pi((2N+\theta) (r-T_\alpha), m-1) \cdot \frac{1}{(1+l+m)^{d/2}}\Big).\end{aligned}$$ By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $m$ is at most $$\begin{aligned}
\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta) (r-T_\alpha), m-1) \cdot \frac{1}{(1+l+m)^{d/2}}\leq C\frac{1}{(1+l)^{d/2-1}}.\end{aligned}$$ Then the sum of $l$ gives $$\begin{aligned}
\label{9e9.58}
&\sum_{l=0}^\infty(\frac{2N+2\theta}{2N+\theta})^{l+1} \pi((2N+\theta) (r-T_\alpha), l-1) \frac{C}{(1+l)^{d/2-1}}\\
&\leq Ce^{\theta r} \sum_{l=0}^\infty \pi((2N+2\theta) (r-T_\alpha), l) \frac{1}{(1+l)^{d/2-1}}\leq \frac{C }{(1+(2N+2\theta) (r-T_\alpha))^{d/2-1}},\nonumber\end{aligned}$$ where the last inequality is by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (i). Now conclude from the above to see $$\begin{aligned}
\label{e6.37}
I_0\leq C\mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{T_{\alpha}<r, B^{\alpha}\neq \Delta\}} \frac{1}{(1+(2N+2\theta) (r-T_\alpha))^{d/2-1}}\Big).\end{aligned}$$
**Lemma 18**. *For any Borel function $f: \mathbb{R}^+\to \mathbb{R}^+$ and $r\geq u\geq 0$, we have $$\begin{aligned}
\mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{u<T_{\alpha}<r, B^{\alpha}\neq \Delta\}} f((2N+2\theta) (r-T_\alpha))\Big)\leq e^{\theta r} \int_0^{(2N+2\theta)(r-u)} f(y) dy.
\end{aligned}$$*
Apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to see that the expectation above is equal to $$\begin{aligned}
\label{9e2.71}
& (2N+\theta) \int_0^r \mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{T_{\pi\alpha}<s\leq T_{\alpha}, B^{\alpha}\neq \Delta\}}\Big) 1(u<s) f((2N+2\theta) (r-s))ds \nonumber\\
&= (2N+\theta) \int_u^r e^{\theta s} f((2N+2\theta) (r-s))ds \leq e^{\theta r} \int_0^{(2N+2\theta)(r-u)} f(y) dy,\end{aligned}$$ where the equality is by Lemma [Lemma 6](#9l2.0){reference-type="ref" reference="9l2.0"} with $\phi\equiv 1$.
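The change of variables $y=(2N+2\theta)(r-s)$ behind the last inequality is elementary; for a quick numerical confirmation (not needed for the proof), the sketch below checks it for the sample choice $f(y)=(1+y)^{-3/2}$ and arbitrary values of $N$, $\theta$, $r$ and $u$; any nonnegative $f$ behaves the same way.

```python
# Numerical check of the inequality at the end of the proof of Lemma 18:
#   (2N+theta) * int_u^r e^{theta*s} f((2N+2*theta)(r-s)) ds
#       <= e^{theta*r} * int_0^{(2N+2*theta)(r-u)} f(y) dy,
# here with the sample choice f(y) = (1+y)^{-3/2}.
import numpy as np
from scipy.integrate import quad

N, theta, r, u = 50, 1.0, 1.0, 0.2
f = lambda y: (1.0 + y) ** -1.5

lhs, _ = quad(lambda s: (2 * N + theta) * np.exp(theta * s)
              * f((2 * N + 2 * theta) * (r - s)), u, r)
rhs = np.exp(theta * r) * quad(f, 0.0, (2 * N + 2 * theta) * (r - u))[0]
print(lhs, "<=", rhs, lhs <= rhs)
```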
Use the above lemma with $f(y)=1/(1+y)^{d/2-1}$ and $u=0$ to see that [\[e6.37\]](#e6.37){reference-type="eqref" reference="e6.37"} becomes $$\begin{aligned}
I_0\leq C \int_0^{(2N+2\theta)r} \frac{1}{(1+y)^{d/2-1}} dy\leq C I((2N+2\theta)r),\end{aligned}$$ as required.
# Refined estimates for the collision term {#9s4}
This section is devoted to refining the bounds given in Lemma [Lemma 14](#9l4.1){reference-type="ref" reference="9l4.1"} and preparing for the convergence of the collision term in the following section. To start with, by recalling [\[9e2.23\]](#9e2.23){reference-type="eqref" reference="9e2.23"}, we define the collision term coming from pairs of particles with different ancestors by $$\begin{aligned}
\label{9e2.25}
J_0(t)=& \frac{1}{N\psi(N)} \sum_{\beta,\gamma: \beta_0\neq \gamma_0} 1(T_\beta\leq t,T_{\pi \gamma}<T_\beta ) 1(B^\beta-B^\gamma \in \mathcal{N}_N).\end{aligned}$$ By assuming the initial particles are sufficiently spread out, we show that $\mathbb{E}(J_0(t)) \to 0$ as $N\to \infty$.
**Lemma 19**. *If the initial condition $X_0^{0,N}$ converges to some atomless $X_0$ in $M_F(\mathbb{R}^d)$, then for any $t\geq 0$, we have $\mathbb{E}(J_0(t)) \to 0$ as $N\to \infty$.*
By letting $\beta_0=i$ and $\gamma_0=j$ for some $1\leq i\neq j\leq NX_0^0(1)$, we have $$\begin{aligned}
\label{9ea7.45}
\mathbb{E}(J_0(t))\leq& \frac{1}{N\psi(N)} \sum_{i\neq j} \sum_{\beta: \beta_0=i}\sum_{\gamma: \gamma_0=j} \mathbb{E}\Big(1(T_{\pi\beta}< t)1(T_{\pi \gamma}<t ) 1(B^\beta-B^\gamma \in \mathcal{N}_N)\Big),\end{aligned}$$ where we have bounded $1(T_\beta\leq t,T_{\pi \gamma}<T_\beta)$ by $1(T_{\pi\beta}< t, T_{\pi \gamma}<t)$ to get symmetry between $\beta$ and $\gamma$. Similar to [\[9e2.22\]](#9e2.22){reference-type="eqref" reference="9e2.22"}, we may bound the above sum by $$\begin{aligned}
\label{9a1.3}
\mathbb{E}(J_0(t))\leq&\frac{1}{N\psi(N)} \sum_{i\neq j} \sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \cdot \Pi((2N+\theta) t, l)\\
&\times \Pi((2N+\theta) t, m) \cdot \mathbb{P}(N^{1/2}(x_j-x_i)+V_{l+m}^N \in [-1,1]^d):=\frac{1}{N\psi(N)} \sum_{i\neq j} I_0(i,j).\nonumber\end{aligned}$$ It remains to show $$\begin{aligned}
\frac{1}{N\psi(N)} \sum_{i\neq j} I_0(i,j)\to 0 \text{ as } N\to\infty.\end{aligned}$$ For any $i\neq j$ we define $$\begin{aligned}
\label{9a1.4}
I_1(i,j):=& \sum_{l=0}^{ANt} \sum_{m=0}^{ANt} (\frac{2N+2\theta}{2N+\theta})^{l+m} \cdot \Pi((2N+\theta) t, l)\nonumber\\
&\times \Pi((2N+\theta) t, m) \cdot \mathbb{P}(N^{1/2}(x_j-x_i)+V_{l+m}^N \in [-1,1]^d),\end{aligned}$$ where $A>0$ is as in Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(ii). By the symmetry between $m$ and $l$, we obtain $$\begin{aligned}
I_0(i,j)-I_1(i,j)\leq &2 \sum_{l>ANt} \sum_{m=0}^{\infty} (\frac{2N+2\theta}{2N+\theta})^{l+m} \cdot \Pi((2N+\theta) t, l)\nonumber\\
&\times \Pi((2N+\theta) t, m) \cdot \frac{C}{(1+l+m)^{d/2}},\end{aligned}$$ where we have applied Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the last term in [\[9a1.4\]](#9a1.4){reference-type="eqref" reference="9a1.4"}. By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $m$ gives at most $Ce^{\theta t}/(1+l)^{d/2-1}\leq C$. We are left with $$\begin{aligned}
\label{9a1.5}
I_0(i,j)-I_1(i,j)\leq & C \sum_{l>ANt} (\frac{2N+2\theta}{2N+\theta})^{l} \cdot \mathbb{P}(\Gamma_l<(2N+\theta)t) \leq C\cdot C_0 2^{-Nt},\end{aligned}$$ where the last inequality is by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(ii). Hence $$\begin{aligned}
\frac{1}{N\psi(N)} \sum_{i\neq j} [I_0(i,j)-I_1(i,j)]\leq \frac{1}{N\psi(N)} [NX_0^0(1)]^2 C\cdot C_0 2^{-Nt}\to 0.\end{aligned}$$ It remains to show $$\begin{aligned}
\frac{1}{N\psi(N)} \sum_{i\neq j} I_1(i,j)\to 0 \text{ as } N\to\infty.\end{aligned}$$ To get bounds for $I_1(i,j)$, we simply bound $\Pi((2N+\theta) t, l)$ and $\Pi((2N+\theta) t, m)$ by $1$ to obtain $$\begin{aligned}
I_1(i,j)\leq & e^{2A\theta t} \sum_{l=0}^{ANt} \sum_{m=0}^{ANt} \mathbb{P}(N^{1/2}(x_j-x_i)+V_{l+m}^N \in [-1,1]^d).\end{aligned}$$ Rewrite the above sum by letting $n=l+m$ to get $$\begin{aligned}
\label{9e2.42}
I_1(i,j)&\leq e^{2A\theta t} \sum_{n=0}^{2ANt} \sum_{m=0}^{n} \mathbb{P}(N^{1/2}(x_j-x_i)+V_{n}^N \in [-1,1]^d)\nonumber\\
&\leq e^{2A\theta t} \sum_{n=0}^{2ANt} (n+1) \mathbb{P}(N^{1/2}(x_j-x_i)+V_{n}^N \in [-1,1]^d).\end{aligned}$$ Notice that if $\vert x_j-x_i\vert >\varepsilon$, we need $n\geq \varepsilon N^{1/2}-1$ in order for $N^{1/2}(x_j-x_i)+V_{n}^N$ to reach $[-1,1]^d$. Use this observation to see that [\[9e2.42\]](#9e2.42){reference-type="eqref" reference="9e2.42"} becomes $$\begin{aligned}
\label{9e2.43}
I_1(i,j)&\leq e^{2A\theta t} \sum_{n=\varepsilon N^{1/2}-1}^{2ANt} (n+1) \frac{C}{(n+1)^{d/2}}\nonumber\\
& + e^{2A\theta t} 1(\vert x_j-x_i\vert \leq \varepsilon) \sum_{n=0}^{2ANt} (n+1) \frac{C}{(n+1)^{d/2}},\end{aligned}$$ where we have applied Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the last term in [\[9e2.42\]](#9e2.42){reference-type="eqref" reference="9e2.42"}. Sum over $i\neq j$ to get $$\begin{aligned}
\frac{1}{N\psi(N)}&\sum_{i\neq j} I_1(i,j)\leq \frac{e^{2A\theta t}}{N\psi(N)} (NX_0^{0,N}(1))^2 \sum_{n=\varepsilon N^{1/2}-1}^{2ANt} (n+1) \frac{C}{(n+1)^{d/2}}\nonumber\\
&+ \frac{e^{2A\theta t}}{N\psi(N)} N^2 (X_0^{0,N}\times X_0^{0,N}) (\{(x,y):\vert x-y\vert \leq \varepsilon\}) \sum_{n=0}^{2ANt} (n+1) \frac{C}{(n+1)^{d/2}}.\nonumber\end{aligned}$$ When $d\geq 5$, by recalling $\psi(N)\sim CN$ from [\[9e9.04\]](#9e9.04){reference-type="eqref" reference="9e9.04"}, one can check that the first term above converges to $0$ as $N\to \infty$. For the second term, we may use the fact that $X_0^{0, N}\to X_0$ in $M_F(\mathbb{R}^d)$ to see its limsup as $N\to \infty$ can be bounded by $$\begin{aligned}
C\cdot (X_0\times X_0) (\{(x,y):\vert x-y\vert \leq \varepsilon\}),\end{aligned}$$ which can be arbitrarily small by picking $\varepsilon>0$ small since we assume $X_0$ has no atoms. The proof for the case $d\geq 5$ is now complete.
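For the reader's convenience, here is a rough numerical sketch of the claim that the first term vanishes when $d\geq 5$; it is not part of the proof, and the quantities $X_0^{0,N}(1)$, $A$, $t$, $\varepsilon$ and the constant in $\psi(N)\sim CN$ are all set to $1$ purely for illustration, with $d=5$.

```python
# Sketch of "one can check that the first term converges to 0" in the d >= 5 case:
#   (1/(N*psi(N))) * (N*X)^2 * sum_{n = eps*sqrt(N)-1}^{2*A*N*t} (n+1)^{1 - d/2},
# with psi(N) ~ C*N.  All constants are placeholders: d = 5, X = A = t = eps = C = 1.
import numpy as np

d, X, A, t, eps = 5, 1.0, 1.0, 1.0, 1.0
for N in [10**3, 10**4, 10**5, 10**6]:
    n = np.arange(max(int(eps * np.sqrt(N)) - 1, 0), int(2 * A * N * t) + 1)
    first_term = (N * X) ** 2 / (N * N) * np.sum((n + 1.0) ** (1 - d / 2))
    print(f"N = {N:8d}   first term ~ {first_term:.3e}")   # decays like N^{-1/4} when d = 5
```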
Turning to $d=4$, we need a bit more care with the probability concerning $V_n^N$. The following result is from Lemma 5.2 of [@DP99].
**Lemma 20**. *There are constants $0<\delta_0, c,C<\infty$ so that if $0<z/n<\delta_0$, then $$\begin{aligned}
\mathbb{P}(\vert V_n^N\vert \geq z)\leq C\exp(-cz^2/n).\end{aligned}$$*
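Lemma 20 is a sub-Gaussian tail bound for $V_n^N$. Purely as an illustration of its shape, and under the simplifying stand-in assumption of a sum of i.i.d. mean-zero steps bounded by $1$ (the step law below is chosen only for the sketch and is not the one from the model), a quick Monte Carlo experiment exhibits the $\exp(-cz^2/n)$ decay.

```python
# Monte Carlo illustration (a stand-in, not the model's V_n^N) of a sub-Gaussian tail:
#   P(|S_n| >= z)  decays at least as fast as  exp(-c z^2 / n)
# for a sum S_n of i.i.d. mean-zero steps bounded by 1.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 400, 100_000
S = np.zeros(reps)
for _ in range(n):                         # accumulate the walk one step at a time
    S += rng.uniform(-1.0, 1.0, size=reps)

for z in [10, 20, 30, 40]:
    empirical = np.mean(np.abs(S) >= z)
    print(f"z = {z:3d}   empirical tail = {empirical:.2e}   exp(-z^2/n) = {np.exp(-z**2 / n):.2e}")
```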
Similar to [\[9e2.43\]](#9e2.43){reference-type="eqref" reference="9e2.43"}, we apply Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound [\[9e2.42\]](#9e2.42){reference-type="eqref" reference="9e2.42"} by $$\begin{aligned}
\label{9e2.44}
I_1(i,j)&\leq e^{2A\theta t} 1(\vert x_i-x_j\vert \leq \varepsilon) \sum_{n=0}^{2ANt} (n+1) \frac{C}{(n+1)^{2}}\nonumber\\
&+ e^{2A\theta t} 1(\vert x_i-x_j\vert >\varepsilon) \sum_{n=\varepsilon N^{1/2}-1}^{N/(\log N)^3} (n+1) \cdot \mathbb{P}(N^{1/2}(x_j-x_i)+V_{n}^N \in [-1,1]^4)\nonumber\\
&+ e^{2A\theta t} \sum_{n=N/(\log N)^3}^{2ANt} (n+1) \frac{C}{(n+1)^{2}}.\end{aligned}$$ Sum over $i\neq j$ to see $$\begin{aligned}
\frac{1}{N\psi(N)} \sum_{i\neq j} I_1(i,j)&\leq \frac{e^{2A\theta t}}{N\psi(N)} N^2 (X_0^{0}\times X_0^{0}) (\{(x,y):\vert x-y\vert \leq \varepsilon\}) \sum_{n=0}^{2ANt} (n+1) \frac{C}{(n+1)^{2}}\nonumber\\
&+ \frac{e^{2A\theta t}}{N\psi(N)} \sum_{\substack{i\neq j\\ \vert x_i-x_j\vert >\varepsilon}} \sum_{n=\varepsilon N^{1/2}-1}^{N/(\log N)^3} (n+1) \cdot \mathbb{P}(N^{1/2}(x_j-x_i)+V_{n}^N \in [-1,1]^4)\nonumber\\
&+ \frac{e^{2A\theta t}}{N\psi(N)}(NX_0^{0}(1))^2 \sum_{n=N/(\log N)^3}^{2ANt} (n+1) \frac{C}{(n+1)^{2}}:=S_1+S_2+S_3.\end{aligned}$$ For $S_1$, again we may use $X_0^{0,N}\to X_0$ in $M_F(\mathbb{R}^d)$ to see $$\begin{aligned}
\label{e1.11}
\limsup_{N\to \infty} S_1\leq C\cdot (X_0\times X_0) (\{(x,y):\vert x-y\vert \leq \varepsilon\}),\end{aligned}$$ which can be arbitrarily small by picking $\varepsilon>0$ small, since $X_0$ has no atoms.
For $S_3$, we recall that $\psi(N)\sim CN\log N$ in $d=4$ and then bound the sum of $1/(n+1)$ by the integral of $x^{-1}$ to get as $N\to \infty$, $$\begin{aligned}
\label{e1.12}
S_3\leq \frac{C}{\log N} (X_0^{0}(1))^2 \int_{N/(\log N)^3}^{3ANt} \frac{1}{x} dx\leq \frac{C(X_0^{0}(1))^2}{\log N} C\log \log N \to 0.\end{aligned}$$
It remains to show that $S_2$ converges to $0$. Let $z=n^{1/2}\log N$ and use $n\geq \varepsilon N^{1/2}-1$ to see $$\begin{aligned}
z=n^{1/2}\log N=n\log N/n^{1/2}\leq n\log N/(\varepsilon N^{1/2}-1)^{1/2}< \delta_0 n \quad \text{for $N$ large enough}.\end{aligned}$$ Apply Lemma [Lemma 20](#9l5.02){reference-type="ref" reference="9l5.02"} with the above $z$, which satisfies $z/n<\delta_0$, to get $$\begin{aligned}
\label{9e2.45}
\mathbb{P}(\vert V_n^N\vert \geq n^{1/2}\log N)\leq C\exp(-c(\log N)^2)=CN^{-c\log N}.\end{aligned}$$ Use $n\leq N/(\log N)^3$ to see that, for $N$ large enough, $$\begin{aligned}
z=n^{1/2}\log N \leq N^{1/2}/(\log N)^{1/2}\leq \frac{\varepsilon}{2} N^{1/2}.\end{aligned}$$ Hence for $\vert x_i- x_j\vert >\varepsilon$, we have $$\begin{aligned}
\mathbb{P}(N^{1/2}(x_j-x_i)+V_{n}^N \in [-1,1]^4)\leq \mathbb{P}(\vert V_n^N\vert \geq n^{1/2}\log N)\leq CN^{-c\log N},
\end{aligned}$$ where the last inequality is by [\[9e2.45\]](#9e2.45){reference-type="eqref" reference="9e2.45"}. Finally, we use the above to see $S_2$ is at most $$\begin{aligned}
S_2\leq \frac{e^{2A\theta t}}{N\psi(N)} (NX_0^{0,N}(1))^2 N^2 \cdot CN^{-c\log N} \to 0 \text{ as } N\to \infty.
\end{aligned}$$ Together with [\[e1.11\]](#e1.11){reference-type="eqref" reference="e1.11"} and [\[e1.12\]](#e1.12){reference-type="eqref" reference="e1.12"}, the proof for $d=4$ is complete.
The next step is to deal with collisions from the same ancestor, i.e. $\beta_0=\gamma_0$. We will calculate the contribution of collisions from distant relatives. To make it precise, following [\[9e2.25\]](#9e2.25){reference-type="eqref" reference="9e2.25"} we define $$\begin{aligned}
\label{9e2.26}
J(t,\tau)= \frac{1}{N\psi(N)} \sum_{\beta,\gamma: \beta_0= \gamma_0} &1(T_\beta\leq t,T_{\pi \gamma}<T_\beta ) \nonumber\\
&\cdot 1(T_{\beta \wedge \gamma}\leq T_\beta-\tau) \cdot 1(B^\beta-B^\gamma \in \mathcal{N}_N).\end{aligned}$$ It is clear that $J(t,\tau)=0$ if $\tau \geq t$ since $T_{\beta \wedge \gamma}\geq T_{\beta_0}>0$. So we only need to consider $\tau <t$.
**Lemma 21**. *For any $t>0$, there is some constant $C>0$ so that for any $\tau< t$, $$\begin{aligned}
\mathbb{E}(J(t,\tau))\leq CX_0^{0}(1) \frac{1}{\psi_0(N)} \int_{(2N+\theta)\tau}^{(2N+\theta)t} \frac{1}{(1+y)^{d/2-1}}dy.\end{aligned}$$*
Recall $\text{nbr}_{\beta,\gamma}(r)$ from [\[9e2.24\]](#9e2.24){reference-type="eqref" reference="9e2.24"}. Similar to [\[9e2.31\]](#9e2.31){reference-type="eqref" reference="9e2.31"}, we may apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to see $$\begin{aligned}
\mathbb{E}(J(t,\tau))&=\frac{1}{N\psi(N)} \sum_{\beta,\gamma: \beta_0= \gamma_0} (2N+\theta)\mathbb{E}\Big(\int_0^t \text{nbr}_{\beta,\gamma}(r) \cdot 1{(T_{\beta \wedge \gamma}\leq r-\tau)}dr \Big)\nonumber\\
&\leq \frac{C}{\psi(N)} \sum_{\beta,\gamma: \beta_0= \gamma_0} \int_\tau^t \mathbb{E}\Big(\text{nbr}_{\beta,\gamma}(r) \cdot 1{(T_{\beta \wedge \gamma}\leq r-\tau)} \Big)dr,\end{aligned}$$ where the last inequality follows since the integrand is zero when $r\leq \tau$. Let $\beta_0=\gamma_0=i$ for some $1\leq i\leq NX_0^{0,N}(1)$. Use translation invariance to get $$\begin{aligned}
\label{9e2.54}
\mathbb{E}(J(t,\tau))\leq &\frac{C}{\psi(N)} NX_0^{0}(1) \int_\tau^t \mathbb{E}\Big(\sum_{\beta, \gamma\geq1}\text{nbr}_{\beta,\gamma}(r) \cdot 1{(T_{\beta \wedge \gamma}\leq r-\tau)} \Big)dr.\end{aligned}$$ The calculation for the expectation above is similar to that in [\[9e6.11\]](#9e6.11){reference-type="eqref" reference="9e6.11"}. One may continue all the arguments in the same way as that from [\[9e2.62\]](#9e2.62){reference-type="eqref" reference="9e2.62"} to [\[e6.37\]](#e6.37){reference-type="eqref" reference="e6.37"} except that $T_{\alpha}<r$ in [\[e6.37\]](#e6.37){reference-type="eqref" reference="e6.37"} is replaced by $T_{\alpha}<r-\tau$. So we may simply jump to [\[e6.37\]](#e6.37){reference-type="eqref" reference="e6.37"} to conclude $$\begin{aligned}
&\mathbb{E}\Big(\sum_{\beta, \gamma\geq 1} \text{nbr}_{\beta,\gamma}(r) \cdot 1{(T_{\beta \wedge \gamma}\leq r-\tau)} \Big) \\
&\leq C\mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{T_{\alpha}<r-\tau, B^{\alpha}\neq \Delta\}} \frac{1}{(1+(2N+2\theta) (r-T_\alpha))^{d/2-1}}\Big).\end{aligned}$$ Apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to see the right-hand side above is equal to $$\begin{aligned}
\label{9e2.75}
& C (2N+\theta) \int_0^{r-\tau} \mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{T_{\pi\alpha}<s\leq T_{\alpha}, B^{\alpha}\neq \Delta\}}\Big) \frac{1}{(1+(2N+2\theta) (r-s))^{d/2-1}}ds \nonumber\\
&=C (2N+\theta) \int_0^{r-\tau} e^{\theta s} \frac{1}{(1+(2N+2\theta) (r-s))^{d/2-1}}ds \leq C \int_{(2N+2\theta)\tau}^{(2N+2\theta)r} \frac{1}{(1+y)^{d/2-1}}dy.\end{aligned}$$ Finally we apply [\[9e2.75\]](#9e2.75){reference-type="eqref" reference="9e2.75"} in [\[9e2.54\]](#9e2.54){reference-type="eqref" reference="9e2.54"} to get $$\begin{aligned}
\mathbb{E}(J(t,\tau)) &\leq \frac{C}{\psi(N)} NX_0^{0}(1) \int_\tau^t \int_{(2N+\theta)\tau}^{(2N+\theta)r} \frac{1}{(1+y)^{d/2-1}}dy dr\nonumber\\
&\leq \frac{C}{\psi_0(N)} X_0^{0}(1) \int_{(2N+\theta)\tau}^{(2N+\theta)t} \frac{1}{(1+y)^{d/2-1}} dy,\end{aligned}$$ as required.
# Convergence of the collision term {#9s5}
We are ready to give the convergence of the collision term in this section and prove Lemma [Lemma 12](#9l2.6){reference-type="ref" reference="9l2.6"}. Recall from [\[9e2.1\]](#9e2.1){reference-type="eqref" reference="9e2.1"} that $$\begin{aligned}
\label{9e3.03}
K_{t}^n(\phi)=& \frac{N+\theta}{N} \frac{1}{2N+\theta}\sum_{\beta} a_\beta^n(t) \phi(B^\beta+W^\beta) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-}).\end{aligned}$$ The proof proceeds by approximating $K_t^n(\phi)$ in several steps until we reach $b_d \int_0^t X_r^1(\phi) dr$. The first one will be obtained by adjusting the constant, replacing $\phi(B^\beta+W^\beta)$ by $\phi(B^\beta)$ and replacing $\overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-}$ by ${\mathcal{R}}^{X^{n-1}}_{T_\beta^-}$ in [\[9e3.03\]](#9e3.03){reference-type="eqref" reference="9e3.03"}, that is, we define $$\begin{aligned}
\label{9e3.2}
K_{t}^{n,0}(\phi):=&\frac{1}{2N+\theta} \sum_{\beta} a_\beta^n(t) \phi(B^\beta) 1(B^\beta+W^\beta\in \mathcal{R}^{X^{n-1}}_{T_\beta^-}).\end{aligned}$$
**Lemma 22**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n}(\phi)-K_s^{n,0}(\phi)\Big\vert \Big)=0.\end{aligned}$$*
The second step is to replace the indicator $1(B^\beta+W^\beta\in \mathcal{R}^{X^{n-1}}_{T_\beta^-})$ by its conditional expectation. For any $m\geq 0$, recall from [\[9e3.1\]](#9e3.1){reference-type="eqref" reference="9e3.1"} that $$\begin{aligned}
\label{9e3.4}
\mathcal{R}^{X^{m}}_{T_\beta^-}=\{B^\gamma: T_{\pi \gamma}\leq T_\beta^-,\quad \zeta_\gamma^{m}>T_{\pi \gamma}\}=\{B^\gamma: T_{\pi \gamma}< T_\beta,\quad \zeta_\gamma^{m}>T_{\pi \gamma}\}.\end{aligned}$$ Define $$\begin{aligned}
\label{9e3.33}
\nu_m(\beta)=\Big\vert \{B^\gamma: T_{\pi \gamma}<T_\beta, B^\gamma-B^\beta \in \mathcal{N}_N, \zeta_{\gamma}^m>T_{\pi \gamma}\}\Big\vert \end{aligned}$$ to be the number of neighbors of $B^\beta$ that have been visited by $X^m$ up to time $T_\beta^-$. Then by conditioning on $\mathcal F_{T_\beta^-}$, we get $$\begin{aligned}
\label{9e3.31}
&\mathbb{E}\Big(1(B^\beta+W^\beta\in \mathcal{R}^{X^{m}}_{T_\beta^-})\Big\vert \mathcal F_{T_\beta^-}\Big)=\frac{\nu_m(\beta)}{\psi(N)},\end{aligned}$$ where we have used that $W^\beta$ is uniform on $\mathcal{N}_N$ and independent of $\mathcal F_{T_\beta^-}$. Given the above, we set $$\begin{aligned}
\label{9e3.3}
K_{t}^{n,1}(\phi):=&\frac{1}{2N+\theta} \sum_{\beta} a_\beta^n(t) \phi(B^\beta) \frac{\nu_{n-1}(\beta)}{\psi(N)}.\end{aligned}$$
**Lemma 23**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n,0}(\phi)-K_s^{n,1}(\phi)\Big\vert \Big)=0.\end{aligned}$$*
The number $\nu_{n-1}(\beta)$ in [\[9e3.3\]](#9e3.3){reference-type="eqref" reference="9e3.3"} counts the occupied sites in the neighborhood of $B^\beta$, and each such site might have been visited by more than one particle. However, since birth events onto the same site are rare, one would expect the difference between this site count and the corresponding particle count introduced below to be small. Define $$\begin{aligned}
\text{nbr}_{\beta,\gamma}^m=1(T_{\pi \gamma}<T_\beta, \zeta_\gamma^m>T_{\pi \gamma}, B^\beta-B^\gamma \in \mathcal{N}_N).\end{aligned}$$ Our next step is to replace $\nu_{m}(\beta)$ by $\sum_{\gamma}\text{nbr}_{\beta,\gamma}^{m}$, as the latter is easier to compute. Moreover, the event in $\text{nbr}_{\beta,\gamma}^m$ represents a collision between two particles, while our work in Section [4](#9s4){reference-type="ref" reference="9s4"} already gives bounds on the contributions of such collisions from distant relatives. Lemma [Lemma 21](#9l4.3a){reference-type="ref" reference="9l4.3a"} motivates us to define the cutoff time $\tau_N>0$ such that $$\begin{aligned}
\label{9e3.34}
\begin{cases}
N\tau_N \to \infty, \tau_N \to 0, &\text{ in } d\geq 5;\\
\tau_N=1/\log N, &\text{ in } d=4.
\end{cases} \end{aligned}$$ Throughout the rest of the section, we always assume $\tau_N<t$. By applying Lemma [Lemma 21](#9l4.3a){reference-type="ref" reference="9l4.3a"} with $\tau=\tau_N$ as in [\[9e3.34\]](#9e3.34){reference-type="eqref" reference="9e3.34"}, we get $$\begin{aligned}
\label{9ec3.5}
\mathbb{E}(J(t,\tau_N))\leq CX_0^{0,N}(1) \frac{1}{\psi_0(N)} \int_{(2N+\theta)\tau_N}^{(2N+\theta)t} \frac{1}{(1+y)^{d/2-1}}dy \to 0 \text{ as } N\to\infty.\end{aligned}$$ So we define our third approximation of $K_t^n(\phi)$ by $$\begin{aligned}
\label{9e3.5}
K_{t}^{n,2}(\phi)=&\frac{1}{2N+\theta} \sum_{\beta} a_\beta^n(t) \phi(B^\beta) \frac{1}{\psi(N)}\sum_{\gamma}\text{nbr}_{\beta,\gamma}^{n-1} 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N)\nonumber\\
=&\frac{1}{2N+\theta} \frac{1}{\psi(N)} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n>T_{\pi\beta}) \phi(B^\beta) \nonumber\\
&\quad \sum_{\gamma}1(T_{\pi \gamma}<T_\beta, \zeta_\gamma^{n-1}>T_{\pi\gamma}, B^\beta-B^\gamma \in \mathcal{N}_N) 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N).\end{aligned}$$
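Returning briefly to [\[9ec3.5\]](#9ec3.5){reference-type="eqref" reference="9ec3.5"}, here is a small numerical sketch (not used anywhere in the proofs) of why the choice of $\tau_N$ in [\[9e3.34\]](#9e3.34){reference-type="eqref" reference="9e3.34"} forces the bound to vanish. For the sketch we take $\psi_0(N)$ of order one for $d\geq 5$ and of order $\log N$ for $d=4$ (consistent with $\psi(N)\sim CN$ and $\psi(N)\sim CN\log N$ recalled earlier), set all constants to placeholder values, and use $\tau_N=N^{-1/2}$ as one admissible choice for $d\geq 5$.

```python
# Numerical sketch: with tau_N as in (9e3.34), the right-hand side of (9ec3.5),
#   (1/psi_0(N)) * int_{(2N+theta)*tau_N}^{(2N+theta)*t} (1+y)^{-(d/2-1)} dy,
# tends to 0.  Closed forms of the integrals are used; all constants are placeholders.
import numpy as np

theta, t = 1.0, 1.0
for N in [1e2, 1e4, 1e6, 1e8]:
    # d >= 5 (take d = 5), with tau_N = N^{-1/2} as one admissible choice; psi_0(N) ~ 1
    a5, b5 = (2 * N + theta) * N ** -0.5, (2 * N + theta) * t
    bound5 = 2.0 * ((1 + a5) ** -0.5 - (1 + b5) ** -0.5)
    # d = 4, with tau_N = 1/log(N); psi_0(N) ~ log(N)
    a4, b4 = (2 * N + theta) / np.log(N), (2 * N + theta) * t
    bound4 = np.log((1 + b4) / (1 + a4)) / np.log(N)
    print(f"N = {N:.0e}   d>=5 bound ~ {bound5:.3e}   d=4 bound ~ {bound4:.3e}")
```

Note that in $d=4$ the decay is only logarithmic (of order $\log\log N/\log N$), which is why the choice $\tau_N=1/\log N$ is made explicit there.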
**Lemma 24**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n,1}(\phi)-K_s^{n,2}(\phi)\Big\vert \Big)=0.\end{aligned}$$*
Moving to the fourth step, we will use a time change with the cutoff time $\tau_N$: In the second expression of $K_{t}^{n,2}(\phi)$ in [\[9e3.5\]](#9e3.5){reference-type="eqref" reference="9e3.5"}, we will replace $\{\zeta_\beta^n>T_{\pi\beta}\}$ by $\{\zeta_\beta^n>T_\beta-\tau_N, B^\beta\neq \Delta\}$ and $\{\zeta_\gamma^{n-1}>T_{\pi\gamma}\}$ by $\{\zeta_\gamma^{n-1}>T_\beta-\tau_N, B^\gamma\neq \Delta\}$ and define $$\begin{aligned}
\label{9e3.6}
& K_{t}^{n,3}(\phi)=\frac{1}{2N+\theta} \frac{1}{\psi(N)} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n>T_\beta-\tau_N) \phi(B^\beta)\nonumber\\
&\quad \sum_{\gamma}1(T_{\pi \gamma}<T_\beta, \zeta_\gamma^{n-1}>T_\beta-\tau_N, B^\beta-B^\gamma \in \mathcal{N}_N) 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N),\end{aligned}$$ where, as usual, $B^\beta, B^\gamma \neq \Delta$ are implicit in $\{B^\beta-B^\gamma \in \mathcal{N}_N\}$.
**Lemma 25**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n,2}(\phi)-K_s^{n,3}(\phi)\Big\vert \Big)=0.\end{aligned}$$*
By recalling from [\[9e10.03\]](#9e10.03){reference-type="eqref" reference="9e10.03"} that $X_t^1\leq X_t^2\leq X_t^0$, one can check for $n=1$ or $n=2$, $$\begin{aligned}
&\Big\{\zeta_\beta^n>T_\beta-\tau_N, \zeta_\gamma^{n-1}>T_\beta-\tau_N, T_{\gamma \wedge \beta}>T_\beta-\tau_N \Big\}\\
=&\Big\{\zeta_{\gamma \wedge \beta}^1>T_\beta-\tau_N, T_{\gamma \wedge \beta}>T_\beta-\tau_N \Big\}\\
=&\Big\{\zeta_{ \beta}^1>T_\beta-\tau_N, T_{\gamma \wedge \beta}>T_\beta-\tau_N \Big\}.\end{aligned}$$ The above equalities follow since $\beta$ and $\gamma$ share the same ancestor at time $T_\beta-\tau_N<T_{\gamma \wedge \beta}$. Hence $$\begin{aligned}
\label{9e3.7}
& K_{t}^{n,3}(\phi)=\frac{1}{2N+\theta} \frac{1}{\psi(N)} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^1>T_\beta-\tau_N) \phi(B^\beta)\nonumber\\
&\quad \times \sum_{\gamma}1(T_{\pi \gamma}<T_\beta, B^\beta-B^\gamma \in \mathcal{N}_N, T_{\gamma \wedge \beta}>T_\beta-\tau_N)= K_{t}^{1,3}(\phi).\end{aligned}$$ Define $$\begin{aligned}
\label{9e3.8}
F_\beta(r)=\frac{1}{\psi_0(N)}\sum_{\gamma}1(T_{\pi \gamma}<r, B^\beta-B^\gamma \in \mathcal{N}_N, T_{\gamma \wedge \beta}>r-\tau_N)\end{aligned}$$ so that $$\begin{aligned}
\label{9e3.9}
& K_{t}^{n,3}(\phi)=\frac{1}{N(2N+\theta)} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^1>T_\beta-\tau_N) \phi(B^\beta)F_\beta(T_\beta).\end{aligned}$$ Next, define $$\begin{aligned}
\label{9e3.10}
& G_r(\phi)=\frac{1}{N} \sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta, \zeta_\beta^1>r-\tau_N) \phi(B^\beta) F_\beta(r).\end{aligned}$$ By Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"}, we get $$\begin{aligned}
\label{9e3.11}
K_{t}^{n,3}(\phi)-\int_0^t G_r(\phi) dr \text{ is a martingale}.\end{aligned}$$ Replace $\phi(B^\beta)$ in [\[9e3.10\]](#9e3.10){reference-type="eqref" reference="9e3.10"} by $\phi(B^\beta_{(r-\tau_N)^+})$ and define $$\begin{aligned}
\label{9e3.12}
& G_r^\tau(\phi)=\frac{1}{N} \sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta, \zeta_\beta^1>r-\tau_N) \phi(B^\beta_{(r-\tau_N)^+}) F_\beta(r).\end{aligned}$$
**Lemma 26**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n,3}(\phi)-\int_0^s G_r^\tau(\phi)dr\Big\vert \Big)=0.\end{aligned}$$*
The last approximation term is obtained by replacing $F_\beta(r)$ by $1$ in [\[9e3.12\]](#9e3.12){reference-type="eqref" reference="9e3.12"}, that is, $$\begin{aligned}
\label{9e3.13}
& X_r^{1,\tau}(\phi):=\frac{1}{N} \sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta, \zeta_\beta^1>r-\tau_N) \phi(B^\beta_{(r-\tau_N)^+}),\end{aligned}$$ where $\{B^\beta\neq \Delta\}$ is implicitly given by letting $\phi(B^\beta_{(r-\tau_N)^+})=0$ if $B^\beta=\Delta$. In order to compare $X_r^{1,\tau}(\phi)$ with $G_r^\tau(\phi)$, we define $$\begin{aligned}
\mathcal A(s)=\{\alpha: T_{\pi\alpha}<s\leq T_\alpha, \zeta_\alpha^1>s\}\end{aligned}$$ to be the set of particles alive in $X^1$. Let $$\begin{aligned}
\label{9e4.3}
\{\alpha\}_r=\{\beta: \beta\geq \alpha, T_{\pi\beta}<r\leq T_\beta, B^\beta\neq \Delta\}\end{aligned}$$ be the set of descendants of $\alpha$ alive at time $r$ in the branching random walk. Now rewrite $G_r^\tau(\phi)$ from [\[9e3.12\]](#9e3.12){reference-type="eqref" reference="9e3.12"} as $$\begin{aligned}
\label{e1.24}
& G_r^\tau(\phi)=\frac{1}{N} \sum_{\alpha \in \mathcal A(r-\tau_N)} \phi(B^\alpha_{(r-\tau_N)^+}) \sum_{\beta\in \{\alpha\}_r} F_\beta(r).\end{aligned}$$ Similarly we may rewrite $X_r^{1,\tau}(\phi)$ as $$\begin{aligned}
\label{e1.25}
& X_r^{1,\tau}(\phi)=\frac{1}{N} \sum_{\alpha \in \mathcal A(r-\tau_N)} \phi(B^\alpha_{(r-\tau_N)^+}) \vert \{\alpha\}_r\vert.\end{aligned}$$ Define $$\begin{aligned}
\label{9e4.2}
&Z_\alpha(r)=\sum_{\beta\in \{\alpha\}_r} F_\beta(r)\quad \text{ and } \quad b_d^\tau=\frac{\mathbb{E}(Z_1(\tau_N))}{\mathbb{E}(\vert \{1\}_{\tau_N}\vert )}.
\end{aligned}$$ Combine [\[e1.24\]](#e1.24){reference-type="eqref" reference="e1.24"} and [\[e1.25\]](#e1.25){reference-type="eqref" reference="e1.25"} to get $$\begin{aligned}
\label{9e6.61}
& G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)=\frac{1}{N} \sum_{\alpha \in \mathcal A(r-\tau_N)} \phi(B^\alpha_{(r-\tau_N)^+}) (Z_\alpha(r)-b_d^\tau \vert \{\alpha\}_r\vert ).\end{aligned}$$ By translation invariance, we condition on $\mathcal F_{r-\tau_N}$ to see that on the event $\{\alpha \in \mathcal A(r-\tau_N)\}$, $$\begin{aligned}
\label{9e3.14}
& \mathbb{E}\Big(\big(Z_\alpha(r)-b_d^\tau \vert \{\alpha\}_r\vert \big) \Big\vert \mathcal F_{r-\tau_N}\Big)= \mathbb{E}\Big(Z_1(\tau_N)- b_d^\tau\vert \{1\}_{\tau_N}\vert \Big)=0.\end{aligned}$$
**Lemma 27**. *Let $n=1$ or $2$. For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert \int_0^s G_r^\tau(\phi)dr-b_d^\tau \int_0^s X_r^{1,\tau}(\phi)dr\Big\vert \Big)=0.\end{aligned}$$*
Finally, by proving the last two lemmas below, we arrive at the desired term $b_d \int_0^s X_r^{1}(\phi)dr$.
**Lemma 28**. *$\lim_{N\to\infty} b_d^\tau=b_d$.*
**Lemma 29**. *For all $0<t<\infty$, $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert \int_0^s X_r^{1,\tau}(\phi)dr- \int_0^s X_r^{1}(\phi)dr\Big\vert \Big)=0.\end{aligned}$$*
At this stage, we have reduced the proof of Lemma [Lemma 12](#9l2.6){reference-type="ref" reference="9l2.6"}, and hence of our main result Theorem [Theorem 1](#9t0){reference-type="ref" reference="9t0"}, to Lemmas [Lemma 22](#9l5.1){reference-type="ref" reference="9l5.1"}-[Lemma 29](#9l5.8){reference-type="ref" reference="9l5.8"}, which we now prove.
# First part of the proofs of lemmas {#9s6}
We first give the relatively "simpler" proofs of Lemmas [Lemma 22](#9l5.1){reference-type="ref" reference="9l5.1"}, [Lemma 23](#9l5.2){reference-type="ref" reference="9l5.2"}, [Lemma 24](#9l5.3){reference-type="ref" reference="9l5.3"}, [Lemma 26](#9l5.5){reference-type="ref" reference="9l5.5"} in this section.
Since $K_t^{n}(\phi)$ is increasing in $t$, we get $$\begin{aligned}
&\mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n}(\phi)-\frac{N}{N+\theta}K_s^{n}(\phi)\Big\vert \Big)=\frac{\theta}{N+\theta}\mathbb{E}(K_t^{n}(\phi)).\end{aligned}$$ Recall from [\[9ea5.69\]](#9ea5.69){reference-type="eqref" reference="9ea5.69"} the inequality for $K_t^{n}(\phi)$. Together with the upper bound from Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"}, we conclude from the above that $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_s^{n}(\phi)-\frac{N}{N+\theta}K_s^{n}(\phi)\Big\vert \Big)\leq \frac{C}{N} \|\phi\|_\infty (X_0^0(1)+X_0^0(1)^2)\to 0.\end{aligned}$$ Next, define for each $s\geq 0$ $$\begin{aligned}
\hat{K}_{s}^{n,0}(\phi)=&\frac{1}{2N+\theta} \sum_{\beta} a_\beta^n(s) \phi(B^\beta) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{n-1}}_{T_\beta^-}).\end{aligned}$$ Then the difference between $\frac{N}{N+\theta}K_s^{n}(\phi)$ and $\hat{K}_s^{n,0}(\phi)$ is that $\phi(B^\beta+W^\beta)$ is replaced by $\phi(B^\beta)$. Since $\phi \in C_b^3(\mathbb{R}^d)$ and $W^\beta$ is uniform on $\mathcal{N}_N\subseteq [-N^{-1/2}, N^{-1/2}]^d$, by setting $$\begin{aligned}
\label{9ec5.37}
\eta_N=\sup {\{\vert \phi(x+y)-\phi(x)\vert : y\in [-N^{-1/2}, N^{-1/2}]^d, {x\in \mathbb{R}^d}\}},\end{aligned}$$ we have $\vert \phi(B^\beta+W^\beta)-\phi(B^\beta)\vert \leq \eta_N$ and $\eta_N\leq CN^{-1/2}$. It follows that $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t} \Big\vert \frac{N}{N+\theta}K_s^{n}(\phi)-\hat{K}_s^{n,0}(\phi)\Big\vert \Big)
&\leq C \eta_N \mathbb{E}\Big( \frac{1}{N} \sum_{\beta} a_\beta^0(t) 1(B^\beta+W^\beta\in \overline{\mathcal{R}}^{X^{0}}_{T_\beta^-}) \Big)\\
&\leq CN^{-1/2} C(X_0^0(1)+X_0^0(1)^2) \to 0,\end{aligned}$$ where the last inequality uses Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"} again. Finally, we recall the assumption on $K_0^N$ from [\[9e0.93\]](#9e0.93){reference-type="eqref" reference="9e0.93"} to see $$\begin{aligned}
&\mathbb{E}\Big(\sup_{s\leq t} \Big\vert \hat{K}_s^{n,0}(\phi)-K_s^{n,0}(\phi)\Big\vert \Big) \leq \|\phi\|_\infty \mathbb{E}\Big( \frac{1}{N} \sum_{\beta} a_\beta^0(t) 1(B^\beta+W^\beta\in K_0^N) \Big)\to 0,\end{aligned}$$ thus completing the proof.
**Lemma 30**. *If $G_\beta$ is measurable with respect to $\mathcal F_{T_\beta}$ that satisfies $$\begin{aligned}
\mathbb{E}(G_\beta\vert \mathcal F_{T_\beta^-})=0,\text{ and } \vert G_\beta\vert \leq K 1(T_\beta\leq \zeta_\beta^0)\end{aligned}$$ for some absolute constant $K>0$, then $M_t=N^{-1}\sum_{\beta} 1(T_\beta\leq t) G_\beta$ is an $\mathcal F_t$-martingale and there is some constant $C>0$ so that for any $t\geq 0$, $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t} M_s^2\Big) \leq C\frac{1}{N^2} \mathbb{E}\Big(\sum_\beta 1(T_\beta \leq t) G_\beta^2\Big).
\end{aligned}$$*
The proof is immediate from Lemma 3.5 of [@DP99] and its proof therein.
Recall $K_t^{n,1}(\phi)$ from [\[9e3.3\]](#9e3.3){reference-type="eqref" reference="9e3.3"} and define $$\begin{aligned}
M_t&:=K_t^{n,0}(\phi)-K_t^{n,1}(\phi) =\frac{1}{2N+\theta}\sum_{\beta} a_\beta^n(t) \phi(B^\beta) \Big(h_{\beta}^{n-1}-\frac{\nu_{n-1}(\beta)}{\psi(N)}\Big),\end{aligned}$$ where $$\begin{aligned}
h_{\beta}^{n-1}=1(B^\beta+W^\beta\in \mathcal{R}^{X^{n-1}}_{T_\beta^-}).\end{aligned}$$ By [\[9e3.31\]](#9e3.31){reference-type="eqref" reference="9e3.31"} and Lemma [Lemma 30](#9l6.1){reference-type="ref" reference="9l6.1"}, we get $(M_t, t\geq 0)$ is an $\mathcal F_t$-martingale and $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t}M_s^2\Big)\leq \frac{C}{N^2} \| \phi\| _\infty^2 \sum_{\beta} \mathbb{E}\Big( a_\beta^0(t) \Big(h_{\beta}^{n-1}-\frac{\nu_{n-1}(\beta)}{\psi(N)}\Big)^2\Big).\end{aligned}$$ Notice that $h_{\beta}^{n-1} \in \{0,1\}$. Using [\[9e3.31\]](#9e3.31){reference-type="eqref" reference="9e3.31"} again, we have $$\begin{aligned}
\mathbb{E}\Big( \Big(h_{\beta}^{n-1}-\frac{\nu_{n-1}(\beta)}{\psi(N)}\Big)^2\Big\vert \mathcal F_{T_\beta^-}\Big)\leq \mathbb{E}(h_{\beta}^{n-1}\vert \mathcal F_{T_\beta^-}),\end{aligned}$$ thus giving $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t}M_s^2\Big)\leq &\frac{C}{N^2} \| \phi\| _\infty^2 \sum_{\beta} \mathbb{E}\Big( a_\beta^0(t) h_{\beta}^{n-1}\Big)\leq C\| \phi\| _\infty^2 \frac{1}{N}(X_0^0(1)+X_0^0(1)^2) \to 0,\end{aligned}$$ where the last inequality is by Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"}. The proof is complete.
Recall $\nu_m(\beta)$ from [\[9e3.33\]](#9e3.33){reference-type="eqref" reference="9e3.33"} and define $$\begin{aligned}
\nu_{m,\tau}(\beta)=\Big\vert \{B^\gamma: T_{\pi \gamma}<T_\beta, B^\gamma-B^\beta \in \mathcal{N}_N, \zeta_{\gamma}^m>T_{\pi \gamma}, T_{\gamma \wedge \beta}>T_\beta-\tau_N\}\Big\vert .\end{aligned}$$ Replace $\nu_{n-1}(\beta)$ in $K_{t}^{n,1}(\phi)$ by $\nu_{n-1,\tau}(\beta)$ and set $$\begin{aligned}
\hat{K}_{t}^{n,1}(\phi)=&\frac{1}{2N+\theta} \sum_{\beta} a_\beta^n(t) \phi(B^\beta) \frac{\nu_{n-1,\tau}(\beta)}{\psi(N)}.\end{aligned}$$ It follows that $$\begin{aligned}
&\sup_{s\leq t}\vert \hat{K}_{s}^{n,1}(\phi)-{K}_{s}^{n,1}(\phi)\vert \leq \frac{1}{N} \frac{1}{\psi(N)} \| \phi\| _\infty \sum_{\beta} a_\beta^0(t) \vert \nu_{n-1}(\beta)-\nu_{n-1,\tau}(\beta)\vert \nonumber\\
&\leq \frac{1}{N\psi(N)} \| \phi\| _\infty \sum_{\beta} 1(T_\beta\leq t)\nonumber\\
&\quad\quad\quad \sum_{\gamma} 1(T_{\pi \gamma}<T_\beta) 1( B^\gamma-B^\beta \in \mathcal{N}_N)1( T_{\gamma \wedge \beta}\leq T_\beta-\tau_N)\nonumber\\
&\leq \| \phi\| _\infty(J_0(t)+J(t,\tau_N)),\end{aligned}$$ where the last inequality follows from the definitions in [\[9e2.25\]](#9e2.25){reference-type="eqref" reference="9e2.25"} and [\[9e2.26\]](#9e2.26){reference-type="eqref" reference="9e2.26"}. By Lemma [Lemma 19](#9l4.1a){reference-type="ref" reference="9l4.1a"}, Lemma [Lemma 21](#9l4.3a){reference-type="ref" reference="9l4.3a"} and [\[9ec3.5\]](#9ec3.5){reference-type="eqref" reference="9ec3.5"}, we conclude $$\begin{aligned}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert \hat{K}_s^{n,1}(\phi)-K_s^{n,1}(\phi)\Big\vert \Big)=0.\end{aligned}$$ It remains to calculate the difference between $\hat{K}_{t}^{n,1}(\phi)$ and ${K}_{t}^{n,2}(\phi)$ given by $$\begin{aligned}
\label{9e3.41}
\sup_{s\leq t} \vert \hat{K}_{s}^{n,1}(\phi)-{K}_{s}^{n,2}(\phi)\vert &\leq \frac{1}{N} \frac{1}{\psi(N)} \| \phi\| _\infty \sum_{\beta} 1(T_\beta\leq t, B^\beta\neq \Delta) \nonumber\\
&\times \vert \nu_{n-1,\tau}(\beta)-\sum_{\gamma}\text{nbr}_{\beta,\gamma}^{n-1} 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N)\vert .\end{aligned}$$ Define $$\begin{aligned}
\label{9e3.42}
J_1(t)=&\frac{1}{N\psi(N)} \sum_{\beta} 1(T_\beta\leq t, B^\beta\neq \Delta)\sum_{\gamma} 1(T_{\pi\gamma}<T_\beta, B^\gamma-B^\beta \in \mathcal{N}_N) \nonumber\\
&\quad 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N) \sum_{\delta}1(T_{\pi\delta} <T_\beta, B^\delta=B^\gamma)1(T_{\delta \wedge \beta}>T_\beta-\tau_N).\end{aligned}$$ We claim $$\begin{aligned}
\label{9ec3.42}
\sup_{s\leq t}\vert \hat{K}_{s}^{n,1}(\phi)-{K}_{s}^{n,2}(\phi)\vert \leq C\| \phi\| _\infty J_1(t).\end{aligned}$$ To see this, we note the last term on the right-hand side of [\[9e3.41\]](#9e3.41){reference-type="eqref" reference="9e3.41"} is from the multiple occupancy of particles: If $k\geq 2$ particles have visited a neighboring site of $B^\beta$, then this will contribute at most $k-1$ to [\[9e3.41\]](#9e3.41){reference-type="eqref" reference="9e3.41"} and at least $k^2$ to the summation of $\gamma,\delta$ in [\[9e3.42\]](#9e3.42){reference-type="eqref" reference="9e3.42"}, thus giving [\[9ec3.42\]](#9ec3.42){reference-type="eqref" reference="9ec3.42"}. It suffices to show $\mathbb{E}(J_1(t)) \to 0$.\
Apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to get $$\begin{aligned}
\label{9e4.31}
\mathbb{E}(J_1(t))=&\frac{2N+\theta}{N\psi(N)} \int_0^t \sum_{\beta,\gamma, \delta}\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r, B^\gamma-B^\beta \in \mathcal{N}_N)\nonumber\\
&\quad 1(T_{\gamma \wedge \beta}>r-\tau_N)1(T_{\delta \wedge \beta}>r-\tau_N)1(T_{\pi\delta} <r) 1(B^\delta=B^\gamma)\Big)dr.\end{aligned}$$ Denote the terms inside the expectation above by $K_{\beta,\gamma, \delta}(r)$. The indicators $1(T_{\gamma \wedge \beta}>r-\tau_N)$ and $1(T_{\delta \wedge \beta}>r-\tau_N)$ ensure that $\delta_0=\beta_0=\gamma_0$. By translation invariance, we get $$\begin{aligned}
\label{e1.31}
\sum_{\beta,\gamma, \delta} \mathbb{E}\Big(K_{\beta,\gamma, \delta}(r)\Big) = NX_0^0(1)\mathbb{E}\Big(\sum_{\beta,\gamma, \delta\geq 1}K_{\beta,\gamma, \delta}(r)\Big).\end{aligned}$$
**Lemma 31**. *There is some constant $C>0$ such that for any $0< r\leq t$, $$\begin{aligned}
\label{9e4.61}
\mathbb{E}\Big(\sum_{\beta,\gamma, \delta\geq 1} K_{\beta,\gamma, \delta}(r)\Big) \leq C\tau_N.\end{aligned}$$*
Use [\[e1.31\]](#e1.31){reference-type="eqref" reference="e1.31"} and [\[9e4.61\]](#9e4.61){reference-type="eqref" reference="9e4.61"} in [\[9e4.31\]](#9e4.31){reference-type="eqref" reference="9e4.31"} to get $$\begin{aligned}
\label{9e4.60}
\mathbb{E}(J_1(t))\leq &\frac{2N+\theta}{N\psi(N)} \int_0^t NX_0^0(1) C\tau_N dr\leq \frac{CX_0^0(1)}{\psi_0(N)} \tau_N \to 0 \text{ as } N\to \infty.\end{aligned}$$ The proof of Lemma [Lemma 24](#9l5.3){reference-type="ref" reference="9l5.3"} is complete as noted above.
It remains to prove Lemma [Lemma 31](#9l5.0){reference-type="ref" reference="9l5.0"}.
By symmetry of $\delta$ and $\gamma$, we may assume $\delta \wedge \beta \leq \gamma\wedge \beta$. There are two cases here: (i) $\delta \wedge \beta < \gamma\wedge \beta$; (ii) $\delta \wedge \beta = \gamma\wedge \beta$. Denote by $K^{(i)}(r)$ (resp. $K^{(ii)}(r)$) the sum of $K_{\beta,\gamma, \delta}(r)$ over those $\delta,\gamma,\beta$ satisfying case (i) (resp. case (ii)).
![[\[fig1\]]{#fig1 label="fig1"} Two cases for $K_{\beta,\gamma, \delta}(r)$. ](1.png){#fig1 width="0.75 \\textwidth"}
We first deal with $K^{(i)}(r)$. Let $\sigma=\delta\wedge \beta$ and $\alpha=\gamma\wedge \beta$ so that $\sigma<\alpha$. Let $\vert \sigma\vert =k$ for some $k\geq 0$ and $\vert \alpha\vert =k+j$ for some $j\geq 1$. Since $\delta\geq \sigma$, we let $\vert \delta\vert =k+n$ for some $n\geq 0$. Set $\vert \beta\vert =\vert \alpha\vert +l$ and $\vert \gamma\vert =\vert \alpha\vert +m$ for some $l,m\geq 0$. See Figure [1](#fig1){reference-type="ref" reference="fig1"} for illustration. The sum of $\delta, \gamma, \beta$ for $K^{(i)}(r)$ can be written as $$\begin{aligned}
\label{9e4.32}
\mathbb{E}(K^{(i)}(r))=& \sum_{k=0}^\infty \sum_{\substack{\sigma\geq 1,\\ \vert \sigma\vert =k}} \sum_{n=0}^\infty \sum_{\substack{\delta\geq \sigma,\\ \vert \delta\vert =k+n}}
\sum_{j=1}^\infty\sum_{\substack{\alpha\geq \sigma,\\ \vert \alpha\vert =k+j}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =\vert \alpha\vert +l}}\sum_{\substack{ \gamma\geq\alpha,\\ \vert \gamma\vert =\vert \alpha\vert +m}} \nonumber\\
&\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r, B^\gamma-B^\beta \in \mathcal{N}_N)\nonumber\\
&\quad 1(T_{\sigma}>r-\tau_N)1(T_{\pi\delta} <r, B^\delta=B^\gamma)\Big),\end{aligned}$$ where we have removed $1(T_{\alpha}>r-\tau_N)$ since it is implied by $1(T_{\sigma}>r-\tau_N)$ and $\sigma<\alpha$. We note that, on the event $\{B^\beta, B^\gamma, B^{\delta}\neq \Delta\}$, the spatial events $\{B^\beta-B^\gamma\in \mathcal{N}_N\}$ and $\{B^\delta=B^\gamma\}$ are independent of the branching events concerning $T_{\alpha}, T_\beta$, etc. Therefore the expectation above is at most $$\begin{aligned}
\label{9ec1.00}
&\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r) 1(T_{\sigma}>r-\tau_N)1(T_{\pi\delta} <r)\Big) \nonumber\\
&\times (\frac{N+\theta}{2N+\theta})^{k+j+l+m+n} \times \mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^\delta=B^\gamma) \Big),\end{aligned}$$ where the second term gives an upper bound for the probability of the event $\{B^\beta, B^\gamma, B^{\delta}\neq \Delta\}$. We also use $\mathbb{E}^{\beta, \gamma,{\delta}}$ to denote the expectation conditioning on $\{B^\beta, B^\gamma, B^{\delta}\neq \Delta\}$.
For the first expectation above concerning the branching events, we condition on $\mathcal{H}_\alpha$ to get $$\begin{aligned}
&\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r) 1(T_{\sigma}>r-\tau_N)1(T_{\pi\delta} <r)\Big\vert \mathcal{H}_\alpha\Big) \\
&\leq 1(T_\alpha<r) \cdot 1(T_{\sigma}>r-\tau_N) \cdot \mathbb{P}(T_{\pi\beta}-T_{\alpha} <r-T_{\alpha} \leq T_\beta-T_{\alpha} \vert \mathcal{H}_\alpha)\nonumber\\
&\quad \quad\cdot \mathbb{P}(T_{\pi\gamma}-T_{\alpha} <r-T_{\alpha}<\tau_N \vert \mathcal{H}_\alpha) \cdot \mathbb{P}(T_{\pi\delta}-T_{\sigma} <r-T_{\sigma}<\tau_N\vert \mathcal{H}_\sigma) \\
&= 1(T_\alpha<r) \cdot 1(T_{\sigma}>r-\tau_N) \cdot\pi((2N+\theta)(r-T_\alpha), l-1) \nonumber\\
&\quad \quad\cdot \Pi((2N+\theta)\tau_N,m-1) \cdot \Pi((2N+\theta)\tau_N,n-1),\end{aligned}$$ where in the first inequality we have used $T_{\alpha}>T_{\sigma}>r-\tau_N$. So the first expectation in [\[9ec1.00\]](#9ec1.00){reference-type="eqref" reference="9ec1.00"} is at most $$\begin{aligned}
\label{9ec1.12}
&\mathbb{E}\Big(1(\Gamma_{k+j+1}<(2N+\theta)r) \cdot1(\Gamma_{k+1}>(2N+\theta)(r-\tau_N)) \cdot\pi((2N+\theta)r-\Gamma_{k+j+1}, l-1)\Big) \nonumber\\
&\quad \quad\times \Pi((2N+\theta)\tau_N,m-1) \times \Pi((2N+\theta)\tau_N,n-1).\end{aligned}$$
Turning to the second expectation in [\[9ec1.00\]](#9ec1.00){reference-type="eqref" reference="9ec1.00"} concerning the spatial events, we let $\hat{B}^\beta$ (resp. $\hat{B}^\gamma$) be the position of $B^\beta-B^{\alpha}$ (resp. $B^\gamma-B^{\alpha}$) with the value of $W^{\alpha}$ subtracted if it shows up in $B^\beta$ (resp. ${B}^\gamma$). To be specific, we define (recall $B^\beta$ from [\[ea3.22\]](#ea3.22){reference-type="eqref" reference="ea3.22"}) $$\begin{aligned}
\label{9ea1.71}
\hat{B}^\beta=\sum_{j=\vert \alpha\vert +1}^{\vert \beta\vert -1} W^{\beta\vert j}1(e_{\beta\vert j}=\beta_{j+1}) \text{ and } \hat{B}^\gamma=\sum_{j=\vert \alpha\vert +1}^{\vert \gamma\vert -1} W^{\gamma\vert j} 1(e_{\gamma\vert j}=\gamma_{j+1}) .\end{aligned}$$ It follows that $B^\beta-B^\gamma-(\hat{B}^\beta- \hat{B}^\gamma)=W^\alpha$ and hence $$\begin{aligned}
\label{9ea2.82}
1(B^\gamma-B^\beta \in \mathcal{N}_N)\leq 1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d).\end{aligned}$$ Use the above to get the second expectation in [\[9ec1.00\]](#9ec1.00){reference-type="eqref" reference="9ec1.00"} is bounded by $$\begin{aligned}
\label{e1.10}
\mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d) 1(B^\delta=B^\gamma) \Big)\end{aligned}$$ Next, we claim that $$\begin{aligned}
\label{9ec1.10}
\mathbb{P}^{\beta, \gamma,{\delta}}\Big(B^\delta=B^\gamma\Big|\sigma(\hat{B}^\beta, \hat{B}^\gamma)\Big)\leq \frac{1}{\psi(N)} \frac{C}{(1+j+n)^{d/2}}.\end{aligned}$$ To see this, we recall $\sigma=\delta\wedge \gamma$ and $W^\sigma$ is a jump that might have moved the $\delta$ or the $\gamma$ line at time $T_\sigma$ so that $B^\delta_{T_\sigma}-B^\gamma_{T_\sigma}\in \{W^\sigma, -W^\sigma\}$. In order that $B^\delta=B^\gamma$, we require $$\begin{aligned}
\label{9ec1.14}
(B^\delta-B^\delta_{T_\sigma})-(B^\gamma-B^\gamma_{T_\sigma}) \in \mathcal{N}_N,\end{aligned}$$ and then let $W^\sigma$ take the exact value in $\mathcal{N}_N$ to make $B^\delta=B^\gamma$, which occurs with probability $1/\psi(N)$. Notice that $$\begin{aligned}
&\mathbb{P}^{\beta, \gamma,{\delta}}\Big((B^\delta-B^\delta_{T_\sigma})-(B^\gamma-B^\gamma_{T_\sigma}) \in \mathcal{N}_N\Big|\sigma(\hat{B}^\beta, \hat{B}^\gamma)\Big)\\
&=\mathbb{P}^{\beta, \gamma,{\delta}}\Big((B^\delta-B^\delta_{T_\sigma})-(B^\gamma-\hat{B}^\gamma-B^\gamma_{T_\sigma}) \in \hat{B}^\gamma+\mathcal{N}_N\Big|\sigma(\hat{B}^\beta, \hat{B}^\gamma)\Big)\leq \frac{C}{(1+j+n)^{d/2}},\nonumber\end{aligned}$$ where the last inequality uses [\[ea3.22\]](#ea3.22){reference-type="eqref" reference="ea3.22"} and Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"}. Hence the desired inequality in [\[9ec1.10\]](#9ec1.10){reference-type="eqref" reference="9ec1.10"} follows.
Apply [\[9ec1.10\]](#9ec1.10){reference-type="eqref" reference="9ec1.10"} to see that [\[e1.10\]](#e1.10){reference-type="eqref" reference="e1.10"} is at most $$\begin{aligned}
\label{9ec1.11}
&\frac{C}{\psi(N)} \frac{1}{(1+j+n)^{d/2}}\mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d) \Big)\nonumber\\
&\leq \frac{C}{\psi(N)} \frac{1}{(1+j+n)^{d/2}}\frac{1}{(1+l+m)^{d/2}},\end{aligned}$$ where we have used Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} and [\[9ea1.71\]](#9ea1.71){reference-type="eqref" reference="9ea1.71"} in the last inequality. Now conclude from [\[e1.10\]](#e1.10){reference-type="eqref" reference="e1.10"} and [\[9ec1.11\]](#9ec1.11){reference-type="eqref" reference="9ec1.11"} that $$\begin{aligned}
\label{9ec1.13}
& \mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^\delta=B^\gamma) \Big)\leq \frac{C}{\psi(N)} \frac{1}{(1+l+m)^{d/2}}\frac{1}{(1+j+n)^{d/2}}.\end{aligned}$$
Combine [\[9ec1.00\]](#9ec1.00){reference-type="eqref" reference="9ec1.00"}, [\[9ec1.12\]](#9ec1.12){reference-type="eqref" reference="9ec1.12"} and [\[9ec1.13\]](#9ec1.13){reference-type="eqref" reference="9ec1.13"} to see that [\[9e4.32\]](#9e4.32){reference-type="eqref" reference="9e4.32"} becomes $$\begin{aligned}
\label{9e9.69}
&\mathbb{E}( K^{(i)}(r))\leq \frac{C}{\psi(N)} \sum_{k=0}^\infty \sum_{j=1}^\infty\sum_{n=0}^\infty \sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+j+l+m+n} \nonumber\\
&\quad \times \frac{1}{(1+j+n)^{d/2}} \frac{1}{(1+l+m)^{d/2}}\mathbb{E}\Big(1(\Gamma_{k+j+1}<(2N+\theta)r) \nonumber\\
&\quad \times 1(\Gamma_{k+1}>(2N+\theta)(r-\tau_N)) \cdot \pi((2N+\theta)r-\Gamma_{k+j+1}, l-1)\Big) \nonumber\\
&\quad \times \Pi((2N+\theta)\tau_N,m-1) \times \Pi((2N+\theta)\tau_N,n-1). \end{aligned}$$ By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $m$ gives $$\begin{aligned}
&\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)\tau_N ,m-1)\frac{1}{(1+l+m)^{d/2}} \leq \frac{C}{(1+l)^{d/2-1}}\leq C.\end{aligned}$$ The sum for $l$ is at most $$\begin{aligned}
\label{9ec1.19}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l}\pi((2N+\theta)r-\Gamma_{k+j+1}, l-1)\nonumber\\
&\leq 2e^{\theta r}\sum_{l=0}^\infty \pi\Big((2N+2\theta)r-\frac{2N+2\theta}{2N+\theta}\Gamma_{k+j+1}, l\Big) =2e^{\theta r}.\end{aligned}$$ Next, turning to the sum of $n$, we use Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"} again to see $$\begin{aligned}
&\sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} \Pi((2N+\theta)\tau_N ,n-1)\frac{1}{(n+j+1)^{d/2}} \leq \frac{Ce^{\theta \tau_N}}{(1+j)^{d/2-1}}.\end{aligned}$$ Combine the above to see that $$\begin{aligned}
\label{9ec2.32}
\mathbb{E}( K^{(i)}(r))&\leq \frac{C}{\psi(N)} \sum_{k=0}^\infty \sum_{j=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+j} \frac{1}{(j+1)^{d/2-1}} \nonumber\\
& \mathbb{E}\Big(1(\Gamma_{k+j+1}<(2N+\theta)r) \cdot 1(\Gamma_{k+1}>(2N+\theta)(r-\tau_N)) \Big).\end{aligned}$$ Bound the expectation on the right-hand side by $$\begin{aligned}
\label{9ec1.02}
&\mathbb{E}\Big(1((2N+\theta)(r-\tau_N)<\Gamma_{k+1}<(2N+\theta)r) \cdot 1(\Gamma_{k+j+1}-\Gamma_{k+1}<(2N+\theta)\tau_N) \Big)\nonumber\\
&=\mathbb{P}((2N+\theta)(r-\tau_N)<\Gamma_{k+1}<(2N+\theta)r)\cdot \Pi((2N+\theta)\tau_N, j).\end{aligned}$$ So [\[9ec2.32\]](#9ec2.32){reference-type="eqref" reference="9ec2.32"} becomes $$\begin{aligned}
\label{e2.32}
\mathbb{E}( K^{(i)}(r))\leq \frac{C}{\psi(N)} &\sum_{k=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k}\mathbb{P}((2N+\theta)(r-\tau_N)<\Gamma_{k+1}<(2N+\theta)r) \nonumber\\
& \times \sum_{j=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{j} \Pi((2N+\theta)\tau_N, j) \frac{1}{(1+j)^{d/2-1}} .\end{aligned}$$
**Lemma 32**. *For any $t>0$, there is some constant $C>0$ so that for any $0< r\leq t$, $$\begin{aligned}
&\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)r, m) \frac{1}{(1+m)^{d/2-1}}\leq C I(N).\end{aligned}$$*
The $m=0$ term gives $1$. For $m\geq 0$, write $\Pi((2N+\theta)r, m)$ as $\mathbb{P}(\Gamma_{m}< (2N+\theta)r)$. Then the sum over $m>ANr$ is at most $C2^{-Nr}$ by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}. Next, the sum over $m\leq ANr$ is at most $$\begin{aligned}
&\sum_{m=0}^{ANr} e^{A\theta r} \frac{1}{(1+m)^{d/2-1}}\leq Ce^{A\theta t} I(ANr)\leq CI(N).\end{aligned}$$ The proof is complete.
By the above lemma, the sum of $j$ in [\[e2.32\]](#e2.32){reference-type="eqref" reference="e2.32"} gives at most $CI(N)$. The sum of $k$ gives $$\begin{aligned}
\label{9ec1.04}
&\sum_{k=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k} \mathbb{P}((2N+\theta)(r-\tau_N)<\Gamma_{k+1}<(2N+\theta)r) \nonumber\\
&=\sum_{k=0}^\infty \sum_{\alpha\geq 1, \vert \alpha\vert =k} \mathbb{E}\Big(1(r-\tau_N<T_\alpha<r, B^\alpha\neq\Delta)\Big)\leq Ce^{\theta r}N\tau_N,\end{aligned}$$ where the last inequality is by Lemma [Lemma 18](#9l3.2a){reference-type="ref" reference="9l3.2a"}. Therefore [\[e2.32\]](#e2.32){reference-type="eqref" reference="e2.32"} becomes $$\begin{aligned}
\mathbb{E}( K^{(i)}(r))&\leq \frac{C}{\psi(N)} C I(N) N\tau_N\leq C\tau_N.\end{aligned}$$
Next, we move to case (ii) where $\delta\wedge \beta=\gamma\wedge \beta$. Since we assume $\delta\wedge \beta\leq \gamma\wedge \beta$, we must have that $\delta$ branches off $\gamma$ after $\gamma$ branches off $\beta$. Let $\alpha=\gamma\wedge \beta$ and $\vert \alpha\vert =k$ for some $k\geq 0$. Let $l\geq 0$ be such that $\vert \beta\vert =k+l$. Set $\sigma=\delta\wedge \gamma$ and let $\vert \sigma\vert =k+j$ for some $j\geq 0$. Let $n,m\geq 0$ be such that $\vert \delta\vert =\vert \sigma\vert +n$ and $\vert \gamma\vert =\vert \sigma\vert +m$. See Figure [1](#fig1){reference-type="ref" reference="fig1"}. Now we may write $\mathbb{E}(K^{(ii)}(r))$ as $$\begin{aligned}
\mathbb{E}(K^{(ii)}(r))=& \sum_{k=0}^\infty \sum_{\substack{\alpha\geq 1,\\ \vert \alpha\vert =k}} \sum_{l=0}^\infty \sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}} \sum_{j=0}^\infty\sum_{\substack{\sigma\geq \alpha,\\ \vert \sigma\vert =k+j}} \sum_{n=0}^\infty \sum_{\substack{\delta\geq \sigma,\\ \vert \delta\vert =k+j+n}}
\sum_{m=0}^\infty\sum_{\substack{ \gamma\geq\alpha,\\ \vert \gamma\vert =k+j+m }} \nonumber\\
&\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r, B^\gamma-B^\beta \in \mathcal{N}_N)\nonumber\\
&\quad \quad 1(T_{\alpha}>r-\tau_N)1(T_{\pi\delta} <r, B^\delta=B^\gamma)\Big).\end{aligned}$$ Similar to [\[9ec1.00\]](#9ec1.00){reference-type="eqref" reference="9ec1.00"}, the expectation above is at most $$\begin{aligned}
\label{9ec2.44}
&\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r) 1(T_{\alpha}>r-\tau_N)1(T_{\pi\delta} <r)\Big) \nonumber\\
&\times (\frac{N+\theta}{2N+\theta})^{k+j+l+m+n} \times \mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^\delta=B^\gamma) \Big).\end{aligned}$$ For the first expectation above, we condition on $\mathcal{H}_\sigma$ to get $$\begin{aligned}
&\mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi\gamma}<r) 1(T_{\alpha}>r-\tau_N)1(T_{\pi\delta} <r)\Big\vert \mathcal{H}_\sigma\Big) \\
&\leq 1(T_\sigma<r) \cdot 1(T_{\alpha}>r-\tau_N) \cdot \mathbb{P}(T_{\pi\beta}-T_{\alpha} <r-T_{\alpha} \leq T_\beta-T_{\alpha} \vert \mathcal{H}_\alpha)\nonumber\\
&\quad \quad\times \mathbb{P}(T_{\pi\gamma}-T_{\sigma} <r-T_{\sigma}<\tau_N \vert \mathcal{H}_\sigma) \times \mathbb{P}(T_{\pi\delta}-T_{\sigma} <r-T_{\sigma}<\tau_N\vert \mathcal{H}_\sigma) \\
&\leq 1(T_\sigma<r) \cdot 1(T_{\alpha}>r-\tau_N) \cdot \pi((2N+\theta)(r-T_\alpha), l-1) \nonumber\\
&\quad \quad\times \Pi((2N+\theta)\tau_N,m-1) \times \Pi((2N+\theta)\tau_N,n-1),\end{aligned}$$ where in the first inequality we have used $T_\sigma\geq T_\alpha>r-\tau_N$. Hence the first expectation in [\[9ec2.44\]](#9ec2.44){reference-type="eqref" reference="9ec2.44"} is at most $$\begin{aligned}
\label{e2.44}
&\mathbb{E}\Big(1(\Gamma_{k+j+1}<(2N+\theta)r) \cdot1(\Gamma_{k+1}>(2N+\theta)(r-\tau_N)) \cdot\pi((2N+\theta)r-\Gamma_{k+1}, l-1)\Big) \nonumber\\
&\quad \quad\times \Pi((2N+\theta)\tau_N,m-1) \times \Pi((2N+\theta)\tau_N,n-1).\end{aligned}$$ For the second expectation in [\[9ec2.44\]](#9ec2.44){reference-type="eqref" reference="9ec2.44"}, we follow [\[9ea1.71\]](#9ea1.71){reference-type="eqref" reference="9ea1.71"} to define (recall $\sigma=\gamma \wedge \delta$) $$\begin{aligned}
\label{e1.71}
\bar{B}^\delta=\sum_{j=\vert \sigma\vert +1}^{\vert \delta\vert -1} W^{\delta\vert j}1(e_{\delta\vert j}=\delta_{j+1}) \text{ and } \bar{B}^\gamma=\sum_{j=\vert \sigma\vert +1}^{\vert \gamma\vert -1} W^{\gamma\vert j} 1(e_{\gamma\vert j}=\gamma_{j+1})\end{aligned}$$ so that $B^\delta-B^\gamma-(\bar{B}^\delta- \bar{B}^\gamma)=W^\sigma$. Hence $$\begin{aligned}
1(B^\delta=B^\gamma)= 1(\bar{B}^\delta- \bar{B}^\gamma=W^\sigma).\end{aligned}$$ Condition on $\sigma(\bar{B}^\delta, \bar{B}^\gamma,W^\sigma)$ to get $$\begin{aligned}
&\mathbb{P}^{\beta, \gamma,{\delta}}(B^\gamma-B^\beta\in \mathcal{N}_N\vert \sigma(\bar{B}^\delta, \bar{B}^\gamma,W^\sigma))\\
&= \mathbb{P}^{\beta, \gamma,{\delta}}(B^\beta-B^\alpha-(B^\gamma-\bar{B}^\gamma-B^\alpha)\in \bar{B}^\gamma+\mathcal{N}_N\vert \sigma(\bar{B}^\delta, \bar{B}^\gamma,W^\sigma))\leq \frac{C}{(1+j+l)^{d/2}}.\end{aligned}$$ It follows that $$\begin{aligned}
\label{e2.82}
&\mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^\delta=B^\gamma) \Big)\nonumber\\
&\leq \frac{C}{(1+j+l)^{d/2}} \mathbb{P}^{\beta, \gamma,{\delta}}(\bar{B}^\delta- \bar{B}^\gamma=W^\sigma)\leq \frac{C}{(1+j+l)^{d/2}}\frac{1}{\psi(N)}\frac{C}{(1+m+n)^{d/2}}.\end{aligned}$$ Combine [\[9ec2.44\]](#9ec2.44){reference-type="eqref" reference="9ec2.44"}, [\[e2.44\]](#e2.44){reference-type="eqref" reference="e2.44"} and [\[e2.82\]](#e2.82){reference-type="eqref" reference="e2.82"} to conclude $$\begin{aligned}
&\mathbb{E}( K^{(ii)}(r))\leq \frac{C}{\psi(N)} \sum_{k=0}^\infty \sum_{l=0}^\infty \sum_{j=0}^\infty\sum_{n=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+j+l+m+n} \nonumber\\
&\frac{1}{(1+n+m)^{d/2}} \frac{1}{(1+j+l)^{d/2}}\mathbb{E}\Big(1(\Gamma_{k+j+1}<(2N+\theta)r) \nonumber\\
& \quad \times 1(\Gamma_{k+1}>(2N+\theta)(r-\tau_N)) \cdot\pi((2N+\theta)r-\Gamma_{k+1}, l-1)\Big) \nonumber\\
&\quad \times \Pi((2N+\theta)\tau_N,m-1) \times \Pi((2N+\theta)\tau_N,n-1).\end{aligned}$$ By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the summation of $n$ gives $$\begin{aligned}
&\sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} \Pi((2N+\theta)\tau_N,n-1) \frac{1}{(1+m+n)^{d/2}}\leq \frac{C }{(1+m)^{d/2-1}}.\end{aligned}$$ Use Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"} to get that the sum for $m$ is at most $$\begin{aligned}
&\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)\tau_N,m-1) \frac{1}{(1+m)^{d/2-1}}\leq CI(N).\end{aligned}$$ Now we are left with $$\begin{aligned}
&\mathbb{E}( K^{(ii)}(r))\leq \frac{C}{\psi(N)} I(N) \sum_{k=0}^\infty \sum_{j=0}^\infty \sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+j+l} \frac{1}{(1+j+l)^{d/2}} \nonumber\\
& \mathbb{E}\Big(1(\Gamma_{k+j+1}<(2N+\theta)r) 1(\Gamma_{k+1}>(2N+\theta)(r-\tau_N)) \cdot\pi((2N+\theta)r-\Gamma_{k+1}, l-1)\Big).\end{aligned}$$ Similar to [\[9ec1.02\]](#9ec1.02){reference-type="eqref" reference="9ec1.02"}, we may bound the expectation above by $$\begin{aligned}
&\mathbb{E}\Big(1((2N+\theta)(r-\tau_N)<\Gamma_{k+1}<(2N+\theta)r) \cdot\pi((2N+\theta)r-\Gamma_{k+1}, l-1) \Big) \cdot \Pi((2N+\theta)\tau_N, j).\end{aligned}$$ So the sum for $j$ gives $$\begin{aligned}
&\sum_{j=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{j} \Pi((2N+\theta)\tau_N, j) \frac{1}{(1+j+l)^{d/2}}\leq \frac{C }{(1+l)^{d/2-1}}\leq C,\end{aligned}$$ where the first inequality is by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}. Use [\[9ec1.19\]](#9ec1.19){reference-type="eqref" reference="9ec1.19"} to see that the sum of $l$ is at most $$\begin{aligned}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \pi((2N+\theta)r-\Gamma_{k+1}, l-1) \leq C.\end{aligned}$$ We are left with $$\begin{aligned}
&\mathbb{E}( K^{(ii)}(r))\leq \frac{C}{\psi(N)} I(N) \sum_{k=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k} \mathbb{P}\Big((2N+\theta)(r-\tau_N)<\Gamma_{k+1}<(2N+\theta)r\Big).\end{aligned}$$ By [\[9ec1.04\]](#9ec1.04){reference-type="eqref" reference="9ec1.04"}, the above is at most $$\begin{aligned}
&\mathbb{E}( K^{(ii)}(r))\leq \frac{C}{\psi(N)} I(N) \times CN\tau_N \leq C\tau_N,\end{aligned}$$ as required.
Apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to see $$\begin{aligned}
M_t:=K_{t}^{n,3}(\phi)-\int_0^t G_r(\phi)dr\text{ is a martingale},\end{aligned}$$ whose predictable quadratic variation is given by $$\begin{aligned}
\langle M\rangle_t=\frac{1}{N^2(2N+\theta)}\int_0^t \sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta, \zeta_\beta^1>r-\tau_N) \phi(B^\beta)^2 F_\beta(r)^2 dr.\end{aligned}$$ Using the $L^2$ maximal inequality and recalling $F_\beta(r)$ from [\[9e3.8\]](#9e3.8){reference-type="eqref" reference="9e3.8"}, we get $$\begin{aligned}
\label{9e6.1}
&\mathbb{E}\Big(\sup_{s\leq t} M_s^2\Big) \leq C\mathbb{E}(\langle M\rangle_t)\leq C\| \phi\| _\infty^2\frac{1}{N^3} \int_0^t \mathbb{E}\Big( \sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta) \nonumber\\
&\times \sum_{\gamma}1(T_{\pi \gamma}<r, B^\beta-B^\gamma\in \mathcal{N}_N) 1(T_{\gamma\wedge \beta}>r-\tau_N)\nonumber\\
&\times \sum_{\delta} 1(T_{\pi \delta}<r, B^\beta-B^\delta\in \mathcal{N}_N)1(T_{\delta\wedge \beta}>r-\tau_N) \Big) dr,\end{aligned}$$ where we have used $\psi_0(N)\geq 1$. Again the indicators $1(T_{\gamma \wedge \beta}>r-\tau_N)$ and $1(T_{\delta \wedge \beta}>r-\tau_N)$ ensure $\delta_0=\beta_0=\gamma_0$. We may set $\delta_0=\beta_0=\gamma_0=1$ so that for each $0<r<t$, the expectation in [\[9e6.1\]](#9e6.1){reference-type="eqref" reference="9e6.1"} is bounded by $$\begin{aligned}
\label{9ec1.21}
&NX_0^0(1) \sum_{\beta,\gamma,\delta\geq 1} \mathbb{E}\Big(1(T_{\pi\beta}<r\leq T_\beta) 1(T_{\pi \gamma}<r, B^\beta-B^\gamma\in \mathcal{N}_N)\nonumber\\
& 1(T_{\gamma\wedge \beta}>r-\tau_N) 1(T_{\delta\wedge \beta}>r-\tau_N) 1(T_{\pi \delta}<r)1(\sqrt{N}(B^\gamma-B^\delta)\in [-2,2]^d) \Big),\end{aligned}$$ where we have used $\{B^\beta-B^\delta\in \mathcal{N}_N\}$ and $\{B^\beta-B^\gamma\in \mathcal{N}_N\}$ to get $\sqrt{N}(B^\gamma-B^\delta)\in [-2,2]^d$. One can check that the events inside the expectation in [\[9ec1.21\]](#9ec1.21){reference-type="eqref" reference="9ec1.21"} are almost identical to those in $K_{\beta,\gamma, \delta}(r)$ from [\[9e4.31\]](#9e4.31){reference-type="eqref" reference="9e4.31"}, except that $B^\gamma=B^\delta$ is now replaced by $\sqrt{N}(B^\gamma-B^\delta)\in [-2,2]^d$. All the calculations from Lemma [Lemma 31](#9l5.0){reference-type="ref" reference="9l5.0"} will continue to work except that there will be an extra $\psi(N)$ as we do not have an exact equality here (see, e.g., [\[9ec1.10\]](#9ec1.10){reference-type="eqref" reference="9ec1.10"}). Lemma [Lemma 31](#9l5.0){reference-type="ref" reference="9l5.0"} then implies that [\[9ec1.21\]](#9ec1.21){reference-type="eqref" reference="9ec1.21"} is at most $NX_0^0(1) (C\tau_N \cdot \psi(N))$ and [\[9e6.1\]](#9e6.1){reference-type="eqref" reference="9e6.1"} becomes $$\begin{aligned}
&\mathbb{E}\Big(\sup_{s\leq t} M_s^2\Big) \leq C\| \phi\| _\infty^2\frac{1}{N^3} \int_0^t NX_0^0(1) (C\tau_N\cdot \psi(N)) dr \nonumber\\
&\leq C\| \phi\| _\infty^2 X_0^0(1) \frac{\psi_0(N) \tau_N}{N} \to 0 \text{ as } N\to \infty.\end{aligned}$$ By applying the Cauchy-Schwarz inequality, we get $$\begin{aligned}
\label{9e6.2}
&\mathbb{E}\Big(\sup_{s\leq t} \Big\vert K_{s}^{n,3}(\phi)-\int_0^s G_r(\phi)dr\Big\vert \Big) \leq \sqrt{\mathbb{E}\Big(\sup_{s\leq t} M_s^2\Big) } \to 0 \text{ as } N\to \infty.\end{aligned}$$ It suffices to show $$\begin{aligned}
\lim_{N\to \infty} \int_0^t \mathbb{E}(\vert G_r^\tau(\phi) -G_r(\phi)\vert )dr=0.\end{aligned}$$ Recall $G_r^\tau (\phi)$ from [\[9e3.12\]](#9e3.12){reference-type="eqref" reference="9e3.12"}. If $r\leq \tau_N$, by recalling the definition of $F_\beta(r)$, we have $$\begin{aligned}
\mathbb{E}(G_r^\tau(\phi))&\leq \frac{1}{N} \| \phi\| _\infty \mathbb{E}\Big(\sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta) \frac{1}{\psi_0(N)} \sum_{\gamma} 1(T_{\pi\gamma}<r, B^\gamma-B^\beta\in\mathcal{N}_N, T_{\beta\wedge \gamma}>r-\tau_N)\Big)\nonumber\\
&\leq \frac{1}{\psi(N)} \| \phi\| _\infty \mathbb{E}\Big(\sum_{\beta,\gamma: \beta_0=\gamma_0} \text{nbr}_{\beta,\gamma}(r)\Big),\end{aligned}$$ where in the first inequality we have dropped $1(\zeta_\beta^1>r-\tau_N)$ and in the second inequality we have replaced $1(T_{\beta\wedge \gamma}>r-\tau_N)$ by $1(\beta_0=\gamma_0)$. Use Lemma [Lemma 14](#9l4.1){reference-type="ref" reference="9l4.1"} (ii) to see that $$\begin{aligned}
\label{9e9.30}
& \mathbb{E}(G_r^\tau(\phi)) \leq \frac{1}{\psi(N)} \| \phi\| _\infty CNX_0^0(1) I((2N+2\theta)r).\end{aligned}$$ Hence it follows that $$\begin{aligned}
\label{9e9.31}
&\int_0^{\tau_N} \mathbb{E}(G_r^\tau(\phi)) dr\leq \int_0^{\tau_N} C \| \phi\| _\infty \frac{1}{\psi_0(N)} X_0^0(1) I((2N+2\theta)r) dr\nonumber\\
&\leq \tau_N \frac{C}{\psi_0(N)} X_0^0(1) I((2N+2\theta)\tau_N) \leq CX_0^0(1) \tau_N \to 0.\end{aligned}$$ Similarly, we may obtain the same bound for $\mathbb{E}(G_r(\phi))$ as in [\[9e9.30\]](#9e9.30){reference-type="eqref" reference="9e9.30"}. We conclude that $$\begin{aligned}
\label{9e9.25}
&\int_0^{\tau_N} \mathbb{E}(\vert G_r^\tau(\phi)-G_r(\phi)\vert ) dr\leq \int_0^{\tau_N} \mathbb{E}(G_r^\tau(\phi)) dr+\int_0^{\tau_N} \mathbb{E}(G_r(\phi)) dr \to 0.\end{aligned}$$
It remains to bound $\mathbb{E}(\vert G_r^\tau(\phi) -G_r(\phi)\vert )$ for $r>\tau_N$. Use $\phi\in C_b^3$ to obtain $$\begin{aligned}
\label{9e6.3}
&\mathbb{E}(\vert G_r^\tau(\phi)-G_r(\phi)\vert )\leq \frac{C}{N}\frac{1}{\psi_0(N)} \mathbb{E}\Big(\sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta) \vert B^\beta- B^\beta_{r-\tau_N}\vert \nonumber\\
&\quad \times \sum_{\gamma} 1(T_{\pi \gamma}<r, B^\beta-B^\gamma\in \mathcal{N}_N) 1(T_{\gamma\wedge \beta}>r-\tau_N)\Big)\nonumber\\
&=\frac{C}{N}\frac{1}{\psi_0(N)} NX_0^0(1) \mathbb{E}\Big(\sum_{\beta,\gamma\geq 1} \text{nbr}_{\beta,\gamma}(r) 1(T_{\gamma\wedge \beta}>r-\tau_N) \vert B^\beta- B^\beta_{r-\tau_N}\vert
\Big),\end{aligned}$$ where the last equality follows by translation invariance.
**Lemma 33**. *For any $\tau_N<r<t$, we have $$\begin{aligned}
\mathbb{E}\Big(\sum_{\beta,\gamma\geq 1} \text{nbr}_{\beta,\gamma}(r) 1(T_{\gamma\wedge \beta}>r-\tau_N) \vert B^\beta- B^\beta_{r-\tau_N}\vert
\Big)\leq C \sqrt{ \tau_N} I(N).
\end{aligned}$$*
Use the above in [\[9e6.3\]](#9e6.3){reference-type="eqref" reference="9e6.3"} to get $$\begin{aligned}
&\mathbb{E}(\vert G_r^\tau(\phi)-G_r(\phi)\vert )\leq C \frac{1}{\psi_0(N)} X_0^0(1) \sqrt{ \tau_N} I(N)\leq CX_0^0(1) \sqrt{ \tau_N},\end{aligned}$$ thus giving $$\begin{aligned}
\label{9e9.26}
&\int_{\tau_N}^t \mathbb{E}(\vert G_r^\tau(\phi)-G_r(\phi)\vert ) dr \leq \int_{\tau_N}^t CX_0^0(1) \sqrt{ \tau_N} dr\to 0.\end{aligned}$$ The proof of Lemma [Lemma 26](#9l5.5){reference-type="ref" reference="9l5.5"} is now complete in view of [\[9e6.2\]](#9e6.2){reference-type="eqref" reference="9e6.2"}, [\[9e9.25\]](#9e9.25){reference-type="eqref" reference="9e9.25"}, [\[9e9.26\]](#9e9.26){reference-type="eqref" reference="9e9.26"}.
It remains to prove Lemma [Lemma 33](#9l9.1){reference-type="ref" reference="9l9.1"}.
By letting $\alpha=\beta\wedge \gamma$, and noting that $B^\beta_{r-\tau_N}=B^\alpha_{r-\tau_N}$ on the event $\{T_{\alpha}>r-\tau_N\}$, we may write the sum of $\beta,\gamma$ as $$\begin{aligned}
\sum_{k=0}^\infty \sum_{\alpha\geq 1, \vert \alpha\vert =k} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =k+m}} \mathbb{E}\Big(\text{nbr}_{\beta,\gamma}(r) 1(T_{\alpha}>r-\tau_N) \vert B^\beta- B^\alpha_{r-\tau_N}\vert \Big).\end{aligned}$$ For any $\alpha, \beta, \gamma$ as in the summation, by conditioning on $\mathcal{H}_{\alpha}$, we have $$\begin{aligned}
\label{9e9.15}
&\mathbb{E}\Big(\text{nbr}_{\beta,\gamma}(r)1(T_{\alpha}>r-\tau_N) \vert B^\beta- B^\alpha_{r-\tau_N}\vert \Big\vert \mathcal{H}_{\alpha}\Big)\nonumber\\
&\leq 1{\{r-\tau_N<T_{\alpha}<r, B^{\alpha}\neq \Delta\}}\cdot (\frac{N+\theta}{2N+\theta})^{l+m}\nonumber\\
&\times \mathbb{P}(T_{\pi \beta}-T_{\alpha}<r-T_{\alpha}\leq T_\beta-T_{\alpha}\vert \mathcal{H}_{\alpha})\nonumber\\
&\times \mathbb{P}(T_{\pi \gamma}-T_{\alpha}<r-T_{\alpha}\vert \mathcal{H}_{\alpha})\nonumber\\
&\times \mathbb{E}\Big(1( B^\beta- B^\gamma \in \mathcal{N}_N) \cdot \vert B^\beta- B^\alpha_{r-\tau_N}\vert \Big\vert \mathcal{H}_{\alpha}\Big).\end{aligned}$$ The first probability equals $\pi((2N+\theta)(r-T_\alpha), l-1)$ and the second $\Pi((2N+\theta)(r-T_\alpha), m-1)$. For the last expectation, we recall $\hat{B}^\beta, \hat{B}^\gamma$ from [\[9ea1.71\]](#9ea1.71){reference-type="eqref" reference="9ea1.71"}. Apply [\[9ea2.82\]](#9ea2.82){reference-type="eqref" reference="9ea2.82"} to get $$\begin{aligned}
\mathbb{E}\Big(1( B^\beta- B^\gamma \in \mathcal{N}_N) \cdot \vert B^\beta- B^\alpha_{r-\tau_N}\vert \Big\vert \mathcal{H}_{\alpha}\Big) \leq \mathbb{E}\Big(1( \sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma) \in [-2,2]^d) \cdot \vert B^\beta- B^\alpha_{r-\tau_N}\vert \Big\vert \mathcal{H}_{\alpha}\Big).\end{aligned}$$ Use $\vert B^\beta- B^\alpha\vert \leq |\hat{B}^\beta|+CN^{-1/2}$ to get $$\begin{aligned}
\vert B^\beta- B^\alpha_{r-\tau_N}\vert&\leq \vert B^\alpha- B^\alpha_{r-\tau_N}\vert+ \vert \hat{B}^\beta\vert +CN^{-1/2} \leq \vert B^\alpha- B^\alpha_{r-\tau_N}\vert+ \vert \hat{B}^\gamma\vert +CN^{-1/2},\end{aligned}$$ where the second inequality is by $\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma) \in [-2,2]^d$. It follows that $$\begin{aligned}
&\mathbb{E}\Big(1( B^\beta- B^\gamma \in \mathcal{N}_N) \cdot \vert B^\beta- B^\alpha_{r-\tau_N}\vert \Big\vert \mathcal{H}_{\alpha}\Big) \\
& \leq \mathbb{E}\Big(1( \sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma) \in [-2,2]^d) \cdot (\vert \hat{B}^\gamma\vert +CN^{-1/2}) \Big) +\vert B^\alpha- B^\alpha_{r-\tau_N}\vert \cdot \mathbb{P}(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma) \in [-2,2]^d).\end{aligned}$$ The second term above is at most $\vert B^\alpha- B^\alpha_{r-\tau_N}\vert \cdot {C}/{(1+l+m)^{d/2}}$ by Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"}. For the first term above, we condition on $\hat{B}^\gamma$ and use Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} again to get $$\begin{aligned}
&\mathbb{E}\Big(1( \sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma) \in [-2,2]^d) \cdot (\vert \hat{B}^\gamma\vert +CN^{-1/2}) \Big) \nonumber\\
&\leq \frac{C}{(1+l)^{d/2}} \mathbb{E}(\vert N^{-1/2}V_{m-1}^N\vert +CN^{-1/2})\leq \frac{C}{(1+l)^{d/2}} N^{-1/2} \cdot (m+1)^{1/2}.\end{aligned}$$
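The last inequality rests on the standard second-moment estimate for the walk (a brief sketch, assuming only that the increments of $V^N$ are i.i.d., mean zero, with second moments bounded uniformly in $N$; cf. the limiting componentwise variance $\sigma^2=1/6$ in Lemma 35 below): by the Cauchy-Schwarz inequality, for $m\geq 1$, $$\begin{aligned}
\mathbb{E}(\vert V_{m-1}^N\vert)\leq \big(\mathbb{E}(\vert V_{m-1}^N\vert^{2})\big)^{1/2}=\big((m-1)\,\mathbb{E}(\vert V_{1}^N\vert^{2})\big)^{1/2}\leq C(m+1)^{1/2}.\end{aligned}$$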
We conclude from the above that [\[9e6.11\]](#9e6.11){reference-type="eqref" reference="9e6.11"} is at most $$\begin{aligned}
\label{9e9.21}
&\mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{r-\tau_N<T_{\alpha}<r, B^{\alpha}\neq \Delta\}} \cdot C \sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \nonumber\\
&\times \pi((2N+\theta)(r-T_\alpha), l-1) \cdot \Pi((2N+\theta)(r-T_\alpha), m-1) \nonumber\\
&\times \Big[\frac{N^{-1/2}}{(1+l)^{d/2}} (m+1)^{1/2}+\vert B^\alpha- B^\alpha_{r-\tau_N}\vert \cdot \frac{1}{(1+l+m)^{d/2}}\Big] \Big):=I_1+I_2,\end{aligned}$$ where $I_1$ (resp. $I_2$) denotes the summation of the first term (resp. second term) in the last square bracket. It suffices to prove $I_i\leq C \sqrt{ \tau_N} I(N)$ for $i=1,2$.
We first consider $I_1$. By Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(i), the sum of $l$ gives $$\begin{aligned}
\label{9e9.16}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \pi((2N+\theta)(r-T_\alpha), l-1) \frac{1}{(1+l)^{d/2}}\nonumber\\
&\leq \frac{Ce^{\theta r}}{(1+(2N+2\theta)(r-T_\alpha))^{d/2}} \leq \frac{C}{(1+(2N+2\theta)(r-T_\alpha))^{2}}.\end{aligned}$$ Use the definition of $\Pi(\lambda,m)$ to get that the sum of $m$ equals $$\begin{aligned}
&\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)(r-T_\alpha), m-1) (1+m)^{1/2}\nonumber\\
&\leq 1+C\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} (1+m)^{1/2} \sum_{k=m}^\infty \pi((2N+\theta)(r-T_\alpha), k) \nonumber\\
&\leq 1+C\sum_{k=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k} \pi((2N+\theta)(r-T_\alpha), k) \sum_{m=0}^{k} (1+m)^{1/2}\nonumber\\
&\leq 1+Ce^{\theta r} \sum_{k=0}^\infty \pi((2N+2\theta)(r-T_\alpha), k) \cdot k^{3/2}\leq 1+C \mathbb{E}(\xi^{3/2}),\end{aligned}$$ where $\xi$ is a Poisson r.v. with parameter $(2N+2\theta)(r-T_\alpha)$. Use the Cauchy-Schwarz inequality to get that the above is at most $$\begin{aligned}
\label{9e9.17}
1+C (\mathbb{E}(\xi^2) \cdot \mathbb{E}(\xi))^{1/2}&\leq Ce^{\theta r} (1+(2N+2\theta)(r-T_\alpha))^{3/2}.\end{aligned}$$ Combine [\[9e9.16\]](#9e9.16){reference-type="eqref" reference="9e9.16"} and [\[9e9.17\]](#9e9.17){reference-type="eqref" reference="9e9.17"} to get $$\begin{aligned}
\label{9e9.22}
I_1\leq &C N^{-1/2}\mathbb{E}\Big(\sum_{\alpha\geq 1} 1_{\{r-\tau_N<T_{\alpha}<r, B^{\alpha}\neq \Delta\}} \frac{1}{(1+(2N+2\theta)(r-T_\alpha))^{1/2}} \Big).\end{aligned}$$ Apply Lemma [Lemma 18](#9l3.2a){reference-type="ref" reference="9l3.2a"} with $f(y)=1/(1+y)^{1/2}$ to see that $$\begin{aligned}
I_1\leq C N^{-1/2} e^{\theta r} \int_0^{(2N+2\theta)\tau_N} \frac{1}{(1+y)^{1/2}} dy\leq C \sqrt{ \tau_N} \leq C \sqrt{ \tau_N} I(N).\end{aligned}$$
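For completeness, the moment bound behind [\[9e9.17\]](#9e9.17){reference-type="eqref" reference="9e9.17"} is elementary: if $\xi$ is a Poisson random variable with parameter $\lambda\geq 0$, then by the Cauchy-Schwarz inequality, $$\begin{aligned}
\mathbb{E}(\xi^{3/2})=\mathbb{E}(\xi^{1/2}\cdot \xi)\leq \big(\mathbb{E}(\xi)\,\mathbb{E}(\xi^{2})\big)^{1/2}=\big(\lambda(\lambda+\lambda^{2})\big)^{1/2}=\lambda(1+\lambda)^{1/2}\leq (1+\lambda)^{3/2},\end{aligned}$$ applied above with $\lambda=(2N+2\theta)(r-T_\alpha)$.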
Next, we turn to $I_2$. The sum of $m$ gives $$\begin{aligned}
&\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)(r-T_\alpha), m) \frac{1}{(1+l+m)^{d/2}}\leq C \frac{1}{(1+l)^{d/2-1}},\end{aligned}$$ where the inequality is by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}. Apply Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(i) to see that the sum of $l$ gives $$\begin{aligned}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \pi((2N+\theta)(r-T_\alpha), l) \frac{1}{(1+l)^{d/2-1}} \leq \frac{C }{(1+(2N+2\theta)(r-T_\alpha))^{d/2-1}}.\end{aligned}$$ Combine the above to conclude $$\begin{aligned}
I_2&\leq C \mathbb{E}\Big(\sum_{\alpha\geq 1} 1_{\{r-\tau_N<T_{\alpha}<r, B^{\alpha}\neq \Delta\}} \frac{\vert B^\alpha-B^{\alpha}_{r-\tau_N}\vert }{(1+(2N+2\theta)(r-T_\alpha))^{d/2-1}} \Big).\end{aligned}$$ Use Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to get $$\begin{aligned}
I_2\leq &C (2N+\theta) \int_{r-\tau_N}^r \mathbb{E}\Big(\sum_{\alpha\geq 1} 1_{\{T_{\pi\alpha}<s\leq T_{\alpha}, B^{\alpha}\neq \Delta\}} \frac{\vert B^\alpha_s-B^{\alpha}_{r-\tau_N}\vert }{(1+(2N+2\theta)(r-s))^{d/2-1}}\Big) ds.\end{aligned}$$ For each $r-\tau_N<s<r$, by conditioning on $\mathcal F_{r-\tau_N}$, we may use the Markov property and Lemma [Lemma 6](#9l2.0){reference-type="ref" reference="9l2.0"} to get $$\begin{aligned}
\label{9ec1.23}
&\mathbb{E}\Big(\sum_{\alpha\geq 1} 1{\{T_{\pi\alpha}<s\leq T_{\alpha}, B^{\alpha}\neq \Delta\}} \vert B^\alpha_s-B^{\alpha}_{r-\tau_N}\vert \Big)\nonumber\\
&= e^{\theta (s-(r-\tau_N))} \mathbb{E}(\vert B^N_{s-(r-\tau_N)}-B^{N}_0\vert) \leq C\sqrt{s-(r-\tau_N)}\leq C \sqrt{\tau_N},\end{aligned}$$ where $B^N(t)=B^N_t$ is the continuous-time random walk in Lemma [Lemma 6](#9l2.0){reference-type="ref" reference="9l2.0"}. Although $\phi(x)=\vert x\vert$ is not bounded, as is required to apply Lemma [Lemma 6](#9l2.0){reference-type="ref" reference="9l2.0"}, we may work with $\phi_n(x)=\vert x\vert \wedge n$ and let $n\to\infty$. It follows that $$\begin{aligned}
I_2\leq& C \sqrt{ \tau_N} (2N+2\theta) \int_{r-\tau_N}^r \frac{1}{(1+(2N+2\theta)(r-s))^{d/2-1}} ds\nonumber\\
=&C \sqrt{ \tau_N} \int_{0}^{(2N+2\theta)\tau_N} \frac{1}{(1+y)^{d/2-1}} dy \leq C \sqrt{ \tau_N} I(N),\end{aligned}$$ as required.
# Convergence of the constant {#9s7}
This section is devoted to the proof of Lemma [Lemma 28](#9l5.7){reference-type="ref" reference="9l5.7"}, from which we obtain the constant $b_d$ as in [\[9ec10.64\]](#9ec10.64){reference-type="eqref" reference="9ec10.64"}. Recall $b_d^\tau$ from [\[9e4.2\]](#9e4.2){reference-type="eqref" reference="9e4.2"} and $\tau_N$ from [\[9e3.34\]](#9e3.34){reference-type="eqref" reference="9e3.34"}. It suffices to find the limits of $\mathbb{E}(Z_1(\tau_N))$ and $\mathbb{E}(\vert \{1\}_{\tau_N}\vert )$ as $N\to\infty$. Use the definition of $\{1\}_{\tau_N}$ from [\[9e4.3\]](#9e4.3){reference-type="eqref" reference="9e4.3"} to get $$\begin{aligned}
\mathbb{E}(\vert \{1\}_{\tau_N}\vert )=\mathbb{E}\Big(\sum_{\beta \geq 1} 1(T_{\pi\beta}<\tau_N\leq T_\beta, B^\beta\neq \Delta)\Big)=e^{\theta \tau_N} \to 1 \text{ as } N\to \infty,\end{aligned}$$ where the second equality is by Lemma [Lemma 6](#9l2.0){reference-type="ref" reference="9l2.0"}. It remains to prove $\lim_{N\to \infty} \mathbb{E}(Z_1(\tau_N))=b_d$. Recall $Z_1(\tau_N)$ from [\[9e4.2\]](#9e4.2){reference-type="eqref" reference="9e4.2"} and $F_\beta(r)$ from [\[9e3.8\]](#9e3.8){reference-type="eqref" reference="9e3.8"} to see $$\begin{aligned}
\label{9ec5.59}
I_0:=&\psi_0(N) \mathbb{E}(Z_1(\tau_N))=\mathbb{E}\Big(\sum_{\beta\in \{1\}_{\tau_N}} \sum_{\gamma}1(T_{\pi \gamma}<\tau_N, B^\beta-B^\gamma \in \mathcal{N}_N, T_{\gamma \wedge \beta}>0)\Big)\\
=& \mathbb{E}\Big( \sum_{\beta,\gamma \geq 1} 1(T_{\pi \beta}<\tau_N\leq T_\beta, T_{\pi \gamma}<\tau_N, B^\beta-B^\gamma \in \mathcal{N}_N)\Big)= \mathbb{E}\Big(\sum_{\beta,\gamma \geq 1} \text{nbr}_{\beta,\gamma}(\tau_N)\Big),\nonumber\end{aligned}$$ where in the second equality we have replaced the condition $T_{\gamma \wedge \beta}>0$ by $\gamma_0=\beta_0=1$. The last equality is by the definition of $\text{nbr}_{\beta,\gamma}(r)$ from [\[9e2.24\]](#9e2.24){reference-type="eqref" reference="9e2.24"}. We note that $B^\gamma-B^\beta\neq 0$, $T_{\pi \beta}<r\leq T_\beta$ and $T_{\pi \gamma}<r$ imply $\gamma\wedge \beta$ is not $\beta$, but it is possible that $\gamma\wedge \beta=\gamma$. Let $\vert \beta\wedge \gamma\vert =k$ for some $k\geq 0$. There are the following two cases: $$\begin{aligned}
\label{9e0.0}
\begin{cases}
&\text{ (i) } \gamma=\beta|k \text{ and } \vert \beta\vert =k+1+l \text{ for some $l\geq 0$};\\
& \text{(ii) } \vert \beta\vert =k+1+l \text{ and } \vert \gamma\vert =k+1+m \text{ for some $l\geq 0$ and $m\geq 0$}.
\end{cases}\end{aligned}$$ Set $\alpha=\beta\wedge \gamma$ so that $|\alpha|=k$. For case (i), we get $\gamma=\alpha$. For case (ii), we have either $\beta_{k+1}=1, \gamma_{k+1}=0$ or $\gamma_{k+1}=1, \beta_{k+1}=0$ which are symmetric, hence we may proceed with $\beta_{k+1}=1, \gamma_{k+1}=0$. Now conclude from the above and [\[9e0.0\]](#9e0.0){reference-type="eqref" reference="9e0.0"} that $$\begin{aligned}
\label{9e2.61}
I_0&=\mathbb{E}\Big(\sum_{k=0}^\infty \sum_{\alpha\geq 1, \vert \alpha\vert =k} \sum_{l=0}^\infty \sum_{\substack{\beta>\alpha,\\ \vert \beta\vert =k+1+l}} 1_{\{\gamma=\alpha\}} \text{nbr}_{\beta,\gamma}(\tau_N)\Big)\nonumber\\
&+2\mathbb{E}\Big(\sum_{k=0}^\infty \sum_{\substack{\alpha\geq 1,\\ \vert \alpha\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta>\alpha,\\ \vert \beta\vert =k+1+l,\\ \beta_{k+1}=1}}\sum_{\substack{ \gamma> \alpha,\\ \vert \gamma\vert =k+1+m,\\ \gamma_{k+1}=0 }} \text{nbr}_{\beta,\gamma}(\tau_N)\Big):=D_1^N+D_2^N.\end{aligned}$$ *Step 1.* For case (i) when $\gamma=\alpha<\beta$ with $|\alpha|=k$ and $|\beta|=k+1+l$, we get $$\begin{aligned}
\label{e3.22}
&\mathbb{E}(\text{nbr}_{\beta,\gamma}(\tau_N))=\mathbb{E}(\text{nbr}_{\beta,\alpha}(\tau_N))=\mathbb{P}(T_{\pi \beta}<\tau_N\leq T_\beta, B^\beta-B^\alpha \in \mathcal{N}_N)\nonumber\\
&=\pi((2N+\theta) \tau_N, k+l+1) \cdot (\frac{N+\theta}{2N+\theta})^{k+l+1} \cdot\mathbb{P}(V_{l+1}^N\in N^{1/2}\mathcal{N}_N),\end{aligned}$$ where the first term is by [\[9ea6.84\]](#9ea6.84){reference-type="eqref" reference="9ea6.84"}, and the second term gives the probability of $\{B^\beta, B^\alpha \neq \Delta\}$. The last term follows from [\[ea3.22\]](#ea3.22){reference-type="eqref" reference="ea3.22"}. Hence $$\begin{aligned}
D_1^N&= \sum_{k=0}^\infty \sum_{l=0}^\infty 2^k \cdot 2^{l+1} \cdot \pi((2N+\theta) \tau_N, k+l+1) \cdot (\frac{N+\theta}{2N+\theta})^{k+l+1} \cdot\mathbb{P}(V_{l+1}^N\in N^{1/2}\mathcal{N}_N)\nonumber\\
&= e^{\theta \tau_N} \sum_{l=0}^\infty \mathbb{P}(\Gamma_{l+1}<(2N+2\theta) \tau_N) \cdot\mathbb{P}(V_{l+1}^N\in N^{1/2}\mathcal{N}_N).
\end{aligned}$$ For $l>\sqrt{N\tau_N}$, we use Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the last probability so that $$\begin{aligned}
&\sum_{l>\sqrt{N\tau_N}} \mathbb{P}(\Gamma_{l+1}<(2N+2\theta) \tau_N) \cdot\mathbb{P}(V_{l+1}^N\in N^{1/2}\mathcal{N}_N)\\
&\leq \sum_{l>\sqrt{N\tau_N}} \frac{C}{(1+l)^{d/2}}\leq C(N\tau_N)^{1-d/2} \to 0.
\end{aligned}$$ When $l<\sqrt{N\tau_N}$, by Markov's inequality we get $$\begin{aligned}
\mathbb{P}(\Gamma_{l+1}>(2N+2\theta) \tau_N)\leq \frac{l+1}{(2N+2\theta) \tau_N}\leq \frac{1}{\sqrt{N\tau_N}} \to 0.
\end{aligned}$$ Therefore we may replace $\mathbb{P}(\Gamma_{l+1}<(2N+2\theta) \tau_N)$ by $1$. It follows that $$\begin{aligned}
\label{e2.61}
\lim_{N\to \infty} D_1^N &= \lim_{N\to \infty} e^{\theta \tau_N} \sum_{l<\sqrt{N\tau_N}} \mathbb{P}(V_{l+1}^N\in N^{1/2}\mathcal{N}_N)\nonumber\\
&= \lim_{N\to \infty} \sum_{l<\sqrt{N\tau_N}} \mathbb{P}(V_{l+1}^N\in [-1,1]^d-\{0\})=\sum_{l=0}^\infty \mathbb{P}(V_{l+1}\in [-1,1]^d-\{0\}).
\end{aligned}$$
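The resummation used in the display for $D_1^N$ above is worth recording explicitly: writing $2^{k}\cdot 2^{l+1}(\frac{N+\theta}{2N+\theta})^{k+l+1}=(\frac{2N+2\theta}{2N+\theta})^{k+l+1}$ and recalling that $\pi(\lambda,j)=e^{-\lambda}\lambda^{j}/j!$, we have, for every $j\geq 0$, $$\begin{aligned}
\Big(\frac{2N+2\theta}{2N+\theta}\Big)^{j}\pi((2N+\theta)\tau_N, j)=e^{-(2N+\theta)\tau_N}\frac{((2N+2\theta)\tau_N)^{j}}{j!}=e^{\theta \tau_N}\,\pi((2N+2\theta)\tau_N, j),\end{aligned}$$ and summing over $j=k+l+1\geq l+1$ gives $e^{\theta \tau_N}\,\mathbb{P}(\Gamma_{l+1}<(2N+2\theta)\tau_N)$, as used above. The interchange of limit and sum in [\[e2.61\]](#e2.61){reference-type="eqref" reference="e2.61"} is then justified by dominated convergence, with the bound $\mathbb{P}(V_{l+1}^N\in [-1,1]^d-\{0\})\leq C(1+l)^{-d/2}$ of Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} as the dominating sequence.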
*Step 2.* Turning to $D_2^N$, we let $\alpha, \beta, \gamma$ be as in the summation and then condition on $\mathcal{H}_{\alpha}$ to get $$\begin{aligned}
\label{9e2.62}
\mathbb{E}(\text{nbr}_{\beta,\gamma}(r)\vert \mathcal{H}_{\alpha})=& 1{\{T_{\alpha}<r, B^{\alpha}\neq \Delta\}}\cdot (\frac{N+\theta}{2N+\theta})^{1+l+m}\nonumber\\
&\times \mathbb{P}(T_{\pi \beta}-T_{\alpha}<r-T_{\alpha}\leq T_\beta-T_{\alpha}\vert \mathcal{H}_{\alpha})\nonumber\\
&\times \mathbb{P}(T_{\pi \gamma}-T_{\alpha}<r-T_{\alpha}\vert \mathcal{H}_{\alpha})\nonumber\\
&\times \mathbb{P}(W^N+V_{l+m}^N\in N^{1/2}\mathcal{N}_N),\end{aligned}$$ where the second term is the probability that no death events occur on the family line after $\beta,\gamma$ split. In the last term, we set $W^N$ to be uniform on $\mathcal{N}_N$, independent of $V_{n}^N$, to represent the step taken by $\alpha\vee e_\alpha$ that separates $\beta\vert (k+1)$ and $\gamma\vert (k+1)$ from their parent $\alpha$. After they split in generation $k+1$, we let $V_{l+m}^N$ be the steps taken independently on their family lines. Apply [\[9ea2.00\]](#9ea2.00){reference-type="eqref" reference="9ea2.00"} and [\[9ea6.84\]](#9ea6.84){reference-type="eqref" reference="9ea6.84"} in [\[9e2.62\]](#9e2.62){reference-type="eqref" reference="9e2.62"} to see $$\begin{aligned}
\label{9e2.63}
&\mathbb{E}(\text{nbr}_{\beta,\gamma}(r)\vert \mathcal{H}_{\alpha})= 1{\{T_{\alpha}<r, B^{\alpha}\neq \Delta\}}\cdot (\frac{N+\theta}{2N+\theta})^{1+l+m} \pi((2N+\theta) (r-T_\alpha), l) \nonumber\\
&\times \Pi((2N+\theta) (r-T_\alpha), m) \cdot \mathbb{P}(W^N+V_{l+m}^N\in N^{1/2}\mathcal{N}_N).\end{aligned}$$ Use [\[e3.22\]](#e3.22){reference-type="eqref" reference="e3.22"} and [\[9e2.63\]](#9e2.63){reference-type="eqref" reference="9e2.63"} in $D_2^N$ in [\[9e2.61\]](#9e2.61){reference-type="eqref" reference="9e2.61"} to get $$\begin{aligned}
\label{9e2.64}
D_2^N= 2\mathbb{E}\Big(&\sum_{k=0}^\infty \sum_{\alpha\geq 1, \vert \alpha\vert =k} 1{\{T_{\alpha}<\tau_N, B^{\alpha}\neq \Delta\}}\nonumber\\
&\sum_{l=0}^\infty \sum_{m=0}^\infty 2^l \cdot 2^{m} \cdot (\frac{N+\theta}{2N+\theta})^{1+l+m} \pi((2N+\theta) (\tau_N-T_\alpha), l)\nonumber\\
&\times \Pi((2N+\theta) (\tau_N-T_\alpha), m) \cdot \mathbb{P}(W^N+V_{l+m}^N\in [-1,1]^d-\{0\})\Big).\end{aligned}$$ For notational ease, we let $$\begin{aligned}
h_N(l,m):=\mathbb{P}(W^N+V_{l+m}^N\in [-1,1]^d-\{0\}).\end{aligned}$$ Use Fubini's theorem to get $$\begin{aligned}
D_2^N=&\sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{1+l+m} h_N(l,m) \cdot \mathbb{E}\Big( \sum_{\alpha\geq 1} 1{\{T_{\alpha}<\tau_N, B^{\alpha}\neq \Delta\}} \\
&\times \pi((2N+\theta) (\tau_N-T_\alpha), l) \cdot \Pi((2N+\theta) (\tau_N-T_\alpha), m) \Big).\end{aligned}$$ By Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"}, the expectation above is equal to $$\begin{aligned}
&(2N+\theta)\int_0^{\tau_N} \mathbb{E}\Big( \sum_{\alpha\geq 1} 1{\{T_{\pi \alpha}<r\leq T_{\alpha}, B^{\alpha}\neq \Delta\}} \nonumber\\
&\quad \quad \times \pi((2N+\theta) (\tau_N-r), l) \cdot \Pi((2N+\theta) (\tau_N-r), m) \Big) dr\nonumber\\
&=(2N+\theta)\int_0^{\tau_N} e^{\theta r} \pi((2N+\theta) (\tau_N-r), l) \cdot \Pi((2N+\theta) (\tau_N-r), m) dr,\end{aligned}$$ thus giving $$\begin{aligned}
D_2^N&=\sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{1+l+m} h_N(l,m) \cdot (2N+\theta)\nonumber\\
&\quad \int_0^{\tau_N} e^{\theta (\tau_N-r)} \pi((2N+\theta)r, l) \cdot \sum_{j=m}^\infty \pi((2N+\theta)r, j) dr\nonumber\\
&=\sum_{l=0}^\infty \sum_{m=0}^\infty h_N(l,m)\cdot (2N+2\theta) \int_0^{\tau_N} e^{\theta (\tau_N+r)}\nonumber\\
& \quad \times \pi((2N+2\theta)r, l) \cdot \sum_{j=m}^\infty (1+\varepsilon_N)^{m-j} \pi((2N+2\theta)r, j) dr,\end{aligned}$$ where the last equality also uses $\varepsilon_N=\frac{\theta}{2N+\theta}$. Notice that $1\leq e^{\theta (\tau_N+r)} \leq e^{2\theta \tau_N}$ for any $0\leq r\leq \tau_N$. So we may replace $e^{\theta (\tau_N+r)}$ by $1$. Use a change of variable to simplify the above integral and get $$\begin{aligned}
\label{9e9.35}
D_2^N& \sim\sum_{l=0}^\infty \sum_{m=0}^\infty h_N(l,m) \int_0^{(2N+2\theta)\tau_N} \pi(r, l) \sum_{j=m}^\infty (1+\varepsilon_N)^{m-j} \pi(r, j) dr\nonumber\\
& =\sum_{l=0}^\infty \sum_{j=0}^\infty \int_0^{(2N+2\theta)\tau_N} \pi(r, l) \pi(r, j) dr \sum_{m=0}^{j} (1+\varepsilon_N)^{m-j} h_N(l,m).\end{aligned}$$
**Lemma 34**. *For each $l\geq 0$ and $m\geq 0$, we have $$\begin{aligned}
\label{e1.45}
\lim_{N\to\infty} h_N(l,m)=h(l,m):=\mathbb{P}(W_0+V_{l+m}\in [-1,1]^d).\end{aligned}$$*
The proof is immediate by the weak convergence of $W^N$ to $W_0$ and $V_{n}^N$ to $V_n$. We also note that since $W_0$ is uniform on $[-1,1]^d$, we get $\mathbb{P}(W_0+V_{l+m}=0)=0$, so the limit is as in [\[e1.45\]](#e1.45){reference-type="eqref" reference="e1.45"}.
When $d\geq 5$, for each $l,j \geq 0$, we may bound the summand with respect to $l,j$ by $$\begin{aligned}
\int_0^{\infty} \pi(r, l) \pi(r, j) dr \sum_{m=0}^{\infty} \frac{C}{(l+m+1)^{d/2}}\leq \int_0^{\infty} \pi(r, l) \pi(r, j) dr \frac{C}{(l+1)^{d/2-1}}.\end{aligned}$$ Notice that $$\begin{aligned}
&\sum_{l=0}^\infty \sum_{j=0}^\infty \int_0^{\infty} \pi(r, l) \pi(r, j) dr \frac{C}{(l+1)^{d/2-1}}\nonumber\\
=& \int_0^{\infty} \sum_{l=0}^\infty \pi(r, l) \frac{C}{(l+1)^{d/2-1}} dr\leq \int_0^{\infty} \frac{C}{(1+r)^{d/2-1}} dr<\infty.\end{aligned}$$ By the Dominated Convergence Theorem, we may take the limit inside [\[9e9.35\]](#9e9.35){reference-type="eqref" reference="9e9.35"} to get $$\begin{aligned}
\lim_{N\to\infty} D_2^N&= \sum_{l=0}^\infty \sum_{j=0}^\infty \int_0^{\infty} \pi(r, l) \pi(r, j) dr \sum_{m=0}^{j} h(l,m)\nonumber\\
&=\sum_{l=0}^\infty \sum_{j=0}^\infty \frac{1}{2^{l+j+1}} \frac{(l+j)!}{l!j!} \sum_{m=0}^j h(l,m).\end{aligned}$$ Since $\psi_0(N) \to 2^d$ as $N\to \infty$, we get (recall $I_0$ from [\[9ec5.59\]](#9ec5.59){reference-type="eqref" reference="9ec5.59"}) $$\begin{aligned}
&\lim_{N\to \infty} \mathbb{E}(Z_1(\tau_N))=2^{-d}\lim_{N\to \infty} I_0=2^{-d}\lim_{N\to \infty} (D_1^N+D_2^N)\\
&=2^{-d} \sum_{l=0}^\infty \mathbb{P}(V_{l+1}\in [-1,1]^d-\{0\})+2^{-d} \sum_{l=0}^\infty \sum_{j=0}^\infty \frac{1}{2^{l+j+1}} \frac{(l+j)!}{l!j!} \sum_{m=0}^j h(l,m)=b_d,\end{aligned}$$ as required.
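For the reader's convenience, the integral identity used in computing $\lim_{N\to\infty} D_2^N$ above is the elementary Gamma integral: for any $l,j\geq 0$, $$\begin{aligned}
\int_0^{\infty} \pi(r, l)\, \pi(r, j)\, dr=\int_0^{\infty} e^{-2r}\,\frac{r^{l+j}}{l!\,j!}\, dr=\frac{(l+j)!}{l!\,j!}\,\frac{1}{2^{l+j+1}},\end{aligned}$$ since $\int_0^\infty e^{-2r}r^{n}\,dr=n!/2^{n+1}$.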
Turning to $d=4$, since $\psi_0(N)\sim 2^4 \log N$, we get that $$\begin{aligned}
J_0:=&\lim_{N\to \infty} \mathbb{E}(Z_1(\tau_N))=\lim_{N\to \infty} \frac{1}{2^4\log N} (D_1^N+D_2^N)\nonumber\\
=&\lim_{N\to \infty} \frac{1}{2^4\log N} \int_0^{(2N+2\theta)\tau_N} \sum_{l=0}^\infty \pi(r, l) \sum_{j=0}^\infty \pi(r, j) \sum_{m=0}^{j} (1+\varepsilon_N)^{m-j} h_N(l,m) dr,\end{aligned}$$ where the last equality is by [\[e2.61\]](#e2.61){reference-type="eqref" reference="e2.61"} and [\[9e9.35\]](#9e9.35){reference-type="eqref" reference="9e9.35"}. We claim that we may get rid of $(1+\varepsilon_N)^{m-j}$ and conclude $$\begin{aligned}
\label{9ec2.49}
J_0=\lim_{N\to \infty} \frac{1}{2^4\log N} \int_0^{(2N+2\theta)\tau_N} \sum_{l=0}^\infty \pi(r, l) \sum_{j=0}^{\infty} \pi(r, j) \sum_{m=0}^{j} h_N(l,m) dr.\end{aligned}$$
To see this, we note that if $j\leq AN\tau_N$, then $1\geq (1+\varepsilon_N)^{m-j}\geq e^{-A\theta \tau_N} \geq 1-\eta$ for any $\eta>0$ once $N$ is large, allowing us to remove $(1+\varepsilon_N)^{m-j}$ from the sum of $j\leq AN\tau_N$. It suffices to show that the contribution from the sum of $j>AN\tau_N$ converges to $0$.
Using Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound $h_N(l,m)$ by $C/(1+l+m)^2$, we get that the sum of $m$ gives at most $C/(1+l)$. So the contribution from the sum of $j>AN\tau_N$ is at most $$\begin{aligned}
\label{9ec2.03}
&\frac{1}{2^4\log N} \int_0^{(2N+2\theta)\tau_N} \sum_{j>AN\tau_N} \pi(r, j) \sum_{l=0}^\infty \pi(r, l) \frac{C}{1+l} dr \nonumber\\
&\leq \frac{C}{\log N} \sum_{j>AN\tau_N} \mathbb{P}({\Gamma_j}<(2N+2\theta)\tau_N) \int_0^{(2N+2\theta)\tau_N} \frac{1}{1+r} dr,\end{aligned}$$ where in the last inequality we apply Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (i) to bound the sum for $l$ and bound $\pi(r,j)$ by $\Pi(r,j)=\mathbb{P}({\Gamma_j}<r)\leq \mathbb{P}({\Gamma_j}<(2N+2\theta)\tau_N)$. Use Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii) to see the sum of $j$ is at most $C2^{-N\tau_N}$. The integral of $r$ can be bounded by $C\log N$. We conclude [\[9ec2.03\]](#9ec2.03){reference-type="eqref" reference="9ec2.03"} is at most $C2^{-N\tau_N}\to 0$, thus giving [\[9ec2.49\]](#9ec2.49){reference-type="eqref" reference="9ec2.49"}.\
Next, the integral for $0<r<4\log N$ is at most $$\begin{aligned}
&\frac{1}{2^4\log N} \int_0^{4\log N} \sum_{l=0}^\infty \pi(r, l) \sum_{j=0}^{\infty} \pi(r, j) \frac{C}{1+l} dr \nonumber\\
&\leq\frac{1}{2^4\log N} \int_0^{4\log N} \frac{C}{1+r} dr\leq \frac{1}{\log N} C\log(\log N) \to 0.\end{aligned}$$ It follows that $$\begin{aligned}
\label{9ec2.07}
J_0=\lim_{N\to \infty} \frac{1}{2^4\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=0}^\infty \pi(r, l) \sum_{j=0}^{\infty} \pi(r, j) \sum_{m=0}^{j} h_N(l,m) dr.\end{aligned}$$ We claim that the above sum for $l\leq \log N$ and $j\leq \log N$ can be ignored. To see this, we note that the contribution from $l\leq \log N$ is at most $$\begin{aligned}
\label{9ec2.01}
&\frac{1}{2^4\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=0}^{\log N} \pi(r, l) \sum_{j=0}^\infty \pi(r, j) \frac{C}{1+l} dr\nonumber\\
&= \frac{1}{2^4\log N} \sum_{l=0}^{\log N} \frac{C}{1+l} \int_0^{\infty} \pi(r, l) dr\leq \frac{1}{2^4\log N} C\log (\log N) \to 0.\end{aligned}$$ Let$\xi$ be a Poisson random variable with parameter $r>4\log N$. By Chebyshev's inequality, $$\begin{aligned}
\sum_{j=0}^{ \log N} \pi(r, j)=\mathbb{P}(\xi\leq \log N)& \leq \mathbb{P}(\vert \xi-r\vert \geq r- \log N) \leq \frac{r}{(r-\log N)^2}\leq \frac{C}{\log N}.\end{aligned}$$ Hence the sum for $j\leq \log N$ is at most $$\begin{aligned}
\label{9ec2.10}
&\frac{1}{2^4\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=0}^\infty \pi(r, l) \sum_{j=0}^{ \log N} \pi(r, j) \frac{C}{1+l} dr\nonumber\\
&\leq \frac{1}{2^4\log N} \frac{C}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \frac{C}{1+r} dr \to 0. \end{aligned}$$ Use the above to conclude $$\begin{aligned}
\label{9ec2.08}
J_0=\lim_{N\to \infty} \frac{1}{2^4\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=\log N}^\infty \pi(r, l) \sum_{j= \log N}^\infty \pi(r, j) \sum_{m=0}^{j} h_N(l,m) dr.\end{aligned}$$
Now we are ready to replace $h_N(l,m)$ by its asymptotics when $l\geq \log N$. Define $$\begin{aligned}
g(l,m)= 2^d (2\pi/6)^{-d/2} (l+m)^{-d/2} \text{ for } l\geq \log N, m\geq 0.\end{aligned}$$
**Lemma 35**. *(i) If $N\to\infty$ and $x_n/n^{1/2}\to x$ as $n\to \infty$, then for any Borel set $B\subset \mathbb{R}^d$ with $\vert \partial B\vert =0$ and $\vert B\vert <\infty$, we have $$\begin{aligned}
n^{d/2}\mathbb{P}(x_n+V_{n}^N\in B) \to \vert B\vert \cdot (2\pi\sigma^2)^{-d/2} e^{-{\vert x\vert ^2}/{2\sigma^2}},\end{aligned}$$ where $\sigma^2=1/6$ is the limit of the variance of one component of $V_1^N$ as $N\to\infty$.\
(ii) For any $\varepsilon>0$ small, if $N$ is large, then $$\begin{aligned}
\label{9ec1.85}
h_N(l,m)/g(l,m)\in [1-\varepsilon,1+\varepsilon], \quad \forall l\geq \log N, m\geq 0.\end{aligned}$$*
By Lemma 4.6 of [@BDS89] and its proof therein, we see that (i) holds. For the proof of (ii), we note that (i) ensures [\[9ec1.85\]](#9ec1.85){reference-type="eqref" reference="9ec1.85"} holds for any fixed finite collection of $l,m$ once $N$ is large. Assume to the contrary that [\[9ec1.85\]](#9ec1.85){reference-type="eqref" reference="9ec1.85"} fails along a sequence $l_n,m_n \to \infty$. Then this would contradict the results in (i), thus giving the proof.
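Concretely, part (i) is what produces the constant in $g$ (a heuristic reading, which (ii) makes uniform): conditioning on the bounded value of $W^N$, and applying (i) with $n=l+m$, $x=0$ and $B=[-1,1]^d$ of volume $2^d$ (deleting the single point $0$ does not change $\vert B\vert$), we get $$\begin{aligned}
(l+m)^{d/2}\, h_N(l,m)\ \approx\ 2^{d}\,(2\pi\sigma^{2})^{-d/2},\end{aligned}$$ which for $d=4$ and $\sigma^{2}=1/6$ equals $2^{4}(\pi/3)^{-2}=144/\pi^{2}$, i.e. the constant in $g(l,m)$.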
By the above lemma with $d=4$, we may replace $h_N(l,m)$ in [\[9ec2.08\]](#9ec2.08){reference-type="eqref" reference="9ec2.08"} by $g(l,m)=2^4 \cdot (\pi/3)^{-2} (l+m)^{-2}$ to get $$\begin{aligned}
\label{9ec3.57}
J_0=\lim_{N\to \infty} \frac{9\pi^{-2}}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=\log N}^\infty \pi(r, l) \sum_{j=\log N}^\infty \pi(r, j) \sum_{m=0}^{j} \frac{1}{(l+m)^2} dr.\end{aligned}$$ Since both $l\geq \log N$ and $j\geq \log N$ are large, a bit of arithmetic implies $$\begin{aligned}
\sum_{m=0}^{j} \frac{1}{(l+m)^2} \sim \Big(\frac{1}{1+l}-\frac{1}{1+l+j}\Big).\end{aligned}$$ Therefore [\[9ec3.57\]](#9ec3.57){reference-type="eqref" reference="9ec3.57"} becomes $$\begin{aligned}
J_0=\lim_{N\to \infty} \frac{9\pi^{-2}}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=\log N}^\infty \pi(r, l) \sum_{j= \log N}^\infty \pi(r, j) \Big(\frac{1}{1+l}-\frac{1}{1+l+j}\Big) dr.\end{aligned}$$ Use the same reasoning as above (see, e.g., [\[9ec2.01\]](#9ec2.01){reference-type="eqref" reference="9ec2.01"} and [\[9ec2.10\]](#9ec2.10){reference-type="eqref" reference="9ec2.10"}) to write back the terms for $l\leq \log N$ and $j\leq \log N$ to get $$\begin{aligned}
J_0=\lim_{N\to \infty} \frac{9\pi^{-2}}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \sum_{l=0}^\infty \pi(r, l) \sum_{j=0}^\infty \pi(r, j) \Big(\frac{1}{1+l}-\frac{1}{1+l+j}\Big) dr.\end{aligned}$$ If $\xi_1, \xi_2$ are two independent Poisson random variables with parameter $r\geq 4\log N$, then $$\begin{aligned}
J_0=\lim_{N\to \infty} \frac{9\pi^{-2}}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \Big[\mathbb{E}\Big(\frac{1}{1+\xi_1}\Big)-\mathbb{E}\Big(\frac{1}{1+\xi_1+\xi_2}\Big)\Big] dr.\end{aligned}$$ Since $r>4\log N$ is large, one can check by the Cauchy-Schwarz inequality that $$\begin{aligned}
\mathbb{E}\Big(\frac{1}{1+\xi_1}\Big)\sim \frac{1}{1+r} \quad \text{ and } \quad \mathbb{E}\Big(\frac{1}{1+\xi_1+\xi_2}\Big)\sim \frac{1}{1+2r},\end{aligned}$$ thus giving $$\begin{aligned}
J_0&=\lim_{N\to \infty}\frac{9\pi^{-2}}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \frac{1}{1+r}dr \nonumber\\
&\quad -\lim_{N\to \infty}\frac{9\pi^{-2}}{\log N} \int_{4\log N}^{(2N+2\theta)\tau_N} \frac{1}{1+2r} dr =\frac{9}{2\pi^2}=b_4,
\end{aligned}$$ as required.
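One concrete way to check the two asymptotics used in the last step is the exact identity: if $\xi$ is Poisson with parameter $\lambda>0$, then $$\begin{aligned}
\mathbb{E}\Big(\frac{1}{1+\xi}\Big)=\sum_{k=0}^{\infty}\frac{1}{1+k}\,e^{-\lambda}\frac{\lambda^{k}}{k!}=\frac{e^{-\lambda}}{\lambda}\sum_{k=0}^{\infty}\frac{\lambda^{k+1}}{(k+1)!}=\frac{1-e^{-\lambda}}{\lambda}.\end{aligned}$$ Taking $\lambda=r$ and, since $\xi_1+\xi_2$ is Poisson with parameter $2r$, $\lambda=2r$, both expectations agree with $1/(1+r)$ and $1/(1+2r)$ up to a factor $1+O(1/\log N)$ on the range $r>4\log N$, which does not affect the limit.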
# Proof of Lemma [Lemma 27](#9l5.6){reference-type="ref" reference="9l5.6"} {#9s7.2}
We turn to the proof of Lemma [Lemma 27](#9l5.6){reference-type="ref" reference="9l5.6"} in this section. It suffices to show that $$\begin{aligned}
\lim_{N\to\infty} \int_{0}^t \mathbb{E}\Big(\vert G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)\vert \Big)dr=0.\end{aligned}$$ We already know from [\[9e9.31\]](#9e9.31){reference-type="eqref" reference="9e9.31"} that $$\begin{aligned}
&\lim_{N\to \infty} \int_0^{\tau_N} \mathbb{E}(G_r^\tau(\phi))dr = 0.\end{aligned}$$ Recall $X_r^{1,\tau}(\phi)$ from [\[9e3.13\]](#9e3.13){reference-type="eqref" reference="9e3.13"} to see that $$\begin{aligned}
\mathbb{E}( X_r^{1,\tau}(\phi))&\leq \frac{1}{N} \| \phi\| _\infty \mathbb{E}\Big( \sum_{\beta} 1(T_{\pi\beta}<r\leq T_\beta, B^\beta \neq \Delta)\Big)\nonumber\\
&= \| \phi\| _\infty \mathbb{E}(X_r^0(1))=e^{\theta r} \| \phi\| _\infty X_0^0(1).\end{aligned}$$ Lemma [Lemma 28](#9l5.7){reference-type="ref" reference="9l5.7"} gives $b_d^\tau \leq 1+b_d$ for $N$ large. It follows that $$\begin{aligned}
&\int_0^{\tau_N} \mathbb{E}\Big(b_d^\tau X_r^{1,\tau}(\phi) \Big)dr\leq (1+b_d) \| \phi\| _\infty X_0^0(1) \int_0^{\tau_N} e^{\theta r}dr \to 0.\end{aligned}$$ It remains to show $$\begin{aligned}
\lim_{N\to\infty} \int_{\tau_N}^t \mathbb{E}\Big(\vert G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)\vert \Big)dr=0.\end{aligned}$$
For any $r\geq \tau_N$, recall from [\[9e6.61\]](#9e6.61){reference-type="eqref" reference="9e6.61"} that $$\begin{aligned}
& G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)=\frac{1}{N} \sum_{\alpha \in \mathcal A(r-\tau_N)} \phi(B^\alpha_{(r-\tau_N)^+}) (Z_\alpha(r)-b_d^\tau \vert \{\alpha\}_r\vert ).\end{aligned}$$ By [\[9e3.14\]](#9e3.14){reference-type="eqref" reference="9e3.14"}, the mean of the last factor is zero conditionally on $\mathcal F_{r-\tau_N}$. Also, one can check that $\{(Z_\alpha(r),\vert \{\alpha\}_r\vert )\}_{\alpha\in \mathcal A(r-\tau_N)}$ are mutually independent and equal in law to $(Z_1(\tau_N), \vert \{1\}_{\tau_N} \vert)$ conditionally on $\mathcal F_{r-\tau_N}$. It follows that $$\begin{aligned}
\label{9e7.1}
&\mathbb{E}\Big(\Big[G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)\Big]^2\Big)\leq \frac{1}{N^2} \| \phi\| _\infty^2 \cdot \mathbb{E}( \vert \mathcal A(r-\tau_N)\vert )\cdot \mathbb{E}\Big(\Big(Z_1(\tau_N)-b_d^\tau \vert \{1\}_{\tau_N}\vert \Big)^2\Big).\end{aligned}$$ For the first expectation above, we recall that $\vert \mathcal A(r-\tau_N)\vert$ counts the number of particles alive at time $r-\tau_N$ in $X^1$, so $$\begin{aligned}
\label{9e7.2}
\mathbb{E}( \vert \mathcal A(r-\tau_N)\vert )=\mathbb{E}(NX_{r-\tau_N}^{1,N}(1)) \leq \mathbb{E}(NX_{r-\tau_N}^{0,N}(1))= N X_0^0(1) e^{\theta(r-\tau_N)}.\end{aligned}$$ For the second expectation in [\[9e7.1\]](#9e7.1){reference-type="eqref" reference="9e7.1"}, we use $(a-b)^2\leq 2a^2+2b^2, \forall a,b\in \mathbb{R}$ to get $$\begin{aligned}
\label{9e7.3}
\mathbb{E}\Big(\Big(Z_1(\tau_N)-b_d^\tau \vert \{1\}_{\tau_N}\vert \Big)^2\Big)\leq 2\mathbb{E}(Z_1(\tau_N)^2)+2\mathbb{E}(\vert \{1\}_{\tau_N}\vert ^2) .\end{aligned}$$ Lemma 7.2 of [@DP99] gives $$\begin{aligned}
\label{9e7.4}
\mathbb{E}(\vert \{1\}_{\tau_N}\vert ^2)\leq C(1+N\tau_N)\leq CN\tau_N.\end{aligned}$$
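To be explicit about how [\[9e7.1\]](#9e7.1){reference-type="eqref" reference="9e7.1"} follows from the two properties noted above (conditional independence and the conditional mean-zero property), write $c_\alpha=\phi(B^\alpha_{(r-\tau_N)^+})$ and $Y_\alpha=Z_\alpha(r)-b_d^\tau \vert \{\alpha\}_r\vert$; then the cross terms vanish and $$\begin{aligned}
\mathbb{E}\Big(\Big(\sum_{\alpha\in \mathcal A(r-\tau_N)} c_\alpha Y_\alpha\Big)^{2}\Big\vert \mathcal F_{r-\tau_N}\Big)=\sum_{\alpha\in \mathcal A(r-\tau_N)} c_\alpha^{2}\,\mathbb{E}(Y_\alpha^{2}\vert \mathcal F_{r-\tau_N})\leq \| \phi\| _\infty^{2}\, \vert \mathcal A(r-\tau_N)\vert \,\mathbb{E}\Big(\big(Z_1(\tau_N)-b_d^\tau \vert \{1\}_{\tau_N}\vert \big)^{2}\Big),\end{aligned}$$ and taking expectations and multiplying by $1/N^{2}$ gives [\[9e7.1\]](#9e7.1){reference-type="eqref" reference="9e7.1"}.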
**Lemma 36**. *There is some constant $C>0$ so that $\mathbb{E}(Z_1(\tau_N)^2)\leq CN\tau_N$.*
Assuming Lemma [Lemma 36](#9l7.1){reference-type="ref" reference="9l7.1"} for the moment, we may finish the proof of Lemma [Lemma 27](#9l5.6){reference-type="ref" reference="9l5.6"}.
For any $r\geq \tau_N$, we use [\[9e7.1\]](#9e7.1){reference-type="eqref" reference="9e7.1"}, [\[9e7.2\]](#9e7.2){reference-type="eqref" reference="9e7.2"}, [\[9e7.3\]](#9e7.3){reference-type="eqref" reference="9e7.3"}, [\[9e7.4\]](#9e7.4){reference-type="eqref" reference="9e7.4"} and Lemma [Lemma 36](#9l7.1){reference-type="ref" reference="9l7.1"} to see that $$\begin{aligned}
&\mathbb{E}\Big(\Big[G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)\Big]^2\Big)\leq \frac{C}{N^2} \| \phi\| _\infty^2 \cdot N X_0^0(1) e^{\theta(r-\tau_N)} \cdot N\tau_N\leq C\tau_N e^{\theta r}.\end{aligned}$$ By the Cauchy-Schwarz inequality, we have $$\begin{aligned}
\mathbb{E}\Big(\Big\vert G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)\Big\vert \Big)\leq C\tau_N^{1/2} e^{\theta r/2},\end{aligned}$$ thus giving $$\begin{aligned}
&\int_{\tau_N}^t \mathbb{E}\Big(\Big\vert G_r^\tau(\phi)-b_d^\tau X_r^{1,\tau}(\phi)\Big\vert \Big)dr \leq C\tau_N^{1/2} \int_0^t e^{\theta r/2} dr \to 0.\end{aligned}$$ The proof is complete.
It remains to prove Lemma [Lemma 36](#9l7.1){reference-type="ref" reference="9l7.1"}.
Recall from [\[9ec5.59\]](#9ec5.59){reference-type="eqref" reference="9ec5.59"} that $$\begin{aligned}
\label{9e7.10}
\mathbb{E}(\psi_0(N)^2 Z_{1}(\tau_N)^2) =&\mathbb{E}\Big(\sum_{\beta^1, \beta^2, \gamma^1, \gamma^2 \geq 1} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\cdot \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big),\end{aligned}$$ where $$\text{nbr}_{\beta,\gamma}(\tau_N)=1(T_{\pi \beta}<\tau_N\leq T_\beta, T_{\pi \gamma}<\tau_N, B^\beta-B^\gamma \in \mathcal{N}_N).$$ Since $T_{\pi \beta^1}<\tau_N\leq T_{\beta^1}$ and $T_{\pi \beta^2}<\tau_N\leq T_{\beta^2}$ both hold, we cannot have $\beta^1<\beta^2$ or $\beta^2<\beta^1$. By symmetry, we may assume $T_{\beta^1\wedge \gamma^1}\leq T_{\beta^2\wedge \gamma^2}$. We first consider the sum of $\beta^1, \gamma^1$ for $\text{nbr}_{\beta^1,\gamma^1}(\tau_N)$. Let $\alpha^1=\beta^1 \wedge \gamma^1$ with $\vert \alpha^1\vert =k$ for some $k\geq 0$. Set $\vert \beta^1\vert =k+l$ and $\vert \gamma^1\vert =k+m$ for some $l\geq 0$, $m\geq 0$. Then we have $$\begin{aligned}
\label{9e7.21}
\mathbb{E}(\psi_0(N)^2 Z_{1}(\tau_N)^2)= \mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{\beta^2, \gamma^2\geq 1} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$ To get the desired upper bound, we need to know the positions of $\beta^2,\gamma^2$ on the family tree relative to $\alpha^1, \beta^1,\gamma^1$. For any finite subset $\Lambda \subset \mathcal{I}$, define $$\begin{aligned}
\tau(\Lambda; \beta)=\max\{\vert \gamma\wedge \beta\vert : \gamma\in \Lambda, \gamma_0=\beta_0\}\end{aligned}$$ to identify the last generation when $\beta$ had an ancestor in common with some $\gamma\in \Lambda$. For any pair of $(\alpha^1,\beta^1, \gamma^1)$ as in [\[9e7.21\]](#9e7.21){reference-type="eqref" reference="9e7.21"}, we consider the three cases for the generation when $\beta^2$ branches off the family tree of $\alpha^1, \beta^1,\gamma^1$: $$\begin{aligned}
&\text{(1)}\quad \tau(\{\alpha^1,\beta^1,\gamma^1\}; \beta^2)=\vert \alpha^1 \wedge \beta^2\vert ;\nonumber\\
&\text{(2)} \quad \tau(\{\alpha^1,\beta^1,\gamma^1\}; \beta^2)=\vert \beta^1 \wedge \beta^2\vert > \vert \alpha^1 \wedge \beta^2\vert ;\nonumber\\
&\text{(3)} \quad \tau(\{\alpha^1,\beta^1,\gamma^1\}; \beta^2)=\vert \gamma^1 \wedge \beta^2\vert >\vert \alpha^1 \wedge \beta^2\vert . \end{aligned}$$ Denote by $R_1$ (resp. $R_2, R_3$) for the contribution to [\[9e7.21\]](#9e7.21){reference-type="eqref" reference="9e7.21"} from case (1) (resp. case (2), (3)). It follows that $$\begin{aligned}
\label{e1}
\mathbb{E}(\psi_0(N)^2 Z_{1}(\tau_N)^2)\leq C\sum_{i=1}^3 \mathbb{E}(R_i).\end{aligned}$$ We claim that $$\begin{aligned}
\label{e2}
\mathbb{E}(R_i)\leq CN\tau_N I(N)^2,\quad \forall i\in\{1,2,3\}.\end{aligned}$$ Then since $I(N)\leq C\psi_0(N)$, we may combine [\[e1\]](#e1){reference-type="eqref" reference="e1"} and [\[e2\]](#e2){reference-type="eqref" reference="e2"} to conclude $$\mathbb{E}(Z_{1}(\tau_N)^2)\leq CN\tau_N,$$ thus giving Lemma [Lemma 36](#9l7.1){reference-type="ref" reference="9l7.1"}. It remains to prove [\[e2\]](#e2){reference-type="eqref" reference="e2"}.
## Bounds for $\mathbb{E}(R_1)$
For $R_1$, we are in the case when $\beta^2$ branches off the family tree of $\alpha^1, \beta^1,\gamma^1$ from $\alpha^1$. We may let $\beta^2\wedge \alpha^1=\alpha^1\vert j$ for some $0\leq j\leq k$. Since we assume $T_{\beta^1\wedge \gamma^1}\leq T_{\beta^2\wedge \gamma^2}$, the only possible case is that $\tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge \beta^2\vert$, i.e., $\gamma^2$ has its most recent common ancestor with $\beta^2$. Let $\alpha^2=\gamma^2\wedge \beta^2$ so that $\alpha^2\geq \beta^2\wedge \alpha^1=\alpha^1|j$. Set $\vert \alpha^2\vert =j+n$ for some $n\geq 0$. Let $\vert \beta^2\vert =j+n+l'$ and $\vert \gamma^2\vert =j+n+m'$ for some $l'\geq 0$, $m'\geq 0$. By conditioning on $\mathcal{H}_{\alpha^1}\vee \mathcal{H}_{\alpha^2}$, similar to the derivation of [\[e2.63\]](#e2.63){reference-type="eqref" reference="e2.63"}, we obtain $$\begin{aligned}
\label{9ec7.99}
&\mathbb{E}(\text{nbr}_{\beta^1,\gamma^1}(\tau_N)\text{nbr}_{\beta^2,\gamma^2}(\tau_N) \vert \mathcal{H}_{\alpha^1}\vee \mathcal{H}_{\alpha^2})\leq C\cdot 1_{(T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta)} 1_{(T_{\alpha^2}<\tau_N, B^{\alpha^2}\neq \Delta)} \nonumber\\
&\times (\frac{N+\theta}{2N+\theta})^{l+m+l'+m'}\frac{C}{(1+l+m)^{d/2}} \frac{C}{(1+l'+m')^{d/2}} \nonumber\\
&\times \pi((2N+\theta)(\tau_N-T_{\alpha^1}), l-1)\cdot \Pi((2N+\theta)(\tau_N-T_{\alpha^1}), m-1) \nonumber\\
&\times \pi((2N+\theta)(\tau_N-T_{\alpha^2}), l'-1) \cdot \Pi((2N+\theta)(\tau_N-T_{\alpha^2}), m'-1),\end{aligned}$$ where we have used Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the probabilities for the events $\{B^{\beta^i}-B^{\gamma^i} \in \mathcal{N}_N\}$, $i=1,2$. So we get $$\begin{aligned}
\mathbb{E}(R_1)\leq C\mathbb{E}\Big(&\sum_{k=0}^\infty \sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k} } \sum_{j=0}^{k}\sum_{n=0}^\infty\sum_{\substack{ \vert \alpha^2\vert =j+n,\\ \alpha^2\geq \alpha^1\vert j} } \prod_{i=1}^2 1{(T_{\alpha^i}<\tau_N, B^{\alpha_i}\neq \Delta)}\nonumber\\
&\sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{l'=0}^\infty \sum_{m'=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m+l'+m'}\frac{1}{(1+l+m)^{d/2}} \frac{1}{(1+l'+m')^{d/2}} \nonumber\\
& \times \pi((2N+\theta)(\tau_N-T_{\alpha^1}), l-1)\cdot \Pi((2N+\theta)(\tau_N-T_{\alpha^1}), m-1) \nonumber\\
&\times \pi((2N+\theta)(\tau_N-T_{\alpha^2}), l'-1) \cdot \Pi((2N+\theta)(\tau_N-T_{\alpha^2}), m'-1)\Big). \end{aligned}$$ Apply Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"} to see that the sum of $m$ gives at most $C /(1+l)^{d/2-1}$. Then as in [\[9e9.58\]](#9e9.58){reference-type="eqref" reference="9e9.58"}, we use Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (i) to get that the sum of $l$ is bounded above by $C /(1+(2N+2\theta)(\tau_N-T_{\alpha^1}))^{d/2-1}$. Similarly, the sum of $l',m'$ gives at most $C /(1+(2N+2\theta)(\tau_N-T_{\alpha^2}))^{d/2-1}$. These two bounds lead to $$\begin{aligned}
\mathbb{E}(R_1)\leq C \mathbb{E}\Big(&\sum_{k=0}^\infty \sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k} } \sum_{j=0}^{k}\sum_{n=0}^\infty \sum_{\substack{ \vert \alpha^2\vert =j+n,\\ \alpha^2\geq \alpha^1\vert j} } \nonumber\\
&\prod_{i=1}^2 1(T_{\alpha^i}<\tau_N, B^{\alpha_i}\neq \Delta) (1+(2N+2\theta)(\tau_N-T_{\alpha^i}))^{1-d/2}\Big).\end{aligned}$$ By considering $\alpha=\alpha^1\wedge \alpha^2$ with $\vert \alpha\vert =j\geq 0$, we may rewrite the above as $$\begin{aligned}
\label{9e9.57}
\mathbb{E}(R_1)\leq C\mathbb{E}\Big( &\sum_{j=0}^\infty \sum_{\alpha \geq 1,\vert \alpha\vert =j } \sum_{k=0}^{\infty} \sum_{\substack{\vert \alpha^1\vert =j+k,\\ \alpha^1\geq \alpha}} \sum_{n=0}^\infty \sum_{\substack{ \vert \alpha^2\vert =j+n,\\ \alpha^2\geq \alpha}} \nonumber\\
&\prod_{i=1}^2 1(T_{\alpha^i}<\tau_N, B^{\alpha_i}\neq \Delta) (1+(2N+2\theta)(\tau_N-T_{\alpha^i}))^{1-d/2}\Big).\end{aligned}$$ Conditioning on $\mathcal{H}_{\alpha}$, we may use the symmetry between $\alpha^1$ and $\alpha^2$ to see that the above becomes $$\begin{aligned}
\label{9e6.71}
\mathbb{E}(R_1)\leq C\mathbb{E}\Big(& \sum_{\alpha \geq 1} 1(T_{\alpha}<\tau_N, B^{\alpha}\neq \Delta) \\
& \times \Big[\mathbb{E}\Big( \sum_{\alpha^1\geq \alpha} 1(T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta) (1+(2N+2\theta)(\tau_N-T_{\alpha^1}))^{1-d/2}\Big\vert \mathcal{H}_\alpha\Big)\Big]^2\Big).\nonumber\end{aligned}$$ On the event $\{T_{\alpha}<\tau_N, B^{\alpha}\neq \Delta\}$, we have $$\begin{aligned}
& \mathbb{E}\Big( \sum_{\alpha^1\geq \alpha} 1(T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta) (1+(2N+2\theta)(\tau_N-T_{\alpha^1}))^{1-d/2}\Big\vert \mathcal{H}_\alpha\Big)\nonumber\\
\leq &1+2 \mathbb{E}\Big(\sum_{\alpha^1\geq (\alpha \vee 0)} 1(T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta) (1+(2N+2\theta)(\tau_N- {T}_{ \alpha^1}))^{1-d/2}\Big\vert \mathcal{H}_\alpha\Big)\nonumber\\
\leq &1+ 2\mathbb{E}\Big(\sum_{\delta\geq 1} 1(T_{\delta}<\tau_N- {T}_{ \alpha}, B^{\delta}\neq \Delta) (1+(2N+2\theta)(\tau_N- {T}_{ \alpha}-T_{\delta}))^{1-d/2}\Big)\nonumber\\
\leq&1+ 2 e^{\theta (\tau_N- {T}_{ \alpha})} \int_0^{(2N+2\theta)(\tau_N- {T}_{ \alpha})} \frac{1}{(1+y)^{d/2-1}} dy\leq C I(N),\end{aligned}$$ where the first term $1$ on the second line is to bound the case when $\alpha^1=\alpha$ while the factor $2$ is due to the symmetry between the two offspring, $\alpha\vee 0$ and $\alpha\vee 1$, of $\alpha$. The third line follows by replacing $\alpha \vee 0$ with $1$ and then translating with time $T_{\alpha}$. The second last inequality uses Lemma [Lemma 18](#9l3.2a){reference-type="ref" reference="9l3.2a"} with $r=\tau_N- {T}_{ \alpha}$ and $u=0$. Hence [\[9e6.71\]](#9e6.71){reference-type="eqref" reference="9e6.71"} is at most $$\begin{aligned}
\mathbb{E}(R_1)\leq C I(N)^2\mathbb{E}\Big( \sum_{\alpha \geq 1 } 1(T_{\alpha}<\tau_N, B^{\alpha}\neq \Delta) \Big).\end{aligned}$$ By using Lemma [Lemma 18](#9l3.2a){reference-type="ref" reference="9l3.2a"} with $f(y)\equiv 1$, $r=\tau_N$ and $u=0$, we get $$\begin{aligned}
\label{9e9.66}
\mathbb{E}\Big( \sum_{\alpha\geq 1 } 1(T_{\alpha}<\tau_N, B^{\alpha}\neq \Delta) \Big)\leq e^{\theta \tau_N} (2N+2\theta)\tau_N\leq CN\tau_N,\end{aligned}$$ thus giving $$\begin{aligned}
\mathbb{E}(R_1)\leq CN\tau_N I(N)^2,\end{aligned}$$ as required.
## Bounds for $\mathbb{E}(R_2)$
Turning to $R_2$, we have that $\beta^2$ branches off the family tree of $\alpha^1, \beta^1,\gamma^1$ from $\beta^1$, so we set $\beta^2\wedge \beta^1=\beta^1\vert j$ for some $k\leq j\leq k+l$. Since we assume $T_{\beta^1\wedge \gamma^1}\leq T_{\beta^2\wedge \gamma^2}$, there are three cases for the generation when $\gamma^2$ branches off the family tree of $\alpha^1, \beta^1,\gamma^1,\beta^2$: $$\begin{aligned}
\label{9e9.91}
\text{(i)} &\quad \tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge \beta^2 \vert>\vert \gamma^2\wedge \beta^1\vert ;\nonumber\\
\text{(ii)}& \quad \tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge \beta^1\vert ;\nonumber\\
\text{(iii)} &\quad \tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge \gamma^1 \vert . \end{aligned}$$ Denote respectively by $R_2^{(i)}, R_2^{(ii)}, R_2^{(iii)}$ the contribution to $R_2$ from these three cases, for which we refer the reader to Figure [2](#fig2){reference-type="ref" reference="fig2"}.
![[\[fig2\]]{#fig2 label="fig2"} Three cases for $R_2$. ](r2.png){#fig2 width="1 \\textwidth"}
It suffices to show that $$\mathbb{E}(R_2^{(i)}) \leq CN\tau_N I(N)^2, \quad \mathbb{E}(R_2^{(ii)}) \leq CN\tau_N I(N)^2, \quad \mathbb{E}(R_2^{(iii)}) \leq CN\tau_N I(N)^2.$$
Case (i) implies that $\gamma^2$ branches off $\beta^2$ only after $\beta^2$ branches off $\beta^1$, that is, $\beta^2 \wedge \gamma^2>\beta^2\wedge \beta^1=\beta^1\vert j$. Recall [\[9e7.21\]](#9e7.21){reference-type="eqref" reference="9e7.21"} to get $$\begin{aligned}
\label{9ea1.00}
\mathbb{E}(R_2^{(i)})\leq C\mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{j=k}^{k+l} \sum_{\beta^2\geq \beta^1\vert j}\sum_{\gamma^2\geq \beta^1\vert j} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$
For each $k\leq j\leq k+l$, by conditioning on $\mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}$, we may replace $\beta^1\vert j$ by $1$ and use translation invariance to see that $$\begin{aligned}
\label{9ec4.98}
&\mathbb{E}\Big(\sum_{\beta^2\geq \beta^1\vert j}\sum_{\gamma^2\geq \beta^1\vert j} \text{nbr}_{\beta^2,\gamma^2}(\tau_N) \Big\vert \mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}\Big)
\nonumber\\
&= \mathbb{E}\Big( \sum_{\beta^2\geq 1}\sum_{\gamma^2\geq 1} \text{nbr}_{\beta^2,\gamma^2}(\tau_N-T_{\beta^1\vert j})\Big)\nonumber\\
&\leq C I((2N+2\theta) (\tau_N-T_{\beta^1\vert j}) )\leq CI(N),\end{aligned}$$ where the second inequality is from [\[9e9.51\]](#9e9.51){reference-type="eqref" reference="9e9.51"}. The sum of $j$ from $k$ to $k+l$ in [\[9ea1.00\]](#9ea1.00){reference-type="eqref" reference="9ea1.00"} gives an extra $1+l\leq 1+l+m$. We conclude [\[9ea1.00\]](#9ea1.00){reference-type="eqref" reference="9ea1.00"} becomes $$\begin{aligned}
\label{9ea1.08}
\mathbb{E}(R_2^{(i)})\leq CI(N) \mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N) \times (1+l+m)\Big).\end{aligned}$$ Similar to the derivation of [\[9e2.640\]](#9e2.640){reference-type="eqref" reference="9e2.640"}, we may bound the above by $$\begin{aligned}
\mathbb{E}(R_2^{(i)})\leq CI(N)\mathbb{E}\Big(&\sum_{k=0}^\infty \sum_{\alpha^1\geq 1, \vert \alpha^1\vert =k} 1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}}\nonumber\\
&\sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \pi((2N+\theta) (\tau_N-T_{\alpha^1}), l-1)\nonumber\\
&\times \Pi((2N+\theta) (\tau_N-T_{\alpha^1}), m-1) \cdot \frac{1}{(1+l+m)^{d/2}} \times (1+l+m)\Big).\end{aligned}$$ The sum of $m$ gives $$\begin{aligned}
& \sum_{m=0}^\infty(\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)(\tau_N-T_{\alpha^1}), m-1) \frac{1}{(1+l+m)^{d/2-1}} \leq CI(N),\end{aligned}$$ where the inequality is by Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"}. The sum of $l$ gives at most $Ce^{\theta \tau_N}$. We are left with $$\begin{aligned}
\label{9ea1.09}
\mathbb{E}(R_2^{(i)})& \leq Ce^{\theta \tau_N} I(N)^2 \mathbb{E}\Big( \sum_{\alpha^1\geq 1 } 1(T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta) \Big)\leq CI(N)^2 N\tau_N,\end{aligned}$$ where the last inequality is by [\[9e9.66\]](#9e9.66){reference-type="eqref" reference="9e9.66"}.\
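As a brief check of the bound $Ce^{\theta\tau_N}$ for the sum over $l$ claimed before [\[9ea1.09\]](#9ea1.09){reference-type="eqref" reference="9ea1.09"}, assume, as the notation and the use of the Gamma variables suggest, that $\pi(\lambda,n)$ is the Poisson$(\lambda)$ mass at $n$ for $n\geq 0$; the term $l=0$ then contributes at most a constant, and for $l\geq 1$, writing $x=\frac{2N+2\theta}{2N+\theta}\leq 2$ and $\lambda=(2N+\theta)(\tau_N-T_{\alpha^1})$, $$\sum_{l=1}^\infty x^{l}\, e^{-\lambda}\frac{\lambda^{l-1}}{(l-1)!}=x\, e^{\lambda(x-1)}=x\, e^{\theta(\tau_N-T_{\alpha^1})}\leq 2 e^{\theta \tau_N},$$ since $\lambda(x-1)=\theta(\tau_N-T_{\alpha^1})$.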
Turning to Case (ii), recall $\beta^2\wedge \beta^1=\beta^1\vert j$ for some $k\leq j\leq k+l$. Let $\gamma^2\wedge \beta^1=\beta^1\vert j'$ for some $k\leq j'\leq k+l$. Then we have $$\begin{aligned}
\label{9ea1.01}
\mathbb{E}(R_2^{(ii)})\leq C\mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{j=k}^{k+l}\sum_{j'=k}^{k+l} \sum_{\beta^2\geq \beta^1\vert j}\sum_{\gamma^2\geq\beta^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$
For each $k\leq j\leq k+l$ and $k\leq j'\leq k+l$, set $\vert \beta^2\vert =j+u$ and $\vert \gamma^2\vert =j'+n$ for some $u\geq 0$, $n\geq 0$. By conditioning on $\mathcal{H}_{\beta^1}\vee \mathcal{H}_ {\gamma^1}$, on the event $\{T_{\pi \beta^1}<\tau_N\leq T_{\beta^1}, B^{\beta^1}\neq \Delta\}$ from $\text{nbr}_{\beta^1,\gamma^1}(\tau_N)$, we have that the sum over $\beta^2, \gamma^2$ equals $$\begin{aligned}
\label{9e9.81}
& \mathbb{E}\Big(\sum_{\beta^2\geq \beta^1\vert j}\sum_{\gamma^2\geq\beta^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N) \Big\vert \mathcal{H}_{\beta^1}\vee \mathcal{H}_ {\gamma^1}\Big)
\nonumber\\
& \leq C\sum_{u=0}^\infty \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{u+n}\pi((2N+\theta)(\tau_N-T_{\beta^1\vert j})^+, u-1) \nonumber\\
&\quad \quad \times \Pi((2N+\theta)(\tau_N-T_{\beta^1\vert j'})^+, n-1) \frac{1}{(u+n+1)^{d/2}}\nonumber\\
& \leq C \sum_{u=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{u}\pi((2N+\theta)(\tau_N-T_{\beta^1\vert j})^+, u) \frac{1}{(u +1)^{d/2-1}}\nonumber\\
& \leq \frac{C}{(1+(2N+2\theta)(\tau_N-T_{\beta^1\vert j})^+)^{d/2-1}},\end{aligned}$$ where the second inequality follows by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"} and the third inequality uses Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(i). Apply [\[9e9.81\]](#9e9.81){reference-type="eqref" reference="9e9.81"} in [\[9ea1.01\]](#9ea1.01){reference-type="eqref" reference="9ea1.01"} to conclude that $$\begin{aligned}
\label{9e9.67}
\mathbb{E}(R_2^{(ii)})\leq C &\sum_{k=0}^\infty \sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq\alpha^1,\\ \vert \gamma^1\vert =k+m}} (1+l+m)\\
&\quad \times \mathbb{E}\Big( \text{nbr}_{\beta^1,\gamma^1}(\tau_N) \sum_{j=k}^{k+l} \frac{1}{(1+(2N+\theta)(T_{\pi\beta^1}-T_{\beta^1\vert j})^+)^{d/2-1}}\Big),\nonumber\end{aligned}$$ where the term $(1+l+m)$ is from the sum of $k\leq j'\leq k+l$. We also use $2N+2\theta\geq 2N+\theta$ and $T_{\pi\beta^1}<\tau_N$ to bound [\[9e9.81\]](#9e9.81){reference-type="eqref" reference="9e9.81"}. By conditioning on $\mathcal{H}_{\alpha^1}$, the expectation above is bounded by $$\begin{aligned}
&\mathbb{E}\Bigg\{1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}} (\frac{N+\theta}{2N+\theta})^{l+m} \nonumber\\
&\times \mathbb{E}\Big[1(T_{\pi \beta^1}<\tau_N\leq T_{\beta^1}) \sum_{j=k}^{k+l} \frac{1}{(1+(2N+\theta)(T_{\pi\beta^1}-T_{\beta^1\vert j})^+)^{d/2-1}}\Big\vert \mathcal{H}_{\alpha^1}\Big]\nonumber\\
&\times \mathbb{P}\Big(T_{\pi \gamma^1}<\tau_N\Big\vert \mathcal{H}_{\alpha^1}\Big)\cdot \frac{C}{(1+l+m)^{d/2}}\Bigg\},\end{aligned}$$ where we have used Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the probability $\mathbb{P}(B^{\beta^1}-B^{\gamma^1}\in \mathcal{N}_N\vert \mathcal{H}_{\alpha^1})$. Use the Gamma random variables to see that the above is equal to $$\begin{aligned}
\label{9e9.82}
&\mathbb{E}\Bigg\{ 1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}} (\frac{N+\theta}{2N+\theta})^{l+m} \nonumber\\
&\times \mathbb{E}\Big[1(\Gamma_{l-1}<(2N+\theta)(\tau_N-T_{\alpha^1})\leq \Gamma_{l}) \sum_{j=k}^{k+l} \frac{1}{(1+(\Gamma_{l-1}-\Gamma_{j-k})^+)^{d/2-1}}\Big]\nonumber\\
&\times \Pi((2N+\theta)(\tau_N-T_{\alpha^1}),m-1) \cdot \frac{C}{(1+l+m)^{d/2}}\Bigg\}.\end{aligned}$$ Apply the above in [\[9e9.67\]](#9e9.67){reference-type="eqref" reference="9e9.67"} to get $$\begin{aligned}
\label{9e9.83}
\mathbb{E}(R_2^{(ii)}) \leq C\mathbb{E}\Big( & \sum_{\alpha^1\geq 1 } 1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}}\sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \nonumber\\
&\times\mathbb{E}\Big[1(\Gamma_{l-1}<(2N+\theta)(\tau_N-T_{\alpha^1})\leq \Gamma_{l}) \sum_{j=0}^{l} \frac{1}{(1+(\Gamma_{l-1}-\Gamma_{j})^+)^{d/2-1}}\Big] \nonumber\\
&\times \Pi((2N+\theta) (\tau_N-T_{\alpha^1}), m-1) \frac{1}{(1+l+m)^{d/2-1}}\Big).\end{aligned}$$ The sum of $m$ above gives $$\begin{aligned}
\label{9ea1.03}
&\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta) (\tau_N-T_{\alpha^1}), m-1) \frac{1}{(1+l+m)^{d/2-1}} \leq CI(N), \end{aligned}$$ where the inequality is by Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"}. Turning to the sum of $l$, we have $$\begin{aligned}
\label{9e9.85}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \mathbb{E}\Big[1(\Gamma_{l-1}<(2N+\theta)(\tau_N-T_{\alpha^1})\leq \Gamma_{l}) \sum_{j=0}^{l} \frac{1}{(1+(\Gamma_{l-1}-\Gamma_{j})^+)^{d/2-1}}\Big] \nonumber\\
&\leq 5+ \sum_{l=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+1} \mathbb{E}\Big[1(\Gamma_{l}<(2N+\theta)(\tau_N-T_{\alpha^1})\leq \Gamma_{l+1}) \sum_{j=0}^{l+1} \frac{1}{(1+(\Gamma_{l}-\Gamma_{j})^+)^{d/2-1}}\Big] \nonumber\\
&\leq 5+Ce^{\theta \tau_N}+ 2\sum_{l=1}^\infty (1+\varepsilon_N)^{l} \mathbb{E}\Big[1_{(\Gamma_l<(2N+\theta)(\tau_N-T_{\alpha^1})\leq \Gamma_{l+1}) }\sum_{j=0}^{l-1} \frac{1}{(1+\Gamma_{l}-\Gamma_{j})^{d/2-1}}\Big] \nonumber\\
&\leq Ce^{\theta \tau_N}+Ce^{\theta(\tau_N-T_{\alpha^1})} I((2N+\theta)(\tau_N-T_{\alpha^1}))\leq CI(N),\end{aligned}$$ where the term $5$ on the second line comes from $l=0,1$. On the third line, the term $Ce^{\theta \tau_N}$ is from the sum for $l\geq 1$ and $j=l,l+1$. The second-to-last inequality follows from Lemma 4.5 of [@DP99]. Combining [\[9ea1.03\]](#9ea1.03){reference-type="eqref" reference="9ea1.03"} and [\[9e9.85\]](#9e9.85){reference-type="eqref" reference="9e9.85"}, we conclude that [\[9e9.83\]](#9e9.83){reference-type="eqref" reference="9e9.83"} becomes $$\begin{aligned}
\label{9ea1.06}
\mathbb{E}(R_2^{(ii)}) &\leq CI(N)^2\mathbb{E}\Big( \sum_{\alpha^1\geq 1 } 1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}} \Big) \leq CN\tau_N I(N)^2,\end{aligned}$$ where the last inequality follows from [\[9e9.66\]](#9e9.66){reference-type="eqref" reference="9e9.66"}.\
Turning to Case (iii), since we assume $T_{\beta^1 \wedge \gamma^1}\leq T_{\beta^2 \wedge \gamma^2}$, we must have $\gamma^2\wedge \gamma^1\geq \alpha^1$. Set $\gamma^2\wedge \gamma^1=\gamma^1\vert j'$ for some $k\leq j'\leq k+m$. Then $$\begin{aligned}
\label{9ea1.04}
\mathbb{E}(R_2^{(iii)})\leq C\mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{j=k}^{k+l}\sum_{j'=k}^{k+m} \sum_{\beta^2\geq \beta^1\vert j}\sum_{\gamma^2\geq\gamma^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$
For each $k\leq j\leq k+l$ and $k\leq j'\leq k+m$, set $\vert \beta^2\vert =j+l'$ and $\vert \gamma^2\vert =j'+m'$ for some $l'\geq 0$ and $m'\geq 0$. By conditioning on $\mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}$, on the event as in $\text{nbr}_{\beta^1,\gamma^1}(\tau_N)$, we have that the sum over $\beta^2, \gamma^2$ equals $$\begin{aligned}
& \mathbb{E}\Big(\sum_{\beta^2\geq \beta^1\vert j}\sum_{\gamma^2\geq\gamma^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N) \Big\vert \mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}\Big)
\nonumber\\
&\leq C \sum_{l'=0}^\infty \sum_{m'=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l'+m'}\pi((2N+\theta)(\tau_N-T_{\beta^1\vert j})^+, l'-1) \nonumber\\
&\quad \quad \times \Pi((2N+\theta)(\tau_N-T_{\gamma^1\vert j'})^+, m'-1) \frac{1}{(l'+m'+1)^{d/2}}\nonumber\\
&\leq \frac{C }{(1+(2N+2\theta)(\tau_N-T_{\beta^1\vert j})^+)^{d/2-1}},\end{aligned}$$ where the last inequality follows in a similar way as in [\[9e9.81\]](#9e9.81){reference-type="eqref" reference="9e9.81"}. Use $2N+2\theta\geq 2N+\theta$ and $T_{\pi\beta^1}<\tau_N$ to bound the above term and then conclude that [\[9ea1.04\]](#9ea1.04){reference-type="eqref" reference="9ea1.04"} becomes $$\begin{aligned}
\mathbb{E}(R_2^{(iii)})\leq C&\sum_{k=0}^\infty \sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m}} \sum_{j=k}^{k+l} (1+l+m)\nonumber\\
&\quad \mathbb{E}\Big( \text{nbr}_{\beta^1,\gamma^1}(\tau_N) \frac{1}{(1+(2N+\theta)(T_{\pi\beta^1}-T_{\beta^1\vert j})^+)^{d/2-1}}\Big),\nonumber\end{aligned}$$ where the term $(1+l+m)$ is from the sum of $k\leq j'\leq k+m$. The right-hand side above is exactly [\[9e9.67\]](#9e9.67){reference-type="eqref" reference="9e9.67"}. Use [\[9ea1.06\]](#9ea1.06){reference-type="eqref" reference="9ea1.06"} to conclude $$\begin{aligned}
\mathbb{E}(R_2^{(iii)}) &\leq C N\tau_N I(N)^2,\end{aligned}$$ as required.
## Bounds for $\mathbb{E}(R_3)$
Finally, for $R_3$, we have that $\beta^2$ branches off the family tree of $\alpha^1, \beta^1,\gamma^1$ from $\gamma^1$, so we let $\beta^2\wedge \gamma^1=\gamma^1\vert j$ for some $k\leq j\leq k+m$. Again there are three cases where $\gamma^2$ branches: $$\begin{aligned}
\text{(i)} &\quad \tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge \beta^2\vert> \vert \gamma^2 \wedge \gamma^1\vert;\nonumber\\
\text{(ii)}& \quad \tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge \gamma^1 \vert ;\nonumber\\
\text{(iii)} &\quad \tau(\{\alpha^1,\beta^1,\gamma^1,\beta^2\}; \gamma^2)=\vert \gamma^2 \wedge\beta^1 \vert . \end{aligned}$$ Denote by $R_3^{(i)}, R_3^{(ii)}, R_3^{(iii)}$ the contribution to $R_3$ from these three cases, for which we refer the reader to Figure [3](#fig3){reference-type="ref" reference="fig3"}. It suffices to show that $$\mathbb{E}(R_3^{(i)}) \leq CN\tau_N I(N)^2, \quad \mathbb{E}(R_3^{(ii)}) \leq CN\tau_N I(N)^2, \quad \mathbb{E}(R_3^{(iii)}) \leq CN\tau_N I(N)^2.$$
![[\[fig3\]]{#fig3 label="fig3"} Three cases for $R_3$. ](r3.png){#fig3 width="100%"}
Case (i) implies that $\gamma^2$ branches off $\beta^2$ only after $\beta^2$ branches off $\gamma^1$. Recall [\[9e7.21\]](#9e7.21){reference-type="eqref" reference="9e7.21"} to get $$\begin{aligned}
\label{9ea1.07}
\mathbb{E}(R_3^{(i)})\leq C\mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{j=k}^{k+m} \sum_{\beta^2\geq \gamma^1\vert j}\sum_{\gamma^2\geq \gamma^1\vert j} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$ For each $k\leq j\leq k+m$, by reasoning similar to the derivation of [\[9ec4.98\]](#9ec4.98){reference-type="eqref" reference="9ec4.98"}, we get, on the event $\text{nbr}_{\beta^1,\gamma^1}(\tau_N)$, $$\begin{aligned}
&\mathbb{E}\Big(\sum_{\beta^2\geq\gamma^1\vert j}\sum_{\gamma^2\geq \gamma^1\vert j} \text{nbr}_{\beta^2,\gamma^2}(\tau_N) \Big\vert \mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}\Big)
\nonumber\\
&= \mathbb{E}\Big( \sum_{\beta^2\geq 1}\sum_{\gamma^2\geq 1} \text{nbr}_{\beta^2,\gamma^2}(\tau_N-T_{\gamma^1\vert j})\Big) \leq CI(N).\end{aligned}$$ The sum of $j$ from $k$ to $k+m$ gives at most $1+m\leq 1+m+l$. Now we may conclude that [\[9ea1.07\]](#9ea1.07){reference-type="eqref" reference="9ea1.07"} becomes $$\begin{aligned}
\mathbb{E}(R_3^{(i)})\leq CI(N) \mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N) \times (1+l+m)\Big),\end{aligned}$$ which is exactly [\[9ea1.08\]](#9ea1.08){reference-type="eqref" reference="9ea1.08"}. Use [\[9ea1.09\]](#9ea1.09){reference-type="eqref" reference="9ea1.09"} to conclude $$\begin{aligned}
\mathbb{E}(R_3^{(i)})\leq CN\tau_N I(N)^2.
\end{aligned}$$
For Case (ii), we let $\gamma^2\wedge \gamma^1=\gamma^1\vert j'$ for some $k\leq j'\leq k+m$. Then $$\begin{aligned}
\label{9ea1.10}
\mathbb{E}(R_3^{(ii)})\leq C\mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1>\alpha^1,\\ \vert \gamma^1\vert =k+1+m,\\ \gamma^1_{k+1}=0 }} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{j=k}^{k+m}\sum_{j'=k}^{k+m} \sum_{\beta^2\geq \gamma^1\vert j}\sum_{\gamma^2\geq\gamma^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$ For each $k\leq j\leq k+m$ and $k\leq j'\leq k+m$, set $\vert \beta^2\vert =j+u$ and $\vert \gamma^2\vert =j'+n$ for some $u\geq 0$, $n\geq 0$. Similar to the derivation of [\[9e9.81\]](#9e9.81){reference-type="eqref" reference="9e9.81"}, by conditioning on $\mathcal{H}_{\beta^1}\vee \mathcal{H}_ {\gamma^1}$, on the event $\{T_{\pi \gamma^1}<\tau_N, B^{\gamma^1}\neq \Delta\}$ from $\text{nbr}_{\beta^1,\gamma^1}(\tau_N)$, we have the sum of $\beta^2, \gamma^2$ equals $$\begin{aligned}
\label{9ec5.49}
& \mathbb{E}\Big(\sum_{\beta^2\geq \gamma^1\vert j}\sum_{\gamma^2\geq\gamma^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N) \Big\vert \mathcal{H}_{\beta^1}\vee \mathcal{H}_ {\gamma^1}\Big)
\nonumber\\
& \leq C\sum_{u=0}^\infty \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{u+n}\pi((2N+\theta)(\tau_N-T_{\gamma^1\vert j})^+, u-1) \nonumber\\
&\quad \quad \times \Pi((2N+\theta)(\tau_N-T_{\gamma^1\vert j'})^+, n-1) \frac{1}{(u+n+1)^{d/2}}\nonumber\\
& \leq C \sum_{u=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{u}\pi((2N+\theta)(\tau_N-T_{\gamma^1\vert j})^+, u-1) \frac{1}{(u +1)^{d/2-1}}\nonumber\\
& \leq \frac{C}{(1+(2N+2\theta)(\tau_N-T_{\gamma^1\vert j})^+)^{d/2-1}},\end{aligned}$$ where the second inequality follows by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"} and the third inequality uses Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(i). Use $2N+2\theta\geq 2N+\theta$ and $T_{\pi\gamma^1}<\tau_N$ to bound the above term and then conclude that [\[9ea1.10\]](#9ea1.10){reference-type="eqref" reference="9ea1.10"} becomes $$\begin{aligned}
\label{9e9.94}
\mathbb{E}(R_3^{(ii)}) \leq C &\sum_{k=0}^\infty \sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l }}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} (1+l+m)\\
&\quad \mathbb{E}\Big( \text{nbr}_{\beta^1,\gamma^1}(\tau_N) \sum_{j=k}^{k+m}\frac{1}{(1+(2N+\theta)(T_{\pi\gamma^1}-T_{\gamma^1\vert j})^+)^{d/2-1}}\Big),\nonumber\end{aligned}$$ where the term $(1+l+m)$ is from the sum of $k\leq j'\leq k+m$. By conditioning on $\mathcal{H}_{\alpha^1}$, we get that the expectation above is at most $$\begin{aligned}
&\mathbb{E}\Bigg\{1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}} (\frac{N+\theta}{2N+\theta})^{l+m} \nonumber\\
&\times \mathbb{E}\Big[1(T_{\pi \gamma^1}<\tau_N) \sum_{j=k}^{k+m} \frac{1}{(1+(2N+\theta)(T_{\pi\gamma^1}-T_{\gamma^1\vert j})^+)^{d/2-1}}\Big\vert \mathcal{H}_{\alpha^1}\Big]\nonumber\\
&\times \mathbb{P}\Big(T_{\pi \beta^1}<\tau_N\leq T_{\beta^1}\Big\vert \mathcal{H}_{\alpha^1}\Big)\cdot \frac{C}{(1+l+m)^{d/2}}\Bigg\},\end{aligned}$$ where we have used Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to bound the probability $\mathbb{P}(B^{\beta^1}-B^{\gamma^1}\in \mathcal{N}_N\vert \mathcal{H}_{\alpha^1})$. Use the Gamma random variables to see that the above is at most $$\begin{aligned}
\label{9e9.96}
&\mathbb{E}\Bigg\{ 1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}} (\frac{N+\theta}{2N+\theta})^{l+m} \nonumber\\
&\times \mathbb{E}\Big[1(\Gamma_{m-1}<(2N+\theta)(\tau_N-T_{\alpha^1})) \sum_{j=k}^{k+m} \frac{1}{(1+(\Gamma_{m-1}-\Gamma_{j-k})^+)^{d/2-1}}\Big]\nonumber\\
&\times \pi((2N+\theta)(\tau_N-T_{\alpha^1}),l-1) \cdot \frac{C}{(1+l+m)^{d/2}}\Bigg\}.\end{aligned}$$ Now we have [\[9e9.94\]](#9e9.94){reference-type="eqref" reference="9e9.94"} becomes $$\begin{aligned}
\label{9e9.97}
\mathbb{E}(R_3^{(ii)}) &\leq C\mathbb{E}\Big( \sum_{\alpha^1\geq 1 } 1{\{T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta\}}\sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \nonumber\\
&\times\mathbb{E}\Big[1(\Gamma_{m-1}<(2N+\theta)(\tau_N-T_{\alpha^1})) \sum_{j=0}^{m} \frac{1}{(1+(\Gamma_{m-1}-\Gamma_{j})^+)^{d/2-1}}\Big] \nonumber\\
&\times \pi((2N+\theta) (\tau_N-T_{\alpha^1}), l-1) \frac{1}{(1+l+m)^{d/2-1}}\Big).\end{aligned}$$ The sum of $l$ gives $$\begin{aligned}
\label{e2.2}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \pi((2N+\theta)(\tau_N-T_{\alpha^1}), l-1) \frac{1}{(1+l+m)^{d/2-1}} \nonumber\\
&\leq \frac{C}{(1+(2N+2\theta)(\tau_N-T_{\alpha^1}))}\leq \frac{C}{(1+N(\tau_N-T_{\alpha^1}))},\end{aligned}$$ where the first inequality is from $m\geq 0$, Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} and $d\geq 4$. The sum of $m$ gives $$\begin{aligned}
\label{9ec9.99}
\sum_{m=0}^\infty & (\frac{2N+2\theta}{2N+\theta})^{m} \mathbb{E}\Big[1(\Gamma_{m-1}<(2N+\theta)(\tau_N-T_{\alpha^1})) \sum_{j=0}^{m} \frac{1}{(1+(\Gamma_{m-1}-\Gamma_{j})^+)^{d/2-1}}\Big] \nonumber\\
\leq &\sum_{m>AN(\tau_N-T_{\alpha^1})} (\frac{2N+2\theta}{2N+\theta})^{m} \mathbb{E}\Big[1(\Gamma_{m-1}<(2N+\theta)(\tau_N-T_{\alpha^1})) \times 2m\Big]\nonumber\\
&\quad + \sum_{m\leq AN(\tau_N-T_{\alpha^1})} e^{A\theta \tau_N} \mathbb{E}\Big[ \sum_{j=0}^{m} \frac{1}{(1+(\Gamma_{m-1}-\Gamma_j)^+)^{d/2-1}}\Big].\end{aligned}$$ Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii) implies that the first summation above is at most $C2^{-N(\tau_N-T_{\alpha^1})}\leq C$. Turning to the second summation, for each $m\leq AN(\tau_N-T_{\alpha^1})$, we have $$\begin{aligned}
&\mathbb{E}\Big[ \sum_{j=0}^{m} \frac{1}{(1+(\Gamma_{m-1}-\Gamma_j)^+)^{d/2-1}}\Big] \leq C+\mathbb{E}\Big[ \sum_{j=d}^{AN\tau_N} \frac{1}{(1+\Gamma_j)^{d/2-1}}\Big]\nonumber\\
&\leq C+ C\sum_{j=d}^{AN\tau_N} \frac{1}{(1+j)^{d/2-1}}\leq CI(N),\end{aligned}$$ where the second inequality uses Lemma 10.4 of [@DP99]. Thus, we may bound [\[9ec9.99\]](#9ec9.99){reference-type="eqref" reference="9ec9.99"} by $$\begin{aligned}
\label{e2.1}
C+C(1+AN(\tau_N-T_{\alpha^1})) I(N)\leq C(1+N(\tau_N-T_{\alpha^1}))I(N).\end{aligned}$$ Combine [\[e2.2\]](#e2.2){reference-type="eqref" reference="e2.2"} and [\[e2.1\]](#e2.1){reference-type="eqref" reference="e2.1"} to see that [\[9e9.97\]](#9e9.97){reference-type="eqref" reference="9e9.97"} becomes $$\begin{aligned}
\label{9e9.99}
\mathbb{E}(R_3^{(ii)})& \leq CI(N) \mathbb{E}\Big( \sum_{\alpha^1\geq 1 } 1(T_{\alpha^1}<\tau_N, B^{\alpha^1}\neq \Delta) \Big)\nonumber\\
&\leq C N \tau_N I(N)\leq C N \tau_N I(N)^2,\end{aligned}$$ where the second inequality follows by [\[9e9.66\]](#9e9.66){reference-type="eqref" reference="9e9.66"}.\
Turning to Case (iii), since we assumed at the beginning that $T_{\beta^1 \wedge \gamma^1}\leq T_{\beta^2 \wedge \gamma^2}$, we must have $\gamma^2\wedge \beta^1\geq \beta^1 \wedge \gamma^1=\beta^1\vert k$. Set $\gamma^2\wedge \beta^1=\beta^1\vert j'$ for some $k\leq j'\leq k+l$. Then $$\begin{aligned}
\label{9ea1.11}
\mathbb{E}(R_3^{(iii)})\leq C\mathbb{E}\Big(\sum_{k=0}^\infty &\sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l}}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m}} \text{nbr}_{\beta^1,\gamma^1}(\tau_N)\nonumber\\
& \quad \times \sum_{j=k}^{k+m}\sum_{j'=k}^{k+l} \sum_{\beta^2\geq \gamma^1\vert j}\sum_{\gamma^2\geq\beta^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N)\Big).\end{aligned}$$ For each $k\leq j\leq k+m$ and $k\leq j'\leq k+l$, set $\vert \beta^2\vert =j+l'$ and $\vert \gamma^2\vert =j'+m'$ for some $l'\geq 0$ and $m'\geq 0$. By conditioning on $\mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}$, on the event $\text{nbr}_{\beta^1,\gamma^1}(\tau_N)$, we have the sum of $\beta^2, \gamma^2$ is at most $$\begin{aligned}
& \mathbb{E}\Big(\sum_{\beta^2\geq \gamma^1\vert j}\sum_{\gamma^2\geq\beta^1\vert j'} \text{nbr}_{\beta^2,\gamma^2}(\tau_N) \Big\vert \mathcal{H}_{\beta^1}\vee\mathcal{H}_{\gamma^1}\Big)
\nonumber\\
& \leq C\sum_{l'=0}^\infty \sum_{m'=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l'+m'}\pi((2N+\theta)(\tau_N-T_{\gamma^1\vert j})^+, l'-1) \nonumber\\
&\quad \quad \times \Pi((2N+\theta)(\tau_N-T_{\beta^1\vert j'})^+, m'-1) \frac{1}{(l'+m'+1)^{d/2}}\nonumber\\
&\leq \frac{C}{(1+(2N+2\theta)(\tau_N-T_{\gamma^1\vert j})^+)^{d/2-1}},\end{aligned}$$ where the last inequality follows by reasoning similar to that in [\[9ec5.49\]](#9ec5.49){reference-type="eqref" reference="9ec5.49"}. Use $2N+2\theta\geq 2N+\theta$ and $T_{\pi\gamma^1}<\tau_N$ to bound the above term and then conclude that [\[9ea1.11\]](#9ea1.11){reference-type="eqref" reference="9ea1.11"} becomes $$\begin{aligned}
\mathbb{E}(R_3^{(iii)}) \leq C &\sum_{k=0}^\infty \sum_{\substack{\alpha^1\geq 1,\\ \vert \alpha^1\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta^1\geq \alpha^1,\\ \vert \beta^1\vert =k+l }}\sum_{\substack{ \gamma^1\geq \alpha^1,\\ \vert \gamma^1\vert =k+m }} (1+l+m)\\
&\quad \mathbb{E}\Big( \text{nbr}_{\beta^1,\gamma^1}(\tau_N) \sum_{j=k}^{k+m}\frac{1}{(1+(2N+\theta)(T_{\pi\gamma^1}-T_{\gamma^1\vert j})^+)^{d/2-1}}\Big),\nonumber\end{aligned}$$ where the term $(1+l+m)$ is from the sum of $k\leq j'\leq k+l$. The right-hand side above is exactly [\[9e9.94\]](#9e9.94){reference-type="eqref" reference="9e9.94"}. By [\[9e9.99\]](#9e9.99){reference-type="eqref" reference="9e9.99"}, we conclude $$\begin{aligned}
\mathbb{E}(R_3^{(iii)}) &\leq C N\tau_N I(N)^2,\end{aligned}$$ as required.\
The proof of Lemma [Lemma 36](#9l7.1){reference-type="ref" reference="9l7.1"} is now complete as noted below [\[e2\]](#e2){reference-type="eqref" reference="e2"}.
# Proofs of Lemmas [Lemma 25](#9l5.4){reference-type="ref" reference="9l5.4"} and [Lemma 29](#9l5.8){reference-type="ref" reference="9l5.8"} {#9s8}
This final section is devoted to proving the two remaining lemmas, Lemmas [Lemma 25](#9l5.4){reference-type="ref" reference="9l5.4"} and [Lemma 29](#9l5.8){reference-type="ref" reference="9l5.8"}, from Section [5](#9s5){reference-type="ref" reference="9s5"}. We first consider Lemma [Lemma 25](#9l5.4){reference-type="ref" reference="9l5.4"}. Recall the definition of $\zeta_\alpha^m$ with $m\geq 0$ from [\[9ea3.01\]](#9ea3.01){reference-type="eqref" reference="9ea3.01"} to see that $$\begin{aligned}
1(\zeta_\alpha^m>T_{\pi\alpha})=1(\zeta_\alpha^m\geq T_{\alpha}).\end{aligned}$$ Use the above to rewrite $K_{t}^{n,2}(\phi)$ from [\[9e3.5\]](#9e3.5){reference-type="eqref" reference="9e3.5"} as $$\begin{aligned}
K_{t}^{n,2}(\phi)=&\frac{1}{2N+\theta} \frac{1}{\psi(N)} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta)\nonumber\\
& \sum_{\gamma} 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N) 1(T_{\pi\gamma}<T_\beta, \zeta_\gamma^{n-1}\geq T_{\gamma}) 1(B^{\gamma}-B^{\beta}\in \mathcal{N}_N).\end{aligned}$$ By comparing to $K_{t}^{n,3}(\phi)$ as in [\[9e3.6\]](#9e3.6){reference-type="eqref" reference="9e3.6"}, we get $$\begin{aligned}
\label{9e8.01}
\sup_{s\leq t}\vert K_{s}^{n,2}(\phi)-K_{s}^{n,3}(\phi)\vert &\leq \frac{1}{2N+\theta} \frac{1}{\psi(N)} \| \phi\| _\infty \sum_{\beta} 1(T_\beta\leq t)\sum_{\gamma} 1(T_{\pi\gamma}<T_\beta) \\
&\times 1(T_{\gamma \wedge \beta}>T_\beta-\tau_N) 1(B^{\gamma}-B^{\beta}\in \mathcal{N}_N)\nonumber\\
&\times \Big[1(\zeta_\beta^n> T_{\beta}-\tau_N)1(\zeta_\gamma^{n-1}> T_{\beta}-\tau_N)-1(\zeta_\beta^n\geq T_{\beta})1(\zeta_\gamma^{n-1}\geq T_{\gamma})\Big].\nonumber\end{aligned}$$ Since $T_{\gamma \wedge \beta}>T_\beta-\tau_N$ implies $T_{\gamma}>T_\beta-\tau_N$, we may bound the square bracket above by $$\begin{aligned}
1(\zeta_\beta^n \in (T_{\beta}-\tau_N, T_{\beta}))+1(\zeta_\gamma^{n-1} \in (T_{\beta}-\tau_N, T_{\gamma})).\end{aligned}$$ To ease the notation, we define $$\begin{aligned}
\text{nbr}_{\beta}(\gamma)=1(T_{\gamma \wedge \beta}>T_\beta-\tau_N, T_{\pi \gamma}<T_{\beta}, B^\beta-B^\gamma \in \mathcal{N}_N),\end{aligned}$$ and $$\begin{aligned}
\label{9ea3.08}
I(\beta)=\frac{1}{\psi_0(N)} \sum_{\gamma} \text{nbr}_{\beta}(\gamma).\end{aligned}$$ Using the above in [\[9e8.01\]](#9e8.01){reference-type="eqref" reference="9e8.01"} and then taking expectation, we get $$\begin{aligned}
\label{9e8.02}
\mathbb{E}\Big(&\sup_{s\leq t}\vert K_{s}^{n,2}(\phi)-K_{s}^{n,3}(\phi)\vert \Big)\leq \frac{1}{N^2} \| \phi\| _\infty \mathbb{E}\Big(\sum_{\beta} 1(T_{\beta}\leq t, B^\beta\neq \Delta) 1_{\zeta_\beta^n \in (T_{\beta}-\tau_N, T_{\beta})} I(\beta)\Big)\nonumber\\
&+ \frac{1}{N\psi(N)} \| \phi\| _\infty \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta}\leq t, B^\beta\neq \Delta) \sum_{\gamma}\text{nbr}_{\beta}(\gamma) 1_{\zeta_\gamma^{n-1} \in (T_{\beta}-\tau_N, T_\gamma)} \Big).\end{aligned}$$
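The bound on the square bracket in [\[9e8.01\]](#9e8.01){reference-type="eqref" reference="9e8.01"} used above rests on the elementary inequality $1_A 1_B-1_{A'}1_{B'}\leq 1_{A\setminus A'}+1_{B\setminus B'}$, valid whenever $A'\subseteq A$ and $B'\subseteq B$, applied here with $$A=\{\zeta_\beta^n> T_{\beta}-\tau_N\},\quad A'=\{\zeta_\beta^n\geq T_{\beta}\},\quad B=\{\zeta_\gamma^{n-1}> T_{\beta}-\tau_N\},\quad B'=\{\zeta_\gamma^{n-1}\geq T_{\gamma}\},$$ where $A'\subseteq A$ is clear and $B'\subseteq B$ uses $T_\gamma>T_\beta-\tau_N$, as noted above.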
**Lemma 37**. *For any $m\geq 0$ and $t\geq 0$, we have $$\begin{aligned}
\label{9ea1.15}
(i) \lim_{N\to \infty} \frac{1}{N^2} \mathbb{E}\Big(\sum_{\beta} 1(T_{\beta}\leq t, B^\beta\neq \Delta) 1_{\zeta_\beta^m \in (T_{\beta}-\tau_N, T_{\beta})} (I(\beta)+1)\Big)=0.\end{aligned}$$ and $$\begin{aligned}
\label{9ea1.16}
(ii) &\lim_{N\to \infty} \frac{1}{N\psi(N)} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta}\leq t, B^\beta\neq \Delta) \sum_{\gamma}\text{nbr}_{\beta}(\gamma) 1_{\zeta_\gamma^{m} \in (T_{\beta}-\tau_N, T_\gamma)} \Big)=0.\end{aligned}$$*
Unlike the case in [@DP99], the symmetry between $\beta,\gamma$ here is not explicit, but we will show later in Section [9.2](#s9.2){reference-type="ref" reference="s9.2"} that [\[9ea1.16\]](#9ea1.16){reference-type="eqref" reference="9ea1.16"} is an easy consequence of [\[9ea1.15\]](#9ea1.15){reference-type="eqref" reference="9ea1.15"}. For now, the proof of Lemma [Lemma 25](#9l5.4){reference-type="ref" reference="9l5.4"} is immediate by [\[9e8.02\]](#9e8.02){reference-type="eqref" reference="9e8.02"} and Lemma [Lemma 37](#9l10.1){reference-type="ref" reference="9l10.1"}. The extra $+1$ in [\[9ea1.15\]](#9ea1.15){reference-type="eqref" reference="9ea1.15"} is intended for the proof of Lemma [Lemma 29](#9l5.8){reference-type="ref" reference="9l5.8"}, which we now give.
Recall $X_r^{1,\tau}(\phi)$ from [\[9e3.13\]](#9e3.13){reference-type="eqref" reference="9e3.13"} and $X_r^1(\phi)$ from [\[9e0.03\]](#9e0.03){reference-type="eqref" reference="9e0.03"}. Using $\phi\in C_b^3(\mathbb{R}^d)$, we get $$\begin{aligned}
&\vert X_r^{1,\tau}(\phi)-X_r^1(\phi)\vert \leq \frac{1}{N} \| \phi\| _\infty \sum_{\beta} 1(T_{\pi\beta}<r< T_\beta, B^\beta\neq \Delta)1(\zeta_\beta^1\in (r-\tau_N,r))\nonumber\\
&\quad \quad\quad +\frac{C}{N} \sum_{\beta} 1(T_{\pi\beta}<r< T_\beta, B^\beta\neq \Delta) \vert B_r^\beta-B_{(r-\tau_N)^+}^\beta\vert .\end{aligned}$$ It follows that $$\begin{aligned}
\label{9ea3.02}
&\mathbb{E}\Big(\int_0^t \vert X_r^{1,\tau}(\phi)-X_r^1(\phi)\vert dr\Big)\nonumber\\
&\leq \frac{1}{N} \| \phi\| _\infty \int_0^t \mathbb{E}\Big(\sum_{\beta} 1(T_{\pi\beta}<r< T_\beta, B^\beta\neq \Delta)1(\zeta_\beta^1\in (r-\tau_N,r))\Big)dr\nonumber\\
&\quad +\frac{C}{N} \int_0^t \mathbb{E}\Big(\sum_{\beta} 1(T_{\pi\beta}<r< T_\beta, B^\beta\neq \Delta) \vert B_r^\beta-B_{(r-\tau_N)^+}^\beta\vert \Big)dr:=I_1+I_2.\end{aligned}$$ To take care of the first term, we apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to see that $$\begin{aligned}
I_1=\frac{ \| \phi\| _\infty }{N(2N+\theta)}\mathbb{E}\Big(\sum_{\beta} 1(T_{\beta}\leq t, B^\beta\neq \Delta) 1{(\zeta_\beta^1 \in (T_\beta-\tau_N, T_\beta))} \Big).\end{aligned}$$ By comparing to [\[9ea1.15\]](#9ea1.15){reference-type="eqref" reference="9ea1.15"}, it is immediate that $I_1\to 0$ as $N\to\infty$.
Turning to $I_2$, by considering $\beta_0=i$ for some $1\leq i\leq NX_0^0(1)$, we get $$\begin{aligned}
I_2=&\frac{C}{N} NX_0^0(1) \int_0^t \mathbb{E}\Big(\sum_{\beta\geq 1} 1(T_{\pi\beta}<r< T_\beta, B^\beta\neq \Delta)\vert B_r^\beta-B_{(r-\tau_N)^+}^\beta\vert \Big) dr\nonumber\\
&\leq CX_0^0(1) \int_0^t C \sqrt{r-(r-\tau_N)^+} dr\leq CX_0^0(1)\sqrt{\tau_N} \to 0,\end{aligned}$$ where the first inequality follows from [\[9ec1.23\]](#9ec1.23){reference-type="eqref" reference="9ec1.23"}. The proof is now complete.
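For the record, the integral bound used in the last display follows from $r-(r-\tau_N)^+\leq \tau_N$ and the fact that $t$ is fixed: $$\int_0^t \sqrt{r-(r-\tau_N)^+}\, dr\leq \int_0^t \sqrt{\tau_N}\, dr=t\sqrt{\tau_N}\leq C\sqrt{\tau_N}.$$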
It remains to prove Lemma [Lemma 37](#9l10.1){reference-type="ref" reference="9l10.1"}.
## Proof of Lemma [Lemma 37](#9l10.1){reference-type="ref" reference="9l10.1"}(i) {#proof-of-lemma-9l10.1i}
Since $B^\beta=B^\beta_{T_\beta^-}\neq \Delta$, we must have $\zeta_\beta^0=T_{\beta}$ by definition. If $m=0$, we cannot have both $\zeta_\beta^0=T_{\beta}$ and $\zeta_\beta^0\in (T_\beta-\tau_N, T_\beta)$. Hence [\[9ea1.15\]](#9ea1.15){reference-type="eqref" reference="9ea1.15"} is trivial for $m=0$, and it suffices to prove it for $m\geq 1$. In order that $\zeta_\beta^m \in (T_{\beta}-\tau_N, T_{\beta})$, there has to be some $i<\vert \beta\vert$ such that $$\begin{aligned}
T_{\beta\vert i}>T_{\beta}-\tau_N, \quad B^{\beta\vert i}+W^{\beta\vert i} \in \overline{\mathcal{R}}^{X^{m-1}}_{T_{\beta\vert i}^-}, \quad e_{\beta\vert i}=\beta_{i+1}.\end{aligned}$$ The above means that at time $T_{\beta\vert i}$, the particle $\beta\vert i$ gives birth to $\beta\vert (i+1)$, which is displaced from $B^{\beta\vert i}$ by $W^{\beta\vert i}$. However, there exists some particle $\delta$ in $X^{m-1}$ which had already visited that location before time $T_{\beta\vert i}$, or that location lies in $K_0^N$. Hence we may bound $1\{\zeta_\beta^m \in (T_{\beta}-\tau_N, T_{\beta})\}$ by $$\begin{aligned}
\label{9ec9.53}
&\sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_{\beta}-\tau_N) \sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta)\nonumber\\
&\quad +\sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_{\beta}-\tau_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N),\end{aligned}$$ where we have dropped the condition $e_{\beta\vert i}=\beta_{i+1}$. Use the above to get that the left-hand side term of [\[9ea1.15\]](#9ea1.15){reference-type="eqref" reference="9ea1.15"} is at most $$\begin{aligned}
\label{9e8.03}
&\frac{1}{N^2} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\quad 1(B^{\beta\vert i}+W^{\beta\vert i} \in K_0^N) \times (I(\beta)+1) \Big) \nonumber\\
&+\frac{1}{N^2} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\quad\sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big)\nonumber\\
&+\frac{1}{N^2} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\quad\sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \times I(\beta) \Big):=I_0+I_1+I_2.\end{aligned}$$ It suffices to show $I_i\to 0$ for $i=0,1,2$.
### Convergence of $I_0$
The proof of $I_0\to 0$ relies largely on the assumption of $K_0^N$ from [\[9e0.93\]](#9e0.93){reference-type="eqref" reference="9e0.93"}: $$\begin{aligned}
\label{9e3.49}
\lim_{N\to \infty} \frac{1}{N} \mathbb{E}\Big( \sum_{\beta} 1(T_\beta\leq t, B^\beta\neq \Delta) 1(B^\beta+W^\beta\in K_0^N)\Big)=0.\end{aligned}$$ Consider $\beta_0=i_0$ for some $1\leq i_0\leq NX_0^0(1)$ and $|\beta|=l$ for some $l\geq 0$. It follows that $$\begin{aligned}
\label{9e3.491}
&\frac{1}{N} \mathbb{E}\Big( \sum_{\beta} 1(T_\beta\leq t, B^\beta\neq \Delta) 1(B^\beta+W^\beta\in K_0^N)\Big)\nonumber\\
&=\frac{1}{N} \sum_{i_0=1}^{NX_0^0(1)} \sum_{l=0}^\infty \sum_{\beta\geq i_0, |\beta|=l} \mathbb{E}\Big(1(T_\beta\leq t, B^\beta\neq \Delta) 1(B^\beta+W^\beta\in K_0^N)\Big)\nonumber\\
&=\frac{1}{N} \sum_{i_0=1}^{NX_0^0(1)} \sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^l \cdot \mathbb{P}(\Gamma_{l+1}\leq (2N+\theta)t ) \cdot \mathbb{P}(x_{i_0}+V_{l}^N+W^N\in K_0^N),\end{aligned}$$ where $W^N$ is uniform on $\mathcal{N}_N$, independent of $V_{l}^N$. Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(ii) implies that the above sum for $l>ANt$ is at most $CX_0^0(1) 2^{-Nt} \to 0$. Hence one may easily check that [\[9e3.49\]](#9e3.49){reference-type="eqref" reference="9e3.49"} is equivalent to $$\begin{aligned}
\label{9e3.490}
\lim_{N\to \infty}\frac{1}{N} \sum_{i_0=1}^{NX_0^0(1)} \sum_{l=0}^{ANt} \mathbb{P}(\Gamma_{l+1}\leq (2N+\theta)t ) \cdot \mathbb{P}(x_{i_0}+V_{l}^N+W^N\in K_0^N)=0.\end{aligned}$$
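The equivalence of [\[9e3.49\]](#9e3.49){reference-type="eqref" reference="9e3.49"} and [\[9e3.490\]](#9e3.490){reference-type="eqref" reference="9e3.490"} comes from the two-sided bound on the geometric factor: for $l\leq ANt$, $$1\leq \Big(\frac{2N+2\theta}{2N+\theta}\Big)^{l}=\Big(1+\frac{\theta}{2N+\theta}\Big)^{l}\leq \exp\Big(\frac{\theta l}{2N+\theta}\Big)\leq e^{A\theta t},$$ so the truncated sums with and without this factor differ only by a constant factor, while the part $l>ANt$ of [\[9e3.491\]](#9e3.491){reference-type="eqref" reference="9e3.491"} vanishes as noted above.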
To prove $I_0\to 0$, it suffices to show that $$\begin{aligned}
\label{9ea3.444}
I_0^{(a)}:=&\frac{1}{N^2} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\quad 1(B^{\beta\vert i}+W^{\beta\vert i} \in K_0^N) \Big)\to 0,\end{aligned}$$ and $$\begin{aligned}
\label{9ea3.445}
I_0^{(b)}:=&\frac{1}{N^2} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\quad 1(B^{\beta\vert i}+W^{\beta\vert i} \in K_0^N) \times I(\beta) \Big)\to 0.\end{aligned}$$
Turning to $I_0^{(a)}$, we let $\beta_0=i_0$ for some $1\leq i_0\leq NX_0^0(1)$ and set $|\beta|=l$ for some $l\geq 1$. Then $$\begin{aligned}
I_0^{(a)}=&\frac{1}{N^2} \sum_{i_0=1}^{NX_0^0(1)} \sum_{l=1}^\infty (\frac{2N+2\theta}{2N+\theta})^l \mathbb{E}\Big(1(\Gamma_{l+1}\leq (2N+\theta)t) \nonumber\\
& \sum_{i=0}^{l-1} 1(\Gamma_{i+1}>\Gamma_{l+1}-(2N+\theta)\tau_N) \Big) \cdot \mathbb{P}(x_{i_0}+V_{i}^N+W^N\in K_0^N). \end{aligned}$$ Use Fubini's theorem to exchange the sum of $i,l$ to get $$\begin{aligned}
I_0^{(a)}=& \frac{1}{N^2} \sum_{i_0=1}^{NX_0^0(1)} \sum_{i=0}^{\infty} (\frac{2N+2\theta}{2N+\theta})^i \mathbb{P}(x_{i_0}+V_{i}^N+W^N\in K_0^N)\nonumber\\
& \sum_{l=i+1}^\infty (\frac{2N+2\theta}{2N+\theta})^{l-i} \mathbb{E}\Big(1(\Gamma_{l+1}\leq (2N+\theta)t) 1(\Gamma_{l+1}-\Gamma_{i+1}<(2N+\theta)\tau_N) \Big). \end{aligned}$$ Bounding $1(\Gamma_{l+1}\leq (2N+\theta)t)$ by $1(\Gamma_{i+1}\leq (2N+\theta)t)$, we see that the above is at most $$\begin{aligned}
I_0^{(a)}\leq & \frac{1}{N^2} \sum_{i_0=1}^{NX_0^0(1)} \sum_{i=0}^{\infty} (\frac{2N+2\theta}{2N+\theta})^i \mathbb{P}(x_{i_0}+V_{i}^N+W^N\in K_0^N)\nonumber\\
& \mathbb{P}(\Gamma_{i+1}\leq (2N+\theta)t) \sum_{l=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \mathbb{P}(\Gamma_{l}<(2N+\theta)\tau_N). \end{aligned}$$ By Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(ii), the sum of $l$ is at most $$\begin{aligned}
C2^{-N\tau_N}+\sum_{l=1}^{AN\tau_N} (\frac{2N+2\theta}{2N+\theta})^{l} \leq CN\tau_N\leq CN.\end{aligned}$$ It follows that $$\begin{aligned}
I_0^{(a)}\leq & \frac{C}{N} \sum_{i_0=1}^{NX_0^0(1)} \sum_{i=0}^{\infty} (\frac{2N+2\theta}{2N+\theta})^i \mathbb{P}(x_{i_0}+V_{i}^N+W^N\in K_0^N)
\mathbb{P}(\Gamma_{i+1}\leq (2N+\theta)t).\end{aligned}$$ In view of [\[9e3.491\]](#9e3.491){reference-type="eqref" reference="9e3.491"}, we conclude $I_0^{(a)}\to 0$.\
Turning to $I_0^{(b)}$ from [\[9ea3.445\]](#9ea3.445){reference-type="eqref" reference="9ea3.445"}, we recall $I(\beta)$ from [\[9ea3.08\]](#9ea3.08){reference-type="eqref" reference="9ea3.08"} to see that $$\begin{aligned}
\label{9ea5.667}
I_0^{(b)}&=\frac{1}{N\psi(N)} \mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t)\sum_{\gamma}1(T_{\pi\gamma}<T_\beta) 1(B^\beta-B^\gamma\in \mathcal{N}_N) \nonumber\\
&1(T_{\gamma\wedge \beta}>T_\beta-\tau_N) \sum_{i=0}^{ \vert \beta\vert-1} 1(T_{\beta\vert i}>T_\beta-\tau_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big).\end{aligned}$$ Since $T_{\gamma\wedge \beta}>T_\beta-\tau_N$ implies $\gamma_0=\beta_0$, we may let $\alpha=\beta\wedge \gamma$ and use $T_{\alpha}\leq T_\beta< T_{\alpha}+\tau_N$ to get $$\begin{aligned}
\label{9ea5.666}
I_0^{(b)}&\leq \frac{1}{N\psi(N)} \mathbb{E}\Big(\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) 1(T_\beta<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) \nonumber\\
& 1(B^\beta-B^\gamma\in \mathcal{N}_N) \sum_{i=0}^{\vert \beta\vert -1}1(T_{\beta\vert i}>T_\beta-\tau_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big).\end{aligned}$$ Denote by $I_0^{(b,1)}$ (resp. $I_0^{(b,2)}$) the sum of $i\leq \vert \alpha\vert -1$ (resp. $\vert \alpha\vert \leq i \leq \vert \beta\vert -1$) in the above expression. It suffices to prove $I_0^{(b,1)}\to 0$ and $I_0^{(b,2)}\to 0$.\
For $I_0^{(b,1)}$, since $i\leq \vert \alpha\vert -1$, we get $\beta\vert i=\alpha\vert i$ and $T_{\beta\vert i}, B^{\beta\vert i}, W^{\beta\vert i}$ are all measurable with respect to $\mathcal{H}_{\alpha}$. Use Fubini's theorem and $T_\beta\geq T_{\alpha}$ to see $$\begin{aligned}
\label{9ea3.513}
I_0^{(b,1)}\leq \frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}1(T_{ \alpha} \leq t) \sum_{i=0}^{\vert \alpha\vert -1}1(T_{\alpha\vert i}>T_\alpha-\tau_N) 1(B^{\alpha\vert i}+W^{\alpha\vert i}\in K_0^N) \\
& \sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\beta}-T_{\alpha}<\tau_N) 1(T_{\pi\gamma}-T_{\alpha}<\tau_N) 1({B}^\beta-{B}^\gamma\in \mathcal{N}_N)\Big).\nonumber\end{aligned}$$ By considering $\vert \beta\vert =\vert \alpha\vert +l$ and $\vert \gamma\vert =\vert \alpha\vert +m$ for some $l,m\geq 0$, we get that the sum over $\beta,\gamma$ is at most (recall $\hat{B}^\beta, \hat{B}^\gamma$ from [\[9ea1.71\]](#9ea1.71){reference-type="eqref" reference="9ea1.71"}) $$\begin{aligned}
\label{9ea2.801}
& \sum_{l=0}^\infty \sum_{m=0}^\infty \sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =\vert \alpha\vert +l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =\vert \alpha\vert +m }} 1(T_{\beta}-T_{\alpha}<\tau_N)\nonumber\\
& 1(T_{\pi\gamma}-T_{\alpha}<\tau_N) 1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d),\end{aligned}$$ where we have used [\[9ea2.82\]](#9ea2.82){reference-type="eqref" reference="9ea2.82"} to bound $1(B^\beta-B^\gamma\in \mathcal{N}_N)$. By conditioning on $\mathcal{H}_\alpha$, the conditional expectation of [\[9ea2.801\]](#9ea2.801){reference-type="eqref" reference="9ea2.801"} is at most $$\begin{aligned}
\label{9ea3.511}
&1(B^\alpha\neq \Delta) \sum_{l=0}^\infty \sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+m} \Pi((2N+\theta)\tau_N, l) \Pi((2N+\theta)\tau_N, m-1) \frac{C}{(1+l+m)^{d/2}}.\end{aligned}$$ The sum of $m$ gives at most ${C }/{(1+l)^{d/2-1}}$ by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}. Then the sum of $l$ gives $$\begin{aligned}
\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \Pi((2N+\theta)\tau_N, l) \frac{C}{(1+l)^{d/2-1}}\leq CI(N),\end{aligned}$$ where the inequality is by Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"}. We conclude from the above that $$\begin{aligned}
\label{9ea3.512}
\mathbb{E}\Big(& \sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha} 1(T_{\beta}-T_{\alpha}<\tau_N) 1(T_{\pi\gamma}-T_{\alpha}<\tau_N) \nonumber\\
&\quad 1({B}^\beta-{B}^\gamma\in \mathcal{N}_N)\Big\vert \mathcal{H}_\alpha\Big)\leq CI(N)1(B^\alpha\neq \Delta).\end{aligned}$$ Therefore [\[9ea3.513\]](#9ea3.513){reference-type="eqref" reference="9ea3.513"} becomes $$\begin{aligned}
I_0^{(b,1)}\leq \frac{C}{N^2} \mathbb{E}\Big(&\sum_{\alpha}1(T_{ \alpha} \leq t, B^\alpha\neq \Delta) \sum_{i=0}^{\vert \alpha\vert -1}1(T_{\alpha\vert i}>T_\alpha-\tau_N) 1(B^{\alpha\vert i}+W^{\alpha\vert i}\in K_0^N)\Big),\end{aligned}$$ which gives exactly the same expression as in [\[9ea3.444\]](#9ea3.444){reference-type="eqref" reference="9ea3.444"} for $I_0^{(a)}$ up to some constant. So we conclude $I_0^{(b,1)}\to 0$.\
Recall $I_0^{(b,2)}$ is the contribution to [\[9ea5.666\]](#9ea5.666){reference-type="eqref" reference="9ea5.666"} from the sum of $i\geq \vert \alpha\vert$, in which case $T_{\beta\vert i}\geq T_\alpha$ and $1(T_{\beta\vert i}>T_\beta-\tau_N)$ will be absorbed into $1(T_\alpha>T_\beta-\tau_N)$. It follows that $$\begin{aligned}
\label{9ed1.661}
I_0^{(b,2)}\leq \frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) \\
& 1(B^\beta-B^\gamma\in \mathcal{N}_N)\sum_{i=\vert \alpha\vert }^{\vert \beta\vert -1} 1(B^{\beta\vert i}+W^{\beta\vert i} \in K_0^N) \Big),\nonumber\end{aligned}$$ where we have used $T_{\pi\beta}<T_\beta$. Consider $\alpha_0=i_0$ for $1\leq i_0\leq NX_0^0(1)$. Then $$\begin{aligned}
\label{9ed1.66}
I_0^{(b,2)}\leq &\frac{C}{N\psi(N)} \sum_{i_0=1}^{NX_0^0(1)} \sum_{k=0}^\infty \sum_{\substack{\alpha\geq i_0,\\ \vert \alpha\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =k+m}} \nonumber\\
& \sum_{i=k}^{k+l-1} \mathbb{E}\Big(1(T_{\alpha} \leq t) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) \nonumber\\
& 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big).\end{aligned}$$ Note that on the event $\{B^\beta, B^\gamma \neq \Delta\}$, the spatial events $\{B^\beta-B^\gamma\in \mathcal{N}_N\}$ and $\{B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N\}$ are independent of the branching events on $T_{\alpha}, T_{\pi\beta}, T_{\pi\gamma}$. Therefore the expectation above is at most $$\begin{aligned}
\label{9ed1.53}
&\mathbb{E}\Big(1(T_{\alpha} \leq t) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N)\Big) \nonumber\\
&(\frac{N+\theta}{2N+\theta})^{k+l+m} \mathbb{E}^{\beta, \gamma}\Big(1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big),\end{aligned}$$ where $\mathbb{E}^{\beta, \gamma}$ is the expectation conditional on $\{B^\beta, B^\gamma\neq \Delta\}$. We also have used [\[9ea2.82\]](#9ea2.82){reference-type="eqref" reference="9ea2.82"} to bound $1(B^\beta-B^\gamma\in \mathcal{N}_N)$. The first expectation above is equal to $$\begin{aligned}
\label{9ed8.53}
& \mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t) \cdot \mathbb{P}(\Gamma_{l-1} \leq (2N+\theta)\tau_N) \cdot \mathbb{P}(\Gamma_{m-1} \leq (2N+\theta)\tau_N).\end{aligned}$$ Turning to the second expectation in [\[9ed1.53\]](#9ed1.53){reference-type="eqref" reference="9ed1.53"}, recall $\hat{B}^\beta$, $\hat{B}^\gamma$ from [\[9ea1.71\]](#9ea1.71){reference-type="eqref" reference="9ea1.71"}. By setting $$\hat{W}^{\beta\vert j}:=W^{\beta\vert j} 1_{\{e_{\beta\vert j}=\beta_{j+1}\}},$$ we get that the term $\sum_{j=k+1}^{i} \hat{W}^{\beta\vert j}$ appearing in $\hat{B}^\beta$ is measurable with respect to $\mathcal{H}_{\beta\vert (i+1)}$. Hence we may condition on $\mathcal{H}_{\beta\vert (i+1)}$ to see that $$\begin{aligned}
\label{e4.33}
& \mathbb{P}^{\beta,\gamma}\Big(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d \Big\vert \mathcal{H}_{\beta\vert (i+1)}\Big)\nonumber\\
= & \mathbb{P}\Big(\sqrt{N} \sum_{j=i+1}^{k+l-1} \hat{W}^{\beta\vert j} -\sqrt{N} \sum_{j=k+1}^{k+m-1} \hat{W}^{\gamma\vert j} \in [-2,2]^d-\sqrt{N} \sum_{j=k+1}^{i} \hat{W}^{\beta\vert j} \Big)\nonumber\\
\leq& \frac{C}{(k+l-i+m+1)^{d/2}},\end{aligned}$$ where the last inequality follows from Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"}. Since $B^{\beta\vert i}, W^{\beta\vert i}\in \mathcal{H}_{\beta\vert (i+1)}$, we may condition on $\mathcal{H}_{\beta\vert (i+1)}$ and use the above to get that $$\begin{aligned}
\label{9ed1.51}
& \mathbb{E}^{\beta, \gamma}\Big(1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big)\nonumber\\
&\leq \frac{C}{(k+l-i+m+1)^{d/2}} \mathbb{P}^{\beta, \gamma}(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N)\nonumber\\
&= \frac{C}{(k+l-i+m+1)^{d/2}} \mathbb{P}(x_{i_0}+V_{i}^N+W^{N}\in K_0^N).\end{aligned}$$ Combining [\[9ed8.53\]](#9ed8.53){reference-type="eqref" reference="9ed8.53"} and [\[9ed1.51\]](#9ed1.51){reference-type="eqref" reference="9ed1.51"}, we have that [\[9ed1.53\]](#9ed1.53){reference-type="eqref" reference="9ed1.53"} is at most $$\begin{aligned}
&(\frac{N+\theta}{2N+\theta})^{k+l+m} \frac{C}{(k+l-i+m+1)^{d/2}} \mathbb{P}(x_{i_0}+V_{i}^N+W^{N}\in K_0^N)\\
& \mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t) \cdot \mathbb{P}(\Gamma_{l-1} \leq (2N+\theta)\tau_N) \cdot \mathbb{P}(\Gamma_{m-1} \leq (2N+\theta)\tau_N) .\end{aligned}$$ Apply the above in [\[9ed1.66\]](#9ed1.66){reference-type="eqref" reference="9ed1.66"} to get $$\begin{aligned}
\label{e2.34}
I_0^{(b,2)}\leq &\frac{1}{N\psi(N)} \sum_{i_0=1}^{NX_0^0(1)} \sum_{k=0}^\infty \sum_{l=0}^\infty \sum_{m=0}^\infty \sum_{i=k}^{k+l-1} (\frac{2N+2\theta}{2N+\theta})^{k+l+m} \nonumber\\
& \frac{1}{(k+l-i+m+1)^{d/2}} \mathbb{P}(x_{i_0}+V_{i}^N+W^{N}\in K_0^N) \nonumber\\
& \mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t) \cdot \mathbb{P}(\Gamma_{l-1} \leq (2N+\theta)\tau_N) \cdot \mathbb{P}(\Gamma_{m-1} \leq (2N+\theta)\tau_N) .\end{aligned}$$ By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $m$ gives $$\begin{aligned}
\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)\tau_N, m-1) \frac{1}{(k+l-i+m+1)^{d/2}} \leq \frac{C }{(1+l+k-i)^{d/2-1}}.\end{aligned}$$ Then the sum of $l, i$ gives $$\begin{aligned}
\sum_{l=0}^\infty & \sum_{i=k}^{k+l-1} (\frac{2N+2\theta}{2N+\theta})^{l} \frac{1}{(1+l+k-i)^{d/2-1}} \nonumber\\
& \mathbb{P}(x_{i_0}+V_{i}^N+W^{N}\in K_0^N) \mathbb{P}(\Gamma_{l-1} \leq (2N+\theta)\tau_N)\nonumber\\
=\sum_{l=0}^\infty & \sum_{j=0}^{l-1} (\frac{2N+2\theta}{2N+\theta})^{l} \frac{1}{(1+l-j)^{d/2-1}} \nonumber\\
& \mathbb{P}(x_{i_0}+V_{j+k}^N+W^{N}\in K_0^N) \mathbb{P}(\Gamma_{l-1} \leq (2N+\theta)\tau_N),\end{aligned}$$ where the equality follows by letting $j=i-k$. Bounding $\mathbb{P}(\Gamma_{l-1}\leq (2N+\theta)\tau_N)$ by $\mathbb{P}(\Gamma_{j}\leq (2N+\theta)\tau_N)\mathbb{P}(\Gamma_{l-1}-\Gamma_{j}\leq (2N+\theta)\tau_N)$ and exchanging the sum of $l,j$, we see that the above is at most $$\begin{aligned}
&\sum_{j=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{j} \mathbb{P}(x_{i_0}+V_{j+k}^N+W^{N}\in K_0^N)\mathbb{P}(\Gamma_{j}\leq (2N+\theta)\tau_N) \nonumber\\
& \sum_{l=j+1}^{\infty} (\frac{2N+2\theta}{2N+\theta})^{l-j} \frac{1}{(l-j+1)^{d/2-1}} \mathbb{P}(\Gamma_{l-j-1} \leq (2N+\theta)\tau_N).\end{aligned}$$ By Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"}, the sum of $l$ is at most $CI(N)$. Returning to [\[e2.34\]](#e2.34){reference-type="eqref" reference="e2.34"}, we are left with $$\begin{aligned}
\label{9ed1.20}
I_0^{(b,2)}\leq &\frac{1}{N^2} \sum_{i_0=1}^{NX_0^0(1)} \sum_{k=0}^\infty \sum_{j=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{j+k} \mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t)\nonumber\\
&\mathbb{P}(\Gamma_{j}\leq (2N+\theta)\tau_N) \mathbb{P}(x_{i_0}+V_{j+k}^N+W^{N}\in K_0^N).\end{aligned}$$ Notice that $$\begin{aligned}
&\mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t)\mathbb{P}(\Gamma_{j}\leq (2N+\theta)\tau_N)\nonumber\\
=&\mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t, \Gamma_{k+j+1}-\Gamma_{k+1}\leq (2N+\theta)\tau_N)\leq \mathbb{P}(\Gamma_{k+j+1}\leq 2(2N+\theta)t).\end{aligned}$$ Use the above to bound [\[9ed1.20\]](#9ed1.20){reference-type="eqref" reference="9ed1.20"} and then let $n=j+k$ to see that $$\begin{aligned}
I_0^{(b,2)}\leq &\frac{1}{N^2} \sum_{i_0=1}^{NX_0^0(1)} \sum_{n=0}^\infty (n+1) (\frac{2N+2\theta}{2N+\theta})^{n} \nonumber\\
& \mathbb{P}(\Gamma_{n+1}\leq 2(2N+\theta)t) \mathbb{P}(x_{i_0}+V_{n}^N+W^{N}\in K_0^N).\end{aligned}$$ By Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}(ii), the sum of $n>2ANt$ gives at most $C2^{-2Nt}\leq C$. For $n\leq 2ANt$, we bound $n+1$ by $3ANt$ to see $$\begin{aligned}
I_0^{(b,2)}\leq &\frac{CX_0^0(1)}{N} +\frac{C}{N} \sum_{i_0=1}^{NX_0^0(1)} \sum_{n=0}^{2ANt} \mathbb{P}(\Gamma_{n+1}\leq 2(2N+\theta)t) \mathbb{P}(x_{i_0}+V_{n}^N+W^{N}\in K_0^N).\end{aligned}$$ By applying [\[9e3.490\]](#9e3.490){reference-type="eqref" reference="9e3.490"} with $t$ replaced by $2t$, we conclude $I_0^{(b,2)}\to 0$. The proof is complete.
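For the reader's convenience, the identity for the Gamma variables noted after [\[9ed1.20\]](#9ed1.20){reference-type="eqref" reference="9ed1.20"} can be justified as follows, assuming (as the construction of the branching times suggests) that $\Gamma_n$ is a sum of $n$ i.i.d. exponential variables, so that $\Gamma_{k+j+1}-\Gamma_{k+1}$ is independent of $\Gamma_{k+1}$ and has the law of $\Gamma_{j}$: $$\mathbb{P}(\Gamma_{k+1}\leq (2N+\theta)t)\,\mathbb{P}(\Gamma_{j}\leq (2N+\theta)\tau_N)=\mathbb{P}\big(\Gamma_{k+1}\leq (2N+\theta)t,\ \Gamma_{k+j+1}-\Gamma_{k+1}\leq (2N+\theta)\tau_N\big),$$ and on this event $\Gamma_{k+j+1}\leq (2N+\theta)(t+\tau_N)\leq 2(2N+\theta)t$ once $\tau_N\leq t$, which holds for $N$ large since $\tau_N\to 0$.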
### Convergence of $I_1$
There are two cases for $\delta$ in the summation of $I_1$: (a) $\delta_0 \neq \beta_0$; (b) $\delta_0= \beta_0$. Denote by $I_1^{(a)}$ (resp. $I_1^{(b)}$) the contribution to $I_1$ from case (a) (resp. case (b)). It suffices to prove that $I_1^{(a)} \to 0$ and $I_1^{(b)} \to 0$.\
Let $\beta_0=i_0\neq j_0=\delta_0$ for some $1\leq i_0\neq j_0\leq NX_0^0(1)$. There are at most $[NX_0^0(1)]^2$ such pairs, so the contribution to $I_1$ from case (a) is bounded by $$\begin{aligned}
\label{9e8.05}
I_1^{(a)}\leq &\frac{1}{N^2}[NX_0^0(1)]^2 \sum_{\beta\geq i_0}\sum_{\delta\geq j_0} \mathbb{E}\Big( 1(T_{\beta}\leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1} 1(T_{\beta\vert i}>T_{\beta}-\tau_N) \nonumber\\
&1(T_{\pi\delta} <T_{\beta\vert i}) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big)\nonumber\\
\leq & X_0^0(1)^2 \sum_{l=0}^\infty \sum_{\beta\geq i_0, \vert \beta\vert =l} \sum_{n=0}^\infty\sum_{\delta\geq j_0, \vert \delta\vert =n} \sum_{i=0}^{l-1} \mathbb{E}\Big(1(T_{\beta}\leq t, B^{\beta}\neq \Delta) \nonumber\\
& 1(T_{\beta\vert i}>T_{\beta}-\tau_N) 1(T_{\pi\delta} <t) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where in the last inequality we have replaced $T_{\pi\delta} <T_{\beta\vert i}$ by $T_{\pi\delta} <T_\beta\leq t$. The last expectation above is at most $$\begin{aligned}
\label{9ea1.13}
(\frac{N+\theta}{2N+\theta})^{l+n} \mathbb{E}\Big(& 1(\Gamma_{l+1}\leq (2N+\theta)t) 1(\Gamma_{i+1}>\Gamma_{l+1}-(2N+\theta)\tau_N) \Big) \nonumber\\
&\times \Pi((2N+\theta)t,n) \frac{1}{\psi(N)} \frac{C}{(i+n+1)^{d/2}},\end{aligned}$$ where we have used Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"} to obtain $$\begin{aligned}
\mathbb{P}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta)=\frac{1}{\psi(N)} \mathbb{P}(B^{\beta\vert i}-B^\delta \in \mathcal{N}_N) \leq \frac{1}{\psi(N)} \frac{C}{(i+n+1)^{d/2}}.\end{aligned}$$ Apply [\[9ea1.13\]](#9ea1.13){reference-type="eqref" reference="9ea1.13"} in [\[9e8.05\]](#9e8.05){reference-type="eqref" reference="9e8.05"} to get $$\begin{aligned}
\label{9ea1.14}
I_1^{(a)}\leq &\frac{C}{\psi(N)}X_0^0(1)^2\sum_{l=0}^\infty \sum_{n=0}^\infty \sum_{i=0}^{l-1} (\frac{2N+2\theta}{2N+\theta})^{n+l} \Pi((2N+\theta)t,n) \frac{1}{(i+n+1)^{d/2}} \nonumber\\
&\mathbb{E}\Big( 1(\Gamma_{l+1}\leq (2N+\theta)t)\cdot 1(\Gamma_{l+1}<\Gamma_{i+1}+(2N+\theta)\tau_N) \Big).\end{aligned}$$ The sum of $n$ gives $$\begin{aligned}
\label{9ea1.30}
& \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} \Pi((2N+\theta)t,n) \frac{1}{(i+n+1)^{d/2}}\leq Ce^{\theta t} \frac{1}{(i+1)^{d/2-1}},\end{aligned}$$ where the inequality is by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, thus giving $$\begin{aligned}
\label{9ea3.05}
I_1^{(a)}\leq &\frac{C}{\psi(N)}X_0^0(1)^2\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} \mathbb{E}\Big( 1(\Gamma_{l+1}\leq (2N+\theta)t) \nonumber\\
&\quad \sum_{i=0}^{l-1} 1(\Gamma_{l+1}<\Gamma_{i+1}+(2N+\theta)\tau_N) \frac{1}{(i+1)^{d/2-1}} \Big).\end{aligned}$$ By applying Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii), we get the sum of $l>ANt$ is at most $$\begin{aligned}
\sum_{l>ANt} (\frac{2N+2\theta}{2N+\theta})^{l} \mathbb{E}\Big( 1(\Gamma_{l+1}\leq (2N+\theta)t) \times l\Big)\leq C_1 2^{-Nt}.\end{aligned}$$ The remaining sum of $l\leq ANt$ is bounded by $$\begin{aligned}
&\sum_{l=0}^{ANt} e^{A\theta t} \mathbb{E}\Big( \sum_{i=0}^{l-1} 1(\Gamma_{l+1}-\Gamma_{i+1}<(2N+\theta)\tau_N) \frac{1}{(i+1)^{d/2-1}} \Big).\end{aligned}$$ Use Fubini's theorem to see that the above is at most $$\begin{aligned}
\label{9ea3.04}
&e^{A\theta t} \sum_{i=0}^{ANt} \frac{1}{(i+1)^{d/2-1}} \mathbb{E}\Big( \sum_{l=i+1}^{ANt} 1(\Gamma_{l-i}<(2N+\theta)\tau_N) \Big)\nonumber\\
&\leq e^{A\theta t} I(ANt) \mathbb{E}\Big( \sum_{l=1}^{ANt} 1(\Gamma_{l}<(2N+\theta)\tau_N) \Big).\end{aligned}$$ Again we apply Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii) to see that $$\begin{aligned}
\mathbb{E}\Big( \sum_{l=1}^{ANt} 1(\Gamma_{l}<(2N+\theta)\tau_N) \Big)& \leq \mathbb{E}\Big( \sum_{l>AN\tau_N} 1(\Gamma_{l}<(2N+\theta)\tau_N) \Big)+\mathbb{E}\Big( \sum_{l\leq AN\tau_N} 1(\Gamma_{l}<(2N+\theta)\tau_N) \Big)\\
&\leq C2^{-N\tau_N}+2 AN\tau_N \leq CN\tau_N.\end{aligned}$$ Hence [\[9ea3.04\]](#9ea3.04){reference-type="eqref" reference="9ea3.04"} is bounded by $Ce^{A\theta t} I(ANt) N\tau_N\leq CN\tau_N I(N)$. We conclude that [\[9ea3.05\]](#9ea3.05){reference-type="eqref" reference="9ea3.05"} becomes $$\begin{aligned}
I_1^{(a)}\leq\frac{C}{\psi(N)}X_0^0(1)^2 (C_1 2^{-Nt}+CN\tau_N I(N))\leq C[X_0^0(1)]^2\tau_N \to 0.\end{aligned}$$
Let $\beta_0=i_0$ for some $1\leq i_0\leq NX_0^0(1)$. By translation invariance, the contribution to $I_1$ from case (b) is given by $$\begin{aligned}
I_1^{(b)}=\frac{1}{N^2}NX_0^0(1) \sum_{\beta\geq i_0}\sum_{\delta\geq i_0}\mathbb{E}\Big(& 1(T_{ \beta}\leq t, B^{\beta}\neq \Delta) \sum_{i=0}^{\vert \beta\vert -1}1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ Let $\vert \beta\vert =l$ for some $l\geq 1$ (the case for $|\beta|=0$ is trivial by $i\leq |\beta|-1$). Since $T_{\pi\delta} <T_{\beta\vert i}$ implies that $\beta\wedge \delta\leq \beta\vert i$, we may let $\beta\wedge \delta=\beta\vert j$ for some $j\leq i$ and $\vert \delta\vert =j+n$ for some $n\geq 0$. Then $$\begin{aligned}
\label{9ea1.20}
I_1^{(b)}&\leq \frac{X_0^0(1)}{N} \sum_{l=1}^\infty \sum_{\beta\geq i_0, \vert \beta\vert =l}\sum_{i=0}^{l-1} \sum_{j=0}^{i} \sum_{n=0}^\infty \sum_{\substack{ \delta\geq \beta\vert j,\\ \vert \delta\vert =j+n}}\mathbb{E}\Big( 1(T_{ \beta}\leq t, B^{\beta}\neq \Delta) \nonumber\\
& 1(T_{\beta\vert i}>T_\beta-\tau_N) 1(T_{\pi\delta}-T_{\beta\vert j}<t)1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where we have replaced $\{T_{\pi\delta} <T_{\beta\vert i}\}$ by $\{T_{\pi\delta}-T_{\beta\vert j}< T_{\beta\vert i}<T_\beta\leq t\}$. By conditioning on $\{B^\beta,B^\delta \neq \Delta\}$, we get $$\begin{aligned}
\mathbb{P}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta)&=\frac{1}{\psi(N)} \mathbb{P}((B^{\beta\vert i}-B^{\beta\vert j})-(B^\delta-B^{\beta\vert j}) \in \mathcal{N}_N) \nonumber\\
&\leq \frac{1}{\psi(N)} \frac{C}{(i-j+n+1)^{d/2}},\end{aligned}$$ where the inequality follows by Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"}. So the expectation in [\[9ea1.20\]](#9ea1.20){reference-type="eqref" reference="9ea1.20"} can be bounded by $$\begin{aligned}
\label{9ea1.21}
&\frac{C}{\psi(N)} \frac{1}{(i-j+n+1)^{d/2}} (\frac{N+\theta}{2N+\theta})^{l+n}\mathbb{E}\Big( 1(\Gamma_{l+1}\leq (2N+\theta)t) \nonumber\\
& 1(\Gamma_{i+1}>\Gamma_{l+1}-(2N+\theta)\tau_N)\Big) \cdot \Pi((2N+\theta)t, n-1) ,\end{aligned}$$ where we have used that $T_{\pi\delta}-T_{\beta\vert j}$ is independent of $\mathcal{H}_\beta$ for $n\geq 2$ (trivial for $n=0,1$). Apply [\[9ea1.21\]](#9ea1.21){reference-type="eqref" reference="9ea1.21"} in [\[9ea1.20\]](#9ea1.20){reference-type="eqref" reference="9ea1.20"} to get $$\begin{aligned}
\label{9ea1.31}
I_1^{(b)}&\leq \frac{CX_0^0(1)}{N\psi(N)} \sum_{l=1}^\infty \sum_{i=0}^{l-1} \sum_{j=0}^{i} \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l+n} \Pi((2N+\theta)t, n-1) \nonumber\\
&\frac{1}{(i-j+n+1)^{d/2}}\mathbb{E}\Big( 1(\Gamma_{l+1}\leq (2N+\theta)t) 1(\Gamma_{l+1}<\Gamma_{i+1}+(2N+\theta)\tau_N) \Big).\end{aligned}$$ By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $n$ gives at most $$\begin{aligned}
& \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} \Pi((2N+\theta)t, n-1) \frac{1}{(i-j+n+1)^{d/2}}\leq \frac{C }{(i-j+1)^{d/2-1}}.\end{aligned}$$ Then the sum of $j$ equals $$\begin{aligned}
& \sum_{j=0}^i C \frac{1}{(i-j+1)^{d/2-1}}\leq C I(i)\leq C I(l).
\end{aligned}$$ We conclude from the above that [\[9ea1.31\]](#9ea1.31){reference-type="eqref" reference="9ea1.31"} becomes $$\begin{aligned}
\label{9ea2.84}
I_1^{(b)}\leq \frac{CX_0^0(1)}{N\psi(N)} &\sum_{l=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{l} I(l) \mathbb{E}\Big( 1(\Gamma_{l+1}\leq (2N+\theta)t) \nonumber\\
&\cdot \sum_{i=0}^{l-1} 1(\Gamma_{l+1}-\Gamma_{i+1}<(2N+\theta)\tau_N) \Big).\end{aligned}$$
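Let us record the bookkeeping behind the change of the geometric ratio in passing from [\[9ea1.21\]](#9ea1.21){reference-type="eqref" reference="9ea1.21"} to [\[9ea1.31\]](#9ea1.31){reference-type="eqref" reference="9ea1.31"}: assuming, as in the labelling system used throughout, that each individual has two potential offspring, there are at most $2^{l}$ labels $\beta\geq i_0$ with $\vert \beta\vert =l$ and at most $2^{n}$ labels $\delta\geq \beta\vert j$ with $\vert \delta\vert =j+n$, so that $$\begin{aligned}
2^{l+n}\Big(\frac{N+\theta}{2N+\theta}\Big)^{l+n}=\Big(\frac{2N+2\theta}{2N+\theta}\Big)^{l+n}.\end{aligned}$$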
**Lemma 38**. *For $m=0,1$ or $2$, we have $$\begin{aligned}
\label{e2.84}
\sum_{l=0}^\infty & (\frac{2N+2\theta}{2N+\theta})^{l} \cdot I(l)^m \mathbb{E}\Big( 1(\Gamma_{l}\leq (2N+\theta)t) \nonumber\\
&\cdot \sum_{i=0}^{l} 1(\Gamma_{l}-\Gamma_{i+1}<(2N+\theta)\tau_N) \Big)\leq CN^2\tau_N I(N)^m.\end{aligned}$$*
The sum of $l>ANt$ is at most $C2^{-Nt}$ by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}. Next, the sum of $l\leq ANt$ is bounded by $$\begin{aligned}
&\sum_{l=0}^{ANt} e^{A\theta t} I(ANt)^m \mathbb{E}\Big( \sum_{i=0}^{l} 1(\Gamma_{l}<\Gamma_{i+1}+(2N+\theta)\tau_N) \Big)\nonumber\\
&\leq CI(N)^m \sum_{l=0}^{ANt} \mathbb{E}\Big( \sum_{k=-1}^l 1(\Gamma_{k}<(2N+\theta)\tau_N) \Big)\nonumber\\
&\leq C I(N)^m ANt \times \mathbb{E}\Big(\sum_{k=-1}^{ANt} 1(\Gamma_{k}<(2N+\theta)\tau_N) \Big).\end{aligned}$$ Again by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"}, the sum of $k>AN\tau_N$ is at most $C2^{-N\tau_N}$. The sum of $k\leq AN\tau_N$ is at most $CN\tau_N$. So we may bound the above by $C I(N)^m ANt \cdot (C2^{-N\tau_N}+CN\tau_N) \leq CN^2\tau_N I(N)^m$.
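A word on the index shift $k=l-i-1$ used in the proof above (a sketch, under the convention -- consistent with its use elsewhere in the argument -- that $\Gamma_{m}$ denotes the sum of $m$ i.i.d. mean-one exponential random variables, so that $\Gamma_0=0$ and $\Gamma_{l}-\Gamma_{i+1}\overset{d}{=}\Gamma_{l-i-1}$ for $i\leq l-1$): the substitution $k=l-i-1$ gives $$\begin{aligned}
\mathbb{E}\Big( \sum_{i=0}^{l} 1(\Gamma_{l}<\Gamma_{i+1}+(2N+\theta)\tau_N) \Big)= 1+\sum_{k=0}^{l-1}\mathbb{P}\big(\Gamma_{k}<(2N+\theta)\tau_N\big),\end{aligned}$$ the term $1$ coming from $i=l$, whose indicator is identically equal to one.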
One can easily check that [\[e2.84\]](#e2.84){reference-type="eqref" reference="e2.84"} with $m=1$ implies an upper bound for the sum in [\[9ea2.84\]](#9ea2.84){reference-type="eqref" reference="9ea2.84"}, thus giving $$\begin{aligned}
I_1^{(b)}&\leq \frac{CX_0^0(1)}{N\psi(N)} CN^2\tau_N I(N)\leq {CX_0^0(1)} \tau_N \to 0,\end{aligned}$$ as required.
### Convergence of $I_2$
Recall $I(\beta)$ from [\[9ea3.08\]](#9ea3.08){reference-type="eqref" reference="9ea3.08"} to see that $$\begin{aligned}
\label{9e1.32}
I_2=\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\beta} 1(T_{ \beta} \leq t)\sum_{\gamma}1(T_{\pi\gamma}<T_\beta) 1(B^\beta-B^\gamma\in \mathcal{N}_N) \nonumber\\
&1(T_{\gamma\wedge \beta}>T_\beta-\tau_N) \sum_{i=0}^{ \vert \beta\vert-1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ Since $T_{\gamma\wedge \beta}>T_\beta-\tau_N$ implies that $\gamma_0=\beta_0$, we may let $\alpha=\beta\wedge \gamma$ and use $T_{\alpha}\leq T_\beta< T_{\alpha}+\tau_N$ to get $$\begin{aligned}
\label{9ec6.89}
I_2\leq \frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) \sum_{i=0}^{\vert \beta\vert -1}1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
& 1(T_\beta<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(B^\beta-B^\gamma\in \mathcal{N}_N)\nonumber\\
&\sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ Denote by $I_3$ (resp. $I_4$) the sum of $i\leq \vert \alpha\vert -1$ (resp. $\vert \alpha\vert \leq i \leq \vert \beta\vert -1$) in the above expression. It suffices to prove that $I_3 \to 0$ and $I_4 \to 0$.\
For $I_3$, since $i\leq \vert \alpha\vert -1$, we get $\beta\vert i=\alpha\vert i$ and so $T_{\beta\vert i}, B^{\beta\vert i}, W^{\beta\vert i}$ are all measurable with respect to $\mathcal{H}_{\alpha}$. Use Fubini's theorem and $T_\beta\geq T_{\alpha}$ to get $$\begin{aligned}
\label{9ec6.57}
I_3\leq \frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}1(T_{ \alpha} \leq t) \sum_{i=0}^{\vert \alpha\vert -1}1(T_{\alpha\vert i}>T_\alpha-\tau_N) \nonumber\\
&\sum_{\delta} 1(T_{\pi\delta} <T_{\alpha\vert i}, B^{\alpha\vert i}+W^{\alpha\vert i}=B^\delta)\nonumber\\
& \sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\beta}-T_{\alpha}<\tau_N) 1(T_{\pi\gamma}-T_{\alpha}<\tau_N) 1({B}^\beta-{B}^\gamma\in \mathcal{N}_N)\Big).\end{aligned}$$ Since $T_{\pi\delta} <T_{\alpha\vert i}\leq T_{\pi \alpha}$, we must have $\delta$ branch off the family tree of $\alpha, \beta, \gamma$ before $\alpha$. Hence we may condition on $\mathcal{H}_\alpha \vee \mathcal{H}_\delta$ and use [\[9ea3.512\]](#9ea3.512){reference-type="eqref" reference="9ea3.512"} to bound the above sum of $\beta,\gamma$ so that [\[9ec6.57\]](#9ec6.57){reference-type="eqref" reference="9ec6.57"} is at most $$\begin{aligned}
I_3\leq &\frac{C}{N^2}\mathbb{E}\Big( \sum_{\alpha}1(T_{ \alpha} \leq t, B^\alpha\neq \Delta) \sum_{i=0}^{\vert \alpha\vert -1}1(T_{\alpha\vert i}>T_\alpha-\tau_N) \nonumber\\
&\quad\quad\quad\sum_{\delta} 1(T_{\pi\delta} <T_{\alpha\vert i}, B^{\alpha\vert i}+W^{\alpha\vert i}=B^\delta) \Big).\end{aligned}$$ The above gives the same expression as in [\[9e8.03\]](#9e8.03){reference-type="eqref" reference="9e8.03"} for $I_1$ up to some constant. It follows that $I_3\to 0$.\
Recall $I_4$ is the contribution to [\[9ec6.89\]](#9ec6.89){reference-type="eqref" reference="9ec6.89"} from the sum of $i\geq \vert \alpha\vert$, in which case $T_{\beta\vert i}\geq T_\alpha$ and $1(T_{\beta\vert i}>T_\beta-\tau_N)$ will be absorbed into $1(T_\alpha>T_\beta-\tau_N)$. It follows that $$\begin{aligned}
\label{9ea2.81}
I_4\leq \frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) \sum_{i=\vert \alpha\vert }^{\vert \beta\vert -1} 1(T_{\pi\beta}<T_{\alpha}+\tau_N) \nonumber\\
& 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(B^\beta-B^\gamma\in \mathcal{N}_N)\nonumber\\
&\sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where we have bounded $1(T_{\beta}<T_{\alpha}+\tau_N)$ by $1(T_{\pi\beta}<T_{\alpha}+\tau_N)$.
Separate the sum of $\delta$ into two parts: (a) $\delta_0 \neq \alpha_0$; (b) $\delta_0= \alpha_0$. Denote by $I_4^{(a)}$ (resp. $I_4^{(b)}$) the contribution from case (a) (resp. case (b)). It suffices to prove $I_4^{(a)} \to 0$ and $I_4^{(b)} \to 0$.\
Let $\alpha_0=i_0\neq j_0=\delta_0$ for some $1\leq i_0\neq j_0\leq NX_0^0(1)$. There are at most $[NX_0^0(1)]^2$ such pairs. It follows that $$\begin{aligned}
\label{9ea1.66}
I_4^{(a)}\leq &\frac{C}{N\psi(N)}[NX_0^0(1)]^2\sum_{k=0}^\infty \sum_{\substack{\alpha\geq i_0,\\ \vert \alpha\vert =k}} \sum_{l=0}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =k+m}} \sum_{n=0}^\infty \sum_{\substack{\delta\geq j_0,\\ \vert \delta\vert =n} } \nonumber\\
& \sum_{i=k}^{k+l-1} \mathbb{E}\Big(1(T_{ \alpha} \leq t) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) \nonumber\\
& 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(T_{\pi\delta} <2t)1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where we also replace $T_{\pi\delta}<T_{\beta\vert i}$ by $T_{\pi\delta}<T_{\beta}<T_\alpha+\tau_N\leq 2t$. Note that on the event $\{B^\beta, B^\gamma, B^{\delta}\neq \Delta\}$, the spatial events $\{B^\beta-B^\gamma\in \mathcal{N}_N\}$ and $\{B^{\beta\vert i}+W^{\beta\vert i}=B^\delta\}$ are independent of the branching events on $T_{\alpha}, T_\beta$, etc. Therefore the expectation above is at most $$\begin{aligned}
\label{9ea1.53}
&\mathbb{E}\Big(1(T_{\alpha}\leq t) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\delta} <2t)\Big) \nonumber\\
&(\frac{N+\theta}{2N+\theta})^{k+l+m+n} \sum_{i=k}^{k+l-1} \mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where $\mathbb{E}^{\beta, \gamma,{\delta}}$ is the expectation conditional on $\{B^\beta, B^\gamma, B^{\delta}\neq \Delta\}$. The first expectation above is equal to $$\begin{aligned}
\label{9ec8.53}
& \Pi((2N+\theta)t, k+1) \cdot \Pi((2N+\theta)\tau_N, m-1) \cdot \Pi((2N+\theta)\tau_N, l-1) \cdot \Pi(2(2N+\theta)t, n).\end{aligned}$$ Next, use [\[9ea2.82\]](#9ea2.82){reference-type="eqref" reference="9ea2.82"} to bound $1(B^\beta-B^\gamma\in \mathcal{N}_N)$ by $1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d)$. Similar to the derivation of [\[e4.33\]](#e4.33){reference-type="eqref" reference="e4.33"}, we may condition on $\mathcal{H}_{\beta\vert (i+1)}$ to see that $$\begin{aligned}
\label{9ea1.84}
& \mathbb{P}^{\beta, \gamma,{\delta}}\Big(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d \Big\vert \mathcal{H}_{\beta\vert (i+1)}\Big) \leq \frac{C}{(k+l-i+m+1)^{d/2}}.\end{aligned}$$ Since $B^{\beta\vert i}, W^{\beta\vert i}\in \mathcal{H}_{\beta\vert (i+1)}$, we may condition on $\mathcal{H}_{\beta\vert (i+1)}\vee \mathcal{H}_\delta$ and use the above to get $$\begin{aligned}
\label{9ea1.51}
& \mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big)\nonumber\\
&\leq \frac{C}{(k+l-i+m+1)^{d/2}} \mathbb{P}^{\beta, \gamma,{\delta}}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta).\end{aligned}$$ Further bound the last probability above by $$\begin{aligned}
\label{9ea1.52}
\mathbb{P}^{\beta, \gamma,{\delta}}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta)&=\frac{1}{\psi(N)} \mathbb{P}^{\beta, \gamma,{\delta}}(B^{\beta\vert i}-B^\delta\in \mathcal{N}_N)\nonumber\\
&\leq \frac{1}{\psi(N)} \frac{C}{(1+n+i)^{d/2}}\leq \frac{C}{\psi(N)} \frac{1}{(1+n+k)^{d/2}}.\end{aligned}$$ Combining [\[9ec8.53\]](#9ec8.53){reference-type="eqref" reference="9ec8.53"}, [\[9ea1.51\]](#9ea1.51){reference-type="eqref" reference="9ea1.51"} and [\[9ea1.52\]](#9ea1.52){reference-type="eqref" reference="9ea1.52"}, we get [\[9ea1.53\]](#9ea1.53){reference-type="eqref" reference="9ea1.53"} is at most $$\begin{aligned}
&(\frac{N+\theta}{2N+\theta})^{k+l+m+n} \frac{C}{(k+l-i+m+1)^{d/2}}\frac{1}{\psi(N)} \frac{1}{(1+n+k)^{d/2}}\\
&\Pi((2N+\theta)t,k+1) \cdot \Pi((2N+\theta)\tau_N, m-1) \cdot \Pi((2N+\theta)\tau_N, l-1) \cdot \Pi(2(2N+\theta)t, n).\nonumber\end{aligned}$$ Apply the above in [\[9ea1.66\]](#9ea1.66){reference-type="eqref" reference="9ea1.66"} to obtain $$\begin{aligned}
I_4^{(a)}\leq &\frac{C[X_0^0(1)]^2}{\psi(N)\psi_0(N)}\sum_{k=0}^\infty \Pi((2N+\theta)t,k+1) \sum_{l=0}^\infty \sum_{m=0}^\infty \sum_{i=k}^{k+l-1} \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+l+m+n} \nonumber\\
& \frac{1}{(k+l-i+m+1)^{d/2}} \frac{1}{(1+n+k)^{d/2}} \nonumber\\
& \cdot \Pi((2N+\theta)\tau_N, m-1) \cdot \Pi((2N+\theta)\tau_N, l-1) \cdot \Pi(2(2N+\theta)t, n).\end{aligned}$$ By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $n$ gives $$\begin{aligned}
\sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} \Pi(2(2N+\theta)t, n) \frac{1}{(1+n+k)^{d/2}} \leq C \frac{1}{(1+k)^{d/2-1}}.\end{aligned}$$ Similarly, the sum of $m$ gives $$\begin{aligned}
\sum_{m=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)\tau_N, m-1) \frac{1}{(k+l-i+m+1)^{d/2}} \leq \frac{C }{(1+k+l-i)^{d/2-1}}.\end{aligned}$$ Next, the sum of $i$ equals $$\begin{aligned}
\sum_{i=k}^{k+l-1} \frac{1}{(1+k+l-i)^{d/2-1}}=\sum_{i=1}^{l} \frac{1}{(1+i)^{d/2-1}}\leq I(l).\end{aligned}$$ To this end, we get $$\begin{aligned}
\label{9ec7.58}
I_4^{(a)}\leq &\frac{C[X_0^0(1)]^2}{\psi_0(N) \psi(N)}\sum_{k=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k}\Pi((2N+\theta)t, k+1) \frac{1}{(1+k)^{d/2-1}} \nonumber\\
&\quad \sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l } \Pi((2N+\theta)\tau_N, l) I(l).\end{aligned}$$ By Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"}, the sum of $k$ is at most $CI(N)$. For the sum of $l$, we write $\Pi((2N+\theta)\tau_N, l)=\mathbb{P}(\Gamma_{l}\leq (2N+\theta)\tau_N)$ to see $$\begin{aligned}
&\sum_{l=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{l } \mathbb{P}(\Gamma_{l }\leq (2N+\theta)\tau_N) I(l)\nonumber\\
&\leq C2^{-N\tau_N}+\sum_{l=0}^{AN\tau_N} e^{A\theta \tau_N} I(l)\leq CN\tau_N I(N),\end{aligned}$$ where the first inequality is by Lemma [Lemma 16](#9l4.3){reference-type="ref" reference="9l4.3"} (ii). Now we get [\[9ec7.58\]](#9ec7.58){reference-type="eqref" reference="9ec7.58"} is at most $$\begin{aligned}
I_4^{(a)}\leq &\frac{C[X_0^0(1)]^2}{\psi_0(N) \psi(N)} CN\tau_N I(N)^2 \leq C [X_0^0(1)]^2\tau_N\to 0.\end{aligned}$$
Let $\alpha_0=\delta_0=i_0$ for some $1\leq i_0\leq NX_0^0(1)$. The contribution to $I_4$ in [\[9ea2.81\]](#9ea2.81){reference-type="eqref" reference="9ea2.81"} from case (b) is bounded by $$\begin{aligned}
I_4^{(b)}\leq &\frac{C}{N\psi(N)}[NX_0^0(1)]\sum_{k=0}^\infty \sum_{\substack{\alpha\geq i_0,\\ \vert \alpha\vert =k}} \sum_{l=1}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =k+m}} \sum_{i=k}^{k+l-1} \sum_{\delta\geq i_0} \nonumber\\
&\mathbb{E}\Big(1(T_{\alpha}\leq t) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) \nonumber\\
&1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(T_{\pi\delta} <T_{\beta\vert i}) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ There are two cases for the generation when $\delta$ branches off the family tree of $\beta,\gamma$: $$\begin{aligned}
\text{(1)} &\quad \tau(\{\beta,\gamma\}; \delta)=\vert \beta \wedge \delta\vert ;\nonumber\\
\text{(2)}& \quad \tau(\{\beta,\gamma\}; \delta)=\vert \gamma \wedge \delta\vert >\vert \beta \wedge \delta\vert .\end{aligned}$$ Let $I_4^{(b,1)}$ (resp. $I_4^{(b,2)}$) denote the contribution to $I_4^{(b)}$ from case (1) (resp. case (2)), for which we refer the reader to Figure [4](#fig4){reference-type="ref" reference="fig4"}.\
![[\[fig4\]]{#fig4 label="fig4"} Two cases for $I_4^{(b)}$. ](I4.png){#fig4 width="1 \\textwidth"}
Since $T_{\pi\delta} <T_{\beta\vert i}$, we may set $\delta\wedge \beta=\beta\vert j$ for some $j\leq i$. Let $\vert \delta\vert =j+n$ for some $n\geq 0$. It follows that $$\begin{aligned}
\label{9ea1.82}
I_4^{(b,1)}\leq &\frac{CX_0^0(1)}{\psi(N)}\sum_{k=0}^\infty \sum_{\substack{\alpha\geq i_0,\\ \vert \alpha\vert =k}} \sum_{l=1}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta\geq \alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq \alpha,\\ \vert \gamma\vert =k+m}} \sum_{i=k}^{k+l-1} \sum_{j=0}^i \sum_{n=0}^\infty \sum_{\substack{\delta\geq \beta\vert j,\\ \vert \delta\vert =j+n}} \nonumber\\
\mathbb{E}\Big(&1(T_{\alpha}\leq t) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(T_{\pi\beta}<T_{\alpha}+\tau_N)1(T_{\pi\delta}-T_{\beta\vert j}<2t) \nonumber\\
& 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ where we have replaced $T_{\pi\delta}<T_{\beta\vert i}$ by $T_{\pi\delta}-T_{\beta\vert j}<T_{\beta}<T_\alpha+\tau_N\leq 2t$. Similar to [\[9ea1.53\]](#9ea1.53){reference-type="eqref" reference="9ea1.53"}, the expectation above is at most $$\begin{aligned}
\label{9ea1.80}
&\mathbb{E}\Big(1(T_{\alpha}\leq t) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\delta}-T_{\beta\vert j} <2t)\Big) \nonumber\\
&(\frac{N+\theta}{2N+\theta})^{k+n+l+m} \mathbb{E}^{\beta,\gamma, \delta}\Big(1(\sqrt{N}(\hat{B}^\beta-\hat{B}^\gamma)\in [-2,2]^d) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where we have used [\[9ea2.82\]](#9ea2.82){reference-type="eqref" reference="9ea2.82"} to bound $1(B^\beta-B^\gamma\in \mathcal{N}_N)$. For the first expectation, we condition on $\mathcal{H}_{\beta}$ to get $$\begin{aligned}
&\mathbb{E}\Big(1(T_{\pi\gamma}-T_{\alpha}<\tau_N)1(T_{\pi\delta}-T_{\beta\vert j} <2t) \Big\vert \mathcal{H}_\beta\Big)\nonumber\\
&=\Pi((2N+\theta)\tau_N, m-1) \cdot \Pi(2(2N+\theta)t, n-1).\end{aligned}$$ Therefore the first expectation in [\[9ea1.80\]](#9ea1.80){reference-type="eqref" reference="9ea1.80"} is equal to $$\begin{aligned}
\label{9ea3.81}
&\mathbb{E}\Big(1(\Gamma_{k+1}\leq (2N+\theta)t) 1(\Gamma_{k+l}<\Gamma_{k+1}+(2N+\theta)\tau_N) \Big)\nonumber\\
& \times \Pi((2N+\theta)\tau_N, m-1) \cdot \Pi(2(2N+\theta)t, n-1).\end{aligned}$$ Turning to the second expectation in [\[9ea1.80\]](#9ea1.80){reference-type="eqref" reference="9ea1.80"}, similar to the derivation of [\[9ea1.51\]](#9ea1.51){reference-type="eqref" reference="9ea1.51"}, we condition on $\mathcal{H}_{\beta\vert (i+1)}\vee \mathcal{H}_\delta$ to get $$\begin{aligned}
\label{9ea3.82}
& \mathbb{E}^{\beta, \gamma,{\delta}}\Big(1(\sqrt{N}(\hat{B}^\beta-\hat{B}^\gamma)\in [-2,2]^d) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big)\nonumber\\
&\leq \frac{C}{(k+l-i+m+1)^{d/2}} \mathbb{P}^{\beta, \gamma,{\delta}}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta),\end{aligned}$$ and then bound the last probability above by $$\begin{aligned}
\label{9ea3.83}
&\mathbb{P}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta)=\frac{1}{\psi(N)} \mathbb{P}(B^{\beta\vert i}-B^\delta\in \mathcal{N}_N)\nonumber\\
&= \frac{1}{\psi(N)} \mathbb{P}((B^{\beta\vert i}-B^{\beta\vert j})-(B^\delta-B^{\beta\vert j})\in \mathcal{N}_N)\leq \frac{1}{\psi(N)}\frac{C}{(1+n+i-j)^{d/2}}.\end{aligned}$$ Apply [\[9ea3.81\]](#9ea3.81){reference-type="eqref" reference="9ea3.81"}, [\[9ea3.82\]](#9ea3.82){reference-type="eqref" reference="9ea3.82"} and [\[9ea3.83\]](#9ea3.83){reference-type="eqref" reference="9ea3.83"} to bound [\[9ea1.80\]](#9ea1.80){reference-type="eqref" reference="9ea1.80"} by $$\begin{aligned}
&(\frac{N+\theta}{2N+\theta})^{k+n+l+m} \frac{C}{(k+l-i+m+1)^{d/2}}\frac{1}{\psi(N)} \frac{1}{(1+n+i-j)^{d/2}}\nonumber\\
&\mathbb{E}\Big(1(\Gamma_{k+1}\leq (2N+\theta)t) 1(\Gamma_{k+l}<\Gamma_{k+1}+(2N+\theta)\tau_N) \Big)\nonumber\\
&\cdot \Pi((2N+\theta)\tau_N, m-1) \cdot \Pi(2(2N+\theta)t, n-1),\end{aligned}$$ thus giving $$\begin{aligned}
I_4^{(b,1)}\leq &\frac{C}{\psi(N)^2}X_0^0(1)\sum_{k=0}^\infty \sum_{l=1}^\infty \sum_{m=0}^\infty \sum_{i=k}^{k+l-1} \sum_{j=0}^i \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+n+l+m} \nonumber\\
& \cdot \frac{1}{(k+l-i+m+1)^{d/2}} \frac{1}{(1+n+i-j)^{d/2}}\nonumber\\
&\cdot \Pi((2N+\theta)\tau_N, m-1) \cdot \Pi(2(2N+\theta)t, n-1)\nonumber\\
&\cdot \mathbb{E}\Big(1(\Gamma_{k+l}\leq 2(2N+\theta)t) 1(\Gamma_{k+l}<\Gamma_{k+1}+(2N+\theta)\tau_N) \Big),\end{aligned}$$ where we also replace $1\{\Gamma_{k+1}\leq (2N+\theta)t\}$ by $1\{\Gamma_{k+l}<(2N+\theta)t+(2N+\theta)\tau_N\leq 2(2N+\theta)t\}$. By Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}, the sum of $n$ gives at most ${C }/{(1+i-j)^{d/2-1}}$, and the sum of $m$ gives at most ${C }/{(1+k+l-i)^{d/2-1}}$. Next, the sum of $j$ is bounded by $$\begin{aligned}
\sum_{j=0}^i \frac{C}{(1+i-j)^{d/2-1}}\leq CI(i)\leq CI(l+k).\end{aligned}$$ The sum of $i$ gives $$\begin{aligned}
\sum_{i=k}^{k+l-1} \frac{C}{(1+k+l-i)^{d/2-1}}\leq CI(l)\leq CI(l+k).\end{aligned}$$ We are left with $$\begin{aligned}
I_4^{(b,1)}\leq &\frac{C}{\psi(N)^2}X_0^0(1)\sum_{k=0}^\infty \sum_{l=1}^\infty I(k+l)^2 (\frac{2N+2\theta}{2N+\theta})^{k+l} \nonumber\\
&\mathbb{E}\Big(1(\Gamma_{k+l}\leq 2(2N+\theta)t) 1(\Gamma_{k+l}<\Gamma_{k+1}+(2N+\theta)\tau_N) \Big).\end{aligned}$$ Rewrite the sum of $k,l$ by letting $n=k+l$ to see $$\begin{aligned}
I_4^{(b,1)}\leq &\frac{C}{\psi(N)^2}X_0^0(1)\sum_{n=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} I(n)^2 \mathbb{E}\Big(1(\Gamma_{n}\leq 2(2N+\theta)t) \nonumber\\
& \sum_{k=0}^{n} 1(\Gamma_{n}-\Gamma_{k+1}<(2N+\theta)\tau_N) \Big).\end{aligned}$$ Now apply Lemma [Lemma 38](#9l8.4){reference-type="ref" reference="9l8.4"} with $m=2$ to conclude that $$\begin{aligned}
I_4^{(b,1)}\leq &\frac{C}{\psi(N)^2}X_0^0(1)\cdot CN^2\tau_N I(N)^2\leq CX_0^0(1) \tau_N\to 0.\end{aligned}$$
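As a small bookkeeping remark, the reindexing $n=k+l$ performed above (and used again below for $I_4^{(b,2)}$) is simply the rearrangement of a double series with nonnegative terms: for any array $a_{k,n}\geq 0$, $$\begin{aligned}
\sum_{k=0}^{\infty}\sum_{l=1}^{\infty}a_{k,k+l}=\sum_{n=1}^{\infty}\sum_{k=0}^{n-1}a_{k,n}\leq \sum_{n=1}^{\infty}\sum_{k=0}^{n}a_{k,n}.\end{aligned}$$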
Turning to $I_4^{(b,2)}$, we set $\delta\wedge \gamma=\gamma\vert j$ for some $k+1\leq j\leq k+m$, and let $\vert \delta\vert =j+n$ for some $n\geq 0$. See Figure [4](#fig4){reference-type="ref" reference="fig4"}. It follows that $$\begin{aligned}
\label{9ea2.02}
I_4^{(b,2)}\leq &\frac{C}{\psi(N)}X_0^0(1)\sum_{k=0}^\infty \sum_{\substack{\alpha\geq i_0,\\ \vert \alpha\vert =k}} \sum_{l=1}^\infty \sum_{m=0}^\infty\sum_{\substack{\beta>\alpha,\\ \vert \beta\vert =k+l}}\sum_{\substack{ \gamma\geq\alpha,\\ \vert \gamma\vert =k+m }} \sum_{i=k}^{k+l-1} \sum_{j=k+1}^{k+m} \sum_{n=0}^\infty \sum_{\substack{\delta\geq \gamma\vert j,\\ \vert \delta\vert =j+n}} \nonumber\\
\mathbb{E}\Big(&1(T_{\alpha}\leq t) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(T_{\pi\delta} <T_{\alpha}+\tau_N) \nonumber\\
& 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where we have replaced $T_{\pi\delta}<T_{\beta\vert i}\leq T_\beta<T_\alpha+\tau_N$. As in [\[9ea1.80\]](#9ea1.80){reference-type="eqref" reference="9ea1.80"}, the expectation above is at most $$\begin{aligned}
\label{9ea1.90}
& (\frac{N+\theta}{2N+\theta})^{k+n+l+m} \mathbb{E}\Big(1(T_{\alpha}\leq t) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) \nonumber\\
& 1(T_{\pi\gamma}<T_{\gamma\vert j}+\tau_N)1(T_{\pi\delta} <T_{\gamma\vert j}+\tau_N) 1(T_{\gamma\vert (j-1)}<T_\alpha+\tau_N)\Big) \nonumber\\
& \mathbb{E}^{\beta,\gamma, \delta}\Big(1(\sqrt{N}(\hat{B}^\beta-\hat{B}^\gamma)\in [-2,2]^d) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big),\end{aligned}$$ where we also use $T_{\alpha} \leq T_{\gamma\vert j}$ and $T_{\gamma\vert (j-1)}\leq T_{\pi\gamma}<T_\alpha+\tau_N$. For the first expectation above, we condition on $\mathcal{H}_\beta \vee \mathcal{H}_{\gamma\vert j}$ to get $$\begin{aligned}
&\mathbb{E}\Big(1(T_{\pi\gamma}-T_{\gamma\vert j}<\tau_N)1(T_{\pi\delta}-T_{\gamma\vert j} <\tau_N) \Big\vert \mathcal{H}_\beta \vee \mathcal{H}_{\gamma\vert j}\Big)\nonumber\\
&=\Pi((2N+\theta)\tau_N, k+m-j-1) \cdot \Pi((2N+\theta)\tau_N, n-1).\end{aligned}$$ Next, we condition on $\mathcal{H}_\beta$ to see $$\begin{aligned}
& \mathbb{E}\Big(1(T_{\gamma\vert (j-1)}<T_\alpha+\tau_N) \Big\vert \mathcal{H}_\beta \Big)=\Pi((2N+\theta)\tau_N, j-k-1).\end{aligned}$$ We conclude the first expectation in [\[9ea1.90\]](#9ea1.90){reference-type="eqref" reference="9ea1.90"} is equal to $$\begin{aligned}
\label{9ea2.01}
\mathbb{E}\Big(&1(\Gamma_{k+1}\leq (2N+\theta)t) 1(\Gamma_{k+l}-\Gamma_{k+1}<(2N+\theta)\tau_N)\Big)\\
&\cdot \Pi((2N+\theta)\tau_N, k+m-j-1) \cdot\Pi((2N+\theta)\tau_N, n-1)\cdot \Pi((2N+\theta)\tau_N, j-k-1).\nonumber\end{aligned}$$ Turning to the second expectation in [\[9ea1.90\]](#9ea1.90){reference-type="eqref" reference="9ea1.90"}, since $\mathcal{H}_\delta$ includes the information from $\mathcal{H}_{\gamma\vert j}$, similar to [\[e4.33\]](#e4.33){reference-type="eqref" reference="e4.33"} we may get that $$\begin{aligned}
& \mathbb{E}^{\beta,\gamma, \delta}\Big(1(\sqrt{N}(\hat{B}^\beta- \hat{B}^\gamma)\in [-2,2]^d) \Big\vert \mathcal{H}_{\beta\vert (i+1)}\vee \mathcal{H}_\delta \Big)\nonumber\\
= & \mathbb{P}\Big(\sqrt{N} \sum_{r=i+1}^{k+l-1} \hat{W}^{\beta\vert r} -\sqrt{N} \sum_{r=j}^{k+m-1} \hat{W}^{\gamma\vert r} \in [-2,2]^d-\sqrt{N} \sum_{r=k+1}^{i} \hat{W}^{\beta\vert r}-\sqrt{N} \sum_{r=k+1}^{j-1} \hat{W}^{\gamma\vert r} \Big)\nonumber\\
\leq& \frac{C}{(1+(k+l-i)+(k+m-j))^{d/2}}, \end{aligned}$$ where the last inequality follows from Lemma [Lemma 15](#9l4.2){reference-type="ref" reference="9l4.2"}, thus giving $$\begin{aligned}
& \mathbb{E}^{\beta,\gamma, \delta}\Big(1(\sqrt{N}(\hat{B}^\beta-\hat{B}^\gamma)\in [-2,2]^d) 1(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big)\\
&\leq \frac{C}{(1+(k+l-i)+(k+m-j))^{d/2}} \mathbb{P}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta).\end{aligned}$$ The last probability above can be bounded by $$\begin{aligned}
&\mathbb{P}(B^{\beta\vert i}+W^{\beta\vert i}=B^\delta)=\frac{1}{\psi(N)} \mathbb{P}(B^{\beta\vert i}-B^\delta\in \mathcal{N}_N)\\
&= \frac{1}{\psi(N)} \mathbb{P}\Big((B^{\beta\vert i}-B^\alpha)-(B^\delta-B^\alpha)\in \mathcal{N}_N\Big)\nonumber\\
&\leq \frac{1}{\psi(N)}\frac{C}{(1+(i-k)+(n+j-k))^{d/2}}\leq \frac{1}{\psi(N)}\frac{C}{(1+n+j-k)^{d/2}},\nonumber\end{aligned}$$ where the last inequality uses $i\geq k$. Conclude from the above that the second expectation in [\[9ea1.90\]](#9ea1.90){reference-type="eqref" reference="9ea1.90"} is bounded by $$\begin{aligned}
\label{9ea1.91}
& \frac{C}{\psi(N)} \frac{1}{(1+(k+l-i)+(k+m-j))^{d/2}} \frac{1}{(1+n+j-k)^{d/2}}.\end{aligned}$$ Combine [\[9ea1.90\]](#9ea1.90){reference-type="eqref" reference="9ea1.90"}, [\[9ea2.01\]](#9ea2.01){reference-type="eqref" reference="9ea2.01"} and [\[9ea1.91\]](#9ea1.91){reference-type="eqref" reference="9ea1.91"} to see [\[9ea2.02\]](#9ea2.02){reference-type="eqref" reference="9ea2.02"} becomes $$\begin{aligned}
\label{9ec6.81}
I_4^{(b,2)}\leq &\frac{C}{\psi(N)^2}X_0^0(1)\sum_{k=0}^\infty \sum_{l=1}^\infty \sum_{m=0}^\infty \sum_{i=k}^{k+l-1} \sum_{j=k+1}^{k+m} \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+n+l+m} \nonumber\\
& \frac{1}{(1+(k+l-i)+(k+m-j))^{d/2}} \frac{1}{(1+n+j-k)^{d/2}}\cdot \Pi((2N+\theta)\tau_N, n-1)\nonumber\\
& \cdot \Pi((2N+\theta)\tau_N, j-k-1)\cdot \Pi((2N+\theta)\tau_N, k+m-j-1) \nonumber\\
& \mathbb{E}\Big(1(\Gamma_{k+1}\leq (2N+\theta)t) 1(\Gamma_{k+l}-\Gamma_{k+1}<(2N+\theta)\tau_N)\Big).\end{aligned}$$ Rewrite the above by letting $i=i-k$ and $j=j-k$ to get $$\begin{aligned}
\label{9ea2.03}
I_4^{(b,2)}\leq &\frac{C}{\psi(N)^2}X_0^0(1)\sum_{k=0}^\infty \sum_{l=1}^\infty \sum_{m=0}^\infty \sum_{i=0}^{l-1} \sum_{j=1}^{m} \sum_{n=0}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+n+l+m} \nonumber\\
& \frac{1}{(1+(l-i)+(m-j))^{d/2}} \frac{1}{(1+n+j)^{d/2}}\cdot \Pi((2N+\theta)\tau_N, n-1)\nonumber\\
& \cdot \Pi((2N+\theta)\tau_N, j-1)\cdot \Pi((2N+\theta)\tau_N, m-j-1) \nonumber\\
& \mathbb{E}\Big(1(\Gamma_{k+l}\leq 2(2N+\theta)t) 1(\Gamma_{k+l}-\Gamma_{k+1}<(2N+\theta)\tau_N)\Big),\end{aligned}$$ where we also replace $1\{\Gamma_{k+1}\leq (2N+\theta)t\}$ by $1\{\Gamma_{k+l}<(2N+\theta)t+(2N+\theta)\tau_N\leq 2(2N+\theta)t\}$. The sum of $n$ gives $$\begin{aligned}
\sum_{n=0}^\infty &(\frac{2N+2\theta}{2N+\theta})^{n} \Pi((2N+\theta)\tau_N, n-1) \frac{1}{(1+n+j)^{d/2}}\leq \frac{C}{(1+j)^{d/2-1}},\end{aligned}$$ where the inequality follows by Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"}. Then the sum of $m,j$ gives $$\begin{aligned}
& \sum_{m=0}^\infty \sum_{j=1}^{m} (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)\tau_N, j-1) \frac{C }{(1+j)^{d/2-1}}\nonumber\\
&\cdot \Pi((2N+\theta)\tau_N, m-j-1) \frac{1}{(1+(l-i)+(m-j))^{d/2}}.\end{aligned}$$ By Fubini's theorem, the above sum is equal to $$\begin{aligned}
&C \sum_{j=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{j} \Pi((2N+\theta)\tau_N, j-1) \frac{1}{(1+j)^{d/2-1}} \nonumber\\
& \times \sum_{m=j}^{\infty} (\frac{2N+2\theta}{2N+\theta})^{m-j}\Pi((2N+\theta)\tau_N, m-j-1) \frac{1}{(1+(l-i)+(m-j))^{d/2}}\nonumber\\
=&C \sum_{j=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{j}\Pi((2N+\theta)\tau_N, j-1) \frac{1}{(1+j)^{d/2-1}} \nonumber\\
& \times \sum_{m=0}^{\infty} (\frac{2N+2\theta}{2N+\theta})^{m} \Pi((2N+\theta)\tau_N, m-1) \frac{1}{(1+(l-i)+m)^{d/2}}\nonumber\\
\leq& CI(N) \cdot\frac{C}{(1+l-i)^{d/2-1}},\end{aligned}$$ where we have used Lemma [Lemma 32](#9l8.7){reference-type="ref" reference="9l8.7"} to bound the sum of $j$, and Lemma [Lemma 17](#9l4.0){reference-type="ref" reference="9l4.0"} to bound the sum of $m$.
Now conclude from the above that [\[9ec6.81\]](#9ec6.81){reference-type="eqref" reference="9ec6.81"} becomes $$\begin{aligned}
I_4^{(b,2)}\leq &\frac{CI(N)}{\psi(N)^2}X_0^0(1)\sum_{k=0}^\infty \sum_{l=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{k+l} \sum_{i=0}^{l-1} \frac{1}{(1+l-i)^{d/2-1}} \nonumber\\
& \mathbb{E}\Big(1(\Gamma_{k+l}\leq 2(2N+\theta)t) 1(\Gamma_{k+l}-\Gamma_{k+1}<(2N+\theta)\tau_N)\Big).\end{aligned}$$ The sum of $i$ gives at most $I(l)\leq I(l+k)$ and we are left with $$\begin{aligned}
I_4^{(b,2)}&\leq \frac{CI(N)}{\psi(N)^2}X_0^0(1)\sum_{k=0}^\infty \sum_{l=1}^\infty I(k+l) (\frac{2N+2\theta}{2N+\theta})^{k+l} \nonumber\\
& \mathbb{E}\Big(1(\Gamma_{k+l}\leq 2(2N+\theta)t) 1(\Gamma_{k+l}<\Gamma_{k+1}+(2N+\theta)\tau_N) \Big).\end{aligned}$$ Rewrite the above sum of $k,l$ by letting $n=k+l$ to see $$\begin{aligned}
I_4^{(b,2)}\leq &\frac{CI(N)}{\psi(N)^2}X_0^0(1) \sum_{n=1}^\infty (\frac{2N+2\theta}{2N+\theta})^{n} I(n) \mathbb{E}\Big(1(\Gamma_{n}\leq 2(2N+\theta)t) \nonumber\\
& \sum_{k=0}^{n} 1(\Gamma_{n}-\Gamma_{k+1}<(2N+\theta)\tau_N) \Big).\end{aligned}$$ Now apply Lemma [Lemma 38](#9l8.4){reference-type="ref" reference="9l8.4"} with $m=1$ to conclude $$\begin{aligned}
I_4^{(b,2)}\leq &\frac{CI(N)}{\psi(N)^2}X_0^0(1) CN^2\tau_N I(N)\leq CX_0^0(1) \tau_N\to 0.\end{aligned}$$ The proof is complete.
## Proof of Lemma [Lemma 37](#9l10.1){reference-type="ref" reference="9l10.1"}(ii) {#s9.2}
Since $B^\gamma=B^\gamma_{T_\gamma^-}\neq \Delta$, we must have $\zeta_\gamma^0=T_{\gamma}$ by its definition. If $m=0$, we cannot have both $\zeta_\gamma^0\in (T_\beta-\tau_N, T_\gamma)$ and $\zeta_\gamma^0=T_{\gamma}$ hold. So [\[9ea1.16\]](#9ea1.16){reference-type="eqref" reference="9ea1.16"} is trivial for $m=0$. It suffices to prove for $m\geq 1$. In order that $\zeta_\gamma^m \in (T_{\beta}-\tau_N, T_{\gamma})$, there has to be some $i<\vert \gamma\vert$ so that $$\begin{aligned}
T_{\gamma\vert i}>T_{\beta}-\tau_N, \quad B^{\gamma\vert i}+W^{\gamma\vert i} \in \overline{\mathcal{R}}^{X^{m-1}}_{T_{\gamma\vert i}^-}, \quad e_{\gamma\vert i}=\gamma_{i+1},\end{aligned}$$ meaning at time $T_{\gamma\vert i}$, the particle $\gamma\vert i$ gives birth to $\gamma\vert (i+1)$ which is displaced from $B^{\gamma\vert i}$ by a distance of $W^{\gamma\vert i}$. However, that location had already been visited by some particle $\delta$ in $X^{m-1}$ before time $T_{\gamma\vert i}$, or that location lies in $K_0^N$. Similar to [\[9ec9.53\]](#9ec9.53){reference-type="eqref" reference="9ec9.53"}, we may bound $1\{\zeta_\gamma^m \in (T_\beta-\tau_N, T_\gamma)\}$ by $$\begin{aligned}
&\sum_{i=0}^{\vert \gamma\vert -1} 1(T_{\gamma\vert i}>T_\beta-\tau_N) \sum_{\delta} 1(T_{\pi\delta} <T_{\gamma\vert i}, B^{\gamma\vert i}+W^{\gamma\vert i}=B^\delta)\nonumber\\
&\quad +\sum_{i=0}^{\vert \gamma\vert -1} 1(T_{\gamma\vert i}>T_{\beta}-\tau_N) 1(B^{\gamma\vert i}+W^{\gamma\vert i}\in K_0^N).\end{aligned}$$ Therefore the left-hand side term of [\[9ea1.16\]](#9ea1.16){reference-type="eqref" reference="9ea1.16"} is at most $$\begin{aligned}
\frac{1}{N\psi(N)} &\mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t) \sum_{\gamma} 1(T_{\pi\gamma}<T_\beta) 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(T_{\beta\wedge \gamma}>T_\beta-\tau_N)\nonumber\\
& \sum_{i=0}^{\vert \gamma\vert -1} 1(T_{\gamma\vert i}>T_\beta-\tau_N) 1(B^{\gamma\vert i}+W^{\gamma\vert i}\in K_0^N) \Big)\nonumber\\
+\frac{1}{N\psi(N)} &\mathbb{E}\Big(\sum_{\beta} 1(T_{ \beta} \leq t) \sum_{\gamma} 1(T_{\pi\gamma}<T_\beta) 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(T_{\beta\wedge \gamma}>T_\beta-\tau_N)\nonumber\\
& \sum_{i=0}^{\vert \gamma\vert -1} 1(T_{\gamma\vert i}>T_\beta-\tau_N) \sum_{\delta} 1(T_{\pi\delta} <T_{\gamma\vert i}, B^{\gamma\vert i}+W^{\gamma\vert i}=B^\delta) \Big):=J_0+J_1.\end{aligned}$$
First for $J_0$, if $i\leq \vert \beta\wedge \gamma\vert-1$, we have $\gamma\vert i=\beta\vert i$, so the sum for $i\leq \vert \beta\wedge \gamma\vert-1$ becomes $$\begin{aligned}
\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\beta} 1(T_{ \beta} \leq t)\sum_{\gamma}1(T_{\pi\gamma}<T_\beta) 1(B^\beta-B^\gamma\in \mathcal{N}_N) \nonumber\\
&1(T_{\gamma\wedge \beta}>T_\beta-\tau_N) \sum_{i=0}^{ \vert \beta\wedge \gamma\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big).\end{aligned}$$ The above can be bounded by $I_0^{(b)}$ from [\[9ea5.667\]](#9ea5.667){reference-type="eqref" reference="9ea5.667"} and so will converge to $0$.
Turning to $i\geq \vert \beta\wedge \gamma\vert$, we have $T_{\gamma\vert i}\geq T_{\beta\wedge \gamma}$. Hence $1(T_{\gamma\vert i}>T_\beta-\tau_N)$ is absorbed into $1(T_{\beta\wedge \gamma}>T_\beta-\tau_N)$. Let $\alpha=\beta\wedge \gamma$ to see the sum for $i\geq \vert \beta\wedge \gamma\vert$ in $J_0$ is at most $$\begin{aligned}
\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) \sum_{i=\vert \alpha\vert }^{\vert \gamma\vert -1} 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) \nonumber\\
& 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\gamma\vert i}+W^{\gamma\vert i}\in K_0^N) \Big),\end{aligned}$$ where we also use $T_{\alpha}\leq T_\beta\leq t$, $T_\beta< T_{\alpha}+\tau_N$ and $T_{\pi\beta}\leq T_\beta$. Exchange $\beta,\gamma$ to see the above equals $$\begin{aligned}
\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) \sum_{i=\vert \alpha\vert }^{\vert \beta\vert -1} 1(T_{\pi\beta}<T_{\alpha}+\tau_N) \nonumber\\
& 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(B^\beta-B^\gamma\in \mathcal{N}_N) 1(B^{\beta\vert i}+W^{\beta\vert i}\in K_0^N) \Big),\end{aligned}$$ reproducing the same expression as in [\[9ed1.661\]](#9ed1.661){reference-type="eqref" reference="9ed1.661"}. Hence the above converges to $0$ and we conclude $J_0\to 0$.
The case for $J_1$ is similar. If $i\leq \vert \beta\wedge \gamma\vert-1$, we get that the sum for $i\leq \vert \beta\wedge \gamma\vert-1$ is equal to $$\begin{aligned}
\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\beta} 1(T_{ \beta} \leq t)\sum_{\gamma}1(T_{\pi\gamma}<T_\beta) 1(B^\beta-B^\gamma\in \mathcal{N}_N) \nonumber\\
&1(T_{\gamma\wedge \beta}>T_\beta-\tau_N) \sum_{i=0}^{ \vert \beta\wedge \gamma\vert -1} 1(T_{\beta\vert i}>T_\beta-\tau_N) \nonumber\\
&\sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ We may bound the above by $I_2$ from [\[9e1.32\]](#9e1.32){reference-type="eqref" reference="9e1.32"}. Hence the above converges to $0$.
For $i\geq \vert \beta\wedge \gamma\vert$, again $1(T_{\gamma\vert i}>T_\beta-\tau_N)$ is absorbed into $1(T_{\beta\wedge \gamma}>T_\beta-\tau_N)$ and we may let $\alpha=\beta\wedge \gamma$ to see the sum for $i\geq \vert \beta\wedge \gamma\vert$ in $J_1$ is at most $$\begin{aligned}
\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) \sum_{i=\vert \alpha\vert }^{\vert \gamma\vert -1} 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) 1(T_{\pi\beta}<T_{\alpha}+\tau_N) \nonumber\\
&1(B^\beta-B^\gamma\in \mathcal{N}_N) \sum_{\delta} 1(T_{\pi\delta} <T_{\gamma\vert i}, B^{\gamma\vert i}+W^{\gamma\vert i}=B^\delta) \Big).\end{aligned}$$ Exchange $\beta,\gamma$ to see the above equals $$\begin{aligned}
\frac{1}{N\psi(N)} \mathbb{E}\Big(&\sum_{\alpha}\sum_{\beta\geq\alpha} \sum_{\gamma\geq \alpha}1(T_{\alpha} \leq t) \sum_{i=\vert \alpha\vert }^{\vert \beta\vert -1} 1(T_{\pi\beta}<T_{\alpha}+\tau_N) 1(T_{\pi\gamma}<T_{\alpha}+\tau_N) \nonumber\\
&1(B^\beta-B^\gamma\in \mathcal{N}_N) \sum_{\delta} 1(T_{\pi\delta} <T_{\beta\vert i}, B^{\beta\vert i}+W^{\beta\vert i}=B^\delta) \Big).\end{aligned}$$ This is the same expression as in [\[9ea2.81\]](#9ea2.81){reference-type="eqref" reference="9ea2.81"}. Hence the above converges to $0$ and we conclude $J_1\to 0$. The proof is complete.
# References
M. Bramson, R. Durrett, and G. Swindle. Statistical Mechanics of Crabgrass. **17**, no. 2, 444--481, (1989).
R. Durrett and E. Perkins. Rescaled contact processes converge to super-Brownian motion in two or more dimensions. **114**, 309--399, (1999).
S. Frei and E. Perkins. A lower bound for $p_c$ in range-$R$ bond percolation in two and three dimensions. **21**: no. 56, 1--22, (2016).
R. van der Hofstad and A. Sakai. Critical points for spread-out self-avoiding walk, percolation and the contact process above the upper critical dimensions. **132**: 438--470, (2005).
J. Hong. An upper bound for $p_c$ in range-$R$ percolation in two and three dimensions. **59 (3)**, 1259--1341, (2023).
J. Hong. A lower bound for $p_c$ in range-$R$ percolation in four, five and six dimensions. arXiv:2307.01466, 34 pp., (2023).
S. Lalley. Spatial epidemics: critical behavior in one dimension. **144** (3--4), 429--469, (2009).
S. Lalley, E. Perkins and X. Zheng. A phase transition for measure-valued SIR epidemic processes. **42** (1): 237--310, (2014).
S. Lalley and X. Zheng. Spatial epidemics and local times for critical branching random walks in dimensions $2$ and $3$. **148** (3--4): 527--566, (2010).
M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, Cambridge, (2005).
C. Mueller and R. Tribe. Stochastic PDEs arising from the long range contact process and long range voter model. **102**: 519--546, (1995).
E.A. Perkins. Dawson-Watanabe Superprocesses and Measure-valued Diffusions. Springer, Berlin, (2002).
# Proofs of Lemmas [Lemma 7](#9l2.1){reference-type="ref" reference="9l2.1"}-[Lemma 10](#9l2.4){reference-type="ref" reference="9l2.4"} {#9a1}
Recall from [\[9e2.1\]](#9e2.1){reference-type="eqref" reference="9e2.1"} that $$\begin{aligned}
\label{9ec4.78}
M_t^n(\phi)&=\frac{1}{N} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n> T_{\pi \beta}) \phi(B^\beta) g_\beta\nonumber\\
&=\frac{1}{N} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta) g_\beta,\end{aligned}$$ where $g_\beta=\delta_\beta-\frac{\theta}{2N+\theta}$. Then it follows immediately from Lemma 3.3 and 3.4(a) of [@DP99] that $M_t^n(\phi)$ is an $\mathcal F_t$-martingale. One may also check $$\begin{aligned}
[M^n(\phi)]_t=\frac{1}{N^2} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta)^2 g_\beta^2.\end{aligned}$$ Since $\delta_\beta$ is independent of $\mathcal F_{T_\beta^-}$ by definition, we have $$\begin{aligned}
\mathbb{E}(g_\beta^2\vert \mathcal F_{T_\beta^-})=\mathbb{E}(g_\beta^2)=1-\varepsilon_N^2.\end{aligned}$$ By Lemma [Lemma 30](#9l6.1){reference-type="ref" reference="9l6.1"}, we get $$\begin{aligned}
Z_t:=\frac{1}{N^2} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta)^2 [g_\beta^2-(1-\varepsilon_N^2)]\end{aligned}$$ is a martingale. Apply Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} to further get $$\begin{aligned}
N_t:=&(1-\varepsilon_N^2)\frac{1}{N^2} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta)^2 \\
&-(1-\varepsilon_N^2) \frac{1}{N^2} \int_0^t (2N+\theta) \sum_{\beta} 1(T_{\pi \beta}<r< T_\beta, \zeta_\beta^n>r) \phi(B^\beta)^2\end{aligned}$$ is a martingale. Recalling $X_r^n(\phi^2)$ from [\[9e0.03\]](#9e0.03){reference-type="eqref" reference="9e0.03"}, we may conclude from the above that $$\begin{aligned}
[M^n(\phi)]_t=Z_t+N_t+(1-\varepsilon_N^2) \Big(2+\frac{\theta}{N}\Big) \int_0^t X_r^n(\phi^2) dr.\end{aligned}$$ Since both $Z_t$ and $N_t$ are martingales, we get $$\begin{aligned}
\langle M^n(\phi)\rangle_t=(1-\varepsilon_N^2) \Big(2+\frac{\theta}{N}\Big) \int_0^t X_r^n(\phi^2) dr.\end{aligned}$$ Turning to $\langle M^2(\phi)- M^1(\phi)\rangle_t$, we obtain from [\[9ec4.78\]](#9ec4.78){reference-type="eqref" reference="9ec4.78"} that $$\begin{aligned}
M_t^2(\phi)-M_t^1(\phi)=\frac{1}{N} \sum_{\beta} 1(T_\beta\leq t) 1(\zeta_\beta^1< T_{\beta}\leq \zeta_\beta^2) \phi(B^\beta) g_\beta.\end{aligned}$$ The arguments for deriving $\langle M^2(\phi)- M^1(\phi)\rangle_t$ follow in a similar way.
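For the reader's convenience, we also record the elementary computation behind $\mathbb{E}(g_\beta^2)=1-\varepsilon_N^2$ (a sketch, under the assumption -- consistent with the definitions of $g_\beta$ and $h_\beta$ used in this appendix -- that $\delta_\beta\in\{-1,1\}$ with $\mathbb{P}(\delta_\beta=1)=\frac{N+\theta}{2N+\theta}$ and that $\varepsilon_N=\frac{\theta}{2N+\theta}$): $$\begin{aligned}
\mathbb{E}(\delta_\beta)=\frac{N+\theta}{2N+\theta}-\frac{N}{2N+\theta}=\frac{\theta}{2N+\theta}=\varepsilon_N, \qquad \mathbb{E}(g_\beta^2)=\mathrm{Var}(\delta_\beta)=\mathbb{E}(\delta_\beta^2)-\varepsilon_N^2=1-\varepsilon_N^2.\end{aligned}$$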
Recall from [\[9e2.1\]](#9e2.1){reference-type="eqref" reference="9e2.1"} that $$\begin{aligned}
\label{9ec6.07}
D_t^{n,1}(\phi)=\frac{\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta).\end{aligned}$$ By Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"}, we have $$\begin{aligned}
\label{9ec5.07}
M_t:=&D_t^{n,1}(\phi)- \frac{\theta}{N} \int_0^t \sum_{\beta} 1(T_{\pi \beta}<r<T_\beta, \zeta_\beta^n>r) \phi(B^\beta) dr\nonumber\\
=&D_t^{n,1}(\phi)-\theta\int_0^t X_r^n(\phi) dr\end{aligned}$$ is a martingale. Notice that $$\begin{aligned}
[M]_t=\Big(\frac{\theta}{2N+\theta}\Big)^2 \frac{1}{N^2} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \phi(B^\beta)^2.\end{aligned}$$ Using Lemma [Lemma 13](#9l3.2){reference-type="ref" reference="9l3.2"} again, we get $$\begin{aligned}
&[M]_t- \frac{\theta^2}{(2N+\theta)N^2} \int_0^t \sum_{\beta} 1(T_{\pi \beta}<r<T_\beta, \zeta_\beta^n>r) \phi(B^\beta)^2 dr\nonumber\\
&=[M]_t-\frac{\theta^2}{(2N+\theta)N} \int_0^t X_r^n(\phi^2) dr\end{aligned}$$ is also a martingale, thus giving $\langle M\rangle_t=\frac{\theta^2}{(2N+\theta)N} \int_0^t X_r^n(\phi^2) dr$. By the $L^2$ maximal inequality, we get $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t} M_s^2 \Big)\leq \frac{4\theta^2}{(2N+\theta)N} \int_0^t \mathbb{E}(X_r^n(\phi^2)) dr\nonumber\\
\leq \frac{4\theta^2}{(2N+\theta)N} \|\phi\|_\infty^2 \int_0^t e^{\theta r} X_0^0(1) dr\to 0,\end{aligned}$$ where the second inequality uses Lemma [Lemma 6](#9l2.0){reference-type="ref" reference="9l2.0"}. By applying the Cauchy-Schwarz inequality in the above, we conclude that the proof is complete by [\[9ec5.07\]](#9ec5.07){reference-type="eqref" reference="9ec5.07"}.
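The $L^2$ maximal inequality invoked above is Doob's inequality applied to the martingale $M$ (assuming $M_0=0$, i.e. that no branching event occurs at time $0$): $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t} M_s^2\Big)\leq 4\,\mathbb{E}(M_t^2)=4\,\mathbb{E}(\langle M\rangle_t)=\frac{4\theta^2}{(2N+\theta)N} \int_0^t \mathbb{E}(X_r^n(\phi^2))\, dr.\end{aligned}$$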
Recall from [\[9e2.1\]](#9e2.1){reference-type="eqref" reference="9e2.1"} that $$\begin{aligned}
D_t^{n,2}(\phi)=&\frac{N+\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \nabla_\beta\phi,\end{aligned}$$ where $\nabla_\beta\phi=\phi(B^\beta+W^\beta) -\phi(B^\beta)$. Using Taylor's theorem with expansion at $B^\beta$, $\mathbb{E}(W_\beta^i)=0$ and $\mathbb{E}(W_\beta^i W_\beta^j)=0$ for $i\neq j$, one may obtain $$\begin{aligned}
&\mathbb{E}(\nabla_\beta\phi \vert \mathcal F_{T_\beta^-})=\frac{1}{2} \sum_{i=1}^d \phi_{ii}(B^\beta) \mathbb{E}[(W_\beta^i)^2]+R_N^\beta(\omega),\\
&R_N^\beta(\omega)=\frac{1}{2}\mathbb{E}\Big(\sum_{1\leq i, j\leq d} (\phi_{ij}(v(\omega))-\phi_{ij}(B^\beta)) W_\beta^i W_\beta^j \Big\vert \mathcal F_{T_\beta^-}\Big),\end{aligned}$$ where $\phi_{ij}$ are the partial derivatives of $\phi$ and $v(\omega)$ is a point on the line segment from $B^\beta$ to $B^\beta+W^\beta$. Since $\phi \in C_b^3$ and $|W^\beta|\leq CN^{-1/2}$, we get $|\phi_{ij}(v(\omega))-\phi_{ij}(B^\beta)|\leq CN^{-1/2}$ and $$\begin{aligned}
\label{9ec7.45}
R_N^\beta(\omega)\leq CN^{-1/2} (CN^{-1/2})^2\leq CN^{-3/2}.\end{aligned}$$ Notice $\sqrt{N} W_\beta^i$ converges weakly to a random variable uniformly distributed on $[-1,1]$ whose second moment is $1/3$. We conclude from the above that $$\begin{aligned}
&\mathbb{E}(\nabla_\beta\phi \vert \mathcal F_{T_\beta^-})=\Big(\frac{1}{6}+\delta_N\Big) N^{-1}\Delta \phi(B^\beta) +R_N^\beta(\omega),\end{aligned}$$ where $\delta_N\to 0$ as $N\to\infty$. Define $$\begin{aligned}
G_\beta=1(T_\beta\leq \zeta_\beta^n)\Big(\nabla_\beta\phi-\mathbb{E}(\nabla_\beta\phi \vert \mathcal F_{T_\beta^-})\Big).\end{aligned}$$ Then $\mathbb{E}(G_\beta|\mathcal F_{T_\beta^-})=0$ and $|G_\beta|\leq CN^{-1/2} 1(T_\beta\leq \zeta_\beta^0)$ by $\phi\in C_b^3$. Apply Lemma [Lemma 30](#9l6.1){reference-type="ref" reference="9l6.1"} to see $$\begin{aligned}
M_t=\frac{N+\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} 1(T_\beta\leq t) G_\beta \text{ is a martingale,}\end{aligned}$$ and $$\begin{aligned}
\label{9ec7.05}
\mathbb{E}\Big(\sup_{s\leq t} M_s^2\Big) \leq C\frac{1}{N^2} \mathbb{E}\Big(\sum_\beta 1(T_\beta \leq t) CN^{-1} 1(T_\beta\leq \zeta_\beta^0)\Big).\end{aligned}$$ By Lemma 3.4 (b) of [@DP99], we have $$\begin{aligned}
\label{9ec7.42}
\mathbb{E}\Big(\sum_\beta 1(T_\beta \leq t) 1(T_\beta\leq \zeta_\beta^0)\Big)\leq CN^2 X_0^0(1). \end{aligned}$$ Apply the above in [\[9ec7.05\]](#9ec7.05){reference-type="eqref" reference="9ec7.05"} and then use the Cauchy-Schwarz inequality to get $$\begin{aligned}
\label{9ec7.50}
\mathbb{E}\Big(\sup_{s\leq t} |M_s|\Big) \leq CN^{-1/2} \sqrt{X_0^0(1)}\to 0.\end{aligned}$$ Now rewrite $D_t^{n,2}(\phi)$ as $$\begin{aligned}
\label{9ec7.49}
&D_t^{n,2}(\phi)=M_t+E_t(\phi)+D_t^{n,3}(\phi),
\end{aligned}$$ where $$\begin{aligned}
&D_t^{n,3}(\phi)=\frac{1}{2N+\theta} \frac{1}{N} \Big(\frac{1}{6}+\delta_N\Big) \frac{N+\theta}{N} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) \Delta \phi(B^\beta), \nonumber\\
&E_t(\phi)=\frac{N+\theta}{2N+\theta} \frac{1}{N} \sum_{\beta} 1(T_\beta\leq t, \zeta_\beta^n\geq T_{\beta}) R_N^\beta(\omega).\nonumber
\end{aligned}$$ For $E_t(\phi)$, use [\[9ec7.45\]](#9ec7.45){reference-type="eqref" reference="9ec7.45"} and [\[9ec7.42\]](#9ec7.42){reference-type="eqref" reference="9ec7.42"} to get $$\begin{aligned}
\label{9ec7.56}
\mathbb{E}\Big(\sup_{s\leq t} |E_t(\phi)|\Big) \leq CN^{-1} CN^2 X_0^0(1) CN^{-3/2}\leq CN^{-1/2} X_0^0(1) \to 0.\end{aligned}$$ For $D_t^{n,3}(\phi)$, by comparing to $D_t^{n,1}(\phi)$ as in [\[9ec6.07\]](#9ec6.07){reference-type="eqref" reference="9ec6.07"}, we may apply Lemma [Lemma 8](#9l2.2){reference-type="ref" reference="9l2.2"} with $\phi=\Delta \phi$ to see $$\begin{aligned}
\label{9ec7.43}
\lim_{N\to \infty} \mathbb{E}\Big(\sup_{s\leq t} \Big\vert D_s^{n,3}(\phi)-\frac{N+\theta}{N} \Big(\frac{1}{6}+\delta_N\Big) \int_0^s X_r^n(\Delta \phi) dr\Big\vert \Big)=0.\end{aligned}$$ The proof is complete in view of [\[9ec7.50\]](#9ec7.50){reference-type="eqref" reference="9ec7.50"}, [\[9ec7.49\]](#9ec7.49){reference-type="eqref" reference="9ec7.49"}, [\[9ec7.56\]](#9ec7.56){reference-type="eqref" reference="9ec7.56"} and [\[9ec7.43\]](#9ec7.43){reference-type="eqref" reference="9ec7.43"}.
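For completeness, here is the bookkeeping behind the coefficient $\frac16+\delta_N$ above (a sketch: since $\vert W^\beta\vert \leq CN^{-1/2}$, the variables $\sqrt N W_\beta^i$ are uniformly bounded, so their weak convergence to a uniform law on $[-1,1]$ also yields convergence of the second moments): $$\begin{aligned}
N\,\mathbb{E}\big[(W_\beta^i)^2\big]\to \int_{-1}^{1}\frac{x^2}{2}\, dx=\frac13, \qquad\text{hence}\qquad \frac{1}{2} \sum_{i=1}^d \phi_{ii}(B^\beta)\, \mathbb{E}\big[(W_\beta^i)^2\big]=\Big(\frac{1}{6}+\delta_N\Big) N^{-1}\Delta \phi(B^\beta)\end{aligned}$$ with $\delta_N\to 0$, which is exactly the expression used in the definition of $D_t^{n,3}(\phi)$.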
Recall from [\[9e2.1\]](#9e2.1){reference-type="eqref" reference="9e2.1"} that $$\begin{aligned}
E_t^{n}(\phi)=&\frac{1}{N} \sum_{\beta} a_\beta^n(t) h_\beta \Big(\phi(B^\beta+W^\beta)1(B^\beta+W^\beta\notin \mathcal{R}^{X^{n-1}}_{T_\beta^-}) -\phi(B^\beta) \Big),\end{aligned}$$ where $h_\beta=1(\delta_\beta=1)-\frac{N+\theta}{2N+\theta}$. Since $\mathbb{E}(h_\beta\vert \mathcal F_{T_\beta^-})=0$ and $h_\beta$ is independent of $W^\beta$, we conclude from Lemma [Lemma 30](#9l6.1){reference-type="ref" reference="9l6.1"} that $E_t^{n}(\phi)$ is a martingale. By the $L^2$ maximal inequality, we get $$\begin{aligned}
\mathbb{E}\Big(\sup_{s\leq t} \vert E_s^{n}(\phi)\vert^2\Big)&\leq C\mathbb{E}([E^{n}(\phi)]_t)\nonumber\\
&\leq \frac{C}{N^2} \mathbb{E}\Big(\sum_{\beta} a_\beta^0(t) \Big(\phi(B^\beta+W^\beta)-\phi(B^\beta)\Big)^2\Big)\nonumber\\
&+ \frac{C}{N^2} \mathbb{E}\Big(\sum_{\beta} a_\beta^0(t) \phi(B^\beta)^2 1(B^\beta+W^\beta\in \mathcal{R}^{X^{n-1}}_{T_\beta^-})\Big). \end{aligned}$$ Using Lemma [Lemma 11](#9l2.5){reference-type="ref" reference="9l2.5"}, we may bound the second term by $$\begin{aligned}
&\frac{C}{N} \|\phi\|_\infty^2\mathbb{E}\Big(\frac{1}{N}\sum_{\beta} a_\beta^0(t) 1(B^\beta+W^\beta\in \mathcal{R}^{X^{0}}_{T_\beta^-})\Big)\\
&\leq \frac{C}{N} \|\phi\|_\infty^2 C(X_0^0(1)+X_0^0(1)^2) \to 0.\end{aligned}$$ Turning to the first term, we use $\phi \in C_b^3$ and recall $\eta_N$ from [\[9ec5.37\]](#9ec5.37){reference-type="eqref" reference="9ec5.37"} to get $$\begin{aligned}
\Big(\phi(B^\beta+W^\beta)-\phi(B^\beta)\Big)^2 \leq \eta_N^2\leq CN^{-1}.\end{aligned}$$ It follows that the first term is at most $$\begin{aligned}
\frac{C}{N^3} \mathbb{E}\Big(\sum_{\beta} a_\beta^0(t) \Big)\leq CN^{-1} X_0^0(1) \to 0,\end{aligned}$$ where the inequality is by [\[9ec7.42\]](#9ec7.42){reference-type="eqref" reference="9ec7.42"}.
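For completeness, the pointwise estimate used for the first term is also a direct consequence of facts already stated above ($\phi\in C_b^3$ and $\vert W^\beta\vert \leq CN^{-1/2}$), via the mean value theorem: $$\begin{aligned}
\big\vert \phi(B^\beta+W^\beta)-\phi(B^\beta)\big\vert \leq \Vert \nabla \phi\Vert_\infty \vert W^\beta\vert \leq CN^{-1/2}, \qquad\text{so}\qquad \big(\phi(B^\beta+W^\beta)-\phi(B^\beta)\big)^2\leq CN^{-1}.\end{aligned}$$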
---
abstract: |
This paper deals with the existence and multiplicity of solutions for the generalized $(p, q)$--Laplacian equation $$\begin{split}
&-{\rm div}(A(x, u)|\nabla u|^{p-2}\nabla u) +\frac1p A_t(x, u)|\nabla u|^p -{\rm div}(B(x, u)|\nabla u|^{q-2}\nabla u) \\
&\quad\qquad+\frac1q B_t(x, u)|\nabla u|^q + V(x)|u|^{p-2} u+ W(x)|u|^{q-2} u= g(x, u)\quad\qquad\mbox{ in } \mathbb R^N,
\end{split}$$ where $1<q\le p< N$, $A, B:\mathbb R^N\times\mathbb R\to\mathbb R$ are suitable $\mathcal{C}^1$--Carathéodory functions with $A_t(x, u)=\frac{\partial A}{\partial t}(x, u), B_t(x, u)=\frac{\partial B}{\partial t}(x, u)$, $V, W:\mathbb R^N\to\mathbb R$ are proper "weight functions\" and $g:\mathbb R^N\times\mathbb R\to\mathbb R$ is a Carathéodory map.\
Notwithstanding the fact that the occurrence of coefficients which depend on the solution itself makes the use of variational techniques more challenging, under suitable assumptions on the involved functions we are able to exploit the variational nature of our problem. In particular, the existence of a nontrivial solution is derived via a generalized version of the Ambrosetti--Rabinowitz Mountain Pass Theorem, based on a weaker version of the classical Cerami--Palais--Smale condition.\
Finally, the multiplicity result, which is completely new even in the simpler case $q=p$, is obtained under symmetry assumptions and a sharp decomposition of the ambient space.
author:
- |
Addolorata Salvatore\
Dipartimento di Matematica\
Università degli Studi di Bari Aldo Moro\
Via E. Orabona 4, 70125 Bari, Italy\
*addolorata.salvatore\@uniba.it*\
Caterina Sportelli\
Department of Mathematics and Statistics\
University of Western Australia\
35 Stirling Highway, WA 6009 Crawley, Australia\
*caterina.sportelli\@uwa.edu.au*
date: March 2023
title: "**On existence and multiplicity of solutions for generalized $(p, q)$--Laplacian equations on unbounded domains** [^1]"
---
*2020 Mathematics Subject Classification*. 35J20, 35J62, 35J92, 47J30, 58E30.\
*Key words*. $(p, q)$--Laplacian, weighted Sobolev spaces, weak Cerami--Palais--Smale condition, Ambrosetti--Rabinowitz condition, Mountain Pass theorem, nontrivial weak bounded solutions, multiple solutions.
# Introduction {#secintroduction}
In this paper we study the existence of solutions for the generalized $(p, q)$--Laplacian equation $$\label{main}
\begin{split}
&\hspace{-.12in}-{\rm div}(A(x, u)|\nabla u|^{p-2}\nabla u) +\frac1p A_t(x, u)|\nabla u|^p -{\rm div}(B(x, u)|\nabla u|^{q-2}\nabla u) \\
&+\frac1q B_t(x, u)|\nabla u|^q + V(x)|u|^{p-2} u+ W(x)|u|^{q-2} u= g(x, u)\quad\mbox{ in } \mathbb R^N,
\end{split}$$ where $1<q\le p\le N$, $A, B:\mathbb R^N\times\mathbb R\to\mathbb R$ are $\mathcal{C}^1$--Carathéodory functions with $A_t(x, u)=\frac{\partial A}{\partial t}(x, u), B_t(x, u)=\frac{\partial B}{\partial t}(x, u)$, $V, W:\mathbb R^N\to\mathbb R$ are suitable potentials in a sense discussed later and $g:\mathbb R^N\times\mathbb R\to\mathbb R$ is a Carathéodory function. In order to motivate the choice of this problem and highlight the relevance of our results, we present some examples which illustrate the equation covered in this paper.\
Firstly, we note that if $p=q$ equation [\[main\]](#main){reference-type="eqref" reference="main"} turns into the simpler one $$\label{Ceq}
\begin{split}
-{\rm div}(C(x, u)|\nabla u|^{p-2}\nabla u) &+\frac1p C_t(x, u)|\nabla u|^ p\\
&+ Z(x)|u|^{p-2} u= g(x, u)\quad\mbox{ in } \mathbb R^N,
\end{split}$$ where we set $C(x, u) =A(x, u)+B(x, u)$ and $Z(x) = V(x) +W(x)$. The existence of solutions for problem [\[Ceq\]](#Ceq){reference-type="eqref" reference="Ceq"} is related to the existence of solitary waves for the quasilinear Schrödinger equation $$\label{iSchr}
i\partial_t z = -\Delta z - \Delta l(|z|^2) l^{\prime}(|z|^2)z + U(x) z - k(x,|z|)z,
$$ with $x\in\mathbb R^N$, $t \ge 0$, where the solution $z(x,t)$ is complex in $\mathbb R^N\times \mathbb R_+$, while $U:\mathbb R^N\to\mathbb R$, $k:\mathbb R^N\times \mathbb R_+ \to\mathbb R$ and $l: \mathbb R_+ \to \mathbb R$ are real functions. Owing to its relevance in several fields of applied sciences, equation [\[iSchr\]](#iSchr){reference-type="eqref" reference="iSchr"} has been widely studied and nowadays still receives great attention. In fact, depending on the choice of the nonlinear term $l(s)$, it has been derived as a model for different phenomena arising in plasma physics, fluid mechanics, mechanics and condensed matter theory. The relation between stationary solutions of [\[iSchr\]](#iSchr){reference-type="eqref" reference="iSchr"} and the model equation [\[Ceq\]](#Ceq){reference-type="eqref" reference="Ceq"} is extensively examined in [@CSS2; @CSappl], where a detailed list of further references is proposed, too.\
Clearly, when $C(x, u)$ is constant, problem [\[Ceq\]](#Ceq){reference-type="eqref" reference="Ceq"} turns into $$-\Delta_p u +Z(x)|u|^{p-2} u =g(x, u)\quad\mbox{ in $\mathbb R^N$}$$ which has been widely investigated in the last decades (see, e.g. [@BW; @CDS; @CePaSo; @DiSz; @Ra] for the case $p=2$ and [@BaGuRo; @LZ] for the wider case $p>1$).\
Moreover, if $p\neq q$, the interest in the study of equation [\[main\]](#main){reference-type="eqref" reference="main"} is twofold. On the one hand, it is quite challenging from a mathematical viewpoint. On the other hand, equation [\[main\]](#main){reference-type="eqref" reference="main"} has several applications in the applied sciences. Observe that if $A(x, u) =B(x, u)\equiv 1$, [\[main\]](#main){reference-type="eqref" reference="main"} reduces to a classical $(p, q)$--Laplacian equation. In this case, a model of elementary particle physics was studied in [@BDF]. Furthermore, some classical results about $(p,q)$--Laplacian problems in bounded or unbounded domains can be found, for example, in [@AlFi; @Am; @BCS; @HeLi; @MuPa; @PaRaRe; @PoWa] and references therein.\
Moreover, we think it is worthwhile to highlight the connection with the well-known Born-Infeld equation $$\label{bi}
-{\rm div}\left(\frac{\nabla u}{\sqrt{1-\frac{1}{a^2}|\nabla u|^2}}\right) = f(u) \quad\mbox{ in } \mathbb R^N,$$ with $a\in\mathbb R$. Equation [\[bi\]](#bi){reference-type="eqref" reference="bi"} appears quite naturally in several fields such as electromagnetism and in relativity where it represents the mean curvature operator in Lorentz-Minkowski space. The relation between [\[main\]](#main){reference-type="eqref" reference="main"} and [\[bi\]](#bi){reference-type="eqref" reference="bi"} is shown via the first order approximation of the Taylor expansion $$\frac{1}{\sqrt{1-x}}= 1+\frac{x}{2}+\frac{3}{8}x^2 +\sum_{k=3}^\infty \binom{k-\frac12}{k} x^k\qquad\mbox{ for } |x|<1,$$ which makes equation [\[bi\]](#bi){reference-type="eqref" reference="bi"} turn into $$-\Delta u -\frac{1}{2a^2}\Delta_4 u \ = \ f(u) \quad\mbox{ in $\mathbb R^N$}$$ and exhibits a particular case of [\[main\]](#main){reference-type="eqref" reference="main"} with $p=4$, $q=2$, $A(x, u)=\frac{1}{2a^2}$, $B(x, u)= 1$ and $V(x)=W(x)\equiv 0$.\
In general, equation [\[main\]](#main){reference-type="eqref" reference="main"} has been used to model steady--state solutions of reaction--diffusion problems arising in biophysics, in plasma physics and in the study of chemical reaction design. The prototype for these models can be written in the form $$-\Delta_p u -\Delta_q u +|u|^{p-2} u +|u|^{q-2} u= f(x, u)\quad\mbox{ in $\mathbb R^N$},$$ which originates from a general reaction--diffusion system $$u_t = {\rm div}(D(u)\nabla u) + g(x, u), \qquad \mbox{ with }\quad D(u) =|\nabla u|^{p-2} +|\nabla u|^{q-2}.$$ In the above-mentioned settings, the function $u$ generally stands for a concentration, the term ${\rm div}(D(u)\nabla u)$ corresponds to the diffusion with coefficient $D(u)$, and $g(x, u)$ is the reaction term related to source and loss processes. Typically, in chemical and biological applications, the reaction term $g(x, u)$ is a polynomial of $u$ with variable coefficients (see e.g. [@CI] and references therein).\
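Indeed, with this choice of $D(u)$ a direct computation shows how the $(p, q)$--structure arises from the diffusion term: $${\rm div}(D(u)\nabla u) = {\rm div}(|\nabla u|^{p-2}\nabla u) + {\rm div}(|\nabla u|^{q-2}\nabla u) = \Delta_p u +\Delta_q u,$$ which explains the appearance of the $(p, q)$--Laplacian in the prototype equation above.\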
Having outlined the range of possible applications, from now on we assume $q<p$ and consider equation [\[main\]](#main){reference-type="eqref" reference="main"} in its most general formulation. Since [\[main\]](#main){reference-type="eqref" reference="main"} is set in $\mathbb R^N$, the resulting lack of compactness makes the classical variational tools difficult to handle. Here, we rely neither on symmetry assumptions (see [@CaSa2020]) nor on concentration compactness arguments (see [@HeLi]) but, drawing on the seminal work [@BF], we impose suitable assumptions on the potentials $V(x), W(x)$, namely $$\label{miste}
\begin{split}
\mathop{\mathrm{essinf}}_{x\in \mathbb R^N}& V(x)>0, \qquad\mathop{\mathrm{essinf}}_{x\in \mathbb R^N} W(x)>0,\\
&\lim_{|x|\to +\infty}\int_{B_1(x)}\frac{1}{V(y)}\ dy\ = \ 0,
\end{split}$$ where $B_1(x)$ stands for the unit ball of $\mathbb R^N$ centered at the point $x$. As stated in [@BF Theorem 3.1], hypothesis [\[miste\]](#miste){reference-type="eqref" reference="miste"} ensures a compact embedding in suitable weighted Lebesgue spaces on $\mathbb R^N$. On the other hand, even if problem [\[main\]](#main){reference-type="eqref" reference="main"} is characterized by the presence of two coefficients which depend on the solution itself, we give some sufficient conditions under which problem [\[main\]](#main){reference-type="eqref" reference="main"} has a variational structure, so that the search for its solutions reduces to detecting the critical points of the nonlinear functional $$\begin{split}
{\cal J}(u) &= \frac1p \int_{\mathbb R^N} A(x, u)|\nabla u|^p dx +\frac1q \int_{\mathbb R^N}B(x, u)|\nabla u|^q dx +\frac1p \int_{\mathbb R^N} V(x) |u|^p dx\\
& \quad+\frac1q \int_{\mathbb R^N} W(x)|u|^q dx -\int_{\mathbb R^N} G(x, u) dx
\end{split}$$ with $G(x, t) = \int_0^t g(x, s)ds$, in the Banach space $$X= W^{1, p}_V(\mathbb R^N)\cap W^{1, q}_W(\mathbb R^N)\cap L^{\infty}(\mathbb R^N).$$ As in some of the previously cited problems, it is worth noting that our functional ${\cal J}$ satisfies neither the Palais--Smale condition nor one of its standard variants in the Banach space $X$. Thus, we cannot directly apply existence and multiplicity results such as the classical Ambrosetti--Rabinowitz theorems stated in [@AR]. Hence, drawing on the pioneering work [@CP2], we need a weaker version of Cerami's variant of the Palais--Smale condition (see Definition [Definition 3](#wCPS){reference-type="ref" reference="wCPS"}) which fits our needs.\
We think that the handling of this definition in such a wide and challenging framework constitutes a significant advancement in this field. In fact, even if we make use of a generalized version of the Mountain Pass Theorem and of its symmetric version (Theorems [Theorem 4](#absMPT){reference-type="ref" reference="absMPT"} and [Theorem 5](#SMPT){reference-type="ref" reference="SMPT"}), already presented in [@CPabstract], we employ both of them in the completely new setting of a generalized $(p,q)$--Laplacian equation on an unbounded domain.\
In view of this framework, however, we are required to assume some hypotheses on the functions $A(x, t)$ and $B(x, t)$, alongside the above mentioned [\[miste\]](#miste){reference-type="eqref" reference="miste"}. If we just assume that the Carathéodory functions $A(x, t)$, $B(x, t)$ are essentially bounded when $t$ is bounded and that $g(x, u)$ satisfies a suitable polynomial growth condition, we prove that our functional ${\cal J}$ is of class $\mathcal{C}^1(X, \mathbb R)$. Then, passing through an approximation argument over bounded domains, we prove both the existence of a nontrivial solution and, exploiting also a suitable direct sum decomposition of the weighted Sobolev space $W_V^{1, p}(\mathbb R^N)$, the existence of infinitely many solutions. Both of these results require some additional assumptions which we introduce precisely in Section [4](#sec_main){reference-type="ref" reference="sec_main"}.\
Now, in order to emphasize the improvement given by our main results, we present them here in a "simplified" version. In any case, we refer the reader to Section [4](#sec_main){reference-type="ref" reference="sec_main"}, Theorems [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} and [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"}, for a thorough list of all the required hypotheses on the functions involved and for fully detailed statements of these results.
**Theorem 1**. *If the potentials $V, W:\mathbb R^N\to\mathbb R$ satisfy [\[miste\]](#miste){reference-type="eqref" reference="miste"} and are bounded over bounded sets, if $A, B:\mathbb R^N\times\mathbb R\to\mathbb R$ are $\mathcal{C}^1$--Carathéodory functions which properly interact with their partial derivatives and $g:\mathbb R^N\times\mathbb R\to\mathbb R$ is a Carathéodory function which grows no more than $|u|^{p-1} +|u|^{s-1}$ with $$p<s<p^*,$$ has an appropriate behavior at the origin and satisfies the Ambrosetti--Rabinowitz condition, then problem [\[main\]](#main){reference-type="eqref" reference="main"} admits at least one nontrivial weak bounded solution.*
It is worthwhile pointing out that the multiplicity result we provide here is entirely novel also in the case $q=p$, i.e., for the equation stated in [\[Ceq\]](#Ceq){reference-type="eqref" reference="Ceq"}.
**Theorem 2**. *Under the previous assumptions, if the requirement on the behaviour of $g$ at the origin is replaced with the assumption that $g(x, \cdot)$ is odd, then problem [\[main\]](#main){reference-type="eqref" reference="main"} has infinitely many weak bounded solutions whose critical levels diverge to $+\infty$.*
The paper is organized as follows. In Section [2](#abstractsection){reference-type="ref" reference="abstractsection"} we introduce the weak--Cerami--Palais--Smale condition and some related abstract results to which we refer for the statement of our main results. In Section [3](#sec_var){reference-type="ref" reference="sec_var"} we give some first hypotheses on the functions $A(x, u), B(x, u), G(x, u)$, on the potentials $V(x), W(x)$ and, in particular, we formulate a variational principle for problem [\[main\]](#main){reference-type="eqref" reference="main"}. In Section [4](#sec_main){reference-type="ref" reference="sec_main"} we state our existence and multiplicity results, provide some further assumptions on the involved functions and establish some geometric properties. In Section [5](#sec_bounded){reference-type="ref" reference="sec_bounded"} we restate problem [\[main\]](#main){reference-type="eqref" reference="main"} on bounded domains, on which we prove the existence of solutions. Finally, in Section [6](#sec_proof){reference-type="ref" reference="sec_proof"} we prove our main results.
# Abstract setting {#abstractsection}
Throughout this section, we assume that:
- $(X, \|\cdot\|_X)$ is a Banach space with dual $(X',\|\cdot\|_{X'})$;
- $(S,\|\cdot\|_S)$ is a Banach space such that $X \hookrightarrow S$ continuously, i.e., $X \subset S$ and a constant $\sigma_0 > 0$ exists such that $$\|\xi\|_S \ \le \ \sigma_0\ \|\xi\|_X\qquad \hbox{for all $\xi \in X$;}$$
- $J : {\mathcal D} \subset S \to \mathbb R$ and $J \in C^1(X,\mathbb R)$ with $X \subset {\mathcal D}$.
In order to avoid any ambiguity and to simplify the notation whenever possible, from now on we denote by $X$ the space equipped with its given norm $\|\cdot\|_X$, while, if the norm $\Vert\cdot\Vert_{S}$ is involved, we write it explicitly.
For simplicity, taking $\gamma \in \mathbb R$, we say that a sequence $(\xi_n)_n\subset X$ is a *Cerami--Palais--Smale sequence at level $\gamma$*, briefly *$(CPS)_\gamma$--sequence*, if $$\lim_{n \to +\infty}J(\xi_n) = \gamma\quad\mbox{and}\quad
\lim_{n \to +\infty}\|dJ\left(\xi_n\right)\|_{X'} (1 + \|\xi_n\|_X) = 0.$$ Moreover, $\gamma$ is a *Cerami--Palais--Smale level*, briefly a *$(CPS)$--level*, if there exists a $(CPS)_\gamma$ -- sequence.
As $(CPS)_\gamma$ -- sequences may exist which are unbounded in $\|\cdot\|_X$ but converge with respect to $\|\cdot\|_S$, we have to weaken the classical Cerami--Palais--Smale condition in a suitable way according to the ideas already developed in previous papers (see, e.g., [@CPabstract]).
**Definition 3**. The functional $J$ satisfies the *weak Cerami--Palais--Smale condition at level $\gamma$* ($\gamma \in \mathbb R$), briefly *$(wCPS)_\gamma$ condition*, if for every $(CPS)_\gamma$--sequence $(\xi_n)_n$, a point $\xi \in X$ exists, such that
*(i)*
: $\displaystyle
\lim_{n \to+\infty} \|\xi_n - \xi\|_S = 0\quad$ (up to subsequences),
*(ii)*
: $J(\xi) = \gamma$, $\; dJ(\xi) = 0$.
If $J$ satisfies the $(wCPS)_\gamma$ condition at each level $\gamma \in I$, $I$ a real interval, we say that $J$ satisfies the $(wCPS)$ condition in $I$.
By using Definition [Definition 3](#wCPS){reference-type="ref" reference="wCPS"}, the following generalization of the Mountain Pass Theorem can be stated (see [@CPabstract Theorem 1.7]).
**Theorem 4** (Mountain Pass Theorem). *Let $J\in C^1(X,\mathbb R)$ be such that $J(0) = 0$ and the $(wCPS)$ condition holds in $\mathbb R$. Moreover, assume that two constants $\varrho$, $\beta > 0$ and a point $e \in X$ exist such that $$u \in X, \; \|u\|_S = \varrho\quad \text{ implies }\quad J(u) \ge \beta,$$ $$\|e\|_S > \varrho\qquad\hbox{and}\qquad J(e) < \beta.$$ Then $J$ has a critical point $u_X \in X$ such that $$J(u_X) = \inf_{\gamma \in \Gamma} \sup_{\sigma\in [0,1]} J(\gamma(\sigma)) \ge \beta$$ with $\Gamma = \{ \gamma \in C([0,1],X):\, \gamma(0) = 0,\; \gamma(1) = e\}$.*
Finally, with the stronger assumption that $J$ is symmetric, the following multiplicity result can be provided too (see [@CPabstract Theorem 1.8]).
**Theorem 5**. *Let $J\in C^1(X, \mathbb R)$ be an even functional such that $J(0)=0$ and $(wCPS)$ holds in $]0, +\infty[$. Assume that $\beta>0$ exists such that*
1. *two closed subspaces $E_{\beta}$ and $Z_{\beta}$ of $X$ exist such that $$E_{\beta} + Z_{\beta}=X,\qquad {\rm codim} Z_{\beta} < {\rm dim} E_{\beta}<+\infty,$$ and $J$ satisfies the following assumptions.*
1. *A constant $\varrho>0$ exists such that $$u\in Z_{\beta}, \; \|u\|_S =\varrho\quad\text{ implies }\quad J(u)\ge\beta;$$*
2. *A constant $R>0$ exists such that $$u\in E_{\beta}, \; \|u\|_X\ge R\quad\text{ implies }\quad J(u)\le 0,$$ hence $\displaystyle\sup_{u\in E_{\beta}} J(u)<+\infty$.*
*Then the functional $J$ possesses at least a pair of symmetric critical points in $X$ whose corresponding critical level belongs to $[\beta, \beta_1]$, with $$\beta_1=\displaystyle\sup_{u\in E_{\beta}} J(u) >\beta.$$*
# Variational setting and preliminary assumptions {#sec_var}
Let $\mathbb N= \{1,2,\dots\}$ be the set of strictly positive integers and, taking any open subset $\Omega$ of $\mathbb R^N$, $N\ge 2$, with smooth boundary, we denote by
- $B_R(x) = \{y\in\mathbb R^N : |y-x|< R\}$ the open ball with center in $x\in \mathbb R^N$ and radius $R>0$;
- $|D|$ the usual $N$--dimensional Lebesgue measure of a measurable set $D$ in $\mathbb R^N$;
- $d_i$ any strictly positive constant whose value may change from line to line;
- $(L^r(\Omega),|\cdot|_{\Omega,r})$ the classical Lebesgue space with norm $$|u|_{\Omega,r} = \left(\int_{\Omega}|u|^r dx\right)^{1/r}$$ if $1\le r<+\infty$;
- $(L^\infty(\Omega),|\cdot|_{\Omega,\infty})$ the space of the Lebesgue--measurable essentially bounded functions endowed with norm $\displaystyle |u|_{\Omega,\infty} = \mathop{\mathrm{ess\; sup}}_{\Omega} |u|$;
- $W^{1,r}(\Omega)$ and $W_0^{1,r}(\Omega)$ the classical Sobolev spaces both equipped with the standard norm $\|u\|_{\Omega, r} = (|\nabla u|_{\Omega,r}^r +|u|_{\Omega,r}^r)^{\frac1r}$ if $1 \le r < +\infty$.
Moreover, if $U:\mathbb R^N\to\mathbb R$ is a measurable function such that $$\label{Uinfess}
\mathop{\mathrm{essinf}}_{x\in\mathbb R^N} U(x)>0,$$ we denote by
- $(L_U^r(\Omega),|\cdot|_{\Omega,U,r})$, if $1\le r<+\infty$, the weighted Lebesgue space with $$\begin{split}
&L_U^r(\Omega)=\left\{u\in L^r(\Omega): \int_{\Omega} U(x) |u|^r dx <+\infty\right\},\\
&|u|_{\Omega,U,r} =\left(\int_{\Omega} U(x) |u|^r dx\right)^{\frac1r};
\end{split}$$
- $W_{U}^{1,r}(\Omega)$ and $W_{0,U}^{1,r}(\Omega)$, if $1\le r<+\infty$, the weighted Sobolev spaces $$\begin{split}
W_{U}^{1,r}(\Omega) = &\left\{ u\in W^{1,r}(\Omega): \ \int_{\Omega} U(x) |u|^r dx <+\infty\right\},\\
W_{0,U}^{1,r}(\Omega) = &\left\{ u\in W_0^{1,r}(\Omega): \ \int_{\Omega} U(x) |u|^r dx <+\infty\right\}
\end{split}$$ endowed with the norm $$\label{weightnorm}
\|u\|_{\Omega,U, r} = (|\nabla u|_{\Omega,r}^r +|u|_{\Omega,U,r}^r)^{\frac1r}.$$
For simplicity, we put $B_R = B_R(0)$ for the open ball with center at the origin and radius $R>0$ and, if $\Omega = \mathbb R^N$, we omit the set in the norms, i.e., we denote $|\cdot|_{\infty} = |\cdot|_{\mathbb R^N,\infty}$ and, for any $1 \le r < +\infty$, we write
- $|\cdot|_{r} = |\cdot|_{\mathbb R^N,r}$ for the norm in $L^r(\mathbb R^N)$;
- $|\cdot|_{U,r} = |\cdot|_{\mathbb R^N,U,r}$ for the norm in $L_U^r(\mathbb R^N)$;
- $\|\cdot\|_r = \|\cdot\|_{\mathbb R^N, r}$ for the norm in $W^{1,r}(\mathbb R^N) = W^{1,r}_0(\mathbb R^N)$;
- $\|\cdot\|_{U, r} = \|\cdot\|_{\mathbb R^N,U, r}$ for the norm in $W_U^{1,r}(\mathbb R^N) = W_{0,U}^{1,r}(\mathbb R^N)$.
**Remark 6**. If the potential $U(x)$ satisfies assumption [\[Uinfess\]](#Uinfess){reference-type="eqref" reference="Uinfess"}, then the following continuous embeddings hold: $$L_U^r(\mathbb R^N)\hookrightarrow L^r(\mathbb R^N) \quad \hbox{for all $1\le r<+\infty$,}$$ $$\label{WVem}
W_U^{1, r}(\mathbb R^N)\hookrightarrow W^{1, r}(\mathbb R^N)
\quad \hbox{for all $1\le r<+\infty$.}$$ Clearly, [\[WVem\]](#WVem){reference-type="eqref" reference="WVem"} holds also replacing $\mathbb R^N$ with $\Omega$ any open subset of $\mathbb R^N$ with smooth boundary.
From Remark [Remark 6](#Remb){reference-type="ref" reference="Remb"} and Sobolev embedding theorems, we deduce the following result (for the compact embedding, see [@BF Theorem 3.1]).
**Theorem 7**. *Let $U:\mathbb R^N\to\mathbb R$ be a Lebesgue measurable function such that [\[Uinfess\]](#Uinfess){reference-type="eqref" reference="Uinfess"} holds. Then the following continuous embeddings hold.*
- *If $l<N$ then $$\label{cont1}
W_U^{1,l}(\mathbb R^N) \hookrightarrow L^r(\mathbb R^N) \quad\mbox{ for any }\; l\le r\le \frac{Nl}{N-l};$$*
- *If $l=N$ then $$\label{cont2}
W_U^{1, l}(\mathbb R^N)\hookrightarrow L^r(\mathbb R^N) \quad\mbox{ for any }\; l\le r<+\infty;$$*
- *If $l>N$ then $$\label{cont3}
W_U^{1, l}(\mathbb R^N)\hookrightarrow L^{r}(\mathbb R^N) \quad\mbox{ for any }\; l\le r\le+\infty.$$*
*Furthermore, if also $$\int_{B_1(x)}\frac{1}{U(y)} dy\to 0\;\mbox{ as } |x|\to +\infty,$$ the compact embedding $$\label{comp}
W_U^{1, l}(\mathbb R^N)\hookrightarrow\hookrightarrow L^r(\mathbb R^N) \quad\mbox{ for any } l\le r <l^*$$ is valid with $$l^*=\begin{cases}
\frac{Nl}{N-l} &\hbox{ if } l<N,\\
+\infty &\hbox{ if } l\ge N.
\end{cases}$$*
In particular, from Theorem [Theorem 7](#embed){reference-type="ref" reference="embed"} it follows that, for any $r\ge p$ such that [\[cont1\]](#cont1){reference-type="eqref" reference="cont1"}, respectively [\[cont2\]](#cont2){reference-type="eqref" reference="cont2"} or [\[cont3\]](#cont3){reference-type="eqref" reference="cont3"}, holds with $l=p$, a constant $\tau_{r, V}>0$ exists such that $$\label{sob}
|u|_r\le\tau_{r, V} \|u\|_{V, p} \quad\mbox{ for all } u\in W_V^{1, p}(\mathbb R^N).$$ From now on, we suppose that
- the potentials $V, W:\mathbb R^N\to\mathbb R$ are Lebesgue measurable functions such that $$\mathop{\mathrm{essinf}}_{\mathbb R^N} V(x)>0,\qquad \mathop{\mathrm{essinf}}_{\mathbb R^N} W(x)>0;$$
- we have $$\int_{B_1(x)}\frac{1}{V(y)} dy\to 0 \qquad\mbox{ as } |x|\to +\infty.$$
Thereby, taking $p, q \in]1, +\infty[$, using the notations introduced at the beginning of this section, we can consider the weighted spaces $(W^{1, p}_V(\mathbb R^N), \|\cdot\|_{V})$ and $(W^{1, q}_W(\mathbb R^N), \|\cdot\|_W)$.\
From now on, we set $$\label{Xdefn}
X:= W^{1, p}_V(\mathbb R^N)\cap W^{1, q}_W(\mathbb R^N)\cap L^{\infty}(\mathbb R^N)$$ endowed with the norm $$\label{Xnorm}
\|u\|_X := \|u\|_V +\|u\|_W +|u|_{\infty}$$ where, to simplify the notations, we denote $$\label{Xnorm2}
\|\cdot\|_{V} = \|\cdot\|_{V, p}\quad\mbox{ and }\quad \|\cdot\|_W=\|\cdot\|_{W, q}.$$ Furthermore, Theorem [Theorem 7](#embed){reference-type="ref" reference="embed"} and definition [\[Xdefn\]](#Xdefn){reference-type="eqref" reference="Xdefn"} allow us to provide the following result.
**Corollary 8**. *If assumptions $(P_1)$--$(P_2)$ hold, then*
1. *$(X, \|\cdot\|_X)\hookrightarrow (L^r(\mathbb R^N), |\cdot|_r)$ continuously for any $q\le r\le+\infty$;*
2. *$(X, \|\cdot\|_X)\hookrightarrow\hookrightarrow (L^r(\mathbb R^N), |\cdot|_r)$ compactly for any $p\le r<+\infty$.*
*Proof.* The embedding $i)$ is trivially satisfied using hypothesis $(P_1)$. On the other hand, if also $(P_2)$ holds, from [\[Xdefn\]](#Xdefn){reference-type="eqref" reference="Xdefn"} and [\[comp\]](#comp){reference-type="eqref" reference="comp"} it follows that $$\label{comp1}
X\hookrightarrow\hookrightarrow L^r(\mathbb R^N)\quad\mbox{ for any } r\in [p, p^*[.$$ Now, let $r\ge p^*$ and $(u_n)_n\subset X$, $u\in X$ such that $u_n\rightharpoonup u$ in $X$. Thus, fixing any $\varepsilon>0$ with $p<p^*-\varepsilon<p^*$, from [\[comp1\]](#comp1){reference-type="eqref" reference="comp1"} we have $$|u_n -u|_r^r\le |u_n -u|_{\infty}^{r-p^*+\varepsilon}\int_{\mathbb R^N}|u_n -u|^{p^*-\varepsilon} dx\to 0$$ and then $ii)$ is verified. ◻
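For instance, for $q\le r<+\infty$ the embedding in $i)$ can be made explicit through the following interpolation inequality, which uses only $(P_1)$ (the case $r=+\infty$ being immediate since $X\subset L^{\infty}(\mathbb R^N)$): $$|u|_r\ \le\ |u|_{\infty}^{1-\frac{q}{r}}\,|u|_q^{\frac{q}{r}}\ \le\ \Big(\mathop{\mathrm{essinf}}_{\mathbb R^N} W\Big)^{-\frac{1}{r}}|u|_{\infty}^{1-\frac{q}{r}}\,\|u\|_W^{\frac{q}{r}}\ \le\ \Big(\mathop{\mathrm{essinf}}_{\mathbb R^N} W\Big)^{-\frac{1}{r}}\|u\|_X \qquad\mbox{ for all } u\in X,$$ where the last step follows from Young's inequality and [\[Xnorm\]](#Xnorm){reference-type="eqref" reference="Xnorm"}.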
We proceed providing a suitable decomposition for the spaces $W_V^{1, p}(\mathbb R^N)$ and $X$.\
We define the subset $$\mathcal{S}=\{ v\in W^{1, p}_V(\mathbb R^N): |v|_p =1\},$$ the functional $\Phi: W^{1, p}_V(\mathbb R^N)\to\mathbb R$ by $$\Phi(u) =\|u\|_V^p,$$ and $$\eta_1 =\inf_{v\in\mathcal{S}} \Phi(v)\ge 0.$$ From [\[comp\]](#comp){reference-type="eqref" reference="comp"}, there exists $\psi_1\in\mathcal{S}$ such that $\eta_1=\Phi(\psi_1)$. Thus, starting from $\eta_1$, the existence of a sequence of positive numbers $(\eta_j)_j$, such that $$\label{inc}
\eta_j\nearrow +\infty\qquad\mbox{ as } j\to +\infty$$ and whose corresponding functions are $(\psi_j)_j$, $\psi_j\neq\psi_k$, if $j\neq k$, is established (see, [@CP2 Section 5]). Moreover, they generate the whole space $W^{1, p}_V(\mathbb R^N)$. To be more precise, setting $Y_V^j ={\rm span}\{\psi_1, \dots, \psi_j\}$ and denoting by $Z_V^j$ its complement, for all $j\in\mathbb N$ the decomposition $$W^{1, p}_V(\mathbb R^N) = Y_V^j \oplus Z_V^j$$ is valid. Moreover, using [@CP2 Lemma 5.4], we recall the inequality $$\label{2Zineq}
\eta_{j+1} |u|_p^p\le\|u\|_V^p \quad\mbox{ for all } u\in Z_V^j.$$ Setting $$\label{Zjdefn}
Z^j:= Z_V^j\cap X,$$ we have that $Z^j$ is a closed subspace of $X$ of finite codimension. Then a finite dimensional subspace $Y^j$ of $X$ exists which is its topological complement, i.e., the decomposition $$\label{oplus}
X=Y^j\oplus Z^j$$ holds.
We proceed recalling the following definition.
**Definition 9**. A function $h:\mathbb R^N\times\mathbb R\to\mathbb R$ is a $\mathcal{C}^{k}$--Carathéodory function, $k\in\mathbb N\cup\lbrace 0\rbrace$, if
- $h(\cdot,t) : x \in \mathbb R^N \mapsto h(x,t) \in \mathbb R$ is measurable for all $t \in \mathbb R$,
- $h(x,\cdot) : t \in \mathbb R\mapsto h(x,t) \in \mathbb R$ is $\mathcal{C}^k$ for a.e. $x \in \mathbb R^N$.
In order to bring out the variational nature of our problem, we introduce some preliminary assumptions. We assume that
- $A$ and $B$ are $\mathcal{C}^1$--Carathéodory functions;
- for any $\rho > 0$ we have that $$\begin{split}
&\sup_{\vert t\vert\leq \rho} \vert A\left(\cdot, t\right)\vert \in L^{\infty}(\mathbb R^N), \
\quad \sup_{\vert t\vert\leq \rho} \vert A_t\left(\cdot, t\right)\vert \in L^{\infty}(\mathbb R^N)\\
&\sup_{\vert t\vert\leq \rho} \vert B\left(\cdot, t\right)\vert \in L^{\infty}(\mathbb R^N), \
\quad \sup_{\vert t\vert\leq \rho} \vert B_t\left(\cdot, t\right)\vert \in L^{\infty}(\mathbb R^N).
\end{split}$$
Furthermore, we assume that the function $g:\mathbb R^N\times\mathbb R\to\mathbb R$ is such that
- $g(x, t)$ is a $\mathcal{C}^0$--Carathéodory function;
- $a >0$ and $p<s$ exist such that $$|g(x, t)|\le a(|t|^{p-1} + |t|^{s-1})\quad \mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R.$$
**Remark 10**. Assumptions $(g_0)$ and $(g_1)$ imply that $$\label{Gdefn}
G:(x, t)\in\mathbb R^N\times\mathbb R\mapsto\int_0^t g(x, s) ds\in\mathbb R$$ is a well defined $\mathcal{C}^1$--Carathéodory function and $$\label{Gle}
|G(x, t)|\le \frac{a}{p}|t|^p +\frac{a}{s}|t|^s \quad \mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R.$$ In particular, from [\[Gdefn\]](#Gdefn){reference-type="eqref" reference="Gdefn"} and $(g_1)$ it follows that $$\label{G0=0}
G(x, 0) =g(x, 0)=0\quad \mbox{ for a.e. } x\in\mathbb R^N.$$
Let $u\in X$. Assumption $(h_1)$ and [\[Xdefn\]](#Xdefn){reference-type="eqref" reference="Xdefn"} ensure that both $A(\cdot, u)|\nabla u(\cdot)|^p\in L^1(\mathbb R^N)$ and $B(\cdot, u)|\nabla u(\cdot)|^q\in L^1(\mathbb R^N)$. On the other hand, embedding $i)$ in Corollary [Corollary 8](#Cor1){reference-type="ref" reference="Cor1"} and hypotheses $(g_0)$, $(g_1)$ imply that $G(\cdot, u)\in L^1(\mathbb R^N)$ too. Thus the functional $$\label{funct}
\begin{split}
{\cal J}(u) &= \frac1p \int_{\mathbb R^N} A(x, u)|\nabla u|^p dx +\frac1q \int_{\mathbb R^N}B(x, u)|\nabla u|^q dx\\
& \quad +\frac1p \int_{\mathbb R^N} V(x) |u|^p dx +\frac1q \int_{\mathbb R^N} W(x)|u|^q dx -\int_{\mathbb R^N} G(x, u) dx
\end{split}$$ is well defined for all $u\in X$. Furthermore, taking any $u, v\in X$, the Gâteaux differential of ${\cal J}$ in $u$ along the direction $v$ is given by $$\label{diff}
\begin{split}
&\hspace{-.135in}\langle d{\cal J}(u), v\rangle =\int_{\mathbb R^N} A(x, u)|\nabla u|^{p-2} \nabla u\cdot\nabla v dx +\frac1p \int_{\mathbb R^N} A_t(x, u) v|\nabla u|^p dx\\
&+\int_{\mathbb R^N} B(x, u)|\nabla u|^{q-2} \nabla u\cdot\nabla v dx + \frac1q \int_{\mathbb R^N} B_t(x, u) v|\nabla u|^q dx\\
&+\int_{\mathbb R^N} V(x) |u|^{p-2} u v dx +\int_{\mathbb R^N} W(x) |u|^{q-2} u v dx -\int_{\mathbb R^N} g(x, u) v dx.
\end{split}$$ In particular, the following regularity result can be stated.
**Proposition 11**. *Suppose that hypotheses $(P_1)$, $(h_0)$--$(h_1)$ and $(g_0)$--$(g_1)$ hold. If $(u_n)_n\subset X$, $u\in X$ and $M>0$ are such that $$\begin{aligned}
&u_n\to u \quad\mbox{ a.e. in } \mathbb R^N,\\
&\|u_n -u\|_V \to 0, \quad \|u_n-u\|_W\to 0\quad\mbox{ as } n\to +\infty,\\
&|u_n|_{\infty}\le M \quad\mbox{ for all } n\in\mathbb N,\end{aligned}$$ then $${\cal J}(u_n)\to {\cal J}(u) \quad\mbox{ and }\quad \|d{\cal J}(u_n) -d{\cal J}(u)\|_{X^{\prime}}\to 0\quad\mbox{ as } n\to +\infty.$$ Hence ${\cal J}$ is a $\mathcal{C}^1$ functional in $X$ with Fréchet differential defined as in [\[diff\]](#diff){reference-type="eqref" reference="diff"}.*
*Proof.* From definition [\[funct\]](#funct){reference-type="eqref" reference="funct"}, the desired result follows by using the same arguments as in [@CSS2 Proposition 3.10]. ◻
# Statement of the main results {#sec_main}
From now on, besides hypotheses $(P_1)$--$(P_2)$, $(h_0)$--$(h_1)$, $(g_0)$--$(g_1)$ we consider the following additional conditions
- a constant $\alpha_0>0$ exists such that $$\begin{split}
&A(x, t)\ge\alpha_0 \quad\mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R,\\
&B(x, t)\ge\alpha_0 \quad\mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R;
\end{split}$$
- some constants $\mu>p$ (in particular it is $\mu>q$, too) and $\alpha_1>0$ exist so that $$\begin{split}
(\mu-p)A(x, t)-A_t(x, t)t\ge\alpha_1 A(x, t) \quad\mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R,\\
(\mu-q)B(x, t)-B_t(x, t)t\ge\alpha_1 B(x, t) \quad\mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R;
\end{split}$$
- a constant $\alpha_2>0$ exists such that $$\begin{split}
pA(x, t) + A_t(x, t)t\ge\alpha_2 A(x, t) \quad\mbox{ a.e. in } \mathbb R^N \mbox{ for all } t \in \mathbb R,\\
qB(x, t) + B_t(x, t)t\ge\alpha_2 B(x, t) \quad\mbox{ a.e. in } \mathbb R^N \mbox{ for all } t \in \mathbb R;
\end{split}$$
- having $\mu$ as in hypothesis $(h_3)$, then $$0<\mu G(x, t)\le g(x, t)t \quad\mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\in\mathbb R\setminus\{0\};$$
- taking $\alpha_0$ as in assumption $(h_2)$ and $\tau_{p, V}$ as in [\[sob\]](#sob){reference-type="eqref" reference="sob"}, we have $$\lim_{t\to 0}\frac{g(x, t)}{|t|^{p-2}t} =\bar{\alpha}<\frac{\alpha_0}{\tau_{p, V}^p};$$
- for any $\varrho >0$, a constant $C_{\varrho}>0$ exists such that $$\mathop{\mathrm{ess\; sup}}_{|x|\le\varrho} V(x)\le C_{\varrho} \quad\mbox{ and }\quad \mathop{\mathrm{ess\; sup}}_{|x|\le\varrho} W(x)\le C_{\varrho}.$$
**Remark 12**. If $(h_2)$ holds, then, without loss of generality, we can assume $\alpha_0 \le 1$. Moreover, taking $t=0$ in $(h_3)$, we have also $\mu -p\ge\alpha_1$.
**Remark 13**. From assumptions $(h_1)$, $(h_3)$ and direct computations we infer that $$\begin{split}
&A(x, t)\le a_1 +a_2 |t|^{\mu-p-\alpha_1}\quad\mbox{ a.e. in $\mathbb R^N$, for all $t\in\mathbb R$},\\
&B(x, t)\le a_1 +a_2 |t|^{\mu-q-\alpha_1}\quad\mbox{ a.e. in $\mathbb R^N$, for all $t\in\mathbb R$},
\end{split}$$ where $a_1, a_2$ denote two positive constants.
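For the reader's convenience, we sketch a possible way to carry out this computation for $A$ and $t\ge 1$ (the case $t\le -1$ and the function $B$ can be treated analogously), using also $A\ge\alpha_0>0$ from $(h_2)$, which is in force throughout this section. Since $(h_3)$ can be rewritten as $A_t(x, t)t\le(\mu-p-\alpha_1)A(x, t)$, we have $$\frac{\partial}{\partial t}\log A(x, t)\ =\ \frac{A_t(x, t)}{A(x, t)}\ \le\ \frac{\mu-p-\alpha_1}{t}\qquad\mbox{ a.e. in } \mathbb R^N, \mbox{ for all } t\ge 1,$$ so an integration over $[1, t]$ gives $A(x, t)\le A(x, 1)\, t^{\mu-p-\alpha_1}$, which, combined with $(h_1)$ on $\{|t|\le 1\}$, yields the stated estimate (recall that $\mu-p-\alpha_1\ge 0$ by Remark [Remark 12](#RemV1){reference-type="ref" reference="RemV1"}).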
**Remark 14**. We note that [\[Gdefn\]](#Gdefn){reference-type="eqref" reference="Gdefn"}, assumptions $(g_0)$--$(g_2)$ and straightforward computations imply that for any $\varepsilon>0$ a function $\eta_{\varepsilon}\in L^{\infty}(\mathbb R^N)$ exists so that $\eta_{\varepsilon}>0$ for a.e. $x\in\mathbb R^N$ and $$\label{Ggeq}
G(x,t)\ge\eta_{\varepsilon}(x) |t|^{\mu} \quad\mbox{ for a.e. } x\in\mathbb R^N \hbox{ if } |t|\ge \varepsilon.$$ Furthermore, hypotheses $(g_0)$--$(g_1)$ and $(g_3)$ imply that for any $\varepsilon>0$ a constant $c_{\varepsilon}>0$ exists such that $$|g(x, t)|\le(\bar{\alpha} +\varepsilon)|t|^{p-1} +c_{\varepsilon}|t|^{s-1} \quad\mbox{ a.e. in $\mathbb R^N$, for all $t\in\mathbb R$}$$ and $$\label{Glimcom}
|G(x, t)|\le\frac{\bar{\alpha} +\varepsilon}{p}|t|^p +\frac{c_{\varepsilon}}{s}|t|^s \quad\mbox{ a.e. in $\mathbb R^N$, for all $t\in\mathbb R$}.$$ Hence, from [\[Gle\]](#Gle){reference-type="eqref" reference="Gle"} and [\[Ggeq\]](#Ggeq){reference-type="eqref" reference="Ggeq"} it follows that $$\label{p<q}
1<q\le p<\mu \le s.$$
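In the same spirit, a possible way to obtain [\[Ggeq\]](#Ggeq){reference-type="eqref" reference="Ggeq"} is the following: for a.e. $x\in\mathbb R^N$ and $t\ge\varepsilon$, assumption $(g_2)$ gives $\frac{\partial}{\partial t}\log G(x, t)=\frac{g(x, t)}{G(x, t)}\ge\frac{\mu}{t}$, so that an integration over $[\varepsilon, t]$ yields $$G(x, t)\ \ge\ \frac{G(x, \varepsilon)}{\varepsilon^{\mu}}\, t^{\mu}\qquad\mbox{ for a.e. } x\in\mathbb R^N, \mbox{ for all } t\ge\varepsilon.$$ Arguing in the same way on $]-\infty, -\varepsilon]$, it is then enough to take $\eta_{\varepsilon}(x)=\varepsilon^{-\mu}\min\{G(x, \varepsilon), G(x, -\varepsilon)\}$, which is positive for a.e. $x\in\mathbb R^N$ by $(g_2)$ and belongs to $L^{\infty}(\mathbb R^N)$ by [\[Gle\]](#Gle){reference-type="eqref" reference="Gle"}.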
We provide our main existence and multiplicity results.
**Theorem 15**. *Suppose that assumptions $(P_1)$--$(P_3)$, $(h_0)$--$(h_4)$ and $(g_0)$--$(g_3)$ hold with $$\label{hpsotto}
s<p^*.$$ Then problem [\[main\]](#main){reference-type="eqref" reference="main"} admits at least one weak nontrivial bounded solution.*
**Theorem 16**. *Under the same hypotheses of Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"}, but replacing assumption $(g_3)$ with the requirement that $g(x, \cdot)$ is odd, problem [\[main\]](#main){reference-type="eqref" reference="main"} has a sequence $(u_k)_k$ of weak bounded solutions such that ${\cal J}(u_k)\nearrow +\infty$.*
**Remark 17**. If $q=p$, Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} has been already proved (see [@CSS2 Theorem 4.4]). On the other hand, the multiplicity result stated in Theorem [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"} is completely new both in the cases $q=p$ and $q\neq p$.
**Remark 18**. Taking $\mu$ as in $(h_3)$, $(g_2)$ and $s$ as in $(g_1)$, it follows by [\[p\<q\]](#p<q){reference-type="eqref" reference="p<q"} and [\[hpsotto\]](#hpsotto){reference-type="eqref" reference="hpsotto"} that Theorems [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} and [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"} hold with $$1<q \le p<\mu \le s<p^*.$$
Under the assumptions of Theorems [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} and [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"}, the following geometrical conditions can be provided.
**Proposition 19**. *Assume that hypotheses $(h_0)$--$(h_1)$, $(h_3)$, $(g_0)$--$(g_2)$ and $(P_1)$ hold. Thus for any finite dimensional subspace $F\subset X$, we have $$\label{J-inf}
\lim_{\substack{v\in F\\ \|v\|_X\to +\infty}} {\cal J}(v) = -\infty.$$*
*Proof.* Suppose by contradiction that a finite dimensional subspace $F\subset X$ exists which does not satisfy condition [\[J-inf\]](#J-inf){reference-type="eqref" reference="J-inf"}, namely a sequence $(u_n)_n\subset F$ exists such that $$\label{toinf}
\|u_n\|_X\to +\infty\quad\mbox{ as } n\to +\infty$$ and for some $M>0$ it is ${\cal J}(u_n)\ge -M$, for all $n\in\mathbb N$. Setting $v_n:=\frac{u_n}{\|u_n\|_X}$ we have that $\|v_n\|_X=1$ and a subsequence exists such that $v_n\rightharpoonup v$ weakly in $X$. Actually, since $F$ is a finite dimensional subspace, it follows that $v_n\to v$ strongly in $X$ and almost everywhere in $\mathbb R^N$. Thus, in particular, we have $\|v\|_X=1$ and, setting $A:=\{x\in\mathbb R^N : v(x)\neq 0\}$, we have $|A|>0$. Now, for a.e. $x\in A$, we have $\displaystyle\lim_n |v_n(x)| =|v(x)|>0$ which, together with [\[toinf\]](#toinf){reference-type="eqref" reference="toinf"}, ensures that $|u_n(x)|\to +\infty$. Hence, from [\[Ggeq\]](#Ggeq){reference-type="eqref" reference="Ggeq"} we obtain that $$\label{G+inf}
\lim_n\frac{G(x, u_n(x))}{|u_n(x)|^{\mu-\alpha_1}} |v_n(x)|^{\mu-\alpha_1} = +\infty \quad\mbox{ for a.e. } x\in A,$$ where $\mu-\alpha_1\ge p$ (see Remark [Remark 12](#RemV1){reference-type="ref" reference="RemV1"}). Now, since [\[toinf\]](#toinf){reference-type="eqref" reference="toinf"} holds, we may suppose $\|u_n\|_X\ge 1$ for all $n\in\mathbb N$. Thus Remark [Remark 13](#Rmk<){reference-type="ref" reference="Rmk<"}, [\[Xnorm\]](#Xnorm){reference-type="eqref" reference="Xnorm"} and straightforward computations imply that $$\label{conti}
\begin{split}
&\frac1p \int_{\mathbb R^N} (A(x, u_n)|\nabla u_n|^p + V(x) |u_n|^p) dx\\
&\quad + \frac1q \int_{\mathbb R^N} (B(x, u_n)|\nabla u_n|^q + W(x) |u_n|^q) dx\\
&\quad\le \frac1p\left(a_1\|u_n\|_V^p +a_2|u_n|_{\infty}^{\mu-p-\alpha_1}\int_{\mathbb R^N}|\nabla u_n|^p dx\right)\\
&\quad+\frac1q\left(a_1\|u_n\|_W^q +a_2|u_n|_{\infty}^{\mu-q-\alpha_1}\int_{\mathbb R^N}|\nabla u_n|^q dx\right)\\
&\quad\le \frac1p (a_1\|u_n\|_X^p + a_2\|u_n\|_X^{\mu-\alpha_1})+\frac1q (a_1\|u_n\|_X^q + a_2\|u_n\|_X^{\mu-\alpha_1})\\
&\quad\le \frac2q (a_1+a_2)\|u_n\|_X^{\mu-\alpha_1}.
\end{split}$$ Summing up, from [\[funct\]](#funct){reference-type="eqref" reference="funct"}, [\[conti\]](#conti){reference-type="eqref" reference="conti"}, hypothesis $(g_2)$ and Fatou's Lemma we get $$\begin{split}
0&=\lim_n \frac{-M}{\|u_n\|_X^{\mu-\alpha_1}}\le\limsup_n\frac{{\cal J}(u_n)}{\|u_n\|_X^{\mu-\alpha_1}}\\
&\le\limsup_n\left(\frac2q(a_1+a_2) -\int_{\mathbb R^N}\frac{G(x, u_n(x))}{\|u_n\|_X^{\mu-\alpha_1}} dx\right)\\
&\le \frac2q(a_1+a_2) -\int_{\mathbb R^N}\liminf_n\frac{G(x, u_n(x))}{|u_n(x)|^{\mu-\alpha_1}}|v_n(x)|^{\mu-\alpha_1} dx.
\end{split}$$ Then $$\int_{\mathbb R^N}\liminf_n\frac{G(x, u_n(x))}{|u_n(x)|^{\mu-\alpha_1}}|v_n(x)|^{\mu-\alpha_1} dx\le\frac2q(a_1+a_2),$$ which contradicts [\[G+inf\]](#G+inf){reference-type="eqref" reference="G+inf"} as $|A|>0$. ◻
**Proposition 20**. *Assume that $(h_0)$--$(h_2)$, $(g_0)$--$(g_1)$, $(g_3)$ and $(P_1)$ hold. Then two positive constants $\rho$, $\beta > 0$ exist so that $$\label{geoN}
{\cal J}(u)\ge\beta \quad\mbox{ for all $u\in X$ with $\|u\|_V=\rho$}.$$*
*Proof.* Let $u\in X$ and, from $(g_3)$, take $\varepsilon >0$ such that $\bar{\alpha} + \varepsilon <\frac{\alpha_0}{\tau_{p, V}^p}$, i.e., $$\label{geoN1}
\alpha_0 - (\bar{\alpha}+\varepsilon) \tau_{p, V}^p > 0.$$ Then from $(h_2)$ with $\alpha_0 \le 1$ (see Remark [Remark 12](#RemV1){reference-type="ref" reference="RemV1"}), [\[funct\]](#funct){reference-type="eqref" reference="funct"}, [\[Glimcom\]](#Glimcom){reference-type="eqref" reference="Glimcom"} and [\[sob\]](#sob){reference-type="eqref" reference="sob"} we have that $$\begin{split}
{\cal J}(u)&\ge\frac{\alpha_0}{p}\int_{\mathbb R^N}(|\nabla u|^p +V(x)|u|^p) dx +\frac{\alpha_0}{q}\int_{\mathbb R^N}(|\nabla u|^q +W(x)|u|^q) dx \\
&-\frac{\bar{\alpha}+\varepsilon}{p}\int_{\mathbb R^N}|u|^p dx -\frac{c_{\varepsilon}}{s}\int_{\mathbb R^N} |u|^s dx\\
&\ge\ \frac{1}{p}\ \big(\alpha_0 - (\bar{\alpha}+\varepsilon) \tau_{p, V}^p\big) \|u\|_V^p
- \frac{c_{\varepsilon}}{s} \tau_{s, V}^s \|u\|_V^s.
\end{split}$$ Then [\[geoN1\]](#geoN1){reference-type="eqref" reference="geoN1"} and $p<s$ allow us to find two positive constants $\rho$ and $\beta$ so that [\[geoN\]](#geoN){reference-type="eqref" reference="geoN"} holds. ◻
Moreover, in order to prove Theorem [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"}, the following result is needed, too.
**Proposition 21**. *Suppose that assumptions $(h_0)$--$(h_2)$, $(g_0)$--$(g_1)$ and $(P_1)$--$(P_2)$ hold. Fixing any $\beta>0$, then $j\in\mathbb N$ and $\varrho$ sufficiently large exist such that $${\cal J}(u)\ge\beta \quad\mbox{ for all $u\in Z^j$ with $\|u\|_V=\varrho$},$$ with $Z^j$ as in [\[Zjdefn\]](#Zjdefn){reference-type="eqref" reference="Zjdefn"}.*
*Proof.* Let $\beta>0$ be fixed. Taking $u\in Z^j$, $j\in\mathbb N$, from [\[funct\]](#funct){reference-type="eqref" reference="funct"}, hypothesis $(h_2)$, [\[Gle\]](#Gle){reference-type="eqref" reference="Gle"}, [\[sob\]](#sob){reference-type="eqref" reference="sob"}, [\[2Zineq\]](#2Zineq){reference-type="eqref" reference="2Zineq"} and Hölder's inequality we deduce that $$\begin{split}
{\cal J}(u)&\ge\frac1p\left(\alpha_0- \frac{a}{\eta_{j+1}}\right)\|u\|_V^p + \frac{\alpha_0}{q}\|u\|_W^q-\frac{a}{s}\int_{\mathbb R^N}|u|^s dx\\
&\ge \frac1p\left(\alpha_0- \frac{a}{\eta_{j+1}}-\frac{a\tau_{p^*, V}^{s-r}}{s\eta_{j+1}^{\frac{r}{p}}}\|u\|_V^{s-p}\right)\|u\|_V^p
\end{split}$$ where $r$ is taken in such a way that $\frac{r}{p}+\frac{s-r}{p^*}=1$. We can choose $\varrho$ large enough such that $$\frac{\alpha_0}{2p}\varrho^p \;>\; \beta$$ and then, by [\[inc\]](#inc){reference-type="eqref" reference="inc"}, $j$ large enough so that $$\alpha_0- \frac{a}{\eta_{j+1}}-\frac{a\tau_{p^*, V}^{s-r}}{s\eta_{j+1}^{\frac{r}{p}}}\varrho^{s-p}>\frac{\alpha_0}{2},$$ whence we obtain the desired result. ◻
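For completeness, we point out that, when $p<N$, the Hölder step used above can be made explicit as follows: taking $r$ such that $\frac{r}{p}+\frac{s-r}{p^*}=1$, from Hölder's inequality, [\[2Zineq\]](#2Zineq){reference-type="eqref" reference="2Zineq"} and [\[sob\]](#sob){reference-type="eqref" reference="sob"} we get $$\int_{\mathbb R^N}|u|^s dx\ \le\ |u|_p^{r}\,|u|_{p^*}^{s-r}\ \le\ \frac{\tau_{p^*, V}^{s-r}}{\eta_{j+1}^{\frac{r}{p}}}\,\|u\|_V^{s}\qquad\mbox{ for all } u\in Z^j.$$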
Lastly, we provide a convergence result in the whole space $\mathbb R^N$.
**Proposition 22**. *Suppose that hypotheses $(P_1)$--$(P_2)$, $(h_0)$--$(h_3)$, $(g_0)$--$(g_2)$ hold. Taking any $\gamma\in\mathbb R$, we have that any $(CPS)_{\gamma}$--sequence $(u_n)_n\subset X$ is bounded in $W^{1, p}_V(\mathbb R^N)\cap W^{1, q}_{W}(\mathbb R^N)$. Furthermore, $u\in W^{1, p}_V(\mathbb R^N)\cap W^{1, q}_{W}(\mathbb R^N)$ exists such that, up to subsequences, the following limits hold $$\begin{aligned}
\label{conv1}
&u_n\rightharpoonup u \mbox{ weakly in } W_V^{1, p}(\mathbb R^N)\cap W^{1, q}_{W}(\mathbb R^N),\\
\label{conv2}
&u_n\to u \mbox{ strongly in } L^r(\mathbb R^N) \mbox{ for each } r\in [p, p^*[,\\
\label{conv3}
&u_n\to u \mbox{ a.e. in } \mathbb R^N.\end{aligned}$$*
*Proof.* Taking $\gamma\in\mathbb R$, let $(u_n)_n\subset X$ be a $(CPS)_{\gamma}$--sequence of ${\cal J}$, i.e., $$\label{beta0}
{\cal J}(u_n)\to\gamma\quad\mbox{ and }\quad \|d{\cal J}(u_n)\|_{X^{\prime}}(1+\|u_n\|_X)\to 0 \qquad\mbox{ as } n\to +\infty.$$ Thus, from [\[funct\]](#funct){reference-type="eqref" reference="funct"}, [\[diff\]](#diff){reference-type="eqref" reference="diff"}, [\[beta0\]](#beta0){reference-type="eqref" reference="beta0"}, assumptions $(h_2)$--$(h_3)$, $(g_2)$ and also [\[weightnorm\]](#weightnorm){reference-type="eqref" reference="weightnorm"}, direct computations imply that $$\begin{split}
&\mu\gamma +\varepsilon_n =\mu{\cal J}(u_n)-\left\langle d{\cal J}(u_n), u_n\right\rangle
\ge \frac{\alpha_0\alpha_1}{p}\int_{\mathbb R^N}|\nabla u_n|^p dx\\
&+\frac{\mu-p}{p}\int_{\mathbb R^N} V(x)|u_n|^p dx + \frac{\alpha_0\alpha_1}{q}\int_{\mathbb R^N}|\nabla u_n|^q dx +\frac{\mu-q}{q}\int_{\mathbb R^N} W(x) |u_n|^q dx.
\end{split}$$ It follows, in particular, that $(u_n)_n$ is bounded in the reflexive Banach space $W_V^{1, p}(\mathbb R^N)\cap W_W^{1, q}(\mathbb R^N)$ and then we have [\[conv1\]](#conv1){reference-type="eqref" reference="conv1"}. Since $$W_V^{1, p}(\mathbb R^N)\cap W_W^{1, q}(\mathbb R^N)\hookrightarrow W_V^{1, p}(\mathbb R^N),$$ we obtain [\[conv2\]](#conv2){reference-type="eqref" reference="conv2"} and [\[conv3\]](#conv3){reference-type="eqref" reference="conv3"} by using [\[comp\]](#comp){reference-type="eqref" reference="comp"} with $l=p$ and $U=V$. ◻
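For the reader's convenience, we note that the estimate in the proof above essentially amounts to the grouping $$\begin{split}
\mu{\cal J}(u_n)-\langle d{\cal J}(u_n), u_n\rangle =\ &\int_{\mathbb R^N}\frac{(\mu-p)A(x, u_n)-A_t(x, u_n)u_n}{p}\,|\nabla u_n|^p dx\\
&+\int_{\mathbb R^N}\frac{(\mu-q)B(x, u_n)-B_t(x, u_n)u_n}{q}\,|\nabla u_n|^q dx\\
&+\frac{\mu-p}{p}\int_{\mathbb R^N} V(x)|u_n|^p dx +\frac{\mu-q}{q}\int_{\mathbb R^N} W(x)|u_n|^q dx\\
&-\int_{\mathbb R^N}\big(\mu G(x, u_n)-g(x, u_n)u_n\big)\, dx,
\end{split}$$ where the first two integrands are bounded from below by $\frac{\alpha_0\alpha_1}{p}|\nabla u_n|^p$ and $\frac{\alpha_0\alpha_1}{q}|\nabla u_n|^q$ thanks to $(h_2)$--$(h_3)$, while $\mu G(x, u_n)-g(x, u_n)u_n\le 0$ a.e. in $\mathbb R^N$ by $(g_2)$.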
# Changeover through bounded domains {#sec_bounded}
From now on, let $\Omega$ denote an open bounded domain in $\mathbb R^N$. Thus, we define $$\label{Xlim}
X_{\Omega} = W_{0,V}^{1, p}(\Omega)\cap W_{0,W}^{1, q}(\Omega)\cap L^{\infty}(\Omega)$$ endowed with the norm $$\|u\|_{X_{\Omega}} =\|u\|_{\Omega,V} +\|u\|_{\Omega, W} +|u|_{\Omega,\infty} \quad\mbox{ for any } u\in X_{\Omega}$$ where, consistently with the notation used in [\[Xnorm2\]](#Xnorm2){reference-type="eqref" reference="Xnorm2"}, we set $$\|\cdot\|_{\Omega, V} = \|\cdot\|_{\Omega, V, p} \quad\mbox{ and }\quad \|\cdot\|_{\Omega, W} =\|\cdot\|_{\Omega, W, q}.$$ We denote by $X_{\Omega}^{\prime}$ the dual space of $X_{\Omega}$.
**Remark 23**. As $\Omega$ is a bounded domain, not only $\|u\|_{\Omega, p}$ and $|\nabla u|_{\Omega,p}$ are equivalent norms but also, if assumption $(P_3)$ holds, a constant $c_{\Omega} \ge 1$ exists such that $$\begin{split}
\|u\|_{\Omega,V, p}^p &= \int_{\Omega} |\nabla u|^p dx + \int_{\Omega} V(x) |u|^p dx\\
&\le \int_{\Omega} |\nabla u|^p dx +c_{\Omega} \int_{\Omega}|u|^p dx\le c_\Omega \|u\|_{\Omega, p}^p.
\end{split}$$ Hence, Remark [Remark 6](#Remb){reference-type="ref" reference="Remb"} implies that the norms $\|\cdot\|_{\Omega,V, p}$ and $\|\cdot\|_{\Omega, p}$ are equivalent, too. The same arguments apply for the norms $\|\cdot\|_{\Omega, W, q}$ and $\|\cdot\|_{\Omega, q}$.\
Moreover, since definition [\[Xlim\]](#Xlim){reference-type="eqref" reference="Xlim"} holds with $1<q \le p$ and $\Omega$ is a bounded domain, from now on we refer to $X_{\Omega}$ as $$X_{\Omega} =W_{0, V}^{1, p}(\Omega)\cap L^{\infty}(\Omega)$$ which can be equipped with the norm $$\|u\|_{X_{\Omega}} =\|u\|_{\Omega,p} +|u|_{\Omega,\infty} \quad\mbox{ for any } u\in X_{\Omega}.$$
Actually, since any function $u \in X_{\Omega}$ can be trivially extended to a function $\tilde{u}\in X$ just assuming $\tilde{u}(x) = 0$ for all $x \in \mathbb R^N \setminus \Omega$, then $$\label{Xequ}
\begin{split}
\|\tilde{u}\|_V &= \|u\|_{\Omega,V},
\quad \|\tilde{u}\|_{W} = \|u\|_{\Omega,W},\\
|\tilde{u}|_{\infty} &= |u|_{\Omega,\infty},\quad \
\|\tilde{u}\|_X = \|u\|_{X_\Omega}.
\end{split}$$ In this setting, [\[G0=0\]](#G0=0){reference-type="eqref" reference="G0=0"} and [\[funct\]](#funct){reference-type="eqref" reference="funct"} imply that the restriction of the functional ${\cal J}$ to $X_\Omega$, namely $${\cal J}_\Omega = {\cal J}|_{X_{\Omega}},$$ is such that $$\label{funOm}
\begin{split}
{\cal J}_{\Omega}(u) &= \frac1p \int_{\Omega} A(x, u)|\nabla u|^p dx +\frac1q \int_{\Omega}B(x, u)|\nabla u|^q dx\\
&\quad+ \frac1p\int_{\Omega} V(x) |u|^p dx +\frac1q\int_{\Omega} W(x)|u|^q dx-\int_{\Omega} G(x, u) dx.
\end{split}$$
**Remark 24**. We note that, setting $$\label{gtilde}
\tilde{g}(x,t) = g(x,t) - V(x) |t|^{p-2} t -W(x)|t|^{q-2} t$$ for a.e. $x \in\mathbb R^N$ and all $t\in\mathbb R$, from $(g_0)$ and $(P_1)$ we have that $\tilde{g}$ is a $\mathcal{C}^0$--Carathéodory function such that $$\label{gtilde1}
\tilde{G}(x,t) = \int_0^t \tilde{g}(x,s) ds =
G(x,t)- \frac1p V(x) |t|^{p} -\frac1q W(x)|t|^q
$$ for a.e. $x \in\mathbb R^N$ and all $t\in\mathbb R$. Then, by means of assumptions $(g_1)$ and $(P_3)$, some positive constants $\tilde{a}_1, \tilde{a}_2$ depending on $\Omega$ exist such that $$\label{gtildeup}
|\tilde{g}(x, t)|\le \tilde{a}_1 |t|^{s-1} +\tilde{a}_2 |t|^{q-1}
\quad\mbox{ for a.e. } x \in\Omega, \; \mbox{ all $t\in\mathbb R$.}$$ In particular, from [\[funOm\]](#funOm){reference-type="eqref" reference="funOm"} and [\[gtilde1\]](#gtilde1){reference-type="eqref" reference="gtilde1"} the functional ${\cal J}_{\Omega}$ can be written as $$\label{funOm1}
\begin{split}
{\cal J}_{\Omega}(u) = \ &\frac1p \int_{\Omega} A(x, u)|\nabla u|^p dx +\frac1q \int_{\Omega} B(x, u) |\nabla u|^q dx\\
&- \int_{\Omega} \tilde{G}(x,u) dx, \quad u\in X_{\Omega}.
\end{split}$$
Hence, as [\[gtildeup\]](#gtildeup){reference-type="eqref" reference="gtildeup"} holds, arguing as in [@CP2 Proposition 3.1], it follows that ${\cal J}_{\Omega}:X_{\Omega}\to\mathbb R$ is a $\mathcal{C}^1$ functional and for any $u$, $v\in X_{\Omega}$, its Fréchet differential in $u$ along the direction $v$ is given by $$\label{diffOm}
\begin{split}
&\langle d{\cal J}_{\Omega}(u), v\rangle =\int_{\Omega} A(x, u)|\nabla u|^{p-2} \nabla u\cdot\nabla v dx +\frac1p \int_{\Omega} A_t(x, u) v|\nabla u|^p dx \\
&\qquad+\int_{\Omega} B(x, u)|\nabla u|^{q-2} \nabla u\cdot\nabla v dx +\frac1q \int_{\Omega} B_t(x, u) v|\nabla u|^q dx\\
&\qquad -\int_{\Omega} \tilde{g}(x, u) v dx.
\end{split}$$
We now prove that the functional ${\cal J}_\Omega$ satisfies the $(wCPS)$ condition introduced in Definition [Definition 3](#wCPS){reference-type="ref" reference="wCPS"}.\
For simplicity, here and in the following we denote by $(\varepsilon_n)_n$ any sequence decreasing to $0$.
**Proposition 25**. *Under assumptions $(P_1)$--$(P_3)$, $(h_0)$--$(h_4)$ and $(g_0)$--$(g_2)$, the functional defined in [\[funOm\]](#funOm){reference-type="eqref" reference="funOm"} satisfies the $(wCPS)$ condition in $\mathbb R$.*
*Proof.* Taking $\gamma\in \mathbb R$, let $(u_n)_n \subset X_{\Omega}$ be a $(CPS)_\gamma$--sequence, i.e., $$\label{c1}
{\cal J}_{\Omega}(u_n) \to \gamma \quad \hbox{and}\quad \|d{\cal J}_{\Omega}(u_n)\|_{X'_{\Omega}}(1 + \|u_n\|_{X_{\Omega}}) \to 0\qquad
\mbox{if $\ n\to+\infty$.}$$ We want to prove that $u \in X_{\Omega}$ exists such that
- $\ \|u_n - u\|_{\Omega, p} \to 0$ (up to subsequences),
- $\ {\cal J}_{\Omega}(u) = \gamma$, $\;d{\cal J}_{\Omega}(u) = 0$.
The proof is divided into the following steps.
- $(u_n)_n$ is bounded in $W_0^{1, p}(\Omega)$; hence, up to subsequences, $u \in W^{1,p}_{0}(\Omega)$ exists such that if $n\to+\infty$, then $$\begin{aligned}
&&u_n \rightharpoonup u\ \hbox{weakly in $W^{1,p}_{0}(\Omega)$,}
\label{c2}\\
&&u_n \to u\ \hbox{strongly in $L^r(\Omega)$ for each $r \in [1,p^*[$,}
\label{c3}\\
&&u_n \to u\ \hbox{a.e. in $\Omega$}.
\label{c4}\end{aligned}$$
- $u \in L^\infty(\Omega)$.
- Having defined $T_k : \mathbb R\to \mathbb R$ such that $$T_k t = \left\{\begin{array}{ll}
t&\hbox{if $|t| \le k$}\\
k \frac t{|t|}&\hbox{if $|t| > k$}
\end{array}\right. ,$$ if $k \ge |u|_{\Omega,\infty} + 1$ then $$\|d{\cal J}_{\Omega}(T_k u_n)\|_{X'_{\Omega}} \to 0\quad \hbox{and}\quad
{\cal J}_{\Omega}(T_k u_n)\to\gamma \qquad \hbox{as $n \to +\infty$.}$$
- $\|T_k u_n - u\|_{\Omega, p} \to 0$ if $n\to+\infty$; hence, $(i)$ holds.
- $(ii)$ is satisfied.
*Step 1.* Let $\gamma\in\mathbb R$ be fixed and consider a sequence $(u_n)_n\subset X_{\Omega}$ satisfying [\[c1\]](#c1){reference-type="eqref" reference="c1"}. Thus, from Remark [Remark 23](#Rmbdd){reference-type="ref" reference="Rmbdd"} and the same arguments as in Proposition [Proposition 22](#convRN){reference-type="ref" reference="convRN"}, we infer that $$\begin{split}
\mu\gamma +\varepsilon_n \ge d_1\|u_n\|^p_{\Omega, p} +d_2\|u_n\|^q_{\Omega,q}.
\end{split}$$ It follows that $(u_n)_n$ is bounded in $W_0^{1, p}(\Omega)$ and then $u\in W_0^{1, p}(\Omega)$ exists so that, up to subsequences, [\[c2\]](#c2){reference-type="eqref" reference="c2"}--[\[c4\]](#c4){reference-type="eqref" reference="c4"} hold.\
*Step 2.* If $u\notin L^{\infty}(\Omega)$, either $$\label{sup_u}
\mathop{\mathrm{ess\; sup}}_{\Omega} u = +\infty$$ or $$\label{sup_menou}
\mathop{\mathrm{ess\; sup}}_{\Omega}(-u) = +\infty.$$ For example, suppose that [\[sup_u\]](#sup_u){reference-type="eqref" reference="sup_u"} holds. Then, for any fixed $k\in\mathbb N$, we have $$\begin{aligned}
\label{mpos}
|\Omega_{k}^{+}|> 0,\end{aligned}$$ with $\Omega_{k}^{+}:= \left\lbrace x\in\Omega \mid u(x) > k\right\rbrace$. Now, we consider the new function $R^{+}_k: t\in\mathbb R\mapsto R^{+}_k t\in\mathbb R$ such that $$R^{+}_k t = \begin{cases}
0 &\hbox{ if } t\leq k\\
t - k &\hbox{ if } t > k
\end{cases}.$$ From [\[c2\]](#c2){reference-type="eqref" reference="c2"}, we have $$R^{+}_k u_n \rightharpoonup R^{+}_k u \quad \mbox{ in } W^{1, p}_{0}(\Omega).$$ Thus, by the sequential weak lower semicontinuity of $\Vert\cdot\Vert_{\Omega, p}$, it follows that $$\int_{\Omega} \vert\nabla R^{+}_k u\vert^{p} dx
\leq \liminf_{n\to +\infty} \int_{\Omega} \vert\nabla R^{+}_k u_n\vert^{p} dx$$ i.e., $$\label{3.23}
\int_{\Omega_{k}^{+}} \vert\nabla u\vert^{p} dx
\leq \liminf_{n\to +\infty}\int_{\Omega_{n, k}^{+}} \vert\nabla u_n\vert^{p} dx,$$ with $\Omega_{n, k}^{+}:=\left\lbrace x\in\Omega\mid u_n(x) > k\right\rbrace$. On the other hand, by definition, we have $$\Vert R^{+}_k u_n\Vert_{X_{\Omega}} \leq \Vert u_n\Vert_{X_{\Omega}}.$$ Hence, [\[c1\]](#c1){reference-type="eqref" reference="c1"} implies that $$|\langle d{\cal J}_{\Omega}(u_n), R^+_k u_n\rangle|\to 0;$$ thus, from [\[mpos\]](#mpos){reference-type="eqref" reference="mpos"} an integer $n_k\in\mathbb N$ exists such that $$\label{m+mpos}
|\langle d{\cal J}_{\Omega}(u_n), R^+_k u_n\rangle|
< |\Omega^{+}_{k}| \quad \mbox{ for all } n\geq n_k.$$ We observe that, taking any $k \ge 1$ and $n \in \mathbb N$, from [\[gtilde\]](#gtilde){reference-type="eqref" reference="gtilde"}, [\[diffOm\]](#diffOm){reference-type="eqref" reference="diffOm"}, hypotheses $(P_1)$, $(h_2)$ and $(h_4)$ with $\alpha_2 <q<p$, we obtain $$\begin{split}
&\langle d{\cal J}_{\Omega}(u_n), R^+_k u_n\rangle=\int_{\Omega^+_{n, k}} \left(1-\frac{k}{u_n}\right)\left(A(x, u_n)+\frac1p A_t(x, u_n) u_n \right) |\nabla u_n|^p dx\\
&\qquad+\int_{\Omega^+_{n, k}} \frac{k}{u_n} A(x, u_n)|\nabla u_n|^p dx+\int_{\Omega^+_{n, k}} \left(1-\frac{k}{u_n}\right) V(x) |u_n|^p dx\\
&\qquad+\int_{\Omega^+_{n, k}} \left(1-\frac{k}{u_n}\right)\left(B(x, u_n)+\frac1q B_t(x, u_n) u_n \right) |\nabla u_n|^q dx\\
&\qquad+\int_{\Omega^+_{n, k}} \frac{k}{u_n} B(x, u_n)|\nabla u_n|^q dx +\int_{\Omega^+_{n, k}} \left(1-\frac{k}{u_n}\right) W(x) |u_n|^q dx\\
&\qquad -\int_{\Omega} g(x, u_n) R^+_k u_n dx\\
&\qquad\ge \frac{\alpha_0\alpha_2}{p}\int_{\Omega^+_{n, k}} |\nabla u_n|^p dx +\frac{\alpha_0\alpha_2}{q}\int_{\Omega^+_{n, k}} |\nabla u_n|^q dx -\int_{\Omega} g(x, u_n) R^+_k u_n dx\\
&\qquad\ge \frac{\alpha_0\alpha_2}{p}\int_{\Omega^+_{n, k}} |\nabla u_n|^p dx -\int_{\Omega} g(x, u_n) R^+_k u_n dx
\end{split}$$ which, together with [\[m+mpos\]](#m+mpos){reference-type="eqref" reference="m+mpos"}, implies $$\label{mu1lambda}
\frac{\alpha_0\alpha_2}{p} \int_{\Omega^{+}_{n,k}} \vert\nabla u_n\vert^{p} dx
\leq |\Omega^{+}_{k}| + \int_{\Omega} g(x, u_n) R^{+}_k u_n dx
\quad \mbox{ for all } n\geq n_k.$$ Now, from [\[c4\]](#c4){reference-type="eqref" reference="c4"} and hypothesis $(g_0)$, we have $$g(x, u_n) R^+_k u_n\to g(x, u) R^+_k u \quad\mbox{ a.e. in } \Omega.$$ Furthermore, since $|R^+_k u_n|\le |u_n|$ for a.e. $x\in\Omega$, by using assumption $(g_1)$ and [\[c3\]](#c3){reference-type="eqref" reference="c3"}, we infer that $h\in L^1(\Omega)$ exists so that $$|g(x, u_n)R^+_k u_n|\le d_3(|u_n|^s +|u_n|^p)\le h(x) \quad\mbox{ for a.e. } x\in\Omega$$ see [@Brezis Theorem 4.9]. Thus, Dominated Convergence Theorem applies and we have $$\label{limg}
\lim_{n\to +\infty}\int_{\Omega} g(x, u_n) R^+_k u_n dx =\int_{\Omega} g(x, u)R^+_k u dx.$$ Now, hypothesis $(g_1)$ together with [\[hpsotto\]](#hpsotto){reference-type="eqref" reference="hpsotto"}, [\[3.23\]](#3.23){reference-type="eqref" reference="3.23"}, [\[mu1lambda\]](#mu1lambda){reference-type="eqref" reference="mu1lambda"} and [\[limg\]](#limg){reference-type="eqref" reference="limg"} ensures that $$\int_{\Omega_k^+} |\nabla u|^p dx \le d_4\left(|\Omega^+_k| + \int_{\Omega^+_k} |u|^s dx\right).$$ Since this inequality holds for any $k\ge 1$, [@CP2 Lemma 4.5] applies and then $\displaystyle\mathop{\mathrm{ess\; sup}}_{\Omega} u$ is bounded, which contradicts [\[sup_u\]](#sup_u){reference-type="eqref" reference="sup_u"}.\
Suppose that [\[sup_menou\]](#sup_menou){reference-type="eqref" reference="sup_menou"} holds; it implies that, fixing any $k\in\mathbb N$, we have $$|\Omega^{-}_k|>0 \quad \hbox{with}\; \Omega^{-}_k = \lbrace x\in\Omega: u(x) < -k\rbrace.$$ In this case, by replacing function $R^{+}_k$ with $R^{-}_k:t\in\mathbb R\mapsto R^{-}_k t\in\mathbb R$ such that $$R^{-}_k t:=
\begin{cases}
0 &\hbox{ if } t\geq -k\\
t+k &\hbox{ if } t<-k
\end{cases},$$ we can reason as above so as to apply again [@CP2 Lemma 4.5], which yields a contradiction to [\[sup_menou\]](#sup_menou){reference-type="eqref" reference="sup_menou"}. Therefore, $u\in L^{\infty}(\Omega)$.\
The remaining *Steps 3, 4, 5* can be addressed by arguing as in [@CP2 Proposition 4.6], just taking $\tilde{g}$ as in [\[gtilde\]](#gtilde){reference-type="eqref" reference="gtilde"} and considering the functional as in [\[funOm1\]](#funOm1){reference-type="eqref" reference="funOm1"}. ◻
From now on, assume that $0\in\Omega$. Then $\varepsilon^*>0$ can be found so that $B_{\varepsilon^*}(0)\subset\Omega$.
**Remark 26**. Suppose that the hypotheses of Proposition [Proposition 19](#geo1){reference-type="ref" reference="geo1"} hold. Thus, fixing $\bar{u}\in X\setminus\{0\}$ such that ${\rm supp}\,\bar{u}\subset B_{\varepsilon^*}(0)$, a sufficiently large constant $\bar{\sigma}$ can be found such that $$\label{minbeta}
\|u^*\|_{\Omega, V}\ge\rho\quad\mbox{ and }\quad {\cal J}_{\Omega}(u^*)<\beta,$$ where $u^*=\bar{\sigma} \bar{u}$ and $\rho, \beta$ are as in Proposition [Proposition 20](#gerho){reference-type="ref" reference="gerho"}.
**Proposition 27**. *Under the assumptions of Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"}, there exists $\beta>0$ such that the functional ${\cal J}_{\Omega}$ admits at least a critical point $u_{\Omega}\in X_{\Omega}$ satisfying $$\beta\le{\cal J}_{\Omega}(u_{\Omega})\le\sup_{\sigma\in [0,1]}{\cal J}_{\Omega}(\sigma u^*).$$*
*Proof.* We observe that ${\cal J}_{\Omega}$ is a $\mathcal{C}^1$ functional satisfying the weak Cerami--Palais--Smale condition (see Propositions [Proposition 11](#C1){reference-type="ref" reference="C1"} and [Proposition 25](#wCPSbdd){reference-type="ref" reference="wCPSbdd"}) and by [\[G0=0\]](#G0=0){reference-type="eqref" reference="G0=0"} we have ${\cal J}_{\Omega}(0)=0$. Furthermore, from [\[Xequ\]](#Xequ){reference-type="eqref" reference="Xequ"}, [\[funOm\]](#funOm){reference-type="eqref" reference="funOm"} and Proposition [Proposition 20](#gerho){reference-type="ref" reference="gerho"} we infer the existence of $\rho, \beta>0$ which do not depend on $\Omega$ and satisfy $${\cal J}(u)\ge\beta \quad\mbox{ for all $u\in X_{\Omega}$, $\|u\|_{\Omega, V}=\rho$}.$$ Taking $u^*$ as in Remark [Remark 26](#Rmk*){reference-type="ref" reference="Rmk*"}, from [\[minbeta\]](#minbeta){reference-type="eqref" reference="minbeta"} we have that Theorem [Theorem 4](#absMPT){reference-type="ref" reference="absMPT"} applies to the functional ${\cal J}_{\Omega}$ in the Banach space $X_{\Omega}\hookrightarrow W_{0, V}^{1, p}(\Omega)$. Thus, a critical point $u_{\Omega}\in X_{\Omega}$ exists such that $${\cal J}_{\Omega}(u_{\Omega})=\inf_{\gamma\in\Gamma_{\Omega}}\sup_{\sigma\in [0, 1]}{\cal J}_{\Omega}(\gamma(\sigma))\ge\beta,$$ with $\Gamma_{\Omega}=\{\gamma\in C([0, 1], X_{\Omega}): \gamma(0)=0, \gamma(1) =u^*\}$. Since we assumed $B_{\varepsilon^*}(0)\subset\Omega$, the claimed result follows by noting that the function $$\gamma^*:\sigma\in[0, 1]\mapsto \sigma u^*\in X_{\Omega}$$ belongs to $\Gamma_{\Omega}$. ◻
**Proposition 28**. *Let the assumptions of Theorem [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"} hold. Thus, for any $\beta>0$, the functional ${\cal J}_{\Omega}$ admits at least a pair of symmetric critical points in $X_{\Omega}$, whose corresponding critical level belongs to $[\beta, \beta^*]$, with $\beta^*$ independent of $\Omega$.*
*Proof.* As already mentioned in Proposition [Proposition 27](#ExistOm){reference-type="ref" reference="ExistOm"}, the functional ${\cal J}_{\Omega}$ is of class $\mathcal{C}^1$ and satisfies the weak--Cerami--Palais--Smale condition. Moreover, as $g(x, \cdot)$ is odd, it follows from [\[funOm\]](#funOm){reference-type="eqref" reference="funOm"} that ${\cal J}_{\Omega}$ is even. Now, fixing $\beta>0$, Propositions [Proposition 19](#geo1){reference-type="ref" reference="geo1"} and [Proposition 21](#PropMolt){reference-type="ref" reference="PropMolt"} guarantee that Theorem [Theorem 5](#SMPT){reference-type="ref" reference="SMPT"} applies. Hence a pair $u_{\Omega}, -u_{\Omega}$ of critical points of ${\cal J}_{\Omega}$ exists such that $$\beta\le {\cal J}_{\Omega}(u_{\Omega})\le \sup_{u\in Y^j} {\cal J}(u),$$ where $Y^j$ is a finite dimensional subspace of $X$ such that [\[oplus\]](#oplus){reference-type="eqref" reference="oplus"} holds, with $j$ and $Z^j$ as in Proposition [Proposition 21](#PropMolt){reference-type="ref" reference="PropMolt"}. We note that $\displaystyle\sup_{u\in Y^j} {\cal J}(u)$ is a real constant independent of $\Omega$. ◻
# Proof of Theorems [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} and [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"} {#sec_proof}
Throughout this section, for each $k\in\mathbb N$ we consider the open ball $B_k$, its related Banach space $X_{B_k}$ as in [\[Xlim\]](#Xlim){reference-type="eqref" reference="Xlim"} and the functional $${\cal J}_k : u \in X_{B_k} \mapsto {\cal J}_k(u) = {\cal J}_{B_k}(u) \in \mathbb R$$ with ${\cal J}_{B_k}={\cal J}|_{X_{B_k}}$.
**Remark 29**. For the sake of convenience, if $u \in X_{B_k}$ we always consider its trivial extension, equal to $0$ in $\mathbb R^N \setminus B_k$. If we still denote such an extension by $u$, then we have that $u \in X$, too. Moreover, from [\[G0=0\]](#G0=0){reference-type="eqref" reference="G0=0"}, definitions [\[funct\]](#funct){reference-type="eqref" reference="funct"} and [\[funOm\]](#funOm){reference-type="eqref" reference="funOm"}, respectively [\[diff\]](#diff){reference-type="eqref" reference="diff"} and [\[diffOm\]](#diffOm){reference-type="eqref" reference="diffOm"}, it follows that ${\cal J}_k(u) = {\cal J}(u)$, respectively $$\langle d{\cal J}_k(u),v\rangle = \langle d{\cal J}(u),v\rangle
\quad \hbox{for all $v \in X_{B_k}$.}$$
Suppose that the assumptions in Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} are satisfied. Since for all $k \in \mathbb N$ Proposition [Proposition 27](#ExistOm){reference-type="ref" reference="ExistOm"} applies to ${\cal J}_k$ in $X_{B_k}$, from Remark [Remark 29](#rem00){reference-type="ref" reference="rem00"} a sequence $(u_k)_k \subset X$ exists such that for every $k \in \mathbb N$ we have
- $\ u_k \in X_{B_k}$ with $u_k = 0$ in $\mathbb R^N \setminus B_k$,
- $\ \displaystyle \beta\ \le\ {\cal J}(u_k)\ \le \ \sup_{\sigma\in [0,1]}{\cal J}(\sigma u^*)=: \beta^*$,
- $\langle d{\cal J}(u_k),v\rangle = 0$ for all $v \in X_{B_k}$,
where $\beta$ and $\beta^*$ are real positive constants which do not depend on $k$.\
The proof of Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} requires several steps. Firstly, we prove that the sequence $(u_k)_k$ is bounded in $X$. To this aim, the following result is needed (see [@CSS2 Lemma 5.6]).
**Lemma 30**. *Let $\Omega$ be an open bounded domain in $\mathbb R^N$ and consider $p$, $r$ so that $1 < p \le N$ and $p \le r < p^*$ (if $N = p$ we just require that $p^*$ is any number larger than $r$) and take $u \in W_0^{1,p}(\Omega)$. If $a^* >0$ and $m_0\in \mathbb N$ exist such that $$\label{LUuno}
\int_{\Omega^+_m}|\nabla u|^p dx \le a^* \left(m^r\ |\Omega^+_m| +
\int_{\Omega^+_m} |u|^r dx\right)\quad\hbox{for all $m \ge m_0$,}$$ with $\Omega^+_m = \{x \in \Omega:\ u(x) > m\}$, then $\displaystyle \mathop{\mathrm{ess\; sup}}_{\Omega} u$ is bounded from above by a positive constant which can be chosen depending only on $|\Omega^+_{m_0}|$, $N$, $p$, $r$, $a^*$, $m_0$, $\|u\|_{\Omega, p}$, or better by a positive constant which can be chosen so that it depends only on $N$, $p$, $r$, $a^*$, $m_0$ and $a_0^*$ for any $a_0^* > 0$ such that $$\max\{|\Omega^+_{m_0}|,\ \|u\|_{\Omega, p}\}\ \le\ a_0^*.$$ Vice versa, if inequality $$\int_{\Omega^-_m}|\nabla u|^p dx \le a^*\left(m^r\ |\Omega^-_m| +
\int_{\Omega^-_m} |u|^r dx\right) \quad\hbox{for all $m \ge m_0$,}$$ holds with $\Omega^-_m = \{x \in \Omega:\ u(x) < - m\}$, then $\displaystyle \mathop{\mathrm{ess\; sup}}_{\Omega}(-u)$ is bounded from above by a positive constant which can be chosen so that it depends only on $N$, $p$, $r$, $a^*$, $m_0$ and any constant which is greater than both $|\Omega^-_{m_0}|$ and $\|u\|_{\Omega, p}$.*
**Proposition 31**. *A constant $M_0 > 0$ exists such that $$\label{bddX}
\|u_k\|_{X}\le M_0\quad\mbox{ for all } k\in\mathbb N.$$*
*Proof.* From conditions $(i)$ and $(iii)$ we infer that $$\label{p0}
\left\langle d{\cal J}(u_k), u_k\right\rangle =0 \quad\mbox{ for all } k\in\mathbb N.$$ Thus from [\[p0\]](#p0){reference-type="eqref" reference="p0"}, taking $\mu$ as in assumption $(h_3)$ and arguing as in Proposition [Proposition 22](#convRN){reference-type="ref" reference="convRN"} we get $$\mu{\cal J}(u_k)\ =\ \mu{\cal J}(u_k)-\left\langle d{\cal J}(u_k), u_k\right\rangle\ge d_1\|u_k\|_{V}^p +d_2\|u_k\|_W^q,$$ which, combined with condition $(ii)$, ensures that $$\label{M1}
\|u_k\|_{V} +\|u_k\|_W\le d_3 \quad\mbox{ for all } k\in\mathbb N.$$ In particular, from Remark [Remark 6](#Remb){reference-type="ref" reference="Remb"}, [\[Xnorm2\]](#Xnorm2){reference-type="eqref" reference="Xnorm2"} and [\[M1\]](#M1){reference-type="eqref" reference="M1"} we get $$\label{ukBk}
\|u_k\|_{B_k, p}\le d_4 \quad\mbox{ for all } k\in\mathbb N.$$ Now, in order to obtain estimate [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"}, from definition [\[Xdefn\]](#Xdefn){reference-type="eqref" reference="Xdefn"} we need to prove that $(u_k)_k$ is a bounded sequence in $L^{\infty}(\mathbb R^N)$ too. To this aim, we note that for a fixed $k \in \mathbb N$ either $|u_k|_\infty \le 1$ or $\ |u_k|_\infty > 1$. If $\ |u_k|_\infty > 1$, then $$\mathop{\mathrm{ess\; sup}}_{\mathbb R^N} u_k > 1\quad \hbox{and/or} \quad
\mathop{\mathrm{ess\; sup}}_{\mathbb R^N} (-u_k) > 1.$$ Assume that $\displaystyle \mathop{\mathrm{ess\; sup}}_{\mathbb R^N} u_k > 1$ and consider the set $$B^{+}_{k,1} = \{x \in \mathbb R^N:\ u_k(x) > 1\}.$$ From condition $(i)$ we have that $$B^{+}_{k,1} \subset \ B_k,$$ $B^{+}_{k,1}$ is an open bounded domain such that not only $|B^{+}_{k,1}| > 0$ but also, by means of [\[WVem\]](#WVem){reference-type="eqref" reference="WVem"} we have $$\begin{split}
|B^{+}_{k,1}| &\le \int_{B^{+}_{k,1}} (|u_k|^p +|u_k|^q) dx\le \int_{\mathbb R^N} (|u_k|^p +|u_k|^q) dx\\
& \le \|u_k\|_p^p+\|u_k\|_q^q\le d_5 (\|u_k\|_V^p +\|u_k\|_W^q).
\end{split}$$ Hence estimate [\[M1\]](#M1){reference-type="eqref" reference="M1"} gives $$\label{measbdd}
|B^{+}_{k,1}|\le d_6.$$ For any $m \in \mathbb N$ we consider the new function $R^+_m : \mathbb R\to \mathbb R$ defined as $$t \mapsto R^+_mt = \left\{\begin{array}{ll}
0&\hbox{if $t \le m$}\\
t-m&\hbox{if $t > m$}
\end{array}\right. .$$ Clearly, for any $m \ge 1$, we have that $R^+_m u_k \in X_k$ and from condition $(iii)$, [\[diffOm\]](#diffOm){reference-type="eqref" reference="diffOm"} and hypotheses $(V_1)$, $(h_2)$, $(h_4)$ and same arguments as in Step 2 of Proposition [Proposition 25](#wCPSbdd){reference-type="ref" reference="wCPSbdd"}, it follows that $$0\ =\ \langle d{\cal J}(u_k), R^+_m u_k\rangle \ge \frac{\alpha_0\alpha_2}{p}\int_{B^+_{k,m}} |\nabla u_k|^p -\int_{\mathbb R^N} g(x, u_k) R^+_m u_k dx,$$ i.e., $$\label{ie}
\frac{\alpha_0\alpha_2}{p}\int_{B^+_{k,m}} |\nabla u_k|^p dx
\le \int_{\mathbb R^N} g(x, u_k)R^+_m u_k dx,$$ with $B^+_{k,m} = \{x\in\mathbb R^N: u_k(x)>m\}$. Clearly, we have $B^+_{k,m}\subseteq B^+_{k,1}$ so from [\[measbdd\]](#measbdd){reference-type="eqref" reference="measbdd"} we have that $$\label{OmM}
|B^+_{k,m}|\le d_7.$$ Since $(g_2)$ implies $g(x,t) > 0$ if $t>0$ for a.e. $x \in \mathbb R^N$, we have $g(x,u_k(x)) > 0$ for a.e. $x \in B_{k,m}^{+}$; so, hypothesis $(g_1)$ implies that $$\int_{\mathbb R^N} g(x, u_k)R^+_m u_k dx \le \int_{B^{+}_{k,m}} g(x, u_k)u_k dx\le 2a\int_{B^{+}_{k,m}} u_k^s dx,$$ which, together with [\[ie\]](#ie){reference-type="eqref" reference="ie"}, implies that $$\int_{B^{+}_{k,m}} |\nabla u_k|^p dx \le
d_8 \int_{B^{+}_{k,m}} u_k^s dx \quad \hbox{for all $m \ge 1$,}$$ with $d_8> 0$ independent of $m$ and $k$. Since [\[hpsotto\]](#hpsotto){reference-type="eqref" reference="hpsotto"} holds, Lemma [Lemma 30](#tecnico){reference-type="ref" reference="tecnico"} applies with $\Omega=B_{k}$ and $d_9 > 1$ exists such that $$\mathop{\mathrm{ess\; sup}}_{B_{k}} u_k \le d_9,$$ where the constant $d_9$ can be chosen independently of $k$, since it relies only upon $N, p, s, d_8$ and $a_0^*$, with $a_0^*\ge \max\{d_7, d_4\}$ (see [\[ukBk\]](#ukBk){reference-type="eqref" reference="ukBk"} and [\[OmM\]](#OmM){reference-type="eqref" reference="OmM"}).\
Similar arguments apply if $\displaystyle \mathop{\mathrm{ess\; sup}}_{\mathbb R^N} (-u_k) > 1$. Summing up, $d_{10} \ge 1$ exists such that $$|u_k|_{\infty} \le d_{10}\qquad \hbox{for all $k \in \mathbb N$}$$ and [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"} follows. ◻
From estimate [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"} and [\[Xdefn\]](#Xdefn){reference-type="eqref" reference="Xdefn"} it follows that $u_\infty \in W_V^{1, p}(\mathbb R^N)\cap W_W^{1, q}(\mathbb R^N)$ exists such that, up to subsequences, $$\label{weakV}
u_k\rightharpoonup u_\infty \quad\mbox{ weakly in } W_V^{1,p}(\mathbb R^N)\cap W_W^{1, q}(\mathbb R^N).$$ Furthermore, from $ii)$ in Corollary [Corollary 8](#Cor1){reference-type="ref" reference="Cor1"} it follows that, up to subsequences, $$\begin{aligned}
\label{al1}
&u_k\to u_{\infty} \quad\mbox{strongly in } L^r(\mathbb R^N) \mbox{ for each } r\ge p;\\ \label{aeRN}
&u_k\to u_{\infty} \quad\mbox{a.e. in }\; \mathbb R^N.\end{aligned}$$ On the other hand, from [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"} and [\[aeRN\]](#aeRN){reference-type="eqref" reference="aeRN"}, we have $$\label{inLinf}
u_{\infty}\in L^{\infty}(\mathbb R^N)$$ (see [@CSS2 Proposition 6.5] for the details). Hence, by definition [\[Xdefn\]](#Xdefn){reference-type="eqref" reference="Xdefn"}, $u_{\infty}\in X$.
**Proposition 32**. *We have that $$\label{strongwpr}
u_k \to u_\infty \quad \hbox{strongly in $W_V^{1,p}(\mathbb R^N)\cap W_W^{1, q}(\mathbb R^N)$.}$$*
*Proof.* Following an idea introduced in [@BMP], let us consider the real map $\psi(t) = t {\rm e}^{\eta t^2}$, where $\eta > (\frac{\beta_2}{2\beta_1})^2$ will be fixed once $\beta_1$, $\beta_2 > 0$ are chosen in a suitable way later on. By the choice of $\eta$, we have that $$\label{eq4}
\beta_1 \psi'(t) - \beta_2 |\psi(t)| > \frac{\beta_1} 2\qquad \hbox{for all $t \in \mathbb R$.}$$ Indeed, since $\psi'(t) = (1+2\eta t^2){\rm e}^{\eta t^2}$, we have $\beta_1 \psi'(t) - \beta_2 |\psi(t)| = {\rm e}^{\eta t^2}\big(\beta_1 + 2\beta_1\eta t^2 - \beta_2 |t|\big) \ge {\rm e}^{\eta t^2}\big(\beta_1 - \frac{\beta_2^2}{8\beta_1\eta}\big) > \frac{\beta_1}{2}$, as ${\rm e}^{\eta t^2}\ge 1$ and $\eta > (\frac{\beta_2}{2\beta_1})^2$. Defining $v_{k}=u_k - u_\infty$, [\[weakV\]](#weakV){reference-type="eqref" reference="weakV"} implies that $$\label{cc2}
v_{k} \rightharpoonup 0\quad \hbox{weakly in $W^{1,p}_V(\mathbb R^N)\cap W^{1, q}_W (\mathbb R^N)$,}$$ while from [\[al1\]](#al1){reference-type="eqref" reference="al1"}, respectively [\[aeRN\]](#aeRN){reference-type="eqref" reference="aeRN"}, it follows that $$\label{cc21}
v_{k} \to 0 \quad \hbox{strongly in $L^r(\mathbb R^N)$ for each $r\ge p$;}$$ respectively $$\label{cc22}
v_{k} \to 0 \quad\hbox{a.e. in $\mathbb R^N$.}$$ Moreover, from [\[inLinf\]](#inLinf){reference-type="eqref" reference="inLinf"} and [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"} we obtain that $$\label{cc23}
v_k \in X\quad \hbox{and}\quad |v_{k}|_\infty \le \bar{M}_0 \quad\hbox{for all $k \in\mathbb N$,}$$ with $\bar{M}_0 = M_0 + |u_\infty|_\infty$. Hence, from [\[cc23\]](#cc23){reference-type="eqref" reference="cc23"} we have $$\label{stim1}
|\psi(v_{k})| \le \psi(\bar{M}_0),\quad 0<\psi'(v_{k}) \le \psi'(\bar{M}_0) \qquad\hbox{a.e. in $\mathbb R^N$,}$$ while [\[cc22\]](#cc22){reference-type="eqref" reference="cc22"} implies $$\label{stim2}
\psi(v_{k}) \to 0, \quad
\psi'(v_{k}) \to 1 \qquad\hbox{a.e. in $\mathbb R^N$.}$$ Thus, since $\psi(v_k)\in X_{B_k}$, from $(iii)$ and [\[diff\]](#diff){reference-type="eqref" reference="diff"} we infer that $$\label{6.40}
\begin{split}
0\ = &\left\langle d{\cal J}(u_k), \psi(v_k)\right\rangle
=\int_{\mathbb R^N} \psi^{\prime}(v_k) A(x, u_k) |\nabla u_k|^{p-2} \nabla u_k\cdot\nabla v_k dx\\
&+\int_{\mathbb R^N} \psi^{\prime}(v_k) B(x, u_k) |\nabla u_k|^{q-2} \nabla u_k\cdot\nabla v_k dx\\
& +\frac1p \int_{\mathbb R^N} A_t(x, u_k) \psi(v_k) |\nabla u_k|^p dx\\
& +\frac1q \int_{\mathbb R^N} B_t(x, u_k) \psi(v_k) |\nabla u_k|^q dx +\int_{\mathbb R^N} V(x)|u_k|^{p-2} u_k \psi(v_k) dx\\
& +\int_{\mathbb R^N} W(x)|u_k|^{q-2} u_k \psi(v_k) dx -\int_{\mathbb R^N} g(x, u_k) \psi(v_k) dx.
\end{split}$$ We note that $(g_0)$--$(g_1)$, together with [\[stim1\]](#stim1){reference-type="eqref" reference="stim1"}, the Hölder inequality, [\[cont1\]](#cont1){reference-type="eqref" reference="cont1"} and [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"}, imply that $$\left\vert\int_{\mathbb R^N} g(x, u_k) \psi(v_k) dx\right\vert \le d_1(|u_k|_p^{p-1} |v_k|_p^p +|u_k|_s^{s-1}|v_k|_s^s)\le d_2(|v_k|_p^p +|v_k|_s^s)$$ which, together with [\[cc21\]](#cc21){reference-type="eqref" reference="cc21"}, ensures that $$\label{ip2}
\int_{\mathbb R^N} g(x,u_k) \psi(v_k) dx \to 0.$$ Moreover [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"}, assumptions $(h_1)$--$(h_2)$ and direct computations give $$\begin{split}
&\left\vert\int_{\mathbb R^N}A_t(x, u_k) \psi(v_{k}) |\nabla u_k|^p dx\right\vert
\le\ d_3\int_{\mathbb R^N} |\psi(v_{k})| |\nabla u_k|^{p} dx\\
&\qquad=d_3\Big(\int_{\mathbb R^N} |\psi(v_{k})| |\nabla u_k|^{p-2}\nabla u_k\cdot\nabla v_{k} dx\\
&\qquad\qquad\qquad\qquad\quad +\int_{\mathbb R^N}|\psi(v_{k})| |\nabla u_k|^{p-2}\nabla u_k\cdot\nabla u_\infty dx\Big)\\
&\qquad= d_3\int_{\mathbb R^N} |\psi(v_{k})| |\nabla u_k|^{p-2}\nabla u_k\cdot\nabla v_{k} dx +\varepsilon_k
\end{split}$$ since by the Hölder inequality and [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"} we have $$\label{tre3}
\begin{split}
&\int_{\mathbb R^N} |\psi(v_{k})| |\nabla u_k|^{p-2}\nabla u_k\cdot\nabla u_\infty dx\\
&\quad\le d_4\left(\int_{\mathbb R^N}|\psi(v_{k})|^p |\nabla u_\infty|^p dx\right)^{\frac1p}
\left(\int_{\mathbb R^N} |\nabla u_k|^p dx\right)^{\frac{p-1}{p}} \\
&\quad\le d_5\left(\int_{\mathbb R^N}|\psi(v_{k})|^p |\nabla u_\infty|^p dx\right)^{\frac1p}\
\to\ 0
\end{split}$$ as [\[stim1\]](#stim1){reference-type="eqref" reference="stim1"}, [\[stim2\]](#stim2){reference-type="eqref" reference="stim2"} allow us to apply the Dominated Convergence Theorem which ensures that $$\int_{\mathbb R^N}|\psi(v_{k})|^p |\nabla u_\infty|^p dx\to 0.$$ Same arguments can be rearranged to prove that $$\label{Bt6}
\begin{split}
&\left\vert\int_{\mathbb R^N}B_t(x, u_k) \psi(v_{k}) |\nabla u_k|^q dx\right\vert\\
&\qquad\le d_6\int_{\mathbb R^N} |\psi(v_{k})| |\nabla u_k|^{q-2}\nabla u_k\cdot\nabla v_{k} dx +\varepsilon_k.
\end{split}$$ Back to [\[6.40\]](#6.40){reference-type="eqref" reference="6.40"}, using estimates [\[ip2\]](#ip2){reference-type="eqref" reference="ip2"}--[\[Bt6\]](#Bt6){reference-type="eqref" reference="Bt6"} and setting $$h_k(x) =\psi^{\prime}(v_k(x))-d_7 |\psi(v_k(x))| \quad\mbox{ for a.e. } x\in\mathbb R^N$$ with $d_7 =\max\left\{\frac{d_3}{p}, \frac{d_6}{q}\right\}$, we obtain that $$\label{epsquasi}
\begin{split}
\varepsilon_k &\ge \int_{\mathbb R^N} A(x, u_k) h_k(x) |\nabla u_k|^{p-2}\nabla u_k\cdot\nabla v_k dx\\
&\quad +\int_{\mathbb R^N} B(x, u_k) h_k(x) |\nabla u_k|^{q-2}\nabla u_k\cdot\nabla v_k dx\\
&\quad+\int_{\mathbb R^N} V(x)|u_k|^{p-2} u_k \psi(v_k) dx +\int_{\mathbb R^N} W(x)|u_k|^{q-2} u_k \psi(v_k) dx.
\end{split}$$ On the other hand, taking $\beta_1=1$ and $\beta_2 =d_7$, from [\[stim1\]](#stim1){reference-type="eqref" reference="stim1"}, [\[stim2\]](#stim2){reference-type="eqref" reference="stim2"} and direct computations we have that $$\label{stima10}
h_{k}(x) \to 1 \ \hbox{a.e. in $\mathbb R^N$} \
\hbox{and} \
|h_{k}(x)| \le \psi'(\bar{M}_0) + d_7|\psi(\bar{M}_0)|\ \hbox{a.e. in $\mathbb R^N$,}$$ while from [\[eq4\]](#eq4){reference-type="eqref" reference="eq4"} we deduce $$\label{frac12}
h_{k}(x)>\frac12\quad\mbox{ a.e. in $\mathbb R^N$}.$$ Back to [\[epsquasi\]](#epsquasi){reference-type="eqref" reference="epsquasi"}, direct computations give $$\label{quasi2}
\begin{split}
\varepsilon_{k}\ \ge\ & +\int_{\mathbb R^N}(h_k A(x, u_k) -A(x, u_{\infty}))|\nabla u_{\infty}|^{p-2}\nabla u_{\infty}\cdot\nabla v_k \,dx\\
&+\int_{\mathbb R^N} h_k A(x, u_k)(|\nabla u_k|^{p-2}\nabla u_k -|\nabla u_{\infty}|^{p-2}\nabla u_{\infty})\cdot\nabla v_k\, dx\\
& +\int_{\mathbb R^N}(h_k B(x, u_k) -B(x, u_{\infty}))|\nabla u_{\infty}|^{q-2}\nabla u_{\infty}\cdot\nabla v_k\, dx\\
&+\int_{\mathbb R^N} h_k B(x, u_k)(|\nabla u_k|^{q-2}\nabla u_k -|\nabla u_{\infty}|^{q-2}\nabla u_{\infty})\cdot\nabla v_k\, dx\\
&+\int_{\mathbb R^N} V(x)(|u_k|^{p-2} u_k -|u_{\infty}|^{p-2} u_{\infty})\psi(v_k) dx\\
& +\int_{\mathbb R^N} V(x) |u_{\infty}|^{p-2} u_{\infty} \psi(v_k)\,dx\\
&+\int_{\mathbb R^N} W(x)(|u_k|^{q-2} u_k -|u_{\infty}|^{q-2} u_{\infty})\psi(v_k) dx\\
& +\int_{\mathbb R^N} W(x) |u_{\infty}|^{q-2} u_{\infty} \psi(v_k)\,dx
\end{split}$$ where we have used [\[cc2\]](#cc2){reference-type="eqref" reference="cc2"} to infer that $$\begin{split}
\int_{\mathbb R^N} A(x,u_\infty) |\nabla u_\infty|^{p-2}\nabla u_\infty \cdot\nabla v_{k}dx\ \to\ 0,\\[3pt]
\int_{\mathbb R^N} B(x,u_\infty) |\nabla u_\infty|^{q-2}\nabla u_\infty \cdot\nabla v_{k}dx\ \to\ 0.
\end{split}$$ On the other hand, from Hölder's inequality, we obtain $$\label{hkDC}
\begin{split}
&\int_{\mathbb R^N} |(h_k A(x, u_k) -A(x,u_\infty))|\nabla u_\infty|^{p-2}\nabla u_\infty\cdot\nabla v_{k}| dx\\
&\quad\le\left(\int_{\mathbb R^N}|h_{k} A(x, u_k) -A(x,u_\infty)|^{\frac{p}{p-1}}|\nabla u_\infty|^p dx\right)^{\frac{p-1}{p}}
\|v_{k}\|_p\ \to\ 0,
\end{split}$$ since assumption $(h_0)$, [\[aeRN\]](#aeRN){reference-type="eqref" reference="aeRN"} and [\[stima10\]](#stima10){reference-type="eqref" reference="stima10"} imply that $$h_{k}A(x,u_k) -A(x,u_\infty) \to 0 \quad \hbox{a.e. in $\mathbb R^N$,}$$ while [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"}, [\[stima10\]](#stima10){reference-type="eqref" reference="stima10"}, [\[inLinf\]](#inLinf){reference-type="eqref" reference="inLinf"} and $(h_1)$ give $$|h_{k} A(x, u_k) -A(x,u_\infty)|^{\frac{p}{p-1}}|\nabla u_\infty|^p\le d_8 |\nabla u_\infty|^p
\quad \hbox{a.e. in $\mathbb R^N$.}$$ Thus [\[hkDC\]](#hkDC){reference-type="eqref" reference="hkDC"} follows by [\[cc21\]](#cc21){reference-type="eqref" reference="cc21"} and the Dominated Convergence Theorem.\
Moreover, we have $$\label{four}
\begin{split}
&\int_{\mathbb R^N} V(x) |u_{\infty}|^{p-2} u_{\infty} \psi(v_k)dx\to 0\\
& \int_{\mathbb R^N} W(x) |u_{\infty}|^{q-2} u_{\infty} \psi(v_k)dx\to 0.
\end{split}$$ In fact, via the identification $$\varphi\in L_V^p(\mathbb R^N)\mapsto\int_{\mathbb R^N} V(x)|u_{\infty}|^{p-2} u_{\infty} \, \varphi\, dx\in\mathbb R$$ and Hölder's inequality, we obtain that $$\begin{split}
\left\vert \int_{\mathbb R^N} V(x)|u_{\infty}|^{p-2} u_{\infty} \, \varphi\, dx\right\vert &\le\left(\int_{\mathbb R^N} V(x)|\varphi|^p dx\right)^{\frac1p}\left(\int_{\mathbb R^N} V(x) |u_{\infty}|^p dx\right)^{\frac{p-1}{p}}\\[3pt]
&\le d_9 \|\varphi\|_{V}^{p-1},
\end{split}$$ i.e. $V(x)|u_{\infty}|^{p-2} u_{\infty} \in (L_V^p(\mathbb R^N))^{\prime}$. Since the previous argument can be rephrased to prove that $W(x)|u_{\infty}|^{q-2} u_{\infty} \in (L_W^q(\mathbb R^N))^{\prime}$, we deduce from [\[cc2\]](#cc2){reference-type="eqref" reference="cc2"} that [\[four\]](#four){reference-type="eqref" reference="four"} holds.\
Finally, summing up, from [\[frac12\]](#frac12){reference-type="eqref" reference="frac12"}--[\[four\]](#four){reference-type="eqref" reference="four"}, assumption $(P_1)$, the definition of $\psi(v_k)$ and the strong convexity of the power functions with exponents $p, q>1$, we have $$\begin{split}
\varepsilon_k &\ge \frac{\alpha_0}{2}\int_{\mathbb R^N} \big[(|\nabla u_k|^{p-2}\nabla u_k -|\nabla u_\infty|^{p-2}\nabla u_\infty)\\
&\qquad\qquad\qquad +(|\nabla u_k|^{q-2}\nabla u_k -|\nabla u_\infty|^{q-2}\nabla u_\infty)\big]\cdot\nabla v_{k} dx\\
&+\int_{\mathbb R^N} V(x)(|u_k|^{p-2} u_k -|u_{\infty}|^{p-2} u_{\infty}) v_k dx\\
&\qquad\qquad\qquad +\int_{\mathbb R^N} W(x)(|u_k|^{q-2} u_k -|u_{\infty}|^{q-2} u_{\infty}) v_k dx\ge 0.
\end{split}$$ Hence we infer that both $$\begin{split}
&\int_{\mathbb R^N} (|\nabla u_k|^{p-2}\nabla u_k -|\nabla u_\infty|^{p-2}\nabla u_\infty)\cdot\nabla(u_k-u_{\infty}) dx\to 0,\\
&\int_{\mathbb R^N}(|\nabla u_k|^{q-2}\nabla u_k
-|\nabla u_\infty|^{q-2}\nabla u_\infty)\cdot\nabla (u_k -u_{\infty}) dx\to 0
\end{split}$$ and $$\begin{split}
&\int_{\mathbb R^N} V(x)(|u_k|^{p-2} u_k -|u_{\infty}|^{p-2} u_{\infty})(u_k-u_{\infty}) dx \to 0, \\
&\int_{\mathbb R^N} W(x)(|u_k|^{q-2} u_k -|u_{\infty}|^{q-2} u_{\infty})(u_k-u_{\infty}) dx\to 0.
\end{split}$$ Thus [\[strongwpr\]](#strongwpr){reference-type="eqref" reference="strongwpr"} holds. ◻
**Proposition 33**. *One has $$\label{crit0}
\langle d{\cal J}(u_\infty),\varphi\rangle = 0 \quad \hbox{for all $\varphi \in C_c^\infty(\mathbb R^N)$}$$ with $C_c^\infty(\mathbb R^N) = \{\varphi \in C^\infty(\mathbb R^N):\ \mathop{\mathrm{supp}}\varphi\subset\subset \mathbb R^N\}$. Hence, $d{\cal J}(u_\infty)= 0$ in $X$.*
*Proof.* Let $\varphi \in C_c^\infty(\mathbb R^N)$. Then a radius $R \ge 1$ exists such that $\mathop{\mathrm{supp}}\varphi \subset B_R$. For all $k\ge R$ we have that $\varphi \in X_k$, so $(iii)$ gives $\langle d{\cal J}(u_k),\varphi\rangle = 0$. Moreover, [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"}, [\[aeRN\]](#aeRN){reference-type="eqref" reference="aeRN"} and Propositions [Proposition 32](#limwpr){reference-type="ref" reference="limwpr"} and [Proposition 11](#C1){reference-type="ref" reference="C1"} ensure that $$\|d{\cal J}(u_k)-d{\cal J}(u_{\infty})\|_{X^{\prime}}\to 0\quad\mbox{ as $k\to +\infty$,}$$ hence [\[crit0\]](#crit0){reference-type="eqref" reference="crit0"} holds. ◻
*Proof of Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"}.* Thanks to Proposition [Proposition 33](#critt){reference-type="ref" reference="critt"}, the statement of Theorem [Theorem 15](#ExistMain){reference-type="ref" reference="ExistMain"} is true if we prove $u_\infty \not\equiv 0$.\
Suppose by contradiction that $u_\infty=0$. Thus, from assumption $(g_1)$ and [\[al1\]](#al1){reference-type="eqref" reference="al1"} we have that $$\label{gto0}
\int_{\mathbb R^N} g(x, u_k) u_k dx\ \to\ 0,$$ which, together with assumption $(g_2)$, ensures that $$\label{Gto0}
\int_{\mathbb R^N} G(x, u_k) dx \to 0.$$ Furthermore, from [\[diff\]](#diff){reference-type="eqref" reference="diff"}, $(iii)$, $(h_4)$ and also $(h_2)$ it follows that $$\begin{split}
0 &= \left\langle d{\cal J}(u_k), u_k\right\rangle =\int_{\mathbb R^N} A(x, u_k)|\nabla u_k|^p dx
+\frac1p\int_{\mathbb R^N} A_t(x, u_k) u_k |\nabla u_k|^p dx\\
&\quad +\int_{\mathbb R^N} V(x) |u_k|^p dx +\int_{\mathbb R^N} B(x, u_k)|\nabla u_k|^q dx\\
&\quad +\frac1q\int_{\mathbb R^N} B_t(x, u_k) u_k |\nabla u_k|^q dx
+\int_{\mathbb R^N} W(x) |u_k|^q dx-\int_{\mathbb R^N} g(x, u_k) u_k dx\\
& \ge \frac{\alpha_2\alpha_0}{p}\int_{\mathbb R^N} |\nabla u_k|^p dx+ \int_{\mathbb R^N} V(x) |u_k|^p dx + \frac{\alpha_2\alpha_0}{q}\int_{\mathbb R^N} |\nabla u_k|^q dx\\
&\quad +\int_{\mathbb R^N} W(x) |u_k|^q dx-\int_{\mathbb R^N} g(x, u_k) u_k dx,
\end{split}$$ which implies that $$\label{weigh0}
\|u_k\|_V^p +\|u_k\|_W^q \to 0\quad\mbox{ as } k\to +\infty$$ by means of [\[weightnorm\]](#weightnorm){reference-type="eqref" reference="weightnorm"} and [\[gto0\]](#gto0){reference-type="eqref" reference="gto0"}. Hence, from [\[funct\]](#funct){reference-type="eqref" reference="funct"}, [\[Gto0\]](#Gto0){reference-type="eqref" reference="Gto0"}, [\[bddX\]](#bddX){reference-type="eqref" reference="bddX"}, assumption $(h_1)$, and [\[weigh0\]](#weigh0){reference-type="eqref" reference="weigh0"} we infer $${\cal J}(u_k) \le d_1 (\|u_k\|_V^p +\|u_k\|_W^q) - \int_{\mathbb R^N} G(x, u_k) dx \to 0$$ in contradiction with the estimate in $(ii)$. ◻
*Proof of Theorem [Theorem 16](#MoltMain){reference-type="ref" reference="MoltMain"}.* Fix $\beta>0$. For any $k\in\mathbb N$, let $u_k$ be the critical point established via Proposition [Proposition 28](#MoltOm){reference-type="ref" reference="MoltOm"}. Thus, the sequence $(u_k)_k\subset X$ meets conditions $(i)$--$(iii)$ and, in particular, $$\beta\le {\cal J}(u_k)\le \beta^*,$$ where $\beta^*$ is independent of $k$. Arguing as above, it follows that there exists $u_{\infty}\in X$ which is a critical point of the functional ${\cal J}$. Clearly, Proposition [Proposition 11](#C1){reference-type="ref" reference="C1"} implies $${\cal J}(u_k)\to {\cal J}(u_{\infty}),$$ hence ${\cal J}(u_{\infty})\ge\beta$. Summing up, for any $\beta>0$, a critical point $u_{\infty}\in X$ exists with critical level greater than or equal to $\beta$. By the arbitrariness of $\beta$, we obtain the claimed result. ◻
The research that led to the present paper was partially supported by MIUR--PRIN project "Qualitative and quantitative aspects of nonlinear PDEs" (2017JPCAPN[ ]{.ul}005) and *Fondi di Ricerca di Ateneo* 2017/18 "Problemi differenziali non lineari".\
Both the authors are members of the Research Group INdAM--GNAMPA.
C.O. Alves and G.M. Figueiredo, *Multiplicity and concentration of positive solutions for a class of quasilinear problems*, Adv. Nonlinear Stud. **11** (2011), 265--294.
A. Ambrosetti and P.H. Rabinowitz, *Dual variational methods in critical point theory and applications*, J. Funct. Anal. **14** (1973), 349--381.
V. Ambrosio, *A Kirchhoff Type Equation in $\mathbb R^N$ involving the fractional $(p,q)$--Laplacian*, J. Geom. Anal., **32** (2022).
M. Badiale, M. Guida and S. Rolando, *Compactness and existence results for the $p$--Laplace equation*, J. Math. Anal. Appl. **451** (2017), 345--370.
R. Bartolo, A. M. Candela and A. Salvatore, *On a class of superlinear $(p, q)$--Laplacian type equations on $\mathbb R^N$*, J. Math. Anal. Appl. **438** (2016), 29--41.
T. Bartsch and Z.Q. Wang, *Existence and multiplicity results for some superlinear elliptic problems on $\mathbb R^N$*, Comm. Partial Differential Equations **20** (1995), 1725--1741.
V. Benci, P. D'Avenia, D. Fortunato and L. Pisani, *Solitons in several space dimensions: Derrick's problem and infinitely many solutions*, Arch. Ration. Mech. Anal. **154** (2000), 297--324.
V. Benci and D. Fortunato, *Discreteness conditions of the spectrum of Schrödinger operators*, J. Math. Anal. Appl. **64** (1978), 695--700.
L. Boccardo, F. Murat and J.P. Puel, *Existence of bounded solutions for nonlinear elliptic unilateral problems*, Ann. Mat. Pura Appl. IV Ser. **152** (1988), 183--196.
H. Brezis, "Functional Analysis, Sobolev Spaces and Partial Differential Equations", Universitext **XIV**, Springer, New York, 2011.
A.M. Candela and G. Palmieri, *Infinitely many solutions of some nonlinear variational equations*, Calc. Var. Partial Differential Equations **34** (2009), 495--530.
A. M. Candela and G. Palmieri, *Some abstract critical point theorems and applications*. In: Dynamical Systems, Differential Equations and Applications (X. Hou, X. Lu, A. Miranville, J. Su & J. Zhu Eds), *Discrete Contin. Dynam. Syst.* **Suppl. 2009** (2009), 133-142.
A.M. Candela and A. Salvatore, *Existence of radial bounded solutions for some quasilinear elliptic equations in $\mathbb R^N$*, Nonlinear Anal. **191** (2020), Article 111625 (26 pp).
A.M. Candela, A. Salvatore and C. Sportelli, *Bounded solutions for quasilinear modified Schrödinger equations*, Calc. Var. Partial Differential Equations **61** (2022), Article 220, doi.org/10.1007/s00526-022-02328-y.
A.M. Candela and C. Sportelli, *Soliton solutions for quasilinear modified Schrödinger equations in applied sciences*, Discrete Contin. Dyn. Syst. Ser. S **15**, Issue 12 (2022), 3557--3570, DOI:10.3934/dcdss.2022121.
G. Cerami, G. De Villanova and S. Solimini, *Infinitely many bound states for some nonlinear scalar field equations,* Calc. Var. Partial Differential Equations **23** (2005), 139--168.
G. Cerami, D. Passaseo and S. Solimini, *Nonlinear scalar field equations: Existence of a positive solution with infinitely many bumps*, Ann. Inst. H. Poincaré Anal. Non Linéaire **32** (2015), 23--40.
L. Cherfils and V. Il'yasov, *On the stationary solutions of generalized reaction diffusion equations with p&q-Laplacian*, Commun. Pure Appl. Anal. **4** (2005), 9--22.
Y. Ding and A. Szulkin, *Bound states for semilinear Schrödinger equations with sign-changing potential*, Calc. Var. Partial Differential Equations, **29**, (2007), 397--419.
C. He and G. Li, *The regularity of weak solutions to nonlinear scalar field elliptic equations containing $(p,q)$--Laplacians*, Ann. Acad. Sci. Fenn. Math. **33** (2008), 337--371.
D. Mugnai and N.S. Papageorgiou, *Wang's multiplicity result for superlinear $(p,q)$--equations without the Ambrosetti--Rabinowitz condition.*, Trans. Amer. Math. Soc. **366**, (2014), 4919--4937.
N.S. Papageorgiou, V.D. Rădulescu and D.D. Repovš, *On a class of parametric $(p,2)$--equations*, Appl. Math. Optim. **75** (2017), 193--228.
A. Pomponio and T. Watanabe, *Some quasilinear elliptic equations involving multiple $p$--Laplacians*, Indiana Univ. Math. J. **67** (2018), 2199--2224.
P.H. Rabinowitz, *On a class of nonlinear Schrödinger equations*, Z. Angew. Math. Phys. **43** (1992), 270--291.
C. Liu and Y. Zheng, *Existence of nontrivial solutions for $p$--Laplacian equations in $\mathbb R^N$*, J. Math. Anal. Appl. **380** (2011), 669--679.
[^1]: The research that led to the present paper was partially supported by MIUR--PRIN project "Qualitative and quantitative aspects of nonlinear PDEs" (2017JPCAPN[ ]{.ul}005), *Fondi di Ricerca di Ateneo* 2017/18 "Problemi differenziali non lineari".\
Both the authors are members of the Research Group INdAM--GNAMPA
---
abstract: |
In this note, we aim to extend the concepts of the Fontaine-Faltings module and Higgs-de Rham flow to parabolic versions. The crucial aspect of our generalization lies in the construction of parabolic inverse Cartier functors. The twisted versions discussed in Sun-Yang-Zuo's work can be viewed as a special case, wherein the parabolic weights are equal at every infinity point. We note that a modulo $p$ version of parabolic Higgs-de Rham flow was previously established in [@KrSh20].
address:
- School of Mathematical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, PR China
- School of Mathematics and Statistics, Wuhan University, Luojiashan, Wuchang, Wuhan, Hubei, 430072, P.R. China.; Institut für Mathematik, Universität Mainz, Mainz 55099, Germany
author:
- Jinbang Yang
- Kang Zuo
title: Parabolic Fontaine-Faltings modules and parabolic Higgs-de Rham flows
---
# **Parabolic structures** {#sec_main_para}
In this section, we undertake a study of de Rham bundles and Higgs bundles equipped with parabolic structures. We adopt the terminology and notation for parabolic structures established in [@IySi07]; see also [@KrSh20].
## Introduction to Parabolic vector bundles
Our intention is not to introduce parabolic objects on arbitrary spaces. Although the methods and results remain applicable to a broader range of scenarios, for the sake of brevity, we restrict our focus to the following specific spaces $(Y,D_Y)/S$.
Let $p$ be an odd prime number and let $S$ be our base space. We assume it is one of the following spaces.
(i) $S=\mathop{\mathrm{Spec}}(K)$, where $K$ is a field of characteristic $0$;
(ii) $S=\mathop{\mathrm{Spec}}(W_n(k))$, where $W_n(k)$ is a ring of truncated Witt vectors with coefficients in a finite field $k$;
(iii) $S=\mathop{\mathrm{Spec}}({\mathcal O}_K)$, where $K$ is an unramified $p$-adic number field.
(iv) $S=\mathop{\mathrm{Spf}}({\mathcal O}_K)$, where $K$ is an unramified $p$-adic number field.
For a smooth curve $Y$ over $S$ (or a smooth formal curve over $S$ if $S$ is a formal scheme), we define the reduced divisor $D_Y$ by $n$ $S$-sections $x_i\colon S\rightarrow Y$, $i=1,\cdots,n$, that do not intersect each other. We set $U_Y := Y - D_Y$ and denote by $j_Y$ the open immersion $j_Y\colon U_Y\rightarrow Y$. The irreducible components of $D_Y$ are denoted by $D_{Y,i}$, $i=1,2,\cdots,n$, and we have $D_Y = \bigcup_{i=1}^n D_{Y,i}$.
We set $\Omega_{Y/S}^1$ to be the sheaf of relative $1$-forms and $\Omega_{Y/S}^1(\log D_Y)$ to be the sheaf of relative $1$-forms with logarithmic poles along $D_Y$. By the smoothness of $Y$ over $S$, both of these sheaves are line bundles over $Y$.
### Parabolic vector bundles
In this section, we introduce the concept of parabolic vector bundles, which is based on [@IySi07].
**Definition 1**. A *parabolic sheaf* on $(Y,D_Y)/S$ is a collection of torsion-free coherent sheaves $V=\{V_\alpha\}$ on $Y$, which are flat over $S$, indexed by multi-indices $\alpha=(\alpha_1,\cdots,\alpha_n)\in {\mathbb Q}^n$. The sheaves $\{V_\alpha\}$ are subject to the following conditions:
- (inclusion) $V_\alpha \hookrightarrow V_\beta$ with cokernel flat over $S$, where $\alpha\leq \beta$ (i.e. where $\alpha_i\leq \beta_i$ for all $i$).
- (normalization) $V_{\alpha+\delta^i} = V_\alpha(D_{Y,i})$ where $\delta^i=(0,\cdots,1,\cdots,0)\in{\mathbb Q}^n$.
- (semicontinuity) for any given $\alpha$ there exists a constant $c=c(\alpha)>0$ such that $V_{\alpha+\epsilon} = V_\alpha$ for $\epsilon=(\epsilon_1,\cdots,\epsilon_n)\in{\mathbb Q}^n$ with $0\leq\epsilon_i\leq c$.
A *morphism between two parabolic sheaves* from $F$ to $F'$ is a collection of compatible morphisms of sheaves $f_\alpha\colon F_\alpha\rightarrow F'_\alpha$.
*Remark 1*.
1. The second condition implies that the quotient sheaves $F_\alpha/F_\beta$ for $\beta\leq \alpha$ are supported at $D_Y$.
2. The third condition means that the structure is determined by the sheaves $F_\alpha$ for a finite collection of indices $\alpha$ with $0\leq\alpha_i<1$.
3. The extension $j_{Y*}\left( j_Y^* F_\beta\right)$ does not depend on the choice of $\beta\in{\mathbb Q}^n$; we denote it by $F_\infty$. Note that the $F_\alpha$ may all be considered as subsheaves of $F_\infty$.
**Definition 2**. Let $F$ and $F'$ be two parabolic sheaves. Denote $$(F\otimes F')_{\alpha}:= \sum_{\beta+\gamma = \alpha} F_\beta \otimes F'_{\gamma}\subset F_\infty \otimes F'_\infty.$$ Then $F\otimes F':=\{(F\otimes F')_{\alpha}\}$ forms a parabolic sheaf, which satisfies the universal property of the *tensor product* of $F$ and $F'$ in the category of parabolic sheaves. Similarly, one can define the *wedge product*, *symmetric product*, and *determinant* of parabolic vector bundles as usual.
For any $\alpha=(\alpha_1,\cdots,\alpha_n)\in{\mathbb Q}^n$, denote $$\alpha D_Y = \alpha_1D_{Y,1} + \cdots + \alpha_n D_{Y,n},$$ which is a rational divisor supported on $D_Y$. Of course, all rational divisors supported on $D_Y$ are of this form. Denote $$\lfloor\alpha\rfloor \coloneqq (\lfloor\alpha_1\rfloor,\lfloor\alpha_2\rfloor,\cdots,\lfloor\alpha_n\rfloor)$$ where $\lfloor\alpha_i\rfloor$ is the largest integer smaller than or equal to $\alpha_i$. In particular, $\lfloor\alpha\rfloor D_Y$ is an integral divisor supported on $D_Y$.
**Example 1** (trivial parabolic structure). Any torsion-free sheaf $E$ on $Y$ may be considered as a parabolic sheaf (we say "with trivial parabolic structure") by setting $$E_\alpha = E(\lfloor\alpha\rfloor D_Y).$$
**Example 2**. Let $L$ be a line bundle and let $\gamma=(\gamma_1,\cdots,\gamma_n)\in{\mathbb Q}^n$ be a rational multi-index. Then there is a parabolic sheaf, denoted $${\mathcal L}=L(\gamma D_Y),$$ defined by setting $${\mathcal L}_{\alpha} \coloneqq L(\lfloor\alpha+\gamma\rfloor D_Y).$$ Clearly, for any two line bundles $L,L'$ and two multi-indices $\gamma,\gamma'\in{\mathbb Q}^n$, one has $$L(\gamma D_Y)\otimes L'(\gamma' D_Y) = (L\otimes L')\Big((\gamma +\gamma') D_Y\Big).$$ Hence the set of all isomorphism classes of parabolic line bundles forms an abelian group under the tensor product, which contains the Picard group of $Y$ as a subgroup in the natural way.
The fractional part of the rational number $\gamma_i$ is just the parabolic weight of ${\mathcal L}$ along $D_{Y,i}$. We set $$\deg({\mathcal L}) = \deg(L) + \sum_{i=1}^n \gamma_i.$$
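For instance (a simple illustration of these definitions, with the weight chosen only for concreteness), let $D_Y$ consist of a single component $D_{Y,1}$ and take $L={\mathcal O}_Y$ and $\gamma=(\frac{1}{2})$. Then $${\mathcal L}_{\alpha} = {\mathcal O}_Y\big(\lfloor \alpha+\tfrac{1}{2}\rfloor D_{Y,1}\big),$$ so ${\mathcal L}_{\alpha}={\mathcal O}_Y$ for $-\frac{1}{2}\le\alpha<\frac{1}{2}$ and ${\mathcal L}_{\alpha}={\mathcal O}_Y(D_{Y,1})$ for $\frac{1}{2}\le\alpha<\frac{3}{2}$. The only jump in $[0,1)$ occurs at $\alpha=\frac{1}{2}$, so the parabolic weight of ${\mathcal L}$ along $D_{Y,1}$ is $\frac{1}{2}$ and $\deg({\mathcal L})=\deg({\mathcal O}_Y)+\frac{1}{2}=\frac{1}{2}$.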
**Definition 3**.
- The parabolic sheaves of the form $L(\gamma D_Y)$ constructed in Example 2 are called *parabolic line bundles*.
- A *parabolic vector bundle* is a parabolic sheaf which is locally isomorphic to a direct sum of parabolic line bundles. (Simpson called this a *locally abelian parabolic vector bundle*.) A *morphism between two parabolic vector bundles* is a morphism between their underlying parabolic sheaves.
**Example 3**. Let $V$ be a vector bundle and let $\gamma \in {\mathbb Q}^n$. Then $$V(\gamma D):= V\otimes {\mathcal O}(\gamma D)$$ is a parabolic vector bundle.
### Quasi-parabolic structures, parabolic weights
Historically, parabolic vector bundles were defined by using certain filtrations. In this subsection, we show the equivalence of the new and old definitions of parabolic vector bundles.
**Definition 4**. Let $V$ be a vector bundle over $(Y,D_Y)$. A *quasi-parabolic structure on $V$* is a decreasing filtration of direct summands of $V\mid_{D_{Y,i}}$: $$V\mid_{D_{Y,i}} = {\mathrm{QP}}^1(V\mid_{D_{Y,i}}) \supsetneqq {\mathrm{QP}}^2(V\mid_{D_{Y,i}}) \supsetneqq \cdots \supsetneqq {\mathrm{QP}}^{n_i}(V\mid_{D_{Y,i}}) \supsetneqq 0.$$ A set of *parabolic weights* attached to the quasi-parabolic structure is a set of rational numbers $\alpha_i^1,\alpha_i^2,\cdots,\alpha_i^{n_i}$ satisfying $$0\leq\alpha_i^1<\alpha_i^2<\cdots<\alpha_i^{n_i}<1.$$ A *previous parabolic vector bundle* is a triple $(V,{\mathrm{QP}},\alpha)$, which consists of a vector bundle, a quasi-parabolic structure, and a set of parabolic weights. A *morphism between two previous parabolic vector bundles* from $(V,{\mathrm{QP}},\alpha)$ to $(W,{\mathrm{QP}},\beta)$ is a morphism of vector bundles $f\colon V\rightarrow W$ such that $$f\mid_{D_{Y,i}} ({\mathrm{QP}}^j(V\mid_{D_{Y,i}})) \subseteq {\mathrm{QP}}^k(W\mid_{D_{Y,i}})$$ for any triple of indices $(i,j,k)$ satisfying $\alpha_i^j\geq \beta_i^k$.
*Remark 2*. Since our base space $S$ is connected, any direct summand $W$ of $V\mid_{D_{Y,i}}$ is actually a subbundle of $V\mid_{D_{Y,i}}$ over $D_{Y,i}$. Moreover, if one considers the pullback of $W$ along the surjective morphism $V\rightarrow V\mid_{D_{Y,i}}$, one gets a subsheaf $F_W$ of $V$, which is also a vector bundle over $Y$. In fact, the subsheaf $F_W$ fits in the following morphism of short exact sequences $$\xymatrix{
0 \ar[r] & V(-D_{Y,i}) \ar@{=}[d] \ar[r] & F_W \ar@{^(->}[d] \ar[r] & W \ar[r] \ar@{^(->}[d] & 0\\
0 \ar[r] & V(-D_{Y,i}) \ar[r] & V \ar[r] & V\mid_{D_{Y,i}} \ar[r] & 0.\\
}$$ Since both $W$ and $V(-D_{Y,i})$ are flat over $S$, the quasi-coherent sheaf $F_W$ is also flat over $S$. According to [@stacks-project [Tag 080Q](https://stacks.math.columbia.edu/tag/080Q)], the local freeness of $F_W$ can be checked fiberwise.
**Example 4**. Let $F=\{F_\alpha\}$ be a parabolic vector bundle over $(Y,D_Y)/S$. There is a natural previous parabolic vector bundle with underlying vector bundle $V\coloneqq F_0$.
For all $\epsilon\in[0,1)$, denote $${\mathrm{P}}^{\epsilon}(F_0\mid_{D_{Y,i}})\coloneqq F_{-\epsilon\delta^i}/F_{-\delta^i}\subseteq F_0/F_{-\delta^i}= F_0\mid_{D_{Y,i}},$$ which form a left-continuous descending filtration of direct summands indexed by rational numbers in $[0,1)$. To show that ${\mathrm{P}}^{\epsilon}(F_0\mid_{D_{Y,i}})$ is a direct summand, we may reduce to the case where $F$ is a direct sum of parabolic line bundles and check it directly. Denote by $\{\alpha_i^j\}_{j=1}^{n_i}\subset [0,1)$ the finite set of all jumps of the filtration ${\mathrm{P}}^{\epsilon}(F_0\mid_{D_{Y,i}})$. By reordering the indices, we may assume $0\leq \alpha_i^1< \alpha_i^2< \cdots < \alpha^{n_i}_i<1$. Then the filtration is uniquely determined by the following subfiltration $$\label{equ:QuasiPara}
F_0\mid_{D_{Y,i}} = {\mathrm{P}}^{\alpha_i^1}(F_0\mid_{D_{Y,i}}) \supsetneqq {\mathrm{P}}^{\alpha_i^2}(F_0\mid_{D_{Y,i}}) \supsetneqq \cdots \supsetneqq{\mathrm{P}}^{\alpha_i^{n_i}}(F_0\mid_{D_{Y,i}}) \supsetneqq 0.$$ Clearly, this forms a quasi-parabolic structure on $F_0$ along $D_{Y,i}$, which we call *the quasi-parabolic structure of $F$ along $D_{Y,i}$*. The numbers in the set $\{\alpha_i^j\}_{j=1}^{n_i}$ form a set of parabolic weights, which we call *the parabolic weights of $F$ along $D_{Y,i}$*, or *the parabolic weights of the quasi-parabolic structure of $F$ along $D_{Y,i}$*.
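For instance (an illustrative special case, with the weight $\frac{1}{2}$ chosen only for concreteness), take $F={\mathcal O}_Y\oplus{\mathcal O}_Y(\frac{1}{2} D_{Y,i})$, the direct sum of the trivial parabolic line bundle and the parabolic line bundle of Example 2 with $\gamma=\frac{1}{2}\delta^i$. Then $F_0={\mathcal O}_Y^{\oplus 2}$ and, along $D_{Y,i}$, $${\mathrm{P}}^{\epsilon}(F_0\mid_{D_{Y,i}})=\left\{\begin{array}{ll}
F_0\mid_{D_{Y,i}}, & \text{if } \epsilon=0,\\
0\oplus{\mathcal O}_{D_{Y,i}}, & \text{if } 0<\epsilon\le\frac{1}{2},\\
0, & \text{if } \frac{1}{2}<\epsilon<1,
\end{array}\right.$$ so the induced quasi-parabolic structure is $F_0\mid_{D_{Y,i}}\supsetneqq 0\oplus{\mathcal O}_{D_{Y,i}}\supsetneqq 0$ with parabolic weights $\alpha_i^1=0$ and $\alpha_i^2=\frac{1}{2}$.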
*Remark 3*. By the definition of our parabolic vector bundle, the parabolic weights are all rational.
**Lemma 1**. *The construction in Example 4 induces an equivalence from the category of all parabolic vector bundles to the category of all previous parabolic vector bundles.*
*Proof.* Let $V$ be a vector bundle over $(Y,D_Y)$ with quasi-parabolic structures $$V\mid_{D_{Y,i}} = {\mathrm{QP}}^1(V\mid_{D_{Y,i}}) \supsetneqq {\mathrm{QP}}^2(V\mid_{D_{Y,i}}) \supsetneqq \cdots \supsetneqq {\mathrm{QP}}^{n_i}(V\mid_{D_{Y,i}}) \supsetneqq 0$$ and parabolic weights $$0\leq\alpha_i^1<\alpha_i^2<\cdots<\alpha_i^{n_i}<1$$ for $i=1,\cdots,n$. We need to show that these data arise from a unique parabolic vector bundle $F=\{F_\alpha\}$ under the construction in Example 4.
**Existence:** For any $\epsilon\in [0,1)\cap{\mathbb Q}$, denote $${\mathrm{P}}^\epsilon(V\mid_{D_{Y,i}}) \coloneqq \left\{
\begin{array}{ll}
{\mathrm{QP}}^1(V\mid_{D_{Y,i}}) & \text{if } \epsilon \leq \alpha_i^1 \\
{\mathrm{QP}}^j(V\mid_{D_{Y,i}}) & \text{if } \alpha_{i}^{j-1} < \epsilon \leq \alpha_i^j \text{ for some } j=2,\cdots,n_i\\
0 & \text{if } \alpha_{i}^{n_i} < \epsilon \\
\end{array}\right.$$ Denote by $F_{-\epsilon \delta^i}$ the pullback of ${\mathrm{P}}^\epsilon(V\mid_{D_{Y,i}})\subset V\mid_{D_{Y,i}}$ under the surjective morphism $V\rightarrow V\mid_{D_{Y,i}}$. By Remark 2, $F_{-\epsilon \delta^i}$ is also a vector bundle. For all $\alpha=(\alpha_1,\cdots,\alpha_n)\in \Big((-1,0]\cap{\mathbb Q}\Big)^n$, denote $$F_\alpha = \bigcap_{i=1}^n F_{\alpha_i \delta^i}.$$ By the normalization condition in the definition of parabolic vector bundles, this defines $F_{\alpha}$ for any $\alpha\in{\mathbb Q}^n$. By direct computation, one checks that $\{F_\alpha\}$ forms a parabolic vector bundle associated to the given previous parabolic vector bundle.
**Uniqueness:** Let $\{F_\alpha\}$ and $\{F'_\alpha\}$ be two parabolic vector bundles associated to the given previous parabolic vector bundle. Then the isomorphism $F_0\cong V\cong F'_0$ restricts to isomorphisms $F_{-\epsilon \delta^i} \cong F'_{-\epsilon \delta^i}$, since ${\mathrm{P}}^\epsilon(F_0\mid_{D_{Y,i}}) \cong {\mathrm{P}}^\epsilon(F'_0\mid_{D_{Y,i}})$ for all $\epsilon\in[0,1)$. By normalization, one gets a natural isomorphism between $\{F_\alpha\}$ and $\{F'_\alpha\}$. ◻
### Degrees and semistability
In order to define the degree and the semistability of a parabolic vector bundle, we assume in this subsection that $Y$ is projective over $S$.
Let $F$ be a coherent sheaf on $Y$ which is flat over $S$. By the local constancy of Chern classes, the first Chern class of the coherent sheaf $F_s$ over the curve $Y_s$ does not depend on the choice of the point $s\in S$. Hence it is well defined to set $$\deg(F)\coloneqq c_1(F_s).$$ We define the degree of a parabolic vector bundle as follows.
**Definition 5**. Let $F$ be a parabolic sheaf over $(Y,D_Y)/S$.
- The *degree of $F$* is defined as $$\deg(F) \coloneqq \deg(F_0) + \sum_{D_{Y,i}\in D_Y}\sum_{j=1}^{n_i} \alpha_i^j \cdot \mathop{\mathrm{rank}}( {\mathrm{P}}^{\alpha_i^j}(F_0\mid_{D_{Y,i}})/{\mathrm{P}}^{\alpha_i^{j+1}}(F_0\mid_{D_{Y,i}}))$$
- The parabolic vector bundle $F$ is called *semistable* (resp. *stable*) if for any proper parabolic subsheaf $F'\subsetneq F$ one has $$\frac{\deg(F')}{\mathop{\mathrm{rank}}(F')} \leq \frac{\deg(F)}{\mathop{\mathrm{rank}}(F)} \qquad \Big(\text{resp. } \frac{\deg(F')}{\mathop{\mathrm{rank}}(F')} < \frac{\deg(F)}{\mathop{\mathrm{rank}}(F)} \Big).$$
### The pullback of parabolic vector bundles
Let $f\colon (Y',D_{Y'}) \rightarrow (Y,D)$ be a morphism of smooth curves with relative normal crossing divisors over $S$ such that $f^{-1}(D)\subset D_{Y'}$. We recall a definition proposed by Simpson in [@IySi07 Section 2.2] for the pullback $f^*F$ of a parabolic vector bundle $F$.
**Definition 6**.
1. If $F$ is a parabolic line bundle, then there exists a line bundle $L$ and a rational divisor $B$ which is supported on $D$ such that $F=L(B)$. We define $$f^*F \coloneqq (f^*L)(f^*B).$$
2. In the general case, by localization, we reduce to the case of parabolic line bundles.
In the following, we give an easy-to-use description of the pullback and then extend the definition to parabolic de Rham bundles.
Let $V=\{V_\alpha\}$ be a parabolic vector bundle over $(Y,D_Y)/S$. For any $\gamma\in{\mathbb Q}^n$, we identify the two parabolic vector bundles $$V = V(-\gamma D)\otimes{\mathcal O}_Y(\gamma D).$$ Clearly, once we identify a usual vector bundle with its associated parabolic vector bundle, $V_0$ can be viewed as a parabolic subsheaf of $V$ $$V_0 \subseteq V.$$ In particular, for the parabolic bundle $V(-\gamma D)$, we have a parabolic subsheaf $$V(-\gamma D)_0\hookrightarrow V(-\gamma D).$$ By pulling back along $f$ and tensoring with $f^*\Big({\mathcal O}_Y(\gamma D)\Big)$, we get a parabolic subsheaf $$f^*_{\gamma}(V)\coloneqq f^*\Big(V(-\gamma D)_0\Big)\otimes f^*\Big({\mathcal O}_Y(\gamma D)\Big) \subseteq f^*\Big(V(-\gamma D)\Big)\otimes f^*\Big({\mathcal O}_Y(\gamma D)\Big) = f^*(V).$$
**Proposition 1**. *$f^*V = \sum\limits_{\gamma\in{\mathbb Q}^n} f^*_{\gamma}(V)$.*
*Proof.* By localization, we reduce to the parabolic line bundle case, which follows by taking ${\mathcal L}$ to be the parabolic line bundle itself. ◻
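As a concrete illustration of the pullback of parabolic line bundles (the map and the weight below are chosen only for this example), let $f\colon{\mathbb P}^1_S\rightarrow{\mathbb P}^1_S$ be the squaring map $t\mapsto t^2$, with $D_Y=\{0\}$ and $D_{Y'}=\{0\}$, and let $F={\mathcal O}_Y(\frac{1}{2} D_Y)$. Since $f^*D_Y=2D_{Y'}$ as divisors, we get $$f^*F = {\mathcal O}_{Y'}\big(f^*(\tfrac{1}{2} D_Y)\big) = {\mathcal O}_{Y'}(D_{Y'}),$$ i.e. the pullback of the parabolic line bundle with weight $\frac{1}{2}$ along the branch point is an ordinary line bundle with trivial parabolic structure.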
*Remark 4*. By replacing the parabolic line bundle ${\mathcal O}_Y(\gamma D)$ with a general parabolic line bundle ${\mathcal L}$, one can define a parabolic subsheaf $$f_{{\mathcal L}}^*(V):= f^*\Big((V\otimes{\mathcal L}^{-1})_0\Big) \otimes f^*{\mathcal L}\subset f^*V.$$
## Parabolic de Rham bundles
### logarithmic de Rham bundle
A *(logarithmic) connection* on a sheaf $V$ of ${\mathcal O}_Y$-modules over $(Y,D_Y)/S$ is an ${\mathcal O}_S$-linear map $\nabla\colon V\rightarrow V\otimes_{{\mathcal O}_Y} \Omega_{Y/S}^1(\log D_Y)$ satisfying the Leibniz rule $\nabla(rv) = v\otimes \mathop{\mathrm{d}}r + r\nabla(v)$ for any local sections $r\in {\mathcal O}_Y$ and $v\in V$. Given a (logarithmic) connection, there are canonical maps $$\nabla\colon V\otimes\Omega_{Y/S}^i(\log D_Y) \rightarrow V\otimes\Omega_{Y/S}^{i+1}(\log D_Y)$$ given by $s\otimes \omega \mapsto\nabla(s)\wedge \omega +s \otimes \mathop{\mathrm{d}}\omega$. The curvature $\nabla\circ\nabla$, the composition of the first two canonical maps, is ${\mathcal O}_Y$-linear and contained in $\mathop{\mathrm{\mathcal End}}(V)\otimes_{{\mathcal O}_Y} \Omega^2_{Y/S}(\log D_Y)$. The connection is called *integrable* and $(V,\nabla)$ is called a *(logarithmic) de Rham sheaf* if the curvature vanishes. For a de Rham sheaf, one has a natural *de Rham complex*: $${\mathrm{DR}}(V,\nabla):\qquad 0\rightarrow V
\xrightarrow{\nabla} V\otimes\Omega^1_{Y/S}(\log D_Y)
\xrightarrow{\nabla} V\otimes\Omega^2_{Y/S}(\log D_Y)
\xrightarrow{\nabla} V\otimes\Omega^3_{Y/S}(\log D_Y) \rightarrow \cdots$$ A logarithmic de Rham sheaf is called a *logarithmic de Rham bundle* if the underlying sheaf is a vector bundle over $Y$. Denote by *$\mathrm{MIC}(Y,D_Y)$* the category of all logarithmic de Rham bundles over $(Y,D_Y)$.
In the same way, we define a *(logarithmic) $p$-connection along $D_Y$* by replacing the Leibniz rule in the definition of connection with $$\nabla(rv) = pv\otimes \mathop{\mathrm{d}}r + r\nabla(v).$$ Denote by *$\widetilde{\mathrm{MIC}}(Y,D_Y)$* the category of vector bundles over $Y$ with integrable logarithmic $p$-connections along $D_Y$.
### residue of a logarithmic de Rham bundle
Let $(V,\nabla)$ be a logarithmic de Rham bundle. On any given sufficiently small open subset $U$, we choose a coordinate function $f_i$ for each irreducible component $D_{Y,i}$. Let $e=(e_1,\cdots,e_r)$ be a local frame of $V$ on $U$, and let $\omega$ be the relative connection matrix of $\nabla$ with respect to $e$. Then the matrix $\omega$ can be written as $$\omega = \sum_{i=1}^{n} R_i \cdot \frac{\mathop{\mathrm{d}}f_i}{f_i} + S$$ where $R_i$ is an $r\times r$ matrix with entries in ${\mathcal O}_Y(U)$ and $S$ is an $r\times r$ matrix with entries in $(\Omega_{Y/S}^1)(U)$. Denote $$\mathop{\mathrm{res}}_{Y/S}(\omega,D_{Y,i}) := R_i\mid_{U\cap D_{Y,i}},$$ which is an $r\times r$ matrix whose entries are contained in ${\mathcal O}_{D_{Y,i}}(U\cap D_{Y,i})$ and which is independent of the choice of $f_i$. When $U$ runs through a fine enough covering of $Y$, these matrices glue into a global section $$\emph{$\mathop{\mathrm{res}}_{Y/S}(\nabla,D_{Y,i})$} \in \mathrm{H}^0(D_{Y,i},\mathop{\mathrm{\mathcal End}}_{\mathcal O_Y}(V)\mid_{D_{Y,i}}),$$ which is called the *residue of the connection $\nabla$ along $D_{Y,i}$*. The residue map of the connection along $D_{Y,i}$ is also represented as an ${\mathcal O}_{D_{Y,i}}$-endomorphism of $V\mid_{D_{Y,i}}$ $$\mathop{\mathrm{res}}_{D_{Y,i}}(\nabla)\colon V\mid_{D_{Y,i}}\longrightarrow V\mid_{D_{Y,i}}.$$ We assume that $\mathop{\mathrm{res}}_{D_{Y,i}}(\nabla)$ is quasi-nilpotent and has rational eigenvalues in $[0,1)$.
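For instance (a rank-one illustration, with an arbitrarily chosen coefficient $a$), suppose $V={\mathcal O}_Y$ and that on $U$ the connection is given by $\nabla = \mathop{\mathrm{d}} + a\,\frac{\mathop{\mathrm{d}}f_i}{f_i}$ for some $a\in{\mathcal O}_Y(U)$. With respect to the frame $e=1$ we have $\omega = a\,\frac{\mathop{\mathrm{d}}f_i}{f_i}$, hence $$\mathop{\mathrm{res}}_{Y/S}(\omega,D_{Y,i}) = a\mid_{U\cap D_{Y,i}};$$ in particular, for a constant $a$ the residue along $D_{Y,i}$ is simply multiplication by $a$.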
### Hodge filtration and filtered logarithmic de Rham bundle
In this paper, a filtration $\mathop{\mathrm{Fil}}^\bullet$ on a logarithmic de Rham bundle $(V,\nabla)$ over $(Y,D_Y)/S$ will be called a *Hodge filtration of level in $[a,b]$* if the following conditions hold:
- the $\mathop{\mathrm{Fil}}^i V$ are subbundles of $V$, with $$V=\mathop{\mathrm{Fil}}^aV\supset \mathop{\mathrm{Fil}}^{a+1}V \supset\cdots \supset \mathop{\mathrm{Fil}}^bV\supset \mathop{\mathrm{Fil}}^{b+1}V=0.$$
- $\mathop{\mathrm{Fil}}$ satisfies Griffiths transversality with respect to the logarithmic connection $\nabla$.
In this case, the triple $(V,\nabla,\mathop{\mathrm{Fil}})$ is called a *filtered logarithmic de Rham bundle*. We denote by *$\mathrm{MCF}(Y,D_Y)$* the category of filtered logarithmic de Rham bundles over $(Y,D_Y)$ and by *$\mathrm{MCF}_a(Y,D_Y)$* those with level in $[0,a]$.
### The parabolic vector bundles associated to logarithmic de Rham bundles (over ${\mathbb C}$)
We first recall the following well-known result.
**Lemma 2**. *Let $(V,\nabla)$ be a logarithmic de Rham bundle over $(Y,D_Y)/S$. Recall that $j_Y$ denotes the open immersion of $U_Y=Y\setminus D_Y$ into $Y$. Then*
1. *the connection $\nabla$ on $V$ can be uniquely extended to $V_\infty := j_{Y*}j^*_Y(V)$ under the natural injection $V\hookrightarrow V_\infty$, and*
2. *the extended connection can be restricted to $V(D')$ for any integral divisor $D'$ supported on $D_Y$. In particular, if $D'$ is also positive, then the logarithmic de Rham bundle $(V,\nabla)$ extends to another one $(V(D'),\nabla)$ with natural injection $$(V,\nabla) \hookrightarrow (V(D'),\nabla).$$*
3. *Let $V'$ be a vector bundle over $Y$ contained in $V_\infty$ as a subsheaf. Then there is at most one connection $\nabla'$ on $V'$ such that $j_Y^*\nabla = j_Y^*\nabla'$.*
Denote by ${\mathbb Q}^S$ the maximal subring of ${\mathbb Q}$ such that the natural ring homomorphism ${\mathbb Z}\rightarrow {\mathcal O}_S(S)$ can be extended to ${\mathbb Q}^S$: $$\iota\colon {\mathbb Q}^S \rightarrow {\mathcal O}_S(S).$$ Then $${\mathbb Q}^S = \left\{\begin{array}{ll}
{\mathbb Q}, & \text{if } S \text{ is in case (i)};\\
{\mathbb Q}_{(p)}, & \text{otherwise.}
\end{array}\right.$$ We say that a logarithmic de Rham bundle $(V,\nabla)$ over $(Y,D_Y)/S$ *has rational eigenvalues* if the eigenvalues of the residues are all contained in the image of $\iota$.
In the rest of this subsection, we set $S=\mathop{\mathrm{Spec}}({\mathbb C})$.
**Proposition 2** (Iyer-Simpson 2006). *Let $(V,\nabla)$ be a logarithmic de Rham bundle over $(Y,D_Y)$. Assume the residues have rational eigenvalues. Then*
1. *there exists a unique (locally abelian) parabolic vector bundle $F$ associated to $(V,\nabla)$, i.e. a parabolic vector bundle $F=\{F_\alpha\}$ together with isomorphisms $$F_\alpha\mid_{U_Y} \cong V\mid_{U_Y}$$ such that for each $\alpha$, the connection $\nabla$ extends to a logarithmic connection $\nabla_\alpha$ on $F_\alpha$ whose residue on the piece $\mathop{\mathrm{Gr}}_\alpha(F)\coloneqq F_\alpha/F_{\alpha-\varepsilon\delta^i}$ is an operator with eigenvalue $-\alpha_i$.*
2. *The eigenvalues of the residue of $\nabla_\alpha$ along $D_{Y,i}$ are contained in the interval $[-\alpha_i,1-\alpha_i)$.*
3. *The construction preserves short exact sequences and restrictions.*
**Corollary 1**. *If the eigenvalues $\{\eta_i^j\}$ of the residues are all located in $[0,1)$, then under a suitable reordering of these eigenvalues, for each index pair $(i,j)$ one has $$\eta_i^j = \left\{\begin{array}{ll}
0,& \text{if } \alpha_i^j=0;\\
1-\alpha_i^j,& \text{if } \alpha_i^j\neq 0\\
\end{array}\right.$$*
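For instance, in a rank-one situation where the residue of $\nabla$ along $D_{Y,i}$ has the single eigenvalue $\eta_i^1=\frac{1}{2}$ (e.g. $\nabla=\mathop{\mathrm{d}}+\frac{1}{2}\,\frac{\mathop{\mathrm{d}}f_i}{f_i}$ locally, an illustrative choice), the associated parabolic weight along $D_{Y,i}$ is $\alpha_i^1=1-\eta_i^1=\frac{1}{2}$, while an eigenvalue $0$ corresponds to the weight $0$.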
*Remark 5*. Let $(V,\nabla)$ be a logarithmic de Rham bundle over $(Y,D_Y)$.
- From the construction of the associated parabolic vector bundle,
- if the eigenvalues of the residues of $\nabla$ are located in $[0,1)$, then $$(V,\nabla)=(F_0,\nabla_0)$$
- if the eigenvalues of the residues of $\nabla$ are located in $[-1,0)$, then $$(V,\nabla)=(F_{(1,\cdots,1)},\nabla_{(1,\cdots,1)})$$
- if the eigenvalues of the residues of $\nabla$ are located in $(-1,0]$, then $$(V,\nabla)=\bigcup_{\varepsilon<1} (F_{(\varepsilon,\cdots,\varepsilon)},\nabla_{(\varepsilon,\cdots,\varepsilon)}).$$
- $F$ is also the parabolic vector bundle associated to the logarithmic de Rham bundle $(F_\alpha,\nabla_\alpha)$.
### Parabolic de Rham bundles
Inspired by the above construction, one can give a definition of parabolic de Rham bundles and parabolic Higgs bundles.
**Definition 7**. A *parabolic de Rham bundle* $(V,\nabla)=\{(V_\alpha,\nabla_\alpha)\}$ over $(Y,D_Y)/S$ is a parabolic vector bundle $V=\{V_\alpha\}$ together with integrable connections $\nabla_\alpha$ having logarithmic poles along $D_Y$ such that the inclusions $V_\alpha\hookrightarrow V_\beta$ preserve the connections. We call $\nabla:=\{\nabla_\alpha\}$ a *parabolic connection* on the parabolic vector bundle $V$.
A parabolic de Rham bundle $(V,\nabla)=\{(V_\alpha,\nabla_\alpha)\}$ is called *standard* if all parabolic weights are contained in ${\mathbb Q}^{S}$, and for any $\alpha\in \Big({\mathbb Q}^{S}\Big)^n$, the residue on the piece $V_\alpha/V_{\alpha-\varepsilon\delta^i}$ of the logarithmic connection $\nabla_\alpha$ is an operator with eigenvalue $\iota(-\alpha_i)$.
*Remark 6*.
1. The $(V_\alpha,\nabla_\alpha)$ may be considered as de Rham subsheaves of $$(V_\infty,\nabla_\infty) := \varinjlim_\beta (V_\beta,\nabla_\beta) = j_{Y*}j^*_Y (V_\alpha,\nabla_\alpha).$$
2. The tensor product of two parabolic de Rham bundles can be naturally defined: for any two parabolic de Rham bundles $(V,\nabla)$ and $(V',\nabla')$, the underlying parabolic bundle of their tensor product is just $V\otimes V'$ and the parabolic connection is defined by the restrictions of the connection $\Big(\nabla_\infty\otimes \mathop{\mathrm{id}}+ \mathop{\mathrm{id}}\otimes\nabla'_\infty\Big)$ on $V_\infty\otimes V'_\infty$.
**Example 5**. Let $\gamma\in{\mathbb Q}^n$ be a multi-index and let $(V,\nabla)$ be a logarithmic de Rham bundle over $(Y,D_Y)/S$. There is a natural parabolic connection, denoted by $\nabla(\gamma D)$, on the parabolic vector bundle $V(\gamma D)$ given by $$\nabla(\gamma D)_\alpha:=\nabla_\infty \mid_{V(\gamma D)_{\alpha}}.$$ The logarithmic de Rham bundle $(V,\nabla)$ may be considered as a parabolic de Rham bundle (we say "with trivial parabolic structure") by identifying it with $(V(0D),\nabla(0D))$. Assume $(V,\nabla)=({\mathcal O}_Y,\mathop{\mathrm{d}})$. We call parabolic de Rham line bundles of the form $({\mathcal O}_Y(\gamma D),\mathop{\mathrm{d}}(\gamma D))$ *shifting parabolic de Rham line bundles*. A shifting parabolic de Rham line bundle $({\mathcal O}_Y(\gamma D),\mathop{\mathrm{d}}(\gamma D))$ is standard if and only if $\gamma_i\in \ker\iota$ for all $i=1,\cdots,n$.
**Proposition 3**. *If $S$ has characteristic zero, then any standard parabolic de Rham bundle over $(Y,D_Y)$ is semistable and of degree zero.*
Recall that a logarithmic $p$-connection on a vector bundle $V$ over $(Y,D_Y)/S$ is an ${\mathcal O}_S$-linear mapping $$\nabla \colon V \rightarrow V\otimes \Omega_{Y/S}(\log D_Y)$$ satisfying, for any local sections $s \in {\mathcal O}_Y$ and $v\in V$, $$\nabla(sv) = p\, v\otimes \mathop{\mathrm{d}}s + s\nabla(v).$$ We note that the multiple of a connection by $p$ is always a $p$-connection and, if $p$ is invertible in ${\mathcal O}_S(S)$, then all $p$-connections arise in this way. Similarly, one can define parabolic $p$-connections on parabolic vector bundles, and their tensor products. We leave the routine definitions to the reader.
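Indeed (a direct verification of the statement above, writing $\nabla^{\mathrm{ord}}$ for an ordinary connection), the map $\nabla:=p\,\nabla^{\mathrm{ord}}$ satisfies $$\nabla(sv) = p\,\nabla^{\mathrm{ord}}(sv) = p\, v\otimes \mathop{\mathrm{d}}s + s\,\big(p\,\nabla^{\mathrm{ord}}(v)\big) = p\, v\otimes \mathop{\mathrm{d}}s + s\,\nabla(v),$$ so it is a $p$-connection; conversely, when $p$ is invertible in ${\mathcal O}_S(S)$, dividing a $p$-connection by $p$ yields an ordinary connection.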
### Parabolic de Rham line bundles
By tensoring with a suitable shifting parabolic de Rham line bundle, a parabolic de Rham line bundle can be modified so that its parabolic structure becomes trivial. Thus we have the following result.
**Lemma 3**. *Let $({\mathcal L},\nabla)$ be a parabolic de Rham line bundle over $(Y,D_Y)/S$. Denote by $w_i\in [0,1)\cap {\mathbb Q}$ the parabolic weight of ${\mathcal L}$ along $D_{Y,i}$ and denote $w=(w_1,\cdots,w_n)$. Then*
1. *we have a decomposition of $({\mathcal L},\nabla)$ into a tensor product of a usual logarithmic de Rham bundle and a shifting parabolic de Rham line bundle $$({\mathcal L},\nabla) = ({\mathcal L}_0,\nabla_0) \otimes ({\mathcal O}_Y(w D),\mathop{\mathrm{d}}(w D)).$$*
2. *If we have another such decomposition $$({\mathcal L},\nabla) = (L,\nabla)\otimes ({\mathcal O}_Y(\gamma D),\mathop{\mathrm{d}}(\gamma D)),$$ then $w - \gamma \in {\mathbb Z}^n$ and $$(L,\nabla)=({\mathcal L}_0,\nabla_0)\otimes ({\mathcal O}_Y((w-\gamma)D),\mathop{\mathrm{d}}((w-\gamma)D)).$$*
*Proof.* Consider $(\Delta V,\Delta \nabla):= ({\mathcal L},\nabla) \otimes ({\mathcal L}_0,\nabla_0)^{-1} \otimes ({\mathcal O}_Y(w D),\mathop{\mathrm{d}}(w D))^{-1}$. From the definition of tensor product, one checks directly that $\Delta V={\mathcal O}_Y$, and $\Delta\nabla\mid_{Y\setminus D_Y}=\mathrm{d}\mid_{Y\setminus D_Y}$. Thus $(\Delta V,\Delta \nabla)=({\mathcal O}_Y,\mathrm{d})$ and (1) holds true. Part (2) follows from (1) directly. ◻
*Remark 7*. By the same method, one has similar decompositions for parabolic vector bundles with parabolic $p$-connections.
### The pullback of parabolic de Rham bundles
Now we consider the pullback in the parabolic de Rham bundle case. Let $(V,\nabla)$ be a parabolic de Rham bundle over $(Y,D_Y)/S$. For any $\gamma \in {\mathbb Q}^n$, $f^*(\gamma D)$ is a rational divisor on $Y'$ supported on $D_{Y'}$. We simply set $$f^*\Big({\mathcal O}_Y(\gamma D),\mathop{\mathrm{d}}_Y(\gamma D)\Big) := \Big({\mathcal O}_{Y'}\big(f^*(\gamma D)\big),\mathop{\mathrm{d}}_{Y'}\big(f^*(\gamma D)\big)\Big).$$
Similarly to the parabolic vector bundle case, for each $\gamma \in {\mathbb Q}^n$, we set $$f^*_{\gamma}(V,\nabla)
\coloneqq f^*\Big(\big(V(-\gamma D),\nabla(-\gamma D)\big)_0\Big) \otimes f^*\Big({\mathcal O}_Y(\gamma D),\mathop{\mathrm{d}}_Y(\gamma D)\Big).$$
Denote $U_Y=Y-D_Y$ and $U_{Y'}=Y'-D_{Y'}$. If we restrict $f^*_{\gamma}(V,\nabla)$ to the open subset $U_{Y'}$, then by construction one gets $$\Big(f^*_{\gamma}(V,\nabla)\Big)\mid_{U_{Y'}}=f^*\Big((V,\nabla)\mid_{U_Y}\Big).$$ By Lemma 2, the connections on $f^*_{\gamma}(V,\nabla)$ for different choices of $\gamma$ coincide with each other over the maximal common subsheaf.
**Definition 8**. Let $(V,\nabla)$ be a parabolic de Rham bundle over $(Y,D_Y)/S$. We define the pullback of $(V,\nabla)$ along $f$ as $$f^*(V,\nabla)\coloneqq \bigcup_{\gamma\in{\mathbb Q}^n} f^*_{\gamma}(V,\nabla).$$
## Parabolic Higgs bundles
### Graded vector bundles
Let $E$ be a vector bundle over $X$ and let $\{\mathop{\mathrm{Gr}}^\ell E\}_{\ell\in {\mathbb Z}}$ be subbundles of $E$. The pair $(E,\mathop{\mathrm{Gr}})$ is called a *graded vector bundle* over $X$ if the natural map $\oplus_{\ell\in {\mathbb Z}} \mathop{\mathrm{Gr}}^\ell E\rightarrow E$ is an isomorphism.
### logarithmic Higgs bundles
Let $E$ be a vector bundle over $X$ and let $\theta\colon E\rightarrow E\otimes_{{\mathcal O}_X} \Omega^1_{X/S}(\log D)$ be an ${\mathcal O}_X$-linear morphism. The pair $(E,\theta)$ is called a *(logarithmic) Higgs bundle* over $(X,D)/S$ if $\theta$ is integrable, i.e. $\theta\wedge\theta = 0$. For a Higgs bundle, one has a natural *Higgs complex* $${\mathrm{DR}}(E,\theta):\qquad 0\rightarrow E
\xrightarrow{\theta} E\otimes\Omega^1_{X/S}(\log D)
\xrightarrow{\theta} E\otimes\Omega^2_{X/S}(\log D)
\xrightarrow{\theta} E\otimes\Omega^3_{X/S}(\log D) \rightarrow \cdots$$
### Graded logarithmic Higgs bundles
A *graded (logarithmic) Higgs bundle* over $(X,D)/S$ is a Higgs bundle $(E,\theta)$ together with a grading structure $\mathop{\mathrm{Gr}}$ on $E$ satisfying $$\theta(\mathop{\mathrm{Gr}}^\ell E) \subset \mathop{\mathrm{Gr}}^{\ell -1}E\otimes_{{\mathcal O}_X} \Omega^1_{X/S}(\log D).$$ Thus we have subcomplexes of ${\mathrm{DR}}(E,\theta)$ $$\mathop{\mathrm{Gr}}^\ell {\mathrm{DR}}(E,\theta):
\qquad 0
\rightarrow \mathop{\mathrm{Gr}}^\ell E
\xrightarrow{\theta} \mathop{\mathrm{Gr}}^{\ell-1} E \otimes \Omega^1_{X/S}(\log D)
\xrightarrow{\theta} \mathop{\mathrm{Gr}}^{\ell-2} E \otimes \Omega^2_{X/S}(\log D)
\xrightarrow{\theta} \cdots.$$
The following is the main example we will be concerned with.
**Example 6**. Let $(V,\nabla,\mathop{\mathrm{Fil}})$ be a filtered de Rham bundle over $(X,D)/S$. Denote $E =\bigoplus_{\ell\in \mathbb Z} \mathop{\mathrm{Fil}}^\ell V/\mathop{\mathrm{Fil}}^{\ell+1}V$ and $\mathop{\mathrm{Gr}}^\ell E = \mathop{\mathrm{Fil}}^\ell V/\mathop{\mathrm{Fil}}^{\ell+1}V$. By Griffiths' transversality, the connection induces an ${\mathcal O}_X$-linear map $\theta \colon \mathop{\mathrm{Gr}}^\ell E \rightarrow \mathop{\mathrm{Gr}}^{\ell-1} E \otimes_{{\mathcal O}_X} \Omega^1_{X/S}(\log D)$ for each $\ell\in {\mathbb Z}$. Then $(E,\theta,\mathop{\mathrm{Gr}})$ is a graded Higgs bundle. Moreover we have $$\mathop{\mathrm{Gr}}^{\ell} {\mathrm{DR}}(E,\theta) = \mathop{\mathrm{Fil}}^\ell {\mathrm{DR}}(V,\nabla) / \mathop{\mathrm{Fil}}^{\ell+1} {\mathrm{DR}}(V,\nabla).$$
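For instance, in the rank-two case with a two-step filtration $V=\mathop{\mathrm{Fil}}^0V\supset \mathop{\mathrm{Fil}}^1V\supset\mathop{\mathrm{Fil}}^2V=0$, the example reads $$E=\mathop{\mathrm{Fil}}^1V\oplus V/\mathop{\mathrm{Fil}}^1V,\qquad \theta\big|_{\mathop{\mathrm{Fil}}^1V}=\nabla \bmod \mathop{\mathrm{Fil}}^1V,\qquad \theta\big|_{V/\mathop{\mathrm{Fil}}^1V}=0,$$ where the Leibniz rule shows that $\nabla \bmod \mathop{\mathrm{Fil}}^1V$ is indeed an ${\mathcal O}_X$-linear map $\mathop{\mathrm{Fil}}^1V\rightarrow (V/\mathop{\mathrm{Fil}}^1V)\otimes_{{\mathcal O}_X}\Omega^1_{X/S}(\log D)$.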
### Parabolic Higgs bundles
**Definition 9**. A *parabolic Higgs bundle* $(E,\theta)=\{(E_\alpha,\theta_\alpha)\}$ over $(Y,D_Y)$ is
- a parabolic vector bundle $E=\{E_\alpha\}$, together with
- integrable Higgs fields $\theta_\alpha$ having logarithmic poles along $D_Y$,
such that the inclusions $E_\alpha\hookrightarrow E_\beta$ preserve the Higgs fields.
A parabolic Higgs bundle $(E,\theta)$ is called *graded* if there is a grading structure $\mathop{\mathrm{Gr}}$ on the underlying parabolic vector bundle $E$ satisfying $$\theta(\mathop{\mathrm{Gr}}^\ell E) \subset \mathop{\mathrm{Gr}}^{\ell -1}E\otimes_{{\mathcal O}_X} \Omega^1_{X/S}(\log D).$$
*Remark 8*.
1. The $(E_\alpha,\theta_\alpha)$ may be considered as Higgs subsheaves of $$(E_\infty,\theta_\infty) := \varinjlim_\beta (E_\beta,\theta_\beta) = j_{Y*}j^*_Y (E_\alpha,\theta_\alpha).$$
2. The tensor product of two parabolic Higgs bundles can be naturally defined: for any two parabolic Higgs bundles $(E,\theta)$ and $(E',\theta')$, the underlying parabolic bundle of their tensor product is just $E\otimes E'$ and the parabolic Higgs field is defined as the restriction of the field $\Big(\theta_\infty\otimes \mathop{\mathrm{id}}+ \mathop{\mathrm{id}}\otimes \theta'_\infty\Big)$ on $E_\infty\otimes E'_\infty$.
## Parabolic de Rham bundles and parabolic Higgs bundles of lower rank over projective lines {#sec_para_classification_lower_rank}
In this section, we take $X={\mathbb P}^1_S$ to be the projective line over $S$ and take $D=D_S\subset {\mathbb P}_S^1$ to be the divisor given by the $4$ $S$-points $\{0,1,\infty,\lambda\}$. Denote by $D_x$ the reduced and irreducible divisor given by the point $x$ for any $x\in\{0,1,\infty,\lambda\}$.
### Classification of parabolic de Rham line bundles
We first classify all logarithmic de Rham line bundles over $({\mathbb P}^1_S,D_S)/S$.
**Lemma 4**.
1. *Let $e=(e_0,e_1,e_\lambda,e_\infty)\in \Big({\mathcal O}_S(S)\Big)^4$ and $d\in {\mathbb Z}$ satisfy $$e_0+e_1+e_\lambda+e_\infty + d = 0 \in {\mathcal O}_S(S).$$ Then up to an isomorphism there is a unique logarithmic de Rham line bundle $(L^{(e,d)},\nabla^{(e,d)})$ over $({\mathbb P}^1_S,D_S)/S$ such that*
- *the line bundle $L^{(e,d)}$ is of degree $d$, and*
- *$e_x$ is the eigenvalue of the residue of $\nabla^{(e,d)}$ along $D_x$ for all $x\in\{0,1,\lambda,\infty\}$.*
2. *Any logarithmic de Rham line bundle is of this form.*
*Proof.* (1). **Existence:** We first take $L^{(e,d)}={\mathcal O}_{{\mathbb P}^1_S}(d(\infty))$ and set $$\nabla^{(e,d)}(1) = 1\otimes \Big(e_0 \mathop{\mathrm{d}}\log t + e_1\mathop{\mathrm{d}}\log(t-1) + e_\lambda \mathop{\mathrm{d}}\log(t-\lambda)\Big),$$ where $t$ is the parameter of the projective line. Clearly, the residues of $\nabla^{(e,d)}$ at $0,1,\lambda$ are $e_0,e_1,e_\lambda$ respectively. We only need to show that $\nabla^{(e,d)}$ extends over $\infty$ and that the residue there equals $e_\infty$. This follows from the explicit computation $$\begin{split}
\nabla^{(e,d)}\left(t^d\right) & = 1\otimes \mathop{\mathrm{d}}\left(t^d\right) + t^d\cdot\nabla^{(e,d)}\left(1\right)\\
& = t^d \otimes \Big(-d\cdot\mathop{\mathrm{d}}\log\frac1t + e_0 \mathop{\mathrm{d}}\log t + e_1\mathop{\mathrm{d}}\log(t-1) + e_\lambda \mathop{\mathrm{d}}\log(t-\lambda)\Big) \\
& = t^d \otimes \Big(-d-e_0 -e_1\cdot\frac{t}{t-1} - e_\lambda\cdot\frac{t}{t-\lambda}\Big)\cdot \mathop{\mathrm{d}}\log\frac1t.\\
\end{split}$$ Indeed, $t^d$ is a local generator of $L^{(e,d)}$ around $\infty$, and evaluating the last coefficient at $t=\infty$ gives the residue $-d-e_0-e_1-e_\lambda=e_\infty$.
**Uniqueness:** Suppose there are two logarithmic de Rham line bundles $(L,\nabla)$ and $(L',\nabla')$ satisfying the conditions. Since the base space is the projective line and $L$ and $L'$ have the same degree, they must be isomorphic to each other. So we may identify them: $L=L'$. Now, on this line bundle there are two logarithmic connections with the same residues. They must coincide with each other, since $$\nabla-\nabla'\in \mathop{\mathrm{Hom}}_{{\mathcal O}_{{\mathbb P}^1_S}}(L,L \otimes \Omega^1_{{\mathbb P}^1_S}) \cong H^0({\mathbb P}^1_S,\Omega^1_{{\mathbb P}^1_S})=0.$$
(2). Let $(L,\nabla)$ be a logarithmic de Rham line bundle with residue eigenvalues $(e_0,e_1,e_\lambda,e_\infty)$ and degree $d$. Denote $e_\infty'\coloneqq -(e_0+e_1+e_\lambda+\iota(d))$ and $e'=(e_0,e_1,e_\lambda,e_\infty')$. By (1), one has a logarithmic de Rham line bundle $(L^{(e',d)},\nabla^{(e',d)})$. By a similar argument as in (1), we may identify $L^{(e',d)}$ with $L$ and get $$\nabla-\nabla^{(e',d)} \in \mathop{\mathrm{Hom}}_{{\mathcal O}_{{\mathbb P}^1_S}}(L,L \otimes \Omega^1_{{\mathbb P}^1_S}(\log D_\infty)) \cong H^0({\mathbb P}^1_S,\Omega^1_{{\mathbb P}^1_S}(\log D_\infty))=0.$$ Thus $e'_\infty=e_\infty$ and $(L,\nabla)\cong (L^{(e',d)},\nabla^{(e',d)})$. ◻
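As a concrete instance of the lemma (assuming, for this illustration only, that $2$ is invertible in ${\mathcal O}_S(S)$ so that $\iota(\tfrac12)$ makes sense), take $e=\big(\iota(\tfrac12),\iota(\tfrac12),\iota(\tfrac12),\iota(\tfrac12)\big)$ and $d=-2$. Then $L^{(e,-2)}={\mathcal O}_{{\mathbb P}^1_S}(-2(\infty))$ with $$\nabla^{(e,-2)}(1) = 1\otimes\tfrac12\Big(\mathop{\mathrm{d}}\log t + \mathop{\mathrm{d}}\log(t-1)+\mathop{\mathrm{d}}\log(t-\lambda)\Big),$$ whose horizontal sections are, formally, the multiples of $\big(t(t-1)(t-\lambda)\big)^{-1/2}$.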
**Lemma 5**. *Let $e=(e_0,e_1,e_\lambda,e_\infty)\in \Big({\mathcal O}_S(S)\Big)^4$ and $d\in {\mathbb Z}$ satisfy $$e_0+e_1+e_\lambda+e_\infty + d = 0 \in {\mathcal O}_S(S).$$ For any $m=(m_0,m_1,m_\lambda,m_\infty)\in {\mathbb N}^4$, denote $$e' = (e_0-m_0,e_1-m_1,e_\lambda-m_\lambda,e_\infty-m_\infty) \quad \text{and} \quad d'=d+m_0+m_1+m_\lambda+m_\infty.$$ Then the natural extension $(L^{(e,d)}(D'),\nabla^{(e,d)})$ () with respect to $D'=m_0D_0 + m_1D_1 + m_\lambda D_\lambda + m_\infty D_\infty$ is isomorphic to $(L^{(e',d')},\nabla^{(e',d')})$.*
*Proof.* By the uniqueness in , we only need to show that $e'$ coincides with the residues of $\nabla^{(e,d)}$ on $L^{(e,d)}(D')$. Let $f$ be a local generator of $L^{(e,d)}$ around $t=0$ and suppose $\nabla^{(e,d)}(f)=f\otimes \omega$. Then $t^{-m_0}f$ is a local generator of $L^{(e,d)}(D')$ around $t=0$ and $$\nabla^{(e,d)}(t^{-m_0}f) = t^{-m_0}f \otimes (-m_0\mathop{\mathrm{d}}\log t + \omega).$$ Thus $e_0-m_0$ is the residue of $\nabla^{(e,d)}$ on $L^{(e,d)}(D')$ along $D_0$. One checks the other three points similarly. ◻
We next classify all parabolic de Rham line bundles over $({\mathbb P}^1_S,D_S)/S$.
**Lemma 6**.
1. *Let $w=(w_0,w_1,w_\lambda,w_\infty)\in \Big([0,1)\cap {\mathbb Q}\Big)^4$, $e=(e_0,e_1,e_\lambda,e_\infty)\in \Big({\mathcal O}_S(S)\Big)^4$, and $d\in {\mathbb Z}$ satisfy $$e_0+e_1+e_\lambda+e_\infty + d = 0 \in {\mathcal O}_S(S).$$ Then there exists a unique parabolic de Rham line bundle $\{(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha)\}_{\alpha}$ such that*
- *$d=\deg(L^{(e,d,w)}_0)$;*
- *$e_x$ is the eigenvalue of residue of $\nabla^{(e,d,w)}_0$ along $D_x$ for $x\in\{0,1,\lambda,\infty\}$, and*
- *$w_x$ is the parabolic weight of the parabolic line bundle $\{L^{(e,d,w)}_\alpha\}_\alpha$ along $D_x$ for $x\in\{0,1,\lambda,\infty\}$;*
2. *The underlying parabolic line bundle of $\{(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha)\}_{\alpha}$ is isomorphic to $${\mathcal O}_{{\mathbb P}^1_S}(d)(w_0D_0 + w_1D_1 + w_\lambda D_\lambda +w_\infty D_\infty).$$ The line bundle $L^{(e,d,w)}_\alpha$ is of degree $$d'\coloneqq d+[\alpha_0+w_0]+[\alpha_1+w_1]+[\alpha_\lambda+w_\lambda]+[\alpha_\infty+w_\infty]$$ and the eigenvalue of the residue of $\nabla^{(e,d,w)}_\alpha$ along $D_x$ is $e'_x\coloneqq e_x-[\alpha_x+w_x]$. In other words, $$(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha) \cong (L^{(e',d')},\nabla^{(e',d')}).$$*
3. *Any parabolic de Rham line bundle over $({\mathbb P}^1_S,D_S)/S$ is of the form given in (1).*
*Proof.* (1) **Existence:** For any $\alpha=(\alpha_0,\alpha_1,\alpha_\lambda,\alpha_\infty)\in{\mathbb Q}^4$, denote $D_\alpha = \alpha_0D_0 + \alpha_1D_1 + \alpha_\lambda D_\lambda+ \alpha_\infty D_\infty$. We set $$(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha) \coloneqq (L^{(e,d)}(D_{[\alpha+w]}),\nabla^{(e,d)}),$$ where $(L^{(e,d)},\nabla^{(e,d)})$ is the logarithmic de Rham line bundle given in and the extension $(L^{(e,d)}(D_{[\alpha+w]}),\nabla^{(e,d)})$ is defined as in . The natural injections introduced in make the collection $\{(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha)\}$ into a parabolic de Rham bundle, which clearly satisfies our requirements.
**Uniqueness:** Let $(L_\alpha,\nabla_\alpha)$ and $(L'_\alpha,\nabla'_\alpha)$ be two parabolic de Rham line bundles satisfying our requirements. Then we may identify $(L_0,\nabla_0)$ and $(L'_0,\nabla'_0)$ according to . Thus $$(L_\alpha,\nabla_\alpha) = (L_0(D_{[\alpha+w]}),\nabla_0) = (L'_0(D_{[\alpha+w]}),\nabla'_0) =(L'_\alpha,\nabla'_\alpha),$$ where the first and third equalities follow from the normalization and the definition of the weights of parabolic de Rham bundles.
\(2\) This follows from the construction in (1).
\(3\) For any parabolic de Rham line bundle $\{(L_\alpha,\nabla_\alpha)\}_{\alpha}$, we have an associated datum $(w,e,d)$ consisting of weights, eigenvalues and degree. By the uniqueness in (1), the only thing we need to check is $$e_0+e_1+e_\lambda+e_\infty + d = 0 \in {\mathcal O}_S(S),$$ and this follows from . ◻
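For example, take $w=(0,0,0,\tfrac12)$ and $\alpha=(0,0,0,\tfrac12)$. Then $[\alpha_x+w_x]=0$ for $x\in\{0,1,\lambda\}$ while $[\alpha_\infty+w_\infty]=1$, so part (2) gives $$\big(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha\big) \cong \big(L^{(e',d+1)},\nabla^{(e',d+1)}\big), \qquad e'=(e_0,e_1,e_\lambda,e_\infty-1).$$ This weight-$\tfrac12$-at-$\infty$ situation is exactly the one entering the spaces ${M^{{1\over 2}}_{{\rm dR}\,\lambda}}(S)$ below.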
We now classify all standard parabolic de Rham line bundles over $({\mathbb P}^1_S,D_S)/S$.
**Lemma 7**. *Let $w=(w_0,w_1,w_\lambda,w_\infty)$ be an element in $\Big([0,1)\cap {\mathbb Q}\Big)^4$. Denote $$e_{w,x}\coloneqq\iota(w_x) \quad\text{and}\quad e_w\coloneqq(e_{w,0},e_{w,1},e_{w,\lambda},e_{w,\infty})\in \Big({\mathcal O}_S(S)\Big)^4.$$ For any integer $d$ with $$e_{w,0}+e_{w,1}+e_{w,\lambda}+e_{w,\infty}+d=0\in {\mathcal O}_S(S),$$ the parabolic de Rham line bundle $\{(L^{(e_w,d,w)}_\alpha,\nabla^{(e_w,d,w)}_\alpha)\}_{\alpha}$ is standard. Moreover, any standard parabolic de Rham line bundle is of this form.*
*Proof.* Fix a parabolic de Rham line bundle $\{(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha)\}_{\alpha}$. Recall that it is standard if and only if the eigenvalue of the residue of the connection $\nabla^{(e,d,w)}_\alpha$ on the grading piece $V_\alpha/V_{\alpha-\varepsilon\delta_x}$ is $\iota(-\alpha_x)$ for any $\alpha$. By the definition of the parabolic weights, for any $\alpha$, the grading piece does not vanish if and only if $\alpha_x+w_x$ is an integer. On the other hand, according to (2) in , the eigenvalue of the residue of $\nabla^{(e,d,w)}_\alpha$ along $D_x$ is $e_{x}-[\alpha_x+w_x]$. Thus $\{(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha)\}_{\alpha}$ is standard if and only if for any $\alpha$ with $\alpha_x+w_x$ being an integer one has $e_{x}-[\alpha_x+w_x]=\iota(-\alpha_x) \in {\mathcal O}_S(S)$.
If $\{(L^{(e,d,w)}_\alpha,\nabla^{(e,d,w)}_\alpha)\}_{\alpha}$ is standard, then taking $\alpha$ such that $\alpha_x = -w_x$ one gets $$e_x = \iota(w_x)=e_{w,x} \in {\mathcal O}_S(S).$$ In other words, it is of the form given in the Lemma.
Conversely, since $e_{w,x}=\iota(w_x)$, for any $\alpha$ with $\alpha_x+w_x$ an integer one has $e_{w,x}-[\alpha_x+w_x]= e_{w,x} - \iota(\alpha_x) - \iota(w_x) =\iota(-\alpha_x) \in {\mathcal O}_S(S)$. Thus $\{(L^{(e_w,d,w)}_\alpha,\nabla^{(e_w,d,w)}_\alpha)\}_{\alpha}$ is indeed standard. ◻
**Corollary 2**.
1. *A standard parabolic de Rham line bundle is uniquely determined by its weights and degree.*
2. *Suppose $\mathop{\mathrm{char}}({\mathcal O}_S(S))=0$. Then a standard parabolic de Rham line bundle is uniquely determined by its weights. In this case the degree of the underlying parabolic line bundle is zero.*
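Indeed, concerning (2): since $\mathop{\mathrm{char}}({\mathcal O}_S(S))=0$, the relation $\iota(w_0)+\iota(w_1)+\iota(w_\lambda)+\iota(w_\infty)+d=0$ forces $d=-(w_0+w_1+w_\lambda+w_\infty)$, so the weights already determine $d$; and with the usual convention that the parabolic degree of a parabolic line bundle equals $\deg(L_0)+\sum_x w_x$, one gets $$\deg^{\mathrm{par}} = d+(w_0+w_1+w_\lambda+w_\infty)=0.$$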
### Some parabolic de Rham bundles of rank $2$
Denote by *${M^{{1\over 2}}_{{\rm dR}\,\lambda}}(S)$* the set of all isomorphism classes of rank-$2$ stable standard parabolic de Rham bundles $(V,\nabla)$ of degree zero on $({\mathbb P}^1_S,D_S)/S$ with all parabolic weights being zero at $\{0,1,\lambda\}$ and with all parabolic weights being $1/2$ at $\infty$.
**Proposition 4**. *Let $(V,\nabla)$ be a parabolic de Rham bundle in ${M^{{1\over 2}}_{{\rm dR}\,\lambda}}(S)$. Then*
1. *the parabolic de Rham bundle $(V,\nabla)$ has the form $$({\mathcal L}\oplus {\mathcal L}^{-1},\nabla),$$ where ${\mathcal L}={\mathcal O}(\frac12(\infty))$;*
2. *if we take the *parabolic Hodge line bundle* to be ${\mathcal L}$, then the associated graded parabolic Higgs field is nonzero and is of the form $$\theta\colon {\mathcal L}\rightarrow {\mathcal L}^{-1}\otimes \Omega^1_{X/S}(\log D).$$ In particular, the graded parabolic Higgs bundle $({\mathcal L}\oplus {\mathcal L}^{-1},\theta)$ is stable and of degree zero.*
*Proof.* By tensoring with ${\mathcal O}_{{\mathbb P}^1_S}(-\frac12(\infty))$, one gets a parabolic vector bundle $V(-\frac12(\infty))$ of degree $-1$ with trivial parabolic weights. Thus it is a usual vector bundle of degree $-1$. We decompose $V(-\frac12(\infty))$ into a direct sum of ${\mathcal O}_{{\mathbb P}^1_S}(d)$ and ${\mathcal O}_{{\mathbb P}^1_S}(-d-1)$ for some $d\geq0$. Denote ${\mathcal L}={\mathcal O}(\frac{2d+1}2(\infty))$. Then one has $$V = {\mathcal L}\oplus {\mathcal L}^{-1}.$$ In the following, we use the stability to show $d=0$. Since $(V,\nabla)$ is stable, one has $\nabla({\mathcal L})\not\subset {\mathcal L}\otimes \Omega^1_{{\mathbb P}^1_S}(\log D)$. In other words, the graded parabolic Higgs field, which is defined as the composition of the maps $${\mathcal L}\xrightarrow{\nabla} V\otimes\Omega^1_{{\mathbb P}^1_S}(\log D) \twoheadrightarrow {\mathcal L}^{-1}\otimes\Omega^1_{{\mathbb P}^1_S}(\log D),$$ is nonzero. Comparing the degrees on both sides of this nonzero map, one gets $$\frac{2d+1}2 \leq -\frac{2d+1}2 + 2.$$ Thus $d\leq 0$, hence $d=0$. The proposition follows. ◻
### Some graded parabolic Higgs bundles of rank $2$
Denote by *${\mathrm{HIG}^{{\rm gr}\,{1\over 2}}_{\lambda}}(S)$* the set of all isomorphism classes of rank-$2$ stable graded parabolic Higgs bundles $(E,\theta)$ of degree zero on $({\mathbb P}^1_S,D_S)/S$ with all parabolic weights being zero at $\{0,1,\lambda\}$ and with all parabolic weights being $1/2$ at $\infty$.
**Proposition 5**. *Let $(E,\theta)$ be a graded parabolic Higgs bundle in ${\mathrm{HIG}^{{\rm gr}\,{1\over 2}}_{\lambda}}(S)$. Then $$E = {\mathcal L}\oplus {\mathcal L}^{-1},$$ where ${\mathcal L}={\mathcal O}(\frac12(\infty))$, and the parabolic Higgs field is nonzero and of the form $$\theta\colon {\mathcal L}\rightarrow {\mathcal L}^{-1}\otimes \Omega^1_{X/S}(\log D).$$*
*Proof.* By tensoring with ${\mathcal O}_{{\mathbb P}^1_S}(-\frac12(\infty))$, one gets a parabolic vector bundle $E(-\frac12(\infty))$ of degree $-1$ with trivial parabolic weights. Thus it is a usual vector bundle of degree $-1$. We decompose $E(-\frac12(\infty))$ into a direct sum of ${\mathcal O}_{{\mathbb P}^1_S}(d)$ and ${\mathcal O}_{{\mathbb P}^1_S}(-d-1)$ for some $d\geq0$. Denote ${\mathcal L}={\mathcal O}(\frac{2d+1}2(\infty))$. Then one has $$E = {\mathcal L}\oplus {\mathcal L}^{-1}.$$ By the stability, $\theta({\mathcal L})\not\subset {\mathcal L}\otimes \Omega^1_{{\mathbb P}^1_S}(\log D)$. Thus the graded parabolic Higgs field is nonzero and of the form $$\theta\colon {\mathcal L}\rightarrow {\mathcal L}^{-1}\otimes\Omega^1_{{\mathbb P}^1_S}(\log D).$$ The same degree comparison as in the proof of Proposition 4 then forces $d=0$. ◻
**Corollary 3**. *Any parabolic Higgs bundle $(E,\theta)\in {\mathrm{HIG}^{{\rm gr}\,{1\over 2}}_{\lambda}}(S)$ is uniquely determined by $(\theta)_0 \in {\mathbb P}^1_S(S)$, the zero of the Higgs field $\theta$. One has a natural bijection induced by taking zeros $${\mathrm{HIG}^{{\rm gr}\,{1\over 2}}_{\lambda}}(S) \xrightarrow[(E,\theta)\mapsto (\theta)_0]{1:1} {\mathbb P}^1_S(S).$$*
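Here is a sketch of why the zero determines the Higgs bundle (degrees are computed with the conventions above). On underlying sheaves, a nonzero graded parabolic Higgs field $\theta\colon{\mathcal L}\rightarrow{\mathcal L}^{-1}\otimes\Omega^1_{{\mathbb P}^1_S/S}(\log D)$ is a nonzero section of $$\mathcal{H}om\Big({\mathcal O}_{{\mathbb P}^1_S},\,{\mathcal O}_{{\mathbb P}^1_S}(-(\infty))\otimes\Omega^1_{{\mathbb P}^1_S/S}(\log D)\Big)\cong {\mathcal O}_{{\mathbb P}^1_S}(1),$$ since $\Omega^1_{{\mathbb P}^1_S/S}(\log D)$ has degree $-2+4=2$. Such a section vanishes along exactly one $S$-point, and two sections with the same zero differ by a unit, which rescales $\theta$ into an isomorphic graded parabolic Higgs bundle.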
# **Parabolic Fontaine-Faltings Modules and parabolic Higgs-de Rham Flows** {#sec_main_FF_and_Higgs_de_Rham}
For ease of use, we introduce the definitions of parabolic Fontaine-Faltings modules and parabolic Higgs-de Rham flows. See also [@KrSh20] for the original definitions and more detailed studies.
## Introduction to Fontaine-Faltings modules
In this subsection, we set $S=\mathrm{Spf}(W)$. Let ${\mathcal Y}$ be a proper smooth formal scheme over $S$ and let $D_{\mathcal Y}$ be a reduced relative simple normal crossing $S$-divisor in ${\mathcal Y}$.
### Faltings' tilde functor $\widetilde{(\cdot)}$
We recall a functor $$\widetilde{(\cdot)}\colon \mathrm{MCF}({\mathcal Y},D_{\mathcal Y}) \rightarrow \widetilde{\mathrm{MIC}}({\mathcal Y},D_{\mathcal Y})$$ which was introduced by Faltings in [@Fal89]. We call it *Faltings' tilde functor* and denote it by *$\widetilde{(\cdot)}$*. For an object $(V,\nabla,\mathop{\mathrm{Fil}})$ in $\mathrm{MCF}_{a}({\mathcal Y},D_{\mathcal Y})$, denote by $\widetilde{V}$ the quotient $\bigoplus\limits_{i=0}^{a}\mathop{\mathrm{Fil}}^iV/\sim$ with $x\sim py$ for any local section $x\in\mathop{\mathrm{Fil}}^iV$ and $y$ the image of $x$ under the natural inclusion $\mathop{\mathrm{Fil}}^iV\hookrightarrow\mathop{\mathrm{Fil}}^{i-1}V$. Then the connection $\nabla$ naturally induces a $p$-connection $\widetilde{\nabla}$ on $\widetilde{V}$. We use $\widetilde{(V,\nabla,\mathop{\mathrm{Fil}})}$ to stand for the pair $(\widetilde{V},\widetilde{\nabla})$. The morphisms under the functor are defined in the obvious way.
If $V$ is $p$-torsion free, then the tilde functor can be easily described as follows: $$(\widetilde{V},\widetilde{\nabla}):= \left(\sum_{i\in{\mathbb Z}} \frac1{p^i}\mathop{\mathrm{Fil}}^iV,p\nabla\right)\subset (V,p\nabla)\otimes_{{\mathbb{Z}_p}}{\mathbb{Q}_p}.$$
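Note that Griffiths transversality of the Hodge filtration is exactly what makes the right-hand side stable under $p\nabla$: $$p\nabla\Big(\frac1{p^{i}}\mathop{\mathrm{Fil}}^iV\Big) \subseteq \frac1{p^{i-1}}\nabla\big(\mathop{\mathrm{Fil}}^iV\big) \subseteq \frac1{p^{i-1}}\mathop{\mathrm{Fil}}^{i-1}V\otimes\Omega^1_{{\mathcal Y}/S}(\log D_{\mathcal Y}) \subseteq \widetilde{V}\otimes\Omega^1_{{\mathcal Y}/S}(\log D_{\mathcal Y}),$$ while $\nabla$ itself need not preserve $\widetilde{V}$; this is why the tilde construction produces a $p$-connection rather than a connection.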
### The Frobenius pullback functor ${\mathcal F}$
Faltings also constructed a functor [@Fal89] $${\mathcal F}\colon \widetilde{\mathrm{MIC}}_a({\mathcal Y},D_{\mathcal Y}) \rightarrow \mathrm{MIC}({\mathcal Y},D_{\mathcal Y})$$ where $\widetilde{\mathrm{MIC}}_a({\mathcal Y},D_{\mathcal Y})$ is the full subcategory of $\widetilde{\mathrm{MIC}}({\mathcal Y},D_{\mathcal Y})$ consisting of the essential image of $\mathrm{MCF}_a({\mathcal Y},D_{\mathcal Y})$ under Faltings' tilde functor for $a\leq p-2$.[^1] We recall the definition as follows.
For a small affine open subset ${\mathcal U}$ of ${\mathcal Y}$, there exists an endomorphism $F_{\mathcal U}$ of ${\mathcal U}$ which lifts the absolute Frobenius on ${\mathcal U}_k$ and is compatible with the Frobenius map $F_S$ on $S$. Let $(\widetilde{V},\widetilde{\nabla})$ be an object in $\widetilde{\mathrm{MIC}}_a({\mathcal Y},D_{\mathcal Y})$. Locally on ${\mathcal U}$, applying the functor $F_{{\mathcal U}}^*$, we get a de Rham bundle over ${\mathcal U}$ $$F_{{\mathcal U}}^*(\widetilde{V}\mid_{{\mathcal U}},\widetilde{\nabla}\mid_{{\mathcal U}}),$$ where the underlying bundle is just the pullback of $\widetilde{V}\mid_{{\mathcal U}}$ along $F_{{\mathcal U}}$ and the connection is the pullback of $\widetilde{\nabla}\mid_{{\mathcal U}}$ along $F_{{\mathcal U}}$ divided by $p$. By the Taylor formula, up to a canonical isomorphism it does not depend on the choice of $F_{\mathcal U}$ in case $a\leq p-2$. In particular, on the overlap of two small affine open subsets there is a canonical isomorphism of the two logarithmic de Rham bundles. By gluing via those isomorphisms, one gets a logarithmic de Rham bundle over $({\mathcal Y},D_{\mathcal Y})$, which we denote by $${\mathcal F}(\widetilde{V},\widetilde{\nabla}).$$ The morphisms under the functor are defined in the obvious way.
### A logarithmic Fontaine-Faltings module
A logarithmic Fontaine-Faltings module[^2] over $({\mathcal Y},D_{\mathcal Y})$ is a quadruple $(V,\nabla,\mathop{\mathrm{Fil}},\Phi)$ where $(V,\nabla,\mathop{\mathrm{Fil}})$ is a logarithmic de Rham bundle over $({\mathcal Y},D_{\mathcal Y})$ and $$\Phi\colon {\mathcal F}\widetilde{(V,\nabla,\mathop{\mathrm{Fil}})} \cong (V,\nabla)$$ is an isomorphism of logarithmic de Rham bundles over $({\mathcal Y},D_{\mathcal Y})$. We call $\Phi$ the *Frobenius structure* in the Fontaine-Faltings module.
### Local logarithmic Fontaine-Faltings modules
Let $Y=\mathop{\mathrm{Spec}}(R)$ be an affine $W$-scheme with an étale map $$W[T_1,T_2,\cdots,T_{d}]\rightarrow R$$ over $W$, let $D$ be the divisor in $Y$ defined by $T_1\cdots T_d=0$, and let $U$ be the complement of $D$ in $Y$. Therefore, $U$ is a small affine scheme. In this context, we say that $(Y,D)$ is *log small*. Denote by ${\mathcal Y}$ the $p$-adic formal completion of $Y$ along the special fiber $Y_1$ and by ${\mathcal Y}_K$ the rigid-analytic space associated to ${\mathcal Y}$, which is an open subset of $Y_K^{\rm an}$. We construct the spaces $D_K$, ${\mathcal D}$, ${\mathcal D}_K$, $U_K$, ${\mathcal U}$ and ${\mathcal U}_K$ exactly analogously to those for $Y$. Denote ${\mathcal Y}^\circ_K:={\mathcal Y}_K-{\mathcal D}_K$. Denote by $\widehat{R}$ the $p$-adic completion of $R$, so ${\mathcal Y}=\text{Spf}(\widehat{R})$.
Choose a lifting $\Phi:\widehat{R}\rightarrow\widehat{R}$ of the absolute Frobenius on $R/pR$ such that $\Phi(T_i)=\mathrm{unit}\cdot T_i^p$. Recall that a *logarithmic Fontaine-Faltings module* over the $p$-adic formal completion $({\mathcal Y},{\mathcal D}_{\mathcal Y})$ of $(Y,D)$ with Hodge-Tate weights in $[a,b]$ is a quadruple $(V,\nabla,\mathop{\mathrm{Fil}},\varphi)$, where
- $(V,\nabla)$ is a finitely generated locally free[^3] de Rham $\widehat{R}$-module with logarithmic poles along $T_1\dots T_d=0$;
- $\mathop{\mathrm{Fil}}$ is a Hodge filtration on $(V,\nabla)$ of level in $[a,b]$;
- $\widetilde{V}$ is the quotient $\bigoplus\limits_{i=a}^b\mathop{\mathrm{Fil}}^iV/\sim$ with $x\sim py$ for $x\in\mathop{\mathrm{Fil}}^iV$ and $y$ the image of $x$ under the natural inclusion $\mathop{\mathrm{Fil}}^iV\hookrightarrow\mathop{\mathrm{Fil}}^{i-1}V$;
- $\varphi$ is an $\widehat{R}$-linear isomorphism $$\varphi:\widetilde{V}\otimes_{\Phi}\widehat{R} \longrightarrow V,$$
- The relative Frobenius $\varphi$ is horizontal with respect to the connections.
In particular, a logarithmic Fontaine-Faltings module may be considered as a filtered logarithmic $F$-crystal in finite, locally free modules. Denote by $\mathcal{MF}_{[a,b]}^{\nabla,\Phi}(({\mathcal Y},{\mathcal D}_{\mathcal Y})/W)$ the category of logarithmic Fontaine-Faltings modules over $({\mathcal Y},{\mathcal D}_{\mathcal Y})$ with Hodge-Tate weights in $[a,b]$. For the rest of what follows, we assume that $b-a\leq p-2$. (It will follow that the resulting category is independent of the choice of $\Phi$, exactly as in the non-logarithmic case.)
### Decomposition of the ring ${\mathbb Z}_{p^f}\otimes_{{\mathbb Z}_p} {\mathbb Z}_{p^f}$
Take a generator $\zeta\in {\mathbb Z}_{p^f}$ over ${\mathbb Z}_p$ (in other words, ${\mathbb Z}_{p^f}={\mathbb Z}_{p}[\zeta]$) and denote $$e_i:= \frac{\prod\limits_{j=0\atop j\neq i}^{f-1}\Big(1\otimes\zeta-\zeta^{\sigma^j}\otimes 1\Big)}{\prod\limits_{j=0\atop j\neq i}^{f-1}\Big(\zeta^{\sigma^i}\otimes 1-\zeta^{\sigma^j}\otimes 1\Big)} \in {\mathbb Z}_{p^f}\otimes_{{\mathbb Z}_p} {\mathbb Z}_{p^f}.$$
**Lemma 8**.
1. *the decomposition $1=e_0+e_1+\cdots+e_{f-1}$ is an idempotent one,*
2. *$(\sigma\otimes\mathop{\mathrm{id}})(e_i) = e_{i+1}$,*
3. *$(1\otimes\zeta)\cdot e_i = (\zeta^{\sigma^i}\otimes1)\cdot e_i$.*
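For instance, when $f=2$ the two idempotents are $$e_0 = \frac{1\otimes\zeta-\zeta^{\sigma}\otimes 1}{(\zeta-\zeta^{\sigma})\otimes 1},\qquad e_1 = \frac{1\otimes\zeta-\zeta\otimes 1}{(\zeta^{\sigma}-\zeta)\otimes 1},$$ and one checks directly that $e_0+e_1=1$ and, for example, that $(1\otimes\zeta)\cdot e_0=(\zeta\otimes1)\cdot e_0$: the difference has numerator $$\big(1\otimes\zeta-\zeta\otimes1\big)\big(1\otimes\zeta-\zeta^{\sigma}\otimes1\big) = 1\otimes\big(\zeta^2-(\zeta+\zeta^{\sigma})\zeta+\zeta\zeta^{\sigma}\big)=0,$$ because $\zeta+\zeta^{\sigma}$ and $\zeta\zeta^{\sigma}$ lie in ${\mathbb Z}_p$ and $\zeta$ is a root of its own minimal polynomial.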
### Endomorphism structures on logarithmic Fontaine-Faltings modules
Let $(V,\nabla,\mathop{\mathrm{Fil}},\varphi)$ be a logarithmic Fontaine-Faltings module. Recall that a *${\mathbb Z}_{p^f}$-endomorphism structure* is a ring homomorphism $$\tau\colon {\mathbb Z}_{p^f} \rightarrow \mathop{\mathrm{End}}\big((V,\nabla,\mathop{\mathrm{Fil}},\varphi)\big).$$ In this case, $V$ is endowed with both an $\widehat{R}$-module structure and a ${\mathbb Z}_{p^f}$-module structure. Suppose ${\mathbb F}_{p^f}\subseteq k$. Then one has a canonical embedding ${\mathbb Z}_{p^f}\subset W(k) \subset \widehat{R}$. Thus one gets two ${\mathbb Z}_{p^f}$-module structures on $V$. It is natural to consider the sub-$\widehat{R}$-modules, for all $i\geq0$, $$V_i:=V^{\tau=\sigma^i} := \{v\in V\mid \tau(a)(v) = \sigma^{i}(a)v \text{ for all } a\in {\mathbb Z}_{p^f}\},$$ where $\sigma\colon {\mathbb Z}_{p^f}\rightarrow {\mathbb Z}_{p^f}$ is the lifting of the absolute Frobenius map on ${\mathbb F}_{p^f}$. Clearly $V_{i}=V_{i+f}$ for any $i\geq0$, since $\sigma^f=\mathop{\mathrm{id}}$.
**Lemma 9**. *Let $(V,\nabla,\mathop{\mathrm{Fil}},\varphi)$ be a logarithmic Fontaine-Faltings module over $({\mathcal Y},D_{\mathcal Y})$ endowed with a ${\mathbb Z}_{p^f}$-endomorphism structure $$\tau\colon {\mathbb Z}_{p^f} \rightarrow \mathop{\mathrm{End}}\big((V,\nabla,\mathop{\mathrm{Fil}},\varphi)\big).$$ Assume ${\mathbb F}_{p^f}\subseteq k$. Then*
- *the connection and filtration can be restricted on $V_i$ for all $i\geq0$, and there is a decomposition of filtered logarithmic de Rham module $$(V,\nabla,\mathop{\mathrm{Fil}}) = (V_0,\nabla_0,\mathop{\mathrm{Fil}}_0) \oplus (V_1,\nabla_1,\mathop{\mathrm{Fil}}_1) \oplus \cdots \oplus (V_{f-1},\nabla_{f-1},\mathop{\mathrm{Fil}}_{f-1}).$$*
- *The restriction $\varphi_i$ of $\varphi$ on $\widehat{R}\otimes_{\Phi}\widetilde{V}_i$ gives an isomorphism of de Rham modules $$\varphi_i\colon \widehat{R} \otimes_{\Phi} \widetilde{V}_i \overset{\simeq}{\longrightarrow} V_{i+1}.$$*
*Proof.* (1) Since $\nabla$ is ${\mathbb Z}_{p^f}$-linear and $\mathop{\mathrm{Fil}}$ consists of sub-${\mathbb Z}_{p^f}$-modules, they both can be restricted to $V_i$. We only need to show $$V=V_0\oplus V_1 \oplus \cdots \oplus V_{f-1}.$$ Take a generator $\zeta\in {\mathbb Z}_{p^f}$ over ${\mathbb Z}_p$ (in other words, ${\mathbb Z}_{p^f}={\mathbb Z}_{p}[\zeta]$) and denote $$e_i:= \frac{\prod\limits_{j=0\atop j\neq i}^{f-1}\Big(\tau(\zeta)-\zeta^{\sigma^j}\Big)}{\prod\limits_{j=0\atop j\neq i}^{f-1}\Big(\zeta^{\sigma^i}-\zeta^{\sigma^j}\Big)}\in \mathop{\mathrm{End}}_{\widehat{R}}(V).$$ It is easy to check that $\mathop{\mathrm{id}}=e_0+e_1+\cdots+e_{f-1}$ is an idempotent decomposition, and that $$\tau(a) e_i=a^{\sigma^i} e_i \qquad \text{for any } a\in {\mathbb Z}_{p^f}.$$ In particular, it induces a decomposition $$V = e_0V \oplus e_1V \oplus \cdots \oplus e_{f-1}V,$$ and $V_i = e_iV$ for all $i=0,\cdots,f-1$.
(2). Since the endomorphism structure preserves the Frobenius structure $\varphi$, we have $\tau(a)\circ\varphi = \varphi\circ(\mathop{\mathrm{id}}\otimes\tau(a))$ for any $a\in{\mathbb Z}_{p^f}$. Thus, for any $v_i\in \mathop{\mathrm{Fil}}^\ell V_i$ and any $a\in{\mathbb Z}_{p^f}$, we have $$\begin{split}
\tau(a)\big(\varphi(1\otimes_\Phi [v_i])\big)= \varphi\circ(\mathop{\mathrm{id}}\otimes_\Phi \tau(a)) (1\otimes[v_i]) = \varphi(1\otimes_\Phi a^{\sigma^i}[v_i])= a^{\sigma^{i+1}} \cdot \varphi(1\otimes[v_i]).
\end{split}$$ where $[v_i]$ is the image of $v_i$ under the natural morphism $\mathop{\mathrm{Fil}}^\ell V_i \rightarrow \widetilde{V}_i$. In other words, $\varphi(1\otimes_{\Phi}[v_i])\in V_{i+1}$. Then (2) follows. ◻
### Constant Fontaine-Faltings modules
**Definition 10**. An *admissible filtered $\varphi$-module of rank $r$ over $W$* is a triple $(V,\mathop{\mathrm{Fil}},\varphi)$, where
- $V$ is a finitely generated free $W$-module of rank $r$,
- $\mathop{\mathrm{Fil}}$ is a filtration of direct summands of $V$ of form (for some $a,b\in{\mathbb Z}$) $$V = \mathop{\mathrm{Fil}}^aV \supseteq \mathop{\mathrm{Fil}}^{a+1}V \supseteq \cdots \supseteq \mathop{\mathrm{Fil}}^{b}V\supseteq \mathop{\mathrm{Fil}}^{b+1}V=0$$
- $\varphi\colon \widetilde{V} \rightarrow V$ is a $\sigma$-semilinear isomorphism (sometimes, we also use $\varphi$ to stand for the induced $W$-linear isomorphism $W\otimes_{\sigma} \widetilde{V}\rightarrow V$) where $$\widetilde{V}=\sum\limits_{\ell=a}^b\frac1{p^\ell} \mathop{\mathrm{Fil}}^\ell V \subset V_K:=V\otimes_WK.$$
The jumping indices of the filtration are called the *levels* of $(V,\mathop{\mathrm{Fil}},\varphi)$. Denote by $\mathcal{MF}^{\varphi}(W)$ the category of all admissible filtered $\varphi$-modules over $W$. Denote by $\mathcal{MF}_{[a,b]}^{\varphi}(W)$ the full subcategory of all admissible filtered $\varphi$-modules over $W$ with levels contained in $[a,b]$. An object in $\mathcal{MF}_{[0,p-2]}^{\varphi}(W)$ is called a *constant Fontaine-Faltings module*.
*Remark 9*.
1. Giving the third datum is equivalent to giving a $\sigma$-semilinear isomorphism $\varphi_K\colon V_K\rightarrow V_K$ which is admissible (i.e., compatible with the filtration) in the sense that its restriction induces an isomorphism from $\widetilde{V}$ to $V$. The admissibility condition is also called the strong $p$-divisibility condition in other parts of the literature.
2. The pair $(V_K,\varphi_K)$ forms an $F$-isocrystal over $k$. One has a natural functor $$\mathcal{MF}^\varphi(W) \rightarrow \mathop{\mathrm{F-Isoc}}(k).$$
3. Let $\mathop{\mathrm{Fil}}_K$ be the filtration on $V_K$ induced from $\mathop{\mathrm{Fil}}$. Then the triple $(V_K,\mathop{\mathrm{Fil}}_K,\varphi_K)$ is an admissible filtered $\varphi$-module over $K$ in the sense of [@FoOu].
4. If the index $a$ appearing in the filtration is zero, then $V$ is contained in $\widetilde{V}$. The restriction of $\varphi$ gives a $\sigma$-semilinear injection $\varphi_V\colon V\rightarrow V$. In other words, the pair $(V,\varphi_V)$ forms an $F$-crystal over $k$.
**Example 7**. Each $w \in K^\times$ can be written uniquely as $w=p^ru$ for some $r\in{\mathbb Z}$ and $u\in W^\times$. We construct an admissible filtered $\varphi$-module $M_w=(V,\mathop{\mathrm{Fil}},\varphi)$ of rank $1$ over $W$ as follows: Let $V$ be a free $W$-module of rank $1$ with a generator $v$. The filtration $\mathop{\mathrm{Fil}}$ is given by $$V = \mathop{\mathrm{Fil}}^rV\supset \mathop{\mathrm{Fil}}^{r+1}V=0,$$ and the Frobenius structure is given by $\varphi(1\otimes \frac{v}{p^r})=uv$. Actually, all admissible filtered $\varphi$-modules of rank $1$ over $W$ are of this form. For any two $w,w'\in K^\times$, $M_w\simeq M_{w'}$ if and only if there exists some $\xi\in W^\times$ such that $w/w'=\sigma(\xi)/\xi$. In particular, the $p$-adic valuations of $w$ and $w'$ are the same.
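To verify one direction of the last assertion: if $w/w'=\sigma(\xi)/\xi$ for some $\xi\in W^\times$ (so in particular $r=r'$), then sending the generator $v$ of $M_w$ to $\xi v'$, where $v'$ denotes the generator of $M_{w'}$, defines an isomorphism $M_w\simeq M_{w'}$: both filtrations jump at $r$, and $$\varphi'\Big(1\otimes\frac{\xi v'}{p^{r}}\Big)=\sigma(\xi)\,u'v'=u\,\xi v',$$ so the map is compatible with the Frobenius structures.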
**Definition 11**. An *admissible filtered $\varphi$-module with ${\mathbb Z}_{p^f}$-endomorphism structure of rank $r$ over $W$* is a tuple $(V,\mathop{\mathrm{Fil}},\varphi,\tau)$, where $(V,\mathop{\mathrm{Fil}},\varphi)\in \mathcal{MF}^\varphi(W)$ is of rank $rf$ and $\tau$ is a ring homomorphism ( which is called a ${\mathbb Z}_{p^f}$-endomorphism structure on $(V,\mathop{\mathrm{Fil}},\varphi)$) $$\tau\colon {\mathbb Z}_{p^f} \rightarrow \mathop{\mathrm{End}}((V,\mathop{\mathrm{Fil}},\varphi)).$$ Denote by *$\mathcal{MF}^\varphi(W)_{{\mathbb Z}_{p^f}}$* the category of all admissible filtered $\varphi$-modules with ${\mathbb Z}_{p^f}$-endomorphism structures over $W$. Similarly we define the full subcategory *$\mathcal{MF}_{[a,b]}^\varphi(W)_{{\mathbb Z}_{p^f}}$*. The objects in $\mathcal{MF}_{[0,p-2]}^\varphi(W)_{{\mathbb Z}_{p^f}}$ are called *constant Fontaine-Faltings modules with ${\mathbb Z}_{p^f}$-endomorphism structures*.
**Definition 12**. The triple $(V_K,\varphi_K,\tau)$ forms an $F$-isocrystal over $k$ with coefficient ${\mathbb Q}_{p^f}$. Denote $$P((V,\mathop{\mathrm{Fil}},\varphi,\tau),t):=P((V_K,\varphi_K,\tau),t)\quad \text{and} \quad \mathop{\mathrm{tr}}(V,\mathop{\mathrm{Fil}},\varphi,\tau) := \mathop{\mathrm{tr}}(V_K,\varphi_K,\tau).$$ Call them the *characteristic polynomial and trace of $(V,\mathop{\mathrm{Fil}},\varphi,\tau)$*.
*Remark 10*. Let $(V,\mathop{\mathrm{Fil}},\varphi,\tau) \in \mathcal{MF}^\varphi(W)_{{\mathbb Z}_{p^f}}$ be of rank $r$. Then on $V$ there are two ${\mathbb Z}_{p^f}$-module structures, the first being the natural one and the second given by $\tau$. So we may view $V$ as a $W\otimes_{{\mathbb Z}_p}{\mathbb Z}_{p^f}$-module. By the existence of the Frobenius structure, one actually shows that $V$ is locally free over $W\otimes_{{\mathbb Z}_p}{\mathbb Z}_{p^f}$ of rank $r$. Since the filtration $\mathop{\mathrm{Fil}}$ consists of direct summands as $W$-modules and is preserved by the action of ${\mathbb Z}_{p^f}$ via $\tau$, it consists of direct summands as $W\otimes_{{\mathbb Z}_p}{\mathbb Z}_{p^f}$-modules.
**Example 8**. Suppose ${\mathbb F}_{p^f}\subset k$. Any $w\in (K\otimes{\mathbb Q}_{p^f})^\times$ can be uniquely written as $$w = \sum_{i=0}^{f-1} (p^{r_i}u_i\otimes 1)\cdot e_i,$$ where $r_i\in {\mathbb Z}$ and $u_i \in W^\times$ for all $i=0,1,\cdots,f-1$. We construct a rank $1$ object $M_{w}=(V,\mathop{\mathrm{Fil}},\varphi,\tau) \in \mathcal{MF}^\varphi(W)_{{\mathbb Z}_{p^f}}$ as follows: Let $V$ be free of rank $1$ over $W\otimes {\mathbb Z}_{p^f}$ with a basis $v$, and denote $v_i=e_iv$ and $V_i=Wv_i$. The action $\tau$ of ${\mathbb Z}_{p^f}$ is given by the second factor; in other words, $\tau(a)(v) = (1\otimes a)\cdot v$. On $V$, we define the filtration $\mathop{\mathrm{Fil}}$ via $$(V,\mathop{\mathrm{Fil}}) = (V_0,\mathop{\mathrm{Fil}}_0) \oplus (V_1,\mathop{\mathrm{Fil}}_1) \oplus \cdots \oplus (V_{f-1},\mathop{\mathrm{Fil}}_{f-1}),$$ where the filtration $\mathop{\mathrm{Fil}}_i$ is defined by ($r_f:=r_0$) $$V_i = \mathop{\mathrm{Fil}}_i^{r_{i+1}}V_i\supset \mathop{\mathrm{Fil}}_i^{r_{i+1}+1}V_i=0 \quad
\text{for all } i=0,1,\cdots,f-1.$$ The Frobenius structure is given by ($u_f:=u_0$ and $v_f:=v_0$) $$\varphi \Big(\frac{v_i}{p^{r_{i+1}}}\Big) = u_{i+1}v_{i+1} \quad
\text{for all } i=0,1,\cdots,f-1.$$ In this case, the associated $F$-isocrystal over $k$ with coefficients in ${\mathbb Q}_{p^f}$ can be simply represented as $(K\otimes_{{\mathbb Q}_p}{\mathbb Q}_{p^f}\cdot v,F)$ with Frobenius given by $$F(v) = w\cdot v.$$
**Definition 13**. The constant Fontaine-Faltings module corresponding to $w=p$ is called the *cyclotomic Fontaine-Faltings module*.
By direct computation, one easily checks the following result.
**Lemma 10**. *Suppose ${\mathbb F}_{p^f}\subset k$. Then all objects of rank $1$ in $\mathcal{MF}^\varphi(W)_{{\mathbb Z}_{p^f}}$ are of the form $M_w$ for some $w\in (K\otimes {\mathbb Q}_{p^f})^\times$. And one has $$M_{w} \otimes M_{w'} \cong M_{w\cdot w'}.$$*
**Corollary 4**. *Let $k$ be a finite field and denote by $k'$ the quadratic field extension of $k$. Let ${\mathcal E}\in \mathcal{MF}^{\varphi}_{[0,0]}(W(k))_{{\mathbb Z}_{p^f}}$ be of rank $1$. Then there exists a rank $1$ object ${\mathcal E}'\in\mathcal{MF}^{\varphi}_{[0,0]}(W(k'))_{{\mathbb Z}_{p^f}}$ such that $${\mathcal E}= {\mathcal E}'\otimes {\mathcal E}' \in \mathcal{MF}^{\varphi}_{[0,0]}(W(k'))_{{\mathbb Z}_{p^f}}.$$*
*Proof.* Let ${\mathcal E}$ correspond to $w=\sum_{i=0}^{f-1} (u_i\otimes1)\cdot e_i$, where $u_i\in W(k)^\times$. Since $k'$ is the quadratic extension of $k$ and $u_i$ is a unit in $W(k)$, one has $\sqrt{u_i}\in W(k')$. Denote by ${\mathcal E}'$ the object corresponding to $w':=\sum_{i=0}^{f-1} (\sqrt{u_i}\otimes 1)\cdot e_i$. Then the corollary follows from . ◻
**Definition 14**. The object $M_w$ is called *of finite order* if there exists some $d\geq 1$ such that $$\underbrace{M_w \otimes M_w \otimes \cdots \otimes M_w}_{d} \cong M_1.$$ The smallest such integer $d$ is called the order of $M_w$.
**Corollary 5**. *Let ${\mathcal E}\in \mathcal{MF}^{\varphi}_{[0,0]}(W(k))_{{\mathbb Z}_{p^f}}$ be of rank $1$. Then there exists a rank $1$ object ${\mathcal E}'\in\mathcal{MF}^{\varphi}_{[0,0]}(W(k))_{{\mathbb Z}_{p^f}}$ such that $${\mathcal E}\otimes {\mathcal E}'\otimes {\mathcal E}'$$ is of finite order. The order divides $\#k-1$.*
### Constant Fontaine-Faltings modules over a base
Let $Y$ be a smooth scheme over $W$ with geometrically connected generic fiber. The pullback operator along the structure morphism ${\mathcal Y}\rightarrow \mathop{\mathrm{Spf}}(W)$ induces a natural functor $$\mathcal{MF}_{[0,p-2]}^{\varphi}(W) \rightarrow \mathcal{MF}_{[0,p-2]}({\mathcal Y}/W).$$ We also call the objects in the essential image of this pullback functor *constant Fontaine-Faltings modules*. Similarly for those with endomorphism structures.
**Lemma 11**. *If $Y$ is projective over $W$, then a Fontaine-Faltings module is constant if and only if its underlying de Rham bundle is a direct sum of copies of the trivial de Rham line bundle $({\mathcal O}_{{\mathcal Y}},\mathop{\mathrm{d}})$.*
### Restriction of Fontaine-Faltings modules on points.
Let $Y$ be a smooth scheme over $W$ with geometrically connected generic fiber. Let $x$ be a closed point in $Y_k$. Denote by $k_x$ the residue field at $x$, which is a finite extension of $k$. By the smoothness of $Y$, there exists a $W(k_x)$-point $\widehat{x}$ on $Y$, which lifts $x$. By restriction on $\widehat{x}$, one gets a functor $$\mathcal{MF}_{[0,p-2]}({\mathcal Y}/W) \rightarrow \mathcal{MF}_{[0,p-2]}^{\varphi}(W(k_x)).$$ In the following, we describe this functor clearly and show that it does not depend on the choice of the lifting $\widehat{x}$.
Let $(V,\nabla,\mathop{\mathrm{Fil}},\varphi) \in \mathcal{MF}_{[0,p-2]}({\mathcal Y}/W)$. By shrinking $Y$, we may assume that $Y = \mathop{\mathrm{Spec}}(R)$ is small affine and contains $\widehat{x}$, and that there exists a Frobenius lifting $\Phi\colon \widehat{R}\rightarrow \widehat{R}$ preserving the $W$-point $\widehat{x}$. In other words, we have the commutative diagram $$\xymatrix{
\widehat{R} \ar[r]^\Phi \ar@{->>}[d]_{\widehat{x}} & \widehat{R} \ar@{->>}[d]^{\widehat{x}} \\
W \ar[r]^{\sigma} & W \\
}$$ In this case, $V$ is a finitely generated free $\widehat{R}$-module and $\mathop{\mathrm{Fil}}$ is a filtration of direct summands of $V$. Denote $$V_{\widehat{x}}:= V \otimes_{\widehat{x}} W \quad\text{and}\quad \mathop{\mathrm{Fil}}_{\widehat{x}}^\ell V_{\widehat{x}}:= \mathop{\mathrm{Fil}}^\ell V \otimes_{\widehat{x}} W \quad\text{for all } \ell.$$ Clearly, one has $\widetilde{V_{\widehat{x}}} = \widetilde{V} \otimes_{\widehat{x}} W$. Taking $\otimes_{\widehat{x}} W$ on the Frobenius structure $\varphi\colon \widehat{R}\otimes_\Phi \widetilde{V} \rightarrow V$, one gets an isomorphism $$\varphi_{\widehat{x}} \colon W\otimes_\sigma \widetilde{V_{\widehat{x}}} \rightarrow V_{\widehat{x}}.$$ Thus one gets a constant Fontaine-Faltings module over $W(k_x)$ $$\Big(V_{\widehat{x}},\mathop{\mathrm{Fil}}_{\widehat{x}},\varphi_{\widehat{x}}\Big) \in \mathcal{MF}_{[0,p-2]}^{\varphi}(W(k_x)).$$
*Remark 11*. Up to a canonical isomorphism, this constant Fontaine-Faltings module does not depend on the choice of $\Phi$, because up to canonical equivalence the category of Fontaine-Faltings modules does not. The underlying reason is that a Fontaine-Faltings module corresponds to an $F$-crystal after forgetting the Hodge filtration.
**Lemma 12**. *Let $\widehat{x}$ and $\widehat{x}'$ be two liftings of $x$. Then there exists a canonical isomorphism $$\Big(V_{\widehat{x}},\mathop{\mathrm{Fil}}_{\widehat{x}},\varphi_{\widehat{x}}\Big) \simeq \Big(V_{\widehat{x}'},\mathop{\mathrm{Fil}}_{\widehat{x}'},\varphi_{\widehat{x}'}\Big).$$ In other words, the isomorphism class of $\Big(V_{\widehat{x}},\mathop{\mathrm{Fil}}_{\widehat{x}},\varphi_{\widehat{x}}\Big)$ does not depend on the choice of the lifting $\widehat{x}$.*
*Proof.* By the smoothness of $Y$, after shrinking $Y$, we may find an automorphism of ${\mathcal Y}$ $$\widehat{\mathop{\mathrm{id}}} \colon {\mathcal Y}\rightarrow {\mathcal Y}$$ which lifts the identity map on $Y_k$ and sends $\widehat{x}$ to $\widehat{x}'$. By using the Taylor formula, one gets a canonical equivalence of categories $$\widehat{\mathop{\mathrm{id}}}^*\colon \mathcal{MF}_{[0,p-2]}({\mathcal Y}/W) \rightarrow \mathcal{MF}_{[0,p-2]}({\mathcal Y}/W).$$ Choose a Frobenius lifting $\Phi$ that preserves $\widehat{x}$. Denote $\Phi':= \widehat{\mathop{\mathrm{id}}}^{-1} \circ \Phi \circ \widehat{\mathop{\mathrm{id}}}$, which is a Frobenius lifting that preserves $\widehat{x}'$. Now one has the following commutative diagram of functors $$\xymatrix{
\mathcal{MF}^{\Phi}_{[0,p-2]}({\mathcal Y}/W) \ar[r]^{\widehat{\mathop{\mathrm{id}}}^*} \ar[d] & \mathcal{MF}^{\Phi'}_{[0,p-2]}({\mathcal Y}/W)\ar[d]
\\
\mathcal{MF}^{\varphi}_{[0,p-2]}(W(k_x)) \ar@{=}[r] & \mathcal{MF}^{\varphi}_{[0,p-2]}(W(k_x))
\\
}$$ ◻
## Introduction to Higgs-de Rham flows
Lan-Sheng-Zuo introduced the notion of a (periodic) Higgs-de Rham flow over a smooth log scheme $(X,D)$ over $W(k)$ and showed that the category of $f$-periodic Higgs bundles is equivalent to the category of log Fontaine-Faltings modules over $(X,D)$ endowed with a ${\mathbb Z}_{p^f}$-endomorphism structure.
Later, Sun-Yang-Zuo introduced the projective Higgs-de Rham flow, which generalizes the notion of Higgs-de Rham flow; it is essentially the parabolic Higgs-de Rham flow with only one parabolic weight along each component of the boundary divisor.
### Functor $\overline{\mathop{\mathrm{Gr}}}$
Denote by $\mathcal{H}_a(Y,D_Y)$ the category of tuples $(E,\theta,V,\nabla,\mathop{\mathrm{Fil}},\psi)$, where
- $(E,\theta)$ is a graded logarithmic Higgs bundle [^4] over $Y$;
- $(V,\nabla,\mathop{\mathrm{Fil}})\in \mathrm{MCF}_a(Y,D_Y)$;
- and $\psi: \mathop{\mathrm{Gr}}(V,\nabla,\mathop{\mathrm{Fil}}) \cong (E,\theta)$ is an isomorphism of logarithmic Higgs bundles over $(Y,D_Y)$.
There is a natural functor from $\mathrm{MCF}_a(Y,D_Y)$ to $\mathcal{H}_a(Y,D_Y)$, given as follows: for any object $(V,\nabla,\mathop{\mathrm{Fil}})\in \mathrm{MCF}_a(Y,D_Y)$, $$\overline{\mathop{\mathrm{Gr}}}(V,\nabla,\mathop{\mathrm{Fil}}):=(E,\theta,V,\nabla,\mathop{\mathrm{Fil}},\psi),$$ where $(E,\theta):=\mathop{\mathrm{Gr}}(V,\nabla,\mathop{\mathrm{Fil}})$ is the graded bundle with the graded Higgs field induced by the connection, and $\psi$ is the identity map identifying $\mathop{\mathrm{Gr}}(V,\nabla,\mathop{\mathrm{Fil}})$ with $(E,\theta)$.
*Remark 12*. There are also truncated versions $\overline{\mathop{\mathrm{Gr}}}_n$ of this functor defined in [@LSZ19; @SYZ22]. For our application, we only use the version defined here, which is the limit functor of the $\overline{\mathop{\mathrm{Gr}}}_n$. We note that the functor $\overline{\mathop{\mathrm{Gr}}}$ is actually an equivalence, while $\overline{\mathop{\mathrm{Gr}}}_n$ is not.
### The functor $\mathcal{T}$ and inverse Cartier functor $\mathcal{C}^{-1}$
One defines a natural functor from the category $\mathcal{H}(Y,D_Y)$ to the category $\widetilde{\mathrm{MIC}}(Y,D_Y)$ by setting, for any object $(E,\theta,V,\nabla,\mathop{\mathrm{Fil}},\psi) \in \mathcal{H}(Y,D_Y)$, $$\mathcal{T}(E,\theta,V,\nabla,\mathop{\mathrm{Fil}},\psi):= \widetilde{(V,\nabla,\mathop{\mathrm{Fil}})}.$$ The inverse Cartier functor is defined to be the composition $${\mathcal C}^{-1} \colon \mathcal{H}_a(Y,D_Y) \xrightarrow{{\mathcal T}} \widetilde{\mathrm{MIC}}_a(Y,D_Y) \xrightarrow{{\mathcal F}} \mathrm{MIC}(Y,D_Y)$$ for $a\leq p-2$.
*Remark 13*.
1. Similarly, there is also a truncated version $\mathcal{T}_n$ in [@LSZ19; @SYZ22], whose definition is much more complicated; its limit is just $\mathcal{T}$.
2. Faltings' tilde functor $\widetilde{(\cdot)}$ is just the composition ${\mathcal T}\circ\overline{\mathop{\mathrm{Gr}}}$. For $0\leq a\leq p-2$, one has the following commutative diagram $$\xymatrix@R=2cm@C=2cm{
& \mathcal{H}_a(Y,D_Y)\ar[dr]^-{{\mathcal C}^{-1}} \ar@/^50pt/[dd]^{{\mathcal T}} \ar[d]|{\text{forget}}&\\
\mathrm{MCF}_a(Y,D_Y)\ar[dr]_{\widetilde{(\cdot)}}\ar[ur]^{\overline{\mathop{\mathrm{Gr}}}} \ar[r]|{\text{grading}}
& \mathrm{HIG}^{\rm gr}(Y,D_Y) & \mathrm{MIC}(Y,D_Y)\\
&\widetilde{\mathrm{MIC}}_a(Y,D_Y) \ar[ur]_-{{\mathcal F}} \ar[u]|{\text{grading}}\\
}$$ where $\mathrm{HIG}^{\rm gr}(Y,D_Y)$ is the category of graded logarithmic Higgs bundles over $(Y,D_Y)$, and "forget" and "grading" stand for the natural functors.
### Higgs-de Rham flow
Recall [@LSZ19] that a *Higgs-de Rham flow* over $(Y,D_Y)$ is a sequence consisting of infinitely many alternating terms of filtered logarithmic de Rham bundles and logarithmic Higgs bundles $$\left\{
(V,\nabla,\mathop{\mathrm{Fil}})_{-1},
(E,\theta)_{0},
(V,\nabla,\mathop{\mathrm{Fil}})_{0},
(E,\theta)_{1},
(V,\nabla,\mathop{\mathrm{Fil}})_{1},
\cdots\right\},$$ which are related to each other by the following diagram inductively $$\label{diag:HDF}
\xymatrix@W=10mm@C=3mm@R=5mm{
(V,\nabla,\mathop{\mathrm{Fil}})_{-1} \ar[dr]^{\mathop{\mathrm{Gr}}} && (V,\nabla,\mathop{\mathrm{Fil}})_{0}\ar[dr]^{\mathop{\mathrm{Gr}}}
&& (V,\nabla,\mathop{\mathrm{Fil}})_{1}\ar[dr]^{\mathop{\mathrm{Gr}}}
&\\
& (E,\theta)_{0} \ar[ur]^{{\mathcal C}^{-1}}
&& (E,\theta)_{1} \ar[ur]^{{\mathcal C}^{-1}}
&& \cdots\\
}$$ where
- $(V,\nabla,\mathop{\mathrm{Fil}})_{-1}$ is a filtered de Rham bundle over $(Y,D_Y)$ of level in $[0,p-2]$;
- Inductively, for $i\geq0$, $(E,\theta)_i$ is the graded Higgs bundle $\mathop{\mathrm{Gr}}\left((V,\nabla,\mathop{\mathrm{Fil}})_{i-1}\right)$, $$(V,\nabla)_i:={\mathcal C}^{-1} ((E,\theta)_i,(V,\nabla,\mathop{\mathrm{Fil}})_{i-1} ,\mathrm{id})$$ and $\mathop{\mathrm{Fil}}_i$ is a Hodge filtration on $(V,\nabla)_i$ of level in $[0,p-2]$.
*Remark 14*.
1. In [@LSZ19], Higgs-de Rham flow is defined over any truncated level, whose definition is much more complicated.
2. The essential data given in a Higgs-de Rham flow are just $$V_{-1},\nabla_{-1},\mathop{\mathrm{Fil}}_{-1},\mathop{\mathrm{Fil}}_{0},\mathop{\mathrm{Fil}}_{1},\mathop{\mathrm{Fil}}_{2},\cdots$$ since the other terms can be constructed from these, e.g. $E_0$, $\theta_0$, $V_1$, $\nabla_1$, $\cdots$.
The Higgs-de Rham flow is called *$f$-periodic* if there exists an isomorphism $$\Phi\colon (E_{f},\theta_{f},V_{f-1},\nabla_{f-1},\mathop{\mathrm{Fil}}_{f-1},\mathrm{id}) \cong (E_0,\theta_0,V_{-1},\nabla_{-1},\mathop{\mathrm{Fil}}_{-1},\mathrm{id})$$ such that it induces isomorphisms, for all $i\geq0$, $$(E,\theta)_{f+i}\cong (E,\theta)_{i} \quad\text{and}\quad (V,\nabla,\mathop{\mathrm{Fil}})_{f+i}\cong (V,\nabla,\mathop{\mathrm{Fil}})_{i}.$$ We simply represent this periodic Higgs-de Rham flow by the following diagram $$\label{diag:HDF}
\xymatrix@W=10mm@C=3mm@R=10mm{
& (V,\nabla,\mathop{\mathrm{Fil}})_{0} \ar[dr]^{\mathop{\mathrm{Gr}}}
&& (V,\nabla,\mathop{\mathrm{Fil}})_{1} \ar[dr]^{\mathop{\mathrm{Gr}}}
&& (V,\nabla,\mathop{\mathrm{Fil}})_{f-1} \ar[dr]^{\mathop{\mathrm{Gr}}}
&\\
(E,\theta)_{0} \ar[ur]^{{\mathcal C}^{-1}}
&& (E,\theta)_{1} \ar[ur]^{{\mathcal C}^{-1}}
&& \cdots \ar[ur]^{{\mathcal C}^{-1}}
&& (E,\theta)_{f} \ar@/^30pt/[llllll]^{\Phi}\\}$$
**Theorem 1** (Lan-Sheng-Zuo [@LSZ19]). *There exists an equivalence between the category of logarithmic Fontaine-Faltings modules over $(Y,D_Y)$ and the category of $1$-periodic logarithmic Higgs-de Rham flows over $(Y,D_Y)$.*
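The simplest illustration (a sketch, with all choices taken to be the trivial ones) is the rank-one flow generated by the trivial Higgs bundle $({\mathcal O}_Y,0)$ with the trivial filtration at every step: $${\mathcal C}^{-1}({\mathcal O}_Y,0)=({\mathcal O}_Y,\mathop{\mathrm{d}}), \qquad \mathop{\mathrm{Gr}}\big({\mathcal O}_Y,\mathop{\mathrm{d}},\mathop{\mathrm{Fil}}_{\mathrm{triv}}\big)=({\mathcal O}_Y,0),$$ so the flow is $1$-periodic with $\Phi=\mathop{\mathrm{id}}$, and under the equivalence it should correspond to a constant Fontaine-Faltings module of rank one.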
## Parabolic Fontaine-Faltings modules and Parabolic Higgs-de Rham flows {#parabolic-fontaine-faltings-modules-and-parabolic-higgs-de-rham-flows}
### Parabolic versions of some categories and functors
In order to define the parabolic version of the Higgs-de Rham flow, we need parabolic versions of the categories and functors appearing in the following diagram from [@SYZ22, Section 1.2.1]. The most crucial part is to define the inverse Cartier functor (or, equivalently, the Frobenius pullback functor).
$$\label{diag:C^{-1}}
\xymatrix{
& {\mathcal H}(X_n)\ar[drr]^-{C_n^{-1}} \ar[dd]^{{\mathcal T}_n}
&&\\
\mathrm{MCF}_{p-2}(X_n)\ar[dr]_{\widetilde{(\cdot)}}\ar[ur]^{\overline{\mathop{\mathrm{Gr}}}}
&&& \mathrm{MIC}(X_n)\\
&\widetilde{\mathrm{MIC}}(X_n) \ar[urr]_-{F_n=\{F_{\mathcal U}^*\}_{\mathcal U}}&&\\
}$$
In this section, we set $S=\mathop{\mathrm{Spec}}(W(k))$ and $S_n = \mathop{\mathrm{Spec}}(W_n(k))$. Denote $$(Y_n,D_{Y_n})\coloneqq (Y,D_Y)\times_SS_n.$$
**Definition 15**. Denote by $\mathrm{MIC}((Y_n,D_{Y_n})/S_n)$ the category of all parabolic de Rham bundles over $(Y_n,D_{Y_n})/S_n$.
**Definition 16**. A *filtered parabolic de Rham bundle over $(Y_n,D_{Y_n})/S_n$* is a triple $(V,\nabla,\mathop{\mathrm{Fil}})$, which consists of a parabolic de Rham bundle $(V,\nabla)$ over $(Y_n,D_{Y_n})/S_n$ and a filtration $\mathop{\mathrm{Fil}}$ by parabolic subbundles such that $(V_\alpha,\nabla_\alpha,\mathop{\mathrm{Fil}}_\alpha)$ forms a usual filtered de Rham bundle over $(Y_n,D_{Y_n})/S_n$ for each $\alpha\in {\mathbb Q}^n$. Denote by *$\mathrm{MCF}_{p-2}((Y_n,D_{Y_n})/S_n)$* the category of all filtered parabolic de Rham bundles over $(Y_n,D_{Y_n})/S_n$ whose levels are contained in $[0,p-2]$.
**Definition 17**. Denote by *${\mathcal H}((Y_n,D_{Y_n})/S_n)$* the category of tuples $(E,\theta,\overline{V},\overline{\nabla},\overline{\mathop{\mathrm{Fil}}},\overline{\psi})$, consisting of
- a graded parabolic Higgs bundle $(E,\theta)$ over $(Y_n,D_{Y_n})/S_n$ with exponent $\leq p-2$,
- a filtered parabolic de Rham bundle $(\overline{V},\overline{\nabla},\overline{\mathop{\mathrm{Fil}}})$ over $(Y_{n-1},D_{Y_{n-1}})/S_{n-1}$, and
- an isomorphism of graded parabolic Higgs bundles over $(Y_{n-1},D_{Y_{n-1}})/S_{n-1}$ $$\overline{\psi}\colon \mathop{\mathrm{Gr}}(\overline{V},\overline{\nabla},\overline{\mathop{\mathrm{Fil}}}) \cong (E,\theta)\otimes_{W_n(k)} W_{n-1}(k).$$
**Definition 18**. A *parabolic integrable $p$-connection $\nabla=\{\nabla_\alpha\}$* on a parabolic vector bundle $V$ over $(Y_n,D_{Y_n})/S_n$ is a collection of compatible integrable $p$-connections $\nabla_\alpha$ on the $V_\alpha$, i.e. for any $\alpha\leq\beta$, the following diagram commutes $$\xymatrix{
V_\alpha \ar[r]^-{\nabla_\alpha} \ar[d] & V_\alpha \otimes \Omega_{Y_n/S_n}(\log D_{Y_n}) \ar[d] \\
V_\beta \ar[r]^-{\nabla_\beta} & V_\beta \otimes \Omega_{Y_n/S_n}(\log D_{Y_n})\\
}$$ Let *$\widetilde{\mathrm{MIC}}((Y_n,D_{Y_n})/S_n)$* denote the category of pairs $(V,\nabla)$, consisting of a parabolic vector bundle $V$ and a parabolic integrable $p$-connection $\nabla$ on $V$.
#### *Functor $\overline{\mathop{\mathrm{Gr}}}$.*
For an object $(V,\nabla,\mathop{\mathrm{Fil}})$ in $\mathrm{MCF}_{p-2}((Y_n,D_{Y_n})/S_n)$, the functor $\overline{\mathop{\mathrm{Gr}}}$ is given by $$\overline{\mathop{\mathrm{Gr}}}(V,\nabla,\mathop{\mathrm{Fil}})=(E,\theta,\overline{V},\overline{\nabla},\overline{\mathop{\mathrm{Fil}}},\overline{\psi}),$$ where $(E,\theta)=\mathop{\mathrm{Gr}}(V,\nabla,\mathop{\mathrm{Fil}})$ is the graded parabolic Higgs bundle, $(\overline{V},\overline{\nabla},\overline{\mathop{\mathrm{Fil}}})$ is the reduction of $(V,\nabla,\mathop{\mathrm{Fil}})$ modulo $p^{n-1}$, and $\overline{\psi}$ is the identifying map $$\mathop{\mathrm{id}}\colon \mathop{\mathrm{Gr}}(\overline{V},\overline{\nabla},\overline{\mathop{\mathrm{Fil}}})= (E,\theta)\otimes_{W_n(k)}W_{n-1}(k).$$
#### *Faltings tilde functor $\widetilde{(\cdot)}$.*
For an object $(V,\nabla,\mathop{\mathrm{Fil}})$ in $\mathrm{MCF}_{p-2}((Y_n,D_{Y_n})/S_n)$, we denote by $\widetilde{(V,\nabla,\mathop{\mathrm{Fil}})}$ the quotient $\bigoplus\limits_{i=0}^{p-2}\mathop{\mathrm{Fil}}^iV/\sim$ with $x\sim py$ for any $x\in \mathop{\mathrm{Fil}}^iV$ and $y$ the image of $x$ under the natural inclusion $\mathop{\mathrm{Fil}}^iV\hookrightarrow \mathop{\mathrm{Fil}}^{i-1}V$.
#### *The construction of functor ${\mathcal T}_n$.*
Let $(E,\theta,\bar{V},\bar{\nabla},\overline{\mathop{\mathrm{Fil}}},\psi)$ be an object in ${\mathcal H}((Y_n,D_{Y_n})/S_n)$. For any $\alpha\in{\mathbb Q}^n$, denote $$(\widetilde{V}_\alpha,\widetilde{\nabla}_\alpha)\coloneqq{\mathcal T}_n(E_\alpha,\theta_\alpha,\bar{V}_\alpha,\bar{\nabla}_\alpha,\overline{\mathop{\mathrm{Fil}}}_\alpha,\psi_\alpha),$$ where the functor on the right hand side, still denoted by ${\mathcal T}_n$, is the usual functor defined as in [@SYZ22]. Then the collection $\{(\widetilde{V}_\alpha,\widetilde{\nabla}_\alpha)\}_\alpha$ forms a parabolic vector bundle together with a parabolic $p$-connection. We denote it by $${\mathcal T}_n(E,\theta,\bar{V},\bar{\nabla},\overline{\mathop{\mathrm{Fil}}},\psi)\coloneqq\{(\widetilde{V}_\alpha,\widetilde{\nabla}_\alpha)\}_\alpha.$$
#### *The Frobenius pullback functor $\Phi^*_n$.*
Based on the usual Frobenius pullback functor $\Phi^*_n$ for usual bundles with $p$-connections, we define the Frobenius pullback for parabolic vector bundles with parabolic $p$-connections, which we still denote by $\Phi^*_n$ by abuse of notation.
For any $\gamma\in {\mathbb Q}^n$, we have a canonical parabolic $p$-connection $p\cdot \mathop{\mathrm{d}}(\gamma D)$ on the parabolic vector bundle ${\mathcal O}_Y(\gamma D)$. We simply set $$\Phi^*_n\Big( {\mathcal O}_Y(\gamma D),p\cdot \mathop{\mathrm{d}}(\gamma D)\Big) \coloneqq \Big( {\mathcal O}_Y(p\gamma D),\mathop{\mathrm{d}}(p\gamma D)\Big).$$
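For instance, if $\gamma$ has a single nonzero entry $\tfrac12$ along $D_1$, the defining formula gives $$\Phi^*_n\Big({\mathcal O}_Y\big(\tfrac12 D_1\big),p\cdot\mathop{\mathrm{d}}\big(\tfrac12 D_1\big)\Big) = \Big({\mathcal O}_Y\big(\tfrac p2 D_1\big),\mathop{\mathrm{d}}\big(\tfrac p2 D_1\big)\Big);$$ for odd $p$ one has $\tfrac p2=\tfrac{p-1}2+\tfrac12$, so the parabolic weight along $D_1$ is again $\tfrac12$, shifted only by the integral divisor $\tfrac{p-1}2 D_1$.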
Similarly to the pullback of parabolic de Rham bundles, we can define the Frobenius pullback functor as follows.
**Definition 19**. For any $(V,\nabla)$ in $\widetilde{\mathrm{MIC}}((Y_n,D_{Y_n})/S_n)$, we first denote $$\Phi^*_{n,\gamma}(V,\nabla)
\coloneqq\Phi_n^*\Big(\big((V,\nabla) \otimes ({\mathcal O}_Y(\gamma D),p\cdot \mathop{\mathrm{d}}(\gamma D))^{-1}\big)_0\Big)\otimes \Phi_n^*\big({\mathcal O}_Y(\gamma D),p\cdot \mathop{\mathrm{d}}(\gamma D)\big).$$ We then set $$\Phi^*_n(V,\nabla)\coloneqq \bigcup_{\gamma\in{\mathbb Q}^n} \Phi^*_{n,\gamma}(V,\nabla).$$ Denote by $C_n^{-1} \coloneqq \Phi_n^*\circ {\mathcal T}_n$ the composite functor and call it the *inverse Cartier functor*.
### Parabolic Fontaine-Faltings modules with endomorphism structure
Recall [@SYZ22, Lemma 1.1] or [@LSZ19, Lemma 5.6]; we extend the definition to the parabolic version.
**Definition 20**. A *parabolic Fontaine-Faltings module over $(Y_n,D_{Y_n})/S_n$* is a tuple $(V,\nabla,\mathop{\mathrm{Fil}},\varphi)$, where
- $(V,\nabla,\mathop{\mathrm{Fil}})\in \mathrm{MCF}_{p-2}((Y_n,D_{Y_n})/S_n)$ is a filtered parabolic de Rham bundle over $(Y_n,D_{Y_n})/S_n$ of level in $[0,p-2]$;
- $\varphi\colon C_n^{-1}\circ \overline{\mathop{\mathrm{Gr}}} (V,\nabla,\mathop{\mathrm{Fil}})\rightarrow (V,\nabla)$ is an isomorphism of parabolic de Rham bundles.
We denote by *$\mathcal{MF}_{[0,p-2]}^\nabla((Y_n,D_{Y_n})/S_n)$* the category of all parabolic Fontaine-Faltings modules over $(Y_n,D_{Y_n})/S_n$.
**Definition 21**. Let $M\in \mathcal{MF}_{[0,p-2]}^\nabla((Y_n,D_{Y_n})/S_n)$. A ring homomorphism $$\iota\colon \mathbb Z_{p^f}\rightarrow \mathop{\mathrm{End}}(M)$$ is called a *${\mathbb Z}_{p^f}$-endomorphism structure*. Denote by *$\mathcal{MF}_{[0,p-2]}^\nabla((Y_n,D_{Y_n})/S_n)_{{\mathbb Z}_{p^f}}$* the category of all parabolic Fontaine-Faltings modules with ${\mathbb Z}_{p^f}$-endomorphism structures over $(Y_n,D_{Y_n})/S_n$.
Let $(V,\nabla,\mathop{\mathrm{Fil}},\varphi)$ be a parabolic Fontaine-Faltings module with level in $[0,a]$ and let $(V,\nabla,\mathop{\mathrm{Fil}},\varphi)'$ be a parabolic Fontaine-Faltings module with level in $[0,b]$. On the tensor parabolic de Rham bundle $(V,\nabla)\otimes (V,\nabla)'$, set $$\mathop{\mathrm{Fil}}_{tot}^n(V\otimes V'):= \sum_{i+j=n}\mathop{\mathrm{Fil}}^iV\otimes \mathop{\mathrm{Fil}}^jV'\quad \text{for all $n\in{\mathbb Z}$};$$ it forms a Hodge filtration on the parabolic de Rham bundle $(V,\nabla)\otimes (V,\nabla)'$. Denote $$(V,\nabla,\mathop{\mathrm{Fil}})\otimes (V,\nabla,\mathop{\mathrm{Fil}})':=(V\otimes V',\nabla\otimes\mathop{\mathrm{id}}+\mathop{\mathrm{id}}\otimes\nabla',\mathop{\mathrm{Fil}}_{tot}).$$ By its definition, the parabolic version of the inverse Cartier functor naturally preserves tensor products, so we can equip $(V,\nabla,\mathop{\mathrm{Fil}})\otimes (V,\nabla,\mathop{\mathrm{Fil}})'$ with the Frobenius structure $\varphi\otimes\varphi'$ and thus obtain the tensor product $$(V,\nabla,\mathop{\mathrm{Fil}},\varphi) \otimes (V,\nabla,\mathop{\mathrm{Fil}},\varphi)'.$$
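For instance, if both filtrations have level in $[0,1]$, then the total filtration is $$\mathop{\mathrm{Fil}}_{tot}^0 = V\otimes V' \supseteq \mathop{\mathrm{Fil}}_{tot}^1 = \mathop{\mathrm{Fil}}^1V\otimes V' + V\otimes\mathop{\mathrm{Fil}}^1V' \supseteq \mathop{\mathrm{Fil}}_{tot}^2 = \mathop{\mathrm{Fil}}^1V\otimes\mathop{\mathrm{Fil}}^1V' \supseteq 0,$$ so in general the level of the tensor product is contained in $[0,a+b]$; in particular, for the result to be a parabolic Fontaine-Faltings module one needs $a+b\leq p-2$.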
For the tensor product of Fontaine-Faltings modules with endomorphism structure, we set the underlying parabolic Fontaine-Faltings module of $$(V,\nabla,\mathop{\mathrm{Fil}},\varphi,\iota) \otimes (V,\nabla,\mathop{\mathrm{Fil}},\varphi,\iota)'$$ to be $$\Big((V,\nabla,\mathop{\mathrm{Fil}},\varphi) \otimes (V,\nabla,\mathop{\mathrm{Fil}},\varphi)'\Big)^{\iota=\iota'},$$ on which the actions of $\iota$ and $\iota'$ coincide with each other, and we use this common action to define the ${\mathbb Z}_{p^f}$-endomorphism structure.
We also define the symmetric product, wedge product and determinant of parabolic Fontaine-Faltings modules in the usual way.
### Parabolic Higgs-de Rham flows
**Definition 22**. A parabolic *Higgs-de Rham flow* over $(Y_n,D_{Y_n})\subset (Y_{n+1},D_{Y_{n+1}})$ is a sequence consisting of infinitely many alternating terms of filtered parabolic de Rham bundles and graded parabolic Higgs bundles $$\left\{
(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1},
(E,\theta)_{0}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{0}^{(n)},
(E,\theta)_{1}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{1}^{(n)},
\cdots\right\},$$ which are related to each other by the following diagram inductively $$\tiny
\xymatrix@W=10mm@C=-3mm@R=5mm{
&& (V,\nabla,\mathop{\mathrm{Fil}})_{0}^{(n)} \ar[dr]^{\mathop{\mathrm{Gr}}}
&& (V,\nabla,\mathop{\mathrm{Fil}})_{1}^{(n)} \ar[dr]^{\mathop{\mathrm{Gr}}}
&& (V,\nabla,\mathop{\mathrm{Fil}})_{2}^{(n)} \ar[dr]^{\mathop{\mathrm{Gr}}}
&\\
& (E,\theta)_{0}^{(n)} \ar[ur]^{{\mathcal C}^{-1}_n} \ar@{..>}[dd]
&& (E,\theta)_{1}^{(n)} \ar[ur]^{{\mathcal C}^{-1}_n}
&& (E,\theta)_{2}^{(n)} \ar[ur]^{{\mathcal C}^{-1}_n}
&& \cdots\\
(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1} \ar[dr]^{\mathop{\mathrm{Gr}}}
&&&&&&&\\
&\mathop{\mathrm{Gr}}\left((V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1}\right)
&&&&\\
}$$ where
- $(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1}\in \mathrm{MCF}_{p-2}((Y_n,D_{Y_n})/S_n)$;
- $(E,\theta)_0^{(n)}$ is a lifting of the graded parabolic Higgs bundle $\mathop{\mathrm{Gr}}\left((V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1}\right)$ over $(Y_n,D_{Y_n})/S_n$, $(V,\nabla)_0^{(n)}\coloneqq C^{-1}_n ((E,\theta)_0^{(n)},(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1} ,\psi)$ and $\mathop{\mathrm{Fil}}^{(n)}_0$ is a parabolic Hodge filtration on $(V,\nabla)_0^{(n)}$ of level in $[0,p-2]$;
- Inductively, for $m\geq1$, $(E,\theta)_m^{(n)}\coloneqq\mathop{\mathrm{Gr}}\left((V,\nabla,\mathop{\mathrm{Fil}})_{m-1}^{(n)}\right)$ and $$(V,\nabla)_m^{(n)}\coloneqq C^{-1}_n \left(
(E,\theta)_{m}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{m-1},
\mathop{\mathrm{id}}
\right).$$ Here $(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{m-1}$ is the reduction of $(V,\nabla,\mathop{\mathrm{Fil}})^{(n)}_{m-1}$ modulo $p^{n-1}$, and $\mathop{\mathrm{Fil}}^{(n)}_m$ is a Hodge filtration on $(V,\nabla)_m^{(n)}$.
**Definition 23**. Let $$\mathrm{Flow}= \left\{
(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1},
(E,\theta)_{0}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{0}^{(n)},
(E,\theta)_{1}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{1}^{(n)},
\cdots\right\},$$ and $$\mathrm{Flow}'=\left\{
(V',\nabla',\mathop{\mathrm{Fil}}')^{(n-1)}_{-1},
(E',\theta')_{0}^{(n)},
(V',\nabla',\mathop{\mathrm{Fil}}')_{0}^{(n)},
(E',\theta')_{1}^{(n)},
(V',\nabla',\mathop{\mathrm{Fil}}')_{1}^{(n)},
\cdots\right\}$$ be two parabolic Higgs-de Rham flows over $(Y_n,D_{Y_n})\subset (Y_{n+1},D_{Y_{n+1}})$. A morphism from $\mathrm{Flow}$ to $\mathrm{Flow}'$ is a compatible system of morphisms $$\{\varphi_{-1}^{(n-1)},\psi_{0}^{(n)},\varphi_{0}^{(n)},\psi_{1}^{(n)},\varphi_{1}^{(n)},\cdots\}$$ between the corresponding terms, in the following sense:
- $\mathop{\mathrm{Gr}}(\varphi_{-1}^{(n-1)})=\psi_{0}^{(n)}\pmod{p^{n-1}}$
- $C_n^{-1}(\psi_m^{(n)},\varphi_{m-1}^{(n-1)}) = \varphi_m^{(n)}$ (if $m\geq1$, then here $\varphi_{m-1}^{(n-1)}:=\varphi_{m-1}^{(n)}\pmod{p^{n-1}}$), and
- $\mathop{\mathrm{Gr}}(\varphi_m^{(n)})=\varphi_{m+1}^{(n)}$.
*Remark 15*. The morphism is uniquely determined by the first two terms $\varphi_{-1}^{(n-1)},\psi_{0}^{(n)}$. If $n=1$, then the first morphism is vacuous.
**Definition 24**. Let $$\mathrm{Flow}= \left\{
(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{-1},
(E,\theta)_{0}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{0}^{(n)},
(E,\theta)_{1}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{1}^{(n)},
\cdots\right\},$$ be a flow, we call the flow $$\mathrm{Flow}[f] = \left\{
(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{f-1},
(E,\theta)_{f}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{f}^{(n)},
(E,\theta)_{f+1}^{(n)},
(V,\nabla,\mathop{\mathrm{Fil}})_{f+1}^{(n)},
\cdots\right\}$$ the *$f$-th shifting of $\mathrm{Flow}$*, where $(V,\nabla,\mathop{\mathrm{Fil}})^{(n-1)}_{f-1}:=(V,\nabla,\mathop{\mathrm{Fil}})^{(n)}_{f-1}\pmod{p^{n-1}}$.
**Definition 25**. If there is an isomorphism $$\psi:=\{\varphi_{-1}^{(n-1)},\psi_{0}^{(n)},\varphi_{0}^{(n)},\psi_{1}^{(n)},\varphi_{1}^{(n)},\cdots\}$$ from $\mathrm{Flow}[f]$ to $\mathrm{Flow}$, then we call the pair $(\mathrm{Flow},\psi)$ an *$f$-periodic Higgs-de Rham flow*. Note that $\psi$ is part of the data of the periodic Higgs-de Rham flow; we call it a periodic mapping of $\mathrm{Flow}$. Since $\psi$ is uniquely determined by $\varphi_{-1}^{(n-1)}$ and $\psi_{0}^{(n)}$, we sometimes use $(\mathrm{Flow},(\varphi_{-1}^{(n-1)},\psi_{0}^{(n)}))$ to represent $(\mathrm{Flow},\psi)$. In the case $n=1$, the first term in the flow is vacuous, and we also use $(\mathrm{Flow},\psi_{0}^{(1)})$ to represent $(\mathrm{Flow},\psi)$.
With the same method as in [@LSZ19], one shows the following:
**Theorem 2**. *Suppose ${\mathbb F}_{p^f}$ is contained in $k$. Then there is an equivalence between the category of $f$-periodic parabolic Higgs-de Rham flows and the category of parabolic Fontaine-Faltings modules with ${\mathbb Z}_{p^f}$-endomorphism structure.*
Gerd Faltings. Crystalline cohomology and $p$-adic Galois-representations. In *Algebraic analysis, geometry, and number theory (Baltimore, MD, 1988)*, pages 25--80. Johns Hopkins Univ. Press, Baltimore, MD, 1989.
Jean-Marc Fontaine and Yi Ouyang. Theory of $p$-adic Galois representations. Preprint.
Jaya N. N. Iyer and Carlos T. Simpson. A relation between the parabolic Chern characters of the de Rham bundles. , 338(2):347--383, 2007.
Raju Krishnamoorthy and Mao Sheng. Periodic de Rham bundles over curves, 2020. arXiv:2011.03268.
Guitang Lan, Mao Sheng, and Kang Zuo. Semistable Higgs bundles, periodic Higgs bundles and representations of algebraic fundamental groups. , 21(10):3053--3112, 2019.
The Stacks Project Authors. Stacks Project. <http://stacks.math.columbia.edu>, 2017.
Ruiran Sun, Jinbang Yang, and Kang Zuo. Projective crystalline representations of étale fundamental groups and twisted periodic Higgs--de Rham flow. , 24(6):1991--2076, 2022.
[^1]: The condition here is essential; it ensures that the functor is globally well-defined.
[^2]: We note that the definition differs slightly from the original one in [@Fal89] in its textual formulation, but the two are essentially the same; this can be seen from [@SYZ22 Section 2.3].
[^3]: For our application, we only consider the locally free case here. We note that the general definition does not require such a condition.
[^4]: A Higgs bundle $(E,\theta)$ is called graded if $E$ can be written as a direct sum of subbundles $E^{i}$ with $\theta(E^i)\subset E^{i-1}\otimes\Omega^1$. Obviously, a graded Higgs bundle is also nilpotent.
---
abstract: |
We investigate the number of half-regular squares required to decompose a non-negative $C^{k,\alpha}(\mathbb{R}^n)$ function into a sum of squares. Each non-negative $C^{3,1}(\mathbb{R}^n)$ function is known to be a finite SOS in $C^{1,1}(\mathbb{R}^n)$, and similar regularity-preserving SOS decompositions have been studied by various authors. Our work refines existing techniques to unify and build upon several known decomposition results, and moreover we provide upper and lower estimates on the number of squares required for SOS decompositions in $C^{k,\alpha}(\mathbb{R}^n)$.
author:
- "Sullivan Francis MacDonald[^1]"
bibliography:
- references.bib
date: September 2023
title: Sum of Squares Decompositions in Hölder Spaces
---
# Introduction
In this paper we are concerned with the problem of decomposing a given non-negative $C^{k,\alpha}(\mathbb{R}^n)$ function into a sum of squares (SOS) of 'half-regular' functions belonging to the space $$\label{halfreg}
C^\frac{k+\alpha}{2}(\mathbb{R}^n)=\begin{cases}\hfil C^{\frac{k}{2},\frac{\alpha}{2}}(\mathbb{R}^n) & k\textrm{ even}, \\ C^{\frac{k-1}{2},\frac{1+\alpha}{2}}(\mathbb{R}^n) & k\textrm{ odd}.
\end{cases}$$ Our main goals are to determine when such a decomposition is possible, and to estimate the number of functions which must be used in a SOS decomposition. That is, given $f\in C^{k,\alpha}(\mathbb{R}^n)$ which is a SOS of functions in [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"}, what is the smallest integer $m=m(n,k,\alpha)$ for which there exist functions $g_1,\dots, g_m$ in [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"} such that $f=g_1^2+\cdots+g_m^2$?
Such regularity preserving SOS decompositions have been studied by several authors. Fefferman & Phong [@Fefferman-Phong] showed that $m(n,3,1)<\infty$ for any $n\in\mathbb{N}$, meaning that any non-negative $C^{3,1}(\mathbb{R}^n)$ function is a finite SOS of functions in $C^{1,1}(\mathbb{R}^n)$. This result has found several applications in PDEs and functional analysis; see e.g. [@Fefferman-Phong; @Guan; @Tataru]. Further, Bony [@BonyFr] shows that $m(1,k,\alpha)=2$ for $\alpha\in(0,1]$ and $k\geq 2$, resolving the problem in one dimension. Techniques similar to those used by Glaeser [@Glaeser] show that $m(1,1,1)=1$. In contrast, it seems that little is known about $m(n,k,\alpha)$ when $n$ and $k$ are both large.
The constant $m(n,k,\alpha)$ can be viewed in loose analogy to the Pythagoras number of a field $F$, which is the smallest integer $p$ for which each SOS in $F$ is a SOS of at most $p$ elements. Pythagoras numbers have been studied extensively in algebraic settings; famously, Lagrange's four-square theorem shows that the Pythagoras number of $\mathbb{Q}$ is four, and these invariants also arise in connection to Hilbert's 17$^\textrm{th}$ problem concerning sums of squares of rational functions. For a complete discussion, the interested reader is referred to [@Pfister]. Much of our terminology is borrowed from the literature on algebraic sums of squares.
Our motivation for studying the problem outlined above is twofold. First, there is active interest in studying regularity preserving SOS decompositions for applications elsewhere in analysis; they have been used to show that certain degenerate elliptic operators are hypoelliptic [@SOS_III], they have been employed to study regularity of solutions to the Monge-Ampère equation [@Guan], and famously they were used by Fefferman & Phong [@Fefferman-Phong] to prove sharpened forms of Gårding's inequality. Second, we wish to explore the connections between algebraic sums of squares and regularity preserving SOS decompositions further, and to refine standard decomposition techniques to minimize the number of squares needed for a decomposition.
It is not always possible to decompose a given $C^{k,\alpha}(\mathbb{R}^n)$ function into a finite SOS of functions in [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"} when $k\geq 4$; see Theorem [Theorem 32](#thm:existencecounter){reference-type="ref" reference="thm:existencecounter"}. Rather, for our purposes it is necessary to study the family of $C^{k,\alpha}(\mathbb{R}^n)$ functions which can be decomposed, which we call $\Sigma(n,k,\alpha)$. That is, $$\Sigma(n,k,\alpha)=\bigg\{f\in C^{k,\alpha}(\mathbb{R}^n):\;f=\sum_{j=1}^s g_j^2\;\textrm{ for }s<\infty\textrm{ and }g_j\in C^\frac{k+\alpha}{2}(\mathbb{R}^n)\bigg\}.$$ The result of Fefferman & Phong in [@Fefferman-Phong] shows that $\Sigma(n,3,1)=C^{3,1}(\mathbb{R}^n)$, and later we prove that $\Sigma(n,k,\alpha)=C^{k,\alpha}(\mathbb{R}^n)$ whenever $k+\alpha\leq 4$. In general it is only true that $\Sigma(n,k,\alpha)\subseteq C^{k,\alpha}(\mathbb{R}^n)$, as the examples in [@BonyEn] show. Following [@Pfister], we define the length $\ell(f)$ of a function $f\in\Sigma(n,k,\alpha)$ to be the smallest number of squares needed to represent it: $$\ell(f)=\min\bigg\{s\in\mathbb{N}:\exists\; g_1,\dots,g_s\in C^{\frac{k+\alpha}{2}}(\mathbb{R}^n)\textrm{ such that }f=\sum_{j=1}^s g_j^2\bigg\}.$$ Explicitly, the constants we study are defined by $m(n,k,\alpha)=\displaystyle\sup\{\ell(f)\;:\;f\in \Sigma(n,k,\alpha)\}$.
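As a simple illustration of these definitions (our example, not drawn from the results below): for $f(x)=x_1^2+x_2^2$ and $k=2$ one has $f\in\Sigma(2,2,\alpha)$ with $\ell(f)=2$. The decomposition $f=x_1^2+x_2^2$ uses two smooth squares, while a single square $g^2=f$ with $g$ continuous would force $g=\pm\sqrt{x_1^2+x_2^2}$, which is not differentiable at the origin and hence does not belong to $C^{1,\frac{\alpha}{2}}(\mathbb{R}^2)$.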
There is no known characterization of $\Sigma(n,k,\alpha)$ when $k\geq 4$, but some tangible subsets have been identified; Korobenko & Sawyer [@SOS_I] adapted the arguments of Bony et al. [@BonyFr; @BonyEn] and Fefferman & Phong [@Fefferman-Phong] to show that functions in $C^{4,\alpha}(\mathbb{R}^2)$ which satisfy certain differential inequalities can be decomposed into finite sums of half-regular squares; we generalize their result in Theorem [Theorem 3](#main 3){reference-type="ref" reference="main 3"}, and gain additional information about $\Sigma(n,k,\alpha)$ when $k\geq 4$ via Theorem [Theorem 2](#main 2){reference-type="ref" reference="main 2"}.
The main results of this paper are summarized in the following theorems. Theorem [Theorem 1](#main 1){reference-type="ref" reference="main 1"} treats the case $k\leq 3$, in which every non-negative $C^{k,\alpha}(\mathbb{R}^n)$ function can be decomposed into a finite sum of half-regular squares. Theorem [Theorem 2](#main 2){reference-type="ref" reference="main 2"} provides lower bounds on $m(n,k,\alpha)$ using preexisting polynomial theory together with compactness arguments. Theorem [Theorem 3](#main 3){reference-type="ref" reference="main 3"} partially extends Theorem [Theorem 1](#main 1){reference-type="ref" reference="main 1"} to the case $k\geq 4$ in two dimensions. Finally, Theorem [Theorem 4](#main 4){reference-type="ref" reference="main 4"} shows that $m(n,1,\alpha)=1$ always.
**Theorem 1**. *If $k\leq 3$, $\alpha\in(0,1]$ and $n\geq 1$, then each non-negative $C^{k,\alpha}(\mathbb{R}^n)$ function is a finite sum of squares of functions in $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$. The number of squares required is at most $$m(n,k,\alpha)\leq 2^{n^2+n-1}.$$*
**Theorem 2**. *Let $k\geq 2$, $\alpha\in (0,1]$, and $n\geq 1$. There exist functions in $C^{k,\alpha}(\mathbb{R}^n)\cap \Sigma(n,k,\alpha)$ which are sums of squares of no fewer than $m(n,k,\alpha)$ functions in the space $C^{\frac{k+\alpha}{2}}(\mathbb{R}^n)$, where $$m(n,k,\alpha)\geq 2^{n-1}\left(\frac{k}{k+n}\right)^{\frac{n}{2}}.$$*
Sharper bounds for $m(n,k,\alpha)$ arise in our proofs of the theorems above, but these are difficult to express in closed form. For instance, we show that if $k\leq3$ then $m(2,k,\alpha)\leq 27$. Likewise, restricting to the case $k=2$, the estimates of Section 7 show that for any $n\geq 1$ we have $$\frac{n+1}{2}\leq m(n,2,\alpha)\leq 2^{n^2+n-1}.$$
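To give a rough sense of scale (illustrative arithmetic on our part, not an additional result), the following snippet tabulates, for $k=2$ and small $n$, the lower bound $(n+1)/2$ quoted above, the general lower bound of Theorem 2, and the upper bound $2^{n^2+n-1}$ of Theorem 1.

```python
# Illustrative arithmetic only: the bounds quoted above, evaluated for k = 2.
k = 2
for n in range(1, 5):
    lower_sec7 = (n + 1) / 2                               # lower bound for k = 2
    lower_thm2 = 2 ** (n - 1) * (k / (k + n)) ** (n / 2)   # Theorem 2
    upper_thm1 = 2 ** (n ** 2 + n - 1)                     # Theorem 1
    print(f"n = {n}: lower bounds {lower_thm2:.3f} and {lower_sec7:.1f}, upper bound {upper_thm1}")
```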
When $k\geq 4$ our methods do not give an upper bound on $m(n,k,\alpha)$, but using essentially the same techniques that we use to prove Theorem [Theorem 1](#main 1){reference-type="ref" reference="main 1"} it is possible to bound the number of squares required to decompose certain $C^{k,\alpha}(\mathbb{R}^2)$ functions. We require that these functions satisfy generalized versions of the inequalities identified by Korobenko & Sawyer [@SOS_I Thm. 4.5].
**Theorem 3**. *For $k\geq 4$ let $f\in C^{k,\alpha}(\mathbb{R}^2)$ be non-negative, and assume that for every $x\in\mathbb{R}^2$, $f$ satisfies $$\label{diffeqs}
|\nabla^\ell f(x)|\leq Cf(x)^\frac{k-\ell+\alpha}{k+\alpha}$$ for every even $\ell\geq 4$. Then $f$ is a sum of squares of at most 27 functions in $C^{\frac{k+\alpha}{2}}(\mathbb{R}^2)$.*
It is not clear whether $m(n,k,\alpha)$ actually depends on $\alpha$ in general; the bounds above are all independent of $\alpha$, but this does not preclude some dependency which our techniques overlook. In the case $k=1$ however, matters are considerably simpler.
**Theorem 4**. *If $\alpha\in(0,1]$, $n\geq 1$ and $f\in C^{1,\alpha}(\mathbb{R}^n)$ is non-negative, then $\sqrt{f}\in C^{\frac{1+\alpha}{2}}(\mathbb{R}^n)$. That is, for any $\alpha\in (0,1]$ and $n\in\mathbb{N}$ we have $m(n,1,\alpha)=1$.*
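As a quick illustration of Theorem [Theorem 4](#main 4){reference-type="ref" reference="main 4"} (our example, not taken from the paper): $f(x)=x^2$ lies in $C^{1,1}(\mathbb{R})$ and $\sqrt{f}(x)=|x|$ lies in $C^{\frac{1+1}{2}}(\mathbb{R})=C^{0,1}(\mathbb{R})$ but not in $C^1(\mathbb{R})$, so the half-regularity of the square root cannot be improved in general.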
The remainder of the paper is organized as follows. In Sections 2 and 3 we state preliminary results, and in Section 4 we study localized decompositions of $C^{k,\alpha}(\mathbb{R}^n)$ functions by generalizing the technique of Fefferman & Phong [@Fefferman-Phong]. Using these preliminaries, the remaining Sections 5 through 9 are devoted to proving our main theorems.
Many of the techniques in this paper are refined forms of those introduced in [@BonyFr; @BonyEn; @Fefferman-Phong; @SOS_I; @Tataru], and the other cited works on SOS decompositions. Our results enjoy considerable generality thanks to the hard work of these authors, who pioneered the results that we can now build upon.
# Preliminaries
## Hölder Spaces {#hölder-spaces .unnumbered}
We define Hölder spaces in standard fashion. Given $\alpha\in(0,1]$ and a domain $\Omega\subseteq\mathbb{R}^n$, the Hölder space $C^\alpha(\Omega)$ is the real vector space of functions $f:\Omega\rightarrow\mathbb{R}$ for which $$[f]_{\alpha,\Omega}=\sup_{\substack{x,y\in\Omega\\x\neq y}}\frac{|f(x)-f(y)|}{|x-y|^\alpha}<\infty.$$ To describe higher-order Hölder spaces concisely we use multi-index notation. A multi-index is an $n$-tuple $\beta\in \mathbb{N}_0^n$, and given $\beta=(\beta_1,\dots,\beta_n)$ the order of $\beta$ is given by $|\beta|=\beta_1+\cdots+\beta_n$. The $\beta$-derivative of a function $f$ is formally defined by $$\partial^\beta f=\frac{\partial^{|\beta|}f}{\partial^{\beta_1}x_1\cdots\partial^{\beta_n}x_n}.$$ Given $k\in\mathbb{N}$, the Hölder space $C^{k,\alpha}(\Omega)$ is defined to be the collection of functions $f:\Omega\rightarrow\mathbb{R}$ for which $[\partial^\beta f]_{\alpha,\Omega}<\infty$ whenever $|\beta|=k$.
In our later arguments, it will be helpful to employ a pointwise variant of the Hölder semi-norm. Introduced in [@BonyFr] and used in [@SOS_I], this object is defined for a function $f:\Omega\rightarrow\mathbb{R}$ and $x\in\Omega$ by setting $$[f]_\alpha(x)=\limsup_{z,y\rightarrow x}\frac{|f(z)-f(y)|}{|z-y|^\alpha}.$$ It is straightforward to show that $[f]_\alpha(x)\leq [f]_{\alpha,\Omega}$ and indeed, that $[f]_{\alpha,\Omega}=\displaystyle\sup_{x\in\Omega}[f]_\alpha(x)$.
We also recall a useful property regarding compact embeddings of Hölder spaces. Denoting by $C_b^{k,\alpha}(\Omega)$ the subspace of $C^{k,\alpha}(\Omega)$ comprised of functions for which $$\|f\|_{C^{k,\alpha}_b(\Omega)}=\sum_{|\beta|\leq k}\sup_\Omega|\partial^\beta f|+\sum_{|\beta|=k}[\partial^\beta f]_{\alpha,\Omega}<\infty,$$ it is well-known that $C^{k,\alpha}_b(\Omega)$ is compactly embedded in lower-order Hölder spaces when one restricts to precompact domains. The following result makes this notion precise; it is a variant of classical results like [@GilbargTrudinger Lem. 6.33] and [@Driver Thm. 24.14].
**Lemma 5**. *Let $0\leq\alpha,\beta\leq 1$, and let $j$ and $k$ be non-negative integers. If $\Omega$ is precompact and $k+\alpha<j+\beta$, then $C_b^{j,\beta}(\overline{\Omega})$ is compactly embedded in $C_b^{k,\alpha}(\overline{\Omega})$.*
In other words, any bounded sequence in $C_b^{k,\alpha}(\overline{\Omega})$ has a convergent subsequence in $C_b^{j,\beta}(\overline{\Omega})$. This will be useful in Section [7](#sec:poly){reference-type="ref" reference="sec:poly"}.
## Multi-Index Calculus {#multi-index-calculus .unnumbered}
To perform the necessary estimates for our main regularity results, we introduce additional terminology and notation for working with multi-index derivatives. Given $k$-times differentiable functions $f$ and $g$ and a multi-index $\beta\in\mathbb{N}_0^n$ for which $|\beta|=k$, the generalized product rule reads $$\partial^\beta (fg)=\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}(\partial^{\gamma }f)(\partial^{\beta-\gamma}g).$$ The subscript $\gamma\leq\beta$ indicates that we sum over all multi-indices $\gamma$ of length $n$ which are smaller than $\beta$ in the partial order defined by setting $(\gamma_1,\dots,\gamma_n)\leq(\beta_1,\dots,\beta_n)$ if and only if $\gamma_j\leq \beta_j$ for each $j=1,\dots,n$. The generalized binomial coefficient is given by $$\binom{\beta}{\gamma}=\frac{\beta!}{\gamma!(\beta-\gamma)!},$$ where $\beta!=\beta_1!\cdots\beta_n!$. As an easy consequence of the formula above and the definition of the pointwise semi-norm, we have a variant of the sub-product rule employed in [@SOS_I]: $$[\partial^\beta(fg)]_{\alpha}(x)\leq\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}([\partial^{\beta-\gamma}f]_{\alpha}(x)|\partial^\gamma g(x)|+[\partial^{\beta-\gamma}g]_{\alpha}(x)|\partial^\gamma f(x)|).$$
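As a concrete check of the generalized product rule (our illustration, not from the paper; the functions $f$, $g$ and the multi-index $\beta$ below are arbitrary choices), the formula can be verified symbolically:

```python
# Symbolic check (illustrative only) of the generalized Leibniz rule for beta = (1, 2).
import sympy as sp
from itertools import product

x1, x2 = sp.symbols("x1 x2")
f = sp.sin(x1) * x2**3       # arbitrary smooth test functions
g = sp.exp(x1 * x2)
beta = (1, 2)

def d(expr, gamma):
    """Apply the multi-index derivative partial^gamma."""
    for sym, m in zip((x1, x2), gamma):
        if m:
            expr = sp.diff(expr, sym, m)
    return expr

def binom(beta, gamma):
    """Generalized binomial coefficient: product of componentwise binomials."""
    out = 1
    for b, c in zip(beta, gamma):
        out *= sp.binomial(b, c)
    return out

rhs = sum(
    binom(beta, gamma) * d(f, gamma) * d(g, tuple(b - c for b, c in zip(beta, gamma)))
    for gamma in product(range(beta[0] + 1), range(beta[1] + 1))
)
print(sp.simplify(d(f * g, beta) - rhs))  # expected output: 0
```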
Along with the product and sub-product rules, we require a version of the chain rule adapted to higher-order derivatives. Many variants appear in the literature, see e.g. [@SOS_I (3.4)], and we use one adapted to multi-indices. To state it concisely we use multi-index partitions. A multi-set $\Gamma$ of multi-indices is called a partition of $\beta$ if $|\gamma|>0$ for each $\gamma\in\Gamma$, and $$\sum_{\gamma\in\Gamma}\gamma=\beta.$$ Several multi-indices in the sum above can be identical. Given a multi-index $\beta$, we denote the set of its distinct partitions by $P(\beta)$.
**Lemma 6** (Chain Rule). *Let $f:\mathbb{R}^{n+1}\rightarrow\mathbb{R}$ and $g:\mathbb{R}^{n}\rightarrow\mathbb{R}$ both be $k$ times differentiable, and define $h:\mathbb{R}^n\rightarrow\mathbb{R}$ by $h(x)=f(x,g(x))$. For any multi-index $\beta$ of order $|\beta|\leq k$ and length $n$, there exist constants $C_{\beta,\Gamma}$ for which $$\label{eq:holycow}
\partial^\beta h =\sum_{\eta\leq\beta}\sum_{\Gamma\in P(\eta)}C_{\beta,\Gamma}(\partial^{\beta-\eta}\partial^{|\Gamma|}_{n+1}f)\prod_{\gamma\in\Gamma}\partial^\gamma g,$$ where $\partial^\beta h$ is evaluated at $x$ and the derivatives of $f$ are evaluated at $(x,g(x))$.*
The constants $C_{\beta,\Gamma}$ can be found explicitly by counting multi-index partitions. Alternate but equivalent forms of the expression above for $\partial^\beta h$ can be found in [@hardy; @SOS_I]. For our purposes the values of constants like $C_{\beta,\Gamma}$ are unimportant, and henceforth we follow standard convention by letting $C$ denote a positive constant whose value may change from line to line, but which depends on fixed parameters such as $n$, $k$ and $\alpha$. When the values of constants are significant, they are emphasized.
A useful consequence of Lemma [Lemma 6](#lem:genchain){reference-type="ref" reference="lem:genchain"} is a recursive formula for the derivatives of implicitly defined functions, which we include here with the standard implicit function theorem from [@IFTref].
**Theorem 7** (Implicit Function Theorem). *Let $G\in C^k(\mathbb{R}^{n+1})$ and let $(x_0,z_0)\in\mathbb{R}^n\times\mathbb{R}$ be a point at which $G(x_0,z_0)=0$ and $\frac{\partial G}{\partial x_{n+1}}(x_0,z_0)>0$. There exists a unique $g\in C^k(U)$ with $g(x_0)=z_0$, for some neighbourhood $U\subset\mathbb{R}^{n}$ of $x_0$, such that $G(x,g(x))=0$ for every $x\in U$. Further, the derivatives of $g$ are given recursively for constants $C_{\beta,\Gamma}$ by $$\partial^\beta g=-\frac{1}{\partial_{n+1}G}\sum_{0\leq\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta)\setminus\{\beta\}}}C_{\beta,\Gamma}(\partial^{\beta-\eta}\partial^{|\Gamma|}_{n+1}G)\prod_{\gamma\in\Gamma}\partial^\gamma g,$$ where $\partial^\beta g$ is evaluated at $x$ and the derivatives of $G$ are all evaluated at $(x,g(x))$.*
In the ensuing sections we will also make Hölder estimates for the derivatives of functions, and this can be done with the help of the following inequality.
**Lemma 8**. *Let $f\in C^{k,\alpha}(\mathbb{R}^n)$. If $|\beta|<k$ then for any $x,y\in\mathbb{R}^n$ the following estimate holds: $$|\partial^\beta f(x)-\partial^\beta f(y)|\leq \sum_{1\leq |\gamma|\leq k-|\beta|}\frac{1}{\gamma!}|x-y|^{|\gamma|}|\partial^{\beta+\gamma} f(x)|+C|x-y|^{k-|\beta|+\alpha}.$$*
*Proof.* Let $\beta$ be any multi-index of length $n$ and order $|\beta|<k$. We begin by using Taylor's theorem to form a Taylor expansion for $\partial^\beta f$ evaluated at $y$ and centred at $x$, $$\partial^\beta f(y)=\sum_{0\leq |\gamma|<k-|\beta|}\frac{1}{\gamma!}\partial^{\beta+\gamma} f(x)(y-x)^\gamma+\sum_{|\gamma|=k-|\beta|}\frac{|\gamma|}{\gamma!}(y-x)^\gamma\int_0^1(1-t)^{|\gamma|-1}\partial^{\beta+\gamma}f(x+t(y-x))dt.$$ To refine this, we observe that the integral on the right-hand side can be rewritten in the form $$\frac{1}{|\gamma|}\partial^{\beta+\gamma}f(x)+\int_0^1(1-t)^{|\gamma|-1}(\partial^{\beta+\gamma}f(x+t(y-x))-\partial^{\beta+\gamma}f(x))dt.$$ Consequently, if $|\beta|+|\gamma|=k$ we may use that $f\in C^{k,\alpha}(\Omega)$ by assumption to make the estimate $$|\gamma|\int_0^1(1-t)^{|\gamma|-1}\partial^{\beta+\gamma}f(x+t(y-x))dt\leq |\partial^{\beta+\gamma}f(x)|+[\partial^{\beta+\gamma}f]_{\alpha,\Omega}|\gamma||x-y|^\alpha\int_0^1(1-t)^{|\gamma|-1}t^\alpha dt.$$ The integral on the right-hand side is bounded by $1/|\gamma|$, meaning that the item on the left is bounded by $|\partial^{\beta+\gamma}f(x)|+[\partial^{\beta+\gamma}f]_{\alpha,\Omega}|x-y|^\alpha$. Rearranging our initial Taylor expansion and applying this inequality gives the claimed estimate with $C=\sum_{|\gamma|=k-|\beta|}[\partial^{\beta+\gamma}f]_{\alpha,\mathbb{R}^n}/\gamma!$. ◻
Along with derivatives, we also find it useful to make Hölder estimates for powers of functions. These are enabled by the following result.
**Lemma 9**. *Let $f\in C^{\alpha}(\mathbb{R}^n)$ be non-negative, let $s>1$ and fix $\varepsilon>0$. Then for any $x,y\in\mathbb{R}^n$, $$|f(x)^s-f(y)^s|\leq \frac{(1+\varepsilon)[f]_{\alpha,\mathbb{R}^n}^s}{((1+\varepsilon)^\frac{1}{s-1}-1)^{s-1}}|x-y|^{s\alpha}+\varepsilon\max\{f(x)^s,f(y)^s\}.$$*
*Proof.* For a number $0<\lambda<1$ to be chosen momentarily, observe that $f(x)\leq|f(x)-f(y)|+f(y)$, so convexity of $t\mapsto t^s$ gives $$f(x)^s\leq\bigg(\lambda\cdot\frac{|f(x)-f(y)|}{\lambda}+(1-\lambda)\cdot\frac{f(y)}{1-\lambda}\bigg)^s\leq \frac{1}{\lambda^{s-1}}|f(x)-f(y)|^s+\frac{1}{(1-\lambda)^{s-1}} f(y)^s.$$ Rearranging and applying the Hölder estimate $|f(x)-f(y)|^s\leq [f]_{\alpha,\mathbb{R}^n}^s|x-y|^{s\alpha}$ thus shows that $$f(x)^s-f(y)^s\leq \frac{[f]_{\alpha,\mathbb{R}^n}^s}{\lambda^{s-1}}|x-y|^{s\alpha}+\bigg(\frac{1}{(1-\lambda)^{s-1}}-1\bigg)f(y)^s.$$ Next select $\lambda=1-(1+\varepsilon)^{-\frac{1}{s-1}}$, so that the coefficient on the last term is exactly $\varepsilon$ and $$f(x)^s-f(y)^s\leq \frac{(1+\varepsilon)[f]_{\alpha,\mathbb{R}^n}^s}{((1+\varepsilon)^\frac{1}{s-1}-1)^{s-1}}|x-y|^{s\alpha}+\varepsilon f(y)^s.$$ Interchanging $x$ and $y$ and then taking a maximum, we obtain the desired result. ◻
Finally, we introduce some helpful notation. Given $\xi\in S^{n-1}$, the $k^\mathrm{th}$-order derivative in the direction of $\xi$ is defined by $$\partial^k_\xi f=(\xi\cdot\nabla)^kf=\sum_{|\beta|=k}\frac{k!}{\beta!}\xi^\beta\partial^\beta f.$$ Also, given $f\in C^{k,\alpha}(\mathbb{R}^n)$ we sometimes wish to discuss all $k^\mathrm{th}$ order derivatives simultaneously; to do this we let $\nabla^kf$ denote the vector of $k^\mathrm{th}$-order derivatives of $f$ and $|\nabla^kf|$ its Euclidean length, so that $|\nabla^kf|\in C^\alpha(\mathbb{R}^n)$.
More derivative estimates and applications of multi-index calculus arise throughout this work in specialized settings, but those listed above suffice for most of our arguments.
## Infinite Graphs {#infinite-graphs .unnumbered}
Though pure graph theory may seem out of place in a work on sums of squares, our scheme for bounding $m(n,k,\alpha)$ relies in part on estimating the chromatic number $\chi(G)$ of an infinite graph $G$ which we construct in the next section. For easy reference, we first quote a relevant result in this area due to Behzad & Radjavi [@behzad]. We remind the reader that the degree $\mathrm{deg}\;v$ of a vertex $v\in V(G)$ is the number of edges incident with that vertex.
**Theorem 10** (Behzad & Radjavi, 1976). *Let $G$ be a connected infinite graph with $\mathrm{deg}\;v\leq b$ for each $v\in V(G)$ and some $b\in\mathbb{N}$. Then $\chi(G)\leq b$.*
The proof given in [@behzad] involves the famous theorems of Brooks and De Bruijn & Erdős, and it relies only on the degree bound and the fact that $G$ is infinite. The graph which we later construct enjoys a great deal more structure not accounted for by these arguments, and so it is reasonable to suspect that a better bound for $\chi(G)$ is possible. To improve over Brooks' Theorem, Welsh & Powell [@Welsh] worked out the following result.
**Theorem 11** (Welsh & Powell, 1967). *Let $H$ be a finite graph whose vertices $v_1,\dots,v_\ell$ are ordered by decreasing degree. Then $\chi(H)\leq \displaystyle\max_{1\leq i\leq \ell}\min\{\mathrm{deg}\;v_i+1,i\}$.*
This recovers the greedy colouring bound $\chi(H)\leq\max_{v}\mathrm{deg}\;v+1$, and for small graphs it is often much better. However, it loses its advantage for graphs which have a large number of vertices of high degree. If these high-degree vertices are spread out, this difficulty is largely artificial. For the remainder of this section, then, we adapt the proof of Theorem [Theorem 11](#thm:WP){reference-type="ref" reference="thm:WP"} to obtain a result suitable to our infinite graph.
For $s\in\mathbb{N}$ we say that a finite graph $H$ has structure $\alpha_s$ if there exist distinct vertices $v$ and $w_1,\dots,w_s$ such that $\{v,w_j\}\in E(H)$ and $\mathrm{deg}\;w_j\geq\mathrm{deg}\;v$ for each $j=1,\dots,s$, and if whenever $\{w_i,w_j\}\not\in E(H)$ for $j>i$, there exists $z\in V(H)\setminus\{v,w_1,\dots,w_s\}$ such that $\mathrm{deg}\;z\geq\mathrm{deg}\;w_j$, $\{w_i,z\}\not\in E(H)$, and $\{w_j,z\}\in E(H)$.
**Theorem 12**. *If a finite graph $H$ does not have the structure $\alpha_s$ for $s\in\mathbb{N}$, then $\chi(H)\leq s$.*
Ignoring part of the structure $\alpha_s$, we recover a simple criterion for $\chi(H)$ to be small.
**Corollary 13**. *Let $s\in\mathbb{N}$ and let $H$ be a finite graph. If $H$ does not have a vertex with $s$ distinct neighbouring vertices of the same or larger degree, then $\chi(H)\leq s$.*
The bounds we obtain for chromatic numbers of arbitrary finite induced sub-graphs of an infinite graph $G$ also serve as bounds for $\chi(G)$ thanks to the following well-known result.
**Theorem 14** (De Bruijn & Erdős, 1951). *Let $G$ be an infinite graph and let $m\in\mathbb{N}$. Then $\chi(G)\leq m$ if and only if $\chi(H)\leq m$ for every finite subgraph $H$ of $G$.*
The first part of our proof of Theorem [Theorem 12](#best chromatic){reference-type="ref" reference="best chromatic"} involving greedy colouring follows [@Welsh] almost exactly. It is in the analysis of the output of this colouring algorithm that our approach differs.
*Proof of Theorem [Theorem 12](#best chromatic){reference-type="ref" reference="best chromatic"}.* Following [@Welsh], we first perform a greedy colouring of $H$. Let $H$ be a finite graph, and list its vertices $v_1,\dots, v_\ell$ in order of decreasing degree. Define $T_1\subseteq V(H)$ by fixing $v_1\in T_1$, and if $v_{i_1},\dots, v_{i_t}\in T_1$ where $1=i_1<\cdots<i_t$, assign $v_j$ to $T_1$ if and only if $j$ is the smallest integer exceeding $i_t$ for which $v_j$ is not adjacent to any of $\{v_{i_1},\dots,v_{i_t}\}$. Since $H$ is finite, we can proceed until $T_1$ is maximal.
Now form $T_2$ by finding the smallest $j$ such that $v_j\not\in T_1$ and letting $v_j\in T_2$. Proceed as above for the vertices in $V(H)\setminus T_1$, scanning them in order of decreasing degree and adding a vertex to $T_2$ whenever it is not adjacent to any vertex already assigned to $T_2$. Proceeding in this way, we see that $T_{|V(H)|+1}=\emptyset$, meaning that this algorithm terminates in finitely many steps.
Here Welsh & Powell estimate the smallest number $s\in\mathbb{N}$ such that $T_{s+1}=\emptyset$. We now do the same using different techniques. Assume that for $s\in\mathbb{N}$ there exists $v\in V(H)$ such that $v\not\in T_1\cup\cdots\cup T_s$. Then $v$ is adjacent to an element in each of the sets $T_1,\dots,T_s$, for otherwise it would belong to one of them, and $v$ is adjacent to a vertex of degree at least $\mathrm{deg}\;v$ in each of the sets $T_1,\dots, T_s$. To see this, assume toward a contradiction that for some $j\leq s$, $v$ is only adjacent to vertices of smaller degree than itself in $T_j$. Then $v$ would be assigned to $T_j$ in the greedy colouring before those other vertices, contradicting the fact that $v\not\in T_j$. It follows that $v$ has distinct neighbours $w_1,\dots,w_s$ each with degree $\mathrm{deg}\;w_j\geq \mathrm{deg}\;v$.
Given any two neighbours $w_i\in T_i$ and $w_j\in T_j$ of $v$ for $j>i$, either $\{w_i,w_j\}\in E(H)$, or there exists a neighbour $z\in T_i$ of $w_j$ with $\mathrm{deg}\;z\geq\mathrm{deg}\;w_j$. Otherwise, $w_j$ would have been assigned to $T_i$, contrary to our assumption. Thus $H$ has the structure $\alpha_s$. It follows in the contrapositive that if $H$ omits this structure, then $T_{s+1}=\emptyset$ and $\chi(H)\leq s$. ◻
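To make the greedy construction above concrete, here is a minimal sketch (our illustration, not taken from the paper) of the colouring procedure, with a finite graph encoded as an adjacency dictionary; the 5-cycle at the end is an arbitrary test case.

```python
# Minimal sketch of the greedy colouring from the proof above (illustrative only).
def greedy_colouring(adj):
    """Build colour classes T_1, T_2, ... by scanning vertices in order of
    decreasing degree and adding a vertex to the current class whenever it is
    not adjacent to any vertex already placed in that class."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    colour = {}
    c = 0
    while len(colour) < len(order):
        c += 1
        for v in order:
            if v not in colour and all(colour.get(w) != c for w in adj[v]):
                colour[v] = c
    return colour

# Example: the 5-cycle needs three colour classes, matching the bound of Theorem 11.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_colouring(C5))  # e.g. {0: 1, 2: 1, 1: 2, 3: 2, 4: 3}
```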
For the graph $G$ which we later construct, Theorem [Theorem 10](#basic chromatic){reference-type="ref" reference="basic chromatic"} gives $\chi(G)\leq 12$, while the refinement Corollary [Corollary 13](#better chromatic){reference-type="ref" reference="better chromatic"} gives $\chi(G)\leq 9$. Further improvements seem plausible, given that the argument which we later employ does not use Theorem [Theorem 12](#best chromatic){reference-type="ref" reference="best chromatic"} in full force.
## An Algebraic Result {#an-algebraic-result .unnumbered}
In the next section we will need to sum inequalities in such a way that all but a few terms cancel. The following lemma which enables this was proved independently with this purpose in mind, and we learned later that the same technique is used for a similar purpose in [@BonyFr §4].
**Lemma 15**. *Let $\ell$ be an odd integer and set $s=\frac{\ell+1}{2}$. There exist numbers $\eta_1,\dots,\eta_s\in\mathbb{R}$ and $q_1,\dots,q_s\geq 0$ such that for each odd $j\leq \ell$, $$\sum_{i=1}^sq_i\eta_i^j=\begin{cases}
0 & j<\ell,\\
1 & j=\ell.
\end{cases}$$*
*Proof.* It suffices to show that $Mq=e_s$ has a non-negative solution $q=(q_1,\dots,q_s)^t\in\mathbb{R}^s$, where $e_s$ is the last standard basis vector in $\mathbb{R}^s$ and the square matrix $M$ is given entry-wise by $(M)_{ij}=\eta_j^{2i-1}$. Our aim is to select $\eta_1,\dots,\eta_s$ so that $M$ is invertible and the entries of $M^{-1}e_s$ are non-negative. If $M$ is nonsingular then $q_k$ is the $k^\mathrm{th}$ entry in column $s$ of $M^{-1}$ given by $$q_k=\frac{(-1)^{s+k}\mathrm{det}(M_{sk})}{\mathrm{det}(M)},$$ where $M_{sk}$ is the matrix minor of $M$ obtained by deleting row $s$ and column $k$ of $M$.
Now we establish a formula for $\mathrm{det}(M)$. This is a homogeneous polynomial in the variables $\eta_1,\dots,\eta_s$ of degree $s^2$, and if $\eta_i=0$ for any $i$ then $M$ is singular, meaning that $\eta_i\mid \mathrm{det}(M)$. Similarly, if $i\neq j$ and $\eta_i=\pm\eta_j$ then $M$ has two linearly dependent columns and $\mathrm{det}(M)=0$, so $(\eta_i^2-\eta_j^2)\mid \mathrm{det}(M)$ whenever $i\neq j$. It follows that $P\mid\mathrm{det}(M)$, where $$P=\bigg(\prod_{i=1}^s\eta_i\bigg)\bigg(\prod_{i=1}^s\prod_{j=1}^{i-1}(\eta_i^2-\eta_j^2)\bigg).$$ Since $\mathrm{deg}\;P=s^2$, it follows from the Fundamental Theorem of Algebra that $\mathrm{det}(M)/P$ has degree zero and is constant. Moreover, it is easy to verify that $\mathrm{det}(M)/P=1$ and so $\mathrm{det}(M)=P$. We can compute $\mathrm{det}(M_{sk})$ in similar fashion, since this minor assumes the same form as $M$. Omitting appropriate terms from the determinant formula for $M$, we get $$\mathrm{det}(M_{sk})=\bigg(\prod_{\substack{1\leq i\leq s\\
i\neq k}}\eta_i\bigg)\bigg(\prod_{\substack{1\leq j< i\leq s\\i,j\neq k}}(\eta_i^2-\eta_j^2)\bigg).$$ Equipped with this identity and the formula for $q_k$ above, we are now able to write $$q_k=(-1)^{s+k}\bigg(\eta_k\prod_{j=1}^{k-1}(\eta_k^2-\eta_j^2)\prod_{i=k+1}^s(\eta_i^2-\eta_k^2)\bigg)^{-1}=\bigg(\eta_k\prod_{\substack{1\leq i\leq s\\i\neq k}}(\eta_k^2-\eta_i^2)\bigg)^{-1}.$$ Taking $\eta_k=(-1)^{s+k}k$ ensures that each $q_k$ is positive, and we are finished. ◻
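As a quick numerical sanity check of this construction (our illustration, not part of the proof; the odd integer $\ell$ below is an arbitrary choice), one can solve the system $Mq=e_s$ directly and confirm that the resulting weights are positive and satisfy the moment conditions of Lemma 15.

```python
# Numerical check (illustrative only) of Lemma 15 with eta_k = (-1)^(s+k) * k.
import numpy as np

ell = 7                       # any odd integer; arbitrary choice for the example
s = (ell + 1) // 2
eta = np.array([(-1) ** (s + k) * k for k in range(1, s + 1)], dtype=float)

# (M)_{ij} = eta_j^(2i-1); the rows correspond to the odd exponents 1, 3, ..., ell.
M = np.array([[e ** (2 * i - 1) for e in eta] for i in range(1, s + 1)])
e_s = np.zeros(s)
e_s[-1] = 1.0

q = np.linalg.solve(M, e_s)
print("q =", q)                # every entry should be strictly positive
print("moments:", M @ q)       # should be (0, ..., 0, 1) up to rounding
```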
# Partition Of Unity
Now we construct the partition of unity from which we extract the desired SOS decomposition. The key property of our construction is that the functions comprising the partition are not too closely clustered together, which allows us to form a SOS decomposition with relatively few functions. The construction is in many ways standard, taking inspiration from classical results like [@guzman Lem. 1.2]. It is modelled loosely on [@Tataru Lem. 5.4], and it relies on a property of functions with controlled oscillation.
**Definition 16**. *A function $r:\mathbb{R}^n\rightarrow\mathbb{R}$ is said to be slowly-varying if $r\geq 0$ everywhere, and given $0<c<1$ there exists a constant $\nu>0$ such that $|r(x)-r(y)|\leq cr(x)$ whenever $|x-y|\leq\nu r(x)$.*
The most obvious slowly-varying functions are the non-negative uniformly Lipschitz functions on $\mathbb{R}^n$: if $r\geq 0$ is $L$-Lipschitz and $0<c<1$, then taking $\nu=c/L$ gives $|r(x)-r(y)|\leq L|x-y|\leq cr(x)$ whenever $|x-y|\leq\nu r(x)$. A more interesting example is given in Lemma [Lemma 19](#lem:slowvar){reference-type="ref" reference="lem:slowvar"}. For such functions we have the following result.
**Theorem 17**. *If $r$ is slowly-varying and $1<\lambda<\frac{3}{2}$, then there exists a family of dyadic cubes $\{Q_j\}$ and non-negative smooth functions $\{\psi_j\}$ such that each $\psi_j$ is supported in $\lambda Q_j$ and $$\label{eq:pou2}
\sum_{j=1}^\infty\psi_j^2=
\chi_{\{r>0\}}.$$ This partition of unity has the following properties:*
- *No more than $2^n$ functions in $\{\psi_j\}$ are non-zero at any given point,*
- *For every multi-index $\beta$, each $\psi_j$ satisfies the estimate $|\partial^\beta\psi_j(x)|\leq C r(x)^{-|\beta|}$,*
- *For every multi-index $\beta$ and $0<\alpha\leq 1$, each $\psi_j$ satisfies $[\partial^\beta\psi_j]_{\alpha}(x)\leq Cr(x)^{-\alpha-|\beta|}$,*
- *One can partition $\{\psi_j\}$ into $4^n-2^n$ collections of functions with pairwise-disjoint supports.*
- *If $n=2$, this partition of $\{\psi_j\}$ can be achieved using only $9$ pairwise disjoint collections.*
*Remark*: *A priori* the case $n=2$ is not fundamentally different from higher dimensions. Rather, the argument that we use to prove property *(5)* is not easy to implement when $n\geq 3$.
*Proof.* First we construct an appropriate family of cubes. Let $r$ be slowly varying with $\nu>0$ chosen so that $c\leq\frac{1}{5}$, and let $\{Q_j\}$ be the collection of maximal dyadic cubes which satisfy $$\ell(Q)\leq \frac{\nu}{2\sqrt{n}}\inf_{Q}r.$$ We show that the quantities in the inequality above are in fact equivalent. Given a cube $Q$, we denote its parent cube by $\Tilde{Q}$. If $y\in Q$ and $z\in\Tilde{Q}$ then $|y-z|\leq \sqrt{n}\ell(\Tilde{Q})=2\sqrt{n}\ell(Q)\leq \nu r(y)$, and from slow variation we can conclude that $(1-c)r(y)\leq r(z)$. Since $y$ and $z$ were arbitrary we find that $(1-c)\inf_Qr\leq \inf_{\tilde{Q}}r$, and using this fact with maximality of $Q$ we get $$\ell(Q)=\frac{1}{2}\ell(\tilde{Q})>\frac{\nu}{4\sqrt{n}}\inf_{\Tilde{Q}}r\geq\frac{(1-c)\nu}{4\sqrt{n}}\inf_{Q}r.$$
Equipped with the inequalities above, we next show that if $Q_i$ and $Q_j$ are adjacent cubes (i.e. if $\overline{Q_i}\cap\overline{Q_j}\neq\emptyset$) then $\ell(Q_i)\leq 2\ell(Q_j)$ and $\ell(Q_j)\leq 2\ell(Q_i)$. For brevity we let $r_i=\inf_{Q_i} r$, and we observe that if there exists $x\in \overline{Q_i}\cap\overline{Q_j}$ then for any $y\in Q_i$ we have $|x-y|\leq \sqrt{n}\ell(Q_i)$. From our selection of $Q_i$ it follows that $|x-y|\leq\nu r_i\leq \nu r(y)$, and since $r$ is slowly-varying we have $(1-c)r(y)\leq r(x)\leq (1+c)r(y)$. Since $y\in Q_i$ was arbitrary, we have $(1-c)r_i\leq r(x)\leq (1+c)r_i$. The same inequalities hold if we replace $r_i$ with $r_j$, meaning that $$\bigg(\frac{1-c}{1+c}\bigg)r_j\leq r_i\leq \bigg(\frac{1+c}{1-c}\bigg)r_j.$$ Finally, we combine the inequalities above to see that if the cubes $Q_i$ and $Q_j$ are adjacent then $$\ell(Q_i)\leq \frac{\nu}{2\sqrt{n}}r_i\leq \frac{\nu(1+c)}{2\sqrt{n}(1-c)}r_j\leq\frac{2(1+c)}{(1-c)^2}\ell(Q_j)<4\ell(Q_j),$$ where the last inequality follows from our initial selection of $c$. Since $Q_i$ and $Q_j$ are dyadic, this implies that $\ell(Q_i)\leq 2\ell(Q_j)$, and an identical argument shows that $\ell(Q_j)\leq 2\ell(Q_i)$.
Now we fix $1<\lambda<\frac{3}{2}$, and we study the dilated family $\{\lambda Q_j\}$ with the aim of showing that no point in $\mathbb{R}^n$ belongs to more than $2^n$ dilated cubes. First, we establish that if $\lambda Q_i\cap\lambda Q_j\neq \emptyset$ then $Q_i$ and $Q_j$ are adjacent. To do this we argue by contraposition, assuming that $Q_i$ and $Q_j$ are not adjacent. Writing each cube as a Cartesian product of intervals in the form $$Q_i=\prod_{k=1}^nI_{ik},$$ we observe that if $d_k=\mathrm{dist}(I_{ik},I_{jk})$ then $\mathrm{dist}(Q_i,Q_j)\geq \min\{d_1,\dots,d_n\}$. Since $Q_i$ and $Q_j$ are non-adjacent dyadic cubes, it follows from the inequalities for adjacent cubes established above that for each $k=1,\dots,n$, we must have $d_k\geq \frac{1}{2}\max\{\ell(Q_i),\ell(Q_j)\}$. This is because the distance in any direction is at least the length of some cube that is adjacent to $Q_i$ or $Q_j$.
Clearly $\lambda Q_i$ is the Cartesian product of the dilated intervals, and for each $k$ we can observe that $\mathrm{dist}(\lambda I_{ik},\lambda I_{jk})=\mathrm{dist}(I_{ik},I_{jk})-(\frac{\lambda-1}{2})(\ell(Q_i)+\ell(Q_j))$ since $|I_{ik}|=\ell(Q_i)$ for each $k=1,\dots,n$. Consequently we have the estimate $$\mathrm{dist}(\lambda I_{ik},\lambda I_{jk})\geq \frac{1}{2}\max\{\ell(Q_i),\ell(Q_j)\}-\bigg(\frac{\lambda-1}{2}\bigg)(\ell(Q_i)+\ell(Q_j))\geq \bigg(\frac{3}{2}-\lambda\bigg)\max\{\ell(Q_i),\ell(Q_j)\}.$$ Since $\lambda<\frac{3}{2}$ by assumption, we find that $\mathrm{dist}(\lambda I_{ik},\lambda I_{jk})>0$ for each $k$. It follows that $\mathrm{dist}(\lambda Q_i,\lambda Q_j)>0$ meaning that $\lambda Q_i$ and $\lambda Q_j$ do not intersect. Hence, if $\lambda Q_i$ and $\lambda Q_j$ intersect, then $Q_i$ and $Q_j$ are adjacent as claimed.
Now fix $x\in\mathbb{R}^n$, and relabelling if necessary let $Q_1,\dots,Q_m$ be the cubes whose dilates by $\lambda$ contain $x$. If $Q_i$ and $Q_j$ are any two of these cubes, then $x\in \lambda Q_i\cap\lambda Q_j$ and it follows from the argument above that $Q_i$ and $Q_j$ are adjacent, so the cubes $Q_1,\dots,Q_m$ are pairwise adjacent. We use this fact to obtain an upper bound on $m$, by showing that $\overline{Q_1}\cap\cdots\cap \overline{Q_m}\neq\emptyset$. To this end we once again write each cube as a Cartesian product of closed intervals, and use the fact that Cartesian products and intersections commute: $$\bigcap_{i=1}^m\overline{Q_i}=\bigcap_{i=1}^m\prod_{k=1}^nI_{ik}=\prod_{k=1}^n\bigcap_{i=1}^mI_{ik}.$$ The second identity is easily verified by induction. For each fixed $k$, now consider the intersection $$\bigcap_{i=1}^mI_{ik}.$$ Since the family of cubes is pairwise adjacent, any two intervals $I_{ik}$ and $I_{jk}$ must intersect, for otherwise $\mathrm{dist}(Q_i,Q_j)>0$. Writing $I_{ik}=[a_{ik},b_{ik}]$, we take $a_k=\max_i\{a_{ik}\}$ and $b_k=\min_i\{b_{ik}\}$ and we note that $a_k\leq b_k$, for if not, then for some $i$ and $j$ we have $a_{ik}>b_{jk}$ and $I_{ik}\cap I_{jk}=\emptyset$. Since $[a_k,b_k]\subseteq I_{ik}$ for each $i=1,\dots,m$ we have that $$\bigcap_{i=1}^mI_{ik}\supseteq[a_k,b_k]\neq\emptyset,$$ since the interval contains at least the point $a_k$. It follows that $\overline{Q_1}\cap\cdots\cap \overline{Q_m}$ is a Cartesian product of non-empty sets, hence it is nonempty. Since a given point in $\mathbb{R}^n$ can belong to the boundary of at most $2^n$ closed cubes, we conclude that $m\leq 2^n$. Finally, since $x\in\mathbb{R}^n$ was arbitrary, we conclude that any point can belong to at most $2^n$ dilated cubes.
There is an additional property of this family $\{Q_j\}$ that will be useful momentarily. Namely, we are able to write $$\label{eq:setseq}
S=\{x\in\mathbb{R}^n:\;r(x)>0\}=\bigcup_{j=1}^\infty\lambda Q_j.$$ Clearly $S$ is contained in the union of dilated cubes, and to verify the reverse inclusion we fix $x\in\lambda Q_j$ for any $j$. Since $\lambda<\frac{3}{2}$, it follows that for any $y\in Q_j$ we have that $r(y)>0$ and $$|x-y|\leq \frac{(1+\lambda)\sqrt{n}}{2}\ell(Q_j)\leq \frac{(1+\lambda)\nu}{4}\inf_{Q_j}r\leq \nu r(y).$$ From slow variation of $r$ this implies that $0<(1-c)r(y)\leq r(x)$, allowing us to conclude that $x\in S$. Since $x$ was any member of the union, we see that [\[eq:setseq\]](#eq:setseq){reference-type="eqref" reference="eq:setseq"} holds.
Equipped with the family of dyadic cubes constructed above, we now define the functions $\{\psi_j\}$ by using a construction resembling that in [@Tataru]. First let $\phi:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth non-negative function supported in $[-\frac{\lambda}{2},\frac{\lambda}{2}]$, for which $\phi=1$ on $[-\frac{1}{2},\frac{1}{2}]$. Fixing a cube $Q$, we denote its center point by $\tilde{x}$ and we set $$\psi_Q(x)=\prod_{k=1}^n\phi\bigg(\frac{\tilde{x}_k-x_k}{\ell(Q)}\bigg).$$ In this way, we see that $\psi_Q=1$ on $Q$, and that $\psi_Q$ is supported in $\lambda Q$. Moreover, for any multi-index $\beta=(\beta_1,\dots,\beta_n)$ we observe that thanks to smoothness of $\phi$, the following estimate holds for some constant $C$ which does not depend on $Q$, $$|\partial^\beta\psi_Q(x)|=\prod_{k=1}^n\bigg|\frac{\partial^{\beta_k}}{\partial x_k^{\beta_k}}\phi\bigg(\frac{\tilde{x}_k-x_k}{\ell(Q)}\bigg)\bigg|\leq C\prod_{k=1}^n\ell(Q)^{-\beta_k}=C\ell(Q)^{-|\beta|}.$$ This estimate will be useful momentarily. Finally, we can define the family of functions $\{\psi_j\}$ by taking the dyadic collection $\{Q_j\}$ constructed above and for each $j$ setting $$\psi_j=\psi_{Q_j}\bigg(\sum_{i=1}^\infty\psi_{Q_i}^2\bigg)^{-\frac{1}{2}}.$$ There is no issue of convergence inside $S$ since at least one and at most $2^n$ terms in the sum above are nonzero at any given point. It is also clear by the definition that if $x\in S$ then $$\sum_{j=1}^\infty\psi_j^2=1.$$ Since each $\psi_{Q_j}$ is supported in $S$ (as $\lambda Q_j\subset S$ for each $j$, as we showed above) so too is $\psi_j$. Hence we can smoothly extend each $\psi_j$ by zero outside of $S$ to get equation [\[eq:pou2\]](#eq:pou2){reference-type="eqref" reference="eq:pou2"}. Additionally, given $x\in\mathbb{R}^n$, we recall that $x$ is in at most $2^n$ of the dilates $\{\lambda Q_j\}$, meaning that at most $2^n$ of the functions $\{\psi_j\}$ can be non-zero at any given point and property *(1)* holds.
Now we establish estimates *(2)* and *(3)*. For the former, we fix $j\in\mathbb{N}$ and conduct a straightforward (albeit tedious) calculation using the generalized product and chain rules discussed in Section 2 to see that for any multi-index $\beta$, $$\partial^\beta\psi_j=\sum_{\gamma\leq\beta}\sum_{\Gamma\in P(\gamma)}C_{\gamma,\Gamma}(\partial^{\beta-\gamma}\psi_{Q_j})\bigg(\sum_{i=1}^\infty\psi_{Q_i}^2\bigg)^{-\frac{1}{2}-|\Gamma|}\prod_{\eta \in\Gamma}\sum_{i=1}^\infty\sum_{\delta\leq\eta}\binom{\eta}{\delta}(\partial^{\eta-\delta}\psi_{Q_i})(\partial^\delta\psi_{Q_i}).$$ If $x\not\in \overline{\lambda Q_j}$ then $\partial^\beta\psi_j(x)=0$, and *(2)* holds trivially. On the other hand, if $x\in \overline{\lambda Q_j}$ then at most $2^n$ of the functions $\psi_{Q_i}$ and their derivatives are not identically zero in a neighbourhood of $x$. Re-indexing if necessary, we call these functions $\psi_{Q_1},\dots,\psi_{Q_{2^n}}$ and we note that $\ell(Q_j)\leq 2\ell(Q_i)$ for each $i=1,\dots,2^n$. Recalling the estimate $|\partial^\beta\psi_{Q_i}|\leq C\ell(Q_i)^{-|\beta|}$, we find that $$\prod_{\eta\in\Gamma}\bigg|\sum_{i=1}^\infty\sum_{\delta\leq\eta}\binom{\eta}{\delta}(\partial^{\eta-\delta}\psi_{Q_i})(\partial^\delta\psi_{Q_i})\bigg|\leq C\prod_{\eta\in\Gamma}\sum_{i=1}^{2^n}\sum_{\delta\leq\eta}\ell(Q_i)^{-|\eta|}\leq C\prod_{\eta\in\Gamma}\ell(Q_j)^{-|\eta|}=C\ell(Q_j)^{-|\gamma|}.$$ Since $\psi_{Q_i}(x)=1$ for some $i\in\mathbb{N}$ we also have $$\bigg(\sum_{i=1}^\infty\psi_{Q_i}^2\bigg)^{-\frac{1}{2}-|\Gamma|}\leq 1.$$ Employing these pointwise estimates in the formula for $\partial^\beta\psi_j$ stated above, we obtain the bound $$|\partial^\beta\psi_j|\leq C\sum_{\gamma\leq\beta}\sum_{\Gamma\in P(\gamma)}|\partial^{\beta-\gamma}\psi_{Q_j}|\ell(Q_j)^{-|\gamma|}\leq C\sum_{\gamma\leq\beta}\sum_{\Gamma\in P(\gamma)}\ell(Q_j)^{|\gamma|-|\beta|}\ell(Q_j)^{-|\gamma|}=C\ell(Q_j)^{-|\beta|}.$$ Finally, we use the fact established above that $\ell (Q_j)$ is comparable to $r(x)$ on $\lambda Q_j$ to conclude that $|\partial^\beta\psi_j|\leq Cr^{-|\beta|}$, giving property *(2)*.
For property *(3)* we note that the estimate is trivial when $x\not\in\overline{\lambda Q_j}$, since $[\partial^\beta \psi_j]_\alpha(x)=0$. If $x\in \overline{\lambda Q_j}$ then from the estimate *(2)* and the mean value theorem, we have for $y,z\in\overline{\lambda Q_j}$ that $$|\partial^\beta\psi_j(y)-\partial^\beta\psi_j(z)|\leq C\ell(Q_j)^{-|\beta|-1}|y-z|\leq C\ell(Q_j)^{-|\beta|-\alpha}|y-z|^{\alpha}.$$ From this estimate, it follows that $[\partial^\beta \psi_j]_\alpha(x)\leq C\ell(Q_j)^{-|\beta|-\alpha}$. Property *(3)* then comes as a consequence of the equivalence of $r$ with $\ell(Q_j)$ on $\lambda Q_j$.
To prove property *(4)*, we form an infinite graph $G$ as follows. Associate to each cube in $\{Q_j\}$ a vertex $v_j$, and add edge $\{v_i,v_j\}$ if the corresponding cubes $Q_i$ and $Q_j$ are adjacent (i.e. if $\overline{Q_i}\cap\overline{Q_j}\neq\emptyset$). It follows from continuity of $r$ that $G$ is an infinite graph, and we can assume without loss of generality that $G$ is connected by treating each connected component of the set $S$ separately. We wish to bound $\chi(G)$, the chromatic number of $G$.
First we do this in any number of dimensions, before specializing to $n=2$. Each cube in $\{Q_j\}$ is adjacent to at most $4^n-2^n$ other cubes, since adjacent cubes satisfy $\ell(Q_i)\leq 2\ell(Q_j)$, and so $d=\displaystyle\sup_{v\in V(G)}\mathrm{deg}\;v\leq 4^n-2^n.$ It follows from Theorem [Theorem 10](#basic chromatic){reference-type="ref" reference="basic chromatic"} that $\chi(G)\leq 4^n-2^n$, giving *(4)*.
Finally, to prove *(5)* we fix $n=2$ and we consider an arbitrary finite induced subgraph $H$ of $G$. Fix $s=9$ and assume toward a contradiction that there exists $v\in V(H)$ with $s$ neighbours each with degree at least $s$. In terms of the dyadic grid, this means that there exists a cube $Q$ which is adjacent to at least $9$ cubes $Q_1,\dots,Q_9$, which are each adjacent to at least $9$ cubes. At most four of these cubes intersect $\overline{Q}$ only at a corner, and the remaining cubes which we relabel as $Q_1,\dots, Q_5$ must share a face with $Q$. It follows that $\ell(Q_1)+\cdots+\ell(Q_{5})\leq 4\ell(Q),$ meaning that at least two neighbours (say $Q_1$ and $Q_2$) have size $\frac{1}{2}\ell(Q)$. This implies that $Q_1$ and $Q_2$ are adjacent to at most $8$ other cubes: $Q$ and three of its neighbours, and at most four adjacent cubes with length at least $\frac{1}{4}\ell(Q)$ opposite $Q$.
If $\mathrm{deg}\;v\leq 10$ then we are finished, for this implies that at most $8$ neighbours of $Q$ may induce vertices of degree at least $s=9$. If there are $m=11$ or $m=12$ neighbours then four must be at corners, and two others have length $\frac{1}{2}\ell(Q)$ and occupy at least one face of $Q$. For the remaining neighbours which we relabel $Q_1,\dots,Q_{m-6}$ we have $$\ell(Q_1)+\cdots+\ell(Q_{5})\leq \ell(Q_1)+\cdots+\ell(Q_{m-6})=3\ell(Q),$$ meaning that at least four of the cubes above have length $\frac{1}{2}\ell(Q)$. Each can have at most 8 neighbours, so at least 6 neighbours of $Q$ have fewer than $s$ neighbours, meaning that only $m-6\leq 6$ neighbours of $Q$ can induce vertices of degree at least $s$. It follows that $H$ contains no vertex with $s=9$ neighbours each of the same or larger degree. By Corollary [Corollary 13](#better chromatic){reference-type="ref" reference="better chromatic"} we conclude that $\chi(H)\leq 9$. Since $H$ was arbitrary, it follows that $\chi(G)\leq 9$ by Theorem [Theorem 14](#Erdos){reference-type="ref" reference="Erdos"}. ◻
Property *(5)* cannot be improved using the pigeonholing argument above, since a given cube $Q$ in $\mathbb{R}^2$ can have $8$ neighbouring cubes which each induce vertices of degree at least $8$. In higher dimensions, the pigeonhole argument also becomes difficult since bounding vertex degrees is challenging.
# Local Decompositions
This section shows that the localization of a non-negative function $f\in C^{k,\alpha}(\mathbb{R}^n)$ to a suitable cube $Q$ either has a square root in [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"}, or it is possible to write $f=g^2+F$ on $Q$, where $g$ belongs to [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"} and $F\in C^{k,\alpha}(\mathbb{R}^{n-1})$. These local decompositions can be performed in the support sets of the partition functions $\psi_j$, and they always exist when $k\leq 3$. If $k\geq4$ then $f$ can be decomposed locally when some additional hypotheses are satisfied.
Throughout this section we fix non-negative $f\in C^{k,\alpha}(\mathbb{R}^n)$ for $k\geq 2$ and $0<\alpha\leq 1$. Our primary instrument is a function $r$ that controls the derivatives of $f$. It was originally identified by Fefferman & Phong [@Fefferman-Phong] in the case $k=3$ and $\alpha=1$, and modified by Korobenko & Sawyer [@SOS_I] to work for $k=4$ and $0<\alpha\leq1$. For our purposes a suitable control function is given by $$\label{eq:controlfunc}
r(x)=\max_{\substack{0\leq j\leq k,\\j\;\mathrm{even}}}\sup_{|\xi|=1}[\partial^j_\xi f(x)]_+^\frac{1}{k-j+\alpha}.$$
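To illustrate the definition with a simple example of our own (not taken from the paper), take $n=1$, $k=3$, $\alpha=1$ and $f(x)=x^4$. The even orders are $j=0,2$, so $$r(x)=\max\big\{(x^4)^{\frac{1}{4}},\,(12x^2)^{\frac{1}{2}}\big\}=2\sqrt{3}\,|x|,$$ and indeed $|f'(x)|=4|x|^3\leq Cr(x)^{3}$ and $|f'''(x)|=24|x|\leq Cr(x)$, in line with the estimates established below.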
**Theorem 18**. *If $f\in C^{k,\alpha}(\mathbb{R}^n)$ is non-negative, then for every $\ell\leq k$ there exists a constant $C$ such that $|\nabla^\ell f(x)|\leq Cr(x)^{k-\ell+\alpha}$ pointwise in $\mathbb{R}^n$.*
*Proof.* For even $\ell\leq k$, we begin by bounding the negative part of the $\ell^\mathrm{th}$-order directional derivatives of $f$. To do this we use non-negativity of the Taylor polynomial of $f$ to obtain the following inequality for $\lambda\in\mathbb{R}$ and $\mathbb{\xi}\in S^{n-1}$: $$0\leq f(x+\lambda\xi)\leq \sum_{j=0}^k\frac{\lambda^j}{j!}\partial^j_\xi f(x)+\frac{1}{k!}[\nabla^kf]_{\alpha,\mathbb{R}^n}|\lambda|^{k+\alpha}.$$ A similar estimate holds if $\lambda$ is replaced with $-\lambda$, and summing the resulting inequalities gives $$0\leq \sum_{\substack{0\leq j\leq k,\\ j\;\textrm{even}}}\frac{\lambda^j}{j!}\partial^j_\xi f(x)+\frac{2}{k!}[\nabla^kf]_{\alpha,\mathbb{R}^n}\lambda^{k+\alpha}$$ for $\lambda>0$. Rearranging the inequality above and noting that $\partial^j_\xi f(x)\leq [\partial^j_\xi f(x)]_+$, we find that $$-\partial_\xi^\ell f(x)\leq\sum_{\substack{0\leq j\leq k,\\ j\;\textrm{even},\;j\neq\ell}}\frac{\ell!}{j!}\lambda^{j-\ell}[\partial^j_\xi f(x)]_++\frac{2\ell!}{k!}[\nabla^kf]_{\alpha,\mathbb{R}^n}\lambda^{k-\ell+\alpha}.$$ Using $\lambda=\!\!\displaystyle\max_{\substack{0\leq j\leq k,\\ j\;\textrm{even},\;j\neq\ell}}[\partial^j_\xi f(x)]_+^\frac{1}{k-j+\alpha}$ now gives $\lambda^{j-\ell}[\partial^j_\xi f(x)]_+\leq \lambda^{k-\ell+\alpha}$ and $-\partial_\xi^\ell f(x)\leq C\lambda^{k-\ell+\alpha}$.
Taking a supremum over all $\xi\in S^{n-1}$ and using equivalence of the norms $|\nabla^\ell f(x)|$ and $\displaystyle\sup_{|\xi|=1}|\partial^\ell_\xi f(x)|$, we see now that for even values of $\ell$ the required estimate holds pointwise: $$|\nabla^\ell f(x)|\leq C\sup_{|\xi|=1}[\partial_\xi^\ell f(x)]_++C\max_{\substack{0\leq j\leq k,\\ j\;\textrm{even},\;j\neq\ell}}\sup_{|\xi|=1}[\partial^j_\xi f(x)]_+^\frac{k-\ell+\alpha}{k-j+\alpha}\leq Cr(x)^{k-\ell+\alpha}.$$
For derivatives of odd order we again begin by using a Taylor expansion and non-negativity of $f$ to see that if $\lambda\in\mathbb{R}$ and $|\xi|=1$ then $$0\leq \sum_{j=0}^k\frac{\lambda^j}{j!}\partial^j_\xi f(x)+\frac{1}{k!}[\nabla^kf]_{\alpha,\mathbb{R}^n}|\lambda|^{k+\alpha}.$$ This inequality continues to hold if we replace $\lambda$ with $\eta\lambda$ for any $\eta\in\mathbb{R}$. First let $\ell$ be the largest odd number that is less than or equal to $k$ and set $s=\frac{\ell+1}{2}$. Replacing $\lambda$ with $\eta_1\lambda$ through $\eta_s\lambda$ for some real numbers $\eta_1,\dots,\eta_s$ to be chosen momentarily, and adding the resulting inequalities scaled by positive constants $q_1,\dots,q_s$, we find that $$0\leq \sum_{j=0}^k\bigg(\sum_{i=1}^sq_i\eta_i^j\bigg)\frac{\lambda^j}{j!}\partial^j_\xi f(x)+C\lambda^{k+\alpha}.$$ The preceding inequality continues to hold if we replace $\lambda$ with $-\lambda$, and so if $\lambda>0$ then $$\bigg|\sum_{\substack{1\leq j\leq \ell,\\ j\;\textrm{odd}}}\bigg(\sum_{i=1}^sq_i\eta_i^j\bigg)\frac{\lambda^j}{j!}\partial^j_\xi f(x)\bigg|\leq C\bigg(\sum_{\substack{0\leq j\leq k,\\ j\;\textrm{even}}}\lambda^j[\partial^j_\xi f(x)]_++\lambda^{k+\alpha}\bigg).$$ For brevity we denote by $F_\lambda(x)$ the right-hand side above. Using Lemma 15, choose $\eta_1,\dots,\eta_s\in\mathbb{R}$ and $q_1,\dots,q_s\geq 0$ so that $\sum_{i=1}^sq_i\eta_i^j=0$ for every odd $j<\ell$ and $\sum_{i=1}^sq_i\eta_i^\ell=1$. With these selections, the estimate above reads $|\lambda^\ell\partial^\ell_\xi f(x)|\leq C F_\lambda(x)$.
Equipped with this estimate, we can repeat the argument above using different constants $\tilde{q}_1,\dots,\tilde{q}_{s-1}\geq 0$ and $\tilde{\eta}_1,\dots,\tilde{\eta}_{s-1}\in\mathbb{R}$, and by combining non-negative Taylor polynomials we get $$\bigg|\sum_{\substack{1\leq j\leq \ell-2,\\ j\;\textrm{odd}}}\bigg(\sum_{i=1}^{s-1}\tilde{q}_i\tilde{\eta}_i^j\bigg)\frac{\lambda^j}{j!}\partial^j_\xi f(x)\bigg|\leq C|\lambda^\ell\partial^\ell_\xi f(x)|+CF_\lambda(x)\leq CF_\lambda(x).$$ Arguing as before, we can choose the constants $\tilde{q}_i$ and $\tilde{\eta}_i$ so that the left-hand side above reduces to $|\lambda^{\ell-2}\partial^{\ell-2}_\xi f(x)|$. This gives $|\lambda^{\ell-2}\partial^{\ell-2}_\xi f(x)|\leq CF_\lambda(x)$, and we see by repeating this argument that for each odd $\ell\leq k$ and $\lambda>0$, $$|\partial_\xi^\ell f(x)|\leq C\lambda^{-\ell}F_\lambda(x)\leq C\bigg(\sum_{\substack{0\leq j\leq k,\\ j\;\textrm{even}}}\lambda^{j-\ell}[\partial^j_\xi f(x)]_++\lambda^{k-\ell+\alpha}\bigg).$$ Taking $\lambda=\displaystyle\max_{\substack{0\leq j\leq k,\\ j\;\textrm{even}}}|\partial_\xi^jf(x)|^\frac{1}{k-j+\alpha}$ in this estimate gives $|\partial_\xi^\ell f(x)|\leq C\lambda^{k-\ell+\alpha}$, and it follows that $$|\nabla^\ell f(x)|\leq C\sup_{|\xi|=1}|\partial^\ell_\xi f(x)|\leq C\sup_{|\xi|=1}\lambda^{k-\ell+\alpha}\leq Cr(x)^{k-\ell+\alpha}.$$ Thus the derivative estimates hold for every $\ell\leq k$, as claimed. ◻
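Before moving on, it is worth recording what Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"} says in the simplest non-trivial case; the specialisation below is only an illustration and is not used in the sequel. For $k=2$ and $\ell=1$, with $r$ as in [\[eq:controlfunc\]](#eq:controlfunc){reference-type="eqref" reference="eq:controlfunc"}, the estimate reads $$|\nabla f(x)|\leq C\,r(x)^{1+\alpha}=C\max\Big\{f(x)^{\frac{1+\alpha}{2+\alpha}},\;\sup_{|\xi|=1}[\partial^2_\xi f(x)]_+^{\frac{1+\alpha}{\alpha}}\Big\},$$ a pointwise gradient bound in the spirit of the classical Glaeser inequality $|f'(x)|^2\leq 2\|f''\|_\infty f(x)$ for non-negative $C^{1,1}$ functions of one variable.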
Following [@SOS_I; @Tataru], we now show that $r$ is slowly-varying.
**Lemma 19**. *There exists a constant $\nu>0$ such that $|r(x)-r(y)|\leq \frac{1}{4}r(x)$ when $|x-y|\leq \nu r(x)$.*
*Remark*: The choice of the constant $\frac{1}{4}$ in the preceding lemma is largely arbitrary; any constant between $0$ and $1$ will suffice, and we only choose $\frac{1}{4}$ to simplify some later calculations. Throughout this section we continue to put size restrictions on $\nu$. These ensure that in the end, it is a small positive constant whose value is inconsequential for our final construction. We encounter no issue in occasionally asking that $\nu$ be smaller than previously assumed.
*Proof.* Given $x,y\in\mathbb{R}^n$, it follows from the definition of $r$ and straightforward properties of the maximum and supremum that $$\label{eq:intermed4}
|r(x)-r(y)|\leq \max_{\substack{0\leq j\leq k,\\j\;\mathrm{even}}}\big\{\sup_{|\xi|=1}\big|[\partial^j_\xi f(x)]_+^\frac{1}{k-j+\alpha}-[\partial^j_\xi f(y)]_+^\frac{1}{k-j+\alpha}\big|\big\}.$$ Now we assume that $|x-y|\leq \nu r(x)$. If $j<k$ then $k-j+\alpha> 1$, and a concavity estimate gives $$\big|[\partial^j_\xi f(x)]_+^\frac{1}{k-j+\alpha}-[\partial^j_\xi f(y)]_+^\frac{1}{k-j+\alpha}\big|\leq |\partial^j_\xi f(x)-\partial^j_\xi f(y)|^\frac{1}{k-j+\alpha}.$$ To estimate the right-hand side above, we expand the directional operator $\partial_\xi^j$ to write $$|\partial^j_\xi f(x)-\partial^j_\xi f(y)|=\bigg|\sum_{|\beta|=j}\frac{j!}{\beta!}\xi^\beta(\partial^\beta f(x)-\partial^\beta f(y))\bigg|\leq \sum_{|\beta|=j}\frac{j!}{\beta!}|\partial^\beta f(x)-\partial^\beta f(y)|.$$ Then, Taylor expanding each $\partial^\beta f$ about $x$ and using the bounds $|\nabla^\ell f(x)|\leq Cr(x)^{k-\ell+\alpha}$ and $|x-y|\leq \nu r(x)$, we obtain $$|\partial^\beta f(x)-\partial^\beta f(y)|\leq C\sum_{1\leq |\gamma|\leq k-|\beta|}\nu^{|\gamma|}r(x)^{k-|\beta|+\alpha}+C\nu^{k-|\beta|+\alpha}r(x)^{k-|\beta|+\alpha}.$$ Every power on $\nu$ above is at least one, since $|\beta|<k$. Thus, assuming that $\nu\leq 1$, we find that there exists a constant $C$ for which $$\label{eq:omegaeqn}
|\partial^\beta f(x)-\partial^\beta f(y)|\leq C\nu r(x)^{k-|\beta|+\alpha}.$$ Therefore $|\partial^j_\xi f(x)-\partial^j_\xi f(y)|\leq C\nu r(x)^{k-|\beta|+\alpha}$, and since this bound is independent of $\xi$ we can choose a small number $\nu$ which is independent of $x$ such that the following holds: $$\sup_{|\xi|=1}\big|[\partial^j_\xi f(x)]_+^\frac{1}{k-j+\alpha}-[\partial^j_\xi f(y)]_+^\frac{1}{k-j+\alpha}\big|\leq C\nu^\frac{1}{k-j+\alpha}r(x)\leq \frac{1}{4}r(x).$$
It remains to consider the case $j=k$, which arises when $k$ is even. This time we employ , taking $\beta=\frac{1}{\alpha}$ and $\varepsilon=\nu^\alpha$. Combining with the inequality $|x-y|\leq \nu r(x)$, we get $$\sup_{|\xi|=1}\big|[\partial^k_\xi f(x)]_+^\frac{1}{\alpha}-[\partial^k_\xi f(y)]_+^\frac{1}{\alpha}\big|\leq\frac{C\nu (1+\nu^\alpha)r(x)}{((1+\nu^\alpha)^{\frac{\alpha}{1-\alpha}}-1)^\frac{1-\alpha}{\alpha}}+\nu^\alpha\max\{r(x),r(y)\}.$$ If $\nu\leq 1$ the quotient above is bounded by $C\nu^\alpha r(x)$, for a constant $C$ that depends only on $\alpha$ and the Hölder semi-norms of $f$. Since $\max\{r(x),r(y)\}\leq r(x)+|r(x)-r(y)|$ by non-negativity of $r$, when $\nu$ is small enough we have $$\sup_{|\xi|=1}\big|[\partial^k_\xi f(x)]_+^\frac{1}{\alpha}-[\partial^k_\xi f(y)]_+^\frac{1}{\alpha}\big|\leq C\nu^\alpha r(x)+\nu^\alpha|r(x)-r(y)|\leq \frac{1}{8}r(x)+\frac{1}{2}|r(x)-r(y)|.$$ It follows from these bounds and [\[eq:intermed4\]](#eq:intermed4){reference-type="eqref" reference="eq:intermed4"} that $|r(x)-r(y)|\leq \frac{1}{4}r(x)$, as we wished to show. ◻
The following consequence of estimate [\[eq:omegaeqn\]](#eq:omegaeqn){reference-type="eqref" reference="eq:omegaeqn"} will also be useful momentarily.
**Corollary 20**. *Let $f\in C^{k,\alpha}(\mathbb{R}^n)$ be non-negative for $k\geq 2$, and let $r$ be as in [\[eq:controlfunc\]](#eq:controlfunc){reference-type="eqref" reference="eq:controlfunc"}. If $\nu$ is a small positive constant, then there exists $\omega>0$ for which*
- *$|f(x)-f(y)|\leq \frac{1}{2}\omega \nu r(x)^{k+\alpha}$ whenever $|x-y|\leq \nu r(x)$,*
- *$|\partial^\beta f(x)-\partial^\beta f(y)|\leq \frac{1}{2}r(x)^{k-2+\alpha}$ whenever $|x-y|\leq \sqrt{\nu^2+6\omega\nu}r(x)$ and $|\beta|=2$.*
*Proof.* Estimate *(1)* follows by taking $\omega=2C$ for $C$ as in [\[eq:omegaeqn\]](#eq:omegaeqn){reference-type="eqref" reference="eq:omegaeqn"} when $|\beta|=0$, while *(2)* is given by replacing $\nu$ with $\sqrt{\nu^2+6\omega\nu}$ in [\[eq:omegaeqn\]](#eq:omegaeqn){reference-type="eqref" reference="eq:omegaeqn"} when $|\beta|=2$, and once again choosing $\nu$ small. ◻
For the remainder of this section, we fix a maximal dyadic cube $Q$ which satisfies $$\ell(Q)\leq \frac{\nu}{2\sqrt{n}}\inf_{Q}r.$$ Letting $x$ denote the center of $Q$ and $r_Q=\inf_Qr$, we consider two cases: when $f(x)\geq \omega\nu r_Q^{k+\alpha}$, and when $f(x)<\omega\nu r_Q^{k+\alpha}$. In the first case, we show that $f$ has a half-regular root on $2Q$.
**Lemma 21**. *Let $f$, $r$, and $\nu$ be as above and assume that $f(x)\geq \omega\nu r_Q^{k+\alpha}$. Then for any multi-index $\beta$ of order $|\beta|\leq \frac{k+\alpha}{2}$, and for any $y\in 2Q$, the following estimates hold:*
- *$|\partial^\beta\sqrt{f(y)}|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|}$,*
- *$[\partial^\beta\sqrt{f}]_{\frac{\alpha}{2}}(y)\leq Cr(y)^{\frac{k}{2}-|\beta|}$ if $k$ is even,*
- *$[\partial^\beta\sqrt{f}]_{\frac{1+\alpha}{2}}(y)\leq Cr(y)^{\frac{k-1}{2}-|\beta|}$ if $k$ is odd.*
*Proof.* Using Lemma [Lemma 6](#lem:genchain){reference-type="ref" reference="lem:genchain"} we bound derivatives of $\sqrt{f}$ at $y\in 2Q$, estimating pointwise to first get $$|\partial^\beta\sqrt{f(y)}|=\bigg|\sum_{\Gamma\in P(\beta)}C_{\beta,\Gamma}f(y)^{\frac{1}{2}-|\Gamma|}\prod_{\gamma\in\Gamma}\partial^\gamma f(y)\bigg|\leq C\sum_{\Gamma\in P(\beta)}f(y)^{\frac{1}{2}-|\Gamma|}\prod_{\gamma\in\Gamma}|\partial^\gamma f(y)|.$$ If $y\in 2Q$ then $|x-y|\leq \nu r_Q$, meaning that $f(y)\geq \frac{1}{2}\omega\nu r_Q^{k+\alpha}$ by property *(1)* of Corollary [Corollary 20](#cor:supplementary){reference-type="ref" reference="cor:supplementary"}. Additionally, Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"} and the fact that $r(y)\leq Cr_Q$ for $y\in 2Q$ together show that $$|\partial^\beta\sqrt{f(y)}|\leq C\sum_{\Gamma\in P(\beta)}r_Q^{(k+\alpha)(\frac{1}{2}-|\Gamma|)}\prod_{\gamma\in\Gamma}r(y)^{k+\alpha-|\gamma|}\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|}.$$
For the semi-norm estimates *(2)* and *(3)*, we treat the odd and even cases simultaneously by letting $\Lambda$ denote the integer part of $\frac{k}{2}$ and setting $\lambda=\frac{k+\alpha}{2}-\Lambda$. To prove the claimed results it suffices to show that $[\partial^\beta\sqrt{f}]_{\lambda}(y)\leq Cr(y)^{\Lambda-|\beta|}$ whenever $|\beta|\leq \Lambda$ and $y\in 2Q$. Given $\beta\in\mathbb{N}_0^n$ observe that by Lemma [Lemma 6](#lem:genchain){reference-type="ref" reference="lem:genchain"} and the triangle inequality, we have for $w,z\in 2Q$ that $$\label{eq:ctrl}
\begin{split}
|\partial^\beta\sqrt{f(w)}-\partial^\beta\sqrt{f(z)}|&\leq C\sum_{\Gamma\in P(\beta)} f(w)^{\frac{1}{2}-|\Gamma|}\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma f(w)-\prod_{\gamma\in\Gamma}\partial^\gamma f(z)\bigg|\\
&\qquad+C\sum_{\Gamma\in P(\beta)} |f(w)^{\frac{1}{2}-|\Gamma|}-f(z)^{\frac{1}{2}-|\Gamma|}|\prod_{\gamma\in\Gamma}|\partial^\gamma f(z)|.
\end{split}$$ We treat the terms on the right-hand side above separately, first using the mean value theorem and slow variation of $r$ on $2Q$ to obtain the bound $$|f(w)^{\frac{1}{2}-|\Gamma|}-f(z)^{\frac{1}{2}-|\Gamma|}|\leq Cr(y)^{\frac{k+\alpha}{2}-1-|\Gamma|(k+\alpha)}|w-z|\leq Cr(y)^{\frac{k+\alpha}{2}-|\Gamma|(k+\alpha)-\lambda}|w-z|^\lambda.$$ Explicitly, we have used here that $f$ is bounded below by a multiple of $r(y)^{k+\alpha}$ on $2Q$, followed by the fact that $|w-z|\leq Cr(y)^{1-\lambda}|w-z|^\lambda$ for $w,z\in 2Q$. Bounding the other derivatives of $f$ on $2Q$ in the same fashion, we see that $$|f(w)^{\frac{1}{2}-|\Gamma|}-f(z)^{\frac{1}{2}-|\Gamma|}|\prod_{\gamma\in\Gamma}|\partial^\gamma f(z)|\leq Cr(y)^{\frac{k+\alpha}{2}-|\Gamma|(k+\alpha)-\lambda}|w-z|^\lambda\prod_{\gamma\in\Gamma}r(y)^{k+\alpha-|\gamma|}.$$ Since $\Gamma\in P(\beta)$, we can simplify the right-hand side above to $Cr(y)^{\Lambda-|\beta|}|w-z|^\lambda$. To estimate the remaining term of [\[eq:ctrl\]](#eq:ctrl){reference-type="eqref" reference="eq:ctrl"}, we observe that $$\label{eq:sub}
\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma f(w)-\prod_{\gamma\in\Gamma}\partial^\gamma f(z)\bigg|\leq \sum_{\gamma\in\Gamma}\bigg(\prod_{\mu\in\Gamma\setminus\{\gamma\}}\sup_{2Q}|\partial^\mu f|\bigg)|\partial^\gamma f(w)-\partial^\gamma f(z)|.$$
Arguing as above, we can control the product in [\[eq:sub\]](#eq:sub){reference-type="eqref" reference="eq:sub"} by $Cr(y)^{(k+\alpha)(|\Gamma|-1)-|\beta|+|\gamma|}$. Further, since $|\beta|\leq\Lambda< \frac{k+\alpha}{2}$, whenever $\gamma\in\Gamma$ for some $\Gamma\in P(\beta)$ we must have $|\gamma|<k$, meaning that we can use the mean value theorem and Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"} to get for $w,z\in 2Q$ that $$|\partial^\gamma f(w)-\partial^\gamma f(z)|\leq Cr(y)^{k+\alpha-|\gamma|-\lambda}|w-z|^\lambda.$$ Therefore the left-hand side of [\[eq:sub\]](#eq:sub){reference-type="eqref" reference="eq:sub"} is bounded by $Cr(y)^{(k+\alpha)|\Gamma|-|\beta|-\lambda}|w-z|^\lambda$. Consequently, $$f(w)^{\frac{1}{2}-|\Gamma|}\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma f(w)-\prod_{\gamma\in\Gamma}\partial^\gamma f(z)\bigg|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|-\lambda}|w-z|^\lambda=Cr(y)^{\Lambda-|\beta|}|w-z|^\lambda.$$ Now from [\[eq:ctrl\]](#eq:ctrl){reference-type="eqref" reference="eq:ctrl"} and the definition of the pointwise semi-norm, we observe for any $y\in 2Q$ that $$[\partial^\beta\sqrt{f}]_\lambda(y)=\limsup_{w,z\rightarrow y}\frac{|\partial^\beta\sqrt{f(w)}-\partial^\beta\sqrt{f(z)}|}{|w-z|^\lambda}\leq Cr(y)^{\Lambda-|\beta|}.$$ Critically, the constant $C$ is independent of $Q$. Hence, properties *(2)* and *(3)* hold as claimed. ◻
If $k$ is even and $\beta$ is a multi-index of order $\frac{k}{2}$, then the preceding result shows that $[\partial^\beta \sqrt{f}]_\frac{\alpha}{2}(y)$ is uniformly bounded on $2Q$, meaning that $[\partial^\beta \sqrt{f}]_{\frac{\alpha}{2},2Q}$ is finite. A similar bound holds when $k$ is odd, and as a consequence we have the following.
**Corollary 22**. *Let $f\in C^{k,\alpha}(\mathbb{R}^n)$ be non-negative and let $r$ be as in [\[eq:controlfunc\]](#eq:controlfunc){reference-type="eqref" reference="eq:controlfunc"}. If $\nu$ is a small positive constant and $f(x)\geq \omega\nu r_Q^{k+\alpha}$, then $\sqrt{f}\in C^\frac{k+\alpha}{2}(2Q)$.*
On the other hand, if $f(x)<\omega\nu r_Q^{k+\alpha}$ then this corollary fails. The best we can do is form a local decomposition of $f$ on $2Q$ that takes the form $f=g^2+F,$ where $g$ is half as regular as $f$ and the remainder $F\in C^{k,\alpha}(2Q)$ depends on $n-1$ variables. The construction of this local decomposition is patterned after the arguments in [@SOS_I; @Tataru], but we go to greater lengths to obtain estimates for every $k\geq 0$, not just small values of $k$.
In what follows, given $x\in\mathbb{R}^n$ we write $x=(x',x_n)$ for $x'\in\mathbb{R}^{n-1}$ and $x_n\in\mathbb{R}$. We also let $Q'=\{x':x\in Q\}$ denote the projection of $Q$ onto $\mathbb{R}^{n-1}$ in the $x_n$ direction. Equipped with this notation we have the following result.
**Lemma 23**. *Let $f\in C^{k,\alpha}(\mathbb{R}^n)$ be non-negative and satisfy $f(x)<\omega \nu r_Q^{k+\alpha}$. Assume also that $r$ satisfies the following pointwise bound everywhere: $$\label{eq:needed}
r(y)\leq \max\bigg\{f(y)^\frac{1}{k+\alpha},\sup_{|\xi|=1}[\partial^2_\xi f(y)]_+^\frac{1}{k-2+\alpha}\bigg\}.$$ Then after a suitable change of variables, there exists a function $X\in C^{k-1,\alpha}(2Q')$ which enjoys the following properties for $y\in 2Q$,*
- *$|x_n-X(y')|\leq r(y)$,*
- *$f(y',X(y'))\leq f(y)$,*
- *$\partial_{x_n}f(y',X(y'))=0$.*
*Moreover, for all derivatives of order $|\beta|\leq k-1$ and $y\in 2Q$, the function $X$ satisfies*
- *$|\partial^\beta X(y')|\leq Cr(y)^{1-|\beta|}$,*
- *$[\partial^\beta X]_{\alpha}(y')\leq Cr(y)^{1-\alpha-|\beta|}$.*
*Remark*: This result states that $f$ has a unique minimum along each line segment in the $x_n$ direction through a box contained in $2Q$. Moreover, the collection of these minima, viewed as a function on $2Q'$, belongs to $C^{k-1,\alpha}(2Q')$. The bound [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"} holds automatically when $k=2$ or $k=3$, while if $k\geq 4$ additional conditions must be placed on $f$ for [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"} to hold.
*Proof.* Adapting an argument from [@SOS_I], we let $y'\in 2Q'$ and set $h=\sqrt{6\omega\nu}$, where $\nu$ is chosen small enough that $h\leq 1$ and $\omega\nu\leq 1$. First we show that the map $t\mapsto f(y',t)$ has a unique minimum in the interval $(x_n-hr_Q,x_n+hr_Q)$, so that properties *(1)-(3)* follow as direct consequences. Observe that if $f(x)<\omega\nu r_Q^{k+\alpha}\leq r_Q^{k+\alpha}$ then $r_Q>0$, and by [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"} we have $$r_Q=\sup_{|\xi|=1}[\partial^2_\xi f(x)]_+^\frac{1}{k-2+\alpha}.$$ Assume without loss of generality that the supremum above is attained by the $x_n$ directional derivative of $f$ so that $r_Q^{k-2+\alpha}=\partial^2_{x_n}f(x)$. We note that non-negativity of $f$ and inclusion in $C^{k,\alpha}(\mathbb{R}^n)$ are preserved by such rotations.
For each $(y',t)\in 2Q'\times [x_n-hr_Q,x_n+hr_Q]$ we have $|(y',t)-x|\leq \sqrt{\nu^2+6\omega\nu}r_Q$, and so $\partial^2_{x_n}f(y',t)\geq\frac{1}{2}r_Q^{k-2+\alpha}$ by item *(2)* of Corollary [Corollary 20](#cor:supplementary){reference-type="ref" reference="cor:supplementary"}. It follows that $t\mapsto f(y',t)$ has a unique minimum on $[x_n-hr_Q,x_n+hr_Q]$. Assume toward a contradiction that the minimum on this interval occurs at the left endpoint $x_n-hr_Q$, so that $\partial_{x_n}f(y',x_n-hr_Q)\geq 0$. Additionally, note that $f(y',x_n)<\frac{3}{2}\omega\nu r_Q^{k+\alpha}$ due to property *(1)* of Corollary [Corollary 20](#cor:supplementary){reference-type="ref" reference="cor:supplementary"} and our condition that $f(x)<\omega\nu r_Q^{k+\alpha}$. So for some $t\in (x_n-hr_Q,x_n)$ we have $$\frac{3}{2}\omega\nu r_Q^{k+\alpha}>f(y',x_n)=f(y',x_n-hr_Q)+hr_Q\partial_{x_n}f(y',x_n-hr_Q)+\frac{h^2r_Q^2}{2}\partial^2_{x_n}f(y',t)\geq \frac{3}{2}\omega\nu r_Q^{k+\alpha},$$ a contradiction since $r_Q>0$. An identical contradiction arises if we assume that the minimum occurs at $x_n+hr_Q$, so $t\mapsto f(y',t)$ has a unique minimum in $(x_n-hr_Q,x_n+hr_Q)$ which we call $X(y')$. Properties *(1)-(3)* follow from our construction, and as $y'$ was any point in $2Q'$, we see that $X$ is a well-defined function on $2Q'$.
To see that $X\in C^{k-1}(2Q')$, we observe that for each $y'$ we have $\partial_{x_n}f(y',X(y'))=0$ and that $\partial^2_{x_n}f(y',X(y'))>0$ by the lower bound established above. Applying Theorem [Theorem 7](#thm:IFT){reference-type="ref" reference="thm:IFT"} to $\partial_{x_n}f$ at the point $(y',X(y'))$ we see that $X\in C^{k-1}(U)$ for some neighbourhood $U$ of $y'$, since $\partial_{x_n}f\in C^{k-1}(\mathbb{R}^n)$. Further, as $y'$ was any point in $2Q'$ we can find such a neighbourhood around every point in $2Q'$. By the uniqueness statement of Theorem [Theorem 7](#thm:IFT){reference-type="ref" reference="thm:IFT"}, these functions must agree with $X$ on all such neighbourhoods, meaning that $X\in C^{k-1}(2Q')$.
To establish derivative estimates of $X$ for property *(4)*, we first use the derivative formula in Theorem [Theorem 7](#thm:IFT){reference-type="ref" reference="thm:IFT"} to write $$|\partial^\beta X(y')|=\bigg|\frac{1}{\partial^2_{x_n}f(y',X(y'))}\sum_{0\leq\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta),\\\Gamma\neq \{\beta\}}}C_{\beta,\Gamma}(\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(y',X(y')))\prod_{\gamma\in\Gamma}\partial^\gamma X(y')\bigg|.$$ We simplify this by observing that $\partial^2_{x_n}f(y',X(y'))\geq Cr(y)^{k-2+\alpha}$ on $2Q'$ by the lower bound on $\partial^2_{x_n}f$ established above and slow variation of $r$. Using this fact with the estimates of Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"}, we get $$|\partial^\beta X(y')|\leq Cr(y)^{2-k-\alpha}\sum_{0\leq\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta),\\\Gamma\neq \{\beta\}}}r(y',X(y'))^{k+\alpha+|\eta|-|\beta|-|\Gamma|-1}\prod_{\gamma\in\Gamma}|\partial^\gamma X(y')|.$$ Taking $\nu$ small enough that the conclusion of Lemma 19 holds when $\nu$ is replaced by $\sqrt{\nu^2+6\omega\nu}$, it follows from the definition of $X$ that $r(y',X(y'))\leq Cr(y)$ whenever $y'\in 2Q'$.
Now we argue by strong induction that $|\partial^\beta X(y')|\leq Cr(y)^{1-|\beta|}$, observing for a base case that if $|\beta|=1$ then for some $j\in\{1,\dots,n-1\}$ and any $y'\in 2Q'$ we can write $$|\partial^\beta X(y')|=|\partial_{x_j}X(y')|=\bigg|\frac{\partial_{x_j}\partial_{x_n}f(y',X(y'))}{\partial^2_{x_n}f(y',X(y'))}\bigg|\leq \frac{Cr(y)^{k+\alpha-2}}{r(y)^{k+\alpha-2}}=C.$$ For an inductive hypothesis we assume that $|\partial^\gamma X(y')|\leq Cr(y)^{1-|\gamma|}$ whenever $|\gamma|<|\beta|$, so that $$|\partial^\beta X(y')|\leq C\sum_{0\leq\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta),\\\Gamma\neq \{\beta\}}}r(y)^{1-|\beta|+|\eta|-|\Gamma|}\prod_{\gamma\in\Gamma}r(y)^{1-|\gamma|}=Cr(y)^{1-|\beta|}.$$ It follows that the claimed estimate holds for every $\beta$ with $|\beta|\leq k-1$, giving *(4)*.
It remains to demonstrate that estimate *(5)* holds, and once again we prove this using strong induction. For a base case we observe that $|X(w')-X(z')|\leq C|w'-z'|$ by the mean value theorem and the fact that $|\nabla X|\leq C$ uniformly. Further, if $w',z'\in 2Q'$ then we have that $|w'-z'|\leq Cr(y)^{1-\alpha}|w'-z'|^\alpha$ and $$|X(w')-X(z')|\leq Cr(y)^{1-\alpha}|w'-z'|^\alpha,$$ from which it follows by taking a limit supremum as $w',z'\rightarrow y'$ that $[X]_{\alpha}(y')\leq Cr(y)^{1-\alpha}$ for $y'\in 2Q'$. Thus the required semi-norm estimate holds in the base case.
For the inductive step, we first use the derivative formula of Theorem [Theorem 7](#thm:IFT){reference-type="ref" reference="thm:IFT"} to write the pointwise difference $\partial^\beta X(w')-\partial^\beta X(z')$ for $w',z'\in 2Q'$ in expanded form as follows: $$\sum_{\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta),\\\Gamma\neq \{\beta\}}}C_{\beta,\Gamma}\bigg(\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(w',X(w'))}{\partial^2_{x_n}f(w',X(w'))}\prod_{\gamma\in\Gamma}\partial^\gamma X(w')-\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(z',X(z'))}{\partial^2_{x_n}f(z',X(z'))}\prod_{\gamma\in\Gamma}\partial^\gamma X(z')\bigg).$$ For brevity, we write $\Tilde{w}=(w',X(w'))$ and note that $|\tilde{w}-\Tilde{z}|\leq C|w'-z'|$ by the estimates above. Using the triangle inequality we can bound each of the terms in the sum above by the quantity $$\frac{|\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})|}{\partial^2_{x_n}f(\tilde{w})}\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma X(w')-\prod_{\gamma\in\Gamma}\partial^\gamma X(z')\bigg|+\bigg|\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})}{\partial^2_{x_n}f(\tilde{w})}-\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\Tilde{z})}{\partial^2_{x_n}f(\Tilde{z})}\bigg|\prod_{\gamma\in\Gamma}|\partial^\gamma X(w')|$$ meaning that the required semi-norm estimate on $\partial^\beta X$ will follow if we can bound each of the four factors above in an appropriate fashion.
To this end we first observe that by lower control of $\partial^2_{x_n}f$ on $2Q'$, together with slow variation of $r$ and the derivative estimates of Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"}, we have $$\frac{|\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})|}{\partial^2_{x_n}f(\tilde{w})}\leq \frac{Cr(y)^{k+\alpha-|\beta|+|\eta|-|\Gamma|-1}}{r(y)^{k+\alpha-2}}=Cr(y)^{1-|\beta|+|\eta|-|\Gamma|}.$$ Next, we employ the derivative bounds on $X$ proved above for item *(5)*. Notice that in the sum above, since $\Gamma\in P(\eta)$ for $\eta$ satisfying $|\eta|\leq |\beta|\leq k$ and $\Gamma\neq\{\beta\}$, for each $\gamma\in \Gamma$ we must have that $|\gamma|\leq k-1$. Therefore $|\partial^\gamma X(w')|\leq Cr(y)^{1-|\gamma|}$ uniformly on $2Q'$ by *(4)*, and it follows that $$\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma X(w')\bigg|\leq C\prod_{\gamma\in\Gamma}r(y)^{1-|\gamma|}=Cr(y)^{|\Gamma|-|\eta|}.$$ Using lower control of $\partial^2_{x_n}f$ by $r(y)^{k-2+\alpha}$ once again, we can also make the the following estimate: $$\bigg|\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})}{\partial^2_{x_n}f(\tilde{w})}-\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\Tilde{z})}{\partial^2_{x_n}f(\Tilde{z})}\bigg|\leq \frac{C|\partial^2_{x_n}f(\Tilde{z})\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})-\partial^2_{x_n}f(\tilde{w})\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\Tilde{z})|}{r(y)^{2k+2\alpha-4}}.$$ Employing the triangle inequality and our local control of derivatives of $f$, we can bound the numerator on the right-hand side above by a constant multiple of the following expression, $$r(y)^{k-2+\alpha}|\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})-\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\Tilde{z})|+r(y)^{k+\alpha-|\beta|+|\eta|-|\Gamma|-1}|\partial^2_{x_n}f(\Tilde{z})-\partial^2_{x_n}f(\tilde{w})|.$$ If $|\beta|-|\eta|+|\Gamma|+1<k$ then the mean value theorem, our pointwise derivative estimates on $f$, and the estimate $|\tilde{w}-\Tilde{z}|\leq Cr(y)$ for $w',z'\in 2Q'$, all give $$|\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})-\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\Tilde{z})|\leq Cr(y)^{k-|\beta|+|\eta|-|\Gamma|-1}|w'-z'|^\alpha.$$ On the other hand, if $|\beta|-|\eta|+|\Gamma|+1=k$ this estimate holds since $\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f\in C^\alpha(\mathbb{R}^n)$ and $$|\partial^{\beta-\eta}\partial^{|\Gamma|}_{x_n}f(\tilde{w})-\partial^{\beta-
\eta}\partial^{|\Gamma|}_{x_n}f(\Tilde{z})|\leq C|\tilde{w}-\Tilde{z}|^\alpha\leq C|w'-z'|^\alpha=Cr(y)^{k-|\beta|+|\eta|-|\Gamma|-1}|w'-z'|^\alpha.$$ An identical argument shows that $|\partial^2_{x_n}f(z',X(z'))-\partial^2_{x_n}f(w',X(w'))|\leq Cr(y)^{k-2}|w'-z'|^\alpha$ when $k\geq 2$, and altogether we find now that $$\bigg|\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\tilde{w})}{\partial^2_{x_n}f(\tilde{w})}-\frac{\partial^{\beta-\eta}\partial^{|\Gamma|+1}_{x_n}f(\Tilde{z})}{\partial^2_{x_n}f(\Tilde{z})}\bigg|\leq Cr(x)^{1-\alpha-|\beta|+|\eta|-|\Gamma|}|w'-z'|^\alpha.$$
One more term estimate remains before we can invoke an inductive hypothesis on the Hölder semi-norms of $X$ to simplify our bounds. Iterating the triangle inequality gives $$\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma X(w')-\prod_{\gamma\in\Gamma}\partial^\gamma X(z')\bigg|\leq \sum_{\gamma\in\Gamma}\bigg(\prod_{\mu\in\Gamma\setminus\{\gamma\}}\sup_{B'}|\partial^\mu X|\bigg)|\partial^\gamma X(w')-\partial^\gamma X(z')|,$$ and using the estimates established above for property *(2)*, we can bound this further to get $$\bigg|\prod_{\gamma\in\Gamma}\partial^\gamma X(w')-\prod_{\gamma\in\Gamma}\partial^\gamma X(z')\bigg|\leq C\sum_{\gamma\in\Gamma}r(y)^{|\Gamma|-1-|\eta|+|\gamma|}|\partial^\gamma X(w')-\partial^\gamma X(z')|.$$ Consequently, we can bound each term in our earlier expansion of $\partial^\beta X(w')-\partial^\beta X(z')$ to get $$|\partial^\beta X(w')-\partial^\beta X(z')|\leq C\sum_{\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta)\\\Gamma\neq \{\beta\}}}\bigg(\sum_{\gamma\in\Gamma}\frac{|\partial^\gamma X(w')-\partial^\gamma X(z')|}{r(y)^{|\beta|-|\gamma|}}+r(y)^{1-\alpha-|\beta|}|w'-z'|^\alpha\bigg).$$
Dividing the expression above by $|w'-z'|^\alpha$ and taking a limit supremum as $w',z'\rightarrow y'$, it follows from the definition of the pointwise Hölder semi-norm that $$[\partial^\beta X]_\alpha(y')\leq C\sum_{\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta),\\\Gamma\neq \{\beta\}}}\sum_{\gamma\in\Gamma}r(y)^{-|\beta|+|\gamma|}[\partial^\gamma X]_\alpha(y')+Cr(y)^{1-\alpha-|\beta|}.$$ Finally, assume for a strong inductive hypothesis that $[\partial^\gamma X]_{\alpha}(y')\leq Cr(y)^{1-\alpha-|\gamma|}$ for $y'\in 2Q'$ whenever $|\gamma|<|\beta|$. It follows immediately that $$[\partial^\beta X]_\alpha(y')\leq C\sum_{\eta\leq\beta}\sum_{ \substack{\Gamma\in P(\eta),\\\Gamma\neq \{\beta\}}}\sum_{\gamma\in\Gamma}r(y)^{-|\beta|+|\gamma|}r(y)^{1-\alpha-|\gamma|}+Cr(y)^{1-\alpha-|\beta|}=Cr(y)^{1-\alpha-|\beta|}.$$ By induction this holds for every $\beta$ with $|\beta|\leq k-1$, showing that *(5)* holds on $2Q'$. ◻
Henceforth, we assume that $\nu$ has been chosen small enough that the $\nu$-dependent estimates of the preceding results hold. Moreover, we assume that [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"} holds, and we recall that this is automatic when $k=2$ or $k=3$. Our reward for undergoing the hard work of defining $X$ on $2Q'$, and proving its various properties, is that for $y\in 2Q$ we may follow what is by now a standard technique (see e.g. [@Fefferman-Phong; @SOS_I; @Tataru]) of defining $$F(y)=f(y',X(y')).$$ It follows from the construction above that $f-F\geq 0$, with equality precisely where $y_n=X(y')$. Now we establish pointwise properties of the functions $F$ and $f-F$.
**Lemma 24**. *Assume that the hypotheses of Lemma 23 hold and define $F(y)=f(y',X(y'))$. Then for $\beta$ with $|\beta|<\frac{k+\alpha}{2}$ and every $y\in 2Q$,*
- *$|\partial^\beta\sqrt{f(y)-F(y)}|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|}$,*
- *$[\partial^\beta\sqrt{f-F}]_{\frac{\alpha}{2}}(y)\leq Cr(y)^{\frac{k}{2}-|\beta|}$ if $k$ is even,*
- *$[\partial^\beta\sqrt{f-F}]_{\frac{1+\alpha}{2}}(y)\leq Cr(y)^{\frac{k-1}{2}-|\beta|}$ if $k$ is odd.*
*Proof.* By the fundamental theorem of calculus we have $f(y)-F(y)=(y_n-X(y'))^2H(y)$, where $$H(y)=\int_0^1 (1-t)\partial^2_{x_n}f(y',ty_n+(1-t)X(y'))dt.$$ Arguing as in the proof of Lemma 23 we have $\frac{1}{2}r_Q^{k-2+\alpha}\leq \partial^2_{x_n}f(y',ty_n+(1-t)X(y'))$ for every $t\in[0,1]$, meaning that $H(y)\geq Cr(y)^{k-2+\alpha}$. Additionally, we can bound the derivatives of $H$ on $2Q$. To this end we first observe that $$\partial^\beta H(y)=\int_0^1 (1-t)\partial^\beta[\partial^2_{x_n}f(y',ty_n+(1-t)X(y'))]dt.$$ To evaluate the derivative inside the integral, we use the shorthand $L(y,t)=ty_n+(1-t)X(y')$ and write $\beta'=(\beta_1,\dots,\beta_{n-1})$, so that the chain rule gives $$\partial^\beta[\partial^2_{x_n}f(y',L(y,t))]=t^{\beta_n}\sum_{\mu\leq\beta'}\sum_{\Gamma\in P(\mu)}C_{\beta',\Gamma}(1-t)^{|\Gamma|}\partial^{\beta'-\mu}\partial_{x_n}^{2+\beta_n+|\Gamma|}f(y',L(y,t))\prod_{\gamma\in\Gamma}\partial^\gamma X(y').$$ It follows from the triangle inequality that $$|\partial^\beta H(y)|\leq C\sum_{\mu\leq\beta'}\sum_{\Gamma\in P(\mu)}\int_0^1t^{\beta_n}(1-t)^{|\Gamma|+1}|\partial^{\beta'-\mu}\partial_{x_n}^{2+\beta_n+|\Gamma|}f(y',L(y,t))|dt\prod_{\gamma\in\Gamma}|\partial^\gamma X(y')|.$$ Recall from estimate *(4)* of Lemma 23 that $|\partial^\gamma X(y')|\leq Cr(y)^{1-|\gamma|}$. Additionally, Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"} together with slow variation of $r$ gives $|\partial^{\beta'-\mu}\partial_{x_n}^{2+\beta_n+|\Gamma|}f(y',L(y,t))|\leq Cr(y)^{k+\alpha-2-|\Gamma|-|\beta|+|\mu|}$, so $$|\partial^\beta H(y)|\leq C\sum_{\mu\leq\beta'}\sum_{\Gamma\in P(\mu)}r(y)^{k+\alpha-2-|\Gamma|-|\beta|+|\mu|}\prod_{\gamma\in\Gamma}r(y)^{1-|\gamma|}=Cr(y)^{k+\alpha-2-|\beta|}.$$
Next, using these bounds we can estimate derivatives of $\sqrt{H}$ on $2Q$. To this end we employ the chain rule, the lower bound $H(y)\geq Cr(y)^{k-2+\alpha}$, and the derivative bound above to estimate $$|\partial^\beta \sqrt{H(y)}| =\bigg|\sum_{\Gamma\in P(\beta)}C_{\beta,\Gamma}H(y)^{\frac{1}{2}-|\Gamma|}\prod_{\gamma\in\Gamma}\partial^\gamma H(y)\bigg|\leq Cr(y)^{\frac{k+\alpha}{2}-1-|\beta|}.$$ Further, estimates *(1)* and *(4)* of Lemma 23 show that $|\partial^\beta(y_n-X(y'))|\leq Cr(y)^{1-|\beta|}$ on $2Q$, respectively when $|\beta|=0$ and $|\beta|>0$. Using the product rule we can thus bound derivatives of $\sqrt{f(y)-F(y)}=(y_n-X(y'))\sqrt{H(y)}$ by writing $$\partial^\beta\sqrt{f(y)-F(y)}=\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}\partial^{\beta-\gamma}(y_n-X(y'))\partial^\gamma\sqrt{H(y)}.$$ Employing the triangle inequality and the derivative bounds computed above, we find that $$|\partial^\beta\sqrt{f(y)-F(y)}|\leq C\sum_{\gamma\leq\beta}|\partial^{\beta-\gamma}(y_n-X(y'))||\partial^\gamma\sqrt{H(y)}|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|}.$$ Since $y\in 2Q$ was arbitrary, estimate *(1)* follows.
For estimates *(2)* and *(3)* we once again denote by $\Lambda$ the integer part of $\frac{k}{2}$, and set $\lambda=\frac{k+\alpha}{2}-\Lambda$ so that the claimed estimates will follow if we can show that $[\partial^\beta\sqrt{f-F}]_{\lambda}(y)\leq Cr(y)^{\Lambda-|\beta|}$. To this end we observe that if $w,z\in 2Q$, then the mean value theorem and estimate *(1)* give $$|\partial^\beta\sqrt{f(w)-F(w)}-\partial^\beta\sqrt{f(z)-F(z)}|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|-1}|w-z|\leq Cr(y)^{\Lambda-|\beta|}|w-z|^\lambda.$$ It follows by taking a limit supremum as $w,z\rightarrow y$ that $[\partial^\beta\sqrt{f-F}]_{\lambda}(y)\leq Cr(y)^{\Lambda-|\beta|}$. ◻
It remains to establish derivative and local semi-norm estimates for $F$ before we can proceed to extend the local results of this section to global ones; these estimates follow from the properties of $X$ established in Lemma 23.
**Lemma 25**. *Assume that the hypotheses of Lemma 23 hold, and define $X$ and $F$ as above. If $F(y)=f(y',X(y'))$ then for every $y\in 2Q$ the following estimates hold:*
- *$|\partial^\beta F(y)|\leq Cr(y)^{k+\alpha-|\beta|}$,*
- *$[\partial^\beta F]_{\alpha}(y)\leq Cr(y)^{k-|\beta|}$.*
*Proof.* Given a multi-index $\beta$ of order $|\beta|\leq k$, we compute $\partial^\beta F$ by first writing $\beta=\mu+\rho$, for a multi-index $\rho\leq \beta$ with $|\rho|=1$, and with $\mu=\beta-\rho$. It follows from the definition of $X$ that $$\partial^\beta F(y')=\partial^\mu[\partial^\rho f(y',X(y'))+\partial_{x_n}f(y',X(y'))\partial^\rho X(y')]=\partial^\mu[\partial^\rho f(y',X(y'))].$$ To bound the $\mu$-derivative on the right-hand side, we employ the chain rule to get $$\partial^\beta F(y')=\partial^\mu[\partial^\rho f(y',X(y'))]=\sum_{\eta\leq\mu }\sum_{\Gamma\in P(\eta)}C_{\mu,\Gamma}(\partial^{\beta-\eta}\partial^{|\Gamma|}_{x_n}f(y',X(y')))\prod_{\gamma\in\Gamma}\partial^\gamma X(y').$$ Then, observe by the triangle inequality that $$|\partial^\beta F(y')|\leq C\sum_{\eta\leq\mu }\sum_{\Gamma\in P(\eta)}|\partial^{\beta-\eta}\partial^{|\Gamma|}_{x_n}f(y',X(y'))|\prod_{\gamma\in\Gamma}|\partial^\gamma X(y')|.$$ Since $|\mu|=|\beta|-1\leq k-1$, for each $\gamma$ in the product above, we have that $|\partial^\gamma X(y')|\leq Cr(y)^{1-|\gamma|}$ on $2Q'$ by estimate *(4)* of Lemma 23. Similarly, Theorem [Theorem 18](#thm:cauchylike){reference-type="ref" reference="thm:cauchylike"} and slow variation of $r$ allow us to bound the derivatives of $F$ above uniformly by powers of $r(y)$ to get property *(1)*: $$|\partial^\beta F(y')|\leq C\sum_{\eta\leq\mu }\sum_{\Gamma\in P(\eta)}r(y)^{k+\alpha-|\beta|+|\eta|-|\Gamma|}\prod_{\gamma\in\Gamma}r(y)^{1-|\gamma|}=Cr(y)^{k+\alpha-|\beta|}.$$
For estimate *(2)*, we take $w',z'\in 2Q'$ and write $\tilde{w}=(w',X(w'))$. As above, for some multi-index $\mu\leq\beta$ of order $|\beta|-1$ we can expand $\partial^\beta F$ as follows: $$\partial^\beta F(w')-\partial^\beta F(z')= \sum_{0\leq\eta\leq\mu}\sum_{\Gamma\in P(\eta)}C_{\beta,\Gamma}\bigg(\partial^{\beta-\eta}\partial^{|\Gamma|}_{x_n}f(\tilde{w})\prod_{\gamma\in\Gamma}\partial^\gamma X(w')-\partial^{\beta-\eta}\partial^{|\Gamma|}_{x_n}f(\Tilde{z})\prod_{\gamma\in\Gamma}\partial^\gamma X(z')\bigg).$$ From here, an argument identical to that used to prove estimate *(5)* of Lemma 23 shows that $[\partial^\beta F]_{\alpha}(y')\leq Cr(y)^{k-|\beta|}$ for every derivative of order $|\beta|\leq k$ and every $y'\in 2Q'$, giving *(2)*. ◻
Later we will require an appropriate extension by zero of $F$ to all of $\mathbb{R}^{n-1}$. Thankfully, such a function is fairly easy to construct. Recall that $\lambda<\frac{3}{2}$ is a fixed parameter from Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}.
**Lemma 26**. *There exists a non-negative function in $C^{k,\alpha}(\mathbb{R}^{n-1})$ which agrees with $F$ on $\lambda Q'$.*
*Proof.* Let $\phi:\mathbb{R}\rightarrow\mathbb{R}$ be a smooth non-negative function supported in $[-1,1]$ such that $\phi=1$ on $[-\frac{3}{4},\frac{3}{4}]$. Given $y'\in 2Q'$ define $$\psi(y')=\prod_{j=1}^{n-1}\phi\bigg(\frac{y_j-x_j}{\ell(Q)}\bigg),$$ so that if $y'\in\lambda Q'$ then for each $j\in\{1,\dots,n-1\}$ we have $|y_j-x_j|\leq \frac{\lambda}{2}\ell(Q)\leq \frac{3}{4}\ell(Q)$ (recall that $\lambda<\frac{3}{2}$) and $\phi((y_j-x_j)/\ell(Q))=1$. It follows that $\psi=1$ on $\lambda Q'$. Additionally, if $|x_j-y_j|>\ell(Q)$ for any $j\in\{1,\dots,n-1\}$ then $\phi((y_j-x_j)/\ell(Q))=0$, meaning that $\psi$ is supported in $2Q'$.
It remains to show that $\psi F\in C^{k,\alpha}(\mathbb{R}^{n-1})$. The argument from the proof of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} shows that $|\partial^\beta\psi(y')|\leq Cr(y')^{-|\beta|}$ and $[\partial^\beta\psi]_\alpha(y')\leq Cr(y')^{-|\beta|-\alpha}$ for $y'\in 2Q'$. Combining these estimates on $\psi$ with the inequalities from Lemma 25 and employing the sub-product rule, we get $$[\partial^\beta(\psi F)]_{\alpha}(y')\leq\sum_{\gamma\leq\beta}\binom{\beta}{\gamma}([\partial^{\beta-\gamma}\psi]_{\alpha}(y')|\partial^\gamma F(y')|+[\partial^{\beta-\gamma}F]_{\alpha}(y')|\partial^\gamma \psi(y')|)\leq Cr(y')^{k-|\beta|}.$$ It follows that if $|\beta|=k$ then $[\partial^\beta(\psi F)]_{\alpha}(y')\leq C$ uniformly in $2Q'$, and since this holds trivially outside of $2Q'$ we conclude that $\psi F\in C^{k,\alpha}(\mathbb{R}^{n-1})$ as claimed. ◻
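The cutoff $\psi$ constructed in the proof above is simply a tensor product of one-dimensional bumps rescaled to the cube. The short numerical sketch below (in Python; the helper names `smooth_step`, `phi` and `psi` are ours, and the snippet plays no role in the proofs) shows one standard way to realise such a function in practice.

```python
import numpy as np

def smooth_step(t):
    """C^infinity function equal to 0 for t <= 0 and to 1 for t >= 1."""
    t = np.asarray(t, dtype=float)
    num = np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)
    den = num + np.where(1 - t > 0, np.exp(-1.0 / np.maximum(1 - t, 1e-300)), 0.0)
    return num / den

def phi(t):
    """Smooth bump on R: equal to 1 on [-3/4, 3/4] and to 0 outside [-1, 1]."""
    return smooth_step(4.0 * (1.0 - np.abs(t)))

def psi(y, center, side):
    """Tensor-product cutoff for the cube with the given center and side length:
    equal to 1 where |y_i - center_i| <= (3/4)*side for every i, and
    supported where |y_i - center_i| <= side."""
    y = np.atleast_2d(np.asarray(y, dtype=float))
    return np.prod(phi((y - np.asarray(center, dtype=float)) / side), axis=1)

# Example: a cutoff adapted to the square of side length 1 centred at the origin.
points = np.array([[0.0, 0.0], [0.9, 0.1], [1.2, 0.0]])
print(psi(points, center=[0.0, 0.0], side=1.0))  # -> [1.0, value in (0, 1), 0.0]
```

Any smooth $\phi$ with this plateau and support behaviour serves equally well; the quotient in `smooth_step` is just the textbook construction of a smooth transition function.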
This completes our discussion of localized results restricted to a single dyadic cube. Now we consider the function $\psi_j^2f$, where $\psi_j$ is one of the functions given by Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}. Our primary result concerning this function is the following lemma; in what follows, given a cube $Q_j$ we denote its center by $x_j$ and we write $r_j=\inf_{Q_j}r$.
**Lemma 27**. *Let $f\in C^{k,\alpha}(\mathbb{R}^n)$ be non-negative, and let $\psi_j$ be one of the partition functions from Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} supported in $\lambda Q_j$. If $f(x_j)\geq \omega\nu r_j^{k+\alpha}$ then the following estimates hold in $\mathbb{R}^n$:*
- *$|\partial^\beta[\psi_j\sqrt{f}](y)|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|}$,*
- *$[\partial^\beta(\psi_j\sqrt{f})]_{\frac{\alpha}{2}}(y)\leq Cr(y)^{\frac{k}{2}-|\beta|}$ if $k$ is even,*
- *$[\partial^\beta(\psi_j\sqrt{f})]_{\frac{1+\alpha}{2}}(y)\leq Cr(y)^{\frac{k-1}{2}-|\beta|}$ if $k$ is odd.*
*If $f(x_j)<\omega\nu r_j^{k+\alpha}$ and the hypotheses of Lemma 23 hold, then for $F$ as in Lemma 24 we have*
- *$|\partial^\beta[\psi_j\sqrt{f-F}](y)|\leq Cr(y)^{\frac{k+\alpha}{2}-|\beta|}$,*
- *$[\partial^\beta(\psi_j\sqrt{f-F})]_{\frac{\alpha}{2}}(y)\leq Cr(y)^{\frac{k}{2}-|\beta|}$ if $k$ is even,*
- *$[\partial^\beta(\psi_j\sqrt{f-F})]_{\frac{1+\alpha}{2}}(y)\leq Cr(y)^{\frac{k-1}{2}-|\beta|}$ if $k$ is odd.*
The proofs of these estimates are straightforward; they rely only on the estimates from Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}, Lemma [Lemma 21](#lem:local1){reference-type="ref" reference="lem:local1"} and Lemma [Lemma 25](#lem:implicitest){reference-type="ref" reference="lem:implicitest"}, together with the product and sub-product rules introduced in Section 2. We omit them for brevity. The lemma above gives us the following result.
**Corollary 28**. *The functions $\psi_j\sqrt{f}$ and $\psi_j\sqrt{f-F}$ belong to $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$.*
# Proof of Theorem [Theorem 1](#main 1){reference-type="ref" reference="main 1"} {#proof-of-theorem-main-1}
Using induction on $n$, we now show that if $k=2$ or $k=3$ then any non-negative $f\in C^{k,\alpha}(\mathbb{R}^{n})$ is a SOS of at most $s_n$ functions $g_1,\dots,g_{s_n}$ in [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"}, for a number $s_n\geq m(n,k,\alpha)$ which we estimate after our construction. Further, we show that for each $j\in\{1,\dots,s_n\}$ the function $g_j$ satisfies
$$|\partial^\beta g_j(y)|\leq Cr_f(y)^{\frac{k+\alpha}{2}-|\beta|}\qquad\text{for $y\in\mathbb{R}^n$,}$$ [\[induc1\]]{#induc1 label="induc1"}
$$[\partial^\beta g_j]_{\frac{\alpha}{2}}(y)\leq Cr_f(y)^{\frac{k}{2}-|\beta|}\qquad\text{if $k$ is even,}$$ [\[induc2\]]{#induc2 label="induc2"}
$$[\partial^\beta g_j]_{\frac{1+\alpha}{2}}(y)\leq Cr_f(y)^{\frac{k-1}{2}-|\beta|}\qquad\text{if $k$ is odd.}$$ [\[induc3\]]{#induc3 label="induc3"}
The function $r_f$ is as in [\[eq:controlfunc\]](#eq:controlfunc){reference-type="eqref" reference="eq:controlfunc"}, and we now include the subscript to emphasize dependence on $f$.
For the base case $n=1$, we show that every non-negative $f\in C^{k,\alpha}(\mathbb{R})$ is a finite sum of half-regular squares for $k=2$ or $k=3$ and $0<\alpha\leq 1$. A stronger result appears in [@BonyFr], but it omits the derivative and semi-norm estimates that we require for our inductive argument, so we furnish a proof with the required estimates. We emphasize that the one-dimensional result is not new, and our decomposition requires $s_1=4$ functions. For our later estimates of $s_n$, we use the optimal constant $m(1,k,\alpha)=2$ established in [@BonyFr].
Fixing a non-negative function $f\in C^{k,\alpha}(\mathbb{R})$ and using Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}, we construct a partition of unity using the control function $r_f$ defined in [\[eq:controlfunc\]](#eq:controlfunc){reference-type="eqref" reference="eq:controlfunc"}, so that we may write $f$ as follows: $$f=\sum_{j=1}^\infty\psi_j^2f.$$ Given $j\in\mathbb{N}$, we observe that at the center $x_j$ of $Q_j$, either $f(x_j)\geq \omega\nu r_j^{k+\alpha}$ or $f(x_j)<\omega\nu r_j^{k+\alpha}$.
In the first case, Lemma 27 and Corollary 28 show that $\psi_j\sqrt{f}$ is a $C^\frac{k+\alpha}{2}(\mathbb{R})$ function that satisfies [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} through [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}. On the other hand, if $f(x_j)<\omega\nu r_j^{k+\alpha}$ then the argument of Lemma 23 shows that there exists a unique local minimum of $f$ at a point $X_j$ near $x_j$, and taking $F_j=f(X_j)$, Corollary 28 shows that $$\psi_j\sqrt{f-F_j}\in C^\frac{k+\alpha}{2}(\mathbb{R}).$$ Moreover, Lemma 27 shows that $\psi_j\sqrt{f-F_j}$ has the required properties [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} through [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}. Thus $\psi_j^2f$ can be decomposed into a sum of two squares, $$\psi_j^2f=(\psi_j\sqrt{f-F_j})^2+(\psi_j\sqrt{F_j})^2.$$
Next we must show that $\psi_j\sqrt{F_j}\in C^\frac{k+\alpha}{2}(\mathbb{R})$ and that it satisfies the required pointwise estimates; this is easy, since $F_j$ is constant. By slow variation of $r_f$ we have $F_j\leq Cr_j^{k+\alpha}$, and it follows by property *(3)* of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} that if $y\in 2 Q_j$ then the following estimate holds: $$|\partial^\beta[\psi_j\sqrt{F_j}](y)|=\sqrt{F_j}|\partial^\beta\psi_j(y)|\leq Cr_j^{\frac{k+\alpha}{2}-|\beta|}\leq Cr_f(y)^{\frac{k+\alpha}{2}-|\beta|}.$$ This bound trivially holds outside the support of $\psi_j$, meaning it holds everywhere as required.
Next we prove that $\psi_j\sqrt{F_j}$ satisfies the semi-norm estimates [\[induc2\]](#induc2){reference-type="eqref" reference="induc2"} and [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}. To deal with the cases in which $k$ is odd and even simultaneously, we once again define $\Lambda$ as the integer part of $\frac{k}{2}$ and set $\lambda=\frac{k+\alpha}{2}-\Lambda$, so that it suffices to prove that $[\partial^\beta(\psi_j\sqrt{F_j})]_\lambda(y)\leq Cr(y)^{\Lambda-|\beta|}$. This follows at once from property *(4)* of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} together with slow variation of $r_f$: $$[\partial^\beta(\psi_j\sqrt{F_j})]_\lambda(y)=\sqrt{F_j}[\partial^\beta\psi_j]_\lambda(y)\leq Cr_j^\frac{k+\alpha}{2}r_j^{-\lambda-|\beta|}=Cr_j^{\Lambda-|\beta|}\leq Cr(y)^{\Lambda-|\beta|}.$$ Consequently, $\psi_j\sqrt{F_j}$ has the properties [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} through [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"} required for our induction.
Regardless of the behaviour of $f$ at $x_j$, we see now that we can write $\psi_j^2f$ as a SOS of at most two functions which satisfy the pointwise estimates [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} through [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}. After relabelling, we can therefore write $$\label{eq:infsum}
f=\sum_{j=1}^\infty h_j^2$$ for functions $h_j\in C^\frac{k+\alpha}{2}(\mathbb{R})$. By estimate *(5)* of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}, we can identify $s_1=4$ sub-collections of functions in the sum above which enjoy pairwise disjoint supports. Fixing any of these sub-collections and indexing its functions by $S\subseteq\mathbb{N}$ we can define $$\label{eq:recombsum}
g=\sum_{j\in S}h_j,$$ so that $g\in C^\frac{k+\alpha}{2}(\mathbb{R})$ and it satisfies [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} through [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}, since the functions in the sum enjoy pairwise-disjoint supports and their Hölder norms are uniformly bounded. Calling the functions associated to each sub-collection $g_1,\dots,g_4$, we can write $f=g_1^2+g_2^2+g_3^2+g_4^2$, again since the functions in each sub-collection enjoy disjoint supports. This completes the base case.
Now we proceed with the inductive step, assuming for our inductive hypothesis that for every non-negative function $h\in C^{k,\alpha}(\mathbb{R}^{n-1})$ satisfying [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"}, there exist $g_1,\dots,g_{s_{n-1}}$ satisfying estimates [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} through [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"} for which $h=g_1^2+\cdots+g_{s_{n-1}}^2$. We now let $f\in C^{k,\alpha}(\mathbb{R}^n)$ be non-negative, and using Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} we form the partition of unity induced by $r_f$ to write $$f=\sum_{j=1}^\infty\psi_j^2f.$$ An identical argument to that employed for the base case shows that either $\psi_j\sqrt{f}$ satisfies the required derivative estimates and belongs to the half-regular Hölder space, or we can write $$\psi_j^2f=(\psi_j\sqrt{f-F_j})^2+\psi_j^2F_j.$$ By Lemma 27 and Corollary 28, the first function on the right-hand side above satisfies [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"}, [\[induc2\]](#induc2){reference-type="eqref" reference="induc2"} and [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}, and it belongs to the half-regular Hölder space.
The remainder function $F_j$ is non-negative and by Lemma 26 it can be extended from the support of $\psi_j$ to a function in $C^{k,\alpha}(\mathbb{R}^{n-1})$ which agrees with $F_j$ on this support set. Thus we can identify $F_j$ with its extension to $\mathbb{R}^{n-1}$, and by the inductive hypothesis we can write $$\label{eq:inductivedecomps}
F_j=\sum_{\ell=1}^{s_{n-1}}g_\ell^2$$ for $g_1,\dots,g_{s_{n-1}}\in C^\frac{k+\alpha}{2}(\mathbb{R}^{n-1})$ which satisfy the estimates $|\partial^\beta g_\ell(y')|\leq Cr_{F_j}(y')^{\frac{k+\alpha}{2}-|\beta|}$ and $[\partial^\beta g_\ell]_{\lambda}(y')\leq Cr_{F_j}(y')^{\Lambda-|\beta|}$, where as usual $\Lambda=\lfloor\frac{k}{2}\rfloor$ and $\lambda=\frac{k+\alpha}{2}-\Lambda$. Further, the estimates proved in Lemma 25 show for $y\in\lambda Q_j$ that $$r_{F_j}(y')=\max\{F_j(y')^\frac{1}{k+\alpha},\sup_{|\xi|=1}[\partial^2_\xi F_j(y')]_+^\frac{1}{k-2+\alpha}\}\leq Cr_f(y),$$ meaning that on the support of $\psi_j$ the estimates for $g_1,\dots,g_{s_{n-1}}$ hold when $r_{F_j}$ is replaced by $r_f$. Using these refined estimates and the derivative estimates of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} in the sub-product rule, we find for each $\ell\in\{1,\dots,s_{n-1}\}$ that $\psi_jg_\ell$ satisfies [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"} and [\[induc2\]](#induc2){reference-type="eqref" reference="induc2"} or [\[induc3\]](#induc3){reference-type="eqref" reference="induc3"}, depending on the parity of $k$. It follows that we can write $\psi_j^2F_j$ as a sum of at most $s_{n-1}$ half-regular squares.
In summary, for each $j\in\mathbb{N}$ we can write $\psi_j^2f$ as a sum of at most $s_{n-1}+1$ squares in $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$. Combining the squares obtained for each $j$ and relabelling, we see now that we may write $$f=\sum_{j=1}^\infty h_j^2,$$ for $h_j$ satisfying [\[induc1\]](#induc1){reference-type="eqref" reference="induc1"}-[\[induc3\]](#induc3){reference-type="eqref" reference="induc3"} everywhere. By property *(4)* of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}, we can partition the functions in the sum above into $(4^n-2^n)(s_{n-1}+1)$ sub-collections of functions which enjoy pairwise disjoint supports. Recombining and relabelling these functions exactly as we did in the one-dimensional setting, we see that we can set $s_n=(4^n-2^n)(s_{n-1}+1)$ to write $$f=\sum_{j=1}^{s_n}g_j^2$$ for functions $g_1,\dots,g_{s_n}$ which inherit the required differential inequalities. Finally, since $f$ was any non-negative $C^{k,\alpha}(\mathbb{R}^n)$ function satisfying [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"}, this completes our inductive step. This shows that if $k=2$ or $k=3$ then every non-negative $C^{k,\alpha}(\mathbb{R}^n)$ function is a sum of half-regular squares.
It remains to bound $s_n$ to obtain an estimate for $m(n,k,\alpha)$ when $k=2$ or $k=3$. The work in [@BonyFr] shows that $s_1=m(1,k,\alpha)=2$ for $k\geq 2$, and from estimate *(5)* of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} we see that $s_2=9(s_1+1)=27$. For $n\geq 3$ we have the recursion $s_{n}=(4^n-2^n)(s_{n-1}+1)$, from which it follows upon iterating that $$s_n=s_2\prod_{\ell=3}^n(4^\ell-2^\ell)+\sum_{j=0}^{n-3}\prod_{\ell=n-j}^n(4^\ell-2^\ell)$$ when $n\geq 3$. To estimate the first product we observe that if $n\geq 3$ then $$\prod_{\ell=3}^n(4^\ell-2^\ell)=2^{n^2+n-6}\prod_{\ell=3}^n\bigg(1-\frac{1}{2^\ell}\bigg)\leq 2^{n^2+n}\bigg(\frac{7}{512}\bigg).$$ Similarly, for $0\leq j\leq n-3$ the product in the second sum is bounded by $2^{n+2nj-j^2}(2^{n-j}-1)$, and it follows that $$\sum_{j=0}^{n-3}\prod_{\ell=n-j}^n(4^\ell-2^\ell)\leq 2^n\sum_{j=0}^{n-3}2^{2nj-j^2}(2^{n-j}-1)=2^{n^{2}+n}\sum_{j=3}^{n}\frac{2^{j}-1}{2^{j^{2}}}\leq 2^{n^{2}+n}\sum_{j=3}^{\infty}\frac{2^{j}-1}{2^{j^{2}}}.$$ Combining these estimates and bounding the series, we find that if $n\geq 3$ then $s_n<2^{n^2+n-1.3844}$. This implies the claimed estimate for $m(n,k,\alpha)$, completing the proof of Theorem [Theorem 1](#main 1){reference-type="ref" reference="main 1"}.
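The recursion for $s_n$ and the final numerical bound are easy to sanity-check. The short script below (in Python; it is purely illustrative and the variable names are ours) iterates $s_n=(4^n-2^n)(s_{n-1}+1)$ from $s_2=27$ and compares $\log_2 s_n$ with the exponent $n^2+n-1.3844$.

```python
import math

# Iterate s_n = (4^n - 2^n)(s_{n-1} + 1) from s_2 = 27 and compare log2(s_n)
# with the exponent n^2 + n - 1.3844 appearing in the bound above.
s = {2: 27}
for n in range(3, 11):
    s[n] = (4**n - 2**n) * (s[n - 1] + 1)

for n in range(3, 11):
    exponent = n * n + n - 1.3844
    print(n, math.log2(s[n]) < exponent, round(math.log2(s[n]), 4), round(exponent, 4))
```

The margin is smallest at $n=3$ and grows with $n$.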
In passing, we note that if $G_n$ is the infinite graph encountered in the proof of Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"} in $n$ dimensions, then we have $m(n,k,\alpha)\leq t_n$ where $t_1=2$ and $t_{n}=\chi(G_n)(t_{n-1}+1)$ for $n\geq 2$. Thus better estimates of $\chi(G_n)$ would yield improved bounds on the decomposition size; we leave such improvements for future work.
# Proof of Theorem [Theorem 3](#main 3){reference-type="ref" reference="main 3"} {#proof-of-theorem-main-3}
After rescaling if necessary, the differential inequalities [\[diffeqs\]](#diffeqs){reference-type="eqref" reference="diffeqs"} imply that [\[eq:needed\]](#eq:needed){reference-type="eqref" reference="eq:needed"} holds. It follows that on each cube $Q_j$ given by Theorem [Theorem 17](#thm:party){reference-type="ref" reference="thm:party"}, either $\psi_j\sqrt{f}$ is half-regular, or $F_j\in C^{k,\alpha}(\mathbb{R})$ can be defined as above. By the result of Bony [@BonyFr], $F_j$ is a sum of squares of at most two half-regular functions, meaning that even if we cannot conclude that $\psi_j\sqrt{f}$ is half-regular, we can instead write $\psi_j^2f=(\psi_j\sqrt{f-F_j})^2+h_1^2+h_2^2,$ where each function on the right-hand side belongs to $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$. It follows that $f$ is an infinite sum of half-regular squares, and recombining as in earlier cases we see that it takes at most $\chi(G_2)(m(1,k,\alpha)+1)=27$ squares to represent $f$.
# Connection to Polynomial SOS {#sec:poly}
It is shown by Hilbert [@Hilbert] that if $n\geq 3$ and $k\geq 6$, or if $n\geq 4$ and $k\geq 4$, then there exist non-negative homogeneous polynomials (also called forms) in $n$ variables of degree $k$ which are not sums of squares of polynomials. Such polynomials are in $C^{k,\alpha}(\mathbb{R}^n)$ for any $\alpha>0$, since their $k^{\mathrm{th}}$ order derivatives are constant. Momentarily, we adapt an argument from [@BonyEn] to show that no such polynomial is a SOS of functions in [\[halfreg\]](#halfreg){reference-type="eqref" reference="halfreg"}. Thus, if $n$ and $k$ are both large then there exist non-negative $C^{k,\alpha}(\mathbb{R}^n)$ functions which are not sums of half-regular squares. This fact is well-known; various examples appear in [@BonyEn; @SOS_I].
A second connection between non-decomposable polynomials and sums of squares appears to be less widely appreciated. Denoting by $F(n,k)$ the real vector space of forms with real coefficients in $n$ variables of degree $k$, it is shown by Choi, Lam & Reznick [@CLR] that the Pythagoras number of $F(n,k)$ is bounded below by an explicit constant $c(n,k)$. In other words, there exists $P\in F(n,k)$ which is a SOS of polynomials, but $P$ is a sum of no fewer than $c(n,k)$ squares. For $\alpha>0$ we have $P\in \Sigma(n,k,\alpha)$, and as we will show momentarily $P$ cannot be written as a sum of fewer than $c(n,k)$ half-regular squares. It follows at once that $m(n,k,\alpha)\geq c(n,k)$ for any $\alpha\in(0,1]$.
Our main bridge between the algebraic perspective of polynomial SOS on the one hand, and the analytic perspective of Hölder spaces on the other, is the following result. It is modelled after [@BonyEn Lem. 1.3] and [@SOS_I Lem. 5.2], and adapts both to our more general situation.
**Lemma 29**. *Let $P$ be a non-negative homogeneous polynomial of even degree $k$, assume that $P$ is not a SOS of fewer than $m$ polynomials, and let $\Omega\subset\mathbb{R}^n$ be a bounded open set containing the origin. Given $C>0$, there exists $\delta>0$ such that $$\qquad\sup_{\Omega}\bigg|\sum_{j=1}^{m}g_j^2-P\bigg|<\delta\quad\implies\quad\sum_{j=1}^{m}\|g_j\|_{C_b^\frac{k+\alpha}{2}(\overline{\Omega})}>C.$$ In other words, if $P$ is a SOS of $m$ functions on $\Omega$, then at least one does not belong to $C_b^\frac{k+\alpha}{2}(\overline{\Omega})$.*
*Proof.* Fix $C>0$ and assume to the contrary that for all $\delta>0$, there exist $g_{1},\dots,g_{m}$ such that $$\sum_{j=1}^m\|g_{j}\|_{C_b^\frac{k+\alpha}{2}(\overline{\Omega})}\leq C\qquad\mathrm{and}\qquad\sup_{\Omega}\bigg|\sum_{j=1}^mg_{j}^2-P\bigg|<\delta.$$ For each $\ell\in\mathbb{N}$ choose $g_{1,\ell},\dots,g_{m,\ell}$ that satisfy the inequalities above with $\delta=\ell^{-1}$. Fix $j$ and consider the sequence $\{g_{j,\ell}\}_{\ell}$, which is uniformly bounded since $\|g_{j,\ell}\|_{C_b^\frac{k+\alpha}{2}(\overline{\Omega})}\leq C$ for each $\ell$.
The embedding $C_b^\frac{k+\alpha}{2}(\overline{\Omega})\hookrightarrow C_b^\frac{k}{2}(\overline{\Omega})$ is compact, so there exists a subsequence of $\{g_{j,\ell}\}_{\ell}$ that converges uniformly to a $\frac{k}{2}$-times continuously differentiable function $g_j$ on $\overline{\Omega}$. Passing to this subsequence if necessary, we see that pointwise in $\Omega$, $$\bigg|\sum_{j=1}^mg_j^2-P\bigg|= \lim_{\ell\rightarrow\infty}\bigg|\sum_{j=1}^mg_{j,\ell}^2-P\bigg|\leq\lim_{\ell\rightarrow\infty}\frac{1}{\ell}=0.$$ Thus in $\Omega$ we can represent $P$ as a sum of squares of $m$ functions in $C^\frac{k}{2}(\Omega)$.
Expanding on an argument of Bony et. al. [@BonyEn], we show that this is impossible. Near zero we can write $g_j=p_j+r_j$ for $p_j$ a polynomial of degree at most $\frac{k}{2}$ and $r_j=o(|x|^\frac{k}{2})$ as $x\rightarrow0$. Explicitly, from Taylor's Theorem we have $$p_j(y)=\sum_{0\leq |\gamma|\leq \frac{k}{2}}\frac{\partial^{\gamma} g_j(0)}{\gamma!}y^\gamma\quad\mathrm{and}\quad r_j(y)=\sum_{|\gamma|=\frac{k}{2}}\frac{|\gamma|}{\gamma!}y^\gamma\int_0^1(1-t)^{|\gamma|-1}(\partial^{\gamma}g_j(ty)-\partial^{\gamma}g_j(0))dt.$$ Taking $Q=p_1^2+\cdots+p_m^2$, it follows from these formulas that we can write $$P-Q=\sum_{j=1}^m2p_jr_j+\sum_{j=1}^mr_j^2=o(|x|^{\frac{k}{2}+d}),$$ where $d$ is the degree of the lowest degree monomial appearing in any $p_j$. *A priori* we know only that $d\geq 0$, however we can refine an estimate for $d$ as follows. Denoting by $r$ the function on the right-hand side above, we have that $P=Q+r$ and it follows from homogeneity of $P$ that for any $\lambda>0$ and $x\neq0$, $$\frac{\lambda^\frac{k}{2}P(x)}{|x|^\frac{k}{2}}=\frac{Q(\lambda x)}{\lambda^\frac{k}{2}|x|^\frac{k}{2}}+\frac{r(\lambda x)}{|\lambda x|^\frac{k}{2}}.$$ Taking a limit as $\lambda\rightarrow0$, the term on the left vanishes, and from the decay $r=o(|x|^\frac{k}{2})$ we get $$\frac{1}{|x|^\frac{k}{2}}\lim_{\lambda\rightarrow0}\frac{Q(\lambda x)}{\lambda^\frac{k}{2}}=0.$$ This implies that each monomial term in $Q$ has degree at least $\frac{k}{2}$, and since $Q$ is a sum of squares the lowest order terms of each $p_j$ cannot cancel, meaning that each monomial term of $p_j$ has degree at least $d\geq \frac{k}{4}$. It follows from the form of our remainders above that $P-Q=o(|x|^{\frac{k}{2}+\frac{k}{4}})$.
To iterate this argument, we now fix $x\neq 0$ and $\lambda>0$, and we use homogeneity of $P$ to write $$\frac{\lambda^\frac{k}{4}P(x)}{|x|^\frac{3k}{4}}=\frac{Q(\lambda x)}{\lambda^\frac{3k}{4}|x|^\frac{3k}{4}}+\frac{r(\lambda x)}{|\lambda x|^\frac{3k}{4}}.$$ It follows from the fact that $r=o(|x|^\frac{3k}{4})$ that by taking a limit in the identity above we get $$\frac{1}{|x|^\frac{3k}{4}}\lim_{\lambda\rightarrow0}\frac{Q(\lambda x)}{\lambda^\frac{3k}{4}}=0,$$ meaning that each monomial term in $Q$ has degree at least $\frac{3k}{4}$, and each monomial in each $p_j$ has degree at least $d\geq \frac{3k}{8}$. The remainder formula now shows that $P-Q=o(|x|^\frac{7k}{8})$. Iterating in this fashion, we see by induction that $P-Q=o(|x|^{k(1-2^{-j})})$ for each $j\in\mathbb{N}$, meaning that $P-Q=o(|x|^k)$ and $P-Q=0$ identically in some neighbourhood of the origin. But this means that $P$ is a sum of squares of $m$ forms, a contradiction. ◻
In case $P$ is not a sum of squares of any number of polynomials, we have the following.
**Corollary 30**. *Let $P$ be a non-negative homogeneous polynomial of even degree $k$ which is not a SOS of polynomials. Then $P$ is not a SOS of $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$ functions for $\alpha>0$.*
More important for our purposes is the following quantitative consequence.
**Corollary 31**. *Let $P$ be a non-negative polynomial of even degree $k$ which is not a SOS of fewer than $m$ polynomials. Then $P$ is not a SOS of fewer than $m$ functions in $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$ for $\alpha>0$.*
In passing, we note that Corollary [Corollary 30](#cor:notsos){reference-type="ref" reference="cor:notsos"} together with the existence result from Hilbert [@Hilbert] imply the following interesting result that resembles [@BonyEn Thm. 1.2 (a) & (c)].
**Theorem 32**. *Fix $\alpha>0$ and assume that $n\geq 3$ and $k\geq 6$, or that $n\geq 4$ and $k\geq 4$. Then there exist $C^{k,\alpha}(\mathbb{R}^n)$ functions (polynomials) which are not sums of squares of functions in $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$.*
*Remark:* Any polynomial is in $C^\infty(\mathbb{R}^n)$, so this result implies that if $n\geq 3$ then there exist non-negative functions in $C^\infty(\mathbb{R}^n)$ which are not sums of squares in $C^\infty(\mathbb{R}^n)$. However, if the degree of $P$ is larger than $k$ then $P\not\in C^{k,\alpha}(\mathbb{R}^n)$ for $\alpha\leq 1$, except when $P$ has degree $k+1$ in which case $P\in C^{k,1}(\mathbb{R}^n)$. This means that $C^\infty(\mathbb{R}^n)\not\subset C^{k,\alpha}(\mathbb{R}^n)$, motivating our use of $C^{k,\alpha}(\mathbb{R}^n)$ in the theorem above.
*Proof.* If $n$ and $k$ satisfy the prescribed inequalities and $k$ is even, then by [@Hilbert] there exists a non-negative homogeneous polynomial $P$ of degree $k$ which is not a sum of squares of polynomials. If $\beta$ is a multi-index of order $k$ then $\partial^\beta P$ is constant, meaning that $[\partial^\beta P]_{\alpha,\mathbb{R}^n}=0$ and $P\in C^{k,\alpha}(\mathbb{R}^n)$. By Corollary [Corollary 30](#cor:notsos){reference-type="ref" reference="cor:notsos"}, $P$ is not a sum of squares of half-regular functions.
If $k$ is odd, take $P$ as above with even degree $k-1$. If $|\beta|=k$ then $\partial^\beta P=0$, so once again $P\in C^{k,\alpha}(\mathbb{R}^n)$. But for any $\beta>0$ we recall that $P$ is not a SOS of functions in the space $C^{\frac{k-1}{2},\beta}(\mathbb{R}^n)$, and in particular $P$ is not a SOS in $C^{\frac{k-1}{2},\frac{1+\alpha}{2}}(\mathbb{R}^n)=C^\frac{k+\alpha}{2}(\mathbb{R}^n)$. ◻
# Proof of Theorem [Theorem 2](#main 2){reference-type="ref" reference="main 2"} {#proof-of-theorem-main-2}
To begin, we quote a result from [@CLR]. Recalling that $F(n,k)$ denotes the real vector space of degree $k$ forms in $n$ variables, and expanding the lower bound for $P(n,k)$ given in [@CLR Thm. 6.4], we obtain the following result.
**Theorem 33** (Choi, Lam, Reznick). *If $P(n,k)$ is the Pythagoras number of $F(n,k)$, then $$P(n,k)\geq 2^{n-1}\prod_{j=1}^{n-1}\frac{j+k}{2j+k}\geq 2^{n-1}\left(\frac{1+k}{1+k+n}\right)^{\frac{n}{2}}.$$*
This result implies that for each even $k$ and $n\in\mathbb{N}$, there exists a homogeneous polynomial $P$ of degree $k$ in $n$ variables which is a SOS of no fewer than $a$ polynomials, where $$a=2^{n-1}\prod_{j=1}^{n-1}\frac{j+k}{2j+k}.$$ For any $\alpha>0$ we have $P\in C^{k,\alpha}(\mathbb{R}^n)$, and since $P$ is a sum of squares of forms (which necessarily have degree $\frac{k}{2}$), we see also that $P$ is a SOS in $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$ and $P\in\Sigma(n,k,\alpha)$. On the other hand, Corollary [Corollary 31](#cor:notbigsos){reference-type="ref" reference="cor:notbigsos"} indicates that $P$ is not a SOS of fewer than $a$ functions in $C^\frac{k+\alpha}{2}(\mathbb{R}^n)$, meaning that $\ell(P)\geq a$. It follows that $$m(n,k,\alpha)=\sup\{\ell(f)\;:\;f\in \Sigma(n,k,\alpha)\}\geq \ell(P)\geq a,$$ giving the desired bound for even $k$. If $k$ is odd, it suffices to repeat the argument with a polynomial of degree $k-1$ and to replace $k$ with $k-1$ in the expression for $a$ above.
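To get a feel for the size of this lower bound, consider for instance $(n,k)=(3,4)$ and $(n,k)=(4,4)$ (a quick arithmetic check, not needed for the proof): $$2^{2}\cdot\frac{5}{6}\cdot\frac{6}{8}=\frac{5}{2},\qquad\qquad 2^{3}\cdot\frac{5}{6}\cdot\frac{6}{8}\cdot\frac{7}{10}=\frac{7}{2}.$$ Since $m(n,k,\alpha)$ takes integer values, this gives $m(3,4,\alpha)\geq 3$ and $m(4,4,\alpha)\geq 4$ for every $\alpha>0$.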
# Proof of Theorem [Theorem 4](#main 4){reference-type="ref" reference="main 4"} {#proof-of-theorem-main-4}
Fix non-negative $f\in C^{1,\alpha}(\mathbb{R}^n)$ and observe that if $x,h\in\mathbb{R}^n$ then a Taylor expansion gives $$0\leq f(x)+h\cdot\nabla f(x) +[\nabla f]_\alpha|h|^{1+\alpha}.$$ Since $f$ is non-negative, an identical estimate holds if we replace $h$ with $-h$, showing that $|h\cdot\nabla f(x)|\leq f(x) +[\nabla f]_\alpha|h|^{1+\alpha}$. Taking $h=\lambda\nabla f(x)$ for $\lambda>0$ gives $h\cdot\nabla f(x)=\lambda|\nabla f(x)|^2$ and $$\lambda|\nabla f(x)|^2\leq f(x) +\lambda^{1+\alpha}[\nabla f]_\alpha|\nabla f(x)|^{1+\alpha}.$$
Choosing $\lambda=(f(x)/\alpha[\nabla f]_\alpha)^\frac{1}{1+\alpha}/|\nabla f(x)|$ gives $|\nabla f(x)|\leq [\nabla f]_\alpha^\frac{1}{1+\alpha}f(x)^\frac{\alpha}{1+\alpha}(\alpha^\frac{1}{1+\alpha}+\alpha^{-\frac{\alpha}{1+\alpha}})$, from which the claimed inclusion follows by the mean value theorem.
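As a quick sanity check of the resulting gradient bound (not part of the original argument), take $n=1$, $\alpha=1$ and $f(x)=x^2$, so that $\nabla f(x)=2x$ and $[\nabla f]_1=2$. The estimate above reads $$|2x|\leq 2^{\frac{1}{2}}\,(x^2)^{\frac{1}{2}}\left(1^{\frac{1}{2}}+1^{-\frac{1}{2}}\right)=2\sqrt{2}\,|x|,$$ which indeed holds for every $x\in\mathbb{R}$.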
[^1]: University of Toronto, ON, Canada. sullivan\@math.utoronto.ca. The author was supported by the Britton Estate Scholarship at McMaster University and the Department of Mathematics at the University of Toronto.
| arxiv_math | {
"id": "2309.07275",
"title": "Sum of Squares Decompositions in H\\\"older Spaces",
"authors": "Sullivan Francis MacDonald",
"categories": "math.FA",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
In this paper we investigate the conjugation of Fourier Integral Operators (FIOs) associated to Gevrey phases and symbols and the corresponding semiclassical pseudodifferential operators (pdos) in the Gevrey class. We obtain an Egorov theorem compatible with Gevrey FIOs and real principal part pdos with Gevrey symbols. As a consequence, we obtain a justification of the usual microlocal WKB expansion for Gevrey pdos which are of real principal part at a point in the phase space, with the natural Gevrey subexponential asymptotics with respect to the semiclassical parameter.
address:
- LJAD, Université Côte-d'-Azur, 28 Parc Valrose, 06028 Nice cedex, France
- LJAD, Université Côte-d'-Azur, 28 Parc Valrose, 06028 Nice cedex, France
author:
- Richard Lascar
- Iván Moyano
title: Gevrey WKB method for Pseudodifferential Operators of real principal type
---
# Introduction {#sec:Introduction}
The use of Gevrey functions in the study of some linear and non-linear partial differential equations has experienced a significant development in recent years. In particular, the use of Gevrey norms has been instrumental in the context of some non-linear Cauchy problems that may be ill-posed in the usual Sobolev context [@GerardVaretMasmoudi] or the study of some singular limits difficult to describe rigorously using other topologies [@MouhotVillani; @BedrossianMasmoudiMouhot].
Let us recall that, as usual, for $N \in {\mathbb N}$ and $s>0$ a function $f\in C^{\infty}({\mathbb R}^N;{\mathbb C})$ is Gevrey of order $s$ (Gevrey-$s$ for short, or simply ${\mathcal G}^s$) whenever for every compact $K \subset {\mathbb R}^N$ the following holds: $$\exists C_{K} > 0, \quad \forall \alpha \in {\mathbb N}^N, \qquad \vert \partial^{\alpha} f \vert_{L^{\infty}(K)} \leq C_K^{\vert \alpha \vert +1} \alpha !^s.
\label{eq:defGevrey}$$ As can be seen from ([\[eq:defGevrey\]](#eq:defGevrey){reference-type="ref" reference="eq:defGevrey"}), the case $s=1$ coincides with the usual class of analytic functions, but the cases $s>1$ may contain smooth functions which are not analytic. Hence, the Gevrey class may be understood as a class of functions in between the analytic and the smooth context.
The use of pseudodifferential calculus associated to particular Gevrey classes, started by Boutet de Monvel and Kree in their seminal paper, [@BoutetKree Sect. 1], has also been important in the past, for example in connection with the study of propagation of Gevrey singularities of these pseudodifferential operators [@BLascar; @LascarLascarMelrose] or the diffraction of waves around an obstacle [@LebeauGevrey3]. More recently, the study of the FBI transform in the Gevrey context has raised some interesting questions related to the quantization in the complex plane of symbols admitting only quasi-holomorphic extensions which in fact are not unique [@HitrikLascarSjostrandZerzeri] and thus introduces new aspects with respect to the analytic framework [@SjostrandAsterisque; @HitrikSjostrand; @MelinSjostrand].
In this work we are concerned with the study of Gevrey pseudodifferential operators and the resulting WKB expansion in the Gevrey class. In what follows, we introduce some definitions of these Gevrey symbols and the related pseudodifferential operators and then state our main results. We work in a semiclassical framework (cf. [@Zworski]) involving a possibly small parameter $h>0$.
Let us introduce, for $n \in {\mathbb N}$, $m\in {\mathbb R}$ and $s>0$, the set ${\mathcal S}_s^m({\mathbb R}^n \times {\mathbb R}^n)$ of semiclassical Gevrey-$s$ symbols of order $m$. Following [@BoutetKree Sect. 1] we write $a \in {\mathcal S}^m_s$ if and only if for some $C>0$ $$\vert \partial_x^{\alpha} \partial_{\theta}^{\beta} a(x,\theta,h) \vert \leq C^{1 + \vert \alpha \vert + \vert \beta \vert} \alpha!^s \beta!^s h^{-m}
\label{eq:Gevrey symbol}$$ for all $(x,\theta)\in {\mathbb R}^n \times {\mathbb R}^n$ and $\alpha,\beta \in {\mathbb N}^n$. Observe that this condition is more precise than the usual definition of the class ${\mathcal S}^m_{1,0}$ (cf. for example [@Zworski Chapter 4]) because the constant $C$ in ([\[eq:Gevrey symbol\]](#eq:Gevrey symbol){reference-type="ref" reference="eq:Gevrey symbol"}) is uniform with respect to the multi-indices $\alpha,\beta$. As a consequence, we shall need a specific version of symbolic calculus adapted to the class ${\mathcal S}^m_s$ (in particular Proposition [\[lemma:composition\]](#lemma:composition){reference-type="ref" reference="lemma:composition"} below). Finally, recall that a pdo $A$ is elliptic at a point $(x_0,\theta_0) \in {\mathbb R}^n \times {\mathbb R}^n$ whenever its symbol $a$ satisfies $|a(x_0,\theta_0)| > 0$.
We consider as usual (cf. [@Zworski Chapter 4]) semiclassical pseudo-differential operators (pdo for short) defined as suitable extensions of $$a(x,hD,h)u(x) = \frac{1}{(2 \pi h)^n} \iint a(x,\theta,h) e^{\frac{i}{h}\theta \cdot (x-y)} u(y) \,\mathrm{d}y \,\mathrm{d}\theta, \qquad u \in \mathscr{S}({\mathbb R}^n),$$ for a given symbol $a \in {\mathcal S}^m_s$. One has $a(x,hD,h) = {\operatorname{Op}_h}(a)$ and also ${\operatorname{Op}_h}(a) = {\operatorname{Op}_1}(a_h)$ with $a_h(x,\theta) = a(x,h \theta)$. We shall use the notation ${\operatorname{Op}_h}(a) \in \Psi^m_s$ for such a pdo.
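For later use, note that the relation ${\operatorname{Op}_h}(a) = {\operatorname{Op}_1}(a_h)$ follows from a direct change of variables (a short verification, spelled out here for the reader's convenience): substituting $\theta = h\theta'$ in the oscillatory integral gives $${\operatorname{Op}_h}(a)u(x) = \frac{1}{(2\pi)^n}\iint a(x,h\theta')\, e^{i\theta'\cdot(x-y)}\,u(y)\,\mathrm{d}y\,\mathrm{d}\theta' = {\operatorname{Op}_1}(a_h)u(x).$$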
## Motivations, hypotheses and main results
The motivation of our work is twofold. On the one hand, the WKB method is known to hold true in the smooth ($\mathcal{C}^{\infty}$) and analytic categories, which raises the natural question of its validity with respect to the specific Gevrey asymptotics required in the Gevrey framework (in particular of the form ([\[eq:Gevrey small\]](#eq:Gevrey small){reference-type="ref" reference="eq:Gevrey small"})). Our Theorem [\[thm:WKB\]](#thm:WKB){reference-type="ref" reference="thm:WKB"} makes this point explicit at least microlocally. On the other hand, we obtain this microlocal WKB expansion as a consequence of Theorem [\[thm:Egorov1\]](#thm:Egorov1){reference-type="ref" reference="thm:Egorov1"}, which is a Gevrey version of the Egorov theorem, a central result in microlocal analysis, first published in [@Egorov], having its own independent interest. We state next our main hypotheses and the context of our results.
Let $P=P(x,hD_x,h)$ be a semiclassical Gevrey-$s$ pdo of order zero in ${\mathbb R}^n$, with $n\geq 2$. Let $p=p(x,\xi)$ be the principal symbol of $P$ and let $H_p$ be the associated Hamiltonian vector field. We assume the following hypotheses.
(H1)
: $p$ is real,
(H2)
: $dp(x_0,\xi_0) \not = 0$,
(H3)
: and $p(x_0,\xi_0) = 0$.
If $P=P(x,hD_x,h)$ satisfies hypothesis **(H1)**, **(H2)** and **(H3)**, it is customary to say that $P$ is of *real principal type* at the point $(x_0,\xi_0) \in {\mathbb R}^n \times {\mathbb R}^n$ (cf. [@EgorovBook Definition 3.1]).
The class of real principal type operators enjoys an important property allowing to reduce a general pdo to a canonical form. This is known as Egorov's theorem, which in the $C^{\infty}$ class states roughly the following (cf. [@EgorovBook Proposition 3.1] for instance, among other references such as [@Zworski Chapter 8], [@Eskin Section 62], [@LernerEgorov]): If $A = A(x,D_x)$ and $B = B(x,D_x)$ are two pdos of real principal part with the same principal symbol, then there exist elliptic pdos $R=R(x,D_x)$ and $S=S(x,D_x)$ of order zero such that $AR - SB$ is a smoothing operator. Of course, this theorem is valid for a large class of pseudo-differential (possibly semiclassical) operators, including those with Gevrey symbols (see below for a precise definition). On the other hand, if we assume further regularity properties on the class of symbols at hand, for instance Gevrey regularity (see [\[eq:defGevrey\]](#eq:defGevrey){reference-type="ref" reference="eq:defGevrey"} for a definition), we may expect to get more precise information on the canonical transformations and the pseudo-differential calculus involved. A first Gevrey version was given in [@Gramchev], but the result is not adapted to semiclassical pdos and in particular to the WKB asymptotics. In this paper we aim at proving a Gevrey version of Egorov's theorem which is sensitive to the small parameter $h$. We do this by using the classical WKB approach.
### Main results: Gevrey WKB expansion and Gevrey Egorov's theorem
Our main result is a microlocal semiclassical WKB expansion for real principal part operators in the Gevrey setting compatible with the usual required asymptotics in the Gevrey framework.
**Theorem 1** (Gevrey WKB expansion). *Let $n\geq 2$. Let $P=P(x,hD_x,h)$ be a semiclassical ${\mathcal G}^s$ pdo of order zero in ${\mathbb R}^n$ of symbol $p=p(x,\xi)$ and assume that $P$ is of real principal type at a point $(x_0,\xi_0) \in {\mathbb R}^n \times {\mathbb R}^n$. Let $S \subset {\mathbb R}^n$ be a real ${\mathcal G}^s-$hypersurface of ${\mathbb R}^n$ transversal to $H_p$ at $(x_0,\xi_0)$. Let $\varphi \in {\mathcal G}^s({\mathbb R}^n)$ be given. Assume that $$p(x,\varphi_x')= 0 \qquad \textrm{and} \qquad \xi_0 = \varphi_x'(x_0).$$ Under these hypotheses, one may solve the WKB problem near $x_0$, i.e.: if $a_0$ and $b$ are given symbols in ${\mathcal S}^0_s({\mathbb R}^n)$, one may find some $a \in {\mathcal S}^0_s({\mathbb R}^n )$ such that $$\left\{ \begin{array}{cc}
\frac{1}{h} e^{-i \frac{\varphi}{h} } P\left( a e^{i \frac{\varphi}{h} } \right) - b = {\mathcal O}_{{\mathcal G}^s}(h^{\infty}), & \textrm{close to } x_0, \\
a\vert_S = a_0. &
\end{array} \right.$$ [\[thm:WKB\]]{#thm:WKB label="thm:WKB"}*
The smallness condition in Theorem [\[thm:WKB\]](#thm:WKB){reference-type="ref" reference="thm:WKB"} is defined in ([\[eq:Gevrey small\]](#eq:Gevrey small){reference-type="ref" reference="eq:Gevrey small"}). As mentioned before, we shall obtain Theorem [\[thm:WKB\]](#thm:WKB){reference-type="ref" reference="thm:WKB"} as a consequence of a particular version of Egorov's theorem in the Gevrey setting, according to our next result.
**Theorem 2** (Gevrey Egorov theorem). *Let $P=P(x,hD_x,h)$ be a ${\mathcal G}^s$ pdo with principal symbol $p=p(x,\xi)$. Assume that $P$ is of real principal type at some point $(x_0,\xi_0)\in T^* {\mathbb R}^n.$ If $$p(x_0,\xi_0)= 0, \quad \frac{\partial p}{\partial \xi}(x_0,\xi_0) \not = 0,$$ then $P$ is microlocally conjugate to $hD_{x_1}$ by ${\mathcal G}^s$ FIOs. [\[thm:Egorov1\]]{#thm:Egorov1 label="thm:Egorov1"}*
We give a precise definition of microlocal conjugation in Definition [\[def:microlocal conjugation\]](#def:microlocal conjugation){reference-type="ref" reference="def:microlocal conjugation"}, after having introduced the necessary objects for the reader's convenience.
**Theorem 3** (Egorov's theorem in the Gevrey class using FBI transforms). *Let $P = P(y, D_y)$ be a $G^s-$differential operator of degree $m$ such that near $(y_0,\eta_0) \in T^* {\mathbb R}^n \setminus \left\{ 0 \right\}$, the symbol $p(y,\eta)$ is of real principal type. Then, one can find a $G^s$-FBI transform as ([\[eq:FBI\]](#eq:FBI){reference-type="ref" reference="eq:FBI"}) and a canonical transform $\kappa$ such that $$hD_{\mathop{\rm Re}\nolimits x_1} \mathcal{T}u - h^m \mathcal{T} P(y,D_y) u = {\mathcal O}_{{\mathcal G}^{2s-1}}(h^{\infty}),$$ with $\kappa(y_0,\eta_0) = (x_0, \xi_0)$, $\xi_0 = \frac{2}{i} \partial_x \phi(x_0)$. [\[thm:Egorov\]]{#thm:Egorov label="thm:Egorov"}*
Observe that this theorem is valid for a large class of pseudo-differential (possibly semiclassical) operators, including Gevrey symbols (see below for a precise definition).
## Strategy and outline
In Section [2](#sec:Stationary){reference-type="ref" reference="sec:Stationary"} we recall some non-stationary and stationary asymptotics with complex Gevrey phase and Gevrey symbols and review the symbolic calculus adapted to Gevrey pdo. We also discuss formal symbols and Carleson's theorem.
In Section [3](#sec:FIO){reference-type="ref" reference="sec:FIO"} we introduce the class of Fourier integral operators associated to Gevrey phases and symbols that will be crucial in the proof of our main theorems.
In Section [4](#sec:ProofEgorov){reference-type="ref" reference="sec:ProofEgorov"} we prove Theorem [\[thm:Egorov1\]](#thm:Egorov1){reference-type="ref" reference="thm:Egorov1"} by using a microlocal WKB expansion, involving the usual steps: reduction to an evolution equation, construction of a suitable phase as a solution to an eikonal equation and finally the construction of a suitable symbol by imposing a hierarchy of transport equations.
## Notation
According to [@HitrikLascarSjostrandZerzeri], in this work we shall say that, for $s\geq 1$, a given function $g$, depending on a small parameter $h>0$, is a ${\mathcal G}^s$-small remainder, and we write $g = {\mathcal O}_{{\mathcal G}^s}(h^{\infty})$, if there exists $C>0$ such that for all $\beta \in {\mathbb N}^m$, $$\vert \partial^{\beta} g \vert \leq C^{1 + \vert \beta \vert } \beta!^s \exp \left( - \frac{h^{-\frac{1}{s}}}{C} \right).
\label{eq:Gevrey small}$$ As usual, we use $\mathscr{S}({\mathbb R}^n)$ to denote the space of Schwartz functions in ${\mathbb R}^n$.
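Let us record a simple observation (a short computation included here for completeness): a ${\mathcal G}^s$-small remainder is in particular ${\mathcal O}(h^{\infty})$ together with all its derivatives. Indeed, for every $N\in{\mathbb N}$ one has $\sup_{u>0} u^{Ns}e^{-u/C} = (NsC/e)^{Ns}$, so that, taking $u = h^{-\frac{1}{s}}$, $$\exp\left(-\frac{h^{-\frac{1}{s}}}{C}\right) \leq \left(\frac{NsC}{e}\right)^{Ns} h^{N}, \qquad 0<h\leq 1.$$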
# Stationary phase and symbolic calculus in the Gevrey setting {#sec:Stationary}
In this section we review some results concerning the symbolic calculus for Gevrey pseudodifferential operators, as some of them are difficult to find in the literature. We first review the non-stationary and stationary phase lemmas for Gevrey phases and symbols. As usual, this allows us to give a meaning to the composition of pseudodifferential operators and yields suitable asymptotic expansions for the resulting symbols. Of course, all the results reviewed below are classical in the $\mathscr{C}^{\infty}$ framework, according to classical references as [@Hormander; @LernerBook; @Eskin; @Zworski], but in the Gevrey context we need to justify that all small remainders have the particular form given in ([\[eq:Gevrey small\]](#eq:Gevrey small){reference-type="ref" reference="eq:Gevrey small"}), which sometimes necessitates some modifications with respect to the smooth or analytic cases.
## Non-stationary and Stationary phase method in the Gevrey setting
We consider a symbol $a = a(x,y;h)$ in ${\mathcal S}^{m_0}_s({\mathbb R}^n \times {\mathbb R}^m)$, $m_0 \in {\mathbb R}$, for a small parameter $0 < h \leq 1$. Let $f = f(x,y)$ be a phase function of class ${\mathcal G}^s({\mathbb R}^n \times {\mathbb R}^m)$. Consider $$\mathcal I_f (h,y) = \int_{{\mathbb R}^n} e^{\frac{i}{h} f(x,y)} a(x,y;h) \,\mathrm{d}x, \qquad y \in {\mathbb R}^m,
\label{eq:stationary phase}$$ for a compactly-supported (in $x$) symbol $a$. The goal of this section is to adapt the usual non-stationary and stationary phase asymptotics to the Gevrey setting.
### Non-stationary phase lemma
The following result is a non-stationary phase lemma adapted to the Gevrey asymptotics in $h$, which has an independent interest.
**Lemma 1** (Non-stationary phase). *Let $a = a(x,y;h)$ in ${\mathcal S}^{m_0}_s({\mathbb R}^n \times {\mathbb R}^m)$ be compactly supported and let $\operatorname{supp}a$ be its support. Assume $f \in {\mathcal G}^s({\mathbb R}^n \times {\mathbb R}^m)$ is such that*
1. *$\mathop{\rm Im}\nolimits f(x,y) \geq 0$,*
2. *$f_x'(x,y) \not = 0$ for every $(x,y) \in \operatorname{supp}a$.*
*Then, $\mathcal I_f (h, \cdot)$ is a ${\mathcal G}^s$-small remainder, i.e., there exists $C>0$ such that for all $\beta \in {\mathbb N}^m$, $$\vert \partial_y^{\beta} \mathcal I_f (h, \cdot) \vert \leq C^{1 + \vert \beta \vert } \beta!^s \exp \left( - \frac{h^{-\frac{1}{s}}}{C} \right).$$ [\[prop: Proof Non-stationary Phase\]]{#prop: Proof Non-stationary Phase label="prop: Proof Non-stationary Phase"}*
*Proof.* As usual, since $\operatorname{supp}a$ is compact, upon using a suitable partition of unity, we may reduce the result to a purely local situation. Thanks to the hypothesis on $f$, we may assume that $\operatorname{supp}a$ is a sufficiently small neighbourhood of a point around which it is possible to use a diffeomorphism $\kappa$ straightening the phase $f$. Indeed, if $\operatorname{supp}a$ is sufficiently small, let $(\tilde{x},\tilde{y}) = \kappa (x,y)$ with $\tilde{x} = (\tilde{x}_1, \tilde{x}')$ and let $\kappa$ be such that $$\tilde{x}_1 = f(x,y), \qquad \tilde{x}' = x', \qquad \tilde{y} = y, \qquad \qquad \textrm{in }\operatorname{supp}a.$$ Observe that, as $\mathop{\rm Im}\nolimits f(x,y) \geq 0$ and $f_{x_1}(x,y) \not = 0$, this mapping is indeed a local diffeomorphism. Upon changing variables, we find $$\mathcal I_f (h,y) = \int_{{\mathbb R}^n} e^{\frac{i}{h} f(x,y)} a(x,y;h) \,\mathrm{d}x = \int_{{\mathbb R}^n} e^{\frac{i}{h} \tilde{x}_1} \tilde{a}(\tilde{x}_1,\tilde{x}',y;h) \,\mathrm{d}\tilde{x}_1 \,\mathrm{d}\tilde{x}'.$$ Now, taking derivatives of any order with respect to $y$, the usual non-stationary phase lemma gives exponential decay in $h$ and the result follows (cf. for instance [@Zworski Lemma 3.14]). ◻
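To see where the rate $\exp(-h^{-1/s}/C)$ comes from, it may help to record the following one-dimensional model computation (a sketch, not needed in the sequel). If $\chi\in{\mathcal G}^s({\mathbb R})$ has compact support, with $\vert\chi^{(N)}\vert\leq C^{N+1}N!^s$ for all $N$, then $N$ integrations by parts give $$\int e^{\frac{i}{h}x}\chi(x)\,\mathrm{d}x = (ih)^N\int e^{\frac{i}{h}x}\chi^{(N)}(x)\,\mathrm{d}x, \qquad \textrm{so that}\qquad \Big\vert\int e^{\frac{i}{h}x}\chi(x)\,\mathrm{d}x\Big\vert \leq C'\,(ChN^{s})^{N},$$ using $N!\leq N^{N}$. Choosing $N \simeq (eCh)^{-\frac{1}{s}}$ makes $ChN^{s}\leq e^{-1}$ and yields a bound of the form $C''\exp(-c\,h^{-\frac{1}{s}})$, which is the type of smallness appearing in ([\[eq:Gevrey small\]](#eq:Gevrey small){reference-type="ref" reference="eq:Gevrey small"}).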
### Stationary phase lemma {#sec: Stationary Phase}
We turn now to the stationary phase asymptotics in the Gevrey setting. Upon using a suitable partition of unity, we may consider a purely local situation. Let us pick a point $(x_0,y_0) \in {\mathbb R}^n \times {\mathbb R}^m$ such that the following $$\begin{aligned}
f \textrm{ is real-valued}, \label{eq:phase real} \\
f_x'(x_0,y_0) = 0, \label{eq:phase stationnaire} \\
\det \left( f_{xx}''(x_0,y_0) \right) \not = 0, \label{eq:phase stationnaire non degeneree}\end{aligned}$$ hold. Consider $(x,y)$ sufficiently close to $(x_0,y_0)$ in ${\mathbb R}^n\times {\mathbb R}^m$. Then, there exists a function $y \mapsto \chi(y)$ of class ${\mathcal G}^s$ in a neighbourhood of $y_0$ such that $$f_x'(\chi(y),y) = 0, \qquad \chi(y_0) = x_0.
\label{eq:local picture}$$ We have the following stationary phase asymptotics for locally defined symbols that will be used in Proposition [\[lemma:composition\]](#lemma:composition){reference-type="ref" reference="lemma:composition"}.
**Lemma 2** (Stationary phase). *Assume that $\operatorname{supp}a$ is close to $x_0$ so that ([\[eq:local picture\]](#eq:local picture){reference-type="ref" reference="eq:local picture"}), ([\[eq:phase real\]](#eq:phase real){reference-type="ref" reference="eq:phase real"}), ([\[eq:phase stationnaire\]](#eq:phase stationnaire){reference-type="ref" reference="eq:phase stationnaire"}) and ([\[eq:phase stationnaire non degeneree\]](#eq:phase stationnaire non degeneree){reference-type="ref" reference="eq:phase stationnaire non degeneree"}) hold. Then, the symbol $$b(y,h) = e^{-(i/h)f(\chi(y),y)} \int a(x,y;h) e^{(i/h)f(x,y)} \,\mathrm{d}x$$ is of class ${\mathcal G}^s$ near $y_0$ and enjoys the asymptotic expansion $$b(y,h) \sim \left\vert \det \left( \frac{1}{2 i \pi h} f_{xx}''(\chi(y),y) \right) \right\vert^{-\frac{1}{2}} e^{i \pi \frac{\sigma}{4} } \sum_{j\geq 0} h^j L_{f,j,y}[a] (\chi(y),y),
\label{eq:phase stationaire}$$ where each $L_{f,j,y}[a]$ is a differential operator of degree $2j$ in $x$ and $\sigma$ is the signature of the matrix $D^2_x f (x_0,y_0)$.*
*Proof.* Since $f$ is real-valued, we can use the Morse lemma. By Taylor expansion of order $2$ around $(x_0,y_0)$ we find $$f(x,y) = f(\chi(y),y) + \frac{1}{2} \langle Q(x,y)(x - \chi(y)), x - \chi(y) \rangle,$$ whenever $(x,y)$ is close to $(x_0,y_0)$, where $$Q(x,y) = 2 \int_0^1 (1 - t) f_{xx}'' \left( \chi(y) + t (x - \chi(y)), y \right) \,\mathrm{d}t.$$ For $(x,y)$ close to $(x_0,y_0)$, $Q(x,y)$ is symmetric and invertible. Setting $$Q_0 = f_{xx}''(x_0,y_0),$$ we observe there is a map $(x,y) \mapsto R(x,y)$ such that $$Q_0 = R(x,y)^t Q(x,y) R(x,y), \qquad \textrm{ with } R(x_0,y_0) = Id.$$ Consider the substitution $$(x,y) \mapsto (\tilde{x}(x,y),y),$$ with $$\tilde{x}(x,y) = R^{-1}(x,y) (x - \chi(y)), \qquad \textrm{near } (x_0,y_0).$$ Write $$b(y,h) = e^{- \frac{i}{h} f(\chi(y),y)} \mathcal I_f(h,y) = \int e^{\frac{i}{2h} \langle Q_0 \tilde{x}, \tilde{x} \rangle } \tilde{a}(\tilde{x},y,h) \,\mathrm{d}\tilde{x},$$ for a ${\mathcal G}^s$ symbol $\tilde{a}$ whose support in $\tilde{x}$ is close to zero. Next, using Plancherel's formula, $$b(y,h) = \vert \det( \frac{1}{2i \pi h} Q_0) \vert^{-\frac{1}{2}} e^{ - \frac{i h}{2} \langle Q_0^{-1}D,D \rangle} \tilde{a}(0,y).$$ Now, the asymptotic expansion ([\[eq:phase stationaire\]](#eq:phase stationaire){reference-type="ref" reference="eq:phase stationaire"}) follows from [@Hormander 7.6.7] as a consequence of the usual asymptotics for quadratic phases with nonnegative real part. ◻
## Symbolic calculus with Gevrey symbols
The calculus of Gevrey PDO has been established previously. Let $Q = {\operatorname{Op}_h}(q)$, $A = {\operatorname{Op}_h}(a)$, $q \in {\mathcal S}^{m_0}_s$, $a \in {\mathcal S}^{m}_s$. The composition $Q \circ A = {\operatorname{Op}_h}(q \circ a)$ can be written $$(q \circ a) (x,\xi) = \frac{1}{(2\pi h)^n} \iint e^{-\frac{i}{h}(x-y)\cdot (\xi-\eta) } q(x,\eta) a(y,\xi) \,\mathrm{d}y \,\mathrm{d}\eta.
\label{eq:symbole composition integrale}$$
**Proposition 3**. *Let $Q = {\operatorname{Op}_h}(q)$, $A = {\operatorname{Op}_h}(a)$, $q \in {\mathcal S}^{m_0}_s$, $a \in {\mathcal S}^{m}_s$ be given. Then, the composition $Q \circ A = {\operatorname{Op}_h}(q \circ a)$ satisfies, for arbitrary $N \in {\mathbb N}$, $$(q \circ a)(x,\xi) = \sum_{\vert \alpha \vert < N} \frac{h^{ \vert \alpha \vert }}{\alpha !} D_{\xi}^{\alpha}q(x,\xi)\partial_x^{\alpha} a(x, \xi) + r_N(q,a)(x,\xi),
\label{eq:Taylor}$$ where $r_N(q,a)$ is a ${\mathcal G}^s$ symbol of order $m_0 + m - N$ given by $$r_N(q,a)(x,\xi) = \frac{h^N}{(2\pi h)^n} \sum_{|\alpha| = N}\frac{N!}{\alpha!} \int_0^1 \frac{(1-\theta)^{N-1}}{(N-1)!}\iint_{{\mathbb R}^{2n}} e^{\frac{y\cdot \eta}{ih}}(D_{\xi}^{\alpha} q)(x,\xi + \eta)\, \partial_{x}^{\alpha} a(x+\theta y,\xi) \,\mathrm{d}y \,\mathrm{d}\eta \,\mathrm{d}\theta.$$ [\[lemma:composition\]]{#lemma:composition label="lemma:composition"}*
We shall use ([\[eq:Taylor\]](#eq:Taylor){reference-type="ref" reference="eq:Taylor"}) with $N=1$ in the following sections. The proof relies on the following Lemma, which follows the lines in [@LernerBook Lemmas 4.1.2 and 4.1.5].
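Spelling out the case $N=1$ for later reference: $$(q \circ a)(x,\xi) = q(x,\xi)\,a(x,\xi) + r_1(q,a)(x,\xi), \qquad r_1(q,a) \in {\mathcal S}^{m_0+m-1}_s,$$ so that ${\operatorname{Op}_h}(q)\circ{\operatorname{Op}_h}(a) = {\operatorname{Op}_h}(qa) + {\operatorname{Op}_h}(r_1(q,a))$ with a correction of one order lower; this is exactly the form used in the transport equations of Lemma [\[lem:transport recursif\]](#lem:transport recursif){reference-type="ref" reference="lem:transport recursif"} below.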
**Lemma 4**. *For every $t \in {\mathbb R}^*$, let $J^t$ be the operator defined by $$J^t b(x,\xi) = |t|^{-n} \iint_{{\mathbb R}^n \times {\mathbb R}^n} b(x + z,\xi + \zeta) e^{\frac{z\cdot \zeta}{it}} \frac{\,\mathrm{d}z \,\mathrm{d}\zeta}{(2\pi)^n}, \qquad b \in {\mathcal S}^m_s({\mathbb R}^n \times {\mathbb R}^n).$$ Then, $J^t$ maps ${\mathcal S}_s^{m'}({\mathbb R}^n \times {\mathbb R}^n)$ into itself. Moreover, for any $N \in {\mathbb N}$, $$(J^t b)(x,\xi) = \sum_{\vert \alpha \vert < N} \frac{t^{\vert \alpha \vert }}{\alpha!} D_{\xi}^{\alpha} \partial_x^{\alpha} b(x,\xi) + r_N(t)(x,\xi)$$ with $r_N(t) \in {\mathcal S}^{m-N}_s$ and $$r_N(t)(x,\xi) = t^N \int_0^1 \frac{(1 - \theta)^{N-1}}{(N-1)!} J^{\theta t} \left( (D_{\xi} \cdot \partial_x)^N b \right)(x,\xi) \,\mathrm{d}\theta.$$ [\[lem:Jt\]]{#lem:Jt label="lem:Jt"}*
*Proof.* Let $N \in {\mathbb N}$. By Taylor's expansion up to order $N$ in the exponential we have
$$J^t = \sum_{k < N} \frac{t^k}{k!}(D_{\xi} \cdot\partial_x)^k + \int_0^1 \frac{(1 - \theta)^{N-1}}{(N-1)!} J^{\theta t} \left( (tD_{\xi} \cdot\partial_x)^N \right) \,\mathrm{d}\theta.$$ Hence, $J^t$ maps ${\mathcal S}_s^{m'}({\mathbb R}^n \times {\mathbb R}^n)$ into itself and one has $$(J^t b)(x,\xi) = \sum_{\vert \alpha \vert < N} \frac{t^{\vert \alpha \vert }}{\alpha!}D_{\xi}^{\alpha} \partial_x^{\alpha} b(x,\xi) + r_N(t)(x,\xi)$$ with $$r_N(t)(x,\xi) = t^N \int_0^1 \frac{(1 - \theta)^{N-1}}{(N-1)!} J^{\theta t} \left( (D_{\xi} \cdot\partial_x)^N b \right)(x,\xi) \,\mathrm{d}\theta.$$ ◻
*Proof of Proposition [\[lemma:composition\]](#lemma:composition){reference-type="ref" reference="lemma:composition"}.* From ([\[eq:symbole composition integrale\]](#eq:symbole composition integrale){reference-type="ref" reference="eq:symbole composition integrale"}), we may write $$(q \circ a) (x,\xi) = \frac{1}{(2\pi h)^n} \iint e^{-\frac{i}{h} z \cdot \zeta } q(x,\xi + \zeta) a(x + z,\xi) \,\mathrm{d}z \,\mathrm{d}\zeta,$$ thanks to the changes of variables $y = x + z$ and $\eta = \xi + \zeta$. For $t \in {\mathbb R}^*$ set $$J_0^t b = \frac{1}{(2 \pi \vert t\vert)^n} \iint b(z,\zeta) e^{\frac{1}{it}z\cdot \zeta} \,\mathrm{d}z \,\mathrm{d}\zeta, \qquad b = b(z,\zeta) \in {\mathcal S}^{m'}_s({\mathbb R}^n \times {\mathbb R}^n).$$ Then, setting $C_{x,\xi}(z,\zeta) = q(x,\xi + \zeta)\, a(x+z,\xi)$ one has $$(q \circ a) (x,\xi) = J_0^h C_{x,\xi}.$$ So thanks to Lemma [\[lem:Jt\]](#lem:Jt){reference-type="ref" reference="lem:Jt"} we may write $$(q \circ a)(x,\xi) = J^h C_{x,\xi}(0,0) = \sum_{\vert \alpha \vert < N} \frac{h^{\vert \alpha \vert }}{\alpha!}D_{\xi}^{\alpha} q(x,\xi) \partial_x^{\alpha} a(x,\xi) + r_N(h)(q,a)(x,\xi)$$ where $$r_N(h)(q,a)(x,\xi) = h^N\int_0^1 \frac{(1 - \theta)^{N-1}}{(N-1)!} J^{\theta h} \left( (D_{\zeta} \cdot\partial_z)^N C_{x,\xi} \right)(0,0)\,\mathrm{d}\theta.$$ Now, expanding this remainder, we find $$r_N(q,a)(x,\xi) = \frac{1}{(2 \pi h)^n} \sum_{\vert \gamma \vert = N} \frac{h^N N!}{\gamma!} \int_0^1 \iint_{{\mathbb R}^{2n}} \frac{(1-\theta)^{N-1}}{(N-1)!}e^{\frac{z\cdot \zeta}{ih}} D_{\xi}^{\gamma} q(x, \xi + \zeta) \partial_x^{\gamma}a(x + \theta z,\xi) \,\mathrm{d}z \,\mathrm{d}\zeta \,\mathrm{d}\theta.$$ Using the Gevrey stationary phase lemma one may prove that $r_N(q,a)$ is a Gevrey symbol of order $m_0 + m - N$ and thus, $$(q \circ a)(x,\xi) = \sum_{\vert \alpha \vert < N} \frac{h^{ \vert \alpha \vert }}{\alpha !} D_{\xi}^{\alpha}q(x,\xi)\partial_x^{\alpha} a(x, \xi) + r_N(q,a)(x,\xi),$$ which ends the proof. ◻
## Formal symbols and Borel lemma in the Gevrey setting
In this section we introduce a notion of Gevrey formal symbol of a given order and give conditions ensuring the existence of a Gevrey symbol realizing a given sequence of formal symbols, based on Carleson's theorem [@Carleson].
**Definition 5** (Formal symbols). Let $m\in {\mathbb R}$. Let $(a_j)_{j\geq 0}$ be a sequence of symbols such that $$a_j \in {\mathcal S}^{-j + m}_s({\mathbb R}^n \times {\mathbb R}^n), \qquad \forall j \in {\mathbb N}.$$ The sequence $(a_j)_{j\geq 0}$ is said to be a formal ${\mathcal G}^s$ symbol of degree $m$ if there exists $C>0$ such that $$\vert \partial_x^{\alpha} \partial_{\theta}^{\beta} a_j(x,\theta) \vert \leq C^{1 + \vert \alpha \vert + \vert \beta \vert + j } j!^s \alpha!^s \beta!^s h^{j-m}, \qquad \forall (x,\theta) \in {\mathbb R}^n \times {\mathbb R}^n,
\label{eq:condition symbole precisee}$$ for all $\alpha, \beta \in {\mathbb N}^n$ and $j\in {\mathbb N}$. [\[eq:symbole formel\]]{#eq:symbole formel label="eq:symbole formel"}
Carleson's theorem [@Carleson] allows us to realise Gevrey-$s$ formal symbols as genuine symbols. The result is the following Gevrey version of Borel's theorem.
**Theorem 4**. *Let $m\in {\mathbb R}$ and let $(a_j)_{j\geq 0}$, with $a_j \in {\mathcal S}^{-j+m}_s$, be a formal ${\mathcal G}^s$ symbol of degree $m$. Then, there exists $a \in {\mathcal S}_s^m({\mathbb R}^n \times {\mathbb R}^n)$ such that for any $N>0$, $\alpha, \beta \in {\mathbb N}^n$ one has uniformly $$\vert \partial_x^{\alpha} \partial_{\theta}^{\beta} ( a - \sum_{j < N} a_j ) \vert \leq C^{1 + \vert\alpha\vert + \vert\beta\vert + N} N!^s\alpha!^s \beta!^s h^{N-m}.$$ [\[thm:Carleson\]]{#thm:Carleson label="thm:Carleson"}*
*Proof.* Consider the sequence $(a_j(x,\theta,h)h^{-j}j!)_{j\geq 0}$. This is an $(s+1)$-sequence in the Boutet-Kree sense, according to [@BoutetKree]. Using Carleson's theorem we have that there exists $g(t,x,\theta, h) \in {\mathcal G}^{s+1}(\overline{{\mathbb R}}_+, {\mathcal S}_s^m)$ such that for all $j\geq 0$ $$\partial_t^j g(0,x,\theta,h) = j! a_j(x,\theta,h) h^{-j}.$$ Setting $a(x,\theta,h) = g(t,x,\theta, h)\vert_{t = h > 0}$ for $h$ small, by Taylor's formula $$\vert \partial_x^{\alpha} \partial_{\theta}^{\beta} ( a - \sum_{j < N} a_j ) \vert \leq \sup_{\theta' \in [0,1]} \vert \partial_x^{\alpha} \partial_{\theta}^{\beta} \partial_{t}^{N} g(\theta' h, x,\theta, h) \vert \frac{h^{N}}{N!}.$$ As $g$ is a ${\mathcal G}^{s+1}$ function w.r.t. $t$ and a ${\mathcal G}^s$ function w.r.t. $(x,\theta)$, one has for some $C'>0$ $$\vert \partial_x^{\alpha} \partial_{\theta}^{\beta} ( a - \sum_{j < N} a_j ) \vert \leq C'^{1 + \vert \alpha \vert + \vert \beta \vert + N} N!^s \alpha!^s \beta!^s h^{N-m},$$ so the theorem is proven. ◻
## Formal quasinorms in the class of Gevrey symbols
In the set ${\mathcal S}_s^m({\mathbb R}^n \times {\mathbb R}^n)$ we may define, following [@LascarLascar], a family of quasinorms (cf. [@BoutetKree]) of the form: $$N_m(a,T)(x,\theta) = \sum_{(\alpha,\beta) \in {\mathbb N}^n\times{\mathbb N}^n } \frac{h^m T^{\vert \alpha \vert + \vert \beta \vert}}{\alpha!^s\beta!^s} \vert \partial_x^{\alpha} \partial_{\theta}^{\beta} a(x,\theta) \vert,
\label{eq:quasinorms}$$ where $T>0$ is a fixed parameter. We also set $$\overline{N_m}(a,T):= \sup_{(x,\theta)} N_m(a,T)(x,\theta).$$ By Leibniz rule, one has $$\overline{N}_{m+m'}(aa',T) \leq \overline{N}_{m}(a,T) \overline{N}_{m'}(a',T),$$ for any $a \in {\mathcal S}_s^m({\mathbb R}^n \times {\mathbb R}^n)$ and $a' \in {\mathcal S}_s^{m'}({\mathbb R}^n \times {\mathbb R}^n)$.
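The submultiplicativity above can be seen as follows (a short verification, assuming $s\geq 1$): by the Leibniz rule, $$\frac{h^{m+m'}T^{\vert\alpha\vert+\vert\beta\vert}}{\alpha!^s\beta!^s}\vert\partial_x^{\alpha}\partial_{\theta}^{\beta}(aa')\vert \leq \sum_{\alpha'\leq\alpha,\,\beta'\leq\beta} \frac{h^{m}T^{\vert\alpha'\vert+\vert\beta'\vert}}{\alpha'!^s\beta'!^s}\vert\partial_x^{\alpha'}\partial_{\theta}^{\beta'}a\vert \cdot \frac{h^{m'}T^{\vert\alpha-\alpha'\vert+\vert\beta-\beta'\vert}}{(\alpha-\alpha')!^s(\beta-\beta')!^s}\vert\partial_x^{\alpha-\alpha'}\partial_{\theta}^{\beta-\beta'}a'\vert,$$ since $\binom{\alpha}{\alpha'}\binom{\beta}{\beta'}\leq \binom{\alpha}{\alpha'}^s\binom{\beta}{\beta'}^s$ when $s\geq 1$; summing over $(\alpha,\beta)$ gives $N_{m+m'}(aa',T)\leq N_m(a,T)\,N_{m'}(a',T)$ pointwise, and hence the stated bound for the sup quasinorms.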
These quasinorms will be important in the proof of Theorem [\[thm:Egorov1\]](#thm:Egorov1){reference-type="ref" reference="thm:Egorov1"}.
# Fourier Integral Operators in the Gevrey setting {#sec:FIO}
In this section we present some basic results concerning the specific class of Fourier Integral Operators (FIO for short) associated to Gevrey phases and symbols, and how they relate to the class of Gevrey pdos.
## FIO with Gevrey symbols and phases
We work with a local class of Gevrey semiclassical Fourier integral operators defined in an analogous way to [@Eskin 61.1] in the $h$-independent case and also to [@Zworski] for the $h$-dependent case in the smooth class. These are operators of the form $$Fu(x,h) = \frac{1}{(2\pi h)^n} \iint_{{\mathbb R}^n \times {\mathbb R}^n} e^{\frac{i}{h}(S(x,\eta) -y \cdot \eta)}a(x,\eta) u(y,h) \,\mathrm{d}y \,\mathrm{d}\eta,
\label{eq:GevreyFIO}$$ where the phase function $S = S(x,\eta) \in {\mathcal G}^s({\mathbb R}^n \times {\mathbb R}^n)$ is such that $\det S''_{x,\eta} \not = 0$ and the symbol $a=a(x,\eta)$ is in some class $a \in {\mathcal S}^m_s({\mathbb R}^n \times {\mathbb R}^n)$. We will sometimes use the notation $F \in \mathcal I^m_s({\mathbb R}^n)$ for such an operator.
For a FIO $F$ as above, define (cf. [@Eskin Section 61.1]) its adjoint $F^*$ by $$\langle F u, v \rangle = \langle u, F^* v\rangle,$$ so that $$F^*u(x,h) = \frac{1}{(2\pi h)^n} \iint_{{\mathbb R}^n \times {\mathbb R}^n} e^{-\frac{i}{h}(S(y,\eta) - x \cdot \eta)} \overline{a(y,\eta)} u(y,h) \,\mathrm{d}y \,\mathrm{d}\eta.$$ Then, we have a Gevrey version of [@Eskin Lemma 62.4].
**Lemma 6**. *Let $F \in \mathcal I^m_s$ and let $F^*$ be its adjoint. Then, for every $u \in \mathscr{D}'({\mathbb R}^n)$, $$FF^*u(x,h) = \frac{1}{(2\pi h)^n} \iint_{{\mathbb R}^n \times {\mathbb R}^n} e^{\frac{i}{h}(S(x,\xi) - S(y,\xi) )} a(x,\xi) \overline{a(y,\xi)} u(y,h) \,\mathrm{d}y \,\mathrm{d}\xi,$$ and we can write $$FF^* = K_1 + K_2,$$ where $K_1 \in \Psi^{2m}_{s}$ and $K_2 = {\mathcal O}_s(h^{\infty})$.*
*Proof.* Using a Gevrey$-s$ partition of unity we may assume that $\operatorname{supp}(a)$ is small enough so that the mapping $$\Sigma : (y,\eta) \mapsto (x,\xi), \qquad \xi = S'_x(x,\eta), \quad y = S'_{\eta}(x,\eta)$$ is a canonical transformation on $\operatorname{supp}(a)$ of class ${\mathcal G}^s$. Let $\chi \in {\mathcal G}^s_0({\mathbb R}^n)$ and $r>0$ small enough so that $\operatorname{supp}(\chi) \subset B(0,r)$ and $\chi = 1$ on $B(0,\frac{r}{2})$. We can write $$S(x,\xi) - S(y,\xi) = \Sigma(x,y,\xi)(x-y).$$ Then, we have $$FF^* = K_1 + K_2$$ with $$\begin{aligned}
K_1 u(x,h) & = \iint_{{\mathbb R}^n \times {\mathbb R}^n} \mathfrak{a}(x,y,\eta) \chi\left( \frac{x-y}{\delta} \right) e^{\frac{i}{h}(x-y)\cdot \eta} \vert J(x,y,\eta) \vert u(y,h) \frac{\,\mathrm{d}y \,\mathrm{d}\xi }{(2\pi h)^n}, \label{eq:k1} \\
K_2 u(x,h) & = \iint_{{\mathbb R}^n \times {\mathbb R}^n} \mathfrak{a}(x,y,\eta) \left( 1 - \chi\left( \frac{x-y}{\delta} \right) \right) e^{\frac{i}{h}(x-y)\cdot \eta} \vert J(x,y,\eta) \vert u(y,h) \frac{\,\mathrm{d}y \,\mathrm{d}\xi}{(2\pi h)^n}, \label{eq:k2}\end{aligned}$$ where $$\mathfrak{a}(x,y,\eta) := a(x,\Sigma^{-1}(x,y,\eta)) \overline{a(y,\Sigma^{-1}(x,y,\eta))}, \qquad J = \partial_{\eta} \Sigma^{-1}.$$ Using Kuranishi's trick (cf. [@Zworski Chapter 8]), we deduce that $K_1$ is a ${\mathcal G}^s$ PDO of order $2m$.
Next, using the Gevrey non-stationary phase lemma we obtain that $K_2$ is a Gevrey-$s$ negligible remainder, while it is easy to compute the principal symbol of $K_1$, which is of order $2m$. ◻
## Composition of FIO and PDOs in the Gevrey setting
The following result is an adaptation of [@Eskin Lemmas 62.2 and 62.3] to the Gevrey setting, which is possible with only minor modifications thanks to Lemma [\[prop: Proof Non-stationary Phase\]](#prop: Proof Non-stationary Phase){reference-type="ref" reference="prop: Proof Non-stationary Phase"}.
**Proposition 7**. *Let $F$ be a ${\mathcal G}^s$-FIO of the form ([\[eq:GevreyFIO\]](#eq:GevreyFIO){reference-type="ref" reference="eq:GevreyFIO"}) for some phase $S\in {\mathcal G}^s$ and symbol $a\in {\mathcal S}^{m_0}_s$. Then, for every $A = A(x,D) \in \Psi_s^m$ one has $$AFu(x) = \frac{1}{(2\pi h)^n} \int c(x,\eta) e^{\frac{i}{h}S(x,\eta)} \hat{u}(\eta) \,\mathrm{d}\eta, \qquad u \in \mathscr{D}'({\mathbb R}^n),$$ for some $c \in {\mathcal S}_s^{m + m_0}$ whose principal symbol satisfies $$c_0(x,\eta) = a_0(x,S'_x(x,\eta))a(x,\eta),$$ where $a_0$ is the principal symbol of $A$. Also, $$FAu(x) = \frac{1}{(2\pi h)^n} \int_{{\mathbb R}^{n} } c'(x,\eta) e^{\frac{i}{h}S(x,\eta)} \hat{u}(\eta) \,\mathrm{d}\eta + R'u, \qquad u \in \mathscr{D}'({\mathbb R}^n),$$ for some $c' \in {\mathcal S}^{m + m_0}$ whose principal symbol satisfies $$c_0'(x,\eta) = a_0(S'_\eta(x,\eta),\eta)\,a(x,\eta),$$ and $R' \in \Psi_s^{-\infty}$. [\[prop:composition FIO pdo\]]{#prop:composition FIO pdo label="prop:composition FIO pdo"}*
## Gevrey wavefront and Lagrangian distributions
We end this section with some details on the wavefront set adapted to the Gevrey setting.
**Definition 8** (Microlocal conjugation by Gevrey FIOs). Given $P,Q \in \Psi^m_s$, we say that $P$ and $Q$ are microlocally conjugate in the ${\mathcal G}^s$ sense around a point $(x_0,\xi_0) \in T^*{\mathbb R}^n$ if there exists a FIO $R \in \mathcal I_s^m$ such that $RP$ equals $QR$ microlocally around $(x_0,\xi_0)$. [\[def:microlocal conjugation\]]{#def:microlocal conjugation label="def:microlocal conjugation"}
Consider a Lagrangian distribution of the form $$u(x,h) = \int e^{\frac{i}{h} \phi(x,\theta) } a(x,\theta,h) \,\mathrm{d}\theta
\label{eq:Lagrangian distribution}$$ for a phase $\phi$ and amplitude $a$ satisfying $$\begin{aligned}
\phi \in {\mathcal G}^s({\mathbb R}^n \times {\mathbb R}^N), \\
\mathop{\rm Im}\nolimits\phi \geq 0, \qquad d\phi \not = 0, \\
a \in {\mathcal S}^{m_0}_s({\mathbb R}^n \times {\mathbb R}^N).\end{aligned}$$ Assume $\phi$ homogeneous of degree 1 in $\theta$ and $a \in {\mathcal S}^{m_0,k_s}$ or $a \in {\mathcal G}^s_0 \cap {\mathcal S}^{m_0}_s$ (see [@LascarLascar])
If $\operatorname{WF}_h(u)$ is the semiclassical wavefront set of $(u(\cdot, h))_h$, then it is standard that $$\operatorname{WF}_h(u) \subset \left\{ (x,\phi_x') \, \vert \quad (x,\theta) \in F, \, \phi_{\theta}'(x,\theta) = 0 \right\},$$ with $\operatorname{supp}(a) \subset F$. The next result is an analogous statement for the Gevrey semiclassical wavefront set $\operatorname{WF}_{s,h}(u)$.
**Lemma 9**. *Let $u$ as in ([\[eq:Lagrangian distribution\]](#eq:Lagrangian distribution){reference-type="ref" reference="eq:Lagrangian distribution"}). Then, one has $$\operatorname{WF}_{s,h}(u) \subset \Lambda = \left\{ (x,\phi_x') \, \vert \quad (x,\theta) \in F, \, \phi_{\theta}'(x,\theta) = 0 \right\},$$*
*Proof.* This is a consequence of the results in [@Hormander Ch. 25] with minor modifications. The Gevrey asymptotic comes from the non-stationary phase lemma in Lemma [\[prop: Proof Non-stationary Phase\]](#prop: Proof Non-stationary Phase){reference-type="ref" reference="prop: Proof Non-stationary Phase"}. ◻
# Proof of Theorem [\[thm:Egorov1\]](#thm:Egorov1){reference-type="ref" reference="thm:Egorov1"} {#sec:ProofEgorov}
The goal of this section is to prove Theorem [\[thm:Egorov1\]](#thm:Egorov1){reference-type="ref" reference="thm:Egorov1"}. We divide the proof into several steps, according to the usual WKB method.
**Step 1. Setting of the problem.**
We may assume w.l.g. that $(x_0,\xi_0)=(0,0)$. Let us choose once and for all a particular real coordinate and write as usual $x =(x_1,x')\in {\mathbb R}\times {\mathbb R}^{n-1}$ and $\xi =(\xi_1,\xi')\in {\mathbb R}\times {\mathbb R}^{n-1}$. As $P$ is of real principal type by hypothesis, we can assume that the principal symbol of $P$ can be written as (cf. for instance [@Zworski Section 12.2]) $$p (x,\xi_1,\xi') = \xi_1 - \lambda(x,\xi'),
\label{eq:Thm2 evolution eq symbol}$$ for some real symbol $\lambda \in {\mathcal S}_s^0({\mathbb R}^n \times {\mathbb R}^{n-1})$. Letting $Q = Q(x,hD_{x'})$ be a pdo with principal symbol $\lambda(x,\xi')$, we write $$P(x,hD_{x_1},hD_{x'}) = hD_{x_1} + Q(x,hD_{x'}).
\label{eq:Thm2 evolution eq}$$
**Step 2. Suitable phase using the eikonal equation.**
We consider now the following Cauchy problem: Find $\varphi$ such that $$\left\{ \begin{array}{l}
\frac{\partial \varphi}{\partial x_1} - \lambda \left( x, \varphi_x' \right) = 0, \\
\varphi|_{x_1=0} = x'\cdot \eta',
\end{array} \right.
\label{eq:Thm2 eikonal}$$ where $\lambda \in {\mathcal S}_s^0({\mathbb R}^n \times {\mathbb R}^{n-1})$ is given by ([\[eq:Thm2 evolution eq symbol\]](#eq:Thm2 evolution eq symbol){reference-type="ref" reference="eq:Thm2 evolution eq symbol"}). The Cauchy problem ([\[eq:Thm2 eikonal\]](#eq:Thm2 eikonal){reference-type="ref" reference="eq:Thm2 eikonal"}) has a solution $\varphi$ of class ${\mathcal G}^s$. Moreover, since $\det \varphi_{x'\eta'}'' \not = 0$, the phase function $\varphi$ generates a canonical transformation.
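For orientation, in the special case where $\lambda$ does not depend on $x$ (a toy example, not used below), one may take explicitly $$\varphi(x,\eta') = x'\cdot\eta' + x_1\,\lambda(\eta'),$$ which satisfies $\partial_{x_1}\varphi = \lambda(\eta') = \lambda(\varphi_{x'}')$, $\varphi\vert_{x_1=0} = x'\cdot\eta'$ and $\varphi_{x'\eta'}'' = \mathrm{Id}$.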
Our goal in the following steps is to find suitable $h$-FIOs $F,G$ such that $$GPF = hD_{x_1} + {\mathcal O}_s(h^\infty),
\label{eq:Thm2 claim}$$
**Step 3. Reducing the order of the remainder by conjugation.**
Let us introduce an $h$-FIO of the form $$Fu(x,h) = \frac{1}{(2 \pi h)^{n-1}} \int e^{\frac{i}{h}\varphi(x,\eta')} a(x,\eta') \hat{u}(x_1,\frac{\eta'}{h}) d \eta', \qquad u \in \mathscr{S}({\mathbb R}^n),$$ where $a \in {\mathcal S}_s^{\mu}({\mathbb R}^n \times {\mathbb R}^{n-1})$, $\hat{u}$ denotes the partial Fourier transform in the variables $x'$ only and $\varphi$ is given as before. Assume that $a$ is elliptic at the point $(x_1, x', \xi_1, \xi')=0$.
Now, $F$ defined above is an $h$-FIO of the form ([\[eq:GevreyFIO\]](#eq:GevreyFIO){reference-type="ref" reference="eq:GevreyFIO"}) associated to a ${\mathcal G}^s$ phase and symbol. As $P \in \Psi_s^0$ by hypothesis, we can compose $F$ and $P$ using Proposition [\[prop:composition FIO pdo\]](#prop:composition FIO pdo){reference-type="ref" reference="prop:composition FIO pdo"}. Indeed, from ([\[eq:Thm2 evolution eq\]](#eq:Thm2 evolution eq){reference-type="ref" reference="eq:Thm2 evolution eq"}) we get, for every $u \in \mathscr{S}({\mathbb R}^n)$, $$PFu = (hD_{x_1} + Q)Fu = hD_{x_1}Fu + QFu.$$ Now, as the symbol $a$ is supposed to be elliptic at the point $(x_1, x', \xi_1, \xi')=0$, we may find a microlocal inverse of $F$ near this point and thus write $$PFu = F(hD_{x_1}u) + F_{-1} u,$$ for some $F_{-1} \in I_s^{\mu-1}$. Moreover, the symbol of $F_{-1}$, namely $\mathfrak{f}$, admits the expansion (as in [@Hormander Theorem 7.7.7]): $$\mathfrak{f} = \sum_{\alpha \in {\mathbb N}^n} \frac{h^{|\alpha|}}{\alpha !} p^{(\alpha)}(x,\varphi_x') D_y^{\alpha}(e^{\frac{i}{h}\rho}a)|_{y=x},$$ with $$\rho(x,y,\eta')= \varphi(y,\eta') - \varphi(x,\eta') - (y-x) \varphi_x'(x,\eta').$$ We observe that the first term of the expansion vanishes, since $\rho\vert_{y=x}=0$ and thus $$p(x,\varphi_x')\, (e^{\frac{i}{h}\rho}a)|_{y=x} = p(x,\varphi_x')\, a(x,\eta') = \big( \varphi_{x_1}'(x,\eta') - \lambda(x,\varphi_{x'}'(x,\eta')) \big)\, a(x,\eta') = 0,$$ thanks to ([\[eq:Thm2 eikonal\]](#eq:Thm2 eikonal){reference-type="ref" reference="eq:Thm2 eikonal"}). Hence, the expansion of $\mathfrak{f}$ starts at order $h$, which justifies that $F_{-1} \in I_s^{\mu-1}$.
Let now $G$ be the microlocal inverse of the elliptic $h$-FIO $F$ near zero. Then, we can write $$GF = I + r, \qquad \textrm{microlocally near } (0,0,0,0),$$ for $r \in \Psi_s^{-\infty}$ and hence $$GPFu = GF(hD_{x_1}) + GF_{-1} = hD_{x_1} + R,$$ where $R = r hD_{x_1} + GF_{-1}$. Now, as $G$ and $F_{-1}$ are associated to the same generating function, we have in particular that $GF_{-1}$ is a pdo and moreover $GF_{-1} \in \Psi_s^{-1}$. Furthermore, as $r$ is a regularising pdo, we also have $r hD_{x_1} \in \Psi_s^{-1}$, which guarantees that $$GPFu = hD_{x_1} + R, \qquad R \in \Psi_s^{-1},$$ microlocally near zero. This means that $P$ is microlocally conjugate by ${\mathcal G}^s$ FIOs to $hD_{x_1}$ up to a remainder of degree $-1$.
**Step 4. Reducing arbitrarily the order of the remainder by conjugation.**
We aim now to iterate the scheme of the previous step in order to get ([\[eq:Thm2 claim\]](#eq:Thm2 claim){reference-type="ref" reference="eq:Thm2 claim"}) for a remainder of arbitrary order. We do this thanks to the following iterative scheme, which produces a formal Gevrey symbol.
**Lemma 10**. *Let $q \in {\mathcal S}^{-1}_s({\mathbb R}^n \times {\mathbb R}^n)$. Then, there is a sequence $(a_j)_{j\geq 0}$ of symbols such that $a_j \in {\mathcal S}^{-j}_s({\mathbb R}^n \times {\mathbb R}^n)$ satisfying $$\frac{h}{i} \partial_{x_1}a_0 + qa_0 = 0, \qquad \textrm{and} \qquad a_0\vert_{x_1 = 0} \textrm{ is elliptic},$$ and for all $j\geq 1$, $$\frac{h}{i} \partial_{x_1}a_j + q a_j = r_1(q,a_{j-1}), \qquad \textrm{and} \qquad a_j\vert_{x_1 = 0} = 0.
\label{eq:stationary equations}$$ Moreover, if $$A_j = {\operatorname{Op}_h}(a_j), \qquad A^{(N)} = A_0 + \dots + A_{N-1}, \,\, N \geq 1,$$ one has $$(hD_{x_1} + Q) A^{(N)} = A^{(N)}\, hD_{x_1} + R_N,$$ with $R_N$ a ${\mathcal G}^s$ pdo in $\Psi_s^{-(N+1)}$. Finally, if $q$ is of order $m_0$ small enough the above sequence $(a_j)_{j\geq 0}$ is a formal Gevrey $s$ symbol in the sense of Definition [\[eq:symbole formel\]](#eq:symbole formel){reference-type="ref" reference="eq:symbole formel"}. [\[lem:transport recursif\]]{#lem:transport recursif label="lem:transport recursif"}*
*Proof.* Equations ([\[eq:stationary equations\]](#eq:stationary equations){reference-type="ref" reference="eq:stationary equations"}) are solved by induction, setting for $j\geq 1$ $$a_j = \frac{i\, a_0}{h} \int_0^{x_1} \frac{r_1(q,a_{j-1})}{a_0} \,\mathrm{d}t.$$ One has $a_j \in {\mathcal S}_s^{-j}({\mathbb R}^n \times {\mathbb R}^n)$, and in view of ([\[eq:Taylor\]](#eq:Taylor){reference-type="ref" reference="eq:Taylor"}) and ([\[eq:stationary equations\]](#eq:stationary equations){reference-type="ref" reference="eq:stationary equations"}), $R_N$ is of order $-N-1$.
We now show that if $q$ is of order $m_0$ small enough the above sequence of $(a_j)_{j\geq 0}$ is a formal Gevrey $s$ symbol.
In order to prove ([\[eq:condition symbole precisee\]](#eq:condition symbole precisee){reference-type="ref" reference="eq:condition symbole precisee"}) we use the formal quasinorms for Gevrey symbols introduced in ([\[eq:quasinorms\]](#eq:quasinorms){reference-type="ref" reference="eq:quasinorms"}). Upon repeating the scheme of the previous step, we may assume that $P=hD_1 + Q$, with $Q \in \Psi_s^{m_0}$ of order $m_0$ small, is microlocally conjugate to $hD_{x_1}$ by ${\mathcal G}^s$ FIOs, modulo negligible remainders. If $q$ is the symbol of $Q$, recall that $${\operatorname{Op}_h}(q) \circ {\operatorname{Op}_h}(a) = {\operatorname{Op}_h}(qa + R), \qquad R = R(a,q)$$ with a remainder $R$ as in Proposition [\[lemma:composition\]](#lemma:composition){reference-type="ref" reference="lemma:composition"}. Let us define next $\chi \in {\mathcal G}^s_0({\mathbb R}^{2n})$ such that $$\operatorname{supp}\chi \subseteq \left\{(y,\eta); \, \vert y \vert + \vert \eta \vert \leq r \right\} \qquad \textrm{ and } \qquad \chi = 1 \textrm{ on } \left\{(y,\eta); \, \vert y \vert + \vert \eta \vert \leq \frac{r}{2} \right\}$$ and let us write $$R(a,q) = R_{\chi}(a,q) + S_{\chi}, \qquad R_{\chi} = \chi R(a,q), \quad S_{\chi} = (1 - \chi) R(a,q).$$ We deduce that $S_{\chi}$ is a Gevrey-$s$ small remainder by the non-stationary phase asymptotics in Lemma [\[prop: Proof Non-stationary Phase\]](#prop: Proof Non-stationary Phase){reference-type="ref" reference="prop: Proof Non-stationary Phase"}. Now, if $\alpha,\beta \in {\mathbb N}^n$ and ${\epsilon}>0$ is small enough, we also have $$\overline{N_m}(\partial_x^{\alpha}\partial_{\theta}^{\beta}a,T) \leq T^{- (\vert \alpha \vert + \vert \beta \vert ) } {\epsilon}^{- s(\vert \alpha \vert + \vert \beta \vert ) } \alpha!^s \beta!^s \overline{N_m}(a,T(1 + {\epsilon})^s).
\label{eq:seminorms estimate}$$ Now, if $q\in {\mathcal S}^{m_0}_s({\mathbb R}^n \times {\mathbb R}^n)$ with $m_0 \leq -n$, then from the previous equations, $$\overline{N}_{m-1}(R_{\chi}(a),T) \leq \tau^{2n} C_n {\epsilon}^{-s} M_0(q) T^{-1} \overline{N}_m (a, T(1 + {\epsilon})^s).$$ Let $(a_j)_{j\geq 0}$ be the sequence defined by ([\[eq:stationary equations\]](#eq:stationary equations){reference-type="ref" reference="eq:stationary equations"}). We claim that estimate ([\[eq:seminorms estimate\]](#eq:seminorms estimate){reference-type="ref" reference="eq:seminorms estimate"}) implies that, for $j\geq 0$ and ${\epsilon}> 0$ small, for some $M_0' = M_0'(r,q,n)>0$ one has $$\overline{N}_{-j}(a_j,T) \leq M_0'^j {\epsilon}^{-sj} j^{sj} \overline{N}_0(a_0,2T)T^{-j}.
\label{eq:seminorms estimate induction}$$ Indeed, this is true if $j=1$ from ([\[eq:seminorms estimate\]](#eq:seminorms estimate){reference-type="ref" reference="eq:seminorms estimate"}). Assuming that ([\[eq:seminorms estimate induction\]](#eq:seminorms estimate induction){reference-type="ref" reference="eq:seminorms estimate induction"}) holds for $j-1$ with $j\geq 2$, we have for ${\epsilon}$, $\tilde{{\epsilon}}>0$ small $$\overline{N}_{-j}(a_j,T) \leq {\epsilon}^{-sj} \frac{M_0'^j}{T^j} \frac{(j-1)^{s(j-1)}}{(1 + {\epsilon})^{s(j-1)} \tilde{{\epsilon}}^{s(j-1)} } \overline{N}_0 \left(a_0,T(1 + {\epsilon})^s\left( 1 + \frac{\tilde{{\epsilon}}}{j-1} \right)^{s(j-1)} \right).$$ We fix $\delta > 0$ small and choose $\frac{\tilde{{\epsilon}}}{j-1} = \frac{\delta}{j}$ and ${\epsilon}= \frac{\delta}{j}$ in the previous inequality, so that $$(1 + {\epsilon})^s \left( 1 + \frac{\tilde{{\epsilon}}}{j-1} \right)^{s(j-1)} = \left( 1 + \frac{\delta}{j} \right)^{sj}.$$ It remains to check that $$\frac{(j-1)^{s(j-1)}}{(1 + {\epsilon})^{s(j-1)}\, \tilde{{\epsilon}}^{s(j-1)} }\, \delta^{sj} j^{-sj}\leq \frac{\delta^s}{j^s},$$ which, with the above choices of ${\epsilon}$ and $\tilde{{\epsilon}}$, reduces to $(1 + \frac{\delta}{j})^{-s(j-1)} \leq 1$, which is true since $j\geq 1$. This proves ([\[eq:seminorms estimate induction\]](#eq:seminorms estimate induction){reference-type="ref" reference="eq:seminorms estimate induction"}).
As a consequence, the sequence of symbols $(a_j)_{j\in {\mathbb N}}$ defined by ([\[eq:stationary equations\]](#eq:stationary equations){reference-type="ref" reference="eq:stationary equations"}) is a formal ${\mathcal G}^s$ symbol as desired. ◻
**Step 5. Conclusion.**
Thanks to Lemma [\[lem:transport recursif\]](#lem:transport recursif){reference-type="ref" reference="lem:transport recursif"}, the sequence of symbols $(a_j)_{j\in {\mathbb N}}$ defined by ([\[eq:stationary equations\]](#eq:stationary equations){reference-type="ref" reference="eq:stationary equations"}) is a formal ${\mathcal G}^s$ symbol of order zero. Using Theorem [\[thm:Carleson\]](#thm:Carleson){reference-type="ref" reference="thm:Carleson"} we deduce that there exists $a \in {\mathcal S}_s^m$ realising $(a_j)_{j\in {\mathbb N}}$. As a consequence, the $h$-FIO $F$ associated to the symbol $a$ and the phase $\varphi$ satisfying ([\[eq:Thm2 eikonal\]](#eq:Thm2 eikonal){reference-type="ref" reference="eq:Thm2 eikonal"}) satisfies $PF = F(hD_{x_1})$ microlocally near zero up to negligible remainders. This ends the proof of Theorem [\[thm:WKB\]](#thm:WKB){reference-type="ref" reference="thm:WKB"}.
# A Gevrey FBI version of Egorov's theorem: Proof of Theorem [\[thm:Egorov\]](#thm:Egorov){reference-type="ref" reference="thm:Egorov"}
Let us describe a Gevrey$-s$ transform of the type $$\mathcal{T}u(x,h) = \int e^{\frac{i}{h}\varphi(x,y)} a(x,y,h) u(y) dy, \qquad u \in \mathcal{S}',
\label{eq:FBI}$$ with some phase $\varphi$ of Gevrey class.
## Some complex symplectic geometry
For $n \in {\mathbb N}$, we use coordinates $z = (z_1,\dots,z_n)$ for a point $z\in {\mathbb C}^n$. As usual, we write $z_j = x_j + i y_j$, for $x_j,y_j \in {\mathbb R}$ and $j=1,\dots,n$, so that the holomorphic and antiholomorphic derivatives are
$$\frac{\partial}{\partial z_j} = \frac{1}{2} \left( \frac{\partial}{\partial x_j} - i \frac{\partial}{\partial y_j} \right), \qquad \frac{\partial}{\partial \overline{z}_j} = \frac{1}{2} \left( \frac{\partial}{\partial x_j} + i \frac{\partial}{\partial y_j} \right).$$ Let us denote $${\mathcal B}= \left\{ \frac{\partial}{\partial z_j}, \frac{\partial}{\partial \overline{z_j}} \right\}_{j=1,\dots,n}, \qquad {\mathcal B}^* = \left\{ \,\mathrm{d}z_1, \dots, \,\mathrm{d}z_n, \,\mathrm{d}\overline{z}_1, \dots \,\mathrm{d}\overline{z}_n \right\},$$ where ${\mathcal B}$ is the canonical complex basis of ${\mathbb C}^n$ and ${\mathcal B}^*$ is the associated basis of 1-forms. As usual, the canonical volume form is $$\,\mathrm{d}z \wedge \,\mathrm{d}\overline{z} = \left( \frac{2}{i} \right)^n \,\mathrm{d}m(z),$$ where $\,\mathrm{d}m$ is the Lebesgue measure and $$\,\mathrm{d}z = \,\mathrm{d}z_1 \dots \,\mathrm{d}z_n, \qquad \,\mathrm{d}\overline{z} = \,\mathrm{d}\overline{z}_1 \dots \,\mathrm{d}\overline{z}_n,$$
Let $\sigma$ be the canonical symplectic form on ${\mathbb R}^n \times {\mathbb R}^n$, given by $$\sigma = \,\mathrm{d}\xi \wedge \,\mathrm{d}x.$$ The canonical form $\sigma_{{\mathbb C}}$ on ${\mathbb C}^n \times {\mathbb C}^n$, given by $$\sigma_{{\mathbb C}} = \,\mathrm{d}\zeta \wedge \,\mathrm{d}z,$$ is an extension of $\sigma$ satisfying $$\mathop{\rm Re}\nolimits\sigma_{{\mathbb C}} = \,\mathrm{d}\xi \wedge \,\mathrm{d}x - \,\mathrm{d}\eta \wedge \,\mathrm{d}y, \qquad \mathop{\rm Im}\nolimits\sigma_{{\mathbb C}} = \,\mathrm{d}\xi \wedge \,\mathrm{d}y + \,\mathrm{d}\eta \wedge \,\mathrm{d}x.$$ Recall that a submanifold $\Lambda \subset {\mathbb C}^n \times {\mathbb C}^n$ is totally real if $\Lambda \cap i \Lambda = \left\{ 0 \right\}$, I-Lagrangian if $\mathop{\rm Im}\nolimits\sigma_{{\mathbb C}} \vert_{\Lambda} = 0$ and R-Lagrangian if $\mathop{\rm Re}\nolimits\sigma_{{\mathbb C}} \vert_{\Lambda} = 0$. If $A \in {\mathcal M}_{n\times n}({\mathbb C})$ and $f : {\mathbb C}^n \rightarrow {\mathbb C}$ is a quadratic function given by $$f(z) = \frac{1}{2} \langle Az, z \rangle, \qquad z \in {\mathbb C}^n,$$ then the submanifold $\Lambda_f \subset {\mathbb C}^n \times {\mathbb C}^n$ defined by $$\Lambda_f = \left\{ \left(z,\frac{\,\mathrm{d}f}{\,\mathrm{d}z}(z) \right); \, z \in {\mathbb C}^n \right\}$$ is Lagrangian for $\sigma_{{\mathbb C}}$ (i.e., both I-Lagrangian and R-Lagrangian).
### FBI transforms
Let $\varphi = \varphi(z,x)$ be a holomorphic quadratic function on ${\mathbb C}^n \times {\mathbb C}^n$ satisfying $$\mathop{\rm Im}\nolimits\left( \frac{\partial^2 \varphi}{\partial x^2} \right) \textrm{ is positive definite} \qquad \textrm{and} \qquad \det \left( \frac{\partial^2 \varphi}{\partial x \partial z} \right) \not = 0.$$ Let $\kappa_{\varphi}$ be the associated complex transformation $\kappa_{\varphi}: {\mathbb C}^n \times {\mathbb C}^n \rightarrow {\mathbb C}^n \times {\mathbb C}^n$ defined implicitly by $$\kappa_{\varphi} \left(x, - \frac{\partial \varphi}{\partial x}(z,x) \right) = \left( z, \frac{ \partial \varphi}{\partial z}(z,x) \right), \qquad z,x \in {\mathbb C}^n.$$ According to [@Zworski], the transformation $\kappa_{\varphi}$ is canonical for $\sigma_{{\mathbb C}}$. The FBI operator associated to $\varphi$ is $${\mathcal T}_{\varphi} u(z) = \frac{c_{ \varphi}}{h^{\frac{3n}{4}}} \int_{{\mathbb R}^n} e^{\frac{i}{h} \varphi(z,x)} u(x) \,\mathrm{d}x, \qquad u \in \mathscr{S}({\mathbb R}^n),
\label{eq:FBI definition}$$ where $$c_{\varphi} := \frac{\vert \det \frac{\partial^2 \varphi}{\partial x \partial z} \vert}{2^{\frac{n}{2}} \pi^{\frac{3n}{4}} \vert \det \mathop{\rm Im}\nolimits\frac{\partial^2 \varphi}{\partial x^2} \vert^{\frac{1}{4}} }.$$ For the particular choices $$\varphi_0(z,x) = \frac{i}{2} (z-x)^2, \qquad \Phi_0(z) = \frac{1}{2} \vert \mathop{\rm Im}\nolimits z \vert^2, \qquad z,x\in {\mathbb C}^n,$$ we denote by ${\mathcal T}_0$ the associated FBI transform defined by ([\[eq:FBI definition\]](#eq:FBI definition){reference-type="ref" reference="eq:FBI definition"}), which is called the Bargmann transform.
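As a quick sanity check of the normalisation constant (a computation of our own, for $n=1$ and the Bargmann phase $\varphi_0$): applying ${\mathcal T}_0$ to the Gaussian $u(x) = e^{-x^2/(2h)}$ gives $${\mathcal T}_0 u(z) = \frac{1}{2^{\frac{1}{2}} \pi^{\frac{3}{4}} h^{\frac{3}{4}}} \int_{{\mathbb R}} e^{-\frac{1}{2h} \left( (z-x)^2 + x^2 \right)} \,\mathrm{d}x = \frac{1}{2^{\frac{1}{2}} \pi^{\frac{1}{4}} h^{\frac{1}{4}}} \, e^{-\frac{z^2}{4h}},$$ and a direct computation then gives $$\int_{{\mathbb C}} \vert {\mathcal T}_0 u(z) \vert^2 e^{-\frac{2\Phi_0(z)}{h}} \,\mathrm{d}m(z) = (\pi h)^{\frac{1}{2}} = \Vert u \Vert_{L^2({\mathbb R})}^2,$$ consistent with the standard fact that ${\mathcal T}_0$ is unitary from $L^2({\mathbb R}^n)$ onto the space of holomorphic functions which are square integrable with respect to $e^{-2\Phi_0/h}\,\mathrm{d}m$.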
We state the following corollary.
**Corollary 11** (Gevrey singular solutions). *Instead of directly performing a WKB construction on $P$, we prefer to conjugate $P$ to $hD_1$ to solve the Cauchy problem $$\left\{ \begin{array}{l}
h D_1 u = 0, \\
u\vert_{x_1 = 0} = u_0,
\end{array} \right.$$ for which obviously we have $u = u_0 \otimes 1_{x_1}$.*
Choosing a family $(u_0(x',h))$ with a Gevrey $\operatorname{WF}_h$ set reduced to one point in $T^*{\mathbb R}^{n-1}$, we obtain a family $(u(x,h))$ of solutions to $Pu=0$ with a Gevrey $\operatorname{WF}_h$ set reduced to a segment of the null bicharacteristic strip of $P$ through $(x_0,\xi_0)$.
## WKB Heuristics
Set formally $$\mathcal{T}u(x,h) = \int e^{\frac{i}{h}\varphi(x,y)} a(x,y,h) u(y) dy.$$ The condition $$hD_{\mathop{\rm Re}\nolimits x_1} \mathcal{T}u - h^m \mathcal{T} P(y,D_y) u \sim 0 \qquad \textrm{in }G^{2s-1},$$ is equivalent to solving an **eikonal** equation for the phase $\varphi$ of the form $$\varphi_{x_1}'(x,y) = p(y, - \varphi_y'(x,y))$$ and a transport equation for the symbol $a = a(x,y,h)$ of the form $$P'a = e^{-\frac{i}{h}\varphi } \left( (hD_{\mathop{\rm Re}\nolimits x_1} - h^m P^t(y,D_y))(a e^{\frac{i}{h}\varphi}) \right) = 0, \quad \textrm{in } {\mathcal O}_{s}(h^{\infty}),\textrm{close to }(x_0,y_0),$$
## Construction of the canonical transformation $\kappa$
**Proposition 12**. *Let $P = P(y, D_y)$ be a $G^s-$differential operator of degree $m$ such that near $(y_0,\eta_0) \in T^* {\mathbb R}^n \setminus \left\{ 0 \right\}$, the symbol $p(y,\eta)$ is of real principal type. Then, there exists a canonical transform $\kappa$ such that $$\kappa(y_0,\eta_0) = (x_0, \xi_0), \qquad \textrm{where} \quad \xi_0 = \frac{2}{i} \partial_x \phi(x_0).$$ Moreover, if $y = \pi \circ\kappa^{-1}$, the submanifold defined by $$\Gamma = \left\{ (x,y(x)) \in {\mathbb C}^n \times {\mathbb R}^n; \,\, x \textrm{ close to } x_0 \right\}$$ is totally real in ${\mathbb C}^n \times {\mathbb C}^n$ and of maximal dimension $2n$. [\[prop:kappa\]]{#prop:kappa label="prop:kappa"}*
*Proof.* Let $\chi$ be a real $G^s$ canonical map defined near $(y_0,\eta_0)$ mapping $p(y,\eta)$ to $\eta_1$. Such a $\chi$ can be obtained by applying the Darboux lemma.
Now let $\mathcal{T}_0$ be the Bargmann transform. It has phase function $\varphi_0(x,y) = \frac{i}{2}(x-y)^2$ and associated weight $\Phi_0(x) = \frac{1}{2} (\mathop{\rm Im}\nolimits x)^2$.
Since $\xi \vert_{\Lambda_{\Phi_0}}$ is real, we will choose $\kappa = K_{\mathcal{T}_0} \circ \chi$. The map $\kappa$ is parametrized by $x \in {\mathbb C}^n$ close to $x_0 = (0,y_0' - i \eta_0')$, setting $$(y(x),\eta(x)) = \kappa^{-1}\left( x, \frac{2}{i} \partial_x \Phi_0(x) \right).$$ The map $x \in {\mathbb C}^n \mapsto y(x) \in {\mathbb R}^n$ is of class $G^s$ and $$\partial y : T_x {\mathbb C}^n \rightarrow T_{y(x)}{\mathbb R}^n \qquad \textrm{is surjective}.$$ Moreover, $\overline{\partial} y$ is bijective since $\kappa$ is canonical.
Setting now $$\Gamma = \left\{ (x,y(x)) \in {\mathbb C}^n \times {\mathbb R}^n; \,\, x \textrm{ close to } x_0 \right\}$$ we get that $\Gamma$ is totally real in ${\mathbb C}^n \times {\mathbb C}^n$ and of maximal dimension $2n$. ◻
## Construction of the phase $\varphi$ on $\Gamma$
In this section we obtain and solve an eikonal equation for the phase $\varphi$. The condition $$hD_{\mathop{\rm Re}\nolimits x_1} \mathcal{T}u - h^m \mathcal{T} P(y,D_y) u \sim 0 \qquad \textrm{in }G^s$$ is equivalent to solving an **eikonal** equation for the phase $\varphi$. We first construct $\varphi$ and $a$ on $\Gamma$. Next, we use an extension argument providing a (formal) sequence $(a_j)_{j\geq 0}$ of Gevrey symbols. We conclude by Carleson's moment method, which allows us to construct a Gevrey symbol $a$ on ${\mathbb C}^n \times {\mathbb R}^n$.
**Proposition 13** (Eikonal equation). *There exists a Gevrey-$s$ phase $\varphi = \varphi(x,y)$ satisfying $$\varphi_{x_1}'(x,y) = p(y, - \varphi_y'(x,y)) \qquad \textrm{ on } \Gamma,$$ where $\Gamma$ is given in Proposition [\[prop:kappa\]](#prop:kappa){reference-type="ref" reference="prop:kappa"}. Moreover, $\varphi_{y}(x_0,y_0)=-\eta_0 \in {\mathbb R}^n$, $\operatorname{Im}\varphi''_{y}>0$ and $\operatorname{det}\varphi''_{x,y}\neq0$. [\[prop:phase varphi\]]{#prop:phase varphi label="prop:phase varphi"}*
*Proof.* Let us choose $\varphi$ such that $$\frac{\partial \varphi}{\partial x} = \xi, \qquad \frac{\partial \varphi}{\partial y} = -\eta.$$ Observe that, since $\kappa$ is canonical, the $1-$form $$\omega = \left( \xi(x) - \frac{\partial y^t}{\partial x} \eta(x) \right) dx - \frac{\partial y^t}{\partial \overline{x}} \eta(x) d\overline{x}$$ is closed for $x$ close to $x_0$. Now, $\varphi$ defined above satisfies $$\varphi_{x_1}' = \xi_1(x) = - \mathop{\rm Im}\nolimits x_1 = p(y(x), \eta(x)) \qquad \textrm{and} \qquad \eta(x) = - \varphi_y'(x,y(x)).$$ In order to get an FBI transform we may impose further $$\varphi(x,y) = \frac{i}{2}(x'-y')^2 - y_{1,0}\eta_{1,0} + i C(y_1 - y_{1,0})^2, \qquad \mathop{\rm Re}\nolimits C > 0.$$ ◻
## Construction of the symbol $a$ on $\Gamma$
**Proposition 14** (Transport equation). *There exists a symbol $a = a(x,y,h)$ such that $$P'a = e^{-\frac{i}{h}\varphi } \left( (hD_{\mathop{\rm Re}\nolimits x_1} - h^m P^t(y,D_y))(a e^{\frac{i}{h}\varphi}) \right) = 0, \qquad \textrm{close to }(x_0,y_0).$$ Furthermore, $a$ is of class $G^s$ and elliptic at $(x_0,y_0)$. [\[prop:symbol a\]]{#prop:symbol a label="prop:symbol a"}*
*Proof.* Note that $P'$ is an $h$-PDO of degree $0$ with symbol $$p'(x,y,\xi,\eta) = \mathop{\rm Re}\nolimits\xi_1 + \varphi_{x_1}' - P\varphi(x,y,\eta),$$ where $$P\varphi(x,y,\eta) = p(y,-\eta - \varphi_y').$$
According to Proposition [\[prop:phase varphi\]](#prop:phase varphi){reference-type="ref" reference="prop:phase varphi"} phase solves $$\varphi_{x_1}'(x,y) = p(y,-\varphi_y'(x,y)) \qquad \textrm{ on } \Gamma.$$ Then for $(x,y) \in \Gamma$ such that $\xi = \eta = 0$ one has $p'=0$. Moreover, $$a = \frac{\partial p'}{\partial \xi}\vert_{\xi = \eta = 0} = e_1, \qquad b = \frac{\partial p'}{\partial \eta}\vert_{\xi = \eta = 0} = \frac{\partial p}{\partial \eta}(y, -\varphi_y'),\qquad \textrm{on } \Gamma.$$ Consider the vector field on ${\mathbb C}^n \times {\mathbb C}^n$ defined by $$\mathcal{V}' = a \partial_x + \overline{a} \partial_{\overline{x}} + b \partial_{ y } + \overline{b} \partial_{\overline{y}}.$$ One checks that $\mathcal{V}'\vert_{\Gamma}$ is tangent to $\Gamma$ if and only if $$b \in {\mathbb R}^n \qquad \textrm{and} \qquad b = \left( \frac{\partial y}{\partial x}\right)a + \left( \frac{\partial y}{\partial \overline{x}}\right) \overline{a}.$$ Observe that if $\mathcal{V} = \alpha \partial_x + \overline{\alpha} \partial_{\overline{x}}$ is a vector field on ${\mathbb C}^n$ and $u(x) = u'(x,y(x))$, one has $\mathcal{V}'u' = \mathcal{V}u$ on $\Gamma$ if and only if $$\alpha = a, \qquad b = \left( \frac{\partial y}{\partial x}\right)a + \left( \frac{\partial y}{\partial \overline{x}}\right) \overline{a}, \quad \textrm{with} \quad \partial_{\overline{x}} u' = \partial_{\overline{y}}u' = 0 \textrm{ on }\Gamma.$$ We fulfill the tangency condition on $\Gamma$ by choosing $$b = \frac{\partial p}{\partial \eta} (y, - \varphi_y') \qquad \textrm{and} \qquad a = \frac{\partial p'}{\partial \xi} \vert_{\xi = \eta = 0}.$$ This follows by construction, as by setting $$\chi'H_p = H_{\eta_1} = \begin{pmatrix} e_1 \\ 0 \end{pmatrix}, \qquad (\chi')^{-1} = \begin{pmatrix} P & Q \\ R & S \end{pmatrix} \textrm{(real symplectic matrix)},$$ with $P$ invertible, $P^t R$ and $Q^t S$ symmetric and $P^t S - R^t Q = I$, we have $$\frac{\partial y}{\partial x} = \frac{1}{2} (P - iQ), \qquad \frac{\partial y}{\partial \overline{x}} = \frac{1}{2} (P + iQ).$$ Then, $$\left( \frac{\partial y}{\partial x}e_1 + \frac{\partial y}{\partial \overline{x}}e_1 \right) = Pe_1 = \frac{\partial p}{\partial \eta} (y, - \varphi_y'), \qquad \textrm{ on } \Gamma,$$ since by construction $$\chi^{-1}(\mathop{\rm Re}\nolimits x, - \mathop{\rm Im}\nolimits x) = (y(x), \eta(x)).$$ ◻
We will solve transport equations on $\Gamma$ and extend the solutions almost holomorphically from $\Gamma$. This yields a formal solution $(a_j')_{j\geq 0}$ with $a_j' \in {\mathcal S}_s^{-j}({\mathbb C}^n \times {\mathbb C}^n)$.
We set $$b' e^{\frac{i}{h}\varphi} = \frac{1}{h} \left( hD_{\mathop{\rm Re}\nolimits x_1} - h^m P^t(y,D_y) \right)(a e^{\frac{i}{h}\varphi} ).$$ The eikonal equation is satisfied since $\varphi_{\mathop{\rm Re}\nolimits x_1}' = p(y,-\varphi_y')$. Now set
$$e^{\frac{i}{h}\varphi}P_{\varphi}(x,y,hD_y) = h^m P^t (y, D_y)(\cdot e^{\frac{i}{h}\varphi} ).$$ The principal symbol of $P_{\varphi}$, namely $p_{\varphi}$, satisfies $$p_{\varphi}|_{\eta = 0} = p(y,-\varphi_y').$$ If $$P' = h D_{\mathop{\rm Re}\nolimits x_1} + \varphi_{\mathop{\rm Re}\nolimits x_1}' - P_{\varphi}(x,y,hD_y),$$ we have to solve a WKB problem for $P'$ with $\varphi'=0$, which leads to some expansions. If $Q(y,hD_y)$ is an $h$-PDO of order zero on ${\mathbb R}^n$, if $\varphi$ is a complex phase with $\mathop{\rm Im}\nolimits\varphi \geq 0$ and $\,\mathrm{d}\varphi \not = 0$ when $\mathop{\rm Im}\nolimits\varphi = 0$, one has $$Q(a e^{\frac{i}{h}\varphi}) \sim e^{\frac{i}{h}\varphi} \sum_{\alpha \geq 0} \frac{h^{|\alpha|}}{\alpha !} \tilde{q}^{(\alpha)}(y,\varphi_y') D^{\alpha}_z(e^{\frac{i}{h}\rho} a)|_{y=z},
\label{eq:MelinSjostrand}$$ where $\tilde{q}$ is an almost holomorphic extension of $q$ to the whole ${\mathbb C}^n \times {\mathbb C}^n$, $a$ is a symbol and $\rho$ is given by $$\rho(y,z) = \varphi(z) - \varphi(y) - (z-y) \cdot \varphi_y'(y).$$ The expansion ([\[eq:MelinSjostrand\]](#eq:MelinSjostrand){reference-type="ref" reference="eq:MelinSjostrand"}) was obtained by Melin and Sjöstrand in [@MelinSjostrand]. We deal with the Gevrey case and explain how to adapt the original proofs to compute the remainders in our case.
We deal with phases of the form $$a(y,X) = \varphi(z) + (y-z) \cdot \eta,$$ where we have written $X = (z, \eta) \in {\mathbb R}^{2n}$, $\varphi(z)$ is a phase function in ${\mathcal G}^s$ defined close to $y_0 \in {\mathbb R}^n$ and complex-valued with $\mathop{\rm Im}\nolimits\varphi \geq 0$, $\,\mathrm{d}\varphi(y_0) = - \eta_0 \in {\mathbb R}^n \setminus 0$. We shall work close to $X_0 := (y_0,-\eta_0)$ and one has then $$\mathop{\rm Im}\nolimits a(y,X) \geq 0, \qquad a_X'(y_0,X_0) = 0, \qquad \det a_{X,X}''(y_0,X_0) \not = 0.$$
We denote by $u_h(X)$ a ${\mathcal G}^s$ symbol on ${\mathbb R}^{2n}$ of degree zero, compactly supported close to $X_0 = (y_0,\varphi'_y(y_0))$. We have the following lemma.
**Lemma 15**. *Under the previous assumptions there is a stationary phase expansion of Gevrey type $2s-1$ with the form $$\iint_{{\mathbb R}^n \times {\mathbb R}^n} e^{\frac{i}{h}a(y,X) } u_h(X) \,\mathrm{d}X \sim e^{\frac{i}{h} a(y,Z_y)} \sum_{\nu = 0}^{\infty} h^{\nu + n} C_{y,\nu}(D)u_h(Z_y),
\label{eq:MelinSjostrandGevrey}$$ where $\sim$ means modulo some small ${\mathcal G}^{2s-1}$ remainders. [\[lem:MelinSjostrandGevrey\]]{#lem:MelinSjostrandGevrey label="lem:MelinSjostrandGevrey"}*
Above, $a(y,Z)$ denotes an almost holomorphic extension of $a(y,X)$ to the whole of ${\mathbb C}^{2n}$, of the same class as $a$, and similarly for $u_h$. Moreover, $y \mapsto Z_y$ is the function defined by $a_Z'(y,Z) = 0$ close to $X_0$ with $Z_{y_0} = X_0$.
*Proof.* We follow closely [@MelinSjostrand] and use Morse type coordinates. We also use the Stokes formula and deform ${\mathbb R}^{2n}$ into complex contours in ${\mathbb C}^{2n}$.
The equations $$a_Z'(y,Z) = 0, \qquad Z_{y_0} = X_0$$ define a ${\mathcal G}^s$ function since $\det a_{Z,Z}''(y_0,X_0) \not = 0$. By [@MelinSjostrand Lemma 2.1] one has $$\mathop{\rm Im}\nolimits a(y,Z_y) \geq \frac{1}{C} |\mathop{\rm Im}\nolimits Z_y|^2.$$ Set $$h(y,Z) = a(y,Z+Z_y) - a(y,Z_y),$$ which is defined close to $(y_0,0)$ and modulo a small error induces in $Z$ a quadratic form. One has $$\partial_Z h(y,0) = 0, \qquad \partial_{\overline{Z}} h(y,Z) = {\mathcal O}(1) \exp( - \frac{1}{C} |\mathop{\rm Im}\nolimits(Z + Z_y)|^{\frac{-1}{s-1}} ).$$ Set $$R(y,Z) = 2 \int_0^1 (1 - \theta) h_{Z,Z}''(y,\theta Z) \,\mathrm{d}\theta.$$ Since $u \mapsto \exp( - \frac{1}{C} u^{\frac{-1}{s-1}} )$ is increasing one has $$\partial_{\overline{Z}} h(y,Z) = {\mathcal O}(1) \exp( - \frac{1}{C} ( |\mathop{\rm Im}\nolimits Z |^{\frac{-1}{s-1}} + |\mathop{\rm Im}\nolimits Z_y|^{\frac{-1}{s-1}} ) ).$$ Writing $$R(y,Z )= i Q^t(y,Z )Q(y,Z ), \qquad Q(y_0,0 ) = A^{-1}, \quad A^t R(y_0,0 )A = iId,$$ we define a change of coordinates in ${\mathbb C}^{2n}$ by $$\overline{Z}(Z ) = Q(y,Z - Z_y )(Z - Z_y ),$$ since $Q(y_0,0 ) \in GL(2n,{\mathbb C})$. In these coordinates one has $$a(y,Z ) = a(y,Z_y ) + \frac{i}{2} \langle \overline{Z}, \overline{Z} \rangle + \rho(y,Z ),$$ with $$\langle \overline{Z}, \overline{Z} \rangle := |\overline{X}|^2 - |\overline{Y}|^2 + 2i \overline{X} \cdot \overline{Y}, \qquad \overline{Z} = \overline{X} + i \overline{Y} \in {\mathbb C}^{2n}.$$ The function $\rho$ is small and $$|\rho(y,Z )| = {\mathcal O}(1 ) \exp (-\frac{1}{C} |\mathop{\rm Im}\nolimits( Z - Z_y )|^{-\frac{1}{s-1}} ).$$ The map $Z\mapsto \overline{Z}(Z )$ has an inverse $\overline{Z} \mapsto Z(\overline{Z} )$ and these two maps depend on $y$ also. Define for $\sigma \in [0,1]$ the integration paths $$\Gamma_{y,\sigma} : \overline{X} \mapsto Z(\overline{Z}_{\sigma} ), \qquad \overline{Z}_{\sigma} = \overline{X} + i \sigma g(y,\overline{X} ), \quad \overline{X} \in {\mathbb R}^{2n},$$ where $g$ is smooth and such that ${\mathbb R}^{2n}$ is given by $\overline{Y} = g(y,\overline{X} )$ in the new coordinates. Now, using Stokes formula we deform $\Gamma_{y,1} = {\mathbb R}^{2n}$ into $\Gamma_{y,0}$. Following [@MelinSjostrand Lemma 2.4], we have $$\mathop{\rm Im}\nolimits a(y,Z(\overline{Z}_{\sigma} )) \geq \frac{1}{C}(1-\sigma ) ( |\mathop{\rm Im}\nolimits Z_y|^2 + |\overline{X}|^2 ) \geq \frac{1}{C'} |\mathop{\rm Im}\nolimits Z(\overline{Z}_{\sigma} )|^2.$$ Define the $(2n,0 )$ form in ${\mathbb C}^{2n}$ by $$\omega_h = f_h(Z ) \,\mathrm{d}Z_1 \wedge \cdots \wedge \,\mathrm{d}Z_{2n}, \qquad f_h(Z ) = e^{\frac{i}{h} a(y,Z )} u_h(Z ).$$ Since $$\partial_{\overline{Z}} f_h = e^{\frac{i}{h} a(y,Z )} \left( \partial_{\overline{Z}} u_h + \frac{i}{h}u_h \partial_{\overline{Z}} a \right),$$ we have $$\iint_{{\mathbb R}^{2n}} e^{\frac{i}{h} a(y,X ) } u_h(X ) \,\mathrm{d}X = \iint_{\Gamma_{y,1}} e^{\frac{i}{h} a(y,Z ) } u_h(Z ) \,\mathrm{d}Z_1 \wedge \dots \wedge \,\mathrm{d}Z_{2n}$$ and $$\iint_{\Gamma_{y,0}} e^{\frac{i}{h} a(y,Z ) } u_h(Z ) \,\mathrm{d}Z_1 \wedge \dots \wedge \,\mathrm{d}Z_{2n} = \iint_{V} e^{\frac{i}{h} a(y, Z_y ) } u_h( Z(\overline{X} ) ) e^{-\frac{1}{2h} |\overline{X}|^2 + \frac{i}{h} \rho } \left|\frac{\partial Z}{\partial \overline{X}} \right| \,\mathrm{d}\overline{X}, \label{eq:256}$$ where $V$ is a neighbourhood of the origin in ${\mathbb R}^{2n}$.
Using [@MelinSjostrand Lemma 2.5] we may reduce the computation of the asymptotics of the right-hand side of ([\[eq:256\]](#eq:256){reference-type="ref" reference="eq:256"}) to the case $\rho = 0$. Indeed, if $Y \mapsto (AY,Y )$ is a non-degenerate quadratic form with $\mathop{\rm Im}\nolimits A \geq 0$, and $\chi$ is a ${\mathcal G}^s$ cut-off function close to the origin in ${\mathbb R}^N$, we can write $$h^{-\frac{N}{2}} \int e^{\frac{i}{2h} (AY,Y ) } u_h(X + Y ) \,\mathrm{d}Y = \ell_{\chi}(X ) + r_{\chi}(X),$$ for $$\ell_{\chi}(X ) = h^{-\frac{N}{2}} \int e^{\frac{i}{2h} (AY,Y ) } u_h(X + Y ) \chi(Y ) \,\mathrm{d}Y, \qquad r_{\chi}(X) = h^{-\frac{N}{2}} \int e^{\frac{i}{2h} (AY,Y ) } u_h(X + Y ) (1 - \chi(Y ) ) \,\mathrm{d}Y.$$ Now, as $$| \partial_X^{\alpha} r_{\chi}(X )| \leq C^{1+|\alpha|} \alpha!^s \exp( - \frac{1}{C} h^{-\frac{1}{s}} ),$$ we deduce that $r_{\chi}$ is a small ${\mathcal G}^s$ remainder. On the other hand, $\ell_{\chi}$ is a ${\mathcal G}^s$ symbol of the same order as $u_h$ having the asymptotic expansion $$\ell_{\chi}(X ) \sim C_A \sum_{\nu = 0}^{\infty} \frac{h^{\nu}}{\nu!(2i )^{\nu}} (A^{-1}D,D )^{\nu} u_h(X ), \qquad C_A = (\frac{1}{2\pi i}\det A )^{-\frac{1}{2}}.$$ As a consequence, $$v_h(y ) := e^{-\frac{i}{h} a(y,Z_y ) } \int_{{\mathbb R}^{2n}} e^{\frac{i}{h} a(y,X ) }u_h(X ) \,\mathrm{d}X$$ is a ${\mathcal G}^s$ symbol of order $n$ admitting an expansion of the form $$v_h(y ) \sim \sum_{\nu = 0}^{\infty} h^{\nu + n} C_{\nu,y}(D )u_h(Z_y )$$ with $C_{\nu,y}(D )$ differential operators of degree less than or equal to $2\nu$. The first term above can be rewritten as $h^nC_0(y ) u_h(Z_y )$, where $C_0(y )(2\pi )^{-n}$ is a suitable branch of the square root of $\det(\frac{1}{i} a_{Z,Z}''(y,Z_y ) )^{-1}$. From [@HitrikLascarSjostrandZerzeri] we deduce that the term $$R_N(y ) := \sum_{\nu = N}^{\infty} h^{\nu + n} C_{\nu,y}(D )u_h(Z_y )$$ satisfies $$|\partial_y^{\gamma} R_N(y ) | \leq C^{1+\gamma + N} \gamma!^s N!^{2s-1} h^{N+n},$$ where the Gevrey loss is a standard observation. Moreover the last term coming from Stokes formula is also a small ${\mathcal G}^s$ remainder, at least when $a(y,\cdot )$ depends analytically on $y$, which is the case for the phase $a(y,X ) = (y-z )\eta + \varphi(z )$, $X=(z,\eta )$. Indeed, writing this term as $$R(y ) = \iint_{\bigcup_{\sigma \in [0,1]} \Gamma_{y,\sigma} } e^{\frac{i}{h} a(y,Z ) } \left( \partial_{\overline{Z}} u_h + \frac{iu_h}{h} \partial_{\overline{Z}}a \right) \wedge \,\mathrm{d}Z_1 \wedge \dots \wedge \,\mathrm{d}Z_{2n}$$ and computing $\partial_y^{\lambda} R(y )$ one has $$\partial_y^{\lambda} R(y ) = \sum \frac{\lambda!}{\mu! \nu!} \iint_{\bigcup_{\sigma \in [0,1]} \Gamma_{y,\sigma} } e^{\frac{i}{h} a(y,Z ) } \frac{\nu!}{\ell! \nu_1! \dots \nu_{\ell}!} \partial_y^{\nu_1} a \dots \partial_y^{\nu_\ell} a h^{-\ell} \partial_y^{\mu} g_h \wedge \,\mathrm{d}Z_1 \wedge \dots \wedge \,\mathrm{d}Z_{2n} \label{eq:262}$$ where the sum runs over the set $$\nu + \mu = \lambda, \qquad
\nu_1 + \cdots + \nu_{\ell} = \nu, \qquad
\ell = 1, \cdots, \nu$$ and $g_h = \partial_{\overline{Z}} u_h + \frac{iu_h}{h} \partial_{\overline{Z}}a$. For $Z \in \bigcup_{\sigma \in [0,1]} \Gamma_{y,\sigma}$ one has $$\mathop{\rm Im}\nolimits a(y,Z ) \geq \frac{1}{C} |\mathop{\rm Im}\nolimits Z|^2.$$ Moreover, $\partial_{\overline{Z}}u_h$ and $\partial_{\overline{Z}}a$ are bounded by $C \exp \left( -\frac{1}{C} |\mathop{\rm Im}\nolimits Z|^{-\frac{1}{s-1}}\right)$ as well as their derivatives by construction. We deduce estimates for the RHS of ([\[eq:262\]](#eq:262){reference-type="ref" reference="eq:262"}), since $$\frac{|\partial_y^{\nu_1} a |}{\nu_1!}\dots \frac{|\partial_y^{\nu_{\ell}} a |}{\nu_{\ell}!} \leq C^{|\nu| + 1}, \qquad h|\partial_y^{\mu} g_h| \leq C^{1 + |\mu|} \mu!^s \exp(-\frac{1}{C} |\mathop{\rm Im}\nolimits Z|^{-\frac{1}{s-1}} )$$ and $$\left| e^{\frac{i}{h} a(y,Z) } \right| \leq e^{-\frac{1}{Ch}|\mathop{\rm Im}\nolimits Z|^2 } \sum_{\mu + \nu = \lambda} \frac{\lambda! \nu!}{\mu! \nu! \ell!} h^{-\ell}M! N! h^N |\mathop{\rm Im}\nolimits Z|^{-2N} |\mathop{\rm Im}\nolimits Z|^{\frac{M}{s-1}} \mu!^s,$$ for all $N,M>0$. Choosing $N = \ell$ and $M=2 \ell(s-1)$ one has the bound $$C^{1 + |\lambda|} \mu!^s \nu! \ell!^{2s-2} \leq C^{1+|\lambda|} \mu!^s \nu!^{2s-1} \leq C^{1 + |\lambda|} \lambda!^{2s-1}.$$ Hence, $\partial_y^{\lambda}R(y)$ is bounded by $$C^{1+|\lambda|} \lambda!^{2s-1} \exp( -\frac{1}{C} h^{\frac{-1}{2s-1}} ),$$ so $R$ is a small ${\mathcal G}^{2s-1}$ remainder. ◻
We next use Lemma [\[lem:MelinSjostrandGevrey\]](#lem:MelinSjostrandGevrey){reference-type="ref" reference="lem:MelinSjostrandGevrey"} to recover the asymptotic expansion ([\[eq:MelinSjostrand\]](#eq:MelinSjostrand){reference-type="ref" reference="eq:MelinSjostrand"}). One can also check that, on $\Gamma$, $P'a$ has the form $$P'a = \frac{h}{i} \partial_{\mathop{\rm Re}\nolimits x_1} a + \frac{h}{i} \frac{\partial p}{\partial \eta} (y,-\varphi_y') \partial_y a + ca + R'a,$$ with $c$ a symbol of order $-1$ and $R'$ a linear map ${\mathcal S}_s^{m'} \rightarrow {\mathcal S}_s^{m'-2}$.
We solve inductively the equations on $\Gamma$: $$\frac{h}{i} v'a_0 + ca_0 = hb, \qquad \frac{h}{i} v'a_j + ca_j + R'a_{j-1} = 0, \quad j \geq 1,$$ for $$v' = \partial_{\mathop{\rm Re}\nolimits x_1} + \frac{\partial p}{\partial \eta}(y,-\varphi_y')\partial_y + \overline{ \frac{\partial p}{\partial \eta} }(y, -\varphi_y') \partial_{\overline{y}}.$$ We notice that $v'|_{\Gamma}$ is tangent to $\Gamma$ as a subset of ${\mathbb C}^{2n}$.
In order to conclude the proof, we need to check that the remainder $$b'(x,y) = \frac{1}{h}e^{-\frac{i}{h} \varphi } \left( h D_{\mathop{\rm Re}\nolimits x_1} - h^m P^t(y,D_y) \right)(a' e^{\frac{i}{h} \varphi} ) - b,$$ where $a' \sim \sum_{j\geq 0} a_j'$, $a_j' \in {\mathcal S}_s^{-j}({\mathbb C}^n \times {\mathbb C}^n)$ and $a_j'|_{\Gamma} = a_j$ with $a_j$ solving the transport equations above, is small, i.e. $\left|b'(x,y)\right|\leq c\left\{ \exp(\frac{-1}{c}\left|y-y(x)\right|^{\frac{-1}{s-1}})+\exp(\frac{-1}{c}h^{\frac{-1}{s}})\right\}.$
J.Bedrossian, N.Masmoudi and C. Mouhot. Landau Damping: Paraproducts and Gevrey Regularity, *Annals of PDE*, volume 2, 4 (2016).
L. Boutet de Monvel and P. Krée. Pseudo-differential operators and Gevrey classes. *Ann. Inst. Fourier*, vol. 17, 295-323 (1967).
L. Carleson. On universal moment problems. *Math. Scandinavica*, 9(1b): 197-206, 1961.
J.J. Duistermaat and Lars Hörmander, Fourier integral operators. II, *Acta Math.*, Volume 128 (1972) no. 3-4, pp. 183-269.
Yu. V. Egorov. On canonical transformations of pseudo-differential operators, *Uspechi Mat. Nauk*, 25, 235-236, (1969).
Yu. V. Egorov. *Linear Differential Equations of Principal Type*. Moscow: Nauka. English transl.: New York: Contemp. Sov. Math., 1986, Zbl 574.35001.

Yu. V. Egorov. *Partial Differential Equations IV: Microlocal Analysis and Hyperbolic Equations*, Vol. 33 in Encyclopedia of Mathematical Sciences, Springer (1993).
G. Eskin. *Lectures on linear partial differential equations*. Volume 123 in Graduate Studies in Mathematics Series, American Mathematical Society, 2011.
D. Gérard-Varet, Y. Maekawa and N. Masmoudi. Gevrey stability of Prandtl expansions for $2$-dimensional Navier--Stokes flows. *Duke Math. J.* 167 (13) 2531 - 2631, 15 September 2018.
T. Gramchev, Classical pseudodifferential operators and Egorov's theorem in the Gevrey classes $G^s, s > 1$, *Russ. Math. Surv.* 40 125 (1985).
M. Hitrik, R. Lascar, J. Sjöstrand and M. Zerzeri. Semiclassical Gevrey operators in the complex domain. *Ann. Inst. Fourier* (2023); arXiv:2009.09125 (2020).
M. Hitrik and J. Sjöstrand. Two minicourses on analytic microlocal analysis. In *Algebraic and Analytic Microlocal Analysis*, pages 483-540. Springer, 2013.
Lars Hörmander. *The analysis of linear partial differential operators*, Vols I-IV, Springer, 1983.
Bernard Lascar. Propagation des singularités Gevrey pour des opérateurs hyperboliques. *Amer. J. of Math.*, 110 pages 413-449 (1988).
B. Lascar and R. Lascar. FBI transforms in Gevrey classes. *Journal d'Analyse Mathématique*, 72(1):105-125, (1997).
B. Lascar, R. Lascar and R. Melrose. Propagation des singularités Gevrey pour la diffraction. *Communications in partial differential equations*, 16(4-5):547-584, (1991).
G. Lebeau. Régularité Gevrey 3 pour la diffraction. *Communications in Partial Differential Equations*, 9(15):1437-1494, (1984).
N. Lerner. *Metrics on the phase space and non-selfadjoint pseudo-differential operators*, Pseudo-Differential Operators. Theory and Applications, Volume 3, Birkhäuser, 2010, xii+397 pages.
Nicolas Lerner, Sur deux contributions de Y. V. Egorov (1938--2018). *Annales de la Faculté des sciences de Toulouse : Mathématiques,* Serie 6, Volume 28 (2019) no. 1, pp. 1-9.
A. Melin and J. Sjöstrand, Fourier Integral Operators with complex phase functions. *Lecture Notes in Mathematics,* vol. 459, pp. 120-224 (1975). Springer.
C. Mouhot and C. Villani. On Landau damping. *Acta Mathematica*, volume 207, pages 29--201 (2011).
J. Sjöstrand. *Singularités analytiques microlocales.* Astérisque, 95 (1982).
M. Zworski. *Semiclassical analysis*, volume 138. American Math. Soc., 2012.
| arxiv_math | {
"id": "2310.00979",
"title": "Gevrey WKB method for PDO's of real principal type",
"authors": "Richard Lascar and Ivan Moyano",
"categories": "math.AP",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
# Product Domains {#SecProductDomain}
## $p$-Skwarczyński distance on the product domain
Suppose that $\O_1 \subset \C^{n_1}$ and $\O_2\subset \C^{n_2}$ are two bounded domains. Let $\O=\O_1 \times \O_2$, $z=(z_1,z_2)$ and $w=(w_1,w_2)$. Then $$\r_{p,\O} (z,w)\leq \r_{p,\O_1}(z_1,w_1)+\r_{p,\O_2}(z_2,w_2).$$
By definition and the product rule from [@chen2022p Proposition 2.8], $$\hspace{-9mm}\r_{p,\O}(z,w)=\min_{\theta\in [0,2\pi]}\norm{e^{i\theta}\frac{m_{p,\O}(\cdot,z)}{m_{p,\O}(z)}-\frac{m_{p,\O}(\cdot,w)}{m_{p,\O}(w)}}_{p,\O}$$ $$\hspace{55mm}=\min_{\theta\in [0,2\pi]}\norm{e^{i\theta}\frac{m_{p,\O_1}(\cdot,z_1)m_{p,\O_2}(\cdot,z_2)}{m_{p,\O_1}(z_1)m_{p,\O_2}(z_2)}-\frac{m_{p,\O_1}(\cdot,w_1)m_{p,\O_2}(\cdot,w_2)}{m_{p,\O_1}(w_1)m_{p,\O_2}(w_2)}}_{p,\O}.$$
For $i=1,2$, set $$f_i(\z_i)=e^{i\theta_i}\frac{m_{p,\O_i}(\z_i,z_i)}{m_{p,\O_i}(z_i)}, \qquad g_i(\z_i)=\frac{m_{p,\O_i}(\z_i,w_i)}{m_{p,\O_i}(w_i)} \in \A^p(\O_i),$$ where $\theta_i$ is chosen such that $\r_{p,\O_i}(z_i,w_i)=\norm{f_i-g_i}_{p,\O_i}$.
Then $$\r_{p,\O}(z,w)\leq \norm{f_1f_2-g_1g_2}_{p,\O}\leq \norm{f_1f_2-g_1f_2}_{p,\O}+\norm{g_1f_2-g_1g_2}_{p,\O}.$$ Consider $$\norm{f_1f_2-g_1f_2}_{p,\O}=\left(\int_{\O_1\times\O_2} \abs{f_1(\z_1)-g_1(\z_1)}^p \abs{f_2(\z_2)}^p d\z_1d\z_2\right)^{1/p}$$ $$\hspace{60mm}=\left(\int_{\O_1} \abs{f_1(\z_1)-g_1(\z_1)}^p d\z_1\right)^{1/p} \left(\int_{\O_2} \abs{f_2(\z_2)}^p d\z_2\right)^{1/p}=\r_{p,\O_1}(z_1,w_1)$$ Similarly $\norm{g_1f_2-g_1g_2}_{p,\O}=\r_{p,\O_2}(z_2,w_2)$, so $$\r_{p,\O} (z,w)\leq \r_{p,\O_1}(z_1,w_1)+\r_{p,\O_2}(z_2,w_2)$$
## $p$-Bergman metric on the product domain
In [@chen2022p], the $p$-Bergman metric was defined as follows for a vector field $X$. $$B_p(z_0;X):= K_p(z_0)^{-\frac{1}{p}} \sup\{\,\abs{Xf(z_0)}: f\in \A^p, \; f(z_0)=0, \; \norm{f}_p=1\,\}.$$
The problem was posed to find the $p$-Bergman metric on the product of two domains. Here, we provide a partial solution to this problem.
Suppose that $\O_1 \subset \C^{k_1}$ and $\O_2\subset \C^{k_2}$ are bounded domains. Let $\O=\O_1 \times \O_2$, $z=(z_1,z_2)$, $X=(X_1,X_2)$. Then $$B_{p,\O} (z;X)\geq \max_{i=1,2} B_{p,\O_i}(z_i;X_i).$$
Suppose $h\in \A^p(\O_1)$, $\norm{h}_{p,\O_1}=1$, $f\in \A^p(\O_2)$, $f(z_2)=0$, and $\norm{f}_{p,\O_2}=1$. If $g(\z_1,\z_2):=h(\z_1)\cdot f(\z_2)$, then $g\in \A^p(\O)$ with $g(z)=0$, and $\norm{g}_{p,\O}=\norm{h}_{p,\O_1}\cdot \norm{f}_{p,\O_2}=1$. Now, using $f(z_2)=0$ and the product rule from [@chen2022p Proposition 2.8], $$B_{p,\O}(z;X)\geq K_{p,\O}^{-\frac{1}{p}}(z) \abs{X(g)(z)}=K_{p,\O_1}(z_1)^{-\frac{1}{p}}\abs{h(z_1)} \cdot K_{p,\O_2}(z_2)^{-\frac{1}{p}}\abs{ X_2(f)(z_2)}.$$ Taking the supremum over all functions $f$ and $h$ as above, we get $$\sup_{h}K_{p,\O_1}(z_1)^{-\frac{1}{p}}\abs{h(z_1)}=1 \text{ and } \sup_{f} K_{p,\O_2}(z_2)^{-\frac{1}{p}} \abs{X_2(f)(z_2)}= B_{p,\O_2}(z_2;X_2).$$ By symmetry, $$B_{p,\O} (z;X)\geq \max_{i=1,2} B_{p,\O_i}(z_i;X_i). \qedhere$$
# Application {#SecApplications}
From [@chen2022p], we have $\abs{m_p(z,w)}\leq m_p(w)/m_p(z)$, and equality holds if and only if $z=w$. This follows from the reproducing formula and Hölder's inequality. Indeed, $$\abs{m_p(z,w)}=\abs{m_p(z)^{-p}\int_\O\abs{m_p(\z,z)}^{p-2}\overline{m_p(\z,z)}m_p(\z,w) d\z }\leq m_p(w)/m_p(z).$$
We can find a stronger bound using the $p$-Skwarczyński distance.
If $p> 2$, then $$\label{inequality}
\abs{m_p(z,w)}\leq \frac{m_p(w)}{m_p(z)} \left[ 1- \frac{\r_p(z,w)^p}{p\cdot 4^{p+3}} \right].$$ Equation [\[inequality\]](#inequality){reference-type="eqref" reference="inequality"} shows that $\abs{m_p(z,w)}=m_p(w)/m_p(z)$ if and only if $\r_p(z,w)=0$ (equivalently $z=w$).
From [@chen2022p Proposition 4.3 (3)], we have $$\abs{b}^p \geq \abs{a}^p +p\Re(\abs{a}^{p-2}\bar{a}(b-a))+\frac{1}{4^{p+3}} \abs{b-a}^p \text{ when }p>2.$$ Set $a=m_p(\z,z)/m_p(z)$, and $b=e^{i\theta }m_p(\z,w)/m_p(w)$, where $\theta$ will be specified below. Integrating the above inequality shows that $$\begin{gathered}
1\geq 1 + p \Re\left\{ \int_\O m_p(z)^{-p+1}\abs{m_p(\z,z)}^{p-2} \overline{m_p(\z,z)} \left[\frac{e^{i\theta }m_p(\z,w)}{m_p(w)}- \frac{m_p(\z,z)}{m_p(z)} \right] d\z \right\} \\ + \frac{1}{4^{p+3}}\int_\O \abs{ \frac{e^{i\theta }m_p(\z,w)}{m_p(w)}- \frac{m_p(\z,z)}{m_p(z)} }^p d\z.\end{gathered}$$ Applying the reproducing property shows that $$p m_p(z)^{} \Re\left\{ \frac{e^{i\theta }m_p(z,w)}{m_p(w)}- \frac{m_p(z,z)}{m_p(z)}\right\} +\frac{\r_p(z,w)^p}{4^{p+3}}\leq 0.$$ Now choose $\theta$ such that $e^{i\theta} m_p(z,w)=\abs{m_p(z,w)}$. Then, $$\frac{\abs{m_p(z,w)} m_p(z)}{m_p(w)} -1 \leq- \frac{\r_p(z,w)^p}{p\cdot 4^{p+3}}$$ which is equivalent to [\[inequality\]](#inequality){reference-type="eqref" reference="inequality"}.
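The elementary scalar inequality imported from [@chen2022p Proposition 4.3 (3)] at the start of the proof can also be checked numerically. The following sketch is our own illustration (the sampling ranges and the function name `pointwise_bound_holds` are arbitrary choices, not part of the paper):

```python
import random

def pointwise_bound_holds(a, b, p, tol=1e-12):
    """Check |b|^p >= |a|^p + p*Re(|a|^(p-2) * conj(a) * (b-a)) + |b-a|^p / 4^(p+3)."""
    lhs = abs(b) ** p
    if a == 0:
        middle = 0.0  # the middle term vanishes when a = 0 (and p > 2)
    else:
        middle = p * (abs(a) ** (p - 2) * (a.conjugate() * (b - a))).real
    rhs = abs(a) ** p + middle + abs(b - a) ** p / 4 ** (p + 3)
    return lhs >= rhs - tol

random.seed(0)
for _ in range(10_000):
    a = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    b = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    p = random.uniform(2.01, 6.0)
    assert pointwise_bound_holds(a, b, p), (a, b, p)
print("no counterexample found among the random samples")
```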
| arxiv_math | {
"id": "2309.00084",
"title": "$p$-Skwarczy\\'nski distance",
"authors": "Shreedhar Bhat",
"categories": "math.CV",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
Suppose that $A$ and $B$ are sets in $\mathbb{R}^d$, and we form the sumset of $A$ with $n$ random points of $B$. Given the volumes of $A$ and $B$, how should we choose them to minimize the expected volume of this sumset? Our aim in this paper is to show that we should take $A$ and $B$ to be Euclidean balls. We also consider the analogous question in the torus $\mathbb{T}^d$, and we show that in this case the optimal choices of $A$ and $B$ are bands, in other words, sets of the form $[x,y]\times\mathbb{T}^{d-1}$. We also give stability versions of our results.
address:
- Mathematical Institute, University of Oxford, Oxford OX26GG, UK.
- Department of Pure Mathematics and Mathematical Statistics, Wilberforce Road, Cambridge, CB30WA, UK, and Department of Mathematical Sciences, University of Memphis, Memphis, TN, USA.
- Department of Pure Mathematics and Mathematical Statistics, Wilberforce Road, Cambridge, CB30WA, UK.
- Mathematical Institute, University of Oxford, Oxford OX26GG, UK.
author:
- Paul Balister
- Béla Bollobás
- Imre Leader
- Marius Tiba
title: Random Translates in Minkowski Sums
---
# Introduction
Let $A$ and $B$ be sets of finite positive measure in $\mathbb{R}^d$. The fundamental Brunn--Minkowski inequality (see for example [@Bar; @Gard]) asserts that the volume $|A+B|$ of the sumset $A+B$ is at least $(|A|^{1/d}+|B|^{1/d})^d$. Note that we may obtain equality for any convex body $A$ by setting $B$ to be a homothetic copy of $A$. But suppose we try to 'approximate' $B$ by a finite set -- in the sense that, for a fixed value of $n$, we form the sumset $A+S$, where $S$ consists of $n$ points chosen independently and uniformly at random from $B$. Given their volumes, how should we choose $A$ and $B$ so as to minimize the expected size of $A+S$?
Note that now there is no reason at all to believe that this minimum may be attained using any given (convex) choice of $A$ by a suitably related choice of $B$, unlike for the Brunn--Minkowski inequality itself. Indeed, it is natural to guess, based for example upon isoperimetric notions, that Euclidean balls might actually be best. And this is what we prove.
**Theorem 1**. *Let $d,n\in\mathbb{N}$ and let $A$ and $B$ be sets of finite positive measure in $\mathbb{R}^d$. Let $A'$ and $B'$ be balls in $\mathbb{R}^d$ with the same volumes as $A$ and $B$ respectively. Then for $b_1,\dots,b_n$ and $b'_1,\dots,b'_n$ chosen independently and uniformly at random from $B$ and $B'$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'+b'_i)|).$$*
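The following small Monte Carlo sketch is our own illustration of the statement (not part of the proof); it estimates $\mathbb{E}(|\bigcup_i(A+b_i)|)$ in the plane for $A$ and $B$ both discs versus both axis-parallel squares of the same areas. All parameter values (areas, $n$, sample sizes) are arbitrary choices; by the theorem, the estimate for the discs should come out no larger than that for the squares, up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_disc(radius, size):
    """Uniform points in the disc of given radius centred at the origin (rejection sampling)."""
    pts = []
    while len(pts) < size:
        cand = rng.uniform(-radius, radius, size=(size, 2))
        pts.extend(cand[np.linalg.norm(cand, axis=1) <= radius])
    return np.array(pts[:size])

def expected_union_area(shape, area_A, area_B, n, trials=100, samples=100_000):
    """Monte Carlo estimate of E|union_i (A + b_i)| with A, B both 'disc' or both 'square'."""
    if shape == "disc":
        rA, rB = np.sqrt(area_A / np.pi), np.sqrt(area_B / np.pi)
        half_width = rA + rB          # the union is contained in A + B, a disc of radius rA + rB
    else:
        sA, sB = np.sqrt(area_A), np.sqrt(area_B)
        half_width = (sA + sB) / 2    # here A + B is a square of side sA + sB
    box_area = (2 * half_width) ** 2
    est = 0.0
    for _ in range(trials):
        b = sample_in_disc(rB, n) if shape == "disc" else rng.uniform(-sB / 2, sB / 2, (n, 2))
        x = rng.uniform(-half_width, half_width, size=(samples, 2))
        diff = x[:, None, :] - b[None, :, :]
        if shape == "disc":
            hit = (np.linalg.norm(diff, axis=2) <= rA).any(axis=1)
        else:
            hit = (np.abs(diff) <= sA / 2).all(axis=2).any(axis=1)
        est += box_area * hit.mean()  # fraction of the bounding box covered by the union
    return est / trials

n = 5
print("squares:", expected_union_area("square", 1.0, 1.0, n))
print("discs:  ", expected_union_area("disc", 1.0, 1.0, n))
```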
From this result, we may obtain estimates on how many random points of $B$ we need to take to ensure that the expected volume is close to the true lower bound on $|A+B|$ given by the Brunn--Minkowski inequality. Indeed, Theorem 1 has the following consequence.
**Corollary 2**. *Let $d,n\in\mathbb{N}$ and let $A$ and $B$ be sets of finite positive measure in $\mathbb{R}^d$. Then for $b_1,\dots,b_n$ chosen independently and uniformly at random from $B$ we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)|) \ge (|A|^{1/d}+|B|^{1/d})^d \cdot\big(1-(c+o(1))n^{-2/(d+1)}\big)
\qquad\text{as }n\to\infty,$$ where $c$ is a constant that depends only on the dimension $d$ and the ratio of the volumes of $A$ and $B$.*
It is also very natural to ask similar questions in the torus $\mathbb{T}^d$, where $\mathbb{T}$ denotes the circle $\mathbb{R}/\mathbb{Z}$. This time the analogue of the Brunn--Minkowski inequality would state that bands, meaning sets of the form $[x,y]\times\mathbb{T}^{d-1}$, are best. More precisely, if $A$ and $B$ are (measurable) sets in $\mathbb{T}^d$ then $|A+B| \ge |A|+|B|$ (if the sum of the two volumes is at most $|\mathbb{T}^d|=1$, of course -- so strictly one should write $\min\{|A|+|B|,1\}$). This is a result of Macbeath [@macbeath] -- it can also be deduced immediately from Kneser's theorem [@Knes] in the group $\mathbb{Z}_p^d$. We show that indeed bands are always best to minimize the expected volume of $A+S$, where $S$ consists of $n$ points chosen uniformly at random from $B$.
**Theorem 3**. *Let $d,n\in\mathbb{N}$ and let $A$ and $B$ be sets of positive measure in $\mathbb{T}^d$. Let $A'$ and $B'$ be bands in $\mathbb{T}^d$ [(]{.upright}oriented in the same direction[)]{.upright} with the same volumes as $A$ and $B$ respectively, i.e., sets of the form $I\times\mathbb{T}^{d-1}$, where $I$ is a closed interval in $\mathbb{T}$. Then for $b_1,\dots,b_n$ and $b'_1,\dots,b'_n$ chosen independently and uniformly at random from $B$ and $B'$ respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'+b'_i)|).$$*
To keep the notation simple we use the same notation $|\cdot|$ for Lebesgue measure in the torus as in Euclidean space --- there is no place where this is ambiguous.
Again, Theorem 3 allows us to estimate how many points we need to take to ensure that the sum of $A$ and $n$ points of $B$ is, on average, of volume at least $\min\{|A|+|B|,1\}-\varepsilon$.
**Corollary 4**. *Let $n\ge 1$ and let $A$ and $B$ be sets of positive measure in $\mathbb{T}^d$. Then for $b_1,\dots,b_n$ chosen independently and uniformly at random from $B$ we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)|) \ge |A|+\tfrac{n-1}{n+1} |B|
-\tfrac{n-1}{n+1} |B|\big(1-\tfrac{|A|}{|B|}\big)^{n+1}\mathbbm{1}_{|A|<|B|},$$ when $|A|+|B|\le 1$, and $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)|) \ge
1-\big(\tfrac{1-|A|}{|B|}\big)^n\big(|B|-\tfrac{n-1}{n+1}(1-|A|)\big)
-\tfrac{n-1}{n+1} |B|\big(1-\tfrac{|A|}{|B|}\big)^{n+1}\mathbbm{1}_{|A|<|B|},$$ when $|A|+|B|>1$, where $\mathbbm{1}_{|A|<|B|}$ is $1$ if $|A|<|B|$ and $0$ otherwise.*
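For concreteness, the bound is easy to evaluate numerically; the helper below (our own transcription of the two displayed cases, with arbitrary example values) may be useful when comparing with simulations.

```python
def torus_lower_bound(a, b, n):
    """Lower bound from Corollary 4 on E|union_i (A + b_i)| in T^d, where a = |A| and b = |B|."""
    correction = (n - 1) / (n + 1) * b * (1 - a / b) ** (n + 1) if a < b else 0.0
    if a + b <= 1:
        return a + (n - 1) / (n + 1) * b - correction
    return 1 - ((1 - a) / b) ** n * (b - (n - 1) / (n + 1) * (1 - a)) - correction

# e.g. |A| = 0.3, |B| = 0.5: the bound increases towards |A| + |B| = 0.8 as n grows
for n in (1, 2, 5, 20, 100):
    print(n, round(torus_lower_bound(0.3, 0.5, n), 4))
```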
Our approach to our main result in Euclidean space, Theorem 1, proceeds by first proving a discrete version of the result in one dimension, so in the group $\mathbb{Z}_p$. Thus suppose we have sets $A$ and $B$ in $\mathbb{Z}_p$ of given size, and we want to minimize the expected size of the sum of $A$ with $n$ random points of $B$. Then it is best to take $A$ and $B$ to be intervals. Although this might seem very natural, it is actually quite non-trivial to prove. Indeed, this result, or rather a generalization where we choose the $n$ points not all from the same set but from different sets, is at the heart of our approach, and is where most of the difficulty lies. Once we have proved this result, the general result will follow by some reasonably standard arguments of discrete approximation and Steiner symmetrization.
**Theorem 5**. *Let $n,p\in\mathbb{N}$ with $p$ prime and let $A_1,\dots,A_n$ and $B_1,\dots,B_n$ be finite non-empty subsets of $\mathbb{Z}_p$. Let $A'_1,\dots,A'_n$ and $B'_1,\dots,B'_n$ be intervals of the same sizes as the sets $A_1,\dots,A_n$ and $B_1,\dots B_n$ respectively, with each $A'_i+B'_i$ centred[^1] at $0$ or $1/2$ [(]{.upright}depending on parity[)]{.upright}. Then for $b_i$, $b'_i$ chosen independently and uniformly at random from $B_i$, $B'_i$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|).$$*
It might seem strange to be using the cyclic group, instead of the group $\mathbb{Z}$ of integers, to approximate the continuous space $\mathbb{R}$, but in fact this is vital to our approach. Roughly speaking, this is because in $\mathbb{Z}_p$ the complement of a finite set is still finite --- this will allow us to transform unions into intersections. As we shall see, the size of the *intersection* of some translates of a set $A$ is much easier to deal with than the size of a *union* of translates.
In the torus, our starting point is again the above discrete result. We pass to a continuous version, and then we try to symmetrize on the torus. However, the torus is not isotropic, so one does not have the power of 'symmetrizing in all possible directions' that one has in $\mathbb{R}^d$. Nevertheless, it turns out that there are just enough symmetrizations that we are allowed to perform that we can solve the problem in two dimensions: essentially, we can symmetrize in any 'direction' given by a member of $SL_2(\mathbb{Z})$. This is where most of the work comes: armed with the two-dimensional version of the theorem, it is not hard to pass to the general case.
We also consider stability versions of these results. In other words, if our sets $A$ and $B$ give a value for the expectation that is 'close' to the lower bound, how close must $A$ and $B$ be to the extremal sets for the inequality? We are able to prove stability results for each of our two main results that are sharp, in the sense that the dependences among the notions of 'close' are best possible. These are stated and proved in Sections 7 and 8 below. Interestingly, the proofs of these stability statements do not rely upon the actual inequalities themselves, so that they may be viewed as also providing alternative proofs of our main results (Theorems 1 and 3). However, these stability proofs do rely on several deeper recent tools.
The plan of the paper is as follows. In Section [2](#tool){reference-type="ref" reference="tool"} we prove our main tool, Theorem 5. In Section [3](#continuous){reference-type="ref" reference="continuous"} we translate this result into the continuous setting of the 1-dimensional torus. In Section [4](#torus){reference-type="ref" reference="torus"} we prove Theorem 3, our result on the torus, and in the next section we finish the proof of our result in $\mathbb{R}^d$, Theorem 1. Then Section [6](#calculations){reference-type="ref" reference="calculations"} contains the calculations needed for the actual estimates that follow from our results: these are routine, although somewhat fiddly. In Section [7](#stability){reference-type="ref" reference="stability"} we prove our sharp stability result for one of our two main theorems, and then in Section [8](#stability_sphere){reference-type="ref" reference="stability_sphere"} we prove the corresponding stability result for the other. We conclude with some open problems.
Our notation is standard. However, we would like to highlight one of the conventions that we use. When we have a measurable set $A$, say in $\mathbb{R}^d$, we often write as though all of its various sections in a given direction were measurable sets in $\mathbb{R}^{d-1}$. Of course, this need not be the case, but as it only fails for a set of sections that has measure zero, this will never affect any of our arguments.
Finally, we mention one more useful convention. Often we have intervals, in $\mathbb{Z}_p$ or $\mathbb{T}$, centred at various points. We always allow the whole space $\mathbb{Z}_p$ or $\mathbb{T}$ as an interval, and we consider it to be 'centred' at any point we choose.
# Proof of Theorem 5 {#tool}
*Proof of Theorem 5.* The case $p = 2$ is easy to check, so we may assume that $p$ is an odd prime. We need to show that for $b_i$, $b'_i$ chosen independently and uniformly at random from $B_i$, $B'_i$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|),$$ which after taking complements inside $\mathbb{Z}_p$ is equivalent to $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A_i^c+b_i)|) \le \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A^{\prime c}_i+b'_i)|).$$ After scaling and using $|B_i|=|B'_i|$, we can rewrite the last inequality as $$\sum_{b_1\in B_1,\dots,b_n\in B_n} |\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A_i^c+b_i)|
\le \sum_{b'_1\in B'_1,\dots,b'_n\in B'_n} |\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A^{\prime c}_i+b'_i)|,$$ which is equivalent to $$\sum_{x\in\mathbb{Z}_p}\prod_i |\{(d_i,b_i)\in A_i^c\times B_i : d_i+b_i=x\}|
\le \sum_{x\in\mathbb{Z}_p}\prod_i
|\{(d^{\prime}_i,b'_i)\in A^{\prime c}_i\times B'_i : d^{\prime}_i+b'_i=x\}|,$$ or in other words $$\sum_{x\in\mathbb{Z}_p}\prod_i\mathbbm{1}_{A_i^c}*\mathbbm{1}_{B_i}(x)
\le \sum_{x\in\mathbb{Z}_p}\prod_i\mathbbm{1}_{A^{\prime c}_i}*\mathbbm{1}_{B'_i}(x),$$ where $\mathbbm{1}_X$ represents the characteristic function of $X\subseteq\mathbb{Z}_p$ and $*$ denotes convolution. To simplify the notation, set $$D_i=A_i^c\qquad\text{and}\qquad
D'_i=A^{\prime c}_i + \tfrac{p-1}{2},$$ so as to centre $D'_i+B'_i$ at $0$ or $-1/2$ (depending on parity). Note that $D'_i$ is an interval with the same size as $D_i$. With this notation, the objective is to show that $$\label{eq_equiv}
\sum_{x\in\mathbb{Z}_p}\prod_i\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x)
\le \sum_{x\in\mathbb{Z}_p}\prod_i\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x).$$
In order to show the above inequality we shall exploit just the following properties of these functions.
**Claim 6**. *For any $1\le i\le n$ we have that*
1. *for $j\ge0$, $\sum_{x\in\mathbb{Z}_p} \max(0,\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x)-j)
\le \sum_{x\in\mathbb{Z}_p} \max(0,\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x)-j)$, and*
2. *with $(x_0,x_1,\dots,x_{p-1})=(0,-1,1,-2,2,\dots,-\frac{p-1}{2},\frac{p-1}{2})$, the sequence $\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x_t)$ is decreasing.*
*Proof of .* For the first part, recall Theorem 2 in [@pollard]. This states that for fixed $1\le i\le n$ and $j\ge 0$ we have $$\label{e:T2p}
\sum_{x\in\mathbb{Z}_p} \min(\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x),j)
\ge \sum_{x\in\mathbb{Z}_p} \min(\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x),j).$$ As $|D_i|=|D'_i|$ and $|B_i|=|B'_i|$ we note that $$\sum_{x \in \mathbb{Z}_p} \mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x) = \sum_{x \in \mathbb{Z}_p} \mathbbm{1}_{D_i'}*\mathbbm{1}_{B_i'}(x).$$ Thus [\[e:T2p\]](#e:T2p){reference-type="eqref" reference="e:T2p"} is equivalent to $$\sum_{x\in\mathbb{Z}_p} \big(\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x)-\min(\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x),j))\big)
\le \sum_{x\in\mathbb{Z}_p} \big(\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x)-\min(\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x),j)\big),$$ i.e., $$\sum_{x\in\mathbb{Z}_p} \max(0,\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x)-j)
\le \sum_{x\in\mathbb{Z}_p} \max(0,\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x)-j),$$ as desired.
The second part is just a simple computational check. However, we point out that it is here where we use the fact that the interval $D'_i+B'_i$ is almost symmetric about the origin. ◻
We say two functions $h_1,h_2\colon\mathbb{Z}_p\to\mathbb{N}$ have the *same profile* if there exists a permutation $\tau\colon\mathbb{Z}_p\to\mathbb{Z}_p$ such that $h_1=h_2\circ\tau$. For $1\le i\le n$ construct increasing functions $$f_i,g_i\colon\mathbb{Z}_p\to\mathbb{N}$$ such that $f_i$ and $\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}$ have the same profile and also that $g_i$ and $\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}$ have the same profile. (We define the ordering on $\mathbb{Z}_p$ by identifying $\mathbb{Z}_p$ with $\{0,1,\dots,p-1\}$.) Note that, as these functions have the same profiles, Claim 6 (a) implies that for any $1\le i\le n$ and any $j\ge 0$ we have that $$\sum_{x\in\mathbb{Z}_p} \max(0,f_i(x)-j) \le \sum_{x\in\mathbb{Z}_p} \max(0,g_i(x)-j).$$
**Claim 7**. *We have $$\sum_{x\in\mathbb{Z}_p}\prod_i\mathbbm{1}_{D_i}*\mathbbm{1}_{B_i}(x) \le \sum_{x\in\mathbb{Z}_p}\prod_i f_i(x),$$ and $$\sum_{x\in\mathbb{Z}_p}\prod_i\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(x) = \sum_{x\in\mathbb{Z}_p}\prod_i g_i(x).$$*
*Proof of .* The first part follows from the rearrangement inequality. For the second part let $\sigma\colon\mathbb{Z}_p\to\mathbb{Z}_p$ be the permutation given by $(\sigma(p-1),\dots,\sigma(0))=(0,p-1,1,p-2,2,\dots,\frac{p+1}{2},\frac{p-1}{2})$. It is enough to show the functional equality $g_i(x)=\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}(\sigma(x))$. To see why this holds, note that the functions $g_i\colon\mathbb{Z}_p\to\mathbb{N}$ and $\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}\circ\sigma\colon\mathbb{Z}_p\to\mathbb{N}$ have the same profile and are increasing. Indeed, $g_i$ is increasing by construction and $\mathbbm{1}_{D'_i}*\mathbbm{1}_{B'_i}\circ\sigma$ is increasing by (b); the two functions have the same profile by construction. ◻
We need one more ingredient to complete the proof of .
**Lemma 8**. *Let $n\in\mathbb{N}$. For $1\le i\le n$ consider increasing functions $$f_i,g_i\colon\mathbb{Z}_p\to\mathbb{N}$$ with the property that, for all $j\ge0$, $$\sum_{x\in\mathbb{Z}_p}\max(0,f_i(x)-j) \le \sum_{x\in\mathbb{Z}_p}\max(0,g_i(x)-j).$$ Then it follows that $$\sum_{x\in\mathbb{Z}_p}\prod_i f_i(x) \le \sum_{x\in\mathbb{Z}_p}\prod_i g_i(x).$$*
*Proof.* We first claim that for any $i$ and any $r\in\mathbb{Z}_p$, $\sum_{x=r}^{p-1} f_i(x)\le \sum_{x=r}^{p-1} g_i(x)$. Indeed, assume otherwise and pick the largest $r$ such that $\sum_{x=r}^{p-1} f_i(x)>\sum_{x=r}^{p-1} g_i(x)$. Then clearly $f_i(r)>g_i(r)$, as otherwise the inequality would also hold at $r+1$. By setting $j=g_i(r)$ we have $$\sum_{x\in\mathbb{Z}_p}\max(0,f_i(x)-j)\ge\sum_{x=r}^{p-1} (f_i(x)-j)
>\sum_{x=r}^{p-1}(g_i(x)-j)=\sum_{x\in\mathbb{Z}_p}\max(0,g_i(x)-j),$$ a contradiction. Now for any increasing function $F\colon\mathbb{Z}_p\to\mathbb{N}$ we have $$\begin{aligned}
\sum_{x\in\mathbb{Z}_p}F(x)f_i(x)
&=F(0)\sum_{x=0}^{p-1}f_i(x)+\sum_{r=1}^{p-1}(F(r)-F(r-1))\sum_{x=r}^{p-1}f_i(x)\\
&\le F(0)\sum_{x=0}^{p-1}g_i(x)+\sum_{r=1}^{p-1}(F(r)-F(r-1))\sum_{x=r}^{p-1}g_i(x)\\
&=\sum_{x\in\mathbb{Z}_p}F(x)g_i(x).\end{aligned}$$ Hence, as $f_1f_2\dots f_{i-1}g_{i+1}\dots g_n$ is increasing, $$\sum_{x\in\mathbb{Z}_p}f_1(x)\dots f_{i-1}(x)f_i(x)g_{i+1}(x)\dots g_n(x)\le
\sum_{x\in\mathbb{Z}_p}f_1(x)\dots f_{i-1}(x)g_i(x)g_{i+1}(x)\dots g_n(x).$$ Hence $$\sum_{x\in\mathbb{Z}_p}f_1(x)\dots f_{n-1}(x)f_n(x)\le \sum_{x\in\mathbb{Z}_p}f_1(x)\dots f_{n-1}(x)g_n(x)\le
\dots\le \sum_{x\in\mathbb{Z}_p}g_1(x)\dots g_n(x),$$ as required. ◻
This concludes the proof of Theorem 5. ◻
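For small primes the two expectations in Theorem 5 can be computed exactly by brute force, which gives a quick sanity check of the inequality. The sketch below is ours; the sets $A_i$, $B_i$ and the prime $p$ are arbitrary example choices.

```python
import itertools
import math
from fractions import Fraction

def expected_union_size(A_list, B_list, p):
    """Exact value of E|union_i (A_i + b_i)| in Z_p, with each b_i uniform on B_i."""
    total = 0
    for bs in itertools.product(*B_list):
        union = set()
        for A, b in zip(A_list, bs):
            union |= {(a + b) % p for a in A}
        total += len(union)
    return Fraction(total, math.prod(len(B) for B in B_list))

def interval(start, size, p):
    return {(start + k) % p for k in range(size)}

def interval_config(sizes, p):
    """Intervals A'_i, B'_i of the given sizes with each A'_i + B'_i centred at 0 or 1/2."""
    A_prime, B_prime = [], []
    for a, b in sizes:
        A_prime.append(interval(0, a, p))
        B_prime.append(interval(-((a + b - 2) // 2), b, p))  # shifts A'_i + B'_i onto 0 or 1/2
    return A_prime, B_prime

p = 11
A = [{0, 2, 5}, {1, 4, 7, 8}]
B = [{0, 3, 9}, {2, 6}]
A2, B2 = interval_config([(len(a), len(b)) for a, b in zip(A, B)], p)
print(expected_union_size(A, B, p), ">=", expected_union_size(A2, B2, p))
```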
# Passing to the continuous case {#continuous}
From Theorem 5 we can easily pass to the one-dimensional torus $\mathbb{T}=\mathbb{R}/\mathbb{Z}$. We begin with a useful lemma.
**Lemma 9**. *Let $d,n\in\mathbb{N}$ and $0\le\delta\le 1$, and for $i\in [n]$ let $X_i$, $X'_i$, $Y_i$ and $Y'_i$ be measurable subsets of $\mathbb{T}^d$ [(]{.upright}or $\mathbb{R}^d$[)]{.upright} with positive [(]{.upright}finite[)]{.upright} measure and $|X_i\mathbin{\triangle}X'_i|\le \delta |X_i|$ and $|Y_i\mathbin{\triangle}Y'_i| \le \delta |Y_i|$. Then for points $y_i$, $y'_i$ chosen uniformly at random from the sets $Y_i$ and $Y'_i$, respectively, we have $$\Big|\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X'_i+y'_i)|)-\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|)\Big|\le 4n\delta \sum_i |X_i|.$$*
*Proof.* Consider a coupling of $y_i$ and $y'_i$ which maximizes the probability $\mathbb{P}(y_i=y'_i)$. It is easy to see that there exists such a coupling where $$\mathbb{P}(y_i\ne y'_i)\le \frac{|Y_i\mathbin{\triangle}Y'_i|}{\max(|Y_i|,|Y'_i|)}.$$ Indeed, pick elements $a$, $b$, $c$ and $\theta$ uniformly at random from $Y_i\cap Y'_i$, $Y_i\setminus Y'_i$, $Y'_i\setminus Y_i$ and $[0,1]$, respectively. Set $y_i=a$ if $\theta\le |Y_i\cap Y'_i|/|Y_i|$ and set $y_i=b$ otherwise; similarly set $y'_i=a$ if $\theta\le |Y_i\cap Y'_i|/|Y'_i|$ and set $y'_i=c$ otherwise. Then $y_i$ and $y'_i$ are uniform in $Y_i$ and $Y'_i$ respectively and $$\mathbb{P}(y_i\ne y'_i)=\mathbb{P}\bigg(\theta \ge \frac{|Y_i\cap Y'_i|}{\max(|Y_i|,|Y_i'|)}\bigg)
=1-\frac{|Y_i\cap Y'_i|}{\max(|Y_i|,|Y'_i|)} \le \frac{|Y_i\mathbin{\triangle}Y'_i|}{\max(|Y_i|,|Y'_i|)}.$$ Let $E$ be the event that for every $i\in [n]$ we have $y_i=y'_i$. By the above estimate, we have $$\mathbb{P}(E^c) \le \sum_i \mathbb{P}(y_i\ne y_i') \le \sum_i \frac{|Y_i\mathbin{\triangle}Y'_i|}{\max(|Y_i|,|Y_i'|)}.$$ Conditioned on the event $E$, we have the inequality $$\Big||\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X'_i+y'_i)|-|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|\Big|\le \sum_i |X_i\mathbin{\triangle}X'_i|.$$ In general, we have the trivial inequality $$\Big||\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X'_i+y'_i)|-|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|\Big|\le \sum_i (|X_i|+|X'_i|).$$ Combining the last three estimates, we conclude $$\begin{aligned}
\Big|\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X'_i+y'_i)|)-\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i&+y_i)|)\Big|=
\Big|\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X'_i+y'_i)|-|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|)\Big|\\
&\le \Big(\sum_i |X_i|+|X'_i|\Big) \mathbb{P}(E^c) + \Big(\sum_i |X_i\mathbin{\triangle}X'_i|\Big)\mathbb{P}(E)\\
&\le \Big(\sum_i |X_i|+|X'_i|\Big) \sum_i\frac{|Y_i\mathbin{\triangle}Y'_i|}{\max(|Y_i|,|Y'_i|)} + \sum_i |X_i\mathbin{\triangle}X'_i|\\
&\le \Big(\sum_i |X_i|+2|X_i|\Big) \sum_i\delta+\sum_i \delta |X_i|\\
&= (3n+1)\delta \sum_i |X_i|.\end{aligned}$$ The last inequality follows from the hypotheses that $|X_i\mathbin{\triangle}X'_i|\le \delta |X_i|$ and $|Y_i\mathbin{\triangle}Y'_i|\le \delta |Y_i|$. ◻
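The coupling used at the start of the proof is the standard maximal coupling of two uniform distributions, and it is easy to simulate for finite sets (where the same construction applies verbatim). The sketch below is our own illustration; the example sets are arbitrary, and we assume the intersection is non-empty.

```python
import random

def coupled_uniform_pair(Y, Yp):
    """One draw of (y, y') with y uniform on Y and y' uniform on Yp, coupled as in the proof."""
    inter = sorted(set(Y) & set(Yp))          # assumed non-empty here
    only_Y, only_Yp = sorted(set(Y) - set(Yp)), sorted(set(Yp) - set(Y))
    a = random.choice(inter)
    b = random.choice(only_Y) if only_Y else a
    c = random.choice(only_Yp) if only_Yp else a
    theta = random.random()
    y = a if theta <= len(inter) / len(Y) else b
    yp = a if theta <= len(inter) / len(Yp) else c
    return y, yp

random.seed(0)
Y, Yp = set(range(0, 60)), set(range(20, 100))
draws = [coupled_uniform_pair(Y, Yp) for _ in range(100_000)]
mismatch = sum(y != yp for y, yp in draws) / len(draws)
print("empirical P(y != y'):", round(mismatch, 3))
print("exact 1 - |intersection|/max:", round(1 - len(Y & Yp) / max(len(Y), len(Yp)), 3))
print("symmetric-difference bound:", round(len(Y ^ Yp) / max(len(Y), len(Yp)), 3))
```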
**Corollary 10**. *Let $n\in\mathbb{N}$ and let $A_1,\dots,A_n$ and $B_1,\dots,B_n$ be sets of positive measure in $\mathbb{T}$. Construct intervals $A'_1,\dots,A'_n$ and $B'_1,\dots,B'_n$, each centred about $0$, with the same measure as the sets $A_1,\dots,A_n$ and $B_1,\dots,B_n$, respectively. Then for $b_i$ and $b'_i$ chosen independently and uniformly at random from $B_i$ and $B'_i$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|)\ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|).$$*
*Proof.* We identify $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ with $(-\frac{1}{2},\frac{1}{2}]$, and fix a small parameter $\varepsilon$ with $0<\varepsilon<1$. As all $A_i$, $B_i$, $A'_i$ and $B'_i$ are measurable, we can approximate them with sets $C_i$, $D_i$, $C'_i$ and $D'_i$, respectively, so that for $i\in [n]$ we have $$C_i=\bigcup_{j\in [k]}\big(\tfrac{q_{i,j}}{p},\tfrac{r_{i,j}}{p}\big],\qquad
D_i=\bigcup_{j\in [k]}\big(\tfrac{s_{i,j}}{p},\tfrac{t_{i,j}}{p}\big],\qquad
C'_i=\big(\tfrac{-h_i}{p},\tfrac{h_i}{p}\big]\quad\text{ and }\quad
D'_i=\big(\tfrac{-\ell_i}{p},\tfrac{\ell_i}{p}\big],$$ where $p$ is a prime, $k\in \mathbb{N}$ with $k\le \varepsilon p$, $h_i,\ell_i\in\mathbb{N}$ and $-\frac{p}{2}<q_{i,1}<r_{i,1}<q_{i,2}<r_{i,2}<\dots< r_{i,k}<\frac{p}{2}$ and $-\frac{p}{2}<s_{i,1}<t_{i,1}<\dots< s_{i,k}<t_{i,k}<\frac{p}{2}$ are integers with $2h_i=\sum_j (r_{i,j}-q_{i,j})$ and $2\ell_i=\sum_j(t_{i,j}-s_{i,j})$, and finally $|C_i\mathbin{\triangle}A_i|\le \varepsilon|A_i|$, $|D_i\mathbin{\triangle}B_i|\le \varepsilon|B_i|$, $|C'_i\mathbin{\triangle}A'_i|\le \varepsilon|A_i'|$ and $|D'_i\mathbin{\triangle}B'_i|\le \varepsilon|B_i'|$.
**Claim 11**. *For $d_i$ and $d'_i$ chosen uniformly at random from $D_i$ and $D'_i$ respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C_i+d_i)|)\ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C'_i+d'_i)|) - 2n\varepsilon.$$*
*Proof.* For $i\in [n]$, construct subsets of $\mathbb{Z}_p$ $$X_i=\bigcup_{j\in [k]}\{q_{i,j}+1,\dots,r_{i,j}\}\quad\text{and}\quad
Y_i=\bigcup_{j\in [k]}\{s_{i,j}+1,\dots,t_{i,j}\}$$ and $$X'_i=\{1-h_i,\dots,h_i\}\quad\text{and}\quad Y'_i=\{1-\ell_i,\dots,\ell_i\}.$$ Note that $|X_i|=|X'_i|$ and $|Y_i|=|Y'_i|$. Moreover, $X'_i$ and $Y'_i$ are discrete intervals centred around $1/2$. Finally, write $$y_i=\lceil pd_i\rceil\quad\text{and}\quad y'_i=\lceil p d'_i\rceil.$$
Note that if $d_i$, $d'_i$ are chosen uniformly at random from $D_i$, $D'_i$, respectively, then $y_i$, $y'_i$ are chosen uniformly at random from $Y_i$, $Y'_i$, respectively. Moreover, for each such choice $$\big||\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C_i+d_i)| - \tfrac{1}{p}|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|\big| \le \tfrac{kn}{p}
\quad\text{and}\quad
\big||\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C'_i+d'_i)| - \tfrac{1}{p}|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X'_i+y'_i)|\big| \le \tfrac{kn}{p}.$$
By Theorem 5, we have that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (X_i+y_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (X'_i+y'_i) |).$$ Combining the last two inequalities, and recalling $k \le \varepsilon p$, we conclude that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C_i+d_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C'_i+d'_i)|) - \tfrac{2kn}{p}
\ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C'_i+d'_i)|) - 2n\varepsilon.
\qedhere$$ ◻
By Lemma 9 and the fact that $|C_i\mathbin{\triangle}A_i|\le\varepsilon|A_i|$, $|D_i\mathbin{\triangle}B_i|\le\varepsilon|B_i|$, $|C'_i\mathbin{\triangle}A'_i|\le\varepsilon|A'_i|$ and $|D'_i\mathbin{\triangle}B'_i|\le\varepsilon|B'_i|$, for $b_i$, $b'_i$, $d_i$ and $d'_i$ chosen uniformly at random from $B_i$, $B'_i$, $D_i$ and $D'_i$, respectively, we have $$\Big|\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C_i+d_i)|) - \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|)\Big|
\le 4n\varepsilon\sum_i |A_i| \le 4n^2\varepsilon$$ and $$\Big|\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (C'_i+d'_i)|) - \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|)\Big|
\le 4n\varepsilon\sum_i|A'_i| \le 4n^2\varepsilon.$$
Combining the last inequalities with Claim 11, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|) - 2n\varepsilon - 8n^2\varepsilon.$$ As $\varepsilon>0$ can be chosen arbitrarily small, we conclude that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|).
\qedhere$$ ◻
# Derivation of Theorem 3 {#torus}
For $u\in\mathbb{T}^{d-1}$ and any subset $X$ of $\mathbb{T}^d$, write $$X_u=\{x:(x,u)\in X\},$$ for the section of $X$ over $u$, where we identify $\mathbb{T}^d$ with $\mathbb{T}\times\mathbb{T}^{d-1}$. We define the *Steiner symmetrization* of a measurable set $X$ in $\mathbb{T}^d$ as the measurable subset $S_d(X)$ of $\mathbb{T}^d$ which has the property that for every $u\in\mathbb{T}^{d-1}$, $S_d(X)_u$ is a closed interval in $\mathbb{T}$ centred around the origin with $|S_d(X)_u|=|X_u|$. For the sake of definiteness, when the closed interval could be either a single point or the empty set we choose it to be the empty set. This is purely for convenience, and any other choice would make no difference to any of the arguments. An alternative would have been to be more careful with this choice, as a way of ensuring that closed sets always do map to closed sets, for example, but there is no need for this.
More generally, given an integer $d\times d$ matrix $M$ with $\det(M)=1$, i.e., $M\in SL_d(\mathbb{Z})$, the Steiner symmetrization $S_M$ transforms a measurable set $X$ in $\mathbb{T}^d$ into the set $X^M=S_M(X)$ in $\mathbb{T}^d$ where $X^M=S_M(X)=M^{-1}(S_d(M(X)))$. Note that $M$ induces a measure preserving bijection $\mathbb{T}^d\to\mathbb{T}^d$.
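For intuition, these symmetrizations are easy to experiment with on a discretized torus. The sketch below is our own illustration (an $N\times N$ grid standing in for $\mathbb{T}^2$, with arbitrary $N$ and example matrix $M$); it implements the discrete analogue of $X^M=M^{-1}(S_2(M(X)))$ and checks that the measure is preserved.

```python
import numpy as np

N = 64  # grid resolution: (Z/N)^2 is used as a stand-in for T^2

def apply_matrix(X, M):
    """Image of a boolean N x N array X under (x, y) -> M(x, y) mod N."""
    Y = np.zeros_like(X)
    for x in range(N):
        for y in range(N):
            if X[x, y]:
                Y[(M[0][0] * x + M[0][1] * y) % N, (M[1][0] * x + M[1][1] * y) % N] = True
    return Y

def steiner_first_coordinate(X):
    """Replace each section {x : (x, y) in X} by a centred 'interval' of the same size."""
    order = [0] + [(-((k + 1) // 2) if k % 2 else k // 2) % N for k in range(1, N)]
    Y = np.zeros_like(X)
    for y in range(N):
        size = int(X[:, y].sum())
        Y[order[:size], y] = True   # fill residues 0, -1, 1, -2, 2, ... up to the section size
    return Y

def steiner_M(X, M, M_inv):
    """Discrete analogue of X^M = M^{-1}(S_2(M(X)))."""
    return apply_matrix(steiner_first_coordinate(apply_matrix(X, M)), M_inv)

rng = np.random.default_rng(0)
X = rng.random((N, N)) < 0.3
# example: M(x, y) = (x, y + x) with inverse (x, y) -> (x, y - x), both in SL_2(Z)
X1 = steiner_M(X, [[1, 0], [1, 1]], [[1, 0], [-1, 1]])
print(X.sum(), X1.sum())  # the number of occupied cells (the 'measure') is unchanged
```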
**Lemma 12**. *For every $M\in SL_d(\mathbb{Z})$ and for any measurable subsets $X$ and $Y$ of $\mathbb{T}^d$ we have $$|X^M \mathbin{\triangle}Y^M| \le |X\mathbin{\triangle}Y|.$$*
*Proof.* As $\det(M)=1$, for any measurable sets $X'$ and $Y'$ we have $$|M(X') \mathbin{\triangle}M(Y')|=|M(X'\mathbin{\triangle}Y')|=|X'\mathbin{\triangle}Y'|.$$ Therefore, it is enough to show the result when $M$ is the identity: $$|S_d(X) \mathbin{\triangle}S_d(Y)| \le |X\mathbin{\triangle}Y|.$$ As $S_d$ acts in each section $\mathbb{T}\times \{u\}$ independently, it is enough to show the result in dimension $d=1$. So assume $d=1$ and note that $$|S_1(X) \mathbin{\triangle}S_1(Y)| = \big||S_1(X)| - |S_1(Y)|\big| = \big||X|-|Y|\big| \le |X\mathbin{\triangle}Y|.
\qedhere$$ ◻
We now start to work towards our result, but just in two dimensions. As we will see, this is where the real work lies; afterwards it will be a straightforward matter to deduce the result in higher dimensions.
**Lemma 13**. *There exist matrices $M_j\in SL_2(\mathbb{Z})$ for $j \in \mathbb{N}$ such that the following holds. Let $X$ be a measurable set in $\mathbb{T}^2$ and let $X'=[-c,c]\times\mathbb{T}$ be a band in $\mathbb{T}^2$ with the same area as $X$. Define recursively the sets $X_0=X$ and $X_{j+1}=X_j^{M_j}$, each $j \in \mathbb{N}$. Then $$\lim_{j\to\infty} |X_j \mathbin{\triangle}X'| = 0$$ uniformly in $X$.*
Here, as usual, by 'uniformly in $X$' we mean that for each $\varepsilon>0$ there exists $n$ such that for any $X$ the value of $|X_j \mathbin{\triangle}X'|$ is at most $\varepsilon$ for all $j \ge n$.
*Proof.* For $j \in \mathbb{N}$ define $M_j\in SL_2(\mathbb{Z})$ so that $M_{2j}(x,y)=(x,y)$ and $M_{2j+1}(x,y)=(x,y+jx)$ and note that $M_j(X')=X'$ and hence $(X')^{M_j}=X'$ for all $j$. Hence, by , $|X_j \mathbin{\triangle}X'|$ decreases as $j$ increases.
Fix $X$ with $|X|=|X'|=2c$ and assume for some $j>0$, $|X_{2j+1}\mathbin{\triangle}X'|>4\varepsilon$. Note that we must have $2c\in[2\varepsilon,1-2\varepsilon]$ otherwise we automatically have $|X_{2j+1} \mathbin{\triangle}X'|\le4\varepsilon$. For each $y\in \mathbb{T}$ we have $X'_y=[-c,c]$ and $X_{2j+1,y}=S_2(X_{2j})_y=[-c(y),c(y)]$ is also an interval in $\mathbb{T}$ centred at the origin whose width in general will depend on $y$. Define the sets $$S=\{ y \in \mathbb{T}: c(y) \le c-\varepsilon/2\}\quad\text{and}\quad
B=\{ y \in \mathbb{T}: c(y) \ge c+\varepsilon/2\}.$$ Note that $|X_{2j+1}\mathbin{\triangle}X'|>4\varepsilon$ and $|X_{2j+1}|=|X'|$ implies $|X'\setminus X_{2j+1}|$, $|X_{2j+1}\setminus X'|>2\varepsilon$, and hence $$|S|\ge \varepsilon\quad \text{and}\quad |B| \ge \varepsilon.$$
By the definition of the Steiner symmetrization $S_{M_{2j+1}}$, we note that for every $t \in \mathbb{T}$ the set $I'_t=\{x:(x,t-jx)\in X_{2j+2}\}$ is an interval in $\mathbb{T}$ centred at the origin with the same size as the set $I_t=\{x:(x,t-jx)\in X_{2j+1}\}$. Moreover, for every $t \in \mathbb{T}$ we have that the set $\{x:(x,t-jx)\in X'\}$ is equal to $[-c,c]$, which we will call $I$ for convenience.
**Claim 14**. *For every $j\ge 2/\varepsilon$ and $t\in \mathbb{T}$ we have $$|I'_t\cap I| \ge |I_t\cap I|+ j^{-1}\varepsilon.$$*
*Proof.* As $I$ and $I'_t$ are centred intervals with $|I'_t|=|I_t|$, it is enough to show that $$| I \setminus I_t | \ge j^{-1}\varepsilon
\quad\text{and}\quad
| I_t \setminus I | \ge j^{-1}\varepsilon.$$
Consider the functions $f \colon [c-j^{-1},c) \to \mathbb{T}$ and $g\colon (c,c+j^{-1}] \to \mathbb{T}$ defined by $f(x)=g(x)=t-jx$ and note that these functions are bijective and scale the measure by a factor $j$. Hence if we set $S'=f^{-1}(S)$ and $B'=g^{-1}(B)$ then $$|S'|\ge j^{-1}\varepsilon\quad\text{and}\quad |B'|\ge j^{-1}\varepsilon.$$
As $j^{-1}\le \varepsilon/2$ and $2c\in[2\varepsilon,1-2\varepsilon]$, we have $c-j^{-1}>-c$ and $c+j^{-1}<1-c$, so $S' \subseteq I=[-c,c]$ and $B' \cap I=\emptyset$ in $\mathbb{T}$. Therefore, it is enough to show that $$S' \cap I_t = \emptyset
\quad\text{and}\quad
B' \subseteq I_t.$$
Assume for a contradiction that $$x \in S' \cap I_t,$$ and note that if we let $y=t-jx$ then this assumption is equivalent to $$x \in S' \quad\text{and}\quad (x,y) \in X_{2j+1}.$$ As $x \in S'$ and $y=f(x)$, we have $y \in S$. By construction of $S$, we have $x \in [-c+\varepsilon/2, c-\varepsilon/2]$. However, by construction $x \in S' \subseteq [c-j^{-1},c)$. This gives the desired contradiction as $j^{-1}\le \varepsilon/2$. We conclude that $$S' \cap I_t = \emptyset.$$ Analogously, we conclude that $$B' \subseteq I_t.$$ This finishes the proof of the claim. ◻
To conclude our proof, note that implies that for every $j\ge 2/\varepsilon$ either $|X_{2j+1}\mathbin{\triangle}X'|\le 4\varepsilon$, or $$|X_{2j+2} \cap X'| \ge |X_{2j+1} \cap X'| + j^{-1}\varepsilon,$$ which is equivalent to $$|X_{2j+2} \mathbin{\triangle}X'| \le |X_{2j+1} \mathbin{\triangle}X'| - 2j^{-1}\varepsilon.$$ Again by , $|X_j \mathbin{\triangle}X'|$ decreases as $j$ increases, which implies that $$|X_{2j+2} \mathbin{\triangle}X'| \le |X_{2j} \mathbin{\triangle}X'| - 2j^{-1}\varepsilon.$$ As $\sum_j j^{-1}$ diverges, there must exist some $j_0$ depending only on $\varepsilon$ such that $|X_j\mathbin{\triangle}X'|\le 4\varepsilon$ for all $j\ge j_0$, and we note that $j_0$ can be chosen independently of $X$. We conclude that $$\lim_{j \to \infty} |X_j \mathbin{\triangle}X'| =0\quad\text{uniformly in }X.
\qedhere$$ ◻
**Corollary 15**. *There exist matrices $M_j\in SL_d(\mathbb{Z})$ for $j \in \mathbb{N}$ such that the following holds. Let $X$ be a measurable set in $\mathbb{T}^d$ and let $X'=[-c,c]\times\mathbb{T}^{d-1}$ be a band in $\mathbb{T}^d$ with the same volume as $X$. Define recursively the sets $X_0=X$ and $X_{j+1}=X_j^{M_j}$, each $j \in \mathbb{N}$. Then $$\lim_{j\to\infty} |X_j \mathbin{\triangle}X'| = 0$$ uniformly in $X$.*
*Proof.* Let $M_{j,k}\in SL_d(\mathbb{Z})$ be the matrix that applies the $M_j$ from to coordinates 1 and $k$ and leaves all other coordinates fixed. Using the sequence $M_{j,2}$, $j=1,\dots,n_1$, we have that for sufficiently large $n_1$, $|X_{n_1}\mathbin{\triangle}X^{(2)}|<\varepsilon$, where $X^{(2)}$ is a set with $|X^{(2)}|=|X|$ that is independent of the 2nd coordinate, i.e., $(x_1,x_2,\dots,x_d)\in X^{(2)}$ iff $(x_1,x'_2,\dots,x_d)\in X^{(2)}$ for any $x_2,x'_2\in\mathbb{T}$. Continuing with the sequence $M_{j,3}$, $j=1,\dots,n_1$, and recalling , we have that for sufficiently large $n_1$, $|X_{2n_1}\mathbin{\triangle}X^{(3)}|\le |X_{n_1}\mathbin{\triangle}X^{(2)}|+|X^{(2)}_{n_1}\mathbin{\triangle}X^{(3)}|<2\varepsilon$, where $X^{(2)}_{n_1}$ is the result of applying these symmetrizations to $X^{(2)}$ and $X^{(3)}$ is independent of both 2nd and 3rd coordinates. Continuing this process for each coordinate in turn gives a sequence such that $|X_{(d-1)n_1}\mathbin{\triangle}X^{(d)}|<(d-1)\varepsilon$ for sufficiently large $n_1$. But clearly $X^{(d)}=X'$. Concatenating this sequence of matrices with progressively larger $n_1<n_2<\cdots$ and noting that $|X_j\mathbin{\triangle}X'|$ can never increase, we have a sequence of matrices such that $|X_j\mathbin{\triangle}X'|\to 0$ uniformly in $X$. ◻
**Lemma 16**. *Let $d,n\in\mathbb{N}$, $M\in SL_d(\mathbb{Z})$ and let $X_1,\dots,X_n$ and $Y_1,\dots,Y_n$ be sets of positive measure in $\mathbb{T}^d$. For $i\in[n]$, let sets $X_i^M$ and $Y_i^M$ be obtained from the sets $X_i$ and $Y_i$, respectively, by applying the Steiner symmetrization $S_M$. Then for points $y_i$ and $y^M_i$ chosen independently and uniformly at random from $Y_i$ and $Y^M_i$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M_i+y^M_i)|).$$*
*Proof.* We may assume without loss of generality that $M$ is the identity. Recall that for $u\in\mathbb{T}^{d-1}$ and any subset $S$ of $\mathbb{T}^d$, $S_u=\{x:(x,u)\in S\}$, where we identify $\mathbb{T}^d$ with $\mathbb{T}\times\mathbb{T}^{d-1}$. For $i\in [n]$, write $y_i=(z_i,u_i)$ and $y^M_i=(z^M_i,u^M_i)$ where $z_i,z^M_i\in\mathbb{T}$ and $u_i,u^M_i\in\mathbb{T}^{d-1}$. Now $u_i$ and $u^M_i$ have the same distribution as $|Y_{i,u}|=|Y^M_{i,u}|$ for all $u\in\mathbb{T}^{d-1}$. Thus we can couple $y_i$ and $y^M_i$ so that $u^M_i=u_i$ and, conditioned on $u_i$, $z_i$ and $z^M_i$ are independent and uniform in $Y_{i,u_i}$ and $Y^M_{i,u_i}$ respectively. Also choose $(z,u)\in\mathbb{T}\times\mathbb{T}^{d-1}$ uniformly at random and independent from everything else. Then $$\begin{aligned}
\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+y_i)|)&=\mathbb{P}((z,u)\in\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_i+(z_i,u_i)))\\
&=\mathbb{P}(z\in\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_{i,u-u_i}+z_i))=\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_{i,u-u_i}+z_i)|)\end{aligned}$$ and $$\begin{aligned}
\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M_i+y^M_i)|)&=\mathbb{P}((z,u)\in\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M_i+(z^M_i,u_i)))\\
&=\mathbb{P}(z\in\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M_{i,u-u_i}+z^M_i))=\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M_{i,u-u_i}+z^M_i)|).\end{aligned}$$ We now condition on $u,u_1,\dots,u_n\in\mathbb{T}^{d-1}$ and show that if we choose $z_i$ and $z_i^M$ uniformly at random from $Y_{i,u_i}$ and $Y^M_{i,u_i}$ respectively for all $i\in [n]$, then $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X_{i,u-u_i}+z_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M_{i,u-u_i}+z^M_i)|).$$ Write $A_i=X_{i,u-u_i}$ and $B_i=Y_{i,u_i}$ for $i\in[n]$. By the definition of symmetrization $A'_i=X^M_{i,u-u_i}$ and $B'_i=Y^M_{i,u_i}$ are intervals centred around $0$ with the same size as $A_i$ and $B_i$ respectively. Hence, by , we can conclude $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A_i+b_i)|)\ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'_i+b'_i)|),$$ where $b_i=z_i$ and $b'_i=z^M_i$ are chosen uniformly at random from $B_i=Y_{i,u_i}$ and $B'_i=Y^M_{i,u_i}$ respectively. ◻
**Theorem 17**. *Let $d,n\in \mathbb{N}$ and let $A_1,\dots,A_n$ and $B_1,\dots,B_n$ be sets of positive measure in $\mathbb{T}^d$. For $i\in[n]$, let $A'_i$ and $B'_i$ be bands in $\mathbb{T}^d$ centred at the origin with the same volumes as $A_i$ and $B_i$ respectively, i.e., sets of the form $I \times \mathbb{T}^{d-1}$, where $I$ is a closed interval in $\mathbb{T}$ centred at the origin. Then for $b_1,\dots,b_n$, $b'_1,\dots,b'_n$ chosen independently and uniformly at random from $B_1,\dots,B_n$ and $B'_1,\dots,B'_n$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_i+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'_i+b'_i)|).$$*
*Proof.* For $j \in \mathbb{N}$ let $M_j\in SL_d(\mathbb{Z})$ be the matrices given by . For $i \in [n]$ set $A_{i,0}=A_i$ and $B_{i,0}=B_i$ and for $j \in \mathbb{N}$ recursively define the sequences $A_{i,j+1}=A_{i,j}^{M_j}$ and $B_{i,j+1}=B_{i,j}^{M_j}$. For $i \in [n]$ and $j \in \mathbb{N}$ choose points $b_{i,j}$ independently and uniformly at random from $B_{i,j}$. Then by we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_{i,j}+b_{i,j})|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_{i,j+1}+b_{i,j+1})|)$$ for all $j\in\mathbb{N}$. But by we have $$\lim_{j\to\infty} |A_{i,j} \mathbin{\triangle}A'_i|=\lim_{j \to \infty} |B_{i,j} \mathbin{\triangle}B'_i|=0$$ which, by , implies $$\lim_{j\to\infty}\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_{i,j}+b_{i,j})|) = \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'_i+b'_i)|).$$ Thus we conclude that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_i+b_i)|)=\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_{i,0}+b_{i,0})|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'_i+b'_i)|).
\qedhere$$ ◻
Note that now follows immediately from , by taking all $A_i=A$ and all $B_i=B$.
We mention in passing another approach to proving . In the above, we have started with the result in the discrete setting of $\mathbb{Z}_p$, namely . We then passed to the circle () and went from there to the result in the torus. Alternatively, one could use a result of Tao (Theorem 1.1 in [@Tao]), which is the analogue in compact connected abelian groups of Theorem 2 in [@pollard] (the result we used in Section [2](#tool){reference-type="ref" reference="tool"}), to get to by using it instead of the Pollard-type inequalities from [@pollard] in (a generalisation of) .
# Derivation of {#derivation-of}
For any subset $X$ of $\mathbb{R}^d$ and $u\in \mathbb{R}^{d-1}$, we define the section $X_u=\{x:(x,u)\in X\}$, where we identify $\mathbb{R}^d$ with $\mathbb{R}\times\mathbb{R}^{d-1}$. We define the *Steiner symmetrization* $S_d(X)$ of a measurable subset $X$ of $\mathbb{R}^d$ by setting $S_d(X)_u$ to be a closed interval in $\mathbb{R}$ centred around the origin such that $|S_d(X)_u|=|X_u|$.
More generally, given a matrix $M\in SL_d(\mathbb{R})$, the Steiner symmetrization $S_M$ transforms a measurable set $X$ in $\mathbb{R}^d$ into the set $X^M=S_M(X)$ in $\mathbb{R}^d$ where $S_M(X)=M^{-1}(S_d(M(X)))$.
**Lemma 18**. *Let $d,n\in \mathbb{N}$, $M\in SL_d(\mathbb{R})$ and let $X$ and $Y$ be sets of positive finite measure in $\mathbb{R}^d$. Let sets $X^M$ and $Y^M$ be obtained from sets $X$ and $Y$, respectively, by applying the Steiner symmetrization $S_M$. Then for points $y_1,\dots,y_n$, $y^M_1,\dots,y^M_n$ chosen independently and uniformly at random from $Y$, $Y^M$, respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X+y_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(X^M+y^M_i)|).$$*
*Proof.* We may assume without loss of generality that $M$ is the identity. In addition, by scaling $X$ and $Y$ by the same sufficiently small parameter $\alpha>0$, we may assume that $X$ and $Y$, as well as $X^M$ and $Y^M$, are measurable subsets of $[-\frac{1}{4},\frac{1}{4})^d \subseteq \mathbb{R}^d$. (By Lemma 9 we may approximate $X$ and $Y$ by bounded sets.) In particular, $X+Y, X^M+Y^M \subseteq [-\frac{1}{2},\frac{1}{2})^d$. As the canonical map $\pi\colon [-\frac{1}{2},\frac{1}{2})^d \to \mathbb{T}^d$ is a linear isometry, for sets $X$ and $Y$ reduces to for sets $\pi(X)$ and $\pi(Y)$. ◻
*Proof of .* By [@volcic], there exists a sequence of Steiner symmetrizations which we can apply inductively to the measurable sets $A=A^0$ and $B=B^0$ in order to produce measurable sets $A^k$ and $B^k$ for $k \in \mathbb{N}$, with the property that $|A^k \mathbin{\triangle}A'|, |B^k \mathbin{\triangle}B'| \to 0$ as $k \to \infty$. Choose points $b^k_1,\dots, b^k_n$ uniformly at random from $B^k$.
By , for $k\in \mathbb{N}$ we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A^k+b^k_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A^{k+1}+b^{k+1}_i)|).$$ We also have by $$\lim_{k\to\infty}\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A^k+b^k_i)|) = \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'+b'_i)|).$$ Combining the last two equations, we conclude that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)|) = \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A^0+b^0_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'+b'_i)|).
\qedhere$$ ◻
Actually, the proof above can be easily adapted to give the following generalization.
**Theorem 19**. *Let $d,n \in \mathbb{N}$ and let $A_1,\dots,A_n$, $B_1,\dots,B_n$ be measurable sets in $\mathbb{R}^d$ of finite positive measure. For $i\in[n]$ let $A'_i$ and $B'_i$ be balls in $\mathbb{R}^d$ centred at the origin with the same volumes as $A_i$ and $B_i$, respectively. Then for $b_i$, $b'_i$ chosen independently and uniformly at random from $B_i$, $B'_i$ respectively, we have $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A_i+b_i)|) \ge \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A'_i+b'_i)|).$$*
# Note on the size of $A$ + $n$ random points of $B$ {#calculations}
In this short section we give the estimates needed to prove and .
*Proof of .* By , we may assume that $A$ and $B$ are intervals with $A=[0,a]$ and $B=[0,b]$, say. Order the points $\{b_1,\dots,b_n\}$ so that $0<b_1<b_2<\dots<b_n<b$. Then it is easy to see that, for $i>1$, $b_i-b_{i-1}$ has the same distribution as $b_1$. Indeed, if we condition on $b_i$, the set $\{b_1,\dots,b_{i-1}\}$ is just a uniformly chosen random set of $i-1$ points in $[0,b_i]$ and so by symmetry $b_i-b_{i-1}$ has the same distribution as $b_1-0=b_1$. As this holds conditioned on $b_i$ for any choice of $b_i$, it also holds unconditionally. We also note that the distribution of $b_1$ is just that of the minimum of $n$ i.i.d. random variables that are uniform on the interval $[0,b]$.
Clearly $A+b_1=[b_1,a+b_1]$ has length $a$. Now each $A+b_i$, $i>1$, adds an additional length of $\min\{a,b_i-b_{i-1}\}$. Indeed, it adds the interval $(a+b_{i-1},a+b_i]$ except when $b_i-b_{i-1}>a$, in which case it adds the interval $[b_i,a+b_i]$. Hence, provided $a+b\le 1$ so that there is no wrap around the torus, linearity of expectation and the above result on the distribution of $b_i-b_{i-1}$ imply $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A+b_i)|)
=a+(n-1)\mathbb{E}(\min\{a,b_1\})
=a+(n-1)b\cdot \mathbb{P}(b_*\le\min\{a,b_1\}),$$ where $b_*$ is chosen independently and uniformly at random from $[0,b]$. But this last probability is just the probability that, on picking $n+1$ points $\{b_*,b_1,\dots,b_n\}$ at random in $[0,b]$, $b_*$ is the smallest and the smallest is at most $a$. By symmetry this is $1/(n+1)$ times the probability that the smallest of $n+1$ uniform numbers in $[0,b]$ is at most $a$, which is 1 if $a\ge b$ and $1-(1-a/b)^{n+1}$ if $a<b$. The result follows for $a+b\le1$.
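As a quick consistency check: for $n=1$ this expression is just $a=|A+b_1|$, while for $a\ge b$ it reads $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A+b_i)|)=a+\tfrac{n-1}{n+1}b,$$ which increases to $a+b=|A+B|$ as $n\to\infty$; the same limit holds for $a<b$, since then $(1-a/b)^{n+1}\to0$.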
For $a+b>1$ we need to subtract the overlap $X:=|(A+b_n-1)\cap (A+b_1)|$ that can occur as a result of wrap-around. We note that the result holds for $n=1$, so assume $n\ge 2$. By reflecting the interval $[b_1,b]$ we see that $b-b_n$ has the same distribution as $b_2-b_1$, even conditioned on $b_1$. Thus $X$ has the same distribution as $\max\{a+b-1-b_2,0\}$. It is easy to see that the pdf of $b_2$ is $n(n-1)x(b-x)^{n-2}b^{-n}$ and thus $\mathbb{E}X$ can be calculated (after some algebra) as $$\int_{0}^{a+b-1}\mkern-20mu n(n-1)x(b-x)^{n-2}b^{-n}(a+b-1-x)\,dx
=a+\tfrac{n-1}{n+1}b-1+\big(\tfrac{1-a}{b}\big)^n\big(b-\tfrac{n-1}{n+1}(1-a)\big).$$ The result then follows after some algebraic simplifications. ◻
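As a consistency check, this correction vanishes when $a+b=1$: substituting $1-a=b$ in the expression above gives $$a+\tfrac{n-1}{n+1}b-1+\Big(\tfrac{1-a}{b}\Big)^n\Big(b-\tfrac{n-1}{n+1}(1-a)\Big)=-\tfrac{2b}{n+1}+\tfrac{2b}{n+1}=0,$$ so $\mathbb{E}X=0$, matching the case $a+b\le1$ in which no wrap-around occurs.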
*Proof of .* By , we may assume $A$ and $B$ are balls centred at the origin. If $b_i\in B$ then a point $x\in A+B$ is covered by $A+b_i$ if and only if $b_i\in (x-A)\cap B$. Hence for a uniformly random choice of $b_i$, $$\mathbb{P}(x\in A+b_i)=\tfrac{|(x-A)\cap B|}{|B|}.$$ The expected volume of $A+B$ *not* covered by $\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)$ is then given simply as $$\int_{A+B} \prod_{i=1}^n(1-\mathbb{P}(x\in A+b_i))\,dx=\int_{A+B}\big(1-\tfrac{|(x-A)\cap B|}{|B|}\big)^n\,dx.$$ As we are considering $n$ very large, the integral will be dominated by the region very close to the boundary of the ball $A+B$ where $(x-A)\cap B$ is very small. Let the radii of $A$ and $B$ be $a$ and $b$ respectively, and assume $x$ is at distance $a+b-\varepsilon$ from the centre of $A+B$. Then $$\label{e:px}
\frac{|(x-A)\cap B|}{|B|}=\frac{a^d\int_0^\alpha \sin^d\theta\,d\theta
+b^d\int_0^\beta \sin^d\theta\,d\theta}{b^d\int_0^\pi \sin^d\theta\,d\theta},$$ where the triangle with side lengths $a$, $b$ and $a+b-\varepsilon$ forms angles $\alpha$ and $\beta$ between the side of length $a+b-\varepsilon$ and the sides of length $a$ and $b$ respectively. To see this note that $(x-A)\cap B$ is formed from two caps, a cap of $x-A$ subtending an angle $\alpha$ at its centre and a cap of $B$ subtending an angle $\beta$ at its centre. The integrals that give the volumes of these caps are of the form $\int_{a\cos\alpha}^a v_{d-1}(\sqrt{a^2-t^2})^{d-1}\,dt$ for $x-A$, and similarly for $B$, with $v_{d-1}$ equal to the volume of the unit ball in $\mathbb{R}^{d-1}$. A change of variables $t=a\cos\theta$ puts this in the form $v_{d-1}a^d\int_0^\alpha \sin^d\theta\,d\theta$, and similarly for the cap of $B$.
Now fix $a$, $b$ and $d$ and consider the case when $\varepsilon\to 0$. We have $a\sin\alpha=b\sin\beta$ and $a\cos\alpha+b\cos\beta=a+b-\varepsilon$. As $\varepsilon\to0$ this gives $\alpha^2=(1+o(1))\frac{2\varepsilon b/a}{a+b}$, $\beta^2=(1+o(1))\frac{2\varepsilon a/b}{a+b}$ and, for $0\le \theta\le\max\{\alpha,\beta\}$, $\sin^d\theta = (1+o(1))\theta^d$. Thus $$\frac{|(x-A)\cap B|}{|B|}
=\frac{(1+o(1))(a^{-1}+b^{-1})(2\varepsilon ab)^{(d+1)/2}}{(d+1)(a+b)^{(d+1)/2} b^d(v_d/v_{d-1})}
=(1+o(1))C_{a,b,d}\varepsilon^{(d+1)/2},$$ where $C_{a,b,d}$ is a positive constant that depends on $a$, $b$ and $d$, and the $o(1)$ is as $\varepsilon\to0$. The expected fraction of $A+B$ not covered by $\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i(A+b_i)$ is then $$\begin{aligned}
\frac{1}{|A+B|}\int_{A+B}\big(1-\tfrac{|(x-A)\cap B|}{|B|}\big)^n\,dx
&=(1+o(1))\tfrac{d}{a+b}\int_0^\infty \exp\big(-C_{a,b,d}n\varepsilon^{(d+1)/2}\big)\,d\varepsilon\\
&=(1+o(1))\tfrac{d}{a+b}\Gamma(\tfrac{d+3}{d+1})(C_{a,b,d}n)^{-2/(d+1)}.\end{aligned}$$ Indeed, both integrals are dominated by the range when $\varepsilon$ is very small, in which case we can approximate the $(1-\cdots)^n$ term by an exponential and the volume of a spherical shell of thickness $d\varepsilon$ by $\frac{d}{a+b}d\varepsilon$ times the volume of the sphere $A+B$. Evaluating the final integral gives the expected fraction of $A+B$ uncovered as $(c+o(1))n^{-2/(d+1)}$ for some constant $c>0$ depending on $a$, $b$ and $d$. Clearly $c$ is unchanged by uniform scaling of $A$ and $B$ and hence $c$ depends only on the dimension and the ratio of the volumes of $A$ and $B$. ◻
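For example, when $d=1$ and $a\ge b$ this is consistent with the exact interval computation above (in $\mathbb{R}$ no wrap-around occurs): there, with $|A|=2a$ and $|B|=2b$, the expected measure of $A+B$ left uncovered is $2\cdot 2b/(n+1)$, that is, a fraction $$\frac{2b}{(a+b)(n+1)}$$ of $|A+B|=2(a+b)$; here $\Gamma(2)=1$ and, with the convention $v_0=1$, $v_1/v_0=2$, so $C_{a,b,1}=\tfrac{1}{2b}$ and the displayed expression is likewise $(1+o(1))\tfrac{2b}{a+b}\,n^{-1}$.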
We note that for the case when there is a bounded ratio between the radii $a$ and $b$, one has $c=(\frac{b}{2a}+o(1))d=\Theta(d)$ as $d\to\infty$. Thus we need $n$ to be at least of order $d^{(d+1)/2}$ for the asymptotic formula in to be a reasonable guide to the size of $\mathbb{E}|A+\{b_1,\dots,b_n\}|$.
# Sharp Stability Result for {#stability}
Our aim in this section is to prove a stability result for . Recall that a *Bohr set* $X$ in $\mathbb{T}^d$ is a set of the form $X=\{x \in \mathbb{T}^d \colon |\phi(x)-c| \le \rho \}$, where $\phi \colon \mathbb{T}^d \to \mathbb{T}$ is a continuous homomorphism, $c \in \mathbb{T}$ and $\rho \in [0,\frac{1}{2}]$. Two Bohr sets in $\mathbb{T}^d$ are said to be parallel if they use the same homomorphism $\phi$. The result we will prove, which is sharp, is as follows.
**Theorem 20**. *For $\varepsilon>0$ and $d,n\in\mathbb{N}$, there exists $K=K(\varepsilon,n)>0$ such that the following holds. Let $A$ and $B$ be measurable subsets of $\mathbb{T}^d$ with $\varepsilon\le |A|,|B|\le 1-\varepsilon$. Let $A'$ and $B'$ be intervals in $\mathbb{T}$ centred around $0$ with $|A'|=|A|$ and $|B'|=|B|$. Assume that if we choose $b_1,\dots,b_n$ and $b_1',\dots,b_n'$ independently and uniformly at random from $B$ and $B'$, respectively, then $$\mathbb{E}(|A+\{b_1,\dots,b_n\}|)\le \delta+\mathbb{E}(|A'+\{b'_1,\dots,b'_n\}|).$$ Then, there exist parallel Bohr sets $X$ and $Y$ in $\mathbb{T}^d$ such that $$|A\mathbin{\triangle}X|,|B\mathbin{\triangle}Y|\le K\delta^{1/2}.$$*
We remark that the exponent $1/2$ in is optimal as the following example shows. Fix $\alpha>0$ small and let $A=[0,\frac{1}{2}]$ and $B=[0,\frac{1}{2}-\alpha] \cup [\frac{1}{2},\frac{1}{2}+\alpha]$, considered as a subset of $\mathbb{T}^d$ with $d=1$. Then $A'=B'=[-\frac{1}{4},\frac{1}{4}]$. It is a simple computational task to check that $$\mathbb{E}(|A+\{b_1,b_2\}|)= \Theta(\alpha^2)+\mathbb{E}(|A'+\{b_1',b_2'\}|)$$ and for any Bohr set $X$ in $\mathbb{T}$ $$|B\mathbin{\triangle}X|\ge \alpha.$$ A corresponding statement also holds with $A=[0,\frac{1}{2}-\alpha] \cup [\frac{1}{2},\frac{1}{2}+\alpha]$ and $B=[0,\frac{1}{2}]$.
We will use some inequalities of Riesz--Sobolev type, and stability versions of these inequalities, which are due to Christ and Iliopoulou [@Christ]. We state them below just in the form we need them, namely for $\mathbb{T}^d$, but we mention that, as proved in [@Christ], they also have more general formulations, being true in any compact connected abelian group. Here are these two results.
**Theorem 21** ([@Christ]). *Let $A$, $B$ and $C$ be compact subsets of $\mathbb{T}^d$. Let $A'$, $B'$ and $C'$ be intervals in $\mathbb{T}$ centred around $0$ with the same measures as $A$, $B$ and $C$, respectively. Then $$\int_{C}\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \le \int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx.$$*
**Theorem 22** ([@Christ], Theorem 1.3). *For $\varepsilon>0$ there exists $c=c(\varepsilon)>0$ such that the following holds. Let $A$, $B$ and $C$ be compact subsets of $\mathbb{T}^d$ with $\varepsilon\le |A|,|B|,|C|\le 1-\varepsilon$, $|A|+|B|+|C|\le 2-\varepsilon$ and $2\max(|A|,|B|,|C|) \le |A|+|B|+|C|-\varepsilon$. Let $A'$, $B'$ and $C'$ be intervals in $\mathbb{T}$ centred around $0$ with the same measures as $A$, $B$ and $C$, respectively. Assume that $$\int_{C}\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \ge -\delta +\int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx.$$ Then there exist parallel Bohr sets $X$, $Y$ and $Z$ in $\mathbb{T}^d$ such that $$|A\mathbin{\triangle}X|, |B\mathbin{\triangle}Y|, |C\mathbin{\triangle}Z| \le c\delta^{1/2}.$$*
The main new ingredient we need is the following result.
**Theorem 23**. *For $\varepsilon>0$ and $d,n \in \mathbb{N}$ there exists $\tau=\tau(\varepsilon,n)$ such that the following holds. Let $A$ and $B$ be compact sets in $\mathbb{T}^d$ with $2\varepsilon\le |A|,|B| \le 1-2\varepsilon$, and let $A'$ and $B'$ be intervals in $\mathbb{T}$ centred at the origin with the same measures as $A$ and $B$ respectively.*
*Assume that $$\int_{C}\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \le -\alpha +\int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx$$ for every compact set $C\subseteq\mathbb{T}^d$ with $\varepsilon\le |C| \le 1-\varepsilon$, $|A|+|B|+|C| \le 2-\varepsilon$ and $2\max(|A|,|B|,|C|) \le |A|+|B|+|C|-\varepsilon$, where $C'$ is the interval in $\mathbb{T}$ centred at the origin with the same measure as $C$. Then $$\int_{\mathbb{T}^d}(\mathbbm{1}_{A}*\mathbbm{1}_{B}(x))^n dx
\le
-\tau\alpha +\int_{\mathbb{T}}(\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x))^n dx.$$*
We shall derive this result from the following general result about arbitrary functions.
**Theorem 24**. *Let $f,g \colon [0,1] \to [0,1]$ be continuous decreasing functions with $$\int_0^t f(x)\,dx \le \int_0^t g(x)\,dx$$ for $0 \le t \le 1$. Assume that there exists an interval $I \subseteq [0,1]$ with $g'(t)=-\gamma$ and $$\int_0^t f(x)\,dx \le -\alpha+\int_0^t g(x)\,dx$$ for all $t\in I$. Then $$\int_0^{1} f(x)^n dx \le -(\gamma |I|)^{n-1}\alpha+\int_0^{1}g(x)^n dx.$$*
*Proof.* Define $F(t)=\int_0^t f(x)\,dx$ and $G(t)=\int_0^t g(x)\,dx$ and assume $h\colon [0,1]\to[0,\infty)$ is decreasing and continuously differentiable. Then using integration by parts $$\begin{aligned}
\int_0^{1} (g(x)-f(x))h(x)dx
&=(G(x)-F(x))h(x)\Big|_0^{1}-\int_0^{1} (G(x)-F(x))h'(x)dx\\
&\ge -\alpha \int_I h'(x)=\alpha(h(a)-h(b)),\end{aligned}$$ where $I=(a,b)$ and we have used that $G(0)=F(0)$, $G(x)-F(x)\ge 0$ and $G(x)-F(x)\ge\alpha$ on $I$.
Apply this to the function $h(x)=g(x)^{n-1}+g(x)^{n-2}f(x)+\dots+f(x)^{n-1}$ and note that $h$ is decreasing and $g(a)-g(b)\ge \gamma(b-a)$, so $$h(a)-h(b)\ge g(a)^{n-1}-g(b)^{n-1}\ge g(a)^{n-2}(g(a)-g(b))\ge (\gamma|I|)^{n-1}.$$ We deduce that $$\int_0^{1}g(x)^ndx-\int_0^{1}f(x)^ndx=\int_0^{1}(g(x)-f(x))h(x)\,dx
\ge \alpha(h(a)-h(b))\ge (\gamma|I|)^{n-1}\alpha.
\qedhere$$ ◻
*Proof of .* Given a continuous function $h \colon \mathbb{T}^d \to [0,1]$, the *decreasing rearrangement* of $h$ is the unique continuous decreasing function $\tilde h\colon [0,1] \to [0,1]$ such that for every $\lambda \in [0,1]$ $$|\{x : h(x) \ge \lambda\}|=|\{x : \tilde h(x) \ge \lambda\}|.$$
Note that $\mathbbm{1}_{A}*\mathbbm{1}_{B} \colon \mathbb{T}^d \to [0,1]$ and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'} \colon \mathbb{T}\to [0,1]$ are continuous functions. Let $f$ and $g$ be the decreasing rearrangements of $\mathbbm{1}_{A}*\mathbbm{1}_{B}$ and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}$, respectively. For $x\in [-\frac{1}{2},\frac{1}{2})=:\mathbb{T}$, we have $$\!\!\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)=\begin{cases}
\min(|A|,|B|), & \text{if } 2|x| \le ||A|-|B||;\\
\max(0,|A|+|B|-1), & \text{if } 2|x| \ge \min(|A|+|B|,2-|A|-|B|);\\
\frac{|A|+|B|}{2}-|x|, & \text{otherwise.}
\end{cases}$$ For $x \in [0,1]$, we have $$\qquad g(x)=\begin{cases}
\min(|A|,|B|), & \text{if } x \le ||A|-|B||;\\
\max(0,|A|+|B|-1), & \text{if } x \ge \min(|A|+|B|,2-|A|-|B|);\\
\frac{|A|+|B|}{2}-\frac{x}{2}, & \text{otherwise.}
\end{cases}$$
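As a small check that these formulas are consistent, note that $g$, being a rearrangement of $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}$, satisfies $$\int_0^1 g(x)\,dx=\int_{\mathbb{T}}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx=|A'|\,|B'|=|A|\,|B|,$$ and indeed, when $|A|\le|B|$ and $|A|+|B|\le1$, the plateau, the linear part and the tail of $g$ contribute $|A|(|B|-|A|)$, $|A|^2$ and $0$, respectively.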
For $t \in [0,1]$ set $C'_t=[-\frac{t}{2},\frac{t}{2}]\subseteq\mathbb{T}$. As $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x) \ge \mathbbm{1}_{A'}*\mathbbm{1}_{B'}(y)$ for all $x \in C'_t$ and $y \notin C'_t$, it follows that $$\int_{C_t'} \mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx= \int_0^t g(x)\,dx.$$ As $f$ is a decreasing rearrangement of $\mathbbm{1}_{A}*\mathbbm{1}_{B}$, it follows that for $t \in [0,1]$ there exists a compact set $C_t$ in $\mathbb{T}^d$ with $|C_t|=t$ such that $$\int_{C_t} \mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx= \int_0^t f(x)\,dx.$$ For $t \in [0,1]$, $C'_t$, $A'$ and $B'$ are centred intervals with the same measures as the compact sets $C_t$, $A$ and $B$ in $\mathbb{T}^d$, respectively, so implies that $$\int_0^t f(x)\,dx=\int_{C_t} \mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx
\le \int_{C_t'} \mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx= \int_0^t g(x)\,dx.$$ Set $I=[||A|-|B||+\varepsilon,\min(|A|+|B|,2-|A|-|B|)-\varepsilon]$. As $2\varepsilon\le |A|,|B| \le 1-2\varepsilon$, we have $|I| \ge 2\varepsilon$. By construction, for $t \in I$, $g'(t)=-1/2$. Moreover, for $t \in I$, the compact set $C_t$ in $\mathbb{T}^d$ has size $t$. Hence, $\varepsilon\le |C_t| \le 1-\varepsilon$, $|A|+|B|+|C_t| \le 2-\varepsilon$ and $2\max(|A|,|B|,|C_t|) \le |A|+|B|+|C_t|-\varepsilon$. By hypothesis, for $t\in I$ we have $$\int_0^t f(x)\,dx=\int_{C_t} \mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx
\le -\alpha+\int_{C_t'} \mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx= -\alpha +\int_0^t g(x)\,dx.$$ Using these inequalities together with the fact that $|I| \ge 2\varepsilon$ and $g'(t)=-1/2$ for $t \in I$ gives $$\int_0^1 f(x)^n dx \le -\varepsilon^{n-1}\alpha+\int_0^1 g(x)^n dx.$$ As $f$ and $g$ are decreasing rearrangements of $\mathbbm{1}_{A}*\mathbbm{1}_{B}$ and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}$, respectively, we deduce that $f^n$ and $g^n$ are decreasing rearrangements of $(\mathbbm{1}_{A}*\mathbbm{1}_{B})^n$ and $(\mathbbm{1}_{A'}*\mathbbm{1}_{B'})^n$, respectively. Hence $$\int_{\mathbb{T}^d} (\mathbbm{1}_{A}*\mathbbm{1}_{B}(x))^n dx\le -\varepsilon^{n-1}\alpha+ \int_{\mathbb{T}} (\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x))^n dx.$$ The conclusion follows with $\tau=\varepsilon^{n-1}$. ◻
*Proof of .* Let $c=c(\varepsilon/2)$ be the constant given by and let $\tau=\tau(\varepsilon/2, n)$ be the constant given by . Set $K=c\tau^{-1/2}(1-\varepsilon)^{n/2}$. By approximating measurable sets with compact sets and applying , we can assume $A$ and $B$ are compact sets in $\mathbb{T}^d$. By hypothesis, we know that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A+b_i)|) \le \delta + \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'+b'_i)|),$$ which after taking complements inside $\mathbb{T}^d$ and $\mathbb{T}$ is equivalent to $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A^c+b_i)|) \ge -\delta + \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A^{\prime c}+b'_i)|).$$ After scaling and using $\varepsilon\le |B|=|B'| \le 1-\varepsilon$, we can rewrite the last inequality as $$\int_{b_1,\dots,b_n\in B} |\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A^c+b_i)|\,db_1\dots db_n
\ge -\delta (1-\varepsilon)^n +
\int_{b'_1,\dots,b'_n\in B'} |\mathbin{\scalebox{0.95}{\ensuremath{\bigcap}}}_i (A^{\prime c}+b'_i)|\,db'_1\dots db'_n,$$ which is equivalent to $$\int_{\mathbb{T}^d} |(x-A^c) \cap B|^n\,dx
\ge -\delta (1-\varepsilon)^n + \int_{\mathbb{T}}|(x-A^{\prime c}) \cap B'|^n\,dx,$$ or in other words $$\int_{\mathbb{T}^d}(\mathbbm{1}_{A^c}*\mathbbm{1}_{B}(x))^n dx
\ge -\delta (1-\varepsilon)^n + \int_{\mathbb{T}}(\mathbbm{1}_{A^{\prime c}}*\mathbbm{1}_{B'}(x))^n dx.$$ To simplify the notation, set $$D=A^c\qquad\text{and}\qquad
D'=A^{\prime c} + \tfrac{1}{2}.$$ Recall $B'$ is a centred interval with the same size as $B$ and note that $D'$ is a centred interval with the same size as $D$. Moreover, note that $\varepsilon\le |D|,|B| \le 1-\varepsilon$. With this notation, we have $$\label{eq_equiv_cont}
\int_{\mathbb{T}^d}(\mathbbm{1}_{D}*\mathbbm{1}_{B}(x))^n dx
\ge -\delta (1-\varepsilon)^n+\int_{\mathbb{T}}(\mathbbm{1}_{D'}*\mathbbm{1}_{B'}(x))^n dx.$$
By with parameters $\varepsilon/2$, $n$ and $\alpha=\tau^{-1}\delta(1-\varepsilon)^n$ applied to the sets $D$ and $B$, it follows that there exists a compact set $C$ in $\mathbb{T}^d$ with $\varepsilon/2 \le |C| \le 1-\varepsilon/2$, $|D|+|B|+|C| \le 2-\varepsilon/2$ and $2\max(|D|,|B|,|C|) \le |D|+|B|+|C|-\varepsilon/2$ such that, if $C'$ is a centred interval in $\mathbb{T}$ with the same size as $C$, then $$\int_{C}\mathbbm{1}_{D}*\mathbbm{1}_{B}(x)\,dx
\ge - \tau^{-1}\delta (1-\varepsilon)^n + \int_{C'}\mathbbm{1}_{D'}*\mathbbm{1}_{B'}(x)\,dx.$$
By with parameters $\varepsilon/2$ applied to the compact sets $D$, $B$ and $C$ in $\mathbb{T}^d$, there exist parallel Bohr sets $W$, $Y$ and $Z$ in $\mathbb{T}^d$ such that $$|D\mathbin{\triangle}W|, |B\mathbin{\triangle}Y|, |C\mathbin{\triangle}Z|
\le c\tau^{-1/2}\delta^{1/2}(1-\varepsilon)^{n/2} = K\delta^{1/2}.$$ Recalling that $D=A^c$, and setting $X=W^c$ (which is also a Bohr set, or more precisely the interior of a Bohr set, parallel to $Y$ and $Z$), we conclude that $$|A\mathbin{\triangle}X|, |B\mathbin{\triangle}Y|, |C\mathbin{\triangle}Z| \le K\delta^{1/2}.
\qedhere$$ ◻
# Sharp Stability Result for {#stability_sphere}
Given a measurable set $X$ in $\mathbb{R}^d$ we define $X'$ to be the ball in $\mathbb{R}^d$ centred around $0$ with the same volume as $X$. In this section we prove the following sharp stability result for .
**Theorem 25**. *For $\varepsilon>0$ and $d,n\in\mathbb{N}$, there exists $K=K(\varepsilon,d,n)>0$ such that the following holds. Let $A$ and $B$ be measurable subsets of $\mathbb{R}^d$ with $\varepsilon\le |A|,|B| \le 1$. Let $A'$ and $B'$ be balls in $\mathbb{R}^d$ centred around $0$ with the same volumes as $A$ and $B$ respectively. Assume that if we choose $b_1,\dots,b_n$ and $b'_1,\dots,b'_n$ uniformly at random from $B$ and $B'$, respectively, then $$\mathbb{E}(|A+\{b_1,\dots,b_n\}|)\le \delta+\mathbb{E}(|A'+\{b'_1,\dots,b'_n\}|).$$ Then there exist homothetic ellipsoids $X$ and $Y$ in $\mathbb{R}^d$ such that $$|A \mathbin{\triangle}X|,|B \mathbin{\triangle}Y|\le K\delta^{1/2}.$$*
Note that the exponent $1/2$ in is optimal by the same examples as given after . We will make use of the following Riesz--Sobolev inequality, due to Riesz [@riesz], together with its stability version, which is due to Christ [@Chr].
**Theorem 26** ([@riesz]). *Let $A$, $B$ and $C$ be measurable subsets of $\mathbb{R}^d$. Let $A'$, $B'$ and $C'$ be balls in $\mathbb{R}^d$ centred around $0$ with the same volumes as $A$, $B$ and $C$, respectively. Then $$\int_{C}\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \le \int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx.$$*
**Theorem 27** ([@Chr], Theorem 2). *For $\varepsilon>0$ and $d \in \mathbb{N}$ there exists $c=c(\varepsilon,d)>0$ such that the following holds. Let $A$, $B$ and $C$ be compact subsets of $\mathbb{R}^d$ with $\varepsilon\le |A|^{1/d},|B|^{1/d},|C|^{1/d}\le 2$ and $2\max(|A|^{1/d},|B|^{1/d},|C|^{1/d}) \le |A|^{1/d}+|B|^{1/d}+|C|^{1/d}-\varepsilon$. Let $A'$, $B'$ and $C'$ be balls in $\mathbb{R}^d$ centred around $0$ with the same volumes as $A$, $B$ and $C$, respectively. Assume that $$\int_{C}\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \ge -\delta +\int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx.$$ Then there exist homothetic ellipsoids $X$, $Y$ and $Z$ in $\mathbb{R}^d$ such that $$|A\mathbin{\triangle}X|, |B\mathbin{\triangle}Y|, |C\mathbin{\triangle}Z| \le c\delta^{1/2}.$$*
The main new ingredient for us is the following result.
**Theorem 28**. *For $\varepsilon>0$ and $d,n\in\mathbb{N}$ there exists $\tau=\tau(\varepsilon,d,n)$ such that the following holds. Let $A$ and $B$ be compact sets in $\mathbb{R}^d$ with $2\varepsilon\le |A|^{1/d},|B|^{1/d} \le 1$. Assume that $$\int_{C}\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \le -\alpha +\int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx$$ for every compact set $C$ in $\mathbb{R}^d$ with $\varepsilon\le |C|^{1/d} \le 2$ and $$2\max(|A|^{1/d},|B|^{1/d},|C|^{1/d}) \le |A|^{1/d}+|B|^{1/d}+|C|^{1/d}-\varepsilon.$$ Then $$\int_{\mathbb{R}^d} 1-\bigg(1-\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\bigg)^n dx
\ge \tau\alpha +\int_{\mathbb{R}^d} 1-\bigg(1-\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\bigg)^n dx$$*
We shall derive this result from a general result about arbitrary functions.
**Theorem 29**. *Let $f,g \colon \mathbb{R}_+ \to [0,1]$ be continuous decreasing functions with bounded support with $$\int_0^\infty f(x)\,dx = \int_0^\infty g(x)\,dx$$ and $$\int_0^t f(x)\,dx \le \int_0^t g(x)\,dx$$ for all $t\ge 0$. Assume that there exists an interval $I \subseteq \mathbb{R}_+$ with $g'(t)=-\gamma$ and $$\int_0^t f(x)\,dx \le -\beta+\int_0^t g(x)\,dx$$ for all $t\in I$. Then $$\int_0^\infty 1-(1-f(x))^n dx \ge (\gamma|I|)^{n-1}\beta+\int_0^\infty 1-(1-g(x))^n dx.$$*
*Proof.* Define $F(t)=\int_0^t f(x)dx$ and $G(t)=\int_0^t g(x)dx$ and assume $h\colon \mathbb{R}_+\to[0,\infty)$ is increasing and continuously differentiable. Then using integration by parts and choosing $T$ sufficiently large so that $f(T)=g(T)=0$, $$\begin{aligned}
\int_0^T (g(x)-f(x))h(x)dx
&=(G(x)-F(x))h(x)\Big|_0^T-\int_0^T (G(x)-F(x))h'(x)dx\\
&\le -\beta \int_I h'(x)=-\beta(h(b)-h(a)),\end{aligned}$$ where $I=[a,b]$ and we have used that $F(0)=G(0)$, $F(T)=G(T)$, $G(x)-F(x)\ge 0$ and $G(x)-F(x)\ge\beta$ on $I$.
Apply this to the function $h(x)=(1-g(x))^{n-1}+(1-g(x))^{n-2}(1-f(x))+\dots+(1-f(x))^{n-1}$ and note that $h$ is increasing and $g(b)\le g(a)- \gamma(b-a) \le 1- \gamma(b-a)$, and so $$h(b)-h(a)\ge (1-g(b))^{n-1}-(1-g(a))^{n-1}\ge (1-g(b))^{n-2}(g(a)-g(b))\ge (\gamma(b-a))^{n-1}.$$ We deduce that $$\begin{aligned}
\int_0^\infty 1-(1-g(x))^ndx-\int_0^\infty 1-(1-f(x))^ndx
&=\int_0^\infty(g(x)-f(x))h(x)\,dx\\
&\le -\beta(h(b)-h(a))\le - (\gamma |I|)^{n-1}\beta.
\qedhere\end{aligned}$$ ◻
*Proof of .* Given a continuous function $h \colon \mathbb{R}^d \to [0,1]$ with bounded support, as before, the *decreasing rearrangement* of $h$ is the unique continuous decreasing function $\tilde h\colon [0,\infty) \to [0,1]$ with bounded support such that for every $\lambda \in [0,1]$ $$|\{x \colon h(x) \ge \lambda\}|=|\{x \colon \tilde h(x) \ge \lambda\}|.$$
Note that $\mathbbm{1}_{A}*\mathbbm{1}_{B}/|B| \colon \mathbb{R}^d \to [0,1]$ and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}/|B'| \colon \mathbb{R}^d \to [0,1]$ are continuous functions with bounded support. Let $f$ and $g$ be the decreasing rearrangements of $\mathbbm{1}_{A}*\mathbbm{1}_{B}/|B|$ and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}/|B'|$, respectively.
Let $v_d$ be the volume of the unit ball in $\mathbb{R}^d$; thus, $v_dr^d$ is the volume of the ball of radius $r$ in $\mathbb{R}^d$. Note that $A'$ and $B'$ are balls centred at the origin of radii $v_d^{-1/d}|A|^{1/d}$ and $v_d^{-1/d}|B|^{1/d}$, respectively.
Now $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}/|B'| \colon \mathbb{R}^d \to [0,1]$ is spherically symmetrical and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)/|B'| \le \mathbbm{1}_{A'}*\mathbbm{1}_{B'}(y)/|B'|$ if $|x| \le |y|$. Thus $$g(v_d|x|^d)=\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}.$$ Moreover, $g(x)=\min(|A|,|B|)/|B|$ for $x\le \rho_-^d$, where $\rho_-:=||A|^{1/d}-|B|^{1/d}|$, and $g(x)=0$ for $x\ge \rho_+^d$, where $\rho_+:=|A|^{1/d}+|B|^{1/d}$, and $g$ has a continuous negative derivative for $\rho_-^d<x<\rho_+^d$.
Set $$I=[\rho_-+\varepsilon, \rho_+-\varepsilon]$$ and $$J=[(\rho_-+\varepsilon)^d, (\rho_+-\varepsilon)^d]$$ Recall that $2\varepsilon\le |A|^{1/d},|B|^{1/d} \le 1$ so that $|I|\ge \varepsilon$, $I \subseteq [\varepsilon,2]$, $|J| \ge \varepsilon^d$ and $J \subseteq [\varepsilon^d, 2^d]$. Also, there exists a $\gamma>0$ depending on $d$ and $\varepsilon$ such that for $x \in J$ $$g'(x) \le -\gamma.$$
For $t \in \mathbb{R}_+$ let $C'_t$ be the ball centred at the origin in $\mathbb{R}^d$ with volume $t$. Now as $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)$ is a decreasing function of $|x|$, it follows that $$\int_{C_t'}\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\,dx = \int_0^t g(x)\,dx.$$
Since $f$ is a decreasing rearrangement of $\mathbbm{1}_{A}*\mathbbm{1}_{B}/|B|$, it follows that for $t\in\mathbb{R}_+$ there exists a compact set $C_t$ in $\mathbb{R}^d$ with $|C_t|=t$ such that $$\int_{C_t}\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\,dx = \int_0^t f(x)\,dx.$$
By the definition of decreasing rearrangement, we have $$\int_0^\infty f(x)\,dx
=\int_{\mathbb{R}^d}\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\,dx
=|A|=|A'|
=\int_{\mathbb{R}^d}\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\,dx
=\int_0^\infty g(x)\,dx.$$
For $t \in \mathbb{R}_+$, $C'_t$, $A'$ and $B'$ are centred balls with the same size as the compact sets $C_t$, $A$ and $B$ in $\mathbb{R}^d$, respectively, so implies $$\int_0^t f(x)\,dx
=\int_{C_t}\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\,dx
\le \int_{C'_t}\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\,dx
= \int_0^t g(x)\,dx.$$
Moreover, for $t \in J$, the compact set $C_t$ in $\mathbb{R}^d$ has size $t$ so $|C_t|^{1/d} \in I$. By the definition of $I$, $\varepsilon\le |C_t|^{1/d} \le 2$ and $2\max(|A|^{1/d},|B|^{1/d},|C_t|^{1/d}) \le |A|^{1/d}+|B|^{1/d}+|C_t|^{1/d}-\varepsilon$. By hypothesis, for $t \in J$ we have $$\int_0^t f(x)\,dx
=\int_{C_t}\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\,dx
\le -\frac{\alpha}{|B|}+\int_{C_t'}\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\,dx
= -\frac{\alpha}{|B|} + \int_0^t g(x)\,dx.$$
Using the last three centred inequalities together with the fact that $|B| \le 1$, $|J| \ge \varepsilon^d$ and $g'(t)\le -\gamma$ for $t \in J$, gives $$\int_0^\infty 1-(1-f(x))^n dx \ge (\gamma \varepsilon^d)^n\alpha+\int_0^\infty 1-(1-g(x))^n dx.$$
As $f$ and $g$ are decreasing rearrangements of $\mathbbm{1}_{A}*\mathbbm{1}_{B}/|B|$ and $\mathbbm{1}_{A'}*\mathbbm{1}_{B'}/|B'|$, respectively, we deduce that $1-(1-f)^n$ and $1-(1-g)^n$ are decreasing rearrangements of $1-\big(1-\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\big)^n$ and $1-\big(1-\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B|}\big)^n$, respectively. Hence $$\int_{\mathbb{R}^d}1-\bigg(1-\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\bigg)^n dx
\ge (\gamma \varepsilon^d)^n\alpha +
\int_{\mathbb{R}^d}1-\bigg(1-\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\bigg)^n dx$$ The conclusion follows with $\tau=(\gamma \varepsilon^d)^n$. ◻
*Proof of .* Let $c=c(\varepsilon^{1/d}/2,d)$ be the constant given by and $\tau=\tau(\varepsilon^{1/d}/2,d,n)$ the constant given by . Set $K=c\tau^{-1/2}$. By approximating measurable sets with compact sets, we can assume $A$ and $B$ are compact sets in $\mathbb{R}^d$. From the hypothesis, we know that $$\mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A+b_i)|) \le \delta + \mathbb{E}(|\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'+b'_i)|).$$ This is equivalent to $$\int_{\mathbb{R}^d}\mathbb{P}\big(x\in\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A+b_i)\big)\,dx
\le \delta + \int_{\mathbb{R}^d}\mathbb{P}\big(x\in\mathbin{\scalebox{0.95}{\ensuremath{\bigcup}}}_i (A'+b'_i)\big)\,dx,$$ which can be expressed as $$\int_{\mathbb{R}^d} 1-\bigg(1-\frac{|(x-A)\cap B|}{|B|}\bigg)^n dx
\le \delta + \int_{\mathbb{R}^d} 1-\bigg(1-\frac{|(x-A')\cap B'|}{|B'|}\bigg)^n dx,$$ or in other words $$\int_{\mathbb{R}^d} 1-\bigg(1-\frac{\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)}{|B|}\bigg)^n dx
\le \delta + \int_{\mathbb{R}^d} 1-\bigg(1-\frac{\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)}{|B'|}\bigg)^n dx.$$
Recall that $A'$ and $B'$ are balls centred at the origin in $\mathbb{R}^d$ with the same size as the compact sets $A$ and $B$, respectively. Moreover, $\varepsilon^{1/d} \le |A|^{1/d}, |B|^{1/d} \le 1$.
By with parameters $\varepsilon^{1/d}/2$, $d$, $n$ and $\alpha = \tau^{-1}\delta$ applied to the sets $A$ and $B$, it follows that there exists a compact set $C$ in $\mathbb{R}^d$ with $\varepsilon^{1/d}/2 \le |C|^{1/d} \le 2$ and $2\max(|A|^{1/d},|B|^{1/d},|C|^{1/d}) \le |A|^{1/d}+|B|^{1/d}+|C|^{1/d}-\varepsilon^{1/d}/2$ such that, if $C'$ is a centred ball in $\mathbb{R}^d$ with the same size as $C$, then $$\int_C\mathbbm{1}_{A}*\mathbbm{1}_{B}(x)\,dx \ge -\tau^{-1}\delta + \int_{C'}\mathbbm{1}_{A'}*\mathbbm{1}_{B'}(x)\,dx.$$
By with parameters $\varepsilon^{1/d}/2$ and $d$ applied to the compact sets $A$, $B$ and $C$ in $\mathbb{R}^d$, there exist homothetic ellipsoids $X$, $Y$, $Z$ in $\mathbb{R}^d$ such that $$|A\mathbin{\triangle}X|, |B\mathbin{\triangle}Y|, |C\mathbin{\triangle}Z| \le c \tau^{-1/2}\delta^{1/2} = K \delta^{1/2}.
\qedhere$$ ◻
# Open problems
Perhaps the most interesting question that leads on from our results is to ask what would happen if we chose points uniformly at random not from the whole of $B$ but just from its boundary (whether we are in Euclidean space or the torus). There are two different natural ways to quantify this problem. For a given $\varepsilon>0$, we could choose uniformly from the set $(1+\varepsilon)B - B$, or we could choose uniformly from $B_\varepsilon- B$, where $B_\varepsilon$ denotes the set of all points at distance at most $\varepsilon$ from $B$. The difference between these two lies in how they 'bias' the boundary measure towards the actual shape and curvature of the boundary -- we feel that both forms are very natural for this problem.
In a different direction, what happens if we use $n$ random points in $A$ and in $B$ (in $\mathbb{R}^d$) and take the convex hull? Is it still true that Euclidean balls minimize the expected value of this quantity? Here $A$ and $B$ would be convex sets of given volumes.
In the results in the paper, we have formed the sumset of $A$ with $n$ random points of $B$. But what would happen if, instead of *points*, we used random copies of a fixed body? For example, what would happen if we used unit intervals, with centres chosen uniformly at random from $B$ but with orientations also chosen at random? To be more precise, given our sets $A$ and $B$ in $\mathbb{R}^d$, we first choose $n$ points uniformly at random from $B$ and then for each of these points we take a unit interval centred at this point with direction chosen uniformly at random -- we wish to minimize the expected volume of the sum of $A$ with the union of these $n$ intervals.
One could also allow this 'extra' body (above, the unit interval) to vary not by a random rotation but rather by being related to $B$: perhaps the most natural example would be if it were a homothetic copy of $B$. Thus one could ask for example the following question: given convex sets $A$ and $B$ in $\mathbb{R}^d$ of given volumes, how do we minimize the expected volume of the sumset of $A$ with $n$ random copies of $B/2$, chosen uniformly at random from the copies of $B/2$ inside $B$? In an equivalent, but perhaps less attractive, formulation, this is asking to minimize the sum of $A+B$ with $n$ random points of $B$.
Barthe, F., Autour de l'inégalité de Brunn--Minkowski, *Ann. Fac. Sci. Toulouse Math.* **12** (2003) 127--178.
Christ, M., A sharpened Riesz--Sobolev inequality, arXiv:1706.02007 (2017).
Christ, M. and M. Iliopoulou, Inequalities of Riesz--Sobolev type for compact connected abelian groups, *Amer. J. Math.* **144** (5) (2022) 1367--1435.
Gardner, R.J., The Brunn--Minkowski inequality, *Bull. Amer. Math. Soc.* **39** (2002) 355--405.
Kneser, M., Summenmengen in lokalkompakten abelschen Gruppen, *Math. Z.* **66** (1956) 88--110.
Macbeath, A.M., On measure of sum sets. II: The sum-theorem for the torus, *Proc. Cam. Phil. Soc.* **49** (1953) 40--43.
Nazarewicz, E., M. O'Brien, M. O'Neill and C. Staples, Equality in Pollard's theorem on set addition of congruence classes, *Acta Arithmetica* **127** (2007) 1--15.
Riesz, F., Sur une inégalité intégrale, *J. London Math. Soc.* **5** (1930) 162--168.
Tao, T., An inverse theorem for an inequality of Kneser, *Proc. Steklov Institute Math.* **303** (2018) 193--219.
Volčič, A., Random Steiner symmetrizations of sets and functions, *Calculus of Variations and Partial Differential Equations* **46** (2013) 555--569.
[^1]: *In the case when $A'_i+B'_i=\mathbb{Z}_p$ we mean that the sum of the centres of $A'_i$ and $B'_i$ is $0$ or $1/2$.*
| arxiv_math | {
"id": "2309.00103",
"title": "Random Translates in Minkowski Sums",
"authors": "Paul Balister, Bela Bollobas, Imre Leader and Marius Tiba",
"categories": "math.FA math.CO",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
We define the singular support of an $\ell$-adic sheaf on a smooth variety over any field. To do this, we combine Beilinson's construction of the singular support for torsion étale sheaves with Hansen and Scholze's theory of universal local acyclicity for $\ell$-adic sheaves.
author:
- Owen Barrett
title: The singular support of an $\ell$-adic sheaf
---
# Introduction
The notion of singular support (or micro-support) has for several decades been the point of departure (see Kashiwara-Schapira [@KS]) for the microlocal study of sheaves and $\mathcal D$-modules on manifolds. It was hoped for many years that an analogue of the singular support existed for algebraic varieties in positive characteristic. In 2015, Beilinson [@Sasha] realized this hope, constructing the singular support of a constructible étale sheaf with torsion coefficients on a smooth variety over any field. One of his many innovations was to relate the singular support to a broader notion of (universal) local acyclicity than what is captured by the vanishing cycles. At the time of his article, a satisfying formalism of universal local acyclicity did not yet exist for $\ell$-adic sheaves. The fundamental nature of the singular support, however, fueled the ambition that it might be defined for $\ell$-adic coefficients, rational coefficients in particular. (As we will see, the case of integral coefficients offers essentially nothing new.) Indeed, since the appearance of Beilinson's article, Umezaki-Yang-Zhao [@UYZ], extending Saito [@Saito], defined the characteristic cycle of a $\overline\mathbf{Q}_\ell$-sheaf. Recent innovations in the theory of universal local acyclicity by Lu-Zheng [@LZ_ula] and Hansen-Scholze [@HS] have extended the definition of universal local acyclicity to $\ell$-adic sheaves on a very broad class of schemes. With this new formalism in hand, it is now possible to define the singular support of an $\ell$-adic sheaf by extending Beilinson's original arguments to the $\ell$-adic setting.
I wish to extend my heartfelt thanks to Sasha Beilinson, Denis-Charles Cisinski, David Nadler, David Hansen, Arthur Ogus and Peter Scholze for clarifying explanations, comments and inspiring discussions. I especially wish to thank Sasha Beilinson for pointing out to me that the perverse t-exactness of the vanishing cycles implies that they send torsion-free perverse sheaves to torsion-free perverse sheaves, which is the heart of the proof of part [th:SS:integral_model] of Theorem 1.
## {#sec:notation_categories}
Fix a field $k$ and a prime number $\ell\ne\mathop{\mathrm{char}}k$. Henceforth, 'variety' will mean '$k$-scheme of finite type.' Let $\Lambda$ be an algebraic extension $E$ of $\mathbf{Q}_\ell$ or the ring of integers $\mathcal O_E$ of such an extension. We wish to define the singular support of a constructible complex of $\Lambda$-sheaves on a variety $X$. If $\Lambda=\mathcal O_E$ for $E$ a finite extension of $\mathbf{Q}_\ell$, the classical approach to constructing the derived category of $\Lambda$-constructible sheaves on a variety involved taking an inverse limit of the corresponding categories with coefficients in $\mathcal O_E/\varpi^n$ ($\varpi$ a uniformizer of $\mathcal O_E$) as $n\to\infty$. To obtain the derived category of $E$-constructible sheaves, one subsequently localized, inverting $\ell$. This approach had the disadvantage that the objects under consideration were no longer sheaves on some site, but were rather formal inverse limits of such. This artificiality caused problems when one wanted to replace the variety $X$ by a space which was no longer of finite type over a field, such as the Milnor tubes which appear in the definition of local acyclicity.
To address these and other issues, Bhatt and Scholze [@BS] in 2015 defined a Grothendieck topology on schemes -- finer than the étale topology -- called the pro-étale topology, and showed that the classical derived categories of constructible $\ell$-adic sheaves could be recovered as full subcategories of $D(X_\text{\upshape pro\'et},\Lambda)$, where this notation literally means the derived category of sheaves of modules over a certain sheaf of rings $\Lambda=\Lambda_X$. This sheaf of rings on $X_\text{\upshape pro\'et}$ is obtained by pulling back the condensed ring corresponding to the topological ring $\Lambda$ along $X_\text{\upshape pro\'et}\to\ast_\text{\upshape pro\'et}$; concretely, if $U\to X$ is weakly étale (i.e. for $U$ in $X_\text{\upshape pro\'et}$), $\Gamma(U,\Lambda_U)=\mathop{\mathrm{Map}}_{\mathrm{cont}}(U,\Lambda)$. Bhatt and Scholze's definition of constructible sheaf was later extended by Hemo, Richarz and Scholbach [@HRS], and it is their definition that we now present.
Fix a scheme $X$. By abuse of notation we will continue to use $\Lambda$ to denote the sheaf $\Lambda_X$ defined above (despite the notation, this sheaf is generally not constant; we will refer to the constant sheaf with value $\Lambda$ as $\underline\Lambda$). We let $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ denote the derived category of $\Lambda$-sheaves on $X_\text{\upshape pro\'et}$, considered as a stable $\infty$-category.[^1] Its homotopy category is triangulated and will be denoted $D(X_\text{\upshape pro\'et},\Lambda)$. The category $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ carries a symmetric monoidal structure given by tensor product, and we say an object in $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ is a *lisse sheaf* if it's dualizable with respect to this structure. (Concretely, $\mathcal F$ in $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ is dualizable if and only if the natural map $\mathcal G\otimes^{L}_\Lambda R\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,\Lambda)\to
R\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,\mathcal G)$ is an isomorphism for every $\mathcal G\in\mathcal D(X_\text{\upshape pro\'et},\Lambda)$.) We let $\mathcal D_\mathrm{lisse}(X,\Lambda)=\mathcal D_\mathrm{lisse}(X)$ denote the full subcategory of $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ on the lisse sheaves. We say that an object $\mathcal F$ of $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ is a *constructible sheaf* if every affine open $U\subset X$ admits a finite partition $U=\coprod_i U_i$ into constructible locally closed subschemes $U_i$ (called strata) so that $\mathcal F|_{U_i}$ is lisse. We let $\mathcal D(X):=\mathcal D(X,\Lambda):=\mathcal D_{\mathrm{cons}}(X_\text{\upshape pro\'et},\Lambda)$ denote the full subcategory of $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ on the constructible sheaves and we refer to objects of $\mathcal D(X)$ simply as *sheaves*.
The standard ($p=0$) t-structure on $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$ often restricts to a t-structure on $\mathcal D(X)$, and we will refer to sheaves in the heart as *abelian* sheaves to emphasize that they're concentrated in degree zero and form an abelian category. With appropriate finiteness assumptions, the six functors can be defined on $\mathcal D(X)$, and we won't decorate these functors with Rs or Ls, so $j_*:=Rj_*$, $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}:=R\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}$, $\otimes:=\otimes^{L}$ etc. No confusion can result as we will explicitly flag any appearance of the un-derived versions of these functors.
## {#sec:intro_ula}
Let $f:X\to S$ be a map locally of finite presentation between schemes over $\mathop{\mathrm{Spec}}\mathbf{Z}[\frac1\ell]$ and let $\mathcal F\in D(X)$.
**Definition 1**. We say $f$ is *universally locally acyclic relative to $\mathcal F$* (or that $\mathcal F$ is universally locally acyclic over $S$ if $f$ is understood) if for every geometric point $x\to X$ and generization $t\to S$ of $f(x)$, the map $\mathcal F_x:=\Gamma(X_x,\mathcal F)\to\Gamma(X_x\times_{S_{f(x)}}t,\mathcal F)$ is an isomorphism, and this remains true after arbitrary base change in $S$.
That this definition, formally identical to the classical one, holds any water is a consequence of the work of Hansen and Scholze, and will be discussed in §[3](#sec:ula){reference-type="ref" reference="sec:ula"}.
When $S$ is a nonsingular curve (or more generally a regular scheme of dimension 1), we can define the notion of universal local acyclicity at a point in terms of vanishing cycles. Let $x\to X$ be a geometric point and denote by $X_x$ and $S_{f(x)}$ the strict localizations of $X$ and $S$ at $x$ and $f(x)$, respectively. As $S_{f(x)}=\mathop{\mathrm{Spec}}A$ with $A$ a strictly henselian dvr, the normalization $V$ of $A$ in ${(\mathop{\mathrm{Frac}}A)}^{\mathrm{alg}}$ is an absolutely integrally closed valuation ring[^2] (with value group $\mathbf{Q}$). Denoting by $j$ and $i$ the open and closed immersions of the generic point $\eta$ and closed point of $V$, respectively, and base extensions of such, the sheaf of nearby cycles $\psi_f(\mathcal F)$ on the geometric special fiber $f^{-1}(f(x))$ is defined as $i^*j_*(\mathcal F|_{X_\eta})$.[^3] It's a constructible sheaf [@HS Theorem 4.1] endowed with a map from $i^*\mathcal F$, and the sheaf of vanishing cycles $\phi_f(\mathcal F)\in D(f^{-1}f(x))$ is obtained as $\mathop{\mathrm{cofib}}(i^*\mathcal F\to\psi_f(\mathcal F))$. We say $f$ is *universally locally acyclic at $x$ relative to $\mathcal F$* if $\phi_f(\mathcal F)_x=0$. This is tantamount to asking that the map $\mathcal F_x=\Gamma(X_x,\mathcal F)\to\Gamma(X_x\times_{S_{f(x)}}\eta,\mathcal F)$ be an isomorphism.
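For orientation, the construction just described assembles into a (co)fiber sequence on the geometric special fiber, $$i^*\mathcal F\longrightarrow\psi_f(\mathcal F)\longrightarrow\phi_f(\mathcal F),$$ and universal local acyclicity at $x$ amounts to the vanishing of the stalk of the third term at $x$, equivalently to the first map inducing an isomorphism on stalks at $x$. This is only a restatement of the definitions above, recorded for ease of reference.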
By abuse of notation, when speaking of varieties we'll sometimes refer to universal local acyclicity at a scheme-theoretic point rather than at a geometric point. There is no ambiguity: if $x\in X$ is a scheme-theoretic point, then $\mathop{\mathrm{Gal}}({k(x)}^{\mathrm{sep}}/k(x))$ acts transitively on (algebraic) geometric points centered on $x$, and induces isomorphisms on the corresponding stalks of $\phi_f(\mathcal F)$. Given $x$, we'll often use $\overline x$ to denote a geometric point centered on $x$ defined by a separable closure of $k(x)$.
## {#sec:transversality_def}
In this section and the next one we recall some terminology from [@Sasha].
If $X$ is a variety, we refer to correspondences of the form $X\xleftarrow h U\xrightarrow fY$ as *test pairs* on $X$. If $\mathcal F\in D(X)$, we say a test pair $(h,f)$ is *$\mathcal F$-acyclic* if $f$ is universally locally acyclic rel. $h^*\mathcal F$.[^4]
Suppose $X$ is a smooth variety and $C\subset T^*X:=T^*(X/k)$ a closed conical (i.e. $\mathbf{G}_m$-invariant) subset of the cotangent bundle of $X$. The *base* of $C$ is its image in $X$, which is closed. A morphism $h:U\to X$ with $U$ smooth is said to be $C$-transversal at a geometric point $u\to U$ if for every nonzero $\mu\in C_{h(u)}$ the covector $dh(\mu)\in T^*_uU$ is nonzero. A map $f:X\to Y$ with $Y$ smooth is said to be $C$-transversal at a geometric point $x\to X$ if for every nonzero $\nu\in T^*_{f(x)}Y$ one has $df(\nu)\notin C_x$. The morphism $f$ or $h$ is said to be $C$-transversal if it is so at every geometric point.
A test pair $(h,f)$ as above is said to be $C$-transversal at a geometric point $u\to U$ if $U$ and $Y$ are smooth, $h$ is $C$-transversal at $u$ and $f$ is $h^\circ C_u$-transversal at $u$, where $h^\circ C_u:=dh(C_{h(u)})$. The test pair is said to be $C$-transversal if it is so at every geometric point of $U$. If $h:U\to X$ is $C$-transversal (as is the case when $h$ is smooth), then for each geometric point $u\to U$, the closed subsets $h^\circ C_u\subset T^*_uU$ are the fibers of a closed conical subset of $T^*U$, denoted $h^\circ C$ [@Sasha Lemma 1.2(ii)].
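Unwinding the definition (this is only a reformulation of the preceding paragraph, with no new content), when $h:U\to X$ is $C$-transversal the subset $h^\circ C$ is the image of $U\times_XC$ under the map $dh:U\times_XT^*X\to T^*U$: $$h^\circ C\;=\;dh\big(U\times_XC\big)\;\subset\;T^*U,$$ a closed conical subset whose fiber over each geometric point $u$ is $h^\circ C_u=dh(C_{h(u)})$.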
## {#sec:ss_def}
Let $X$ be a smooth variety and $\mathcal F\in D(X)$. We say that $\mathcal F$ is *micro-supported* on a closed conical subset $C\subset T^*X$ if every $C$-transversal test pair is $\mathcal F$-acyclic. If there is a smallest closed conical subset $C\subset T^*X$ on which $\mathcal F$ is micro-supported, we say *$\mathcal F$ has singular support* and call $C$ the *singular support* of $\mathcal F$ and denote it by $SS(\mathcal F)$.
We say that $\mathcal F$ is *weakly micro-supported* on a closed conical subset $C\subset T^*X$ if every $C$-transversal test pair $(h,f)$ satisfying the following additional conditions is $\mathcal F$-acyclic:
- The morphism $f:U\to Y$ has target $Y=\mathbf{A}^1$; and
- If $k$ is infinite then $h:U\to X$ is an open immersion; if $k$ is finite then $h$ factors as a composition $U=V_{k'}:=V\otimes_kk'\to V\hookrightarrow X$ of a base change along a finite extension $k'\supset k$ followed by an open immersion.
It's not hard to see that a sheaf $\mathcal F$ is both micro-supported and weakly micro-supported on $T^*X$; we will review why later. It's also not hard to see that the set of all closed conical subsets of $T^*X$ on which $\mathcal F$ is weakly micro-supported has a smallest element, denoted $SS^w(\mathcal F)$. If $\mathcal F$ has singular support, then it's clear that $SS^w(\mathcal F)\subset SS(\mathcal F)$. It follows from Lemma [Lemma 6](#lem:ula_curve){reference-type="ref" reference="lem:ula_curve"} that $SS^w(\mathcal F)$ admits the following description: when $k$ is infinite, $SS^w(\mathcal F)$ is the closure in $T^*X$ of the set of points of the form $(x,df(x))$, where $x\in|X|$ and $f$ is a function on a Zariski neighborhood of $x$ which is not universally locally acyclic at $x$ rel. $\mathcal F$.[^5] When $k$ is finite, $SS^w(\mathcal F)$ coincides with the image under the map $T^*X_{{k}^{\mathrm{alg}}}\to T^*X$ of the closure of the set of points of the form $(x,df(x))$ where $x\in|X_{{k}^{\mathrm{alg}}}|$ and $f$ is a function on $V_{{k}^{\mathrm{alg}}}$, $V\subset X$ a Zariski neighborhood of $x$, such that $f$ is not universally locally acyclic at $x$ rel. $\mathcal F$.[^6]
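As an orienting example, standard for torsion coefficients and recorded here only to fix intuition (it is not used in the sequel): for a nonzero lisse sheaf $\mathcal F$ on a connected smooth variety $X$ one has $$SS(\mathcal F)=SS^w(\mathcal F)=T^*_XX,$$ the zero section of $T^*X$. Indeed, a function $f$ with $df(x)\ne0$ is smooth at $x$ and so has vanishing $\phi_f(\mathcal F)_{\overline x}$, which bounds $SS^w(\mathcal F)$ by the zero section, while the zero covector at any closed point already lies in $SS^w(\mathcal F)$ (take $f=g^n$ for a local coordinate $g$ vanishing at the point and $n\geq2$ prime to the characteristic).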
## {#sec:main_results}
We can now state our main results. Fix a smooth variety $X$. If $\mathcal F\in D(X)$ with $\Lambda$ integral (i.e. so that $\Lambda\otimes\mathbf{Z}/\ell\ne0$), then $\mathcal F=\mathcal F'\otimes_{\mathcal O_E}\Lambda$ for some $\mathcal F'\in D(X,\mathcal O_E)$, where $E$ is a finite extension of $\mathbf{Q}_\ell$. Then $\mathcal F'/\ell:=\mathcal F'\otimes_{\mathbf{Z}_\ell}\mathbf{Z}/\ell$ is an étale sheaf of $\mathbf{Z}/\ell$-modules, and therefore has singular support as defined by Beilinson. If $\Lambda$ is rational (i.e. $\Lambda$ is an algebraic extension $E$ of $\mathbf{Q}_\ell$), then $\mathcal F$ admits an integral model $\mathcal F_0$ by [@HS Corollary 2.4]; i.e. $\mathcal F_0$ is in $D(X,\mathcal O_E)$ and $\mathcal F_0\otimes_{\mathbf{Z}_\ell}\mathbf{Q}_\ell\simeq\mathcal F$.
**Theorem 1**.
1. *[\[th:SS:existence\]]{#th:SS:existence label="th:SS:existence"} Every sheaf $\mathcal F$ in $D(X)$ has singular support.*
2. *If $\Lambda=\mathcal O_E$ with $E/\mathbf{Q}_\ell$ finite, then $SS(\mathcal F)=SS(\mathcal F/\ell)$.*
3. *If $\Lambda=\mathcal O_L$ with $L/\mathbf{Q}_\ell$ infinite algebraic, then $SS(\mathcal F)=SS(\mathcal F')$, where $\mathcal F'$ is any approximation for $\mathcal F$ over the ring of integers of a finite extension of $\mathbf{Q}_\ell$.*
4. *[\[th:SS:integral_comparison\]]{#th:SS:integral_comparison label="th:SS:integral_comparison"} If $\Lambda$ is rational and $\mathcal F_0$ is any integral model for $\mathcal F$, then $SS(\mathcal F)\subset SS(\mathcal F_0)$.*
5. *[\[th:SS:integral_model\]]{#th:SS:integral_model label="th:SS:integral_model"} If $\Lambda$ is rational then $\mathcal F$ admits an integral model $\mathcal F_0$ with $SS(\mathcal F_0)=SS(\mathcal F)$. If $\mathcal F$ is perverse, then any integral model $\mathcal F_0$ for $\mathcal F$ which is perverse and torsion-free as a perverse sheaf[^7] has $SS(\mathcal F_0)=SS(\mathcal F)$.*
6. *[\[th:SS:dim\]]{#th:SS:dim label="th:SS:dim"} If $X$ is connected, then $SS(\mathcal F)$ is equidimensional of dimension $\dim X$.*
7. *[\[th:SS:SSw\]]{#th:SS:SSw label="th:SS:SSw"} One has $SS^w(\mathcal F)=SS(\mathcal F)$.*
8. *[\[th:SS:constituents\]]{#th:SS:constituents label="th:SS:constituents"} One has $SS(\mathcal F)=\bigcup_iSS({^p\mathcal H^i}\mathcal F)$. If $\Lambda$ is rational and $\{\mathcal F_\alpha\}$ are the perverse Jordan-Hölder constituents of $\mathcal F$,[^8] then moreover $SS(\mathcal F)=\bigcup_\alpha SS(\mathcal F_\alpha)$.*
9. *[\[th:SS:verdier\]]{#th:SS:verdier label="th:SS:verdier"} $SS(D\mathcal F)=SS(\mathcal F)$, where $D$ denotes Verdier duality.*
10. *[\[th:SS:smooth_pullback\]]{#th:SS:smooth_pullback label="th:SS:smooth_pullback"} For a smooth map $f:Y\to X$ one has $SS(f^*\mathcal F)=f^\circ SS(\mathcal F)$.*
11. *[\[th:SS:field_extn\]]{#th:SS:field_extn label="th:SS:field_extn"} If $k'/k$ is any extension of fields and $\mathcal F_{k'}$ is the inverse image of $\mathcal F$ on $X_{k'}:=X\otimes_kk'$, then the closed subsets $SS(\mathcal F)_{k'}$ and $SS(\mathcal F_{k'})$ of $T^*X_{k'}$ coincide.*
It continues to be true that if $X$ is a projective space, the irreducible components of the singular support are in bijection with those of the ramification divisor of the Radon transform of $i_*\mathcal F$, where $i$ is a Veronese embedding of degree $>1$. The precise statement will be given in §[4](#sec:SS){reference-type="ref" reference="sec:SS"}.
The theorem shows that the $\ell$-adic singular support enjoys the same formal properties as the singular support of torsion étale sheaves. In particular, when $\mathcal F$ is a rational sheaf with integral model $\mathcal F_0$, the inclusion $SS(\mathcal F)\subset SS(\mathcal F_0)$ combined with the equidimensionality of both implies that $SS(\mathcal F)$ can be obtained from $SS(\mathcal F_0)$ by deleting some irreducible components; the equality $SS(\mathcal F)=SS^w(\mathcal F)$ shows that these deleted irreducible components correspond to covectors $(x,df(x))$ where $\phi_f(\mathcal F_0)_{\overline x}$ is nonzero but of torsion; i.e. where $\phi_f(\mathcal F_0)_{\overline x}\ne0$ but $\phi_f(\mathcal F_0)_{\overline x}\otimes\mathbf{Q}_\ell=0$, or, equivalently, as $\phi_f(\mathcal F_0)$ is constructible, so that $\phi_f(\mathcal F_0)_{\overline x}$ is a $\mathbf{Z}/\ell^n$-module for some $n$. It's possible to describe these deleted components more explicitly; see the proposition below.
If $\mathcal F$ is a rational sheaf and $\mathcal F_0$ is any integral model for it, and $\mathcal G_0$ is a torsion étale constructible sheaf so that $SS(\mathcal G_0)\not\subset SS(\mathcal F)$, then $\mathcal F_0\oplus\mathcal G_0$ is an example of an integral model for $\mathcal F$ such that the inclusion $SS(\mathcal F)\subset SS(\mathcal F_0\oplus\mathcal G_0)$ is strict.
As any approximation for $\mathcal F$ will have the same singular support (cf. Lemma [Lemma 8](#lem:integral_extension_ula){reference-type="ref" reference="lem:integral_extension_ula"}), we may replace $\mathcal F$ by such an approximation and assume $\Lambda$ is a finite extension $E$ of $\mathbf{Q}_\ell$. Suppose first $\mathcal F$ is a perverse sheaf and let $\mathcal F'_0$ be any perverse integral model. Then $\ker(\ell^n:\mathcal F'_0\to\mathcal F'_0)$ eventually stabilizes with stable value $\mathcal K_0$ as $\mathop{\mathrm{Perv}}(X,\mathcal O_E)$ is noetherian. Then $\mathcal F_0:=\mathcal F'_0/\mathcal K_0$ is an integral model for $\mathcal F$ which is torsion-free as a perverse sheaf. I claim $SS^w(\mathcal F_0)=SS^w(\mathcal F)$. First note that the bases of both are the same (this follows from Lemma [Lemma 2](#lem:B2.1){reference-type="ref" reference="lem:B2.1"}[\[lem:B2.1:base\]](#lem:B2.1:base){reference-type="ref" reference="lem:B2.1:base"} and the next paragraph). Next note that we may assume $k$ is infinite by part [\[th:SS:field_extn\]](#th:SS:field_extn){reference-type="ref" reference="th:SS:field_extn"} of the theorem. Now suppose $SS^w(\mathcal F_0)$ were strictly larger than $SS^w(\mathcal F)$, so that we could find a geometric point $x\to X$ and a smooth function $f:U\to\mathbf{A}^1$ on a Zariski neighborhood $U\subset X$ of $x$ such that $\phi_f(\mathcal F_0)_x\ne0$ but $(x,df(x))\notin SS^w(\mathcal F)$. By the definition of $SS^w(\mathcal F)$, this would imply that we could shrink $U$ about $x$ so that $\phi_f(\mathcal F)=0$ on $U\cap f^{-1}f(x)$. The perverse (left) t-exactness of $\phi_f[-1]$ implies that $\phi_f(\mathcal F_0)[-1]$ is torsion-free as a perverse sheaf. This is a contradiction as a nonzero torsion-free perverse sheaf has a nonzero rationalization.
Indeed, let $W$ be a variety and $\mathcal M_0$ a torsion-free perverse $\mathcal O_E$-sheaf on it. If $V\subset W$ is a Zariski open such that $\mathcal M_0|_V$ is nonzero and lisse, then $\mathcal M_0|_V$ is a shift of a torsion-free local system (i.e. abelian lisse sheaf) on $V$ (see Corollary [Corollary 1](#cor:lisse_t){reference-type="ref" reference="cor:lisse_t"}). If $\mathcal M_0$ is not supported generically, then let $i:Z\hookrightarrow W$ denote its support, so that $\mathcal M_0=i_*\mathcal N_0$ for some torsion-free perverse $\mathcal O_E$-sheaf on $Z$. Then $\mathcal N_0$ is supported generically on $Z$ and we may conclude by the previous argument.
For a general $\mathcal F\in D(X,E)$, we may write $\mathcal F$ as a successive extension in $D(X,E)$ of (shifts of) its perverse cohomology sheaves $^p\mathcal H^i\mathcal F$. For each $i$, we can find a perverse integral model $\mathcal F_0^i$ for $^p\mathcal H^i\mathcal F$ with $SS(\mathcal F_0^i)=SS({^p\mathcal H^i\mathcal F})$, as these coincide with their weak versions by part [\[th:SS:SSw\]](#th:SS:SSw){reference-type="ref" reference="th:SS:SSw"} of the theorem. Proceeding by induction on the perverse amplitude of $\mathcal F$, we may assume we've found integral models $\mathcal G_0$ and $\mathcal G_0'$ for $^p\tau^{\leq 0}\mathcal F$ and $^p\tau^{>0}\mathcal F$ with $SS(\mathcal G_0)=SS({^p\tau^{\leq 0}}\mathcal F)$ and $SS(\mathcal G_0')=SS({^p\tau^{>0}}\mathcal F)$. Multiplying by a suitable power of $\ell$, we may assume that the map ${^p\tau^{>0}}\mathcal F[-1]\to{^p\tau^{\leq0}}\mathcal F$ is the rationalization of some map $\varrho:\mathcal G_0'[-1]\to\mathcal G_0$ in $D(X,\mathcal O_E)$ (see §[2](#sec:cons){reference-type="ref" reference="sec:cons"}), so $\mathcal F_0:=\mathop{\mathrm{Cone}}(\varrho)$ has $\mathcal F_0[\ell^{-1}]\simeq \mathcal F$. Part [\[th:SS:constituents\]](#th:SS:constituents){reference-type="ref" reference="th:SS:constituents"} of the theorem implies that $SS(\mathcal F)
=SS(^p\tau^{\leq 0}\mathcal F)\cup SS(^p\tau^{>0}\mathcal F)
=SS(\mathcal G_0)\cup SS(\mathcal G_0')$. On the other hand, the sheaves in $D(X,\mathcal O_E)$ micro-supported on some fixed conical closed subset form a thick subcategory of $D(X,\mathcal O_E)$ by Lemma [Lemma 2](#lem:B2.1){reference-type="ref" reference="lem:B2.1"}[\[lem:B2.1:thick\]](#lem:B2.1:thick){reference-type="ref" reference="lem:B2.1:thick"}, in particular closed under cones and shifts. Therefore $SS(\mathcal F_0)\subset SS(\mathcal G_0)\cup SS(\mathcal G_0')
=SS(\mathcal F)$. The reverse inclusion follows from part [\[th:SS:integral_comparison\]](#th:SS:integral_comparison){reference-type="ref" reference="th:SS:integral_comparison"} of the theorem. 0◻
If $\mathcal F_0$ is a sheaf with $\mathcal O_E$-coefficients on a smooth variety $X$, the above argument makes it possible to describe explicitly the discrepancy between $SS(\mathcal F_0)$ and $SS(\mathcal F)$. We have by part [\[th:SS:constituents\]](#th:SS:constituents){reference-type="ref" reference="th:SS:constituents"} of the theorem that $SS(\mathcal F_0)=\bigcup_iSS({^p\mathcal H^i}\mathcal F_0)$ and likewise for $\mathcal F:=\mathcal F_0[\ell^{-1}]$ (rationalization is perverse t-exact). Form as above for each $i$ the maximal torsion perverse subsheaf $\mathcal K_0^i\subset{^p\mathcal H^i}\mathcal F_0$ (i.e. so that $\mathcal K_0^i=\ker(\ell^m:{^p\mathcal H^i}\mathcal F_0
\to{^p\mathcal H^i}\mathcal F_0)$ for $m\gg0$), and let $\mathcal G_0^i:=({^p\mathcal H^i}\mathcal F_0)/\mathcal K^i_0$. Then $\mathcal G_0^i$ is a torsion-free perverse sheaf and an integral model for $^p\mathcal H^i\mathcal F$, so by part [\[th:SS:integral_model\]](#th:SS:integral_model){reference-type="ref" reference="th:SS:integral_model"} of the theorem, $SS(\mathcal G_0^i)=SS({^p\mathcal H^i}\mathcal F)$. The argument in the proof above shows that if $0\to\mathcal M_0\to\mathcal N_0\to\mathcal P_0\to0$ is an exact sequence of perverse sheaves on $X$, then $SS(\mathcal N_0)=SS(\mathcal M_0)\cup SS(\mathcal P_0)$. Therefore $SS({^p\mathcal H^i}\mathcal F)$ is obtained from $SS({^p\mathcal H^i}\mathcal F_0)$ by deleting the irreducible components of $SS(\mathcal K_0^i)$ which are not already in $SS({^p\mathcal H^i}\mathcal F)$. This proves the following
**Proposition 1**. *Let $X$ be a smooth variety and $\mathcal F_0$ be in $D(X,\mathcal O_E)$ with $E$ an algebraic extension of $\mathbf{Q}_\ell$. For each $i\in\mathbf{Z}$ let $\mathcal K_0^i$ be the maximal torsion perverse subsheaf of $^p\mathcal H^i\mathcal F_0$. Then $SS(\mathcal F_0)=
SS(\mathcal F_0\otimes_{\mathcal O_E}E)\cup\bigcup_i SS(\mathcal K_0^i)$.*
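A minimal example may help fix ideas; it uses only the standard computations (already available for torsion coefficients) that the singular support of a nonzero lisse sheaf on a smooth connected variety is the zero section, as in the example above, and that the singular support of a skyscraper sheaf at a closed point is the full cotangent fiber there. Take $X=\mathbf{A}^1$, $i:\{0\}\hookrightarrow X$, $\Lambda=\mathcal O_E$ and $\mathcal F_0=\Lambda\oplus i_*(\Lambda/\ell)$. The maximal torsion perverse subsheaf of ${^p\mathcal H^0}\mathcal F_0$ is $i_*(\Lambda/\ell)$ itself, and $$SS(\mathcal F_0)=T^*_XX\cup T^*_0X,\qquad SS(\mathcal F_0\otimes_{\mathcal O_E}E)=T^*_XX,$$ so the component $T^*_0X$ is precisely the one accounted for by the proposition.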
## {#section}
Finally, we record a useful result concerning $\mathcal F$-transversality. Let $h:W\to X$ be a locally finitely-presented map of locally quasi-excellent $\ell$-coprime[^9] schemes, suppose $X$ is moreover quasi-compact (hence noetherian), and let $\mathcal F\in D(X)$. Following Saito [@Saito §8], we say that $h$ is *$\mathcal F$-transversal* if for every quasi-compact open $U\subset W$ separated over $X$, the map $(h|_U)^*\mathcal F\otimes (h|_U)^!\Lambda\to(h|_U)^!\mathcal F$ is an isomorphism on $U$, where here $(h|_U)^!$ is defined by virtue of [@BS Lemma 6.7.19] and the morphism is the one obtained via adjunction and the projection formula. This property can be checked over a Zariski cover of $W$ by quasi-compact opens separated over $X$, and if $W$ is quasi-compact and $h$ separated, then $h$ is $\mathcal F$-transversal if and only if the map $h^*\mathcal F\otimes h^!\Lambda\to h^!\mathcal F$ is an isomorphism on $W$. Then we have the following result relating $SS(\mathcal F)$-transversality to $\mathcal F$-transversality.
**Proposition 1** ([@Saito Proposition 8.13]). *Let $W$ and $X$ be smooth varieties and $\mathcal F\in D(X)$. Then any $SS(\mathcal F)$-transversal morphism $h:W\to X$ is $\mathcal F$-transversal.*
The proof is identical to the one for étale sheaves using Proposition [Proposition 4](#prop:illusie){reference-type="ref" reference="prop:illusie"} in lieu of [@Illusie Proposition 2.10].
# Constructible sheaves {#sec:cons}
In this section we recall some results concerning the category $\mathcal D(X)$ from [@BS], [@HRS] and [@HS].
## {#sec:cons_facts}
Suppose $E$ is a finite extension of $\mathbf{Q}_\ell$ with ring of integers $(\mathcal O_E,\varpi)$. Then $\mathcal D(X,\mathcal O_E)=\lim_n\mathcal D(X,\mathcal O_E/\ell^n)
=\lim_n\mathcal D(X,\mathcal O_E/\varpi^n)$ [@HRS Proposition 5.1],[^10] and if $E$ is now any algebraic extension of $\mathbf{Q}_\ell$ and $X$ is qcqs, $\mathcal D(X,\mathcal O_E)=\mathop{\mathrm{colim}}\mathcal D(X,\mathcal O_L)$ and $\mathcal D(X,E)=\mathop{\mathrm{colim}}\mathcal D(X,L)$, where the colimit is over those finite extensions $E\supset L\supset\mathbf{Q}_\ell$ [@HRS Proposition 5.2]. The same is true with $\mathcal D(X)$ replaced by $\mathcal D_\mathrm{lisse}(X)$.
When $E/\mathbf{Q}_\ell$ is finite, $\mathcal D(X,\mathcal O_E)$ admits another description than the one given by §[1.1](#sec:notation_categories){reference-type="ref" reference="sec:notation_categories"}: it coincides with the full subcategory of $\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E)$ on the derived $\ell$-complete objects $A$ (i.e. $A=\lim_n(A/\ell^n:=A\otimes_{\mathbf{Z}_\ell}\mathbf{Z}/\ell^n)$) such that $A\otimes_\mathbf{Z}\mathbf{Z}/\ell$ is a perfect-constructible étale sheaf[^11] [@HRS Proposition 7.6].
On any qcqs $X$ on which $\ell$ is invertible and $E/\mathbf{Q}_\ell$ an algebraic extension, $\mathcal D(X,\mathcal O_E)\otimes_{\mathbf{Z}_\ell}\mathbf{Q}_\ell\to\mathcal D(X,E)$ is an equivalence [@HS Corollary 2.4]. The left side coincides with the idempotent completion of the localization $\mathcal D(X,\mathcal O_E)[\ell^{-1}]$, but for the spaces that appear in this paper, idempotent completion is superfluous.[^12] If $\mathcal F$ belongs to $\mathcal D(X,E)$, we will refer to sheaves $\mathcal F_0$ in $\mathcal D(X,\mathcal O_E)$ such that $\mathcal F_0[\ell^{-1}]=\mathcal F$ as *integral models* for $\mathcal F$.
Hansen and Scholze [@HS Theorem 2.2] prove that the assignment $?\mapsto\mathcal D(?,\Lambda)$ determines an arc-sheaf of $\infty$-categories $\mathrm{Sch}_{\mathrm{qcqs}}^\mathrm{op}\to\mathrm{Cat}_\infty$.[^13] This means that if $Y\to X$ is an arc-cover of qcqs schemes, the map $$\begin{tikzcd}[column sep=15]
\mathcal D(X)\to\lim_\Delta\Big(\mathcal D(Y^{\bullet/X})
:=\Big(\mathcal D(Y)\arrow[r,shift left]\arrow[r,shift right]
&\mathcal D(Y\times_XY)\arrow[r,shift left=2]\arrow[r]\arrow[r,shift right=2]&\ldots\Big)\Big)
\end{tikzcd}$$ is an equivalence, where this limit is indexed by the simplex category $\Delta$ and computed in $\mathrm{Cat}_\infty$. This fact will be used in §[3](#sec:ula){reference-type="ref" reference="sec:ula"}.
The following standard result allows us to detect that a map of (integral) constructible sheaves is an isomorphism from the vanishing of its cone modulo $\ell$.
**Lemma 1**. *Let $X$ be any scheme and $E$ a finite extension of $\mathbf{Q}_\ell$. Reduction modulo $\ell$ is a conservative functor from the full subcategory $\mathcal D_{\mathrm{comp}}(X_\text{\upshape pro\'et},\mathcal O_E)$ of derived $\ell$-complete objects of $\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E)$ to $\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E/\ell)$.*
As $\mathcal D_{\mathrm{comp}}(X_\text{\upshape pro\'et},\mathcal O_E)$ is a stable subcategory of $\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E)$, it suffices to show that if $A$ in $\mathcal D_{\mathrm{comp}}(X_\text{\upshape pro\'et},\mathcal O_E)$ has $A/\ell=0$, then $A=0$. The exact sequence $0\to\mathbf{Z}/\ell^{n-1}\xrightarrow\ell\mathbf{Z}/\ell^n\to\mathbf{Z}/\ell\to0$ for $n>1$ and induction on $n$ shows that $A/\ell^n=0$ for all $n$. As $A=\lim_nA/\ell^n$, we're done. 0◻
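Spelling out the induction step (with $A$ as above and all tensor products derived): applying $A\otimes_{\mathbf{Z}_\ell}-$ to the displayed short exact sequence gives a fiber sequence $$A/\ell^{n-1}\longrightarrow A/\ell^{n}\longrightarrow A/\ell,$$ so $A/\ell=0$ together with $A/\ell^{n-1}=0$ forces $A/\ell^{n}=0$, and derived $\ell$-completeness then gives $A=\lim_nA/\ell^n=0$.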
## {#sec:stalks}
Let $X$ be any scheme and $x\in X$. If $\mathcal F$ is in $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$, we denote by $\mathcal F_{\overline x}$ the complex $\Gamma(X_{\overline x},\mathcal F)$ in $\mathcal D(\Lambda)$, the $\infty$-category of chain complexes of $\Lambda$-modules up to quasi-isomorphism. Although the site $X_\text{\upshape pro\'et}$ has enough points as it gives rise to a coherent topos, the stalk functors $\mathcal F\mapsto\mathcal F_{\overline x}$, as $x$ runs over the points of $X$, do not form a conservative family of functors on $\mathcal D(X_\text{\upshape pro\'et},\mathbf{Z})$ [@BS Example 4.2.11].
**Lemma 2**. *Suppose $X$ is a scheme, and suppose that either $\Lambda$ is integral and $X$ is qcqs, or that every constructible closed subset of $X$ has Zariski-locally finitely many irreducible components. Then the stalks $\mathcal F\mapsto\mathcal F_{\overline x}$, as $x$ runs over the points of $X$, form a conservative family of functors for $\mathcal D(X,\Lambda)$. We have $\mathcal F_{\overline x}=\Gamma(\overline x,\overline x^*\mathcal F)\in
\operatorname{Perf}_\Lambda\subset\mathcal D(\Lambda)$, the full subcategory on the perfect complexes.*
It follows from the w-contractibility of $\overline x$ and the definition of the functor $\overline x^*$ that $\Gamma(\overline x,\overline x^*\mathcal F)=\mathop{\mathrm{colim}}\Gamma(U,\mathcal F)$, where the (filtered) colimit is over all pro-étale neighborhoods $U$ of $\overline x$.[^14] This coincides with $\Gamma(X_{\overline x},\mathcal F)$ by [@Stacks [`0993`](https://stacks.math.columbia.edu/tag/0993)]. So far this is true for any $\mathcal F\in\mathcal D(X_\text{\upshape pro\'et},\Lambda)$, but if $\mathcal F\in\mathcal D(X,\Lambda)$, this is a perfect complex since $\mathcal D_\mathrm{lisse}(\overline x,\Lambda)\simeq\operatorname{Perf}_\Lambda$ [@HRS Lemma 1.2].
Conservativity amounts to showing that a constructible sheaf $\mathcal F$ with zero stalks is zero. Replacing $X$ by an affine open, it will suffice to show that its restriction to each constructible stratum $Z$ in a stratification witnessing the constructibility of $\mathcal F$ is zero. If $Z$ has locally a finite number of irreducible components, then the restriction of $\mathcal F$ to $Z$ is (pro-étale)-locally perfect-constant [@HRS Theorem 4.13]. If $\overline x$ is a geometric point of $Z$ and $U\to Z$ is a pro-étale neighborhood of $\overline x$ so that $\mathcal F|_U\simeq \underline N\otimes_{\underline\Lambda}\Lambda_U$ for some $N\in\operatorname{Perf}_\Lambda$, then $\Gamma(\overline x,\overline x^*\mathcal F)=N$, whence necessarily $N=0$.
If $\Lambda$ is integral and $X$ is qcqs, then $\mathcal F$ comes from $\mathcal D(X,\mathcal O_E)$ for $E/\mathbf{Q}_\ell$ finite, so we may assume $\mathcal F$ is derived $\ell$-complete. Reduction modulo $\ell$ commutes with taking stalks ($\mathbf{Z}/\ell$ is perfect), so if $\mathcal F$ has zero stalks the same is true of $\mathcal F/\ell$. As $\mathcal F/\ell$ is an étale sheaf, this implies its vanishing on $X$, so $\mathcal F=0$ by Lemma [2.1](#sec:cons_facts){reference-type="ref" reference="sec:cons_facts"}. 0◻
## {#sec:vanishing_cycles_mod_ell}
The formation of nearby and vanishing cycles commutes with reduction mod $\ell$ [@BS Lemma 6.5.9(3)], and the resulting complexes are constructible [@HS Theorem 4.1].
**Lemma 3**. *Let $V$ be a strictly henselian dvr with $\ell\in V^\times$, $f:X\to\mathop{\mathrm{Spec}}V$ a separated map of finite type and $\mathcal F\in D(X)$ with $\Lambda=\mathcal O_E$, $E/\mathbf{Q}_\ell$ finite. Then $\psi_f(\mathcal F)\otimes\mathbf{Z}/\ell=\psi_f(\mathcal F/\ell)$ and $\phi_f(\mathcal F)\otimes\mathbf{Z}/\ell=\phi_f(\mathcal F/\ell)$ are étale perfect-constructible on the special fiber.*
## {#section-1}
The usual ($p=0$) t-structure on $D(X_\text{\upshape pro\'et},\Lambda)$ restricts to one on $D(X)$ under a mild finiteness assumption.
**Proposition 2** ([@HRS Theorem 6.2]). *Suppose that $X$ is a scheme with the property that every constructible subset of $X$ has locally finitely many irreducible components. Then the usual t-structure on $D(X_\text{\upshape pro\'et},\Lambda)$ restricts to a bounded t-structure on $D(X)$. The heart $D(X)^\heartsuit$ consists of the full subcategory of $D(X_\text{\upshape pro\'et},\Lambda)^\heartsuit$ on those sheaves $\mathcal G$ such that for every open affine $U\subset X$ there is a finite stratification of $U$ into constructible locally closed subsets $U_i$ so that $\mathcal G|_{U_i}$ is locally on $(U_i)_\text{\upshape pro\'et}$ isomorphic to $\underline M_i\otimes_{\underline\Lambda}\Lambda$ for $M_i$ a finitely-presented $\Lambda$-module.*
In this case, lisse sheaves admit a familiar description.
**Corollary 1** ([@HRS Corollary 6.3]). *Suppose that $X$ is a qcqs scheme with locally finitely many irreducible components. Then $M$ in $D(X_\text{\upshape pro\'et},\Lambda)$ is lisse if and only if $M$ is bounded and each $\mathcal H^iM$ is locally on $X_\text{\upshape pro\'et}$ isomorphic to $\underline N\otimes_{\underline\Lambda}\Lambda$ for some finitely-presented $\Lambda$-module $N$.*
## {#section-2}
We now turn to some facts that require additional hypotheses of regularity.
**Proposition 2**. *Assume $X$ is a topologically noetherian and geometrically unibranch scheme, and $E/\mathbf{Q}_\ell$ an algebraic extension. Then $\mathcal D_{\mathrm{lisse}}(X,\mathcal O_E)\otimes_{\mathcal O_E}E\to \mathcal D_{\mathrm{lisse}}(X,E)$ is an equivalence.*
Only essential surjectivity needs to be checked; i.e. the existence of integral models. We may assume $E/\mathbf{Q}_\ell$ is finite and $X$ connected with geometric point $x\to X$. In this case, $\pi_1^{\text{\upshape pro\'et}}(X,x)\simeq\pi_1^{\text{\'et}}(X,x)$ as $X$ is geometrically unibranch [@BS Lemma 7.4.10], so essential surjectivity is clear in degree zero; i.e. for abelian sheaves. Indeed, an $E$-local system on $X$ is classified by a continuous homomorphism $\pi_1^\text{\upshape pro\'et}(X,x)\to\mathop{\mathrm{Aut}}(V)$ for some finite-dimensional $E$-vector space $V$, and the étale fundamental group is profinite, hence compact, hence its action on $V$ stabilizes a lattice. As any lisse sheaf is bounded, we can induct on amplitude, using that $\mathcal D_\mathrm{lisse}(X,\Lambda)$ is stable. (In more words, given a map $f:\mathcal F\to\mathcal G$ of lisse $E$-sheaves with integral models $\mathcal F_0,\mathcal G_0$, if $n$ is large enough, $\ell^nf$ is the rationalization of some map $f_0:\mathcal F_0\to\mathcal G_0$ in $D(X,\mathcal O_E)$, and $\mathop{\mathrm{Cone}}\ell^nf\simeq\mathop{\mathrm{Cone}}f$.) 0◻
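For the reader's convenience, here is the standard lattice argument alluded to above (the notation $\rho$, $V$, $L$, $L_0$ is local to this remark). Let $\rho:\pi\to\mathop{\mathrm{GL}}(V)$ be a continuous representation of a profinite group $\pi$ on a finite-dimensional $E$-vector space $V$, and let $L\subset V$ be any $\mathcal O_E$-lattice. The stabilizer of $L$ in $\mathop{\mathrm{GL}}(V)$ is open, so its preimage under $\rho$ is an open, hence finite-index, subgroup of $\pi$; choosing coset representatives $g_1,\dots,g_r$, $$L_0:=\sum_{i=1}^r\rho(g_i)L\subset V$$ is a finite sum of lattices, hence again an $\mathcal O_E$-lattice, and it is $\pi$-stable by construction.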
## {#section-3}
The same is true for perverse sheaves, although this won't be used in the sequel. It's easy to see that if $X$ is a separated variety and $E/\mathbf{Q}_\ell$ is an algebraic extension, the functor $-\otimes_{\mathcal O_E}E:\mathcal D(X,\mathcal O_E)\to\mathcal D(X,E)$ is perverse t-exact. Let $\mathop{\mathrm{Perv}}(X,?)$ denote the heart of $\mathcal D(X,?)$ for the middle perverse t-structure.
**Proposition 3**. *Let $X$ be a separated variety and $E/\mathbf{Q}_\ell$ an algebraic extension. Then $\mathop{\mathrm{Perv}}(X,\mathcal O_E)\otimes_{\mathcal O_E}E\to\mathop{\mathrm{Perv}}(X,E)$ is an equivalence.*
Again, only essential surjectivity needs to be checked, so we may assume $E/\mathbf{Q}_\ell$ is finite. Proceeding along a Jordan-Hölder series for some $\mathcal F$ in $\mathop{\mathrm{Perv}}(X,E)$, we may assume $\mathcal F$ is (up to a shift) the intermediate extension of some $E$-local system $\mathcal L$ on an irreducible subvariety $Y\subset X$ with the property that the reduction $(Y_{{k}^{\mathrm{alg}}})_{\mathrm{red}}$ is smooth over ${k}^{\mathrm{alg}}$. In particular, $Y$ is geometrically unibranch,[^15] so we may find a lattice for $\mathcal L$. It remains only to show that $-\otimes\mathbf{Q}_\ell$ commutes with intermediate extension. This is immediate from Deligne's formula [@BBD Proposition 2.2.4] in light of the fact that $j_*$ and $\tau^F_{\leq0}$ (with the notation there) commute with $-\otimes\mathbf{Q}_\ell$, the former by [@HRS Lemma 5.3] and the latter by the formula $\tau_{\leq0}^F\mathcal G=\mathop{\mathrm{fib}}(\mathcal G\to i_*\tau_{>0}i^*\mathcal G)$ and the t-exactness of $-\otimes\mathbf{Q}_\ell$ for the usual ($p=0$) t-structure. 0◻
# Universal local acyclicity {#sec:ula}
This section is dedicated to a discussion of the notion of universal local acyclicity we will use to define the singular support of a constructible $\ell$-adic sheaf on a smooth variety. Almost all of the ideas in this section are either implicit or explicit in [@HS], so there is no claim of originality. Without further remark, all schemes in this section are assumed to live over $\mathop{\mathrm{Spec}}\mathbf{Z}[\frac1\ell]$.
While the classical definition of local acyclicity is phrased in terms of Milnor fibers, Lu and Zheng discovered in [@LZ_ula] a characterization of universal local acyclicity that remains in the world of varieties. Their characterization is in terms of dualizability in a certain symmetric monoidal 2-category (bicategory) of cohomological correspondences. We now recall the definition of that 2-category, or rather of a slight variation on it, due to [@HS], with the same dualizable objects.
## {#sec:C_S}
In the following discussion (i.e. until §[3.2](#sec:duals_in_2cats){reference-type="ref" reference="sec:duals_in_2cats"}), all schemes are moreover assumed qcqs. Fix a base scheme $S$. Our 2-category $\mathcal C_S:=\mathcal C_{S,\Lambda}$ has objects given by pairs $(X,\mathcal F)$, where $f:X\to S$ is separated of finite presentation and $\mathcal F\in D(X)$. A 1-morphism $g:(X,\mathcal F)\to(Y,\mathcal G)$ in $\mathcal C_S$ is the data of some $\mathcal K\in D(X\times_SY)$ together with a map $\delta:\pi_{Y!}(\pi_X^*\mathcal F\otimes\mathcal K)\to\mathcal G$, where $\pi_X,\pi_Y$ denote the projections $X\times_SY\rightrightarrows X,Y$. Given 1-morphisms $(\mathcal K,\delta)$ and $(\mathcal M,\varepsilon):(X,\mathcal F)\to(Y,\mathcal G)$, a 2-morphism $(\mathcal K,\delta)\Rightarrow(\mathcal M,\varepsilon)$ is given by a map $\mathcal K\to\mathcal M$ in $D(X\times_SY)$ making a commutative triangle $$\begin{tikzcd}[row sep=2]
\pi_{Y!}(\pi_X^*\mathcal F\otimes\mathcal K)\arrow[dr,"\delta"]\arrow[dd] \\
&\mathcal G. \\
\pi_{Y!}(\pi_X^*\mathcal F\otimes\mathcal M)\arrow[ur,"\varepsilon"']
\end{tikzcd}$$
Concerning composition in $\mathcal C_S$, suppose given 1-morphisms $(X,\mathcal F)\xrightarrow{\mathcal L}(Y,\mathcal G)\xrightarrow{\mathcal M}(Z,\mathcal K)$ in $\mathcal C_S$ ($\mathcal L\in D(X\times_SY)$, $\mathcal M\in D(Y\times_SZ)$), which come with maps $\delta:\pi_{Y!}(\pi_X^*\mathcal F\otimes\mathcal L)\to\mathcal G$ and $\varepsilon:\pi_{Z!}(\pi_Y^*\mathcal G\otimes\mathcal M)\to\mathcal K$. The composite $(\mathcal M,\varepsilon)\circ(\mathcal L,\delta)$ is given by $\pi_{XZ!}(\pi_{XY}^*\mathcal L\otimes\pi_{YZ}^*\mathcal M)
\in D(X\times_SZ)$,[^16] together with a map $\pi_{Z!}(\pi_X^*\mathcal F\otimes\pi_{XZ!}(\pi_{XY}^*\mathcal L\otimes\pi_{YZ}^*\mathcal M))
\to\mathcal K$ obtained using proper base change and the projection formula as follows.[^17] (In the following, the projections $\pi$ denote projections from $X\times_SY$, $X\times_SY\times_SZ$ and the like, and the meaning of the notation changes from line to line.) $$\begin{aligned}
&\pi_{Z!}(\pi_X^*\mathcal F\otimes\pi_{XZ!}(\pi_{XY}^*\mathcal L\otimes\pi_{YZ}^*\mathcal M))&(\pi_Z:X\times_SZ\to Z) \\
=&\ \pi_{Z!}(\pi_X^*\mathcal F\otimes\pi_{XY}^*\mathcal L\otimes\pi_{YZ}^*\mathcal M)
&(\pi_Z:X\times_SY\times_SZ\to Z) \\
=&\ \pi_{Z!}(\pi_{YZ!}(\pi_X^*\mathcal F\otimes\pi_{XY}^*\mathcal L)\otimes\mathcal M)
&(\pi_Z:Y\times_SZ\to Z) \\
=&\ \pi_{Z!}(\pi_Y^*(\pi'_{Y!}(\pi_X^*\mathcal F\otimes\mathcal L))\otimes\mathcal M)
&(\pi_Y:Y\times_SZ\to Y,\pi_Y':X\times_SY\to Y) \\
\xrightarrow\delta&\ \pi_{Z!}(\pi_Y^*\mathcal G\otimes\mathcal M)
\xrightarrow\varepsilon\mathcal K.\end{aligned}$$
The symmetric monoidal structure on $\mathcal C_S$ works as follows: on objects we have $(X,\mathcal F)\boxtimes(Y,\mathcal G)
:=(X\times_SY,\mathcal F\boxtimes\mathcal G:=\pi_X^*\mathcal F\otimes\pi_Y^*\mathcal G)$. On 1-morphisms, suppose given 1-morphisms $(X,\mathcal F)\to(Y,\mathcal G)$ and $(X',\mathcal F')\to(Y',\mathcal G')$ represented by objects $\mathcal K\in D(X\times_SY)$ and $\mathcal K'\in D(X'\times_SY')$ and arrows $\pi_{Y!}(\pi_X^*\mathcal F\otimes\mathcal K)\to\mathcal G$ and $\pi_{Y'!}(\pi_{X'}^*\mathcal F'\otimes\mathcal K')\to\mathcal G'$. Their tensor product is the 1-morphism $(X\times_SX',\mathcal F\boxtimes\mathcal F')\to(Y\times_SY',\mathcal G\boxtimes\mathcal G')$ given by $\mathcal K\boxtimes\mathcal K'\in D(X\times_SY\times_SX'\times_SY')$ together with the morphism $\pi_{Y!}(\pi_X^*(\mathcal F\boxtimes\mathcal F')\otimes(\mathcal K\boxtimes\mathcal K'))=
\pi_{Y!}((\pi_X'^*\mathcal F\otimes\mathcal K)\boxtimes(\pi_{X'}'^*\mathcal F'\otimes\mathcal K'))
=\pi_{Y!}'(\pi_X'^*\mathcal F\otimes\mathcal K)\boxtimes\pi_{Y'!}'(\pi_{X'}'^*\mathcal F'\otimes\mathcal K')
\to\mathcal G\boxtimes\mathcal G'$, where here $\pi_X:X\times_SX'\times_SY\times_SY'\to X\times_SX'$, $\pi_X':X\times_SY\to X$ (likewise for $\pi_Y$), and the second isomorphism is given by Künneth.
On 2-morphisms, given 1-morphisms $f_1,f_2:(X,\mathcal F)\to(Y,\mathcal G)$ represented by $\mathcal K_1,\mathcal K_2\in D(X\times_SY)$ and 1-morphisms $g_1,g_2:(X',\mathcal F')\to(Y',\mathcal G')$ represented by $\mathcal K_1',\mathcal K_2'\in D(X'\times_SY')$, and given 2-morphisms $\alpha:f_1\Rightarrow f_2$ represented by some arrow $\mathcal K_1\to\mathcal K_2$ in $D(X\times_SY)$ and $\beta:g_1\Rightarrow g_2$ given by $\mathcal K_1'\to\mathcal K_2'$, $\alpha\boxtimes\beta:f_1\boxtimes g_1\Rightarrow f_2\boxtimes g_2$ is given by $\mathcal K_1\boxtimes\mathcal K_1'\to\mathcal K_2\boxtimes\mathcal K_2'$, which satisfies the required compatibility in light of the Künneth isomorphism above.
The monoidal unit $1_{\mathcal C_S}$ is given by $(S,\Lambda)$.
Given a morphism of schemes $f:S'\to S$, base change gives a symmetric monoidal functor $f^*:\mathcal C_S\to\mathcal C_{S'}$ which on objects sends $(X,\mathcal F)$ to $(X\times_SS',f^*\mathcal F)$. That this determines a symmetric monoidal functor relies on proper base change.
If $\Lambda=\mathcal O_E$ is integral, reduction modulo $\ell$ and inverting $\ell$ determine symmetric monoidal functors $\mathcal C_{S,\mathcal O_E}\to\mathcal C_{S,\mathcal O_E/\ell}$ and $\mathcal C_{S,\mathcal O_E}\to\mathcal C_{S,E}$, respectively (we will only consider the former functor when $E/\mathbf{Q}_\ell$ is finite). On objects these functors send $(X,\mathcal F)$ to $(X,\mathcal F/\ell)$ and $(X,\mathcal F[\ell^{-1}]=\mathcal F\otimes_{\mathcal O_E}E)$, respectively. That these determine symmetric monoidal functors rests on the commutation of pullback and direct image with compact support with reduction modulo $\ell$ and rationalization; for rationalization this follows from the fact that direct image between qcqs schemes commutes with filtered colimits in $D^{\geq 0}(?_\text{\upshape pro\'et},\Lambda)$.
## {#sec:duals_in_2cats}
An object $V$ of a symmetric monoidal 2-category (bicategory) $(\mathcal C,\otimes,1_\mathcal C)$ is said to be *dualizable* if there exists an object $V^\vee$ of $\mathcal C$, called the *dual* of $V$, and 1-morphisms $\mathop{\mathrm{ev}}:V^\vee\otimes V\to1_{\mathcal C}$, $\mathop{\mathrm{coev}}:1_{\mathcal C}\to V\otimes V^\vee$ so that the composites $$V\xrightarrow{\mathop{\mathrm{coev}}\otimes\mathop{\mathrm{id}}} V\otimes V^\vee\otimes V\xrightarrow{\mathop{\mathrm{id}}\otimes\mathop{\mathrm{ev}}} V
\quad\text{and}\quad
V^\vee\xrightarrow{\mathop{\mathrm{id}}\otimes\mathop{\mathrm{coev}}} V^\vee\otimes V\otimes V^\vee\xrightarrow{\mathop{\mathrm{ev}}\otimes\mathop{\mathrm{id}}}V^\vee$$ are isomorphic to the respective identities. It's clear from the definition that if $V$ is dualizable, then $V^\vee$ is too and $V^{\vee\vee}=V$. Likewise, if $V$ and $W$ are dualizable, then $V\otimes W$ is dualizable with dual $V^\vee\otimes W^\vee$. If $F:(\mathcal C,\otimes,1_\mathcal C)\to(\mathcal D,\otimes,1_{\mathcal D})$ is a symmetric monoidal functor of symmetric monoidal 2-categories, then $F$ preserves dualizable objects and duals.
If $V$ is dualizable, then the morphisms $\mathop{\mathrm{ev}}$ and $\mathop{\mathrm{coev}}$ exhibit $-\otimes V^\vee$ as right (and left) adjoint to $-\otimes V$. Thus, for every object $W$ of $\mathcal C$, the internal Hom $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,W)$ exists and is equivalent to $W\otimes V^\vee$. Conversely, dualizability can be characterized in terms of certain internal mapping objects.
**Lemma 1** ([@LZ_ula Lemma 1.4]). *An object $V$ of $\mathcal C$ is dualizable if and only if the internal Hom objects $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,1_{\mathcal C})$ and $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,V)$ exist and the morphism $V\otimes\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,1_\mathcal C)\to\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,V)$ adjoint to the map $V\otimes\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,1_\mathcal C)\otimes V\xrightarrow{\mathop{\mathrm{id}}\otimes\mathop{\mathrm{ev}}} V$ is a split epimorphism, where $\mathop{\mathrm{ev}}:\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(V,1_{\mathcal C})\otimes V\to1_{\mathcal C}$ denotes the counit of adjunction.*
## {#section-4}
Fix a morphism of schemes $f:X\to S$ locally of finite presentation and a sheaf $\mathcal F$ in $D(X)$, defining an object $(X,\mathcal F)$ in $\mathcal C_S$. If $f$ is separated and $X$ and $S$ are qcqs, following Lu and Zheng, Hansen and Scholze define $f$ to be universally locally acyclic relative to $\mathcal F$ if $(X,\mathcal F)$ is dualizable in $\mathcal C_S$. We already made in Definition [1.2](#sec:intro_ula){reference-type="ref" reference="sec:intro_ula"} a definition of universal local acyclicity for $f$ that has a purely local character. This definition specializes to the one of Hansen and Scholze, as is seen from the following
**Proposition 3** ([@HS Theorem 4.4]). *Let $f:X\to S$ be a locally finitely presented morphism of schemes and let $\mathcal F\in D(X)$. Then the following are equivalent:*
1. *[\[4.4_ula_TFAE\]]{#4.4_ula_TFAE label="4.4_ula_TFAE"} The sheaf $\mathcal F$ is universally locally acyclic over $S$; i.e. for any geometric point $x\to X$ and generization $t\to S$ of $f(x)$, the map $\mathcal F_x:=\Gamma(X_x,\mathcal F)\to\Gamma(X_x\times_{S_{f(x)}}t,\mathcal F)$ is an isomorphism, and this remains true after arbitrary base change in $S$.*
2. *[\[4.4_cover_TFAE\]]{#4.4_cover_TFAE label="4.4_cover_TFAE"} The scheme $S$ can be covered by qcqs opens $\{V_i\}$ with the property that for each $i$, $f^{-1}(V_i)$ can be covered by qcqs opens $\{U_{ij}\}$, each separated over $V_i$, such that $(U_{ij},\mathcal F|_{U_{ij}})$ defines a dualizable object of $\mathcal C_{V_i}$ for all $i,j$.*
3. *[\[4.4_all_cover_TFAE\]]{#4.4_all_cover_TFAE label="4.4_all_cover_TFAE"} For every cover of $S$ by qcqs opens $\{V_i\}$ and for every cover of $f^{-1}(V_i)$ by qcqs opens $\{U_{ij}\}$, each separated over $V_i$, $(U_{ij},\mathcal F|_{U_{ij}})$ defines a dualizable object of $\mathcal C_{V_i}$ for all $i,j$.*
4. *[\[4.4_affine_TFAE\]]{#4.4_affine_TFAE label="4.4_affine_TFAE"} For every pair of affine opens $U\subset X$, $V\subset S$ with $f(U)\subset V$, $(U,\mathcal F|_U)$ defines a dualizable object in $\mathcal C_V$.*
5. *[\[4.4_milnor_fiber_TFAE\]]{#4.4_milnor_fiber_TFAE label="4.4_milnor_fiber_TFAE"} For any geometric point $x\to X$ and generization $t\to S$ of $f(x)$, the map $\mathcal F_x=\Gamma(X_x,\mathcal F)\to\Gamma(X_x\times_{S_{f(x)}}S_t,\mathcal F)$ is an isomorphism, and this remains true after arbitrary base change in $S$.*
6. *[\[4.4_rank1\]]{#4.4_rank1 label="4.4_rank1"} After every base change along $\mathop{\mathrm{Spec}}V\to S$ with $V$ a rank 1 valuation ring with algebraically closed fraction field $K$, and for every geometric point $x\to X=X_V$ in the special fiber, the map $\mathcal F_x
=\Gamma(X_x,\mathcal F)\to\Gamma(X_x\times_{\mathop{\mathrm{Spec}}V}\mathop{\mathrm{Spec}}K,\mathcal F)$ is an isomorphism.*
*Moreover, the property of $\mathcal F$ being universally locally acyclic over $S$ is stable under base change in $S$ and arc-local on $S$.*
That the property is stable under base change is immediate from the stability under base change of [\[4.4_ula_TFAE\]](#4.4_ula_TFAE){reference-type="ref" reference="4.4_ula_TFAE"}. That [\[4.4_all_cover_TFAE\]](#4.4_all_cover_TFAE){reference-type="ref" reference="4.4_all_cover_TFAE"} implies [\[4.4_affine_TFAE\]](#4.4_affine_TFAE){reference-type="ref" reference="4.4_affine_TFAE"} is clear. As [\[4.4_ula_TFAE\]](#4.4_ula_TFAE){reference-type="ref" reference="4.4_ula_TFAE"}, [\[4.4_milnor_fiber_TFAE\]](#4.4_milnor_fiber_TFAE){reference-type="ref" reference="4.4_milnor_fiber_TFAE"} and [\[4.4_rank1\]](#4.4_rank1){reference-type="ref" reference="4.4_rank1"} are manifestly étale-local on the source, that each is implied by [\[4.4_cover_TFAE\]](#4.4_cover_TFAE){reference-type="ref" reference="4.4_cover_TFAE"} and each implies [\[4.4_all_cover_TFAE\]](#4.4_all_cover_TFAE){reference-type="ref" reference="4.4_all_cover_TFAE"} follows from [@HS Theorem 4.4]. The same theorem also shows that [\[4.4_affine_TFAE\]](#4.4_affine_TFAE){reference-type="ref" reference="4.4_affine_TFAE"} implies [\[4.4_rank1\]](#4.4_rank1){reference-type="ref" reference="4.4_rank1"} and that the property of being arc-local on the base holds when $X$ and $S$ are qcqs and $f$ is separated. This implies the conclusion in general, e.g. using [\[4.4_affine_TFAE\]](#4.4_affine_TFAE){reference-type="ref" reference="4.4_affine_TFAE"}. 0◻
It's clear that Definition [1.2](#sec:intro_ula){reference-type="ref" reference="sec:intro_ula"} can just as well be made for coefficient rings such as $\mathcal O_L/\ell^n$ with $L/\mathbf{Q}_\ell$ a finite extension (or more generally for discrete coefficient rings killed by some power of $\ell$); i.e. for constructible étale sheaves (as $D(?,\mathcal O_L/\ell^n)\simeq D_{\mathrm{cons}}({?}_{\text{\upshape \'et}},\mathcal O_L/\ell^n)$ [@HRS Proposition 7.1]). The previous proposition continues to hold in that setting, as does the following result, which shows that this notion of universal local acyclicity satisfies the desired formalism.
**Lemma 4**. *Let $X$ and $Y$ be schemes locally of finite presentation over a scheme $S$.*
1. *[\[lem:stabilities:smooth\]]{#lem:stabilities:smooth label="lem:stabilities:smooth"} If $f:X\to Y$ is a smooth map over $S$ and $\mathcal F\in D(Y)$ is universally locally acyclic over $S$, then $f^*\mathcal F$ is universally locally acyclic over $S$.*
2. *[\[lem:stabilities:proper\]]{#lem:stabilities:proper label="lem:stabilities:proper"} If $f:X\to Y$ is a proper finitely-presented map over $S$ and $\mathcal F\in D(X)$ is universally locally acyclic over $S$, then $f_*\mathcal F$ is universally locally acyclic over $S$.*
3. *[\[lem:stabilities:base\]]{#lem:stabilities:base label="lem:stabilities:base"} If $S\to S'$ is a smooth map to a scheme $S'$ and $\mathcal F\in D(X)$ is universally locally acyclic over $S$, then $\mathcal F$ is universally locally acyclic over $S'$.*
4. *[\[lem:stabilities:support\]]{#lem:stabilities:support label="lem:stabilities:support"} If $i:X\hookrightarrow Y$ is a constructible closed immersion, then $\mathcal F\in D(X)$ is universally locally acyclic over $S$ if and only if $i_*\mathcal F$ is universally locally acyclic over $S$.*
[\[lem:stabilities:support\]](#lem:stabilities:support){reference-type="ref" reference="lem:stabilities:support"} If $x\to X$ is a geometric point, then as $Y_x\times_YX=X_x$, $\mathcal F_x=\Gamma(X_x,\mathcal F)=\Gamma(Y_x,i_*\mathcal F)$ and $\Gamma(X_x\times_{S_{f(x)}}t,\mathcal F)
=\Gamma(Y_x\times_{S_{f(x)}}t,i_*\mathcal F)$, we may conclude that $\mathcal F_x\to\Gamma(X_x\times_{S_{f(x)}}t,\mathcal F)$ is an isomorphism if and only if $\Gamma(Y_x,i_*\mathcal F)\to \Gamma(Y_x\times_{S_{f(x)}}t,i_*\mathcal F)$ is.
[\[lem:stabilities:base\]](#lem:stabilities:base){reference-type="ref" reference="lem:stabilities:base"} Let $g:S\to S'$ be our smooth morphism. We may assume $X$, $S$ and $S'$ are qcqs and that the morphisms between them are separated. We may also replace $S'$ by $g(S)$ and assume $g$ is fppf, hence a $v$-cover. As the property of being universally locally acyclic is arc-local on the base, it will suffice to show that $\mathcal F|_{X\times_{S'}S}$ is universally locally acyclic over $S$. By hypothesis, $\mathcal F$ is universally locally acyclic over $S$. Part [\[lem:stabilities:smooth\]](#lem:stabilities:smooth){reference-type="ref" reference="lem:stabilities:smooth"} now implies the desired conclusion.
For [\[lem:stabilities:smooth\]](#lem:stabilities:smooth){reference-type="ref" reference="lem:stabilities:smooth"} and [\[lem:stabilities:proper\]](#lem:stabilities:proper){reference-type="ref" reference="lem:stabilities:proper"}, one reduces to the case of $X$ and $Y$ qcqs and separated over $S$ by looking Zariski-locally using the previous proposition. In this case, the proofs are sketched in [@HS §3]. They will be essential to us, so we will provide full details at the end of this section. 0◻
## {#section-5}
In this section and the next we study universal local acyclicity over regular 0- and 1-dimensional bases and recover some familiar results.
**Lemma 5**. *If $X$ is a variety, then any $\mathcal F\in D(X)$ is universally locally acyclic over $\mathop{\mathrm{Spec}}k$.*
As universal local acyclicity is $v$-local on the base, we may assume $k$ is algebraically closed. The conclusion is now immediate from [@HS Theorem 4.1]. 0◻
## {#section-6}
For étale sheaves, local acyclicity over a smooth curve is tantamount to the vanishing of all the vanishing cycles. The same is true with $\ell$-adic coefficients.
**Lemma 6**. *Let $f:X\to Y$ be a map locally of finite type with $Y$ a regular 1-dimensional scheme, and $\mathcal F\in D(X)$. Then $f$ is universally locally acyclic rel. $\mathcal F$ if and only if all the vanishing cycles are zero; i.e. if and only if $f$ is universally locally acyclic at every point of $|X|$ rel. $\mathcal F$.*
The criterion of Proposition [Proposition 3](#prop:HS4.4){reference-type="ref" reference="prop:HS4.4"}[\[4.4_rank1\]](#4.4_rank1){reference-type="ref" reference="4.4_rank1"} allows us to reduce to $Y=\mathop{\mathrm{Spec}}\mathcal O_{Y,y}$. Indeed, if $g:\mathop{\mathrm{Spec}}W\to Y$ is any map from the spectrum of a rank 1 absolutely integrally closed valuation ring, $g$ either factors via a point of $Y$ or via a surjective map to $\mathop{\mathrm{Spec}}\mathcal O_{Y,y}$ for some $y\in Y$ of codimension 1. As universal local acyclicity over a field is automatic by Lemma [Lemma 5](#lem:ula_field){reference-type="ref" reference="lem:ula_field"}, we may assume we are in the latter case.
So let $Y=\mathop{\mathrm{Spec}}\mathcal O_{Y,y}$. Let $V_y$ denote the normalization of the strict henselization $A$ of $\mathcal O_{Y,y}$ (with respect to some choice of geometric point centered on $y$) in ${(\mathop{\mathrm{Frac}}A)}^{\mathrm{alg}}$; $V_y$ is an absolutely integrally closed valuation ring. The maps $\mathcal O_{Y,y}\to W$ and $\mathcal O_{Y,y}\to V_y$ are $v$-covers, so since universal local acyclicity can be checked $v$-locally, $f_W:X\times_Y\mathop{\mathrm{Spec}}W\to\mathop{\mathrm{Spec}}W$ is universally locally acyclic rel. $\mathcal F$ if and only if $f_{V_y}$ is, and [@HS Theorem 4.1] tells us this is tantamount to the vanishing cycles (computed relative to $V_y$) being zero. 0◻
## {#section-7}
Let $E/\mathbf{Q}_\ell$ be a finite extension. As reduction modulo $\ell$ induces a symmetric monoidal functor $\mathcal C_{S,\mathcal O_E}\to\mathcal C_{S,\mathcal O_E/\ell}$ in the setting of §[3.1](#sec:C_S){reference-type="ref" reference="sec:C_S"}, it's clear that if $f:X\to S$ is a locally finitely-presented map of schemes and $\mathcal F\in D(X,\mathcal O_E)$ is universally locally acyclic over $S$, then $\mathcal F/\ell$ is, too. The converse holds as well.
**Lemma 7**. *If $f:X\to S$ is a locally finitely-presented map of schemes and $\mathcal F\in D(X,\Lambda)$ with $\Lambda=\mathcal O_E$, $E/\mathbf{Q}_\ell$ finite, then $f$ is universally locally acyclic rel. $\mathcal F$ if and only if it is rel. $\mathcal F/\ell$.*
We may assume $X$ is affine and finitely-presented over $S$, also affine, at which point the forward direction is a consequence of the fact that reduction modulo $\ell$ induces a symmetric monoidal functor $\mathcal C_{S,\Lambda}\to\mathcal C_{S,\Lambda/\ell}$. For the converse, we may assume by the criterion of Proposition [Proposition 3](#prop:HS4.4){reference-type="ref" reference="prop:HS4.4"}[\[4.4_rank1\]](#4.4_rank1){reference-type="ref" reference="4.4_rank1"} that $S$ is the spectrum of a rank 1 absolutely integrally closed valuation ring, and it suffices to know that $\mathcal F\to j_*j^*\mathcal F$ is an isomorphism, where $j$ is the (base extension of the) immersion of the generic point of $S$. Here, $j_*j^*\mathcal F$ is constructible by [@HS Theorem 4.1], so both $\mathcal F$ and $j_*j^*\mathcal F$ are derived $\ell$-complete. It therefore suffices by Lemma [2.1](#sec:cons_facts){reference-type="ref" reference="sec:cons_facts"} to know that $\mathcal F/\ell\to(j_*j^*\mathcal F)\otimes_{\mathbf{Z}_\ell}\mathbf{Z}_\ell/\ell
=j_*j^*(\mathcal F/\ell)$ is an isomorphism (the latter isomorphism as $\mathbf{Z}_\ell/\ell$ is perfect). This is guaranteed by the hypothesis that $f$ is universally locally acyclic rel. $\mathcal F/\ell$. 0◻
## {#section-8}
The following result was shown by Illusie [@Illusie Proposition 2.10] with torsion coefficients under the hypothesis of strong local acyclicity and with $g$ an open immersion. It holds most generally for étale sheaves, so let $\Lambda'$ denote some discrete ring killed by a power of $\ell$ and denote by $\mathcal D({X}_{\text{\upshape \'et}},\Lambda')$ the left-completion of the derived $\infty$-category of $\Lambda'$-modules on a scheme $X$ with homotopy category $D({X}_{\text{\upshape \'et}},\Lambda')$. Let $\mathcal D_{\mathrm{cons}}({X}_{\text{\upshape \'et}},\Lambda')
\subset\mathcal D({X}_{\text{\upshape \'et}},\Lambda')$ denote the full subcategory on those objects which are Zariski-locally lisse (i.e. dualizable in $\mathcal D({X}_{\text{\upshape \'et}},\Lambda')$) along a finite subdivision into constructible locally closed subschemes (see [@HRS §7.1]).
**Proposition 4**. *Let $f:X\to S$ and $g:Y\to S$ be morphisms of schemes, locally of finite presentation, and let $\mathcal F\in D_{\mathrm{cons}}({X}_{\text{\upshape \'et}},\Lambda')$ and $\mathcal G\in D({Y}_{\text{\upshape \'et}},\Lambda')$. Suppose $f$ is universally locally acyclic relative to $\mathcal F$. Then the natural map $\mathcal F\otimes f^*g_*\mathcal G\to
g_*(g^*\mathcal F\otimes f^*\mathcal G)$ is an isomorphism, where $f$ and $g$ continue to denote their base extensions $X\times_SY\to Y,X$.*
*The same holds for $\ell$-adic sheaves $\mathcal F\in D(X)$ and $\mathcal G\in D(Y)$ if $Y$ and $S$ are supposed qcqs and either $S$ noetherian quasi-excellent or $g$ proper.*
The latter hypotheses ensure that $g_*$ preserves constructibility [@BS Lemma 6.7.2]. Note that if $X$ and $S$ are qcqs and $f$ is separated, then an étale sheaf which is $f$-universally locally acyclic is necessarily already constructible [@HS Proposition 3.4(iii)]. Note also that the proposition implies the version with $D({Y}_{\text{\upshape \'et}},\Lambda')$ replaced by its ordinary non-left-completed version provided that $\mathcal G$ is bounded below (i.e. in $D^+$), and that left-completion is superfluous if ${X}_{\text{\upshape \'et}}$ has locally finite $\ell$-cohomological dimension by [@BS Proposition 3.3.7(2)].
Let's first consider the case of étale sheaves. We may assume $X$ and $S$ are affine. The object $(X,\mathcal F)$ is then dualizable in the version of the 2-category $\mathcal C_S$ where $D(?)$ is replaced by $D({?}_{\text{\upshape \'et}},\Lambda')$; i.e. setting (A) of [@HS]. We write $$\begin{aligned}
\mathcal F\otimes f^*g_*\mathcal G
&=\mathcal F\boxtimes_S g_*\mathcal G
=\mathbf D_{X/S}\mathbf D_{X/S}\mathcal F\boxtimes_Sg_*\mathcal G \\
&=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_X(\mathbf D_{X/S}\mathcal F,f^!g_*\mathcal G)
=g_*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{X\times_SY}(g^*\mathbf D_{X/S}\mathcal F,f^!\mathcal G) \\
&=g_*(\mathbf D_{X/S}\mathbf D_{X/S}\mathcal F\boxtimes_S\mathcal G)
=g_*(g^*\mathcal F\otimes f^*\mathcal G).
\end{aligned}$$ Here, the third isomorphism follows from [@HS Proposition 3.4(iv)]. We may check the isomorphism $\mathbf D_{X/S}\mathbf D_{X/S}\mathcal F\boxtimes_S\mathcal G
\to\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(g^*\mathbf D_{X/S}\mathcal F,f^!\mathcal G)$ locally on $X\times_SY$ and assume that $Y$ is also affine, at which point it follows from *loc. cit.*
The case of $\ell$-adic sheaves immediately reduces to the case of a finite extension of $\mathbf{Q}_\ell$. Then, with integral coefficients, it follows directly from the torsion étale case by reducing mod $\ell$ and using Lemma [Lemma 1](#lem:reduction_conservative){reference-type="ref" reference="lem:reduction_conservative"}. The case of rational coefficients can be handled by the same argument as in the torsion étale case, with the reference to [@HS Proposition 3.4(iv)] being replaced by one to Proposition [Proposition 6](#prop:HS_3.3_dualizable_internal_hom){reference-type="ref" reference="prop:HS_3.3_dualizable_internal_hom"} (so that $f^!\mathcal G$ means $(\lim_nf^!(\mathcal G_0/\ell^n))[\ell^{-1}]$, where $\mathcal G_0$ is some integral model for $\mathcal G$). 0◻
*Remark 1*. One recovers smooth base change as a special case of the previous proposition. Indeed, suppose $f$ is smooth and set $\mathcal F=\Lambda$. Then $f$ is universally locally acyclic rel. $\mathcal F$ by Corollary [Corollary 3](#cor:id_lisse){reference-type="ref" reference="cor:id_lisse"}[\[cor:id_lisse:smooth\]](#cor:id_lisse:smooth){reference-type="ref" reference="cor:id_lisse:smooth"}, and the proposition says that $f^*g_*\mathcal G\to g_*f^*\mathcal G$ is an isomorphism.
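Spelled out (with $f$ and $g$ on the right-hand side denoting the base extensions, as in the proposition), this is the base change map attached to the cartesian square $$\begin{tikzcd}
X\times_SY\arrow[r,"g"]\arrow[d,"f"'] & X\arrow[d,"f"] \\
Y\arrow[r,"g"'] & S
\end{tikzcd}$$ namely $f^*g_*\mathcal G\to g_*f^*\mathcal G$, which is an isomorphism because $f$ is smooth.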
## {#section-9}
From now until §[4](#sec:SS){reference-type="ref" reference="sec:SS"}, all schemes are assumed qcqs (and still defined over $\mathop{\mathrm{Spec}}\mathbf{Z}[\frac1\ell]$) unless explicitly indicated otherwise. Fix such a scheme $S$. Given a dualizable object $(X,\mathcal F)$ of $\mathcal C_S$, the dual of $(X,\mathcal F)$ has a concrete description in terms of the relative Verdier dual. Let $$\mathbf D_{X/S}(\mathcal F):=
\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{D(X_\text{\upshape pro\'et},\Lambda)}(\mathcal F,f^!\Lambda)
\in\mathcal D(X_\text{\upshape pro\'et},\Lambda).$$ Here, if $\Lambda=\mathcal O_E$ with $E/\mathbf{Q}_\ell$ a finite extension, $f^!\Lambda:=\lim_nf^!(\mathcal O_E/\ell^n)$ as an object of $\mathcal D(X_\text{\upshape pro\'et},\Lambda)$. Here, $\mathcal O_E/\ell^n$ is an étale sheaf, so the functor $f^!$ is defined, but $f^!(\mathcal O_E/\ell^n)$ needn't be constructible.
If $\Lambda=E$ with $E/\mathbf{Q}_\ell$ finite, $f^!\Lambda:=(\lim_nf^!(\mathcal O_E/\ell^n))\otimes_{\mathbf{Z}_\ell}\mathbf{Q}_\ell
=(\lim_nf^!(\mathcal O_E/\ell^n))\otimes_{\mathcal O_E}E
=\mathop{\mathrm{colim}}((\lim_nf^!(\mathcal O_E/\ell^n))\xrightarrow\ell(\lim_nf^!(\mathcal O_E/\ell^n))\xrightarrow\ell\ldots)=:\mathop{\mathrm{colim}}_\ell\lim_n(f^!\mathcal O_E/\ell^n)$, where these limits and colimits are taken in $\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E)$.
If $\mathcal F\in\mathcal D(X,\Lambda)$ with $\Lambda=\mathcal O_E$ and $E/\mathbf{Q}_\ell$ infinite algebraic, we may find a $\mathcal F'\in\mathcal D(X,\mathcal O_L)$, $L/\mathbf{Q}_\ell$ finite, so that $\mathcal F=\mathcal F'\otimes_{\mathcal O_L}\mathcal O_E$, and then $f^!\Lambda:=(f^!\mathcal O_L)\otimes_{\mathcal O_L}\mathcal O_E$, so that $\mathbf D_{X/S}(\mathcal F)=\mathbf D_{X/S}(\mathcal F')\otimes_{\mathcal O_L}\mathcal O_E$ [@HRS Lemma 5.3].
If $\Lambda=E$ with $E/\mathbf{Q}_\ell$ infinite algebraic, we may again approximate $\mathcal F$ by $\mathcal F'\in\mathcal D(X,L)$ and set $f^!\Lambda:=(f^!\mathcal O_E)\otimes_{\mathcal O_E}E
=(f^!\mathcal O_L)\otimes_{\mathcal O_L}E
=(f^!L)\otimes_LE$, so that $\mathbf D_{X/S}(\mathcal F)=\mathbf D_{X/S}(\mathcal F')\otimes_LE$.
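To keep these conventions straight, here is a summary of the definition of $f^!\Lambda$; it is nothing more than a restatement of the preceding paragraphs, with $L\subset E$ a subfield finite over $\mathbf{Q}_\ell$ over which $\mathcal F$ admits an approximation $\mathcal F'$, as above: $$f^!\Lambda=\begin{cases}
\lim_nf^!(\mathcal O_E/\ell^n)&\text{if }\Lambda=\mathcal O_E,\ E/\mathbf{Q}_\ell\text{ finite},\\
\big(\lim_nf^!(\mathcal O_E/\ell^n)\big)\otimes_{\mathcal O_E}E&\text{if }\Lambda=E,\ E/\mathbf{Q}_\ell\text{ finite},\\
(f^!\mathcal O_L)\otimes_{\mathcal O_L}\mathcal O_E&\text{if }\Lambda=\mathcal O_E,\ E/\mathbf{Q}_\ell\text{ infinite algebraic},\\
(f^!L)\otimes_LE&\text{if }\Lambda=E,\ E/\mathbf{Q}_\ell\text{ infinite algebraic}.
\end{cases}$$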
**Proposition 4** ([@HS Proposition 3.4]). *Let $f:X\to S$ be a separated map of finite presentation and $(X,\mathcal F)$ be dualizable in $\mathcal C_S$. Then the relative Verdier dual $\mathbf D_{X/S}(\mathcal F)$ is constructible and $(X,\mathbf D_{X/S}(\mathcal F))$ is the dual of $(X,\mathcal F)$ in $\mathcal C_S$. In particular, the biduality map $\mathcal F\to\mathbf D_{X/S}(\mathbf D_{X/S}(\mathcal F))$ is an isomorphism, and the formation of $\mathbf D_{X/S}(\mathcal F)$ commutes with any base change in $S$, as well as with reduction modulo $\ell$ if $\Lambda=\mathcal O_E$ with $E/\mathbf{Q}_\ell$ finite.*
**Corollary 1**. *Let $f:X\to S$ be a map of (not necessarily separated) varieties with $S$ smooth, and assume $f$ separated or $X$ smooth. Let $\mathcal F\in D(X)$. Then $f$ is universally locally acyclic rel. $\mathcal F$ if and only if it is rel. $D\mathcal F$, where $D$ denotes Verdier duality.*
We may assume $X$ and $S$ connected of dimension $r$ and $d$, respectively. Let $a,b:S,X\to\mathop{\mathrm{Spec}}k$ be the structural maps. Then $a^!\Lambda$ is defined, equals $\Lambda(d)[2d]$ and is a dualizing complex on $S$. Therefore if $f$ is separated, $D\mathcal F:=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,b^!\Lambda=f^!a^!\Lambda)$ is given by $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,f^!\Lambda)(d)[2d]
=\mathbf D_{X/S}(\mathcal F)(d)[2d]$, which is universally locally acyclic over $S$ if and only if $\mathcal F$ is.[^18]
If $X$ is smooth, then $b^!\Lambda$ is defined, equals $\Lambda(r)[2r]$ and is a dualizing complex on $X$. For the purposes of checking universal local acyclicity, we may assume $X$ and $S$ are affine and then run the previous argument. 0◻
## {#sec:C_S_internal_homs}
Fix a base scheme $S$. In view of Lemma [3.2](#sec:duals_in_2cats){reference-type="ref" reference="sec:duals_in_2cats"}, it's useful to understand the internal mapping objects in $\mathcal C_S$.
**Proposition 5**. *Fix a finite extension $L$ of $\mathbf{Q}_\ell$. Suppose $(X,\mathcal F)$ and $(Y,\mathcal G)$ are in $\mathcal C_S$ with $\mathcal O_L$-coefficients. Then the internal Hom $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{\mathcal C_S}((X,\mathcal F),(Y,\mathcal G))$ exists and is given by $$(X\times_SY,\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{\text{\upshape pro\'et}}(\pi_1^*\mathcal F,\lim_n\pi_2^!(\mathcal G/\ell^n)))$$ if this sheaf is constructible.*
*Similarly, if $(X,\mathcal F)$ and $(Y,\mathcal G)$ are in $\mathcal C_S$ with $L$-coefficients, then the internal Hom $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{\mathcal C_S}((X,\mathcal F),(Y,\mathcal G))$ exists and is given by $$(X\times_SY,\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{\text{\upshape pro\'et}}(\pi_1^*\mathcal F,(\lim_n\pi_2^!(\mathcal G_0/\ell^n))[\ell^{-1}]))$$ if this sheaf is constructible, where $\mathcal G_0$ is any integral model for $\mathcal G$.*
Let's first consider integral coefficients. To verify the claim, we have to check that the adjunction $$\begin{gathered}
\label{eq:hom_categories}\tag{$\ast$}
\mathop{\mathrm{Hom}}_{\mathcal C_S}((X,\mathcal F)\otimes(Y,\mathcal G),(Z,\mathcal M)) \\\cong
\mathop{\mathrm{Hom}}_{\mathcal C_S}((X,\mathcal F),(Y\times_SZ,\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_Y^*\mathcal G,\lim_n\pi_Z^!(\mathcal M/\ell^n))))\end{gathered}$$ determines an isomorphism of categories. Objects of the first category are objects $\mathcal K\in D(X\times_SY\times_SZ)$ equipped with an arrow $\pi_{Z!}(\pi_{XY}^*(\mathcal F\boxtimes \mathcal G)\otimes\mathcal K)\to\mathcal M$ in $D(Z)$. Objects of the latter category are objects $\mathcal K\in D(X\times_SY\times_SZ)$ equipped with an arrow $\pi_{YZ!}(\pi_X^*\mathcal F\otimes\mathcal K)\to\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_Y^*\mathcal G,\lim_n\pi_Z^!(\mathcal M/\ell^n))$ (where $\pi_{YZ}:X\times_SY\times_SZ\to Y\times_SZ$ is the projection), i.e. with an arrow $\pi_{YZ!}(\pi_X^*\mathcal F\otimes\mathcal K)\otimes\pi_Y^*\mathcal G
=\pi_{YZ!}(\pi_{XY}^*(\mathcal F\boxtimes\mathcal G)\otimes\mathcal K)\to\lim_n\pi_Z^!(\mathcal M/\ell^n)$. Given a separated finitely-presented map $h:Y\to X$ and constructible sheaves $\mathcal A$ and $\mathcal B$ on $Y$ and $X$, respectively, there is an isomorphism of Hom complexes $\mathop{\mathrm{Hom}}(\mathcal A,\lim_n h^!(\mathcal B/\ell^n))
=\mathop{\mathrm{Hom}}(h_!\mathcal A,\mathcal B)$ in $\mathcal D(\Lambda)$ obtained by passing the limit outside and using the adjunction on the level of étale sheaves. Therefore the objects in the categories [\[eq:hom_categories\]](#eq:hom_categories){reference-type="eqref" reference="eq:hom_categories"} are the same.
Arrows of the first category are maps $\mathcal K\to\mathcal H$ in $D(X\times_SY\times_SZ)$ making the triangle below commute. $$\begin{tikzcd}
\pi_{Z!}(\pi_{XY}^*(\mathcal F\boxtimes\mathcal G)\otimes\mathcal K)\arrow[d]\arrow[r]&\mathcal M \\
\pi_{Z!}(\pi_{XY}^*(\mathcal F\boxtimes\mathcal G)\otimes\mathcal H)\arrow[ur]
\end{tikzcd}$$ Arrows of the second category are maps $\mathcal K\to\mathcal H$ making the triangle below commute. $$\begin{tikzcd}
\pi_{YZ!}(\pi_X^*\mathcal F\otimes\mathcal K)\arrow[r]\arrow[d]&
\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_Y^*\mathcal G,\lim_n\pi_Z^!(\mathcal M/\ell^n)) \\
\pi_{YZ!}(\pi_X^*\mathcal F\otimes\mathcal H)\arrow[ur]
\end{tikzcd}$$ These are seen to be the same under adjunction; therefore the two $\mathop{\mathrm{Hom}}$ categories are indeed isomorphic.
With $L$-coefficients the argument is the same using also [@HRS Lemma 5.3]. With $\mathcal A_0$ and $\mathcal B_0$ in $\mathcal D(Y,\mathcal O_L)$ and $\mathcal D(X,\mathcal O_L)$, respectively, $\mathcal A=\mathcal A_0[\ell^{-1}]$ etc. and $h$ as above, we have isomorphisms in $\mathcal D(L)$ of the type $$\begin{aligned}
\mathop{\mathrm{Hom}}(\mathcal A,(\lim_n h^!(\mathcal B_0/\ell^n))[\ell^{-1}])
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{Hom}}(\mathcal A_0/\ell^n,h^!(\mathcal B_0/\ell^n)) \\
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{Hom}}(h_!(\mathcal A_0/\ell^n),\mathcal B_0/\ell^n) \\
&=\mathop{\mathrm{Hom}}(h_!\mathcal A_0,\mathcal B)=\mathop{\mathrm{Hom}}(h_!\mathcal A,\mathcal B).
\end{aligned}$$ ◻
The following corollary is the $\ell$-adic analogue of [@HS Proposition 3.3] and follows from Lemma [3.2](#sec:duals_in_2cats){reference-type="ref" reference="sec:duals_in_2cats"} in light of the previous result.
**Corollary 2**. *Suppose $f:X\to S$ is a separated finitely-presented morphism of schemes and $(X,\mathcal F)$ is in $\mathcal C_S$ with $\Lambda=L$ or $\mathcal O_L$, $L$ a finite extension of $\mathbf{Q}_\ell$. Then $(X,\mathcal F)$ is dualizable if $\mathbf D_{X/S}(\mathcal F)=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,f^!\Lambda)$ is constructible and the natural map $$\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal F
\longrightarrow\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\pi_2^!\mathcal F)$$ is an isomorphism in $D(X\times_SX)$, where as usual $\pi_2^!\mathcal F$ denotes the pro-étale sheaf $\lim_n\pi_2^!(\mathcal F/\ell^n)$ with integral coefficients or $(\lim_n\pi_2^!(\mathcal F_0/\ell^n))[\ell^{-1}]$ with $L$-coefficients, where $\mathcal F_0$ is any integral model for $\mathcal F$, and likewise for $f^!\Lambda$.*
## {#section-10}
The following in particular provides a converse to the previous result.
**Proposition 6**. *Suppose $L/\mathbf{Q}_\ell$ is finite and $(X,\mathcal F)$ is dualizable in $\mathcal C_S$ with $\mathcal O_L$-coefficients. Then for every $(Y,\mathcal G)$ in $\mathcal C_S$, $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\lim_n\pi_2^!(\mathcal G/\ell^n))$ is constructible on $X\times_SY$ and we have via the canonical map an isomorphism $$\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G\xlongrightarrow\sim
\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\lim_n\pi_2^!(\mathcal G/\ell^n)).$$*
*Likewise, if $(X,\mathcal F)$ is dualizable in $\mathcal C_S$ with $L$-coefficients, then for every $(Y,\mathcal G)$ in $\mathcal C_S$, $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,(\lim_n\pi_2^!\mathcal G_n)[\ell^{-1}])$ is constructible on $X\times_SY$ and isomorphic to $\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G$ via the canonical map, where $\mathcal G_n:=\mathcal G_0/\ell^n$ for $\mathcal G_0$ any integral model for $\mathcal G$.*
Assume first $\Lambda$ is integral. The dual of $(X,\mathcal F)$ is $(X,\mathbf D_{X/S}\mathcal F)$. Then the point is that $(X,\mathbf D_{X/S}(\mathcal F))\otimes(Y,\mathcal G)
=(X\times_SY,\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G)$ is an internal Hom in $\mathcal C_S$. As $\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G$ is constructible, $\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G=\lim_n((\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G)\otimes_{\mathbf{Z}_\ell}\mathbf{Z}_\ell/\ell^n)$. As reduction modulo $\ell^n$ is a symmetric monoidal functor on the categories $\mathcal C_S$, it preserves internal Homs; therefore $(\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G)\otimes_{\mathbf{Z}_\ell}\mathbf{Z}_\ell/\ell^n=
\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*(\mathcal F/\ell^n),\pi_2^!(\mathcal G/\ell^n))
=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\pi_2^!(\mathcal G/\ell^n))$,[^19] and the sheaf $\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G
=\lim_n\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\pi_2^!(\mathcal G/\ell^n))
=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\lim_n\pi_2^!(\mathcal G/\ell^n))$ is constructible.
Now suppose $\Lambda$ is rational and suppose $(X,\mathcal F)$ is dualizable. By [@HS Proposition 3.8], there is a $v$-cover $S'\to S$ and an integral model $\widetilde{\mathcal F}_0$ for $\mathcal F|_{X'}$ which is universally locally acyclic over $S'$. We also pick integral models $\mathcal F_0,\mathcal G_0$ for $\mathcal F,\mathcal G$ on $X$ and $Y$ (not necessarily universally locally acyclic) and let $\mathcal F_n:=\mathcal F_0/\ell^n$, etc. We wish to show the natural map $\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G\to
\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,(\lim_n\pi_2^!\mathcal G_n)[\ell^{-1}])$ is an equivalence. We check on sections over $j:U\to X\times_SY$ in $(X\times_SY)_\text{\upshape pro\'et}$. The statement is local, so we may assume $X$, $Y$ and $S$ are affine and that $U=\lim_iU_i\to X\times_SY$ is a limit of affine étale maps $j_i:U_i\to X\times_SY$. We denote by $S'^{\bullet/S}$ the Čech nerve associated to $S'\to S$ as in §[2.1](#sec:cons_facts){reference-type="ref" reference="sec:cons_facts"}, and let $X'^{\bullet/X}$ denote the Čech nerve associated to the $v$-cover $X':=X\times_SS'\to X$ obtained via base change, so that $X'^{\bullet/X}=X\times_SS'^{\bullet/S}$, and likewise for $Y$. Let $p_1,p_2,\pi_1,\pi_2$ be defined by the diagram on the left below $$\begin{tikzcd}[column sep=0,row sep=15]
&U\arrow[d,"j"]\arrow[ddl,bend right,"p_1"']\arrow[ddr,bend left,"p_2"] \\
&X\times_SY\arrow[dl,"\pi_1"]\arrow[dr,"\pi_2"'] \\
X\arrow[dr]&&Y\arrow[dl] \\ &S,
\end{tikzcd} \qquad \qquad \begin{tikzcd}[column sep=-30]
&U'^{\bullet/U}=U\times_SS'^{\bullet/S}\arrow[d,"j"]\arrow[ddl,bend right=40,"p_1"'] \arrow[ddr,bend left=40,"p_2"] \\
&X\times_SY\times_SS'^{\bullet/S}\arrow[dl,"\pi_1"]\arrow[dr,"\pi_2"'] \\
X'^{\bullet/X}&&Y'^{\bullet/Y},
\end{tikzcd}$$ and let $p_{1i},p_{2i}$ be defined similarly in terms of $j_i$ in lieu of $j$. Put $U':=U\times_SS'$ and let $j,p_1,p_2$ continue to describe base extensions as in the diagram to the right above, and likewise for $j_i,p_{1i},p_{2i}$.
We use the result for integral coefficients and write $$\begin{aligned}
\Gamma(U,\mathbf D_{X/S}(\mathcal F)\boxtimes_S\mathcal G)
&=\lim_\Delta\Gamma(U'^{\bullet/U},
j^*(\mathbf D_{X'^{\bullet/X}/S'^{\bullet/S}}(\mathcal F)\boxtimes_S\mathcal G)) \\
&=\lim_\Delta\Gamma(U'^{\bullet/U},
j^*(\mathbf D_{X'^{\bullet/X}/S'^{\bullet/S}}(\widetilde{\mathcal F}_0)\boxtimes_S\mathcal G_0)\otimes_{\mathcal O_L}L) \\
&=\lim_\Delta\Gamma(U'^{\bullet/U},
j^*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\widetilde{\mathcal F}_0,\pi_2^!\mathcal G_0)\otimes_{\mathcal O_L}L) \\
&=\lim_\Delta\mathop{\mathrm{Hom}}_{U'^{\bullet/U}}(j^*\pi_1^*\mathcal F,j^*\pi_2^!\mathcal G) \\
&=\lim_\Delta\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{Hom}}_{U'^{\bullet/U}}(j^*\pi_1^*\mathcal F_n,j^*\pi_2^!\mathcal G_n) \\
&=\lim_\Delta\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{Hom}}_{X\times_SY\times_SS'^{\bullet/S}}(\pi_1^*\mathcal F_n,j_*j^*\pi_2^!\mathcal G_n) \\
&=\lim_\Delta\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{colim}}_i\mathop{\mathrm{Hom}}_{X\times_SY\times_SS'^{\bullet/S}}(\pi_1^*\mathcal F_n,j_{i*}j_i^*\pi_2^!\mathcal G_n) \\
&=\lim_\Delta\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{colim}}_i\mathop{\mathrm{Hom}}_{U_i'^{\bullet/U_i}}(p_{1i}^*\mathcal F_n,p_{2i}^!\mathcal G_n) \\
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{colim}}_i\lim_\Delta\mathop{\mathrm{Hom}}_{Y'^{\bullet/Y}}(p_{2i!}p_{1i}^*\mathcal F_n,\mathcal G_n) \\
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{colim}}_i\mathop{\mathrm{Hom}}_Y(p_{2i!}p_{1i}^*\mathcal F_n,\mathcal G_n) \\
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{colim}}_i\mathop{\mathrm{Hom}}_{X\times_SY}(\pi_1^*\mathcal F_n,j_{i*}j_i^*\pi_2^!\mathcal G_n) \\
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{Hom}}_{X\times_SY}(\pi_1^*\mathcal F_n,j_*j^*\pi_2^!\mathcal G_n) \\
&=\mathop{\mathrm{colim}}_\ell\lim_n\mathop{\mathrm{Hom}}_U(j^*\pi_1^*\mathcal F_n,j^*\pi_2^!\mathcal G_n) \\
&=\mathop{\mathrm{Hom}}_U(j^*\pi_1^*\mathcal F,j^*\pi_2^!\mathcal G) \\
&=\Gamma(U,\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{X\times_SY}(\pi_1^*\mathcal F,(\lim_n\pi_2^!\mathcal G_n)[\ell^{-1}])).\end{aligned}$$ The point is that we need both arguments of the $\mathop{\mathrm{Hom}}$ to be constructible to apply $v$-descent. This is true for $\mathop{\mathrm{Hom}}(p_{2i!}p_{1i}^*\mathcal F_n,\mathcal G_n)$ because $p_{2i}$ is separated of finite presentation (this is why we pass from $j_*j^*$ to $\mathop{\mathrm{colim}}_i j_{i*}j_i^*$). We can pass the totalization through filtered colimits because the totalization behaves like a finite limit as $\mathcal D^{\geq0}(\mathcal O_L/\ell^n)$ is compactly generated by cotruncated objects [@BM Lemma 3.7]. Finally, the isomorphism $\mathop{\mathrm{colim}}_ij_{i*}j_i^*\pi_2^!\mathcal G_n\xrightarrow\sim j_*j^*\pi_2^!\mathcal G_n$ expresses the finitaryness of étale cohomology for bounded below complexes [@Stacks [`0GIV`](https://stacks.math.columbia.edu/tag/0GIV)]. 0◻
**Corollary 3**.
1. *[\[cor:id_lisse:id\]]{#cor:id_lisse:id label="cor:id_lisse:id"} Let $X$ be a (not necessarily qcqs) scheme and $\mathcal F\in D(X)$. Then $\mathcal F$ is universally locally acyclic over $X$ if and only if $\mathcal F$ is lisse.*
2. *[\[cor:id_lisse:smooth\]]{#cor:id_lisse:smooth label="cor:id_lisse:smooth"} Let $f:X\to S$ be a smooth map of (not necessarily qcqs) schemes and $\mathcal F\in D(X)$ lisse. Then $f$ is universally locally acyclic relative to $\mathcal F$.*
[\[cor:id_lisse:id\]](#cor:id_lisse:id){reference-type="ref" reference="cor:id_lisse:id"} As the properties of being universally locally acyclic and of being lisse are both Zariski-local on the source [@HRS Lemma 4.5], we may assume $X$ is qcqs. The property of $\mathcal F$ being universally locally acyclic over $X$ implies that for any $\mathcal G$ in $D(X)$, $\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,\Lambda)\otimes\mathcal G
\to\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,\mathcal G)$ is an isomorphism by the previous result,[^20] which implies that $\mathcal F$ is dualizable in $\mathcal D(X)$ [@HA Lemma 4.6.1.6], and conversely for $\mathcal G=\mathcal F$ by Corollary [Corollary 2](#HS:3.3_corollary){reference-type="ref" reference="HS:3.3_corollary"}.[^21]
[\[cor:id_lisse:smooth\]](#cor:id_lisse:smooth){reference-type="ref" reference="cor:id_lisse:smooth"} Follows from [\[cor:id_lisse:id\]](#cor:id_lisse:id){reference-type="ref" reference="cor:id_lisse:id"} in view of Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}[\[lem:stabilities:base\]](#lem:stabilities:base){reference-type="ref" reference="lem:stabilities:base"}. 0◻
Of course, the same is true for constructible étale sheaves of $\mathbf{Z}/\ell^n$-modules using [@HS Propositions 3.3 & 3.4(iv)].
## {#section-11}
We can now finish the proof of Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}.
The previous reductions allow us to assume $X$, $Y$ and $S$ are all qcqs and that $X$ and $Y$ are separated over $S$. We also reduce to the case where $\Lambda$ is a finite extension $L$ of $\mathbf{Q}_\ell$ or its ring of integers. Let $\mathcal F_n:=\mathcal F/\ell^n$ if $\Lambda$ is integral.
[\[lem:stabilities:smooth\]](#lem:stabilities:smooth){reference-type="ref" reference="lem:stabilities:smooth"} Suppose $f$ has relative dimension $d$. Then $f^*\mathbf D_{Y/S}(\mathcal F)=\mathbf D_{X/S}(f^*\mathcal F)(-d)[-2d]$ and $(f\times_Sf)^*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{Y\times_SY}(\pi_1^*\mathcal F,\pi_2^!\mathcal F)
=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{X\times_SX}(\pi_1^*f^*\mathcal F,\pi_2^!f^*\mathcal F)(-d)[-2d]$, where $\pi_2^!\mathcal F$ continues to denote $\lim_n\pi_2^!\mathcal F_n$ if $\Lambda$ is integral and $(\lim_n\pi_2^!\mathcal F_n)[\ell^{-1}]$ if $\Lambda$ is rational. In particular, both are constructible, and the morphism $$\mathbf D_{X/S}(f^*\mathcal F)\boxtimes_S f^*\mathcal F\to
\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*f^*\mathcal F,\pi_2^!f^*\mathcal F)$$ is an isomorphism, so we can conclude by Corollary [Corollary 2](#HS:3.3_corollary){reference-type="ref" reference="HS:3.3_corollary"} that $f^*\mathcal F$ is universally locally acyclic over $S$.
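For the first of these identities, the bookkeeping we have in mind is the following sketch, where $h:Y\to S$ denotes the structural map (the letter $h$ is ours) and where we grant that Poincaré duality $f^!\simeq f^*(d)[2d]$ and the standard compatibility $f^!\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal A,\mathcal B)\simeq\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(f^*\mathcal A,f^!\mathcal B)$ apply in the present $\ell$-adic setting, levelwise in the $\mathcal F_n$ and passed through the relevant limits and colimits as elsewhere in this section: $$f^*\mathbf D_{Y/S}(\mathcal F)(d)[2d]
=f^!\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,h^!\Lambda)
\simeq\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(f^*\mathcal F,f^!h^!\Lambda)
=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(f^*\mathcal F,(h\circ f)^!\Lambda)
=\mathbf D_{X/S}(f^*\mathcal F),$$ which is the first identity after twisting by $(-d)[-2d]$.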
[\[lem:stabilities:proper\]](#lem:stabilities:proper){reference-type="ref" reference="lem:stabilities:proper"} Local Verdier duality gives $f_*\mathbf{D}_{X/S}(\mathcal F)=\mathbf{D}_{Y/S}(f_*\mathcal F)$. In the case of integral coefficients, that and the Künneth formula allow us to write $$\begin{aligned}
\mathbf{D}_{Y/S}(f_*\mathcal F)\boxtimes_S f_*\mathcal F
&=(f\times_S f)_*(\mathbf{D}_{X/S}(\mathcal F)\boxtimes_S\mathcal F) \\
&\xrightarrow\sim(f\times_S f)_*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_1^*\mathcal F,\pi_2^!\mathcal F) \\
&=\lim_n(f\times_S\mathop{\mathrm{id}})_*(\mathop{\mathrm{id}}\times_Sf)_*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{X\times_SX}((\mathop{\mathrm{id}}\times_Sf)^*\pi_1^*\mathcal F_n,\pi_2^!\mathcal F_n) \\
&=\lim_n(f\times_S\mathop{\mathrm{id}})_*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{X\times_SY}(\pi_1^*\mathcal F_n,\pi_2^!f_*\mathcal F_n) \\
&=\lim_n(f\times_S\mathop{\mathrm{id}})_*\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{X\times_SY}(\pi_1^*\mathcal F_n,(f\times_S\mathop{\mathrm{id}})^!\pi_2^!f_*\mathcal F_n) \\
&=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}_{Y\times_SY}(\pi_1^*f_*\mathcal F,\pi_2^!f_*\mathcal F),\end{aligned}$$ where here we use $\pi_1$ for both projections in the diagram below $$\begin{tikzcd}[column sep=10]
X\times_S X\arrow[rr,"\mathop{\mathrm{id}}\times_Sf"]
\arrow[dr,"\pi_1"']&&X\times_SY\arrow[dl,"\pi_1"] \\ &X,
\end{tikzcd}$$ and likewise for $\pi_2$. Of course, the same argument works for rational coefficients after choosing an integral model $\mathcal F_0$ and pulling out both the limit and the colimit in $\pi_2^!\mathcal F=\mathop{\mathrm{colim}}_\ell\lim_n\pi_2^!(\mathcal F_0/\ell^n)$, allowing us to conclude by Corollary [Corollary 2](#HS:3.3_corollary){reference-type="ref" reference="HS:3.3_corollary"} that $f_*\mathcal F$ is universally locally acyclic over $S$.
This completes the proof of Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}. 0◻
# Singular support {#sec:SS}
In this section, we construct the singular support of an $\ell$-adic sheaf on a smooth variety and prove Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}. We start with the case of an integral sheaf, as it quickly reduces to the torsion étale case. We use without further ado the definitions of §[1.3](#sec:transversality_def){reference-type="ref" reference="sec:transversality_def"} & §[1.4](#sec:ss_def){reference-type="ref" reference="sec:ss_def"}.
## {#sec:integral_SS}
Suppose $X$ is a smooth variety, $\Lambda=\mathcal O_E$ with $E/\mathbf{Q}_\ell$ a finite extension, and $\mathcal F\in D(X,\Lambda)$. Then $\mathcal F/\ell$ is an étale $\mathbf{Z}/\ell$-sheaf, and has a notion of singular support. It follows from Lemma [Lemma 7](#lem:ula_mod_ell){reference-type="ref" reference="lem:ula_mod_ell"} that a test pair is $\mathcal F$-acyclic if and only if it is $\mathcal F/\ell$-acyclic. Therefore $SS(\mathcal F)$ exists and equals $SS(\mathcal F/\ell)$. For the same reason (or by Lemma [Lemma 3](#lem:vanishing_cycles_mod_ell){reference-type="ref" reference="lem:vanishing_cycles_mod_ell"}), we have equality of the weak singular supports. The equality $SS(\mathcal F)=SS(D\mathcal F)$ follows from Corollary [Corollary 1](#cor:verdier_duality){reference-type="ref" reference="cor:verdier_duality"} in view of the equality $SS(\mathcal F)=SS^w(\mathcal F)$. The rest of Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"} for $\mathcal F$ follows from the corresponding facts for $\mathcal F/\ell$.
Now suppose $\Lambda=\mathcal O_L$ with $L$ an infinite algebraic extension of $\mathbf{Q}_\ell$. Then we can approximate $\mathcal F$ by some $\mathcal F'\in D(X,\mathcal O_E)$, $E/\mathbf{Q}_\ell$ finite, and $\mathcal F'$ has singular support satisfying Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}. To prove the theorem for $\mathcal F$, it therefore suffices to show the following
**Lemma 8**. *Let $f:X\to S$ be a locally finitely-presented morphism of schemes. Let $\mathbf{Q}_\ell\subset E\subset L$ with $E/\mathbf{Q}_\ell$ finite and $L/E$ algebraic, and let $\mathcal G'\in D(X,\mathcal O_E)$. Then $\mathcal G'$ is universally locally acyclic over $S$ if and only if $\mathcal G:=\mathcal G'\otimes_{\mathcal O_E}\mathcal O_L$ is.*
We may assume $X$ and $S$ are qcqs and that $f$ is separated. As extension of scalars from $\mathcal O_E$ to $\mathcal O_L$ induces a symmetric monoidal functor $\mathcal C_{S,\mathcal O_E}\to\mathcal C_{S,\mathcal O_L}$, the forward direction is obvious. For the converse, suppose $\mathcal G$ is universally locally acyclic over $S$. We may suppose by Proposition [Proposition 3](#prop:HS4.4){reference-type="ref" reference="prop:HS4.4"}[\[4.4_rank1\]](#4.4_rank1){reference-type="ref" reference="4.4_rank1"} that $S$ is a rank 1 absolutely integrally closed valuation ring and that $\mathcal G\to j_*j^*\mathcal G$ is an isomorphism, where $j$ is the (base change of the) immersion of the generic point of $S$, and we must show that the same is true for $\mathcal G'$. As the cone $\mathcal K':=\mathop{\mathrm{cofib}}(\mathcal G'\to j_*j^*\mathcal G')$ is a constructible $\mathcal O_E$-sheaf by [@HS Theorem 4.1], Lemma [Lemma 2](#lem:stalks){reference-type="ref" reference="lem:stalks"} guarantees it will have a nonzero stalk if it's nonzero, say at a geometric point $x\to X$. As $\mathcal O_E\subset\mathcal O_L$ is faithfully flat and $\mathcal K_x
=\Gamma(X_x,\mathcal K:=\mathcal K'\otimes_{\mathcal O_E}\mathcal O_L)
=\Gamma(X_x,\mathcal K')\otimes_{\mathcal O_E}\mathcal O_L
=\mathcal K'_x\otimes_{\mathcal O_E}\mathcal O_L$, $\mathcal K'_x=0$ if and only if $\mathcal K_x=0$. As $\mathcal K=\mathop{\mathrm{cofib}}(\mathcal G\to j_*j^*\mathcal G)=0$ by assumption, we may conclude. 0◻
This completes the proof of Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"} for integral $\Lambda$, modulo the first sentence of Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:constituents\]](#th:SS:constituents){reference-type="ref" reference="th:SS:constituents"} which will be addressed in §[4.7](#sec:SSw){reference-type="ref" reference="sec:SSw"} below.
## {#section-12}
We turn now to the case of rational $\Lambda$. Here, picking an integral model and computing its singular support will generally give the wrong answer, as is seen by taking the direct sum of a torsion-free $\mathbf{Z}_\ell$-local system with a torsion étale constructible sheaf which is not lisse: the rationalization should have singular support given by the zero section only, but the singular support of such an integral model will have other irreducible components. To remedy this, we analyze Beilinson's constructions in [@Sasha], elaborating on some of them, and explain why they continue to work with rational $\ell$-adic coefficients.
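To make the phenomenon concrete (the notation $\mathcal L_0$, $\mathcal T$ is local to this example): let $\mathcal L_0$ be a nonzero torsion-free $\mathbf{Z}_\ell$-local system on a smooth variety $X$ and $\mathcal T$ a torsion constructible étale sheaf on $X$ which is not lisse. Since the sheaves micro-supported on a given $C$ form a thick subcategory (Lemma [Lemma 2](#lem:B2.1){reference-type="ref" reference="lem:B2.1"}[\[lem:B2.1:thick\]](#lem:B2.1:thick){reference-type="ref" reference="lem:B2.1:thick"} below), singular supports are additive under direct sums, so $$SS(\mathcal L_0\oplus\mathcal T)=SS(\mathcal L_0)\cup SS(\mathcal T)\supsetneq T^*_XX,
\qquad\text{whereas}\qquad
SS\big((\mathcal L_0\oplus\mathcal T)[\ell^{-1}]\big)=SS\big(\mathcal L_0[\ell^{-1}]\big)=T^*_XX,$$ the point being that $\mathcal T[\ell^{-1}]=0$ while $\mathcal T$ contributes components of $SS(\mathcal L_0\oplus\mathcal T)$ outside the zero section.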
Let $X$ be a smooth variety and $\mathcal F$ in $D(X)$. Let $\mathcal C(\mathcal F)$ denote the set of those conical closed subsets $C$ of $T^*X$ so that $\mathcal F$ is micro-supported on $C$. The proof that $T^*X\in\mathcal C(\mathcal F)$ is unchanged from that of [@Sasha Lemma 1.3], and uses Lemma [Lemma 5](#lem:ula_field){reference-type="ref" reference="lem:ula_field"}, the stability of universal local acyclicity under base change, and Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}[\[lem:stabilities:smooth\]](#lem:stabilities:smooth){reference-type="ref" reference="lem:stabilities:smooth"}.
### Legendre & Radon transforms {#legendre-radon-transforms .unnumbered}
We now recall the definitions of the Legendre and Radon transforms, referring the reader to [@Sasha §1.6] for more detail. Let $V$ be a vector space of dimension $n+1$, $V^\vee$ its dual, $\mathbf{P}$ and $\mathbf{P}^\vee$ the corresponding projective spaces, $Q\hookrightarrow\mathbf{P}\times\mathbf{P}^\vee$ the incidence correspondence, $p,p^\vee:Q\rightrightarrows\mathbf{P},\mathbf{P}^\vee$ the projections.
The *Legendre transform* is a pair of identifications $P(T^*\mathbf{P})\xleftarrow\sim Q\xrightarrow\sim P(T^*\mathbf{P}^\vee)$. Given a point[^22] $x\in\mathbf{P}$ and a line (through the origin) $L\subset T^*_x\mathbf{P}$, we obtain first a line through $x$ in $\mathbf{P}$ and then a hyperplane in $\mathbf{P}$ containing $x$ given as the perpendicular to that line. This hyperplane determines a point $x^\vee\in\mathbf{P}^\vee$. The point $x$ determines a hyperplane $x^\perp$ in $\mathbf{P}^\vee$, whose conormal gives rise to a line $L^\vee$ in $T^*_{x^\vee}\mathbf{P}^\vee$. The Legendre transform sends $(x,L)\mapsto(x^\vee,L^\vee)$ and running the same algorithm again on $(x^\vee,L^\vee)$ gives its inverse.
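Concretely, writing $H_{x^\vee}\subset\mathbf{P}$ for the hyperplane classified by a point $x^\vee\in\mathbf{P}^\vee$ (a notation we use only in this display), the first identification is the familiar conormal description $$Q=\{(x,x^\vee)\in\mathbf{P}\times\mathbf{P}^\vee\mid x\in H_{x^\vee}\}\xrightarrow{\ \sim\ }P(T^*\mathbf{P}),\qquad
(x,x^\vee)\longmapsto\big(x,\,T^*_{H_{x^\vee}}\mathbf{P}\big|_x\subset T^*_x\mathbf{P}\big),$$ and the second is obtained symmetrically, using the hyperplane $x^\perp\subset\mathbf{P}^\vee$ determined by $x$ and its conormal direction at $x^\vee$.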
The *Radon transform* functors $R,R^\vee$ are defined as $R:=p^\vee_*p^*[n-1]:D(\mathbf{P})\to D(\mathbf{P}^\vee)$ and $R^\vee:=p_*p^{\vee*}[n-1]:D(\mathbf{P}^\vee)\to D(\mathbf{P})$. The functors $R$ and $R^\vee(n-1)$ are both left and right adjoint to each other. Brylinski showed [@Brylinski Corollaire 9.15] that if $\mathcal M$ is a perverse sheaf on $\mathbf{P}$, then ${^p\mathcal H^i}(R(\mathcal M))$ is lisse for $i\ne0$, and that the functor $?\mapsto{^p\mathcal H^0}R(?)$ induces an equivalence between the quotient of $\mathop{\mathrm{Perv}}(\mathbf{P})$ by the (full) subcategory of lisse perverse sheaves[^23] and the similar quotient of $\mathop{\mathrm{Perv}}(\mathbf{P}^\vee)$. This implies in particular that the cones of the adjunction morphisms $RR^\vee(n-1)\to\mathop{\mathrm{id}}$ and $\mathop{\mathrm{id}}\to R^\vee R(n-1)$ (on all of $D(\mathbf{P})$ and $D(\mathbf{P}^\vee)$) are lisse, that if $\mathcal M$ is lisse then $R(\mathcal M)$ is lisse, and that if $\mathcal M$ is irreducible and not lisse, then ${^p\mathcal H^0}R(\mathcal M)$ has a single non-lisse irreducible perverse sheaf in its Jordan-Hölder series.
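In symbols, and writing $\mathrm{Lisse}\subset\mathop{\mathrm{Perv}}$ for the full subcategory of lisse perverse sheaves (a shorthand used only in this display), Brylinski's theorem gives an equivalence $$?\mapsto{^p\mathcal H^0}R(?)\ :\ \mathop{\mathrm{Perv}}(\mathbf{P})/\mathrm{Lisse}\xrightarrow{\ \sim\ }\mathop{\mathrm{Perv}}(\mathbf{P}^\vee)/\mathrm{Lisse};$$ once the cones of the adjunction maps and the perverse cohomologies ${^p\mathcal H^{i\neq0}}R(\mathcal M)$ are lisse, they vanish in these quotients, which is what makes the induced functors mutually inverse there.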
## {#section-13}
Let $X$ be a smooth variety, $\mathcal F\in D(X)$ and $C$ a closed conical subset of $T^*X$.
**Lemma 2** ([@Sasha Lemma 2.1]).
1. *[\[lem:B2.1:base\]]{#lem:B2.1:base label="lem:B2.1:base"} The base of $SS^w(\mathcal F)$ equals the support of $\mathcal F$.*
2. *The sheaf $\mathcal F$ is micro-supported on the zero section $T_X^*X$ if and only if $\mathcal F$ is lisse.*
3. *[\[lem:B2.1:thick\]]{#lem:B2.1:thick label="lem:B2.1:thick"} The sheaves that are micro-supported on $C$ form a thick subcategory of $D(X)$.*
The original proof goes through unchanged; the fact that $(\mathop{\mathrm{id}}_X,\mathop{\mathrm{id}}_X)$ is a $\mathcal F$-acyclic test pair if and only if $\mathcal F$ is lisse translates to the statement of Corollary [Corollary 3](#cor:id_lisse){reference-type="ref" reference="cor:id_lisse"}[\[cor:id_lisse:id\]](#cor:id_lisse:id){reference-type="ref" reference="cor:id_lisse:id"}, while the fact that $\mathcal F$ is micro-supported on $T^*_XX$ if it is lisse follows from Corollary [Corollary 3](#cor:id_lisse){reference-type="ref" reference="cor:id_lisse"}[\[cor:id_lisse:smooth\]](#cor:id_lisse:smooth){reference-type="ref" reference="cor:id_lisse:smooth"}.
## {#section-14}
If $r:X\to Z$ is a map of smooth varieties and $C$ is a conical closed subset of $T^*X$ whose base is proper over $Z$, $r_\circ C$ is defined as the image of $(dr)^{-1}(C)\subset T^*Z\times_ZX$ by the projection $T^*Z\times_ZX\to T^*Z$. It is a conical closed subset of $T^*Z$.
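Spelled out, with $\mathrm{pr}:T^*Z\times_ZX\to T^*Z$ our shorthand for the projection just mentioned and $dr:T^*Z\times_ZX\to T^*X$ the map induced by $r$ on cotangent spaces: $$T^*X\xleftarrow{\ dr\ }T^*Z\times_ZX\xrightarrow{\ \mathrm{pr}\ }T^*Z,\qquad r_\circ C=\mathrm{pr}\big((dr)^{-1}(C)\big).$$ Closedness of $r_\circ C$ comes from properness of the base: $(dr)^{-1}(C)$ is a closed subset of $T^*Z\times_Z(\text{base of }C)$, which is proper over $T^*Z$, so its image is closed (and conical, since everything is $\mathbf{G}_m$-equivariant).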
**Lemma 3** ([@Sasha Lemma 2.2]).
1. *If $Y$ is smooth, $f:Y\to X$ is $C$-transversal and $C\in\mathcal C(\mathcal F)$, then $f^\circ C\in\mathcal C(f^*\mathcal F)$.*
2. *[\[lem:B2.2:ii\]]{#lem:B2.2:ii label="lem:B2.2:ii"} If $r:X\to Z$ with $Z$ smooth and $C\in\mathcal C(\mathcal F)$ with base proper over $Z$, then $r_\circ C\in\mathcal C(r_*\mathcal F)$.*
The original proofs go through unchanged using Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}[\[lem:stabilities:proper\]](#lem:stabilities:proper){reference-type="ref" reference="lem:stabilities:proper"} & [\[lem:stabilities:support\]](#lem:stabilities:support){reference-type="ref" reference="lem:stabilities:support"} for [\[lem:B2.2:ii\]](#lem:B2.2:ii){reference-type="ref" reference="lem:B2.2:ii"}. Same for [@Sasha Lemmas 2.3, 2.4 & 2.5]; the proof of part (i) of the latter uses Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}[\[lem:stabilities:support\]](#lem:stabilities:support){reference-type="ref" reference="lem:stabilities:support"} (or just the compatibility of the formation of vanishing cycles with proper base change).
## {#section-15}
In this section we explain how to see in our setting the existence of the singular support and the upper bound on its dimension. If $g:Y\to Z$ is a map of varieties and $\mathcal G\in D(Y)$, then let $E_g(\mathcal G)$ denote the smallest closed subset of $Y$ such that $g$ is universally locally acyclic rel. $\mathcal G$ on $Y\smallsetminus E_g(\mathcal G)$.
**Proposition 5** ([@Sasha Theorems 3.1 & 3.2]).
1. *[\[prop:SS_exists:X\]]{#prop:SS_exists:X label="prop:SS_exists:X"} Every sheaf $\mathcal F\in D(X)$ on a smooth variety $X$ has singular support. One has $\dim SS(\mathcal F)\leq\dim X$.*
2. *[\[prop:SS_exists:P\]]{#prop:SS_exists:P label="prop:SS_exists:P"} If $X=\mathbf{P}$, the projectivization $P(SS(\mathcal F))\subset P(T^*\mathbf{P})$ equals the Legendre transform of $E_p(p^{\vee*}R(\mathcal F))\subset Q$. If $\mathcal F$ vanishes at the generic point of $\mathbf{P}$ then $SS(\mathcal F)$ is the cone over $P(SS(\mathcal F))$, otherwise $SS(\mathcal F)$ is the union of this cone and $T^*_XX$.*
The existence statement of [\[prop:SS_exists:X\]](#prop:SS_exists:X){reference-type="ref" reference="prop:SS_exists:X"} follows from [\[prop:SS_exists:P\]](#prop:SS_exists:P){reference-type="ref" reference="prop:SS_exists:P"}. The proof of [\[prop:SS_exists:P\]](#prop:SS_exists:P){reference-type="ref" reference="prop:SS_exists:P"} goes like the proof of [@Sasha Theorem 3.2], with Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}[\[lem:stabilities:proper\]](#lem:stabilities:proper){reference-type="ref" reference="lem:stabilities:proper"} in lieu of 3.13 (i.e. Lemma 3.9) and Lemma [Lemma 4](#lem:stabilities){reference-type="ref" reference="lem:stabilities"}[\[lem:stabilities:base\]](#lem:stabilities:base){reference-type="ref" reference="lem:stabilities:base"} in lieu of [@D Lemme 2.14]. This proves Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:existence\]](#th:SS:existence){reference-type="ref" reference="th:SS:existence"}.
Now suppose $\mathcal F$ is a sheaf on $\mathbf{P}$, $\mathcal F_0$ is any integral model for $\mathcal F$ and $\mathcal F_0'$ is any approximation of $\mathcal F_0$ over the ring of integers of a finite extension of $\mathbf{Q}_\ell$. Then $E_p(p^{\vee*}R(\mathcal F))\subset E_p(p^{\vee*}R(\mathcal F_0))
=E_p(p^{\vee*}R(\mathcal F_0'))=E_p(p^{\vee*}R(\mathcal F_0'/\ell))$. Here, the first equality is by Lemma [Lemma 8](#lem:integral_extension_ula){reference-type="ref" reference="lem:integral_extension_ula"}, the second by Lemma [Lemma 7](#lem:ula_mod_ell){reference-type="ref" reference="lem:ula_mod_ell"}, and the inclusion is obvious. Beilinson shows [@Sasha Theorem 3.6] that for any constructible étale $\mathbf{Z}/\ell$-sheaf $\mathcal G$ on $\mathbf{P}^\vee$, one has $\dim E_p(p^{\vee*}\mathcal G)\leq n-1$. Therefore $\dim E_p(p^{\vee*}R(\mathcal F))\leq n-1$ also, which proves the dimension estimate in part [\[prop:SS_exists:X\]](#prop:SS_exists:X){reference-type="ref" reference="prop:SS_exists:X"} and also establishes Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:integral_comparison\]](#th:SS:integral_comparison){reference-type="ref" reference="th:SS:integral_comparison"}, as $P(SS(\mathcal F_0))$ coincides with the Legendre transform of $E_p(p^{\vee*}R(\mathcal F_0))$.
*Remark 2*. As in the case of étale sheaves, this description of the singular support implies its invariance under finite extensions of the base field: i.e. let $k'/k$ be a finite extension; then $SS(\mathcal F_{k'})=SS(\mathcal F)_{k'}$ since $E_{p_{k'}}(p_{k'}^{\vee*}R(\mathcal F_{k'}))=E_p(p^{\vee*}R(\mathcal F))_{k'}$.[^24]
## {#section-16}
To prove the rest of Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}, at this point it is necessary to state Beilinson's theorem on the singular support and the ramification divisor, which continues to hold in our setting.
Let $i:\mathbf{P}\hookrightarrow\tilde\mathbf{P}$ denote the Veronese embedding of degree $d>1$. Let $\tilde Q\subset\tilde\mathbf{P}\times\tilde\mathbf{P}^\vee$ denote the incidence correspondence, and let $\tilde p,\tilde p^\vee:\tilde Q\rightrightarrows\tilde\mathbf{P},\tilde\mathbf{P}^\vee$ denote the projections. Let $\tilde R,\tilde R^\vee$ denote the Radon transforms with respect to $\tilde p$ and $\tilde p^\vee$.
A closed conical subset $C\subset T^*\mathbf{P}$ gives rise to one in $T^*\tilde\mathbf{P}$ given by $i_\circ C$. Via the Legendre transform, its projectivization $P(i_\circ C)$ comes with a map to $\tilde\mathbf{P}^\vee$. Let $D_C$ denote the image of $P(i_\circ C)$ in $\tilde\mathbf{P}^\vee$.
If $\mathcal F$ is a sheaf on $\mathbf{P}$, then let $D_{\mathcal F}$ denote the smallest closed subset of $\tilde\mathbf{P}^\vee$ such that $\tilde R(i_*\mathcal F)$ is lisse on $\tilde\mathbf{P}^\vee\smallsetminus D_{\mathcal F}$.
**Proposition 6** ([@Sasha Theorem 1.7]). *The closed subset $D_{\mathcal F}\subset\tilde\mathbf{P}^\vee$ is a divisor. For each irreducible component $D_\gamma$ of $D_{\mathcal F}$, there is a unique irreducible closed conical subset $C_\gamma\subset T^*\mathbf{P}$ of dimension $n$ with $D_\gamma=D_{C_\gamma}$. One has $SS(\mathcal F)=\bigcup_\gamma C_\gamma$. The maps $\tilde p_\gamma^\vee:P(i_\circ C_\gamma)\to D_\gamma$ are generically radicial. For $k$ perfect the maps $\tilde p_\gamma^{\vee}$ are birational unless $\mathop{\mathrm{char}}k=2$ in which case the generic degree may also be 2.*
Beilinson's proof of this theorem relies on a delicate analysis of the geometry of the Veronese embedding, and holds verbatim in our setting. We simply remark that if $\mathcal G$ is an irreducible perverse sheaf supported on $\tilde\mathbf{P}^\vee$ which is not lisse, and $U\subset\tilde\mathbf{P}^\vee$ is the maximal open subset of $\tilde\mathbf{P}^\vee$ such that $\mathcal G|_U$ is lisse, then $\tilde\mathbf{P}^\vee\smallsetminus U$ is a divisor. For this, we may assume $\Lambda$ is a finite extension $E$ of $\mathbf{Q}_\ell$, and we argue as follows. We may find a lattice for the $\pi_1(U)$-representation corresponding to $\mathcal G$, hence a local system $\mathcal L_0$ on $U$ such that $\mathcal G=j_{!*}(\mathcal L[n])$, where $j:U\hookrightarrow\tilde\mathbf{P}^\vee$ and $\mathcal L=\mathcal L_0[\ell^{-1}]$. Let $^\circ j_*:=\mathcal H^0j_*$ denote the un-derived version of the direct image $D(U)^\heartsuit\to D(\tilde\mathbf{P}^\vee)^\heartsuit$ (hearts taken with respect to the usual $p=0$ t-structure). Let $\mathcal G_0:=j_{!*}(\mathcal L_0[n])$. Then $\mathcal G_0$ is an integral model for $\mathcal G$ and has $\mathcal G_0|_U\simeq\mathcal L_0[n]$. Then the maximal open subset $V_0\subset\tilde\mathbf{P}^\vee$ on which $\mathcal G_0=j_{!*}(\mathcal G_0|_U)$ is lisse is $\bigcap_nV_n$, where $V_n\subset\tilde\mathbf{P}^\vee$ is the maximal open subset such that $^\circ j_*\mathcal L_n$ is (étale) locally constant. Indeed, if $L_0$ is the representation of $\mathop{\mathrm{Gal}}({(\mathop{\mathrm{Frac}}\tilde\mathbf{P}^\vee)}^{\mathrm{sep}}/\mathop{\mathrm{Frac}}\tilde\mathbf{P}^\vee)$ corresponding to $\mathcal L_0$, then the largest open set $V'_0\subset\tilde\mathbf{P}^\vee$ to which $L_0$ extends as a representation of $\pi_1(V_0')$ is $\bigcap_nV_n$, and if $\mathcal L_0'$ is the local system on $V_0'$ with $\mathcal L_0'|_U=\mathcal L_0$, and $\kappa:U\hookrightarrow V_0'$ denotes the open immersion, then $\kappa_{!*}(\mathcal L_0[n])
=({^\circ\kappa_*}\mathcal L_0)[n]=\mathcal L_0'[n]$, which shows $V_0=V_0'$. Zariski-Nagata purity shows that each $\tilde\mathbf{P}^\vee\smallsetminus V_n$ is a divisor. Thus $\tilde\mathbf{P}^\vee\smallsetminus V_0$ is a divisor as it is a union of divisors.
Now, if $\tilde\mathbf{P}^\vee\smallsetminus U$ were not a divisor, by the above discussion for $\mathcal G_0$, we would find that $\mathcal G_0$ is lisse on a strictly larger $U'\supsetneq U$. But then $\mathcal G=\mathcal G_0[\ell^{-1}]$ must also be lisse on $U'$, a contradiction. This proves Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:dim\]](#th:SS:dim){reference-type="ref" reference="th:SS:dim"}.
## {#sec:SSw}
We turn to the proof of the equality $SS=SS^w$, the inclusion $SS^w\subset SS$ being automatic. Beilinson's proof [@Sasha §4.9] continues to hold in our setting, and we only need to say a few words about the specialization argument that concludes his proof. We are in the setting of $\mathcal F\in D(\mathbf{P})$, and we've chosen an irreducible component $C_\gamma$ of $SS(\mathcal F)$. Then $D_\gamma:=D_{C_\gamma}$ is an irreducible component of the divisor $D_{\mathcal F}$, and we choose an open dense subset $D_{\gamma}^o\subset D_\gamma$ with the properties (in particular) that $D_\gamma^o$ doesn't intersect other components of $D_{\mathcal F}$ and that $\tilde R(i_*\mathcal F)|_{D_\gamma^o}$ is lisse. Next, we choose a line $L\subset\tilde\mathbf{P}^\vee$ with the property (in particular) that $L$ intersects $D_{\mathcal F}$ properly and $L\cap D_\gamma\subset D_\gamma^o$. Picking a geometric point $\tilde x^\vee\to L\cap D_\gamma$, the proof rests on the fact that the vanishing cycles of $\tilde R(i_*\mathcal F)|_L$ at $\tilde x^\vee$ are nonzero (with respect to $\mathop{\mathrm{id}}:L\to L$). This follows from the following
**Lemma 9**. *Let $\mathcal F$ be a constructible sheaf on an irreducible scheme $X=\overline{\{\eta\}}$, and let $U$ be the largest open subset of $X$ where $\mathcal F$ is lisse. Let $\xi$ be a (geometric point centered on a) generic point of $X\smallsetminus U$. Suppose $X\smallsetminus U$ has locally finitely many irreducible components. Then no specialization morphism from a point of $U$ to $\xi$ induces an isomorphism on stalks of $\mathcal F$.*
(Compare [@SGAA Exp. IX Proposition 2.11].) Let's admit the lemma and see how it implies the desired conclusion. If $\mathcal G$ denotes $\tilde R(i_*\mathcal F)$, then $\mathcal G$ is lisse on the complement of $D_{\mathcal F}$ but not on a neighborhood of the generic point of $D_\gamma^o$. It's also lisse when restricted to $D_\gamma^o$ by construction. Let $\xi$ be a geometric point centered on the generic point of $D^o_\gamma$ and fix specialization morphisms $\xi\rightsquigarrow\tilde x^\vee$ and $\overline\eta\rightsquigarrow\xi$, where $\overline\eta$ is a geometric point centered on the generic point of $\tilde\mathbf{P}^\vee$. Recalling that being lisse is equivalent to being universally locally acyclic with respect to the identity map by Corollary [Corollary 3](#cor:id_lisse){reference-type="ref" reference="cor:id_lisse"}[\[cor:id_lisse:id\]](#cor:id_lisse:id){reference-type="ref" reference="cor:id_lisse:id"}, we have that $\mathcal G_{\tilde x^\vee}\xrightarrow\sim\mathcal G_\xi$ is an isomorphism and that any specialization map between points of $\tilde\mathbf{P}^\vee\smallsetminus D_{\mathcal F}$ induces an isomorphism on stalks of $\mathcal G$. However, $\mathcal G_\xi\to\mathcal G_{\overline\eta}$ is not an isomorphism by the lemma. Therefore the composite specialization morphism $\mathcal G_{\tilde x^\vee}\to\mathcal G_{\overline\eta}$ is not an isomorphism. Let $\zeta$ denote the generic point of $L$; it belongs to $\tilde\mathbf{P}^\vee\smallsetminus D_{\mathcal F}$, so any specialization $\overline\eta\rightsquigarrow\zeta$ induces an isomorphism $\mathcal G_\zeta\xrightarrow\sim\mathcal G_{\overline\eta}$. It follows that, fixing a specialization map $\zeta\rightsquigarrow\tilde x^\vee$, $\mathcal G_{\tilde x^\vee}\to\mathcal G_\zeta$ is not an isomorphism, hence that the vanishing cycles for $\mathop{\mathrm{id}}:L\to L$ are nonzero at $\tilde x^\vee$, as desired.
If any one specialization morphism from a point of $U$ to $\xi$ induces an isomorphism on stalks of $\mathcal F$, then every specialization morphism from a point of $U$ to $\xi$ does, as any two points of $U$ are specializations of $\eta$ and $\mathcal F$ is lisse on $U$. If this is the case, then there is a neighborhood of $\xi$ in $X$ where all specialization morphisms induce isomorphisms on stalks of $\mathcal F$. (Indeed, possibly replacing $X$ by an affine neighborhood of $\xi$, given a partition of $X\smallsetminus U$ witnessing the constructibility of $\mathcal F$, the closure of the union of the strata not containing $\xi$ does not contain $\xi$.) Then $\mathcal F$ is universally locally acyclic with respect to the identity morphism on that larger open subset, hence lisse there, contradicting the maximality of $U$. 0◻
*Remark 3*. The definition of universal local acyclicity requires that we check that the specialization maps on our neighborhood of $\xi$ continue to induce isomorphisms on stalks of $\mathcal F$ after arbitrary base change in $X$, but this follows from what we've already checked. Indeed, let $f:Y\to X$ be any morphism of schemes and let $y_1\rightsquigarrow y_0$ be a specialization of geometric points in $Y$. When $\mathcal F$ is an abelian étale sheaf, it's easy to see that $\operatorname{sp}:f^*(\mathcal F)_{y_0}\to f^*(\mathcal F)_{y_1}$ coincides with the map $\operatorname{sp}:\mathcal F_{f(y_0)}\to\mathcal F_{f(y_1)}$ ($f(y_1)$ is a geometric point of $X_{f(y_0)}$ as $y_1$ is a geometric point of $Y_{y_0}$). This implies the same for constructible sheaves with $\ell$-adic coefficients.
This proves Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:SSw\]](#th:SS:SSw){reference-type="ref" reference="th:SS:SSw"}, and Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:verdier\]](#th:SS:verdier){reference-type="ref" reference="th:SS:verdier"} follows by the same argument as in §[4.1](#sec:integral_SS){reference-type="ref" reference="sec:integral_SS"}. Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:constituents\]](#th:SS:constituents){reference-type="ref" reference="th:SS:constituents"} follows from the equality $SS=SS^w$ in light of the perverse t-exactness of the vanishing cycles functors $\phi_f$. Indeed, if $\{\mathcal F_\alpha\}$ are the constituents of some $\mathcal F\in D(X)$, then $SS(\mathcal F)\subset\bigcup_\alpha SS(\mathcal F_\alpha)$ by Lemma [Lemma 2](#lem:B2.1){reference-type="ref" reference="lem:B2.1"}[\[lem:B2.1:thick\]](#lem:B2.1:thick){reference-type="ref" reference="lem:B2.1:thick"}. For the reverse inclusion, assume $k$ is infinite, suppose $0\to\mathcal K\to\mathcal M\to\mathcal N\to0$ is an exact sequence of perverse sheaves on $X$, and fix a geometric point $x\to X$ and a function $f:X\to\mathbf{A}^1$ so that $f(x)=0$ and such that $\phi_f(\mathcal N)_x\ne0$. Then $x$ must belong to $\mathop{\mathrm{supp}}\phi_f(\mathcal M)$; if it didn't, we could replace $X$ by an open neighborhood of $x$ and find that $\phi_f(\mathcal N)_x=0$ by the t-exactness of $\phi_f$. If $x\in\mathop{\mathrm{supp}}\phi_f(\mathcal M)$, then $df(y)\in SS^w(\mathcal M)$ for all $y$ in a locally closed subset $V$ of the geometric special fiber with $x\in\overline V$. The description of $SS^w(\mathcal M)$ in §[1.4](#sec:ss_def){reference-type="ref" reference="sec:ss_def"} then implies that $df(x)\in SS^w(\mathcal M)$. Of course the same argument holds with $\mathcal K$ in lieu of $\mathcal N$, and shows that $SS^w(\mathcal M)=SS^w(\mathcal K)\cup SS^w(\mathcal N)$. The same argument applied to $^p\tau^{\leq 0}\mathcal F\to\mathcal F\to{^p\tau^{>0}}\mathcal F\to\ $ shows that $SS^w(\mathcal F)$ contains both $SS^w(^p\tau^{\leq 0}\mathcal F)$ and $SS^w(^p\tau^{>0}\mathcal F)$, so $SS^w(\mathcal F)=\bigcup_i SS^w(^p\mathcal H^i\mathcal F)$. This part works just as well with integral coefficients, which proves the first sentence of Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:constituents\]](#th:SS:constituents){reference-type="ref" reference="th:SS:constituents"} in general. (The case of finite $k$ reduces to this one after making a finite extension of the base field and using Remark [Remark 2](#remark:SS_finite_extn){reference-type="ref" reference="remark:SS_finite_extn"}.)
Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:smooth_pullback\]](#th:SS:smooth_pullback){reference-type="ref" reference="th:SS:smooth_pullback"} holds with the original proof [@Sasha §4.10], relying on the commutation of $\phi_f$ with smooth pullback.
Finally, to prove Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"}[\[th:SS:field_extn\]](#th:SS:field_extn){reference-type="ref" reference="th:SS:field_extn"}, by Proposition [Proposition 6](#prop:B1.7){reference-type="ref" reference="prop:B1.7"} it suffices to show that if $\mathcal G$ is a sheaf on $\tilde\mathbf{P}^\vee$, $U\subset\tilde\mathbf{P}^\vee$ is the maximal open subset where $\mathcal G$ is lisse with complement $D:=\tilde\mathbf{P}^\vee\smallsetminus U$, and $k'/k$ is any field extension, then $U_{k'}$ is the maximal open subset of $\tilde\mathbf{P}^\vee_{k'}$ where $\mathcal G_{k'}$ is lisse. Clearly, $\mathcal G_{k'}$ is lisse on $U_{k'}$. If $\mathcal G_{k'}$ is lisse on some strictly larger open subset, then it's lisse on a neighborhood of a generic point of $D_{k'}$. Let $x_0$ be a geometric point centered on that generic point, defining also a geometric generic point of $D$, and let $x_1$ be a geometric point of $U_{k'}$ specializing to $x_0$. Then the specialization map $\mathcal G_{x_0}\to\mathcal G_{x_1}$ is an isomorphism. But by the remark above, this specialization map coincides with the corresponding specialization map in $\tilde\mathbf{P}^\vee$, which can't be an isomorphism by the previous lemma, since $\mathcal G$ isn't lisse on any neighborhood of the generic point of $D$ on which $x_0$ is centered, by the maximality of $U$.
This completes the proof of Theorem [Theorem 1](#th:SS){reference-type="ref" reference="th:SS"} with rational coefficients.
A. Beilinson, *Constructible sheaves are holonomic,* Sel. Math. New Ser. **22** (2016), 1797--1819.

J.-L. Brylinski, *Transformations canoniques, dualité projective, théorie de Lefschetz, transformations de Fourier et sommes trigonométriques,* in Géométrie et analyse microlocales, Astérisque **140--141** (1986), 3--134.

A. Beilinson, J. Bernstein, P. Deligne and O. Gabber, *Faisceaux pervers,* Astérisque **100** (1983).

B. Bhatt and A. Mathew, *The arc-topology,* Duke Math. J. **170** (2021), no. 9, 1899--1988.

B. Bhatt and P. Scholze, *The pro-étale topology for schemes,* Astérisque **369** (2015), 99--201.

D. Cisinski and F. Déglise, *Étale motives,* Compositio Mathematica **152** (2016), no. 3, 556--666.

P. Deligne, *Théorèmes de finitude en cohomologie $\ell$-adique,* in Cohomologie étale (SGA $4\frac12$), Lect. Notes in Math. **569**, Springer-Verlag, Berlin, 1977, 233--251.

T. Hemo, T. Richarz and J. Scholbach, *Constructible sheaves on schemes,* Advances in Mathematics **429** (2023), 109179.

D. Hansen and P. Scholze, *Relative Perversity,* Comm. Amer. Math. Soc. **3** (2023), 631--668.

L. Illusie, Appendice à *Théorèmes de finitude en cohomologie $\ell$-adique,* in Cohomologie étale (SGA $4\frac12$), Lect. Notes in Math. **569**, Springer-Verlag, Berlin, 1977, 252--261.

M. Kashiwara and P. Schapira, *Sheaves on manifolds,* Grundlehren der Mathematischen Wissenschaften **292**, Springer-Verlag, Berlin, 1990.

J. Lurie, *Higher Algebra,* 2017. <https://www.math.ias.edu/~lurie/papers/HA.pdf>

Q. Lu and W. Zheng, *Duality and nearby cycles over general bases,* Duke Math. J. **168** (2019), no. 16, 3135--3213.

Q. Lu and W. Zheng, *Categorical traces and a relative Lefschetz-Verdier formula,* Forum of Mathematics, Sigma **10** (2022), E10.

T. Saito, *The characteristic cycle and the singular support of a constructible sheaf,* Invent. math. **207** (2017), 597--695.

M. Artin, A. Grothendieck and J.-L. Verdier, eds., *Séminaire de Géométrie Algébrique du Bois Marie 1963--1964: Théorie de Topos et Cohomologie Etale des Schémas III,* Lect. Notes in Math. **305**, Springer-Verlag, Berlin, 1972.

The Stacks Project Authors, *Stacks Project,* 2023. <https://stacks.math.columbia.edu/>

N. Umezaki, E. Yang, and Y. Zhao, *Characteristic class and the $\varepsilon$-factor of an étale sheaf,* Trans. Amer. Math. Soc. **373** (2020), 6887--6927.
[^1]: So, for example, the cone of some map in $D(X_\text{\upshape pro\'et},\Lambda)$ computes the (homotopy) cofiber or cokernel.
[^2]: i.e. $V$ is a valuation ring and $\mathop{\mathrm{Frac}}V$ is algebraically closed.
[^3]: The nearby cycles are usually defined using the normalization $V'$ of $A$ in ${(\mathop{\mathrm{Frac}}A)}^{\mathrm{sep}}$ in lieu of $V$. The distinction is immaterial as $V'\to V$ is radicial. Indeed, let $K:=\mathop{\mathrm{Frac}}V$, $K':=\mathop{\mathrm{Frac}}V'$, let $v$ denote the valuation on $K'$, and suppose $\mathop{\mathrm{char}}k=:p>0$. Then every $a\in K$ satisfies $a^{p^j}\in K'$ for some $j$, so the valuation $v$ on $K'$ extends uniquely to one on $K$ (still called $v$), so that $v(a)\geq0\Leftrightarrow a^{p^j}\in V'$. If $\mathfrak p\subset V'$ is a prime ideal, $V/\mathfrak p V$ is generated over $V'/\mathfrak p$ by elements $\overline a$ satisfying equations like $\overline a^{p^j}=\overline b\in V'/\mathfrak p$. As $V$ and $V'$ are valuation rings, their spectra are linearly ordered, and as the extension $V'\subset V$ is integral, there are no nontrivial specializations between points in any fiber, whence $|\mathop{\mathrm{Spec}}V\otimes_{V'}k(\mathfrak p)|=1$. As radical ideals in valuation rings are prime, the point of the $\mathfrak p$-fiber is $\sqrt{\mathfrak p V}$, and $k\big(\sqrt{\mathfrak p V}\big)/k(\mathfrak p)$ is purely inseparable as the extension $V/\sqrt{\mathfrak p V}\supset V'/\mathfrak p$ is generated by $p^j$-th roots for various $j$.
[^4]: Beilinson's original definition of $\mathcal F$-acyclicity for étale sheaves was phrased in terms of local acyclicity rather than universal local acyclicity. Gabber later proved they are the same notion for a morphism of finite type with noetherian target [@LZ_bases Corollary 6.6].
[^5]: i.e. for any geometric point $\overline x$ centered on $x$, the vanishing cycles $\psi_f(\mathcal F)$ has nonzero stalk at $\overline x$. That it suffices to check on closed points follows from the constructibility of the sheaf of vanishing cycles.
[^6]: This closed conical subset coincides with the one built from finite extensions. Indeed, let $k'$ be a finite extension of $k$ and let $J_{k'}$ denote the image in $T^*X$ of the set $\{(x,df(x))\mid x\in|X_{k'}|,$ $f$ a function on $V_{k'}$, $V\subset X$ a Zariski neighborhood of (the image of) $x$, such that $f$ is not universally locally acyclic at $x$ rel. $\mathcal F\}\subset T^*X_{k'}$. Then $SS^w(\mathcal F)=\overline{\cup_{k'}J_{k'}}$.
To see that $SS^w(\mathcal F)$ defined in this way is conical, first note that $\cup_{k'}J_{k'}$ is conical: given a geometric closed point $\mu\to\cup_{k'}J_{k'}$ and an $\lambda\in\mathbf{G}_m({k}^{\mathrm{alg}})$, there is a (possibly different) geometric point $\nu\to\cup_{k'}J_{k'}$ centered on the same closed point of $T^*X$ as $\mu$ such that $\nu=(x,df(x))$ for $f$ defined on some neighborhood $V_{{k}^{\mathrm{alg}}}$ of $x\in X_{{k}^{\mathrm{alg}}}$. As $\mathop{\mathrm{Gal}}({k}^{\mathrm{alg}}/k)$ acts on $T^*X_{{k}^{\mathrm{alg}}}$, fixing $T^*X$, there is some $\sigma\in\mathop{\mathrm{Gal}}({k}^{\mathrm{alg}}/k)$ so that $\mu=\sigma(\nu)$. As $\sigma^{-1}(\lambda)f$ is not universally locally acyclic at $x$ rel. $\mathcal F$, the image of $\sigma^{-1}(\lambda)\nu$ in $T^*X$ lies in $SS^w(\mathcal F)$, so $\lambda\sigma(\nu)=\lambda\mu$ does, too. To conclude, observe that the closure of a $\mathbf{G}_m$-stable subset of $T^*X_{{k}^{\mathrm{alg}}}$ is again $\mathbf{G}_m$-stable.
[^7]: *i.e. such that $\ell:\mathcal F_0\to\mathcal F_0$ is injective in $\mathop{\mathrm{Perv}}(X)$.*
[^8]: *An irreducible perverse sheaf is a *constituent* for $\mathcal F$ if it appears in a Jordan-Hölder series for some ${^p\mathcal H^i\mathcal F}$.*
[^9]: 'Locally quasi-excellent' meaning admitting a Zariski cover by spectra of quasi-excellent rings, while '$\ell$-coprime' meaning defined over $\mathop{\mathrm{Spec}}\mathbf{Z}[\frac1\ell]$.
[^10]: As explained in [@HRS §2.2], the inclusions of the $\infty$-category of idempotent complete stable $\infty$-categories into that of stable $\infty$-categories, and of the latter into that of all $\infty$-categories, preserve small limits and filtered colimits, so this limit can be formed in any of these $\infty$-categories, and we will say no more about this.
[^11]: i.e. is in the essential image of the functor $\mathcal D_\mathrm{cons}({X}_{\text{\upshape \'et}},\mathcal O_E/\ell)\hookrightarrow\mathcal D({X}_{\text{\upshape \'et}},\mathcal O_E/\ell)\to\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E/\ell)$, where a complex of étale $\mathcal O_E/\ell$-sheaves $\mathcal G$ belongs to $\mathcal D_{\mathrm{cons}}({X}_{\text{\upshape \'et}},\mathcal O_E/\ell)$ if there exists a finite stratification of $X$ into constructible locally-closed subsets so that the restriction of $\mathcal G$ to each stratum is étale-locally constant with perfect values.
[^12]: Under weak finiteness hypotheses to ensure the standard t-structure on $\mathcal D(X_\text{\upshape pro\'et},\mathcal O_E)$ descends to one on $\mathcal D(X,\mathcal O_E)$, the Verdier quotient is already idempotent-complete as the t-structure is bounded [@CD Appendix B.2] [@HA Lemma 1.2.4.6]. See Proposition [Proposition 2](#prop:t-structure){reference-type="ref" reference="prop:t-structure"}.
[^13]: Bhatt and Mathew [@BM] prove the corresponding result for constructible étale sheaves. For the definitions of the arc-topology and the closely related (and coarser) $v$-topology, see *op. cit.*
[^14]: This relies on the t-exactness of $\overline x^*:D(X_\text{\upshape pro\'et},\Lambda)\to D(\overline x,\Lambda)$ for the usual ($p=0$) t-structure. Right t-exactness is clear as $H^0\overline x^*:D(X_\text{\upshape pro\'et},\Lambda)^\heartsuit\to
D(\overline x_\text{\upshape pro\'et},\Lambda)^\heartsuit$ is a left adjoint, and left t-exactness follows as $\Gamma(U,-)$ (for $U\in X_\text{\upshape pro\'et}$) is left t-exact on $D(X_\text{\upshape pro\'et},\Lambda)$ and $H^0\overline x^*$ on $D(X_\text{\upshape pro\'et},\Lambda)^\heartsuit$ returns the sheafification of the presheaf which on a condensed set returns a filtered colimit of such $H^0\Gamma(U,-)$.
[^15]: Pick a geometric point $y\to Y$; then the strict localization $Y_y$ of $Y$ at $y$ coincides with that of $Y_{{k}^{\mathrm{sep}}}$ at $y$. The extension ${k}^{\mathrm{alg}}/{k}^{\mathrm{sep}}$ being purely inseparable, the base change $(Y_y)_{{k}^{\mathrm{alg}}}$ is homeomorphic to $Y_y$ and coincides with the strict localization $(Y_{{k}^{\mathrm{alg}}})_y$ of $Y_{{k}^{\mathrm{alg}}}$ at $y$ (as the spectrum of a filtered colimit of strictly henselian local rings along local homomorphisms). Quotienting $Y_{{k}^{\mathrm{alg}}}$ by nilpotents it becomes smooth, so $(Y_{{k}^{\mathrm{alg}}})_y$ has a unique minimal prime and we're done.
[^16]: Here, $\pi_{XZ}$ denotes the projection $X\times_SY\times_SZ\to X\times_SZ$, etc.
[^17]: In light of the existence of integral models, the validity of these formulas in this context, as well as that of the Künneth formula used below, follows directly from the corresponding facts in étale cohomology as the relevant functors on integral sheaves commute with rationalization and reduction modulo $\ell$. See [@BS Lemma 6.7.10 & Lemma 6.7.14].
[^18]: With these hypotheses, $f^!\Lambda$ and $D\mathcal F$ are constructible [@BS Lemmas 6.7.13 & 6.7.19].
[^19]: This uses the fact that if you build the category $\mathcal C_S$ with $D(?)$ replaced by the derived category of (not necessarily constructible) étale sheaves of $\mathcal O_L/\ell^n$-modules on $?\to S$ separated and finitely-presented, then internal Homs always exist and are given by $\mathop{\mathrm{Hom}}((X,\mathcal F),(Y,\mathcal G))=(X\times_SY,\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\pi_X^*\mathcal F,\pi_Y^!\mathcal G))$. The argument is the same as the one appearing in the proof of Lemma [3.9](#sec:C_S_internal_homs){reference-type="ref" reference="sec:C_S_internal_homs"}, but simpler.
[^20]: We may approximate $\mathcal F$ and $\mathcal G$ by sheaves $\mathcal F'$ and $\mathcal G'$ over a finite extension of $\mathbf{Q}_\ell$. Here, $\mathcal F'$ must be universally locally acyclic, as this can be verified by the vanishing of certain cones, and extension of scalars from $\mathcal O_L$ to $\mathcal O_E$ or from $L$ to $E$ ($L/\mathbf{Q}_\ell$ finite, $E/\mathbf{Q}_\ell$ algebraic, $L\subset E$) is faithfully flat.
[^21]: Indeed, if $\mathcal F$ is lisse, then $\mathbf D_{X/X}(\mathcal F)=\text{\usefont{U}{BOONDOX-cal}{m}{n}Hom}(\mathcal F,\Lambda)=\mathcal F^\vee$ is, too (in particular constructible).
[^22]: By abuse of notation, $x\in\mathbf{P}$ will often mean $x=\mathop{\mathrm{Spec}}k'\to\mathbf{P}$, where $k'$ is a finite extension of $k$.
[^23]: Let $a:\mathbf{P}\to\mathop{\mathrm{Spec}}k$ be the structure morphism. A perverse sheaf on $\mathbf{P}$ is lisse if and only if it is the $a^*[n]$ of a local system (i.e. lisse perverse sheaf) on $\mathop{\mathrm{Spec}}k$, as can be seen from Corollary [Corollary 1](#cor:lisse_t){reference-type="ref" reference="cor:lisse_t"} and the argument of the proof of Proposition [Proposition 2](#prop:lisse_equiv){reference-type="ref" reference="prop:lisse_equiv"}.
[^24]: Let $\mathcal G:=p^{\vee*}R(\mathcal F)$. Then we always have $E_{p_{k'}}(\mathcal G_{k'})\subset E_p(\mathcal G)_{k'}$. Suppose the inclusion were strict. Then the image $U$ in $Q$ of $Q_{k'}\smallsetminus E_{p_{k'}}(\mathcal G_{k'})$ would be an open set strictly larger than $Q\smallsetminus E_p(\mathcal G)$. I claim that $\mathcal G$ is $p$-universally locally acyclic on $U$. Indeed, if $x\to U$ is a geometric point, $(U_x)_{k'}$ is the disjoint union of the strict localizations of $Q_{k'}$ at the finitely many points $y_i\in x\otimes_kk'$, and likewise for $(\mathbf{P}_{p(x)})_{k'}$. As $\mathop{\mathrm{Gal}}(k'/k)$ acts transitively on the $y_i$ (we may assume $k'/k$ is Galois) and our sheaf $\mathcal G$ began life on $Q$, $\mathcal G_{y_i}\to\Gamma((U_{k'})_{y_i}\times_{(\mathbf{P}_{k'})_{p(y_i)}}t,\mathcal G)$ is an isomorphism for all $i$ because it is an isomorphism for one $i$ by hypothesis. The same continues to hold if we replace $\mathbf{P}$ by any $S\to\mathbf{P}$.
---
abstract: |
In this paper we give necessary and sufficient conditions for a knot type to admit non-loose Legendrian and transverse representatives in some overtwisted contact structure, classify all non-loose rational unknots in lens spaces, and discuss conditions under which non-looseness is preserved under cabling.
address:
- |
School of Mathematics\
Georgia Institute of Technology\
Atlanta, GA
- |
Department of Mathematics\
The University of California, Los Angeles, CA
- |
Department of Mathematics\
Princeton University, Princeton, NJ
- |
Department of Mathematics\
University of Cologne, Germany
author:
- Rima Chatterjee
- John B. Etnyre
- Hyunki Min
- Anubhav Mukherjee
title: Existence and construction of non-loose knots
---
# Introduction
Over the last decade, there has been a great deal of study of non-loose Legendrian and transverse knots[^1] in overtwisted contact $3$-manifolds. Most of these results have been either a partial or complete classification of Legendrian knots in certain knot types. For example, [@EliashbergFraser98; @EtnyreMinMukherjee22pre] classified non-loose Legendrian and transverse unknots and torus knots, respectively, and for various ranges of classical invariants [@GeigesOnaran20a; @Matkovic20Pre] had previously classified non-loose torus knots. In [@GeigesOnaran20b], non-loose Hopf links were classified, and in [@GeigesOnaran15] rational unknots were classified in some lens spaces. In [@Ghiggini06b; @Ghiggini06; @LiscaOzsvathStipsiczSzab09], various constructions were given, and general properties of non-loose knots were studied in [@Etnyre13].
This paper studies the general properties and constructions of non-loose knots. In particular, we give necessary and sufficient conditions for a knot type to admit non-loose representatives, which corrects an error in the second author's paper [@Etnyre13]. To achieve this, we need to understand certain facts about non-loose representatives of rational unknots, that is, the knot types of the cores of Heegaard tori in lens spaces. In proving these facts, we completely classify non-loose rational unknots in any overtwisted lens space, extending the work in [@GeigesOnaran15]. We then study the cabling operation and show when "standard cables" (see Section [1.3](#sec:construction){reference-type="ref" reference="sec:construction"}) of non-loose knots in overtwisted manifolds are non-loose.
## Existence of non-loose knots {#sec:existence}
The most basic question about non-loose Legendrian and transverse knots is when a smooth knot type admits non-loose representatives. We have a complete answer to that question. Before stating the result we recall that a rational unknot $K$ in $M$ is a knot that has $D^2$ as a rational Seifert surface, see [@BakerEtnyre12]. In particular, when $M$ is irreducible, then $K$ is the core of a Heegaard solid torus in a lens space (or $S^3$). We also note that given any knot $K$ in a $3$-manifold $M$, there is a decomposition of $M$ into $M'\#M''$ so that $K\subset M''$ and $M''\setminus K$ is irreducible.
**Theorem 1**. *Let $M$ be an oriented $3$--manifold and $K$ a knot in $M$. Let $M = M' \# M''$ where $K \subset M''$ and $M'' \setminus K$ is irreducible. Then $K$ admits a non-loose Legendrian representative in some overtwisted contact structure on $M$ if and only if*
- *$K$ does not intersect an essential sphere transversely once, and*
- *$M'$ admits a tight contact structure.*
*Moreover, if $K$ is not the unknot in $M$ and admits a non-loose Legendrian representative, then $K$ admits non-loose Legendrian representatives in at least two overtwisted contact structures.*
*Also, $K$ admits a non-loose transverse representative in some overtwisted contact structure on $M$ if and only if*
- *$K$ does not intersect an essential sphere transversely once,*
- *$K$ is not a rational unknot, and*
- *$M'$ admits a tight contact structure.*
*Moreover, if $K$ admits a non-loose transverse representative, then $K$ admits non-loose transverse representatives in at least two overtwisted contact structures.*
*Remark 2*. In Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"} and Corollary [Corollary 3](#characterize-irreducible){reference-type="ref" reference="characterize-irreducible"}, rational unknots include the unknot in $M$.
For irreducible $3$-manifolds, we have a simpler statement that is an immediate corollary of the previous theorem.
**Corollary 3**. *Let $M$ be an irreducible oriented $3$--manifold and $K$ a knot in $M$. Then $K$ admits a non-loose Legendrian representative in some overtwisted contact structure on $M$ if and only if*
- *$K$ is not contained in a $3$--ball, or*
- *$M$ admits a tight contact structure.*
*Moreover, if $K$ is not the unknot in $M$ and admits a non-loose Legendrian representative, then $K$ admits non-loose Legendrian representatives in at least two overtwisted contact structures.*
*Also, $K$ admits a non-loose transverse representative in some overtwisted contact structure on $M$ if and only if*
- *$K$ is not contained in a $3$--ball or $M$ admits a tight contact structure, and*
- *$K$ is not a rational unknot.*
*Moreover, if $K$ admits a non-loose transverse representative, then $K$ admits non-loose transverse representatives in at least two overtwisted contact structures. ◻*
In [@Etnyre13], the second author claimed that all knots in any irreducible $3$-manifold admitted non-loose Legendrian and transverse representatives. This is not the case and the proof in [@Etnyre13] will be corrected in the proof of Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}.
*Remark 4*. Notice that all knot types in $S^3$ admit non-loose Legendrian representatives and all knot types except the unknot in $S^3$ admit non-loose transverse representatives.
*Remark 5*. According to Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}, the only knot type in a $3$-manifold $M$ (possibly) admitting non-loose Legendrian representatives in only one overtwisted contact structure is the unknot. Indeed, due to the classification of Eliashberg and Fraser [@EliashbergFraser09], we know that the unknot in $S^3$ can be represented by a non-loose Legendrian knot in only one contact structure on $S^3$, namely $\xi_1$.[^2]
Given what is known about the existence of tight contact structures on $3$--manifolds (see for example, [@EliashbergThurston96]), the above theorem shows that if none of the summands of $M$ is a rational homology sphere, then any knot type admits non-loose representatives in some overtwisted contact structure. We can also completely determine when knots in Seifert fibered spaces admit non-loose representatives.
**Corollary 6**. *Let $M_n$ be the result of $(2n-1)$--surgery on the $(2,2n+1)$--torus knot for $n>0$. Any knot in a Seifert fibered space admits non-loose Legendrian representatives unless the knot is contained in a ball in some $M_n$ or is $S^1\times \{pt\}$ in $S^1\times S^2$.*
So the only irreducible $3$-manifolds in which we do not know if a knot can admit a non-loose representative are hyperbolic homology spheres.
We note that when a knot $K$ in an irreducible manifold $M$ admits a non-loose Legendrian representative in some overtwisted contact structure on $M$, we can say quite a bit more about some of the non-loose representatives. To state our result, we recall some notation. Given a contact structure $\xi$ on a $3$--manifold $M$ and a knot type $K$, we denote by $\mathcal{L}(K)$ the set of all Legendrian knots realizing $K$ up to co-orientation preserving contactomorphism smoothly isotopic to the identity. If $K$ is null-homologous and oriented, its classical invariants are defined. In this case, let $\mathcal{L}_{(r,t)}(K)$ be the set of Legendrian representatives with rotation number $r$ and Thurston-Bennequin invariant $t$.
**Theorem 7**. *Let $K$ be a knot in an oriented $3$--manifold $M$ that admits non-loose Legendrian representatives. Assume $K$ is not a rational unknot. Then there are at least two overtwisted contact structures $\xi$ and $\xi'$ on $M$ in which $K$ admits infinitely many non-loose Legendrian representatives with any contact framing.*
*In particular, if $K$ is a null-homologous and oriented knot, then in one of the contact structures the set $\mathcal{L}_{(\chi(K)+n,n)} (K)$ contains the non-loose Legendrian representatives $\{L_-^{n,i}\}_{i=0}^\infty$ for $n \in \mathbb{Z}$ and if $-K$ is smoothly isotopic to $K$ (where $-K$ is $K$ with the reversed orientation), then $\mathcal{L}_{(-\chi(K)-n,n)} (K)$ contains $\{L_+^{n,i}\}_{i=0}^\infty$, where*
1. *$L_\pm^{n,i}$ is obtained from $L_\pm^{n,i-1}$ by adding a convex Giroux torsion layer along a torus parallel to the boundary of a standard neighborhood of $L^{n,i-1}_\pm$.*
2. *$S_{\pm}(L_\pm^{n,i})$ is $L_\pm^{n-1,i}$ and $S_\mp(L_\pm^{n,i})$ is loose.*
*In the other contact structure, $\mathcal{L}_{(-\chi(K)+n,n)} (K)$ contains the non-loose Legendrian representatives $\{\overline L_-^{n,i}\}_{i=0}^\infty$ for $n \in \mathbb{Z}$, and if $-K$ is smoothly isotopic to $K$, then $\mathcal{L}_{(\chi(K)-n,n)}(K)$ contains $\{\overline L_+^{n,i}\}_{i=0}^\infty$, where*
1. *$\overline L_\pm^{n,i}$ is obtained from $\overline L_\pm^{n,i-1}$ by adding a convex Giroux torsion layer along a torus parallel to the boundary of a standard neighborhood of $\overline L^{n,i-1}_\pm$.*
2. *$S_\pm(\overline L_\pm^{n,i})$ is $\overline L_\pm^{n-1,i}$ and $S_\mp(\overline L_\pm^{n,i})$ is loose.*
*Remark 8*. The contact structures $\xi$ and $\xi'$ in the theorem are determined by $M$ and $K$, but there does not seem to be an easy way of specifying them explicitly.
*Remark 9*. This theorem was established for fibered knots by the second author in [@Etnyre13].
In [@EtnyreHonda01], the second author and Honda showed that the classification of transverse knots is the same as the classification of Legendrian knots up to negative stabilization. From this, we have results similar to Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"} for the existence of non-loose transverse knots.
## Rational unknots
To prove Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"} we need to deal with the case of rational unknots separately. To do so, we will classify non-loose rational unknots. In [@GeigesOnaran15], Geiges and Onaran did this for rational unknots in $L(p,1)$ and $L(5,2)$. Here we will give the classification for all rational unknots in any lens space. We begin by restating the classification results of Geiges and Onaran and make it clear how the non-loose knots are related by stabilization.
We first recall that $L(p,q)$ is $-p/q$-surgery on the unknot in $S^3$. We also recall the smooth classification of rational unknots. Given a lens space $L(p,q)$ one can consider the cores $K_0$ and $K_1$ of the Heegaard tori. It is well-known, see [@BakerEtnyre12], that rational unknots up to isotopy are given by $$\{\text{rational unknots in } L(p,q)\}=\begin{cases}
\{K_0\} & p=2\\
\{K_0,-K_0\}& p\not=2, \, q\equiv \pm1 \pmod p\\
\{K_0,-K_0,K_1,-K_1\} & \text{otherwise.}
\end{cases}$$ See Figure [\[fig:unknotsSmooth\]](#fig:unknotsSmooth){reference-type="ref" reference="fig:unknotsSmooth"} for surgery descriptions of $K_0$ and $K_1$. It is clear that $K_1$ represents the core of one of the Heegaard tori for $L(p,q)$. We note that $L(p,\overline{q})$ is diffeomorphic to $L(p,q)$ via a diffeomorphism that exchanges the Heegaard tori, and so $K_0$ is clearly the other rational unknot in $L(p,q)$.
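For example, simply instantiating the trichotomy above: $$L(2,1)\colon\ \{K_0\},\qquad L(7,6)\colon\ q\equiv-1\!\!\pmod 7\ \Rightarrow\ \{K_0,-K_0\},\qquad L(5,2)\colon\ q\not\equiv\pm1\!\!\pmod 5\ \Rightarrow\ \{K_0,-K_0,K_1,-K_1\}.$$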
[Figure: surgery descriptions of the rational unknots, with $K_0$ appearing in the diagram labeled $-p/\overline{q}$ and $K_1$ in the diagram labeled $-p/q$.]
We begin by recalling a convenient means to describe the classification of Legendrian knots. Given a knot type ${K}$ in an overtwisted contact manifold $(M,\xi)$, we denote by $\mathcal{L}({K})$ the coarse equivalence classes of non-loose Legendrian realizations of $K$ in $(M,\xi)$. Consider the map $\Phi:\mathcal{L}(K)\to \mathbb{Z}^2$ that sends a Legendrian $L$ to $\Phi(L)=(\mathop{\mathrm{rot}}(L), \mathop{\mathrm{tb}}(L))$. The image of $\Phi$, together with a record of the number of elements sent to each point, is called the *mountain range* of $K$. If $K$ is only rationally null-homologous, then we have a rational Thurston-Bennequin invariant $\mathop{\mathrm{tb}}_\mathbb{Q}(L)$ and a rational rotation number $\mathop{\mathrm{rot}}_\mathbb{Q}(L)$, see Section [2.5](#rational){reference-type="ref" reference="rational"}, and we can consider the similar map $\Phi:\mathcal{L}(K)\to \mathbb{Q}^2$ sending $L$ to $\Phi(L)=(\mathop{\mathrm{rot}}_\mathbb{Q}(L),\mathop{\mathrm{tb}}_\mathbb{Q}(L))$.
We will say a mountain range for a knot type *contains a $\normalfont\textsf{V}$ based at $(a,b)$* if the image of $\Phi$ contains Legendrian knots $L^0, L^i_\pm$ for $i\in \mathbb{N}$ such that $$\begin{aligned}
\mathop{\mathrm{tb}}(L_\pm^i)=b+i &\text{ and } \mathop{\mathrm{rot}}(L_\pm^i)=a\pm(i-1),\\
\mathop{\mathrm{tb}}(L^0) = b &\text{ and } \mathop{\mathrm{rot}}(L^0) = a,\end{aligned}$$ and satisfy $$\begin{aligned}
&S_\pm(L_\pm^{i+1})=L_\pm^i \text{ and } S_\pm(L_\pm^1)=L^0,\\
&S_\mp(L_\pm^i) \text{ and } S_\pm(L^0) \text{ are loose.}\end{aligned}$$ See the first drawing of Figure [\[fig:mrforru\]](#fig:mrforru){reference-type="ref" reference="fig:mrforru"}.
[Figure: the three model mountain ranges, each based at $(a,b)$: a $\normalfont\textsf{V}$ (left), a back slash (middle), and a forward slash (right).]
We say the mountain range for a knot type *contains a back slash based at $(a,b)$* if the image of $\Phi$ contains Legendrian knots $L^i$, for each integer $i\geq 0$, such that $$\mathop{\mathrm{tb}}(L^i)= b+i \text{ and } \mathop{\mathrm{rot}}(L^i)=a-i$$ and satisfy $$\begin{aligned}
&S_+(L^{i+1})=L^i,\\
&S_-(L^i) \text{ and } S_+(L^0) \text{ are loose.}\end{aligned}$$ See the middle drawing of Figure [\[fig:mrforru\]](#fig:mrforru){reference-type="ref" reference="fig:mrforru"}.
Similarly we say the mountain range for a knot type *contains a forward slash based at $(a,b)$* if the image of $\Phi$ contains Legendrian knots $L^i$, for each integer $i\geq 0$, such that $$\mathop{\mathrm{tb}}(L^i)= b+i \text{ and } \mathop{\mathrm{rot}}(L^i)=a+i$$ and satisfy $$\begin{aligned}
&S_-(L^{i+1})=L^i,\\
&S_+(L^i) \text{ and } S_-(L^0) \text{ are loose.}\end{aligned}$$ See the last drawing of Figure [\[fig:mrforru\]](#fig:mrforru){reference-type="ref" reference="fig:mrforru"}.
We now can restate the result of Geiges and Onaran, together with how Legendrian knots are related via stabilization and which non-loose knots are in the same contact structure (in [@GeigesOnaran15], they showed when the non-loose representatives had the same $d_3$ invariant, but did not compute the (half) Euler class). Before stating the theorem, recall there are just two rational unknots in $L(p,1)$ and they are $K_0$ and $-K_0$. The theorem is stated for $K_0$ but applies to $-K_0$ by reversing the sign of the rotation numbers.
**Theorem 10**. *There are exactly $p$ distinct overtwisted contact structures on $L(p,1)$ that admit non-loose Legendrian representatives of $K_0$. Each contains a $\normalfont\textsf{V}$: one is based at $(0,1/p)$ and the others are based at $\left(1-\frac{2k}p, \frac{p+1}p\right)$ for $k\in \{1,\ldots, p-1\}$.*
*The Euler class of the contact structure containing the $\normalfont\textsf{V}$ based at $\left(1-\frac{2k}p, \frac{p+1}p\right)$ is $2k-p$, and it is $0$ for the $\normalfont\textsf{V}$ based at $(0,1/p)$.*
*Remark 11*. When $p$ is odd it is easy to check that the Euler classes of the contact structures on $L(p,1)$ that admit non-loose representatives are all distinct. When $p$ is even one can show that their half-Euler classes, see [@Gompf98], are all distinct, except for two that both have half-Euler class equal to $0$; but those have distinct $d_3$-invariants as computed in [@GeigesOnaran15]. We do not discuss the half-Euler classes here, so we only make this remark for the reader's benefit and to note that there are always at least two contact structures that admit non-loose representatives.
We also have the following simple case. Again, the theorem is stated for $K_0$ but applies to $-K_0$ by reversing the sign of the rotation numbers and exchanging the words "back slash" and "forward slash".
**Theorem 12**. *For $p\not=2$ there are exactly three overtwisted contact structures on $L(p,p-1)$ in which there are non-loose Legendrian representatives of $K_0$; in one there is a $\normalfont\textsf{V}$ based at $\left(0, \frac{2p-1}p\right)$, in another there is a back slash based at $\left(-1+\frac 2p, \frac{p-1}p\right)$ and in the last one there is a forward slash based at $\left(1-\frac 2p, \frac{p-1}p\right)$. The Euler classes of the two contact structures containing the slashes are $\pm (2-p)$, while the Euler class of the contact structure containing the $\normalfont\textsf{V}$ is $0$.*
*Remark 13*. Notice that the Euler classes of the contact structures containing these three mountain ranges are distinct except for the case when $p=4$. In this case the two slashes have the same Euler class. Their half-Euler classes are distinct and one can compute that their $d_3$-invariants are distinct, but we do not do that here and just note that in all cases we have proven that there are at least two contact structures admitting non-loose rational unknots.
We state one more simple case where we must consider both $\pm K_0$ and $\pm K_1$, before discussing the general case. The theorem is stated for $K_0$ and $K_1$ but applies to $-K_0$ and $-K_1$ by reversing the sign of the rotation numbers and exchanging the words "back slash" and "forward slash".
**Theorem 14**. *In the lens space $L(2n+1,2)$, for $n\geq 2$, there are $2n+1$ overtwisted contact structures in which $K_0$ has non-loose Legendrian representatives. In one there is a back slash based at $\left(-\frac n{2n+1}, \frac{n+1}{2n+1}\right)$, in another there is a forward slash based at $\left(\frac n{2n+1}, \frac{n+1}{2n+1}\right)$, and in the rest there is a $\normalfont\textsf{V}$ based at $\left(\frac{n-2k}{2n+1}, \frac{n+1}{2n+1}\right)$ for $k=1, \ldots, n-1$ and based at $\left( \frac{n-1-2k}{2n+1},\frac{3n+2}{2n+1}\right)$ for $k=0,\ldots, n-1$. The Euler classes of the contact structures containing a slash are $n$ and $n+1$, and the Euler classes of contact structures containing a $\normalfont\textsf{V}$ are all distinct and distinct from $n$ and $n+1$.*
*There are $n+2$ contact structures in which $K_1$ has non-loose Legendrian representatives. In one there is a back slash based at $\left(-\frac 1{2n+1}, \frac 2{2n+1}\right)$, in another there is a forward slash based at $\left(\frac 1{2n+1}, \frac 2{2n+1}\right)$, and in the rest there is a $\normalfont\textsf{V}$ based at $\left(\frac{2n-2k}{2n+1}, \frac{2n+3}{2n+1}\right)$ for $k=1,3,\ldots, 2n-1$. The Euler classes of all these contact structures are distinct except for two of the $\normalfont\textsf{V}$s, which both have Euler class $0$.*
We now give the general classification result. We will denote the negative continued fraction $$a_0-\frac{1}{a_1-\frac{1}{\ldots - \frac{1}{a_n}}}$$ for $a_i \leq -2$ by $[a_0, a_1, \ldots, a_n]$.
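Since these expansions are used repeatedly below, here is a minimal illustrative sketch of how the coefficients $[a_0,\ldots,a_n]$ of $-p/q$ can be computed; it is not part of the paper, and the helper name `neg_cf` is ours.

```python
def neg_cf(p, q):
    """Return [a_0, ..., a_n] with -p/q = a_0 - 1/(a_1 - 1/(... - 1/a_n)) and all a_i <= -2.

    Assumes 0 < q < p and gcd(p, q) = 1, matching the convention in the text.
    """
    coeffs = []
    while q > 0:
        a = -((p + q - 1) // q)   # a = floor(-p/q) = -ceil(p/q)
        coeffs.append(a)
        p, q = q, -a * q - p      # the tail of the expansion equals -q/(q*ceil(p/q) - p)
    return coeffs

# For instance, -5/2 = [-3, -2] and -7/5 = [-2, -2, -3].
assert neg_cf(5, 2) == [-3, -2]
assert neg_cf(7, 5) == [-2, -2, -3]
```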
**Theorem 15**. *Suppose $(p,q)$ is a pair of relatively prime positive integers satisfying $1 < q < p$. Let $-p/q=[a_0,a_1, \ldots, a_n]$ and note that $n\geq 1$ since $q\not= 1$. In $L(p,q)$, the non-loose Legendrian representatives of the rational unknot $K_0$ (and $-K_0$) give $$\begin{cases}
\left|(a_0+1)\cdots (a_{n-2}+1)(a_{n-1}+2)\right| & n > 1\\
|a_0 + 2| & n =1
\end{cases}$$ $\normalfont\textsf{V}$s based at $(r,\overline{q}/p)$, where $1 \leq \overline{q} \leq p$ and $q\overline{q} \equiv 1 \pmod p$, and $$\begin{cases}
|(a_0+1)\cdots (a_{n-2}+1)| & n > 1\\
1 & n = 1
\end{cases}$$ back slashes based at $(r,\overline{q}/p)$ and the same number of forward slashes based at $(r, \overline{q}/p)$ and a formula for $r$ will be given in Section [4](#sec:ruk){reference-type="ref" reference="sec:ruk"}. In addition there are $$|(a_0+1)\cdots (a_{n}+1)|$$ $\normalfont\textsf{V}$s based at $(r,(\overline{q}+p)/p)$. A formula for $r$ and the Euler class of the contact structure containing each of theses mountain ranges will also be given Section [4](#sec:ruk){reference-type="ref" reference="sec:ruk"}, but we note here that the mountain ranges are in at least two distinct contact structures.*
*The non-loose Legendrian representatives of the rational unknot $K_1$ (and $-K_1$) give $$\left|(a_{1}+2)(a_{2}+1)\cdots (a_n+1)\right|$$ $\normalfont\textsf{V}$s based at $(r,q/p)$, and $$\begin{cases}
|(a_2+1)\cdots (a_n+1)| & n > 1\\
1 & n = 1
\end{cases}$$ back slashes based at $(r,q/p)$ and the same number of forward slashes based at $(r, q/p)$ and a formula for $r$ will be given in Section [4](#sec:ruk){reference-type="ref" reference="sec:ruk"}. In addition there are $$|(a_0+1)\cdots (a_{n}+1)|$$ $\normalfont\textsf{V}$s based at $(r,(q+p)/p)$. A formula for $r$ and the Euler class of the contact structure containing each of theses mountain ranges will also be given Section [4](#sec:ruk){reference-type="ref" reference="sec:ruk"}, but we note here that the mountain ranges are in at least two distinct contact structures.*
See Figures [\[fig:unknotsLegendrian1\]](#fig:unknotsLegendrian1){reference-type="ref" reference="fig:unknotsLegendrian1"} and [\[fig:unknotsLegendrian2\]](#fig:unknotsLegendrian2){reference-type="ref" reference="fig:unknotsLegendrian2"} for the diagrams of non-loose Legendrian representatives of rational unknots. We will show that these are indeed the desired diagrams in Section [4.2](#sdru){reference-type="ref" reference="sdru"}.
[Figure: surgery diagram for non-loose Legendrian representatives of the rational unknot $K_1$.]
[Figure: further surgery diagrams for non-loose Legendrian representatives of rational unknots.]
From the above results we have the following immediate corollary that we need for the proof of Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}.
**Corollary 16**. *Every rational unknot in a lens space, except for the unknot in $S^3$, admits non-loose Legendrian representatives in at least two contact structures. Also, no rational unknot admits a non-loose transverse representative.*
## Construction of non-loose knots via cabling {#sec:construction}
We begin by defining "standard cables" of a null-homologous Legendrian knot and then exploring how non-looseness of a knot and its standard cable are related. We first recall that our slope convention is $\frac{\text{meridian}}{\text{longitude}}$. Given a null-homologous Legendrian knot $L$ in any contact manifold, let $N$ be a standard neighborhood of $L$ with ruling slope $q/p$. If $q/p > \mathop{\mathrm{tb}}(L)$, then we define the *standard positive $(p,q)$-cable* of $L$ as a ruling curve on $\partial N$ and denote it by $L_{p,q}$.
**Theorem 17**. *Let $(M,\xi)$ be an overtwisted contact $3$--manifold and $K$ a null-homologous knot in $M$. Suppose $L$ is a Legendrian representative of $K$ in $(M,\xi)$. Then for $q/p > \mathop{\mathrm{tb}}(L)$, the standard $(p,q)$-cable $L_{p,q}$ of $L$ is non-loose if and only if $L$ is non-loose.*
Now suppose that $L$ is a Legendrian knot and $q/p \leq \mathop{\mathrm{tb}}(L)$. If $N$ is a standard neighborhood of $L$, then inside of $N$ we can find a convex torus $T$ parallel to $\mathop{\mathrm{\partial}}N$ with two dividing curves of slope $q/p$. We can define the *standard negative $(p,q)$-cable of $L$* as a Legendrian divide of $T$, but we note that there is some ambiguity here. If $q/p\in[\mathop{\mathrm{tb}}(L)-n, \mathop{\mathrm{tb}}(L)-n+1)$ then there are actually $n+1$ distinct tori with dividing slope $q/p$. See Section [2.3](#kot){reference-type="ref" reference="kot"} for more details, but in brief there are $n+1$ convex tori inside $N$ with dividing slope $\mathop{\mathrm{tb}}(L)-n$ and they come from stabilizing $L$ $n$ times (there are $n+1$ ways to do this), and then the tori $T$ with dividing slope $q/p$ are determined by those stabilizations. Thus we see that a standard cable of $L$ is also a standard cable of an $(n-1)$-fold stabilization $L'$ of $L$, and then $q/p\in(\mathop{\mathrm{tb}}(L')-1,\mathop{\mathrm{tb}}(L'))$. Thus we will only define the standard cable of a Legendrian knot $L$ if $q/p$ satisfies this condition. Then there are only two possibilities for the torus $T$ and we denote the corresponding dividing curves by $L_{p,q}^\pm$ depending on whether the torus $T$ is inside $N$ but outside a positive or negative stabilization of $L$. We call $L_{p,q}^\pm$ the *$\pm$ standard negative $(p,q)$-cable of $L$*.
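To fix ideas, here is a numerical instance of the count above (an illustration only): if $$\mathop{\mathrm{tb}}(L)=0 \quad\text{and}\quad \frac qp=-\frac32\in[-2,-1),$$ then $n=2$ and there are three convex tori in $N$ with dividing slope $-3/2$, corresponding to the three double stabilizations $S_+S_+(L)$, $S_+S_-(L)=S_-S_+(L)$, and $S_-S_-(L)$; the standard negative cables are then only defined relative to the once-stabilized knots $L'=S_\pm(L)$, for which $-3/2\in(\mathop{\mathrm{tb}}(L')-1,\mathop{\mathrm{tb}}(L'))=(-2,-1)$.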
**Theorem 18**. *Let $(M,\xi)$ be an overtwisted contact $3$--manifold and $L$ be a Legendrian knot in $(M,\xi)$. If $S_\pm(L)$ is non-loose, then $$\text{$L_{p,q}^\pm$ is non-loose for all $q/p\in (\mathop{\mathrm{tb}}(L)-1,\mathop{\mathrm{tb}}(L))$.}$$ Here $S_\pm(L)$ denotes the $\pm$ stabilization of $L$.*
We notice that whenever we construct a $\pm$ standard negative $(p,q)$-cable, one can also construct a standard positive $(p,q)$-cable. Specifically, for a Legendrian knot $L$ we define $L^\pm_{p,q}$, the $\pm$ standard negative $(p,q)$-cable of $L$, as above, and we can also define $(S_\pm(L))_{p,q}$, the standard positive $(p,q)$-cable of $S_\pm(L)$, as above. We note the following simple result.
**Proposition 19**. *With the notation above, $(S_\pm(L))_{p,q}$ is the $n$-fold $\pm$ stabilization of $L^\pm_{p,q}$, where $n=\left|\mathop{\mathrm{tb}}(S_\pm(L))\bullet \frac qp\right|$.*
Our two theorems above show that if $S_\pm(L)$ is non-loose then not only is $L^\pm_{p,q}$ non-loose but so is its $n$-fold $\pm$ stabilization. This proposition shows that we could define the standard positive $(p,q)$-cable in terms of the standard negative cable in certain circumstances, but the standard positive $(p,q)$-cable is defined in greater generality (in particular, we don't need to know that $L$ can be destabilized).
[Figure: the mountain range of the non-loose $(2,2n+1)$-torus knots in $(S^3,\xi_1)$.]
*Remark 20*. We notice that while our "standard cables" allow us to construct many non-loose knots in cabled knot types, there are many non-loose representatives that do not come from this construction. We demonstrate this with the $(2,2n+1)$-torus knots, which are of course $(2,2n+1)$-cables of the unknot. In Figure [\[fig:mrtorus\]](#fig:mrtorus){reference-type="ref" reference="fig:mrtorus"} the mountain range for the non-loose $(2,2n+1)$-torus knots in $(S^3,\xi_1)$ is given. Consider the non-loose unknot with $\mathop{\mathrm{tb}}=1$ in $(S^3,\xi_1)$. Since $(2n+1)/2>1$ we see that the $(2,2n+1)$-cable is a positive cable of this Legendrian knot, and the standard positive cable will have $\mathop{\mathrm{tb}}=2n+3$. This is the middle dot in the third row from the bottom of the figure; we call this the vertex of the inner $\normalfont\textsf{V}$. The upper two end points of the inner $\normalfont\textsf{V}$ are the standard negative cables of the two Legendrian unknots with $\mathop{\mathrm{tb}}=n+1$ and all the other points in the inner $\normalfont\textsf{V}$ correspond to positive cables of unknots with $\mathop{\mathrm{tb}}<n+1$. These are all the non-loose representatives of the $(2,2n+1)$-cable of the unknot that can be seen via our construction. The infinitely many representatives in the outer $\normalfont\textsf{V}$ are not "standard cables". Similarly, there are two other overtwisted contact structures, $\xi_0$ and $\xi_{1-2n}$, that admit non-loose representatives of the $(2,2n+1)$-torus knot and none of them can be seen by "standard cables" either.
We also note that no non-loose negative torus knots arise as standard cables of non-loose unknots.
Given a transverse knot $K$, we know there is a sequence of Legendrian approximations $L_n$ for $n$ sufficiently small such that $\mathop{\mathrm{tb}}(L_n)=n$, $S_-(L_n)=L_{n-1}$, and the transverse push-off of $L_n$ is $K$, see [@EtnyreHonda01 Section 2.2]. Given any rational number $q/p$, for any $n< \lfloor q/p\rfloor$ we can consider the standard $(p,q)$-cable, $(L_n)_{p,q}$, of $L_n$. We have the following simple observation $$S^p_-((L_{n})_{p,q})=(L_{n-1})_{p,q},$$ which follows from Lemma [Lemma 34](#seestab){reference-type="ref" reference="seestab"}. Since the transverse push-off of a Legendrian knot is transversely isotopic to the transverse push-off of the Legendrian knot after any number of negative stabilizations, we define the *standard $(p,q)$-cable of a transverse knot $K$*, denoted $K_{p,q}$, to be the transverse push-off of $(L_n)_{p,q}$ for any Legendrian approximation $L_n$ with $n$ less than $q/p$. We can now state the result about cabling non-loose transverse knots.
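As a quick sanity check on this identity (an illustration only, using the Thurston-Bennequin formula for cables recalled in Lemma 35 below, applied to the boundary of a standard neighborhood of $L_n$, which has dividing slope $n$): for $n<q/p$ we have $$\mathop{\mathrm{tb}}\big((L_n)_{p,q}\big)=pq-|pn-q|=pq-q+pn,$$ so $\mathop{\mathrm{tb}}((L_n)_{p,q})-\mathop{\mathrm{tb}}((L_{n-1})_{p,q})=p$, which is exactly the drop in the Thurston-Bennequin invariant caused by the $p$ negative stabilizations.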
**Theorem 21**. *Let $(M,\xi)$ be an overtwisted contact $3$--manifold and $K$ be a transverse knot in $(M,\xi)$. Then $K_{p,q}$ is non-loose for any $(p,q)$ if and only if $K$ is non-loose.*
It is well-known that there are non-loose transversely non-simple knots, see [@Etnyre13] and Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"} above, but all of the currently known examples rely on different amounts of Giroux torsion in the knot complement. Using cables we give the first examples of non-simple non-loose transverse knots with Giroux torsion zero in their complements.
**Theorem 22**. *In the overtwisted contact structure $\xi_2$ on $S^3$, the $(2n+1,2)$-cable of the left-handed trefoil has at least $n$ distinct non-loose Legendrian representatives with $\mathop{\mathrm{tb}}= 4n+2$, $\mathop{\mathrm{rot}}= 2n-1$ and no Giroux torsion in their complements. There are also at least $n$ distinct non-loose transverse representatives with $\mathop{\mathrm{sl}}=2n+3$ and no Giroux torsion in their complements.*
*Remark 23*. We have similar results for all $(p,q)$-cables of the left-handed trefoil with $q/p\in (0,1)$, but we only prove the above theorem since it already shows the existence of arbitrarily many transverse knots with the same self-linking number and Giroux torsion zero.
**Acknowledgments.** The first author is partially supported by the SFB/TRR 191 "Symplectic Structures in Geometry, Algebra and Dynamics", funded by the Deutsche Forschungsgemeinschaft (Project-ID 281071066 -- TRR 191), and the Georgia Institute of Technology's Elaine M. Hubbard Distinguished Faculty Award. The second author was partially supported by National Science Foundation grants DMS-1906414 and DMS-2203312 and the Georgia Institute of Technology's Elaine M. Hubbard Distinguished Faculty Award. The authors thank Kenneth Baker for a helpful discussion about Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}.
# Background
We assume the reader is familiar with the basic ideas of contact geometry, Legendrian knots, and convex surfaces, as can be found in, for example, [@EtnyreHonda01; @Geiges08; @Honda00a]. We review parts of this below for the convenience of the reader and to establish notation. In Section [2.1](#fgraph){reference-type="ref" reference="fgraph"} we recall the Farey graph and discuss its relation to curves on tori and in the following section we recall the classification of tight contact structures on $T^2\times[0,1]$, solid tori, and lens spaces. In Section [2.3](#kot){reference-type="ref" reference="kot"} we review several aspects of knots in contact manifolds, such as standard neighborhoods and how these relate to stabilization. The last two sections discuss bypasses and rationally null-homologous knots, respectively.
## The Farey graph {#fgraph}
The Farey graph is an essential tool to keep track of embedded essential curves on a torus. Recall that once a basis for $H_1(T^2)$ is chosen, the isotopy classes of embedded essential curves on the torus are in one-to-one correspondence with $\mathbb{Q}\cup\{\infty\}$.
The Farey graph is constructed in the following way, see Figure [\[fareygraph\]](#fareygraph){reference-type="ref" reference="fareygraph"}. Consider the unit disc in the $xy$-plane. Label the point $(0,1)$ as $0=\frac{0}{1}$ and $(0,-1)$ as $\infty=\frac{1}{0}$. Connect these two points by a straight line. Now if a point on the boundary of the disk has a positive $x$-coordinate and lies halfway between two points labeled $\frac{a}{b}$ and $\frac{c}{d}$, then we label it as $\frac{a+c}{b+d}$. We call this the "Farey sum" of $\frac{a}{b}$ and $\frac{c}{d}$ and write it as $\frac{a}{b}\oplus\frac{c}{d}$. Now we connect $\frac{a+c}{b+d}$ with the two points by hyperbolic geodesics (we can consider a hyperbolic metric on the interior of the disk). We iterate this process until every positive rational number labels a point on the boundary of the unit disk. Now we do the same for points on the unit circle with negative $x$-coordinate, except that here we consider $\infty$ as $\frac{-1}{0}$.
[Figure: the Farey graph, with the vertices $\infty$, $0$, $\pm1$, $\pm2$, $\pm3$, $\pm1/2$, $\pm1/3$, $\pm2/3$, $\pm3/2$ labeled.]
In this paper we will use $\frac{a}{b}\oplus k\frac{c}{d}$ for the iterated Farey sum where we add $\frac{a}{b}$ to $\frac{c}{d}$ $k$ times.
Note that two embedded curves on the torus with slopes $r$ and $s$ form a basis of $H_1(T^2)$ if and only if there is an edge between them in the Farey graph. Here we also introduce the dot product of two rational numbers $\frac{a}{b}\bullet\frac{c}{d}=ad-bc$ and note that $\left|\frac{a}{b}\bullet\frac{c}{d}\right|$ is the minimum number of times the curves with slopes $\frac{a}{b}$ and $\frac{c}{d}$ can intersect.
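For instance (a small illustration of these operations): $$\frac{0}{1}\oplus\frac{1}{0}=\frac{1}{1},\qquad \frac{0}{1}\bullet\frac{1}{0}=-1,\qquad \frac{0}{1}\bullet\frac{2}{1}=-2,$$ so the curves of slopes $0$ and $\infty$ intersect once and are connected by an edge in the Farey graph, while the curves of slopes $0$ and $2$ intersect at least twice and are not.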
We have the following well-known lemma; see, for example, [@EtnyreLaFountainTosun12].
**Lemma 24**. *Suppose $q/p<-1$. Given $q/p=[a_1,\dots,a_n]$, let $(q/p)^c=[a_1,\dots,a_n+1]$ and $(q/p)^a=[a_1,\dots,a_{n-1}]$. There will be an edge in the Farey graph between each pair of numbers $q/p, (q/p)^c$ and $(q/p)^a$. Moreover, $(q/p)^c$ will be the farthest clockwise point from $q/p$ that is larger than $q/p$ with an edge to $q/p$, while $(q/p)^a$ will be the farthest anti-clockwise point from $q/p$ that is less than $q/p$ with an edge to $q/p$.*
In the above lemma if $a_n+1=-1$, then we consider $[a_1,\dots,a_n+1]$ to be $[a_1,\dots,a_{n-1}+1]$. Also if $q/p$ is a negative integer, then $(q/p)^a = \infty$.
A path in the Farey graph is a sequence of elements $p_1,\dots ,p_k$ in $\mathbb{Q}\cup \infty$ moving clockwise such that each $p_i$ is connected to $p_{i+1}$ by an edge in the Farey graph, for $i<k$. Let $P$ be the minimal path in the Farey graph that starts at $s_0$ and goes clockwise to $s_1$. We say $P$ is a *decorated path* if all of its edges are decorated by a $+$ or a $-$. We call a path in the Farey graph a *continued fraction block* if there is a change of basis such that the path goes from $0$ clockwise to $n$ for some positive $n$. We say two choices of signs on the continued fraction block are related by *shuffling* if the number of $+$ signs in the continued fraction blocks is the same.
Now we introduce some notation that we will frequently use in this paper. Given two numbers $r$ and $s$ in $\mathbb{Q}\cup\infty$, we let $[r,s]$ denote all those numbers in $\mathbb{Q}\cup\infty$ which are clockwise of $r$ in the Farey graph and anticlockwise of $s$.
## Contact structures on $T^2\times [0,1]$, solid tori and lens spaces
Here we briefly recall the classification of tight contact structures on $T^2\times [0,1]$, $S^1\times D^2$, and lens spaces due to Giroux [@Giroux2] and Honda [@Honda00a]. We discuss the classification along the lines of Honda.
### Contact structures on $T^2\times [0,1]$
Consider a contact structure $\xi$ on $T^2\times[0,1]$ that has convex boundary with dividing curves of slope $s_0$ on $T_0=T^2\times\{0\}$ and $s_1$ on $T_1=T^2\times\{1\}$. We also assume that the number of dividing curves is two on each boundary component. We say $\xi$ is *minimally twisting* if any convex torus in $T^2\times[0,1]$ parallel to $T_0$ has dividing curves with slope in $[s_0,s_1]$. We denote the minimally twisting contact structures, up to isotopy, on $T^2\times[0,1]$ with the above boundary conditions as $\mathop{\mathrm{Tight}}^{min}(T^2\times[0,1]; s_0, s_1)$. Giroux [@Giroux2] and Honda [@Honda00a] classified contact structures in $\mathop{\mathrm{Tight}}^{min}(T^2\times[0,1]; s_0, s_1)$, establishing the following results.
**Theorem 25**. *Each decorated minimal path in the Farey graph from $s_0$ clockwise to $s_1$ describes an element of $\mathop{\mathrm{Tight}}^{min}(T^2\times[0,1]; s_0, s_1)$. Two such decorated paths will describe the same contact structure if and only if the decorations differ by shuffling in the continued fraction blocks.*
Notice that if $s_0$ and $s_1$ are connected by an edge in the Farey graph, then there are exactly two tight contact structures in $\mathop{\mathrm{Tight}}^{min}(T^2\times[0,1]; s_0, s_1)$. These are called *basic slices*, and the correspondence in the theorem can be understood via stacking basic slices according to the decoration in the path that describes the contact structure. The two different contact structures on a basic slice can be distinguished by their relative Euler classes; we call them *positive* and *negative basic slices*.
The relative Euler class of the contact structure in $\mathop{\mathrm{Tight}}^{min}(T^2\times[0,1]; s_0, s_1)$ can be computed as follows: let $s_0=r_0, r_1,\ldots, r_k=s_1$ be the vertices of the minimal path from $s_0$ to $s_1$ and let $\epsilon_i$ be the sign of the basic slice corresponding to $r_{i-1}$ and $r_i$. Then the relative Euler class of the contact structure associated with this path is Poincaré dual to the curve $$\sum_{i=1}^{k}\epsilon_i(r_i\ominus r_{i-1})$$ where $\frac{a}{b}\ominus \frac{c}{d}=\frac{a-c}{b-d}$.
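For example (just substituting into this formula): for the minimal path $-3,-2,-1$ with signs $\epsilon_1,\epsilon_2$ we have $$\epsilon_1\big((-2)\ominus(-3)\big)+\epsilon_2\big((-1)\ominus(-2)\big)=(\epsilon_1+\epsilon_2)\,\tfrac{1}{0},$$ so the relative Euler class vanishes precisely when the two basic slices have opposite signs.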
Now suppose $P$ is a non-minimal path in the Farey graph. Then there will be a vertex $v$ in $P$ such that there is an edge between its neighboring vertices $v'$ and $v''$. We can shorten this path by removing $v$ and the two adjacent edges and replacing them by the edge between $v'$ and $v''$. We call the new path $P'$. If $P$ is a decorated path, then we call the shortening to get $P'$ *inconsistent* if the signs of the removed edges are different and *consistent* otherwise. When the shortening is consistent, we can decorate the new edge in $P'$ by the sign of the removed edges, and thus $P'$ is a new decorated path.
Given any decorated path in the Farey graph, even non-minimal, one can construct a contact structure on $T^2\times [0,1]$ by stacking basic slices. We are interested in knowing when these paths give tight contact structures. We have the following result due to Honda [@Honda00a].
**Theorem 26**. *Let $\xi$ be a contact structure on $T^2\times [0,1]$ described by a non-minimal decorated path $P$ in the Farey graph from $s_0$ to $s_1$. Then $\xi$ is tight if and only if one may consistently shorten the path to a shortest path from $s_0$ to $s_1$.*
We now discuss convex Giroux torsion. Consider $\xi=\ker(\sin2\pi z\, dx+\cos2\pi z\, dy)$ on $T^2\times\mathbb{R}$ where $(x,y)$ are the coordinates on $T^2$ and $z$ is the coordinate on $\mathbb{R}$. Consider the region $T^2\times[0,k]$ for $k\in\frac{1}{2}\mathbb{N}$ and notice that the contact planes twist $k$ times as $z$ goes from $0$ to $k$. We can perturb $T^2\times \{0\}$ and $T^2\times\{k\}$ so that they become convex with two dividing curves of slope $0$. Let $\xi^k$ denote the resulting contact structure on $T^2\times[0,1]$ (after identifying $T^2\times [0,k]$ with $T^2\times [0,1]$).
For $k \in \frac12\mathbb{N}$, we call $(T^2\times[0,1],\xi^k)$ a *convex $k$ Giroux torsion layer*, and if it embeds into a contact manifold $(M,\xi)$, we say $(M,\xi)$ has *convex $k$ Giroux torsion*. We say $(M,\xi)$ has exactly $k$ Giroux torsion if one can embed $(T^2\times[0,1],\xi^k)$ into $(M,\xi)$ but cannot embed $(T^2\times[0,1],\xi^{k+\frac{1}{2}})$ in $(M,\xi)$. On the other hand, $(M,\xi)$ has no convex Giroux torsion, or zero Giroux torsion, if $(T^2\times[0,1],\xi^k)$ does not embed in $(M,\xi)$ for any $k\in\frac{1}{2}\mathbb{N}$.
### Contact structures on solid tori {#conttori}
Now we turn to the contact structures on $S^1\times D^2$. While we will usually use "standard" coordinates on a solid torus so that the meridional slope is $-\infty=\infty$, it will sometimes be convenient to describe solid tori using other coordinates. We set up notation for this now. Consider $T^2\times [0,1]$ and choose a basis for $H_1(T^2)$ so that we may denote curves on $T^2$ by rational numbers (union infinity). Given $r\in\mathbb{Q}\cup\{\infty\}$ we can foliate $T^2\times \{0\}$ by curves of slope $r$. Let $S_r$ be the result of collapsing each leaf in the foliation of $T^2\times \{0\}$ to a point. One may easily verify $S_r$ is a solid torus with meridional slope $r$. We say *$S_r$ is a solid torus with lower meridian $r$*. We could similarly foliate $T^2\times\{1\}$ by curves of slope $r$ and collapse them to obtain $S^r$. This is the *solid torus with upper meridian $r$*. Notice that the "standard" solid torus is $S_\infty$, and if we do not indicate otherwise, this is the torus we are talking about.
Let $\mathop{\mathrm{Tight}}(S_r,s)$ denote the tight contact structures on the solid torus $S_r$ with lower meridian $r$ and convex boundary having two dividing curves of slope $s$, and similarly for $\mathop{\mathrm{Tight}}(S^r,s)$. We call a path $P$ in the Farey graph moving clockwise *mostly decorated* if all but the first or last edge has a sign. We call it *mostly upper decorated* if there are signs on all but the first edge and *mostly lower decorated* if all the edges have a sign except for the last edge. Giroux [@Giroux2] and Honda [@Honda00a] classified tight contact structures on a solid torus. By a change of basis and using the notation above their result is the following.
**Theorem 27**. *The elements of $\mathop{\mathrm{Tight}}(S_r,s)$ are in one-to-one correspondence with equivalence classes of mostly upper decorated minimal paths in the Farey graph from $r$ clockwise to $s$ where two such paths are considered equivalent if they differ by shuffling in continued fraction blocks.*
*Similarly, elements in $\mathop{\mathrm{Tight}}(S^r,s)$ are in one-to-one correspondence with equivalence classes of mostly lower decorated minimal paths in the Farey graph from $r$ clockwise to $s$ where two such paths are considered equivalent if they differ by shuffling in continued fraction blocks.*
Just like for thickened tori, one can describe a contact structure on a solid torus by any path in the Farey graph, even if not minimal, but the contact structure might not be tight. Again, just as for thickened tori, if $P$ is a non-minimal path we can talk about inconsistent shortenings (same definition as for thickened tori) and consistent shortenings, which are any shortenings that are not inconsistent. When considering solid tori, there is a consistent shortening that was not seen in the thickened torus case. Specifically, if one of the two edges that is removed when shortening the path is the unlabeled one, then we can shorten the path and leave the new edge unlabeled. Notice that this will remove one of the edges that had a sign (that is, the unlabeled edge can consistently shorten, and "absorb", any signed edge).
**Theorem 28**. *Let $\xi$ be a contact structure on $S_r$ described by a non-minimal mostly upper decorated path $P$ in the Farey graph from $r$ to $s$. Then $\xi$ is tight if and only if one may consistently shorten the path to a shortest path from $r$ to $s$. We have a similar statement for $S^r$.*
We will discuss non-minimal paths further in the context of neighborhoods of Legendrian knots in the next section.
### Contact structures on lens spaces
The lens space $L(p,q)$ is defined to be $-p/q$ surgery on the unknot in $S^3$. Equivalently, we can think of $L(p,q)$ as $T^2\times [0,1]$ with the curves of slope $-p/q$ collapsed on $T^2\times\{0\}$ and curves of slope $0$ collapsed on $T^2\times\{1\}$, which could further be described as the result of gluing the solid torus $S_{-p/q}$ with lower meridian $-p/q$ to the solid torus $S^0$ with upper meridian $0$ by the identity along the boundary. Giroux [@Giroux2] and Honda [@Honda00a] classified tight contact structures on $L(p,q)$, proving the following.
**Theorem 29**. *Let $P$ be a minimal path in the Farey graph from $-p/q$ clockwise to $0$. The tight contact structures on $L(p,q)$ are in one-to-one correspondence with assignments of signs to all but the first and last edge in $P$ up to shuffling in continued fraction blocks.*
## Knots in contact manifolds {#kot}
Inside any open set containing a Legendrian knot $L$ in a contact manifold $(M,\xi)$ there is a *standard neighborhood* $N(L)$ of $L$. This is a solid torus on which $\xi$ is tight and $\partial N(L)$ is convex with two dividing curves of slope $\mathop{\mathrm{tb}}(L)$. One may arrange, by a small isotopy of $\partial N(L)$, that the characteristic foliation consists of two lines of singularities called *Legendrian divides* and curves of slope $s$, for any $s\not=\mathop{\mathrm{tb}}(L)$. These latter curves are called *ruling curves*. (For most of this subsection we will be considering standard coordinates on a solid torus, so when we say solid torus we mean a solid torus with lower meridian $\infty$ in the terminology of Section [2.2.2](#conttori){reference-type="ref" reference="conttori"}.)
One may reverse the above discussion. Given a solid torus $S$ in a contact manifold $(M,\xi)$ on which $\xi$ is tight and with convex boundary having two dividing curves of slope $n\in\mathbb{Z}$, there is a unique Legendrian knot $L_S$ with $\mathop{\mathrm{tb}}=n$ having $S$ as its standard neighborhood.
Given a Legendrian knot $L$, we can stabilize $L$ in two ways, $S_\pm(L)$, and choose a standard neighborhood $N(S_\pm(L))$ of $S_\pm(L)$ in $N(L)$. Notice that $N(L)\setminus N(S_\pm(L))$ is a $\pm$ basic slice with boundary slopes $\mathop{\mathrm{tb}}(L)-1$ and $\mathop{\mathrm{tb}}(L)$. It turns out all the sub-tori with convex boundary having integral slopes can be formed this way. A corollary of the classification results in Section [2.2.2](#conttori){reference-type="ref" reference="conttori"} that will be useful to us is the following.
**Corollary 30**. *Let $S$ be a solid torus and $\xi$ a tight contact structure on $S$ such that $\partial S$ is convex with two dividing curves of slope $n\in\mathbb{Z}$. For each positive integer $k$ there are exactly $k+1$ distinct solid tori $S_0,\ldots, S_{k}$ in $S$ with convex boundary having two dividing curves of slope $n-k$. They are determined by the contact structure on $S\setminus S_i$ which in turn is determined by a signed minimal path in the Farey graph from $n-k$ to $n$. If $k'$ is an integer larger than $k$ then a solid torus $S'_j$ with convex boundary having two dividing curves of slope $n-k'$ is contained in $S_i$ if and only if the signed path associated to $S\setminus S_i$ is, after possibly shuffling the signs, a sub path of the signed path associated to $S\setminus S'_j$.*
We also notice that any solid torus with convex boundary has a unique maximal Thurston-Bennequin invariant Legendrian knot in its interior.
**Corollary 31**. *Let $S$ be a solid torus with convex boundary having dividing slope $r\in\mathbb{Q}\setminus\mathbb{Z}$. Then there exists a unique solid torus with convex boundary having two dividing curves of slope $\lfloor{r}\rfloor$ inside $S$ and that torus contains any convex torus with smaller dividing slope.*
We now recall two well-known results about Legendrian knots sitting on a convex torus.
**Lemma 32**. *Suppose $L$ sits on a convex torus $T$ and $L'$ is a ruling curve on $T$ in the same homology class as $L$ (or a copy of $T$ obtained by isotoping $T$ through convex tori). Then $L$ is obtained from $L'$ by some number of stabilizations.*
**Lemma 33**. *Suppose $N$ is a standard neighborhood of a Legendrian knot $L$ and $\partial N$ has longitudinal ruling curves of slope larger than the contact framing (that is $\mathop{\mathrm{tb}}(L)$). Then the ruling curves of $\partial N$ are Legendrian isotopic to $L$.*
The following lemma was proven as Lemma 7.8 in [@EtnyreMinMukherjee22pre].
**Lemma 34**. *Let $(T^2\times[0,1], \xi)$ be a $\pm$ basic slice with dividing slopes $s_i$ on $T^2\times\{i\}$. Let $L_0$ be a Legendrian ruling curve of slope $s_1$ on $T^2\times\{0\}$ and $L_1$ a Legendrian divide on $T^2\times \{1\}$. Then $L_0$ is $S_\pm(L_1)$. Moreover, if $L'_0$ is a Legendrian divide on $T^2\times \{0\}$ and $L'_1$ is a ruling curve of slope $s_0$ on $T^2\times\{1\}$, then $L'_1$ is $S_\mp(L'_0)$.*
*Let $s$ be a vertex in the Farey graph outside the interval $[s_0,s_1]$ for which there are vertices in the Farey graph in $[s,s_0)$ with an edge to $s_1$. If $L''_i$ is a ruling curve of slope $s$ on $T^2\times\{i\}$ then $L''_0$ is $S_\pm^k(L''_1)$ where $k=|(s_1 \ominus s_0) \bullet s|$. Moreover, there is a similar statement when $s$ is outside of $[s_0,s_1]$ for which there are vertices in the Farey graph in $(s_1,s]$ with an edge to $s_0$, and with the roles of $L''_0$ and $L''_1$ interchanged as in the previous paragraph.*
We now discuss how to compute the classical invariants of a Legendrian realization of a cable. Suppose $L_{p,q}$ is a Legendrian realization of the $(p,q)$-cable of $K$ and is contained in $\partial N({K})$, which is convex. Then as was shown in [@etnyre2005cabling], see also [@EtnyreLaFountainTosun12 Theorem 1.7], $\mathop{\mathrm{tb}}(L_{(p,q)})$ and $\mathop{\mathrm{rot}}(L_{(p,q)})$ can be computed as follows.
**Lemma 35**. *The Thurston-Bennequin invariant can be computed as follows:*
1. *Suppose $L_{(p,q)}$ is a Legendrian divide and slope $(\Gamma_{\partial{N(\mathcal{K})}})=\frac{q}{p}$. Then $\mathop{\mathrm{tb}}(L_{(p,q)})=pq$.*
2. *Suppose $L_{(p,q)}$ is a Legendrian ruling curve and slope $(\Gamma_{\partial{N(\mathcal{K})}})=\frac{q'}{p'}$. Then $\mathop{\mathrm{tb}}(L_{(p,q)})=pq-|pq'-p'q|$.*
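As an illustration of item (2) above (a computation only; it recovers the value $\mathop{\mathrm{tb}}=2n+3$ quoted in Remark 20): for the standard positive $(2,2n+1)$-cable of the Legendrian unknot with $\mathop{\mathrm{tb}}=1$, the dividing set of $\partial N$ has slope $1/1$, so $$\mathop{\mathrm{tb}}\big(L_{(2,2n+1)}\big)=2(2n+1)-\left|2\cdot 1-1\cdot (2n+1)\right|=4n+2-(2n-1)=2n+3.$$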
**Lemma 36**. *Let $D$ be a convex meridional disk of $N(\mathcal{K})$ with Legendrian boundary on a contact isotopic copy of the convex surface $\partial N(\mathcal{K})$, and let $\Sigma(L)$ be a convex Seifert surface with Legendrian boundary $L\in \mathcal{L}(\mathcal{K})$ which is contained in a contact isotopic copy of $\partial N(\mathcal{K})$. Then*
*$$\mathop{\mathrm{rot}}(L_{(p,q)})=q\cdot \mathop{\mathrm{rot}}(\partial D)+p\cdot \mathop{\mathrm{rot}}(\partial\Sigma(L)).$$*
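To illustrate these formulas (the slopes and rotation numbers below are hypothetical values chosen only to make the arithmetic concrete), suppose $L_{(2,3)}$ is a Legendrian ruling curve on a copy of $\partial N(\mathcal{K})$ with $\mathrm{slope}(\Gamma_{\partial N(\mathcal{K})})=\frac{1}{1}$, and suppose $\mathop{\mathrm{rot}}(\partial D)=0$ and $\mathop{\mathrm{rot}}(\partial\Sigma(L))=-1$. Then
$$\mathop{\mathrm{tb}}(L_{(2,3)})=2\cdot 3-|2\cdot 1-1\cdot 3|=5 \quad\text{and}\quad \mathop{\mathrm{rot}}(L_{(2,3)})=3\cdot 0+2\cdot(-1)=-2.$$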
We end this section by discussing neighborhoods of Legendrian knots in the non-standard coordinates of Section [2.2.2](#conttori){reference-type="ref" reference="conttori"}. Suppose $S_r$ is a solid torus with lower meridian $r$ in a contact manifold $(M,\xi)$. If $\xi$ is tight when restricted to $S_r$ and $S_r$ has convex boundary with two dividing curves of slope $s$ such that $s$ and $r$ are connected by an edge in the Farey graph, then $S_r$ is a standard neighborhood of a Legendrian knot $L$ and the contact framing is $s$. If we stabilize $L$ then the resulting knot will have a standard neighborhood with lower meridian $r$ and dividing curves of slope $s'$, where $s'$ is clockwise of $r$, anti-clockwise of $s$, and has an edge in the Farey graph to both $r$ and $s$.
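As a sanity check, consider the familiar standard coordinates in which the meridian of a standard neighborhood is $\infty$ and the dividing slope is $\mathop{\mathrm{tb}}(L)=n$. A single stabilization drops the Thurston-Bennequin invariant by one, so the new neighborhood has dividing slope $n-1$, and indeed $n-1$ has an edge in the Farey graph to both $n$ and $\infty$:
$$|n\cdot 1-1\cdot (n-1)|=1 \quad\text{and}\quad |(n-1)\cdot 0-1\cdot 1|=1.$$
This is consistent with Corollary 30, where the solid tori corresponding to $k$-fold stabilizations have dividing slope $n-k$.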
## Bypasses
Given a convex surface $\Sigma$ in a contact manifold $(M,\xi)$ a *bypass* for $\Sigma$ is an embedded disk $D$ such that $\partial D=\alpha_1\cup \alpha_2$ is Legendrian, $\xi$ is tangent to $D$ at $\alpha_1\cap \alpha_2$, $D\cap \Sigma=\alpha_1$, $\alpha_1$ intersects the dividing curves on $\Sigma$ three times $\{x_1,x_2,x_3\}$, with $x_2$ on the interior of $\alpha_1$ and $\partial \alpha_1=\{x_1,x_3\}$, and $\xi$ is not tangent to $D$ along the interior of $\alpha_2$. We can assume that $\xi$ is tangent to $D$ at the $x_i$'s and the sign of the singularity at $x_2$ is the sign of the bypass. Let $\Sigma\times [0,1]$ be a small $[0,1]$-invariant neighborhood of $\Sigma$ so that $\Sigma=\Sigma\times \{0\}$ and part of $D$ intersects the neighborhood. We can push this neighborhood past $D$, still denoted $\Sigma \times [0,1]$, so that $\Sigma\times \{1\}$ is convex and the dividing curves differ from $\Sigma$'s in a prescribed way, see [@Honda00a]. We say $\Sigma\times \{1\}$ is obtained from $\Sigma$ by *attaching the bypass $D$*. The details of this will not concern us here, but we indicate how attaching bypasses to convex tori changes their dividing curves and how to find bypasses.
If $T$ is a convex torus and $D$ is a bypass for $T$, then attaching the bypass will affect the dividing curves of $T$ in one of the following ways:
1. increase the number of dividing curves by two,
2. decrease the number of dividing curves by two, or
3. change the slope of the dividing curves (this can only happen if $T$ had only two dividing curves).
We elaborate on the last possibility. First recall that if $T$ is an oriented surface then there is a positive side and a negative side to $T$: the positive side is the one for which a transverse vector pointing in that direction, followed by an oriented basis for $T$, gives an oriented basis for $M$.
**Lemma 37**. *Suppose $T$ is a convex torus with two dividing curves of slope $s$ and ruling curves of slope $r$. If $D$ is a bypass on the positive side of $T$ and we attach $D$ to $T$ to get the convex torus $T'$, then $T'$ will have two dividing curves of slope $s'$, where $s'$ is clockwise of $s$, anti-clockwise of $r$, and is the point in the Farey graph closest to $r$ with an edge to $s$. The region between $T$ and $T'$ is a basic slice and the sign of the basic slice agrees with the sign of the bypass.*
*If $D$ is on the negative side of $T$ and we attach $D$ to $T$ to get the convex torus $T'$, then $T'$ will have two dividing curves of slope $s'$, where $s'$ is anti-clockwise of $s$, clockwise of $r$, and is the point in the Farey graph closest to $r$ with an edge to $s$. The region between $T$ and $T'$ is a basic slice and the sign of the basic slice is the opposite of the sign of the bypass.*
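For a concrete instance of Lemma 37 (with slopes chosen only for illustration), suppose $T$ has two dividing curves of slope $s=-2$ and ruling curves of slope $r=0$. The vertices of the Farey graph with an edge to $-2$ that lie between $-2$ and $0$ are
$$-1,\ -\frac32,\ -\frac53,\ -\frac74,\ \ldots,$$
of which $-1$ is closest to $r=0$, while on the other arc from $-2$ to $0$ the vertices with an edge to $-2$ are $-3,-\frac52,-\frac73,\ldots$, of which $-3$ is closest to $r$. So attaching a single bypass changes the dividing slope to $-1$ or to $-3$, depending on the side of attachment (the conventions of Section 2.1 determine which side gives which), and in either case the region between $T$ and $T'$ is a basic slice.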
A common way to find bypasses is the imbalance principle. Suppose $A$ is a convex annulus with Legendrian boundary, one component of which lies in the convex surface $\Sigma$ and the other in the convex surface $\Sigma'$. If $\partial A$ intersects the dividing set $\Gamma_\Sigma$ more times than it intersects the dividing set $\Gamma_{\Sigma'}$, then one may isotope $A$ so that there is a bypass for $\Sigma$ on $A$.
## Rationally null-homologous knots {#rational}
The main reference for the material in this section is [@BakerEtnyre12] and the reader is referred there for details. Recall a knot $K$ in a manifold $M$ is called *rationally null-homologous* if it is trivial in $H_1(M,\mathbb{Q})$. This means that there will be a minimal positive integer $r$ such that $rK$ is trivial in $H_1(M,\mathbb{Z})$. We call $r$ the order of $K$. Let $N(K)$ be a tubular neighborhood of $K$ and $M_K=M\setminus N(K)$ be the exterior of the neighborhood. It is not hard to see that, given some framing on $\partial N(K)$, there is some $(r,s)$-curve on $\partial N(K)$ that bounds an embedded surface $\Sigma'$ in $M_K$. One may extend $\Sigma'$ in $N(K)$ by "coning" $\partial \Sigma'$. This gives a surface $\Sigma$ in $M$ whose interior is embedded but whose boundary wraps $r$ times around $K$. We call $\Sigma$ a *rational Seifert surface* for $K$. We note that $\Sigma$ might not have connected boundary if $r$ and $s$ are not relatively prime, but we will not consider this case as all our examples below will have $\partial \Sigma$ connected, and we assume this throughout the rest of our discussion. We assume that $K$ is oriented and this will induce an orientation on $\Sigma$.
If $K'$ is another oriented knot in $M$ that is disjoint from $K$ then we define the *rational linking number* to be $$\mathop{\mathrm{lk}}_\mathbb{Q}(K,K')=\frac 1r \Sigma\cdot K'$$ where $\Sigma\cdot K'$ is the algebraic intersection between $\Sigma$ and $K'$. There is some ambiguity in this definition, see Section 1 of [@BakerEtnyre12], but this will not be an issue for the situations considered here.
Now suppose that $K$ is a Legendrian knot in $(M,\xi)$. As discussed in Section [2.3](#kot){reference-type="ref" reference="kot"} we know that $K$ has a standard neighborhood $N(K)$ with convex boundary having two dividing curves determined by the contact framing. Let $K'$ be one of the Legendrian divides on $\partial N(K)$. We define the *rational Thurston-Bennequin invariant* of $K$ to be $$\mathop{\mathrm{tb}}_\mathbb{Q}(K)=\mathop{\mathrm{lk}}_\mathbb{Q}(K,K').$$ To define the rational rotation number we first note that we have an immersion $i:\Sigma\to M$ that is an embedding on the interior of $\Sigma$ and an $r$-to-$1$ mapping on $\partial \Sigma$. (Notice that we are slightly abusing notation thinking of $\Sigma$ as an abstract surface and also as its image under $i$, we hope the meaning to be clear from context.) We can now consider $i^*\xi$ as an oriented $\mathbb{R}^2$-bundle over $\Sigma$. Since $\Sigma$ is a surface with boundary, we know $i^*\xi$ can be trivialized: $i^*\xi=\Sigma\times \mathbb{R}^2$. Let $v$ be a non-zero vector field tangent to $\partial \Sigma$ inducing the orientation on $K$. Using the trivialization of $i^*\xi$ we can think of $v$ as a map from $\partial \Sigma$ to $\mathbb{R}^2$. We can now define the *rational rotation* of $K$ to be $$\mathop{\mathrm{rot}}_\mathbb{Q}(K)=\frac 1r \text{winding}(v,\mathbb{R}^2).$$ Notice that $\text{winding}(v,\mathbb{R}^2)$ is equivalent to the obstruction to extending $v$ to a non-zero vector field over $i^*\xi$ and thus can be interpreted as a relative Euler number.
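We note that when $K$ is actually null-homologous, so that $r=1$ and $\Sigma$ is an honest Seifert surface, these definitions recover the classical invariants:
$$\mathop{\mathrm{tb}}_\mathbb{Q}(K)=\mathop{\mathrm{lk}}(K,K')=\mathop{\mathrm{tb}}(K) \quad\text{and}\quad \mathop{\mathrm{rot}}_\mathbb{Q}(K)=\text{winding}(v,\mathbb{R}^2)=\mathop{\mathrm{rot}}(K).$$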
# The existence of non-loose knots
In this section, we prove our results concerning when a knot type admits a non-loose Legendrian and transverse representative and the number of such representatives.
## Determining which knots admit non-loose representatives
We begin by proving Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}, which gives a necessary and sufficient condition for a knot type $K$ in a $3$-manifold $M$ to admit a non-loose Legendrian representative in some overtwisted contact structure. To do so, we first need the following lemmas.
**Lemma 38**. *The knot $S^1\times \{pt\}$ in $S^1 \times S^2$ does not admit non-loose Legendrian and transverse representatives in any overtwisted contact structure on $S^1 \times S^2$.*
*Proof.* Suppose that $L$ is a Legendrian representative of $S^1 \times \{pt\}$ in an overtwisted contact structure on $S^1 \times S^2$ and fix a framing of $L$. Then $L$ has a standard neighborhood $N$ with two dividing curves of slope $n \in \mathbb{Z}$ with respect to the given framing of $L$. There is a unique tight contact structure on $N$, and $N^c = S^1\times S^2 \setminus N$ is a solid torus with two longitudinal dividing curves. If $L$ is non-loose, then the contact structure on $N^c$ must be tight; since there is also a unique tight contact structure on $N^c$ with longitudinal dividing curves, $N\cup N^c$ carries the unique tight structure on $S^1\times S^2$, contradicting our assumption that the contact structure is overtwisted. Therefore, there are no non-loose Legendrian representatives of $S^1\times \{pt\}$ in $S^1\times S^2$.
Since a Legendrian approximation of a non-loose transverse knot is non-loose (see [@Etnyre13 Proposition 1.2]), there are also no non-loose transverse representatives of $S^1\times \{pt\}$ in $S^1\times S^2$. ◻
**Lemma 39**. *Let $M$, $M'$ be oriented $3$-manifolds. If $K$ is a knot in $M$ which does not admit non-loose Legendrian (*resp.* transverse) representatives in any overtwisted contact structure on $M$, then $K$ also does not admit non-loose Legendrian (*resp.* transverse) representatives in any overtwisted contact structure on $M \# M'$.*
*Proof.* Suppose $L$ is a non-loose Legendrian representative of $K$ in some overtwisted contact structure on $M \# M'$. Let $N(L)$ be a standard neighborhood of $L$. Then $M \# M' \setminus N(L)$ is tight. Colin [@Colin97] showed that the connect sum of two contact structures is tight if and only if each of the summand contact structures is tight. Thus $M \setminus N(L)$ and $M'$ are both tight. Since $M \# M'$ is overtwisted, $M$ must be overtwisted. This contradicts that there are no non-loose Legendrian representatives of $K$ in $M$.
The same argument works for non-loose transverse representatives. ◻
Now we are ready to prove Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}.
*Proof of Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}.* We first consider non-loose Legendrian representatives. Assume that $K$ intersects an essential sphere once transversely. This implies that $M = S^1\times S^2 \# N$ and $K$ is the core of the $S^1 \times S^2$ summand. By Lemma [Lemma 38](#lem:s1s2){reference-type="ref" reference="lem:s1s2"}, the core of $S^1 \times S^2$ does not admit non-loose Legendrian representatives in any overtwisted contact structure, so $K$ also does not admit non-loose Legendrian representatives in any overtwisted contact structure on $M$ by Lemma [Lemma 39](#lem:connectsum){reference-type="ref" reference="lem:connectsum"}. Also, let $M = M' \# M''$ where $K \subset M''$ and $M'' \setminus K$ is irreducible. Assume $M'$ does not admit a tight contact structure. Suppose $L$ is a non-loose Legendrian representative of $K$ in $M$. Then $M \setminus N(L)$ is tight. Colin [@Colin97] showed that the connect sum of two contact structures is tight if and only if each of the summand contact structures is tight. Thus both $M'$ and $M'' \setminus N(L)$ should be tight. This contradicts the assumption that $M'$ does not admit a tight contact structure, so $K$ does not admit non-loose Legendrian representatives in $M$.
Now suppose $K$ does not intersect an essential sphere transversely once and $M'$ admits a tight contact structure. The first condition implies that $K$ is not the core of $S^1 \times S^2$ summand. We claim that $K$ admits a non-loose Legendrian representative in at least two overtwisted contact structures on $M''$ (for the unknot in $S^3$, at least one). There are two cases we need to consider. If the boundary of $M'' \setminus N(K)$ is compressible, then $M''$ is a lens space (or $S^3$) and $K$ is a rational unknot. Corollary [Corollary 16](#courseRU){reference-type="ref" reference="courseRU"} tells us that if $K$ is not the unknot in $S^3$ then it admits non-loose Legendrian representatives in at least two distinct overtwisted contact structures. For the unknot in $S^3$, it is known that there is a unique overtwisted contact structure on $S^3$ in which $K$ has non-loose representatives, see [@EliashbergFraser09].
If the boundary of $M'' \setminus N(K)$ is incompressible, we can now apply the proof of [@Etnyre13 Theorem 1.8] to construct two overtwisted contact structures on $M''$ in which $K$ admits infinitely many distinct non-loose transverse representatives. (There is a mistake in the proof of [@Etnyre13 Theorem 1.8]. In [@Etnyre13], it was claimed that for any $K$ which is not the unknot in $S^3$, $M\setminus N(K)$ is irreducible when $M$ is irreducible; that is clearly not the case if $K$ is contained in a ball and $M$ is not $S^3$, but it is easily proven otherwise.) According to [@Etnyre13 Proposition 1.2], any Legendrian approximation of these transverse representatives is a non-loose Legendrian representative of $K$. Now since $M'$ admits a tight contact structure, we apply Colin's result [@Colin97] again and $M \setminus N(L) = M' \# (M'' \setminus N(L))$ is still tight. Thus $K$ admits non-loose Legendrian representatives in at least two overtwisted contact structures on $M$ (when $K$ is the unknot in $S^3$, at least one).
For non-loose transverse representatives, the same argument works except for the fact that rational unknots do not admit non-loose transverse representatives in any overtwisted contact structure on a lens space, see Corollary [Corollary 16](#courseRU){reference-type="ref" reference="courseRU"}. ◻
Corollary [Corollary 3](#characterize-irreducible){reference-type="ref" reference="characterize-irreducible"} is immediate from Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}.
We now turn to the proof of Corollary [Corollary 6](#maincor){reference-type="ref" reference="maincor"} concerning when a knot in a Seifert fibered space admits a non-loose representative.
*Proof of Corollary [Corollary 6](#maincor){reference-type="ref" reference="maincor"}.* The only reducible Seifert fibered spaces are $S^1\times S^2$ and $\mathbb{R}P^3\# \mathbb{R}P^3$ (which is an $S^1$-bundle over $\mathbb{R}P^2$) [@Hatcher Proposition 1.12]. In [@LiscaStipsicz09], Lisca and Stipsicz showed that the only Seifert fibered spaces that do not admit tight contact structures are the manifolds $M_n$ obtained by $(2n-1)$-surgery on the $(2,2n+1)$--torus knot. Thus the corollary follows from Corollary [Corollary 3](#characterize-irreducible){reference-type="ref" reference="characterize-irreducible"} except for $S^1 \times S^2$ and $\mathbb{R}P^3 \# \mathbb{R}P^3$.
According to Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"}, the knot $S^1\times \{pt\}$ is the only knot type in $S^1\times S^2$ that has no non-loose Legendrian representatives.
Similarly, combining the fact that $\mathbb{R}P^3$ admits a tight contact structure with Theorem [Theorem 1](#characterize){reference-type="ref" reference="characterize"} tells us that any knot type $K$ in $\mathbb{R}P^3\# \mathbb{R}P^3$ will always have non-loose Legendrian representatives. ◻
## Finding infinitely many non-loose representatives of a knot
To prove Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"} we begin with a construction. Given a knot $K$ in an irreducible $3$--manifold $M$ that admits non-loose representatives (and is not a rational unknot), let $C$ be the complement of a tubular neighborhood of $K$. As discussed in the previous section, $C$ is irreducible and has incompressible boundary. From Theorem 6.1 of [@HondaKazezMatic00] $C$ admits a universally tight contact structure $\overline\xi$ with convex boundary having two meridional dividing curves. One can now glue $T^2\times[0,1]$ to $C$ and extend the contact structure to $\xi''_i$ so that the restriction of $\xi''_i$ to $T^2\times [0,1]$ has convex Giroux torsion $i$. As proven in Proposition 4.2 of [@HondaKazezMatic02] for the appropriate extension of $\overline{\xi}$ (recall there are two), $\xi''_i$ is universally tight. It was further shown in the proof of Proposition 4.6 of that paper that the convex Giroux torsion of $\xi''_i$ is one larger than the convex Giroux torsion of $\xi''_{i-1}$, and thus each $i$ gives a distinct contact structure $\xi''_i$. One may similarly add (the appropriate) half convex Giroux torsion to $\xi''_i$ to get a contact structure $\xi''_{i+\frac 12}$, and as above, for different $i$ all of these contact structures will be distinct. Notice that all these manifolds are canonically diffeomorphic to $C$, so we will consider all the contact structures as living on $C$.
We can now glue a solid torus $S$ to $(C,\xi''_i)$ for any integer or half-integer $i$ to get an overtwisted contact structure as follows. We first extend $\xi''_i$ by adding a basic slice $(T^2\times[0,1], \xi')$ with dividing slopes $\infty$ and $n\in \mathbb{Z}$. There are two choices, and as argued in Section 4 of [@HondaKazezMatic02], for one of these choices the resulting contact structure is tight; denote it $\xi'_{i,n}$. Finally, glue in a solid torus with its unique tight contact structure having convex boundary with dividing slope $n$. The resulting manifold is $M$ and the resulting contact structures will be denoted $\xi'_i$.
We are now ready to prove Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"}, which says that all knots (other than rational unknots) that admit non-loose Legendrian representatives admit infinitely many such representatives with a given framing in at least two overtwisted contact structures, and that when the knot is null-homologous the statement can be refined.
*Proof of Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"}.* Notice that $\xi'_i$ is obtained from $\xi'_{i-1}$ by adding a Lutz twist. Thus all the contact structures $\{\xi'_i\}_{i\in\mathbb{N}}$ are the same, [@Geiges08]. Denote that contact structure $\xi$. Similarly all the contact structures $\{\xi'_i\}_{i\in \mathbb{N}+\frac 12}$ are the same and it will be denoted by $\xi'$. Notice that in $\xi'_i$ we can find a solid torus $S_n$ in $S$ with convex boundary having two dividing curves of slope $n$ (here we fix an arbitrary framing on $K$ to identify the framings on $K$ with integers). From Section [2.3](#kot){reference-type="ref" reference="kot"}, we know $S_n$ is the standard neighborhood of a Legendrian knot $L^{n,i}$. Moreover the complement of a standard neighborhood of $L^{n,i}$ is $\xi'_{i,n}$ on $C$. Thus for different $i$, the complements of the $L^{n,i}$ are different. Now notice that $\xi'$ is obtained from $\xi$ by a half Lutz twist on the transverse push-off of $L^{n,i}$. This is known to change the $d_3$ invariant of the contact structure by the self-linking number of the transverse knot [@Geiges08 Proof of Theorem 4.3.1]. Since the $d_3$ invariant is always an integer or an integer modulo an even number, we see that this will change the parity of the $d_3$ invariant. Hence $\xi$ and $\xi'$ must be non-homotopic contact structures. This concludes the first part of Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"}.
We now turn to the second part of the theorem that refines the above result when $K$ is null-homologous. We start with the contact structure $\xi$. Notice that we could arrange for the basic slice attached to $C$ in the construction of $\xi'_i$ to be positive (if it were not, just reverse the orientation on all the contact structures involved in the construction). Given this, notice that we can choose the solid tori $S_n$ above so that $S_n\subset S_{n+1}$ and $\overline{S_{n+1}- S_n}$ is a positive basic slice for all $n$. Now since $S_n$ is a standard neighborhood of $L^{n,i}$ we see that $S_-(L^{n,i})$ is $L^{n-1,i}$. Moreover, $S_+(L^{n,i})$ is loose since the complement of a standard neighborhood of $S_+(L^{n,i})$ is obtained from the complement of a standard neighborhood of $L^{n,i}$ by attaching a negative basic slice $B$ with dividing slopes $n-1$ and $n$. Thus the contact structure is overtwisted since we can find a positive basic slice $B'$ in it that shares a boundary with $B$ and has dividing slopes $n$ and $\infty$ (that is, the contact structure on $B\cup B'$ is given by a path in the Farey graph that goes from $n-1$ to $n$ to $\infty$ where the first edge has a $-$ and the second has a $+$, so we can inconsistently shorten the path, yielding an overtwisted contact structure).
From the construction it is clear that $\mathop{\mathrm{tb}}(L^{n,i})=n$. To compute the rotation number recall that if $n<0$, then [@Kanda98] says it is simply given by $\chi(\Sigma_+)-\chi(\Sigma_-)$ where $\Sigma$ is a convex Seifert surface for $K$ with Legendrian boundary on the boundary of a standard neighborhood of $L^{n,i}$ and $\Sigma_\pm$ are the $\pm$ regions of the complement of the dividing curves. In the construction of the $\xi''_i$ given in [@HondaKazezMatic02 Section 4] we can see that for $n>0$ there are an even number of boundary parallel circles in the dividing set and $n$ arcs that are boundary parallel. With our choices, there will be a subsurface of $\Sigma$ that is diffeomorphic to $\Sigma$ in $\Sigma_+$ and the $n$ arcs in the dividing set bound disks in $\Sigma_-$. Thus we see that $\mathop{\mathrm{rot}}(L^{n,i})=\chi(\Sigma)-n$ when $n<0$, but since the $L^{n,i}$ for $n\geq 0$ negatively stabilize to $L^{n',i}$ for $n'<0$, we see the formula holds for all $n$. If we relabel $L^{n,i}$ as $L_-^{n,i}$ then we have constructed the claimed knots in Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"}.
Now if $-K$ is smoothly isotopic to $K$, then setting $L_+^{n,i}$ equal to $-L_-^{n,i}$ gives the second claimed family of knots in Theorem [Theorem 7](#allhaveinfinity){reference-type="ref" reference="allhaveinfinity"}.
An analogous argument works to give the Legendrian knots $L_\pm^{n,i+\frac 12}$ in $\xi'$. ◻
# Rational unknots {#sec:ruk}
In this section we will first classify non-loose rational unknots in lens spaces and then, in the following subsection, prove that Figures [\[fig:unknotsLegendrian1\]](#fig:unknotsLegendrian1){reference-type="ref" reference="fig:unknotsLegendrian1"} and [\[fig:unknotsLegendrian2\]](#fig:unknotsLegendrian2){reference-type="ref" reference="fig:unknotsLegendrian2"} give surgery diagrams for all non-loose rational unknots.
## Classification of non-loose unknots
We begin by outlining how the classification of non-loose knots will work. First recall that $L(p,q)$ is a union of two solid tori $V_0$ and $V_1$. Fixing coordinates on $T^2$ we can think of $V_0$ as having lower meridian $-p/q$ and $V_1$ as having upper meridian $0$. If we fix two curves of slope $s$ on $\partial V_0=\partial V_1$, then a contact structure is determined by taking a tight contact structure on $V_0$ in $\mathop{\mathrm{Tight}}(S_{-p/q},s)$ and a tight contact structure on $V_1$ in $\mathop{\mathrm{Tight}}(S^0,s)$ and gluing them together (there is no guarantee that the result is tight).
We will think of the core of $V_0$ as being the rational unknot $K_0$. If we are looking for non-loose realizations of $K_0$ then we can take any slope $s$ with an edge to $-p/q$ in the Farey graph. Now a tight contact structure on $V_0$ with convex boundary having two dividing curves of slope $s$ is a standard neighborhood of a Legendrian representative in the knot type of $K_0$. (See the end of Section [2.3](#kot){reference-type="ref" reference="kot"}.) So if it is non-loose the contact structure on $V_1$ must be tight. Notice that $s$ is either in $(-p/q, (-p/q)^c]$ or $[(-p/q)^a, -p/q)$ (recall our notation for intervals in the Farey graph from Section [2.1](#fgraph){reference-type="ref" reference="fgraph"}). If it is in the first interval then any contact structure on $V_1$ is in $\mathop{\mathrm{Tight}}(S^0,s)$ and is given by a signed path going from $s$ clockwise to $0$, and gluing in the unique contact structure on $V_0$ will give a tight contact structure on $L(p,q)$. So we ignore these as we only wish to consider non-loose knots in overtwisted contact structures. Now if $s$ is in $[(-p/q)^a, -p/q)$ then the contact structure on $V_0$ will have convex tori parallel to the boundary with dividing slope any number clockwise of $-p/q$ and anti-clockwise of $s$; in particular, it will contain a convex torus with dividing slope $0$. Note that when we glue the contact structure on $V_0$ to any tight contact structure on $V_1$ the result will be overtwisted since a Legendrian divide on the convex torus with dividing slope $0$ will bound a disk in $V_1$ and hence we have an overtwisted disk. Thus the non-loose Legendrian realizations of $K_0$ with "framing" $s$ are in one-to-one correspondence with the elements of $\mathop{\mathrm{Tight}}(S^0,s)$.
We now notice that the meridional disk for $V_1$ will provide a rational Seifert surface for $K_0$, which has order $p$ in $L(p,q)$. So the rational Thurston-Bennequin invariant, which is the rational linking number of $K_0$ with a contact push-off of $K_0$ (given by a Legendrian divide on $\partial V_0$), is $\frac 1p (0 \bullet s)$. Similarly if $e$ is the Euler class of the contact structure on $V_1$ then the rational rotation number is $\mathop{\mathrm{rot}}_\mathbb{Q}=\frac 1p e(D)$ where $D$ is the meridional disk for $V_1$. To see this we show that there is an annulus in $V_0$ extending $D$ to a rational Seifert surface for $K_0$ and the tangent vector to $K_0$ and to $\partial D$ can be extended to a nonzero section of the contact structure on the annulus. Thus the relative Euler class of the contact structure pulled back to the domain of the Seifert surface (see Section [2.5](#rational){reference-type="ref" reference="rational"}) is the same as the relative Euler class of $D$. This justifies the formula. To see the claim about the annulus, we note that there is a cover of $V_0$ such that $\partial D$ pulls back to (some number of copies of) a longitude. We now consider a model for this solid torus. We choose a framing so that the pull-back of $\partial D$ has slope $0$. Then the contact structure can be assumed to be $\ker(\sin 2\pi x \, dy + \cos 2\pi x\, dz)$ on $S^1\times \mathbb{R}^2$ thought of as $\mathbb{R}^3$ modulo $x\mapsto x+n$ for some positive integer $n$. Now consider the annulus $A=\{(x,y,z): z=0, y\geq 0\}$ in $S^1\times \mathbb{R}^2$. We can see that the annulus is foliated by ruling curves parallel to the $x$-axis, showing that the vector field pointing in the positive $x$-direction orients the lift of $K_0$ and extends to the annulus. (We note here that it is essential that the curve $\partial D$ is positive with respect to the contact framing, meaning that it is not between the meridional slope and the contact framing, for the above model to hold, but that is the case here.) We can now take a quotient of a convex neighborhood of the $x$-axis to construct a model for $V_0$ with the claimed extension of $D$ to a rational Seifert surface for $K_0$.
For the Euler class of a contact structure described above we note that it is the obstruction to extending a trivialization of $\xi$ over the $1$-skeleton to the $2$-skeleton. We can take $K_0$ to be the $1$-skeleton of the lens space and then (the extended as above) $D$ to be the $2$-skeleton. Since we can take a tangent vector to $K_0$ to be a trivialization of $\xi$ over the $1$-skeleton, we see that $e(\xi)$ evaluated on the generator of $H_2(L(p,q))$ coming from $D$ is simply $e(D)$ where $e$ is the relative Euler class of $V_0$ discussed above.
The slopes in $[(-p/q)^a, -p/q)$ with an edge to $-p/q$ are simply $s_k= (-p/q)^a\oplus k(-p/q)$. We now observe how stabilization works. Suppose that $L$ is a Legendrian knot corresponding to the unique tight contact structure on $V_0$ with slope $s_k$, for $k>0$, and some tight contact structure on $V_1$ in $\mathop{\mathrm{Tight}}(S^0,s_k)$. Now inside of $V_0$ there are two solid tori $S_\pm$ (smoothly isotopic to $V_0$) that have convex boundary having two dividing curves of slope $s_{k-1}$ such that the basic slice $V_0\setminus S_\pm$ is a $\pm$ basic slice $B_\pm$. Now $S_\pm$ is the standard neighborhood of $S_\pm(L)$ and the contact structure on the complement is given by the contact structure on $V_1$ union $B_\pm$. That is, it is given by the path in the Farey graph describing the contact structure on $V_1$ extended by the edge describing $B_\pm$. This new path might not (and usually will not) be a minimal path. If the path can be consistently shortened then the contact structure on the complement of $S_\pm$ is tight, and hence the Legendrian knot that $S_\pm$ is a neighborhood of is non-loose. If the path can be inconsistently shortened, then the complement of $S_\pm$ is overtwisted, and the Legendrian knot that $S_\pm$ is a neighborhood of is loose.
We now consider $s_0=(-p/q)^a$. When we stabilize the knot corresponding to $V_0$ it will have neighborhood $S_\pm$ with dividing slope $(-p/q)^c$ and the basic slice $B_\pm=V_0\setminus S_\pm$ will contain convex tori parallel to the boundary with dividing slope any rational number clockwise of $(-p/q)^c$ and anti-clockwise of $(-p/q)^a$. In particular it will contain a convex surface with dividing slope $0$ and hence $V_1\cup B_\pm$ will be overtwisted. That is any stabilization of a Legendrian knot corresponding to $V_0$ with dividing slope $s_0$ will be loose.
We now give the classification of Legendrian rational unknots in $L(p,1)$, recovering the results from [@GeigesOnaran15] and adding information about how the non-loose unknots are related by stabilization.
*Proof of Theorem [Theorem 10](#Lp1){reference-type="ref" reference="Lp1"}.* Notice that $(-p)^a=-\infty$. So with the notation above there is a non-loose rational unknot $L$ with standard neighborhood $V_0$ with convex boundary having slope $\infty$. From above we see that $\mathop{\mathrm{tb}}_\mathbb{Q}(L)=\frac 1p$. Moreover the contact structure on $V_1$ is the unique tight contact structure on a solid torus with longitudinal dividing curves and the formula for the rotation number above gives $\mathop{\mathrm{rot}}_\mathbb{Q}(L)=0$. Our discussion above also shows that $S_\pm(L)$ is loose.
Now consider $s_1=-p-1$. The contact structures on $V_1$ with this dividing slope correspond to signed paths from $-p-1$ clockwise to $-1$. That is, there are $p$ edges in the path and they are all in a continued fraction block, so the contact structure on $V_1$ is determined by the number of $+$ signs, which can be anything between $0$ and $p$; that is, there are $p+1$ non-loose knots. Denote them by $L_k$, where $k$ is the number of $-$ signs in the path describing the contact structure on $V_1$. From the formulas above we see that $\mathop{\mathrm{tb}}_\mathbb{Q}(L_k)=(p+1)/p$ and $\mathop{\mathrm{rot}}_\mathbb{Q}(L_k)=1-2k/p$. We also know that the Euler class of the contact structure containing $L_k$ is $p-2k$.
Notice that if we negatively stabilize $L_0$ then the contact structure on its complement is described by the path from $-1$ anti-clockwise to $-p-1$ having all positive signs and a negative sign on the edge from $-p-1$ to $\infty$. Since we can inconsistently shorten this path, we see that it is overtwisted. Thus $S_-(L_0)$ is loose. Similarly, if we positively stabilize $L_0$ then we have the same path from $-1$ to $-p-1$ and then a $+$ on the edge from $-p-1$ to $\infty$. This path can be consistently shortened to the one that goes from $0$ to $\infty$, that is, the contact structure on the complement of the standard neighborhood of $L$. That is, $S_+(L_0)=L$. Similarly $S_-(L_p)=L$ and $S_+(L_p)$ is loose. All the other $L_k$ have complements described by paths with both $+$ and $-$ signs, so the same analysis as above shows that any stabilization of them will be loose (since we can always shuffle the basic slices in the continued fraction block so that the sign opposite to the stabilizing sign appears adjacent to the basic slice corresponding to the stabilization).
Now consider $s_l$ for $l>1$. The path describing a contact structure on $V_1$ consists of an edge from $s_l$ to $-p$ and then edges from $-p, -p+1, \ldots, -1$. Those last $p-1$ edges are all in a continued fraction block and the first edge is not. So there are $2p$ non-loose knots. We denote them by $L^l_{\pm,k}$ where $k$ is the number of $-$ signs in the continued fraction block and $\pm$ denotes the sign of the first edge in the path. The argument above shows that $L^l_{+,0}$ has positive stabilization $L^{l-1}_{+,0}$ if $l>2$ and $S_+(L^2_{+,0})=L_0$, while all the negative stabilizations are loose. Similarly for the $L^l_{-,p-1}$. Thus $L$ together with $L_0, L_p$ and all the $L^l_{+,0}$ and $L^l_{-,p-1}$ form a $\normalfont\textsf{V}$ based at $(0,1/p)$.
We can similarly check that $L^l_{+,k}$, for $l>2$ and $k=1,\ldots, p-2$, positively stabilizes to $L^{l-1}_{+,k}$ and becomes loose when stabilizing negatively, while $L^l_{-,k}$, for $l>2$ and $k=1,\ldots, p-2$, negatively stabilizes to $L^{l-1}_{-,k}$ and becomes loose when stabilizing positively. Now consider $L^2_{+,k}$ for $k=1, \ldots, p-1$ (since $L^2_{+,0}$ was discussed above). If we negatively stabilize the knot it becomes loose, but if we positively stabilize it we can consistently shorten the path describing the contact structure on the complement to see that we get $L_{k}$. Similarly, positively stabilizing $L^2_{-,k-1}$, for $k=1, \ldots, p-1$ (since $L^2_{-,p-1}$ was discussed above), gives a loose knot while negatively stabilizing the knot gives $L_{k}$. Thus we see that each $L_k$, for $k=1, \ldots, p-1$, is the vertex of a $\normalfont\textsf{V}$ based at $(1-2k/p, (p+1)/p)$.
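To make the bookkeeping above concrete, consider the case $p=3$ (the values below are just the formulas from the proof evaluated at $p=3$). In $L(3,1)$ the non-loose rational unknot $L$ has
$$\mathop{\mathrm{tb}}_\mathbb{Q}(L)=\frac13, \qquad \mathop{\mathrm{rot}}_\mathbb{Q}(L)=0,$$
and the knots $L_0,L_1,L_2,L_3$ have
$$\mathop{\mathrm{tb}}_\mathbb{Q}(L_k)=\frac43, \qquad \mathop{\mathrm{rot}}_\mathbb{Q}(L_k)=1-\frac{2k}{3}\in\left\{1,\tfrac13,-\tfrac13,-1\right\}.$$
Here $S_+(L_0)=L=S_-(L_3)$, while every stabilization of $L_1$ and $L_2$ is loose.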
We now turn to the classification of non-loose rational unknots in $L(p,p-1)$.
*Proof of Theorem [Theorem 12](#Lpp1){reference-type="ref" reference="Lpp1"}.* We first notice that $(-p/(p-1))^a=-(p-1)/(p-2)$. With the notation above, we consider the non-loose unknot with standard neighborhood $V_0$ having lower meridional slope $-p/(p-1)$ and dividing slope $-(p-1)/(p-2)$. Its complement is the solid torus $V_1$ with upper meridian $0$ and dividing slope $-(p-1)/(p-2)$. There are two tight contact structures on such a torus. So there are two non-loose rational unknots $L_\pm$ with $\mathop{\mathrm{tb}}_\mathbb{Q}(L_\pm)=(p-1)/p$ and we can compute $\mathop{\mathrm{rot}}_\mathbb{Q}(L_\pm)=\pm(1-2/p)$. As discussed above, we know any stabilization of either of these knots is loose. In addition, the Euler class of the contact structure containing $L_\pm$ is $\pm(2-p)$. Notice that the sign of the Euler class is opposite to the sign of the rotation number. This is because $-p/(p-1)$ is clockwise of $-(p-1)/(p-2)$.
Now $s_1=-(2p-1)/(2p-3)$. The contact structures on $V_1$ with these boundary conditions are given by the signed paths with vertices $-(2p-1)/(2p-3)$, $-p/(p-1)$, $-1$, and the two edges are part of a continued fraction block, so there are three possible contact structures and hence three non-loose knots: $L_\pm^1$ corresponding to all of the signs being $\pm$ and $K^1$ corresponding to the path with one $+$ and one $-$. The rational Thurston-Bennequin invariant of these Legendrian knots is $(2p-1)/p$ and $\mathop{\mathrm{rot}}_\mathbb{Q}(L^1_\pm)= \pm (2-1/p)$ while $\mathop{\mathrm{rot}}_\mathbb{Q}(K^1)=0$. As we argued in the proof of Theorem [Theorem 10](#Lp1){reference-type="ref" reference="Lp1"} we have $S_\pm(L^1_\pm)=L_\pm$ and $S_\mp(L^1_\pm)$ is loose, while any stabilization of $K^1$ is loose. We also note that the Euler class of the contact structure containing $K^1$ is zero.
For $s_k$, with $k>1$, we see the contact structures on $V_1$ are described by a path with two edges in the Farey graph and these edges are not in a continued fraction block. Thus there are four contact structures on $V_1$ and hence four non-loose knots. Denote by $L_\pm^k$ the Legendrian knot whose complement is given by all the signs being $\pm$ and denote by $K_\pm^k$ the Legendrian knot whose edge from $-1$ to $-p/(p-1)$ is $\pm$. Once again as in the proof of Theorem [Theorem 10](#Lp1){reference-type="ref" reference="Lp1"} we see that $L_+$ together with the $L_+^k$ form a back slash based at $(-1+\frac 2p, \frac{p-1}p)$ while the $L_-$ and $L^k_-$ give a forward slash based at $(1-\frac 2p, \frac{p-1}p)$. Similarly the $K^1$ and $K^k_\pm$ give a $\normalfont\textsf{V}$ based at $(0, \frac{2p-1}p)$. ◻
We will now classify non-loose Legendrian rational unknots in $L(2n+1,2)$, where we must consider $\pm K_0$ and $\pm K_1$.
*Proof of Theorem [Theorem 14](#l2n12){reference-type="ref" reference="l2n12"}.* We start with the non-loose representations of $K_0$. We have that $(-(2n+1)/2)^a= -n-1$ and so $V_0$ with dividing curves of this slope is a standard neighborhood of a non-loose Legendrian realization of $K_0$. The tight contact structure on the complement $V_1$ will be determined by a signed path in the Farey graph from $-n-1$ to $-1$. Notice that this path forms a continued fraction block so the contact structure is determined by the number of $-$ signs in the path. So we have $n+1$ non-loose Legendrian representatives $L_l$, for $l=0, \dots, n$. We easily compute $\mathop{\mathrm{tb}}_\mathbb{Q}(L_l)=(n+1)/(2n+1)$, $\mathop{\mathrm{rot}}_\mathbb{Q}(L_l)= (n-2l)/(2n+1)$, and the Euler class of the contact structure containing $L_l$ is $2l-n$.
Now $s_k=-(n(2k+1)+k+1)/(2k+1)$; here we are using the notation established at the start of this section, so these are the slopes of non-loose Legendrian realizations of $K_0$. We analyzed $k=0$ above. For $k=1$, the path from $s_1$ to $-1$ has two jumps in a continued fraction block, that is from $s_1$ to $-(2n+1)/2$ and then to $-n$, and $(n-1)$ jumps from $-n$ to $-1$ and these are all in a continued fraction block. So there are $3n$ tight contact structures on $V_1$ with these boundary conditions. We denote by $L^1_{j,l}$ the non-loose Legendrian knot whose complement has $j$ $-$ signs in the first continued fraction block, and $l$ in the second. So $j=0,1,2$ and $l=0,\ldots, n-1$. Notice that when $j=0$, if we positively stabilize $L^1_{0,l}$ we will add a $+$ basic slice to $V_1$ with boundary slopes $s_1$ and $-n-1$. We can then consistently shorten the path to get $L_l$. However, if we negatively stabilize the knot then the contact structure will become overtwisted since there will be an inconsistent shortening of the path, i.e., $S_+(L^1_{0,l}) = L_l$ and $S_-(L^1_{0,l})$ is loose. Similarly $S_-(L^1_{2,l})=L_{l+1}$ and $S_+(L^1_{2,l})$ is loose. So $L^1_{j,l}$ will stabilize to one of the $L_l$ for $j=0$ or $2$. We also know that any stabilization of $L^1_{1,l}$ will be loose. It is easy to see that $\mathop{\mathrm{tb}}_\mathbb{Q}(L^1_{j,l})= (3n+2)/(2n+1)$, $\mathop{\mathrm{rot}}_\mathbb{Q}(L^1_{1,l})=(n-1-2l)/(2n+1)$ for $l=0,\ldots, n-1$, and the Euler class of the contact structures containing the $L^1_{1,l}$ is $n-1-2l$, for $l=0,\ldots, n-1$.
Finally, for $k>1$ we see the path in the Farey graph from $s_k$ to $-1$ goes from $s_k$ to $-(2n+1)/2$ to $-n$, and then from $-n$ to $-1$. The first two jumps are not in a continued fraction block and the last $n-1$ are. So there are $4n$ contact structures on $V_1$ with dividing curves of slope $s_k$ and corresponding non-loose rational unknots $L^k_{i,j,l}$ where $i=0,1$ is the number of $-$ signs on the edge from $s_k$ to $-(2n+1)/2$, $j=0,1$ is the number of $-$ signs on the jump from $-(2n+1)/2$ to $-n$ and $l$ is the number of $-$ signs from $-n$ to $-1$. Notice that the positive stabilization of $L^k_{0,j,l}$ is $L^{k-1}_{0,j,l}$ for $k>2$ and $L^1_{j,l}$ for $k=2$, while a negative stabilization of it is loose. Similarly a negative stabilization of $L^k_{1,j,l}$ is $L^{k-1}_{1,j,l}$ for $k>2$ and $L^1_{j+1,l}$ for $k=2$, while a positive stabilization is loose.
From the above we see that $L_l$ is the base for a back slash based at $(-n/(2n+1), (n+1)/(2n+1))$ if $l=0$, a forward slash based at $(n/(2n+1), (n+1)/(2n+1))$ if $l=n$, and a $\normalfont\textsf{V}$ based at $((n-2l)/(2n+1), (n+1)/(2n+1))$ for $l=1,\ldots, n-1$ (notice that the sign of the Euler class is opposite to the sign of the rotation number). Similarly $L^1_{1,l}$ is the base for a $\normalfont\textsf{V}$ based at $((n-1-2l)/(2n+1),(3n+2)/(2n+1))$, finishing the classification of non-loose realizations of $K_0$ (and $-K_0$).
We now turn to $K_1$ (and $-K_1$). Now $V_1$ will be the neighborhood of the Legendrian knot and since it has an upper meridional slope of $0$ the possible slopes on its dividing curves are $s_0=\infty$ and $s_k=1/k$ for $k\in \mathbb{N}$. The contact structures on the complement of $V_1$ with dividing slope $s_k$ are given by a signed path in the Farey graph from $-(2n+1)/2$ clockwise to $s_k$ (recall the first jump will not have a sign, so this is really a signed path from $-n$ to $s_k$). Thus for $k=0$ there will be exactly two contact structures since there is an edge from $-n$ to $\infty$, and hence two non-loose knots $L_\pm$. We compute that $\mathop{\mathrm{tb}}_\mathbb{Q}(L_\pm)=2/(2n+1)$ and $\mathop{\mathrm{rot}}_\mathbb{Q}(L_\pm)=\pm 1/(2n+1)$ and the Euler class of the contact structure containing $L_{\pm}$ is $\mp1$. For $k=1$ the contact structures on $V_0$ are given by signed paths in the Farey graph from $-n$ clockwise to $1$. This path has $(n+1)$ jumps all in a continued fraction block, so there are $n+2$ contact structures determined by the number $l$ of $-$ on the path, and hence $n+2$ non-loose Legendrian knots $L_l$. As above we see that $L_0$ negatively stabilizes to $L_+$ and positively stabilizes to be loose, while $L_{n+1}$ positively stabilizes to $L_-$ and negatively stabilizes to be loose (notice that the sign of the Euler class is opposite to the sign of the rotation number). All the other $L_l$ become loose after any stabilization. We also compute that $\mathop{\mathrm{tb}}_\mathbb{Q}(L_l)= (2n+3)/(2n+1)$, $\mathop{\mathrm{rot}}_\mathbb{Q}(L_l)=(2n+2-4l)/(2n+1)$ for $l=0,\ldots,n+1$, and the contact structure containing $L_l$ has Euler number $-(2n+2-4l)$. Finally, for $k>1$ the signed path determining the contact structure on $V_0$ has one jump from $1/k$ to $0$ and then a continued fraction block from $0$ to $-n$. Thus there are $2(n+1)$ contact structures on $V_0$ and hence $2(n+1)$ non-loose Legendrian knots $L^k_{\pm, l}$. The same arguments as in the $K_0$ case tell us that $L_+$ is the base for a forward slash based at $(1/(2n+1), 2/(2n+1))$, $L_-$ contributes a backward slash based at $(1/(2n+1), 2/(2n+1))$, and each $L_l$ with $l=1, \ldots, n$ is the base of a $\normalfont\textsf{V}$ based at $((2n-2(2l-1))/(2n+1), (2n+3)/(2n+1))$. ◻
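As a consistency check (this is nothing more than the formulas from the two proofs evaluated on the same lens space), note that $L(3,2)$ is both $L(p,p-1)$ with $p=3$ and $L(2n+1,2)$ with $n=1$. The proof of Theorem 12 gives two non-loose representatives of the core of $V_0$ of maximal $\mathop{\mathrm{tb}}_\mathbb{Q}$ with
$$\mathop{\mathrm{tb}}_\mathbb{Q}(L_\pm)=\frac23, \qquad \mathop{\mathrm{rot}}_\mathbb{Q}(L_\pm)=\pm\frac13,$$
while the formulas in the proof of Theorem 14, evaluated at $n=1$, give the $n+1=2$ knots $L_0,L_1$ with $\mathop{\mathrm{tb}}_\mathbb{Q}(L_l)=\frac23$ and $\mathop{\mathrm{rot}}_\mathbb{Q}(L_l)=\frac{1-2l}{3}=\pm\frac13$; at the next level both proofs produce three non-loose representatives with $\mathop{\mathrm{tb}}_\mathbb{Q}=\frac53$, the middle one having $\mathop{\mathrm{rot}}_\mathbb{Q}=0$.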
We can now give the general classification of non-loose rational unknots.
*Proof of Theorem [Theorem 15](#lensgeneral){reference-type="ref" reference="lensgeneral"}.* Consider the lens space $L(p,q)$ where $q\not= 1$ or $p-1$ and let $-p/q=[a_0,a_1, \ldots, a_n]$. We first notice that $(-p/q)^a=[a_0,\ldots, a_{n-1}]$ and we denote this number by $-p''/q''$. Notice that $p'' = \overline{q}$ where $1 \leq \overline{q} \leq p$ and $q\overline{q} \equiv 1 \pmod p$.
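For instance (a routine check of this notation), for $L(7,2)$ we have
$$-\frac72=[-4,-2]=-4-\frac{1}{-2}, \qquad \left(-\frac72\right)^{a}=[-4]=-4=-\frac{p''}{q''},$$
so $p''=4$, which is indeed $\overline{q}$ since $2\cdot 4=8\equiv 1\pmod 7$.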
Notice that the dividing slopes of a standard neighborhood of non-loose representatives of $K_0$ are $s_k=-(p''+kp)/(q''+kq)$ for $k \geq 0$. To understand when we see a slash or a $\normalfont\textsf{V}$, we need to discuss the signed paths describing tight contact structures on $V_1$ with boundary conditions $s_k$. For $n \geq 2$, consider the path $P$ in the Farey graph from $r=[a_0,\ldots, a_{n-3}, a_{n-2}+1]$ clockwise to $-1$. Decorations on this path will correspond to $A=|(a_0+1)\cdots (a_{n-2}+1)|$ tight contact structures on $V_1$ with dividing slope $r$. We will see that for any $s_k$ the signed path describing a tight contact structure on $V_1$ will contain the path $P$. Specifically, for $s_0$ we see that the path from $-p''/q''$ to $-1$ consists of a continued fraction block of length $|a_{n-1}+1|$ from $-p''/q''$ to $r$ and then $P$. This gives $|a_{n-1}|A$ contact structures. For $s_1$ we see that the path from $s_1$ to $-1$ consists of a continued fraction block of length $|a_n|$ from $s_1$ to $s = [a_0,\ldots, a_{n-1}+1]$, and then a continued fraction block of length $|a_{n-1}+2|$ from $s$ to $r$ followed by $P$. This gives $|a_n-1||a_{n-1}+1|A$ tight contact structures. For $k>1$ the path from $s_k$ to $-1$ consists of one jump from $s_k$ to $-p/q$, followed by a continued fraction block of length $|a_n+1|$ from $-p/q$ to $[a_0,\ldots, a_{n-1}+1]$, followed by a continued fraction block of length $|a_{n-1}+2|$ from $[a_0,\ldots, a_{n-1}+1]$ to $r$, and finally $P$. This gives $2|a_n||a_{n-1}+1|A$ contact structures.
Fix a choice $P_s$ of signs on $P$. We will consider the non-loose Legendrian knots $L^1_{i,j,P_s}$ with standard neighborhoods having boundary slope $s_1$ and whose complement is given by a path with $i$ negative signs and $|a_n|-i$ positive signs from $s_1$ to $[a_0,\ldots, a_{n-1}+1]$, and $j$ negative signs and $|a_{n-1}+1|-j$ positive signs from $[a_0,\ldots, a_{n-1}+1]$ to $r$, followed by $P_s$. Notice that if we stabilize $L^1_{i,j,P_s}$ then its complement will be the complement of $L^1_{i,j,P_s}$ with a basic slice added to it. The basic slice will have slopes $s_0$ and $s_1$ and we will be able to shorten the resulting path. So if $i\not=0$ or $|a_n|$, then the resulting contact structure will not be tight, since we can shuffle a basic slice of the opposite sign to the stabilization to be adjacent to $s_1$, so when we shorten we have inconsistent signs. On the other hand, if $i=0$ then the contact structure resulting from a positive stabilization will be tight. To see which contact structure we get, we need to consider the non-loose Legendrian knot $L^0_{l,P_s}$ with standard neighborhood having boundary slope $s_0$ and whose complement is given by a path from $-p''/q''$ to $r$ with $l$ negative signs and $|a_{n-1}+1|-l$ positive signs followed by $P_s$. We easily see that $S_+(L^1_{0,j,P_s})$ is the same as $L^0_{j,P_s}$ while the negative stabilization is loose. Similarly $S_-(L^1_{|a_n|, j,P_s})$ is the same as $L^0_{j+1,P_s}$ while a positive stabilization is loose.
Finally consider Legendrian knots $L^k_{\pm,i,j,P_s}$ with standard neighborhood having dividing slope $s_k$ and whose complement is given by a path with a $\pm$ basic slice from $s_k$ to $-p/q$, $i$ negative signs in the continued fraction block from $-p/q$ to $[a_0,\ldots, a_{n-1}+1]$, $j$ negative signs in the continued fraction block from $[a_0,\ldots, a_{n-1}+1]$ to $r$, and then $P_s$. As above we see that for $k>2$ we have $S_\mp(L^k_{\pm,i,j,P_s})$ is loose while $S_\pm(L^k_{\pm,i,j,P_s})$ is $L^{k-1}_{\pm,i,j,P_s}$. Similarly, $S_\mp(L^2_{\pm,i,j,P_s})$ is loose, while $S_+(L^2_{+,i,j,P_s})$ is $L^1_{i,j,P_s}$ and $S_-(L^2_{-,i,j,P_s})$ is $L^1_{i+1,j,P_s}$.
Putting this together we see that $L^0_{l,P_s}$ for $l=0$ is the base of a back slash, while $L^0_{l,P_s}$ for $l=|a_{n-1}+1|$ is the base of a forward slash; there are $|(a_0+1)\cdots (a_{n-2}+1)|$ of each of these. All the other $L^0_{l,P_s}$ are the base of a $\normalfont\textsf{V}$; there are $\left|(a_0+1)\cdots (a_{n-2}+1)(a_{n-1}+2)\right|$ of these. We note that $\mathop{\mathrm{tb}}_\mathbb{Q}=p''/p$ for all of these knots, and the rotation numbers of these non-loose knots and the Euler classes of the contact structures in which they live can be computed as discussed in the beginning of this section. Notice that the sign of the Euler class is opposite to the sign of the rotation number since $-p/q$ is clockwise of $(-p/q)^a$. Finally, the $L^1_{i,j,P_s}$ with $i\not=0$ or $|a_n|$ each give a $\normalfont\textsf{V}$, and there are $$\begin{aligned}
(|(a_n-1)(a_{n-1}+1)|-2|a_{n-1}+1|)|(a_0+1)\cdots (a_{n-2}+1)| = |(a_0+1)\cdots(a_n+1)|\end{aligned}$$ of these (here we use that each $a_i\leq -2$, so $|a_n-1|-2=|a_n+1|$). These knots have $\mathop{\mathrm{tb}}_\mathbb{Q}=(p''+p)/p$, and the rotation numbers of these non-loose knots and the Euler classes of the contact structures in which they live can be computed as discussed in the beginning of this section. Notice that the sign of the Euler class is opposite to the sign of the rotation number since $-p/q$ is clockwise of $(-p/q)^a$.
We are left to see that non-loose realizations of $K_0$ occur in at least two contact structures. Suppose that $a_0\not=-2$. Notice that the path from $-p/q$ to $-1$ will go through $a_0+1$. Consider the paths $P_l$ in the Farey graph from $-p/q$ to $-1$ such that all the signs on the portion of the path from $-p/q$ to $a_0+1$ are $+$ and the jumps from $a_0+1$ to $-1$ have $l$ negative signs and $|a_0+2|-l$ positive signs. Then the Euler class of the contact structure on $L(p,q)$ given by this path will be $c+(|a_0+2| -2l)$ for $l=0,\ldots, |a_0+2|$, where $c$ is a constant determined by the signed path from $-p/q$ to $a_0+1$. Also notice that $p$ is at least $|a_0+1|$ (since $-p/q$ is between $a_0+1$ and $a_0$, it will be an iterated Farey sum of these numbers). Thus the order of the second cohomology of $L(p,q)$ is at least $|a_0+1|$, so the Euler classes indicated above will provide at least two distinct values.
We now assume that $a_0=-2$ and $k$ is the smallest index such that $a_k\not=-2$ (note there will be such a $k$ since if all the $a_i$ are $-2$ then $-p/q$ would be $-n/(n-1)$ for some $n$ and we are assuming our lens space is not $L(n,n-1)$ which was dealt with earlier). In this case $-p/q$ will be between $-(k+1)/k$ and $-(2k+3)/(2k+1)$. To see this note that $-(k+1)/k$ is $[-2,\ldots, -2]$ where the number of $-2$s is $k$, and $-(k+2)/(k+1)$ has the same continued fraction expansion except that the number of $-2$s is $k+1$. From this we see that $[-2,\ldots, -2,-3]$, where the number of $-2$s is $k$, is $-(2k+3)/(2k+1)$. Thus $-(2k+3)/(2k+1)$ is clockwise of $-p/q$ and we must have that $-(k+1)/k$ is anti-clockwise of $-p/q$ (since dropping the last term in a continued fraction moves the number anti-clockwise). In particular, $-p/q$ is in the interval claimed. Notice that this implies that the second cohomology of $L(p,q)$ has order at least $2k+3$.
The first continued fraction block in the path from $-p/q$ to $-1$, traversed anti-clockwise goes from $-1$ to $-(k+2)/(k+1)$ to $-(k+2)/(k+1)\oplus i(-(k+1)/k)$ for $i=1,\ldots, |a_k+1|$. Let $P_l$ be the signed path from $-p/q$ to $-1$ that is all plus signs except on the continued fraction block discussed above, and on that continued fraction block there are $l$ negative signs. One can easily check that each basic slice in the last continued fraction block contributes $\pm(k+1)$ to the Euler class of the contact structure. So the Euler class of the contact structure corresponding to $P_l$ is $c+(k+1)(|a_k+1|-2l)$, for $l=0, \ldots, |a_k+1|$, where $c$ is the contribution from the part of the path before the last continued fraction block. Once again, it is clear that there are at least two distinct contact structures supporting non-loose representatives of $K_0$.
The same sort of analysis as above classifies non-loose representatives of $K_1$ and handles the case $n=1$. ◻
We now end by noting that any rational unknot, other than the unknot in $S^3$, has non-loose Legendrian representatives in at least two contact structures and has no non-loose transverse representatives.
*Proof of Corollary [Corollary 16](#courseRU){reference-type="ref" reference="courseRU"}.* The first part follows from Theorems [Theorem 10](#Lp1){reference-type="ref" reference="Lp1"}, [Theorem 12](#Lpp1){reference-type="ref" reference="Lpp1"}, and [Theorem 15](#lensgeneral){reference-type="ref" reference="lensgeneral"}. Since all non-loose Legendrian knots have a lower bound on their $\mathop{\mathrm{tb}}_\mathbb{Q}$ (as shown by the same set of theorems), there are clearly no non-loose transverse knots. ◻
## Surgery diagrams for rational unknots {#sdru}
We will begin by considering the diagram on the left-hand side of Figure [\[fig:unknotsLegendrian1\]](#fig:unknotsLegendrian1){reference-type="ref" reference="fig:unknotsLegendrian1"} and show that these surgery diagrams describe all non-loose Legendrian representatives of $K_1$ with $\mathop{\mathrm{tb}}_\mathbb{Q}=q/p$ in $L(p,q)$. To this end, we notice that the smooth Dehn surgery done on the black knot is $-1+(q/(p+q))=-p/(p+q)$ and performing a Rolfsen twist yields surgery coefficient $-p/q$. So the diagram does indeed give the lens space $L(p,q)$. Moreover, one can see that the Rolfsen twist changes coordinates on the torus by the matrix $\begin{bmatrix}1& -1\\ 0&1\end{bmatrix}$. This matrix also takes the path defining an overtwisted contact structure on $L(p,q)$ that starts at $0$ and goes anticlockwise to $-p/q$ and goes past $\infty$ to a path that starts at $0$ and goes anticlockwise to $-p/(p+q)$ and goes past $\infty$. Thus all the contact structures constructed in the previous section are also described by this surgery diagram. Finally, notice that the solid torus giving a neighborhood of $K_1$ in the construction in the previous section had upper meridian $0$ and dividing slope $\infty$. The change of coordinates above sends this to a solid torus with upper meridian $0$ and dividing slope $-1$. This solid torus is precisely a neighborhood of the gray curve in the figure, thus completing the justification that the given surgery diagram gives all $\mathop{\mathrm{tb}}_\mathbb{Q}=q/p$ representatives of $K_1$.
We now consider the diagram on the right-hand side of Figure [\[fig:unknotsLegendrian1\]](#fig:unknotsLegendrian1){reference-type="ref" reference="fig:unknotsLegendrian1"} and show that these surgery diagrams describe the non-loose Legendrian representatives of $K_1$ with $\mathop{\mathrm{tb}}_\mathbb{Q}=(p+q)/p$ in $L(p,q)$. We first note that the smooth surgery diagram consists of a Hopf link with components $U_1$ and $U_2$, and the smooth surgery coefficients on the components are $a_0$ and $s=[a_1,\ldots, a_n]$, respectively; thus the surgery diagram describes $L(p,q)$ and the contact structure is obviously overtwisted as we are doing $(+1)$-contact surgery on a stabilized knot. Moreover, the gray knot $K_1$ is clearly non-loose since performing Legendrian surgery on it will cancel the $(+1)$-contact surgery on $U_1$ and hence result in a tight manifold.
We now wish to identify the precise non-loose knot realized by the surgery diagram. To this end, we begin by considering $L(-a_0,1)$. In this case, the diagram on the right of Figure [\[fig:unknotsLegendrian1\]](#fig:unknotsLegendrian1){reference-type="ref" reference="fig:unknotsLegendrian1"} does not have the unknot with contact framing $(s+1)$. The diagram gives $-a_0+1$ distinct non-loose Legendrian representatives of $K_1$, which is exactly the number claimed in Theorem [Theorem 10](#Lp1){reference-type="ref" reference="Lp1"}; that is, it is equal to the number of contact structures on $T^2\times I$ with dividing slopes between $a_0-1$ and $-1$. So these are all of the non-loose rational unknots in this case. Now $L(p,q)$ is obtained from $L(-a_0,1)$ by a further contact $(s+1)$ surgery on the surgery dual to the contact $(+1)$ surgered unknot in the diagram. The number of contact structures one can achieve through this is $|a_0-1||a_1+1|\cdots|a_n+1|$, the same as the number of contact structures on $T^2\times I$ with dividing slopes $a_0-1$ and $(-p/q)^a$. Also, this number coincides with the number of tight contact structures on the complement of a standard neighborhood of $K_1$ with slope $s_1$ in the proof of Theorem [Theorem 15](#lensgeneral){reference-type="ref" reference="lensgeneral"} (notice that the proof of Theorem [Theorem 15](#lensgeneral){reference-type="ref" reference="lensgeneral"} only considers $K_0$; for $K_1$ we need to reverse the order of $a_0, \ldots, a_n$). Thus again, we see that all the contact structures, and non-loose knots, described in Theorem [Theorem 15](#lensgeneral){reference-type="ref" reference="lensgeneral"} are realized in the surgery diagram.
Turning now to Figure [\[fig:unknotsLegendrian2\]](#fig:unknotsLegendrian2){reference-type="ref" reference="fig:unknotsLegendrian2"}, we note that if we consider the smooth surgery diagram then the contact framing on $K_1$ is indeed the claimed framing (note the smooth surgery coefficient on the bottom-most black knot is $-1$ and the unknots directly above have coefficient $-2$, so one can do a sequence of blow-downs to see that $K_1$ has the claimed framing), and hence the diagrams realize the desired non-loose representatives of $K_1$, *cf.* [@EtnyreMinMukherjee22pre].
# Cabling non-loose Legendrian knots
In this section we prove our structure results concerning when non-looseness is preserved under cabling and find non-simple non-loose transverse knots that have no Giroux torsion in their complements.
## Positive cables of non-loose knots
In this section, we prove Theorem [Theorem 17](#thm:poscable){reference-type="ref" reference="thm:poscable"} that says the standard $(p,q)$-cable $L_{p,q}$ of $L$ is non-loose if and only if $L$ is non-loose.
*Proof of Theorem [Theorem 17](#thm:poscable){reference-type="ref" reference="thm:poscable"}.* Given a Legendrian knot $L\in \mathcal{L}(K)$ and a rational number $q/p> \mathop{\mathrm{tb}}(L)$, let $L_{p,q}$ be the standard Legendrian $(p,q)$-cable of $L$. Recall this means that we take a standard neighborhood $N'$ of $L$ so that its ruling curves have slope $q/p$ and $L_{p,q}$ is one of these ruling curves. It is clear that if $L$ is loose then so is $L_{p,q}$ since $L_{p,q}$ is contained in a standard neighborhood of $L$. We will now show that if $L$ is non-loose then so is $L_{p,q}$. We begin with some notation.
Let $N$ be a standard neighborhood of $L_{p,q}.$ We can assume that the intersection of $\partial N'$ with the complement $\overline{(M-N)}$ is an essential annulus $A$ with $\partial A$ ruling curves on $\partial N$. We can make $A$ convex and take an $I$-invariant neighborhood $A\times[0,1]$ of $A$ so that $N\cup (A\times [0,1])$ (after rounding corners) is an $I$-invariant thickened torus with boundary components $T$ and $T'$. We can assume that $T$ is the "outer torus", meaning that $T$ bounds a solid torus $S$ that contains $N'$ (and $T'$). Notice $T$ is contact isotopic to $\partial N'$ and thus the dividing curves on $A$ run from one boundary component to the other.
We now assume that $L_{p,q}$ is loose and derive a contradiction. To this end, we know there exists an overtwisted disk $D$ disjoint from $N$. Note that $D$ might intersect the annulus $A$. We may smoothly isotope $A$ to $A'$ fixing $L_{p,q}$ such that $A'\cap D=\emptyset$. Given the smooth isotopy from $A$ to $A'$, we apply Colin's discretization of isotopy [@Honda02] to find a family of annuli $A=A_0, A_1,\cdots, A_n=A'$ where $A_{i+1}$ is obtained from $A_i$ by a single bypass attachment. We may take the union of $A_i$ with one of the annuli of $\partial N\setminus \partial A_i$ to obtain a torus $T_i$ that bounds a solid torus $S_i$ that contains the other annulus of $\partial N\setminus \partial A_i$ (that is, $T_i$ is the "outer torus" of the two tori formed from $A_i$ and one of the two annuli of $\partial N\setminus \partial A_i$). We say the bypass taking $A_i$ to $A_{i+1}$ is attached from the "outside" if it is outside of $S_i$ and from the "inside" otherwise. In the proof of the following lemma it will be important to note that because $T_i$ contains a portion of $\partial N$ that contains ruling curves, we can use Lemma [Lemma 33](#isruling){reference-type="ref" reference="isruling"} to assume that $L_{p,q}$ sits on $T_i$. We now have the following result concerning $S_i$.
**Lemma 40**. *For all $i$ we have that $S_i$ is tight and the standard neighborhood $N'$ of $L$ is always contained in $S_i$.*
We will prove this lemma below, but assuming it now, we complete the proof of Theorem [Theorem 17](#thm:poscable){reference-type="ref" reference="thm:poscable"}. The lemma tells us that $S_n$ contains $N'$ and is tight. Notice that this implies the complement of $A_n$ in $\overline{(M-N)}$ is tight, contradicting the fact that the overtwisted disk is disjoint from $A_n$. Indeed, to see this notice that the complement of $A_n$ in $\overline{(M-N)}$ consists of the complement of $S_n$ and a component contained in $S_n$. So our claim is true if the contact structure on $S_n$ and its complement are both tight. Now since $N'\subset S_n$ and the complement of $N'$ is tight, the complement of $S_n$ is also tight, and Lemma [Lemma 40](#claim1){reference-type="ref" reference="claim1"} says $S_n$ is tight. Thus the overtwisted disk $D$ cannot exist. ◻
*Proof of Lemma [Lemma 40](#claim1){reference-type="ref" reference="claim1"}.* We inductively assume that $S_{i-1}$ is tight and contains $N'$ (this is clearly true for $S_0$). We first notice that $S_i$ does not contain any convex tori parallel to its boundary of slope $\infty$. Indeed, since $S_{i-1}$ is tight, it has no such tori and thus if the bypass to get to $S_i$ was attached on the inside of $S_{i-1}$ then clearly $S_i\subset S_{i-1}$ has the claimed property. Now if the bypass is attached from the outside, then $S_{i-1}\subset S_i$ and we argue as follows. Suppose there is a convex torus $T$ with dividing slope $\infty$. Notice that $S_i\setminus N'$ is contained in the complement of $N'$ and so is a tight thickened torus, and $T$ separates it into two pieces $B$ and $B'$ with $B$ being a basic slice with one boundary component $T_0=\partial N'$ having dividing slope $\mathop{\mathrm{tb}}(L)$ and the other boundary component $T$ having dividing slope $\infty$. Notice $T$ is also a boundary component of $B'$; the other boundary component is $\partial S_i$. Moreover $S_{i-1}$ is contained in $N'\cup B$ and there is a convex torus $T'$ in $B$ with dividing slope $q/p$. Now let $L_0$ be a Legendrian divide on $T'$ and $L'$ be a Legendrian ruling curve on $\partial S_i$. Suppose the sign of the basic slice $S_i\setminus S_{i-1}$ is positive (so the paths in the Farey graph from $T_0$ to $T'$ and from $T'$ to $\partial S_i$ are all positive). Then by Lemma [Lemma 34](#seestab){reference-type="ref" reference="seestab"} we see that $L'$ is obtained from $L_0$ by some number of positive stabilizations, while $L_{p,q}$ (the ruling curve on $T_0$) is obtained from $L_0$ by some number of negative stabilizations. Since $L_{p,q}$ sits on $\partial S_i$, Lemma [Lemma 32](#canstablize){reference-type="ref" reference="canstablize"} implies that it is also a further stabilization of $L'$. However, this is not possible since we noted above that $L_{p,q}$ is obtained from $L_0$ by strictly negative stabilizations. Thus we have our claim that $S_i$ cannot contain a convex torus with dividing slope $\infty$.
We now notice that the contact structure on $S_i$ is tight if it contains $N'$. To this end, notice that $S_i\setminus N'$ is contained in the complement of $N'$ and so the contact structure here is tight. That is, we have a tight contact structure on a thickened torus and we recover $S_i$ from it by gluing in the tight contact structure on $N'$ (recall $\partial N'$ has longitudinal dividing curves and so there is a unique tight contact structure on it). Gluing a tight contact structure on a thickened torus to a tight structure on a solid torus with longitudinal dividing curves will always produce a tight contact structure unless the thickened torus contains a convex torus with dividing slope $\infty$, but that is ruled out by our argument in the previous paragraph. So the contact structure on $S_i$ is tight if $N'\subset S_i$.
Now $S_i$ is obtained by attaching a bypass to $\partial S_{i-1}$ from either the inside or outside of $S_{i-1}$. If the bypass is attached from the outside, then clearly $S_i$ contains $S_{i-1}$ and by induction it also contains $N'$ and we are done. So we are left to consider the case when the bypass is attached from the inside. In this case there are two subcases to consider: if $r$ is the dividing slope of $\partial S_{i-1}$, then $r$ is either an integer or a non-integer rational number.
If $\partial S_{i-1}$ has dividing slope $r\in \mathbb{Q}\setminus \mathbb{Z}$ then there is a unique solid torus $S'$ in $S_{i-1}$ with convex boundary with dividing slope $\lfloor{r}\rfloor$, and that solid torus in turn contains $N'$ by Corollary [Corollary 31](#lemma:rational){reference-type="ref" reference="lemma:rational"}. Now $T_i$ is obtained from $T_{i-1}=\partial S_{i-1}$ by a bypass attached from inside $S_{i-1}$. Thus the dividing slope of $T_i$ will have an edge in the Farey graph to $r$, and so it must be in the interval $[\lfloor{r}\rfloor, r]$. In particular $S_i$ will contain a solid torus with convex boundary with dividing slope $\lfloor{r}\rfloor$, which must be isotopic to $S'$ since there is a unique such solid torus in $S_{i-1}$. Thus $S_i$ will contain $N'$.
We are now left to consider the case when the dividing slope of $\partial S_{i-1}$ is an integer $n$. By Corollary [Corollary 30](#subtori){reference-type="ref" reference="subtori"} we know the solid torus $N'$ in $S_{i-1}$ is determined by a signed path in the Farey graph associated to the contact structure on $S_{i-1}\setminus N'$. Now suppose the dividing slope on $S_i$ is also an integer (which must be $n-1$). If the signed path associated to $S_{i-1}\setminus N'$ contains both signs then no matter what sign is associated to $S_{i-1}\setminus S_i$ we know that $S_i$ will have to still contain $N'$ by Corollary [Corollary 30](#subtori){reference-type="ref" reference="subtori"}. So we now must consider the case that all the signs of the signed path associated to $S_{i-1}\setminus N'$ are the same, say positive, and opposite to the sign of the basic slice $S_{i-1}\setminus S_i$. We claim that cannot happen. To see this we consider three cases: (1) $q/p>n,$ (2) $q/p<n-1,$ and (3) $n-1<q/p<n$. In Case (1), let $L_0$ be a ruling curve of slope $q/p$ on $\partial S_{i-1}$ (by which we mean, isotop $\partial S_{i-1}$ through convex surfaces so that it has ruling curves of slope $q/p$) and $L'$ a ruling curve on $\partial S_i$ of slope $q/p$. Then Lemma [Lemma 34](#seestab){reference-type="ref" reference="seestab"} shows that $L'$ is obtained from $L_0$ by positive stabilizations, while $L_{p,q}$ is obtained from $L_0$ by negative stabilizations, but Lemma [Lemma 34](#seestab){reference-type="ref" reference="seestab"} shows that, since $L_{p,q}$ sits on $\partial S_i$, it is obtained from $L'$ by some number of stabilizations (that could be positive or negative). This is a contradiction since $L_{p,q}$ cannot be obtained from $L_0$ both by strictly negative stabilizations and by a sequence of stabilizations containing positive ones. A similar argument works in Cases (2) and (3). ◻
## Negative cables of non-loose knots
In this subsection we prove Theorem [Theorem 18](#thm:negcable){reference-type="ref" reference="thm:negcable"} that says the standard cable $L_{p,q}^\pm$ is non-loose for all $q/p\in(\mathop{\mathrm{tb}}(L)-1, \mathop{\mathrm{tb}}(L))$ if $S_\pm(L)$ is non-loose.
*Proof of Theorem [Theorem 18](#thm:negcable){reference-type="ref" reference="thm:negcable"}.* We begin by considering the case when $S_+(L)$ is non-loose (the case for $S_-(L)$ is similar). For $q/p\in (\mathop{\mathrm{tb}}(L)-1, \mathop{\mathrm{tb}}(L))$ let $L^+_{p,q}$ be the standard $(p,q)$-cable of $L$. Let $S$ be a standard neighborhood of $L$ and $S'$ a standard neighborhood of $S_+(L)$. By definition $L^+_{p,q}$ is a Legendrian divide on a convex torus in $S\setminus S'$.
Assume for contradiction that $L^+_{p,q}$ is loose. So the contact structure on the complement $X$ of a standard neighborhood of $L^+_{p,q}$ is overtwisted. Let $T_0=\partial S$. Notice that $T_0$ breaks $X$ into two pieces $C$, the complement of $S$, and $B$, containing $\partial X$, and the contact structure restricted to these pieces is tight (since $L$ is non-loose and $B$ is contained in $S$). We know that $T_0$ can be smoothly isotoped to be disjoint from an overtwisted disk and thus by Colin's discretization of isotopy [@Honda02] we know there is a sequence of convex tori $T_0,\ldots, T_k$ such that $T_{i+1}$ is obtained from $T_i$ by attaching a bypass and $T_k$ has an overtwisted disk in its complement. The $T_i$ cut $X$ into two pieces: one, $C_i$, diffeomorphic to $C$, and the other, $B_i$, diffeomorphic to $B$.
We will now inductively show that the complement of $T_i$ is tight, contradicting the existence of an overtwisted disk. More precisely we will inductively show that either (1) the slope of the dividing curves on $T_i$ is greater than or equal to $\mathop{\mathrm{tb}}(L)$, in which case $B_i$ contains a convex torus $T_i'$ contact isotopic to $T_0$, or (2) the slope of the dividing curves on $T_i$ is less than or equal to $\mathop{\mathrm{tb}}(L)$, in which case the slope of the dividing curves is in $(q/p, \mathop{\mathrm{tb}}(L)]$ and $B_i$ contains a convex torus $T_i''$ contact isotopic to $\partial S'$.
We first observe that our inductive hypothesis implies that the complement of $T_i$ is tight. Indeed, in Case (1) we know that the complement of $T_0$, and hence $T_i'$, in $M$ is tight and $C_i$ is contained in this tight complement so it is tight. Let $S_i$ be the solid torus bounded by $T_i'$ in $M$ (and containing $\partial X$). The thickened torus $A_i$ cobounded by $T_i$ and $T_i'$ is tight (since it is in the complement of $T_i'$ which is contact isotopic to $T_0$) and so $B_i$ is obtained from $S_i$ (which has the unique tight structure on a solid torus with longitudinal dividing curves) by gluing $A_i$ and removing a neighborhood of $L^+_{p,q}$, which is of course tight. Thus the complement of $T_i$ is tight in this case. In Case (2) let $S'_i$ be the solid torus bounded by $T''_i$. Since $T''_i$ is contact isotopic to $\partial S'$ we know $S'_i$ is a tight solid torus and its complement in $M$ is also tight. Since $C_i$ is contained in this complement, we know the contact structure on $C_i$ is tight. The thickened torus $A_i$ between $T''_i$ and $T_i$ in $M$ is tight since it is in the complement of $S'_i$, and as above we see that the solid torus bounded by $T_i$ is tight and $B_i$ is a subset of that.
Suppose $T_i$ satisfies Case (1) of the inductive hypothesis. If the bypass used to go from $T_i$ to $T_{i+1}$ is attached from the outside of $S_i$ then clearly $T_{i+1}$ also satisfies Case (1) of the inductive hypothesis. If the bypass is attached from the inside, then an argument just like in the proof of Lemma [Lemma 40](#claim1){reference-type="ref" reference="claim1"} shows that $T_{i+1}$ also satisfies the inductive hypothesis if the slope $r_i$ of the dividing curves on $T_i$ is in $\mathbb{Q}\setminus \mathbb{Z}$. If $r_i=\mathop{\mathrm{tb}}(L)$, then we can consider that $T_i$ satisfies Case (2) of the inductive hypothesis and proceed as in that case. Now assume that $r_i$ is an integer larger than $\mathop{\mathrm{tb}}(L)$. So the solid torus $S_i$ is a standard neighborhood of a Legendrian knot $L'$. Again as in the proof of Lemma [Lemma 40](#claim1){reference-type="ref" reference="claim1"} we see that if the path in the Farey graph describing the contact structure on the thickened torus $A_i$ between $T_i$ and $T'_i$ contains both signs, then $T_{i+1}$ will satisfy the inductive hypothesis. So we assume that all the signs are the same, say $+$, and the bypass used to get $T_{i+1}$ from $T_i$ is a negative bypass. Then we know inside of $S_{i+1}$ there is a torus $T$ with dividing slope $r_i-1$ and the basic slice between $T_i$ and $T$ is negative. The solid torus $S''$ bounded by $T$ is a standard neighborhood of a Legendrian $L''$ and $L''=S_-(L')$. Moreover, we know $L$ is obtained from $L'$ by some number of positive stabilizations. Inside $S''$ we see a convex torus $T'$ with dividing slope $q/p$ and $L^+_{p,q}$ is a Legendrian divide on this torus. Thus between $T$ and $T'$ there is a convex torus $T''$ with two dividing curves of slope $\mathop{\mathrm{tb}}(L)$. The solid torus bounded by $T''$ is a neighborhood of a Legendrian knot $L'''$ and since $L^+_{p,q}$ is contained in this solid torus we see that it is also a standard cable of $L'''$, but $L'''$ is obtained from $L'$ by at least one negative stabilization. Thus the rotation numbers of $L'''$ and $L$ are different. This will mean the rotation numbers of their standard $(p,q)$-cables will be different. This contradiction implies that the sign of the bypass going from $T_i$ to $T_{i+1}$ must agree with the sign of the basic slices in $A_i$; then, again as in the proof of Lemma [Lemma 40](#claim1){reference-type="ref" reference="claim1"}, we see that $T_{i+1}$ satisfies the inductive hypothesis.
We now suppose that $T_i$ satisfies Case (2) of the inductive hypothesis. If the slope $r_i$ of $T_i$ is $\mathop{\mathrm{tb}}(L)$ then notice $T_i$ must be contact isotopic to $T_0$ (since $T''_i$ is contact isotopic to $\partial S'$), and so if the bypass attached to $T_i$ to get $T_{i+1}$ is attached from the outside then we will clearly satisfy Case (1) of the hypothesis. If the bypass is attached from the inside then we can proceed as below. If $r_i$ is not $\mathop{\mathrm{tb}}(L)$ and the bypass is attached from the outside of $S_i$, then $T_{i+1}$ will clearly satisfy the inductive hypothesis since attaching the bypass will result in the dividing curves on $T_{i+1}$ having slope in $(r_i, \mathop{\mathrm{tb}}(L)]$.
We now consider a bypass attached from the inside, and notice that the slope $r_{i+1}$ of the dividing curves on $T_{i+1}$ must be in $(q/p,r_i]$. To see this suppose it were not. Let $S_{i}$ be the solid torus $T_{i}$ bounds in $M$. If $r_{i+1}$ were less than $q/p$ then the solid torus $S_{i+1}$ would contain a neighborhood of $L^+_{p,q}$ and hence in $S_{i+1}$ we could find a convex torus $T'$ containing $L^+_{p,q}$, and the slope of the dividing curves on $T'$ would have to be $q/p$; but this implies that the contact structure on $S_{i+1}$ is overtwisted and hence the one on $S_i$ is too, while we argued above that since $T_i$ satisfies the inductive hypothesis, $S_i$ is tight. Thus we know $r_{i+1}\in(q/p,r_i]$.
Assume for now that $T_i$ has only two dividing curves. Since the contact structure on $S_i$ is tight, we know it is given by a signed minimal path in the Farey graph from $\mathop{\mathrm{tb}}(L)-1$ to $r_i$. Said another way, if $A_i$ is the thickened torus cobounded by $T_i$ and $T''_i$ in $M$, then that path describes the tight contact structure on $A_i$ and there is a unique contact structure on the solid torus bounded by $T''_i$. There is a unique convex torus (up to contactomorphism) in $A_i$ with dividing slope $q/p$ and two dividing curves and $L^+_{p,q}$ is one of the Legendrian divides. Now $T_{i+1}$ is a convex torus in $S_i$. If $T_{i+1}$ has more than two dividing curves then it is contained in an $I$-invariant neighborhood of $T_i$ and hence satisfies the inductive hypothesis. If $T_{i+1}$ has two dividing curves then it splits $A_i$ into two thickened tori and each has a contact structure described by paths in the Farey graph that when combined can be shortened to the one describing the contact structure on $A_i$. Thus inside of $S_{i+1}$ we can also find the unique convex torus $T$ with two dividing curves and slope $q/p$. This will be the same torus (up to contactomorphism) as the one in $S_i$ and hence in the solid torus bounded by $T$, there is a torus contact isotopic to $\partial S'$ and hence $T_{i+1}$ satisfies the inductive hypothesis.
Now if $T_i$ has more than two dividing curves then inside of the solid torus $S_i$ bounded by $T_i$ there is a convex torus $T$ with the same dividing slope and just two dividing curves, and $T''_i$ is contained in the solid torus bounded by $T$. The slope of the dividing curves on $T_{i+1}$ will be the same as the slope of those on $T_i$ (though the number of dividing curves can change by two) and in $S_{i+1}$ there is a convex torus contact isotopic to $T$. Thus $T_{i+1}$ satisfies the inductive hypothesis. ◻
We end this section by seeing how the standard positive and standard negative cables relate.
*Proof of Proposition [Proposition 19](#relation){reference-type="ref" reference="relation"}.* This is a restatement of Lemma [Lemma 34](#seestab){reference-type="ref" reference="seestab"}. ◻
## Cables of transverse knots
In this section we prove Theorem [Theorem 21](#transversecable){reference-type="ref" reference="transversecable"} that shows the standard cable of a transverse knot is non-loose if and only if the original knot is non-loose.
*Proof of Theorem [Theorem 21](#transversecable){reference-type="ref" reference="transversecable"}.* Assume that $K$ is a non-loose transverse knot and let $L_n$ be its Legendrian approximations as discussed when we defined the standard cable of $K$ just before the statement of Theorem [Theorem 21](#transversecable){reference-type="ref" reference="transversecable"}. We know that since $K$ is non-loose, all the $L_n$ are too, see [@Etnyre13 Proposition 1.2]. Notice that by Theorem [Theorem 17](#thm:poscable){reference-type="ref" reference="thm:poscable"} we know that the $(L_n)_{p,q}$ are all non-loose. If $K_{p,q}$ were loose then its Legendrian approximations with sufficiently negative Thurston-Bennequin invariant would also be loose, but for very negative $n$ the $(L_n)_{p,q}$ are Legendrian approximations of $K_{p,q}$ with arbitrarily negative Thurston-Bennequin invariant that are all non-loose. Thus $K_{p,q}$ is non-loose.
If $K$ is loose then all of its Legendrian approximations $L_n$ with sufficiently negative $n$ are loose. Thus $(L_n)_{p,q}$ are all loose and so their transverse push-off is also loose, see [@Etnyre13]. ◻
## Transversely non-loose and non-simple knots
In this section we prove Theorem [Theorem 22](#thm:transnonsimple){reference-type="ref" reference="thm:transnonsimple"} that says in the contact manifold $(S^3,\xi_2)$ the $(2n+1,2)$-cable of the left handed trefoil has at least $n$ non-loose transverse representatives with $\mathop{\mathrm{sl}}=2n+3$.
Before giving the proof, we recall some results from [@EtnyreMinMukherjee22pre]. Let $C$ be the complement of a neighborhood of the left-handed trefoil in $S^3$. In Lemma 6.2 of [@EtnyreMinMukherjee22pre] it was shown that there exist tight contact structures $\xi_n, n\in \mathbb{N},$ on $C$ such that $\partial C$ is convex with two dividing curves of slope $1/n$, with the property that any convex torus parallel to $\partial C$ has dividing slope $1/n$. For $n=1$ we can glue in a solid torus $N$ with a tight contact structure (which is unique since the dividing curves are longitudinal) to get a contact structure on $S^3$, and the solid torus is a standard neighborhood of a Legendrian knot $L$. Theorem 7.22 in [@EtnyreMinMukherjee22pre] shows that this contact structure is $\xi_2$.
Now for $n>1$ we can glue a solid torus to $(C,\xi_n)$, but now there are two choices for the solid torus $N_n^\pm$ (depending on the sign of the bypass on the thickened torus with slopes $0$ and $1/n$). According to Lemma 6.2 of [@EtnyreMinMukherjee22pre] all these tori are "non-thickenable", meaning that any solid torus containing them (and smoothly isotopic to them) has dividing slope $1/n$. Repeating the proof of Theorem 1.12 in [@EtnyreLaFountainTosun12] shows that if $S$ is any solid torus in $N_n^\pm$ that is smoothly isotopic to $N_n^\pm$ and has convex boundary with dividing slope in $(0,1/n)$ then $S$ cannot be thickened to a solid torus with boundary having dividing slope larger than $1/n$. Moreover, if $S$ is a solid torus in $N_n^\pm$ as above but with dividing slope equal to $0$ then it will thicken to $N$. Thus all the contact structures on $S^3$ constructed from the $\xi_n$ by gluing in $N_n^\pm$ are $\xi_2$, so all the non-thickenable tori are in one fixed contact structure.
*Proof of Theorem [Theorem 22](#thm:transnonsimple){reference-type="ref" reference="thm:transnonsimple"}.* First fix $n \in \mathbb{N}$. Then inside any $N_k^-$ in $(S^3,\xi_2)$ with $k\leq n$, there is a convex torus parallel to the boundary with two dividing curves of slope $2/(2n+1)$. Notice from above, this torus bounds a solid torus that cannot thicken past $N_k^-$. Let $L_k$ be a Legendrian divide on this torus. The contact framing of $L_k$ agrees with the torus framing so it has $\mathop{\mathrm{tb}}(L_k)=4n+2$. Using Lemma [Lemma 36](#lemma:rot){reference-type="ref" reference="lemma:rot"} we see that $\mathop{\mathrm{rot}}(L_k)=2n-1$. In the proof of Theorem 1.7 in [@EtnyreLaFountainTosun12], it is shown that the only convex torus on which $S_-^l(L_k)$ can sit is $\partial N_k^-$ and a simple state transition argument given there shows that all the $L_k$ are distinct as are all their negative stabilizations. Moreover, this shows that the transverse push-off of $L_k$, which we denote $K_k$, is a non-loose transverse knot and none of the $K_k$ are transversely isotopic. Also, we know $\mathop{\mathrm{sl}}(K_k)=\mathop{\mathrm{tb}}(L_k)-\mathop{\mathrm{rot}}(L_k)=2n+3$.
For any $k \in \mathbb{N}$, all $N_k^-$ are universally tight and the above state transition argument tells us that any torus that $L_k$ sits on should have a universally tight neighborhood. According to [@ChakrabortyEtnyreMin20Pre Theorem 1.10] (the theorem is about tight contact manifolds, but a similar argument works for overtwisted contact manifolds too, see also [@McCullough18]), the necessary and sufficient condition for $L_k$ to destabilize is that it sits on a mixed torus, which has a virtually overtwisted neighborhood. This implies that $L_k$ does not destabilize and the complement of a standard neighborhood of $L_k$ does not contain (half) Giroux torsion. Since $K_k$ is a push-off of $L_k$, the complement of $K_k$ also does not contain (half) Giroux torsion. ◻
Kenneth Baker and John Etnyre. Rational linking and contact geometry. In *Perspectives in analysis, geometry, and topology*, volume 296 of *Progr. Math.*, pages 19--37. Birkhäuser/Springer, New York, 2012.
Apratim Chakraborty, John B. Etnyre, and Hyunki Min. Cabling Legendrian and transverse knots, 2020.
Vincent Colin. Chirurgies d'indice un et isotopies de sphères dans les variétés de contact tendues. , 324(6):659--663, 1997.
Yakov Eliashberg and Maia Fraser. Classification of topologically trivial Legendrian knots. In *Geometry, topology, and dynamics (Montreal, PQ, 1995)*, volume 15 of *CRM Proc. Lecture Notes*, pages 17--51. Amer. Math. Soc., Providence, RI, 1998.
Yakov Eliashberg and Maia Fraser. Topologically trivial Legendrian knots. , 7(2):77--127, 2009.
Yakov M. Eliashberg and William P. Thurston. Contact structures and foliations on $3$-manifolds. , 20(1):19--35, 1996.
John B. Etnyre. On knots in overtwisted contact structures. , 4(3):229--264, 2013.
John B. Etnyre and Ko Honda. Knots and contact geometry. I. Torus knots and the figure eight knot. , 1(1):63--120, 2001.
John B. Etnyre and Ko Honda. Cabling and transverse simplicity. , pages 1305--1333, 2005.
John B. Etnyre, Douglas J. LaFountain, and Bülent Tosun. Legendrian and transverse cables of positive torus knots. , 16(3):1639--1689, 2012.
John B. Etnyre, Hyunki Min, and Anubhav Mukherjee. Non-loose torus knots, 2022. [`arXiv:2206.14848`](https://arxiv.org/abs/2206.14848).
Hansjörg Geiges. , volume 109 of *Cambridge Studies in Advanced Mathematics*. Cambridge University Press, Cambridge, 2008.
Hansjörg Geiges and Sinem Onaran. Legendrian rational unknots in lens spaces. , 13(1):17--50, 2015.
Hansjörg Geiges and Sinem Onaran. Exceptional Legendrian torus knots. , 2020(22):8786--8817, 2020.
Hansjörg Geiges and Sinem Onaran. Legendrian Hopf links. , 71(4):1419--1459, 2020.
Paolo Ghiggini. Infinitely many universally tight contact manifolds with trivial Ozsváth-Szabó contact invariants. , 10:335--357, 2006.
Paolo Ghiggini. Ozsváth-Szabó invariants and fillability of contact structures. , 253(1):159--175, 2006.
Emmanuel Giroux. Structures de contact en dimension trois et bifurcations des feuilletages de surfaces. , 141(3):615--689, 2000.
Robert E. Gompf. Handlebody construction of Stein surfaces. , 148(2):619--693, 1998.
Allen Hatcher. Notes on basic 3-manifold topology.
Ko Honda. On the classification of tight contact structures. I. , 4:309--368 (electronic), 2000.
Ko Honda. Gluing tight contact structures. , 115(3):435--478, 2002.
Ko Honda, William H. Kazez, and Gordana Matić. Tight contact structures and taut foliations. , 4:219--242, 2000.
Ko Honda, William H. Kazez, and Gordana Matić. Convex decomposition theory. , 2002(2):55--88, 2002.
Yutaka Kanda. On the Thurston-Bennequin invariant of Legendrian knots and nonexactness of Bennequin's inequality. , 133(2):227--242, 1998.
Paolo Lisca, Peter Ozsváth, András I. Stipsicz, and Zoltán Szabó. Heegaard Floer invariants of Legendrian knots in contact three-manifolds. , 11(6):1307--1363, 2009.
Paolo Lisca and András I. Stipsicz. On the existence of tight contact structures on Seifert fibered 3-manifolds. , 148(2):175--209, 2009.
Irena Matkovič. Non-loose negative torus knots. [`arXiv:2001.07681`](https://arxiv.org/abs/2001.07681).
Andrew McCullough. Legendrian large cables and new phenomenon for non-uniformly thick knots, 2018.
[^1]: These are Legendrian and transverse knots in overtwisted contact manifolds for which the contact structure restricted to their complement is tight.
[^2]: Recall there is an integer's worth of overtwisted contact structures on $S^3$, denoted by $\xi_n$, where $\xi_0$ is in the same homotopy class of plane fields as the tight contact structure.
---
author:
- "Xiaojun Huang [^1] and Wanke Yin[^2]"
title: "**Regular types and order of vanishing along a set of non-integrable vector fields**"
---
Dedicated to the memory of Professor Zhi-Hua Chen
**Abstract**: This paper has two parts. We first survey recent efforts on the Bloom conjecture [@Bl2], which still remains open in the case of complex dimension at least 4. Bloom's conjecture concerns the equivalence of three regular types. There is a more general and important notion, called the singular D'Angelo type (or simply, the D'Angelo type) [@DA1]. While the finite D'Angelo type condition is the right one for the study of local subelliptic estimates for Kohn's $\overline{\partial}$-Neumann problem, the regular types are important as their finiteness gives global regularity up to the boundary for solutions of Kohn's $\overline{\partial}$-Neumann problem [@Ca2] [@Zai].
In the second part of the paper, we provide a proof of a seemingly elementary but truly fundamental property (Theorem [Theorem 11](#cor1.3){reference-type="ref" reference="cor1.3"} or its CR version Theorem [Theorem 14](#cor2.3){reference-type="ref" reference="cor2.3"}) on the vanishing order of smooth functions along a system of non-integrable vector fields. A special case, Corollary [Corollary 15](#cor2.4){reference-type="ref" reference="cor2.4"}, of Theorem [Theorem 14](#cor2.3){reference-type="ref" reference="cor2.3"} had already appeared in a paper of D'Angelo \[pp 105, 3. Remark, [@DA3]\]. A main goal in this part is to provide proofs for these results for the purpose of future reference. Our arguments are based on a deep normalization theorem for a system of non-integrable vector fields due to Helffer-Nourrigat [@HN], as well as its later generalization by Baouendi-Rothschild [@BR].
# Regular types and Bloom conjecture
## Introduction
For a smoothly bounded pseudoconvex domain $D$ in ${\mathbb C}^n$ with $n\ge 2$, many analytic and geometric properties of $D$ are determined by its invariants from the inherited CR structure bundle over $\partial D$. In the 1960s, Kohn [@FK] established the subelliptic estimate for the $\overline{\partial}$-Neumann problem when the Levi form of $\partial D$ is positive definite everywhere, which is called the strong pseudoconvexity of $D$.
To generalize his subelliptic estimate for the $\overline\partial$-Neumann problem to bounded weakly pseudoconvex domains in ${\mathbb C}^2$, Kohn in his fundamental paper [@Kohn1] introduced three different boundary CR invariants for $D\subset {\mathbb C}^2$. These invariants are, respectively, the maximum order of contact at $p\in
\partial D$ with smooth holomorphic curves at $p$, denoted by $a^{(1)}(\partial D, p)$ and called the contact type at $p$, the order of vanishing (increased by two) of the Levi form along the contact bundle, denoted by $c^{(1)}(\partial D, p)$ and called the Levi-form type at $p$, and the length of the iterated Lie brackets of boundary CR vector fields as well as their conjugates needed to recover the boundary contact direction, denoted by $t^{(1)}(\partial D, p)$ and called the vector field commutator type at $p$. Kohn proved that all these invariants are in fact the same, thus simply called the type value of $\partial
D\subset {\mathbb C}^2$ at $p$. When this type value is finite at each point, $D$ is called a smoothly bounded pseudoconvex domain of finite type. Kohn's work in [@Kohn1] shows that the subelliptic estimate for $\overline\partial$-Neumann problems holds for such a domain. Together with that of Greiner [@Gr], Kohn's work also gives the precise information of the subelliptic gain for the $\overline\partial$-Neumann problem for a smoothly bounded finite type weakly pseudoconvex domain in ${\mathbb C}^2$.
Generalizations of Kohn's notion of the boundary finite type condition to higher dimensions have been a subject of extensive investigation in Several Complex Variables over the past half century.
Kohn later introduced a finite type condition in higher dimensions through the subelliptic multipliers [@Kohn2]. The understanding of this type has since revived into a very active field of study through the work of many people including Diederich-Fornaess [@DF], Siu [@Siu1], Kim-Zaitsev [@KZ1] [@KZ2], Nicoara [@Nic], as well as the references therein.
Bloom [@Bl1] and Bloom-Graham [@BG1] extended Kohn's original notion of types in ${\mathbb C}^2$ to all dimensions. Namely, for each integer $s\in [1,n-1]$ and for a smooth real hypersurface $M\subset {\mathbb C}^n$ with $n\ge 2$ and $p\in M$, Bloom-Graham and Bloom defined the vector field commutator type $t^{(s)}(M,p)$, the Levi-form type $c^{(s)}(M,p)$ and the regular contact type $a^{(s)}(M,p)$ of $M$ at $p$, which are called the regular multi-types of $M$ at $p\in M$. Bloom-Graham [@BG1] and Bloom [@Bl1] showed that when $s=n-1$, all these types are again the same, as in the case of $n=2$. However, without pseudoconvexity for $M$, Bloom [@Bl2] showed that when $s\not = n-1$, while the contact type $a^{(s)}$ may be finite, the commutator type $t^{(s)}$ and the Levi-form type $c^{(s)}$ can be infinite in many examples. The commutator type is intrinsically defined only through the Lie brackets of CR or conjugate CR vector fields of $M$ valued in some smooth subbundles of $T^{(1,0)}M\oplus T^{(0,1)}M$. This notion has already been an important object in fields such as Subelliptic Analysis and Partial Differential Equations. It is often referred to as the Hörmander commutator type in the literature. The other two types put more emphasis on complex analysis, being defined through the complex structure of the ambient complex space.
Different from the case of complex dimension two, the regular types are not the right ones for the study of the local subelliptic estimates for the $\overline\partial$-Neumann problem for domains in a complex space of complex dimension at least three. However, the early work of Catlin [@Ca2] and the very recent nice work of Zaitsev [@Zai] showed that finite regular types force the finiteness of Catlin's multitypes and also of Zaitsev's tower multi-types (which are defined similarly to the Levi-form type but only for certain formally integrable smooth subbundles of $T^{(1,0)}$). This then can be used to produce a stratification of $\partial
D$ into submanifolds with controlled holomorphic dimension, which provides Property-P for $\partial D$ with a finite regular type condition. Therefore the finite regular type of $\partial D$ gives the compactness of the Neumann operator of $D$, and thus the global regularity of the $\overline\partial$-Neumann problem follows from the classical work of Kohn-Nirenberg [@Str].
The fundamental work of D'Angelo in [@DA1] studied the notion of singular types, widely called the D'Angelo types, by considering the order of contact with not just smooth complex submanifolds but possibly singular complex analytic varieties. The D'Angelo finite type condition is a singular contact type condition. Its significance lies in its equivalence to the existence of the local subelliptic estimate, after the fundamental work of Kohn [@Kohn2], Diederich-Fornaess [@DF] and Catlin [@Ca1].
The types mentioned above were introduced through different aspects of studies. Revealing the connections among them always results in a deeper understanding of the subject. For instance, proving that the Kohn multiplier ideal type is equivalent to the finite D'Angelo type would provide a new and more direct solution of the $\overline\partial$-Neumann problem.
## Bloom's conjecture and D'Angelo's conjecture {#subb1}
Let $M\subset \mathbb{C}^n$ be a smooth real hypersurface with $p\in
M$. Then $T^{1,0}M$ is a smooth vector bundle over $M$ of complex dimension $(n-1)$. A smooth section $L$ of $T^{1,0}M$ is called a smooth vector field of type $(1,0)$ or a CR vector field along $M$, and its complex conjugate is called a smooth vector field of type $(0,1)$ or a conjugate CR vector field along $M$. Let $\rho$ be a defining function of $M$, namely, $\rho\in C^\infty(U)$ with $U$ an open neighborhood of $M\subset\mathbb{C}^n$ and $U\cap
M=\{\rho=0\}\cap U$, $d\rho|_{U\cap M}\neq 0$. Denote by ${\mathcal
X}_{{\mathbb C}}(M)$ the $C^\infty(M)$-module of all complex valued smooth vector fields tangent to $M$. Write $\theta=-\frac{1}{2}(\partial\rho-
\overline{\partial}\rho)$, called a pure imaginary contact form along $M$. Write $T= 2i\hbox{Im}\left(\sum_{j=1}^{n}\frac{\partial
\rho}{\partial\overline{z_j}}\frac{\partial}{\partial z_j}\right),$ called a pure imaginary contact vector field of $M$. The following holds trivially: $$\langle \theta,T\rangle=|\partial\rho|^2> 0,\ \langle L,\theta\rangle=0\
\hbox{ for any } \hbox{CR vector field }
\ L \hbox{ along }M.$$ For two tangent vector fields $X, Y\in {\mathcal X}_{{\mathbb C}}(M)$, define the Levi form $$\lambda(X,Y)=\langle\theta, [X,\overline{Y}]\rangle.$$ By the Cartan lemma, $\lambda(X,Y)=2\langle d\theta,
X\wedge\overline{Y}\rangle=d\theta(X,\overline{Y}).$ After replacing $\rho$ by $-\rho$, when needed, if we can make $\lambda(L, L)$ positive definite along $M$ for any vector field $L\not=0$ of type $(1,0)$, we say $M$ is strongly pseudoconvex. If we can only make ${\lambda}$ semi-positive definite, we call $M$ a weakly pseudoconvex hypersurface. When $L_1=\sum_{j=1}^{n}\xi_j\frac{\partial}{\partial z_j},
L_2=\sum_{j=1}^{n}\eta_j\frac{\partial}{\partial z_j}$, we then have $$\label{004}
{\lambda}(L_1,L_2)=\sum_{j,\ell=1}^{n}\frac{\partial^2\rho}{\partial z_j \partial\overline{z_\ell}}\xi_j\overline{\eta_\ell}.$$ The Levi form is a Hermitian form on $T^{(1,0)}M$.
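As a minimal illustration of ([\[004\]](#004){reference-type="ref" reference="004"}) (the hypersurface and vector field here are chosen by us purely for illustration), consider $M=\{\rho=0\}\subset{\mathbb C}^2$ with $\rho=w+\overline{w}+|z_1|^2$ and the CR vector field $L=\frac{\partial}{\partial z_1}-\overline{z_1}\frac{\partial}{\partial w}$, which satisfies $L\rho\equiv 0$. Then $$\lambda(L,L)=\frac{\partial^2\rho}{\partial z_1\partial\overline{z_1}}=1>0,$$ so this $M$ is strongly pseudoconvex.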
For any $1\leq s\leq n-1$, let $B$ be a smooth complex vector subbundle of $T^{1,0}M$ of complex dimension $s$. Let $\mathcal{M}_1(B)$ be the $C^\infty(M)$-submodule of ${\mathcal
X}_{\mathbb C}(M)$ spanned by the smooth $(1,0)$ vector fields $L$ with $L|_q\in B|_q$ for each $q\in M$, together with their complex conjugates. For $\mu\geq 1$, we let $\mathcal{M}_\mu(B)$ denote the $C^\infty(M)$-submodule spanned by commutators of length less than or equal to $\mu$ of vector fields from $\mathcal{M}_1(B)$ (including $\mathcal{M}_1(B)$). Here, a commutator of length $\mu\ge
2$ of vector fields in $\mathcal{M}_1(B)$ is a vector field of the following form: $[Y_{\mu},[Y_{\mu-1},\cdots,[Y_2,Y_1]\cdots]]$ with $Y_j\in \mathcal{M}_1(B)$. Define $t^{(s)}(B,p)=m$ if $\langle F,\partial
\rho\rangle(p)=0$ for any $F\in \mathcal{M}_{m-1}(B)$ but $\langle
G,\partial\rho\rangle(p)\neq 0$ for a certain $G\in \mathcal{M}_{m}(B)$. Namely, $m$ is the smallest number such that $\mathcal{M}_m(B)|_p\not\subset T^{(1,0)}_pM\oplus T^{(0,1)}_pM.$ If such an $m$ does not exist, we set $t^{(s)}(B,p)=\infty.$ When $B$ has complex dimension one with $B$ being spanned by a $(1,0)$-type vector field $L$ near $p$, we also write $t^{(1)}(B,p)=t_L(M, p).$ $t^{(s)}(B,p)$ is called the vector field commutator type of $B$ at $p$, or the commutator type of $L$ at $p$ when $B_q=\hbox{span}\{L|_q\}$ for $q$ near $p$. Define $$\begin{split}
t^{(s)}(M,p)=\sup\limits_{B}\{t^{(s)}(B,p)|\ B\ \text{is an
$s$-dimensional subbundle of\ }\ T^{1,0}M\}.
\end{split}$$
$t^{(s)}(M,p)$ is called the $s^{th}$-vector field commutator type of $M$ at $p$, or simply the $s^{th}$ commutator type of $M$ at $p$.
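To illustrate the definition in the simplest case (the example is ours and is only meant as an illustration), take $M=\{\rho=0\}\subset{\mathbb C}^2$ with $\rho=w+\overline{w}+|z_1|^2$ as above, $p=0$, $s=1$ and $B=T^{1,0}M$, which is spanned near $0$ by $L=\frac{\partial}{\partial z_1}-\overline{z_1}\frac{\partial}{\partial w}$. Then $\langle L,\partial\rho\rangle=\langle \overline{L},\partial\rho\rangle\equiv 0$, while $$[L,\overline{L}]=\frac{\partial}{\partial w}-\frac{\partial}{\partial\overline{w}},\qquad \langle [L,\overline{L}],\partial\rho\rangle(0)=1\neq 0,$$ so $t^{(1)}(M,0)=2$, as expected at a strongly pseudoconvex point.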
Write $\Gamma_{\infty}(B)$ for the set of smooth sections of $B$. We define $c^{(s)}(B,p)=m$ if for any $m-3$ vector fields $F_1,\cdots,F_{m-3}$ of $\mathcal{M}_1(B)$ and any $L\in
\Gamma_{\infty}(B)$ with $L_p\not =0$, it holds that $$F_{m-3}\cdots F_{1}\big({\lambda}(L,L)\big)(p)=0;$$ and for a certain choice of $m-2$ vector fields $G_1,\cdots,G_{m-2}$ of $\mathcal{M}_1(B)$ and a certain $L\in \Gamma_{\infty}(B)$ with $L_p\not =0$, we have $$G_{m-2}\cdots G_{1}\big({\lambda}(L,L)\big)(p)\neq 0.$$ When such an $m$ does not exist, we then set $c^{(s)}(B,p)=\infty$. We define $$\begin{split}
c^{(s)}(M,p)=\sup\limits_{B}\{c^{(s)}(B,p): \ B\ \text{ is an $s$-dimensional subbundle of} \ T^{1,0}M
\}.
\end{split}$$ We call $c^{(s)}(B,p)$ the Levi-form type of $B$ at $p$ and $c^{(s)}(M,p)$ the $s$-Levi form type of $M$ at $p$. When such an $m$ does not exist, we define $c^{(s)}(M,p)=\infty$. For an $L\in
\Gamma_{\infty}(B)$ with $L_p\not =0$, we similarly have the notion of $c_{L}(M,p)$.
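For a model computation of the Levi-form type (again an illustrative example of ours, not taken from the discussion above), let $k\ge 2$ and consider $M_k=\{\rho_k=0\}\subset{\mathbb C}^2$ with $\rho_k=w+\overline{w}+|z_1|^{2k}$, $p=0$, and the CR vector field $L=\frac{\partial}{\partial z_1}-kz_1^{k-1}\overline{z_1}^{k}\frac{\partial}{\partial w}$, so that $L\rho_k\equiv 0$. By ([\[004\]](#004){reference-type="ref" reference="004"}), $$\lambda(L,L)=k^2|z_1|^{2(k-1)},$$ and one checks that any $2k-3$ derivatives of $\lambda(L,L)$ along vector fields from $\mathcal{M}_1(T^{1,0}M_k)$ vanish at $0$, while $L^{k-1}\overline{L}^{k-1}\big(\lambda(L,L)\big)(0)=k^2\big((k-1)!\big)^2\neq 0$. Hence $c_L(M_k,0)=2k$, and in fact $c^{(1)}(M_k,0)=2k$.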
We finally define the $s$-regular contact type $a^{(s)}(M,p)$ as follows: $$\begin{split}
a^{(s)}(M,p)=\sup\limits_{X}\big\{\ell|\ &\exists \text{ an $s$-dimensional
complex submanifold}\ X\\
&\text{whose order of contact with $M$ at $p$ is $\ell$}\big\}.
\end{split}$$
Here we remark that the order of contact of $X$ with $M$ at $p$ is defined as the order of vanishing of $\rho|_X$ at $p$.
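Continuing with the model hypersurface $M_k=\{w+\overline{w}+|z_1|^{2k}=0\}\subset{\mathbb C}^2$ from our illustration above, the smooth holomorphic curve $X=\{(\xi,0):\xi\in{\mathbb C}\}$ satisfies $$\rho_k|_X=|\xi|^{2k},$$ which vanishes to order $2k$ at $0$; hence $a^{(1)}(M_k,0)\ge 2k$. In fact $a^{(1)}(M_k,0)=2k$, in agreement with the equality of the three types for hypersurfaces in ${\mathbb C}^2$ recalled in the next paragraph.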
In [@Kohn1], when $n=2$, Kohn showed that $t^{(1)}(M,p)=c^{(1)}(M,p)=a^{(1)}(M,p)$. Bloom-Graham [@BG1] and Bloom [@Bl1] proved that for any smooth real hypersurface $M\subset{\mathbb C}^n$ with $p\in M$, $$t^{(n-1)}(M,p)=c^{(n-1)}(M,p)=a^{(n-1)}(M,p).$$ And for any $1\leq s\leq n-2$, Bloom in [@Bl2] observed that $a^{(s)}(M,p)\leq c^{(s)}(M,p)$ and $a^{(s)}(M,p)\leq
t^{(s)}(M,p)$. For these results to hold there is no need to assume the pseudoconvexity of $M$. However, the following example of Bloom shows that for $n\geq 3$, when $M$ is not pseudoconvex, it may happen that $a^{(s)}(M,p)< c^{(s)}(M,p)$ and $a^{(s)}(M,p)<
t^{(s)}(M,p)$ for $1\leq s\leq n-2$.
**Example 1** (Bloom, [@Bl2]). *Let $\rho=2\text{Re}(w)+(z_2+\overline{z_2}+|z_1|^2)^2$ and let $M=\{(z_1,z_2,w)\in \mathbb{C}^3|\ \rho=0\}$. Let $p=0$. Then $a^{(1)}(M,0)=4$ but $c^{(1)}(M,0)=t^{(1)}(M,0)=\infty$ .*
To see the $M$ in Example [Example 1](#Bl-example){reference-type="ref" reference="Bl-example"} is not pseudoconvex near $p=0$, we notice that a real normal direction of $M$ near $0$ is given by $\frac{\partial}{\partial u}$ with $u=\hbox{Re}(w)$. Let $\psi(\xi):
\Delta\rightarrow{\mathbb C}^3$ be a holomorphic disk that is smooth up to the boundary such that $\psi=(\psi^*,0)$ with $\psi^*$ being attached to the Heisenberg hypersurface in ${\mathbb C}^2$ defined by $z_2+\overline{z_2}+|z_1|^2=0$ and $\psi(1)=0$. If $M$ were pseudoconvex near $0$, then by the Hopf lemma for pseudoconvex domains [@BER], $\psi$ would have a non-zero derivative along the $u$-direction at the boundary point $1$; but the $w$-component of $\psi$ vanishes identically, which is a contradiction. Let $L=\frac{\partial}{\partial z_1}-\overline{z_1}\frac{\partial}{\partial z_2}.$ Then $L$ is a CR vector field along $M$. Since, no matter how many times we perform Lie brackets of $L$ and $\overline{L}$, we get a vector field without components in $\frac{\partial}{\partial w}$ and $\frac{\partial}{\partial\overline{w}}$, we see that $t^{(1)}_L(M,p)=\infty$. One also computes by ([\[004\]](#004){reference-type="ref" reference="004"}) that $${\lambda}(L,L)=2(z_2+\overline{z_2})+2|z_1|^2.$$ Thus $L({\lambda}(L,L))=\overline{L}({\lambda}(L,L))\equiv 0.$ Hence, $c_L(M,p)=\infty$. (One can also conclude the non-pseudoconvexity of $M$ near $0$ by ([\[004\]](#004){reference-type="ref" reference="004"}).) Thus $t^{(1)}(M,0)=c^{(1)}(M,0)=\infty.$ $a^{(1)}(M,0)$ is at least $4$, attained by the smooth holomorphic curve $(\xi,0,0)$. Let $\phi(\xi)=(\phi_1(\xi),\phi_2(\xi),\phi_3(\xi))$ be a smooth holomorphic curve with maximum order of contact with $M$ at $0$. Assume $a^{(1)}(M,0)>4$. Then the vanishing order of $$\label{005}
\begin{split}h&=2\hbox{Re}(\phi_3)+(2\hbox{Re}(\phi_2)+|\phi_1|^2)^2=2\hbox{Re}(\phi_3)
+4\hbox{Re}^2(\phi_2)+
|\phi_1|^4+4\hbox{Re}(\phi_2)|\phi_1|^2\\
&=2\hbox{Re}(\phi_3+\phi_2^2)+2|\phi_2|^2+
|\phi_1|^4+2(\phi_2+\overline{\phi_2})|\phi_1|^2
\end{split}$$ is at least 5 at $0$. Hence $\phi_3=-\phi_2^2 \ \
\hbox{mod}(|\xi|^5)$, $\phi_2=0\ \hbox{mod}(|\xi|^2)$ and $$h=2|\phi_2|^2+2(\phi_2+\overline{\phi_2})|\phi_1|^2+|\phi_1|^4=0\ \hbox{mod}(|\xi|^5).$$ Since $\phi$ is a smooth (regular) curve and $\phi_2,\phi_3$ vanish to order at least $2$ at $0$, it follows that $\phi_1=a_1\xi+O(\xi^2)$ with $a_1\not =0$, and $\phi_2=a_2\xi^k+O(|\xi|^{k+1})$ with $k\ge 2$ and $a_2\not =0$ (if $\phi_2\equiv 0$, then $h=|\phi_1|^4$ already has vanishing order $4$ at $0$, a contradiction). Comparing coefficients of degree $4$ in $\xi$, we see a contradiction. Hence $a^{(1)}(M,0)=4$, which is achieved by the smooth holomorphic curve $\phi(\xi)=(\xi,0,0)$.
With the pseudoconvexity assumption of $M$, Bloom in [@Bl2] showed that when $M\subset {\mathbb C}^3,$ $a^{(1)}(M,p)=c^{(1)}(M,p)$. Motivated by this result, Bloom in 1981 [@Bl2] formulated the following famous conjecture:
**Conjecture 2**. *Let $M\subset \mathbb{C}^n$ be a pseudoconvex real hypersurface with $n\geq 3$. Then for any $1\leq s\leq n-2$ and $p\in M$, $$\begin{split}
t^{(s)}(M,p)=c^{(s)}(M,p)=a^{(s)}(M,p).
\end{split}$$*
In a related work, D'Angelo in 1986 conjectured that under the pseudoconvexity assumption on $M$, one should have $t^{(1)}_L(M,p)=c^{(1)}_L(M,p)$.
More generally, we formulate the following generalized D'Angelo Conjecture:
**Conjecture 3**. *Let $M\subset \mathbb{C}^n$ be a pseudoconvex real hypersurface with $n\geq 3$. Then for any $1\leq s\leq n-2$, $p\in M$ and a smooth complex vector subbundle $B$ of $T^{(1,0)}M$, it holds that $$t^{(s)}(B,p)=c^{(s)}(B,p).$$*
If confirmed to be true, the generalized D'Angelo conjecture would imply $t^{(s)}(M,p)=c^{(s)}(M,p)$.
40 years after Bloom formulated his conjecture, it was completely settled in the case of complex dimension three in [@HY]. More generally, the following theorem was proved in [@HY]:
**Theorem 4**. *[@HY] Let $M\subset \mathbb{C}^n$ be a smooth pseudoconvex real hypersurface with $n\geq 3$. Then for $s= n-2$ and any $p\in M$, it holds that $$\begin{split}
t^{(n-2)}(M,p)=a^{(n-2)}(M,p)=c^{(n-2)}(M,p).
\end{split}$$*
In particular, we answered affirmatively the Bloom conjecture in the case of complex dimension three (namely, $n=3$):
**Theorem 5**. *[@HY] The Bloom conjecture holds in the case of complex dimension three. Namely, for a smooth pseudoconvex real hypersurface $M\subset \mathbb{C}^3$ and $p\in M$, it holds that $$\begin{split}
t^{(1)}(M,p)=a^{(1)}(M,p)=c^{(1)}(M,p).
\end{split}$$*
The solution to D'Angelo's conjecture in the case of complex dimension three was obtained in [@CYY], fundamentally based on results obtained in [@HY].
**Theorem 6**. *[@CYY] For a smooth pseudoconvex real hypersurface $M\subset \mathbb{C}^3$ and $p\in M$, for any smooth CR vector field $L$ along $M$ with $L|_p\not =0$, it holds that $$t^{(1)}_L(M,p)=c^{(1)}_L(M,p).$$*
The following theorem of D'Angelo also provides a partial solution to D'Angelo's original conjecture:
**Theorem 7**. *[@DA2][\[Da10\]]{#Da10 label="Da10"} Let $M\subset{\mathbb C}^n$ be a smooth pseudoconvex real hypersurface and $p\in M$.*
*(1). For any smooth CR vector field $L$ along $M$ with $L|_p\not =0$, it holds that $c_L(M,p)\le \max\{t_L(M,p),\ 2t_L(M,p)-6\}.$*
*(2). If either $c_L(M,p)=4$ or $t_L(M,p)=4$, then both are $4$.*
A recent paper of Huang-Yin [@HY3] generalizes Theorem [\[Da10\]](#Da10){reference-type="ref" reference="Da10"}(2) to the case when either $c_L(M,p)=6$ or $t_L(M,p)=6$.
The following examples show that the conclusion of the D'Angelo Conjecture may not hold when $M$ is not pseudoconvex.
**Example 8**. *Let $M$ be a real hypersurface in $\mathbb{C}^3$ with a defining function $$\rho:=-(w+\overline{w})+|z_1|^4+z_1\overline{z_2}+z_2\overline{z_1}.$$ Suppose $L$ is the tangent vector field of type $(1,0)$ defined by $$L=\frac{\partial}{\partial z_1}-|z_1|^2\frac{\partial}{\partial
z_2}+(\overline{z_2}+z_1\overline{z_1}^2)\frac{\partial}{\partial w}.$$ Then $t_L(M,0)=\infty$ but $c_L(M,0)=4$.*
First, a direct computation shows that $$L\rho=2z_1\overline{z_1}^2+\overline{z_2}-|z_1|^2\overline{z_1}-(\overline{z_2}+z_1\overline{z_1}^2)\equiv 0.$$ Hence $L$ is indeed a CR vector field along $M$. Notice that $$\begin{split}
\partial\overline{\partial}\rho&=4|z_1|^2dz_1\wedge d\overline{z_1}+dz_1\wedge
d\overline{z_2}+dz_2\wedge d\overline{z_1},\\
{\lambda}(L,{L})&=4|z_1|^2-|z_1|^2-|z_1|^2=2|z_1|^2.
\end{split}$$ Hence $$L\overline{L}{\lambda}(L,{L})=2\neq 0.$$ Hence, we conclude that $c_L(M,0)=4$. We next compute $t_L(M,0)$ as follows: $$\begin{split}
[L,\overline{L}]&=\Big[\frac{\partial}{\partial z_1}-|z_1|^2\frac{\partial}{\partial
z_2}+(\overline{z_2}+z_1\overline{z_1}^2)\frac{\partial}{\partial w},\frac{\partial}{\partial
\overline{z_1}}-|z_1|^2\frac{\partial}{\partial
\overline{z_2}}+({z_2}+\overline{z_1}{z_1}^2)\frac{\partial}{\partial\overline{w}}\Big]\\
&=-\overline{z_1}\frac{\partial}{\partial\overline{z_2}}+2z_1\overline{z_1}\frac{\partial}{\partial
\overline{w}}-|z_1|^2\frac{\partial}{\partial\overline{w}}+{z_1}\frac{\partial}{\partial
{z_2}}-2z_1\overline{z_1}\frac{\partial}{\partial{w}}+|z_1|^2\frac{\partial}{\partial{w}}
\end{split}$$ Thus $$\begin{split}
[L,[L,\overline{L}]]&=2\overline{z_1}\frac{\partial}{\partial\overline{w}}+\frac{\partial}{\partial
{z_2}}-2\overline{z_1}\frac{\partial}{\partial{w}}-\overline{z_1}\frac{\partial}{\partial
\overline{w}}+\overline{z_1}\frac{\partial}{\partial{w}}-(-\overline{z_1})\frac{\partial}{\partial{w}}\\
&=\overline{z_1}\frac{\partial}{\partial\overline{w}}+\frac{\partial}{\partial{z_2}}.
\end{split}$$ Furthermore, we obtain $$\begin{split}
[L,[L,[L,\overline{L}]]] &=\Big[\frac{\partial}{\partial z_1}-|z_1|^2\frac{\partial}{\partial
z_2}+(\overline{z_2}+z_1\overline{z_1}^2)\frac{\partial}{\partial w},\ \overline{z_1}\frac{\partial}{\partial
\overline{w}}+\frac{\partial}{\partial{z_2}}\Big]=0.\\
[\overline{L},[L,[L,\overline{L}]]] &=\Big[\frac{\partial}{\partial
\overline{z_1}}-|z_1|^2\frac{\partial}{\partial
\overline{z_2}}+({z_2}+\overline{z_1}{z_1}^2)\frac{\partial}{\partial\overline{w}},\
\overline{z_1}\frac{\partial}{\partial\overline{w}}+\frac{\partial}{\partial{z_2}}\Big]=\frac{\partial}{\partial
\overline{w}}-\frac{\partial}{\partial\overline{w}}=0.
\end{split}$$ Notice that $$[\overline{L},[\overline{L},[L,\overline{L}]]]=-\overline{[L,[L,[L,\overline{L}]]] }=0,\ \
[L,[\overline{L},[L,\overline{L}]]]=-\overline{[\overline{L},[L,[L,\overline{L}]]] }=0.$$ Hence $t_L(M,0)=+\infty$. That $t_L(M,0)=\infty$ can also be seen geometrically as follows:
Notice that $$L(-2w+2z_1\overline{z_2}+|z_1|^4)=\overline{L}(-2w+2z_1\overline{z_2}+|z_1|^4)=0.$$ Thus Re$(L)$, Im$(L)$ as well as their Lie brackets of any length are all tangent to the real codimension two submanifold of ${\mathbb C}^3$ defined by $\{(z_1,z_2,w)\in \mathbb{C}^3:\
2w=2z_1\overline{z_2}+|z_1|^4\}$. Hence the Lie brackets of Re$(L)$ and Im$(L)$ of any length will always be annihilated by the contact form $\theta|_0=-\frac{1}{2}({\partial w}-{\partial\overline{w}})|_0$ at $0$, which shows that $t_L(M,0)=+\infty$.
Finally, we present the following example, in which $t_L(M,0)$ and $c_L(M,0)$ are finite but different.
**Example 9**. *Let $M$ be a real hypersurface in $\mathbb{C}^3$ with a defining function $$\rho:=-(w+\overline{w})+|z_1|^4+z_1\overline{z_2}+z_2\overline{z_1}+|z_2|^{2}.$$ Suppose $L$ is the tangent vector field of type $(1,0)$ defined by $$L=\frac{\partial}{\partial z_1}-|z_1|^2\frac{\partial}{\partial
z_2}+(\overline{z_2}+z_1\overline{z_1}^2-|z_1|^2\overline{z_2})\frac{\partial}{\partial w}.$$ Then $t_L(M,0)=6$ but $c_L(M,0)=4$.*
As in Example [Example 8](#exa){reference-type="ref" reference="exa"}, we have $L\rho\equiv 0$ and thus $L$ is indeed tangent to $M$. Next $$\begin{split}
\partial\overline{\partial}\rho&=4|z_1|^2dz_1\wedge d\overline{z_1}+dz_1\wedge
d\overline{z_2}+dz_2\wedge d\overline{z_1}+dz_2\wedge d\overline{z_2},\\
{\lambda}(L,{L})&=4|z_1|^2-|z_1|^2-|z_1|^2+|z_1|^4=2|z_1|^2+|z_1|^4.
\end{split}$$ Hence $$L\overline{L}{\lambda}(L,{L})(0)=2\neq 0.$$ We conclude that $c_L(M,0)=4$. As in Example [Example 8](#exa){reference-type="ref" reference="exa"}, $t_L(M,0)$ can be derived as follows: $$\begin{split}
[L,\overline{L}] =&-\overline{z_1}\frac{\partial}{\partial
\overline{z_2}}+2z_1\overline{z_1}\frac{\partial}{\partial\overline{w}}-|z_1|^2\frac{\partial}{\partial
\overline{w}}+{z_1}\frac{\partial}{\partial{z_2}}-2z_1\overline{z_1}\frac{\partial}{\partial
{w}}+|z_1|^2\frac{\partial}{\partial
{w}}\\
&+(-\overline{z_1}{z_2}+|z_1|^4)\frac{\partial}{\partial
\overline{w}}+(z_1\overline{z_2}-|z_1|^4)\frac{\partial}{\partial{w}}\\
=&-\overline{z_1}\frac{\partial}{\partial\overline{z_2}}+{z_1}\frac{\partial}{\partial
{z_2}}-(|z_1|^2-z_1\overline{z_2}+|z_1|^4)\frac{\partial}{\partial
{w}}+(|z_1|^2-z_2\overline{z_1}+|z_1|^4)\frac{\partial}{\partial\overline{w}}.
\end{split}$$ Thus $$\begin{split}
[L,[L,\overline{L}]] &=\overline{z_1}\frac{\partial}{\partial\overline{w}}+\frac{\partial}{\partial
{z_2}}+3z_1\overline{z_1}^2\frac{\partial}{\partial\overline{w}}+(\overline{z_2}-2z_1
\overline{z_1}^2)\frac{\partial}{\partial{w}}-\overline{z_1}\cdot |z_1|^2\frac{\partial}{\partial
w}\\&= \frac{\partial}{\partial{z_2}}+(\overline{z_2}-3\overline{z_1}|z_1|^2)\frac{\partial}{\partial
{w}}+(\overline{z_1}+3\overline{z_1}|z_1|^2)\frac{\partial}{\partial\overline{w}}.
\end{split}$$ Furthermore, we obtain $$\begin{split}
[L,[L,[L,\overline{L}]]] &=3\overline{z_1}^2\frac{\partial}{\partial\overline{w}}-3\overline{z_1}^2\frac{\partial}{\partial{w}}=O(|z|^2).\\
[\overline{L},[L,[L,\overline{L}]]] &=7|z_1|^2\frac{\partial}{\partial
\overline{w}}-7|z_1|^2\frac{\partial}{\partial{w}}=O(|z|^2).
\end{split}$$ Notice that $$[\overline{L},[\overline{L},[L,\overline{L}]]]=-\overline{[L,[L,[L,\overline{L}]]] }=O(|z|^2),\ \
[L,[\overline{L},[L,\overline{L}]]]=-\overline{[\overline{L},[L,[L,\overline{L}]]] }=O(|z|^2).$$ Hence, it is clear that $t_L(M,0)\geq 6$. On the other hand, $$[\overline{L},[\overline{L},[L,[L,[L,\overline{L}]]]]]=6\frac{\partial}{\partial
\overline{w}}-6\frac{\partial}{\partial{w}}.$$ It follows that $t_L(M,0)=6$.
# Order of vanishing along vector fields
In this section, we denote by $(x_1,\cdots,x_n)$ the coordinates of $\mathbb{R}^n$. Let $U\subset {\mathbb R}^n$ be an open subset and let $\{X_1,X_2,\cdots,X_r\}$ be a set of linearly independent real-valued smooth vector fields over $U$. Write $\mathcal D$ for the (real) submodule of the $C^{\infty}(U)$-module ${\mathcal X}_{{\mathbb R}}(U)$ of real-valued smooth vector fields on $U$ that is generated by $\{X_1,X_2,\cdots,X_r\}$.
**Definition 10**. *Let $f\in C^\infty(p_0)$ be a germ of (complex-valued) smooth function at $p_0\in U$.*
*(1). We say $\mathcal{D}^k(f)(p_0)=0$ if $Y_k\left(Y_{k-1}(\cdots Y_1(f)\cdots)\right)(p_0)=0$ for every choice of $Y_j\in \mathcal{D}$, $j=1,\cdots,k$.*
*(2). We define the order of vanishing of $f$ along $\mathcal D$ at $p_0$, denoted by ${\nu}_{\mathcal{D}}(f)(p_0)$, to be $m\in
{\mathbb N}\cup \{\infty\}$ if $\mathcal{D}^k(f)(p_0)=0$ for any $k<m$ and if $\mathcal{D}^m(f)(p_0)\not =0$, namely, $Y_m\left(\cdots(
Y_1(f))\cdots\right)(p_0)\not = 0$ for a certain $Y_1,\cdots, Y_m$ with each $Y_j\in {\mathcal D}$ when $m<\infty$.*
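As a simple illustration (the vector fields below are our own choice, made only to illustrate the definition), let $U={\mathbb R}^3$ with coordinates $(x_1,x_2,t)$, $r=2$, and let $\mathcal D$ be generated by $$X_1=\frac{\partial}{\partial x_1},\qquad X_2=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial t}.$$ For $f=t$ and $p_0=0$ we have $Y(f)(p_0)=0$ for every $Y\in\mathcal{D}$, since $X_1(t)=0$ and $X_2(t)=x_1$ vanishes at $0$, while $X_1\big(X_2(t)\big)=1$; hence ${\nu}_{\mathcal D}(t)(0)=2$.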
Our main goal in this section is to prove the following
**Theorem 11**. *Let $f,g$ be germs of smooth functions at $p_0$. Then*
*(1). $${\nu}_{\mathcal D}(fg)(p_0)={\nu}_{\mathcal
D}(f)(p_0)+
{\nu}_{\mathcal D}(g)(p_0).$$*
*(2). If $0\leq f\leq g$ and $g(p_0)=f(p_0)$, then ${\nu}_{\mathcal D}(f)(p_0)\geq {\nu}_{\mathcal D}(g)(p_0)$.*
*(3). If $f\ge 0$, then ${\nu}_{\mathcal D}(f)(p_0)$ is an even number or infinity.*
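Continuing the illustration above (our own example), take $f=g=t$, so that $fg=t^2\ge 0$. One computes $$X_2(t^2)=2x_1t,\quad X_1X_2(t^2)=2t,\quad X_2X_1X_2(t^2)=2x_1,\quad X_1X_2X_1X_2(t^2)=2\neq 0,$$ while any three derivatives of $t^2$ along $\mathcal D$ can be checked to vanish at $0$. Hence ${\nu}_{\mathcal D}(t^2)(0)=4={\nu}_{\mathcal D}(t)(0)+{\nu}_{\mathcal D}(t)(0)$, in accordance with (1), and the vanishing order of the nonnegative function $t^2$ is even, in accordance with (3).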
After a linear change of coordinates, assume that $p_0=0$ and $$\begin{split}
X_j=\frac{\partial}{\partial x_j}+\widehat{X_j}\ \ \text{with} \
\widehat{X_j}|_{p_0}=0.
\end{split}$$
Write $$E_{0}=\hbox{span}\{X_j|_{p_0},\ j=1,\cdots,r\}=\hbox{span}\{\frac{\partial}{\partial x_1}|_{p_0},\cdots,
\frac{\partial}{\partial x_r}|_{p_0}\}.$$
Define $E_1\subset T_{p_0}U$ to be the linear span of $E_0$ and the values at $p_0=0$ of commutators of vector fields from $\mathcal{D}$ of length $m_1$, where $m_1$ is the smallest integer, if existing, such that $E_1\supsetneq E_0$. $m_1$ is called the first Hörmander number of $\mathcal{D}$. $\ell_1=\text{dim} E_1-r$ is called the multiplicity of $m_1$.
When $m_1<\infty$, we define $E_2\subset T_{p_0}U$ to be the linear span of $E_1$ and the values at $p_0=0$ of commutators of vector fields from $\mathcal{D}$ of length $m_2$, where $m_2$ is the smallest integer, if existing, such that $E_2\supsetneq E_1$. $m_2$ is called the second Hörmander number of $\mathcal{D}$. $\ell_2=\text{dim} E_2-\text{dim} E_1$ is called the multiplicity of $m_2$.
Inductively one can define a finite sequence of natural numbers $m_1<\cdots<m_h$ with $h\ge 1$, and the associated linear subspaces of $T_{p_0}U$: $E_1\varsubsetneq E_2\varsubsetneq \cdots
\varsubsetneq E_h\subset T_{p_0}U$. Here $m_h$ is the largest Hörmander number and $\ell_{j+1}=\text{dim}E_{j+1}-\text{dim}E_{j}$ is the multiplicity of $m_{j+1}$ for $j=0,\cdots,h-1$. When $E_h=T_{p_0}U$, we say that $\mathcal{D}$ is of finite Hörmander type at $p_0$. Otherwise, we say $\mathcal{D}$ is of infinite type at $p_0$. A deep theorem on the normalization of $\mathcal{D}$ by Helffer-Nourrigat and Baouendi-Rothschild gives the following statements (for a detailed proof, see, for example, [@BER Theorem 3.5.2]):\
(1). There exists a coordinate system $(x,s,s')$ of ${\mathbb R}^n$ centered at $p_0$, so that $p_0$ corresponds to $0$, where $x=(x_1,\cdots,x_r)$, $s=(s_1,\cdots,s_h)$, $s_j=(s_{j1},\cdots,s_{j\ell_j})\in
{\mathbb R}^{\ell_j}$ ($j=1,\cdots,h$), $s'=(s'_1,\cdots,s'_{m'})\in
{\mathbb R}^{m'}$, such that in the new coordinates $(x,s,s')$, the following holds: $$\label{001}\begin{split}
X_k=\frac{\partial}{\partial
x_k}+\sum\limits_{j=1}^{h}\sum\limits_{q=1}^{\ell_j}P_{k,j,q}(x,s_1,\cdots,s_{j-1})
\frac{\partial}{\partial
s_{jq}}+O_{\hbox{wt}}(0),\ \ k=1,\cdots,r.
\end{split}$$ Here after assigning the weight of $x:=(x_1,\cdots,x_r)$ to be one and that of $s_j$ to be $m_j$ for $j=1,\cdots,h$, namely, after setting $\hbox{wt}(x)=1$ and $\hbox{wt}(s_j)=m_j$ for $j=1,\cdots,h$, each $P_{k,j,q}(x,s_1,\cdots,s_{j-1})$ is a weighted homogeneous polynomial of weighted degree $m_j-1$ for $j=1,\cdots,h$. We define the weight of $s'$ to be infinity. For each $x_k,s_{jq}$, set $$\begin{split}
\text{wt}(\frac{\partial}{\partial x_k})=-1,\ \ \text{wt}(\frac{\partial}{\partial
s_{jq}})=-m_j.
\end{split}$$ Write $$\begin{split}
X^0_k=\frac{\partial}{\partial
x_k}+\sum\limits_{j=1}^{h}\sum\limits_{q=1}^{\ell_j}P_{k,j,q}(x,s_1,\cdots,s_{j-1})\frac{\partial}{\partial
s_{jq}}.
\end{split}$$ Then $X_k^0$ is a homogeneous vector field of weighted degree $-1$ for each $k$. The meaning of $O_{\hbox{wt}}(0)$ in ([\[001\]](#001){reference-type="ref" reference="001"}) is that each term in the weighted formal Taylor expansion of $X_k-X_k^0$ at $0$ has weight at least $0$ (after assigning the weight of $s'$ to be infinity).
(2). Let $\{Y_{1,1},\cdots,Y_{1,\ell_1}\}$ be the commutators of vector fields from $\mathcal{D}$ of length $m_1$ such that $$\begin{split}
E_1=E_0\oplus \hbox{span}\{Y_{1,1}|_0,\cdots,Y_{1,\ell_1}|_0\}.
\end{split}$$ Notice by the statement in (1) that the lowest degree in the formal expansion of each $Y_{1,j}$ at $0$ is $-m_1$. Hence $$\begin{split}
\hbox{span}\{Y_{1,1},\cdots,Y_{1,\ell_1}\}|_0=\hbox{span}\{\frac{\partial}{\partial
s_{1,1}}|_0,\cdots,\frac{\partial}{\partial s_{1,\ell_1}}|_0\}.
\end{split}$$ After a linear change of coordinates in $(s_{1,1},\cdots,s_{1,\ell_1})$, we may assume that $$\begin{split}
Y_{1,j}|_0=\frac{\partial}{\partial s_{1,j}}|_0\ \text{for}\ j=1,\cdots,\ell_1.
\end{split}$$ Inductively, we then define for $k\le h$ the commutators of vector fields from $\mathcal{D}$ of length $m_k$ $$\begin{split}
\{Y_{k,j}\}_{j=1}^{\ell_k}\ \text{such that}\
Y_{k,j}|_0=\frac{\partial}{\partial s_{k,j}}|_0\ \text{for}\ j=1,\cdots,\ell_k\
\text{with \ \ wt}(Y_{k,j})=O_{\hbox{wt}}(-m_k).
\end{split}$$ Then $$\begin{split}
Y_{k,j}=\frac{\partial}{\partial
s_{k,j}}+\sum\limits_{\tau>k}\sum_{q=1}^{\ell_\tau}Q_{k,j,\tau,q}(x,s_1,\cdots,s_{k-1})\frac{\partial}{\partial
s_{\tau q}}+O_{wt}(-m_k+1),\ \ j=1,\cdots,\ell_k.\\
Y^0_{k,j}=\frac{\partial}{\partial
s_{k,j}}+\sum\limits_{\tau>k}\sum_{q=1}^{\ell_\tau}Q_{k,j,\tau,q}(x,s_1,\cdots,s_{k-1})
\frac{\partial}{\partial
s_{\tau q}}
\end{split}$$ Here $Q_{k,j,\tau,q}(x,s_1,\cdots,s_{k-1})$ are weighted homogeneous polynomials of degree $m_\tau-m_k$. The notation $O_{wt}(-m_k+1)$ has a similar meaning. Write $Y_{0,j}=X_j$.
Write $Y_k=(Y_{k1},\cdots,Y_{k\ell_k})$ and $Y_k^{\beta_k}=Y_{k1}^{\alpha_{k,1}}\cdots Y_{k,\ell_k}^{\alpha_{k,\ell_k}}$ with $\beta_k=(\alpha_{k,1},\cdots,\alpha_{k,\ell_k})$ for $k=0,\cdots,h$.
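Before proceeding, it may help to keep in mind the simplest model case, recorded here only as an illustration of the above normalization (it is not used in the proofs below). In ${\mathbb R}^3$ with coordinates $(x_1,x_2,s)$, take $r=2$, $X_1=\frac{\partial}{\partial x_1}$ and $X_2=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial s}$. Then $[X_1,X_2]=\frac{\partial}{\partial s}$, so that $E_0=\hbox{span}\{\frac{\partial}{\partial x_1}|_0,\frac{\partial}{\partial x_2}|_0\}$, $m_1=2$, $\ell_1=1$, $E_1=T_0{\mathbb R}^3$ and $h=1$; in particular, $\mathcal D$ is of finite Hörmander type at $0$. After setting $\hbox{wt}(x)=1$ and $\hbox{wt}(s)=2$, the fields are already in the normal form ([\[001\]](#001){reference-type="ref" reference="001"}) with $P_{2,1,1}(x)=x_1$, a weighted homogeneous polynomial of degree $m_1-1=1$, and one may take $Y_{1,1}=[X_1,X_2]=\frac{\partial}{\partial s}$.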
With the just discussed normalization of $\mathcal D$ at our disposal, we now let $f\in C^{\infty}(U)$ and let $f\sim\sum\limits_{j\geq
N}^{\infty}f^{(j)}$ be the weighted formal expansion with $wt(s')=\infty$. Write, in what follows, $\prod_{j=0}^{h}Y^{\beta_{j}}_j=Y^{\beta_{0}}_0\cdots Y^{\beta_{h}}_{h}$. Then $$\label{002}\begin{split}
\left(\prod_{k=0}^{h}
Y_k^{\beta_k}\right)(f)=\left(\prod_{k=0}^{h}
(Y_k^{0})^{\beta_k}\right)(f^{(N)})+O_{wt}(N^*+1).
\end{split}$$ Here $\left(\prod_{k=0}^{h} (Y_k^{0})^{\beta_k}\right)(f^{(N)})$ is a weighted homogeneous polynomial of degree $N^*:=N-\sum_{k=0}^{h} |\beta_k| m_k$, where $m_0=1$.
Let $\Gamma=x^{\beta_0} s_1^{\beta_1}\cdots s_h^{\beta_h}$. Then $\Gamma$ is a polynomial of weighted degree $m:=\sum_{k=0}^{h}|\beta_k|
m_k.$ By ([\[002\]](#002){reference-type="ref" reference="002"}), $\nu_{\mathcal D}(\Gamma)(0)\ge m$. On the other hand, $Y_0^{\beta_0}Y^{\beta_1}_1\cdots Y^{\beta_h}_h(\Gamma)(0)=\beta_0 !\beta_1
!\cdots \beta_h!\neq 0.$ Since $Y_0^{\beta_0}Y^{\beta_1}_1\cdots
Y^{\beta_h}_h(\Gamma)(0)$ is a finite linear combination of terms of the form $$Z_m\cdots Z_1(\Gamma)(0),\ \ Z_j\in \{X_1,\cdots, X_r\}\ \hbox{ for } j=1,\cdots,m,$$ we conclude that ${\mathcal D}^m(\Gamma)(0)\not =0$. Therefore $$\label{003}\begin{split}
{\nu}_{\mathcal D}(\Gamma)(0)=m=\sum_{k=0}^{h}|\beta_k| m_k.
\end{split}$$
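For instance, continuing the model case mentioned above (again purely as an illustration of ([\[003\]](#003){reference-type="ref" reference="003"})), with $X_1=\frac{\partial}{\partial x_1}$, $X_2=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial s}$ in ${\mathbb R}^3$ and $\Gamma=x_1s$, a monomial of weighted degree $1+2=3$, one computes $$X_1(\Gamma)=s,\ \ X_2(\Gamma)=x_1^2,\ \ X_iX_j(\Gamma)(0)=0\ \hbox{ for all } i,j,\ \ \hbox{while}\ \ X_1X_2X_1(\Gamma)=X_1(X_2(s))=X_1(x_1)=1,$$ so that ${\nu}_{\mathcal D}(\Gamma)(0)=3$, in accordance with ([\[003\]](#003){reference-type="ref" reference="003"}).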
**Theorem 12**. *Assume the notations and definitions set up above. Let $f$ be a germ of complex-valued smooth function at $p_0=0$. Assume that $f= f^{(m)}+O_{wt}(m+1)$, where $f^{(m)}$ is the lowest weighted non-zero homogeneous term of degree $m\in {\mathbb N}$. Then $m= {\nu}_{\mathcal D}(f)(0)$. If $f= O_{wt}(m)$ for any $m$, then ${\nu}_{\mathcal D}(f)(0)=\infty.$*
*Proof.* Assume that $$f^{(m)}=\sum\limits_{\sum\limits_{j=0}^{h}|\beta_j|m_j=m}a_{\beta_0\cdots
\beta_h}x^{\beta_0}s_1^{\beta_1}\cdots s_h^{\beta_h},\ \
a_{\beta_0\cdots \beta_h}\not \equiv 0.$$ We first find the largest $\beta_h$ in the lexicographic order among non-zero monomials. Then among those non-zero monomials with $\beta_h$ being maximal, we find the largest $\beta_{h-1}$ in the lexicographic order. By an induction, we get the $(\beta_h,\cdots,\beta_0)$ which is lexicographically maximal among those non-zero monomials in the above expansion of $f^{(m)}$. Then by ([\[003\]](#003){reference-type="ref" reference="003"}), $$Y_0^{\beta_0}Y^{\beta_1}_1\cdots Y^{\beta_h}_h f(p_0)=a_{\beta_0\cdots
\beta_h}\beta_h!\cdots \beta_0!\neq 0.$$ As in the argument to prove ([\[003\]](#003){reference-type="ref" reference="003"}), we conclude that ${\nu}_{\mathcal D}(f)(0)=m$. ◻
*Proof of Theorem [Theorem 11](#cor1.3){reference-type="ref" reference="cor1.3"}.* We can assume $p_0=0$ and ${\mathcal D}$ has been normalized as above. (1) is a direct consequence of Theorem [Theorem 12](#thm1.2){reference-type="ref" reference="thm1.2"}. As for (2), we have $g-f\geq 0$ and $(g-f)(p_0)=0$. Suppose that ${\nu}_{\mathcal D}(f)(p_0)< {\nu}_{\mathcal D}(g)(p_0)$. Then $g-f=-f^{(m)}+O_{wt}(m+1)$, where $f^{(m)}$ is the lowest non-zero weighted homogeneous polynomial in the expansion of $f$. Apparently, $f^{(m)}\geq 0$ as $f\ge 0$ and ${\nu}_{\mathcal D}(f)(p_0)<\infty$. Also, $-f^{(m)}\geq 0$ as $g-f\ge 0$ and $f^{(m)}\not\equiv 0$. This is a contradiction.
To prove (3), assume that ${\nu}_{\mathcal D}(f)(p_0)=m$ with $0<m<\infty$. Then $f^{(m)}\not \equiv 0$ but $f^{(m)}\ge 0$. Since $f^{(m)}(\tau x, \tau^{m_1}s_1,\cdots,
\tau^{m_h}s_h)=\tau^{m}f^{(m)}(x, s_1,\cdots, s_h)\ge 0$ for any $\tau\in {\mathbb R}$, it follows that $m$ is an even number. ◻
The following result, which reduces the order of vanishing along a system of vector fields to that along a single vector field, can be very useful in applications; for instance, it also gives an immediate proof of Theorem [Theorem 11](#cor1.3){reference-type="ref" reference="cor1.3"}.
**Theorem 13**. *Let $L(x_1,\cdots,x_r)=\sum\limits_{j=1}^{r}a_jx_j$ be a non-zero linear function in $x=s_0$ (here and below we use the conventions $m_0=1$, $\ell_0=r$ and $Y_{0,q}=X_q$). Let $$Z=\sum\limits_{j=0}^{h}\sum\limits_{q=1}^{\ell_j}
\frac{1}{m_j}t_{j,q}L(x)^{m_j-1}Y_{j,q},\ \ t_{j,q}\in \mathbb{R}.$$ Let $f\in C^\infty(p_0)$. For a generic choice of $\{t_{j,q}\}$, $${\nu}_{\mathcal{D}}(f)(p_0)= {\nu}_{Z}(f)(p_0).$$ Here ${\nu}_{Z}(f)(p_0)$ is the smallest non-negative integer $\ell$ such that $Z^\ell(f)(p_0)\not = 0$, or $\infty$ if no such $\ell$ exists.*
*Proof.* Apparently, $${\nu}_{{Z}}(f)(p_0)\geq
{\nu}_{\mathcal{D}}(f)(p_0):=m.$$ We need only to assume that $m<\infty$ and prove that $Z^mf^{(m)}(p_0)\neq 0$ for a generic choice of $$(t_0,t_1,\cdots,t_h):=(t_{0,1},\cdots,t_{h,\ell_h})\in {\mathbb R}^{n_0},$$ namely, for any choice of $(t_{0,1},\cdots,t_{h,\ell_h})$ but away from a certain proper real analytic variety in ${\mathbb R}^{n_0}$. (Here $n_0=r+\sum_{j=1}^h\ell_j.$)
Let $\gamma(\tau)$ be the integral curve of $Z$ through $p_0=0$. Namely, $\frac{d \gamma(\tau)}{d\tau}=Z\circ \gamma(\tau)$ and $\gamma(0)=p_0=0$. Write $\gamma(\tau)=(x(\tau),s_1(\tau),\cdots,s_h(\tau),\widetilde{s}(\tau))$. Then $\frac{d x(\tau)}{d\tau}=t_0+O(\tau)$. Hence $x(\tau)=t_0\tau+O(\tau^2)$. Similarly, $$\frac{d
s_j(\tau)}{d\tau}=\frac{1}{m_j}t_jL(t_0)^{m_j-1}\tau^{m_j-1}+O(\tau^{m_j}).$$ Hence $$s_j(\tau)=t_jL(t_0)^{m_j-1}\tau^{m_j}+O(\tau^{m_j+1}),\ \
\widetilde{s}(\tau)=O(\tau^{m+1}).$$ Now, $$Z^m(f)(0)=\frac{d^m}{d \tau^m}(f^{(m)}\circ \gamma(\tau))\Big|_{\tau=0}.$$ Write $$f^{(m)}(s_0,s_1,\cdots,s_h)=\sum a_{\beta_0\beta_1\cdots
\beta_h}s_0^{\beta_0}\cdots s_h^{\beta_h}.$$ Then $$Z^m(f)(p_0=0)=\sum a_{\beta_0\beta_1\cdots
\beta_h}\left(L(t_0)\right)^{\sum\limits_{j=0}^{h}(m_j-1)|\beta_j|}m!t_0^{\beta_0}\cdots
t_h^{\beta_h}.$$ Suppose $Z^m(f)(p_0)\equiv 0$ for any choice of $(t_0,t_1,\cdots,t_h)$. Then $$\sum a_{\beta_0\beta_1\cdots
\beta_h}L(t_0)^{m-\sum_{j=0}^h|\beta_j|}m!\,t_0^{\beta_0}\cdots
t_h^{\beta_h}\equiv 0,\ (t_0,t_1,\cdots,t_h)\in {\mathbb R}^{n_0}.$$ Hence $a_{\beta_0\beta_1\cdots \beta_h}\equiv 0.$ This contradicts that $f^{(m)}\not\equiv 0$. Hence, for $(t_{0,1},\cdots,t_{h,\ell_h})$ not belonging to the proper real analytic subset of ${\mathbb R}^{n_0}$ defined by $$\sum
a_{\beta_0\beta_1\cdots
\beta_h}L(t_0)^{m-\sum_{j=0}^h|\beta_j|}m!\,t_0^{\beta_0}\cdots
t_h^{\beta_h}= 0,$$ we have $${\nu}_{\mathcal{D}}(f)(p_0)= {\nu}_{Z}(f)(p_0)=m.$$ ◻
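To illustrate the genericity statement above (again in the model case $X_1=\frac{\partial}{\partial x_1}$, $X_2=\frac{\partial}{\partial x_2}+x_1\frac{\partial}{\partial s}$ in ${\mathbb R}^3$, used here only as an example), take $L(x)=x_1$, $Y_{1,1}=\frac{\partial}{\partial s}$ and $f=s$, so that ${\nu}_{\mathcal D}(f)(0)=2$. Then $$Z=t_{0,1}X_1+t_{0,2}X_2+\frac{1}{2}t_{1,1}x_1\frac{\partial}{\partial s},$$ and a direct computation gives $Z(f)=\big(t_{0,2}+\tfrac{1}{2}t_{1,1}\big)x_1$ and $Z^2(f)(0)=\big(t_{0,2}+\tfrac{1}{2}t_{1,1}\big)t_{0,1}$. Hence ${\nu}_{Z}(f)(0)=2={\nu}_{\mathcal D}(f)(0)$ precisely when $t_{0,1}\neq 0$ and $t_{0,2}+\tfrac{1}{2}t_{1,1}\neq 0$, that is, away from a proper real analytic subset of the parameter space.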
## Application to the study of regular types of real hypersurfaces
We now adopt the notation and definitions which we have set up in Section [1.2](#subb1){reference-type="ref" reference="subb1"}. We let $M\subset {\mathbb C}^n$ be a smooth real hypersurface and let $B$ be a smooth subbundle of complex dimension $s$ of $T^{(1,0)}M$. Write ${\mathcal D}_B$ for the submodule, over the ring of real-valued smooth functions on $M$, generated by $\hbox{Re}(L)$ and $\hbox{Im}(L)$ with $L\in \Gamma_\infty(B)$, where $\Gamma_\infty(B)$ denotes the set of smooth sections of $B$. Then ${\mathcal D}_B$ is locally generated by $2s$ ${\mathbb R}$-linearly independent real vector fields. For any smooth function $f$, we define $$\nu_B(f)(p)=\nu_{{\mathcal D}_B}(f)(p).$$ Notice that $\nu_B(f)(p)$ is the least $m$ such that $Y_m\cdots
Y_1(f)(p)\not =0$ where $Y_j\in {\mathcal M}_1(B)$. If such an $m$ does not exist, $\nu_B(f)(p)=\infty.$
We then apparently have $$\label{006}\min_{L\in \Gamma_\infty(B),\ L|_p\not
=0}\nu_{B}({\lambda}(L,L))(p)=c^{(s)}(B,p).$$ When $B=\hbox{span}\{L\}$, as before, we write $\nu_L(f)(p)=\nu_B(f)(p).$ Now, we can reformulate Theorem [Theorem 11](#cor1.3){reference-type="ref" reference="cor1.3"} as follows:
**Theorem 14**. *Let $B$ and $M$ be as just defined. Let $f,g$ be germs of (complex-valued) smooth functions over $M$ at $p\in M$. Then*
*(1). $${\nu}_{B}(fg)(p)={\nu}_{B}(f)(p)+
{\nu}_{B}(g)(p).$$*
*(2). If $f,g$ are real-valued and $0\leq f\leq g$ with $g(p)=f(p)$, then ${\nu}_{B}(f)(p)\geq {\nu}_{B}(g)(p)$.*
*(3). If $f\ge 0$, then ${\nu}_{B}(f)(p)$ is an even number if not infinity.*
**Corollary 15**. *Let $M\subset {\mathbb C}^n$ be a real hypersurface with $n\ge 2$. Let $L$ be a CR vector field along $M$ not vanishing at any point. Let $f,g$ be germs of smooth functions over $M$ at $p\in M$. Then*
*(1). $${\nu}_{L}(fg)(p)={\nu}_{L}(f)(p)+
{\nu}_{L}(g)(p).$$*
*(2). If $0\leq f\leq g$ and $g(p)=f(p)$, then ${\nu}_{L}(f)(p)\geq {\nu}_{L}(g)(p)$.*
Corollary [Corollary 15](#cor2.4){reference-type="ref" reference="cor2.4"} first appeared in a paper of D'Angelo \[p. 105, Remark 3, [@DA3]\].
**Corollary 16**. *Assume that $M$ is pseudoconvex. For any $p\in M$ and any subbundle $B$ of $T^{(1,0)}M$ with complex dimension $s$, then*
*(1). $c^{(s)}(B,p)$ and $c^{(s)}(M,p)$ are even numbers if not infinity.*
*(2). Let $L$ be a CR vector field of $M$ not equal to zero at any point. If $\mathcal{D}_{L}^{2j}\lambda(X,{X})(0)=0$ and $\mathcal{D}_{L}^{2k}\lambda(Y,{Y})(0)=0$, then $\mathcal{D}_{L}^{j+k+1}\lambda(X,{Y})(0)=0$.*
*Proof.* Pseudo-convexity of $M$ implies that ${\lambda}(L,L)\ge 0$. Then (1) follows immediately from ([\[006\]](#006){reference-type="ref" reference="006"}) and Theorem [Theorem 14](#cor2.3){reference-type="ref" reference="cor2.3"}(3).
To prove (2), by (1), the assumption in (2) shows that $\mathcal{D}_{L}^{2j+1}\lambda(X,{X})(0)=0$ and $\mathcal{D}_{L}^{2k+1}\lambda(Y,{Y})(0)=0$. From the Schwarz inequality for a non-negative Hermitian form, it follows that $$|\lambda(X,{Y})|^2\leq \lambda(X,{X})\cdot
\lambda(Y,{Y}).$$ The statement in (2) follows from Theorem [Theorem 14](#cor2.3){reference-type="ref" reference="cor2.3"}. ◻
Still let $M, \ B$ be as above and assume that $M$ is pseudoconvex. Let $V_B=\{L_1,\cdots,L_s\}$ be a basis of smooth sections of $B$ near $p$, and let $L$ be such that $c^{(s)}(B,
p)=\nu_{B}\left({\lambda}(L,L)\right)(p)$, with $L=\sum_{j}a_jL_j$. Then ${\lambda}(L,L)=\sum_{j,k}a_j\overline{a_k}{\lambda}(L_j,L_k).$ Define the trace of the Levi-form along $V_B$ by $$
\text{tr}_{V_B}{\lambda}_M(q)=\sum\limits_{j=1}^{s} {\lambda}(L_j,{L_j}),
\ \ q\approx p.$$ By Theorem [Theorem 14](#cor2.3){reference-type="ref" reference="cor2.3"} and Corollary [Corollary 16](#cor1.5){reference-type="ref" reference="cor1.5"}(2), it follows that $$\nu_{B}\left({\lambda}(L,L)\right)(p)=\min_{1\leq j\leq s}\{\nu_{B}\left({\lambda}(L_j,L_j)\right)(p)\}=
\nu_{B}\left( \text{tr}_{V_B}{\lambda}_{M}\right)(p).$$ Hence, we have
**Corollary 17**. *Assume that $M$ is pseudoconvex and $B$ is a smooth complex subbundle of $T^{(1,0)}M$. For any linearly independent local frame $V_B$ of $\Gamma_\infty(B)$ near $p\in M$, it holds that $$c^{(s)}(B,p)=\nu_{B}\left( \text{tr}_{V_B}{\lambda}_{M}\right)(p).$$*
# References
S. Baouendi, P. Ebenfelt, L. Rothschild, Real Submanifolds in Complex Space and Their Mappings, Princeton Mathematical Series, vol. 47, Princeton University Press, Princeton, NJ, 1999.
M. S. Baouendi and L. Rothschild, Normal forms for generic manifolds and holomorphic extension of CR functions, J. Differential Geom. 25 (1987), no. 3, 431-467.
T. Bloom and I. Graham, A geometric characterization of points of type m on real submanifolds of $\mathbb{C}^n$, J. Differential Geometry 12 (1977), 171-182.
T. Bloom and I. Graham, On 'type' conditions for generic real submanifolds of $\mathbb{C}^n$, Invent. Math. 40 (1977), 217-243.
T. Bloom, Remarks on type conditions for real hypersurfaces in $\mathbb{C}^n$, Proc. Internat. Conf. on Several Complex Variables, Cortona, Italy, 1976-77, Scuola Norm. Sup. Pisa, Pisa, 1978, pp. 14-24.
T. Bloom, On the contact between complex manifolds and real hypersurfaces in $\mathbb{C}^3$, Trans. Amer. Math. Soc. 263 (1981), no. 2, 515-529.
V. Brinzanescu and A. Nicoara, On the relationship between D'Angelo q-type and Catlin q-type, J. Geom. Anal. 25 (2015), no. 3, 1701-1719.
D. Catlin, Subelliptic estimates for the $\overline{\partial}$-Neumann problem on pseudoconvex domains, Ann. Math. 126 (1987), no. 1, 131-191.
D. Catlin, Global regularity of the $\overline{\partial}$-Neumann problem, Complex analysis of several variables (Madison, Wis., 1982), 39-49, Proc. Sympos. Pure Math., 41, Amer. Math. Soc., Providence, RI, 1984.
Y. Chen, W. Yin and P. Yuan, The commutator type and the Levi form type in $\mathbb{C}^3$, J. Math. 40 (2020), no. 4, 389-394.
S. Cho, Y. You, On sharp Hölder estimates of the Cauchy-Riemann equation on pseudoconvex domains in $\mathbb{C}^n$ with one degenerate eigenvalue, Abstract Applied Analysis, 2015, 1-6.
J. D'Angelo, Real hypersurfaces, orders of contact, and applications, Ann. of Math. 115 (1982), no. 3, 615-637.
J. D'Angelo, Iterated commutators and derivatives of the Levi form, Complex analysis (University Park, Pa., 1986), 103-110, Lecture Notes in Math., 1268, Springer, Berlin, 1987.
J. D'Angelo, Several complex variables and the geometry of real hypersurfaces, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1993.
J. D'Angelo, A remark on finite type conditions. J. Geom. Anal. 28 (2018), no. 3, 2602-2608.
J. D'Angelo, Finite-type conditions for real hypersurfaces in ${\mathbb C}^n$, Complex analysis (University Park, Pa., 1986), 83-102, Lecture Notes in Math., 1268, Springer, Berlin, 1987.
J. D'Angelo, J. Kohn, Subelliptic estimates and finite type, Several complex variables (Berkeley, CA, 1995-1996), 199-232, Math. Sci. Res. Inst. Publ., 37, Cambridge Univ. Press, Cambridge, 1999.
G. Folland and J. Kohn, The Neumann problem for the Cauchy-Riemann complex, Annals of Mathematics Studies, No. 75. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1972.
K. Diederich and J. Fornaess, Pseudoconvex domains with real analytic boundary, Ann. of Math. 107 (1978), no. 3, 371-384.
M. Fassina, A remark on two notions of order of contact, J. Geom. Anal. 29 (2019), no. 1, 707-716.
M. Fassina, Type condition for a complex vector field, preprint, 2018.
M. Freeman, The Levi form and local complex foliations, Proc. Amer. Math. Soc. 57 (1976), no. 2, 369-370.
S. Fu, A. Isaev and S. Krantz, Finite type conditions on Reinhardt domains, Complex Variables Theory Appl. 31 (1996), no. 4, 357-363.
P. Greiner, Subelliptic estimates for the $\overline{\partial}$-Neumann problem in $\mathbb{C}^2$, J. Differential Geometry 9 (1974), 239-250.
B. Helffer and J. F. Nourrigat, Approximation d'un système de champs de vecteurs et applications à l'hypoellipticité, Ark. Mat. 17 (1979), no. 2, 237-254.
X. Huang, W. Yin, Regular multi-types and the Bloom conjecture, J. Math. Pures Appl., 146 (2021), no. 9, 69-98.
X. Huang and W. Yin, A Bishop surface with a vanishing Bishop invariant, Invent. Math. 176 (2009), 461-520.
X. Huang and W. Yin, A note on the commutator type and the Levi form type, preprint. (2023)
J. Kohn, Harmonic integrals on strongly pseudoconvex manifolds I, Ann. of Math. 78 (1963), no. 2, 112-148 ; II., Ann. of Math. 79 (1964), no. 2, 450-472.
J. Kohn, Boundary behaviour of $\overline{\partial}$ on weakly pseudo-convex manifolds of dimension two, J. Differential Geometry 6 (1972), 523-542.
J. Kohn, Subellipticity of the $\overline{\partial}$-Neumann problem on pseudo-convex domains: sufficient conditions, Acta Math. 142 (1979), no. 1-2, 79-122.
S. Kim and D. Zaitsev, Jet vanishing orders and effectivity of Kohn's algorithm in dimension 3, Asian J. Math. 22 (2018), no. 3, 545-568.
S. Kim and D. Zaitsev, Jet vanishing orders and effectivity of Kohn's algorithm in any dimension, Adv. Math. 387 (2021), Paper No. 107803, 36 pp.
J. McNeal, Convex domains of finite type, J. Funct. Anal. 108 (1992), no. 2, 361-373.
J. McNeal and L. Mernik, Regular versus singular order of contact on pseudoconvex hypersurfaces, J. Geom. Anal. 28 (2018), no. 3, 2653-2669.
A. Nicoara, Direct proof of termination of the Kohn algorithm in the real-analytic case, Pure Appl. Math. Q. 18 (2022), no. 2, 719-761.
N. Sibony, Une classe de domaines pseudoconvexes. (French) \[A class of pseudoconvex domains\], Duke Math. J. 55 (1987), no. 2, 299-319.
Y. Siu, Effective termination of Kohn's algorithm for subelliptic multipliers, Pure Appl. Math. Q., Special Issue: In honor of Joseph J. Kohn. Part 2, 6 (2010), no. 4, 1169-1241.
E. Straube, Lectures on the $L^2$-Sobolev theory of the $\overline\partial$-Neumann problem, ESI Lectures in Mathematics and Physics, European Mathematical Society (EMS), Zürich, 2010. viii+206 pp.
D. Zaitsev, Personal communication on the compactness of Kohn's $\overline\partial$-Neumann operator, Europe, June 2023.
Xiaojun Huang, Department of Mathematics, Rutgers University, New Brunswick, NJ 08903, USA (huangx@math.rutgers.edu);
Wanke Yin, School of Mathematics and Statistics, Wuhan University, Wuhan, Hubei 430072, China (wankeyin@whu.edu.cn).
[^1]: Supported in part by DMS-2247151 and DMS-2000050
[^2]: Supported in part by NSFC-12171372
**DIRECTIONAL TYKHONOV WELL-POSEDNESS FOR OPTIMIZATION PROBLEMS AND VARIATIONAL INEQUALITIES**\
Vo Ke Hoang [^1]$^,$[^2] and Vo Si Trong Long[^3]$^,$[^4]
**Abstract.** By using the so-called minimal time function, we propose and study a novel notion of *directional Tykhonov well-posedness* for optimization problems, which is an extension of the widely acknowledged notion of Tykhonov. In this way, we first provide some characterizations of this notion in terms of the diameter of level sets and admissible functions. Then, we investigate relationships between the level sets and admissible functions mentioned above. Finally, we apply the technology developed before to study directional Tykhonov well-posedness for variational inequalities. Several examples are presented as well to illustrate the applicability of our results.\
**Key words:** Directional minimal time function; Directional Tykhonov well-posedness; Optimization problems; Variational inequalities; Level sets; Admissible functions.\
**Mathematics Subject Classification:** 49K40; 49J40; 49J53; 49S05; 90C25.
# Introduction
The various concepts of well-posedness are highly important in nonlinear analysis, as they occupy a pivotal position in both theory and numerical methods. The first fundamental results on this topic were established by Hadamard [@hadamard1902problemes]. In the early sixties, well-posedness for unconstrained minimization problems was studied by Tykhonov [@tikhonov1966stability], which means the existence and uniqueness of the solution and the convergence of every minimizing sequence to the solution. Its extension to constrained optimization problems was made by Levitin and Polyak [@levitin1966convergence], allowing minimizing sequences $(x_k)$ to be outside of the feasible set $A$ and expecting $d(x_k, A)$ (the distance from $x_k$ to $A$) to tend to zero. Since these fundamental works, many definitions of well-posedness have been introduced and extensively studied (see, e.g., [@anh2016tykhonov; @anh2021tikhonov; @bianchi2010well; @canovas1999stability; @crespi2007well; @dontchev2006well; @furi1970well; @furi1970characterization; @gutierrez2012pointwise; @huang2006generalized; @ioffe2005typical; @lucchetti2020well; @morgan2006discontinuous; @sofonea2019well; @zolezzi1981characterization] and the references therein). The main directions for developing Hadamard's, Tykhonov's and Levitin-Polyak's famous results so far are introducing generalized notions of well-posedness and extending them to optimization-related problems in infinite dimensional spaces. It should be emphasized that, in the publications cited above, most kinds of well-posedness were formulated in terms of usual distance functions.
Among the most significant types of generalized distance functions, we mention the so-called minimal time function. Because of their important applications in convex analysis, variational analysis, optimization theory, etc., this class of minimal time functions has been extensively and systematically studied by many researchers (see, e.g., the publications [@colombo2010well; @colombo2004subgradient; @he2006subdifferentials; @jiang2012subdifferential; @mordukhovich2010limiting; @mordukhovich2011subgradients; @wolenski1998proximal] with many references and discussions therein). One of the most significant advantages of the minimal time function is that it can be expressed as a very general directional distance function. Nam and Zălinescu [@nam2013variational] were crucial players in this process from the beginning. They studied several generalized differentiation properties of this class of functions and applied them to investigate location problems. Then, Durea et al. [@durea2016minimal; @durea2017new] made an excellent extension of the work of Nam and Zălinescu, and obtained many applications to the theory of directional metric regularity. This development creates opportunities for further studies. For instance, Long [@long2021new; @long2022invariant; @long2023directional] introduced and investigated new notions of local and global directional error bounds, in which usual distance functions were replaced by minimal time functions.
Following the aforementioned research directions originated by Durea et al. [@durea2017new] and being prompted by [@burlicua2023directional; @chelmucs2019directional; @cibulka2020stability; @long2022invariant; @long2023directional; @lucchetti1981characterization; @nam2013variational], it is natural to propose a new type of directional Tykhonov well-posedness for optimization problems by making use of the minimal time function, as will be seen in the next section. This notion allows us to study both non-convex and convex cases simultaneously.
Our paper is structured as follows. Section 2 contains some notation, definitions, and basic results broadly used in the subsequent material. Section 3 is devoted to studying characterizations of directional Tykhonov well-posedness for optimization problems. First, we give some results on directional Tykhonov well-posedness without convexity assumptions on given feasible sets and functions. Then, we present a directional Ekeland variational principle. Using this principle, we establish other characterizations of directional Tykhonov well-posedness under differentiability and convexity hypotheses. Properties and relationships between level sets and admissible functions are investigated in Section 4. Section 5 presents applications to directional well-posedness for variational inequalities. The final short section includes conclusions and further remarks.
# Preliminaries
Throughout this paper (unless otherwise specified), the space $X$ under consideration is an arbitrary Banach space with norm $\|\cdot\|$, and $\langle\cdot,\cdot\rangle$ denotes the canonical pairing between $X$ and its topological dual $X^*$. Denote the unit sphere in $X$ by $\mathbb{S}$. For a set $C\subset X$, $\mbox{\rm
cl}(C)$ and $\mathop{\mathrm{int}}(C)$ signify the topological closure and interior of $C$, respectively. The diameter of $C$ is given by $$\mbox{\rm diam}(C) = \sup\{\|x -y\|\mid x,y \in C\}.$$ As usual, $\mbox{\rm cone}(C)$ designates the cone generated by $C$. The distance function $d(\cdot, C)$ is defined by $$d(x,C):=\inf\{\|x-y\|\mid y\in C\}\text{ for }x\in X$$ with the convention $d(x, \varnothing):= +\infty$, i.e., $\inf \varnothing= +\infty$. We also use $\mathbb{N}$ and $\mathbb{R}$ to denote the sets of natural numbers and real numbers, respectively.
The class of functions below plays a crucial role in our study.
**Definition 1**. Let $\varnothing\neq M \subset \mathbb{S}$ and $\Omega : X\rightrightarrows X$ be a set-valued mapping. Then the function $$\label{minimaltime}
T_{M,\Omega}(y,x):=\inf\{t\geq 0\mid (y+tM)\cap\Omega(x)\neq \varnothing\}\text{ for } (y,x)\in X\times X$$ is called *the directional minimal time function* with respect to $M$.
We shall not work with the general functions $T_{M,\Omega}$, but rather with the following two special cases. For $\varnothing\neq O \subset X$, we set:
1\) $T_M(y, O) := T_{M,\Omega}(y,x)$ when $\Omega(x)\equiv O$;
2\) $T_M(y, x) := T_{M,\Omega}(y, x)$ when $\Omega$ is the identity map.
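To fix ideas, here is an elementary illustration (this example is not taken from the cited literature). Let $X={\mathbb R}^2$ and $M=\{(-1,0)\}$, so that $\mbox{\rm cone}(M)=\{(-t,0)\mid t\geq 0\}$ is convex. Then, for $y=(y_1,y_2)$ and $x=(x_1,x_2)$, $$T_M(y,x)=\begin{cases} y_1-x_1 & \text{ if } x_1\leq y_1 \text{ and } x_2=y_2,\\ +\infty & \text{ otherwise}.\end{cases}$$ In particular, $T_M((0,0),(-2,0))=2$, $T_M((0,0),(1,0))=+\infty$, and $\mbox{\rm dom}T_M(\cdot,x)=x-\mbox{\rm cone}(M)=\{(x_1+t,x_2)\mid t\geq 0\}$.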
The directional minimal time function was first introduced and studied in [@nam2013variational], where $M$ is a singleton and $\Omega(x)\equiv O$ for all $x\in X$. The above definition was proposed in [@durea2016minimal] and further developed, e.g., in [@chelmucs2019directional; @cibulka2020stability; @durea2017new; @long2023directional]. It should be emphasized here that, for particular choices of $M$ and $\Omega$, the minimal time function has been investigated by many researchers for different purposes, see e.g. [@colombo2010well; @colombo2004subgradient; @he2006subdifferentials; @jiang2012subdifferential; @mordukhovich2011applications; @mordukhovich2011subgradients] and the references therein.
**Remark 2**.
1. Observe that the directional minimal time function ([\[minimaltime\]](#minimaltime){reference-type="ref" reference="minimaltime"}) is an extended-real-valued function in general. For $x,y\in X,$ by Proposition 2.2 (i) of [@durea2016minimal], the domains of $T_M (\cdot,x)$ and $T_M (y,\cdot)$ are $$\mbox{\rm dom}T_M (\cdot,x):=\{z\in X\mid T_M(z,x)<+\infty\}=x-\mbox{\rm cone}(M)$$ and $$\mbox{\rm dom}T_M (y,\cdot):=\{z\in X\mid T_M(y,z)<+\infty\}=y+\mbox{\rm cone}(M),\textrm{respectively.}$$
2. As shown in [@durea2016minimal Remark 1], if $M\subset\mathbb{S}$ is closed, then so is $\mbox{\rm cone}(M)$. Thus, $\mbox{\rm dom}T_M (\cdot,x)$ and $\mbox{\rm dom}T_M (y,\cdot)$ are closed, too.
3. For $x,y\in X$, if $M\subset\mathbb{S}$ is closed, we obtain from Proposition 2.3 (i) in [@durea2016minimal] that $$T_M(z,x)=\|z-x\|\text{ for all } z\in \mbox{\rm dom}T_M(\cdot, x)$$ and $$T_M(y,w)=\|w-y\|\text{ for all } w\in \mbox{\rm dom}T_M(y,\cdot).$$
Throughout the paper, we make the following standing assumption:
*$M$ is a nonempty closed subset of $\mathbb{S}$ such that $\mbox{\rm cone}(M)$ is convex.*
The following lemmas are useful in the subsequent sections.
**Lemma 3**. *For every $\varepsilon>0$ and $\varnothing\neq O_\varepsilon\subset X$, we put $$(O_\varepsilon)^{\varepsilon} := \{x\in X\mid T_M(x,O_\varepsilon)\leq \varepsilon\}.$$ If $\mbox{\rm diam}(O_\varepsilon)\to 0$ as $\varepsilon\to 0$, then we have $\mbox{\rm diam}((O_\varepsilon)^{\varepsilon})\to 0$ as $\varepsilon\to 0$.*
*Proof.* Fix $\varepsilon> 0$ and let $x_\varepsilon,y_\varepsilon\in (O_\varepsilon)^{\varepsilon}$ be arbitrary. Since $T_M(x_\varepsilon,O_\varepsilon)\leq \varepsilon$ and $T_M(y_\varepsilon,O_\varepsilon)\leq \varepsilon$, there exist $v_\varepsilon,w_\varepsilon\in O_\varepsilon$ such that $T_M(x_\varepsilon,v_\varepsilon)<2\varepsilon$ and $T_M(y_\varepsilon,w_\varepsilon)<2\varepsilon$. By Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii), we have $$\begin{array}{ll}
\|x_\varepsilon-y_\varepsilon\|&\leq \|x_\varepsilon-v_\varepsilon\| + \|v_\varepsilon- w_\varepsilon\| + \|w_\varepsilon- y_\varepsilon\|\\
&= T_M(x_\varepsilon,v_\varepsilon) + \|v_\varepsilon-w_\varepsilon\| + T_M(y_\varepsilon,w_\varepsilon)\\
&< 2\varepsilon+ \mbox{\rm diam}(O_\varepsilon) + 2\varepsilon,
\end{array}$$ which yields $\mbox{\rm diam}((O_\varepsilon)^{\varepsilon})\to 0$ as $\varepsilon\to 0$. ◻
**Lemma 4**. *Let two sequences $(x_k), (y_k)\subset X$ be such that $(x_k)\to x\in X$ and $(y_k)\to y\in X$ as $k\to \infty$. If $T_M(y_k,x_k)< \infty$ for all $k\in \mathbb{N}$, then $$T_M(y_k,x_k) \to T_M(y,x)\text{ as } k\to\infty.$$*
*Proof.* Since $T_M(y_k,x_k)< \infty$, there exist $t_k\geq0$ and $m_k\in M$ such that $x_k = y_k+ t_km_k$ for all $k\in\mathbb{N}$. Then we have $t_km_k = x_k-y_k \to x-y$ as $k\to\infty$. By Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (ii), $x-y\in \mbox{\rm cone}(M),$ which implies that $y\in \mbox{\rm dom}T_M(\cdot,x)$. Since $\|x_k-y_k\|\to\|x-y\|$ as $k\to\infty$, it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (iii) that $$T_M(y_k,x_k) \to T_M(y,x)\text{ as }k\to\infty.$$ ◻
We need the following lemma, which is a special case of Lemma 2.2 in [@long2023directional].
**Lemma 5**. *For any $x\in X,$ if $y\in \mbox{\rm dom}T_M(\cdot,x)$, then $$\mbox{\rm dom}T_M(\cdot,y)\subset \mbox{\rm dom}T_M(\cdot,x).$$*
*Proof.* The conclusion follows directly from Proposition 2.2 in [@long2023directional] and Remark 2.2(ii). ◻
The next result is known (see, e.g., [@durea2017new; @long2022invariant; @long2023directional]).
**Lemma 6**. *For any $x,y,z\in X$, we have $$T_M(x,z)\leq T_M(x,y)+T_M(y,z).$$*
*Proof.* The proof can be found in [@durea2017new; @long2022invariant; @long2023directional]. However, we prefer to give the details, for the sake of readability.
It is easy to see that if $T_M(x, y)$ or $T_M(y, z)$ equals $+\infty$, then there is nothing to prove. Otherwise, i.e., when both are finite, there exist $m_1, m_2 \in M$ such that $$y=x+T_M(x, y) m_1 \text { and } z=y+T_M(y, z) m_2.$$ Then it follows from the convexity of $\mbox{\rm cone}(M)$ that $$z=x+T_M(x, y) m_1+T_M(y, z) m_2 \in x+\mbox{\rm cone}(M),$$ so that, by Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii), $T_M(x,z)=\|z-x\|\leq T_M(x,y)\|m_1\|+T_M(y,z)\|m_2\|=T_M(x,y)+T_M(y,z)$, which completes the proof. ◻
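Let us note, by way of illustration (the following elementary example is ours and is not taken from the works cited above), that the convexity of $\mbox{\rm cone}(M)$ cannot be dropped in Lemma [Lemma 6](#long2023directional){reference-type="ref" reference="long2023directional"}. In $X={\mathbb R}^2$, take $M=\{e_1,e_2\}$ with $e_1=(1,0)$ and $e_2=(0,1)$, so that $\mbox{\rm cone}(M)$ is the union of the two nonnegative coordinate half-axes and is not convex. Then $$T_M(0,e_1)=1,\ \ T_M(e_1,e_1+e_2)=1,\ \ \text{ whereas }\ \ T_M(0,e_1+e_2)=+\infty,$$ and the triangle inequality fails.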
Let $A$ be a nonempty closed subset of $X$ and let $f\colon A\to (-\infty,\infty]$ be a proper function. We consider the following optimization problem: $$\textrm{(MP)}\quad\text{minimize } f(x) \text{ subject to } x\in A.$$
For simplicity, we set $$D_x:=A\cap \mbox{\rm dom}T_M(\cdot,x)\text{ for }x\in A.$$ For the problem (MP), we always assume that $A$ is closed. Now we introduce a notion of a global directional minimum point for $f$ as follows:
**Definition 7**. For the problem (MP), one says that $\bar x\in A$ is a global directional minimum point for $f$ with respect to (the set of directions) $M$ on $D_{\bar x}$ iff $$f(\bar x)\leq f(x)\text{ for all } x\in D_{\bar x}.$$
**Remark 8**. Clearly, a global minimum point is a global directional minimum point for $f$ with respect to any subset of $\mathbb{S}$. The examples below show that the converse is not true.
As a straightforward extension of the usual notion of Tykhonov well-posedness, we may naturally propose the following.
**Definition 9**. The problem (MP) is said to be directional Tykhonov well-posed with respect to $M$ iff
1. there exists $\bar x\in A$ such that it is the unique global directional minimum point for $f$ with respect to $M$ on $D_{\bar x}$, and
2. for any sequence $(x_k)\subset D_{\bar x}$ satisfying $f(x_k)\to f(\bar x)$, one has $T_M(x_k,\bar x)\to 0$ as $k\to\infty$.
**Remark 10**. To the best of our knowledge, the concept of directional Tykhonov well-posedness with respect to $M$ has not been addressed in the literature. If $M=\mathbb{S}$ (the unit sphere) then the notion of directional Tykhonov well-posedness with respect to $M$ becomes the usual Tykhonov well-posedness, see [@tikhonov1966stability]. Of course, the usual Tykhonov well-posedness implies the directional Tykhonov well-posedness with respect to all $M\subset\mathbb{S}$. However, the converse is not true in general, as shown by the following examples.
To make the above definitions clear, let us consider the following example.
**Example 11**. Let $X=A=\ell^2$, the space of square summable sequences of real numbers with its usual norm, and let $f: A\to (-\infty,\infty]$ be given by $$f(x):=\left\{\begin{array}{ll}
\sum_{n=1}^{\infty} x^{(n)} & \text { if } x^{(n)}\neq 0 \text{ for at most finitely many }n, \\
\infty & \text { otherwise, }
\end{array} \right.$$ where $x^{(n)}$ denotes the $n$-th coordinate of $x$. We can see that problem (MP) has no usual solution.
Now let $\bar x=(0,0,...)$ and $M=\{-e_1=(-1,0,...)\}$. Then one has $D_{\bar x}=\bar x-\mbox{\rm cone}(M)=\{(\alpha,0,...)\mid \alpha \geq 0\}.$ Clearly, for all $x\in D_{\bar x}\setminus\{\bar x\}$ we have $$f(\bar x)=0<\sum_{n=1}^{\infty}x^{(n)}=f(x),$$ which means $\bar x$ is the unique global directional minimum point for $f$ with respect to $M$ on $D_{\bar x}$. Let $(x_k)$ be a sequence in $D_{\bar x}$, where $x_k=(\alpha_k,0,...)$ for some $\alpha_k\geq 0$. Since $x_k\in \mbox{\rm dom}T_M(\cdot,\bar x)$, it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) that $$\label{key22}
T_M(x_k,\bar x)=\|\bar x-x_k\|=\alpha_k.$$ Moreover, if $f(x_k)\to f(\bar x)$, then $\alpha_k\to 0$. This together with ([\[key22\]](#key22){reference-type="ref" reference="key22"}) implies that $T_M(x_k,\bar x)\to 0$. Therefore, problem (MP) is directional Tykhonov well-posed with respect to $M$.
To end this section, we recall that $f$ is *G$\hat{a}$teaux differentiable* at $\bar x\in \mbox{\rm dom}(f):=\{x\in A\mid f(x)<\infty\}$ if there is $x^*\in X^*$ such that for any $d\in X$, we have $$\lim_{\lambda\to 0}\frac{f(\bar x+\lambda d)-f(\bar x)}{\lambda} = \langle x^*,d \rangle.$$ Then $x^*$ is called the G$\hat{\rm a}$teaux derivative of $f$ at $\bar x$ and is denoted by $f'(\bar x)$.
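For instance (a standard example recorded only to illustrate the definition), if $X$ is a Hilbert space, $A=X$ and $f(x)=\|x\|^2$, then for all $\bar x,d\in X$ $$\frac{f(\bar x+\lambda d)-f(\bar x)}{\lambda}=2\langle \bar x,d\rangle+\lambda\|d\|^2\longrightarrow 2\langle \bar x,d\rangle\quad\text{as }\lambda\to 0,$$ so $f$ is G$\hat{\rm a}$teaux differentiable at every $\bar x$ with $\langle f'(\bar x),d \rangle=2\langle \bar x,d\rangle$ for all $d\in X$.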
# Directional Tykhonov well-posedness
In this section, we provide characterizations of directional Tykhonov well-posedness for optimization problems, in both the convex and the nonconvex case. We also present several examples and compare them with existing results in the literature to highlight the role of our study.
## Well-posedness without convexity
We begin with a characterization of directional Tykhonov well-posedness for problem (MP) in terms of level sets of $f$. It is often called the Furi--Vignoli criterion.
**Theorem 12**. *For problem (MP), suppose that $f$ is lower semicontinuous. Then ${\rm (MP)}$ is directional Tykhonov well-posed with respect to $M$ if and only if there exists $\bar x\in A$ such that $f$ is bounded from below on $D_{\bar x}:=A\cap \mbox{\rm dom}T_M(\cdot,\bar x)$ and $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, where, for $\varepsilon>0$, $$\mathcal{L}(\bar x,\varepsilon):=\left\{y\in D_{\bar x}\mid f(y)\leq \inf_{x\in D_{\bar x}}f(x)+\varepsilon\right\}.$$*
*Proof.* If ${\rm (MP)}$ is directional Tykhonov well-posed with respect to $M$, then there exists $\bar x\in A$ such that $f(\bar x) < f(x)$ for all $x\in D_{\bar x}\setminus\{\bar x\}$, and for any sequence $(x_k)\subset D_{\bar x}$ satisfying $f(x_k)\to f(\bar x)$, one has $T_M(x_k,\bar x)\to 0$. This implies that $f$ is bounded from below on $D_{\bar x}.$ Next, we will prove that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon \to 0$ by contradiction. Suppose that there exists a sequence $(\varepsilon_k)$ such that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k))\not\to 0$ as $\varepsilon_k \to 0$. Then we can select a subsequence of $(\varepsilon_k)$ (no relabeling) and $a>0$ such that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k))>2a$. Thus for all $k\in \Bbb N$, there exists $x_k\in \mathcal{L}(\bar x,\varepsilon_k)$ such that $\|x_k-\bar x\|\geq a$ and $$f(\bar x) < f(x_k)\leq \inf_{x\in D_{\bar x}} f(x) + \varepsilon_k = f(\bar x) + \varepsilon_k,$$ which yields $f(x_k)\to f(\bar x)$ as $k\to\infty$. However, since $x_k\in \mbox{\rm dom}T_M(\cdot,\bar x)$ and $\|x_k-\bar x\|\geq a$, it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) that $$T_M(x_k,\bar x)= \|x_k-\bar x\|\geq a\ \text{ for all } k\in\Bbb N,$$ a contradiction.
Conversely, suppose that there exists $\bar x\in A$ such that $f$ is bounded from below on $D_{\bar x}$ and $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon \to 0$. Then we get that $\inf_{x\in D_{\bar x}}f(x)\in\Bbb R$, and therefore for every $k\in\Bbb N$, there exists $x_k\in D_{\bar x}$ such that $$f(x_k)<\inf_{x\in D_{\bar x}}f(x)+\frac{1}{2^{k}}.$$ Note that if $x\in \mathcal{L}(\bar x,1/2^{k+1})$ then $$f(x)\leq \inf_{x\in D_{\bar x}}f(x)+\frac{1}{2^{k+1}}<\inf_{x\in D_{\bar x}}f(x)+\frac{1}{2^{k}},$$ which implies that $\mathcal{L}(\bar x,1/2^{k+1})\subset \mathcal{L}(\bar x,1/2^{k})$ for all $k\in \Bbb N$. Moreover, since $f$ is lower semicontinuous, $\mathcal{L}(\bar x,1/2^{k})$ is closed for all $k\in\Bbb N$. On the other hand, by the hypothesis, we have $\mbox{\rm diam}\left(\mathcal{L}(\bar x,1/2^{k})\right)\to 0$ as $k\to \infty$. Thus, Cantor's intersection theorem tells us that $$\label{unique}
\bigcap_{k\in \Bbb N}\mathcal{L}(\bar x,1/2^{k})=\{\bar y\} \textrm{ for some } \bar y\in D_{\bar x}.$$ By the definition of the level set $\mathcal{L}(\bar x,1/2^{k})$, one has $$f(\bar y)\leq \inf_{x\in D_{\bar x}}f(x)+\frac{1}{2^k}\text{ for all }k\in \Bbb N,$$ and hence $f(\bar y)\leq \inf_{x\in D_{\bar x}}f(x)$. Furthermore, since $\bar y \in D_{\bar x}$, we get $$\label{key1}
f(\bar y) = \inf_{x\in D_{\bar x}}f(x).$$ Now, we show that (MP) is directional Tykhonov well-posed with respect to $M$. Indeed, since $\bar y\in D_{\bar x}\subset \mbox{\rm dom}T_M(\cdot,\bar x)$, it follows from Lemma [Lemma 5](#lemma2.2){reference-type="ref" reference="lemma2.2"} that $$D_{\bar y} = A\cap \mbox{\rm dom}T_M(\cdot,\bar y)\subset A\cap \mbox{\rm dom}T_M(\cdot,\bar x) = D_{\bar x}.$$ This together with ([\[key1\]](#key1){reference-type="ref" reference="key1"}) implies that $$\inf_{y\in D_{\bar y}}f(y)\leq f(\bar y) = \inf_{x\in D_{\bar x}}f(x)\leq\inf_{y\in D_{\bar y}}f(y),$$ and so $f(\bar y)=\inf_{y\in D_{\bar y}}f(y)$. To verify the uniqueness of $\bar y$, we argue by contradiction. Suppose that there exists $\bar z\in D_{\bar y}\setminus\{\bar y\}$ such that $f(\bar z)=f(\bar y)$. Then it follows from ([\[key1\]](#key1){reference-type="ref" reference="key1"}) and the definition of $\mathcal{L}(\bar x,1/2^k)$ that $\bar z\in \mathcal{L}(\bar x,1/2^k)$ for all $k\in\mathbb{N}$. Combining this together with ([\[unique\]](#unique){reference-type="ref" reference="unique"}) gives a contradiction. It remains to prove that for any sequence $(y_k)\subset D_{\bar y}=A\cap \mbox{\rm dom}T_M(\cdot,\bar y)\subset D_{\bar x}$, if $f(y_k)\to f(\bar y)$ then $T_M(y_k,\bar y)\to 0$ as $k\to \infty$. Indeed, because $f(\bar y)=\inf_{x\in D_{\bar x}}f(x)$ and $f(y_k) = f(\bar y) + (f(y_k)-f(\bar y))$, we get $y_k\in \mathcal{L}(\bar x, f(y_k)-f(\bar y))$ for all $k\in\Bbb N$. Moreover, since $\bar y\in \mathcal{L}(\bar x, f(y_k)-f(\bar y))$, one has $$\label{key17}
\|y_k-\bar y\| \leq \mbox{\rm diam}(\mathcal{L}(\bar x, f(y_k)-f(\bar y)))\text{ for all }k\in\Bbb N.$$ Observe that by the hypothesis we have $\mbox{\rm diam}(\mathcal{L}(\bar x, f(y_k)-f(\bar y)))\to 0$ as $k\to \infty$ due to $f(y_k)-f(\bar y)\to 0$ as $k\to \infty$. This together with ([\[key17\]](#key17){reference-type="ref" reference="key17"}) implies $\|y_k-\bar y\|\to 0$ as $k\to \infty$. Consequently, $T_M(y_k,\bar y)\to 0$ as $k\to \infty$ by Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii), which therefore justifies the reverse implication and completes the proof of the theorem. ◻
**Remark 13**. If $A=X$ and $M=\mathbb{S}$, then Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"} becomes Theorem 2.2 in [@furi1970well] (for the case of Banach spaces).
Next, we illustrate Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"} by the following classical example.
**Example 14**. Let $X=A=\Bbb R$ and let $f\colon A\to \Bbb R$ be defined by $f(x)=x^2 e^{-x}$. Then (MP) is not Tykhonov well-posed, since the sequence $(x_k)=(k)$ is minimizing but does not converge to the unique minimum $\bar {x}=0$.
To try with Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}, we take $M=\{1\}$ and $\bar x = \frac{1}{2}$. It is easy to see that $f$ is bounded on $D_{\bar x}=\bar x - \mbox{\rm cone}(M) = (-\infty,1/2]$. Next, we prove that for any $\varepsilon>0$, $$\label{eq0}
\mathcal{L}\left(1/2,\varepsilon\right)=\left\{x\in (-\infty,1/2]\mid x^2e^{-x}\leq \varepsilon\right\}\subset (-\sqrt[3]{\varepsilon},\sqrt[3]{\varepsilon}).$$ Indeed, for each $x\in \mathcal{L}\left(1/2,\varepsilon\right)$, a direct verification shows that $-x^3<x^2e^{-x}\leq \varepsilon$, and thus $-\sqrt[3]{\varepsilon}<x.$ Moreover, if $x\in \mathcal{L}(1/2,\varepsilon)$, then we also have $x^3<x^2e^{-x}\leq\varepsilon$, which implies $x<\sqrt[3]{\varepsilon},$ and therefore ([\[eq0\]](#eq0){reference-type="ref" reference="eq0"}) holds. It follows from ([\[eq0\]](#eq0){reference-type="ref" reference="eq0"}) that $\mbox{\rm diam}(\mathcal{L}(1/2,\varepsilon))\to 0$ as $\varepsilon\to 0$. According to Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}, the problem (MP) is directional Tykhonov well-posed with respect to $M$. It is interesting to note that $\bar x=\frac{1}{2}$ is not a minimum point for $f$ on $(-\infty,\frac{1}{2}]$.
The following class of functions is well known (see, e.g., [@lucchetti1981characterization]).
**Definition 15**. A function $c\colon [0,\infty)\rightarrow [0,\infty]$ is called admissible if (i) $c(0)=0$ and (ii) for any sequence $(t_k)\subset [0,\infty)$ satisfying $c(t_k)\rightarrow 0$, one has $t_k\rightarrow 0$.
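For instance (elementary examples given only for illustration), the functions $c(t)=t$ and $c(t)=t^2$ are admissible, whereas $c(t)=|\sin t|$ is not: taking $t_k=k\pi$ one has $c(t_k)=0\to 0$ while $t_k\to\infty$.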
Without the lower semicontinuity of $f$, the directional Tykhonov well-posedness for problem (MP) can be characterized by the above admissible function as follows:
**Theorem 16**. *Problem (MP) is directional Tykhonov well-posed with respect to $M$ if and only if there exist $\bar x\in A$ and an admissible function $c$ such that for every $x\in D_{\bar x}$, $$\label{key23}
f(\bar x)+c(T_M(x,\bar x))\leq f(x).$$*
*Proof.* If (MP) is directional Tykhonov well-posed with respect to $M$, then there exists $\bar x\in A$ such that $f(\bar x) < f(x)$ for all $x\in D_{\bar x}\setminus \{\bar x\}$, and $f(x_k)\to f(\bar x)$ implies $x_k\to \bar x$. Let $c\colon [0,\infty)\to [0,\infty]$ be defined by $$c(t)=
\begin{cases}
\inf\{|f(x)-f(\bar x)|\mid x\in D_{\bar x} \text{ and } T_M(x,\bar x)=t\} & \text{ if }\{x\in D_{\bar x}\mid T_M(x,\bar x)=t\}\neq \varnothing,\\
1 &\text{ otherwise}.
\end{cases}$$ First, we will show that $c$ is an admissible function. Indeed, we have $$0\leq c(0) = \inf\{|f(x)-f(\bar x)|\mid x\in D_{\bar x} \text{ and } T_M(x,\bar x)=0\} \leq |f(\bar x)-f(\bar x)|=0,$$ which yields $c(0)=0$. Let $(t_k)\subset [0,\infty)$ be a sequence satisfying $c(t_k)\to 0$. Then one has $c(t_k)<1$ for all $k\in\mathbb{N}$ sufficiently large. By the definition of $c$, for each $k$ sufficiently large, there exists $x_k\in D_{\bar x}$ such that $T_M(x_k,\bar x)=t_k$ and $|f(x_k)-f(\bar x)|<c(t_k)+\frac{1}{k}$. Since $c(t_k)+\frac{1}{k}\to 0$ as $k\to \infty$, we get $f(x_k)\to f(\bar x)$. Using the directional Tykhonov well-posedness of (MP) gives us $t_k=T_M(x_k,\bar x)\to 0$. So $c$ is an admissible function. Moreover, since $\bar x$ is a global directional minimum for $f$ on $D_{\bar x}$, for every $x\in D_{\bar x}$ we have $$c(T_M(x,\bar x))\leq |f(x)-f(\bar x)|=f(x)-f(\bar x),$$ which implies that the inequality ([\[key23\]](#key23){reference-type="ref" reference="key23"}) holds.
Conversely, if there exist $\bar x\in A$ and an admissible function $c$ satisfying $f(\bar x)+c(T_M(x,\bar x))\leq f(x)$, then $$\label{key2}
f(\bar x)\leq f(\bar x)+c(T_M(x,\bar x))\leq f(x) \textrm{ for all } x\in D_{\bar x}.$$ This means that $\bar x$ is a global directional minimum point for $f$ on $D_{\bar x}$. To see the uniqueness of $\bar x$, we shall prove by contradiction: assume that there exists $\bar y\in D_{\bar x}\setminus\{\bar x\}$ such that $f(\bar x)=f(\bar y)$. Then it follows from ([\[key2\]](#key2){reference-type="ref" reference="key2"}) that $$f(\bar x)+c(T_M(\bar y,\bar x))\leq f(\bar y),$$ and so $c(T_M(\bar y, \bar x))=0$. Since $c$ is an admissible function, one has $T_M(\bar y,\bar x)=0$, which yields $\bar y = \bar x$. This contradiction shows that $\bar x$ is the unique global directional minimum point for $f$ with respect to $M$ on $D_{\bar x}$. On the other hand, if $(x_k)\subset D_{\bar x}$ is a sequence satisfying $f(x_k)\to f(\bar x)$, then $c(T_M(x_k,\bar x))\leq|f(x_k)-f(\bar x)|\to 0$ as $k\to\infty$. Using again the admissibility of $c$ gives us $T_M(x_k,\bar x)\to 0$. Thus, problem (MP) is directional Tykhonov well-posed with respect to $M$. The proof is complete. ◻
Note that if $A=X$ and $M=\mathbb{S}$, then Theorem [Theorem 16](#theo2){reference-type="ref" reference="theo2"} implies Theorem 3.3 in [@lucchetti1981characterization] (for the case of Banach spaces). Let us give an example to explain the advantage of Theorem [Theorem 16](#theo2){reference-type="ref" reference="theo2"}.
**Example 17**. Let $A=X=C_{[0,1]}$ (the usual space of the real continuous functions on $[0, 1]$ with the maximum norm), and let $P$ be the set of all polynomials on $[0,1]$. Let $f\colon A\to (-\infty,\infty]$ be defined by $$f(x) = \begin{cases}
\max_{s\in [0,1]} x(s) &\text{ if } x\in P, \\
\infty & \text{ if } x\not\in P.
\end{cases}$$ One can verify that $f$ is not bounded from below and is not lower semicontinuous.
Now, take $\bar x(s) \equiv 0$ for all $s\in[0,1]$ and $M = \{x(s)\equiv -1\}$. Then one has $$D_{\bar x} = \mbox{\rm dom}T_M(\cdot, \bar x) = \bar x - \mbox{\rm cone}(M) = \{x\mid x(s)\equiv \alpha \textrm{ for }\alpha\geq 0\}.$$ For convenience, we denote the constant polynomial $x(s)\equiv \alpha$ by $x_\alpha$ for all $\alpha\geq 0$. It is not hard to see that $f(x_\alpha) = \alpha$. For any $x_\alpha\in D_{\bar x}$, Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) implies that $$T_M(x_\alpha,\bar x) = \|x_\alpha - \bar x\| = |\alpha-0| = \alpha.$$ Now let $c:[0,\infty)\to [0,\infty]$ be the identity map, that is $c(t) = t$ for all $t\in [0,\infty)$. Clearly, $c$ is an admissible function. Moreover, for any $x_\alpha\in D_{\bar x}$, we have $$f(\bar x)+c(T_M(x_\alpha,\bar x)) = 0 + \alpha = f(x_\alpha),$$ which means that the inequality ([\[key23\]](#key23){reference-type="ref" reference="key23"}) holds. So, Theorem [Theorem 16](#theo2){reference-type="ref" reference="theo2"} ensures that the problem (MP) is directional Tykhonov well-posed with respect to $M$.
Next, we provide a directional Ekeland variational principle in which usual distance functions are replaced by minimal time functions mentioned in the previous section. This principle will be an important tool in what follows.
**Theorem 18**. *Let $A$ and $f$ be as in the problem (MP). Suppose that $f$ is a lower semicontinuous function on $A$ and suppose further that there exists $\bar x \in A$ such that $f$ is bounded from below on $D_{\bar x}$. Then for every $\varepsilon>0$ and every $x_0\in \mathcal{L}(\bar x,\varepsilon)=\{y\in D_{\bar x}\mid f(y)\leq \inf_{x\in D_{\bar x}} f(x)+\varepsilon\}$, there exists $x_\varepsilon\in D_{\bar x}$ such that the following inequalities hold:*
1. *$T_M(x_0,x_\varepsilon) \leq \sqrt{\varepsilon}$,*
2. *$f(x_\varepsilon)+\sqrt{\varepsilon} T_M(x_0,x_\varepsilon)\leq f(x_0)$, and*
3. *$f(x_\varepsilon) \leq f(y)+\sqrt{\varepsilon} T_M(x_\varepsilon,y)$ for all $y\in D_{\bar x}$.*
*Proof.* For any $x\in D_{\bar x}$, set $$F(x) = \{y\in D_{\bar x}\mid f(y)+\sqrt{\varepsilon} T_M(x,y)\leq f(x)\}.$$ By the lower semicontinuity of $f$, the closedness of $D_{\bar x}$ and Lemma 2.1 (ii) in [@long2022invariant], $F(x)$ is closed for all $x\in D_{\bar x}$. Next, we show that for every $x\in D_{\bar x}$, if $y\in F(x)$ then $F(y)\subset F(x)$. Indeed, take any $y\in F(x)$ and $z\in F(y)$. Then we have by the definition of $F$ that $$f(z)+\sqrt{\varepsilon} T_M(x,z)\leq f(z) + \sqrt{\varepsilon} T_M(x,y) + \sqrt{\varepsilon} T_M(y,z)\leq f(y)+\sqrt{\varepsilon} T_M(x,y) \leq f(x),$$ where the first inequality is satisfied by Lemma [Lemma 6](#long2023directional){reference-type="ref" reference="long2023directional"}, which implies $F(y)\subset F(x)$.
Now, we define inductively a sequence $(x_k)$ starting from $x_1 := x_0$. Suppose that we have defined $x_k$. Consider two possible cases: (A) $\inf_{y\in F(x_k)}f(y) = f(x_k)$ and (B) $\inf_{y\in F(x_k)}f(y) < f(x_k)$. In case (A), set $x_{k+1}:=x_k$. In case (B), we choose $x_{k+1}\in F(x_k)$ such that $$\label{4}
f(x_{k+1})<\frac{1}{2}\left(f(x_k)+\inf_{x\in F(x_k)}f(x)\right) < f(x_{k}).$$ We show that $(x_k)$ is a Cauchy sequence. Indeed, since $x_{k+1}\in F(x_k)$, we get $$\sqrt{\varepsilon} T_M(x_k,x_{k+1}) \leq f(x_k)-f(x_{k+1}).$$ Adding the above inequality up from $k$ to $l - 1 > k$, we have by Lemma [Lemma 6](#long2023directional){reference-type="ref" reference="long2023directional"} that $$\label{key3}
\sqrt{\varepsilon} T_M(x_k,x_l) \leq f(x_k)-f(x_l).$$ Note that the sequence $(f(x_k))$ is decreasing and bounded from below on $D_{\bar x}$, and therefore convergent. Then it follows from ([\[key3\]](#key3){reference-type="ref" reference="key3"}) and Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) that $(x_k)$ is a Cauchy sequence. Since $D_{\bar x}$ is a closed subset of the Banach space $X$, we can find $x_\varepsilon\in D_{\bar x}$ such that $\lim_{k\to\infty}x_k=x_\varepsilon$. We show that $x_\varepsilon$ satisfies the conclusions of the theorem. Taking $k=0$ in ([\[key3\]](#key3){reference-type="ref" reference="key3"}) gives us $$\label{key4}
f(x_l)+ \sqrt{\varepsilon} T_M(x_0,x_l) \leq f(x_0).$$ Letting $l\to \infty$ and taking Lemma [Lemma 4](#lemma1){reference-type="ref" reference="lemma1"} into account, we obtain (ii). Since $$f(x_0)-f(x_\varepsilon)\leq f(x_0)-\inf_{y\in D_{\bar x}}f(y)<\varepsilon,$$ (i) follows from (ii). In ([\[key3\]](#key3){reference-type="ref" reference="key3"}), fixing $k$ and letting $l\to\infty$ implies that $$x_\varepsilon\in \mbox{\rm
cl}(F(x_k))=F(x_k).$$ Thus $F(x_\varepsilon) \subset \bigcap_{k\in \Bbb N}F(x_k).$ We show that $F(x_\varepsilon)=\{x_\varepsilon\}.$ Arguing by contradiction, suppose that there is $x\in F(x_\varepsilon) \subset \bigcap_{k\in \Bbb N}F(x_k)$ such that $x_\varepsilon\neq x$. Then for every $k\in\Bbb N$, one has $$\label{key5}
\sqrt{\varepsilon}T_M(x_{k+1},x)\leq f(x_{k+1})-f(x)\leq f(x_{k+1}) -\inf_{x\in D_{\bar x}}f(x).$$ On the other hand, it follows from ([\[4\]](#4){reference-type="ref" reference="4"}) that $$f(x_{k+1})-\inf_{x\in D_{\bar x}}f(x)\leq f(x_k)-f(x_{k+1}),$$ which yields $$\lim_{k\to\infty}(f(x_{k+1})-\inf_{x\in D_{\bar x}}f(x))=0.$$ Combining this with ([\[key5\]](#key5){reference-type="ref" reference="key5"}) and Lemma [Lemma 4](#lemma1){reference-type="ref" reference="lemma1"}, we get $\sqrt{\varepsilon} T_M(x_{\varepsilon},x)=0$, and hence $x_\varepsilon=x$. This contradiction shows $F(x_\varepsilon)=\{x_\varepsilon\},$ and therefore justifies (iii). ◻
The following example illustrates that Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"} can be viewed as a directional Ekeland variational principle.
**Example 19**. Let $A=X=C_{[0,1]}$ (the space of the real continuous functions on $[0, 1]$ with the maximum norm), and let $f\colon A\to (-\infty,\infty]$ be defined by $$f(x) =
\min_{s\in [0,1]} x(s).$$ We can see that $f$ is continuous but is not bounded from below on $A$. Thus, several existing results in the literature are not applicable here.
Now, take $\bar x(s) =s^2$ for all $s\in[0,1]$ and $M = \{x\equiv -1\}$. Then one has $$D_{\bar x} = \mbox{\rm dom}T_M(\cdot, \bar x) = \bar x - \mbox{\rm cone}(M) = \{\bar x+\alpha\mid \alpha\geq 0\}.$$ It is easy to see that $f$ is bounded from below on $D_{\bar x}$. Fix $\varepsilon>0$ and $x_0\in \mathcal{L}(\bar x,\varepsilon)=\{y\in D_{\bar x}\mid f(y)\leq \inf_{x\in D_{\bar x}} f(x)+\varepsilon\}$. By Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"}, there exists $x_\varepsilon\in D_{\bar x}$ such that the inequalities in this theorem hold.
If, in addition, $A$ is convex and $f$ is G$\hat{\rm a}$teaux differentiable, Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"} gives the following corollary.
**Corollary 20**. *In the same setting of Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"}, assume that $A$ is a convex set and $f$ is G$\hat{a}$teaux differentiable on $A$. Then for every $\varepsilon>0$ and every $x_0\in \mathcal{L}(\bar x,\varepsilon)$, there exists $x_\varepsilon\in D_{\bar x}$ such that the following inequalities hold:*
1. *$T_M(x_0,x_\varepsilon) \leq \sqrt{\varepsilon}$,*
2. *$f(x_\varepsilon)+\sqrt{\varepsilon} T_M(x_0,x_\varepsilon)\leq f(x_0)$, and*
3. *$\langle f'(x_\varepsilon),x_\varepsilon-y\rangle \leq \sqrt{\varepsilon} T_M(x_\varepsilon,y)$ for all $y\in D_{\bar x}$.*
*Proof.* It is sufficient to show that the assertion (iii) in Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"} implies the corresponding (iii) in Corollary [Corollary 20](#ekel3){reference-type="ref" reference="ekel3"}. Let $y\in D_{\bar x}$; we may assume that $T_M(x_\varepsilon,y)<\infty$, since otherwise the desired inequality holds trivially. Since $D_{\bar x}$ is convex, one has $$x_\varepsilon+\lambda(y-x_\varepsilon) = \lambda y + (1-\lambda)x_\varepsilon\in D_{\bar x}\text{ for all }\lambda \in (0,1).$$ Then it follows from (iii) of Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"} and Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) that $$\begin{array}{ll}
f(x_\varepsilon)-f(x_\varepsilon+\lambda(y-x_\varepsilon))&\leq \sqrt{\varepsilon}T_M(x_\varepsilon,x_\varepsilon+\lambda(y-x_\varepsilon))\\
&=\lambda\sqrt{\varepsilon}\|x_\varepsilon-y\|\\
&\leq \lambda\sqrt{\varepsilon}T_M(x_\varepsilon,y),
\end{array}$$ which implies $$\frac{f(x_\varepsilon)-f(x_\varepsilon+\lambda(y-x_\varepsilon))}{\lambda}\leq \sqrt{\varepsilon}T_M(x_\varepsilon,y).$$ Letting $\lambda \to 0^+$, we obtain the conclusion, which completes the proof. ◻
## Well-posedness under convexity and differentiability assumptions
Under additional assumptions on the data of the problem (MP), there is another characterization of the directional Tykhonov well-posedness formulated as follows:
**Theorem 21**. *For problem (MP), assume that $A\subset X$ is a convex set and $f$ is lower semicontinuous and convex. For every $\varepsilon>0$, we set $$\mathcal{G}(\varepsilon) = \{x\in A\mid f(x)\leq f(y) + \varepsilon T_M(y,x) \text{ for all }y\in D_x\}.$$ Then (MP) is directional Tykhonov well-posed with respect to $M$ if and only if there exists $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{G}(\varepsilon)$ such that $f$ is bounded from below on $D_{\bar x}$, and $\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, where $$\mathcal{G}'(\bar x,\varepsilon) = \{x\in D_{\bar x}\mid f(x)\leq f(y)+\varepsilon T_M(x,y) \text{ for all } y\in D_{\bar x}\}.$$*
*Proof.* Assume that (MP) is directional Tykhonov well-posed with respect to $M$. Then there exists $\bar x$ such that $f(\bar x)\leq f(y) \leq f(y)+\varepsilon T_M(y,\bar x)$ for all $y\in D_{\bar x}$ and all $\varepsilon>0$, which implies $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{G}(\varepsilon)$ and $f$ is bounded from below on $D_{\bar x}$. Clearly, if $D_{\bar x}$ is a singleton, then $\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon)) = 0$ for all $\varepsilon>0$, and we are done. Otherwise, there exists a real number $a>0$ such that $$C_a:=\{x\in D_{\bar x}\mid T_M(x,\bar x)\geq a\}\neq \varnothing.$$ By the directional Tykhonov well-posedness of (MP), we have $$\label{key6}
m:= \inf\{f(x)-f(\bar x)\mid x\in C_a\}>0.$$ Next, we show that $$\label{6}
\frac{m}{a} T_M(x,\bar x)\leq f(x)-f(\bar x)\text{ for all } x\in C_a.$$ Indeed, fix $x\in C_{a}$. Then $T_M(x,\bar x)\geq a$, which reads as $$\label{key7}
\lambda:= \frac{a}{T_M(x,\bar x)}\leq1.$$ Since $f$ is convex, $$f\left(\lambda x + \left(1-\lambda\right)\bar x\right) \leq \lambda f(x) + \left( 1 - \lambda \right)f(\bar x),$$ or equivalently, $$\label{8}
f\left(\lambda x + \left(1-\lambda \right)\bar x\right) - f(\bar x) \leq \lambda (f(x)-f(\bar x)).$$ Notice that since $A$ and $\mbox{\rm dom}T_M(\cdot,\bar x)$ are convex, so is $D_{\bar x}$. Thus, one has $\lambda x + \left(1-\lambda \right)\bar x \in D_{\bar x}.$ Then it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) and ([\[key7\]](#key7){reference-type="ref" reference="key7"}) that $$T_M(\lambda x + \left(1-\lambda \right)\bar x,\bar x)=
\left\|\lambda x + \left(1-\lambda \right)\bar x - \bar x\right\| = \left\|\lambda (x-\bar x)\right\| = \lambda T_M(x,\bar x)= a.$$ Combining this with ([\[key6\]](#key6){reference-type="ref" reference="key6"}) and ([\[8\]](#8){reference-type="ref" reference="8"}), we obtain $$m\leq \lambda (f(x)-f(\bar x)),$$ which implies, by ([\[key7\]](#key7){reference-type="ref" reference="key7"}), that the inequality ([\[6\]](#6){reference-type="ref" reference="6"}) holds. To proceed further, we will prove that $$\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon))\to 0\text{ as }\varepsilon\to 0.$$ Suppose on the contrary that there exist a sequence $(\varepsilon_k)$ and a real number $a>0$ such that $\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon_k))\not \to 0$ as $\varepsilon_k\to 0$. Thus, we can find a subsequence of $(\varepsilon_k)$ (no relabeling) and $a>0$ such that $\mbox{\rm diam}(\mathcal{G}'(\bar x, \varepsilon_k))>2a>0$. Then for each $k\in\Bbb N$, there exists $x_k\in \mathcal{G}'(\bar x,\varepsilon_k)$ such that $\|x_k-\bar x\|\geq a$. Since $T_M(x_k,\bar x)=\|x_k-\bar x\|$ due to Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (iii), we get $$\label{9}
\frac{m}{a}T_M(x_k,\bar x)\leq f(x_k)-f(\bar x)\leq \varepsilon_k T_M(x_k,\bar x) \text{ for all }k\in\Bbb N,$$ where the first inequality is fulfilled by ([\[6\]](#6){reference-type="ref" reference="6"}) and the other inequality holds since $x_k\in \mathcal{G}'(\bar x,\varepsilon_k)$. Dividing ([\[9\]](#9){reference-type="ref" reference="9"}) by $T_M(x_k,\bar x)$, we obtain $\frac{m}{a}\leq \varepsilon_k$ for all $k\in\mathbb{N}$ which is impossible. So, $\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$.
To prove the converse under the imposed assumptions that there exists $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{G}(\varepsilon)$ such that $f$ is bounded from below on $D_{\bar x}$, and $\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, we first show that $\bar x$ minimizes $f$ on $D_{\bar x}$. Indeed, since $\bar x\in \mathcal{G}(\varepsilon)$ for all $\varepsilon >0$, we have $$f(\bar x)\leq f(y) + \varepsilon T_M(y,\bar x) \textrm{ for all } y\in D_{\bar x} \textrm{ and } \varepsilon > 0.$$ Letting $\varepsilon\to 0$, we get $$f(\bar x)\leq f(y) \textrm{ for all }y\in D_{\bar x}.$$ Next, we show that $\mathcal{L}(\bar x, \varepsilon)\subset \left(\mathcal{G}' (\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}}$, where $$\left(\mathcal{G}' (\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}}:=\{x\in D_{\bar x}\mid T_M(x,\mathcal{G}' (\bar x,\sqrt{\varepsilon}))\leq \sqrt{\varepsilon}\}.$$ Indeed, take any $x_0\in \mathcal{L}(\bar x,\varepsilon)$. Then we have $f(x_0)\leq \inf_{x\in D_{\bar x}}f(x) + \varepsilon = f(\bar x)+\varepsilon$. Applying Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"}, we see that there exists $x_\varepsilon\in D_{\bar x}$ such that $T_M(x_0,x_\varepsilon)\leq \sqrt{\varepsilon}$ and $$f(x_\varepsilon)\leq f(y) + \sqrt{\varepsilon}T_M(x_\varepsilon,y) \textrm{ for all }y\in D_{\bar x}.$$ This implies that $x_\varepsilon\in \mathcal{G}'(\bar x,\sqrt{\varepsilon})$, and therefore $$T_M(x_0,\mathcal{G}'(\bar x,\sqrt{\varepsilon}))\leq T_M(x_0,x_\varepsilon)\leq \sqrt{\varepsilon}.$$ So, $\mathcal{L}(\bar x, \varepsilon)\subset \left(\mathcal{G}'(\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}}$. By the imposed assumptions, it follows from Lemma [Lemma 3](#lemmato0){reference-type="ref" reference="lemmato0"} that $\mbox{\rm diam}(\left(\mathcal{G}'(\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}})\to 0$ as $\varepsilon\to 0$, which in turn implies $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. Then, Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"} tells us that (MP) is directional Tykhonov well-posed. The proof is complete. ◻
**Remark 22**. (i) It is interesting to observe that, in general, $$\lim_{\varepsilon\to 0}\mbox{\rm diam}(\mathcal{G}(\varepsilon))\neq \lim_{\varepsilon\to 0}\mbox{\rm diam}(\mathcal{G}'(\bar x, \varepsilon))\text{ for every }\bar x\in \bigcap_{\varepsilon>0}\mathcal{G}(\varepsilon).$$ Indeed, consider $f: \mathbb R\to \Bbb R$ given by $f(x)=e^x$ and take $M=\{-1\}$. Since $f$ is strictly increasing, for every $x\in\Bbb R$ we get $$f(x)\leq f(y)\leq f(y)+\varepsilon T_M(y,x)\text{ for all } y\in D_x=[x,\infty),$$ which yields $\mathcal{G}(\varepsilon)=\Bbb R$ for any $\varepsilon>0$. On the other hand, if we take any $\bar x\in \bigcap_{\varepsilon>0}\mathcal{G}(\varepsilon)$, then direct calculations yield $$\lim_{\varepsilon\to 0}\mbox{\rm diam}(\mathcal{G}'(\bar x, \varepsilon))=0.$$
\(ii\) If $A=X$ and $M=\mathbb{S}$, we get $T_M(x,y) = T_M(y,x) = \|x-y\|$ and $D_{\bar x} = X$. Hence, $$\mathcal{G}(\varepsilon) = \mathcal{G}'(\bar x,\varepsilon) = \{x\in X\mid f(x)\leq f(y) + \varepsilon\|x-y\|\text{ for all } y\in X\}.$$ So, our Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"} implies Theorem 4.1 in [@lucchetti1981characterization] (for the case of a Banach space).
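For instance, the direct calculation behind the last claim in part (i) can be carried out as follows (we include it here only for completeness). Fix $\bar x\in\Bbb R$ and $\varepsilon\in(0,e^{\bar x})$. If $x\in\mathcal{G}'(\bar x,\varepsilon)$ and $x>\bar x$, then for every $y\in[\bar x,x)$ one has $T_M(x,y)=x-y$, so the defining inequality gives $$e^{x}-e^{y}\leq \varepsilon (x-y),$$ and letting $y\uparrow x$ yields $e^{x}\leq \varepsilon<e^{\bar x}\leq e^{x}$, a contradiction. Hence $\mathcal{G}'(\bar x,\varepsilon)=\{\bar x\}$ for all small $\varepsilon>0$, and so $\lim_{\varepsilon\to 0}\mbox{\rm diam}(\mathcal{G}'(\bar x,\varepsilon))=0$.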
The following example illustrates the advantages of Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"} by exhibiting a case where it applies, while a number of existing results do not.
**Example 23**. Let $X=A=l_1= \{(x_i) \mid x_i \in \Bbb R, \sum_{i=1}^{\infty}|x_{i}|<\infty\}$ (the Banach space of absolutely summable real sequences with the usual $l_1$-norm) and let $f : A \to\Bbb R$ be defined by $$f(x) = \sum_{n=1}^{\infty}\frac{|\langle x,e_n\rangle|}{n},$$ where $\langle x,e_n\rangle$ denotes the $n$-th coordinate of $x$ and $e_n = (0, 0,\cdots, 1, 0,\cdots)$, with $1$ at the $n$-th position, is the standard unit vector. Observe that the function $f$ is convex and continuous, and the problem (MP) has a unique solution at $0.$ But $(e_n)$ is a minimizing sequence which does not converge to $0.$ Thus, (MP) is not classically Tykhonov well-posed.
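Indeed, a direct computation gives $$f(e_n)=\sum_{m=1}^{\infty}\frac{|\langle e_n,e_m\rangle|}{m}=\frac{1}{n}\longrightarrow 0=f(0)\quad\text{while}\quad \|e_n-0\|_{l_1}=1\not\to 0,$$ so $(e_n)$ is a minimizing sequence that stays at distance $1$ from the unique solution.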
Now let $M=\{e_1\}$ and $\bar x = 0.$ Since $\bar x$ is a global minimum of $f$ on $A$, for any $\varepsilon>0$ and any $y\in D_{\bar x}$, one has $$f(\bar x)\leq f(y)\leq f(y)+\varepsilon T_M(y,\bar x),$$ which implies $\bar x\in \bigcap_{\varepsilon>0}\mathcal{G}(\varepsilon).$ Clearly, $f$ is bounded from below on $D_{\bar x} = \{(-\alpha,0,\cdots)\in l_1\mid \alpha\geq 0\}$. To apply Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"}, it remains to show that $\mbox{\rm diam}(\mathcal{G}'(0,\varepsilon))\to 0$ as $\varepsilon\to 0$. For any $\varepsilon>0$ and $x:=(-\alpha_x,0,\cdots)\in D_{\bar x}$, if $x\in \mathcal{G}'(0,\varepsilon)$, then we have $$\label{eq3}
\alpha_x=\sum_{n=1}^{\infty}\frac{|\langle x,e_n\rangle|}{n}=f(x)\leq f(y)+\varepsilon T_M(x,y) \quad \textrm{ for all } y\in D_{\bar x}.$$ Taking $y=0$ tells us that $f(y)=f(0)=0$ and $T_M(x,0)<\infty$ (because $x\in D_{\bar x}=D_0$). Then, ([\[eq3\]](#eq3){reference-type="ref" reference="eq3"}) becomes $$\alpha_x\leq \varepsilon T_M(x,0).$$ Since $T_M(x,0)=\|x-0\| = \alpha_x$, the above inequality gives $\alpha_x\leq \varepsilon\alpha_x$, and hence $\alpha_x=0$ whenever $\varepsilon<1$. Therefore $\mathcal{G}'(0,\varepsilon)=\{0\}$ and $\mbox{\rm diam}(\mathcal{G}'(0,\varepsilon))=0$ for all $\varepsilon\in(0,1)$, so in particular $\mbox{\rm diam}(\mathcal{G}'(0,\varepsilon))\to 0$ as $\varepsilon\to 0$. So, Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"} guarantees that the problem (MP) is directional Tykhonov well-posed with respect to $M$.
Next, we provide a characterization of directional Tykhonov well-posedness in terms of the differentiability of the function $f$.
**Theorem 24**. *Let $X$ be a reflexive Banach space. For problem (MP), suppose that $A\subset X$ is convex and $f$ is lower semicontinuous, convex and G$\hat{a}$teaux differentiable on $A$. Suppose further that $f'$ is continuous on $A$. For every $\varepsilon>0$, we set $$\mathcal{H}(\varepsilon) = \{x\in A\mid \langle{f'(x),x-y}\rangle \leq \varepsilon T_M(y,x) \textrm{ for all }y\in D_x\}.$$ Then (MP) is directional Tykhonov well-posed with respect to $M$ if and only if there exists $\bar x\in A$ such that $f$ is bounded from below on $D_{\bar x}$, $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{H}(\varepsilon)$ and $\mbox{\rm diam}(\mathcal{H}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, where $$\begin{array}{ll}
\mathcal{H}'(\bar x,\varepsilon) = \{x\in D_{\bar x}\mid \langle{f'(x), x-y}\rangle\leq \varepsilon T_M(x,y) \textrm{ for all } y\in D_{\bar x}\}.
\end{array}$$*
*Proof.* Suppose that (MP) is directional Tykhonov well-posed with respect to $M$. Then there exists $\bar x\in A$ such that $f(\bar x) < f(y)$ for all $y\in D_{\bar x}\setminus\{\bar x\}$. This is equivalent, by [@ekeland1999convex Proposition 2.1], to the inequality $$\langle{f'(\bar x),\bar x - y}\rangle \leq 0\text{ for all } y\in D_{\bar x}.$$ Therefore, $$\langle{f'(\bar x),\bar x - y}\rangle \leq \varepsilon T_M(y,\bar x)\text{ for all } y\in D_{\bar x} \text{ and all } \varepsilon>0,$$ which immediately implies, by the definition of $\mathcal{H}(\varepsilon)$, that $\bar x\in\bigcap_{\varepsilon>0} \mathcal{H}(\varepsilon)$. Clearly, $f$ is bounded from below on $D_{\bar x}$. Next, we show that $\mbox{\rm diam}(\mathcal{H}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$ by way of contradiction. Suppose that there exists a sequence ($\varepsilon_k$) such that $\mbox{\rm diam}(\mathcal{H}'(\bar x,\varepsilon_k))\not \to 0$ as $\varepsilon_k\to 0$. As seen in the proof of Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}, we can find $a>0$ such that for every $k\in \Bbb N$, there exists $x_k\in \mathcal{H}'(\bar x,\varepsilon_k)$ satisfying $\|x_k-\bar x\|\geq a$. We consider two cases: (A) $T_M(x_k,\bar x)\leq 1$ for infinitely many $k$ and (B) $T_M(x_k,\bar x)\leq 1$ for only finitely many $k$. In case (A), we have by the convexity of $f$ that $$f(x_k)-f(\bar x)\leq \langle{f'(x_k),x_k-\bar x}\rangle \leq \varepsilon_k T_M(x_k,\bar x)\leq \varepsilon_k.$$ So, $x_k\in \mathcal{L}(\bar x,\varepsilon_k)=\{x\in D_{\bar x}\mid f(x)\leq \inf_{y\in D_{\bar x}}f(y)+\varepsilon_k\}$ for infinitely many $k$, which implies $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k))\geq a$ for infinitely many $k$. This is impossible since $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k))\to 0$ as $k\to \infty$ by Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}. In case (B), we can find infinitely many $x_k$ such that $T_M(x_k,\bar x)>1$. For these $x_k$ satisfying $T_M(x_k,\bar x)>1$, we show that $$\bar x + \frac{x_k-\bar x}{T_M(x_k,\bar x)}\in \mathcal{L}(\bar x,\varepsilon_k).$$ Indeed, set $\lambda:=\frac{1}{T_M(x_k,\bar x)}\in (0,1)$. Then we have by the convexity of $f$ that $$0\leq \langle{f'(x_k)-f'(\bar x + \lambda(x_k-\bar x)),x_k-(\bar x + \lambda(x_k-\bar x))}\rangle,$$ and thus $$0\leq \langle{f'(x_k)-f'(\bar x + \lambda(x_k-\bar x)),(1-\lambda)(x_k-\bar x)}\rangle.$$ Multiplying both sides of the inequality by $\frac{\lambda}{1-\lambda}$ gives us $$0\leq \langle{f'(x_k)-f'(\bar x + \lambda(x_k-\bar x)),\lambda(x_k-\bar x)}\rangle,$$ or equivalently, $$%\label{eq3}
\langle{f'(\bar x + \lambda(x_k-\bar x)),\lambda(x_k-\bar x)}\rangle\leq \langle{f'(x_k),\lambda(x_k-\bar x)}\rangle.$$ Combining this with the convexity of $f$, we get $$%\label{eq5}
f(\bar x + \lambda(x_k-\bar x))-f(\bar x) \leq \langle{f'(\bar x + \lambda(x_k-\bar x)),\lambda(x_k-\bar x)}\rangle\leq \langle{f'(x_k),\lambda(x_k-\bar x)}\rangle.$$ Replacing $\lambda$ by $\frac{1}{T_M(x_k,\bar x)}$ in the above inequalities gives us $$\label{eq4}
\begin{array}{ll}
f\left(\bar x+ \frac{x_k-\bar x}{T_M(x_k,\bar x)}\right)-f(\bar x)&\leq \left\langle{f'\left(\bar x + \frac{x_k-\bar x}{T_M(x_k,\bar x)}\right),\frac{x_k-\bar x}{T_M(x_k,\bar x)}}\right\rangle\\
&\leq \left\langle{f'(x_k),\frac{x_k-\bar x}{T_M(x_k,\bar x)}}\right\rangle.
\end{array}$$ Since $x_k\in \mathcal{H}'(\bar x,\varepsilon_k)$, one has $$%\label{eq56}
\left\langle{f'(x_k),\frac{x_k-\bar x}{T_M(x_k,\bar x)}}\right\rangle\leq \varepsilon_k T_M\left(x_k,x_k-\frac{x_k-\bar x}{T_M(x_k,\bar x)}\right) = \varepsilon_k \left\|\frac{x_k-\bar x}{T_M(x_k,\bar x)}\right\| = \varepsilon_k,$$ where the equalities are fulfilled by $\bar x-x_k\in \mbox{\rm cone}(M)$ and Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (iii). This together with ([\[eq4\]](#eq4){reference-type="ref" reference="eq4"}) implies that $$f\left(\bar x+ \frac{x_k-\bar x}{T_M(x_k,\bar x)}\right)-f(\bar x)\leq \varepsilon_k.$$ Hence $\bar x+ \frac{x_k-\bar x}{T_M(x_k,\bar x)} \in \mathcal{L}(\bar x,\varepsilon_k)$ for infinitely many $k\in \Bbb N$. Since $\left\|\bar x+ \frac{x_k-\bar x}{T_M(x_k,\bar x)}-\bar x\right\|= 1$, this contradicts the fact that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k))\to 0$ as $k\to \infty$ by Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}. So, $\mbox{\rm diam}(\mathcal{H}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$.
Conversely, using the same argument as in the proof of Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"} and applying Corollary [Corollary 20](#ekel3){reference-type="ref" reference="ekel3"}(iii), we obtain that $$\mathcal{L}(\bar x, \varepsilon)\subset (\mathcal{H}'(\bar x,\sqrt{\varepsilon}))^{\sqrt{\varepsilon}}=\{x\in D_{\bar x}\mid T_M(x,\mathcal{H}'(\bar x,\sqrt{\varepsilon}))\leq \sqrt{\varepsilon}\}.$$
Since $\mbox{\rm diam}(\mathcal{H}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, it follows from Lemma [Lemma 3](#lemmato0){reference-type="ref" reference="lemmato0"} that $\mbox{\rm diam}((\mathcal{H}'(\bar x,\sqrt{\varepsilon}))^{\sqrt{\varepsilon}})\to 0$ as $\varepsilon\to 0$. Hence $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, which, by Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}, yields that the problem (MP) is directional Tykhonov well-posed with respect to $M$. ◻
Note that if $A=X$ and $M=\mathbb{S}$, then $T_M(x,y) = T_M(y,x) = \|x-y\|$ and $D_{\bar x} = X$. Hence, $$\mathcal{H}(\varepsilon) = \mathcal{H}'(\bar x,\varepsilon) = \{x\in X\mid \langle{f'(x),x-y}\rangle\leq \varepsilon\|x-y\|\text{ for all } y\in X\},$$ which yields that Theorem [Theorem 24](#wpchar2){reference-type="ref" reference="wpchar2"} becomes [@lucchetti1981characterization Theorem 4.3] (for the case of a Banach space).
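As a simple illustration of Theorem [Theorem 24](#wpchar2){reference-type="ref" reference="wpchar2"} (the computation is ours), consider again $X=A=\Bbb R$, $M=\{-1\}$ and $f(x)=e^{x}$ as in Remark 22(i). For every $x\in\Bbb R$ and every $y\in D_x=[x,\infty)$ one has $$\langle f'(x),x-y\rangle=e^{x}(x-y)\leq 0\leq \varepsilon T_M(y,x),$$ so $\mathcal{H}(\varepsilon)=\Bbb R$ and every $\bar x$ belongs to $\bigcap_{\varepsilon>0}\mathcal{H}(\varepsilon)$, while arguing as for $\mathcal{G}'(\bar x,\varepsilon)$ in Remark 22(i) one checks that $\mathcal{H}'(\bar x,\varepsilon)=\{\bar x\}$ whenever $\varepsilon<e^{\bar x}$. Since $f$ is bounded from below on $D_{\bar x}=[\bar x,\infty)$ by $e^{\bar x}$, Theorem [Theorem 24](#wpchar2){reference-type="ref" reference="wpchar2"} confirms that (MP) is directional Tykhonov well-posed with respect to $M$ in this setting.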
Let $X$ be a reflexive Banach space, let $A\subset X$ and $f$ be as in problem (MP) and let $g: A\to (-\infty, \infty]$. Consider the minimization problem: $$\textrm{(MP$'$)}\quad \text{ minimize } (f+g)(x) \text{ subject to }x\in A.$$
We end this section with a result extending Theorems [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"} and [Theorem 24](#wpchar2){reference-type="ref" reference="wpchar2"}.
**Theorem 25**. *For the problem ($\textrm{MP}'$), suppose that $g$ and $f$ are lower semicontinuous and convex. Suppose further that $f$ is directional G$\hat{a}$teaux differentiable on $A$ such that $f'$ is continuous. For $\varepsilon>0$, we set $$\mathcal{P}(\varepsilon) = \{x\in A\mid \langle{f'(x),x-y}\rangle + g(x)-g(y)\leq \varepsilon T_M(y,x), \forall y\in D_x\}.$$ Then the problem $(\textrm{MP}')$ is directional Tykhonov well-posed with respect to $M$ if and only if there exists $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{P}(\varepsilon)$ such that $f+g$ is bounded from below on $D_{\bar x}$, and $\mbox{\rm diam}(\mathcal{P}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, where $$\mathcal{P}'(\bar x,\varepsilon)= \{x\in D_{\bar x}\mid \langle{f'(x),x-y}\rangle + g(x)-g(y)\leq \varepsilon T_M(x,y), \text{ for all } y\in D_{\bar x}\}.$$*
*Proof.* Assume that there exists $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{P}(\varepsilon)$ such that $f+g$ is bounded from below on $D_{\bar x}$, and $\mbox{\rm diam}(\mathcal{P}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. It is sufficient to show that $\mbox{\rm diam}(\mathcal{L}(\bar x, \varepsilon))\to 0$ as $\varepsilon\to 0$, where $$\mathcal{L}( \bar x, \varepsilon)=\{y\in D_{\bar x}\mid (f+g)(y)\leq \inf_{x\in D_{\bar x}}(f+g)(x)+\varepsilon\}.$$ In fact, we prove $\mathcal{L}(\bar x,\varepsilon)\subset \left(\mathcal{P}'(\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}}$, where $$\left(\mathcal{P}'(\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}} := \{x\in D_{\bar x}\mid T_M(x,\left(\mathcal{P}'(\bar x,\sqrt{\varepsilon})\right))\leq \sqrt{\varepsilon}\}.$$ Indeed, applying Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"} to $f+g$, we obtain that for any $x_0\in \mathcal{L}(\bar x, \varepsilon)$, there exists $x_\varepsilon\in D_{\bar x}$ such that $$\label{iekel0}
T_M(x_0,x_\varepsilon)\leq \sqrt{\varepsilon}.$$ Moreover, since $D_{\bar x}$ is convex, it follows from Theorem [Theorem 18](#ekel0){reference-type="ref" reference="ekel0"}(iii) that for any $y\in D_{\bar x}$ and $\lambda\in (0,1)$, $$\label{eq7}
(f+g)(x_\varepsilon)\leq (f+g)(x_\varepsilon+\lambda(y-x_\varepsilon))+\sqrt{\varepsilon}T_M(x_\varepsilon,x_\varepsilon+\lambda(y-x_\varepsilon)).$$ Observe that $$\label{eq8}
T_M(x_\varepsilon,x_\varepsilon+\lambda(y-x_\varepsilon))\leq \lambda T_M(x_\varepsilon,y).$$ Indeed, if $x_\varepsilon\not\in \mbox{\rm dom}T_M(\cdot,y)$ then $T_M(x_\varepsilon,y)=\infty$ and therefore the inequality ([\[eq8\]](#eq8){reference-type="ref" reference="eq8"}) is obviously true. Otherwise, by the convexity of $\mbox{\rm dom}T_M(\cdot,y)$, one has $x_\varepsilon+\lambda(y-x_\varepsilon)\in \mbox{\rm dom}T_M(\cdot,y)$. Then it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (iii) that $$T_M(x_\varepsilon,x_\varepsilon+\lambda(y-x_\varepsilon))=\|x_\varepsilon+\lambda(y-x_\varepsilon)-x_\varepsilon\|= \lambda\|y-x_\varepsilon\|=\lambda T_M(x_\varepsilon,y),$$ which implies that ([\[eq8\]](#eq8){reference-type="ref" reference="eq8"}) holds. From ([\[eq7\]](#eq7){reference-type="ref" reference="eq7"}) and ([\[eq8\]](#eq8){reference-type="ref" reference="eq8"}), we have $$\label{eq1}
\frac{f(x_\varepsilon)- f(x_\varepsilon+\lambda(y-x_\varepsilon))}{\lambda}+\frac{g(x_\varepsilon) -g(x_\varepsilon+\lambda(y-x_\varepsilon))}{\lambda}\leq \sqrt{\varepsilon} T_M(x_\varepsilon,y).$$ By the convexity of $g$, one has $g(x_\varepsilon)-g(y)\leq \frac{g(x_\varepsilon)-g(x_\varepsilon+\lambda(y-x_\varepsilon))}{\lambda}$, and therefore ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) gives $$\frac{f(x_\varepsilon)-f(x_\varepsilon+\lambda(y-x_\varepsilon))}{\lambda}+g(x_\varepsilon)-g(y)\leq \sqrt{\varepsilon}T_M(x_\varepsilon,y).$$ Taking $\lambda\to 0$, we get $$\langle{f'(x_\varepsilon),x_\varepsilon-y}\rangle+g(x_\varepsilon)-g(y)\leq \sqrt{\varepsilon}T_M(x_\varepsilon,y)\textrm{ for all }y\in D_{\bar x},$$ i.e., $x_\varepsilon\in \mathcal{P}'(\bar x,\sqrt{\varepsilon})$. This together with ([\[iekel0\]](#iekel0){reference-type="ref" reference="iekel0"}) yields $$T_M(x_0,\mathcal{P}'(\bar x,\sqrt{\varepsilon}))\leq T_M(x_0,x_\varepsilon)\leq \sqrt{\varepsilon},$$ and therefore $x_0\in \left(\mathcal{P}'(\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}}$, i.e., $\mathcal{L}(\bar x, \varepsilon)\subset \left(\mathcal{P}'(\bar x,\sqrt{\varepsilon})\right)^{\sqrt{\varepsilon}}$. Using the same argument as in the last paragraph of the proof of Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"}, we obtain that $\mbox{\rm diam}(\mathcal{L}(\bar x, \varepsilon))\to 0$ as $\varepsilon\to 0$. Then Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"} ensures that the problem (MP$'$) is directional Tykhonov well-posed with respect to $M$.
Conversely, assume that (MP$'$) is directional Tykhonov well-posed with respect to $M$. Note that $f+g$ is lower semicontinuous and convex. By Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"}, it is sufficient to show that $\mathcal{P}(\varepsilon)\subset \mathcal{G}(\varepsilon)$ and $\mathcal{P}'(\bar x,\varepsilon)\subset \mathcal{G}'(\bar x,\varepsilon)$, where $\mathcal{G}(\varepsilon)$ and $\mathcal{G}'(\bar x,\varepsilon)$ are defined with respect to $f+g$. Since $f$ is convex, $$(f+g)(x)-(f+g)(y)\leq \langle{f'(x),x-y}\rangle + g(x)-g(y).$$ Using the definitions of these sets, we get $\mathcal{P}(\varepsilon)\subset \mathcal{G}(\varepsilon)$ and $\mathcal{P}'(\bar x,\varepsilon)\subset \mathcal{G}'(\bar x,\varepsilon)$, which therefore completes the proof of the theorem. ◻
It is not hard to verify that if $f\equiv 0$, then Theorem [Theorem 25](#wpchar3){reference-type="ref" reference="wpchar3"} implies Theorem [Theorem 21](#wpchar1){reference-type="ref" reference="wpchar1"}, and if $g\equiv 0$, then it becomes Theorem [Theorem 24](#wpchar2){reference-type="ref" reference="wpchar2"}.
# Level sets and Admissible functions
Let $A$ and $f$ be as in the problem (MP). Throughout this section, we shall suppose that $A\subset X$ is convex and $\bar x\in A$ is a global directional minimum point for $f$ with respect to $M$.
**Definition 26**. The function $f$ is called directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$ if for every $x\in D_{\bar x}$, $$\label{subhomo1}
f(sx +(1-s)\bar x)\leq sf(x) +(1-s)f(\bar x) \textrm{ for all } s\in [0,1].$$
In this section, we assume that $f$ is directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$, which is a weaker condition than convexity.
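For example (an elementary illustration of ours), take $X=\Bbb R$, $A=[0,\infty)$, $M=\{-1\}$ and $\bar x=0$, so that $D_{\bar x}=[0,\infty)$, and let $$f(x)=\begin{cases} x^{2}, & 0\leq x\leq 1,\\ x, & x>1.\end{cases}$$ Since $f(0)=0$ and $f(x)/x=\min(x,1)$ is nondecreasing on $(0,\infty)$, one has $f(sx)\leq sf(x)$ for all $s\in[0,1]$ and $x\in D_{\bar x}$, so $f$ is directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$, although $f$ is not convex (its derivative drops from $2$ to $1$ at $x=1$).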
**Remark 27**. If $\bar x = 0$, $f(0) = 0$ and $M=\mathbb{S}$, then ([\[subhomo1\]](#subhomo1){reference-type="ref" reference="subhomo1"}) becomes $f(sx)\leq sf(x)$ for all $s\in[0,1]$ and $x\in A$; in this case, $f$ is called subhomogeneous with respect to $0$; see [@lucchetti1981characterization page 463].
For $\varepsilon>0$, let $\mathcal{L}(\bar x, \varepsilon)$ be defined as in Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}. We put $$r(\varepsilon) = \frac{1}{2}\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon)),$$ and for $t\in [0,\infty),$ $$c_0(t) = \inf\{f(x)-f(\bar x)\mid x\in D_{\bar x}\text{ and } T_M(x,\bar x)=t\}.$$
Observe that since $\bar x$ is a global directional minimum point, $c_0(t)\geq 0$ for all $t\geq0$, and $c_0(0)=f(\bar x)-f(\bar x)=0$. Clearly, if there is no $x\in A$ satisfying $T_M(x,\bar x)=t$, then $c_0(t)=\infty$. As usual, the domain of $c_0$ is the set $$\mbox{\rm dom}(c_0)=\{t\geq 0\mid c_0(t)<\infty\}.$$
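To illustrate these notions (the computation is ours), consider again $X=A=\Bbb R$, $M=\{-1\}$ and $f(x)=e^{x}$ as in Remark 22(i); here $\bar x$ is a global directional minimum point and $D_{\bar x}=[\bar x,\infty)$. The only point $x\in D_{\bar x}$ with $T_M(x,\bar x)=t$ is $x=\bar x+t$, so $$c_0(t)=e^{\bar x+t}-e^{\bar x}=e^{\bar x}(e^{t}-1),\qquad \mathcal{L}(\bar x,\varepsilon)=\big[\bar x,\bar x+\ln(1+\varepsilon e^{-\bar x})\big],\qquad r(\varepsilon)=\tfrac{1}{2}\ln(1+\varepsilon e^{-\bar x}).$$ In particular, $c_0(t)>0$ for every $t>0$ and $c_0(t_k)\to 0$ forces $t_k\to 0$; hence $c_0$ is admissible (cf. Remark [Remark 30](#adm2){reference-type="ref" reference="adm2"} below).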
**Proposition 28**. *Let $t_0\neq 0$ and $s_0\in [0,t_0]$. Then we have $c_0(s_0)\leq \frac{s_0}{t_0}c_0(t_0)$.*
*Proof.* By the definition of $c_0$, for every $\varepsilon>0$, there exists an $x_\varepsilon\in D_{\bar x}=A\cap\mbox{\rm dom}T_M(\cdot,\bar x)$ such that $$\label{key8}
T_M(x_\varepsilon,\bar x)=t_0\text{ and }f(x_\varepsilon) - f(\bar x)\leq c_0(t_0)+\varepsilon.$$ Let $x'_\varepsilon= \frac{s_0}{t_0}x_\varepsilon+ \left(1-\frac{s_0}{t_0}\right)\bar x$. Then the convexity of $D_{\bar x}$ ensures $x'_\varepsilon\in D_{\bar x}$. Therefore, it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii) and ([\[key8\]](#key8){reference-type="ref" reference="key8"}) that $$T_M(x'_\varepsilon,\bar x)=\|x'_\varepsilon-\bar x\|=\left\|\frac{s_0}{t_0}x_\varepsilon+ \left(1-\frac{s_0}{t_0}\right)\bar x-\bar x\right\|=s_0.$$ Using again the definition of $c_0$, we get $$\label{eq44}
c_0(s_0)\leq f(x'_\varepsilon) - f(\bar x) = f\left(\frac{s_0}{t_0}x_\varepsilon+ \left(1-\frac{s_0}{t_0}\right)\bar x\right) - f(\bar x).$$ On the other hand, since $f$ is directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$, it follows from ([\[key8\]](#key8){reference-type="ref" reference="key8"}) that for all $\varepsilon>0$, $$f\left(\frac{s_0}{t_0}x_\varepsilon+ \left(1-\frac{s_0}{t_0}\right)\bar x\right) - f(\bar x)\leq \frac{s_0}{t_0}\Big(f(x_\varepsilon)-f(\bar x)\Big)\leq \frac{s_0}{t_0}\Big(c_0(t_0)+\varepsilon\Big).$$ Combining this with ([\[eq44\]](#eq44){reference-type="ref" reference="eq44"}) gives us $c_0(s_0)\leq \frac{s_0}{t_0}(c_0(t_0)+\varepsilon)$. Since $\varepsilon$ could be taken arbitrarily small, we get $c_0(s_0)\leq \frac{s_0}{t_0}c_0(t_0)$, which therefore completes the proof of the proposition. ◻
**Remark 29**. If $c_0(t_0)>0$ for some $t_0>0$, then $c_0$ is a strictly increasing function on every interval of the kind $[t_0,\infty)\cap \mbox{\rm dom}(c_0)$. Indeed, assume that $c_0(t_0)>0$ for some $t_0>0$. Then for any $a,b\in \Bbb R$ satisfying $t_0\leq a<b$, Proposition [Proposition 28](#theo3){reference-type="ref" reference="theo3"} tells us that $$c_0(a)\leq \frac{a}{b}c_0(b) \text{ and } 0<c_0(t_0)\leq \frac{t_0}{b}c_0(b),$$ and so $c_0(b)>0$. Moreover, because $\frac{a}{b}<1$, we get $$c_0(a)\leq \frac{a}{b}c_0(b)<c_0(b).$$
**Remark 30**. If $c_0(t)>0$ for all $t>0$, then $c_0$ is an admissible function. Indeed, we only need to show that if $c_0(t_k)\to 0$, then $t_k\to 0$. Suppose for contradiction that $t_k\not\to 0$. Then there exists a subsequence $(t_{k_i})$ of $(t_k)$ such that $t_{k_i}>\varepsilon$ for some $\varepsilon>0$. Then by Remark [Remark 29](#adm1){reference-type="ref" reference="adm1"}, one has $c_0(t_{k_i})>c_0(\varepsilon)>0$, which is impossible as $c_0(t_k)\to 0$. So $c_0$ is admissible whenever $c_0(t)>0$ for all $t>0$.
The following theorem provides an estimate for $r(c_0(t))$.
**Theorem 31**. *For every $t>0$, if $c_0(t)>0$, then the following inequalities hold: $$\label{eq11}
\frac{t}{2}\leq r(c_0(t))\leq t.$$*
*Proof.* Observe that the first inequality of ([\[eq11\]](#eq11){reference-type="ref" reference="eq11"}) is equivalent to $$\label{key9}
t\leq \mbox{\rm diam}(\mathcal{L}(\bar x, c_0(t))).$$ Let $t$ be an arbitrary positive real number satisfying $c_0(t)>0$. By the definition of $c_0$, for every $\rho>0$, there exists a $y_\rho\in A$ such that $T_M(y_\rho,\bar x) = t$ and $$\label{eq12}
f(y_\rho)-f(\bar x)\leq c_0(t)+\rho.$$ Thus $y_\rho\in D_{\bar x}$. Since $f$ is directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$, for any $s\in (0,t)$ we have $$\begin{array}{ll}
f\left(\frac{t-s}{t} y_\rho +\frac{s}{t}\bar x \right) -f(\bar x)&\leq \frac{t-s}{t}f(y_\rho) +\frac{s}{t}f(\bar x)-f(\bar x)\\
&= \frac{t-s}{t}(f(y_\rho)-f(\bar x)).
\end{array}$$ By choosing $\rho\in\big(0,\frac{s}{t-s}c_0(t)\big]$ (which is possible since $c_0(t)>0$), the above inequality and ([\[eq12\]](#eq12){reference-type="ref" reference="eq12"}) give us that $$\begin{array}{ll}
f\left(\frac{t-s}{t} y_\rho +\frac{s}{t}\bar x \right) -f(\bar x)&\leq \frac{t-s}{t}(f(y_\rho)-f(\bar x))\\
&\leq \frac{t-s}{t}\left(c_0(t)+\rho\right)\\
&\leq \frac{t-s}{t}\left(c_0(t)+\frac{s}{t-s}c_0(t)\right)=c_0(t).
\end{array}$$ This yields $x_s\in \mathcal{L}(\bar x,c_0(t))$, where $x_s:=(\frac{t-s}{t}) y_\rho + (\frac{s}{t}) \bar x\in D_{\bar x}$ ($D_{\bar x}$ is convex). Moreover, one has $$\begin{array}{ll}
\|x_s-\bar x\| &= \left\|\frac{t-s}{t} y_\rho - \frac{t-s}{t} \bar x\right\|\\
&= \frac{t-s}{t}\| y_\rho - \bar x\|\\
& = t-s,
\end{array}$$ where the last equality holds because $\|y_\rho-\bar x\| = T_M(y_\rho,\bar x) =t.$ So for any $s\in (0,t)$, there exists $x_s\in \mathcal{L}(\bar x, c_0(t))$ such that $\|x_s-\bar x\| = t-s$, which implies $$\mbox{\rm diam}(\mathcal{L}(\bar x,c_0(t)))\geq t-s\text{ for all }s\in (0,t).$$ Taking $s\to 0$, we obtain the inequality ([\[key9\]](#key9){reference-type="ref" reference="key9"}), which therefore completes the proof of the first inequality of ([\[eq11\]](#eq11){reference-type="ref" reference="eq11"}).
For the second inequality of ([\[eq11\]](#eq11){reference-type="ref" reference="eq11"}), we argue by contradiction. Assume that $r(c_0(t_0))>t_0$ for some $t_0>0$. Equivalently, $\mbox{\rm diam}(\mathcal{L}(\bar x,c_0(t_0)))>2t_0$. Then there exists an $x'\in \mathcal{L}(\bar x,c_0(t_0))=\{y\in D_{\bar x}\mid f(y)\leq \inf_{x\in D_{\bar x}} f(x)+c_0(t_0)\}$ such that $$T_M(x',\bar x) =\|x'-\bar x\|=t' > t_0,$$ where the first equality is fulfilled by Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii). This together with the definition of $c_0$ ensures $$c_0(t')\leq f(x') - f(\bar x) \leq c_0(t_0).$$ However, since $c_0(t_0)>0$ and $t_0<t'$, Remark [Remark 29](#adm1){reference-type="ref" reference="adm1"} gives us $c_0(t_0)<c_0(t')$, a contradiction. This completes the proof of the second inequality of ([\[eq11\]](#eq11){reference-type="ref" reference="eq11"}) and, therefore, of the whole theorem. ◻
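In the one-dimensional illustration above ($f(x)=e^{x}$, $M=\{-1\}$), the bounds of Theorem [Theorem 31](#level1){reference-type="ref" reference="level1"} can be checked directly: for $\varepsilon=c_0(t)=e^{\bar x}(e^{t}-1)$ one gets $\mathcal{L}(\bar x,c_0(t))=[\bar x,\bar x+t]$, hence $$r(c_0(t))=\frac{t}{2},$$ so the lower bound in ([\[eq11\]](#eq11){reference-type="ref" reference="eq11"}) is attained in this example.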
The following result establishes a relationship between admissible functions and level sets.
**Theorem 32**. *The function $c_0$ is admissible if and only if $\mbox{\rm diam}(\mathcal{L}(\bar x, \varepsilon))\to 0$ as $\varepsilon\to 0$.*
*Proof.* Assume that $c_0$ is admissible. By Definition [Definition 15](#admisd){reference-type="ref" reference="admisd"}, one has $c_0(t)>0$ for all $t>0$. Two cases are possible:
\(A\) there exists an $\varepsilon_0 > 0$ such that $c_0(t)>\varepsilon_0$ for all $t>0$ and
\(B\) for any sequence $(\varepsilon_k)$ satisfying $\varepsilon_k\rightarrow 0$, there exists a sequence $(t_k)$ such that $t_k>0$ and $c_0(t_k)\leq \varepsilon_k$ for all $k\in \Bbb N$.
Let us show below that in each of these cases, we arrive at the conclusion of the first implication. In case (A), one has $f(x)-f(\bar x)>\varepsilon_0$ for all $x\in A$ satisfying $T_M(x,\bar x)>0$. Thus, $\mathcal{L}(\bar x, \delta)=\{\bar x\}$ for all $\delta \in (0,\varepsilon_0)$, which implies that $\mbox{\rm diam}(\mathcal{L}(\bar x, \varepsilon))\to 0$ as $\varepsilon\to 0$. In case (B), one has $$\label{eq9}
\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k)) = 2r(\varepsilon_k)\leq 2r(c_0(t_k))\leq 2t_k,$$ where the first inequality holds because $r$ is a decreasing function and the second inequality is satisfied by Theorem [Theorem 31](#level1){reference-type="ref" reference="level1"}. Notice that since the function $c_0$ is admissible, $t_k\to 0$ by the definition. Combining this with ([\[eq9\]](#eq9){reference-type="ref" reference="eq9"}), we obtain $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon_k))\rightarrow 0$ for any sequence $\varepsilon_k\rightarrow 0$.
Conversely, assume that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\rightarrow 0$ as $\varepsilon\rightarrow 0$. By Remark [Remark 30](#adm2){reference-type="ref" reference="adm2"}, we only need to show that $c_0(t)>0$ for all $t>0$. Indeed, if there exists $t_0>0$ such that $c_0(t_0)=0$, then for every $k\in \Bbb N$, we can find $x_k\in D_{\bar x}$ such that $T_M(x_k,\bar x)=t_0$ and $f(x_k)-f(\bar x)<1/k$. This means $x_k\neq \bar x$ and $x_k\in \mathcal{L}(\bar x, 1/k)$ for all $k\in \Bbb N$. However, since $\bar x\in \mathcal{L}(\bar x, 1/k)$ for all $k\in \Bbb N$, this contradicts the assumption that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\rightarrow 0$ as $\varepsilon\to 0$. So, the function $c_0$ is admissible. ◻
For the remaining part of this section, assume that $f$ is G$\hat{a}$teaux differentiable on $A$ and $f'$ is continuous on $A$.
Before proving the main result, we need the following lemmas.
**Lemma 33**. *If $f$ is directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$, then $$\label{eq13}
f(x)-f(\bar x)\leq \langle f'(x), x-\bar x\rangle \textrm{ for all }x\in D_{\bar x}.$$*
*Proof.* Since $f$ is directionally subhomogeneous with respect to $\bar x$, for every $s>0$ we get $$f(x) = f\left(x-\frac{s}{s+1}\bar x + \frac{s}{s+1}\bar x\right)\leq \frac{1}{s+1}f((s+1)x-s\bar x) + \frac{s}{s+1}f(\bar x).$$ Multiplying both sides by $s+1$ gives us $$(s+1)f(x) \leq f((s+1)x-s\bar x) + sf(\bar x),$$ which implies $$f(x)-f(\bar x)\leq \frac{f(x+s(x-\bar x)) - f(x)}{s}.$$ Letting $s\to 0$, we obtain ([\[eq13\]](#eq13){reference-type="ref" reference="eq13"}). ◻
**Lemma 34**. *For every $x\in D_{\bar x}$, if $T_M(x,\bar x)>1$, then $c_0(1)\leq f(x)-f(\bar x)$.*
*Proof.* Let $x\in D_{\bar x}$ be such that $T_M(x,\bar x)>1$. Since $f$ is directionally subhomogeneous with respect to $\bar x$ on $D_{\bar x}$ and $\frac{1}{T_M(x,\bar x)}<1$, we get from Definition [Definition 26](#subhomo){reference-type="ref" reference="subhomo"} that $$\begin{aligned}
f\left(\frac{x-\bar x}{T_M(x,\bar x)} +\bar x \right) &\leq \frac{1}{T_M(x,\bar x)}(f(x)-f(\bar x))+f(\bar x)\\
&\leq(f(x) - f(\bar x))+f(\bar x)=f(x).
\end{aligned}$$ Therefore, $$c_0(1) \leq f\left(\frac{x-\bar x}{T_M(x,\bar x)} +\bar x \right) - f(\bar x) \leq f(x)-f(\bar x),$$ where the first inequality holds because $T_M\left(\frac{x-\bar x}{T_M(x,\bar x)} +\bar x,\bar x\right) = 1$. The proof of the lemma is complete. ◻
For $\varepsilon>0$, we set $$\mathcal{Q}(\bar x,\varepsilon):= \{x\in D_{\bar x}\mid\langle f'(x),x-\bar x\rangle\leq \varepsilon\}.$$ With this set, the directional Tykhonov well-posedness for (MP) can be characterized as follows.
**Theorem 35**. *For the problem (MP), suppose that $f$ is a lower semicontinuous function. Then the problem (MP) is directional Tykhonov well-posed with respect to $M$ if and only if $$\mbox{\rm diam}(\mathcal{Q}(\bar x,\varepsilon))\to 0 \textrm{ as } \varepsilon\rightarrow 0.$$*
*Proof.* Assume that (MP) is directional Tykhonov well-posed with respect to $M$. Then there is $\bar x\in A$ that is a global directional minimum point for $f$. By Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}, it suffices to show that $\mathcal{Q}(\bar x,\varepsilon)\subset \mathcal{L}(\bar x,\varepsilon)$. Indeed, for each $x\in \mathcal{Q}(\bar x,\varepsilon)$, Lemma [Lemma 33](#subhomo2){reference-type="ref" reference="subhomo2"} tells us that $$f(x) - f(\bar x)\leq \langle f'(x),x-\bar x\rangle\leq \varepsilon.$$ Therefore, $x\in \mathcal{L}(\bar x,\varepsilon)$.
Conversely, assume that $\mbox{\rm diam}(\mathcal{Q}(\bar x,\varepsilon))\to 0 \textrm{ as } \varepsilon\rightarrow 0$. Clearly $f$ is bounded from below on $D_{\bar x}$ by $f(\bar x)$. By Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}, it is sufficient to show that $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. This can be done by showing that $\mathcal{L}(\bar x,\varepsilon)\subset (\mathcal{Q}(\bar x,\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)))^{\sqrt{\varepsilon}}$ for all $\varepsilon$ sufficiently small, where $$(\mathcal{Q}(\bar x,\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)))^{\sqrt{\varepsilon}} := \{x\in D_{\bar x}\mid T_M(x,\mathcal{Q}(\bar x,\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)))\leq \sqrt{\varepsilon}\}.$$
Notice that $c_0(1)>0$ by Remark [Remark 29](#adm1){reference-type="ref" reference="adm1"}. Thus, we may assume, without loss of generality, that for all $\varepsilon>0$ sufficiently small, $$\label{key12}
\varepsilon<c_0(1).$$ Fixing any $x_0\in \mathcal{L}(\bar x,\varepsilon)$. Then Corollary $\ref{ekel3}$ tells us there exists $x_\varepsilon\in D_{\bar x}$ such that $$\label{key13}
T_M(x_0,x_\varepsilon) \leq \sqrt{\varepsilon}$$ and $$\langle f'(x_\varepsilon),x_\varepsilon- y\rangle \leq \sqrt{\varepsilon} T_M(x_\varepsilon,y)\text{ for all }y\in D_{\bar x}.$$ Since $\bar x, x_\varepsilon\in D_{\bar x}$, we get $$\label{eq14}
\langle f'(x_\varepsilon),x_\varepsilon- \bar x\rangle \leq \sqrt{\varepsilon} T_M(x_\varepsilon,\bar x),$$ and $$\label{key14}
T_M(x_\varepsilon,\bar x) = \|x_\varepsilon-\bar x\|\leq \|x_\varepsilon-x_0\|+\|x_0-\bar x\|\leq T_M(x_0,x_\varepsilon) + T_M(x_0,\bar x).$$ By ([\[key12\]](#key12){reference-type="ref" reference="key12"}) and $x_0\in \mathcal{L}(\bar x,\varepsilon)$, it follows from Lemma [Lemma 34](#lemma3){reference-type="ref" reference="lemma3"} that $T_M(x_0,\bar x) \leq 1$. Combining this with ([\[key13\]](#key13){reference-type="ref" reference="key13"}) and ([\[key14\]](#key14){reference-type="ref" reference="key14"}), we obtain $$T_M(x_\varepsilon,\bar x)\leq \sqrt{\varepsilon}+1.$$ Therefore, it follows from ([\[eq14\]](#eq14){reference-type="ref" reference="eq14"}) that $x_\varepsilon\in \mathcal{Q}(\bar x,{\sqrt{\varepsilon}(\sqrt{\varepsilon}+1))}$, which yields, by ([\[key13\]](#key13){reference-type="ref" reference="key13"}), that $$\mathcal{L}(\bar x,\varepsilon)\subset \big(\mathcal{Q}(\bar x,{\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)})\big)^{\sqrt{\varepsilon}}\subset \big(\mathcal{Q}(\bar x,{\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)})\big)^{\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)}.$$ Since $\sqrt{\varepsilon}(\sqrt{\varepsilon}+1)\to 0$ as $\varepsilon\to 0$, Lemma [Lemma 3](#lemmato0){reference-type="ref" reference="lemmato0"} tells us $\mbox{\rm diam}(\mathcal{L}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. Hence Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"} guarantees that the problem (MP) is directional Tykhonov well-posed with respect to $M$. ◻
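To see Theorem [Theorem 35](#theo4){reference-type="ref" reference="theo4"} at work on a concrete case (the computation is ours), take again $X=A=\Bbb R$, $M=\{-1\}$ and $f(x)=e^{x}$. Writing $x=\bar x+s$ with $s\geq 0$, we have $\langle f'(x),x-\bar x\rangle=s\,e^{\bar x+s}$, so $$\mathcal{Q}(\bar x,\varepsilon)=\{\bar x+s\mid s\,e^{\bar x+s}\leq \varepsilon\}=[\bar x,\bar x+s_\varepsilon],\qquad\text{where } s_\varepsilon e^{\bar x+s_\varepsilon}=\varepsilon.$$ Since $s_\varepsilon\leq \varepsilon e^{-\bar x}\to 0$, we get $\mbox{\rm diam}(\mathcal{Q}(\bar x,\varepsilon))\to 0$, and Theorem [Theorem 35](#theo4){reference-type="ref" reference="theo4"} recovers the directional Tykhonov well-posedness already observed in Remark 22(i).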
We end this section with the following proposition.
**Proposition 36**. *Let $c_1\colon[0,\infty) \to [0,\infty)$ be given by $$c_1(t) = \inf\{\langle{f'(x),x-\bar x}\rangle \mid x\in D_{\bar x}\textrm{ and } T_M(x,\bar x)=t\}.$$ If $\mbox{\rm diam}(\mathcal{Q}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, then the function $c_1$ is admissible.*
*Proof.* From Lemma [Lemma 33](#subhomo2){reference-type="ref" reference="subhomo2"}, we get $f(x)-f(\bar x) \leq \langle f'(x),x-\bar x\rangle$ for any $x\in D_{\bar x}$. Hence $$\label{key15}
\begin{array}{ll}
c_0(t) &= \inf\{f(x)-f(\bar x)\mid x\in D_{\bar x} \textrm{ and } T_M(x,\bar x)=t\} \\
&\leq \inf\{\langle f'(x),x-\bar x\rangle\mid x\in D_{\bar x}\textrm{ and } T_M(x,\bar x)=t\}\\
& = c_1(t).
\end{array}$$ Now assume that $\mbox{\rm diam}(\mathcal{Q}(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. Then Theorem [Theorem 35](#theo4){reference-type="ref" reference="theo4"} implies that the problem (MP) is directional Tykhonov well-posed with respect to $M$, which in turn implies $\mbox{\rm diam}(\mathcal{L}(\bar x, \varepsilon))\to 0$ by Theorem [Theorem 12](#theo1){reference-type="ref" reference="theo1"}. Thus, $c_0$ is admissible by Theorem [Theorem 32](#theo5){reference-type="ref" reference="theo5"}. To end, we show that $c_1$ is admissible. Indeed, it is clear that $c_1(0)=0$. For any sequence $(t_k)$ satisfying $c_1(t_k)\to 0$, we get from ([\[key15\]](#key15){reference-type="ref" reference="key15"}) that $c_0(t_k)\to 0$. Then the admissibility of $c_0$ implies that $t_k\to 0$. So $c_1$ is an admissible function. The proof is complete. ◻
# Applications to Variational inequalities
In this section, we assume that $X$ is a reflexive Banach space and $A$ is a closed convex subset of $X$. Let $V$ be a monotone operator from $X$ to $X^*$ and $g:A\to (-\infty,\infty]$. Recall that $V$ is said to be monotone if $$\langle V(x)-V(y),x-y\rangle\geq 0 \text{ for all }x,y\in X,$$ and that $V$ is hemicontinuous if $x\in \mbox{\rm dom}(V)$, $y\in X$ and $x+\lambda_ky\in \mbox{\rm dom}(V)$, where $(\lambda_k)$ is a sequence of positive numbers such that $\lambda_k\to 0$, imply $V(x+\lambda_ky)\to V(x)$ (see, e.g., [@kato1964demicontinuity]).
We consider the following problem: $$\textrm{(VI)}\quad\textrm{find }x\in A \textrm{ such that }\langle{V(x),x-y}\rangle + g( x) - g(y)\leq 0 \textrm{ for all } y\in A.$$
**Definition 37**. One says that $\bar x\in A$ is a directional solution with respect to $M$ of the problem (VI) if $$\langle{V(\bar x),\bar x-y}\rangle + g(\bar x) - g(y)\leq 0 \textrm{ for all } y\in D_{\bar x}.$$
For $\varepsilon>0$ and $\bar x\in A$, we put $$\begin{aligned}
\mathcal{R}(\varepsilon) &= \{x\in A\mid \langle{V(x),x-y}\rangle + g(x)-g(y)\leq \varepsilon T_M(y,x) \textrm{ for all }y\in D_x\},\\
\mathcal{R}'(\bar x,\varepsilon) &= \{x\in D_{\bar x}\mid \langle{V(x),x-y}\rangle + g(x)-g(y)\leq \varepsilon T_M(x,y) \text{ for all } y\in D_{\bar x}\}.
\end{aligned}$$ In view of Theorem [Theorem 25](#wpchar3){reference-type="ref" reference="wpchar3"}, it is quite natural to introduce the following definition.
**Definition 38**. We say that the problem (VI) is directional Tykhonov well-posed with respect to $M$ if there is $\bar x\in A$ such that $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{R}(\varepsilon)$ and $\mbox{\rm diam}(\mathcal{R}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$.
Observe that if $V$ is the derivative of a convex, lower semicontinuous and G$\hat{\rm a}$teaux differentiable function $f$ on $A$, then it follows from Theorem [Theorem 25](#wpchar3){reference-type="ref" reference="wpchar3"} that (VI) is directional Tykhonov well-posed with respect to $M$ and $f+g$ is bounded below on $D_{\bar x}$ if and only if (MP$'$) is directional Tykhonov well-posed with respect to $M$.
**Theorem 39**. *If the problem (VI) is directional Tykhonov well-posed with respect to $M$, i.e., there is $\bar x\in A$ such that $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{R}(\varepsilon)$ and $\mbox{\rm diam}(\mathcal{R}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$, then it has a unique directional solution on $D_{\bar x}$.*
*Proof.* Assume that there exists $\bar x\in A$ such that $\bar x\in \bigcap_{\varepsilon > 0}\mathcal{R}(\varepsilon)$ and $\mbox{\rm diam}(\mathcal{R}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. Fix any $y\in D_{\bar x}$. Because $T_M(y,\bar x)<\infty$ and $$\langle{V(\bar x) ,\bar x-y}\rangle + g(\bar x)-g(y)\leq \varepsilon T_M(y,\bar x) \text{ for all }\varepsilon>0,$$ we get $$\langle{V(\bar x),\bar x-y}\rangle + g(\bar x)-g(y)\leq 0.$$ Since $y$ is arbitrary, $\bar x$ is the directional solution of (VI) on $D_{\bar x}$. Moreover, since $$\langle{V(\bar x),\bar x-y}\rangle + g(\bar x)-g(y)\leq 0 \leq \varepsilon T_M(\bar x,y),$$ we obtain $\bar x\in \mathcal{R}'(\bar x,\varepsilon)$ for any $\varepsilon>0$. To end, we show that $\bar x$ is the unique directional solution of (VI) on $D_{\bar x}$. Indeed, if $\bar x \neq \bar y\in D_{\bar x}$ is a directional solution of (VI), then for all $\varepsilon>0$ and all $y\in D_{\bar x}$ one has $$\langle{V(\bar y),\bar y-y}\rangle + g(\bar y)-g(y)\leq 0 \leq \varepsilon T_M(\bar y,y).$$ Thus, $\bar y\in \mathcal{R}'(\bar x, \varepsilon)$ for any $\varepsilon>0$, contradicting the hypothesis that $\mbox{\rm diam}(\mathcal{R}'(\bar x,\varepsilon))\to 0$ as $\varepsilon\to 0$. ◻
From now on, we will consider the case where $g$ is identically $0$.
**Proposition 40**. *Let $V$ be a monotone and hemicontinuous operator. For $\varepsilon>0$ and $\bar x\in A$, put $$\begin{aligned}
\mathcal{S}(\varepsilon) &= \{x\in A\mid \langle{V(y),x-y}\rangle\leq \varepsilon T_M(y,x) \textrm{ for all }y\in D_x\},\\
\mathcal{S}'(\bar x,\varepsilon) &= \{x\in D_{\bar x}\mid \langle{V(y),x-y}\rangle\leq \varepsilon T_M(x,y), \text{ for all } y\in D_{\bar x}\}.
\end{aligned}$$ Then $\mathcal{S}(\varepsilon) = \mathcal{R}(\varepsilon)$ and $\mathcal{S}'(\bar x,\varepsilon)= \mathcal{R}'(\bar x,\varepsilon)$.*
*Proof.* Because $V$ is monotone, we get $\langle{V(y),x-y}\rangle\leq \langle{V(x),x-y}\rangle$ for all $x,y\in A$. Hence $\mathcal{R}(\varepsilon)\subset \mathcal{S}(\varepsilon)$ and $\mathcal{R}'(\bar x,\varepsilon)\subset \mathcal{S}'(\bar x,\varepsilon)$.
Conversely, take any $x\in \mathcal{S}(\varepsilon)$. Then for any $y\in D_x$, we get $x+\lambda(y-x)\in D_x$ for all $\lambda\in (0,1)$, and hence $$\langle{V(x+\lambda(y-x)),\lambda(x-y)}\rangle\leq \varepsilon T_M(x+\lambda(y-x),x)= \varepsilon\lambda\|y-x\|,$$ where the equality is satisfied by Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii). Dividing by $\lambda$, we get $$\langle{V(x+\lambda(y-x)),x-y}\rangle\leq \varepsilon\|y-x\|.$$ Using the hemicontinuity of $V$ and taking $\lambda\to 0$ shows $x\in \mathcal{R}(\varepsilon)$. So $\mathcal{S}(\varepsilon) = \mathcal{R}(\varepsilon)$. Similarly, picking any $x\in \mathcal{S}'(\bar x,\varepsilon)$, we show that for any $y\in D_{\bar x}$, $$\langle{V(x),x-y}\rangle\leq \varepsilon T_M(x,y).$$ Clearly, if $x\not\in D_y$, then $T_M(x,y)=\infty$, and hence the inequality is obviously true. Otherwise, we have $y-x\in \mbox{\rm cone}M$. Let $y_\lambda:=x+\lambda(y-x)\in D_{\bar x}$ for all $\lambda\in (0,1)$. Because $y_\lambda-x = \lambda(y-x)\in \mbox{\rm cone}M$, we get $x\in D_{y_\lambda}$ for all $\lambda\in (0,1)$. Since $x\in \mathcal{S}'(\bar x,\varepsilon)$, $$\langle{V (y_\lambda),x - y_\lambda}\rangle \leq \varepsilon T_M(x,y_\lambda) = \varepsilon\|y_\lambda-x\|,$$ where the equality is fulfilled by Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (iii). Consequently, $$\langle{V(x+\lambda(y-x)),x-y}\rangle\leq \varepsilon\|y-x\| = \varepsilon T_M(x,y).$$ Using the hemicontinuity of $V$ and taking $\lambda\to 0$, we get $x\in \mathcal{R}'(\bar x,\varepsilon)$, i.e., $\mathcal{S}'(\bar x,\varepsilon) = \mathcal{R}'(\bar x,\varepsilon)$, which completes the proof of the proposition. ◻
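As a simple sanity check of Proposition 40 (the example is ours), take $X=A=\Bbb R$, $M=\{-1\}$, $g\equiv 0$ and $V(x)=x$, which is monotone and hemicontinuous. Then $D_x=[x,\infty)$ and $T_M(y,x)=y-x$ for $y\in D_x$, so $$\mathcal{R}(\varepsilon)=\{x\in\Bbb R\mid x(x-y)\leq\varepsilon(y-x)\ \text{ for all } y\geq x\}=[-\varepsilon,\infty)=\{x\in\Bbb R\mid y(x-y)\leq\varepsilon(y-x)\ \text{ for all } y\geq x\}=\mathcal{S}(\varepsilon),$$ in agreement with the proposition.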
**Proposition 41**. *Assume that there exist $\varepsilon>0$, $\bar t>0$ and $\bar x\in A$ such that $$\langle{V(\bar x), \bar x-y}\rangle\leq \varepsilon T_M(y,\bar x)\text{ for all }y\in D_{\bar x}\text{ satisfying }T_M(y,\bar x)\leq \bar t.$$ Then $\bar x\in \mathcal{R}(\varepsilon)$.*
*Proof.* By the convexity of $D_{\bar x}$, one has $y_\lambda:=\bar x+\lambda (y-\bar x)\in D_{\bar x}$ for any $\lambda \in (0,1)$ and any $y\in D_{\bar x}$, and hence $T_M(y_\lambda,\bar x)< \infty$. Then it follows from Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"} (iii) that $$T_M(y_\lambda,\bar x) = \|y_\lambda-\bar x\| = \lambda\|y-\bar x\|,$$ which implies $T_M(y_\lambda,\bar x)\leq \bar t$ for all $\lambda$ sufficiently small. Hence, $$\langle{V(\bar x), \bar x-y_\lambda}\rangle\leq \varepsilon T_M(y_\lambda,\bar x).$$ Consequently, $$\langle{V(\bar x), \bar x-y}\rangle\leq \varepsilon\|\bar x-y\| = \varepsilon T_M(y,\bar x) \text{ for all } y\in D_{\bar x},$$ and thus $\bar x\in \mathcal{R}(\varepsilon)$. ◻
Similarly, we get the following result.
**Proposition 42**. *Assume that there exist $\varepsilon>0$, $\bar t>0$ and $\bar x\in A$ such that $$\langle{V(\bar x), \bar x-y}\rangle\leq \varepsilon T_M(\bar x,y)\text{ for all }y\in D_{\bar x} \textrm{ satisfying } T_M(\bar x,y)<\bar t.$$ Then $\bar x\in \mathcal{R}'(\bar x,\varepsilon)$.*
*Proof.* We prove that $$\langle{V(\bar x), \bar x-y}\rangle\leq \varepsilon T_M(\bar x,y)\text{ for all }y\in D_{\bar x}.$$ Fix any $y\in D_{\bar x}$. Clearly, if $\bar x\not \in D_y$, then $T_M(\bar x,y)=\infty$, and hence the inequality holds obviously. Otherwise, define $y_\lambda:=\bar x+\lambda(y-\bar x)\in D_{\bar x}$ for every $\lambda \in (0,1)$. Because $\bar x\in D_y$, one has $y-\bar x\in \mbox{\rm cone}(M)$, and so $y_\lambda-\bar x = \lambda(y-\bar x)\in \mbox{\rm cone}(M)$. This implies $T_M(\bar x,y_\lambda)<\infty$. By Remark [Remark 2](#remark1){reference-type="ref" reference="remark1"}(iii), we get $$T_M(\bar x,y_\lambda) = \|\bar x-y_\lambda\| = \lambda\|y-\bar x\|,$$ which implies that $T_M(\bar x,y_\lambda)<\bar t$ for all $\lambda$ sufficiently small. Hence, $$\langle{V(\bar x), \bar x-y_\lambda}\rangle\leq \varepsilon T_M(\bar x,y_\lambda),$$ which yields $$\langle{V(\bar x), \bar x-y}\rangle\leq \varepsilon\|y-\bar x\| = \varepsilon T_M(\bar x,y).$$ So $\bar x\in \mathcal{R}'(\bar x,\varepsilon)$. The proof of the proposition is complete. ◻
# Concluding Remarks
This paper is a part of our project involving *directional well-posedness*. It demonstrates how minimal time functions can be treated in the framework of the theory of well-posedness in scalar optimization. After introducing a new type of Tykhonov well-posedness and developing characterizations for this type of well-posedness, we establish relationships between level sets and admissible functions.
Our next goal is to propose and study generalized notions of well-posedness for optimization-related problems. To achieve this goal, we need to develop further properties of general minimal time functions. We expect this line of work to be fruitful.
L. Q. Anh and T. Q. Duy. Tykhonov well-posedness for lexicographic equilibrium problems. , 65(11):1929--1948, 2016.
L. Q. Anh, T. Q. Duy, L. D. Muu, and T. V. Tri. The Tikhonov regularization for vector equilibrium problems. , 78:769--792, 2021.
M. Bianchi, G. Kassay, and R. Pini. Well-posed equilibrium problems. , 72(1):460--468, 2010.
M. J. Cánovas, M. A. López, J. Parra, and M. I. Todorov. Stability and well-posedness in linear semi-infinite programming. , 10(1):82--98, 1999.
T. Chelmuş, M. Durea, and E.-A. Florea. Directional pareto efficiency: concepts and optimality conditions. , 182:336--365, 2019.
R. Cibulka, M. Durea, M. Panţiruc, and R. Strugariu. On the stability of the directional regularity. , 28(2):209--237, 2020.
G. Colombo, V. V. Goncharov, and B. S. Mordukhovich. Well-posedness of minimal time problems with constant dynamics in Banach spaces. , 18(3-4):349--372, 2010.
G. Colombo and P. R. Wolenski. The subgradient formula for the minimal time function in the case of constant dynamics in Hilbert space. , 28:269--282, 2004.
G. Crespi, A. Guerraggio, and M. Rocca. Well posedness in vector optimization problems and vector variational inequalities. , 132:213--226, 2007.
A. L. Dontchev and T. Zolezzi. . Springer, 2006.
M. Durea, M. Panţiruc, and R. Strugariu. Minimal time function with respect to a set of directions: basic properties and applications. , 31(3):535--561, 2016.
M. Durea, M. Panţiruc, and R. Strugariu. A new type of directional regularity for mappings and applications to optimization. , 27(2):1204--1229, 2017.
I. Ekeland and R. Temam. . SIAM, Philadelphia, 1999.
M. Furi and A. Vignoli. About well-posed optimization problems for functionals in metric spaces. , 5(3):225--229, 1970.
M. Furi and A. Vignoli. A characterization of well-posed minimum problems in a complete metric space. , 5(6):452--461, 1970.
C. Gutiérrez, E. Miglierina, E. Molho, and V. Novo. Pointwise well-posedness in set optimization with cone proper sets. , 75(4):1822--1833, 2012.
J. Hadamard. Sur les problèmes aux dérivées partielles et leur signification physique. , pages 49--52, 1902.
Y. He and K. F. Ng. Subdifferentials of a minimum time function in Banach spaces. , 321(2):896--910, 2006.
X. Huang and X. Yang. Generalized Levitin--Polyak well-posedness in constrained optimization. , 17(1):243--258, 2006.
A. Ioffe and R. Lucchetti. Typical convex program is very well posed. , 104(2-3):483--499, 2005.
Y. Jiang and Y. He. Subdifferential properties for a class of minimal time functions with moving target sets in normed spaces. , 91(3):491--502, 2012.
T. Kato. Demicontinuity, hemicontinuity and monotonicity. , 70(6):548--550, 1964.
E. S. Levitin and B. T. Polyak. Convergence of minimizing sequences in conditional extremum problems. In *Doklady Akademii Nauk*, volume 168, pages 997--1000. Russian Academy of Sciences, 1966.
V. S. T. Long. A new notion of error bounds: necessary and sufficient conditions. , 15(1):171--188, 2021.
V. S. T. Long. An invariant-point theorem in banach space with applications to nonconvex optimization. , 194(2):440--464, 2022.
V. S. T. Long. Directional variational principles and applications to the existence study in optimization. , 19(10):7506--7521, 2023.
R. Lucchetti and F. Patrone. A characterization of tyhonov well-posedness for minimum problems, with applications to variational inequalities. , 3(4):461--476, 1981.
R. Lucchetti and T. Zolezzi. On well-posedness and stability analysis in optimization. In *Mathematical Programming with Data Perturbations*, pages 223--251. CRC Press, 2020.
B. S. Mordukhovich and N. M. Nam. Limiting subgradients of minimal time functions in Banach spaces. , 46:615--633, 2010.
B. S. Mordukhovich and M. N. Nguyen. Subgradients of minimal time functions under minimal requirements. , 18(4):915--947, 2011.
J. Morgan and V. Scalzo. Discontinuous but well-posed optimization problems. , 17(3):861--870, 2006.
N. M. Nam and C. Zălinescu. Variational analysis of directional minimal time functions and applications to location problems. , 21(2):405--430, 2013.
M. Sofonea and Y.-b. Xiao. On the well-posedness concept in the sense of Tykhonov. , 183:139--157, 2019.
A. N. Tikhonov. On the stability of the functional optimization problem. , 6(4):28--33, 1966.
T. Zolezzi. A characterization of well-posed optimal control systems. , 19(5):604--616, 1981.
[^1]: Faculty of Mathematics and Computer Science, University of Science, Ho Chi Minh City, Vietnam.
[^2]: Vietnam National University, Ho Chi Minh City, Vietnam (vokehoang\@gmail.com).
[^3]: Faculty of Mathematics and Computer Science, University of Science, Ho Chi Minh City, Vietnam.
[^4]: Vietnam National University, Ho Chi Minh City, Vietnam (email: vstlong\@hcmus.edu.vn).
---
abstract: |
In this paper we construct smooth, non-radial solutions of the compressible Euler and Navier-Stokes equation that develop an imploding finite time singularity. Our construction is motivated by the works [@Merle-Raphael-Rodnianski-Szeftel:implosion-i; @Merle-Raphael-Rodnianski-Szeftel:implosion-ii; @Buckmaster-CaoLabora-GomezSerrano:implosion-compressible], but is flexible enough to handle both periodic and non-radial initial data.
author:
- G. Cao-Labora, J. Gómez-Serrano, J. Shi, G. Staffilani
bibliography:
- references.bib
title: Non-radial implosion for compressible Euler and Navier-Stokes in ${\mathbb T}^3$ and $\mathbb{R}^3$
---
# Introduction
In this paper we are concerned with the compressible Euler and Navier-Stokes equations, corresponding to solutions of
$$\begin{aligned}
\begin{split} \label{eq:CNS}
\rho \partial_t u + \rho u \cdot \nabla u &= - \frac{1}{\gamma} \nabla (\rho^{\gamma}) + \nu \Delta u \\
\partial_t \rho + \mathrm{div} (\rho u) &= 0.
\end{split} \end{aligned}$$ where $\gamma > 1$ is the adiabatic constant and $\nu = 0$ for Euler, $\nu =1$ for Navier-Stokes. We will consider two domains: $y\in \mathbb{T}^{3}$ (torus) or $y\in \mathbb{R}^{3}$ (whole space).
Specifically, we will be concerned with the construction of *imploding singularities*, that is, solutions of [\[eq:CNS\]](#eq:CNS){reference-type="eqref" reference="eq:CNS"} that develop a finite time singularity where both $\| u \|_{L^\infty}$ and $\| \rho \|_{L^\infty}$ become infinite in finite time, as opposed to a *shock*, where $u,\rho$ stay bounded but their gradients blow up.
## Historical Background
Generically, finite time singularities for the compressible Euler and Navier-Stokes equations are in the form of shocks. By using Riemann invariants, Lax [@Lax:singularities-nonlinear-hyperbolic-pde] showed that in 1D shocks may develop. His results were later generalized and improved by John [@John:singularities-1d-wave] and Liu [@Liu:singularities-nlw-quasilinear-hyperbolic-pde].
Later, Sideris [@Sideris:singularities-3d-compressible] employed a virial type argument to prove singularity formation in 2D and 3D. Yin [@Yin:formation-shock-waves-3d-compressible-euler] proved shock formation and development in 3D under spherical symmetry. In a series of landmark monographs [@Christodoulou:shock-development-problem; @Christodoulou:formation-shocks-3d-fluids-book; @Christodoulou-Lisibach:shock-development-spherical-symmetry; @Christodoulou-Miao:compressible-flow-euler] Christodoulou, as well as Christodoulou--Miao and Christodoulou--Lisibach investigated the shock formation and the shock development problem without radial symmetry, and developed a new framework to study such problems. In recent years, there has been an emergence of new and exciting results. Buckmaster, Drivas, Neal, Rickard, Shkoller, and Vicol [@Buckmaster-Shkoller-Vicol:formation-shocks-2d-isentropic-compressible-euler; @Buckmaster-Shkoller-Vicol:point-shocks-3d-compressible-euler; @Buckmaster-Shkoller-Vicol:shock-formation-vorticity-creation-3d-euler; @Buckmaster-Drivas-Shkoller-Vicol:simultaneous-development-shocks-cusps-2d-euler; @Neal-Rickard-Shkoller-Vicol:stable-shock; @Neal-Shkoller-Vicol:characteristics-shock-2d-euler-symmetry-entropy] developed a new framework and studied the shock formation problem in 2D and 3D, even after the first blow-up time or in the context of low regularity, as well as the classification of pre-shocks and the development problem. Luk--Speck [@Luk-Speck:shocks-2d-compressible-euler-vorticity; @Luk-Speck:stability-shock-3d-euler-vorticity-entropy] proved the first result of shock formation involving non-zero vorticity and variable entropy, within the framework developed by Christodoulou. Abbrescia--Speck [@Abbrescia-Speck:singular-boundary-3d-compressible-euler] studied the maximal development problem. An--Chen--Yin [@An-Chen-Yin:ill-posedness-2d-mhd; @An-Chen-Yin:ill-posedness-3d-mhd] proved ill-posedness for compressible MHD and compressible Euler in 2D and 3D. See also the work of Su [@Su:shock-formation-2d-euler-self-similar]. For more references and developments in this direction we refer the reader to the ICM survey of Buckmaster--Drivas--Shkoller--Vicol [@Buckmaster-Drivas-Shkoller-Vicol:icm-survey].
Nonetheless, shock formation is not the only possible singularity that can occur, and self-similar solutions exist. Guderley [@Guderley:singularities-radial] was the first to construct radial, self-similar imploding singularities, though non-smooth. Guderley's construction was later improved and generalized by Jenssen--Tsikkou [@Jenssen-Tsikkou:amplitude-blowup-radial-isentropic-euler; @Jenssen-Tsikkou:radially-symmetric-non-isentropic-euler]. See also the review [@MeyerTerVehn-Schalk:selfsimilar-compression-waves] by Meyer-ter-Vehn--Schalk and the work of Sedov [@Sedov:book-similarity].
In the case of compressible Navier-Stokes, Germain--Iwabuchi [@Germain-Iwabuchi:self-similar-compressible-ns] constructed forward smooth, self-similar solutions, as well as non-singular ones due to cavitation. In [@Germain-Iwabuchi-Leger:backward-self-similar-compressible-ns], Germain--Iwabuchi--Léger studied conditions on the (non)-existence of backward self-similar solutions and settled the existence in the case with degenerate density-dependent viscosity in [@Germain-Iwabuchi-Leger:self-similar-degenerate-compressible-ns]. We also mention here the work of Guo--Jiang [@Guo-Jiang:self-similar-isothermal-compressible-ns] in the isothermal case and of Li--Chen--Xie [@Li-Chen-Xie:self-similar-compressible-1d] in the density-dependent viscosity case and the work by Lazarus [@Lazarus:selfsimilar-shocks-cavities; @Lazarus:selfsimilar-shocks-cavities-erratum] on the converging shock and collapsing cavity problems. For more examples of self-similar solutions in the context of fluid mechanics we refer the reader to the review paper by Eggers--Fontelos [@Eggers-Fontelos:self-similarity]. Xin [@Xin:blowup-compressible-navier-stokes-compact-density], and Rozanova [@Rozanova:blow-up-decreasing-solutions-compressible-ns] proved blow-up for compressible Navier-Stokes in the case of compactly supported and rapidly decaying density respectively.
In a series of breakthrough papers [@Merle-Raphael-Rodnianski-Szeftel:implosion-i; @Merle-Raphael-Rodnianski-Szeftel:implosion-ii; @Merle-Raphael-Rodnianski-Szeftel:implosion-nls], Merle--Raphaël--Rodnianski--Szeftel constructed the first smooth, radially symmetric imploding solutions to the compressible Euler equation and later built upon them to construct imploding singularities for compressible Navier-Stokes (with decaying density at infinity) and the energy supercritical defocusing nonlinear Schrödinger equation. In their construction, they established a sequence of quantized self-similar scalings accumulating to a critical value, leading to a sequence of imploding self-similar profiles. This was done for almost every value of $\gamma$. In [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible] (see also the review paper [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible-review]), the first two authors together with Buckmaster improved the result to cover all cases of $\gamma$. Moreover, they showed the existence of non-decaying imploding singularities for Navier-Stokes. For a very careful and detailed numerical study of smooth imploding solutions as well as their stability we point to the work of Biasi [@Biasi:self-similar-compressible-euler].
## Main result {#sec:12}
In [@Merle-Raphael-Rodnianski-Szeftel:implosion-ii], Merle--Raphaël--Rodnianski--Szeftel wrote:
- *One could, in principle, be able to reconnect the profile to one with constant density for large x and rapidly decaying velocity, instead. This should lead to a singularity formation result for Navier-Stokes for solutions with constant density at infinity. Even more generally, the analysis should be amenable to other boundary conditions and domains, e.g. Navier-Stokes and Euler equations on a torus.*
- \...\[The main theorem in [@Merle-Raphael-Rodnianski-Szeftel:implosion-ii]\] *is proved for spherically symmetric initial data. The symmetry is used in a very soft way, and we expect that the blow up \... is stable modulo finitely many instabilities for non symmetric perturbations.*
Our main theorems answer these two questions in the positive. Broadly speaking, we are able to construct finite time imploding singularities for both Euler and Navier-Stokes with either:
1. smooth periodic initial data (**Theorem [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"}**).
2. smooth, non-radially symmetric initial data that does not vanish at infinity (**Theorem [Theorem 3](#th:euclidean){reference-type="ref" reference="th:euclidean"}**).
The main difficulty in order to tackle this problem is that, contrary to [@Merle-Raphael-Rodnianski-Szeftel:implosion-ii] or [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible], we need to perform a linear stability analysis that allows for *non-radially symmetric* perturbations. There are numerous examples where the upgrade from the radial to the nonradial stability analysis has required a considerable amount of new ideas: for example the energy critical defocusing quintic NLS [@Bourgain:gwp-defocusing-critical-nls-radial; @Grillakis:nls; @Colliander-Keel-Staffilani-Takaoka-Tao:global-wellposedness-energy-critical-nls-R3] or the energy critical wave equation [@Duyckaerts-Kenig-Merle:universality-blowup-energy-critical-wave; @Duyckaerts-Kenig-Merle:universality-blowup-energy-critical-wave-nonradial]. We will divide the linear stability in three parts: the dissipativity of the operator (Section [2.1](#sec:dissipativity){reference-type="ref" reference="sec:dissipativity"}), the maximality (Section [2.2](#sec:maximality){reference-type="ref" reference="sec:maximality"}) and the smoothness of the unstable modes (Section [2.3](#sec:smoothness){reference-type="ref" reference="sec:smoothness"}). All three of them pose obstructions in the non-radial setting and are affected substantially by including non-radial perturbations.
The dissipativity is based on very delicate properties of the profiles ([\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"} and [\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"}). While [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"} is a central part of [@Merle-Raphael-Rodnianski-Szeftel:implosion-i] and [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible], angular perturbations also require the new property [\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"}. The angular perturbations also require us to carry out the high derivative analysis with respect to $\nabla^{m}$, instead of $\Delta^{m/2}$, since one cannot afford the embedding constants that would arise if one tries to control angular derivatives by $\Delta^{m/2}$. Studying the high derivatives of the perturbation using $\nabla^m$ complicates the analysis (since one needs to consider an $m$-tensor instead of a scalar quantity) but allows us to solve that problem.
The maximality of the operator is completely different from [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible; @Merle-Raphael-Rodnianski-Szeftel:implosion-ii] because it becomes the existence problem of a singular PDE in three dimensions, which is much more complicated than the radial case (where one just needs to solve a singular ODE). Instead of solving the PDE directly, we use a Galerkin approximation together with uniform bounds on the real part of the spectrum of our approximations, which allow us to deduce the maximality of the limit operator.
For the smoothness of unstable modes, we decompose the system via vector spherical harmonics, which decouple the PDE system among those modes. To the best of our knowledge, this is the first time that vector spherical harmonics are used to study the stability problem near a radial solution for fluid PDEs, and we believe this will be a very useful tool for non-radial behaviour around radially symmetric profiles. That way, we obtain a singular ODE for every mode. This singular ODE is still more complicated than in the radial setting, since the terms with angular derivatives cannot be diagonalized. This implies that we obtain some coefficients in the ODE that are not uniformly bounded over all modes (since they correspond to angular derivatives). Therefore, the smoothness of each mode does not imply the smoothness of the sum (unlike in the radial case, where there is only one mode), and we need uniform bounds across all the modes.
Another obstruction for the smoothness of unstable modes consists of the singular behaviour of the system of ODEs at $R=1$. While this is present in the radial case, the interaction of this issue with the angular derivative terms mentioned above makes it impossible to implement the same treatment. To resolve this obstacle we use an abstract argument exploiting the fact that the space of unstable modes is of finite dimension, and therefore the smooth unstable modes will form the same set as the $H^m$ unstable modes for some $m$ sufficiently large.
The non-linear stability argument also poses some obstructions with respect to the radial setting. The main difference is that we cannot diagonalize the system with Riemann invariants, so we need to treat some of the highest derivative terms as errors. In order to do that, we need to propagate new estimates in our bootstrap argument.
The fact that we are now able to construct finite time imploding singularities for both Euler and Navier-Stokes equations in a periodic setting may provide a new tool to address blow up or norm inflation phenomena, and more generally, questions related to wave turbulence theory. In fact since in [@Merle-Raphael-Rodnianski-Szeftel:implosion-nls] the authors were able to transfer blow up results from a fluid equation to blow up results for NLS, it is conceivable to think that one may be able to transfer the construction of solutions of fluid equations with growing Sobolev norms (e.g. [@Kiselev-Nazarov:simple-energy-pump-sqg; @He-Kiselev:small-scale-creation-sqg; @Zlatos:exponential-growth-vorticity-gradient-euler-torus; @Kiselev-Yao:small-scale-ipm; @Kiselev-Sverak:double-exponential-euler-boundary]) to solutions of dispersive equations with growing Sobolev norms. Let us recall that Bourgain [@Bourgain:growth-sobolev-norms-schrodinger-quasiperiodic-potential; @Bourgain:growth-sobolev-norms-hamiltonian-pde] was the first to relate the growth of Sobolev norms for solutions to periodic NLS equations to the notion of energy transfer, a fundamental question in wave turbulence.
## Setup of the problem
First of all, let us define $\alpha = \frac{\gamma - 1}{2}$ and the rescaled sound speed $\sigma = \frac{1}{\alpha} \rho^{\alpha}$. We have that $$\begin{aligned}
\rho \partial_t u + \rho u \nabla u &= - \frac{\rho}{\gamma - 1} \nabla (\rho^{2\alpha}) + \nu \Delta u = -\frac{\rho}{\gamma - 1} (\alpha^2 \nabla (\sigma^2)) + \nu \Delta u \\
&= -\rho \frac{2 \alpha^2}{\gamma - 1} \sigma \nabla \sigma + \nu \Delta u = -\alpha \rho \sigma \nabla \sigma + \nu \Delta u \\
\partial_t \sigma &= \rho^{\alpha-1} \partial_t \rho = -\rho^{\alpha - 1} \ensuremath{\mathrm{div\,}}\left( \rho u \right)
= -\rho^{\alpha} \ensuremath{\mathrm{div\,}}(u) - \rho^{\alpha - 1} \nabla \rho \cdot u = -\alpha \sigma \ensuremath{\mathrm{div\,}}(u) - \nabla \sigma \cdot{u},\end{aligned}$$ so the equations for $u, \sigma$ read: $$\begin{aligned}
\label{eq:US1} \begin{split}
\partial_t u &= -u \nabla u - \alpha \sigma \nabla \sigma + \nu \frac{\Delta u}{\sigma^{1/\alpha}}, \\
\partial_t \sigma &= -\alpha \sigma \ensuremath{\mathrm{div\,}}(u) - \nabla \sigma\cdot u.
\end{split}\end{aligned}$$
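As a quick sanity check of this reformulation, the following short symbolic computation (a SymPy sketch included only as an illustration, in one space dimension) verifies that $\frac{1}{\gamma \rho} \partial_x (\rho^\gamma) = \alpha \sigma \partial_x \sigma$ for $\sigma = \frac{1}{\alpha}\rho^\alpha$ and $\alpha = \frac{\gamma-1}{2}$, which is the identity behind the term $-\alpha \sigma \nabla \sigma$ in [\[eq:US1\]](#eq:US1){reference-type="eqref" reference="eq:US1"}.

```python
import sympy as sp

x, gamma = sp.symbols('x gamma', positive=True)
alpha = (gamma - 1) / 2
rho = sp.Function('rho', positive=True)(x)     # generic smooth positive density

sigma = rho**alpha / alpha                                # rescaled sound speed
pressure_term = sp.diff(rho**gamma, x) / (gamma * rho)    # (1/(gamma*rho)) d_x(rho^gamma)
reformulated = alpha * sigma * sp.diff(sigma, x)          # alpha * sigma * d_x(sigma)

print(sp.simplify(pressure_term - reformulated))  # prints 0
```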
Now we perform the self-similar change of variables (depending on the self-similar parameter $r>1$): $$\begin{aligned}
\begin{split} \label{eq:SS_coordinates1}
u(x, t) &= \frac{(T-t)^{\frac{1}{r} - 1} }{r} U \left( \frac{x}{(T-t)^{\frac{1}{r}}} , - \frac{\log (T-t)}{r}\right) \\
\sigma(x, t) &= \frac{(T-t)^{\frac{1}{r} - 1} }{r} S \left( \frac{x}{(T-t)^{\frac{1}{r}}} , - \frac{\log (T-t)}{r}\right)
\end{split} \end{aligned}$$ and define the new self-similar space and time coordinates to be $$\label{eq:SS_coordinates2}
s = - \frac{\log (T-t)}{r}, \qquad\mbox{ and } \qquad y = \frac{x}{(T-t)^{\frac{1}{r}}} = e^s x.$$ We also define $$s_0 = - \frac{\log (T)}{r},$$ corresponding to $t = 0$.
In self-similar coordinates, the equations read: $$\begin{aligned}
\label{eq:US2} \begin{split}
\partial_s U &= -(r-1)U - (y + U) \cdot \nabla U - \alpha S \nabla S + \nu C_{\rm{dis}} e^{-\delta_{\rm{dis}}s }\frac{\Delta U}{S^{1/\alpha}}, \\
\partial_s S &= -(r-1)S - (y + U) \cdot \nabla S - \alpha S \ensuremath{\mathrm{div\,}}(U),
\end{split} \end{aligned}$$ with $y\in e^s\mathbb{T}_{L}^3=\{(y_1,y_2,y_3) \; \mbox{s.t.} \; -e^sL \leq y_i \leq e^s L\}$ in the case of the torus, and $y\in \mathbb R^3$ in the Euclidean case. We have denoted $$C_{\rm{dis}} = \frac{r^{1 + \frac{1}{\alpha} }}{\alpha^{ \frac{1}{\alpha} }}, \qquad \mbox{ and } \qquad \delta_{\rm{dis}} = \frac{r-1}{\alpha} + r - 2.$$ Moreover, we will restrict ourselves to the range of parameters $$\label{eq:range_r}
1 < r < \begin{cases}
1 + \frac{2}{\left( 1 + \sqrt{ \frac{2}{\gamma - 1} } \right)^2}, \qquad &\mbox{ for } 1 < \gamma < \frac{5}{3}, \\
\frac{3\gamma - 1}{2 + \sqrt{3} (\gamma - 1)}, \qquad &\mbox{ for } \gamma \geq \frac{5}{3}.
\end{cases}$$ Note that both expressions agree for $\gamma = 5/3$. In particular, this implies the bounds $$\label{eq:rough_range_r}
1 < r < \sqrt 3, \qquad \mbox{ and } \qquad r < 2 - \frac{1}{\gamma},$$ for all $1 < \gamma < \infty$.
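The following quick numerical check (a NumPy sketch included only as an illustration) confirms that the two branches of [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"} agree at $\gamma = 5/3$, where both equal $3 - \sqrt 3$, and that the rough bounds [\[eq:rough_range_r\]](#eq:rough_range_r){reference-type="eqref" reference="eq:rough_range_r"} hold on a sample grid of $\gamma$.

```python
import numpy as np

def r_max(gamma):
    # upper endpoint of the admissible range of r in eq. (range_r)
    if gamma < 5/3:
        return 1 + 2 / (1 + np.sqrt(2 / (gamma - 1)))**2
    return (3*gamma - 1) / (2 + np.sqrt(3) * (gamma - 1))

# both branches equal 3 - sqrt(3) at gamma = 5/3
print(1 + 2/(1 + np.sqrt(3))**2, (3*(5/3) - 1)/(2 + np.sqrt(3)*(5/3 - 1)), 3 - np.sqrt(3))

# rough bounds r < sqrt(3) and r < 2 - 1/gamma on a sample grid
gammas = np.linspace(1.01, 50, 2000)
r_tops = np.array([r_max(g) for g in gammas])
print(np.all(r_tops < np.sqrt(3)), np.all(r_tops < 2 - 1/gammas))  # True True
```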
In the case of Navier-Stokes ($\nu = 1$), we need to impose an extra condition on our range of parameters, namely: $$\label{eq:condition_for_treating_dissipation}
\delta_{\rm{dis}} = \frac{r-1}{\alpha} + r - 2 > 0.$$
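Before turning to the profiles, the following symbolic computation (a one-dimensional SymPy sketch included only as an illustration, with $\nu = 0$ and the sample values $r = 6/5$, $\alpha = 1/5$) checks that the change of variables [\[eq:SS_coordinates1\]](#eq:SS_coordinates1){reference-type="eqref" reference="eq:SS_coordinates1"}--[\[eq:SS_coordinates2\]](#eq:SS_coordinates2){reference-type="eqref" reference="eq:SS_coordinates2"} turns the momentum equation $\partial_t u + u \partial_x u + \alpha \sigma \partial_x \sigma = 0$ into the first equation of [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"}, up to the overall factor $(T-t)^{\frac1r - 2}/r^2$; since the identity is a pure chain-rule fact, arbitrary smooth profiles can be used in the test.

```python
import sympy as sp

x, t, T = sp.symbols('x t T', positive=True)
r = sp.Rational(6, 5)       # sample self-similar parameter
alpha = sp.Rational(1, 5)   # sample alpha, i.e. gamma = 7/5

# arbitrary smooth profiles (not solutions): only the change of variables is being tested
U = lambda y, s: sp.sin(y + s)
S = lambda y, s: 2 + sp.cos(y - s)

y = x * (T - t)**(-1/r)            # self-similar space variable
s = -sp.log(T - t) / r             # self-similar time variable
pref = (T - t)**(1/r - 1) / r      # amplitude factor in eq. (SS_coordinates1)

u = pref * U(y, s)
sigma = pref * S(y, s)

# physical-variable residual of the (nu = 0) momentum equation
lhs = sp.diff(u, t) + u*sp.diff(u, x) + alpha*sigma*sp.diff(sigma, x)

# self-similar residual: d_s U + (r-1) U + (y + U) d_y U + alpha S d_y S
Y, Sig = sp.symbols('Y Sig')
res = (sp.diff(U(Y, Sig), Sig) + (r - 1)*U(Y, Sig)
       + (Y + U(Y, Sig))*sp.diff(U(Y, Sig), Y) + alpha*S(Y, Sig)*sp.diff(S(Y, Sig), Y))
rhs = (T - t)**(1/r - 2) / r**2 * res.subs({Y: y, Sig: s})

print(sp.simplify(lhs - rhs))   # prints 0
```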
From [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible; @Merle-Raphael-Rodnianski-Szeftel:implosion-i], we know that there exist radially symmetric profiles $(\overline U, \overline S)$ that solve [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"} for $\nu = 0$, with $r$ in the range [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}. That is $$\label{eq:ss_profiles}
(r-1) \overline U + (y + \overline U) \cdot \nabla \overline U + \alpha \overline S \nabla \overline S = 0, \qquad \mbox{ and } \qquad
(r-1) \overline S + (y + \overline U) \cdot \nabla \overline S + \alpha \overline S \ensuremath{\mathrm{div\,}}(\overline U) = 0.$$
More concretely, in the range [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}, [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"}, we know the existence of profiles solving [\[eq:ss_profiles\]](#eq:ss_profiles){reference-type="eqref" reference="eq:ss_profiles"} for almost every $\gamma < 1+2/\sqrt 3$, due to [@Merle-Raphael-Rodnianski-Szeftel:implosion-i]. The set of possible $\gamma$ is not explicit, although $\gamma = 5/3$ (monoatomic gases) is explicitly excluded. We know the existence of profiles for $\gamma = 7/5$ due to [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible].
If we drop condition [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"} (which is only needed for the stability of Navier-Stokes, not the Euler one), [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible] gives the existence of profiles solving [\[eq:ss_profiles\]](#eq:ss_profiles){reference-type="eqref" reference="eq:ss_profiles"} in the regime [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"} for all $\gamma > 1$, including $\gamma = 5/3$.
Moreover, all the profiles discussed above satisfy $$\begin{aligned}
\overline{S} &>0, \label{eq:profiles_positive} \\
|\nabla^j \overline U| + |\nabla^j \overline S | &\lesssim\langle R \rangle^{-(r-1)-j} , \quad \forall j \geq 0, \quad \mbox{ and } \quad \overline S \gtrsim \langle R \rangle^{-r+1}. \label{eq:profiles_decay} \\
1 + \partial_R \overline U_R - \alpha |\partial_R \overline S | &> \widetilde{\eta}, \label{eq:radial_repulsivity}\\
1 + \frac{\overline U_R}{R} - \alpha |\partial_R \overline S | &> \widetilde{\eta},\label{eq:angular_repulsivity} \end{aligned}$$ for some $\widetilde\eta > 0$. Here $R=|y|$ is the radial variable and $j$ is a natural number.
The decay property [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"} follows from a standard analysis of a focus point in the phase portrait of the ODE that $(\overline U, \overline S)$ satisfy. This is shown in [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma A.39] and an analogous proof works for the profiles in [@Merle-Raphael-Rodnianski-Szeftel:implosion-i] (the $j=0$ case is also contained in [@Merle-Raphael-Rodnianski-Szeftel:implosion-i Theorem 2.3 - Item 3]).
The radial repulsivity property [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"} is essential for performing the stability in the radially symmetric case and it corresponds to [@Merle-Raphael-Rodnianski-Szeftel:implosion-i Theorem 2.3 - Item 5; Lemma 2.4] and to [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma A.36] for their respective profiles.
The property [\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"} is a new extra repulsivity property that is needed when there is angular dependence. This property does not appear in the previous literature since it is not needed in the radially symmetric case. We show [\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"} in Lemma [Lemma 37](#lemma:angular_repulsivity){reference-type="ref" reference="lemma:angular_repulsivity"} in the Appendix.
We will study equation [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"} in two settings: the periodic setting and the whole space setting. All of our discussion will focus on the periodic setting and can be adapted to the whole space easily. We set $\mathbb{T}^3$ to be the 3-dimensional torus of period $2$, and let $\mathbb T^3_L$ be the 3-dimensional torus of period $2L$.
**Remark 1**. While we have decided to focus on the physically relevant case of 3 dimensions, our results are expected to hold in any dimension, provided the existence of base objects (i.e. globally self-similar profiles that are solutions to the corresponding ODE similar to [\[eq:ss_profiles\]](#eq:ss_profiles){reference-type="eqref" reference="eq:ss_profiles"}).
**Theorem 2**. *Let $\nu = 1$. Let $\overline U, \overline S$ be self-similar profiles solving [\[eq:ss_profiles\]](#eq:ss_profiles){reference-type="eqref" reference="eq:ss_profiles"} and satisfying [\[eq:profiles_positive\]](#eq:profiles_positive){reference-type="eqref" reference="eq:profiles_positive"}--[\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"}, for some $r$ in the ranges [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}, [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"}. Let $T > 0$ sufficiently small, and $L > 0$ sufficiently large.*
*Then, there exists $C^\infty$ initial data $(u_0,\rho_0)$, with $\rho_0 > 0$, for which equation [\[eq:CNS\]](#eq:CNS){reference-type="eqref" reference="eq:CNS"} on $\mathbb T^3_L$ blows up at time $T$ in a self-similar manner. More concretely, for any fixed $y\in \mathbb R^3$, we have: $$\begin{aligned}
\lim_{t\rightarrow T^-}r (T-t)^{1-\frac{1}{r}} u\left((T-t)^{\frac1r}y,t\right) &= \overline U (|y|), \\
\lim_{t\rightarrow T^-} \left( \alpha^{-1 } r (T-t)^{1-\frac{1}{r}} \right)^{1/\alpha} \rho\left((T-t)^{\frac1r}y,t\right) &= \overline S (|y|)^{1/\alpha} \,.\end{aligned}$$ Moreover, there exists a finite codimension set of initial data satisfying the above conclusions (see Remark [Remark 36](#rem:codimension){reference-type="ref" reference="rem:codimension"} for more details).*
**Theorem 3**. *Let $\nu = 1$. Let $\overline U, \overline S$ be self-similar profiles solving [\[eq:ss_profiles\]](#eq:ss_profiles){reference-type="eqref" reference="eq:ss_profiles"} and satisfying [\[eq:profiles_positive\]](#eq:profiles_positive){reference-type="eqref" reference="eq:profiles_positive"}--[\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"}, for some $r$ in the ranges [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}, [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"}. Let $T > 0$ sufficiently small, and $c > 0$ sufficiently small.*
*Then, there exists $C^\infty$ non-radially symmetric initial data $(u_0,\rho_0)$, with $\rho_0 > c$, for which equation [\[eq:CNS\]](#eq:CNS){reference-type="eqref" reference="eq:CNS"} on $\mathbb R^3$ blows up at time $T$ in a self-similar manner. More concretely, for any fixed $y \in \mathbb R^3$, we have: $$\begin{aligned}
\lim_{t\rightarrow T^-}r(T-t)^{1-\frac{1}{r}} u\left((T-t)^{\frac1r}y,t\right) &= \overline U (|y|), \\
\lim_{t\rightarrow T^-} \left( \alpha^{-1} r (T-t)^{1-\frac{1}{r}} \right)^{1/\alpha} \rho\left((T-t)^{\frac1r}y,t\right) &= \overline S (|y|)^{1/\alpha} \,.\end{aligned}$$ Moreover, there exists a finite codimension set of initial data satisfying the above conclusions (see Remark [Remark 36](#rem:codimension){reference-type="ref" reference="rem:codimension"} for more details).*
*Proof.* The proof is analogous to the one in the torus case except for two differences. First, the boundary terms that appear due to the integration by parts in the energy estimates are 0. This does not affect the proof because in the torus case the boundary terms always cancel with each other. Second, we use the interpolation Lemmas [Lemma 40](#lemma:GN_general){reference-type="ref" reference="lemma:GN_general"}, [Lemma 43](#lemma:GN_generalnoweightwholespace){reference-type="ref" reference="lemma:GN_generalnoweightwholespace"} in place of Lemmas [Lemma 42](#lemma:GN_generalnoweighttorus){reference-type="ref" reference="lemma:GN_generalnoweighttorus"}, [Lemma 41](#lemma:GN_generaltorus){reference-type="ref" reference="lemma:GN_generaltorus"}. ◻
**Remark 4**. Equation [\[eq:CNS\]](#eq:CNS){reference-type="eqref" reference="eq:CNS"} has the following scaling invariance. If $u, \rho$ are solutions to [\[eq:CNS\]](#eq:CNS){reference-type="eqref" reference="eq:CNS"}, then $$u_\lambda (x, t) = \frac{1}{\lambda^{\alpha} } u \left( \frac{x}{\lambda^{\alpha+1}}, \frac{t}{\lambda^{2\alpha+1}} \right)
\qquad \mbox{ and } \qquad
\rho_\lambda (x, t) = \frac{1}{\lambda} \rho \left( \frac{x}{\lambda^{\alpha+1}}, \frac{t}{ \lambda^{2\alpha+1} } \right)$$ are also solutions. In the periodic setting, if $u, \rho$ are defined on $\mathbb T^3_L$, we obtain that $u_\lambda, \rho_\lambda$ are defined on $\mathbb T^3_{L \cdot \lambda^{\alpha+1} }$.
As a consequence, we can generalize Theorem [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"} for any torus size $2L'$ (not necessarily large) just by taking $(u_\lambda, \rho_\lambda)$ for $\lambda = \left( \frac{L'}{L} \right)^{\frac{1}{1+\alpha}}$, where $u,\rho$ are the solutions from Theorem [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"}.
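Since the scaling invariance is a pure chain-rule identity, it can be tested on arbitrary smooth fields: the residuals of [\[eq:CNS\]](#eq:CNS){reference-type="eqref" reference="eq:CNS"} evaluated on $(u_\lambda, \rho_\lambda)$ equal fixed powers of $\lambda$ times the original residuals at the rescaled point. The following one-dimensional SymPy sketch (included only as an illustration, with the sample value $\gamma = 7/5$) verifies this.

```python
import sympy as sp

x, t, lam = sp.symbols('x t lambda', positive=True)
gamma = sp.Rational(7, 5)      # sample adiabatic exponent
alpha = (gamma - 1) / 2        # alpha = 1/5
nu = 1                         # Navier-Stokes

# arbitrary smooth test fields (not solutions): the identity is purely algebraic
u_expr = sp.sin(x + t)
rho_expr = 2 + sp.cos(x - t)

def residuals(U, R):
    # one-dimensional residuals of eq. (CNS): momentum and continuity
    mom = R*sp.diff(U, t) + R*U*sp.diff(U, x) + sp.diff(R**gamma, x)/gamma - nu*sp.diff(U, x, 2)
    mass = sp.diff(R, t) + sp.diff(R*U, x)
    return mom, mass

X, T = x/lam**(alpha + 1), t/lam**(2*alpha + 1)
u_lam = u_expr.subs({x: X, t: T}, simultaneous=True)/lam**alpha
rho_lam = rho_expr.subs({x: X, t: T}, simultaneous=True)/lam

mom_lam, mass_lam = residuals(u_lam, rho_lam)
mom, mass = residuals(u_expr, rho_expr)

# rescaled residuals are lambda^{-(3 alpha + 2)}, resp. lambda^{-(2 alpha + 2)}, times the originals at (X, T)
print(sp.simplify(mom_lam - mom.subs({x: X, t: T}, simultaneous=True)/lam**(3*alpha + 2)))    # 0
print(sp.simplify(mass_lam - mass.subs({x: X, t: T}, simultaneous=True)/lam**(2*alpha + 2)))  # 0
```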
**Remark 5**. Both results hold also for the Euler case, $\nu = 0$. Moreover, in such a case, condition [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"} is not needed. In fact, condition [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"} guarantees that $\delta_{\mathrm{dis}} > 0$, hence the term $\nu C_{\rm{dis}} e^{-\delta_{\rm{dis}}s }\frac{\Delta U}{S^{1/\alpha}}$ in [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"} has an exponential decay, which is crucial in the proof of stability. If $\nu=0$, this term vanishes and thus the condition is not needed. The proof is analogous to the Navier-Stokes one, except that one does not need to bound the dissipation term when doing energy estimates. Note that the Euler result with periodic boundary conditions and *radially symmetric perturbation* can be easily deduced from the non-periodic one ([@Merle-Raphael-Rodnianski-Szeftel:implosion-nls; @Buckmaster-CaoLabora-GomezSerrano:implosion-compressible]) via finite speed of propagation; however, our setting allows us to deduce singularity formation for compressible Euler in the periodic setting for **any** *small perturbation* (in a finite codimension subspace in which we show stability, which will be defined later).
**Remark 6**. Given the local behaviour of the singularity, one can construct solutions that blow-up at $m$ points by gluing $m$ copies of the blow-up profile from Theorem [Theorem 3](#th:euclidean){reference-type="ref" reference="th:euclidean"} in an $m$-fold configuration, far apart from each other so that they can be treated as small perturbations of each of the individual singularities (this is in the spirit of [@Duyckaerts-Kenig-Merle:universality-blowup-energy-critical-wave-nonradial; @Kenig-Merle:gwp-scattering-blowup-critical-focusing-nlw; @Krieger-Schlag-Tataru:slow-blowup-critical-focusing-wave; @Cortazar-delPino-Musso:infinite-time-bubbling-critical-nlh; @Daskalopoulos-delPino-Sesum:type-ii-ancient-compact-solutions-yamabe]). See for example [@Musso-Pacard-Wei:stationary-solutions-euler; @Fan:loglog-blow-up-m-points] for related works in the context of NLS.
## Structure of the proof of Theorems [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"} and [Theorem 3](#th:euclidean){reference-type="ref" reference="th:euclidean"} {#structure-of-the-proof-of-theorems-thperiodic-and-theuclidean}
In this subsection we explain how to construct finite energy solutions for the Navier-Stokes equation out of the globally self-similar solutions (which are stationary solutions of the self-similar equations [\[eq:ss_profiles\]](#eq:ss_profiles){reference-type="eqref" reference="eq:ss_profiles"}) of the Euler equation, both in the periodic and whole space case. From now on, we will only focus on the former case, though our estimates can be made uniform in $L$ (assumed sufficiently large).
We first perform a truncation to the aforementioned stationary solution such that the truncated solution $(\mathfrak X({e^{-s}}y)\overline{U}, \mathfrak X({e^{-s}}y)\overline{S})$ is periodic on $e^s \mathbb T_L^3$. Here $\mathfrak X(x)$ is a smooth cut off function with $$\ensuremath{\mathrm{supp\,}}\mathfrak X(x)\subset B(0,1), \ \ \mathfrak X(x)=1 \ \text{on}\ B(0,\frac{1}{2}), \ |\nabla { \mathfrak X}|\leq 10.$$
Moreover, we define $\widehat{X}(y, s) = \mathfrak X(e^{-s}y)$. We further add a periodic perturbation $(\widetilde{U},\widetilde{S})$ to this truncated profile:
$$\label{eq:periodic_perturbation}
U = \widetilde{U} + \mathfrak X({e^{-s}}y)\overline U=\widetilde{U}+\widehat{X} \overline{U}, \qquad \mbox{ and } \qquad S = \widetilde{S} + \mathfrak X({e^{-s}}y) \overline S=\widetilde{S} +\widehat{X} \overline{S}.$$ The equation for the perturbation $(\widetilde{U},\widetilde{S})$ reads: $$\begin{aligned}
\label{nsequation1}
&\partial_{s}\widetilde{U}=\underbrace{-(r-1)\widetilde{U}-(y+\widehat{X}\overline{U})\cdot{\nabla\widetilde{U}}-\alpha(\widehat{X}\overline{S})\cdot{\nabla\widetilde{S}}-\widetilde{U}\cdot\nabla(\widehat{X}\overline{U}) -\alpha \widetilde{S} \nabla (\widehat{X}\overline{S})}_{\mathcal L_{u}^{e}}\underbrace{-\widetilde{U}\cdot \nabla\widetilde{U}-\alpha\widetilde{S}\nabla \widetilde{S}}_{\mathcal N_{u}} \\\nonumber
&\underbrace{-\partial_{s}(\widehat{X}\overline{U})-(r-1)(\widehat{X}\overline{U})-(\widehat{X}\overline{U}+y)\cdot\nabla(\widehat{X}\overline{U})-\alpha(\widehat{X}\overline{S})\cdot\nabla(\widehat{X}\overline{S})}_{\mathcal E_u}
+ \underbrace{ \nu C_{\rm{dis}} e^{-\delta_{\rm{dis}}s }\frac{\Delta U}{S^{1/\alpha}} }_{\ensuremath{\mathcal}F_{dis}}\end{aligned}$$ $$\begin{aligned}
\begin{split} \label{nsequation2}
&\partial_{s}\widetilde{S}=\underbrace{-(r-1)\widetilde{S}-(y+\widehat{X}\overline{U})\cdot{\nabla\widetilde{S}}-\alpha(\widehat{X}\overline{S})\ensuremath{\mathrm{div\,}}(\widetilde{U})-\widetilde{U}\cdot\nabla(\widehat{X}\overline{S})-\alpha \widetilde{S} \ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U})}_{\mathcal L_{s}^{e}}\underbrace{-\widetilde{U}\cdot \nabla\widetilde{S}-\alpha\widetilde{S}\ensuremath{\mathrm{div\,}}(\widetilde{U})}_{\mathcal N_{s}}\\
&\underbrace{-\partial_{s}(\widehat{X}\overline{S})-(r-1)(\widehat{X}\overline{S})-(\widehat{X}\overline{U}+y)\cdot\nabla(\widehat{X}\overline{S})-\alpha(\widehat{X}\overline{S})\ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U})}_{\mathcal E_s}.
\end{split} \end{aligned}$$ Moreover, since $(\overline{U},\overline{S})$ is a stationary profile, we can simplify $\mathcal E_u$, $\mathcal E_s$ and have
$$\begin{aligned}
\mathcal E_u &= -(r-1)(\widehat{X}\overline{U})-\widehat{X}(\overline{U}+y)\cdot\nabla(\overline{U})-\alpha\widehat{X}(\overline{S})\cdot\nabla(\overline{S}) -(\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{U} \notag \\
&\qquad-\widehat{X}\overline{U}\cdot\nabla (\widehat{X})\overline{U}-
\alpha(\widehat{X}^2-\widehat{X})\overline{S}\cdot\nabla\overline{S}-\alpha\widehat{X}\overline{S}\cdot \nabla(\widehat{X})\overline{S}-\partial_{s}(\widehat{X})\overline{U}-(y\cdot \nabla\widehat{X})\overline{U} \notag \\
&=-(\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{U}-\widehat{X}\overline{U}\cdot\nabla (\widehat{X})\overline{U}-
\alpha(\widehat{X}^2-\widehat{X})\overline{S}\cdot\nabla\overline{S}-\alpha\widehat{X}\overline{S}\cdot \nabla(\widehat{X})\overline{S}, \label{Eudefinition} \\
%
\mathcal E_s &= -(r-1)(\widehat{X}\overline{S})-\widehat{X}(\overline{U}+y)\cdot\nabla(\overline{S})-\alpha\widehat{X}(\overline{S})\ensuremath{\mathrm{div\,}}{\overline{U}}-(\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{S}\notag \\
&\qquad-\widehat{X}\overline{U}\cdot\nabla (\widehat{X})\overline{S}-\alpha
(\widehat{X}^2-\widehat{X})\overline{S}\ensuremath{\mathrm{div\,}}(\overline{U})-\alpha\widehat{X}\overline{U}\cdot \nabla(\widehat{X})\overline{S}-\partial_{s}(\widehat{X})\overline{S}-(y\cdot \nabla\widehat{X})\overline{S} \notag \\
&=-(\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{S}-(\alpha+1)\widehat{X}\overline{U}\cdot\nabla (\widehat{X})\overline{S}-\alpha
(\widehat{X}^2-\widehat{X})\overline{S}\ensuremath{\mathrm{div\,}}(\overline{U}),\label{Esdefinition}\end{aligned}$$
where we have used that $\partial_s(\widehat{X}) = -y \cdot \nabla \widehat{X}$.
Let us define $C_0$ sufficiently large as in Lemma [Lemma 22](#C0choice){reference-type="ref" reference="C0choice"}. Let $\chi_1: \mathbb R \rightarrow [0, 1]$ be a smooth symmetric cut-off function supported on $\left[ -\frac32 C_0, \frac32 C_0 \right]$ and satisfying $\chi_1 (x) = 1$ for $|x| \leq C_0$. Let $\chi_2 : \mathbb R \rightarrow [0, 1]$ be a smooth symmetric cut-off function supported on $\left[ -\frac52 C_0, \frac52 C_0 \right]$ and satisfying $\chi_2 (x) = 1$ for $|x| \leq 2 C_0$, $\chi_2(x)>0$ in $\left(-\frac52 C_0, \frac52 C_0 \right)$ .
![The graph of $J(1-\chi_1)$ and $\chi_2$.](Figures/cutoff.pdf){#fig:cut off width="60%"}
Using those cut-offs, we define the cut-off linear operator as: $$\label{eq:cutoffL}
\ensuremath{\mathcal}L_u = \chi_2 \ensuremath{\mathcal}L_u^e - J(1-\chi_1), \quad \ensuremath{\mathcal}L_s = \chi_2 \ensuremath{\mathcal}L_s^e - J(1-\chi_1),$$ where $J$ is a sufficiently large constant that will be fixed later.
We will study the cut-off linearized operator $(\mathcal L_u, \mathcal L_s)$ in the space $X$, which we define as follows. First, we consider $\widetilde X$ to be the space $H^{m}_u(Q , \mathbb{R}^3) \times H^{m}_s (Q, \mathbb{R})$ where the $u, s$ subscripts make reference to $\widetilde U$ (vector field) and $\widetilde S$ (scalar field). $Q$ is a cube centered at the origin of length $4C_0$ and we take periodic boundary conditions. We select $m$ to be a sufficiently large integer. We define $X$ to be the subspace of $\widetilde X$ formed by $(U, S)\in \widetilde X$ such that both $U$ and $S$ are compactly supported on $B(0, 3C_0)$. One can thus think of $X$ as a subspace of $H^{m}_u (B(0, 3C_0), \mathbb R^3) \times H^{m}_s (B(0, 3C_0), \mathbb R)$ with the appropriate vanishing conditions at the boundary. Whenever we take Fourier series of $(U, S) \in X$ it should be understood that we do Fourier series on the cube $Q$.
We endow our space $X$ with the standard inner product $$\langle (f_u, f_s), (g_u, g_s) \rangle_X = \int_{B(0, 3C_0)} \left( \nabla^m f_u \cdot \nabla^m g_u + \nabla^m f_s \cdot \nabla^m g_s + f_u \cdot g_u + f_s g_s \right)$$ and we obtain the inherited norm $$\label{spacexnorm}
\| (f_u, f_s) \|_X^2 = \int_{B(0, 3C_0)} \left( |\nabla^m f_u |^2 + (\nabla^m f_s )^2 + |f_u|^2 + f_s^2 \right),$$ where $\nabla^m f_s$ is an $m$-tensor indexed by $[3]^m = \{ 1, 2, 3 \}^m$ containing all possible combinations of directional derivatives. In the case of $\nabla^m f_u$, we have an $(m+1)$-tensor since we have an extra index indicating the component of the vector $f_u$.
We now introduce the definition of a maximally dissipative operator.
**Definition 7**. A dissipative operator is a linear operator $A$ defined on a linear subspace $D(A)$ of a Hilbert space $X$ such that for all $x \in D(A)$ $$\begin{aligned}
\Re \langle Ax,x\rangle \leq 0.\end{aligned}$$ A dissipative operator is called maximally dissipative if it is dissipative and for all $\lambda > 0$ the operator $\lambda I- A$ is surjective, meaning that the image of the domain $D(A)$ under $\lambda I - A$ is the whole space $X$.
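A standard model example (recalled here only for orientation, and not part of the argument): take $X = L^2(\mathbb T^3)$ and $A = \Delta$ with domain $D(A) = H^2(\mathbb T^3)$. Then integration by parts and a mode-by-mode solution of the resolvent equation give $$\Re \langle \Delta f, f \rangle = - \int_{\mathbb T^3} |\nabla f|^2 \leq 0, \qquad \mbox{ and } \qquad (\lambda I - \Delta) f = g \iff \mathcal{F}(f)(k) = \frac{\mathcal{F}(g)(k)}{\lambda + |k|^2},$$ so $\lambda I - \Delta$ is surjective for every $\lambda > 0$ and $\Delta$ is maximally dissipative.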
We will show that $\ensuremath{\mathcal}L = (\ensuremath{\mathcal}L_u, \ensuremath{\mathcal}L_s)$ is maximally dissipative modulo a finite-rank perturbation and that its eigenfunctions are smooth.
**Proposition 8**. *For $\delta_g$ sufficiently small, $J, m$ sufficiently large, the following holds. The Hilbert space $X$ can be decomposed as $V_{\rm{sta}} \oplus V_{\rm{uns}}$, where both $V_{\rm{sta}}, V_{\rm{uns}}$ are invariant subspaces of $\ensuremath{\mathcal}L$. $V_{\rm{uns}}$ is finite dimensional and formed by smooth functions. Moreover, there exists a metric $B$ of $V_{uns}$ such that the decomposition satisfies: $$\begin{aligned}
\begin{split} \label{eq:decomposition_condition}
\Re\langle \ensuremath{\mathcal}L v, v \rangle_B &\geq \frac{-6}{10}\delta_g \| v \|_B^2, \qquad \forall v \in V_{\rm{uns}}, \\
\left\| e^{t\ensuremath{\mathcal}L} v \right\|_X &\leq e^{-t\delta_g/2} \| v \|_X, \qquad \forall v \in V_{\rm{sta}},
\end{split} \end{aligned}$$ where we use $\Re$ to denote the real part.*
Section [2](#sec:linear){reference-type="ref" reference="sec:linear"} will consist of a proof of Proposition [Proposition 8](#prop:maxdissmooth){reference-type="ref" reference="prop:maxdissmooth"}. For the main difficulties we refer to the discussion about the linear stability in Section [1.2](#sec:12){reference-type="ref" reference="sec:12"}. Let us first sketch an outline of the proof.
We first define the finite codimension subspace $X_K \subset X$, as the elements of $X$ for which the Fourier coefficients $\mathcal{F}( U_i)(k) = \mathcal{F} (S) (k) = 0$ for all $|k| < K$. The first step of the proof will consist in showing that $\ensuremath{\mathcal}L+1$ is dissipative for elements of $X_K$ (cf. Definition [Definition 7](#def:dissimax){reference-type="ref" reference="def:dissimax"}), concretely: $$\langle \ensuremath{\mathcal}L (U, S), (U, S) \rangle_X \leq -\| (U, S) \|_X^2, \qquad \forall (U, S) \in X_K.$$
The second step will consist of a Galerkin type argument showing that $\ensuremath{\mathcal}L - \lambda$ is surjective on $X$ for sufficiently large $\lambda$. That will allow us to conclude that $\ensuremath{\mathcal}L+1$ is a finite-dimensional perturbation of a maximally dissipative operator on $X$. In particular, this gives a very good understanding of the spectra of $\ensuremath{\mathcal}L$ and $\ensuremath{\mathcal}L^\ast$ for eigenvalues with real part bigger than $-1$ and allows us to define a finite dimensional space $V_{\rm{uns}}$ and a finite codimensional space $V_{\rm{sta}}$ satisfying [\[eq:decomposition_condition\]](#eq:decomposition_condition){reference-type="eqref" reference="eq:decomposition_condition"} as in Proposition [Proposition 16](#prop:maxdismod){reference-type="ref" reference="prop:maxdismod"}.
The third and last step will be to conclude that the elements of $V_{\rm{uns}}$ are smooth. In order to do so, we will consider $V_{\rm{uns}}(m)$ depending on $m$ and show that all the spaces $V_{\rm{uns}}(m)$, restricted to $B(0, C_0)$, coincide for all sufficiently large $m$. The main step will be to show that a function satisfying the eigenfunction equation on $B(0, C_0)$ can be extended uniquely to an eigenfunction on $B(0, 3C_0)$ (Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}). In order to do so, we will decouple the equation using vector spherical harmonics, and obtain uniform bounds over all modes by controlling a carefully chosen form of energy (Lemma [Lemma 20](#eigenfunctionbound){reference-type="ref" reference="eigenfunctionbound"}).
Once Proposition [Proposition 8](#prop:maxdissmooth){reference-type="ref" reference="prop:maxdissmooth"} is proven, we will conclude Theorem [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"} by establishing nonlinear stability. We will use three different ingredients. First, we need to treat the equation outside the cut-offs $\chi_1$ and $\chi_2$ (since the linear analysis only covers the region where those cut-offs are $1$). To do so, we will carefully design our bootstrap assumptions so that we can ensure that the perturbations are being damped in this region along trajectories $y = y_0 e^s$ (which correspond to stationary points after undoing the self-similar change of variables). Secondly, we will handle the dissipation, which is not included in the equation for the profiles, and therefore it cannot be treated as a perturbation. However, condition [\[eq:condition_for_treating_dissipation\]](#eq:condition_for_treating_dissipation){reference-type="eqref" reference="eq:condition_for_treating_dissipation"} ensures that the dissipative term will decay exponentially in self-similar variables. In order to bound it, we will use weighted energy estimates at a higher derivative level than the linear stability (regularity higher than $m$), together with lower bounds for $S$ to rule out possible vacuum. Finally, we need to control the (finitely many) unstable modes arising from the linear stability (that is, the modes given by $V_{\mathrm{uns}}$ in Proposition [Proposition 8](#prop:maxdissmooth){reference-type="ref" reference="prop:maxdissmooth"}). This can be done by restricting the initial data to a finite codimension set (as it is done in Theorem [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"} and Theorem [Theorem 3](#th:euclidean){reference-type="ref" reference="th:euclidean"}) and the way to formalize it is via a topological argument that exploits the unstable structure via Brouwer's fixed point theorem (see for example [@Cote-Martel-Merle:construction-multisoliton-supercritical-gkdv-nls]).
## Organization of the paper
The paper is organized as follows: Section [2](#sec:linear){reference-type="ref" reference="sec:linear"} is devoted to the study of the linearized operator around $\overline U, \overline S$, and it consists of the proof of Proposition [Proposition 8](#prop:maxdissmooth){reference-type="ref" reference="prop:maxdissmooth"}. It is organized in three parts: Section [2.1](#sec:dissipativity){reference-type="ref" reference="sec:dissipativity"} shows that the linearized operator is dissipative, Section [2.2](#sec:maximality){reference-type="ref" reference="sec:maximality"} shows that it is maximal, and Section [2.3](#sec:smoothness){reference-type="ref" reference="sec:smoothness"} shows that the unstable modes are smooth. Section [3](#sec:nonlinear){reference-type="ref" reference="sec:nonlinear"} upgrades the linear estimates to nonlinear, bootstrapping the bounds of the perturbation and using a topological argument to conclude that there exists a set of initial conditions of finite codimension that lead to a finite time imploding singularity. Finally, the Appendix contains a proof of property [\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"} for the globally self-similar solution, as well as some other properties of the solution and some functional analytic results.
## Notation
Throughout the paper we will use the following convention:
We will use $C_x$ (resp. $C$) to denote any constant that may depend on $x$ (resp. independent of all the other parameters). These constants may change from line to line but they are uniformly bounded by a universal constant dependent on $x$ (independent of all other parameters). Similarly, we will denote by $X\lesssim Y$ and by $X\lesssim_x Y$ whenever $X\leq C Y$ and $X\leq C_x Y$ for some $C$, $C_x$ respectively.
We use $\nabla^{l}f$ for the $l$-th order tensor containing all $l$ order derivatives of $f$. In the case where $f$ is a vector (typically $f = U$) then $\nabla^l U$ is a $(l+1)$-th tensor. We also denote by $| \nabla^k f|$ its $\ell^2$ norm. Note that this norm controls all the derivatives *accounting for reorderings of indices*. For example, we have $$|\nabla^j f|^2 = \sum_{\beta \in [3]^j} |\partial_\beta f |^2 \geq \binom{j}{j'} |\partial_1^{j'} \partial_2^{j-j'} f|^2 .$$ Thus, we see that $|\nabla^j f|$ has a stronger control of the mixed derivatives by a combinatorial number, just because any mixed derivative appears multiple times in $\nabla^j f$ as the indices are reordered.
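A two-line check of this multiplicity count (included only as an illustration): among all multi-indices $\beta \in [3]^j$, exactly $\binom{j}{j'}$ of them are reorderings of the multi-index behind $\partial_1^{j'} \partial_2^{j-j'}$.

```python
from itertools import product
from math import comb

j, jp = 5, 2
# multi-indices beta in {1,2,3}^j that are reorderings of (1,...,1,2,...,2) with jp ones
count = sum(1 for beta in product((1, 2, 3), repeat=j)
            if sorted(beta) == [1]*jp + [2]*(j - jp))
print(count, comb(j, jp))   # both print 10 = binomial(5, 2)
```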
For $\beta=(\beta_1,\beta_2...,\beta_K)$, we let $\partial_{\beta}=\partial_{\beta_1,\beta_2,\beta_3...\beta_K}$, and $\partial_{\beta^{(j)}}=\partial_{\beta_1,\beta_2,...\beta_{j-1},\beta_{j+1}....\beta_{K}}$ (that is, $\beta_j$ denotes the $j$-th component of $\beta$ and $\beta^{(j)}$ denotes the subindex obtained by erasing the $j$-th component in $\beta$).
We will use $y$ as our self-similar variable and we will denote $R = |y|$ for the radius (given that $r$ is reserved, since $1/r$ is the self-similar exponent of our ansatz). We will denote with $\widehat{R}$ the unitary vector field in the radial direction, that is, $\widehat R = \frac{y}{|y|} = \frac{y}{R}$.
In Subsection [2.3](#sec:smoothness){reference-type="ref" reference="sec:smoothness"} we will also use the decomposition into spherical harmonics. For any $n = (n_1, n_2) \in \mathbb Z^2$ such that $|n_2| \leq n_1$ there exists a normalized eigenfunction of the Laplacian on the sphere: $e_n(\theta, \psi)$ (where $\theta$ and $\psi$ are the azimuthal and polar angles of spherical coordinates). It satisfies that $\Delta e_n = -n_1 (n_1+1)e_n$ and the system $\{ e_n \}$ is orthonormal. In particular, any function $f(y)$ can be decomposed into its spherical modes as $$f(y) = \sum_{|n_2|\leq n_1} f_n(R) e_n(\theta, \psi),$$ where the functions $f_n$ only depend on the radius. We will normally omit the condition $|n_2| \leq n_1$ in the summation subindex and whenever we are summing over modes we will just indicate it as $\sum_n$. We will also use vector spherical harmonics [@Hill:spherical-harmonics] that allow to decompose any vector field $V(y)$ as $$V(y) = \sum_n \left( V_{R, n}(R) e_n \widehat R + V_{\Psi, n} (R) R\nabla e_n + V_{\Phi, n}(R) y \wedge \nabla e_n \right).$$ Here, we note that the modes $V_{R, n}(R)$, $V_{\Psi, n}(R)$ and $V_{\Phi, n}(R)$ are radially symmetric functions.
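As a small illustration of this decomposition (a SymPy sketch included here only for convenience), the following computation takes the unnormalized mode $e \propto \cos\theta = y_3/R$ (that is, $n_1 = 1$, $n_2 = 0$) and checks that the three vector fields $e_n \widehat R$, $R \nabla e_n$ and $y \wedge \nabla e_n$ are pointwise orthogonal, and that $e$ satisfies the spherical eigenvalue relation with eigenvalue $-n_1(n_1+1) = -2$.

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3', real=True)
pos = sp.Matrix([y1, y2, y3])
R = sp.sqrt(y1**2 + y2**2 + y3**2)

e = y3 / R                                            # unnormalized mode with n_1 = 1, n_2 = 0
grad_e = sp.Matrix([sp.diff(e, v) for v in (y1, y2, y3)])

b_R = e * pos / R          # radial piece       e_n * Rhat
b_Psi = R * grad_e         # gradient piece     R * grad(e_n)
b_Phi = pos.cross(grad_e)  # azimuthal piece    y ^ grad(e_n)

# the three pieces are pointwise orthogonal
print(sp.simplify(b_R.dot(b_Psi)), sp.simplify(b_R.dot(b_Phi)), sp.simplify(b_Psi.dot(b_Phi)))

# since e is 0-homogeneous, Delta e = R^{-2} Delta_sphere e, and Delta_sphere e = -2 e
lap_e = sum(sp.diff(e, v, 2) for v in (y1, y2, y3))
print(sp.simplify(R**2 * lap_e + 2 * e))   # prints 0
```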
## Acknowledgements
GCL and GS have been supported by NSF under grant DMS-2052651. GCL, JGS and JS have been partially supported by the MICINN (Spain) research grant number PID2021--125021NA--I00. JGS has been partially supported by NSF under Grants DMS-2245017 and DMS-2247537, by the AGAUR project 2021-SGR-0087 (Catalunya). JS has been partially supported by an AMS-Simons Travel Grant. GS has been supported by the Simons Foundation through the Simons Collaboration on Wave Turbulence. JGS is also thankful for the hospitality of the MIT Department of Mathematics, where parts of this paper were done. We thank Tristan Buckmaster for useful conversations.
# Study of the linearized operator {#sec:linear}
## Dissipativity {#sec:dissipativity}
In this subsection we will show that $\ensuremath{\mathcal}L+1$ is dissipative (cf. Definition [Definition 7](#def:dissimax){reference-type="ref" reference="def:dissimax"}) on $X_K$, a finite codimension subspace of $X$ where the Fourier modes smaller than $K$ vanish, that is $\mathcal F( U_i)(k) = 0$, $\mathcal F (S)(k) = 0$ for $|k| < K$.
Note that for any $f \in X_K$ we have the inequality $\| \nabla f \|_{L^2} \geq K \|f\|_{L^2}$.[^1] For $f, g \in X_K$, we have $$\left| \langle f, g \rangle_X - \langle \nabla^m f, \nabla^m g\rangle_{L^2} \right| \leq \| f \|_{L^2} \| g \|_{L^2} \leq K^{-2m} \|f\|_X \|g\|_X$$
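For completeness, the Fourier-side computation behind both estimates is elementary (we ignore the fixed normalization factor coming from the side length of $Q$, which only shifts constants): by Parseval, for $f \in X_K$, $$\| \nabla f \|_{L^2}^2 = \sum_{|k| \geq K} |k|^2 |\mathcal{F}(f)(k)|^2 \geq K^2 \sum_{|k| \geq K} |\mathcal{F}(f)(k)|^2 = K^2 \| f \|_{L^2}^2,$$ and iterating $m$ times gives $\| f \|_{L^2} \leq K^{-m} \| \nabla^m f \|_{L^2} \leq K^{-m} \| f \|_X$. Applying this to both $f$ and $g$ produces the factor $K^{-2m}$ above.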
Therefore, it suffices to show that $$\int_{B(3C_0)} \nabla^m \ensuremath{\mathcal}L_u \cdot \nabla^m \widetilde U + \nabla^m \ensuremath{\mathcal}L_s \cdot \nabla^m \widetilde S \leq - \| (\widetilde U, \widetilde S )\|_X^2,$$ where we denote by $B(3C_0)$ the ball of radius $3C_0$ (for the rest of this section, the center of the ball is always assumed to be zero and we just indicate its radius).
Now, let us divide the operators $\ensuremath{\mathcal}L_u, \ensuremath{\mathcal}L_s$ as: $$\begin{aligned}
\label{Lequation}
\ensuremath{\mathcal}L_u &= \underbrace{-\chi_{2} (y + \overline U) \cdot \nabla \widetilde U -\chi_{2} \alpha \overline S \nabla \widetilde S }_{\ensuremath{\mathcal}M_u}\underbrace{- J(1-\chi_1)\widetilde{U}}_{\ensuremath{\mathcal}D_u}
\underbrace{ -\chi_{2}(r-1) \widetilde U - \chi_{2}\widetilde U \cdot \nabla \overline U - \chi_{2}\alpha \widetilde S \nabla \overline S }_{\ensuremath{\mathcal}L_u -\ensuremath{\mathcal}D_u- \ensuremath{\mathcal}M_u}, \\
\ensuremath{\mathcal}L_s &= \underbrace{ - \chi_{2}(y + \overline U) \cdot \nabla \widetilde S - \alpha\chi_{2} \overline S \text{div} ( \widetilde U ) }_{\ensuremath{\mathcal}M_s}\underbrace{- J(1-\chi_1)\widetilde{S}}_{\ensuremath{\mathcal}D_s}
\underbrace{ -(r-1)\chi_{2}\widetilde S - \chi_{2}\widetilde U \cdot \nabla \overline S - \chi_{2}\alpha \widetilde S \text{div} (\overline U) }_{\ensuremath{\mathcal}L_s -\ensuremath{\mathcal}D_s- \ensuremath{\mathcal}M_s}.\end{aligned}$$ Notice that $\widehat{X}=1$ whenever $\chi_1,\chi_2\neq 0$, so the cut-off $\widehat X$ does not appear in the expressions of $\ensuremath{\mathcal}L_u$, $\ensuremath{\mathcal}L_s$.
Now we start to show the dissipativity.
**Lemma 9**. *Taking $K$ sufficiently large in terms of $J$, and $J$ large enough depending on $m$, there is a universal constant $C$ (independent of $m$) such that $$\begin{aligned}
&\int_{B(3C_0)} | \nabla^m (\ensuremath{\mathcal}L_u - \ensuremath{\mathcal}M_u-\ensuremath{\mathcal}D_u) \cdot \nabla^m \widetilde U | \leq C \| (\widetilde U, \widetilde S) \|_X^2,\\
&\int_{B(3C_0)} | \nabla^m (\ensuremath{\mathcal}L_s - \ensuremath{\mathcal}M_s-\ensuremath{\mathcal}D_s) \cdot \nabla^m \widetilde S | \leq C \| (\widetilde U, \widetilde S) \|_X^2,\end{aligned}$$ and $$\begin{aligned}
\label{Duestimate}
\int_{B(3C_0)} \nabla^m (\ensuremath{\mathcal}D_u) \cdot \nabla^m \widetilde U &\leq -J\int_{B(3C_0)}(1-\chi_1)|\nabla^m \widetilde U|^2 + C \| (\widetilde U, \widetilde S) \|_X^2,\\
\label{Dsestimate}
\int_{B(3C_0)} \nabla^m (\ensuremath{\mathcal}D_s) \cdot \nabla^m \widetilde S &\leq -J\int_{B(3C_0)}(1-\chi_1)|\nabla^m \widetilde S|^2 + C \| (\widetilde U, \widetilde S) \|_X^2.\end{aligned}$$*
*Proof.* Note that in the terms $\ensuremath{\mathcal}L_\circ - \ensuremath{\mathcal}M_\circ - \ensuremath{\mathcal}D_\circ$ or in $\ensuremath{\mathcal}D_\circ$ (for $\circ \in \{ u, s \}$) there are no derivatives in the perturbation. We distribute all the derivatives of $\nabla^m$ among those terms and consider two cases: either all $m$ derivatives hit the perturbation or not. If we are in the second case, we get a factor $\frac{1}{K}$ because $\| \widetilde U \|_{H^{m-1}} \leq \frac{1}{K} \| \widetilde U \|_{H^{m}}$. Since $K$ is sufficiently large with respect to $m$, this is enough to absorb any other high derivative hitting the profile and the cut-off, like $\| \nabla^m \overline U \|_{L^\infty}$, $\| \nabla^m \chi_1 \|_{L^\infty}$,$\| \nabla^m \chi_2 \|_{L^\infty}$. Since there are $O\left( 3^m\right)$ such terms, the total contribution of those terms can be bounded by $\| (\widetilde U, \widetilde S ) \|_X^2$ taking $K$ sufficiently large.
The remaining case is when all derivatives hit the perturbation. That case for $\ensuremath{\mathcal}D_u$ and $\ensuremath{\mathcal}D_s$ directly leads to the first terms in the right-hand sides of [\[Duestimate\]](#Duestimate){reference-type="eqref" reference="Duestimate"} and [\[Dsestimate\]](#Dsestimate){reference-type="eqref" reference="Dsestimate"}, respectively. This concludes the proof of [\[Duestimate\]](#Duestimate){reference-type="eqref" reference="Duestimate"}--[\[Dsestimate\]](#Dsestimate){reference-type="eqref" reference="Dsestimate"}.
We just need to treat the case where all derivatives fall on the perturbation for $\ensuremath{\mathcal}L_u - \ensuremath{\mathcal}M_u - \ensuremath{\mathcal}D_u$ or $\ensuremath{\mathcal}L_s - \ensuremath{\mathcal}M_s - \ensuremath{\mathcal}D_s$. In that case, we obtain terms like $$\begin{aligned}
\sum_{\beta \in [3]^m}\int_{B(3C_0)} (\chi_2(\partial_\beta \widetilde U) \cdot \nabla \overline U) \cdot \partial_\beta \widetilde U &=
\sum_{j, k} \sum_{\beta \in [3]^m}\int_{B(3C_0)} (\chi_2(\partial_\beta \widetilde U_j) \cdot \partial_j \overline U_k) \cdot \partial_\beta \widetilde U_k \\
&\lesssim\sum_{\beta \in [3]^m} \int_{B(3C_0)} \chi_2 |\partial_\beta \widetilde U|^2 \lesssim\| (\widetilde U, \widetilde S) \|_X^2,\end{aligned}$$ given that $|\chi_2| \leq 1$ and $\| \overline U \|_{W^{1, \infty}} \lesssim 1$. Other possibilities are very similar to this term but interchanging $\widetilde U$ with $\widetilde S$ or $\overline U$ with $\overline S$. ◻
Now, let us focus on the bounds of the main contribution, coming from $\ensuremath{\mathcal}M_u$ and $\ensuremath{\mathcal}M_s$. We have that $$\ensuremath{\mathcal}M_{u, i} = -\sum_j \chi_2(y_j + \overline U_j) \partial_j \widetilde U_i - \alpha \chi_2\overline S \partial_{i} \widetilde S \quad \mbox{ and } \quad
\ensuremath{\mathcal}M_s = -\sum_j \chi_2 (y_j + \overline U_j) \partial_j \widetilde S - \alpha \chi_2\overline S \text{div} (\widetilde U).$$
We will need to compute the integral $$\int_{B(3C_0)} \nabla^m \ensuremath{\mathcal}M_u \cdot \nabla^m \widetilde U + \nabla^m \ensuremath{\mathcal}M_s \nabla^m \widetilde S.$$ We distribute the derivatives of $\nabla^m \ensuremath{\mathcal}M_u$ and $\nabla^m \ensuremath{\mathcal}M_s$ and identify the terms with $m$ or $m+1$ derivatives in the perturbation (note that $\ensuremath{\mathcal}M_u$ and $\ensuremath{\mathcal}M_s$ already have one derivative on the perturbations $\widetilde U$, $\widetilde S$).
The terms of $\nabla^m \ensuremath{\mathcal}M_u$ and $\nabla^m \ensuremath{\mathcal}M_s$ with $m+1$ or $m$ derivatives on the perturbation are: $$\begin{aligned}
\ensuremath{\mathcal}J_{1, i, \beta}' &= -\sum_{j \in [3]} \chi_2 (y_j + \overline U_j) \partial_{\beta} \partial_j \widetilde U_i, \qquad
&\ensuremath{\mathcal}J_{1, i, \beta} &= - \sum_{\ell = 1}^m \sum_{j\in [3]} \chi_2 (\partial_{\beta_\ell} \overline U_j) \cdot \partial_{\beta^{(\ell)}} \partial_j \widetilde U_i, \\
%
\ensuremath{\mathcal}J_{2, i, \beta}' &= -\alpha \chi_2\overline S \partial_{\beta} \partial_{i} \widetilde S, \qquad
&\ensuremath{\mathcal}J_{2, i, \beta} &= -\sum_{\ell = 1}^m \alpha \chi_2 (\partial_{\beta_\ell} \overline S) \cdot \partial_{\beta^{(\ell)}} \partial_{i} \widetilde S, \\
%
\ensuremath{\mathcal}J_{3, \beta}' &= -\sum_{j \in [3]} \chi_2(y_j + \overline U_j) \partial_{\beta} \partial_j \widetilde S, \qquad
& \ensuremath{\mathcal}J_{3, \beta} &= - \sum_{\ell = 1}^m \sum_{j \in [3]} \chi_2 (\partial_{\beta_\ell} \overline U_j ) \cdot \partial_{\beta^{(\ell)}} \partial_j \widetilde S, \\
%
\ensuremath{\mathcal}J_{4, \beta}' &= -\alpha \chi_2\overline S \partial_{\beta} \text{div} (\widetilde U), \qquad
& \ensuremath{\mathcal}J_{4, \beta} &= - \sum_{\ell = 1}^m \alpha \chi_2 (\partial_{\beta_\ell} \overline S) \cdot \partial_{\beta^{(\ell)}} \text{div} (\widetilde U),\end{aligned}$$ where $i \in [3]$ and $\beta \in [3]^m$, together with $$\begin{aligned}
\begin{split} \label{eq:chiterms}
\ensuremath{\mathcal}J_{\chi_2,u,i, \beta} &= -\sum_{\ell=1}^m \partial_{\beta_\ell}(\chi_2)\cdot \sum_{j\in [3]} \left( \partial_{\beta^{(\ell)}} \partial_j \widetilde U_i \cdot (y_j + \overline U_j)- \alpha \partial_{\beta^{(\ell)}} \partial_{j} \widetilde S \cdot \overline S\right), \\
\ensuremath{\mathcal}J_{\chi_2,s, \beta} &=- \sum_{\ell = 1}^m \partial_{\beta_\ell}(\chi_2)\cdot \sum_{j\in [3]} \left( \partial_{\beta^{(\ell)}} \partial_j \widetilde S \cdot (y_j + \overline U_j) - \alpha \partial_{\beta^{(\ell)}} \ensuremath{\mathrm{div\,}}(\widetilde U) \overline S \right),
\end{split} \end{aligned}$$ and $$\begin{aligned}
\begin{split}\label{eq:dominantterms}
\ensuremath{\mathcal}J_{1, i, \beta}^\ast &= - \sum_{\ell = 1}^m \chi_2 \sum_{j\in [3]} \partial_{\beta_\ell}( y_j) \cdot \partial_{\beta^{(\ell)}} \partial_j \widetilde U_i = - m \chi_2 \partial_{\beta} \widetilde U_i,\\
\ensuremath{\mathcal}J_{3, \beta}^\ast &= - \sum_{\ell = 1}^m \chi_2 \sum_{j\in [3]} \partial_{\beta_\ell} ( y_j) \cdot \partial_{\beta^{(\ell)}} \partial_j \widetilde S
= -m \chi_2 \partial_{\beta} \widetilde S.
\end{split}\end{aligned}$$
The terms $\ensuremath{\mathcal}J_k'$ correspond to terms with $m+1$ derivatives on the perturbation (that is, all $m$ derivatives from $\nabla^m$ fall on the perturbation, which already carries one derivative). In the remaining terms written above, exactly $m-1$ derivatives from $\nabla^m$ hit the perturbation, and we keep track of where the remaining derivative goes: the terms $\ensuremath{\mathcal}J_k$ correspond to the cases where that derivative falls on the profile $\overline U$ or $\overline S$, the terms $\ensuremath{\mathcal}J_{\chi_2, u}$, $\ensuremath{\mathcal}J_{\chi_2, s}$ correspond to the cases where it falls on the cut-off $\chi_2$, and the terms $\ensuremath{\mathcal}J_k^\ast$ correspond to the cases where it falls on $y$.
First of all, let us treat the remaining cases, where fewer than $m-1$ derivatives from $\nabla^m$ fall on the perturbation, that is, the terms that we have not written above.
**Lemma 10** (Terms from $\ensuremath{\mathcal}M$ with less than $m-1$ derivatives on the perturbation). *For $K$ sufficiently large depending on $J$ and $m$, we have: $$\begin{aligned}
\begin{split} \label{eq:Mloworder}
\sum_{\beta \in [3]^m} \sum_{i \in [3]} \int_{B(3C_0)} \left( \partial_{\beta} \ensuremath{\mathcal}M_{u, i} - (\ensuremath{\mathcal}J_{1, i, \beta}^\ast + \ensuremath{\mathcal}J_{1, i, \beta}' + \ensuremath{\mathcal}J_{1, i, \beta} + \ensuremath{\mathcal}J_{2, i, \beta}' + \ensuremath{\mathcal}J_{2, i, \beta} + \ensuremath{\mathcal}J_{\chi_2, u, i, \beta} ) \right) \partial_{\beta} \widetilde U_i &\leq \| (\widetilde U, \widetilde S) \|_X^2, \\
\sum_{\beta \in [3]^m}\int_{B(3C_0)} \left( \partial_{\beta} \ensuremath{\mathcal}M_{s} - (\ensuremath{\mathcal}J_{3, \beta}^\ast + \ensuremath{\mathcal}J_{3, \beta}' + \ensuremath{\mathcal}J_{3, \beta} + \ensuremath{\mathcal}J_{4, \beta}' + \ensuremath{\mathcal}J_{4, \beta} + \ensuremath{\mathcal}J_{\chi_2, s, \beta} ) \right) \partial_{\beta} \widetilde S &\leq \| (\widetilde U, \widetilde S) \|_X^2.
\end{split}\end{aligned}$$*
*Proof.* By definition of the $\ensuremath{\mathcal}J$ terms, any term obtained by distributing derivatives in $\partial_{\beta} \ensuremath{\mathcal}M_{u, i} - (\ensuremath{\mathcal}J_{1, i, \beta}^\ast + \ensuremath{\mathcal}J_{1, i, \beta}' + \ensuremath{\mathcal}J_{1, i, \beta} + \ensuremath{\mathcal}J_{2, i, \beta}' + \ensuremath{\mathcal}J_{2, i, \beta} + \ensuremath{\mathcal}J_{\chi_2, u, i, \beta} )$ has at most $m-1$ derivatives on the perturbation. Therefore, any such term is bounded in $L^2$ by: $$\begin{aligned}
&C \left( \| \chi_2 \|_{W^{m, \infty}} + \| J(1-\chi_1) \|_{W^{m, \infty}} \right) \left( \| \overline U \|_{W^{m, \infty}} + \| \overline S \|_{W^{m, \infty}} \right) \left( \| \widetilde U \|_{H^{m-1}(B(3C_0))} + \| \widetilde S \|_{H^{m-1}(B(3C_0))} \right) \\
&\qquad \lesssim_{J, m} \frac{\| (\widetilde U, \widetilde S) \|_X}{K}\end{aligned}$$ where the $\lesssim_{J, m}$ sign indicates that the implied constant is allowed to depend on $J$ and $m$. Pairing each such term against $\partial_{\beta} \widetilde U_i$ or $\partial_{\beta} \widetilde S$ via Cauchy-Schwarz, and noting that the number of such terms is bounded by a constant depending only on $m$, we obtain the desired statement provided $K$ is sufficiently large in terms of $m$ and $J$. ◻
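For the reader's convenience, the pairing step in the proof above is simply Cauchy-Schwarz in $L^2(B(3C_0))$: for any of the terms $T$ described above, $$\left| \int_{B(3C_0)} T\, \partial_{\beta} \widetilde U_i \right| \leq \| T \|_{L^2(B(3C_0))} \| \partial_{\beta} \widetilde U_i \|_{L^2(B(3C_0))} \lesssim_{J, m} \frac{\| (\widetilde U, \widetilde S) \|_X^2}{K},$$ where we used that $\| \partial_{\beta} \widetilde U_i \|_{L^2(B(3C_0))} \lesssim \| (\widetilde U, \widetilde S) \|_X$ for $|\beta| = m$; the pairings against $\partial_{\beta} \widetilde S$ are handled identically.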
Now, we just need to bound the $\ensuremath{\mathcal}J$ terms. The dominant terms will be the terms $\ensuremath{\mathcal}J_\ell^\ast$ and most of the work will consist in showing that they dominate the $\ensuremath{\mathcal}J_\ell$ terms. Thus, we start with the $\ensuremath{\mathcal}J_{\chi_2}$ terms and the $\ensuremath{\mathcal}J_{\ell}'$ terms, which will be easier to bound.
**Lemma 11** (Terms with a derivative falling on the cut-off). *We have that $$\begin{aligned}
\begin{split} \label{eq:cutoffterms}
\sum_{\beta \in [3]^m} &\left( \sum_{i=1}^3 \int_{B(3C_0)} \ensuremath{\mathcal}J_{\chi_2, u, i, \beta} \partial_{\beta} \widetilde U_i
+ \int_{B(3C_0)} \ensuremath{\mathcal}J_{\chi_2, s, \beta} \partial_{\beta} \widetilde S \right) \\
&\lesssim m \sum_{\beta \in [3]^m }\int_{B(3C_0)} (1-\chi_1) \left( |\partial_{\beta} \widetilde U|^2 + (\partial_{\beta} \widetilde S)^2 \right) .
\end{split} \end{aligned}$$*
*Proof.* Let $\circ \in \{ u, s\}$. We have that $$\begin{aligned}
|\ensuremath{\mathcal}J_{\chi_2, \circ, \beta}| &\lesssim\sum_{\ell = 1}^m | \partial_{\beta_\ell} \chi_2| ( 1 + |\overline U| + |\overline S| ) \left( | \nabla \partial_{\beta^{(\ell)}} \widetilde U| + | \nabla \partial_{\beta^{(\ell)}} \widetilde S |\right) \\
&\lesssim|\nabla \chi_2| \sum_{\ell = 1}^m (|\nabla \partial_{\beta^{(\ell)}} \widetilde U| + | \nabla \partial_{\beta^{(\ell)} } \widetilde S | ).\end{aligned}$$ Thus, the left hand side of [\[eq:cutoffterms\]](#eq:cutoffterms){reference-type="eqref" reference="eq:cutoffterms"} is bounded (up to an absolute constant) by $$\sum_{\beta \in [3]^m} \int_{B(3C_0)} |\nabla \chi_2| \left( \sum_{\ell = 1}^m (|\nabla \partial_{\beta^{(\ell)}} \widetilde U| + | \nabla \partial_{\beta^{(\ell)} } \widetilde S | ) \right) \left( |\partial_{\beta} \widetilde U| + | \partial_{\beta} \widetilde S | \right)$$
Now, we use pointwise Cauchy-Schwarz, that is: $$|\partial_{j} \partial_{\beta^{(\ell)}} \widetilde U| | \partial_{\beta} \widetilde S| \leq \frac12 |\partial_j \partial_{\beta^{(\ell)}} \widetilde U |^2 + \frac12 |\partial_{\beta} \widetilde S|^2$$ (and similarly for other terms arising from the product). Thus, we see that the left hand side of [\[eq:cutoffterms\]](#eq:cutoffterms){reference-type="eqref" reference="eq:cutoffterms"} is bounded (up to an absolute constant) by $$\label{eq:narita1}
m \sum_{\beta \in [3]^m} \int_{B(3C_0)} |\nabla \chi_2| \left( \left| \partial_{\beta} \widetilde U \right|^2 + \left| \partial_{\beta} \widetilde S \right|^2 \right).$$
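For the reader's convenience, the counting behind the factor $m$ in [\[eq:narita1\]](#eq:narita1){reference-type="eqref" reference="eq:narita1"} is the elementary reindexing $$\sum_{\ell = 1}^m \sum_{\beta \in [3]^m} \left| \nabla \partial_{\beta^{(\ell)}} f \right|^2 = 3 \sum_{\ell = 1}^m \sum_{\gamma \in [3]^{m-1}} \sum_{j \in [3]} \left| \partial_j \partial_{\gamma} f \right|^2 = 3 m \sum_{\beta \in [3]^m} \left| \partial_{\beta} f \right|^2,$$ applied to $f \in \{ \widetilde U, \widetilde S \}$, together with the fact that the terms $|\partial_{\beta} \widetilde U|^2$, $|\partial_{\beta} \widetilde S|^2$ produced by Cauchy-Schwarz appear at most $m$ times (once for each $\ell$).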
We conclude the Lemma by noting that $|\nabla \chi_2| \lesssim 1-\chi_1$, since $\chi_1 = 0$ in the annulus where $\nabla \chi_2$ is supported. ◻
**Lemma 12** (Energy estimates for terms with $m+1$ derivatives). *We have that $$\begin{aligned}
&\sum_{\beta \in [3]^m}\int_{B(3C_0)} \left( \sum_i (\ensuremath{\mathcal}J_{1, i, \beta}' + \ensuremath{\mathcal}J_{2, i, \beta}') \partial_{\beta} \widetilde U_i + (\ensuremath{\mathcal}J_{3, \beta}' + \ensuremath{\mathcal}J_{4, \beta}') \partial_{\beta} \widetilde S \right) dy \lesssim\| (\widetilde U, \widetilde S) \|_X^2.\end{aligned}$$*
*Proof.* The bound for $\ensuremath{\mathcal}J_{1,i}'$ follows from integration by parts: $$\int_{B(3C_0)} \ensuremath{\mathcal}J_{1, i, \beta}' \partial_{\beta} \widetilde U_i dy = - \int_{B(3C_0)} \chi_2(y+\overline U) \nabla (\partial_{\beta} \widetilde U_i ) \partial_{\beta} \widetilde U_i dy = \frac{1}{2} \int_{B(3C_0)} \text{div}(\chi_2(y + \overline U)) (\partial_{\beta} \widetilde U_i )^2 dy.$$ Note that there are no boundary terms since $\chi_2$ is supported on $B(5C_0/2)$. Similarly, for $\ensuremath{\mathcal}J_3'$ we have: $$\int_{B(3C_0)} \ensuremath{\mathcal}J_{3, \beta}' \partial_{\beta} \widetilde S dy = -\int_{B(3C_0)} \chi_2(y + \overline U) \cdot \nabla (\partial_{\beta} \widetilde S ) \partial_{\beta} \widetilde S dy = \frac{1}{2} \int_{B(3C_0)} \text{div} (\chi_2(y + \overline U)) (\partial_{\beta} \widetilde S )^2 dy.$$
With respect to the terms $\ensuremath{\mathcal}J_{2, i, \beta}'$ and $\ensuremath{\mathcal}J_{4, \beta}'$, we have $$\begin{aligned}
\int_{B(3C_0)} \sum_i (\ensuremath{\mathcal}J_{2, i, \beta}' \partial_{\beta} \widetilde U_i) + \ensuremath{\mathcal}J_{4, \beta}' \partial_{\beta} \widetilde S
&= -\alpha \int_{B(3C_0)} \chi_2\overline S \left( \sum_i \partial_i \partial_{\beta} \widetilde S \partial_{\beta} \widetilde U_i + \text{div} (\partial_{\beta} \widetilde U ) \partial_{\beta} \widetilde S \right) dy\\
&= -\alpha \int_{B(3C_0)} \chi_2\overline S \ensuremath{\mathrm{div\,}}\left( \partial_{\beta} \widetilde S \, \partial_{\beta} \widetilde U \right) dy\\
&= \alpha \int_{B(3C_0)} \nabla(\chi_2 \overline S) \cdot \partial_{\beta} \widetilde U \partial_{\beta} \widetilde S dy.\end{aligned}$$ Using Cauchy-Schwarz, we conclude the statement. ◻
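For completeness, the quantities produced by the integrations by parts above are bounded pointwise on $B(3C_0)$, using (as before) that $\| \overline U \|_{W^{1, \infty}} + \| \overline S \|_{W^{1, \infty}} \lesssim 1$ and $|\nabla \chi_2| \lesssim 1$: $$\left| \text{div}\left( \chi_2 (y + \overline U) \right) \right| \leq |\nabla \chi_2|\, |y + \overline U| + \chi_2 \left( 3 + |\ensuremath{\mathrm{div\,}}\overline U| \right) \lesssim 1, \qquad \left| \nabla (\chi_2 \overline S) \right| \leq |\nabla \chi_2|\, |\overline S| + \chi_2 |\nabla \overline S| \lesssim 1,$$ so that the Cauchy-Schwarz step indeed yields a bound by $\| (\widetilde U, \widetilde S) \|_X^2$.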
Combining all the previous Lemmas (Lemmas [Lemma 9](#lemma:goodtermestimate){reference-type="ref" reference="lemma:goodtermestimate"}, [Lemma 10](#lemma:Mloworder){reference-type="ref" reference="lemma:Mloworder"}, [Lemma 11](#lemma:cutoffterms){reference-type="ref" reference="lemma:cutoffterms"}, [Lemma 12](#lemma:highderterms){reference-type="ref" reference="lemma:highderterms"}), we obtain that for some sufficiently large constant $C$: $$\begin{aligned}
\begin{split}
\int_{B(3C_0)} &\left( \nabla^m \ensuremath{\mathcal}L_u \cdot \nabla^m \widetilde U + \nabla^m \ensuremath{\mathcal}L_s \cdot \nabla^m \widetilde S \right) \leq C \| (\widetilde U, \widetilde S )\|_X^2 + (- J + C m) \int_{B(3C_0)} (1-\chi_1) \left( |\nabla^m \widetilde U|^2 + (\nabla^m \widetilde S)^2 \right) \\
&\quad
+ \sum_{\beta\in [3]^m} \int_{B(3C_0)} \left( \ensuremath{\mathcal}J_{1, \beta}^\ast \partial_{\beta} \widetilde U + \ensuremath{\mathcal}J_{3, \beta}^\ast \partial_{\beta} \widetilde S \right) + \sum_{\beta\in [3]^m} \int_{B(3C_0)} \left( \sum_i (\ensuremath{\mathcal}J_{1, i, \beta} + \ensuremath{\mathcal}J_{2, i, \beta}) \partial_{\beta} \widetilde U_i + ( \ensuremath{\mathcal}J_{3, \beta} + \ensuremath{\mathcal}J_{4, \beta} ) \partial_{\beta} \widetilde S \right) \\
& \leq C \| (\widetilde U, \widetilde S )\|_X^2 - \int_{B(3C_0)} \left( \frac{J(1-\chi_1)}{2} + m \chi_2 \right) \left( |\nabla^m \widetilde U|^2 + (\nabla^m \widetilde S)^2 \right) \\
&\quad + \sum_{\beta \in [3]^m }\int_{B(3C_0)} \left( \sum_i (\ensuremath{\mathcal}J_{1, i, \beta} + \ensuremath{\mathcal}J_{2, i, \beta}) \partial_{\beta} \widetilde U_i + ( \ensuremath{\mathcal}J_{3, \beta} + \ensuremath{\mathcal}J_{4, \beta} ) \partial_{\beta} \widetilde S \right) \label{eq:alcorcon}
\end{split} \end{aligned}$$ where in the second inequality we used that $J$ is sufficiently large depending on $m$ and the explicit expressions for $\ensuremath{\mathcal}J_1^\ast$, $\ensuremath{\mathcal}J_3^\ast$. Now, we proceed to study the terms arising from $\ensuremath{\mathcal}{J}_{1, i}$, $\ensuremath{\mathcal}{J}_{2, i}$, $\ensuremath{\mathcal}{J}_{3}$, $\ensuremath{\mathcal}{J}_{4}$.
### Term $\mathcal{J}_{1, i}$
We have that $$\begin{aligned}
\label{eq:ponteceso}
\sum_{i=1}^3 \sum_{\beta \in [3]^m} \int_{B(3C_0)} \ensuremath{\mathcal}J_{1, i, \beta} \partial_{\beta} \widetilde U_i
&= - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{j \in [3]} \sum_{\beta \in [3]^m}
\int_{B(3C_0)} \chi_2 (\partial_{\beta_\ell} \overline U_j) \cdot \partial_j \partial_{\beta^{(\ell)}} \widetilde U_i \partial_{\beta_\ell} \partial_{\beta^{(\ell)}} \widetilde U_i \\
&= - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{j \in [3]} \sum_{\beta^{(\ell)} \in [3]^{m-1}} \sum_{\beta_\ell \in [3]}
\int_{B(3C_0)} \chi_2 (\partial_{\beta_\ell} \overline U_j) \cdot \partial_j \partial_{\beta^{(\ell)}} \widetilde U_i \partial_{\beta_\ell} \partial_{\beta^{(\ell)}} \widetilde U_i. \nonumber\end{aligned}$$
Using the radial symmetry of $\overline U$ (that is $\overline U = \overline U_R \frac{y}{R}$), we have $$\label{eq:pontecesures}
\partial_{\beta_\ell} \overline U_j = \partial_{\beta_\ell} \left( \frac{y_j}{R} \overline U_R \right) = \left( \partial_R \overline U_R - \frac{\overline U_R}{R} \right) \frac{y_j y_{\beta_\ell}}{R^2} + \delta_{\beta_\ell, j} \frac{\overline U_R}{R}.$$
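For completeness, [\[eq:pontecesures\]](#eq:pontecesures){reference-type="eqref" reference="eq:pontecesures"} is just the product rule combined with the radial symmetry of the profile, which gives $\partial_{\beta_\ell} \overline U_R = (\partial_R \overline U_R) \frac{y_{\beta_\ell}}{R}$ and $\partial_{\beta_\ell} \big( \frac{y_j}{R} \big) = \frac{\delta_{\beta_\ell, j}}{R} - \frac{y_j y_{\beta_\ell}}{R^3}$: $$\partial_{\beta_\ell} \left( \frac{y_j}{R} \overline U_R \right) = \frac{y_j}{R} (\partial_R \overline U_R) \frac{y_{\beta_\ell}}{R} + \overline U_R \left( \frac{\delta_{\beta_\ell, j}}{R} - \frac{y_j y_{\beta_\ell}}{R^3} \right) = \left( \partial_R \overline U_R - \frac{\overline U_R}{R} \right) \frac{y_j y_{\beta_\ell}}{R^2} + \delta_{\beta_\ell, j} \frac{\overline U_R}{R}.$$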
Plugging [\[eq:pontecesures\]](#eq:pontecesures){reference-type="eqref" reference="eq:pontecesures"} into [\[eq:ponteceso\]](#eq:ponteceso){reference-type="eqref" reference="eq:ponteceso"}, and noting that $\sum_j \frac{y_j}{R}\partial_j = \partial_R$, we have that $$\begin{aligned}
\sum_{i\in [3]} \sum_{\beta \in [3]^m} \int_{B(3C_0)} \ensuremath{\mathcal}J_{1, i, \beta} \partial_{\beta} \widetilde U_i
&= - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{\beta^{(\ell)} \in [3]^{m-1} }
\int_{B(3C_0)} \chi_2\left( \partial_R \overline U_R - \frac{\overline U_R}{R} \right) \cdot | \partial_R \partial_{\beta^{(\ell)}} \widetilde U_i |^2 \notag \\
&\qquad - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{\beta \in [3]^{m} }
\int_{B(3C_0)} \chi_2 \frac{\overline U_R}{R} | \partial_{\beta} \widetilde U_i |^2 \notag \\
&= - m \sum_{i \in [3]} \sum_{\widetilde\beta \in [3]^{m-1} }
\int_{B(3C_0)} \chi_2\left( \partial_R \overline U_R - \frac{\overline U_R}{R} \right) \cdot | \partial_R \partial_{\widetilde\beta} \widetilde U_i |^2 \notag \\
&\qquad - m \sum_{i \in [3]} \sum_{\widetilde\beta \in [3]^{m-1} }
\int_{B(3C_0)} \chi_2 \frac{\overline U_R}{R} \left( | \partial_R \partial_{\widetilde\beta} \widetilde U_i |^2 + | \nabla_{\theta} \partial_{\widetilde\beta} \widetilde U_i |^2 \right) \notag, \end{aligned}$$ where we decomposed $|\nabla \partial_{\widetilde\beta} \widetilde U_i|^2 = |\partial_R\partial_{\widetilde\beta} \widetilde U_i|^2 + |\nabla_{\theta} \partial_{\widetilde\beta} \widetilde U_i|^2$, where $\nabla_{\theta}$ corresponds to all the angular components of the gradient in spherical coordinates.
Finally, we obtain $$\label{eq:J1}
\sum_{i\in [3]} \sum_{\beta \in [3]^m} \int_{B(3C_0)} \ensuremath{\mathcal}J_{1, i, \beta} \partial_{\beta} \widetilde U_i = -m \sum_{i=1}^3 \int_{B(3C_0)} \chi_2 \left( \partial_R \overline U_R \left| \partial_R \nabla^{m-1} \widetilde U_i \right|^2 + \frac{\overline U_R}{R} \left| \nabla_\theta \nabla^{m-1} \widetilde U_i \right| ^2 \right).$$
### Term $\ensuremath{\mathcal}{J}_3$
Similarly as in the case of $\ensuremath{\mathcal}J_1$, we compute: $$\begin{aligned}
\sum_{\beta \in [3]^m }\int_{B(3C_0)} \ensuremath{\mathcal}J_{3, \beta} \partial_{\beta} \widetilde S
&= - \sum_{\ell = 1}^m \sum_{\beta \in [3]^m } \sum_{j \in [3]}\int_{B(3C_0)} \chi_2 (\partial_{\beta_\ell} \overline U_j ) \partial_{\beta^{(\ell)}} \partial_j \widetilde S \partial_{\beta} \widetilde S \\
&= - \sum_{\ell = 1}^m \sum_{\beta^{(\ell)} \in [3]^{m-1} } \sum_{j \in [3]} \sum_{\beta_\ell \in [3]}\int_{B(3C_0)} \chi_2 (\partial_{\beta_\ell} \overline U_j ) \partial_j \partial_{\beta^{(\ell)}} \widetilde S \partial_{\beta_\ell} \partial_{\beta^{(\ell)}} \widetilde S. \end{aligned}$$ Using again [\[eq:pontecesures\]](#eq:pontecesures){reference-type="eqref" reference="eq:pontecesures"}, we have that $$\begin{aligned}
\sum_{\beta \in [3]^m }\int_{B(3C_0)} \ensuremath{\mathcal}J_{3, \beta} \partial_{\beta} \widetilde S
&= - \sum_{\ell = 1}^m \sum_{\beta^{(\ell)} \in [3]^{m-1} } \int_{B(3C_0)} \chi_2 \left( \partial_R \overline U_R - \frac{\overline U_R}{R}\right) |\partial_R \partial_{\beta^{(\ell)}} \widetilde S|^2 \notag \\
&\qquad - \sum_{\ell = 1}^m \sum_{\beta \in [3]^{m} } \int_{B(3C_0)} \chi_2 \frac{\overline U_R}{R} | \partial_{\beta} \widetilde S|^2 \notag \\
&= - m \sum_{\widetilde\beta \in [3]^{m-1} } \int_{B(3C_0)} \chi_2 \left( \partial_R \overline U_R - \frac{\overline U_R}{R}\right) |\partial_R \partial_{\widetilde\beta} \widetilde S|^2 \notag \\
&\qquad - m \sum_{\widetilde\beta \in [3]^{m-1} } \int_{B(3C_0)} \chi_2 \frac{\overline U_R}{R} \left( \left| \partial_R \partial_{\widetilde\beta} \widetilde S \right|^2 + \left| \nabla_\theta \partial_{\widetilde\beta} \widetilde S \right|^2 \right) \notag,\end{aligned}$$
Therefore, $$\label{eq:J3}
\sum_{\beta \in [3]^m }\int_{B(3C_0)} \ensuremath{\mathcal}J_{3, \beta} \partial_{\beta} \widetilde S
\leq -m \int_{B(3C_0)} \chi_2
\left(
\partial_R \overline U_R \left| \partial_R \nabla^{m-1}\widetilde S \right|^2 +
\frac{\overline U_R}{R} \left| \nabla_\theta \nabla^{m-1}\widetilde S \right|^2
\right).$$
### Terms $\ensuremath{\mathcal}J_{2, i}$
With respect to $\ensuremath{\mathcal}J_2$, we have $$\begin{aligned}
\sum_{\beta\in [3]^m} \sum_{i=1}^3 \int_{B(3C_0)} \ensuremath{\mathcal}J_{2, i, \beta} \partial_{\beta} \widetilde U_i &= - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{\beta \in [3]^m} \int_{B(3C_0)} \chi_2 \alpha \partial_{\beta_\ell} \overline S \partial_i \partial_{\beta^{(\ell)}} \widetilde S \partial_{\beta} \widetilde U_i \\
&= - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{\beta^{(\ell)} \in [3]^{m-1} } \int_{B(3C_0)} \chi_2 \alpha \partial_R \overline S \partial_i \partial_{\beta^{(\ell)}} \widetilde S \sum_{\beta_{\ell} \in [3] }
\frac{y_{\beta_\ell}}{R} \partial_{\beta_\ell} \partial_{\beta^{(\ell)}} \widetilde U_i \\
&= - \sum_{\ell = 1}^m \sum_{i \in [3]} \sum_{\beta^{(\ell)} \in [3]^{m-1} } \int_{B(3C_0)} \chi_2 \alpha \partial_R \overline S \partial_i \partial_{\beta^{(\ell)}} \widetilde S
\partial_R \partial_{\beta^{(\ell)}} \widetilde U_i \\
&\leq \sum_{\ell = 1}^m \sum_{\beta^{(\ell)} \in [3]^{m-1} } \int_{B(3C_0)} \chi_2 \alpha |\partial_R \overline S| \sum_{i \in [3]} \left( \frac{|\partial_i \partial_{\beta^{(\ell)}} \widetilde S|^2}{2} +
\frac{|\partial_R \partial_{\beta^{(\ell)}} \widetilde U_i|^2}{2} \right) \\
&\leq \sum_{\ell = 1}^m \int_{B(3C_0)} \chi_2 \alpha |\partial_R \overline S| \sum_{\beta \in [3]^{m} }\left( \frac{| \partial_{\beta} \widetilde S|^2}{2} +
\sum_{i \in [3]} \frac{| \partial_{\beta} \widetilde U_i|^2}{2} \right).\end{aligned}$$ We obtain that $$\label{eq:J2}
\sum_{\beta\in [3]^m} \sum_{i=1}^3 \int_{B(3C_0)} \ensuremath{\mathcal}J_{2, i, \beta} \partial_{\beta} \widetilde U_i \leq \frac{m \alpha}{2} \int_{B(3C_0)} \chi_2 |\partial_R \overline S| \left( |\nabla^m \widetilde S|^2 + | \nabla^m \widetilde U |^2 \right).$$
### Term $\ensuremath{\mathcal}J_{4}$
We have that $$\begin{aligned}
\sum_{\beta \in [3]^m} \int_{B(3C_0)} \ensuremath{\mathcal}J_{4, \beta} \partial_\beta \widetilde S
&= - \sum_{\ell = 1}^m \sum_{i\in [3]} \sum_{\beta \in [3]^m} \int_{B(3C_0)} \chi_2 \alpha \partial_{\beta_\ell} \overline S \partial_i \partial_{\beta^{(\ell)}} \widetilde U_i \partial_{\beta} \widetilde S \\
&= - \sum_{\ell = 1}^m \sum_{i\in [3]} \sum_{\beta^{(\ell)} \in [3]^{m-1}} \int_{B(3C_0)} \chi_2 \alpha \partial_R \overline S \partial_i \partial_{\beta^{(\ell)}} \widetilde U_i \sum_{\beta_\ell \in [3]}\frac{y_{\beta_\ell}}{R}\partial_{\beta_\ell} \partial_{\beta^{(\ell)}} \widetilde S \\
&= - \sum_{\ell = 1}^m \sum_{i\in [3]} \sum_{\beta^{(\ell)} \in [3]^{m-1}} \int_{B(3C_0)} \chi_2 \alpha \partial_R \overline S \partial_i \partial_{\beta^{(\ell)}} \widetilde U_i \partial_{R} \partial_{\beta^{(\ell)}} \widetilde S \\
&\leq \sum_{\ell = 1}^m \sum_{\beta^{(\ell)} \in [3]^{m-1}} \int_{B(3C_0)} \chi_2 \alpha | \partial_R \overline S | \sum_{i\in [3]} \left( \frac{| \partial_i \partial_{\beta^{(\ell)}} \widetilde U_i |^2}{2} + \frac{ | \partial_{R} \partial_{\beta^{(\ell)}} \widetilde S |^2}{2}\right) \\
&\leq \sum_{\ell = 1}^m \int_{B(3C_0)} \chi_2 \alpha | \partial_R \overline S | \sum_{\beta \in [3]^{m}} \left( \sum_{i\in [3]} \frac{| \partial_{\beta} \widetilde U_i |^2}{2} + \frac{ | \partial_{\beta} \widetilde S |^2}{2}\right).\end{aligned}$$ Thus, we obtain that $$\label{eq:J4}
\sum_{\beta \in [3]^m} \int_{B(3C_0)} \ensuremath{\mathcal}J_{4, \beta} \partial_\beta \widetilde S
\leq \frac{m\alpha}{2} \int_{B(3C_0)} \chi_2 |\partial_R \overline S| \left( |\nabla^m \widetilde U|^2 + |\nabla^m \widetilde S|^2 \right).$$
### Conclusion
Recall the radial and angular repulsivity properties of the profile from [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"}--[\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"}: $$- \partial_R \overline U_R + \alpha | \partial_R \overline S | < 1-\widetilde{\eta} \qquad \mbox{ and } \qquad - \frac{\overline U_R}{R} + \alpha |\partial_R \overline S | < 1-\widetilde{\eta},$$ for some $\widetilde{\eta} > 0$. Combining [\[eq:J1\]](#eq:J1){reference-type="eqref" reference="eq:J1"}, [\[eq:J3\]](#eq:J3){reference-type="eqref" reference="eq:J3"}, [\[eq:J2\]](#eq:J2){reference-type="eqref" reference="eq:J2"}, [\[eq:J4\]](#eq:J4){reference-type="eqref" reference="eq:J4"}, and using the repulsivity properties of the profile, we obtain $$\begin{aligned}
&\sum_{\beta \in [3]^m} \int_{B(3C_0)} \left( \sum_{i=1}^3 (\ensuremath{\mathcal}J_{1, i, \beta} + \ensuremath{\mathcal}J_{2, i, \beta}) \partial_{\beta} \widetilde U_i + ( \ensuremath{\mathcal}J_{3, \beta} + \ensuremath{\mathcal}J_{4, \beta} ) \partial_{\beta} \widetilde S \right) \leq
m (1-\widetilde{\eta}) \left( \| \nabla^m \widetilde U \|_{L^2}^2 + \| \nabla^m \widetilde S \|_{L^2}^2 \right),\end{aligned}$$ for some $\widetilde{\eta} > 0$ fixed.[^2] Now, we plug this back into [\[eq:alcorcon\]](#eq:alcorcon){reference-type="eqref" reference="eq:alcorcon"}, and get $$\begin{aligned}
\int_{B(3C_0)} \left( \nabla^m \ensuremath{\mathcal}L_u \cdot \nabla^m \widetilde U + \nabla^m \ensuremath{\mathcal}L_s \nabla^m \widetilde S \right)
&\leq C \| (\widetilde U, \widetilde S )\|_X^2 - \int_{B(3C_0)} \left( \frac{J(1-\chi_1)}{2} + m \chi_2 \right) \left( |\nabla^m \widetilde U|^2 + (\nabla^m \widetilde S)^2 \right) \\
&\quad + m (1-\widetilde{\eta}) \left( \| \nabla^m \widetilde U \|_{L^2}^2 + \| \nabla^m \widetilde S \|_{L^2}^2 \right).\end{aligned}$$ Since $J$ is sufficiently large depending on $m$, we can assume $J \geq 2m$. Noting that $\chi_1$ is $0$ whenever $\chi_2 < 1$ (that is, $\chi_1 = 0$ for $|y| \geq 2C_0$), we see that $\left( \frac{J(1-\chi_1)}{2} + m \chi_2 \right) \geq m$. Therefore $$\begin{aligned}
\int_{B(3C_0)} \left( \nabla^m \ensuremath{\mathcal}L_u \cdot \nabla^m \widetilde U + \nabla^m \ensuremath{\mathcal}L_s \nabla^m \widetilde S \right)
&\leq C \| (\widetilde U, \widetilde S )\|_X^2 - m \widetilde{\eta} \int_{B(3C_0)} \left( |\nabla^m \widetilde U|^2 + (\nabla^m \widetilde S)^2 \right) \\
& \leq \left( C- \frac{m\widetilde{\eta}}{2} \right) \| (\widetilde U, \widetilde S )\|_X^2.\end{aligned}$$ Here we used that for $(\widetilde U, \widetilde S) \in X_K$ we have that $\|\nabla^m \widetilde U \|_{L^2}^2 + \| \nabla^m \widetilde S \|_{L^2}^2$ is greater than $\frac12 \| (\widetilde U, \widetilde S)\|_X^2$.
Taking $m$ sufficiently large so that $\frac{m\widetilde{\eta}}{2} > C+1$, we conclude the desired inequality: $$\label{eq:diss_final}
\int \nabla^m \ensuremath{\mathcal}L_u \cdot \nabla^m \widetilde U + \nabla^m \ensuremath{\mathcal}L_s \cdot \nabla^m \widetilde S \leq - \|(\widetilde U, \widetilde S ) \|_X^2, \qquad \forall (\widetilde U, \widetilde S) \in X_K.$$
## Maximality {#sec:maximality}
In this section we show that $\ensuremath{\mathcal}L:X \to X$ satisfies that $\ensuremath{\mathcal}L - \lambda$ is surjective for some sufficiently large $\lambda$. As a consequence, we will obtain that $\ensuremath{\mathcal}L$ is a maximally dissipative operator modulo compact perturbations, and this will give a good description of the spectra of $\ensuremath{\mathcal}L$ and $\ensuremath{\mathcal}L^\ast$ that will allow us to define the spaces of stability and instability of $\ensuremath{\mathcal}L$.
We first record the following standard lemma, which shows that surjectivity of $\lambda_0 I - A$ for a single $\lambda_0 > 0$ implies surjectivity of $\lambda I - A$ for every $\lambda > 0$:
**Lemma 13**. *For a dissipative operator $A$ defined on a linear subspace $D(A)$ of a Hilbert space X, if there exists $\lambda_0>0$ such that $\lambda_0I-A$ is surjective, then for all $\lambda>0$, $\lambda I-A$ is surjective.*
*Proof.* Since $\lambda_0I -A$ is surjective, for any $g\in X$ there exists $f\in D(A)$ such that $(\lambda_0I-A)f=g$. From the dissipativity of $A$, we also have $$\begin{aligned}
\| g \|_X \| f \|_X \geq \Re \langle g,f\rangle_X=\Re \langle (\lambda_0I -A)f,f\rangle _{X}\geq \lambda_0 \langle f,f\rangle_{X}.\end{aligned}$$ Therefore, $\lambda_0I-A$ is injective, hence invertible, and $$\|(\lambda_0I-A)^{-1}\|\leq \frac{1}{\lambda_0}.$$ Writing $\lambda I - A = \left( I - (\lambda_0 - \lambda)(\lambda_0 I - A)^{-1} \right)(\lambda_0 I - A)$ on $D(A)$, the first factor is invertible on $X$ by a Neumann series whenever $|\lambda - \lambda_0| < \lambda_0$. Hence for all $\lambda \in (0,2\lambda_0)$, $\lambda I-A$ is surjective. Iterating this argument, with $\lambda_0$ replaced by any $\lambda$ already covered, we obtain the result for every $\lambda > 0$. ◻
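For completeness, the Neumann series used in the last step can be written out explicitly; under the bound $\|(\lambda_0 I - A)^{-1}\| \leq \frac{1}{\lambda_0}$ it converges in operator norm whenever $|\lambda - \lambda_0| < \lambda_0$: $$(\lambda I - A)^{-1} = (\lambda_0 I - A)^{-1} \left( I - (\lambda_0 - \lambda)(\lambda_0 I - A)^{-1} \right)^{-1} = (\lambda_0 I - A)^{-1} \sum_{k \geq 0} (\lambda_0 - \lambda)^k \left( (\lambda_0 I - A)^{-1} \right)^k, \qquad \left\| (\lambda I - A)^{-1} \right\| \leq \frac{1}{\lambda_0 - |\lambda - \lambda_0|}.$$ In particular, $\lambda I - A$ is surjective (with bounded inverse) for every such $\lambda$.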
We now state an energy estimate in $X$.
**Lemma 14**. *There exists some constant $C_{J, m}$ (depending on $m$ and $J$), such that $$\langle \ensuremath{\mathcal}L(\widetilde{U},\widetilde{S}), (\widetilde{U},\widetilde{S}) \rangle_X \leq C_{J, m} \| (\widetilde{U},\widetilde{S}) \|_X^2, \qquad \forall (\widetilde{U},\widetilde{S}) \in X.$$*
*Proof.* Since $C_{J, m}$ is allowed to depend on $m$, the proof is straightforward and follows from integration by parts. From [\[Lequation\]](#Lequation){reference-type="eqref" reference="Lequation"}, we have $$\nabla^{m}\ensuremath{\mathcal}L_u = -\chi_{2} ((y + \overline U) \cdot \nabla ) (\nabla^{m} \widetilde{U}) -\chi_{2} \alpha \overline S (\nabla^{m+1}\widetilde S) + \mathcal T_{u},$$ $$\nabla^{m}\ensuremath{\mathcal}L_s = -\chi_{2} ((y + \overline U) \cdot \nabla ) (\nabla^{m} \widetilde S) -\chi_{2} \alpha \overline S \nabla^{m}\ensuremath{\mathrm{div\,}}(\widetilde{U}) + \mathcal T_{s},$$ where the terms $\ensuremath{\mathcal}T_{u}$, $\ensuremath{\mathcal}T_{s}$ have at most $m$ derivatives on $\widetilde{U}$ or $\widetilde{S}$. Thus, they satisfy
$$\| \mathcal{T}_{\circ}\|_{L^2} \lesssim_{J, m} \| \nabla^m \widetilde{U} \|_{L^2} + \| \nabla^m \widetilde{S} \|_{L^2} + \| \widetilde{U} \|_{L^2} + \| \widetilde{S} \|_{L^2} \lesssim\| (\widetilde{U},\widetilde{S}) \|_X, \qquad \mbox{ for } \circ \in \{ u, s \}.$$ Using those bounds and integrating by parts the top order terms, we get $$\begin{aligned}
&\langle \nabla^{m} \ensuremath{\mathcal}L_{u}, \nabla^{m} \widetilde{U} \rangle_{L^2} + \langle \nabla^{m} \ensuremath{\mathcal}L_{s}, \nabla^{m} \widetilde{S} \rangle_{L^2} \\
&\quad \leq \int \left| \nabla(\chi_{2} (y + \overline U)) \right| \left( |\nabla^{m} \widetilde{U}|^2 + |\nabla^{m} \widetilde{S}|^2 \right) dy + \int |\nabla (\chi_2 \alpha\overline{S} )| | \nabla^{m} \widetilde{U}| | \nabla^{m} \widetilde{S}| dy + C_{J, m}' \|(\widetilde{U},\widetilde{S}) \|_{X}^{2}\\
&\quad \leq C_{J, m} \|(\widetilde{U},\widetilde{S})\|_{X}^{2}.\end{aligned}$$ ◻
Now, we state the main Lemma of this section.
**Lemma 15** (Maximality of $\ensuremath{\mathcal}L$). *There exists some $\lambda$ (depending on $m$ and $J$) such that $\ensuremath{\mathcal}L - \lambda : X \to X$ is a surjective operator.*
*Proof.* Let us define the operator $\ensuremath{\mathcal}L_N = P_N (\ensuremath{\mathcal}L_u, \ensuremath{\mathcal}L_s) P_N$ to be a two-sided truncation in frequency of $\ensuremath{\mathcal}L$, where $P_N$ is a projection to frequencies $\leq N$. Note that $P_N$ is defined from $H^m (\mathbb T_{4 C_0})$ to itself (it does not map $X$ to itself, because $P_N (f_u, f_s)$ does not need to satisfy the vanishing conditions that $(f_u, f_s) \in X$ satisfy). In particular, $P_N f \in C^\infty$ for every $f \in H^m (\mathbb T_{4 C_0})$.
Now, from Lemma [Lemma 14](#lemma:rough_energy_bound){reference-type="ref" reference="lemma:rough_energy_bound"}, we see that $$\langle \ensuremath{\mathcal}L_N (\widetilde{U},\widetilde{S}), (\widetilde{U},\widetilde{S}) \rangle_{H^m} = \langle \ensuremath{\mathcal}L P_N (\widetilde{U},\widetilde{S}), P_N (\widetilde{U},\widetilde{S}) \rangle_{H^m} \leq C_{J, m} \| (\widetilde{U},\widetilde{S}) \|_{H^m}^2,$$ and we obtain that $\ensuremath{\mathcal}L_N - C_{J, m}$ is a dissipative operator.
Since $P_N$ projects onto the space of frequencies $\leq N$, on which all Sobolev norms are comparable (with constants depending on $N$), and $\ensuremath{\mathcal}L$ is a first order differential operator with bounded coefficients, it is clear that $$\| \ensuremath{\mathcal}L_N (\widetilde{U},\widetilde{S}) \|_{H^m} \leq C'_{N,J,m} \| (\widetilde{U},\widetilde{S}) \|_{H^m}$$ for some constant $C'_{N,J,m}$; in particular, $\ensuremath{\mathcal}L_N$ is a bounded operator. Therefore $\| \frac{ \mathcal L_N}{2C_{J,m}+2C_{N,J,m}^{'}} \|_{H^m}< 1$, so $\frac{\ensuremath{\mathcal}L_N}{2C_{J,m} + 2C_{N,J,m}'} -I$ is invertible, and in particular $\ensuremath{\mathcal}L_N - 2C_{J,m} - 2C_{N,J,m}'$ is surjective. Using Lemma [Lemma 13](#maximalitylemma){reference-type="ref" reference="maximalitylemma"} we conclude that $\ensuremath{\mathcal}L_N - C_{J, m}$ is maximally dissipative.
This gives the bound $$\left\| ( \ensuremath{\mathcal}L_N - \lambda )^{-1} \right\|_{H^m} \leq \frac{1}{\lambda - C_{J, m}}$$ for $\lambda > C_{J, m}$. We take $\lambda = C_{J, m}+1$ and show that $\ensuremath{\mathcal}L - \lambda$ is surjective. From the bound above, we have $\| (\ensuremath{\mathcal}L_N - \lambda)^{-1} \|_{H^m} \leq 1$.
In order to solve the equation $(\ensuremath{\mathcal}L - \lambda )f = F$, we define $f_N = (\ensuremath{\mathcal}L_N - \lambda)^{-1} F$, which form a bounded sequence in $H^m$ due to the bound above. Thus, since $H^m$ is a separable Hilbert space, there exists some subsequence converging weakly, $f_{N_i} \rightharpoonup f$, to some $f\in H^{m}$. Since we are in a bounded domain, the Rellich-Kondrachov compact embedding theorem gives us that the convergence $f_{N_i} \to f$ is strong in the topology of $H^{m-3}$.
Recalling $$(\ensuremath{\mathcal}L_N - \lambda )f_N = F,$$ and denoting $Y = H^{m-10}$, we obtain $$\begin{aligned}
\left\| (\ensuremath{\mathcal}L - \lambda )f - F \right\|_Y &\leq
\underbrace{ \| P_N (\ensuremath{\mathcal}L - \lambda) P_N f - P_{N}F\|_Y }_0
+\|F - P_{N}F\|_Y+ \| P_N(\ensuremath{\mathcal}L - \lambda ) P_N f - (\ensuremath{\mathcal}L - \lambda) f\|_Y \\
&\leq \|F - P_{N}F\|_Y+\lambda \| P_N f - f \|_Y + \| P_N \ensuremath{\mathcal}L (P_N f - f) \|_Y + \| (P_N - \text{Id} ) \ensuremath{\mathcal}L f \|_Y\end{aligned}$$ Let us take $N = N_i$ and $i \to \infty$. Since $X \subset Y$, the first two summands clearly tend to zero. Moreover, $\|P_N f - f \|_{H^m} \to 0$, combined with the fact that $\| \ensuremath{\mathcal}L f\|_Y \lesssim_{J, m} \| f \|_{H^m}$ yields that the third summand also tends to zero. The fourth summand tends to zero given that $\ensuremath{\mathcal}L f \in Y$. In summary, $$\| (\ensuremath{\mathcal}L - \lambda ) f - F \|_Y = 0$$ and we get $$\ensuremath{\mathcal}L f - \lambda f = F.$$ Then we are left to show $f = (\widetilde{U},\widetilde{S}) \in X$. From the definition of $\chi_1, \chi_2$ before [\[eq:cutoffL\]](#eq:cutoffL){reference-type="eqref" reference="eq:cutoffL"}, we have that for $|y| \geq 5C_0/2$ $$\ensuremath{\mathcal}L_{u}(\widetilde{U},\widetilde{S})=-J\widetilde{U}, \qquad \mbox{ and } \qquad \ensuremath{\mathcal}L_{s}(\widetilde{U},\widetilde{S})=-J\widetilde{S}.$$ Therefore, for $|y| \geq 5C_0/2$ $$f= \left( \frac{F_U}{-J-\lambda}, \frac{F_S}{-J-\lambda} \right).$$ Then since $F=(F_{U}, F_{S})\in (H^{m}_0(B(0,3C_0)))^2$, $(\widetilde{U},\widetilde{S})\in (H^{m})^2$ , we have $(\widetilde{U},\widetilde{S})\in (H^{m}_0(B(0,3C_0)))^2$. ◻
We can now conclude maximal dissipativity modulo compact perturbations.
**Proposition 16**. *There exist $m_0$ and some $J_m$ such that for every $m\geq m_0$, and for every $J\geq J_m$, we have the following. For any $\delta_g \in (0, 1)$, we can express the cut-off linearized operator $\ensuremath{\mathcal}L$ as: $$\label{eq:maxdismod}
\ensuremath{\mathcal}L = \ensuremath{\mathcal}L_0 - \delta_g + \ensuremath{\mathcal}K,$$ where $\ensuremath{\mathcal}L_0: X \to X$ is a maximally dissipative operator and $\ensuremath{\mathcal}K$ is a compact operator, both depending on $\delta_g$ (as well as $m$ and $J$).*
*Proof.* The proof is a consequence of:
- $\langle \ensuremath{\mathcal}L (\widetilde U, \widetilde S), (\widetilde U, \widetilde S) \rangle_X \leq -\| (\widetilde U, \widetilde S) \|_X^2$ for every $(\widetilde U, \widetilde S) \in X_K$ (by [\[eq:diss_final\]](#eq:diss_final){reference-type="eqref" reference="eq:diss_final"}).
- $\ensuremath{\mathcal}L - \lambda : X\to X$ is surjective for sufficiently large $\lambda$ (by Lemma [Lemma 15](#lemma:maximality){reference-type="ref" reference="lemma:maximality"}).
The argument that combines those two facts to obtain the statement of the Proposition can be found in [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Corollary 7.7]. ◻
**Lemma 17**. *Let $m \geq m_0$, $J\geq J_m$, and $\delta_g \in (0, 1)$. Then we have:*
1. *[\[item:spectrum\]]{#item:spectrum label="item:spectrum"} We denote by $\sigma (\ensuremath{\mathcal}L)$ the spectrum of $\ensuremath{\mathcal}L$. The set $\Lambda = \sigma (\ensuremath{\mathcal}L) \cap \{ \lambda \in \mathbb{C} : \Re (\lambda) > -\delta_g/2 \}$ is finite and formed only by eigenvalues of $\ensuremath{\mathcal}L$. Moreover, each $\lambda \in \Lambda$ has finite algebraic multiplicity. That is, if we let $\mu_{\lambda}$ be the first natural number such that ${\rm ker} (\ensuremath{\mathcal}L-\lambda )^{\mu_{\lambda}} = {\rm ker} (\ensuremath{\mathcal}L- \lambda )^{\mu_\lambda + 1}$, we have that the vector space $$\label{eq:spaceV}
V_{\rm{uns}} = \bigoplus_{\lambda \in \Lambda} {\rm ker} (\ensuremath{\mathcal}L - \lambda )^{\mu_{\lambda}}\,,$$ is finite dimensional.*
2. *[\[item:invariance\]]{#item:invariance label="item:invariance"} We denote by $\ensuremath{\mathcal}L^\ast$ the adjoint of $\ensuremath{\mathcal}L$. Let $\Lambda^\ast = \sigma (\ensuremath{\mathcal}L^\ast) \cap \{ \lambda \in \mathbb{C} : \Re (\lambda) >- \delta_g/2\}$. As before, we define $$\label{eq:spaceVast}
V_{\rm{sta}} = \left( \bigoplus_{\lambda \in \Lambda^\ast} {\rm ker} (\ensuremath{\mathcal}L^\ast - \lambda )^{\mu_{\lambda}^\ast}\right)^{\perp}.$$ We have that both $V_{\rm{uns}}$ and $V_{\rm{sta}}$ are invariant under $\ensuremath{\mathcal}L$. We also have that $\Lambda^\ast = \overline{\Lambda}$ and $\mu_\lambda = \mu_{\overline{\lambda}}^\ast$. Moreover, we have the decomposition $X = V_{\rm{uns}} \oplus V_{\rm{sta}}$.*
3. *[\[item:stability_outwards\]]{#item:stability_outwards label="item:stability_outwards"} The linear transformation $\ensuremath{\mathcal}L|_{V_{\rm{uns}}}: V_{\rm{uns}} \rightarrow V_{\rm{uns}}$ obtained by restricting $\ensuremath{\mathcal}L$ to the finite dimensional space $V_{\rm{uns}}$ has all its eigenvalues with real part larger than $-\delta_g/2$. In particular, there is some basis such that we can express $$\ensuremath{\mathcal}L|_{V_{\rm{uns}}} =
\begin{bmatrix}
J_1 & & & \\
& J_2 & & \\
& & \ddots & \\
& & & J_\ell
\end{bmatrix},
\qquad \mbox{ where } \qquad J_i =
\begin{bmatrix}
\lambda_i & \frac{\delta_g}{10} & & \\
& \lambda_i & \ddots & \\
& & \ddots & \frac{\delta_g}{10} \\
& & & \lambda_i
\end{bmatrix},$$ where $\lambda_i$ are the eigenvalues of $\ensuremath{\mathcal}L|_{V_{\rm{uns}}}$. In that basis, we have that $$\label{eq:chisinau}
\Re \left(\overline{w}^T \cdot \ensuremath{\mathcal}L|_{V_{\rm{uns}}} \cdot w \right) \geq -\frac{6 \delta_g}{10} \| w \|^2, \qquad \forall w \in \mathbb{C}^N.$$*
*Moreover, letting $T(t)$ be the semigroup generated by $\ensuremath{\mathcal}L$, for any $v \in V_{\rm{sta}}$ we have $$\label{eq:prim}
\| T(t) v \|_X \lesssim e^{-\delta_g t/2} \| v \|_X.$$*
*Proof.* The statement holds for any operator of the form [\[eq:maxdismod\]](#eq:maxdismod){reference-type="eqref" reference="eq:maxdismod"} in a Hilbert space (see [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma 7.11], which is itself a slightly different formulation of [@Merle-Raphael-Rodnianski-Szeftel:implosion-ii Lemma 3.3]). The form [\[eq:maxdismod\]](#eq:maxdismod){reference-type="eqref" reference="eq:maxdismod"} is guaranteed by Proposition [Proposition 16](#prop:maxdismod){reference-type="ref" reference="prop:maxdismod"}, which applies since $m\geq m_0$, $J\geq J_m$, and $\delta_g \in (0, 1)$. ◻
## Smoothness of $V_{\rm{uns}}$ {#sec:smoothness}
The main result of this subsection is to show that all the elements of $V_{\rm{uns}}$ are smooth provided we choose again $m$ and $J$ sufficiently large (possibly larger than the restriction coming from Proposition [Proposition 16](#prop:maxdismod){reference-type="ref" reference="prop:maxdismod"}). Clearly, it suffices to show this for a basis. We will follow the choice of basis made in Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"} so that the elements of the basis are a union of chains $\{\psi_j\} =
\{(\psi_{U,j}, \psi_{S,j})\}$ that satisfy: $$\begin{aligned}
\nonumber
&\psi_{0} = 0, \ \psi_{j}\neq 0 \text{ when }j\geq 1,\\
&\ensuremath{\mathcal}L \psi_{j+1} = \lambda \psi_{j+1} + \frac{\delta_g}{10} \psi_{j},\label{eq:GEE}\end{aligned}$$ where $\lambda$ is the eigenvalue of $\psi_1$ (it will be fixed throughout this subsection, so we omit subindices that would distinguish different eigenvalues). Let us also stress that $\lambda$ and $\psi$ may be complex-valued and $\{\psi_j\}$ must be a finite sequence. The operator $\ensuremath{\mathcal}L$ acts on complex-valued functions like $\psi$ simply componentwise (if $\psi = \psi_r + i \psi_i$ with $\psi_r, \psi_i$ real, then the real and imaginary parts of $\ensuremath{\mathcal}L \psi$ are given by $\ensuremath{\mathcal}L \psi_r$ and $\ensuremath{\mathcal}L \psi_i$ respectively). We will also refer to [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"} as the generalized eigenfunction equation.
First of all, let us state the following Lemma:
**Lemma 18** (Extension of generalized eigenfunctions). *Let $m$ be sufficiently large and $J\geq J_{*}$ sufficiently large independent of $m$. Let $\{\psi_{j}\} = \{( \psi_{U,j}, \psi_{S,j})\}$ be a chain in $H^{2m+100}(B(0, C_0))$ satisfying the generalized eigenfunction equation [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"} for $|y| \leq C_0$. Then, there exists a unique chain of generalized eigenfunctions $\{\widetilde\psi_{j}\} \subset X$ that agrees with $\{\psi_{j}\}$ on $B(0, C_0)$ and satisfies [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"} for all $|y| \leq 3C_0$. Moreover, such an extension vanishes identically for $|y| \in (5C_0/2, 3C_0)$.*
Let us postpone the proof of this Lemma to the end of the section, and first show the main result of this subsection assuming Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}:
**Proposition 19**. *Let $\delta_g \in (0, 1)$. For sufficiently large $\overline{m}$, $\overline{J}$, there exist $m,J$ such that $m>\overline{m}$, $J>\overline{J}$ and we have that the space $V_{\rm{uns}} \subset X$ defined in Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"} is formed by smooth functions.*
*Proof.* Given that the space $X$ depends on $m$ and the operator $\ensuremath{\mathcal}L$ depends on $J$, let us denote $X$ by $X^m$ and $\ensuremath{\mathcal}L$ by $\ensuremath{\mathcal}L_J$ for the rest of this proof.
First of all, we restrict ourselves to $m \geq m_0$, $J\geq \max\{J_m,J_{*}\}$, so that we satisfy the hypotheses of Proposition [Proposition 16](#prop:maxdismod){reference-type="ref" reference="prop:maxdismod"}, Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"} and Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}. We take the definition of $V_{\rm{uns}}$ from Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"} for $\overline{J}_{m} = \max\{J_m,J_{*}\}$ and make the dependence on $m$ explicit, so that $V_{\rm{uns}} (m) \subset X^m$ refers to the space of unstable modes of $\ensuremath{\mathcal}L_{\overline{J}_m}: X^m \to X^m$ (if there are no unstable modes, $V_{\rm{uns}} (m) = \{ 0 \}$). In particular, Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"} gives us that $V_{\rm{uns}} (m)$ is finite dimensional. We moreover define $V_{\rm{uns}}^{C_0} (m)$ to be the vector space obtained by restricting the functions of $V_{\rm{uns}} (m)$ to $B(0, C_0)$.
Let us show that if $m' \geq m_0$ and $m \geq 2m'+100$, then $V_{\rm{uns}}^{C_0}(m) \subset V_{\rm{uns}}^{C_0}(m')$. We let $\{\psi_{i}\}$ be a chain of generalized eigenfunctions in $V_{\rm{uns}}(m)$, so that it satisfies the generalized eigenfunction equation [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"}: $$\ensuremath{\mathcal}L_{\overline J_m} \psi_{i+1} = \lambda \psi_{i+1} + (\delta_g/10) \psi_{i}.$$ Given that the operator $\ensuremath{\mathcal}L_{J}$ is independent of $J$ on $B(0, C_0)$ (the only dependence on $J$ comes from $(1-\chi_1)J$, which vanishes on $B(0, C_0)$), we have that the restriction $\psi_{i}|_{B(0, C_0)}$ satisfies the generalized eigenfunction equation for $\ensuremath{\mathcal}L_{\overline J_{m'}}$ on $B(0, C_0)$: $$\ensuremath{\mathcal}L_{\overline J_{m'}} \psi_{i+1} = \lambda \psi_{i+1} + (\delta_g/10) \psi_{i}.$$ Since it is clearly in $H^{2m'+100}$ (as $m \geq 2m'+100$), we are in the hypotheses of Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}. Therefore, there exists a unique extension $\{\widetilde\psi_{j}\} \subset X^{m'}$, which is a chain of generalized eigenfunctions of $\ensuremath{\mathcal}L_{\overline{J}_{m'}}$ in $X^{m'}$ and agrees with $\{\psi_{j}|_{B(0, C_0)}\}$ on $B(0, C_0)$. Therefore, $\{\psi_{j}|_{B(0, C_0)}\} \subset V_{\rm{uns}}^{C_0}(m')$. We conclude the desired property: $$\label{eq:Vunschain} V_{\rm{uns}}^{C_0}(m) \subset V_{\rm{uns}}^{C_0}(m'), \qquad \mbox{ for } m \geq 2m'+100.$$
Let $\{m_{j}\}$ be an integer sequence such that $m_{j+1}\geq 2m_{j}+100$ with $m_{1}=2m_0$. Since $\dim (V_{\rm{uns}}^{C_0}(m_{j}))$ is a finite, integer-valued, non-increasing sequence for $j\geq 1$ (due to [\[eq:Vunschain\]](#eq:Vunschain){reference-type="eqref" reference="eq:Vunschain"}), there must exist a value $j_{*}$ such that $\dim (V_{\rm{uns}}^{C_0}(m_j))$ is constant for every $j\geq j_{*}$. Then, since $V_{\rm{uns}}^{C_0}(m_{j+1}) \subset V_{\rm{uns}}^{C_0}(m_j)$ by [\[eq:Vunschain\]](#eq:Vunschain){reference-type="eqref" reference="eq:Vunschain"}, we deduce that $V_{\rm{uns}}^{C_0}(m_j)$ is the same vector space for all $j\geq j_*$.
Now, let us fix $m_*=m_{j_*}$ and $J =\overline J_{m_{*}}$. We will show that $V_{\rm{uns}}(m_{*})$ only consists of smooth functions. Let $\{\psi_{j}\}$ be a chain of generalized eigenfunctions in $V_{\rm{uns}}(m_*)$ and $\{\overline\psi_{j}\}$ be their restrictions to $B(0,C_0)$. As a consequence of the above, all $\overline\psi_{j}$ are smooth on $B(0, C_0)$. Thus, by Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}, for any $m\geq m_*$ the $\overline\psi_j$ admit a unique $H^{m}$ extension to $B(0, 3C_0)$ satisfying [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"} for $\ensuremath{\mathcal}L_J$, which we call $\widetilde\psi_{j}$ (note here that $m$ may be much larger than $J$; it is fundamental that in Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"} we do not require $J$ to be sufficiently large in terms of $m$). Now, both $\{\psi_{j}\}$ and $\{\widetilde\psi_{j}\}$ are extensions of $\{\psi_{j}|_{B(0,C_0)}\}$ satisfying the generalized eigenfunction equation [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"} for $\ensuremath{\mathcal}L_J$ on $B(0, 3C_0)$. By the uniqueness in Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}, they are equal. Then $\{\psi_j\}$ are in $H^{m}$ for any $m\geq m_{*}$, and hence smooth. ◻
Thus, it only remains to show Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}.
*Proof of Lemma [Lemma 18](#lemma:eigenextension){reference-type="ref" reference="lemma:eigenextension"}.* Let us denote $\psi_{j}=(\psi_{U,j}, \psi_{S,j}) = (U_j, S_j)$.
**Vanishing:** First, it is very easy to justify that $$\label{vanishing}
\psi_{j}=0, \qquad \text{ for all } \,j$$ for $|y| \in \left( \frac52 C_0, 3C_0 \right)$ by induction. Indeed, $\psi_0=0$ by definition [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"}. Then we assume [\[vanishing\]](#vanishing){reference-type="eqref" reference="vanishing"} holds for $j\leq j_0$. Since $\chi_1 = \chi_2 = 0$ in that region due to [\[eq:cutoffL\]](#eq:cutoffL){reference-type="eqref" reference="eq:cutoffL"}, we have that $\mathcal L = -J$ there, and [\[eq:GEE\]](#eq:GEE){reference-type="eqref" reference="eq:GEE"}, together with the induction hypothesis $\psi_{j_0} = 0$, reads $$-J \psi_{U,j_0+1} = \lambda \psi_{U,j_0+1}, \qquad \mbox{ and } \qquad -J \psi_{S,j_0+1} = \lambda \psi_{S,j_0+1},$$ for $|y| \in \left( \frac52 C_0, 3 C_0 \right)$. Since $\Re(\lambda) > -\delta_g/2 \geq -1/2$ by construction, and we can assume $J\geq 1$, we conclude that $\psi_{j_0+1} = 0$ in the region $|y| \in \left( \frac52 C_0, 3 C_0 \right)$.
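For the reader's convenience, the last implication can be spelled out as $$(J + \lambda)\, \psi_{U, j_0+1} = (J + \lambda)\, \psi_{S, j_0+1} = 0, \qquad \mbox{ with } \qquad |J + \lambda| \geq J + \Re(\lambda) > J - \tfrac12 \geq \tfrac12,$$ so that both components of $\psi_{j_0+1}$ vanish on $|y| \in \left( \frac52 C_0, 3C_0 \right)$, completing the induction.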
**Uniqueness:**
We consider two generalized eigenfunction chains agreeing on $B(0, C_0)$ (in particular, if they are not identically zero, they share the same $\lambda$). Letting $\{(U_j, S_j)\}$ be the difference of those chains, we have $U_j|_{B(0,C_0)}=0$, $S_{j}|_{B(0,C_0)}=0$, and we want to show $(U_j,S_j)=0$ everywhere.
Now we proceed by induction. We first assume that $U_{j}=S_{j}=0$ for all $j\leq j_0$. Thus, for $j=j_0+1$, we have $\ensuremath{\mathcal}L (U_{j_0+1}, S_{j_0+1}) = \lambda (U_{j_0+1}, S_{j_0+1})$.
For the sake of simplicity, we will use $(U,S)$ to denote $(U_{j_0+1}, S_{j_0+1})$. Developing the equation $(\ensuremath{\mathcal}L_u, \ensuremath{\mathcal}L_s) (U,S) = \lambda (U, S)$, we have that $$\begin{aligned}
\lambda U &= -\chi_2 (r-1) U - \chi_2 (y + \overline U) \nabla U - \chi_2 \alpha \overline S \nabla S - \chi_2 U \nabla \overline U - \alpha \chi_2 S \nabla \overline S - J(1-\chi_1)U, \\
\lambda S &= -\chi_2 (r-1) S - \chi_2 (y + \overline U) \nabla S - \chi_2 \alpha \overline S \ensuremath{\mathrm{div\,}}(U) - \chi_2 S \ensuremath{\mathrm{div\,}}(\overline U ) - \alpha \chi_2 U \nabla \overline S - J(1-\chi_1)S.\end{aligned}$$
For some weight $\phi$ that will be defined later, we multiply the equations by $U^\ast \phi$ and $S^\ast \phi$ respectively, where $f^\ast$ denotes the complex conjugate of $f$. Then, we take the real part and integrate in some ball $B = B(0, \rho )$ with $\rho<\frac{5}{2}C_0$. We get that $$\begin{aligned}
\int_B \Re (\lambda ) |U|^2 \phi &= \frac12 \int_B \partial_R (\chi_2 (R+\overline U_R))\phi |U|^2 + \frac12 \int_B \left( \frac{\partial_R \phi}{\phi} (\chi_2 (R+\overline U_R)) \right) |U|^2 \phi+\int_B \frac{\chi_2 (R+\overline U_R)}{R} \phi |U|^2 \\
&\qquad
- \frac12 \int_{\partial B} \chi_2(R+\overline U_R) \phi |U|^2
- \alpha \int_B \chi_2 \overline S \phi \Re (\nabla S \cdot U^\ast) \\
&\qquad - J\int (1-\chi_1) |U|^2\phi
+ \int_B \chi_2 (f_1 |U|^2 + f_2 |S|^2)\phi,\end{aligned}$$ and $$\begin{aligned}
\int_B \Re (\lambda ) |S|^2 \phi
&= \frac12 \int \partial_R (\chi_2 (R+\overline U_R)) \phi |S|^2 + \frac12 \int_B \left( \frac{\partial_R \phi}{\phi} (\chi_2 (R+\overline U_R)) \right) |S|^2 \phi+\int_B \frac{\chi_2 (R+\overline U_R)}{R} \phi |S|^2
\\
&\qquad - \frac12 \int_{\partial B}\chi_2 (R+\overline U_R) \phi |S|^2
- \alpha \int_B \chi_2 \overline S \phi \Re (\ensuremath{\mathrm{div\,}}(U) S^\ast ) \\
&\qquad - J\int (1-\chi_1) |S|^2 \phi
+ \int_B \chi_2 (f_3 |U|^2 + f_4 |S|^2)\phi,\end{aligned}$$ where $f_i$ are radially symmetric functions with an $L^\infty$ norm bounded by some universal constant. Adding up both equations, noting that $$\Re (\nabla S \cdot U^\ast) + \Re (\ensuremath{\mathrm{div\,}}(U) S^\ast) = \Re ( U \cdot \nabla S^\ast ) + \Re ( \ensuremath{\mathrm{div\,}}(U ) S^\ast ) = \Re\ensuremath{\mathrm{div\,}}(U S^\ast),$$ and integrating by parts, we see that $$\begin{aligned}
\nonumber
\Re (\lambda )\int_B ( |U|^2 + |S|^2 )\phi &= \frac12 \int_B \left( \frac{\partial_R \phi}{\phi} (\chi_2 (R+\overline U_R)) \right) (|U|^2 + |S|^2) \phi + \alpha \int_B \chi_2 \overline S \frac{\partial_R \phi}{\phi} \Re ( \frac{y}{R}\cdot( U S^\ast) )\phi \\
&\quad- J\int_B (1-\chi_1) (|U|^2 + |S|^2) \phi + O \left( \int_B (\chi_2 + |\chi_2'| ) (|U|^2 + |S|^2) \phi \right) \nonumber \\
&\quad -\frac12 \int_{\partial B} \chi_2 (R+\overline U_R)\phi (|U|^2 + |S|^2) - \alpha \int_{\partial B} \chi_2 \overline S \phi \Re ( S^{*} \frac{y}{R}\cdot U ), \label{eq:edmundodantes}\end{aligned}$$ where we used that $|US^\ast| \leq \frac12 (|U|^2 + |S|^2)$. Using it again, since $\partial_{R}\phi\leq 0$, we get $$\begin{aligned}
&-\frac12 \int_B \chi_2 \frac{\partial_R \phi}{\phi}\phi (R + \overline U_R-\alpha |\overline S|)(|U|^2 + |S|^2) + J \int_B (1-\chi_1) \phi (|U|^2 + |S|^2) + \Re (\lambda )\int_B (|U|^2 + |S|^2 ) \phi \\
&\qquad \leq \frac12 \int_{\partial B} \chi_2 (-R-\overline U_R+\alpha |\overline S|)\phi (|U|^2 + |S|^2) + O \left( \int_B (\chi_2 + |\chi_2'| ) (|U|^2 + |S|^2)\phi \right),\end{aligned}$$ where we used equation [\[eq:edmundodantes\]](#eq:edmundodantes){reference-type="eqref" reference="eq:edmundodantes"} to bound the last term. Now, recall that $U, S = 0$ for $R \leq C_0$ and we want to show they are zero in all $B(0, 3C_0)$. Note also that from Lemma [Lemma 39](#lemma:integrated_repulsivity){reference-type="ref" reference="lemma:integrated_repulsivity"} and [\[eq:profiles_positive\]](#eq:profiles_positive){reference-type="eqref" reference="eq:profiles_positive"}, we have that $$R + \overline U_R (R) - \alpha |\overline S (R)| \geq \widetilde\eta (R-1),$$ for some $\widetilde\eta > 0$ and where we denote with $\overline U_R$ the radial component of the profile (which is the only nonzero component since the profile is radially symmetric). Since the contribution of the first integral is zero for $R \leq C_0$, we can lower bound $R + \overline U_R (R) - \alpha \overline S (R)$ by $\widetilde\eta (R-1)$. Moreover, assuming $\rho > C_0$, we also get a sign for the boundary term in the right hand side. Thus, we obtain
$$\begin{aligned}
\int_{B\backslash B(0,C_0)} \left( -\chi_2 \frac{\partial_R \phi}{\phi} \frac{\widetilde\eta}{2} + J(1-\chi_1)
+ \Re (\lambda )\right) (|U|^2 + |S|^2) \phi \leq C \int_{B\backslash B(0,C_0)} (|U|^2 + |S|^2)\phi,\end{aligned}$$ where $C$ is a sufficiently large constant. Finally, note that $\Re (\lambda ) > -\delta_g/2\geq -1/2$ from the definition of $V_{\rm{uns}}$ in Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"}. Take $\phi=e^{-\frac{2J}{\widetilde{\eta}}R}$, so that $\partial_R \phi = -\frac{2J}{\widetilde\eta} \phi$ everywhere, and in particular outside $B(0, C_0)$. Since $(1-\chi_1)+\chi_2 \geq 1$ by definition, we conclude $$\int_B (J-1/2) (|U|^2 + |S|^2)\phi \leq C \int_B (|U|^2 + |S|^2)\phi.$$ This yields a contradiction for $J$ sufficiently large, unless $U = S = 0$ everywhere. We conclude the proof of uniqueness.
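For the reader's convenience, the role of this choice of weight can be spelled out pointwise: since $\frac{\partial_R \phi}{\phi} = -\frac{2J}{\widetilde\eta}$, the coefficient appearing on the left-hand side above satisfies, on $B \setminus B(0, C_0)$, $$-\chi_2 \frac{\partial_R \phi}{\phi} \frac{\widetilde\eta}{2} + J(1-\chi_1) + \Re (\lambda ) = J \chi_2 + J(1-\chi_1) + \Re(\lambda) > J \left( \chi_2 + (1-\chi_1) \right) - \frac12 \geq J - \frac12,$$ which is precisely how the factor $J - 1/2$ arises.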
**Existence:** We consider the equations $$\ensuremath{\mathcal}L (U_j, S_j) = \lambda (U_j, S_j) + (\delta_g/10) (U_{j-1}, S_{j-1}).$$ We have a solution on $B(0, C_0)$ and want to extend it to $B(0, 3C_0)$.
For the sake of simplicity, we first consider the following equations, letting $U,S$ denote $U_{j}, S_{j}$ and $U^{-},S^{-}$ denote $U_{j-1}, S_{j-1}$. Let us recall: $$\begin{aligned}
\lambda U + \frac{\delta_g}{10} U^- &= -\chi_2 (r-1) U - \chi_2 (R + \overline U_R) \partial_R U - \chi_2 \alpha \overline S \nabla S - \chi_2 U \nabla \overline U - \alpha \chi_2 S \nabla \overline S - J(1-\chi_1)U, \\
\lambda S + \frac{\delta_g}{10} S^- &= -\chi_2 (r-1) S - \chi_2 (R + \overline U_R) \partial_R S - \chi_2 \alpha \overline S \ensuremath{\mathrm{div\,}}U - \chi_2 S \ensuremath{\mathrm{div\,}}\overline U - \alpha \chi_2 U \nabla \overline S - J(1-\chi_1)S, \end{aligned}$$ Now, we express the complex scalar $S$ as $S = \sum_n S_n(R) e_n$, where $e_n$ are spherical harmonics. Moreover, any complex vector field can be expressed uniquely [@Hill:spherical-harmonics] as $$U = \sum_n \left( U_{R,n}(R) \widehat R e_n + U_{\Psi, n}(R) R \nabla e_n + U_{\Phi, n}(R) y \wedge \nabla e_n \right).$$ Let us note that $$\nabla S = \sum_n \left( \partial_R S_n \widehat R e_n + \frac{S_n}{R} R\nabla e_n \right), \qquad
\ensuremath{\mathrm{div\,}}U = \sum_n \left( \partial_R U_{R, n}(R) + \frac{2U_{R, n}(R)}{R} - \frac{U_{\Psi, n}(R)}{R} n_1(n_1+1) \right)e_n,$$ using that $R^2 \Delta e_n = -n_1 (n_1+1) e_n$. Note also that in any spherical coordinate system (that is, in any basis formed by $\widehat R$ and vectors perpendicular to $\widehat R$), we have $\partial_R \widehat R = 0$ and $\partial_v \widehat R = \frac{\partial_v y}{R} = \frac{v}{R}$ for any $v$ perpendicular to $\widehat R$. Let us recall that $\widehat R$ is the unitary vector in the radial direction, as discussed in the notation.
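For completeness, a brief sketch of the divergence identity above: since $e_n$ depends only on the angular variables, $\widehat R \cdot \nabla e_n = \partial_R e_n = 0$, while $\ensuremath{\mathrm{div\,}}(R \nabla e_n) = \nabla R \cdot \nabla e_n + R \Delta e_n = -\frac{n_1(n_1+1)}{R} e_n$ and $\ensuremath{\mathrm{div\,}}(y \wedge \nabla e_n) = 0$. Hence $$\ensuremath{\mathrm{div\,}}\left( U_{R,n} \widehat R e_n \right) = \left( \partial_R U_{R,n} + \frac{2U_{R,n}}{R} \right) e_n, \qquad \ensuremath{\mathrm{div\,}}\left( U_{\Psi,n} R \nabla e_n \right) = -\frac{U_{\Psi,n}}{R} n_1(n_1+1) e_n, \qquad \ensuremath{\mathrm{div\,}}\left( U_{\Phi,n}\, y \wedge \nabla e_n \right) = 0,$$ and summing the three contributions gives the stated formula for $\ensuremath{\mathrm{div\,}}U$.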
Therefore $$\begin{aligned}
U\nabla \overline U &= U \nabla ( \overline U_R \widehat R) = \overline U_R' U_R \widehat R + \overline U_R U\cdot \nabla\widehat R = \sum_n \left( \overline U_R' U_{R, n} e_n \widehat R + \overline U_R (U_{R, n} e_n \partial_R + U_{\Psi, n} \partial_{R \nabla e_n} + U_{\Phi, n} \partial_{y \wedge \nabla e_n}) \widehat R \right) \\
&= \sum_n \left( \overline U_R' U_{R, n} e_n \widehat R + \frac{\overline U_R U_{\Psi, n}}{R} R\nabla e_n + \frac{\overline U_R U_{\Phi, n}}{R} y \wedge \nabla e_n \right).\end{aligned}$$
We obtain the equations $$\begin{aligned}
\lambda U_{R, n} + \frac{\delta_g}{10} U^-_{R, n} &= -\chi_2 (r-1) U_{R, n} - \chi_2 (R + \overline U_R) \partial_R U_{R, n} - \chi_2 \alpha \overline S S_n' - \chi_2 \overline U_R' U_{R, n} - \alpha \chi_2 S_n \partial_R \overline S - J(1-\chi_1)U_{R, n}, \\
\lambda U_{\Psi, n} + \frac{\delta_g}{10} U^-_{\Psi, n} &= -\chi_2 (r-1) U_{\Psi, n} - \chi_2 (R + \overline U_R) \partial_R U_{\Psi, n} - \chi_2 \alpha \overline S \frac{S_n(R)}{R} - \chi_2 \frac{\overline U_R U_{\Psi, n}}{R} - J(1-\chi_1)U_{\Psi, n}, \\
\lambda U_{\Phi, n} + \frac{\delta_g}{10} U^-_{\Phi, n} &= -\chi_2 (r-1) U_{\Phi, n} - \chi_2 (R + \overline U_R) \partial_R U_{\Phi, n} - \chi_2 \frac{\overline U_R U_{\Phi, n}}{R} - J(1-\chi_1)U_{\Phi, n}, \\
\lambda S_{n} + \frac{\delta_g}{10} S^-_{n} &= -\chi_2 (r-1) S_n - \chi_2 (R + \overline U_R) \partial_R S_n - \chi_2 \alpha \overline S \left( U_{R, n}'(R) + \frac{2U_{R, n}(R)}{R} - \frac{U_{\Psi, n}(R)}{R} n_1(n_1+1) \right) \\
&\quad - \chi_2 \alpha S_n \left( \partial_R \overline U_R + \frac{2\overline U_R}{R} \right) - \alpha \chi_2 U_{R, n} \partial_R \overline S - J(1-\chi_1)S_n, \end{aligned}$$ so, as we can see, the equations diagonalize in the spherical harmonic modes (the equations for the $n$-th mode only involve coefficients of that same mode $n$).
Defining $W_n = U_{R, n} + S_n$ and $Z_n = U_{R, n} - S_n$, we have $$\begin{aligned}
\begin{split}\label{eq:sys}
\chi_2 (R + \overline U_{R} + \alpha \overline S) \partial_R W_{n} &= -\lambda W_{n} - \frac{\delta_g}{10} W^-_{n} - J(1-\chi_1)W_{n} + \chi_2 \alpha \overline S \frac{U_{\Psi, n}(R)}{R} n_1 (n_1+1) + \chi_2 \mathcal E_{W, n}, \\
\chi_2 (R + \overline U_{R} - \alpha \overline S ) \partial_R Z_n &= -\lambda Z_{ n} - \frac{\delta_g}{10} Z^-_{n} - J (1-\chi_1) Z_n - \chi_2 \alpha \overline S \frac{U_{\Psi, n}(R)}{R} n_1 (n_1+1) + \chi_2 \mathcal E_{Z, n}, \\
\chi_2 (R + \overline U_{R}) \partial_R U_{\Psi, n} &= -\lambda U_{\Psi, n} - \frac{\delta_g}{10} U^-_{\Psi, n} - J(1-\chi_1)U_{\Psi, n} - \chi_2 \left( (r-1) U_{\Psi, n} + \alpha \overline S \frac{W_n-Z_n}{2R} + \frac{\overline U_R U_{\Psi, n}}{R} \right), \\
\chi_2 (R + \overline U_{R}) \partial_R U_{\Phi, n} &= -\lambda U_{\Phi, n} - \frac{\delta_g}{10} U^-_{\Phi, n} - J(1-\chi_1)U_{\Phi, n} - \chi_2 \left( (r-1) + \frac{\overline U_R}{R} \right) U_{\Phi, n},
\end{split}\end{aligned}$$ where we defined the lower order terms $$\begin{aligned}
\mathcal E_{W, n} &= - \left( (r-1) + \alpha \partial_R \overline S
\right) W_n-\partial_{R}\overline{U}_{R}(U_{R,n}+\alpha S_{n})-\frac{2\alpha}{R}[\overline{S}U_{R,n}+\overline{U}_{R}S_n], \\
\mathcal E_{Z, n} &= -\left( (r-1) - \alpha \partial_R \overline S
\right) Z_n -\partial_{R}\overline{U}_{R}(U_{R,n}-\alpha S_{n})-\frac{2\alpha}{R}[\overline{S}U_{R,n}+\overline{U}_{R}S_n].\end{aligned}$$ The possibility of complex eigenfunctions is handled in the spherical mode decomposition, given that real functions correspond to the coefficient of the mode $(n_1, n_2)$ being the conjugate of the coefficient of the mode $(n_1, -n_2)$.
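For the reader's convenience, let us point out how the transport coefficients $R + \overline U_R \pm \alpha \overline S$ arise: adding the equations for $U_{R, n}$ and $S_n$ above, the terms containing $\partial_R$ derivatives of the perturbation combine as $$- \chi_2 (R + \overline U_R) \partial_R U_{R,n} - \chi_2 \alpha \overline S\, \partial_R S_n - \chi_2 (R + \overline U_R) \partial_R S_n - \chi_2 \alpha \overline S\, \partial_R U_{R,n} = -\chi_2 \left( R + \overline U_R + \alpha \overline S \right) \partial_R W_n,$$ while subtracting them produces the factor $R + \overline U_R - \alpha \overline S$ in the equation for $Z_n$; all the remaining terms contain no derivatives of the perturbation and are collected in $\ensuremath{\mathcal}E_{W, n}$, $\ensuremath{\mathcal}E_{Z, n}$ and the $U_{\Psi, n}$ coupling.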
Since $U^-, S^-$ are given in $B(0, 3C_0)$ by induction, we can solve [\[eq:sys\]](#eq:sys){reference-type="eqref" reference="eq:sys"} by standard ODE theory for $R\in [C_0, 3C_0)$ given that the terms multiplying the derivative do not vanish.
For $R > \frac52 C_0$, we obtain directly that $W_n = Z_n = U_{\Psi, n} = U_{\Phi, n} = 0$ from [\[eq:sys\]](#eq:sys){reference-type="eqref" reference="eq:sys"} (either in the diagonalizable or non-diagonalizable case).
Taking Taylor series, and using that $J$ is sufficiently large, one can also conclude smoothness for each $W_n,Z_n, U_{\phi,n}, U_{\psi,n}$ at $R = \frac{5}{2}C_0$ (see [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma 7.5]).
**Lemma 20**. *Consider the spherical energy $$E_{n,k}(R) = ( R + \overline U_R + \alpha \overline S ) \frac{|\partial_{R}^{k}W_n|^2}{2} + ( R + \overline U_R - \alpha \overline S ) \frac{|\partial_{R}^{k}Z_n|^2}{2} + ( R + \overline U_R ) \left( |\partial_{R}^{k}U_{\Psi, n}|^2 + |\partial_{R}^{k}U_{\Phi, n}|^2 \right) (n_1\left( n_1+1\right)+1),$$ and $$E_{n,k}^{-}(R) = ( R + \overline U_R + \alpha \overline S ) \frac{|\partial_{R}^{k}W_n^{-}|^2}{2} + ( R + \overline U_R - \alpha \overline S ) \frac{|\partial_{R}^{k}Z_n^{-}|^2}{2} + ( R + \overline U_R ) \left( |\partial_{R}^{k}U_{\Psi, n}^{-}|^2 + |\partial_{R}^{k}U_{\Phi, n}^{-}|^2 \right) (n_1\left( n_1+1\right)+1).$$ Then we have for $C_0\leq R<\frac{5}{2}C_0$, $$\begin{aligned}
\label{modesmoothestimate}
E_{n,k}(R)\lesssim E_{n,k}(C_0)+\sup_{C_0\leq R<\frac{5}{2}C_0}[E_{n,k}^{-}(R)]+\sum_{k'=0}^{k-1}\sup_{C_0\leq R<\frac{5}{2}C_0}[E_{n,k'}(R)][\sqrt{n_1(n_1+1)}+1].\end{aligned}$$ Here the implicit constant in $\lesssim$ can depend on $m,J$.*
*Proof.* In the following proof we use the notation: $$O(f(R))\Leftrightarrow \text{bounded by } C f(R)\text{ for some fixed constant $C$,}$$ $$O_{m,J}(f(R))\Leftrightarrow \text{ bounded by } C_{m,J}f(R) \text{ for some constant depending on $m,J$}.$$ First, we show the bound for $k=0$. For $R\in [C_0, \frac{5}{2}C_0)$, since $\chi_2\neq 0$, we have $$\begin{aligned}
\partial_R E_{n,0} &= - \frac{J(1-\chi_1) + \Re\lambda}{\chi_2}\left( |W_{n,0}|^2 + |Z_{n,0}|^2 + 2(n_1(n_1+1)+1) (|U_{\Psi, n,0}|^2 + |U_{\Phi, n,0}|^2 )\right)\\
&\quad +O( E_{n,0} ) + O\left( \frac{1}{\chi_2}\sqrt{E_{n,0} E_{n,0}^{-} }\right).\end{aligned}$$ This is because the contributions of $\mathcal E_{W, n,0}$ and $\mathcal E_{Z, n,0}$ are $O(E_{n,0})$, and the terms $\alpha \overline S \frac{\Re (U_{\Psi, n,0}W^{\ast}_{n,0})}{2R}n_1(n_1+1)$ and $\alpha \overline S \frac{\Re(U_{\Psi, n,0}Z_{n,0}^{\ast})}{2R}n_1(n_1+1)$ cancel between the contribution coming from the equations of $W_{n,0}, Z_{n,0}$ and the contribution coming from the equation of $U_{\Psi, n,0}$. In fact, from [\[eq:sys\]](#eq:sys){reference-type="eqref" reference="eq:sys"}, we have $$\begin{aligned}
&\quad \Re( W_{n,0}^{\ast}\frac{U_{\Psi, n,0}}{R}n_1(n_1+1)\chi_2\alpha\overline{S}-Z_{n,0}^{\ast}\frac{U_{\Psi, n,0}}{R}n_1(n_1+1)\chi_2\alpha\overline{S}-n_1(n_1+1)U_{\Psi, n,0}^{\ast}(\frac{W_{n,0}-Z_{n,0}}{R})\chi_2\alpha\overline{S})\\
&=\chi_2\alpha\overline{S}n_1(n_1+1)\frac{1}{R}\Re(W_{n,0}^{\ast}U_{\Psi, n,0}-Z_{n,0}^{\ast}U_{\Psi, n,0}-U_{\Psi, n,0}^{\ast}(W_{n,0}-Z_{n,0}))\\
&=0.\end{aligned}$$ Here, $E_{n,0}^{-}$ corresponds to the energy of the $n$-th mode of the previous eigenfunction, in the non-diagonal case.
In particular, using that $\Re(\lambda)\geq \frac{-\delta_{g}}{2}$ as in Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"}, and the lower bounds on $R + \overline U_R \pm \alpha \overline S$ for $R \in [R_1, R_2]$, there exists an absolute constant $C$ such that $$\begin{aligned}
\quad \partial_R E_{n,0} &\leq \frac{1}{\chi_2} C E_{n,0} - \frac{J(1-\chi_1)}{\chi_2} \frac{E_{n,0}}{100}
+ \frac{C}{\chi_2}\sqrt{E_{n,0} E_{n,0}^{-} } \\
&\leq \frac{2}{\chi_2} C E_{n,0} - \frac{J(1-\chi_1)}{\chi_2} \frac{E_{n,0}}{100}
+ \frac{C}{\chi_2}E_{n,0}^{-},\end{aligned}$$ where we apply Cauchy-Schwarz in the second inequality.
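Explicitly, the step referred to as Cauchy--Schwarz above is the elementary inequality $\sqrt{ab} \leq \frac{1}{2}(a+b)$ applied with $a = E_{n,0}$ and $b = E_{n,0}^{-}$, namely $$\frac{C}{\chi_2}\sqrt{E_{n,0} E_{n,0}^{-}} \leq \frac{C}{2\chi_2} E_{n,0} + \frac{C}{2\chi_2} E_{n,0}^{-},$$ which accounts for the enlarged constant in front of $E_{n,0}$ in the second line.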
Now, note that $\chi_1 = 0$ in the region where $\chi_2<1$. Thus, the term $\frac{CE_{n,0}}{\chi_2}$ in the region $\chi_2<1$ can be absorbed assuming $J$ is sufficiently large. Then we have the estimate: $$\frac{1}{\chi_2}C\leq \frac{J}{400\chi_2}(1-\chi_1)+C.$$ Enlarging $C$ if needed, we obtain $$\partial_R E_{n,0} \leq CE_{n,0} - \frac{J(1-\chi_1)}{\chi_2} \frac{E_{n,0}}{200} + \frac{C E_{n,0}^{-}}{\chi_2}.$$ Again, enlarging $C$ if needed $$\begin{aligned}
\partial_R E_{n,0} &\leq C(E_{n,0}+E_{n,0}^{-}) - \frac{(1-\chi_1)}{\chi_2} \left( \frac{J E_{n,0}}{200} - CE_{n,0}^{-} \right) \\
&\leq C(E_{n,0}+E_{n,0}^{-}) - \frac{J(1-\chi_1)}{400\chi_2} \left( E_{n,0} - E_{n,0}^{-} \right) - \frac{J(1-\chi_1)}{400\chi_2} E_{n,0}\end{aligned}$$ taking $J \geq 400C$. Then we have $$\begin{aligned}
\label{es:eigenfunctionsmode}
\partial_R (E_{n,0}e^{-2CR}) &\leq \left(C(E_{n,0}^{-}-E_{n,0}) - \frac{J(1-\chi_1)}{400\chi_2} \left( E_{n,0} - E_{n,0}^{-} \right) - \frac{J(1-\chi_1)}{400\chi_2} E_{n,0}\right)e^{-2CR}.\end{aligned}$$ Therefore, we claim $E_{n,0}(R)e^{-2CR}\leq \sup_{R}(E_{n,0}^{-})+E_{n,0}(C_0)$. When $R=C_0$, the estimate clearly holds since $e^{-2CC_0}\leq 1$. Moreover, from estimate [\[es:eigenfunctionsmode\]](#es:eigenfunctionsmode){reference-type="eqref" reference="es:eigenfunctionsmode"}, when $E_{n,0}(R)e^{-2CR}= \sup_{R}(E_{n,0}^{-})+E_{n,0}(C_0)$, we have $E_{n,0}(R)\geq \sup_{R}(E_{n,0}^{-})\geq E_{n,0}^{-}(R)$, so the right-hand side of [\[es:eigenfunctionsmode\]](#es:eigenfunctionsmode){reference-type="eqref" reference="es:eigenfunctionsmode"} is nonpositive and $\frac{d}{dR}\left(E_{n,0}(R)e^{-2CR}\right)\leq 0$. We get $$\begin{aligned}
\label{smootheigenfunction0}
E_{n,0}\lesssim e^{2CR}\left(\sup_{R}(E_{n,0}^{-})+E_{n,0}(C_0)\right)\end{aligned}$$ and [\[modesmoothestimate\]](#modesmoothestimate){reference-type="eqref" reference="modesmoothestimate"} follows when $k=0$.
Now we estimate the higher derivatives with respect to $R$. From [\[eq:sys\]](#eq:sys){reference-type="eqref" reference="eq:sys"}, we have $$\begin{aligned}
\label{higherorderderivativeeigenfunctions}
&\chi_2 (R + \overline U_R + \alpha \overline S) \partial_R^{k+1} W_{n}+\sum_{j\leq k}C_{j,k}\partial_{R}^{-j+1+k}(\chi_2 (R + \overline U_R + \alpha \overline S))\partial_R^{j} W_{n} \\\nonumber
&= -\lambda \partial_R^{k}W_{n} - \frac{\delta_g}{10} \partial_R^{k}W^-_{n} - J(1-\chi_1)\partial_R^{k}W_{n}- \sum_{j\leq k-1}C_{j,k}\partial_R^{k-j}(J(1-\chi_1))\partial_R^{j}W_{n}\\\nonumber
&+ \chi_2 \alpha \overline S \frac{\partial_R^{k}U_{\Psi, n}(R)}{R} n_1 (n_1+1)+ \sum_{j\leq k-1}C_{j,k}\partial_R^{k-j} \left( \frac{\chi_2 \alpha \overline S}{R} \right) \partial_R^{j}U_{\Psi, n}(R) n_1 (n_1+1) +\partial_R^{k} (\chi_2 \mathcal E_{W, n}).\end{aligned}$$ Here we use the same symbol $C_{j,k}$ for constants that may change from line to line but depend only on $j,k$. Let $\delta_{m,J}$ be sufficiently small, to be fixed later. From the definition of $\chi_2$, there exists $\varepsilon_{m,J}>0$ depending on $\delta_{m,J}$, such that for $R\in[C_0,\frac{5}{2}C_0-\delta_{m,J}]$, $|\chi_2|\geq \varepsilon_{m,J}$. Then from [\[higherorderderivativeeigenfunctions\]](#higherorderderivativeeigenfunctions){reference-type="eqref" reference="higherorderderivativeeigenfunctions"}, when $R\in[C_0,\frac{5}{2}C_0-\delta_{m,J}]$, we have $$\begin{aligned}
(R + \overline U_R + \alpha \overline S) \partial_R^{k+1} W_{n}&=\sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) \left( \sqrt{n_1(n_1+1)}+1 \right) +O_{m,J} \left( \sqrt{E_{n,k}(R)} \right)\\
&\quad+O_{m,J} \left( \sqrt{E_{n,k}^{-}(R)} \right) + \alpha \overline S \frac{\partial_R^{k}U_{\Psi, n}(R)}{R} n_1 (n_1+1).\end{aligned}$$ Similarly, we also have the following estimates for other terms in the spherical energy: $$\begin{aligned}
(R + \overline U_R + \alpha \overline S) \partial_R^{k+1} Z_{n}&= \sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) \left( \sqrt{n_1(n_1+1)}+1 \right) + O_{m,J} \left( \sqrt{E_{n,k}(R)} \right)\\
&\quad+O_{m,J} \left( \sqrt{E_{n,k}^{-}(R)} \right)- \alpha \overline S \frac{\partial_R^{k}U_{\Psi, n}(R)}{R} n_1 (n_1+1),\end{aligned}$$ $$\begin{aligned}
(R + \overline U_R) \partial_R^{k+1} U_{\psi,n}&= \sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) +O_{m,J} \left( \frac{\sqrt{E_{n,k}(R)}}{(\sqrt{n_1(n_1+1)}+1)} \right)\\
&\quad+O_{m,J} \left( \frac{\sqrt{E_{n,k}^{-}(R)}}{(\sqrt{n_1(n_1+1)}+1)} \right)- \alpha \overline S \frac{\partial_R^{k}(W_{n}(R)-Z_{n}(R))}{R},\end{aligned}$$ and $$\begin{aligned}
(R + \overline U_R ) \partial_R^{k+1} U_{\phi,n}&= \sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) +O_{m,J} \left( \frac{\sqrt{E_{n,k}(R)}}{(\sqrt{n_1(n_1+1)}+1)} \right)\\
&\quad+O_{m,J} \left( \frac{\sqrt{E_{n,k}^{-}(R)}}{(\sqrt{n_1(n_1+1)}+1)} \right).\end{aligned}$$ Then a similar cancellation as in the $k=0$ case happens, and we have the following estimate for the spherical energy: $$\partial_{R}E_{n,k}(R)= \sum_{k'=0}^{k-1}O_{m,J}(E_{n,k'}(R)) \left( \sqrt{n_1(n_1+1)}+1 \right) +O_{m,J}(E_{n,k}(R))+O_{m,J}(E_{n,k}^{-}(R)).$$ Therefore we get: $$\begin{aligned}
\begin{split} \label{smootheigenfunction1}
\sup_{R\in[C_0,\frac{5}{2}C_0-\delta_{m,J}]}E_{n,k}(R)&\lesssim |E_{n,k}(C_0)|+\sup_{R\in [C_0,\frac{5}{2}C_0-\delta_{m,J}]}E_{n,k}^{-}(R) \\
&\qquad +\sum_{k'=0}^{k-1}\sup_{R\in [C_0,\frac{5}{2}C_0-\delta_{m,J}]}E_{n,k'}(R) \left( \sqrt{n_1 (n_1+1)}+1 \right).
\end{split} \end{aligned}$$
Now if $\delta_{m,J}\leq \frac{1}{10}C_0$, for $\frac{5}{2}C_0\geq R\geq \frac{5}{2}C_0-\delta_{m,J}$, we have $\chi_1=0$. Then from [\[higherorderderivativeeigenfunctions\]](#higherorderderivativeeigenfunctions){reference-type="eqref" reference="higherorderderivativeeigenfunctions"}, the following estimate holds: $$\begin{aligned}
(R + \overline U_R + \alpha \overline S) \partial_R^{k+1} W_{n}=&-\frac{\chi_2'}{\chi_2}k(R + \overline U_R + \alpha \overline S)(\partial_R^{k}W_{n}(R))-\frac{J}{\chi_2}\partial_{R}^{k}W_{n}-\frac{\lambda}{\chi_{2}}\partial_{R}^{k}W_{n}(R)+O(\partial_{R}^{k}W_n(R))\\
&\qquad+\frac{1}{\chi_2}O_{m,J} \left( \sqrt{E_{n,k}^{-}(R)} \right) -\frac{1}{\chi_2}\sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) \left( \sqrt{n_1(n_1+1)}+1 \right) \\
&\qquad+\alpha \overline S \frac{\partial_R^{k}U_{\Psi, n}(R)}{R} n_1 (n_1+1).\end{aligned}$$ Similarly, we have the estimate for $\partial_{R}^{k}Z_n$: $$\begin{aligned}
(R + \overline U_R - \alpha \overline S) \partial_R^{k+1} Z_{n}(R)
&=-\frac{\chi_2'}{\chi_2}k(R + \overline U_R - \alpha \overline S)\partial_R^{k}Z_{n}(R)-\frac{J}{\chi_2}\partial_{R}^{k}Z_{n}(R)-\frac{\lambda}{\chi_{2}}\partial_{R}^{k}Z_{n}(R)+O(\partial_{R}^{k}Z_n(R))\\
&+\frac{1}{\chi_2} O_{m,J} \left( \sqrt{E_{n,k}^{-}(R)} \right) + \frac{1}{\chi_2}\sum_{k'=0}^{k-1} O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) \left( \sqrt{(n_1)(n_1+1)}+1 \right) \\
&-\alpha \overline S \frac{\partial_R^{k}U_{\Psi, n}(R)}{R} n_1 (n_1+1).\end{aligned}$$
Similarly, for $\partial_{R}^{k}U_{\psi,n}$:
$$\begin{aligned}
(R + \overline U_R)\partial_R^{k+1} U_{\psi,n}(R) &=
-\frac{\chi_2'}{\chi_2}k(R + \overline U_R)\partial_R^{k} U_{\psi,n}(R)-\frac{J}{\chi_2}\partial_{R}^{k}U_{\psi,n}(R) - \frac{\lambda}{\chi_{2}}\partial_{R}^{k}U_{\psi,n}(R) + O \left( \partial_{R}^{k}U_{\psi,n}(R) \right) \\
&+\frac{1}{\chi_2}\frac{O_{m,J} \left( \sqrt{E_{n,k}^{-}(R)} \right) }{\sqrt{n_1(n_1+1)}+1}+\frac{1}{\chi_2}\sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right) \\
&-\alpha \overline S \frac{(\partial_R^{k}W_{n}(R)-\partial_R^{k}Z_{n}(R))}{R}.\end{aligned}$$
Finally, for $\partial_{R}^{k}U_{\phi,n}$: $$\begin{aligned}
(R + \overline U_R)\partial_R^{k+1} U_{\phi,n}&=-\frac{\chi_2'}{\chi_2}k(R + \overline U_R)\partial_R^{k} U_{\phi,n}(R)-\frac{J}{\chi_2}\partial_{R}^{k}U_{\phi,n}(R)-\frac{\lambda}{\chi_{2}}\partial_{R}^{k}U_{\phi,n}(R)+O(\partial_{R}^{k}{U_{\phi,n}(R)})\\
&+\frac{1}{\chi_2}O_{m,J}\left(\frac{\sqrt{E_{n,k}^{-}(R)}}{\sqrt{n_1(n_1+1)}+1}\right)+\frac{1}{\chi_2}\sum_{k'=0}^{k-1}O_{m,J} \left( \sqrt{E_{n,k'}(R)} \right).\end{aligned}$$ Then for the energy $E_{n,k}(R)$, we have $$\begin{aligned}
\partial_{R}E_{n,k}(R)&\leq -\frac{2\chi_2'}{\chi_2}k E_{n,k}(R)-\frac{J}{2\chi_2}E_{n,k}(R)\\
&+\frac{1}{\chi_2}O_{m,J}(E_{n,k}^{-}(R))+\frac{1}{\chi_2}\sum_{k'=0}^{k-1}O_{m,J}(E_{n,k'}(R)) \left( \sqrt{n_1(n_1+1)}+1 \right),\end{aligned}$$ where we also used that $\Re(\lambda) > -\delta_g/2 \geq -1/2$, $\chi_2\leq 1$ and $J$ sufficiently large.
Next we will use a similar method as in [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma 7.5] to bound the energy. In fact, let $$\begin{aligned}
R_{m,J}=\frac{5}{2}C_0-\delta_{m,J}, \quad B_{m,J}=O_{m,J}(E_{n,k}^{-}(R))+\sum_{k'=0}^{k-1}O_{m,J}(E_{n,k'}(R)) \left( \sqrt{n_1(n_1+1)}+1 \right).\end{aligned}$$
Then $$\frac{d}{d\gamma}e^{\int_{R_{m,J}}^{\gamma}\frac{J}{2\chi_2(\beta)}d\beta}=e^{\int_{R_{m,J}}^{\gamma}\frac{J}{2\chi_2(\beta)}d\beta}\frac{J}{2\chi_2(\gamma)}.$$
We have $$\begin{aligned}
E_{n,k}(R) &\leq e^{-\int_{R_{m,J}}^{R}\frac{J}{2\chi_2(\beta)}d\beta}(|E_{n,k}(R_{m,J})|+m\int_{R_{m,J}}^{R}\frac{d}{d\gamma}e^{\int_{R_{m,J}}^{\gamma}\frac{J}{2\chi_2(\beta)}d\beta}(|B_{m,J}(\gamma)|+C_{m,J}|\chi_{2}'(\gamma)|E_{n,k}(\gamma))d\gamma)\\
& \leq |E_{n,k}(R_{m,J})|+ 2m \sup_{\gamma \in [R_{m,J},\frac{5}{2}C_0]}|B_{m,J}(\gamma)|+C_{m,J}\sup_{\gamma \in [R_{m,J},\frac{5}{2}C_0]}(|\chi_2'(\gamma)||E_{n,k}(\gamma)|).\end{aligned}$$ Since $\displaystyle \lim_{R\to\frac{5}{2}C_0}\chi'_2(R)=0$ from the definition, we choose $\delta_{m,J}$ sufficiently small such that $$\sup_{\gamma \in {[R_{m,J},\frac{5}{2}C_0]}}C_{m,J}|\chi_2'(\gamma)|\leq \frac{1}{2}.$$ Therefore, we have $$\begin{aligned}
\begin{split}\label{smootheigenfunction2}
\sup_{R\in[R_{m,J},\frac{5}{2}C_0]}E_{n,k}(R)&\lesssim |E_{n,k}(R_{m,J})|+\sup_{R\in [R_{m,J},\frac{5}{2}C_0]}E_{n,k}^{-}(R) \\
&\qquad +\sum_{k'=0}^{k-1}\sup_{R\in [R_{m,J},\frac{5}{2}C_0]}E_{n,k'}(R)(\sqrt{n_1 (n_1+1)}+1).
\end{split} \end{aligned}$$ From [\[smootheigenfunction0\]](#smootheigenfunction0){reference-type="eqref" reference="smootheigenfunction0"}, [\[smootheigenfunction1\]](#smootheigenfunction1){reference-type="eqref" reference="smootheigenfunction1"} and [\[smootheigenfunction2\]](#smootheigenfunction2){reference-type="eqref" reference="smootheigenfunction2"}, we get [\[modesmoothestimate\]](#modesmoothestimate){reference-type="eqref" reference="modesmoothestimate"}. ◻
Now we revert to the notation $(U_{j},S_{j})$ for $(U,S)$, and $(U_{j-1},S_{j-1})$ for $(U^{-},S^{-})$. Let $E_{n,k}^{j}$ denote the corresponding $k$-th order spherical energy of $(U_{j},S_{j})$. Then $E_{n,k}^{0}=0$ and from [\[modesmoothestimate\]](#modesmoothestimate){reference-type="eqref" reference="modesmoothestimate"}, we have $$\begin{aligned}
\label{estimate:eigenfunctions3}
E_{n,k}^{j}(R) \lesssim E_{n,k}^{j}(C_0) + \sup_{C_0\leq R<\frac{5}{2}C_0} E_{n,k}^{j-1}(R)
+ \sum_{k'=0}^{k-1} \sup_{C_0 \leq R < \frac{5}{2} C_0} E_{n,k'}^{j}(R) \left( \sqrt{n_1(n_1+1)}+1 \right).\end{aligned}$$ Hence we get that for all $j$, $0\leq k\leq m$, $$\begin{aligned}
\label{estimate:eigenfunctions2}
E_{n,k}^{j}(R)\lesssim \sum_{j'=1}^{j}\sum_{k'=0}^{k}E_{n,k'}^{j'}(C_0) \left( \sqrt{n_1(n_1+1)}+1 \right)^{k}.\end{aligned}$$ First, the estimate holds for $k=0$ (from [\[smootheigenfunction0\]](#smootheigenfunction0){reference-type="eqref" reference="smootheigenfunction0"}) and for $j=0$ (since $E_{n,k}^{0}=0$). It therefore suffices to show that if [\[estimate:eigenfunctions2\]](#estimate:eigenfunctions2){reference-type="eqref" reference="estimate:eigenfunctions2"} holds for all $\widetilde{j}\leq j-1$, $0\leq \widetilde{k}\leq m$, as well as for $\widetilde{j}=j$, $\widetilde{k}\leq k$, then it also holds for $(j, k+1)$. From the induction assumption and [\[estimate:eigenfunctions3\]](#estimate:eigenfunctions3){reference-type="eqref" reference="estimate:eigenfunctions3"} we have $$\begin{aligned}
E_{n,k+1}^{j}(R)&\lesssim E_{n,k+1}^{j}(C_0)+\sum_{j'=1}^{j-1}\sum_{k'=0}^{k+1}E_{n,k'}^{j'}(C_0)\left( \sqrt{n_1(n_1+1)}+1 \right)^{k+1}\\
&\quad+\sum_{k'=0}^{k}\sum_{j'=1}^{j}\sum_{k''=0}^{k'}E_{n,k''}^{j'}(C_0)\left( \sqrt{n_1(n_1+1)}+1 \right)^{k+1}\\
&\lesssim \sum_{j'=1}^{j}\sum_{k'=0}^{k+1}E_{n,k'}^{j'}(C_0) \left( \sqrt{n_1(n_1+1)}+1\right)^{k+1}.\end{aligned}$$
Therefore, the regularity of $\{\psi_{j}\}$ in $\{C_0\leq R\leq 3 C_0\}$ is worse than in $\{\frac{1}{2}C_0\leq R\leq C_0\}$ by at most $(m+C_0)$ derivatives in the angular direction. Thus we are done. ◻
# Nonlinear Stability {#sec:nonlinear}
## Agreement of equations
In order to use our analysis for the linearized cut-off operator $\ensuremath{\mathcal}L = \chi_2 (\ensuremath{\mathcal}L_u^{e}, \ensuremath{\mathcal}L_s^{e}) - J(1-\chi_1)$, we consider an additional equation, which we call the *truncated equation*: $$\begin{aligned}
\partial_s (\widetilde U_t, \widetilde S_t) &= \ensuremath{\mathcal}L (\widetilde U_t, \widetilde S_t) + \chi_2 \ensuremath{\mathcal}F, \nonumber \\
(\widetilde U_t, \widetilde S_t)|_{s=s_0}&= (\widetilde U_{t, 0}, \widetilde S_{t, 0}) ,
\label{eq:truncated}\end{aligned}$$ where $\ensuremath{\mathcal}F$ is the forcing coming from the original (extended) equation [\[nsequation1\]](#nsequation1){reference-type="eqref" reference="nsequation1"} and $(\widetilde U_{t,0}, \widetilde S_{t,0})$ is the initial data for the truncated equation, which will be further specified later.
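Written out, since $\ensuremath{\mathcal}L = \chi_2 (\ensuremath{\mathcal}L_u^{e}, \ensuremath{\mathcal}L_s^{e}) - J(1-\chi_1)$, the truncated equation [\[eq:truncated\]](#eq:truncated){reference-type="eqref" reference="eq:truncated"} reads $$\partial_s (\widetilde U_t, \widetilde S_t) = \chi_2 (\ensuremath{\mathcal}L_u^{e}, \ensuremath{\mathcal}L_s^{e})(\widetilde U_t, \widetilde S_t) - J(1-\chi_1)(\widetilde U_t, \widetilde S_t) + \chi_2 \ensuremath{\mathcal}F .$$ In particular, in the region where $\chi_1 = \chi_2 = 1$ it takes the same form as the extended equation; this observation is behind Lemma [Lemma 21](#truncatedestimate){reference-type="ref" reference="truncatedestimate"} below.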
**Lemma 21**. *Suppose that a classical solution $(\widetilde U, \widetilde S)$ to [\[nsequation1\]](#nsequation1){reference-type="eqref" reference="nsequation1"}--[\[nsequation2\]](#nsequation2){reference-type="eqref" reference="nsequation2"} exists up to time $s'$. Suppose that $(\widetilde U_t, \widetilde S_t)$ is a classical solution to [\[eq:truncated\]](#eq:truncated){reference-type="eqref" reference="eq:truncated"} up to time $s'$ as well. If $(\widetilde{U}_{t},\widetilde{S}_{t})$ agrees with $(\widetilde{U},\widetilde{S})$ at $s=s_0$ for $|y|\leq C_0$, then $(\widetilde U_t, \widetilde S_t)$ also agrees with $(\widetilde U, \widetilde S)$ for $|y| \leq C_0$ and $s \in [s_0, s')$.*
*Proof.* Let us show that they are equal by energy estimates. Let $(\delta_U, \delta_S) = (\widetilde U, \widetilde S) - (\widetilde U_t, \widetilde S_t)$. Given that $(\widetilde U_t, \widetilde S_t)$ satisfies equation [\[eq:truncated\]](#eq:truncated){reference-type="eqref" reference="eq:truncated"} (where we recall $\ensuremath{\mathcal}F$ is the forcing from the *extended* equation), we have that $$\begin{aligned}
\begin{split} \label{eq:diff_truncated_extended}
\partial_s (\delta_U, \delta_S) &= \ensuremath{\mathcal}L^{e} (\widetilde U, \widetilde S) - \chi_2 \ensuremath{\mathcal}L ^{e} (\widetilde U_t, \widetilde S_t) + (1-\chi_1) J (\widetilde U_t, \widetilde S_t) + (1-\chi_2) \ensuremath{\mathcal}F \\
&= \chi_2\ensuremath{\mathcal}L^{e} ( \delta_U, \delta_S) + (1-\chi_2) \ensuremath{\mathcal}L^{e}(\widetilde U, \widetilde S) + (1-\chi_1) J (\widetilde U_t, \widetilde S_t) + (1-\chi_2) \ensuremath{\mathcal}F.
\end{split} \end{aligned}$$
We compute the evolution of the energy: $$E(s) = \int_{B(0, C_0)}( |\delta_U(y, s)|^2 + \delta_S (y, s)^2 ) dy.$$ Clearly $E(s_0) = 0$ because of the hypothesis of the lemma. Moreover, since $\chi_1 = \chi_2 = 1$ on $B(0, C_0)$, only the term $\chi_2\ensuremath{\mathcal}L^{e}(\delta_U, \delta_S) = \ensuremath{\mathcal}L^{e}(\delta_U, \delta_S)$ in [\[eq:diff_truncated_extended\]](#eq:diff_truncated_extended){reference-type="eqref" reference="eq:diff_truncated_extended"} contributes there, and we get that the evolution of $E$ is dictated by $$\begin{aligned}
\frac12 \frac{d}{ds} E
&= \int_{B(0, C_0)} - (y+\overline U)\frac12 \nabla \left( |\delta_U|^2 \right) - \alpha \overline S \nabla \delta_S \cdot \delta_U - (r-1)|\delta_U|^2 - \delta_U \cdot \nabla \overline U \cdot \delta_U - \alpha \delta_S \delta_U \cdot \nabla \overline Sdy \\
&\qquad + \int_{B(0, C_0)} -(y+\overline U) \frac12 \nabla \left( \delta_S^2 \right) - \alpha \overline S \delta_S \text{div}(\delta_U) - (r-1) \delta_S^2 - \delta_S \delta_U \cdot \nabla \overline S - \alpha \delta_S^2 \text{div} (\overline U)dy \\
&=
\int_{B(0, C_0)} \frac{3+\text{div}(\overline U)}{2} (|\delta_U|^2 + \delta_S^2) + \alpha \nabla \overline S \cdot (\delta_S\delta_U) - (r-1) |\delta_U|^2 - ((\delta_U \cdot \nabla) \overline U) \cdot \delta_U - \alpha \delta_S\delta_U \cdot \nabla \overline S \\
&\qquad- (r-1) \delta_S^2 - \delta_S \delta_U \cdot \nabla \overline S - \alpha \delta_S^2 \text{div} (\overline U)dy\\
&\qquad + \int_{\partial B(0, C_0)} \frac{-R-\overline U_R}{2} (|\delta_U|^2 + \delta_S^2) - \alpha \overline S \delta_S (\delta_U\cdot \vec{n})d\sigma \\
&\leq C E + \int_{\partial B(0, C_0)} \frac{-R - \overline U_R + \alpha \overline S}{2} (|\delta_U|^2 + \delta_S^2) \leq CE.\end{aligned}$$ In the last inequality we used $-R - \overline U_R + \alpha \overline S < 0$ for $R =C_0$, due to Lemma [Lemma 39](#lemma:integrated_repulsivity){reference-type="ref" reference="lemma:integrated_repulsivity"}.
In particular, since $\frac{d}{ds} E \leq 2CE$ and $E(s_0) = 0$, we conclude that $E(s) = 0$ for all $s \in [s_0, s')$, and therefore $(\widetilde U, \widetilde S)$ agrees with $(\widetilde U_t, \widetilde S_t)$ for $|y| \leq C_0$, $s\in [s_0, s')$. ◻
## Bootstrap of bounds
In this subsection, we show that, given a small enough initial perturbation, if the unstable component $P_{\rm{uns}}(\widetilde{U}_t,\widetilde{S}_t)$ is controlled, then the perturbation $(\widetilde{U}, \widetilde{S})$ is also controlled and decays exponentially as $s$ evolves.
We will choose our parameters as follows: $$\label{choiceofparameter}
\frac{1}{s_0}\ll \delta_{0}^{\frac{3}{2}}\ll \delta_1 \ll
\delta_0 \ll \frac{1}{E}\ll \overline{\varepsilon}\ll \frac{1}{K}\ll\frac{1}{m} \ll \eta \ll \delta_g \ll \delta_{dis} = O(1),$$ and $$\label{choiceofparameter2}
\frac{1}{K}\ll\widetilde{\eta}.$$ In the above chain of inequalities, any parameter is allowed to depend on all the parameters to its right, with the exception of $\delta_0$ and $\delta_1$, which should satisfy $\delta_0^{3/2} \ll \delta_1 \ll \delta_0$ (we can take $\delta_1 = \delta_0^{5/4}$, for example). For the convenience of the reader, let us mention briefly where all these inequalities are used in our argument:
We control $\ensuremath{\mathcal}F_{dis}$ to be sufficiently small via $\frac{1}{s_0}\ll \delta_0^{\frac{3}{2}}$, as well as in [\[eq:I4\]](#eq:I4){reference-type="eqref" reference="eq:I4"}.
In $\delta_0^{\frac{3}{2}}\ll \delta_1\ll\delta_0$, the second inequality is chosen to control the initial perturbation [\[initialdatacondition2\]](#initialdatacondition2){reference-type="eqref" reference="initialdatacondition2"}, which is sufficiently small compared to the bootstrap estimates in Proposition [Proposition 23](#prop:bootstrap){reference-type="ref" reference="prop:bootstrap"}. The first one is chosen to control the nonlinear terms $N_{u,s}$ as in Lemma [Lemma 28](#forcingestimate1){reference-type="ref" reference="forcingestimate1"} and the lower bound of $S$ Lemma [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"}.
$\delta_0\ll \frac{1}{E}$ is used to control the lower order Sobolev norms of $\widetilde{U},\widetilde{S}$ in Lemma [Lemma 32](#boosstrapestimate2){reference-type="ref" reference="boosstrapestimate2"} and Lemma [Lemma 33](#lowerorderestimate){reference-type="ref" reference="lowerorderestimate"}.
$\frac{1}{E}\ll \overline{\varepsilon}$ is not crucial and we only need $\frac{1}{s_0}\ll\overline{\varepsilon}$, which is also used to control $\ensuremath{\mathcal}F_{dis}$ in [\[eq:I4\]](#eq:I4){reference-type="eqref" reference="eq:I4"}.
$\overline{\varepsilon}\ll \frac{1}{K}$ is used in the estimate of the $\ensuremath{\mathcal}F_{dis}$ in [\[epsicondition01\]](#epsicondition01){reference-type="eqref" reference="epsicondition01"} and [\[eq:grenada\]](#eq:grenada){reference-type="eqref" reference="eq:grenada"}.
$\frac{1}{K}\ll \frac{1}{m}$ is used in Lemma [Lemma 29](#forcingestimate2){reference-type="ref" reference="forcingestimate2"}. In order to bound the $X$ norm using our bootstrap assumptions [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we need to perform the non-linear stability analysis at a higher regularity than the linear stability analysis.
$\eta$ is defined in [\[weightdefi\]](#weightdefi){reference-type="eqref" reference="weightdefi"} and $\widetilde{\eta}$ is defined in [\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"} and [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"}. The precise relation between $\eta$ and $\frac{1}{m}$ is not crucial; we only need $\frac{1}{K}\ll\eta,\widetilde{\eta}$, which is used in the estimate [\[energyestimatewithweight\]](#energyestimatewithweight){reference-type="eqref" reference="energyestimatewithweight"}.
For $\eta\ll \delta_g$, we only use the weaker estimate $\eta \ll 1$. This estimate is used in [\[epsicondition01\]](#epsicondition01){reference-type="eqref" reference="epsicondition01"} and [\[eq:grenada\]](#eq:grenada){reference-type="eqref" reference="eq:grenada"}.
$\delta_g\ll \delta_{dis}$ is used in Lemmas [Lemma 31](#boosstrapestimate1){reference-type="ref" reference="boosstrapestimate1"} and [Lemma 32](#boosstrapestimate2){reference-type="ref" reference="boosstrapestimate2"} in order to control the term $\ensuremath{\mathcal}F_{dis}$ in [\[nsequation1\]](#nsequation1){reference-type="eqref" reference="nsequation1"}. Because $\ensuremath{\mathcal}F_{dis}$ decays no faster than $e^{-\delta_{dis}s}$, our bootstrap estimates can only have a slower decay rate.
For the initial data, let $$\begin{aligned}
\label{eq:tildeplusunstable}
\begin{cases}
\widetilde{U}_0=\widetilde{U}_0^{'}+\sum_{i=1}^{N}a_i \varphi_{i,u},\\
\widetilde{S}_0=\widetilde{S}_0^{'}+\sum_{i=1}^{N}a_i \varphi_{i,s},
\end{cases}\end{aligned}$$ where $$\label{eq:tilde_is_stable}
(\chi_2 \widetilde U_0', \chi_2 \widetilde S_0' ) \in V_{\rm{sta}},$$ and the $(\varphi_{i,u},\varphi_{i,s})$ form an orthonormal basis of $V_{\rm{uns}}$. Here $\{a_i\}_{i=1}^{N}$ is an $N$-dimensional vector of coefficients to be fixed later. In this setting, the unstable component of the solution will be determined by $\{a_i\}$. Moreover, we will pick $\widetilde U_0', \widetilde S_0'$ so that our initial data $(\widetilde U_0, \widetilde S_0)$ satisfies: $$\label{initialdatacondition2}
\|\widetilde{U}_0\|_{L^{\infty}}, \|\widetilde{S}_0\|_{L^{\infty}}\leq \delta_1, \qquad E_{K}(s_0)\leq\frac{E}{2}, \qquad \mbox{and} \qquad \widetilde{S}_0+\widehat{X}\overline{S}_0\geq \frac{\delta_1}{2} \left\langle\frac{R}{R_0} \right\rangle^{1-r},$$ and the decay of the derivatives: $$\label{initialdatacondition}
|\nabla(\widetilde{U}_0+\widehat{X} \overline{U})|+|\nabla(\widetilde{S}_0+\widehat{X} \overline{S})|\lesssim \frac{1}{\langle y\rangle^r}.$$ Recall that $\widehat{X}$ is the cut-off function defined in [\[eq:periodic_perturbation\]](#eq:periodic_perturbation){reference-type="eqref" reference="eq:periodic_perturbation"}.
In order to make use of the property of the truncated linear operator, we also define the truncated solutions and initial data. Recall that $(\widetilde{U},\widetilde{S})$ satisfy the following Navier-Stokes equations: $$\begin{aligned}
\begin{split} \label{navierstokesperturb1}
\partial_s \widetilde U &= \ensuremath{\mathcal}L_{u}^{e}(\widetilde{U},\widetilde{S})+\ensuremath{\mathcal}F_{u}(\widetilde{U},\widetilde{S}),\\
\partial_s \widetilde S &= \ensuremath{\mathcal}L_{s}^{e}(\widetilde{U},\widetilde{S})+\ensuremath{\mathcal}F_{s}(\widetilde{U},\widetilde{S}).
\end{split} \end{aligned}$$ with $\ensuremath{\mathcal}F_{u}=N_{u}+\ensuremath{\mathcal}F_{dis}+\ensuremath{\mathcal}E_{u}$, $\ensuremath{\mathcal}F_{s}=N_{s}+\ensuremath{\mathcal}E_{s}$. Here $\ensuremath{\mathcal}E_{u}$, $\ensuremath{\mathcal}E_{s }$, $N_{s}$, $N_{u}$, $\ensuremath{\mathcal}F_{dis}$ are defined in [\[nsequation1\]](#nsequation1){reference-type="eqref" reference="nsequation1"}, [\[nsequation2\]](#nsequation2){reference-type="eqref" reference="nsequation2"}, [\[Esdefinition\]](#Esdefinition){reference-type="eqref" reference="Esdefinition"}, [\[Eudefinition\]](#Eudefinition){reference-type="eqref" reference="Eudefinition"}. Now let $(\widetilde{U}_{t},\widetilde{S}_{t})$ be the solution of the truncated equation: $$\begin{aligned}
\begin{split} \label{navierstokesperturb1trun}
\partial_s \widetilde U_t &= \ensuremath{\mathcal}L_{u}(\widetilde{U}_t,\widetilde{S}_t)+\chi_2\ensuremath{\mathcal}F_{u}(\widetilde{U},\widetilde{S}),\\
\partial_s \widetilde S_t &= \ensuremath{\mathcal}L_{s}(\widetilde{U}_t,\widetilde{S}_t)+\chi_2\ensuremath{\mathcal}F_{s}(\widetilde{U},\widetilde{S}),
\end{split} \end{aligned}$$ with $\ensuremath{\mathcal}L = \chi_2 (\ensuremath{\mathcal}L_u^{e}, \ensuremath{\mathcal}L_s^{e}) - J(1-\chi_1)$ and the initial value $$\begin{aligned}
\label{initialdatatruncated}
\begin{cases}
\widetilde{U}_{t,0}=\chi_2\widetilde{U}_0^{'}+\sum_{i=1}^{N}a_i \varphi_{i,u},\\
\widetilde{S}_{t,0}=\chi_2\widetilde{S}_0^{'}+\sum_{i=1}^{N}a_i \varphi_{i,s}.
\end{cases}\end{aligned}$$ Here the $(\varphi_{i,u},\varphi_{i,s})$ are a basis of the unstable space $V_{\rm{uns}}$ as chosen in Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"}. Since $(\widetilde{U}_{t,0},\widetilde{S}_{t,0})=(\widetilde{U}_{0},\widetilde{S}_{0})$ for $|y|\leq C_0$, Lemma [Lemma 21](#truncatedestimate){reference-type="ref" reference="truncatedestimate"} gives $(\widetilde{U}_{t},\widetilde{S}_{t})=(\widetilde{U},\widetilde{S})$ for $|y|\leq C_0$ at all times for which the solution exists.
Before we state the main proposition of this section (Proposition [Proposition 23](#prop:bootstrap){reference-type="ref" reference="prop:bootstrap"}), we discuss the choice of $C_0$ and the weight $\phi$ that will be used in the energy estimates.
**Lemma 22**. *We can choose $C_0$ sufficiently large such that the profile $(\overline{U},\overline{S})$ satisfies the following conditions for all $s\geq s_0$: $$\label{profilecondition1}
|\alpha\widehat{X}\overline{S}|, |\widehat{X}\overline{U}|, |\alpha\nabla(\widehat{X}\overline{S})|, |\nabla(\widehat{X}\overline{U})|\leq\frac{1}{100C_1} \quad \text{ for } |y|\geq C_0,$$ $$\label{profilecondition4}
\|\nabla(\widehat{X}\overline{U})\|_{L^{\infty}(|y|\geq C_0)}+\|\nabla(\widehat{X}\overline{S})\|_{L^{\infty}(|y|\geq C_0)} \leq \frac{1}{C C_2},$$ $$\label{profilecondition2}
\|\nabla^{2}(\widehat{X}\overline{U})\|_{L^{8}({|y|\geq C_0})}+\|\nabla^2(\widehat{X}\overline{S})\|_{L^{8}({|y|\geq C_0})} \leq \frac{1}{C C_2},$$ $$\label{profilecondition3}
\sum_{j=3}^{5}(\|\widehat{X}\overline{U}\|_{\dot{H}^{j}(|y|\geq C_0)}+\|\widehat{X}\overline{S}\|_{\dot{H}^{j}(|y|\geq C_0)})\leq \frac{1}{C C_2},$$ with $C_2 = 100$, $C_1$ determined via $$\label{parameterchosen}
\frac{1}{C_2}=\frac{32}{r-1}\frac{1}{C_1}\left(\frac{1}{C_2}\right)^{\frac{1}{20}},$$ and $C$ being a universal constant used in Lemma [Lemma 32](#boosstrapestimate2){reference-type="ref" reference="boosstrapestimate2"}, independent of all parameters; $r$ is the self-similar parameter as in [\[eq:SS_coordinates1\]](#eq:SS_coordinates1){reference-type="eqref" reference="eq:SS_coordinates1"}. Here the $L^{8}$ estimate [\[profilecondition2\]](#profilecondition2){reference-type="eqref" reference="profilecondition2"} will be used in the energy estimate of $\|\widetilde{U}\|_{\dot{H}^{4}}+\|\widetilde{S}\|_{\dot{H}^{4}}$ (Lemma [Lemma 32](#boosstrapestimate2){reference-type="ref" reference="boosstrapestimate2"}) through a Hölder estimate.*
*Proof.* Since $|\widehat{X}|\leq 1$, $|\nabla\widehat{X}|\leq 2e^{-s}$, [\[profilecondition1\]](#profilecondition1){reference-type="eqref" reference="profilecondition1"} and [\[profilecondition4\]](#profilecondition4){reference-type="eqref" reference="profilecondition4"} follow from the decay of $(\overline{S},\overline{U})$ [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}.
For [\[profilecondition2\]](#profilecondition2){reference-type="eqref" reference="profilecondition2"}, [\[profilecondition3\]](#profilecondition3){reference-type="eqref" reference="profilecondition3"}, the terms with no derivatives falling on $\widehat{X}$ also have the corresponding decay. Moreover, we have $$\ensuremath{\mathrm{supp\,}}(\nabla\widehat{X}) \subset B(0, e^s)\cap B^c(0,\frac{e^s}{2}), \ |\nabla^{j}\widehat{X}|\lesssim e^{-js}.$$ Then for $\nabla^{j}\widehat{X}\,\nabla^{l}\overline{S},$ with $j\geq 1$, $j+l\geq 2$, $q\geq 2$, we get $$\begin{aligned}
\|\nabla^{j}\widehat{X}\nabla^{l}\overline{S}\|_{L^{q}(\mathbb{R}^3)}^{q} &\lesssim e^{-jsq}\int_{\frac{1}{2}e^s}^{e^s}|y|^{-q(r-1)-ql+2}dy\\
&\lesssim e^{-jsq} |y|^{-q(r-1)-ql+3}\big|^{e^s}_{\frac{1}{2}e^s}\\
&\lesssim e^{-jsq} e^{(-q(r-1)-ql+3)s}\\
&\lesssim e^{-s(2q-3)}\\
&\lesssim e^{-s}.\end{aligned}$$ Then, since $s\geq s_0\gg 1$, those terms with at least one derivative on $\widehat{X}$ are sufficiently small. ◻
We define the weight $\phi$ as follows: $$\begin{aligned}
\label{weightdefi}
\phi(y)=\begin{cases}
1 & \text{ for } |y|\leq R_0,\\
\frac{|y|^{2(1-\eta)}}{2R_0^{2(1-\eta)}}, &\text{ for } |y|\geq 4R_0,
\end{cases}\end{aligned}$$ with $R_0$ chosen such that $$\label{Rochoice estimate}
\begin{split}
\overline{S}(y)\geq 2\delta_0, &\text{ for all $|y|\leq R_0$,}\\
|\nabla \overline{S}(y)|,\ |\nabla \overline{U}(y)|\leq \frac{\delta_1}{2},\ \overline{S}(y)\leq C\delta_0, \overline{U}(y)\leq C\delta_0,&\text{ for all $|y|\geq R_0.$}
\end{split}$$ The existence of $R_0$ follows from $\delta_0^{\frac{3}{2}}\ll \delta_1\ll\delta_0 \ll 1$ and the decay of the profile [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}. We have: $$\begin{aligned}
\label{choiceR0}
\delta_0 \thicksim \frac{1}{R_0^{r-1}}.\end{aligned}$$
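This relation is simply a reading of the profile decay: assuming, as used repeatedly in this section, that $\overline{S}(y) \approx \langle y \rangle^{1-r}$ (cf. [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}), the requirement $\overline{S} \geq 2\delta_0$ for $|y| \leq R_0$ together with $\overline{S} \leq C \delta_0$ for $|y| \geq R_0$ forces $$\delta_0 \approx \overline{S}(R_0) \approx R_0^{1-r}.$$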
Now we state our main proposition:
**Proposition 23**. *For the initial data chosen above, if the solution exists for $s\in [s_0,s_1]$ and the unstable mode satisfies $\|P_{\rm{uns}}(\widetilde{U}_t,\widetilde{S}_t)\|_{X} \leq \delta_1 e^{-\varepsilon(s-s_0)}$, then $$\begin{aligned}
\label{lowerestimate1}
|\widetilde{U}|,|\widetilde{S}| &\leq \delta_0 \frac{1}{C_2}e^{-\varepsilon(s-s_0)}, \\
\label{lowerestimate2}
\|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}, \|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)} &\leq \delta_0 e^{-\varepsilon(s-s_0)}, \\
\label{higherestimate3}
E_{K}=\int_{\mathbb{R}^3}(|\nabla^{K}U|^2+|\nabla^{K}S|^2)\phi^{K}dy &\leq E.\end{aligned}$$ where $\nabla^K S$ and $\nabla^K U_i$ are $K$-tensors containing all possible combinations of $K$ derivatives, and $\varepsilon=\frac{12\delta_g}{25},$ $C_2=100$.*
*Proof.* The proof follows by a bootstrap argument: [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"} and [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"} will be shown in Subsection [3.3](#lowerestimate){reference-type="ref" reference="lowerestimate"}. In particular, [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"} will follow from Lemmas [Lemma 30](#linfitnityinside){reference-type="ref" reference="linfitnityinside"} and [Lemma 31](#boosstrapestimate1){reference-type="ref" reference="boosstrapestimate1"}. Equation [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"} will follow from [\[boosstrapestimate2\]](#boosstrapestimate2){reference-type="eqref" reference="boosstrapestimate2"}. Equation [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"} will be shown in Subsection [3.4](#higherorderestimate){reference-type="ref" reference="higherorderestimate"}. ◻
## Propagation of lower order estimates {#lowerestimate}
In this and the next subsection, we always assume [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"}, [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"}, [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"} are true and aim to show the better bounds:
$$\mathrm{LHS}\leq \frac{1}{2}\,\mathrm{RHS}.$$ In order to bound the relevant terms in $L^\infty$, we first show a few lemmas regarding the decay of $S$, $\nabla S$, and $\nabla U$. We will use the interpolation Lemmas [Lemma 41](#lemma:GN_generaltorus){reference-type="ref" reference="lemma:GN_generaltorus"} and [Lemma 42](#lemma:GN_generalnoweighttorus){reference-type="ref" reference="lemma:GN_generalnoweighttorus"} proved in the Appendix.
**Lemma 24**. *We have $$\label{perturbationenergy}
\widetilde{E}_{K}:=\int_{e^s\mathbb{T}_{L}^{3}}(|\nabla^{K}\widetilde{U}|^2+|\nabla^{K}\widetilde{S}|^2)\phi^{K}dy\leq 2E.$$*
*Proof.* Since $(\widetilde{U},\widetilde{S})=(U,S)-(\widehat{X}\overline{U},\widehat{X}\overline{S}),$ and $K\ll E$, from [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we only need to show $$\label{profilekestimate}
\int_{e^s\mathbb{T}_{L}^{3}}(|\nabla^{K}(\widehat{X}\overline{U})|^2+|\nabla^{K}(\widehat{X}\overline{S})|^2)\phi^{K}dy\lesssim_{K} 1.$$ From the decay of the profile [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}, we have $$\left( |\nabla^{K} \overline{U} |^2+|\nabla^{K} \overline{S} |^2 \right) \phi^{K}\lesssim_{K} \langle R\rangle^{-2(r-1)-2K\eta}.$$ From $\frac{1}{K}\ll \eta$ [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"}, we claim $$\label{profilekestimate1}
\int_{e^s\mathbb{T}_{L}^{3}} \left( |\nabla^{K}\overline{U}|^2+|\nabla^{K}\overline{S}|^2 \right) \phi^{K}dy\lesssim_{K} 1.$$ From the definition of $\widehat{X}$, we have $$\label{estimatehatX}
\ensuremath{\mathrm{supp\,}}\widehat{X} \subset B \left( 0, e^s \right) ,\ \ensuremath{\mathrm{supp\,}}(1-\widehat{X}) \subset B^c \left( 0,\frac{e^s}{2} \right), \ \ensuremath{\mathrm{supp\,}}\nabla\widehat{X} \subset B \left( 0, e^s \right) \cap B^c \left( 0,\frac{e^s}{2} \right),$$ and $$\label{estimatehatX2}
\|\partial^{j}\widehat{X}\|_{L^{\infty}}\lesssim e^{-js}.$$ Then we have $$\begin{aligned}
\label{profilekestimate2}
&\quad\sum_{1\leq j\leq K}\int_{e^s\mathbb{T}_{L}^{3}}|\nabla^{K-j} \widehat{X}|^2 \left( |\nabla^{j} \overline{U} |^2+|\nabla^{j} \overline{S} |^2 \right)\phi^{K}dy\\\nonumber
&\lesssim_{K} \sum_{1\leq j\leq K}e^{-2(K-j)s}\int_{\frac{1}{2}e^{s}}^{e^s}|y|^{2(1-\eta)K}|y|^{-2(r-1)-2j}|y|^{2}dy\\\nonumber
&\lesssim_{K} \sum_{1\leq j\leq K}e^{-2(K-j)s+(3+2(1-\eta)K-2(r-1)-2j)s}\\\nonumber
&\lesssim_{K} e^{(-2K\eta+3-2(r-1))s}\lesssim_{K}1.\end{aligned}$$ The last inequality follows again from the choice of parameter $\frac{1}{K}\ll \eta$ in [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"}. Then [\[profilekestimate\]](#profilekestimate){reference-type="eqref" reference="profilekestimate"} follows from [\[profilekestimate1\]](#profilekestimate1){reference-type="eqref" reference="profilekestimate1"} and [\[profilekestimate2\]](#profilekestimate2){reference-type="eqref" reference="profilekestimate2"}. ◻
**Lemma 25** (Bounds for $S$). *We have the lower bound*
*$$S \gtrsim \delta_1 \left\langle \frac{R}{R_0} \right\rangle^{-(r-1)}.$$ On the region $|y| > R_0$, we have the upper bound $$S \lesssim\delta_0 \max \left\{ \left\langle \frac{R}{R_0} \right\rangle^{-(r-1)}, e^{-(s-s_0)(r-1)} \right\}.$$*
*Proof.* For $R \leq R_0$ we have $\overline S \geq 2 \delta_0$, so $S \geq 2 \delta_0 - \| \widetilde S \|_{L^\infty} \geq\delta_0$ and the inequality is clear. Thus, let us assume $R \geq R_0$. We choose $(y_0, \overline{s})$ such that either $|y_0| = R_0$ or $\overline{s} = s_0$. We consider the trajectories starting at $(y_0, \overline{s})$.
Let us define $\omega_{y_0} (s) = e^{(r-1)(s-\overline{s})} S (y_0 e^{s-\overline{s}}, s)$. From [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"}: $$\partial_s \omega_{y_0} = - e^{(r-1) (s-\overline{s})} U \cdot \nabla S - \alpha e^{(r-1) (s-\overline{s})} S \ensuremath{\mathrm{div\,}}(U).$$ Using the interpolation inequality [\[eq:GNresultinfty_simplifiedtorus\]](#eq:GNresultinfty_simplifiedtorus){reference-type="eqref" reference="eq:GNresultinfty_simplifiedtorus"} between $\| \phi^{K/2} \nabla^K \widetilde S \|_{L^2}$ and $\| \widetilde S \|_{L^\infty}$ yields $\| \phi^{1/2} \nabla \widetilde S \|_{L^\infty}\leq \delta_0^{\frac{9}{10}}$. Moreover, for $R \geq R_0$ $$\begin{aligned}
\phi^{1/2} | \nabla (\widehat X \overline S) | \lesssim\left( \frac{R}{R_0} \right)^{1-\eta} \left( |\nabla \widehat X |\, \overline S + \widehat X \,| \nabla \overline S |\right).\end{aligned}$$ Now, note that $|\nabla (\widehat X (|y| e^{-s}))| = e^{-s} |\widehat X' (|y| e^{-s})|$. Since $\widehat X'$ is supported on $[1/2, 1]$, we see that $e^{s} \approx |y|$ on the support, and we obtain that $|\nabla \widehat X| \lesssim\frac{1}{R}$. Therefore, using [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"} we get that for $R\geq R_0$: $$\begin{aligned}
\phi^{1/2} | \nabla (\widehat X \overline S) | \lesssim\frac{R}{R_0} \cdot \frac{1}{R^r} \leq \frac{1}{R_0} \approx \delta_0^{\frac{1}{r-1}} \leq \delta_0.\end{aligned}$$ Combining this with $\| \phi^{1/2} \nabla \widetilde S \|_{L^\infty}\leq \delta_0^{\frac{9}{10}}$, we obtain $\| \phi^{1/2} \nabla S \|_{L^\infty}\leq \delta_0^{\frac{9}{10}}$. Analogously, it can be deduced that $\| \phi^{1/2} \nabla U \|_{L^\infty (R\geq R_0)} \leq \delta_0^{9/10}$. Thus, we obtain $$\begin{aligned}
&\left| e^{(r-1)(s-\overline{s})} U(y_0 e^{(s-\overline{s})}, s) \nabla S (y_0 e^{(s-\overline{s})}, s) \right| \lesssim\eta^{-1} \left( \frac{ y_0 e^{(s-\overline{s})} }{R_0} \right)^{-1+\eta} e^{(r-1) (s-\overline{s})} \delta_0^{19/10} \\
&\quad\lesssim\eta^{-1} \delta_0^{19/10} \left\langle \frac{y_0}{R_0} \right\rangle^{-1+\eta} e^{(s-\overline{s})(\eta+r-2)}\lesssim\delta_0^{9/5} \left\langle \frac{y_0}{R_0} \right\rangle^{1-r}e^{-(s-\overline{s})/10}.\end{aligned}$$ Similarly $$\left| \alpha \omega_{y_0} \ensuremath{\mathrm{div\,}}(U) \right| \lesssim\delta_0^{9/5} \left\langle \frac{y_0}{R_0} \right\rangle ^{1-r}e^{-(s-\overline{s})/10}.$$ Thus, we get $$|\partial_s \omega |\lesssim\delta_0^{9/5} \left\langle \frac{y_0}{R_0} \right\rangle^{1-r}e^{-(s-\overline{s})/10}.$$ Integrating we obtain $$\label{eq:cadiz}
\left| \omega_{y_0}(s) - \omega_{y_0}(\overline{s}) \right| \lesssim\delta_0^{9/5} \left\langle \frac{y_0}{R_0} \right\rangle^{1-r},$$ uniformly in $s$. Note also that $\delta_0\leq \omega_{y_0}(\overline{s}) \leq C \delta_0$ if $y_0 = R_0$, using that $2 \delta_0 \leq \overline S \leq C\delta_0$. Thus, we get $\omega_{y_0}(s) \approx \delta_0$ if $y_0=R_0$. That is, $$\label{eq:almeria}
S \left( y_0 e^{(s-\overline{s})} , s \right) \approx \delta_0 e^{-(r-1)(s-\overline{s})}.$$ Undoing the change of variables, we note $\frac{y}{y_0} = e^{s-\overline{s}}$ and therefore $$S \left( y, s \right) \approx \delta_0 \left( \frac{y}{y_0} \right)^{-(r-1)}.$$ Since $y_0 = R_0$ this directly shows the lower bound and the upper bound. For $s=s_0$, note that for $y\geq R_0$, from the definition of the initial data [\[initialdatacondition2\]](#initialdatacondition2){reference-type="eqref" reference="initialdatacondition2"}, we have $$\frac{\delta_1}{2} \left\langle \frac{y_0}{R_0} \right\rangle ^{1-r}\leq S(y_0,s_0) \leq C \delta_0.$$ Thus, we get $$\begin{aligned}
e^{(1-r)(s-s_0)}\delta_1 \left\langle\frac{y_0}{R_0} \right\rangle ^{1-r}\lesssim S(y,s)=e^{(1-r)(s-s_0)}\omega_{y_0}(s)\lesssim \delta_0e^{(1-r)(s-s_0)}.\end{aligned}$$ Then the upper bound directly follows. From $e^{(1-r)(s-s_0)}=\left(\frac{y}{y_0}\right)^{1-r}$, we get the lower bound. ◻
**Lemma 26**. *We have that $$\label{eq:voltaire1}
|\nabla \widetilde S | + | \nabla \widetilde U | \lesssim\delta_0 \left\langle \frac{R}{R_0} \right\rangle^{-r}$$ and $$\label{eq:voltaire2}
|\nabla S | + | \nabla U | \lesssim\left\langle \frac{R}{R_0} \right\rangle^{-r}$$*
*Proof.* For $R\leq R_0$, [\[eq:voltaire1\]](#eq:voltaire1){reference-type="eqref" reference="eq:voltaire1"} is satisfied by the $L^{\infty}$ assumption [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"}, the $H^{4}(|y|\geq C_0)$ assumption [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"} and Lemma [Lemma 30](#linfitnityinside){reference-type="ref" reference="linfitnityinside"} for the bound inside $B(0,C_0)$. Combining these with the decay of the profile (equation [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}), we get [\[eq:voltaire2\]](#eq:voltaire2){reference-type="eqref" reference="eq:voltaire2"} given that $U =\widehat{X} \overline U + \widetilde U$ and $S =\widehat{X} \overline S + \widetilde S$.
For $R\geq R_0$, without loss of generality we will do the proof for $S$ and $\widetilde{S}$. The proof for $U$ and $\widetilde{U}$ will be analogous.
Using again the energy estimate assumption [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we get $$\label{eq:diderot}
\int \left( | \nabla^K U |^2 + | \nabla^K S |^2 \right) \phi^K \lesssim_K E.$$ Now, we interpolate between $| U |, | S | \lesssim \delta_0$ (from the decay of the profile [\[Rochoice estimate\]](#Rochoice estimate){reference-type="eqref" reference="Rochoice estimate"} and the assumption of the perturbation [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"} ) when $R\geq R_0$ and [\[eq:diderot\]](#eq:diderot){reference-type="eqref" reference="eq:diderot"}. Concretely, using equation [\[eq:GNresultinfty_simplified\]](#eq:GNresultinfty_simplified){reference-type="eqref" reference="eq:GNresultinfty_simplified"} for $m = K$, $j \in \{ 1, 2 \}$, $q = 2$, we obtain $$| \nabla^j U | + | \nabla^j S | \lesssim_K \delta_0^{1 - \frac{j}{K-3/2}} E^{\frac{j}{K-3/2}} \phi^{- \frac{Kj}{2(K-3/2)} } + \delta_0 \langle x \rangle^{-j}.$$ Given that $j \in \{ 1, 2 \}$ and $\delta_0$ depends on $K, E$, we obtain that $$\label{eq:rough_decay2}
| \nabla^2 U | + | \nabla^2 S | \lesssim\delta_0^{9/10} \phi^{-1}, \quad \mbox{ and } \quad
| \nabla U | + | \nabla S | \lesssim\delta_0^{9/10} \phi^{-1/2}.$$
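Let us record the bookkeeping behind the amplitude $\delta_0^{9/10}$ (a rough sketch; the weight comparison $\phi^{-\frac{Kj}{2(K-3/2)}} \lesssim \phi^{-j/2}$ uses that $\phi \gtrsim 1$): for $j \in \{1,2\}$ and $K$ large, $$\delta_0^{1-\frac{j}{K-3/2}} E^{\frac{j}{K-3/2}} = \delta_0^{\frac{9}{10}} \cdot \delta_0^{\frac{1}{10}-\frac{2j}{K-3/2}} \left( \delta_0 E \right)^{\frac{j}{K-3/2}} \leq \delta_0^{\frac{9}{10}},$$ since $\frac{2j}{K-3/2} \leq \frac{1}{10}$ once $K$ is large enough and $\delta_0 E \leq 1$ by the choice $\delta_0 \ll \frac{1}{E}$ in [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"}.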
We look at the equation for $\nabla S$. Taking a derivative in [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"}, we have that $$\partial_s \partial_i S = -r \partial_i S - y \cdot \nabla \partial_i S - \partial_i (U \cdot \nabla S ) - \alpha \partial_i ( S \ensuremath{\mathrm{div\,}}(U)).$$ For any $|y| \geq R_0$, we consider $y = y_0 e^{(s-\overline{s})}$ where either $\overline{s} = s_0$ or $|y_0| = R_0$. We define $\omega_{y_0}(s) = e^{r(s-\overline{s})} \partial_i S( y_0 e^{s-\overline{s}}, s )$, and we obtain the equation $$\partial_s \omega_{y_0} = -e^{r(s-\overline{s})} \left( \partial_i U \cdot \nabla S + U \cdot \nabla \partial_i S + \alpha \partial_i S \ensuremath{\mathrm{div\,}}(U) + \alpha S \partial_i \ensuremath{\mathrm{div\,}}(U) \right),$$ where everything is evaluated at $(y_0 e^{s-\overline{s}}, s)$. Using the inequalities from [\[eq:rough_decay2\]](#eq:rough_decay2){reference-type="eqref" reference="eq:rough_decay2"}, we get $$| \partial_s \omega_{y_0} | \lesssim\delta_0^{9/5} \phi \left( y_0 e^{s-\overline{s}} \right)^{-1} e^{r(s-\overline{s})} \lesssim\delta_0^{9/5}e^{(r-2+2\eta)(s-\overline{s})} \left\langle\frac{y_0}{R_0} \right\rangle^{-2+2\eta}.$$ Thus $$\left|\partial_{i}S (y_0 e^{s-\overline{s}},s) - e^{-r(s-\overline{s})}\partial_{i}S (y_0,\overline{s}) \right|=e^{-r(s-\overline{s})}\left| \omega_{y_0}(s) - \omega_{y_0}(\overline{s}) \right| \lesssim\delta_0^{9/5}e^{(-2+2\eta)(s-\overline{s})} \left\langle \frac{y_0}{R_0} \right\rangle^{-r}.$$ Therefore, from $|\partial_{i}S(y_0,\overline{s})|\lesssim \delta_0$ when $|y_0|=R_0$ and the decay for the initial value $|\partial_{i}S(y_0,s_0)|\lesssim \delta_0 \left\langle\frac{|y_0|}{R_0} \right\rangle^{-r}$ (from the choice of $R_0$ [\[choiceR0\]](#choiceR0){reference-type="eqref" reference="choiceR0"} and the initial value [\[initialdatacondition\]](#initialdatacondition){reference-type="eqref" reference="initialdatacondition"}), we have when $R\geq R_0$, $$|\nabla S|\lesssim \left\langle\frac{R}{R_0} \right\rangle^{-r}\delta_0.$$ Then we get [\[eq:voltaire2\]](#eq:voltaire2){reference-type="eqref" reference="eq:voltaire2"}.
For [\[eq:voltaire1\]](#eq:voltaire1){reference-type="eqref" reference="eq:voltaire1"}, we additionally only need to show the decay of the profile $|\nabla(\widehat{X}\overline{S})|\lesssim \left\langle\frac{R}{R_0} \right\rangle^{-r}.$ Since $\nabla(\widehat{X}\overline{S})=\nabla(\overline{S})\widehat{X}+\nabla(\widehat{X})\overline{S}$, the decay of the first term directly follows from the decay of $\nabla\overline{S}$. For the second term, we can combine the estimates $|\nabla \widehat{X}|\lesssim e^{-s}$ and $|y|\thicksim e^{s}$ on $\ensuremath{\mathrm{supp\,}}\nabla\widehat{X}$ to get the decay. ◻
**Lemma 27**. *For every $1 \leq j \leq K-2$, $\overline{\varepsilon}>0$, we have that $$\label{eq:brazil3}
\frac{| \nabla^j S | \phi^{j/2} }{S} + \frac{| \nabla^j U | \phi^{j/2}}{S} \lesssim_{\delta_0, \overline{\varepsilon}} \left\langle R \right\rangle^{-\frac{-r(j-1) + (1-\eta) \frac{5j}{2} + K\eta - 5/2 }{K-5/2} + \overline{\varepsilon}}\lesssim_{\delta_0,\overline{\varepsilon}} \left\langle R \right\rangle^{\overline{\varepsilon}}.$$ and $$\label{eq:brazil2}
| \nabla^j U | + | \nabla^j S | \lesssim_{\delta_0,\overline{\varepsilon}} \langle R \rangle^{-j(1-\eta) - (r-1)+\overline{\varepsilon}}.$$ For $j=K-1,$ we have $$\label{eq:brazilk-1}
\left\| \langle R \rangle^{K(1-\eta)\frac{K-2}{K-1}-\overline{\varepsilon}}|\nabla^{K-1}S| \right\|_{L^{2+\frac{2}{K-2}}}+ \left\|\langle R \rangle^{K(1-\eta)\frac{K-2}{K-1}-\overline{\varepsilon}}|\nabla^{K-1}U| \right\|_{L^{2+\frac{2}{K-2}}}\lesssim_{\delta_0,\overline{\varepsilon}} 1.$$*
*Proof.* In order to show [\[eq:brazil3\]](#eq:brazil3){reference-type="eqref" reference="eq:brazil3"}, we do interpolation between $\| \nabla S \cdot \langle R\rangle ^{r} \|_{L^\infty}$ and $\| \phi^{K/2} \nabla^K S \|_{L^2}$. We consider Lemma [Lemma 40](#lemma:GN_general){reference-type="ref" reference="lemma:GN_general"} from the Appendix with $m=K-1$, $i = j-1$, $p=\infty$, $q = 2$, $\overline{r} = \infty$, so that $\theta = \frac{j-1}{K-5/2}$ (which is between $(j-1)/(K-1)$ and $1$ as Lemma [Lemma 41](#lemma:GN_generaltorus){reference-type="ref" reference="lemma:GN_generaltorus"} requires, since $j \leq K-2$). From [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"}, using Lemma [Lemma 26](#lemma:Sprimebounds){reference-type="ref" reference="lemma:Sprimebounds"} to bound $\| \nabla S \langle R \rangle^{r} \|_{L^\infty}$, we obtain $$| \nabla^j U | \lesssim_{\delta_0} \phi^{-\frac{K\theta}{2}} \langle R \rangle^{-r(1-\theta ) +\overline{\varepsilon} }+\langle R \rangle^{-j+1-r}
\lesssim_{\delta_0} \phi^{-\frac{K\theta}{2}} \langle R \rangle^{-r(1-\theta ) +\overline{\varepsilon} }.$$ The last inequality follows from $1\ll K\eta.$ Thus, we have that $$\begin{aligned}
\label{decayestimatederi}
| \nabla^j U | &\lesssim_{\delta_0} \phi^{-j/2} \langle R \rangle^{-(r-1)+\overline{\varepsilon}}
\phi^{-\frac{K\theta}{2} + j/2} \cdot \langle R \rangle^{-r(1-\theta)+r-1}\\
&
=
\phi^{-j/2} \langle R \rangle^{-(r-1)+\overline{\varepsilon}}
\phi^{\frac{-K(j-1) + j(K-5/2)}{2(K-5/2)}} \cdot \langle R \rangle^{r\frac{j-1}{K-5/2}-1}\\ \nonumber
&= \phi^{-j/2} \langle R \rangle^{-(r-1)+\overline{\varepsilon}}
\phi^{\frac{K - \frac52 j}{2(K-5/2)}} \cdot \langle R \rangle^{\frac{rj-r - K + 5/2}{K-5/2}} \\ \nonumber
&\lesssim
\phi^{-j/2} \langle R \rangle^{-(r-1)+\overline{\varepsilon}}
\cdot \langle R \rangle^{\frac{rj-r - K + 5/2 + (1-\eta) (K - \frac52 j )}{K-5/2}}.\end{aligned}$$ This shows the first bound from [\[eq:brazil3\]](#eq:brazil3){reference-type="eqref" reference="eq:brazil3"} corresponding to $U$ and the one for $S$ is analogous. For the second bound in [\[eq:brazil3\]](#eq:brazil3){reference-type="eqref" reference="eq:brazil3"}, and [\[eq:brazil2\]](#eq:brazil2){reference-type="eqref" reference="eq:brazil2"}, we can use [\[decayestimatederi\]](#decayestimatederi){reference-type="eqref" reference="decayestimatederi"} and $$rj-r - K + 5/2 + (1-\eta) (K - \frac52 j )= (j-1)(r-\frac{5}{2})+\eta(j\frac{5}{2}-K)\leq 0.$$ For [\[eq:brazilk-1\]](#eq:brazilk-1){reference-type="eqref" reference="eq:brazilk-1"}, we use the interpolation Lemma [\[eq:GNresulttorus\]](#eq:GNresulttorus){reference-type="ref" reference="eq:GNresulttorus"} between $\| \nabla S \cdot \langle R\rangle ^{r} \|_{L^\infty}$ and $\| \phi^{K/2} \nabla^K S \|_{L^2}$ again. By taking $m=K-1$, $i=K-2$, $p=\infty$, $q=2$, $\theta=\frac{K-2}{K-1}$, we have $$\begin{aligned}
\left\| \langle R\rangle^{K(1-\eta)\frac{K-2}{K-1}+r\frac{1}{K-1}-\overline{\varepsilon}}\nabla^{K-1}S \right\|_{L^{2+\frac{2}{K-2}}}\lesssim_{\delta_0,\overline{\varepsilon}}1.\end{aligned}$$ Here [\[eq:GN_extracondtorus\]](#eq:GN_extracondtorus){reference-type="eqref" reference="eq:GN_extracondtorus"} is satisfied because $1\ll K\eta$. Thus [\[eq:brazilk-1\]](#eq:brazilk-1){reference-type="eqref" reference="eq:brazilk-1"} follows from $r>1$. ◻
**Lemma 28**. *We have $$\chi_2 \mathcal E_{u}=0, \quad \chi_2 \mathcal E_{s}=0,$$ $$\|\mathcal E_{u}\|_{L^{\infty}}, \|\mathcal E_{u}\|_{\dot{H}^4(|y|\geq C_0)}\leq \delta_1 e^{-(s-s_0)},$$ and $$\|\mathcal E_{s}\|_{L^{\infty}}, \|\mathcal E_{s}\|_{\dot{H}^4(|y|\geq C_0)}\leq \delta_1 e^{-(s-s_0)}.$$*
*Proof.* From the decay of the profile [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}, we have $$|\partial^{j}\overline{U}|, |\partial^{j}\overline{S}|\lesssim |y|^{1-r-j}.$$ Then using the estimates for $\widehat{X}$ [\[estimatehatX\]](#estimatehatX){reference-type="eqref" reference="estimatehatX"}, [\[estimatehatX2\]](#estimatehatX2){reference-type="eqref" reference="estimatehatX2"}, we have $$\|(\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{U}\|_{L^{\infty}}\lesssim \||\widehat{X}^2-\widehat{X}||y|^{1-2r}\|_{L^{\infty}}\lesssim e^{(1-2r)s}.$$ We also have $$\|\widehat{X}\overline{U}\cdot\nabla (\widehat{X})\overline{U}\|_{L^{\infty}}\lesssim \||\widehat{X}||\nabla (\widehat{X})||y|^{2-2r}\|_{L^{\infty}}\lesssim e^{(1-2r)s}.$$ Other terms in $\mathcal E_{u}$ and $\mathcal E_{s}$ (cf. [\[Eudefinition\]](#Eudefinition){reference-type="eqref" reference="Eudefinition"}) can be controlled in a similar way and we have $$\|\mathcal E_{u}\|_{L^{\infty}}+\|\mathcal E_{s}\|_{L^{\infty}}\lesssim e^{(1-2r)s}\leq \delta_1 e^{-(s-s_0)},$$ where we used $r>1$ and $s_0\gg 1$.
By taking $4$ derivatives, we have $$|\partial^4((\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{U})|\lesssim
\sum_{i+j=4}|\partial^{i} (\widehat{X}^2-\widehat{X})|\,|\partial^{j}(\overline{U}\cdot\nabla\overline{U})|\lesssim \sum_{i+j=4}e^{-is}|y|^{1-2r-j}.$$ Then $$\begin{aligned}
\|\partial^4((\widehat{X}^2-\widehat{X})\overline{U}\cdot\nabla\overline{U})\|_{L^{2}(\mathbb{R}^3)}^2&\lesssim \sum_{i+j=4}e^{-2is}\int_{\frac{1}{2}e^s}^{e^s}|y|^{2(1-2r-j)}|y|^2dy\\
\qquad&\lesssim \sum_{i+j=4}e^{(-2i+5-4r-2j)s}\\
\qquad&\lesssim e^{(-4r-3)s}\lesssim e^{-2s}.\end{aligned}$$ Similarly, we have $$\begin{aligned}
&\|\partial^4(\widehat{X}\overline{U}\cdot\nabla (\widehat{X})\overline{U})\|_{L^{2}(\mathbb{R}^3)}^2\lesssim e^{-2s}.\end{aligned}$$ The rest of the terms can be bounded in the same way and we have $$\|\mathcal E_{u}\|_{\dot{H}^4(|y|\geq C_0)}+\|\mathcal E_{s}\|_{\dot{H}^4(|y|\geq C_0)}\lesssim e^{-s}\leq \delta_1 e^{-(s-s_0)}.$$ ◻
**Lemma 29**. *We have $$\|N_u\|_{L^{\infty}},\|\chi_2N_u\|_{X},\|N_s\|_{L^{\infty}},\|\chi_2N_s\|_{X}\leq \delta_1^{ \frac{6}{5}} e^{-\frac{3}{2}\varepsilon(s-s_0)},$$ and $$\|\ensuremath{\mathcal}F_{dis}\|_{L^{\infty}},\|\chi_2 \ensuremath{\mathcal}F_{dis}\|_{X} \leq \delta_1^{ \frac{6}{5}} e^{-\delta_{dis}\frac{s}{2}}.$$*
*Proof.* The proof is similar to [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma 8.11]. From the interpolation inequality [\[eq:GNresultinfty_simplifiedtorus\]](#eq:GNresultinfty_simplifiedtorus){reference-type="eqref" reference="eq:GNresultinfty_simplifiedtorus"} in the Appendix, we have for all $0\leq j\leq m$: $$\|\nabla^{j}U\|_{L^{\infty}}, \|\nabla^{j}S\|_{L^{\infty}}\lesssim_{\delta_1,E} 1.$$ Moreover, when $|y|\leq 3C_0$, $S\geq \overline{S}-\delta_0 \gtrsim 1$.
Thus we have $$e^{-\delta_{dis}s}\left|\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right|\lesssim_{\delta_1, E} e^{-\delta_{dis}s}\leq (\delta_1)^{\frac{6}{5}} e^{-\delta_{dis}\frac{s}{2}}.$$ and $$\begin{aligned}
e^{-\delta_{dis}s}\left\|\chi_2\left(\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right)\right\|_{H^{m}}&\lesssim e^{-\delta_{dis}s}\left(\left\|\nabla^{m}\left(\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right)\right\|_{L^{\infty}(B(0,3C_0))}+\left\|\left(\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right)\right\|_{L^{\infty}(B(0,3C_0))}\right)\lesssim_{\delta_1, E} e^{-\delta_{dis}s} \nonumber \\
&\leq (\delta_1)^{\frac{6}{5}} e^{-\delta_{dis}\frac{s}{2}} .\end{aligned}$$ For $N_u$ and $N_s$, from inequality [\[eq:GNresultinfty_simplifiedtorus\]](#eq:GNresultinfty_simplifiedtorus){reference-type="eqref" reference="eq:GNresultinfty_simplifiedtorus"}, we have $$\begin{aligned}
\|\nabla^{j}(\widetilde{U}\cdot(\nabla \widetilde{U}))\|_{L^{\infty}}&\lesssim_{m} \sum_{l\leq j\leq m}\|\nabla^{l}\widetilde{U}\|_{L^{\infty}}\|\nabla^{j-l+1}\widetilde{U}\|_{L^{\infty}}\\
\qquad&\lesssim_{m} E^{\frac{1}{20}}\delta_0^{\frac{39}{20}}e^{-\frac{39}{20}\varepsilon(s-s_0)}\\
\qquad&\lesssim\delta_0^{\frac{19}{10}}e^{-\frac{39}{20}\varepsilon(s-s_0)}.\end{aligned}$$ Here we also use the bootstrap estimates [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"}, [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}. A similar estimate holds for $\widetilde{U}\cdot (\nabla\widetilde{S}),$ $\widetilde{S} (\nabla\widetilde{S})$ and $\widetilde{S}\, \ensuremath{\mathrm{div\,}}\widetilde{U}$: $$\begin{aligned}
&\quad\|\nabla^{j}(\widetilde{U}\cdot(\nabla \widetilde{S}))\|_{L^{\infty}}+\|\nabla^{j}(\widetilde{S}(\nabla \widetilde{S}))\|_{L^{\infty}}+\|\nabla^{j}(\widetilde{S}\,\ensuremath{\mathrm{div\,}}\widetilde{U})\|_{L^{\infty}}\\\nonumber
&\lesssim_{m} \sum_{l\leq j\leq m}\left(\|\nabla^{l}\widetilde{S}\|_{L^{\infty}}\|\nabla^{j-l+1}\widetilde{U}\|_{L^{\infty}}+\|\nabla^{l}\widetilde{S}\|_{L^{\infty}}\|\nabla^{j-l+1}\widetilde{S}\|_{L^{\infty}}+\|\nabla^{l}\widetilde{U}\|_{L^{\infty}}\|\nabla^{j-l+1}\widetilde{S}\|_{L^{\infty}}\right)\\\nonumber
\qquad&\lesssim_{m} E^{\frac{1}{20}}\delta_0^{\frac{39}{20}}e^{-\frac{39}{20}\varepsilon(s-s_0)}\\\nonumber
\qquad&\lesssim\delta_0^{\frac{19}{10}}e^{-\frac{39}{20}\varepsilon(s-s_0)}.\end{aligned}$$
Then $$\|N_u\|_{L^{\infty}}, \|N_s\|_{L^{\infty}}\lesssim \delta_0^{\frac{19}{10}}e^{-\frac{39}{20}\varepsilon(s-s_0)}\leq \delta_1^{\frac{6}{5}} e^{-\frac{3}{2}\varepsilon(s-s_0)}.$$ Moreover, since $\ensuremath{\mathrm{supp\,}}\chi_2\subset B(0,3C_0),$ we have $$\|\chi_2N_u\|_{H^{m}}\lesssim \sum_{j\leq m}\|\nabla^{j}(N_u)\|_{L^{\infty}}\lesssim\delta_0^{\frac{19}{10}}e^{-\frac{39}{20}\varepsilon(s-s_0)}\leq \delta_1^{\frac{6}{5}} e^{-\frac{3}{2}\varepsilon(s-s_0)}.$$ $\|\chi_2N_s\|_{H^{m}}$ can be bounded in a similar way. ◻
**Lemma 30**. *For $j\leq 4$, we have $$\|\nabla^{j}\widetilde{U}\|_{L^{\infty}(B(0,C_0))}+\|\nabla^{j}\widetilde{S}\|_{L^{\infty}(B(0,C_0))}\lesssim \frac{\delta_1}{\delta_g}e^{-\frac{\delta_g}{2}(s-s_0)}.$$*
*Proof.* The proof is similar to [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Lemma 8.12]. Since the truncated solution and the extended solution agree in $B(0,C_0)$, we have $$\|\nabla^{j}\widetilde{U}\|_{L^{\infty}(B(0,C_0))}=\|\nabla^{j}\widetilde{U}_t\|_{L^{\infty}(B(0,C_0))}\lesssim \|\widetilde{U}_t\|_{X}.$$ Recall from [\[navierstokesperturb1trun\]](#navierstokesperturb1trun){reference-type="eqref" reference="navierstokesperturb1trun"} that $$\partial_{s}(\widetilde{U}_t,\widetilde{S}_t)=\mathcal{L}(\widetilde{U}_t,\widetilde{S}_t)+\chi_2F(\widetilde{U},\widetilde{S}),$$ with $\chi_2F_{u}=\chi_2\ensuremath{\mathcal}F_{dis}+\chi_2N_{u}$, $\chi_2F_{s}=\chi_2N_s.$ Using Lemma [Lemma 17](#lemma:abstract_result){reference-type="ref" reference="lemma:abstract_result"}, we can project the equation onto the stable modes and obtain $$\begin{aligned}
\partial_{s}P_{\rm{sta}}(\widetilde{U}_t,\widetilde{S}_t)=\mathcal{L}P_{\rm{sta}}(\widetilde{U}_t,\widetilde{S}_t)+P_{\rm{sta}}(\chi_2F(\widetilde{U},\widetilde{S})).\end{aligned}$$ Then from Duhamel's formula, we have $$\begin{aligned}
P_{\rm{sta}}(\widetilde{U}_t,\widetilde{S}_t)(s)=T(s-s_0)P_{\rm{sta}}(\widetilde{U}_t,\widetilde{S}_t)(s_0)+\int_{s_0}^{s}T(s-\overline{s})(P_{\rm{sta}}(\chi_2F(\widetilde{U},\widetilde{S})))(\overline{s})d\overline{s},\end{aligned}$$ with $T$ the semigroup generated by $\mathcal{L}.$
From the fact that $\mathcal{L}$ generates a contraction semigroup in $X$, we have $$\begin{aligned}
&\quad\|P_{\rm{sta}}(\widetilde{U}_t,\widetilde{S}_t)\|_{X}\leq \delta_1 e^{-(s-s_0)\frac{\delta_g}{2}}+\int_{s_0}^{s}2\delta_1 e^{-(s-\overline{s})\frac{\delta_g}{2}}e^{-\frac{3}{2}\varepsilon(\overline{s}-s_0)}d\overline{s}\lesssim \frac{\delta_1}{\delta_g}e^{-\frac{\delta_g}{2}(s-s_0)}.\end{aligned}$$ The unstable part of the solution is controlled directly from the assumption in Proposition [Proposition 23](#prop:bootstrap){reference-type="ref" reference="prop:bootstrap"}. ◻
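For the reader's convenience, we record the elementary computation behind the last display. Assuming that $\frac{3}{2}\varepsilon-\frac{\delta_g}{2}\gtrsim \delta_g$, which appears to be implicit in the choice of parameters [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"} and in the stated bound, we have $$\int_{s_0}^{s}e^{-\frac{\delta_g}{2}(s-\overline{s})}e^{-\frac{3}{2}\varepsilon(\overline{s}-s_0)}d\overline{s}=\frac{e^{-\frac{\delta_g}{2}(s-s_0)}-e^{-\frac{3}{2}\varepsilon(s-s_0)}}{\frac{3}{2}\varepsilon-\frac{\delta_g}{2}}\lesssim \frac{1}{\delta_g}e^{-\frac{\delta_g}{2}(s-s_0)},$$ which, combined with the prefactor $2\delta_1$, gives the stated decay.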
Now we show the bootstrap when $|y|>C_0$.
**Lemma 31**. *Under the bootstrap assumptions [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"}, [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"}, [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"} we have, for $s\geq s_0$, $|y|\geq C_0$: $$|\widetilde{U}|,|\widetilde{S}| \leq {\delta_0}\frac{1}{2}\frac{1}{C_2}e^{-\varepsilon(s-s_0)}.$$*
*Proof.* From Gagliardo-Nirenberg's interpolation inequality [\[eq:GNresultnoweighttorus\]](#eq:GNresultnoweighttorus){reference-type="eqref" reference="eq:GNresultnoweighttorus"} in the Appendix, we have $$|\nabla\widetilde{U}|, |\nabla \widetilde{S}|\leq {\delta_0}e^{-\varepsilon(s-s_0)}\left(\frac{1}{C_2}\right)^{\frac{1}{20}}.$$
Then for $|y|\geq C_0$, from [\[profilecondition1\]](#profilecondition1){reference-type="eqref" reference="profilecondition1"}, since $|\widehat{X}|\leq 1$, $|\nabla\widehat{X}|\leq 10$, we have $$|\widehat{X}\overline{U}\nabla\widetilde{U}|+|\alpha\widehat{X}\overline{S}\nabla\widetilde{S}|+|\widetilde{U}\cdot\nabla(\widehat{X}\overline{U})|+|\alpha\widetilde{S}\nabla(\widehat{X}\overline{S})|\leq {\delta_0}e^{-\varepsilon(s-s_0)}\frac{1}{C_1}\left(\frac{1}{C_2}\right)^{\frac{1}{20}},$$ $$|\widehat{X}\overline{U}\nabla\widetilde{S}|+|\alpha\widehat{X}\overline{S}\ensuremath{\mathrm{div\,}}\widetilde{U}|+|\ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U})\widetilde{S}|+|\alpha\widetilde{U}\nabla(\widehat{X}\overline{S})|\leq {\delta_0}e^{-\varepsilon(s-s_0)}\frac{1}{C_1}\left(\frac{1}{C_2}\right)^{\frac{1}{20}}.$$ Moreover, by [\[nsequation1\]](#nsequation1){reference-type="eqref" reference="nsequation1"}, we have
$$\begin{aligned}
&\partial_{s}\widetilde{U}=-(r-1)\widetilde{U}-y\cdot{\nabla\widetilde{U}}-((\widehat{X}\overline{U})\cdot{\nabla\widetilde{U}}+\alpha(\widehat{X}\overline{S})\cdot{\nabla\widetilde{S}}+\widetilde{U}\cdot\nabla(\widehat{X}\overline{U}) +\alpha \widetilde{S} \nabla (\widehat{X}\overline{S}))+\mathcal N_{u} +\mathcal E_u
+ \ensuremath{\mathcal}F_{dis}.\end{aligned}$$ $$\begin{aligned}
&\partial_{s}\widetilde{S}=-(r-1)\widetilde{S}-y\cdot{\nabla\widetilde{S}}-((\widehat{X}\overline{U})\cdot{\nabla\widetilde{S}}+\alpha(\widehat{X}\overline{S})\ensuremath{\mathrm{div\,}}(\widetilde{U})+\widetilde{U}\cdot\nabla(\widehat{X}\overline{S})+\alpha \widetilde{S} \ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U}))+\mathcal N_{s}+\mathcal E_s.\end{aligned}$$ Then for $1\leq j\leq 3$, $s_0\leq \overline{s} \leq s$, using Lemmas [Lemma 28](#forcingestimate1){reference-type="ref" reference="forcingestimate1"} and [Lemma 29](#forcingestimate2){reference-type="ref" reference="forcingestimate2"}, we have $$|\partial_{s}\widetilde{U}_j(e^{s-\overline{s}}y_0)+(r-1)\widetilde{U}_j(e^{s-\overline{s}}y_0)|\leq{\delta_0}e^{-\varepsilon(s-s_0)}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}+\delta_1^{\frac{6}{5}}e^{-\frac{3}{2}\varepsilon(s-s_0)}+\delta_1e^{-(s-s_0)}+ \delta_1^{ \frac{6}{5}} e^{-\delta_{dis}\frac{s}{2}},$$
and$$|\partial_{s}\widetilde{S}(e^{s-\overline{s}}y_0)+(r-1)\widetilde{S}(e^{s-\overline{s}}y_0)|\leq{\delta_0}e^{-\varepsilon(s-s_0)}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}+\delta_1^{\frac{6}{5}}e^{-\frac{3}{2}\varepsilon(s-s_0)}+\delta_1e^{-(s-s_0)}.$$ From the choice of parameters [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"}, we have $$\begin{aligned}
&\quad|\partial_{s}(\widetilde{U}_j(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)})+(r-1-\varepsilon)\widetilde{U}_j(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)}|\\
&=e^{\varepsilon(s-s_0)}|\partial_s\widetilde{U}_j(e^{s-\overline{s}}y_0)+(r-1)\widetilde{U}_j(e^{s-\overline{s}}y_0)|\\
&\leq {\delta_0}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}+\delta_1^{\frac{6}{5}}e^{-\frac{1}{2}\varepsilon(s-s_0)}+\delta_1e^{-(1-\varepsilon)(s-s_0)}\\
&\leq 2{\delta_0}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}. \end{aligned}$$ Similarly, we have $$|\partial_{s}(\widetilde{S}(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)})+(r-1-\varepsilon)(\widetilde{S}(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)})|\leq 2{\delta_0}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}.$$ Then we claim that $(\widetilde{U}(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)}, \widetilde{S}(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)})$ never crosses the boundary of the domain $$\left\{(x_1,x_2)\,\middle|\,|x_1|, |x_2|\leq \frac{8}{r-1}\delta_0\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}\right\}$$ from inside.
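For the reader's convenience, we sketch why the claim holds; here we assume that $\varepsilon\leq \frac{r-1}{2}$, which is consistent with the choice of parameters [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"}. Whenever $|\widetilde{U}_j(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)}|$ reaches the value $\frac{8}{r-1}\delta_0\frac{1}{C_1}\left(\frac{1}{C_2}\right)^{\frac{1}{20}}$, the previous display together with $\frac{d}{ds}|g|\leq |g'+(r-1-\varepsilon)g|-(r-1-\varepsilon)|g|$ gives $$\frac{d}{ds}\big|\widetilde{U}_j(e^{s-\overline{s}}y_0)e^{\varepsilon(s-s_0)}\big|\leq 2{\delta_0}\frac{1}{C_1}\left( \frac{1}{C_2} \right)^{\frac{1}{20}}-(r-1-\varepsilon)\frac{8}{r-1}{\delta_0}\frac{1}{C_1}\left( \frac{1}{C_2} \right)^{\frac{1}{20}}\leq -2{\delta_0}\frac{1}{C_1}\left( \frac{1}{C_2} \right)^{\frac{1}{20}}<0,$$ since $(r-1-\varepsilon)\frac{8}{r-1}\geq 4$ in that case; the same computation applies to the $\widetilde{S}$ component, so the boundary of the domain above cannot be crossed from inside.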
From initial data conditions [\[initialdatacondition\]](#initialdatacondition){reference-type="eqref" reference="initialdatacondition"} and Lemma [Lemma 30](#linfitnityinside){reference-type="ref" reference="linfitnityinside"}, for $s=\overline{s}=s_0, |y|\geq C_0,$ $y\in e^{s_0}{\mathbb T}_{L}^3$ or $|y|=C_0$, $s=\overline{s}$, $y\in e^{\overline{s}}\mathbb{T}_{L}^3$, we have $$|\widetilde{U}_j(y_0)|e^{\varepsilon(\overline{s}-s_0)},|\widetilde{S}(y_0)|e^{\varepsilon(\overline{s}-s_0)}\leq \frac{8}{r-1}\delta_0\frac{1}{C_1}\left( \frac{1}{C_2} \right)^{\frac{1}{20}}.$$ Moreover, for any $|y|\geq C_0$, $s\geq s_0$, $y\in e^s\mathbb{T}_{L}^3$, there exists a trajectory $(y_0e^{s-\overline{s}},s)$ passing through $(y,s)$ that starts either from $s=\overline{s}=s_0, |y|\geq C_0,$ $y\in e^{s_0}\mathbb{T}_{L}^3$ or $|y|=C_0$, $s=\overline{s}$, $y\in e^{\overline{s}}\mathbb{T}_{L}^3$.
Then the result follows from the chosen constant parameters [\[parameterchosen\]](#parameterchosen){reference-type="eqref" reference="parameterchosen"}: $$\frac{32}{r-1}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}= \frac{1}{C_2}.$$ ◻
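For the reader's convenience, we spell out this last step: by [\[parameterchosen\]](#parameterchosen){reference-type="eqref" reference="parameterchosen"}, the barrier value equals $$\frac{8}{r-1}\delta_0\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}=\frac{\delta_0}{4}\cdot\frac{32}{r-1}\frac{1}{C_1} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}=\frac{\delta_0}{4C_2}\leq \frac{\delta_0}{2C_2},$$ which yields the claimed bound $|\widetilde{U}|,|\widetilde{S}|\leq \frac{\delta_0}{2C_2}e^{-\varepsilon(s-s_0)}$ for $|y|\geq C_0$ after multiplying by $e^{-\varepsilon(s-s_0)}$ (note $\sqrt{3}/4\leq 1/2$ for the vector norm of $\widetilde{U}$).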
Now we control $\|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0, y\in e^s\mathbb{T}_{L}^3)}$ and $\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0, y\in e^s\mathbb{T}_{L}^3)}$.
**Lemma 32**. *Under bootstrap assumptions [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"}, [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"}, [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we have $$\|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0, y\in e^s\mathbb{T}_L^3)}, \|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0, y\in e^s\mathbb{T}_L^3)} \leq \frac{\delta_0}{2C_2}e^{-\varepsilon(s-s_0)}.$$*
*Proof.* For any $4$-th order derivative $\partial_{\beta}=\partial_{\beta_1,\beta_2,\beta_3,\beta_4}$, we have $$\frac{d\partial_{\beta} \widetilde{U}}{ds}=B_{u,0}(\widetilde{U},\widetilde{S})+B_{u,1}(\widetilde{U},\widetilde{S})+B_{u,2}(\widetilde{U},\widetilde{S})+B_{u,3}(\widetilde{U},\widetilde{S})+\partial_{\beta}(N_u)+\partial_{\beta}(\ensuremath{\mathcal}E_u)+\partial_{\beta}\ensuremath{\mathcal}F_{dis},$$ $$\frac{d\partial_{\beta} \widetilde{S}}{ds}=B_{s,0}(\widetilde{U},\widetilde{S})+B_{s,1}(\widetilde{U},\widetilde{S})+B_{s,2}(\widetilde{U},\widetilde{S})+B_{s,3}(\widetilde{U},\widetilde{S})+\partial_{\beta}(N_s)+\partial_{\beta}(\ensuremath{\mathcal}E_s).$$ Here $B_{u,0}$, $B_{s,0}$ contain the linear terms in which $5$ derivatives hit $\widetilde{U}$, $\widetilde{S}$; $B_{u,1}$, $B_{s,1}$ those in which $4$ derivatives hit $\widetilde{U}$, $\widetilde{S}$; $B_{u,2}$, $B_{s,2}$ those with $3$ derivatives; and $B_{u,3}$, $B_{s,3}$ contain the linear terms with at most $2$ derivatives hitting $\widetilde{U}$, $\widetilde{S}$.
For the sake of simplicity, in the following estimate we use $\int_{|y|\geq C_0}$ to denote integration over $\{|y|\geq C_0\}\cap e^s\mathbb{T}_{L}^{3}$. Since $y\cdot \vec{n}=e^s L$ on $\partial( e^s \mathbb{T}_{L}^3)$, we have $$\begin{aligned}
\label{highenergyestimate01}
I&=\frac{d}{ds} \left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right)\\\nonumber
&=\frac{d}{ds}\sum_{\beta\in [3]^{4}}\int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2dy\\\nonumber
&=\sum_{\beta\in [3]^{4}}\int_{|y|\geq C_0}\frac{d}{ds}(|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2)dy+\sum_{\beta\in [3]^{4}}e^sL\int_{y\in \partial(e^s\mathbb{T}_L^3)}\sum_{j}|\partial_{\beta}\widetilde{U}_j|^2+(\partial_{\beta}\widetilde{S}
)^2d\sigma\\\nonumber
&=2\sum_{j=0}^{3} \left( \int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot B_{u,j}dy+\int_{|y|\geq C_0} \partial_{\beta}\widetilde{S}B_{s,j}dy \right)\\\nonumber
&\quad+2 \left( \int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot \partial_{\beta}N_udy+\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot \partial_{\beta}N_sdy \right) \\\nonumber
&\quad+2 \left( \int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot \partial_{\beta}\ensuremath{\mathcal}E_udy+\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot \partial_{\beta}\ensuremath{\mathcal}E_sdy \right)\\\nonumber
&\quad+2 \left( \int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot \partial_{\beta}\ensuremath{\mathcal}F_{dis} dy \right)\\\nonumber
&\quad+\text{boundary terms}.\end{aligned}$$ We now bound each term separately. Recall that $(\widetilde{U},\widetilde{S})$ satisfy [\[nsequation1\]](#nsequation1){reference-type="eqref" reference="nsequation1"}, [\[nsequation2\]](#nsequation2){reference-type="eqref" reference="nsequation2"}. For $B_{u,0}$ and $B_{s,0}$, we have $$\begin{aligned}
I_{0,\beta}&=\quad2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot B_{u,0}dy+2\int_{|y|\geq C_0} \partial_{\beta}\widetilde{S}B_{s,0}dy\\
&=-2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot (y+\widehat{X}\overline{U})\cdot \nabla \partial_{\beta}\widetilde{U}dy-2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot \nabla \partial_{\beta}\widetilde{S}\alpha\widehat{X}\overline{S}dy\\
&\quad-2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot (y+\widehat{X}\overline{U})\cdot \nabla \partial_{\beta}\widetilde{S}dy-2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\ensuremath{\mathrm{div\,}}(\partial_{\beta}\widetilde{U})\alpha\widehat{X}\overline{S}dy\\
&\leq \int_{|y|\geq C_0} \left( 3+|\widehat{X}\overline{U}'(R)|+\frac{2\widehat{X}|\overline{U}|}{R} \right)
|\partial_{\beta}\widetilde{U}|^2dy+\int_{|y|\geq C_0}\left( 3+|\widehat{X}\overline{U}'(R)|+\frac{2\widehat{X}|\overline{U}|}{R} \right) |\partial_{\beta}\widetilde{S}|^2dy\\
&\quad+2\int_{|y|\geq C_0}|\nabla(\alpha\widehat{X}\overline{S})||\partial_{\beta}\widetilde{S}||\partial_{\beta}\widetilde{U}|dy\\
&\quad + \int_{|y|=C_0}(C_0+|\widehat{X}\overline{U}|) |\partial_{\beta}\widetilde{U}|^2d\sigma+\int_{|y|=C_0} (C_0+|\widehat{X}\overline{U}|)|\partial_{\beta}\widetilde{S}|^2d\sigma+\int_{|y|=C_0}\alpha|\widehat{X}\overline{S}||\partial_{\beta}\widetilde{U}||\partial_{\beta}\widetilde{S}|d\sigma\\
&\quad-\int_{\partial (e^{s}\mathbb{T}_{L}^{3})} \left( (\partial_{\beta}\widetilde{S})^2+\sum_{j}(\partial_{\beta}\widetilde{U}_j)^2 \right) (y \cdot \vec{n})d\sigma.\end{aligned}$$ Using Lemma [Lemma 30](#linfitnityinside){reference-type="ref" reference="linfitnityinside"}, we have $$\begin{aligned}
&\quad\int_{|y|=C_0}(C_0+|\widehat{X}\overline{U}|)|\partial_{\beta}\widetilde{U}|^2d\sigma+\int_{|y|=C_0}(C_0+|\widehat{X}\overline{U}|)|\partial_{\beta}\widetilde{S}|^2d\sigma+\int_{|y|=C_0}\alpha|\widehat{X}\overline{S}||\partial_{\beta}\widetilde{U}||\partial_{\beta}\widetilde{S}|d\sigma\\
&\lesssim C_0^2(C_0+|\widehat{X}\overline{U}|+\alpha|\widehat{X}\overline{S}|) \left( \frac{\delta_1}{\delta_g} \right)^{2}e^{-\delta_g(s-s_0)}\\
&\leq \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}e^{-2\varepsilon(s-s_0)}.\end{aligned}$$ Then $$\begin{aligned}
\label{B0termestimate}
I_{0,\beta}&=2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot B_{u,0}dy+2\int_{|y|\geq C_0} \partial_{\beta}\widetilde{S}B_{s,0}dy\\\nonumber
&\leq \int_{|y|\geq C_0} \left( 3+|\widehat{X}\overline{U}|'(R)+\frac{2\widehat{X}|\overline{U}|}{R}+|\nabla(\alpha\widehat{X}\overline{S})| \right) \left( |\partial_{\beta}\widetilde{U}|^2+|\partial_{\beta}\widetilde{S}|^2 \right) dy+ \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}e^{-2\varepsilon(s-s_0)}\\\nonumber
&\quad\underbrace{-\int_{\partial (e^{s}\mathbb{T}_{L}^{3})} \left( (\partial_{\beta}\widetilde{S})^2+\sum_{j}(\partial_{\beta}\widetilde{U}_j)^2 \right) (y \cdot \vec{n})d\sigma}_{\text{boundary term}}.\end{aligned}$$ Notice that since $y \cdot \vec{n}=e^{s}L$ on $\partial(e^{s}\mathbb{T}_{L}^{3}),$ this boundary term in $I_{0,\beta}$ is the same as the boundary term in [\[highenergyestimate01\]](#highenergyestimate01){reference-type="eqref" reference="highenergyestimate01"}, but with the opposite sign.
For $B_{u,1}$, $B_{s,1}$ we have $$\begin{aligned}
B_{u,1}&=-(r-1)\partial_{\beta}\widetilde{U}-\partial_{\beta}\widetilde{U}\cdot\nabla(\widehat{X}\overline{U})-\alpha\partial_{\beta}\widetilde{S}\nabla(\widehat{X}\overline{S})\\&
\quad-\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{U}_i+y_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{U}-\alpha\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{S})\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S}\\
&=-(r-1)\partial_{\beta}\widetilde{U}-\partial_{\beta}\widetilde{U}\cdot\nabla(\widehat{X}\overline{U})-\alpha\partial_{\beta}\widetilde{S}\nabla(\widehat{X}\overline{S})\\&
\quad-\sum_{\lambda=1}^{4}\sum_{i=\beta_{\lambda}}\partial_{\beta_{\lambda}}(y_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{U}-\sum_{\lambda=1}^{4}\sum_{i=1 }^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{U}_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{U}-\alpha\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{S})\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S}\\
&=-(r-1)\partial_{\beta}\widetilde{U}-\partial_{\beta}\widetilde{U}\cdot\nabla(\widehat{X}\overline{U})-\alpha\partial_{\beta}\widetilde{S}\nabla(\widehat{X}\overline{S})\\&
\quad-4\partial_{\beta} \widetilde{U}-\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{U}_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{U}-\alpha\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{S})\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S},\end{aligned}$$ and $$\begin{aligned}
B_{s,1}&=-(r-1)\partial_{\beta}\widetilde{S}-\partial_{\beta}\widetilde{U}\cdot \nabla (\widehat{X}\overline{S})-\alpha\partial_{\beta}\widetilde{S}\ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U})\\
&\quad-\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{U}_i+y_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S}-\alpha\sum_{\lambda=1}^{4}\partial_{\beta_{\lambda}}(\widehat{X}\overline{S})\partial_{\beta^{(\lambda)}} \ensuremath{\mathrm{div\,}}(\widetilde{U})\\
&=-(r-1)\partial_{\beta}\widetilde{S}-\partial_{\beta}\widetilde{U}\cdot \nabla (\widehat{X}\overline{S})-\alpha\partial_{\beta}\widetilde{S}\ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U})\\
&\quad-\sum_{\lambda=1}^{4}\sum_{i=\beta_{\lambda}}\partial_{\beta_{\lambda}}(y_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S}-\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{U}_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S}-\alpha\sum_{\lambda=1}^{4}\partial_{\beta_{\lambda}}(\widehat{X}\overline{S})\partial_{\beta^{(\lambda)}} \ensuremath{\mathrm{div\,}}(\widetilde{U})\\
&=-(r-1)\partial_{\beta}\widetilde{S}-\partial_{\beta}\widetilde{U}\cdot \nabla (\widehat{X}\overline{S})-\alpha\partial_{\beta}\widetilde{S}\ensuremath{\mathrm{div\,}}(\widehat{X}\overline{U})\\
&\quad-4\partial_{\beta} \widetilde{S}-\sum_{\lambda=1}^{4}\sum_{i=1}^{3}\partial_{\beta_{\lambda}}(\widehat{X}\overline{U}_i)\partial_{i}\partial_{\beta^{(\lambda)}} \widetilde{S}-\alpha\sum_{\lambda=1}^{4}\partial_{\beta_{\lambda}}(\widehat{X}\overline{S})\partial_{\beta^{(\lambda)}} \ensuremath{\mathrm{div\,}}(\widetilde{U}).\end{aligned}$$ Then, $$\begin{aligned}
\label{B1termestimate}
I_{1,\beta}&=2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot B_{u,1}dy+2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot B_{s,1}dy\\\nonumber
&\leq -2(4+r-1)\int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2dy\\\nonumber
&\quad+C \left( \|\nabla(\widehat{X}\overline{U})\|_{L^{\infty}(|y|\geq C_0)}+\|\nabla(\widehat{X}\overline{S})\|_{L^{\infty}(|y|\geq C_0)} \right) \left(\|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^2+ \|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^2 \right).\end{aligned}$$
For $B_{u,2}$ and $B_{s,2}$, from Hölder's inequality, we have $$\begin{aligned}
\label{B2termestimate}
&\|B_{s,2}\|_{L^2(|y|\geq C_0)}+\|B_{u,2}\|_{L^2(|y|\geq C_0)}\\\nonumber
&\quad\leq C \left( \|\nabla^{2}(\widehat{X}\overline{U})\|_{L^{8}({|y|\geq C_0})}+\|\nabla^2(\widehat{X}\overline{S})\|_{L^{8}({|y|\geq C_0})} \right) \left( \|\nabla^{3}\widetilde{U}\|_{L^{\frac{8}{3}}(|y|\geq C_0)}+\|\nabla^3{\widetilde{S}}\|_{L^{\frac{8}{3}}(|y|\geq C_0)} \right) \\ \nonumber
&\quad\leq C \left( \|\nabla^{2}(\widehat{X}\overline{U})\|_{L^{8}({|y|\geq C_0})}+\|\nabla^2(\widehat{X}\overline{S})\|_{L^{8}({|y|\geq C_0})} \right) \delta_0 e^{-\varepsilon(s-s_0)} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}.\end{aligned}$$ Here we also used the Gagliardo-Nirenberg interpolation Lemma [\[lemma:GN_generalnoweighttorus\]](#lemma:GN_generalnoweighttorus){reference-type="eqref" reference="lemma:GN_generalnoweighttorus"}, by taking $p=\infty$, $q=2$, $\theta=\frac{3}{4}$, $r=\frac{8}{3}$, $i=3$, $m=4$. Thus $$\begin{aligned}
I_{2,\beta}&=2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot B_{u,2}dy+2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot B_{s,2}dy\\
&\leq C(\|\nabla^{2}(\widehat{X}\overline{U})\|_{L^{8}({|y|\geq C_0})}+\|\nabla^2(\widehat{X}\overline{S})\|_{L^{8}({|y|\geq C_0})})\left(\int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2dy\right)^{\frac{1}{2}}{\delta_0}e^{-\varepsilon(s-s_0)} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}.\end{aligned}$$ For $B_{u,3}$ and $B_{s,3}$, from the condition of $C_0$ [\[profilecondition3\]](#profilecondition3){reference-type="eqref" reference="profilecondition3"}, the bootstrap assumptions [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"}, [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"} and interpolation inequality [\[eq:GNresultnoweighttorus\]](#eq:GNresultnoweighttorus){reference-type="eqref" reference="eq:GNresultnoweighttorus"} with $p=\infty,q=2,\theta=\frac{4}{5}, m=4$, we have $$\begin{aligned}
&\|B_{s,3}\|_{L^2(|y|\geq C_0)}+\|B_{u,3}\|_{L^2(|y|\geq C_0)} \\
& \qquad\leq
C\sum_{j=3}^{5} \left( \|\widehat{X}\overline{U}\|_{\dot{H}^{j}(|y|\geq C_0)}+\|\widehat{X}\overline{S}\|_{\dot{H}^{j}(|y|\geq C_0)} \right) \left( \|\widetilde{U}\|_{W^{2,\infty}(|y|\geq C_0)}+\|\widetilde{S}\|_{W^{2,\infty}(|y|\geq C_0)} \right) \\
& \qquad\leq C\sum_{j=3}^{5} \left( \|\widehat{X}\overline{U}\|_{\dot{H}^{j}(|y|\geq C_0)}+\|\widehat{X}\overline{S}\|_{\dot{H}^{j}(|y|\geq C_0)} \right) \delta_0 \left( \frac{1}{C_2} \right)^{\frac{1}{20}}e^{-\varepsilon(s-s_0)}.\end{aligned}$$ Thus $$\begin{aligned}
\label{B3termestimate}
I_{3,\beta}&=2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot B_{u,3}dy+2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot B_{s,3}dy\\\nonumber
&\leq C\sum_{j=3}^{5} \left( \|\widehat{X}\overline{U}\|_{\dot{H}^{j}(|y|\geq C_0)}+\|\widehat{X}\overline{S}\|_{\dot{H}^{j}(|y|\geq C_0)} \right) {\delta_0} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}e^{-\varepsilon(s-s_0)}\left(\int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2dy\right)^{\frac{1}{2}}.\end{aligned}$$ For the nonlinear terms, we use Hölder's inequality and get $$\begin{aligned}
&\|\partial_{\beta}N_u\|_{L^{2}(|y|\geq C_0)}+\|\partial_{\beta}N_s\|_{L^{2}(|y|\geq C_0)}\\
&\quad \leq C \left( \|\nabla ^{5}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}+\|\nabla ^{5}\widetilde{S}\|_{{L^{2}(|y|\geq C_0)}} \right) \left( \|\widetilde{U}\|_{{L^{\infty}(|y|\geq C_0)}}+\|\widetilde{S}\|_{{L^{\infty}(|y|\geq C_0)}} \right) \\
&\qquad + C \left( \|\nabla ^{3}\widetilde{U}\|_{{L^{\frac{8}{3}}(|y|\geq C_0)}}+\|\nabla ^{3}\widetilde{S}\|_{{L^{\frac{8}{3}}(|y|\geq C_0)}} \right) \left( \|\nabla ^{2}\widetilde{U}\|_{{L^{8}(|y|\geq C_0)}}+\|\nabla ^{2}\widetilde{S}\|_{{L^{8}(|y|\geq C_0)}} \right)\\
&\qquad + C \left( \|\nabla ^{4}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}+\|\nabla ^{4}\widetilde{S}\|_{{L^{2}(|y|\geq C_0)}} \right) \left( \|\nabla\widetilde{U}\|_{{L^{\infty}(|y|\geq C_0)}}+\|\nabla \widetilde{S}\|_{{L^{\infty}(|y|\geq C_0)}} \right).\end{aligned}$$ From the Gagliardo-Nirenberg interpolation Lemma [Lemma 42](#lemma:GN_generalnoweighttorus){reference-type="ref" reference="lemma:GN_generalnoweighttorus"}, and the bootstrap assumptions [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"}, [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we get $$\begin{aligned}
&\|\nabla ^{5}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}\lesssim \|\nabla ^{K}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}^{\theta_k}\|\nabla ^{4}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}^{1-\theta_k}+\|\nabla ^{4}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}\lesssim E^{\frac{1}{10}}\delta_0^{\frac{9}{10}}e^{-\varepsilon(s-s_0)\frac{9}{10}}+\delta_0e^{-\varepsilon(s-s_0)},\\
&\|\nabla ^{2}\widetilde{U}\|_{L^{8}(|y|\geq C_0)}, \|\nabla ^{3}\widetilde{U}\|_{{L^{\frac{8}{3}}(|y|\geq C_0)}}\lesssim \|\nabla ^{4}\widetilde{U}\|_{{L^{2}(|y|\geq C_0)}}^{\frac{3}{4}}\|\widetilde{U}\|_{{L^{\infty}(|y|\geq C_0)}}^{\frac{1}{4}}+\| \widetilde{U}\|_{{L^{\infty}(|y|\geq C_0)}}\lesssim \delta_0e^{-\varepsilon(s-s_0)}.\end{aligned}$$ Similar estimates hold for $\widetilde{S}$. Then $$\begin{aligned}
&\|\partial_{\beta}N_u\|_{L^{2}(|y|\geq C_0)}+\|\partial_{\beta}N_s\|_{L^{2}(|y|\geq C_0)}\\
&\quad \leq C{\delta_0}^{\frac{19}{10}}e^{-\frac{19}{10}\varepsilon(s-s_0)}E^{\frac{1}{10}}\\
&\quad \leq {\delta_0}^{\frac{3}{2}}e^{-\frac{3}{2}\varepsilon(s-s_0)}.\end{aligned}$$ Here we used $\delta_0E\ll 1$ in the last step. Then $$\begin{aligned}
\label{Ntermestimate}
I_{N,\beta}&=2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot \partial_{\beta}N_u dy+2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{S}\cdot \partial_{\beta}N_s dy\leq 2{\delta_0}^{\frac{3}{2}}e^{-\frac{3}{2}\varepsilon(s-s_0)}\left(\int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2dy\right)^{\frac{1}{2}}.\end{aligned}$$ For $\partial_{\beta}\left(\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right)$, from Lemma [Lemma 27](#lemma:brasilia){reference-type="ref" reference="lemma:brasilia"} we have $$\begin{aligned}
|\nabla^{4-j}\Delta U| &\lesssim_{\delta_0,\varepsilon}S \phi^{-3+\frac{j}{2}}R^{\varepsilon}.\\
|\nabla^{j}\frac{1}{S^{\frac{1}{\alpha}}}| &\lesssim \sum_{l\leq j,\, k_1+k_2+\cdots+k_l=j} \frac{|\nabla^{k_1}S||\nabla^{k_2}S|\cdots|\nabla^{k_l}S|}{S^{\frac{1}{\alpha}+l}}\\
&\lesssim \sum_{l\leq j,\, k_1+k_2+\cdots+k_l=j} \phi^{-\frac{k_1+k_2+\cdots+k_{l}}{2}}\frac{1}{S^{\frac{1}{\alpha}}}R^{j\varepsilon}\\
&\lesssim_{\delta_0, R_0,\varepsilon} \phi^{-\frac{j}{2}}\frac{1}{S^{\frac{1}{\alpha}}}R^{j\varepsilon} . \end{aligned}$$ Hence we have $$\begin{aligned}
\left| \partial_{\beta}\left(\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right) \right| \lesssim &\sum_{j} \left| \nabla^{4-j}\Delta U \right| \cdot \left| \nabla^{j}\left(\frac{1}{S^{\frac{1}{\alpha}}} \right)\right| \lesssim_{\delta_1,\varepsilon} \phi^{-3}S^{1-\frac{1}{\alpha}}R^{5\varepsilon}\lesssim_{\delta_1,R_0,\varepsilon} \left( \frac{1}{R} \right)^{6(1-\eta)-(r-1)(\frac{1}{\alpha}-1)}\\
\qquad &\lesssim_{\delta_0, R_0,\varepsilon} \left( \frac{1}{R} \right)^{3},\end{aligned}$$ where we used $\eta\ll 1, (r-1)(\frac{1}{\alpha}-1)\leq (r-1)(\frac{1}{\alpha}+2)\leq 2$, and Lemma [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"}.
Then we have $$e^{-\delta_{dis}s} \left\| \partial_{\beta}\left(\frac{\Delta U}{S^{\frac{1}{\alpha}}}\right) \right\|_{L^2(|y|\geq C_0)}\lesssim_{\delta_0,R_0}e^{-\delta_{dis}s}\leq \delta_1 e^{-\delta_g(s-s_0)},$$ and $$\begin{aligned}
I_{dis,\beta}=2\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\partial_{\beta} \left( e^{-\delta_{dis}s}\frac{\Delta U}{S^{\frac{1}{\alpha}}} \right) dy\leq 2\delta_1 e^{-\delta_g(s-s_0)}\left( \int_{|y|\geq C_0}(|\partial_{\beta}\widetilde{U}|^2+|\partial_{\beta}\widetilde{S}|^2)dy \right)^{\frac{1}{2}} .\end{aligned}$$ Additionally, from Lemma [\[profilekestimate1\]](#profilekestimate1){reference-type="ref" reference="profilekestimate1"}, we have $$\begin{aligned}
I_{E,\beta}=4\int_{|y|\geq C_0}\partial_{\beta}\widetilde{U}\cdot\partial_{\beta}\mathcal{E}_{u}+\partial_{\beta}\widetilde{S}\partial_{\beta}\mathcal{E}_{s}dy\leq 2\delta_1 e^{-(s-s_0)} \left( \int_{|y|\geq C_0}\left( |\partial_{\beta}\widetilde{U}|^2+|\partial_{\beta}\widetilde{S}|^2 \right) dy \right)^{\frac{1}{2}}.\end{aligned}$$ In conclusion, from [\[highenergyestimate01\]](#highenergyestimate01){reference-type="eqref" reference="highenergyestimate01"} and estimates for each term, we have $$\begin{aligned}
I&\leq \left( -2(4+r-1)+3 \left( 1+ \|\nabla(\widehat{X}\overline{U})\|_{L^{\infty}(|y|\geq C_0)}+\left\|\frac{\widehat{X}\overline{U}}{R}\right\|_{L^{\infty}(|y|\geq C_0)}+ \left\|\nabla(\widehat{X}\overline{S}) \right\|_{L^{\infty}(|y|\geq C_0)} \right) \right) \sum_{\beta\in [3]^{4}}\int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2dy\\\nonumber
&\quad+C \left( \|\nabla(\widehat{X}\overline{U})\|_{L^{\infty}(|y|\geq C_0)}+\|\nabla(\widehat{X}\overline{S})\|_{L^{\infty}(|y|\geq C_0)} \right)
\left( \|\widetilde{U}\|^2_{\dot{H}^4(|y|\geq C_0)}+ \|\widetilde{S}\|^2_{\dot{H}^4(|y|\geq C_0)} \right) + 3^{4} \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}e^{-2\varepsilon(s-s_0)}\\
&\quad+\Bigg[C \left( \|\nabla^{2}(\widehat{X}\overline{U})\|_{L^{8}({|y|\geq
C_0})}+\|\nabla^2(\widehat{X}\overline{S})\|_{L^{8}({|y|\geq C_0})} \right) \delta_0 e^{-\varepsilon(s-s_0)} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}\\
&\qquad+C\sum_{j=3}^{5} \left( \|\widehat{X}\overline{U}\|_{\dot{H}^{j}(|y|\geq C_0)}+\|\widehat{X}\overline{S}\|_{\dot{H}^{j}(|y|\geq C_0)} \right) \delta_0 \left( \frac{1}{C_2} \right)^{\frac{1}{20}}e^{-\varepsilon(s-s_0)}+2{\delta_0}^{\frac{3}{2}}e^{-\frac{3}{2}\varepsilon(s-s_0)}+2\delta_1 e^{-\delta_g(s-s_0)}\\
&\qquad+ 4\delta_1 e^{-(s-s_0)} \Bigg] \cdot\sum_{\beta\in[3]^{4}} \left( \int_{|y|\geq C_0}|\partial_{\beta}\widetilde{U}|^2+(\partial_{\beta}\widetilde{S})^2 dy \right)^{\frac{1}{2}}\\
&\leq
\left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right)\\\nonumber
&\qquad\times \left( -5-2(r-1)+C \left( \|\nabla(\widehat{X}\overline{U})\|_{L^{\infty}(|y|\geq C_0)}+\|\nabla(\widehat{X}\overline{S})\|_{L^{\infty}(|y|\geq C_0)}+\|\frac{\widehat{X}\overline{U}}{R}\|_{L^{\infty}(|y|\geq C_0)} \right) \right) \\
&\quad+C \left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right)^{\frac{1}{2}} \Bigg[(\|\nabla^{2}(\widehat{X}\overline{U})\|_{L^{8}({|y|\geq C_0})}+\|\nabla^2(\widehat{X}\overline{S})\|_{L^{8}({|y|\geq C_0})}) \delta_0 e^{-\varepsilon(s-s_0)}\left( \frac{1}{C_2} \right)^{\frac{1}{20}}\\
&\qquad+\sum_{j=3}^{5} \left( \|\widehat{X}\overline{U}\|_{\dot{H}^{j}(|y|\geq C_0)}+\|\widehat{X}\overline{S}\|_{\dot{H}^{j}(|y|\geq C_0)} \right) \delta_0 \left( \frac{1}{C_2} \right)^{\frac{1}{20}}e^{-\varepsilon(s-s_0)}
+2{\delta_0}^{\frac{3}{2}}e^{-\frac{3}{2}\varepsilon(s-s_0)}+4\delta_1 e^{-\delta_g(s-s_0)} \Bigg]\\
&\quad+3^{4} \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}e^{-2\varepsilon(s-s_0)}.\end{aligned}$$ From the profile conditions [\[profilecondition4\]](#profilecondition4){reference-type="eqref" reference="profilecondition4"}, [\[profilecondition2\]](#profilecondition2){reference-type="eqref" reference="profilecondition2"}, [\[profilecondition3\]](#profilecondition3){reference-type="eqref" reference="profilecondition3"}, we have $$\begin{aligned}
&\quad\frac{d}{ds} \left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right) \leq - \left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right) \\
&\qquad +\frac{1}{20} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}\delta_0
e^{-\varepsilon(s-s_0)} \left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right)^{\frac{1}{2}}+81 \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}} e^{-2\varepsilon(s-s_0)}.\end{aligned}$$ Let $A(s)= \left( \|\widetilde{U}\|_{\dot{H}^4(|y|\geq C_0)}^{2}+\|\widetilde{S}\|_{\dot{H}^4(|y|\geq C_0)}^{2} \right) e^{2\varepsilon(s-s_0)}.$ Then we get: $$\begin{aligned}
\frac{dA(s)}{ds}\leq -\frac{1}{2}A(s)+\frac{1}{20} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}\delta_0
\sqrt{A(s)}+81 \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}.\end{aligned}$$ We then claim $$\label{Aenergyestimate}
A(s)< \left( \frac{1}{5} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}\delta_0 \right)^2+4 \cdot 81 \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}\leq \frac{1}{4} \delta_0^2,$$ since [\[Aenergyestimate\]](#Aenergyestimate){reference-type="eqref" reference="Aenergyestimate"} holds at $s=s_0$ and, if there were a first time $s^{*}$ at which equality holds, we would have $\frac{dA(s^{*})}{ds}<0$, a contradiction. ◻
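For the reader's convenience, we record the elementary computation behind the last claim. By Young's inequality, $$\frac{1}{20} \left( \frac{1}{C_2} \right)^{\frac{1}{20}}\delta_0\sqrt{A}\leq \frac{A}{4}+\frac{1}{400} \left( \frac{1}{C_2} \right)^{\frac{1}{10}}\delta_0^2,$$ so that $\frac{dA}{ds}\leq -\frac{A}{4}+\frac{1}{400} \left( \frac{1}{C_2} \right)^{\frac{1}{10}}\delta_0^2+81 \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}$; evaluating the right-hand side at the threshold value $A=\frac{1}{25} \left( \frac{1}{C_2} \right)^{\frac{1}{10}}\delta_0^2+4\cdot 81 \left( \frac{\delta_1}{\delta_g} \right)^{\frac{17}{10}}$ gives $-\frac{3}{400} \left( \frac{1}{C_2} \right)^{\frac{1}{10}}\delta_0^2<0$.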
## Energy estimate for higher order bounds {#higherorderestimate}
In this section, we show how to close the bootstrap assumption for the energy. That is, assuming we are under the hypotheses of Proposition [Proposition 23](#prop:bootstrap){reference-type="ref" reference="prop:bootstrap"}, we will show that $E_{K} \leq \frac12 E$, see [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"} for a definition of $E_K$.
First of all, we let $[3]^K$ be the set of all possible $K$-th order multiindices formed with $\{ 1, 2, 3 \}$. Thus, from definition [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we have that $$E_{K} = \sum_{\beta \in [3]^K} \int_{e^s\mathbb T_{L}^3}
\left( | \partial_\beta U |^2 + | \partial_\beta S |^2 \right) \phi^K dy,$$ where $\partial_\beta U$ is the vector $(\partial_\beta U_1, \partial_\beta U_2, \partial_\beta U_3)$.
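For instance, if $K=4$ and $\beta=(1,1,2,3)\in[3]^{4}$, then $\partial_{\beta}=\partial_{1}^{2}\partial_{2}\partial_{3}$; since $[3]^K$ consists of ordered tuples, each derivative of this type is counted once for every ordering of its indices in the sum defining $E_K$.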
Taking a $\partial_\beta$ derivative in [\[eq:US2\]](#eq:US2){reference-type="eqref" reference="eq:US2"} (with viscosity) and using that $\partial_\beta (y \nabla f) = K \partial_\beta f + y \nabla \partial_\beta f$, we have that: $$\begin{aligned}
\label{eq:US3} \begin{split}
(\partial_s + r -1 + K )\partial_\beta U_i + y\cdot \nabla \partial_\beta U_i + \partial_\beta (U \cdot \nabla U_i) + \alpha \partial_\beta (S \partial_i S) &= \frac{r^{1+\frac{1}{\alpha}}}{\alpha^{1/\alpha} }
e^{-\delta_{\rm dis} s} \partial_\beta \frac{\Delta U_i}{ S^{1/\alpha }}\,, \\
(\partial_s + r - 1 + K)\partial_\beta S + y \cdot \nabla \partial_\beta S + \alpha \partial_\beta (S \ensuremath{\mathrm{div\,}}U) + \partial_\beta (U \cdot \nabla S) &= 0.
\end{split} \end{aligned}$$ Now, multiplying the equations of [\[eq:US3\]](#eq:US3){reference-type="eqref" reference="eq:US3"} by $\phi^K \partial_\beta U_i$ (summing over $i$) and $\phi^K \partial_\beta S$ respectively, and integrating over $e^s\mathbb T_{L}^3$, we obtain the following equality (we will use the notation $\int$ but mean integration over $e^s\mathbb T_{L}^3$): $$\begin{aligned}
\begin{split}
\left( \frac{1}{2}\partial_s + r -1 + K \right) E_{K} & + \underbrace{ \sum_{\beta \in [3]^K} \int \phi^K y \cdot \left( \partial_\beta S \nabla \partial_\beta S + \sum_i \partial_\beta U_i \nabla \partial_\beta U_i \right) }_{\ensuremath{\mathcal}I_1} \\
&\quad+ \underbrace{ \sum_{\beta \in [3]^K}\int \phi^K \left( \sum_i \partial_\beta U_i \partial_\beta (U \cdot \nabla U_i) + \partial_\beta S \partial_\beta (U \cdot \nabla S) \right) }_{\ensuremath{\mathcal}I_2}\\
&\quad + \underbrace{ \sum_{\beta \in [3]^K} \int \phi^K \alpha \left( \sum_i \partial_\beta U_i \partial_\beta (S \partial_i S) + \partial_\beta S \partial_\beta (S \ensuremath{\mathrm{div\,}}U)\right) }_{\ensuremath{\mathcal}I_3} \\
&\quad \underbrace{-\frac{1}{2}e^{s}L\sum_{\beta \in [3]^K}\int_{\partial (e^s\mathbb{T}_{L}^{3})}(|\partial_{\beta}U|^2+|\partial_{\beta}S|^2)}_{\text{boundary term}}\\
&= \underbrace{ \sum_{\beta \in [3]^K} C_{\rm{dis}}
e^{-\delta_{\rm dis} s} \int \phi^K \partial_\beta U \cdot \partial_\beta \frac{\Delta U_i}{ S^{1/\alpha }} }_{\ensuremath{\mathcal}I_4}. \label{eq:EE1}
\end{split} \end{aligned}$$
### Main energy estimate: terms $\ensuremath{\mathcal}I_1$, $\ensuremath{\mathcal}I_2$ and $\ensuremath{\mathcal}I_3$
First of all, integrating by parts, we see that $$\begin{aligned}
\label{estimateI1}
\ensuremath{\mathcal}I_1 &= \sum_{\beta \in [3]^K} \int \phi^K \frac{y}{2} \left( \nabla | \partial_\beta S |^2 + \sum_i \nabla | \partial_\beta U_i |^2 \right) dy \\
&= \frac{-1}{2} \sum_{\beta \in [3]^K} \int \frac{\ensuremath{\mathrm{div\,}}(\phi^K y)}{\phi^K} \left( |\partial_\beta U |^2 + |\partial_\beta S |^2 \right) \phi^K dy \notag+\underbrace{\frac{1}{2}e^{s}L\sum_{\beta \in [3]^K}\int_{\partial (e^s\mathbb{T}_{L}^{3})}(|\partial_{\beta}U|^2+|\partial_{\beta}S|^2)d\sigma}_{\text{boundary term}}\\\nonumber
&= \frac{-K}{2} \sum_{\beta \in [3]^K} \int \frac{y \cdot \nabla \phi}{\phi} \left( |\partial_\beta U |^2 + |\partial_\beta S |^2 \right) \phi^K dy + O ( E )+\underbrace{\frac{1}{2}e^{s}L\sum_{\beta \in [3]^K}\int_{\partial (e^s\mathbb{T}_{L}^{3})}(|\partial_{\beta}U|^2+|\partial_{\beta}S|^2)d\sigma.}_{\text{boundary term}}\end{aligned}$$ The boundary term cancels the same term in [\[eq:EE1\]](#eq:EE1){reference-type="eqref" reference="eq:EE1"}. Next, we prove the following lemma:
**Lemma 33**. *Let $\beta_j$ denote the $j-$th index of the multiindex $\beta$, and analogously, let $\beta^{(j)}$ be the $(K-1)$-th order multiindex generated by erasing $\beta_j$. More specifically, we have $$\partial_{\beta^{(j)}}=\partial_{\beta_1}\partial_{\beta_2}...\partial_{\beta_{j-1}}\partial_{\beta_{j+1}}...\partial_{\beta_K}.$$ Under the bootstrap assumptions [\[lowerestimate1\]](#lowerestimate1){reference-type="eqref" reference="lowerestimate1"}, [\[lowerestimate2\]](#lowerestimate2){reference-type="eqref" reference="lowerestimate2"}, [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}, we have $$\begin{aligned}
\begin{split}
\Bigg\|
\partial_\beta (U \nabla U_i ) &- U \nabla \partial_\beta U_i - \sum_{j=1}^K \sum_{k=1}^3 \left( \left( \partial_R (\widehat{X}\overline U_R) - \frac{\widehat{X}\overline U_R}{R} \right) \frac{y_{\beta_j} y_k}{R^2} + \delta_{k, \beta_j} \frac{\widehat{X}\overline U_R}{R} \right) \partial_k \partial_{\beta^{(j)}} U_i
\Bigg\|_{L^2_{\phi^{K}}} \\
&= O\left( E^{1/2-1/(4K)} + \|\phi^{K/2} \partial_\beta U \|_{L^2} \right), \label{eq:mainder1}\end{split}\end{aligned}$$ $$\begin{aligned}
\begin{split}
\Bigg\|
\partial_\beta (U \nabla S ) &- U \nabla \partial_\beta S - \sum_{j=1}^K \sum_{k=1}^3 \left( \left( \partial_R (\widehat{X}\overline U_R) - \frac{\widehat{X}\overline U_R}{R} \right) \frac{y_{\beta_j} y_k}{R^2} + \delta_{k, \beta_j} \frac{\widehat{X}\overline U_R}{R} \right) \partial_k \partial_{\beta^{(j)}} S
\Bigg\|_{L^2_{\phi^{K}}} \\
&= O\left( E^{1/2-1/(4K)} + \| \phi^{K/2} \partial_\beta S \|_{L^2} \right), \label{eq:mainder2} \end{split}\end{aligned}$$ $$\begin{aligned}
\left\|
\partial_\beta (S \partial_i S ) - S \nabla \partial_\beta S - \sum_{j=1}^K \frac{y_{\beta_j}}{R} \partial_R (\widehat{X}\overline S) \partial_i \partial_{\beta^{(j)}} S
\right\|_{L^2_{\phi^{K}}} &= O\left( E^{1/2-1/(4K)} + \| \phi^{K/2}\partial_\beta S \|_{L^2} \right), \label{eq:mainder3} \\
%
\left\|
\partial_\beta (S \ensuremath{\mathrm{div\,}}(U) ) - S \partial_\beta \ensuremath{\mathrm{div\,}}(U) - \sum_{j=1}^K \sum_{i=1}^3 \frac{y_{\beta_j}}{R} \partial_R (\widehat{X}\overline S) \partial_i \partial_{\beta^{(j)}} U_i
\right\|_{L^2_{\phi^{K}}} &= O\left( E^{1/2-1/(4K)} + \|\phi^{K/2} \partial_\beta U \|_{L^2} \right). \label{eq:mainder4} \end{aligned}$$*
*Proof.* Since all those bounds are analogous, let us just show [\[eq:mainder1\]](#eq:mainder1){reference-type="eqref" reference="eq:mainder1"}. We develop $\partial_\beta (U \nabla U_i)$ by the Leibniz rule and we observe that: $$\label{eq:uruguay1}
\left| \partial_\beta(U \nabla U_i) - \partial_\beta U \nabla U_i - U \nabla \partial_\beta U_i - \sum_{j=1}^K \partial_{\beta_j} U \nabla \partial_{\beta^{(j)}} U_i \right| \lesssim_K \sum_{\ell, \ell' \geq 2, \ell + \ell' = K+1} | \nabla^\ell U | \cdot | \nabla^{\ell'} U |,$$ where $\lesssim_K$ indicates that the implicit constant is allowed to depend on $K$. Now, using [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"} from Lemma [Lemma 40](#lemma:GN_general){reference-type="ref" reference="lemma:GN_general"} for $\psi = 1$, $p=\infty$, $q=2$, $m=K$, $\overline{r} = \frac{2(K+1)}{\ell}$, $\theta = \frac{\ell - 3/\overline{r}}{K - 3/2}$ we have that $$\begin{aligned}
\begin{split} \label{eq:mendel}
\| \langle R \rangle^{-1/10} \phi^{\frac{K\theta}{2}} \nabla^\ell \widetilde U \|_{L^{{\overline{r}}} } &\lesssim\delta_0^{1-\theta} E^{\frac{\theta}{2}} \\
\| \langle R \rangle^{-1/10} \phi^{\frac{K\theta}{2}} \nabla^\ell (\widehat X \overline U_R) \|_{L^{{\overline{r}}} } &\lesssim_K 1.
\end{split} \end{aligned}$$ Thus, we obtain that for $\ell+\ell' = K+1$, $\ell, \ell' \geq 2$, using [\[perturbationenergy\]](#perturbationenergy){reference-type="eqref" reference="perturbationenergy"}: $$\begin{aligned}
\label{eq:trinidad1}
\left\| |\nabla^\ell \widetilde U | | \nabla^{\ell'} \widetilde U| \phi^{K/2} \right\|_{L^2} &= \left\| |\nabla^\ell \widetilde U | | \nabla^{\ell'} \widetilde U| \phi^{\frac{K}{2}\cdot \frac{\ell - 3/\overline{r}}{K-3/2}}\phi^{\frac{K}{2}\cdot \frac{\ell' - 3/r'}{K-3/2}} \phi^{-\frac12 \cdot \frac{K}{K-3/2}} \right\|_{L^2} \notag \\
&\leq \left\| \langle R \rangle^{-1/10} \phi^{\frac{K}{2}\cdot \frac{\ell - 3/\overline{r}}{K-3/2}} \nabla^\ell \widetilde U \right\|_{L^{\frac{2(K+1)}{\ell}}}
\left\| \langle R \rangle^{-1/10} \phi^{\frac{K}{2}\cdot \frac{\ell' - 3/r'}{K-3/2}} \nabla^{\ell'} \widetilde U \right\|_{L^{\frac{2(K+1)}{\ell'}}} \notag \\
& \lesssim\delta_0^{\frac{K-5/2}{K-3/2}}E^{\frac{K-\frac{1}{2}}{2(K-3/2)}}.\end{aligned}$$ In the first inequality we used that $\ell+\ell' = K+1$ together with $\frac{K(\ell-3/\overline{r})}{K-3/2} + \frac{K(\ell' - 3/r')}{K-3/2} = K\frac{K-1/2}{K-3/2}$ for $\overline{r}= \frac{2(K+1)}{\ell}$, $r' = \frac{2(K+1)}{\ell'}$. [\[eq:GN_extracond\]](#eq:GN_extracond){reference-type="eqref" reference="eq:GN_extracond"} is satisfied because $\eta\gg \frac{1}{K}.$ In the second one we used [\[eq:mendel\]](#eq:mendel){reference-type="eqref" reference="eq:mendel"} with $\theta + \theta' = \frac{K-1/2}{K-3/2}$. Similar to [\[eq:trinidad1\]](#eq:trinidad1){reference-type="eqref" reference="eq:trinidad1"}, using again [\[perturbationenergy\]](#perturbationenergy){reference-type="eqref" reference="perturbationenergy"} and [\[eq:mendel\]](#eq:mendel){reference-type="eqref" reference="eq:mendel"}, we obtain that $$\begin{aligned}
\label{eq:trinidad2}
\left\| |\nabla^\ell \widetilde U | | \nabla^{\ell'} (\widehat{X}\overline U_R) | \phi^{K/2} \right\|_{L^2} &\leq
\left\| \langle R \rangle^{-1/10} \phi^{\frac{K}{2}\cdot \frac{\ell - 3/\overline{r}}{K-3/2}} \nabla^\ell \widetilde U \right\|_{L^{\frac{2(K+1)}{\ell}}}
\left\| \langle R \rangle^{-1/10} \phi^{\frac{K}{2}\cdot \frac{\ell' - 3/r'}{K-3/2}} \nabla^{\ell'} (\widehat X \overline U_R) \right\|_{L^{\frac{2(K+1)}{\ell'}}} \notag \\
&\lesssim_K \delta_0^{\frac{K-2}{(K-3/2)(K+1)}}E^{\frac12 \left(1- \frac{K-2}{(K-3/2)(K+1)} \right)}
\lesssim 1,\end{aligned}$$ and $$\begin{aligned}
\label{eq:trinidad3}
\left\| |\nabla^\ell (\widehat{X}\overline U_R) | | \nabla^{\ell'} (\widehat{X}\overline U_R) | \phi^{K/2} \right\|_{L^2} \leq\left\| \phi^{\frac{K}{2}\cdot \frac{(\ell - 3/\overline{r})}{K-3/2}} \nabla^\ell (\widehat{X}\overline U_R) \right\|_{L^{\frac{2(K+1)}{\ell}}}
\left\| \phi^{\frac{K}{2}\cdot \frac{(\ell' - 3/\overline{r})}{K-3/2}} \nabla^{\ell'} (\widehat{X}\overline U_R) \right\|_{L^{\frac{2(K+1)}{\ell'}}}\lesssim_K 1.\end{aligned}$$ Combining [\[eq:trinidad1\]](#eq:trinidad1){reference-type="eqref" reference="eq:trinidad1"}--[\[eq:trinidad3\]](#eq:trinidad3){reference-type="eqref" reference="eq:trinidad3"} (and using that $\delta_0$ is sufficiently small depending on $K$, $E$), we obtain $$\label{eq:uruguay2}
\left\| | \nabla^\ell U | \cdot |\nabla^{\ell'} U | \phi^{K/2} \right\|_{L^2}
\lesssim_K 1.$$
From [\[eq:uruguay1\]](#eq:uruguay1){reference-type="eqref" reference="eq:uruguay1"}--[\[eq:uruguay2\]](#eq:uruguay2){reference-type="eqref" reference="eq:uruguay2"}, using that $E$ is sufficiently large depending on $K$, we deduce $$\label{eq:uruguay3}
\left\| \partial_\beta(U \nabla U_i) - \partial_\beta U \nabla U_i - U \nabla \partial_\beta U_i - \sum_{j=1}^K \partial_{\beta_j} U \nabla \partial_{\beta^{(j)}} U_i \right\|_{L^2_{\phi^{K}}} \lesssim E^{1/4}.$$
Using the Gagliardo-Nirenberg inequality between $\| \widetilde U \|_{L^\infty} \leq \delta_0$ and $\| \widetilde U \|_{\dot{H}^{m}} \leq E$ (and using that $\delta_0$ is sufficiently small depending on $E$), we obtain that $$\label{eq:nablaUtilde}
\| \nabla \widetilde U \|_{L^\infty} \lesssim\delta_0^{9/10}.$$ We deduce that $$\label{eq:nablaUtotal}
\| \nabla U \|_{L^\infty} \leq \| \nabla \widetilde U \|_{L^\infty} + \| \nabla ( \widehat{X}\overline U_R) \|_{L^\infty} \lesssim 1.$$ Therefore $$\label{eq:uruguay4}
\| \partial_\beta U \nabla U_i \phi^K \|_{L^2} \lesssim\| \nabla U_i \|_{L^\infty} \| \partial_\beta U \|_{L^2_{\phi^{K}}} \lesssim\| \partial_\beta U \|_{L^2_{\phi^{K}}}.$$
Thus, from [\[eq:uruguay3\]](#eq:uruguay3){reference-type="eqref" reference="eq:uruguay3"} and [\[eq:uruguay4\]](#eq:uruguay4){reference-type="eqref" reference="eq:uruguay4"}, we conclude $$\label{eq:uruguay5}
\left\| \partial_\beta(U \nabla U_i) - U \nabla \partial_\beta U_i - \sum_{j=1}^K \partial_{\beta_j} U \nabla \partial_{\beta^{(j)}} U_i \right\|_{L^2_{\phi^{K}}} \lesssim E^{1/4} + \| \partial_\beta U \phi^{K/2} \|_{L^2}.$$ Using [\[eq:nablaUtilde\]](#eq:nablaUtilde){reference-type="eqref" reference="eq:nablaUtilde"}, we also have that $$\left\| \sum_{j=1}^K \partial_{\beta_j} \widetilde U \nabla \partial_{\beta^{(j)}} U_i \right\|_{L^2_{\phi^{K}}} \lesssim\delta_{0}^{9/10} E^{\frac{1}{2}} \lesssim\delta_{0}^{4/5}.$$ Plugging this into [\[eq:uruguay5\]](#eq:uruguay5){reference-type="eqref" reference="eq:uruguay5"}, we deduce $$\label{eq:uruguay6}
\left\| \partial_\beta(U \nabla U_i) - U \nabla \partial_\beta U_i - \sum_{j=1}^K \partial_{\beta_j} (\widehat{X}\overline U_R) \nabla \partial_{\beta^{(j)}} U_i \right\|_{L^2_{\phi^{K}}} \lesssim E^{1/2-1/(4K)} + \| \partial_\beta U \phi^{K/2} \|_{L^2}.$$ Finally, we compute explicitly the last term of the norm. Using that $\overline U$ is radially symmetric, we have $$\partial_{\beta_j} (\widehat{X}\overline U_k) = \partial_{\beta_j} \left( \frac{y_k}{R} \widehat{X}\overline U_R \right) = \left( \partial_R (\widehat{X}\overline U_R) - \frac{\widehat{X}\overline U_R}{R} \right) \frac{y_k y_{\beta_j}}{R^2} + \delta_{\beta_j, k} \frac{\widehat{X} \overline U_R}{R},$$ and plugging this into [\[eq:uruguay6\]](#eq:uruguay6){reference-type="eqref" reference="eq:uruguay6"}, we conclude the proof. ◻
We will use this lemma to estimate $\ensuremath{\mathcal}I_2$ and $\ensuremath{\mathcal}I_3$. Note that once $j$ is fixed, letting $\beta$ run over $[3]^K$ is equivalent to letting $\beta^{(j)}$ run over $[3]^{K-1}$ and $\beta_j$ over $[3] = \{1, 2, 3\}$. For fixed $j$ we will denote $\widetilde\beta = \beta^{(j)}$ and $m = \beta_j$. Using [\[eq:mainder1\]](#eq:mainder1){reference-type="eqref" reference="eq:mainder1"}, we see that $$\begin{aligned}
\sum_{\beta \in [3]^K} &\int \phi^K \sum_i \partial_\beta U_i \partial_\beta (U \cdot \nabla U_i) = \sum_{\beta \in [3]^K } \left( O(E^{1/2-1/(4K)} ) + O(\| \phi^{K/2} \partial_\beta U \|_{L^2}) \right) \| \partial_\beta U \phi^{K/2} \|_{L^2} \notag \\
&\quad + \sum_{\beta \in [3]^K} \int \phi^K \sum_i \Bigg[ U \nabla \partial_\beta U_i \partial_\beta U_i + \partial_\beta U_i \sum_{j=1}^K \sum_{k=1}^3 \left( \left( \partial_R (\widehat{X}\overline U_R) - \frac{\widehat{X}\overline U_R}{R} \right) \frac{y_{\beta_j} y_k}{R^2} + \delta_{k, \beta_j} \frac{\widehat{X}\overline U_R}{R} \right) \partial_k \partial_{\beta^{(j)}} U_i \Bigg] \notag \\
&=
O(3^K E^{1-1/(4K)} ) + O\left( \sum_{\beta \in [3]^K } \| \phi^{K/2} \partial_\beta U \|_{L^2}^2 \right) - \sum_{\beta \in [3]^K} \int \frac{\ensuremath{\mathrm{div\,}}(\phi^K U)}{2 \phi^K } |\partial_\beta U |^2 \phi^K \notag \\
&\quad + \sum_{j=1}^K \sum_{\widetilde\beta \in [3]^{K-1}} \sum_{i, k, m = 1}^3 \int \phi^K \left( \left( \partial_R (\widehat{X}\overline U_R) - \frac{\widehat{X}\overline U_R}{R} \right) \frac{y_k y_m}{R^2} + \delta_{k, m} \frac{\widehat{X}\overline U_R}{R} \right)
\partial_m \partial_{\widetilde\beta} U_i \partial_k \partial_{\widetilde\beta} U_i \notag \\
&=
O(E ) - \sum_{\beta \in [3]^K} \int \frac{K \nabla \phi \cdot U+ \phi \ensuremath{\mathrm{div\,}}(U)}{2\phi} |\partial_\beta U |^2 \phi^K \notag \\
&\quad +K \sum_{\widetilde\beta \in [3]^{K-1}} \int \phi^K \left( \partial_R(\widehat{X}\overline U_R) - \frac{\widehat{X}\overline U_R}{R} \right) \sum_{i=1}^3 | \partial_R \partial_{\widetilde\beta} U_i |^2
+ K \sum_{\widetilde\beta \in [3]^{K-1}} \sum_{i, k = 1}^3 \int \phi^K \frac{\widehat{X}\overline U_R}{R} |\partial_k \partial_{\widetilde\beta} U_i|^2 \notag \\
&\geq
O(E ) + K \sum_{\widetilde\beta \in [3]^{K-1}} \int \left(
\frac{- \nabla \phi \cdot \overline U}{2\phi } | \nabla \partial_{\widetilde\beta} U|^2
+ \partial_R (\widehat X \overline U_R) | \partial_R \partial_{\widetilde\beta} U|^2 +
\frac{\widehat{X}\overline U_R}{R} |\nabla_{\theta} \partial_{\widetilde\beta} U |^2 \right) \phi^K. \label{eq:paraguay1}\end{aligned}$$ The first equality is a combination of [\[eq:mainder1\]](#eq:mainder1){reference-type="eqref" reference="eq:mainder1"} with Cauchy-Schwarz. After the second equality, the first two big-$O$ terms arise from the inequality $ab\leq \frac{1}{2}(a^2+b^2)$, the third term arises from integration by parts, and the fourth term is just obtained by rearranging indices (we extract the $j$-th index in $\beta$ and define $\widetilde\beta = \beta^{(j)}$ together with $m = \beta_j$). In the third equality we used that $\sum_{k, m} \frac{y_m y_k}{R^2} \partial_{k} f \partial_{m} f = |\partial_R f|^2$. In the last inequality we used that $\|\ensuremath{\mathrm{div\,}}(U)\|_{L^\infty} \lesssim 1$ (from [\[eq:nablaUtotal\]](#eq:nablaUtotal){reference-type="eqref" reference="eq:nablaUtotal"}) and $\|\frac{\nabla \phi}{2\phi}\widetilde{U}\|_{L^{\infty}}\lesssim \delta_0$. Recall that $\nabla_{\theta}$ is the gradient in spherical coordinates without the radial derivative.
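For completeness, the identity used in the third equality is just the chain rule for the radial derivative: $$\partial_R f=\sum_{k=1}^{3}\frac{y_k}{R}\partial_k f\qquad\Longrightarrow\qquad |\partial_R f|^2=\sum_{k,m=1}^{3}\frac{y_k y_m}{R^2}\partial_k f\,\partial_m f.$$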
An analogous computation using [\[eq:mainder2\]](#eq:mainder2){reference-type="eqref" reference="eq:mainder2"} yields $$\begin{aligned}
&\sum_{\beta \in [3]^K} \int \phi^K \partial_\beta S \partial_\beta (U \cdot \nabla S) \notag \\
&\qquad \label{eq:paraguay2} \geq
K \sum_{\widetilde\beta \in [3]^{K-1}} \int \left( \frac{-\nabla \phi \cdot \widehat{X}\overline U_R}{2\phi} | \nabla \partial_{\widetilde\beta} S |^2
+ \partial_R (\widehat X \overline U_R) |\partial_R \partial_{\widetilde\beta} S|^2
+ \frac{\widehat X \overline U_R}{R} | \nabla_{\theta} \partial_{\widetilde\beta} S|^2\right) \phi^K + O(E).\end{aligned}$$
Adding [\[eq:paraguay1\]](#eq:paraguay1){reference-type="eqref" reference="eq:paraguay1"} and [\[eq:paraguay2\]](#eq:paraguay2){reference-type="eqref" reference="eq:paraguay2"}, we see that $$\begin{aligned}
-\ensuremath{\mathcal}I_2 \leq O(E) &+ K \int \left( \frac{\nabla \phi \cdot \widehat{X}\overline U_R}{2\phi} - \partial_R (\widehat X \overline U_R) \right) \phi^K \left( | \partial_R \nabla^{K-1} U |^2 + | \partial_R \nabla^{K-1} S|^2 \right) \notag \\
&+ K \int \left( \frac{\nabla \phi \cdot \widehat{X}\overline U_R}{2\phi} - \frac{\widehat X \overline U_R}{R} \right) \phi^K \left( | \nabla_\theta \nabla^{K-1} U |^2 + | \nabla_\theta \nabla^{K-1} S|^2 \right). \label{eq:paraguay3}\end{aligned}$$
We focus on $\ensuremath{\mathcal}I_3$. Using [\[eq:mainder3\]](#eq:mainder3){reference-type="eqref" reference="eq:mainder3"}, we start computing $$\begin{aligned}
\sum_{\beta \in [3]^K} &\sum_{i=1}^3 \int \phi^K \partial_\beta (S \partial_i S) \partial_\beta U_i =
\sum_{\beta \in [3]^K} O \left( (E^{1/2-1/(4K)} + \| \phi^{K/2} \partial_\beta S \|_{L^2} ) \| \phi^{K/2} \partial_\beta U \|_{L^2} \right) \notag \\
&\quad + \sum_{\beta \in [3]^K} \int \phi^K S \nabla \partial_\beta S \cdot \partial_\beta U + \sum_{i=1}^3 \sum_{\beta \in [3]^K } \int \phi^K \partial_\beta U_i \sum_{j=1}^K \frac{y_{\beta_j}}{R} \partial_R (\widehat{X}\overline S) \partial_i \partial_{\beta^{(j)}} S\notag \\
&=
O \left( 3^K E^{1-1/(4K)} + \sum_{\beta \in [3]^K } (\|
\phi^{K/2} \partial_\beta U \|_{L^2}^2 + \| \phi^{K/2} \partial_\beta S \|_{L^2}^2 ) \right) + \sum_{\beta \in [3]^K } \sum_{i=1}^3 \int \phi^K S \partial_i \partial_\beta S \partial_\beta U_i \notag \\
&\quad +
\sum_{j=1}^K \sum_{\beta^{(j)} \in [3]^{K-1}} \sum_{\beta_j =1}^3 \sum_{i = 1}^3 \int \phi^K \partial_R (\widehat{X}\overline S) \frac{y_{\beta_j}}{R} \partial_{\beta_j} \partial_{\beta^{(j)}} U_i \partial_i \partial_{\beta^{(j)} } S \notag \\
&=
O \left( 3^K E^{1-1/(4K)} + \sum_{\beta \in [3]^K } (\|
\phi^{K/2} \partial_\beta U \|_{L^2}^2 + \| \phi^{K/2} \partial_\beta S \|_{L^2}^2 ) \right) + \sum_{\beta \in [3]^K } \sum_{i=1}^3 \int \phi^K S \partial_i \partial_\beta S \partial_\beta U_i \notag \\
&\quad +
\sum_{j=1}^K \sum_{\widetilde\beta \in [3]^{K-1}} \sum_{i, m = 1}^3 \int \phi^K \partial_R (\widehat{X}\overline S) \frac{y_m}{R} \partial_m \partial_{\widetilde\beta} U_i \partial_i \partial_{\widetilde\beta } S \notag \\
&=
O(E) + \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K S \partial_i \partial_\beta S \partial_\beta U_i + K \sum_{\widetilde\beta \in [3]^{K-1}} \sum_i \int \phi^K \partial_R( \widehat{X}\overline S) \partial_R \partial_{\widetilde\beta} U_i \partial_i \partial_{\widetilde\beta} S \notag \\
&\geq O(E) + \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K S \partial_i \partial_\beta S \partial_\beta U_i - K \sum_{\widetilde\beta \in [3]^{K-1}} \sum_{i=1}^3 \int \phi^K |\partial_R(\widehat{X} \overline S) | \left( \frac{| \nabla \partial_{\widetilde\beta} U_i |^2}{2} + \frac{|\partial_i \partial_{\widetilde\beta} S|^2}{2} \right) \notag \\
&= O(E) + \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K S \partial_i \partial_\beta S \partial_\beta U_i - \frac{K}{2} \sum_{\beta \in [3]^{K}}\int \phi^K |\partial_R (\widehat{X}\overline S)| \left( | \partial_{\beta} U |^2 + | \partial_{ \beta} S|^2\right). \label{eq:bolivia1}\end{aligned}$$ An analogous computation using [\[eq:mainder4\]](#eq:mainder4){reference-type="eqref" reference="eq:mainder4"} yields: $$\begin{aligned}
\begin{split}\label{eq:bolivia2}
\sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K \partial_\beta (S \partial_i U_i ) \partial_\beta S &\geq
O(E) + \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K S \partial_\beta S \partial_i \partial_\beta U_i \\
&\quad - \frac{K}{2} \int \phi^K |\partial_R(\widehat{X}\overline S)| ( | \partial_\beta U |^2 + | \partial_\beta S |^2 ).
\end{split} \end{aligned}$$
Adding up [\[eq:bolivia1\]](#eq:bolivia1){reference-type="eqref" reference="eq:bolivia1"} and [\[eq:bolivia2\]](#eq:bolivia2){reference-type="eqref" reference="eq:bolivia2"}, and integrating by parts, we get $$\begin{aligned}
\label{eq:bolivia3}
-\ensuremath{\mathcal}I_3 &\leq O(E) - \alpha \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K \frac{\partial_i (\phi^K S)}{\phi^K} \partial_\beta S \partial_\beta U_i + K\alpha \sum_{\beta \in [3]^K} \int \phi^K |\partial_R (\widehat{X}\overline S) | ( |\partial_\beta U |^2 + | \partial_\beta S |^2 )\\\nonumber
&\leq O(E) - \alpha \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K \frac{\partial_i (\phi^K \widehat{X}\overline S)}{\phi^K} \partial_\beta S \partial_\beta U_i + K\alpha \sum_{\beta \in [3]^K} \int \phi^K |\partial_R (\widehat{X}\overline S) | ( |\partial_\beta U |^2 + | \partial_\beta S |^2 ), \end{aligned}$$ where we recall $\alpha = \frac{\gamma- 1}{2}$ and we also use $\|\frac{\partial_{i}(\phi^{K})\widetilde{S}}{\phi^{K}}\|_{L^{\infty}}\lesssim K\delta_0\lesssim 1.$ Since $\| \nabla \overline S \|_{L^\infty} \lesssim 1$, we have $$\begin{aligned}
\left| \sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K \frac{\partial_i (\phi^K \widehat{X}\overline S)}{\phi^K} \partial_\beta S \partial_\beta U_i \right| &\leq
\sum_{\beta \in [3]^K} \sum_{i=1}^3 \int \phi^K \left( | \partial_i (\widehat{X}\overline S) | \frac{| \partial_\beta U_i|^2 + | \partial_\beta S |^2}{2} + K \frac{\partial_i \phi \widehat{X}\overline S}{\phi} \partial_\beta S \partial_\beta U_i \right) \\
&\leq O(E) + K \sum_{\beta \in [3]^K} \int \phi^K \frac{| \nabla \phi | \widehat{X}\overline S}{\phi} |\partial_\beta S| \cdot |\partial_\beta U | \\
&\leq O(E) + \frac{K}{2} \sum_{\beta \in [3]^K} \int \phi^K \frac{| \nabla \phi | \widehat{X}\overline S}{\phi} \left( |\partial_\beta S|^2 + |\partial_\beta U |^2 \right),\end{aligned}$$ and using this into [\[eq:bolivia3\]](#eq:bolivia3){reference-type="eqref" reference="eq:bolivia3"}, we get $$\label{eq:bolivia4}
-\ensuremath{\mathcal}I_3 \leq O(E) + K\alpha \int \phi^K \left( \frac{| \nabla \phi | \widehat{X}\overline S}{2\phi} + | \partial_R (\widehat{X}\overline S) |\right)\left( |\nabla^K S |^2 + | \nabla^K U |^2 \right)$$
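For completeness, we note that the first inequality in the chain above uses only the product rule $$\frac{\partial_i (\phi^K \widehat{X}\overline S)}{\phi^K} = \partial_i (\widehat{X}\overline S) + K\, \frac{\partial_i \phi}{\phi}\, \widehat{X}\overline S,$$ followed by Young's inequality applied to the term involving $\partial_i (\widehat{X}\overline S)$.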
Finally, adding [\[estimateI1\]](#estimateI1){reference-type="eqref" reference="estimateI1"}, [\[eq:paraguay3\]](#eq:paraguay3){reference-type="eqref" reference="eq:paraguay3"} and [\[eq:bolivia4\]](#eq:bolivia4){reference-type="eqref" reference="eq:bolivia4"}, we obtain $$\begin{aligned}
\begin{split} \label{eq:argentina}
-\ensuremath{\mathcal}I_1-\ensuremath{\mathcal}I_2-\ensuremath{\mathcal}I_3 &\leq O(E) + K \int \left(\partial_R(\widehat{X} \overline U_R) + \alpha |\partial_R(\widehat{X} \overline S) | \right) \phi^K ( |\partial_R \nabla^{K-1} U|^2 + | \partial_R \nabla^{K-1} S |^2 ) \\
& \qquad+ K \int \left( \frac{\widehat{X}\overline U_R}{R} + \alpha |\partial_R(\widehat{X} \overline S) | \right) \phi^K ( |\nabla_\theta \nabla^{K-1} U|^2 + | \nabla_\theta \nabla^{K-1} S |^2 ) \\
& \qquad + K \int \frac{\nabla \phi \cdot (y + \widehat{X}\overline U) + | \nabla \phi | \widehat{X}\overline S}{2\phi} \phi^K ( |\nabla^K U|^2 + | \nabla^K S |^2 )\\
&\qquad-\underbrace{\frac{1}{2}e^{s}L\sum_{\beta \in [3]^K}\int_{\partial (e^s\mathbb{T}_{L}^{3})}(|\partial_{\beta}U|^2+|\partial_{\beta}S|^2)d\sigma}_{\text{boundary term}}.
\end{split} \end{aligned}$$
Now, we claim the following modified repulsivity properties: $$\begin{aligned}
\label{eq:argentina2}
\partial_R (\widehat{X}\overline U_R) + \alpha |\partial_R (\widehat{X}\overline S) |+ \frac{\nabla \phi \cdot (y+\widehat{X}\overline U) + | \nabla \phi | \widehat{X}\overline S}{2\phi} \leq 1 - \frac{\min \{ \widetilde\eta, \eta \}}{2}, \\
\frac{\widehat{X}\overline U_R}{R} + \alpha |\partial_R (\widehat{X}\overline S) |+ \frac{\nabla \phi \cdot (y+\widehat{X}\overline U) + | \nabla \phi | \widehat{X}\overline S}{2\phi} \leq 1 - \frac{\min \{ \widetilde\eta , \eta \} }{2}. \label{eq:argentina5halfs}\end{aligned}$$ In the region $|y| \leq R_0$, we have $\phi = 1$, so the last fraction on the LHS is zero. Also, $|\partial_R \widehat X| \lesssim e^{-s_0}$. Therefore, by taking $\widetilde\eta$ sufficiently small [\[eq:argentina2\]](#eq:argentina2){reference-type="eqref" reference="eq:argentina2"} and [\[eq:argentina5halfs\]](#eq:argentina5halfs){reference-type="eqref" reference="eq:argentina5halfs"} follow from the standard repulsivity properties [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"}--[\[eq:angular_repulsivity\]](#eq:angular_repulsivity){reference-type="eqref" reference="eq:angular_repulsivity"}.
In the region $|y| \geq R_0$, since $R_0$ is sufficiently large depending on $\eta$ [\[Rochoice estimate\]](#Rochoice estimate){reference-type="eqref" reference="Rochoice estimate"} and the profiles decay (equation [\[eq:profiles_decay\]](#eq:profiles_decay){reference-type="eqref" reference="eq:profiles_decay"}), we have that: $$-\partial_R \overline U_R + \alpha |\partial_R \overline S |+ \frac{\nabla \phi \cdot (y+\overline U) + | \nabla \phi | \overline S}{2\phi} \leq \frac{\eta}{4} + \frac{R\partial_R \phi}{2\phi} \leq \frac{\eta}{4} + 1-\eta < 1-\frac{\eta}{2},$$ and this concludes the proof of [\[eq:argentina2\]](#eq:argentina2){reference-type="eqref" reference="eq:argentina2"}. We can conclude [\[eq:argentina5halfs\]](#eq:argentina5halfs){reference-type="eqref" reference="eq:argentina5halfs"} analogously just interchanging $\partial_R \overline U$ by $\overline U/R$. Finally, combining [\[eq:argentina\]](#eq:argentina){reference-type="eqref" reference="eq:argentina"} with [\[eq:argentina2\]](#eq:argentina2){reference-type="eqref" reference="eq:argentina2"}--[\[eq:argentina5halfs\]](#eq:argentina5halfs){reference-type="eqref" reference="eq:argentina5halfs"}, we get that $$\begin{aligned}
-\ensuremath{\mathcal}I_1 - \ensuremath{\mathcal}I_2 - \ensuremath{\mathcal}I_3 &\leq O(E) + K\left( 1 - \frac{\min \{ \eta , \widetilde\eta \} }{2} \right) \sum_{\beta \in [3]^K} \int \phi^K ( |\partial_\beta U |^2 + | \partial_\beta S |^2 )\\\nonumber
& \qquad-\underbrace{\frac{1}{2}e^{s}L\sum_{\beta \in [3]^K}\int_{\partial (e^s\mathbb{T}_{L}^{3})}(|\partial_{\beta}U|^2+|\partial_{\beta}S|^2)d\sigma}_{\text{boundary term}},\end{aligned}$$ and plugging this into [\[eq:EE1\]](#eq:EE1){reference-type="eqref" reference="eq:EE1"}, we get $$\label{eq:EE2}
\partial_s E_{K} \leq -\frac{K\min \{ \eta, \widetilde\eta \} }{2} E_{K} + O(E)+\ensuremath{\mathcal}I_{4}.$$
### Dissipation term
Starting from the expression of $\ensuremath{\mathcal}I_4$ in [\[eq:EE1\]](#eq:EE1){reference-type="eqref" reference="eq:EE1"}, we commute $\partial_\beta$ with $S^{-1/\alpha}$ and integrate by parts: $$\begin{aligned}
\label{eq:Dmain}
\ensuremath{\mathcal}I_4 = \frac{r^{1 + \frac{1}{\alpha}} }{\alpha^{1/\alpha}} e^{-\delta_{\rm{dis}} s } \sum_{i=1}^{3} \sum_{\beta \in [3]^K}
\left( \underbrace{ \int \phi^K \partial_\beta U_i \left[ \partial_\beta, \frac{1}{S^{1/\alpha}} \right] \Delta U_i }_{\ensuremath{\mathcal}D_{1, \beta, i}}
- \underbrace{ \int \frac{\phi^K}{S^{1/\alpha}} |\nabla \partial_\beta U_i|^2 }_{\ensuremath{\mathcal}D_{2, \beta, i}}
-\underbrace{ \int \nabla \left( \frac{\phi^K}{S^{1/\alpha}} \right) \nabla \partial_\beta U_i \partial_\beta U_i }_{\ensuremath{\mathcal}D_{3, \beta, i}} \right).
\end{aligned}$$ Clearly $\ensuremath{\mathcal}D_{2, \beta, i } > 0$, so that term already has the correct sign. We will bound $\ensuremath{\mathcal}D_{1, \beta, i}$ and $\ensuremath{\mathcal}D_{3, \beta, i}$. Let us start with the latter: $$\begin{aligned}
| \ensuremath{\mathcal}D_{3, \beta, i} | &=\left| \int \left( \frac{K \phi^{K-1} \nabla \phi}{S^{1/\alpha}} - \frac{1}{\alpha} \cdot \frac{\phi^K \nabla S}{S^{1+1/\alpha}} \right) \nabla \partial_\beta U_i \partial_\beta U_i \notag \right|\\
&\leq D_{2, \beta, i}^{1/2} \left[ K \left( \int \frac{\phi^{K-2} |\nabla \phi |^2}{S^{1/\alpha} } | \partial_\beta U_i |^2 \right)^{1/2} + \frac{1}{\alpha} \left( \int \phi^K \frac{|\nabla S|^2}{S^{2+1/\alpha}} | \partial_\beta U_i |^2 \right)^{1/2} \right] \notag \\
&\leq D_{2, \beta, i}^{1/2} E^{1/2} \left( K \left\| \frac{| \nabla \phi |^2}{\phi^2 S^{1/\alpha } }\right\|_{L^\infty} + \frac{1}{\alpha} \left\| \frac{| \nabla S |^2}{S^{2+1/\alpha}} \right\|_{L^\infty} \right). \label{eq:suriname1}\end{aligned}$$
Using Lemmas [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"} and [Lemma 26](#lemma:Sprimebounds){reference-type="ref" reference="lemma:Sprimebounds"}, we see that $$\begin{aligned}
\label{eq:suriname2}
\frac{| \nabla \phi |^2}{\phi^2 S^{1/\alpha} } \lesssim_{\delta_0} \frac{1}{\langle R \rangle^2 \cdot \langle R \rangle^{-(r-1)/\alpha}} \leq 1, \qquad \mbox{ and } \qquad
\frac{| \nabla S |^2}{S^{2+1/\alpha}} \lesssim_{\delta_0} \frac{\langle R \rangle^{-2r}}{\langle R \rangle^{-(2+1/\alpha)(r-1)}} \leq 1,\end{aligned}$$ where in the last inequality of both equations we used $2>\frac{r-1}{\alpha}$ (due to equation [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}).
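For the reader's convenience, we record the elementary exponent count behind the two bounds just displayed, using only $\alpha = \frac{\gamma-1}{2}$ and the decay rates quoted from Lemmas [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"} and [Lemma 26](#lemma:Sprimebounds){reference-type="ref" reference="lemma:Sprimebounds"}: $$\frac{1}{\langle R \rangle^{2} \cdot \langle R \rangle^{-(r-1)/\alpha}} = \langle R \rangle^{\frac{r-1}{\alpha} - 2}, \qquad \frac{\langle R \rangle^{-2r}}{\langle R \rangle^{-(2+1/\alpha)(r-1)}} = \langle R \rangle^{\frac{r-1}{\alpha} - 2},$$ so both bounds reduce to the single inequality $\frac{r-1}{\alpha} \leq 2$, which (since $\alpha = \frac{\gamma-1}{2}$) is equivalent to $r \leq \gamma$.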
Plugging [\[eq:suriname2\]](#eq:suriname2){reference-type="eqref" reference="eq:suriname2"} into [\[eq:suriname1\]](#eq:suriname1){reference-type="eqref" reference="eq:suriname1"}, and using that $\delta_0$ is sufficiently small depending on $E, K$, we obtain $$\label{eq:suriname3}
|\ensuremath{\mathcal}D_{3, \beta, i}| \lesssim_{\delta_0} \ensuremath{\mathcal}D_{2, \beta, i}^{1/2}$$
We move to study $\ensuremath{\mathcal}D_{1, \beta, i}$ in [\[eq:Dmain\]](#eq:Dmain){reference-type="eqref" reference="eq:Dmain"}. Note that $$\left| \left[ \partial_\beta, \frac{1}{S^{1/\alpha}} \right] f - \sum_{j=1}^K \partial_{\beta_j} \frac{1}{S^{1/\alpha}} \partial_{\beta^{(j)}}f \right| \lesssim 3^K\max_{\substack{ |\beta| + | \beta ' | = K \\ | \beta | \leq K-2}} |\partial_{\beta'} \left( \frac{1}{S^{1/\alpha}} \right) \partial_\beta f|$$ just by identifying the terms of order $K-1$ in the commutator. Using this in the expression of $\ensuremath{\mathcal}D_{1,\beta, i}$, we obtain $$\label{eq:chile}
\ensuremath{\mathcal}D_{1, \beta, i} \lesssim\underbrace{ \sum_{j=1}^K \int \phi^K \partial_\beta U_i \partial_{\beta^{(j)}} \Delta U_i \frac{\partial_{\beta_j} S}{S^{1+1/\alpha}} }_{\ensuremath{\mathcal}D_{1, \beta, i}' } + 3^K E^{1/2} \underbrace{ \max_{\substack{ |\beta| + | \beta ' | = K \\ | \beta | \leq K-2}} \left\| \partial_{\beta'} \left( \frac{1}{S^{1/\alpha}} \right) \partial_\beta \Delta U_i \phi^{K/2} \right\|_{L^2} }_{\ensuremath{\mathcal}D_{1, \beta, i}'' }.$$
Defining $$\ensuremath{\mathcal}D_2 = \sum_{\beta \in [3]^K} \sum_{i=1}^3 \ensuremath{\mathcal}D_{2, \beta, i} = \int | \nabla^{K+1} U |^2 \frac{\phi^K}{S^{1/\alpha} },$$ we can bound $\ensuremath{\mathcal}D_{1, \beta, i}'$ as $$\begin{aligned}
\ensuremath{\mathcal}D_{1, \beta, i}' &\lesssim K \int \phi^K | \partial_\beta U | \cdot |\nabla^{K+1} U | \frac{| \nabla S |}{S^{1+1/\alpha}} \lesssim_{K} \ensuremath{\mathcal}D_2^{1/2} \left( \int \phi^K |\partial_\beta U|^2 \frac{| \nabla S |^2}{S^{2+1/\alpha}} \right)^{1/2} \notag \\
&\lesssim_{K}
\ensuremath{\mathcal}D_2^{1/2} E^{1/2} \left\| \frac{| \nabla S |^2}{S^{2+1/\alpha}} \right\|_{L^\infty} \lesssim_{\delta_0} \ensuremath{\mathcal}D_2^{1/2}. \label{eq:kamchatka}\end{aligned}$$ where in the last inequality we used [\[eq:suriname2\]](#eq:suriname2){reference-type="eqref" reference="eq:suriname2"} and the fact that $\delta_0$ is allowed to depend on $E$.
Using Lemma [Lemma 27](#lemma:brasilia){reference-type="ref" reference="lemma:brasilia"}, we proceed to bound $\ensuremath{\mathcal}D_{1, \beta, i}''$. Developing the derivative $\partial_{\beta ' } \frac{1}{S^{1/\alpha}}$, we have $$\label{eq:stockholm}
\ensuremath{\mathcal}D_{1, \beta, i}'' = \max_{\substack{ |\beta| + | \beta ' | = K \\ | \beta | \leq K-2}} \left\| \partial_{\beta'} \left( \frac{1}{S^{1/\alpha}} \right) \partial_\beta \Delta U_i \phi^{K/2} \right\|_{L^2}
\lesssim_K \max_{\substack{ |\beta^0| + | \beta^1 | + \ldots + | \beta^\ell |= K \\ | \beta^0 | \leq K-2}}
\left\| \frac{1}{S^{1/\alpha}} \partial_{\beta^0} \Delta U_i \frac{\partial_{\beta^1} S}{S} \frac{\partial_{\beta^2} S}{S} \ldots \frac{\partial_{\beta^\ell} S}{S} \phi^{K/2} \right\|_{L^2}$$ We consider different cases.
**Generic case. $| \beta^0 | \leq K-4$ and $| \beta^i | \leq K-2$ for all $i \geq 1$.** In this case, at most $K-2$ derivatives fall on each term. Let us define $$F(j) = -\frac{-r(j-1) + (1-\eta) \frac{5j}{2} + K\eta - 5/2 }{K-5/2} + \overline{\varepsilon} ,$$ so a direct application of inequality [\[eq:brazil3\]](#eq:brazil3){reference-type="eqref" reference="eq:brazil3"} and Lemma [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"} yields $$\begin{aligned}
&\left\| \frac{\partial_{\beta^0} \Delta U_i }{S^{1/\alpha}} \frac{\partial_{\beta^1} S}{S} \frac{\partial_{\beta^2} S}{S} \ldots \frac{\partial_{\beta^\ell} S}{S} \phi^{K/2} \right\|_{L^2} \\
&\qquad\leq \left\| \frac{\phi^{\frac{| \beta^0 |+ 2}{2} }\partial_{\beta^0} \Delta U_i}{S \langle R \rangle^{F(|\beta^0|+2 )}} \right\|_{L^\infty}
\left\| \frac{\phi^{ \frac{| \beta^1 |}{2} }\partial_{\beta^1} S}{S \langle R \rangle^{F(|\beta^1| )}} \right\|_{L^\infty} \ldots
\left\| \frac{\phi^{\frac{| \beta^\ell |}{2} }\partial_{\beta^\ell} S}{S \langle R \rangle^{F(|\beta^\ell| )}} \right\|_{L^\infty}
\left\| \frac{\langle R \rangle^{F(|\beta^0|+2)} \langle R \rangle^{ \sum_{i=1}^\ell F(| \beta^i |)}}{\phi S^{1/\alpha - 1} } \right\|_{L^2} \\
&\qquad\lesssim_{\delta_0}
\left\| \frac{\langle R \rangle^{ -\frac{-r(K+1-\ell) + (1-\eta) \frac{5(K+2)}{2} + K\eta (\ell+1) - 5(\ell+1) /2 }{K-5/2}+ \overline{\varepsilon} (\ell + 1)} }{\phi \langle R \rangle^{-(r-1)(1/\alpha - 1)}} \right\|_{L^2}.\end{aligned}$$ Let us look at the exponent in the numerator of the expression above and define $$\begin{aligned}
\Xi &= -\left( -r(K+1-\ell) + (1-\eta) \frac{5(K+2)}{2} + K\eta (\ell+1) - 5 (\ell+1)/2 \right) \\
&=
rK + r -r\ell - \frac{5K}{2} + \left( \frac{3}{2} - \ell \right)K\eta + 5 \ell/2 -5/2+5\eta.\end{aligned}$$ If $\ell \geq 2$, since $K$ is sufficiently large with respect to $\ell$, we have that $(3/2 - \ell)K\eta < -100\ell$ and we obtain $$\Xi \leq \left( r - \frac52 \right)K.$$ If $\ell = 1$, we get that $$\Xi = \left( r - \frac52 \right) K + \frac12 K\eta + 5\eta \leq \left( r - \frac52 + \eta \right)K,$$ using that $K\eta \gg \eta$. Thus, we have $$\begin{aligned}
\label{epsicondition01}
\left\| \frac{\partial_{\beta^0} \Delta U_i }{S^{1/\alpha}} \frac{\partial_{\beta^1} S}{S} \frac{\partial_{\beta^2} S}{S} \ldots \frac{\partial_{\beta^\ell} S}{S} \phi^{K/2} \right\|_{L^2}
&\lesssim_{\delta_0}
\left\| \frac{\langle R \rangle^{(r-5/2+\eta + \overline{\varepsilon}(K+1)) } }{ \langle R \rangle^{2(1-\eta)-(r-1)(1/\alpha - 1)}} \right\|_{L^2} \\\nonumber
&= \left( \int \langle R \rangle^{-9+2r+2(r-1)(1/\alpha-1)+6\eta + 2(K+1)\overline{\varepsilon}}dR \right)^{1/2}.\end{aligned}$$ Finally, we use that $\frac{r-1}{\alpha} \leq 2$ (from equation [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}), that $\eta$ is sufficiently small, that $\overline{\varepsilon}$ is sufficiently small depending on $K$, that $2r < 4$, and that $\langle R \rangle^{-3+\overline{\varepsilon}}$ is integrable. We obtain $$\left\| \frac{\partial_{\beta^0} \Delta U_i }{S^{1/\alpha}} \frac{\partial_{\beta^1} S}{S} \frac{\partial_{\beta^2} S}{S} \ldots \frac{\partial_{\beta^\ell} S}{S} \phi^{K/2} \right\|_{L^2} \lesssim_{\delta_0} 1.$$
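A minimal check of the final exponent, using only $\frac{r-1}{\alpha} \leq 2$: $$-9 + 2r + 2(r-1)\left( \frac{1}{\alpha} - 1 \right) = -7 + \frac{2(r-1)}{\alpha} \leq -3,$$ so the integrand above is bounded by $\langle R \rangle^{-3 + 6\eta + 2(K+1)\overline{\varepsilon}}$, which is integrable once $\eta$ and $\overline{\varepsilon}$ are chosen small enough.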
**Case where some $| \beta^i | \in \{ K-1, K \}$ for $i \geq 1$.** Without loss of generality assume $i=1$. If $| \beta^1 | = K$, the energy bound yields $$\left\| \phi^{K/2} \partial_{\beta^1} S \frac{\Delta U_i}{S^{1+1/\alpha}} \right\|_{L^2} \lesssim_{\delta_0} \left\| \frac{| \nabla^2 U |}{S^{1+1/\alpha}} \right\|_{L^\infty},$$ while if $| \beta^1 | = K-1$, using [\[eq:brazilk-1\]](#eq:brazilk-1){reference-type="eqref" reference="eq:brazilk-1"} the two possible cases are: $$\begin{aligned}
\left\| \phi^{K/2} \partial_{\beta^1} S \frac{\partial_j S \Delta U_i}{S^{2+1/\alpha}} \right\|_{L^2} &\lesssim_{\delta_0,\overline{\varepsilon}} \|\langle R\rangle^{K(1-\eta)\frac{K-2}{K-1}-\overline{\varepsilon}}\partial_{\beta^{1}}S \|_{L^{2+\frac{2}{K-2}}}\left\| \frac{ | \nabla S |\langle R\rangle}{S} \right\|_{L^\infty} \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}-1} |\nabla^2 U|}{S^{1+1/\alpha}} \right\|_{L^{2(K-1)}}, \\
\left\| \phi^{K/2} \partial_{\beta^1} S \frac{\partial_j \Delta U_i}{S^{1+1/\alpha}} \right\|_{L^2} &\lesssim_{\delta_0,\overline{\varepsilon}} \|\langle R\rangle^{K(1-\eta)\frac{K-2}{K-1}-\overline{\varepsilon}}\partial_{\beta^{1}}S \|_{L^{2+\frac{2}{K-2}}} \left\| \frac{ \langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}} | \nabla^3 U | }{S^{1+1/\alpha}} \right\|_{L^{2(K-1)}}.\end{aligned}$$ On the one hand, combining Lemma [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"} with Lemma [Lemma 26](#lemma:Sprimebounds){reference-type="ref" reference="lemma:Sprimebounds"}, we see that $\left\| \frac{ | \nabla S |\langle R \rangle}{S } \right\|_{L^\infty} \lesssim_{\delta_0} 1$. On the other hand, using [\[eq:brazil2\]](#eq:brazil2){reference-type="eqref" reference="eq:brazil2"} for $j = 2, 3$ together with Lemma [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"}, we get $$\begin{aligned}
\label{eq:grenada}
&\left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}-1}| \nabla^2 U |}{S^{1+1/\alpha}} \right\|_{L^{2(K-1)}} + \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}} | \nabla^3 U | }{S^{1+1/\alpha}} \right\|_{L^{2(K-1)}}\\\nonumber
& \quad \lesssim_{\delta_0,\overline{\varepsilon}}
\left\| \frac{R ^{-r-2+2\eta+\overline{\varepsilon}+K(1-\eta)\frac{1}{K-1}+2\overline{\varepsilon}}}{\langle R \rangle^{-(r-1)(1+1/\alpha)}} \right\|_{L^{2(K-1)}} + \left\| \frac{ R ^{-r-2+3\eta+\overline{\varepsilon}+K(1-\eta)\frac{1}{K-1}+2\overline{\varepsilon}}}{\langle R \rangle^{ -(r-1)(1+1/\alpha)}} \right\|_{L^{2(K-1)}} \lesssim_{\delta_0,\overline{\varepsilon}} 1,\end{aligned}$$ where in the last inequality we used $(r-1)\left( 2 + \frac{1}{\alpha} \right) < 2$ (equivalent to $r - 1 < 1-1/\gamma$ from equation [\[eq:rough_range_r\]](#eq:rough_range_r){reference-type="eqref" reference="eq:rough_range_r"}) and $r>1$.
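For completeness, the equivalence invoked in the last inequality is the following one-line computation with $\alpha = \frac{\gamma-1}{2}$: $$(r-1)\left( 2 + \frac{1}{\alpha} \right) = (r-1)\,\frac{2\gamma}{\gamma - 1} < 2 \quad \Longleftrightarrow \quad r - 1 < \frac{\gamma-1}{\gamma} = 1 - \frac{1}{\gamma}.$$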
**Case where $| \beta^0 | = K-2$.** We can bound $$\begin{aligned}
\left\| \partial_{\beta^0} \Delta U \phi^{K/2} \nabla^2 \left( \frac{1}{S^{1/\alpha}} \right) \right\|_{L^2}
&\leq \| \phi^{K/2} \nabla^K U \|_{L^2} \left( \left\| \frac{\nabla^2 S}{S^{1+1/\alpha}}\right\|_{L^\infty} + \left\| \frac{|\nabla S|^2}{S^{2+1/\alpha}}\right\|_{L^\infty} \right) \\
&\leq E \left( \left\| \frac{\nabla^2 S}{S^{1+1/\alpha}}\right\|_{L^\infty} + \left\| \frac{\langle R\rangle^{-2r}}{\langle R \rangle^{-(r-1)(2+1/\alpha)}}\right\|_{L^\infty} \right) \lesssim_{\delta_0} 1,\end{aligned}$$ where in the last inequality we used again that $2 > (r-1)(2+\frac{1}{\alpha})$ (from equation [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}). We also bounded $\left\| \frac{\nabla^2 S}{S^{1+1/\alpha}}\right\|_{L^\infty}$ in the same way as $\left\| \frac{\nabla^2 U}{S^{1+1/\alpha}}\right\|_{L^\infty}$ in the previous case.
**Case where $| \beta^0 | = K-3$.** Using [\[eq:brazilk-1\]](#eq:brazilk-1){reference-type="eqref" reference="eq:brazilk-1"}, together with $S \gtrsim_{\delta_0} \langle R \rangle^{-r+1}$ (Lemma [Lemma 25](#lemma:Sbounds){reference-type="ref" reference="lemma:Sbounds"}) and $| \nabla S | \lesssim_{\delta_0} \langle R \rangle^{-r}$ (Lemma [Lemma 26](#lemma:Sprimebounds){reference-type="ref" reference="lemma:Sprimebounds"}), we have that $$\begin{aligned}
&\quad\left\| \partial_{\beta^0} \Delta U \phi^{K/2} \nabla^3 \left( \frac{1}{S^{1/\alpha}} \right) \right\|_{L^2} \\
&\leq
\left\| \langle R\rangle^{K(1-\eta)\frac{K-2}{K-1}-\overline{\varepsilon}} \nabla^{K-1} U \right\|_{L^{2+\frac{2}{K-2}}}\\
&\quad\cdot\left( \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}}\nabla^{3} S}{S^{1+1/\alpha}}\right\|_{L^{2(K-1)}} + \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}}|\nabla^{2} S| | \nabla S |}{S^{2+1/\alpha}}\right\|_{L^{2(K-1)}} + \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}}|\nabla S|^3}{S^{3+1/\alpha}}\right\|_{L^{2(K-1)}} \right) \\
&\lesssim_{\delta_0,\overline{\varepsilon}}
\left( \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}}\nabla^{3} S}{S^{1+1/\alpha}}\right\|_{L^{2(K-1)}} + \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}}|\nabla^{2} S| }{S^{1+1/\alpha}}\right\|_{L^{2(K-1)}} + \left\| \frac{\langle R\rangle^{K(1-\eta)\frac{1}{K-1}+\overline{\varepsilon}-1}\langle R \rangle^{-2r} }{ \langle R \rangle^{-(r-1)(2+1/\alpha)}}\right\|_{L^{2(K-1)}} \right).\end{aligned}$$ The same reasoning applied in [\[eq:grenada\]](#eq:grenada){reference-type="eqref" reference="eq:grenada"} for $U$ applies here for $S$, and using again that $2 > (r-1)(2+1/\alpha)$ (from equation [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"}), we conclude that $$\left\| \partial_{\beta^0}\Delta U \phi^{K/2} \nabla^3 \left( \frac{1}{S^{1/\alpha}} \right) \right\|_{L^2} \lesssim_{\delta_0} 1$$
Combining all our different cases and using [\[eq:stockholm\]](#eq:stockholm){reference-type="eqref" reference="eq:stockholm"}, we obtain $$\label{eq:albacete}
\ensuremath{\mathcal}D_{1, \beta, i}'' \lesssim_{\delta_0} 1.$$
Using [\[eq:suriname3\]](#eq:suriname3){reference-type="eqref" reference="eq:suriname3"}, [\[eq:chile\]](#eq:chile){reference-type="eqref" reference="eq:chile"}, [\[eq:kamchatka\]](#eq:kamchatka){reference-type="eqref" reference="eq:kamchatka"}, and [\[eq:albacete\]](#eq:albacete){reference-type="eqref" reference="eq:albacete"}, we obtain that $$\label{eq:calamares}
\ensuremath{\mathcal}D_{1, \beta, i} + |\ensuremath{\mathcal}D_{3, \beta, i}| \lesssim_{\delta_0} \ensuremath{\mathcal}D_2^{1/2} + 1.$$ Recall that our objective is to bound $$\ensuremath{\mathcal}I_4 = C_{\rm{dis}} e^{-\delta_{\rm{dis}}s} \sum_{i=1}^3 \sum_{\beta \in [3]^{K}} \left( \ensuremath{\mathcal}D_{1, \beta, i} - \ensuremath{\mathcal}D_{2, \beta, i} - \ensuremath{\mathcal}D_{3, \beta, i} \right),$$ and that $\ensuremath{\mathcal}D_2 = \sum_{\beta \in [3]^K} \sum_{i=1}^3 \ensuremath{\mathcal}D_{2, \beta, i}$. Using [\[eq:calamares\]](#eq:calamares){reference-type="eqref" reference="eq:calamares"} and that $\delta_0$ is allowed to depend on $r, \alpha, K$, we have that $$\ensuremath{\mathcal}I_4 + C_{\rm{dis}} e^{-\delta_{\rm{dis}}s} \ensuremath{\mathcal}D_2 \leq C_{\delta_0} e^{-\delta_{\rm{dis}}s} \left( \ensuremath{\mathcal}D_2^{1/2} + 1 \right),$$ where $C_{\delta_0}$ is sufficiently large depending on $\delta_0$ (and therefore on $r, \alpha, K$). Using Cauchy-Schwarz, we get $$\ensuremath{\mathcal}I_4 + C_{\rm{dis}} e^{-\delta_{\rm{dis}}s} \ensuremath{\mathcal}D_2 \leq C_{\delta_0} e^{-\delta_{\rm{dis}}s} + e^{-\delta_{\rm{dis}}s} \left( \ensuremath{\mathcal}D_2\frac{C_{\rm{dis}}}{2} + C_{\delta_0}^2 \frac{2 }{C_{\rm{dis}}} \right),$$ and therefore $$\ensuremath{\mathcal}I_4 \leq \frac{r^{1+\frac{1}{\alpha}}}{\alpha^{\frac{1}{\alpha}}} e^{-\delta_{\rm{dis}}s} \left( - \ensuremath{\mathcal}D_2 + \frac12 \ensuremath{\mathcal}D_2 \right) + e^{-\delta_{\rm{dis}}s} \left( C_{\delta_0} + C_{\delta_0}^2 \frac{2 }{C_{\rm{dis}}} \right) \lesssim_{\delta_0} e^{-\delta_{\rm{dis}}s},$$ where we used that $\ensuremath{\mathcal}D_2 > 0$. Since $s_0$ is sufficiently large depending on $\delta_0$, we conclude that $$\label{eq:I4}
\ensuremath{\mathcal}I_4 \leq e^{-\delta_{\rm{dis}} (s-s_0)}.$$
Plugging this into [\[eq:EE2\]](#eq:EE2){reference-type="eqref" reference="eq:EE2"}, we have $$\label{energyestimatewithweight}
\partial_s E_{K} \leq -\frac{K\min \{ \eta , \widetilde\eta \} }{2} E_{K} + O(E) + e^{-\delta_{\rm{dis}}(s-s_0)}.$$ Given that $K$ is sufficiently large depending on $\eta, \widetilde\eta$, when $E_{K}\geq \frac{1}{2}E$, we have $\partial_{s}E_{K}<0$ and we close the bootstrap assumption [\[higherestimate3\]](#higherestimate3){reference-type="eqref" reference="higherestimate3"}.
## Topological argument section for the unstable modes
In this section we control the unstable modes.
Recall that $(\varphi_{i,u},\varphi_{i,s})$, $1\leq i\leq N$, is a normalized basis of $V_{\rm{uns}}$. For any $(\widetilde{U}_{t}(\cdot, s),\widetilde{S}_{t}(\cdot,s))$, we denote $$k(s)=P_{\rm{uns}}(\widetilde{U}_{t}(\cdot,s),\widetilde{S}_{t}(\cdot,s))=\sum_{i=1}^{N}k_i(s)(\varphi_{i,u},\varphi_{i,s}).$$ Due to the initial data condition [\[initialdatatruncated\]](#initialdatatruncated){reference-type="eqref" reference="initialdatatruncated"}, together with $(\chi_2 \widetilde U_0', \chi_2 \widetilde S_0' ) \in V_{\rm{sta}}$ (by [\[eq:tilde_is_stable\]](#eq:tilde_is_stable){reference-type="eqref" reference="eq:tilde_is_stable"}), we have that $$k_i(0)=a_i.$$
From Proposition [Proposition 8](#prop:maxdissmooth){reference-type="ref" reference="prop:maxdissmooth"}, there exists a metric $B$ such that $$\langle \mathcal{L}k(s),k(s) \rangle_{B}\geq \frac{-6\delta_g}{10}\langle k(s),k(s) \rangle_{B}.$$ Since the unstable space $V_{\rm{uns}}$ is finite dimensional and $X\subset H^m (B(0, 3C_0))$, we have $$\label{normequivalence}
\|k(s)\|_{X}\lesssim_{m}\|k(s)\|_{B}\lesssim_{m}\|k(s)\|_{X},$$ where we denote by $\| a \|_X$ the norm $\left\| \sum a_i \psi_i \right\|_X$. Here $m$ shows up because of [\[spacexnorm\]](#spacexnorm){reference-type="eqref" reference="spacexnorm"}.
We will always restrict ourselves to values of $a_i$ such that $$\left\| a \right\|_B \leq \delta_1^{11/10}.$$ In particular, from [\[normequivalence\]](#normequivalence){reference-type="eqref" reference="normequivalence"}, we have $\| a \|_X \lesssim_m \delta_1^{11/10}$. Recalling that $\psi_i$ are compactly supported, that $\delta_1$ is sufficiently small depending on $m, E$ and that $\| \psi_i \|_X = 1$ (normalized eigenfunctions), it is clear that the required initial conditions for $\widetilde U_0, \widetilde S_0$ [\[initialdatacondition2\]](#initialdatacondition2){reference-type="eqref" reference="initialdatacondition2"}--[\[initialdatacondition\]](#initialdatacondition){reference-type="eqref" reference="initialdatacondition"} are satisfied for any such choice of $a_i$ as long as $$\begin{aligned}
\begin{split} \label{eq:conditions_for_tilde}
\|\widetilde{U}_0' \|_{L^{\infty}}, \|\widetilde{S}_0' \|_{L^{\infty}}\leq \delta_1, \qquad \| \widetilde U_0' \|_{H^K}, \| \widetilde S_0' \|_{H^K} \leq\frac{E}{4},
\qquad \widetilde{S}_0'+\widehat{X}\overline{S}_0\geq \frac{\delta_1}{2} \left\langle \frac{R}{R_0} \right\rangle^{1-r}, \\
|\nabla(\widetilde{U}_0'+\widehat{X} \overline{U})|+|\nabla(\widetilde{S}_0'+\widehat{X} \overline{S})|\lesssim \frac{1}{\langle |y| \rangle^r}.
\end{split} \end{aligned}$$
Let $$\ensuremath{\mathcal}R(s)=\{k(s)| \|k(s)\|_{X}\leq \delta_1 e^{-\frac{4}{3}\varepsilon(s-s_0)}\},$$ $$\widetilde{\ensuremath{\mathcal}R}(s)=\{k(s)| \|k(s)\|_{B}\leq \delta_1^{\frac{11}{10}}e^{-\frac{4}{3}\varepsilon(s-s_0)}\}.$$ From [\[normequivalence\]](#normequivalence){reference-type="eqref" reference="normequivalence"} and the choice of parameter [\[choiceofparameter\]](#choiceofparameter){reference-type="eqref" reference="choiceofparameter"}, we have $$\widetilde{\ensuremath{\mathcal}R}(s)\Subset \ensuremath{\mathcal}R(s).$$
We will always choose $\{ a_i \} = k(s_0)\in \widetilde{\ensuremath{\mathcal}R}(s_0)$. From Proposition [Proposition 23](#prop:bootstrap){reference-type="ref" reference="prop:bootstrap"}, and Lemmas [Lemma 28](#forcingestimate1){reference-type="ref" reference="forcingestimate1"}, [Lemma 29](#forcingestimate2){reference-type="ref" reference="forcingestimate2"}, when $k(s)$ does not leave $\ensuremath{\mathcal}R(s)$, the unstable mode is controlled and we have the estimate for the forcing: $$\label{forcingest}
\|\chi_2 \ensuremath{\mathcal}F\|_{X}\lesssim \delta_1^{\frac{6}{5}}e^{-\frac{3}{2}\varepsilon(s-s_0)}.$$ Next, we show the following outgoing property:
**Proposition 34**. *Suppose that $k(s)\in \widetilde{\ensuremath{\mathcal}R}(s)$ for all times $s\in[s_0,s_1]$ and that at time $s_1$ we have $k(s_1)\in \partial{\widetilde{\ensuremath{\mathcal}R}(s_1)},$ that is, $$\|k(s_1)\|_{B}=\delta_1^{\frac{11}{10}}e^{-\frac{4}{3}\varepsilon(s_1-s_0)}.$$ Then, $k(s)$ does not belong to $\widetilde{\ensuremath{\mathcal}R}(s)$ for $s$ close enough to $s_1$ from above.*
*Proof.* Recall that $(\widetilde{U}_{t},\widetilde{S}_{t})$ satisfy the equation [\[navierstokesperturb1trun\]](#navierstokesperturb1trun){reference-type="eqref" reference="navierstokesperturb1trun"}: $$\begin{aligned}
\partial_s \widetilde U_t &= \ensuremath{\mathcal}L_{u}(\widetilde{U}_t,\widetilde{S}_t)+\chi_2\ensuremath{\mathcal}F_{u}(\widetilde{U},\widetilde{S}),\\\nonumber
\partial_s \widetilde S_t &= \ensuremath{\mathcal}L_{s}(\widetilde{U}_t,\widetilde{S}_t)+\chi_2\ensuremath{\mathcal}F_{s}(\widetilde{U},\widetilde{S}),\end{aligned}$$ with initial value [\[initialdatatruncated\]](#initialdatatruncated){reference-type="eqref" reference="initialdatatruncated"}: $$\begin{aligned}
\begin{cases}
\widetilde{U}_{t,0}=\chi_2\widetilde{U}_0^{'}+\sum_{i=1}^{N}a_i
\varphi_{i,u}\\
\widetilde{S}_{t,0}=\chi_2\widetilde{S}_0^{'}+\sum_{i=1}^{N}a_i
\varphi_{i,s}.
\end{cases}\end{aligned}$$ Since $V_{\rm{uns}}$ is invariant with respect to the linear operator $\ensuremath{\mathcal}L$, we have $$\begin{aligned}
\begin{cases}
\partial_s k(s) &= \ensuremath{\mathcal}L k(s) + P_{\rm{uns}}(\chi_2 \ensuremath{\mathcal}F), \\
k(s_0) &= (\sum_{i=1}^{N}a_i
\varphi_{i,u},\sum_{i=1}^{N}a_i
\varphi_{i,s}),
\end{cases}\end{aligned}$$ From the forcing estimate [\[forcingest\]](#forcingest){reference-type="eqref" reference="forcingest"}, we have $$\begin{aligned}
&\|P_{\rm{uns}}(\chi_2 \ensuremath{\mathcal}F)\|_{B}\lesssim_{m}\|P_{\rm{uns}}(\chi_2 \ensuremath{\mathcal}F)\|_{X}\lesssim_{m} \|\chi_2 \ensuremath{\mathcal}F\|_{X}\lesssim_{m}\delta_{1}^{\frac{6}{5}}e^{-\frac{3}{2}\varepsilon(s_1-s_0)}.\end{aligned}$$ Then $$\begin{aligned}
\left\langle \frac{d}{ds}k(s),k(s) \right\rangle_{B}\bigg|_{s=s_1}\geq -\frac{6\delta_g}{10}\langle k(s_1),k(s_1) \rangle_{B}-C_m \delta_{1}^{\frac{6}{5}}e^{-\frac{3}{2}\varepsilon(s_1-s_0)}\|k(s_1)\|_{B}.\end{aligned}$$ Then, when $\|k(s_1)\|_{B}=\delta_1^{\frac{11}{10}}e^{-\frac{4}{3}\varepsilon(s_1-s_0)}$, we have $$\begin{aligned}
\left\langle \frac{d}{ds}k(s),k(s) \right\rangle_{B}\bigg|_{s=s_1}&\geq -\frac{6\delta_g}{10}\langle k(s_1),k(s_1) \rangle_{B}-C_m\delta_{1}^{\frac{6}{5}-\frac{11}{10}}e^{-\varepsilon(\frac{3}{2}-\frac{4}{3})(s_1-s_0)} \langle k(s_1),k(s_1) \rangle_{B}\\
&\geq \frac{-31}{50}\delta_g \langle k(s_1),k(s_1) \rangle_{B}.\end{aligned}$$ Since $\frac{31}{50}\delta_g< \frac{4}{3}\varepsilon=\frac{16}{25}\delta_g,$ when $s=s_1$ we have $$\frac{d}{ds}(\langle k(s),k(s)\rangle_{B}e^{\frac{8}{3}\varepsilon(s-s_0)})\geq \frac{-31}{25}\delta_g\langle k(s),k(s)\rangle_{B} e^{\frac{8}{3}\varepsilon(s-s_0)}+\frac{8}{3} \varepsilon\langle k(s),k(s) \rangle_{B}e^{\frac{8}{3}\varepsilon(s-s_0)}>0.$$
Hence $k(s)$ exits $\widetilde{\ensuremath{\mathcal}R}(s)$ at $s=s_1$. ◻
Next, we show the existence of at least one choice of $\{a_i\}$ such that $k(s)$ is in $\ensuremath{\mathcal}R(s)$ for all $s$. We borrow the result from [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Proposition 8.15] (the same proof works here).
**Proposition 35**. *There exist specific initial conditions $\{a_i\} \in \widetilde{\ensuremath{\mathcal}R} (s_0)$ such that $k(s_0)=\{a_i\}$ and $k(s)\in \ensuremath{\mathcal}R(s)$ for all $s>s_0$.*
Thus, for any choice of $\widetilde U_0'$, $\widetilde S_0'$ satisfying our initial data conditions [\[initialdatacondition2\]](#initialdatacondition2){reference-type="eqref" reference="initialdatacondition2"}--[\[initialdatacondition\]](#initialdatacondition){reference-type="eqref" reference="initialdatacondition"}, we can take $\{a_i\}$ as in Proposition [Proposition 35](#prop:ai){reference-type="ref" reference="prop:ai"}. Then, recalling equation [\[eq:tildeplusunstable\]](#eq:tildeplusunstable){reference-type="eqref" reference="eq:tildeplusunstable"}, the initial data for the perturbation is given by: $$\widetilde U_0 = \widetilde U_0' + \sum_{i=1}^N a_i \psi_{i, u}, \qquad \mbox{ and } \qquad \widetilde S_0 = \widetilde S_0' + \sum_{i=1}^N a_i \psi_{i, s}.$$ For such initial data we are under the hypothesis of Proposition [Proposition 23](#prop:bootstrap){reference-type="ref" reference="prop:bootstrap"} for all $s_1 \geq s_0$. Therefore, from [\[lowerestimate\]](#lowerestimate){reference-type="eqref" reference="lowerestimate"} we obtain the exponential decay bounds $$| \widetilde U |, | \widetilde S| \leq \frac{\delta_0}{C_2} e^{-\varepsilon(s-s_0)}.$$
Let us also recall from [\[eq:periodic_perturbation\]](#eq:periodic_perturbation){reference-type="eqref" reference="eq:periodic_perturbation"} that $U = \widetilde U (y, s) + X(ye^{-s}) \overline U (y)$ and $S = \widetilde S (y, s) + X(ye^{-s}) \overline S (y)$. Using equations [\[eq:SS_coordinates1\]](#eq:SS_coordinates1){reference-type="eqref" reference="eq:SS_coordinates1"}--[\[eq:SS_coordinates2\]](#eq:SS_coordinates2){reference-type="eqref" reference="eq:SS_coordinates2"} to change back from self-similar variables to original physical variables, we obtain that $$\begin{aligned}
u(x, t) &= \frac{X(x)}{r}\cdot \frac{1}{(T-t)^{1-\frac{1}{r}}} \overline U \left( \frac{x}{(T-t)^{1/r}} \right) + \frac{1}{(T-t)^{1-\frac{1}{r}-\frac{\varepsilon}{r}}} U_{\rm{err}} \left( \frac{x}{(T-t)^{1/r}}, t \right), \\
\sigma (x, t) &= \frac{X(x)}{r}\cdot \frac{1}{(T-t)^{1-\frac{1}{r}}} \overline S \left( \frac{x}{(T-t)^{1/r}} \right) + \frac{1}{(T-t)^{1-\frac{1}{r}-\frac{\varepsilon}{r}}} S_{\rm{err}} \left( \frac{x}{(T-t)^{1/r}}, t \right),\end{aligned}$$ where $$| U_{\rm{err}} |, | S_{\rm{err}} | \leq \frac{r \delta_0}{C_2}, \qquad \forall x \in \mathbb T_L^3, \; \mbox{ and } \; \forall t \in [0, T)$$ Finally, recalling that $\sigma = \frac{1}{\alpha} \rho^\alpha$, we obtain Theorem [Theorem 2](#th:periodic){reference-type="ref" reference="th:periodic"}.
**Remark 36**. The only conditions we have imposed on $(\widetilde U_0', \widetilde S_0')$ are [\[eq:conditions_for_tilde\]](#eq:conditions_for_tilde){reference-type="eqref" reference="eq:conditions_for_tilde"} and [\[eq:tilde_is_stable\]](#eq:tilde_is_stable){reference-type="eqref" reference="eq:tilde_is_stable"}. Conditions [\[eq:conditions_for_tilde\]](#eq:conditions_for_tilde){reference-type="eqref" reference="eq:conditions_for_tilde"} are open, so we can consider some Banach space $\ensuremath{\mathcal}B$ including all the norms considered in [\[eq:conditions_for_tilde\]](#eq:conditions_for_tilde){reference-type="eqref" reference="eq:conditions_for_tilde"} so that [\[eq:conditions_for_tilde\]](#eq:conditions_for_tilde){reference-type="eqref" reference="eq:conditions_for_tilde"} is satisfied in a sufficiently small ball of $\ensuremath{\mathcal}B$ centered at the origin that we denote by $B_{\ensuremath{\mathcal}B}$.
On the other hand, the restriction from [\[eq:tilde_is_stable\]](#eq:tilde_is_stable){reference-type="eqref" reference="eq:tilde_is_stable"}, that is, $P_{\rm{uns}} (\widetilde U_0', \widetilde S_0') = 0$, is a finite dimensional closed restriction. Since $V_{\rm{uns}}$ is the range of $P_{\rm{uns}}$, we know that the equivalence class of $(\widetilde U_0', \widetilde S_0')$ in the quotient space $\ensuremath{\mathcal}B / V_{\rm{uns}}$ always contains a representative that satisfies [\[eq:tilde_is_stable\]](#eq:tilde_is_stable){reference-type="eqref" reference="eq:tilde_is_stable"}.
Moreover, note that from the choice of initial data $$\left( \widetilde U_0, \widetilde S_0 \right) = (\widetilde U_0', \widetilde S_0') + \sum_{i=1}^N a_i \psi_i,$$ that $(\widetilde U_0, \widetilde S_0)$ and $(\widetilde U_0', \widetilde S_0')$ belong to the same equivalence class of $\ensuremath{\mathcal}B / V_{\rm{uns}}$.
Therefore, the set of $(\widetilde U_0, \widetilde S_0)\in \ensuremath{\mathcal}B$ for which we show singularity formation covers an open set of the quotient space $\ensuremath{\mathcal}B / V_{\rm{uns}}$. Since $V_{\rm{uns}}$ is finite dimensional, we refer to the set where blow-up occurs as a finite codimension set of initial data.
# Appendix
**Lemma 37** (Angular repulsivity). *We have that the profiles $(\overline U, \overline S)$ from [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible] or [@Merle-Raphael-Rodnianski-Szeftel:implosion-i] satisfy: $$1 + \frac{\overline U_R}{R} - \alpha | \partial_R \overline S | > \eta,$$ for some $\eta > 0$ sufficiently small that is allowed to depend on $r$ or $\gamma$.*
*Proof.* Before showing the Lemma, let us introduce some notation and properties of the known self-similar profiles (these properties apply to both profiles from [@Merle-Raphael-Rodnianski-Szeftel:implosion-i] and [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible]).
We will adopt the notation of [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible] and denote $U = \frac{\overline U}{R}$, $S = \frac{\overline S}{R}$. We will also regularly use $W, Z$ coordinates, which are just given by $W = U + S$ and $Z = U-S$. We will also reparametrize by $\xi = \log(R)$. The reason to introduce that extra factor of $\frac{1}{R}$ and the $\xi$ reparametrization is that now the ODEs satisfied by the self-similar profiles in these new coordinates are autonomous: $$\partial_\xi W (\xi) = \frac{N_W (W, Z)}{D_W (W, Z)}, \qquad \mbox{ and } \quad \partial_\xi Z = \frac{N_Z (W, Z)}{D_Z (W, Z)}$$ where $D_W, D_Z$ are first-degree polynomials given by $$D_W(W, Z) = 1 + \frac{1 + \gamma}{4} W + \frac{3-\gamma}{4}Z, \qquad \mbox{ and }\qquad D_Z(W,Z) = 1 + \frac{3-\gamma}{4} W + \frac{1+\gamma}{4} Z,$$ and $N_W, N_Z$ are second degree polynomials given by $$\begin{aligned}
N_W(W, Z) &= -rW +\frac{\gamma-1}{4} Z^2 + \frac{\gamma-3}{4} WZ-\frac{\gamma W^2}{2}, \\
N_Z(W, Z) &= -r Z +\frac{\gamma-1}{4} W^2+ \frac{\gamma-3}{4} W Z-\frac{\gamma Z^2}{2}.\end{aligned}$$
Moreover, the profiles in the family considered pass through the point $P_s$, which is the rightmost of the two solutions to the system $N_Z(W, Z) = D_Z(W, Z) = 0$. We define $\overline P_s$ to be the other solution to the system $N_Z(W, Z) = D_Z(W, Z) = 0$. It is also assumed that the profiles pass through $P_s$ at $R=1$ (otherwise, they can be rescaled to do so). The formulas for $P_s$ and $\overline P_s$ are given by $$\begin{aligned}
P_s = (W_0, Z_0) &= \Big( \frac{\gamma^2 r+(\gamma+1) \mathcal{R}_1 -3 \gamma^2-2 \gamma r+10 \gamma-3 r-3}{4 (\gamma-1)^2}, \\
& \qquad \frac{\gamma^2 r+(\gamma-3) \mathcal{R}_1 -3 \gamma^2-6 \gamma r+6 \gamma+9 r-7}{4 (\gamma-1)^2} \Big)\,, \\
\overline P_s = (\overline W_0, \overline Z_0) &= \Big( \frac{\gamma^2 r-(\gamma+1) \mathcal{R}_1-3 \gamma^2-2 \gamma r+10 \gamma-3 r-3}{4 (\gamma-1)^2}, \\
& \qquad \frac{\gamma^2 r+(3-\gamma) \mathcal{R}_1 -3 \gamma^2-6 \gamma r+6 \gamma+9 r-7}{4 (\gamma-1)^2} \Big)\,,\end{aligned}$$ where $$\label{eq:R1}
\mathcal{R}_1 = \sqrt{\gamma^2 (r-3)^2-2 \gamma(3r^2-6r+7)+(9r^2-14r+9)}\,.$$
One can now apply the formula $\partial_\xi W = \frac{N_W}{D_W}$ at the point $P_s$ and get the formula for $W_1$, the first-order Taylor coefficient of the solution at $P_s$. The equation $\partial_\xi Z = \frac{N_Z}{D_Z}$ is, however, singular at $P_s$ (since both $N_Z$ and $D_Z$ vanish), but taking a derivative one can see that $Z_1 \nabla D_Z (P_s) \cdot (W_1, Z_1) = \nabla N_Z(P_s) \cdot (W_1, Z_1)$. Solving that second-degree equation, one gets two possible solutions for $Z_1$, and all the profiles in the family we are considering correspond to the following choice: $$\begin{aligned}
W_1 &= \frac{\gamma\left(-3 \left(\mathcal{R}_1+6\right)-3 \gamma(r-3)+2 r\right)+\mathcal{R}_1+5 r+5}{4 (\gamma-1)^2} \,,\\
Z_1 &= \frac{-\left(3 \gamma ^3-7 \gamma ^2+\gamma +11\right) r+\gamma (\gamma (9 \gamma -3 \ensuremath{\mathcal}R_1-25)+10 \ensuremath{\mathcal}R_1-4(\gamma-1) \ensuremath{\mathcal}R_2+27)-3 \ensuremath{\mathcal}R_1+4 (\gamma-1) \ensuremath{\mathcal}R_2-3}{4 (\gamma -1)^2 (\gamma +1)}\,,\end{aligned}$$ where $$\begin{aligned}
\mathcal{R}_2 &=\frac{1}{\gamma-1}\bigg(\gamma ((76-27 \gamma ) \gamma -71)-\left((3 \gamma -5) ((\gamma -5) \gamma +2) r^2\right)+(\gamma (\gamma (18 \gamma -52)+50)-8)
r\\
&\qquad+\ensuremath{\mathcal}R_1 (9 (\gamma -2) \gamma +((2-3 \gamma ) \gamma +5) r+5)+18\bigg)^{\frac12}\,.\end{aligned}$$
Another point of interest of the phase portrait is the intersection of $N_Z = 0$ and $N_W = 0$. Since both are second-degree polynomials, there are at most four solutions to $N_Z = N_W = 0$. Two of them ($(0, 0)$ and $(-r, -r)$) lie on the diagonal $W = Z$, and of the other two, only one lies in the positive density halfplane $W>Z$. That one is given by: $$P_\star = \left( \frac{2 \left(\sqrt{3}-1\right) r}{3 \gamma-1}, -\frac{2 \left(1+\sqrt{3}\right) r}{3 \gamma-1} \right).$$
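For the reader's convenience, the off-diagonal intersection points can be located by subtracting the two polynomials: $$N_W(W,Z) - N_Z(W,Z) = -(W-Z)\left( r + \frac{3\gamma - 1}{4}\,(W+Z) \right),$$ so any solution of $N_W = N_Z = 0$ off the diagonal satisfies $W + Z = -\frac{4r}{3\gamma - 1}$, a relation that $P_\star$ indeed satisfies.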
Moreover, the profiles considered satisfy that: $$\label{eq:DWDZ}
D_W > 0, D_Z < 0, \quad \forall \xi < 0, \qquad \mbox{ and } \qquad D_W > 0, D_Z > 0 \quad \forall \xi > 0.$$ We also sometimes call these regions 'right of the phase portrait' (region where $D_W > 0, D_Z < 0$, corresponding to $R<1$) and 'left of the phase portrait' (region where $D_W, D_Z > 0$, corresponding to $R>1$).
For the convenience of the reader, we refer to Figure [2](#fig:phase_portrait){reference-type="ref" reference="fig:phase_portrait"}, where we have plotted the phase portrait of the profile $(W, Z)$, together with all the definitions that we have introduced.
![Phase portrait of the profile for $\gamma = 5/3$, $r=1.15$](Figures/graphic_mathematica.png){#fig:phase_portrait width="50%"}
Lastly, let us recall from [\[eq:range_r\]](#eq:range_r){reference-type="eqref" reference="eq:range_r"} that for the profiles considered (including all the profiles of [@Merle-Raphael-Rodnianski-Szeftel:implosion-i] and [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible]), we have $1<r<r^\ast$, where $$\label{eq:defr}
r^\ast(\gamma) = \begin{cases}
r^\ast_{<5/3} = 1 + \frac{2}{\left( 1 + \sqrt{ \frac{2}{\gamma- 1} }\right)^2} \quad \mbox{ for } 1< \gamma< 5/3, \\
r_{\geq 5/3}^\ast = \frac{3\gamma- 1}{2 + \sqrt{3} (\gamma- 1)} \quad \mbox{ for } \gamma\geq 5/3.
\end{cases}$$ We have defined here the quantities $r^\ast_{<5/3}$ and $r_{\geq 5/3}^\ast$ to be given by their respective expressions, so they are defined for all $\gamma > 1$ (however, we just have $r^\ast = r_{<5/3}^\ast$ for $\gamma \leq 5/3$ and $r^\ast = r^\ast_{\geq 5/3}$ for $\gamma \geq 5/3$). Let us also recall that [\[eq:defr\]](#eq:defr){reference-type="eqref" reference="eq:defr"} implies the following inequality (that can be found in [\[eq:rough_range_r\]](#eq:rough_range_r){reference-type="eqref" reference="eq:rough_range_r"}): $$\label{eq:danglars}
1 < r < 2 - \frac{1}{\gamma} = \frac{2\gamma - 1}{\gamma}.$$ Inequality [\[eq:danglars\]](#eq:danglars){reference-type="eqref" reference="eq:danglars"} is looser than [\[eq:defr\]](#eq:defr){reference-type="eqref" reference="eq:defr"}, but it will be very useful, since it is simpler than [\[eq:defr\]](#eq:defr){reference-type="eqref" reference="eq:defr"} and it will suffice in many cases where we need to upper bound $r$.
**Part I. $R<1$: Right of the phase portrait.**
Let us start showing Lemma [Lemma 37](#lemma:angular_repulsivity){reference-type="ref" reference="lemma:angular_repulsivity"} for $\xi < 0$, that is, $R < 1$. In this region, using the radial repulsivity [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"}, it suffices to show $$\frac{\overline U}{R} - \partial_R \overline U > 0.$$ In our $W, Z$ variables, this corresponds to showing $$\frac{N_W}{D_W} + \frac{N_Z}{D_Z} < 0,$$ or equivalently $$\label{eq:part1}
N_W D_Z + N_Z D_W > 0.$$
We consider the branch of $N_W D_Z + N_Z D_W = 0$ that lies in our region $D_W > 0, D_Z < 0$, which actually connects $P_0$ to $P_s$ (recall $P_0$ is a point at infinity). Using variables $U = \frac{W+Z}{2}, S = \frac{W-Z}{2}$ we have $$N_W D_Z + N_Z D_W = \frac{\gamma - 1}{2} S^2 (3 (\gamma - 1) U + 2 r - 2) - 2 U (U+1) (r+U)$$ and the curve solving $N_W D_Z + N_Z D_W = 0$ inside the region $D_W > 0, D_Z < 0$ can be parametrized as $$b_S = \frac{2}{\gamma - 1} \sqrt{ b_U+1} \sqrt{r+ b_U} \sqrt{ \frac{ b_U }{ 3 b_U+2(r-1)/(\gamma - 1)} }, \quad \mbox{ and } \quad b_U \in \left( U(P_s), \frac{-2(r-1)}{3(\gamma - 1)} \right).$$ where $b_U$ goes from $U (P_s)$ to $\frac{-2(r-1)}{3(\gamma - 1)}$ (at $P_0$) and $(b_U, b_S)$ describes our curve in $(U, S)$ coordinates. We have that $-1 < U(P_s) < \frac{-2(r-1)}{3(\gamma - 1)} < 0$ by Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"}. Thus, it can be seen that all three square roots have positive quantities inside (the first two due to $b_U > -1$, the fraction because both $b_U$ and $\frac{2(r-1)}{3(\gamma - 1)} + b_U$ are negative). We define $b$ to be the barrier defined by $b_U, b_S$ in $W, Z$ coordinates.
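For completeness, the formula for $b_S$ follows directly from the $(U, S)$ expression above: setting $N_W D_Z + N_Z D_W = 0$ and solving for $S^2$ gives $$S^2 = \frac{4\, U (U+1)(r+U)}{(\gamma - 1)\left( 3(\gamma-1) U + 2(r-1) \right)} = \frac{4}{(\gamma-1)^2} \cdot \frac{U(U+1)(r+U)}{3U + 2(r-1)/(\gamma - 1)},$$ which is exactly $b_S^2$ evaluated at $U = b_U$.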
Now the approach is the following. We show [\[eq:part1\]](#eq:part1){reference-type="eqref" reference="eq:part1"} in small neighbourhoods of $P_s$ and $P_0$ using a Taylor expansion around those points. Then, if [\[eq:part1\]](#eq:part1){reference-type="eqref" reference="eq:part1"} were to fail at some intermediate point $\xi \in (-\infty, 0)$, the trajectory would need to cross $b$ in both directions. However, we will also show that the vector field $(N_W/D_W, N_Z/D_Z)$ along $b$ always points to the same side of $b$, making that impossible.\
**Part I.1. Constant sign along $b$**\
We start by showing that the vector field $(N_W/D_W, N_Z/D_Z)$ along $b$ always points to the same side of $b$. Equivalently, we can show that $(N_W D_Z, N_Z D_W)$ along $b$ always points to the same side of $b$. Note that $N_W D_Z = -N_Z D_W$ along $b$ (since $b$ solves $N_W D_Z + N_Z D_W = 0$) and that the vector field $(N_W D_Z, N_Z D_W)$ does not vanish along $b$ except at the endpoint $P_s$. Thus, it suffices to show that $(-1, 1)$ always points to the same side of $b = (b_W, b_Z)$.
Given that $b$ is defined implicitly by $N_W D_Z + N_Z D_W = 0$, a normal vector at any given point is given by $\nabla ( N_W D_Z + N_Z D_W )$. Thus, we need to show that $(-1, 1) \cdot \nabla ( N_W D_Z + N_Z D_W )$ has constant sign along $b$. We have $$(-1, 1) \cdot \nabla ( N_W D_Z + N_Z D_W ) = \frac{3(\gamma - 1)^2}{4} ( W- Z) \left( W + Z+ \frac{4 (r-1)}{3 (\gamma - 1)} \right).$$ Since $\gamma > 1$ and $W - Z = 2 S> 0$, we just need to show that $W + Z+ \frac{4 (r-1)}{3 (\gamma - 1)}$ has a constant sign along $b$. Recalling that $U = \frac{W+Z}{2}$, this is equivalent to showing that $b_U + \frac{2 (r-1)}{3 (\gamma - 1)}$ has constant sign. Since $b_U$ ranges from $U(P_s)$ to $\frac{-2(r-1)}{3(\gamma - 1)}$, this is true.\
**Part I.2 Taylor analysis at $P_s$ and $P_0$.**\
We start by checking [\[eq:part1\]](#eq:part1){reference-type="eqref" reference="eq:part1"} at $P_s$. We need to show $W_1 + Z_1 < 0$. The inequality $W_1 + Z_1 < 0$ is deferred to Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"}.
With respect to the analysis at $P_0$, from [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Proposition 2.5], we have that the solution near $R=0$ has the form: $$W = \frac{w_0}{R} + \sum_{i=1}^\infty w_i R^{i-1}, \qquad \mbox { and } \qquad Z = \frac{-w_0}{R} + \sum_{i=1}^\infty w_i (-R)^{i-1},$$ where $w_1 = \frac{-2(r-1)}{3(\gamma - 1)}$ and $w_2, w_3, \ldots$ can be expressed in terms of $w_0$ (changing $w_0$ corresponds to a dilation of the profile, so it leaves $w_1$ unchanged but modifies the rest of the coefficients).
Plugging this into expression [\[eq:part1\]](#eq:part1){reference-type="eqref" reference="eq:part1"} and using $w_1 = \frac{-2(r-1)}{3(\gamma - 1)}$, we get $$\frac{N_W}{D_W} + \frac{N_Z}{D_Z} = \left( -\frac{16 (r-1) (3 \gamma -2 r-1) ((3 \gamma -5) r+2)}{27 w_0^2 (\gamma-1)^5}-6 w_3\right) R^2 + O(R^4).$$
Developing the equation $\partial_\xi W = \frac{N_W}{D_W}$, we get $$w_3 = -\frac{8 (r-1) (3 \gamma-2 r-1) ((3 \gamma-5) r+2)}{135 w_0^2 (\gamma-1)^5},$$ and therefore $$\frac{N_W}{D_W} + \frac{N_Z}{D_Z} = -\frac{32 (r-1) (3 \gamma -2 r-1) ((3 \gamma-5) r+2)}{135 w_0^2 (\gamma-1)^5} R^2 + O(R^4).$$ Since $\gamma > 1$ and $\gamma > r > 1$ (from [\[eq:rough_range_r\]](#eq:rough_range_r){reference-type="eqref" reference="eq:rough_range_r"}), we just need to show that $(3 \gamma-5) r+2 > 0$. If $\gamma > 5/3$ it is obvious, otherwise using also [\[eq:rough_range_r\]](#eq:rough_range_r){reference-type="eqref" reference="eq:rough_range_r"}, we get $$\label{eq:villefort}
(3 \gamma-5) r+2 > (3\gamma - 5) \left(2 - \frac{1}{\gamma} \right) +2 = 6 \gamma - 3 - 10 + \frac{5}{\gamma} + 2 = \frac{(6\gamma - 5)(\gamma - 1)}{\gamma} > 0.$$
**Part II: $R>1$. Left of the phase portrait**\
Let us recall that from the proof of [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible Proposition 3.1] we know that for $\xi > 0$, the solution lies to the left of $P_s$. In particular, it is inside the triangle delimited by $D_Z = 0$, $W=Z$ and $W=W_0$.
**Part II.1. Showing that the solution lies in $N_W < 0$.** Let us show moreover that the solution has to lie in the region where $N_W < 0$. It is easy to check that $N_W = 0$ describes a hyperbola in $(W, Z)$ coordinates (its quadratic part has discriminant $\frac{9-14\gamma+9\gamma^2}{16} > 0$; a short check is recorded right after the list of properties below). One can solve that second-degree equation and get that the right branch of the hyperbola is parametrized in terms of $Z$ as $$p_W(Z) = \frac{\sqrt{9 \gamma^2 Z^2-8 \gamma r Z-14 \gamma Z^2+16 r^2+24 r Z+9 Z^2}+\gamma Z-4 r-3 Z}{4 \gamma}.$$ It satisfies the following properties:
- $N_W(W, Z) < 0$ for every $W>p_W(Z)$. That is, $N_W < 0$ for points to the right of the branch.
- The minimum of $p_W(Z)$ is zero and it is achieved at $Z = 0$. That is, the leftmost point of the branch is $(0, 0)$.
- $p_W(Z)$ is decreasing for $Z < 0$ and increasing for $Z > 0$.
- $(p_W(Z), Z)$ lies in the half-plane $S > 0$ (that is, $p_W(Z) - Z > 0$) whenever $Z < 0$.
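For completeness, here is the short check of the discriminant mentioned before the list: the quadratic part of $N_W$ has coefficients $-\frac{\gamma}{2}$ (in $W^2$), $\frac{\gamma-3}{4}$ (in $WZ$) and $\frac{\gamma-1}{4}$ (in $Z^2$), so its discriminant equals $$\left( \frac{\gamma - 3}{4} \right)^2 - 4 \left( -\frac{\gamma}{2} \right) \frac{\gamma-1}{4} = \frac{(\gamma-3)^2 + 8\gamma(\gamma - 1)}{16} = \frac{9\gamma^2 - 14\gamma + 9}{16},$$ which is positive for every $\gamma$ since $14^2 - 4\cdot 9 \cdot 9 < 0$; hence $N_W = 0$ is indeed a hyperbola.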
Let $P_i = (W_i, Z_i)$ be the point of intersection of $N_W = 0$ with $D_Z = 0$. Our objective is to show that the solution cannot traverse the branch $(p_W(Z), Z)$ for $Z \in (Z_i, 0)$. Since the solution remains to the right of $D_Z = 0$ and of $W=Z$, this will also show that the solution is to the right of $(p_W(Z), Z)$ for $Z < Z_i$ and $Z > 0$, respectively. This situation can be visualized in Figure [3](#fig:part21){reference-type="ref" reference="fig:part21"}.
![Phase portrait to the left of $P_s$ for $\gamma = 5/3$ and $r = 1.05$. In green, the curve $(p_W(Z), Z)$ (that is, $N_W = 0$). The black curve represents $N_Z = 0$ and the red line represents $D_Z = 0$.](Figures/PartII1.png){#fig:part21 width="50%"}
Note that the vector field $\left(\frac{N_W}{D_W}, \frac{N_Z}{D_Z}\right)$ over $N_W = 0$ is only vertical, and its sign is dictated by $N_Z$ (recall $D_Z > 0$). If $N_Z > 0$, then the field points upwards, and since $p_W(Z)$ is decreasing (third property) this means that solutions cannot go from the right of $(p_W(Z), Z)$ towards its left part.
The only solution to $N_W = N_Z = 0$ in the open halfplane $S > 0$ is given by $$P_\ast = (W_\ast, Z_\ast) = \left( \frac{2 \left(\sqrt{3}-1\right) r}{3 \gamma-1},-\frac{2 \left(1+\sqrt{3}\right) r}{3 \gamma-1}\right),$$ which is precisely the point $P_\star$ introduced above. The sign of $N_Z(p_W(Z), Z)$ is positive for $Z \in (Z_\ast, 0)$. It is clear that the sign is constant, since $p_W(Z)$ does not pass through any other solution of $N_W = N_Z = 0$ from $P_\ast$ to $(0, 0)$ (recall that the other two solutions, $(-r, -r)$ and the reflection of $P_\ast$ across the diagonal $W=Z$, do not lie in the open halfplane $S>0$). Moreover, that sign is positive because a Taylor analysis shows $N_Z (p_W(Z),Z) = -rZ + O(Z^2)$, showing that the sign is positive for some $Z < 0$.
Since $N_Z(p_W(Z), Z) > 0$ for $Z \in (Z_\ast, 0)$, we know that the solution cannot traverse $(p_W(Z), Z)$ from right to left whenever $Z \in (Z_\ast, 0)$. It is possible that $Z_\ast \leq Z_i$ (that is, that $(W_\ast, Z_\ast)$ lies in the region $D_Z \leq 0$), and in that case we are done. Otherwise, we have that $Z_i < Z_\ast$ (as in Figure [3](#fig:part21){reference-type="ref" reference="fig:part21"}), and we still need to treat the region $Z \in (Z_i, Z_\ast)$, which is inside the eye-shaped area bounded by $D_Z = 0$ and $N_Z = 0$ (red line and black arc in Figure [3](#fig:part21){reference-type="ref" reference="fig:part21"}).
Let us suppose that the solution does traverse $(p_W(Z), Z)$ for some $Z \in (Z_i, Z_\ast)$. Let us call $\Delta$ the (curved) triangle determined by $N_Z < 0, N_W > 0, D_Z > 0$ (in Figure [3](#fig:part21){reference-type="ref" reference="fig:part21"}, this is the triangle located to the left of $P_\ast$ and $P_i$, determined by the green, red and black lines). After traversing $(p_W(Z), Z)$, the solution lies inside $\Delta$. We will derive a contradiction by showing that the solution cannot exit through any of the sides of $\Delta$. It cannot exit through the side $N_W = 0$, because there $\frac{N_Z}{D_Z} < 0$ and therefore the field points inward with respect to $\Delta$. It cannot exit through $D_Z = 0$, because we know our solution satisfies $D_Z > 0$ from [\[eq:DWDZ\]](#eq:DWDZ){reference-type="eqref" reference="eq:DWDZ"}. Moreover, in $\Delta$, $W$ is increasing (since $N_W/D_W > 0$), so the solution inside $\Delta$ is confined to values of $W$ bigger than the value of $W$ at which it entered $\Delta$. This makes it impossible to exit through $N_Z = 0$, since that side is located to the left of $N_W = 0$. This concludes the proof that the solution cannot traverse $(p_W(Z), Z)$, and therefore $N_W < 0$.
Once equipped with the knowledge that the trajectory lies in the region $N_W < 0$, we divide the proof into two parts, showing both $$1 + \frac{\overline U_R}{R} + \alpha \partial_R \overline S > 0, \quad \mbox{ and } \quad
1 + \frac{\overline U_R}{R} - \alpha \partial_R \overline S > 0.$$ We express both conditions in one equation as $$\label{eq:part2_1}
1 + \frac{\overline U_R}{R} \pm \alpha \partial_R \overline S > 0,$$ meaning that the inequality should hold for both choices of the sign. Going to $(U, S)$ variables, equation [\[eq:part2_1\]](#eq:part2_1){reference-type="eqref" reference="eq:part2_1"} can be written as $$\left( 1 + \frac{ W + Z}{2}
\pm \alpha \frac{W - Z}{2} \right) \pm \frac{\alpha}{2} \left( \frac{N_W}{D_W} - \frac{N_Z}{D_Z} \right) > 0$$ Given that the left parenthesis is exactly equal to $D_W$ or $D_Z$ (depending on the sign), we just need to show $$D_W + \frac{\alpha}{2} \frac{N_W}{D_W} - \frac{\alpha}{2} \frac{N_Z}{D_Z} > 0 \qquad \mbox{ and } \qquad D_Z + \frac{\alpha}{2} \frac{N_Z}{D_Z} - \frac{\alpha}{2} \frac{N_W}{D_W} > 0.$$ Since $D_W, D_Z > 0$, we can multiply by $D_W D_Z > 0$, and it suffices to show $$\label{part22}
\Xi_1 := D_W^2 D_Z + \frac{\alpha}{2} N_W D_Z - \frac{\alpha}{2} N_Z D_W > 0;$$ and $$\label{part12}
D_Z^2 D_W + \frac{\alpha}{2} N_Z D_W - \frac{\alpha}{2} N_W D_Z > 0.$$
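For the reader's convenience, the identification of the left parenthesis with $D_W$ (respectively $D_Z$) used above is the following one-line check, recalling $\alpha = \frac{\gamma - 1}{2}$: $$1 + \frac{W+Z}{2} \pm \alpha\, \frac{W - Z}{2} = 1 + \left( \frac{1}{2} \pm \frac{\gamma-1}{4} \right) W + \left( \frac{1}{2} \mp \frac{\gamma-1}{4} \right) Z = \begin{cases} D_W & \text{for the $+$ sign}, \\ D_Z & \text{for the $-$ sign}. \end{cases}$$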
**Part II.2 Showing [\[part22\]](#part22){reference-type="eqref" reference="part22"}**
First, we express $\Xi_1$ in $(U, S)$ coordinates, obtaining $$\Xi_1 = (U+1)^3 + \frac{S}{4} (\gamma-1) \left(r ((\gamma-3) U-2)-2 (\gamma-1) U^2+(5-3 \gamma) U+2\right)-\frac{1}{4} (\gamma-1)^2 (U+1) S^2.$$
Thus, $\Xi_1$ is a second-degree polynomial in terms of $S$. Moreover, it is clear that the discriminant is positive, since the independent term and the second-order term have opposite signs. Let $S_-(U), S_+(U)$ be the two real roots of the second-degree polynomial $\Xi_1$. Since $U = \frac{W+Z}{2} = \frac{D_Z + D_W}{2} - 1 > -1$, the principal coefficient of $\Xi_1$ (as a polynomial in $S$) is negative and therefore $\Xi_1$ is positive whenever $S_- < S < S_+$. Since the independent term is positive, $S_- < 0$, and we just need to check $S < S_+(U)$ due to the constraint $S > 0$.
Let us recall that the solution lies in the triangle $D_Z > 0$, $W < W_0$, $W-Z > 0$ (red triangle in Figure [4](#fig:PartII2){reference-type="ref" reference="fig:PartII2"}). We will show that the solution satisfies the extra condition $U > U(\overline P_s)$ (this corresponds to being to the upper-right part of the green segment in Figure [4](#fig:PartII2){reference-type="ref" reference="fig:PartII2"}). We defer the proof of $U > U(\overline P_s)$ to the end of Part II.2 and let us for now assume that the solution lies in the quadrilateral determined by $D_Z > 0$, $W < W_0$, $W-Z > 0$, $U > U(\overline P_s)$. Let us denote that quadrilateral by $Q$ (note that the inequalities are strict, so $Q$ is an open set). We will show that the boundary of $Q$ satisfies $\Xi_1 \geq 0$ (this corresponds to being in the upper-left side of the brown curve in Figure [4](#fig:PartII2){reference-type="ref" reference="fig:PartII2"}). This means that $S \leq S_+(U)$ for every point of $\partial Q$, implying that $S < S_+(U)$ on $Q$, and concluding the proof of [\[part22\]](#part22){reference-type="eqref" reference="part22"} (assuming the condition $U > U(\overline P_s)$ that we will show later).
Thus, we proceed to show that $\Xi_1 \geq 0$ on $\partial Q$. Let us note that we will be always working on the half-plane $S > 0$, so we will alternate between the equivalent conditions $S \leq S_+(U)$ and $\Xi_1 \geq 0$, depending on which one is more convenient for each specific side of our quadrilateral.
![Phase portrait of the profile for $\gamma = 5/3$, $r=1.34$. The red triangle corresponds to the triangle where $D_Z > 0$, $W < W_0$, $W-Z > 0$. The black curve corresponds to $N_Z = 0$ and the brown curve to $\Xi_1 = 0$. The green line represents $U = U(\overline P_s)$. The quadrilateral $Q$ corresponds to the area of the picture that is located to the upper-right part of the green segment. Thus, it has three red sides and one green side, and $\overline P_s$, $P_s$ are two of its vertices. ](Figures/PartII2.png){#fig:PartII2 width="50%"}
Clearly, the side $W=Z$ satisfies $S < S_+(U)$, since $S_+(U) > 0$ and $S = 0$ on that line. Let us now proceed with the side $W = W_0$ (right side of the red triangle in Figure [4](#fig:PartII2){reference-type="ref" reference="fig:PartII2"}). Since $\Xi_1$ is a third-degree polynomial in $Z$ vanishing at $P_s$ ($N_Z(P_s) = D_Z(P_s) = 0$) it suffices to show that $\partial_Z^i \Xi_1 > 0$ for $i \in \{ 1, 2, 3 \}$. That would show $\Xi_1$ is positive on all the vertical halfline of $W=W_0$ above $P_s$, including the side of our region. We have that $$\begin{aligned}
\partial_Z \Xi_1 |_{P_s} &= \frac{1}{4} (\gamma (r-1)+r (2 Z_0-1)+2 W_0+5) > 0, \\
\partial_Z^2 \Xi_1 |_{P_s} &= \frac{1}{8} \left(\gamma^2 (-r+3 Z_0+2)+4 \gamma r-3 r+12 W_0+9 Z_0+22\right) > 0,\\
\partial_Z^3 \Xi_1 |_{P_s} &= \frac{3}{16} \left(\gamma^2-2 \gamma+5\right) = \frac{3}{4} + \frac{3}{16} (\gamma - 1)^2 > 0.\end{aligned}$$ The third inequality is trivial while the first two are shown in Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"}.
We proceed with the side $D_Z = 0$ of $Q$. If $D_Z = 0$ we have $\Xi_1 = \frac{-\alpha}{2} N_Z D_W$, so the sign is determined by $N_Z$. Let us recall $N_Z = 0$ intersects $D_Z = 0$ twice, at $\overline P_s$ (to the left) and $P_s$ (to the right). We need to show that $N_Z \leq 0$ along $D_Z = 0$ between the points $\overline P_s$ and $P_s$ (cf. Figure [4](#fig:PartII2){reference-type="ref" reference="fig:PartII2"}). This is true because $N_Z$ along $D_Z = 0$ (parametrized by $W$) is a second-degree polynomial whose quadratic coefficient, $\frac{2(\gamma - 1)^2}{(\gamma - 1)^2}$, is positive. Thus, the polynomial is negative between the roots (that is, between $\overline P_s$ and $P_s$) and positive outside. We have therefore shown that $\Xi_1 \geq 0$ (equivalently, $S \leq S_+(U)$) on the side $D_Z = 0$ of our quadrilateral, which goes from $\overline P_s$ to $P_s$.
Lastly, we need to treat the side $U = U(\overline P_s)$ of our quadrilateral region $Q$. We know that $S = S_+(U)$ at $\overline P_s$, since $\Xi_1 = 0$ there ($D_Z = N_Z = 0$). Therefore, we have that $S < S_+(U)$ for any other point along the line $U = U(\overline P_s)$ with smaller value of $S$. This includes the side $U = U(\overline P_s)$ of our quadrilateral. We have shown that $\Xi_1 \geq 0$ on $\partial Q$. From the discussion above, using $\Xi_1 \geq 0$ on $\partial Q$ we conclude [\[part22\]](#part22){reference-type="eqref" reference="part22"} assuming that the solution stays in the region $U > U(\overline P_s)$ (and therefore, stays inside $Q$).
Let us finally show that $U > U(\overline P_s)$. We construct the barrier $b(t) = ( \overline W_0 - t, \overline Z_0 + t)$. In order for the solution to remain in the upper-right part of the barrier, the condition needed is: $$\Xi_3 = N_W(b(t)) D_Z(b(t)) + N_Z(b(t)) D_W(b(t)) > 0,$$ where we are using the vector field $(N_W D_Z, N_Z D_W)$, which has the same direction and sign as $\left(N_W/D_W, N_Z/D_Z \right)$, given that $D_W, D_Z > 0$. A computation shows that $$\begin{aligned}
\frac{\Xi_3}{t} & = -\frac{(r-1) \left(3 \gamma^2 (r-3)+\gamma (-14 r-3 \ensuremath{\mathcal}R_1+22)+15 r+5 \ensuremath{\mathcal}R_1-17\right)}{4 (\gamma -1)}\\
&-\frac{1}{8} t ((\gamma-1) (-3 \gamma (r-3)+r+3 \ensuremath{\mathcal}R_1-7)).\end{aligned}$$ Using Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"}, we prove that the independent term of $\frac{\Xi_3}{t}$ is positive, thus showing $\Xi_3 > 0$ for small $t$ (equivalently, $U > U(\overline P_s)$ close to $(\overline W_0, \overline Z_0)$). If we also show that $\Xi_3$ is positive at the intersection of $b(t)$ with $W = Z$, we will be done, since $\Xi_3/t$ is an affine function of $t$ (and therefore positive between any two positive points). Let us denote by $(\kappa, \kappa)$ the intersection of $b(t)$ with $W = Z$. Using the expression $N_W D_Z + N_Z D_W$ on a point of the form $(\kappa, \kappa)$, we obtain $$\Xi_3 = -2\kappa (\kappa +1 )(\kappa + r).$$ In particular, since $r>1$, we have that $\Xi_3 > 0$ for $\kappa \in (-1, 0)$. Thus, we are done by showing that $$-1 < \frac{\overline W_0 + \overline Z_0}{2} < 0.$$ We refer to Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"} for this inequality.\
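As a quick numerical spot-check of the last step (an optional illustration only, not part of the argument; it assumes NumPy is available), one can sample the factored expression $-2\kappa(\kappa+1)(\kappa+r)$ over $\kappa \in (-1,0)$ and $r > 1$ and confirm positivity:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = rng.uniform(-0.999, -0.001, size=10_000)
r = rng.uniform(1.001, 3.0, size=10_000)
xi3 = -2 * kappa * (kappa + 1) * (kappa + r)   # the factored form of Xi_3 at (kappa, kappa)
print(xi3.min() > 0)                           # True: every sampled value is positive
```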
**Part II.3. Showing [\[part12\]](#part12){reference-type="eqref" reference="part12"}.**\
Since $D_Z, D_W > 0$, and $\alpha > 0$, we can reduce [\[part12\]](#part12){reference-type="eqref" reference="part12"} to show the stronger inequality $$\Xi_2 := N_Z D_W - N_W D_Z > 0.$$ Expressing $\Xi_2$ in $(U, S)$ variables, we obtain $$\label{eq:morcef}
\frac{\Xi_2}{S} = r (2-(\gamma-3) U)+U (2 \gamma U+3 \gamma-1) -\frac{1}{2} (\gamma-1)^2 S^2$$ In particular, in the halfplane $S > 0$ we have that $\Xi_2 > 0$ whenever $$S < S_+(U) = \frac{\sqrt{ 2 \left( r (2-(\gamma-3) U)+U (2 \gamma U+3 \gamma-1)\right)}}{\gamma - 1}.$$
Note that this definition of $S_+(U)$ is different from the one we gave in Part II.2. Each definition of $S_+(U)$ only applies to the part where it is defined. Note that $S_+(U)$ may be complex, since the radicand is not always nonnegative. If $S_+(U)$ is complex for some $U$, then there is no possible $S$ such that $\Xi_2 \geq 0$ (one example for this is at point $(W, Z) = (-0.5, -0.8)$ of Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"} since the brown hyperbola $S = S_+(U)$ does not intersect the diagonal $U = \frac{-0.5-0.8}{2}$). In particular, whenever we say that $S \leq S_+(U)$ it should be interpreted as meaning that $S_+(U)$ is real and $S \leq S_+(U)$.
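The following short check (an optional aside, plain Python) evaluates the radicand of $S_+(U)$ at the sample point mentioned above, $U = \frac{-0.5-0.8}{2} = -0.65$, with $\gamma = 5/2$ and $r = 1.2$, confirming that it is negative there, so $S_+(U)$ is indeed not real:

```python
gamma, r, U = 2.5, 1.2, -0.65
radicand = 2 * (r * (2 - (gamma - 3) * U) + U * (2 * gamma * U + 3 * gamma - 1))
print(radicand)   # about -0.205, i.e. negative
```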
![Phase portrait of the profile for $\gamma = 5/2$, $r=1.20$. We are plotting the region corresponding to $W - Z > 0$ and $D_Z > 0$, with the boundary in red. The orange region corresponds to $U > U(P_s)$ and the blue to $U < U(P_s)$. The black curve corresponds to $N_Z = 0$. The brown curve corresponds to $\Xi_2 = 0$, and we want to show the solution stays in the region with $S$ below the brown curve (above the right branch of the brown curve). The green curves represent $N_W = 0$ and $U = U(\overline P_s)$.](Figures/PartII3_1.png){#fig:PartII3_1 width="50%"}
Let us start by showing that $\Xi_2 \geq 0$ for $U \geq U(P_s)$ (orange region in Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}). On the one hand, writing the condition $D_Z > 0$ in $(U, S)$ coordinates, we obtain that $S < \frac{2(1+U)}{\gamma - 1}$. On the other hand, we have that $\Xi_2$ is positive along $D_Z = 0$ for $U > U(P_s)$. This comes from the observation that $\Xi_2 = N_Z D_W$ along $D_Z = 0$, the fact that $D_W > 0$ along $D_Z = 0$ and the observation from Part II.2 that, along $D_Z = 0$, $N_Z$ is negative between the two roots ($\overline P_s$ and $P_s$) and $N_Z > 0$ outside. Since $\Xi_2 \geq 0$ along $D_Z = 0$ for $U > U(P_s)$, we obtain that $S_+(U)$ is real and bigger than $\frac{2(1+U)}{\gamma - 1}$ (which is the value that $S$ takes at $D_Z = 0$). Since the solution satisfies $S < \frac{2(1+U)}{\gamma - 1}$, we are done with this region.
Now, we just need to show that $\Xi_2 > 0$ whenever the solution is in the region $U < U(P_s)$ (this corresponds to the blue region of Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}). The second-degree equation $\Xi_2 = 0$ describes a conic that clearly passes through $P_s$, and we will denote by $b(t)$ the conic branch of $\Xi_2 = 0$ that starts from $P_s$ on the side $D_Z > 0$ (with initial value $b(0) = P_s$). Since $\Xi_2 = 0$ is a conic we can determine $b(t)$ for all $t \geq 0$ (with an arbitrary parametrization). Since $N_W(P_s) < 0$ and $D_W(P_s) > 0$, from the expression of $\Xi_2$, $b(t)$ has to lie in the region $N_Z < 0$ for $t \geq 0$ sufficiently small. Thus, $b(t)$ starts in the eye-shaped region with vertices $P_s, \overline P_s$ delimited by $N_Z < 0$, $D_Z > 0$ (in Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}, that region is delimited by the black curve and red line, and $b(t)$ is the brown curve starting at $P_s$). We denote the eye-shaped region $N_Z < 0$, $D_Z > 0$ by $E$. Moreover, $E$ is contained on the halfplane $W > Z$ since the hyperbola $N_Z$ only intersects $W = Z$ twice: at $(0, 0)$ and $(-r, -r)$. The intersection at $(0, 0)$ corresponds to the upper branch (which is not the one bounding $E$) and the intersection $(-r, -r)$ is located in $D_Z < 0$ (thus, it is not on the boundary of $E$. Moreover, $b(t)$ has to exit $E$ at some point (if $b(t)$ is unbounded that is obvious, if it is an ellipse, it needs to reach $P_s$ from the side $D_Z < 0$). There are two possibilities regarding how $b(t)$ exits $E$:
- Case A. $b(t)$ exits $E$ through some point with $D_Z > 0$. This implies that the point of exit satisfies $N_Z = 0$. Combining that with $\Xi_2 = 0$, we get $N_W D_Z = 0$. Since $D_Z > 0$, we also have $N_W = 0$. There is only one solution to $N_W = N_Z = 0$ on the open halfplane $W > Z$, and it is given by $P_\star$. Thus, the point of exit is $P_\star$. This possibility is depicted in Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}.
- Case B. $b(t)$ exits $E$ through $D_Z = 0$. From $\Xi_2 = 0$, and $D_Z = 0$, we obtain that $N_Z D_W = 0$ at that point of exit. Since $D_W > D_Z$ on our halfplane $W > Z$, it is not possible that $D_W$ vanishes there, and therefore $N_Z = 0$. There are two points with $N_Z = D_Z = 0$, namely $P_s$ and $\overline P_s$. The point of exit cannot be $P_s$, since, even if $b(t)$ were a periodic conic (an ellipse), it would arrive back at $P_s$ from the halfplane $D_Z < 0$ (since we started from $D_Z > 0$). Thus, the point of exit is $\overline P_s$. This case is depicted in Figure [6](#fig:PartII3_2){reference-type="ref" reference="fig:PartII3_2"}.
Moreover, we can characterize cases A and B as follows. Recall that $E$ is contained in $W > Z$ and that $N_Z = N_W = 0$ has only one simple solution ($P_\star$) on that halfplane. If $D_Z (P_\star) > 0$ we have that $P_\star \in \partial E$, and since the curve $N_W = 0$ cuts the region $E$, it has to exit through the side $D_Z = 0$ (there are no more solutions to $N_W = N_Z = 0$). Thus, if $D_Z (P_\star) > 0$, the curve $N_W = 0$ has to divide $E$ in two, so that $P_s$ and $\overline P_s$ lie on different parts. Thus, case B is not possible, since before arriving at $\overline P_s$, $b(t)$ would need to cross $N_W = 0$ (and since $\Xi_2 = 0$ we obtain that $N_Z = 0$ as well, meaning that $b(t)$ reaches $P_\star$ before $\overline P_s$). Conversely, if $D_Z(P_\star) < 0$, we have that $P_\star \notin \partial E$, so we must be in case B. If $D_Z(P_\star) = 0$ we have $\overline P_s = P_\star$ and either case applies.
![This figure is analogous to Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}, but in the case of $\gamma = 1.2$, $r=1.111$. Here, the hyperbola $N_Z = 0$ (in black) is extremely close to the line $D_Z = 0$ (in red). Note that the point $P_\star$ is to the left of $\overline P_s$. Therefore, the brown curve $b(t)$ (which is also overlapped with the black parabola and red line) exits $E$ through $\overline P_s$.](Figures/PartII3_2.png){#fig:PartII3_2 width="50%"}
Let us show that in any of the cases above the solution will not traverse $b(t)$ while $b(t) \in E$. The solution starts above $b(t)$ due to the Taylor analysis performed in Part I.2 (the term $D_W D_Z^2$ that we omitted is irrelevant since it is second order at $P_s$). We start by showing that the field $(N_W D_Z, N_Z D_W)$ (which is parallel and has the same orientation as $(N_W/D_W, N_Z/D_Z)$) points towards the lower-left side of $b(t)$ while $b(t)\in E$. Given the implicit definition of $b(t)$ as $\Xi_2 = 0$, a normal vector of $\Xi_2$ pointing down-left is given by $\nabla (N_W D_Z - N_Z D_W)$. We have that $$\left( -1, -1 \right) \cdot \nabla (N_W D_Z - N_Z D_W) = \frac{1}{2} (W-Z) (\gamma (-r+2 W+2 Z+3)+3 r-1) = S \left(-1+3r +\gamma (3-r+4U) \right).$$ We show that this quantity is positive. Since the parenthesis is increasing with $U$ it suffices to show that it is positive for the left limit of the interval of $U$ considered. We show that $\left(-1+3r +\gamma (3-r+4U) \right) > 0$ for $U = U(P_\star)$ if $D_Z (P_\star ) \geq 0$ and for $U = U(\overline P_s)$ if $D_Z (P_\star) < 0$ in Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"}. This shows that in both of our cases, on the arc $\Xi_2 = 0$, the field always points in the lower-left direction. Now, let us divide our argument with respect to the aforementioned two cases.
In case B, we have that $b(t)$ and $D_Z = 0$ enclose a region inside $E$. If the solution traversed $b(t)$ while in $E$, it would enter that region. This region cannot be exited, since the field along $b(t)$ points in the lower-left direction and we know that the solution satisfies $D_Z > 0$. This contradicts the fact that the solution reaches $P_\infty = (0, 0)$. So the solution could not have traversed $b(t)$ in the first place.
In case A, we consider the triangular-shaped region $T$ inside $E$ bounded by $N_W = 0$, $D_Z = 0$ and $b(t)$ (the three sides have colours red, green and brown in Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}, and two of its vertices are $P_s$ and $P_\star$). If the solution crosses $b(t)$ while inside $E$, it necessarily enters $T$. We know that the solution satisfies $N_W > 0$ (from Part II.1) and $D_Z > 0$. Thus, the solution can only exit through $b(t)$. But this is impossible since we showed that while $b(t) \in E$, the vector field points to the down-left direction (thus, inwards $T$). Thus, the solution cannot exit $T$. This contradicts that the solution converges to $P_\infty = (0, 0) \notin T$ and we deduce that the solution did not traverse $b(t)$ in the first place.
We have proven that the solution does not traverse $b(t)$ while in $E$. Let us now conclude the proof that the solution satisfies $\Xi_2 > 0$ while in the region $U < U(P_s)$.
If we are in case A, we consider $b(t)$ until reaching $P_\star$ and then we connect with the branch of $N_W = 0$ that joins $P_\star$ with $(0, 0)$ (green curve in Figure [5](#fig:PartII3_1){reference-type="ref" reference="fig:PartII3_1"}). We close the region with the condition $U < U(P_s)$ (recall $U(P_s) < 0$ from Lemma [Lemma 38](#lemma:auxiliary){reference-type="ref" reference="lemma:auxiliary"}). While in $U < U(P_s)$, the smooth solution has to stay in this region (we just showed it cannot traverse $b(t)$, and we showed $N_W < 0$ in Part II.1). Moreover, we have that $S \leq S_+(U)$ on the boundary (and in particular, $S_+(U)$ is real). Along the side $U = U(P_s)$ we showed $S \leq S_+(U)$ when treating the case $U \geq U(P_s)$. We clearly have $S = S_+(U)$ along $b(t)$ (since it solves $\Xi_2 = 0$). We have that $\Xi_2 = N_Z D_W$ along $N_W = 0$ and we recall from Part II.1 that $N_Z D_W$ is nonnegative along the branch of $N_W = 0$ that joins $P_\star$ with $(0, 0)$. Equivalently, $S_+(U)$ is real and $S \leq S_+(U)$. Since we have that $S \leq S_+(U)$ at the boundary of our region, we conclude $S < S_+(U)$ in the interior, and therefore the solution satisfies $\Xi_2 > 0$.
If we are in case B, we apply the same procedure but when constructing our region, we substitute the branch $N_W = 0$ of case A by the line $U = U(\overline P_s)$ (green line in Figure [6](#fig:PartII3_2){reference-type="ref" reference="fig:PartII3_2"}). We also have that the solution cannot traverse this line due to the argument given in Part II.2. It is also clear that $S \leq S_+(U)$ on this side, since $U$ is constant and $S$ is decreasing starting from $\overline P_s$ (at which $S = S_+(U)$). The other sides of the boundary also satisfy $S \leq S_+(U)$ by the same arguments as in case A, so we conclude that $S < S_+(U)$ in the interior of the region. Since the solution lives in this region while $U < U(P_s)$, we are done. ◻
**Lemma 38**. *For all $\gamma > 1$ and $1 < r < r^\ast (\gamma )$ we have the following inequalities:*
- *$W_1 + Z_1 < 0$.*
- *$\frac{1}{4} (\gamma (r-1)+r (2 Z_0-1)+2 W_0+5) > 0$.*
- *$\frac{1}{8} \left(\gamma^2 (-r+3 Z_0+2)+4 \gamma r-3 r+12 W_0+9 Z_0+22\right) > 0$.*
- *$3 \gamma^2 (r-3)+\gamma (-14 r-3 \ensuremath{\mathcal}R_1+22)+15 r+5 \ensuremath{\mathcal}R_1-17$ is negative.*
- *$-1 < U(P_s) < \frac{-2(r-1)}{3(\gamma - 1)} < 0$.*
- *$-1 < U(\overline P_s) < 0$.*
- *$U = U(P_\star)$ satisfies $-1 + 3 r +\gamma(3-r + 4 U) > 0$. Moreover, this is also satisfied for $U = U(\overline P_s)$ in the case where $D_Z (P_\star) < 0$.*
*Proof.* **Item 1**\
We start showing the inequality $W_1 + Z_1 < 0$. It will be useful to express this quantity in terms of $$D_{Z, 1} = \nabla D_Z (W_1, Z_1) = \frac{3-\gamma}{4}W_1 + \frac{1+\gamma}{4}Z_1.$$ We have $$W_1 + Z_1 = \frac{4}{\gamma + 1} D_{Z, 1} + \left( 1 - \frac{(3-\gamma)}{\gamma + 1}\right) W_1 = \frac{2(\gamma - 1)}{\gamma + 1} \left( \frac{2 D_{Z, 1}}{\gamma - 1}+W_1\right),$$ so we just need to show that the right parenthesis is negative. Using the explicit formulas for $D_{Z, 1}, W_1$ from [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible] in terms of $\gamma, r, \ensuremath{\mathcal}R_2$, we obtain $$\left( \frac{2 D_{Z, 1}}{\gamma - 1}+W_1\right) = \frac{-3 \gamma^2 (r-3)-\gamma(3 \ensuremath{\mathcal}R_1+8)+3 r+\ensuremath{\mathcal}R_1-1 -2(\gamma-1) \ensuremath{\mathcal}R_2}{4 (\gamma-1)^2}.$$ The denominator is clearly positive, so we just need to show that the numerator is negative. Since the term $-2(\gamma - 1)\ensuremath{\mathcal}R_2$ is negative, it suffices to show that this term dominates. That is, it suffices to show that $$\begin{aligned}
4(\gamma-1)^2 \ensuremath{\mathcal}R_2^2 &- \left( -3 \gamma^2 (r-3)-\gamma(3 \ensuremath{\mathcal}R_1+8)+3 r+\ensuremath{\mathcal}R_1-1 \right)^2 \\
&= -2(\gamma+ 1) \Big( \underbrace{ \left(9 \gamma^3 (r-3)^2-33 \gamma^2 r^2+114 \gamma^2 r-189 \gamma^2+43 \gamma r^2-82 \gamma r+147 \gamma-11 r^2+6 r-31\right) }_A \\
&\qquad+ \ensuremath{\mathcal}R_1 \underbrace{ \left(9 \gamma^2 r-27 \gamma^2-6 \gamma r+42 \gamma-7 r-11\right) }_B \Big)\end{aligned}$$ is positive. Let us denote by $\Xi = A + B \ensuremath{\mathcal}R_1$ the big parenthesis in the right hand side. We need to show that $\Xi$ is negative.
In order to study $\Xi$, we multiply it by its algebraic conjugate $A - B\ensuremath{\mathcal}R_1$ and obtain $$\begin{aligned}
\begin{split} \label{eq:calvacanti}
\Xi (A - B\ensuremath{\mathcal}R_1) &= -64 (r-1) (\gamma - 1) \Bigg( \left(\frac{3 \gamma-1}{\sqrt{3} (\gamma-1)+2}-r\right) ((3 \gamma-5) r+2) \\
& \qquad \left((3 (\gamma-2) \gamma-1) r+(3 \gamma-1)
\left(\sqrt{3} \gamma-\sqrt{3}+2\right) \right)\Bigg).
\end{split} \end{aligned}$$ The terms $r-1$ and $\gamma - 1$ are clearly positive. The term $\left(\frac{3 \gamma-1}{\sqrt{3} (\gamma-1)+2}-r\right)$ is equal to $(r^\ast_{\geq 5/3} - r)$ from the definition of $r^\ast_{\geq 5/3}$ (equation [\[eq:defr\]](#eq:defr){reference-type="ref" reference="eq:defr"}). The term $(3 \gamma-5) r+2$ is positive due to [\[eq:villefort\]](#eq:villefort){reference-type="eqref" reference="eq:villefort"}. Let us argue the term in the last parenthesis is also positive. Since it is linear in $r$, we just need to check that it is positive at $r=1$ and $r = r^\ast_{<5/3} \geq r^\ast(\gamma)$ (we have that $r_{<5/3}^\ast = r^\ast$ for $\gamma \leq 5/3$ but we also have that $r_{<5/3}^\ast > r^\ast$ for $\gamma > 5/3$). We have that $$(3 (\gamma-2) \gamma-1) r+(3 \gamma-1)
\left(\sqrt{3} \gamma-\sqrt{3}+2\right) \Big|_{r=1} = (\gamma-1) \left(3 \sqrt{3} \gamma+3 \gamma-\sqrt{3}+3\right) > 0,$$ and letting $\ell = \sqrt{\gamma - 1}$: $$\begin{aligned}
&(3 (\gamma-2) \gamma-1) r+(3 \gamma-1)
\left(\sqrt{3} \gamma-\sqrt{3}+2\right) \Big|_{r=r_{<5/3}^\ast} \\
&= \quad \frac{\ell^2 \left(3 \sqrt{3} \ell^4+9 \ell^4+6 \sqrt{6} \ell^3+6 \sqrt{2} \ell^3+8 \sqrt{3} \ell^2+12
\ell^2+4 \sqrt{6} \ell+12 \sqrt{2} \ell+4 \sqrt{3}+4\right)}{\left(\ell+\sqrt{2}\right)^2},\end{aligned}$$ which is clearly positive for $\ell > 0$ (all the coefficients are positive).
Going back to [\[eq:calvacanti\]](#eq:calvacanti){reference-type="eqref" reference="eq:calvacanti"}, we deduce that $\Xi (A - B\ensuremath{\mathcal}R_1)$ has the same sign as $-\left(r_{\geq 5/3}^\ast - r \right)$, where we recall $r_{\geq 5/3}^\ast, r^\ast_{<5/3}, r^\ast$ were defined in [\[eq:defr\]](#eq:defr){reference-type="eqref" reference="eq:defr"}. In particular, we know that $\Xi$ does not vanish for $r\in (1, r^\ast)$, $\gamma \geq 5/3$ and that $\Xi$ at most vanishes once for $r\in (1, r^\ast)$, $\gamma < 5/3$ (and if it does, the root is a single root). Recall that we want to show that $\Xi < 0$.
We have that $$\begin{aligned}
\Xi \Big|_{r=1} = 0, \qquad \mbox{ and } \qquad
\frac{\partial}{\partial r}\Xi \Big|_{r=1} = -16(\gamma - 1).\end{aligned}$$ This shows that $\Xi < 0$ for $\gamma \geq 5/3$, since it is negative on some right neighbourhood of $r=1$ and we know it does not change sign since $\Xi (A-B\ensuremath{\mathcal}R_1)$ does not vanish.
With respect to the case $\gamma < 5/3$, in that case we have $$\begin{aligned}
\Xi \Big|_{r=r^\ast_{<5/3}} = \frac{16 \left(\sqrt{2} \sqrt{\frac{1}{\gamma-1}}+3\right) (\gamma-1) (3 \gamma-5)}{\left(\sqrt{2}
\sqrt{\frac{1}{\gamma-1}}+1\right)^4},\end{aligned}$$ which is clearly negative since $3\gamma - 5 < 0$. In this case, we know that $\Xi$ vanishes at most once in $r\in (1, r^\ast)$, and only with a simple root. However, since $\Xi$ is negative on a right neighbourhood of $r=1$ and for $r=r^\ast$, $\Xi$ can only vanish in between if it has a double root or if it has more than one root. Thus, $\Xi$ remains negative on the whole interval $(1, r^\ast)$.\
**Item 2**\
Using the expressions for $W_0$ and $Z_0$ we have $$\begin{aligned}
&\gamma (r-1) + r (2Z_0 -1) + 2W_0 + 5 \\
&=
\frac{ \overbrace{ 2 \gamma^3 (r-1)+\gamma^2 ((r-8) r+11)-6 \gamma r^2+10 \gamma r-12 \gamma+9 r^2-12 r+7 }^A
+ \ensuremath{\mathcal}R_1 \overbrace{ (\gamma r+\gamma-3 r+1) }^B}{2 (\gamma-1)^2}.\end{aligned}$$ We will conclude the proof by showing that both $A$ and $B$ are positive. Since $B$ is an affine function of $r$ and $1 < r < \frac{2\gamma- 1}{\gamma}$ by [\[eq:danglars\]](#eq:danglars){reference-type="eqref" reference="eq:danglars"}, it suffices to evaluate $B$ at the endpoints: $$B \Big|_{r=1} = 2(\gamma - 1) > 0, \qquad \mbox{ and } \qquad B \Big|_{r = (2\gamma-1)/\gamma} = 3\frac{(\gamma-1)^2}{\gamma} > 0,$$ so $B > 0$ on the whole interval. With respect to $A$, we have that $$A =4 (\gamma-1)^2 + 2 (\gamma-3) (\gamma+1) (\gamma-1) (r-1)+(\gamma-3)^2 (r-1)^2,$$ which clearly shows $A > 0$ whenever $\gamma\geq 3$ (since all terms are nonnegative and $4(\gamma- 1)^2 > 0$). In the case $\gamma< 3$, note that we have a second-degree polynomial in $(r-1)$ with positive principal coefficient. Thus, $A$ is a convex parabola with respect to $r$, so it suffices to show that $A$ is positive and has negative derivative (with respect to $r$) at $r = r^\ast_{<5/3}$, given that $r^\ast \leq r^\ast_{<5/3}$.
Denoting $\ell = \sqrt{\gamma- 1}$ (which is in $(0, \sqrt 2)$ for $\gamma \in (1, 3)$), we have: $$\begin{aligned}
A \Big|_{r = r_{<5/3}^\ast} &= \frac{4 \ell^6 \left(\ell^2+2\right) \left(\ell^2+2 \sqrt{2} \ell+2\right)}{\left(\ell+\sqrt{2}\right)^4} > 0, \\
\frac{\partial}{\partial r} A \Big|_{r = r_{<5/3}^\ast} &= \frac{2 \ell^3 \left(\ell^2-2\right) \left(\ell^3+2 \sqrt{2} \ell^2+6 \ell+4
\sqrt{2}\right)}{\left(\ell+\sqrt{2}\right)^2} < 0.\end{aligned}$$
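The two algebraic facts used in Item 2, namely the regrouped quadratic-in-$(r-1)$ form of $A$ and the endpoint values of $B = \gamma r + \gamma - 3r + 1$, can be double-checked with SymPy (an optional aside, not part of the proof):

```python
import sympy as sp

g, r = sp.symbols('gamma r', positive=True)

A_expanded = (2*g**3*(r - 1) + g**2*((r - 8)*r + 11) - 6*g*r**2 + 10*g*r - 12*g
              + 9*r**2 - 12*r + 7)
A_regrouped = 4*(g - 1)**2 + 2*(g - 3)*(g + 1)*(g - 1)*(r - 1) + (g - 3)**2*(r - 1)**2
print(sp.expand(A_expanded - A_regrouped))   # 0, so the two forms of A agree

B = g*r + g - 3*r + 1
print(sp.factor(B.subs(r, 1)))               # equals 2*(gamma - 1) > 0
print(sp.factor(B.subs(r, (2*g - 1)/g)))     # equals 3*(gamma - 1)**2/gamma > 0
```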
**Item 3**\
We have to show that $\gamma^2 (-r + 3 Z_0 + 2) + 4\gamma r - 3 r + 12 W_0 + 9 Z_0 + 22 > 0$. We start by reexpressing this quantity as $$\frac{\overbrace{11 + 13\gamma-\gamma^3+\gamma^2+\left(-\gamma^3+5 \gamma^2+5 \gamma-33\right) r }^A +\overbrace{ 3 \left(\gamma^2-2 \gamma+5\right) }^B\ensuremath{\mathcal}R_1}{4 (\gamma-1)}.$$ Thus, we need to show that the numerator is positive. It is clear that $B \ensuremath{\mathcal}R_1 > 0$ (since $\gamma^2 - 2\gamma + 5 = (\gamma- 1)^2 + 4 > 0$). We will show that $A > 0$ for $\gamma < 4$ and that $B^2 \ensuremath{\mathcal}R_1^2 - A^2 > 0$ for $\gamma \geq 4$. This would conclude the proof, since in the second case $B \ensuremath{\mathcal}R_1 > |A|$.
Let us start with $A > 0$ for $\gamma < 4$. Since $A$ is a first-degree polynomial in $r$, it suffices to show that $A > 0$ for both $r=1$ and $r=r^\ast$. We have that $$\begin{aligned}
A \Big|_{r=1} &= 2(\gamma- 1)(11 + 2\gamma- \gamma^2 ) , \\
A \Big|_{r=r_{<5/3}^\ast} &= \frac{4 (\gamma-1) (11 + 2\gamma- \gamma^2 )}{\sqrt{2} \sqrt{\frac{1}{\gamma-1}}+1}, \\
A \Big|_{ r = r_{\geq 5/3}^\ast} &= \frac{(\gamma-1) \left( (3\gamma-5)+\sqrt{3} (\gamma+ 1) \right)(11 + 2\gamma- \gamma^2 ) }{\sqrt{3}(\gamma-1)+2}.\end{aligned}$$ We observe that all three expressions are positive because $(11 + 2\gamma- \gamma^2) = 12 - (\gamma- 1)^2$, so it is positive for $\gamma \in (1, 4]$. In the last expression note also that $3\gamma - 5 \geq 0$ because this expression is only applied for $\gamma \geq 5/3$.
Finally, we need to show $B^2 \ensuremath{\mathcal}R_1^2 - A^2 > 0$ for $\gamma > 4$. Computing $B^2 \ensuremath{\mathcal}R_1^2 - A^2$ we obtain $$\begin{aligned}
C:=\frac{B^2 \ensuremath{\mathcal}R_1^2 - A^2}{8(\gamma-1)^2} &= \Big(
(\gamma-3)^2 \left(\gamma^2-2 \gamma+13\right) (r-1)^2 \\
&\qquad + \left(-5\gamma^4+12 \gamma^3-30 \gamma^2-52 \gamma-69\right) (r-1)
+4 \left(\gamma^2-2 \gamma+13\right) (\gamma-1)^2\Big).\end{aligned}$$ Since $C$ is a second-degree polynomial in $(r-1)$ with positive dominant coefficient, it is enough to show that $C > 0$ at $r = r^\ast$ and $\frac{\partial C}{\partial r} < 0$ at $r = r^\ast$. Recall $r^\ast = r^\ast_{\geq 5/3}$ since we are in the case $\gamma> 4$. We have that $$\begin{aligned}
C \Big|_{r=r^\ast} &= -\frac{\gamma-1}{\left(\sqrt{3} \gamma-\sqrt{3}+2\right)^2} \Big( 3 \left(7 \sqrt{3}-13\right) \gamma^5+\left(249-131 \sqrt{3}\right) \gamma^4+\left(466 \sqrt{3}-982\right) \gamma^3 \\
&\qquad -6 \left(177 \sqrt{3}-379\right) \gamma^2+\left(1673 \sqrt{3}-3027\right) \gamma-1255 \sqrt{3}+2389\Big) \\
& =-\frac{\gamma-1}{\left(\sqrt{3} \gamma-\sqrt{3}+2\right)^2}\Big(\left(21 \sqrt{3}-39\right) (\gamma-4)^5+\left(289 \sqrt{3}-531\right) (\gamma-4)^4 \\
&+\left(1730 \sqrt{3}-3238\right) (\gamma-4)^3+6 \left(899 \sqrt{3}-1761\right) (\gamma-4)^2+\left(8889 \sqrt{3}-18147\right) (\gamma-4)\\
&+99 \left(63 \sqrt{3}-125\right)\Big) \\
%
\frac{\partial C}{\partial r} \Big|_{r= r^\ast} &= \frac{1}{{\sqrt{3} \gamma-\sqrt{3}+2}} \Big( \left(6-7 \sqrt{3}\right) \gamma^5+\left(35 \sqrt{3}-64\right) \gamma^4-6 \left(21 \sqrt{3}-46\right) \gamma^3 \\
&\qquad +14 \left(17 \sqrt{3}-60\right) \gamma^2+\left(1174-443 \sqrt{3}\right) \gamma+303 \sqrt{3}-840 \Big) \\
&= \frac{1}{{\sqrt{3} \gamma-\sqrt{3}+2}} \Big( \left(6-7 \sqrt{3}\right) (\gamma-4)^5+\left(56-105 \sqrt{3}\right) (\gamma-4)^4 \\
&+\left(212-686 \sqrt{3}\right) (\gamma-4)^3+42 \left(4-57 \sqrt{3}\right) (\gamma-4)^2-3 \left(1529 \sqrt{3}+334\right) (\gamma-4)\\
&-9 \left(437 \sqrt{3}+240\right) \Big).\end{aligned}$$ Thus, the proof follows from the two fifth-degree polynomials of $\gamma$ above being negative for $\gamma \geq 4$ due to all coefficients being negative.\
**Item 4**\
Let us define $$A = 3 \gamma^2 (r-3)-14 \gamma r+22 \gamma+15 r-17,$$ which includes all terms except the ones with $\ensuremath{\mathcal}R_1$. We need to show that $A + (-3\gamma+5)\ensuremath{\mathcal}R_1 < 0$.
We start showing that $A < 0$. Since $A$ is an affine function of $r$ and $1 < r < r^\ast < \frac{2\gamma-1}{\gamma}$ (the last inequality is due to [\[eq:danglars\]](#eq:danglars){reference-type="eqref" reference="eq:danglars"}), it suffices to show that $A<0$ for $r=1$ and $r = \frac{2\gamma-1}{\gamma}$. We have that $$A \Big|_{r=1} = -2(\gamma-1) (3\gamma-1) < 0, \qquad \mbox{ and } \qquad A\Big|_{r=(2\gamma-1)/\gamma} = -3\frac{(\gamma-1)^2(\gamma+5)}{\gamma} < 0.$$
Therefore, we have that $A < 0$. This immediately shows that $A + (-3\gamma+5)\ensuremath{\mathcal}R_1 < 0$ for $\gamma\geq 5/3$. With respect to the case $\gamma< 5/3$, it suffices to show $A^2 - (-3\gamma+5)^2\ensuremath{\mathcal}R_1^2 > 0$. We have that $$A^2 - (-3\gamma+5)^2\ensuremath{\mathcal}R_1^2
= 32 (\gamma-1)^2 ((3 \gamma-5) r+2)$$ and we conclude by recalling that $(3 \gamma-5) r+2$ is positive due to [\[eq:villefort\]](#eq:villefort){reference-type="eqref" reference="eq:villefort"} and the discussion immediately before that equation.\
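The two endpoint evaluations of $A$ in Item 4 can be verified with a short SymPy computation (an optional aside, not part of the proof):

```python
import sympy as sp

g, r = sp.symbols('gamma r', positive=True)
A = 3*g**2*(r - 3) - 14*g*r + 22*g + 15*r - 17
print(sp.factor(A.subs(r, 1)))              # equals -2*(gamma - 1)*(3*gamma - 1) < 0
print(sp.factor(A.subs(r, (2*g - 1)/g)))    # equals -3*(gamma - 1)**2*(gamma + 5)/gamma < 0
```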
**Item 5**\
With respect to $- \frac{2(r-1)}{3(\gamma- 1)} - U(P_s)$ being positive, we have that $$\label{eq:ulanbator}
- \frac{2(r-1)}{3(\gamma- 1)} - U(P_s) = \frac{9\gamma- 7 + (1-3 \gamma)r -3 \ensuremath{\mathcal}R_1}{12 (\gamma-1)}$$ Since clearly $\gamma- 1 > 0$ it suffices to show that the numerator is positive. Now, let us recall from [\[eq:danglars\]](#eq:danglars){reference-type="eqref" reference="eq:danglars"} that $r^\ast < \frac{2\gamma-1}{\gamma}$. Using that (and $1-3\gamma< 0$) we have $$9\gamma- 7 + (1-3 \gamma)r > 9\gamma- 7 + \frac{(2\gamma-1) (1-3\gamma)}{\gamma} = \frac{3\gamma^2 -2\gamma- 1}{\gamma} > 0.$$ Thus, in order to show that the numerator of [\[eq:ulanbator\]](#eq:ulanbator){reference-type="eqref" reference="eq:ulanbator"} is positive, we just need to show that $(9\gamma- 7 + (1-3 \gamma)r)^2$ is bigger than $(3\ensuremath{\mathcal}R_1)^2$. Indeed, we have $$( 9\gamma- 7 + (1-3 \gamma)r )^2 - 9 \ensuremath{\mathcal}R_1^2 = 16 (r-1) ((3 \gamma-5) r+2) > 0,$$ where the factor $(3\gamma- 5)r + 2$ is positive because of [\[eq:villefort\]](#eq:villefort){reference-type="eqref" reference="eq:villefort"} and the discussion immediately before that equation.
The other part of this item is to show that $U(P_s) + 1$ is positive. We have that $$U(P_s) + 1 = \frac{\gamma r+\gamma-3r+1+\ensuremath{\mathcal}R_1 }{4 (\gamma-1)}.$$ Since $\ensuremath{\mathcal}R_1$ and $\gamma-1$ are positive, we will be done if we show that $\gamma r+\gamma-3r+1 > 0$. Since this is an affine function of $r$, for $r\in (1, r^\ast)$ it takes its values between the value at $r=1$ and the value at $r = \frac{2\gamma-1}{\gamma}$ (recall that from [\[eq:danglars\]](#eq:danglars){reference-type="eqref" reference="eq:danglars"} this is an upper bound of $r^\ast$). When $r=1$ we clearly have $\gamma r+\gamma-3r+1 = 2(\gamma- 1)>0$. For $r = \frac{2\gamma-1}{\gamma}$, we have $$\label{eq:seoul}
\left( \gamma r+\gamma-3r+1 \right) \Big|_{r = (2\gamma-1)/\gamma} = 3 \frac{(\gamma-1)^2}{\gamma}$$
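Both evaluations at $r = \frac{2\gamma-1}{\gamma}$ used in Item 5 can be confirmed with SymPy (an optional aside, not part of the proof):

```python
import sympy as sp

g, r = sp.symbols('gamma r', positive=True)
r_up = (2*g - 1)/g   # the upper bound for r^* coming from eq:danglars

print(sp.factor((9*g - 7 + (1 - 3*g)*r).subs(r, r_up)))   # equals (gamma - 1)*(3*gamma + 1)/gamma > 0
print(sp.factor((g*r + g - 3*r + 1).subs(r, r_up)))       # equals 3*(gamma - 1)**2/gamma > 0
```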
**Item 6**\
Let us show that $U(\overline P_s) \in (-1, 0)$. We have that $$U(\overline P_s)= \frac{\gamma(r-3)-3 r+5-\ensuremath{\mathcal}R_1}{4 (\gamma-1)}$$ In order to show that $U(\overline P_s)$ is negative, it suffices to show that $\gamma r - 3\gamma- 3 r + 5 < 0$. Since this function is affine in $r$, it suffices to check so at $r=1$ and $r=\frac{2\gamma-1}{\gamma}$ (which is an upper bound of $r^\ast$). At $r=1$ we have $\gamma r - 3\gamma- 3 r + 5 = -2\gamma+2$ and at $r=\frac{2\gamma-1}{\gamma}$ we have $$\left( \gamma r - 3\gamma- 3 r + 5 \right) \Big|_{r = (2\gamma-1)/\gamma} = \frac{-\gamma^2 -2\gamma+3}{\gamma} < 0.$$
With respect to $U(\overline P_s) > -1$, we have that $$U(\overline P_s) + 1 = \frac{\gamma r+\gamma-3 r+1-\ensuremath{\mathcal}R_1}{4 (\gamma-1)}$$ Let us recall that in [\[eq:seoul\]](#eq:seoul){reference-type="eqref" reference="eq:seoul"} and the paragraph before, we showed $\gamma r+\gamma-3 r+1 > 0$. Therefore, in order to show that $U(\overline P_s) + 1$ is positive, it suffices to show that $(\gamma r+\gamma-3 r+1)^2 - \ensuremath{\mathcal}R_1^2$ is positive. We have $$(\gamma r+\gamma-3 r+1)^2 - \ensuremath{\mathcal}R_1^2
= 8 (\gamma-1)^2 (r-1) > 0$$
**Item 7**\
Let us start by showing the inequality for $U(P_\star)$. We have that $$-1 + 3 r +\gamma(3-r + 4 U(P_\star)) = \frac{ -3 \gamma^2 r+9 \gamma^2+2 \gamma r-6 \gamma-3 r+1 }{3 \gamma-1}$$ Let us call $A$ to the numerator of the fraction above. It suffices to show that $A > 0$. Since $A$ is an affine function of $r$, it suffices to show that $A > 0$ at $r=1$ and $r = \frac{2\gamma-1}{\gamma}$ (since $r^\ast < \frac{2\gamma-1}{\gamma}$ by [\[eq:danglars\]](#eq:danglars){reference-type="eqref" reference="eq:danglars"}). We have that $$A \Big|_{r=1} = 2(\gamma-1)(3\gamma+1) > 0, \qquad \mbox{ and }\qquad A\Big|_{r=(2\gamma-1)/\gamma} = \frac{(\gamma-1) (3\gamma^2 + 4\gamma- 3)}{\gamma} > 0.$$
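The endpoint values of $A$ just displayed can be double-checked symbolically (an optional aside):

```python
import sympy as sp

g, r = sp.symbols('gamma r', positive=True)
A = -3*g**2*r + 9*g**2 + 2*g*r - 6*g - 3*r + 1
print(sp.factor(A.subs(r, 1)))              # equals 2*(gamma - 1)*(3*gamma + 1) > 0
print(sp.factor(A.subs(r, (2*g - 1)/g)))    # equals (gamma - 1)*(3*gamma**2 + 4*gamma - 3)/gamma > 0
```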
Now, let us show the inequality for $U(\overline P_s)$. We have that $$-1 +3r + \gamma(3-r+ 4 U (\overline P_s)) =
\frac{\gamma r+\gamma-3 r+1 -\gamma\ensuremath{\mathcal}R_1}{\gamma-1}$$ From [\[eq:seoul\]](#eq:seoul){reference-type="eqref" reference="eq:seoul"} and the discussion immediately before, we have that $\gamma r+\gamma-3 r+1 > 0$. Therefore, we just need to show that $(\gamma r+\gamma-3 r+1)^2 - \gamma^2 \ensuremath{\mathcal}R_1^2$ is positive. We have that $$B = (\gamma r+\gamma-3 r+1)^2 - \gamma^2 \ensuremath{\mathcal}R_1^2
= - (\gamma-1) \left( \gamma^3 (r-3)^2+\gamma^2 \left(-5 r^2+6
r-5\right)+\gamma\left(3 r^2-10 r+3\right)+(1-3 r)^2 \right)$$ Extracting the factor $\gamma - 1$ and taking a derivative with respect to $r$, we obtain $$\frac{\partial}{\partial r} \left( \frac{B}{\gamma-1} \right) = 10 \gamma+6 + 6 \gamma^3-6 \gamma^2+\left(-2 \gamma^3+10 \gamma^2-6
\gamma-18\right) r$$ Let us note that this is an affine function of $r$. Moreover it is positive at $r=1$ and $r = \frac{2\gamma-1}{\gamma} > r^\ast$ since $$\begin{aligned}
\frac{\partial}{\partial r}\Big|_{r=1} \left( \frac{B}{\gamma-1} \right) &= 4 (\gamma-1)(3 + 2\gamma+ \gamma^2), \\
\frac{\partial}{\partial r}\Big|_{r=(2\gamma- 1)/\gamma} \left( \frac{B}{\gamma-1} \right) &= \frac{2(\gamma-1) (-9+3\gamma+ 9\gamma^2 + \gamma^3)}{\gamma}.\end{aligned}$$ Therefore, we have that $\frac{\partial}{\partial r} \left( \frac{B}{\gamma-1} \right)$ is positive for all $r \in (1, r^\ast )$ and we obtain that $B$ is increasing in that interval.
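The derivative formula for $\frac{B}{\gamma-1}$ and its two endpoint values can be verified with SymPy as follows (an optional aside; the expression for $B/(\gamma-1)$ is copied from the display above):

```python
import sympy as sp

g, r = sp.symbols('gamma r', positive=True)
B_over = -(g**3*(r - 3)**2 + g**2*(-5*r**2 + 6*r - 5) + g*(3*r**2 - 10*r + 3) + (1 - 3*r)**2)

dB = sp.expand(sp.diff(B_over, r))
claimed = 10*g + 6 + 6*g**3 - 6*g**2 + (-2*g**3 + 10*g**2 - 6*g - 18)*r
print(sp.expand(dB - claimed))              # 0, so the derivative formula is correct

print(sp.factor(dB.subs(r, 1)))             # equals 4*(gamma - 1)*(gamma**2 + 2*gamma + 3) > 0
print(sp.factor(dB.subs(r, (2*g - 1)/g)))   # equals 2*(gamma - 1)*(gamma**3 + 9*gamma**2 + 3*gamma - 9)/gamma > 0
```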
We recall that we only need to show $B > 0$ for the values of $\gamma, r$ such that $D_Z (P_\star) < 0$. We have that $$D_Z (P_\star ) = \frac{-1+3\gamma- r(2 + \sqrt{3} (\gamma-1 ) )}{3\gamma- 1},$$ which is clearly decreasing with $r$. Therefore, solving the equation $D_Z (P_\star ) = 0$, we obtain that the condition $D_Z(P_\star) < 0$ is equivalent to $$\label{eq:rthres}
r > \frac{3\gamma- 1}{2 + \sqrt{3} (\gamma-1) } = r_{\geq 5/3}^\ast .$$ That is, the threshold value for $r$ at which $D_Z(P_\star)$ becomes negative is given by the same formula that describes $r^\ast$ for $\gamma \geq 5/3$. In particular, for $\gamma \geq 5/3$, there is nothing to show, since we will always have $D_Z(P_\star ) > 0$ for $r \in (1, r^\ast )$. However, for $\gamma \in (1, 5/3)$, we need to show that $B > 0$ on $(r_{\geq 5/3}^\ast (\gamma ), r_{<5/3}^\ast (\gamma ) )$. Since we have shown that $B$ is increasing with $r$, it suffices to do so at the left endpoint of the interval. We have that $$\frac{B}{\gamma - 1} \Big|_{r = r_{\geq 5/3}^\ast} = \frac{2 (\gamma-1)^2 \left(9 \left(\sqrt{3}-2\right) \gamma^3-21
\left(\sqrt{3}-2\right) \gamma^2+\left(7 \sqrt{3}-2\right) \gamma+5
\sqrt{3}-14\right)}{\left(\sqrt{3} \gamma-\sqrt{3}+2\right)^2}$$ so it suffices to show that the third-degree polynomial $$\left(9 \left(\sqrt{3}-2\right) \gamma^3-21 \left(\sqrt{3}-2\right) \gamma^2+\left(7 \sqrt{3}-2\right) \gamma+5 \sqrt{3}-14\right)$$ is positive for $\gamma \in (1, 5/3)$.
We can rewrite it as $$\begin{aligned}
& 16 + 12 (-1 + \sqrt{3}) \left(\gamma- \frac53\right) + 24 (-2 + \sqrt{3}) \left(\gamma- \frac53\right)^2 +
9 (-2 + \sqrt{3}) \left(\gamma- \frac53\right)^3 \\
&>
16 + 12 (-1 + \sqrt{3}) \left(\gamma- \frac53\right) + 24 (-2 + \sqrt{3}) \left(\gamma- \frac53\right)^2\end{aligned}$$ for $\gamma\in \left(1,\frac53\right)$, and the conclusion follows from the inequality $$16 > |12 (-1 + \sqrt{3})| + |24 (-2 + \sqrt{3})| = 36 - 12\sqrt{3}.$$ ◻
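The final re-expansion in powers of $\gamma - \frac53$ and the elementary numerical bound can also be checked mechanically (an optional aside, not part of the proof):

```python
import sympy as sp

g = sp.symbols('gamma', positive=True)
s3 = sp.sqrt(3)

cubic = 9*(s3 - 2)*g**3 - 21*(s3 - 2)*g**2 + (7*s3 - 2)*g + 5*s3 - 14
regrouped = (16 + 12*(-1 + s3)*(g - sp.Rational(5, 3))
             + 24*(-2 + s3)*(g - sp.Rational(5, 3))**2
             + 9*(-2 + s3)*(g - sp.Rational(5, 3))**3)
print(sp.simplify(sp.expand(cubic - regrouped)))   # 0, so the regrouping is exact
print(float(36 - 12*sp.sqrt(3)))                   # about 15.22, which is indeed < 16
```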
**Lemma 39**. *We have that the profiles $(\overline U, \overline S)$ satisfy $$R + \overline U_R - \alpha \overline S > (R-1)\eta$$ for $R > 1$.*
*Proof.* From [@Merle-Raphael-Rodnianski-Szeftel:implosion-i], [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible] we have that the profiles constructed satisfy $R+\overline U_R - \alpha \overline S = 0$ for $R=1$ (i.e. $D_Z(P_s) = 0$). Therefore, using the fundamental theorem of calculus and [\[eq:radial_repulsivity\]](#eq:radial_repulsivity){reference-type="eqref" reference="eq:radial_repulsivity"}, we have $$\begin{aligned}
R+\overline U_R - \alpha \overline S = \int_1^R \left( 1 + \overline U_R' (\widetilde R) - \alpha \overline S' (\widetilde R) \right) d\widetilde R > \int_1^R \eta d\widetilde R.\end{aligned}$$ ◻
**Lemma 40**. *Let $f:\mathbb R^3 \to \mathbb R^3$. Let $I_{-1} = [0, 1]$ and $I_j = [2^{j}, 2^{j+1}]$. Let $\phi (x), \psi(x) \geq 0$ be weights such that there exist values $\phi_j, \psi_j$ and a constant $C$ satisfying that $\phi (x) \in [\phi_j/C, C \phi_j]$ and $\psi(x) \in [\psi_j/C, C\psi_j]$ for every $x$ with $|x| \in I_j$. Moreover, let us assume the weights are approximately $1$ at the origin, that is $\phi_{-1} = \psi_{-1} = \phi_0 = \psi_0 = 1$. Let $1 \leq i \leq m$. Assume that the parameters $p, q, \overline{r}, \theta$ satisfy: $$\label{eq:GN_conditions}
\frac{1}{\overline{r}} = \frac{i}{3} + \theta \left( \frac{1}{q} - \frac{m}{3} \right) + \frac{1-\theta}{p}, \qquad \mbox{ and } \qquad \frac{i}{m} \leq \theta < 1,$$ Letting $\overline{r} = \infty$, we have $$\label{eq:GNresultinfty}
\left| \nabla^i f \right| \lesssim\| \psi^m f \|_{L^p}^{1-\theta} \| \phi^m \nabla^m f \|_{L^q}^{\theta} \psi^{-m(1-\theta)} \phi^{-m\theta} + \left\| \psi^m f \right\|_{L^p} \cdot \langle x \rangle^{3\theta (1/q-1/p)-m\theta} \psi^{-m}.$$ If moreover $p = \infty, q = 2$ and $\psi = 1$, we obtain $$\label{eq:GNresultinfty_simplified}
\left| \nabla^i f \right| \lesssim\| f \|_{L^\infty}^{1-\frac{i}{m-3/2}} \| \phi^m \nabla^m f \|_{L^q}^{\frac{i}{m-3/2}} \phi^{-m\frac{i}{m-3/2}} + \left\| f \right\|_{L^\infty} \cdot \langle x \rangle^{-i}.$$ For $\overline{r} \in [1,\infty)$, any $\varepsilon> 0$, and under the extra assumption: $$\label{eq:GN_extracond}
\left( \frac{\phi(x)}{\langle x \rangle \psi(x) }\right)^{m\theta} \langle x \rangle^{3\theta (1/q-1/p)} \lesssim 1$$ we have the weighted Gagliardo-Nirenberg inequalities: $$\label{eq:GNresult}
\| \langle x \rangle^{-\varepsilon} \psi^{m(1-\theta)} \phi^{m\theta} \nabla^i f \|_{L^{\overline{r}}} \lesssim\| \psi^m f \|_{L^p}^{1-\theta} \| \phi^m \nabla^m f \|_{L^q}^{\theta} + \| \psi^m f \|_{L^p}$$ The implicit constants in [\[eq:GNresultinfty\]](#eq:GNresultinfty){reference-type="eqref" reference="eq:GNresultinfty"} and [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"} may depend on the parameters $p, q, i, m, \theta, C$, (as well as $r$, $\varepsilon>0$ for [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"}) but they are independent of $f$ and $\psi$, $\phi$.*
*Proof.* We first show [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"}. We divide $\mathbb R^3$ dyadically, defining $A_{-1} = B(0, 1)$ and $A_j = B(0, 2^{j+1}) \setminus B(0, 2^{j})$ for $j \geq 0$. It is clear that $\mathbb R^3 = \cup_{j \geq -1} A_j$. Now, note that $$\label{eq:rousseau}
\| \langle x \rangle^{-\varepsilon} \psi^{m(1-\theta)} \phi^{m\theta} \nabla^i f \|_{L^{\overline{r}}} \leq \sum_{j=-1}^\infty 2^{-j\varepsilon} \| \mathbbm{1}_{A_j} \psi^{m(1-\theta)} \phi^{m\theta} \nabla^i f \|_{L^{\overline{r}}} \lesssim_\varepsilon\max_{-1 \leq j <\infty } \| \mathbbm{1}_{A_j} \psi^{m(1-\theta)} \phi^{m\theta} \nabla^i f \|_{L^{\overline{r}}},$$ so we reduce to bounding each of the dyadic pieces by the right hand side of [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"}, [\[eq:GNresultinfty\]](#eq:GNresultinfty){reference-type="eqref" reference="eq:GNresultinfty"}.
Now, if the maximum on [\[eq:rousseau\]](#eq:rousseau){reference-type="eqref" reference="eq:rousseau"} is achieved for $j \in \{-1, 0 \}$, we are done, because the Gagliardo-Nirenberg inequality for bounded domains ensures that $$\begin{aligned}
\| \mathbbm{1}_{A_{-1}} \nabla^i f \|_{L^{\overline{r}}} &\lesssim\| \mathbbm{1}_{A_{-1}} \nabla^m f \|_{L^q}^{\theta} \| \mathbbm{1}_{A_{-1}} f \|_{L^p}^{1-\theta} + \| \mathbbm{1}_{A_{-1}} f \|_{L^p} \label{eq:Am1} \\
\| \mathbbm{1}_{A_0} \nabla^i f \|_{L^{\overline{r}}} &\lesssim\| \mathbbm{1}_{A_0} \nabla^m f \|_{L^q}^{\theta} \| \mathbbm{1}_{A_0} f \|_{L^p}^{1-\theta} + \| \mathbbm{1}_{A_0} f \|_{L^p}, \label{eq:A0}\end{aligned}$$ and we have that $\phi_{j}, \psi_{j} = 1$ for $j \in \{-1, 0 \}$ (so the weight's range is inside $[1/C, C]$ on those regions). Thus, we can assume that the maximum is achieved for $j \geq 1$. Now, for $j \geq 1$, consider the function $f_j(x) = f(2^{j} x)$. The change of variables formula yields $$\| \mathbbm{1}_{A_0} \nabla^{i'} f_j \|_{L^{p'}} = 2^{ji'} 2^{-3j/p'} \| \mathbbm{1}_{A_j} \nabla^{i'} f \|_{L^{p'}}$$ Plugging this into [\[eq:A0\]](#eq:A0){reference-type="eqref" reference="eq:A0"}, we obtain $$2^{3j(i/3-1/\overline{r})} \| \mathbbm{1}_{A_j} \nabla^i f \|_{L^{\overline{r}}} \lesssim 2^{3j(m/3-1/q)\theta}\| \mathbbm{1}_{A_j} \nabla^m f \|_{L^q}^{\theta} 2^{(-3j/p)(1-\theta)} \| \mathbbm{1}_{A_j} f \|_{L^p}^{1-\theta} + 2^{-3j/p} \| \mathbbm{1}_{A_j} f \|_{L^p}.$$ Using condition [\[eq:GN_conditions\]](#eq:GN_conditions){reference-type="eqref" reference="eq:GN_conditions"}, this just reads $$\label{eq:Aj}
\| \mathbbm{1}_{A_j} \nabla^i f \|_{L^{\overline{r}}} \lesssim\| \mathbbm{1}_{A_j} \nabla^m f \|_{L^q}^{\theta} \| \mathbbm{1}_{A_j}f \|_{L^p}^{1-\theta} + 2^{3j(-i/3+1/\overline{r}-1/p)} \| \mathbbm{1}_{A_j} f \|_{L^p},$$ for $j \geq 0$.
Now, introducing the weights, we see that $$\begin{aligned}
\| \mathbbm{1}_{A_j} \psi^{m(1-\theta)} \phi^{m\theta} \nabla^i f \|_{L^{\overline{r}}} &\lesssim\psi_j^{m(1-\theta)} \phi_j^{m\theta} \| \mathbbm{1}_{A_j} \nabla^m f \|_{L^q}^{\theta} \| \mathbbm{1}_{A_j} f \|_{L^p}^{1-\theta} + 2^{3j (-i/3+1/\overline{r}-1/p)} \psi_j^{-m\theta} \phi_j^{m\theta} \| \psi^m \mathbbm{1}_{A_j} f \|_{L^p} \notag \\
&\lesssim
\| \mathbbm{1}_{A_j} \phi^m \nabla^m f \|_{L^q}^{\theta} \| \psi^m \mathbbm{1}_{A_j} f \|_{L^p}^{1-\theta} + 2^{j (-i+3/\overline{r}-3/p)} \left( \frac{\phi_j}{\psi_j} \right)^{m\theta} \| \psi^m \mathbbm{1}_{A_j} f \|_{L^p} \label{eq:marat}\end{aligned}$$ where we recall that our weights are within $[\phi_j/C, \phi_j C]$ and $[\psi_j/C, \psi_j C]$, respectively, for $x \in A_j$. Recall also that $\lesssim$ is allowed to depend on $C, i, m, p, q, \overline{r}, \varepsilon$ (in particular, it can absorb constants $C^m$).
Now, note from [\[eq:GN_conditions\]](#eq:GN_conditions){reference-type="eqref" reference="eq:GN_conditions"} $$\label{eq:robespierre}
-i + \frac{3}{\overline{r}} - \frac{3}{p} = - m \theta + \theta \left(\frac{3}{q} - \frac{3}{p} \right).$$ Using that together with the fact that $\phi, \psi$ are within a constant factor of $\phi_j, \psi_j$ for $x \in A_j$, we obtain that $$2^{j (-i+3/\overline{r}-3/p)} \left( \frac{\phi_j}{\psi_j} \right)^{m\theta} = \left( 2^j \right)^{\theta (3/q - 3/p)} \left( \frac{ \phi_j}{\langle x \rangle\psi_j} \right)^{m\theta} \lesssim
\langle x \rangle^{\theta (3/q - 3/p)} \left( \frac{ \phi (x)}{\langle x \rangle \psi (x)} \right)^{m\theta} \lesssim 1.$$ where in the last inequality we used [\[eq:GN_extracond\]](#eq:GN_extracond){reference-type="eqref" reference="eq:GN_extracond"}. Plugging this inequality into [\[eq:marat\]](#eq:marat){reference-type="eqref" reference="eq:marat"}, we conclude [\[eq:GNresult\]](#eq:GNresult){reference-type="eqref" reference="eq:GNresult"}.
Now, let us show [\[eq:GNresultinfty\]](#eq:GNresultinfty){reference-type="eqref" reference="eq:GNresultinfty"}. If $x\in A_{-1}$ we are done by equation [\[eq:Am1\]](#eq:Am1){reference-type="eqref" reference="eq:Am1"} taking $\overline{r} = \infty$. Thus, let us suppose that $x \in A_j$ for some $j\geq 0$. Using the same dilation argument on the Gagliardo-Nirenberg inequality for bounded domains as in the $\overline{r}<\infty$ case, we get [\[eq:marat\]](#eq:marat){reference-type="eqref" reference="eq:marat"} in a completely analogous way. Therefore: $$\begin{aligned}
|\nabla^{i} f(x)| \psi^{m(1-\theta)}(x) \phi^{m\theta}(x) &\leq
\| \mathbbm{1}_{A_j} \phi^m \nabla^m f \|_{L^q}^{\theta} \| \psi^m \mathbbm{1}_{A_j} f \|_{L^p}^{1-\theta} \\
&\quad + 2^{j (-i+3/\overline{r}-3/p)} \left( \frac{\phi_j}{\psi_j} \right)^{m\theta} \| \psi^m \mathbbm{1}_{A_j} f \|_{L^p}\end{aligned}$$ Lastly, since $x\in A_j$, we have $$2^{j (-i+3/\overline{r}-3/p)} \left( \frac{\phi_j}{\psi_j} \right)^{m\theta} \lesssim\langle x \rangle^{-i+3/\overline{r}-3/p} \left( \frac{\phi (x)}{\psi (x)} \right)^{m\theta} = \langle x \rangle^{3\theta (1/q-1/p)} \left( \frac{\phi (x)}{\langle x \rangle \psi (x)} \right)^{m\theta}$$ using again [\[eq:robespierre\]](#eq:robespierre){reference-type="eqref" reference="eq:robespierre"}. Thus, we conclude [\[eq:GNresultinfty\]](#eq:GNresultinfty){reference-type="eqref" reference="eq:GNresultinfty"}. ◻
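The exponent bookkeeping used in this proof (the vanishing of the scaling exponent in the first term of the dyadic estimate, and the identity [eq:robespierre]) can be double-checked symbolically; the snippet below is an optional aside and assumes nothing beyond the relation defining $1/\overline{r}$:

```python
import sympy as sp

i, m, theta, p, q = sp.symbols('i m theta p q', positive=True)
inv_rbar = i/3 + theta*(1/q - m/3) + (1 - theta)/p     # the relation on the parameters

# exponent of 2^{3j(...)} multiplying the first term after rescaling:
first_term_exp = 3*((m/3 - 1/q)*theta - (1 - theta)/p - i/3 + inv_rbar)
print(sp.simplify(first_term_exp))                     # 0

# the identity -i + 3/rbar - 3/p = -m*theta + theta*(3/q - 3/p):
lhs = -i + 3*inv_rbar - 3/p
rhs = -m*theta + theta*(3/q - 3/p)
print(sp.simplify(lhs - rhs))                          # 0
```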
**Lemma 41**. *Let $f:{\mathbb T}_{L}^3 \to \mathbb R^3$, with $L\geq 1$. Let $I_{-1} = [0, 1]$ and $I_j = [2^{j}, 2^{j+1}]$. Let $\phi (x), \psi(x) \geq 0$ be weights such that there exist values $\phi_j, \psi_j$ and a constant $C$ satisfying that $\phi (x) \in [\phi_j/C, C \phi_j]$ and $\psi(x) \in [\psi_j/C, C\psi_j]$ for every $x$ with $|x| \in I_j$. Moreover, let us assume the weights are approximately $1$ at the origin, that is $\phi_{-1} = \psi_{-1} = \phi_0 = \psi_0 = 1$. Let $1 \leq i \leq m$. Assume that the parameters $p, q, r, \theta$ satisfy: $$\label{eq:GN_conditionstorus}
\frac{1}{\overline{r}} = \frac{i}{3} + \theta \left( \frac{1}{q} - \frac{m}{3} \right) + \frac{1-\theta}{p}, \qquad \mbox{ and } \qquad \frac{i}{m} \leq \theta < 1,$$ Letting $\overline{r} = \infty$, we have $$\label{eq:GNresultinftytorus}
\left| \nabla^i f \right| \lesssim\| \psi^m f \|_{L^p({\mathbb T}_{L}^3)}^{1-\theta} \| \phi^m \nabla^m f \|_{L^q({\mathbb T}_{L}^3)}^{\theta} \psi^{-m(1-\theta)} \phi^{-m\theta} + \left\| \psi^m f \right\|_{L^p({\mathbb T}_{L}^3)} \cdot \langle x \rangle^{3\theta (1/q-1/p)-m\theta} \psi^{-m}.$$ If moreover $p = \infty, q = 2$ and $\psi = 1$, we obtain $$\label{eq:GNresultinfty_simplifiedtorus}
\left| \nabla^i f \right| \lesssim\| f \|_{L^\infty({\mathbb T}_{L}^3)}^{1-\frac{i}{m-3/2}} \| \phi^m \nabla^m f \|_{L^q({\mathbb T}_{L}^3)}^{\frac{i}{m-3/2}} \phi^{-m\frac{i}{m-3/2}} + \left\| f \right\|_{L^\infty({\mathbb T}_{L}^3)} \cdot \langle x \rangle^{-i}.$$ For $\overline{r} \in [1,\infty)$, any $\varepsilon> 0$, and under the extra assumption: $$\label{eq:GN_extracondtorus}
\left( \frac{\phi(x)}{\langle x \rangle \psi(x) }\right)^{m\theta} \langle x \rangle^{3\theta (1/q-1/p)} \lesssim 1$$ we have the weighted Gagliardo-Nirenberg inequalities: $$\label{eq:GNresulttorus}
\| \langle x \rangle^{-\varepsilon} \psi^{m(1-\theta)} \phi^{m\theta} \nabla^i f \|_{L^{\overline{r}}({\mathbb T}_{L}^3)} \lesssim\| \psi^m f \|_{L^p({\mathbb T}_{L}^3)}^{1-\theta} \| \phi^m \nabla^m f \|_{L^q({\mathbb T}_{L}^3)}^{\theta} + \| \psi^m f \|_{L^p({\mathbb T}_{L}^3)}$$ The implicit constants in [\[eq:GNresultinftytorus\]](#eq:GNresultinftytorus){reference-type="eqref" reference="eq:GNresultinftytorus"} and [\[eq:GNresulttorus\]](#eq:GNresulttorus){reference-type="eqref" reference="eq:GNresulttorus"} may depend on the parameters $p, q, i, m, \theta, C$ (as well as $\overline{r}$, $\varepsilon>0$ for [\[eq:GNresulttorus\]](#eq:GNresulttorus){reference-type="eqref" reference="eq:GNresulttorus"}) but they are independent of $f$, $L$, $\psi$ and $\phi$.*
*Proof.* We first divide $\mathbb{T}_{L}^{3}$ dyadically. Let $\delta= 2^{\log_{2}(L)-[\log_{2}(L)]}$, where $[x]$ is the largest integer not greater than $x$. By definition, $\delta \in[1,2)$ and $\log_{2}(\frac{L}{\delta})$ is an integer. Define $\widetilde{A}_{-1}=\mathbb{T}_{\delta}^{3},$ $\widetilde{A}_{j}=\mathbb{T}_{\delta 2^{j+1}}^{3}\backslash \mathbb{T}_{\delta 2^{j}}^{3}$ for $j\geq 0$. Then we have $\mathbb{T}_{L}^{3}=\cup_{-1 \leq j\leq \log_{2}(\frac{L}{\delta})-1}\widetilde{A}_{j}.$
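As a small illustration of this convention (optional; plain Python), note that with $\delta = L\, 2^{-[\log_2 L]}$ one indeed gets $\delta \in [1,2)$ and an integer value of $\log_2(L/\delta)$:

```python
import math

for L in (1.0, 3.7, 16.0, 100.0):
    delta = L / 2**math.floor(math.log2(L))
    print(L, delta, math.log2(L / delta))   # delta lies in [1, 2) and log2(L/delta) is an integer
```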
Now we show that for any $j$ and $x\in \widetilde{A}_{j}$ there exist $\overline{\phi}_{j}$, $\overline{\psi}_{j}$ such that $\phi (x) \in [\overline{\phi}_{j}/C^{3}, C^{3} \overline{\phi}_{j}]$ and $\psi(x) \in [\overline{\psi}_{j}/C^{3}, C^{3}\overline{\psi}_{j}]$. For any $x\in \widetilde{A}_{j}$ we have $\delta 2^{j}\leq |x|\leq \sqrt{2}\delta 2^{j+1}\leq 2^{j+3}$, so $|x|\in I_{j}\cup I_{j+1}\cup I_{j+2}$. Let $\widetilde{\psi}_{j}=\max\{\psi_{j},\psi_{j+1},\psi_{j+2}\}$ and $\widehat{\psi}_{j}=\min\{\psi_{j},\psi_{j+1},\psi_{j+2}\}$. Then we get $\psi(x)\in [\frac{\widehat{\psi}_{j}}{C},C\widetilde{\psi}_{j}]$. Moreover, from the condition on $\psi$ (applied at the common endpoints of consecutive intervals $I_{j'}$), we have $\psi_{j+1}\leq C^{2}\psi_{j}$ and $\psi_{j+2}\leq C^{2}\psi_{j+1}$, as well as the reverse inequalities, hence $\widetilde{\psi}_{j}\leq C^{4}\widehat{\psi}_{j}$. Therefore, setting $\overline{\psi}_{j}:=\sqrt{\widehat{\psi}_{j}\widetilde{\psi}_{j}}$, we obtain $[\widehat{\psi}_{j}/C, C\widetilde{\psi}_{j}]\subset [\overline{\psi}_{j}\frac{1}{C^{3}},\overline{\psi}_{j}C^3]$, and the same argument applies to $\phi$.
Then we can argue exactly as in the proof of the previous lemma, estimating the dyadic pieces on $\widetilde{A}_j$ instead of $A_{j}$. ◻
**Lemma 42**. *Let $f:\mathbb{T}_{L}^3 \to \mathbb R^3$ be a periodic function in $H^{m}(\mathbb{T}_{L}^3)$, and let $C_0\geq 1$, $L\geq 10C_0$. Assume that the parameters $p, q, \overline{r}, \theta$ satisfy: $$\label{eq:GN_conditions02torus_periodic}
\frac{1}{\overline{r}} = \frac{i}{3} + \theta \left( \frac{1}{q} - \frac{m}{3} \right) + \frac{1-\theta}{p}, \qquad \frac{i}{m} \leq \theta < 1, \qquad \mbox{ and } \qquad \frac{m}{3}\geq \frac{1}{q}-\frac{1}{p},$$ we have the Gagliardo-Nirenberg inequalities: $$\label{eq:GNresultnoweighttorus}
\| \nabla^i f \|_{L^{\overline{r}}(|y|\geq C_0, y\in e^s\mathbb{T}_{L}^3)} \lesssim\| f \|_{L^p(|y|\geq C_0, y\in e^s\mathbb{T}_{L}^3)}^{1-\theta} \| \nabla^m f \|_{L^q(|y|\geq C_0, y\in e^s\mathbb{T}_{L}^3)}^{\theta} + \| f \|_{L^p(|y|\geq C_0, y\in e^s\mathbb{T}_{L}^3)}.$$ The implicit constants in [\[eq:GNresultnoweighttorus\]](#eq:GNresultnoweighttorus){reference-type="eqref" reference="eq:GNresultnoweighttorus"} may depend on the parameters $p, q, i, m, \theta, \overline{r}$, but they are independent of $f$ and $L, C_0$.*
**Proof.* We first show for any $k$, $D_{k}=\{y|C_0\leq |y|\leq C_0(\frac{4}{3})^{k}\}$, we have $$\begin{aligned}
\label{GNinequalityDk}
\| \nabla^i f \|_{L^{\overline{r}}(D_k)} \lesssim\| f \|_{L^p(D_k)}^{1-\theta} \| \nabla^m f \|_{L^q(D_k)}^{\theta} + \| f \|_{L^p(D_k)},\end{aligned}$$ with the implicit constant independent of $C_0$, $k$.*
*Let $A$ be the annulus where $1\leq |x| \leq \frac{4}{3}$. Then $D_k=\cup_{j=1}^{k}\lambda_j A$ with $\lambda_j=C_0(\frac{4}{3})^{j-1}.$*
*Moreover, from the Gagliardo-Nirenberg inequality in bounded domains, we have $$\begin{aligned}
\|\nabla^{i}(f(\lambda_j x))\|_{L^{\overline{r}}(A)}\lesssim\|\nabla^{m}(f(\lambda_j x))\|_{L^{q}(A)}^{\theta}\|f(\lambda_j x)\|_{L^{p}(A)}^{1-\theta}+\|f(\lambda_j x)\|_{L^{p}(A)}.\end{aligned}$$ This is equivalent to $$\begin{aligned}
\lambda_{j}^{i-\frac{3}{\overline{r}}}\|\nabla^{i}(f(x))\|_{L^{\overline{r}}(\lambda_j A)}\lesssim\lambda_j^{(m-\frac{3}{q})\theta-\frac{3}{p}(1-\theta)}\|\nabla^{m}(f(x))\|_{L^{q}(\lambda_j A)}^{\theta}\|f( x)\|_{L^{p}(\lambda_j A)}^{1-\theta}+\lambda_j^{-\frac{3}{p}}\|f(x)\|_{L^{p}(\lambda_j A)}.\end{aligned}$$ Since $i-\frac{3}{\overline{r}}=(m-\frac{3}{q})\theta-\frac{3}{p}(1-\theta)$, we have $$\begin{aligned}
\|\nabla^{i}(f(x))\|_{L^{\overline{r}}(D_k)}^{\overline{r}}=\sum_{j=1}^{k}(\|\nabla^{i}(f(x))\|_{L^{\overline{r}}(\lambda_j A)})^{\overline{r}}&\lesssim\sum_{j=1}^{k} \|\nabla^{m}(f(x))\|_{L^{q}(\lambda_j A)}^{\overline{r}\theta}\|f( x)\|_{L^{p}(\lambda_j A)}^{\overline{r}(1-\theta)}+\sum_{j=1}^{k} \lambda_j^{(-\frac{3}{p}+\frac{3}{\overline{r}}-i)\overline{r}}\|f(x)\|_{L^{p}(\lambda_j A)}^{\overline{r}}\\
&\lesssim\sum_{j=1}^{k} \|\nabla^{m}(f(x))\|_{L^{q}(\lambda_j A)}^{\overline{r}\theta}\|f( x)\|_{L^{p}(\lambda_j A)}^{\overline{r}(1-\theta)}+\sum_{j=1}^{k} \big(\lambda_j^{(-\frac{3}{p}+\frac{3}{\overline{r}}-i)\overline{r}}\big)\|f(x)\|_{L^{p}(D_k)}^{\overline{r}}\\
&\lesssim\sum_{j=1}^{k} \big(\|\nabla^{m}(f(x))\|_{L^{q}(\lambda_j A)}^{q}\big)^{\frac{\overline{r}\theta}{q}}\big(\|f( x)\|_{L^{p}(\lambda_j A)}^{p}\big)^{\frac{\overline{r}(1-\theta)}{p}}+\|f(x)\|_{L^{p}(D_k)}^{\overline{r}},\end{aligned}$$ where we use $-\frac{3}{p}+\frac{3}{\overline{r}}-i<0$ in the last inequality.*
*Now we claim for $l_1,l_2>0, l_1+l_2\geq 1$, $a_j,b_j\geq 0$, $$\begin{aligned}
\label{holderine}
\sum_{j}a_j^{l_1}b_j^{l_2}\leq (\sum_{j}a_j)^{l_1}(\sum_{j}b_j)^{l_2}.\end{aligned}$$ In fact, when $l_1\geq \frac{1}{2}$, $l_2\geq \frac{1}{2}$, from Holder's inequality, we have $$\begin{aligned}
&\sum_{j}a_j^{l_1}b_j^{l_2}\leq \sum_{j}a_j^{\frac{1}{2}}b_j^{\frac{1}{2}}(\sup_{j}{a_j})^{l_1-\frac{1}{2}}(\sup_{j}{b_j})^{l_2-\frac{1}{2}}\\
&\leq (\sum_j(a_j))^{\frac{1}{2}}(\sum_j(b_j))^{\frac{1}{2}}(\sum_{j}{a_j})^{l_1-\frac{1}{2}}(\sum_{j}{b_j})^{l_2-\frac{1}{2}}\\
&\leq(\sum_{j}a_j)^{l_1}(\sum_{j}b_j)^{l_2}.\end{aligned}$$ When $l_1\leq \frac{1}{2},$ we have $$\begin{aligned}
&\sum_{j}a_j^{l_1}b_j^{l_2}\leq \sum_{j}a_j^{l_1}b_j^{1-l_1}(\sup_{j}{b_j})^{l_2+l_1-1}\\
&\leq (\sum_ja_j)^{l_1}(\sum_jb_j)^{1-l_1}(\sum_{j}{b_j})^{l_2+l_1-1}\\
&\leq(\sum_{j}a_j)^{l_1}(\sum_{j}b_j)^{l_2}.\end{aligned}$$ Then from [\[holderine\]](#holderine){reference-type="eqref" reference="holderine"}, since $\frac{\overline{r}\theta}{q}+\frac{\overline{r}(1-\theta)}{p}=1-\frac{i\overline{r}}{3}+\theta\frac{m\overline{r}}{3}\geq 1$, we have $$\begin{aligned}
&\|\nabla^{i}(f(x))\|_{L^{\overline{r}}(D_k)}^{\overline{r}}=\sum_{j=1}^{k}(\|\nabla^{i}(f(x))\|_{L^{\overline{r}}(\lambda_j A)})^{\overline{r}}\\
&\lesssim(\sum_{j=1}^{k} \|\nabla^{m}(f(x))\|_{L^{q}(\lambda_j A)}^{q})^{\frac{\overline{r}\theta}{q}}(\sum_{j=1}^{k}\|f( x)\|_{L^{p}(\lambda_j A)}^{p})^{\frac{\overline{r}(1-\theta)}{p}}+\|f(x)\|_{L^{p}(D_k)}^{\overline{r}},\\
&\lesssim\|\nabla^{m}(f(x))\|_{L^{q}(D_k)}^{\overline{r}\theta}\|f(x)\|_{L^{p}(D_k)}^{\overline{r}(1-\theta)}+\|f(x)\|_{L^{p}(D_k)}^{\overline{r}}.\end{aligned}$$ Then [\[GNinequalityDk\]](#GNinequalityDk){reference-type="eqref" reference="GNinequalityDk"} follows.*
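The elementary inequality used above, $\sum_j a_j^{l_1} b_j^{l_2} \leq (\sum_j a_j)^{l_1} (\sum_j b_j)^{l_2}$ for nonnegative $a_j, b_j$ and $l_1, l_2 > 0$ with $l_1 + l_2 \geq 1$, can also be probed numerically (optional; this assumes NumPy, and a random test is of course only a spot-check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    n = rng.integers(1, 20)
    a = rng.uniform(0.0, 5.0, size=n)
    b = rng.uniform(0.0, 5.0, size=n)
    l1 = rng.uniform(0.05, 2.0)
    l2 = rng.uniform(max(0.05, 1.0 - l1), 2.0)      # enforce l1 + l2 >= 1
    lhs = np.sum(a**l1 * b**l2)
    rhs = np.sum(a)**l1 * np.sum(b)**l2
    ok = ok and (lhs <= rhs * (1 + 1e-12))
print(ok)   # True in all sampled cases
```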
*For $e^{s}\mathbb{T}_{L}^3$, it is clear that there exists $k$ such that $D_{k-1}\subset \mathbb{T}_{L}^3 \subset D_{k}.$ Moreover, for any $x\in \partial D_{k-1}$, we have $$\begin{aligned}
&\text{dist}(x,\partial D_{k})\leq \frac{1}{3}C_{0}(\frac{4}{3})^{k-1}\leq \frac{1}{3}L\leq \frac{1}{2}(L-C_0).\end{aligned}$$ Since $f$ is a periodic function on $e^s\mathbb{T}_{L}^3$, there is a natural periodic extension $E(f)$ to $D_k$ satisfying $$\|\nabla^{i}E(f)\|_{L^{l}(D_k)}\leq 3^3 \|\nabla^{i}f\|_{L^{l}(|y|\geq C_0, y\in\mathbb{T}_{L}^3)}$$ for all $i,l$. Then we have [\[eq:GNresultnoweighttorus\]](#eq:GNresultnoweighttorus){reference-type="eqref" reference="eq:GNresultnoweighttorus"}. ◻*
**Lemma 43**. *Let $f:\mathbb R^3 \to \mathbb R^3$ be in $H^{m}(\mathbb R^3 )$, and let $C_0\geq 1$, $L\geq 10C_0$. Assume that the parameters $p, q, \overline{r}, \theta$ satisfy: $$\label{eq:GN_conditions02torus}
\frac{1}{\overline{r}} = \frac{i}{3} + \theta \left( \frac{1}{q} - \frac{m}{3} \right) + \frac{1-\theta}{p}, \qquad \frac{i}{m} \leq \theta < 1, \qquad \mbox{ and } \qquad \frac{m}{3}\geq \frac{1}{q}-\frac{1}{p},$$ we have the Gagliardo-Nirenberg inequalities: $$\label{eq:GNresultnoweightwholespace}
\| \nabla^i f \|_{L^{\overline{r}}(|y|\geq C_0)} \lesssim\| f \|_{L^p(|y|\geq C_0)}^{1-\theta} \| \nabla^m f \|_{L^q(|y|\geq C_0)}^{\theta} + \| f \|_{L^p(|y|\geq C_0)}.$$ The implicit constants in [\[eq:GNresultnoweightwholespace\]](#eq:GNresultnoweightwholespace){reference-type="eqref" reference="eq:GNresultnoweightwholespace"} may depend on the parameters $p, q, i, m, \theta, \overline{r}$, but they are independent of $f$ and $L, C_0$.*
*Proof.* We can write $D=\{y : |y|\geq C_0\}=\cup_{j=1}^{\infty}\lambda_j A$, where $A$ and $\lambda_j$ are the same as in Lemma [Lemma 42](#lemma:GN_generalnoweighttorus){reference-type="ref" reference="lemma:GN_generalnoweighttorus"}. Then the same proof applies. ◻
----------------------------------------
**Gonzalo Cao-Labora**
Department of Mathematics
Massachusetts Institute of Technology
182 Memorial Drive, 2-334
Cambridge, MA 02139, USA
Email: gcaol\@mit.edu
**Javier Gómez-Serrano**
Department of Mathematics
Brown University
314 Kassar House, 151 Thayer St.
Providence, RI 02912, USA
Email: javier_gomez_serrano\@brown.edu
**Jia Shi**
Department of Mathematics
Massachusetts Institute of Technology
182 Memorial Drive, 2-157
Cambridge, MA 02139, USA
Email: jiashi\@mit.edu
**Gigliola Staffilani**
Department of Mathematics
Massachusetts Institute of Technology
182 Memorial Drive, 2-251
Cambridge, MA 02139, USA
Email: gigliola\@math.mit.edu
----------------------------------------
[^1]: With a slight abuse of notation, we denote $\| f \|_{L^2}^2 = \| f_u \|_{L^2}^2 + \| f_s \|_{L^2}^2$.
[^2]: $\widetilde{\eta}$ depends only on the profile from [@Buckmaster-CaoLabora-GomezSerrano:implosion-compressible], that is, it depends on $r$ and $\gamma$. We let $r$ and $\gamma$ (and therefore our profile) be fixed throughout this work.
---
abstract: |
We study the algebraic complexity of Euclidean distance minimization from a generic tensor to a variety of rank-one tensors. The Euclidean Distance (ED) degree of the Segre-Veronese variety counts the number of complex critical points of this optimization problem. We regard this invariant as a function of inner products and conjecture that it achieves its minimal value at Frobenius inner product. We prove our conjecture in the case of matrices. We discuss the above optimization problem for other algebraic varieties, classifying all possible values of the ED degree. Our approach combines tools from Singularity Theory, Morse Theory, and Algebraic Geometry.
address:
- Friedrich-Schiller-Universität Jena, Institut für Mathematik, Ernst-Abbe-Platz 2 07737 Jena, Germany
- |
Universidade Estadual de Campinas (UNICAMP)\
Instituto de Matemática, Estatística e Computação Científica (IMECC)\
Rua Sérgio Buarque de Holanda, 651\
13083-970 Campinas-SP, Brazil
- INRIA and CMAP, École Polytechnique, IP Paris, CNRS, France
- Department of Mathematics, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
author:
- Khazhgali Kozhasov
- Alan Muniz
- Yang Qi
- Luca Sodomaco
bibliography:
- bibliography.bib
title: On the minimal algebraic complexity of the rank-one approximation problem for general inner products
---
# Introduction
The best rank-one approximation problem is classically studied in Multilinear Algebra and Optimization [@EckartYoung; @Banach; @Friedland2011OnBR; @GKT2013; @Stu22icm]. Given finite-dimensional real vector spaces $V^{\mathsmaller{\mathbb{R}}}_1, \ldots, V^{\mathsmaller{\mathbb{R}}}_k$ and (the square of) some norm $q$ on their tensor product $V_1^{\mathsmaller{\mathbb{R}}}\otimes\cdots\otimes V^{\mathsmaller{\mathbb{R}}}_k$, the problem consists in finding global minima of the distance function $(x_1,\ldots, x_k)\mapsto q(u-x_1\otimes\cdots\otimes x_k)$ from a given *data tensor* $u\in V_1^{\mathsmaller{\mathbb{R}}}\otimes\cdots\otimes V^{\mathsmaller{\mathbb{R}}}_k$ to the set of rank-one (also known as decomposable) tensors, $$\begin{aligned}
\label{eq:BROAP}
\min_{x_i\in V^{\mathsmaller{\mathbb{R}}}_i,\, i\in[k]} q(u-x_1\otimes\cdots\otimes x_k).\end{aligned}$$ Any minimizer in [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} is referred to as *a best rank-one approximation of $u$* (with respect to $q$).
The above problem has numerous applications in mathematics and other areas such as Data Science [@Udell2017WhyAB; @GJZh; @kohn2022geometry; @kohn2023geometry] and Signal Processing [@KR; @DELATHAUWER200431; @Co; @QCL16; @SDFHPF]. Also, optimization of a homogeneous polynomial over the Euclidean sphere in $V^{\mathsmaller{\mathbb{R}}}$ can be modeled as [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} for a *symmetric* tensor $u\in V^{\mathsmaller{\mathbb{R}}}\otimes\cdots\otimes V^{\mathsmaller{\mathbb{R}}}$, see [@Nie2013SemidefiniteRF; @Kozhasov2017OnFR; @KlerkLaurent]. Furthermore, in Quantum Information Theory, an analog of [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} for complex tensors is equivalent to measuring *quantum entanglement of multipartite states* [@Wei2003GeometricMO; @Derksen2017TheoreticalAC; @DM2018].
In the case of matrices (that is, tensors of order $k=2$), problem [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} is in general much more manageable. For instance, when $q$ is associated with the *Frobenius* (also known as *trace*) *inner product*, the celebrated Eckart-Young theorem [@EckartYoung] gives a recipe for finding a best rank-one approximation to a given *data matrix* $u\in V^{\mathsmaller{\mathbb{R}}}_1\otimes V^{\mathsmaller{\mathbb{R}}}_2$: one simply needs to truncate the Singular Value Decomposition of $u$. There is no direct way to solve the analogous problem for tensors of order $k\geq 3$, which is known to be NP-hard [@Nesterov; @HiLi]. Despite this and the fact that a *best rank-$r$ approximation* of a high-order tensor might not exist for $r>1$ [@deSilva2008tensor], problem [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} lies at the core of all existing low-rank approximation schemes for tensors [@KoldaBader; @GKT2013]. For example, *successive rank-one approximations* are used to find a reasonable low-rank approximation of a tensor [@daSilva2015iterative]. Consequently, theoretical advances in understanding problem [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} are of great importance.
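The following short NumPy sketch (included here only as an illustration; the $3\times 4$ data matrix is an arbitrary choice) makes the Eckart-Young recipe concrete: truncating the Singular Value Decomposition to its leading singular triple gives a best rank-one approximation in the Frobenius norm, and the attained distance is the $\ell^2$-norm of the discarded singular values.

```python
import numpy as np

# an arbitrary 3x4 data matrix (illustrative choice)
u = np.array([[3., 1., 0., 2.],
              [1., 4., 1., 0.],
              [0., 2., 5., 1.]])

# Eckart-Young: truncate the SVD to the leading singular triple
U, s, Vt = np.linalg.svd(u)
best_rank_one = s[0] * np.outer(U[:, 0], Vt[0, :])

# the Frobenius distance to the truncation ...
print(np.linalg.norm(u - best_rank_one, 'fro'))
# ... equals the l2-norm of the discarded singular values
print(np.sqrt(np.sum(s[1:] ** 2)))
```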
When $q$ in [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} is the (square of the) norm associated with an inner product $Q$ on the space of tensors $V^{\mathsmaller{\mathbb{R}}}_1\otimes \dots\otimes V^{\mathsmaller{\mathbb{R}}}_k$, a best rank-one approximation, being a minimizer, is a critical point of the distance function to the manifold of rank-one tensors. *Critical rank-one approximations* (to a data tensor $u$) are by definition critical points of the distance function in [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"}, see [@FO; @DH2016; @DOT; @HTT2023]. These are precisely real solutions to a certain system of polynomial equations in the entries of the rank-one tensor. When the data tensor $u$ is general enough, the number of complex solutions of the same system of polynomial equations is finite and is known as the *Euclidean Distance (ED) degree* of the algebraic variety of complex rank-one tensors, see [@DHOST] and Section [2](#sec: prelim){reference-type="ref" reference="sec: prelim"} for more details. This number is a sort of *algebraic complexity* of the rank-one approximation problem. It provides an upper bound on the number of critical rank-one approximations to a general data tensor. In the present work, we regard the ED degree as a function of the inner product $Q$ and attempt to minimize it. Our motivation comes from Optimization and Data Science: a target function with fewer critical points is potentially easier to minimize via local optimization methods such as gradient descent, Newton method, and their variations [@ge2022optimization; @kohn2023function].
We conjecture that the Frobenius inner product achieves the smallest possible ED degree, see Section [3](#sec: product metrics tensor spaces){reference-type="ref" reference="sec: product metrics tensor spaces"} and Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"}. This problem turns out to be nontrivial already in the case of matrices, for which we are able to settle it. In fact, in Section [4](#sec: minimum ED degree Segre-Veronese){reference-type="ref" reference="sec: minimum ED degree Segre-Veronese"} we show that, with respect to an arbitrary fixed inner product, any sufficiently general data matrix of size $n\times m$ possesses at least $\min(n,m)$ many critical rank-one approximations. Hence, the algebraic complexity of the best rank-one approximation problem for matrices cannot be smaller than $\min(n,m)$, which equals the ED degree of the Frobenius inner product. Our result can be seen as a qualitative version of the Eckart-Young theorem with respect to a general inner product. In practical terms, it asserts that, topologically, the distance function from a general data matrix to the manifold of rank-one matrices cannot be too simple, as it has at least one critical point of each *Morse index*, see Theorem [Theorem 25](#thm: lower bound Q-distance function){reference-type="ref" reference="thm: lower bound Q-distance function"}.
Symmetric tensors of order $k$ are in one-to-one correspondence with degree $k$ homogeneous polynomials and (up to a sign) a symmetric rank-one tensor corresponds to the $k$-th power of a linear form. The Frobenius inner product of two symmetric tensors equals the so-called *Bombieri-Weyl* inner product of the associated homogeneous polynomials. In this case, a best rank-one approximation to a symmetric data tensor $u$ can be always chosen to be symmetric [@Banach; @Friedland2011BestRO] and problem [\[eq:BROAP\]](#eq:BROAP){reference-type="eqref" reference="eq:BROAP"} is equivalent to optimization of the associated homogeneous polynomial over the Euclidean sphere in $V$, see [@QiLuo Thm. $2.19$]. The algebraic measure of complexity, given by the ED degree of the variety of $k$-th powers of complex linear forms, provides a sharp upper bound on the number of critical points of the mentioned polynomial optimization problem, see [@Kozhasov2017OnFR]. A particular case of Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} then asserts that the Bombieri-Weyl inner product achieves the smallest possible ED degree among all inner products on the space of homogeneous polynomials, see Theorem [Theorem 26](#thm: symmetric_matrices){reference-type="ref" reference="thm: symmetric_matrices"}.
Distance minimization is not peculiar only to varieties of rank-one matrices and tensors. More generally, one can consider an arbitrary algebraic subset $\mathcal{X}^\mathsmaller{\mathbb{R}}$ of a real vector space $V^\mathsmaller{\mathbb{R}}$. Again, denoting by $q$ the (square of the) norm associated with an inner product $Q$ on $V^\mathsmaller{\mathbb{R}}$, we are interested in the optimization problem $$\label{eq: min distance function}
\min_{x\in \mathcal{X}^\mathsmaller{\mathbb{R}}}q(u-x)\,,$$ where $u\in V^\mathsmaller{\mathbb{R}}$ is a given data point. Again, one defines the ED degree of the algebraic variety $\mathcal{X}$, the complexification of $\mathcal{X}^{\mathsmaller{\mathbb{R}}}$, as the number of complex critical points of the distance function in [\[eq: min distance function\]](#eq: min distance function){reference-type="eqref" reference="eq: min distance function"} for a general $u\in V^{\mathsmaller{\mathbb{R}}}$. As was shown in [@DHOST], the ED degree does not depend on $Q$ as long as it is general enough. This number, called the *generic ED degree* of $\mathcal{X}$, gives an upper bound on the ED degree of $\mathcal{X}$ with respect to any other inner product $Q$ on $V^{\mathsmaller{\mathbb{R}}}$. The *ED defect* of $\mathcal{X}$ with respect to $Q$ is then defined as the difference between the generic ED degree of $\mathcal{X}$ and the ED degree of $\mathcal{X}$ relative to $Q$. Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} can be equivalently stated as follows: the largest ED defect of the variety of rank-one tensors is achieved for the Frobenius inner product. An elegant formula for the ED defect was discovered in [@maxim2020defect]. In Section [5](#sec: ED defects){reference-type="ref" reference="sec: ED defects"} we exploit it to analyze ED defects of some special varieties, such as quadratic hypersurfaces, rational normal curves, and Veronese surfaces. In some cases, we can provide a complete classification of possible ED defects. These results in particular give further evidence to our Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} beyond the matrix case.
Finally, in Section [6](#sec: ED polynomial){reference-type="ref" reference="sec: ED polynomial"} we link our results on the defectivity of the ED degree to the study of the $\varepsilon$-offset hypersurface of an algebraic variety $\mathcal{X}^\mathsmaller{\mathbb{R}}$, that is the locus of data points having distance $\varepsilon$ from $\mathcal{X}^\mathsmaller{\mathbb{R}}$, see [@ottaviani2020distance].
# Acknowledgements {#acknowledgements .unnumbered}
This work is partially supported by the Thematic Research Programme *"Tensors: geometry, complexity and quantum entanglement"*, University of Warsaw, Excellence Initiative - Research University and the Simons Foundation Award No. 663281 granted to the Institute of Mathematics of the Polish Academy of Sciences for the years 2021-2023. We are thankful for the support and great working conditions. AM is supported by INCTmat/MCT/Brazil, CNPq grant number 160934/2022-2. LS is supported by a KTH grant by the Verg foundation and Brummer & Partners MathDataLab. We are also grateful to Jose Israel Rodriguez for a discussion that inspired Theorem [Theorem 23](#thm: local minimality){reference-type="ref" reference="thm: local minimality"}.
# Preliminaries {#sec: prelim}
In this section, we give some basic definitions and state results that play important roles in the research concerning the Euclidean Distance degree. For further details, we refer to [@DHOST].
## Euclidean Distance degree
Let $V$ be an $(N+1)$-dimensional complex vector space with a real structure. Its real part $V^\mathsmaller{\mathbb{R}}$ is equipped with a positive definite symmetric bilinear form $Q\colon V^\mathsmaller{\mathbb{R}}\times V^\mathsmaller{\mathbb{R}}\to\mathbb{R}$, hence the pair $(V^\mathsmaller{\mathbb{R}}, Q)$ is a Euclidean space. The quadratic form associated with $Q$ is denoted by $q$ with $q(x)=Q(x,x)$ for all $x\in V^\mathsmaller{\mathbb{R}}$. We extend $Q$ and $q$ to a symmetric bilinear form and, respectively, a quadratic form on the complex space $V$, denoting these by the same letters. Furthermore, we denote by $\mathcal{Q}$ the complex quadric hypersurface defined by the equation $q(x)=0$, and it is commonly referred to as the *isotropic quadric*. We may regard $\mathcal{Q}$ as an affine cone in $V$ or as a projective variety in $\mathbb{P}(V)=\mathbb{P}^N$. Note that, since $Q$ is positive definite, the real locus $\mathcal{Q}^\mathsmaller{\mathbb{R}}=\mathcal{Q}\cap\mathbb{P}(V^\mathsmaller{\mathbb{R}})$ of $\mathcal{Q}$ is empty.
Consider a real projective variety $\mathcal{X}\subset\mathbb{P}^N$, whose real part is denoted by $\mathcal{X}^\mathsmaller{\mathbb{R}}\subset\mathbb{P}(V^\mathsmaller{\mathbb{R}})=\mathbb{P}^N_{\mathsmaller{\mathbb{R}}}$. The *affine cone* over $\mathcal{X}$ is denoted by $C\mathcal{X}\subset V$. Note that $C\mathcal{X}$ is a real affine variety, and the set of its real points satisfies $(C\mathcal{X})^\mathsmaller{\mathbb{R}}=C\mathcal{X}^\mathsmaller{\mathbb{R}}\subset V^\mathsmaller{\mathbb{R}}$. Given a *data point* $u\in V^\mathsmaller{\mathbb{R}}$, we consider the *distance function* $$\label{eq: distance function cone}
\mathop{\mathrm{dist}}_{C\mathcal{X}^\mathsmaller{\mathbb{R}},u}^Q\colon C\mathcal{X}^\mathsmaller{\mathbb{R}}\to\mathbb{R}\,,\quad x\mapsto q(u-x)\,.$$ The main optimization problem is $$\label{eq: main optimization problem}
\min_{x\in C\mathcal{X}^\mathsmaller{\mathbb{R}}}\mathop{\mathrm{dist}}_{C\mathcal{X}^\mathsmaller{\mathbb{R}},u}^Q(x)\,.$$ The algebraic relaxation of the above problem consists of studying all critical points of $\mathop{\mathrm{dist}}_{C\mathcal{X}^\mathsmaller{\mathbb{R}},u}^Q$, seen as a polynomial objective function defined over the complex variety $C\mathcal{X}$ and taking complex values. Indeed, the smooth local minimizers of the distance function sit among these critical points. A point $x$ in the (Zariski open) subset $(C\mathcal{X})_{\mathrm{sm}}\subset C\mathcal{X}$ of smooth points of $C\mathcal{X}$ is *critical* for $\mathop{\mathrm{dist}}_{C\mathcal{X}^\mathsmaller{\mathbb{R}},u}^Q$ if the vector $u-x$ belongs to the *normal space* of $C\mathcal{X}$ at $x$: $$\label{eq: def normal space}
N_xC\mathcal{X}\coloneqq\{v^*\in V \mid\text{$Q(v^*,v)=0$ for all $v\in T_xC\mathcal{X}$}\}\,,$$ where $T_xC\mathcal{X}$ is the tangent space of $C\mathcal{X}$ at $x$. The *ED correspondence* of $\mathcal{X}$ is the incidence variety $$\label{eq: ED correspondence}
\mathcal{E}(\mathcal{X},Q)\coloneqq\overline{\left\{(x,u)\in V\times V\mid\text{$x\in(C\mathcal{X})_\mathrm{sm}$ and $u-x\in N_xC\mathcal{X}$}\right\}}\,,$$ where the closure is taken with respect to the Zariski topology on $V\times V$. Denote by $\mathrm{pr}_1$ and $\mathrm{pr}_2$ the projections of $\mathcal{E}(\mathcal{X},Q)$ onto the two factors of $V\times V$. On one hand, the first projection $\mathrm{pr}_1\colon\mathcal{E}(\mathcal{X},Q)\to C\mathcal{X}$ is surjective and provides the structure of a vector bundle over $(C\mathcal{X})_{\mathrm{sm}}$, whose rank equals the codimension of $\mathcal{X}\subset\mathbb{P}^N$. Consequently, $\mathcal{E}(\mathcal{X},Q)$ is an (irreducible) variety in $V\times V$ of dimension $N+1=\dim(V)$. On the other hand, the second projection $\mathrm{pr}_2 \colon\mathcal{E}(\mathcal{X},Q)\to V$ is surjective, and its fibers are generically of constant cardinality. By definition, this cardinality $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})$ is the *Euclidean Distance (ED) degree of $\mathcal{X}$* with respect to $Q$, and those $x\in(C\mathcal{X})_{\mathrm{sm}}$ for which $(x,u)\in \mathcal{E}(\mathcal{X},Q)$ are referred to as *ED degree critical points* (relative to $u$). The *ED discriminant* $\Sigma(\mathcal{X},Q)$ is then defined as the Zariski closure in $V$ of the set of critical values of $\mathrm{pr}_2$: $$\begin{aligned}
\label{eq: ED disc}
\Sigma(\mathcal{X},Q)\coloneqq\overline{\left\{ u\in V \mid\text{$\dim(\textrm{\normalfont{d}}_{(x,u)}\mathrm{pr}_2(T_{(x,u)} \mathcal{E}(\mathcal{X},Q)))< N+1$ for some $(x,u)\in \mathcal{E}(\mathcal{X},Q)_{\mathrm{sm}}$}\right\}}\,,\end{aligned}$$ where $\textrm{\normalfont{d}}_{(x,u)} \mathrm{pr}_2\colon T_{(x,u)} \mathcal{E}(\mathcal{X},Q)\to T_u V=V$ is the differential of $\mathrm{pr}_2$. In particular, $u\in \Sigma(\mathcal{X},Q)$ if there are fewer than $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})$ ED degree critical points relative to $u$.
**Definition 1**. A data point $u\in V$ is called *generic* if it lies in the complement of $\Sigma(\mathcal{X},Q)$.
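As a toy illustration of these notions (a small SymPy sketch; the ellipse and the data point are arbitrary choices made only for concreteness), the critical points of the squared distance from a data point to the ellipse $x^2/4+y^2=1$ are cut out by the Lagrange conditions below. For the data point chosen here the system has $4$ complex solutions, only some of which are real.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
g = x**2/4 + y**2 - 1                      # an ellipse (illustrative choice)
u = (1, 2)                                 # a data point (illustrative choice)
f = (x - u[0])**2 + (y - u[1])**2          # squared distance to u

# Lagrange conditions: grad f is parallel to grad g on the ellipse
eqs = [sp.diff(f, x) - lam * sp.diff(g, x),
       sp.diff(f, y) - lam * sp.diff(g, y),
       g]
sols = sp.solve(eqs, [x, y, lam])
print(len(sols))                                            # 4 complex critical points
print(sum(1 for s in sols if all(c.is_real for c in s)))    # the real ones among them
```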
**Remark 2**. *In the above definition of Euclidean Distance degree, $Q$ can be any nondegenerate (not necessarily positive definite or even real) symmetric bilinear form. In the following, by $S^2 V$ and $S^2 V^\mathsmaller{\mathbb{R}}$ we denote the spaces of symmetric bilinear forms on $V$ and on $V^\mathsmaller{\mathbb{R}}$, respectively. Furthermore, the (real) positive definite forms in $S^2V$ and $S^2 V^\mathsmaller{\mathbb{R}}$ constitute convex cones, denoted by $S_+^2 V$ and $S^2_+ V^\mathsmaller{\mathbb{R}}$, respectively. By definition, $S^2_+ V^\mathsmaller{\mathbb{R}}$ consists of all inner products on $V^\mathsmaller{\mathbb{R}}$.*
## Chern classes, duality, and polar classes
We briefly outline some important notions of intersection theory and projective duality theory that are used in this paper.
Let $\mathcal{X}\subset\mathbb{P}^N$ be an irreducible projective variety. The $i$th Chow group of $\mathcal{X}$ is denoted by $A^i(\mathcal{X})$ and is the group of formal linear combinations with integer coefficients of $i$-codimensional irreducible subvarieties of $\mathcal{X}$, up to rational equivalence. The direct sum $A^*(\mathcal{X})=\bigoplus_{i\ge 0}A^i(\mathcal{X})$ equipped with the intersection product is the Chow ring (or intersection ring) of $\mathcal{X}$. We refer to [@fulton1998intersection Chapter 8] for more details on the subject.
**Example 3**. If $\mathcal{X}=\mathbb{P}^N\times\mathbb{P}^N=\mathbb{P}(V)\times\mathbb{P}(V)$ embedded in $\mathbb{P}(V\otimes V)\cong\mathbb{P}^{N(N+1)}$, we have that $A^*(\mathbb{P}^N\times\mathbb{P}^N)\cong\mathbb{Z}[s,t]/(s^{N+1},t^{N+1})$, where $s$ and $t$ are the pullbacks of the hyperplane classes in the two factors of $\mathbb{P}^N\times\mathbb{P}^N$ via the two projection maps.
Now let $\mathcal{E}\to\mathcal{X}$ be a vector bundle on $\mathcal{X}$ of rank $r$. For every $i\in\{0,\ldots,r\}$ one can associate to $\mathcal{E}$ a rational equivalence class $c_i(\mathcal{E})\in A^i(\mathcal{X})$, called the *$i$th Chern class* of $\mathcal{E}$. We briefly recall some basic facts about Chern classes, and we refer to [@fulton1998intersection §3.2] for more details. If $r=n=\dim(\mathcal{X})$, then $c_n(\mathcal{E})\in A^n(\mathcal{X})$ is a zero-dimensional cycle, that is, a finite sum of classes of points. The sum of the coefficients of this cycle is an integer and is called the *top Chern number of $\mathcal{E}$*. The *total Chern class* of $\mathcal{E}$ is $c(\mathcal{E})\coloneqq\sum_{i=0}^rc_i(\mathcal{E})$. We recall two formal properties of Chern classes, which are useful tools to compute Chern classes of more complicated vector bundles on $\mathcal{X}$. First, if $D$ is a Cartier divisor on $\mathcal{X}$ with associated line bundle $\mathcal{L}=\mathcal{O}(D)$, then $c(\mathcal{L})=1+D$, or equivalently $c_1(\mathcal{L})=D$. Secondly, given a short exact sequence $0\to\mathcal{A}\to\mathcal{B}\to\mathcal{C}\to 0$ of vector bundles on $\mathcal{X}$, the *Whitney sum* property $c(\mathcal{A})\cdot c(\mathcal{C})=c(\mathcal{B})$ holds.
**Example 4**. Consider the case $\mathcal{X}=\mathbb{P}^N=\mathbb{P}(V)$ for some complex vector space $V$ of dimension $N+1$. We compute the total Chern class of the tangent bundle $\mathcal{T}_{\mathbb{P}^N}$ of $\mathbb{P}^N$, which is typically denoted by $c(\mathbb{P}^N)$. This is an example in which the rank of a vector bundle equals the dimension of its base variety. We denote by $\mathcal{O}_{\mathbb{P}^N}$ the trivial line bundle $\mathbb{P}^N\times L$, where $L$ is a one-dimensional complex vector space. In particular $\mathcal{O}_{\mathbb{P}^N}\otimes V$ is a trivial vector bundle on $\mathbb{P}^N$ of rank $N+1$. Let $\mathcal{O}_{\mathbb{P}^N}(-1)$ be the tautological line bundle on $\mathbb{P}^N$, whose fiber $\mathcal{O}_{\mathbb{P}^N}(-1)_x$ at $x=[v]\in\mathbb{P}^N$ is the line $\langle v\rangle$ spanned by $v$. The hyperplane bundle of $\mathbb{P}^N$ is the dual $\mathcal{O}_{\mathbb{P}^N}(1)\coloneqq\mathcal{O}_{\mathbb{P}^N}(-1)^\vee$ of the tautological line bundle. Furthermore, for every nonzero integer $d$, we define $\mathcal{O}_{\mathbb{P}^N}(d)\coloneqq\mathcal{O}_{\mathbb{P}^N}(1)^{\otimes d}$ if $d>0$ and $\mathcal{O}_{\mathbb{P}^N}(d)\coloneqq\mathcal{O}_{\mathbb{P}^N}(-d)^\vee$ if $d<0$. In particular, for all $d\in\mathbb{Z}$, the total Chern class of $\mathcal{O}_{\mathbb{P}^N}(d)$ is $c(\mathcal{O}_{\mathbb{P}^N}(d))=1+dh$, where $h\in A^1(\mathbb{P}^N)$ is the rational equivalence class of a hyperplane. From the Euler sequence [@hartshorne1977algebraic Example II.8.20.1] $$\label{eq: Euler sequence}
0\to\mathcal{O}_{\mathbb{P}^N}\to\mathcal{O}_{\mathbb{P}^N}(1)\otimes V\to\mathcal{T}_{\mathbb{P}^N}\to 0$$ and the Whitney sum property explained before, we obtain that $c(\mathbb{P}^N)=c(\mathcal{O}_{\mathbb{P}^N}(1)\otimes V)/c(\mathcal{O}_{\mathbb{P}^N})$. On one hand, from the previous identities, we have $c(\mathcal{O}_{\mathbb{P}^N})=1+0h=1$. On the other hand $\mathcal{O}_{\mathbb{P}^N}(1)\otimes V\cong\mathcal{O}_{\mathbb{P}^N}(1)^{\oplus(N+1)}$, and the total Chern class of a direct sum of vector bundles is the product of the total Chern classes of its summands, hence $c(\mathcal{O}_{\mathbb{P}^N}(1)\otimes V)=(1+h)^{N+1}$. In conclusion, we have that $c(\mathbb{P}^N)=(1+h)^{N+1}=\sum_{i=0}^N\binom{N+1}{i}h^i$, since we are working in $A^*(\mathbb{P}^N)\cong\mathbb{Z}[h]/(h^{N+1})$. In particular, the top Chern number of $\mathbb{P}^N$ is $c_N(\mathbb{P}^N)=N+1$.
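The computation in Example 4 is easy to reproduce symbolically; the following SymPy lines (with $N$ chosen arbitrarily) expand $(1+h)^{N+1}$ and read off the Chern numbers in $A^*(\mathbb{P}^N)\cong\mathbb{Z}[h]/(h^{N+1})$.

```python
import sympy as sp

h = sp.symbols('h')
N = 4                                          # illustrative choice
c = sp.expand((1 + h) ** (N + 1))
# work modulo h^(N+1): only the coefficients of h^0, ..., h^N survive
chern = [c.coeff(h, i) for i in range(N + 1)]
print(chern)        # the binomial coefficients binomial(N+1, i)
print(chern[N])     # top Chern number N + 1, the Euler characteristic of P^N
```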
We recall the notion of the dual variety of a projective variety, see [@GKZ Chapter 1].
**Definition 5**. Let $\mathcal{X}\subset\mathbb{P}^N$ be an irreducible projective variety. A hyperplane $H\subset\mathbb{P}^N$ is *tangent to $\mathcal{X}$* if $H$ contains the projective tangent space $T_x\mathcal{X}$ for some smooth point $x$ of $\mathcal{X}$. The *dual variety* $\mathcal{X}^{\vee}$ of $\mathcal{X}$ is the Zariski closure in $(\mathbb{P}^N)^*\coloneqq\mathbb{P}(V^*)$ of the set of all hyperplanes tangent to $\mathcal{X}$. A variety $\mathcal{X}$ is *dual defective* if $\mathrm{codim}(\mathcal{X}^{\vee})>1$. Otherwise, it is *dual nondefective*. When $\mathcal{X}= \mathbb{P}^N$, we have $\mathcal{X}^{\vee} = \emptyset$ and we set $\mathrm{codim}(\mathcal{X}^{\vee}) = N+1$.
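In the simplest nontrivial case, a smooth plane conic $\{x^{\top}Ax=0\}\subset\mathbb{P}^2$ has as dual variety the conic cut out by $A^{-1}$ (equivalently, by the adjugate of $A$): the tangent line at a smooth point $[v]$ has coordinates $Av$, and $(Av)^{\top}A^{-1}(Av)=v^{\top}Av=0$. A short SymPy check (the matrix $A$ and the point are illustrative choices):

```python
import sympy as sp

# a smooth plane conic {x^T A x = 0} in P^2 (illustrative choice)
A = sp.Matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, -1]])        # x0^2 + x1^2 - x2^2 = 0
v = sp.Matrix([3, 4, 5])           # a point of the conic
assert (v.T * A * v)[0] == 0

H = A * v                          # coordinates of the tangent line at [v]
print((H.T * A.inv() * H)[0])      # 0: the tangent line lies on the dual conic
```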
With the help of a nondegenerate symmetric bilinear form $Q$, one can identify the complex vector space $V$ with its dual $V^*$ via the isomorphism sending a vector $v^*\in V$ to the linear form $v\mapsto Q(v^*, v)$ on $V$. In this way, we identify the dual variety $\mathcal{X}^{\vee}\subset(\mathbb{P}^N)^*$ with the variety $$\label{eq: other notion of dual variety}
\mathcal{X}^{\vee}=\overline{\bigcup_{x\in C\mathcal{X}_{\mathrm{sm}}}\mathbb{P}(N_xC\mathcal{X})}\subset\mathbb{P}^N\,,$$ where the bar stands for Zariski closure in $\mathbb{P}^N$. The variety at the right-hand side of [\[eq: other notion of dual variety\]](#eq: other notion of dual variety){reference-type="eqref" reference="eq: other notion of dual variety"} is also denoted by $\mathcal{X}^\perp$, but we write $\mathcal{X}^{\vee}$ for simplicity of notations. On one hand, this alternative notion of dual variety is useful because it allows us to consider $\mathcal{X}$ and $\mathcal{X}^\vee$ in the same space $\mathbb{P}^N$. On the other hand, identity [\[eq: other notion of dual variety\]](#eq: other notion of dual variety){reference-type="eqref" reference="eq: other notion of dual variety"} depends on the choice of $Q$, in contrast with the classical notion of dual variety in Definition [Definition 5](#def: dual varieties){reference-type="ref" reference="def: dual varieties"}.
**Definition 6**. Let $\mathcal{X}\subset\mathbb{P}^N$ be an irreducible projective variety. The *conormal variety* of $\mathcal{X}$ is the incidence correspondence $$\label{eq: def conormal variety}
\mathcal{N}_\mathcal{X}\coloneqq\overline{\{(x,H)\in \mathbb{P}^N\times(\mathbb{P}^N)^*\mid\text{$x\in\mathcal{X}_{\mathrm{sm}}$ and $H\supset T_x\mathcal{X}$}\}}\,.$$
The projection of $\mathcal{N}_\mathcal{X}$ onto the first factor $\mathbb{P}^N$ defines, over the smooth locus of $\mathcal{X}$, a projective bundle whose fibers are projective spaces of dimension $\mathrm{codim}(\mathcal{X})-1$. This implies that $\mathcal{N}_\mathcal{X}$ is an irreducible variety of dimension $N-1$ in $\mathbb{P}^N\times(\mathbb{P}^N)^*$. The image of the projection of $\mathcal{N}_\mathcal{X}$ onto $(\mathbb{P}^N)^*$ is the dual variety $\mathcal{X}^\vee$ of $\mathcal{X}$. A fundamental feature of the conormal variety is the *biduality theorem* [@GKZ Chapter 1]: one has $\mathcal{N}_\mathcal{X}=\mathcal{N}_{\mathcal{X}^{\vee}}$. The latter implies $(\mathcal{X}^{\vee})^\vee = \mathcal{X}$, the so-called *biduality*.
Similarly, as in [\[eq: other notion of dual variety\]](#eq: other notion of dual variety){reference-type="eqref" reference="eq: other notion of dual variety"} for the dual variety, in the following, we identify the conormal variety $\mathcal{N}_\mathcal{X}$ with the variety in $\mathbb{P}^N\times\mathbb{P}^N$ $$\label{eq: identity conormal}
\mathcal{N}_\mathcal{X}=\overline{\{(x,y)\in \mathbb{P}^N\times\mathbb{P}^N\mid\text{$x=[v]\in\mathcal{X}_{\mathrm{sm}}$ and $y\in\mathbb{P}(N_vC\mathcal{X})$}\}}\,,$$ where $N_xC\mathcal{X}$ is defined in [\[eq: def normal space\]](#eq: def normal space){reference-type="eqref" reference="eq: def normal space"}. Since $\mathrm{codim}(\mathcal{N}_\mathcal{X})=N+1$ within $\mathbb{P}^N\times\mathbb{P}^N$, using Example [Example 3](#ex: Chow ring Segre product){reference-type="ref" reference="ex: Chow ring Segre product"}, the rational equivalence class of $\mathcal{N}_\mathcal{X}$ in $A^{N+1}(\mathbb{P}^N\times\mathbb{P}^N)$ can be written in the form $$[\mathcal{N}_\mathcal{X}]=\delta_0(\mathcal{X})s^Nt+\delta_1(\mathcal{X})s^{N-1}t^2+\cdots+\delta_{N-1}(\mathcal{X})st^N\,.$$ The coefficients $\delta_i(\mathcal{X})$ are known as the *multidegrees of $\mathcal{N}_\mathcal{X}$*. Letting $n=\dim(\mathcal{X})$, we have the relations (see [@kleiman1986tangency Prop. (3), p.187]) $$\label{eq: identity multidegrees conormal polar degrees}
\delta_i(\mathcal{X})=0\quad\forall\,i>n\,,\quad\delta_i(\mathcal{X})=\deg(p_{n-i}(\mathcal{X}))\quad\forall\,i\in\{0,\ldots,n\}\,,$$ where $p_j(\mathcal{X})$ is the *$j$-th polar class* of $\mathcal{X}$ [@piene1978polar].
If we assume $\mathcal{X}$ smooth, the invariants $\delta_i(\mathcal{X})$ may be computed utilizing the Chern classes $c_i(\mathcal{X})$ of $\mathcal{X}$, namely the Chern classes of the tangent bundle $\mathcal{T}_\mathcal{X}\to\mathcal{X}$. Suppose that the embedding $\mathcal{X}\subset\mathbb{P}^N$ is given by a line bundle $\mathcal{L}$, and let $h=c_1(\mathcal{L})$. One computes [@Holme §3]: $$\label{eq: Holme}
\delta_i(\mathcal{X})=\sum_{j=0}^{n-i}(-1)^{j}\binom{n+1-j}{i+1}c_j(\mathcal{X})\cdot h^{n-j}\,,$$ where $c_j(\mathcal{X})\cdot h^{n-j}$ is equal to $\deg(c_j(\mathcal{X}))$ times the class of a point $h^n\in A^n(\mathcal{X})$. The right-hand side of [\[eq: Holme\]](#eq: Holme){reference-type="eqref" reference="eq: Holme"} is always a nonnegative integer. The integer $\mathrm{def}(\mathcal{X})=\mathrm{codim}(\mathcal{X}^{\vee})-1$ equals the minimum $i$ such that $\delta_i(\mathcal{X})\neq 0$.
\mathrm{deg}(\mathcal{X}^{\vee})=\delta_0(\mathcal{X})=\sum_{j=0}^n(-1)^j(n+1-j)\,c_j(\mathcal{X})\cdot h^{n-j}\,.$$ We can also invert the relation between polar classes and Chern classes and get the new relation $$\label{eq: Holme inverted}
c_i(\mathcal{X})=\sum_{j=0}^i(-1)^{j}\binom{n+1-j}{i-j}\delta_{n-j}(\mathcal{X})\cdot h^{i-j}\,.$$ Similar relations hold for singular varieties by replacing the Chern classes with the Chern-Mather classes (see [@piene1978polar], [@piene1988cycles Theorem 3], and [@aluffi2018projective Proposition 3.13] for more details): $$\begin{aligned}
\label{eq: polar Chern-Mather}
\begin{split}
\delta_i(\mathcal{X}) &= \sum_{j=0}^{n-i}(-1)^{j}\binom{n+1-j}{i+1}c_{j}^M(\mathcal{X})\cdot h^{n-j}\\
c_i^M(\mathcal{X}) &= \sum_{j=0}^i(-1)^{j}\binom{n+1-j}{i-j}\delta_{n-j}(\mathcal{X})\cdot h^{i-j}\,.
\end{split}\end{aligned}$$
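As an illustration of [\[eq: Holme\]](#eq: Holme){reference-type="eqref" reference="eq: Holme"} (a small SymPy sketch; the twisted cubic is chosen only for concreteness), take $\mathcal{X}$ to be the twisted cubic, that is, $\mathbb{P}^1$ embedded in $\mathbb{P}^3$ by $\mathcal{O}_{\mathbb{P}^1}(3)$. One recovers $\delta_0(\mathcal{X})=4$, the degree of the dual variety, and $\delta_1(\mathcal{X})=3$, the degree of the curve itself.

```python
import sympy as sp

# the twisted cubic: X = P^1 embedded in P^3 by O(3)
# in A^*(P^1) = Z[h]/(h^2): c(P^1) = 1 + 2h, restricted hyperplane class h_X = 3h
n, d = 1, 3
c = [1, 2]                         # c_j(X) = c[j] * h^j

def cdeg(j):
    # deg(c_j(X) . h_X^(n-j)) with h_X = d*h
    return c[j] * d ** (n - j)

delta = [sum((-1) ** j * sp.binomial(n + 1 - j, i + 1) * cdeg(j)
             for j in range(n - i + 1))
         for i in range(n + 1)]
print(delta)        # [4, 3]
print(sum(delta))   # 7 = delta_0 + delta_1; cf. the generic ED degree in the next subsection
```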
## Generic ED degree and ED defect {#sec: generic ED degree and ED defect}
Given a variety $\mathcal{X}\subset\mathbb{P}^N$, the value of $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})$ depends on how $\mathcal{X}$ intersects the quadric $\mathcal{Q}\subset \mathbb{P}^N$ associated with the bilinear form $Q\in S^2_+ V$. When $\mathcal{Q}$ is in general position with respect to $\mathcal{X}$, the ED degree of $\mathcal{X}$ reaches its maximum possible value. This is the key point of the next result.
**Theorem 7**. *[@DHOST Theorem 5.4][\[thm: ED degree sum polar classes\]]{#thm: ED degree sum polar classes label="thm: ED degree sum polar classes"} Let $\mathcal{X}\subset\mathbb{P}^N$ be a projective variety and consider its conormal variety $\mathcal{N}_\mathcal{X}\subset\mathbb{P}^N\times\mathbb{P}^N$. If $\mathcal{N}_\mathcal{X}$ is disjoint from the diagonal $\Delta(\mathbb{P}^N)$ of $\mathbb{P}^N\times\mathbb{P}^N$, then $$\label{eq: sum polar classes}
\mathop{\mathrm{EDdegree}}_Q(\mathcal{X}) = \delta_0(\mathcal{X})+\cdots+\delta_{N-1}(\mathcal{X}) = \mathop{\mathrm{EDdegree}}_Q(\mathcal{X}^{\vee})\,.$$*
In order to satisfy the hypotheses of Theorem [\[thm: ED degree sum polar classes\]](#thm: ED degree sum polar classes){reference-type="ref" reference="thm: ED degree sum polar classes"}, it is sufficient that the varieties $\mathcal{X}$ and $\mathcal{Q}$ are transversal, namely that $\mathcal{X}\cap\mathcal{Q}$ is smooth and disjoint from $\mathcal{X}_{\mathrm{sing}}$. In particular, this condition holds when $\mathcal{Q}$ is generic in the projective space $\mathbb{P}(S^2V)$ of quadric hypersurfaces. Hence, the value in [\[eq: sum polar classes\]](#eq: sum polar classes){reference-type="eqref" reference="eq: sum polar classes"} is called *generic ED degree* of $\mathcal{X}$ and is denoted by $\mathop{\mathrm{gEDdegree}}(\mathcal{X})$.
Using the identities in [\[eq: Holme\]](#eq: Holme){reference-type="eqref" reference="eq: Holme"} and Theorem [\[thm: ED degree sum polar classes\]](#thm: ED degree sum polar classes){reference-type="ref" reference="thm: ED degree sum polar classes"}, one gets the generic ED degree of a smooth projective variety as a function of its Chern classes.
**Theorem 8**. *[@DHOST Theorem 5.8][\[thm: ED degree Chern classes\]]{#thm: ED degree Chern classes label="thm: ED degree Chern classes"} Consider the assumptions of Theorem [\[thm: ED degree sum polar classes\]](#thm: ED degree sum polar classes){reference-type="ref" reference="thm: ED degree sum polar classes"}. Furthermore, we assume that $\mathcal{X}\subset\mathbb{P}^N$ is smooth of dimension $n$. Then $$\label{eq: ED degree Chern classes}
\mathop{\mathrm{EDdegree}}_Q(\mathcal{X}) = \mathop{\mathrm{gEDdegree}}(\mathcal{X}) = \sum_{i=0}^n(-1)^i(2^{n+1-i}-1)\deg(c_i(\mathcal{X}))\,.$$*
The previous result has been extended to singular varieties by Aluffi in [@aluffi2018projective Proposition 2.9].
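For instance (a SymPy sanity check; the smooth quadric surface $\mathbb{P}^1\times\mathbb{P}^1\subset\mathbb{P}^3$ is an illustrative choice), formula [\[eq: ED degree Chern classes\]](#eq: ED degree Chern classes){reference-type="eqref" reference="eq: ED degree Chern classes"} yields generic ED degree $6$:

```python
import sympy as sp

s, t = sp.symbols('s t')
# X = P^1 x P^1, the smooth quadric surface in P^3; A^*(X) = Z[s,t]/(s^2,t^2)
# c(X) = (1+2s)(1+2t) = 1 + (2s+2t) + 4st, restricted hyperplane class h = s+t
c = [sp.Integer(1), 2*s + 2*t, 4*s*t]
h = s + t
n = 2

def deg(expr):
    # the degree of a top-dimensional class is its coefficient on the point class s*t
    e = sp.expand(expr).subs({s**2: 0, t**2: 0})
    return sp.Poly(e, s, t).coeff_monomial(s * t)

print([deg(c[i] * h**(n - i)) for i in range(n + 1)])       # [2, 4, 4]
print(sum((-1)**i * (2**(n + 1 - i) - 1) * deg(c[i] * h**(n - i))
          for i in range(n + 1)))                           # 6
```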
For a specific choice of a bilinear form $Q$, the value in [\[eq: sum polar classes\]](#eq: sum polar classes){reference-type="eqref" reference="eq: sum polar classes"} is only an upper bound for the ED degree of a variety $\mathcal{X}\subset\mathbb{P}^N$ with respect to $Q$. This motivates the definition of *defect of ED degree* of $\mathcal{X}$ with respect to $Q$ as the difference $$\label{eq: defect of ED degree}
\mathop{\mathrm{EDdefect}}_Q(\mathcal{X})\coloneqq \mathop{\mathrm{gEDdegree}}(\mathcal{X})-\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})\,.$$ The study of this invariant is the main content of [@maxim2020defect]. Let us recall their main results for the reader's convenience. Below, $\mathscr{X}_0$ denotes a Whitney stratification of the singular locus of $\mathcal{X}\cap\mathcal{Q}$. The order between strata $V<S$ means $V\subset\bar{S}$, and $L_{V,S}$ is the complex link of the pair of strata $(V,S)$. Finally, $\mu_V$ is the Euler characteristic (in reduced cohomology) of the Milnor fiber around any point $x\in V$; if $V = \{x\}$ is isolated, then $\mu_V = (-1)^{\dim (\mathcal{X}\cap\mathcal{Q})}\mu_x$, where $\mu_x$ is the classical Milnor number, see [@milnor-book-sings p.64]. The following results will be useful in the sequel.
**Theorem 9**. *[@maxim2020defect Theorem 1.5][\[thm: ED defect general\]]{#thm: ED defect general label="thm: ED defect general"} Let $\mathcal{X}\subset\mathbb{P}^N$ be a smooth irreducible projective variety not contained in the isotropic quadric $\mathcal{Q}$. Then $$\label{eq: ED defect main formula}
\mathop{\mathrm{EDdefect}}_Q(\mathcal{X}) = \sum_{V\in \mathscr{X}_0} (-1)^{\mathrm{codim}_{\mathcal{X}\cap\mathcal{Q}}(V)}\alpha_V \cdot \mathop{\mathrm{gEDdegree}}(\bar{V})$$ with $\alpha_V = \mu_V -\sum_{V<S}\chi_c(L_{V,S})\cdot \mu_S$.*
The previous formula [\[eq: ED defect main formula\]](#eq: ED defect main formula){reference-type="eqref" reference="eq: ED defect main formula"} is generally difficult to use. However, in our computations, we mostly apply the following two simplifications.
**Corollary 10**. *[@maxim2020defect Corollary 1.8][\[cor: ED defect isolated singularities\]]{#cor: ED defect isolated singularities label="cor: ED defect isolated singularities"} Assume that $\mathcal{Z}=\mathrm{Sing}(\mathcal{X}\cap\mathcal{Q})$ consists of isolated points. Then $$\mathop{\mathrm{EDdefect}}_Q(\mathcal{X}) = \sum_{x\in\mathcal{Z}}\mu_x\,,$$ where $\mu_x$ is the Milnor number of the isolated singularity $x\in\mathcal{Z}$.*
**Corollary 11**. *[@maxim2020defect Corollary 1.9][\[cor: ED defect equisingular\]]{#cor: ED defect equisingular label="cor: ED defect equisingular"} Assume that $\mathcal{Z}=\mathrm{Sing}(\mathcal{X}\cap\mathcal{Q})$ is connected and that it is a closed (smooth and connected) stratum in a Whitney stratification of $\mathcal{X}\cap\mathcal{Q}$ (namely $\mathcal{X}\cap\mathcal{Q}$ is equisingular along $\mathcal{Z}$). Then $$\mathop{\mathrm{EDdefect}}_Q(\mathcal{X}) = \mu\cdot\mathop{\mathrm{gEDdegree}}(\mathcal{Z})\,,$$ where $\mu$ is the Milnor number of the isolated transversal singularity at some point of $\mathcal{Z}$.*
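In practice, the Milnor numbers entering Corollaries 10 and 11 are finite-dimensional linear-algebra computations: for an isolated hypersurface singularity $\{f=0\}$ at the origin one has $\mu=\dim_{\mathbb{C}}\mathbb{C}[[x,y]]/(\partial_xf,\partial_yf)$. The following SymPy sketch (the polynomial is an arbitrary illustrative choice) counts standard monomials of the Jacobian ideal for an ordinary quadruple point, for which classically $\mu=(4-1)^2=9$.

```python
import sympy as sp
from itertools import product

x, y = sp.symbols('x y')
f = x**4 + x**2 * y**2 + y**4          # four distinct lines through the origin
J = [sp.diff(f, x), sp.diff(f, y)]     # generators of the Jacobian ideal
G = sp.groebner(J, x, y, order='grevlex')

def is_standard(m):
    # m is a standard monomial iff it is already its own normal form modulo G
    _, r = sp.reduced(m, list(G), x, y, order='grevlex')
    return sp.expand(r - m) == 0

# mu = number of standard monomials; since f is homogeneous of degree 4,
# all standard monomials have degree at most 2*(4-2) = 4, so degree <= 6 suffices
mu = sum(1 for i, j in product(range(7), repeat=2)
         if i + j <= 6 and is_standard(x**i * y**j))
print(mu)   # 9
```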
## Morse theory {#sec: Morse theory}
In this subsection, we briefly recall basic facts in Morse theory. Given a differentiable manifold $\mathcal{M}$ and a differentiable function $f\colon\mathcal{M}\to\mathbb{R}$, a point $p\in\mathcal{M}$ is called a *critical point of $f$* if the differential $$\textrm{\normalfont{d}}_p f\colon T_p\mathcal{M}\to T_{f(p)} \mathbb{R}\cong \mathbb{R}$$ vanishes. For $v, w\in T_p\mathcal{M}$, we can extend $v, w$ to vector fields $\tilde{v}, \tilde{w}$ on an open neighborhood of $p$. The Hessian $H_p(f)$ of $f$ at a critical point $p$ is the symmetric bilinear map defined by $$H_p(f)\colon T_p\mathcal{M}\times T_p\mathcal{M}\to\mathbb{R}, \quad H_p(f)(v,w) = v \bigl(\tilde{w}(f)\bigr).$$
**Definition 12**. Let $f\colon\mathcal{M}\to\mathbb{R}$ be a differentiable function over a differentiable manifold $\mathcal{M}$, and $p$ be a critical point of $f$. The *index* of $p$, denoted by $\lambda_p$, is the dimension of the subspace of $T_p\mathcal{M}$ on which the Hessian $H_p(f)$ is negative definite. The critical point $p$ is said to be *nondegenerate* if and only if $H_p(f)$ is nondegenerate. Moreover, we say that $f$ is a *Morse function* if the critical points of $f$ are all nondegenerate.
Note that nondegenerate critical points are isolated [@Milnor1963 Corollary 2.3]. Hence, a Morse function $f\colon\mathcal{M}\to\mathbb{R}$ on a compact manifold $\mathcal{M}$ has finitely many critical points. Furthermore, if we let $m_k$ be the number of critical points of $f$ of index $k$ for all $k\in\{0,\ldots,\dim(\mathcal{M})\}$, then $$\label{eq: inequalities Betti}
m_k \ge b_k(\mathcal{M})$$ for all $k$, where $b_k(\mathcal{M})$ is the $k$th Betti number of $\mathcal{M}$, i.e., the rank of the $k$th homology group $H_k(\mathcal{M})$. These inequalities are known as weak *Morse inequalities*. In this paper, we also need the following strong Morse inequalities, see for example [@Milnor1963 §$5$] or [@BH04 Theorem 3.33]:
**Theorem 13**. *For a Morse function $f\colon\mathcal{M}\to\mathbb{R}$ on a compact manifold $\mathcal{M}$, we have*
1. *$\sum_{k=0}^i (-1)^{i-k} m_k \ge \sum_{k=0}^i (-1)^{i-k} b_k(\mathcal{M})$ for every $i\in\{0,\ldots,\dim(\mathcal{M})\}$.*
2. *$\sum_{k=0}^{\dim(\mathcal{M})} (-1)^k m_k = \sum_{k=0}^{\dim(\mathcal{M})} (-1)^k b_k(\mathcal{M})$.*
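As a toy numerical check of these inequalities (plain Python arithmetic; the numbers are the classical ones for the $2$-torus, whose Betti numbers are $(1,2,1)$), the standard height function has critical point counts $m=(1,2,1)$, and a Morse function with one extra cancellable pair of critical points of indices $0$ and $1$ has $m=(2,3,1)$; both satisfy Theorem 13.

```python
# Betti numbers of the 2-torus
b = [1, 2, 1]

def strong_morse_ok(m):
    dim = len(b) - 1
    alt = lambda seq, i: sum((-1)**(i - k) * seq[k] for k in range(i + 1))
    inequalities = all(alt(m, i) >= alt(b, i) for i in range(dim + 1))
    euler = alt(m, dim) == alt(b, dim)
    return inequalities and euler

print(strong_morse_ok([1, 2, 1]))   # True: the standard height function
print(strong_morse_ok([2, 3, 1]))   # True: one extra cancellable pair of indices 0 and 1
```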
In the following lemma, we study a necessary and sufficient condition under which the distance function from a point to a real differentiable manifold is Morse. First, given a differentiable submanifold $\mathcal{M}\subset V^\mathsmaller{\mathbb{R}}$, similarly as in [\[eq: ED correspondence\]](#eq: ED correspondence){reference-type="eqref" reference="eq: ED correspondence"} we define the *(real) ED correspondence* of $\mathcal{M}$ as $$\label{eq: real ED correspondence manifold M}
\mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}\coloneqq\{(x,u)\in V^\mathsmaller{\mathbb{R}}\times V^\mathsmaller{\mathbb{R}}\mid\text{$x\in \mathcal{M}$ and $u-x\in N_x \mathcal{M}$}\}\,,$$ where $N_x\mathcal{M}=\left\{v^*\in V^\mathsmaller{\mathbb{R}}\mid Q(v^*,v)=0,\ v\in T_x \mathcal{M}\right\}$ is the normal space to $\mathcal{M}$ at $x\in \mathcal{M}$, cf. [\[eq: def normal space\]](#eq: def normal space){reference-type="eqref" reference="eq: def normal space"}.
**Lemma 14**. *Let $\mathcal{M}$ be a differentiable submanifold of a real vector space $V^\mathsmaller{\mathbb{R}}$ and let $Q\colon V^\mathsmaller{\mathbb{R}}\times V^\mathsmaller{\mathbb{R}}\to\mathbb{R}$ be an inner product on $V^\mathsmaller{\mathbb{R}}$ with the associated quadratic form $q$ defined as $q(x)=Q(x,x)$, $x\in V^\mathsmaller{\mathbb{R}}$. For a given $u\in V^\mathsmaller{\mathbb{R}}$, the distance function $$\label{eq: dist^Q}
\mathop{\mathrm{dist}}^Q_{\mathcal{M},u} \colon \mathcal{M}\to\mathbb{R}\,,\quad x\mapsto q(u-x)$$ is Morse if and only if $u$ is a regular value of the projection $\mathop{\mathrm{pr}}_2\colon\mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}\to V^\mathsmaller{\mathbb{R}}$ on the second factor. In particular, for a dense subset of $u \in V^\mathsmaller{\mathbb{R}}$ the function [\[eq: dist\^Q\]](#eq: dist^Q){reference-type="eqref" reference="eq: dist^Q"} is Morse.*
*Proof.* We base our arguments on the proof of [@GG:stablemaps Lemma 4.6]. Let us consider the function $$\label{eq: def F(x,u) differential}
F\colon \mathcal{M}\times V^\mathsmaller{\mathbb{R}}\to T^*\mathcal{M}\,,\quad F(x,u)\coloneqq\left(x, -\frac{1}{2}\textrm{\normalfont{d}}_x\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}\right)\,,$$ where $\textrm{\normalfont{d}}_x\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is the differential of $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ and $$\label{eq: identity differential F(x,u)}
\textrm{\normalfont{d}}_x\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}(\dot{x})=-2\,Q(u-x,\dot{x})\quad\forall\,\dot{x}\in T_x\mathcal{M}\,.$$ By a standard fact from transversality theory, the function $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse if and only if $F_u \coloneqq F(\cdot,u)$ is transverse to the zero section $\mathcal{Z}\coloneqq\{(x,0)\mid x\in \mathcal{M}\} \subset T^*\mathcal{M}$. We claim that the map $F$ (as a whole) is transverse to $\mathcal{Z}$. For this, take any $(x,u)\in \mathcal{M}\times V^\mathsmaller{\mathbb{R}}$ such that $F(x,u)\in\mathcal{Z}$. We need to show that $$\begin{aligned}
\label{eq: identity F transverse to Z}
\textrm{\normalfont{d}}_{(x,u)} F(T_x\mathcal{M}\oplus T_uV^\mathsmaller{\mathbb{R}}) + T_{F(x,u)}\mathcal{Z}=T_{F(x,u)}T^*\mathcal{M}\,.\end{aligned}$$ Note that $F(x,u) = (x,0)$ and $T_{F(x,u)}T^*\mathcal{M}=T_{(x,0)}T^*\mathcal{M}= T_{(x,0)}\mathcal{Z}\oplus T^*_x\mathcal{M}$. On the other hand, the differential of $F_x\coloneqq F(x,\_)$ is given as $$\textrm{\normalfont{d}}_u F_x\colon T_u V^\mathsmaller{\mathbb{R}}\to T_x^*\mathcal{M}\,,\quad \dot{u}\longmapsto\left[\dot{x} \mapsto Q(\dot{u},\dot{x})\right].$$ Since any $f\in T^*_x\mathcal{M}$ is obtained as the restriction of a linear form on $V^\mathsmaller{\mathbb{R}}$ to $T_x\mathcal{M}\subset V^\mathsmaller{\mathbb{R}}$, it can be written as $f(\dot{x}) = Q(\dot{u},\dot{x})$, $\dot{x}\in T_x\mathcal{M}$, for some $\dot{u}\in V^\mathsmaller{\mathbb{R}}=T_uV^\mathsmaller{\mathbb{R}}$. Therefore, $\textrm{\normalfont{d}}_u F_x$ is surjective. This implies that $F$ is transverse to $\mathcal{Z}$ and, in particular, the real ED correspondence $\mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}=F^{-1}(\mathcal{Z})$ is a differentiable submanifold of $\mathcal{M}\times V^\mathsmaller{\mathbb{R}}$ of dimension $\dim(V^\mathsmaller{\mathbb{R}})$.
Let now $u\in V^\mathsmaller{\mathbb{R}}$ be a regular value of $\mathop{\mathrm{pr}}_2\colon \mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}\to V^\mathsmaller{\mathbb{R}}$ and let $x\in\mathcal{M}$ be any point with $(x,u)\in \mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}$. Then the differential $\textrm{\normalfont{d}}_{(x,u)}\mathop{\mathrm{pr}}_2\colon T_{(x,u)}\mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}\to T_u V^\mathsmaller{\mathbb{R}}$ defined by $(\dot{x},\dot{u})\mapsto \dot{u}$ is an isomorphism, hence $$\label{eq: direct sum tangent spaces}
T_x\mathcal{M}\oplus T_uV^\mathsmaller{\mathbb{R}}=T_x \mathcal{M}\oplus T_{(x,u)} \mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}\,.$$ Applying $\textrm{\normalfont{d}}_{(x,u)}F$ to both sides of [\[eq: direct sum tangent spaces\]](#eq: direct sum tangent spaces){reference-type="eqref" reference="eq: direct sum tangent spaces"} yields $\textrm{\normalfont{d}}_{(x,u)} F(T_x\mathcal{M}\oplus T_uV^\mathsmaller{\mathbb{R}})=\textrm{\normalfont{d}}_{x} F_u(T_x\mathcal{M})+ T_{F(u,x)}\mathcal{Z}$. Combining this with [\[eq: identity F transverse to Z\]](#eq: identity F transverse to Z){reference-type="eqref" reference="eq: identity F transverse to Z"} we obtain $\textrm{\normalfont{d}}_x F_u(T_x\mathcal{M}) + T_{F_u(x)}\mathcal{Z}= T_{F_u(x)} T^*\mathcal{M}$, which means that $F_u$ is transverse to $\mathcal{Z}$ or, equivalently, $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse.
Vice versa, if $(x,u)\in \mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}$ is a critical point of $\mathop{\mathrm{pr}}_2$, the kernel of $\textrm{\normalfont{d}}_{(x,u)}\mathop{\mathrm{pr}}_2$ contains a nontrivial vector $(\dot{x},0)\in T_{(x,u)}\mathcal{E}(\mathcal{M},Q)^\mathsmaller{\mathbb{R}}$. Since $\textrm{\normalfont{d}}_x F_u$ sends $(\dot{x},0)$ to $T_{F_u(x)}\mathcal{Z}=T_{(x,0)}\mathcal{Z}$ and since $\dim(T^* \mathcal{M})=\dim(\mathcal{M})+\dim(\mathcal{Z})$, the map $F_u$ cannot be transverse to $\mathcal{Z}$ and $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is not a Morse function.
Finally, by a version of Sard's theorem for smooth maps (see [@GG:stablemaps Cor 1.14]), the function $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse for a dense subset of $u\in V^\mathsmaller{\mathbb{R}}$. ◻
Next, we prove a slightly different version of Lemma [Lemma 14](#lem: generic=Morse){reference-type="ref" reference="lem: generic=Morse"} that also takes the space of inner products into account. First, we denote by $\mathcal{C}\coloneqq S^2_+ V^\mathsmaller{\mathbb{R}}$ the cone of inner products on $V^\mathsmaller{\mathbb{R}}$, which is an open convex cone in $S^2V^\mathsmaller{\mathbb{R}}$. Given a differentiable submanifold $\mathcal{M}\subset V^\mathsmaller{\mathbb{R}}$, we define the *extended real ED correspondence* of $\mathcal{M}$ as $$\label{eq: extended ED correspondence}
\mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}\coloneqq\{(x,u,Q)\in V^\mathsmaller{\mathbb{R}}\times V^\mathsmaller{\mathbb{R}}\times \mathcal{C}\mid\text{$x\in \mathcal{M}$ and $u-x\in N_x\mathcal{M}$}\}\,,$$ where $N_x\mathcal{M}$ is defined as in [\[eq: def normal space\]](#eq: def normal space){reference-type="eqref" reference="eq: def normal space"} using the inner product $Q$ in the triple $(x,u,Q)$.
**Lemma 15**. *Let $\mathcal{M}\subset V^\mathsmaller{\mathbb{R}}$ be a differentiable submanifold of a real vector space and let $Q\colon V^\mathsmaller{\mathbb{R}}\times V^\mathsmaller{\mathbb{R}}\to\mathbb{R}$ be an inner product on $V^\mathsmaller{\mathbb{R}}$ with the associated quadratic form $q$ defined as $q(x)=Q(x,x)$, $x\in V^\mathsmaller{\mathbb{R}}$. Then for a given $(u, Q) \in V^\mathsmaller{\mathbb{R}}\times \mathcal{C}$ the distance function $$\begin{aligned}
\label{eq: dist^QQ}
\mathop{\mathrm{dist}}^Q_{\mathcal{M},u} \colon \mathcal{M}\to\mathbb{R}\,,\quad x\mapsto q(u-x)\end{aligned}$$ is Morse if and only if $(u, Q)$ is a regular value of the projection $\mathop{\mathrm{pr}}_2\colon \mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}\to V^\mathsmaller{\mathbb{R}}\times \mathcal{C}$. In particular, for a dense subset of $(u, Q) \in V^\mathsmaller{\mathbb{R}}\times \mathcal{C}$ the function [\[eq: dist\^QQ\]](#eq: dist^QQ){reference-type="eqref" reference="eq: dist^QQ"} is Morse.*
*Proof.* Define $F \colon \mathcal{M}\times V^\mathsmaller{\mathbb{R}}\times \mathcal{C}\to T^*\mathcal{M}$ by the differential of [\[eq: dist\^QQ\]](#eq: dist^QQ){reference-type="eqref" reference="eq: dist^QQ"}, i.e., $$\begin{aligned}
F(x,u,Q) \coloneqq \left(x, -\frac{1}{2}\textrm{\normalfont{d}}_x\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}\right)\,,\quad\textrm{where}\quad -\frac{1}{2} \textrm{\normalfont{d}}_x\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}(\dot{x})\ =\ Q(u-x,\dot{x})\quad\textrm{for}\quad \dot{x}\in T_x\mathcal{M}\,.\end{aligned}$$ Let $\mathcal{Z}\coloneqq\{(x,0)\mid x\in \mathcal{M}\}\subset T^*\mathcal{M}$ be the zero section of $T^*\mathcal{M}$. Then $$\begin{split}
(x, u, Q) \in \mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}& \iff x \in \mathcal{M} \text{ and } Q(x-u, T_x\mathcal{M}) = 0 \\
& \iff \textrm{\normalfont{d}}_x \mathop{\mathrm{dist}}^Q_{\mathcal{M},u}(T_x\mathcal{M}) = 0 \\
& \iff F(x, u, Q) = (x, 0)\,,
\end{split}$$ which implies that $F^{-1}(\mathcal{Z}) = \mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}$. By a similar argument as in the proof of Lemma [Lemma 14](#lem: generic=Morse){reference-type="ref" reference="lem: generic=Morse"} we can show that $F$ is transverse to $\mathcal{Z}$, implying that $\mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}= F^{-1}(\mathcal{Z})$ is a differentiable submanifold of $\mathcal{M}\times V^\mathsmaller{\mathbb{R}}\times \mathcal{C}$ of dimension $\dim(V^\mathsmaller{\mathbb{R}})+\dim(\mathcal{C})$. More precisely, we can show $$\label{eq:Mreg}
\textrm{\normalfont{d}}_{(x,u,Q)} F(T_x\mathcal{M}\oplus T_uV^\mathsmaller{\mathbb{R}}\oplus T_Q\mathcal{C}) + T_{(x,0)}\mathcal{Z}= T_{(x,0)} T^*\mathcal{M}= T_{(x,0)}\mathcal{Z}\oplus T_x^*\mathcal{M}\,.$$ This is because the differential given by $$\textrm{\normalfont{d}}_{(u,Q)} F_{x}\colon T_uV^\mathsmaller{\mathbb{R}}\oplus T_Q\mathcal{C}\to T_x^*\mathcal{M}\,,\quad (\dot{u}, \dot{Q}) \mapsto \left[\dot{x} \longmapsto Q(\dot{u},\dot{x}) + \dot{Q}(u-x, \dot{x}) \right]$$ is already surjective, as we have seen that $\textrm{\normalfont{d}}_{(u,Q)} F_{x}(T_uV^\mathsmaller{\mathbb{R}}\oplus \{0\}) = T_x^*\mathcal{M}$ in the proof of Lemma [Lemma 14](#lem: generic=Morse){reference-type="ref" reference="lem: generic=Morse"}.
We claim that $(u, Q)$ is a regular value of $\mathop{\mathrm{pr}}_2$ if and only if $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse. First, assume that $(u, Q)$ is a regular value of $\mathop{\mathrm{pr}}_2$. Then $$\textrm{\normalfont{d}}_{(x,u,Q)}\mathop{\mathrm{pr}}_2\colon T_{(x,u,Q)} \mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}\to T_u V^\mathsmaller{\mathbb{R}}\oplus T_Q \mathcal{C}\,, \quad (\dot{x},\dot{u},\dot{Q}) \mapsto (\dot{u}, \dot{Q})$$ is an isomorphism, implying $T_{(x,u,Q)} \left( \mathcal{M}\times V^\mathsmaller{\mathbb{R}}\times \mathcal{C}\right) \cong T_x\mathcal{M}\oplus T_{(x,u,Q)} \mathcal{E}(\mathcal{M})^\mathsmaller{\mathbb{R}}$. Applying $\textrm{\normalfont{d}}_{(x,u,Q)}F$ we have $$\textrm{\normalfont{d}}_{(x,u,Q)}F\bigl(T_{(x,u,Q)} \left( \mathcal{M}\times V^\mathsmaller{\mathbb{R}}\times \mathcal{C}\right)\bigr) = \textrm{\normalfont{d}}_xF_{(u,Q)}(T_x\mathcal{M}) + T_{(x,0)}\mathcal{Z}\,.$$ By [\[eq:Mreg\]](#eq:Mreg){reference-type="eqref" reference="eq:Mreg"}, we have that $$\textrm{\normalfont{d}}_xF_{(u,Q)}(T_x\mathcal{M}) + T_{(x,0)}\mathcal{Z}= T_{(x,0)}T^*\mathcal{M}\,,$$ namely, $F_{(u,Q)}$ is transverse to $\mathcal{Z}$. By [@BH04 Lemma 5.23], $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse.
For the other direction, assume that $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse, i.e., $F_{(u,Q)}$ is transverse to $\mathcal{Z}$. If $(x, u, Q)$ is a critical point of $\mathop{\mathrm{pr}}_2$, the kernel of $\textrm{\normalfont{d}}_{(x,u,Q)} \mathop{\mathrm{pr}}_2$ contains some nonzero $(\dot{x}, 0, 0) \in T_{(x,u,Q)}F^{-1}(\mathcal{Z})$, which implies that $\textrm{\normalfont{d}}_x F_{(u,Q)}(\dot{x}) \in T_{(x,0)}\mathcal{Z}$. This contradicts the assumption that $F_{(u,Q)}$ is transverse to $\mathcal{Z}$.
By [@GG:stablemaps Cor 1.14], the set of regular values is dense. Thus $\mathop{\mathrm{dist}}^Q_{\mathcal{M},u}$ is Morse for a dense subset of $(u, Q) \in V^\mathsmaller{\mathbb{R}}\times \mathcal{C}$. ◻
# ED degrees for Segre-Veronese varieties {#sec: product metrics tensor spaces}
Throughout the rest of the paper, $k$ is a positive integer, while $\mathbf{n}=(n_1,\ldots,n_k)$ and $\mathbf{d}=(d_1,\ldots,d_k)$ are $k$-tuples of nonnegative integers. Consider $k$ real vector spaces $V_1^\mathsmaller{\mathbb{R}},\ldots,V_k^\mathsmaller{\mathbb{R}}$ with $\dim(V_i^\mathsmaller{\mathbb{R}})=n_i+1$. Recall that $V_i=V_i^\mathsmaller{\mathbb{R}}\otimes\mathbb{C}$. Given a $k$-tuple $\mathbf{d}$, we define $S^\mathbf{d}V\coloneqq S^{d_1}V_1\otimes\cdots\otimes S^{d_k}V_k$. In particular, if $d_i=1$ for all $i$, then $S^\mathbf{d}V=V\coloneqq V_1\otimes\cdots\otimes V_k$. In this section we call $N$ the dimension of the projective space $\mathbb{P}(V)$, that is $(n_1+1)\cdots(n_k+1)-1$, so we write $\mathbb{P}(V)=\mathbb{P}^N$.
Let $\mathbb{P}^\mathbf{n}$ be the Cartesian product $\mathbb{P}(V_1)\times\cdots\times\mathbb{P}(V_k)=\mathbb{P}^{n_1}\times\cdots\times\mathbb{P}^{n_k}$. We denote by $\nu_\mathbf{d}\colon\mathbb{P}^\mathbf{n}\to\mathbb{P}(S^\mathbf{d}V)$ the Segre-Veronese embedding of $\mathbb{P}^\mathbf{n}$ via the line bundle $\mathcal{O}_{\mathbb{P}^\mathbf{n}}(\mathbf{d})$. Its image $\mathcal{V}_{\mathbf{d},\mathbf{n}}=\nu_\mathbf{d}(\mathbb{P}^\mathbf{n})$ is the *Segre-Veronese variety* of $\mathbb{P}(S^\mathbf{d}V)$. More precisely, $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is populated by the so-called *decomposable tensors* of $\mathbb{P}(S^\mathbf{d}V)$, which are tensors of the form $\nu_\mathbf{d}(x_1,\ldots,x_k)=x_1^{d_1}\otimes\cdots\otimes x_k^{d_k}$, where $x_i\in V_i$ for all $i\in[k]$. When $k=1$, $V_1=V$, $\mathbf{n}=(n)$ and $\mathbf{d}=(d)$ we just write $\mathcal{O}_{\mathbb{P}^\mathbf{n}}(\mathbf{d})=\mathcal{O}_{\mathbb{P}^n}(d)$ and $\mathcal{V}_{\mathbf{d},\mathbf{n}}=\mathcal{V}_{d,n}$ is the *Veronese variety* of $\mathbb{P}(S^dV)$. When $d_i=1$ for all $i\in[k]$, we use the notation $\mathbf{1}=1^k=(1,\ldots,1)$ and the variety $\mathcal{V}_{\mathbf{1},\mathbf{n}}$, also denoted with $\Sigma_\mathbf{n}$, is the *Segre variety* of $\mathbb{P}(V)$. If also $\mathbf{n}=\mathbf{1}$, then we use the notation $\Sigma_k$ to denote $\Sigma_{\mathbf{1}}$.
Our goal is to study the ED degrees of the varieties $\mathcal{V}_{\mathbf{d},\mathbf{n}}$, hence we need to consider a positive definite symmetric bilinear form on the space $S^\mathbf{d}V^\mathsmaller{\mathbb{R}}$ as well. A natural choice is given in the following definition.
**Definition 16**. For all $i\in[k]$, let $Q_i$ be a positive definite symmetric bilinear form on the space $V_i^\mathsmaller{\mathbb{R}}$, with associated quadratic form $q_i$. Consider two decomposable tensors $T = x_1^{d_1}\otimes\cdots\otimes x_k^{d_k}$ and $T' = y_1^{d_1}\otimes\cdots\otimes y_k^{d_k}$ in $S^\mathbf{d}V^\mathsmaller{\mathbb{R}}$. The *Frobenius (or Bombieri-Weyl) inner product* between $T$ and $T'$ is $$\label{eq: Frobenius inner product for tensors}
Q_F(T,T')\coloneqq \prod_{i=1}^k Q_i(x_i, y_i)^{d_i}\,,$$ and it is extended to all of $S^\mathbf{d}V$ by bilinearity. The isotropic quadrics in the factors $\mathbb{P}(V_i)=\mathbb{P}^{n_i}$ are denoted by $\mathcal{Q}_i$, while the corresponding isotropic quadric in $\mathbb{P}(S^\mathbf{d}V)$ is denoted by $\mathcal{Q}_F$.
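On decomposable tensors, the inner product induced from the factors on the ambient tensor power space is multiplicative, which is where the exponents $d_i$ in [\[eq: Frobenius inner product for tensors\]](#eq: Frobenius inner product for tensors){reference-type="eqref" reference="eq: Frobenius inner product for tensors"} come from. A tiny NumPy check for $k=1$, $d=2$ and the standard inner product on $\mathbb{R}^3$ (all of these are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)

# the symmetric decomposable tensors x^2 and y^2, viewed inside V (x) V as matrices
T, S = np.outer(x, x), np.outer(y, y)

lhs = np.sum(T * S)            # entrywise (Frobenius) pairing on V (x) V
rhs = np.dot(x, y) ** 2        # Q(x, y)^d with d = 2
print(np.isclose(lhs, rhs))    # True
```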
The ED degree of a Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ with respect to a Frobenius inner product $Q_F$ is the content of the next result.
**Theorem 17**. *[@FO Theorem 1][\[thm: FO formula\]]{#thm: FO formula label="thm: FO formula"} The ED degree of the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$ with respect to a Frobenius inner product $Q_F$ equals the coefficient of the monomial $h_1^{n_1}\cdots h_k^{n_k}$ in the expansion of $$\prod_{i=1}^k\frac{\widehat{h}_i^{n_i+1}-h_i^{n_i+1}}{\widehat{h}_i-h_i},\quad\widehat{h}_i\coloneqq\left(\sum_{j=1}^k d_jh_j\right)-h_i\,.$$*
Interestingly, Aluffi and Harris computed the same ED degree with an independent formula.
**Theorem 18**. *[@aluffi-harris §9][\[thm: AH formula\]]{#thm: AH formula label="thm: AH formula"} The ED degree of the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$ with respect to a Frobenius inner product $Q_F$ equals the coefficient of the monomial $h_1^{n_1}\cdots h_k^{n_k}$ in the expansion of $$\frac{1}{1-d_1h_1-\cdots-d_kh_k}\cdot\prod_{i=1}^k\frac{(1-h_i)^{n_i+1}}{1-2h_i}\,.$$*
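Both generating functions are straightforward to evaluate with a computer algebra system. The sketch below (SymPy; the two formats are illustrative choices) extracts the relevant coefficient from each expansion and returns, for instance, $6$ for general $2\times2\times2$ tensors and $3$ for general symmetric $3\times3$ matrices.

```python
import sympy as sp

def ed_degree_FO(n, d):
    # coefficient of h_1^{n_1}...h_k^{n_k} in prod_i (hhat_i^{n_i+1}-h_i^{n_i+1})/(hhat_i-h_i)
    k = len(n)
    h = sp.symbols(f'h0:{k}')
    expr = sp.Integer(1)
    for i in range(k):
        hhat = sum(d[j] * h[j] for j in range(k)) - h[i]
        expr *= sum(hhat**j * h[i]**(n[i] - j) for j in range(n[i] + 1))
    target = sp.Mul(*[h[i]**n[i] for i in range(k)])
    return sp.Poly(sp.expand(expr), *h).coeff_monomial(target)

def ed_degree_AH(n, d):
    # coefficient of h_1^{n_1}...h_k^{n_k} in 1/(1-sum_j d_j h_j) * prod_i (1-h_i)^{n_i+1}/(1-2h_i);
    # every geometric series is truncated at total degree sum(n), which is enough
    k, M = len(n), sum(n)
    h = sp.symbols(f'h0:{k}')
    geom = lambda u: sum(u**m for m in range(M + 1))
    expr = geom(sum(d[j] * h[j] for j in range(k)))
    for i in range(k):
        expr *= (1 - h[i])**(n[i] + 1) * geom(2 * h[i])
    target = sp.Mul(*[h[i]**n[i] for i in range(k)])
    return sp.Poly(sp.expand(expr), *h).coeff_monomial(target)

print(ed_degree_FO((1, 1, 1), (1, 1, 1)), ed_degree_AH((1, 1, 1), (1, 1, 1)))  # 6 6
print(ed_degree_FO((2,), (2,)), ed_degree_AH((2,), (2,)))                      # 3 3
```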
Since $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is a smooth toric variety in $\mathbb{P}(S^\mathbf{d}V)$, the degrees of its Chern classes are relatively easy to compute. The proof of the next result relies on computations made by the fourth author of this paper in a more general setting in [@sodphd Chapter 5].
**Proposition 19**. *The degrees of the Chern classes of the tangent bundle of the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$ are $$\label{eq: degree ith Chern class Segre-Veronese}
\deg(c_i(\mathcal{V}_{\mathbf{d},\mathbf{n}})) = (|\mathbf{n}|-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}}\,\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}\,,$$ where $|\mathbf{n}|=n_1+\cdots+n_k=\dim(\mathcal{V}_{\mathbf{d},\mathbf{n}})$, $\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}=d_1^{n_1-\alpha_1}\cdots d_k^{n_k-\alpha_k}$ and for all $\alpha\in\mathbb{Z}_{\ge0}^k$ $$\label{eq: def gamma alpha}
\gamma_{\boldsymbol{\alpha}}\coloneqq
\begin{cases}
\prod_{i=1}^k\frac{\binom{n_i+1}{\alpha_i}}{(n_i-\alpha_i)!} & \mbox{if $n_i\ge\alpha_i\ \forall\,i\in[k]$}\\
0 & \mbox{otherwise.}
\end{cases}$$*
*Proof.* Recall that $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is the image of the embedding of $\mathbb{P}^\mathbf{n}$ via the line bundle $\mathcal{O}_{\mathbb{P}^\mathbf{n}}(\mathbf{d})$. In particular, if $h_i=c_1(\mathcal{O}_{\mathbb{P}^{n_i}}(1))$ for all $i\in[k]$, then by Example [Example 4](#ex: Chern classes tangent bundle PP n){reference-type="ref" reference="ex: Chern classes tangent bundle PP n"} we have $c(\mathbb{P}^{n_i})=\sum_{\ell_i=0}^{n_i}\binom{n_i+1}{\ell_i}h_i^{\ell_i}$ and $c(\mathbb{P}^\mathbf{n})=c(\mathbb{P}^{n_1})\cdots c(\mathbb{P}^{n_k})$. Similarly, as in Example [Example 3](#ex: Chow ring Segre product){reference-type="ref" reference="ex: Chow ring Segre product"}, we have $A^*(\mathbb{P}^\mathbf{n})\cong\mathbb{Z}[h_1,\ldots,h_k]/(h_1^{n_1+1},\ldots,h_k^{n_k+1})$, and $$\begin{aligned}
c(\mathbb{P}^\mathbf{n}) = \prod_{i=1}^k c(\mathbb{P}^{n_i})=\prod_{i=1}^k\left(\sum_{\ell_i=0}^{n_i}\binom{n_i+1}{\ell_i}h_i^{\ell_i}\right) = \sum_{j=0}^{|\mathbf{n}|}\sum_{|\boldsymbol{\alpha}|=j}\left(\prod_{i=1}^k\binom{n_i+1}{\alpha_i}\right) h_1^{\alpha_1}\cdots h_k^{\alpha_k}\,,\end{aligned}$$ where $|\mathbf{n}|=n_1+\cdots+n_k=\dim(\mathbb{P}^\mathbf{n})=\dim(\mathcal{V}_{\mathbf{d},\mathbf{n}})$. Therefore, the $i$-th Chern class of $\mathbb{P}^\mathbf{n}$ is $$\label{eq: Chern class product}
c_i(\mathbb{P}^\mathbf{n})=\sum_{|\boldsymbol{\alpha}|=i}\left(\prod_{j=1}^k\binom{n_j+1}{\alpha_j}\right) h_1^{\alpha_1}\cdots h_k^{\alpha_k}\quad\forall\,0\le i\le |\mathbf{n}|\,.$$ The hyperplane class of $\mathbb{P}(S^\mathbf{d}V)$ restricted to $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is precisely $c_1(\mathcal{O}_{\mathbb{P}^\mathbf{n}}(\mathbf{d}))=d_1h_1+\cdots+d_kh_k$. We consider its power $$\label{eq: power Chern class line bundle product}
c_1(\mathcal{O}_{\mathbb{P}^\mathbf{n}}(\mathbf{d}))^{|\mathbf{n}|-i}=\sum_{|\boldsymbol{\omega}|=|\mathbf{n}|-i}\binom{|\mathbf{n}|-i}{\boldsymbol{\omega}}(d_1h_1)^{\omega_1}\cdots (d_kh_k)^{\omega_k}\,,$$ where $\binom{|\mathbf{n}|-i}{\boldsymbol{\omega}}\coloneqq\frac{(|\mathbf{n}|-i)!}{\omega_1!\cdots\omega_k!}$ is the multinomial coefficient. The product between the two polynomials in [\[eq: Chern class product\]](#eq: Chern class product){reference-type="eqref" reference="eq: Chern class product"} and [\[eq: power Chern class line bundle product\]](#eq: power Chern class line bundle product){reference-type="eqref" reference="eq: power Chern class line bundle product"} equals an integer times the class of a point $h_1^{n_1}\cdots h_k^{n_k}\in A^{|\mathbf{n}|}(\mathbb{P}^\mathbf{n})$. Recalling the relations $h_1^{n_1+1}=\cdots=h_k^{n_k+1}=0$, for all $i$ and all $\boldsymbol{\alpha},\boldsymbol{\omega}$ such that $|\boldsymbol{\alpha}|=i$ and $|\boldsymbol{\omega}|=|\mathbf{n}|-i$, the monomial $h_1^{\alpha_1+\omega_1}\cdots h_k^{\alpha_k+\omega_k}$ is nonzero if and only if $\boldsymbol{\alpha}+\boldsymbol{\omega}=\mathbf{n}$. Therefore, the coefficient of $h_1^{n_1}\cdots h_k^{n_k}$ in the previous product is precisely the right-hand side of [\[eq: degree ith Chern class Segre-Veronese\]](#eq: degree ith Chern class Segre-Veronese){reference-type="eqref" reference="eq: degree ith Chern class Segre-Veronese"}. This concludes the proof. ◻
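The closed formula can be cross-checked against the Chow-ring computation carried out in the proof. The following sympy sketch (ours; both helper names are hypothetical) does so for a few small formats.

```python
import sympy as sp
from itertools import product

def chern_degree_formula(d, n, i):
    """deg c_i(V_{d,n}) via the closed formula of Proposition 19."""
    N = sum(n)
    total = sp.Integer(0)
    for alpha in product(*[range(ni + 1) for ni in n]):     # alpha_i <= n_i, so gamma_alpha != 0
        if sum(alpha) != i:
            continue
        gamma = sp.Mul(*[sp.binomial(ni + 1, a) / sp.factorial(ni - a) for ni, a in zip(n, alpha)])
        total += gamma * sp.Mul(*[sp.Integer(di)**(ni - a) for di, ni, a in zip(d, n, alpha)])
    return sp.factorial(N - i) * total

def chern_degree_chow(d, n, i):
    """deg c_i(V_{d,n}) as the coefficient of h1^n1...hk^nk in c_i(P^n) * c_1(O(d))^(|n|-i)."""
    k, N = len(d), sum(n)
    h = sp.symbols(f'h1:{k + 1}')
    c = sp.Poly(sp.expand(sp.Mul(*[sum(sp.binomial(ni + 1, l) * hi**l for l in range(ni + 1))
                                   for ni, hi in zip(n, h)])), *h)
    c_i = sum(coeff * sp.Mul(*[hi**m for hi, m in zip(h, mon)])
              for mon, coeff in c.terms() if sum(mon) == i)
    hyper = sum(di * hi for di, hi in zip(d, h))
    full = sp.Poly(sp.expand(c_i * hyper**(N - i)), *h)
    return full.coeff_monomial(sp.Mul(*[hi**ni for hi, ni in zip(h, n)]))

for d, n in [((1, 1), (1, 2)), ((2, 1), (1, 1)), ((3,), (2,))]:
    assert all(chern_degree_formula(d, n, i) == chern_degree_chow(d, n, i) for i in range(sum(n) + 1))
```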
The next statement follows by combining Theorem [\[thm: ED degree Chern classes\]](#thm: ED degree Chern classes){reference-type="ref" reference="thm: ED degree Chern classes"} and Proposition [Proposition 19](#prop: degrees Chern classes Segre-Veronese){reference-type="ref" reference="prop: degrees Chern classes Segre-Veronese"}.
**Theorem 20**. *The generic ED degree of the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$ is $$\label{eq: gen ED degree Segre-Veronese}
\mathop{\mathrm{gEDdegree}}(\mathcal{V}_{\mathbf{d},\mathbf{n}})=\sum_{i=0}^{|\mathbf{n}|}(-1)^i(2^{|\mathbf{n}|+1-i}-1)(|\mathbf{n}|-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}}\,\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}\,,$$ where the coefficients $\gamma_{\boldsymbol{\alpha}}$ are defined in [\[eq: def gamma alpha\]](#eq: def gamma alpha){reference-type="eqref" reference="eq: def gamma alpha"}.*
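The formula is straightforward to evaluate. The sympy sketch below (ours) implements it verbatim and reproduces several values appearing later in the paper.

```python
import sympy as sp
from itertools import product

def gED_degree(d, n):
    """Generic ED degree of V_{d,n} via the formula of Theorem 20."""
    N = sum(n)
    total = sp.Integer(0)
    for i in range(N + 1):
        inner = sp.Integer(0)
        for alpha in product(*[range(ni + 1) for ni in n]):
            if sum(alpha) != i:
                continue
            gamma = sp.Mul(*[sp.binomial(ni + 1, a) / sp.factorial(ni - a) for ni, a in zip(n, alpha)])
            inner += gamma * sp.Mul(*[sp.Integer(di)**(ni - a) for di, ni, a in zip(d, n, alpha)])
        total += (-1)**i * (2**(N + 1 - i) - 1) * sp.factorial(N - i) * inner
    return total

assert gED_degree((1, 1), (1, 1)) == 6                                  # Sigma_2, cf. Table 1
assert gED_degree((1, 1, 1), (1, 1, 1)) == 34                           # Sigma_3, cf. Table 1
assert gED_degree((1, 1), (2, 3)) == 83                                 # cf. Table 2
assert all(gED_degree((d,), (1,)) == 3 * d - 2 for d in range(1, 6))    # rational normal curves
assert gED_degree((2,), (2,)) == 7 * 2**2 - 9 * 2 + 3                   # second Veronese surface
```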
The formula [\[eq: gen ED degree Segre-Veronese\]](#eq: gen ED degree Segre-Veronese){reference-type="eqref" reference="eq: gen ED degree Segre-Veronese"} simplifies a lot for nonsymmetric tensors of binary format, namely for $\mathbf{n}=\mathbf{d}=\mathbf{1}$. We recall that a *derangement* of a set is a permutation of the elements of the set in which no element appears in its original position. The number of derangements of a set with $k$ elements is denoted by $!k$ and is also called the *subfactorial* of $k$. In particular $!0=1$, $!1=0$ and $!k=(k-1)[!(k-1)+!(k-2)]$ for all $k\ge 2$. Furthermore, the *incomplete Gamma function* is $\Gamma(\alpha,x)\coloneqq\int_x^\infty t^{\alpha-1}e^{-t}dt$.
**Corollary 21**. *If $\mathbf{n}=\mathbf{d}=\mathbf{1}=1^k$, then the generic ED degree of the Segre variety $\Sigma_k$ is $$\label{eq: gen ED degree binary tensors}
\mathop{\mathrm{gEDdegree}}(\Sigma_k)=2^{k+1}\cdot!k-\frac{\Gamma(k+1,-2)}{e^2}\,.$$*
*Proof.* Consider the numbers $\gamma_{\boldsymbol{\alpha}}$ defined in [\[eq: def gamma alpha\]](#eq: def gamma alpha){reference-type="eqref" reference="eq: def gamma alpha"}. Under our assumptions, for a tuple $\boldsymbol{\alpha}\in\mathbb{Z}_{\ge 0}^k$, we have that $\gamma_{\boldsymbol{\alpha}}=\prod_{i=1}^k\binom{2}{\alpha_i}=2^{|\boldsymbol{\alpha}|}$ if $\alpha_i\le 1$ for all $i\in[k]$, otherwise $\gamma_{\boldsymbol{\alpha}}=0$. In this way, we can restrict only to summands $\alpha\in\{0,1\}^k$ and by Theorem [Theorem 20](#thm: gen ED degree Segre-Veronese){reference-type="ref" reference="thm: gen ED degree Segre-Veronese"}, we conclude that $$\begin{aligned}
\mathop{\mathrm{gEDdegree}}(\Sigma_k) &= \sum_{i=0}^k(-1)^i(2^{k+1-i}-1)(k-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}} = \sum_{i=0}^k(-1)^i(2^{k+1-i}-1)(k-i)!\sum_{\substack{\alpha\in\{0,1\}^k\\|\boldsymbol{\alpha}|=i}}2^i\\
&= \sum_{i=0}^k(-1)^i(2^{k+1-i}-1)(k-i)!\binom{k}{i}2^i = k!\sum_{i=0}^k\frac{(-1)^i}{i!}(2^{k+1}-2^i)\,.\end{aligned}$$ The equality in [\[eq: gen ED degree binary tensors\]](#eq: gen ED degree binary tensors){reference-type="eqref" reference="eq: gen ED degree binary tensors"} follows by the identities $$!k = k!\sum_{i=0}^k\frac{(-1)^i}{i!}\,,\quad \frac{\Gamma(k+1,-2)}{e^2} = k!\sum_{i=0}^k\frac{(-1)^i}{i!}2^i\,.\qedhere$$ ◻
In general, the ED degree of $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ with respect to a Frobenius inner product $Q_F$ is much smaller than its generic ED degree. For example, comparing Theorem [\[thm: FO formula\]](#thm: FO formula){reference-type="ref" reference="thm: FO formula"} and Corollary [Corollary 21](#corol: gen ED degree binary tensors){reference-type="ref" reference="corol: gen ED degree binary tensors"} for $\mathbf{n}=\mathbf{d}=\mathbf{1}=1^k$ yields the inequality $$\mathop{\mathrm{gEDdegree}}(\Sigma_k)=2^{k+1}\cdot!k-\frac{\Gamma(k+1,-2)}{e^2}\ge k!=\mathop{\mathrm{EDdegree}}_{Q_F}(\Sigma_k)\,,$$ which is strict for all $k\ge 2$. We display the values of both sides for small $k$ in Table [1](#tab: generic ED degree Segre binary){reference-type="ref" reference="tab: generic ED degree Segre binary"}.
| $k$                                          | 1 | 2 | 3  | 4   | 5    | 6     | 7      | 8       | 9         | 10         |
|----------------------------------------------|---|---|----|-----|------|-------|--------|---------|-----------|------------|
| $\mathop{\mathrm{gEDdegree}}(\Sigma_k)$      | 1 | 6 | 34 | 280 | 2808 | 33808 | 473968 | 7588992 | 136650880 | 2733508864 |
| $\mathop{\mathrm{EDdegree}}_{Q_F}(\Sigma_k)$ | 1 | 2 | 6  | 24  | 120  | 720   | 5040   | 40320   | 362880    | 3628800    |

: Comparison between generic ED degree and ED degree of $\Sigma_k$ for a Frobenius inner product for small values of $k$.
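Both rows of Table 1 can be regenerated directly from Corollary 21; the following sympy snippet of ours uses the exact expression for $\Gamma(k+1,-2)/e^2$ appearing in its proof.

```python
import sympy as sp

def gED_binary(k):
    # 2^(k+1) * !k - Gamma(k+1, -2)/e^2, with the exact identity from the proof of Corollary 21
    gamma_term = sp.factorial(k) * sum(sp.Integer(-2)**i / sp.factorial(i) for i in range(k + 1))
    return 2**(k + 1) * sp.subfactorial(k) - gamma_term

assert [gED_binary(k) for k in range(1, 11)] == [1, 6, 34, 280, 2808, 33808, 473968,
                                                 7588992, 136650880, 2733508864]
assert [sp.factorial(k) for k in range(1, 11)] == [1, 2, 6, 24, 120, 720, 5040, 40320,
                                                   362880, 3628800]
```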
Consider instead the case $\mathbf{d}=(1,1)$, $\mathbf{n}=(n_1,n_2)$ (assume $n_1\le n_2$), where $\Sigma_\mathbf{n}$ parametrizes all $(n_1+1)\times(n_2+1)$ matrices of rank at most one. On one hand, Theorem [\[thm: FO formula\]](#thm: FO formula){reference-type="ref" reference="thm: FO formula"} gives the value $\mathop{\mathrm{EDdegree}}_{Q_F}(\Sigma_\mathbf{n})=n_1+1$, which agrees with the Eckart-Young theorem, see for example [@ottaviani2015geometric] for more details. On the other hand, we display in Table [2](#tab: generic ED degree Segre matrices){reference-type="ref" reference="tab: generic ED degree Segre matrices"} the first values of $\mathop{\mathrm{gEDdegree}}(\Sigma_\mathbf{n})$.
| $n_1\le n_2$ | 1 | 2  | 3   | 4    | 5     | 6      | 7       | 8        | 9        | 10        |
|--------------|---|----|-----|------|-------|--------|---------|----------|----------|-----------|
| 1            | 6 | 10 | 14  | 18   | 22    | 26     | 30      | 34       | 38       | 42        |
| 2            |   | 39 | 83  | 143  | 219   | 311    | 419     | 543      | 683      | 839       |
| 3            |   |    | 284 | 676  | 1324  | 2292   | 3644    | 5444     | 7756     | 10644     |
| 4            |   |    |     | 2205 | 5557  | 11821  | 22341   | 38717    | 62805    | 96717     |
| 5            |   |    |     |      | 17730 | 46222  | 104026  | 209766   | 388722   | 673854    |
| 6            |   |    |     |      |       | 145635 | 388327  | 910171   | 1928191  | 3768211   |
| 7            |   |    |     |      |       |        | 1213560 | 3288712  | 7947416  | 17500200  |
| 8            |   |    |     |      |       |        |         | 10218105 | 28031657 | 69374105  |
| 9            |   |    |     |      |       |        |         |          | 86717630 | 240186706 |
| 10           |   |    |     |      |       |        |         |          |          | 740526303 |

: Generic ED degrees of $\Sigma_\mathbf{n}$ for small values of $\mathbf{n}=(n_1,n_2)$ with $n_1\le n_2$.
For example, the first row corresponds to the format $2\times n_2$, and as shown in [@ottaviani2021asymptotics Remark 4.21], we have that $$\mathop{\mathrm{gEDdegree}}(\Sigma_{1,n_2})=4n_2+2>2=\mathop{\mathrm{EDdegree}}_{Q_F}(\Sigma_{1,n_2})\,.$$ In particular, the left-hand side diverges as $n_2\to\infty$, while the right-hand side remains constant. We also note that, interestingly, the diagonal of Table [2](#tab: generic ED degree Segre matrices){reference-type="ref" reference="tab: generic ED degree Segre matrices"} coincides with the integer sequence in [@oeis [A231482](http://oeis.org/A231482)].
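The Frobenius value in this comparison is classical: by Eckart-Young, the ED critical points of a generic real matrix on the rank-one locus are exactly the rank-one terms of its singular value decomposition. A small numpy illustration of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal((3, 5))                     # a generic matrix, n = (2, 4)
U, sing, Vt = np.linalg.svd(u)
# the rank-one SVD terms are the ED critical points for the Frobenius inner product
critical = [sing[i] * np.outer(U[:, i], Vt[i, :]) for i in range(min(u.shape))]
print(len(critical))                                # 3 = min(n_1, n_2) + 1
# each term satisfies the criticality condition: u - X is Frobenius-orthogonal to the
# tangent space {a y^T + x b^T} of the rank-one cone at X = sigma * x y^T
X, x, y = critical[1], U[:, 1], Vt[1, :]
print(np.allclose((u - X) @ y, 0), np.allclose(x @ (u - X), 0))
```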
The following general conjecture motivates all the research done in this work.
**Conjecture 22**. *Consider the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$. Let $Q_F$ be a Frobenius inner product on $S^\mathbf{d}V^\mathsmaller{\mathbb{R}}$. For any positive definite symmetric bilinear form $Q$ on $S^\mathbf{d}V^\mathsmaller{\mathbb{R}}$, we have $$\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{\mathbf{d},\mathbf{n}})\ge\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})\,,$$ namely the maximum ED defect of $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is given by the difference $$\mathop{\mathrm{gEDdegree}}(\mathcal{V}_{\mathbf{d},\mathbf{n}})-\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})\,.$$*
Even though we do not have a proof of Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} in general, in the next section we settle it in the case of matrices, that is, tensors of order two. Specifically, in Theorems [Theorem 25](#thm: lower bound Q-distance function){reference-type="ref" reference="thm: lower bound Q-distance function"} and [Theorem 26](#thm: symmetric_matrices){reference-type="ref" reference="thm: symmetric_matrices"} we show that, given any inner product, the associated distance function from a generic real (symmetric) matrix to the manifold of rank-1 real (symmetric) matrices has at least as many critical points as the distance function associated to the Frobenius inner product.
In Section [5](#sec: ED defects){reference-type="ref" reference="sec: ED defects"} we perform a detailed study of ED defects of various varieties. The results stated there are of independent interest and they give further evidence that Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} holds.
In the rest of this section, we prove a local version of Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} under the assumption that there exists a generic real data point all of whose ED critical points are real.
**Theorem 23**. *Assume that there exists a generic real tensor $u \in S^\mathbf{d}V^\mathsmaller{\mathbb{R}}$ such that the number of critical points of $\mathop{\mathrm{dist}}^{Q_F}_{C\mathcal{V}_{\mathbf{d},\mathbf{n}}^{\mathsmaller{\mathbb{R}}},u}$ equals $\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})$. Then $$\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{\mathbf{d},\mathbf{n}}) \ge \mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})$$ for all inner products $Q\in S^2_+(S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}})$ in a sufficiently small neighborhood of $Q_F\in S^2_+(S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}})$. In other words, the Frobenius inner product is a local minimum of the integer-valued function $\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{\mathbf{d},\mathbf{n}})$ on the open convex cone $S^2_+(S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}})$ of inner products on $S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}$.*
*Proof.* By the assumption and Lemma [Lemma 14](#lem: generic=Morse){reference-type="ref" reference="lem: generic=Morse"}, for some generic $u\in S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}$ the distance function $\mathop{\mathrm{dist}}^{Q_F}_{C\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}}}: C\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}}\rightarrow \mathbb{R}$ is Morse and has $\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})$ critical points. By Lemma [Lemma 15](#lem: uQMorse){reference-type="ref" reference="lem: uQMorse"}, pairs $(w,Q)\in S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}\times S^2_+(S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}})$ such that the function $\mathop{\mathrm{dist}}^Q_{C\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{\mathbf{d}, \mathbf{n}}}: C\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}}\rightarrow \mathbb{R}$ is not Morse form a semialgebraic set $\Sigma(\mathcal{V}^{\,\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}})$ of positive codimension and $(u,Q_F)\in (S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}\times S^2_+(S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}))\setminus \Sigma(\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}})$ lies in its complement. Consider now a basic neighborhood $U\times \mathfrak{Q}\subset (S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}\times S^2_+(S^{\mathbf{d}} V^{\mathsmaller{\mathbb{R}}}))\setminus \Sigma(\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}})$ of $(u, Q_F)$ contained in the complement of $\Sigma(\mathcal{V}_{\mathbf{d},\mathbf{n}}^{\mathsmaller{\mathbb{R}}})$. Every pair $(w, Q)$ in $U\times \mathfrak{Q}$ corresponds to a Morse function $\mathop{\mathrm{dist}}^{Q}_{C\mathcal{V}_{\mathbf{d},\mathbf{n}}^{\,\mathsmaller{\mathbb{R}}},w}$, whose number of critical points we denote by $N(w, Q)$. By [@BH04 Corollary 5.24], critical points of $\mathop{\mathrm{dist}}^{Q}_{C\mathcal{V}^{\,\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}},w}$ lie in a small neighborhood of critical points of $\mathop{\mathrm{dist}}^{Q_F}_{C\mathcal{V}^{\,\mathsmaller{\mathbb{R}}}_{\mathbf{d},\mathbf{n}},u}$. In particular, $N(w,Q)=N(u, Q_F)$ is constant for all $(w,Q)\in U\times \mathfrak{Q}$. It follows that $$\mathop{\mathrm{EDdegree}}_{Q}(\mathcal{V}_{\mathbf{d},\mathbf{n}}) \ge N(w,Q) = N(u,Q_F) = \mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})\,,$$ where $(w, Q)$ is in $U\times \mathfrak{Q}$. ◻
In [@Kozhasov2017OnFR] the first author of the present work showed the existence of generic symmetric tensors $u\in S^d V^\mathsmaller{\mathbb{R}}$ such that $\mathop{\mathrm{dist}}^{Q_F}_{C\mathcal{V}^{\mathsmaller{\mathbb{R}}}_{d,n}, u}$ has $\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{d,n})$ many real critical points. Thus, Theorem [Theorem 23](#thm: local minimality){reference-type="ref" reference="thm: local minimality"} combined with this result yields the following corollary.
**Corollary 24**. *For all inner products $Q\in S^2_+(S^d V^{\mathsmaller{\mathbb{R}}})$ in a small neighborhood of $Q_F$ we have $$\begin{aligned}
\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{d,n})\geq \mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{d,n}),\end{aligned}$$ that is, the Frobenius inner product $Q_F$ is a local minimum of the function $\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{d,n})$.*
# Minimum ED degrees for varieties of rank-one matrices {#sec: minimum ED degree Segre-Veronese}
This section aims to prove Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} in the case of matrices (bilinear forms) and symmetric matrices. The next result implies Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} for $k=2$ and $\mathbf{d}=(1,1)$.
**Theorem 25**. *Consider the cone $C\Sigma_\mathbf{n}\subseteq V=V_1\otimes V_2$ over the Segre variety $\Sigma_\mathbf{n}\subset\mathbb{P}(V)$ of rank-1 matrices. Given any inner product $Q\in S^2(V^\mathsmaller{\mathbb{R}})$ and the associated quadratic form $q\colon V^\mathsmaller{\mathbb{R}}\to\mathbb{R}$, the distance function $$\label{eq: distance function matrices}
\mathop{\mathrm{dist}}^Q_{C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}},u}\colon C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}\to\mathbb{R}\,,\quad x\otimes y \mapsto q(u-x\otimes y),$$ from a generic matrix $u \in V^\mathsmaller{\mathbb{R}}$ to the real cone $C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ has at least $\min(n_1,n_2)+1$ real critical points. In particular, if $Q_F$ is a Frobenius inner product on $V^\mathsmaller{\mathbb{R}}$, we have $$\label{eq: lower bound ED degree matrices}
\mathop{\mathrm{EDdegree}}_Q(\Sigma_\mathbf{n})\ge \mathop{\mathrm{EDdegree}}_{Q_F}(\Sigma_\mathbf{n})=\min(n_1,n_2)+1\,.$$*
*Proof.* Without loss of generality, we assume $n_1\le n_2$. Suppose that the inner product $Q_F$ is induced by the inner products $Q_1$, $Q_2$ on the factors $V_1^\mathsmaller{\mathbb{R}}$, $V_2^\mathsmaller{\mathbb{R}}$, respectively. The associated quadratic forms are denoted with $q_1$ and $q_2$. For $i\in\{1,2\}$, let $\mathbb{S}^{Q_i}=\{x\in V_i^\mathsmaller{\mathbb{R}}\mid q_i(x)=1\}$ denote the unit sphere in the Euclidean space $(V_i^\mathsmaller{\mathbb{R}}, Q_i)$ and let $\mathbb{S}^Q=\{u\in V_1^\mathsmaller{\mathbb{R}}\otimes V_2^\mathsmaller{\mathbb{R}}\mid q(u)=1\}$ be the unit sphere in the inner product space $(V^\mathsmaller{\mathbb{R}},Q)$. Recall that the elements of $C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ are rank-one matrices $x\otimes y$ for some nonzero $x\in V_1^\mathsmaller{\mathbb{R}}$ and $y\in V_2^\mathsmaller{\mathbb{R}}$. For any rank-one matrix $x\otimes y\in \mathbb{S}^Q$, we can always assume that $x\in \mathbb{S}^{Q_1}$: if $q_1(x)\neq 1$, then the assumption $q(x\otimes y)=1$ allows us to say that $q_1(x)\neq 0$, hence we can replace $x$ with $\tilde{x}=x/\sqrt{q_1(x)}$ and $y$ with $\tilde{y}=\sqrt{q_1(x)}\,y$, so that $\tilde{x}\otimes\tilde{y}=x\otimes y$ and $\tilde{x}\in\mathbb{S}^{Q_1}$. Therefore, the $q_2$-length of $y\in V_2^\mathsmaller{\mathbb{R}}$ is determined uniquely. Thus, given $x\in\mathbb{S}^{Q_1}$, we define $$\mathbb{S}_x^{Q_2}\coloneqq\left\{y\in V_2^\mathsmaller{\mathbb{R}}\mid x\otimes y\in\mathbb{S}^Q \right\}.$$ Note that $\mathbb{S}_x^{Q_2}\subset V_2^\mathsmaller{\mathbb{R}}$ is a quadratic hypersurface homeomorphic to the sphere $\mathbb{S}^{Q_2}$. Consider now the sphere bundle $$\label{eq: sphere bundle}
E_Q\coloneqq\left\{(x,y)\in V_1^\mathsmaller{\mathbb{R}}\times V_2^\mathsmaller{\mathbb{R}}\mid\text{$x\in \mathbb{S}^{Q_1}$ and $y\in \mathbb{S}_x^{Q_2}$}\right\}\longrightarrow\mathbb{S}^{Q_1}$$ whose fiber over $x\in \mathbb{S}^{Q_1}$ is $\mathbb{S}_x^{Q_2}$. We now argue that $E_Q\to\mathbb{S}^{Q_1}$ is trivial, that is, there exists a homeomorphism between $E_Q$ and the product of spheres $\mathbb{S}^{Q_1}\times \mathbb{S}^{Q_2}$. In order to construct it, we fix the basis $\mathcal{B}=\{e_i\otimes e_j\mid\text{$i\in\{0,\ldots, n_1\}$ and $j\in\{0,\ldots,n_2\}$}\}$ of $V^\mathsmaller{\mathbb{R}}$, whose elements $e_i\otimes e_j$ we order lexicographically. We call $\mathrm{vec}$ the map that to a matrix $u=\sum_{i=0}^{n_1}\sum_{j=0}^{n_2} u_{ij}e_i\otimes e_j\in V^\mathsmaller{\mathbb{R}}$ associates the column-vector $\mathrm{vec}(u)=(u_{00},\ldots, u_{0n_2},u_{10},\ldots,u_{1n_2},\ldots,u_{n_10},\ldots,u_{n_1n_2})^\mathsmaller{\mathsf{T}}$ of its coordinates in the basis $\mathcal{B}$. Let $M_Q$ be the positive definite symmetric matrix of $Q$ in the basis $\mathcal{B}$. We can write $M_Q=N_Q^\mathsmaller{\mathsf{T}}N_Q$, where the rows of $N_Q$ are vectorizations of some $(n_1+1)\times (n_2+1)$-matrices $N_{ij}$ for all $i\in\{0,\ldots, n_1\}$ and $j\in\{0,\ldots, n_2\}$. So, the $Q$-length of $x\otimes y$ satisfies $$\begin{aligned}
\label{eq: q(x tensor y)}
\begin{split}
q(x\otimes y) &= \mathrm{vec}(x\otimes y)^\mathsmaller{\mathsf{T}}M_Q\, \mathrm{vec}(x\otimes y) = \mathrm{vec}(x\otimes y)^\mathsmaller{\mathsf{T}}N_Q^\mathsmaller{\mathsf{T}}N_Q\, \mathrm{vec}(x\otimes y)\\
&= \sum_{i=0}^{n_1}\sum_{j=0}^{n_2} (x^\mathsmaller{\mathsf{T}}N_{ij}\,y)^2 = \sum_{i=0}^{n_1}\sum_{j=0}^{n_2} y^\mathsmaller{\mathsf{T}}N_{ij}^\mathsmaller{\mathsf{T}}\,x\,x^T N_{ij}\,y = y^\mathsmaller{\mathsf{T}}\left(\sum_{i=0}^{n_1}\sum_{j=0}^{n_2} N_{ij}^\mathsmaller{\mathsf{T}}xx^\mathsmaller{\mathsf{T}}N_{ij}\right) y\,.
\end{split}\end{aligned}$$ Let $P(x)\coloneqq\sum_{i=0}^{n_1}\sum_{j=0}^{n_2} N_{ij}^\mathsmaller{\mathsf{T}}xx^\mathsmaller{\mathsf{T}}N_{ij}$ be the symmetric $(n_2+1)\times (n_2+1)$ matrix appearing in brackets after the last equality in [\[eq: q(x tensor y)\]](#eq: q(x tensor y)){reference-type="eqref" reference="eq: q(x tensor y)"}. Its entries depend quadratically on $x\in V_1^\mathsmaller{\mathbb{R}}$. Furthermore, since $q(x\otimes y)=y^\mathsmaller{\mathsf{T}}P(x)\,y$, the matrix $P(x)$ is positive definite for any $x\in\mathbb{S}^{Q_1}$. Let $P(x)=L(x)^\mathsmaller{\mathsf{T}}L(x)$ be the Cholesky decomposition of $P(x)$. Since $P(x)$ is positive definite, the Cholesky factor $L(x)$ is a unique upper triangular matrix with positive diagonal entries. Furthermore, the dependence of $L(x)$ on $x\in\mathbb{S}^{Q_1}$ is continuous by [@Schatzman Lemma $12.1.6$]. All in all, we define a continuous bundle map $$\label{eq: def bundle map phi}
\varphi\colon E_Q\to\mathbb{S}^{Q_1}\times\mathbb{S}^{Q_2}\,,\quad(x,y)\mapsto(x,L(x)y)\,.$$ The matrix $L(x)$ establishes a linear bijection between fibers $\mathbb{S}_x^{Q_2}$ and $\mathbb{S}^{Q_2}$ and $\varphi$ is a homeomorphism that trivializes the bundle [\[eq: sphere bundle\]](#eq: sphere bundle){reference-type="eqref" reference="eq: sphere bundle"}. By Künneth formula [@Kuenneth], the Betti numbers of $E_Q$ are $$\label{eq: Betti E_Q}
b_k(E_Q)=
\begin{cases}
1 & \text{for $k\in\{0,n_1+n_2\}$}\\
1 & \text{for $k\in\{n_1,n_2\}$ if $n_1\neq n_2$}\\
2 & \textrm{for $k=n_1=n_2$ if $n_1=n_2$}\\
0 & \text{otherwise.}
\end{cases}$$ Recall that any real rank-one tensor on $C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ can be written as $\lambda\,x\otimes y$, where $\lambda\in\mathbb{R}$ and $(x,y)\in E_Q$. By definition, given $u\in V^\mathsmaller{\mathbb{R}}$, the point $\lambda\,x\otimes y\neq 0$ is an ED critical point of the distance function $\mathop{\mathrm{dist}}^Q_{C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}},u}$ if $u-\lambda\,x\otimes y\in N_{\lambda\,x\otimes y}C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$. Since $x\otimes y$ obviously belongs to $T_{\lambda\,x\otimes y}C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$, one must have that $$\begin{aligned}
Q(u,x\otimes y)-\lambda\,Q(x\otimes y,x\otimes y)=Q(u,x\otimes y)-\lambda\,q(x\otimes y)=Q(u,x\otimes y)-\lambda=0\,,\end{aligned}$$ hence the ED critical points relative to $u$ are of the form $Q(u,x\otimes y)\,x\otimes y$. We now look at other tangent directions to $C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ at $\lambda\,x\otimes y$. Observe that $\lambda=Q(u,x\otimes y)\neq 0$. The total space $E_Q$ of the bundle [\[eq: sphere bundle\]](#eq: sphere bundle){reference-type="eqref" reference="eq: sphere bundle"} locally parameterizes the set $\lambda\,\mathbb{S}^Q\cap C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ of rank-one tensors of $q$-norm $\lambda$ via the map $\psi_\lambda\colon E_Q\to\lambda\,\mathbb{S}^Q\cap C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ defined by $\psi_\lambda(\tilde{x},\tilde{y})\coloneqq\lambda\,\tilde{x}\otimes\tilde{y}$. Thus, if $\gamma(t)=(x(t),y(t))$ is a smooth curve in $E_Q$ emanating from $(x,y)=\gamma(0)$, then $\psi_\lambda(\gamma(t))$ is a curve in $\lambda\,\mathbb{S}^Q\cap C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ emanating from $\lambda\,x\otimes y=\psi_\lambda(\gamma(0))$. In particular, the differential of $\psi_\lambda$ sends the tangent vector $\gamma'(0)=(x'(0),y'(0))\in T_{\gamma(0)}E_Q$ to the tangent vector $$\left.\frac{d}{dt}\right|_{t=0}\psi_\lambda(\gamma(t))=\lambda(x'(0)\otimes y+x\otimes y'(0))\in T_{\psi_\lambda(\gamma(0))}(\lambda\,\mathbb{S}^Q\cap C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}})\,.$$ Note that $x\otimes y\in N_{\lambda x\otimes y} (\lambda \mathbb{S}^Q\cap C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}})$, since the derivative of the identity $Q(\lambda x(t)\otimes y(t), x(t)\otimes y(t))=\lambda$ at $t=0$ reads as $2 Q(\lambda( x'(0)\otimes y(0)+x(0)\otimes y'(0)),x\otimes y)=0$. This implies that $u-\lambda x\otimes y=u-Q(u,x\otimes y) x\otimes y$ belongs to $N_{\lambda\,x\otimes y}(\lambda\mathbb{S}^Q\cap C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}})$ if and only if $u$ does. Hence, the rank-one matrix $Q(u,x\otimes y)\,x\otimes y$ is a (real) ED degree critical point relative to $u$ if and only if $(x,y)\in E_Q$ is a critical point of the bilinear form $F_u(\tilde{x},\tilde{y})=Q(u,\tilde{x}\otimes \tilde{y})$ restricted to $E_Q$. Furthermore, $Q(u,x\otimes y)\,x\otimes y\in C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}}$ is a degenerate critical point of the distance function $\mathop{\mathrm{dist}}^Q_{C\Sigma_\mathbf{n}^\mathsmaller{\mathbb{R}},u}$ if and only if $(x,y)\in E_Q$ is a degenerate critical point of the function $F_u|_{E_Q}$. Lemma [Lemma 14](#lem: generic=Morse){reference-type="ref" reference="lem: generic=Morse"} then implies that $F_u|_{E_Q}$ is Morse if and only if $u$ is generic. Under these circumstances, for a given $k\in\{0,1,\ldots,n_1+n_2\}$, let $m_k$ denote the number of critical points of $F_u|_{E_Q}$ of index $k$. Recall that the index of a critical point of a Morse function is the number of negative eigenvalues of the Hessian matrix of the function at this point. Applying Theorem [Theorem 13](#thm: strong Morse){reference-type="ref" reference="thm: strong Morse"}, we have that $$\label{eq: strong Morse E_Q}
\sum_{k=0}^i (-1)^{i-k} m_k \ge \sum_{k=0}^i (-1)^{i-k} b_k(E_Q)\,,$$ where the Betti numbers of $E_Q$ are given by [\[eq: Betti E_Q\]](#eq: Betti E_Q){reference-type="eqref" reference="eq: Betti E_Q"}. Since $F_u(x,y)=F_u(-x,-y)$, if $(x,y)\in E_Q$ is a critical point of index $k$, then so is $(-x,-y)\in E_Q$. Also, since $F_u(x,y)=-F_u(-x,y)$, if $(x,y)\in E_Q$ is a critical point of index $k$, then $(-x,y)\in E_Q$ is a critical point of index $n_1+n_2-k$. In particular, the numbers $m_k$ are all even and $m_k=m_{n_1+n_2-k}$. We have that $m_0\ge b_0=1$, but since $m_0$ is even, then necessarily $m_0\ge 2$. Next, we have $m_1-m_0\ge b_1-b_0=-1$. Again, by the fact that $m_k$ are even, we have that $m_1-m_0\ge 0$. Summing this inequality with $m_0\ge 2$ we derive that $m_1\ge 2$. Proceeding in this way, we obtain that for all even $2i<n_1, n_2$ and odd $2j+1<n_1, n_2$, $$\begin{aligned}
\begin{split}
\sum_{k=0}^{2i}(-1)^{2i-k}m_k &\ge \sum_{k=0}^{2i} (-1)^{2i-k} b_k(E_Q) = 1\\
\sum_{k=0}^{2j+1}(-1)^{2j+1-k}m_k &\ge \sum_{k=0}^{2j+1} (-1)^{2j+1-k} b_k(E_Q) = -1\,,
\end{split}\end{aligned}$$ that, together with the evenness of $m_k$, yields $$\label{eq:localshit}
\sum_{k=0}^{2i}(-1)^{2i-k}m_k\ge 2\,,\quad
\sum_{k=0}^{2j+1}(-1)^{2j+1-k}m_k\ge 0\,.$$ Summing these inequalities with $i=j$, we obtain that $m_i\ge 2$ whenever $i<n_1, n_2$. Now, we have to distinguish the following two cases.
*Case 1 ($n_1=n_2=n$)*. By [\[eq: Betti E_Q\]](#eq: Betti E_Q){reference-type="eqref" reference="eq: Betti E_Q"} we have that $$\begin{aligned}
\sum_{k=0}^{n}(-1)^{n-k}m_k \ge \sum_{k=0}^{n}(-1)^{n-k} b_k(E_Q)\ =\ 2+(-1)^n\,.\end{aligned}$$ Summing this inequality with one of [\[eq:localshit\]](#eq:localshit){reference-type="eqref" reference="eq:localshit"} (depending on the parity of $n$), we obtain $m_n\ge 3$. But since $m_k$ is always even, we must have that $m_n\ge 4$. Since $m_k=m_{2n-k}$, we have that the total number $\sum_{k=0}^{2n}m_k$ of critical points of $F_u|_{E_Q}$ satisfies $$\begin{aligned}
\sum_{k=0}^{2n}m_k = 2\sum_{k=0}^{n-1} m_k +m_n \ge 4n+4 = 4(n+1)\,.\end{aligned}$$
*Case 2 ($n_1<n_2$)*. Again [\[eq: Betti E_Q\]](#eq: Betti E_Q){reference-type="eqref" reference="eq: Betti E_Q"} gives the inequality $$\begin{aligned}
\sum_{k=0}^{n_1} (-1)^{n_1-k} m_k \ge \sum_{k=0}^{n_1}(-1)^{n_1-k} b_k(E_Q) = 1+(-1)^{n_1},\end{aligned}$$ whose sum with [\[eq:localshit\]](#eq:localshit){reference-type="eqref" reference="eq:localshit"} yields that $m_{n_1}\ge 2$. Since $m_k=m_{n_1+n_2-k}$, we obtain the following bound on the total number of critical points of $F_u|_{E_Q}$: $$\begin{aligned}
\sum_{k=0}^{n_1+n_2} m_k \ge 2\sum_{k=0}^{n_1} m_k \ge 4(n_1+1)\,.\end{aligned}$$ We conclude that $F_u|_{E_Q}$ always has at least $4(n_1+1)=4(\min(n_1,n_2)+1)$ critical points. Since the four pairs $(\pm x,\pm y)\in E_Q$ all determine the same ED critical point $Q(u,x\otimes y)\,x\otimes y$, this means that at least $\min(n_1,n_2)+1$ ED critical points relative to $u$ must be real. ◻
Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} also holds for $k=1$ and $\mathbf{d}=(2)$, that is, in the case of symmetric matrices of any size, as we state next.
**Theorem 26**. *Consider the cone $C\mathcal{V}_{2,n}\subseteq S^2V$ over the Veronese variety $\mathcal{V}_{2,n}\subset\mathbb{P}(S^2V)$ of rank-1 symmetric matrices. Given any inner product $Q\in S^2_+(S^2V^\mathsmaller{\mathbb{R}})$ and the associated quadratic form $q\colon S^2V^\mathsmaller{\mathbb{R}}\to\mathbb{R}$, the distance function $$\label{eq: distance function symmetric matrices}
\mathop{\mathrm{dist}}^Q_{C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}},u}\colon C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}}\to\mathbb{R}\,,\quad\pm\,x\otimes x \mapsto q(u\mp x\otimes x)\,,$$ from a generic symmetric matrix $u \in S^2V^\mathsmaller{\mathbb{R}}$ to the real cone $C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}}$ has at least $n+1$ real critical points. In particular, if $Q_F$ is a Frobenius inner product on $S^2V^\mathsmaller{\mathbb{R}}$, we have $$\label{eq: lower bound ED degree symmetric matrices}
\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{2,n})\ge \mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{2,n})=n+1\,.$$*
*Proof.* The proof strategy is similar to the one of Theorem [Theorem 25](#thm: lower bound Q-distance function){reference-type="ref" reference="thm: lower bound Q-distance function"}. The set $\mathbb{S}^Q\coloneqq\{x\in V^\mathsmaller{\mathbb{R}}\mid q(x\otimes x)=1\}$ of vectors whose symmetric square has unit $q$-norm is diffeomorphic to an $n$-sphere of an arbitrary norm in $V^\mathsmaller{\mathbb{R}}$. A real symmetric matrix of rank one can be represented as $\lambda\,x\otimes x\in C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}}$ for some $x\in\mathbb{S}^Q$ and $\lambda\neq 0$. By definition, if $\lambda\,x\otimes x\neq 0$ is an ED critical point relative to $u\in S^2V^\mathsmaller{\mathbb{R}}$, then $u-\lambda\,x\otimes x\in N_{\lambda\,x\otimes x}C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}}$. Since $x\otimes x\in T_{\lambda\,x\otimes x}C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}}$, we have $$Q(u,x\otimes x)-\lambda\,Q(x\otimes x, x\otimes x) = Q(u,x\otimes x)-\lambda\,q(x\otimes x) = Q(u,x\otimes x)-\lambda = 0\,.$$ Furthermore, we have $$\begin{aligned}
\begin{split}
q(u-Q(u,x\otimes x)\,x\otimes x) &= Q(u,u)-2\,Q(u,x\otimes x)Q(u,x\otimes x)+Q(u,x\otimes x)^2\\
&= q(u)-Q(u,x\otimes x)^2\,,
\end{split}\end{aligned}$$ thus $Q(u,x\otimes x)\,x\otimes x$ is an ED critical point relative to $u$ if and only if $x\in\mathbb{S}^Q$ is a critical point of the function $F_u(x)=Q(u,x\otimes x)$, $x\in \mathbb{S}^Q$ (note that $\lambda=Q(u,x\otimes x)\neq 0$). Furthermore, $Q(u,x\otimes x)\,x\otimes x\in C\mathcal{V}_{2,n}^\mathsmaller{\mathbb{R}}$ is a degenerate critical point if and only if $x\in \mathbb{S}^Q$ is a degenerate critical point of $F_u|_{\mathbb{S}^Q}$. Lemma [Lemma 14](#lem: generic=Morse){reference-type="ref" reference="lem: generic=Morse"} then implies that $F_u|_{\mathbb{S}^Q}$ is Morse if and only if $u$ is generic. Denoting by $m_k$ the number of critical points of $F_u|_{\mathbb{S}^Q}$ of index $k\in\{0,1\ldots,n\}$, by Theorem [Theorem 13](#thm: strong Morse){reference-type="ref" reference="thm: strong Morse"} we get that $$\sum_{k=0}^i(-1)^{i-k} m_k\ \ge\ \sum_{k=0}^i (-1)^{i-k} b_k(\mathbb{S}^Q)\,,$$ where the Betti numbers of $\mathbb{S}^Q$ are $$\label{eq: Betti S_Q}
b_k(\mathbb{S}^Q)=
\begin{cases}
1 & \text{for $k\in\{0,n\}$}\\
0 & \text{otherwise.}
\end{cases}$$ Since $F_u(x)=F_u(-x)$, the numbers $m_k$ have to be even. From this, since $m_0\ge b_0(\mathbb{S}^Q)=1$, we have that $m_0\ge 2$. This inequality together with $m_1-m_0\ge b_1-b_0=-1$ and the fact that $m_1$ is even implies that $m_1\ge 2$. Proceeding so we obtain for $i\in\{1,\ldots,n-1\}$ that $$\sum_{k=0}^i (-1)^{i-k} m_k\ \ge\ \sum_{k=0}^i (-1)^{i-k} b_k(\mathbb{S}^Q)\ =\ (-1)^i\,.$$ The previous inequalities and the fact that all $m_k$ are even yield $$\begin{aligned}
\label{eq: local_even}
\sum_{k=0}^i (-1)^{i-k} m_k \ge 2\end{aligned}$$ for even $i$ and $$\begin{aligned}
\label{eq: local_odd}
\sum_{k=0}^i (-1)^{i-k} m_k \ge 0\end{aligned}$$ for odd $i$. Summing these two inequalities for $i-1$ and $i<n$, we obtain that $m_i\ge 2$. Also, if $n$ is even, we have that $\sum_{k=0}^n (-1)^{n-k} m_k\ge \sum_{k=0}^n (-1)^{n-k} b_k(\mathbb{S}^Q)=2$, which summed with [\[eq: local_odd\]](#eq: local_odd){reference-type="eqref" reference="eq: local_odd"} for $i=n-1$ yields $m_n\ge 2$. Analogously, if $n$ is odd, we have that $\sum_{k=0}^n(-1)^{n-k} m_k\ge \sum_{k=0}^n (-1)^{n-k} b_k(\mathbb{S}^Q)=0$, which summed with [\[eq: local_even\]](#eq: local_even){reference-type="eqref" reference="eq: local_even"} for $i=n-1$ gives $m_n\ge 2$. Summarizing, we have that the Morse function $F_u|_{\mathbb{S}^Q}$ has at least $m_k\ge 2$ critical points of each Morse index $k\in\{0,1\ldots,n\}$. Since antipodal critical points $x,-x\in\mathbb{S}^Q$ of $F_u|_{\mathbb{S}^Q}$ give rise to the same ED critical point $Q(u,x\otimes x)\,x\otimes x$ relative to $u$, we have that there are at least $n+1$ real ED critical points of $u$. This proves the inequality [\[eq: lower bound ED degree symmetric matrices\]](#eq: lower bound ED degree symmetric matrices){reference-type="eqref" reference="eq: lower bound ED degree symmetric matrices"}. ◻
# ED defects of special varieties {#sec: ED defects}
## Quadric hypersurfaces {#subsec: ED defects quadrics}
Consider the case $k=2$ and $\mathbf{n}=\mathbf{d}=(1,1)$. The Segre variety $\Sigma_2=\Sigma_\mathbf{n}$ is a smooth quadric surface in $\mathbb{P}(\mathbb{C}^2\otimes\mathbb{C}^2)=\mathbb{P}^3_{\mathsmaller{\mathbb{C}}}$. Since $\Sigma_2=V(F)$ for some quadratic homogeneous polynomial $F$, we may consider the symmetric matrix $M_F$ associated with the polynomial $F$. We always assume that $\dim(\Sigma_2^\mathsmaller{\mathbb{R}})=2$, or rather that the real zero locus of $F$ is a surface in $\mathbb{P}^3_{\mathsmaller{\mathbb{R}}}$. Similarly, for a real symmetric bilinear form $Q\in S^2\mathbb{R}^4$, we let $M_Q$ be the symmetric matrix associated with $Q$. As long as $M_Q$ varies in $S^2\mathbb{C}^4$ and is not proportional to $M_F$, the pair $(M_F,M_Q)$ defines a pencil of quadrics in $\mathbb{P}^3_{\mathsmaller{\mathbb{C}}}$, that is, a projective line in $\mathbb{P}(S^2\mathbb{C}^4)$, or rather a point in the Grassmannian $\mathbb{G}(1,\mathbb{P}(S^2\mathbb{C}^4))$. On the one hand, we have already pointed out in the previous sections that the value of $\mathop{\mathrm{EDdegree}}_Q(\Sigma_2)$ depends on the intersection between $\Sigma_2$ and the isotropic quadric $\mathcal{Q}=V(Q)$. On the other hand, since $M_F$ and $M_Q$ have real entries, to compute the possible values of $\mathop{\mathrm{EDdegree}}_Q(\Sigma_2)$, we may pass through the classification of pencils of real symmetric matrices, especially those containing at least one positive definite element.
Let $\mathbb{K}$ be either $\mathbb{R}$ or $\mathbb{C}$ and let $Q_1, Q_2 \in \mathbb{K}[x_0, \ldots, x_n]$ be homogeneous of degree two, spanning a pencil of quadrics $\{Q_1 - tQ_2 = 0\}$. We can then associate with it the pencil of Hessian matrices $A_1-tA_2$; linear changes of coordinates correspond to congruences $P(A_1-tA_2)P^\mathsmaller{\mathsf{T}}$. Hence, classifying pencils of quadrics up to a change of coordinates amounts to classifying pencils of symmetric matrices up to congruence.
Suppose that $\det(A_2) \neq 0$ and consider the map $\phi\colon \mathbb{K}[t]^{\oplus(n+1)} \to \mathbb{K}[t]^{\oplus(n+1)}$ given by multiplication with $A_1-tA_2$. It follows that $\mathrm{coker}(\phi)$ is a finitely generated torsion $\mathbb{K}[t]$-module and the structure theorem implies that $$\mathrm{coker}(\phi) = \frac{\mathbb{K}[t]}{(q_1(t))}\oplus\cdots\oplus\frac{\mathbb{K}[t]}{(q_r(t))}\,,$$ where the $q_j(t)$ are the primary factors of $\det(A_1-tA_2)$, called the elementary divisors. For $\mathbb{K}=\mathbb{C}$ we may write $q_j(t) = (t-a_i)^{\sigma_{i,j}}$ and the collection of the integers $\sigma_{i,j}$ forms what is called the *Segre symbol* of the pencil. Two pencils of complex symmetric matrices sharing the same Segre symbol and the same roots $a_i$ are congruent, see [@fevola2021pencils Theorem 1.1].
The real case is more subtle. Besides the elementary divisors, which do not necessarily have real roots, one also needs to take into account the inertia. We refer to [@Thomp Theorem 2] for the classification. Nonetheless, a pencil of real symmetric matrices $\{A_1-tA_2\}$ such that $A_2$ is positive definite is rather special.
**Lemma 27**. *Let $\{A_1-tA_2\}$ be a pencil of real symmetric $n\times n$ matrices such that $A_2$ is positive definite. Then there exists $S \in \mathrm{GL}(n, \mathbb{R})$ such that $$S(A_1-tA_2)S^\mathsmaller{\mathsf{T}}= \Lambda - t\,I_n\,,$$ where $\Lambda$ is diagonal with the same eigenvalues as $A_2^{-1}A_1$ and $I_n$ is the identity matrix of order $n$. In particular, only the number $1$ can occur in the Segre symbol.*
**Remark 28**. *This result is proved, for instance, in [@HJ Theorem 7.6.4]. We reproduce the argument below. Note that the lemma reduces the classification of pencils of real symmetric matrices containing a positive definite element to the classification of roots and Segre symbols, as in the complex case.*
*Proof of Lemma [Lemma 27](#lem: simult-diag){reference-type="ref" reference="lem: simult-diag"}.* A positive definite matrix $A_2$ admits a Cholesky decomposition $A_2 = LL^\mathsmaller{\mathsf{T}}$. Due to the spectral theorem, the real symmetric matrix $L^{-1} A_1 (L^{-1})^\mathsmaller{\mathsf{T}}$ can be decomposed as $L^{-1} A_1 (L^{-1})^\mathsmaller{\mathsf{T}}=U \Lambda U^\mathsmaller{\mathsf{T}}$, where $U \in \mathrm{O}(n,\mathbb{R})$ is an orthogonal matrix and $\Lambda$ is diagonal. To conclude, define $S \coloneqq U^\mathsmaller{\mathsf{T}}L^{-1}$.
Note that $A_2^{-1}A_1 = (S^\mathsmaller{\mathsf{T}}S)(S^{-1} \Lambda (S^\mathsmaller{\mathsf{T}})^{-1}) = S^\mathsmaller{\mathsf{T}}\Lambda (S^\mathsmaller{\mathsf{T}})^{-1}$. Hence $A_2^{-1}A_1$ and $\Lambda$ have the same eigenvalues. Furthermore, since $\Lambda - t\,I_n$ is diagonal, the elementary divisors have the form $t-\lambda$, where $\lambda$ is an eigenvalue of $\Lambda$. ◻
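The proof is constructive, and the construction is easy to test numerically. The following numpy sketch of ours builds $S$ exactly as above for a randomly generated pencil and verifies the conclusions of Lemma 27 up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n)); A1 = B + B.T                    # real symmetric
C = rng.standard_normal((n, n)); A2 = C @ C.T + n * np.eye(n)    # positive definite
L = np.linalg.cholesky(A2)                                       # A2 = L L^T
Linv = np.linalg.inv(L)
w, U = np.linalg.eigh(Linv @ A1 @ Linv.T)                        # spectral theorem: U diag(w) U^T
S = U.T @ Linv
print(np.allclose(S @ A2 @ S.T, np.eye(n)))                      # S A2 S^T = I_n
print(np.allclose(S @ A1 @ S.T, np.diag(w)))                     # S A1 S^T = Lambda
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A2) @ A1).real), np.sort(w)))
```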
**Example 29**. Let us return to the study of $\mathop{\mathrm{EDdegree}}_Q(\Sigma_2)$. The radical ideal of $\Sigma_2$ is generated by the homogeneous polynomial $F(x_0,x_1,x_2,x_3)=x_0x_3-x_1x_2$. Hence, the Hessian matrix $M_F$ has signature $(2,2)$. Let $Q$ be another symmetric bilinear form such that its Hessian $M_Q$ is positive definite. Due to Lemma [Lemma 27](#lem: simult-diag){reference-type="ref" reference="lem: simult-diag"}, $M_F - tM_Q$ is congruent to $\mathrm{diag}(a_0-t, a_1-t, a_2-t, a_3-t)$ for $a_0,a_1, a_2, a_3\in \mathbb{R}\setminus \{0\}$. Imposing that $M_F$ has signature $(2,2)$ we fall in one of the cases below. Given $\mathcal{Q}=V(Q)$, we also describe $\Sigma_2\cap\mathcal{Q}\subset\mathbb{P}^3_{\mathsmaller{\mathbb{C}}}$, following [@fevola2021pencils Example 3.1], and compute the corresponding ED degree of $\Sigma_2$.
1. $a_0 < a_1 < 0 < a_2 < a_3$: In this case $\sigma = [1,1,1,1]$ and $\Sigma_2\cap\mathcal{Q}$ is a smooth elliptic curve. Then $\mathop{\mathrm{EDdefect}}_Q(\Sigma_2)=0$ and $\mathop{\mathrm{EDdegree}}_Q(\Sigma_2)=\mathop{\mathrm{gEDdegree}}(\Sigma_2)=6$.
2. $a_0= a_1 < 0 < a_2<a_3$: In this case $\sigma = [(1,1),1,1]$ and $\Sigma_2\cap\mathcal{Q}$ is the union of two noncoplanar conics that meet in two points $P_1,P_2$. By Corollary [\[cor: ED defect isolated singularities\]](#cor: ED defect isolated singularities){reference-type="ref" reference="cor: ED defect isolated singularities"} we have $\mathop{\mathrm{EDdefect}}_Q(\Sigma_2)=\mu_{P_1}+\mu_{P_2}=2$, namely $\mathop{\mathrm{EDdegree}}_Q(\Sigma_2)=6-2=4$.
3. $a_0 = a_1 < 0 < a_2 = a_3$: In this case $\sigma = [(1,1),(1,1)]$ and $\Sigma_2\cap\mathcal{Q}$ is the union of four lines, and each line meets two other lines, so $\mathcal{Z}=\mathrm{Sing}(\Sigma_2\cap\mathcal{Q})$ consists of four simple points $P_1,P_2,P_3,P_4$. Note that this is the case when $Q=Q_F$ is the Frobenius inner product, or any product metric, namely an element of $S^2(\mathbb{C}^2)\otimes S^2(\mathbb{C}^2)\subset S^2(\mathbb{C}^2\otimes\mathbb{C}^2)$. By Corollary [\[cor: ED defect isolated singularities\]](#cor: ED defect isolated singularities){reference-type="ref" reference="cor: ED defect isolated singularities"} we have $\mathop{\mathrm{EDdefect}}_{Q_F}(\Sigma_2)=\sum_{i=1}^4\mu_{P_i}=1+1+1+1=4$, namely $\mathop{\mathrm{EDdegree}}_{Q_F}(\Sigma_2)=6-4=2$.
One more Segre symbol involves only the number $1$, namely $[(1,1,1),1]$. However, the associated pencil does not contain any matrix of signature $(2,2)$. Also note that the quadric $\mathcal{Q}_F$ coming from the Frobenius inner product has the maximal ED defect among the cases above, as predicted by Theorem [Theorem 25](#thm: lower bound Q-distance function){reference-type="ref" reference="thm: lower bound Q-distance function"}. Nonetheless, we can get greater ED defects if we allow more general pencils. For instance, for the Segre symbol $[(2,2)]$, the intersection $\Sigma_2\cap\mathcal{Q}$ is the union of a double line and two other lines that meet the double line at the points $P_1,P_2$. This case is studied in detail in [@maxim2020defect Example 4.6], and we have that $\mathop{\mathrm{EDdefect}}_Q(\Sigma_2)=5$ or, equivalently, $\mathop{\mathrm{EDdegree}}_Q(\Sigma_2)=6-5=1$.
This example tells us that, after fixing a Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$, there may exist smooth complex quadric hypersurfaces $\mathcal{Q}\subset\mathbb{P}(S^\mathbf{d}V)$ such that $\mathop{\mathrm{EDdefect}}_Q(\mathcal{V}_{\mathbf{d},\mathbf{n}})$ is greater than $\mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})$. Our Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} asserts that if we restrict to real $\mathcal{Q}\subset\mathbb{P}(S^\mathbf{d}V)$ whose associated matrix $M_Q$ is positive definite, then $\mathop{\mathrm{EDdefect}}_Q(\mathcal{V}_{\mathbf{d},\mathbf{n}})\leq \mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{\mathbf{d},\mathbf{n}})$ always holds.
**Example 30**. An analogous case study is possible for the plane Veronese conic $\mathcal{V}_{2,1}\subset\mathbb{P}^2_{\mathsmaller{\mathbb{C}}}$, whose real points parametrize $2\times 2$ real symmetric matrices of rank one. It is the zero locus of the homogeneous polynomial $F(x_0,x_1,x_2)=x_0x_2-x_1^2$, whose Hessian matrix $M_F$ has signature $(2,1)$. Given another conic $\mathcal{Q}\subset\mathbb{P}^2$ corresponding to a positive definite matrix $M_Q$, Lemma [Lemma 27](#lem: simult-diag){reference-type="ref" reference="lem: simult-diag"} implies that $M_F-t M_Q$ is congruent to $\mathrm{diag}(a_0-t,a_1-t,a_2-t)$ for some $a_0, a_1, a_2\in \mathbb{R}\setminus\{0\}$. Since signatures of congruent matrices coincide, we can distinguish two cases:
1. $a_0<a_1<0<a_2$. In this case $\sigma=[1,1,1]$ and the intersection $\mathcal{V}_{2,1}\cap \mathcal{Q}$ consists of $4$ reduced points. Then $\mathop{\mathrm{EDdefect}}_Q(\mathcal{V}_{2,1})=0$ and $\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{2,1})=\mathop{\mathrm{gEDdegree}}(\mathcal{V}_{2,1})=4$.
2. $a_0=a_1<0<a_2$. Here $\sigma=[(1,1),1]$ and $\mathcal{V}_{2,1}\cap \mathcal{Q}$ consists of $2$ double points $P_1,P_2$. We have that $\mathop{\mathrm{EDdefect}}_Q(\mathcal{V}_{2,1})=\mu_{P_1}+\mu_{P_2}=2$ and hence $\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{2,1})=4-2=2$. This case corresponds to the Frobenius inner product.
More generally, we can consider the analogous problem for a real quadric hypersurface $\mathcal{X}\subset\mathbb{P}(V^\mathsmaller{\mathbb{R}})$:
**Problem 31**. *Let $\mathcal{X}$ be a fixed real smooth quadric hypersurface in $\mathbb{P}(V^\mathsmaller{\mathbb{R}})$. Let $\mathcal{Q}$ be a smooth complex quadric hypersurface, and assume that its associated symmetric matrix $M_Q$ is real positive definite, namely $\mathcal{Q}$ induces an inner product in $V^\mathsmaller{\mathbb{R}}$. What is the maximum possible ED defect of $\mathcal{X}$ with respect to $\mathcal{Q}$?*
Related to this problem, we know from [@DHOST Eq. (7.1)] that the generic ED degree of a smooth quadric hypersurface $\mathcal{X}\subset\mathbb{P}^N$ is $2N$.
**Proposition 32**. *Let $\mathcal{X}=V(F)$ and $\mathcal{Q}=V(Q)$ be smooth quadric hypersurfaces in $\mathbb{P}^N$ such that the associated symmetric matrices $M_F$ and $M_Q$ are real. Furthermore, assume that $M_Q$ is positive definite. Then $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})= 2(r-1)$, where $r$ is the number of distinct eigenvalues of $M_Q^{-1}M_F$.*
*Proof.* Due to Lemma [Lemma 27](#lem: simult-diag){reference-type="ref" reference="lem: simult-diag"} the pencil of matrices $M_F - tM_Q$ is congruent to $\Lambda - t\,I_{N+1}$ where $\Lambda$ has the same eigenvalues as $M_Q^{-1}M_F$, say $a_1, \ldots, a_r$ are the distinct ones. Let $\{e_1, \ldots, e_{N+1}\}$ denote the chosen basis of $V$ and define $\Gamma_i\coloneqq\{\, j \in[N+1] \mid \Lambda e_j = a_i e_j\}$. Then the quadrics $F$ and $Q$ in these coordinates are, respectively, $$F = \sum_{i=1}^r a_i\sum_{j\in \Gamma_i}x_j^2 \quad \text{ and } \quad Q = \sum_{j=1}^{N+1}x_j^2.$$ It follows from the Jacobian ideal computation that $\mathcal{X}\cap\mathcal{Q}$ is singular along $\mathcal{S}_i = V(\{x_j\mid j\not\in\Gamma_i\})\cap\mathcal{Q}\subset\mathbb{P}^N$, which is not empty only if $\ell_i\coloneqq|\Gamma_i|\ge 2$. Furthermore, note that the $\mathcal{S}_i$ are smooth and pairwise disjoint. Indeed, for $i\neq j$ we have $\Gamma_i \cap \Gamma_j = \emptyset$, hence the ideal of $\mathcal{S}_i\cap\mathcal{S}_j$ contains $x_1, \ldots, x_{N+1}$, so it is empty. Also, up to reordering the basis, we may assume that $\Gamma_i = \{1, \ldots, \ell_i\}$, hence $$\mathcal{S}_i = V(x_1^2+\ldots + x_{\ell_i}^2 , x_{\ell_i+1} , \ldots, x_{N+1})\,,$$ and it is clear that $\mathcal{S}_i$ is smooth.
The previous considerations tell us that $\mathop{\mathrm{EDdefect}}_Q(\mathcal{X})$ is the sum of $r$ contributions $c_i$, one coming from each variety $\mathcal{S}_i$. We claim that $c_i = 2(\ell_i -1)$ for all $i\in[r]$. Indeed, if $\ell_i \ge 3$ then $\mathcal{S}_i$ is a smooth quadric hypersurface in $V(\{x_j\mid j\not\in\Gamma_i\})\cong\mathbb{P}^{\ell_i-1}$. Then, due to Corollary [\[cor: ED defect equisingular\]](#cor: ED defect equisingular){reference-type="ref" reference="cor: ED defect equisingular"}, $c_i = \mathop{\mathrm{gEDdegree}}(\mathcal{S}_i) = 2(\ell_i-1)$, where the last equality comes from [@DHOST Eq. (7.1)]. If $\ell_i = 2$, then $\mathcal{S}_i$ gives two nondegenerate singularities, hence $c_i = 2= 2(2-1)$. By abuse of notation, if $\ell_i = 1$, $\mathcal{S}_i = \emptyset$ and contributes to the ED defect with $c_i=2(\ell_i-1) = 0$. Summing up, and using that $\sum_{i=1}^r\ell_i=N+1$, $$\mathop{\mathrm{EDdefect}}_Q(\mathcal{X}) = \sum_{i = 1}^r 2(\ell_i-1) = \sum_{i = 1}^r 2\,\ell_i-2\,r = 2(N+1-r)\,.$$ Therefore $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})=\mathop{\mathrm{gEDdegree}}(\mathcal{X})-\mathop{\mathrm{EDdefect}}_Q(\mathcal{X})=2N-2(N+1-r)=2(r-1)$. ◻
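Proposition 32 can also be checked computationally. By Lemma 27 we may assume $M_Q=I$ and $M_F$ diagonal; the criticality conditions then collapse to a single rational equation in the Lagrange multiplier. The following sympy sketch (ours, with an arbitrary fixed data point playing the role of a generic one) recovers the value $2(r-1)$ on a few eigenvalue patterns.

```python
import sympy as sp

def ed_degree_diagonal_quadric(eigs):
    # eigs = eigenvalues of M_Q^{-1} M_F listed with multiplicity, in coordinates where M_Q = I
    t = sp.symbols('t')
    u = [sp.Integer(j + 1) for j in range(len(eigs))]        # a fixed, generic enough data point
    # the criticality conditions u - x = t*M_F*x give x_j = u_j/(1 + t*a_j);
    # substituting into F(x) = sum_j a_j x_j^2 = 0 leaves one equation in t
    eq = sum(a * (uj / (1 + t * a))**2 for a, uj in zip(eigs, u))
    num = sp.fraction(sp.cancel(sp.together(eq)))[0]
    return sp.degree(num, t)                                 # number of ED critical points

assert ed_degree_diagonal_quadric([1, 2, 3, 4]) == 6         # r = 4 distinct eigenvalues
assert ed_degree_diagonal_quadric([1, 1, 2, 2, 5]) == 4      # r = 3
assert ed_degree_diagonal_quadric([1, 1, 1, 4]) == 2         # r = 2, the minimal case discussed below
```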
Proposition [Proposition 32](#prop: ED degree X quadric){reference-type="ref" reference="prop: ED degree X quadric"} reveals that the minimum ED degree of a real smooth quadric hypersurface $\mathcal{X}=V(F)$ with respect to a real inner product $Q$ is $2$: the value $r=1$ would mean $\mathcal{X}= \mathcal{Q}$, which is impossible since $\mathcal{X}$ has real points while $\mathcal{Q}$ does not. The minimum $2$ is attained precisely when $M_Q^{-1}M_F$ defines what Aluffi and Harris call a "sphere", see [@aluffi-harris §9.2]; more precisely, up to scaling, $F = a\sum_{i=1}^{\ell} x_i^2 + Q$ for some nonzero $a \in \mathbb{R}$ and some $\ell\in [N]$. Of course, if we remove the assumption that $\mathcal{X}$ is real, we can produce examples of quadric hypersurfaces $\mathcal{X}$ with ED degree one.
## Rational normal curves {#subsec: ED defects rational normal curves}
Coming back to Segre-Veronese varieties, let us consider now the case $k=1$, $\mathbf{d}=(d)$, and $\mathbf{n}=(1)$, namely the case of a rational normal curve $\mathcal{V}_{d,1}\subset\mathbb{P}(S^dV)\cong\mathbb{P}^d_{\mathsmaller{\mathbb{C}}}$. By Theorem [Theorem 20](#thm: gen ED degree Segre-Veronese){reference-type="ref" reference="thm: gen ED degree Segre-Veronese"} we get that $$\mathop{\mathrm{gEDdegree}}(\mathcal{V}_{d,1})=3d-2\,.$$ Given any inner product $Q$ on $S^dV^\mathsmaller{\mathbb{R}}\cong\mathbb{R}^{d+1}$, if $\mathcal{Q}=V(Q)\subset\mathbb{P}^d_{\mathsmaller{\mathbb{C}}}$ does not contain $\mathcal{V}_{d,1}$, then the intersection $\mathcal{V}_{d,1}\cap\mathcal{Q}$ consists of $2d$ points counted with multiplicity. If $\mathcal{Q}$ is generic, then $\mathcal{V}_{d,1}\cap\mathcal{Q}$ consists of $2d$ distinct points, namely $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{d,1})=0$. Assume that $Q=Q_F$, where $Q_F$ is a Frobenius inner product. Without loss of generality, we can assume that $Q_F$ is obtained as the tensor power of the standard Euclidean inner product on $V^\mathsmaller{\mathbb{R}}\cong\mathbb{R}^2$. Hence, if we consider the parametrization $$\label{eq: parametrization rational normal curve}
\nu_d([x_0:x_1])=[x_0^d:x_0^{d-1}x_1:\cdots:x_0x_1^{d-1}:x_1^d]$$ of $\mathcal{V}_{d,1}$, then the isotropic quadric $\mathcal{Q}_F$ has equation $$\label{eq: equation Q_F case rational normal curve}
\sum_{i=0}^d\binom{d}{i}z_i^2=0\,,$$ where $z_0,\ldots,z_d$ are homogeneous coordinates of $\mathbb{P}^d$. In order to compute the intersection $\mathcal{V}_{d,1}\cap\mathcal{Q}_F$, we substitute the relations $z_i=x_0^{d-i}x_1^i$ in [\[eq: equation Q_F case rational normal curve\]](#eq: equation Q_F case rational normal curve){reference-type="eqref" reference="eq: equation Q_F case rational normal curve"}. This produces the identity $$0=(x_0^2+x_1^2)^d=(x_0+\sqrt{-1}x_1)^d(x_0-\sqrt{-1}x_1)^d\,.$$ Hence $\mathcal{V}_{d,1}\cap\mathcal{Q}_F$ consists of the two points $\nu_d([1:\sqrt{-1}])$ and $\nu_d([1:-\sqrt{-1}])$ with multiplicity $d$. By Corollary [\[cor: ED defect isolated singularities\]](#cor: ED defect isolated singularities){reference-type="ref" reference="cor: ED defect isolated singularities"} we conclude that $\mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{d,1})=2(d-1)$, that is $\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{d,1})=d$. One may verify that this value is equal to the one obtained for example in Theorem [\[thm: FO formula\]](#thm: FO formula){reference-type="ref" reference="thm: FO formula"}. In general, we have the following result.
**Proposition 33**. *Consider the real rational normal curve $\mathcal{V}_{d,1}^\mathsmaller{\mathbb{R}}$ parametrized by the map [\[eq: parametrization rational normal curve\]](#eq: parametrization rational normal curve){reference-type="eqref" reference="eq: parametrization rational normal curve"}, and its associated complex curve $\mathcal{V}_{d,1}\subset\mathbb{P}^d_{\mathsmaller{\mathbb{C}}}$. Let $Q$ be a nondegenerate symmetric bilinear form on $S^dV^\mathsmaller{\mathbb{R}}$. If its associated real symmetric matrix $M_Q$ is positive definite, then $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{d,1})$ is an even integer. Furthermore, if $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{d,1})\ge 2d-1$, then the matrix $M_Q$ cannot be chosen real positive definite.*
*Proof.* Consider the variables $x_0,x_1$ and the vector $v=(x_0^d,x_0^{d-1}x_1,\ldots,x_0x_1^{d-1},x_1^d)$. Suppose first that $\mathcal{V}_{d,1}$ is not contained in the quadric $\mathcal{Q}=V(Q)$, hence the intersection $\mathcal{V}_{d,1}\cap\mathcal{Q}$ consists of $2d$ complex points, counted with multiplicity. These points in $\mathbb{P}(S^d V)$ correspond in $\mathbb{P}(V)\cong\mathbb{P}^1_{\mathsmaller{\mathbb{C}}}$ to the complex roots of the binary form $b(x_0,x_1)=vM_Qv^\mathsmaller{\mathsf{T}}$ of degree $2d$. Since $b(x_0,x_1)$ has real coefficients, its roots are either real or come in pairs of conjugate nonreal roots.
Suppose now that $M_Q$ is real positive definite. Then the equation $b(x_0,x_1) = 0$ cannot have nonzero real solutions $(x_0,x_1)$, therefore it has $t\ge 1$ distinct pairs of nonreal conjugate roots with multiplicities $m_1,\ldots,m_t$ such that $m_1+\cdots+m_t=d$. Hence, by Corollary [\[cor: ED defect isolated singularities\]](#cor: ED defect isolated singularities){reference-type="ref" reference="cor: ED defect isolated singularities"}, we have that $$\label{eq: even defect}
\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{d,1})=\sum_{i=1}^t 2(m_i-1)=2(d-t)\,.$$ In particular, the first part of the statement follows. Furthermore, the maximum possible defect is attained for $t=1$ and is equal to $2d-2$, and this value is strictly smaller than the maximum ED defect of $\mathcal{V}_{d,1}$ when it is not contained in $\mathcal{Q}$, which is equal to $2d-1$. Instead, when $\mathcal{V}_{d,1}\subset\mathcal{Q}$, then $\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{d,1})=0$, or equivalently $\mathop{\mathrm{EDdefect}}_Q(\mathcal{V}_{d,1})=3d-2$. In this case we also have $vM_Qv^\mathsmaller{\mathsf{T}}=0$ for all vectors $v=(x_0^d,x_0^{d-1}x_1,\ldots,x_0x_1^{d-1},x_1^d)$. Choosing $(x_0, x_1) = (1,0)$ we get a nonzero real solution of $b(x_0,x_1)=0$, hence $M_Q$ cannot be positive definite. ◻
**Remark 34**. *If $M_Q$ is positive definite, then $\mathcal{Q}$ has no real points. The converse is true up to the choice of $\pm M_Q$. Sylvester's law of inertia reduces the problem to looking at the possible signatures; if $M_Q$ has both positive and negative eigenvalues, we can exhibit a real point of $\mathcal{Q}$.*
**Remark 35**. *For any $d\ge 1$, we can write an explicit example of a quadric $\mathcal{Q}_d=V(Q_d)$ such that $\mathop{\mathrm{EDdefect}}_{Q_d}(\mathcal{V}_{d,1})=2d-1$. Indeed, consider the quadric $Q_d$ of equation $$\label{eq: equation Q_d}
z_0^2-\left(\sum_{i=1}^{\lfloor\frac{d-1}{2}\rfloor}z_iz_{d+1-i}\right)+\left\lfloor\frac{d-1}{2}\right\rfloor z_{\lfloor\frac{d+1}{2}\rfloor}z_{\lceil\frac{d+1}{2}\rceil}=0\,.$$ The symmetric matrix $M_d$ associated with $Q_d$ has two diagonal blocks: the $1\times 1$ diagonal block with entry $1$, and the $d\times d$ diagonal block that is an anti-diagonal matrix. The vector $e_0=(1,0,\ldots,0)$ is an eigenvector of $M_d$ with eigenvalue $1$, while for $d\ge 3$ the vector $e_1+e_d$ is an eigenvector with eigenvalue $-\frac{1}{2}$; for $d\le 2$ the matrix $M_d$ is singular. In any case, $M_d$ is not positive definite. Plugging in the relations $z_i=x_0^{d-i}x_1^i$ in [\[eq: equation Q_d\]](#eq: equation Q_d){reference-type="eqref" reference="eq: equation Q_d"}, we get $$\begin{aligned}
\begin{split}
0 &= x_0^{2d}-\left(\sum_{i=1}^{\lfloor\frac{d-1}{2}\rfloor}x_0^{d-i}x_1^ix_0^{i-1}x_1^{d+1-i}\right)+\lfloor\frac{d-1}{2}\rfloor x_0^{d-\lfloor\frac{d+1}{2}\rfloor}x_1^{\lfloor\frac{d+1}{2}\rfloor}x_0^{d-\lceil\frac{d+1}{2}\rceil}x_1^{\lceil\frac{d+1}{2}\rceil}\\
&= x_0^{2d}-\lfloor\frac{d-1}{2}\rfloor x_0^{d-1}x_1^{d+1}+\lfloor\frac{d-1}{2}\rfloor x_0^{d-1}x_1^{d+1} = x_0^{2d}\,,
\end{split}\end{aligned}$$ where we used the identity $\lfloor\frac{d+1}{2}\rfloor+\lceil\frac{d+1}{2}\rceil=d+1$. Hence, $\mathcal{V}_{d,1}\cap\mathcal{Q}_d$ consists of the point $\nu_d([0,1])$ with multiplicity $2d$ and $\mathop{\mathrm{EDdefect}}_{Q_d}(\mathcal{V}_{d,1})=2d-1$.*
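For instance, for $d=3$ equation [\[eq: equation Q_d\]](#eq: equation Q_d){reference-type="eqref" reference="eq: equation Q_d"} reads $z_0^2-z_1z_3+z_2^2=0$, and plugging in $z_i=x_0^{3-i}x_1^i$ gives $$x_0^6-(x_0^2x_1)(x_1^3)+(x_0x_1^2)^2=x_0^6-x_0^2x_1^4+x_0^2x_1^4=x_0^6\,,$$ whose only root is the point $[0,1]$, with multiplicity $6=2d$.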
## Veronese surfaces {#subsec: ED defects Veronese surfaces}
Consider the Veronese surface $\mathcal{V}_{d,2}\subset\mathbb{P}(S^d\mathbb{C}^3)\cong\mathbb{P}^{\binom{d+2}{2}-1}_{\mathsmaller{\mathbb{C}}}$. By Theorem [Theorem 20](#thm: gen ED degree Segre-Veronese){reference-type="ref" reference="thm: gen ED degree Segre-Veronese"} (see also [@DHOST Proposition 7.10] for arbitrary Veronese varieties) we have $$\mathop{\mathrm{gEDdegree}}(\mathcal{V}_{d,2})=7d^2-9d+3\,.$$ Let $Q$ be an inner product on $S^d\mathbb{R}^3$, whose associated complex quadric hypersurface in $\mathbb{P}(S^d\mathbb{C}^3)$ is $\mathcal{Q}$. The intersection $\mathcal{V}_{d,2}\cap\mathcal{Q}$ has degree $2d^2$ and can be written as the image of a plane curve $\mathcal{C}$ of degree $2d$ under the Veronese embedding $\nu_d$. Here we used the property that, for any curve $\mathcal{C}\subset\mathbb{P}^2_{\mathsmaller{\mathbb{C}}}$ of degree $k$, $\deg(\nu_d(\mathcal{C})) = dk$. Hence, using Theorem [\[thm: ED defect general\]](#thm: ED defect general){reference-type="ref" reference="thm: ED defect general"} and its corollaries, the classification of the possible ED defects $\mathop{\mathrm{EDdefect}}_Q(\mathcal{V}_{d,2})$ corresponds to the classification of all plane curves of degree $2d$.
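For instance, for $d=2$ this formula gives $$\mathop{\mathrm{gEDdegree}}(\mathcal{V}_{2,2})=7\cdot 2^2-9\cdot 2+3=13\,,$$ so an ED defect of $10$ for $\mathcal{V}_{2,2}$ corresponds to an ED degree of $3$, a conversion used repeatedly in the next example.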
**Example 36** (Second Veronese Surface). We focus on the case $d=2$, hence $\mathcal{V}_{2,2}$ is the second Veronese surface in $\mathbb{P}^5_{\mathsmaller{\mathbb{C}}}$. The intersection $\mathcal{V}_{2,2}\cap\mathcal{Q}$ has degree $8$ and can be written as $\nu_2(\mathcal{C})$ for some plane curve $\mathcal{C}$ of degree $4$. If $Q=Q_F$ is a Frobenius inner product, the curve is quite special and consists of a double conic $2\mathcal{D}$. Thus, Corollary [\[cor: ED defect equisingular\]](#cor: ED defect equisingular){reference-type="ref" reference="cor: ED defect equisingular"} implies $$\label{eq: ED defect V22}
\mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{2,2}) = \mathop{\mathrm{gEDdegree}}(\nu_2(\mathcal{D})) = 3\cdot4-2 = 10\,.$$ Now assume that $Q$ is such that $\mathcal{C}=\mathcal{V}_{2,2}\cap\mathcal{Q}$ is reduced. The total sum of the Milnor numbers of its singularities is at most $7$, unless $\mathcal{C}$ is the union of four concurrent lines, which has only one singularity with $\mu = 9$, see [@shin-bound]. Due to Corollary [\[cor: ED defect isolated singularities\]](#cor: ED defect isolated singularities){reference-type="ref" reference="cor: ED defect isolated singularities"}, we have $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) \leq 9$ in this case. Next, we deal with the other nonreduced cases. Below $\mathcal{D}$ denotes a plane conic, while $\mathcal{L}$ and $\mathcal{L}_i$ are lines.
1. $\mathcal{C}= \mathcal{D}\cup 2\mathcal{L}$, with $\mathcal{D}\cap\mathcal{L}= \{P_1,P_2\}$. Then $\mathrm{Sing}(\mathcal{C})$ has three strata: $\mathcal{S}_0 = \mathcal{L}\setminus\{P_1,P_2\}$ and $\mathcal{S}_i = \{P_i\}$, $i=1,2$. The Milnor fiber $F_{\mathcal{S}_0}$ around any point of $\mathcal{S}_0$ is homotopic to $\{x^2 = 1\}$, hence $\chi(F_{\mathcal{S}_0}) = 2$ and $\mu_{\mathcal{S}_0} = 1$. Thus $\alpha_{\mathcal{S}_0} = 1$. For $\mathcal{S}_1$ or $\mathcal{S}_2$ the Milnor fiber is homotopic to $\{x^2y = 1\}$, hence $\mu_{\mathcal{S}_1} = \mu_{\mathcal{S}_2} = -1$. The complex link in this case is just a point (see [@rod-wang-maxlike p.492] for the definition) thus $\alpha_{\mathcal{S}_i} = \mu_{\mathcal{S}_i} - \mu_{\mathcal{S}_0} = -2$. Therefore, $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 4\alpha_{\mathcal{S}_0} - \alpha_{\mathcal{S}_1} -\alpha_{\mathcal{S}_2} = 4 -(-2) -(-2) = 8\,.$$
2. $\mathcal{C}=\mathcal{D}\cup 2\mathcal{L}$, with $\mathcal{D}\cap\mathcal{L}= \{2P\}$, a tangent line. Then $\mathrm{Sing}(\mathcal{C})$ has two strata: $\mathcal{S}_0 = \mathcal{L}\setminus\{P\}$ and $\mathcal{S}_1 = \{P\}$. As in the previous case $\alpha_{\mathcal{S}_0} = 1$. For $\mathcal{S}_1$ we may use the formula in [@melle-euler] to compute $\chi(F_{\mathcal{S}_1}) = - 2\cdot3 + 3 = -3$ hence $\mu_{\mathcal{S}_1} = -4$ and $\alpha_{\mathcal{S}_1} = -4 - 1 = -5$. Therefore, $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 4\alpha_{\mathcal{S}_0} - \alpha_{\mathcal{S}_1} = 4 -(-5) = 9\,.$$
3. $\mathcal{C}= \mathcal{L}_1 \cup \mathcal{L}_2 \cup 2\mathcal{L}_3$ with $\mathcal{L}_1\cap \mathcal{L}_2 \cap \mathcal{L}_3 = \emptyset$. This is similar to case $(1)$, but with an extra isolated singularity $A_1$. Therefore, $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 8 + 1 = 9\,.$$
4. $\mathcal{C}= \mathcal{L}_1 \cup \mathcal{L}_2 \cup 2\mathcal{L}_3$ with $\mathcal{L}_1\cap \mathcal{L}_2 \cap \mathcal{L}_3 = \{P\}$. We have two strata: $\mathcal{S}_0 = \mathcal{L}_3\setminus \{P\}$ and $\mathcal{S}_1 = \{P\}$. Also, $\alpha_{\mathcal{S}_0} = 1$ and $\chi(F_{\mathcal{S}_1}) = - 4$. Therefore, $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 4\alpha_{\mathcal{S}_0} - \alpha_{\mathcal{S}_1} = 4 -(-5 -1) = 10\,.$$
5. $\mathcal{C}= 2\mathcal{L}_1 \cup 2\mathcal{L}_2$ with $\mathcal{L}_1\cap \mathcal{L}_2 = \{P\}$. We have three strata: $\mathcal{S}_0 = \mathcal{L}_1 \setminus \{P\}$, $\mathcal{S}_1 = \mathcal{L}_2 \setminus \{P\}$, and $\mathcal{S}_2 = \{P\}$. As in previous cases, $\alpha_{\mathcal{S}_0} = \alpha_{\mathcal{S}_1} = 1$. For $\mathcal{S}_2$ we get $\chi(F_{\mathcal{S}_2}) = 0$ hence $\alpha_{\mathcal{S}_2} = -1 - \alpha_{\mathcal{S}_0} - \alpha_{\mathcal{S}_1} = -3$. Then $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 4\alpha_{\mathcal{S}_0} + 4\alpha_{\mathcal{S}_1} - \alpha_{\mathcal{S}_2} = 4+ 4 -(-3) = 11\,.$$
6. $\mathcal{C}= 3\mathcal{L}_1 \cup \mathcal{L}_2$ with $\mathcal{L}_1\cap \mathcal{L}_2 = \{P\}$. We have two strata: $\mathcal{S}_0 = \mathcal{L}_1\setminus \{P\}$ and $\mathcal{S}_1 = \{P\}$. Now $\alpha_{\mathcal{S}_0} = 2$ and $\mu_{\mathcal{S}_1} = -1$. Then $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 4\alpha_{\mathcal{S}_0} - \alpha_{\mathcal{S}_1} = 4\cdot 2 - ( -1 - 2) = 11\,.$$
7. $\mathcal{C}= 4\mathcal{L}$. Then we may apply Corollary [\[cor: ED defect equisingular\]](#cor: ED defect equisingular){reference-type="ref" reference="cor: ED defect equisingular"} to get $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{2,2}) = 3\mathop{\mathrm{gEDdegree}}(\nu_2(\mathcal{L})) = 12\,.$$
From Theorem [Theorem 26](#thm: symmetric_matrices){reference-type="ref" reference="thm: symmetric_matrices"} the maximal ED defect coming from a real positive definite inner product is $10$, as we computed in [\[eq: ED defect V22\]](#eq: ED defect V22){reference-type="eqref" reference="eq: ED defect V22"}. Equivalently, the minimum ED degree is $\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{2,2})=3$. Thus, cases (5), (6), and (7) above cannot be realized by a definite bilinear form $Q$. In fact, none of these cases can be realized by definite quadrics. Indeed, note that the Veronese surface $\mathcal{V}_{2,2}$ is defined over $\mathbb{R}$ and so is $\mathcal{C}= \mathcal{V}_{2,2}\cap\mathcal{Q}$, whenever $\mathcal{Q}$ is real. If $Q$ is a definite quadric then it has no real (closed) points, hence neither does $\mathcal{C}$. However, every curve above has real (closed) points, as follows from the lemma below.
**Lemma 37**. *Let $\mathcal{C}\subset\mathbb{P}^2_{\mathsmaller{\mathbb{C}}}$ be a projective plane curve defined over $\mathbb{R}$. If either $\mathcal{C}$ contains an odd degree subcurve (over $\mathbb{C}$) or $\mathcal{C}$ contains an odd number of isolated singular points, then $\mathcal{C}$ contains a real point, i.e., $C^\mathsmaller{\mathbb{R}}\neq \emptyset$.*
To some extent, the idea behind this statement is that "a real one-variable polynomial has at least one real root".
*Proof.* Denote by $\bar{z}$ the complex conjugate of $z\in\mathbb{C}$. Then $z\mapsto\bar{z}$ generates the Galois group $\mathrm{Gal}(\mathbb{C}/\mathbb{R})$. For any scheme $\mathcal{X}$ defined over $\mathbb{R}$, $\mathrm{Gal}(\mathbb{C}/\mathbb{R})$ acts on the irreducible components of the associated complex scheme $\mathcal{X}^\mathsmaller{\mathbb{C}}$ and the orbits are defined over $\mathbb{R}$, i.e., if $T$ is an irreducible component of $\mathcal{X}^\mathsmaller{\mathbb{C}}$, then there exists another component $\overline{T} \cong T$ such that $T\cup \overline{T}$ is defined over $\mathbb{R}$. In particular, if $T$ is fixed by $\mathrm{Gal}(\mathbb{C}/\mathbb{R})$, then $T$ is defined over $\mathbb{R}$. For a proof of this fact in more generality, see [@stacks-project [Lemma 04KY](https://stacks.math.columbia.edu/tag/04KY)]. In particular, if $\mathcal{X}$ is zero-dimensional of odd length, the action of $\mathrm{Gal}(\mathbb{C}/\mathbb{R})$ has fixed points, hence $\mathcal{X}$ has real (closed) points.
In our case, let $\mathcal{C}$ be a curve defined over $\mathbb{R}$. Assume, without loss of generality, that $\mathcal{C}=\mathcal{C}_{\rm red}$ is reduced. Note that $\mathrm{Sing}(\mathcal{C})$ is also defined over $\mathbb{R}$. By our previous discussion, if $\mathrm{Sing}(\mathcal{C})$ has odd length, then it has real points, which are contained in $\mathcal{C}$.
Now let $\mathcal{C}=\mathcal{C}_1\cup\cdots\cup\mathcal{C}_k$ be a decomposition of $\mathcal{C}$ into irreducible components. Suppose that $\deg(\mathcal{C}_i) \equiv 1 \pmod{2}$ for some $i\in[k]$. Then $\mathcal{C}_i \cup \overline{\mathcal{C}_i} \subset \mathcal{C}$ is defined over $\mathbb{R}$. If $\mathcal{C}_i = \overline{\mathcal{C}_i}$, i.e., $\mathcal{C}_i$ is defined by a real homogeneous polynomial, then take a general line $\mathcal{L}\subset\mathbb{P}^2$ defined over $\mathbb{R}$ and set $\mathcal{X}= \mathcal{L}\cap\mathcal{C}_i$. If $\mathcal{C}_i \neq \overline{\mathcal{C}_i}$ then take $\mathcal{X}= \mathcal{C}_i \cap \overline{\mathcal{C}_i}$. In either case, $\overline{\mathcal{X}} = \mathcal{X}$ is defined over $\mathbb{R}$, and it has odd length, due to Bézout's Theorem. Thus $\mathcal{X}$ has real points, hence so does $\mathcal{C}$. ◻
**Lemma 38**. *For all $d\ge 2$ consider the Veronese surface $\mathcal{V}_{d,2}\subset\mathbb{P}(S^d\mathbb{C}^3)\cong\mathbb{P}^{\binom{d+2}{2}-1}_{\mathsmaller{\mathbb{C}}}$. Let $Q$ be a symmetric bilinear form on $S^d\mathbb{R}^3$ such that $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{d,2}) \ge \mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{d,2})$, where $Q_F$ is a Frobenius inner product on $S^d\mathbb{R}^3$. Given $\mathcal{Q}=V(Q)$, then the curve $\mathcal{Q}\cap\mathcal{V}_{d,2}$ is nonreduced.*
*Proof.* We show the contrapositive. For any quadric hypersurface $\mathcal{Q}\subset\mathbb{P}^{\binom{d+2}{2}-1}_{\mathsmaller{\mathbb{C}}}$ we have that $\mathcal{Q}\cap\mathcal{V}_{d,2} = \nu_d(\mathcal{C})$, where $\mathcal{C}\subset\mathbb{P}^2_{\mathsmaller{\mathbb{C}}}$ is a plane curve of degree $2d$. If $\mathcal{Q}= \mathcal{Q}_F$ is a Frobenius quadric, then $\mathcal{C}= d\mathcal{D}$, where $\mathcal{D}$ is a smooth conic. It follows from Corollary [\[cor: ED defect equisingular\]](#cor: ED defect equisingular){reference-type="ref" reference="cor: ED defect equisingular"} that $$\mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{d,2}) = (d-1)\mathop{\mathrm{gEDdegree}}(\nu_d(\mathcal{D})) = (d-1)(6d-2)\,.$$ Now consider a quadric $\mathcal{Q}$ such that $\mathcal{C}$ is reduced. Then Corollary [\[cor: ED defect isolated singularities\]](#cor: ED defect isolated singularities){reference-type="ref" reference="cor: ED defect isolated singularities"} says that the ED defect is the sum of the Milnor numbers of $\mathcal{C}$, and a bound for this number was proved in [@shin-bound]. Hence, $$\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{d,2}) = \sum_{P \in \mathcal{C}} \mu_P \leq (2d -1)^2\,.$$ It is clear that $(2d -1)^2 < (d-1)(6d-2)$ for $d\ge 2$, concluding the proof. ◻
**Corollary 39**. *Conjecture [Conjecture 22](#conj: main){reference-type="ref" reference="conj: main"} holds for the 3rd Veronese surface $\mathcal{V}_{3,2}\subset\mathbb{P}^9_{\mathsmaller{\mathbb{C}}}$, namely $$\mathop{\mathrm{EDdegree}}_Q(\mathcal{V}_{3,2})\ge\mathop{\mathrm{EDdegree}}_{Q_F}(\mathcal{V}_{3,2})=7$$ for any inner product $Q$ on $S^3\mathbb{R}^{3}$.*
*Proof.* Let $Q$ be a quadric such that $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{3,2}) \ge \mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{3,2})$, and let $\mathcal{C}$ be the preimage of $\mathcal{V}_{3,2} \cap \mathcal{Q}$ under the Veronese embedding $\nu_3$. In particular $\deg(\mathcal{C})=6$. Let $F$ be a homogeneous polynomial defining $\mathcal{C}\subset\mathbb{P}^2_{\mathsmaller{\mathbb{C}}}$ and let $F = F_1^{\beta_1} \cdots F_k^{\beta_k}$ be the irreducible factorization of $F$. Then we associate to $\mathcal{C}$ a partition of $6$ by taking the part $\deg (F_i)$ with multiplicity $\beta_i$. From Lemmas [Lemma 37](#lem: odd has real points){reference-type="ref" reference="lem: odd has real points"} and [Lemma 38](#lem:defVerSurf){reference-type="ref" reference="lem:defVerSurf"} we know that the admissible partitions contain only even numbers and have repetitions. This is enough to conclude that the partition associated with $\mathcal{C}$ is $\{2,2,2\}$, i.e., $\mathcal{C}$ is a triple conic. Therefore $\mathop{\mathrm{EDdefect}}_{Q}(\mathcal{V}_{3,2}) = \mathop{\mathrm{EDdefect}}_{Q_F}(\mathcal{V}_{3,2})$. ◻
**Remark 40**. *The same approach applied to the $4$th Veronese surface $\mathcal{V}_{4,2}$ yields the admissible partitions $\{4,4\}$, $\{4,2,2\}$, and $\{2,2,2,2\}$. The last case is associated with a Frobenius inner product. However, a bound for the ED defect in the first two cases still eludes us.*
# Nontransversality conditions and leading coefficient of the ED polynomial {#sec: ED polynomial}
Let $\mathcal{X}\subset\mathbb{P}(V)=\mathbb{P}^N$ be a projective variety, and consider its affine cone $C\mathcal{X}\subset V$. Attached to the ED correspondence $\mathcal{E}(\mathcal{X},Q)$ defined in [\[eq: ED correspondence\]](#eq: ED correspondence){reference-type="eqref" reference="eq: ED correspondence"} is the *$\varepsilon$-offset correspondence* of $\mathcal{X}$ $$\mathcal{O}\mathcal{E}_{\varepsilon}(X,Q)\coloneqq\mathcal{E}(\mathcal{X},Q)\cap\{(x,u)\in V\times V\mid q(u-x)=\varepsilon^2\}\,,$$ for all $\varepsilon\in\mathbb{C}$, that is a subvariety of $\mathcal{E}(\mathcal{X},Q)$ of codimension 1. Its projection onto the second factor $V$ is called *$\varepsilon$-offset hypersurface* of $\mathcal{X}$ and is denoted by $\mathcal{O}_{\varepsilon}(X,Q)$. It was introduced and studied first in [@horobet2019offset].
If we regard the coordinates of $u$ as symbolic parameters, as well as the distance parameter $\varepsilon$, and if we assign specific values to the entries $q_{ij}$ of the matrix $M_Q$, then the ideal of $\mathcal{O}_{\varepsilon}(X,Q)$ in $\mathbb{C}[u][\varepsilon]$ has a unique generator, that is called *ED polynomial* of $\mathcal{X}$ in [@ottaviani2020distance]. Here we denote it by $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$. When we keep the entries $q_{ij}$ also as symbolic parameters, the ED polynomial lives in $\mathbb{C}[u,q_{ij}][\varepsilon]$ and we denote it by $\mathrm{EDpoly}_{\mathcal{X}}(u,\varepsilon^2,Q)$.
**Proposition 41**. *[@ottaviani2020distance Proposition 2.3][\[pro: roots ED polynomial\]]{#pro: roots ED polynomial label="pro: roots ED polynomial"} For a generic $u\in V$, the roots $\varepsilon^2$ of $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$ are precisely of the form $\varepsilon^2=q(u-x)$, where $x$ is a critical point of the squared distance function $q(u-x)$ on $\mathcal{X}_{\mathrm{sm}}$. In particular, the distance $\varepsilon$ from the real locus $X^\mathsmaller{\mathbb{R}}$ to a data point $u\in V^\mathsmaller{\mathbb{R}}$ is a root of $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$. Furthermore, $\mathrm{EDpoly}_{\mathcal{X},Q}(u,0)=0$ for all $u\in X^\mathsmaller{\mathbb{R}}$.*
When introducing the ED polynomial $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$, we stressed that $\varepsilon$ is regarded as a variable, just like the coordinates of the data point $u$. Hence, a natural question is to compute the $\varepsilon^2$-degree of $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$.
**Theorem 42**. *[@horobet2019offset Theorem 2.9][\[thm: HW epsilon-degree offset\]]{#thm: HW epsilon-degree offset label="thm: HW epsilon-degree offset"} For a generic $u\in V$, the $\varepsilon^2$-degree of $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$ equals $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})$.*
**Corollary 43**. *For a general $u\in V$, the $\varepsilon^2$-degree of $\mathrm{EDpoly}_{\mathcal{X}}(u,\varepsilon^2,Q)$ equals $\mathop{\mathrm{gEDdegree}}(\mathcal{X})$.*
In the following, we write $\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})=D_Q$ for brevity. An immediate consequence of Theorem [\[thm: HW epsilon-degree offset\]](#thm: HW epsilon-degree offset){reference-type="ref" reference="thm: HW epsilon-degree offset"} is that, for every choice of $Q$, the monomial $\varepsilon^{2D_Q}$ is the leading monomial of $\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)$. In particular, we can write $$\label{eq: write ED polynomial explicitly}
\mathrm{EDpoly}_{\mathcal{X},Q}(u,\varepsilon^2)=\sum_{i=0}^{D_Q}p_i(u)\,\varepsilon^{2i}\,,$$ where $p_i(u)$ is a homogeneous polynomial in the coordinates of $u$ and $p_{D_Q}(u)\neq 0$. The next result tells us something more about the coefficients $p_{D_Q}(u)$ and $p_0(u)$.
**Proposition 44**. *[@ottaviani2020distance Corollary 4.6][\[prop: leading coefficient ED polynomial\]]{#prop: leading coefficient ED polynomial label="prop: leading coefficient ED polynomial"} Let $\mathcal{X}\subset\mathbb{P}(V)$ be a projective variety, and $Q\in S^2V$ a symmetric bilinear form on $V$ whose associated quadric hypersurface is $\mathcal{Q}$. If $\mathcal{X}$ is transversal to $\mathcal{Q}$, then each coefficient $p_i(u)$ of $\varepsilon^{2i}$ in the expression [\[eq: write ED polynomial explicitly\]](#eq: write ED polynomial explicitly){reference-type="eqref" reference="eq: write ED polynomial explicitly"} is a homogeneous polynomial in the coordinates of $u$ of degree $2D_Q-2i$. In particular $p_{D_Q}(u)=p_{D_Q}\in\mathbb{C}$ and the ED polynomial of $\mathcal{X}$ is an integral algebraic function.*
As a consequence of Proposition [\[prop: leading coefficient ED polynomial\]](#prop: leading coefficient ED polynomial){reference-type="ref" reference="prop: leading coefficient ED polynomial"}, we can write the "symbolic" ED polynomial of $\mathcal{X}$ as $$\label{eq: write ED polynomial Q symbolic explicitly}
\mathrm{EDpoly}_{\mathcal{X}}(u,\varepsilon^2,Q)=\sum_{i=0}^{D}p_i(u,Q)\varepsilon^{2i}\,,$$ where $D=\mathop{\mathrm{gEDdegree}}(\mathcal{X})$, $p_i(u,Q)$ is a homogeneous polynomial in the coordinates of $u$ and in the entries of $M_Q$, and in particular $p_{D}(u,Q)=p_{D}(Q)\neq 0$ depends only on the entries of $M_Q$.
Our goal is to describe the reduced hypersurface $V(p_{D}(Q))\subset\mathbb{P}(S^2V)$, and compute its degree.
**Proposition 45**. *Let $\mathcal{X}\subset\mathbb{P}(V)=\mathbb{P}^N$ be an irreducible variety of dimension $n$. For any $e\ge 1$, we consider the space $\mathbb{P}(S^eV^*)$ that parameterizes all hypersurfaces in $\mathbb{P}(V)$ of degree $e$. Define $$\label{eq: def Z d X}
\mathcal{Z}_{e,\mathcal{X}}\coloneqq\overline{\{\mathcal{Y}\in\mathbb{P}(S^eV^*)\mid\text{$\exists\,x\in \mathcal{X}_{\mathrm{sm}}\cap \mathcal{Y}_{\mathrm{sm}}$ such that $T_x\mathcal{Y}\supseteq T_x\mathcal{X}$}\}}\,.$$ The variety $\mathcal{Z}_{e,\mathcal{X}}$ coincides with the dual variety of the $e$th Veronese embedding of $\mathcal{X}_e\coloneqq\nu_e(\mathcal{X})\subset\mathbb{P}(S^eV)$, in particular it is irreducible. Furthermore, $\mathcal{Z}_{e,\mathcal{X}}$ is always a hypersurface in $\mathbb{P}(S^eV^*)$ if $e\ge 2$. The degree of $\mathcal{Z}_{e,\mathcal{X}}$ is $$\label{eq: degree dual Veronese X}
\sum_{j=0}^n(-1)^j(n+1-j)\,c_j^M(\mathcal{X})\cdot(eh)^{n-j}\,,$$ where $h=c_1(\mathcal{O}_\mathcal{X}(1))$.*
*Proof.* Every element $\mathcal{Y}\in\mathbb{P}(S^eV^*)$ corresponds to a hyperplane $H_\mathcal{Y}\subset\mathbb{P}(S^eV)$. In particular, if we consider the Veronese variety $\mathcal{V}_{e,N}=\nu_e(\mathbb{P}^N)\subset\mathbb{P}(S^eV)$, then $\mathcal{Y}\cong\mathcal{V}_{e,N}\cap H_\mathcal{Y}$. Now assume that $T_x\mathcal{X}\subset T_x\mathcal{Y}$ for some point $x\in \mathcal{X}_{\mathrm{sm}}\cap \mathcal{Y}_{\mathrm{sm}}$. This is equivalent to the condition that the hyperplane $H_\mathcal{Y}\subset\mathbb{P}(S^eV)$ contains the tangent space $T_{\nu_e(x)}\mathcal{X}_e$, namely $H_\mathcal{Y}$ belongs to the dual variety $\mathcal{X}_e^\vee$ of $\mathcal{X}_e$. By taking closures, this means that the variety $\mathcal{Z}_{e,\mathcal{X}}$ is equal to $\mathcal{X}_e^\vee$. Since we are assuming that $\mathcal{X}$ is irreducible, $\mathcal{X}_e$ is irreducible as well, and so is its dual variety $\mathcal{X}_e^\vee$.
Now observe that any subvariety of $\mathcal{V}_{e,N}$ is of the form $\mathcal{X}_e$ for some variety $\mathcal{X}\subset\mathbb{P}^N$, and $\deg(\mathcal{X}_e)=e^n\deg(\mathcal{X})$; in particular, if $\mathcal{X}$ has positive dimension, this degree is larger than one when $e>1$. This means that $\mathcal{V}_{e,N}$ does not contain positive-dimensional linear subspaces for all $e\ge 2$, hence its dual variety $\mathcal{V}_{e,N}^\vee$ is always a hypersurface. This fact is also an immediate consequence of Lemma [\[lem: conditions for X mu dual hypersurface, in general\]](#lem: conditions for X mu dual hypersurface, in general){reference-type="ref" reference="lem: conditions for X mu dual hypersurface, in general"}. Similarly, for any subvariety $\mathcal{X}\subset\mathbb{P}^N$ and $e\ge 2$, the variety $\mathcal{X}_e^\vee=\mathcal{Z}_{e,\mathcal{X}}$ is a hypersurface.
Now assume that $\mathcal{X}$ is smooth. The first Chern class of the line bundle corresponding to the Veronese embedding of $\mathcal{X}$ into $\mathbb{P}(S^eV)$ is equal to $eh$. Using this fact and applying equation [\[eq: degdual\]](#eq: degdual){reference-type="eqref" reference="eq: degdual"}, we obtain the degree of $\mathcal{X}_e^\vee$ in [\[eq: degree dual Veronese X\]](#eq: degree dual Veronese X){reference-type="eqref" reference="eq: degree dual Veronese X"} for all $e\ge 2$. A similar formula holds when $\mathcal{X}$ is singular, after replacing the Chern classes with Chern-Mather classes. ◻
**Example 46**. Consider homogeneous coordinates $(z_{30},z_{21},z_{12},z_{03})$ for $\mathbb{P}^3$ and the quartic surface $\mathcal{X}\subset\mathbb{P}^3$ defined by the equation $$z_{21}^{2}z_{12}^{2}-4\,z_{30}z_{12}^{3}-4\,z_{21}^{3}z_{03}+18\,z_{30}z_{21}z_{12}z_{03}-27\,z_{30}^{2}z_{03}^{2}=0\,,$$ which defines the discriminant variety of a cubic binary form. The surface $\mathcal{X}$ has dual defect $\mathrm{def}(\mathcal{X})=1$, as its dual variety is a rational normal cubic in $\mathbb{P}^3$. Consider the smooth point $P=[0,1,0,0]$ of $\mathcal{X}$. The tangent plane $T_P\mathcal{X}$ has equation $z_{03}=0$, and its intersection with $\mathcal{X}$ is, scheme-theoretically, the union of the double line $V(z_{03},z_{12}^2)$ and the simple conic $V(z_{03},z_{21}^2-4z_{30}z_{12})$. Furthermore, the conic $V(z_{03},z_{21}^2-4z_{30}z_{12})$ is tangent to the line $V(z_{03},z_{12})$ at the point $[1,0,0,0]$, which belongs to $\mathcal{X}_{\mathrm{sing}}$. Therefore, $\mathrm{Cont}(T_P\mathcal{X},\mathcal{X})$ is the line $V(z_{03},z_{12})$, confirming the fact that $\mathrm{def}(\mathcal{X})=1$. Now consider a quadric surface $\mathcal{Q}\subset\mathbb{P}^3$. Its general equation has the form $$q_{30,30}z_{30}^2+q_{30,21}z_{30}z_{21}+\cdots+q_{03,03}z_{03}^2=0\,,$$ where the coefficients $q_{30,30},q_{30,21},\ldots, q_{03,03}$ are homogeneous coordinates of $\mathbb{P}(S^2(\mathbb{C}^4)^*)\cong\mathbb{P}^9$. The normal space of $\mathcal{Q}$ at $P$ is generated by the vector $(q_{30,21},2q_{21,21},q_{12,21},q_{03,21})$. Therefore, $T_P\mathcal{Q}=T_P\mathcal{X}$ if and only if the previous vector is proportional to the vector $(0,0,0,1)$, namely if and only if $q_{30,21}=q_{21,21}=q_{12,21}=0$. These conditions define a projective subspace $W_P$ of $\mathbb{P}(S^2(\mathbb{C}^4)^*)$ of dimension 6. A generic element $\mathcal{Q}\in W_P$ meets $\mathcal{X}$ in an irreducible curve of degree $8$, and the reduced singular locus of $\mathcal{Q}\cap\mathcal{X}$ consists of $P$ and the six points of intersection between $\mathcal{Q}$ and the rational normal cubic $\mathcal{X}_{\mathrm{sing}}$. All these considerations tell us that a generic element of $W_P$ is tangent to $\mathcal{X}$ only at $P$. This confirms that the variety $\mathcal{Z}_{2,\mathcal{X}}=[\nu_2(\mathcal{X})]^\vee$ is a hypersurface in $\mathbb{P}(S^2(\mathbb{C}^4)^*)$. In order to compute its degree, note that $\delta_0(\mathcal{X})=0$, $\delta_1(\mathcal{X})=\deg(\mathcal{X}^{\vee})=3$, and $\delta_2(\mathcal{X})=\deg(\mathcal{X})=4$. Plugging these numbers into the second set of relations in [\[eq: polar Chern-Mather\]](#eq: polar Chern-Mather){reference-type="eqref" reference="eq: polar Chern-Mather"}, we obtain that $c_0^M(\mathcal{X})=4$, $c_1^M(\mathcal{X})=9$, and $c_2^M(\mathcal{X})=6$. Then using [\[eq: degree dual Veronese X\]](#eq: degree dual Veronese X){reference-type="eqref" reference="eq: degree dual Veronese X"}, we get that $$\deg(\mathcal{Z}_{2,\mathcal{X}})=3\,c_0^M(\mathcal{X})\deg((2h)^2)-2\,c_1^M(\mathcal{X})\deg(2h)+c_2^M(\mathcal{X})=3\cdot 4\cdot 4-2\cdot 9\cdot 2+6=18\,.$$
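The local computations in this example are easy to cross-check with a computer algebra system. The following is a minimal SymPy sketch (the variable names simply mirror the coordinates $z_{30},z_{21},z_{12},z_{03}$ used above); it recovers the tangent plane at $P$ and the decomposition of $T_P\mathcal{X}\cap\mathcal{X}$.

```python
import sympy as sp

z30, z21, z12, z03 = sp.symbols('z30 z21 z12 z03')
# discriminant of the binary cubic z30*x^3 + z21*x^2*y + z12*x*y^2 + z03*y^3
F = z21**2*z12**2 - 4*z30*z12**3 - 4*z21**3*z03 + 18*z30*z21*z12*z03 - 27*z30**2*z03**2

P = {z30: 0, z21: 1, z12: 0, z03: 0}    # the smooth point P = [0,1,0,0]
grad_at_P = [sp.diff(F, v).subs(P) for v in (z30, z21, z12, z03)]
print(grad_at_P)                         # [0, 0, 0, -4], so T_P X = V(z03)

# restrict F to the tangent plane z03 = 0 and factor
print(sp.factor(F.subs(z03, 0)))         # z12**2 * (z21**2 - 4*z12*z30)
```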
All the previous considerations lead to the following result.
**Proposition 47**. *Let $\mathcal{X}$ be an irreducible projective variety in $\mathbb{P}(V)$. Consider the nonzero polynomial $p_{D}(u,Q)=p_{D}(Q)$ in [\[eq: write ED polynomial Q symbolic explicitly\]](#eq: write ED polynomial Q symbolic explicitly){reference-type="eqref" reference="eq: write ED polynomial Q symbolic explicitly"} and its zero locus $V(p_{D})$ in the space $\mathbb{P}(S^2V)$ of quadric hypersurfaces of $\mathbb{P}(V)$. Then $$\label{eq: zero locus leadcoef 1}
V(p_{D})=\mathbb{P}(\{Q\in S^2V\mid\mathop{\mathrm{EDdegree}}_Q(\mathcal{X})<\mathop{\mathrm{gEDdegree}}(\mathcal{X})\})\,.$$ In particular, $p_{D}(Q)$ has positive degree in the entries of $M_Q$. Moreover, $V(p_{D})$ is an irreducible hypersurface and corresponds to the dual variety $[\nu_2(\mathcal{X})]^\vee$ under the second Veronese embedding of $\mathbb{P}(V)$ into $\mathbb{P}(S^2V)$.*
*Proof.* Identity [\[eq: zero locus leadcoef 1\]](#eq: zero locus leadcoef 1){reference-type="eqref" reference="eq: zero locus leadcoef 1"} follows immediately by Corollary [Corollary 43](#cor: degree ED polynomial symbolic Q){reference-type="ref" reference="cor: degree ED polynomial symbolic Q"} and Proposition [\[prop: leading coefficient ED polynomial\]](#prop: leading coefficient ED polynomial){reference-type="ref" reference="prop: leading coefficient ED polynomial"}. Since the right-hand side is a proper closed subvariety of $\mathbb{P}(S^2V)$, then necessarily $p_{D}(Q)$ has positive degree in the entries of $M_Q$. Consider the variety $\mathcal{Z}_{e,\mathcal{X}}$ introduced in [\[eq: def Z d X\]](#eq: def Z d X){reference-type="eqref" reference="eq: def Z d X"} for $e=2$. The inclusion $V(p_{D})\subset \mathcal{Z}_{2,\mathcal{X}}$ follows by identity [\[eq: zero locus leadcoef 1\]](#eq: zero locus leadcoef 1){reference-type="eqref" reference="eq: zero locus leadcoef 1"} and Theorem [\[thm: ED degree sum polar classes\]](#thm: ED degree sum polar classes){reference-type="ref" reference="thm: ED degree sum polar classes"}. Since $V(p_{D})$ is a hypersurface, and $\mathcal{Z}_{2,\mathcal{X}}$ is an irreducible hypersurface by Proposition [Proposition 45](#prop: variety Q not transversal X irreducible){reference-type="ref" reference="prop: variety Q not transversal X irreducible"}, then necessarily $V(p_{D})=\mathcal{Z}_{2,\mathcal{X}}$. The last part results from Proposition [Proposition 45](#prop: variety Q not transversal X irreducible){reference-type="ref" reference="prop: variety Q not transversal X irreducible"}. ◻
**Example 48**. Let $\mathcal{X}\subset\mathbb{P}^2$ be the irreducible projective conic of equation $x_0x_1-x_2^2=0$. A generic conic $\mathcal{Q}\subset\mathbb{P}^2$ has equation $$q_{200}x_0^2+q_{110}x_0x_1+q_{101}x_0x_2+q_{020}x_1^2+q_{011}x_1x_2+q_{002}x_2^2=0\,,$$ where $q_{200},\ldots q_{002}$ are homogeneous coordinates for $\mathbb{P}(S^2V)$. By Proposition [Proposition 45](#prop: variety Q not transversal X irreducible){reference-type="ref" reference="prop: variety Q not transversal X irreducible"}, the dual variety $[\nu_2(\mathcal{X})]^\vee$ is an irreducible hypersurface of degree $$\deg([\nu_2(\mathcal{X})]^\vee)=2\,c_0(\mathcal{X})\deg(2h)-c_1(\mathcal{X})=2\cdot 2\cdot 2-2=6\,.$$ We identify $\mathbb{P}(S^2V)^*$ with $\mathbb{P}(S^2V)$ via the standard inner product of $S^2V$. Via this identification, we verified in M2 that the equation of $[\nu_2(\mathcal{X})]^\vee$ is $$\begin{gathered}
16\,q_{200}q_{110}^{4}q_{020}-4\,q_{110}^{3}q_{101}^{2}q_{020}-128\,q_{200}^{2}q_{110}^{2}q_{020}^{2}+144\,q_{200}q_{110}q_{101}^{2}q_{020}^{2}-27\,q_{101}^{4}q_{020}^{2}\\
+256\,q_{200}^{3}q_{020}^{3}-80\,q_{200}q_{110}^{2}q_{101}q_{020}q_{011}+18\,q_{110}q_{101}^{3}q_{020}q_{011}-192\,q_{200}^{2}q_{101}q_{020}^{2}q_{011}\\
-4\,q_{200}q_{110}^{3}q_{011}^{2}+q_{110}^{2}q_{101}^{2}q_{011}^{2}+144\,q_{200}^{2}q_{110}q_{020}q_{011}^{2}-6\,q_{200}q_{101}^{2}q_{020}q_{011}^{2}+18\,q_{200}q_{110}q_{101}q_{011}^{3}\\
-4\,q_{101}^{3}q_{011}^{3}-27\,q_{200}^{2}q_{011}^{4}+64\,q_{200}q_{110}^{3}q_{020}q_{002}-12\,q_{110}^{2}q_{101}^{2}q_{020}q_{002}-256\,q_{200}^{2}q_{110}q_{020}^{2}q_{002}\\
+144\,q_{200}q_{101}^{2}q_{020}^{2}q_{002}-160\,q_{200}q_{110}q_{101}q_{020}q_{011}q_{002}+18\,q_{101}^{3}q_{020}q_{011}q_{002}-12\,q_{200}q_{110}^{2}q_{011}^{2}q_{002}\\
+2\,q_{110}q_{101}^{2}q_{011}^{2}q_{002}+144\,q_{200}^{2}q_{020}q_{011}^{2}q_{002}+18\,q_{200}q_{101}q_{011}^{3}q_{002}+96\,q_{200}q_{110}^{2}q_{020}q_{002}^{2}\\
-12\,q_{110}q_{101}^{2}q_{020}q_{002}^{2}-128\,q_{200}^{2}q_{020}^{2}q_{002}^{2}-80\,q_{200}q_{101}q_{020}q_{011}q_{002}^{2}-12\,q_{200}q_{110}q_{011}^{2}q_{002}^{2}\\
+q_{101}^{2}q_{011}^{2}q_{002}^{2}+64\,q_{200}q_{110}q_{020}q_{002}^{3}-4\,q_{101}^{2}q_{020}q_{002}^{3}-4\,q_{200}q_{011}^{2}q_{002}^{3}+16\,q_{200}q_{020}q_{002}^{4}=0\,.
\end{gathered}$$ Up to a scalar factor, the previous polynomial corresponds to the leading coefficient $p_D$ of the symbolic ED polynomial of $\mathcal{X}$.
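A quick way to cross-check this equation is to use the parametrization $[s:t]\mapsto[s^2:t^2:st]$ of $\mathcal{X}$: a quadric $\mathcal{Q}$ is tangent to $\mathcal{X}$ exactly when the restriction of its equation to $\mathcal{X}$, a binary quartic in $(s,t)$, acquires a repeated root, i.e. when its discriminant vanishes. The following SymPy sketch carries out this computation on the affine chart $t=1$, with variable names mirroring the coordinates above.

```python
import sympy as sp

s = sp.symbols('s')
q200, q110, q101, q020, q011, q002 = sp.symbols('q200 q110 q101 q020 q011 q002')

# restrict the general conic Q to the parametrization of x0*x1 - x2^2 = 0,
# dehomogenized at t = 1: x0 = s^2, x1 = 1, x2 = s
x0, x1, x2 = s**2, 1, s
Q = q200*x0**2 + q110*x0*x1 + q101*x0*x2 + q020*x1**2 + q011*x1*x2 + q002*x2**2

# tangency <=> the restricted quartic in s has a repeated root <=> discriminant = 0
disc = sp.expand(sp.discriminant(Q, s))
print(sp.Poly(disc, q200, q110, q101, q020, q011, q002).total_degree())  # 6
print(disc)   # up to a nonzero scalar, the sextic displayed above
```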
For the rest of the section, we focus on the case when $\mathcal{X}$ is a Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}$. Recall the notations given at the beginning of Section [3](#sec: product metrics tensor spaces){reference-type="ref" reference="sec: product metrics tensor spaces"}.
**Lemma 49**. *[@GKZ Chapter 1, Corollary 5.10][\[lem: conditions for X mu dual hypersurface, in general\]]{#lem: conditions for X mu dual hypersurface, in general label="lem: conditions for X mu dual hypersurface, in general"} The dual variety of the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is a hypersurface in $\mathbb{P}(S^\mathbf{d}V^*)$ if and only if $$\label{eq: inequalities for X mu}
n_i\le\sum_{j\neq i}n_j$$ for all indices $i$ such that $d_i=1$.*
The next lemma is a consequence of the results in [@tevelev2003projectively §1.4.B] and is useful when considering the composition of a Segre-Veronese embedding with another Veronese embedding.
**Lemma 50**. *Consider the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$. For all $e\ge 2$, the Veronese embedding $\nu_e(\mathcal{V}_{\mathbf{d},\mathbf{n}})\subset\mathbb{P}(S^e(S^\mathbf{d}V))$ is degenerate, in particular, it is contained in a proper subspace $W$ of $\mathbb{P}(S^e(S^\mathbf{d}V))$ isomorphic to $\mathbb{P}(S^{e\mathbf{d}} V)$, where $e\mathbf{d}=(ed_1,\ldots,ed_k)$. Furthermore, the dual variety $\nu_e(\mathcal{V}_{\mathbf{d},\mathbf{n}})^\vee$ in $\mathbb{P}(S^e(S^\mathbf{d}V))$ can be described as the cone over the dual variety $\mathcal{V}_{e\mathbf{d},\mathbf{n}}^\vee$ in $\mathbb{P}(S^{e\mathbf{d}}V)$ with vertex corresponding to $W$.*
In the proof of Theorem [Theorem 20](#thm: gen ED degree Segre-Veronese){reference-type="ref" reference="thm: gen ED degree Segre-Veronese"} we computed the degrees of the Chern classes $c_i(\mathcal{V}_{\mathbf{d},\mathbf{n}})$. These allow us to derive also the polar degrees of $\mathcal{V}_{\mathbf{d},\mathbf{n}}$:
**Proposition 51**. *The polar degrees of the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ are $$\label{eq: polar classes Segre-Veronese}
\delta_j(\mathcal{V}_{\mathbf{d},\mathbf{n}})=\sum_{i=0}^{|\mathbf{n}|-j}(-1)^{i}\binom{|\mathbf{n}|+1-i}{j+1}(|\mathbf{n}|-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}}\,\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}\quad\forall\,j\in\{0,\ldots,|\mathbf{n}|\}\,,$$ where $\gamma_{\boldsymbol{\alpha}}$ is defined in [\[eq: def gamma alpha\]](#eq: def gamma alpha){reference-type="eqref" reference="eq: def gamma alpha"}. When the dual variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}^\vee$ is a hypersurface, its degree is $$\label{eq: degree dual Segre-Veronese}
\deg(\mathcal{V}_{\mathbf{d},\mathbf{n}}^\vee)=\delta_0(\mathcal{V}_{\mathbf{d},\mathbf{n}})=\sum_{i=0}^{|\mathbf{n}|}(-1)^i(|\mathbf{n}|+1-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}}\,\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}\,.$$*
*Proof.* Since $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is a smooth variety, the degrees $\delta_j(\mathcal{V}_{\mathbf{d},\mathbf{n}})$ may be written in terms of the Chern classes of $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ computed in Proposition [Proposition 19](#prop: degrees Chern classes Segre-Veronese){reference-type="ref" reference="prop: degrees Chern classes Segre-Veronese"} and using the identity [\[eq: Holme\]](#eq: Holme){reference-type="eqref" reference="eq: Holme"}. In particular, when $\mathcal{V}_{\mathbf{d},\mathbf{n}}^\vee$ is a hypersurface, its degree coincides with the polar class $\delta_0(\mathcal{V}_{\mathbf{d},\mathbf{n}})$, so we apply the formula [\[eq: degdual\]](#eq: degdual){reference-type="eqref" reference="eq: degdual"}. ◻
**Corollary 52**. *Consider the Segre-Veronese variety $\mathcal{V}_{\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^\mathbf{d}V)$. The locus of hypersurfaces of degree $e$ that have a nontransversal intersection with $\mathcal{V}_{\mathbf{d},\mathbf{n}}$ is (isomorphic to) the dual variety $[\nu_e(\mathcal{V}_{\mathbf{d},\mathbf{n}})]^\vee$ in $\mathbb{P}(S^e(S^\mathbf{d}V))$. Furthermore, it is always a hypersurface when $e\ge 2$, of degree $$\label{eq: degree dual Segre-Veronese d mu}
\deg([\nu_e(\mathcal{V}_{\mathbf{d},\mathbf{n}})]^\vee)=\sum_{i=0}^{|\mathbf{n}|}(-1)^i e^{|\mathbf{n}|-i}(|\mathbf{n}|+1-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}}\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}\,,$$ where $\gamma_{\boldsymbol{\alpha}}$ is defined in [\[eq: def gamma alpha\]](#eq: def gamma alpha){reference-type="eqref" reference="eq: def gamma alpha"}.*
*Proof.* The first part follows from Proposition [Proposition 45](#prop: variety Q not transversal X irreducible){reference-type="ref" reference="prop: variety Q not transversal X irreducible"}. Consider now the nondegenerate Segre-Veronese variety $\mathcal{V}_{e\mathbf{d},\mathbf{n}}\subset\mathbb{P}(S^{e\mathbf{d}}V)$. By Lemma [\[lem: conditions for X mu dual hypersurface, in general\]](#lem: conditions for X mu dual hypersurface, in general){reference-type="ref" reference="lem: conditions for X mu dual hypersurface, in general"}, the dual variety of $\mathcal{V}_{e\mathbf{d},\mathbf{n}}$ is a hypersurface for all $e\ge 2$. This means that, applying Lemma [Lemma 50](#lem: composing with a Veronese embedding){reference-type="ref" reference="lem: composing with a Veronese embedding"}, the dual variety $[\nu_e(\mathcal{V}_{\mathbf{d},\mathbf{n}})]^\vee$ in $\mathbb{P}(S^e(S^\mathbf{d}V))$ is a hypersurface as well, with the same degree as $\mathcal{V}_{e\mathbf{d},\mathbf{n}}^\vee$, computed using Proposition [Proposition 51](#pro: polar degrees Segre-Veronese){reference-type="ref" reference="pro: polar degrees Segre-Veronese"}. ◻
We consider two applications of Corollary [Corollary 52](#cor: dual variety e-Veronese embedding V d,n){reference-type="ref" reference="cor: dual variety e-Veronese embedding V d,n"}.
**Example 53**. Let $k=1$, $\mathbf{d}=(d)$ and $\mathbf{n}=(1)$, hence we are considering the rational normal curve $\mathcal{V}_{d,1}\subset\mathbb{P}(S^d\mathbb{C}^2)=\mathbb{P}^d_{\mathsmaller{\mathbb{C}}}$. Then $$\deg([\nu_e(\mathcal{V}_{d,1})]^\vee) = \deg(\mathcal{V}_{ed,1}^\vee) = \sum_{i=0}^1(-1)^i(ed)^{1-i}(2-i)!\gamma_i = 2ed\gamma_0-\gamma_1 = 2(ed-1)\,,$$ since in this case $\gamma_0=1$ and $\gamma_1=2$. Note that $2(ed-1)$ is the degree of the discriminant of a binary form of degree $ed$. In particular, for $e=2$ we obtain that the leading coefficient of the symbolic ED polynomial $\mathrm{EDpoly}_{\mathcal{V}_{d,1}}(u,\varepsilon^2,Q)$ has degree $2(2d-1)$ in the entries of $M_Q$; for $d=2$ this degree equals $6$, in agreement with the sextic displayed in Example 48.
**Example 54**. Let $k=2$, $\mathbf{d}=(d_1,d_2)$ and $\mathbf{n}=(1,1)$. Then $$\begin{aligned}
\begin{split}
\deg([\nu_e(\mathcal{V}_{\mathbf{d},\mathbf{n}})]^\vee) &= \deg(\mathcal{V}_{e\mathbf{d},\mathbf{n}}^\vee)\\
&= \sum_{i=0}^2(-1)^ie^{2-i}(3-i)!\sum_{|\boldsymbol{\alpha}|=i}\gamma_{\boldsymbol{\alpha}}\mathbf{d}^{\mathbf{n}-\boldsymbol{\alpha}}\\
&= 6e^2\gamma_{(0,0)}d_1d_2-2e(\gamma_{(1,0)}d_2+\gamma_{(0,1)}d_1)+\gamma_{(1,1)}\\
&= 6e^2d_1d_2-4e(d_1+d_2)+4\,,
\end{split}\end{aligned}$$ since in this case $\gamma_{(0,0)}=1$, $\gamma_{(1,0)}=\gamma_{(0,1)}=2$, and $\gamma_{(1,1)}=4$. In particular for $\mathbf{d}=(1,1)$ we are considering the Segre variety $\Sigma_\mathbf{n}=\Sigma_2$ and $$\deg([\nu_e(\Sigma_2)]^\vee) = 6e^2-8e+4\,.$$ As a consequence, the leading coefficient of the symbolic ED polynomial $\mathrm{EDpoly}_{\Sigma_2}(u,\varepsilon^2,Q)$ is a homogeneous polynomial in 10 variables of degree 12.
---
abstract: |
In this paper we use display calculus to show the decidability for normal modal logic $\mathrm{K}$ and some of its extensions.
author:
- Jinsheng Chen
bibliography:
- ref.bib
title: Deciding some displayable modal logics
---
# Introduction
Proof theory for modal logics aims at developing 'good' calculi for modal logics. Though there is no formal definition of 'good' calculi, it is agreed that the Gentzen-style calculi for normal modal logics like $\mathrm{K}$, $\mathrm{T}$, $\mathrm{S4}$ (see e.g. [@Ono1998]) are representatives of good calculi. These calculi have the following properties:
\(1\) Cut-elimination. The following *cut* rule, which is the proof-theoretic analogue of *modus ponens* in Hilbert-style axiomatizations, $$\frac{\Gamma \Rightarrow\Delta, \varphi \quad \varphi, \Gamma' \Rightarrow\Delta'}{\Gamma, \Gamma' \Rightarrow\Delta, \Delta'}$$ is eliminable from the calculus without affecting which sequents are provable. Read from the bottom up, the cut rule introduces a new formula. The elimination of the cut rule paves the way to proving the subformula property introduced below and hence decidability.
\(2\) Subformula property. If a sequent $\mathcal{S}$ is provable, then there is a proof of $\mathcal{S}$ in which every formula occurring is a subformula of a formula in $\mathcal{S}$.
\(3\) Decidability. There is a root-first algorithm to decide whether a sequent is provable in the calculus. 'Root-first' means that, starting from the sequent to be decided, one considers the main connectives of its formulas and tries the applicable rules bottom-up.
Until now, Gentzen-style sequent calculi satisfying these properties simultaneously have only been developed for a limited number of modal logics. Even worse, it seems difficult to develop such calculi for some simple logics. Take the Gentzen-style calculus for modal logic $\mathrm{S5}$ as an example (see [@Ono1998]). It has the subformula property and is decidable, but it is not cut-eliminable. Only a restricted version of cut elimination has been proved: [@Takano1992] showed that every application of cut can be transformed into one where the cut formula is a subformula of the endsequent.
To endow more modal logics with good calculi, generalizations and variations of Gentzen-style sequent calculi have been introduced in the literature, including hypersequent calculus [@avron1996method], labelled sequent calculus [@negri2005proof], deep sequent systems [@brunnler2009deep], and display calculus [@belnap]. Coming back to the example of $\mathrm{S5}$, [@poggiolesi_2008] introduced a hypersequent calculus for $\mathrm{S5}$ which is cut-admissible, decidable and has the subformula property.
This paper contributes to display calculus. It was introduced in [@belnap], and further developed in papers like [@restall1995display; @kracht1996power; @gore1996completeness; @restall1998displaying; @wansing2013displaying; @ciabattoni2016power; @greco2016unified; @chen2022syntactic]. According to [@kracht1996power], display calculus is a proof system aiming to explore in depth the strategy of replacing logical connectives by structural operators. This is called *Gentzenization*. In a proof of a Gentzenized calculus, structural operators representing logical connectives are manipulated and transformed into the corresponding connectives at a certain stage, and this process is not reversible. The rules for manipulating structural operators are called *structural rules*, which typically include *weakening*, *contraction* and the cut rule. The rules that transform a structural operator into a logical connective are called *connective rules*. A standard Gentzen sequent can be written as $\varphi_1, \ldots, \varphi_m \Rightarrow\psi_1,\ldots, \psi_n$, where the comma is a structural operator and $\varphi_1, \ldots, \varphi_m$ and $\psi_1,\ldots, \psi_n$ are two structures. The comma means different things depending on the place it occupies in the sequent: it is interpreted as *and* when to the left of the double arrow and as *or* when to the right, which is witnessed by the connective rules for $\land$ and $\lor$. An example of a structural rule is the left contraction rule: from $\varphi ,\varphi, \Gamma \Rightarrow\Delta$ infer $\varphi, \Gamma \Rightarrow\Delta$.
Display calculi for modal logic have more than one structural operator. [@wansing2013displaying] uses the binary structural operator $\circ$, which is interpreted as conjunction when it is to the left of $\Rightarrow$ and disjunction when to the right. Furthermore, it uses a unary operator $\ast$ for negation and a unary operator $\bullet$ for modalities, which means the backward-looking diamond $\blacklozenge$ when it is to the left of $\Rightarrow$ and the forward-looking box $\Box$ when to the right. In the display calculus for modal logic $\mathrm{K}$, the sequents $\bullet X \Rightarrow Y$ and $X \Rightarrow\bullet Y$ are inter-provable, which reflects the fact that $\blacklozenge$ and $\Box$ form a residuated pair from an algebraic perspective.
The main advantages of display calculus are as follows:
\(1\) A general cut-elimination theorem[^1]. Any display calculus that satisfies certain syntactic criteria is cut-eliminable. A corollary of this theorem is as follows: if one takes a cut-eliminable display calculus for $\mathrm{K}$ and adds rules of a specified form, then the resulting display calculus is also cut-eliminable and has the subformula property.
\(2\) Given a display calculus for the normal modal logic $\mathrm{K}$, there is an algorithm to transform some axioms into rules of the specified form mentioned in the last paragraph such that the display calculus plus these rules is sound and complete with respect to $\mathrm{K}$ plus these axioms.[^2] As a result, for some extensions of $\mathrm{K}$, we can easily obtain display calculi with cut elimination and the subformula property.
However, it is unclear whether decidability of displayable modal logics can be extracted from their display calculi[^3]. We try to fill this gap in this paper. We show that the display calculus for the normal modal logic $\mathrm{K}$ and the display calculi for a class of its extensions, called *normal extensions*, are decidable. Normal extensions of $\mathrm{K}$ include logics axiomatized by some Scott-Lemmon axioms and primitive axioms (see Section [8.2](#sec:Scott){reference-type="ref" reference="sec:Scott"}). These results pertain to some displayable temporal logics and some logics algebraized by bounded distributive lattices with operators.
The canonical method to prove that a calculus $\mathrm{D}$ is decidable is to prove that (1) each provable sequent in $\mathrm{D}$ has a proof of a particular form, and that (2) the search space for proofs of that form for any sequent is finite. For instance, in the Gentzen-style calculus for modal logic $\mathrm{K}$, it is required to show that each provable sequent has an irredundant, *$3$-reduced* proof with the subformula property and that the search space for such proofs is finite (see [@Ono1998]). In this paper, we follow the same method. Due to the complexity caused by the number of structural operators in display calculus and by the bidirectional *display rules*, it is not straightforward to apply this method. See Section [3](#sec:proofStra){reference-type="ref" reference="sec:proofStra"} for a description of our proof strategy.
The rest of the paper is structured as follows. Section [2](#sec:Pre){reference-type="ref" reference="sec:Pre"} introduces basic notions for display calculus and the display calculi for normal modal logic $\mathrm{K}$ and the basic temporal logic $\mathrm{Kt}$. Section [3](#sec:proofStra){reference-type="ref" reference="sec:proofStra"} describes the strategy for proving decidability. Each of Sections [4](#sec:5){reference-type="ref" reference="sec:5"}, [5](#sec:6){reference-type="ref" reference="sec:6"} and [6](#sec:7){reference-type="ref" reference="sec:7"} proves a property mentioned in the strategy. Section [7](#sec:decida){reference-type="ref" reference="sec:decida"} proves the decidability of display calculus $\mathrm{D.K}$. Section [8](#sec:extensions){reference-type="ref" reference="sec:extensions"} proves the decidability of some extensions of $\mathrm{D.K}$. Section [9](#sec:conclu){reference-type="ref" reference="sec:conclu"} concludes the paper.
# Preliminaries {#sec:Pre}
Modal logics and temporal logics are the main objects in this paper. The language for modal logics is an extension of that for propositional logic with the modalities $\Box$ and $\Diamond$, and the language for temporal logics builds on this by adding $\blacksquare$ and $\blacklozenge$. It is *structures*, not formulas, that are manipulated in display calculus. Therefore, structural operators are needed. Following [@kracht1996power], we use $\mathrm{I}$ for $\top$ and $\bot$, $\circ$ for conjunction and disjunction and $\bullet$ for modalities. Here, 'a structural operator (like $\mathrm{I}$) for a logical connective (like $\top$ and $\bot$)' means that there are rules in the display calculi that transform the structural operator into the logical connectives. This section gives the basic notions relating to language, structures and sequents.
## Language, structures, and sequents
*Language.* Given a set $\mathrm{Prop}$ of propositional letters, language $\mathcal{L}$ is defined recursively as follows: $$\mathcal{L}:: = p \mid \top \mid \bot\mid \neg \varphi \mid (\varphi \land \varphi )\mid (\varphi \lor \varphi) \mid (\varphi \to \varphi) \mid \Box \varphi\mid \blacksquare\varphi \mid \Diamond\varphi \mid \blacklozenge\varphi$$ where $p \in \mathrm{Prop}$.
*Structures.* The set $\mathrm{Str}$ of structures is defined recursively as follows: $$\mathrm{Str} :: = \varphi \mid \mathrm{I}\mid (X \circ X) \mid \ast X \mid \bullet X$$ where $\varphi \in \mathcal{L}$.
The outset parentheses may be omitted when no confusion arises.
If structure $X$ is a formula or $\mathrm{I}$, then $X$ is called an *atomic structure*. For any sequent $\mathcal{S}$, denote by $At(\mathcal{S})$ the set of atomic substructures in $\mathcal{S}$.
For a structure $X$, the set $Sub(X)$ of *substructures* of $X$ is defined recursively as follows: $$\begin{aligned}
&Sub(\mathrm{I}) = \{\mathrm{I}\}\\
&Sub(\varphi) = \{\varphi\}\\
&Sub(X \circ Y) = \{X \circ Y\} \cup Sub(X) \cup Sub(Y) \\
&Sub(\ast X) = \{\ast X \} \cup Sub(X) \\
&Sub(\bullet X) = \{\bullet X\} \cup Sub(X)
\end{aligned}$$
*Sequents.* A *sequent* is of the form $X\Rightarrow Y$, where $X, Y \in \mathrm{Str}$. $X$ is called the *antecedent* and $Y$ is called the *succedent* of $X \Rightarrow Y$. We call $Z$ a *substructure* of $X \Rightarrow Y$ if $Z \in Sub(X) \cup Sub(Y)$.
A substructure $Z$ of $X\Rightarrow Y$ is called an *antecedent part* of $X\Rightarrow Y$ if it occurs in $X$ in the scope of an even number of $\ast$ or in $Y$ in the scope of an odd number of $\ast$, and a *succedent part* of $X \Rightarrow Y$ if it occurs in $X$ in the scope of an odd number of $\ast$ or in $Y$ in the scope of an even number of $\ast$. For example, in $\ast p \Rightarrow q \circ \ast r$, the substructures $\ast p$ and $r$ are antecedent parts, while $p$, $q$ and $\ast r$ are succedent parts.
The *length* $l(X)$ of a structure $X$ is defined recursively as follows: $$\begin{aligned}
&l(p) = l(\top) = l(\bot) =1\\
&l(\varphi \land \psi) = l(\varphi \lor \psi) =l(\varphi \to \psi)= l(\varphi) + l(\psi)\\
&l(\neg \varphi)=l(\Box \varphi) = l(\blacksquare\varphi) = l (\Diamond\varphi) =l(\blacklozenge\varphi) = l(\varphi)+1 \\
& \quad \\
& l(\mathrm{I}) =1 \\
& l( X \circ Y) = l (X) + l(Y)\\
&l (\ast X) = l (\bullet X) = l(X) +1\\
\end{aligned}$$ For any sequent $X \Rightarrow Y$, the *length* $l(X\Rightarrow Y)$ of $X \Rightarrow Y$ is $l(X) + l(Y)$.
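For example, for the sequent $\ast(p\circ \Box q)\Rightarrow \bullet\mathrm{I}$ these clauses give $$l(\ast(p\circ \Box q)\Rightarrow \bullet\mathrm{I})=l(\ast(p\circ \Box q))+l(\bullet\mathrm{I})=(l(p\circ \Box q)+1)+(l(\mathrm{I})+1)=4+2=6\,.$$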
The following proposition about the possible positions of a substructure within a sequent is straightforward.
[\[lem:12121650\]]{#lem:12121650 label="lem:12121650"} Let $X\Rightarrow Y$ be a sequent and $Z$ a substructure of $X\Rightarrow Y$. Then one of the following conditions holds:
1. $X\Rightarrow Y$ contains $W\circ \star Z$ or $\star Z \circ W$ as a substructure.
2. $X\Rightarrow Y$ is $\star Z\Rightarrow Y$ or $X\Rightarrow \star Z$.
where $W$ is a structure and $\star$ is a (possibly empty) sequence of $\ast$ and $\bullet$.
For a substructure $Z$ of $X\Rightarrow Y$, if item (1) in Proposition [\[lem:12121650\]](#lem:12121650){reference-type="ref" reference="lem:12121650"} holds, then we call $Z$ a *dependent substructure* of $X\Rightarrow Y$. Otherwise, we call it an *independent substructure* of $X\Rightarrow Y$. In other words, if $Z$ is a dependent substructure of $X\Rightarrow Y$, then $X\Rightarrow Y$ is $(X\Rightarrow Y)[\star Z \circ W]$ or $(X\Rightarrow Y)[W\circ \star Z]$, where $W$ is a structure and $\star$ is a (possibly empty) sequence of $\ast$ and $\bullet$.
Substructures $Z_1$ and $Z_2$ of $X$ are called *distinct* if neither $Z_1$ is a substructure of $Z_2$ nor $Z_2$ is a substructure of $Z_1$.
Given a structure $X$, the notation $X[Z]$ means that $Z$ is a substructure of $X$. Similarly, given a sequent $X\Rightarrow Y$, the notation $(X\Rightarrow Y) [Z]$ means that $Z$ is a substructure of $X$ or $Y$. We denote by $(X\Rightarrow Y) [W \backslash Z]$ the sequent obtained by substituting the exhibited occurrence of $Z$ by $W$[^4]. More generally, notation $X[Z_1,\ldots, Z_n]$ means that $Z_1,\ldots, Z_n$ are distinct substructures of $X$ and $X[Z_1,\ldots, W\backslash Z_i, \ldots, Z_n]$ denotes the structure obtained by substituting the exhibited $Z_i$ with $W$.
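For example, if $X\Rightarrow Y$ is $\ast(p\circ q)\Rightarrow \bullet r$ and the exhibited substructure is $Z=q$, then $(X\Rightarrow Y)[\Box s\backslash q]$ is the sequent $\ast(p\circ \Box s)\Rightarrow \bullet r$.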
*Display calculi.* A *sequent rule* is of the form $$(\mathcal{R}) \frac{X_1\Rightarrow Y_1 \ldots X_m \Rightarrow Y_m}{X\Rightarrow Y}$$ where $m \ge 0$. $X_1\Rightarrow Y_1, \ldots, X_m \Rightarrow Y_m$ are called the *premises* of this rule and $X \Rightarrow Y$ is called the conclusion. If $m=0$, we simply write $X\Rightarrow Y$ and call it an *initial sequent*.
A display calculus is a set of sequent rules. A *proof* in a display calculus $\mathrm{D}$ is a tree of sequents such that each leaf is an initial sequent and that for each node $X \Rightarrow Y$ and its children nodes $X_1 \Rightarrow Y_1,\ldots, X_m \Rightarrow Y_m$, $\frac{X_1 \Rightarrow Y_1,\ldots, X_m \Rightarrow Y_m}{X \Rightarrow Y}$ is an instance of a rule in $\mathrm{D}$.
We use $$\AXC{$X\Rightarrow Y$}
\doubleLine
\UIC{$X' \Rightarrow Y'$}
\DP$$ as a shorthand for $$\frac{X \Rightarrow Y}{X' \Rightarrow Y'}\quad \text{~and~} \quad \frac{X'\Rightarrow Y'}{X \Rightarrow Y}$$
The notation $\mathrm{D}\vdash X \Rightarrow Y$ means that there is a proof for $X \Rightarrow Y$ in $\mathrm{D}$.
A sequent rule $\mathcal{R}$ is *admissible* in $\mathrm{D}$ if $\mathrm{D} \vdash X_i \Rightarrow Y_i$ for $1\le i \le m$ implies $\mathrm{D} \vdash X \Rightarrow Y$.
A display calculus $\mathrm{D}$ is said to be *cut eliminable* if the cut rule is admissible in the calculus obtained by removing the cut rule from $\mathrm{D}$.
## Display calculus $\mathrm{D.K}$ and $\mathrm{D.Kt}$ {#sec:DisplayK}
This subsection introduces the display calculus $\mathrm{D.K}$ for the smallest normal modal logic $\mathrm{K}$. It is taken from [@kracht1996power], but the notation is a bit different. We refer the reader to that paper for a detailed discussion.
The display calculus $\mathrm{D.K}$ consists of $(Id)$, $(Cut)$, *display rules*, *connective rules* and *structural rules*.
$(Id)$ and $(Cut)$ are as follows: $$(Id) ~ p\Rightarrow p \quad (Cut) \frac{X\Rightarrow \varphi \quad \varphi \Rightarrow Y}{X\Rightarrow Y}$$
The display rules are as follows:
--------------------------------------------- ---------------------------------------------
$$\AxiomC{$X \circ Y \Rightarrow Z$} $$\AxiomC{$X \circ Y \Rightarrow Z$}
\doubleLine \doubleLine
\LeftLabel{$(\Rightarrow \ast 1)$} \LeftLabel{$(\Rightarrow \ast 2)$}
\UnaryInfC{$X\Rightarrow Z \circ \ast Y$} \UnaryInfC{$Y\Rightarrow \ast X\circ Z $}
\DisplayProof$$ \DisplayProof$$
$$\AxiomC{$X \Rightarrow Y \circ Z$} $$\AxiomC{$X \Rightarrow Y \circ Z$}
\doubleLine \doubleLine
\LeftLabel{$(\ast \Rightarrow 1)$} \LeftLabel{$(\ast \Rightarrow 2)$}
\UnaryInfC{$X \circ \ast Z \Rightarrow Y $} \UnaryInfC{$\ast Y \circ X \Rightarrow Z $}
\DisplayProof$$ \DisplayProof$$
$$\AxiomC{$\ast X \Rightarrow Y$} $$\AxiomC{$ X \Rightarrow \ast Y$}
\doubleLine \doubleLine
\LeftLabel{$(\ast \Rightarrow )$} \LeftLabel{$(\Rightarrow \ast )$}
\UnaryInfC{$\ast Y \Rightarrow X $} \UnaryInfC{$ Y \Rightarrow \ast X $}
\DisplayProof$$ \DisplayProof$$
$$\AxiomC{$ \ast \ast X \Rightarrow Y$} $$\AxiomC{$ X \Rightarrow \ast \ast Y$}
\doubleLine \doubleLine
\LeftLabel{$(\ast \ast \Rightarrow )$} \LeftLabel{$(\Rightarrow \ast \ast )$}
\UnaryInfC{$X \Rightarrow Y $} \UnaryInfC{$X \Rightarrow Y $}
\DisplayProof$$ \DisplayProof$$
$$\AxiomC{$ X \Rightarrow \bullet Y$}
\doubleLine
\LeftLabel{$(\Rightarrow \bullet )$}
\UnaryInfC{$ \bullet X \Rightarrow Y $}
\DisplayProof$$
--------------------------------------------- ---------------------------------------------
The connective rules are the following:
---------------------------------------------------------------- ----------------------------------------------------------------------------
$(\Rightarrow \top)$ $$\AxiomC{$\mathrm{I}\Rightarrow \top $} $(\top \Rightarrow)$ $$\AxiomC{$\mathrm{I}\Rightarrow X$}
\DisplayProof$$ \UnaryInfC{$\top \Rightarrow X$}
\DisplayProof$$
$(\Rightarrow \bot)$ $$\AxiomC{$X\Rightarrow \mathrm{I}$} $(\bot \Rightarrow)$ $$\AxiomC{$ \bot \Rightarrow \mathrm{I}$}
\UnaryInfC{$X\Rightarrow \bot$} \DisplayProof$$
\DisplayProof$$
$(\Rightarrow \neg)$ $$\AxiomC{$X\Rightarrow \ast \varphi$} $(\neg \Rightarrow)$ $$\AXC{$\ast \varphi \Rightarrow X$}
\UnaryInfC{$X\Rightarrow \neg \varphi$} \UIC{$\neg \varphi \Rightarrow X$}
\DisplayProof$$ \DP$$
$(\Rightarrow \land)$ $$\AXC{$X\Rightarrow \varphi$} $(\land \Rightarrow)$ $$\AXC{$\varphi \circ \psi \Rightarrow X$}
\AXC{$Y\Rightarrow \psi$} \UIC{$\varphi \land \psi \Rightarrow X$}
\BIC{$ X \circ Y \Rightarrow \varphi\land \psi$} \DP$$
\DP$$
$(\Rightarrow \lor)$ $$\AXC{$X\Rightarrow \varphi\circ \psi$} $(\lor \Rightarrow)$ $$\AXC{$\varphi\Rightarrow X$}
\UIC{$X\Rightarrow \varphi \lor \psi$} \AXC{$ \psi \Rightarrow Y$}
\DP$$ \BIC{$\varphi\lor \psi \Rightarrow X \circ Y$}
\DP$$
$(\Rightarrow \to)$ $$\AXC{$X \circ \varphi \Rightarrow \psi$} $( \to \Rightarrow)$ $$\AXC{$X\Rightarrow \varphi$}
\UIC{$X\Rightarrow \varphi \to \psi$} \AXC{$\psi \Rightarrow Y$}
\DP$$ \BIC{$\varphi \to \psi \Rightarrow \ast X \circ Y$}
\DP$$
$(\Rightarrow \Box)$ $$\AXC{$ \bullet X\Rightarrow \varphi$} $(\Box \Rightarrow)$ $$\AXC{$\varphi \Rightarrow X$}
\UIC{$X\Rightarrow \Box \varphi$} \UIC{$\Box \varphi \Rightarrow \bullet X$}
\DP$$ \DP$$
$(\Rightarrow \Diamond )$ $$\AXC{$ X\Rightarrow \varphi$} $(\Diamond \Rightarrow)$ $$\AXC{$\ast \bullet \ast \varphi \Rightarrow X$}
\UIC{$\ast \bullet \ast X \Rightarrow \Diamond \varphi$} \UIC{$\Diamond \varphi \Rightarrow X$}
\DP$$ \DP$$
---------------------------------------------------------------- ----------------------------------------------------------------------------
The structural rules in $\mathrm{D.K}$ are in Figure [\[fig:structuralRules\]](#fig:structuralRules){reference-type="ref" reference="fig:structuralRules"}. The display calculus $\mathrm{D.Kt}$ for the minimal temporal logic $\mathrm{Kt}$ is obtained by adding to $\mathrm{D.K}$ the following rules:
--------------------------------------------------------------------- ---------------------------------------------------------------------------------
$(\blacksquare \Rightarrow)$$$\AXC{$ \varphi \Rightarrow X$} $(\Rightarrow \blacksquare )$ $$\AXC{$X \Rightarrow \ast \bullet \ast \varphi$}
\UIC{$\blacksquare \varphi \Rightarrow \ast \bullet \ast X$} \UIC{$X \Rightarrow \blacksquare \varphi$}
\DP$$ \DP$$
$(\blacklozenge\Rightarrow)$ $$\AXC{$\varphi\Rightarrow \bullet X$} $(\Rightarrow \blacklozenge)$ $$\AXC{$X\Rightarrow \varphi$}
\UIC{$\blacklozenge \varphi \Rightarrow X$} \UIC{$\bullet X \Rightarrow \blacklozenge \varphi$}
\DP$$ \DP$$
--------------------------------------------------------------------- ---------------------------------------------------------------------------------
[\[fig:structuralRules\]]{#fig:structuralRules label="fig:structuralRules"}
---------------------------------------------------------- ---------------------------------------------------------
$(Al)$ $$\AXC{$X_1 \circ (X_2 \circ X_3) \Rightarrow Z$} $(Ar)$ $$\AXC{$Z\Rightarrow X_1 \circ (X_2 \circ X_3)$}
\doubleLine \doubleLine
\UIC{$(X_1 \circ X_2) \circ X_3 \Rightarrow Z$} \UIC{$Z\Rightarrow (X_1 \circ X_2) \circ X_3$}
\DP$$ \DP$$
$(Pl)$ $$\AXC{$ X \circ Y \Rightarrow Z$} $(Pr)$ $$\AXC{$Z \Rightarrow X \circ Y$}
\UIC{$ Y \circ X \Rightarrow Z$} \UIC{$ Z\Rightarrow Y \circ X$}
\DP$$ \DP$$
$(Ql)$ $$\AXC{$\mathrm{I}\Rightarrow Y$} $(Qr)$ $$\AXC{$ X \Rightarrow \mathrm{I}$}
\doubleLine \doubleLine
\UIC{$\ast \mathrm{I}\Rightarrow Y$} \UIC{$X \Rightarrow \ast \mathrm{I}$}
\DP$$ \DP$$
$(\mathrm{I}l)$ $$\AXC{$X\Rightarrow Z$} $(\mathrm{I}r)$ $$\AXC{$X\Rightarrow Z$}
\doubleLine \doubleLine
\UIC{$\mathrm{I}\circ X \Rightarrow Z$} \UIC{$X\Rightarrow \mathrm{I}\circ Z$}
\DP$$ \DP$$
$(Wl)$ $$\AXC{$X \Rightarrow Z$} $(Wr)$ $$\AXC{$X\Rightarrow Z$}
\UIC{$Y \circ X\Rightarrow Z$} \UIC{$X \Rightarrow Z\circ Y $}
\DP$$ \DP$$
$(Cl)$ $$\AXC{$X \circ X \Rightarrow Z$} $(Cr)$ $$\AXC{$Z\Rightarrow X \circ X$}
\UIC{$ X\Rightarrow Z$} \UIC{$Z\Rightarrow X$}
\DP$$ \DP$$
$(Ml)$ $$\AXC{$\mathrm{I}\Rightarrow Y$} $(Mr)$ $$\AXC{$X\Rightarrow \mathrm{I}$}
\UIC{$\bullet \mathrm{I}\Rightarrow Y$} \UIC{$X\Rightarrow \bullet \mathrm{I}$}
\DP$$ \DP$$
---------------------------------------------------------- ---------------------------------------------------------
For sequents $\mathcal{S}$ and $\mathcal{S}'$, $\mathcal{S}$ and $\mathcal{S}'$ are said to be *display equivalent* in $\mathrm{D.K}$ if $\mathcal{S}$ and $\mathcal{S}'$ are inter-provable with only display rules in $\mathrm{D.K}$.
For every sequent $\mathcal{S}$ and every antecedent (succedent) part $W$ of $\mathcal{S}$, there is a display equivalent sequent $\mathcal{S}'$ such that $W$ is the antecedent (succedent) of $\mathcal{S}'$ in $\mathrm{D.K}$ and $\mathrm{D.Kt}$.
*Proof.* See Theorem 1 in [@kracht1996power]. Note that only display rules are needed to prove the theorem. ◻
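For example, in $X \circ Y \Rightarrow Z$ the antecedent part $Y$ is displayed by $(\Rightarrow \ast 2)$ as the antecedent of the display equivalent sequent $Y \Rightarrow \ast X \circ Z$.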
Denote by $\mathrm{D.K}^-$ the calculus obtained by removing the cut rule from $\mathrm{D.K}$.
[\[thm:cutAdmi\]]{#thm:cutAdmi label="thm:cutAdmi"} $\mathrm{D.K}$ and $\mathrm{D.Kt}$ are cut eliminable.
*Proof.* See Theorem 11 in [@kracht1996power]. ◻
# Proof strategy {#sec:proofStra}
A display calculus $\mathrm{D}$ is said to be *decidable* if there is an algorithm that, for any formula $\varphi$, decides whether $\varphi$ is provable in $\mathrm{D}$. The following sections show that $\mathrm{D.K}$ is decidable.
To this aim, by Theorem [\[thm:cutAdmi\]](#thm:cutAdmi){reference-type="ref" reference="thm:cutAdmi"}, it suffices to show that $\mathrm{D.K}^-$ is decidable. The proof strategy is as follows:
1. First, we show that in an 'equivalent' variant $\mathrm{D.K}^-_\ast$ (defined in the next section) of $\mathrm{D.K}^-$, any provable sequent in $\mathrm{D.K}^-_\ast$ has an *$\ast$-reduced*, *$3$-reduced*, *irredundant*, *r-complexity-increasing* proof satisfying the *subformula property*.
2. Then we show that for any sequent, the search space for possible such proofs in $\mathrm{D.K}^-_\ast$ is finite. Therefore, $\mathrm{D.K}^-_\ast$ is decidable. Since $\mathrm{D.K}^-$ is 'equivalent' to $\mathrm{D.K}^-_\ast$, $\mathrm{D.K}^-$ is decidable.
The idea behind $\ast$-reduced proofs is to avoid the infinite search caused by $(\ast \ast \Rightarrow)$ and $(\Rightarrow\ast \ast)$: such proofs are required to contain no consecutive occurrences of $\ast$.
The notion of $3$-reduced proofs requires that the same structure occurs at most $3$ times in a sequent. It deals with $(Cl)$, $(Cr)$, $(\mathrm{I}l)$ and $(\mathrm{I}r)$ and restricts the number of applications of these rules in proof searching. Readers familiar with Gentzen's sequent calculi will soon realize that the same problem also occurs in proving the decidability of Gentzen's sequent calculi: the contraction rules delete repetitive information when they are applied, and lead to infinite searching when they are applied bottom-up to search for possible proofs. For sequent calculi, it is easy to recognize repetitive information because one just needs to check whether there are occurrences of the same formula separated by the only structural operator, the comma. Display calculi have one binary structural operator $\circ$ and two unary structural operators $\ast$ and $\bullet$, and contain bi-directional rules which allow us to transform sequents equivalently. As a result, some repetitive occurrences of information no longer appear side-by-side, which makes it more difficult to recognize them. The key idea for restricting the applications of contraction rules is the same in sequent calculi and display calculi, but more work needs to be done in the latter.
Irredundant proofs are those that do not contain repetitive occurrences of a sequent. Any redundant proof can be transformed into an irredundant one by deleting the sequents between the repetitive occurrences.
The property of being r-complexity-increasing captures a quantity that increases from the leaves to the root of a proof. Such a quantitative property helps us know when to stop in proof searching.
A proof with the subformula property is one in which each sequent contains only subformulas of the endsequent.
Now let us come back to our proof strategy. For any sequent $\mathcal{S}$, denote by $\Gamma^3_\mathcal{S}$ the set of $\ast$-reduced and $3$-reduced sequents made of subformulas in $\mathcal{S}$, and with $r$-complexity less than or equal to that of $\mathcal{S}$. To show that $\mathrm{D.K}^-_\ast$ is decidable, it suffices to show that $\Gamma^3_\mathcal{S}$ is finite. This is proved by showing that $\Gamma^3_\mathcal{S}$ is a subset of another finite set (see Definition [\[defn:08071015\]](#defn:08071015){reference-type="ref" reference="defn:08071015"}).
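The resulting decision procedure can be pictured as a backward (root-first) proof search over the finite set $\Gamma^3_\mathcal{S}$, in which no sequent is repeated along a branch. As a purely illustrative sketch (the functions `axiom` and `backward_steps` are hypothetical placeholders that would have to encode the initial sequents and the backward-read rules of $\mathrm{D.K}^-_\ast$, restricted to $\ast$-reduced, $3$-reduced sequents built from subformulas of the endsequent), such a search could look as follows:

```python
def provable(goal, axiom, backward_steps, branch=frozenset()):
    """Generic bounded backward proof search (illustrative sketch only).

    axiom(s)          -- True if s is an initial sequent.
    backward_steps(s) -- a finite list of premise lists; s is derivable iff
                         every premise in some list is derivable.
    branch            -- sequents already on the current branch; refusing to
                         revisit them corresponds to searching only for
                         irredundant proofs, so the search terminates whenever
                         the candidate sequents form a finite set.
    """
    if axiom(goal):
        return True
    if goal in branch:        # irredundant proofs only: never repeat a sequent
        return False
    extended = branch | {goal}
    return any(all(provable(p, axiom, backward_steps, extended) for p in premises)
               for premises in backward_steps(goal))
```

Termination of such a search relies on the branching being finite and on the candidate sequents coming from a finite set; completeness relies on the fact, established below, that every provable sequent has an $\ast$-reduced, $3$-reduced, irredundant proof.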
# $\mathrm{D.K}^-_\ast$ and $\ast$-reduced proofs {#sec:5}
This section introduces the display calculus $\mathrm{D.K}^-_\ast$. For any sequent $\mathcal{S}$, its provability in $\mathrm{D.K}$ is the same as that of its *$\ast$-reduced* counterpart in $\mathrm{D.K}^-_\ast$. Before proving this result, we introduce the notion of $\ast$-reduced sequents and proofs.
A structure is called *$\ast$-reduced* if it does not contain $\ast \ast$. A sequent is called *$\ast$-reduced* if every substructure in it is $\ast$-reduced. A proof is called *$\ast$-reduced* if each sequent in it is $\ast$-reduced.
For any structure $X$, denote by $\tau_\ast(X)$ the structure obtained by deleting each occurrence of $\ast \ast$ in $X$. Equivalently, $\tau_\ast(X)$ can be defined inductively as follows: $$\begin{aligned}
&\tau_\ast(\varphi) = \varphi \qquad \tau_\ast(\mathrm{I}) = \mathrm{I}\\
&\tau_\ast(\bullet X) = \bullet \tau_\ast(X)\\
& \tau_\ast(X \circ Y) = \tau_\ast(X) \circ \tau_\ast(Y)\\
&\tau_\ast (\ast \ast X) = \tau_\ast(X)\\
&\tau_\ast(\ast \bullet X) =\ast \bullet \tau_\ast(X)\\
&\tau_\ast(\ast (X \circ Y)) =\ast (\tau_\ast(X)\circ \tau_\ast(Y))\\
&\tau_\ast(\ast Z) = \ast Z \quad \text{if $Z$ is a formula or $\mathrm{I}$}
\end{aligned}$$
For any sequent $\mathcal{S}$, denote by $\tau_\ast(\mathcal{S})$ the sequent obtained by deleting each occurrence of $\ast\ast$ in $\mathcal{S}$.
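As an illustration only, the following Python sketch implements $\tau_\ast$ and the $\ast$-reduced check on a hypothetical encoding of structures; the names `Star`, `Bullet`, `Comp`, `I`, `tau_star` and `star_reduced` are introduced here for the sketch and are not notation of the calculus (formulas are represented simply as strings).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Star:          # * X
    sub: object

@dataclass(frozen=True)
class Bullet:        # (bullet) X
    sub: object

@dataclass(frozen=True)
class Comp:          # X o Y
    left: object
    right: object

I = "I"              # the structural constant I; formulas are other strings here

def tau_star(x):
    """Delete every occurrence of ** in a structure (the clauses above)."""
    if isinstance(x, Star):
        if isinstance(x.sub, Star):      # tau(** X) = tau(X)
            return tau_star(x.sub.sub)
        return Star(tau_star(x.sub))     # tau(* X) = * tau(X) otherwise
    if isinstance(x, Bullet):
        return Bullet(tau_star(x.sub))
    if isinstance(x, Comp):
        return Comp(tau_star(x.left), tau_star(x.right))
    return x                             # formulas and I are left unchanged

def star_reduced(x):
    """A structure is *-reduced iff it contains no ** ."""
    if isinstance(x, Star):
        return not isinstance(x.sub, Star) and star_reduced(x.sub)
    if isinstance(x, Bullet):
        return star_reduced(x.sub)
    if isinstance(x, Comp):
        return star_reduced(x.left) and star_reduced(x.right)
    return True
```

For instance, `tau_star(Star(Star("p")))` returns `"p"`, and `star_reduced(Comp(Star("p"), I))` is `True`.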
Display calculus $\mathrm{D.K}^-_\ast$ is obtained from $\mathrm{D.K}^-$ by deleting $(\ast \ast \Rightarrow)$ and $(\Rightarrow\ast\ast)$, restricting $(Wl)$ and $(Wr)$ to adding $\ast$-reduced structures, and replacing $(\to \Rightarrow)$, $(\Rightarrow\ast 1)$, $(\Rightarrow\ast 2)$, $(\ast \Rightarrow 1)$ and $(\ast \Rightarrow 2)$ in $\mathrm{D.K}^-$ with the following rules, respectively: $$(\to \Rightarrow')\frac{X\Rightarrow\varphi \quad \psi \Rightarrow Y}{\varphi \to \psi \Rightarrow\overline{X} \circ Y}$$
---------------------------------------------------- ----------------------------------------------------
$$\AxiomC{$X \circ Y \Rightarrow Z$} $$\AxiomC{$X \circ Y \Rightarrow Z$}
\doubleLine \doubleLine
\LeftLabel{$(\Rightarrow \ast 1')$} \LeftLabel{$(\Rightarrow \ast 2')$}
\UnaryInfC{$X\Rightarrow Z \circ \overline{Y}$} \UnaryInfC{$Y\Rightarrow \overline{X}\circ Z $}
\DisplayProof$$ \DisplayProof$$
$$\AxiomC{$X \Rightarrow Y \circ Z$} $$\AxiomC{$X \Rightarrow Y \circ Z$}
\doubleLine \doubleLine
\LeftLabel{$(\ast \Rightarrow 1')$} \LeftLabel{$(\ast \Rightarrow 2')$}
\UnaryInfC{$X \circ \overline{ Z} \Rightarrow Y $} \UnaryInfC{$\overline{ Y} \circ X \Rightarrow Z $}
\DisplayProof$$ \DisplayProof$$
---------------------------------------------------- ----------------------------------------------------
where $\overline{X}$ is $X'$ if $X$ is of the form $\ast X'$ and is $\ast X$ otherwise.
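Continuing the illustrative sketch above, the operation $X \mapsto \overline{X}$ can be written as the following hypothetical helper:

```python
def bar(x):
    """X-bar: drop a leading * if present, otherwise prepend one.
    Applied to *-reduced structures it never creates a ** ."""
    return x.sub if isinstance(x, Star) else Star(x)
```

For instance, `bar(Star("p"))` is `"p"` and `bar("p")` is `Star("p")`.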
Being $\ast$-reduced is the first requirement for sequents in finite proof searching. This concept and the new display rules in $\mathrm{D.K}^-_\ast$ aim to deal with the infinite search space caused by display rules concerning $\ast$ in $\mathrm{D.K}^-$.
The following result follows immediately from the forms of rules in $\mathrm{D.K}^-_\ast$:
[\[lem:astreduced\]]{#lem:astreduced label="lem:astreduced"} Each proof in $\mathrm{D.K}^-_\ast$ is a $\ast$-reduced proof.
Next we prove two lemmas and then the key result of this section: the provability of a sequent in $\mathrm{D.K}^-$ is the same as that of its $\ast$-reduced counterpart in $\mathrm{D.K}^-_\ast$.
[\[lem:05012323\]]{#lem:05012323 label="lem:05012323"} For any sequent $\mathcal{S}$, if $\mathrm{D.K}^-_\ast \vdash \mathcal{S}$, then $\mathrm{D.K}^- \vdash \mathcal{S}$.
*Proof.* It suffices to show that $(\to \Rightarrow')$, $(\Rightarrow\ast 1')$, $(\Rightarrow\ast 2')$, $(\ast \Rightarrow 1')$ and $(\ast \Rightarrow 2')$ are admissible in $\mathrm{D.K}^-$. We only show that $(\to \Rightarrow')$ is admissible in $\mathrm{D.K}^-$; the other cases can be proved in a similar way.
Assume that $\mathrm{D.K}^- \vdash X\Rightarrow\varphi$ and $\mathrm{D.K}^- \vdash \psi \Rightarrow Y$. If $X$ is of the form $\ast X'$, then $\overline{X} = X'$. A proof of $\varphi \to \psi \Rightarrow X' \circ Y$ in $\mathrm{D.K}^-$ is as follows: $$\AXC{$\ast X' \Rightarrow\varphi$}
\AXC{$\psi \Rightarrow Y$}
\RightLabel{$( \to \Rightarrow)$}
\BIC{$\varphi \to \psi \Rightarrow\ast \ast X' \circ Y$}
\RightLabel{$(\ast \Rightarrow 1)$}
\UIC{$(\varphi \to \psi )\circ \ast Y \Rightarrow\ast \ast X'$}
\RightLabel{$(\Rightarrow\ast \ast)$}
\UIC{$(\varphi \to \psi )\circ \ast Y \Rightarrow X'$}
\RightLabel{$(\ast \Rightarrow 1)$}
\UIC{$\varphi \to \psi \Rightarrow X' \circ Y$}
\DP$$
If $X$ is not of the form $\ast X'$, then $\overline{X} = \ast X$. Then $\varphi \to \psi \Rightarrow\ast X \circ Y$ can be obtained by an application of $(\to \Rightarrow)$. ◻
[\[lem:05012322\]]{#lem:05012322 label="lem:05012322"} For any sequent $X \Rightarrow Y$,
1. $\mathrm{D.K}^- \vdash X\Rightarrow Y$ iff $\mathrm{D.K}^- \vdash \tau_\ast(X)\Rightarrow Y$.
2. $\mathrm{D.K}^- \vdash Y \Rightarrow X$ iff $\mathrm{D.K}^- \vdash Y \Rightarrow\tau_\ast (X)$.
*Proof.* We prove these claims simultaneously by induction on the length $l(\tau_\ast(X))$ of $\tau_\ast(X)$.
If $l(\tau_\ast(X))=1$, then $\tau_\ast(X) = p$ for some propositional letter $p$ and $X = \star p$, where $\star$ is a (possibly empty) sequence of an even number of $\ast$'s. The claims follow from applications of $(\ast \ast \Rightarrow)$ and $(\Rightarrow\ast \ast)$.
If $l(\tau_\ast(X))>1$, we use display rules to display the direct substructures of $X$ and apply the induction hypothesis. For example, if $\tau_\ast(X) = \tau_\ast(X_1) \circ \tau_\ast(X_2)$, then we use $(\Rightarrow\ast 1)$ and $(\Rightarrow\ast 2)$ and the induction hypothesis. $$\AXC{$ X_1 \circ X_2 \Rightarrow Y$}
\RightLabel{$(\Rightarrow\ast 1)$}
\UIC{$X_1 \Rightarrow Y \circ \ast X_2$}
\RightLabel{$(I.H.)$}
\UIC{$\tau_\ast (X_1) \Rightarrow Y \circ \ast X_2$}
\RightLabel{$(\Rightarrow\ast 1)$}
\UIC{$\tau_\ast (X_1) \circ X_2 \Rightarrow Y $}
\RightLabel{$(\Rightarrow\ast 2)$}
\UIC{$ X_2 \Rightarrow\ast \tau_\ast (X_1)\circ Y $}
\RightLabel{$(I.H.)$}
\UIC{$ \tau_\ast(X_2) \Rightarrow\ast \tau_\ast (X_1)\circ Y $}
\RightLabel{$(\Rightarrow\ast 2)$}
\UIC{$ \tau_\ast (X_1)\circ \tau_\ast(X_2) \Rightarrow Y $}
\DP$$ A proof from $\tau_\ast (X_1)\circ \tau_\ast(X_2) \Rightarrow Y$ to $X_1 \circ X_2 \Rightarrow Y$ is obtained by reversing the above proof. ◻
[\[prop:astequiv\]]{#prop:astequiv label="prop:astequiv"} For any sequent $\mathcal{S}$, $\mathrm{D.K}^- \vdash \mathcal{S}$ iff $\mathrm{D.K}^-_\ast \vdash \tau_\ast(\mathcal{S})$.
*Proof.* For the left-to-right direction, assume that $\mathcal{S}$ is derivable in $\mathrm{D.K}^-$ with a proof of height $h$. We prove by induction on $h$.
If $h=1$, $\mathcal{S}$ is an instance of $(Id)$, $(\Rightarrow \top)$ or $(\bot \Rightarrow)$. So is $\tau_\ast(\mathcal{S})$.
If $h>1$, let the last rule applied be $\mathcal{R}$.
If $\mathcal{R}$ is not $(\ast \ast \Rightarrow)$, $(\Rightarrow\ast \ast)$, $(Wl)$, $(Wr)$, $(\to \Rightarrow)$, $(\Rightarrow\ast 1)$, $(\Rightarrow\ast 2)$, $(\ast \Rightarrow 1)$ or $(\ast \Rightarrow 2)$, then we apply the induction hypothesis to the premise(s) of $\mathcal{R}$ and then apply $\mathcal{R}$.
If $\mathcal{R}$ is $(\to \Rightarrow)$, $(\Rightarrow\ast 1)$, $(\Rightarrow\ast 2)$, $(\ast \Rightarrow 1)$ or $(\ast \Rightarrow 2)$, then $\mathcal{R}$ is not a rule in $\mathrm{D.K}^-_\ast$. But we can use $\mathcal{R}'$ in $\mathrm{D.K}^-_\ast$. For example, if $\mathcal{R}$ is $(\to \Rightarrow)$, then the proof ends with $$\frac{X \Rightarrow\varphi \quad \psi \Rightarrow Y}{\varphi \to \psi \Rightarrow\ast {X} \circ Y}(\to \Rightarrow)$$ By the induction hypothesis, $\mathrm{D.K}^-_\ast \vdash \tau_\ast (X) \Rightarrow\varphi$ and $\mathrm{D.K}^-_\ast \vdash \psi \Rightarrow\tau_\ast(Y)$. Then by $(\to \Rightarrow')$, $\mathrm{D.K}^-_\ast \vdash \varphi \to \psi \Rightarrow\overline{\tau_\ast(X) } \circ \tau_\ast(Y)$. Since $\overline{\tau_\ast(X)} = \tau_\ast(\ast X)$, $\mathrm{D.K}^-_\ast \vdash \varphi \to \psi \Rightarrow\tau_\ast(\ast X) \circ \tau_\ast(Y)$, as required. Note that the proof uses equation $\overline{\tau_\ast(X)} = \tau_\ast(\ast X)$.
If $\mathcal{R}$ is $(Wl)$ or $(Wr)$, let the newly added structure be $W$. Then we add $\tau_\ast(W)$ instead, where $\tau_\ast(W)$ is the structure obtained by deleting $\ast \ast$ in $W$.
If $\mathcal{R}$ is $(\ast \ast \Rightarrow)$ or $(\Rightarrow\ast \ast)$, then the sequent obtained by applying the induction hypothesis to the premise is the required sequent.
For the right-to-left direction, assume that $\mathrm{D.K}^-_\ast \vdash \tau_\ast(\mathcal{S})$. By Lemma [\[lem:05012323\]](#lem:05012323){reference-type="ref" reference="lem:05012323"}, $\mathrm{D.K}^- \vdash \tau_\ast(\mathcal{S})$. By Lemma [\[lem:05012322\]](#lem:05012322){reference-type="ref" reference="lem:05012322"}, $\mathrm{D.K}^- \vdash \mathcal{S}$. ◻
# $n$-reduced proofs in $\mathrm{D.K}^-_\ast$ {#sec:6}
This section shows that the provability of a sequent in $\mathrm{D.K}^-_\ast$ is the same as that of its *3-reduced* counterpart, in which the number of repetitive occurrences of substructures and of $\mathrm{I}$ is restricted. Being 3-reduced is the second requirement for sequents in finite proof searching. This concept aims to deal with the infinite search space caused by ($\mathrm{I}$l), ($\mathrm{I}$r), (Cl) and (Cr) in $\mathrm{D.K}^-_\ast$.
Repetitive substructures also occur in Gentzen's sequent calculi. Take $\varphi, \varphi ,\Gamma \Rightarrow\Delta$ as an example, in which $\varphi$ occurs twice. Situations in display calculi are more complicated because display rules allow certain structures and structural operators to move freely from one side of $\Rightarrow$ to the other, which may make repetitive substructures difficult to recognize. Therefore, a notion of *equivalence* dissolving such movements is important for recognizing repetitive substructures.
## Equivalent sequents
First let us analyze a set of rules in $\mathrm{D.K}^-_\ast$. Let $\Omega$ be the set of display rules, (Al), (Ar), (Pl) and (Pr) in $\mathrm{D.K}^-_\ast$. $\Omega$ satisfies the following properties: each rule in $\Omega$ has one premise; if $\frac{\mathcal{S}}{\mathcal{S}'}$ is an instance of a rule in $\Omega$, so is $\frac{\mathcal{S}'}{\mathcal{S}}$. It follows that if $\mathcal{D}$ is a proof from $\mathcal{S}$ to $\mathcal{S}'$ with only rules in $\Omega$, then the proof obtained by reversing $\mathcal{D}$ is a proof from $\mathcal{S}'$ to $\mathcal{S}$ with only rules in $\Omega$.
In addition, each rule in $\Omega$ is *atomicity-preserving*, defined as follows:
Let $\mathcal{R}=\frac{\mathcal{S}_1, \ldots, \mathcal{S}_n}{\mathcal{S}}$ be a sequent rule. $\mathcal{R}$ is called *atomicity-preserving* if
1. For any instance of $\mathcal{R}$, $At(\mathcal{S}_1) \cup\ldots \cup At(\mathcal{S}_n) = At(\mathcal{S})$, where $At(\mathcal{S}')$ is the set of atomic substructures in $\mathcal{S}'$ for any sequent $\mathcal{S}'$.
2. $\mathcal{R}$ is closed under *single atomic substitution*. That is, for any instance $\frac{\mathcal{S}_1 \ldots \mathcal{S}_i[ Z] \ldots \mathcal{S}_n}{\mathcal{S}[Z]}$ of $\mathcal{R}$, any atomic substructure $Z$ of $\mathcal{S}_i (1\le i \le n)$ and any structure $W$, $$\frac{\mathcal{S}_1 \ldots \mathcal{S}_i[W \backslash Z] \ldots \mathcal{S}_n}{\mathcal{S}[W\backslash Z]}$$ is also an instance of $\mathcal{R}$.
Condition (1) means that no atomic structure is introduced or removed by an application of $\mathcal{R}$.
The rules in $\mathrm{D.K}^-_\ast$ other than the rules in $\Omega$ are not atomicity-preserving.
By the above properties of $\Omega$, we define the notion of *sequent equivalence*:
[\[def:equiv\]]{#def:equiv label="def:equiv"} Sequents $\mathcal{S}$ and $\mathcal{S}'$ are called *equivalent* if $\mathcal{S}'$ can be derived from $\mathcal{S}$ with only rules in $\Omega$, which consists of display rules, ($Al$), ($Ar$), ($Pl$) and ($Pr$).
Equivalent sequents can be thought of as one sentence said in different ways. For example, "Today is sunny" is another way of saying "It is sunny today". They express the same information.
The notion of sequent equivalence is different from the notion of inter-provability in our setting. Since the intuition we want to characterize is the free movement of certain structures and structural operators, the requirement of being atomicity-preserving is added. This is the reason why $(\mathrm{I}l)$ and $(\mathrm{I}r)$ are excluded from $\Omega$.
The following proposition is straightforward:
[\[prop:equivProp\]]{#prop:equivProp label="prop:equivProp"} For any sequents $\mathcal{S}, \mathcal{S}', \mathcal{S}''$,
1. $\mathcal{S}$ is equivalent to itself.
2. If $\mathcal{S}$ is equivalent to $\mathcal{S}'$, then $\mathcal{S}'$ is equivalent to $\mathcal{S}$.
3. If $\mathcal{S}$ is equivalent to $\mathcal{S}'$ and $\mathcal{S}'$ is equivalent to $\mathcal{S''}$, then $\mathcal{S}$ is equivalent to $\mathcal{S}''$.
Equivalent sequents have the following property: if the same substitution of an atomic substructure is applied to equivalent sequents, then the resulting sequents are still equivalent.
[\[equiv:substituition\]]{#equiv:substituition label="equiv:substituition"} Let $\mathcal{S}$ and $\mathcal{S}'$ be equivalent sequents and $Z$ an atomic substructure of $\mathcal{S}$. Then $Z$ is an atomic substructure of $\mathcal{S}'$ and $\mathcal{S}[W \backslash Z]$ and $\mathcal{S}'[W \backslash Z]$ are equivalent for any structure $W$.
*Proof.* It follows from the fact that each rule in $\Omega$ is atomicity-preserving. ◻
## Superfluous substructures, superfluous $\mathrm{I}$ and reduced sequents
A substructure may appear more than once in a sequent. In a proof, the repetition of some substructures may be redundant. The idea is captured by the following definition, which is based on the notion of sequent equivalence.
In any sequent $X\Rightarrow Y$, two occurrences $Z_{(1)}$ and $Z_{(2)}$ of substructure $Z$ are said to be *matching* if $X\Rightarrow Y$ is equivalent to $Z_{(1)} \circ Z_{(2)} \Rightarrow W$ or $W\Rightarrow Z_{(1)} \circ Z _{(2)}$, where $W$ is a structure. If $X\Rightarrow Y$ contains at least two matching occurrences of $Z$, we say that $Z$ is *superfluous* in $X\Rightarrow Y$.
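For example, in $p \circ (q \circ p) \Rightarrow r$ the two occurrences of $p$ are matching: by $(Al)$, $(Pl)$ and $(\Rightarrow \ast 1')$ the sequent is equivalent to $p \circ p \Rightarrow r \circ \ast q$, and hence $p$ is superfluous in $p \circ (q \circ p) \Rightarrow r$.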
The following lemma tells us that if $Z$ is a superfluous substructure in $X \Rightarrow Y$, then the matching occurrences of $Z$ are not 'separated' by $\bullet$.
[\[lem:08301623\]]{#lem:08301623 label="lem:08301623"} Let $Z$ be a superfluous substructure in $X \Rightarrow Y$ and $Z_{(i)}$ one of the matching occurrences of $Z$ in $X \Rightarrow Y$. Then any substructure in $X \Rightarrow Y$ which contains $Z_{(i)}$ and does not contain another occurrence of $Z$ is of the form $\ast ^n W [Z_{(i)}]$, where $\ast^n$ is a sequence of $n$ $\ast$'s for some $n \in \mathbb{N}$.
*Proof.* We prove by contradiction and assume that there is a substructure in $X \Rightarrow Y$ which contains $Z_{(i)}$, does not contain another occurrence of $Z$, and is of the form $\star_1 \bullet \star_2 W [Z_{(i)}]$, where $\star_1$ and $\star_2$ are sequences of $\ast$ and $\bullet$.
Since $Z_{(i)}$ is one of the matching occurrences of $Z$ in $X \Rightarrow Y$, there is another occurrence $Z_{(i')}$ in $X \Rightarrow Y$ such that $X \Rightarrow Y$ is equivalent to $Z_{(i)} \circ Z_{(i')} \Rightarrow U$ or $U \Rightarrow Z_{(i)} \circ Z_{(i')}$ for some structure $U$. This contradicts the fact that no matter which of the display rules, (Al), (Ar), (Pl) and (Pr) are used, either $Z_{(i)}$ or $Z_{(i')}$ is in the scope of the exhibited $\bullet$ in $\star_1 \bullet \star_2 W [Z_{(i)}]$, but not both. ◻
The following lemma says that matching occurrences of a structure remain matching when an unrelated structure is changed.
[\[lem:matchTransfer\]]{#lem:matchTransfer label="lem:matchTransfer"} (1) Let $X$ be a structure containing matching occurrences $Z_{(1)}, Z_{(2)}$ of a substructure $Z$ in $X\Rightarrow Y$. Then $Z_{(1)}$ and $Z_{(2)}$ are matching in $X \Rightarrow U$ for any structure $U$.
\(2\) Let $X$ be a structure containing matching occurrences $Z_{(1)}, Z_{(2)}$ of a substructure $Z$ in $Y \Rightarrow X$. Then $Z_{(1)}$ and $Z_{(2)}$ are matching in $U \Rightarrow X$ for any structure $U$.
*Proof.* We prove these claims simultaneously by induction on the length $l(X)$ of $X$. By assumption, $l(X) >1$.
Consider the case that $l(X) = 2$. For item (1), $X \Rightarrow Y$ is of the form $p \circ p \Rightarrow Y$. Then for any $U$, the two occurrences of $p$ are matching in $p \circ p \Rightarrow U$. The proof for item (2) is similar.
Assume that $l(X)>2$. First consider the case that $X$ is of the form $\ast X'$. For item (1), $\ast X' \Rightarrow Y$ is equivalent to $\overline{Y} \Rightarrow X'$. By assumption, $Z_{(1)}$ and $Z_{(2)}$ are matching occurrences of a substructure $Z$ in $\ast X' \Rightarrow Y$. Since $\ast X' \Rightarrow Y$ is equivalent to $\overline{Y} \Rightarrow X'$, $Z_{(1)}$ and $Z_{(2)}$ are matching in $\overline{Y} \Rightarrow X'$. Then by the induction hypothesis of item (2), $Z_{(1)}$ and $Z_{(2)}$ are matching in $U' \Rightarrow X'$ for any structure $U'$. Since $U' \Rightarrow X'$ is equivalent to $\ast X' \Rightarrow\overline{U'}$, $Z_{(1)}$ and $Z_{(2)}$ are matching in $\ast X' \Rightarrow\overline{U'}$. That is, $Z_{(1)}$ and $Z_{(2)}$ are matching in $\ast X' \Rightarrow U$ for any structure $U$. The proof for item (2) is similar.
The case that $X$ is of the form $\bullet X'$ is similar because $\bullet X' \Rightarrow Y$ is equivalent to $X' \Rightarrow\bullet Y$ and $Y \Rightarrow\bullet X'$ is equivalent to $\bullet Y \Rightarrow X'$.
Now consider the case that $X$ is of the form $X_1 \circ X_2$. For item (1), if the two occurrences $Z_{(1)}, Z_{(2)}$ of $Z$ are in $X_1$, the claim follows from the fact that $X_1 \circ X_2 \Rightarrow Y$ is equivalent to $X_1 \Rightarrow Y \circ \overline{X_2}$ and the induction hypothesis. The proof is the same for the case that $Z_{(1)}, Z_{(2)}$ are in $X_2$.
If $X$ is of the form $X_1 [Z_{(1)} ]\circ X_2[Z_{(2)}]$, either the length of $X_1[Z_{(1)} ]$ or the length of $X_2[Z_{(2)}]$ is larger than $1$. Without loss of generality, assume that $l (X_1[Z_{(1)} ]) >1$.
Consider the case that $X_1 [Z_{(1)} ]$ is of the form $\ast X_1' [Z_{(1)} ]$.
If $X_2[Z_{(2)}]$ is of the form $X_2' [Z_{(2)}] \circ X_2''$ or $X_2'' \circ X_2' [Z_{(2)}]$, then $X \Rightarrow Y$ is equivalent to $\ast X_1' [Z_{(1)} ] \circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''}$. The claim follows from the induction hypothesis and the fact that $X \Rightarrow Y$ is equivalent to $\ast X_1' [Z_{(1)} ] \circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''}$.
If $X_2[Z_{(2)}]$ is of the form $\ast X_2' [Z_{(2)}]$, then $X \Rightarrow Y$ is equivalent to $\overline{Y} \Rightarrow X_1'[Z_{(1)}] \circ X_2' [Z_{(2)}]$. The claim follows from the induction hypothesis and the fact that $X \Rightarrow Y$ is equivalent to $\overline{Y} \Rightarrow X_1'[Z_{(1)}] \circ X_2' [Z_{(2)}]$.
The possibility that $X_2[Z_{(2)}]$ is of the form $\bullet X_2' [Z_{(2)}]$ is excluded by Lemma [\[lem:08301623\]](#lem:08301623){reference-type="ref" reference="lem:08301623"}.
For the same reason, $X_1 [Z_{(1)} ]$ is not of the form $\bullet X_1' [Z_{(1)} ]$. This completes the proof for item (1). The proof for item (2) is similar. ◻
$(\mathrm{I}l)$ and $(\mathrm{I}r)$ mean that certain occurrences of $\mathrm{I}$ are redundant. This is captured by the notion of *superfluous occurrences of $\mathrm{I}$*.
[\[defn:08301743\]]{#defn:08301743 label="defn:08301743"} An occurrence of $\mathrm{I}$ in $X\Rightarrow Y$ is said to be *superfluous* if it is a dependent substructure of $X\Rightarrow Y$ and $X \Rightarrow Y$ is equivalent to $Z \circ \mathrm{I}\Rightarrow U$ or $U \Rightarrow Z \circ \mathrm{I}$ for some structures $Z$ and $U$.
If $X \Rightarrow Y$ contains a superfluous occurrence of $\mathrm{I}$, we say that $\mathrm{I}$ is *superfluous* in $X\Rightarrow Y$.
By a proof similar to that of Lemma [\[lem:08301623\]](#lem:08301623){reference-type="ref" reference="lem:08301623"}, it follows that if $\mathrm{I}$ is a superfluous occurrence in $\mathcal{S}$, then $\mathcal{S}$ is of the form $\mathcal{S}[\ast^n \mathrm{I}\circ W]$ or $\mathcal{S}[W \circ \ast^n \mathrm{I}]$, for some $n \in \mathbb{N}$.
Now we define the notion of reduced sequents. They are sequents that contain no superfluous substructure or superfluous $\mathrm{I}$.
A sequent is called *reduced* if it does not contain any superfluous substructure or superfluous $\mathrm{I}$.
We will show that each provable sequent has a *$3$-reduced proof* in $\mathrm{D.K}^-_\ast$. The notions of $n$-reduced sequents and proofs for $n\in \mathbb{N}$ are as follows.
For a sequent $\mathcal{S}$ and a superfluous substructure $Z$ in $\mathcal{S}$, let $\rho(Z)$ be the number of matching occurrences of $Z$ in $\mathcal{S}$.
For a natural number $n$, a sequent $\mathcal{S}$ is called *$n$-reduced* if
1. for any superfluous substructure $Z$ in $\mathcal{S}$, $\rho(Z) \le n$;
2. $\mathcal{S}$ contains at most $n$ superfluous occurrences of $\mathrm{I}$.
A proof is called *$n$-reduced* if each sequent occurring in it is $n$-reduced.
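For example, the sequent $p \circ (p \circ p) \Rightarrow q$ is $3$-reduced, whereas $p \circ (p \circ (p \circ p)) \Rightarrow q$ is not, since the latter contains four matching occurrences of the superfluous substructure $p$.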
In order to make a sequent reduced or $n$-reduced, we need to remove (some) superfluous occurrences of substructures and $\mathrm{I}$. The definition of deletion is given next.
[\[def:delSubstr\]]{#def:delSubstr label="def:delSubstr"} Let $\mathcal{S}$ be a sequent and $Z$ a superfluous substructure of $\mathcal{S}$. The *deletion of $Z$ from $\mathcal{S}$*, denoted $d(Z)(\mathcal{S})$, is defined as follows:
1. $Z$ is a dependent substructure. Then $\mathcal{S}$ is of the form $\mathcal{S}[\ast^n Z \circ W]$ or $\mathcal{S}[W \circ \ast^n Z ]$, where $\ast^n$ is a sequence of $n$ $\ast$'s for $n \in \mathbb{N}$ (by Lemma [\[lem:08301623\]](#lem:08301623){reference-type="ref" reference="lem:08301623"}). Let $$d(Z)(\mathcal{S}) = \left \{
\begin{aligned}
&\mathcal{S}[W \backslash\ast^n Z \circ W] & &\text{~if~} \mathcal{S}= \mathcal{S}[\ast^n Z \circ W]\\
&\mathcal{S}[W \backslash W\circ \ast^n Z] & & \text{~if~} \mathcal{S}= \mathcal{S}[ W\circ \ast^n Z]
\end{aligned}
\right.$$
2. $Z$ is an independent substructure. Then $\mathcal{S}$ is of the form $\ast^n Z\Rightarrow Y$ or $X\Rightarrow \ast^n Z$, where $\ast^n$ is a sequence of $n$ $\ast$'s for $n \in \mathbb{N}$ (by Lemma [\[lem:08301623\]](#lem:08301623){reference-type="ref" reference="lem:08301623"}). We substitute $\ast^n Z$ with $\mathrm{I}$, i.e., $$d(Z)(\mathcal{S}) = S[\mathrm{I}\backslash\ast^n Z] = \left \{
\begin{aligned}
&\mathrm{I}\Rightarrow Y & &\text{~if~} \mathcal{S}=\ast^n Z\Rightarrow Y \\
&X\Rightarrow\mathrm{I}& & \text{~if~} \mathcal{S}= X\Rightarrow \ast^n Z
\end{aligned}
\right.$$
[\[def:deli\]]{#def:deli label="def:deli"} Let $\mathcal{S}$ be a sequent and $\mathrm{I}$ a superfluous occurrence in $\mathcal{S}$. Then $\mathcal{S}$ is of the form $\mathcal{S}[ \ast^n \mathrm{I}\circ W]$ or $\mathcal{S}[W \circ \ast^n \mathrm{I}]$ for some $n \in \mathbb{N}$ (see discussion after Definition [\[defn:08301743\]](#defn:08301743){reference-type="ref" reference="defn:08301743"}).
The *deletion of $\mathrm{I}$ from $\mathcal{S}$*, denoted $d(\mathrm{I})(\mathcal{S})$, is defined as follows:
$$d(\mathrm{I})(\mathcal{S}) = \left \{
\begin{aligned}
&\mathcal{S}[W \backslash\ast^n \mathrm{I}\circ W] & &\text{~if~} \mathcal{S}= \mathcal{S}[\ast^n \mathrm{I}\circ W]\\
&\mathcal{S}[W \backslash W\circ \ast^n \mathrm{I}] & & \text{~if~} \mathcal{S}= \mathcal{S}[ W\circ \ast^n \mathrm{I}]
\end{aligned}
\right.$$
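For example, in $p \circ p \Rightarrow q$ each occurrence of $p$ is a dependent superfluous substructure and $d(p)(p \circ p \Rightarrow q)$ is $p \Rightarrow q$; in $p \Rightarrow \ast p \circ q$, which is equivalent to $p \circ p \Rightarrow q$ by $(\ast \Rightarrow 2')$, the antecedent occurrence of $p$ is independent and deleting it yields $\mathrm{I}\Rightarrow \ast p \circ q$; and since $\mathrm{I}$ is superfluous in $p \circ \mathrm{I}\Rightarrow q$, we have $d(\mathrm{I})(p \circ \mathrm{I}\Rightarrow q) = p \Rightarrow q$.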
It is straightforward that the operations in Definitions [\[def:delSubstr\]](#def:delSubstr){reference-type="ref" reference="def:delSubstr"} and [\[def:deli\]](#def:deli){reference-type="ref" reference="def:deli"} are enough to make a sequent reduced. If a sequent $\mathcal{S}'$ is obtained from $\mathcal{S}$ by applying these operations until $\mathcal{S}'$ is reduced, then we call $\mathcal{S'}$ a *reduced sequent from $\mathcal{S}$*.
## Uniqueness up to inter-provability
There may be more than one reduced sequent from a sequent. We will show that all reduced sequents obtained from a sequent are inter-provable with display rules and $(\mathrm{I}r)$ in $\mathrm{D.K}^-_\ast$.
For equivalent sequents $\mathcal{S}_1$ and $\mathcal{S}_2$, by the atomicity-preserving property of display rules, $(Al)$, $(Ar)$, $(Pl)$ and $(Pr)$, we can say that $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ contain the same substructure $W$, though $W$ may occur at different positions in $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$. Let $\mathcal{S}_1'$ and $\mathcal{S}_2'$ be the sequents obtained by deleting the same substructure $W$ in $\mathcal{S}_1$ and $\mathcal{S}_2$, respectively. We can show that $\mathcal{S}_{1}'$ and $\mathcal{S}_{2}'$ are equivalent.
[\[lem:06181739\]]{#lem:06181739 label="lem:06181739"} Let $\mathcal{S}_1$ and $\mathcal{S}_{2}$ be equivalent sequents such that $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ contain the same substructure $W$. Then $d(W)(\mathcal{S}_{1})$ is equivalent to $d(W)(\mathcal{S}_{2})$.
*Proof.* Since $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are equivalent, there is a proof $\mathcal{D}$ from $\mathcal{S}_{1}$ to $\mathcal{S}_{2}$ with only display rules, $(Al)$, $(Ar)$, $(Pl)$ and $(Pr)$. We prove by induction on the height $h$ of this proof.
If $h = 0$, then $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$ are the same. The claim follows directly.
If $h>0$, consider the last rule $\mathcal{R}$ applied. Consider the case that $\mathcal{R}$ is the top-to-bottom direction of $(\Rightarrow\ast 2')$. Then $\mathcal{S}_2$ is $Y \Rightarrow\overline{X} \circ Z$ and $\mathcal{D}$ ends with $$\frac{X \circ Y \Rightarrow Z}{Y \Rightarrow\overline{X} \circ Z}$$ By the induction hypothesis, $d(W)(\mathcal{S}_{1})$ is equivalent to $d(W)(X \circ Y \Rightarrow Z)$. Now we show that $d(W)(X \circ Y \Rightarrow Z)$ is equivalent to $d(W)(Y \Rightarrow\overline{X} \circ Z)$. If $W$ is a substructure of $X$, then $X \circ Y \Rightarrow Z$ is $X[ U \circ \ast W] \circ Y \Rightarrow Z$ or $X[ \ast W \circ U] \circ Y \Rightarrow Z$ for some structure $U$. Without loss of generality, assume the former case holds. Then $d(W)(X \circ Y \Rightarrow Z) = X [ U \backslash U \circ \ast W ] \circ Y \Rightarrow Z$. By an application of $(\Rightarrow\ast 2')$, we have $Y \Rightarrow\overline{X [ U \backslash U \circ \ast W]} \circ Z$, which is $d(W) (Y \Rightarrow\overline{X} \circ Z)$. Therefore, $d(W)(X \circ Y \Rightarrow Z)$ is equivalent to $d(W) (Y \Rightarrow\overline{X} \circ Z)$. The case that $W$ is a substructure of $Y$ and the case that $W$ is a substructure of $Z$ can be proved similarly.
Since $d(W)(\mathcal{S}_{1})$ is equivalent to $d(W)(X \circ Y \Rightarrow Z)$ and $d(W)(X \circ Y \Rightarrow Z)$ is equivalent to $d(W)(Y \Rightarrow\overline{X} \circ Z)$, $d(W)(\mathcal{S}_{1})$ is equivalent to $d(W)(Y \Rightarrow\overline{X} \circ Z)$, which is $d(W)(\mathcal{S}_{2})$.
Other cases of $\mathcal{R}$ are similar. ◻
Then we have the following result about how reduced sequents preserve equivalence.
[\[prop:06191740\]]{#prop:06191740 label="prop:06191740"} Let $\mathcal{S}_1'$ be a reduced sequent from $\mathcal{S}_1$, and $\mathcal{S}_2'$ a reduced sequent from $\mathcal{S}_2$. If $\mathcal{S}_1$ and $\mathcal{S}_2$ are equivalent, then $\mathcal{S}_1'$ and $\mathcal{S}_2'$ are equivalent.
*Proof.* It follows directly from Lemma [\[lem:06181739\]](#lem:06181739){reference-type="ref" reference="lem:06181739"}. ◻
Next we show that it does not matter which of the matching occurrences of a substructure is deleted.
[\[lem:20230428\]]{#lem:20230428 label="lem:20230428"} (1) Let $X$ be a structure containing matching occurrences $Z_{(1)}, Z_{(2)}$ of a substructure $Z$ in $X\Rightarrow Y$. Then $d(Z_{(1)}) (X\Rightarrow Y)$ is equivalent to $d(Z_{(2)})(X\Rightarrow Y)$.
\(2\) Let $X$ be a structure containing matching occurrences $Z_{(1)}, Z_{(2)}$ of a substructure $Z$ in $Y\Rightarrow X$. Then $d(Z_{(1)}) (Y\Rightarrow X)$ is equivalent to $d(Z_{(2)})(Y\Rightarrow X)$.
*Proof.* We prove these claims simultaneously by induction on the length $l(X)$ of $X$. By assumption, $l(X) >1$.
Consider the case that $l(X) = 2$. For item (1), $X \Rightarrow Y$ is of the form $p \circ p \Rightarrow Y$. Then $d(p_{(1)})( p \circ p \Rightarrow Y) =d(p_{(2)}) (p \circ p \Rightarrow Y)$. It follows that they are equivalent. Item (2) can be proved in a similar way.
Assume that $l(X)>2$. First consider the case that $X$ is of the form $\ast X'$. For item (1), $\ast X' \Rightarrow Y$ is equivalent to $\overline{Y} \Rightarrow X'$. Then by the induction hypothesis of item (2), $d(Z_{(1)}) (\overline{Y} \Rightarrow X')$ is equivalent to $d(Z_{(2)})(\overline{Y} \Rightarrow X')$. By Lemma [\[lem:06181739\]](#lem:06181739){reference-type="ref" reference="lem:06181739"}, $d(Z_{(1)}) (\ast X' \Rightarrow Y)$ is equivalent to $d(Z_{(1)}) (\overline{Y} \Rightarrow X')$ and $d(Z_{(2)}) (\ast X' \Rightarrow Y)$ is equivalent to $d(Z_{(2)}) (\overline{Y} \Rightarrow X')$. By the transitivity of sequent equivalence (Proposition [\[prop:equivProp\]](#prop:equivProp){reference-type="ref" reference="prop:equivProp"}), it follows that $d(Z_{(1)}) (\ast X' \Rightarrow Y)$ is equivalent to $d(Z_{(2)}) (\ast X' \Rightarrow Y)$. Item (2) can be proved in a similar way.
The case that $X$ is of the form $\bullet X'$ is similar because $\bullet X' \Rightarrow Y$ is equivalent to $X' \Rightarrow\bullet Y$ and $Y \Rightarrow\bullet X'$ is equivalent to $\bullet Y \Rightarrow X'$.
Now consider the case that $X$ is of the form $X_1 \circ X_2$. For item (1), if the two occurrences $Z_{(1)}, Z_{(2)}$ of $Z$ are in $X_1$, the claim follows from the fact that $X_1 \circ X_2 \Rightarrow Y$ is equivalent to $X_1 \Rightarrow Y \circ \overline{X_2}$ and the induction hypothesis. The proof is the same for the case that $Z_{(1)}, Z_{(2)}$ are in $X_2$.
If $X$ is of the form $X_1 [Z_{(1)} ]\circ X_2[Z_{(2)}]$, since $l(X) >2$, either the length of $X_1[Z_{(1)} ]$ or the length of $X_2[Z_{(2)}]$ is larger than 1. Without loss of generality, assume that $l (X_1[Z_{(1)} ]) >1$.
Consider the case that $X_1 [Z_{(1)} ]$ is of the form $\ast X_1' [Z_{(1)} ]$.
If $X_2[Z_{(2)}]$ is of the form $X_2' [Z_{(2)}] \circ X_2''$ or $X_2'' \circ X_2' [Z_{(2)}]$, then $X \Rightarrow Y$ is equivalent to $\ast X_1' [Z_{(1)} ] \circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''}$. By the induction hypothesis, $d(Z_{(1)})(\ast X_1' [Z_{(1)} ] \circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''})$ is equivalent to $d(Z_{(2)})(\ast X_1' [Z_{(1)} ] \circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''})$. By Lemma [\[lem:06181739\]](#lem:06181739){reference-type="ref" reference="lem:06181739"}, $d(Z_{(1)})(X \Rightarrow Y)$ is equivalent to $d(Z_{(1)})(\ast X_1'$ $[Z_{(1)} ]$ $\circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''})$ and $d(Z_{(2)})(X \Rightarrow Y)$ is equivalent to $d(Z_{(2)})(\ast X_1' [Z_{(1)} ]$ $\circ X_2' [Z_{(2)}] \Rightarrow Y \circ \overline{X_2''})$. By the transitivity of sequent equivalence (Proposition [\[prop:equivProp\]](#prop:equivProp){reference-type="ref" reference="prop:equivProp"}), $d(Z_{(1)})(X \Rightarrow Y)$ is equivalent to $d(Z_{(2)})(X \Rightarrow Y)$.
If $X_2[Z_{(2)}]$ is of the form $\ast X_2' [Z_{(2)}]$, then $X \Rightarrow Y$ is equivalent to $\overline{Y} \Rightarrow X_1'[Z_{(1)}] \circ X_2' [Z_{(2)}]$. Then we can use the induction hypothesis of item (2) and proceed as above to show that $d(Z_{(1)})(X \Rightarrow Y)$ is equivalent to $d(Z_{(2)})(X \Rightarrow Y)$.
The possibility that $X_2[Z_{(2)}]$ is of the form $\bullet X_2' [Z_{(2)}]$ is excluded by Lemma [\[lem:08301623\]](#lem:08301623){reference-type="ref" reference="lem:08301623"}.
For the same reason, $X_1 [Z_{(1)} ]$ is not of the form $\bullet X_1' [Z_{(1)} ]$. This completes the proof for item (1). The proof for item (2) is similar. ◻
Next is the key result of this subsection: the reduced sequent from a sequent is unique up to inter-provability.
[\[prop:reducedInterPro\]]{#prop:reducedInterPro label="prop:reducedInterPro"} Let $\mathcal{S}_1$ and $\mathcal{S}_2$ be reduced sequents from $\mathcal{S}$. Then $\mathcal{S}_1$ and $\mathcal{S}_2$ are inter-provable with display rules and $(\mathrm{I}r)$ in $\mathrm{D.K}^-_\ast$.
*Proof.* This follows from Lemma [\[lem:20230428\]](#lem:20230428){reference-type="ref" reference="lem:20230428"} and the fact that, for any sequent $X \Rightarrow Y$, $X\Rightarrow Y$ and $X \circ \overline{Y} \Rightarrow\mathrm{I}$ are inter-provable with display rules and $(\mathrm{I}r)$. ◻
By this proposition, though there may be more than one reduced sequent from $\mathcal{S}$, they are inter-provable. Therefore, it does not cause confusion when we speak of 'the' reduced sequent from $\mathcal{S}$. For any sequent $\mathcal{S}$, we use $r(\mathcal{S})$ to denote the reduced sequent from $\mathcal{S}$.
The operation of *reducing a substructure in a sequent* is important in subsequent sections; its definition is as follows.
[\[def:reducesubstru\]]{#def:reducesubstru label="def:reducesubstru"} Let $\mathcal{S}$ be a sequent and $W$ a substructure in $\mathcal{S}$. *Reducing $W$ in $\mathcal{S}$* means to delete all superfluous substructures and superfluous $\mathrm{I}$ in $W$. We use $\mathfrak{r}(W)$ to denote the substructure of $W$ after reducing $W$ in $\mathcal{S}$. Hence, the result of reducing $W$ in $\mathcal{S}$ is $\mathcal{S}[\mathfrak{r}(W) \backslash W]$.
By Lemma [\[lem:matchTransfer\]](#lem:matchTransfer){reference-type="ref" reference="lem:matchTransfer"}, matching occurrences of a substructure remain matching when an unrelated structure is changed. Therefore, we can use one symbol $\mathfrak{r}$ for reducing $W$ in $X[W] \Rightarrow Y$ and in $X[W]\Rightarrow Y'$ and denote the results by $X[\mathfrak{r}(W) \backslash W] \Rightarrow Y$ and $X[\mathfrak{r}(W) \backslash W] \Rightarrow Y'$, respectively.
## The existence of 3-reduced proofs in $\mathrm{D.K}^-_\ast$
This subsection shows that for any reduced and provable $\mathcal{S}$ in $\mathrm{D.K}^-_\ast$, there is a 3-reduced proof for $\mathcal{S}$.
For sequents $X \Rightarrow Y$ and $X' \Rightarrow Y'$, the notation $X\Rightarrow Y \vdash^{n} X'\Rightarrow Y'$ means that there exists an $n$-reduced proof from $X \Rightarrow Y$ to $X' \Rightarrow Y'$ in $\mathrm{D.K}^-_\ast$; the notation $X\Rightarrow Y \dashv \vdash^{n} X'\Rightarrow Y'$ means that $X\Rightarrow Y \vdash^{n} X'\Rightarrow Y'$ and $X'\Rightarrow Y' \vdash^{n} X\Rightarrow Y$.
The following lemma says that we can go back and forth in $\mathrm{D.K}^-_\ast$ with an $n$-reduced proof between a sequent $\mathcal{S}$ and the sequent obtained by deleting a superfluous occurrence of a substructure or $\mathrm{I}$ from $\mathcal{S}$.
[\[lem:DealWithSupflu\]]{#lem:DealWithSupflu label="lem:DealWithSupflu"} Let $\mathcal{S}$ be an $n$-reduced sequent and $\mathcal{S}'$ the sequent obtained from $\mathcal{S}$ by deleting a superfluous occurrence of a substructure or $\mathrm{I}$. Then $\mathcal{S} \dashv \vdash^{n} \mathcal{S}'$ in $\mathrm{D.K}^-_\ast$.
*Proof.* We have three subcases to consider. The deleted structure is (1) a superfluous occurrence of $\mathrm{I}$, (2) a dependent substructure or (3) an independent substructure.
\(1\) If the deleted structure is a superfluous $\mathrm{I}$, then $\mathcal{S}$ contains $\ast^n \mathrm{I}\circ W$ or $W\circ \ast^n \mathrm{I}$ as a substructure, where $W$ is a structure and $n \in \mathbb{N}$. Without loss of generality, assume that $\mathcal{S}$ contains $W \circ \ast^n \mathrm{I}$ as a substructure. By the definition of deletion, the $W$ in $W \circ \ast^n \mathrm{I}$ remains in $\mathcal{S}'$ and $\mathcal{S}$ is $\mathcal{S}'[(W\circ \ast^n \mathrm{I})\backslash W]$.
Since $\mathrm{I}$ is superfluous in $\mathcal{S}$, there exists a proof from $\mathcal{S}'[(W\circ \ast^n \mathrm{I})\backslash W]$ to $Z\circ \mathrm{I}\Rightarrow U$ or $U \Rightarrow Z\circ \mathrm{I}$ for some structures $Z$ and $U$ using only display rules, $(Al)$, $(Ar)$, $(Pl)$ and $(Pr)$. Let $\mathcal{D}$ be the shortest of such proofs and $h$ the height of $\mathcal{D}$. We prove by induction on $h$.
First, consider the case that $h=0$. It follows that $W$ is $Z$ and $\ast^n$ is an empty string.
If $\mathcal{S}'[(W\circ\ast^n \mathrm{I})\backslash W]$ is equivalent to $Z\circ \mathrm{I}\Rightarrow U$, then $\mathcal{S}'[(W\circ\ast^n \mathrm{I})\backslash W]$ is $Z\circ \mathrm{I}\Rightarrow U$ and $\mathcal{S}'$ is $Z \Rightarrow U$. Use $(Pl)$ and $(\mathrm{I}l)$ to derive the latter from the former, and $(\mathrm{I}l)$ and $(Pl)$ for the converse.
If $\mathcal{S}'[(W\circ \ast^n \mathrm{I})\backslash W]$ is equivalent to $U \Rightarrow Z \circ \mathrm{I}$, then $\mathcal{S}' [(W\circ \ast^n \mathrm{I})\backslash W]$ is $U\Rightarrow Z\circ \mathrm{I}$ and $\mathcal{S}'$ is $U\Rightarrow Z$. Use $(Pr)$ and $(\mathrm{I}r)$ to derive the latter from the former, and $(\mathrm{I}r)$ and $(Pr)$ for the converse.
If $h>0$, consider the first rule $\mathcal{R}$ applied in $\mathcal{D}$.
Assume that $\mathcal{D}$ starts with: $$\AXC{$\mathcal{S}'[(W\circ\ast^n \mathrm{I})\backslash W]$}
\RightLabel{$(\mathcal{R})$}
\UIC{$\mathcal{S}''$}
\DP$$ Since $\mathcal{R}$ is a display rule, $(Al)$, $(Ar)$, $(Pl)$ or $(Pr)$, $\mathcal{R}$ is atomicity-preserving. Since $\mathcal{S}$ is $n$-reduced and $\mathcal{S}$ is $\mathcal{S}'[(W\circ\ast^n \mathrm{I})\backslash W]$, the sequent $\mathcal{S}'[(W\circ\ast^n \mathrm{I})\backslash W]$ is $n$-reduced; since $\mathcal{R}$ is atomicity-preserving, $\mathcal{S}''$ is $n$-reduced as well.
Note that $\mathcal{S}''$ is equivalent to $Z\circ \mathrm{I}\Rightarrow U$ or $U \Rightarrow Z\circ \mathrm{I}$ but the shortest proof is of height $h-1$. Therefore, by the induction hypothesis, $\mathcal{S}' \dashv \vdash^{n}\mathcal{S}''$.
Since the reverse of any instance of a display rule, $(Al)$, $(Ar)$, $(Pl)$ or $(Pr)$ is again an instance of one of these rules, and $\mathcal{R}$ is one of them, its opposite $\mathcal{R}^{-1}$ exists. We have the following proof: $$\AXC{$\mathcal{S}''$}
\RightLabel{$(\mathcal{R}^{-1})$}
\UIC{$\mathcal{S}'[(W\circ \ast^n \mathrm{I})\backslash W]$}
\DP$$
Since $\mathcal{S}' \dashv \vdash^{n}\mathcal{S}''$ and $\mathcal{S}$ is $\mathcal{S}'[(W\circ \ast^n \mathrm{I})\backslash W]$, $\mathcal{S} \dashv \vdash^{n} \mathcal{S}'$.
\(2\) If the deleted structure $Z$ is a dependent substructure, then $\mathcal{S}$ contains $\ast^n Z\circ W$ or $W\circ \ast^n Z$ as a substructure, where $W$ is a structure and $n \in \mathbb{N}$. Without loss of generality, assume that $\mathcal{S}$ contains $W \circ \ast^n Z$ as a substructure. By the definition of deletion, the $W$ in $W\circ \ast^n Z$ remains in $\mathcal{S}'$ and $\mathcal{S}$ is $\mathcal{S}'[(W\circ \ast^n Z)\backslash W]$.
Since $Z$ is superfluous in $\mathcal{S}$, there exists a proof from $\mathcal{S}'[(W\circ \ast^n Z)\backslash W]$ to $Z\circ Z \Rightarrow U$ or $U \Rightarrow Z\circ Z$ for some structure $U$ using only display rules, $(Al)$, $(Ar)$, $(Pl)$ and $(Pr)$. Let $\mathcal{D}$ be the shortest of such proofs and $h$ the height of $\mathcal{D}$. We prove by induction on $h$.
First, consider the case that $h=0$. It follows that $W$ is $Z$.
If $\mathcal{S}'[(W\circ\ast^n Z)\backslash W]$ is equivalent to $Z\circ Z \Rightarrow U$, then $\mathcal{S}'[(W\circ\ast^n Z)\backslash W]$ is $Z\circ Z \Rightarrow U$ and $\mathcal{S}'$ is $Z \Rightarrow U$. Use $(Cl)$ to derive the latter from the former and use $(Wl)$ for the other direction.
If $\mathcal{S}'[(W\circ\ast^n Z)\backslash W]$ is equivalent to $U \Rightarrow Z\circ Z$, then $\mathcal{S}'[(W\circ \ast^n Z)\backslash W]$ is $U\Rightarrow Z\circ Z$ and $\mathcal{S}'$ is $U\Rightarrow Z$. Use $(Cr)$ to derive the latter from the former and use $(Wr)$ for the other direction.
If $h>0$, consider the first rule $\mathcal{R}$ applied in $\mathcal{D}$.
Assume that $\mathcal{D}$ starts with: $$\AXC{$\mathcal{S}'[(W\circ\ast^n Z)\backslash W]$}
\RightLabel{$(\mathcal{R})$}
\UIC{$\mathcal{S}''$}
\DP$$ Since $\mathcal{R}$ is a display rule, $(Al)$, $(Ar)$, $(Pl)$ or $(Pr)$, $\mathcal{R}$ is atomicity-preserving. Since $\mathcal{S}$ is $n$-reduced and $\mathcal{S}$ is $\mathcal{S}'[(W\circ\ast^n Z)\backslash W]$, the sequent $\mathcal{S}'[(W\circ\ast^n Z)\backslash W]$ is $n$-reduced; since $\mathcal{R}$ is atomicity-preserving, $\mathcal{S}''$ is $n$-reduced as well.
Note that $\mathcal{S}''$ is equivalent to $Z\circ Z\Rightarrow U$ or $U \Rightarrow Z\circ Z$ but the shortest proof is of height $h-1$. Therefore, by the induction hypothesis, $\mathcal{S}' \dashv \vdash^{n}\mathcal{S}''$.
Since the reverse of any instance of a display rule, $(Al)$, $(Ar)$, $(Pl)$ or $(Pr)$ is again an instance of one of these rules, and $\mathcal{R}$ is one of them, its opposite $\mathcal{R}^{-1}$ exists. We have the following proof: $$\AXC{$\mathcal{S}''$}
\RightLabel{$(\mathcal{R}^{-1})$}
\UIC{$\mathcal{S}'[(W\circ \ast^n Z)\backslash W]$}
\DP$$
Since $\mathcal{S}' \dashv \vdash^{n}\mathcal{S}''$ and $\mathcal{S}$ is $\mathcal{S}'[(W\circ \ast^n Z)\backslash W]$, $\mathcal{S} \dashv \vdash^{n} \mathcal{S}'$.
\(3\) If the deleted structure $Z$ is an independent substructure, then $\mathcal{S}$ is $\ast^n Z \Rightarrow Y$ or $X \Rightarrow \ast^n Z$. Without loss of generality, assume that $\mathcal{S}$ is $\ast^n Z \Rightarrow Y$. Then $\mathcal{S}'$ is $\mathrm{I}\Rightarrow Y$. We have to show $\mathrm{I}\Rightarrow Y \dashv \vdash^{n}\ast^n Z \Rightarrow Y$.
This is an $n$-reduced proof from $\mathrm{I}\Rightarrow Y$ to $\ast^n Z \Rightarrow Y$: $$\AXC{$\mathrm{I}\Rightarrow Y$}
\RightLabel{$(Wl)$}
\UIC{$\ast^n Z \circ \mathrm{I}\Rightarrow Y$}
\RightLabel{$(Pl)$}
\UIC{$\mathrm{I}\circ \ast^n Z \Rightarrow Y$}
\RightLabel{$(Il)$}
\UIC{$\ast^n Z\Rightarrow Y$}
\DP$$
Now we prove $\ast^n Z \Rightarrow Y \vdash^{n} \mathrm{I}\Rightarrow Y$.
Since the $Z$ in $\ast^n Z \Rightarrow Y$ is independently superfluous, $Y$ contains an occurrence of $Z$ and $\ast^n Z \Rightarrow Y$ is equivalent to $Z \circ Z \Rightarrow U$ or $U \Rightarrow Z \circ Z$ for some structure $U$. In other words, there are two occurrences of $Z$ in $\ast^n Z \Rightarrow Y$, each on one side of $\Rightarrow$, and they can be moved to the same side of $\Rightarrow$ with only display rules, $(Al)$, $(Ar)$, $(Pl)$ and $(Pr)$. By observation, $Y$ must be of the form $Y_1\circ Y_2$, otherwise the two occurrences of $Z$ cannot be gathered together. Without loss of generality, assume that the other occurrence of $Z$ is a substructure of $Y_1$.
By $(\ast \Rightarrow 2')$, we have $\overline{ Y_1} \circ \ast^n Z \Rightarrow Y_2$. It follows that the $Z$ in $\overline{ Y_1} \circ \ast^n Z \Rightarrow Y_2$ is dependently superfluous. By item (2) in this proof, $\overline {Y_1}\circ \ast^n Z \Rightarrow Y_2 \dashv \vdash^{n} \overline{Y_1}\Rightarrow Y_2$. It remains to show that $\overline {Y_1}\Rightarrow Y_2 \vdash^{n} \mathrm{I}\Rightarrow Y_1\circ Y_2$. An $n$-reduced proof is as follows: $$\AXC{$\overline {Y_1}\Rightarrow Y_2$}
\RightLabel{$(Il)$}
\UIC{$\mathrm{I}\circ \overline{Y_1} \Rightarrow Y_2$}
\RightLabel{$(Pl)$}
\UIC{$ \overline{Y_1}\circ \mathrm{I}\Rightarrow Y_2$}
\RightLabel{$(\ast \Rightarrow 2')$}
\UIC{$\mathrm{I}\Rightarrow Y_1 \circ Y_2$}
\DP$$ ◻
Next is the main result of this section.
[\[lem:semireduced\]]{#lem:semireduced label="lem:semireduced"} If $\mathcal{S}$ is reduced and is provable in $\mathrm{D.K}^-_\ast$, then there is a 3-reduced proof for $r(\mathcal{S})$ in $\mathrm{D.K}^-_\ast$.
*Proof.* Since $\mathcal{S}$ is derivable in $\mathrm{D.K}^-_\ast$, there exists a proof $\mathcal{D}$ of height $h$ for $\mathcal{S}$.
We prove by induction on $1\le i \le h$ that for each sub-proof $\mathcal{D}'$ of $\mathcal{D}$ of height $i$ with the last sequent being $\mathcal{S}'$, there is a 3-reduced proof for $r(\mathcal{S}')$. Recall that $r(\mathcal{S}')$ denotes the reduced sequent from $\mathcal{S}'$.
If $i =1$, then $\mathcal{S}'$ is an instance of $(Id)$, $(\Rightarrow \top)$ or $(\bot \Rightarrow)$. So is $r(\mathcal{S}')$. Instances of $(Id)$, $(\Rightarrow \top)$ and $(\bot \Rightarrow)$ are 3-reduced.
If $i>1$, then we consider the last rule $\mathcal{R}$ applied in $\mathcal{D}'$.
*General idea.* Let $\mathcal{R}$ be $$\frac{\mathcal{S}_1,\ldots, \mathcal{S}_n}{\mathcal{S}}$$ By induction hypothesis, there are 3-reduced proofs $\mathcal{D}_1,\ldots,$ $\mathcal{D}_n$ for $r(\mathcal{S}_1),\ldots,$ $r(\mathcal{S}_n)$, respectively. To show that there is a 3-reduced proof for $r(\mathcal{S})$, we have to derive $r(\mathcal{S})$ from $r(\mathcal{S}_1), \ldots, r(\mathcal{S}_n)$. Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"} is important here, because the sequents obtained by reducing certain substructures serve as middle sequents from $r(\mathcal{S}_1), \ldots, r(\mathcal{S}_n)$ to $r(\mathcal{S})$.
In this process, we need to add or delete some superfluous substructures and this is where Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"} comes into play.
We divide the rules in $\mathrm{D.K}^-_\ast$ into three groups:
1. Atomicity-preserving rules, which are display rules, (Al), (Ar), (Pl) and (Pr);
2. Connective rules;
3. (Ql), (Qr), ($\mathrm{I}$l), ($\mathrm{I}$r), (Wl), (Wr), (Cl), (Cr), (Ml) and (Mr).
\(1\) We only give the proof for the case that $\mathcal{R}$ is the top-to-bottom rule of $(\Rightarrow \ast 1')$. The proof for other cases is similar.
In this case, $\mathcal{S}'$ is $X \Rightarrow Z \circ \overline{W}$. Then $\mathcal{D}'$ ends with $$\AXC{$X \circ W \Rightarrow Z$}
\RightLabel{$(\Rightarrow \ast 1')$}
\UIC{$X\Rightarrow Z \circ \overline{W}$}
\DP$$
By the induction hypothesis, there is a 3-reduced proof for $r(X \circ W \Rightarrow Z)$. By Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"}, $\mathfrak{r}(X) \circ \mathfrak{r}(W) \Rightarrow \mathfrak{r}(Z)$ is 3-reduced (the worst case is that each of $\mathfrak{r}(X)$, $\mathfrak{r}(W)$ and $\mathfrak{r}(Z)$ contains a matching occurrence of a substructure or is an occurrence of $\mathrm{I}$). By Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}, $r(X \circ W \Rightarrow Z) \vdash^{3} \mathfrak{r}(X) \circ \mathfrak{r}(W) \Rightarrow \mathfrak{r}(Z)$. Then by $(\Rightarrow \ast 1')$, we have $\mathfrak{r}(X) \Rightarrow \mathfrak{r}(Z) \circ \overline{\mathfrak{r}(W) }$. Note that $r(X \Rightarrow Z \circ \overline{W})$ can be obtained from $\mathfrak{r}(X) \Rightarrow \mathfrak{r}(Z) \circ \overline{\mathfrak{r}(W) }$ by deleting superfluous substructures and $\mathrm{I}$. Therefore, by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}, $\mathfrak{r}(X) \Rightarrow \mathfrak{r}(Z) \circ \overline{\mathfrak{r}(W) } \vdash^{3} r(X \Rightarrow Z \circ \overline{W})$. Since there is a 3-reduced proof for $r(X \circ W \Rightarrow Z)$, there is a 3-reduced proof for $r(X \Rightarrow Z \circ \overline{W})$.
\(2\) We only give the proof for the case that $\mathcal{R}$ is $(\land \Rightarrow)$, $(\Rightarrow\land)$ or $(\to \Rightarrow')$. The proof for other cases is similar.
Consider the case that $\mathcal{R}$ is $(\land \Rightarrow)$. Then $\mathcal{S}'$ is of the form $\varphi \land \psi\Rightarrow Y$ and $\mathcal{D}'$ ends with $$\frac{\varphi \circ \psi \Rightarrow Y}{\varphi \land \psi \Rightarrow Y}$$
By induction hypothesis, there is a 3-reduced proof for $r(\varphi \circ \psi \Rightarrow Y)$. By Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"}, $\mathfrak{r}(\varphi \circ \psi ) \Rightarrow \mathfrak{r}(Y)$ is 2-reduced. By Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}, $r(\varphi \circ \psi \Rightarrow Y)\vdash^{3} \mathfrak{r}(\varphi \circ \psi ) \Rightarrow \mathfrak{r}(Y)$.
There are two subcases:
(a) $\mathfrak{r}(\varphi \circ \psi) = \varphi$. Then $\varphi = \psi$. First we apply $(Wl)$ to derive $\varphi \circ \varphi \Rightarrow\mathfrak{r}(Y)$ from $\varphi \Rightarrow\mathfrak{r}(Y)$. Then by $(\land \Rightarrow)$, we have $\varphi \land \varphi \Rightarrow\mathfrak{r}(Y)$. It may not be $r(\varphi \land \varphi \Rightarrow Y)$ since $\mathfrak{r}(Y)$ may contain an occurrence of $\varphi \land \varphi$ that needs to be deleted. This can be handled by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}.
(b) $\mathfrak{r}(\varphi \circ \psi) = \varphi \circ \psi$. Then by $(\land \Rightarrow)$, we have $\varphi \land \psi \Rightarrow\mathfrak{r}(Y)$. Then we proceed as in item (a).
Consider the case that $\mathcal{R}$ is $(\Rightarrow\land)$. Then $\mathcal{S}'$ is of the form $X_1 \circ X_2 \Rightarrow \varphi \land \psi$ and $\mathcal{D}'$ ends with $$\frac{X_1 \Rightarrow\varphi \quad X_2 \Rightarrow\psi}{X_1 \circ X_2 \Rightarrow\varphi \land \psi}$$ By the induction hypothesis, there are 3-reduced proofs for $r(X_1 \Rightarrow \varphi)$ and $r(X_2 \Rightarrow \psi)$.
By Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"}, $\mathfrak{r}(X_1) \Rightarrow\varphi$ and $\mathfrak{r}(X_2) \Rightarrow\psi$ are 2-reduced. We proceed as follows: $$\AXC{$r(X_1 \Rightarrow\varphi)$}
\RightLabel{{\small Lemma \ref{lem:DealWithSupflu}}}
\UIC{$\mathfrak{r}(X_1) \Rightarrow\varphi$}
\AXC{$r(X_2 \Rightarrow\psi)$}
\RightLabel{{\small Lemma \ref{lem:DealWithSupflu}}}
\UIC{$ \mathfrak{r}(X_2) \Rightarrow\psi$}
\RightLabel{$(\Rightarrow\land )$}
\BIC{$ \mathfrak{r} (X_1) \circ \mathfrak{r} (X_2) \Rightarrow\varphi \land \psi$}
\DP$$
$\mathfrak{r} (X_1) \circ \mathfrak{r} (X_2) \Rightarrow\varphi \land \psi$ is 3-reduced, since the worst case is that each of $\mathfrak{r} (X_1)$, $\mathfrak{r}(X_2)$ and $\varphi \land \psi$ contains a matching occurrence of $\varphi \land \psi$. Moreover, $\mathfrak{r}(X_1) \circ \mathfrak{r} (X_2) \Rightarrow\varphi \land \psi$ may not be $r(X_1 \circ X_2 \Rightarrow\varphi\land \psi)$ because $\mathfrak{r}(X_1) \circ \mathfrak{r} (X_2)$ may contain an occurrence of $\varphi \land \psi$ that needs to be deleted. This can be handled by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}.
Consider the case that $\mathcal{R}$ is $(\to \Rightarrow')$. Then $\mathcal{S}'$ is of the form $\varphi \to \psi \Rightarrow\overline{X} \circ Y$ and $\mathcal{D}'$ ends with $$\frac{X\Rightarrow\varphi \quad \psi \Rightarrow Y}{\varphi \to \psi \Rightarrow \overline{X} \circ Y}$$
By induction hypothesis, there are 3-reduced proofs for $r(X\Rightarrow\varphi )$ and $r(\psi \Rightarrow Y)$. By Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"}, $\mathfrak{r}(X ) \Rightarrow\varphi$ and $\psi \Rightarrow\mathfrak{r}(Y)$ are 2-reduced. We proceed as follows: $$\AXC{$r(X \Rightarrow\varphi )$}
\RightLabel{{\small Lemma \ref{lem:DealWithSupflu}}}
\UIC{$\mathfrak{r}(X) \Rightarrow\varphi$}
\AXC{$r(\psi \Rightarrow Y)$}
\RightLabel{{\small Lemma \ref{lem:DealWithSupflu}}}
\UIC{$ \psi \Rightarrow\mathfrak{r}(Y)$}
\RightLabel{$(\to \Rightarrow')$}
\BIC{$\varphi \to \psi \Rightarrow\overline{\mathfrak{r}(X)} \circ \mathfrak{r}(Y)$}
\DP$$
$\varphi \to \psi \Rightarrow\overline{\mathfrak{r}(X)} \circ \mathfrak{r}(Y)$ is $3$-reduced, since the worst case is that each of $\varphi \to \psi$, $\overline{\mathfrak{r}(X)}$ and $\mathfrak{r}(Y)$ contains a matching occurrence of $\varphi \to \psi$. Moreover, $\varphi \to \psi \Rightarrow\overline{\mathfrak{r}(X)} \circ \mathfrak{r}(Y)$ may not be $r(\varphi \to \psi \Rightarrow\overline{X} \circ Y)$ because $\varphi \to \psi \Rightarrow\overline{\mathfrak{r}(X)} \circ \mathfrak{r}(Y)$ may contain an occurrence of $\varphi \to \psi$ that needs to be deleted. This can be handled by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}.
\(3\) The proof for (Ql), (Qr), ($\mathrm{I}$l), ($\mathrm{I}$r), (Wl), (Wr), (Cl), (Cr), (Ml) and (Mr) is straightforward.
If $\mathcal{R}$ is (Ql), (Qr), (Wl), (Wr), (Ml) or (Mr), we apply the induction hypothesis to the premise and then apply $\mathcal{R}$. Take (Ql) as an example. Then $\mathcal{D}'$ ends with $$\AXC{$\mathrm{I}\Rightarrow Y$}
\RightLabel{(Ql)}
\UIC{$\ast \mathrm{I}\Rightarrow Y$}
\DP$$ By induction hypothesis, there exists a 3-reduced proof for $r(\mathrm{I}\Rightarrow Y)$. Since $r(\mathrm{I}\Rightarrow Y)= \mathrm{I}\Rightarrow\mathfrak{r}(Y)$, this is also a 3-reduced proof for $\mathrm{I}\Rightarrow\mathfrak{r}(Y)$. Then by (Ql), we have $\ast \mathrm{I}\Rightarrow\mathfrak{r}(Y)$, which is $r(\ast \mathrm{I}\Rightarrow Y)$.
If $\mathcal{R}$ is ($\mathrm{I}$l), ($\mathrm{I}$r), (Cl) or (Cr), then the reduced sequent from the premise of $\mathcal{R}$ and that of the conclusion of $\mathcal{R}$ are the same. So the claim follows directly from the induction hypothesis. ◻
# Irredundancy, $r$-complexity increasing property and subformula property {#sec:7}
This section introduces the remaining requirements for sequents in finite proof searching.
#### Irredundant proofs
A proof is said to be *irredundant* if no sequent appears twice in any of its branches. Any redundant proof can be transformed into an irredundant one simply by excising the branch sections between repetitions of a sequent. Therefore,
[\[lem:irredun\]]{#lem:irredun label="lem:irredun"} For any sequent $\mathcal{S}$ and display calculus $\mathrm{D}$, if $\mathcal{S}$ is provable in $\mathrm{D}$, then there is an irredundant proof for $\mathcal{S}$ in $\mathrm{D}$.
#### $r$-complexity increasing property
Recall that $l(\varphi)$ denotes the length of $\varphi$ for any formula $\varphi$.
[\[def:complexity\]]{#def:complexity label="def:complexity"} The complexity $c(X\Rightarrow Y)$ of the sequent $X\Rightarrow Y$ is defined inductively as follows. First, the complexity of a structure is given by $$\begin{aligned}
& c(\varphi) = l(\varphi)\\
& c(\mathrm{I}) =0\\
& c (\ast X)= c(X)\\
&c(\bullet X) = c(X) \\
&c(X\circ Y) = \max\{c(X),c(Y)\}\\
\end{aligned}$$ Then we set $c(X\Rightarrow Y)= \max\{c(X), c(Y)\}+ n_\bullet(X \Rightarrow Y)$, where $n_\bullet(X \Rightarrow Y)$ is the number of $\bullet$ in $X\Rightarrow Y$.
Note that $\bullet$'s are considered at the final stage of complexity calculation, which ensures that $(\Rightarrow\bullet)$ is complexity-preserving.
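For instance, if $p$ and $q$ are propositional variables with $l(p) = l(q) = 1$, then $c(\ast p \circ \bullet q) = \max\{c(\ast p), c(\bullet q)\} = \max\{l(p), l(q)\} = 1$ and $c(\bullet(p \circ q)) = c(p \circ q) = 1$; since the sequent $\ast p \circ \bullet q \Rightarrow \bullet(p \circ q)$ contains two occurrences of $\bullet$, its complexity is $\max\{1,1\} + 2 = 3$.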
It follows immediately that equivalent sequents are of the same complexity.
[\[lem:06151544\]]{#lem:06151544 label="lem:06151544"} For any sequent $\mathcal{S}$, if $\mathcal{S}_1$ and $\mathcal{S}_2$ are reduced sequents obtained from sequent $\mathcal{S}$, then $\mathcal{S}_1$ and $\mathcal{S}_2$ are complexity-preservingly inter-provable.
*Proof.* By Proposition [\[prop:reducedInterPro\]](#prop:reducedInterPro){reference-type="ref" reference="prop:reducedInterPro"}, $\mathcal{S}_1$ and $\mathcal{S}_2$ are inter-provable with display rules, $(Al)$, $(Ar)$, $(Pl)$, $(Pr)$ and $(\mathrm{I}r)$ in $\mathrm{D.K}^-_\ast$. Since these rules are complexity-preserving, $\mathcal{S}_1$ and $\mathcal{S}_2$ are complexity-preservingly inter-provable. ◻
By this lemma, let the *r-complexity* $rc(\mathcal{S})$ of $\mathcal{S}$ be $c(r(\mathcal{S}))$, where $r(\mathcal{S})$ is the reduced sequent from $\mathcal{S}$ and is discussed after Proposition [\[prop:reducedInterPro\]](#prop:reducedInterPro){reference-type="ref" reference="prop:reducedInterPro"}.
We will show that each proof in $\mathrm{D.K}^-_\ast$ is $r$-complexity increasing. Before that, we prove a lemma.
[\[lem:comdecre\]]{#lem:comdecre label="lem:comdecre"} For any proof in $\mathrm{D.K}^-_\ast$, $r$-complexity increases from leaves to the root.
*Proof.* We prove by induction on the height $h$ of the proof. If $h=1$, the claim holds trivially.
If $h>1$, consider the last rule $\mathcal{R}$ applied in the proof. If $\mathcal{R}$ is a display rule, $(Al)$, $(Ar)$, $(Pl)$ or $(Pr)$, let the derivation end with $$\frac{\mathcal{S}}{\mathcal{S}'}$$ By definition, $rc(\mathcal{S}) = c(r(\mathcal{S}))$ and $rc(\mathcal{S}')=c (r(\mathcal{S'}))$. By assumption, $\mathcal{S}$ and $\mathcal{S}'$ are equivalent. By Proposition [\[prop:06191740\]](#prop:06191740){reference-type="ref" reference="prop:06191740"}, $r(\mathcal{S})$ and $r(\mathcal{S'})$ are equivalent. Since equivalent sequents are of the same complexity, $c(r(\mathcal{S})) = c(r(\mathcal{S}'))$. Therefore, $rc(\mathcal{S}) = rc(\mathcal{S}')$.
For example, this argument covers the case that $\mathcal{R}$ is the top-to-bottom rule of $(\Rightarrow\bullet)$.
Now suppose that $\mathcal{R}$ is not a display rule, $(Al)$, $(Ar)$, $(Pl)$ or $(Pr)$. Take $(\Rightarrow\land)$ as an example. Let the derivation end with $$\frac{X\Rightarrow\varphi \quad Y \Rightarrow\psi}{X \circ Y \Rightarrow\varphi \land \psi}$$ By definition, $rc(X\Rightarrow\varphi) = c(r(X\Rightarrow\varphi))$, $rc(Y \Rightarrow\psi) = c(r(Y \Rightarrow\psi))$ and $rc(X \circ Y\Rightarrow\varphi \land \psi)= c(r(X \circ Y \Rightarrow\varphi \land \psi))$. By the definition of $\mathfrak{r}$, $r(X \Rightarrow\varphi)$ is $r(\mathfrak{r}(X) \Rightarrow\varphi)$, where $\mathfrak{r}(X)$ is the substructure of $X$ after reducing $X$ in $X \Rightarrow\varphi$. Similarly, $r(X \circ Y \Rightarrow\varphi \land \psi)$ is $r(\mathfrak{r}(X) \circ \mathfrak{r}(Y) \Rightarrow\varphi \land \psi)$. It follows that $c(r(\mathfrak{r}(X) \Rightarrow\varphi)) \le c(r (\mathfrak{r}(X) \circ \mathfrak{r}(Y) \Rightarrow\varphi \land \psi))$. Similarly, $c(r(\mathfrak{r}(Y) \Rightarrow\psi)) \le c(r (\mathfrak{r}(X) \circ \mathfrak{r}(Y) \Rightarrow\varphi \land \psi))$. Therefore, $rc(X \Rightarrow\varphi) \le rc(X \circ Y \Rightarrow\varphi \land \psi)$ and $rc(Y \Rightarrow\psi) \le rc(X \circ Y \Rightarrow\varphi \land \psi)$.
The proofs for other rules are similar. ◻
#### Subformula property
A proof with conclusion $\mathcal{S}$ satisfies the *subformula property* if each formula appearing in the proof is a subformula of a formula in $\mathcal{S}$.
By observation on rules in $\mathrm{D.K}^-_\ast$, it follows that:
[\[lem:subform\]]{#lem:subform label="lem:subform"} Any proof in $\mathrm{D.K}^-_\ast$ satisfies the subformula property.
# Decidability {#sec:decida}
Previous sections show that a sequent $\mathcal{S}$ is provable in $\mathrm{D.K}^-_\ast$ iff there is a $\ast$-reduced, 3-reduced, irredundant, r-complexity-increasing proof satisfying the subformula property for $r(\mathcal{S})$ in $\mathrm{D.K}^-_\ast$. Therefore, to decide whether $\mathcal{S}$ is provable in $\mathrm{D.K}^-_\ast$, one algorithm is to check all such possible proofs for $r(\mathcal{S})$.
Denote by $\Gamma^n_\mathcal{S}$ the set of $\ast$-reduced and $n$-reduced sequents made of subformulas in $\mathcal{S}$, and with $r$-complexity less than or equal to that of $\mathcal{S}$[^5]. To show that $\mathrm{D.K}^-_\ast$ is decidable, it suffices to show that $\Gamma^3_\mathcal{S}$ is finite.
Section [8](#sec:extensions){reference-type="ref" reference="sec:extensions"} will discuss the decidability of extensions of $\mathrm{D.K}^-_\ast$, where the more general notion of $\Gamma^n_\mathcal{S}$ ($n \in \mathbb{N}$) is needed. Therefore, we will show that $\Gamma^n_\mathcal{S}$ is finite, instead of just $\Gamma^3_\mathcal{S}$.
To show that $\Gamma^n_\mathcal{S}$ is finite, we will show that it is a subset of a finite set defined as follows.
[\[defn:08071015\]]{#defn:08071015 label="defn:08071015"} Given a finite set $\Phi$ of formulas, the set $\Omega^\Phi$ of structures is defined as follows:
1. $\Phi \subseteq \Omega^\Phi$;
2. If $X, Y \in \Omega^\Phi$, then $X \circ Y \in \Omega^\Phi$;
3. If $X\in \Omega^\Phi$ and $X$ is not of the form $\ast X'$, then $\ast X \in \Omega^\Phi$;
4. If $X\in \Omega^\Phi$, then $\bullet X \in \Omega^\Phi$.
5. If $X \in \Omega^\Phi$, then $X\circ \mathrm{I}$ and $\mathrm{I}\circ X$ belong to $\Omega^\Phi$.
Let $\Omega_{m,n}^\Phi$ be the set of all $X \in \Omega^\Phi$ such that (1) $X$ contains at most $m$ $\bullet$'s, (2) each formula in $X$ occurs at most $n$ times in $X$, and (3) $X$ contains at most $m$ dependent occurrences of $\mathrm{I}$.
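For instance, if $\Phi = \{p\}$ for a propositional variable $p$, then $p \in \Omega^\Phi$ by clause 1, $\ast p \in \Omega^\Phi$ by clause 3, $\bullet \ast p \in \Omega^\Phi$ by clause 4 and $\ast p \circ p \in \Omega^\Phi$ by clause 2, while clause 3 does not allow us to form $\ast \ast p$, since $\ast p$ is of the form $\ast X'$. Taking $m = n = 1$, the structure $\bullet \ast p$ belongs to $\Omega^\Phi_{1,1}$, as it contains one $\bullet$, no occurrence of $\mathrm{I}$ and only one occurrence of $p$, whereas $\ast p \circ p$ does not, since the formula $p$ occurs twice in it.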
Now we show that $\Omega_{m,n}^\Phi$ is finite.
[\[lem:finite1\]]{#lem:finite1 label="lem:finite1"} Let $\Phi$ be a finite set of formulas with $k$ elements. Then $\Omega_{m,n}^\Phi$ is finite.
*Proof.* It suffices to show that there is an upper bound on the length of structures in $\Omega_{m,n}^\Phi$. Let $t$ be the maximal length of a formula in $\Phi$. An upper bound is $(k+1)nt+2((k+1)n-1) +m$. The proof is as follows.
Let $x_1, \ldots ,x_{(k+1)n}$ be an enumeration of elements from $\Phi\cup\{\mathrm{I}\}$ such that each element occurs exactly $n$ times.
The idea to construct one of the longest structures in $\Omega_{m,n}^\Phi$ is as follows. Every element in $x_1, \ldots ,x_{(k+1)n}$ is used to form a structure and $\ast$ is added to this structure whenever possible. Finally, $m$ $\bullet$'s are prefixed to the structure.
We formalize this idea and define a sequence of structures as follows: $$\begin{aligned}
&X_1 = \ast x_1\\
&X_{j} = \ast(X_{j-1} \circ \ast x_j)
\end{aligned}$$
Then one of the longest structures in $\Omega_{m,n}^\Phi$ is as follows: $$\underbrace{\bullet \ldots \bullet}_{m} X_{(k+1)n}$$ This structure contains $(k+1)n$ atomic structures, $2((k+1)n -1)$ $\ast$'s and $m$ $\bullet$'s. Since the length of each formula in $\Phi$ is less than or equal to $t$, its length is less than or equal to $(k+1)nt+2((k+1)n-1) +m$. ◻
[\[lem:03231805\]]{#lem:03231805 label="lem:03231805"} Let $\mathcal{S}$ be a sequent. Then $\Gamma^n_\mathcal{S}$ is finite for $n \in \mathbb{N}$.
*Proof.* Assume that $\mathcal{S}$ is of $r$-complexity $m$. Let $\Phi$ be the set of subformulas in $\mathcal{S}$. Then $\Phi$ is finite.
Let $\Delta = \{W \Rightarrow Z \mid W , Z \in \Omega^\Phi_{mn,(2n+1)mn} \}$. By Lemma [\[lem:finite1\]](#lem:finite1){reference-type="ref" reference="lem:finite1"}, $\Delta$ is finite. It follows from the definition that for any sequent $W \Rightarrow Z$, $W \Rightarrow Z \in \Delta$ iff (1) $W \Rightarrow Z$ is made of subformulas in $\mathcal{S}$, (2) $W \Rightarrow Z$ is $\ast$-reduced, (3) $W$ contains at most $(2n+1)mn$ dependent occurrences of $\mathrm{I}$, (4) $Z$ contains at most $(2n+1)mn$ dependent occurrences of $\mathrm{I}$, (5) the number of $\bullet$'s in $W$ is at most $mn$, (6) the number of $\bullet$'s in $Z$ is at most $mn$, (7) each formula occurs at most $(2n+1)mn$ times in $W$ and (8) each formula occurs at most $(2n+1)mn$ times in $Z$.
To show that $\Gamma^n_\mathcal{S}$ is finite, it suffices to show that $\Gamma^n_\mathcal{S} \subseteq \Delta$. We prove by contradiction and assume that $W \Rightarrow Z \in \Gamma^n_\mathcal{S}$ and $W \Rightarrow Z \not \in \Delta$ for some sequent $W \Rightarrow Z$. Since $W \Rightarrow Z \in \Gamma^n_\mathcal{S}$, it follows directly from the definition that $W \Rightarrow Z$ satisfies items (1)-(4) mentioned in the last paragraph.
Now we show that the number of $\bullet$'s in $W$ is at most $mn$. Since $W \Rightarrow Z \in \Gamma^n_\mathcal{S}$, by definition, $rc(W \Rightarrow Z) \le rc(\mathcal{S})=m$ and $W \Rightarrow Z$ is $n$-reduced. Then the number of $\bullet$'s in $r(W \Rightarrow Z)$ is less than or equal to $m$. Since $W \Rightarrow Z$ is $n$-reduced and is obtained from $r(W \Rightarrow Z)$ by duplicating substructures of the form $\ast^n X$ in $r(W \Rightarrow Z)$ (see Definition [\[def:delSubstr\]](#def:delSubstr){reference-type="ref" reference="def:delSubstr"}) or adding dependent $\mathrm{I}$'s, the number of $\bullet$'s in $W \Rightarrow Z$ is at most $mn$.
It can be proved in a similar way that the number of $\bullet$'s in $Z$ is at most $mn$. Therefore, $W \Rightarrow Z$ satisfies items (1)-(6) mentioned above.
Since $W\Rightarrow Z \not \in \Delta$, either there exists a formula $\varphi$ that occurs more than $(2n+1)mn$ times in $W$ or there exists a formula $\varphi$ that occurs more than $(2n+1)mn$ times in $Z$. Without loss of generality, we assume the former.
Since the number of $\bullet$'s in $W$ is at most $mn$ and the formula $\varphi$ occurs more than $(2n+1)mn$ times in $W$, by distributing these occurrences of $\varphi$ among the $\bullet$'s one finds that there exists a substructure $W'$ of $W$ which, after applications of the commutative and associative laws of $\circ$, is of the form $$\bullet ( \underbrace{\varphi \circ \ldots \circ \varphi}_{j_1} \circ \underbrace{\ast \varphi \circ \ldots \circ \ast \varphi}_{j_2} \circ W'')$$ where $W''$ is a structure and $j_1+ j_2 \ge \frac{(2n+1)mn}{mn}=2n+1$.
Since $j_1 + j_2 \ge 2n+1$, either the number of occurrences of $\varphi$ or the number of occurrences of $\ast \varphi$ is larger than $n$, contradicting the fact that $W \Rightarrow Z$ is $n$-reduced. ◻
[\[theo:decidability\]]{#theo:decidability label="theo:decidability"} For any sequent $\mathcal{S}$, whether $\mathcal{S}$ is provable or not in $\mathrm{D.K}$, $\mathrm{D.K}^-$ or $\mathrm{D.K}^-_\ast$ is decidable.
*Proof.* By Lemmas [\[lem:astreduced\]](#lem:astreduced){reference-type="ref" reference="lem:astreduced"}, [\[lem:semireduced\]](#lem:semireduced){reference-type="ref" reference="lem:semireduced"}, [\[lem:irredun\]](#lem:irredun){reference-type="ref" reference="lem:irredun"}, [\[lem:comdecre\]](#lem:comdecre){reference-type="ref" reference="lem:comdecre"} and [\[lem:subform\]](#lem:subform){reference-type="ref" reference="lem:subform"}, $\mathcal{S}$ is provable in $\mathrm{D.K}^-_\ast$ iff $\mathcal{S}$ is $\ast$-reduced and there exists a 3-reduced, irredundant, r-complexity-increasing proof with the subformula property for $r(\mathcal{S})$, each sequent in which is of $r$-complexity less than or equal to that of $\mathcal{S}$. Recall that $\Gamma_\mathcal{S}^3$ denotes the set of $\ast$-reduced and 3-reduced sequents made of subformulas in $\mathcal{S}$ and of $r$-complexity less than or equal to that of $\mathcal{S}$. Therefore, one algorithm to decide whether $\mathcal{S}$ is provable in $\mathrm{D.K}^-_\ast$ is to generate all possible proofs only consisting of sequents in $\Gamma_\mathcal{S}^3$.
By Lemma [\[lem:03231805\]](#lem:03231805){reference-type="ref" reference="lem:03231805"}, $\Gamma^3_\mathcal{S}$ is finite. Therefore, $\mathrm{D.K}^-_\ast$ is decidable.
By Proposition [\[prop:astequiv\]](#prop:astequiv){reference-type="ref" reference="prop:astequiv"}, $\mathcal{S}$ is provable in $\mathrm{D.K}^-$ iff $\tau_\ast(\mathcal{S})$ is provable in $\mathrm{D.K}^-_\ast$. Since $\mathrm{D.K}^-_\ast$ is decidable, it follows that $\mathrm{D.K}^-$ is decidable.
By Theorem [\[thm:cutAdmi\]](#thm:cutAdmi){reference-type="ref" reference="thm:cutAdmi"}, the cut rule is admissible in $\mathrm{D.K}^-$, so a sequent is provable in $\mathrm{D.K}$ iff it is provable in $\mathrm{D.K}^-$. Therefore, $\mathrm{D.K}$ is decidable. ◻
# Decidability of extensions of $\mathrm{D.K}^-_\ast$ {#sec:extensions}
This section discusses how to generalize the method for finite proof-searching in $\mathrm{D.K}^-_\ast$ to extensions of $\mathrm{D.K}^-_\ast$.
For an extension $\mathrm{D.}\Xi$ of $\mathrm{D.K}^-_\ast$, to show that $\mathrm{D.}\Xi$ is decidable, by Lemma [\[lem:03231805\]](#lem:03231805){reference-type="ref" reference="lem:03231805"}, it suffices to show that a sequent $\mathcal{S}$ is provable in $\mathrm{D.}\Xi$ iff there is a $\ast$-reduced, $n$-reduced, irredundant, r-complexity-increasing proof satisfying the subformula property for $r(\mathcal{S})$ in $\mathrm{D.}\Xi$, where $n \in \mathbb{N}$.
Section [8.1](#sec:8.1){reference-type="ref" reference="sec:8.1"} introduces *normal extensions of $\mathrm{D.K}^-_\ast$* and shows that they are decidable in this way. Section [8.2](#sec:Scott){reference-type="ref" reference="sec:Scott"} discusses some examples of extensions of $\mathrm{D.K}^-_\ast$.
## Normal extensions of $\mathrm{D.K}^-_\ast$ {#sec:8.1}
Recall that we use $\star$ with or without superscripts or subscripts to denote a sequence of $\ast$ and $\bullet$. Let $l(\star)$ be the number of symbols in $\star$.
Let $\Phi =\{X_1, \ldots , X_n\}$ be a set of structural variables. The set $\mathfrak{S}$ of structural schemes is defined recursively as follows: $$\mathfrak{S} ::= X \mid \ast \mathfrak{S} \mid \bullet \mathfrak{S} \mid \mathfrak{S} \circ \mathfrak{S}$$ where $X\in \{X_1, \ldots , X_n\}$. A structural scheme is called *$\Phi$-normal* if it contains each variable in $\Phi$ and each variable occurs exactly once.
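For instance, if $\Phi = \{X_1, X_2\}$, then the structural scheme $\ast(X_1 \circ \bullet X_2)$ is $\Phi$-normal, since it contains both variables and each of them occurs exactly once, whereas $X_1 \circ X_1$ and $\bullet X_1$ are not $\Phi$-normal: the former contains $X_1$ twice and does not contain $X_2$, and the latter does not contain $X_2$.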
Two structural schemes are said to be *equivalent* if they are inter-provable with display rules, $(Al)$, $(Ar)$, $(Pl)$ and $(Pr)$.
*Being $\ast$-reduced, equivalence, superfluous substructures and $\mathrm{I}$ and their deletion and reduction* for structural schemes are the same as those for sequents. One can obtain the definitions by substituting 'structural schemes' for 'sequents' in the related definitions in Sections [4](#sec:5){reference-type="ref" reference="sec:5"} and [5](#sec:6){reference-type="ref" reference="sec:6"}.
The *complexity* of a structural scheme is defined by deleting the clause about formulas in Definition [\[def:complexity\]](#def:complexity){reference-type="ref" reference="def:complexity"} and adding $c(X) =1$ for $X \in \Phi$. Then the *r-complexity* of a structural scheme $\mathfrak{S}$ is $rc (\mathfrak{S}) = c(r(\mathfrak{S}))$.
[\[def:type12\]]{#def:type12 label="def:type12"} Let $\Phi=\{ X_1, \ldots, X_n\}$ be a set of structural variables. A rule $\mathcal{R}$ is called a *left $\Phi$-normal rule* if it is of the form $$\frac{\mathfrak{S}_1 \Rightarrow Y \quad \ldots \quad \mathfrak{S}_k \Rightarrow Y }{\mathfrak{S} \Rightarrow Y}$$ where
1. $k \in \mathbb{N}$;
2. Each $\mathfrak{S}_i (1 \le i \le k)$ and $\mathfrak{S}$ are $\Phi$-normal.
3. Each premise and the conclusion are $\ast$-reduced;
4. For each $1 \le i \le k$, $rc(\mathfrak{S}_i) \le rc(\mathfrak{S})$
A rule $\mathcal{R}$ is called a *right $\Phi$-normal rule* if it is of the form $$\frac{Y \Rightarrow\mathfrak{S}_1 \quad \ldots \quad Y \Rightarrow\mathfrak{S}_k }{Y \Rightarrow\mathfrak{S}}$$ and satisfies the above conditions. Given a finite set $\Phi$ of structural variables, a rule is called *$\Phi$-normal* if it is either a left $\Phi$-normal rule or a right $\Phi$-normal rule. An extension of $\mathrm{D.K}^-_\ast$ is called *$\Phi$-normal* if it is obtained by adding to $\mathrm{D.K}^-_\ast$ a set of $\Phi$-normal rules.
Then we have the following result.
[\[lem:09041535\]]{#lem:09041535 label="lem:09041535"}
For any normal extension $\mathrm{D.}\Xi$ of $\mathrm{D.K}^-_\ast$, it is immediate that
1. each provable sequent has an irredundant proof;
2. each proof is $\ast$-reduced, r-complexity-increasing and satisfies the subformula property;
*Proof.* Item (1) holds because any redundant proof can be transformed into an irredundant one simply by excising the branch sections between repetitions of a sequent.
Since the premise(s) and the conclusion are $\ast$-reduced for each rule in $\Xi$ and each proof in $\mathrm{D.K}^-_\ast$ is $\ast$-reduced (Lemma [\[lem:astreduced\]](#lem:astreduced){reference-type="ref" reference="lem:astreduced"}), each proof in $\mathrm{D}.\Xi$ is $\ast$-reduced.
Since each rule in $\Xi$ is $r$-complexity increasing and each proof in $\mathrm{D.K}^-_\ast$ is $r$-complexity increasing (Lemma [\[lem:comdecre\]](#lem:comdecre){reference-type="ref" reference="lem:comdecre"}), each proof in $\mathrm{D}.\Xi$ is $r$-complexity increasing.
Since each rule in $\Xi$ is $\Phi$-normal and each proof in $\mathrm{D.K}^-_\ast$ satisfies the subformula property (Lemma [\[lem:subform\]](#lem:subform){reference-type="ref" reference="lem:subform"}), each proof in $\mathrm{D}.\Xi$ satisfies the subformula property. ◻
Next is a generalization of Lemma [\[lem:semireduced\]](#lem:semireduced){reference-type="ref" reference="lem:semireduced"} to any normal extension of $\mathrm{D.K}^-_\ast$. Its proof makes use of Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}, which also holds for any normal extension of $\mathrm{D.K}^-_\ast$, because only display rules, $(Al)$, $(Ar)$, $(Pl)$, $(Pr)$, $(\mathrm{I}l)$, $(\mathrm{I}r)$, $(Cl)$ and $(Cr)$ are needed in the proof.
[\[lem:09041448\]]{#lem:09041448 label="lem:09041448"} Let $\Phi =\{ X_1,\ldots, X_n\}$ be a set of structural variables. Let $\Xi$ be a set of $\Phi$-normal rules. If $\mathcal{S}$ is reduced and is provable in $\mathrm{D.} \Xi$, then there is a $\max \{n+1,3\}$-reduced proof for $\mathcal{S}$ in $\mathrm{D.}\Xi$.
*Proof.* Since $\mathcal{S}$ is derivable in $\mathrm{D.}\Xi$, there exists a proof $\mathcal{D}$ of height $h$ for $\mathcal{S}$. We prove by induction on $1\le i \le h$ that for each sub-proof $\mathcal{D}'$ of $\mathcal{D}$ of height $i$ with the last sequent being $\mathcal{S}'$, there is a $\max\{n+1,3 \}$-reduced proof for $r(\mathcal{S}')$.
The base step and the induction step where the last rule applied is a rule in $\mathrm{D.K}^-_\ast$ are the same as in Lemma [\[lem:semireduced\]](#lem:semireduced){reference-type="ref" reference="lem:semireduced"}.
Consider the case that $\mathcal{R}$ is a left $\Phi$-normal rule. Let $\mathcal{R}$ be $$\frac{\mathfrak{S}_1 \Rightarrow Y \quad \ldots \quad \mathfrak{S}_k \Rightarrow Y }{\mathfrak{S} \Rightarrow Y}$$
By Definition [\[def:type12\]](#def:type12){reference-type="ref" reference="def:type12"}, each $\mathfrak{S}_i$ is $\Phi$-normal, so it contains each variable in $\Phi$ exactly once. Then $\mathfrak{S}_i \Rightarrow Y$ can be written as $\mathfrak{S}_i [ X_1, \ldots, X_n] \Rightarrow Y$. By Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"}, $\mathfrak{S}_i [\mathfrak{r}(X_1) \backslash X_1, \ldots,\mathfrak{r}(X_n) \backslash X_n ] \Rightarrow\mathfrak{r} (Y)$ is $(n+1)$-reduced, since the worst case is that each of $\mathfrak{r}(X_i)$ $(1\le i \le n)$ and $\mathfrak{r}(Y)$ contains a matching occurrence of a substructure or is $\mathrm{I}$.
For each $1 \le i \le k$, by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}, $$r(\mathfrak{S}_i \Rightarrow Y) \vdash^{n+1} \mathfrak{S}_i [\mathfrak{r}(X_1) \backslash X_1, \ldots,\mathfrak{r}(X_n) \backslash X_n] \Rightarrow\mathfrak{r} (Y).$$ By the induction hypothesis, there is a $\max\{n+1,3 \}$-reduced proof for each $r(\mathfrak{S}_i \Rightarrow Y)$ $(1 \le i \le k)$. It follows that there is a $\max\{n+1,3 \}$-reduced proof for $\mathfrak{S}_i [\mathfrak{r}(X_1) \backslash X_1,$ $\ldots,\mathfrak{r}(X_n) \backslash X_n] \Rightarrow\mathfrak{r} (Y)$.
Then apply $\mathcal{R}$ and we have $$\frac{\mathfrak{S}_1[\mathfrak{r}(X_1) \backslash X_1, \ldots,\mathfrak{r}(X_n) \backslash X_n] \Rightarrow\mathfrak{r} (Y) \ldots \mathfrak{S}_k [\mathfrak{r}(X_1) \backslash X_1, \ldots,\mathfrak{r}(X_n) \backslash X_n] \Rightarrow\mathfrak{r} (Y)}{\mathfrak{S} [\mathfrak{r}(X_1) \backslash X_1, \ldots,\mathfrak{r}(X_n) \backslash X_n] \Rightarrow\mathfrak{r} (Y)}$$ $\mathfrak{S} [\mathfrak{r}(X_1) \backslash X_1, \ldots,\mathfrak{r}(X_n) \backslash X_n] \Rightarrow\mathfrak{r} (Y)$ is $(n+1)$-reduced and it may not be $r(\mathfrak{S} \Rightarrow Y )$ because it may contain superfluous substructures or superfluous occurrences of $\mathrm{I}$. This can be handled by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}.
To illustrate the above proof, consider a $\{X,Y\}$-normal rule $\mathcal{R}_{.3}$ as follows: $$\frac{\triangleright (X\circ \triangleright Y) \Rightarrow Z \quad \triangleright (X\circ Y) \Rightarrow Z \quad \triangleright(\triangleright X \circ Y) \Rightarrow Z}{\triangleright X \circ \triangleright Y \Rightarrow Z}$$ where $\triangleright$ is a shorthand for $\ast \bullet \ast$.
By the induction hypothesis, there are $3$-reduced proofs for $r(\triangleright (X\circ \triangleright Y) \Rightarrow Z)$, $r(\triangleright (X\circ Y) \Rightarrow Z)$ and $r(\triangleright(\triangleright X \circ Y) \Rightarrow Z)$.
By Definition [\[def:reducesubstru\]](#def:reducesubstru){reference-type="ref" reference="def:reducesubstru"}, $\triangleright (\mathfrak{r}(X) \circ \triangleright \mathfrak{r}(Y)) \Rightarrow\mathfrak{r}(Z)$, $\triangleright (\mathfrak{r}(X)\circ \mathfrak{r}(Y)) \Rightarrow\mathfrak{r}(Z)$ and $\triangleright(\triangleright\mathfrak{r}(X)\circ \mathfrak{r}(Y)) \Rightarrow\mathfrak{r}(Z)$ are $3$-reduced. Then we proceed as follows, in which the upper part is by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"} and the final step is by $\mathcal{R}_{.3}$. $$\AXC{$r(\triangleright (X\circ \triangleright Y) \Rightarrow Z)$}
\UIC{$\triangleright (\mathfrak{r}(X) \circ \triangleright \mathfrak{r}(Y)) \Rightarrow\mathfrak{r}(Z) $}
\AXC{$r(\triangleright (X\circ Y) \Rightarrow Z) $}
\UIC{$\triangleright (\mathfrak{r}(X)\circ \mathfrak{r}(Y)) \Rightarrow\mathfrak{r}(Z)$}
\AXC{$r(\triangleright(\triangleright X \circ Y) \Rightarrow Z)$}
\UIC{$\triangleright(\triangleright\mathfrak{r}(X)\circ \mathfrak{r}(Y)) \Rightarrow\mathfrak{r}(Z)$}
\TrinaryInfC{$\triangleright \mathfrak{r}(X) \circ \triangleright \mathfrak{r}(Y) \Rightarrow\mathfrak{r}(Z)$}
\DP$$ $\triangleright \mathfrak{r}(X) \circ \triangleright \mathfrak{r}(Y) \Rightarrow\mathfrak{r}(Z)$ is $3$-reduced. It may not be $r(\triangleright X \circ \triangleright Y \Rightarrow Z)$ because it may contain superfluous substructures or superfluous occurrences of $\mathrm{I}$. This can be handled by Lemma [\[lem:DealWithSupflu\]](#lem:DealWithSupflu){reference-type="ref" reference="lem:DealWithSupflu"}.
The case that $\mathcal{R}$ is a right $\Phi$-normal rule is similar to the case that $\mathcal{R}$ is a left $\Phi$-normal rule. ◻
[\[prop:extensionsdeci\]]{#prop:extensionsdeci label="prop:extensionsdeci"} Let $\Xi$ be a set of $\Phi$-normal rules. Then $\mathrm{D}.\Xi$ is decidable.
*Proof.* Let $n$ be the cardinality of $\Phi$. By Lemmas [\[lem:09041535\]](#lem:09041535){reference-type="ref" reference="lem:09041535"} and [\[lem:09041448\]](#lem:09041448){reference-type="ref" reference="lem:09041448"}, $\mathcal{S}$ is provable in $\mathrm{D}.\Xi$ iff $\mathcal{S}$ is $\ast$-reduced and there exists a $\max\{n+1,3\}$-reduced, irredundant, r-complexity-increasing proof with the subformula property for $r(\mathcal{S})$, each sequent in which is of $r$-complexity less than or equal to that of $\mathcal{S}$.
By Lemma [\[lem:03231805\]](#lem:03231805){reference-type="ref" reference="lem:03231805"}, the set of possible proofs only consisting of sequents in $\Gamma_\mathcal{S}^{\max\{n+1, 3\}}$ is finite. Therefore, $\mathrm{D}.\Xi$ is decidable. ◻
## Scott-Lemmon axioms and primitive axioms {#sec:Scott}
A class of axioms called *Scott-Lemmon axioms* subsumes a range of standard normal modal axioms, e.g., reflexivity, transitivity, euclideanness, etc. They are of the form: $$\Diamond^h \Box^i \varphi \to \Box^j \Diamond^k \varphi$$ where $h,i,j,k \ge 0$ and $\Diamond^n \varphi$ (likewise, $\Box^n \varphi$) denotes the formula $\varphi$ prefixed with $n$ occurrences of $\Diamond$ (resp. $\Box$).
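For instance, the choices $(h,i,j,k) = (0,1,0,0)$, $(0,1,2,0)$ and $(1,0,1,1)$ give the reflexivity axiom $\Box \varphi \to \varphi$, the transitivity axiom $\Box \varphi \to \Box\Box \varphi$ and the euclidean axiom $\Diamond\varphi \to \Box\Diamond\varphi$, respectively.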
[@kracht1996power] showed that an axiomatic extension of Hilbert-style tense logic can be properly displayed (by adding structural rules to $\mathrm{D.Kt}$) iff it is axiomatizable by a set of primitive axioms. It also provided a method to obtain display calculi from such axiomatizations, which transforms primitive axioms into structural rules such that the rules preserve cut elimination when added to $\mathrm{D.K}$ or $\mathrm{D.Kt}$.
Specifically, a formula is called a *primitive axiom* if it is of the form $\varphi \to \psi$ where both $\varphi$ and $\psi$ contain only variables, $\top$, $\land$, $\lor$, $\blacklozenge$ and $\Diamond$ and $\varphi$ contains each propositional variable at most once.
It is known that any Scott-Lemmon axiom $\Diamond^h \Box^i \varphi \to \Box^j \Diamond^k \varphi$ is equivalent to the following primitive axiom $$\blacklozenge^h \Diamond^j \varphi \to \Diamond^i \blacklozenge^k \varphi$$ and its corresponding structural rule that preserves cut elimination is as follows: $$\frac{(\ast \bullet \ast)^i \bullet^k X \Rightarrow Y}{\bullet^h (\ast \bullet \ast)^j X \Rightarrow Y}$$ where $(\ast \bullet \ast)^i$ is the sequence of symbols consisting of $i$ $\ast \bullet \ast$'s and $(\ast \bullet \ast)^j$ is the sequence of symbols consisting of $j$ $\ast \bullet \ast$'s. Note that it is a left $\{X\}$-normal rule, if $i+k \le h+j$. Then we have the following corollary of Theorem [\[prop:extensionsdeci\]](#prop:extensionsdeci){reference-type="ref" reference="prop:extensionsdeci"}.
Let $\Gamma$ be a set of Scott-Lemmon axioms such that each formula in $\Gamma$ is of the form $\Diamond^h \Box^i \varphi \to \Box^j \Diamond^k \varphi$ where $i+k \le h+j$. Let $\Xi$ be the set of normal rules obtained from $\Gamma$. Then $\mathrm{D.K}+ \Xi$ is decidable.
The following is a table for some Scott-Lemmon axioms and their corresponding structural rules:
-------------------------------------------------- --------------------------------------------------------------------------------------------------
Axioms Corresponding structural rules
\(D\) $\Box \varphi \to \Diamond\varphi$ $\dfrac{\ast \bullet \ast \bullet X \Rightarrow Y}{X \Rightarrow Y}$
\(T\) $\Box \varphi \to \varphi$ $\dfrac{\ast \bullet \ast X \Rightarrow Y}{X \Rightarrow Y}$
\(4\) $\Box \varphi \to \Box \Box \varphi$ $\dfrac{\ast \bullet \ast X \Rightarrow Y}{\ast \bullet \ast \ast \bullet \ast X \Rightarrow Y}$
\(5\) $\Diamond\varphi \to \Box \Diamond\varphi$ $\dfrac{\ast \bullet \ast \bullet X \Rightarrow Y}{\bullet \ast \bullet \ast X \Rightarrow Y}$
\(B\) $\varphi \to \Box\Diamond\varphi$ $\dfrac{ \bullet X \Rightarrow Y} {\ast \bullet \ast X \Rightarrow Y}$
$(Alt_1)$ $\Diamond\varphi \to \Box \varphi$ $\dfrac{X \Rightarrow Y}{\bullet \ast \bullet \ast X \Rightarrow Y}$
$(T^c)$ $\varphi \to \Box \varphi$ $\dfrac{X \Rightarrow Y}{\ast \bullet \ast X \Rightarrow Y}$
$(4^c)$ $\Box \Box \varphi \to \Box \varphi$ $\dfrac{\ast \bullet \ast \ast \bullet \ast X \Rightarrow Y}{\ast \bullet \ast X \Rightarrow Y}$
-------------------------------------------------- --------------------------------------------------------------------------------------------------
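To see how the entries of this table arise from the general translation, consider for instance the axiom (4), for which $(h,i,j,k) = (0,1,2,0)$: the rule scheme above becomes $$\frac{\ast \bullet \ast X \Rightarrow Y}{\ast \bullet \ast \ast \bullet \ast X \Rightarrow Y}$$ which is exactly the rule listed for (4), and $i + k = 1 \le 2 = h + j$, so it is a left $\{X\}$-normal rule. For (T), where $(h,i,j,k) = (0,1,0,0)$, the same scheme gives the rule listed for (T), but now $i + k = 1 > 0 = h + j$, which is consistent with the observation below that this rule is not left $\{X\}$-normal.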
$(.3):\Diamond\varphi \land \Diamond\psi \to \Diamond(\varphi \land \Diamond\psi) \lor \Diamond(\varphi \land \psi) \lor \Diamond(\Diamond\varphi \land \psi)$ is a primitive axiom that is not a Scott-Lemmon axiom. Its corresponding structural rule is as follows: $$\frac{\triangleright (X\circ \triangleright Y) \Rightarrow Z \quad \triangleright (X\circ Y) \Rightarrow Z \quad \triangleright(\triangleright X \circ Y) \Rightarrow Z}{\triangleright X \circ \triangleright Y \Rightarrow Z}$$ where $\triangleright$ is an abbreviation for $\ast \bullet \ast$. This rule appeared in the proof of Lemma [\[lem:09041448\]](#lem:09041448){reference-type="ref" reference="lem:09041448"} as an example.
The corresponding rules for (4), (5), (B), ($Alt_1$), ($T^c$) and (.3) are left $\{X\}$-normal rules. By Theorem [\[prop:extensionsdeci\]](#prop:extensionsdeci){reference-type="ref" reference="prop:extensionsdeci"}, $\mathrm{D.K} +\Xi$ is decidable for any $\Xi\subseteq \{ (4), (5), (B), (Alt_1),$ $(T^c),$ $(.3)\}$.
Note that the structural rule for $(T)$ is not a left $\{X\}$-normal rule. Therefore, Theorem [\[prop:extensionsdeci\]](#prop:extensionsdeci){reference-type="ref" reference="prop:extensionsdeci"} does not imply that the display calculus for $\mathrm{S5}$ is decidable.
# Conclusion {#sec:conclu}
Extracting decidability from display calculi is an important question in proof theory. A negative result concerning the decidability of display logics from [@kracht1996power] is that, for a logic with at least two modal operators, it is undecidable whether or not its display calculus is decidable. And according to [@kracht1996power], "it is not known so far whether for monomodal logics the theorem holds as well".
This paper provides a partial answer by proving the decidability of the display calculus $\mathrm{D.K}$ and some of its extensions. Our result does not contradict the negative result mentioned above because $\Diamond$, $\blacklozenge$, $\Box$ and $\blacksquare$ are in essence one modal operator. This is because $\Diamond$ and $\Box$, and $\blacklozenge$ and $\blacksquare$, are inter-definable in normal modal logics, respectively, and $\blacklozenge$ is the tense dual of $\Box$.
[^1]: This was proposed in [@belnap]. [@kracht1996power] and [@wansing2013displaying] provide rephrased proofs.
[^2]: The formulas with an effective algorithm are called *primitive axioms* in [@kracht1996power] and called *analytic inductive axioms* in [@greco2016unified]. An influential result for modal logics in [@kracht1996power] is that an axiomatic extension of the basic temporal logic is displayable iff it is axiomatized by a set of primitive axioms. The logics studied in [@greco2016unified] are algebraized by bounded distributive lattices with operators, which is more general than the algebraic semantics for modal logics.
[^3]: [@restall1998displaying] proved the decidability of some displayable substructural logics through their display calculi. The display calculi for modal logics are more complex than those for substructural logics and hence more techniques are needed to prove the results in our paper.
[^4]: Therefore, $(X\Rightarrow Y) [W \backslash Z]$ means a single substitution and is different from the common uniform substitution in logic.
[^5]: Note that the set of subformulas in $\mathcal{S}$ equals the set of subformulas in $r(\mathcal{S})$ and $rc(r(\mathcal{S}))= rc(\mathcal{S})$. For convenience, $\Gamma^n_\mathcal{S}$ is defined in terms of $\mathcal{S}$ instead of $r(\mathcal{S})$.
---
abstract: |
We prove a variant of the Theorem of Ito-Michler, investigating the properties of finite groups where a prime number $p$ does not divide the degree of any irreducible character left invariant by some Galois automorphism $\sigma$ of order $p$.
author:
- "Nicola Grittini [^1]"
bibliography:
- BIB.bib
title: On the degrees of irreducible characters fixed by some field automorphism in finite groups
---
# Introduction
This paper is meant to be the continuation of the work the author began in [@Grittini:Degrees_of_irreducible_fixed_p-solvable]. It was proved in [@Grittini:Degrees_of_irreducible_fixed_p-solvable] that, if $G$ is a $p$-solvable group, for some prime number $p$, $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}}/\mathbb{Q})$ is of order $p$, and $p$ does not divide the degree of any irreducible character of $G$ fixed by $\sigma$, then $G$ has a normal Sylow $p$-subgroup. In $p$-solvable groups, this extends [@Dolfi-Navarro-Tiep:Primes_dividing Theorem A], which is itself a variant of the celebrated Theorem of Ito-Michler.
As it was already noted in [@Grittini:Degrees_of_irreducible_fixed_p-solvable], the hypothesis on group $p$-solvability is essential, since it is possible to find simple groups, of order divisible by a prime number $p$, where $p$ does not divide the degree of any $\sigma$-invariant irreducible character of the group, for some Galois automorphism $\sigma$ of order $p$. Examples of this kind are $\operatorname{PSL}_2(8)$ and $\operatorname{PSL}_2(32)$ with $p=3$, or $\operatorname{PSL}_2(81)$ and $\operatorname{Sz}(32)$ for $p=5$.
Nevertheless, in this paper we prove that it is still possible to extend the results of [@Grittini:Degrees_of_irreducible_fixed_p-solvable] to non-solvable groups, and our hypothesis on $\sigma$-invariant character degrees imposes severe restrictions to the group normal structure also in the non-solvable case.
**Theorem 1**. *Let $G$ be a finite group, let $p$ be a prime number and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}}/\mathbb{Q})$ of order $p$. Suppose that $p$ does not divide the degree of any irreducible character of $G$ fixed by $\sigma$, then $\operatorname{O}^{p'}(G) = \operatorname{O}_{p}(G) \times K$ for some $K \lhd G$ such that $\operatorname{O}_{p'}(K)$ is solvable and $K/\operatorname{O}_{p'}(K)$ is the direct product of some non-abelian simple groups of Lie type with no $\sigma$-invariant characters of degree divisible by $p$. Moreover, if $p=2$ then $G$ has a normal Sylow $2$-subgroup.*
We may notice that in general we cannot hope for $\operatorname{O}_{p'}(K)$ to be trivial, since $\operatorname{SL}_2(8)$ would be a counterexample for $p=3$. Hence, this is probably the closest we can be to have a normal Sylow $p$-subgroup in a non-$p$-solvable group.
We can see that, if $G$ is a finite group and $p$ is an odd prime number, it is possible to take $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}}/\mathbb{Q})$ of order $p$ and with minimal kernel, such that, for each prime number $r$ dividing $\ensuremath{\left| G \right|}$, the restriction of $\sigma$ to $\mathbb{Q}_{\ensuremath{\left| G \right|}_r}$ is equal to the identity if and only if $p \nmid [\mathbb{Q}_{\ensuremath{\left| G \right|}_r} : \mathbb{Q}]$. Then, we have that a character $\chi \in \operatorname{Irr}(G)$ is $\sigma$-invariant if and only if $p \nmid [\mathbb{Q}(\chi) : \mathbb{Q}]$ and, therefore, either $G$ verifies the hypothesis of our Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"}, or there exists $\chi \in \operatorname{Irr}(G)$ such that $p$ divides $\chi(1)$ and not $[\mathbb{Q}(\chi) : \mathbb{Q}]$. Hence, the following corollary holds.
**Corollary 2**. *Let $G$ be a finite group and let $p$ be an odd prime number. Suppose that, for each $\chi \in \operatorname{Irr}(G)$, if $p$ divides $\chi(1)$ then it also divides $[\mathbb{Q}(\chi) : \mathbb{Q}]$. Then, $\operatorname{O}^{p'}(G) = \operatorname{O}_{p}(G) \times K$ for some $K \lhd G$ such that $\operatorname{O}_{p'}(K)$ is solvable and $K/\operatorname{O}_{p'}(K)$ is the direct product of some non-abelian simple groups of Lie type.*
This may be compared with the results and conjectures in [@Navarro-Tiep:The_fields_of_values]. In fact, if $G$ is a finite group and $p$ a prime number, it is conjectured in [@Navarro-Tiep:The_fields_of_values] that, for $\chi \in \operatorname{Irr}(G)$, if $p \nmid \chi(1)$, then, roughly speaking, the $p$-part of $[\mathbb{Q}(\chi) : \mathbb{Q}]$ is relatively *large*. It is also proved that, if $p \mid \chi(1)$, then $[\mathbb{Q}(\chi) : \mathbb{Q}]$ can be any value (see [@Navarro-Tiep:The_fields_of_values Theorem 2.2]). What we prove here is that, if $G$ is a finite group and $p$ an odd prime number, then either the thesis of our Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"} holds, or there exists at least one character $\chi \in \operatorname{Irr}(G)$ such that $p \mid \chi(1)$ and the $p$-part of $[\mathbb{Q}(\chi) : \mathbb{Q}]$ is trivial. Of course, this does not contradict [@Navarro-Tiep:The_fields_of_values Theorem 2.2], since our theorem guarantees the existence of merely *one* irreducible character with said characteristics in the group.
Essential for proving our Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"} is to categorize, at least partially, the simple groups which have a $\sigma$-invariant character of degree divisible by $p$, for every prime number $p$ dividing the order of the group and for every Galois automorphism $\sigma$ of order $p$. The following theorem partially classifies the finite non-abelian simple groups with a rational character of degree divisible by a prime number $p$. The theorem is due to Gunter Malle, and it is published here with his consent.
For coprime integers $p,q$, we let $e_p(q)$ denote the multiplicative order of $q$ modulo $p$.
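For instance, $e_7(2) = 3$, since $2^3 = 8 \equiv 1 \pmod 7$, while $2 \not\equiv 1$ and $2^2 \not\equiv 1 \pmod 7$.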
**Theorem 3**. *Let $S$ be a finite simple group of Lie type and $p>2$ a prime divisor of $\ensuremath{\left| S \right|}$. If all rational characters of $S$ have $p'$-degree, then one of the following statements is true:*
1. *$S={}^{2}\!B_2(q^2)$ with $p \mid (q^4+1)$,*
2. *$S=\operatorname{PSL}_n(q)$ with $e_p(q)=1$ and $p>n$,*
3. *$S=\operatorname{PSU}_n(q)$ with $e_p(q)=2$ and $p>n$,*
4. *$S=\operatorname{PSL}_n(q)$ with $n=2$, or $(n,e_p(q)) = (4,2)$ or $(n,p,e_p(q))=(3,3,1)$,*
5. *$S=\operatorname{PSU}_n(q)$ with $(n,e_p(q)) = (4,1)$ or $(n,p,e_p(q))=(3,3,2)$,*
6. *$S={{\operatorname{PSO}}}_{2n}^+(q)$ with $(n,e_p(q))\in\{(5,2),(7,2)\}$, or*
7. *$S={{\operatorname{PSO}}}_{2n}^-(q)$ with $$(n,e_p(q))\in\{(4,1),(4,2),(4,4),(5,1),(6,1),(6,2),(7,1),(8,1),(8,2)\}.$$*
Of course, the thesis of Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"} is stronger than what we actually need, and it is possible to find a simple group $S$, of order divisible by $p$, which does not have any rational character of degree divisible by $p$ and which, nevertheless, has a $\sigma$-invariant character of degree divisible by $p$ for every choice of $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| S \right|}}/\mathbb{Q})$ of order $p$. In fact, $S=\operatorname{PSL}_3(4)$ is an example of this kind of group, for $p=3$, since it has no rational irreducible characters of degree divisible by $3$ and, nevertheless, it has only $2$ irreducible characters of degree equal to $45$ and only $2$ of degree equal to $63$, which are therefore fixed by any Galois automorphism of order $3$. In general, if $S = \operatorname{PSL}_3(q)$ such that $3 \mid q-1$, and $p=3$, we prove in Proposition [Proposition 7](#proposition:PSL3){reference-type="ref" reference="proposition:PSL3"} that $S$ always has a $\sigma$-invariant character of degree divisible by $p$, despite appearing as an exception in Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"}.
We should mention here that it is particularly difficult to find examples of simple groups of Lie type of rank higher than $1$ and such that $p$ does not divide the degree of any $\sigma$-invariant character. In fact, for example, if $G=\operatorname{SL}_n(q)$ and $r$ is a primitive prime divisor of $q^n - 1$, then $G$ has a semisimple element $s$ of order $r$ and such that $q-1$ divides $\ensuremath{\left| G:\operatorname{C}_{G}(s) \right|}$ (see for instance [@Moreto-Tiep:Prime_divisors Lemma 2.4]), and the Deligne-Lusztig theory suggests the existence of an irreducible character in the dual group of degree $\ensuremath{\left| G:\operatorname{C}_{G}(s) \right|}$ (or possibly a fraction of $\ensuremath{\left| G:\operatorname{C}_{G}(s) \right|}$) having values in $\mathbb{Q}_r$. Hence, if a prime number $p$ divides $q-1$ and not $r-1$, the group has an irreducible character of degree divisible by $p$ and fixed by every Galois automorphism $\sigma$ of order $p$. Now, it is possible to find a group $G=\operatorname{PSL}_n(q)$ and a prime number $p$ such that $p \mid q-1$ and $p \mid r-1$ for every primitive prime divisor $r$ of $q^n-1$. However, the smallest example we could find of rank higher than $1$ is $\operatorname{PSL}_4(11)$, with $p=5$, which is itself quite large and, nevertheless, we verified that it has rational characters of degree divisible by $p$. This shows that it is computationally difficult to find high-rank simple groups which verify the hypotheses of our theorem.
Finally, as a by-product of some of the results we use in order to prove our Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"}, we are able to prove the following theorem on groups where all rational irreducible characters have odd degree. Notice that groups with these characteristics were already studied in [@Tiep-TongViet:Odd-degree_rational].
**Theorem 4**. *Let $G$ be a finite non-solvable group and suppose that all the rational characters of $G$ have odd degree. Suppose also that there is no subgroup $N \lhd \operatorname{O}^{2'}(G)$ such that $\operatorname{O}^{2'}(G)/N \cong D_{10}$, the dihedral group of $10$ elements. Then, there exists $L \lhd G$ solvable such that $L < \operatorname{O}^{2'}(G)$ and $\operatorname{O}^{2'}(G)/L \cong S_1 \times ... \times S_k$, with $S_i \cong \operatorname{PSL}_2(3^{f_i})$, for some odd $f_i \geq 3$, for each $i=1,...,k$.*
This result may be compared with [@Dolfi-Malle-Navarro:Grous_no_real_p-elements Theorem C], also reported as [@Tiep-TongViet:Odd-degree_rational Theorem 3.2], and it underlines the relation between rational characters of odd degree and rational (or real) elements of odd order. The topic is also studied by the author in [@Grittini:Odd-degree_rational].
# Simple groups and rational characters {#section:simple_groups}
The results in this section are due to Gunter Malle, and they are published here with his consent. The two propositions in this section, taken together, prove our Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"}.
For coprime integers $p,q$ we let $e_p(q)$ denote the multiplicative order of $q$ modulo $p$.
For the following result note that unipotent characters of finite reductive groups $G$ have the centre $\operatorname{Z}(G)$ in their kernel and so descend to irreducible characters of $G/\operatorname{Z}(G)$, which by a slight abuse of notation we will also call unipotent.
**Proposition 1**. *Let $S$ be a finite simple group of Lie type and $p>2$ a prime divisor of $\ensuremath{\left| S \right|}$. If all rational unipotent characters of $S$ have $p'$-degree, then one of the following statements is true:*
1. *$S={}^{2}\!B_2(q^2)$ with $p \mid (q^4+1)$,*
2. *$S=\operatorname{PSL}_n(q)$ with $e_p(q)=1$ and $p>n$,*
3. *$S=\operatorname{PSU}_n(q)$ with $e_p(q)=2$ and $p>n$,*
4. *$S=\operatorname{PSL}_n(q)$ with $n=2$, or $(n,e_p(q))\in\{(3,3),(4,2)\}$ or $(n,p,e_p(q))=(3,3,1)$,*
5. *$S=\operatorname{PSU}_n(q)$ with $(n,e_p(q))\in\{(3,6),(4,1)\}$ or $(n,p,e_p(q))=(3,3,2)$,*
6. *$S={{\operatorname{PSO}}}_{2n}^+(q)$ with $(n,e_p(q))\in\{(5,2),(7,2)\}$, or*
7. *$S={{\operatorname{PSO}}}_{2n}^-(q)$ with $$(n,e_p(q))\in\{(4,1),(4,2),(4,4),(5,1),(6,1),(6,2),(7,1),(8,1),(8,2)\}.$$*
*Proof.* For the Tits simple group this is immediate from the known character table, so we may now assume that $S=G/\operatorname{Z}(G)$ for $G$ the group of fixed points under a Steinberg map of a simple algebraic group of simply connected type. By our prior remark we may argue with the unipotent characters of $G$ in place of $S$. If $p$ is the defining prime of $G$, then the Steinberg character of $G$, of degree a power of $p$ [@Geck-Malle:Character_Theory Proposition 3.4.10], is as required. So now assume $p$ is not the defining prime. If $G$ is of exceptional type, it is immediate from the known degrees (listed e.g. in [@Carter:Finite_groups_of_Lie_type Section 13.9]) that there do exist rational unipotent characters of degree divisible by $p$ for any non-defining prime $p$, except when $G$ is a Suzuki group ${}^{2}\!B_2(q^2)$ and $p$ divides $q^4+1$.
For the groups of classical type we use that all unipotent characters are rational, see [@Geck-Malle:Character_Theory Corollary 4.5.6]. Also recall from [@Geck-Malle:Character_Theory Proposition 4.5.9] that degrees of unipotent characters are products of cyclotomic polynomials $\Phi_d$ (evaluated at $q$), possibly divided by a 2-power (which does not matter for our purpose here since we assume $p>2$). Furthermore, as a consequence of $d$-Harish-Chandra theory, any unipotent character lying in the $d$-Harish-Chandra series above a $d$-cuspidal unipotent character $\rho$ has at least the same $\Phi_d$-part in its generic degree as $\rho$ [@Geck-Malle:Character_Theory Corollary 4.6.25].
Now first assume $G=\operatorname{SL}_n(q)$ with $n\ge3$. Let $d=e_p(q)$ be the order of $q$ modulo $p$. Note that $d\le n$ if $p$ divides $\ensuremath{\left| G \right|}$. If $\lambda$ is a $d$-core partition of $n$, then by the degree formula [@Geck-Malle:Character_Theory Proposition 4.3.1] the degree of the unipotent character labelled by $\lambda$ is divisible by $\Phi_d(q)$ and hence by $p$. According to the main result of [@Granville-Ono:Defect_zero_p-blocks] there is such a $d$-core for any integer $n\ge d$ when $d\ge4$. So now assume $d\le3$. There exist 3-cores $\lambda_n\vdash n$ for $n=4,5,6$. Now if $n\equiv t\pmod3$ with $t\in\{4,5,6\}$ then all unipotent characters in the $3$-Harish-Chandra series of $G$ above the 3-core $\lambda_t$ have degree divisible by $\Phi_3(q)$, hence by $p$. This only leaves the case $n=3$ when $d=3$. There are 2-core partitions $\lambda_n\vdash n$ for $n=3,6$, and since any unipotent character in a 2-Harish-Chandra series above the characters labelled by these cores has degree divisible by $p$, we are left with $n=4$ when $d=2$. Finally, in the case $d=1$ assume that $p\le n$. Then $p$ divides $\Phi_p(q)$ and by our previous considerations there is a unipotent character of degree divisible by $p$, unless possibly when $n=3$.
Next consider $G=\operatorname{SU}_n(q)$ with $n\ge3$ and let $d=e_p(-q)$. Since the degree formula for unipotent characters of $G$ is obtained from the one for $\operatorname{SL}_n(q)$ by replacing $q$ by $-q$ (and adjusting the sign if necessary) [@Geck-Malle:Character_Theory Proposition 4.3.5], our investigation for $\operatorname{SL}_n(q)$ only leaves us with the cases $(n,d)\in\{(3,6),(4,1)\}$ and $d=2$, $p>n$, as listed in (3) and (5).
Now let $G_n={{\operatorname{Sp}}}_{2n}(q)$ or ${{\operatorname{SO}}}_{2n+1}(q)$ with $n\ge2$ and set $d=e_p(q)$. If $d$ is odd, then for any $d$-core $\lambda\vdash n$ the unipotent character labelled by the symbol of defect 1 for the bipartition $(\lambda;-)$ is of $p$-defect zero [@Geck-Malle:Character_Theory Corollary 4.4.18]. So as before we only need to concern ourselves with the cases $d=1$ and $d=3$. Now the cuspidal unipotent character of ${{\operatorname{Sp}}}_4(q)$ has degree divisible by $\Phi_1(q)$, and so has every unipotent character of $G_n$, $n\ge3$, in its Harish-Chandra series. Moreover, the degrees of the unipotent characters of $G_n$, $n=3,4,5$, labelled by the bipartitions $(2;1)$, $(31;-)$, respectively $(31;1)$, are divisible by $\Phi_3(q)$. Now assume $d=2e$ is even. If $e$ is odd we use the fact (Ennola duality) that the set of degree polynomials of $G_n$ is invariant under replacing $q$ by $-q$ (and adjusting the sign), so if there exists a unipotent character of $G_n$ of degree divisible by $\Phi_e(q)$, then there is also one of degree divisible by $\Phi_{2e}(q)$. If $e$ is even, then let $\lambda=(\lambda_1\le\ldots\le\lambda_r)\vdash n$ be an $e$-core, and $\beta=(\beta_1,\ldots,\beta_r)$ with $\beta_i:=\lambda_i+i-1$ be the corresponding $\beta$-set. Let $\gamma=(0,\ldots,r-2)$, a $\beta$-set for the empty partition. Let $S$ be the symbol whose first row consists of the $\beta_i$ with $\beta_i\pmod{2e}\in\{0,\ldots,e-1\}$ as well as the $\gamma_i$ with $\gamma_i\pmod{2e}\in\{e,\ldots,2e-1\}$ (where we consider the smallest non-negative residue), and the second row consisting of the remaining $\beta_i$, $\gamma_i$. Then $S$ has rank $n$ and odd defect and by construction is an $e$-cocore, hence it labels a unipotent character of $G_n$ of $p$-defect zero [@Geck-Malle:Character_Theory Corollary 4.4.18]. This leaves the case $e=2$ for which there might be no $e$-core of $n$. But here the unipotent characters of $G_n$, $n=2,3$, labelled by the symbols $\binom{0\ 1}{2}$, $\binom{0\ 1}{3}$ respectively have degree divisible by $\Phi_4(q)$, so by $p$, and we are again done by $d$-Harish-Chandra theory.
Finally we consider the even-dimensional orthogonal groups $G_n^\pm={{\operatorname{Spin}}}_{2n}^\pm(q)$ for $n\ge4$. Let $d=e_p(q)$. If $d$ is odd, the principal series unipotent character labelled by a $d$-core $\lambda\vdash n$ is as desired, again by [@Geck-Malle:Character_Theory Corollary 4.4.18]. For $d=1$ note that $G_4^+$ has a cuspidal unipotent character, hence of degree divisible by $\Phi_1(q)$, and so has $G_9^-$. This leaves the cases in (7). Let $d=2e$ be even. If $e$ is odd, the claim follows again by Ennola duality from the case when $d$ is odd. If $e$ is even, we construct a symbol of defect 0 or 4 which is an $e$-cocore from an $e$-core of $n$ by the same procedure as for types $B_n$ and $C_n$. Again, this leaves the case $e=2$. Here, the symbols $$\binom{3}{1},\ \binom{1\ 5}{0\ 1},\ \binom{0\ 1\ 5}{1},\ \binom{1\ 5}{}$$ are 2-cocores for $G_4^+,G_5^+,G_5^-,G_6^-$ respectively, leaving the case $(n,d)=(4,4)$ for $G_4^-$, listed in (7). ◻
**Proposition 2**. *Assume we are in one of the following cases of Proposition [Proposition 1](#proposition:unip_rational){reference-type="ref" reference="proposition:unip_rational"}:*
1. *$S=\operatorname{PSL}_3(q)$ with $e_p(q)=3$, or*
2. *$S=\operatorname{PSU}_3(q)$ with $e_p(q)=6$.*
*Then $S$ has a rational irreducible character of degree divisible by $p$.*
*Proof.* For $S=\operatorname{PSL}_3(q)$ with $q$ odd there is an involution $s$ in the group $\operatorname{PGL}_3(q)$ with centraliser $\operatorname{GL}_2(q)$. The semisimple characters in the Lusztig series of $\operatorname{SL}_3(q)$ labelled by $s$ have degree $\ensuremath{\left| \operatorname{PGL}_3(q):\operatorname{GL}_2(q) \right|}_{q'}=q^2+q+1$, they are rational as any involution is rational, and they have $\operatorname{Z}(\operatorname{SL}_3(q))$ in their kernel as $s\in\operatorname{PSL}_3(q)$. If $q$ is even, let $s$ be the image in $\operatorname{PGL}_3(q)$ of an element of order three in $\operatorname{GL}_3(q)$ with three distinct eigenvalues. Then $s$ is rational and has centraliser a maximal torus, of order $q^2-1$ when $3 \mid (q+1)$, respectively $(q-1)^2.3$ when $3 \mid (q-1)$. The corresponding semisimple character of degree $q^3-1$, respectively $\Phi_2\Phi_3/3$, of $\operatorname{SL}_3(q)$ is then rational and trivial on the centre. This deals with the cases $d=3$. The case of $\operatorname{PSU}_3(q)$ with $d=6$ can be handled entirely analogously. ◻
# Almost-simple groups
In this section, we will prove Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"} for almost-simple groups, i.e., for groups $G$ such that $S \leq G \leq \operatorname{Aut}(S)$, where the non-abelian simple group $S$ is identified with $\operatorname{Inn}(S)$. In particular, we will prove that, if $G$ is an almost-simple group with socle $S$, $p$ is a prime number such that $p \mid \ensuremath{\left| G/S \right|}$, and $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}}/\mathbb{Q})$ has order $p$, then $G$ has a $\sigma$-invariant irreducible character of degree divisible by $p$ and such that $S$ is not contained in its kernel. This last condition is actually more than what we need, and obtaining it requires some further work, since we cannot simply apply Theorem A and Theorem C of [@Grittini:Degrees_of_irreducible_fixed_p-solvable]. On the other hand, since there are actually some imprecisions in the proof of Theorem A and Theorem C of [@Grittini:Degrees_of_irreducible_fixed_p-solvable], we can see this as an opportunity to correct them.
First, we need to mention a result from [@Grittini:Degrees_of_irreducible_fixed_p-solvable], which we will use repeatedly in the current paper.
**Lemma 3** ([@Grittini:Degrees_of_irreducible_fixed_p-solvable Lemma 2.4]). *Let $G$ be a finite group, let $N \lhd G$ and let $\varphi\in \operatorname{Irr}(N)$ such that $(\ensuremath{\left| I_G(\varphi)/N \right|},o(\varphi)\varphi(1))=1$. Let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}} / \mathbb{Q})$ and suppose there exists $g \in G$ such that $\varphi^{\sigma} = \varphi^g$. Then, there exists $\chi \in \operatorname{Irr}(G \! \mid \!\varphi)$ such that $\chi^{\sigma} = \chi$. Moreover, $\chi$ can be chosen such that $\chi(1) = \varphi(1) \ensuremath{\left| G:I_G(\varphi) \right|}$.*
We will now prove the following, rather technical Proposition [Proposition 5](#proposition:semisimple_characters){reference-type="ref" reference="proposition:semisimple_characters"}, which is essentially a slight variant of [@Grittini:Degrees_of_irreducible_fixed_p-solvable Theorem 3.3]. In doing so, we correct the aforementioned imprecisions appearing in [@Grittini:Degrees_of_irreducible_fixed_p-solvable]. Clearly, this would require us to repeat large sections of the proof of [@Grittini:Degrees_of_irreducible_fixed_p-solvable Theorem 3.3]. We will also rely often on the results of the Deligne-Lusztig theory. A complete description of the theory can be found in [@Geck-Malle:Character_Theory].
First, however, we need to prove the following lemma.
**Lemma 4**. *Let $S = \mathbf{S}^F$ be a finite group of Lie type defined over a field of $q_0$ elements, for some Steinberg map $F$, such that $S$ is perfect and $S/\operatorname{Z}(S)$ is simple. Take $q$ such that $q^2 = q_0$ if $S/\operatorname{Z}(S)$ is of twisted type, and take $q = q_0$ if $S/\operatorname{Z}(S)$ is untwisted. Then, there exists an $F$-stable maximal torus $\mathbf{T} \leq \mathbf{S}$ such that $T = \mathbf{T}^F$ contains $\operatorname{Z}(S)$ and it also has a nontrivial cyclic subgroup $T_0$ of order either $q - 1$ or $q^2 - 1$, and such that $\ensuremath{\left| T_0 \cap \operatorname{Z}(S) \right|}$ is a power of $2$.*
*Proof.* It follows from the Classification of the finite simple groups that, for any choice of $S$, there exists a maximally split torus $T$ containing $\operatorname{Z}(S)$ and containing also a subgroup $T_0$ of order either $q - 1$ or $q^2 - 1$ (see for instance the classification in [@Wilson:Finite_Simple_Groups]). If $\ensuremath{\left| \operatorname{Z}(S) \right|}$ is a power of $2$, then we are done. If $S/\operatorname{Z}(S) \cong \operatorname{PSL}_n(q)$ with $n > 2$ and $q \neq 2$, then we can consider the subgroup of $T$ generated by $\operatorname{diag}(\lambda,1,...,1,\lambda^{\shortminus 1})$, for some $\lambda \in \mathbb{F}_q^{\times}$ of order $q - 1$, which has trivial intersection with $\operatorname{Z}(S)$. If $S/\operatorname{Z}(S) \cong \operatorname{PSL}_n(2)$, then $\ensuremath{\left| \operatorname{Z}(S) \right|} = 1$. If $S/\operatorname{Z}(S) \cong \operatorname{PSL}_2(9)$, then $q - 1 = 8$.
If $S/\operatorname{Z}(S) \cong \operatorname{PSU}_n(q)$, then usually $T_0$ can be chosen of order $q - 1$ and, since $\ensuremath{\left| \operatorname{Z}(S) \right|} \mid q + 1$, it follows that $\ensuremath{\left| T_0 \cap \operatorname{Z}(S) \right|} \leq 2$. In the remaining cases, namely if $q = 2$ or $S/\operatorname{Z}(S) \cong \operatorname{PSU}_4(3)$, then $n \geq 4$ and we can consider the subgroup of $T$ generated by $\operatorname{diag}(\lambda,\lambda^{\shortminus 1},1,...,1,\overline{\lambda},\overline{\lambda^{\shortminus 1}})$, for some $\lambda \in \mathbb{F}_{q^2}^{\times}$ of order $q^2 - 1$, which has trivial intersection with $\operatorname{Z}(S)$ (notice that $\operatorname{SU}_4(2)$ has trivial centre).
If $S/\operatorname{Z}(S) \cong \Omega_7(3)$ or $G_2(3)$, then $T$ has a subgroup of order $q - 1 = 2$. If $S/\operatorname{Z}(S) \cong E_6(q)$, then we can see from the description of $T$ given in [@Wilson:Finite_Simple_Groups Section 4.10.3] that it has a subgroup of order $q - 1$ with trivial intersection with the centre. Finally, if $S/\operatorname{Z}(S) \cong {}^2E_6(q)$, then we can see from [@Buturlakin:Spectra_finite_simple_groups_E6_2E6 Proposition 2.1] that there exists a torus $T \cong ({\mathbb{Z}}_{q+1})^2 \times ({\mathbb{Z}}_{q^2-1})^2$, containing $\operatorname{Z}(S)$, and it has a cyclic subgroup $T_0$ of order $q_0 - 1$, while $\ensuremath{\left| \operatorname{Z}(S) \right|} = (3,q_0 + 1)$. Hence, $T_0 \cap \operatorname{Z}(S) = 1$. ◻
We recall that, if $S = \mathbf{S}^F$ is a finite group of Lie type, it is always possible to find a finite group $S \leq \tilde{S}$ such that $\tilde{S} = \mathbf{\tilde{S}}^{\tilde{F}}$, for some $\mathbf{S} \leq \mathbf{\tilde{S}}$ with connected centre and some Steinberg map $\tilde{F}$ such that $\tilde{F} = F$ on $\mathbf{S}$ (see [@Geck-Malle:Character_Theory Lemma 1.7.3]). With a little abuse of terminology, we may say that $\tilde{S}$ is a *regular embedding* for $S$. Moreover, if $S \neq {}^2F_4(2)'$ is a simple group of Lie type, then it is always possible to find a finite group of Lie type $H = \mathbf{H}^F$ such that $S \cong H/\operatorname{Z}(H)$. If $\tilde{H}$ is a regular embedding for $H$, then we can abuse again the terminology and say that $\tilde{S}=\tilde{H}/\operatorname{Z}(H)$ is again a *regular embedding* for $S$.
We may also notice that, if $\nu$ is a field automorphism for a group of Lie type $S$, and $\tilde{S}$ is a *regular embedding* for $S$, then $\nu$ acts also on $\tilde{S}$.
**Proposition 5**. *Let $S = \mathbf{S}^F$ be a finite group of Lie type, for some Steinberg map $F$, such that $S$ is perfect and $S/\operatorname{Z}(S) \ncong {}^2E_6(q)$ is simple, and let $S \leq \tilde{S}$ be a regular embedding for $S$. Let $\nu \in \operatorname{Aut}(S)$ be a field automorphism for $S$ of order $p$, for some odd prime number $p$, and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{p\ensuremath{\left| \tilde{S} \right|}} / \mathbb{Q})$ of order $p$. Then, there exists a semisimple character $\chi \in \operatorname{Irr}(\tilde{S})$ such that $\operatorname{Z}(S) \leq \ker(\chi)$, $[\chi_S,{\chi_S}^{\nu}]=0$, and either $\chi^{\sigma}=\chi$ or $\chi^{\sigma}=\chi^{\nu^k}$ for some positive integer $k$ coprime with $p$.*
*Proof.* Suppose that $S$ is defined over a field of $q_0=\ell^f$ elements. Then, by Lemma [Lemma 4](#lemma:torus_subgroup){reference-type="ref" reference="lemma:torus_subgroup"}, there exists an $F$-stable maximal torus $\mathbf{T} \leq \mathbf{S}$, such that $T = \mathbf{T}^F$ contains $\operatorname{Z}(S)$, and a subgroup $T_0 \leq T$ of order $q - 1$, with either $q_0 = q$ or $q_0 = q^2$, such that $\ensuremath{\left| T_0 \cap \operatorname{Z}(S) \right|}$ is a power of $2$. By [@Geck-Malle:Character_Theory Lemma 1.7.7], there exists an $\tilde{F}$-stable maximal torus $\mathbf{T} \leq \mathbf{\tilde{T}}$ of $\mathbf{\tilde{S}}$ such that, if $\tilde{T} = \mathbf{\tilde{T}}^{\tilde{F}}$, then $\tilde{S} = \tilde{T}S$ and $T = \tilde{T} \cap S$.
Notice that $\nu(x) = x^{\ell^{\nicefrac{f}{p}}}$ for all $x \in T$. By [@Grittini:Degrees_of_irreducible_fixed_p-solvable Lemma 3.1] and its proof, there exists $\theta \in \operatorname{Irr}(\tilde{T})$ such that $o(\theta_{T_0})$ is odd, ${\theta_T}^{\nu} \neq \theta_T$ and $\theta$ is fixed either by $\sigma$ or by $\nu^k \times \sigma^{\shortminus 1}$, for some positive integer $k$ coprime with $p$. Moreover, since $T_0 \cap \operatorname{Z}(S) \leq \ker(\theta)$, we can see from the proof of [@Grittini:Degrees_of_irreducible_fixed_p-solvable Lemma 3.1] that we can actually take $\theta$ such that it contains the whole centre $\operatorname{Z}(S)$ in its kernel.
Let $\rho \in P \times \langle \sigma \rangle$ be such that $\theta^{\rho} = \theta$ and either $\rho = 1 \times \sigma$ or $\rho = \nu^k \times \sigma^{\shortminus 1}$, and let $\chi \in \operatorname{Irr}(\tilde{S})$ be a semisimple character lying over $\theta$. By [@Grittini:Degrees_of_irreducible_fixed_p-solvable Lemma 3.2], $\chi^{\rho}$ is a semisimple character lying over $\theta^{\rho}=\theta$ and, since $\mathbf{\tilde{S}}$ has connected centre, by [@Geck-Malle:Character_Theory paragraph 2.6.10] we have that $\chi^{\rho} = \chi$. Finally, notice that $\theta_T, {\theta_T}^{\nu} \in \operatorname{Irr}(T)$ are not conjugate in $S$, since otherwise the Frattini argument would imply that $\nu \in I_G(\theta_T) S$ and, thus, $\nu \in I_G(\theta_T)$. Since the irreducible constituents of $\chi_S$ and ${\chi^{\nu}}_S$ are semisimple characters lying over $\theta_T$ and ${\theta_T}^{\nu}$, it follows from [@Geck-Malle:Character_Theory Theorem 2.6.2] that $[{\chi^{\nu}}_S,\chi_S] = 0$.
**Corollary 6**. *Let $S \ncong {}^2E_6(q),{}^2F_4(2)'$ be a finite simple group of Lie type, and let $S \leq \tilde{S}$ be a regular embedding for $S$. Let $\nu \in \operatorname{Aut}(S)$ be a field automorphism for $S$ of order $p$, for some odd prime number $p$, and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{p\ensuremath{\left| \tilde{S} \right|}} / \mathbb{Q})$ of order $p$. Then, there exists a semisimple character $\chi \in \operatorname{Irr}(\tilde{S})$ such that $[\chi_S,{\chi_S}^{\nu}]=0$ and either $\chi^{\sigma}=\chi$ or $\chi^{\sigma}=\chi^{\nu^k}$ for some positive integer $k$ coprime with $p$.*
*Proof.* Since $S \ncong {}^2F_4(2)'$, there exists a finite group of Lie type $H = \mathbf{H}^F$, for some Steinberg map $F$, such that $H$ is perfect and $S \cong H/\operatorname{Z}(H)$. Then, Proposition [Proposition 5](#proposition:semisimple_characters){reference-type="ref" reference="proposition:semisimple_characters"} guarantees the existence of a character $\chi$ with the desired properties which contains $\operatorname{Z}(H)$ in its kernel. ◻
We now need to address explicitly two of the cases left out by Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"}.
**Proposition 7**. *Let $S$ be a simple group and assume that either*
1. *$S=\operatorname{PSL}_3(q)$ with $3 \mid q - 1$, or*
2. *$S=\operatorname{PSU}_3(q)$ with $3 \mid q + 1$.*
*Let $G \geq S$ be an almost-simple group with socle $S$ such that $G/S$ is a $3$-group, and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}}/\mathbb{Q})$ be of order $3$. Then, $G$ has a $\sigma$-invariant irreducible character of degree divisible by $3$ which does not contain $S$ in its kernel.*
*Proof.* We prove the proposition explicitly only for $S=\operatorname{PSL}_3(q)$ and $3 \mid q - 1$, since the case of $S=\operatorname{PSU}_3(q)$ with $3 \mid q + 1$ can be handled analogously.
We can identify $G$ with a subgroup of $\operatorname{Aut}(S)$ containing $S$, and we can define $S < \tilde{S} < \operatorname{Aut}(S)$ such that $\tilde{S} \cong \operatorname{PGL}_3(q)$. If we consider the character tables of $\operatorname{PGL}_3(q)$ (see [@Steinberg:The_representations_of Table VIII]) and of $\operatorname{PSL}_3(q)$ (see [@Simpson-Frame:The_character_table_for Table 2]), with $3 \mid q-1$, we can see that almost all the irreducible characters of $\tilde{S}$ restrict irreducibly to $S$, with the only exception being a single character of $\tilde{S}$ of degree $(q+1)(q^2+q+1)$. We can also notice that $\tilde{S}$ has exactly $\frac{1}{6}(q^2 - 5q + 10)$ irreducible characters of degree $(q+1)(q^2+q+1)$, while $S$ has exactly $\frac{1}{3}(q - 4)$ irreducible characters of degree $q^2+q+1$ and $\frac{1}{6}q(q - 1)$ irreducible characters of degree $(q-1)(q^2+q+1)$. We can see that, since $q \equiv 1$ (mod $3$), then $3$ divides all the mentioned degrees. We will use all these facts in the following paragraphs.
Suppose first that $G$ contains a non-trivial field automorphism for $S$, let $\tau \in G$ be a field automorphism of maximal order and let $\nu=\tau^{o(\tau)/p}$ be a field automorphism of order $3$. Consider the group $\Gamma = \langle \tau \rangle \ltimes \tilde{S}$, so that $\Gamma = G\tilde{S}$. It follows from Corollary [Corollary 6](#corollary:semisimple_characters){reference-type="ref" reference="corollary:semisimple_characters"} that there exists a semisimple character $\psi \in \operatorname{Irr}(\tilde{S})$ such that ${\psi^{\nu}}_S \neq \psi_S$ and either $\psi^{\sigma}=\psi$ or $\psi^{\sigma}=\psi^{\nu^k}$ for some positive integer $k$ coprime with $3$.
Since $\nu$ is of minimal order in the cyclic group $\langle \tau \rangle$, we have that $\psi$ has inertia subgroup $I_{\Gamma}(\psi)=\tilde{S}$. If $\tilde{S}<G$, then the thesis follows by Lemma [Lemma 3](#lemma:normal_coprime){reference-type="ref" reference="lemma:normal_coprime"} (notice that, in this case, the conclusion also follows directly from [@Grittini:Degrees_of_irreducible_fixed_p-solvable Theorem C]). Suppose now that $G \cap \tilde{S} = S$ and notice that $\psi_S$ is irreducible, since otherwise also $\psi^{\nu} \neq \psi$ would not restrict irreducibly to $S$, and we have seen that all the irreducible characters of $\tilde{S}$ but one restrict irreducibly to $S$. Hence, $\chi = \psi_S \in \operatorname{Irr}(S)$, $\chi^{\nu} \neq \chi$ and either $\chi^{\sigma}=\chi$ or $\chi^{\sigma}=\chi^{\nu^k}$. In particular, $I_G(\chi)=S$ and it follows again from Lemma [Lemma 3](#lemma:normal_coprime){reference-type="ref" reference="lemma:normal_coprime"} that $\chi^G$ is a $\sigma$-invariant irreducible character of $G$ of degree divisible by $3$.
We can now assume that $G$ does not contain a non-trivial field automorphism for $S$ and, thus, either $G=\tilde{S}$ or $G=S$. In the former case, $G$ has exactly $\frac{1}{6}(q^2 - 5q + 10)$ irreducible characters of degree $(q+1)(q^2+q+1)$. Since $q \equiv 1$ (mod $3$), we have that $q^2+q+1 \equiv 0$ (mod $3$). On the other hand, $q$ is congruent modulo $9$ to either $1$, $4$ or $7$, and in each case $q^2 - 5q + 10 \not\equiv 0$ (mod $9$), since $1-5+10$, $16-20+10$ and $49-35+10$ are all congruent to $6$ modulo $9$; thus $3 \nmid \frac{1}{6}(q^2 - 5q + 10)$. It follows that $\sigma$ fixes at least one character of degree $(q+1)(q^2+q+1)$ and we are done in this case.
Finally, assume that $G=S$; then it has exactly $\frac{1}{3}(q - 4)$ irreducible characters of degree $q^2+q+1$ and $\frac{1}{6}q(q - 1)$ irreducible characters of degree $(q-1)(q^2+q+1)$. As in the previous paragraph, $3$ divides both degrees. Suppose that $9 \nmid q - 4$; then $3 \nmid \frac{1}{3}(q - 4)$ and $G$ has a $\sigma$-invariant character of degree equal to $q^2+q+1$. Hence, we can assume that $q \equiv 4$ (mod $9$) and it follows that $q(q - 1) \equiv 3$ (mod $9$). In particular, $3 \nmid \frac{1}{6}q(q - 1)$ and $G$ has a $\sigma$-invariant character of degree $(q-1)(q^2+q+1)$. This concludes the proof. ◻
We now cite a result from [@Grittini:Degrees_of_irreducible_fixed_p-solvable], which we will use often, sometimes without mentioning it explicitly.
**Proposition 8** ([@Grittini:Degrees_of_irreducible_fixed_p-solvable Proposition 2.3]). *Let $G$ be a finite group, let $p$ be a prime number and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}} / \mathbb{Q})$ such that $o(\sigma)$ is a power of $p$. Suppose $H \leq G$ such that $p \nmid \ensuremath{\left| G:H \right|}$ and suppose, for some $\psi \in \operatorname{Irr}(H)$, that $\psi(1)$ divides the degree of every irreducible constituent of $\psi^G$. If $\psi$ is fixed by $\sigma$, then $\sigma$ also fixes some irreducible constituents of $\psi^G$.*
Notice that the hypothesis of $\psi(1)$ dividing the degree of every irreducible constituent of $\psi^G$ is automatically verified if there exists $N \lhd G$, with $N \leq H$, and a character $\varphi\in \operatorname{Irr}(N)$ which extends to $\psi$.
We are now ready to prove Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"} for almost-simple groups, in the stronger version we mentioned.
**Theorem 9**. *Let $G$ be an almost-simple group with socle $S$, let $p$ be an odd prime number and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| G \right|}}/\mathbb{Q})$ be of order $p$. If $p \mid \ensuremath{\left| G/S \right|}$, then $G$ has a $\sigma$-invariant irreducible character $\chi$ of degree divisible by $p$ and such that $S$ is not contained in its kernel.*
*Proof.* First, we notice that, if $S$ is either an alternating or a sporadic simple group, then the Classification of finite simple groups tells us that $\operatorname{Aut}(S)/S$ is a $2$-group. Hence, we can assume that $S$ is a simple group of Lie type.
Let $S \leq \tilde{S} \leq \operatorname{Aut}(S)$ be a regular embedding for $S$, and let $P/S \in \operatorname{Syl}_{p}\left(G/S\right)$. Suppose first that there exists a rational character $\varphi\in \operatorname{Irr}(S)$ of degree divisible by $p$ and let $T=I_G(\varphi)$ be its inertia subgroup. We can assume that $T/S \cap P/S$ is a Sylow $p$-subgroup for $T/S$. Since $p$ is odd and $\varphi$ is rational, it follows from [@Isaacs Theorem 6.30] that $\varphi$ has a rational extension $\theta$ to $P \cap T$, and by Proposition [Proposition 8](#proposition:invariant_in_subgroup){reference-type="ref" reference="proposition:invariant_in_subgroup"} there exists a $\sigma$-invariant character $\hat{\varphi} \in \operatorname{Irr}(T \! \mid \!\varphi)$, lying over $\theta$, of degree divisible by $p$. We can take $\chi = \hat{\varphi}^G$ and we are done in this case.
Hence, we can assume that no rational irreducible character of $S$ has degree divisible by $p$. It follows that either $p \nmid \ensuremath{\left| S \right|}$ or we are in one of the cases of Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"}. If $p=3$, $S = \operatorname{PSL}_3(q)$ and $3 \mid q - 1$, or $S = \operatorname{PSU}_3(q)$ and $3 \mid q + 1$, then $\ensuremath{\left| \tilde{S}/S \right|} = 3$ and, since $\operatorname{Aut}(S)/\tilde{S}$ is abelian, we have that $P$ is normal in $G$. Since $P/S$ is a $p$-group, by Proposition [Proposition 7](#proposition:PSL3){reference-type="ref" reference="proposition:PSL3"} there exists a $\sigma$-invariant character $\psi \in \operatorname{Irr}(P)$ of degree divisible by $p$ and such that $S \nleq \ker(\psi)$, then the theorem follows from Proposition [Proposition 8](#proposition:invariant_in_subgroup){reference-type="ref" reference="proposition:invariant_in_subgroup"}.
In all the remaining cases, namely if $p \nmid \ensuremath{\left| S \right|}$ or if we are in the remaining cases of Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"}, we know from the Classification of finite simple groups that $p \nmid \ensuremath{\left| \tilde{S}/S \right|}$, $P(G \cap \tilde{S})/S \lhd G/S$ and $P/S$ is generated by a field automorphism $\tau$ of order a power of $p$. By Corollary [Corollary 6](#corollary:semisimple_characters){reference-type="ref" reference="corollary:semisimple_characters"}, there exists $\psi \in \operatorname{Irr}(\tilde{S})$ which does not have $S$ in its kernel, such that $[{\psi_S}^{\nu}, \psi_S] = 0$, where $\nu=\tau^{o(\tau)/p}$ has order $p$, and such that $\psi^{\rho}=\psi$, for $\rho \in \langle \tau \rangle \times \langle \sigma \rangle$ equal to either $1 \times \sigma$ or $\nu^k \times \sigma^{\shortminus 1}$ for some positive integer $k$ coprime with $p$. Let $N = G \cap \tilde{S}$, if $\varphi$ is an irreducible constituent of $\psi_N$, then $\varphi^{\nu} \neq \varphi$ and, since $p \nmid \ensuremath{\left| \tilde{S}/N \right|}$, $\varphi^{\rho}=\varphi$ and, thus, either $\varphi^{\sigma}=\varphi$ or $\varphi^{\sigma}=\varphi^{\nu^k}$. Since $\nu$ is contained in every proper subgroup of $\langle \tau \rangle$, we have that $I_{PN}(\varphi)=N$ and it follows from Lemma [Lemma 3](#lemma:normal_coprime){reference-type="ref" reference="lemma:normal_coprime"} that $\varphi^{PN} \in \operatorname{Irr}(PN)$ is $\sigma$-invariant and $p$ divides $\varphi^{PN}(1)=\varphi(1)\ensuremath{\left| PN:N \right|}$. Finally, since $PN$ is normal in $G$, the theorem follows again from Proposition [Proposition 8](#proposition:invariant_in_subgroup){reference-type="ref" reference="proposition:invariant_in_subgroup"}. ◻
# Proof of Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"} {#proof-of-theorem-resultmain}
The following lemma is essentially [@Bianchi-Chillag-Lewis-Pacifici:Character_degree_graphs Lemma 5], only slightly more general.
**Lemma 10**. *Let $G$ be a finite group and let $M \lhd G$ such that $M=S_1 \times ... \times S_n$, for some non-abelian simple groups $S_i$. For $i=1,...,n$, let $\delta_i \in \operatorname{Irr}(S_i)$ such that it extends to $\operatorname{Aut}(S_i)$, and such that ${\delta_i}^g = \delta_j$ whenever ${S_i}^{g^{\shortminus 1}} = S_j$, for $g \in G$. Then, $\delta = \delta_1 \times ... \times \delta_n \in \operatorname{Irr}(M)$ extends to $G$. Moreover, if each $\delta_i$ has a rational extension to $\operatorname{Aut}(S_i)$, then $\delta$ has a rational extension to $G$.*
*Proof.* Let $M = U_1 \times ... \times U_k$, where $U_1,...,U_k$ are minimal normal subgroups of $G$. In particular, each $U_j$ is the direct product of all the $S_i$s in some orbit of the action of $G$ on $S_1,...,S_n$. For any $j = 1,...,k$, we may write $\pi_j \subseteq \{1,...,n\}$ for the set of indexes such that $S_i \leq U_j$ if and only if $i \in \pi_j$. Now, for each $j = 1,...,k$, we consider the character $\varphi_j = \xi_1 \times ... \times \xi_n \in \operatorname{Irr}(M)$ such that $\xi_i=\delta_i$ if $i \in \pi_j$ and $\xi_i=1_{S_i}$ otherwise. Notice that, since $U_i \cap U_j = 1$ if $i \neq j$, $\prod_{j=1}^k \varphi_j = \delta \in \operatorname{Irr}(M)$.
Now, for some $j=1,...,k$ and for some $i = 1,...,n$ such that $S_i \leq U_j$, consider $\delta_i \in \operatorname{Irr}(S_i)$ and notice that it has a (possibly rational) extension to $\operatorname{N}_{G}(S_i)$, since the corresponding character $\hat{\delta_i} \in \operatorname{Irr}(S_i\operatorname{C}_{G}(S_i)/\operatorname{C}_{G}(S_i))$ has a (possibly rational) extension to $\operatorname{N}_{G}(S_i)/\operatorname{C}_{G}(S_i)$, which is isomorphic to a subgroup of $\operatorname{Aut}(S_i)$ containing $S_i$. Let $\theta \in \operatorname{Irr}(\operatorname{N}_{G}(S_i))$ be such a (possibly rational) extension and let $\chi_j = \theta^{\otimes G}$ as defined in [@Isaacs:Character_correspondence Section 4]. We have that $\chi_j$ is a character of $G$, and that it is rational if $\theta$ is rational. Moreover, by [@Isaacs:Character_correspondence Lemma 4.1], if $x = x_1 \times ... \times x_n \in M$ then $$\chi_j(x) = \prod_{t \in T} \theta(txt^{-1}) = \prod_{t \in T} {\delta_i}^t(x) = \prod_{m \in \pi_j} {\delta_m}(x) = \varphi_j(x)$$ where $T$ is a set of representatives of the right cosets of $\operatorname{N}_{G}(S_i)$ in $G$, since ${\delta_i}^t = \delta_m$ whenever ${S_i}^{t^{\shortminus 1}}=S_m$, and $\{S_m\}_{m \in \pi_j}$ is an orbit under $G$ (notice that, with an abuse of notation, we are identifying $\delta_i$ with $\delta_i 1_{\operatorname{C}_{M}(S_i)} \in \operatorname{Irr}(M)$). It follows that ${\chi_j}_M = \varphi_j$.
Finally, if we consider $\chi = \prod_{i=1}^k \chi_i$, we have that $\chi$ is rational if each $\delta_i$ has a rational extension to $\operatorname{Aut}(S_i)$, and that $\chi_M = \prod_{i=1}^k \varphi_i = \delta \in \operatorname{Irr}(M)$. Hence, $\chi$ is irreducible and it is a (possibly rational) extension of $\delta$ to $G$. ◻
The following proposition is a direct consequence of the previous Lemma [Lemma 10](#lemma:extension){reference-type="ref" reference="lemma:extension"} and of [@Giudici-Liebeck-Praeger-Saxl-Tiep:Arithmetic_results Theorem 2], and we will use it also to prove our Theorem [Theorem 4](#result:rational){reference-type="ref" reference="result:rational"}. We recall that, if $p$ is a prime number, a group $H \leq \operatorname{Sym}_n$ is said to be *$p$-concealed* if $p \mid \ensuremath{\left| H \right|}$ and all the orbits of $H$ on the power set of $\{1,...,n\}$ have size coprime with $p$.
**Proposition 11**. *Let $G$ be a finite group and let $M \lhd G$ be a non-abelian minimal normal subgroup, such that $M=S_1 \times ... \times S_n$, for some non-abelian simple groups $S_i \cong S$, for $i = 1,...,n$. Suppose that $\operatorname{O}^{p'}(G/M)=G/M$ for some prime number $p$. Then either*
- *$G$ has a rational irreducible character of degree divisible by $p$,*
- *$n=1$ and there exists $N \lhd G$ such that $N \cap M = 1$ and $G/N$ is an almost simple group with socle $MN/N$, or*
- *there exists $M \leq N \lhd G$ such that either $G/N \cong \operatorname{Alt}_k$ for some $k \mid n$, $G/N \cong \operatorname{A\Gamma L}_1(8)$ and $p=3$, or $G/N \cong D_{10}$, the dihedral group of order $10$, and $p=2$.*
*Proof.* If $n=1$, then $\operatorname{C}_{G}(M) \lhd G$, $\operatorname{C}_{G}(M) \cap M=1$ and $G/\operatorname{C}_{G}(M)$ is isomorphic to a subgroup of $\operatorname{Aut}(S_1)$ and, thus, it is almost simple. Hence, from now on we assume that $n>1$.
We shall now consider the action of the group $G/M$ on the set $\Lambda = \{S_1, ... , S_n \}$. Let $\tilde{\Omega} \subseteq 2^{\Lambda}$ be a (possibly trivial) partition of $\Lambda$ such that $G/M$ preserves $\tilde{\Omega}$ and acts primitively on its elements. For $\Delta_i \in \tilde{\Omega}$, with $i = 1,...,k=\ensuremath{\left| \tilde{\Omega} \right|}$, define $$U_i = \bigtimes_{S_j \in \Delta_i} S_j.$$ We have that $M = U_1 \times ... \times U_k$ and $G/M$ acts primitively on $\Omega = \{ U_1,...,U_k \}$. Moreover, if we consider $N = \bigcap_{i=1}^k \operatorname{N}_{G}(U_i)$, we have that $G/N$ acts on $\Omega$ both primitively and faithfully, and $p \mid \ensuremath{\left| G/N \right|}$ because $\operatorname{O}^{p'}(G/M)=G/M$.
Now, let $\delta \in \operatorname{Irr}(S)$ be a rational character which extends rationally to $\operatorname{Aut}(S)$. Notice that such character always exists: if $S$ is of Lie type we can use the Steinberg character, while if $S$ is either a sporadic or alternating group we can simply consider the restriction to $S$ of a rational character of $\operatorname{Aut}(S)$ of odd degree. For $\Delta \subseteq \Omega$, we define the character $$\psi_{\Delta} = \delta_1 \times ... \times \delta_n \in \operatorname{Irr}(M)=\operatorname{Irr}(S_1 \times ... \times S_n)$$ such that $\delta_i = \delta$ if $S_i \leq U_j$ for some $U_j \in \Delta$, $\delta_i = 1_{S_i}$ otherwise. Let $T_{\Delta}=I_G(\psi_{\Delta})$ be the inertia subgroup of $\psi_{\Delta}$ in $G$ and notice that, by Lemma [Lemma 10](#lemma:extension){reference-type="ref" reference="lemma:extension"}, $\psi_{\Delta}$ has a rational extension to $T_{\Delta}$.
Now, suppose that $G/N$ is $p$-concealed in its action on $2^{\Omega}$. Then, by [@Giudici-Liebeck-Praeger-Saxl-Tiep:Arithmetic_results Theorem 2], either $G/N \cong \operatorname{Alt}_k$, $G/N \cong \operatorname{AGL}_3(2)$ or $\operatorname{A\Gamma L}_1(8)$ and $p=3$, or $G/N \cong D_{10}$ and $p=2$. Since $\operatorname{AGL}_3(2)$ has a rational character of degree $6$, we are done in this case. Assume now that $G/N$ is not $p$-concealed and, thus, there exists $\Delta \subset \Omega$ such that $p \mid \ensuremath{\left| \Delta^{G/N} \right|} = \ensuremath{\left| G:G_{\Delta} \right|}$, where $G_{\Delta}/N \leq G/N$ is the stabilizer of $\Delta$. Now, let $\psi_{\Delta}$ as in the previous paragraph and notice that $T_{\Delta}=G_{\Delta}$, since $g \in G$ fixes $\psi_{\Delta}$ if and only if ${U_j}^g \in \Delta$ whenever $U_j \in \Delta$. It follows that, if $\hat{\psi} \in \operatorname{Irr}(T_{\Delta})$ is a rational extension of $\psi_{\Delta}$ to $T_{\Delta}$, then $\chi = \hat{\psi}^G \in \operatorname{Irr}(G)$ is rational and $p \mid \chi(1) = \ensuremath{\left| G:T_{\Delta} \right|} \hat{\psi}(1)$. ◻
*Proof of Theorem [Theorem 4](#result:rational){reference-type="ref" reference="result:rational"}.* First, notice that, if $\operatorname{O}^{2'}(G)$ has a rational character of even degree, then so has $G$ (this is a consequence of [@Isaacs Theorem 6.30]). Thus, we can assume that $\operatorname{O}^{2'}(G)=G$.
If $M \lhd G$ is solvable, the theorem follows by induction on $\ensuremath{\left| G \right|}$. Let $M$ be a non-abelian minimal normal subgroup of $G$; it follows from Proposition [Proposition 11](#proposition:rational){reference-type="ref" reference="proposition:rational"} that there exists $N \lhd G$ with $N \cap M = 1$ such that $G/N$ is almost simple, since alternating groups always have a rational irreducible character of even degree (see [@Tiep-TongViet:Odd-degree_rational Theorem C]). By [@Tiep-TongViet:Odd-degree_rational Theorem B], we have that $G/N \cong \operatorname{PSL}_2(3^f)$ or $\operatorname{PGL}_2(3^f)$, for some odd $f \geq 3$, and we can see from [@Steinberg:The_representations_of Table III] that $\operatorname{PGL}_2(3^f)$ has a rational character of degree $3^f - 1$, which is even. Hence, $G/N \cong M \cong \operatorname{PSL}_2(3^f)$ and $G=N \times M$. Then the theorem follows by induction on $\ensuremath{\left| G \right|}$. ◻
The following lemma is a consequence of [@Isaacs:Finite_Group_Theory Problem 3D.4] and it was possibly already known. Notice that the lemma remains true, with an almost identical proof, also if we replace $p$ with a set of primes $\pi$.
**Lemma 12**. *Let $G$ be a finite group such that $\operatorname{O}^{p}(G)=G$, for some prime number $p$, and suppose that $G$ acts by conjugation on a $p$-group $P$. If $G$ acts trivially on $P/\Phi(P)$, then $G$ centralizes $P$.*
*Proof.* Let $C = \operatorname{C}_{G}(P)$ and notice that $C \lhd G$. Suppose $C \lneq G$, then there exists $xC \in G/C$ of prime order $r \neq p$. Hence, there exists $x \in G$ of order a power of $r$ and such that $x \not\in C$. However, $\langle x \rangle$ is a group of order coprime with $\ensuremath{\left| P \right|}$ acting trivially on $P/\Phi(P)$ and it follows from [@Isaacs:Finite_Group_Theory Problem 3D.4] that $\langle x \rangle \leq C$, a contradiction. ◻
Finally, we also need a corollary of Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"} and Proposition [Proposition 7](#proposition:PSL3){reference-type="ref" reference="proposition:PSL3"}, to deal with the order of the *Schur multiplier* $M(S)$ of a simple group of Lie type $S$ with no $\sigma$-invariant characters of degree divisible by $p$. See [@Isaacs Definition 11.12] for the definition of Schur multiplier.
**Corollary 13**. *Let $p > 2$ be a prime number, let $S$ be a simple group of Lie type of order divisible by $p$ and let $\sigma \in \operatorname{Gal}(\mathbb{Q}_{\ensuremath{\left| S \right|}}/\mathbb{Q})$ of order $p$. If $S$ has no $\sigma$-invariant irreducible characters of degree divisible by $p$, then $p \nmid \ensuremath{\left| M(S) \right|}$.*
*Proof.* By Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"} and Proposition [Proposition 7](#proposition:PSL3){reference-type="ref" reference="proposition:PSL3"}, either $S=\operatorname{PSL}_n(q)$ or $\operatorname{PSU}_n(q)$ with $p>n$, or $S \in \{\operatorname{PSL}_4(q), \operatorname{PSU}_4(q), {}^{2}\!B_2(q^2), {{\operatorname{PSO}}}_{2n}^+(q), {{\operatorname{PSO}}}_{2n}^-(q)\}$, for some positive integer $n$ and some prime power $q$. In the latter case, we have from the Classification of finite simple groups that $M(S)$ is a $2$-group, and, thus, $p \nmid \ensuremath{\left| M(S) \right|}$, with the only exception of $S=\operatorname{PSU}_4(3)$, with $\ensuremath{\left| M(S) \right|}=36$, which, however, has a rational character of degree divisible by $3$. In the former case, we have that, with a few exceptions, $\ensuremath{\left| M(S) \right|}$ divides $n$ and, therefore, $p \nmid \ensuremath{\left| M(S) \right|}$ also in this case. If these exceptions occur, the order of the Schur multiplier is equal to $2$, unless $S=\operatorname{PSL}_2(9)$, with $\ensuremath{\left| M(S) \right|}=6$, $S=\operatorname{PSL}_3(4)$, with $\ensuremath{\left| M(S) \right|}=48$, $S=\operatorname{PSU}_4(3)$, which we already treated, or $S=\operatorname{PSU}_6(2)$, with $\ensuremath{\left| M(S) \right|}=12$. In all these cases, our corollary holds unless $p=3$ and, thus, $n=2$. Hence, the only possible exception is when $S = \operatorname{PSL}_2(9)$, and the Steinberg character is a rational irreducible character of $S$ of degree divisible by $3$, contradicting our hypothesis. ◻
*Proof of Theorem [Theorem 1](#result:main){reference-type="ref" reference="result:main"}.* Assume first that $p=2$. We only need to prove that $G$ is solvable, since then the thesis follows from [@Grittini:Degrees_of_irreducible_fixed_p-solvable Theorem A]. Let $M \lhd G$ be minimal such that $G/M$ is solvable and let $M/L$ be a chief factor of $G$; by replacing $G$ with $G/L$ if necessary, we can assume that $L=1$ and $M = S_1 \times ... \times S_n$ is a minimal normal subgroup, with $S_i \cong S$ for some non-abelian simple group $S$ and for each $i=1,...,n$. Since $G$ does not have any rational character of even degree, it follows from [@Tiep-TongViet:Odd-degree_rational Theorem B] that $S \cong \operatorname{PSL}_2(3^f)$ for some odd integer $f \geq 3$.
Let $q=3^f$ and notice that $q-1=2m$ for some odd integer $m$. Since $\sigma$ has order $2$, either it fixes all elements of $\mathbb{Q}_m$ or it acts on it as the complex conjugation. If we now consider the character table of $\operatorname{PGL}_2(q)$ (see [@Steinberg:The_representations_of Table III]) and we compare it with the one of $\operatorname{PSL}_2(q)$ (see [@Dornhoff:Group_Representation_TheoryA Theorem 38.1]), we can see that $\operatorname{PGL}_2(q)$ has $(q+1)/2$ real-valued characters of even degree having values in $\mathbb{Q}_m$, which are, thus, fixed by $\sigma$, and $(q-1)/2$ of them restrict irreducibly to $\operatorname{PSL}_2(q)$. If $S \cong \operatorname{PSL}_2(3^f)$ with $f$ odd, this observation implies that $S$ has an irreducible character $\delta \in \operatorname{Irr}(S)$ of even degree with a $\sigma$-invariant extension to a subgroup $S < H < \operatorname{Aut}(S)$ such that $H \cong \operatorname{PGL}_2(3^f)$. Since $\ensuremath{\left| \operatorname{Aut}(S)/H \right|} = f$ is odd, $\delta$ has a $\sigma$-invariant extension to its inertia subgroup in $\operatorname{Aut}(S)$.
Consider now the character $\varphi=\delta \times 1_{S_2} \times ... \times 1_{S_n} \in \operatorname{Irr}(M)$, with $\delta$ as in the previous paragraph, and let $T=I_G(\varphi)$ be its inertia subgroup; we have that $T/\operatorname{C}_{G}(S_1)$ is the inertia subgroup in $\operatorname{N}_{G}(S_1)/\operatorname{C}_{G}(S_1)$ of $\hat{\varphi} \in \operatorname{Irr}(S_1\operatorname{C}_{G}(S_1)/\operatorname{C}_{G}(S_1))$, corresponding to $\varphi$, and the $\sigma$-invariant extension $\hat{\theta} \in \operatorname{Irr}(T/\operatorname{C}_{G}(S_1))$ of $\hat{\varphi}$ corresponds to a $\sigma$-invariant extension $\theta \in \operatorname{Irr}(T)$ of $\varphi$. Since $2 \mid \varphi(1)=\theta(1)$, it follows that $\theta^G \in \operatorname{Irr}(G)$ is a $\sigma$-invariant character of even degree, in contradiction with our hypothesis. Hence, $G$ is solvable if $p=2$ and we are done in this case.
From now on, we can assume that $p>2$. Let $M \lhd G$ be a minimal normal subgroup of $G$, then by induction we have that $\operatorname{O}^{p'}(G/M) = O/M \times K/M$, with $O/M = \operatorname{O}_{p}(G/M)$ and $K/M$ perfect such that $\operatorname{O}^{p'}(K/M)=K/M$, $\operatorname{O}_{p'}(K/M)$ is solvable, and $(K/M)/\operatorname{O}_{p'}(K/M)$ is the direct product of some non-abelian simple groups with no $\sigma$-invariant irreducible characters of degree divisible by $p$.
Suppose that $M$ is a $p$-group and notice that $M \leq \operatorname{Z}(O)$, since $\operatorname{Z}(O) \lhd G$ and $M \cap \operatorname{Z}(O) \neq 1$. Moreover, if $M \nleq \operatorname{O}^{p}(K)$, then $M \cap \operatorname{O}^{p}(K) = 1$ and $K = M \times \operatorname{O}^{p}(K)$ and we are done in this case. Hence, we can assume that $\operatorname{O}^{p}(K) = K$.
Suppose $M \leq \Phi(O)$, then we have that $K/M$ centralizes $O/\Phi(O)$ and, by Lemma [Lemma 12](#lemma:frattini_quotient){reference-type="ref" reference="lemma:frattini_quotient"}, it follows that $K/M$ centralizes $O$ and, thus, $M \leq \operatorname{Z}(K)$. Let $M \leq \tilde{H} \lhd G$ such that $\tilde{H}/M = \operatorname{O}_{p'}(K/M)$ and let $H$ be a $p'$-complement for $M$ in $\tilde{H}$, then $H = \operatorname{O}_{p'}(\tilde{H}) \lhd G$ and, by eventually replacing $G$ with $G/H$, we can assume that $\tilde{H}=M$ and $K/M$ is the direct product of some non-abelian simple groups of Lie type of order divisible by $p$. Let $S$ be any of these non-abelian simple groups, we have that $S$ has no $\sigma$-invariant irreducible characters of degree divisible by $p$ and, thus, it follows from Corollary [Corollary 13](#corollary:Schur_multiplier){reference-type="ref" reference="corollary:Schur_multiplier"} that $p \nmid \ensuremath{\left| M(S) \right|}$, where $M(S)$ is the Schur multiplier of $S$. Hence, we have by [@Isaacs Corollary 11.20] that $M$ has a normal complement in $\tilde{K}$ for every $M < \tilde{K} \lhd K$ such that $\tilde{K}/M$ is simple (namely, $\tilde{K} = M \times \tilde{K}'$) and, thus, $M$ has a normal complement $C$ in $K$. Moreover, $C=\operatorname{O}^{p}(K)$ and, therefore, $C$ is normal in $G$ and we are done in this case, since we have that $O \times K = O \times C$, with $C \cong K/M$.
Assume now that $M \cap \Phi(O) = 1$, then we have that $O = \tilde{O} \times M$ for some $\tilde{O} \lhd G$ and, by eventually replacing $G$ with $G/\tilde{O}$, we can assume that $O=M$. Let $C = \operatorname{C}_{K}(M) \lhd G$; by proceeding as in the previous paragraph we can prove that $M$ has a complement $\tilde{C}$ in $C$ such that $\tilde{C} \lhd G$. Hence, if $C=K$ we are done. If $C < K$, then by replacing $G$ with $G/\tilde{C}$ we can assume that $K/M$ is isomorphic to a subgroup of $\operatorname{Aut}(M) = \operatorname{GL}_k(p)$, for some $k$ such that $\ensuremath{\left| M \right|}=p^k$. In particular, we have that $K$ is isomorphic to a subgroup of the affine group $\operatorname{AGL}_k(p)$ and, thus, $M$ is complemented in $K$. Now, let $\lambda \in \operatorname{Irr}(M)$ of order $p$ and let $T=I_K(\lambda)$ be its inertia subgroup; since $M$ has a complement in $T$, $\lambda$ has an extension $\hat{\lambda}$ to $T$ of order $p$ (see, for instance, [@Grittini:p-length_character_degrees Lemma 3.4]), which is $\sigma$-invariant, since it has values in $\mathbb{Q}_p$. It follows that, if $p \mid \ensuremath{\left| K:T \right|}$, then $\hat{\lambda}^K \in \operatorname{Irr}(K)$ is a $\sigma$-invariant character of degree divisible by $p$ and we get a contradiction. Hence, we must assume that the action of $K/M$ on $V=\operatorname{Irr}(M)$ is $p$-exceptional, according to the definition in [@Giudici-Liebeck-Praeger-Saxl-Tiep:Arithmetic_results]. Moreover, by eventually replacing $K$ with $K/\tilde{M}$ for some $\tilde{M} < M$ such that $\tilde{M} \lhd K$, we can assume that $M$ is normal minimal in $K$ and, thus, $K/M$ acts irreducibly on $V$. Suppose first that the action is imprimitive and let $V = V_1 \oplus ... \oplus V_m$ be any imprimitive decomposition of $V$. By [@Giudici-Liebeck-Praeger-Saxl-Tiep:Arithmetic_results Theorem 3] and [@Giudici-Liebeck-Praeger-Saxl-Tiep:Arithmetic_results Theorem 2], and since $K$ is perfect, we have that $K/M$ has a chief factor $K/N$ isomorphic to either $\operatorname{Alt}_m$ or $\operatorname{PSL}_3(2)$ and, in either cases, it follows that $K$ has a $\sigma$-invariant character of degree divisible by $p$. Suppose now that $K/M$ acts on $V$ primitively, then by [@Giudici-Liebeck-Praeger-Saxl-Tiep:Arithmetic_results Theorem 1] either $K/M$ is transitive on $V \smallsetminus \{1_M\}$ or it has a composition factor isomorphic to either $\operatorname{PSL}_2(5)$, $\operatorname{PSL}_2(11)$, $\operatorname{M}_{11}$, or $\operatorname{M}_{23}$. We can verify directly that each group appearing in the latter case has a $\sigma$-invariant irreducible character of degree divisible by $p$, for every choice of $\sigma$ of order $p$. Hence, we can assume that $K/M$ is transitive on $V \smallsetminus \{1_M\}$, and it follows from Hering's Theorem (see [@Liebeck:Affine_permutation_groups Appendix 1]) that, since $K$ is perfect and $p>2$, either
(i) $K/M$ has a normal subgroup isomorphic to $\operatorname{SL}_a(q)$ with $p^k=q^a$, to $\operatorname{Sp}_{2a}(q)$ with $p^k=q^{2a}$, or to $\operatorname{SL}_2(5)$;
(ii) $K/M$ is isomorphic to a subgroup of $\operatorname{GL}_2(p)$;
(iii) there exists $R \lhd K$ such that $K/R \cong \operatorname{Alt}_5$;
(iv) $K/M \cong \operatorname{SL}_2(13)$ and $p=3$.
If (i) or (iii) hold, then $K$ has a chief factor $K/L$ isomorphic either to $\operatorname{PSL}_a(q)$ or $\operatorname{Sp}_{2a}(q)$, and the Steinberg character of such groups is a rational character of degree a power of $p$, or to $\operatorname{PSL}_2(5)$ or $\operatorname{Alt}_5$, which have a $\sigma$-invariant character of degree divisible by $p$ for every prime $p \mid \ensuremath{\left| K/L \right|}$ and every $\sigma$ of odd order. The same is true also in case (ii), since we can see from [@Huppert:Endliche_Gruppen II.8.27 Hauptsatz] that the only non-solvable subgroups of $\operatorname{GL}_2(p)$ are $\operatorname{SL}_2(p)$ and possibly $\operatorname{Alt}_5$. Finally, if (iv) holds, then $K/M$ has exactly two characters of degree $6$. Hence, our hypothesis is contradicted in each of these cases.
From now on, we can assume that $\operatorname{O}_{p}(G)=1$. Suppose that $M$ is an abelian $p'$-group and let $P < G$ be a Sylow $p$-subgroup of $G$; then it follows from [@Grittini:Degrees_of_irreducible_fixed_p-solvable Theorem B] that $1 < \operatorname{N}_{G}(P) \cap M = \operatorname{C}_{M}(P) \leq \operatorname{C}_{M}(O) = \operatorname{Z}(O) \cap M$. Since $\operatorname{Z}(O) \lhd G$ and $M$ is minimal normal in $G$, we have that $M \leq \operatorname{Z}(O)$ and it follows that $O = \operatorname{O}_{p}(O) \times M$ and we are done.
Finally, suppose that $M = S_1 \times ... \times S_n$ is the direct product of some isomorphic non-abelian simple groups. Let $S \lhd M$ be any of such non-abelian simple groups and notice that, if $p \mid \ensuremath{\left| G : \operatorname{N}_{G}(S) \right|}$, then we can always find a character $\varphi\in \operatorname{Irr}(S)$ such that $\varphi 1_{\operatorname{C}_{M}(S)} \in \operatorname{Irr}(M)$ has a rational extension $\hat{\varphi}$ to $\operatorname{N}_{G}(S)=I_G(\varphi 1_{\operatorname{C}_{M}(S)})$, and ${\hat{\varphi}}^G$ is a rational irreducible character of degree divisible by $p$. Suppose $p \mid \ensuremath{\left| \operatorname{N}_{G}(S) : \operatorname{C}_{G}(S)S \right|}$, then by Theorem [Theorem 9](#theorem:almost_simple){reference-type="ref" reference="theorem:almost_simple"} there exist an irreducible character $\varphi$ of $S$ and an irreducible character $\hat{\varphi}$ of $I_G(\varphi 1_{\operatorname{C}_{M}(S)}) \leq \operatorname{N}_{G}(S)$, lying over $\varphi 1_{\operatorname{C}_{M}(S)}$, such that $\hat{\varphi}^{\operatorname{N}_{G}(S)}$ is $\sigma$-invariant and of degree divisible by $p$, and it follows that $\hat{\varphi}^G = (\hat{\varphi}^{\operatorname{N}_{G}(S)})^G$ is a $\sigma$-invariant irreducible character of degree divisible by $p$. Therefore, $\operatorname{C}_{G}(S)S/M$ contains a Sylow $p$-subgroup of $G/M$ for every choice of $S \lhd M$ simple, and it follows in particular that $O/M$ centralizes each $S_i$, for $i=1,...,n$, and, thus, $O = \operatorname{C}_{O}(M)M = \operatorname{C}_{O}(M) \times M$, and $\operatorname{C}_{O}(M) = \operatorname{O}_{p}(O) \lhd G$. Since we are assuming that $\operatorname{O}_{p}(G)=1$, we have that $O=M$. It follows that, by Proposition [Proposition 8](#proposition:invariant_in_subgroup){reference-type="ref" reference="proposition:invariant_in_subgroup"}, $K = \operatorname{O}^{p'}(G)$ does not have any $\sigma$-invariant irreducible character of degree divisible by $p$ and, working by induction, we can assume that $K=G$. Since alternating groups of order divisible by $p$ always have a $\sigma$-invariant character of degree divisible by $p$, it follows from Proposition [Proposition 11](#proposition:rational){reference-type="ref" reference="proposition:rational"}, and from $K$ being perfect, that $K = N \times M$ for some $N \lhd K$ and, by induction, $K/\operatorname{O}_{p'}(N) = N/\operatorname{O}_{p'}(N) \times M\operatorname{O}_{p'}(N)/\operatorname{O}_{p'}(N)$ is the direct product of some non-abelian simple groups with no $\sigma$-invariant irreducible characters of degree divisible by $p$. ◻
## Acknowledgements {#acknowledgements .unnumbered}
The author is grateful to Gunter Malle for providing Theorem [Theorem 3](#result:simple_rational){reference-type="ref" reference="result:simple_rational"} and the contents of Section [2](#section:simple_groups){reference-type="ref" reference="section:simple_groups"}, and in general for his precious advice and support in the research on the topic of the paper.
[^1]: The paper was written while the author was visiting the Department of Mathematics in the RPTU, in Kaiserslautern. The author is grateful for the generous hospitality and the pleasant environment he found in the University.
---
abstract: |
We consider the Tutte polynomial of three classes of greedoids: those arising from rooted graphs, rooted digraphs and binary matrices. We establish the computational complexity of evaluating each of these polynomials at each fixed rational point $(x,y)$. In each case we show that evaluation is $\#$P-hard except for a small number of exceptional cases when there is a polynomial time algorithm. In the binary case, establishing $\#$P-hardness along one line relies on Vertigan's unpublished result on the complexity of counting bases of a matroid. For completeness, we include an appendix providing a proof of this result.
**Mathematics Subject Classifications:** 05C31, 68Q17, 05B35
author:
- |
Christopher Knapp\
Brunel University\
Kingston Lane\
Uxbridge, UK\
`[email protected] `
- |
Steven Noble\
Birkbeck, University of London\
Malet Street\
London, UK\
`[email protected]`
bibliography:
- difficultpoints.bib
title: The Complexity of the Greedoid Tutte Polynomial
---
# Introduction
Tutte's eponymous polynomial is perhaps the most widely studied two-variable graph and matroid polynomial due to its many specializations, their vast breadth and the richness of the underlying theory. Discussion of the Tutte polynomial and closely related polynomials fills an entire handbook [@zbMATH07553843]. Tutte first introduced the Tutte polynomial of a graph, as the *dichromate* in [@zbMATH03087501]. It is closely related to Whitney's rank generating function [@MR1503085] which Tutte extended from graphs to matroids in his PhD thesis [@Tutte-thesis]. Crapo [@MR262095] later extended the definition of the Tutte polynomial to matroids. See Farr [@Farr-Chapter] for more on the early history of the Tutte polynomial.
The simplest definition of the Tutte polynomial $T(G;x,y)$ of a graph $G$ is probably in terms of the rank function $r$. Given a graph $G$ and a set $A$ of its edges, we have $r(A)=|V(G)|-k(G|A)$, where $k(G|A)$ is the number of connected components of the graph obtained from $G$ by deleting the edges in $E(G)-A$ (and keeping all the vertices).
**Definition 1**. For a graph $G$ with edge set $E$, we have $$T(G;x,y) = \sum_{A\subseteq E} (x-1)^{r(E)-r(A)} (y-1)^{|A|-r(A)}.$$
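
This subset expansion can be evaluated directly on small graphs, which is a convenient (if exponential-time) way to check the evaluations listed below. The following sketch assumes a graph given by a vertex count and an edge list, with loops and parallel edges allowed; the function names are ours and purely illustrative.

```python
from fractions import Fraction
from itertools import combinations

def rank(n_vertices, edges):
    """r(A) = |V(G)| - k(G|A): all vertices are kept, only the edges of A are used."""
    parent = list(range(n_vertices))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    components = n_vertices
    for (u, v) in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return n_vertices - components

def tutte(n_vertices, edges, x, y):
    """Evaluate T(G;x,y) by summing (x-1)^(r(E)-r(A)) (y-1)^(|A|-r(A)) over all A."""
    edges = list(edges)
    r_E = rank(n_vertices, edges)
    total = Fraction(0)
    for size in range(len(edges) + 1):
        for A in combinations(edges, size):
            r_A = rank(n_vertices, A)
            total += (Fraction(x) - 1) ** (r_E - r_A) * (Fraction(y) - 1) ** (size - r_A)
    return total

# Triangle K_3, where T(K_3;x,y) = x^2 + x + y: 3 spanning trees, 7 spanning forests.
K3_edges = [(0, 1), (1, 2), (0, 2)]
assert tutte(3, K3_edges, 1, 1) == 3
assert tutte(3, K3_edges, 2, 1) == 7
```
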
By making appropriate substitutions for $x$ and $y$, a huge number of graph invariants with connections to diverse areas of mathematics may be obtained. We summarise just a few of these evaluations that are particularly relevant later in this paper. A *spanning subgraph* of a graph $G$ is a subgraph including all the vertices of $G$.
- $T(G;1,1)$ is the number of maximal spanning forests of $G$. (If $G$ is connected, then this is the number of spanning trees.)
- $T(G;2,1)$ is the number of spanning forests of $G$.
- $T(G;1,2)$ is the number of spanning subgraphs of $G$ having the same number of components as $G$.
- $T(G;1,0)$ is the number of acyclic orientations of $G$ with one predefined source vertex per component of $G$ [@GreeneZaslavsky].
Other evaluations (up to a simple pre-factor) include the reliability polynomial, chromatic polynomial and partition function of the $q$-state Potts model. For a full list of evaluations see [@BrylawskiOxley; @Ellis-MonaghanMerino; @zbMATH07553843].
Given a graph polynomial of this type, a natural question is to determine its complexity, that is to classify the points $(a,b)$ according to whether there is a polynomial time algorithm to evaluate the polynomial at $(a,b)$ or whether the evaluation is computationally intractable. Because of the inherent difficulties of measuring the complexity of algorithms involving arbitrary real numbers, we restrict $a$ and $b$ to being rational. This question was completely resolved in a groundbreaking paper by Jaeger, Vertigan and Welsh [@JVW]. A stronger result was obtained by Vertigan and Welsh [@zbMATH00563595], who proved the theorem below. For $\alpha$ in $\mathbb{Q}-\{0\}$, let $H_{\alpha}=\{(x,y)\in \mathbb{Q}^2: (x-1)(y-1) = \alpha\}$, and let $H_0^x=\{(1,y): y \in \mathbb{Q}\}$ and $H_0^y = \{(x,1) : x \in \mathbb{Q}\}$. This family of hyperbolae seems to play a special role in the theory of the Tutte polynomial, both in terms of its evaluations and its complexity.
**Theorem 2** (Vertigan, Welsh). *Evaluating the Tutte polynomial of a bipartite planar graph at any fixed point $(a,b)$ in the rational plane is $\#$P-hard apart from when $(a,b)$ lies on $H_1$ or $H_2$, or when $(a,b)$ equals $(-1,-1)$ or $(1,1)$, when there exists a polynomial-time algorithm.*
Roughly speaking, the proof of the hardness part of this result (at least without the planar bipartite restriction) proceeds as follows. By exploiting a result of Brylawski [@Brylawski], one first shows that for most points $(a,b)$, the existence of a polynomial time algorithm to evaluate $T(G;a,b)$ for every graph $G$ would imply the existence of a polynomial time algorithm to evaluate $T(G;x,y)$ at every point $(x,y)$ in $H_{\alpha}$, where $\alpha=(a-1)(b-1)$. Given a graph $G$, let $G^k$ and $G_k$ denote, respectively, the graph obtained by replacing every edge of $G$ by $k$ parallel edges and the graph obtained by replacing every non-loop of $G$ by a path comprising $k$ edges and every loop by a circuit comprising $k$ edges. The former is known as the *$k$-thickening* of $G$ and the latter as the *$k$-stretch* of $G$. Brylawski gave expressions for the Tutte polynomials of $G^k$ and $G_k$ in terms of the Tutte polynomial of $G$. By varying $k$, one may obtain expressions for $T(G;a_k,b_k)$ at a sequence $\{(a_k,b_k)\}$ of points on $H_{\alpha}$, and then solve for the coefficients of the one-variable polynomial obtained by restricting the domain of $T$ to $H_\alpha$. There remain several special cases because the sequence $\{(a_k,b_k)\}$ sometimes contains only a small number of distinct points. The second step proceeds by determining a $\#$P-hard point on each curve $H_{\alpha}$. Many of these come from evaluations of the chromatic polynomial.
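
The interpolation at the end of this argument is elementary: the restriction of $T$ to $H_\alpha$ is a one-variable polynomial, so its coefficients can be recovered exactly from its values at sufficiently many distinct rational points (one more than its degree). The sketch below shows only this generic step, Lagrange interpolation over the rationals; it is not specific to the Tutte polynomial and all names are ours.

```python
from fractions import Fraction

def interpolate(points):
    """Given [(t_0, p(t_0)), ..., (t_d, p(t_d))] with distinct t_i, return the
    coefficients (constant term first) of the unique polynomial of degree <= d."""
    d = len(points) - 1
    coeffs = [Fraction(0)] * (d + 1)
    for i, (t_i, v_i) in enumerate(points):
        # Build the Lagrange basis polynomial L_i(t) = prod_{j != i} (t - t_j)/(t_i - t_j).
        basis = [Fraction(1)]
        denominator = Fraction(1)
        for j, (t_j, _) in enumerate(points):
            if j == i:
                continue
            # Multiply the basis polynomial by (t - t_j).
            longer = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):
                longer[k] -= c * t_j
                longer[k + 1] += c
            basis = longer
            denominator *= Fraction(t_i) - t_j
        for k, c in enumerate(basis):
            coeffs[k] += Fraction(v_i) * c / denominator
    return coeffs

# Recover p(t) = 2t^2 - 3t + 5 from its values at t = 0, 1, 2.
samples = [(t, 2 * t * t - 3 * t + 5) for t in (0, 1, 2)]
assert interpolate(samples) == [5, -3, 2]
```

In the reduction sketched above, the sample values would be supplied by the assumed oracle for $T(\,\cdot\,;a,b)$ applied to the thickenings or stretches of $G$.
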
The Tutte polynomial is essentially a generating function for the number of subsets of the edges of a graph according to their rank and size. Following the work of Jaeger, Vertigan and Welsh, many authors have established corresponding results for a variety of graph polynomials defined in a similar way but using different notions of rank. These include the cover polynomial [@zbMATH06072038], the Bollobás--Riordan polynomial [@zbMATH05770222], the interlace polynomial [@zbMATH05623498], the rank generating function of a graphic $2$-polymatroid [@zbMATH05039067] and the Tutte polynomial of a bicircular matroid [@zbMATH05039064]. In each case, the proof techniques have some similarities: the bulk of the work is done using a graph operation analogous to the thickening, but there are considerable technical difficulties required to deal with the special cases and to complete the proof. These results provide evidence for Makowsky's Difficult Point Conjecture which states that for an $n$-variable graph polynomial $P$ that may be defined in monadic second order logic, there is a set $S$ of points with the following properties:
1. For every $\mathbf x\in S$, there is a polynomial time algorithm to evaluate $P(\mathbf x)$;
2. For every $\mathbf x\notin S$, it is $\#$P-hard to evaluate $P(\mathbf x)$;
3. The set $S$ is the finite union of algebraic sets in $\mathbb{C}^n$ each having dimension strictly less than $n$.
For full details see [@zbMATH05552168].
In this paper we prove results analogous to Theorem [Theorem 2](#VW){reference-type="ref" reference="VW"} for two graph polynomials, the Tutte polynomials of a rooted graph and a rooted digraph, and a polynomial of binary matrices, the Tutte polynomial of a binary greedoid. Each of these polynomials is a special case of the Tutte polynomial of a greedoid introduced by Gordon and McMahon [@GordonMcMahon] and the proofs have considerable commonality. (All the necessary definitions are provided in the next sections.) The graph polynomials are the analogue of the Tutte polynomial for rooted graphs and rooted digraphs, and our results provide further evidence for Makowsky's Difficult Point Conjecture.
An overview of the paper is as follows. In Section [2](#Rooted Graphs, Rooted Digraphs and Greedoids){reference-type="ref" reference="Rooted Graphs, Rooted Digraphs and Greedoids"} we provide necessary background on rooted graphs, rooted digraphs, greedoids and computational complexity. In the following section we describe the Tutte polynomial of a greedoid and list some of its evaluations for each of the three classes of greedoid that we work with. Within our hardness proofs we require an analogue of the thickening operation and various other constructions which can be defined for arbitrary greedoids, and may be of independent interest. We describe these in Section [4](#sec:constructions){reference-type="ref" reference="sec:constructions"} and provide analogues of Brylawski's results [@Brylawski] expressing the Tutte polynomial for these constructions in terms of the Tutte polynomials of their constituent greedoids.
In Section [5](#section rooted graphs hardness){reference-type="ref" reference="section rooted graphs hardness"}, we prove the following result completely determining the complexity of evaluating the Tutte polynomial of a rooted graph at a rational point.
**Theorem 3**. *Evaluating the Tutte polynomial of a connected, rooted, planar, bipartite graph at any fixed point $(a,b)$ in the rational $xy$-plane is $\#$P-hard apart from when $(a,b)$ equals $(1,1)$ or when $(a,b)$ lies on $H_1$.*
*There are polynomial time algorithms to evaluate the Tutte polynomial of a rooted graph at $(1,1)$ and at any point lying on $H_1$.*
In Section [6](#sec:rooted digraph){reference-type="ref" reference="sec:rooted digraph"}, we prove the equivalent result for the Tutte polynomial of a rooted digraph.
**Theorem 4**. *Evaluating the Tutte polynomial of a root-connected, rooted digraph at any fixed point $(a,b)$ in the rational $xy$-plane is $\#$P-hard apart from when $(a,b)$ equals $(1,1)$, when $(a,b)$ lies on $H_1$, or when $b=0$.*
*There are polynomial time algorithms to evaluate the Tutte polynomial of a rooted digraph at $(1,1)$, at any point lying on $H_1$ and at any point $(a,0)$.*
We then determine the complexity of evaluating the Tutte polynomial of a binary greedoid.
**Theorem 5**. *Evaluating the Tutte polynomial of a binary greedoid at any fixed point $(a,b)$ in the rational $xy$-plane is $\#$P-hard apart from when $(a,b)$ lies on $H_1$.*
*There is a polynomial time algorithm to evaluate the Tutte polynomial of a binary greedoid at any point lying on $H_1$.*
One special case of this theorem depends on a special case of an unpublished result of Vertigan, who proved that the problem of counting bases of a binary matroid is $\#$P-complete. For completeness, in Appendix [8](#sec:appendix){reference-type="ref" reference="sec:appendix"}, we provide a proof of this result for all fields.
# Preliminaries {#Rooted Graphs, Rooted Digraphs and Greedoids}
## Rooted graphs and digraphs
All our graphs are allowed to have loops and multiple edges. A *rooted graph* is a graph with a distinguished vertex called the *root*. Most of the graphs we work with will be rooted but occasionally we will work with a graph without a root. For complete clarity, we will sometimes refer to such graphs as *unrooted graphs*. We denote a rooted graph $G$ with vertex set $V(G)$, edge set $E(G)$ and root $r(G)$ by a triple $(V(G),E(G),r(G))$. We omit the arguments when there is no fear of ambiguity. Many of the standard definitions for graphs can be applied to rooted graphs in the natural way. Two rooted graphs $(V,E,r)$ and $(V',E',r')$ are *isomorphic* if the unrooted graphs $(V,E)$ and $(V',E')$ are isomorphic via an isomorphism mapping $r$ to $r'$. For a subset $A$ of $E$, the *rooted spanning subgraph* $G|A$ is formed from $G$ by deleting all the edges in $E-A$ (and keeping all the vertices). The *root component* of $G$ is the connected component of $G$ containing the root. A set $A$ of edges of $G$ is *feasible* if the root component of $G|A$ is a tree and contains every edge of $A$. We define the *rank* $\rho_G(A)$ of $A$ to be $$\rho_G(A) =\max\{|A'|: A'\subseteq A, A' \text{ is feasible}\}.$$ We omit the subscript when the context is clear. We let $\rho(G) = \rho(E)$. Observe that a set $A$ of edges is feasible if and only if $\rho(A) = |A|$. A feasible set is a *basis* if $\rho(A)=\rho(G)$. So $A$ is a basis of $G$ if and only if it is the edge set of a spanning tree of the root component of $G$.
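To make these definitions concrete, the following small Python sketch, which is ours and not drawn from the references, computes the feasible sets and the rank function of the branching greedoid of a rooted graph directly from the definitions above; the helper names are our own. Edges are stored as pairs of endvertices, so for simplicity the sketch does not handle loops or multiple edges.

```python
from itertools import combinations

def root_component(edges, root):
    """Vertices reachable from the root using only the given edges."""
    reach, frontier = {root}, [root]
    while frontier:
        v = frontier.pop()
        for (a, b) in edges:
            for u, w in ((a, b), (b, a)):
                if u == v and w not in reach:
                    reach.add(w)
                    frontier.append(w)
    return reach

def is_feasible(root, A):
    """A is feasible iff the root component of G|A is a tree containing every edge of A."""
    comp = root_component(A, root)
    in_comp = [e for e in A if e[0] in comp and e[1] in comp]
    return len(in_comp) == len(A) and len(A) == len(comp) - 1

def rank(root, A):
    """rho(A) is the size of a largest feasible subset of A."""
    return max(len(B) for r in range(len(A) + 1)
               for B in combinations(A, r) if is_feasible(root, list(B)))

# Rooted path with two edges: root r, then v1, then v2
E, r = [('r', 'v1'), ('v1', 'v2')], 'r'
print([A for k in range(3) for A in combinations(E, k) if is_feasible(r, list(A))])
print(rank(r, E))   # rho(G) = 2
```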
A *rooted digraph* is a digraph with a distinguished vertex called the *root*. We denote a rooted digraph $D$ with vertex set $V(D)$, edge set $E(D)$ and root $r(D)$ by a triple $(V(D),E(D),r(D))$. Once again we omit the arguments when there is no chance of ambiguity. Two rooted digraphs $(V,E,r)$ and $(V',E',r')$ are *isomorphic* if the unrooted digraphs $(V,E)$ and $(V',E')$ are isomorphic via an isomorphism mapping $r$ to $r'$. We say that the *underlying rooted graph* of a rooted digraph is the rooted graph we get when we remove all the directions on the edges. For a subset $A$ of $E$, the *rooted spanning subdigraph* $D|A$ is formed from $D$ by deleting all the edges in $E-A$. The *root component* of $D$ is formed by deleting every vertex $v$ to which there is no directed path from $r$ in $D$, together with its incident edges. The rooted digraph is *root-connected* if there is a directed path from the root to every other vertex. The rooted digraph $D$ is an *arborescence rooted at $r$* if $D$ is root-connected and its underlying rooted graph is a tree. A set $A$ of edges of $D$ is *feasible* if the root component of $D|A$ is an arborescence rooted at $r$ and contains every edge of $A$. The *rank* $\rho_D(A)$ of $A$ is defined by $$\rho_D(A) = \max\{|A'|: A' \subseteq A, A' \text{ is feasible}\}.$$ We can omit the subscript when the context is clear. We let $\rho(D) = \rho(E)$. A set $A$ of edges is feasible if and only if $\rho(A) = |A|$. A feasible set is a *basis* if $\rho(A) = \rho(D)$. So $A$ is a basis of $D$ if and only if it is the edge set of an arborescence rooted at $r$ that includes every vertex of the root component of $D$.
## Greedoids
Greedoids are generalizations of matroids, first introduced by Korte and Lovász [@KorteLovasz] in 1981. The aim was to generalize the characterization of matroids as the hereditary set systems on which the greedy algorithm is guaranteed to determine an optimal member of the set system with respect to an arbitrary weighting. Most of the information about greedoids which we summarise below can be found in [@bjorner+zieglerzbMATH00067326] or [@kortebookzbMATH04212062].
**Definition 6**. A *greedoid* $\Gamma$ is an ordered pair $(E,\mathcal{F})$ consisting of a finite set $E$ and a non-empty collection $\mathcal{F}$ of subsets of $E$ satisfying the following axioms:
1. $\emptyset \in \mathcal{F}$.
2. [\[ax:G2\]]{#ax:G2 label="ax:G2"} For all $F$ and $F'$ in $\mathcal{F}$ with $|F'|<|F|$ there exists some $x \in F-F'$ such that $F' \cup x \in \mathcal{F}$.
The set $E$ is the *ground set* of $\Gamma$ and the members of $\mathcal{F}$ are the *feasible sets* of $\Gamma$. The axioms are the first and third of the usual axioms specifying a matroid in terms of its independent sets, so clearly every matroid is a greedoid; however, unlike the independent sets of a matroid, the feasible sets of a greedoid need not be closed under taking subsets. The *rank* $\rho_{\Gamma}(A)$ of a subset $A$ of $E$ is given by $$\rho_{\Gamma}(A) = \max\{|A'|: A' \subseteq A, A' \in \mathcal{F}\}$$ and we let $\rho(\Gamma)= \rho_{\Gamma}(E)$. We omit the subscript when the context is clear. Notice that a set $A$ is feasible if and only if $\rho(A) = |A|$. A feasible set is a *basis* if $\rho(A) = \rho(\Gamma)$. We denote the collection of bases of $\Gamma$ by $\mathcal B(\Gamma)$. Axiom [\[ax:G2\]](#ax:G2){reference-type="ref" reference="ax:G2"} implies that every basis has the same cardinality. Note that the rank function determines $\Gamma$ but the collection of bases does not. For example, suppose that a greedoid has ground set $\{1,2\}$ and unique basis $\{1,2\}$. Then its collection of feasible sets could either be $\{\emptyset, \{1\}, \{1,2\}\}$, $\{\emptyset, \{2\}, \{1,2\}\}$ or $\{\emptyset, \{1\}, \{2\}, \{1,2\}\}$.
The rank function of a greedoid can be characterized in a similar way to the rank function of a matroid [@KorteLovasz3].
**Proposition 7**. *The rank function $\rho$ of a greedoid with ground set $E$ takes integer values and satisfies each of the following.*
1. *For every subset $A$ of $E$, $0\leq \rho(A) \leq |A|$;*
2. *[\[ax:GR2\]]{#ax:GR2 label="ax:GR2"} For all subsets $A$ and $B$ of $E$ with $A\subseteq B$, $\rho(A)\leq \rho(B)$;*
3. *For every subset $A$ of $E$, and elements $e$ and $f$, if $\rho(A)=\rho(A\cup e)=\rho(A \cup f)$, then $\rho(A)=\rho(A\cup e\cup f)$.*
*Moreover if $E$ is a finite set and $\rho$ is a function from the subsets of $E$ to the integers, then $\rho$ is the rank function of a greedoid with ground set $E$ if and only if $\rho$ satisfies conditions (GR1)--(GR3) above.*
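As a concrete, entirely elementary illustration of Proposition 7 (ours, not part of the formal development), the following Python sketch checks conditions (GR1)--(GR3) by brute force for a candidate rank function given as a dictionary on the subsets of $E$; the function names are our own.

```python
from itertools import combinations

def subsets(E):
    return [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]

def is_greedoid_rank_function(E, rho):
    """Brute-force check of conditions (GR1)-(GR3) of Proposition 7."""
    for A in subsets(E):
        if not 0 <= rho[A] <= len(A):                              # (GR1)
            return False
        for B in subsets(E):
            if A <= B and rho[A] > rho[B]:                         # (GR2)
                return False
        for e in E:
            for f in E:
                if rho[A | {e}] == rho[A] == rho[A | {f}] and rho[A | {e, f}] != rho[A]:
                    return False                                   # (GR3)
    return True

# rank function of the branching greedoid of the rooted path with two edges
E = {1, 2}
rho = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 0, frozenset({1, 2}): 2}
print(is_greedoid_rank_function(E, rho))   # True
```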
The following lemma is easily proved using induction on $|B|$ and will be useful later.
**Lemma 8**. *Let $(E,\rho)$ be a greedoid specified by its rank function and let $A$ and $B$ be subsets of $E$ such that for all $b\in B$, $\rho (A\cup b)=\rho(A)$. Then $\rho(A\cup B)=\rho(A)$.*
Two greedoids $\Gamma_1 = (E_1, \mathcal{F}_1)$ and $\Gamma_2 = (E_2, \mathcal{F}_2)$ are *isomorphic*, denoted by $\Gamma_1 \cong \Gamma_2$, if there exists a bijection $f: E_1 \rightarrow E_2$ that preserves the feasible sets.
The following two examples of greedoids were introduced in [@zbMATH03871387]. Let $G$ be a rooted graph. Take $\Gamma=(E,\mathcal{F})$ so that $E=E(G)$ and a subset $A$ of $E$ is in $\mathcal{F}$ if and only if the root component of $G|A$ is a tree containing every edge of $A$. Then $\Gamma$ is a greedoid. Any greedoid which is isomorphic to a greedoid arising from a rooted graph in this way is called a *branching greedoid*. The branching greedoid of a rooted graph $G$ is denoted by $\Gamma(G)$.
Similarly suppose we have a rooted digraph $D$ and take $\Gamma=(E,\mathcal{F})$ so that $E={E}(D)$ and a subset $A$ of $E$ is in $\mathcal{F}$ if and only if the root component of $D|A$ is an arborescence rooted at $r$ and contains every edge of $A$. Then $\Gamma$ is a greedoid. Any greedoid which is isomorphic to a greedoid arising from a rooted digraph in this way is called a *directed branching greedoid*. The directed branching greedoid of a rooted digraph $D$ is denoted by $\Gamma(D)$. (There should be no ambiguity with the overload of notation for a branching greedoid and a directed branching greedoid.)
Notice that for both rooted graphs and digraphs, the concepts of feasible set, basis and rank are compatible with their definitions for the associated branching greedoid or directed branching greedoid in the sense that a set $A$ of edges is feasible in a rooted graph $G$ if and only if it is feasible in $\Gamma(G)$, and similarly for the other concepts.
We now define the class of *binary greedoids*. These are a special case of a much broader class, the *Gaussian elimination greedoids*, introduced by Goecke in [@Goecke], motivated by the Gaussian elimination algorithm. Let $M$ be an $m\times n$ binary matrix. It is useful to think of the rows and columns of $M$ as being labelled by the elements of $[m]$ and $[n]$ respectively, where $[n]=\{1,\ldots,n\}$. If $X$ is a subset of $[m]$ and $Y$ is a subset of $[n]$, then $M_{X,Y}$ denotes the matrix obtained from $M$ by deleting all the rows except those with labels in $X$ and all the columns except those with labels in $Y$. Take $\Gamma=([n],\mathcal F)$, so that $$\mathcal{F} = \{A \subseteq [n]: \text{ the submatrix $M_{[|A|],A}$ is non-singular}\}.$$ By convention, the empty matrix is considered to be non-singular. Then $\Gamma$ is a greedoid. Any greedoid which is isomorphic to a greedoid arising from a binary matrix in this way is called a *binary greedoid*. The binary greedoid of a binary matrix $M$ is denoted by $\Gamma(M)$.
**Example 9**. Let $$M = \bordermatrix{~ & 1 & 2 & 3 & 4 \cr
~ & 1 & 0 & 0 & 1 \cr
~ & 1 & 0 & 1 & 0 \cr
~ & 0 & 1 & 1 & 1 \cr}.$$ The binary greedoid $\Gamma(M)$ has ground set $\{1,2,3,4\}$ and feasible sets $$\{\emptyset, \{1\}, \{4\}, \{1,3\}, \{1,4\}, \{3,4\}, \{1,2,3\}, \{1,2,4\},\{2,3,4\}\}.$$
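The list of feasible sets in Example 9 can be checked mechanically. The sketch below (ours) enumerates the feasible sets of $\Gamma(M)$ straight from the definition, testing non-singularity of $M_{[|A|],A}$ over GF$(2)$ by elimination; the helper names are our own.

```python
from itertools import combinations

def nonsingular_gf2(rows):
    """True iff the given square 0/1 matrix is non-singular over GF(2)."""
    m, n = [row[:] for row in rows], len(rows)
    for col in range(n):
        pivot = next((i for i in range(col, n) if m[i][col]), None)
        if pivot is None:
            return False
        m[col], m[pivot] = m[pivot], m[col]
        for i in range(n):
            if i != col and m[i][col]:
                m[i] = [a ^ b for a, b in zip(m[i], m[col])]
    return True

def feasible_sets(M):
    """Feasible sets of Gamma(M): subsets A with M_{[|A|],A} non-singular."""
    feas = []
    for k in range(len(M) + 1):                      # |A| cannot exceed the number of rows
        for A in combinations(range(1, len(M[0]) + 1), k):
            sub = [[M[i][j - 1] for j in A] for i in range(k)]
            if k == 0 or nonsingular_gf2(sub):
                feas.append(set(A))
    return feas

M = [[1, 0, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 1, 1]]
print(feasible_sets(M))   # reproduces the feasible sets listed in Example 9
```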
The following lemma is clear.
**Lemma 10**. *Let $E=[n]$, let $M$ be an $m \times n$ binary matrix with columns labelled by $E$ and let $M'$ be obtained from $M$ by adding row $i$ to row $j$, where $i < j$. Then $\Gamma(M') \cong \Gamma(M)$.*
A consequence of this lemma is that if $\Gamma$ is a binary greedoid, then there is a binary matrix $M$ with linearly independent rows so that $\Gamma=\Gamma(M)$. With this in mind we easily obtain the following result which will be useful later.
**Lemma 11**. *Let $\Gamma$ be a binary greedoid. Then there is a binary matroid $M$ so that $\mathcal B(M)=\mathcal B(\Gamma)$.*
In contrast with the situation in matroids, where every graphic matroid is binary, it is not the case that every branching greedoid is binary. For example, take $G$ to be the star with four vertices in which the central vertex is the root. Then $\Gamma(G)$ is not binary. The same example but with the edges directed away from the root demonstrates that not every directed branching greedoid is binary.
An element of a greedoid is a *loop* if it does not belong to any feasible set. So if $G$ is a rooted graph then an edge $e$ is a loop of $\Gamma(G)$ if and only if it does not lie on any path from the root; when $G$ is connected, this is just a loop in the usual graph-theoretic sense. Similarly, if $D$ is a rooted digraph then an edge $e$ is a loop of $\Gamma(D)$ if and only if it does not lie on any directed path from the root. As the concepts of loops in greedoids and in rooted graphs and digraphs do not completely coincide, we use the term *greedoid loop* whenever there is potential for confusion.
Let $\Gamma$ be a greedoid with ground set $E$ and rank function $\rho$. Elements $e$ and $f$ of $E$ are said to be *parallel* in $\Gamma$ if for all subsets $A$ of $E$, $$\rho(A \cup e) = \rho(A \cup f) = \rho(A \cup e \cup f).$$ As far as we are aware, the following elementary lemma has not been stated before.
**Lemma 12**. *Let $\Gamma$ be a greedoid. Define a relation $\bowtie$ on the ground set of $\Gamma$ by $e\bowtie f$ if $e$ and $f$ are parallel in $\Gamma$. Then $\bowtie$ is an equivalence relation and if $\Gamma$ has at least one loop, then one of the equivalence classes of $\bowtie$ comprises the set of loops.*
*Proof.* The only part of the lemma that is not immediately obvious is that $\bowtie$ is transitive. Let $\rho$ be the rank function of $\Gamma$ and $e$, $f$ and $g$ be elements of $\Gamma$, so that $e\bowtie f$ and $f\bowtie g$. Then for any subset $A$ of elements of $\Gamma$, we have $\rho(A\cup e)=\rho (A\cup f)=\rho(A\cup e \cup f)$ and $\rho(A\cup f)=\rho(A\cup g)=\rho(A\cup f \cup g)$. Thus $\rho(A\cup e)=\rho(A\cup g)$. By applying Lemma [Lemma 8](#lem:rankuseful){reference-type="ref" reference="lem:rankuseful"} to $A\cup f$ and elements $e$ and $g$, we see that $\rho(A\cup e\cup f \cup g) = \rho(A\cup f)$. Thus, by [\[ax:GR2\]](#ax:GR2){reference-type="ref" reference="ax:GR2"}, $\rho(A\cup f) = \rho(A\cup e\cup f\cup g) \geq \rho(A\cup e \cup g) \geq \rho (A\cup e)$. But as $\rho(A\cup e)=\rho(A\cup f)$, equality must hold throughout, so $\rho(A\cup e\cup g)=\rho(A\cup e)=\rho(A\cup g)$, as required. ◻
## Complexity
We assume some familiarity with computational complexity and refer the reader to one of the standard texts such as [@Garey] or [@Papadimitriou] for more background. Given two computational problems $\pi_1$ and $\pi_2$, we say that $\pi_2$ is *Turing reducible* to $\pi_1$ if there exists a deterministic Turing machine solving $\pi_2$ in polynomial time using an oracle for $\pi_1$, that is, a subroutine returning an answer to an instance of $\pi_1$ in constant time. When $\pi_2$ is Turing reducible to $\pi_1$ we write $\pi_2 \propto_T \pi_1$ and we say that solving problem $\pi_1$ is at least as hard as solving problem $\pi_2$. The relation $\propto_T$ is transitive.
Informally, the class $\#$P is the counting analogue of NP, that is, the class of all counting problems corresponding to decision problems in NP. Slightly more precisely, a problem is in $\#$P if it counts the number of accepting computations or "witnesses" of a problem in NP. Consider the decision problem of determining whether a graph has a proper vertex $3$-colouring. The obvious non-deterministic algorithm for this problem interprets a "witness" as a colouring of the vertices with $3$ colours and verifies that it is a proper colouring. So the corresponding problem in $\#$P would be to determine the number of proper vertex $3$-colourings. A computational problem $\pi$ is said to be *$\#$P-hard* if $\pi' \propto_T \pi$ for all $\pi' \in \#\text{P}$, and *$\#$P-complete* if, in addition, $\pi \in \#\text{P}$. Counting the number of proper vertex $3$-colourings of a graph is an example of a $\#$P-complete problem.
The following lemma is crucial in many of our proofs.
**Lemma 13**. *There is an algorithm which when given a non-singular integer $n \times n$ matrix $A$ and an integer $n$-vector $b$ such that the absolute value of every entry of $A$ and $b$ is at most $2^l$, outputs the vector $x$ so that $Ax=b$, running in time bounded by a polynomial in $n$ and $l$.*
One algorithm to do this is a variant of Gaussian elimination known as the Bareiss algorithm [@zbMATH03298216]. Similar ideas were presented by Edmonds [@Edmonds]. See also [@zbMATH00467138].
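For illustration only, the sketch below (ours) is not the Bareiss algorithm but a simple exact Gauss--Jordan elimination over the rationals using Python's `Fraction` type. It shows the way Lemma 13 is used later in the paper: the coefficients of a polynomial are recovered from its values at distinct points by solving a Vandermonde system.

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve Ax = b exactly over the rationals; A is assumed square and non-singular."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [entry / M[col][col] for entry in M[col]]
        for i in range(n):
            if i != col and M[i][col] != 0:
                M[i] = [x - M[i][col] * y for x, y in zip(M[i], M[col])]
    return [M[i][n] for i in range(n)]

# Recover the coefficients of p(z) = 3 + 2z - z^2 from its values at z = 1, 2, 3
zs = [1, 2, 3]
p = lambda z: 3 + 2 * z - z * z
V = [[Fraction(z) ** j for j in range(3)] for z in zs]   # Vandermonde matrix
print(solve_exact(V, [p(z) for z in zs]))                # 3, 2, -1 (as Fractions)
```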
# The Tutte Polynomial of a Greedoid {#The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph}
Extending the definition of the Tutte polynomial of a matroid, Gordon and McMahon defined the Tutte polynomial of a greedoid in [@GordonMcMahon]. The *Tutte polynomial* of a greedoid $\Gamma$ with ground set $E$ and rank function $\rho$ is given by $$T(\Gamma;x,y) = \sum_{A\subseteq E}(x-1)^{\rho(\Gamma)-\rho(A)}(y-1)^{|A|-\rho(A)}.$$ When $\Gamma$ is a matroid, this reduces to the usual definition of the Tutte polynomial of a matroid. For a rooted graph $G$ we let $T(G;x,y) = T(\Gamma(G);x,y)$, for a rooted digraph $D$ we let $T(D;x,y)=T(\Gamma(D);x,y)$ and for a binary matrix $M$ we let $T(M;x,y)=T(\Gamma(M);x,y)$.
**Example 14**.
1. Let $P_k$ be the rooted (undirected) path with $k$ edges in which the root is one of the leaves. Then $$T(P_k;x,y) = 1+ \sum_{i=1}^k (x-1)^i y^{i-1}.$$
2. Let $S_k$ be the rooted (undirected) star with $k$ edges in which the root is the central vertex. Then $$T(S_k;x,y) = x^k.$$
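The evaluations in Example 14 can be reproduced mechanically from the definition of the Tutte polynomial above. The following sketch (ours; it assumes the SymPy library is available) computes $T(\Gamma;x,y)$ by summing over all subsets of the ground set, with a greedoid specified by its rank function, and recovers $T(P_2;x,y)$ and $T(S_3;x,y)$.

```python
from itertools import combinations
from sympy import symbols, expand

x, y = symbols('x y')

def tutte(E, rho):
    """Tutte polynomial of a greedoid given by its rank function on the subsets of E."""
    full = rho[frozenset(E)]
    return expand(sum((x - 1) ** (full - rho[A]) * (y - 1) ** (len(A) - rho[A])
                      for r in range(len(E) + 1)
                      for A in map(frozenset, combinations(E, r))))

# Gamma(P_2): edges 1 and 2 along the path, edge 1 incident with the root.
# The feasible sets are the initial segments, so rho is as follows.
E2 = [1, 2]
rho2 = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 0, frozenset({1, 2}): 2}
print(tutte(E2, rho2))               # x**2*y - 2*x*y + x + y, i.e. 1 + (x-1) + (x-1)**2*y
print(expand(1 + (x - 1) + (x - 1) ** 2 * y - tutte(E2, rho2)))   # 0

# Gamma(S_3): every subset of the three edges is feasible, so rho(A) = |A|
E3 = [1, 2, 3]
rho3 = {frozenset(A): len(A) for r in range(4) for A in combinations(E3, r)}
print(tutte(E3, rho3))               # x**3
```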
The Tutte polynomial of a greedoid retains many of the properties of the Tutte polynomial of a matroid, for example, it has a delete--contract recurrence, although its form is not as simple as that of the Tutte polynomial of a matroid [@GordonMcMahon]. Moreover, for a greedoid $\Gamma$:
- $T(\Gamma;1,1)$ is the number of bases of $\Gamma$;
- $T(\Gamma;2,1)$ is the number of feasible sets of $\Gamma$;
- $T(\Gamma;1,2)$ is the number of subsets $A$ of elements of $\Gamma$ so that $\rho(A)=\rho(\Gamma)$.
- $T(\Gamma;2,2)= 2^{|E(\Gamma)|}$.
But the Tutte polynomial of a greedoid also differs fundamentally from the Tutte polynomial of a matroid: for instance, it can have negative coefficients. For example, $T(\Gamma(P_2);x,y) = x^2y-2xy+x+y$.
The Tutte polynomial of a rooted graph has some of the same evaluations as the Tutte polynomial of an unrooted graph. Let $G$ be a rooted graph with edge set $E$.
- $T(G;1,1)$ is the number of spanning trees of the root component of $G$. (When $G$ is connected, this is just the number of spanning trees of $G$.)
- $T(G;2,1)$ is the number of subsets $A$ of $E$, so that the root component of $G|A$ is a tree containing all the edges of $A$.
- $T(G;1,2)$ is the number of subsets $A$ of $E$ so that the root component of $G|A$ includes every vertex of the root component of $G$. (When $G$ is connected, this is just the number of subsets $A$ so that $G|A$ is connected.)
- If no component of $G$ other than the root component has edges, then $T(G;1,0)$ is the number of acyclic orientations of $G$ with a unique source. Otherwise $T(G;1,0)=0$.
We record the following proposition stating that the Tutte polynomial of a connected rooted graph $G$ coincides with the Tutte polynomial of the corresponding unrooted graph $G'$ along the line $x=1$. This is easy to prove by noting that $\rho(G) = r(G')$ and a subset $A$ of the edges of $G$ satisfies $\rho(A)=\rho(G)$ if and only if $r(A)=r(G')$.
**Proposition 15**. *Let $G=(V,E,r)$ be a connected rooted graph and let $G'=(V,E)$ be the corresponding unrooted graph. Then $$T(G;1,y) = T(G';1,y).$$*
We list some evaluations of the Tutte polynomial of a digraph. Let $D$ be a rooted digraph with edge set $E$ and root $r$.
- $T(D;1,1)$ is the number of spanning arborescences of the root component of $D$ rooted at $r$. (When $D$ is root-connected, this is just its number of spanning arborescences rooted at $r$.)
- $T(D;2,1)$ is the number of subsets $A$ of $E$, so that the root component of $D|A$ is an arborescence rooted at $r$ containing every edge of $A$.
- $T(D;1,2)$ is the number of subsets $A$ of $E$, so that the root component of $D|A$ includes every vertex of the root component of $D$. (When $D$ is root-connected, this is just the number of subsets $A$ so that $D|A$ is root-connected.)
- $T(D;1,0)=1$ if $D$ is acyclic and root-connected, and $0$ otherwise.
The last evaluation will be discussed in more detail in Section [6](#sec:rooted digraph){reference-type="ref" reference="sec:rooted digraph"}.
Gordon and McMahon [@GordonMcMahon] proved that if $T_1$ and $T_2$ are rooted arborescences, then $T(T_1;x,y) = T(T_2;x,y)$ if and only if $T_1 \cong T_2$.
We list some evaluations of the Tutte polynomial of a binary greedoid. Let $M$ be an $m\times n$ binary matrix with linearly independent rows.
- $T(M;1,1)$ is the number of subsets $A$ of the columns of $M$ so that the submatrix of $M$ corresponding to the columns in $A$ is non-singular.
- $T(M;2,1)$ is the number of subsets $A$ of the columns of $M$ so that the submatrix $M_{[|A|],A}$ is non-singular.
- $T(M;1,2)$ is the number of subsets $A$ of the columns of $M$ containing a subset $A'$ so that the submatrix of $M$ corresponding to the columns in $A'$ is non-singular.
If a point $(a,b)$ lies on the hyperbola $H_1$ then we have $(a-1)(b-1)=1$ by definition. Thus the Tutte polynomial of a greedoid $\Gamma$ evaluated at such a point is given by $$\begin{aligned}
T(\Gamma;a,b) &= \sum_{A \subseteq E(\Gamma)}(a-1)^{\rho(\Gamma)-\rho(A)}(b-1)^{|A|-\rho(A)}\\&=(a-1)^{\rho(\Gamma)}\sum_{A \subseteq E(\Gamma)}\left(\frac{1}{a-1}\right)^{|A|}=(a-1)^{\rho(\Gamma)-|E(\Gamma)|}a^{|E(\Gamma)|}.\end{aligned}$$ Therefore, given $|E(\Gamma)|$ and $\rho(\Gamma)$, it is easy to compute $T(\Gamma;a,b)$ in polynomial time. For all of the greedoids that we consider, both $|E(\Gamma)|$ and $\rho(\Gamma)$ will be either known or easily computed.
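As a small sanity check (ours, not part of the argument), the closed form on $H_1$ can be compared with the defining sum for the branching greedoid of $P_2$ at the rational point $(3,\tfrac{3}{2})$, which lies on $H_1$.

```python
from fractions import Fraction
from itertools import combinations

def tutte_eval(E, rho, x, y):
    """Evaluate the Tutte polynomial of a greedoid, given by its rank function, at (x, y)."""
    full = rho[frozenset(E)]
    return sum((x - 1) ** (full - rho[A]) * (y - 1) ** (len(A) - rho[A])
               for r in range(len(E) + 1)
               for A in map(frozenset, combinations(E, r)))

# Gamma(P_2): |E(Gamma)| = 2 and rho(Gamma) = 2
E = [1, 2]
rho = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 0, frozenset({1, 2}): 2}

a, b = Fraction(3), Fraction(3, 2)                 # (a-1)(b-1) = 1, so (a,b) lies on H_1
closed_form = (a - 1) ** (rho[frozenset(E)] - len(E)) * a ** len(E)
print(tutte_eval(E, rho, a, b), closed_form)       # both equal 9
```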
The *characteristic polynomial* of a greedoid was first introduced by Gordon and McMahon in [@GordonMcMahon2] and is a generalization of the characteristic or chromatic polynomial of a matroid. For a greedoid $\Gamma$, the *characteristic polynomial* $p(\Gamma;\lambda)$ is defined by $$\label{characteristic polynomial specialization of tutte}p(\Gamma;\lambda) = (-1)^{\rho(\Gamma)} T(\Gamma;1-\lambda,0).$$
# Greedoid Constructions {#sec:constructions}
In this section we introduce three greedoid constructions and give expressions for the Tutte polynomial of greedoids resulting from these constructions.
The first construction is just the generalization of the $k$-thickening operation introduced by Brylawski [@Brylawski] from matroids to greedoids. Given a greedoid $\Gamma=(E,\mathcal F)$, its $k$-thickening is the greedoid $\Gamma^k$ that, informally speaking, is formed from $\Gamma$ by replacing each element by $k$ parallel elements. More precisely, $\Gamma^k$ has ground set $E'= E \times [k]$ and collection $\mathcal F'$ of feasible sets as follows. Define $\mu$ to be the projection operator $\mu: 2^{E \times [k]} \rightarrow 2^E$ so that element $e\in \mu(A)$ if and only if $(e,i) \in A$ for some $i$. Now a subset $A$ is feasible in $\Gamma^k$ if and only if $\mu(A)$ is feasible in $\Gamma$ and $|\mu(A)|=|A|$. The latter condition ensures that $A$ does not contain more than one element replacing a particular element of $\Gamma$.
It is clear that $\Gamma^k$ is a greedoid and moreover $\rho_{\Gamma^k}(A)= \rho_{\Gamma}(\mu(A))$. In particular $\rho(\Gamma^k)=\rho(\Gamma)$. For any element $e$ of $\Gamma$ the elements $(e,i)$ and $(e,j)$ are parallel. The effect of the $k$-thickening operation on the Tutte polynomial of a greedoid is given in the following theorem, generalizing the expression for the $k$-thickening of the Tutte polynomial given by Brylawski [@Brylawski].
**Theorem 16**. *Let $\Gamma$ be a greedoid. The Tutte polynomial of the $k$-thickening $\Gamma^k$ of $\Gamma$ when $y \neq -1$ is given by $$\label{greedoid thickening equation when y not equal -1}
T(\Gamma^k;x,y) = (1+y+ \cdots +y^{k-1})^{\rho(\Gamma)}T\left(\Gamma;\frac{x+y+ \cdots +y^{k-1}}{1+y+ \cdots + y^{k-1}},y^k\right).$$ When $y=-1$ we have $$T(\Gamma^k;x,-1) = \begin{cases} (x-1)^{\rho(\Gamma)} & \text{if $k$ is even; }\\ T(\Gamma;x,-1) & \text{if $k$ is odd. }
\end{cases}$$*
*Proof.* Let $\Gamma^k$ be the $k$-thickened greedoid, let $E'$ denote its ground set and let $E$ be the ground set of $\Gamma$. Then $E'=E \times [k]$. Let $\mu$ be the mapping defined in the discussion at the beginning of this section. To ensure that we do not divide by zero in our calculations, we prove the case when $y=1$ separately.
For each $A' \subseteq E'$ we have $\rho_{\Gamma^k}(A') = \rho_{\Gamma}(\mu(A'))$ and furthermore $\rho(\Gamma^k) = \rho(\Gamma)$. The Tutte polynomial of $\Gamma^k$ when $y \notin\{-1,1\}$ is thus given by $$\begin{aligned}
T(\Gamma^k;x,y) &= \sum_{A' \subseteq E'}(x-1)^{\rho(\Gamma^k)-\rho_{\Gamma^k}(A')}(y-1)^{|A'|-\rho_{\Gamma^k}(A')} \notag\\
&=\sum_{A \subseteq E}\sum_{\substack{A' \subseteq E': \\ \mu(A') = A}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(\mu(A'))}(y-1)^{|A'|-\rho_{\Gamma}(\mu(A'))} \label{equation in greedoid thickening proof}\\
&=\sum_{A \subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y-1)^{-\rho_{\Gamma}(A)}\sum_{\substack{A' \subseteq E': \\ \mu(A')=A}}(y-1)^{|A'|}\notag\\
&=\sum_{A \subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y-1)^{-\rho_{\Gamma}(A)}(y^k-1)^{|A|}\notag\\
&=(1+y+\cdots + y^{k-1})^{\rho(\Gamma)}\sum_{A \subseteq E} \left(\frac{(x-1)(y-1)}{y^k-1}\right)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y^k-1)^{|A|-\rho_{\Gamma}(A)}\notag\\
&=(1+y+ \cdots + y^{k-1})^{\rho(\Gamma)}T\left(\Gamma; \frac{x+y+\cdots + y^{k-1}}{1+y+ \cdots + y^{k-1}},y^k\right).\notag\end{aligned}$$
When $y=1$ we get non-zero terms in Equation [\[equation in greedoid thickening proof\]](#equation in greedoid thickening proof){reference-type="ref" reference="equation in greedoid thickening proof"} if and only if $|A'|=\rho_{\Gamma}(\mu(A'))$, which implies that $|A'|=|A|$. For each $A \subseteq E$ there are $k^{|A|}$ choices for $A'$ such that $\mu(A')=A$ and $|A'|=|A|$. Therefore we have $$\begin{aligned}
T(\Gamma^k;x,1) &= \sum_{\substack{A \subseteq E: \\ \rho_{\Gamma}(A) = |A|}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)} \sum_{\substack{A' \subseteq E': \\ \mu(A') = A, \\ |A'|=|A|}}1 = \sum_{\substack{A \subseteq E: \\ \rho_{\Gamma}(A) = |A|}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)} k^{\rho_{\Gamma}(A)}\\
&= \sum_{\substack{A \subseteq E: \\ \rho_{\Gamma}(A) = |A|}}\left(\frac{x-1}{k}\right)^{\rho(\Gamma)-\rho_{\Gamma}(A)} k^{\rho(\Gamma)}= k^{\rho(\Gamma)} T\left(\Gamma;\frac{x+k-1}{k},1\right)\end{aligned}$$ which agrees with Equation [\[greedoid thickening equation when y not equal -1\]](#greedoid thickening equation when y not equal -1){reference-type="ref" reference="greedoid thickening equation when y not equal -1"} when $y=1$.
When $y=-1$ we have $$\begin{aligned}
T(\Gamma^k;x,-1) &=\sum_{A \subseteq E}\sum_{\substack{A' \subseteq E': \\ \mu(A') = A}}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(\mu(A'))}(-2)^{|A'|-\rho_{\Gamma}(\mu(A'))} \notag\\
&=\sum_{A \subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(-2)^{-\rho_{\Gamma}(A)}\sum_{\substack{A' \subseteq E': \\ \mu(A')=A}}(-2)^{|A'|}\notag\\
&=\sum_{A \subseteq E}(x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(-2)^{-\rho_{\Gamma}(A)}((-1)^k-1)^{|A|}\notag\\
&= \left\{ \begin{array}{ll} (x-1)^{\rho(\Gamma)} & \text{if $k$ is even};\\ T(\Gamma;x,-1) & \text{if $k$ is odd.} \end{array} \right. \notag\end{aligned}$$ Note that the only contribution to $T(\Gamma^k;x,-1)$ when $k$ is even is from the empty set. ◻
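Theorem 16 can be checked numerically on a small example. The sketch below (ours) builds the rank function of the $3$-thickening of $\Gamma(P_2)$ via the projection $\mu$ and compares the two sides of the identity at the rational point $(3,2)$, using exact arithmetic throughout.

```python
from fractions import Fraction
from itertools import combinations

def tutte_eval(E, rho, x, y):
    """Evaluate the Tutte polynomial of a greedoid, given by its rank function, at (x, y)."""
    full = rho[frozenset(E)]
    return sum((x - 1) ** (full - rho[A]) * (y - 1) ** (len(A) - rho[A])
               for r in range(len(E) + 1)
               for A in map(frozenset, combinations(E, r)))

def thickening_rank(E, rho, k):
    """Rank function of the k-thickening: rho_k(A') = rho(mu(A'))."""
    Ek = [(e, i) for e in E for i in range(1, k + 1)]
    rho_k = {frozenset(A): rho[frozenset(e for (e, i) in A)]
             for r in range(len(Ek) + 1) for A in combinations(Ek, r)}
    return Ek, rho_k

# Gamma(P_2) again: edges 1 and 2, edge 1 incident with the root
E = [1, 2]
rho = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 0, frozenset({1, 2}): 2}

k, x, y = 3, Fraction(3), Fraction(2)
Ek, rho_k = thickening_rank(E, rho, k)
lhs = tutte_eval(Ek, rho_k, x, y)
s = sum(y ** j for j in range(k))                  # 1 + y + ... + y^(k-1)
rhs = s ** rho[frozenset(E)] * tutte_eval(E, rho, (x + s - 1) / s, y ** k)
print(lhs, rhs, lhs == rhs)                        # 95 95 True
```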
The second construction is a little more involved. To motivate it we first describe a natural construction operation on rooted graphs. Let $G$ and $H$ be disjoint rooted graphs with $G$ being connected. Then the *$H$-attachment* of $G$, denoted by $G \sim H$, is formed by taking $G$ and $\rho(G)$ disjoint copies of $H$, and identifying each vertex of $G$ other than the root with the root vertex of one of the copies of $H$. The root of $G\sim H$ is the root of $G$. See Figure [\[fig:attach\]](#fig:attach){reference-type="ref" reference="fig:attach"} for an illustration of the attachment operation.
Suppose that $V(G)=\{r,v_1,\ldots,v_{\rho(G)}\}$, where $r$ is the root of $G$, let $E_0$ be the edge set of $G$ and let $E_i$ be the edge set of the copy of $H$ attached at $v_i$. A set $F$ is feasible in $\Gamma(G\sim H)$ if and only if each of the following conditions holds.
1. $F \cap E_0$ is feasible in $\Gamma(G)$.
2. For all $i$ with $1 \leq i \leq \rho(G)$, $F \cap E_i$ is feasible in $\Gamma(H)$.
3. For all $i$ with $1 \leq i \leq \rho(G)$, if $v_i$ is not in the root component of $G|(F \cap E_0)$, then $F \cap E_i = \emptyset$.
In order to extend these ideas to general greedoids, we begin by describing the notion of a closed set, which was first defined for greedoids by Korte and Lovász [@KorteLovasz]. Let $\Gamma$ be a greedoid with ground set $E$ and rank function $\rho$. Given a subset $A$ of $E$, its *closure* $\sigma_{\Gamma}(A)$ is defined by $\sigma_{\Gamma}(A)=\{e: \rho(A\cup e)=\rho(A)\}$. We will drop the dependence on $\Gamma$ whenever the context is clear. Note that it follows from the definition that $A \subseteq \sigma(A)$. Moreover Lemma [Lemma 8](#lem:rankuseful){reference-type="ref" reference="lem:rankuseful"} implies that $\rho(\sigma(A))=\rho(A)$. Furthermore if $e \notin \sigma(A)$, then $\rho(A \cup e) > \rho(A)$, so axiom [\[ax:GR2\]](#ax:GR2){reference-type="ref" reference="ax:GR2"} implies that $\rho(\sigma(A) \cup e)> \rho(\sigma(A))$ and hence $\sigma(\sigma(A))=\sigma(A)$. A subset $A$ of $E$ satisfying $A=\sigma(A)$ is said to be *closed*. Every subset of $E$ of the form $\sigma(X)$ for some $X$ is closed.
We now introduce what we call an attachment function. Let $\Gamma$ be a greedoid with rank function $\rho$. A function $f:\mathcal{F} \rightarrow 2^{[\rho(\Gamma)]}$ is called a *$\Gamma$-attachment function* if it satisfies both of the following.
1. For each feasible set $F$, we have $|f(F)|=\rho(F)$.
2. If $F_1$ and $F_2$ are feasible sets and $F_1 \subseteq \sigma(F_2)$ then $f(F_1) \subseteq f(F_2)$.
The following property of attachment functions is needed later.
**Lemma 17**. *Let $\Gamma$ be a greedoid and $f$ be a $\Gamma$-attachment function. Let $A$ be a subset of the elements of $\Gamma$ and let $F_1$ and $F_2$ be maximal feasible subsets of $A$. Then $f(F_1)=f(F_2)$.*
*Proof.* It follows from the axioms for the feasible sets of a greedoid that all maximal feasible subsets of $A$ have the same size. Thus $\rho(F_1)=\rho(F_2)=\rho(A)$. For every element $e$ of $A$, $\rho (F_1) \leq \rho(F_1\cup e) \leq \rho(A)$. As $\rho(F_1)=\rho(A)$, equality must hold throughout. Thus $e\in \sigma(F_1)$. Hence $A\subseteq \sigma(F_1)$, so $F_2\subseteq \sigma(F_1)$. By symmetry, $F_1\subseteq \sigma(F_2)$. The result then follows from the second condition satisfied by a $\Gamma$-attachment function. ◻
Given greedoids $\Gamma_1$ and $\Gamma_2$ with disjoint ground sets, and $\Gamma_1$-attachment function $f$, we define the *$\Gamma_2$-attachment of $\Gamma_1$*, denoted by $\Gamma_1 \sim_f \Gamma_2$ as follows. The ground set $E$ is the union of the ground set $E_0$ of $\Gamma_1$ together with $\rho=\rho(\Gamma_1)$ disjoint copies $E_1,\ldots, E_\rho$ of the ground set of $\Gamma_2$. In the following we abuse notation slightly by saying that for $i>0$, a subset of $E_i$ is feasible in $\Gamma_2$ if the corresponding subset of the elements of $\Gamma_2$ is feasible. A subset $F$ of $E$ is feasible if and only if each of the following conditions holds.
1. $F \cap E_0$ is feasible in $\Gamma_1$.
2. For all $i$ with $1 \leq i \leq \rho$, $F \cap E_i$ is feasible in $\Gamma_2$.
3. For all $i$ with $1 \leq i \leq \rho$, if $i\notin f(F \cap E_0)$ then $F \cap E_i=\emptyset$.
**Proposition 18**. *For any greedoids $\Gamma_1$ and $\Gamma_2$, and $\Gamma_1$-attachment function $f$, the *$\Gamma_2$-attachment of $\Gamma_1$* is a greedoid.*
*Proof.* We use the notation defined above to describe the ground set of $\Gamma_1 \sim_f \Gamma_2$. Clearly the empty set is feasible in $\Gamma_1 \sim_f \Gamma_2$. Suppose that $F_1$ and $F_2$ are feasible sets in $\Gamma_1 \sim_f \Gamma_2$ with $|F_2|>|F_1|$. If there is an element $e$ of $F_2 \cap E_0$ which is not in $\sigma_{\Gamma_1} (F_1 \cap E_0)$ then $(F_1 \cap E_0) \cup e$ is feasible in $\Gamma_1$. Moreover $F_1 \cap E_0 \subseteq \sigma_{\Gamma_1} ((F_1 \cap E_0) \cup e)$, so $f(F_1 \cap E_0) \subseteq f((F_1 \cap E_0)\cup e)$. Consequently $F_1 \cup e$ is feasible in $\Gamma_1 \sim_f \Gamma_2$.
On the other hand, suppose that $F_2 \cap E_0 \subseteq \sigma_{\Gamma_1}(F_1 \cap E_0)$. Then $f(F_2 \cap E_0) \subseteq f(F_1 \cap E_0)$. Moreover, as there is no element $e$ of $(F_2 \cap E_0)-(F_1 \cap E_0)$ such that $(F_1 \cap E_0)\cup e$ is feasible, we have $|F_2 \cap E_0|\leq |F_1 \cap E_0|$. So for some $i$ in $f(F_2 \cap E_0)$, we have $|F_2 \cap E_i| > |F_1 \cap E_i|$. Thus there exists $e \in (F_2-F_1)\cap E_i$ such that $(F_1 \cap E_i) \cup e$ is feasible in $\Gamma_2$. As $i\in f(F_2 \cap E_0)$, we have $i \in f(F_1 \cap E_0)$. Hence $F_1 \cup e$ is feasible in $\Gamma_1 \sim_f \Gamma_2$. ◻
Every greedoid $\Gamma$ has an attachment function formed by setting $f(F)=[|F|]$ for each feasible set $F$. However there are other examples of attachment functions. Let $G$ be a connected rooted graph in which the vertices other than the root are labelled $v_1,\ldots, v_{\rho}$. There is an attachment function $f$ defined on $\Gamma(G)$ as follows. For every feasible set $F$, define $f(F)$ so that $i\in f(F)$ if and only if $v_i$ is in the root component of $G|F$. It is straightforward to verify that $f$ is indeed an attachment function. Furthermore if $H$ is another rooted graph then $\Gamma(G\sim H)= \Gamma(G)\sim_f \Gamma(H)$.
We now consider the rank function of $\Gamma = \Gamma_1 \sim_f \Gamma_2$. We keep the same notation as above for the elements of $\Gamma$. Let $A$ be a subset of $E(\Gamma)$ and let $F$ be a maximal feasible subset of $A\cap E_0$. Then $$\label{eq:rankattach} \rho_{\Gamma}(A) = \rho_{\Gamma_1}(A \cap E_0) + \sum_{i \in f(F)} \rho_{\Gamma_2}(A \cap E_i).$$ Observe that the number of subsets of $E(\Gamma)$ with specified rank, size and intersection with $E_0$ does not depend on the choice of $f$. Consequently the Tutte polynomial of $\Gamma_1 \sim_f \Gamma_2$ does not depend on $f$. We now make this idea more precise by establishing an expression for the Tutte polynomial of an attachment.
**Theorem 19**. *Let $\Gamma_1$ and $\Gamma_2$ be greedoids, and let $f$ be an attachment function for $\Gamma_1$. Then the Tutte polynomial of $\Gamma_1 \sim_f \Gamma_2$ is given by $$T(\Gamma_1 \sim_f \Gamma_2;x,y) = T(\Gamma_2; x,y)^{\rho(\Gamma_1)} T\Big(\Gamma_1;\frac{(x-1)^{\rho(\Gamma_2)+1}y^{|E(\Gamma_2)|}}{T(\Gamma_2;x,y)}+1,y\Big),$$ providing $T(\Gamma_2;x,y)\ne 0$.*
*Proof.* Let $\Gamma = \Gamma_1 \sim_f \Gamma_2$. We use the notation defined above to describe the ground set of $\Gamma$. It is useful to extend the definition of the attachment function $f$ to all subsets of $E_0$ by setting $f(A)$ to be equal to $f(F)$ where $F$ is a maximal feasible set of $A$. Lemma [Lemma 17](#lem:attachment){reference-type="ref" reference="lem:attachment"} ensures that extending $f$ in this way is well-defined. It follows from Equation [\[eq:rankattach\]](#eq:rankattach){reference-type="ref" reference="eq:rankattach"} that $\rho(\Gamma) = \rho(\Gamma_1)(\rho(\Gamma_2)+1)$. We have $$\begin{aligned}
T(\Gamma;x,y) &= \sum_{A\subseteq E(\Gamma)} (x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)} (y-1)^{|A|-\rho(A)}\\
&= \sum_{A_0 \subseteq E_0} (x-1)^{\rho(\Gamma_1) - \rho_{\Gamma_1}(A_0)}(y-1)^{|A_0|-\rho_{\Gamma_1}(A_0)}\cdot \prod_{i\notin f(A_0)} \sum_{A_i \subseteq E_i} (x-1)^{\rho(\Gamma_2)} (y-1)^{|A_i|}\\& \phantom{=}\ \cdot \prod_{i \in f(A_0)} \sum_{A_i \subseteq E_i} (x-1)^{\rho(\Gamma_2)- \rho_{\Gamma_2}(A_i)} (y-1)^{|A_i|-\rho_{\Gamma_2}(A_i)} \\
&= \sum_{A_0 \subseteq E_0} (x-1)^{\rho(\Gamma_1) - \rho_{\Gamma_1}(A_0)} (T(\Gamma_2;x,y))^{\rho_{\Gamma_1}(A_0)}\\
& \phantom{=} \ \cdot
\big((x-1)^{\rho(\Gamma_2)}
y^{|E(\Gamma_2)|}\big)^{\rho(\Gamma_1)-\rho_{\Gamma_1}(A_0)} (y-1)^{|A_0|-\rho_{\Gamma_1}(A_0)}\\
&= (T(\Gamma_2;x,y))^{\rho(\Gamma_1)} \sum_{A_0 \subseteq E_0} (y-1)^{|A_0|-\rho_{\Gamma_1}(A_0)} \Big(\frac{(x-1)^{\rho(\Gamma_2)+1}y^{|E(\Gamma_2)|}}{T(\Gamma_2;x,y)}\Big)^{\rho(\Gamma_1)-\rho_{\Gamma_1}(A_0)}\\
&= T(\Gamma_2; x,y)^{\rho(\Gamma_1)} T\Big(\Gamma_1;\frac{(x-1)^{\rho(\Gamma_2)+1}y^{|E(\Gamma_2)|}}{T(\Gamma_2;x,y)}+1,y\Big).\end{aligned}$$ ◻
The third construction is called the full rank attachment. Given greedoids $\Gamma_1=(E_1,\mathcal F_1)$ and $\Gamma_2=(E_2, \mathcal F_2)$ with disjoint ground sets, the *full rank attachment of $\Gamma_2$ to $\Gamma_1$* denoted by $\Gamma_1 \approx \Gamma_2$ has ground set $E_1 \cup E_2$ and a set $F$ of elements is feasible if either of the two following conditions holds.
1. $F \in \mathcal F_1$;
2. $F \cap E_1 \in \mathcal F_1$, $F\cap E_2 \in \mathcal F_2$ and $\rho_{\Gamma_1}(F\cap E_1) = \rho(\Gamma_1)$.
It is straightforward to prove that $\Gamma_1 \approx \Gamma_2$ is a greedoid.
Suppose that $\Gamma=\Gamma_1\approx \Gamma_2$ and that $A$ is a subset of $E(\Gamma)$. Then $$\rho(A) = \begin{cases} \rho(A \cap E_1) & \text{if $\rho(A \cap E_1)< \rho(\Gamma_1)$,}\\
\rho(A \cap E_1) + \rho(A\cap E_2) & \text{if $\rho(A \cap E_1)= \rho(\Gamma_1)$.}
\end{cases}$$ This observation enables us to prove the following identity for the Tutte polynomial.
**Theorem 20**. *Let $\Gamma_1$ and $\Gamma_2$ be greedoids, and let $\Gamma=\Gamma_1 \approx \Gamma_2$. Let $E$, $E_1$ and $E_2$ denote the ground sets of $\Gamma$, $\Gamma_1$ and $\Gamma_2$ respectively. Then $$T(\Gamma_1\approx\Gamma_2;x,y)\\ = T(\Gamma_1;x,y)(x-1)^{\rho(\Gamma_2)}y^{|E_2|} + T(\Gamma_1;1,y) ( T(\Gamma_2;x,y) - (x-1)^{\rho(\Gamma_2)}y^{|E_2|}).$$*
*Proof.* We have $$\begin{aligned}
\MoveEqLeft{T(\Gamma_1 \approx \Gamma_2;x,y)}\\
&= \sum_{A\subseteq E} (x-1)^{\rho(\Gamma)-\rho_{\Gamma}(A)}(y-1)^{|A|-\rho_{\Gamma}(A)}\\
&= \sum_{\substack{A_1 \subseteq E_1:\\ \rho_{\Gamma_1}(A_1)<\rho(\Gamma_1)}} (x-1)^{\rho(\Gamma_1)-\rho_{\Gamma_1}(A_1)} (y-1)^{|A_1|-\rho_{\Gamma_1}(A_1)} \sum_{A_2 \subseteq E_2} (x-1)^{\rho(\Gamma_2)} (y-1)^{|A_2|}\\
&\phantom{=} \ { }+
\sum_{\substack{A_1 \subseteq E_1:\\ \rho_{\Gamma_1}(A_1)=\rho(\Gamma_1)}} (y-1)^{|A_1|-\rho_{\Gamma_1}(A_1)}\sum_{A_2 \subseteq E_2} (x-1)^{\rho(\Gamma_2)-\rho_{\Gamma_2}(A_2)} (y-1)^{|A_2|-\rho_{\Gamma_2}(A_2)}\\
&= \sum_{A_1\subseteq E_1} (x-1)^{\rho(\Gamma_1)-\rho_{\Gamma_1}(A_1)} (y-1)^{|A_1|-\rho_{\Gamma_1}(A_1)} (x-1)^{\rho(\Gamma_2)}y^{|E_2|}\\
&\phantom{=} \ { }+
\sum_{\substack{A_1 \subseteq E_1:\\
\rho_{\Gamma_1}(A_1)=\rho(\Gamma_1)}}
(y-1)^{|A_1|-\rho_{\Gamma_1}(A_1)} \\ &\phantom{=} \ \cdot
\Big(\sum_{A_2 \subseteq E_2} (x-1)^{\rho(\Gamma_2)-\rho_{\Gamma_2}(A_2)} (y-1)^{|A_2|-\rho_{\Gamma_2}(A_2)}-
(x-1)^{\rho(\Gamma_2)}y^{|E_2|}\Big)\\&= T(\Gamma_1;x,y)(x-1)^{\rho(\Gamma_2)}y^{|E_2|} +
T(\Gamma_1;1,y)\big(T(\Gamma_2;x,y) -(x-1)^{\rho(\Gamma_2)}y^{|E_2|}\big).\end{aligned}$$ ◻
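Theorem 20 can likewise be verified numerically. The sketch below (ours) takes two copies of $\Gamma(P_2)$ on disjoint ground sets, builds the rank function of $\Gamma_1 \approx \Gamma_2$ from the displayed case formula, and compares the two sides of the identity at a sample rational point.

```python
from fractions import Fraction
from itertools import combinations

def tutte_eval(E, rho, x, y):
    """Evaluate the Tutte polynomial of a greedoid, given by its rank function, at (x, y)."""
    full = rho[frozenset(E)]
    return sum((x - 1) ** (full - rho[A]) * (y - 1) ** (len(A) - rho[A])
               for r in range(len(E) + 1)
               for A in map(frozenset, combinations(E, r)))

# Gamma_1 and Gamma_2: two copies of Gamma(P_2) on disjoint ground sets
E1 = [1, 2]
rho1 = {frozenset(): 0, frozenset({1}): 1, frozenset({2}): 0, frozenset({1, 2}): 2}
E2 = [3, 4]
rho2 = {frozenset(): 0, frozenset({3}): 1, frozenset({4}): 0, frozenset({3, 4}): 2}

# rank function of Gamma_1 ~~ Gamma_2, following the case formula above
E = E1 + E2
rho = {}
for r in range(len(E) + 1):
    for A in map(frozenset, combinations(E, r)):
        A1, A2 = A & frozenset(E1), A & frozenset(E2)
        rho[A] = rho1[A1] + (rho2[A2] if rho1[A1] == rho1[frozenset(E1)] else 0)

x, y = Fraction(3), Fraction(5, 2)
corr = (x - 1) ** rho2[frozenset(E2)] * y ** len(E2)     # (x-1)^rho(Gamma_2) y^|E_2|
lhs = tutte_eval(E, rho, x, y)
rhs = (tutte_eval(E1, rho1, x, y) * corr
       + tutte_eval(E1, rho1, Fraction(1), y) * (tutte_eval(E2, rho2, x, y) - corr))
print(lhs == rhs)   # True
```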
This construction will be useful later in Section [7](#sec:binarygreedoids){reference-type="ref" reference="sec:binarygreedoids"} when $\Gamma_1$ and $\Gamma_2$ are binary greedoids with $\Gamma_1=\Gamma(M_1)$ and $\Gamma_2=\Gamma(M_2)$, where $M_1$ has full row rank. Then $\Gamma_1 \approx \Gamma_2 = \Gamma(M)$ where $M$ has the form $$M= \left(\begin{array}{c|c} M_1 & 0 \\ \hline 0 & M_2 \end{array}\right).$$
# Rooted Graphs {#section rooted graphs hardness}
Throughout the remainder of the paper we focus on three computational problems. Let $\mathbb G$ denote one of the classes of branching greedoids, directed branching greedoids or binary greedoids. Our first problem is computing all the coefficients of the Tutte polynomial of a greedoid in the class $\mathbb G$.
$\pi_1[\mathbb G]$ : $\#$[Rooted Tutte Polynomial]{.smallcaps}\
**Input:** $\Gamma \in \mathbb G$.\
**Output:** The coefficients of $T(\Gamma;x,y)$.
The second problem involves computing the Tutte polynomial along a plane algebraic curve $L$. We restrict our attention to the case where $L$ is a rational curve given by the parametric equations $$x(t) = \frac{p(t)}{q(t)} \quad \text{ and } \quad y(t) = \frac{r(t)}{s(t)},$$ where $p$, $q$, $r$ and $s$ are polynomials over $\mathbb{Q}$. More precisely, we compute the coefficients of the one-variable rational function obtained by restricting $T$ to the curve $L$.
$\pi_2[\mathbb G,L]$ : $\#$[Rooted Tutte Polynomial Along $L$]{.smallcaps}\
**Input:** $\Gamma \in \mathbb G$.\
**Output:** The coefficients of the rational function of $t$ given by evaluating $T(\Gamma;x(t),y(t))$ along $L$.
Most of the time, $L$ will be one of the hyperbolae $H_{\alpha}$. We will frequently make a slight abuse of notation by writing $L=H_{\alpha}$.
The final problem is the evaluation of the Tutte polynomial at a fixed rational point $(a,b)$.
$\pi_3[\mathbb G,a,b]$ : $\#$[Rooted Tutte Polynomial At $(a,b)$]{.smallcaps}\
**Input:** $\Gamma \in \mathbb G$.\
**Output:** $T(\Gamma;a,b)$.
It is straightforward to see that for each possibility for $\mathbb G$, we have $$\pi_3[\mathbb{G},a,b] \propto_T \pi_2[\mathbb{G},H_{(a-1)(b-1)}] \propto_T \pi_1[\mathbb{G}].$$ Our results in the remainder of the paper will determine when the opposite reductions hold.
In this section we prove Theorem [Theorem 3](#maintheoremrootedgraph){reference-type="ref" reference="maintheoremrootedgraph"}. We let $\mathcal{G}$ be the class of branching greedoids of connected, rooted, planar, bipartite graphs and take $\mathbb{G}=\mathcal{G}$. It is, however, more convenient to take the input to each problem to be a connected, rooted, planar, bipartite graph rather than its branching greedoid.
We begin by reviewing the exceptional points of Theorem [Theorem 3](#maintheoremrootedgraph){reference-type="ref" reference="maintheoremrootedgraph"}. If a point $(a,b)$ lies on the hyperbola $H_1$ then, following the remarks at the end of Section [3](#The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph){reference-type="ref" reference="The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph"}, $T(G;a,b)$ is easily computed. We noted in Section [3](#The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph){reference-type="ref" reference="The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph"} that for a connected rooted graph $G$, $T(G;1,1)$ is equal to the number of spanning trees of $G$. That this can be evaluated in polynomial time follows from Kirchhoff's Matrix--Tree theorem [@Kirchhoff]. Hence there are polynomial time algorithms to evaluate the Tutte polynomial of a connected rooted graph at $(1,1)$ and at any point lying on $H_1$. It is easy to extend this to all rooted graphs because every edge belonging to a component that does not include the root is a loop in the corresponding branching greedoid.
We now turn to the hard points of Theorem [Theorem 3](#maintheoremrootedgraph){reference-type="ref" reference="maintheoremrootedgraph"}. A key step in establishing the hardness part of Theorem [Theorem 3](#maintheoremrootedgraph){reference-type="ref" reference="maintheoremrootedgraph"} for points lying on the line $y=1$ is to strengthen a result of Jerrum [@Jerrum]. Given an unrooted graph $G=(V,E)$, a *subtree* of $G$ is a subgraph of $G$ which is a tree. (We emphasize that the subgraph does not have to be an induced subgraph.) Jerrum [@Jerrum] showed that the following problem is $\#$P-complete.
$\#$[Subtrees]{.smallcaps}\
**Input:** Planar unrooted graph $G$.\
**Output:** The number of subtrees of $G$.
Consider the restriction of this problem to bipartite planar graphs.
$\#$[Bisubtrees]{.smallcaps}\
**Input:** Bipartite, planar unrooted graph $G$.\
**Output:** The number of subtrees of $G$.
We shall show that $\#$[Bisubtrees]{.smallcaps} is $\#$P-complete. We say that an edge of a graph $G$ is *external* in a subtree $T$ of $G$ if it is not contained in $E(T)$. Let $t_{i,j}(G)$ be the number of subtrees $T$ of $G$ with $i$ external edges having precisely one endvertex in $T$ and $j$ external edges having both endvertices in $T$.
Recall that the $k$-stretch of an unrooted graph $G$ is obtained by replacing each loop by a circuit with $k$ edges and every other edge by a path of length $k$. Let $t(G)$ denote the number of subtrees of $G$.
**Proposition 21**. *For every unrooted graph $G$, the number of subtrees of the $k$-stretch $G_k$ of $G$ is given by $$t(G_k) = \left(\sum_{i,j \geq 0}t_{i,j}(G)k^i \binom{k+1}{2}^j\right) + \frac{k(k-1)}{2}|E(G)|.$$*
*Proof.* Let $E(G) = \{e_1, e_2, \ldots, e_m\}$ and let $E_t$ be the set of edges replacing $e_t$ in $G_k$ for $1 \leq t \leq m$. Thus $E(G_k) = \bigcup_{t=1}^m E_t$. We can think of the vertices of $G_k$ as being of two types: those corresponding to the vertices of $G$ and the extra ones added when $G_k$ is formed. We construct a function $f$ that maps every subtree $T$ of $G_k$ to a graph $T'$ which is either a subtree of $G$ or an empty graph with no vertices or edges. We let $V(T')$ comprise all the vertices of $V(T)$ corresponding to vertices in $G$. The edge set $E(T')$ is defined so that $e_t \in E(T')$ if and only if $E_t \subseteq E(T)$.
Let $T'$ be a subtree of $G$ with at least one vertex, $i$ external edges having precisely one endvertex in $T'$ and $j$ external edges having both endvertices in $T'$.
If $T \in f^{-1}(T')$ then it must contain all of the edges in $G_k$ that replace the edges in $E(T')$. Suppose there is an edge $e_t = v_1v_2$ in $G$ that is external in $T'$ with $v_1\in V(T')$ and $v_2\notin V(T')$. Then there are $k$ possibilities for the subset of $E_t$ appearing in $T$. Now suppose there exists an edge $e_t= v_1v_2$ in $G$ that is external in $T'$ with $v_1, v_2 \in V(T')$. Then there are $\binom{k+1}{2}$ choices for the subset of $E_t$ appearing in $T$. Therefore, $$|f^{-1}(T')| = k^i \binom{k+1}{2}^j.$$
It remains to count the subtrees of $G_k$ mapped by $f$ to a graph with no vertices. Such a subtree does not contain any vertices corresponding to vertices in $G$. There are $(k-1)|E(G)|$ subtrees of $G_k$ comprising a single vertex not in $V(G)$ and no edges, and $\binom{k-1}{2}|E(G)|$ subtrees of $G_k$ with at least one edge but not containing any vertex in $V(G)$. Hence $$t(G_k) = \left(\sum_{i,j \geq 0}t_{i,j}(G)k^i \binom{k+1}{2}^j\right) + \frac{k(k-1)}{2}|E(G)|.$$ ◻
We can now show that $\#$[Bisubtrees]{.smallcaps} is $\#$P-complete.
**Proposition 22**. *The problem $\#$[Bisubtrees]{.smallcaps} is $\#$P-complete.*
*Proof.* It is clear that $\#$[Bisubtrees]{.smallcaps} belongs to $\#$P. To establish hardness, first note that $G_2, \ldots, G_{4|E(G)|+2}$ are all bipartite and planar, and may be constructed from $G$ in polynomial time. We have $\max_{i,j \geq 0}\{i+2j: t_{i,j}(G)>0\} \leq \max_{i,j \geq 0}\{i+2j: i+j \leq |E(G)|\} =2{|E(G)|}$. Therefore, by Proposition [Proposition 21](#bipartite stretch){reference-type="ref" reference="bipartite stretch"}, $t(G_k)$ is a polynomial in $k$ of degree at most $2|E(G)|$. So we can write $$t(G_k) = \sum_{p=0}^{2|E(G)|}a_pk^p.$$ Thus, if we compute $t(G_k)$ for $k=2,\ldots, 4|E(G)|+2$, then we can apply Lemma [Lemma 13](#lem:gauss){reference-type="ref" reference="lem:gauss"} to recover $a_p$ for all $p$ and then determine $t(G)=t(G_1)$ in polynomial time. Therefore we have shown that $\#$[Subtrees]{.smallcaps} $\propto_T$ $\#$[Bisubtrees]{.smallcaps}. ◻
We now present three propositions which together show that at all but a small number of fixed rational points $(a,b)$, evaluating the Tutte polynomial of a connected, bipartite, planar, rooted graph at $(a,b)$ is just as hard as evaluating it along the curve $H_{(a-1)(b-1)}$. The $k$-thickening operation is crucial. Notice that $\Gamma(G^k)\cong (\Gamma(G))^k$, so we may apply Theorem [Theorem 16](#greedoid thickening){reference-type="ref" reference="greedoid thickening"} to obtain an expression for $T(G^k)$. The first proposition deals with the case when $a \neq 1$ and $b \notin \{-1,0,1\}$.
**Proposition 23**. *Let $L = H_{\alpha}$ for some $\alpha \in \mathbb{Q} - \{0\}$. Let $(a,b)$ be a point on $L$ such that $b \notin \{-1,0\}$. Then $$\pi_2[\mathcal{G},L] \propto_T \pi_3[\mathcal{G},a,b].$$*
*Proof.* For a point $(x,y)$ on $L$ we have $y\neq 1$. Therefore $z=y-1 \neq 0$ and so $\alpha / z = x-1$. Let $G$ be in $\mathcal{G}$. Along $L$ the Tutte polynomial of $G$ has the form $$T(G;x,y) = T(G; 1+ \alpha / z, 1+z) = \sum_{A \subseteq E(G)}\left(\frac{\alpha}{z}\right)^{\rho(G)-\rho(A)}z^{|A|-\rho(A)} = \sum_{i = -\rho(G)}^{|E(G)|}t_i z^i,$$ for some $t_{-\rho(G)}, \ldots , t_{|E(G)|}$.
We now show that we can determine all of the coefficients $t_i$ from the evaluations $T(G^k;a,b)$ for $k= 1,\ldots, |E(G)|+\rho(G)+1$ in time polynomial in $|E(G)|$. For each such $k$, $G^k$ may be constructed from $G$ in time polynomial in $|E(G)|$ and is bipartite, planar and connected. By Theorem [Theorem 16](#greedoid thickening){reference-type="ref" reference="greedoid thickening"}, we have $$T(G^k;a,b) = (1+b+\ldots + b^{k-1})^{\rho(G)}T\left(G; \frac{a+b+\ldots + b^{k-1}}{1+b+\ldots + b^{k-1}},b^k\right).$$
Since $b \neq -1$, we have $1+b+\ldots + b^{k-1} \neq 0$. Therefore we may compute $$T\left(G; \frac{a+b+\ldots + b^{k-1}}{1+b+\ldots + b^{k-1}},b^k\right)$$ from $T(G^k;a,b)$. The point $\left(\frac{a+b+\ldots + b^{k-1}}{1+b+\ldots + b^{k-1}},b^k\right)$ will also be on the curve $L$ since $$\left(\frac{a+b+\ldots + b^{k-1}}{1+b+\ldots + b^{k-1}}-1\right)(b^k-1) = (a-1)(b-1).$$ As $b \notin \{-1,0,1\}$, for $k=1,2, \ldots, |E(G)|+\rho(G)+1$, the points $\left(\frac{a+b+\ldots + b^{k-1}}{1+b+\ldots + b^{k-1}},b^k\right)$ are pairwise distinct. Therefore by evaluating $T(G^k;a,b)$ for $k=1,\ldots, |E(G)|+\rho(G)+1$, we obtain $\sum_{i = -\rho(G)}^{|E(G)|}t_iz^i$ for $|E(G)|+\rho(G)+1$ distinct values of $z$. This gives us $|E(G)|+\rho(G)+1$ linear equations for the coefficients $t_i$. The matrix of the equations is a Vandermonde matrix and clearly non-singular. So, we may apply Lemma [Lemma 13](#lem:gauss){reference-type="ref" reference="lem:gauss"} to compute $t_i$ for all $i$ in time polynomial in $|E(G)|$. ◻
The next proposition deals with the case when $a =1$. Recall $H_0^x = \{(1,y) : y \in \mathbb{Q}\}$ and $H_0^y = \{(x,1) : x \in \mathbb{Q}\}$.
**Proposition 24**. *Let $L = H_0^x$ and let $b\in \mathbb{Q}-\{-1,0,1\}$. Then $$\pi_2[\mathcal{G},L] \propto_T \pi_3[\mathcal{G},1,b].$$*
*Proof.* Let $G$ be in $\mathcal{G}$. Along $L$ the Tutte polynomial of $G$ has the form $$T(G;1,y) = \sum_{\substack{A \subseteq E(G): \\ \rho(A)=\rho(G)}}(y-1)^{|A|-\rho(G)} = \sum_{i = -\rho(G)}^{|E(G)|}t_i y^i,$$ for some $t_{-\rho(G)}, \ldots , t_{|E(G)|}$.
The proof now follows in a similar way to that of Proposition [Proposition 23](#main proposition 1){reference-type="ref" reference="main proposition 1"} by computing $T(G^k;1,b)$ for $k=1, \ldots, |E(G)|+\rho(G)+1$ and then determining each coefficient $t_i$ in time polynomial in $|E(G)|$. ◻
The following proposition deals with the case when $b=1$.
**Proposition 25**. *Let $L=H_0^y$ and $a \in \mathbb{Q} -\{1\}$. Then $$\pi_2[\mathcal{G},L] \propto_T \pi_3[\mathcal{G},a,1].$$*
*Proof.* Let $G$ be in $\mathcal{G}$. Along $L$ the Tutte polynomial of $G$ has the form $$T(G;x,1) = \sum_{\substack{A \subseteq E(G):\\ \rho(A)=|A|}}(x-1)^{\rho(G)-\rho(A)}= \sum_{i =0}^{\rho(G)}t_i x^i,$$ for some $t_{0}, \ldots , t_{\rho(G)}$.
We now show that we can determine all of the coefficients $t_i$ from the evaluations $T(G^k;a,1)$ for $k= 1,\ldots, \rho(G)+1$ in time polynomial in $|E(G)|$. For each such $k$, $G^k$ may be constructed from $G$ in time polynomial in $|E(G)|$ and is bipartite, planar and connected. By Theorem [Theorem 16](#greedoid thickening){reference-type="ref" reference="greedoid thickening"}, we have $$T(G^k;a,1) = k^{\rho(G)}T\left(G; \frac{a+k-1}{k},1\right).$$
Therefore we may compute $T\left(G; \frac{a+k-1}{k},1\right)$ from $T(G^k;a,1)$. Clearly $\left(\frac{a+k-1}{k},1\right)$ lies on $H_0^y$. Since $a \neq 1$, the points $\left(\frac{a+k-1}{k},1\right)$ are pairwise distinct for $k = 1,2,\ldots, \rho(G)+1$. Therefore by evaluating $T(G^k;a,1)$ for $k=1,\ldots, \rho(G)+1$, we obtain $\sum_{i = 0}^{\rho(G)}t_ix^i$ for $\rho(G)+1$ distinct values of $x$. This gives us $\rho(G)+1$ linear equations for the coefficients $t_i$. Again the matrix of the equations is a Vandermonde matrix and clearly non-singular. So, we may apply Lemma [Lemma 13](#lem:gauss){reference-type="ref" reference="lem:gauss"} to compute $t_i$ for all $i$ in time polynomial in $|E(G)|$. ◻
We now summarize the three preceding propositions.
**Proposition 26**. *Let $L$ be either $H_0^x$, $H_0^y$, or $H_{\alpha}$ for $\alpha \in \mathbb{Q}-\{0\}$. Let $(a,b)$ be a point on $L$ such that $(a,b) \neq (1,1)$ and $b \notin \{-1,0\}$. Then $$\pi_2[\mathcal{G}, L] \propto_T \pi_3[\mathcal{G},a,b].$$*
We now consider the exceptional case when $b=-1$. For reasons that will soon become apparent, we recall from Example [Example 14](#eg:littletrees){reference-type="ref" reference="eg:littletrees"} that $T(P_2;x,y) = x^2y-2xy+x+y$ and $T(S_k;x,y) = x^k$.
**Proposition 27**. *Let $L$ be the line $y=-1$. For $a \notin \{\frac{1}{2},1\}$ we have $$\pi_2[\mathcal{G},L] \propto_T \pi_3[\mathcal{G}, a,-1].$$*
*Proof.* Let $G$ be in $\mathcal G$ and let $z=x-1$. Along $L$ the Tutte polynomial of $G$ has the form $$T(G;x,-1) = \sum_{A \subseteq E(G)}z^{\rho(G)-\rho(A)}(-2)^{|A|-\rho(A)} = \sum^{\rho(G)}_{i=0} t_i z^i$$ for some $t_{0}, \ldots, t_{\rho(G)}$.
We now show that, apart from a few exceptional values of $a$, we can determine all of the coefficients $t_i$ from the evaluations $T(G\sim S_k;a,-1)$, for $k= 0, 1, \ldots, \rho(G)$, in time polynomial in $|E(G)|$. For each such $k$, $G\sim S_k$ may be constructed from $G$ in time polynomial in $|E(G)|$ and is bipartite, planar and connected.
By Theorem [Theorem 19](#attachment function greedoids){reference-type="ref" reference="attachment function greedoids"} we have $$T(G \sim S_k;a,-1) = a^{k \rho(G)}T\left(G;\frac{(a-1)^{k+1}(-1)^k}{a^k}+1,-1 \right).$$ Providing $a \neq 0$ we may compute $T\left(G;\frac{(a-1)^{k+1}(-1)^k}{a^k}+1,-1 \right)$ from $T(G\sim S_k;a,-1)$. For $a \notin\{\frac{1}{2},1\}$ the points $\left(\frac{(a-1)^{k+1}(-1)^k}{a^k}+1,-1\right)$ are pairwise distinct for $k=0,1,2,\ldots, \rho(G)$. Therefore by evaluating $T(G\sim S_k;a,-1)$ for $k=0,1,2,\ldots, \rho(G)$ where $a \notin \{0,\frac{1}{2},1\}$, we obtain $\sum_{i = 0}^{\rho(G)}t_iz^i$ for $\rho(G)+1$ distinct values of $z$. This gives us $\rho(G)+1$ linear equations for the coefficients $t_i$. Again the matrix corresponding to these equations is a Vandermonde matrix and clearly non-singular. So, we may apply Lemma [Lemma 13](#lem:gauss){reference-type="ref" reference="lem:gauss"} to compute $t_i$ for all $i$ in time polynomial in $|E(G)|$. Hence for $a \notin \{0,\frac{1}{2},1\}$, $\pi_2[\mathcal G,L] \propto_T \pi_3[\mathcal G,a,-1]$.
We now look at the case when $a=0$. Note that $T(P_2;0,-1) = -1$. Applying Theorem [Theorem 19](#attachment function greedoids){reference-type="ref" reference="attachment function greedoids"} to $G$ and $P_2$ gives $$T(G\sim P_2;0,-1) = (-1)^{\rho(G)}T\left(G; \frac{(-1)^3(-1)^2}{-1}+1,-1\right)
=(-1)^{\rho(G)}T(G;2,-1).$$ Therefore we have the reductions $$\pi_2[\mathcal{G},L] \propto_T \pi_3[\mathcal{G},2,-1] \propto_T \pi_3[\mathcal{G},0,-1].$$ Since the Turing reduction relation is transitive, this implies that evaluating the Tutte polynomial at the point $(0,-1)$ is at least as hard as evaluating it along the line $y=-1$. This completes the proof. ◻
We now begin to classify the complexity of $\pi_3$. The next results will establish hardness for a few special cases, namely when $b \in \{-1,0,1\}$.
**Proposition 28**. *The problem $\pi_3[\mathcal{G},1,b]$ is $\#$P-hard apart from when $b=1$, in which case it has a polynomial time algorithm.*
*Proof.* The hardness part follows directly from Theorem [Theorem 2](#VW){reference-type="ref" reference="VW"} and Proposition [Proposition 15](#prop:x=1){reference-type="ref" reference="prop:x=1"}. We have already noted the existence of a polynomial time algorithm to solve $\pi_3[\mathcal{G},1,1]$. ◻
**Proposition 29**. *The problem $\pi_3[\mathcal{G},a,-1]$ is $\#$P-hard apart from when $a = 1/2$, in which case it has a polynomial time algorithm.*
*Proof.* First note that there is a polynomial time algorithm for $\pi_3[\mathcal{G},\frac{1}{2},-1]$ because $(\frac{1}{2},-1)$ lies on $H_1$. Now let $L$ be the line $y=-1$. By Proposition [Proposition 27](#prop: b=-1){reference-type="ref" reference="prop: b=-1"} we have $$\pi_2[\mathcal{G},L] \propto_T \pi_3[\mathcal{G},a,-1]$$ for $a \notin \{\frac{1}{2},1\}$. So $$\pi_3[\mathcal{G},1,-1] \propto_T \pi_3[\mathcal{G},a,-1]$$ for $a \neq 1/2$. By Proposition [Proposition 28](#proposition (1,b)){reference-type="ref" reference="proposition (1,b)"} we know that $\pi_3[\mathcal{G},1,-1]$ is $\#$P-hard. So the result follows. ◻
**Proposition 30**. *The problem $\pi_3[\mathcal{G},a,0]$ is $\#$P-hard apart from when $a = 0$, in which case it has a polynomial time algorithm.*
*Proof.* Let $G$ be in $\mathcal{G}$. First note that evaluating the Tutte polynomial of $G$ at the point $(0,0)$ is easy since $(0,0)$ lies on the hyperbola $H_1$.
The rooted graph $G\sim S_1$ may be constructed from $G$ in time polynomial in $|E(G)|$ and is bipartite, planar and connected. Applying Theorem [Theorem 19](#attachment function greedoids){reference-type="ref" reference="attachment function greedoids"} to $G$ and $S_1$ gives $$T(G\sim S_1; a, 0) = a^{\rho(G)}T(G;1,0).$$ Since $a \neq 0$ we may compute $T(G;1,0)$ from $T(G \sim S_1;a,0)$. Therefore $\pi_3[\mathcal{G},1,0] \propto \pi_3[\mathcal{G},a,0]$. By Proposition [Proposition 28](#proposition (1,b)){reference-type="ref" reference="proposition (1,b)"}, $\pi_3[\mathcal{G},1,0]$ is $\#$P-hard, and the result follows. ◻
Recall from Equation [\[characteristic polynomial specialization of tutte\]](#characteristic polynomial specialization of tutte){reference-type="ref" reference="characteristic polynomial specialization of tutte"} that along $y=0$ the Tutte polynomial of a rooted graph specializes to the characteristic polynomial. Therefore we have the following corollary.
**Corollary 31**. *Computing the characteristic polynomial $p(G;k)$ of a connected rooted graph $G$ is $\#$P-hard for all $k \in \mathbb{Q}-\{1\}$. When $k=1$, there is a polynomial time algorithm.*
*Proof.* Let $k$ be in $\mathbb{Q}$. We have $$p(G;k) = (-1)^{\rho(G)}T(G;1-k,0).$$ By Proposition [Proposition 30](#prop: b=0){reference-type="ref" reference="prop: b=0"} evaluating $T(G;1-k,0)$ is $\#$P-hard providing $k \neq 1$. Furthermore when $k=1$ we have $$p(G;1) = (-1)^{\rho(G)}T(G;0,0) = \left\{ \begin{array}{ll} 1 & \text{ if $G$ is edgeless; }\\ 0 & \text{ otherwise, }\\
\end{array}\right.$$ and so it is easy to compute (as expected since $(0,0)$ lies on $H_1$). ◻
We now consider points along the line $y=1$.
**Proposition 32**. *The problem $\pi_3[\mathcal{G},a,1]$ is $\#$P-hard when $a \neq 1$.*
*Proof.* Let $G$ be a connected, planar, bipartite, unrooted graph with $V(G)=\{v_1, \ldots, v_n\}$. Now for $1\leq j\leq n$, let $G_j$ be the graph in $\mathcal G$ obtained from $G$ by choosing $v_j$ to be the root. Let $\rho_j$ denote the rank function of $G_j$ and $a_i(G_j)$ be the number of subsets $A$ of the edges of $G_j$ having size $i$ so that the root component of $G_j|A$ is a tree containing every edge of $A$. Then $$T(G_j;x,1) = \sum_{\substack{A \subseteq E: \\ \rho_j(A)=|A|}}(x-1)^{\rho(G_j)-|A|} =\sum_{i=0}^{\rho(G_j)}a_i(G_j)(x-1)^{\rho(G_j)-i}.$$ Let $a_i(G)$ denote the number of subtrees of $G$ with $i$ edges. Then $$a_i(G) = \sum_{j=1}^n \frac{a_i(G_j)}{i+1}.$$ This is because every subtree $T$ of $G$ with $i$ edges has $i+1$ vertices and its edge set is one of the sets $A$ contributing to $a_i(G_j)$ for the $i+1$ choices of $j$ corresponding to its vertices.
Given an oracle for $\pi_2[\mathcal{G},H_0^y]$, we can compute $a_i(G_j)$ for $i=0,\ldots,|E(G)|$ and $1\leq j \leq n$ in time polynomial in $|E(G)|$. So we can compute $a_i(G)$ and consequently the number of subtrees of $G$ in time polynomial in $|E(G)|$. Thus $$\#\text{SUBTREES} \propto_T \pi_2[\mathcal{G},H_0^y].$$ By Proposition [Proposition 26](#main subtheorem){reference-type="ref" reference="main subtheorem"} we have $$\#\text{SUBTREES} \propto_T\pi_2[\mathcal{G},H_0^y] \propto_T \pi_3[\mathcal{G},a,1]$$ for $a\ne 1$. The result now follows from Proposition [Proposition 22](#prop:jerrum){reference-type="ref" reference="prop:jerrum"}. ◻
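To illustrate the identity $a_i(G) = \sum_{j=1}^n a_i(G_j)/(i+1)$ used in the preceding proof (an example only), let $G$ be the path with vertices $v_1,v_2,v_3$ and edges $v_1v_2$ and $v_2v_3$. Then $a_1(G)=2$, while rooting at $v_1$, $v_2$ and $v_3$ in turn gives $a_1(G_1)=1$, $a_1(G_2)=2$ and $a_1(G_3)=1$, so that $$\sum_{j=1}^{3}\frac{a_1(G_j)}{2} = \frac{1+2+1}{2} = 2 = a_1(G),$$ each single-edge subtree being counted once from each of its two vertices.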
We now summarize our results and prove Theorem [Theorem 3](#maintheoremrootedgraph){reference-type="ref" reference="maintheoremrootedgraph"}.
*Proof of Theorem [Theorem 3](#maintheoremrootedgraph){reference-type="ref" reference="maintheoremrootedgraph"}.* Let $(a,b)$ be a point on $H_{\alpha}$ for some $\alpha$ in $\mathbb{Q} - \{0,1\}$. By Proposition [Proposition 26](#main subtheorem){reference-type="ref" reference="main subtheorem"} we have $\pi_2[\mathcal{G}, H_{\alpha}] \propto_T \pi_3[\mathcal{G},a,b]$ providing $b \notin \{-1,0\}$. The hyperbola $H_{\alpha}$ crosses the $x$-axis at the point $(1-\alpha, 0)$. By Proposition [Proposition 30](#prop: b=0){reference-type="ref" reference="prop: b=0"} the problem $\pi_3[\mathcal{G},1-\alpha,0]$ is $\#$P-hard since $\alpha \neq 1$. This gives us a $\#$P-hard point on each of these curves and therefore implies $\pi_2[\mathcal{G},H_{\alpha}]$ is $\#$P-hard for $\alpha \in \mathbb{Q}-\{0,1\}$. Hence $\pi_3[\mathcal{G},a,b]$ is $\#$P-hard for $(a,b)\in H_{\alpha}$ with $\alpha \in \mathbb{Q}-\{0,1\}$ and $b \neq -1$. The rest of the proof now follows directly by Propositions [Proposition 28](#proposition (1,b)){reference-type="ref" reference="proposition (1,b)"}, [Proposition 29](#proposition (a,-1)){reference-type="ref" reference="proposition (a,-1)"} and [Proposition 32](#proposition (a,1)){reference-type="ref" reference="proposition (a,1)"}, and the discussion concerning the easy points at the beginning of the section. ◻
# Rooted Digraphs {#sec:rooted digraph}
In this section we let $\mathbb G$ be the class of directed branching greedoids of root-connected rooted digraphs, a class we denote by $\mathcal D$. We consider the same three problems as in the previous section. Again, it is more convenient to think of the input as being a root-connected rooted digraph rather than its directed branching greedoid. We present analogous results to those in the previous section by finding the computational complexity of evaluating the Tutte polynomial of a root-connected digraph at a fixed rational point, eventually proving Theorem [Theorem 4](#maintheoremdigraph){reference-type="ref" reference="maintheoremdigraph"}.
We begin the proof by examining the easy points. Let $D$ be a rooted digraph with edge set $E$ and rank function $\rho$. If a point $(a,b)$ lies on the hyperbola $H_1$ then, following the remarks at the end of Section [3](#The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph){reference-type="ref" reference="The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph"}, $T(D;a,b)$ is easily computed. We now show that evaluating $T(D;a,0)$ is easy for all $a \in \mathbb{Q}$. A *sink* in a digraph is a non-isolated vertex with no outgoing edges. Suppose that $D$ is a root-connected, rooted digraph with $s$ sinks. Then Gordon and McMahon [@GordonMcMahon3] have shown that its characteristic polynomial $p$ satisfies the following.
$$p(D;\lambda) = \begin{cases} (-1)^{\rho(D)}(1-\lambda)^s & \text{if $D$ is acyclic;} \\
0 & \text{if $D$ has a directed cycle.} \end{cases}$$ Using the relation $T(D;1-\lambda,0) = (-1)^{\rho(D)}p(D;\lambda)$ we see that $$T(D;x,0) = \begin{cases} x^s & \text{if $D$ is acyclic;} \\
0 & \text{if $D$ has a directed cycle.} \end{cases}$$ It is easy to count the sinks in a digraph so the problem $\pi_3[\mathcal{D},a,0]$ can be solved in polynomial time for any $a \in \mathbb{Q}$. Every edge of a component of a rooted digraph other than the root component is a greedoid loop, so if $D$ has such an edge then $T(D;1-\lambda,0)=0$. Furthermore, the addition or removal of isolated vertices makes no difference to $T(D)$. So $T(D;a,0)$ can be computed in polynomial time for the class of all rooted digraphs.
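As a quick illustration (a routine check, not needed later), let $D$ be the directed path $r\to u\to v$ rooted at $r$, with edges $e_1=ru$ and $e_2=uv$, so that $D$ is acyclic with the single sink $v$. The four subsets of $\{e_1,e_2\}$ contribute $$T(D;x,0) = (x-1)^2 + (x-1) - (x-1)^2 + 1 = x,$$ the third term coming from $\{e_2\}$, which has rank $0$ and so carries a factor $(0-1)^{1}$; this agrees with $T(D;x,0)=x^s$ for $s=1$.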
We noted in Section [3](#The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph){reference-type="ref" reference="The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph"} that $T(D;1,1)$ is the number of spanning arborescences of the root component of $D$ rooted at $r$. This can be computed in polynomial time using the Matrix--Tree theorem for directed graphs [@MR1579119; @MR27521].
We now move on to consider the hard points. The $k$-thickening operation will again be crucial: the *$k$-thickening* $D^k$ of a root-connected digraph $D$ is obtained by replacing every edge $e$ in $D$ by $k$ parallel edges that have the same direction as $e$. We have $\Gamma(D^k)\cong (\Gamma(D))^k$, so Theorem [Theorem 16](#greedoid thickening){reference-type="ref" reference="greedoid thickening"} can be applied to give an expression for $T(D^k)$.
The proof of the following proposition is omitted as it is analogous to that of Proposition [Proposition 26](#main subtheorem){reference-type="ref" reference="main subtheorem"}.
**Proposition 33**. *Let $L$ be either $H_0^x, H_0^y$, or $H_{\alpha}$ for $\alpha \in \mathbb{Q}-\{0\}$. Let $(a,b)$ be a point on $L$ such that $(a,b) \neq (1,1)$ and $b \notin \{-1,0\}$. Then $$\pi_2[\mathcal{D},L] \propto_T \pi_3[\mathcal{D},a,b].$$*
We let $\overrightarrow{P_k}$ be the root-connected directed path of length $k$ with the root being one of the leaves and $\overrightarrow{S_k}$ be the root-connected directed star with $k$ edges emanating from the root. Then $T(\overrightarrow{P_k};x,y) = 1+\sum_{i=1}^k(x-1)^iy^{i-1}$ and $T(\overrightarrow{S_k};x,y)=x^k$. The proof of the following proposition is analogous to that of Proposition [Proposition 27](#prop: b=-1){reference-type="ref" reference="prop: b=-1"} with $\overrightarrow{P_k}$ and $\overrightarrow{S_k}$ playing the roles of $P_k$ and $S_k$.
**Proposition 34**. *Let $L$ be the line $y=-1$. For $a \notin \{\frac{1}{2},1\}$ we have $$\pi_2[\mathcal{D},L] \propto_T \pi_3[\mathcal{D},a,-1].$$*
Next we classify the complexity of $\pi_3[\mathcal{D},1,b]$ for $b \notin \{0,1\}$. Suppose we have a root-connected digraph $D$ and generate a random subgraph $(D,p)$ of $D$ by deleting each edge with probability $p$ independently of all the other edges. Let $g(D;p)$ denote the probability that $(D,p)$ is root-connected and let $g_j$ be the number of subsets $A$ of $E(D)$ with size $j$ so that $D|A$ is root-connected. Notice that $g_j$ is equal to the number of subsets $A$ of $E$ with $|A|=j$ and $\rho(A)=\rho(E)$. Then $$g(D;p) = \sum_{j=0}^{|E(D)|}g_j p^{|E(D)|-j}(1-p)^{j}.$$
Provan and Ball [@ProvanBall] showed that the following problem is $\#$P-complete for each rational $p$ with $0 < p < 1$, and computable in polynomial time when $p=0$ or $p=1$.
$\#$[Connectedness Reliability]{.smallcaps}\
**Input:** $D \in \mathcal{D}$.\
**Output:** $g(D;p)$.
Note that we have restricted the input digraph to being root-connected, which Provan and Ball did not, but this makes no difference to the complexity because if $D$ is not root-connected then clearly $g(D;p)=0$. We now use this result to classify the complexity of points along the line $x=1$.
**Proposition 35**. *The computational problem $\pi_3[\mathcal{D},1,b]$ is $\#$P-hard for $b>1$.*
*Proof.* Let $D$ be a root-connected digraph with edge set $E$ and rank function $\rho$. Then for $0<p<1$ we have $$\begin{aligned}
g(D;p)&= \sum_{\substack{ A \subseteq E(D): \\ \rho(A) = \rho(D)}}p^{|E(D)|-|A|}(1-p)^{|A|}= p^{|E(D)|-\rho(D)}(1-p)^{\rho(D)}\sum_{\substack{ A \subseteq E(D): \\ \rho(A)=\rho(D)}}\left(\frac{1-p}{p}\right)^{|A|-\rho(A)}\\
&=p^{|E(D)|-\rho(D)}(1-p)^{\rho(D)}T\left(D;1,\frac{1}{p}\right). \end{aligned}$$
Evaluating $g(D;p)$ is therefore Turing-reducible to evaluating $T(D;1,\frac{1}{p})$ for $0<p<1$. Given a rational $b>1$, taking $p=\frac{1}{b}$ shows that $\pi_3[\mathcal{D},1,b]$ is $\#$P-hard. ◻
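For instance, taking $p=\frac12$ above gives $g(D;\frac12) = 2^{-|E(D)|}T(D;1,2)$, so $T(D;1,2)$ is the number of subsets $A$ of $E(D)$ with $\rho(A)=\rho(D)$, that is, the number of spanning root-connected subdigraphs of $D$; in particular, this counting problem is $\#$P-hard.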
In order to determine the complexity of the point $\pi_3[\mathcal{D},1,-1]$, we introduce a new operation on root-connected digraphs which we call the *$k$-digon-stretch*. We define a *tailed $k$-digon* from $u$ to $v$ to be the digraph defined as follows. The vertex set is $\{w_0=u,w_1,\ldots,w_k,w_{k+1}=v\}$. There is an edge $w_0w_1$ and a directed cycle of length $2$ on $w_i$ and $w_{i+1}$ for each $i$ with $1\leq i \leq k$. An example of a tailed $k$-digon is shown in Figure [\[fig:ktaildigon\]](#fig:ktaildigon){reference-type="ref" reference="fig:ktaildigon"}. (The labelling of the edges will be needed later.) For a root-connected digraph $D$, the $k$-digon-stretch of $D$ is constructed by replacing every directed edge $uv$ in $D$ by a tailed $k$-digon from $u$ to $v$. We denote the $k$-digon-stretch of $D$ by $D_k$.
**Theorem 36**. *Let $D$ be a root-connected digraph. Then $$T(D_k;1,y)=(k+1)^{|E(D)|-\rho(D)}y^{k |E(D)|}T\left(D;1,\frac{k+y}{k+1}\right).$$*
*Proof.* Let $S$ be a subset of edges of a tailed $k$-digon from $u$ to $v$. If $S$ contains all the edges on the unique directed $uv$-path through the tailed $k$-digon, then $S$ is said to *admit a $uv$-dipath*. Let $A$ be a subset of $E(D_k)$ and $P(A)$ be the set of edges $uv$ in $D$ for which $A$ admits a $uv$-dipath.
We have $\rho(A) = \rho(D_k)$ if and only if
1. for each directed edge $uv$ of $D$ and each vertex $w$ of the corresponding tailed $k$-digon from $u$ to $v$ in $D_k$, $A$ includes the edges of a path in the tailed $k$-digon from either $u$ or $v$ to $w$, and
2. $\rho(P(A))=\rho(D)$.
Note that $\rho(D_k)= k|E(D)| + \rho(D)$. We can write $A$ as the disjoint union $A = \bigcup_{e \in E(D)}A_e$ where $A_e$ is the set of edges of $A$ belonging to the tailed $k$-digon corresponding to $e$. The Tutte polynomial of $D_k$ along the line $x=1$ is given by $$\begin{aligned}
&T(D_k;1,y)=\sum_{\substack{ A \subseteq E(D_k):\\ \rho(A)=\rho(D_k)}}(y-1)^{|A|-\rho(D_k)} = \sum_{\substack{ B \subseteq E(D):\\ \rho(B)=\rho(D)}}\sum_{\substack{ A \subseteq E(D_k):\\ \rho(A)=\rho(D_k)\\P(A)=B}}(y-1)^{|A|-\rho(D_k)} \notag \\
&=\sum_{\substack{ B \subseteq E(D):\\ \rho(B)=\rho(D)}}\sum_{\substack{ A \subseteq E(D_k):\\ \rho(A)=\rho(D_k),\\ P(A)=B}}\underbrace{\left(\prod_{\substack{e \in E(D):\\ e \notin P(A)}}(y-1)^{|A_e|-k}\right)}_{(1)}\underbrace{\left(\prod_{\substack{e \in E(D):\\ e \in P(A)}}(y-1)^{|A_e|-(k+1)}\right)}_{(2)}(y-1)^{|P(A)|- \rho(D)}.
\label{end line in digon formula}\end{aligned}$$ Consider a tailed $k$-digon from $u$ to $v$ with vertex set labelled as described just before the statement of the theorem. For $0\leq i \leq k$, let $p_i$ denote the edge $w_iw_{i+1}$; for $1\leq i \leq k$, let $q_i$ denote the edge $w_{i+1}w_{i}$.
In the first product above we are considering edges $e=uv$ for which $e\notin P(A)$. Thus $A_e$ does not contain all of $p_0$, ..., $p_k$. Let $j$ be the smallest integer such that $p_j\notin A_e$. As we are only interested in sets $A$ with $\rho(A)=\rho(D_k)$, each of $q_{j+1}$, ..., $q_k$ belongs to $A_e$. Thus $|A_e|\geq k$. Moreover each of $p_{j+1}$, ..., $p_k$ and $q_1$, ..., $q_j$ may or may not belong to $A_e$. As there are $k+1$ possibilities for $j$, summing $$\prod_{\substack{e \in E(D):\\ e \notin P(A)}}(y-1)^{|A_e|-k}$$ over all possible choices of $A_e$ for $e\notin P(A)$ gives $\left((k+1)y^k\right)^{|E(D)|-|P(A)|}$.
In the second product above we are considering edges $e=uv$ for which $e\in P(A)$. Thus $A_e$ contains all of $p_0$, ..., $p_k$. So $|A_e|\geq k+1$. Moreover each of $q_1$, ..., $q_k$ may or may not belong to $A_e$. Summing $$\prod_{\substack{e \in E(D):\\ e \in P(A)}}(y-1)^{|A_e|-(k+1)}$$ over all possible choices of $A_e$ for $e\in P(A)$ gives $y^{k|P(A)|}$.
Thus the right side of Equation [\[end line in digon formula\]](#end line in digon formula){reference-type="ref" reference="end line in digon formula"} becomes $$\begin{aligned}
\MoveEqLeft{\sum_{\substack{ B \subseteq E(D):\\ \rho(B)=\rho(D)}}y^{k|B|}\left((k+1)y^k\right)^{|{E(D)}|-|B|}(y-1)^{|B|- \rho(D)}}\\
&=y^{k|E(D)|}\sum_{\substack{ B \subseteq E(D):\\ \rho(B)=\rho(D)}}(k+1)^{\rho(B)-|B|+|E(D)|-\rho(D)}(y-1)^{|B|-\rho(B)}\\
&=y^{k|E(D)|}(k+1)^{|E(D)|-\rho(D)}T\left(D;1,\frac{y+k}{k+1}\right).\end{aligned}$$ ◻
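As a small sanity check of Theorem [Theorem 36](#digon-stretch){reference-type="ref" reference="digon-stretch"} (not needed in what follows), let $D=\overrightarrow{P_1}$, a single edge directed away from the root, so that $|E(D)|=\rho(D)=1$ and $T(D;x,y)=x$. Taking $k=1$, the digraph $D_1$ is a tailed $1$-digon from the root to the other vertex and, in the notation of the proof above, its subsets of full rank are $\{p_0,p_1\}$ and $\{p_0,p_1,q_1\}$, so that $$T(D_1;1,y) = 1+(y-1) = y = (1+1)^{0}\,y^{1}\,T\!\left(D;1,\tfrac{1+y}{2}\right),$$ in agreement with the theorem.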
We now complete the classification of complexity for points on the line $H_0^x$.
**Proposition 37**. *The problem $\pi_3[\mathcal{D},1,b]$ is $\#$P-hard for $b\notin \{0,1\}$.*
*Proof.* For $b \notin \{-1,0,1\}$ the result follows immediately from Propositions [Proposition 33](#digraph reduction 1){reference-type="ref" reference="digraph reduction 1"} and [Proposition 35](#digraph hard b>1){reference-type="ref" reference="digraph hard b>1"}. By Theorem [Theorem 36](#digon-stretch){reference-type="ref" reference="digon-stretch"}, if $D$ is root-connected, then $$T(D_2;1,-1)= 3^{|E(D)|-\rho(D)}T\left(D;1,\frac{1}{3}\right).$$ As $D_2$ is root-connected and can be constructed from $D$ in polynomial time, $\pi_3[\mathcal D,1,\frac 13] \propto_T \pi_3[\mathcal D,1,-1]$. Since $\frac13 \notin \{-1,0,1\}$, the first part of the proof shows that $\pi_3[\mathcal D,1,\frac 13]$ is $\#$P-hard, and so $\pi_3[\mathcal D,1,-1]$ is $\#$P-hard. ◻
We now show that evaluating the Tutte polynomial of a root-connected digraph at most points on the hyperbola $H_{\alpha}$ for $\alpha \neq 0$ is at least as hard as evaluating it at the point $(1+\alpha,2)$.
**Proposition 38**. *Let $\alpha$ be in $\mathbb{Q}-\{0\}$ and $(a,b)$ be a point on $H_{\alpha}$ with $b \notin \{-1,0\}$, then $$\pi_3[\mathcal{D},1+\alpha,2] \propto_T \pi_3[\mathcal{D},a,b].$$*
*Proof.* For $\alpha$ in $\mathbb{Q} -\{0\}$, the hyperbola $H_{\alpha}$ crosses the line $y=2$ at the point $(1+\alpha, 2)$. By Proposition [Proposition 33](#digraph reduction 1){reference-type="ref" reference="digraph reduction 1"}, we know that for any point $(a,b)$ on $H_{\alpha}$ with $b \notin \{-1,0\}$ we have $\pi_3[\mathcal{D},1+\alpha,2]\propto_T \pi_2[\mathcal{D},H_{\alpha}] \propto_T \pi_3[\mathcal{D},a,b]$. ◻
We will now show that evaluating the Tutte polynomial of a root-connected digraph at most of the points on the line $y=2$ is $\#$P-hard. This will enable us to classify the complexity of most points lying on the hyperbola $H_{\alpha}$ for all $\alpha \in \mathbb{Q}-\{0\}$.
**Proposition 39**. *The problem $\pi_3[\mathcal{D},a,2]$ is $\#$P-hard for $a \neq 2$.*
*Proof.* We begin by proving that when $L$ is the line $y=2$ we have $$\pi_2[\mathcal{D},L] \propto_T \pi_3[\mathcal{D},a,2]$$ for $a \notin \{1,2\}$. Let $D$ be a root-connected digraph and let $z = x-1$. Along $L$ the Tutte polynomial of $D$ has the form $$T(D;x,2) = \sum_{A \subseteq E(D)}z^{\rho(D)-\rho(A)} = \sum_{i=0}^{\rho(D)}t_iz^i$$ for some $t_0,t_1, \ldots, t_{\rho(D)}$. We will now show that for most values of $a$, we may determine all of the coefficients $t_i$ in polynomial time from $T(D\sim \overrightarrow{S_k};a,2)$ for $k=0,1, \ldots, \rho(D)$. For each such $k$, $D\sim \overrightarrow{S_k}$ is root-connected and can be constructed in polynomial time. By Theorem [Theorem 19](#attachment function greedoids){reference-type="ref" reference="attachment function greedoids"}, we have $$T(D \sim \overrightarrow{S_k};a,2) = a^{k \rho(D)}T\left(D; \frac{2^k(a-1)^{k+1}}{a^k}+1,2\right).$$ Therefore we may compute $T\left(D; \frac{2^k(a-1)^{k+1}}{a^k}+1,2\right)$ from $T(D \sim \overrightarrow{S_k};a,2)$ when $a \neq 0$. For $a \notin \{0,\frac{2}{3},1,2\}$ the values of $\left(\frac{2^k(a-1)^{k+1}}{a^k}+1,2\right)$ are pairwise distinct for $k=0,1, \ldots, \rho(D)$. Therefore by evaluating $T(D \sim \overrightarrow{S_k};a,2)$ for $k=0, 1, \ldots, \rho(D)$ where $a \notin \{0,\frac{2}{3},1,2\}$, we obtain $\sum_{i=0}^{\rho(D)}t_iz^i$ for $\rho(D)+1$ distinct values of $z$. This gives us $\rho(D)+1$ linear equations for the coefficients $t_i$, and so by Lemma [Lemma 13](#lem:gauss){reference-type="ref" reference="lem:gauss"}, they may be recovered in polynomial time. Hence evaluating the Tutte polynomial of a root-connected digraph along the line $y=2$ is Turing-reducible to evaluating it at the point $(a,2)$ for $a \notin \{0,\frac{2}{3},1,2\}$.
We now consider the cases where $a=0$ or $a=\frac{2}{3}$. The digraph $D\sim \overrightarrow{P_2}$ is root-connected and may be constructed in polynomial time. Note that $T(\overrightarrow{P_2};0,2)=2$ and $T(\overrightarrow{P_2};\frac{2}{3},2)=\frac{8}{9}$. By Theorem [Theorem 19](#attachment function greedoids){reference-type="ref" reference="attachment function greedoids"}, we have $$T(D\sim \overrightarrow{P_2};0,2) = 2^{\rho(D)}T\left(D;\frac{(-1)^32^2}{2}+1,2\right)
= 2^{\rho(D)}T(D;-1,2).$$ Therefore $\pi_3[\mathcal{D},-1,2] \propto_T \pi_3[\mathcal{D},0,2]$. Similarly we have $$T\left(D\sim \overrightarrow{P_2};\frac{2}{3},2\right) = \left(\frac{8}{9}\right)^{\rho(D)}T\left(D;\frac{(-\frac{1}{3})^32^2}{8/9}+1,2\right)= \left(\frac{8}{9}\right)^{\rho(D)}T\left(D;\frac{5}{6},2\right).$$ Therefore $\pi_3[\mathcal{D},5/6,2] \propto_T \pi_3[\mathcal{D},2/3,2]$. Putting all this together we get $\pi_2[\mathcal{D},L] \propto_T \pi_3[\mathcal{D},a,2]$ for all $a$ in $\mathbb Q-\{1,2\}$. Consequently $\pi_3[\mathcal{D},1,2] \propto_T \pi_3[\mathcal{D},a,2]$, for all $a$ in $\mathbb Q-\{2\}$.
By Proposition [Proposition 37](#digraph hard x=1){reference-type="ref" reference="digraph hard x=1"}, we know that $\pi_3[\mathcal{D},1,2]$ is $\#$P-hard. This completes the proof. ◻
**Theorem 40**. *Let $\alpha$ be in $\mathbb{Q}-\{0,1\}$ and $(a,b)$ be a point on $H_{\alpha}$ with $b \neq 0$. Then $\pi_3[\mathcal{D},a,b]$ is $\#$P-hard.*
*Proof.* Suppose first that $b\ne -1$. By Proposition [Proposition 38](#digraph reduction to y=2){reference-type="ref" reference="digraph reduction to y=2"}, $\pi_3[\mathcal{D},1+\alpha,2] \propto_T \pi_3[\mathcal{D},a,b]$. As $\alpha \ne 1$, Proposition [Proposition 39](#digraph hard along y=2){reference-type="ref" reference="digraph hard along y=2"} implies that $\pi_3[\mathcal{D},a,b]$ is $\#$P-hard.
Now suppose that $b=-1$. As $(a,b) \notin H_1$, we have $a\ne \frac 12$. So by Proposition [Proposition 34](#digraph reduction 2){reference-type="ref" reference="digraph reduction 2"}, $\pi_3[\mathcal{D},1,-1] \propto_T \pi_3[\mathcal{D},a,-1]$. By Proposition [Proposition 37](#digraph hard x=1){reference-type="ref" reference="digraph hard x=1"}, $\pi_3[\mathcal{D},1,-1]$ is $\#$P-hard. Therefore $\pi_3[\mathcal{D},a,-1]$ is $\#$P-hard. ◻
The only remaining points we need to classify are those lying on the line $y=1$. To do this we prove that the problem of evaluating the Tutte polynomial of a root-connected digraph at most fixed points along this line is at least as hard as the analogous problem for rooted graphs.
**Theorem 41**. *The problem $\pi_3[\mathcal{D},a,1]$ is $\#$P-hard for $a$ in $\mathbb{Q}-\{1\}$.*
*Proof.* Let $G$ be a connected rooted graph with root $r$. Construct a rooted digraph $D$ with root $r$ by replacing every edge of $G$ by a pair of oppositely directed edges. Then $D$ is root-connected and can be constructed from $G$ in polynomial time. We can define a natural map $f:2^{E(D)} \rightarrow 2^{E(G)}$ so that $f(A)$ is the set of edges of $G$ for which at least one corresponding directed edge is included in $A$.
If $\rho_{G}(A)=|A|$ then the root component of $G|A$ is a tree and includes all the edges of $A$. Similarly if $\rho_D(A')=|A'|$ then the root component of $D|A'$ is an arborescence rooted at $r$ and includes all the edges of $A'$. For every subset $A$ of $E$ with $\rho_G(A) = |A|$, there is precisely one choice of $A'$ with $\rho_D(A')=|A'|$ and $f(A')=A$, obtained by directing all the edges of $A$ away from $r$. Thus there is a one-to-one correspondence between subsets $A$ of $E$ with $\rho_G(A) = |A|$ and subsets $A'$ of $E(D)$ with $\rho_D(A')=|A'|$, and this correspondence preserves the sizes of the sets. Therefore we have $$T(D;x,1) = \sum_{\substack{A'\subseteq E(D):\\ |A'|=\rho_D(A')}}(x-1)^{\rho(D)-|A'|} = \sum_{\substack{A \subseteq E:\\ |A|=\rho_G(A)}}(x-1)^{\rho(G)-|A|}= T(G;x,1).$$ So $\pi_3[\mathcal{G},a,1] \propto_T \pi_3[\mathcal{D},a,1]$. So by Proposition [Proposition 32](#proposition (a,1)){reference-type="ref" reference="proposition (a,1)"}, we deduce that $\pi_3[\mathcal{D},a,1]$ is $\#$P-hard for $a \neq 1$. ◻
# Binary Greedoids {#sec:binarygreedoids}
In our final section we let $\mathbb G$ be the class of binary greedoids. We present analogous results to those in the previous section by finding the computational complexity of evaluating the Tutte polynomial of a binary greedoid at a fixed rational point, eventually proving Theorem [Theorem 5](#maintheorembinarygreedoid){reference-type="ref" reference="maintheorembinarygreedoid"}. As before, it is convenient to think of the input as being a binary matrix rather than its binary greedoid.
We begin by examining the easy points of Theorem [Theorem 5](#maintheorembinarygreedoid){reference-type="ref" reference="maintheorembinarygreedoid"}. Let $\Gamma$ be a binary greedoid with element set $E$ and rank function $\rho$. If a point $(a,b)$ lies on the hyperbola $H_1$ then, following the remarks at the end of Section [3](#The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph){reference-type="ref" reference="The Tutte Polynomial of a Rooted Graph and of a Rooted Digraph"}, $T(\Gamma;a,b)$ is easily computed.
We now focus on the hard points. The $k$-thickening operation will again be crucial. Given a binary matrix $M$, the $k$-thickening $M^k$ of $M$ is obtained by replacing each column of $M$ by $k$ copies of the column. We have $\Gamma(M^k) = (\Gamma(M))^k$, so Theorem [Theorem 16](#greedoid thickening){reference-type="ref" reference="greedoid thickening"} can be applied to compute $T(M^k)$ in terms of $T(M)$. Let $I_k$ denote the $k\times k$ identity matrix. Then $\Gamma(I_k)\cong \Gamma(P_k)$, so $T(I_k)=T(P_k)=1 +\sum_{j=1}^k(x-1)^jy^{j-1}$.
The proof of the following proposition is analogous to that of Proposition [Proposition 26](#main subtheorem){reference-type="ref" reference="main subtheorem"}, thus we omit it.
**Proposition 42**. *Let $L$ be either $H_0^x, H_0^y$, or $H_{\alpha}$ for $\alpha \in \mathbb{Q}-\{0\}$. Let $(a,b)$ be a point on $L$ such that $(a,b) \neq (1,1)$ and $b \notin \{-1,0\}$. Then $$\pi_2[\mathcal{B},L] \propto_T \pi_3[\mathcal{B},a,b].$$*
A binary matroid is a matroid that can be represented over the finite field $\mathbb{Z}_2$. Every graphic matroid is also binary, so Theorem [Theorem 2](#VW){reference-type="ref" reference="VW"} and Lemma [Lemma 11](#binary greedoid to matroid){reference-type="ref" reference="binary greedoid to matroid"} imply that $\pi_3[\mathcal{B},1,b]$ is $\#$P-hard providing $b\neq 1$. This immediately gives the following.
**Proposition 43**. *The problem $\pi_3[\mathcal{B},1,b]$ is $\#$P-hard for all $b$ in $\mathbb Q-\{1\}$.*
The following result has been announced by Vertigan in [@zbMATH00750655] and slightly later in [@zbMATH01464264], but up until now no written proof has been published. For completeness, we provide a proof in Appendix [8](#sec:appendix){reference-type="ref" reference="sec:appendix"}.
**Theorem 44** (Vertigan). *Evaluating the Tutte polynomial of a binary matroid is $\#$P-hard at the point $(1,1)$.*
Using this result, we are able to fill in the missing point $(1,1)$ from the previous result and also establish hardness along the line $y=1$.
**Proposition 45**. *The problem $\pi_3[\mathcal{B},a,1]$ is $\#$P-hard for all $a$.*
*Proof.* By Proposition [Proposition 42](#binary greedoid reduction 1){reference-type="ref" reference="binary greedoid reduction 1"} we have $\pi_2[\mathcal{B},H_0^y] \propto_T \pi_3[\mathcal{B},a,1]$ for $a \neq 1$. The result now follows from Theorem [Theorem 44](#thm:Vertigan){reference-type="ref" reference="thm:Vertigan"}. ◻
**Proposition 46**. *Let $\Gamma$ be a binary greedoid and let $\Gamma'=\Gamma(I_k)$. Then $$T(\Gamma\approx \Gamma';x,y) = T(\Gamma;x,y)(x-1)^ky^k + T(\Gamma;1,y)\Big(1+ \sum_{j=1}^k (x-1)^jy^{j-1}-(x-1)^ky^k\Big).$$*
*Proof.* The proof follows immediately from Theorem [Theorem 20](#thm:fullrankattach){reference-type="ref" reference="thm:fullrankattach"}. ◻
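For later use we record the case $k=1$ and $y=-1$ of the preceding proposition (a routine substitution): with $\Gamma'=\Gamma(I_1)$, $$T(\Gamma\approx \Gamma';x,-1) = -(x-1)T(\Gamma;x,-1) + (2x-1)T(\Gamma;1,-1),$$ or equivalently $(2x-1)T(\Gamma;1,-1) = T(\Gamma\approx \Gamma';x,-1) + (x-1)T(\Gamma;x,-1)$; this identity is used below.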
We now classify the complexity of $\pi_3[\mathcal{B},a,b]$ when $b=0$ or $b=-1$.
**Proposition 47**. *The problem $\pi_3[\mathcal{B},a,0]$ is $\#$P-hard for all $a \neq 0$.*
*Proof.* Let $M$ be a binary matrix with linearly independent rows. Then from Theorem [Theorem 20](#thm:fullrankattach){reference-type="ref" reference="thm:fullrankattach"}, we have $T(M\approx I_1;a,0)=aT(M;1,0)$. Therefore when $a \neq 0$ we have $\pi_3[\mathcal{B},1,0] \propto_T \pi_3[\mathcal{B},a,0]$. The result now follows from Proposition [Proposition 43](#binary greedoid hard along $x=1$){reference-type="ref" reference="binary greedoid hard along $x=1$"}. ◻
**Proposition 48**. *The problem $\pi_3[\mathcal{B},a,-1]$ is $\#P$-hard for all $a\neq \frac 12$.*
*Proof.* Let $M$ be a binary matrix with linearly independent rows. We have $$(2a-1)T(M;1,-1) = T(M\approx I_1;a,-1) + (a-1)T(M;a,-1).$$ Since $a \neq \frac12$, it follows that $\pi_3[\mathcal B,1,-1]\propto_T \pi_3[\mathcal B,a,-1]$. By Proposition [Proposition 43](#binary greedoid hard along $x=1$){reference-type="ref" reference="binary greedoid hard along $x=1$"}, we deduce that $\pi_{3}[\mathcal B,a,-1]$ is $\#$P-hard. ◻
Our final result completes the proof of Theorem [Theorem 5](#maintheorembinarygreedoid){reference-type="ref" reference="maintheorembinarygreedoid"}.
**Theorem 49**. *Let $(a,b)$ be a point in $H_{\alpha}$ for $\alpha \in \mathbb{Q}-\{0,1\}$ with $b\neq -1$. Then $\pi_3[\mathcal{B},a,b]$ is $\#$P-hard.*
*Proof.* For $\alpha \in \mathbb{Q}-\{0,1\}$, the hyperbola $H_{\alpha}$ crosses the $x$-axis at the point $(1-\alpha,0)$, and since $\alpha \neq 1$ we have $1-\alpha \neq 0$, so $\pi_3[\mathcal{B},1-\alpha,0]$ is $\#$P-hard by Proposition [Proposition 47](#binary greedoid hard along $y=0$){reference-type="ref" reference="binary greedoid hard along $y=0$"}. If $b=0$ then $(a,b)=(1-\alpha,0)$ and we are done. Otherwise $b \notin \{-1,0\}$ and $(a,b) \neq (1,1)$, so by Proposition [Proposition 42](#binary greedoid reduction 1){reference-type="ref" reference="binary greedoid reduction 1"} we have $\pi_3[\mathcal{B},1-\alpha,0] \propto_T \pi_2[\mathcal{B},H_{\alpha}] \propto_T \pi_3[\mathcal{B},a,b]$, and the result follows. ◻
# Counting bases in a represented matroid {#sec:appendix}
In this appendix, we present a proof that counting the number of bases of a represented matroid is $\#$P-complete. More precisely, we consider the following family of counting problems. Let $\mathbb F$ be a field.
[Counting Bases of $\mathbb F$-Represented Matroids]{.smallcaps}\
**Input:** A $(0,1)$-matrix $A$.\
**Output:** The number of bases of $M(A)$, the matroid represented by $A$ over the field $\mathbb F$.
**Theorem 50**. *For every field $\mathbb F$, [Counting Bases of $\mathbb F$-Represented Matroids]{.smallcaps} is $\#$P-complete.*
A proof of this result was announced nearly 30 years ago by Dirk Vertigan --- it first seems to have been referred to in [@zbMATH00750655] and slightly later in [@zbMATH01464264], where it is described as an unpublished manuscript --- but no written proof has been circulated. Sketches of the proof have been presented by Vertigan in talks, for example, at the Conference for James Oxley in 2019 [@Vertiganunpub]. The second author was present at this meeting and the material in this section has been produced from his incomplete recollection of the talk. All the key ideas are due to Vertigan but the details including any errors, omissions or unnecessary complications are due to the authors. As pointed out to us by Dillon Mayhew [@Dillon], Vertigan's proof presented in [@Vertiganunpub] introduced an intermediate step involving weighted bases; our proof does not require this intermediate step but this comes at the cost of introducing a larger matrix in the reduction. We provide the proof, partly as a service to the community because we know of several colleagues who have tried to recreate it, but primarily because a referee has pointed out the undesirability of relying on an unpublished result. Although our original aim was only to establish the special case of Theorem [Theorem 50](#thm:main){reference-type="ref" reference="thm:main"} relevant for our work, it turns out that little extra effort is required to prove Theorem [Theorem 50](#thm:main){reference-type="ref" reference="thm:main"} in full generality.
We require very little matroid theory other than basic notions such as rank, circuits and the closure operator. As we work exclusively with matroids having representations drawn from a specific family of matrices considered over different fields, the claims we make about the associated matroids can easily be checked by considering the representing matrices. For background on matroids see [@Oxley].
A graph is *simple* if it has no loops or parallel edges. To prove hardness, we give a reduction from counting perfect matchings in a simple graph, a problem which is well-known to be $\#$P-complete [@zbMATH03646278]. Clearly, it makes no difference to the complexity of counting perfect matchings if we forbid our graphs from having isolated vertices. Given such a graph $G$ with $n$ vertices, we construct a family of matrices $\{A_i: 1 \leq i \leq \lfloor n/2\rfloor +1\}$ with entries in $\{0,1\}$. By considering these matrices as being defined over different fields, we obtain two corresponding families of matroids. Which family arises depends on whether the field has characteristic two. Thus the proof of Theorem [Theorem 50](#thm:main){reference-type="ref" reference="thm:main"} splits into two parts depending on whether the characteristic of the underlying field is two.
We shall generally think of matrices as coming with sets indexing their rows and columns. If $A$ is a matrix with sets $X$ and $Y$ indexing its rows and columns respectively, then we say that $A$ is an $X\times Y$ matrix. For non-empty subsets $X'$ and $Y'$ of $X$ and $Y$, respectively, $A[X',Y']$ is the submatrix of $A$ obtained by deleting the rows indexed by elements of $X-X'$ and the columns indexed by elements of $Y-Y'$.
Suppose that $G$ is a simple graph without isolated vertices having vertex set $\{v_1,\ldots,v_n\}$ and edge set $\{e_1,\ldots,e_m\}$. Let $k$ be a strictly positive integer. Let $$\begin{aligned}
X&=\{v_1,\ldots,v_n, e_1,\ldots, e_m\} \cup \{ f_{i,j}: 1\leq i\leq m, 1\leq j \leq k\} \\
\intertext{and} Y&=\{v_1,\ldots,v_n, e_1,\ldots, e_m\} \cup \{ w_{i,j}, x_{i,j}, y_{i,j}, z_{i,j} : 1\leq i\leq m, 1\leq j \leq k\}.\end{aligned}$$ Here both $X$ and $Y$ include all the vertices and edges of $G$, together with several new elements. The matrix $A_k$ is an $X\times Y$ matrix. To specify its entries suppose that $e_i$ has endvertices $v_a$ and $v_b$ with $a<b$. Then for each $j$ with $1\leq j\leq k$, taking $X'=\{v_a,v_b,f_{i,j}\}$ and $Y'=\{v_a,v_b,e_i,w_{i,j},x_{i,j},y_{i,j},z_{i,j}\}$, we let $${A_k}[X',Y'] = \begin{bNiceMatrix}[first-row,first-col]
& v_a & v_b & e_i & w_{i,j} & x_{i,j} & y_{i,j} & z_{i,j} \\
v_a & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
v_b & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\
f_{i,j} & 0 & 0 & 0 & 1 & 1 & 1 & 1
\end{bNiceMatrix}.$$ We complete the definition of $A_k$ by setting every as yet unspecified entry to zero.
Fix $\mathbb F$ and let $N_k=M(A_k)$, that is, the matroid with element set $Y$ represented by $A_k$ considered over $\mathbb F$. Taking $Y'$ as in the previous paragraph, if $\mathbb F$ has characteristic two, then $N_k|Y'$ is isomorphic to the Fano matroid $F_7$ and otherwise $N_k|Y'$ is isomorphic to the non-Fano matroid $F_7^-$ obtained from $F_7$ by relaxing the circuit-hyperplane $\{e_i,x_{i,j},y_{i,j}\}$. Now let $M_k=N_k\setminus (V\cup E)$. Note that $r(M_k)=r(N_k)=|V|+|E|k$ and that for each vertex $v$ and edge $e$ of $G$, $N_k$ contains elements $e$ and $v$, but $M_k$ contains neither.
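It will be convenient to record the columns of $A_k$ explicitly (a reformulation of the definition above, included only for orientation). Writing $\varepsilon_u$ for the standard unit vector indexed by the row $u$, and keeping the notation above for an edge $e_i$ with endvertices $v_a$ and $v_b$ where $a<b$, the columns of $A_k$ are $$v_a \mapsto \varepsilon_{v_a},\quad v_b \mapsto \varepsilon_{v_b},\quad e_i \mapsto \varepsilon_{v_a}+\varepsilon_{v_b},\quad w_{i,j}\mapsto \varepsilon_{f_{i,j}},\quad x_{i,j}\mapsto \varepsilon_{v_a}+\varepsilon_{f_{i,j}},\quad y_{i,j}\mapsto \varepsilon_{v_b}+\varepsilon_{f_{i,j}},\quad z_{i,j}\mapsto \varepsilon_{v_a}+\varepsilon_{v_b}+\varepsilon_{f_{i,j}}.$$ In particular $w_{i,j}-x_{i,j}-y_{i,j}+z_{i,j}=0$ over every field, while any three of these four columns are linearly independent, so the four columns indexed by $w_{i,j},x_{i,j},y_{i,j},z_{i,j}$ form a circuit of $N_k$; this is used repeatedly below.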
We shall show that for each $k$, every basis of $M_k$ corresponds to what we call a *feasible template* of $G$, that is, a subgraph of $G$ in which some edges are directed (possibly in both directions) and some are labelled, satisfying certain properties which we describe below. In particular, we will see that the bidirected edges in a feasible template form a matching in $G$. Furthermore, the number of bases of $M_k$ corresponding to each feasible template depends only on $k$ and the numbers of edges directed and labelled in each possible way, and is easily computed. By varying $k$ and counting the number of bases of $M_k$, we can recover the number of feasible templates with each possible number of bidirected edges. The number of feasible templates with $n/2$ bidirected edges is equal to the number of perfect matchings of $G$.
Let $G$ be a simple graph without isolated vertices, having vertex set $V=\{v_1,\ldots,v_n\}$ and edge set $E=\{e_1,\ldots, e_m\}$. A *template* of $G$ is a spanning subgraph of $G$ in which each edge may be bidirected (that is, two arrows are affixed, one pointing to each endvertex), (uni)directed or undirected, and edges are labelled according to the following rules.
- Every bidirected edge is unlabelled.
- A (uni)directed edge $e=v_av_b$ with $a<b$ is labelled either $wx$ or $yz$ if $e$ is directed towards $v_a$, and is labelled either $wy$ or $xz$ if $e$ is directed towards $v_b$.
- An undirected edge is labelled either $wz$ or $xy$.
Even though the matroid $M_k$ itself depends on whether $\mathbb F$ has characteristic two, the proofs of the two cases have a great deal in common. To prevent repetition we describe the common material here, before finishing the two cases separately. For $1\leq i \leq m$ and $1 \leq j \leq k$, let $F_{i,j}= \{w_{i,j},x_{i,j},y_{i,j},z_{i,j}\}$ and for $1 \leq i \leq m$, let $F_i =\bigcup_{1 \leq j \leq k} F_{i,j}$. For all $i$ and $j$, the set $F_{i,j}$ is a circuit and $r(M_k\setminus F_{i,j})<r(M_k)$. Let $B$ be a basis of $M_k$. Then $1 \leq |B\cap F_{i,j}| \leq 3$. Moreover, for all $i$, $r(F_i)=k+2$ and $r(M_k\setminus F_i) \leq r(M_k)-k$, so $k \leq |B\cap F_i| \leq k+2$. The main idea in the proof is to use templates to classify each basis $B$ of $M_k$ by specifying $|B\cap F_i|$ for each $i$ and when $|B\cap F_i|=k+1$, implying that $|B\cap F_{i,j}|=2$ for precisely one value $j^*$ of $j$, additionally specifying $B\cap F_{i,j^*}$.
Suppose edge $e_i$ joins vertices $v_a$ and $v_b$ in $G$ and $a<b$. If $|B\cap F_i| = k$, then $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)=\emptyset$, and if $|B\cap F_i| = k+2$, then $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)=\{v_a,v_b,e_i\}$. If $|B\cap F_i| = k+1$, then $|B \cap F_{i,j}| =2$ for precisely one value $j^*$ of $j$ and $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)$ depends on $B\cap F_{i,j^*}$. In the list below we write $w,x,y,z$ for $w_{i,j^*},x_{i,j^*},y_{i,j^*},z_{i,j^*}$.
- If $B\cap F_{i,j^*}$ is $\{w,x\}$ or $\{y,z\}$, then $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)=\{v_a\}$.
- If $B\cap F_{i,j^*}$ is $\{w,y\}$ or $\{x,z\}$, then $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)=\{v_b\}$.
- If $B\cap F_{i,j^*}$ is $\{w,z\}$, then $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)= \{e_i\}$.
- If $B\cap F_{i,j^*}$ is $\{x,y\}$, then $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)-E(M_k)$ is $\{e_i\}$ when $\mathbb F$ has characteristic two and is empty otherwise (see the computation following this list).
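The distinction in the final case can be read off directly from the representing matrix (we record the computation here for convenience): $$x_{i,j^*}+y_{i,j^*} = \varepsilon_{v_a}+\varepsilon_{v_b}+2\,\varepsilon_{f_{i,j^*}},$$ which equals the column indexed by $e_i$ precisely when $\mathbb F$ has characteristic two. Hence $e_i \in \mathop{\mathrm{cl}}_{N_k}(\{x_{i,j^*},y_{i,j^*}\})$ in characteristic two, whereas in any other characteristic a similar short calculation shows that $e_i$ does not lie in $\mathop{\mathrm{cl}}_{N_k}(B\cap F_i)$ in this case.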
To each subset $S$ of $E(M_k)$, such that for all $i$, $S\cap F_i$ is independent and for all $i$ and $j$, $|S\cap F_{i,j}|\geq 1$, we associate a template $T(S)$ of $G$, by starting with an edgeless graph with vertex set $V(G)$ and doing the following for each edge $e_i$ of $G$ such that $|S\cap F_{i,j}|>1$ for some $j$. Suppose that $e_i=v_av_b$ with $a<b$. We first consider whether to add $e_i$ to $T(S)$ and whether to direct it.
- If $\mathop{\mathrm{cl}}_{N_k}(S\cap F_i)-E(M_k)=\{v_a,v_b,e_i\}$, then add $e_i$ to $T(S)$ and bidirect it.
- If $\mathop{\mathrm{cl}}_{N_k}(S\cap F_i)-E(M_k)=\{v_a\}$, then add $e_i$ to $T(S)$ and direct it from $v_b$ to $v_a$.
- If $\mathop{\mathrm{cl}}_{N_k}(S\cap F_i)-E(M_k)=\{v_b\}$, then add $e_i$ to $T(S)$ and direct it from $v_a$ to $v_b$.
- If $\mathop{\mathrm{cl}}_{N_k}(S\cap F_i)-E(M_k)\subseteq \{e_i\}$, then add $e_i$ to $T(S)$ (and do not direct it).
In the last three cases above, we also label $e_i$. To do this let $j^*$ be the unique value of $j$ such that $|S\cap F_{i,j}|=2$. Then label $e_i$ with the elements of $S\cap F_{i,j^*}$, but with their subscripts omitted. In this way the edge $e_i$ is given two labels from the set $\{w,x,y,z\}$.
## $\mathbb F$ has characteristic two
We now focus on the case when $\mathbb F$ has characteristic two. The following result is the key step in the proof.
**Proposition 51**. *A subset $B$ of $E(M_k)$ is a basis of $M_k$ if and only if all of the following conditions hold.*
1. *For all $i$, $B\cap F_{i}$ is independent.*
2. *For all $i$ and $j$, $|B \cap F_{i,j}| \geq 1$.*
3. *The subgraph of $T(B)$ induced by its undirected edges is acyclic.*
4. *It is possible to direct the undirected edges of $T(B)$ so that every vertex has indegree one.*
*Proof.* We first show that the conditions are collectively sufficient. Suppose that $B$ satisfies each of the conditions and that $T(B)$ has $b$ bidirected edges, $r$ (uni)directed edges and $u$ undirected edges. Then the last condition implies that $2b+r+u=n$. Combining this with the first two conditions gives $|B|= km + 2b + r + u = km+n=r(M_k)$. So, it is sufficient to prove that $r(B)=r(M_k)$. We will show that the last two conditions imply that $v_i \in \mathop{\mathrm{cl}}_{N_k}(B)$ for $i=1,\ldots,n$. Then the second condition ensures that $\mathop{\mathrm{cl}}_{N_k}(B)=E(N_k)$ and consequently $r(B)=r(N_k)=r(M_k)$ as required.
Consider a vertex $v$ of $G$. The last two conditions imply that there is a (possibly empty) path $P$ in $T(B)$, comprising only undirected edges, between $v$ and a vertex $v'$ having indegree one. Suppose that the vertices of $P$ in order are $v_{j_1}=v', v_{j_2}, \ldots, v_{j_l}=v$ and that for $1\leq h \leq l-1$, the edge joining $v_{j_h}$ and $v_{j_{h+1}}$ in $P$ is $e_{i_h}$. Then $\{v_{j_1},e_{i_1},\ldots,e_{i_{l-1}}\} \subseteq \mathop{\mathrm{cl}}_{N_k}(B)$. As $v_{j_h} \in \mathop{\mathrm{cl}}_{N_k} (\{v_{j_{h-1}},e_{i_{h-1}}\})$ for $h=2,\ldots,l$, we see that $v=v_{j_l}\in \mathop{\mathrm{cl}}_{N_k}(B)$, as required. Thus the conditions are sufficient.
To show that each condition is necessary we suppose that $B$ is a basis of $M_k$. Clearly the first condition is necessary. We observed earlier that for all $i$ and $j$, $r(E(M_k)-F_{i,j})<r(M_k)$, so the second condition is also necessary. Suppose, without loss of generality, that edges $e_1,\ldots,e_l$ are undirected and form a circuit in $T(B)$. Then the corresponding elements $e_1, \ldots, e_l$ form a circuit in $N_k$. Because each of $e_1,\ldots,e_l$ is undirected, $e_i \in \mathop{\mathrm{cl}}_{N_k}(B\cap F_{i})$ for $i=1,\ldots,l$. Thus $$e_{l} \in \mathop{\mathrm{cl}}_{N_k} (\{e_{1},\ldots,e_{l-1}\}) \subseteq \mathop{\mathrm{cl}}_{N_k}\Bigg( \bigcup_{i=1}^{l-1} (B\cap F_{i})\Bigg).$$ So there is a circuit of $N_k$ contained in $\{e_l\} \cup (B\cap F_{l})$ and another contained in $\{e_{l}\} \cup \bigcup_{i=1}^{l-1} (B\cap F_{i})$. Hence there is a circuit of $N_k$ and consequently of $M_k$ contained in $\bigcup_{i=1}^{l} (B\cap F_{i})$, contradicting the fact that $B$ is a basis. Thus the third condition is necessary.
Finally, suppose that $T(B)$ has $b$ bidirected edges, $r$ (uni)directed edges and $u$ undirected edges. Then, as $km+2b+r+u=|B|=r(M_k)=km+n$, we have $2b+r+u=n$. Observe that if the undirected edges are assigned a direction, then the sum of the indegrees of the vertices will become $n$. Suppose that it is impossible to direct the undirected edges of $T(B)$ so that each vertex has indegree one. Then, before directing the undirected edges, there must either be a vertex $z$ with indegree at least two, or two vertices $x$ and $y$ both having indegree at least one and joined by a path $P$ of undirected edges.
In either case the aim is to establish a contradiction by showing that there is some vertex $v$ such that $B\cup \{v\}$ contains two distinct circuits in $N_k$. Then $B$ contains a circuit of $N_k$ and consequently of $M_k$. In the former case there are distinct edges $e_i$ and $e_j$ directed towards (and possibly away from as well) $z$ in $T(B)$. So $z \in \mathop{\mathrm{cl}}_{N_k} (B \cap F_i) \cap \mathop{\mathrm{cl}}_{N_k} (B\cap F_j)$ implying that $(B\cap F_i) \cup \{z\}$ and $(B\cap F_j) \cup \{z\}$ both contain circuits of $N_k$ including $z$. But then $(B\cap F_i)\cup (B\cap F_j)= B\cap (F_i \cup F_j)$ contains a circuit of $N_k$ and consequently of $M_k$, contradicting the fact that $B$ is a basis of $M_k$. So we may assume that the latter case holds. Suppose that, without loss of generality, the vertices of $P$ in order are $v_1=x,v_2,\ldots,v_l=y$. Suppose, again without loss of generality, that for $i=2,\ldots,l$, the edge joining $v_{i-1}$ and $v_{i}$ in $P$ is $e_i$, that $e_1$ is directed towards $x=v_1$ in $T(B)$ and $e_{l+1}$ is directed towards $y=v_l$ in $T(B)$. Then $y \in \mathop{\mathrm{cl}}_{N_k} (B\cap F_{l+1})$ and $x \in \mathop{\mathrm{cl}}_{N_k} (B\cap F_{1})$. Furthermore, for each $i=2,\ldots,l$, $e_i \in \mathop{\mathrm{cl}}_{N_k} (B\cap F_i)$, so $v_i \in \mathop{\mathrm{cl}}_{N_k}\Big( \bigcup_{j=1}^{i} (B\cap F_{j}) \Big)$. In particular, $y \in \mathop{\mathrm{cl}}_{N_k}\Big( \bigcup_{j=1}^{l} (B\cap F_{j}) \Big)$. So, there is a circuit of $N_k$ contained in $\{y\} \cup (B\cap F_{l+1})$ and another contained in $\{y\} \cup \bigcup_{j=1}^{l} (B\cap F_{j})$. Hence there is a circuit of $N_k$ and consequently of $M_k$ contained in $\bigcup_{j=1}^{l+1} (B\cap F_{j})$, contradicting the fact that $B$ is a basis. It follows that it is possible to direct the undirected edges of $T(B)$ so that every vertex has indegree one, establishing the necessity of the final condition. ◻
We say that a template $T$ is *feasible* if it satisfies the last two conditions in the previous result, that is, if the subgraph induced by its undirected edges is acyclic and every vertex of the graph obtained from $T$ by contracting the undirected edges has indegree equal to one.
**Proposition 52**. *Let $G$ be a simple graph without isolated vertices and let $T$ be a feasible template of $G$ with $b$ bidirected edges. Then the number of bases of $M_k$ with template $T$ is $$4^{km}\Big(\frac k4\Big)^n \Big(\frac 4k + 12\Big)^b.$$*
*Proof.* It follows from the definition of feasibility that if a feasible template contains $b$ bidirected edges, then it has $n-2b$ edges which are either (uni)directed or undirected. Furthermore $G$ has $m-n+b$ edges which are not in $T$. Suppose that $B$ is a basis with template $T$. We count the number of choices for $B$. Suppose that $e_i$ is an edge of $G$ which is not present in $T$. Then for $j=1,\ldots,k$, we have $|F_{i,j} \cap B|=1$, so there are $4^k$ choices for $B\cap F_i$. Now suppose that $e_i$ is either (uni)directed or undirected in $T$. Then for all but one choice of $j$ in $1,\ldots,k$, we have $|F_{i,j}\cap B|=1$ and for the remaining possibility for $j$, $|F_{i,j}\cap B|=2$, with the choice of elements of $F_{i,j}$ specified by the labelling of the edge $e_i$. Thus there are $k\cdot 4^{k-1}$ choices for $B\cap F_i$. Finally suppose that $e_i$ is a bidirected edge. Then there are two subcases to consider. Either $|F_{i,j}\cap B|=3$ for one value of $j$ and $|F_{i,j}\cap B|=1$ for all other values of $j$, or $|F_{i,j}\cap B|=2$ for two values of $j$ and $|F_{i,j}\cap B|=1$ for all other values of $j$. Suppose that $|F_{i,j'}\cap B|=|F_{i,j''}\cap B|=2$ for $j'\ne j''$. Then we also require that $\mathop{\mathrm{cl}}_{N_k}(B\cap F_{i,j'})-F_i \ne \mathop{\mathrm{cl}}_{N_k}(B\cap F_{i,j''})-F_i$. Thus there are $k \cdot 4^k + \binom k2 \cdot 6 \cdot 4 \cdot 4^{k-2}$ choices for $B\cap F_i$.
So the number of bases of $M_k$ with template $T$ is $$(4^k)^{m-n+b} (k\cdot 4^{k-1})^{n-2b} (k \cdot 4^k + \binom k2 \cdot 6 \cdot 4 \cdot 4^{k-2})^b = 4^{km}\Big(\frac k4\Big)^n \Big(\frac 4k + 12\Big)^b.$$ ◻
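As a quick consistency check (for illustration only), take $k=1$: the three per-edge counts above become $4$, $1\cdot 4^{0}=1$ and $1\cdot 4+0=4$ respectively, so a feasible template with $b$ bidirected edges accounts for $$4^{m-n+b}\cdot 1^{n-2b}\cdot 4^{b} = 4^{m-n+2b}$$ bases of $M_1$, which agrees with $4^{m}\big(\tfrac{1}{4}\big)^{n}\big(\tfrac{4}{1}+12\big)^{b}=4^{m-n}16^{b}$.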
**Theorem 53**. *If $\mathbb F$ is a field with characteristic two, then the problem [Counting Bases of $\mathbb F$-Represented Matroids]{.smallcaps} is $\#$P-complete.*
*Proof.* It is clear that [Counting Bases of $\mathbb F$-Represented Matroids]{.smallcaps} belongs to $\#$P. To prove hardness, we give a reduction from counting perfect matchings. Let $G$ be a simple graph with $n$ vertices and $m$ edges. We may assume that $G$ has no isolated vertices and $n$ is even. We can construct representations of the matroids $M_1, \ldots, M_{n/2+1}$ in time polynomial in $n$ and $m$. For $k=1,\ldots, n/2+1$, let $b_k$ denote the number of bases of $M_k$ and for $j=0,\ldots,n/2$, let $t_j$ denote the number of feasible templates of $G$ with $j$ bidirected edges. Then for $k=1,\ldots, n/2+1$, by Proposition [Proposition 52](#prop:counting){reference-type="ref" reference="prop:counting"}, we have $$b_k = \sum_{j=0}^{n/2} 4^{km}\Big(\frac k4\Big)^n \Big(\frac 4k + 12\Big)^j t_j.$$ Given $b_1,\ldots, b_{n/2+1}$, we may recover $t_0,\ldots,t_{n/2}$ in time polynomial in $n$ and $m$. In particular, we may recover $t_{n/2}$. But feasible templates with $n/2$ bidirected edges are in one-to-one correspondence with perfect matchings of $G$. As counting perfect matchings is $\#$P-complete by [@zbMATH03646278], we deduce that when $\mathbb F$ has characteristic two, [Counting Bases of $\mathbb F$-Represented Matroids]{.smallcaps} is $\#$P-complete. ◻
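To make the final interpolation step explicit (the notation $\lambda_k$ is ours), write $\lambda_k = \frac{4}{k}+12$ and divide by the common factor, so that the identities above read $$\frac{b_k}{4^{km}(k/4)^n} = \sum_{j=0}^{n/2}\lambda_k^{j}\,t_j,\qquad k=1,\ldots,\tfrac{n}{2}+1.$$ Since $\lambda_1>\lambda_2>\cdots>\lambda_{n/2+1}$, the coefficient matrix $(\lambda_k^{j})$ is an invertible Vandermonde matrix, so the coefficients $t_j$, and in particular $t_{n/2}$, can indeed be recovered in time polynomial in $n$ and $m$.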
## $\mathbb F$ does not have characteristic two
When $\mathbb F$ does not have characteristic two, we can proceed in a similar way, but the proof is a little more complicated, as we need to consider more carefully circuits of undirected edges in a template. We say that a circuit of a template comprising only undirected edges is *good* if it has an odd number of edges labelled $wz$. The following lemma gives us the key property of circuits of undirected edges in the template of a basis.
**Lemma 54**. *Let $G$ be a simple graph without isolated vertices and let $M_k$ and $N_k$ be the associated matroids. Let $C$ be a circuit of $G$ and select a set $Z$ of $2|C|$ elements of $M_k$ as follows. For each $i$ such that $e_i$ is an edge of $C$, choose $j$ with $1\leq j \leq k$ and add either $w_{i,j}$ and $z_{i,j}$, or $x_{i,j}$ and $y_{i,j}$ to $Z$. To simplify notation we omit the second subscript and for each $i$ denote the elements added to $Z$ by either $w_i$ and $z_i$, or $x_i$ and $y_i$. Then both of the following hold.*
1. *If $|\{i: \{w_i,z_i\}\subseteq Z\}|$ is odd then $Z$ is independent in $M_k$ (and $N_k$) and for each vertex $v$ of $C$, $v\in \mathop{\mathrm{cl}}_{N_k}(Z)$.*
2. *If $|\{i: \{w_i,z_i\}\subseteq Z\}|$ is even then $Z$ is a circuit in $M_k$ (and $N_k$).*
*Proof.* For an edge $e_i$ of $C$, we say that $e_i$ is a $wz$-edge if $\{w_i,z_i\}\subseteq Z$, and otherwise we say that it is an $xy$-edge. We first prove that $Z$ is either independent or a circuit, depending on the parity of $|\{i: \{w_i,z_i\}\subseteq Z\}|$. Consider the submatrix $A$ of $A_k$ containing just the columns indexed by members of $Z$ and consider the coefficients of a non-trivial linear combination of these columns summing to zero. As each row of $A$ is either zero or contains two non-zero entries, both equal to one, we may assume that the non-zero coefficients are all $\pm 1$. Furthermore, for every $wz$-edge $e_i$, the coefficients of $w_i$ and $z_i$ must sum to zero, and similarly for every $xy$-edge $e_i$, the coefficients of $x_i$ and $y_i$ must sum to zero. Now consider two adjacent edges $e_i$ and $e_j$ in $C$, and let $v$ be their common endvertex. As the row indexed by $v$ contains one non-zero entry in a column indexed by an element of $\{w_i,x_i,y_i,z_i\}\cap Z$ and also one in a column indexed by an element of $\{w_j,x_j,y_j,z_j\}\cap Z$, we deduce that the coefficients of $\{w_i,x_i,y_i,z_i\}\cap Z$ are non-zero if and only if those of $\{w_j,x_j,y_j,z_j\}\cap Z$ are non-zero. Consequently all the coefficients in a non-trivial linear combination of the columns of $A$ are non-zero. Now imagine traversing $C$ in $G$ and suppose that $e_i$ and $e_j$ are consecutive (not necessarily adjacent) $wz$-edges. Then it is not difficult to see that the coefficients of $w_i$ and $w_j$ (and of $z_i$ and $z_j$) have opposite signs. Thus, if there is an odd number of $wz$-edges, then no non-trivial linear combination of the columns of $A$ sums to zero and $Z$ is independent. Alternatively, if there is an even number of $wz$-edges, then one can assign coefficients $\pm 1$ to columns indexed by $w_i$ or $z_i$ meeting the necessary conditions we have established, and then it is not difficult to check that non-zero coefficients may be assigned to all the remaining columns in order to give a non-trivial linear combination of the columns of $A$ summing to zero. Thus $Z$ is dependent, and as we have shown that all coefficients of a non-trivial linear combination of the columns of $A$ summing to zero must be non-zero, we deduce that $Z$ is a circuit.
Finally, suppose that there is an odd number of $wz$-edges and let $V(C)$ denote the vertex set of $C$. Then $r_{N_k}(Z\cup V(C))=2|C|=r_{N_k}(Z)$, so for each vertex $v$ of $C$, $v\in \mathop{\mathrm{cl}}_{N_k}(Z)$. ◻
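To illustrate the parity condition (a small example, not needed later), let $C$ be a triangle with vertices $v_1,v_2,v_3$ and edges $e_1=v_1v_2$, $e_2=v_2v_3$ and $e_3=v_1v_3$, and take $Z=\{w_1,z_1,x_2,y_2,x_3,y_3\}$, so that exactly one edge of $C$ is a $wz$-edge. Writing, as before, $\varepsilon_u$ for the unit vector indexed by the row $u$ (with second subscripts omitted as in the statement of the lemma), the pairing of coefficients forced by the rows $f_1,f_2,f_3$ means that any dependence has the form $\alpha(w_1-z_1)+\beta(x_2-y_2)+\gamma(x_3-y_3)=0$, that is, $$-\alpha(\varepsilon_{v_1}+\varepsilon_{v_2})+\beta(\varepsilon_{v_2}-\varepsilon_{v_3})+\gamma(\varepsilon_{v_1}-\varepsilon_{v_3})=0,$$ forcing $\gamma=\alpha$, $\beta=\alpha$ and $2\alpha=0$. As $\mathbb F$ does not have characteristic two, this gives $\alpha=\beta=\gamma=0$, so $Z$ is independent, as the lemma predicts.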
The analogue of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"} is as follows.
**Proposition 55**. *A subset $B$ of $E(M_k)$ is a basis of $M_k$ if and only if all of the following conditions hold.*
- *For all $i$, $B\cap F_{i}$ is independent.*
- *For all $i$ and $j$, $|B \cap F_{i,j}| \geq 1$.*
- *Every circuit of $T(B)$ comprising only undirected edges is good.*
- *It is possible to direct the undirected edges of $T(B)$ so that every vertex has indegree one.*
*Proof.* Most of the proof follows that of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"}. The main difference concerns circuits of $T(B)$ comprising undirected edges. To prove the sufficiency of the conditions we modify the last part of the sufficiency argument of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"}. Suppose that $B$ satisfies each of the conditions. The key step involves showing that for every vertex $v$ in $G$, we have $v\in\mathop{\mathrm{cl}}_{N_k}(B)$. The last two conditions imply that for a vertex $v$ of $G$, there is a (possibly empty) path $P$ in $T(B)$, comprising only undirected edges, between $v$ and a vertex $v'$, which either has indegree one or belongs to a good circuit. Using Lemma [Lemma 54](#lem:keycharnot2){reference-type="ref" reference="lem:keycharnot2"} for the latter case, we see that in either case $v' \in \mathop{\mathrm{cl}}_{N_k}(B)$ and the proof may continue in the same way as that of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"}.
To show that each condition is necessary we suppose that $B$ is a basis of $M_k$. The necessity of the first two conditions follows in the same way as in the proof of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"} and the necessity of the third follows from Lemma [Lemma 54](#lem:keycharnot2){reference-type="ref" reference="lem:keycharnot2"}. The necessity of the final condition follows from a similar argument to that used in the proof of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"}, but there are more cases to consider. Notice that the necessity of the third condition implies that every undirected edge of $T(B)$ belongs to at most one circuit comprising only undirected edges. If it is not possible to direct the undirected edges of $T(B)$ so that each vertex has indegree one, then before directing the undirected edges one of the following must occur.
1. There is a vertex $z$ of $T(B)$ with indegree at least two.
2. There is a vertex $z$ belonging to two edge-disjoint good circuits.
3. There is a vertex $z$ of $T(B)$ with indegree one which belongs to a good circuit.
4. There are vertices $x$ and $y$ of $T(B)$ not belonging to the same good circuit and joined by a path $P$ comprising undirected edges and so that each of $x$ and $y$ either has indegree one or belongs to a good circuit.
To show that each possibility leads to a contradiction, the aim is again to show that there is a vertex $v$ of $G$ such that $B\cup \{v\}$ contains two distinct circuits of $N_k$. Then $B$ contains a circuit of $N_k$ and consequently of $M_k$. The first case is the same as in the proof of Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"}. The second and third follow similarly with the aid of Lemma [Lemma 54](#lem:keycharnot2){reference-type="ref" reference="lem:keycharnot2"} and the final one follows in a similar way to the analogous case in Proposition [Proposition 51](#prop:keychartwo){reference-type="ref" reference="prop:keychartwo"}, noting first that by Lemma [Lemma 54](#lem:keycharnot2){reference-type="ref" reference="lem:keycharnot2"}, if necessary, there are disjoint subsets $B_x$ and $B_y$ of $B$ with $x\in \mathop{\mathrm{cl}}_{N_k}(B_x)$ and $y\in \mathop{\mathrm{cl}}_{N_k}(B_y)$ and then deducing that $y$ (and in fact every vertex of $P$) belongs to $\mathop{\mathrm{cl}}_{N_k}(B_x)$. ◻
We amend the definition of feasibility to say that a template $T$ is *feasible* if it satisfies the last two conditions in the previous result, that is, if the subgraph induced by its undirected edges contains no circuits including an even number of edges labelled $wz$ and it is possible to direct the undirected edges of $T$ so that every vertex has indegree one.
**Proposition 56**. *Let $G$ be a simple graph without isolated vertices and let $T$ be a feasible template of $G$ with $b$ bidirected edges. Then the number of bases of $M_k$ with template $T$ is $$4^{km}\Big(\frac k4\Big)^n \Big(\frac 3k + 13\Big)^b.$$*
*Proof.* The proof is very similar to that of Proposition [Proposition 52](#prop:counting){reference-type="ref" reference="prop:counting"}. The key difference is counting the number of choices for $F_i$ when $i$ is a bidirected edge and $|F_{i,j'}\cap B|=|F_{i,j''}\cap B|=2$ for $j'\ne j''$. There are now $26$ ways to choose $F_{i,j'}$ and $F_{i,j''}$ compared with $24$ when $\mathbb F$ has characteristic two.
So the number of bases of $M_k$ with template $T$ is $$(4^k)^{m-n+b} (k\cdot 4^{k-1})^{n-2b} (k \cdot 4^k + \binom k2 \cdot 26 \cdot 4^{k-2})^b = 4^{km}\Big(\frac k4\Big)^n \Big(\frac 3k + 13\Big)^b.$$ ◻
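For the reader's convenience, we record how the closed form arises. The factor contributed by each bidirected edge simplifies as $$k \cdot 4^k + \binom k2 \cdot 26 \cdot 4^{k-2} = 4^{k-2}\,k\,\big(16 + 13(k-1)\big) = 4^{k-2}\,k\,(13k+3),$$ and collecting the powers of $4$ and of $k$ across the three factors on the left-hand side gives the exponents $km-n$ and $n-b$ respectively, which yields the stated expression $4^{km}\big(\frac k4\big)^n \big(\frac 3k + 13\big)^b.$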
**Theorem 57**. *If $\mathbb F$ is a field with characteristic other than two, then the problem [Counting Bases of $\mathbb F$-Represented Matroids]{.smallcaps} is $\#$P-complete.*
The proof is identical to that of Theorem [Theorem 53](#thm:chartwo){reference-type="ref" reference="thm:chartwo"}.
# Acknowledgement {#acknowledgement .unnumbered}
We thank Mark Jerrum and Dillon Mayhew for helpful suggestions concerning Appendix [8](#sec:appendix){reference-type="ref" reference="sec:appendix"}.
---
abstract: |
  We introduce two notions called $k-$weakly uniform rotundity ($k-$WUR) and $k-$weakly locally uniform rotundity ($k-$WLUR) in real Banach spaces. These are natural generalizations of the well-known concepts $k-$UR and WUR. By introducing two best approximation notions, namely $k-$weakly strong Chebyshevness and $k-$weakly uniformly strong Chebyshevness, we generalize some of the existing results to $k-$WUR and $k-$WLUR spaces. In particular, we present characterizations of $k-$WUR spaces in terms of $k-$weakly uniformly strong Chebyshevness. Also, the inheritance of the notions $k-$WUR and $k-$WLUR by quotient spaces is discussed. Further, we provide a necessary and sufficient condition for an infinite $\ell_p-$product space to be $k-$WUR (respectively, $k-$WLUR). As a consequence, we observe that the notions WUR and $k-$WUR coincide for an infinite $\ell_p-$product of a Banach space.
address:
- Department of Mathematics, National Institute of Technology Tiruchirappalli, Tiruchirappalli, Tamil Nadu - 620 015, India
- Department of Mathematics, National Institute of Technology Tiruchirappalli, Tiruchirappalli, Tamil Nadu - 620 015, India
author:
- P. Gayathri
- V. Thota
title: On $k-$WUR and its generalizations
---
[^1]
[^2]
# Introduction
Let $X$ be a real Banach space and $X^*$ be its dual. The closed unit ball and the unit sphere of $X$ are denoted by $B_X$ and $S_X$ respectively. Various rotundity notions play a significant role in the geometry of Banach spaces, best approximation theory and fixed point theory. As a stronger form of rotundity, Clarkson [@Clar1936] proposed the concept of uniform rotundity in 1936. Then, as a directional version of uniform rotundity, Šmulian introduced weakly uniform rotundity in 1939 [@Smul1939]. The corresponding local versions, called locally uniform rotundity and weakly locally uniform rotundity, were later introduced by Lovaglia in 1955 [@Lova1955]. Additionally, generalizations of these local versions, known as midpoint locally uniform rotundity and weakly midpoint locally uniform rotundity, have been introduced and studied in the literature. We refer to [@AlDi1985; @DeGZ1993; @DuSh2018; @GaTh2022; @Megg1998; @Smit1978a; @Smit1986; @ZhLZ2015] for further study on these notions.
In addition to these notions, Sullivan established the notion of $k-$uniform rotundity [@Sull1979] in 1979 as a generalization of uniform rotundity using the $k-$dimensional volume that is presented as follows. For any $k \in \mathbb{Z}^{+},$ the $k-$dimensional determinant $D_k[x_1, x_2, \dots, x_{k+1};f_1, f_2, \dots, f_k]$ generated by $(k+1)-$vectors $x_1,x_2,\dots,x_{k+1}$ in $X$ and $k-$functionals $f_1,f_2,\dots,f_k$ in $X^*$ is defined as
$$\begin{aligned}
D_k[x_1, x_2, \dots, x_{k+1};f_1, f_2, \dots, f_k]= &
\begin{vmatrix}
1 & 1 & \dots & 1 \\
f_1(x_1) & f_1(x_2) & \dots & f_1(x_{k+1})\\
\vdots & \vdots & \ddots & \vdots \\
f_k(x_1) & f_k(x_2) & \dots & f_k(x_{k+1})\\
\end{vmatrix}.\\\end{aligned}$$
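Here and in what follows, $V[(x_i)_{i=1}^{k+1}]$ denotes the associated $k-$dimensional volume which, following [@Sull1979] and consistently with its use in the proofs below, may be taken as $$V[(x_i)_{i=1}^{k+1}] = \sup\left\{D_k[x_1, x_2, \dots, x_{k+1};f_1, f_2, \dots, f_k]: f_1, f_2, \dots, f_k \in S_{X^*}\right\}.$$ For instance, for $k=1$ one has $D_1[x_1,x_2;f_1]=f_1(x_2)-f_1(x_1)=f_1(x_2-x_1)$ and hence $V[x_1,x_2]=\|x_2-x_1\|.$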
**Definition 1**. *[[@Sull1979]]{.nodecor} Let $k \in \mathbb{Z}^{+}.$ A space $X$ is said to be $k-$uniformly rotund (in short, $k-$UR), if for every $\epsilon>0$, $$\delta^{k}_{X}(\epsilon) \coloneqq \inf\left\{1-\dfrac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1}x_i \right \Vert: x_1, x_2, \dots,x_{k+1}\in S_X,
V[(x_i)_{i=1}^{k+1}]\geq \epsilon
\right\}>0.$$*
It is worth noting that the notion $k-$UR strengthens Singer's concept of $k-$rotundity [@Sing1960], which is stated in the following manner. The space $X$ is said to be $k-$rotund, if for any $x_1, x_2, \dots, x_{k+1} \in S_X$ with $V[(x_i)_{i=1}^{k+1}]>0$, it follows that $\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_i\|<1.$ Similarly, $k-$versions of some other rotundity properties have also been introduced and studied in the literature, such as $k-$locally uniform rotundity (in short, $k-$LUR) [@Sull1979], $k-$midpoint locally uniform rotundity (in short, $k-$MLUR) [@He1997], $k-$weakly midpoint locally uniform rotundity (in short, $k-$WMLUR) [@XiLi2004] and $k-$strong rotundity [@VeRS2021]. We refer to [@GaTh2023; @GeSu1981; @KaVe2018; @LiYu1985; @LiZZ2018; @NaWa1988; @SmTu1990; @Veen2021; @YuWa1990] for further study on these notions.
It would seem relevant to investigate the question of whether it is feasible to define the concepts of WUR and WLUR in a corresponding $k-$version. In response to this question, we introduce the notions $k-$WUR and $k-$WLUR (see, ) in this article. These generalizations are natural in Sullivan's sense.
It is well established in the literature that the geometry of Banach spaces and the best approximation theory in Banach spaces are closely related. Several authors investigated various rotundity properties in terms of notions from best approximation theory. To see this, we need the following notations and notions.
Let $k \in \mathbb{Z}^{+}.$ For any non-empty bounded subset $C$ of $X,$ the $k-$dimensional diameter $diam_k(C)$ is defined as $diam_k(C)=\sup\{V[(x_{i})_{i=1}^{k+1}]: x_1, x_2, \dots, x_{k+1} \in C\}.$ For any non-empty subset $A$ of $X,$ $x\in X$ and $\delta>0,$ we define $P_A(x)=\{y \in A: \|x-y\|=d(x,A)\}$ and $P_A(x, \delta)=\{y \in A: \|x-y\| \leq d(x, A)+ \delta\},$ where $d(x,A)= \inf\{\|x-y\|: y \in A\}.$
The set $A$ is said to be proximinal at $x,$ if $P_A(x)$ is non-empty. We say that $A$ is $k-$Chebyshev [@Sing1960] at $x,$ if $A$ is proximinal at $x$ and $diam_k(P_A(x))=0.$ We say that $A$ is $k-$strongly Chebyshev (in short, $k-$SCh) [@VeRS2021] at $x,$ if $A$ is proximinal at $x$ and for every $\epsilon>0$ there exists $\delta>0$ such that $diam_k(P_A(x, \delta))< \epsilon.$ Let $B$ be any non-empty subset of $X.$ The set $A$ is said to be proximinal on $B,$ if $A$ is proximinal at every $x \in B.$ Similarly, we define the other notions, such as $k-$Chebyshev and $k-$strongly Chebyshev, on $B.$ We say that $A$ is $k-$uniformly strongly Chebyshev (in short, $k-$USCh) [@KaVe2018] on $B,$ if $A$ is proximinal on $B$ and for every $\epsilon>0$ there exists $\delta>0$ such that $diam_k(P_A(x, \delta))< \epsilon$ for all $x \in B.$
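To illustrate these notions on the sets appearing most frequently below, note that if $A=B_X$ and $\|x\|>1,$ then $d(x,B_X)=\|x\|-1$ and $\frac{x}{\|x\|}\in P_{B_X}(x);$ in particular, $B_X$ is proximinal on $X$ and $P_{B_X}(x,\delta)=\{y \in B_X: \|x-y\| \leq \|x\|-1+\delta\}$ for every $\delta>0.$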
According to Singer [@Sing1960], a space is $k-$rotund if every proximinal convex subset is $k-$Chebyshev, and the converse also holds. Recently, a characterization of $k-$UR spaces in terms of $k-$USCh was obtained by Kar and Veeramani [@KaVe2018]. Also, a characterization of $k-$MLUR spaces in terms of $k-$SCh is obtained in [@LiZZ2018]. Further, Veena Sangeetha et al. [@VeRS2021] proved that a space is $k-$strongly rotund iff every closed convex subset is $k-$SCh.
In light of the aforementioned results, it is reasonable to presume that the new geometric notions $k-$WUR and $k-$WLUR may be characterized or analyzed in terms of appropriate notions from best approximation theory. To achieve this, two best approximation notions namely $k-$weakly strong Chebyshevness and $k-$weakly uniformly strong Chebyshevness (see, ) are defined in this article.
The questions of how quotient spaces inherit rotundity notions and how geometric notions behave under $\ell_p-$products have been extensively explored and are of great interest. Geremia and Sullivan [@GeSu1981] proved that for any $1<p<\infty,$ $(\oplus_p X_i)_{i\in\mathbb{N}}$ is $2-$UR iff all but one of the $X_i$ are UR with a common modulus of convexity and the remaining space is $2-$UR. Later, in [@YuWa1990], the authors generalized this result to any $k\in \mathbb{Z^+}.$ In [@SmTu1990], Smith and Turett proved that an $\ell_p-$product space cannot be $k-$UR whenever none of the underlying spaces is UR. Recently, all the preceding results were obtained for $k-$rotund spaces in an analogous way [@Veen2021; @VeRS2021]. This article seeks to examine the stability of the notions $k-$WUR and $k-$WLUR in this direction.
The paper is organized as follows. In Section 2, we present some properties of $k-$dimensional determinants, which are essential to prove our results. We introduce the notions $k-$weakly strong Chebyshevness (in short, $k-$$w$SCh), $k-$weakly uniformly strong Chebyshevness (in short, $k-$$w$USCh), property $k-$$w$UC and obtain some relationships among them. These notions will be used to provide some necessary and sufficient conditions for a space to be $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR).
In Section 3, we introduce and study the notions $k-$WUR and $k-$WLUR. Some of the sequential characterizations of $k-$WUR and $k-$WLUR presented in this section are necessary to prove our main results. Characterizations of $k-$WUR spaces in terms of $k-$$w$USCh of the closed unit ball as well as subspaces are obtained. Further, the concepts $k-$$w$SCh and property $k-$$w$UC are explored in $k-$WLUR and $k-$WMLUR spaces. We provide counterexamples to demonstrate that the converses of some implications are not necessarily true.
We investigate the stability of the notions $k-$WUR, $k-$WLUR and $k-$WMLUR in Section 4. First, we obtain two results that correlate $k-$WUR and $k-$WLUR properties of a space with the associated quotient spaces. We provide necessary and sufficient conditions for finite and infinite $\ell_p-$product space to be $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR). As a consequence, we observe that the notions WUR (respectively, WLUR, WMLUR) and $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR) coincide for an infinite $\ell_p-$product of a Banach space.
# Preliminaries
This section begins with some $k-$dimensional determinant properties that will be utilized throughout the article. We assume all subspaces to be closed.
**Remark 2**. *Let $x_1, x_2, \dots, x_{k+1} \in X$ and $f_1, f_2, \dots, f_k \in X^*.$ Then,*
1. *$D_k[(x_i+w)_{i=1}^{k+1};(f_j)_{j=1}^{k}]= D_k[(x_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}],$ for any $w \in X;$*
2. *$D_k[(cx_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}]$ = $c^kD_k[(x_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}],$ for any $c \in \mathbb{R}$;*
3. *$D_k[(x_i)_{i=1}^{k+1};(g_j)_{j=1}^{k}]>0$ for some $g_1, g_2, \dots, g_k \in S_{X^*}$ $\Leftrightarrow$ the set $\{x_i-x_{k+1}: 1 \leq i \leq k\}$ is linearly independent;*
4. *$D_k[(x_i+Y)_{i=1}^{k+1}; (h_j)_{j=1}^{k}] = D_k[(x_i+y_i)_{i=1}^{k+1}; (h_j)_{j=1}^{k}],$ where $Y$ is a subspace of $X,$ $x_i+Y \in X/Y,$ $y_i \in Y$ for all $1 \leq i \leq k+1$ and $h_j \in Y^{\perp} \cong (X/Y)^*$ for all $1 \leq j \leq k.$*
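For illustration, item $(1)$ holds because $$D_k[(x_i+w)_{i=1}^{k+1};(f_j)_{j=1}^{k}]=
\begin{vmatrix}
1 & \dots & 1 \\
f_1(x_1)+f_1(w) & \dots & f_1(x_{k+1})+f_1(w)\\
\vdots & \ddots & \vdots \\
f_k(x_1)+f_k(w) & \dots & f_k(x_{k+1})+f_k(w)\\
\end{vmatrix}
=D_k[(x_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}],$$ since subtracting $f_j(w)$ times the first row from the $(j+1)^{th}$ row, for each $1 \leq j \leq k,$ leaves the determinant unchanged; item $(2)$ follows similarly, as replacing each $x_i$ by $cx_i$ multiplies each of the last $k$ rows by $c.$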
In the following result, we observe certain continuity properties of $k-$dimensional determinants. The proof of [@KaVe2018 Lemmas 2.9 and 2.10] may also be used to prove the following lemma. However, we concisely provide a direct proof here.
**Lemma 3**. *Let $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$bounded sequences in $X$. Then for any $f_1, f_2, \dots, f_{k} \in S_{X^*}$ the following statements hold.*
1. *If $(c_n^{(1)}),(c_n^{(2)}), \dots, (c_n^{(k+1)})$ be $(k+1)-$sequences in $\mathbb{R}$ such that $c_n^{(i)} \to c_i$ for some $c_i \in \mathbb{R},$ for all $1 \leq i \leq k+1,$ then $\vert D_k[(c_n^{(i)}x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}]- D_k[(c_ix_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k} ]\vert \to 0.$*
2. *If $(y_n^{(1)}),(y_n^{(2)}), \dots, (y_n^{(k+1)})$ be $(k+1)-$sequences in $X$ such that $y_n^{(i)} \xrightarrow{w} y_i$ for some $y_i \in X,$ for all $1 \leq i\leq k+1,$ then $\vert D_k[(y_n^{(i)}+x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}]- D_k[(y_i+x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}]\vert \to 0.$*
*Proof.* $(1)$: Let $(c_n^{(1)}), (c_n^{(2)}), \dots, (c_n^{(k+1)})$ be $(k+1)-$sequences in $\mathbb{R}$ such that $c_n^{(i)} \to c_i$ for some $c_i \in \mathbb{R},$ for all $1 \leq i \leq k+1.$ Let $a_n= (D_k[(c_n^{(i)}x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}]- D_k[(c_ix_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}])$ for every $n \in \mathbb{N}.$ $$\begin{aligned}
|a_n|=& \vert D_k[(c_n^{(i)}x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}]+ \sum_{i=1}^{k}D_k[(c_n^{(j)}x_n^{(j)})_{j=1}^{i}, (c_jx_n^{(j)})_{j=i+1}^{k+1}; (f_j)_{j=1}^{k}]- \\
&\hspace{1.5cm} \sum_{i=1}^{k}D_k[(c_n^{(j)}x_n^{(j)})_{j=1}^{i}, (c_jx_n^{(j)})_{j=i+1}^{k+1}; (f_j)_{j=1}^{k}]-
D_k[(c_ix_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k} ]\vert \\
\leq &\sum_{i=1}^{k+1}|D_k[(c_n^{(j)}x_n^{(j)})_{j=1}^{i}, (c_jx_n^{(j)})_{j=i+1}^{k+1}; (f_j)_{j=1}^{k}]-D_k[(c_n^{(j)}x_n^{(j)})_{j=1}^{i-1}, (c_jx_n^{(j)})_{j=i}^{k+1}; (f_j)_{j=1}^{k}]|.\\
\end{aligned}$$ Using the properties of determinant, for any $n \in \mathbb{N}$ we have $|a_n| \leq \sum_{i=1}^{k+1}|b_n^{(i)}|,$ where $$\begin{aligned}
b_n^{(i)} &= \begin{vmatrix}
1 & \dots & 1 & 0 & 1 & \dots & 1 \\
f_1(c_n^{(1)}x_n^{(1)})& \dots & f_1(c_n^{(i-1)}x_n^{(i-1)})& (c_n^{(i)}-c_i)f_1(x_n^{(i)})& f_1(c_{i+1}x_n^{(i+1)}) & \dots & f_1(c_{k+1}x_n^{(k+1)})\\
\vdots& \ddots & \vdots& \vdots& \vdots& \ddots &\vdots\\
f_k(c_n^{(1)}x_n^{(1)})& \dots & f_k(c_n^{(i-1)}x_n^{(i-1)}) & (c_n^{(i)}-c_i)f_k(x_n^{(i)})& f_k(c_{i+1}x_n^{(i+1)}) & \dots & f_k(c_{k+1}x_n^{(k+1)})\\
\end{vmatrix}
\end{aligned}$$ for all $1 \leq i \leq k+1.$ Denote $M= \sup\{\|x_n^{(i)}\|, \|c_n^{(i)}x_n^{(i)}\|, \|c_ix_n^{(i)}\|: 1 \leq i \leq k+1, n \in \mathbb{N}\}.$ Now, for any $1 \leq i \leq k+1,$ by evaluating the determinant $b_n^{(i)}$ along the $i^{th}$ column, we have $$|b_n^{(i)}| \leq \sum\limits_{s=1}^{k}|(c_n^{(i)}-c_i)f_s(x_n^{(i)}) D_{k-1}[(c_n^{(j)}x_n^{(j)})_{j=1, }^{i-1}, (c_jx_n^{(j)})_{j=i+1}^{k+1}; (f_j)_{j=1, j\neq s}^{k}] |\\
\leq |c_n^{(i)}-c_i| M^{k}kk!$$ and hence $|b_n^{(i)}| \to 0.$ Therefore, $|a_n| \to 0.$\
$(2)$: Proof follows by the similar argument involved in the proof of $(1).$ ◻
For any $k \in \mathbb{Z}^{+},$ we define $\mathcal{S}_k(n)=\{\alpha \subseteq \{1,2, \dots, k\}: \alpha \ \textnormal{contains exactly}\ n \ \textnormal{elements}\}$ for every $n \in \{1, 2, \dots, k\}$ and $\mathcal{S}_k(0)= \emptyset$. If $n \in \{1,2, \dots, k\}$ and $\alpha \in \mathcal{S}_k(n),$ we denote the elements of $\alpha$ as $\alpha_1, \alpha_2, \dots, \alpha_n,$ where $\alpha_1< \alpha_2 < \dots < \alpha_n.$
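For instance, $\mathcal{S}_4(2)=\{\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}\},$ and for $\alpha=\{2,4\} \in \mathcal{S}_4(2)$ we write $\alpha_1=2$ and $\alpha_2=4.$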
**Lemma**. *Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ be $(k+2)-$bounded sequences in $X$ and $f_1, f_2, \dots, f_{k+1} \in S_{X^*}.$ If $D_{k}[(x_n^{(\lambda_i)})_{i=1}^{k+1}; (f_{\mu_j})_{j=1}^{k}] \to 0$ for every $\lambda \in \mathcal{S}_{k+2}(k+1)$ and $\mu \in \mathcal{S}_{k+1}(k),$ then $D_{k+1}[(x_n^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}] \to 0.$*

*Proof.* Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ be $(k+2)-$sequences in $X$ and $f_1, f_2, \dots, f_{k+1} \in S_{X^*}.$\
Case$-(i)$: Suppose $D_1[x_n^{(1)}, x_n^{(i+1)};f_j] \to 0$ for all $1 \leq i, j \leq k+1.$ For every $n \in \mathbb{N},$ using Sylvester's determinant identity [@HoJo2013 Page 27], we have $D_{k+1}[(x_n^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}]= det([c_{j,i}^{(n)}]),$ where $c_{j,i}^{(n)}= D_1[x_n^{(1)}, x_n^{(i+1)};f_j]$ for all $1 \leq i,j \leq k+1.$ By evaluating the determinant $det([c_{j,i}^{(n)}])$ along any row and using the assumption, we have $D_{k+1}[(x_n^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}] \to 0.$\
Case$-(ii)$: Suppose $D_1[x_n^{(1)}, x_n^{(i+1)};f_j]$ does not converge to $0$ for some $1 \leq i,j \leq k+1.$ Now, we claim that, there exists $r \in \mathbb{Z}^{+}$ such that $2 \leq r \leq k$ satisfying $$D_{r-1}[(x_{n}^{(\alpha_i)})_{i=1}^{r}; (f_{\beta_j})_{j=1}^{r-1}] \nrightarrow 0 \ \mbox{for some} \ \alpha \in \mathcal{S}_{k+2}(r), \beta \in \mathcal{S}_{k+1}(r-1)$$ and $$D_{r}[(x_n^{(\lambda_i)})_{i=1}^{r+1}; (f_{\mu_j})_{j=1}^{r}] \to 0 \ \mbox{for every} \ \lambda \in \mathcal{S}_{k+2}(r+1), \mu \in \mathcal{S}_{k+1}(r).$$ If $D_{2}[(x_n^{(\lambda_i)})_{i=1}^{3}; (f_{\mu_j})_{j=1}^{2}] \to 0$ for every $\lambda \in \mathcal{S}_{k+2}(3) \ \mbox{and} \ \mu \in \mathcal{S}_{k+1}(2),$ then by the assumption of Case$-(ii),$ choose $r=2.$ If not, then there exist $\alpha \in \mathcal{S}_{k+2}(3)$ and $\beta \in \mathcal{S}_{k+1}(2)$ such that $D_{2}[(x_{n}^{(\alpha_i)})_{i=1}^{3}; (f_{\beta_j})_{j=1}^{2}] \nrightarrow 0.$ Now if $D_{3}[(x_n^{(\lambda_i)})_{i=1}^{4}; (f_{\mu_j})_{j=1}^{3}] \to 0$ for every $\lambda \in \mathcal{S}_{k+2}(4) \ \mbox{and} \ \mu \in \mathcal{S}_{k+1}(3),$ then choose $r=3.$ Similarly, proceeding like this and using the hypothesis, the claim holds. Therefore, there exist $r \in \mathbb{Z}^{+}$ with $2 \leq r \leq k,$ $\epsilon>0$ and a subsequence $(n_{m})$ of $(n)$ satisfying $$\vert D_{r-1}[(x_{n_m}^{(\alpha_i)})_{i=1}^{r}; (f_{\beta_j})_{j=1}^{r-1}] \vert \geq \epsilon \ \mbox{for some} \ \alpha \in \mathcal{S}_{k+2}(r), \beta \in \mathcal{S}_{k+1}(r-1), \mbox{for all} \ m \in \mathbb{N}$$ and $$D_{r}[(x_n^{(\lambda_i)})_{i=1}^{r+1}; (f_{\mu_j})_{j=1}^{r}] \to 0 \ \mbox{for every} \ \lambda \in \mathcal{S}_{k+2}(r+1), \mu \in \mathcal{S}_{k+1}(r).$$ Without loss of generality, assume $\alpha=\{1,2, \dots, r\}$ and $\beta=\{1, 2, \dots, r-1\}.$ For any $m \in \mathbb{N},$ consider $$\begin{aligned}
A_{m}= &
\begin{bmatrix}
1 & 1 & \dots & 1 \\
f_1(x_{n_m}^{(1)}) & f_1(x_{n_m}^{(2)}) & \dots & f_1(x_{n_m}^{(k+2)})\\
\vdots & \vdots & \ddots & \vdots \\
f_{k+1}(x_{n_m}^{(1)}) & f_{k+1}(x_{n_m}^{(2)}) & \dots & f_{k+1}(x_{n_m}^{(k+2)})\\
\end{bmatrix}\hspace{-0.1cm}.
\end{aligned}$$ Now using Sylvester's determinant identity [@HoJo2013 Page 27] for every $m \in \mathbb{N},$ we have $$det(A_m)D_{r-1}[(x_{n_m}^{(i)})_{i=1}^{r}; (f_{j})_{j=1}^{r-1}]^{k-r+1}= det(B_m),$$ where $B_m= [b_{s,t}^{(r,m)}]_{r+1 \leq s,t \leq k+2}$ and $b_{s,t}^{(r,m)}= D_r[(x_{n_m}^{(i)})_{i=1}^{r},x_{n_m}^{(t)}; (f_j)_{j=1}^{r-1}, f_{s-1}]$ for all $r+1 \leq s, t \leq k+2.$ By evaluating the determinant of $B_m$ along any row and using the claim, we get $det(B_m) \to 0.$ Note that $|D_{k+1}[(x_{n_m}^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}]|= |det(A_m)| \leq \frac{1}{\epsilon^{k-r+1}}|det(B_m)|$ and hence $|D_{k+1}[(x_{n_m}^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}]| \to 0.$ Thus, $|D_{k+1}[(x_{n}^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}]| \to 0.$ ◻
Now, we characterize Schur's property using $k-$dimensional determinants. The space $X$ is said to have Schur's property, if norm and weak convergences coincide for sequences in $X.$
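For instance, the classical Schur theorem asserts that $\ell_1$ has Schur's property, whereas no infinite dimensional Hilbert space does, since any orthonormal sequence converges weakly to $0$ but is bounded away from $0$ in norm.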
**Proposition 4**. *The following statements are equivalent.*
1. *$X$ has Schur's property.*
2. *If $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $X$ and $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_{k} \in S_{X^*},$ then $V[(x_n^{(i)})_{i=1}^{k+1}] \to 0.$*
*Proof.* $(1) \Rightarrow (2)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $X.$ Assume that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*}.$ Observe that, there exist $(k)-$sequences $(f_n^{(1)}), (f_n^{(2)}), \dots, (f_n^{(k)})$ in $S_{X^*}$ such that $V[(x_n^{(i)})_{i=1}^{k+1}] \leq D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_n^{(j)})_{j=1}^{k}] + \frac{1}{n}$ for all $n \in \mathbb{N}.$ Now, it is enough to show that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_n^{(j)})_{j=1}^{k} ] \to 0.$\
Step$-(1)$: Fix $f_2, f_3, \dots, f_{k} \in S_{X^*}.$ For any $f_1 \in S_{X^*},$ by evaluating $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]$ along the $2^{nd}$ row, we have $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]=f_1(\sum_{i=1}^{k+1} (-1)^{(2+i)}x_n^{(i)}M_n^{(2,i)}),$ where for any $1 \leq i \leq k+1$ and $n \in \mathbb{N},$ $M_n^{(2,i)}$ denotes the minor of the $(2,i)^{th}$ entry, which does not depend on $f_1.$ By the assumption, $f_1(\sum_{i=1}^{k+1} (-1)^{(2+i)}x_n^{(i)}M_n^{(2,i)}) \to 0$ for every $f_1 \in S_{X^*},$ that is, $\sum_{i=1}^{k+1} (-1)^{(2+i)}x_n^{(i)}M_n^{(2,i)} \xrightarrow{w} 0.$ Since $X$ has Schur's property, we have $\sum_{i=1}^{k+1} (-1)^{(2+i)}x_n^{(i)}M_n^{(2,i)} \to 0$ and hence $D_k[(x_n^{(i)})_{i=1}^{k+1}; f_n^{(1)},(f_j)_{j=2}^{k}]= f_n^{(1)}(\sum_{i=1}^{k+1} (-1)^{(2+i)}x_n^{(i)}M_n^{(2,i)}) \to 0.$\
Step$-(2)$: Fix $f_3, f_4, \dots, f_{k} \in S_{X^*}.$ For any $f_2 \in S_{X^*},$ by evaluating $D_k[(x_n^{(i)})_{i=1}^{k+1}; f_n^{(1)},(f_j)_{j=2}^{k}]$ along the $3^{rd}$ row, we have $D_k[(x_n^{(i)})_{i=1}^{k+1}; f_n^{(1)},(f_j)_{j=2}^{k}]=f_2(\sum_{i=1}^{k+1} (-1)^{(3+i)}x_n^{(i)}M_n^{(3,i)}),$ where for any $1 \leq i \leq k+1$ and $n \in \mathbb{N},$ $M_n^{(3,i)}$ denotes the minor of the $(3,i)^{th}$ entry of the determinant $D_k[(x_n^{(i)})_{i=1}^{k+1}; f_n^{(1)},(f_j)_{j=2}^{k}].$ Now, by using the similar argument involved in Step$-(1),$ we have $D_k[(x_n^{(i)})_{i=1}^{k+1}; f_n^{(1)},f_n^{(2)},(f_j)_{j=3}^{k}] \to 0$.\
By repeating the same procedure up to Step$-(k),$ we get $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_n^{(j)})_{j=1}^{k} ] \to 0.$\
$(2) \Rightarrow (1)$: If $X$ is a finite dimensional space, then there is nothing to prove. Let $X$ be an infinite dimensional space and $(x_n^{(1)})$ be a sequence in $X$ such that $x_n^{(1)} \xrightarrow{w} 0.$ For each $n \in \mathbb{N},$ by Hahn-Banach theorem, there exists $f_n^{(1)} \in S_{X^*}$ such that $f_n^{(1)}(x_n^{(1)})=\|x_n^{(1)}\|.$ Now, for every $n \in \mathbb{N}$ and $2 \leq i \leq k,$ there exists $x_n^{(i)} \in \cap_{j=1}^{i-1} ker(f_n^{(j)}) \cap S_{X}$ and by Hahn-Banach theorem, there exists $f_n^{(i)} \in S_{X^*}$ such that $f_n^{(i)}(x_n^{(i)})=1.$ Since $x_n^{(1)} \xrightarrow{w} 0$ and $(x_n^{(i)})$ are bounded sequences for all $2 \leq i \leq k,$ by , we have $D_k[0,(x_n^{(i)})_{i=1}^{k}; (g_j)_{j=1}^{k}] \to 0$ for all $g_1, g_2, \dots, g_k \in S_{X^*}.$ Therefore, by $(2),$ we have $V[0,(x_n^{(i)})_{i=1}^{k}] \to 0,$ which further implies, $D_k[0,(x_n^{(i)})_{i=1}^{k}; (f_n^{(j)})_{j=1}^{k}] \to 0.$ Since $D_k[0,(x_n^{(i)})_{i=1}^{k}; (f_n^{(j)})_{j=1}^{k}]=\|x_n^{(1)}\|,$ it follows that $x_n^{(1)} \to 0.$ Hence the proof. ◻
In the following definition, we introduce two notions called $k-$weakly strong Chebyshevness and $k-$weakly uniformly strong Chebyshevness, which are weaker than the notions $k-$strong Chebyshevness [@VeRS2021] and $k-$uniformly strong Chebyshevness [@KaVe2018] respectively. These new notions will be used to characterize $k-$WUR and $k-$WMLUR spaces in Section $3$.
**Definition 5**. *Let $A$ and $B$ be non-empty subsets of $X,$ $x \in X$ and $k \in \mathbb{Z}^{+}.$ Then we say that $A$ is*
1. *$k-$weakly strongly Chebyshev (in short, $k-$$w$SCh) at $x,$ if $A$ is proximinal at $x$ and for every $\epsilon>0,$ $f_1,f_2,\dots,f_k \in S_{X^*}$ there exists $\delta=\delta(\epsilon,x,(f_j)_{j=1}^{k})>0$ such that $\vert D_k[(x_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}]\vert \leq \epsilon$ whenever $x_1,x_2,\dots,x_{k+1} \in P_A(x,\delta);$*
2. *$k-$$w$SCh on $B,$ if $A$ is $k-$$w$SCh at every $x \in B;$*
3. *$k-$weakly uniformly strongly Chebyshev (in short, $k-$$w$USCh) on $B,$ if $A$ is proximinal on $B$ and for every $\epsilon>0,$ $f_1,f_2,\dots,f_k \in S_{X^*}$ there exists $\delta=\delta(\epsilon,(f_j)_{j=1}^{k})>0$ such that $\vert D_k[(x_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}]\vert \leq \epsilon$ whenever $x_1,x_2,\dots,x_{k+1} \in P_A(x,\delta)$ and $x \in B.$*
The notion $1-$$w$SCh (respectively, $1-$$w$USCh) coincides with the notion weakly strongly Chebyshev [@BLLN2008; @GaTh2022] (respectively, weakly uniformly strongly Chebyshev [@GaTh2022]).
Observe that, $A$ is $k-$$w$USCh on $B$ $\Rightarrow$ $A$ is $k-$$w$SCh on $B$ $\Rightarrow$ $A$ is $k-$Chebyshev on $B.$ In , we will see that the reverse implications are not necessarily true. Further, $A$ is $k-$USCh (respectively, $k-$SCh) on $B$ $\Rightarrow$ $A$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $B,$ in general the converse does not hold (see, ), however, using , the converse holds whenever the space has Schur's property.
**Example 6**. *Consider the space $X=(\ell_1, \|\cdot\|_H)$ from [@Smit1978a Example 5] and $k \in \mathbb{Z}^{+}.$ In [@Smit1978a], it is proved that $X$ is rotund, but not MLUR. Then, by [@Megg1998 Theorems 5.1.18 and 5.3.28], it follows that $B_X$ is Chebyshev on $X,$ but not approximatively compact on $X$ (see, [@BLLN2008 Definition 1.1]). Therefore, by [@VeRS2021 Lemma 2.8], $B_X$ is not $k-$SCh on $X.$ Since $X$ has Schur's property, we have $B_X$ is not $k-$$w$SCh on $X.$ However, $B_X$ is $k-$Chebyshev on $X.$*
The following sequential version of is easy to verify and will be used further.
**Proposition 7**. *Let $A$ and $B$ be non-empty subsets of $X$ and $x \in X.$ Then the following statements hold.*
1. *$A$ is $k-$$w$SCh at $x$ iff $A$ is proximinal at $x$ and for any $(k+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}), \dots,$ $(x_n^{(k+1)})$ in $A$ such that $\|x_n^{(i)}-x\| \to d(x,A)$ for all $1 \leq i \leq k+1,$ it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
2. *$A$ is $k-$$w$USCh on $B$ iff $A$ is proximinal on $B$ and for any $(k+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}),$ $\dots, (x_n^{(k+1)})$ in $A,$ a sequence $(y_n)$ in $B$ such that $\|x_n^{(i)}-y_n\|-d(y_n,A) \to 0$ for all $1 \leq i \leq k+1,$ it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
Now, we introduce a notion called property $k-$weakly UC which is a generalization of both property $w$UC [@GaTh2022] and property $k-$UC [@KaVe2018].
**Definition 8**. *Let $A$ and $B$ be non-empty subsets of $X$ and $k \in \mathbb{Z^+}.$ The pair $(A,B)$ is said to have property $k-$weakly UC (in short, property $k-$$w$UC), if for any $(k+1)-$sequences $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ in $A$ and a sequence $(y_n)$ in $B$ such that $\|x_n^{(i)}-y_n\| \to d(A,B)$ for all $1 \leq i \leq k+1,$ it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
The property $k-$$w$UC coincides with property $w$UC [@GaTh2022] for the case $k=1.$ Further, if $(A,B)$ has property $k-$UC, then $(A,B)$ has property $k-$$w$UC. The converse does not hold in general (see, ). However, using , the converse holds, whenever the space has Schur's property.
The following result is a consequence of . On the other hand, it reveals that if a pair of subsets has property $w$UC, then it has property $k-$$w$UC and a similar statement holds for the notions $k-$$w$USCh and $k-$$w$SCh.
**Proposition 9**. *Let $A$ and $B$ be non-empty subsets of $X$ and $x \in X.$ Then the following statements hold.*
1. *If $(A,B)$ has property $k-$$w$UC, then $(A,B)$ has property $(k+1)-$$w$UC.*
2. *If $A$ is $k-$$w$USCh on $B,$ then $A$ is $(k+1)-$$w$USCh on $B.$*
3. *If $A$ is $k-$$w$SCh at $x,$ then $A$ is $(k+1)-$$w$SCh at $x.$*
*Proof.* $(1)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ be $(k+2)-$sequences in $A,$ $(y_n)$ be a sequence in $B$ such that $\|x_n^{(i)}-y_n\|\to d(A, B)$ for all $1 \leq i \leq k+2$ and $f_1, f_2, \dots, f_{k+1} \in S_{X^*}.$ By assumption, it follows that $D_k[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_{\beta{j}})_{j=1}^{k}] \to 0$ for all $\alpha \in \mathcal{S}_{k+2}(k+1)$ and $\beta \in \mathcal{S}_{k+1}(k).$ Hence, by , we have $D_{k+1}[(x_n^{(i)})_{i=1}^{k+2}; (f_j)_{j=1}^{k+1}] \to 0.$ Thus, $(A,B)$ has property $(k+1)-$$w$UC.
The proofs of $(2)$ and $(3)$ follow in the similar lines of proof of $(1).$ ◻
We remark that the converses of the statements of need not be true for any $k \in \mathbb{Z}^{+}$ (see, ).
In the following proposition and remark, we present some relations among the notions $k-$$w$SCh, $k-$$w$USCh and property $k-$$w$UC.
**Proposition 10**. *Let $A$ and $B$ be non-empty subsets of $X.$ Then the following statements hold.*
1. *If $A$ is $k-$$w$USCh on $B,$ then $(A,B)$ has property $k-$$w$UC.*
2. *If $(A,B)$ has property $k-$$w$UC, then $A$ is $k-$$w$USCh on $B_0,$ where $B_0=\{y \in B: \|x-y\|=d(A,B)$ for some $x \in A \}.$*
*Proof.* $(1)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $A$ and $(y_n)$ be a sequence in $B$ such that $\|x_n^{(i)}-y_n\| \to d(A,B)$ for all $1 \leq i \leq k+1.$ Since, for any $1 \leq i \leq k+1,$ $$0 \leq \|x_n^{(i)}-y_n\|-d(y_n,A) \leq \|x_n^{(i)}-y_n\|-d(A,B),$$ we have $\|x_n^{(i)}-y_n\|-d(y_n,A) \to 0.$ Thus, by assumption, it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$ Hence, $(A,B)$ has property $k-$$w$UC.\
$(2)$: Clearly, $A$ is proximinal on $B_0.$ Let $(x_n^{(1)}),(x_n^{(2)}),\dots,(x_n^{(k+1)})$ be $(k+1)-$sequences in A and $(y_n)$ be a sequence in $B_0$ such that $\|x_n^{(i)}-y_n\|- d(y_n,A) \to 0$ for all $1 \leq i \leq k+1.$ Since $d(y_n,A)= d(A,B)$ for all $n \in \mathbb{N},$ we have $\|x_n^{(i)}-y_n\| \to d(A,B)$ for all $1 \leq i \leq k+1.$ Therefore, by assumption, it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$ Thus, $A$ is $k-$$w$USCh on $B_0.$ ◻
**Remark 11**. *Let $A$ be a non-empty bounded subset of $X$ and $B$ be a non-empty boundedly compact subset of $X.$ If $A$ is $k-$$w$SCh on $B,$ then $(A,B)$ has property $k-$$w$UC.*
The next example shows that the converse of the statements of and need not be true. In particular, property $k-$$w$UC of the pair $(A,B)$ is not sufficient for the proximinality of $A$ on $B.$
**Example 12**. * *
1. *Let $k \in \mathbb{Z}^{+}$ and $M=(c_0, \|\cdot\|_{\infty}).$ By [@DeGZ1993 Chapter II, Corollary 6.9], $M$ admits an equivalent norm (say, $\|\cdot\|_r$) such that $X=(c_0, \|\cdot\|_r)$ is WUR. Since $X$ is not reflexive, there exists a subspace $Y$ of $X$ such that $Y$ is not proximinal at some $x \in X.$ However, by [@GaTh2022 Theorem 4.6], $(Y, \{x\})$ has property $w$UC and hence, by , it has property $k-$$w$UC.*
2. *Let $k \in \mathbb{Z}^{+}$ and $X= (\mathbb{R}^{k+1}, \|\cdot\|_{\infty}).$ Consider $A= B_X$ and $B= 3S_X \cup \{2(\sum_{i=1}^{k+1}e_i)\}.$ It is easy to prove that $(A,B)$ has property $w$UC and hence, by , it has property $k-$$w$UC. However, it is clear that $A$ is not $k-$Chebyshev at $3e_1 \in B.$ Thus, $A$ is not $k-$$w$USCh on $B.$*
3. *Let $k \in \mathbb{Z}^{+},$ $X= (\mathbb{R}^{k+1}, \|\cdot\|_2) \oplus_{\infty} \mathbb{R}$ and $Y =(\mathbb{R}^{k+1}, \|\cdot\|_2) \oplus_{\infty} \{0\}$ be the subspace of $X.$ Consider $A=B_Y$ and $B=\{(2e_1,0)\} \cup \{(0, 1+\frac{1}{n}): n \in \mathbb{N}\}.$ Observe that $d(A,B)=1$ and $B_0=\{y \in B: \|x-y\|=d(A,B) \text{ for some } x \in A \}=\{(2e_1,0)\}.$ Clearly, $A$ is $k-$$w$USCh on $B_0.$ For all $n \in \mathbb{N}$ and $1 \leq i \leq k+1,$ define $x_n^{(i)}=(e_i,0)$ and $y_n=(0, 1+\frac{1}{n}).$ Therefore, $\|x_n^{(i)}-y_n\| \to 1$ for all $1 \leq i \leq k+1,$ but by , there exists $g_1, g_2, \dots, g_{k} \in S_{X^*}$ such that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (g_j)_{j=1}^{k}]= \epsilon$ for some $\epsilon>0.$ Thus, $(A,B)$ does not have property $k-$$w$UC.*
The proof of the subsequent result follows in similar lines of the proof of [@GaTh2023 Theorem 2.12].
**Theorem 13**. * The following statements hold.*
1. *If $B_X$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $rS_X$ for some $r\in (1, \infty),$ then $B_X$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $tS_X$ for every $t \in (1, \infty).$*
2. *If $S_X$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $rS_X$ for some $r \in (1, \infty),$ then $S_X$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $tS_X$ for every $t \in (1, \infty).$*
3. *If $S_X$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $rS_X$ for some $r \in (0,1),$ then $S_X$ is $k-$$w$USCh (respectively, $k-$$w$SCh) on $tS_X$ for every $t \in (0,\infty).$*
Now, we present a characterization of $k-$rotund spaces in terms of $k-$rotundity of the quotient spaces.
**Theorem 14**. *Let $\alpha, \beta \in \mathbb{Z}^{+}$ and $X$ be a Banach space satisfying $dim(X) \geq k+2,$ $1 \leq \alpha \leq dim(X)-(k+1)$ and $k+1 \leq \beta \leq dim(X)-1.$ Consider the following statements.*
1. *$X$ is $k-$rotund.*
2. *$X/M$ is $k-$rotund, whenever $M$ is a proximinal subspace of $X.$*
3. *$X/F$ is $k-$rotund, whenever $F$ is a subspace of $X$ with $dim(F)= \alpha.$*
4. *$X/Y$ is $k-$rotund, whenever $Y$ is a proximinal subspace of $X$ with $codim(Y)= \beta.$*
*Then $(1) \Leftrightarrow (2) \Leftrightarrow (3) \Rightarrow (4).$ Further, if $X$ is reflexive, then all the statements are equivalent.*
*Proof.* $(1) \Rightarrow (2)$: Let $M$ be a proximinal subspace of $X.$ Let $x_1+M, x_2+M, \dots, x_{k+1}+M \in S_{X/M}$ with $\Vert \sum_{i=1}^{k+1}(x_i+M) \Vert =k+1.$ Since $M$ is proximinal on $X,$ for every $1 \leq i \leq k+1$ there exists $y_i \in M$ such that $\|x_i-y_i\|= d(x_i,M)=1.$ Note that $$k+1= \left \Vert \sum\limits_{i=1}^{k+1} (x_i+M) \right \Vert = d\left(\sum\limits_{i=1}^{k+1}x_i,M\right) \leq \left \Vert \sum\limits_{i=1}^{k+1}x_i - \sum\limits_{i=1}^{k+1}y_i \right\Vert \leq k+1,$$ which implies $\|\sum_{i=1}^{k+1}(x_i -y_i)\|= k+1.$ Therefore, by $(1),$ we have $V[(x_i-y_i)_{i=1}^{k+1}]=0.$ Using , it is easy to verify that $V[(x_i+M)_{i=1}^{k+1}] \leq V[(x_i-y_i)_{i=1}^{k+1}]$ and hence $V[(x_i+M)_{i=1}^{k+1}] = 0.$ Thus, $X/M$ is $k-$rotund.\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (1)$: Suppose there exist $x_1, x_2, \dots, x_{k+1} \in S_{X}$ with $\|\sum_{i=1}^{k+1}x_i\|=k+1$ such that $V[(x_i)_{i=1}^{k+1}]>0.$ By Hahn-Banach theorem, there exists $f \in S_{X^*}$ such that $f(\sum_{i=1}^{k+1}x_i)= \|\sum_{i=1}^{k+1}x_i\|.$ Therefore, $f(x_i)=1$ for all $1 \leq i \leq k+1.$ Choose a subspace $F$ such that $F\subseteq ker(f),$ $F \cap span\{x_i- x_{k+1}: 1 \leq i \leq k\}=\{0\}$ and $dim(F)=\alpha.$ Hence, by Ascoli's formula, for all $1 \leq i \leq k+1,$ we have $$1= |f(x_i)|=d(x_i, ker(f)) \leq d(x_i,F) \leq \|x_i\|=1.$$ Therefore, $\|x_i+F\|=1$ for all $1 \leq i \leq k+1.$ Similarly, we have $\|\sum_{i=1}^{k+1}(x_i+F)\|=k+1.$ By $(3),$ we get $V[(x_i+F)_{i=1}^{k+1}]=0.$ Thus, by , there exist $\lambda_1, \lambda_2, \dots ,\lambda_{k} \in \mathbb{R}$ such that $\lambda_k =1$ and $\sum_{i=1}^{k} \lambda_i(x_i-x_{k+1}+F)=0+F.$ Observe that $\sum_{i=1}^{k}\lambda_i(x_i-x_{k+1}) \in F.$ Therefore $\sum_{i=1}^{k}\lambda_i(x_i-x_{k+1}) =0,$ which implies $V[(x_i)_{i=1}^{k+1}]=0.$ This is a contradiction.\
$(2) \Rightarrow (4)$: Obvious.
Let $X$ be a reflexive space. Suppose there exist $x_1, x_2, \dots, x_{k+1} \in S_{X}$ with $\|\sum_{i=1}^{k+1}x_i\|=k+1$ such that $V[(x_i)_{i=1}^{k+1}]>0.$ By Hahn-Banach theorem, there exists $f \in S_{X^*}$ such that $f(\sum_{i=1}^{k+1}x_i)= \|\sum_{i=1}^{k+1}x_i\|.$ Therefore, $f(x_i)=1$ for all $1 \leq i \leq k+1.$ Choose a subspace $Y$ such that $Y\subseteq ker(f),$ $codim(Y)=\beta$ and $Y \cap span\{x_i- x_{k+1}: 1 \leq i \leq k\}=\{0\}$. Since $Y$ is proximinal on $X,$ by replacing $F$ by $Y$ in the proof of $(3) \Rightarrow (1)$ and repeating the argument involved in the proof, we get a contradiction. Hence the proof. ◻
As a consequence of , we observe that the implication $(4) \Rightarrow (1)$ of need not be true in general, for any $k \in \mathbb{Z}^{+}.$
**Example 15**. *Let $k \in \mathbb{Z}^{+}$ and $X= M \oplus_{1} (\mathbb{R}^{k}, \|\cdot\|_{1}),$ where $M$ is the Read's space [@Read2018]. Clearly, $X$ is not $k-$rotund. Let $Y$ be any subspace of $X$ with $codim(Y)=k+2.$ Since any finite co-dimensional subspace of $M$ with co-dimension greater than one is not proximinal on $M,$ by [@BLLN2008 Corollary 4.2], it follows that $Y$ is not proximinal on $X.$ Therefore, $X$ does not have any proximinal subspace of co-dimension $k+2.$*
# Characterizations of $k-$WUR, $k-$WLUR and $k-$WMLUR
In this section, we introduce and study two notions called $k-$weakly uniform rotundity and $k-$weakly locally uniform rotundity. We present a few characterizations of $k-$WUR, $k-$WLUR and $k-$WMLUR in terms of the notions discussed in Section $2$.
**Definition 16**. *Let $k \in \mathbb{Z^+}.$ A space $X$ is said to be*
1. *$k-$weakly uniformly rotund (in short, $k-$WUR), if for every $\epsilon >0$ and $f_1,f_2,\dots,f_k \in S_{X^*}$, $$\delta^{k}_{X}(\epsilon,(f_j)_{j=1}^{k}) \coloneqq \inf\left\{ \{1\} \cup \left\{1-\dfrac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1}x_i \right \Vert:
\begin{array}{l}
x_1, x_2, \dots,x_{k+1}\in S_X,\\
| D_k[(x_i)_{i=1}^{k+1}; (f_j)_{j=1}^{k}]|\geq \epsilon
\end{array}
\right\}
\right\} >0;$$*
2. *$k-$weakly locally uniformly rotund (in short, $k-$WLUR) at $x \in S_X$, if for every $\epsilon >0$ and $f_1,f_2,\dots,f_k \in S_{X^*}$, $$\delta^{k}_{X}(\epsilon, x, (f_j)_{j=1}^{k}) \coloneqq \inf\left\{ \{1\} \cup \left\{1-\dfrac{1}{k+1}\left \Vert x+ \sum\limits_{i=1}^{k}x_i \right \Vert:
\begin{array}{l}
x_1, x_2, \dots,x_{k}\in S_X,\\
| D_k[x, (x_i)_{i=1}^{k}; (f_j)_{j=1}^{k}]|\geq \epsilon
\end{array}
\right\}
\right\} >0.$$ We say $X$ is $k-$weakly locally uniformly rotund (in short, $k-$WLUR), if $X$ is $k-$WLUR at every $x \in S_X$.*
Clearly, the notion $1-$WUR (respectively, $1-$WLUR) coincides with the notion WUR (respectively, WLUR). The equivalent sequential formulations of the notions $k-$WUR and $k-$WLUR given in the following results are useful in proving our main results.
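Indeed, as observed earlier, $D_1[x_1,x_2;f_1]=f_1(x_2-x_1),$ so for $k=1$ the constraint $|D_1[x_1,x_2;f_1]| \geq \epsilon$ in the above definition becomes $|f_1(x_1-x_2)| \geq \epsilon,$ which is exactly the condition appearing in the classical definitions of WUR and WLUR.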
**Proposition 17**. *The following statements are equivalent.*
1. *$X$ is $k-$WUR.*
2. *If $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ are $(k+1)-$sequences in $S_X$ such that $\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to 1,$ then $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
3. *If $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ are $(k+1)-$sequences in $B_X$ such that $\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to 1,$ then $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
4. *If $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ are $(k+1)-$sequences in $X$ such that $\|x_n^{(i)}\| \to 1$ for all $1 \leq i \leq k+1$ and $\frac{1}{k+1}\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to 1,$ then $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
*Proof.* $(1)\Leftrightarrow(2)$: These implications follow from the .\
$(2) \Rightarrow (4)$: Let $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $X$ with $\|x_n^{(i)}\| \to 1$ for all $1 \leq i \leq k+1$ and $\frac{1}{k+1}\| \sum_{i=1}^{k+1}x_n^{(i)} \| \to 1.$ Let $f_1,f_2,\dots,f_k \in S_{X^*}.$ For all $n \in \mathbb{N}$ and $1 \leq i \leq k+1,$ define $\overline{x}_n^{(i)}= \frac {x_n^{(i)}}{\|x_n^{(i)}\|}.$ Since $$1 \geq \frac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1}\overline{x}_n^{(i)} \right \Vert \geq \frac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1}x_n^{(i)} \right \Vert- \frac{1}{k+1}\sum\limits_{i=1}^{k+1} \left \Vert\overline{x}_n^{(i)}-x_n^{(i)} \right \Vert,$$ it follows that $\frac{1}{k+1}\|\sum_{i=1}^{k+1}\overline{x}_n^{(i)}\| \to 1.$ Thus, by assumption, $D_k[(\overline{x}_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0.$ Further, using , we have $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0.$\
$(4) \Rightarrow (3)$: Let $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $B_X$ such that $\frac{1}{k+1}\|\sum_{i=1}^{k+1} x_n^{(i)}\| \to 1.$ Note that for any $1 \leq i \leq k+1,$ we have $$\frac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1}x_n^{(i)} \right \Vert \leq \frac{\|x_n^{(i)}\|}{k+1}+ \frac{k}{k+1} \leq 1$$ and hence $\|x_n^{(i)}\| \to 1.$ Thus, by $(4),$ $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots, f_k \in S_{X^*}.$\
$(3) \Rightarrow (2)$: Obvious. ◻
The proof of the following corollary is similar to the proof of .
**Corollary 18**. *Let $x \in S_X$. Then the following statements are equivalent.*
1. *$X$ is $k-$WLUR at $x$.*
2. *If $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k)})$ are $(k)-$sequences in $S_X$ such that $\frac{1}{k+1}\|x+\sum_{i=1}^{k}x_n^{(i)}\| \to 1,$ then $D_k[x, (x_n^{(i)})_{i=1}^{k};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
3. *If $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k)})$ are $(k)-$sequences in $B_X$ such that $\frac{1}{k+1}\|x+\sum_{i=1}^{k}x_n^{(i)}\| \to 1,$ then $D_k[x, (x_n^{(i)})_{i=1}^{k};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
4. *If $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k)})$ are $(k)-$sequences in $X$ such that $\|x_n^{(i)}\| \to 1$ for all $1 \leq i \leq k$ and $\frac{1}{k+1}\|x+\sum_{i=1}^{k}x_n^{(i)}\| \to 1,$ then $D_k[x, (x_n^{(i)})_{i=1}^{k};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
It is easy to verify that the observations given in the following remark hold.
**Remark 19**. * *
1. *From the definitions, it follows that $k-$UR $\Rightarrow$ $k-$WUR $\Rightarrow$ $k-$WLUR $\Rightarrow$ $k-$rotund. Further, $k-$LUR $\Rightarrow$ $k-$WLUR.*
2. *In general, none of the implications given in $(1)$ can be reversed (see, ). However, if the space is finite dimensional, then all the notions in $(1)$ coincide.*
3. *There is no relation between the notion $k-$WUR and any of the notions $k-$LUR, $k-$MLUR, $k-$strongly rotund (see, ). Also, there is no relation between the notion $k-$WLUR and any of the notions $k-$MLUR, $k-$strongly rotund (see, ).*
The following result is an outcome of , wherein we show that if a space is WUR (respectively, WLUR), then it is $k-$WUR (respectively, $k-$WLUR) for any $k \in \mathbb{Z}^+.$
**Proposition 20**. *Let $x \in S_X.$ Then the following statements hold.*
1. *If $X$ is $k-$WUR, then $X$ is $(k+1)-$WUR.*
2. *If $X$ is $k-$WLUR at $x,$ then $X$ is $(k+1)-$WLUR at $x.$*
*Proof.* $(1)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ be $(k+2)-$sequences in $S_X$ such that $\|\sum_{i=1}^{k+2}x_n^{(i)}\| \to k+2$ and $f_1, f_2, \dots, f_{k+1} \in S_{X^*}.$ Since for any $1 \leq j \leq k+2,$ we have $$\left\Vert\sum\limits_{i=1}^{k+2}x_n^{(i)}\right\Vert- 1=\left\Vert \sum\limits_{i=1}^{k+2}x_n^{(i)}\right\Vert- \left\Vert x_n^{(j)}\right\Vert \leq \left\Vert \sum\limits_{i=1, i \neq j}^{k+2}x_n^{(i)}\right\Vert \leq k+1,$$ which implies $\|\sum_{i=1, i \neq j}^{k+2}x_n^{(i)}\| \to k+1.$ By assumption, we have $D_{k}[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_{\beta_j})_{j=1}^{k}] \to 0$ for all $\alpha \in \mathcal{S}_{k+2}(k+1)$ and $\beta \in \mathcal{S}_{k+1}(k).$ Therefore, by , $D_{k+1}[(x_n^{(i)})_{i=1}^{k+2};(f_j)_{j=1}^{k+1}] \to 0.$ Thus, $X$ is $(k+1)-$WUR.\
$(2)$: Let $x \in S_X,$ $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $S_X$ with $\|x+\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+2$ and $f_1, f_2, \dots, f_{k+1} \in S_{X^*}.$ Note that, $\|x+\sum_{i=1}^{k}x_n^{(\alpha_i)}\| \to k+1$ for all $\alpha \in \mathcal{S}_{k+1}(k).$ Since $X$ is $k-$WLUR at $x,$ it follows that $|D_{k}[x,(x_n^{(\alpha_i)})_{i=1}^{k}; (f_{\beta_j})_{j=1}^{k}]| \to 0$ for all $\alpha, \beta \in \mathcal{S}_{k+1}(k).$ Now, as a result of [@Suya2000 Lemma 2], for any $\beta \in \mathcal{S}_{k+1}(k),$ we have $$|D_{k}[(x_n^{(i)})_{i=1}^{k+1}; (f_{\beta_j})_{j=1}^{k}]| \leq \sum_{\alpha \in \mathcal{S}_{k+1}(k)}|D_{k}[x, (x_n^{(\alpha_i)})_{i=1}^{k}; (f_{\beta_j})_{j=1}^{k}]|,$$ which implies $|D_{k}[(x_n^{(i)})_{i=1}^{k+1}; (f_{\beta_j})_{j=1}^{k}]| \to 0.$ Thus, by , $D_{k+1}[x,(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k+1}] \to 0.$ Hence, $X$ is $(k+1)-$WLUR at $x.$ ◻
The subsequent example shows that the converses of the statements of , need not be true for any $k \in \mathbb{Z}^{+}.$ Further, we will see in that there exists a strongly rotund space which is $(k+1)-$WUR, but not $k-$WLUR.
**Example 21**. *Let $k \in \mathbb{Z}^{+},$ $k \geq 2$ and $i_1 < i_2 < \dots< i_k.$ For each $x=(x_1, x_2, \dots)$ in $l_2,$ define $$\|x\|_{i_1,i_2, \dots, i_k}^{2}= \left(\sum\limits_{j=1}^{k}\vert x_{i_j} \vert \right)^2+ \sum\limits_{i \neq i_1, i_2, \dots, i_k} x_i^2.$$ Let $X= (l_2, \| \cdot\|_{i_1, i_2, \dots, i_k}).$ In [@LiYu1985 Example 2], it is proved that $X$ is $k-$UR, but not $(k-1)-$rotund. Thus, $X$ is $k-$WUR, but not $(k-1)-$WLUR.*
As noted in , now we provide an example.
**Example 22**. *Consider the space $X= (\ell_2,\| \cdot \|_W)$ from $\textnormal{\cite[Example 2]{Smit1978a}}$ and $k \in \mathbb{Z}^{+}.$ In [@Smit1978a], it is proved that $X$ is WUR, but not MLUR and it does not have the Kadets-Klee property (see, [@Megg1998 Definition 2.5.26]). From [@Megg1998 Theorems 5.1.18 and 5.3.28], it follows that $B_X$ is Chebyshev on $X,$ but not approximatively compact on $X.$ Therefore, by [@VeRS2021 Lemma 2.8], $B_X$ is not $k-$SCh on $X.$ Thus, by [@LiZZ2018 Theorem 2.6], $X$ is not $k-$MLUR. Observe that $X$ is not $k-$strongly rotund. However, by , $X$ is $k-$WUR.*
Now, we present some sequential characterizations of $k-$WUR in terms of a uniform version of $k-$WMLUR.
**Definition 23**. *[[@XiLi2004]]{.nodecor}[\[def KWLUR\]]{#def KWLUR label="def KWLUR"} Let $k \in \mathbb{Z^+}.$ A space $X$ is said to be $k-$WMLUR, if for any $(k+1)-$sequences $(x_n^{(1)}),(x_n^{(2)}),$ $\dots, (x_n^{(k+1)})$ in $S_X$ and $x \in S_X$ with $\|(k+1)x-\sum_{i=1}^{k+1}x_n^{(i)}\| \to 0,$ it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$*
It is easy to verify from the definitions that $k-$WLUR $\Rightarrow$ $k-$WMLUR $\Rightarrow$ $k-$rotund. However, none of the implications can be reversed in general (see, ).
**Theorem 24**. *The following statements are equivalent.*
1. *$X$ is $k-$WUR.*
2. *If $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ are $(k+2)-$sequences in $S_X$ such that $\|(k+1)x_n^{(k+2)}-\sum_{i=1}^{k+1}x_n^{(i)}\| \to 0,$ then $D_k[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*}$ and $\alpha \in \mathcal{S}_{k+2}(k+1).$*
3. *If $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ are $(k+2)-$sequences in $S_X$ such that $\|(k+1)x_n^{(k+2)}-\sum_{i=1}^{k+1}x_n^{(i)}\| \to 0,$ then $D_k[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*}$ and for some $\alpha \in \mathcal{S}_{k+2}(k+1).$*
*Proof.* $(1) \Rightarrow (2)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ be $(k+2)-$sequences in $S_X$ with $\|(k+1)x_n^{(k+2)}-\sum_{i=1}^{k+1}x_n^{(i)}\| \to 0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*}.$ Observe that $\| \sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1.$ Thus, by $(1),$ we get $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Now, it is enough to show that $D_k[x_n^{(k+2)},(x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0$ for all $\beta \in \mathcal{S}_{k+1}(k).$ For every $n \in \mathbb{N}$, consider $y_n=\frac{1}{k+1}\sum_{i=1}^{k+1}x_n^{(i)}$ and let $\beta \in \mathcal{S}_{k+1}(k)$. Note that for any $n \in \mathbb{N},$ we have $|D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]|= (k+1) |D_k[y_n, (x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]|,$ which implies $D_k[y_n, (x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ Since $y_n- x_n^{(k+2)} \to 0,$ it follows from that $$|D_k[y_n-x_n^{(k+2)}+x_n^{(k+2)},(x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] -D_k[x_n^{(k+2)},(x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]| \to 0.$$ Therefore, $D_k[x_n^{(k+2)},(x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (1)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $S_X$ such that $\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1$ and $f_1, f_2, \dots, f_{k}$ in $S_{X^*}.$ For each $n \in \mathbb{N},$ define $x_n^{(k+2)} = \frac{\sum_{i=1}^{k+1}x_n^{(i)}}{\|\sum_{i=1}^{k+1}x_n^{(i)}\|}.$ Since $\| (k+1)x_n^{(k+2)} - \sum_{i=1}^{k+1}x_n^{(i)} \| \to 0$, by $(3),$ $D_k[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for some $\alpha \in \mathcal{S}_{k+2}(k+1).$ If $\alpha=\{1,2, \dots, k+1\},$ then it is done. Assume $D_k[x_n^{(k+2)},(x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0$ for some $\beta \in \mathcal{S}_{k+1}(k).$ Then, using , we have $D_k\left[\frac{1}{k+1}\sum_{i=1}^{k+1}x_n^{(i)},(x_n^{(\beta_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}\right] \to 0.$ Thus, $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Hence, $X$ is $k-$WUR. ◻
The proof of the subsequent corollary follows in similar lines of the proof of .
**Corollary 25**. *Let $x \in S_X.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WLUR at $x.$*
2. *If $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ are $(k+2)-$sequences in $S_X$ such that $x_n^{(1)}=x$ for all $n \in \mathbb{N}$ and $\|(k+1)x_n^{(k+2)}-\sum_{i=1}^{k+1}x_n^{(i)}\| \to 0,$ then $D_k[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*}$ and $\alpha \in \mathcal{S}_{k+2}(k+1).$*
3. *If $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+2)})$ are $(k+2)-$sequences in $S_X$ such that $x_n^{(1)}=x$ for all $n \in \mathbb{N}$ and $\|(k+1)x_n^{(k+2)}-\sum_{i=1}^{k+1}x_n^{(i)}\| \to 0,$ then $D_k[(x_n^{(\alpha_i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*}$ and for some $\alpha \in \mathcal{S}_{k+2}(k+1).$*
In the following proposition and example, we discuss some relationships between rotundity properties of a space and its double dual.
**Proposition 26**. *If $X$ is $k-$WUR, then $X^{**}$ is $k-$rotund.*
*Proof.* Suppose $X^{**}$ is not $k-$rotund. Then there exist $(k+1)$ elements $x_1^{**}, x_2^{**}, \dots, x_{k+1}^{**}$ in $S_{X^{**}}$ such that $\|\sum_{i=1}^{k+1}x_i^{**}\|=k+1,$ but $D_k[(x_i^{**})_{i=1}^{k+1}; (\tilde{g}_j)_{j=1}^{k}]=\epsilon$ for some $\tilde{g}_1, \tilde{g}_2, \dots, \tilde{g}_k \in S_{X^{***}}$ and $\epsilon>0.$ For every $1 \leq i \leq k+1,$ by Goldstine's theorem, there exists a net $(x_{\alpha_i}^{(i)})_{\alpha_i \in I_i}$ in $B_X$ such that $x_{\alpha_i}^{(i)} \xrightarrow{w^*} x_i^{**}.$ Then, by [@Megg1998 Page 150], there exists a subnet $(x_{\beta}^{(i)})$ of $(x_{\alpha_i}^{(i)})$ with the same index set for every $1 \leq i \leq k+1$. Now, using the $w^*-$lower semi-continuity of the norm function, we have $$k+1 = \left\Vert \sum\limits_{i=1}^{k+1}x_i^{**} \right\Vert \leq \liminf_{\beta} \left\Vert \sum\limits_{i=1}^{k+1}x_{\beta}^{(i)} \right\Vert \leq \limsup_{\beta} \left\Vert \sum\limits_{i=1}^{k+1}x_{\beta}^{(i)} \right\Vert \leq k+1,$$ which implies $\|\sum_{i=1}^{k+1} x_{\beta}^{(i)}\| \to k+1.$ Therefore, by assumption, $D_k[(x_{\beta}^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*}.$ Since $x_{\beta}^{(i)} \xrightarrow{w^*} x_i^{**}$ for all $1 \leq i \leq k+1$, we have $D_k[(x_{\beta}^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to D_k[(x_{i}^{**})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]$ for all $f_1, f_2, \dots, f_k \in S_{X^*}.$ Therefore, $D_k[(x_{i}^{**})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]=0$ for all $f_1, f_2, \dots, f_{k} \in S_{X^*}.$ Further for every $1 \leq j \leq k,$ by Goldstine's theorem, there exists a net $(f_{\lambda_j}^{(j)})_{\lambda_j \in J_j}$ in $B_{X^*}$ such that $f_{\lambda_j}^{(j)} \xrightarrow{w^*} \tilde{g}_j.$ Then for every $1 \leq j \leq k$ it is easy to find a subnet $(f_{\gamma}^{(j)})$ of $(f_{\lambda_j}^{(j)})$ with the same index set. Since $f_{\gamma}^{(j)} \xrightarrow{w^*} \tilde{g}_j$ for all $1 \leq j \leq k$, it follows that $D_k[(x_i^{**})_{i=1}^{k+1}; (f_{\gamma}^{(j)})_{j=1}^{k}] \to D_k[(x_i^{**})_{i=1}^{k+1}; (\tilde{g}_j)_{j=1}^{k}].$ Thus $D_k[(x_i^{**})_{i=1}^{k+1}; (\tilde{g}_j)_{j=1}^{k}] = 0,$ which is a contradiction. Hence $X^{**}$ is $k-$rotund. ◻
The following example illustrates that in the assumption $k-$WUR cannot be replaced by $k-$LUR (hence, $k-$WLUR). Further, we will see in that the property $k-$WUR of a space $X$ is not sufficient for the space $X^{**}$ to be $k-$WMLUR. The converse of need not be true in general. To see this, consider a strongly rotund space which is not $k-$WLUR (see, ).
**Example 27**. *Let $X= (l_1, \|\cdot\|_1)$ and $k \in \mathbb{Z}^{+}.$ By [@DeGZ1993 Chapter II, Theorem 2.6], $X$ admits an equivalent norm (say, $\|\cdot\|_r)$ such that $Y= (l_1, \|\cdot\|_r)$ is LUR. Note that, by [@DeGZ1993 Chapter II, Corollary 3.5], $Y^*$ is not smooth. Thus, $Y^{**}$ is not rotund. Now, consider the Banach space $Z= l_2(Y).$ Then, by [@Lova1955 Theorem 1.1], $Z$ is LUR (hence, $k-$LUR). Clearly, $Z^{**} \cong l_2(Y^{**}).$ Therefore, by [@Veen2021 Corollary 2.10], $Z^{**}$ is not $k-$rotund.*
We present some necessary and/or sufficient conditions for the notions $k-$WUR, $k-$WLUR and $k-$WMLUR in terms of property $k-$$w$UC, $k-$$w$USCh and $k-$$w$SCh.
In the next result, we obtain some characterization of $k-$WUR in terms of property $k-$$w$UC.
**Theorem 28**. *Let $r > 1.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WUR.*
2. *If $A$ and $B$ are non-empty subsets of $X$ such that $A$ is convex, then $(A,B)$ has property $k-$$w$UC.*
3. *$(B_X,rS_X)$ has property $k-$$w$UC.*
4. *$(S_X,rS_X)$ has property $k-$$w$UC.*
*Proof.* $(1) \Rightarrow (2)$: Let $A$ and $B$ be non-empty subsets of $X$ and $A$ be convex. Let $(x_n^{(1)}), (x_n^{(2)}),$ $\dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $A,$ $(y_n)$ be a sequence in $B$ such that $\|x_n^{(i)}-y_n\| \to d(A,B)$ for all $1 \leq i \leq k+1$ and $f_1, f_2, \dots, f_k \in S_{X^*}.$ If $d(A,B)=0,$ then it is clear that $(A,B)$ has property $k-$$w$UC. Assume $d(A,B)>0.$ Since $A$ is convex, we have $$d(A,B) \leq \left \Vert \frac{1}{k+1}\sum\limits_{i=1}^{k+1}x_n^{(i)}-y_n \right \Vert= \frac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1} (x_n^{(i)}-y_n) \right \Vert \leq \frac{1}{k+1} \sum\limits_{i=1}^{k+1}\| x_n^{(i)}-y_n \|$$ and hence $\frac{1}{k+1}\| \sum_{i=1}^{k+1} (x_n^{(i)}-y_n) \| \to d(A,B).$ Now, by $(1)$, we have $D_k\left[\left(\frac{x_n^{(i)}-y_n}{d(A,B)}\right)_{i=1}^{k+1};(f_j)_{j=1}^{k}\right] \to 0.$ Therefore, by , we have $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0.$ Hence, $(A,B)$ has property $k-$$w$UC.\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (4)$: Since $S_X \subseteq B_X$ and $d(S_X, rS_X)= d(B_X, rS_X),$ it follows from the assumption that $(S_X,rS_X)$ has property $k-$$w$UC.\
$(4) \Rightarrow (1)$: Let $(S_X, rS_X)$ have property $k-$$w$UC. By and , it follows that $(S_X, (k+1)S_X)$ has property $k-$$w$UC. Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $S_{X}$ with $\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1$ and $f_1,f_2,\dots, f_k \in S_{X^*}.$ For every $n \in \mathbb{N},$ define $y_n= \sum_{i=1}^{k+1}x_n^{(i)}.$ Then for all $1 \leq i \leq k+1,$ we have $$\left \Vert (k+1) \frac{y_n}{\|y_n\|}- x_n^{(i)} \right \Vert = \left \Vert \frac{(k+1)x_n^{(i)}}{\|y_n\|}+ (k+1) \frac{y_n-x_n^{(i)}}{\|y_n\|}-x_n^{(i)} \right \Vert \\
\leq \left \vert \frac{k+1}{\|y_n\|}-1 \right \vert + \frac{(k+1)k}{\|y_n\|}.$$ Thus, $\left \Vert x_n^{(i)}-(k+1) \frac{y_n}{\|y_n\|}\right \Vert \to d(S_X, (k+1)S_X)$ for all $1 \leq i \leq k+1.$ Since $(S_X,(k+1)S_X)$ has property $k-$$w$UC, we get $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0.$ Hence, $X$ is $k-$WUR. ◻
The following corollary is an immediate consequence of and . However, the converse need not be true in general.
**Corollary 29**. *Let $r \in (0,1).$ If $(S_X,rS_X)$ has property $k-$$w$UC, then $X$ is $k-$WUR.*
Now, in view of and , we characterize $k-$WUR spaces in terms of $k-$weakly uniformly strong Chebyshevness of the corresponding closed unit ball.
**Theorem 30**. *Let $r > 1.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WUR.*
2. *$B_X$ is $k-$$w$USCh on $X.$*
3. *$B_X$ is $k-$$w$USCh on $rS_X.$*
*Proof.* $(1) \Rightarrow (2)$: It is enough to show that $B_X$ is $k-$$w$USCh on $X \backslash B_X.$ Let $(x_n^{(1)}), (x_n^{(2)}), \dots,$ $(x_n^{(k+1)})$ be $(k+1)-$sequences in $B_X$, $(y_n)$ be a sequence in $X \backslash B_X$ with $\|x_n^{(i)}-y_n\|-d(y_n,B_X) \to 0$ for all $1 \leq i \leq k+1.$ Note that for all $n \in \mathbb{N},$ we have $d(y_n,B_X)=\|y_n\|-1,$ which implies $\|y_n\|-\|x_n^{(i)}-y_n\| \to 1$ for all $1 \leq i \leq k+1.$ Since $$\frac{1}{k+1}\left \Vert \sum\limits_{i=1}^{k+1}x_n^{(i)}\right \Vert \geq \|y_n\|- \frac{1}{k+1}\left \Vert\sum\limits_{i=1}^{k+1}x_n^{(i)}-(k+1)y_n \right \Vert \geq \|y_n\|- \frac{1}{k+1}\sum\limits_{i=1}^{k+1}\|x_n^{(i)}-y_n\|,$$ it follows that $\frac{1}{k+1}\| \sum_{i=1}^{k+1}x_n^{(i)} \| \to 1.$ Thus, by $(1),$ we have $D_k[(x_n^{(i)})_{i=1}^{k+1};(f_j)_{j=1}^{k}] \to 0$ for all $f_1,f_2,\dots,f_k \in S_{X^*}.$ Therefore, $B_X$ is $k-$$w$USCh on $X \backslash B_X.$\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (1)$: By $(3)$ and , we have $(B_X,rS_X)$ has property $k-$$w$UC. Thus, by , it follows that $X$ is $k-$WUR. ◻
In light of , we now present a few examples to illustrate that some of the implications mentioned in Section $2$ cannot be reversed in general. As mentioned immediately after , the following example shows that, in general, $k-$$w$USCh (respectively, property $k-$$w$UC) does not imply $k-$SCh (respectively, property $k-$UC).
**Example 31**. *Let $k \in \mathbb{Z}^+$. Consider the space $X$ as in . Since $X$ is $k-$WUR, by , $B_X$ is $k-$$w$USCh on $X.$ However as mentioned in , $B_X$ is not $k-$SCh on $X.$ In addition, observe that $X$ is $k-$WUR, but not $k-$UR. Hence, by , $(B_X, (k+1)S_X)$ has property $k-$$w$UC. However, by [@KaVe2018 Theorem 2.19], $(B_X, (k+1)S_X)$ does not have property $k-$UC.*
As noted in Section $2,$ from the following example we can observe that the converses of the statements of are not necessarily true.
**Example 32**. *Let $k \geq 2$. Consider a $k-$WUR space $X$ which is not $(k-1)-$rotund (see, ). Therefore, by and [@GaTh2023 Proposition 2.4], $B_{X}$ is $k-$$w$USCh on $2S_{X}$, but $B_X$ is not $(k-1)-$Chebyshev on $2S_{X}.$ Further, by , $(B_{X}, 2S_{X})$ has property $k-$$w$UC, but it does not have property $(k-1)-$$w$UC.*
For any non-empty closed convex subset $C$ of $X$ and $\alpha>0,$ we define $C^{\alpha}=\{x \in X: d(x, C)= \alpha\}.$ For any $x^* \in S_{X^*},$ we say that the set $ker(x^*)=\{x \in X: x^*(x)=0\}$ is a hyperplane of $X.$
It follows from and that every proximinal convex subset $C$ of a $k-$WUR space is $k-$$w$USCh on $C^{\alpha}$ for any $\alpha>0.$ In fact, something more is true. To see this, we define the notion of $k-$equi weakly uniform strong Chebyshevness as follows. Let $\mathcal{M}$ be a collection of proximinal convex subsets of $X$ and $\alpha>0.$ We say that $\mathcal{M}$ is $k-$equi weakly uniformly strongly Chebyshev (in short, $k-$equi $w$USCh) on $\mathcal{M}^{\alpha},$ if for every $\epsilon>0$ and $f_1,f_2,\dots,f_k \in S_{X^*}$ there exists $\delta>0$ such that $\vert D_k[(x_i)_{i=1}^{k+1};(f_j)_{j=1}^{k}]\vert \leq \epsilon$ whenever $M \in \mathcal{M},$ $x \in M^{\alpha}$ and $x_1,x_2,\dots,x_{k+1} \in P_M(x,\delta).$
**Theorem 33**. *Let $\alpha>0.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WUR.*
2. *$\mathcal{C}$ is $k-$equi $w$USCh on $\mathcal{C}^{\alpha},$ where $\mathcal{C}$ is the collection of all proximinal convex subsets of $X.$*
3. *$\mathcal{M}$ is $k-$equi $w$USCh on $\mathcal{M}^{\alpha},$ where $\mathcal{M}$ is the collection of all proximinal subspaces of $X.$*
4. *$\mathcal{H}$ is $k-$equi $w$USCh on $\mathcal{H}^{\alpha},$ where $\mathcal{H}$ is the collection of all proximinal hyperplanes of $X.$*
5. *$\mathcal{F}$ is $k-$equi $w$USCh on $\mathcal{F}^{\alpha},$ where $\mathcal{F}$ is the collection of all $k-$dimensional subspaces of $X.$*
*Proof.* $(1) \Rightarrow (2)$: Let $(C_n)$ be a sequence of proximinal convex subsets of $X.$ Let $(y_n^{(1)}), (y_n^{(2)}), \dots,$ $(y_n^{(k+1)})$ be $(k+1)-$sequences with $y_n^{(i)} \in C_n$ for all $n \in \mathbb{N}$ and $1 \leq i \leq k+1,$ $(x_n)$ be a sequence with $x_n \in C_n^{\alpha}$ for all $n \in \mathbb{N}$ such that $\|y_n^{(i)}-x_n\| \to \alpha$ for all $1 \leq i \leq k+1$ and $f_1, f_2, \dots, f_k \in S_{X^*}.$ Since $C_n$ is convex, we have $$d(x_n,C_n) \leq \left\Vert \frac{1}{k+1}\sum\limits_{i=1}^{k+1}y_n^{(i)}-x_n \right\Vert= \frac{1}{k+1}\left\Vert\sum_{i=1}^{k+1}(y_n^{(i)}-x_n) \right\Vert \leq \frac{1}{k+1} \sum\limits_{i=1}^{k+1}\|y_n^{(i)}-x_n\|$$ and hence $\frac{1}{k+1}\Vert\sum_{i=1}^{k+1}(y_n^{(i)}-x_n) \Vert \to \alpha.$ Thus, by $(1),$ it follows that $D_k[(\frac{y_n^{(i)}-x_n}{\alpha})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Therefore, by , $D_k[(y_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$\
$(2) \Rightarrow (3) \Rightarrow (4)$: Obvious.\
$(4) \Rightarrow (1)$: Let $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $S_X$ such that $\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1$ and $f_1, f_2, \dots, f_{k}\in S_{X^*}.$ For every $n \in \mathbb{N},$ define $y_n= \frac{1}{k+1}\sum_{i=1}^{k+1}x_n^{(i)}.$ By Hahn-Banach theorem, for every $n \in \mathbb{N}$ there exists $g_n \in S_{X^*}$ such that $g_n(y_n)= \|y_n\|.$ Let $1 \leq i \leq k+1$. Observe that $g_n(y_n) \to 1$ and $g_n(x_n^{(i)}) \to 1$. Now, define $H_n= ker(g_n),$ $\beta_n= d(y_n,H_n)$ and $z_n^{(i)}= y_n-x_n^{(i)}-g_n(y_n-x_n^{(i)})\frac{y_n}{\|y_n\|}$ for all $n \in \mathbb{N}.$ Clearly $H_n$ is proximinal on $X$ for all $n \in \mathbb{N}$ and $\beta_n \to 1.$ Note that $z_n^{(i)} \in H_n$ and $\frac{\alpha y_n}{\beta_n} \in H_n^{\alpha}$ for all $n \in \mathbb{N}.$ Since $$d(H_n, H_n^{\alpha}) \leq \left\Vert \frac{\alpha z_n^{(i)}}{\beta_n}- \frac{\alpha y_n}{\beta_n} \right\Vert\\
= \frac{\alpha}{\beta_n} \left\Vert x_n^{(i)}+ g_n(y_n-x_n^{(i)})\frac{y_n}{\|y_n\|}\right\Vert\\
\leq \frac{\alpha}{\beta_n} \left(\|x_n^{(i)}\|+ |g_n(y_n-x_n^{(i)})| \right),$$ we have $\| \frac{\alpha z_n^{(i)}}{\beta_n}- \frac{\alpha y_n}{\beta_n} \| \to \alpha.$ Thus, by $(4),$ $D_k[(\frac{\alpha}{\beta_n}z_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Further, using and , we have $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Hence, $X$ is $k-$WUR.\
$(2) \Rightarrow (5)$: Obvious.\
$(5) \Rightarrow (1)$: Suppose there exist $\epsilon>0,$ $g_1, g_2, \dots, g_{k} \in S_{X^*}$ and $(k+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}),$ $\dots,(x_n^{(k+1)})$ in $S_X$ such that $\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1,$ but $|D_k[(x_n^{(i)})_{i=1}^{k+1}; (g_j)_{j=1}^{k}]|> \epsilon$ for all $n \in \mathbb{N}.$ Now for every $n \in \mathbb{N},$ define $F_n= span\{x_n^{(i)}- x_n^{(k+1)}: 1 \leq i \leq k\}.$ Using , observe that $F_n$ is a $k-$dimensional subspace of $X$ and hence it is proximinal on $X.$ Thus, for every $n \in \mathbb{N},$ there exist $\lambda_n^{(1)}, \lambda_n^{(2)}, \dots, \lambda_n^{(k)} \in \mathbb{R}$ such that $\|x_n^{(k+1)}+\sum_{i=1}^{k}\lambda_n^{(i)}(x_n^{(i)}- x_n^{(k+1)})\| = d(x_n^{(k+1)}, F_n).$ Denote $d(x_n^{(k+1)}, F_n)= \beta_n$ for all $n \in \mathbb{N}.$ Using [@VeVe2018 Lemma 2.3], we have $\beta_n \to 1.$ Note that $d(\frac{\alpha}{\beta_n}x_n^{(k+1)}, F_n)=\alpha,$ $\frac{\alpha}{\beta_n}(x_n^{(k+1)}- x_n^{(i)}) \in F_n$ and $\|\frac{\alpha}{\beta_n}(x_n^{(k+1)}-x_n^{(i)})- \frac{\alpha}{\beta_n}x_n^{(k+1)}\| \to \alpha$ for all $1 \leq i \leq k+1.$ Therefore, our assumption leads to $D_k[(\frac{\alpha}{\beta_n}(x_n^{(k+1)}-x_n^{(i)}))_{i=1}^{k+1}; (g_j)_{j=1}^{k}] \to 0$. Thus from and , $D_k[(x_n^{(i)})_{i=1}^{k+1}; (g_j)_{j=1}^{k}] \to 0,$ which is a contradiction. Hence the proof. ◻
We remark that are generalizations of [@GaTh2022 Theorems 4.5, 4.6 and 4.15].
In the next two results, we present a necessary and a sufficient condition for a space to be $k-$WLUR in terms of $k-$weakly strong Chebyshevness.
**Proposition 34**. *If $X$ is a $k-$WLUR space, then every proximinal convex subset of $X$ is $k-$$w$SCh on $X.$*
*Proof.* In view of , it is enough to prove that every proximinal convex subset $C$ of $X$ with $d(0, C)=1$ is $k-$$w$SCh at $0.$ Let $(x_n^{(1)}), (x_n^{(2)}),\ldots,(x_n^{(k+1)})$ be $(k+1)-$sequences in $C$ such that $\|x_n^{(i)}\| \xrightarrow{} 1$ for all $1\leq i \leq k+1$ and $f_1, f_2, \dots, f_k \in S_{X^*}.$ Choose $y\in P_C(0)$ and observe that $y\in S_X.$ Since $C$ is convex, for any $\alpha \in \mathcal{S}_{k+1}(k)$, we have $$1= d(0, C) \leq \frac{1}{k+1}\left\|y+\sum_{i=1}^k x_n^{(\alpha_i)}\right\| \leq \frac{1}{k+1}(\|y\|+\sum_{i=1}^k \|x_n^{(\alpha_i)}\|)$$ and hence $\|y+\sum_{i=1}^k x_n^{(\alpha_i)}\| \xrightarrow{} k+1.$ Thus, by assumption, we have $D_k[y, (x_n^{(\alpha_i)})_{i=1}^k; (f_j)_{j=1}^k] \xrightarrow{} 0$ for any $\alpha \in \mathcal{S}_{k+1}(k).$ Since, by [@Suya2000 Lemma 2], $$|D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^k]| \leq \sum_{\alpha \in \mathcal{S}_{k+1}(k) } |D_k[y, (x_n^{(\alpha_i)})_{i=1}^k; (f_j)_{j=1}^k]|,$$ we have $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^k] \xrightarrow{} 0.$ ◻
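The determinant estimate from [@Suya2000 Lemma 2] invoked above can be checked numerically. The following sketch is only an illustration, not part of the argument; it assumes that $D_k[x_1,\dots,x_{k+1};f_1,\dots,f_k]$ denotes the $(k+1)\times(k+1)$ determinant whose first row consists of ones and whose $(j+1)$-th row is $(f_j(x_1),\dots,f_j(x_{k+1}))$, as in the earlier sections, and it takes the $f_j$ to be coordinate functionals on $\mathbb{R}^{k+1}$.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
k = 2

def D(vectors, functionals):
    # Determinant with an all-ones first row and the rows f_j(x_i) below it.
    top = np.ones((1, len(vectors)))
    rows = np.array([[f @ v for v in vectors] for f in functionals])
    return np.linalg.det(np.vstack([top, rows]))

xs = [rng.normal(size=k + 1) for _ in range(k + 1)]   # x_1, ..., x_{k+1}
y = rng.normal(size=k + 1)
fs = [np.eye(k + 1)[j] for j in range(k)]             # coordinate functionals f_1, ..., f_k

lhs = abs(D(xs, fs))
# S_{k+1}(k): all strictly increasing k-tuples chosen from {1, ..., k+1}
rhs = sum(abs(D([y] + [xs[i] for i in alpha], fs))
          for alpha in combinations(range(k + 1), k))
print(lhs <= rhs + 1e-9)   # True for every random draw
```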
We remark that the converse of is not necessarily true (see, ).
**Theorem 35**. *Consider the following statements.*
1. *$X$ is $k-$WLUR.*
2. *$S_X$ is $k-$$w$SCh on $rS_X$ for some $r \in (0,1).$*
3. *$(S_X, C)$ has property $k-$$w$UC, whenever $C$ is a non-empty boundedly compact subset of $X$ with $d(0,C)>0.$*
*Then $(1) \Leftarrow (2) \Leftrightarrow (3).$*
*Proof.* $(2) \Rightarrow (1)$: From the assumption and , we have $S_X$ is $k-$$w$SCh on $\frac{1}{2}S_X.$ Let $x \in S_X,$ $(x_n^{(1)}),(x_n^{(2)}), \dots, (x_n^{(k)})$ be $(k)-$sequences in $S_X$ such that $\Vert x+\sum_{i=1}^{k}x_n^{(i)} \Vert \to k+1$ and $f_1,f_2,\dots,f_k \in S_{X^*}.$ Observe that for any $1 \leq i \leq k,$ we have $\|x_n^{(i)}+x\| \to 2$ and $$d(\frac{1}{2}x, S_X) \leq \left \Vert \frac{x_n^{(i)}+x}{{\|x_n^{(i)}+x}\|}-\frac{1}{2}x \right \Vert \leq \left \vert \frac{1}{\|x_n^{(i)}+x\|}-\frac{1}{2} \right \vert + \frac{1}{\|x_n^{(i)}+x\|},$$ which implies $\left \Vert \frac{x_n^{(i)}+x}{\|x_n^{(i)}+x\|}-\frac{1}{2}x \right \Vert \to d(\frac{1}{2}x, S_X).$ Thus, we have $D_k \left[x, \left(\frac{x_n^{(i)}+x}{\|x_n^{(i)}+x\|}\right)_{i=1}^{k};(f_j)_{j=1}^{k}\right] \to 0.$ Therefore, by and , it follows that $D_k [x, (x_n^{(i)})_{i=1}^{k};(f_j)_{j=1}^{k}] \to 0.$\
$(2) \Rightarrow (3)$: This implication follows from .\
$(3) \Rightarrow (2)$: This implication follows from the . ◻
We note that for the case $k=1$, and $(2) \Rightarrow (1)$ of are proved in [@GaTh2022 Proposition 4.7] and [@DuSh2018 Theorem 3.12], respectively. In the following example, we see that the other implications of and need not be true in general for any $k \geq 2$. For instance, for any $k \geq 2$, consider the $k-$WUR space $X=(\mathbb{R}^{k}, \|\cdot\|_{\infty})$ and $x=(r,r, \dots, r)$ for some $0 < r<1.$ It is easy to see that $S_X$ is not $k-$Chebyshev at $x;$ hence $S_X$ is not $k-$$w$SCh on $rS_X.$ Further, by , $(S_X, rS_X)$ does not have property $k-$$w$UC.
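For the example just discussed, the failure of Chebyshevness is easy to make concrete: in $(\mathbb{R}^{k}, \|\cdot\|_{\infty})$ the metric projection of $x=(r,\dots,r)$ onto $S_X$ is far from a singleton. The following small numerical check is a purely illustrative sketch; the sample points below are our own choice and are not taken from the text.

```python
import numpy as np

k, r = 3, 0.4                 # any k >= 2 and any 0 < r < 1
x = np.full(k, r)
d = 1.0 - r                   # d(x, S_X): every unit vector has a coordinate of modulus 1

# A one-parameter family of points of the unit sphere all attaining d(x, S_X):
for t in np.linspace(2 * r - 1, 1.0, 5):
    y = x.copy()
    y[0], y[1] = 1.0, t
    assert np.isclose(np.max(np.abs(y)), 1.0)        # y lies on S_X
    assert np.isclose(np.max(np.abs(y - x)), d)      # y is a nearest point to x in S_X
print("P_{S_X}(x) is not a singleton")
```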
In light of and , it is natural to ask whether the local version of holds. The following result provides a positive answer to this question. We conclude this section with some characterizations of the $k-$WMLUR spaces.
**Theorem 36**. *Let $r>1.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WMLUR.*
2. *$B_X$ is $k-$$w$SCh on $X.$*
3. *$B_X$ is $k-$$w$SCh on $rS_X.$*
*Proof.* $(1) \Rightarrow (2)$: In view of the , it is enough to prove that $B_X$ is $k-$$w$SCh on $\frac{k+1}{k}S_X.$ Let $x \in S_X,$ $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $B_X$ such that $\|x_n^{(i)}- \frac{k+1}{k}x\| \to \frac{1}{k}$ and $f_1, f_2, \dots, f_k \in S_{X^*}.$ We need to show that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Observe that for any $1 \leq i \leq k+1,$ $$\frac{k+1}{k}-1 \leq \frac{k+1}{k}\|x\| - \|x_n^{(i)}\| \leq \left \Vert x_n^{(i)}- \frac{k+1}{k}x\right \Vert$$ holds, which implies $\|x_n^{(i)}\| \to 1.$ Now, we claim that $D_{k}[x,(x_n^{(\alpha_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0$ for all $\alpha \in \mathcal{S}_{k+1}(k).$ Let $\alpha \in \mathcal{S}_{k+1}(k).$ For every $n \in \mathbb{N},$ define $z_n=(k+1)x-\sum_{i=1}^{k} x_n^{(\alpha_i)}.$ Since $$d\left(B_X, \frac{k+1}{k}S_X\right)\leq \left\Vert\frac{1}{k}\sum\limits_{i=1}^{k}x_n^{(\alpha_i)}- \frac{k+1}{k}x\right\Vert \leq \frac{1}{k}\sum\limits_{i=1}^{k}\left\Vert x_n^{(\alpha_i)}- \frac{k+1}{k}x \right\Vert,$$ we have $\|z_n\| \to 1.$ Observe that $\|(k+1)x-(\sum_{i=1}^{k}x_n^{(\alpha_i)}+z_n)\|=0$ for all $n \in \mathbb{N}.$ Hence, by $(1),$ we get $D_k[z_n, (x_n^{(\alpha_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ Note that $|D_k[x,(x_n^{(\alpha_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]| = \frac{1}{k+1} |D_k[z_n,(x_n^{(\alpha_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]|$ for all $n \in \mathbb{N}.$ Thus, $D_k[x,(x_n^{(\alpha_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ Hence the claim.\
Now, it follows from [@Suya2000 Lemma 2] that $$|D_{k}[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]| \leq \sum\limits_{\alpha \in \mathcal{S}_{k+1}(k)}|D_{k}[x,(x_n^{(\alpha_i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]|,$$ which implies $D_{k}[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Hence, $B_X$ is $k-$$w$SCh on $\frac{k+1}{k}S_X.$\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (1)$: It follows from the assumption and that $B_X$ is $k-$$w$SCh on $(k+1)S_X.$ Let $x \in S_X,$ $(x_n^{(1)}), (x_n^{(2)}), \dots, (x_n^{(k+1)})$ be $(k+1)-$sequences in $S_X$ such that $\|(k+1)x-\sum_{i=1}^{k+1} x_n^{(i)}\| \to 0$ and $f_1, f_2, \dots, f_k \in S_{X^*}.$ We need to show that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Note that for any $1 \leq i \leq k+1,$ we have $$\begin{aligned}
(k+1)\|x\|- \|x_n^{(i)}\| & \leq \|x_n^{(i)}-(k+1)x\|\\
&\leq \left\Vert \sum\limits_{j=1}^{k+1}x_n^{(j)}-(k+1)x \right\Vert + \left\Vert \sum\limits_{j=1, j \neq i}^{k+1}x_n^{(j)}\right\Vert \\
&\leq \left\Vert \sum\limits_{j=1}^{k+1}x_n^{(j)}-(k+1)x \right\Vert+k.
\end{aligned}$$ Thus, $\|x_n^{(i)}-(k+1)x\| \to k.$ Since $B_X$ is $k-$$w$SCh on $(k+1)S_X,$ we get $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Hence, $X$ is $k-$WMLUR. ◻
We remark that for the case $k=1,$ is proved in [@ZhLZ2015 Theorem 2.6]. The following corollary is an immediate consequence of and .
**Corollary 37**. *The following statements are equivalent.*
1. *$X$ is $k-$WMLUR.*
2. *If $A$ is a closed ball in $X$ and $B$ is a non-empty boundedly compact subset of $X,$ then $(A,B)$ has property $k-$$w$UC.*
As specified immediately after , the following example illustrates that, in general, $k-$rotundity does not imply $k-$WMLUR.
**Example 38**. *Let $k \in \mathbb{Z}^{+}.$ Consider the space $X$ as in . Note that $X$ is $k-$rotund, but not MLUR. From , it is clear that $B_X$ is not $k-$$w$SCh on $X.$ Therefore, by , $X$ is not $k-$WMLUR.*
# Stability of $k-$WUR, $k-$WLUR and $k-$WMLUR
In this section, we examine the stability of the notions $k-$WUR, $k-$WLUR and $k-$WMLUR. We begin with the inheritance of the notions $k-$WUR and $k-$WLUR by quotient spaces.
In view of , it is natural to ask whether a similar characterization holds for the notions $k-$WUR and $k-$WLUR. To answer this question, in the following result, we prove that the collection of all quotient spaces of a $k-$WUR space is uniformly $k-$WUR. Indeed the reverse implication also holds.
**Theorem 39**. *Let $\alpha, \beta \in \mathbb{Z}^{+}$, $X$ be a Banach space satisfying $dim(X) \geq k+2,$ $1 \leq \alpha \leq dim(X)-(k+1)$ and $k+1 \leq \beta \leq dim(X)-1.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WUR.*
2. *For every $\epsilon>0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*},$ it follows that*
*$\inf \{\delta^{k}_{X/M}(\epsilon, (f_j)_{j=1}^{k}): M \subseteq \cap_{j=1}^{k}ker(f_j) \}>0.$*
3. *For every $\epsilon>0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*},$ it follows that*
*$\inf \{\delta^{k}_{X/F}(\epsilon, (f_j)_{j=1}^{k}): F \subseteq \cap_{j=1}^{k}ker(f_j)$ with $dim(F)= \alpha \}>0.$*
4. *For every $\epsilon>0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*},$ it follows that*
*$\inf \{\delta^{k}_{X/Y}(\epsilon, (f_j)_{j=1}^{k}): Y\subseteq \cap_{j=1}^{k}ker(f_j)$ with $codim(Y)= \beta \}>0.$*
*Proof.* $(1) \Rightarrow (2)$: Suppose there exist $\epsilon>0,$ $f_1, f_2, \dots, f_{k} \in S_{X^*}$, a sequence $(M_n)$ of subspaces in $X$ such that $M_n \subseteq \cap_{j=1}^{k}ker(f_j)$ and $\delta^{k}_{X/M_n}(\epsilon, (f_j)_{j=1}^{k})< \frac{1}{n}$ for all $n \in \mathbb{N}.$ Then there exist $(k+1)-$sequences $(x_n^{(1)}+M_n), (x_n^{(2)}+M_n), \dots, (x_n^{(k+1)}+M_n)$ with $x_n^{(i)}+M_n \in S_{X/M_n}$ for all $n \in \mathbb{N},$ $1 \leq i \leq k+1$ such that $|D_k[(x_n^{(i)}+M_n)_{i=1}^{k+1}; (f_j)_{j=1}^{k}]| \geq \epsilon$ for all $n \in \mathbb{N}$, but $\|\sum_{i=1}^{k+1}(x_n^{(i)}+M_n)\| \to k+1.$ Since $d(x_n^{(i)}, M_n)=1,$ there exists $y_n^{(i)} \in M_n$ such that $\|x_n^{(i)}-y_n^{(i)}\| \to 1$ for all $1 \leq i \leq k+1.$ Therefore, we have $$\left\Vert\sum_{i=1}^{k+1}x_n^{(i)}+M_n\right\Vert \leq \left \Vert \sum\limits_{i=1}^{k+1}x_n^{(i)}- \sum\limits_{i=1}^{k+1}y_n^{(i)} \right\Vert \leq \sum\limits_{i=1}^{k+1}\|x_n^{(i)}-y_n^{(i)}\|$$ and hence $\|\sum_{i=1}^{k+1}(x_n^{(i)}-y_n^{(i)})\| \to k+1.$ By $(1),$ we get $D_{k}[(x_n^{(i)}-y_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Thus, by , we have $D_k[(x_n^{(i)}+M_n)_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ This is a contradiction.\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (1)$: Suppose there exist $\epsilon>0,$ $f_1, f_2, \dots, f_{k} \in S_{X^*}$ and $(k+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}), \dots,$ $(x_n^{(k+1)})$ in $S_X$ such that $\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1,$ but $|D_{k}[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]| \geq \epsilon$ for all $n \in \mathbb{N}.$ By Hahn-Banach theorem, for every $n \in \mathbb{N}$ there exists $g_n \in S_{X^*}$ such that $g_n(\sum_{i=1}^{k+1}x_n^{(i)})= \| \sum_{i=1}^{k+1}x_n^{(i)}\|.$ Now, for every $n \in \mathbb{N},$ choose a subspace $F_n$ of $X$ such that $F_n \subseteq (\cap_{j=1}^{k}ker(f_j)) \cap ker(g_n)$ and $dim(F_n)= \alpha.$ Let $1 \leq i \leq k+1.$ Since $g_n(x_n^{(i)}) \to 1$ and $$|g_n(x_n^{(i)})|= d(x_n^{(i)}, ker(g_n)) \leq d(x_n^{(i)}, F_n) \leq \|x_n^{(i)}\| =1,$$ it follows that $\|x_n^{(i)}+F_n\| \to 1.$ For every $n \in \mathbb{N},$ define $y_n^{(i)}= \frac{x_n^{(i)}}{d(x_n^{(i)},F_n)}.$ Note that $y_n^{(i)}+F_n \in S_{X/F_n}$ for all $n \in \mathbb{N}$ and $g_n(y_n^{(i)}) \to 1.$ Therefore, we have $$\left \vert g_n \left( \sum\limits_{i=1}^{k+1} y_n^{(i)} \right) \right\vert= d\left(\sum\limits_{i=1}^{k+1} y_n^{(i)}, ker(g_n)\right) \leq d\left(\sum\limits_{i=1}^{k+1} y_n^{(i)}, F_n\right) \leq \sum\limits_{i=1}^{k+1}\left \Vert y_n^{(i)}+F_n \right \Vert=k+1$$ and hence $\|\sum_{i=1}^{k+1} y_n^{(i)}+F_n\| \to k+1.$ Thus, by assumption, we get $D_k[(y_n^{(i)}+F_n)_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ Further, by and , it follows that $D_k[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}] \to 0.$ This is a contradiction.\
$(2) \Rightarrow (4)$: Obvious.\
$(4) \Rightarrow (1)$: Suppose there exist $\epsilon>0,$ $f_1, f_2, \dots, f_{k} \in S_{X^*}$ and $(k+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}), \dots,$ $(x_n^{(k+1)})$ in $S_X$ such that $\|\sum_{i=1}^{k+1}x_n^{(i)}\| \to k+1,$ but $|D_{k}[(x_n^{(i)})_{i=1}^{k+1}; (f_j)_{j=1}^{k}]| \geq \epsilon$ for all $n \in \mathbb{N}.$ By the Hahn-Banach theorem, for every $n \in \mathbb{N}$ there exists $g_n \in S_{X^*}$ such that $g_n(\sum_{i=1}^{k+1}x_n^{(i)})= \| \sum_{i=1}^{k+1}x_n^{(i)}\|.$ Now, for every $n \in \mathbb{N},$ choose a subspace $Y_n$ of $X$ such that $Y_n \subseteq (\cap_{j=1}^{k}ker(f_j)) \cap ker(g_n)$ and $codim(Y_n)= \beta.$ By replacing $F_n$ by $Y_n$ in the proof of $(3) \Rightarrow (1)$ and repeating the argument involved in the proof, we get a contradiction. Hence the proof. ◻
The following corollary is an immediate consequence of .
**Corollary 40**. *If $X$ is $k-$WUR and $M$ is a subspace of $X,$ then $X/M$ is $k-$WUR.*
Now, we present an analogous result of for the notion $k-$WLUR.
**Theorem 41**. *Let $\alpha, \beta \in \mathbb{Z}^{+},$ $X$ be a Banach space satisfying $dim(X) \geq k+3,$ $1 \leq \alpha \leq dim(X)-(k+2)$, $k+2 \leq \beta \leq dim(X)-1$ and $x \in S_X.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WLUR at $x.$*
2. *For every $\epsilon>0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*},$ it follows that*
*$\inf \{\delta^{k}_{X/M}(\epsilon, x+M, (f_j)_{j=1}^{k}): M \subseteq \cap_{j=1}^{k}ker(f_j) \ \mbox{and} \ d(x,M)=1 \}>0.$*
3. *For every $\epsilon>0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*},$ it follows that*
*$\inf \{\delta^{k}_{X/F}(\epsilon, x+F, (f_j)_{j=1}^{k}): dim(F)= \alpha \ \mbox{with} \ F \subseteq \cap_{j=1}^{k}ker(f_j) \ \mbox{and} \ d(x,F)=1 \}>0.$*
4. *For every $\epsilon>0$ and $f_1, f_2, \dots, f_{k} \in S_{X^*},$ it follows that*
*$\inf \{\delta^{k}_{X/Y}(\epsilon, x+Y, (f_j)_{j=1}^{k}): codim(Y)= \beta \ \mbox{with} \ Y\subseteq \cap_{j=1}^{k}ker(f_j) \ \mbox{and} \ d(x,Y)=1 \}>0.$*
*Proof.* $(1) \Rightarrow (2)$: Suppose there exist $\epsilon>0,$ $f_1, f_2, \dots, f_{k} \in S_{X^*},$ a sequence $(M_n)$ of subspaces in $X$ such that $M_n\subseteq \cap_{j=1}^{k}ker(f_j)$, $d(x, M_n)=1$ and $\delta^{k}_{X/M_n}(\epsilon, x+M_n, (f_j)_{j=1}^{k})< \frac{1}{n}$ for all $n \in \mathbb{N}.$ Then there exist $(k)-$sequences $(x_n^{(1)}+M_n), (x_n^{(2)}+M_n), \dots, (x_n^{(k)}+M_n)$ with $x_n^{(i)}+M_n \in S_{X/M_n}$ for all $n \in \mathbb{N}$ and $1 \leq i \leq k$ such that $|D_k[(x+M_n),(x_n^{(i)}+M_n)_{i=1}^{k}; (f_j)_{j=1}^{k}]| \geq \epsilon$ for all $n \in \mathbb{N}$, but $\|(x+M_n)+\sum_{i=1}^{k}(x_n^{(i)}+M_n)\| \to k+1$. Since $d(x_n^{(i)}, M_n)=1,$ there exists $y_n^{(i)} \in M_n$ such that $\|x_n^{(i)}-y_n^{(i)}\| \to 1$ for all $1 \leq i \leq k.$ Therefore, we have $$\left\Vert (x+M_n)+\sum_{i=1}^{k}(x_n^{(i)}+M_n)\right\Vert \leq \left \Vert x+\sum\limits_{i=1}^{k}x_n^{(i)}- \sum\limits_{i=1}^{k}y_n^{(i)} \right\Vert \leq \|x\|+ \sum\limits_{i=1}^{k}\|x_n^{(i)}-y_n^{(i)}\|$$ and hence $\|x+\sum_{i=1}^{k}(x_n^{(i)}-y_n^{(i)})\| \to k+1.$ By $(1),$ we get $D_{k}[x, (x_n^{(i)}-y_n^{(i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ Thus, by , we have $D_k[(x+M_n),(x_n^{(i)}+M_n)_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ This is a contradiction.\
$(2) \Rightarrow (3)$: Obvious.\
$(3) \Rightarrow (1)$: Suppose there exist $\epsilon>0,$ $f_1, f_2, \dots, f_{k} \in S_{X^*}$ and $(k)-$sequences $(x_n^{(1)}), (x_n^{(2)}), \dots,$ $(x_n^{(k)})$ in $S_X$ such that $\|x+\sum_{i=1}^{k}x_n^{(i)}\| \to k+1,$ but $|D_{k}[x, (x_n^{(i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]| \geq \epsilon$ for all $n \in \mathbb{N}.$ By the Hahn-Banach theorem, there exists $g \in S_{X^*}$ such that $g(x)=\|x\|$ and for every $n \in \mathbb{N},$ there exists $g_n \in S_{X^*}$ such that $g_n(x+\sum_{i=1}^{k}x_n^{(i)})= \|x+ \sum_{i=1}^{k}x_n^{(i)}\|.$ Observe that $g_n(x_n^{(i)}) \to 1$ for all $1 \leq i \leq k$ and $g_n(x) \to 1.$ Now, for every $n \in \mathbb{N},$ choose a subspace $F_n$ of $X$ such that $F_n \subseteq (\cap_{j=1}^{k}ker(f_j)) \cap ker(g_n) \cap ker(g)$ and $dim(F_n)= \alpha.$ Let $1 \leq i \leq k.$ Since $$|g_n(x_n^{(i)})|= d(x_n^{(i)}, ker(g_n)) \leq d(x_n^{(i)}, F_n) \leq \|x_n^{(i)}\| =1,$$ we have $\|x_n^{(i)}+F_n\| \to 1.$ Similarly, $\|x+ F_n\|=1$ for all $n \in \mathbb{N}.$ For every $n \in \mathbb{N}$, define $y_n^{(i)}= \frac{x_n^{(i)}}{d(x_n^{(i)}, F_n)}$ and observe that $y_n^{(i)}+F_n \in S_{X/F_n}$ and $g_n(y_n^{(i)}) \to 1$. Therefore, we have $$\left \vert g_n \left(x+ \sum\limits_{i=1}^{k} y_n^{(i)} \right) \right\vert= d\left(x+\sum\limits_{i=1}^{k} y_n^{(i)}, ker(g_n)\right) \leq d\left(x+\sum\limits_{i=1}^{k} y_n^{(i)}, F_n\right) \leq k+1$$ and hence $\|(x+F_n)+\sum_{i=1}^{k} (y_n^{(i)}+F_n)\| \to k+1.$ By $(3),$ we get $D_k[(x+F_n), (y_n^{(i)}+F_n)_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ Thus, by and , $D_k[x, (x_n^{(i)})_{i=1}^{k}; (f_j)_{j=1}^{k}] \to 0.$ This is a contradiction.\
$(2) \Rightarrow (4)$: Obvious.\
$(4) \Rightarrow (1)$: Suppose there exist $\epsilon>0,$ $f_1, f_2, \dots, f_{k} \in S_{X^*}$ and $(k)-$sequences $(x_n^{(1)}), (x_n^{(2)}), \dots,$ $(x_n^{(k)})$ in $S_X$ such that $\|x+\sum_{i=1}^{k}x_n^{(i)}\| \to k+1,$ but $|D_{k}[x, (x_n^{(i)})_{i=1}^{k}; (f_j)_{j=1}^{k}]| \geq \epsilon$ for all $n \in \mathbb{N}.$ By the Hahn-Banach theorem, there exists $g \in S_{X^*}$ such that $g(x)=\|x\|$ and for every $n \in \mathbb{N},$ there exists $g_n \in S_{X^*}$ such that $g_n(x+\sum_{i=1}^{k}x_n^{(i)})= \|x+ \sum_{i=1}^{k}x_n^{(i)}\|.$ Now, for every $n \in \mathbb{N},$ choose a subspace $Y_n$ of $X$ such that $Y_n \subseteq (\cap_{j=1}^{k}ker(f_j)) \cap ker(g_n) \cap ker(g)$ and $codim(Y_n)= \beta.$ By replacing $F_n$ by $Y_n$ in the proof of $(3) \Rightarrow (1)$ and proceeding in a similar way, we get a contradiction. Hence the proof. ◻
As a consequence of , we have the following result.
**Corollary 42**. *If $X$ is $k-$WLUR and $M$ is a proximinal subspace of $X,$ then $X/M$ is $k-$WLUR.*
*Proof.* Let $\epsilon >0$, $x+M \in S_{X/M}$ and $f_1, f_2, \dots, f_k \in S_{M^{\perp}} \cong S_{(X/M)^*}$. By assumption, there exists $y \in M$ such that $d(x, M)=\|x-y\|=1$. Since $X$ is $k-$WLUR at $(x-y)$, it follows from that $\delta_{X/M}^{k}(\epsilon, (x-y)+M, (f_j)_{j=1}^{k})>0$. Thus, $X/M$ is $k-$WLUR. ◻
In , we see that, in general, a quotient space of a $k-$WLUR space need not be $k-$WLUR. Further, we note that there exists a space $X$ and a subspace $M$ of $X$ such that both $X$ and $X/M$ are $k-$WUR (hence, $k-$WLUR), but $M$ is not proximinal on $X$. To see this, consider any WUR space which is not reflexive (see, ).
**Example 43**. *Let $k \in \mathbb{Z}^+$ and $X= (\ell_1, \|\cdot\|_r)$ be the space considered in [@SmTu1990 Example 1]. That is, for any $x \in \ell_1$, $\|x\|_r= (\|x\|_1^2+ \|S(x)\|_2^2)^{\frac{1}{2}}$, where $S: \ell_1 \to \ell_2$ defined as $S(\alpha_n)= (\alpha_n2^{\frac{-n}{2}})$ for all $(\alpha_n) \in \ell_1$. By [@SmTu1990 Theorem 1], $(\ell_1, \|\cdot\|_1) \cong X/M$ for some subspace $M$ of $X$. Therefore, $X/M$ is not $k-$rotund. However, following [@Smit1978a Example 6], it is easy to see that $X$ is LUR (hence, $X$ is $k-$WLUR).*
From the , it follows that every subspace of a $k-$WUR (respectively, $k-$WLUR) space is $k-$WUR (respectively, $k-$WLUR). Further, in view of , it is natural to ask whether $k-$WUR and $k-$WLUR are three space properties [@Megg1998 Definition 1.7.8] or not. To see this, consider a space $X= M \oplus_1 (\mathbb{R}^k, \|\cdot\|_1)$, where $M$ is a WUR space and $k \in \mathbb{Z^+}.$ Observe that $X$ is not $k-$WLUR. However, $X/M$ and $M$ are $k-$WUR. Thus, $k-$WUR and $k-$WLUR are not three space properties.
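The claim that $X= M \oplus_1 (\mathbb{R}^k, \|\cdot\|_1)$ fails to be $k-$WLUR can be illustrated with the simplest admissible choice $M=\mathbb{R}$, so that $X \cong (\mathbb{R}^{k+1}, \|\cdot\|_1)$. The sketch below is only a hypothetical numerical illustration: it uses constant sequences and the determinant $D_k$ with an all-ones first row, as in the earlier sections, together with coordinate functionals (which are norm one on $X^*$).

```python
import numpy as np

k = 3                                    # X = R ⊕_1 (R^k, ||.||_1), identified with ℓ_1^{k+1}
x = np.eye(k + 1)[0]                     # x = e_1, a unit vector of X
zs = [np.eye(k + 1)[i] for i in range(1, k + 1)]   # constant sequences z^(i) = e_{i+1}
fs = [np.eye(k + 1)[j] for j in range(1, k + 1)]   # coordinate functionals e_{j+1}^*

def Dk(vectors, functionals):
    top = np.ones((1, len(vectors)))
    rows = np.array([[f @ v for v in vectors] for f in functionals])
    return np.linalg.det(np.vstack([top, rows]))

print(np.sum(np.abs(x + sum(zs))))       # ||x + z^(1) + ... + z^(k)||_1 = k + 1
print(Dk([x] + zs, fs))                  # D_k stays equal to 1, so it cannot tend to 0
```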
Now we present a result that is closely related to , which also generalizes [@Rmou2017 Proposition 3.2].
**Proposition 44**. *Let $Y$ be a subspace of $X$ such that $Y^\perp \subseteq NA(X),$ where $NA(X)$ is the set of all norm attaining functionals on $X$. If $X$ is $k-$WLUR, then $X/Y$ is $k-$rotund.*
*Proof.* Suppose $X/Y$ is not $k-$rotund. Then there exist $(k+1)$ elements $x_1+Y, x_2+Y, \dots, x_{k+1}+Y$ in $S_{X/Y}$ such that $\|\sum_{i=1}^{k+1}(x_i+Y)\|=k+1,$ but $D_k[(x_i+Y)_{i=1}^{k+1};(\widetilde{g_j})_{j=1}^k]=\epsilon$ for some $\epsilon >0$ and $\widetilde{g_1}, \widetilde{g_2}, \dots, \widetilde{g_k} \in S_{(X/Y)^*}.$ By Hahn-Banach Theorem, there exists $\widetilde{f} \in S_{(X/Y)^*}$ such that $\widetilde{f}(\sum_{i=1}^{k+1}x_i+Y)=k+1.$ Observe that $\widetilde{f}(x_i+Y)=1$ for all $1\leq i\leq k+1.$ Let $T: (X/Y)^* \xrightarrow{} Y^\perp$ be the isometric isomorphism defined by $T(\widetilde{h})=\widetilde{h} \circ q,$ where $q: X \xrightarrow{} X/Y$ is the quotient map. Clearly, $\widetilde{f} \circ q \in Y^\perp \subseteq NA(X)$ and $\|\widetilde{f} \circ q\|=1.$ Thus, there exists $z\in S_X$ such that $(\widetilde{f} \circ q )(z)=\widetilde{f}(z+Y)=1.$ For every $1\leq i \leq k+1,$ choose a sequence $(y_n^{(i)})$ in $Y$ such that $\|x_i-y_n^{(i)}\| \xrightarrow{} 1.$ Let $\alpha \in \mathcal{S}_{k+1}(k).$ Note that $$(\widetilde{f} \circ q)\left(\sum_{i=1}^k(x_{\alpha_i}-y_n^{(\alpha_i)})+z\right) = (\widetilde{f} \circ q)\left(\sum_{i=1}^kx_{\alpha_i}+z\right) = \widetilde{f}\left(\left(\sum_{i=1}^kx_{\alpha_i}+z\right)+Y\right)=k+1.$$ Since $(\widetilde{f} \circ q)\left(\sum_{i=1}^k(x_{\alpha_i}-y_n^{(\alpha_i)})+z \right) \leq \left\Vert \sum_{i=1}^k(x_{\alpha_i}-y_n^{(\alpha_i)})+z \right\Vert \leq \sum_{i=1}^k\|x_{\alpha_i}-y_n^{(\alpha_i)}\|+1,$ we have $\|\sum_{i=1}^k(x_{\alpha_i}-y_n^{(\alpha_i)})+z\|\xrightarrow{} k+1.$ By assumption, we have $D_k[z, (x_{\alpha_i}-y_n^{(\alpha_i)})_{i=1}^k;(f_j)_{j=1}^k] \xrightarrow{} 0$ for all $f_1, f_2, \dots, f_k \in S_{X^*},$ in particular, $D_k[z, (x_{\alpha_i}-y_n^{(\alpha_i)})_{i=1}^k;(\widetilde{g_j} \circ q)_{j=1}^k] \xrightarrow{} 0.$ Thus, it follows that $D_k[z, (x_{\alpha_i})_{i=1}^k;(\widetilde{g_j} \circ q)_{j=1}^k] = 0$ which further implies, $D_k[z+Y, (x_{\alpha_i}+Y)_{i=1}^k;(\widetilde{g_j})_{j=1}^k] =0.$ By [@Suya2000 Lemma 2], $$|D_k[(x_{i}+Y)_{i=1}^{k+1};(\widetilde{g_j})_{j=1}^k]| \leq \sum_{\alpha \in \mathcal{S}_{k+1}(k)} |D_k[z+Y,(x_{\alpha_i}+Y)_{i=1}^{k}; (\widetilde{g_j})_{j=1}^k ]| = 0,$$ which is a contradiction. Hence $X/Y$ is $k-$rotund. ◻
In the rest of the section, we mainly focus on the stability of the notions $k-$WUR, $k-$WLUR and $k-$WMLUR under finite and infinite $\ell_p-$products.
In it is noted that the notions $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR) and $(k+1)-$WUR (respectively, $(k+1)-$WLUR, $(k+1)-$WMLUR) do not coincide in general. However, we prove that these notions coincide for the $\ell_p-$product $\ell_p(X)$ of a Banach space $X$ for $1 < p< \infty$. For this we need the following results.
**Theorem 45**. *Let $1 \leq p \leq \infty,$ $X_i$ be a Banach space for all $1 \leq i \leq k$ and $X=(\oplus_pX_i)_{i=1}^{k}.$ Then the following statements hold.*
1. *If $X$ is $k-$WUR, then $X_i$ is WUR for some $1 \leq i \leq k.$*
2. *If $X$ is $k-$WLUR, then $X_i$ is WLUR for some $1 \leq i \leq k.$*
3. *If $X$ is $k-$WMLUR, then $X_i$ is WMLUR for some $1 \leq i \leq k.$*
*Proof.* $(1)$: Let $k \geq 2 ,$ $1 \leq p < \infty$ and $X$ be $k-$WUR. Suppose $X_i$ is not WUR for all $1 \leq i \leq k.$ Then for each $1 \leq i \leq k$ there exist $f_i \in S_{X_i^{*}}$ and $(2)-$sequences $(x_n^{(i)}),$ $(y_n^{(i)})$ in $S_{X_i}$ such that $\|x_n^{(i)}+y_n^{(i)}\| \to 2,$ but $|f_i(x_n^{(i)}-y_n^{(i)})|> \epsilon$ for all $n \in \mathbb{N},$ for some $\epsilon>0.$ For every $n \in \mathbb{N},$ define $$\begin{aligned}
z_n^{(1)} &= \frac{1}{k^{\frac{1}{p}}}(
x_n^{(1)}, x_n^{(2)}, x_n^{(3)}, \dots, x_n^{(k-1)}, x_n^{(k)} ),\\
z_n^{(2)} &= \frac{1}{k^{\frac{1}{p}}}(
y_n^{(1)}, x_n^{(2)}, x_n^{(3)}, \dots, x_n^{(k-1)}, x_n^{(k)}),\\
z_n^{(3)} &= \frac{1}{k^{\frac{1}{p}}}(
y_n^{(1)}, y_n^{(2)}, x_n^{(3)}, \dots, x_n^{(k-1)}, x_n^{(k)}), \\
\vdots & \\
z_n^{(k+1)} &= \frac{1}{k^{\frac{1}{p}}}(
y_n^{(1)}, y_n^{(2)}, y_n^{(3)}, \dots, y_n^{(k-1)}, y_n^{(k)}).
\end{aligned}$$ Clearly, $z_n^{(t)} \in S_X$ for all $1 \leq t\leq k+1$ and $n \in \mathbb{N}.$ Now for every $1 \leq j \leq k,$ let $g_j= (0,\dots,0, f_j,0, \dots 0) \in S_{X^*},$ here $f_j$ is in the $j^{th}$ coordinate. Since $$\begin{aligned}
D_k[(z_n^{(t)})_{t=1}^{k+1}; (g_j)_{j=1}^{k}] &= \frac{1}{k^{\frac{k}{p}}}
\begin{vmatrix}
1 & 1 & 1& \dots & 1 \\
f_1(x_n^{(1)}) & f_1(y_n^{(1)}) & f_1(y_n^{(1)}) & \dots & f_1(y_n^{(1)})\\
f_2(x_n^{(2)}) & f_2(x_n^{(2)}) & f_2(y_n^{(2)}) & \dots & f_2(y_n^{(2)})\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_k(x_n^{(k)}) & f_k(x_n^{(k)}) & f_k(x_n^{(k)}) & \dots & f_k(y_n^{(k)})
\end{vmatrix}\\
&= \frac{1}{k^{\frac{k}{p}}}
\begin{vmatrix}
0 & 0 & 0& \dots & 1 \\
f_1(x_n^{(1)}-y_n^{(1)}) & 0 & 0 & \dots & f_1(y_n^{(1)})\\
f_2(x_n^{(2)}-y_n^{(2)}) & f_2(x_n^{(2)}-y_n^{(2)}) & 0 & \dots & f_2(y_n^{(2)})\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_k(x_n^{(k)}-y_n^{(k)}) & f_k(x_n^{(k)}-y_n^{(k)}) & f_k(x_n^{(k)}-y_n^{(k)}) & \dots & f_k(y_n^{(k)})
\end{vmatrix},
\end{aligned}$$ it follows that $$|D_k[(z_n^{(t)})_{t=1}^{k+1}; (g_j)_{j=1}^{k}]| = \frac{1}{k^{\frac{k}{p}}}\left\vert \prod\limits_{i=1}^{k}f_i(x_n^{(i)}-y_n^{(i)})\right\vert > \frac{1}{k^{\frac{k}{p}}} \epsilon^{k}.$$ Since $$\begin{aligned}
\frac{1}{k+1}\left\Vert\sum\limits_{t=1}^{k+1}z_n^{(t)}\right\Vert&= \frac{1}{(k+1)k^{\frac{1}{p}}} \left(\sum\limits_{i=1}^{k}\left\Vert ix_n^{(i)}+(k+1-i)y_n^{(i)}\right \Vert ^{p}\right)^{\frac{1}{p}} \\
&= \frac{1}{k^{\frac{1}{p}}} \left(\sum\limits_{i=1}^{k}\left\Vert\frac{i}{k+1}x_n^{(i)}+\left( 1- \frac{i}{k+1} \right) y_n^{(i)}\right\Vert^{p}\right)^{\frac{1}{p}},
\end{aligned}$$ we have $\|\sum_{t=1}^{k+1}z_n^{(t)}\| \to k+1.$ This is a contradiction. For the case $p= \infty,$ a similar proof holds.\
$(2)$: Let $k \geq 2 ,$ $1 \leq p < \infty$ and $X$ be $k-$WLUR. Suppose $X_i$ is not WLUR for all $1 \leq i \leq k.$ Then for each $1 \leq i \leq k$ there exist $f_i \in S_{X_i^{*}}$, $y_i\ \in S_{X_i}$ and a sequence $(x_n^{(i)})$ in $S_{X_i}$ such that $\|y_i+x_n^{(i)}\| \to 2,$ but $|f_i(y_i-x_n^{(i)})|> \epsilon$ for all $n \in \mathbb{N},$ for some $\epsilon>0.$ Now, using the preceding functionals, sequences and by assuming $y_n^{(i)}=y_i$ for all $n \in \mathbb{N}$ and $1 \leq i \leq k$, construct $k-$functionals $g_1, g_2, \dots, g_k$ in $S_{X^*}$ and $k-$sequences $(z_n^{(1)}), (z_n^{(2)}), \dots, (z_n^{(k)})$ in $S_{X}$ as in the proof of $(1).$ Let $z= \frac{1}{k^{\frac{1}{p}}}(y_1, y_2, \dots, y_k) \in S_{X}.$ By following the similar technique as in the proof of $(1),$ we have $\|z+ \sum_{i=1}^{k}z_n^{(i)}\| \to k+1$ and $|D_k[z, (z_n^{(i)})_{i=1}^{k}; (g_j)_{j=1}^{k}]| > \frac{1}{k^{\frac{k}{p}}} \epsilon^{k}$ for all $n \in \mathbb{N}$. This is a contradiction. For the case $p= \infty,$ a similar proof holds.\
$(3)$: Let $k \geq 2 ,$ $1 \leq p < \infty$ and $X$ be $k-$WMLUR. Suppose $X_i$ is not WMLUR for all $1 \leq i \leq k.$ Therefore, by , $B_{X_i}$ is not $w$SCh on $2S_{X_i}$ for all $1 \leq i \leq k.$ Then for each $1 \leq i \leq k,$ there exist $f_i \in S_{X_i^{*}},$ $w_i \in 2S_{X_i}$ and $(2)-$sequences $(x_n^{(i)}),$ $(y_n^{(i)})$ in $B_{X_i}$ such that $\|x_n^{(i)}-w_i\| \to 1$ and $\|y_n^{(i)}-w_i\| \to 1,$ but $|f_i(x_n^{(i)}-y_n^{(i)})|> \epsilon$ for all $n \in \mathbb{N},$ for some $\epsilon>0.$ Now, using the preceding sequences and functionals, we define $(k+1)-$sequences $(z_n^{(1)}), (z_n^{(2)}), \dots, (z_n^{(k+1)})$ in $B_X$ and $(k)-$functionals $g_1, g_2, \dots, g_k$ in $S_{X^*}$ as in the proof of $(1).$ Let $w= \frac{1}{k^{\frac{1}{p}}}(w_1, w_2, \dots, w_k) \in 2S_X.$ Since for any $1 \leq t \leq k+1,$ we have $$1= d(w, B_X) \leq \|z_n^{(t)}-w\|= \frac{1}{k^{\frac{1}{p}}} \left( \sum\limits_{j=1}^{t-1} \|y_n^{(j)}-w_j\|^p+ \sum_{j=t}^{k}\|x_n^{(j)}-w_j\|^{p}\right)^{\frac{1}{p}}$$ and hence $\|z_n^{(t)}-w\| \to d(w, B_X).$ Using the similar argument involved in the proof of $(1)$, we have $$|D_k[(z_n^{(t)})_{t=1}^{k+1}; (g_j)_{j=1}^{k}]| = \frac{1}{k^{\frac{k}{p}}}\left\vert \prod\limits_{i=1}^{k}f_i(x_n^{(i)}-y_n^{(i)})\right\vert > \frac{1}{k^{\frac{k}{p}}} \epsilon^{k}.$$ Thus, $B_X$ is not $k-$$w$SCh at $w.$ Therefore, by , $X$ is not $k-$WMLUR, which is a contradiction. For the case $p= \infty,$ a similar proof holds. Hence the proof. ◻
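The determinant identity used in the proof of $(1)$, namely $\vert D_k[(z_n^{(t)})_{t=1}^{k+1}; (g_j)_{j=1}^{k}]\vert = \frac{1}{k^{k/p}}\vert \prod_{i=1}^{k}f_i(x_n^{(i)}-y_n^{(i)})\vert$, can be verified numerically for concrete data. The sketch below is only an illustration and not part of the proof: it fixes $k=2$, $p=2$, takes $X_i=(\mathbb{R}^2,\|\cdot\|_2)$, builds the vectors $z^{(t)}$ and the functionals $g_j$ exactly as above, and chooses each $f_i$ to be the first coordinate functional.

```python
import numpy as np

k, p = 2, 2
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)

xs = [unit(rng.normal(size=2)) for _ in range(k)]   # x^(1), ..., x^(k) in S_{X_i}
ys = [unit(rng.normal(size=2)) for _ in range(k)]   # y^(1), ..., y^(k) in S_{X_i}
f = np.array([1.0, 0.0])                            # first coordinate functional

def z(t):
    # z^(t): y's in the first t-1 blocks, x's in the remaining blocks, scaled by k^(-1/p)
    blocks = [ys[i] if i < t - 1 else xs[i] for i in range(k)]
    return np.concatenate(blocks) / k**(1 / p)

zs = [z(t) for t in range(1, k + 2)]
g = lambda j, v: f @ v[2 * j: 2 * (j + 1)]          # g_j = (0, ..., f, ..., 0)

M = np.vstack([np.ones(k + 1), [[g(j, v) for v in zs] for j in range(k)]])
lhs = abs(np.linalg.det(M))
rhs = abs(np.prod([f @ (xs[i] - ys[i]) for i in range(k)])) / k**(k / p)
print(np.isclose(lhs, rhs))                         # True
```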
We note that can be extended to the infinite $\ell_p-$product, as presented in the following corollary.
**Corollary 46**. *Let $1 \leq p \leq \infty,$ $X_i$ be a Banach space for all $i \in \mathbb{N}$ and $X= (\oplus_pX_i)_{i \in \mathbb{N}}.$ If $X$ is $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR), then all but at most $(k-1)$ of the spaces in the collection $\{X_i\}_{i \in \mathbb{N}}$ are WUR (respectively, WLUR, WMLUR).*
The following corollary is an immediate consequence of , and [@Smit1986 A.2, A.3, A.4].
**Corollary 47**. *Let $1 < p < \infty.$ Then the following statements are equivalent.*
1. *$X$ is WUR (respectively, WLUR, WMLUR).*
2. *$\ell_p(X)$ is WUR (respectively, WLUR, WMLUR).*
3. *$\ell_p(X)$ is $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR).*
From the preceding result, we conclude that, unlike the notion WUR (respectively, WLUR, WMLUR), the notion $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR) for $k>1$ need not be lifted to the $\ell_p-$product space. To see this, consider a space $X$ which is $k-$WUR but not rotund (see, ).
Now, we present a necessary condition for a finite $\ell_p-$product space to be $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR).
**Theorem 48**. *Let $X,$ $Y$ be Banach spaces and $1 \leq p \leq \infty.$ For any $k \in \mathbb{Z}^{+},$ there exist $k_1, k_2 \in \mathbb{Z}^{+}$ with $k=k_1+k_2-1$ such that the following statements hold.*
1. *If $X \oplus_p Y$ is $k-$WUR, then $X$ is $k_1-$WUR and $Y$ is $k_2-$WUR.*
2. *If $X \oplus_p Y$ is $k-$WLUR, then $X$ is $k_1-$WLUR and $Y$ is $k_2-$WLUR.*
3. *If $X \oplus_p Y$ is $k-$WMLUR, then $X$ is $k_1-$WMLUR and $Y$ is $k_2-$WMLUR.*
*Proof.* $(1)$: Let $X \oplus_{p} Y$ be $k-$WUR space. Suppose $Y$ is WUR, then there is nothing to prove. Assume $Y$ is not WUR. Then there exists $k_2 \in \mathbb{Z}^{+}$ such that $2 \leq k_2 \leq k$ and $Y$ is $k_2-$WUR, but not $(k_2-1)-$WUR. Now, it is enough to show that $X$ is $k_1-$WUR, where $k_1= k-k_2+1.$ Suppose $X$ is not $k_1-$WUR. Then there exist $\epsilon >0,$ $(k_1+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}),$ $\dots,$$(x_n^{(k_1+1)})$ in $S_X$ with $\|\sum_{i=1}^{k_1+1}x_n^{(i)}\| \to k_1+1,$ but $|D_{k_1}[(x_n^{(i)})_{i=1}^{k_1+1}; (f_j)_{j=1}^{k_1}]| \geq \epsilon$ for all $n \in \mathbb{N}$ and for some $f_1, f_2, \dots, f_{k_1} \in S_{X^*}.$ Since $Y$ is not $(k_2-1)-$WUR, there exist $(k_2)-$sequences $(y_n^{(1)}),$ $(y_n^{(2)}), \dots,$$(y_n^{(k_2)})$ in $S_Y$ with $\|\sum_{i=1}^{k_2}y_n^{(i)}\| \to k_2,$ but $|D_{k_2-1}[(y_n^{(i)})_{i=1}^{k_2}; (g_j)_{j=1}^{k_2-1}] |\geq \epsilon$ for all $n \in \mathbb{N}$ and for some $g_1, g_2, \dots, g_{k_2-1} \in S_{Y^*}.$ Choose $r>0$ with $\|(r,r)\|_{p}=1.$ For every $n \in \mathbb{N}$ and $1 \leq i \leq k+1,$ define $$z_n^{(i)}= \begin{cases}
r(x_n^{(i)}, y_n^{(k_2)}), & \ \textnormal{if} \ 1 \leq i \leq k_1+1;\\
r(x_n^{(k_1+1)}, y_n^{(i-k_1-1)}), & \ \textnormal{if} \ k_1+2 \leq i \leq k_1+k_2.
\end{cases}$$ Clearly, $z_n^{(i)} \in X\oplus_p Y$ and $\|z_n^{(i)}\|=1$ for all $1 \leq i \leq k+1,$ $n \in \mathbb{N}.$ Note that
$$\begin{aligned}
1- \frac{1}{k+1}\left\Vert \sum\limits_{i=1}^{k+1}z_n^{(i)}\right\Vert &= \|(r,r)\|_p- \frac{r}{k+1} \left\Vert \left( \sum\limits_{i=1}^{k_1+1}x_n^{(i)}+(k_2-1)x_n^{(k_1+1)}, \sum\limits_{i=1}^{k_2-1}y_n^{(i)}+ (k_1+1)y_n^{(k_2)}
\right)\right\Vert\\
&= \|(r,r)\|_p- \frac{r}{k+1} \left\Vert \left(\left\Vert \sum\limits_{i=1}^{k_1+1}x_n^{(i)}+(k_2-1)x_n^{(k_1+1)}\right\Vert, \left\Vert \sum\limits_{i=1}^{k_2-1}y_n^{(i)}+ (k_1+1)y_n^{(k_2)} \right\Vert
\right)\right\Vert\\
& \leq r \left\Vert\left(1- \frac{1}{k+1}\left\Vert\sum\limits_{i=1}^{k_1}x_n^{(i)}+ (k_2)x_n^{(k_1+1)}\right\Vert, 1- \frac{1}{k+1}\left\Vert\sum\limits_{i=1}^{k_2-1}y_n^{(i)}+ (k_1+1)y_n^{(k_2)}\right\Vert \right)\right \Vert\\
&\leq r \left\vert 1- \frac{1}{k+1} \left\Vert \sum\limits_{i=1}^{k_1}x_n^{(i)}+ (k_2)x_n^{(k_1+1)}\right\Vert\right \vert+
r \left\vert 1- \frac{1}{k+1} \left\Vert \sum\limits_{i=1}^{k_2-1}y_n^{(i)}+ (k_1+1)y_n^{(k_2)}\right\Vert\right \vert.
\end{aligned}$$
Thus, by [@GaTh2023 Lemma 3.8], it follows that $\frac{1}{k+1}\| \sum_{i=1}^{k+1}z_n^{(i)}\| \to 1.$ For every $1 \leq j \leq k,$ define $$h_j= \begin{cases}
(f_j,0), & \ \textnormal{if} \ 1 \leq j \leq k_1 ;\\
(0, g_{j-k_1}), & \ \textnormal{if} \ k_1+1 \leq j \leq k_1+k_2-1.
\end{cases}$$ Clearly, $h_j \in (X \oplus_p Y)^{*}$ and $\|h_j\|=1$ for all $1 \leq j \leq k.$ Now, consider $$\begin{aligned}
D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]&= D_{k}[((rx_n^{(i)},0))_{i=1}^{k_1+1}, (r(x_n^{(k_1+1)}, y_n^{(l)}-y_n^{(k_2)}))_{l=1}^{k_2-1}; (h_j)_{j=1}^{k}] \\
&= r^{k} det \left(\begin{bmatrix}
A_n & B_n\\
0 & C_n
\end{bmatrix}\right)\hspace{-0.1cm},
\end{aligned}$$ where $A_n= [a_{i,j}^{(n)}]$, here $a_{1,j}^{(n)}=1,$ $a_{i+1,j}^{(n)}=f_i(x_n^{(j)})$ for all $1 \leq i \leq k_1,$ $1 \leq j \leq k_1+1$;\
$B_n=[b_{l,m}^{(n)}]$, here $b_{1,m}^{(n)}=1, b_{l+1,m}^{(n)}= f_l(x_n^{(k_1+1)})$ for all $1 \leq m \leq k_2-1$, $1 \leq l \leq k_1$;\
$C_n=[c_{s,t}^{(n)}],$ here $c_{s,t}^{(n)}= g_s(y_n^{(t)}-y_n^{(k_2)})$ for all $1 \leq s,t \leq k_2-1.$\
Therefore, by assumption, for all $n \in \mathbb{N}$ we have $$\vert D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}] \vert = r^{k} |D_{k_1}[(x_n^{(i)})_{i=1}^{k_1+1}; (f_j)_{j=1}^{k_1}] D_{k_2-1}[(y_n^{(i)})_{i=1}^{k_2}; (g_j)_{j=1}^{k_2-1}]|
\geq r^{k} \epsilon^{2},$$ which contradicts the assumption that $X\oplus_p Y$ is $k-$WUR. Thus, $X$ is $k_1-$WUR.\
$(2)$: Let $X \oplus_{p} Y$ be $k-$WLUR space. Suppose $Y$ is WLUR, then there is nothing to prove. Assume $Y$ is not WLUR. Then there exists $k_2 \in \mathbb{Z}^{+}$ such that $2 \leq k_2 \leq k$ and $Y$ is $k_2-$WLUR, but not $(k_2-1)-$WLUR. Now, it is enough to show that $X$ is $k_1-$WLUR, where $k_1= k-k_2+1.$ Suppose $X$ is not $k_1-$WLUR. Then there exist $x \in S_X$, $\epsilon >0$ and $(k_1)-$sequences $(x_n^{(1)}), (x_n^{(2)}),\dots,(x_n^{(k_1)})$ in $S_X$ with $\|x+\sum_{i=1}^{k_1}x_n^{(i)}\| \to k_1+1,$ but $|D_{k_1}[x,(x_n^{(i)})_{i=1}^{k_1}; (f_j)_{j=1}^{k_1}]| \geq \epsilon$ for all $n \in \mathbb{N}$ and for some $f_1, f_2, \dots, f_{k_1} \in S_{X^*}$. Since $Y$ is not $(k_2-1)-$WLUR, there exist $y \in S_Y$ and $(k_2-1)-$sequences $(y_n^{(1)}),$ $(y_n^{(2)}), \dots,$$(y_n^{(k_2-1)})$ in $S_Y$ with $\|y+\sum_{i=1}^{k_2-1}y_n^{(i)}\| \to k_2$, but $|D_{k_2-1}[y,(y_n^{(i)})_{i=1}^{k_2-1}; (g_j)_{j=1}^{k_2-1}] |\geq \epsilon$ for all $n \in \mathbb{N}$ and for some $g_1, g_2, \dots, g_{k_2-1} \in S_{Y^*}$. Using preceding functionals, consider $k-$functionals $h_1, h_2, \dots, h_{k}$ in $S_{(X \oplus_p Y)^*}$ as in the proof of $(1)$. Choose $r>0$ with $\|(r,r)\|_{p}=1.$ Let $z= r(x, y)$. For every $n \in \mathbb{N}$ and $1 \leq i \leq k$, define $$z_n^{(i)}= \begin{cases}
r(x_n^{(i)}, y), & \ \textnormal{if} \ 1 \leq i \leq k_1;\\
r(x, y_n^{(i-k_1)}), & \ \textnormal{if} \ k_1+1 \leq i \leq k_1+k_2-1.
\end{cases}$$ Clearly, $z, z_n^{(i)} \in S_{(X\oplus_p Y)}$ for all $1 \leq i \leq k$ and $n \in \mathbb{N}$. Now, using a similar argument as in the proof of $(1)$, we have $\frac{1}{k+1}\|z+ \sum_{i=1}^{k}z_n^{(i)}\| \to 1$ and $\vert D_k[z,(z_n^{(i)})_{i=1}^{k}; (h_j)_{j=1}^{k}] \vert \geq r^{k} \epsilon^{2}$ for all $n \in \mathbb{N}$. This contradicts the assumption that $X\oplus_p Y$ is $k-$WLUR. Thus, $X$ is $k_1-$WLUR.\
$(3)$: Let $X \oplus_{p} Y$ be $k-$WMLUR space. Suppose $Y$ is WMLUR, then there is nothing to prove. Assume $Y$ is not WMLUR. Then there exists $k_2 \in \mathbb{Z}^{+}$ such that $2 \leq k_2 \leq k$ and $Y$ is $k_2-$WMLUR, but not $(k_2-1)-$WMLUR. Now, it is enough to show that $X$ is $k_1-$WMLUR, where $k_1= k-k_2+1.$ Suppose $X$ is not $k_1-$WMLUR. Then, by , $B_X$ is not $k_1-$$w$SCh on $2S_X.$ Therefore there exist $x \in S_X,$ $\epsilon >0$ and $(k_1+1)-$sequences $(x_n^{(1)}), (x_n^{(2)}),\dots,(x_n^{(k_1+1)})$ in $B_X$ with $\|x_n^{(i)}-2x\| \to 1,$ but $|D_{k_1}[(x_n^{(i)})_{i=1}^{k_1+1}; (f_j)_{j=1}^{k_1}]| \geq \epsilon$ for all $n \in \mathbb{N}$ and for some $f_1, f_2, \dots, f_{k_1} \in S_{X^*}.$ Since, by , $B_Y$ is not $(k_2-1)-$$w$SCh on $2S_Y,$ there exist $y \in S_Y$ and $(k_2)-$sequences $(y_n^{(1)}),(y_n^{(2)}), \dots,(y_n^{(k_2)})$ in $B_Y$ with $\|y_n^{(i)}-2y\| \to 1,$ but $|D_{k_2-1}[(y_n^{(i)})_{i=1}^{k_2}; (g_j)_{j=1}^{k_2-1}] |\geq \epsilon$ for all $n \in \mathbb{N}$ and for some $g_1, g_2, \dots, g_{k_2-1} \in S_{Y^*}.$ Choose $r>0$ with $\|(r,r)\|_{p}=1.$ Using preceding sequences and functionals, consider $(k+1)-$sequences $(z_n^{(1)}), (z_n^{(2)}), \dots, (z_n^{(k+1)})$ in $B_{X \oplus_p Y}$ and $k-$functionals $h_1, h_2, \dots, h_k$ in $S_{(X \oplus_p Y)^*}$ as in the proof of $(1)$. Let $z=r(x,y) \in S_{X \oplus_p Y}$. It is easy to verify that $\|z_n^{(i)}-2z\| \to 1$ for all $1 \leq i \leq k+1$. Now, following the similar technique as in the proof of $(1)$, we get $\vert D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}] \vert \geq r^{k} \epsilon^{2}$ for all $n \in \mathbb{N}$, which implies $B_{X\oplus_p Y}$ is not $k-$$w$SCh on $X\oplus_p Y.$ Therefore, by , $X \oplus_p Y$ is not $k-$WMLUR, which is a contradiction. Thus, $X$ is $k_1-$WMLUR. Hence the proof. ◻
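The reduction of $D_k$ to the product $D_{k_1}\cdot D_{k_2-1}$ in the proof above rests on the block upper-triangular determinant formula $\det\begin{bmatrix} A & B\\ 0 & C\end{bmatrix} = \det(A)\det(C)$. The following lines are only a quick numerical reminder of this standard fact, with random blocks of the sizes appearing in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
k1, k2 = 2, 3
A = rng.normal(size=(k1 + 1, k1 + 1))               # plays the role of A_n
B = rng.normal(size=(k1 + 1, k2 - 1))               # plays the role of B_n
C = rng.normal(size=(k2 - 1, k2 - 1))               # plays the role of C_n
M = np.block([[A, B], [np.zeros((k2 - 1, k1 + 1)), C]])

print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(C)))   # True
```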
The next result is an immediate consequence of and the fact $(\oplus_pX_i)_{i=1}^{d} \cong (\oplus_pX_i)_{i=1}^{d-1} \oplus_p X_{d}.$
**Corollary 49**. *Let $d \in \mathbb{Z}^{+}$, $d>1$ and $1 \leq p \leq \infty.$ Let $X_i$ be a Banach space for all $1 \leq i \leq d$ and $X= (\oplus_pX_i)_{i=1}^{d}.$ If $X$ is $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR), then there exist $k_1, k_2, \dots, k_d \in \mathbb{Z}^{+}$ such that $k= \sum_{i=1}^{d}k_i-d+1$ and $X_i$ is $k_i-$WUR (respectively, $k_i-$WLUR, $k_i-$WMLUR).*
In the following result, we provide a sufficient condition for a finite $\ell_p-$product space to be $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR).
**Theorem 50**. *Let $X, Y$ be Banach spaces and $1<p < \infty.$ For any $k_1, k_2, k \in \mathbb{Z}^{+}$ satisfying $k=k_1+k_2-1$, the following statements hold.*
1. *If $X$ is $k_1-$WUR and $Y$ is $k_2-$WUR, then $X \oplus_p Y$ is $k-$WUR.*
2. *If $X$ is $k_1-$WLUR and $Y$ is $k_2-$WLUR, then $X \oplus_p Y$ is $k-$WLUR.*
3. *If $X$ is $k_1-$WMLUR and $Y$ is $k_2-$WMLUR, then $X \oplus_p Y$ is $k-$WMLUR.*
*Proof.* $(1)$: Let $X$ be $k_1-$WUR, $Y$ be $k_2-$WUR and $k= k_1+k_2-1.$ Let $(z_n^{(1)}), (z_n^{(2)}), \dots, (z_n^{(k+1)})$ be $(k+1)-$sequences in $S_{(X\oplus_p Y)}$ with $\|\sum_{i=1}^{k+1}z_n^{(i)}\| \to k+1$ and $h_1, h_2, \dots, h_{k} \in S_{(X\oplus_pY)^{*}}.$ Clearly $h_j= (f_j, g_j)$ for some $f_j \in B_{X^*},$ $g_j \in B_{Y^*},$ for all $1 \leq j \leq k$ and $z_n^{(i)}= (x_n^{(i)}, y_n^{(i)})$ for some $x_n^{(i)} \in B_X,$ $y_n^{(i)} \in B_Y$ for all $n \in \mathbb{N},$ $1 \leq i \leq k+1.$ Let $1 \leq i < j \leq k+1.$ Since $$\frac{1}{k+1}\left\Vert\sum\limits_{t=1}^{k+1}z_n^{(t)} \right\Vert \leq \frac{1}{k+1} \left(\left \Vert z_n^{(i)} +z_n^{(j)}\right\Vert + k-1\right) \leq 1,$$ it follows that $\|z_n^{(i)}+z_n^{(j)}\| \to 2.$ Note that $$\begin{aligned}
\Vert z_n^{(i)}+ z_n^{(j)}\Vert &= \Vert (\Vert x_n^{(i)}+x_n^{(j)}\Vert, \Vert y_n^{(i)}+y_n^{(j)}\Vert) \Vert\\
& \leq \Vert (\Vert x_n^{(i)} \Vert + \Vert x_n^{(j)}\Vert, \Vert y_n^{(i)} \Vert +\Vert y_n^{(j)}\Vert) \Vert\\
&= \Vert (\Vert x_n^{(i)} \Vert , \Vert y_n^{(i)} \Vert )+ ( \Vert x_n^{(j)}\Vert, \Vert y_n^{(j)}\Vert) \Vert\\
&\leq 2.
\end{aligned}$$ Therefore, $\Vert (\Vert x_n^{(i)} \Vert , \Vert y_n^{(i)} \Vert )+ ( \Vert x_n^{(j)}\Vert, \Vert y_n^{(j)}\Vert) \Vert \to 2.$ Since $(\mathbb{R}^{2}, \|\cdot\|_p)$ is uniformly rotund, it follows that $\Vert( \Vert x_n^{(i)}\Vert- \Vert x_n^{(j)} \Vert, \Vert y_n^{(i)} \Vert-\Vert y_n^{(j)} \Vert ) \Vert \to 0,$ which implies $\Vert x_n^{(i)}\Vert- \Vert x_n^{(j)} \Vert \to 0$ and $\Vert y_n^{(i)} \Vert-\Vert y_n^{(j)} \Vert \to 0.$\
Case$-(i)$: Assume that the sequence $(\|x_n^{(1)}\|)$ converges. Therefore, for every $1 \leq i \leq k+1$ we have $\|x_n^{(i)}\| \to a_1$ for some $a_1 \in [0,1]$, which further implies $\|y_n^{(i)}\| \to a_2$, where $a_2= (1-a_1^{p})^{\frac{1}{p}} .$ Let $\alpha \in \mathcal{S}_{k+1}(k_1+1).$ Note that for any subsequence $(n_m)$ of $(n)$, we have $\| \sum_{i=1}^{k_1+1} z_{n_m}^{(\alpha_i)} \| \to k_1+1$ and $$\begin{aligned}
\limsup\limits_{n \to \infty} \left \Vert \sum\limits_{i=1}^{k_1+1} z_{n_m}^{(\alpha_i)} \right\Vert &= \limsup\limits_{n \to \infty} \left \Vert \left(\left\Vert\sum\limits_{i=1}^{k_1+1} x_{n_m}^{(\alpha_i)} \right\Vert, \left\Vert \sum\limits_{i=1}^{k_1+1} y_{n_m}^{(\alpha_i)} \right\Vert \right)\right\Vert\\
& \leq \left \Vert \left( \limsup\limits_{n \to \infty} \left\Vert\sum\limits_{i=1}^{k_1+1} x_{n_m}^{(\alpha_i)} \right\Vert, \limsup\limits_{n \to \infty} \left\Vert \sum\limits_{i=1}^{k_1+1} y_{n_m}^{(\alpha_i)} \right\Vert \right)\right\Vert\\
& \leq \left \Vert \left( \limsup\limits_{n \to \infty} \sum\limits_{i=1}^{k_1+1} \left\Vert x_{n_m}^{(\alpha_i)} \right\Vert, \limsup\limits_{n \to \infty} \sum\limits_{i=1}^{k_1+1} \left\Vert y_{n_m}^{(\alpha_i)} \right\Vert \right)\right\Vert\\
&= \|((k_1+1)a_1, (k_1+1)a_2)\|\\
&= k_1+1.
\end{aligned}$$ Thus $\limsup\limits_{n \to \infty} \|\sum_{i=1}^{k_1+1} x_{n_m}^{(\alpha_i)} \| = (k_1+1)a_1,$ which further implies $\|\sum_{i=1}^{k_1+1} x_n^{(\alpha_i)} \| \to (k_1+1)a_1.$ If $a_1=0,$ by , we have $D_{k_1}[(x_n^{(\alpha_i)})_{i=1}^{k_1+1}; (f_{\lambda_j})_{j=1}^{k_1}] \to 0$ for all $\lambda \in \mathcal{S}_{k}(k_1).$ Suppose $a_1 \neq 0.$ Since $X$ is $k_1-$WUR, we have $D_{k_1}[(x_n^{(\alpha_i)})_{i=1}^{k_1+1}; (f_{\lambda_j})_{j=1}^{k_1}] \to 0$ for all $\lambda \in \mathcal{S}_{k}(k_1).$ Similarly, for every $\beta \in \mathcal{S}_{k+1}(k_2+1)$ we have $\|\sum_{j=1}^{k_2+1} y_n^{(\beta_j)} \| \to (k_2+1)a_2$ and $D_{k_2}[(y_n^{(\beta_i)})_{i=1}^{k_2+1}; (g_{\mu_j})_{j=1}^{k_2}] \to 0$ for all $\mu \in \mathcal{S}_{k}(k_2).$ Consider, $$\begin{aligned}
D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}] &= \begin{vmatrix}
1 & 1& \dots &1\\
f_1(x_n^{(1)})+g_1(y_n^{(1)}) & f_1(x_n^{(2)})+g_1(y_n^{(2)}) & \dots & f_1(x_n^{(k+1)})+g_1(y_n^{(k+1)})\\
\vdots & \vdots & \ddots & \vdots\\
f_k(x_n^{(1)})+g_k(y_n^{(1)}) & f_k(x_n^{(2)})+g_k(y_n^{(2)}) & \dots & f_k(x_n^{(k+1)})+g_k(y_n^{(k+1)})\\
\end{vmatrix}.
\end{aligned}$$ Since the determinant is multilinear, we can write the preceding determinant as the sum of $2^{k}$ determinants each of order $(k+1).$ Then, by rearranging the rows, we can rewrite all $2^{k}$ determinants such that $$| D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \leq \sum_{t=1}^{2^k}|a_n^{(t)}|,$$ where $$\begin{aligned}
a_n^{(t)} &= \begin{vmatrix}
1 & 1& \dots &1\\
f_{\alpha_1}(x_n^{(1)})& f_{\alpha_1}(x_n^{(2)})& \dots & f_{\alpha_1}(x_n^{(k+1)})\\
\vdots & \vdots & \ddots & \vdots\\
f_{\alpha_{r_t}}(x_n^{(1)}) & f_{\alpha_{r_t}}(x_n^{(2)}) & \dots & f_{\alpha_{r_t}}(x_n^{(k+1)})\\
g_{\beta_1}(y_n^{(1)})& g_{\beta_1}(y_n^{(2)}) & \dots & g_{\beta_1}(y_n^{(k+1)})\\
\vdots & \vdots & \ddots & \vdots\\
g_{\beta_{s_t}}(y_n^{(1)})& g_{\beta_{s_t}}(y_n^{(2)}) & \dots & g_{\beta_{s_t}}(y_n^{(k+1)})
\end{vmatrix}
\end{aligned}$$ for some $\alpha \in \mathcal{S}_k(r_t),$ $\beta \in \mathcal{S}_k(s_t)$ and $0 \leq r_t, s_t \leq k$ with $r_t+s_t=k$ for all $1 \leq t \leq 2^{k}.$ Observe that in each determinant $a_n^{(t)},$ either $r_t \geq k_1$ or $s_t \geq k_2.$ Consider the determinant $a_n^{(t_0)},$ for some $1 \leq t_0 \leq 2^{k}.$\
subcase$-(a)$: Suppose $r_{t_0}\geq k_1.$ Then evaluate the determinant $a_n^{(t_0)}$ using the Laplace expansion of the determinant [@HoJo2013] (by fixing the first $(k_1+1)-$rows). Since each entry of the determinant $a_n^{(t_0)}$ is bounded by 1 and $D_{k_1}[(x_n^{(\alpha_i)})_{i=1}^{k_1+1}; (f_{\lambda_j})_{j=1}^{k_1}] \to 0$ for all $\alpha \in \mathcal{S}_{k+1}(k_1+1),$ $\lambda \in \mathcal{S}_{k}(k_1),$ it follows that $|a_n^{(t_0)}| \to 0.$\
subcase$-(b)$: Suppose $s_{t_0}\geq k_2.$ Then evaluate the determinant $a_n^{(t_0)}$ using the Laplace expansion of the determinant [@HoJo2013] (by fixing the rows $R_1, R_{r_{t_0}+1}, R_{r_{t_0}+2}, \dots, R_{r_{t_0}+k_2}).$ Since each entry of the determinant $a_n^{(t_0)}$ is bounded by 1 and $D_{k_2}[(y_n^{(\beta_i)})_{i=1}^{k_2+1}; (g_{\mu_j})_{j=1}^{k_2}] \to 0$ for all $\beta \in \mathcal{S}_{k+1}(k_2+1),$ $\mu \in \mathcal{S}_{k}(k_2),$ it follows that $|a_ n^{(t_0)}| \to 0.$\
Therefore, $|a_n^{(t)}| \to 0$ for all $1 \leq t \leq 2^{k}.$ Thus, $|D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \to 0.$\
Case$-(ii)$: Assume that the sequence $(\|x_n^{(1)}\|)$ does not converge. We need to show that $|D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \to 0.$ Suppose $|D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]|$ does not converge to $0.$ Then there exist a subsequence $(n_m)$ of $(n)$ and $\epsilon>0$ such that $|D_k[(z_{n_m}^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \geq \epsilon$ for all $m \in \mathbb{N}.$ Since the sequence $(\|x_{n_m}^{(1)}\|)$ is bounded, there exists a subsequence $(\|x_{m_s}^{(1)}\|)$ of $(\|x_{n_m}^{(1)}\|)$ such that $\|x_{m_s}^{(1)}\| \to b_1$ for some $b_1 \in [0,1].$ Now, by Case$-(i),$ we have $|D_k[(z_{m_s}^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \to 0$ as $s \to \infty,$ which is a contradiction. Thus, $|D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \to 0.$\
$(2)$: Let $X$ be $k_1-$WLUR, $Y$ be $k_2-$WLUR and $k= k_1+k_2-1.$ Let $z \in S_{(X\oplus_p Y)},$ $(z_n^{(1)}), (z_n^{(2)}), \dots,$ $(z_n^{(k)})$ be $(k)-$sequences in $S_{(X\oplus_p Y)}$ with $\|z+\sum_{i=1}^{k}z_n^{(i)}\| \to k+1$ and $h_1, h_2, \dots, h_{k} \in S_{(X\oplus_pY)^{*}}.$ Clearly $h_j= (f_j, g_j)$ for some $f_j \in B_{X^*}$, $g_j \in B_{Y^*},$ for all $1 \leq j \leq k$, $z=(x,y)$ and $z_n^{(i)}= (x_n^{(i)}, y_n^{(i)})$ for some $x,x_n^{(i)} \in B_X,$ $y,y_n^{(i)} \in B_Y$ for all $n \in \mathbb{N}$, $1 \leq i \leq k$. By considering $z_n^{(k+1)}=z$ for all $n \in \mathbb{N}$ and following the similar steps involved in the proof of $(1)$, we obtain $\|x_n^{(i)}\| \to \|x\|$ and $\|y_n^{(i)}\| \to \|y\|$ for all $1 \leq i \leq k$. Now the rest of the proof follows as Case$-(i)$ in the proof of $(1)$.\
$(3)$: Let $X$ be $k_1-$WMLUR, $Y$ be $k_2-$WMLUR and $k= k_1+k_2-1.$ Now, by , it is enough to show that $B_{X \oplus_p Y}$ is $k-$$w$SCh on $2S_{X \oplus_p Y}.$ Let $z \in S_{X \oplus_p Y},$ $(z_n^{(1)}), (z_n^{(2)}), \dots, (z_n^{(k+1)})$ be $(k+1)-$sequences in $B_{(X\oplus_p Y)}$ such that $\|2z-z_n^{(i)}\| \to 1$ for all $1 \leq i \leq k+1$ and $h_1, h_2, \dots, h_{k} \in S_{(X\oplus_pY)^{*}}.$ Clearly $h_j= (f_j, g_j)$ for some $f_j \in B_{X^*}$, $g_j \in B_{Y^*},$ for all $1 \leq j \leq k,$ $z=(x,y)$ and $z_n^{(i)}= (x_n^{(i)}, y_n^{(i)})$ for some $x, x_n^{(i)} \in B_X$, $y, y_n^{(i)} \in B_Y$ for all $n \in \mathbb{N}$, $1 \leq i \leq k+1.$ Let $1 \leq i \leq k+1$. Note that $$\begin{aligned}
\Vert 2z-z_n^{(i)}\Vert &= \Vert (\Vert2x- x_n^{(i)}\Vert, \Vert 2y-y_n^{(i)}\Vert) \Vert\\
& \geq \Vert (\Vert 2x \Vert- \Vert x_n^{(i)} \Vert, \Vert 2y \Vert- \Vert y_n^{(i)} \Vert ) \Vert\\
&= \Vert (\Vert 2x \Vert , \Vert 2y \Vert )- ( \Vert x_n^{(i)}\Vert, \Vert y_n^{(i)}\Vert) \Vert\\
&\geq \Vert 2z \Vert - \Vert z_n^{(i)} \Vert\\
&\geq 1,
\end{aligned}$$ which implies $\Vert (\Vert 2x \Vert , \Vert 2y \Vert )- ( \Vert x_n^{(i)}\Vert, \Vert y_n^{(i)}\Vert) \Vert \to 1.$ Since $B_{(\mathbb{R}^{2}, \|\cdot\|_p)}$ is strongly Chebyshev on $(\mathbb{R}^{2}, \|\cdot\|_p),$ we have $( \Vert x_n^{(i)}\Vert, \Vert y_n^{(i)}\Vert) \to (\Vert x \Vert , \Vert y \Vert )$, which further implies $\Vert x_n^{(i)}\Vert \to \Vert x \Vert$ and $\Vert y_n^{(i)} \Vert \to \Vert y \Vert$. For any subsequence $(n_m)$ of $(n)$, observe that $$\begin{aligned}
\liminf\limits_{n \to \infty} \left \Vert 2z-z_{n_m}^{(i)} \right\Vert &= \liminf\limits_{n \to \infty} \left \Vert \left(\left\Vert 2x-x_{n_m}^{(i)} \right\Vert, \left\Vert 2y- y_{n_m}^{(i)} \right\Vert \right)\right\Vert\\
& \geq \left \Vert \left(\liminf\limits_{n \to \infty} \left\Vert 2x-x_{n_m}^{(i)} \right\Vert, \liminf\limits_{n \to \infty} \left\Vert 2y - y_{n_m}^{(i)} \right\Vert \right)\right\Vert\\
& \geq \left\Vert \left( \left\Vert x \right\Vert, \left\Vert y \right\Vert \right) \right \Vert\\
&= 1,
\end{aligned}$$ which implies $\liminf\limits_{n \to \infty} \| 2x-x_{n_m}^{(i)} \| = \|x\|.$ Therefore, $\| 2x-x_n^{(i)} \| \to \|x\|$. If $\|x\|=0,$ by , we have $D_{k_1}[(x_n^{(\alpha_i)})_{i=1}^{k_1+1}; (f_{\lambda_j})_{j=1}^{k_1}] \to 0$ for all $\alpha \in \mathcal{S}_{k+1}(k_1+1)$ and $\lambda \in \mathcal{S}_{k}(k_1).$ Assume $\|x\| \neq 0.$ Note that for any $1 \leq i \leq k+1,$ we have $$\left \Vert \frac{x_n^{(i)}}{\|x_n^{(i)}\|}- 2 \frac{x}{\|x\|} \right\Vert = \left\vert \frac{x_n^{(i)}}{\|x_n^{(i)}\|} - \frac{x_n^{(i)}}{\|x\|}+\frac{x_n^{(i)}}{\|x\|} - 2 \frac{x}{\|x\|} \right\vert \leq \left\Vert x_n^{(i)} \right\Vert \left\vert \frac{1}{\|x_n^{(i)}\|}- \frac{1}{\|x\|} \right\vert+ \frac{1}{\|x\|} \left\Vert x_n^{(i)}-2x \right\Vert$$ and hence $\left\Vert \frac{x_n^{(i)}}{\|x_n^{(i)}\|}- 2 \frac{x}{\|x\|} \right\Vert \to 1.$ Since $X$ is $k_1-$WMLUR, by , it follows that $B_X$ is $k_1-$$w$SCh on $2S_X.$ Therefore $D_{k_1}\left[\left(\frac{x_n^{(\alpha_i)}}{\|x_n^{(\alpha_i)}\|}\right)_{i=1}^{k_1+1}; (f_{\lambda_j})_{j=1}^{k_1}\right] \to 0,$ further by and , we have $D_{k_1}[(x_n^{(\alpha_i)})_{i=1}^{k_1+1}; (f_{\lambda_j})_{j=1}^{k_1}] \to 0$ for all $\alpha \in \mathcal{S}_{k+1}(k_1+1)$ and $\lambda \in \mathcal{S}_{k}(k_1).$ Similarly, $D_{k_2}[(y_n^{(\beta_i)})_{i=1}^{k_2+1}; (g_{\mu_j})_{j=1}^{k_2}] \to 0$ for all $\beta \in \mathcal{S}_{k+1}(k_2+1)$ and $\mu \in \mathcal{S}_{k}(k_2).$ Now by repeating the similar technique involved in Case$-(i)$ of the proof of $(1)$, we obtain $|D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]| \to 0.$ Hence the proof. ◻
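The multilinearity step in the proof of $(1)$, which splits $D_k[(z_n^{(i)})_{i=1}^{k+1}; (h_j)_{j=1}^{k}]$ into $2^{k}$ determinants $a_n^{(t)}$ and yields the bound $|D_k| \leq \sum_t |a_n^{(t)}|$, can also be checked numerically. The sketch below is purely illustrative: it fixes $k=2$ and feeds random matrices in place of the rows $(f_j(x_n^{(i)}))_i$ and $(g_j(y_n^{(i)}))_i$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
k = 2
U = rng.normal(size=(k, k + 1))     # stands in for the rows (f_j(x_n^{(i)}))_i
V = rng.normal(size=(k, k + 1))     # stands in for the rows (g_j(y_n^{(i)}))_i
ones = np.ones((1, k + 1))

D = np.linalg.det(np.vstack([ones, U + V]))
# Multilinearity in each of the k lower rows splits D into 2^k determinants.
parts = [np.linalg.det(np.vstack([ones] + [U[j:j + 1] if c[j] == 0 else V[j:j + 1]
                                           for j in range(k)]))
         for c in product((0, 1), repeat=k)]
print(np.isclose(D, sum(parts)), abs(D) <= sum(abs(a) for a in parts) + 1e-12)   # True True
```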
The next result is an immediate consequence of and the fact $(\oplus_pX_i)_{i=1}^{d} \cong (\oplus_pX_i)_{i=1}^{d-1} \oplus_p X_{d}.$
**Corollary 51**. *Let $d \in \mathbb{Z}^{+}$, $d>1$ and $1 < p < \infty.$ Let $X_i$ be a Banach space for all $1 \leq i \leq d$ and $X= (\oplus_pX_i)_{i=1}^{d}.$ If $X_i$ is $k_i-$WUR (respectively, $k_i-$WLUR, $k_i-$WMLUR) for all $1 \leq i \leq d,$ then $X$ is $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR) where $k= \sum_{i=1}^{d}k_i-d+1.$*
As a consequence of [@Smit1986 A.2, A.3, A.4] and , we now present a necessary and sufficient condition for an infinite $\ell_p-$product space to be $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR).
**Theorem 52**. *Let $1 < p < \infty$, $X_i$ be a Banach space for all $i \in \mathbb{N}$ and $X= (\oplus_pX_i)_{i\in \mathbb{N}}.$ Then the following statements are equivalent.*
1. *$X$ is $k-$WUR (respectively, $k-$WLUR, $k-$WMLUR).*
2. *There exists $j \in \mathbb{N}$ such that $X_i$ is WUR (respectively, WLUR, WMLUR) for all $i > j$ and for each $i \leq j$ there exists $k_i \in \mathbb{Z}^{+}$ with $\sum_{i=1}^{j}k_i-j+1 \leq k$ such that $X_i$ is $k_i-$WUR (respectively, $k_i-$WLUR, $k_i-$WMLUR).*
We now provide a few examples to demonstrate that some of the implications and assertions mentioned in the preceding sections cannot be reversed in general.
The subsequent example reveals that some implications observed in and one given immediately below cannot be reversed in general.
**Example 53**. * *
1. *Consider the space $X= (\ell_2, \| \cdot\|_L)$ from [@Smit1978a Example 1] and $k \in \mathbb{Z}^{+}.$ In [@Smit1978a], it is proved that $X$ is LUR and reflexive, but not WUR. By , $\ell_2(X)$ is not $k-$WUR. However, by [@Lova1955 Theorem 1.1], $\ell_2(X)$ is LUR (hence, $k-$LUR).*
2. *Consider the space $X= (\ell_2, \| \cdot\|_A)$ from [@Smit1978a Example 3] and $k \in \mathbb{Z}^{+}.$ In [@Smit1978a], it is proved that $X$ is strongly rotund (hence, MLUR), but not WLUR. By , $\ell_2(X)$ is not $k-$WLUR. However, by [@Smit1986], $\ell_2(X)$ is strongly rotund (hence, $k-$strongly rotund and $k-$MLUR).*
We now present an example of a space which is strongly rotund and $(k+1)-$WUR, but not $k-$WLUR as specified immediately below .
**Example 54**. *For each $x=(x_1, x_2, \dots)$ in $\ell_2,$ define $\|x\|_1= \sup\{\|x\|_{i_1, i_2}: i_1< i_2\},$ where $\|x\|_{i_1, i_2}$ is defined as in . Let $(c_n)$ be a decreasing sequence of positive real numbers converging to zero. Define the continuous map $T: (\ell_2, \|\cdot\|_1) \to (\ell_2, \|\cdot\|_2)$ by $T(x_1, x_2, \dots)= (c_2x_2, c_3x_3, \dots).$ Now, define $\|x\|_r^2= \|x\|_1^2+ \|T(x)\|_2^2$ for all $x \in \ell_2.$ Let $B=(\ell_2, \|\cdot\|_r).$ In [@NaWa1988 Example 2], it is proved that $B$ is $2-$UR and rotund, but not LUR. Now, we will prove that the space $B$ is not WLUR. Let $(e_n)$ be the standard basis of $(\ell_2, \|\cdot\|_2).$ It is easy to see that $\|e_1\|_r=1,$ $\|e_n\|_r \to 1$ and $\|e_1+e_n\|_r \to 2.$ Consider $f=\frac{e_1}{\|e_1\|} \in S_{B^*}.$ Observe that $f(e_1-e_n)= \frac{1}{\|e_1\|}$ for all $n \geq 2.$ Therefore, $e_n-e_1$ does not converge to $0$ weakly. Hence, $B$ is not WLUR. For any $k \in \mathbb{Z}^{+},$ consider $X=B \oplus_2 B \oplus_2 \dots \oplus_2 B$ ($k$ times). Clearly, $X$ is strongly rotund. Since $B$ is $2-$WUR, it follows from that $X$ is $(k+1)-$WUR. However, it is easy to see from that $X$ is not $k-$WLUR.*
The following example illustrates that the implication observed immediately after cannot be reversed in general. The example also shows that the converse of is not necessarily true.
**Example 55**. *Let $k \in \mathbb{Z}^{+}$ and $X$ be a $k-$strongly rotund, but not $k-$WLUR space (see, ). Therefore, by [@VeRS2021 Theorem 2.10], every closed convex subset of $X$ is $k-$SCh on $X,$ in particular $B_X$ is $k-$$w$SCh on $2S_X.$ However, by , $B_X$ is not $k-$$w$USCh on $2S_X.$*
As mentioned after , the following example demonstrates that $k-$weakly uniform rotundity of $X$ does not imply that the space $X^{**}$ is $k-$WMLUR.
**Example 56**. *Let $Z=(c_0, \|\cdot\|_{\infty})$ and $k \in \mathbb{Z}^{+}.$ By [@DeGZ1993 Chapter II, Corollary 6.9], $Z$ admits an equivalent norm (say, $\|\cdot\|_r$) such that $Y= (c_0, \|\cdot\|_r)$ is WUR. Since it is proved in [@AlDi1985] that $\ell_{\infty}$ does not have any equivalent WMLUR renorming, it follows that $Y^{**}$ is not WMLUR. Consider the Banach space $X= \ell_2(Y).$ Then, by [@Smit1986 A.2], $X$ is WUR (hence, $k-$WUR). Clearly, $X^{**} \cong \ell_2(Y^{**}).$ Therefore, by , $X^{**}$ is not $k-$WMLUR.*
G. A. Aleksandrov and I. P. Dimitrov. On the equivalent weakly midpoint locally uniformly rotund renorming of the space $l_\infty$. In *Proc. 14th Spring Conference of the Union of Bulgarian Mathematicians, Sunny Beach*, pages 189--191 (in Russian). 1985.
P. Bandyopadhyay, Y. Li, B. L. Lin, and D. Narayana. Proximinality in Banach spaces. , 341(1):309--317, 2008.
J. A. Clarkson. Uniformly convex spaces. , 40(3):396--414, 1936.
R. Deville, G. Godefroy, and V. Zizler. . Pitman Monographs and Surveys in Pure and Applied Mathematics, vol. 64. Longman Scientific & Technical, Harlow, 1993.
S. Dutta and P. Shunmugaraj. Weakly compactly LUR Banach spaces. , 458(2):1203--1213, 2018.
P. Gayathri and V. Thota. Characterizations of weakly uniformly rotund Banach spaces. , 514(1):Paper No. 126298, 15, 2022.
P. Gayathri and V. Thota. On geometric and best approximation properties of $k$-UR and its generalizations. , 17(2):Paper No. 29, 32, 2023.
R. Geremia and F. Sullivan. Multidimensional volumes and moduli of convexity in Banach spaces. , 127:231--251, 1981.
R. Y. He. $K$-strongly convex and locally $K$-uniformly smooth spaces. , 17(2):251--256, 1997.
R. A. Horn and C. R. Johnson. . Cambridge University Press, Cambridge, second edition, 2013.
S. Kar and P. Veeramani. On $k$-uniformly rotund spaces and spaces with property $k$-UC. , 19(7):1263--1273, 2018.
B. L. Lin and X. T. Yu. On the $k$-uniform rotund and the fully convex Banach spaces. , 110(2):407--410, 1985.
C. Liu, Z. Zhang, and Y. Zhou. A note in approximative compactness and midpoint locally $k$-uniform rotundity in Banach spaces. , 38(2):643--650, 2018.
A. R. Lovaglia. Locally uniformly convex Banach spaces. , 78(1):225--238, 1955.
R. E. Megginson. . Graduate Texts in Mathematics, vol. 183. Springer-Verlag, New York, 1998.
C. X. Nan and J. H. Wang. On the L$k$-UR and L-$k$R spaces. , 104(3):521--526, 1988.
C. J. Read. Banach spaces with no proximinal subspaces of codimension 2. , 223(1):493--504, 2018.
M. Rmoutil. Norm-attaining functionals need not contain 2-dimensional subspaces. , 272(3):918--928, 2017.
I. Singer. On the set of the best approximations of an element in a normed linear space. , 5:383--402, 1960.
M. A. Smith. Some examples concerning rotundity in Banach spaces. , 233(2):155--161, 1978.
M. A. Smith. Rotundity and extremity in $l^p(X_i)$ and $L^p(\mu,X)$. In *Geometry of normed linear spaces (Urbana-Champaign, Ill., 1983)*, Contemp. Math., vol. 52, pages 143--162. Amer. Math. Soc., Providence, RI, 1986.
M. A. Smith and B. Turett. Some examples concerning normal and uniform normal structure in Banach spaces. , 48(2):223--234, 1990.
F. Sullivan. A generalization of uniformly rotund Banach spaces. , 31(3):628--636, 1979.
Suyalatu. On some generalization of local uniform smoothness and dual concepts. , 33(1):101--108, 2000.
M. Veena Sangeetha. Geometry of product spaces. , 503(1):Paper No. 125285, 23, 2021.
M. Veena Sangeetha, M. Radhakrishnan, and S. Kar. On $k$-strong convexity in Banach spaces. , 28(4):1193--1210, 2021.
M. Veena Sangeetha and P. Veeramani. Uniform rotundity with respect to finite-dimensional subspaces. , 25(4):1223--1252, 2018.
V. L. Šmulian. On the principle of inclusion in the space of the type $({\rm B})$. , 5(47):317--328, 1939.
J. Xian and Y. J. Li. $K$-very-convex spaces and $K$-very-smooth spaces. , 24(3):483--492, 2004.
X. T. Yu and J. P. Wang. On moduli of $k$-rotundity and $k$-convexity. , 11(2):212--222, 1990.
Z. Zhang, C. Liu, and Y. Zhou. Some examples concerning proximinality in Banach spaces. , 200:136--143, 2015.
---
abstract: |
This work is concerned with features of planar dynamical systems governed by a smooth velocity field and additive white noise. By Helmholtz's theorem, the system's velocity field can be decomposed into an irrotational and a solenoidal part, defined by a scalar and a vector potential, respectively. The meaning of this decomposition, however, is generally unclear, because it yields different potentials in different coordinates, and the choice of basis may not be obvious for a given system. In contrast, the dynamics themselves are independent of the basis in which they are represented. To address this discrepancy, we first present a coordinate-independent formulation of the Helmholtz decomposition for general, noise-driven planar systems. In the second part of our investigation, we focus on noise-free, steady planar flows. For this type of system, we analytically derive conditions for ruling out closed orbits in certain regions of phase space. We demonstrate our methods on well-known examples of dynamical systems in the plane.
author:
- Tiemo Pedergnana
- Nicolas Noiray
bibliography:
- aipsamp.bib
title: Certain features of planar systems
---
> One of the holy grails of nonlinear dynamics is to extract predictive information about a system's trajectories from its velocity field without resorting to numerical time-integration. In an attempt to understand a given system, one may appeal to Helmholtz's famous theorem, by which a velocity field can be written as the sum of the gradient of a scalar potential and the curl of a vector potential. Due to the coordinate-dependence of this eponymous Helmholtz decomposition, however, the meaning of the scalar and vector potentials for a given dynamical system is ambiguous if the basis is not specified. In contrast, the dynamics associated with the velocity field are independent of the choice of basis. The first part of this study seeks to resolve this discrepancy for planar dynamical systems by understanding the differential-geometric transformation properties of their Helmholtz decomposition. Specifically, we consider a bivariate Langevin equation subject to an arbitrary, smooth mapping to facilitate the identification of certain hidden coordinates which may simplify a given system's analysis. In the second part, we focus on deterministic, steady planar flows. For this important subclass, we present conditions for ruling out closed orbits in certain subsets of phase space. We illustrate the application of our results on classic examples of planar dynamical systems.
# Overview
## Dynamical system
In this work, we study planar (stochastic) dynamical systems given by a bivariate Langevin equation of the form[@stratonovich1963topics; @gardiner1985handbook; @risken1996fokker] $$\begin{aligned}
\dot{x}=\mathcal{F}(x,t)+\mathcal{B}(x)\Xi, \label{Dynamical system}\end{aligned}$$ which is defined on a subset of the plane $\mathcal{D}\subset \mathbb{R}^2$.[^1] Equation [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} states that the evolution of the random variable $x$: $\mathbb{R}^+\rightarrow \mathbb{R}^{2}$ over time $t\in\mathbb{R}^+$ is governed by the smooth velocity field $\mathcal{F}(x,t)$: $\mathbb{R}^2\times\mathbb{R}^+\rightarrow \mathbb{R}^{2}$, the diffusion tensor $\mathcal{B}$: $\mathbb{R}^2\times\mathbb{R}^+\rightarrow \mathbb{R}^{2\times 2}$ and the vector $\Xi=(\xi_1,\xi_2)^T$, whose entries are white Gaussian noise sources with variance $\Gamma$ and zero mean.[@stratonovich1963topics] Note that, if either $\Gamma=0$ or $\mathcal{B}$ vanishes identically for all $x\in\mathcal{D}$, the system [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} defines a deterministic planar flow with velocity field $\mathcal{F}$.[@guckenheimer2013nonlinear; @strogatz2018nonlinear see pp. 42--65 and pp. 125--306, respectively] Unless explicitly stated otherwise, all quantities are assumed to be real in this work.
## Helmholtz decomposition
To analyze the dynamics [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} we recall Helmholtz's theorem, originally derived by Stokes and Helmholtz,[@stokes_2009; @Helmholtz185825] which states that, under certain restrictions, a smooth, planar velocity field $u$ can be decomposed as follows:[see @morse1953methods pp. 52--54] $$\begin{aligned}
u(x,t)&=&{-\nabla \mathcal{V}(x,t)}+{S\nabla \mathcal{H}(x,t)}, \label{Helmholtz decomposition}\end{aligned}$$ where $[\nabla (\cdot)]_m={\partial (\cdot)}/{\partial{x_m}}$, $\mathcal{V}$ is the scalar potential, $\mathcal{H}$ is the Hamiltonian function and $$\begin{aligned}
S&=&\begin{pmatrix}
0 & 1\\
-1 & 0
\end{pmatrix}. \label{Skew symm matrix}\end{aligned}$$ The two-dimensional Helmholtz decomposition (HD) [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"} can be derived from the three-dimensional case given in the above references by setting the vector potential equal to $(0,0,\mathcal{H})^T$. The HD is broadly used in fluid mechanics, [@Morino198665; @Joseph200614272; @Linke2014782; @Buhler20141007; @Lindborg2014; @Buhler2017361; @Schoder20203019; @Caltagirone2021] electromagnetics, [@Aharonov1959485; @Konopinski1978499; @Haber2000150; @hehl2003foundations] geophysics, [@Weiss201340; @Du2017S111; @Shi2019509] imaging [@Paganin19982586; @Park20063697] and computer vision. [@Kohlberger2003432; @Guo2005493; @Cuzol2007329] The reader interested in mathematical discussions of the HD, [@simader92; @Farwig2007239; @sohr2012navier; @moses1971; @Schweizer2018; @Giga2022] its generalization to $n$-dimensional vector fields[@Glotzl2023] or a review of its applications[@Bhatia20131386] is referred to the respective literature. Uniqueness or existence are not of concern in this work, which is focused on the transformation properties of the HD [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"} for smooth planar fields. Therefore, in the following, we generally assume that a (quasi-)unique decomposition[^2] exists throughout $\mathcal{D}$, such that Eq. [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"} serves as a definition of $u$. In Sec. [\[Helmholtz section\]](#Helmholtz section){reference-type="ref" reference="Helmholtz section"}, we explicitly construct the Helmholtz decomposition [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"} for two broad, and practically relevant classes of planar systems.
## Transformation properties [\[Intro part 3\]]{#Intro part 3 label="Intro part 3"}
In expressing the HD by Eq. [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"}, the use of Cartesian (rectangular) coordinates was tacitly assumed. [@morse1953methods see pp. 21--52] With this in mind, we can study the dynamics of a planar dynamical system under additive white noise, $\dot{x}=u+\Xi$, in such coordinates. By Eq. [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"}, such a system can be written as $$\begin{aligned}
\dot{x}&=&{-\nabla \mathcal{V}(x,t)}+{S\nabla \mathcal{H}(x,t)}+\Xi. \label{Cartesian system}\end{aligned}$$ At first sight, comparing Eqs. [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} and [\[Cartesian system\]](#Cartesian system){reference-type="eqref" reference="Cartesian system"} is sensible only in the special case when $\mathcal{B}$ corresponds to the 2-by-2 identity matrix. However, under an arbitrary, smooth mapping (see Fig. [1](#Figure 1){reference-type="ref" reference="Figure 1"}) $$\begin{aligned}
x=f(y) \label{General mapping},\end{aligned}$$ after redefining $y\rightarrow x$, Eq. [\[Cartesian system\]](#Cartesian system){reference-type="eqref" reference="Cartesian system"} is transformed into the following system: $$\begin{aligned}
\dot{x}=-g^{-1}(x)\nabla \widetilde{\mathcal{V}}(x,t) \pm\dfrac{1}{\sqrt{\det[g(x)]}}S\nabla\widetilde{\mathcal{H}}(x,t)+h(x)^{-1}\widetilde{\Xi}, \label{Transformed 2D Helmholtz decomposition}\end{aligned}$$ where $J=\nabla f$ is the Jacobian matrix of the mapping $f$, $J=Qh$ is the polar decomposition of $J$,[@horn2012matrix see p. 449] $Q=Q^{-T}\in\mathbb{R}^{2\times 2}$ is orthogonal, $h\in\mathbb{R}^{2\times 2}$ is positive definite, $(\cdot)^{T}$ is the transpose, $g=h^T h\in\mathbb{R}^{2\times 2}$ is the positive definite metric tensor of the mapping $f$, $\det(\cdot)$ is the determinant, $\widetilde{\mathcal{V}}(x,t)=\mathcal{V}(f(x),t)$ is the transformed scalar potential and $\widetilde{\mathcal{H}}(x,t)=\mathcal{H}(f(x),t)$ is the transformed Hamiltonian. The "$\pm$"-sign in Eq. [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} indicates whether $Q$ is purely rotational ($\det Q=1$, "$+$") or if it contains a reflection ($\det Q=-1$, "$-$"). By the positive definiteness of $h$, $\det h(x)=\sqrt{\det g(x)}$ is always positive. Furthermore, by the uniqueness of the Cholesky decomposition for positive definite matrices,[@horn2012matrix see p. 441] the polar decomposition $J=Qh$ and the QR factorization[@horn2012matrix see p. 449] of $J$ coincide, which implies that $h^{-1}$ is generally a lower triangular matrix. We also mention the implicit assumption in the derivation of Eq. [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} that the transformed noise term $\widetilde{\Xi}$ is related to its original counterpart by[@Pedergnana2022 see Sec. III. B. for a discussion] $$\begin{aligned}
\Xi=Q(x)\widetilde{\Xi}. \label{Noise transformation formula}\end{aligned}$$ Equation [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} shows explicitly that the Hamiltonian ($\widetilde{\mathcal{V}}=0$) or gradient-driven ($\widetilde{\mathcal{H}}=0$) nature of a system, if present, is an intrinsic feature of that system which persists under a smooth mapping to a different, arbitrary set of coordinates. We elucidate the transformation formula [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} using the example of the forced-damped harmonic oscillator in Sec. [\[Helmholtz section\]](#Helmholtz section){reference-type="ref" reference="Helmholtz section"}.
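The building blocks of the transformation formula above can be assembled numerically. The following minimal sketch does so for an assumed polar-type example mapping; the mapping, the evaluation point and the finite-difference Jacobian are illustrative choices, and the polar decomposition is obtained with `scipy.linalg.polar`.

```python
import numpy as np
from scipy.linalg import polar

# Assumed example mapping x = f(y) with y = (A, phi) (polar-type coordinates).
def f(y):
    A, phi = y
    return np.array([A * np.cos(phi), A * np.sin(phi)])

def jacobian(y, eps=1e-6):
    # Central finite differences; column k approximates df/dy_k.
    J = np.zeros((2, 2))
    for k in range(2):
        dy = np.zeros(2)
        dy[k] = eps
        J[:, k] = (f(y + dy) - f(y - dy)) / (2 * eps)
    return J

y = np.array([1.3, 0.7])        # arbitrary evaluation point (A, phi)
J = jacobian(y)
Q, h = polar(J, side="right")   # J = Q h, Q orthogonal, h positive definite
g = J.T @ J                     # metric tensor g = h^T h

print(np.allclose(Q.T @ Q, np.eye(2)))                          # Q is orthogonal
print(np.allclose(J, Q @ h))                                    # polar decomposition holds
print(np.isclose(np.linalg.det(h), np.sqrt(np.linalg.det(g))))  # det h = sqrt(det g)
print(np.isclose(np.linalg.det(Q), 1.0))                        # pure rotation in this example
```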
A given system [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} with nonzero $\Xi$ and positive definite $\mathcal{B}$ can be compared to Eq. [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} by identifying $h^{-1}=\mathcal{B}$.[^3] Establishing such a correspondence for vanishing $\Xi$, i.e., in the purely deterministic case, may be difficult, and such a correspondence will generally not be unique. Indeed, as we show in Sec. [\[Hamiltonian section\]](#Hamiltonian section){reference-type="ref" reference="Hamiltonian section"}, in certain cases, the *same* (deterministic) system can be written as a transformed Hamiltonian system $\dot{x}=\sqrt{\det g}^{-1}S\nabla {\widetilde{\mathcal{H}}}$ or, alternatively, as a transformed gradient system $\dot{x}=-g^{-1}\nabla \widetilde{\mathcal{V}}$, in a parameter range where $\widetilde{\mathcal{H}}$ has no closed isocontours. These findings suggest that the qualifiers "Hamiltonian" or "gradient-driven" are not necessarily mutually exclusive, depending on the isocontour geometry of $\widetilde{\mathcal{H}}$.
![This work is focused on features of planar dynamical systems $\dot{x}=u$ which persist under a smooth, invertible mapping $f$ to a different, arbitrary set of coordinates. An example of such a feature in steady flows in the plane is the existence or absence of a closed orbit $C$, shown in red. In Sec. [\[Main results 2\]](#Main results 2){reference-type="ref" reference="Main results 2"}, we present analytical criteria for ruling out closed orbits in such systems. Shown in blue is the tangent vector $dl$ to the curve $C$ at a certain point.](Figure_1.eps){#Figure 1 width="30%"}
## Hamiltonian dynamics
The first and last terms on the right-hand-side of Eq. [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} have been derived for arbitrary-dimensional systems in previous work.[@Pedergnana2022] The present study complements those results for the planar case by adding the transformed Hamiltonian term. If we drop all other terms and assume $\det Q=1$, the transformed Hamiltonian system reads $$\begin{aligned}
\dot{x}=\dfrac{1}{\sqrt{\det[g(x)]}}S\nabla\widetilde{\mathcal{H}}(x,t). \label{Transformed Hamiltonian system}\end{aligned}$$ This expression is consistent with earlier works[@Gonzalez-Gascon198661; @Nutku199027] where it was recognized that the restriction to Cartesian coordinates was inhibiting the identification of the underlying Hamiltonian structure present in certain systems. This is especially important for deterministic, steady Hamiltonian systems ($\partial \mathcal{H}/\partial t=0$), in which $\widetilde{\mathcal{H}}$ is conserved along trajectories: $\dot{\widetilde{\mathcal{H}}}=0$.[@arnol2013mathematical] By comparison of Eqs. [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} and [\[Transformed Hamiltonian system\]](#Transformed Hamiltonian system){reference-type="eqref" reference="Transformed Hamiltonian system"}, and considering that $\nabla\cdot(\nabla\times v)=0$ for any vector field $v$, the following necessary and sufficient criteria for identifying the Hamiltonian nature of a given planar system can be deduced:
- A general planar system described by Eq. [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} is Hamiltonian if and only if there exists a positive definite scalar field $\alpha$ such that $$\nabla \cdot [\alpha(x) \mathcal{F}(x,t)],$$ where $[\nabla \cdot(\cdot)]_i=\partial (\cdot)_i / \partial x_i$,[^4] vanishes identically.
- For nonzero white noise $\Xi\neq 0$ satisfying the assumptions made in Sec. [\[Intro part 3\]](#Intro part 3){reference-type="ref" reference="Intro part 3"} and nonsingular $\mathcal{B}$, a noise-driven system given by Eq. [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} is Hamiltonian if and only if $$\nabla \cdot [\det[\mathcal{B}(x)]^{-1} \mathcal{F}(x,t)]$$ vanishes identically.
If either (I) or (II) is satisfied, the term in the square bracket is proportional to $S\nabla \widetilde{\mathcal{H}}$. If (I) is satisfied, $\alpha$ is proportional to $\sqrt{\det g}$. In the purely deterministic case with $\Xi=0$, (II) does not apply because, in this case, $\mathcal{B}$ is ill-defined. Criteria (I) and (II) are tested on specific examples in Sec. [\[Hamiltonian section\]](#Hamiltonian section){reference-type="ref" reference="Hamiltonian section"}.
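The following minimal symbolic sketch illustrates criterion (I) on two assumed example fields, an undamped and a damped linear oscillator in Cartesian coordinates, using the (arbitrary) candidate $\alpha=1$; note that a failed candidate does not by itself rule out a Hamiltonian structure.

```python
import sympy as sp

x1, x2, w0, gam = sp.symbols("x1 x2 omega0 gamma", positive=True)

def planar_div(field, coords):
    # Divergence of a two-component field.
    return sum(sp.diff(field[i], coords[i]) for i in range(2))

# Undamped harmonic oscillator, candidate alpha = 1: criterion (I) is satisfied.
F_ham = sp.Matrix([w0 * x2, -w0 * x1])
print(sp.simplify(planar_div(F_ham, (x1, x2))))      # 0

# Damped oscillator: with the same candidate alpha = 1 the divergence is -gamma,
# so this particular candidate does not certify a Hamiltonian structure.
F_damp = sp.Matrix([w0 * x2, -w0 * x1 - gam * x2])
print(sp.simplify(planar_div(F_damp, (x1, x2))))     # -gamma
```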
For completeness, we note that criteria (I) and (II) determine the Hamiltonian character of a system in an open neighborhood of a point for which both $\alpha$ ($\det g$) and $\mathcal{F}$ exist. For example, if either criterion (I) or (II) is satisfied almost everywhere but $\det g$ or $\mathcal{F}$ blow up (only) at the origin, then the respective system is generally Hamiltonian in an open neighborhood of any point *except* the origin. This distinction is related to the concept of closed and exact differential forms and is solely dependent on the topology of the considered domain. See pp. 172--182 of @guillemin2010differential for an analogous example of a *gradient* field ($\widetilde{\mathcal{H}}=0$) with a singularity at the origin and a more detailed discussion. If (I) or (II) is satisfied but, for example, $\det g$ is singular over a smooth curve which separates the plane into two simply connected regions $\mathcal{D}_1$ and $\mathcal{D}_2$, $\mathcal{D}_1\cup \mathcal{D}_2=\mathbb{R}^2$, then the respective system can generally be expressed as a Hamiltonian system in each of $\mathcal{D}_1$ and $\mathcal{D}_2$, but not over the entire plane. We show examples in Sec. [\[Hamiltonian section\]](#Hamiltonian section){reference-type="ref" reference="Hamiltonian section"} where such topological considerations are relevant.
## Steady planar flows
A different coordinate-independent feature of (steady) planar flows is the existence or absence of a closed orbit $C$ in the phase space (see Fig. [1](#Figure 1){reference-type="ref" reference="Figure 1"}).[^5] There exist few simple, analytical methods to rule out such orbits in steady planar systems.[@guckenheimer2013nonlinear; @strogatz2018nonlinear see p. 44 and pp. 201--205, respectively] To complement the existing literature on these systems, in Sec. [\[Main results 2\]](#Main results 2){reference-type="ref" reference="Main results 2"}, we analytically derive conditions which allow practitioners to rule out closed orbits of a given system in certain regions of phase space.
# Main results [\[Main results\]]{#Main results label="Main results"}
## Transformed Hamiltonian systems [\[Main results 1\]]{#Main results 1 label="Main results 1"}
Since the gradient and noise terms in Eq. [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} have already been derived,[@Pedergnana2022] what is left is to obtain the transformation formula for the Hamiltonian system $\dot{x}=S\nabla \mathcal{H}$ under a general mapping of the form [\[General mapping\]](#General mapping){reference-type="eqref" reference="General mapping"}. For this, we set $x=f(y)$ and apply the chain rule to this system in index form: $$\begin{aligned}
\dot{x}_m&=&S_{mn}\frac{\partial \mathcal{H}(f(y),t)}{\partial x_n},\\
\frac{\partial x_m}{\partial y_k} \dot{y}_k &=&S_{mn}\frac{\partial y_l}{\partial x_n}\frac{\partial \widetilde{\mathcal{H}}(y,t)}{\partial y_l}, \\
\dot{y}_k &=&\underbrace{\frac{\partial y_k}{\partial x_m}S_{mn}\frac{\partial y_l}{\partial x_n}}_{\widetilde{S}_{kl}}\frac{\partial \widetilde{\mathcal{H}}(y,t)}{\partial y_l}, \label{int res}\end{aligned}$$ where the time-dependence of $x$ and $y$ was suppressed for brevity. In Eq. [\[int res\]](#int res){reference-type="eqref" reference="int res"}, we defined the matrix $\widetilde{S}$, whose entries are given by $$\begin{aligned}
\widetilde{S}_{kl}&=&\frac{\partial y_k}{\partial x_m}S_{mn}\frac{\partial y_l}{\partial x_n}\\
&=&\frac{\partial y_k}{\partial x_1}\frac{\partial y_l}{\partial x_2}-\frac{\partial y_k}{\partial x_2}\frac{\partial y_l}{\partial x_1}\\
\implies \widetilde{S}&=&\Big(\frac{\partial y_1}{\partial x_1}\frac{\partial y_2}{\partial x_2}-\frac{\partial y_1}{\partial x_2}\frac{\partial y_2}{\partial x_1}\Big)S. \label{int res 2}\end{aligned}$$ To interpret this result, we note that the term in brackets is simply the determinant of the Jacobian of the inverse mapping $y=f^{-1}(x)$. By the properties of the determinant,[@horn2012matrix see pp. 8--12] we conclude that $$\begin{aligned}
\frac{\partial y_1}{\partial x_1}\frac{\partial y_2}{\partial x_2}-\frac{\partial y_1}{\partial x_2}\frac{\partial y_2}{\partial x_1}=[\det J(y)]^{-1}.\end{aligned}$$ A further simplification is enabled by the polar decomposition $J=Qh$, where $Q$ is an orthogonal matrix with $\det(Q)=\pm 1$. Note that $\det{J}=\det{Q}\det{h}$ and $\det{h}=\sqrt{\det{g}}$. Collecting the above results, we have shown that under a general mapping $f$, a planar Hamiltonian system transforms like $$\begin{aligned}
S\nabla \mathcal{H}(x,t)\rightarrow \pm \dfrac{1}{\sqrt{\det[g(y)]}} S{\nabla}_y\, \widetilde{\mathcal{H}}(y,t), \label{int res 3}\end{aligned}$$ where $({\nabla}_y)_m=\partial /\partial y_m$ and the transformed Hamiltonian is defined as $\widetilde{\mathcal{H}}(y,t)=\mathcal{H}(f(y),t)$. Redefining $y\rightarrow x$, $\nabla_y\rightarrow \nabla$ in Eq. [\[int res 3\]](#int res 3){reference-type="eqref" reference="int res 3"} and combining this formula with previous results[@Pedergnana2022] yields Eq. [\[Transformed Hamiltonian system\]](#Transformed Hamiltonian system){reference-type="eqref" reference="Transformed Hamiltonian system"}.
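The transformation rule derived above can be spot-checked symbolically. The sketch below assumes the Hamiltonian $\mathcal{H}(x)=(x_1^2+x_2^2)/2$ and a polar-type mapping as an illustrative example; it pulls the Cartesian field back with the inverse Jacobian and compares the result with Eq. [\[int res 3\]](#int res 3){reference-type="eqref" reference="int res 3"}.

```python
import sympy as sp

A, phi = sp.symbols("A phi", positive=True)
x1, x2 = sp.symbols("x1 x2", real=True)
S = sp.Matrix([[0, 1], [-1, 0]])

# Assumed example: H(x) = (x1^2 + x2^2)/2 and the polar-type mapping x = f(y).
H = (x1**2 + x2**2) / 2
f = sp.Matrix([A * sp.cos(phi), A * sp.sin(phi)])

# Pull the Cartesian Hamiltonian field S*grad(H) back to y = (A, phi) with J^{-1}.
u_cart = (S * sp.Matrix([sp.diff(H, x1), sp.diff(H, x2)])).subs({x1: f[0], x2: f[1]})
J = f.jacobian([A, phi])
lhs = sp.simplify(J.inv() * u_cart)

# Right-hand side of the transformation rule: (1/sqrt(det g)) * S * grad(H_tilde).
g = J.T * J
H_tilde = H.subs({x1: f[0], x2: f[1]})
rhs = sp.simplify(S * sp.Matrix([sp.diff(H_tilde, A), sp.diff(H_tilde, phi)])
                  / sp.sqrt(g.det()))

print(sp.simplify(lhs - rhs))   # zero vector: formula confirmed for this example
```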
## Ruling out closed orbits [\[Main results 2\]]{#Main results 2 label="Main results 2"}
Dulac's criterion concerns the autonomous planar dynamical system $$\begin{aligned}
\dot{x}=u(x), \label{Steady planar system}\end{aligned}$$ governed by the smooth vector field $u$ defined on a simply connected domain $\mathcal{D}\subset\mathbb{R}^2$. The criterion states that if there exists a smooth function $\varphi=\varphi(x)$ (the Dulac function[@Busenberg1993463]) such that the divergence of $\varphi u$ has only one sign throughout $\mathcal{D}$, then the system [\[Steady planar system\]](#Steady planar system){reference-type="eqref" reference="Steady planar system"} has no closed orbits[^6] which are entirely contained in that region.[see @strogatz2018nonlinear p. 204] Here, under the same assumptions on $u$ and $\mathcal{D}$, we prove the following statement:
**Theorem 1**. *If there exists a smooth, positive definite $2$-by-$2$ matrix function $\mathcal{N}=\mathcal{N}(x)$ with entries $\mathcal{N}_{mn}(x)$, $m,n\in\{1,2\}$, such that the out-of-plane component $\omega$ of $$\begin{aligned}
\mathrm{curl}\,U(x)=\begin{pmatrix}
0\\
0\\
\omega(x)
\end{pmatrix}, \label{theorem 1 definition}
\end{aligned}$$ where $$\begin{aligned}
U(x)=\begin{pmatrix}
\mathcal{N}_{11}(x)u_1(x)+\mathcal{N}_{12}(x)u_2(x)\\
\mathcal{N}_{21}(x)u_1(x)+\mathcal{N}_{22}(x)u_2(x)\\
0
\end{pmatrix}, \label{U definition}
\end{aligned}$$is zero throughout a simply connected subset of the plane $\mathcal{D}\subset \mathbb{R}^2$, then the planar system $\dot{x}=u(x)$, $u=(u_1,u_2)^T$, has no closed orbits contained entirely in $\mathcal{D}$.*
To prove Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}, we assume that the out-of-plane-component of $\mathrm{curl}\, U$, where $U$ is defined in Eq. [\[U definition\]](#U definition){reference-type="eqref" reference="U definition"}, is zero in $\mathcal{D}$. Integrating this expression over a simply connected subset $\mathcal{R}$ of the domain $\mathcal{D}$ bounded by the closed, positively oriented curve $C=\partial \mathcal{R}$, and applying Stokes' theorem[see @morse1953methods p. 43] yields $$\begin{aligned}
0&=&\int_\mathcal{R} \omega(x) dA \label{criterion theorem 1}\\
&=&\int_\mathcal{R} [\mathrm{curl}\, U(x)]^T n dA\\
&=&\oint_C u^T(x) \mathcal{N}^T(x) dl, \label{Dulac int res}\end{aligned}$$ where $dA$ is an infinitesimal area element, $n=(0,0,1)^T$ is the normal vector and $dl$ is an infinitesimal tangent vector to $C$ (compare Fig. [1](#Figure 1){reference-type="ref" reference="Figure 1"}). Along trajectories $x(t)$ governed by the system [\[Steady planar system\]](#Steady planar system){reference-type="eqref" reference="Steady planar system"}, $dl=udt$.[see @strogatz2018nonlinear p. 204] Therefore, if $C$ is an orbit of the system [\[Steady planar system\]](#Steady planar system){reference-type="eqref" reference="Steady planar system"}, then it can be parametrized such that the RHS of Eq. [\[Dulac int res\]](#Dulac int res){reference-type="eqref" reference="Dulac int res"} takes the following form: $$\begin{aligned}
\oint_C u^T(x(t)) \mathcal{N}^T(x(t)) u(x(t))dt. \label{Dulac int res 2}\end{aligned}$$ By the positive definiteness of $\mathcal{N}$ ($\Leftrightarrow$ of $\mathcal{N}^T$) and because $C$ is chosen arbitrarily, the integral [\[Dulac int res 2\]](#Dulac int res 2){reference-type="eqref" reference="Dulac int res 2"} generally has a positive sign,[^7] in contradiction to the assumption that $\omega$ vanishes in $\mathcal{D}$. This contradiction shows that, under the assumptions of Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}, $\mathcal{D}$ can contain no closed curve $C$ which is an orbit of the system [\[Steady planar system\]](#Steady planar system){reference-type="eqref" reference="Steady planar system"}.
Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"} can be applied in an automated fashion by making, for example, a positive definite ansatz of the form $$\begin{aligned}
\mathcal{N}(x)=\begin{pmatrix}
a(x_1) &\mathcal{N}_{12}(x_1,x_2)\\
0& b
\end{pmatrix}, \label{Ansatz 1}
\end{aligned}$$ or $$\begin{aligned}
\mathcal{N}(x)=\begin{pmatrix}
c &0\\
\mathcal{N}_{21}(x_1,x_2)& d(x_2)
\end{pmatrix}, \label{Ansatz 2}\end{aligned}$$ where $a$ and $d$ are arbitrary, positive functions of $x_1$ and $x_2$, respectively, and $b$, $c\in\mathbb{R}^+$ are positive real numbers. We comment on this ansatz in Sec. [\[Discussion section\]](#Discussion section){reference-type="ref" reference="Discussion section"}. Note that $a$, $b$, $c$ and $d$ can be arbitrarily specified, and are only restricted by their positivity. In the examples we consider below, substituting Eqs. [\[Ansatz 1\]](#Ansatz 1){reference-type="eqref" reference="Ansatz 1"} and [\[Ansatz 2\]](#Ansatz 2){reference-type="eqref" reference="Ansatz 2"} into $U$ defined in Eq. [\[U definition\]](#U definition){reference-type="eqref" reference="U definition"} and requiring that $\mathrm{curl}\, U=0$ yields two linear first-order differential equations for each system, one in $x_2$ (for $\mathcal{N}_{12}$) and one in $x_1$ (for $\mathcal{N}_{21}$). In each of these equations, the other respective coordinate is treated as a parameter. If the solution of either of the resulting equations exists over a simply connected planar domain $\mathcal{D}$, then, by Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}, there exists no closed orbit fully contained in $\mathcal{D}$. We note that, in a Cartesian basis, $\mathrm{curl}\,(\cdot)=\nabla \times (\cdot)$ and $\omega(x)=\partial U_1/\partial x_2-\partial U_2/\partial x_1$, so that the differential equations resulting from Eqs. [\[Ansatz 1\]](#Ansatz 1){reference-type="eqref" reference="Ansatz 1"} and [\[Ansatz 2\]](#Ansatz 2){reference-type="eqref" reference="Ansatz 2"} are $$\begin{aligned}
-a(x_1)\dfrac{\partial u_1(x_1,x_2)}{\partial x_2}-\dfrac{\partial \mathcal{N}_{12}(x_1,x_2)}{\partial x_2}u_2(x_1,x_2)\nonumber\\
-\mathcal{N}_{12}(x_1,x_2)\dfrac{\partial u_2(x_1,x_2)}{\partial x_2}+b\dfrac{\partial u_2(x_1,x_2)}{\partial x_1}=0, \label{corrolary 1}\\
-c\dfrac{\partial u_1(x_1,x_2)}{\partial x_2}+\dfrac{\partial \mathcal{N}_{21}(x_1,x_2)}{\partial x_1}u_1(x_1,x_2)\nonumber\\
+\mathcal{N}_{21}(x_1,x_2)\dfrac{\partial u_1(x_1,x_2)}{\partial x_1}+d(x_2)\dfrac{\partial u_2(x_1,x_2)}{\partial x_1}=0, \label{corrolary 2}
\end{aligned}$$ respectively. The above results are summarized in the following corollary to Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}:
**Corollary 1**. *If the steady planar system $\dot{x}=u(x)$ is formulated in Cartesian coordinates, it has no closed orbits contained entirely in any simply connected region in which a solution of either Eq. [\[corrolary 1\]](#corrolary 1){reference-type="eqref" reference="corrolary 1"} or Eq. [\[corrolary 2\]](#corrolary 2){reference-type="eqref" reference="corrolary 2"} exists.*
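Corollary [Corollary 1](#Corollary 1){reference-type="ref" reference="Corollary 1"} lends itself to symbolic computation. The following minimal sketch sets up Eq. [\[corrolary 1\]](#corrolary 1){reference-type="eqref" reference="corrolary 1"} for an assumed example field, a damped linear oscillator, with the arbitrary choices $a=b=1$, and solves it with `sympy.dsolve`; the example field and all names are illustrative.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
N12 = sp.Function("N12")

# Assumed example field: a damped linear oscillator, which has no closed orbits.
u1, u2 = x2, -x1 - x2
a, b = 1, 1      # arbitrary positive choices for the free entries of the ansatz

# Eq. (corrolary 1) with N12 = N12(x2); x1 enters as a parameter.
ode = sp.Eq(-a * sp.diff(u1, x2) - sp.diff(N12(x2), x2) * u2
            - N12(x2) * sp.diff(u2, x2) + b * sp.diff(u2, x1), 0)
print(sp.dsolve(ode, N12(x2)))
# e.g. N12(x2) = (C1 + 2*x2)/(x1 + x2): a solution exists wherever x1 + x2 != 0, so no
# closed orbit is contained entirely in either of the half-planes x1 + x2 > 0 or < 0.
```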
Let us now apply the mathematical tools presented in this section to specific examples.
# Examples [\[Examples section\]]{#Examples section label="Examples section"}
This section is divided into three parts: In Sec. [\[Helmholtz section\]](#Helmholtz section){reference-type="ref" reference="Helmholtz section"}, we verify the transformation formula [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} on the forced-damped harmonic oscillator and we construct the Helmholtz decomposition for two general classes of dynamical systems. In Sec. [\[Hamiltonian section\]](#Hamiltonian section){reference-type="ref" reference="Hamiltonian section"}, we test criteria (I) and (II) by considering a number of examples of transformed Hamiltonian systems found in the literature. A common theme throughout the examples in Sec. [\[Hamiltonian section\]](#Hamiltonian section){reference-type="ref" reference="Hamiltonian section"} is that we do not claim to make any new discoveries with regard to the dynamics themselves, but we show that our transformation theory can describe these systems in a unified fashion. Lastly, in Sec. [\[Ruling out closed orbits section\]](#Ruling out closed orbits section){reference-type="ref" reference="Ruling out closed orbits section"}, we test Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"} for ruling out closed orbits on well-known examples of oscillators in the plane.
## Helmholtz decomposition [\[Helmholtz section\]]{#Helmholtz section label="Helmholtz section"}
### Harmonic oscillator[\[Example Harmonic Oscillator\]]{#Example Harmonic Oscillator label="Example Harmonic Oscillator"}
For our first example, we consider the noise-driven, forced-damped harmonic oscillator with position $x_1$ and normalized velocity $x_2=\dot{x}_1/\omega_0$, whose dynamics are defined by the system $$\begin{aligned}
\dot{x}&=&\begin{pmatrix} \omega_0 x_2 \\
-\gamma x_2-\omega_0x_1+\dfrac{F}{\omega_0}\cos\omega t+\dfrac{\xi}{\omega_0}
\end{pmatrix} \label{Damped harmonic oscillator},
\end{aligned}$$ where $\omega_0$ is the eigenfrequency, $\gamma$ is the damping, $F$ is the forcing amplitude and $\xi$ is a zero-mean Gaussian white noise source with variance $\Gamma$. The HD of the planar system [\[Damped harmonic oscillator\]](#Damped harmonic oscillator){reference-type="eqref" reference="Damped harmonic oscillator"} can be written as follows: $$\begin{aligned}
\dot{x}&=&\underbrace{-\begin{pmatrix}
\frac{\partial}{\partial x_1}\\
\frac{\partial}{\partial x_2}
\end{pmatrix} \Big(\frac{\gamma x_2^2}{2}-\dfrac{F}{\omega_0}x_2\cos\omega t\Big)}_{=-\nabla \mathcal{V}(x,t)}+\underbrace{\begin{pmatrix}
\frac{\partial}{\partial x_2}\\
-\frac{\partial}{\partial x_1}
\end{pmatrix} \frac{\omega_0( x_1^2+x_2^2)}{2}}_{=S\nabla \mathcal{H}(x)} \nonumber \\
&&+\underbrace{\begin{pmatrix}
0\\
\dfrac{\xi}{\omega_0}
\end{pmatrix}}_{=\Xi}. \label{HD int res}\end{aligned}$$ Note that the HD in Eq. [\[HD int res\]](#HD int res){reference-type="eqref" reference="HD int res"} is only quasi-unique, as the forcing term could be neglected in the potential $\mathcal{V}$ and, instead, a term proportional to $x_1$ (specifically, $-[F x_1/\omega_0] \cos\omega t$) could be added to the Hamiltonian $\mathcal{H}$, leaving the dynamics [\[Damped harmonic oscillator\]](#Damped harmonic oscillator){reference-type="eqref" reference="Damped harmonic oscillator"} unchanged.
Next, we apply a transformation to amplitude-phase coordinates $y=(A,\phi)$, $A\in\mathbb{R}^+$, $\phi\in [0, 2\pi]$: $$\begin{aligned}
\underbrace{\begin{pmatrix}
x_1\\
x_2
\end{pmatrix}}_{x}= \underbrace{\begin{pmatrix}
A \cos {\phi}\\
A\sin {\phi}
\end{pmatrix}}_{f(y)}. \label{DHO transformation}\end{aligned}$$ After substituting the expression [\[DHO transformation\]](#DHO transformation){reference-type="eqref" reference="DHO transformation"} into Eq. [\[Damped harmonic oscillator\]](#Damped harmonic oscillator){reference-type="eqref" reference="Damped harmonic oscillator"}, performing some algebraic manipulation, and redefining $y\rightarrow x$, we find that the transformed dynamics, with $x=(A,\phi)$, are given by $$\begin{aligned}
\dot{x}&=& \begin{pmatrix}
-\gamma A \sin^2\phi+\dfrac{F}{\omega_0}\cos\omega t\sin\phi+\dfrac{\xi}{\omega_0}\sin \phi\\
-\omega_0- \gamma \sin\phi\cos\phi+\dfrac{F}{\omega_0 A}\cos\omega t \cos\phi +\dfrac{\xi}{\omega_0 A}\cos\phi
\end{pmatrix}\nonumber \\\label{DHO tf dynamics}\end{aligned}$$ To confirm the transformation formula [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"}, we compute the Jacobian matrix of the mapping [\[DHO transformation\]](#DHO transformation){reference-type="eqref" reference="DHO transformation"} and its polar decomposition: $$\begin{aligned}
J(x)&=&\underbrace{\begin{pmatrix}
\cos \phi & -\sin\phi \\
\sin\phi & \cos \phi
\end{pmatrix}}_{=Q(x)}\underbrace{\begin{pmatrix}
1 & 0 \\
0 & A
\end{pmatrix}}_{=h(x)}. \label{J Ex A}\end{aligned}$$ We observe that $\det Q=1$. From the expressions in Eq. [\[J Ex A\]](#J Ex A){reference-type="eqref" reference="J Ex A"}, we immediately obtain the metric tensor $g=h^T h$ and its determinant: $$\begin{aligned}
g(x)&=&\mathrm{diag}(1,A^2),\\
\det g(x)&=&A^2.\end{aligned}$$ Furthermore, using Eq. [\[Noise transformation formula\]](#Noise transformation formula){reference-type="eqref" reference="Noise transformation formula"}, we obtain $\widetilde{\Xi}=Q^T\Xi=(\xi \sin\phi/\omega_0,\xi\cos\phi/\omega_0)^T$. By substituting the above expressions into Eq. [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"}, using the definitions $\widetilde{\mathcal{V}}(x,t)=\mathcal{V}(f(x),t)$ and $\widetilde{\mathcal{H}}(x,t)=\mathcal{H}(f(x),t)$, one can verify that the transformed dynamics [\[DHO tf dynamics\]](#DHO tf dynamics){reference-type="eqref" reference="DHO tf dynamics"} are equivalent to $$\begin{aligned}
\dot{x}&=&-\underbrace{\begin{pmatrix}
1 & 0 \\
0 & A^{-2}
\end{pmatrix}}_{=g^{-1}(x)}\underbrace{\begin{pmatrix}
\frac{\partial}{\partial A}\\
\frac{\partial}{\partial \phi}
\end{pmatrix}\bigg(\dfrac{\gamma A^2 \sin^2\phi}{2}-\dfrac{F A}{\omega_0}\cos\omega t \sin\phi\bigg)}_{=\nabla\widetilde{\mathcal{V}}(x,t)}\nonumber\\
&+&\hspace{-0.1cm}\underbrace{\dfrac{1}{A}}_{=\sqrt{\det g(x)}^{-1}}\underbrace{\begin{pmatrix}
\frac{\partial}{\partial \phi}\\
-\frac{\partial}{\partial A}
\end{pmatrix}\bigg(\dfrac{\omega_0 A^2}{2}\bigg)}_{=S\nabla \widetilde{\mathcal{H}}(x)}+\underbrace{\begin{pmatrix}
1 & 0 \\
0 & A^{-1}
\end{pmatrix}}_{=h^{-1}(x)}\underbrace{\begin{pmatrix}
\widetilde{\xi}_1\\
\widetilde{\xi}_2
\end{pmatrix}}_{=\widetilde{\Xi}}, \label{DHO IHD}\end{aligned}$$ confirming the transformation formula [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"}.
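The equivalence of Eqs. [\[DHO tf dynamics\]](#DHO tf dynamics){reference-type="eqref" reference="DHO tf dynamics"} and [\[DHO IHD\]](#DHO IHD){reference-type="eqref" reference="DHO IHD"} can also be confirmed numerically, as in the following minimal sketch, which evaluates both drifts with the noise switched off; all parameter values and the evaluation point are arbitrary, illustrative choices.

```python
import numpy as np

# Arbitrary parameter values and evaluation point; the noise is switched off.
w0, gam, F, w, t = 2.0, 0.3, 0.5, 1.7, 0.4
A, phi = 1.1, 0.6

# Drift of the transformed dynamics (amplitude-phase form), noise-free part:
lhs = np.array([
    -gam * A * np.sin(phi)**2 + (F / w0) * np.cos(w * t) * np.sin(phi),
    -w0 - gam * np.sin(phi) * np.cos(phi)
    + (F / (w0 * A)) * np.cos(w * t) * np.cos(phi),
])

# Transformed Helmholtz decomposition: -g^{-1} grad(V) + (det g)^{-1/2} S grad(H).
V = lambda a, p: gam * a**2 * np.sin(p)**2 / 2 - F * a / w0 * np.cos(w * t) * np.sin(p)
H = lambda a, p: w0 * a**2 / 2
eps = 1e-6
gradV = np.array([(V(A + eps, phi) - V(A - eps, phi)) / (2 * eps),
                  (V(A, phi + eps) - V(A, phi - eps)) / (2 * eps)])
gradH = np.array([(H(A + eps, phi) - H(A - eps, phi)) / (2 * eps), 0.0])
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
rhs = -np.diag([1.0, A**-2]) @ gradV + (1.0 / A) * (S @ gradH)

print(np.allclose(lhs, rhs, atol=1e-6))   # True
```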
### Polynomial Liénard system
In first-order form, Liénard's equation reads as follows:[see @strogatz2018nonlinear p. 212] $$\begin{aligned}
\dot{x}=\begin{pmatrix} x_2 \\
-p(x_1)-q(x_1)x_2
\end{pmatrix}. \label{Lienard system}\end{aligned}$$ We focus here on the class of nonlinear oscillators for which the antiderivative of $p$ exists and $q$ is given by a polynomial of finite order $M$: $$\begin{aligned}
q(x_1)=\sum_{m=1}^M q_m x_1^m.\label{q eq} \end{aligned}$$ The subclass for which $M\leq 2$ encompasses many well-known examples of linear and nonlinear oscillators, some of which are analyzed below in Sec. [\[Ruling out closed orbits section\]](#Ruling out closed orbits section){reference-type="ref" reference="Ruling out closed orbits section"}. The bifurcations of the system [\[Lienard system\]](#Lienard system){reference-type="eqref" reference="Lienard system"} with $M=2$ for which $p$ is a polynomial of order $5$ have recently been analyzed.[@delpino2023limit] A Liénard system [\[Lienard system\]](#Lienard system){reference-type="eqref" reference="Lienard system"} with $M=4$ has previously been studied in the context of thermoacoustic instabilities in turbulent combustors.[@Noiray2017]
Assuming a Cartesian basis, the Helmholtz decomposition [\[Cartesian system\]](#Cartesian system){reference-type="eqref" reference="Cartesian system"} of the Liénard system [\[Lienard system\]](#Lienard system){reference-type="eqref" reference="Lienard system"} is given by $$\begin{aligned}
\mathcal{V}(x)&=&\dfrac{q(x_1)x_2^2}{2} -q''(x_1)\dfrac{x_2^4}{24}+\dots\nonumber\\
&=&\sum_{n\geq 0} q^{(2n)}(x_1) \dfrac{(-1)^{n}x_2^{2(n+1)}}{(2n+2)!}, \label{V eq}\\
\mathcal{H}(x)&=&\dfrac{x_2^2}{2}+\int p(x_1) dx_1+q'(x_1)\dfrac{x_2^3}{6}-q'''(x_1)\dfrac{x_2^5}{120}+\dots\nonumber\\
&=&\dfrac{x_2^2}{2}+\int p(x_1) dx_1+\sum_{n\geq 0} q^{(2n+1)}(x_1) \dfrac{(-1)^{n}x_2^{2n+3}}{(2n+3)!}, \nonumber\\ \label{H eq}\end{aligned}$$ where a prime denotes the derivative, $\int p dx_1$ is the antiderivative of $p$ and $q^{(n)}$ denotes the $n$^th^ derivative of $q$. This result can be verified by substituting the above expressions into Eq. [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"}, whereby the dynamics [\[Lienard system\]](#Lienard system){reference-type="eqref" reference="Lienard system"} are recovered. The sums in Eqs. [\[V eq\]](#V eq){reference-type="eqref" reference="V eq"} and [\[H eq\]](#H eq){reference-type="eqref" reference="H eq"} truncate due to the finite order of $q$ defined in Eq. [\[q eq\]](#q eq){reference-type="eqref" reference="q eq"}.
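For a concrete instance with $M=2$, the truncated series above can be checked symbolically; in the minimal sketch below, $p$ is kept as a generic smooth function and the quadratic $q$ is an assumed example.

```python
import sympy as sp

x1, x2, q1, q2 = sp.symbols("x1 x2 q1 q2", real=True)
p = sp.Function("p")

# Assumed example: q of order M = 2; p is kept as a generic smooth function.
q = q1 * x1 + q2 * x1**2
P = sp.Integral(p(x1), x1)       # antiderivative of p, kept unevaluated

# Truncated series for V and H; higher derivatives of q vanish for M = 2.
V = q * x2**2 / 2 - sp.diff(q, x1, 2) * x2**4 / sp.factorial(4)
H = x2**2 / 2 + P + sp.diff(q, x1) * x2**3 / sp.factorial(3)

u = sp.Matrix([-sp.diff(V, x1) + sp.diff(H, x2),      # -grad V + S grad H
               -sp.diff(V, x2) - sp.diff(H, x1)])
lienard = sp.Matrix([x2, -p(x1) - q * x2])
print(sp.simplify(u - lienard))   # zero vector: the Lienard field is recovered
```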
### Nonlinear modal dynamics
Recently, the interference of a harmonic wave with a nonlinear cavity mode was described by a model which, in the unforced limit, belongs to the following class of systems:[@pedergnana2023superradiant] $$\begin{aligned}
\dot{a}=\big[i\Omega(|a|)+\Gamma(|a|)\big]a, \label{modal dynamics example}\end{aligned}$$ where $i$ is the imaginary unit, $a\in\mathbb{C}$ is the complex modal amplitude, $\Omega$ is the nonlinear oscillation frequency and $\Gamma$ is the nonlinear damping of the system. We assume that both $\Omega$ and $\Gamma$ depend only on the absolute value of $a$, but not on its phase. Defining $\rho=|a|$, $\theta=\arg (a)$ and separating real and imaginary parts yields, with $x=(\rho,\theta)^T$, $$\begin{aligned}
\dot{x}=\begin{pmatrix} \Gamma(\rho) \rho \\
\Omega(\rho) \end{pmatrix}.\end{aligned}$$ Assuming a curvilinear basis with $g=\mathrm{diag}(1,\rho^2)$ and $\sqrt{\det g}=\rho$ (see also Sec. [\[Example Harmonic Oscillator\]](#Example Harmonic Oscillator){reference-type="ref" reference="Example Harmonic Oscillator"}), the transformed Helmholtz decomposition [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} of the system [\[modal dynamics example\]](#modal dynamics example){reference-type="eqref" reference="modal dynamics example"} is readily found to be $$\begin{aligned}
\widetilde{\mathcal{V}}(x)&=&-\int \Gamma(\rho)\rho d\rho,\label{V eq 2}\\
\widetilde{\mathcal{H}}(x)&=&-\int \Omega(\rho) \rho d\rho, \label{H eq 2}\end{aligned}$$ whereby it is assumed that both integrals exist in a well-defined sense.
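As an illustration, the following sketch evaluates Eqs. [\[V eq 2\]](#V eq 2){reference-type="eqref" reference="V eq 2"} and [\[H eq 2\]](#H eq 2){reference-type="eqref" reference="H eq 2"} for an assumed Stuart--Landau-type choice of $\Gamma$ and $\Omega$ and verifies that the transformed Helmholtz decomposition reproduces the modal dynamics; the specific laws are illustrative and are not taken from the cited model.

```python
import sympy as sp

rho, theta = sp.symbols("rho theta", positive=True)
nu, kappa, w0, delta = sp.symbols("nu kappa omega0 delta", positive=True)

# Assumed illustrative laws (Stuart-Landau-like); any smooth Gamma, Omega would do.
Gamma = nu - kappa * rho**2
Omega = w0 + delta * rho**2

V = -sp.integrate(Gamma * rho, rho)
H = -sp.integrate(Omega * rho, rho)

S = sp.Matrix([[0, 1], [-1, 0]])
g_inv = sp.diag(1, rho**-2)
drift = -g_inv * sp.Matrix([sp.diff(V, rho), sp.diff(V, theta)]) \
        + (1 / rho) * S * sp.Matrix([sp.diff(H, rho), sp.diff(H, theta)])

print(sp.simplify(drift - sp.Matrix([Gamma * rho, Omega])))   # zero vector
```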
## Transformed Hamiltonian flows [\[Hamiltonian section\]]{#Hamiltonian section label="Hamiltonian section"}
### Harmonic oscillator
We consider the transformed system [\[DHO tf dynamics\]](#DHO tf dynamics){reference-type="eqref" reference="DHO tf dynamics"} derived in the previous section, but with $\gamma=0$, which takes the form of a general dynamical system [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} with $x=(A,\phi)^T$: $$\begin{aligned}
\dot{x}&=& \underbrace{\begin{pmatrix}
\dfrac{F}{\omega_0}\cos\omega t\sin\phi\\
-\omega_0+\dfrac{F}{\omega_0 A}\cos\omega t \cos\phi
\end{pmatrix}}_{=\mathcal{F}(x,t)} +\underbrace{\begin{pmatrix}
1 & 0 \\
0 & A^{-1}
\end{pmatrix}\begin{pmatrix}
\widetilde{\xi}_1\\
\widetilde{\xi}_2
\end{pmatrix}}_{=\mathcal{B}(x) \Xi}.\nonumber\\ \label{undamped tf dynamics}\end{aligned}$$ Using criterion (II), setting $\mathcal{B}=h^{-1}$, where $h=\mathrm{diag}(1,A)$ is defined by Eq. [\[J Ex A\]](#J Ex A){reference-type="eqref" reference="J Ex A"}, one can verify that the undamped transformed system [\[undamped tf dynamics\]](#undamped tf dynamics){reference-type="eqref" reference="undamped tf dynamics"} is indeed Hamiltonian: $$\begin{aligned}
\underbrace{\begin{pmatrix}
\frac{\partial}{\partial A}\\
\frac{\partial}{\partial \phi}
\end{pmatrix}}_{=\nabla}\cdot\Bigg[\underbrace{A}_{=\det[\mathcal{B}]^{-1}}\underbrace{\begin{pmatrix}\dfrac{F}{\omega_0}\cos\omega t\sin\phi\\
-\omega_0+\dfrac{F}{\omega_0 A}\cos\omega t \cos\phi
\end{pmatrix}}_{=\mathcal{F}(x,t)}\Bigg]=0.\end{aligned}$$ To see why this should be the case, recall that, without loss of generality, the term involving $F$ in the HD [\[HD int res\]](#HD int res){reference-type="eqref" reference="HD int res"} can be included in $\mathcal{H}$ so that $\mathcal{V}$ vanishes identically for $\gamma=0$.
### Lotka--Volterra equations
The Lotka--Volterra equations are used to describe the interaction between two species, one being the predator which preys on the other.[@lotka1925elements; @volterra1927variazioni] @Nutku199027 noticed that these equations have the structure of a steady Hamiltonian system with a quantity $\widetilde{\mathcal{H}}$ which is conserved along trajectories. To avoid duplication, we simply refer the reader to this previous work, from which, using criterion (I), one can verify that the Lotka--Volterra equations (Eq. 4 in the reference[@Nutku199027]) have the form of a transformed Hamiltonian system $\dot{x}=\sqrt{\det g}^{-1} S \nabla \widetilde{\mathcal{H}}$. On the two coordinate axes, $\det g$ blows up, which is why the same Hamiltonian formulation does not hold in all quadrants. This is not an issue, since the Lotka--Volterra equations are defined only in the first quadrant $x_1$, $x_2\geq 0$, where $x_1$, $x_2$ are the population numbers of the prey and of the predators, respectively.
The contribution of the present work is not to recognize the Hamiltonian structure of this well-known system, but to highlight that its nonstandard Hamiltonian form (see also the work of @Gonzalez-Gascon198661) can be understood, in terms of the general formula [\[Transformed Hamiltonian system\]](#Transformed Hamiltonian system){reference-type="eqref" reference="Transformed Hamiltonian system"}, as resulting from a smooth nonlinear mapping from some original set of coordinates in which the system takes the form $\dot{x}=S\nabla \mathcal{H}$.
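A minimal symbolic check along these lines is sketched below. It assumes the textbook form of the Lotka--Volterra equations, $\dot{x}_1=x_1(a-bx_2)$, $\dot{x}_2=x_2(-c+dx_1)$, the standard candidate $\alpha=1/(x_1x_2)$ and a conserved quantity consistent with that choice; these expressions are common in the literature but are not quoted from the cited reference.

```python
import sympy as sp

x1, x2, a, b, c, d = sp.symbols("x1 x2 a b c d", positive=True)
S = sp.Matrix([[0, 1], [-1, 0]])

# Textbook Lotka--Volterra field (assumed form, parameters a, b, c, d > 0):
F = sp.Matrix([x1 * (a - b * x2), x2 * (-c + d * x1)])

# Criterion (I) with the candidate alpha = 1/(x1*x2):
alpha = 1 / (x1 * x2)
print(sp.simplify(sp.diff(alpha * F[0], x1) + sp.diff(alpha * F[1], x2)))   # 0

# A conserved quantity consistent with the transformed Hamiltonian form,
# assuming 1/sqrt(det g) = x1*x2 for this sketch:
H = c * sp.log(x1) - d * x1 + a * sp.log(x2) - b * x2
field = x1 * x2 * S * sp.Matrix([sp.diff(H, x1), sp.diff(H, x2)])
print(sp.simplify(field - F))                                               # zero vector
```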
### Kermack--McKendrick theory
A simplified version of Kermack and McKendrick's mathematical theory of epidemics[@Kermack199133] is described by the following planar system with $x=(x_1,x_2)$[see @strogatz2018nonlinear p. 188]: $$\begin{aligned}
\dot{x}=\begin{pmatrix}
-k x_1 x_2 \\
k x_1 x_2 -l x_2
\end{pmatrix}, \label{Kermack system}\end{aligned}$$ where $x_1\geq 0$ denotes the healthy population, $x_2\geq 0$ is the sick population and $k$, $l>0$ are constants. While we note that the Hamiltonian structure of a similar, three-dimensional model has previously been analyzed,[@Nutku1990L1145; @Ballesteros2020] we focus here on the planar version. In the positive quadrant $x_1$, $x_2>0$, by criterion (I), Eq. [\[Kermack system\]](#Kermack system){reference-type="eqref" reference="Kermack system"} represents a transformed Hamiltonian system [\[Transformed Hamiltonian system\]](#Transformed Hamiltonian system){reference-type="eqref" reference="Transformed Hamiltonian system"} with $$\begin{aligned}
1/\sqrt{\det g(x)}&=&k x_1 x_2, \label{singular gdet}\\
\widetilde{\mathcal{H}}(x)&=&\dfrac{l}{k}\ln x_1 -x_1-x_2. \label{KK Hamiltonian}\end{aligned}$$ The conserved quantity [\[KK Hamiltonian\]](#KK Hamiltonian){reference-type="eqref" reference="KK Hamiltonian"} was suggested by Strogatz; we only show that the Kermack--McKendrick system has a hidden Hamiltonian structure, which explains why such a conserved quantity exists. By Eq. [\[singular gdet\]](#singular gdet){reference-type="eqref" reference="singular gdet"}, the determinant of the metric tensor is singular on the two axes. As a consequence, the system [\[Kermack system\]](#Kermack system){reference-type="eqref" reference="Kermack system"} cannot be expressed in terms of the same transformed Hamiltonian $\widetilde{\mathcal{H}}$ throughout $\mathbb{R}^2$. This is not of concern in this example because the variables live only in the first quadrant (as in the Lotka--Volterra equations), throughout which a Hamiltonian formulation exists.
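The statements above can be verified symbolically, as in the following minimal sketch, which restates Eqs. [\[Kermack system\]](#Kermack system){reference-type="eqref" reference="Kermack system"}, [\[singular gdet\]](#singular gdet){reference-type="eqref" reference="singular gdet"} and [\[KK Hamiltonian\]](#KK Hamiltonian){reference-type="eqref" reference="KK Hamiltonian"} and checks them with `sympy`.

```python
import sympy as sp

x1, x2, k, l = sp.symbols("x1 x2 k l", positive=True)
S = sp.Matrix([[0, 1], [-1, 0]])

F = sp.Matrix([-k * x1 * x2, k * x1 * x2 - l * x2])   # planar Kermack-McKendrick field
H = (l / k) * sp.log(x1) - x1 - x2                    # candidate conserved quantity

# Transformed Hamiltonian form with 1/sqrt(det g) = k*x1*x2:
field = k * x1 * x2 * S * sp.Matrix([sp.diff(H, x1), sp.diff(H, x2)])
print(sp.simplify(field - F))                                            # zero vector

# H is conserved along trajectories: dH/dt = grad(H) . F = 0.
print(sp.simplify(sp.Matrix([sp.diff(H, x1), sp.diff(H, x2)]).dot(F)))   # 0
```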
### Coupled Kuramoto oscillators [\[Swarm system\]]{#Swarm system label="Swarm system"}
@okeefe derive, from two coupled Kuramoto oscillators, the following system, with $x=(y,\theta)$: $$\begin{aligned}
\dot{x}=\begin{pmatrix}
-\mathcal{J}\sin y \cos \theta\\
-\mathcal{K}\sin \theta \cos y
\end{pmatrix}, \label{swarm system new}\end{aligned}$$ where $\mathcal{J}$ and $\mathcal{K}$ are coupling constants. We showed in our previous work[@Pedergnana2022] that this steady, planar system can be written as a transformed gradient system $\dot{x}=-g^{-1}\nabla \widetilde{\mathcal{V}}$ for $\mathcal{J}$, $\mathcal{K}>0$. If, however, $\mathcal{K}$ is less than zero, the same system has nested, closed orbits and there exists a conserved Hamiltonian $\widetilde{\mathcal{H}}(x)$.[@okeefe] In the positive quadrant $y$, $\theta>0$, Eq. [\[swarm system new\]](#swarm system new){reference-type="eqref" reference="swarm system new"} is, by criterion (I), equivalent to a transformed Hamiltonian system [\[Transformed Hamiltonian system\]](#Transformed Hamiltonian system){reference-type="eqref" reference="Transformed Hamiltonian system"} with[@okeefe see p. 7 for the derivation of $\widetilde{\mathcal{H}}$] $$\begin{aligned}
1/\sqrt{\det g(x)}&=&\dfrac{(\sin y)^{\mathcal{K}+1}}{(\sin \theta)^{\mathcal{J}-1}}, \label{swarm determinant}\\
\widetilde{\mathcal{H}}(x)&=&-\dfrac{(\sin \theta)^\mathcal{J}}{(\sin y)^\mathcal{K}}. \label{swarm Hamiltonian}\end{aligned}$$ Similar expressions can be found in each of the other quadrants from the requirement that $\det g$ is always positive. A nonsmooth Hamiltonian which describes the system [\[swarm system new\]](#swarm system new){reference-type="eqref" reference="swarm system new"} in each of the quadrants (but not along the axes) is given by $-|\widetilde{\mathcal{H}}|$, where $\widetilde{\mathcal{H}}$ is defined in Eq. [\[swarm Hamiltonian\]](#swarm Hamiltonian){reference-type="eqref" reference="swarm Hamiltonian"}. This quantity is visualized in Fig. [2](#Figure 2){reference-type="ref" reference="Figure 2"} for different values of $\mathcal{J}$ and $\mathcal{K}$.[^8] Of course, the quantities [\[swarm determinant\]](#swarm determinant){reference-type="eqref" reference="swarm determinant"} and [\[swarm Hamiltonian\]](#swarm Hamiltonian){reference-type="eqref" reference="swarm Hamiltonian"} are not the unique Hamiltonian formulation of the system [\[swarm system new\]](#swarm system new){reference-type="eqref" reference="swarm system new"}. An alternative description that could be used, for $y$, $\theta>0$, is $$\begin{aligned}
1/\sqrt{\det {g_\mathrm{alt}(x)}}&=&\sin y \sin \theta,\\
\widetilde{\mathcal{H}}_\mathrm{alt}(x)&=&\mathcal{K}\ln (\sin y)-\mathcal{J}\ln (\sin \theta). \label{alt}\end{aligned}$$ The two formulations given above describe the same conserved quantity. Indeed, we have $\widetilde{\mathcal{H}}=-\exp[-\widetilde{\mathcal{H}}_\mathrm{alt}]$.
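Both formulations can be checked against Eq. [\[swarm system new\]](#swarm system new){reference-type="eqref" reference="swarm system new"} symbolically. The minimal sketch below does so for the arbitrary concrete choice $\mathcal{J}=2$, $\mathcal{K}=3$ in the open square $0<y,\theta<\pi$; these values are illustrative only.

```python
import sympy as sp

y, th = sp.symbols("y theta", positive=True)
J, K = 2, 3                    # concrete coupling constants chosen for this sketch
S = sp.Matrix([[0, 1], [-1, 0]])
u = sp.Matrix([-J * sp.sin(y) * sp.cos(th), -K * sp.sin(th) * sp.cos(y)])

def transformed_hamiltonian_field(inv_sqrt_det_g, H):
    return inv_sqrt_det_g * S * sp.Matrix([sp.diff(H, y), sp.diff(H, th)])

# First formulation:
f1 = transformed_hamiltonian_field(sp.sin(y)**(K + 1) / sp.sin(th)**(J - 1),
                                   -sp.sin(th)**J / sp.sin(y)**K)
# Alternative formulation:
f2 = transformed_hamiltonian_field(sp.sin(y) * sp.sin(th),
                                   K * sp.log(sp.sin(y)) - J * sp.log(sp.sin(th)))

print(sp.simplify(f1 - u))     # zero vector
print(sp.simplify(f2 - u))     # zero vector
```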
![Illustration of the Hamiltonian governing a system of coupled Kuramoto oscillators discussed in Sec. [\[Swarm system\]](#Swarm system){reference-type="ref" reference="Swarm system"}, over the coupling constants $\mathcal{J}$ and $\mathcal{K}$ (semi-log scale). Shown are contour plots of $-|\widetilde{\mathcal{H}}|$, where $\widetilde{\mathcal{H}}$ is defined by Eq. [\[swarm Hamiltonian\]](#swarm Hamiltonian){reference-type="eqref" reference="swarm Hamiltonian"}, over the periodic domain $[-\pi,\pi]\times [-\pi,\pi]$, ranging from values of $-1$ (black) to $0$ (white), overlaid with selected contour lines (green). In the cases with $\mathcal{K}=1$, the colormap was cut off at $\pm 1$ because $\widetilde{\mathcal{H}}$ shows singular behavior.](Figure_2.eps){#Figure 2 width="43%"}
The Hamiltonian formulation of Eq. [\[swarm system new\]](#swarm system new){reference-type="eqref" reference="swarm system new"} given in Eqs. [\[swarm determinant\]](#swarm determinant){reference-type="eqref" reference="swarm determinant"} and [\[swarm Hamiltonian\]](#swarm Hamiltonian){reference-type="eqref" reference="swarm Hamiltonian"} is also true for $\mathcal{K}>0$, where the system is known to be governed by the gradient of a steady potential $\widetilde{\mathcal{V}}$.[@Pedergnana2022] In other words, for $\mathcal{K}>0$, the system [\[swarm system new\]](#swarm system new){reference-type="eqref" reference="swarm system new"} is simultaneously a Hamiltonian system and a gradient system. Consequently, in this parameter range, $\widetilde{\mathcal{H}}$ can have no closed isocontours, because this property would be in contradiction to its gradient-driven nature for positive $\mathcal{J}$ and $\mathcal{K}$.[see @strogatz2018nonlinear pp. 201--202] This argument is confirmed by the visualization of the Hamiltonian $\widetilde{\mathcal{H}}$ in Fig. [2](#Figure 2){reference-type="ref" reference="Figure 2"}. This figure shows that, as expected, when $\mathcal{K}$ is varied from negative to positive values, the contours of $\widetilde{\mathcal{H}}$ lose their closed character. We comment further on these observations in Sec. [\[Discussion section\]](#Discussion section){reference-type="ref" reference="Discussion section"}.
### Strogatz's conservative system [\[Strogatz system\]]{#Strogatz system label="Strogatz system"}
Strogatz proposes the following conservative system with $x=(x_1,x_2)$:[see @strogatz2018nonlinear p. 190] $$\begin{aligned}
\dot{x}=\begin{pmatrix}
x_1 x_2 \\
-x_1^2
\end{pmatrix}. \label{Strogatz system new}\end{aligned}$$ By criterion (I), in the right half-plane $x_1>0$, this system is a transformed Hamiltonian system [\[Transformed Hamiltonian system\]](#Transformed Hamiltonian system){reference-type="eqref" reference="Transformed Hamiltonian system"} with $$\begin{aligned}
1/\sqrt{\det g(x)}&=&x_1,\\
\widetilde{\mathcal{H}}(x)&=&\dfrac{x_1^2+x_2^2}{2}.\end{aligned}$$ A similar expression can be found for the left half-plane. Due to the singular nature of $\det g$ on the $y$-axis, the Hamiltonian formulation of the system [\[Strogatz system new\]](#Strogatz system new){reference-type="eqref" reference="Strogatz system new"} suffers from the same problems as the other examples in this section.
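As an illustration (ours, not part of the original text), the following minimal Python sketch integrates the system numerically from an arbitrary initial condition in the right half-plane and checks that $\widetilde{\mathcal{H}}(x)=(x_1^2+x_2^2)/2$ is conserved along the trajectory; the time span and tolerances are arbitrary choices.

```python
# Minimal numerical check (ours) that H(x) = (x1^2 + x2^2)/2 is conserved
# along trajectories of xdot = (x1*x2, -x1^2) from Strogatz's conservative system.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x
    return [x1 * x2, -x1 ** 2]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], rtol=1e-10, atol=1e-12)
H = 0.5 * (sol.y[0] ** 2 + sol.y[1] ** 2)
print(np.max(np.abs(H - H[0])))   # close to machine precision: H is conserved
```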
## Ruling out closed orbits [\[Ruling out closed orbits section\]]{#Ruling out closed orbits section label="Ruling out closed orbits section"}
![image](Figure_3.eps){width="68%"}
### Harmonic oscillator
To apply Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"} and Corollary [Corollary 1](#Corollary 1){reference-type="ref" reference="Corollary 1"}, we now consider the harmonic oscillator with $\omega_0=1$ in the undamped ($\gamma=0$), unforced ($F=0$), noise-free ($\Xi=0$) limit, which is a steady planar Hamiltonian system $\dot{x}=S\nabla \mathcal{H}$ with $\mathcal{H}(x)=(x_1^2+x_2^2)/2$ that possesses nested closed orbits in the form of circles (or ellipses if the coordinates are not normalized) around the origin (see Fig. [\[Figure 3\]](#Figure 3){reference-type="ref" reference="Figure 3"}, left inset). In a Cartesian basis with normalized variables $x=(x_1,x_2)$ (see Sec. [\[Example Harmonic Oscillator\]](#Example Harmonic Oscillator){reference-type="ref" reference="Example Harmonic Oscillator"}), the dynamics read $$\begin{aligned}
\dot{x}&=&\underbrace{\begin{pmatrix} x_2 \\
-x_1
\end{pmatrix}}_{=u(x)}. \label{Unforced Damped harmonic oscillator}
\end{aligned}$$ Substituting the velocity field $u(x)$ given by the RHS of Eq. [\[Unforced Damped harmonic oscillator\]](#Unforced Damped harmonic oscillator){reference-type="eqref" reference="Unforced Damped harmonic oscillator"} into Eqs. [\[corrolary 1\]](#corrolary 1){reference-type="eqref" reference="corrolary 1"} and [\[corrolary 2\]](#corrolary 2){reference-type="eqref" reference="corrolary 2"} yields the following set of differential equations: $$\begin{aligned}
-a(x_1)+\dfrac{\partial \mathcal{N}_{12}(x_1,x_2)}{\partial x_2} x_1-b =0, \label{DHO corrolary 1}\\
-c+\dfrac{\partial \mathcal{N}_{21}(x_1,x_2)}{\partial x_1} x_2 - d(x_2)=0. \label{DHO corrolary 2}
\end{aligned}$$ The general solutions of Eqs. [\[DHO corrolary 1\]](#DHO corrolary 1){reference-type="eqref" reference="DHO corrolary 1"} and [\[DHO corrolary 2\]](#DHO corrolary 2){reference-type="eqref" reference="DHO corrolary 2"} are given by $$\begin{aligned}
\mathcal{N}_{12}(x_1,x_2)&=&\dfrac{[a(x_1)+b] x_2}{x_1}+C_1, \label{DHO sol 1}\\
\mathcal{N}_{21}(x_1,x_2)&=&\dfrac{[c+d(x_2)] x_1}{x_2}+C_2, \label{DHO sol 2}\end{aligned}$$ where $C_1$ and $C_2$ are constants of integration. The expression [\[DHO sol 1\]](#DHO sol 1){reference-type="eqref" reference="DHO sol 1"} exists for $x_1\neq 0$ (left and right half-plane) and [\[DHO sol 2\]](#DHO sol 2){reference-type="eqref" reference="DHO sol 2"} for $x_2\neq 0$ (lower and upper half-plane). None of these regions fully contains any closed orbit of the system [\[Unforced Damped harmonic oscillator\]](#Unforced Damped harmonic oscillator){reference-type="eqref" reference="Unforced Damped harmonic oscillator"}, confirming Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"} and its Corollary [Corollary 1](#Corollary 1){reference-type="ref" reference="Corollary 1"}. The above results are visualized, together with typical trajectories of the system [\[Unforced Damped harmonic oscillator\]](#Unforced Damped harmonic oscillator){reference-type="eqref" reference="Unforced Damped harmonic oscillator"}, in Fig. [\[Figure 3\]](#Figure 3){reference-type="ref" reference="Figure 3"} (left inset).
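For completeness, the following SymPy sketch (ours, not part of the paper) verifies symbolically that the general solutions stated above indeed satisfy the two differential equations; $a$ and $d$ are kept as arbitrary symbolic functions.

```python
# Symbolic verification (ours) that the stated general solutions for N12 and N21
# satisfy the two differential equations of the harmonic-oscillator example.
import sympy as sp

x1, x2, b, c, C1, C2 = sp.symbols('x1 x2 b c C1 C2')
a = sp.Function('a')(x1)          # arbitrary function a(x1), as in the ansatz
d = sp.Function('d')(x2)          # arbitrary function d(x2)

N12 = (a + b) * x2 / x1 + C1      # general solution for N_12
N21 = (c + d) * x1 / x2 + C2      # general solution for N_21

eq1 = -a + sp.diff(N12, x2) * x1 - b          # LHS of the first equation
eq2 = -c + sp.diff(N21, x1) * x2 - d          # LHS of the second equation
print(sp.simplify(eq1), sp.simplify(eq2))     # prints: 0 0
```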
### Van der Pol oscillator [\[Vdp section\]]{#Vdp section label="Vdp section"}
Nearly a century ago, Balthasar Van der Pol proposed his nonlinear oscillator to describe self-sustained relaxation oscillations in electronic circuits.[@van1926lxxxviii] To test Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}, we consider here the following nondimensionalized set of equations describing an unforced Van der Pol oscillator[see @guckenheimer2013nonlinear pp. 67--68]: $$\begin{aligned}
\dot{x}=\begin{pmatrix}
x_2\\
-x_1 +\alpha (1-x_1^2) x_2
\end{pmatrix}, \label{VDP system}\end{aligned}$$ where $\alpha\geq 0$ is a constant. Assuming a Cartesian basis and substituting the velocity field $u(x)$ given by the RHS of [\[VDP system\]](#VDP system){reference-type="eqref" reference="VDP system"} into Eqs. [\[corrolary 1\]](#corrolary 1){reference-type="eqref" reference="corrolary 1"} and [\[corrolary 2\]](#corrolary 2){reference-type="eqref" reference="corrolary 2"} yields the following equations: $$\begin{aligned}
-a(x_1)-\dfrac{\partial \mathcal{N}_{12}(x_1,x_2)}{\partial x_2}(-x_1 +\alpha (1-x_1^2) x_2)\nonumber\\
-\mathcal{N}_{12}(x_1,x_2)\alpha(1-x_1^2)-b(1+2 \alpha x_1 x_2)=0, \label{VDP corrolary 1}\\
-c+\dfrac{\partial \mathcal{N}_{21}(x_1,x_2)}{\partial x_1}x_2-d(x_2)(1+2\alpha x_1 x_2)=0. \label{VDP corrolary 2}
\end{aligned}$$ These equations have the following general solutions: $$\begin{aligned}
\mathcal{N}_{12}(x_1,x_2)&=&\dfrac{[a(x_1)+b]x_2+ \alpha b x_1 x_2^2+C_1}{x_1 -\alpha (1-x_1^2) x_2}, \label{VDP sol 1}\\
\mathcal{N}_{21}(x_1,x_2)&=&\dfrac{[c+d(x_2)]x_1}{x_2}+ d(x_2) \alpha x_1^2 +C_2. \label{VDP sol 2}\end{aligned}$$ For $\alpha>0$, the zero level set of the denominator of [\[VDP sol 1\]](#VDP sol 1){reference-type="eqref" reference="VDP sol 1"} describes a curve which crosses the origin and diverges to negative infinity at $x_1=-1$ and to positive infinity at $x_1=1$. In general, $\mathcal{N}_{12}$ exists everywhere but on this curve. Similarly, the solution [\[VDP sol 2\]](#VDP sol 2){reference-type="eqref" reference="VDP sol 2"} exists in the upper and in the lower half-plane, but not on the $x$-axis ($x_2=0$). The limit cycle of the Van der Pol oscillator,[see @guckenheimer2013nonlinear pp. 67--82] which winds around the origin, necessarily crosses both of these curves, on which the solutions [\[VDP sol 1\]](#VDP sol 1){reference-type="eqref" reference="VDP sol 1"} and [\[VDP sol 2\]](#VDP sol 2){reference-type="eqref" reference="VDP sol 2"} do not exist, confirming Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}. See Fig. [\[Figure 3\]](#Figure 3){reference-type="ref" reference="Figure 3"}, middle inset, for an illustration of the above results on an example with $\alpha=0.7$.
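Analogously, a short SymPy computation (ours, not part of the paper) confirms that the two solutions stated above satisfy the corresponding equations for the Van der Pol case; $\alpha$, $a$, $d$ and the integration constants remain symbolic.

```python
# Symbolic verification (ours) of the Van der Pol case: the stated N12 and N21
# satisfy the two equations obtained from the ansatz.
import sympy as sp

x1, x2, alpha, b, c, C1, C2 = sp.symbols('x1 x2 alpha b c C1 C2')
a = sp.Function('a')(x1)
d = sp.Function('d')(x2)

N12 = ((a + b) * x2 + alpha * b * x1 * x2**2 + C1) / (x1 - alpha * (1 - x1**2) * x2)
N21 = (c + d) * x1 / x2 + d * alpha * x1**2 + C2

eq1 = (-a - sp.diff(N12, x2) * (-x1 + alpha * (1 - x1**2) * x2)
       - N12 * alpha * (1 - x1**2) - b * (1 + 2 * alpha * x1 * x2))
eq2 = -c + sp.diff(N21, x1) * x2 - d * (1 + 2 * alpha * x1 * x2)
print(sp.simplify(eq1), sp.simplify(eq2))   # prints: 0 0
```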
### Duffing nonlinearity [\[Duffing section\]]{#Duffing section label="Duffing section"}
As our final example, we examine the Duffing oscillator[@duffing1918forced] with a double-well Hamiltonian in nondimensionalized form[see @guckenheimer2013nonlinear pp. 82--91]: $$\begin{aligned}
\dot{x}=\begin{pmatrix}
x_2\\
x_1-x_1^3
\end{pmatrix}. \label{Duffing system}\end{aligned}$$ The system [\[Duffing system\]](#Duffing system){reference-type="eqref" reference="Duffing system"} is of the form $\dot{x}=S\nabla \mathcal{H}$ with $\mathcal{H}(x)=(x_2^2-x_1^2)/2+x_1^4/4$ and is governed by two families of closed orbits, segregated by the curve $\mathcal{H}(x)=0$, the separatrix. For $\mathcal{H}>0$, the flow [\[Duffing system\]](#Duffing system){reference-type="eqref" reference="Duffing system"} has closed orbits encircling the separatrix and the fixed point at the origin. In the region where $\mathcal{H}<0$, inside the separatrix, there are two nested sets of closed orbits, mirror-symmetric with respect to the $y$-axis, which wind around the fixed points $x=(-1,0)^T$ and $x=(1,0)^T$, respectively (see Fig. [\[Figure 3\]](#Figure 3){reference-type="ref" reference="Figure 3"}, right inset). Using a Cartesian basis, substituting the RHS of [\[Duffing system\]](#Duffing system){reference-type="eqref" reference="Duffing system"} into Eqs. [\[corrolary 1\]](#corrolary 1){reference-type="eqref" reference="corrolary 1"} and [\[corrolary 2\]](#corrolary 2){reference-type="eqref" reference="corrolary 2"} leads to $$\begin{aligned}
-a(x_1)-\dfrac{\partial \mathcal{N}_{12}(x_1,x_2)}{\partial x_2}(x_1-x_1^3)+b(1-3x_1^2)=0, \label{Duffing corrolary 1}\\
-c+\dfrac{\partial \mathcal{N}_{21}(x_1,x_2)}{\partial x_1}x_2+d(x_2)(1-3x_1^2)=0, \label{Duffing corrolary 2}
\end{aligned}$$ whose general solutions are given by $$\begin{aligned}
\mathcal{N}_{12}(x_1,x_2)&=&\dfrac{[b-a(x_1)]x_2-3bx_1^2 x_2}{x_1-x_1^3}+C_1, \label{Duffing sol 1}\\
\mathcal{N}_{21}(x_1,x_2)&=&\dfrac{c x_1+d(x_2)(x_1^3-x_1)}{x_2}+C_2. \label{Duffing sol 2}\end{aligned}$$ By Corollary [Corollary 1](#Corollary 1){reference-type="ref" reference="Corollary 1"}, looking at Eqs. [\[Duffing sol 1\]](#Duffing sol 1){reference-type="eqref" reference="Duffing sol 1"} and [\[Duffing sol 2\]](#Duffing sol 2){reference-type="eqref" reference="Duffing sol 2"}, any closed orbit of the steady planar system [\[Duffing system\]](#Duffing system){reference-type="eqref" reference="Duffing system"} must cross one of the lines $x_1=1$, $x_1=-1$ or $x_1=0$ and, additionally, cross the $x$-axis ($x_2=0$). By the above discussion, as shown in Fig. [\[Figure 3\]](#Figure 3){reference-type="ref" reference="Figure 3"}, right inset, these conditions are indeed satisfied, confirming Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"}.
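As a final cross-check (ours, not taken from the paper), the following SymPy snippet verifies that the Duffing system is indeed of the form $\dot{x}=S\nabla\mathcal{H}$ with the double-well Hamiltonian $\mathcal{H}(x)=(x_2^2-x_1^2)/2+x_1^4/4$; here $S$ is assumed to be the standard symplectic matrix, consistent with the harmonic-oscillator example above.

```python
# Sanity check (ours) that the Duffing system equals S*grad(H) for the
# double-well Hamiltonian H = (x2^2 - x1^2)/2 + x1^4/4, with S = [[0, 1], [-1, 0]].
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
H = (x2**2 - x1**2) / 2 + x1**4 / 4
S = sp.Matrix([[0, 1], [-1, 0]])
grad_H = sp.Matrix([sp.diff(H, x1), sp.diff(H, x2)])

print(sp.simplify(S * grad_H))   # Matrix([[x2], [x1 - x1**3]]): the RHS of the system
```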
# Discussion and Outlook [\[Discussion section\]]{#Discussion section label="Discussion section"}
We recall that the first part of this work is concerned with the Helmholtz decomposition for planar systems. Using the example of the Harmonic oscillator, we argue that, given a transformed system [\[DHO tf dynamics\]](#DHO tf dynamics){reference-type="eqref" reference="DHO tf dynamics"}, without any knowledge of its original form [\[Damped harmonic oscillator\]](#Damped harmonic oscillator){reference-type="eqref" reference="Damped harmonic oscillator"}, there exists a certain functional structure, the transformed Helmholtz decomposition [\[DHO IHD\]](#DHO IHD){reference-type="eqref" reference="DHO IHD"}, that may be more easily observed if the general formula [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} is known.
We have shown that the Helmholtz decomposition [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"} exists for broad classes of dynamical systems. It is left to future research to understand what use can be made of the knowledge of $\mathcal{V}$ and $\mathcal{H}$ in the general case when neither of these two quantities vanishes identically.
The Kuramoto oscillator system analyzed in Sec. [\[Swarm system\]](#Swarm system){reference-type="ref" reference="Swarm system"} can, for certain parameter ranges, be written as either a transformed gradient or a transformed Hamiltonian system. It would be interesting to understand under which circumstances a steady Hamiltonian system $\dot{x}=\sqrt{\det g}^{-1} S \nabla \widetilde{\mathcal{H}}$, for which $\widetilde{\mathcal{H}}$ has no closed isocontours, can be written as a transformed gradient system $\dot{x}=-g^{-1}\nabla\widetilde{\mathcal{V}}$.
The transformation formula [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} allowed us to identify the underlying Hamiltonian structure of the systems analyzed in Sec. [\[Hamiltonian section\]](#Hamiltonian section){reference-type="ref" reference="Hamiltonian section"}. It is possible that this generalized Hamiltonian framework could be used for canonical quantization, whereby, roughly speaking, quantum analogues of classical systems are derived by replacing the Hamiltonian variables with their (quantum) operator counterparts.[@dirac1981principles]
The form of the ansatz [\[Ansatz 1\]](#Ansatz 1){reference-type="eqref" reference="Ansatz 1"} and [\[Ansatz 2\]](#Ansatz 2){reference-type="eqref" reference="Ansatz 2"}, which led to Corollary [Corollary 1](#Corollary 1){reference-type="ref" reference="Corollary 1"}, was motivated by the simplicity of the resulting equations and the underlying goal of obtaining quantitative tests for Theorem [Theorem 1](#Theorem 1){reference-type="ref" reference="Theorem 1"} in examples where the dynamics are well-known. The expressions [\[Ansatz 1\]](#Ansatz 1){reference-type="eqref" reference="Ansatz 1"} and [\[Ansatz 2\]](#Ansatz 2){reference-type="eqref" reference="Ansatz 2"} assumed for $\mathcal{N}$ may well not be optimal for ruling out closed orbits in new and unknown systems. In the future, more general expressions for the smooth, positive definite matrix $\mathcal{N}$ could be used. Theorem 1 may be useful to Hilbert's 16^th^ problem of determining the number of limit cycles in a given planar, polynomial dynamical system.[@hilbert1900mathematische]
# Author contributions {#author-contributions .unnumbered}
**Tiemo Pedergnana**: Formal analysis (lead), visualization (lead), writing--original draft, writing--review and editing (equal). **Nicolas Noiray**: Formal analysis (supporting), supervision, visualization (supporting), writing--review and editing (equal).
# Acknowledgements {#acknowledgements .unnumbered}
This project is funded by the Swiss National Science Foundation under Grant agreement 184617.
# AUTHOR DECLARATIONS {#author-declarations .unnumbered}
## Conflict of Interest {#conflict-of-interest .unnumbered}
The authors have no conflicts to disclose.
# Data availability {#data-availability .unnumbered}
The datasets used for generating the plots in this study can be directly obtained by numerical simulation of the related mathematical equations in the manuscript.
[^1]: Throughout this work, a dot over a dependent variable denotes the total time derivative.
[^2]: Adding a constant to $\mathcal{V}$ or $\mathcal{H}$ does not change $u$ defined in Eq. [\[Helmholtz decomposition\]](#Helmholtz decomposition){reference-type="eqref" reference="Helmholtz decomposition"}. Furthermore, a spatially constant term in the dynamics [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} can be included as a linear (monomial) term either in $\mathcal{V}$ or in $\mathcal{H}$; see Sec. [\[Example Harmonic Oscillator\]](#Example Harmonic Oscillator){reference-type="ref" reference="Example Harmonic Oscillator"} for an example.
[^3]: Formally, this amounts to assuming that the dynamics [\[Dynamical system\]](#Dynamical system){reference-type="eqref" reference="Dynamical system"} were originally given by [\[Cartesian system\]](#Cartesian system){reference-type="eqref" reference="Cartesian system"} before being transformed into [\[Transformed 2D Helmholtz decomposition\]](#Transformed 2D Helmholtz decomposition){reference-type="eqref" reference="Transformed 2D Helmholtz decomposition"} via some mapping [\[General mapping\]](#General mapping){reference-type="eqref" reference="General mapping"}.
[^4]: We follow the Einstein summation convention in this work, whereby repeated indices in a product imply summation over those indices.
[^5]: An "orbit" is the shape of a curve in the (frozen) phase space of a steady dynamical system along which a trajectory runs.
[^6]: The case of a closed orbit which is a fixed point is excluded from the discussion here.
[^7]: Unless $u$ vanishes identically, in which case there is nothing to prove.
[^8]: Note that the color map is cut off in the top row of Fig. [2](#Figure 2){reference-type="ref" reference="Figure 2"}. This is because the main emphasis of this figure is the *shape* of the isocontours, not their level values.
---
abstract: |
Recently, Paolini and Shelah have constructed absolutely Hopfian torsion-free abelian groups of any given size. In contrast, we show that this is not necessarily the case for absolutely co-Hopfian groups. We apply infinitary logic to show a dichotomy behavior of co-Hopfian groups, by showing that the first beautiful cardinal is the turning point for this property. Namely, we prove that there are no absolutely co-Hopfian abelian groups above the first beautiful cardinal. An extension of this result to the category of modules over a commutative ring is given.
address:
- Hakimiyeh, Tehran, Iran.
- |
School of Mathematics,\
Institute for Research in Fundamental Sciences (IPM),\
P.O. Box: 19395--5746, Tehran, Iran.
- |
Einstein Institute of Mathematics, The Hebrew University of Jerusalem, 9190401, Jerusalem, Israel; and\
Department of Mathematics, Rutgers University, Piscataway, NJ 08854-8019, USA
author:
- Mohsen Asgharzadeh
- Mohammad Golshani
- Saharon Shelah
date:
- 9-05-2022
title: Expressive Power of Infinitary Logic and Absolute co-Hopfianity
---
Annotated Content
§0 Introduction, (label x).
§0A Background, (label x).
\[In particular see [@Sh:678], [@Sh:880] and [@Sh:948].\]
§0B The present work, (label x).
\[Mostly we can deal with $R$-modules for suitable $R.$\]
§0C Absolutely Hopfian and co-Hopfian, (label x).
\[See [@Sh:1214 Th. 1.3] on the existence of Hopfian groups in every $\lambda.$ This leaves the question on co-Hopfian groups, which is solved in §6.\]
§1 Towards elimination of quantifiers (for $\rm{AB}$), (label a).
\[For the first-order case see W. Szmielew; for references see [@Sh:54]; for infinitary logic, to some extent, see [@Sh:977]. See more in §2. We try to sort out for which formulas $\varphi(\overline{x}_{[\varepsilon]})$ the set $\varphi(M)$ is an $M$-closed subgroup.\]
§2 A frame, (label k).
\[Assume $\mathbf{f}= \{ K, \mathbb{L}, \lambda, \kappa, \theta \}$ is a suitable frame (i.e. as in [Definition 41](#k2){reference-type="ref" reference="k2"}). We show that if $\overline{a} \in {}^{\varepsilon}M$ and $\overline{a}_{\alpha} \in {}^{\varepsilon} M_{*}$ for $\alpha < \kappa,$ then for some $\beta < \gamma < \kappa,$ the tuples $\overline{a}$ and $\overline{a} - \overline{a}_{\beta} + \overline{a}_{\gamma}$ realize the same $\mathbb{L}$-type. The background is the elimination of quantifiers for first order logic for $\mathrm{AB}$ (and for $R$-modules) down to existential positive formulas. But there, if the $\varphi_{i}(\overline{x})$ for $i < n$ define strict subgroups of $\varphi(G),$ then $\bigcup_{i < n} \varphi_{i} (M) \subsetneqq \varphi(M).$ Here the parallel may fail, as $n$ may be infinite.\]
§3 On nice frame existence, (label L).
\[In [@Sh:977] we deal with elimination of quantifiers in $\mathbb{L}_{\infty, \theta}$ up to positive existential formulas, $\underline{but}$ we need to add individual constants. We try to deal with it in a general case and start to see if we can do better. In [Theorem 58](#L7){reference-type="ref" reference="L7"} we prove when there is a nice $\mathbf{f}$ such that $M_{\mathbf{f}} = M.$ We also explain the problem in trying to prove that, absolutely, there is a non-trivial automorphism.\]
§4 $\kappa$-algebraic closure, (label n).
\[We investigate $(< \kappa)$-algebraically closed sets and make some attempts to build endomorphisms.\]
§5 On affine elimination, (label m).
\[We try to replace subgroups by affine subsets, i.e., ones closed under $x - y +z$.
*Question 1*. Do we preserve $\kappa$-well ordering?
Earlier we considered formulas $\varphi(\overline{x}_{[\varepsilon]})$ which define subgroups; here we try affine ones, i.e., formulas defining sets closed under $x - y + z.$ Such formulas appear in universal algebra.
We advance, seemingly finding new cases, but it is not clear whether this is enough to eliminate quantifiers.\]
§6 No co-Hopfian $M$ of cardinality $\geq \kappa_{\rm{beau}},$ (label r).
\[No $G \in \rm{AB}_{\geq \kappa_{\rm{beau}}}$ is co-Hopfian.\]
# Introduction
The original [@Sh:1023] fails (the claim there being that there are absolutely indecomposable abelian groups in every cardinality). By [@Sh:F2035] and [@Sh:1214], in every $\lambda$ there is an absolutely Hopfian $G \in \mathrm{AB}_{\lambda}$ (an earlier version is [@Sh:F1961]). Here we make some attempts at elimination of quantifiers for infinite abelian groups; these are interesting but give no complete solution. We also prove that there is no co-Hopfian $G \in \mathrm{AB}_{\lambda}$ when $\lambda \geq \kappa_{\rm{beau}}.$
In §1, §2, §3, §4 and §5, we try to analyze what formulas can say in abelian structures: specifically, which formulas define subgroups? Which define affine subsets? When do we have $\kappa$-well ordering? We also sort out $(< \kappa)$-algebraic closure. The hope has been that this will help to prove that there is no abelian group $G$ of cardinality $\geq \kappa_{\rm{beau}}$ which absolutely has no non-trivial monomorphism \[and even just none which is absolutely indecomposable\] (where $\kappa_{\rm{beau}}$ is the first beautiful cardinal). We tend to think those questions are interesting *per se*. They may also hint at how to construct counter-examples.
# Introduction
An abelian group $G$ is called *Hopfian* (resp. *co-Hopfian*) if its surjective (resp. injective) endomorphisms are automorphisms. In other words, the co-Hopfian property is, in some sense, dual to the Hopfian property. These groups were first considered by Baer in [@baer], under different names. Hopf [@hopf] himself showed that the fundamental groups of closed two-dimensional orientable surfaces are Hopfian.
There are a lot of interesting research papers in this area; here, we recall only a short list of them. Following the book [@comb], Hopf in 1932 raised the question as to whether a finitely generated group can be isomorphic to a proper factor of itself. For this and more observations, see [@comb]. Beaumont [@B] proved that if $G$ is an abelian group of finite rank all of whose elements have finite order, then $G$ has no proper isomorphic subgroups. Kaplansky [@KAP] extended this to modules over a commutative principal ideal ring $R$ such that every proper residue class ring of $R$ is finite. Beaumont and Pierce [@Be] proved that if $G$ is co-Hopfian, then the torsion part of $G$ is of size at most continuum, and further that $G$ cannot be a $p$-group of size $\aleph_0$. This naturally left open the problem of the existence of co-Hopfian $p$-groups of uncountable size $\leq 2^{\aleph_0}$, which was later solved by Crawley [@crawley], who proved that there exist co-Hopfian $p$-groups of size $2^{\aleph_0}$. One may attempt to construct (co-)Hopfian groups of large size by taking a huge direct sum of (co-)Hopfian groups. In this regard, Baumslag [@ba] asked when the direct sum of two (co-)Hopfian groups is again (co-)Hopfian. Corner [@cor] constructed two torsion-free abelian Hopfian groups which have a non-Hopfian direct sum. See [@GF] for more on this and its connections with the study of algebraic entropy.
Despite its long history, the problem of the existence of uncountable (co-)Hopfian abelian groups was solved only very recently; see [@AGSa] and [@1214].
The usual construction of such groups is not absolute, in the sense that they may lose their property in some generic extension of the universe, and the problem of giving an explicit and constructive construction of such groups has attracted some attention in the literature. For example, while it is known from the work of Shelah [@Sh:44] that there are indecomposable abelian groups of any infinite size, the problem of the existence of arbitrarily large absolutely indecomposable groups is still open; see [@Nad94]. It is worth noticing that indecomposability implies the Hopfian property.
In order to be more explicit, call a group $G$ *absolutely co-Hopfian* if it is co-Hopfian in any further generic extension of the universe. Similarly, one may define *absolutely Hopfian* groups. An open problem in this area was as follows:
**Problem 2**.
- Is it possible to construct absolutely Hopfian torsion-free groups of a given size?
- Is it possible to construct absolutely co-Hopfian torsion-free groups of a given size?
Recently, Paolini and Shelah [@1214 Theorem 1.3] constructed absolutely Hopfian torsion-free groups of any given size $\lambda$, thereby answering Problem [Problem 2](#prob){reference-type="ref" reference="prob"}(i) in the positive. On the one hand, as Hopfian and co-Hopfian groups are dual to each other, one may predict that there is a connection between Problem [Problem 2](#prob){reference-type="ref" reference="prob"} (i) and (ii). But any such dual functor may enlarge or collapse the cardinality of the corresponding groups, hence we cannot use the ideas behind the duality to answer Problem [Problem 2](#prob){reference-type="ref" reference="prob"}(ii). For example, Braun and Strüngmann [@independence_paper] showed that the existence of infinite abelian $p$-groups of size $\aleph_0 < |G| < 2^{\aleph_0}$ of the following types is independent of ZFC:
- both Hopfian and co-Hopfian,
- Hopfian but not co-Hopfian,
- co-Hopfian but not Hopfian.
Also, they proved that the above three types of groups of size $2^{\aleph_0}$ exist in ZFC.
On the other hand, there are some partial positive results in the direction of Problem [Problem 2](#prob){reference-type="ref" reference="prob"} (ii). Here, we recall some of these. In [@Sh:110], Shelah studied and coined the concept of a *beautiful cardinal*, denoted by $\kappa_{\rm{beau}}$, which is a kind of Ramsey cardinal (see Definition [Definition 56](#beau){reference-type="ref" reference="beau"}). This cardinal has an essential role in the study of absolutely co-Hopfian groups. Indeed, according to [@Sh:880], for any infinite cardinal $\lambda < \kappa_{\rm{beau}}$, there is an absolutely endo-rigid abelian group, but if $\vert G \vert \geq \kappa_{\rm{beau}}$, then by [@Sh:678] it has non-trivial monomorphisms. Let us state the remaining part of Problem [Problem 2](#prob){reference-type="ref" reference="prob"}:
**Problem 3**. Is it possible to construct absolutely co-Hopfian torsion-free groups of size $\geq \kappa_{\rm{beau}}$?
Of course, if $G = G_{1} \oplus G_{2},$ we can define an automorphism $\pi$ of $G$ such that $x \in G_{1} \Rightarrow \pi(x) = x$ and $x \in G_{2} \Rightarrow \pi(x) = -x$, and so, except in degenerate cases, $\pi$ is not trivial.
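To make this concrete, here is a toy sketch (ours, purely illustrative) of the automorphism $\pi$ on the direct sum $\mathbb{Z}\oplus\mathbb{Z}$, with elements represented as pairs of integers.

```python
# Toy illustration (ours) of the automorphism pi on G = G1 (+) G2 with G1 = G2 = Z:
# identity on the first summand, negation on the second.
def pi(g):
    g1, g2 = g
    return (g1, -g2)

def add(g, h):
    return (g[0] + h[0], g[1] + h[1])

a, b = (1, 2), (4, -3)
assert pi(add(a, b)) == add(pi(a), pi(b))   # pi is additive
assert pi(pi(a)) == a                       # pi is its own inverse, hence bijective
assert pi((2, 7)) != (2, 7)                 # and pi is not the identity
```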
It is very fitting that this work is dedicated to Laszlo: he has been the father of modern abelian group theory; his book [@Fuc73] made me, in $1973,$ start to work on abelian groups (in [@Sh:44]); this work was motivated by thinking of a paper suitable to be contributed to a volume in his honour; and, last but not least, the problem of the existence of indecomposable and endo-rigid abelian groups in every cardinality was the first I started to work on when reading his book. Meanwhile Fuchs [@Fuc74] succeeded in proving the existence of indecomposable abelian groups up to the first measurable cardinal.
The question was solved by the author ([@Sh:44]); see also Göbel-Trlifaj [@GbTl12] on the subject. But the proof was less explicit than the one in [@Fuc73]: it used stationary subsets of regular uncountable cardinals. We may wonder: is this non-effectiveness necessary? How can we phrase this as an explicit problem? Moreover, we call a family $\mathbf{G}$ of abelian groups endo-rigid when: if $G_{1}, G_{2} \in \mathbf{G}$ and $h \in \mathrm{Hom}(G_{1}, G_{2})$ is non-zero, then $G_{1} = G_{2}$ and $h$ is multiplication by an integer. In fact the proof in [@Fuc73] is by building, by induction on $\lambda$, such a family $\mathbf{G}_{\lambda}$ of $2^{\lambda}$ abelian groups, each of cardinality $\lambda.$
We may look at model theory essentially replacing "isomorphic" by "almost isomorphic", that is isomorphisms by potential isomorphisms, i.e. isomorphism in some forcing extension (= generic extension). In [@Sh:12] we have suggested reconsidering (in this direction) a major theme in model theory, that of counting the number of isomorphism types. Recall that $M, N$ are almost-isomorphic [iff]{.ul} $M, N$ have (the same vocabulary and) the same $\mathbb{L}_{\infty, \aleph_{0}}$-theory, equivalently are isomorphic in some forcing extension, [iff]{.ul} they are not absolutely non-isomorphic. For a theory $T$ let $\mathcal{I}_{\infty, \aleph_{0}}(\lambda, T)$ be the number of models of $T$ of cardinality $\lambda$ up to almost isomorphism, i.e. $\vert \{ M/ \equiv_{\mathbb{L}_{\infty, \aleph_{0}}}: M \ \text{a model of} \ T \ \text{of cardinality} \ \lambda \} \vert.$ This behaves nicely ([@Sh:12]): if $T$ has cardinality $\leq \lambda,$ is first order or is just $\subseteq \mathbb{L}_{\lambda^{+}, \aleph_{0}}$ and of cardinality $\leq \lambda$ then $$\mathcal{I}_{\infty, \aleph_{0}}(\lambda, T) \leq \lambda < \mu \Rightarrow \mathcal{I}_{\infty, \aleph_{0}}(\mu, T) \leq \mathcal{I}_{\infty, \aleph_{0}}(\lambda, T),$$ (recently, on $\mathcal{I}_{\infty, \aleph_{0}}(-, T)$ for $\aleph_{0}$-stable $T,$ see the work of Laskowski-Shelah [@Sh:1016]). In [@Sh:12] we also define when $M$ is ai-rigid, i.e. $$a \neq b \in M \Rightarrow (M, a) \not \equiv_{\mathbb{L}_{\infty, \aleph_{0}}}(M, b)$$ and prove a downward LST theorem for it. Generally on almost isomorphism and $\mathbb{L}_{\infty, \aleph_{0}}$ see Barwise [@Bar73].
Later, Nadel [@Nad94] asked more specifically about the number of torsion free abelian groups up to being almost isomorphic. He further suggested considering homomorphisms, in particular for abelian groups; that is, maybe we cannot find absolutely rigid abelian groups of arbitrarily large cardinality. In fact, Nadel's approach was to look at old constructions: he pointed out that the original constructions of Fuchs in [@Fuc73] were absolute, while the ones in [@Fuc74], [@Sh:44] were not. The construction of Fuchs in [@Fuc74] used infinite products (which are explicit but not absolute), and [@Sh:44] used stationary sets.
For "endo-rigid" the answer is that we cannot construct when some specific mild large cardinal exists by Eklof-Shelah [@Sh:678], see Eklof-Mekler [@EM02 Ch.IV, CCC3, pg487], i.e. $\lambda = \kappa(\omega)$ the first $\omega$-Erdos cardinal. Moreover, if $\lambda \geq \kappa(\omega)$ then for every sequence $\langle G_{\alpha}: \alpha < \lambda \rangle$ for some $\alpha < \beta < \lambda,$ in some $\mathbf{V}^{\mathbb{P}}, G_{\alpha}$ is embeddable into $G_{\beta};$ moreover if $x_{\gamma} \in G_{\gamma}$ for $\gamma < \lambda$ then for some $\alpha < \beta < \lambda,$ in some $\mathbf{V}^{\mathbb{P}}$ there is an embedding of $G_{\alpha}$ into $G_{\beta}$ mapping $x_{\alpha}$ to $x_{\beta},$ (so $\forall \alpha (G_{\alpha} = )$ is allowed) This explains why [@Fuc73] gets only indecomposable abelian groups (not endo-rigid).
It was claimed there ([@Sh:678], [@EM02]) that for every $\lambda$ there are absolutely indecomposable abelian groups, but the proof was withdrawn.
The problem left open was solved by Göbel-Shelah [@Sh:880]: if $\lambda < \kappa(\omega)$ then there are absolutely endo-rigid abelian groups; we can use a free abelian group $G$ with distinguished pure subgroups $G_{n}$ $(n \in \mathbb{N});$ the motivation was that, by [@FuGb08], this implies considerably more (e.g. using $4$ or $5$ primes only). That is, by [@Sh:880], for such $\lambda$, there is a family of $2^{\lambda}$ abelian groups of cardinality $\lambda$ which is absolutely endo-rigid. It is explicitly pointed out there that this gives an absolutely indecomposable abelian group in any such cardinality.
All this still left open the question about the existence of indecomposable ones (in every cardinality); we have made several wrong tries.
Another interpretation of "more explicit construction" is "provable without the axiom of choice". We may also ask for more: no epimorphism (for monomorphisms we cannot). Also, there are many works on such problems for $R$-modules, and we may wonder about the situation for $R$-modules. We may take another approach: building on sets which are not well orderable; on this see [@Sh:1005 §(3B)].
## The present work {#0B}
We recall that:
(A) 1. By [@Sh:44] there is $G \in \mathop{\mathrm{TFAB}}_{\lambda}$ which has only the trivial endomorphisms $x \mapsto nx \ (n \in \mathbb{Z}),$ and we can add $\forall p (pG \neq G);$ hence $G$ is:
2. Hopfian; see [@Sh:F2035 §6] on the non-existence of co-Hopfian groups,
3. almost co-Hopfian as in [@Sh:1205],
(B) 1. by [@Sh:678], there is no absolutely endo-rigid group,
2. by [@Sh:880], if $\lambda < \kappa_{\rm{beau}}$ we can add: some $G \in \mathop{\mathrm{TFAB}}_{\lambda}$ is absolutely endo-rigid (see also [@Sh:591]), that is, it is endo-rigid even after Levy collapsing $\vert G \vert$ to $\aleph_{0};$ see more in [@Sh:948],
3. By Paolini-Shelah [@Sh:1214 Th. 1.3] there is an absolutely Hopfian $G \in \mathop{\mathrm{TFAB}}_{\lambda},$
(C) for groups, we think this has been done elsewhere; see [@Sh:742], [@Sh:739],
(D) in (B), the assumption $\lambda < \kappa_{\rm{beau}}$ is necessary; see [@Sh:678]. On clauses (B), (D), see [@Sh:110],
(E) [@Sh:977] points out the problem; see §5, but in the setting of $\mathrm{Affine}_{1}$. See more in [@Sh:110].
## Absolutely Hopfian and co-Hopfian {#0C}
Recently by Paolini-Shelah [@Sh:1214] we know:
1. for every $\lambda$ there is an absolutely Hopfian abelian group of cardinality $\lambda.$
*Question 4*. What about absolutely co-Hopfian $G \in \mathrm{AB}_{\lambda}?$ For $\lambda \geq \kappa_{\rm{beau}},$ we shall solve this in $\S6.$
The Hopfian and co-Hopfian properties can easily be extended to the context of modules over commutative rings and even to the context of sheaves over schemes. However, compared to the case of abelian groups, and to the best of our knowledge, there are very few results for modules. For example, a nice result of Vasconcelos [@vo 1.2] says that any surjective endomorphism $f:M\to M$ is an isomorphism, where $M$ is a noetherian module over a commutative and noetherian ring $R$. As a geometric example, let $X$ be an algebraic variety over an algebraically closed field. If a morphism $f : X\to X$ is injective, then a result of Ax and Grothendieck indicates that $f$ is bijective; see Jean-Pierre Serre's exposition [@ser].
In contrast to the case of Hopfian groups, we prove the following dichotomy property of co-Hopfian groups. Indeed, we prove a more general result in the context of additive $\tau$-models (see Definition [Definition 8](#a2){reference-type="ref" reference="a2"}), which includes in particular the cases of abelian groups and of $R$-modules for an arbitrary commutative ring $R$:
**Theorem 5**. *The following assertions are valid:*
1. *If $M$ is an abelian group of cardinality greater than or equal to $\kappa := \kappa_{\rm{beau}}$, then $M$ is not absolutely co-Hopfian; indeed, after collapsing the size of $M$ to $\aleph_0$, there is a one-to-one endomorphism $\varphi \in \rm{End}(M)$ which is not onto.*
2. *If $M$ is an $R$-module of cardinality greater than or equal to $\kappa=\kappa_{\rm{beau}}(R, \aleph_0)$, then $M$ is not absolutely co-Hopfian.*
3. *If $M$ is an additive $\tau$-model of cardinality greater than or equal to $\kappa=\kappa_{\rm{beau}}(\tau)$, then $M$ is not absolutely co-Hopfian.*
Putting things together, we conclude that for every infinite cardinal $\lambda,$ there is an absolutely co-Hopfian abelian group of size $\lambda$ if and only if $\lambda$ is less than the first beautiful cardinal.
The organization of this paper is as follows.
Szmielew [@sz] developed the first order theory of abelian groups; see also [@Sh:54] for further discussion. In Section 2 we review infinitary languages and develop some parts of abelian group theory in this context. For this, we use the concept of $\theta$-models from [@Sh:977]. We also introduce some sub-languages of the infinitary languages, which play a role in our later investigation.
For Section 3, let us fix a pair $(\lambda, \theta)$ of regular cardinals and let $\kappa$ be as in Theorem [Theorem 5](#1.2){reference-type="ref" reference="1.2"}. The new object studied here is called the general frame, and its enrichment the *additive frame*. Such a frame is of the form $$\mathbf{f}:=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega),$$ where $M$ comes from Theorem [Theorem 5](#1.2){reference-type="ref" reference="1.2"}, being an additive $\tau_{M}$-model of cardinality $\geq \kappa$, and $\mathscr{L}$ is a class of certain formulas in the vocabulary $\tau_{M}$. For more details, see Definition [Definition 41](#k2){reference-type="ref" reference="k2"}. The main result of Section 3 is Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}. This gives us an additive frame $\mathbf{f}$.
Section 4 is about the concept of *algebraic closure* in a frame (see Definition [Definition 66](#n8){reference-type="ref" reference="n8"}). This enables us to improve Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}, which is needed in the sequel.
In Section 5 we put all things together and present the proof of Theorem [Theorem 5](#1.2){reference-type="ref" reference="1.2"}. Let $\chi:=|M|\geq\kappa$, and let $\mathbb{P}:=\rm{Col}(\aleph_{0},\chi)$. Forcing with $\mathbb{P}$ enables us to collapse $|M|$ into $\aleph_0$, i.e., for any $\mathbb{P}$-generic filter $G_{\mathbb{P}}$ over $V$, we have
$V[G_{\mathbb{P}}]\models$"$M$ is countable".
We show that, in $V[G_{\mathbb{P}}]$, there exists a one-to-one endomorphism $\pi:M\to M$ which is not surjective.
Finally, as another application of infinitary logic, we present a new construction of Hopfian groups.
We close the introduction by noting that all groups (resp. rings) are abelian (resp. commutative) unless otherwise specified, and our notation is standard and follows that in Fuchs' books [@Fuc73] and [@fuchs] and in Eklof-Mekler [@EM02].
# Infinitary languages {#2}
In the first subsection, we briefly review infinitary languages and the concept of additive $\theta$-models, introduced in [@Sh:977]. In the second subsection, we present basic properties of affine subsets, i.e., ones closed under $x - y +z$. The main result is Proposition [Proposition 27](#m8){reference-type="ref" reference="m8"}.
## A review of infinitary languages
In this subsection we briefly review the infinitary logic, and refer to [@Bar73] and [@Sh:977] for more information.
We may wonder which sets $\mathscr{L}$ of formulas are relevant for $M \in \mathrm{AB}$, or just for additive structures.
From [Fact 32](#a11){reference-type="ref" reference="a11"} on, we treat general $\mathscr{L}\subseteq \mathbb{L}_{\infty, \infty}$, not just existential positive conjunctive formulas as in [@Sh:977]. Earlier ([Definition 8](#a2){reference-type="ref" reference="a2"}) we define additive $\theta$-structures, phrase the relevant statements (in [Lemma 20](#a5){reference-type="ref" reference="a5"}), and discuss various things including relevant second order quantifiers (see [Definition 31](#a9){reference-type="ref" reference="a9"}); in [Fact 32](#a11){reference-type="ref" reference="a11"} we formalize the second order version.
**Convention 6**. Given a model $M$, by $\tau_M$ we mean the language (or vocabulary) of the model $M$.
*Notation 7*.
1. By $\mathrm{AB}$ we mean the class of abelian groups.
2. Given a vocabulary $\tau$ which contains two place functions $+, -$, we define the affine operation $\mathrm{Affine}( {x}, {y} , {z})$ as the three place function $\mathrm{Affine}( {x}, {y} , {z}):= {x} - {y} + {z}$.
**Definition 8**. Let $M$ be a model of vocabulary $\tau_M$.
1. We say $M$ is an *additive $\theta$-model* when:
(a) the two place function symbols $+, -$ and the constant symbol $0$ belong to $\tau_{M},$
(b) $G_{M} = (\vert M \vert, +^{M}, -^{M}, 0^{M})\in \mathrm{AB}$,
(c) $R^{M}$ is a subgroup of ${}^{n}(G_{M})$, for any predicate symbol $R \in \tau_M$ with $\rm{arity(R)} = n$,
(d) $F^{M}$ is a homomorphism from ${}^{n}(G_{M})$ into $G_{M}$, for any function symbol $F \in \tau_M$ with arity $n,$
(e) $\tau_{M}$ has cardinality $\leq \theta.$
2. For an additive $\theta$-model $M$, we say $X \subseteq {M}$ is *affine* if $X$ is closed under the affine operation $\mathrm{Affine}( {x}, {y} , {z})$. In other words, ${a} - {b} + {c} \in X$ provided that ${a}, {b},{c} \in X$.
3. We say $M$ is an affine $\theta$-model provided:
(a) we do not necessarily have $+, -,0$ in the vocabulary, but only the three place function $\mathrm{Affine}(x,y,z)$,
(b) if $R\in\tau_M$ is an $n$-place predicate and $\overline{a}_{l}= \langle a_{l, i}: i<n \rangle\in R^M$ for $l=0,1,2$ and $$\overline{b}:=\mathrm{Affine}(\overline{a}_0, \overline{a}_1, \overline{a}_2)=\big\langle \mathrm{Affine}(a_{0, i},a_{1, i},a_{2, i}): i<n\big\rangle,$$ then $\overline{b}\in R^M$,
(c) for any $n$-place function symbol $F \in \tau_M$ and $\overline{a}_{l}= \langle a_{l, i}: i<n \rangle\in {}^{n}M$, for $l=0,1,2$, we have $$F^M(\mathrm{Affine}(\overline{a}_0, \overline{a}_1, \overline{a}_2)) = \mathrm{Affine}(F^M(\overline{a}_0), F^M(\overline{a}_1), F^M(\overline{a}_2)),$$
(d) $\tau_{M}$ has cardinality $\leq \theta.$
4. Suppose $M$ is an affine $\theta$-model. We say $M$ is *truly affine* provided that for some fixed $a\in M$, if we use the following interpretation
- $x+y :=\mathrm{Affine}(x,a,y)= x-a+y,$
- $x-y := \mathrm{Affine}(x,y,a)= x-y+a,$
- $0:=a$,
then we get an abelian group, and hence an additive $\theta$-model.
We may omit $\theta$ if it is clear from the context.
*Remark 9*. i) A natural question arises: Is every affine $\theta$-model truly affine? This does not necessarily hold; see Example [Example 10](#ntrueaf){reference-type="ref" reference="ntrueaf"} below.
ii\) More generally, we can replace $\{+,-,\mathrm{Affine}\}$ by a set $\tau_f$ of beautiful functions from [@61]. The corresponding result holds in this frame.
**Example 10**. Let $G$ be an abelian group, $H$ be a proper subgroup of it and $a \in G\setminus H.$ Define $M$ as follows:
- the universe of $M$ is $a+H,$
- $\tau_{M} := \{+, -, \mathrm{Affine}\}$,
- $+^M$ and $-^M$ are $+^G \restriction M$ and $-^G \restriction M$ respectively,
- $\mathrm{Affine}^{M}:= \mathrm{Affine}^G \restriction M$, where $\mathrm{Affine}^G(x,y,z)= x-y+z$ for $x, y, z \in G$.
Then the following two assertions hold:
- $M$ is an affine $\aleph_0$-model, isomorphic to $H$.
- $M$ is not an abelian group.
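A small finite instance of this phenomenon (ours, not part of the example above) can be checked by machine: take $G=\mathbb{Z}/8\mathbb{Z}$, $H=2G$ and $a=1$, so that the universe of $M$ is the coset of odd residues; it is closed under $x-y+z$ but not under $+$.

```python
# Finite check (ours) of the phenomenon in Example 10: a coset a + H is closed
# under Affine(x, y, z) = x - y + z but is not a subgroup.
n = 8
H = {x for x in range(n) if x % 2 == 0}     # the subgroup 2G of G = Z/8Z
M = {(1 + h) % n for h in H}                # the coset a + H with a = 1

affine_closed = all((x - y + z) % n in M for x in M for y in M for z in M)
plus_closed = all((x + y) % n in M for x in M for y in M)
print(affine_closed, plus_closed)           # True False
```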
**Definition 11**. (1) We say a class $K$ of models is an *additive $\theta$-class*, when $M$ is an additive $\theta$-model for all $M \in K$, and $$\tau_{M} = \tau_{N}\quad \forall M,N \in K.$$ We denote the resulting common language by $\tau_K.$
\(2\) Similarly, one can define affine $\theta$-classes.
**Hypothesis 12**. Let $\Omega$ be a set of cardinals such that $1\in\Omega$ and all members of $\Omega\setminus \{1\}$ are infinite cardinals.
*Notation 13*.
1. By $\overline{x}_{[u]}$ or $\overline{x}_{u}$ we mean $\langle x_{\alpha}: \alpha \in u \rangle.$ So, with no repetition.
2. Suppose $\varphi(\overline{x}_{u})$ is a formula. By $\varphi(M)$ we mean $\{ \overline{a}\in {}^{u}M: M \models \varphi[\overline{a}] \}.$
3. For a formula $\varphi(\overline{x}_{[u]}, \overline{y}_{[v]})$ and $\overline{b}\in {}^{v}M,$ we let $$\varphi(M, \overline{b}) := \big\{ \overline{a}\in {}^{u}M: M \models \varphi[\overline{a}, \overline{b}] \big\}.$$
4. Given a sequence $t$, by $\rm{lg}(t)$ we mean the length of $t$.
**Definition 14**. Suppose $\kappa$ and $\mu$ are infinite cardinals, which we allow to be $\infty$. The infinitary language $\mathcal{L}_{\mu, \kappa}(\tau)$ is defined so that its vocabulary is $\tau$ and it has the same terms and atomic formulas as first order logic over $\tau,$ but we also allow conjunctions and disjunctions of length less than $\mu$, i.e., if $\phi_j,$ for $j<\beta < \mu,$ are formulas, then so are $\bigvee_{j<\beta}\phi_j$ and $\bigwedge_{j<\beta}\phi_j$. Also, we allow quantification over less than $\kappa$ many variables (i.e., if $\phi=\phi((v_i)_{i<\alpha})$, where $\alpha < \kappa$, is a formula, then so are $\forall_{i<\alpha}v_i \phi$ and $\exists_{i<\alpha}v_i\phi$).
Note that $\mathcal{L}_{\omega, \omega}(\tau)$ is just the first order logic with vocabulary $\tau.$ Given $\kappa$, $\mu$ and $\tau$ as above, we are sometimes interested in some special formulas from $\mathcal{L}_{\mu, \kappa}(\tau)$.
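To illustrate the extra expressive power (a standard example, added here for the reader's convenience and not part of the main development), in the vocabulary of abelian groups the property of being a torsion group is expressible in $\mathcal{L}_{\omega_1, \omega}(\tau)$, although, by compactness, it is not expressible in $\mathcal{L}_{\omega, \omega}(\tau)$: $$\forall x \bigvee_{0<n<\omega} \ \underbrace{x + \dots + x}_{n \ \text{times}} = 0.$$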
**Definition 15**. Suppose $\kappa$ and $\lambda$ are infinite cardinals or possibly $\infty$. We define the logic $\mathscr{L}_{\lambda, \kappa,\Omega}$ as follows:
1. For a vocabulary $\tau$, the language $\mathscr{L}_{\lambda, \kappa,\Omega}(\tau)$ is defined as the set of formulas with $<\theta$ free variables (without loss of generality they are subsets of $\{x_{\zeta}:\zeta<\theta\}$, see Discussion [Discussion 16](#dis1){reference-type="ref" reference="dis1"}) which is the closure of the set of basic formulas , i.e., atomic and the negation of atomic formulas, under:
- conjunction of $<\lambda$ formulas,
- disjunction of $<\lambda$ formulas,
- For any $\sigma\in\Omega$,
- $\varphi(\overline{x} ):=(\exists^{\sigma}\overline{x}')\psi(\overline{x},\overline{x}')$, or
- $\varphi(\overline{x} ):=(\forall^{\sigma}\overline{x}')\psi(\overline{x},\overline{x}')$,
where $\psi(\overline{x},\overline{x}')$ is a formula. We usually omit $\sigma$, if $\sigma=1$.
We usually omit $\Omega$ if it is clear from the context.
2. Satisfaction is defined as usual, where for the formulas $\varphi(\overline{x} ):=(\exists^{\sigma}\overline{x}')\psi(\overline{x},\overline{x}')$ and $\varphi(\overline{x} ):=(\forall^{\sigma}\overline{x}')\psi(\overline{x},\overline{x}')$, it is defined as:
- If $\varphi(\overline{x}):=(\exists^{\sigma}\overline{x}')\psi(\overline{x},\overline{x}')$, $M$ is a $\tau$-model, and $\overline{a}\in {}^{\lg(\overline{x})}M$, then $$M\models\varphi(\overline{a} )$$ if and only if there are $\overline{b}_\varepsilon\in {}^{\lg(\overline{x}')}M$ for all $\varepsilon<\sigma$ pairwise distinct such that $M\models \psi(\overline{a},\overline{b}_{\varepsilon})$ for all $\varepsilon<\sigma$.
- If $\varphi(\overline{x} ):=(\forall^{\sigma}\overline{x}')\psi(\overline{x},\overline{x}')$, then $$M\models\varphi(\overline{x} ) \iff M\models \neg\big[\exists^{\sigma}\overline{x}' \neg\big(\psi(\overline{x},\overline{x}')\big)\big].$$ Note that $\neg(\psi(\overline{x},\overline{x}'))$ is not necessarily in $\mathscr{L}_{\lambda, \kappa,\Omega}(\tau)$.
**Discussion 16**. Given a formula $\varphi$ in $\mathcal{L}_{\infty, \theta}(\tau)$ with free variables $\overline{x}_{\varphi},$ we can always assume that $\overline{x}_{\varphi}=\langle x_\zeta:\zeta\in u_\varphi\rangle,$ for some $u_\varphi\in[\theta]^{<\theta}$. The key point is that if $\varphi=\varphi(\overline{x}),$ where $\overline{x}= \langle x_\zeta:\zeta\in w \rangle,$ where $w$ is a set of ordinals of size less than $\theta$, and if $f: w \leftrightarrow u$ is a bijection where $u \in [\theta]^{<\theta}$, and $$\psi(\overline{x}) \equiv \mathrm{Sub}_f^{\overline{x}}(\varphi),$$ where $\mathrm{Sub}_f^{\overline{x}}(\varphi)$ is essentially the formula obtained from $\varphi$ by replacing the variable $x_\zeta$ by $x_{f(\zeta)}$, then if $\bar a \in$$^{w}M$, $\bar b \in$$^{u}M$ and $a_\zeta = b_{f(\zeta)}$, for $\zeta \in w$, then $$M \models \varphi(\bar a) \iff M \models \psi(\bar b).$$ We can similarly assume that all bounded variables are from $\{x_i: i<\theta \}$. In some sense a formula $\varphi$ has free variables $\overline{x}_{\varphi}:=\langle x_\zeta:\zeta\in u\rangle,$ where $u\in[\theta]^{<\theta}$. Also, we allow free $\mathrm{Sub}_g^{\overline{x}_{u}}(\varphi),$ where $g:u\to\mathrm{Ord}$ is one-to-one.
More explicitly:
1. Basic formulas are of the form $P(\ldots \sigma_i(\overline{x}),\ldots)_{i<\mathrm{arity}(P)}$ or $\neg P(\ldots \sigma_i(\overline{x}),\ldots)_{i<\mathrm{arity}(P)}$, where $P$ is a predicate of $\tau$ and $\overline{x}_{\varphi}:=\overline{x}_{u_{\varphi}}:=\langle x_\zeta:\zeta\in u\rangle,$ for some $u\in[\theta]^{<\theta}$.
2. $\varphi(\overline{x}_{u})=(\exists^{\sigma} \overline{x}_v)\varphi^1_s(\overline{x}_{u,v})$ where $u,v$ are disjoint subsets of $\theta$.
3. $\varphi(\overline{x}_{u})=(\forall^{\sigma} \overline{x}_v)\varphi^1_s(\overline{x}_{u,v})$ where $u,v$ are disjoint subsets of $\theta$.
4. Conjunctions: essentially $(\exists^{\sigma} \overline{x}_w)\bigwedge_{i \in I}\varphi_i(\overline{x}_{u},\overline{x}_{w})$ where $u\subseteq \theta$, $w\subseteq\mathrm{Ord}$ and $u\cap w=\emptyset$; pedantically $\varphi_i(\overline{x}_{u},\overline{x}_{u_i})$ where
1. $u_i\in[\theta]^{<\theta}$
2. $w\subseteq\mathrm{Ord}$
3. $w_i\subseteq w$ of cardinality equal to $|u_i|$
4. $f_i:w_i\rightarrowtail u_i$ is one-to-one, and
5. $\varphi(\overline{x}_{u})= \exists \overline{x}_{w} \bigwedge_{i \in I}\mathrm{sub}_{f_i}^{\overline{x}_{u_i}}\varphi_i (\overline{x}_{u},\overline{x}_{u_i})$.
**Convention 17**. In what follows, saying closed under $\exists$ (resp. $\forall$) means under all $\exists^{\sigma}$ (resp. $\forall^{\sigma}$).
In the next definition, we consider some classes of infinitary formulas that we will work with later.
**Definition 18**. Suppose $\theta$ is an infinite cardinal, or $\infty,$ and suppose $\tau$ is a language. Here, we collect some infinitary sub-classes of the language $\mathscr{L}_{\infty, \theta}(\tau)$:
1. $\mathscr{L}_{\infty, \theta}^{\rm{cop}}(\tau)$ is the class of conjunction-positive formulas, i.e., the closure of atomic formulas under $\bigwedge, \exists, \forall$.
2. $\mathscr{L}_{\infty, \theta}^{\rm{cpe}}(\tau)$ is the class of conjunction-positive existential formulas, i.e., the closure of atomic formulas under $\bigwedge$ and $\exists$.
3. $\mathscr{L}_{\infty, \theta}^{\rm{co}}(\tau)$ is the closure of atomic formulas and $x_i \neq x_j$ under $\bigwedge$, $\exists$ and $\forall$.
4. $\mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau)$ is the closure of atomic formulas and $x_i \neq x_j$ under $\bigwedge$ and $\exists$.
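For orientation, here are two illustrative formulas (ours, not from the main text) in the vocabulary of abelian groups, for a positive integer $p$: $$\varphi_p(x):= \exists y\, \big(x = \underbrace{y + \dots + y}_{p \ \text{times}}\big) \in \mathscr{L}_{\infty, \theta}^{\rm{cpe}}(\tau), \qquad \psi(x_0, x_1):= x_0 \neq x_1 \wedge \exists y\, (x_0 - x_1 = y + y) \in \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau).$$ Note that $\varphi_p(M)=pM$ is a subgroup of $M$, while $\psi(M)$ is not a subgroup of ${}^{2}M$ (it omits $(0,0)$), in line with Lemma [Lemma 20](#a5){reference-type="ref" reference="a5"} below.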
We shall use freely the following simple fact.
**Fact 19**. $\mathscr{L}_{\infty, \theta}^{\rm{co}}(\tau) \supseteq \mathscr{L}_{\infty, \theta}^{\rm{cop}}(\tau) \cup \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau) \supseteq \mathscr{L}_{\infty, \theta}^{\rm{cop}}(\tau) \cap \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau) \supseteq \mathscr{L}_{\infty, \theta}^{\rm{cpe}}(\tau)$.
The following lemma is easy to prove.
**Lemma 20**. *Assume $M$ is an additive $\theta$-model, $\tau = \tau_{M}$ and $\varphi(\overline{x}_u) \in \mathscr{L}_{\infty, \infty}(\tau)$ with $\varepsilon=\rm{lg}(\overline{x})$. The following assertions are valid:*
1. *If $\varphi(\overline{x}_u) \in \mathscr{L}_{\infty, \infty}^{\rm{cop}}(\tau)$, then $\varphi(M)$ is a sub-group of ${}^{u}M.$*
2. *If $\varphi(\overline{x}_u) \in \mathscr{L}^{\rm{cpe}}_{\infty, \infty}(\tau)$, $f \in \rm{End}(M)$ and $M \models \varphi[\overline{a}]$, then $M \models \varphi[f(\overline{a})].$*
3. *If $\varphi(\overline{x}_u) \in \mathscr{L}^{\rm{cpe}}_{\infty, \theta}(\tau)$, $M, N$ are $\tau$-models and $f: M \to N$ is a homomorphism, then $f$ maps $\varphi(M)$ into $\varphi(N).$*
4. *If $\varphi(\overline{x}_u) \in \mathscr{L}^{\rm{ce}}_{\infty, \theta}(\tau)$, $M, N$ are $\tau$-models and $f: M \to N$ is a 1-1 homomorphism, then $f$ maps $\varphi(M)$ into $\varphi(N).$*
5. *Assume $\varphi(\overline{x}_{[u]}) \in \mathscr{L}_{\infty, \infty}(\tau)$ is positive (= closure of atomic under $\{ \wedge, \vee, \exists, \forall \}$). [Then]{.ul}:*
1. *If $G$ is an abelian group, $f$ is an epimorphism and $G \models \varphi[\overline{a}_{[u]}]$ [then]{.ul} $G \models \varphi[f(\overline{a})].$*
6. *If $\varphi(\overline{x}_u) \in \mathscr{L}^{\rm{co}}_{\infty, \theta}(\tau)$, $M, N$ are $\tau$-models and $f: M \to N$ is a bijection, then $f$ is an isomorphism from $\varphi(M)$ onto $\varphi(N).$*
7. *Assume $\psi(\bar y)$ is obtained from $\varphi(\bar x)$ by adding dummy variables, permuting the variables and substitution not identifying variables. Then $$\psi(\bar y) \in \mathscr{L}^{\ast}_{\infty, \theta}(\tau) \iff \varphi(\bar x) \in \mathscr{L}^{\ast}_{\infty, \theta}(\tau),$$ where $\ast \in \{\rm{cop}, \rm{cpe}, \rm{ce}, \rm{co} \}$.*
*Proof.* The proof is by induction on the complexity of the formulas. For completeness, we sketch the proof. If the formula is an atomic formula, then it is evident that all of the above items are satisfied. It is also easy to see that each item is preserved under $\bigwedge$, in the sense that if $\psi= \bigwedge_{i \in I}\varphi_i$ is well-defined and the lemma holds for each $\varphi_i$, then it holds for $\psi$.
We now consider the case where $\psi(\bar x)= (\exists^\sigma \bar y)\varphi(\bar x, \bar y)$, and assume the induction hypothesis holds for $\varphi.$ We consider each clause separately, assuming in each case, the formula $\varphi$ is in the assumed language.
1. **Clause (1)**: Suppose $\varphi(M)$ is a subgroup of ${}^{\rm{lg}(\bar x)+\rm{lg}(\bar y)}M$. We show that $\psi(M)$ is a subgroup of ${}^{\rm{lg}(\bar x)}M$. To see this, let $\bar a_0, \bar a_1 \in \psi(M)$. Then for some $\bar b_0$ and $\bar b_1$ we have $$M \models `` \varphi[\bar a_0, \bar b_0]\emph{ and }\varphi[\bar a_1, \bar b_1]''.$$ By the induction hypothesis, $M \models \varphi[\bar a_0 - \bar a_1, \bar b_0 - \bar b_1]$, hence $$M \models \psi[\bar a_0-\bar a_1].$$ Thus $\bar a_0-\bar a_1 \in \psi(M)$.
2. **Clause (2)**: Suppose $M \models \psi[\bar a]$. Then for some $\bar b$, we have $M \models \varphi[\bar a, \bar b]$. By the induction hypothesis, $M \models \varphi[f(\bar a), f(\bar b)]$, and hence $M \models \psi[f(\bar a)]$, as requested.
3. **Clause (3)**: As in clause (2), we can show that if $M \models \psi[\bar a]$, then $N \models \psi[f(\bar a)]$, and this gives the required result.
4. **Clause (4)**: As in clause (3). The assumption of $f$ being 1-1 is used to show that if $x_i \neq x_j$, then $f(x_i) \neq f(x_j)$.
5. **Clause (5)**: As in clause (4).
6. **Clause (6)**: This is easy.
Finally, suppose that $\psi(\bar x)= (\forall^\sigma \bar y)\varphi(\bar x, \bar y)$, and assume the induction hypothesis holds for $\varphi$. We only have to consider items (1), (5) and (6).
1. **Clause (1)**: Suppose $\varphi(M)$ is a subgroup of ${}^{\rm{lg}(\bar x)+\rm{lg}(\bar y)}M$. We show that $\psi(M)$ is a subgroup of ${}^{\rm{lg}(\bar x)}M$. To see this, let $\bar a_0, \bar a_1 \in \psi(M)$. We have to show that $\bar a_0-\bar a_1 \in \psi(M)$. Thus let $\bar b \in {}^{\rm{lg}(\bar y)}M$. By the induction hypothesis, $$M \models `` \varphi[\bar a_0, \bar b]\emph{ and }\varphi[\bar a_1, \bar 0]''.$$ Thanks to induction, $M \models \varphi[\bar a_0 - \bar a_1, \bar b - \bar 0]$. As this holds for all $\bar b$, we have $$M \models \psi[\bar a_0-\bar a_1].$$ Thus $\bar a_0-\bar a_1 \in \psi(M)$, as requested.
2. **Clause (5)**: As before, we can easily show that $f$ maps $\psi(M)$ into $\psi(N).$ To see it is onto, let $\bar c \in \psi(N)$. Then $N \models \psi(\bar c)$. As $f$ is onto, for some $\bar a$ we have $\bar c=f(\bar a)$. We have to show that $\bar a \in \psi(M)$. Thus let $\bar b \in {}^{\rm{lg}(\bar y)}M$. Then $\bar d=f(\bar b) \in {}^{\rm{lg}(\bar y)}N$, and by our assumption, $N \models \varphi(\bar c, \bar d)$. As $f$ is an isomorphism, $M \models \varphi(\bar a, \bar b)$. As $\bar b$ was arbitrary, $M \models \psi(\bar a)$, i.e., $\bar a \in \psi(M)$.
3. **Clause (6)**: This is easy.
The lemma follows. ◻
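As a concrete finite illustration of clause (3) (ours, not part of the proof), consider the cpe formula $\varphi(x):=\exists y\,(x=y+y)$ and the reduction homomorphism $f:\mathbb{Z}/8\mathbb{Z}\to\mathbb{Z}/4\mathbb{Z}$; the snippet below checks that $f$ maps $\varphi(\mathbb{Z}/8\mathbb{Z})=2(\mathbb{Z}/8\mathbb{Z})$ into $\varphi(\mathbb{Z}/4\mathbb{Z})$.

```python
# Finite sanity check (ours) of Lemma 20(3) for phi(x) := (exists y)(x = y + y)
# and the reduction homomorphism f : Z/8Z -> Z/4Z, f(x) = x mod 4.
def phi(n):
    """The set phi(M) for M = Z/nZ, i.e. {2*y mod n : y in M}."""
    return {(2 * y) % n for y in range(n)}

f = lambda x: x % 4
image = {f(x) for x in phi(8)}
assert image <= phi(4)            # f maps phi(M) into phi(N), as the lemma states
print(phi(8), phi(4), image)      # {0, 2, 4, 6} {0, 2} {0, 2}
```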
Let us repeat the above result in the context of $R$-modules:
**Corollary 21**. *Let $M$ be an $R$-module.*
1. *If $\varphi(\overline{x}_{u}) \in \mathscr{L}_{\infty, \theta}^{\rm{{cpe}}},$ then $\varphi(M)$ is an abelian subgroup of $({}^{u}M, +).$*
2. *Similar result holds for formulas $\varphi(\overline{x}_{u}) \in \mathscr{L}_{\infty, \theta}^{\rm{{cop}}}.$*
*Furthermore, if $R$ is commutative, then in the above, $\varphi(M)$ becomes a sub-module of $({}^{u}M, +).$*
*Remark 22*. If $R$ is not commutative, then $\varphi(M)$ is not necessarily a submodule. To see this, suppose $a, b, c \in R$ are such that $abc \neq bac,$ and consider $M=R$ as a left $R$-module. Define $F_a: M \to M$ as $F_a(x)=ax$, and let $\Gamma_a:=\{(x, ax): x \in M\}$ be its graph, which is defined by the atomic formula $x_1=F_a(x_0)$. Now note that $(c, ac) \in \Gamma_a.$ If $\Gamma_a$ is a sub-module, then we must have $(bc, bac) \in \Gamma_a.$ By definition, $(bc, bac)=(x, ax)$ for some $x \in M$. It then follows that $x=bc$ and hence $$bac = ax = abc,$$ which contradicts $abc \neq bac.$
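A concrete numerical instance of this remark (ours) uses $2\times 2$ integer matrices, i.e. $R=M=M_2(\mathbb{Z})$ viewed as a left module over itself, with $a=E_{12}$, $b=E_{11}$ and $c=E_{21}$.

```python
# Concrete instance (ours) of Remark 22 with R = M = the 2x2 integer matrices:
# the graph of F_a(x) = a*x is not closed under left multiplication by b.
import numpy as np

a = np.array([[0, 1], [0, 0]])   # E_12
b = np.array([[1, 0], [0, 0]])   # E_11
c = np.array([[0, 0], [1, 0]])   # E_21

abc, bac = a @ b @ c, b @ (a @ c)
assert not np.array_equal(abc, bac)          # abc != bac

# (c, a c) lies on the graph of F_a, but b.(c, a c) = (bc, bac) does not,
# since bac differs from a(bc) = abc; hence the graph is not a left submodule.
assert not np.array_equal(bac, a @ (b @ c))
```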
## More on affineness
In this subsection we try replacing subgroups by affine subsets. The main result of this subsection is Proposition [Proposition 27](#m8){reference-type="ref" reference="m8"}. First, we fix the hypothesis and present the corresponding definitions. Here, the affineness requirement concerns only the formulas, not the structures themselves.
**Hypothesis 23**.
1. $R$ is a ring,
2. $M$ is an $R$-module,
3. $\kappa, \theta$ are as in Definition [Definition 41](#k2){reference-type="ref" reference="k2"}.
**Definition 24**. Let $\mathrm{Affine}_{1}$ be the set of all formulas $\varphi(\overline{x}) \in \mathscr{L}_{\infty, \theta}(\tau_{M})$ such that $\rm{lg}(\overline{x}) < \theta$ and $\varphi(M)$ is closed under $\overline{x} - \overline{y} + \overline{z}$. In other words, $\overline{a} - \overline{b} + \overline{c} \in \varphi(M)$ provided that $\overline{a}, \overline{b}, \overline{c} \in \varphi(M)$.
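As an elementary illustration of this closure condition (not needed later): in any abelian group $G$, every coset of a subgroup $H \leq G$ is closed under it, since $$(t+h_{0})-(t+h_{1})+(t+h_{2})=t+(h_{0}-h_{1}+h_{2})\in t+H \qquad (t \in G,\ h_{0},h_{1},h_{2}\in H).$$ Conversely, a nonempty subset $A\subseteq G$ closed under $x-y+z$ is a coset of the subgroup $A-a$, for any fixed $a \in A$. Thus a nonempty $\varphi(M)$ with $\varphi \in \mathrm{Affine}_{1}$ is a coset of a subgroup of $({}^{\mathop{\mathrm{lg}}(\overline{x})}M,+)$.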
We now define another class $\mathrm{Affine}_{2}$ of formulas of $\mathscr{L}_{\infty, \theta}(\tau_{M})$, and show that it is included in $\mathrm{Affine}_{1}$. To this end, we first make the following definition.
**Definition 25**. Suppose $\alpha_*$ is an ordinal. Let $\varphi(\overline{x}, \overline{y})$ be a formula and $\overline{\psi}(\overline{x}, \overline{y}) = \langle \psi_{\alpha}(\overline{x}, \overline{y}): \alpha < \alpha_{*} \rangle$ be a sequence of formulas from $\mathscr{L}_{\infty, \theta}(\tau_{M})$. Let $\overline{b} \in {}^{\mathop{\mathrm{lg}}(\overline{x})}{M}$ and $\overline{a} \in {}^{\mathop{\mathrm{lg}}(\overline{y})}{M}$. Then we set
1. $\mathrm{set}_{\overline{\psi}}(\overline{b}, \overline{a})$ stands for the following set $$\mathrm{set}_{\overline{\psi}}(\overline{b}, \overline{a}):=\big\{ \alpha \in \alpha_{*}: (\overline{b}^{\frown}\overline{a}\in\psi_{\alpha}(M)) \big\}.$$
2. By $\mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a})$ we mean $$\mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}):= \big\{ u \subseteq \alpha_{*}: \text{for some} \ \overline{c}\in \varphi(M, \overline{a}) \ \text{we have} \ u = \mathrm{set}_{\overline{\psi}}(\overline{c}, \overline{a}) \big\}.$$
3. By $\mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a})$ we mean $$\bigg\{ (w_{0}, w_{1}): w_{0} \subseteq w_{1} \subseteq \alpha_{*} \ \text{and } \exists\ u_{0}, u_{1} \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}) \ \text{s.t.} \ w_{1} \subseteq u_{1} \ \text{and} \ u_{0} \cap w_{1} = w_{0} \bigg\}.$$ In particular, we have the following flowchart: $$\xymatrix{
&&\alpha_{*}\\ &u_{0}\ar[ur]^{\subseteq}&&u_{1}\ar[ul]_{\subseteq}\\& w_{0}\ar[rr]^{ \subseteq}\ar[u]^{\subseteq}
&&w_{1}\ar[u]_{\subseteq}
&&&}$$.
We are now ready to define the class $\mathrm{Affine}_{2}$ of formulas:
**Definition 26**. Let $\mathrm{Affine}_{2}$ be the closure of the set of atomic formulas by:
(a) arbitrary conjunctions,
(b) existential quantifier $\exists \overline{x},$ and
(c) suppose for a given ordinal $\alpha_*$, the formulas $\varphi(\overline{x}, \overline{y}), \langle \psi_{\alpha}(\overline{x}, \overline{y}): \alpha < \alpha_* \rangle$ are taken from $\mathrm{Affine}_{2}$ such that $\varphi(\overline{x}, \overline{y}) \geq \psi_{\alpha}(\overline{x}, \overline{y})$ for all $\alpha < \alpha_*$. Also suppose that $$\Upsilon \subseteq \{ (w_{0}, w_{1}): w_{0} \subseteq w_{1} \subseteq \alpha_{*} \}.$$ Then $\vartheta(\overline{y}) = \Theta_{\varphi, \overline{\psi}, \Upsilon}(\overline{y}) \in \mathrm{Affine}_{2},$ where $\vartheta(\overline{y})$ is defined such that $$M \models \vartheta[\overline{a}]\iff \Upsilon \subseteq \mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}).$$
ii\) Let $({\overline{\psi}},\alpha_{*})$, $\overline{a}$ and $\overline{b}$ be as above. When $\varphi$ and $\overline{\psi}$ are clear from the context, we write $\mathrm{set}(\overline{b}, \overline{a})$ for $\mathrm{set}_{\overline{\psi}}(\overline{b}, \overline{a})$, i.e., $$\mathrm{set}(\overline{b}, \overline{a})=\big\{ \alpha < \alpha_{*}: \overline{b}{}^{\frown}\overline{a}\in\psi_{\alpha}(M) \big\}.$$[^1]
iii\) Similarly, $\mathop{\mathrm{Set}}(\overline{a}) :=\mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a})$, i.e., $$\mathop{\mathrm{Set}}(\overline{a})=\big\{ u \subseteq \alpha_{*}: \text{for some} \ \overline{b}\in \varphi(M, \overline{a}) \ \text{we have} \ u = \mathrm{set}(\overline{b}, \overline{a}) \big\}.$$
iv\) By $\mathop{\mathrm{inter}}(\overline{a}) := \mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a})$ we mean $$\bigg\{ (w_{0}, w_{1}): w_{0} \subseteq w_{1} \subseteq \alpha_{*} \ \text{and } \exists\ u_{0}, u_{1} \in \mathop{\mathrm{Set}}(\overline{a}) \ \text{s.t.} \ w_{1} \subseteq u_{1} \ \text{and} \ u_{0} \cap w_{1} = w_{0} \bigg\}.$$
The main result of this subsection is the following.
**Proposition 27**. *Adopt the previous notation. Then $\mathrm{Affine}_{2} \subseteq \mathrm{Affine}_{1}.$*
*Proof.* We prove the proposition in a sequence of claims. We proceed by induction on the complexity of the formula $\vartheta$, showing that if $\vartheta \in \mathrm{Affine}_{2},$ then $\vartheta \in \mathrm{Affine}_{1}$. This is clear if $\vartheta$ is an atomic formula. Suppose $\vartheta=\bigwedge_{i \in I}\vartheta_i$, and the claim holds for all $\vartheta_i , i \in I$. It is then clear from the induction hypothesis that $\vartheta \in \mathrm{Affine}_{1}$ as well. Similarly, if $\vartheta=\exists \overline{x} \varphi(\overline{x}),$ and if the claim holds for $\varphi(\overline{x})$, then clearly it holds for $\vartheta$.
Now suppose that $\alpha_*$ is an ordinal, $\varphi(\overline{x}, \overline{y}), \langle \psi_{\alpha}(\overline{x}, \overline{y}): \alpha < \alpha_* \rangle$ are in $\mathrm{Affine}_{2}$, such that $\varphi(\overline{x}, \overline{y}) \geq \psi_{\alpha}(\overline{x}, \overline{y})$ for all $\alpha < \alpha_*.$ Also, suppose that $$\Upsilon \subseteq \{ (w_{0}, w_{1}): w_{0} \subseteq w_{1} \subseteq \alpha_{*} \}.$$ Assume by the induction hypothesis that the formulas $\varphi(\overline{x}, \overline{y})$ and $\psi_{\alpha}(\overline{x}, \overline{y})$, for $\alpha < \alpha_*$, are in $\mathrm{Affine}_1.$ We have to show that $\vartheta(\overline{y}) = \Theta_{\varphi, \overline{\psi}, \Upsilon}(\overline{y}) \in \mathrm{Affine}_{1}$ as well.
Now, we state and prove the following claim:
**Claim 28**. *Adopt the above notation. Assume $\overline{a}_{l} \in {}^{\rm{lg(\overline{y})}}M$ for $l = 0, 1, 2 ,3$ and $\overline{a}_{3} = \overline{a}_{0} - \overline{a}_{1} + \overline{a}_{2}.$ If $u_{j} \in \rm{Set}(\overline{a}_{j})$ for $j =0, 1$ and $u = u_{0} \cap u_{1}$, then $$\big\{ w \cap u: w \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{2}) \big\} =\big \{ w \cap u: w \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{3}) \big\}.$$*
*Proof.* Let $\overline{b}_{0} \in \varphi(M, \overline{a}_{0})$ and $\overline{b}_{1} \in \varphi(M, \overline{a}_{1})$ be such that $u_0= \mathrm{set}_{\overline{\psi}}(\overline{b}_{0}, \overline{a}_{0})$ and $u_1= \mathrm{set}_{\overline{\psi}}(\overline{b}_{1}, \overline{a}_{1})$. Suppose that $w \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{2})$. Then for some $\overline{b}_{2} \in \varphi(M, \overline{a}_{2})$ we have $w=\mathrm{set}_{\overline{\psi}}(\overline{b}_{2}, \overline{a}_{2})$. We have to find $w' \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{3})$ such that $w' \cap u=w \cap u.$
Set $\overline{b}_{3} := \overline{b}_{0} - \overline{b}_{1} + \overline{b}_{2}$. Note that $$l=0, 1, 2 \implies ~ \overline{b}_{l} ^{\frown} \overline{a}_{l} \in \varphi(M),$$ hence, as $\varphi \in \mathrm{Affine}_1,$ we have $$\overline{b}_{0} ^{\frown} \overline{a}_{0} - \overline{b}_{1} ^{\frown} \overline{a}_{1} + \overline{b}_{2} ^{\frown} \overline{a}_{2} \in \varphi(M).$$ Clearly, $$\overline{b}_3 ^{\frown} \overline{a}_3=\overline{b}_{0} ^{\frown} \overline{a}_{0} - \overline{b}_{1} ^{\frown} \overline{a}_{1} + \overline{b}_{2} ^{\frown} \overline{a}_{2}\in \varphi(M).$$ According to its definition, $\overline{b}_3 \in \varphi(M, \overline{a}_3).$
Let $w'=\mathrm{set}_{\overline{\psi}}(\overline{b}_{3}, \overline{a}_{3})$. We show that $w' \cap u=w \cap u.$ Suppose $\alpha \in u.$ Then, we have $\alpha \in u_0 \cap u_1$, and hence $\overline{b}_j ^{\frown} \overline{a}_j \in \psi_\alpha(M)$, for $j=0, 1$. Thus as $\psi_\alpha \in \mathrm{Affine}_1$, we have
$$\begin{array}{ll}
\alpha \in w' & \iff \overline{b}_3 ^{\frown} \overline{a}_3 \in \psi_\alpha(M) \\
& \iff \overline{b}_2 ^{\frown} \overline{a}_2 \in \psi_\alpha(M)\\
& \iff\alpha \in w.
\end{array}$$
Conversely, suppose $w' \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{3}).$ By symmetry, $w\cap u=w' \cap u$ for some $w \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{2}).$ The claim follows. ◻
Let us apply the previous claim and observe that:
**Claim 29**. *Let $\varepsilon=\mathop{\mathrm{lg}}(\overline{y}) < \theta$ and $\overline{a}_{l} \in {}^{\varepsilon}M$ for $l = 0, 1, 2$, and set $\overline{a}_{3} := \overline{a}_{0} - \overline{a}_{1} + \overline{a}_{2}$. If $\Upsilon\subseteq \bigcap_{l \leq 2} \mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}_{l})$, then $\Upsilon \subseteq{\mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}_{3})}.$*
*Proof.* Let $(w_{0}, w_{1}) \in \Upsilon$. We shall prove that $(w_{0}, w_{1}) \in \mathop{\mathrm{inter}}(\overline{a}_{3}).$ For $j \leq 2,$ as $(w_{0}, w_{1}) \in \Upsilon \subseteq \mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}_{j})$, there is a pair $u_{j, 0}, u_{j, 1} \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{j})$ witnessing it. Namely, we have $$w_1 \subseteq u_{j, 1} \text{~and~} u_{j, 0} \cap w_1=w_0.$$ Now, we can find $\overline{b}_{j, 0}, \overline{b}_{j, 1} \in \varphi(M, \overline{a}_{j})$ such that $\rm{set}(\overline{b}_{j, 0}, \overline{a}_{j}) = u_{j, 0}$ and $\rm{set}(\overline{b}_{j, 1}, \overline{a}_{j}) = u_{j, 1}.$ Set
1. $\overline{b}_{3, 0} := \overline{b}_{0, 0} - \overline{b}_{1, 0} + \overline{b}_{2, 0}$,
2. $\overline{b}_{3, 1} := \overline{b}_{0, 1} - \overline{b}_{1, 1} + \overline{b}_{2, 1}.$
In the light of Claim [Claim 28](#m11){reference-type="ref" reference="m11"}, applied with $u_{0, 1} \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{0})$, $u_{1, 1} \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{1})$ and $u:=u_{0, 1} \cap u_{1, 1}$ (once to $u_{2, 1}$ and once to $u_{2, 0}$), there exist $u_{3, 1}, u_{3, 0} \in \mathop{\mathrm{Set}}_{\varphi, \overline{\psi}}(\overline{a}_{3})$ such that the following two equalities are valid:
1. $u_{3, 1} \cap (u_{0, 1} \cap u_{1, 1}) = u_{2, 1} \cap (u_{0, 1} \cap u_{1, 1})$, and
2. $u_{3, 0} \cap (u_{0, 1} \cap u_{1, 1}) = u_{2, 0} \cap (u_{0, 1} \cap u_{1, 1})$.
By clause (1), and since $w_1 \subseteq u_{2, 1} \cap u_{0, 1} \cap u_{1, 1}$, we have $u_{3, 1} \supseteq w_1$. Moreover, since $w_1 \subseteq u_{0, 1} \cap u_{1, 1}$ and $u_{2, 0} \cap w_1 = w_0$, we have $$\begin{array}{ll}
u_{3, 0} \cap w_1 &= u_{3, 0} \cap (u_{0, 1} \cap u_{1, 1})\cap w_1\\
&\stackrel{(2)}= u_{2, 0} \cap (u_{0, 1} \cap u_{1, 1})\cap w_1\\
&= u_{2, 0}\cap w_1\\
&= w_0.
\end{array}$$ Thus the pair $(u_{3, 0}, u_{3, 1})$ witnesses that $(w_{0}, w_{1}) \in \mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}_{3}).$ The claim follows. ◻
Now, we are ready to complete the proof of Proposition [Proposition 27](#m8){reference-type="ref" reference="m8"}. To this end, we fix the following data:
1. $\vartheta(\overline{y}) = \Theta_{\varphi, \overline{\psi}, \Upsilon}(\overline{y}),$
2. $\overline{a}_0, \overline{a}_1, \overline{a}_2 \in \vartheta(M)$,
3. $\overline{a}_3:=\overline{a}_0 - \overline{a}_1+\overline{a}_2$.
This gives us $\Upsilon \subseteq \mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}_{l})$ for $l \leq 2$. Thanks to Claim [Claim 29](#m17){reference-type="ref" reference="m17"}, we know $\Upsilon \subseteq{\mathop{\mathrm{inter}}_{\varphi, \overline{\psi}}(\overline{a}_{3})}.$ According to Definition [Definition 26](#m5){reference-type="ref" reference="m5"}(c) one has $\overline{a}_3 \in \vartheta(M)$. Consequently, $\vartheta(\overline{y}) \in \mathrm{Affine}_1$, and the proposition follows. ◻
# Additive frames
In this section we introduce the concept of an additive frame. Each additive frame contains, among other things, an abelian group. We will show that each abelian group can be realized in this way. In particular, the main result of this section is Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}.
The following discussion presents one of our main new frameworks:
**Discussion 30**.
) We can allow second order quantifiers but only additive ones, see [Definition 31](#a9){reference-type="ref" reference="a9"}(2),
) When we ask for suitable $\mathbb{L}$ instead "closure of the set of atomic by $\mathscr{C} = \{ \wedge, \exists, \forall \}$" we may consider $\mathscr{C} \subseteq \mathfrak{L}_{\infty, \infty}$ i.e. closure by $\mathscr{C}$ means substituting the atomic formulas by earlier formulas. So $\mathscr{C} \subseteq \mathfrak{L}_{\infty, \infty}(\tau_{\theta}),$ $\tau_\theta = \{ R_{\zeta}: \zeta < \theta \},$ where $\rm{arity}(R_{\varepsilon, \zeta}) = \varepsilon$ So $\mathbb{L}^{p}_{\infty, \theta}$ comes from $\mathscr{C} = \{ \bigwedge_{\varepsilon< \zeta} \bigvee_{\varepsilon< \zeta} \exists \overline{x}_{[\varepsilon]} R_{\varepsilon}(\overline{x}_{[\varepsilon]}): (\forall \overline{x}_{\varepsilon} R_{\varepsilon}(\overline{x}_{\varepsilon}))\}$: $R_{\varepsilon}$ is $\varepsilon$-place, $u \subseteq \theta, \mathrm{otp}(u) = \varepsilon$.
) We would like to find more formulas defining subgroups.
[Question]{.ul}: what about the following formula?
$$\psi(\overline{z}) = (\exists \dots \overline{x}_{\alpha} \dots)_{\alpha < \kappa} \left[ \bigwedge_{\alpha < \kappa} \varphi(\overline{x}_{\alpha}, \overline{z}) \wedge \bigwedge_{\alpha < \beta < \kappa} \overline{x}_{\alpha} \cap \overline{x}_{\beta} = \emptyset \right]$$ Is it additive? Is it preserved by monomorphisms?
) Call $\psi(\overline{x}_{[u]})$ additive if $\psi(M) \subseteq {}^{u}M$ is a subgroup,
1. the following formula $\psi(\overline{z})$ is additive, where $$\psi(\overline{z}) = (\exists \dots \overline{x}_{\alpha} \dots ) \left[ \bigwedge_{\alpha < \beta} \varphi(\overline{x}_{\beta}- \overline{x}_{\alpha}, \overline{z}) \wedge \bigwedge_{\alpha < \beta} \overline{x}_{\alpha} \cap \overline{x}_{\beta} = \emptyset \right].$$
What is the gain? It is positive existential, YES, but as a candidate for $\mathbb{L}_{\mathfrak{f}}$ it changes the notion of sub-formula,
1. Can we say more on second order logic? Naturally we have subgroups of ${}^{u}(\vert M \vert, \in), u \in [\theta]^{< \theta}.$ Applying such quantifiers preserves the property "a formula defines a subgroup".
**Definition 31**.
) $\mathbb{L}_{\lambda, \kappa}(\tau)$ is defined as usual, but negations are applied only to atomic formulas, and we still have $\wedge, \vee, (\exists \overline{x}_{\varepsilon}), (\forall \overline{y}_{\varepsilon})$ when $\varepsilon< \kappa.$
) $\mathbb{L}_{\lambda, \kappa}^{2}(\tau)$ is defined like $\mathbb{L}_{\lambda, \kappa}$ [but]{.ul} we also have variables $R_{\varepsilon}$ for $\varepsilon$-place relations, for $\varepsilon< \kappa,$ and we have quantifiers $(\exists \overline{R}), (\forall \overline{R})$ where $\overline{R}$ is a sequence of length $< \kappa$ of such relation variables,
) $\mathbb{L}_{\lambda, \kappa}^{{2, \rm{afn}}},$ where the vocabulary $\tau$ is additive (so $=, x + y, x - y, 0$ are in $\tau$) is defined as above but for defining $M \models \varphi[\overline{a}, \overline{R}]$ by induction on $\varphi,$ we restrict ourselves to affine $(M, \overline{R}),$ see §5.
) We say a formula $\varphi(\overline{x})$ is sub-like, hom-like or up to-lie when it satisfies the condition of [Lemma 20](#a5){reference-type="ref" reference="a5"}(1), [Lemma 20](#a5){reference-type="ref" reference="a5"}(2) or [Lemma 20](#a5){reference-type="ref" reference="a5"}(3) respectively.
) Similarly for a set $\mathscr{C}$ of formulas.
**Fact 32**. Parallel of [Lemma 20](#a5){reference-type="ref" reference="a5"} for logics as in Definition [Definition 31](#a9){reference-type="ref" reference="a9"}.
**Definition 33**.
) We say that $\mathscr{C}$ is a $(\lambda, \sigma, \theta)$-template [when]{.ul}:
a) $\mathscr{C}$ is a set of formulas in $\mathbb{L}_{\lambda, \theta}(\tau_{\sigma, \theta})$ $\tau_{\sigma, \theta} = \{ R_{\varepsilon, i}: i < \sigma \ \text{and} \ \varepsilon< \theta \}$ and $R_{\varepsilon, i}$ is an $\varepsilon$-place predicate; $\lambda$ may be $\infty$ so each $\varphi$ is $\varphi(\overline{x}_{[u]}),$ $u \subseteq \mathrm{Ord}$ of cardinality $< \theta,$
) For $\mathscr{C}$ as above and $\tau$ a vocabulary of arity $< \theta,$ let $\mathfrak{L}_{\infty, \theta, \mathscr{C}}(\tau)$ be the following set of formulas $\varphi = \varphi(\overline{x}_{[u]}), u \subseteq \rm{Ord},$ $\vert u \vert < \theta$: it is the closure of the set of atomic formulas by:
a) arbitrary conjunctions,
b) $\exists \overline{x}_{[u]}$ for $u$ of cardinality $< \theta$,
c) substitutions in a formula from $\mathscr{C}.$
) But we define the set of subformulas $\rm{SFor}(\varphi)$ of $\varphi \in \mathbb{L}_{\infty, \theta, \mathscr{C}}$ as not the one of $\mathbb{L}_{\infty, \infty}$ but reflect its being "the closure" i.e. (but we can add or omit dummy variables),
a) for $\psi(\overline{x})$ atomic, $\rm{SFor}(\psi(\overline{x}_{[u]})) = \{ \psi(\overline{x}) \},$
b) if $\psi = \psi(\overline{x}) = \bigwedge_{\alpha \in u} \varphi_{\alpha}(\overline{x}'_{\alpha}),$ where $\overline{x}'_{\alpha}$ is a subsequence of $\overline{x}$ of length $< \theta$, then
$$\begin{array}{clcr}
\rm{SFor}(\psi)= & \bigcup \{ \rm{SFor}(\varphi_{\alpha}): \alpha \in u \}
\\
&\cup \{ \bigwedge_{\alpha \in v} \varphi_{\alpha}:v \subseteq u \}\\
&\cup \{ \psi \},\\
\end{array}$$
c) if $\psi(\overline{x}) = \rm{sub}_{\overline{\varphi}}^{\overline{R}}(\vartheta(\overline{x}))$ then $\rm{SFor}(\psi(\overline{x})) = \bigcup \{ \rm{SFor(\varphi_{i})}: i < lg(\overline{\varphi}) \} \cup \{ \psi(\overline{x}) \}.$
**Convention 34**.
) If not said otherwise, $\theta$ is fixed, as well as $\tau$, an additive vocabulary (for some claims this is not necessary), and $\mathscr{C}$, a template as in [Definition 33](#a13){reference-type="ref" reference="a13"},
) $M$ denotes an additive $\theta$-model, and $K$ an additive $\theta$-class.
How does this affect [Lemma 20](#a5){reference-type="ref" reference="a5"}?
[\[a17\]]{#a17 label="a17"}
1) If $\mathscr{C} \subseteq \mathbb{L}_{\infty, \sigma, \theta}(\tau_{\sigma, \theta})$ is positive (i.e., is a set of positive formulas, where disjunction is allowed), then $\mathbb{L}_{\infty, \theta, \mathscr{C}}(\tau)$ is positive, so [Lemma 20](#a5){reference-type="ref" reference="a5"}(1) applies, i.e., $\varphi(\overline{x}_{[u]}) \in \mathfrak{L}_{\infty, \theta, \mathscr{C}}(\tau)$ implies that $\varphi(M)$ is a submodel,
2) If $\mathscr{C} \subseteq \mathfrak{L}_{\infty, \sigma, \theta}(\tau_{\sigma, \theta})$ is cositive existential (i.e., see [Lemma 20](#a5){reference-type="ref" reference="a5"}(1)), [then]{.ul} $\mathbb{L}_{\infty, \theta, \mathscr{C}}(\tau)$ is positive existential,
3) Parallel results hold for additive formulas,
4) If $\mathscr{C}$ is sub-like then so is $\mathfrak{L}_{\lambda, \kappa, \mathscr{C}},$
5) If $\mathscr{C}$ is cositive, cositive existential or positive, then it is sub-like or hom-like.
*Remark 35*. So [Lemma 20](#a5){reference-type="ref" reference="a5"} applies.
*Proof.* Should be clear. ◻
**Definition 36**.
) We define the set of special sub-formulas of $\varphi(\overline{x}) \in \mathbb{L}_{\infty, \theta, \mathscr{C}}(\tau)$ denoted by $\rm{SPSF}(\varphi(\overline{x}))$ by induction on $\varphi$ as follows (we may omit dummy variables):
(a) for $\varphi(\overline{x})$ atomic, $\mathrm{SPSF}(\varphi) = \{ \varphi\},$
(b) for $\psi(\overline{x}) = \bigwedge_{\alpha \in u} \varphi_{\alpha}(\overline{x}_{\alpha})$ let $$\mathrm{SPSF}(\psi) = \bigg\{ \bigwedge_{\alpha \in v} \varphi_{\alpha}'(\overline{x}): v \subseteq u \ \text{and} \ \varphi_{\alpha}' \in \rm{SPSF}(\varphi_{\alpha}), \ \text{for} \ \alpha \in v \bigg\},$$
(c) if $\psi(\overline{x}_{[u]}) = \exists \overline{y}_{[v]} \varphi(\overline{y}_{[v]}, \overline{x}_{[u]})$ then $\mathrm{SPSF}(\psi(\overline{x}_{[u]}))$ is defined by: $$\bigg\{ (\exists \overline{y}_{[v_{1}]}) \varphi'(\overline{y}_{[v_{1}]}, \overline{x}_{[u_{1}]}): v_{1} \subseteq v, u_{1} \subseteq u \ \text{and} \ \varphi'(\overline{y}_{[v_{1}]}, \overline{x}_{[u_{1}]}) \in \rm{SPSF}\big(\varphi(\overline{y}_{[v]}, \overline{x}_{[u]})\big) \bigg\}$$
(d) if $\psi(\overline{x}) = \rm{sub}_{\overline{\varphi}}^{\overline{R}}(\varphi(\overline{x}))$ then $\rm{SPSF}(\psi(\overline{x})) = \{ \rm{sub}_{\overline{\varphi}}^{R}(\varphi'(\overline{x})): \varphi' \in \rm{SPSF(\varphi)} \}.$
) We say $\varphi_{1}(\overline{x}), \varphi_{2}(\overline{x})$ are isom [when]{.ul} the only difference is the indexing in $\bigwedge_{\alpha \in u} \varphi_{\alpha}$ and changing (legally) bounded quantifiers.
[\[a23\]]{#a23 label="a23"}
1) If $\mathscr{C} \subseteq \mathbb{L}_{\sigma, \theta, \mathscr{C}}(\tau_{\sigma \theta}),$ $\kappa > \vert \mathscr{C} \vert + \sigma + \theta$ is beautiful, and $\varphi_{\alpha}(\overline{x}) \in \mathbb{L}_{\infty, \theta, \mathscr{C}}(\tau)$ for $\alpha < \kappa$, [then]{.ul} there exist $\alpha < \beta < \kappa$ such that $\varphi_{\alpha}$ is isom to some $\varphi_{\beta}'(\overline{x}) \in \rm{SPSF}(\varphi_{\beta}(\overline{x})),$
2) If above $\mathscr{C}$ is cositive (or just hom-like), $\varphi(\overline{x}) \in \rm{SPSF}(\psi(\overline{x}))$, and $M \models \psi[\overline{a}]$, then $M \models \varphi[\overline{a}],$
3) Both hold even for $\mathbb{L}_{\infty, \infty, \mathscr{C}}(\tau),$ where $\kappa > \vert \mathscr{C} \vert$ is beautiful.
*Proof.*
1) As in [@Sh:110], [@Sh:678].
2) By induction on $\varphi.$
3) Similarly.
◻
We now give examples of templates $\mathscr{C}$.
**Definition 37**.
) Let $\mathscr{C}_{\rm{ex}} = \mathscr{C}_{\theta}^{\rm{ex}}$ ("$\rm{ex}$" stands for existential) be the template consisting of the formulas $\exists \overline{x}_{[u]} R(\overline{x}_{[u]}, \overline{y})$ with $\rm{arity}(R) = \varepsilon+ \rm{lg}(\overline{u}),$
) Let $\mathscr{C}_{\rm{un}} = \mathscr{C}_{\theta}^{\rm{un}}$ ("$\rm{un}$" stands for universal) be the template consisting of the formulas $(\forall \overline{x}_{[u]}) R(\overline{x}_{[u]}, \overline{y}),$ with $\rm{arity(R)} = \varepsilon+ \rm{lg}(\overline{u}).$
**Observation 38**. * *
*) If $\mathscr{C}_{\rm{ex}} \subseteq \mathscr{C}$ then $\mathbb{L}_{\infty, \theta, \mathscr{C}}(\tau)$ is closed under $(\exists \overline{x})$ when $\rm{lg}(\overline{x}) < \theta,$*
*) Parallel for $\mathscr{C}_{\rm{un}}.$*
**Discussion 39**. Having [Definition 37](#a26){reference-type="ref" reference="a26"} and [Observation 38](#a29){reference-type="ref" reference="a29"} is not so helpful by itself, as we already know it. The following tries to formalize "$M$ can be represented as $M_{1} \oplus M_{2}$ such that $\overline{z} \subseteq M_{1}, \overline{y} \subseteq M_{2}$".
The point is that below we use $\kappa > \theta$ (even $> \lambda^{+}\theta + \sigma$) which is beautiful, so $\mathbb{L}_{\infty, \theta, \mathscr{C}} \nsubseteq \mathbb{L}_{\infty, \theta}$ but still [\[a23\]](#a23){reference-type="ref" reference="a23"}(1), (2) apply.
**Definition 40**. \[Remark\] Let $\mathscr{C}_{\rm{ds}} = \mathscr{C}_{\theta, \kappa}^{\rm{ds}}$ be the set of sentences $$\psi(\overline{x}_{[\varepsilon]}, \overline{y}) = (\exists \dots, \overline{x}_{[\varepsilon]}^{\alpha}, \dots)_{\alpha < \kappa} \left( \bigwedge_{\alpha < \beta < \kappa} R (\overline{x}_{[\varepsilon]}^{\beta}, \overline{y}) \right)$$ where $\rm{arity}(R) = \varepsilon+ \rm{lg}(\overline{y}).$
[\[a35\]]{#a35 label="a35"} [\[a23\]](#a23){reference-type="ref" reference="a23"} applies to $\mathscr{C}_{\theta, \kappa}^{\rm{ds}}.$
# Frames: towards eliminating quantifiers for infinitary logics {#2}
## Frames
In [@Sh:977] we try a more general frame, still using subgroups (rather than, e.g., affine subsets or so-called "beautiful terms", see [@Sh:61]).
**Definition 41**.
1. We say $$\mathbf{f}:=(M_\mathbf{f}, \mathscr{L}_\mathbf{f}, \lambda_\mathbf{f}, \kappa_\mathbf{f}, \theta_\mathbf{f},\Omega_\mathbf{f})=(M, \mathscr{L}, \lambda, \kappa, \theta,\Omega)$$ is a *general frame* if:
(1) $M$ is a $\tau_{M}$-model.
(2) $\mathscr{L}$ is a class or set of formulas in the vocabulary $\tau_{M},$ such that each $\varphi \in \mathscr{L}$ has the form $\varphi(\overline{x}), \ \overline{x}$ of length $< \theta$.
(3) For every $\overline{a} \in {}^{\varepsilon}M, ~ \varepsilon< \theta$, there is a formula $\varphi_{\overline{a}}(\overline{x}) \in \mathscr{L}$ such that:
(a) $\overline{a} \in \varphi_{\overline{a}}(M),$
(b) (the minimality condition) if $\psi(\overline{x}) \in \mathscr{L}$ and $\overline{a} \in \psi(M),$ then $\varphi_{\overline{a}}(M) \subseteq \psi(M).$
(4) (a) if $\varphi_{\alpha}(\overline{x}) \in \mathscr{L}$ for $\alpha < \kappa$, then for some $\alpha < \beta < \kappa$, we have $\varphi_{\alpha}(M) \supseteq \varphi_{\beta}(M),$
(b) if $\varphi_{\alpha, \beta}(\overline{x}, \overline{y}) \in \mathscr{L}$ for $\alpha < \beta < \lambda$, then for some $\alpha_{1} < \alpha_{2} < \alpha_{3} < \lambda$ we have $\varphi_{\alpha_{1}, \alpha_{2}}(M) \supseteq \varphi_{\alpha_{1}, \alpha_{3}}(M), \varphi_{\alpha_{2}, \alpha_{3}}(M)$.
(5) $\lambda, \kappa$ are regular and $\lambda \geq \kappa \geq \theta\geq |\Omega|+|\tau_M|,$ where $\Omega$ is a set of cardinals such that $1 \in \Omega$ and all other cardinals in it are infinite.
2. We say a general frame $\mathbf{f}$ is an *additive frame* if in addition, it satisfies:
1. $(\vert M \vert, +^{M}, -^{M}, 0^{M})$ is an abelian group. Moreover, $M$ is an additive $\theta$-model.
2. If $\varphi(\overline{x}_u) \in \mathscr{L}$, then $\varphi(M) \subseteq {}^{u} M$ is a sub-group of $M$.
3. An additive frame $\mathbf{f}$ is an *additive$^+$ frame* if $M_{\mathbf{f}}$ has cardinality greater or equal to $\lambda$.
*Remark 42*. Given a general frame $\mathbf{f}$ as above, we always assume that the language $\mathscr{L}$ is closed under permutation of variables, adding dummy variables and finite conjunction.
The next lemma is a criterion for an additive frame to be additive$^+$.
**Lemma 43**. *Suppose $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta,\Omega)$ is an additive frame. Then it is an additive$^+$ frame if and only if for each $\varepsilon \in (0, \theta)$, there exists some $\bar a \in {}^{\varepsilon}M$ such that $\varphi_{\bar a}(M)$ has cardinality $\geq \lambda$.*
*Proof.* The assumption clearly implies that $\mathbf{f}$ is additive$^+$. To see the other direction, suppose $\mathbf{f}$ is an additive$^+$ frame and $\varepsilon \in (0, \theta).$ Suppose by way of contradiction that $|\varphi_{\bar a}(M)| < \lambda$ for all $\bar a \in {}^{\varepsilon}M$. By induction on $\beta < \kappa$ we can find a sequence $\langle \bar a_\beta: \beta < \kappa \rangle$ such that for each $\beta <\kappa,$ $\bar a_\beta \notin \bigcup_{\alpha < \beta }\varphi_{\bar a_\alpha}(M)$; this is possible, as the union has cardinality $< \lambda \leq |{}^{\varepsilon}M|$. Now, by Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(4)(a), there are $\alpha<\beta<\kappa$ with $\varphi_{\bar a_\alpha}(M) \supseteq \varphi_{\bar a_\beta}(M)$; but then $\bar a_\beta \in \varphi_{\bar a_\beta}(M) \subseteq \varphi_{\bar a_\alpha}(M)$, contradicting the choice of $\bar a_\beta$. ◻
The following defines a partial order relation on formulas of a frame.
**Definition 44**. Assume $\mathbf{f}$ is a general frame, and let $\psi(\overline{x}), \varphi(\overline{x})$ be in $\mathscr{L}_{\mathbf{f}}$.
1. We say $\psi(\overline{x}) \leq \varphi(\overline{x})$ if $\varphi(M) \supseteq \psi(M).$
2. We say $\psi(\overline{x})$ and $\varphi(\overline{x})$ are equivalent, denoted by $\psi(\overline{x}) \equiv \varphi(\overline{x})$, if $\varphi(M)= \psi(M).$
3. Suppose $\overline{a}, \overline{b} \in {}^{\varepsilon}M$. We say $\overline{a} \leq \overline{b}$ (resp. $\overline{a} \equiv \overline{b}$) if $\varphi_{\overline{a}} \leq \varphi_{\overline{b}}$ (resp. $\varphi_{\overline{a}} \equiv \varphi_{\overline{b}}$).
**Convention 45**. If not said otherwise, $\mathbf{f}$ is an additive frame, $K = K_{\mathbf{f}}$ etc., and $M$ is a member of $K$, but we may "forget" to mention $M$ in the $\varphi_{\overline{a}}^{M}$'s.
**Observation 46**. *Without loss of generality $\varphi_{\overline{a}} = \varphi_{- \overline{a}}$ and $\varphi_{\overline{a}} \equiv \varphi_{\overline{b}} \Rightarrow \varphi_{\overline{a}} = \varphi_{\overline{b}}.$*
*Notation 47*. Assume $\mathbf{f}=(M, \lambda, \kappa, \theta,\Omega)$ is an additive frame. Let $\overline{a}_{l}=\langle a_{l, \zeta}: \zeta < \varepsilon\rangle \in {}^{\varepsilon}M$ for $l < n$. We set:
- $-\overline{a}_{l} := \langle -a_{l, \zeta}: \zeta < \varepsilon\rangle,$
- $\overline{a}_{1} + \overline{a}_{2} := \langle a_{1, \zeta} + a_{2, \zeta}: \zeta < \varepsilon\rangle,$
- $\sum_{l < n} \overline{a}_{l} := \langle \sum_{l<n }a_{l, \zeta}: \zeta < \varepsilon\rangle,$
- $\overline{a}-\overline{b}:=\overline{a}+(-\overline{b}).$
) $\varphi, \psi, \vartheta$ vary on $\mathscr{L},$
) $\varepsilon, \zeta, \xi$ vary on $\theta$ and $\mathscr{L}_{\varepsilon} = \{ \varphi(\overline{x}) \in \mathscr{L}: \rm{lg} (\overline{x}) = \varepsilon\},$
) $\overline{a}, \overline{b}, \overline{c}, \overline{d}$ vary on ${}^{\theta >} M$ and $M \in K,$
) Let $\overline{a} \sim \overline{b}$ or "$\overline{a}, \overline{b}$ are equivalent" means $\rm{lg} (\overline{a}) = \rm{lg} (\overline{b})$ and $\varphi_{\overline{a}} \equiv \varphi_{\overline{b}}$ when $M$ is clear from the context; if not, we may say $\equiv_{M}$ or $\varphi^{M}_{\overline{a}} = \varphi_{\overline{b}}^{M}$ or e.g. (in M),
) Let $\varphi = \varphi(\overline{x}, \overline{y}) \in \mathscr{L},$ and $\overline{a} \in {}^{\rm{lg}(\overline{x})}M$. By $\varphi(\overline{a}, M)$ we mean $$\{ \overline{b} \in {}^{\rm{lg}(\overline{y})}M: M \models \varphi[ \overline{a}, \overline{b}] \}.$$
**Lemma 48**. *Suppose $\mathbf{f}=(M, \lambda, \kappa, \theta,\Omega)$ is an additive frame, $\overline{a} \in {}^{\varepsilon}M, \varphi = \varphi_{\overline{a}}$ and $\overline{a}_{\alpha} \in \varphi(M)$ for $\alpha < \lambda$. Then for some $\overline{\beta}, \overline{\gamma}$ we have:*
(a) *$\overline{\beta} = \langle \beta_{i}: i < \lambda \rangle \in {}^{\lambda} \lambda$ is increasing,*
(b) *$\overline{\gamma} = \langle \gamma_{i}: i < \lambda \rangle \in {}^{\lambda}\lambda$ is increasing,*
(c) *$\beta_{i} < \gamma_{i} < \beta_{i+1},$ for all $i<\lambda$,*
(d) *$\overline{a} - \overline{a}_{\beta_{i}} + \overline{a}_{\gamma_{i}}$ is equivalent to $\overline{a},$ for all $i<\lambda.$*
*Proof.* First, we reduce the lemma to the following claim:
1. It suffices to prove that, for all $\overline{a}$ and $\langle \overline{a}_{\alpha}: \alpha < \lambda \rangle$ as above, there are $\beta < \gamma < \lambda$ such that $\overline{a} - \overline{a}_{\beta} + \overline{a}_{\gamma}$ is equivalent to $\overline{a}.$
To see this, suppose $(*)$ holds. By induction on $i<\lambda,$ we define the increasing sequences $\langle \beta_{i}: i < \lambda \rangle$ and $\langle \gamma_{i}: i < \lambda \rangle$ as requested. Thus suppose that $i<\lambda$, and we have defined $\langle \gamma_{j},\beta_{j}: j<i \rangle$. In order to define $(\beta_{i}, \gamma_{i})$, we let $$\alpha_{*} := \sup \{ \gamma_{j} + \beta_{j} + 1: j < i \}.$$ Since $\lambda$ is regular, $\alpha_{*}<\lambda$. Now, apply $(\ast)$ to $\overline{a}$ and $\langle \overline{a}_{\alpha_{*} + \alpha}:\alpha < \lambda\rangle$. This gives us $\beta < \gamma < \lambda$ such that $$\overline{a} - \overline{a}_{\alpha_{*} +\beta} + \overline{a}_{\alpha_{*} +\gamma} \equiv \overline{a}.$$ Thus it suffices to set $\beta_i=\alpha_*+\beta$ and $\gamma_i=\alpha_*+\gamma.$
So, matters are reduced to showing that $(\ast)$ holds. To see this, we define the formula $\varphi_{\beta, \gamma}$ as $$(+)\quad\quad\quad \varphi_{\beta, \gamma} := \varphi_{\overline{a} - \overline{a}_{\beta} + \overline{a}_{\gamma}},$$ where $\beta < \gamma < \lambda$. Note that $\overline{a},\overline{a}_{\beta} , \overline{a}_{\gamma}\in \varphi_{\overline{a}}(M),$ hence, as $\varphi_{\overline{a}}(M)$ is a subgroup, $\overline{a}-\overline{a}_{\beta}+ \overline{a}_{\gamma}\in \varphi_{\overline{a}}(M).$ Thanks to the minimality condition from Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(3)(b), this implies that $$\varphi_{\overline{a}} \geq \varphi_{\beta, \gamma}.$$ Thus, it is sufficient to find $\beta < \gamma < \lambda$ such that $\varphi_{\overline{a}} \leq \varphi_{\beta, \gamma}$. By the property presented in Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(4)(b) there are $\alpha_{1} < \alpha_{2} < \alpha_{3} < \lambda$ such that $$\varphi_{\alpha_{1}, \alpha_{2}} \geq \varphi_{\alpha_{1}, \alpha_{3}}, \varphi_{\alpha_{2}, \alpha_{3}}.$$ So,
(1) $\overline{a} - \overline{a}_{\alpha_{1}} + \overline{a}_{\alpha_{2}} \in \varphi_{\alpha_{1}, \alpha_{2}}(M)$,
(2) $\overline{a} - \overline{a}_{\alpha_{1}} + \overline{a}_{\alpha_{3}} \in
\varphi_{\alpha_{1}, \alpha_{3}}(M) \subseteq \varphi_{\alpha_{1}, \alpha_{2}}(M)$,
(3) $\overline{a} - \overline{a}_{\alpha_2} + \overline{a}_{\alpha_3} \in \varphi_{\alpha_{2}, \alpha_{3}}(M) \subseteq \varphi_{\alpha_{1}, \alpha_{2}}(M).$
Hence, as $\varphi_{\alpha_{1}, \alpha_{2}}(M)$ is a subgroup, $$\begin{array}{ll}
\overline{a}&=(\overline{a} - \overline{a}_{\alpha_{1}} + \overline{a}_{\alpha_{2}})-(\overline{a} - \overline{a}_{\alpha_{1}} + \overline{a}_{\alpha_{3}})+(\overline{a} - \overline{a}_{\alpha_{2}} + \overline{a}_{\alpha_{3}})\\
&=(1)-(2)+(3) \in\varphi_{\alpha_{1}, \alpha_{2}}(M).
\end{array}$$ Combining this with the minimality property from Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(3)(b), we observe that $\varphi_{\overline{a}} \leq \varphi_{\alpha_1, \alpha_2}$. Thus it suffices to take $\beta=\alpha_1$ and $\gamma=\alpha_2$; indeed, $$\varphi_{\overline{a}}(M) = \varphi_{\overline{a} - \overline{a}_{\alpha_{1}} + \overline{a}_{\alpha_{2}}}(M),$$ in other words, $\overline{a}$ and $\overline{a} - \overline{a}_{\alpha_{1}} + \overline{a}_{\alpha_{2}}$ are equivalent, as promised. ◻
**Corollary 49**. *Suppose $\mathbf{f}=(M, \lambda, \kappa, \theta,\Omega)$ is an additive frame. The following assertions are valid:*
1. *Suppose $\varphi_{\overline{a}}(M)$ has cardinality $\geq \lambda$. Then there is $\overline{b} \in \varphi_{\overline{a}}(M)$ such that $\overline{b} \neq \overline{a}$ and $\overline{b}$ is equivalent to $\overline{a}.$*
2. *If for some $\overline{c}\in$$^{\varepsilon}M$, $\varphi_{\overline{c}}(M)$ has cardinality $\geq \lambda,$ then for all $\overline{a}\in$$^{\varepsilon}M$ the set $$\{\overline{b} \in \varphi_{\overline{a}}(M): \overline{b} \text{~is equivalent to~} \overline{a} \}$$ has cardinality $\geq \lambda.$*
3. *If $\mathbf{f}$ is an additive$^+$ frame and $\varepsilon \in (0, \theta)$, then for all $\overline{a} \in {}^{\varepsilon}M$, the set $\{\overline{b} \in \varphi_{\overline{a}}(M): \overline{b} \text{~is equivalent to~} \overline{a} \}$ has cardinality $\geq \lambda.$*
*Proof.* (1) Since $\varphi_{\overline{a}}(M)$ has cardinality $\geq \lambda$, we can take a sequence $\langle \overline{a}_{\alpha}: \alpha < \lambda\rangle$ of length $\lambda$ of pairwise distinct elements of $\varphi_{\overline{a}}(M)$ with no repetition. We apply Lemma [Lemma 48](#k11){reference-type="ref" reference="k11"} to find increasing sequences $\overline{\beta} = \langle \beta_{i}: i < \lambda \rangle$ and $\overline{\gamma} = \langle \gamma_{i}: i < \lambda \rangle$ such that for all $i<\lambda$
(a) $\beta_{i} < \gamma_{i},$
(b) $\overline{a} - \overline{a}_{\beta_{i}} + \overline{a}_{\gamma_{i}}$ is equivalent to $\overline{a}.$
Set $\overline{b}_i:=\overline{a} - \overline{a}_{\beta_{i}} + \overline{a}_{\gamma_{i}}$. Since $\overline{a}_{\alpha}$'s are distinct, we deduce that $\overline{a} \neq \overline{b}_i$, for at least one $i<\lambda$. Thanks to (b), we know $\overline{b}_i$ is equivalent to $\overline{a}.$
\(2\) Let $X= \{\overline{b} \in \varphi_{\overline{a}}(M): \overline{b} \text{~is equivalent to~} \overline{a} \},$ and let $\mu=|X|.$ Suppose towards contradiction that $\mu < \lambda,$ and let $\langle \overline{b}_i: i<\mu \rangle$ enumerate $X$. Let also $\langle \overline{c}_{\alpha}: \alpha < \lambda\rangle$ be a sequence of length $\lambda$ of pairwise distinct elements of $\varphi_{\overline{c}}(M)$. By induction on $\alpha < \lambda$ we can find $\xi_\alpha < \lambda$ such that
1. $\overline{c}_{\xi_\alpha} \notin \{ \overline{b}_i - \overline{a} +\overline{c}_{\xi_\beta}: \beta < \alpha, i< \mu \}$.
By the argument of clause (1), applied to the sequence $\langle \overline{c}_{\xi_{\alpha}}: \alpha < \lambda\rangle$, we can find some $\beta_{\iota} < \gamma_{\iota}< \lambda$ such that $\overline{a} - \overline{c}_{\xi_{\beta_{\iota}}} + \overline{c}_{\xi_{\gamma_{\iota}}}$ is equivalent (but not equal) to $\overline{a}.$ Thus for some $i<\mu, \overline{a} - \overline{c}_{\xi_{\beta_{\iota}}} + \overline{c}_{\xi_{\gamma_{\iota}}}=\overline{b}_i$. But then $$\overline{c}_{\xi_{\gamma_{\iota}}} = \overline{b}_i - \overline{a} + \overline{c}_{\xi_{\beta_{\iota}}},$$ which contradicts $(*)_{\gamma_{\iota}}$.
\(3\) By Lemma [Lemma 43](#lem3){reference-type="ref" reference="lem3"} and clause (2). ◻
**Lemma 50**.
1. *Suppose $\mathbf{f}=(M, \lambda, \kappa, \theta,\Omega)$ is a general frame, $\varepsilon< \theta$ and $\overline{a}_{\alpha} \in {}^{\varepsilon}M$ for $\alpha < \kappa$. Then there is some $\alpha < \kappa$ such that the set $\{ \beta < \kappa: \overline{a}_{\beta} \in \varphi_{\overline{a}_{\alpha}} (M) \}$ is unbounded in $\kappa.$*
2. *In clause (1), we can replace $\kappa$ by any cardinal $\kappa' \geq \kappa.$*
*Proof.* We prove clause (2). Suppose towards contradiction that, for each $\alpha < \kappa',$ the set $X_\alpha=\{ \beta < \kappa': \overline{a}_{\beta} \in \varphi_{\overline{a}_{\alpha}} (M) \}$ is bounded in $\kappa'.$ So,
1. $\forall\alpha<\kappa', \exists \alpha<\beta_\alpha<\kappa'$ such that $\forall \beta$ with $\beta_\alpha \leq \beta < \kappa'$ we have $\varphi_{\alpha}\ngeq\varphi_{\beta}$ (where $\varphi_{\alpha}:=\varphi_{\overline{a}_{\alpha}}$).
We define an increasing and continuous sequence $\langle \zeta_\alpha:\alpha<\kappa \rangle$ of ordinals less than $\kappa'$, by induction on $\alpha$ as follows:
- $\zeta_0:=0,$
- $\zeta_{\alpha+1}:=\beta_{\zeta_\alpha}$,
- $\zeta_{\delta}:=\lim_{\alpha<\delta}\zeta_\alpha$ for limit ordinal $\delta$.
Consider the sequence $\{\varphi_{\zeta_\alpha}:\alpha<\kappa\}$, and apply the property presented in Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(4)(a) to find $\gamma<\delta<\kappa$ such that
1. $\varphi_{\zeta_\gamma}\geq\varphi_{\zeta_\delta}.$
Since $\gamma<\delta$, $\zeta_\delta\geq \zeta_{\gamma+1}=\beta_{\zeta_\gamma}$. We apply $(*)_1$ for $\alpha:=\zeta_\gamma$ and $\beta:=\zeta_\delta\geq \beta_\alpha$. This gives us $\varphi_{\zeta_\gamma}\ngeq\varphi_{\zeta_\delta},$ which contradicts $(*)_2$. ◻
## On nice frame existence {#3}
**Definition 51**. We say $\mathbf{f}= (M, \lambda, \kappa, \theta,\Omega)$ is nice [when]{.ul} it is as in [Definition 41](#k2){reference-type="ref" reference="k2"} fixing $\langle \varphi_{\overline{a}}: \overline{a} \in {}^{\theta > }M \rangle.$ If $\kappa = \lambda$ we may omit $\kappa.$
**Theorem 52**. * *
*) For an $R$-module $M$ and $\varphi(\overline{x}_{[\varepsilon]}) \in \mathbb{L}_{\infty, \theta}^{\rm{{cpe}}},$ the set of cositive existential formulas (= the closure of the atomic formulas under $\bigwedge_{\alpha < \beta}$ and $(\exists \overline{x}_{[\varepsilon, \zeta) })$ for $\varepsilon< \zeta < \theta$; so no $\neg$ and no disjunction $\vee$), we have:*
- *$\varphi(M) = \{ \overline{a} \in {}^{\varepsilon}M: M \models \varphi[\overline{a}] \}$ is an abelian subgroup of $({}^{\varepsilon}M, +).$*
*) We can use $\mathbb{L}^{\rm{cop}}$ when we also allow $\forall \overline{x}$ ($\rm{cop}$ stands for: no disjunction, no negation).*
*Proof.* Just check, or quote, or see §[\[1\]](#1){reference-type="ref" reference="1"}. ◻
In what follows we need to use a couple of results from [@Sh:110]. To make the paper more self-contained, we borrow some definitions and results from it.
**Definition 53**.
1. By a *tree* we mean a partially-ordered set $(\mathscr{T},\leq )$ such that for all $t\in \mathscr{T}$, $$pred( t ):=\{s\in \mathscr{T}:s<t\},$$ is a well-ordered set; moreover, there is only one element $r$ of $\mathscr{T}$, called the *root* of $\mathscr{T}$, such that $pred( r)$ is empty.
2. The order-type of $pred(t)$ is called the height of $t$, denoted by ht$(t)$.
3. The height of $\mathscr{T}$ is $\sup \{$ht$(t) + 1:t\in \mathscr{T}\}$.
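A standard example, recorded only for orientation: the set ${}^{\omega>}\omega$ of all finite sequences of natural numbers, ordered by "being an initial segment", is a tree in this sense. For $t=\langle t_{0},\dots,t_{n-1}\rangle$ we have $$pred(t)=\big\{\langle t_{0},\dots,t_{m-1}\rangle : m<n\big\},$$ a well-ordered set of order-type $n$; the root is the empty sequence, ${\rm ht}(t)=n$, and the height of this tree is $\omega$.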
**Definition 54**.
1. A *quasi-order* $\mathcal{Q}$ is a pair $(\mathcal{Q},\leq _{\mathcal{Q}})$ where $\leq _{\mathcal{Q}}$ is a reflexive and transitive binary relation on $\mathcal{Q}$.
2. $\mathcal{Q}$ is called $\kappa$*--narrow*, if there is no *antichain* in $\mathcal{Q}$ of size $\kappa$, i.e., for every $f:\kappa
\rightarrow \mathcal{Q}$ there exist $\nu \neq \mu$ such that $f(\nu )\leq _{\mathcal{Q}}f(\mu )$.
3. For a quasi-order $\mathcal{Q}$, a $\mathcal{Q}$*-labeled tree* is a pair $(\mathscr{T},\Phi_{\mathscr{T}})$ consisting of a tree $\mathscr{T}$ of height $\leq \omega$ and a function $\Phi_{\mathscr{T}}:\mathscr{T}\rightarrow \mathcal{Q}$.
4. $\mathcal{Q}$ is $\kappa$-well ordered if for every sequence $\left\langle q_i: i<\kappa \right\rangle$ of elements of $\mathcal{Q},$ there are $i<j<\kappa$ such that $q_i \leq_{\mathcal{Q}} q_j.$
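A quasi-order relevant to our setting (we record this only for orientation): take $\mathcal{Q}=\mathscr{L}_{\mathbf{f}}$, for a general frame $\mathbf{f}$, with the relation of Definition [Definition 44](#k86){reference-type="ref" reference="k86"}(1), $$\varphi \leq_{\mathcal{Q}} \psi \iff \varphi(M)\subseteq \psi(M).$$ It is reflexive and transitive but in general not antisymmetric, since distinct formulas may define the same set, so it is a quasi-order rather than a partial order. Moreover, clause (4)(a) of Definition [Definition 41](#k2){reference-type="ref" reference="k2"} says exactly that the reversed quasi-order $(\mathscr{L}, \geq)$ is $\kappa$-well ordered in the sense of clause (4) above.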
*Remark 55*. On any set of $\mathcal{Q}$-labeled trees we define a quasi-order by: $(\mathscr{T}_{1},\Phi _{1})\preceq (\mathscr{T}_{2},\Phi _{2})$ if and only if there is a function $\nu :\mathscr{T}_{1}\rightarrow \mathscr{T}_{2}$ with the following properties:
- for all $t\in \mathscr{T}_{1}$, $\Phi _{1}(t)\leq _{\mathcal{Q}}\Phi _{2}(\nu (t))$,
- $t\leq _{\mathscr{T}_{1}}t^{\prime } \Longrightarrow \nu(t)\leq
_{\mathscr{T}_{2}}\nu(t^{\prime }),$
- for all $t\in \mathscr{T}_{1}$, ht$_{\mathscr{T}_{1}}(t)=$ht$_{\mathscr{T}_{2}}(\nu(t))$.
**Definition 56**.
1. Given infinite cardinals $\kappa$ and $\mu,$ the notation $\kappa \longrightarrow (\omega)_\mu^{<\omega}$ means that: for every function $f: [\kappa]^{<\omega} \to \mu$, there exists an infinite subset $X \subseteq \kappa$ and a function $g : \omega \to \mu$ such that $f(Y) = g(|Y|)$ for all finite subsets $Y$ of $X$.
2. Let $\kappa_{\rm{beau}}$ denote the first beautiful[^2] cardinal. This is defined as the smallest cardinal $\kappa$ such that $\kappa \longrightarrow (\omega)_2^{<\omega}$.
3. Given a ring $R$ and an infinite cardinal $\theta$, let $\kappa_{\rm{beau}}(R, \theta)$ denote the least cardinal $\kappa$ such that $\kappa \longrightarrow (\omega)_{|R|+\theta^{<\theta}}^{<\omega}$.
4. Given a vocabulary $\tau$, let $\kappa_{\rm{beau}}(\tau,\theta)$ denote the least cardinal $\kappa$ such that $\kappa \longrightarrow (\omega)_{|\tau|+\theta^{<\theta}}^{<\omega}$. If $\theta=\aleph_0$, we may omit it.
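For orientation, we note that the cardinal of clause (2), $$\kappa_{\rm{beau}} = \min\big\{ \kappa: \kappa \longrightarrow (\omega)_{2}^{<\omega} \big\},$$ is the cardinal usually denoted $\kappa(\omega)$ in the partition-calculus literature (the $\omega$-Erdős cardinal); in particular, $\kappa_{\rm{beau}} > \aleph_{0}$.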
Now, we can state:
**Fact 57**. (Shelah, [@Sh:110 theorems 5.3 + 2.10]) [\[qwo\]]{#qwo label="qwo"} Let $\mathcal{Q}$ be a quasi-order of cardinality $<\kappa_{\rm{beau}}$, and $\mathcal{S}$ be a set of $\mathcal{Q}$-labeled trees with $\leq \omega$ levels. Then $\mathcal{S}$ is $\kappa_{\rm{beau}}$-narrow, and even $\kappa_{\rm{beau}}$-well ordered.
We are now ready to state and prove the main result of this section.
**Theorem 58**.
1. *Assume $M$ is an $R$-module and $\kappa=\kappa_{\rm{beau}}(R, \theta)$ and $\Omega$ is such that $1 \in \Omega$ and $|\Omega| \leq \theta$. Also assume that $\lambda \geq \kappa$ is regular and satisfies $\lambda \to (\kappa + 1)_{4}^{3}$. The following hold:*
(a) *$\mathbf{f}= (M, \lambda, \kappa, \theta,\Omega)$ is an additive frame, whenever $$\mathscr{L}\subseteq \{ \varphi(\overline{x}): \varphi \in \mathscr{L}_{\infty, \theta}^{\rm{cop}}(\tau_{M}), \rm{lg}(\overline{x}) < \theta \}$$ is closed under arbitrary conjunctions.*
(b) *$\mathbf{f}= (M, \lambda, \kappa, \theta,\Omega)$ is an additive frame, whenever $$\mathscr{L}\subseteq \{ \varphi(\overline{x}): \varphi \in \mathscr{L}_{\infty, \theta}^{\rm{co}}(\tau_{M}), \rm{lg}(\overline{x}) < \theta \}$$ is closed under arbitrary conjunctions.*
2. *Suppose $M$ is a $\tau$-model, $\kappa=\kappa_{\rm{beau}}(\tau,\theta)$, and $\lambda \geq \kappa$ is regular and satisfies $\lambda \to (\kappa + 1)_{4}^{3}$. Then $\mathbf{f}= (M, \lambda, \kappa, \theta,\Omega)$ is an additive frame, whenever $$\mathscr{L}\subseteq \{ \varphi(\overline{x}): \varphi \in \mathscr{L}_{\infty, \theta}^{\rm{co}}(\tau_{M}), \rm{lg}(\overline{x}) < \theta \}$$ is closed under arbitrary conjunctions.*
3. *Furthermore, if $|M| \geq \lambda$, then in items (i) and (ii), $\mathbf{f}$ becomes an additive$^+$ frame.*
*i) $\mathbf{f}= (\mathscr{L}, \kappa, \lambda, M, \theta)$ is an additive frame [when]{.ul}:*
a) *$\mathscr{L} \subseteq \{ \varphi(\overline{x}_{[\varepsilon]}): \varphi \in \mathbb{L}^{\rm{cos}}(\tau_{M}), \varepsilon< \theta \},$*
b) *$\mathscr{L}$ is closed under arbitrary conjunctions,*
c) *$\kappa > \vert R \vert + \theta, \ \theta \geq \aleph_{0}, \ \kappa$ is a beautiful cardinal,*
d) *$\lambda \geq \kappa$ is weakly compact, or just $\lambda \to (\kappa + 1)_{4}^{3}$ and $\lambda$ regular, e.g. $\lambda = (2^{2^{\kappa}})^{+}$ (or does less suffice? see [Remark 60](#L10){reference-type="ref" reference="L10"}).*
*ii) The claims [Lemma 48](#k11){reference-type="ref" reference="k11"}, [Corollary 49](#k14){reference-type="ref" reference="k14"} hold for $(\mathscr{L}, \lambda, \kappa, M, \theta).$*
*Proof.* (i): We have to show that $\mathbf{f}$ satisfies the relevant items of Definition [Definition 41](#k2){reference-type="ref" reference="k2"}. Items (1), (2) and (5) of part (1) are true by definition. As $M$ is an $R$-module, clause (1) of part (2) is valid, and the validity of clause (2) of part (2) follows from Lemma [Lemma 20](#a5){reference-type="ref" reference="a5"}.
Let us consider clause (3). Thus suppose that $\overline{a} \in {}^{\varepsilon}M$, where $\varepsilon< \theta$. We are going to find some $\varphi \in \mathscr{L}$ such that:
i) $\overline{a} \in \varphi(M),$
ii) if $\psi \in \mathscr{L}$ and $\overline{a} \in \psi(M),$ then $\varphi(M) \subseteq \psi(M).$
For any $\overline{b} \in {}^{\varepsilon}M,$ if there is some formula $\varphi(\overline{x}) \in \mathscr{L}$ such that $$(\dagger)_1 \quad\quad\quad M\models\varphi(\overline{a})\wedge \neg \varphi(\overline{b}),$$ then let $\varphi_{\overline{b}} \in \mathscr{L}$ be such a formula. Otherwise, let $\varphi_{\overline{b}} \in \mathscr{L}$ be any formula such that $\varphi_{ \overline{b}}(M)= {}^{\varepsilon}M$. Finally set $$(\dagger)_2 \quad\quad\quad\varphi:= \bigwedge \{ \varphi_{\overline{b}}: \overline{b} \in {}^{\varepsilon}M \}.$$ Now, we claim that $\varphi$ is as desired. First, we check (3)(a), that is $\overline{a} \in \varphi(M).$ As $$\varphi(M)=\bigcap_{\overline{b} \in {}^{\varepsilon}M} \varphi_{\overline{b}}(M),$$ it suffices to show that $\overline{a} \in \varphi_{\overline{b}}(M)$, for $\overline{b} \in {}^{\varepsilon}M$. Fix $\overline{b}$ as above. If there is no formula $\varphi(\overline{x})$ as in $(\dagger)_1$, then $\overline{a} \in {}^{\varepsilon}M= \varphi_{ \overline{b}}(M)$, and we are done. Otherwise, by its definition, $M \models \varphi_{ \overline{b}}(\overline{a})$, and hence, again we have $\overline{a} \in \varphi_{\overline{b}}(M)$.
To see (3)(b) holds, let $\psi(\overline{x})$ be such that $\overline{a} \in \psi(M).$ We have to show that $\varphi(M) \subseteq \psi(M).$ Suppose by way of contradiction that $\varphi(M) \nsubseteq \psi(M).$ Take $\overline{b}\in\varphi(M)
\setminus \psi(M)$. Now, $M \models \neg \psi(\overline{b}),$ and by our assumption $M \models \psi(\overline{a}).$ In particular, by our construction, the formula $\varphi_{\overline{b}}$ satisfies $$(\dagger)_3\quad\quad\quad M \models \varphi_{\overline{b}}(\overline{a}) \wedge \neg \varphi_{\overline{b}} (\overline{b}).$$
Now,
1. $\overline{b}\in\varphi(M)\stackrel{(\dagger)_2}\subseteq\varphi_{\overline{b}}(M)$,
2. by $(\dagger)_3$, $M \models \neg \varphi_{\overline{b}} (\overline{b})$, and hence $\overline{b}\notin \varphi_{\overline{b}}(M)$.
By $\bullet_{1}$ and $\bullet_{2}$ we get a contradiction.
Now we turn to clause (4). First let us consider (4)(a). Thus suppose that $\varphi_{\alpha}(\overline{x}) \in \mathscr{L}$, for $\alpha < \kappa$, are given. We should find some $\alpha < \beta<\kappa$ such that $\varphi_{\alpha} \geq \varphi_{\beta}$, i.e., $\varphi_{\alpha}(M) \supseteq \varphi_{\beta}(M).$ To this end, first note that we can restrict ourselves to those formulas such that both free and bounded variables appearing in them are among $\{x_i:i<\theta\}$, the set of free variables of $\varphi$ has the form $\{x_\zeta: \zeta < \varepsilon \}$, and the quantifiers have the form $\exists \bar x_{[\varepsilon_0, \varepsilon_1)} \text{~and~} \forall \bar x_{[\varepsilon_0, \varepsilon_1)},$ where $\varepsilon_0 < \varepsilon_1 <\theta,$ and $\bar x_{[\varepsilon_0, \varepsilon_1)}=\langle x_\xi: \varepsilon_0 \leq \xi < \varepsilon_1 \rangle$. In what follows, writing a formula as $\varphi(\bar{x})$, we mean $\bar{x}$ lists the free variables appearing in $\varphi$ in increasing order.
We can represent a formula $\varphi(\bar{x})$ as a pair $(\mathscr{T},c)$ such that
(a) $\mathscr{T}$ is a tree with $\leq \omega$ levels with no infinite branches,
(b) $c$ is a function with domain $\mathscr{T}$,
(c) if $t\in \mathscr{T}\setminus \max(\mathscr{T})$, then $c(t)$ is in the following set $$\bigg\{\wedge,\exists\bar{x},\forall\bar{x}:\bar{x} \emph{ has form } \bar x_{[\varepsilon_0, \varepsilon_1)} \emph{ for some } \varepsilon_0 < \varepsilon_1<\theta\bigg\},$$
(d) if $t\in \max(\mathscr{T})$ then $c(t)$ is an atomic formula in $\tau_M$.
Clearly, $|\mathrm{Rang}(c)|\leq\theta+|R|$. For each $\alpha < \kappa$ set $\varphi_{\alpha}(\bar{x})=(\mathscr{T}_{\alpha},c_{\alpha})$; an example of this representation is given below.
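To illustrate this representation (the particular formula below is ours and serves only as an example): the formula $$\varphi(x_{0},x_{1}) = \exists \bar x_{[2,3)}\,\big( x_{2}=x_{0} \wedge x_{2}+x_{2}=x_{1} \big)$$ corresponds to the pair $(\mathscr{T},c)$ in which the root of $\mathscr{T}$ is labelled $\exists \bar x_{[2,3)}$, its unique successor is labelled $\wedge$, and the two maximal nodes are labelled by the atomic formulas $x_{2}=x_{0}$ and $x_{2}+x_{2}=x_{1}$.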
Let $\mathcal{Q}$ be the union of the ranges of the functions $c_{\alpha}$, $\alpha<\kappa$. Note that it is a quasi-order under the $\leq$ relation defined in Definition [Definition 44](#k86){reference-type="ref" reference="k86"}, and clearly, it has cardinality $$|\mathcal{Q}| \leq |\tau_M| + \theta^{<\theta} <\kappa_{\rm{beau}}.$$ In particular, by Fact [\[qwo\]](#qwo){reference-type="ref" reference="qwo"}, applied to the sequence $$\langle (\mathscr{T}_{\alpha}, c_{\alpha}): \alpha < \kappa \rangle,$$ we get some ${\alpha}<\beta<\kappa$ and a function $f$ with the following property:
(a) $f$ is a 1-1 function from $\mathscr{T}_{\alpha}$ into $\mathscr{T}_{\beta}$, which is level, order and non-order preserving, and such that $$t\in \mathscr{T}_{\alpha}\Rightarrow\big(c_{\beta}(f(t))=c_{\alpha}(t)\big).$$
Let $\mathscr{T}_{\alpha}':=\mathrm{Rang}(f)$ and $c_{\alpha}':=c_{\beta}\upharpoonright\mathscr{T}_{\alpha}'$. By this notation, $\varphi_{\alpha}(\bar{x})$ can also be represented by $(\mathscr{T}_{\alpha}',c_{\alpha}')$. For any $t\in\mathscr{T}_{\alpha}'$ we define $\varphi_t^1$ to be the formula represented by $$\bigg(\{s\in\mathscr{T}_{\alpha}':t\leq_{\mathscr{T}_{\alpha}'}s\},c_{\beta}\upharpoonright \{s\in\mathscr{T}_{\alpha}':t\leq_{\mathscr{T}_{\alpha}'}s\}\bigg),$$ and define $\varphi_t^2$ to be the formula represented via $$\bigg(\{s\in\mathscr{T}_{\beta} :t\leq_{\mathscr{T}_{\beta}}s\},c_{\beta} \upharpoonright \{s\in\mathscr{T}_{\beta}:t\leq_{\mathscr{T}_{\beta}}s\}\bigg).$$ Note that the formula $\varphi_t^1$ may have fewer free variables than $\varphi_t^2$, but we can add the remaining variables. So, we may and do assume that $$\varphi_t^{\ell}=\varphi_t^{\ell}(\bar{x}_t)\quad \ell=1,2.$$ Now we are going to prove that for every $t$, $$\varphi^1_t(M) \supseteq \varphi^2_t(M).$$ This is done by induction on the depth of $t$ inside $\mathscr{T}_{\alpha}'$, which is possible, as $\mathscr{T}_{\alpha}'$ is well-founded. By the way we defined our trees, it suffices to deal with the following cases:
1. Case 1): $c_{\beta}(t)$ is a basic formula, i.e., an atomic formula or its negation.
2. Case 2): $c_{\beta}(t)$ is $\wedge$.
3. Case 3): $c_{\beta}(t)$ is $\exists \overline{x}'$ or just $c_{\beta}(t)$ is $\exists^{\sigma} \overline{x}'$.
4. Case 4): $c_{\beta}(t)$ is $\forall \overline{x}'$ or just $c_{\beta}(t)$ is $\forall^{\sigma} \overline{x}'$.
Let us discuss each case separately:
Case 1): Here, $t$ is a maximal node of the tree $\mathscr{T}_{\beta}$, hence necessarily a maximal node of $\mathscr{T}_{\alpha}'$ as well, and recall that $$\varphi^1_t=c_{\beta}(t)=\varphi^2_t.$$ Consequently, the conclusion is obvious.
Case 2): Here, we have
1. $\varphi^2_t:=\bigwedge\{\varphi^2_s:s\in\mathrm{suc}_{\mathscr{T}_{\beta}}(t)\}$,
2. $\varphi^1_t:=\bigwedge\{\varphi^1_s:s\in\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)\}$.
By the choice of $\mathscr{T}_{\alpha}'$ and the function $f$, we have $$\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)\subseteq \mathrm{suc}_{\mathscr{T}_{\beta}}(t).$$ Now, due to the induction hypothesis, $$s\in\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)\Longrightarrow \varphi^1_s(M) \supseteq \varphi^2_s(M).$$ According to the definition of satisfaction for $\wedge$ we are done, indeed:
$$\begin{array}{ll}
\varphi^1_t(M) & = \bigcap\{\varphi^1_s(M): s\in\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t) \} \\
& \supseteq \bigcap\{\varphi^2_s(M): s\in\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t) \}\\
& \supseteq \bigcap\{\varphi^2_s(M): s\in\mathrm{suc}_{\mathscr{T}_{\beta}}(t) \}=\varphi^2_t(M).
\end{array}$$
Case 3): Let $\overline{x}'_t=\overline{x}_s^{\frown}\overline{x}'$, and recall that $\mathrm{suc}_{\mathscr{T}_{\beta}}(t)$ is a singleton, and also $\mathrm{suc}_{\mathscr{T}_{\alpha}}(f^{-1}(t))$ is a singleton, because $c_{\beta}(t)=c_{\alpha}(f^{-1}(t))$. This implies that $\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)$ is a singleton, say $\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)=\{s\}=\mathrm{suc}_{\mathscr{T}_{\beta}}(t)$. In order to see $\varphi^1_t(M) \supseteq \varphi^2_t(M)$, we take $\overline{a}\in {}^{\lg(\overline{x})}M$ to be such that $M\models\varphi^2_t(\overline{a})$ and shall show that $M\models\varphi^1_t(\overline{a})$. Indeed, $$\varphi^{\ell}_t(\overline{x}):=(\exists^{\sigma} \overline{x}')\varphi^{\ell}_s(\overline{x}_s,\overline{x}')\quad \ell=1,2,$$ and since $$M \models (\exists^{\sigma} \overline{x}')\varphi^{2}_s(\overline{a},\overline{x}'),$$ necessarily, for some pairwise disjoint $\overline{b}_{\zeta} \in {}^{\lg(\overline{x}')}M$, $\zeta<\sigma$, we have $$M\models\varphi^2_s(\overline{a},\overline{b}_{\zeta}).$$ Thanks to the inductive hypothesis, we know $$M\models\varphi^1_s(\overline{a},\overline{b}_{\zeta}).$$ According to the definition of satisfaction, we have $$M\models (\exists^{\sigma} \overline{x}')\varphi^1_s(\overline{a},\overline{x}'),$$ which means that $M\models\varphi^1_t(\overline{a})$, as promised.
Case 4): Let $\overline{x}'_t=\overline{x}_s^{\frown}\overline{x}'$, and recall that $\mathrm{suc}_{\mathscr{T}_{\beta}}(t)$ is a singleton, and also $\mathrm{suc}_{\mathscr{T}_{\alpha}}(f^{-1}(t))$ is a singleton, because $c_{\alpha}(f^{-1}(t))=c_{\beta}(t)$. This implies that $\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)$ is a singleton, say $\mathrm{suc}_{\mathscr{T}_{\alpha}'}(t)=\{s\}=\mathrm{suc}_{\mathscr{T}_{\beta}}(t)$. In order to see $\varphi^1_t(M) \supseteq \varphi^2_t(M)$, we take $\overline{a}\in {}^{\lg(\overline{x})}M$ to be such that $M\models\varphi^2_t(\overline{a})$ and shall show that $M\models\varphi^1_t(\overline{a})$. As in Case (3), we can write $$\varphi^{\ell}_t(\overline{x}):=(\forall^{\sigma} \overline{x}')\varphi^{\ell}_s(\overline{x}_s,\overline{x}')\quad \ell=1,2.$$ Suppose it is not the case that $M \models \varphi^1_t(\overline{a})$. This means, by Definition [Definition 15](#a21){reference-type="ref" reference="a21"}(2)(b), that there are pairwise disjoint $\overline{b}_{\zeta} \in {}^{\lg(\overline{x}')}M$, for $\zeta<\sigma$, such that $$M\models\neg\varphi^1_s(\overline{a},\overline{b}_{\zeta}).$$ By the induction hypothesis, we have $\varphi^1_s(M) \supseteq \varphi^2_s(M)$, hence for each $\zeta<\sigma$, $$M\models\neg\varphi^2_s(\overline{a},\overline{b}_{\zeta}).$$ The latter means that $M \models \varphi^2_t(\overline{a})$ is not true, which contradicts our initial assumption. This completes the proof of clause (4)(a).
Finally, let us turn to check the property presented in clause (4)(b) from Definition [Definition 41](#k2){reference-type="ref" reference="k2"}. Suppose $\varphi_{\alpha, \beta}(\overline{x}) \in \mathscr{L}$ for $\alpha < \beta < \lambda$ are given. We need to find some $\alpha_{1} < \alpha_{2} < \alpha_{3} < \lambda$ such that $$\varphi_{\alpha_{1}, \alpha_{2}} \geq \varphi_{\alpha_{1}, \alpha_{3}}, \varphi_{\alpha_{2}, \alpha_{3}}.$$ To see this, we define a coloring $\mathbf{c}: [\lambda]^{3} \to 2 \times 2$ as follows. Fix $\alpha < \beta < \gamma < \lambda$, and define the following pairing function
$$\mathbf{c} (\{ \alpha, \beta, \gamma \}):= \bigg(\emph{truth value of } \varphi_{\alpha, \beta} \geq \varphi_{\alpha, \gamma} , \emph{ truth value of } \varphi_{\alpha, \gamma} \geq \varphi_{\beta, \gamma} \bigg).$$ By the assumption $\lambda \to (\kappa + 1)_{4}^{3}$, there is $X \subseteq \lambda$ of order type $\kappa +1$ such that $\mathbf{c} \restriction[X]^{3}$ is constant. Let $\alpha_{i} \in X$, for $i \leq \kappa$, be an increasing enumeration of $X$. Consider the sequence $$\{\varphi_{\alpha_{0}, \alpha_{ i}}: i< \kappa\},$$ and apply clause (4)(a) to it. This gives us $i < j < \kappa$ such that $$\varphi_{\alpha_{0}, \alpha_{i}} \geq \varphi_{\alpha_{0}, \alpha_{j}}.$$ Note that this implies that $\mathbf{c} (\alpha_{0},\alpha_{i},\alpha_{j})= (1,\iota)$, for some $\iota \in \{0, 1\}.$ Since $\mathbf{c} \restriction[X]^{3}$ is constant, it follows that for any $\alpha < \beta < \gamma$ from $X$, we have $\mathbf{c} (\alpha, \beta, \gamma)= (1,\iota)$, in particular
- $(*)_1$ $\varphi_{\alpha, \beta} \geq \varphi_{\alpha, \gamma}$, for all $\alpha < \beta < \gamma$ in $X$.
Again, applying clause (4)(a) to the sequence $\{\varphi_{\alpha_{i}, \alpha_{ \kappa}}: i< \kappa\},$ we can find some $i < j < \kappa$ such that $$\varphi_{\alpha_{i}, \alpha_{\kappa}} \geq \varphi_{\alpha_{j}, \alpha_{\kappa}}.$$ It follows that $\mathbf{c} (\alpha_{i},\alpha_{j},\alpha_{\kappa})= (1,1)$, hence as $\mathbf{c} \restriction[X]^{3}$ is constant, we have $\mathbf{c} (\alpha, \beta, \gamma)= (1,1)$, for all $\alpha < \beta < \gamma$ from $X$. In particular,
- $(*)_2$ $\varphi_{\alpha, \gamma} \geq \varphi_{\beta,\gamma }$, for any $\alpha < \beta < \gamma$ in $X$.
Now, combining $(*)_1$ along with $(*)_2$, we get that for all $\alpha < \beta < \gamma$ from $X$ $$\varphi_{\alpha, \beta} \geq \varphi_{\alpha, \gamma} \geq \varphi_{\beta,\gamma },$$ and this completes the proof of (i).
(ii): This is similar to case (i). ◻
*Remark 59*. By the Erdős–Rado partition theorem, see [@EHMR], it suffices to take $\lambda=\beth_2(\kappa)^+$.
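To make the appeal to [@EHMR] explicit, here is the standard instance of the Erdős–Rado theorem being used, recorded only for the reader's convenience: for every infinite cardinal $\kappa$, $$\beth_{2}(\kappa)^{+} \longrightarrow (\kappa^{+})^{3}_{\kappa},$$ and since $4 \leq \kappa$ and $\kappa + 1 < \kappa^{+}$, monotonicity of the partition relation yields $$\beth_{2}(\kappa)^{+} \longrightarrow (\kappa + 1)^{3}_{4},$$ which is exactly the relation $\lambda \to (\kappa+1)^{3}_{4}$ used in the proof above.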
*Remark 60*. In Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"} it suffices to assume $\lambda = \kappa^{+}$ (or just $\lambda = \mathrm{cf}(\lambda) \geq \kappa^{+}).$
*Proof.* Given $\varphi_{\alpha, \beta} \in \mathscr{L}$ for $\alpha < \beta < \lambda$ we can find an increasing sequence $\langle \alpha_{i}: i \leq \kappa \rangle$ of ordinals $< \kappa^{+}$ such that:
$(*)_{1}$ if $i < j_{1} < j_{2} \leq \kappa$ then $\varphi_{\alpha(i), \alpha(j_{1})} \geq \varphi_{\alpha(i), \alpha(j_{2})} \iff \varphi_{\alpha(i), \alpha(j_{1})} \geq \varphi_{\alpha(i), \alpha(\kappa)}.$
Now by clause (4)(a) from Definition [Definition 41](#k2){reference-type="ref" reference="k2"} we have:
$(*)_{2}$ If $i < j_{1} < j_{2} \leq \kappa$ then $\varphi_{\alpha(i), \alpha(j_{1})} \geq \varphi_{\alpha(i), \alpha(j_{2})}.$
Again, applying clause (4)(a) from Definition [Definition 41](#k2){reference-type="ref" reference="k2"} to $\langle \varphi_{\alpha(i), \alpha(\kappa)}: i < \kappa \rangle$, we observe that there are $i < j < \kappa$ such that $$\varphi_{\alpha(i), \alpha(\kappa)} \geq \varphi_{\alpha(j), \alpha(\kappa)}.$$ Combining these together we deduce that $$\varphi_{\alpha(i), \alpha(j)} \geq \varphi_{\alpha(i), \alpha(\kappa)} \geq \varphi_{\alpha(j), \alpha(\kappa)} ,$$ as claimed. ◻
*Question 61*. What about $\kappa = \theta$? See [@Sh:110] (trees with $\omega$ levels labeled by $\mathbb{Q}$, which is $\kappa$-b.q.o.). But this may work better without $\forall$, or with just existential positive formulas.
**Discussion 62**. We try to describe an old, failed attempt to prove the following, for a given $\mathbf{f}$, say as in [Theorem 52](#L4){reference-type="ref" reference="L4"}, with $\mathfrak{L} = \mathbb{L}_{\infty, \theta}^{\rm{pe}}(\tau_{M})$:
- if $\Vert M_{\mathbf{f}} \Vert \geq \lambda_{\mathbf{f}}$, or just $\Vert M_{\mathbf{f}} \Vert$ is large enough, then after forcing with $\rm{Levy}(\aleph_{0}, \Vert M \Vert^{\theta})$, $M$ has a non-trivial automorphism (below, $\mathrm{AP}$ is a set of approximations to an automorphism, really a name of one).
**Definition 63**. We define $\rm{AP}$ by:
a) $(\overline{a}_{1}, \overline{a}_{2}) \in \rm{AP}$ iff $\varphi_{\overline{a}_{1}} \equiv \varphi_{\overline{a}_{2}}$, where $\overline{a}_{1}, \overline{a}_{2} \in {}^{\varepsilon}M$ for some $\varepsilon< \theta,$
b) $(\overline{a}_{1} , \overline{a}_{2}) \leq_{\rm{AP}} (\overline{b}_{1}, \overline{b}_{2})$ iff $\overline{a}_{l} \unlhd \overline{b}_{l}$ for $l = 1, 2$ (and $(\overline{a}_{1}, \overline{a}_{2}), (\overline{b}_{1}, \overline{b}_{2}) \in \rm{AP}).$
**Problem 64**. Suppose we are given $(\overline{a}_{1}, \overline{a}_{2}) \in \rm{AP}$ and $b_{1} \in M$ such that for no $b_{2} \in M$ is $(\overline{a}_{1} b_{1}, \overline{a}_{2}b_{2}) \in \rm{AP}.$ Let us try to analyze this situation.
Let $A_{\overline{a}_{1} b_{1}, \overline{a}_{2}} = \{ b: \varphi_{\overline{a}_{1} b_{1}}[\overline{a}_{2}, b] \}.$ Our problem, i.e. the problematic case, is:
- $b \in A_{\overline{a}_{1}b_{1}, \overline{a}_{2}} \Rightarrow \varphi_{\overline{a}_{1}b_{1}} \gneqq \varphi_{\overline{a}_{2}b}.$
Let $\varphi = \varphi_{\overline{a}_{1}b_{1}}$ (that is, if this does not occur, we are done).
\(B\) For $c \in \varphi(M, \overline{a}_{\theta})$ but $B_{c, \overline{a}_{i}} = \{ d-c: d \in \varphi(M, \overline{a}_{i})) \}$
$(*)_{1}$ $M = M_{\varphi} = \{ b'\overline{a}' - b''\overline{a}': b' \overline{a}', b''\overline{a}' \in \varphi(M) \}$ a subgroup of $\varphi(M),$
$(*)_{2}$ $M_{\varphi}' = \{ y: (y, \overline{a}) \in M \}$ a submodel of $M,$
$(*)_{3}$ if $c\overline{a}_{e} \in \varphi(M)$ then $[ g \in M' \iff (c + g, \overline{a}_{i}) \in \varphi(M)]?$
$(*)_{4}$ choose $b_{2} \in \varphi(M, \overline{a}_{2})$ so $\varphi(M, \overline{a}_{i}) = \{ (g + b_{i}, \overline{a}_{i}): g \in G' \},$
) Define $E_{\varphi} = \{ (\overline{a}_{1}, \overline{a}_{2}): \overline{a} \in \varphi(M) \}$ and for some $\overline{b}_{i} \in \varphi(M, \overline{a}_{i})$
- The bad case: (no $L_{0}$) [iff]{.ul}:
1. $\forall b' \in \varphi(M, \overline{a}_{1}) \, \forall b'' \in \varphi(M, \overline{a}_{2}) [b' \overline{a}_{1} \not\approx b'' \overline{a}_{2}],$
1. Let $\Phi_{\varphi} = \{ \psi: \varphi > \psi \}$ and let $\Phi_{\varphi}^{1} \subseteq \Phi_{\varphi}$ of cardinality $< \kappa$ be a cover,
2. bad case iff $\varphi(M, \overline{a}_{1}) \neq \bigcup_{\psi \in \Phi_{\varphi}^{1}}\psi(M, \overline{a}_{1})$ and $\varphi(M, \overline{a}_{2}) = \bigcup_{\psi \in \Phi_{\varphi}^{1}}\psi(M, \overline{a}_{2}),$
3. is the problem because of $b \in \rm{cl}(\emptyset)?$
**Discussion 65**. Would using a larger cardinal help?
Say $\kappa$ is supercompact, $M \in \rm{H}(\chi), \Vert M \Vert > \kappa, M \in \mathfrak{B} \prec (H(\chi), \in)$ as in super-compactness.
(a) Consider finite $(\overline{a}^{\smallfrown}\overline{b}) \in \rm{AP},$ $\overline{a} \subseteq \mathfrak{B}, \overline{b} \cap \mathfrak{B} = \emptyset,$
(b) Consider $(\overline{a}_{1}, a_{0}), \ \overline{a}_{0} \subseteq \mathfrak{B}, \overline{a}_{1} \not \subseteq \mathfrak{B}$ and we conclude the projections $\overline{a}_{1} \mapsto \overline{a}_{0}, \overline{a}_{0} \mapsto \overline{a}_{0},$
$(*)$ alternatively of $\mathfrak{0} \in \mathfrak{B},$ $\overline{a}_{1}, a_{0}$ some types ones $\mathfrak{B}_{0}.$
4A) conclusion: can we find $c.$
# $\kappa$-algebraic closure {#4}
In this section, among other things, we define the concept of the closure of a sequence inside an additive model and present some of its properties.
**Definition 66**. Suppose $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega)$ is an additive frame, $\varepsilon< \theta$ and $\overline{a} \in {}^{\varepsilon} M$. We define the closure of $\overline{a}$ in $M$ as the following: $$\rm{cl}(\overline{a}, M) := \{ b \in M: \varphi_{{\overline{a}^{\smallfrown} \langle b \rangle }}(M, \overline{a}) \ \text{has cardinality} \ < \kappa \}.$$
(a) $\rm{cl}(\overline{a}, M) = \{ b \in M: \varphi_{{\overline{a}^{\smallfrown} \langle b \rangle }}(M, \overline{a}) \ \text{has cardinality} \ < \kappa \},$
(b) $\rm{afn}(b, \overline{a}) = \{ c \in M: \varphi_{\langle b \rangle^{\smallfrown}\overline{a}} = \varphi_{\langle c \rangle^{\smallfrown}\overline{a}} \},$
(c) $\rm{grp}(b, \overline{a}) = \{ c_{1} - c_{2}: \varphi_{\langle b \rangle^{\smallfrown }\overline{a}} = \varphi_{\langle c_{1} \rangle^{\smallfrown} \overline{a}} = \varphi_{\langle c_{2} \rangle^{\smallfrown} \overline{a}} \},$
(d) $\rm{Grp}(\overline{a}) = \bigcup \{ \rm{grp}(b, \overline{a}): b \in \rm{cl}(\overline{a}) \},$
(e) $\rm{GRp}(b, \overline{a}) = \bigcup \{ \rm{grp}(b, \overline{a}_{\bullet}): \varphi_{\overline{a}_{\bullet}} = \varphi_{\overline{a}} \ \text{and} \ \varphi_{b_{\bullet} \overline{a}_{\bullet}} = \varphi_{b \overline{a}} \}.$
Given $\overline{a} \in{}^{\varepsilon}M$, $\varepsilon< \theta$, recall that the additive frame $\mathbf{f}$ equips us with some formula $\varphi_{\overline{a}} \in \mathscr{L}$ such that:
(a) $\overline{a} \in \varphi_{\overline{a}}(M),$
(b) if $\psi \in \mathscr{L}$ and $\overline{a} \in \psi(M),$ then $\varphi_{\overline{a}}(M) \subseteq \psi(M).$
Here we would like to get more information about such formulas. In the next lemma, we extend Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"} by finding an additive frame for which the formulas $\varphi_{\overline{a}}$ have stronger properties.
**Lemma 67**. *Assume the hypotheses in Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}(2), and let $\mathbf{f}= (M, \lambda, \kappa, \theta,\Omega)$ be the general frame constructed there. Then given $\overline{a}, \overline{b} \in{}^{< \theta}M$, we can find a formula $\varphi_{\overline{a},\overline{b}}( \overline{x},\overline{y}) \in \mathscr{L}$ such that:*
(a) *$M\models \varphi_{\overline{a},\overline{b}}(\overline{a},\overline{b}),$*
(b) *if $\psi(\overline{x},\overline{y}) \in \mathscr{L}$, $\overline{c} \in {}^{\lg(\overline{a})}M,$ and $M\models \psi(\overline{c},\overline{b}),$ then $\varphi_{\overline{a},\overline{b}}(M,\overline{b}) \subseteq \psi(M,\overline{b}),$*
*Furthermore, we can assume that for all $\overline{a}, \overline{b} \in{}^{< \theta}M$, $$\varphi_{\overline{a}}(M) \subseteq \varphi_{\overline{a},\overline{b}}(M,\overline{b}).$$*
*Proof.* The existence of the formulas $\varphi_{\overline{a},\overline{b}}( \overline{x},\overline{y}) \in \mathscr{L}$, can be proved as in the proof of Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}. To get the last part of the lemma, it suffices to replace the formula $\varphi_{\overline{a}}(\overline{x})$, by $$\varphi_{\overline{a}}(\overline{x}) \wedge \bigwedge_{\overline{b} \in{}^{< \theta}M} \varphi_{\overline{a},\overline{b}}( \overline{x},\overline{y}).$$ The lemma follows. ◻
In what follows we will use the following result several times:
**Lemma 68**. *Suppose $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega)$ is a general frame as in Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}, $\varepsilon< \theta$ and $\overline{a} \in {}^{\varepsilon}M$. We assume in addition that $\kappa\in \Omega$ and $\mathscr{L}$ is closed under $\exists^\kappa$. Then the following three assertions are true:*
(a) *the set $\rm{cl}(\overline{a}, M)$ has cardinality $< \kappa.$*
(b) *$\{ a_i: i < \varepsilon\} \subseteq \rm{cl}(\overline{a}, M).$*
(c) *Assume $b \in \rm{cl}(\overline{a}, M)$. Then $\rm{cl}(\overline{a}^{\frown} \langle b \rangle, M) \subseteq \rm{cl}(\overline{a}, M)$.*
*Proof.* (a). Suppose not, and let $\overline{a} \in {}^{\varepsilon}M$ be such that the set $\rm{cl}(\overline{a}, M)$ has cardinality $\geq \kappa.$ This gives us a family $\{b_\alpha:\alpha<\kappa\}$ of distinct elements of $\rm{cl}(\overline{a}, M)$. Define $\overline{a}_\alpha:=\overline{a}^{\smallfrown}\langle b_\alpha \rangle$. In the light of Lemma [Lemma 50](#k17){reference-type="ref" reference="k17"}, there is $\alpha_* < \kappa$ such that the set $$X:=\{ \beta < \kappa: \overline{a}_{\beta} \in \varphi_{\overline{a}_{\alpha_*}} (M) \}$$ is unbounded in $\kappa.$ So, $\{b_{\beta}:\beta\in X\}\subseteq\varphi_{\overline{a}_{\alpha_*}} (M,\overline{a})$, which implies that $$|\varphi_{\overline{a}_{\alpha_*}}(M,\overline{a})|\geq |X|=\kappa.$$ This contradicts the fact that $b_{\alpha_*} \in \rm{cl}(\overline{a}, M).$
(b). Let $i< \varepsilon$, and define the formula $\psi(\overline{x}, y)$ as $$\psi(\overline{x}, y):= (y=x_i).$$ Then $\psi(\overline{a}, M)=\{a_i\}$. Since $\overline{a}^{\frown} \langle a_i\rangle \in \psi(M)$, the minimality property of $\varphi_{\overline{a}^{\frown} \langle a_i\rangle}$ implies that $$\varphi_{\overline{a}^{\frown} \langle a_i\rangle}(\overline{a}, M)\subseteq
\psi(\overline{a}, M)=\{a_i\}.$$ In particular, $|\varphi_{\overline{a}^{\frown} \langle a_i\rangle}(\overline{a}, M)|\leq 1 < \kappa$, and consequently $a_i \in \rm{cl}(\overline{a}, M).$
(c). Suppose $d \in \rm{cl}(\overline{a}^{\frown} \langle b \rangle, M)$. Thanks to Definition [Definition 66](#n8){reference-type="ref" reference="n8"}, $\varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }(\overline{a}, b, M)$ has cardinality less that $\kappa.$ As $b \in \rm{cl}(\overline{a}, M)$, clearly $$(\ast)_1\quad\quad\quad\emph{ the set } B:=\varphi_{{\overline{a}^{\smallfrown} \langle b \rangle }}(\overline{a},M)\emph{ has cardinality } <\kappa.$$ For $b_1\in B$ let $A_{b_1}:=\varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1,M)$.
We now show that $$(\ast)_2\qquad\quad\quad\quad \emph{ if }
b_1\in B \emph{ then } A_{b_1}\emph{ has cardinality } <\kappa.$$
Assume towards a contradiction that $|A_{b_1}| \geq \kappa$ for some $b_1 \in B$. Reformulating this, means that:$$M\models\varphi_{\overline{a}^{\frown} \langle b\rangle}(\overline{a},b_1) \wedge\exists^\kappa z \ \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1,z ).$$ Let $$\psi(\overline{x},{y}):=\varphi_{\overline{a}^{\frown} \langle b\rangle}(\overline{x}, {y}) \wedge\exists^\kappa z \ \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{x}, {y},z ),$$and recall that $\psi\in\mathscr{L}$ and $\overline{a}^{\frown} \langle b_1\rangle \in\psi(M)$. Note that $\overline{a}^{\frown} \langle b \rangle \notin\psi(M)$, as $$|\varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b,M)|<\kappa.$$
Next we bring the following claim: $$(\ast)_{2.1}\quad\quad\quad \overline{a}^{\frown} \langle b_1\rangle \notin \varphi_{\overline{a}^{\frown} \langle b\rangle}(M).$$ To see this we argue by the way of contradiction that $\overline{a}^{\frown} \langle b_1\rangle \in\varphi_{\overline{a}^{\frown} \langle b\rangle}(M)$. This implies following the minimality condition that $$\varphi_{\overline{a}^{\frown} \langle b\rangle}(M)\subseteq \varphi_{\overline{a}^{\frown} \langle b_1\rangle}(M)\subseteq\psi(M).$$Consequently, $\overline{a}^{\frown} \langle b\rangle\in\psi(M)$. This contradiction completes the proof of $(\ast)_{2.1}
.$ But, we have $\langle b_1\rangle \in\varphi_{\overline{a}^{\frown} \langle b\rangle}(\overline{a},M)$. This yields that $$(\ast)_{2.2}\quad\quad\quad\overline{a}^{\frown} \langle b_1\rangle \in \varphi_{\overline{a}^{\frown} \langle b\rangle}(M).$$ But $(\ast)_{2.1}$ and $(\ast)_{2.2}$ together lead to a contradiction. In sum, the desired property $(\ast)_2$ is valid. By our assumptions, $$M \models \forall^\kappa \overline{x} \bigg( \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }(\overline{a}, b, x_i) \rightarrow \exists i \neq j, x_i = x_j\bigg),$$ and $$M \models \forall^\kappa \overline{y} \bigg( \varphi_{\overline{a}^{\frown} \langle b \rangle}(\overline{a}, y_i) \rightarrow \exists i \neq j, y_i = y_j\bigg),$$
This means that $M \models \exists^\kappa z \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1, z)$.
Recalling that $\kappa$ is regular, it follows that $$(\ast)_3\qquad\quad \bigcup_{b_1\in\beta} \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1,M)\emph{ has cardinality } <\kappa.$$ We will show that $$(\dagger)\quad\quad\quad\quad\quad\varphi_{{\overline{a}^{\smallfrown} \langle d \rangle }}(\overline{a},M)\subseteq \bigcup_{b_1\in B} \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1,M),$$ from which it will follow that $\varphi_{{\overline{a}^{\smallfrown} \langle d \rangle }}(\overline{a}, M)$ has cardinality less that $\kappa,$ and hence by definition, $d \in \rm{cl}(\overline{a}, M)$. Let us prove $(\dagger)$. To this end, let $d_1\in\varphi_{{\overline{a}^{\smallfrown} \langle d \rangle }}(\overline{a}, M)$. This implies that $\overline{a}^{\frown}\langle d_1 \rangle\in\varphi_{{\overline{a}^{\smallfrown} \langle d \rangle }}(M)$. Clearly, $$M\models \exists y\varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},y,d),$$ hence, $$M\models \exists y\varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},y,d_1).$$ This gives us some $b_1$ so that $$M\models \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1,d_1).$$ Then ${b_1\in B}$, and $d_1 \in \varphi_{\overline{a}^{\frown} \langle b \rangle^{\smallfrown} \langle d \rangle }( \overline{a},b_1,M),$ and consequently, $(\dagger)$ holds. We are done. ◻
**Definition 69**. Suppose $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega)$ is a general frame. For $\overline{a} \in {}^{<\theta}M$, we introduce the following:
1. $\rm{afn}(b, \overline{a}) = \{ c \in \rm{cl}(\overline{a}, M): \varphi_{\overline{a}^{\smallfrown}\langle b \rangle} \leq \varphi_{\overline{a}^{\smallfrown} \langle c \rangle} \}.$
2. Suppose $\mathbf{f}$ is abelian. Then $\rm{grp}(b, \overline{a}) = \{ c_{1} - c_{2}: c_1, c_2 \in \rm{afn}(b, \overline{a}) \}.$
**Hypothesis 70**. In what follows, and up to the end of this section, let us assume that $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega)$ is an additive frame, $\kappa \in \Omega$ and $\mathscr{L}$ is closed under $\exists^\kappa x$.
**Lemma 71**. *Let $\overline{a} \in {}^{<\theta} M$ and $b\in \rm{cl}(\overline{a}, M)$. Then $\rm{afn}(b, \overline{a})$ is a subset of $\rm{cl}(\overline{a})$ and it is affine.*
*Proof.* First, recall that "$\rm{afn}(b, \overline{a}) \subseteq \rm{cl}(\overline{a}, M)$" holds by the definition. For the second phrase, we have to show that $\rm{afn}(b, \overline{a})$ is closed under $x - y + z$. To see this, let $c_{i} \in \rm{afn}(b, \overline{a})$ for $i = 1, 2, 3$ and set $c := c_{1} - c_{2} + c_{3}.$ Since $\varphi_{\overline{a}^{\smallfrown}\langle b \rangle}(M)$ is affine-closed, $$\overline{a}^{\smallfrown}\langle c \rangle = \overline{a}^{\smallfrown} \langle c_{1} \rangle - \overline{a}^{\smallfrown} \langle c_{2} \rangle + \overline{a}^{\smallfrown} \langle c_{3} \rangle \in \varphi_{\overline{a}^{\smallfrown}\langle b \rangle}(M).$$ According to the minimality, $\varphi_{\overline{a}^{\smallfrown}\langle b \rangle} \leq \varphi_{\overline{a}^{\smallfrown} \langle c \rangle}$. Thanks to Lemma [Lemma 68](#n11){reference-type="ref" reference="n11"}, $c \in \rm{cl}(\overline{a}, M)$. Hence $c\in\rm{afn}(b, \overline{a})$, and we are done. ◻
**Lemma 72**. *The following holds:*
1. *$\rm{grp}(b, \overline{a})$ is a subgroup of $M.$*
2. *$\rm{afn}(b, \overline{a}) = \{ b + d: d \in \rm{grp}(b, \overline{a}) \}.$*
*Proof.* For clause (1), let $c_{i} \in \rm{grp}(b, \overline{a}),$ where $i = 1, 2$. Following definition, there are some $b_{i, 1}, b_{i, 2} \in \rm{afn}(b, \overline{a})$ such that $c_{i} = b_{i, 1} - b_{i, 2}$. So,
$$\begin{array}{ll}
c_{1} - c_{2} &= (b_{1, 1} - b_{1, 2}) - (b_{2, 1} - b_{2, 2})\\
&= (b_{1, 1}- b_{1, 2} + b_{2, 2}) - b_{2, 1}.
\end{array}$$ According to Lemma [Lemma 71](#n14){reference-type="ref" reference="n14"}, we know $b^*_{2, 1}=b_{1, 1}- b_{1, 2} + b_{2, 2} \in \rm{afn}(b, \overline{a})$, and hence by definition of $\rm{grp}(b, \overline{a})$, $$c_{1} - c_{2}= b^*_{2, 1} - b_{2, 1} \in \rm{grp}(b, \overline{a}).$$
To prove clause (2), let $c \in \rm{afn}(b, \overline{a})$. As clearly $b \in \rm{afn}(b, \overline{a}),$ by clause (1), $-b + c \in \rm{grp}(b, \overline{a}),$ hence $$c = b + (-b+c) \in \big\{ b + d: d \in \rm{grp}(b, \overline{a}) \big\}.$$ Conversely, suppose $d \in \rm{grp}(b, \overline{a})$. Due to its definition, there are some $c_1, c_2 \in \rm{afn}(b, \overline{a})$ such that $d= c_1 - c_2$. Consequently, in view of Lemma [Lemma 71](#n14){reference-type="ref" reference="n14"}, we see $$b+d = b -c_2+ c_1 \in \rm{afn}(b, \overline{a}).$$ The equality follows. ◻
*Question 73*. Adopt the notation from Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(c). We ask the following three natural questions:
1. Suppose $\psi (y, \overline{x}_{[\varepsilon]}) = \varphi_{\rm{grp}(b, \overline{a})}$. Is $\psi({}^{\varepsilon}M, \overline{a}) = \rm{grp}(b, \overline{a})?$
2. Is $\rm{cl}(\overline{a})$ affine?
3. Let $b \in \rm{cl}(\overline{a})$. Is $\varphi_{\rm{grp}(b, \overline{a})}(M)$ of cardinality $< \kappa?$
**Definition 74**. Let $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega)$ be an additive frame such that $\kappa \in \Omega$ and $\mathscr{L}$ is closed under $\exists^\kappa x$. We say $\mathbf{f}$ is very nice, if it satisfies the following extra properties:
1. $\mathscr{L}_{\mathbf{f}} = \mathscr{L}_{\infty, \theta}^{\rm{pe}}(\tau_{M})$, or just $\mathscr{L}_{\mathbf{f}}$ is closed under $\exists \overline{x}_u,$ up to equivalence, where $|u| < \theta$.
2. For every $X \subseteq {}^{\varepsilon}M$, there is a formula $\varphi_X(\overline{x})$, such that $X \subseteq \varphi_X(M)$, and if $\psi(\overline{x})$ is such that $X \subseteq \psi(M),$ then $\varphi_X(M) \subseteq \psi(M)$.
**Discussion 75**. By a repetition of the argument presented in the proof of Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"} we know Definition [Definition 74](#n5){reference-type="ref" reference="n5"}(2) holds, when $M$ is an $R$-module and $\mathbf{f}$ is defined as in [Theorem 58](#L7){reference-type="ref" reference="L7"}.
**Proposition 76**. *Suppose the additive frame $\mathbf{f}=(M, \mathscr{L}, \lambda, \kappa, \theta, \Omega)$ is very nice. The following conditions hold.*
1. *Assume $b \in M$ and $\overline{a}\in {}^{\varepsilon}M$. There exists some formula $\varphi(\overline{x}, y)$ with $\mathop{\mathrm{lg}}(\overline{x})=\varepsilon$ such that $\varphi(\overline{a}, M) = \rm{grp}(b, \overline{a}).$*
2. *If $c \in \rm{grp}(b, \overline{a})$ and $b \in \rm{cl}(\overline{a})$ then $c \in \rm{cl}(\overline{a}).$ Moreover, $\rm{cl}(\overline{a})$ is a subgroup of $M$.*
3. *Let $b \in \rm{cl}(\overline{a})$. Then $\varphi_{\rm{grp}(b, \overline{a})}(M)$ is of cardinality $< \kappa.$*
*Proof.* (1): Define the formula $\varphi$ as $$\varphi(\overline{x}, y) = (\exists {y}^{1}, y^{2})\bigg[ \varphi_{\overline{a}^{\smallfrown} \langle b \rangle}(\overline{x}, y^1) \wedge \varphi_{\overline{a} ^{\smallfrown} \langle b \rangle}(\overline{x}, y^2) \wedge y = y^{2} - y^{1}\bigg].$$ We show that $\varphi$ is as required. By Definition [Definition 74](#n5){reference-type="ref" reference="n5"}, $\varphi(\overline{x}, y) \in \mathscr{L}.$
First, suppose that $c \in \rm{grp}(b, \overline{a})$, and let $b_{1}, b_{2} \in \rm{afn}(b, \overline{a})$ be such that $c = b_{2} - b_{1}$. Then $b_{1}, b_{2}$ witness $M \models \varphi(\overline{a}, c).$ Hence $$(\ast)_1\quad\quad\qquad\rm{grp}(b, \overline{a}) \subseteq \varphi(\overline{a}, M).$$ In order to prove the reverse inclusion, suppose that $c \in \varphi(\overline{a}, M).$ This implies that $M \models \varphi(\overline{a}, c)$. Take $b_{1}, b_{2}$ witnessing it, i.e., $c = b_2 - b_1$ and $$M \models \varphi_{\overline{a}^{\smallfrown} \langle b \rangle}(\overline{a}, b_1) \wedge \varphi_{\overline{a} ^{\smallfrown} \langle b \rangle}(\overline{a}, b_2).$$ On the other hand, for $l=1, 2$ we have $$M \models \varphi_{\overline{a}^{\smallfrown} \langle b \rangle}(\overline{a}, b_l) \Rightarrow \varphi_{\overline{a}^{\smallfrown} \langle b \rangle} \leq \varphi_{\overline{a}^{\smallfrown} \langle b_l \rangle},$$ hence $b_{l} \in \rm{afn}(b, \overline{a})$. Consequently, $c=b_2 -b_1 \in \rm{grp}(b, \overline{a}).$ As $c$ was arbitrary, we conclude that $$(\ast)_2 \quad\qquad\quad \varphi(\overline{a}, M) \subseteq \rm{grp}(b, \overline{a}).$$ By $(\ast)_1$ and $(\ast)_2$, we have $\varphi(\overline{a}, M) = \rm{grp}(b, \overline{a})$, and we are done.
(2): In the light of Lemma [Lemma 71](#n14){reference-type="ref" reference="n14"}, it suffices to show that $\rm{cl}(\overline{a})$ is a subgroup of $M$. To this end, let $b_{1}, b_{2} \in \rm{cl}(\overline{a}), \, \varepsilon= \rm{lg}(\overline{a})$ and let $\varphi(\overline{x}, y)$ be as in clause (1). It is easily seen that:
1. $M \models \varphi(\overline{a}, b_2 - b_1),$
2. $\varphi(\overline{a}, M) = \{ b'' - b': b'' \in \varphi_{\overline{a}^{\smallfrown} \langle b_2 \rangle} ( \overline{a}, M) \ \text{and} \ b' \in \varphi_{\overline{a}^{\smallfrown} \langle b_1 \rangle}(\overline{a}, M) \},$
3. $\varphi(\overline{a}, M)$ has cardinality $< \kappa.$
Thanks to the minimality condition, $$\varphi_{\overline{a}^{\smallfrown} \langle b_2 -b_1 \rangle}(M) \subseteq \varphi(\overline{a}, M).$$ In other words, $|\varphi_{\overline{a}^{\smallfrown} \langle b_2 -b_1 \rangle}(M)| < \kappa$, which implies that $b_{2} - b_{1} \in \rm{cl}(\overline{a}).$ We have proved $$b_{1}, b_{2} \in \rm{cl}(\overline{a}) \Rightarrow b_{2} - b_{1} \in \rm{cl}(\overline{a}).$$ Therefore, $\rm{cl}(\rm{\overline{a}})$ is a subgroup of $M.$
(3): The proof is similar to the proof of clause (2). ◻
**Definition 77**. We define $(\mathrm{AP}, \leq_{\mathrm{AP}(2)})$ by:
1. $\mathrm{AP}^{+}$ is the set of $(\overline{a}, \overline{b}_{1}, \overline{b}_{2})$ such that:
a) $\overline{a}, \overline{b}_{1}, \overline{b}_{2} \in {}^{\theta >}M,$
b) $\rm{Rang}(\overline{b}_{l})$ is disjoint to $\rm{cl}(\overline{a})$ for $l = 1, 2,$
c) $\varphi_{\overline{b}_{1} \overline{a}} = \varphi_{\overline{b}_{2} \overline{a}}.$
2. Debt.
**Discussion 78**.
a) The hope is that this is a better set of approximations to an automorphism of $M.$
b) Of course, it is better to approximate a non-trivial decomposition, but this seems more transparent.
The problem now is:
**Problem 79**.
a) If $(\overline{a}, \overline{b}_{1}, \overline{b}_{2}) \in \mathrm{AP}^{+}$ and $c \in \rm{cl}(\overline{a})$ then $(\overline{a}_{c}, \overline{b}_{1}, \overline{b}_{2}) \in \mathrm{AP}^{+}.$
b) If $(\overline{a}, \overline{b}_{1},\overline{b}_{2}) \in \mathrm{AP}^{+}$ and $c_{1} \in M \setminus \rm{cl}(\overline{a})$ then for some $c_{2}, \ (\overline{a}, \overline{b}_{1} c_{1}, \overline{b}_{2}c_{2} ) \in \mathrm{AP}^{+}.$
Another possibility is:
**Definition 80**. Let $(\mathrm{AP}_{2}, \leq_{\mathrm{AP}(2)})$ be:
1. $(\overline{a}, \overline{b}_{1}, \overline{b}_{2}) \in \mathrm{AP}_{2}$ [iff]{.ul}:
(a) $\overline{a} \in {}^{\kappa >}M$ listing $\rm{cl}(\overline{a}^{\smallfrown} u)$ for some finite $u,$
(b) $\overline{b}_{1}, \overline{b}_{2}$ are disjoint to $\overline{a},$
(c) $\varphi_{\overline{b}_{1} \overline{a}} = \varphi_{\overline{b}_{2} \overline{a}}.$
2. the order $\leq_{\mathrm{AP}(2)}$ is $(\overline{a}', \overline{b}_{1}', \overline{b}_{2}') \leq (\overline{a}'', \overline{b}_{1}'', \overline{b}_{2}'')$ [iff]{.ul}:
a) both tuples are in $\mathrm{AP}_{2},$
b) $\overline{b}_{l}' \unlhd \overline{b}_{l}''$ for $l = 1, 2.$
*Question 81*. Fix $\overline{a} \in {}^{\theta >}M,$ and let $\rm{eq}_{\kappa}(b, \overline{a}) = \{ \overline{c}: \ \text{if} \ \overline{c} \subseteq \rm{cl}(\overline{a}) \ \text{then} \ \varphi_{b \overline{c} \, \overline{a}} = \varphi_{d \overline{c} \, \overline{a}} \}.$ In $\mathbf{V}^{\rm{Levy}(\aleph_{0}, \Vert M \Vert ^{< \theta})}$ what is $\{ \pi \restriction M \setminus \rm{eq}_{\kappa}(b, a): \pi \in \rm{Aut}(M), \pi(\overline{a}) = \overline{a} \}.$
# Dichotomy behavior of the absolutely co-Hopfian property
This section is devoted to the proof of Theorem [Theorem 5](#1.2){reference-type="ref" reference="1.2"} from the introduction. We start by recalling the definition of (co)-Hopfian modules.
**Definition 82**. Let $M$ be an $R$-module.
- $M$ is called *Hopfian* if its surjective $R$-endomorphisms are automorphisms.
- $M$ is called *co-Hopfian* if its injective $R$-endomorphisms are automorphisms.
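For orientation, we recall some standard examples (well known, and included only as illustration). Multiplication by $2$ is a one-to-one endomorphism of $\mathbb{Z}$ which is not onto, while every non-zero endomorphism of $\mathbb{Q}$ is multiplication by a non-zero rational and hence an automorphism; so $$\mathbb{Z} \ \text{is Hopfian but not co-Hopfian}, \qquad \mathbb{Q} \ \text{is both Hopfian and co-Hopfian}.$$ Similarly, the Prüfer group $\mathbb{Z}(p^{\infty})$ is co-Hopfian but not Hopfian, as multiplication by $p$ is onto but not one-to-one. Note that the first example is absolute: multiplication by $2$ remains a non-surjective monomorphism of $\mathbb{Z}$ in every forcing extension.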
These notions can be extended to arbitrary $\tau$-models:
**Definition 83**. Let $M$ be a $\tau$-model.
- $M$ is called *Hopfian* if its surjective $\tau$-endomorphisms are $\tau$-automorphisms.
- $M$ is called *co-Hopfian* if its injective $\tau$-endomorphisms are $\tau$-automorphisms.
For the convenience of the reader, we present the definition of potential isomorphism and discuss some basic facts about it which are used in the paper; we only sketch the proofs in most instances.
**Definition 84**. Let $M, N$ be two structures of our vocabulary. Recall that $M$ and $N$ are called *potentially isomorphic* provided they are isomorphic in some forcing extension.
Recall that a group $G$ is called absolutely co-Hopfian (resp. Hopfian) if it is co-Hopfian (resp. Hopfian) in any further generic extension of the universe.
**Discussion 85**. Suppose $M$ and $N$ are potentially isomorphic. According to [@marker] and [@Nad] this holds if and only if for every $\mathscr{L}_{\infty, \aleph_0}$-sentence $\varphi,$ $$(M \models \varphi) \Longleftrightarrow (N \models \varphi).$$ We denote this property by $M \equiv_{\mathscr{L}_{\infty, \aleph_0}} N.$
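As a concrete illustration of this criterion (a standard back-and-forth example, recorded only for orientation): the linear orders $(\mathbb{Q}, <)$ and $(\mathbb{R}, <)$ are not isomorphic, yet $$(\mathbb{Q}, <) \ \equiv_{\mathscr{L}_{\infty, \aleph_0}} \ (\mathbb{R}, <),$$ since the family of finite partial order-isomorphisms between them has the back-and-forth property. Accordingly, after collapsing $2^{\aleph_0}$ to $\aleph_0$, both become countable dense linear orders without endpoints and are then isomorphic by Cantor's theorem, so they are potentially isomorphic.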
The following is a simple variant of Discussion [Discussion 85](#nad){reference-type="ref" reference="nad"}. We state it in our context.
**Lemma 86**. *Assume $M$ and $N$ are two $\tau$-structures.*
1. *Suppose for every sentence $\varphi \in \mathscr{L}_{\infty, \aleph_0}(\tau)$, $$(M \models \varphi) \Longrightarrow (N \models \varphi).$$ Then there is an embedding of $M$ into $N$ in $V[G_{\mathbb{P}}]$, where $\mathbb{P}$ collapses $|M|+|N|$ into $\aleph_0$.*
2. *In clause (1), it suffices to consider sentences $\varphi$ in the closure of base formulas under arbitrary conjunctions and $\exists x.$*
*Proof.* We give a proof for completeness. Let $M=\{a_n: n<\omega \}$ be an enumeration of $M$ in $V[G_{\mathbb{P}}]$. By induction on $n$ we define a sequence $\langle b_n: n<\omega \rangle$ of members of $N$ such that for each formula $\varphi(x_0, \cdots, x_{n})$ from $\mathscr{L}_{\infty, \aleph_0}$, $$(*)_n \quad\quad\quad \big(M \models \varphi(a_0, \cdots, a_{n})\big) \Rightarrow \big( N \models \varphi(b_0, \cdots, b_{n})\big).$$ Let $$\Phi_0(x_0) = \bigwedge \{ \varphi(x_0): M \models \varphi(a_0) \} \in \mathscr{L}_{\infty, \aleph_0}.$$ Then $M \models \Phi_0(a_0)$, and by our assumption, there exists some $b_0 \in N$ such that $N \models \Phi_0(b_0)$. Now suppose that $n<\omega$ and we have defined $b_0, \cdots, b_n.$ We are going to define $b_{n+1}$. Let $$\Phi_{n+1}(x_0, \ldots, x_{n+1}) = \bigwedge \big\{ \varphi(x_0, \ldots, x_{n+1}): M \models \varphi(a_0, \cdots, a_{n+1}) \big\}.$$ Clearly, $\Phi_{n+1}(x_0, \cdots, x_{n+1}) \in \mathscr{L}_{\infty, \aleph_0}$. Also, $$M \models \exists x_{n+1} \Phi_{n+1}(a_0, \cdots, a_n, x_{n+1}).$$ According to the induction hypothesis $(*)_n$, we have $$N \models \exists x_{n+1} \Phi_{n+1}(b_0, \cdots, b_n, x_{n+1}),$$ hence $$N \models \Phi_{n+1}(b_0, \cdots, b_{n+1})$$ for some $b_{n+1} \in N$. This completes the construction of the sequence $\langle b_n: n<\omega \rangle$. The assignment $a_n\mapsto b_n$ defines a map $f: M \to N$ which is an embedding of $M$ into $N$. ◻
**Fact 87**.
1. Let $\lambda \geq \kappa_{\rm{beau}}$ and let $\langle G_{\alpha}: \alpha < \lambda \rangle$ be a sequence of $\tau$-models with $|\tau|< \kappa_{\rm{beau}}(\tau)$. Then in some forcing extension $\mathbf{V}^{\mathbb{P}}, G_{\alpha}$ is embeddable into $G_{\beta},$ for some $\alpha < \beta < \lambda.$ Here, $\mathbb{P}$ collapses $| G_{\alpha}|+|G_{\beta}|$ into $\aleph_0$. Moreover, if $x_{\gamma} \in G_{\gamma}$ for $\gamma < \lambda$ then for some $\alpha < \beta < \lambda,$ in some $\mathbf{V}^{\mathbb{P}}$ there is an embedding of $G_{\alpha}$ into $G_{\beta}$ mapping $x_{\alpha}$ to $x_{\beta}$[^3].
2. Suppose $M$ and $N$ are abelian groups, or $R$-modules, and for every sentence $\varphi \in \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau)$, we have $$(M \models \varphi) \Longrightarrow (N \models \varphi).$$ Then there is an embedding of $M$ into $N$ in $V[G_{\mathbb{P}}]$, where $\mathbb{P}$ collapses $|M|+|N|$ into $\aleph_0$.
3. Moreover, we can strengthen the conclusion of part (2) to the following:
- there is a $\mathbb{P}$-name $\pi$ satisfying:
- If $\overline{a}\in( {}^{<\theta }M)\cap V$ then $\pi(\overline{a})\in ({}^{<\theta } N)\cap V$,
- $\Vdash_\mathbb{P}\pi$ maps $\overline{a}\in( {}^{<\theta }M)\cap V$ onto $\{\overline{b}\in {}^{<\theta }M:\mathrm{rang}(\overline{b}) \subseteq \mathrm{rang}(\pi)\}.$
*Proof.* For (1), see [@Sh:678]. Parts (2) and (3) are standard, see for example [@marker]. (2). By Lemma [Lemma 86](#lem2){reference-type="ref" reference="lem2"}, it suffices to show that for every sentence $\varphi \in \mathscr{L}_{\infty, \aleph_0}$, if $M \models \varphi,$ then $N \models \varphi.$ We argue by induction on the complexity of $\varphi$ that
$(*)_\varphi$ if $\phi \in \{\varphi, \neg\varphi\}$ and $M \models \phi$, then $N \models \phi.$
If $\varphi$ is an atomic formula, then it has the form $$\varphi:= \sum_{i \in I} r_ix_i=0,$$ for some finite index set $I$ and some $r_i \in R$. Now, if $\phi=\varphi,$ this follows from our assumption that $\varphi \in \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau).$ Next, we deal with the formula $\phi=\neg\varphi.$ On the one hand, $\phi$ can be written as $$\phi:= \exists y \big(y= \sum_{i \in I} r_ix_i \wedge y \neq 0 \big).$$ On the other hand we observe $$\exists y \big(y= \sum_{i \in I} r_ix_i \wedge y \neq 0 \big) \in \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau).$$ Again, the claim is clear.
Suppose $(*)_\varphi$ holds. It is routine to check that
- $(*)_{\neg\varphi}$ holds,
- $(*)_{\exists\varphi}$ holds.
Suppose $(*)_{\varphi_i}$ holds for all $i\in I$. It remains to note that $(*)_{\bigwedge_{i \in I}\varphi_i}$ holds as well. So, we are done.
(3): this is similar to part (2). ◻
**Discussion 88**. We may look at model theory essentially replacing "isomorphic" by "almost isomorphic", that is, isomorphisms by potential isomorphisms, see Definition [Definition 84](#al){reference-type="ref" reference="al"}. In [@Sh:12] we suggested reconsidering (in this direction) a major theme in model theory, that of counting the number of isomorphism types.
Now, we are ready to prove:
**Theorem 89**. *The following assertions are valid:*
1. *If $M$ is an abelian group of cardinality $\geq \kappa := \kappa_{\rm{beau}}$, then $M$ is not absolutely co-Hopfian, indeed, after collapsing the size of $M$ into $\omega$, there is a one-to-one endomorphism $\varphi \in \rm{End}(M)$ which is not onto.*
2. *If $M$ is an $R$-module of cardinality $\geq \kappa=\kappa_{\rm{beau}}(R)$, then $M$ is not absolutely co-Hopfian.*
3. *If $M$ is a $\tau$-model of cardinality $\geq \kappa=\kappa_{\rm{beau}}(\tau)$, then $M$ is not absolutely co-Hopfian.*
*Notation 90*. By $A \| B$ we mean $A\subseteq B$ or $B\subseteq A$.
**Theorem 91**. *(1) and (2): Assume $M \in \rm{AB}, \kappa = \kappa_{\rm{beau}} > \theta$ [or]{.ul} $M$ an additive $(< \theta)$-model, $\kappa = \kappa_{\rm{beau}}(\theta).$ Let $\Omega:=\{1,\kappa\}$. If $M$ has cardinality $\geq \kappa$ [then]{.ul} after some forcing there is a one-to-one endomorphism $\varphi \in \rm{End}(M)$ which is not onto.*
*Proof.* Let $M$ be an abelian group or an $R$-module of size $|M| \geq \kappa$. Thanks to Theorem [Theorem 58](#L7){reference-type="ref" reference="L7"}, there exists an additive frame $\mathbf{f}$ as constructed there such that $M:=M_{\mathbf{f}}$, $\mathscr{L}_{\mathbf{f}}=\mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau), \kappa_{\mathbf{f}}=\kappa$, $\lambda_{\mathbf{f}}=\beth_2(\kappa)^+$ and $\theta_{\mathbf{f}}=\aleph_0.$
The proof splits into two cases:
**Case 1: for some $\varepsilon< \theta,$ and $\overline{a}, \overline{b}\in {}^{\varepsilon}M$, $\varphi_{\overline{a}}(M) \supsetneqq \varphi_{\overline{b}}(M)$**.
Consider the $\tau$-models $(M, \overline{b})$ and $(M, \overline{a})$. Let $\varphi \in \mathscr{L}_{\infty, \theta}^{\rm{ce}}(\tau)$ be a sentence, and suppose that $(M, \overline{a}) \models \varphi.$ As $\varphi_{\overline{a}}(M) \supsetneqq \varphi_{\overline{b}}(M)$, it follows that $(M, \overline{b}) \models \varphi.$ Thus by Fact [Fact 87](#ef){reference-type="ref" reference="ef"}(2), working in the generic extension by $\mathbb{P}=\text{Col}(\aleph_0, |M|)$, there exists a one-to-one endomorphism $\pi \in \rm{End}(M)$ such that $\pi(\overline{a}) = \overline{b}.$
There is nothing to prove if $\pi$ is not onto. So, without loss of generality we may and do assume that $\pi$ is onto. Then $\pi, \pi^{-1} \in \rm{Aut}(M)$ and $\pi^{-1}(\overline{b}) = \overline{a}$. We claim that $\varphi_{\overline{a}} \leq \varphi_{\overline{b}}$. Due to the minimality condition for $\varphi_{\overline{a}}$, it is enough to show that $\overline{a}\in \varphi_{\overline{b}}(M)$. To this end, recall that $\overline{b}\in \varphi_{\overline{b}}(M)$. By definition, $M \models \varphi_{\overline{b}}[\overline{b}]$. In the light of Lemma [Lemma 20](#a5){reference-type="ref" reference="a5"}(2) we observe that $$M \models \varphi_{\overline{b}}[\pi^{-1}(\overline{b})]=\varphi_{\overline{b}}[ \overline{a} ].$$ We should be careful: If $\theta_{\mathbf{f}}>\aleph_{0}$, this does not follow, as $${}^{<\theta }M\cap V\neq {}^{<\theta }M\cap V[G_{\mathbb{P}}],$$ but, as $\theta_{\mathbf{f}}=\aleph_{0}$ this is absolute.
By definition, this means that $\overline{a}\in \varphi_{\overline{b}}(M)$, as requested. Consequently, $\varphi_{\overline{a}}(M) \subseteq\varphi_{\overline{b}}(M)$, which contradicts our assumption.
**Case 2: not case $1$**.
Recall that by $A \| B$ we mean that $A\subseteq B$ or $B\subseteq A$ (Notation 90). So in this case, the following holds:
$(*)$: $\forall \varepsilon< \theta$ and $\forall \overline{a},\overline{b}\in {}^{\varepsilon}M$ we have $\bigg(\varphi_{\overline{a}}(M)\|\varphi_{\overline{b}}(M) \Rightarrow \varphi_{\overline{a}}(M)=\varphi_{\overline{b}}(M)\bigg).$
Set $\Gamma = \{ \varphi_{\overline{a}}: \overline{a}\in {}^{<\theta}M \}$. Now, we have the following easy claim.
**Claim 92**. *$\Gamma$ is a set of cardinality $< \kappa.$*
*Proof.* To see this, set $\Gamma_{\varepsilon} := \{ \varphi_{\overline{a}}: \overline{a}\in {}^{\varepsilon}M \}$. Clearly, $\Gamma=\bigcup_{\varepsilon<\theta}\Gamma_{\varepsilon}$, and since $\theta< \kappa$ and $\kappa$ is regular, it suffices to show that $|\Gamma_{\varepsilon}|<\kappa$ for all $\varepsilon<\theta$. Suppose not, and search for a contradiction. Take $\varepsilon<\theta$ such that $|\Gamma_{\varepsilon}|\geq\kappa$. This enables us to find a sequence $\langle \overline{a}_\alpha: \alpha < \kappa \rangle$ in $^{\varepsilon}M$ such that
- $\bullet_1$ $\forall{\alpha}<\kappa,~\varphi_{{\overline{a}}_\alpha}\in\Gamma_{\varepsilon}$
- $\bullet_2$ $\forall \alpha\neq \beta$, $\varphi_{{\overline{a}}_\alpha}(M)\neq \varphi_{{\overline{a}}_\beta}(M)$.
We apply the property presented in Definition [Definition 41](#k2){reference-type="ref" reference="k2"}(5)(a) to the family $\{\varphi_{{\overline{a}}_\alpha} \}_{\alpha < \kappa}$, to find some $\alpha < \beta < \kappa$ such that $\varphi_{{\overline{a}}_\beta}(M) \supseteq \varphi_{{\overline{a}}_\alpha}(M).$ By $(\ast)$, this implies that $\varphi_{{\overline{a}}_\beta}(M) = \varphi_{{\overline{a}}_\alpha}(M),$ which contradicts $\bullet_2$. ◻
Let $\chi:=|M|\geq\kappa$, and let $\mathbb{P}:=\rm{Col}(\aleph_{0},\chi)$. Forcing with $\mathbb{P}$ collapses $|M|$ into $\aleph_0$, i.e., for any $\mathbb{P}$-generic filter $G_{\mathbb{P}}$ over $V$, we have
$V[G_{\mathbb{P}}]\models$"$M$ is countable".
We are going to show that, in $V[G_{\mathbb{P}}]$, there exists a 1-1 map $\pi:M\to M$ which is not surjective. To this end, we define approximations to such a $\pi$:
1. Let $\mathrm{AP}$ be the set of all triples $(\overline{a}, \overline{b}, c)$ such that:
(a) $\overline{a}, \overline{b}\in {}^{\varepsilon}M$ for some $\varepsilon< \theta,$
(b) $\varphi_{\overline{a}} (\overline{x}) \equiv \varphi_{\overline{b}}(\overline{x})$ (in $M$),
(c) $c \in M$ is such that $c \notin \rm{cl}(\overline{b}, M),$ i.e., $\varphi_{\overline{b}^{\smallfrown} \langle c \rangle}(M) \ \text{has cardinality} \ \geq \kappa$.
**Claim 93**. *$\mathrm{AP}\neq \emptyset.$*
*Proof.* According to Lemma [Lemma 68](#n11){reference-type="ref" reference="n11"}(a), $\rm{cl}(\overline{a})$ has cardinality $< \kappa.$ In particular, $|\rm{cl}(\emptyset)|<\kappa$, and hence as $|M| \geq \kappa,$ we can find some $c\in M\setminus\rm{cl}(\emptyset)$, and consequently, $(\langle\rangle,\langle\rangle,c)\in\mathrm{AP}.$ The claim follows. ◻
Next, we bring the following claim, which plays the key role in our proof.
**Claim 94**. *Suppose $(\overline{a}, \overline{b}, c)\in \mathrm{AP}$ and $d_{1} \in M$. Then there is some $d_{2} \in M$ such that $$\big(\overline{a}^{\smallfrown} \langle d_{1} \rangle, \overline{b}^{\smallfrown}\langle d_{2} \rangle, c\big) \in \mathrm{AP}.$$*
*Proof.* Recall that in $M$ we have $\varphi_{\overline{a}} (\overline{x}) \equiv \varphi_{\overline{b}}(\overline{x})$. First, we use this to find $d \in M$ such that $$(\dagger)\quad\quad\quad M \models \varphi_{\overline{a}^{\smallfrown} \langle d_{1} \rangle}(\overline{b}, d).$$ Indeed, we look at the formula $$\psi(\overline{x}):=\exists y \varphi_{\overline{a}^{\smallfrown}\langle d_{1} \rangle}(\overline{x},y).$$Since $\overline{a}\in \psi(M)$, and due to the minimality of $\varphi_{\overline{a}}$ with respect to this property, we should have $$\overline{b}\in \varphi_{\overline{b}} (M) =\varphi_{\overline{a}} (M) \subseteq \psi(M).$$ In other words, $$M \models \exists y\varphi_{\overline{a}^{\smallfrown} \langle d_{1} \rangle}(\overline{b}, y).$$ Hence, for some $d \in M$ we must have $M \models \varphi_{\overline{a}^{\smallfrown} \langle d_{1} \rangle}(\overline{b}, d).$ So, $(\dagger)$ is proved. From this, $\overline{b}^{\smallfrown}\langle d \rangle\in\varphi_{\overline{a}^{\smallfrown} \langle d_{1} \rangle}(M)$. This implies, using the minimality condition on formulas, that $\varphi_{\overline{a}^{\smallfrown}\langle d_{1} \rangle}(M) \subseteq \varphi_{\overline{b}^{\smallfrown}\langle d \rangle}(M)$. Combining this along with $(*)$ yields that $$\varphi_{\overline{a}^{\smallfrown}\langle d_{1} \rangle}(M)= \varphi_{\overline{b}^{\smallfrown}\langle d \rangle}(M).$$ First, we deal with the case $d \in \rm{cl}(\overline{b}, M)$. Thanks to Lemma [Lemma 68](#n11){reference-type="ref" reference="n11"}(c), and recalling that $\Omega$ contains $\{1,{\kappa}\}$, we know $\rm{cl}(\overline{b}^{\frown} \langle d \rangle) =\rm{cl}(\overline{b})$. Consequently, $c \notin \rm{cl}(\overline{b}^{\smallfrown}\langle d \rangle)$. Following definition, one has $$\big(\overline{a}^{\smallfrown} \langle d_{1} \rangle, \overline{b}^{\smallfrown}\langle \rangle, c\big) \in \mathrm{AP},$$ and we are done by taking $d_2=d.$ So, without loss of generality let us assume that $d \notin \rm{cl}(\overline{b}, M).$ It follows that the set $$\mathbf{I}:= \bigg\{ c' \in M: M \models \varphi_{\overline{b}^{\smallfrown} \langle d \rangle}[\overline{b}, c'] \bigg\}$$ has cardinality $\geq \kappa.$ Therefore, there is $c' \in \mathbf{I}\setminus \rm{cl}(\overline{b}^{\smallfrown}\langle d \rangle, M).$ By the minimality condition on $\varphi_{\overline{b}^{\smallfrown}\langle d \rangle}$ and since $\overline{b}^{\smallfrown}\langle c' \rangle \in \varphi_{\overline{b}^{\smallfrown}\langle d \rangle}(M)$, we have $\varphi_{\overline{b}^{\smallfrown}\langle d \rangle}(M) \subseteq \varphi_{\overline{b}^{\smallfrown}\langle c' \rangle}(M)$. Now, we use this along with $(*)$ and deduce that $$\varphi_{\overline{b}^{\smallfrown}\langle d \rangle}(M) = \varphi_{\overline{b}^{\smallfrown}\langle c' \rangle}(M).$$ Then, in the same vein as above, we can find some $d'$ such that $\varphi_{\overline{b}^{\smallfrown} \langle c, d' \rangle}(M) = \varphi_{\overline{b}^{\smallfrown} \langle c', d \rangle}(M)$. This yields that $c \notin \rm{cl}(\overline{b}^{\smallfrown}\langle d' \rangle, M)$, hence $d_2=d'$ is as required. ◻
In $V[G_{\mathbb{P}}]$, $M$ is countable. Let us list $M$ as $\{a_i:i<\omega\}$. We define $\pi:M\to M$ by evaluating $\pi$ at $a_i$. We do this by induction, in such a way that for some fixed $c \in M$ and all $n<\omega$, if $\pi(a_i)=b_i$, then $$(\dagger\dagger)_n \quad\quad\quad \big(\langle a_i:i<n\rangle, \langle b_i:i<n\rangle, c\big) \in \mathrm{AP}.$$ Recall from Claim [Claim 93](#cla2){reference-type="ref" reference="cla2"} and its proof that there is some $c$ in $M$ such that $(\langle\rangle,\langle\rangle,c)\in\mathrm{AP}$. Let us apply Claim [Claim 94](#cla3){reference-type="ref" reference="cla3"} to
- ${\overline{a}}:=\langle\rangle$
- ${\overline{b}}:=\langle\rangle$
- $d_1:=a_0$.
This gives us an element $b_0\in M$ such that $\big(\langle a_0\rangle, \langle b_0\rangle, c\big) \in \mathrm{AP}.$ Let $\pi(a_0)=b_0$. Now, suppose inductively we have defined $\pi(a_i)=b_i$ for all $i<n$ such that $(\dagger\dagger)_n$ is true. Let us apply Claim [Claim 94](#cla3){reference-type="ref" reference="cla3"} to
- ${\overline{a}}:=\langle a_i:i<n\rangle$,
- ${\overline{b}}:=\langle b_i:i<n\rangle$,
- $d_1:=a_n$.
This gives us an element $b_n \in M$ such that $$\big( {\overline{a}}^{\smallfrown} \langle a_n \rangle, {\overline{b}}^{\smallfrown} \langle b_n \rangle, c \big) \in \mathrm{AP}.$$ Let $\pi(a_n)=b_n$, and note that $(\dagger\dagger)_{n+1}$ holds as well. This completes the inductive definition of $\pi$.
The proof becomes complete if we can show the following three items are satisfied:
- $\pi$ is a homomorphism.
- $\pi$ is 1-to-1.
- $\pi$ is not surjective.
Let us check these:
- Suppose $\varphi=\varphi_{\overline{a}}$ is a first order formula, hence, without loss of generality, the length of $\overline{a}$ is finite and we can enlarge $\overline{a}$ to $\langle a_i:i<n\rangle$ for some $n<\omega$. Recall that $b_i:=\pi(a_i)$. By the construction $$\big(\langle a_j:j<n\rangle, \langle b_j:j<n\rangle, c\big) \in \mathrm{AP}.$$ Assume $M\models \varphi(\overline{a})$. We show that $M\models \varphi(\overline{b})$. We have ${\overline{b}}\in\varphi_{\overline{b}}(M)$ and by our construction, $\varphi_{\overline{b}}(M)=\varphi_{\overline{a}}(M)=\varphi(M)$, hence ${\overline{b}}\in\varphi(M)$, which means that $M\models \varphi(\overline{b})$. From this, it immediately follows that $\pi$ is a homomorphism.
- Following the construction given by Claim [Claim 94](#cla3){reference-type="ref" reference="cla3"}, we can always find $b_n$ so that $b_n\neq b_i$ for all $i<n$. So, $\pi$ is 1-to-1.
- Suppose by the way of contradiction that $\pi$ is surjective. In particular, we can find some $n<\omega$ such that $c=\pi(a_n)=b_n$. In the light of Lemma [Lemma 68](#n11){reference-type="ref" reference="n11"}(b) we observe that $$c=b_n \in \rm{cl}\big( \langle b_i: i < n+1 \rangle, M\big),$$ which contradicts the following fact$$\big(\langle a_i: i < n+1 \rangle, \langle b_i: i < n+1 \rangle, c\big) \in \mathrm{AP}.$$
The proof is now complete.
(3): The proof of this item is similar to the first part. ◻
# A new construction of Hopfian groups
In this section we give a new construction of absolutely Hopfian groups. Let us start with the following well-known definition.
**Definition 95**. A group $G$ is pure in a torsion-free abelian group $H$ if $G\subseteq H$ and $nG = nH\cap G$ for every $n\in\mathbb{Z}$.
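For instance (standard examples, recorded only to fix the notion): every direct summand is pure, e.g. $\mathbb{Z}\oplus 0$ is pure in $\mathbb{Z}\oplus\mathbb{Z}$, whereas $2\mathbb{Z}$ is not pure in $\mathbb{Z}$, since for $n=2$ we have $$n\mathbb{Z}\cap 2\mathbb{Z} = 2\mathbb{Z} \neq 4\mathbb{Z} = n\cdot(2\mathbb{Z}).$$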
*Notation 96*. i) Let $G$ be a torsion-free abelian group and $Y$ a subset of $G$. By $\mathrm{PC}_G(Y)$ we mean the minimal pure subgroup of $G$ which includes $Y.$
ii\) The notation $\mathbb{Q}_R$ stands for the field of fractions of the integral domain $R$.
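To illustrate the notation (easy consequences of the definitions just given): for a torsion-free abelian group $G$ one checks that $\mathrm{PC}_G(Y)=\{ g\in G: ng\in\langle Y\rangle \ \text{for some integer} \ n\neq 0 \}$; in particular $$\mathrm{PC}_{\mathbb{Z}}(\{2\})=\mathbb{Z}, \qquad \mathbb{Q}_{\mathbb{Z}}=\mathbb{Q}.$$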
**Lemma 97**. *Suppose $\mathbf f$ is an endomorphism of $M$ such that:*
(a) *$R$ is a commutative torsion free ring with $1_R$,*
(b) *$\mathbf f$ maps $\varphi_0(M)$ onto $\varphi_0(M)$,*
(c) *$M$ is a torsion free $R$-module,*
(d) *$\varphi_\varepsilon(x) \in \mathscr{L}^{ep}_{\infty,\aleph_0}(\tau_R)$, for $\varepsilon \le \varepsilon_*$, where $ep$ stands for existential positive (or just generated from the atomic formulas by $\exists$ and $\wedge$),*
(e) *$\langle \varphi_\varepsilon(M):\varepsilon \le \varepsilon_* \rangle$ is a $\subseteq$-decreasing continuous sequence of sub-modules of $M$,*
(f) *$\varphi_{\varepsilon_*}(M) = \{0_M\}$,*
(g) *$\varphi_\varepsilon(M)/ \varphi_{\varepsilon +1}(M)$ is torsion free of rank 1, so isomorphic to some sub-$R$-module of $\mathbb{Q}_R$ considered as an $R$-module,*
(h) *$x_\varepsilon \in \varphi_\varepsilon(M) \backslash \varphi_{\varepsilon +1}(M)$ for $\varepsilon < \varepsilon_*$,*
(i) *$\varphi_0(M) = \mathrm{PC}_M(\{x_\varepsilon:\varepsilon < \varepsilon_*\})$.*
*Then $\mathbf f \restriction\varphi_0(M)$ is one-to-one.*
*Proof.* For $\varepsilon < \varepsilon_*$ let $y_\varepsilon := \mathbf
f(x_\varepsilon)$. As $\varphi_\varepsilon(x) \in
\mathscr{L}_{\infty,\aleph_0}(\tau_R)$ is existential positive, clearly $\mathbf f$ maps $\varphi_\varepsilon(M)$ into $\varphi_\varepsilon(M)$, hence $y_\varepsilon \in \varphi_\varepsilon(M)$. Set $$\mathscr{U}:= \{\varepsilon <
\varepsilon_*:y_\varepsilon \in \varphi_{\varepsilon +1}(M)\}.$$
**Claim 98**. *$\mathscr{U}= \emptyset$.*
*Proof.* Assume towards a contradiction that $\mathscr{U}\ne \emptyset$ and let $\zeta$ be the first member of $\mathscr{U}$. As we are assuming "$\mathbf f$ maps $\varphi_0(M)$ onto $\varphi_0(M)$\" there is $z \in \varphi_0(M)$ such that $\mathbf f(z) = x_\zeta$. Let $R^+:=R\setminus\{0\}$. Since $z \in \varphi_0(M)$, by items (i) and (c) of the assumption of the lemma we can find
1. $b \in R^+,n\in\mathbb{N}$
2. $\varepsilon_0 < \ldots < \varepsilon_n < \varepsilon_*$ and
3. $a_0,\dotsc,a_n \in R^+$,
such that $$M \models ``bz = \sum\limits_{\ell
\le n} a_\ell x_{\varepsilon_\ell}".$$ Now applying $\mathbf f$ we have $$M \models ``bx_\zeta = \sum\limits_{\ell \le n} a_\ell
y_{\varepsilon_\ell}".$$ We consider two cases:
[Case 1]{.ul}: $\varepsilon_0 < \zeta$.
To see this, note that $bx_\zeta \in \varphi_\zeta(M) \subseteq
\varphi_{\varepsilon_0 +1}(M)$ and for $\ell > 0$ we have $a_\ell y_{\varepsilon_\ell} \in
\varphi_{\varepsilon_\ell}(M) \subseteq \varphi_{\varepsilon_0 +1}(M)$, so as $bx_\zeta = \sum\limits_{\ell \le n} a_\ell
y_{\varepsilon_\ell}$ we get $a_0 y_{\varepsilon_0} \in
\varphi_{\varepsilon_0 +1}(M)$ and as $a_0 \in R^+$ by clause (g) we get $y_{\varepsilon_0} \in \varphi_{\varepsilon_0 +1}(M)$. This contradicts $\varepsilon_0 < \zeta = \min(\mathscr{U})$.
[Case 2]{.ul}: $\varepsilon_0 \ge \zeta$.
On the one hand as $b \in R^+$ clearly $bx_\zeta \notin \varphi_{\zeta +1}(M)$. On the other hand
1. $\varepsilon_\ell > \zeta \Rightarrow a_\ell y_{\varepsilon_\ell} \in
\varphi_{\varepsilon_\ell}(M) \subseteq
\varphi_{\varepsilon_0 +1}(M) \subseteq \varphi_{\zeta +1}(M)$ and
2. $\varepsilon_\ell = \zeta
\Rightarrow \ell = 0 \Rightarrow y_{\varepsilon_\ell} = y_\zeta \in
\varphi_{\zeta +1}(M)$.
Hence $\sum\limits_{\ell \le n} a_\ell
y_{\varepsilon_\ell} \in \varphi_{\zeta +1}(M)$. Combining these together we get a contradiction to $$M \models ``bx_\zeta = \sum\limits_{\ell \le n}
a_\ell y_{\varepsilon_\ell}",$$and we are done. The claim follows. ◻
Now, if $x \in \varphi_0(M) \backslash \{0_M\}$, then by clause (i) of the assumption for some $b \in R^+,n$ and $\varepsilon_0 < \ldots <
\varepsilon_n < \varepsilon_*$ and $a_\ell \in R^+$ for $\ell \le n$ we have $$M \models ``bx = \sum\limits_{\ell \le n} a_\ell
x_{\varepsilon_\ell}"$$ hence $$b \mathbf f(x) = \sum\limits_{\ell \le n} a_\ell
\mathbf f(x_{\varepsilon_\ell}) \in a_0 y_{\varepsilon_0} +
\varphi_{\varepsilon_0 +1}(M).$$ As $a_0 \in R^+,$ and in the light of clause (h) we observe $y_{\varepsilon_0}
\in \varphi_{\varepsilon_0}(M) \backslash
\varphi_{\varepsilon_0 +1}(M)$. By items (f) and (g) of the assumption, $a_0 y_{\varepsilon_0}
\notin \varphi_{\varepsilon_0+1}(M)$. Due to the previous sentence, we know $b \mathbf f(x) \ne 0_M$. Hence $\mathbf f(x) \ne 0_M$. So, we have proved that $\mathbf f$ maps any non-zero member of $\varphi_0(M)$ into a non-zero member of $\varphi_0(M)$. In other words, $\mathbf f \restriction\varphi_0(M)$ is one-to-one as promised. ◻
M. Asgharzadeh, M. Golshani, and Saharon Shelah, *Co-Hopfian and boundedly endo-rigid mixed abelian groups*, arXiv: 2210.17210.
Reinhold Baer, . Bull. Amer. Math. Soc. **50** (1944), 267-278.
R. A. Beaumont and R. S. Pierce, . Trans. Amer. Math. Soc. **91** (1959), 209-219.
R. A. Beaumont, Groups with isomorphic proper subgroups, Bull. Amer. Math. Soc. **51** (1945) 381-387.
Jon Barwise, *Back and forth through infinitary logic*, Studies in Model Theory, Math. Assoc. Amer., Buffalo, N.Y., 1973, pp. 5-34, MAA Studies in Math., Vol. **8**.
G. Braun and L. Strüngmann, . Proc. Amer. Math. Soc. 143 (2015), no. 8, 3331-3341.
G. Baumslag, *Hopficity and Abelian Groups*, in Topics in Abelian Groups (Proc. Sympos., New Mexico State Univ.), Scott Foresman and Co., Chicago, 1963. pp.331-335.
A. L. S. Corner, *Three examples on hopficity in torsion-free abelian groups*, Acta Math. Hungarica **16**, No. 3-4(1965), 303-310.
Peter Crawley, , Bull. Amer. Math. Soc. **68** (1962), no. 5, 463-467.
Paul C. Eklof and Alan Mekler, *Almost free modules: Set theoretic methods*, North--Holland Mathematical Library, vol. **65**, North--Holland Publishing Co., Amsterdam, 2002, Revised Edition.
Erdős, Paul; Hajnal, András; Máté, Attila; Rado, Richard (1984), *Combinatorial set theory: partition relations for cardinals*, Studies in Logic and the Foundations of Mathematics, vol. **106**, Amsterdam: North-Holland Publishing Co.
Paul C. Eklof and Saharon Shelah, *Absolutely rigid systems and absolutely indecomposable groups*, abelian groups and modules (Dublin, 1998), Trends in Mathematics, Birkhäuser, Basel, 1999, math.LO/0010264, pp. 257--268.
Laszlo Fuchs, *Abelian groups*, Springer Monographs in Mathematics. Springer, Cham, 2015.
László Fuchs, *Infinite abelian Groups*, vol. I, II, Academic Press, New York, 1970, 1973.
B. Goldsmith and K. Gong. *Algebraic entropies, Hopficity and co-Hopficity of direct sums of Abelian groups*, Topol. Algebra Appl. **3** (2015), 75-85.
Rüdiger Göbel and Warren May, *Four submodules suffice for realizing algebras over commutative rings*, Journal of Pure and Applied Algebra **65** (1990), 29--43.
Rüdiger Göbel and Jan Trlifaj, *Approximations and endomorphism algebras of modules, vol. i, ii*, de Gruyter Expositions in Mathematics, Walter de Gruyter, 2012.

Rüdiger Göbel and Saharon Shelah, *Absolutely indecomposable modules*, Proceedings of the American Mathematical Society **135** (2007), 1641-1649.
H. Hopf, *Fundamentalgruppe und zweite Bettische Gruppe*, Comment. Math. Helv., **14** (1941), 257-309.
I. Kaplansky, *A note on groups without isomorphic subgroups*, Bull. Amer. Math. Soc. **51** (1945) 529-530.
David Marker, *Lectures on infinitary model theory*, Lecture Notes in Logic, **46**, Association for Symbolic Logic, Chicago, IL; Cambridge University Press, Cambridge, 2016.
M.E. Nadel, Jonathan Stavi, *$L_{\infty\lambda}$-equivalence, isomorphism and potential isomorphism* Trans. Amer. Math. Soc. **236** (1978), 51-74.
M.E. Nadel, *Scott heights of abelian groups*, Journal of Symbolic Logic **59** (1994), 1351--1359.
G. Paolini and S. Shelah, *On the existence of uncountable Hopfian and co-Hopfian abelian groups*, to appear Israel J. Math. arXiv: 2107.11290.
Jean-Pierre Serre, *How to use finite fields for problems concerning infinite fields*, arXiv:0903.0517 \[math.AG\].
W. Magnus, A. Karrass and D. Solitar, *Combinatorial group theory*, Interscience, New York, 1966.
Saharon Shelah, *Infinite abelian groups, Whitehead problem and some constructions*, Israel Journal of Mathematics **18** (1974), 243--256.
Saharon Shelah, *Interpreting set theory in the endomorphism semi-group of a free algebra or in a category*, Ann. Sci. Univ. Clermont (**60** Math. No. 13), (1976), 1-29.
Saharon Shelah, *Modules and infinitary logics*, Groups and model theory, Contemp. Math., vol. **576**, Amer. Math. Soc., Providence, RI, 2012, pp. 305-316.
Saharon Shelah, *The lazy model theorist's guide to stability*, Logique et Analyse, 18e Année, Vol. **71-72** (1975), 241-308.
Saharon Shelah, *Better quasi-orders for uncountable cardinals*, Israel J. Math. **42** (1982), no. 3, 177-226.
W. Szmielew, *Elementary properties of abelian groups*, Fundamenta Mathematicae, **41** (1954), 203-271.
Wolmer V. Vasconcelos, *On finitely generated flat modules*, Trans. Amer. Math. Soc. **138** (1969), 505-512.
[^1]: originally: $\mathrm{set}(b, \overline{a}) = \{ \alpha \in u: \alpha < \alpha_{*} \ \text{and there is} \ \overline{c}\in \varphi(M, \overline{a}) \ \text{such that} \ \forall \alpha < \alpha_{*}(\psi_{\alpha}[\overline{c}, \overline{a}] \Leftrightarrow \alpha \in u) \}.$
[^2]: This also is called the first $\omega$-Erdös cardinal.
[^3]: This explains why [@Fuc73] gets only indecomposable abelian groups (not endo-rigid).
---
abstract: |
  In this paper we prove that there exist four distinct diffeomorphism classes of three-dimensional real Bott towers $M(A)=(S^1)^3/(\mathbb{Z}_2)^3$, and 12 distinct diffeomorphism classes of four-dimensional real Bott towers $M(A)=(S^1)^4/(\mathbb{Z}_2)^4$, where the matrix $A$ corresponds to the action of $(\mathbb{Z}_2)^n$ on $(S^1)^n$ for $n=3,4$.
address: Department of Mathematics, Tokyo Metropolitan University.
author:
- ADMI NAZRA
title: Real Bott Tower
---
# Introduction
In this paper we study free actions of the product of cyclic groups of order 2, $(\mathbb{Z}_2)^n$, on the $n$-dimensional torus $T^n=(S^1)^n$ for $n=3,4$. The action of $(\mathbb{Z}_2)^n$ can be expressed by an upper triangular matrix $A$ whose diagonal entries are one and whose other entries are either one or zero. We call such a matrix an $n$-th *Bott matrix*, and the $n$-dimensional orbit space $M(A)=T^n/(\mathbb{Z}_2)^n$ is said to be a **real Bott tower**. By definition, there are $8$ Bott matrices for $n=3$ and $64$ for $n=4$.
The main purpose of this paper is to determine the diffeomorphism classes of real Bott towers for dimensions $n=3$ and $4$. The original conjecture is whether *$A$ is conjugate to $A'$ if and only if $M(A)$ and $M(A')$ are diffeomorphic.* However, we found that there are non-diffeomorphic Bott towers $M(A), M(A')$ whose representative matrices $A,A'$ are conjugate, so the implication from conjugacy to diffeomorphism fails; whether the converse implication holds is not known so far. On the other hand, the real Bott tower $M(A)$ turns out to be an $n$-dimensional compact Euclidean space form (Riemannian flat manifold), so we can apply the Bieberbach theorems to classify Bott towers.
If $M$ is a compact flat Riemannian manifold of dimension n, then $M$ is affinely diffeomorphic to $\mathbb{R}^n/\Gamma$, where $\Gamma$, the fundamental group of $M$, is a torsion-free discrete uniform subgroup of the Euclidean group $\mathbb {E}(n)=\mathbb {R}^n\rtimes O(n)$. Such groups $\Gamma$ are called Bieberbach groups.
# Preliminaries
Let us start with some definitions and theorems used throughout this paper.
## Groups Acting on Spaces
**Definition 1**. *Suppose that $X$ is a topological space and $G$ is a group. Then we say that $X$ is a $G$-space if $G$ acts on $X$ such that the correspondence $G \times X\longrightarrow X$, $(g,x)\longrightarrow gx$, is continuous.*
**Theorem 2**. *If $G$ is a finite group acting freely on a Hausdorff space $X$ then $p: X \rightarrow X/G$ is a covering.*
*Proof.* Let $G=\{ 1=g_0,g_1,g_2,...,g_n\}$. Since $X$ is Hausdorff, there are open neighborhoods $U_0,U_1,...,U_n$ of $g_0x,g_1x,g_2x,...,g_nx$ respectively with $U_0\cap U_j=\emptyset$ for $j=1,2,...,n$. Let $U$ be the intersection $\cap_{j=0}^{n}\,g^{-1}_jU_j$, which is clearly an open neighborhood of $x$ (each $g_j$ acts as a homeomorphism, so each $g^{-1}_jU_j$ is open).
Now $g_iU=\cap_{j=0}^{n}\, g_i(g^{-1}_jU_j)\subseteq U_i$ and $$\begin{aligned}
g_iU\cap g_jU&=g_j((g^{-1}_jg_iU)\cap U)\\
&=g_j(g_kU\cap U) \,\,\,\,\,\,\ \text {(for some k)}\\
&=\emptyset\end{aligned}$$ since $g_kU\subseteq U_k$ and $U\subseteq U_0$.
Next note that $p:X\longrightarrow X/G$ is a continuous surjective open map. Since $p$ is an open map, $p(U)$ is an open neighborhood of $Gx=p(x)$ and $p^{-1}(p(U))={\cup}_{g\in G} \,gU$ with $\{gU;g\in G\}$ being disjoint open subsets of $X$. Furthermore $p|gU:gU\rightarrow p(U)$ is a continuous open bijective mapping and hence a homeomorphism. ◻
We say $G$ is *discontinuous* at $x\in X$ if, given any sequence $\{\gamma_i\}\subset G$ of distinct elements, the sequence $\{\gamma_i x\}\subset X$ has no accumulation point. $G$ is *discontinuous* on $X$ if it is discontinuous at every point of $X$. The action of $G$ is *properly discontinuous* on $X$ if for every compact subset $K$ of $X$ the set of $\gamma\in G$ such that $\gamma K\cap K\ne \emptyset$ is finite. If $\{\gamma\in G|\gamma K\cap K\ne \emptyset \}$ is compact for every compact $K$, then the action of $G$ is called a *proper* action.
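For example, the action of $\mathbb{Z}$ on $\mathbb{R}$ by $n\cdot x=x+n$ is properly discontinuous: if a compact set $K$ is contained in $[-R,R]$, then $nK\cap K\ne \emptyset$ forces $|n|\le 2R$. By contrast, the $\mathbb{Z}$-action on $S^1$ generated by an irrational rotation is not discontinuous at any point, since every orbit is dense in the compact space $S^1$.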
**Lemma 3**. *If $G$ acts properly discontinuously on $X$ then $G$ is discontinuous on $X$.*
*Proof.* Suppose $G$ is not discontinuous on $X$. That is, there are a point $x\in X$, a sequence $\{\gamma_i\}$ of distinct elements of $G$, and a point $y\in X$ such that (after passing to a subsequence) $\gamma_i x\rightarrow y$. Then $K=\{x,y,\gamma_1x,\gamma_2x,....\}$ is a compact subset of $X$. As $\gamma_ix\in K\cap \gamma_iK$ for each $i$, we have a contradiction. ◻
Let $G$ be a Hausdorff topological group. A *discrete subgroup* is a group with a countable number of elements and the discrete topology (every point is an open set).
**Lemma 4**. *Let $\Gamma$ and $K$ be subgroups of $G$ with $K$ compact and $G$ locally compact. These conditions are equivalent:*
- *$\Gamma$ is discontinuous at some point of $G/K$,*
- *$\Gamma$ is discontinuous on $G/K$,*
- *$\Gamma$ is properly discontinuous on $G/K$,*
- *$\Gamma$ is discrete in $G$.*
*Proof.* $((i)\Rightarrow (iv))$. Let $\Gamma$ be discontinuous at $y=gK\in G/K$, for some $g \in G$. Then $y$ has a neighborhood $UK$, $U$ open in $G$ such that $\{\gamma\in \Gamma|\gamma(y)\in UK \}$ is finite. Now $Ug^{-1}$ is a neighborhood of $1$ in $G$ which has finite intersection with $\Gamma$. For this, let $Ug^{-1}\cap \Gamma=\{\gamma_i=u_i g^{-1}|u_i\in U, \gamma_i\in \Gamma\}$. Then $\gamma_i g =u_i g^{-1}g=u_i$, $\gamma_i gK=u_iK\in UK$, so $\gamma_i y=u_iK\in UK$. Thus $Ug^{-1}\cap \Gamma$ is finite. As $G$ is Hausdorff, a smaller neighborhood of $1$ meets $\Gamma$ only in $1$. Let $Ug^{-1}=U'$. Then for any $\gamma\in \Gamma$, $V=\gamma U'$ contains only the element $\gamma$ of $\Gamma$. Thus $\Gamma$ is discrete in $G$.\
$((iv)\Rightarrow (iii))$. Let $\Gamma$ be discrete. Since $G$ is locally compact, we choose a neighborhood $U$ of $1$ in $G$ with compact closure $\bar U$. For $g\in G$, put $V_g=gUKU^{-1}g^{-1}$, which has compact closure $\bar V_g$. Note that $\gamma(p(gU))\cap p(gU)\ne \emptyset \Leftrightarrow \gamma \in V_g$: if $\gamma\, g u_1 K=g u_2 K$ for some $u_1,u_2\in U$, then $\gamma\in gUKU^{-1}g^{-1}$, and conversely. Since $\Gamma$ is discrete, it is closed in $G$, so $\Gamma\cap \bar V_g$ is a discrete closed subset of the compact set $\bar V_g$ and hence finite. So, for any compact subset $C=p(g \bar U)\subset G/K$, $\{\gamma\in\Gamma| \gamma C\cap C\not=\emptyset \}$ is finite. Thus $\Gamma$ is properly discontinuous.\
$((iii)\Rightarrow (ii))$ by Lemma [Lemma 3](#L1){reference-type="ref" reference="L1"}, and $((ii)\Rightarrow (i))$ trivially. ◻
## Group Extension and 2-cocycle
A group extension is a short exact sequence $$\begin{CD}
1 @>>> \Delta@>>> \pi @>p>> Q @>>> 1
\end{CD}$$ where $\Delta$ is a normal subgroup of $\pi$. There is a homomorphism $$\mu:\pi {\longrightarrow}Aut(\Delta)\,\,\,\,\, \text {by} \,\,g\mapsto \mu_g;\,\,\mu_g(n)=gng^{-1}\,\,(g\in \pi,\, n\in \Delta).$$ It is easily checked that $$\label{Ps0}
\mu_{gh}(n)=\mu_g\mu_h(n).$$ Choose a section $s:Q{\longrightarrow}\pi$ such that $p\circ s=id$ and $s(1)=1$. Consider the map $\phi:Q{\longrightarrow}Aut(\Delta)$ given by $\phi(\alpha)=\mu_{s(\alpha)}$ $(\alpha\in Q).$ Then suppose $$\begin{CD}
1 @>>> Inn(\Delta) @>>> Aut(\Delta) @>\nu >> Out(\Delta) @>>> 1
\end{CD}$$ is the natural exact sequence. Then $\phi$ induces a homomorphism $$\label{Ps2}
\varphi:Q{\longrightarrow}Out(\Delta)\,\,\ \alpha\mapsto \nu\circ \mu_{s(\alpha)}.$$ Since $s(\alpha\beta)$, $s(\alpha)s(\beta)$ are mapped to $\alpha\beta\in Q$, i.e $$\begin{aligned}
ps(\alpha\beta)&=\alpha\beta=ps(\alpha)ps(\beta)\\
&=p(s(\alpha)s(\beta)),\end{aligned}$$ so there exist an element $f:Q\times Q {\rightarrow}\Delta$, $f(\alpha,\beta)\in \Delta$ such that $$\begin{aligned}
f(\alpha,\beta)s(\alpha\beta)=s(\alpha)s(\beta).\end{aligned}$$ The function $f$ satisfies the following conditions.
- $\phi(\alpha)(\phi(\beta)(n))=f(\alpha,\beta)\phi(\alpha\beta)(n)f(\alpha,\beta)^{-1}$
- $f(\alpha,1)=f(1,\alpha)=1$
- $\phi(\alpha)(f(\beta,\gamma))f(\alpha,\beta\gamma)=f(\alpha,\beta)f(\alpha\beta,\gamma)$.
Conversely, given a function $\phi:Q{\rightarrow}Aut(\Delta)$ and a function $f:Q\times Q{\rightarrow}\Delta$ satisfying (i)-(iii), we define a group law in the product $\Delta\times Q:$ $$\label{cycle1}
(n,\alpha)(m,\beta)=(n\phi(\alpha)(m)f(\alpha,\beta), \alpha\beta).$$ By this product [\[cycle1\]](#cycle1){reference-type="eqref" reference="cycle1"}, we have a group $\pi=\Delta\times Q$. Setting $p((n,\alpha))=\alpha$, we obtain a group extension $1{\rightarrow}\Delta {\rightarrow}\pi \stackrel {p}{\rightarrow}Q{\rightarrow}1$.
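As a simple illustration of the product [\[cycle1\]](#cycle1){reference-type="eqref" reference="cycle1"} (with $\Delta$ written additively), take $Q=\mathbb{Z}_2=\{1,\sigma\}$, $\Delta=\mathbb{Z}$, $\phi$ trivial, $f(\sigma,\sigma)=1$ and all other values of $f$ equal to $0$. Then $(0,\sigma)(0,\sigma)=(1,1)$, so $(0,\sigma)$ has infinite order and $\pi\cong\mathbb{Z}$, whereas $f\equiv 0$ gives the direct product $\mathbb{Z}\times\mathbb{Z}_2$; thus different $2$-cocycles may produce non-isomorphic extensions.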
Next let us consider $$\begin{aligned}
f(\alpha,\beta)s(\alpha\beta) n s(\alpha\beta)^{-1}f(\alpha,\beta)^{-1}&=s(\alpha)s(\beta)n s(\alpha\beta)^{-1}f(\alpha,\beta)^{-1}\\
\mu_{f(\alpha,\beta)s(\alpha\beta)}(n) &=s(\alpha)s(\beta)n s(\beta)^{-1}s(\alpha)^{-1}\\
\mu_{f(\alpha,\beta)} \mu_{s(\alpha\beta)}(n) &=\mu_{s(\alpha)s(\beta)}(n)\,\,\,\,\,\,\,\,\text {by}\,\, \eqref{Ps0}\\
&=\mu_{s(\alpha)}\mu_{s(\beta)}(n).\end{aligned}$$ Since $\nu(Inn(\Delta))=1$, and let $a'=f(\alpha,\beta)\in \Delta$, $\mu_{f(\alpha,\beta)}(n)=\mu_{a'}(n)=a'na'^{-1}\in Inn(\Delta),$ then for any $n\in \Delta$, $\nu(\mu_{f(\alpha,\beta)})=1$. So $$\begin{aligned}
\nu(\mu_{f(\alpha,\beta)} \mu_{s(\alpha\beta)}) &=\nu(\mu_{s(\alpha)}\mu_{s(\beta)})\\
\nu(\mu_{f(\alpha,\beta)}) \nu( \mu_{s(\alpha\beta)}) &=\nu(\mu_{s(\alpha)}) \nu(\mu_{s(\beta)})\\
\nu( \mu_{s(\alpha\beta)}) &=\nu(\mu_{s(\alpha)}) \nu(\mu_{s(\beta)})\\
\varphi(\alpha\beta) &=\varphi(\alpha) \varphi(\beta).\end{aligned}$$ Hence $\varphi$ is a homomorphism. We call $\varphi$ an *abstract kernel*.
Next, let us consider the case where $\Delta=A$ is abelian. In other words, we try to classify all group extensions of an abelian group $A$ by $Q$ with abstract kernel $\varphi:Q{\longrightarrow}Out(A)$. Since $A$ is abelian, $Aut(A)=Out(A)$, so $\varphi=\phi:Q{\longrightarrow}Aut(A)$. Moreover, among the conditions on the function $f$ above, condition (i) holds automatically. A map $f:Q\times Q {\longrightarrow}A$ satisfying the two equalities (ii) and (iii) above is called a *2-cocycle* for the abstract kernel $\phi$. The set of all 2-cocycles is denoted by $Z_{\phi}^2(Q;A)$ or simply, by $Z^2(Q;A)$.
Let $\lambda:Q{\rightarrow}A$ be any map such that $\lambda(1)=1$. Define $\delta' \lambda:Q\times Q {\rightarrow}A$ by $$\delta' \lambda(\alpha, \beta)=\phi(\alpha)(\lambda(\beta))\lambda(\alpha) \lambda(\alpha\beta)^{-1}.$$ It turns out that such a $\delta' \lambda$ is a 2-cocycle and is called a *2-coboundary*. The set of all 2-coboundaries is denoted by $B_{\phi}^2(Q;A)$. Clearly, $B_{\phi}^2(Q;A)$ is a subgroup of $Z_{\phi}^2(Q;A)$. Let $f_1, f_2: Q\times Q {\rightarrow}A$ be 2-cocycles. We say $f_1$ is *cohomologous* to $f_2$ if there is a map $\lambda$ such that $f_1(\alpha, \beta)=\delta'\lambda(\alpha, \beta) f_2(\alpha, \beta)$ $(\forall \alpha, \beta\in Q)$.
We define the second cohomology group as the quotient group $$H_{\phi}^2(Q;A)=Z_{\phi}^2(Q;A)/B_{\phi}^2(Q;A).$$
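For instance, when $Q=\mathbb{Z}_2$ acts trivially on $A=\mathbb{Z}$, a direct computation gives $H_{\phi}^2(Q;A)\cong\mathbb{Z}_2$: the trivial class corresponds to the direct product $\mathbb{Z}\times\mathbb{Z}_2$, and the non-trivial class to the extension $\pi\cong\mathbb{Z}$ described above.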
From hypothesis 1 below, when ${\mathcal N}={\mathbb R}^n$, $H^2_{\bar\phi}(Q;\mathcal N)=0$ for any finite group $Q$. Then there is a function $\chi:Q{\rightarrow}\mathcal N$ ($1$-chain) such that $f=\delta'\chi$; $$\label{coboun}
f(\alpha,\beta)=\bar\phi(\alpha)(\chi(\beta))\chi(\alpha)\chi(\alpha\beta)^{-1}
\ (\alpha,\beta\in Q).$$ Now let's check the property of $\bar\phi$. As $\phi(\alpha)(n)=s(\alpha)ns(\alpha)^{-1}$, we see that $$\begin{split}
\phi(\alpha)(\phi(\beta)(n))&=s(\alpha)(s(\beta)ns(\beta)^{-1})s(\alpha)^{-1}\\
&=f(\alpha,\beta)s(\alpha\beta)ns(\alpha\beta)^{-1}f(\alpha,\beta)^{-1}\\
&=f(\alpha,\beta)\phi(\alpha\beta)(n)f(\alpha, \beta)^{-1}
\end{split}$$ By the uniqueness of $\bar\phi$, the following holds.
$$\label{rel1}
\bar\phi(\alpha)(\bar\phi(\beta)(x))=
f(\alpha,\beta)\bar\phi(\alpha\beta)(x)f(\alpha,\beta)^{-1} \ (x\in \mathcal N).$$
Now, we give three **hypotheses** as follows:
- Let ${\mathcal N}$ be a connected Lie group containing $\Delta$ as a discrete uniform subgroup. $(\Delta, {\mathcal N})$ has the unique extension property: if $\phi(\alpha):\Delta{\longrightarrow}\Delta$ is an automorphism, then $\phi(\alpha)$ extends to a unique automorphism $\bar \phi(\alpha):{\mathcal N}{\longrightarrow}{\mathcal N}$.
- ${\mathcal N}$ acts properly and freely on $X$ such that there is a principal bundle $${\mathcal N}{\longrightarrow}X{\longrightarrow}W \,\,\, \text{with}\,\, W=X/{\mathcal N}\,\, \text {a manifold}.$$
- Let $(\pi,X)$ be a properly discontinuous action.
By hypothesis 1, we can consider the *pushout* $\pi{\mathcal N}=\{(n,\gamma)|n\in {\mathcal N}, \gamma\in Q\}$; $$\label{it1}
\begin{CD}
1 @>>> \Delta @>>> \pi @>>> Q @>>> 1\\
@. @VVV @VVV {||} @.\\
1@>>> {\mathcal N}@>>>\pi{\mathcal N}@>>> Q @>>> 1,
\end{CD}$$ particularly, note that $$\label{it2}
\pi \cap {\mathcal N}=\Delta,$$ and $$\label{it3}
\pi \,\,\text{normalizes}\,\, {\mathcal N},$$ since for any $(x,\gamma)\in \pi$ and $(n,1)\in {\mathcal N}$, $(x,\gamma)(n,1)(x,\gamma)^{-1}=$\
$(x \bar \phi(\gamma)(n)x^{-1},1)\in {\mathcal N}.$
**Theorem 5**. *There is a proper action $(\pi{\mathcal N}, X)$ such that the following is equivariant fibration; $$({\mathcal N},{\mathcal N}){\longrightarrow}(\pi{\mathcal N},X) \stackrel {p} {\longrightarrow}(Q,W).$$*
*Proof.* We note that ${\mathcal N}$ acts properly and $\pi$ acts properly discontinuous on $X$ (hypothesis 2,3). Consider the natural projection ${\mathcal N}{\longrightarrow}\pi{\mathcal N}/\pi$ which induces a quotient diffeomorphism ${\mathcal N}/{\mathcal N}\cap \pi=\pi{\mathcal N}/\pi$. Since ${\mathcal N}\cap \pi=\Delta$ by [\[it2\]](#it2){reference-type="eqref" reference="it2"} and ${\mathcal N}/\Delta$ is compact ($\Delta$ is an uniform subgroup of ${\mathcal N}$ ), we can choose a compact subset $K\subset {\mathcal N}$ such that $\pi{\mathcal N}=\pi K (\subset \pi{\mathcal N})$.
Now we prove that $\pi{\mathcal N}$ acts properly. Let $(n_i,\gamma_i)\in \pi{\mathcal N}$ act on $x_i\in X$ where $\lim x_i=x\in X\,\,(i{\rightarrow}\infty )$. Suppose $(n_i,\gamma_i)x_i{\rightarrow}y\in X \,\,(i{\rightarrow}\infty )$. As above, $(n_i,\gamma_i)=s_ik_i$ $(s_i\in \pi, \,\,k_i\in K)$. Then $(n_i,\gamma_i)x_i=s_i k_i x_i=s_i(k_i x_i)$. As $K$ is compact, we may assume $k_i {\rightarrow}k\in K\,\,(i{\rightarrow}\infty )$, so that $k_i x_i {\rightarrow}kx\,\,(i{\rightarrow}\infty )$. Since $s_i(k_i x_i){\rightarrow}y\,\,(i{\rightarrow}\infty )$ and $\pi$ acts properly discontinuously, $\{s_i\}$ consists of finitely many elements; passing to a subsequence, $s_i=s\in \pi$ for all sufficiently large $i$. Hence $(n_i,\gamma_i)=s_ik_i {\rightarrow}sk\in \pi{\mathcal N}$. ◻
## Rigid Motions
**Definition 6**. *A rigid motion is an ordered pair $(s,M)$ with $M\in O(n)$ and $s\in\mathbb{R}^n$. A rigid motion acts on $\mathbb{R}^n$ by $$\label{Ri1}
(s,M)x=Mx+s \,\,\,\, for \,\, x\in\mathbb{R}^n.$$*
We multiply rigid motions by composing them, so $$\label{Ri2}
(s,M)(t,N)=(Mt+s,MN).$$ We denote the group of rigid motions (of $\mathbb{R}^n$) by $\mathbb{E}(n)$.
**Definition 7**. *An affine motion is an ordered pair $(s,M)$ with $M\in GL(n,\mathbb{R})$ and $s\in\mathbb{R}^n$.*
Affine motions act on $\mathbb{R}^n$ via [\[Ri1\]](#Ri1){reference-type="eqref" reference="Ri1"} and form a group, $\mathbb{A}(n)$, under [\[Ri2\]](#Ri2){reference-type="eqref" reference="Ri2"}.
**Definition 8**. *Define a homomorphism $L:\mathbb{E}(n) \rightarrow O(n)$ by $$L(s,M)=M.$$ If $\alpha\in\mathbb{E}(n)$, $L(\alpha)$ is called the rotational part of $\alpha$. We can also define the translational part of $\alpha$ by $$T(s,M)=s,$$ but the map $T:\mathbb{E}(n)\rightarrow \mathbb{R}^n$ is not a homomorphism.*
We say $(s,M)$ is a *pure translation* if $M=I,$ the identity matrix. If $\Gamma$ is a subgroup of $\mathbb{E}(n)$, $\Gamma\cap \mathbb{R}^n$ will denote the subgroup of $\Gamma$ of pure translations.
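For example, in $\mathbb{E}(2)$ let $\gamma=\bigl((\tfrac12,0),\mathrm{diag}(1,-1)\bigr)$ and $t=\bigl((0,1),I\bigr)$. Using [\[Ri2\]](#Ri2){reference-type="eqref" reference="Ri2"} one checks $\gamma^2=\bigl((1,0),I\bigr)$ and $\gamma t\gamma^{-1}=t^{-1}$, so the group $\Gamma=\langle\gamma,t\rangle$ (the Klein bottle group) contains the lattice of pure translations $\Gamma\cap\mathbb{R}^2=\langle\gamma^2,t\rangle\cong\mathbb{Z}^2$ with index $2$; this is the $2$-dimensional analogue of the real Bott towers studied below.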
Recall that such a group $\Gamma$ is *torsionfree* if $\Gamma$ has no elements of finite order.
**Definition 9**. *We say that a subgroup $\Gamma$ of $\mathbb{E}(n)$ is *uniform* if $\mathbb{R}^n/\Gamma$ is compact where the orbit space, $\mathbb{R}^n/\Gamma$, is the set of orbits with the quotient (or identification) topology. We say $\Gamma$ is *reducible* if we can find $\alpha\in \mathbb {A(}n)$, so that $T(\alpha\Gamma\alpha^{-1})$ does not span ${\mathbb R}^n$. In other words, $\Gamma$ is reducible if after an affine change of coordinates, all the elements of $\Gamma$ have translational parts in a proper subspace of ${\mathbb R}^n$. If $\Gamma$ is not reducible, we say $\Gamma$ is *irreducible*.*
**Proposition 10**. *Let $\Gamma$ be a subgroup of $\mathbb{E}(n)$. Then if $\Gamma$ acts freely, $\Gamma$ is torsionfree. Furthermore, if $\Gamma$ is discontinuous and torsionfree, $\Gamma$ acts freely.*
*Proof.* Suppose $\alpha\in\Gamma$, $\alpha\ne(0,I)$, and $\alpha^k=(0,I)$. Let $x$ be arbitrary in $\mathbb{R}^n$, and set $$y=\frac{1}{k}\sum_{i=0}^{k-1} \alpha^ix.$$ Since $\alpha$ is an affine map, it preserves this affine combination and permutes the points $x,\alpha x,\dots,\alpha^{k-1}x$ cyclically, so $\alpha y=y$ and $\Gamma$ does not act freely.
If $\Gamma$ is discontinuous on ${\mathbb R}^n$, then $\Gamma$ acts properly discontinuously on ${\mathbb R}^n$. Then stabilizer $\Gamma_x$ is finite $\forall x\in {\mathbb R}^n$. Hence if $\Gamma$ is torsionfree, then $\Gamma_x=\{1\}$ $\forall x\in {\mathbb R}^n$. So $\Gamma$ acts freely on ${\mathbb R}^n$. ◻
In this paper we will be concerned with discrete subgroups $\Gamma$ of $\mathbb{E}(n)$ which act freely on $\mathbb{R}^n$.
**Definition 11**. *We say that $\Gamma$ is crystallographic if $\Gamma$ is uniform and discrete. If $\Gamma$ is crystallographic and torsionfree in $\mathbb{E}(n)$, we say $\Gamma$ is a Bieberbach subgroup of $\mathbb{E}(n)$.*
**Theorem 12**. *Let $\Gamma$ be a subgroup of the Euclidean group $\mathbb {E}(n)=\mathbb {R}^n\rtimes O(n)$. Then $\Gamma$ is a torsionfree discrete uniform subgroup of $\mathbb {E}(n)$ if and only if $\Gamma$ acts properly discontinuously and freely on $\mathbb {R}^n$ with compact quotient.*
*Proof.* Let $X=\mathbb {E}(n)=\mathbb {R}^n\rtimes O(n)$ and let $K=O(n)$, a compact subgroup. Then the projection $p:X\longrightarrow X/K=\mathbb {R}^n$ is given by $p(a, A)=a$.\
$(\Rightarrow )$. Let $\Gamma$ be a torsionfree discrete uniform subgroup of $\mathbb {E}(n)$. Then $\Gamma$ is discontinuous by Lemma [Lemma 4](#L2){reference-type="ref" reference="L2"}, and therefore $\Gamma$ acts freely by Proposition [Proposition 10](#P:1){reference-type="ref" reference="P:1"}. Also $\Gamma$ acts properly discontinuously by Lemma [Lemma 4](#L2){reference-type="ref" reference="L2"}. Finally, since $\Gamma$ is uniform, $\mathbb {R}^n/\Gamma$ is compact.\
$(\Leftarrow )$. Suppose $\Gamma$ acts properly discontinuously and freely on $\mathbb {R}^n$ with compact quotient. By Proposition [Proposition 10](#P:1){reference-type="ref" reference="P:1"}, $\Gamma$ is torsionfree. Since $\Gamma$ acts properly discontinuously, $\Gamma$ is discrete in $\mathbb {E}(n)$ by Lemma [Lemma 4](#L2){reference-type="ref" reference="L2"}. $\Gamma$ is uniform, because $\mathbb {R}^n/\Gamma$ is compact. ◻
We have defined the group of rigid motion ${\mathbb E}(n)$ which acts on ${\mathbb R}^n$. Suppose torsionfree crystallographic subgroup $\Gamma\subset {\mathbb E}(n) \subset \mathbb{A}(n)$ acts on ${\mathbb R}^n$, and by first Bieberbach theorem there is an exact sequence $$1{\longrightarrow}{\mathbb Z}^n {\longrightarrow}\Gamma{\longrightarrow}F {\longrightarrow}1,$$ where $F$ is a finite group.
Suppose $\mathcal {C}(\Gamma)$ the center of $\Gamma$ is non trivial. If $\alpha=(s,M)\in \mathcal {C}(\Gamma)$, $\alpha n \alpha^{-1}=n$ $\forall n\in {\mathbb Z}^n ({\mathbb Z}^n \triangleleft \Gamma)$. Then $(n,I)=(s,M)(n,I)(s,M)^{-1}=(Mn,I)$ $\Rightarrow M=I$. Therefore $L(\mathcal {C}(\Gamma))=I$, $\mathcal {C}(\Gamma)\subsetneqq {\mathbb Z}^n$. Since $\Gamma$ is torsionfree, then $\mathcal {C}(\Gamma)\cong {\mathbb Z}^k \subset {\mathbb Z}^n$, $1\leq k< n$. Hence we get $$1 {\longrightarrow}{\mathbb Z}^k {\longrightarrow}\Gamma{\longrightarrow}\Gamma/{\mathbb Z}^k =Q {\longrightarrow}1.$$
What is the form of $Q$ which acts on ${\mathbb R}^{(n-k)}\,\,(k<n)?$. Let $$\begin{aligned}
\gamma= \footnotesize
\Bigl (
\left(\begin{array}{c}
a\\
b
\end{array}\right)
\left(\begin{array}{cc}
A_{k\times k}& B_{k\times (n-k)}\\
C_{(n-k)\times k}& D_{(n-k)\times (n-k)}
\end{array}\right)
\Bigr )\in \Gamma\,\, \normalsize\text {and } \,\,
\footnotesize\\
\left(\begin{array}{c}
x\\
y
\end{array}\right) \footnotesize\in {\mathbb R}^n=
\left(\begin{array}{c}
{\mathbb R}^k\\
{\mathbb R}^{n-k}
\end{array}\right) \text {where $x\in{\mathbb R}^k, \, y\in {\mathbb R}^{n-k}$}.\end{aligned}$$
Now consider $\phi:Q {\longrightarrow}Aut({\mathbb Z}^k)$ by $\phi(\alpha)(x)=\gamma x \gamma^{-1}$ which is represented by matrix $A\in GL(k,{\mathbb Z})$ such that $\phi(\alpha)(x)=Ax$ for $\alpha\in Q$. Therefore $\phi(\alpha)$ naturally extends to $\bar \phi(\alpha):{\mathbb R}^k {\longrightarrow}{\mathbb R}^k$ ($\Gamma$ normalizes ${\mathbb R}^k$ also, $\gamma{\mathbb R}^k \gamma^{-1}=A{\mathbb R}^k$).
Then we have for any $x\in {\mathbb R}^k$, $$\begin{aligned}
\Bigl ( \left(\begin{array}{c}
x\\
0
\end{array}\right), I \Bigr )=&
\gamma
\Bigl ( \left(\begin{array}{c}
x\\
0
\end{array}\right), I \Bigr )\gamma^{-1}=
\Bigl (
\left(\begin{array}{cc}
A& B\\
C& D
\end{array}\right) \left(\begin{array}{c}
x\\
0
\end{array}\right), I \Bigr )\\
=&
\Bigl (\left(\begin{array}{c}
Ax\\
Cx
\end{array}\right),I \Bigr )\\
\Rightarrow A=I, \,\, C=0.\end{aligned}$$
By Theorem [Theorem 5](#fibration){reference-type="ref" reference="fibration"} there is an equivariant fibration $$({\mathbb Z}^k,{\mathbb R}^k){\longrightarrow}(\Gamma,{\mathbb R}^n)\stackrel {(\nu, p)}{\longrightarrow}(Q,W).$$ For $\alpha\in Q$, $y\in W={\mathbb R}^{n-k}$ $$\begin{aligned}
\alpha y &=
\alpha p
\left(\begin{array}{c}
x\\
y
\end{array}\right)
= \nu(\gamma)p
\left(\begin{array}{c}
x\\
y
\end{array}\right)
=
p \Bigl (\gamma
\left(\begin{array}{c}
x\\
y
\end{array}\right) \Bigr )\\
&=p \Bigl (\left(\begin{array}{c}
a+x+By\\
b+Dy
\end{array}\right)\Bigr )=b+Dy\\
&=(b,D)y\end{aligned}$$ we get $\alpha=(b,D)$. If $\left(\begin{array}{cc}
I& B\\
0& D
\end{array}\right)\in O(n)$ then $D\in O(n-k)$. Thus $Q=\{(b,D)\}\subset {\mathbb E}(n-k)$.
## Bieberbach Theorems
Bieberbach theorems are about discrete, uniform subgroups of $\mathbb{E}(n)$, but for geometric applications of his theorems (to flat manifolds), we require the subgroups to have an additional property.
**Theorem 13**. *(First Bieberbach) Let $\Gamma$ be a crystallographic subgroup of $\mathbb {E}(n)$. Then*
- *$L(\Gamma )$ is finite, $L:\mathbb {E}(n)\rightarrow O(n).$*
- *$\Gamma \cap \mathbb {R}^n$ is a lattice (finitely generated free abelian group) which spans $\mathbb {R}^n$. (cf. [@C85] for proof)*
This means that for an n-dimensional crystallographic group $\Gamma$, the translation part $\Gamma^*=\Gamma\cap \mathbb{R}^n$ of $\Gamma$ is a free abelian group isomorphic with $\mathbb{Z}^n$, such that the vector space spanned by $\Gamma^*$ is the whole space $\mathbb{R}^n$. If $\Gamma^*=\Gamma$ then $\Gamma$ is torsionfree and the manifold $\mathbb{R}^n/\Gamma$ is an n-dimensional torus.
Recall that a sequence of groups and homomorphisms is *exact* if the image of any of the homomorphisms is equal to the kernel of the next one. By the first Bieberbach theorem, if $\Gamma$ is any crystallographic group, then $\Gamma$ satisfies an exact sequence $$\label{ESq}
0\longrightarrow \Gamma^*\longrightarrow \Gamma\longrightarrow \Phi \longrightarrow 1$$ where $\Gamma^*=\Gamma\cap \mathbb{R}^n$ is a lattice of rank $n$, and $\Phi =L(\Gamma)$ is a finite group. We call $\Phi$, the *holonomy group* of $\Gamma$. We use the \"0\" at the left of [\[ESq\]](#ESq){reference-type="eqref" reference="ESq"} to indicate that we usually write $\Gamma^*$ additively, while the \"1\" at the right indicates that we usually write $\Phi$ multiplicatively.
**Proposition 14**. *Let $\Gamma$ be a crystallographic subgroup of $\mathbb {E}(n)$. Then $\Gamma \cap \mathbb {R}^n$ is the unique normal, maximal abelian subgroup of $\Gamma$.*
*Proof.* Let $\rho \subset \Gamma$ be a normal abelian subgroup. It suffices to show $\rho\subset \mathbb{R}^n$, i.e. $L(\rho)=I$. Let $(s,M)\in\rho$. We can assume Ms=s, because $(s,I)(s,M)(-s,I)=(s+s-Ms,M)=(s,M)$ $\Leftrightarrow Ms=s$. Let $(v,I)$ be any element of $\Gamma^*$. Then $\rho$ normal implies $$(v,I)(s,M)(-v,I)=(v-Mv+s,M)\in\rho.$$ But $\rho$ abelian $\Rightarrow$ $[(s,M),(v-Mv+s,M)]=(0,I)$ which yields $$Mv-M^2v-v+Mv=0,$$ or $(M-I)^2v=0$.\
Let $X=\{x\in\mathbb{R}^n: Mx=x\}$ and write $\mathbb{R}^n=X\oplus Y$ orthogonally. Write $v=v_x+v_y$ with $v_x\in X$ and $v_y\in Y$, then $$0=(M-I)^2v=(M-I)^2(v_x+v_y)=(M-I)^2v_y.$$ Now on $Y$, $(M-I)$ is non-singular, so $(M-I)^2v_y=0\Rightarrow v_y=0$. $\Rightarrow v=v_x$. Hence $Mv=v$ $\forall v\in\Gamma^*$. Since $\Gamma^*$ spans $\mathbb{R}^n$, $Mx=x$ $\forall x\in\mathbb{R}^n$, so $M=I$. Hence $(s,M)=(s,I)\in\mathbb{R}^n\Rightarrow \rho\subset\mathbb{R}^n$. ◻
**Theorem 15**. *(Second Bieberbach). Suppose that $f:\Gamma_1\longrightarrow \Gamma_2$ is an isomorphism between crystallographic groups. Then there exists a diffeomorphism $h:\mathbb{R}^n \longrightarrow \mathbb{R}^n$ such that $h(\gamma x)=f(\gamma)h(x)$. Moreover, such a diffeomorphism can be taken as an affine transformation, i.e., $h=(a,A)\in \mathbb{A}(n)$ and so $f(\gamma)=h\gamma h^{-1}$ $(\gamma \in \Gamma_1)$.*
*Proof.* $(\Rightarrow )$. Let $\mathbb{Z}^n$ be a maximal abelian subgroup of $\Gamma_1$ such that $$1\longrightarrow \mathbb{Z}^n\longrightarrow \Gamma_1\longrightarrow Q_1\longrightarrow 1$$ is a group extension. Since $\mathbb{Z}^n$ is characteristic, $f$ maps it isomorphically onto the corresponding subgroup $\mathbb{Z}^n$ of $\Gamma_2$, so that we have a commutative diagram $$\label{Pe1}
\begin{CD}
1 @>>> \mathbb{Z}^n @>Z_1>> \Gamma_1 @>p_1>> Q_1 @>>> 1 \\
@. @VVgV @VVfV @VV\Phi V \\
1 @>>> \mathbb{Z}^n @>Z_2>> \Gamma_2 @>p_2>> Q_2 @>>> 1 \\
\end{CD}$$ where $g=f|_{\mathbb{Z}^n}$, $\Phi$ is an induced isomorphism. We write $\Gamma_i$ as the set $\mathbb{Z}^n \times Q_i$ with group law $$\label{Pe2}
(n,\alpha)(m,\beta)=(n\phi_i(\alpha)(m)f_i(\alpha,\beta),\alpha\beta) \,\,\,\,\,\,\, (i=1,2).$$ Here we write ${\mathbb Z}^n$ multiplicatively, and homomorphism $$\phi_i:Q_i{\longrightarrow}Aut({\mathbb Z}^n)=GL(n,{\mathbb Z})$$ is defined as follows; choose a section $s_i:Q_i{\longrightarrow}\Gamma_i$ so that $p_i\circ s_i=id$. $\Gamma_i$ normalizes ${\mathbb Z}^n$, so $$\phi_i(\alpha):{\mathbb Z}^n{\longrightarrow}{\mathbb Z}^n$$ is defined by $$\phi_i(\alpha)(n)=s_i(\alpha)ns_i(\alpha)^{-1}.$$ Then defined $f_i:Q_i\times Q_i\longrightarrow {\mathbb Z}^n$ by $$f_i(\alpha,\beta)s_i(\alpha\beta)=s_i(\alpha)s_i(\beta).$$ As $p_2f(1,\alpha)=\Phi p_1(1,\alpha)=\Phi (\alpha)$ and $p_2f(n,1)=\Phi p_1(n,1)=\Phi (1)=1$, we write $$\label{Pe3}
f(1,\alpha)=(\lambda(\alpha),\Phi (\alpha))\\ \,\,\,\,\, ,
f(n,1)=(g(n),1)$$ for some function $\lambda:Q_1{\longrightarrow}{\mathbb Z}^n$. Then $$\begin{aligned}
f(n,\alpha)&=f((n,1)(1,\alpha))=f(n,1)f(1,\alpha)\\
&=(g(n),1)(\lambda(\alpha),\Phi(\alpha)). \end{aligned}$$ Note we choose a normalized 2-cocycle $f_i$, that is $$f_i(1,\alpha)=f_i(\beta,1)=1 \,\,\,\,\,(\forall \alpha,\beta\in Q_i).$$ Therefore $$\label{Pe4}
f(n,\alpha)=(g(n)\phi_i(1)(\lambda(\alpha))f_i(1,\Phi(\alpha)),\Phi (\alpha))\\
=(g(n)\lambda(\alpha),\Phi (\alpha)).$$
Next consider the equalities $$\begin{aligned}
f(1,\alpha)f(g^{-1}(n),1)&=f(\phi_1(\alpha)(g^{-1}(n)),\alpha)\\
(\lambda(\alpha),\Phi (\alpha))(n,1)&=(g(\phi_1(\alpha)(g^{-1}(n)))\lambda(\alpha),\Phi (\alpha))\\
(\lambda(\alpha)\phi_2(\Phi(\alpha))(n),\Phi (\alpha))&=(g(\phi_1(\alpha)(g^{-1}(n)))\lambda(\alpha),\Phi (\alpha))\end{aligned}$$ imply that $$\label{Pe5}
g\phi_1(\alpha)g^{-1}(n)=\lambda(\alpha)\phi_2(\Phi (\alpha))(n)\lambda(\alpha)^{-1}.$$ The equalities $$\begin{aligned}
f(1,\alpha)f(1,\beta)&=f((1,\alpha)(1,\beta))\\
&=f(\phi_i(\alpha)(1)f_1(\alpha,\beta),\alpha\beta)\\
&=f(f_1(\alpha,\beta),\alpha\beta)\\
(\lambda(\alpha),\Phi (\alpha))(\lambda(\beta),\Phi (\beta))&=(g(f_1(\alpha,\beta))\lambda(\alpha\beta),\Phi (\alpha\beta)) \,\,\,\,\ \text {by}\,\eqref {Pe3},\, \eqref {Pe4}\end{aligned}$$ and $$\begin{aligned}
&(\lambda(\alpha),\Phi (\alpha))(\lambda(\beta),\Phi (\beta))\\
&=(\lambda(\alpha)\phi_2(\Phi(\alpha))(\lambda(\beta))f_2(\Phi(\alpha),\Phi(\beta)),\Phi (\alpha)\Phi(\beta))\\
&=(\lambda(\alpha)\phi_2(\Phi(\alpha))(\lambda(\beta))\lambda(\alpha)^{-1}\lambda(\alpha)f_2(\Phi(\alpha),\Phi(\beta)),\Phi (\alpha)\Phi(\beta))\\
&=(g\phi_1(\alpha)g^{-1}(\lambda(\beta))\lambda(\alpha)f_2(\Phi(\alpha),\Phi(\beta)),\Phi (\alpha)\Phi(\beta))\,\,\,\, \text {by}\,\, \eqref {Pe5},\end{aligned}$$ then we have $$\label{Pe6}
g(f_1(\alpha,\beta))=g\phi_1(\alpha)g^{-1}(\lambda(\beta))\lambda(\alpha)f_2(\Phi (\alpha),\Phi (\beta))\lambda(\alpha\beta)^{-1}.$$
Let us consider a group cohomology $H_\phi^2(Q;{\mathbb Z}^n)$, where cocycle $[f_i]\in H_{\phi_i}^2(Q;{\mathbb Z}^n)$ $(i=1,2)$. The exact sequence $$\begin{CD}
1 @>>> {\mathbb Z}^n @>i>> {\mathbb R}^n @>j>> T^n @>>> 1
\end{CD}$$ gives a cohomology exact sequence $$\begin{CD}
... @>>> H_\phi^1(Q;{\mathbb R}^n) @>j_*>> H_\phi^1(Q;T^n) @>\delta >> H_\phi^2(Q;{\mathbb Z}^n) \\
@>i_*>> H_\phi^2(Q;{\mathbb R}^n) @>j_*>> H_\phi^2(Q;T^n) @>>> ...
\end{CD}$$ If $Q$ is finite group, and ${\mathbb R}^n$ is the $Q-$module, then $H_{\phi}^i(Q;{\mathbb R}^n)=0$ $(i\geq 1)$. Using this, we have $$\begin{CD}
H_{\phi_i}^1(Q_i;T^n) @>\delta >> H_{\phi_i}^2(Q_i;{\mathbb Z}^n)
\end{CD}$$ which follows that $\delta [\hat \chi_i ]=[f_i]$, $[\hat \chi_i ]\in H_{\phi_i}^1(Q_i;T^n)$. The meaning of $\delta [\hat \chi_i ]=[f_i]$ is explained as follows; $$\begin{aligned}
\begin{CD}
0 @>>> C_\phi^1(Q,\mathbb{Z}^n) @>i>> C_\phi^1(Q,{\mathbb R}^n) @>j>> C_\phi^1(Q,T^n) @>>> 0 \\
@. @VV\delta'V @VV\delta'V @VV\delta'V \\
0 @>>> C_\phi^2(Q,\mathbb{Z}^n) @>i>> C_\phi^2(Q,{\mathbb R}^n) @>j>> C_\phi^2(Q,T^n) @>>> 0 \\
@. @. \chi_i @>j>> \hat \chi_i @>>> 0 \\
@. @. @VV\delta'V @VV\delta'V \\
@. f_i @>i>> \delta' \chi_i @>j>> 0 @.
\end{CD}\end{aligned}$$ $f_i=if_i=\delta'\chi_i$ $(i=1,2)$. Thus $$\label{Pe21}
f_1(\alpha,\beta)=\delta'_{\bar\phi_1 (\alpha)}\chi_1(\alpha,\beta)=\bar \phi_1(\alpha)(\chi_1(\beta))\chi_1(\alpha)\chi_1(\alpha\beta)^{-1}$$ where $\chi_i:Q_i{\longrightarrow}{\mathbb R}^n$ are functions, and $\delta'_{\bar\phi_i (\alpha)}\chi_i:Q_i\times Q_i{\longrightarrow}{\mathbb R}^n$ the coboundary of 1-cochains.
We note that $\phi_i(\alpha):{\mathbb Z}^n{\longrightarrow}{\mathbb Z}^n$ is represented by (an integral) matrix so that $\phi_i(\alpha)$ extends to ${\mathbb R}^n$ onto itself. We write it as $\bar\phi_i(\alpha)$ (= the same matrix). So $\phi_i:Q_i{\longrightarrow}Aut({\mathbb Z}^n)$ extends uniquely to $\bar\phi_i:Q_i{\longrightarrow}Aut({\mathbb R}^n)=GL(n,{\mathbb R})$.
Put $g(f_1(\alpha,\beta))=g_*f_1(\alpha,\beta)$ and $(\bar g_*\chi_1)(\alpha)=\bar g(\chi_1(\alpha))$. Here the isomorphism $g:{\mathbb Z}^n{\longrightarrow}{\mathbb Z}^n$ extends uniquely to an isomorphism $\bar g:{\mathbb R}^n{\longrightarrow}{\mathbb R}^n$ by the same reason as before. Also we put $\Phi^*f_2(\alpha,\beta)=f_2(\Phi(\alpha),\Phi(\beta))$ and $\Phi^*\chi_2(\alpha)=\chi_2(\Phi(\alpha))$. Then [\[Pe21\]](#Pe21){reference-type="eqref" reference="Pe21"} implies $$\begin{aligned}
gf_1(\alpha,\beta)&=\bar g(\bar \phi_1(\alpha)(\chi_1(\beta))\chi_1(\alpha)\chi_1(\alpha\beta)^{-1})\\
&=\bar g\bar \phi_1(\alpha)\bar g^{-1}(\bar g\chi_1(\beta))\bar g\chi_1(\alpha)\bar g\chi_1(\alpha\beta)^{-1}\\
&=\bar g\bar \phi_1(\alpha)\bar g^{-1}(\bar g_*\chi_1(\beta))\bar g_*\chi_1(\alpha)\bar g_*\chi_1(\alpha\beta)^{-1}\\
&=\delta'_{\bar g\bar \phi_1\bar g^{-1}}\bar g_*\chi_1(\alpha,\beta),\end{aligned}$$ hence $$\label{Pe22}
g_*f_1=\delta'_{\bar g\bar \phi_1\bar g^{-1}}\bar g_*\chi_1.$$ Here we can write $\overline {g\phi_1g^{-1}}=\bar g\bar \phi_1(\alpha)\bar g^{-1}$. Also, $f_2=\delta'\chi_2$ shows $$\label{Pe23}
\Phi ^*f_2=\delta'\Phi^*\chi_2.$$
Next we calculate $$\begin{aligned}
&\delta'_{\bar g\bar \phi_1\bar g^{-1}}(\lambda.\Phi^*\chi_2)(\alpha,\beta)\\
&= \bar g\bar \phi_1(\alpha)\bar g^{-1}(\lambda.\Phi^*\chi_2(\beta)).(\lambda.\Phi^*\chi_2(\alpha)).(\lambda.\Phi^*\chi_2(\alpha\beta))^{-1}\,\,\, \text {by}\,\, \eqref{Pe21}\\
&= \bar g\bar \phi_1(\alpha)\bar g^{-1}(\lambda(\beta).\chi_2(\Phi(\beta))).\lambda(\alpha)\chi_2(\Phi(\alpha)).\chi_2(\Phi(\alpha\beta))^{-1}\lambda(\alpha\beta)^{-1}\\
&= \bar g\bar \phi_1(\alpha)(\bar g^{-1}(\lambda(\beta))) \,\,\, \bar g\bar \phi_1(\alpha)\bar g^{-1}(\chi_2(\Phi(\beta))) \lambda(\alpha)\,\,\\
& \,\,\,\,\,\,\, \chi_2(\Phi(\alpha)) \chi_2(\Phi(\alpha\beta))^{-1}\lambda(\alpha\beta)^{-1}\\
&= \bar g\bar \phi_1(\alpha)(\bar g^{-1}(\lambda(\beta))) \,\,\, \lambda(\alpha) \bar\phi_2(\Phi(\alpha))(\chi_2(\Phi(\beta)))\,\,\\
& \,\,\,\,\,\,\, \chi_2(\Phi(\alpha)) \chi_2(\Phi(\alpha)\Phi(\beta))^{-1}\lambda(\alpha\beta)^{-1} \,\,\,\,\ \text {by}\,\, \eqref{Pe5}\\
&= \bar g\bar \phi_1(\alpha)(\bar g^{-1}(\lambda(\beta)) ) \,\,\, \lambda(\alpha) \,\,\delta'\chi_2(\Phi(\alpha),\Phi(\beta))\lambda(\alpha\beta)^{-1} \,\,\,\,\ \text {by}\,\, \eqref{Pe21}\\
&= \bar g\bar \phi_1(\alpha)(\bar g^{-1}(\lambda(\beta)) ) \,\,\, \lambda(\alpha) \,\,f_2(\Phi(\alpha),\Phi(\beta))\lambda(\alpha\beta)^{-1} \,\,\,\,\,\,\, \text {by}\,\, \eqref{Pe21}\\
&=g(f_1(\alpha,\beta)) \,\,\, \text {by}\,\, \eqref{Pe6}.\end{aligned}$$ Hence $$\label{Pe24}
\delta'_{\bar g\bar \phi_1\bar g^{-1}}(\lambda.\Phi^*\chi_2)=g_*f_1.$$ Since $g_*f_1=\delta'_{\bar g\bar \phi_1\bar g^{-1}}\bar g_*\chi_1$ by [\[Pe22\]](#Pe22){reference-type="eqref" reference="Pe22"}, $$\begin{aligned}
\delta'_{\bar g\bar \phi_1\bar g^{-1}}(\lambda.\Phi^*\chi_2)&=g_*f_1\\
&=\delta'_{\bar g\bar \phi_1\bar g^{-1}}(\bar g_*\chi_1)\\
[\bar g_*\chi_1(\lambda\Phi^*\chi_2)^{-1}]&\in H_{\bar g\bar \phi_1\bar g^{-1}}^1(Q;{\mathbb R}^n)\end{aligned}$$ On the other hand, since $H_{\bar g\bar \phi_1\bar g^{-1}}^1(Q;{\mathbb R}^n)=0$, there exists $\mu \in {\mathbb R}^n$ such that $$\begin{aligned}
\delta'_{\bar g\bar \phi_1\bar g^{-1}}\mu&=\bar g_*\chi_1.(\lambda\Phi^*\chi_2)^{-1}\\
\delta'_{\bar g\bar \phi_1\bar g^{-1}}\mu.(\lambda\Phi^*\chi_2)&=\bar g_*\chi_1\\
(\delta'_{\bar g\bar \phi_1\bar g^{-1}}\mu.\lambda\Phi^*\chi_2)(\alpha)&=\bar g_*\chi_1(\alpha)\\
(\delta'_{\bar g\bar \phi_1\bar g^{-1}}\mu(\alpha)).(\lambda\Phi^*\chi_2)(\alpha)&=\bar g\chi_1(\alpha)\\
\bar g\bar \phi_1(\alpha)\bar g^{-1}(\mu)\mu^{-1}\,\,((\lambda. \Phi^*\chi_2)(\alpha))&=\bar g\chi_1(\alpha)\\
\bar g\bar \phi_1(\alpha)\bar g^{-1}(\mu)\,((\lambda. \Phi^*\chi_2)(\alpha))\,\mu^{-1}&=\bar g\chi_1(\alpha)\\
\bar g\bar \phi_1(\alpha)\bar g^{-1}(\mu)\,(\lambda(\alpha). \chi_2(\Phi(\alpha)))\,\mu^{-1}&=\bar g\chi_1(\alpha)\\
\bar g\bar \phi_1(\alpha) \bar g^{-1}(\mu)\lambda(\alpha)\chi_2(\Phi(\alpha))\,\mu^{-1}&=\bar g\chi_1(\alpha).\end{aligned}$$ Hence by [\[Pe5\]](#Pe5){reference-type="eqref" reference="Pe5"} $$\label{Pe25}
\bar g(\chi_1(\alpha))=\lambda(\alpha)\bar \phi_2(\Phi(\alpha))(\mu)\chi_2(\Phi(\alpha))\,\mu^{-1}.$$
Next we consider a map $$\label{Pe31}
h:{\mathbb R}^n{\longrightarrow}{\mathbb R}^n \,\,\,\, \text {by} \,\,\, h(x)=\bar g(x)\mu.$$ Since $\bar g$ is a linear isomorphism, $h$ is an invertible affine map and hence a diffeomorphism. Now we show that $h$ is $(\Gamma_1,\Gamma_2)$-equivariant.\
We first assume the rigid motions of $\Gamma_1$ and $\Gamma_2$ on ${\mathbb R}^n$ are given by the Seifert actions, that is, $\Gamma_1$ and $\Gamma_2$ act on ${\mathbb R}^n$ respectively as follows. $$\label{Pe32}
(n,\alpha)x=n\bar\phi_1(\alpha)(x)\chi_1(\alpha),\,\,\,\,
(n,\alpha)x=n\bar\phi_2(\alpha)(x)\chi_2(\alpha).$$ Then $$\begin{aligned}
&h((n,\alpha)x)\\
&=h(n\bar\phi_1(\alpha)(x)\chi_1(\alpha))\\
&=\bar g(n\bar\phi_1(\alpha)(x)\chi_1(\alpha))\mu \,\,\,\,\, \text {by} \, \eqref{Pe31}\\
&=g(n)\bar g \bar\phi_1(\alpha)(x) \,\,\bar g \chi_1(\alpha)\,\,\mu \,\,\,\,\, (\bar g \,\text {is an isomorphism)}\\
&=g(n)\bar g \bar\phi_1(\alpha)\bar g^{-1}(\bar g(x))
\lambda(\alpha)\bar\phi_2(\Phi(\alpha))(\mu) \chi_2(\Phi(\alpha))\mu^{-1}\mu \,\,\,\,\,\text {by} \, \eqref{Pe25}\\
&=g(n) \lambda(\alpha) \bar\phi_2(\Phi(\alpha))(\bar g(x))
\bar\phi_2(\Phi(\alpha))(\mu) \chi_2(\Phi(\alpha)) \,\,\,\,\,\text {by} \, \eqref{Pe5}\\
&=g(n) \lambda(\alpha). \,\,\bar\phi_2(\Phi(\alpha))(\bar g(x).\mu) \,\,
\chi_2(\Phi(\alpha)) \\
&=(g(n) \lambda(\alpha),\Phi(\alpha)) \bar g(x)\mu \,\,\,\,\,\text {(by \eqref{Pe32}\,\,and note $g(n) \lambda(\alpha)\in {\mathbb Z}^n$)}\\
&=f(n,\alpha)\bar g(x)\mu \,\,\,\,\,\text {by} \, \eqref{Pe4}.\\
&=f(n,\alpha)h(x).\end{aligned}$$ Thus $$\label{Pe33}
h((n,\alpha)x)=f(n,\alpha)h(x).$$
Now, we shall see that the Seifert action coincides with the rigid motions.
Given a crystallographic group, we have the following extension:
$$1{\rightarrow}{\mathbb Z}^n{\rightarrow}\Gamma{\rightarrow}Q{\rightarrow}1.$$ As usual an element $\gamma$ of $\Gamma$ is described as $(n,\alpha)$. We wrote the Seifert action on ${\mathbb R}^n$ additively as follows. $$\label{Seifert}
(n,\alpha)x=n+\bar\phi(\alpha)(x)+\chi(\alpha) \ \ ((n,\alpha)\in\Gamma).$$ We may check it is a group action, namely it satisfies that $$(n,\alpha)((m,\beta)x)=((n,\alpha)\cdot (m,\beta))x.$$
Suppose that $\Gamma$ is a crystallographic group of ${{\mathbb E}}(n)$. Note that $(n,1)x=n+x$, that is, ${\mathbb Z}^n$ acts as translations. So its span ${\mathbb R}^n$, viewed as a group, acts on ${\mathbb R}^n$ as translations; $$(y,1)x=y+x\ (\forall\ (y,1)\in {\mathbb R}^n).$$ Since $\Gamma$ normalizes ${\mathbb Z}^n$, it normalizes its span, ${\mathbb R}^n$. Noting that $(1,\alpha)$ $(n,1)(1,\alpha)^{-1}=(\phi(\alpha)(n),1)$, we can write $$(1,\alpha)(x,1)(1,\alpha)^{-1}=(\bar\phi(\alpha)(x),1).$$ For the origin $0\in {\mathbb R}^n$, we put $$(1,\alpha)0=\chi(\alpha)\in{\mathbb R}^n.$$
Noting $(x,1)0=x$ as above, we can interpret the above Seifert action [\[Seifert\]](#Seifert){reference-type="eqref" reference="Seifert"}, $$\begin{split}
(n,\alpha)x&=((n,1)(1,\alpha))x=(n,1)((1,\alpha)x)\\
&=(n,1)((1,\alpha)(x,1)0)\\
&=(n,1)(((1,\alpha)\cdot (x,1)\cdot(1,\alpha)^{-1})\cdot(1,\alpha)0)\\
&=(n,1)((\bar\phi(\alpha)(x),1)(1,\alpha)0)\\
&=(n,1)((\bar\phi(\alpha)(x),1) \chi(\alpha))\\
&=(n,1)(\bar\phi(\alpha)(x)+\chi(\alpha))\\
&=n+\bar\phi(\alpha)(x)+\chi(\alpha).
\end{split}$$
On the other hand, when $\Gamma\subset{\mathbb E}(n)$, let $$(n,\alpha)=\gamma=(a,A)\in {\mathbb E}(n).$$
As $\gamma x=a+Ax$ $(\forall\ x\in {\mathbb R}^n)$, taking $x=0$, compare the above equation so that $$n+\chi(\alpha)=a.$$ Since $(n,\alpha)x=\gamma x=a+Ax$, it follows that $\bar\phi(\alpha)(x)=Ax$. This holds for all $x\in {\mathbb R}^n$, it implies that $\bar\phi(\alpha)=A$.
Hence identifying $(n+\chi(\alpha),\bar\phi(\alpha))=
(a,A)$, the Seifert action of $\Gamma$ coincides with the rigid motions. $$(n,\alpha)=(n+\chi(\alpha),\bar\phi(\alpha))\in {\mathbb E}(n) \ (\forall (n,\alpha)\in \Gamma).$$ ◻
**Theorem 16**. *(Third Bieberbach).[\[Bie3\]]{#Bie3 label="Bie3"} For any given n, there are only a finite number of n-dimensional crystallographic groups, up to affine equivalence.*
This implies that there are only finitely many n-dimensional flat Riemannian manifolds.
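For instance, in dimension $2$ the only compact flat manifolds are the torus and the Klein bottle, while in dimension $3$ there are exactly $10$ compact flat manifolds up to diffeomorphism ($6$ orientable and $4$ non-orientable); the four diffeomorphism classes of $3$-dimensional real Bott towers obtained below are among these $10$.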
To prove the third Bieberbach theorem, first of all, we assume the theorem in [@Th97] that
**Theorem 17**. *The number of finite subgroups of $GL(n,{\mathbb Z})$, up to conjugation, is finite.*
*Proof.* *of Theorem* [\[Bie3\]](#Bie3){reference-type="ref" reference="Bie3"}. Given a finite group $F$ and a faithful representation $\phi:F{\rightarrow}GL(n,{\mathbb Z})$, ${\mathbb Z}^n$ is viewed as a $F$-module through $\phi$. Then each equivalence class of cocycle of the second cohomology $H^2_{\phi}(F,{\mathbb Z}^n)$ defines a group extension $$\label{group-ex1}
1{\rightarrow}{\mathbb Z}^n{\rightarrow}\Gamma\stackrel{L}{\longrightarrow}F{\rightarrow}1.$$ Then we have proved (see the proof of Theorem [Theorem 15](#T:6){reference-type="ref" reference="T:6"}) that $\Gamma$ acts on ${\mathbb R}^n$ by $$(n,\alpha)x=n+\phi(\alpha)(x)+\chi(\alpha).$$ Then note that ${\mathbb Z}^n$ is a maximal abelian group of $\Gamma$. For this, let ${\mathbb Z}^n\subset \Delta$ be an abelian subgroup of $\Gamma$. Choose $(n,\alpha)\in \Delta$ so that $$(n,\alpha)(m,1)(n,\alpha)^{-1}=(\phi(\alpha)(m),1).$$ Note $$(n,\alpha)^{-1}=(\phi(\alpha^{-1})(n^{-1}\cdot f(\alpha,\alpha^{-1})^{-1}),\alpha^{-1}).$$ Since $\Delta$ is abelian, $(n,\alpha)(m,1)(n,\alpha)^{-1}=(m,1)$, $\phi(\alpha)(m)=m$ $(\forall \ m\in {\mathbb Z}^n)$. It follows that $\phi(\alpha)={\rm id}$. As our assumption is that $\phi:$ $F{\rightarrow}$ $GL(n,{\mathbb Z})$ is a faithful representation, this implies that $\alpha=1$ which shows that $\Delta\ni(n,\alpha)=(n,1)$ so that $\Delta\subset {\mathbb Z}^n$.
Identify $(n,\alpha)=(n+\chi(\alpha),\phi(\alpha))\in \mathbb{A}(n)$. Moreover since $\phi(F)$ is a finite group in $GL(n,{\mathbb Z})\subset GL(n,{\mathbb R})$, there is an element $B\in GL(n,{\mathbb R})$ such that $B\cdot\phi(F)\cdot B^{-1}\subset O(n)$. Then the correspondence $$(n,\alpha){\rightarrow}(B(n+\chi(\alpha)),B\cdot\phi(\alpha)\cdot B^{-1})$$ is the injective homomorphism from $\Gamma$ to ${\mathbb E}(n)$. Now $\Gamma\subset {\mathbb E}(n)$ where ${\mathbb Z}^n$ is a maximal abelian subgroup, by the definition $\Gamma$ is a crystallographic group.
So, each equivalence class $((F,\phi),[f])$ of $\{(F,\phi), H^2_{\phi}(F,{\mathbb Z}^n)\}$ corresponds to the equivalence class of crystallographic groups.
Conversely, given a crystallographic group $\Gamma$ of ${\mathbb E}(n)$, then there is a group extension as in [\[group-ex1\]](#group-ex1){reference-type="eqref" reference="group-ex1"} such that ${\mathbb Z}^n$ is the maximal free abelian group and $F$ is a finite group. A group extension $\Gamma$ defines a unique homomorphism $\phi:F{\rightarrow}{\rm Aut}({\mathbb Z}^n)=GL(n,{\mathbb Z})$ induced by the conjugation of $\Gamma$. Note that this is a faithful homomorphism, that is, if $\phi(\alpha)={\rm id}$, letting $\gamma=(a,A)$ such that $L(\gamma)=\alpha=A$, $$An=\phi(\alpha)(n)={\rm id}(n)=n.$$Since $n\in {\mathbb Z}^n$ is the (integral) basis of ${\mathbb R}^n$, this implies that $A={\rm I}$. So $\alpha=1$. In particular, we have a pair $(F,\phi)$. Since ${\mathbb Z}^n$ is viewed as a $F$-module through $\phi$, $\Gamma$ defines a $2$-cocycle $[f]\in H^2_{\phi}(F,{\mathbb Z}^n)$. An equivalence class of crystallographic group $\Gamma$ defines an element $((F,\phi),[f])$ which implies that $\{(F,\phi), H^2_{\phi}(F,{\mathbb Z}^n)\}$ is in one-to-one correspondence with the equivalence classes of crystallographic groups.
As $H^2_{\phi}(F,{\mathbb Z}^n)$ is of finite order (since $F$ is finite), the finiteness of group extensions comes from that of $(F,\phi)$ where $\phi:F{\rightarrow}GL(n,{\mathbb Z})$ is a faithful representation. For each $n$ fixed, this is the number of non-isomorphic conjugacy classes of a finite group $\phi(F)$ in $GL(n,{\mathbb Z})$. By Theorem [Theorem 17](#Thurs){reference-type="ref" reference="Thurs"}, such $(F,\phi)$ is finite. This proves the theorem. ◻
Bieberbach theorems imply many results in the theory of compact flat manifolds some of which we state below.
**Theorem 18**. *Let $X$ and $Y$ be compact flat manifolds. Then $\pi_1(X)$ is isomorphic to $\pi_1(Y)$ if and only if $X$ and $Y$ are diffeomorphic.*
*Proof.* $(\Rightarrow )$ We consider $\pi_1(X)$ and $\pi_1(Y)$ as Bieberbach subgroups of $\mathbb{E}(n)$. By Theorem [Theorem 15](#T:6){reference-type="ref" reference="T:6"}, if $F:\pi_1(X)\longrightarrow \pi_1(Y)$ is an isomorphism, then there is $\alpha\in\mathbb{A}(n)$ such that $F(\beta)=\alpha\beta\alpha^{-1}$ $\forall \beta\in\pi_1(X)$. Let $$p_x:\mathbb{R}^n\longrightarrow X=\mathbb{R}^n/\pi_1(X) \,\,\, \text {and} \,\,\,
p_y:\mathbb{R}^n\longrightarrow Y=\mathbb{R}^n/\pi_1(Y)$$ be the projection (or covering maps). Define $f:X\longrightarrow Y$ by $$f(x)=p_y\circ \alpha\circ p^{-1}_x(x), \,\,\, \text{for} \,\,\, x\in X.$$ To see that this is well-defined, let $\bar x\in p^{-1}_x(x)$ and $\beta\in\pi_1(X)$. We must show $p_y\circ \alpha(\beta\bar x)=p_y\circ \alpha(\bar x)$. But $\alpha\beta\alpha^{-1}=\gamma\in\pi_1(Y)$, so $\alpha\beta=\gamma\alpha$ and $$\begin{aligned}
p_y\circ \alpha(\beta\bar x)&=p_y(\alpha\beta\bar x)=p_y(\gamma\alpha\bar x)\\
&=p_y(\alpha\bar x)\\
&=p_y\circ \alpha(\bar x),\end{aligned}$$ so f is well-defined.
To show that $f$ is a diffeomorphism, it suffices to show that $f$ is bijective, since $f$ is locally given by the affine map $\alpha$. Define $g:Y\longrightarrow X$ by $$\begin{aligned}
g(y)=p_x\circ \alpha^{-1}\circ p^{-1}_y(y), \,\,\, \text{for} \,\,\, y\in Y.\end{aligned}$$ Similarly, $g$ is also well-defined. For $\bar y\in p^{-1}_y(y)$ and $\gamma\in\pi_1(Y)$. We show $p_x\circ \alpha^{-1}(\gamma\bar y)=p_x\circ \alpha^{-1}(\bar y)$. But $\alpha\beta\alpha^{-1}=\gamma\in\pi_1(Y)$, so $\beta\alpha^{-1}=\alpha^{-1}\gamma$ and $$\begin{aligned}
p_x\circ \alpha^{-1}(\gamma\bar y)&=p_x(\alpha^{-1}\gamma\bar y)=p_x(\beta\alpha^{-1}\bar y)\\
&=p_x(\alpha^{-1}\bar y)\\
&=p_x\circ \alpha^{-1}(\bar y).\end{aligned}$$ Now, $$\begin{aligned}
f\circ g(y)&=f(p_x\circ \alpha^{-1}\circ p^{-1}_y(y))\\
&=p_y\circ \alpha\circ p^{-1}_x(p_x\circ \alpha^{-1}\circ p^{-1}_y(y))\\
&=y.\end{aligned}$$ Similarly, $g\circ f(x)=x$. So, $g=f^{-1}$, f is bijective.
$(\Leftarrow )$. Let $f:X\longrightarrow Y$ be a diffeomorphism; in particular, $f$ is a homeomorphism. We show that the induced map $F=f_*:\pi_1(X)\longrightarrow \pi_1(Y)$ is an isomorphism.
Since $f$ is a homeomorphism, the inverse map $f^{-1}:Y\longrightarrow X$ has the property that $$f^{-1}\circ f=I_X, \,\,\,\, f\circ f^{-1}=I_Y.$$ Let us consider the induced homomorphisms $$f_*:\pi_1(X)\longrightarrow \pi_1(Y), \,\,\,\, f^{-1}_*:\pi_1(Y)\longrightarrow \pi_1(X)$$ in the fundamental groups. Since $(f^{-1}\circ f)_*=f^{-1}_*\circ f_*$ and $(I_X)_*$ is the identity on $\pi_1(X)$, $f^{-1}_*\circ f_*$ is the identity map on $\pi_1(X)$. Similarly, we can see that $f_*\circ f^{-1}_*$ is the identity map on $\pi_1(Y)$. Therefore, $f_*$ and $f^{-1}_*$ are inverses of each other, i.e., $F=f_*:\pi_1(X)\longrightarrow \pi_1(Y)$ is an isomorphism. ◻
**Theorem 19**. *Let $\Gamma$ be a subgroup of $\mathbb{E}(n)$. Then the orbit space $\mathbb{R}^n/\Gamma$ is a compact flat n-dimensional manifold if and only if $\Gamma$ is torsionfree, discrete and irreducible.*
*Proof.* $(\Rightarrow )$ If $M=\mathbb{R}^n/\Gamma$ is a compact flat n-manifold, then $\pi_1(M)=\Gamma$ and hence $\Gamma$ acts freely on $\mathbb{R}^n$, so $\Gamma$ must be torsionfree (by Proposition [Proposition 10](#P:1){reference-type="ref" reference="P:1"}). Since $M$ is an n-manifold, $\Gamma$ must be discrete. Since $\mathbb{R}^n/\Gamma$ is compact, by Lemma 7 [@K11], $\Gamma$ is irreducible\
$(\Leftarrow )$ Since $\Gamma$ is a torsionfree discrete subgroup of $\mathbb{E}(n)$, then by Proposition [Proposition 10](#P:1){reference-type="ref" reference="P:1"} $\Gamma$ acts freely on $\mathbb{R}^n$ and discrete subgroup of $\mathbb{E}(n)$ acts properly discontinuously on $\mathbb{R}^n$, so $\mathbb{R}^n/\Gamma$ is an n-manifold. By first Bieberbach theorem, $\mathbb{R}^n/\Gamma$ is covered by $\mathbb{R}^n/(\Gamma\cap \mathbb{R}^n)$ which is the n-torus. Hence $\mathbb{R}^n/\Gamma$ is compact. ◻
According to the theorems above, Bieberbach proved that $\Gamma$ and $\Gamma '$ are two isomorphic discrete uniform subgroups (crystallographic subgroups) of $\mathbb{E}(n)$ if and only if $\exists$ $\gamma \in \mathbb{A}(n)$ such that $\gamma \Gamma\gamma ^{-1}=\Gamma'$. This implies that, if $M$ and $M'$ are compact flat Riemannian manifolds with isomorphic fundamental groups $\Gamma$ and $\Gamma'$ respectively, then $M$ is diffeomorphic to $M'$ if and only if $\exists$ $\gamma \in \mathbb{A}(n)$ such that $\gamma \Gamma\gamma ^{-1}=\Gamma'$.\
To show that $M(A)=(S^1)^n/(\mathbb{Z}_2)^n$ is aspherical, note that by Theorem [Theorem 2](#T:1){reference-type="ref" reference="T:1"} the map $p: (S^1)^n \rightarrow (S^1)^n/(\mathbb{Z}_2)^n$ is a covering. Since $q:\mathbb{R}^n \rightarrow (S^1)^n$ is a covering with $\mathbb{R}^n$ its universal covering space, $p\circ q:\mathbb{R}^n \rightarrow (S^1)^n/(\mathbb{Z}_2)^n$ is also a covering. As the universal cover $\mathbb{R}^n$ is contractible, $(S^1)^n/(\mathbb{Z}_2)^n$ is aspherical.
# Three Dimensional Real Bott Tower
We begin this section by defining the action of the group $(\mathbb{Z}_2)^3$ on $(S^1)^3$.
Let $$\label {matrix}
A=\left(\begin{array}{lcr}
1& a_{12} & a_{13}\\
0& 1 & a_{23}\\
0& 0 & 1
\end{array}\right)$$ be one of the 3rd Bott matrices. Then each $a_{ij}$ represents either $0$ or $1$. We use the following notation for $a\in\{0,1\}$ $${\bar z}^a=\left\{\begin{array}{lr} \bar z & \mbox{if}\ a=1\\
z& \mbox{if}\ a=0
\end{array}\right.$$where $\bar z$ is the conjugate of the complex number $z$. If $(g_1,g_2,g_3)$ $\in (\mathbb{Z}_2)^3$ are generators, then $(\mathbb{Z}_2)^3$ acts on $T^3$ as $$\begin{aligned}
\begin{split}
g_1(z_1,z_2,z_3)&=(-z_1,{\bar z_2}^{a_{12}},{\bar z_3}^{a_{13}})\\
g_2(z_1,z_2,z_3)&=(z_1,-z_2,{\bar z_3}^{a_{23}})\\
g_3(z_1,z_2,z_3)&=(z_1,z_2,-z_3).
\end{split}\end{aligned}$$It is easy to see that $(\mathbb{Z}_2)^3$ acts freely on $T^3$ such that the orbit space $M(A)=T^3/(\mathbb{Z}_2)^3$ is a smooth compact manifold. In this way, taking an $n$-th Bott matrix $A$, we obtain a free action of $(\mathbb{Z}_2)^n$ on $T^n$.
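Indeed, the freeness can be seen directly: if $g=g_1^{\varepsilon_1}g_2^{\varepsilon_2}g_3^{\varepsilon_3}\ne 1$ and $i$ is the smallest index with $\varepsilon_i=1$, then the $i$-th coordinate of $g(z_1,z_2,z_3)$ is $-z_i$, since the generators $g_j$ with $j>i$ do not change the $i$-th coordinate and no conjugation occurs in it (conjugation of $z_i$ comes only from $g_j$ with $j<i$); as $-z_i\ne z_i$ on $S^1$, no non-trivial element has a fixed point.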
Let $Pr(x,y,z)=(e^{2\pi {\bf i}x},e^{2\pi {\bf i}y},
e^{2\pi {\bf i}z})=(z_1,z_2,z_3)$ be the canonical covering map of ${\mathbb R}^3$ onto $T^3$. Given $A$ as before, each generator $g_i$ of $(\mathbb{Z}_2)^3$ acting on $T^3$ lifts to a map $\tilde {g}_i:\mathbb{R}^3\rightarrow \mathbb{R}^3$ so that this diagram is commutative $$\begin{CD}
\mathbb{R}^3 @>\tilde {g}_i>> \mathbb{R}^3\\
@VPrVV @VVPrV\\
T^3 @>g_i>> T^3.
\end{CD}$$ Then the fundamental group of $M(A)$ is isomorphic to the group $\Gamma(A)$ generated by $\{\tilde g_1,\tilde g_2,\tilde g_3\}$. Moreover, it forms a torsion-free discrete uniform subgroup of the Euclidean group $\mathbb {E}(n)=\mathbb {R}^n\rtimes {\rm O}(n)$. $\Gamma(A)$ is called *Bieberbach group* (torsion free crystallographic group). We call $\Gamma(A)$ a Bott group associated to the Bott matrix $A$, for brevity.
For example, let $$\label{ex}
A=\left(\begin{array}{lcr}
1& 0 & 1\\
0& 1 & 1\\
0& 0 & 1
\end{array}\right),$$ and by definition we have $$\begin{aligned}
\label{klein}
\begin{split}
g_1(z_1,z_2,z_3)&=(-z_1, z_2,\bar z_3)\\
g_2(z_1,z_2,z_3)&=(z_1,-z_2,\bar z_3)\\
g_3(z_1,z_2,z_3)&=(z_1,z_2,-z_3).
\end{split}\end{aligned}$$ Thus we get $\tilde {g}_1 \footnotesize
\left(\begin{array}{c}
x\\
y\\
z
\end{array}\right)=
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & -1
\end{array}\right)
\left(\begin{array}{c}
x\\
y\\
z
\end{array}\right)+
\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)$. Therefore we may write the generator $\tilde {g}_1= \footnotesize
\Bigl(
\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right),
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & -1
\end{array}\right)\Bigr)=(s_1, M_1)\in \Gamma(A)$ where $s_1\in{\mathbb R}^3, \, M_1\in O(3)$. Similarly, we have\
$\tilde {g}_2=$ $\footnotesize
\Bigl(
\left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right),
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & -1
\end{array}\right)\Bigr)=$ $(s_2, M_2)$, $\tilde {g}_3= \footnotesize
\Bigl(
\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right),
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)\Bigr)$ $=(s_3, I_3)$, where $s_2, s_3 \in {\mathbb R}^3$ and $M_2\in O(3)$. Hence $\Gamma(A) =\langle\tilde g_1, \tilde g_2, \tilde g_3\rangle$.\
By this construction we see that the action of $(\mathbb{Z}_2)^3$ on $(S^1)^3$ corresponds, under the covering map $Pr$, to the action of $\Gamma(A)$ on $\mathbb{R}^3$.
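As a quick check with the multiplication rule [\[Ri2\]](#Ri2){reference-type="eqref" reference="Ri2"}, $\tilde g_1^{\,2}=(M_1s_1+s_1,M_1^2)=((1,0,0)^T,I_3)$, and similarly $\tilde g_2^{\,2}=((0,1,0)^T,I_3)$ and $\tilde g_3^{\,2}=((0,0,1)^T,I_3)$; hence the standard lattice $\mathbb{Z}^3$ of pure translations is contained in $\Gamma(A)$, as the first Bieberbach theorem predicts.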
Given a Bott matrix $A$ as ([\[matrix\]](#matrix){reference-type="ref" reference="matrix"}), $M(A)$ admits a maximal $T^k$-action ($k=1,2,3$) which is obtained as follows.\
Let $t_i\in T^3$, and define $T^3$-action on $(S^1)^3$ by $$\begin{aligned}
t_1(z_1,z_2,z_3)=(e^{2\pi i\theta }z_1,z_2,z_3)\\
t_2(z_1,z_2,z_3)=(z_1,e^{2\pi i\theta }z_2,z_3)\\
t_3(z_1,z_2,z_3)=(z_1,z_2,e^{2\pi i\theta }z_3)\end{aligned}$$ where $0\leq \theta\leq 1$.\
If $t_ig(z_1,z_2,z_3)=gt_i(z_1,z_2,z_3)$ for all $t_i\in T^k$ and all $g\in({\mathbb Z}_2)^3$, then the $T^k$-action descends to $M(A)$; the largest such $k$ ($k=1,2,3$) gives the maximal $T^k$-action.
In the above example, it is easy to check that $t_1g_i=g_it_1$ and $t_2g_i=g_it_2$ for $i=1,2,3$, but $t_3g_1\not=g_1t_3$. Therefore $M(A)$ with representative matrix ([\[ex\]](#ex){reference-type="ref" reference="ex"}) admits a maximal $T^2$-action.
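Indeed, for the matrix ([\[ex\]](#ex){reference-type="ref" reference="ex"}) one computes $g_1t_3(z_1,z_2,z_3)=(-z_1,z_2,e^{-2\pi i\theta}\bar z_3)$ while $t_3g_1(z_1,z_2,z_3)=(-z_1,z_2,e^{2\pi i\theta}\bar z_3)$, and these agree for all $z_3$ only when $e^{2\pi i\theta}=\pm 1$; so the third circle factor does not descend to $M(A)$.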
According to the Bieberbach theorems, the fundamental group $\pi _1(M(A))=\Gamma=\langle \tilde{g}_1,\tilde{g}_2,\tilde{g}_3 \rangle$ is isomorphic to $\pi _1(M(A'))=\Gamma '=\langle \tilde{h}_1,\tilde{h}_2,\tilde{h}_3 \rangle$ if and only if $\exists$ $\gamma \in \mathbb{A}(3)=\mathbb{R}^3\rtimes GL(3,\mathbb{R})$ such that $\gamma \Gamma\gamma ^{-1}=\Gamma'$. So, if we can find $\tilde \varphi =\gamma \in \mathbb{A}(3)$ such that $\gamma \Gamma\gamma ^{-1}=\Gamma'$, then $M(A)$ is diffeomorphic to $M(A')$. This means we have to define a map $\tilde \varphi$ (or $\varphi$) such that the diagrams below are commutative $$\begin{CD}
\mathbb{R}^3 @>\tilde {\varphi }>> \mathbb{R}^3\\
@V\tilde {g}_iVV @VV\tilde {h}_jV\\
\mathbb{R}^3 @>\tilde {\varphi }>> \mathbb{R}^3
\end{CD} \hspace{1cm}
\begin{CD}
T^3 @>\varphi>> T^3\\
@Vg_iVV @VVh_jV\\
T^3 @>\varphi>> T^3
\end{CD}$$ where each generator $g_i$ corresponds to some generator $h_j$ $(i,j=1,2,3)$. This is the tool that we will use later to determine the diffeomorphism classes of $M(A)$.\
If $M(A)$ is a $3$-dimensional real Bott tower, then it is classified as follows.
**Theorem 20**. *Let $M(A)$ be a $3$-dimensional real Bott tower. Then $M(A)$ admits a maximal $T^k$-action $(k=1,2,3)$. There exist exactly $4$ distinct diffeomorphism classes of $M(A)$ in which all $8$ Bott matrices fall into the following classes:*
- *(orientable)($T^3$-action.) The Identity matrix $I_3.$*
- *(nonorientable)($T^2$-action.)\
$A_1=\left(\begin{array}{lcr}
1& 1 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)$, $A_2=\left(\begin{array}{lcr}
1& 0 & 1\\
0& 1 & 1\\
0& 0 & 1
\end{array}\right)$, $A_3=\left(\begin{array}{lcr}
1& 0 & 0\\
0& 1 & 1\\
0& 0 & 1
\end{array}\right)$,\
*
- *$A_4=\left(\begin{array}{lcr}
1& 0 & 1\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)$.*
- *(nonorientable)($S^1$-action.)\
$A_5=\left(\begin{array}{lcr}
1& 1 & 0\\
0& 1 & 1\\
0& 0 & 1
\end{array}\right)$, $A_6=\left(\begin{array}{lcr}
1& 1 & 1\\
0& 1 & 1\\
0& 0 & 1
\end{array}\right)$.*
- *(orientable)($S^1$-action.) $A_7=\left(\begin{array}{lcr}
1& 1& 1\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)$.*
*Moreover, Bott matrices in each class are conjugate.*
*Proof.* For simplicity, notation \"$\approx$\" indicates diffeomorphic. The proof is organized by the following steps.
1. Determine conjugacy classes of the 8 Bott matrices (we used Maple to determine this; a brute-force sketch is given after this list).
2. For each conjugacy class, we check whether $M(A), M(A')$ with representative matrices $A,A'$ are diffeomorphic or not by using Bieberbach theorems.
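The conjugacy computation in step 1 can also be reproduced by brute force. The following sketch is not the authors' Maple code; it assumes that conjugacy of Bott matrices is taken in $GL(3,\mathbb{Z}_2)$, whose $168$ elements are enumerated directly.

``` python
from itertools import product

def matmul(P, Q):
    # product of two 3x3 matrices over F_2
    return tuple(tuple(sum(P[i][k] * Q[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

def det2(P):
    # determinant of a 3x3 integer matrix, reduced mod 2
    return (P[0][0] * (P[1][1] * P[2][2] - P[1][2] * P[2][1])
            - P[0][1] * (P[1][0] * P[2][2] - P[1][2] * P[2][0])
            + P[0][2] * (P[1][0] * P[2][1] - P[1][1] * P[2][0])) % 2

# all invertible 3x3 matrices over F_2 (there are 168 of them)
all_mats = [tuple(tuple(bits[3 * i:3 * i + 3]) for i in range(3))
            for bits in product((0, 1), repeat=9)]
GL3 = [P for P in all_mats if det2(P) == 1]

# the 8 Bott matrices, indexed by the entries (a12, a13, a23)
bott = {(a, b, c): ((1, a, b), (0, 1, c), (0, 0, 1))
        for a, b, c in product((0, 1), repeat=3)}

def conjugate(A, B):
    # A ~ B over GL(3, F_2) iff P*A = B*P for some invertible P
    return any(matmul(P, A) == matmul(B, P) for P in GL3)

classes = []
for label, A in bott.items():
    for cls in classes:
        if conjugate(A, bott[cls[0]]):
            cls.append(label)
            break
    else:
        classes.append([label])

for cls in classes:
    print(cls)
```

Under this assumption the script returns three conjugacy classes: the identity matrix, the five matrices with $\operatorname{rank}(A-I)=1$, and the two matrices with $\operatorname{rank}(A-I)=2$. In particular $A_7$ is conjugate to $A_1$ even though, as shown below, $M(A_7)$ is not diffeomorphic to $M(A_1)$; this is why step 2 is needed.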
A detailed proof is as follows:
- $M(I_3)$ is not diffeomorphic to any $M(A_i)$ for $i=1,2,3,4,5,6,7.$
- $M(A_1)\approx M(A_2)\approx M(A_3)\approx M(A_4)$
- $M(A_5)\approx M(A_6)$
- $M(A_1)$ is not diffeomorphic to $M(A_5)$
- $M(A_1)$ is not diffeomorphic to $M(A_7)$
- $M(A_5)$ is not diffeomorphic to $M(A_7)$
a). This is clear, because the holonomy group of $I_3$ is trivial.\
b).\
$\bullet$ $M(A_{1})\approx M(A_{2})$.\
For $$\begin{aligned}
&A_{1}=
\left(\begin{array}{ccc}
1& 1 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{2}=
\left(\begin{array}{ccc}
1& 0 & 1 \\
0& 1 & 1 \\
0& 0 & 1 \\
\end{array}\right)
\\
&g_1(z_1,z_2,z_3)=(-z_1,\bar z_2,z_3) &h_1h_2(z_1,z_2,z_3)=(-z_1,-z_2,z_3)\\
&g_2(z_1,z_2,z_3)=(z_1,-z_2,z_3) &h_2(z_1,z_2,z_3)=(z_1,-z_2,\bar z_3)\\
&g_3(z_1,z_2,z_3)=(z_1,z_2,-z_3) &h_3(z_1,z_2,z_3)=(z_1, z_2,-z_3)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{1}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{2}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3)=(z_3,z_1z_3,z_2)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_3,z_1z_3,z_2)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,\bar z_2,z_3) @>\varphi>> (z_3,-z_1z_3,\bar z_2)
\end{CD}
\hspace{1cm}
\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_3,z_1z_3,z_2)\\
@Vg_3VV @Vh_1h_2 VV\\
(z_1,z_2,-z_3) @>\varphi>> (-z_3,-z_1z_3,z_2)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_3,z_1z_3,z_2)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3) @>\varphi>> (z_3,z_1z_3,-z_2).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{ccc}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
0& 0 & 1 \\
1& 0 & 1 \\
0& 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{1}\gamma ^{-1}=\Gamma_{2}.$\
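As an illustration of how such a $\gamma$ is verified with the conjugation formula, write the first generator of $\Gamma_{1}$ as $\tilde g_1=(s_1,M_1)$ with $s_1=(\frac{1}{2},0,0)^{T}$ and $M_1=\mathrm{diag}(1,-1,1)$. Since the translation part of $\gamma$ is $0$,
$$\gamma\,\tilde g_1\,\gamma^{-1}=\bigl(Bs_1,\;BM_1B^{-1}\bigr)
=\Bigl(\left(\begin{array}{c} 0\\ \frac{1}{2}\\ 0 \end{array}\right),
\left(\begin{array}{ccc} 1& 0 & 0 \\ 0& 1 & 0 \\ 0& 0 & -1 \end{array}\right)\Bigr),$$
which is exactly the second generator of $\Gamma_{2}$, in accordance with the diagram $g_1\mapsto h_2$; the other two generators are checked in the same way.\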
$\bullet M(A_{1})\approx M(A_{3})$.\
For $$\begin{aligned}
&A_{1}=
\left(\begin{array}{ccc}
1& 1 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{3}=
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 1 \\
0& 0 & 1 \\
\end{array}\right)
\\
&g_1(z_1,z_2,z_3)=(-z_1,\bar z_2,z_3) &h_1(z_1,z_2,z_3)=(-z_1,z_2,z_3)\\
&g_2(z_1,z_2,z_3)=(z_1,-z_2,z_3) &h_2(z_1,z_2,z_3)=(z_1,-z_2,\bar z_3)\\
&g_3(z_1,z_2,z_3)=(z_1,z_2,-z_3) &h_3(z_1,z_2,z_3)=(z_1, z_2,-z_3)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{1}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{3}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3)=(z_3,z_1,z_2)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_3,z_1,z_2)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,\bar z_2,z_3) @>\varphi>> (z_3,-z_1,\bar z_2)
\end{CD}
\hspace{1cm}
\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_3,z_1,z_2)\\
@Vg_3VV @Vh_1 VV\\
(z_1,z_2,-z_3) @>\varphi>> (-z_3,z_1,z_2)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_3,z_1,z_2)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3) @>\varphi>> (z_3,z_1,-z_2).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{ccc}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
0& 0 & 1 \\
1& 0 & 0 \\
0& 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{1}\gamma ^{-1}=\Gamma_{3}$.\
$\bullet M(A_{1})\approx M(A_{4})$.\
For $$\begin{aligned}
&A_{1}=
\left(\begin{array}{ccc}
1& 1 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{4}=
\left(\begin{array}{ccc}
1& 0 & 1 \\
0& 1 & 0 \\
0& 0 & 1 \\
\end{array}\right)
\\
&g_1(z_1,z_2,z_3)=(-z_1,\bar z_2,z_3) &h_1(z_1,z_2,z_3)=(-z_1,z_2,\bar z_3)\\
&g_2(z_1,z_2,z_3)=(z_1,-z_2,z_3) &h_2(z_1,z_2,z_3)=(z_1,-z_2,z_3)\\
&g_3(z_1,z_2,z_3)=(z_1,z_2,-z_3) &h_3(z_1,z_2,z_3)=(z_1, z_2,-z_3)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{1}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$ $$\begin{aligned}
\Gamma_{4}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0\\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3)=(z_1,z_3,z_2)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_1,z_3,z_2)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,z_3) @>\varphi>> (-z_1,z_3,\bar z_2)
\end{CD}
\hspace{1cm}
\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_1,z_3,z_2)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3) @>\varphi>> (z_1,-z_3,z_2)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_1,z_3,z_2)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3) @>\varphi>> (z_1,z_3,-z_2).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{ccc}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 0 & 1 \\
0& 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{1}\gamma ^{-1}=\Gamma_{4}$.\
c).\
$\bullet M(A_{5})\approx M(A_{6})$.\
For $$\begin{aligned}
&A_{5}=
\left(\begin{array}{ccc}
1& 1 & 0 \\
0& 1 & 1 \\
0& 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{6}=
\left(\begin{array}{ccc}
1& 1 & 1 \\
0& 1 & 1 \\
0& 0 & 1 \\
\end{array}\right)
\\
&g_1(z_1,z_2,z_3)=(-z_1,\bar z_2,z_3) &h_1h_2(z_1,z_2,z_3)=(-z_1,-\bar z_2,z_3)\\
&g_2(z_1,z_2,z_3)=(z_1,-z_2,\bar z_3) &h_2(z_1,z_2,z_3)=(z_1,-z_2,\bar z_3)\\
&g_3(z_1,z_2,z_3)=(z_1,z_2,-z_3) &h_3(z_1,z_2,z_3)=(z_1, z_2,-z_3)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{5}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$ $$\begin{aligned}
\Gamma_{6}=
\Bigl < \left(\begin{array}{ccc}
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{ccc}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3)=(z_1,iz_2,z_3)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_1,iz_2,z_3)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3) @>\varphi>> (-z_1,-\overline{iz_2},z_3)
\end{CD}
\hspace{1cm}
\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_1,iz_2,z_3)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3) @>\varphi>> (z_1,iz_2,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3) @>\varphi>>(z_1,iz_2,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3) @>\varphi>> (z_1,-iz_2,\bar z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{ccc}
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{5}\gamma ^{-1}=\Gamma_{6}$.\
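Here the translation part $b=(0,\frac{1}{4},0)^{T}$ of $\gamma$ is nonzero and its linear part is the identity, so the full conjugation formula is used. For the first generator $\tilde g_1=(s_1,M_1)$ of $\Gamma_{5}$, with $s_1=(\frac{1}{2},0,0)^{T}$ and $M_1=\mathrm{diag}(1,-1,1)$,
$$\gamma\,\tilde g_1\,\gamma^{-1}=\bigl(b+s_1-M_1b,\;M_1\bigr)
=\Bigl(\left(\begin{array}{c} \frac{1}{2}\\ \frac{1}{2}\\ 0 \end{array}\right),
\left(\begin{array}{ccc} 1& 0 & 0 \\ 0& -1 & 0 \\ 0& 0 & 1 \end{array}\right)\Bigr),$$
which is the first generator of $\Gamma_{6}$; the second and third generators are checked in the same way.\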
d). First, we determine the holonomy groups of $\Gamma_1$ and $\Gamma_5$. If $L:\mathbb{E}(3)\longrightarrow O(3)$ is the projection, then $L(\Gamma )=\Phi$ is called the holonomy group of $\Gamma$. Hence $$\begin{aligned}
\Phi_1=&
\Bigl <I,
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right)\Bigr>=\mathbb{Z}_2\\
\Phi_5=&
\Bigl<I,
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0 \\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right)
\Bigr >=\mathbb{Z}_2 \times \mathbb{Z}_2.\end{aligned}$$ If $M(A_1)$ were diffeomorphic to $M(A_5)$, then by Bieberbach's theorem there would exist $B\in GL(3,\mathbb{R})$ with $(b,B)=\gamma \in \mathbb{A}(3)$ such that $L(\gamma \Gamma_1 \gamma^{-1} )=BL(\Gamma_1)B^{-1}=L(\Gamma_5)$. This is impossible, because $L(\Gamma_1)=\mathbb{Z}_2$ and $L(\Gamma_5)=\mathbb{Z}_2 \times \mathbb{Z}_2$.\
e). Let $\Phi_1=
\Bigl <I,
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right)=P \Bigr> \hspace{0.5cm}
\Phi_7=
\Bigl<I,
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0 \\
0& 0 & -1
\end{array}\right)=Q \Bigr>$. If $M(A_1)$ were diffeomorphic to $M(A_7)$, then there would exist $B$ such that $BPB^{-1}=Q$. Comparing characteristic polynomials, $|BPB^{-1}-\lambda I|=|B||P-\lambda I||B^{-1}|=|P-\lambda I|$, so $|P-\lambda I|=|Q-\lambda I|$, which means that the eigenvalues of $P$ and $Q$ are the same. This is a contradiction, because the eigenvalues of $P$ are $1,1$ and $-1$, but the eigenvalues of $Q$ are $-1,-1$ and $1$.\
f). The argument is similar to d), because the holonomy of $\Gamma_7$ is $\mathbb{Z}_2$ while that of $\Gamma_5$ is $\mathbb{Z}_2 \times \mathbb{Z}_2$.\
We observe that the sufficient condition of the conjecture holds for $n=3$. ◻
# Four Dimensional Real Bott Tower
In this case there are $64$ Bott matrices $A$, each corresponding to an action of $(\mathbb{Z}_2)^4$ on $(S^1)^4$. The definition of the action is similar to that of the three-dimensional real Bott tower.
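Explicitly, as can be read off from the generators listed in the proof below, $g_i$ multiplies the $i$-th coordinate by $-1$ and conjugates $z_j$ whenever $A_{ij}=1$. For instance, for the matrix $A_{11}$ of Theorem 21, the second row $(0,1,1,1)$ gives
$$g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4).$$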
**Theorem 21**. *Let $M(A)$ be a $4$-dimensional real Bott tower. Then $M(A)$ admits a maximal $T^k$-action $(k=1,2,3,4)$. There exist exactly $12$ diffeomorphism classes of $M(A)$, and the $64$ Bott matrices fall into the following classes:*
- *a) (orientable) ($T^4$-action.) The Identity matrix $I_4.$*
- *b) (orientable) ($T^2$-action.) $$\begin{aligned}
\hspace{1cm}
A_{11}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{21}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{31}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a5}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a17}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right) .\end{aligned}$$*
- *c) (orientable) ($S^1$-action.) $$\begin{aligned}
A_{a15}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a27}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right) .\end{aligned}$$*
- *d) (nonorientable) ($T^3$-action.) $$\begin{aligned}
\hspace{1cm}
A_{2}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{3}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{4}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{5}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{6}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{7}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{8}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{9}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{17}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{25}=&
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a1}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
- *e) (nonorientable) ($S^1$-action.) $A_{a21}=\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)$.*
- *f) (nonorientable) ($T^2$-action.) $$\begin{aligned}
\hspace{1cm}
A_{a3}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a7}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{10}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{12}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{18}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{22}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{26}=&
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{32}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a9}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a25}=&
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
- *g) (nonorientable) ($T^2$-action.) $$\begin{aligned}
\hspace{1cm}
A_{a4}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a8}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{14}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{16}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{20}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{24}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{28}=&
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{30}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
- *h) (nonorientable) ($S^1$-action.) $$\begin{aligned}
\hspace{1.3cm}
A_{a13}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a29}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a18}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a19}=&
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a20}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a22}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a23}=&
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a24}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
- *i) (nonorientable) ($S^1$-action.)\
$A_{a11}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0\\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{a31}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1\\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right).$*
- *j) (nonorientable) ($T^2$-action.) $$\begin{aligned}
\hspace{1cm}
A_{13}=&
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{15}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{19}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{23}=&
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{27}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
A_{29}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a2}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a6}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
- *k) (nonorientable) ($S^1$-action.) $$\begin{aligned}
\hspace{1.3cm}
A_{a10}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a12}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a26}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a32}=&
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
- *l) (nonorientable) ($S^1$-action.) $$\begin{aligned}
\hspace{1.3cm}
A_{a14}=&
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a16}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right),
A_{a28}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)\\
A_{a30}=&
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right).\end{aligned}$$*
*Moreover, Bott matrices in each class are conjugate.*
*Proof.* As in the $3$-dimensional case, the proof is organized in the following steps.
1. Determine the conjugacy classes of the 64 Bott matrices (we used Maple for this computation).
2. For representative matrices $A$ and $A'$, check whether $M(A)$ and $M(A')$ are diffeomorphic or not by using the Bieberbach theorems.
The detailed proof is as follows:
- $M(I_4)$ is not diffeomorphic to any other $M(A)$.
- $M(A_{11})\approx M(A_{21})\approx M(A_{31})\approx M(A_{a5})\approx M(A_{a17})$.
- $M(A_{a15})\approx M(A_{a27})$.
- $M(A_{11})$ is not diffeomorphic to $M(A_{a15})$.
- $M(A_{2})\approx M(A_{3})\approx M(A_{4})\approx M(A_{5})\approx M(A_{6})
\approx M(A_{7})\approx M(A_{8})\approx M(A_{9})\approx M(A_{17})\approx M(A_{25})\approx M(A_{a1})$.
- $M(A_{11})$ is not diffeomorphic to $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{a15})$ is not diffeomorphic to $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{2})$ is not diffeomorphic to $M(A_{a21})$.
- $M(A_{a3})\approx M(A_{a7})\approx M(A_{10})\approx M(A_{12})\approx M(A_{18})
\approx M(A_{22})\approx M(A_{26})\approx M(A_{32})\approx M(A_{a9})\approx M(A_{a25})$.
- $M(A_{a3})$ is not diffeomorphic to $M(A_{11})$, $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{a3})$ is not diffeomorphic to $M(A_{a15})$.
- $M(A_{a4})\approx M(A_{a8})\approx M(A_{14})\approx M(A_{16})\approx M(A_{20})
\approx M(A_{24})\approx M(A_{28})\approx M(A_{30})$.
- $M(A_{a4})$ is not diffeomorphic to $M(A_{11})$, $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{a4})$ is not diffeomorphic to $M(A_{a15})$.
- $M(A_{14})$ is not diffeomorphic to $M(A_{a3})$.
- $M(A_{a13})\approx M(A_{a29})\approx M(A_{a18})\approx M(A_{a19})\approx M(A_{a20})
\approx M(A_{a22})\approx M(A_{a23})\approx M(A_{a24})$.
- $M(A_{a13})$ is not diffeomorphic to $M(A_{11})$, $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{a13})$ is not diffeomorphic to $M(A_{a15})$, $M(A_{a3})$ and $M(A_{a4})$.
- $M(A_{a11})\approx M(A_{a31})$.
- $M(A_{a11})$ is not diffeomorphic to $M(A_{11})$, $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{a11})$ is not diffeomorphic to $M(A_{a15})$, $M(A_{a3})$ and $M(A_{a4})$ .
- $M(A_{a11})$ is not diffeomorphic to $M(A_{a13})$.
- $M(A_{13})\approx M(A_{15})\approx M(A_{19})\approx M(A_{23})\approx M(A_{27})
\approx M(A_{29})\approx M(A_{a2})\approx M(A_{a6})$.
- $M(A_{13})$ is not diffeomorphic to $M(A_{11})$, $M(A_{2})$ and $M(A_{a21})$.
- $M(A_{13})$ is not diffeomorphic to $M(A_{a15})$, $M(A_{a13})$ and $M(A_{a11})$.
- $M(A_{13})$ is not diffeomorphic to $M(A_{a3})$.
- $M(A_{13})$ is not diffeomorphic to $M(A_{a4})$.
- $M(A_{a10})\approx M(A_{a12})\approx M(A_{a26})\approx M(A_{a32})$, and\
$M(A_{a14})\approx M(A_{a16})\approx M(A_{a28})\approx M(A_{a30})$.
- $M(A_{a10})$ and $M(A_{a14})$ are not diffeomorphic to any $M(A)$ in classes **a)-j)** as written in the theorem.
- $M(A_{a10})$ is not diffeomorphic to $M(A_{a14})$.
**(1).** This is clear because the holonomy of $I_4$ is trivial.\
**(2).**\
$\bullet$ $M(A_{11})\approx M(A_{21})$.\
For $$\begin{aligned}
&A_{11}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{21}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)\end{aligned}$$ $$\begin{aligned}
&g_1(z_1,z_2,z_3,z_4)=(-z_1, z_2,z_3,z_4). &h_1(z_1,z_2,z_3,z_4)=(-z_1, z_2,\bar z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1, -z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{11}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{21}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_1,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,-z_1,z_3,z_4)
\end{CD}
\hspace{.5cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_2,z_1,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_2,z_1,\bar z_3,\bar z_4)
\end{CD}
\hspace{.5cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_1,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{11}\gamma ^{-1}=\Gamma_{21}$.\
$\bullet M(A_{11}) \approx M(A_{31})$.\
For $$\begin{aligned}
&A_{11}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{31}&=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)&
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1, z_2,z_3,z_4). \hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)&=(-z_1, z_2,\bar z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2h_1(z_1,z_2,z_3,z_4)&=(-z_1, -z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)&=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)&=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{11}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{31}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1z_2,z_1,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_1,z_3,z_4)\\
@Vg_1VV @Vh_2h_1 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1z_2,-z_1,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_1,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1z_2,z_1,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_1,z_3,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_1z_2,z_1,\bar z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_1,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1z_2,z_1,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 1 & 0 & 0\\
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{11}\gamma ^{-1}=\Gamma_{31}$.\
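Note that here the linear part $B$ of $\gamma$ is not a permutation matrix, but the conjugation formula (now in $\mathbb{A}(4)$) is applied in the same way. For the second generator $\tilde g_2=(s_2,M_2)$ of $\Gamma_{11}$, with $s_2=(0,\frac{1}{2},0,0)^{T}$ and $M_2=\mathrm{diag}(1,1,-1,-1)$,
$$\gamma\,\tilde g_2\,\gamma^{-1}=\bigl(Bs_2,\;BM_2B^{-1}\bigr)
=\Bigl(\left(\begin{array}{c} \frac{1}{2}\\ 0\\ 0\\ 0 \end{array}\right),
\mathrm{diag}(1,1,-1,-1)\Bigr),$$
which is the first generator of $\Gamma_{31}$, in accordance with the diagram $g_2\mapsto h_1$.\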
$\bullet M(A_{11})\approx M(A_{a5})$.\
For $$\begin{aligned}
&A_{11}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a5}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1, z_2,z_3,z_4). \hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1, -z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{11}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a5}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_3,z_1,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_1VV @Vh_3 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,z_3,-z_1,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_2,-z_3,z_1,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_2,\bar z_3,z_1,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_3,z_1,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
1& 0 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{11}\gamma ^{-1}=\Gamma_{a5}$.\
$\bullet M(A_{11})\approx M(A_{a17})$.\
For $$\begin{aligned}
&A_{11}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a17}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1, z_2,z_3,z_4). \hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1, -z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{11}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a17}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_3,z_4,z_1)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_4,z_1)\\
@Vg_1VV @Vh_4 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,z_3,z_4,-z_1)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_4,z_1)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_2,-z_3,z_4,z_1)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_4,z_1)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_2,\bar z_3,\bar z_4,z_1)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_4,z_1)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_3,-z_4,z_1).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1\\
1& 0 & 0 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{11}\gamma ^{-1}=\Gamma_{a17}$.\
**(3).**\
$\bullet M(A_{a15})\approx M(A_{a27})$.\
For $$\begin{aligned}
&A_{a15}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a27}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a15}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a27}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_4,z_3)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,z_3,\bar z_4) @>\varphi>> (-z_1,\bar z_2,\bar z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (z_1,-z_2,\bar z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a15}\gamma ^{-1}=\Gamma_{a27}$.\
**(4).** They are not diffeomorphic, because the holonomies of $\Gamma _{11}$ and $\Gamma _{a15}$ are $\mathbb{Z}_2$ and $\mathbb{Z}_2 \times \mathbb{Z}_2$, respectively.\
**(5).**\
$\bullet$ $M(A_{3})\approx M(A_{2})$.\
For $$\begin{aligned}
&A_{3}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{3}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{2}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_3,z_2,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,z_3,z_2,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,-z_3,z_2,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,z_3,-z_2,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_3,z_2,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{3}\gamma ^{-1}=\Gamma_{2}$.\
$\bullet$ $M(A_{2})\approx M(A_{4})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{4}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{4}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_2z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,z_2,z_2z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-z_2z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,-z_2,-z_2z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,z_2z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{4}$.\
$\bullet$ $M(A_{2})\approx M(A_{5})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{5}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{5}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_3,z_2,z_1,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_2,z_1,z_4)\\
@Vg_1VV @Vh_3 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_3,z_2,-z_1,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_2,z_1,z_4)\\
@Vg_3VV @Vh_1 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (-z_3,z_2,z_1,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_2,z_1,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_3,-z_2,z_1,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_2,z_1,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_3,z_2,z_1,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 0 & 1 & 0\\
0& 1 & 0 & 0\\
1& 0 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{5}$.\
$\bullet$ $M(A_{2})\approx M(A_{6})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{6}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_3(z_1,z_2,z_3,z_4)=(-z_1,z_2,-z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{6}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_1z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_1VV @Vh_1h_3 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,z_2,-z_1z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-z_1z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,-z_2,z_1z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,z_1z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
1& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{6}$.\
$\bullet$ $M(A_{3})\approx M(A_{7})$.\
For $$\begin{aligned}
&A_{3}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{7}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{3}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{7}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,-z_1z_2,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_1z_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,-z_1z_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_1z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{3}\gamma ^{-1}=\Gamma_{7}$.\
$\bullet$ $M(A_{2})\approx M(A_{8})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{8}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{8}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_2z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_2z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,-z_1z_2,z_2z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_2z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_1z_2,-z_2z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_2z_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,-z_1z_2,-z_2z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_2z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_1z_2,z_2z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{8}$.\
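The conjugation claims in this section are finite computations and can be double-checked mechanically. The following sketch (our own verification aid, not part of the proof; the helper names `conjugate` and `same_generator` are hypothetical) stores an affine map $x\mapsto Ax+t$ as a pair $(t,A)$, conjugates the four listed generators of $\Gamma_{2}$ by the $\gamma$ above, and compares the results with the listed generators of $\Gamma_{8}$, translation parts being compared modulo the lattice $\mathbb{Z}^{4}$.

```python
# Sketch: check numerically that gamma Gamma_2 gamma^{-1} reproduces the
# generators of Gamma_8, with affine maps stored as pairs (t, A): x -> A x + t.
import numpy as np

def conjugate(gamma, g):
    """gamma g gamma^{-1} for affine maps gamma = (s, B), g = (t, A)."""
    s, B = gamma
    t, A = g
    A_new = B @ A @ np.linalg.inv(B)
    t_new = s + B @ t - A_new @ s
    return t_new, A_new

def same_generator(g, h):
    """Equality of affine maps up to an integer translation (the lattice Z^4)."""
    return np.allclose(g[1], h[1]) and np.allclose((g[0] - h[0]) % 1.0, 0.0)

I4 = np.eye(4)
D4 = np.diag([1.0, 1.0, 1.0, -1.0])     # the linear part diag(1, 1, 1, -1)
half = lambda i: 0.5 * np.eye(4)[i]     # translation by e_i / 2

gamma = (np.zeros(4),
         np.array([[1.0, 0.0, 0.0, 0.0],
                   [1.0, 1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]]))

Gamma_2 = [(half(0), I4), (half(1), I4), (half(2), D4), (half(3), I4)]
Gamma_8 = [(np.array([0.5, 0.5, 0.0, 0.0]), I4),
           (np.array([0.0, 0.5, 0.5, 0.0]), I4),
           (half(2), D4),
           (half(3), I4)]

for g, h in zip(Gamma_2, Gamma_8):
    assert same_generator(conjugate(gamma, g), h)
print("gamma Gamma_2 gamma^{-1} matches the listed generators of Gamma_8")
```

The same two helpers apply verbatim, with the appropriate generators and $\gamma$, to each of the other conjugations exhibited in this section.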
$\bullet$ $M(A_{2})\approx M(A_{9})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{9}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{9}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_3,z_4,z_2)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_4,z_2)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,z_3,z_4,z_2)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_4,z_2)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,-z_3,\bar z_4,z_2)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_4,z_2)\\
@Vg_2VV @Vh_4 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,z_3,z_4,-z_2)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_4,z_2)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_3,-z_4,z_2).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1\\
0& 1 & 0 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{9}$.\
$\bullet$ $M(A_{2})\approx M(A_{17})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{17}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{17}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_3,z_1,z_4,z_2)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_4,z_2)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_3,-z_1,z_4,z_2)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_4,z_2)\\
@Vg_3VV @Vh_1 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (-z_3,z_1,\bar z_4,z_2)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_4,z_2)\\
@Vg_2VV @Vh_4 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_3,z_1,z_4,-z_2)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_4,z_2)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_3,z_1,-z_4,z_2).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 0 & 1 & 0\\
1& 0 & 0 & 0\\
0& 0 & 0 & 1\\
0& 1 & 0 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{17}$.\
$\bullet$ $M(A_{2})\approx M(A_{25})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{25}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{25}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_2z_3,z_4,z_1)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_2z_3,z_4,z_1)\\
@Vg_1VV @Vh_4 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,z_2z_3,z_4,-z_1)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_2z_3,z_4,z_1)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_2,-z_2z_3,\bar z_4,z_1)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_2z_3,z_4,z_1)\\
@Vg_2VV @Vh_1h_2 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (-z_2,-z_2z_3,z_4,z_1)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_2z_3,z_4,z_1)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_2z_3,-z_4,z_1).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 0 & 1\\
1& 0 & 0 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{25}$.\
$\bullet$ $M(A_{2})\approx M(A_{a1})$.\
For $$\begin{aligned}
&A_{2}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a1}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{2}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a1}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_3,z_4,z_1,z_2)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_4,z_1,z_2)\\
@Vg_1VV @Vh_3 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_3,z_4,-z_1,z_2)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_4,z_1,z_2)\\
@Vg_3VV @Vh_1 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (-z_3,\bar z_4,z_1,z_2)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_4,z_1,z_2)\\
@Vg_2VV @Vh_4 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_3,z_4,z_1,-z_2)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_4,z_1,z_2)\\
@Vg_4VV @Vh_2 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_3,-z_4,z_1,z_2).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 0 & 1 & 0\\
0& 0 & 0 & 1\\
1& 0 & 0 & 0\\
0& 1 & 0 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{2}\gamma ^{-1}=\Gamma_{a1}$.\
If $M(A_{11})$ were diffeomorphic to $M(A_{2})$ or to $M(A_{a21})$, then by Bieberbach's theorem there would exist $B\in GL(4,\mathbb{R})$ such that $BPB^{-1}=Q$ or $BPB^{-1}=R$, respectively, where $\Phi_{11}=\footnotesize
\Bigl <I, P=\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right)\Bigr>$, $\Phi_{2}=\footnotesize
\Bigl <I, Q=\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right)\Bigr>$, $\Phi_{a21}=\footnotesize
\Bigl <I, R=\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right)\Bigr>$. As in the proof of Theorem [Theorem 20](#T:3){reference-type="ref" reference="T:3"}(e), an eigenvalue argument gives a contradiction: the eigenvalues of $P$ are $1,1,-1,-1$, whereas those of $Q$ are $1,1,1,-1$ and those of $R$ are $1,-1,-1,-1$.\
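As a quick numerical illustration of this eigenvalue obstruction, the spectra of $P$, $Q$ and $R$ can be listed directly; the sketch below (our own, with the matrices copied from the definitions of $\Phi_{11}$, $\Phi_{2}$ and $\Phi_{a21}$ above) shows that the spectrum of $P$ differs from those of $Q$ and $R$, so no conjugating matrix $B$ can exist.

```python
# Sketch of the eigenvalue obstruction: similar matrices share eigenvalues,
# but P, Q and R (copied from Phi_11, Phi_2 and Phi_a21 above) do not.
import numpy as np

P = np.diag([1.0, 1.0, -1.0, -1.0])
Q = np.diag([1.0, 1.0, 1.0, -1.0])
R = np.diag([1.0, -1.0, -1.0, -1.0])

for name, M in [("P", P), ("Q", Q), ("R", R)]:
    print(name, sorted(np.linalg.eigvals(M).real))
# P: [-1, -1, 1, 1], Q: [-1, 1, 1, 1], R: [-1, -1, -1, 1]
```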
Similar to the proof of (4) above.\
Similar to the proof of (6) above.\
\
$\bullet$ $M(A_{a3})\approx M(A_{a7})$.\
For $$\begin{aligned}
&A_{a3}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a7}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a3}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a7}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\overline{iz_2},z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,iz_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,-iz_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a3}\gamma ^{-1}=\Gamma_{a7}.$\
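Since this $\gamma$ is a pure translation, say $\gamma=(s,I)$ with $s=(0,\tfrac14,0,0)$, the conjugation reduces to $\gamma(t,A)\gamma^{-1}=(t+s-As,\,A)$. The sketch below (our own check; the generator lists are copied from $\Gamma_{a3}$ and $\Gamma_{a7}$ above) confirms this identity on the generators.

```python
# Sketch: for a pure translation gamma = (s, I) one has
# gamma (t, A) gamma^{-1} = (t + s - A s, A); check that this sends the listed
# generators of Gamma_a3 to the listed generators of Gamma_a7.
import numpy as np

s = np.array([0.0, 0.25, 0.0, 0.0])
I4 = np.eye(4)
gens_a3 = [(np.array([0.5, 0.0, 0.0, 0.0]), np.diag([1.0, -1.0, 1.0, 1.0])),
           (np.array([0.0, 0.5, 0.0, 0.0]), np.diag([1.0, 1.0, 1.0, -1.0])),
           (np.array([0.0, 0.0, 0.5, 0.0]), I4),
           (np.array([0.0, 0.0, 0.0, 0.5]), I4)]
gens_a7 = [(np.array([0.5, 0.5, 0.0, 0.0]), np.diag([1.0, -1.0, 1.0, 1.0])),
           (np.array([0.0, 0.5, 0.0, 0.0]), np.diag([1.0, 1.0, 1.0, -1.0])),
           (np.array([0.0, 0.0, 0.5, 0.0]), I4),
           (np.array([0.0, 0.0, 0.0, 0.5]), I4)]

for (t, A), (t_target, A_target) in zip(gens_a3, gens_a7):
    t_conj = t + s - A @ s                  # translation part of gamma g gamma^{-1}
    assert np.allclose(A, A_target)
    assert np.allclose(t_conj % 1.0, t_target % 1.0)
print("gamma Gamma_a3 gamma^{-1} matches the listed generators of Gamma_a7")
```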
$\bullet$ $M(A_{10})\approx M(A_{a3})$.\
For $$\begin{aligned}
&A_{10}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a3}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a3}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_3,z_1,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_1VV @Vh_3 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,z_3,-z_1,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_2,-z_3,z_1,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_2,\bar z_3,z_1,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_3,z_1,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
1& 0 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{10}\gamma ^{-1}=\Gamma_{a3}$.\
$\bullet$ $M(A_{a9})\approx M(A_{a7})$.\
For $$\begin{aligned}
&A_{a9}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a7}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a9}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a7}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_4,z_3)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_4,z_3)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\overline{iz_2},z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,iz_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_4,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-iz_2,z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a9}\gamma ^{-1}=\Gamma_{a7}.$\
$\bullet$ $M(A_{a9})\approx M(A_{a25})$.\
For $$\begin{aligned}
&A_{a9}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a25}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a9}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a25}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\overline{iz_2},z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,iz_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-iz_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a9}\gamma ^{-1}=\Gamma_{a25}.$\
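The commutativity of the four diagrams above amounts to the identities $\varphi\circ g_i=h\circ\varphi$ for the indicated words $h$ in the $h_j$. These are easy to verify symbolically; the sketch below (our own helper names, not part of the proof) does so for $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$ and the maps listed for the pair $(A_{a9},A_{a25})$.

```python
# Symbolic sketch: verify phi o g_i = h o phi for phi(z) = (z1, i z2, z3, z4)
# and the maps g_i, h_j listed for the pair (A_a9, A_a25).
import sympy as sp

z = sp.symbols('z1 z2 z3 z4', complex=True)
c = sp.conjugate

phi  = lambda w: (w[0], sp.I * w[1], w[2], w[3])
g1   = lambda w: (-w[0],  c(w[1]),   w[2],   w[3])
g2   = lambda w: ( w[0], -w[1],    c(w[2]),  w[3])
g3   = lambda w: ( w[0],  w[1],   -w[2],     w[3])
g4   = lambda w: ( w[0],  w[1],    w[2],    -w[3])
h1h2 = lambda w: (-w[0], -c(w[1]),   w[2],   w[3])
h2   = lambda w: ( w[0], -w[1],    c(w[2]),  w[3])
h3   = lambda w: ( w[0],  w[1],   -w[2],     w[3])
h4   = lambda w: ( w[0],  w[1],    w[2],    -w[3])

for g, h in [(g1, h1h2), (g2, h2), (g3, h3), (g4, h4)]:
    lhs, rhs = phi(g(z)), h(phi(z))
    assert all(sp.simplify(a - b) == 0 for a, b in zip(lhs, rhs))
print("phi intertwines g_1, g_2, g_3, g_4 with h_1h_2, h_2, h_3, h_4")
```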
$\bullet$ $M(A_{10})\approx M(A_{12})$.\
For $$\begin{aligned}
&A_{10}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{12}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{12}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,iz_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,z_2,iz_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_2,-\overline{iz_3},z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{10}\gamma ^{-1}=\Gamma_{12}.$\
$\bullet$ $M(A_{10})\approx M(A_{18})$.\
For $$\begin{aligned}
&A_{10}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{18}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{18}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_1,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,-z_1,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_2,z_1,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_2,z_1,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_1,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{10}\gamma ^{-1}=\Gamma_{18}$.\
$\bullet$ $M(A_{10})\approx M(A_{22})$.\
For $$\begin{aligned}
&A_{10}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{22}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_3(z_1,z_2,z_3,z_4)=(-z_1,z_2,-\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{22}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_1,iz_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,iz_3,z_4)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (z_2,-z_1,iz_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_2,z_1,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,iz_3,z_4)\\
@Vg_2VV @Vh_1h_3 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_2,z_1,-\overline{iz_3},z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_1,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{10}\gamma ^{-1}=\Gamma_{22}$.\
$\bullet$ $M(A_{10})\approx M(A_{26})$.\
For $$\begin{aligned}
&A_{10}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{26}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{26}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,-z_1z_2,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_1z_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_1z_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_1z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{10}\gamma ^{-1}=\Gamma_{26}$.\
$\bullet$ $M(A_{12})\approx M(A_{32})$.\
For $$\begin{aligned}
&A_{12}=
\left(\begin{array}{cccc}
1& 0 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{32}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{12}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{32}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,z_4) @>\varphi>> (-z_1,-z_1z_2,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_1z_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (z_1,-z_1z_2,\bar z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_1z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{12}\gamma ^{-1}=\Gamma_{32}$.\
Similar to (4).\
We know that\
$\Phi_{a15}=\scriptsize\left<I,\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right)=P,\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right)=Q \right>$ and $\Phi_{a3}=\left<I,\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)=R, \left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right)=S\right>$. If $M(A_{a15})$ is diffeomorphic to $M(A_{a3})$ then $\exists$ $B\in GL(4,\mathbb{R})$ such that $$BPB^{-1}=
\begin{cases}
R \hspace{1cm} (i)\\
S \hspace{1cm} (ii)\\
RS \hspace{.71cm} (iii)
\end{cases}$$ and $$BQB^{-1}=
\begin{cases}
R \hspace{1cm} (iv)\\
S \hspace{1cm} (v)\\
RS \hspace{.71cm} (vi).
\end{cases}$$ By computing the eigenvalues in cases (i), (ii), (iv) and (v) we get a contradiction, and in cases (iii) and (vi) we get $P=Q$, which is also a contradiction.\
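The case analysis can be made concrete by listing the spectra involved; in the sketch below (our own, with $P,Q,R,S$ copied from $\Phi_{a15}$ and $\Phi_{a3}$ above), only $RS$ has the eigenvalue multiset $\{1,1,-1,-1\}$ shared by $P$ and $Q$, which is exactly why cases (i), (ii), (iv), (v) fail and cases (iii), (vi) force $P=Q$.

```python
# Sketch: eigenvalue multisets of P, Q (from Phi_a15) and R, S, RS (from Phi_a3).
# Only RS matches the spectrum of P and of Q, so both would have to conjugate to RS.
import numpy as np

P  = np.diag([1.0, -1.0, 1.0, -1.0])
Q  = np.diag([1.0, 1.0, -1.0, -1.0])
R  = np.diag([1.0, -1.0, 1.0, 1.0])
S  = np.diag([1.0, 1.0, 1.0, -1.0])
RS = R @ S

for name, M in [("P", P), ("Q", Q), ("R", R), ("S", S), ("RS", RS)]:
    print(name, sorted(np.linalg.eigvals(M).real))
# P, Q, RS: [-1, -1, 1, 1]; R, S: [-1, 1, 1, 1]
```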
$\bullet$ $M(A_{a4})\approx M(A_{a8})$.\
For $$\begin{aligned}
&A_{a4}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a8}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a4}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a8}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\overline{iz_2},z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,iz_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,-iz_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a4}\gamma ^{-1}=\Gamma_{a8}.$\
$\bullet$ $M(A_{a4})\approx M(A_{14})$.\
For $$\begin{aligned}
&A_{a4}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{14}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a4}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{14}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_3,z_1,z_2,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_2,z_4)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (z_3,-z_1,\bar z_2,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_2,z_4)\\
@Vg_3VV @Vh_1 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (-z_3,z_1,z_2,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_2,z_4)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_3,z_1,-z_2,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_3,z_1,z_2,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_3,z_1,z_2,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 0 & 1 & 0\\
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a4}\gamma ^{-1}=\Gamma_{14}$.\
$\bullet$ $M(A_{14})\approx M(A_{16})$.\
For $$\begin{aligned}
&A_{14}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{16}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{14}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{16}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,iz_3,z_4)$. Then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1,z_2,iz_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_2,-\overline{iz_3},z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{14}\gamma ^{-1}=\Gamma_{16}$.\
$\bullet$ $M(A_{14})\approx M(A_{24})$.\
For $$\begin{aligned}
&A_{14}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{24}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{14}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{24}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_1z_2,z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1z_2,z_3,z_4)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (z_2,-z_1z_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_2,z_1z_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1z_2,z_3,z_4)\\
@Vg_2VV @Vh_1h_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_2,-z_1z_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_1z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{14}\gamma ^{-1}=\Gamma_{24}$.\
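Concretely, $\gamma=(0,B)$ with $B$ equal to the identity except for the upper left $2\times 2$ block $\left(\begin{array}{cc} 0&1\\ 1&1\end{array}\right)$, and conjugation acts by $(a,A)\mapsto(Ba,BAB^{-1})$. Every rotational part occurring in $\Gamma_{14}$ is the identity on the first two coordinates, so $BAB^{-1}=A$ for all four generators, while
$$B\cdot\frac{1}{2}e_1=\frac{1}{2}e_2,\qquad B\cdot\frac{1}{2}e_2=\frac{1}{2}(e_1+e_2),$$
so the first two generators of $\Gamma_{14}$ are carried to the second and first generators of $\Gamma_{24}$, respectively, and the last two generators are fixed. Under $z_k=e^{2\pi i x_k}$, $B$ is just $\varphi$ written additively: $(x_1,x_2)\mapsto(x_2,x_1+x_2)$.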
$\bullet$ $M(A_{20})\approx M(A_{24})$.\
For $$\begin{aligned}
&A_{20}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{24}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,z_4).\hspace{.1cm} &h_1h_3(z_1,z_2,z_3,z_4)=(-z_1,z_2,-\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{20}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{24}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,iz_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_1VV @Vh_1h_3 VV\\
(-z_1,z_2,\bar z_3,z_4) @>\varphi>> (-z_1,z_2,-\bar {iz}_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,-z_2,iz_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{20}\gamma ^{-1}=\Gamma_{24}$.\
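Here $\gamma=\bigl(\frac{1}{4}e_3,I\bigr)$ again acts by $(a,A)\mapsto\bigl(a+(I-A)\frac{1}{4}e_3,A\bigr)$, and only the first generator of $\Gamma_{20}$, whose rotational part inverts the third coordinate, is affected:
$$\bigl(\tfrac{1}{2}e_1,\operatorname{diag}(1,1,-1,1)\bigr)\ \longmapsto\ \bigl(\tfrac{1}{2}e_1+\tfrac{1}{2}e_3,\ \operatorname{diag}(1,1,-1,1)\bigr),$$
which is the first generator of $\Gamma_{24}$; the other three generators are fixed.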
$\bullet$ $M(A_{14})\approx M(A_{30})$.\
For $$\begin{aligned}
&A_{14}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{30}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{14}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{30}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1,-z_1z_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_1z_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_1z_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_1z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{14}\gamma ^{-1}=\Gamma_{30}$.\
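To check this on generators, note that $B$, the linear part of $\gamma$, satisfies $Be_1=e_1+e_2$ and fixes $e_2,e_3,e_4$; additively this is $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_3,z_4)$, i.e. $(x_1,x_2)\mapsto(x_1,x_1+x_2)$. All rotational parts of $\Gamma_{14}$ act as the identity on the first two coordinates and hence commute with $B$, so
$$\bigl(\tfrac{1}{2}e_1,\operatorname{diag}(1,1,1,-1)\bigr)\ \longmapsto\ \bigl(\tfrac{1}{2}(e_1+e_2),\ \operatorname{diag}(1,1,1,-1)\bigr),$$
which is the first generator of $\Gamma_{30}$, while the remaining generators are fixed.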
$\bullet$ $M(A_{30})\approx M(A_{28})$.\
For $$\begin{aligned}
&A_{30}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{28}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{30}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{28}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_1,z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_1VV @Vh_2 VV\\
(-z_1,z_2,\bar z_3,\bar z_4) @>\varphi>> (z_2,-z_1,\bar z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_2,z_1,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_2,z_1,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_1,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_1,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{30}\gamma ^{-1}=\Gamma_{28}$.\
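One checks the claim generator by generator: $\gamma=(0,P)$ with $P$ the permutation matrix interchanging the first two coordinates, so $(a,A)\mapsto(Pa,PAP^{-1})$. All rotational parts above are diagonal and equal to the identity on the first two coordinates, hence $PAP^{-1}=A$, and
$$\bigl(\tfrac{1}{2}e_1,\operatorname{diag}(1,1,-1,-1)\bigr)\mapsto\bigl(\tfrac{1}{2}e_2,\operatorname{diag}(1,1,-1,-1)\bigr),\qquad
\bigl(\tfrac{1}{2}e_2,\operatorname{diag}(1,1,-1,1)\bigr)\mapsto\bigl(\tfrac{1}{2}e_1,\operatorname{diag}(1,1,-1,1)\bigr),$$
so the first two generators of $\Gamma_{30}$ go to the second and first generators of $\Gamma_{28}$, and the last two generators are fixed.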
. Similar to (4).\
. Similar to (11).\
. $M(A_{14})$ is not diffeomorphic to $M(A_{a3})$.\
Let $$\begin{aligned}
\Gamma_{a3}= \langle &\tilde {g}_1,\tilde {g}_2,t_3,t_4\rangle\\
=\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >
\\
\Gamma_{14}= \langle &\tilde {h}_1,\tilde {h}_2,\tilde {h}_3,t_4\rangle\\
=\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >. \end{aligned}$$
Since $\tilde {g}_1\tilde {g}_2\tilde {g}_1^{-1}=\tilde {g}_2^{-1}$, $\tilde {g}_1t_4\tilde {g}_1^{-1}=t_4$, $t_3\tilde {g}_2t_3^{-1}=\tilde {g}_2$, and $t_3t_4t_3^{-1}=t_4$, we have $\Gamma _{a3}=\langle \tilde {g}_2,t_4 \rangle \rtimes \langle \tilde {g}_1,t_3 \rangle$. The center is $\mathcal C(\Gamma_{a3})=\langle t_1, t_3\rangle$, where $t_1=\tilde g_1^2=\footnotesize \left(\begin{array}{c}
1\\
0\\
0\\
0
\end{array}\right), t_3=\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)$. There is a central group extension: $$\begin{aligned}
1\rightarrow \langle t_1, t_3 \rangle \rightarrow \Gamma_{a3}\rightarrow \Delta_{a3}\rightarrow 1,\end{aligned}$$ where $$\begin{aligned}
\Delta_{a3}&= \Bigl<\gamma_1=\left(\begin{array}{c}
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0 \\
0& -1
\end{array}\right), \gamma _2=\left(\begin{array}{c}
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cc}
1& 0 \\
0& 1
\end{array}\right)\Bigr >\rtimes
\langle \beta \rangle \end{aligned}$$ with $\beta =\Bigl(\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
-1& 0\\
0& 1
\end{array}\right)\Bigr)$.\
For $\Gamma_{14}= \langle \tilde {h}_1,\tilde {h}_2, \tilde{h}_3,t_4\rangle$, since $\tilde {h}_1\tilde {h}_3\tilde {h}_1^{-1}=\tilde {h}_3$, $\tilde {h}_1t_4\tilde {h}_1^{-1}=t_4^{-1}$, $\tilde {h}_2\tilde {h}_3\tilde {h}_2^{-1}=\tilde {h}_3^{-1}$, and $\tilde {h}_2t_4\tilde {h}_2^{-1}=t_4$, we have $\Gamma _{14}=\langle \tilde {h}_3,t_4 \rangle \rtimes \langle \tilde {h}_1,\tilde {h}_2 \rangle$. The center is $\mathcal
C(\Gamma_{14})=\langle t_1, t_2\rangle$ where $t_1=\tilde h_1^2$, $t_2=\tilde h_2^2= \footnotesize
\left(\begin{array}{c}
0\\
1\\
0\\
0\end{array}\right)$. It induces an extension $$\begin{aligned}
1\rightarrow \langle t_1,t_2\rangle \rightarrow \Gamma_{14}\rightarrow \Delta_{14}\rightarrow 1,\end{aligned}$$ where $$\begin{aligned}
\Delta_{14}=\Bigl <s_1=\left(\begin{array}{c}
\frac 12\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0 \\
0& -1
\end{array}\right),s_2=\left(\begin{array}{c}
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 \\
0& 1
\end{array}\right)\Bigr >\rtimes
\langle \alpha ,\beta \rangle\end{aligned}$$ with $\alpha =\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& -1\end{array}\right), \beta =\left(\begin{array}{c}
0\\
0
\end{array}\right) \left(\begin{array}{cc}
-1& 0\\
0& 1
\end{array}\right)$.
Suppose that $\Gamma_{a3}$ is isomorphic to $\Gamma_{14}$ via an isomorphism $\varphi:\Gamma_{a3}\rightarrow \Gamma_{14}$. Since $\varphi$ carries the center onto the center, it induces an isomorphism $\hat\varphi:\Delta_{a3}\rightarrow \Delta_{14}$ fitting into the commutative diagram $$\begin{aligned}
\begin{CD}
1 @>>> \langle t_1,t_3 \rangle @>>> \Gamma_{a3}@>>> \Delta_{a3}@>>> 1\\
@. @. @V \varphi VV @V \hat \varphi VV\\
1 @>>> \langle t_1,t_2 \rangle @>>> \Gamma_{14}@>>> \Delta_{14}@>>> 1.
\end{CD}\end{aligned}$$\
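The relations $\beta\gamma_1\beta^{-1}=\gamma_1^{-1}$ and $\beta\gamma_2\beta^{-1}=\gamma_2$, which are used in the computation below, can be checked directly:
$$\beta\gamma_1\beta^{-1}=\Bigl(\left(\begin{array}{c} -\frac{1}{2}\\ 0\end{array}\right)
\left(\begin{array}{cc} 1& 0\\ 0& -1\end{array}\right)\Bigr)=\gamma_1^{-1},\qquad
\beta\gamma_2\beta^{-1}=\Bigl(\left(\begin{array}{c} 0\\ \frac{1}{2}\end{array}\right)
\left(\begin{array}{cc} 1& 0\\ 0& 1\end{array}\right)\Bigr)=\gamma_2.$$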
Consider $\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \beta )=\alpha$ and $\hat \varphi (\gamma_1^{b_1} \gamma_2^{b_2} \beta )=\beta$ where $a_1, a_2, b_1, b_2 \in \mathbb{Z}$. Then $$\begin{aligned}
\alpha\beta=&\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \beta \gamma_1^{b_1} \gamma_2^{b_2} \beta) \\
=&\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \beta \gamma_2^{b_2} \beta) \hspace{.5cm} (\beta \gamma_1 \beta^{-1}=\gamma_1^{-1})\\
=&\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \gamma_2^{b_2}) \hspace{.9cm} (\beta \gamma_2 \beta^{-1}=\gamma_2).\end{aligned}$$ Since $\alpha \beta$ is a torsion element and $\hat \varphi$ is an isomorphism, $\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \gamma_2^{b_2}$ is also a torsion element. Let $$\begin{aligned}
g=&\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \gamma_2^{b_2} \\
=&
\Bigl ( \left(\begin{array}{c}
\frac{1}{2}a_1\\
(-1)^{a_1}\frac{1}{2}a_2
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& (-1)^{a_1}\end{array}\right)\Bigr)
\Bigl ( \left(\begin{array}{c}
-\frac{1}{2}b_1\\
(-1)^{-b_1}\frac{1}{2}b_2
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& (-1)^{-b_1}\end{array}\right)\Bigr)\\
=&
\Bigl ( \left(\begin{array}{c}
\frac{1}{2}(a_1 - b_1)\\
\frac{1}{2}((-1)^{a_1}a_2+(-1)^{(a_1-b_1)}b_2)
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& (-1)^{a_1-b_1}\end{array}\right)\Bigr).\end{aligned}$$ We have $$\begin{aligned}
g^2=&\Bigl ( \left(\begin{array}{c}
a_1 - b_1\\
(1+(-1)^{(a_1-b_1)})\frac{1}{2}((-1)^{a_1}a_2+(-1)^{(a_1-b_1)}b_2)
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& 1 \end{array}\right)\Bigr)\\
=&
\Bigl ( \left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& 1 \end{array}\right)\Bigr).\end{aligned}$$\
Therefore we get $a_1=b_1$ and $(-1)^{a_1}a_2+b_2=0$. Hence $\hat \varphi (g)=\hat \varphi
\Bigl ( \left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& 1 \end{array}\right)\Bigr)=
\Bigl ( \left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
-1& 0\\
0& -1 \end{array}\right)\Bigr)$, which is not the identity; since an isomorphism must carry the identity to the identity, this yields a contradiction.\
.\
$\bullet$ $M(A_{a13})\approx M(A_{a29})$.\
For $$\begin{aligned}
&A_{a13}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a29}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a29}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,\bar z_4) @>\varphi>> (-z_1,-\bar {iz}_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,iz_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-iz_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a13}\gamma ^{-1}=\Gamma_{a29}$.\
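Explicitly, conjugation by $\gamma=\bigl(\frac{1}{4}e_2,I\bigr)$ changes only the generator of $\Gamma_{a13}$ whose rotational part inverts the second coordinate:
$$\bigl(\tfrac{1}{2}e_1,\operatorname{diag}(1,-1,1,-1)\bigr)\ \longmapsto\ \bigl(\tfrac{1}{2}e_1+\tfrac{1}{2}e_2,\ \operatorname{diag}(1,-1,1,-1)\bigr),$$
which is the first generator of $\Gamma_{a29}$; the other three generators fix $e_2$ and are unchanged.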
$\bullet$ $M(A_{a13})\approx M(A_{a18})$.\
For $$\begin{aligned}
&A_{a13}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a18}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a18}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_4,z_2,z_3)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_4,z_2,z_3)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,z_3,\bar z_4) @>\varphi>> (-z_1,\bar z_4,\bar z_2,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_4,z_2,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_4,z_2,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_4,z_2,z_3)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,z_4,-z_2,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_4,z_2,z_3)\\
@Vg_4VV @Vh_2 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,-z_4,z_2,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 0 & 0 & 1\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a13}\gamma ^{-1}=\Gamma_{a18}$.\
$\bullet$ $M(A_{a13})\approx M(A_{a19})$.\
For $$\begin{aligned}
&A_{a13}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a19}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a19}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_4,z_3)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,z_3,\bar z_4) @>\varphi>> (-z_1,\bar z_2,\bar z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_2,z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a13}\gamma ^{-1}=\Gamma_{a19}$.\
$\bullet$ $M(A_{a18})\approx M(A_{a20})$.\
For $$\begin{aligned}
&A_{a18}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a20}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4)\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4) &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4) &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4) &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a18}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a20}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_2z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,\bar z_3,z_4) @>\varphi>> (-z_1,\bar z_2,\bar z_2\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-z_2z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,-z_2,-z_2z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_2z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,z_2z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 1 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a18}\gamma ^{-1}=\Gamma_{a20}$.\
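In additive coordinates $\varphi$ is the unipotent map $(x_2,x_3)\mapsto(x_2,x_2+x_3)$, which is precisely the linear part $B$ of $\gamma$; thus $Be_2=e_2+e_3$ while $e_1,e_3,e_4$ are fixed. The rotational parts of $\Gamma_{a18}$ either fix both $e_2$ and $e_3$ or act as $-1$ on both, hence commute with $B$, and the only generator that moves is
$$\bigl(\tfrac{1}{2}e_2,\ I\bigr)\ \longmapsto\ \bigl(\tfrac{1}{2}(e_2+e_3),\ I\bigr),$$
which is the second generator of $\Gamma_{a20}$.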
$\bullet$ $M(A_{a22})\approx M(A_{a23})$.\
For $$\begin{aligned}
&A_{a22}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a23}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,\bar z_4)\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4) &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4) &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4) &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a22}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a23}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_3,z_2,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_1,\bar z_3,\bar z_2,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,-z_3,z_2,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,z_3,-z_2,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_3,z_2,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a22}\gamma ^{-1}=\Gamma_{a23}$.\
$\bullet$ $M(A_{a23})\approx M(A_{a24})$.\
For $$\begin{aligned}
&A_{a23}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a24}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,\bar z_4)\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4) &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4) &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4) &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a23}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a24}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_3,z_2z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2z_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_1,\bar z_3,\bar z_2\bar z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2z_3,z_4)\\
@Vg_3VV @Vh_2h_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,-z_3,-z_2z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2z_3,z_4)\\
@Vg_2VV @Vh_3 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,z_3,-z_2z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_3,z_2z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_3,z_2z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 0 & 1 & 0\\
0& 1 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a23}\gamma ^{-1}=\Gamma_{a24}$.\
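Here the linear part $B$ of $\gamma$ satisfies $Be_2=e_3$ and $Be_3=e_2+e_3$, which is $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_3,z_2z_3,z_4)$ written additively. The rotational parts of $\Gamma_{a23}$ are either the identity or $-1$ on both of the coordinates $x_2,x_3$, hence commute with $B$, and on the translation parts
$$B\cdot\frac{1}{2}e_2=\frac{1}{2}e_3,\qquad B\cdot\frac{1}{2}e_3=\frac{1}{2}(e_2+e_3),$$
so the second and third generators of $\Gamma_{a23}$ are carried to the third and second generators of $\Gamma_{a24}$, while the first and fourth generators are fixed.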
$\bullet$ $M(A_{a23})\approx M(A_{a29})$.\
For $$\begin{aligned}
&A_{a23}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a29}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,\bar z_4)\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4) &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4) &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4) &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a23}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a29}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_4,z_3)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,\bar z_3,\bar z_4) @>\varphi>> (-z_1,\bar z_2,\bar z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,\bar z_4) @>\varphi>> (z_1,-z_2,\bar z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a23}\gamma ^{-1}=\Gamma_{a29}$.\
. Similar to (4).\
. Similar to (11).\
.\
$\bullet$ $M(A_{a11})\approx M(A_{a31})$.\
For $$\begin{aligned}
&A_{a11}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a31}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1, z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1, z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a11}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a31}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\bar {iz}_2,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,iz_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (z_1,-iz_2,\bar z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a11}\gamma ^{-1}=\Gamma_{a31}$.\
. Similar to (4).\
. Similar to (11).\
. $M(A_{a11})$ is not diffeomorphic to $M(A_{a13})$.\
Let $$\begin{aligned}
\Gamma_{a13}= \langle &\tilde {g}_1,\tilde {g}_2,t_3,t_4\rangle\\
= \Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >
\\
\Gamma_{a11}= \langle &\tilde {h}_1,\tilde {h}_2,t_3,t_4\rangle\\
= \Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Since $\tilde {g}_1\tilde {g}_2\tilde {g}_1^{-1}=\tilde {g}_2^{-1}$, $\tilde {g}_1 t_3\tilde {g}_1^{-1}=t_3$, and $\tilde {g}_1 t_4\tilde {g}_1^{-1}=t_4$, we have $\Gamma_{a13}= \langle \tilde {g}_2,t_3,t_4\rangle \rtimes \langle\tilde {g}_1\rangle$, and the center is $\displaystyle \mathcal C(\Gamma_{a13})= \langle \tilde {g}_1^2 \rangle
= \langle t_1 \rangle$, where $t_1=\left(\begin{array}{c}
1\\
0\\
0\\
0\end{array}\right)$. It induces an extension $$\begin{aligned}
1\rightarrow \langle t_1\rangle \rightarrow \Gamma_{a13}\longrightarrow \Delta_{a13}\rightarrow 1,\end{aligned}$$ where $\Delta_{a13}$ is isomorphic to $$\begin{aligned}
\Bigl <&\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)
\Bigr >\rtimes \Bigl< \left(\begin{array}{c}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& 1 & 0\\
0& 0 & -1
\end{array}\right)\Bigr>.\end{aligned}$$ Put $$\begin{aligned}
\Delta_{a13}^f=&\langle p,q,r \rangle\\
=& \footnotesize \Bigl <\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$ and $$\begin{aligned}
\beta = \footnotesize \Bigl< \left(\begin{array}{c}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& 1 & 0\\
0& 0 & -1
\end{array}\right)\Bigr>\end{aligned}$$ so that $\Delta_{a13}=\Delta_{a13}^f \rtimes \beta$.
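Indeed $\beta$ normalizes $\Delta_{a13}^f$: writing $\beta$ also for its generator $\bigl(0,\operatorname{diag}(-1,1,-1)\bigr)$, one computes
$$\beta p\beta^{-1}=\bigl(-\tfrac{1}{2}e_1,\operatorname{diag}(1,-1,1)\bigr)=p^{-1},\qquad
\beta q\beta^{-1}=q,\qquad
\beta r\beta^{-1}=\bigl(-\tfrac{1}{2}e_3,I\bigr)=r^{-1},$$
so the semidirect product decomposition above makes sense.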
Next, since $\tilde {h}_1\tilde {h}_2\tilde {h}_1^{-1}=\tilde {h}_2^{-1}$, $\tilde {h}_1 t_3\tilde {h}_1^{-1}=t_3$, and $\tilde {h}_1 t_4\tilde {h}_1^{-1}=t_4$, we have $\Gamma_{a11}= \langle \tilde{h}_2,t_3,t_4\rangle \rtimes \langle\tilde {h}_1\rangle$, and the center is $\displaystyle \mathcal C(\Gamma_{a11})= \langle \tilde {h}_1^2 \rangle= \langle t_1 \rangle$. It induces an extension $$\begin{aligned}
1\rightarrow \langle t_1 \rangle \rightarrow \Gamma_{a11}\longrightarrow \Delta_{a11}\rightarrow 1,\end{aligned}$$ where $\Delta_{a11}$ is isomorphic to $$\begin{aligned}
\Bigl <&\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)
\Bigr >\rtimes \langle \left(\begin{array}{c}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)\rangle.\end{aligned}$$ Put $$\begin{aligned}
\Delta_{a11}^f= \footnotesize
\Bigl <\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& -1 & 0\\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),
\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$ and $$\begin{aligned}
\alpha = \footnotesize
\langle \left(\begin{array}{c}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)\rangle\end{aligned}$$ so that $\Delta_{a11}=\Delta_{a11}^f\rtimes \alpha$.
Suppose there is an isomorphism $\varphi:\Gamma_{a13}\rightarrow \Gamma_{a11}$. Since $\varphi$ carries the center onto the center, it induces an isomorphism $\hat\varphi:\Delta_{a13}\rightarrow \Delta_{a11}$ fitting into the commutative diagram $$\begin{aligned}
\begin{CD}
1 @>>> \langle t_1 \rangle @>>> \Gamma_{a13}@>>> \Delta_{a13}@>>> 1\\
@. @. @V \varphi VV @V \hat \varphi VV\\
1 @>>> \langle t_1 \rangle @>>> \Gamma_{a11}@>>> \Delta_{a11}@>>> 1.
\end{CD}\end{aligned}$$ Consider $\alpha=\hat \varphi (p^{a_1} q^{a_2} r^{a_3}\beta )=\footnotesize
\hat \varphi \bigg (\left(\begin{array}{c}
\frac{1}{2}a_1\\
(-1)^{a_1}\frac{1}{2}a_2\\
\frac{1}{2}a_3
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& (-1)^{a_1} & 0\\
0& 0 & -1
\end{array}\right)\bigg )$. Since $\alpha$ is a torsion element and $\hat \varphi$ is an isomorphism, $p^{a_1} q^{a_2} r^{a_3}\beta$ is also a torsion element. Therefore $$\begin{aligned}
(p^{a_1} q^{a_2} r^{a_3}\beta)^2=&\footnotesize
\bigg (\left(\begin{array}{c}
0\\
(1+(-1)^{a_1})(-1)^{a_1}\frac{1}{2}a_2\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)\bigg )\\
=&
\bigg (\left(\begin{array}{c}
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right)\bigg ).\end{aligned}$$ Hence $a_1$ is odd or $a_2=0$. If $a_1$ is odd, say $a_1=2n+1$, then $$\begin{aligned}
p^{a_1} q^{a_2} r^{a_3}\beta=
\footnotesize
\bigg (\left(\begin{array}{c}
\frac{1}{2}(2n+1)\\
-\frac{1}{2}a_2\\
\frac{1}{2}a_3
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& -1 & 0\\
0& 0 & -1
\end{array}\right)\bigg ).\end{aligned}$$ By the Bieberbach theorem, $\exists \gamma=(b,B) \in \mathbb{A}(3)$ such that $BL(p^{a_1} q^{a_2} r^{a_3}\beta)B^{-1}=L(\hat\varphi (p^{a_1} q^{a_2} r^{a_3}\beta))$, that is, $$\begin{aligned}
B\footnotesize \left(\begin{array}{ccc}
-1& 0 & 0 \\
0& -1 & 0\\
0& 0 & -1
\end{array}\right)\normalsize B^{-1}=\footnotesize
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),\end{aligned}$$ which is impossible. If instead $a_2=0$, then $$\begin{aligned}
p^{a_1} q^{a_2} r^{a_3}\beta=
\footnotesize
\bigg (\left(\begin{array}{c}
\frac{1}{2}a_1\\
0\\
\frac{1}{2}a_3
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& (-1)^{a_1} & 0\\
0& 0 & -1
\end{array}\right)\bigg ).\end{aligned}$$ Again by the Bieberbach theorem, $\exists \gamma=(b,B) \in \mathbb{A}(3)$ such that $BL(p^{a_1} q^{a_2} r^{a_3}\beta)B^{-1}=L(\hat\varphi (p^{a_1} q^{a_2} r^{a_3}\beta))$, that is, $$\begin{aligned}
B\footnotesize \left(\begin{array}{ccc}
-1& 0 & 0 \\
0& (-1)^{a_1} & 0\\
0& 0 & -1
\end{array}\right)\normalsize B^{-1}=\footnotesize
\left(\begin{array}{ccc}
-1& 0 & 0 \\
0& 1 & 0\\
0& 0 & 1
\end{array}\right),\end{aligned}$$ which is also impossible. Therefore $\Gamma_{a11}$ is not isomorphic to $\Gamma_{a13}$.\
.\
$\bullet$ $M(A_{13})\approx M(A_{15})$.\
For $$\begin{aligned}
&A_{13}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{15}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2h_1(z_1,z_2,z_3,z_4)=(-z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{15}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1z_2,z_2,z_3,z_4)$; then we get these commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1z_2,z_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1z_2,z_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_3,z_4)\\
@Vg_2VV @Vh_2h_1 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_1z_2,-z_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1z_2,z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 1 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{13}\gamma ^{-1}=\Gamma_{15}$.\
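As a final sample check, the linear part $B$ of $\gamma$ satisfies $Be_1=e_1$ and $Be_2=e_1+e_2$, i.e. it is $\varphi(z_1,z_2,z_3,z_4)=(z_1z_2,z_2,z_3,z_4)$ in additive coordinates. All rotational parts of $\Gamma_{13}$ are the identity on the first two coordinates, so conjugation by $\gamma$ only replaces the translation part of the second generator:
$$\bigl(\tfrac{1}{2}e_2,\operatorname{diag}(1,1,-1,1)\bigr)\ \longmapsto\ \bigl(\tfrac{1}{2}(e_1+e_2),\ \operatorname{diag}(1,1,-1,1)\bigr),$$
which is the second generator of $\Gamma_{15}$; the remaining generators are fixed.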
$\bullet$ $M(A_{13})\approx M(A_{19})$.\
For $$\begin{aligned}
&A_{13}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{19}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{19}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_4,z_3)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1,z_2,\bar z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_2,z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{13}\gamma ^{-1}=\Gamma_{19}$.\
$\bullet$ $M(A_{13})\approx M(A_{23})$.\
For $$\begin{aligned}
&A_{13}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{23}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 0 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{23}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_4,z_3)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_4,z_3)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1,-z_1z_2,\bar z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_1z_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_4,z_3)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_1z_2,z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{13}\gamma ^{-1}=\Gamma_{23}$.\
$\bullet$ $M(A_{13})\approx M(A_{27})$.\
For $$\begin{aligned}
&A_{13}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{27}=
\left(\begin{array}{cccc}
1& 0 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2h_1(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{27}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1z_2,z_2,z_4,z_3)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_4,z_3)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1z_2,z_2,\bar z_4,z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_4,z_3)\\
@Vg_3VV @Vh_4 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1z_2,z_2,z_4,-z_3)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_4,z_3)\\
@Vg_2VV @Vh_2h_1 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_1z_2,-z_2,z_4,\bar z_3)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1z_2,z_2,z_4,z_3)\\
@Vg_4VV @Vh_3 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1z_2,z_2,-z_4,z_3).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 1 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 0 & 1\\
0& 0 & 1 & 0
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{13}\gamma ^{-1}=\Gamma_{27}$.\
$\bullet$ $M(A_{13})\approx M(A_{29})$.\
For $$\begin{aligned}
&A_{13}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{29}=
\left(\begin{array}{cccc}
1& 0 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{29}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_1z_2,z_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (-z_1,-z_1z_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_1,z_1z_2,-z_3,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_1z_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_1z_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_1z_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
1& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{13}\gamma ^{-1}=\Gamma_{29}$.\
$\bullet$ $M(A_{13})\approx M(A_{a2})$.\
For $$\begin{aligned}
&A_{13}=
\left(\begin{array}{cccc}
1& 0 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a2}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{13}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a2}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_2,z_3,z_1,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_1VV @Vh_3 VV\\
(-z_1,z_2,z_3,\bar z_4) @>\varphi>> (z_2,z_3,-z_1,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_3VV @Vh_2 VV\\
(z_1,z_2,-z_3,z_4) @>\varphi>> (z_2,-z_3,z_1,z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_2VV @Vh_1 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (-z_2,\bar z_3,z_1,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_2,z_3,z_1,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_2,z_3,z_1,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
1& 0 & 0 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{13}\gamma ^{-1}=\Gamma_{a2}$.\
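The same kind of check works when the linear part of $\gamma$ is a coordinate permutation (same pair notation and composition rule as in the sketch above): here the matrix $P$ of $\gamma$ sends $e_1\mapsto e_3$, $e_2\mapsto e_1$, $e_3\mapsto e_2$ and fixes $e_4$, and conjugation by $(0,P)$ replaces a generator $(b,B)$ by $(Pb,\;PBP^{-1})$, i.e. it permutes the coordinates of the translation part and the diagonal entries of the linear part accordingly. For instance, the second generator of $\Gamma_{13}$ is sent to
$$\Bigl(P\bigl(0,\tfrac{1}{2},0,0\bigr)^{T},\;P\operatorname{diag}(1,1,-1,1)P^{-1}\Bigr)
=\Bigl(\bigl(\tfrac{1}{2},0,0,0\bigr)^{T},\;\operatorname{diag}(1,-1,1,1)\Bigr),$$
which is the first generator of $\Gamma_{a2}$; the remaining generators are matched up, in a permuted order, in the same way.\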
$\bullet$ $M(A_{a2})\approx M(A_{a6})$.\
For $$\begin{aligned}
&A_{a2}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a6}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 0 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_3(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,-z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a2}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a6}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,z_1z_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_1VV @Vh_1h_3 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,\bar z_2,-z_1z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-z_1z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,z_3,z_4) @>\varphi>> (z_1,-z_2,z_1z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,z_1z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,z_1z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
1& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a2}\gamma ^{-1}=\Gamma_{a6}$.\
. Similar to (4).\
. Similar to (11).\
. $M(A_{13})$ is not diffeomorphic to $M(A_{a3})$.\
Let $$\begin{aligned}
\Gamma_{a3}= <&\tilde {g}_1,\tilde {g}_2,t_3,t_4>\\
=\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >
\\
\Gamma_{13}= <&\tilde {h}_1,\tilde {h}_2,t_3,t_4>\\
=\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
We know from (15) above that the center $\displaystyle \mathcal C(\Gamma_{a3})= \langle t_1,
t_3\rangle$ where $t_1=\tilde {g}_1^2$. There is an extension $$\begin{aligned}
1\rightarrow \langle t_1,t_3\rangle \rightarrow \Gamma_{a3}\rightarrow \Delta_{a3}\rightarrow 1,\end{aligned}$$ where $\Delta_{a3}$ is isomorphic to $$\begin{aligned}
\Bigl <&\gamma_1=\left(\begin{array}{c}
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0 \\
0& -1
\end{array}\right), \gamma_2=\left(\begin{array}{c}
0\\
\frac{1}{2}
\end{array}\right) \left(\begin{array}{cc}
1& 0 \\
0& 1
\end{array}\right)
\Bigr >\rtimes \langle \beta \rangle\end{aligned}$$ where $\beta =\Bigl(\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 \\
0& 1
\end{array}\right)\Bigr)$.
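As a small check that will be used below (again writing elements as pairs $(a,A)$ with composition $(a,A)(b,B)=(a+Ab,AB)$), conjugation by $\beta$ in $\Delta_{a3}$ inverts $\gamma_1$ and fixes $\gamma_2$:
$$\beta\gamma_1\beta^{-1}
=\Bigl(\bigl(-\tfrac{1}{2},0\bigr)^{T},\;\operatorname{diag}(1,-1)\Bigr)=\gamma_1^{-1},
\qquad
\beta\gamma_2\beta^{-1}
=\Bigl(\bigl(0,\tfrac{1}{2}\bigr)^{T},\;I\Bigr)=\gamma_2 .$$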
For $\Gamma_{13}$, since $\tilde {h}_1 t_3\tilde {h}_1^{-1}=t_3$, $\tilde {h}_1 t_4\tilde {h}_1^{-1}=t_4^{-1}$, $\tilde {h}_2 t_3 \tilde {h}_2^{-1}=t_3^{-1}$ and $\tilde {h}_2 t_4 \tilde {h}_2^{-1}=t_4$, then $\Gamma_{13}=\langle t_3,t_4 \rangle \rtimes \langle \tilde {h}_1,\tilde {h}_2 \rangle$. Then the center $\mathcal C(\Gamma_{13})= \langle t_1,t_2\rangle$, where $t_1=\tilde {h}_1^2$ and $t_2=\tilde {h}_2^2$. It induces an extension $$\begin{aligned}
1\rightarrow \langle t_1,t_2\rangle \rightarrow \Gamma_{13}\rightarrow \Delta_{13}\rightarrow 1,\end{aligned}$$ where $\Delta_{13}$ is isomorphic to $$\begin{aligned}
\Bigl <& s_1=\left(\begin{array}{c}
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 \\
0& 1
\end{array}\right), s_2=\left(\begin{array}{c}
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0\\
0& 1
\end{array}\right)
\Bigr >\rtimes \langle \alpha ,\beta \rangle\end{aligned}$$ with $\alpha =\Bigr(\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 \\
0 & -1
\end{array}\right)\Bigr),\beta =\Bigl(
\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 \\
0 &1
\end{array}\right)\Bigr)$.\
Suppose that $\Gamma_{a3}$ is isomorphic to $\Gamma_{13}$ with isomorphism $\varphi :\Gamma_{a3}\rightarrow \Gamma_{13}$ which induces an isomorphism $\hat \varphi : \Delta _{a3}\rightarrow \Delta _{13}$ $$\begin{aligned}
\begin{CD}
1 @>>> \langle t_1,t_3 \rangle @>>> \Gamma_{a3}@>>> \Delta_{a3}@>>> 1\\
@. @. @V \varphi VV @V \hat \varphi VV\\
1 @>>> \langle t_1,t_2 \rangle @>>> \Gamma_{13}@>>> \Delta_{13}@>>> 1.
\end{CD}\end{aligned}$$ By the same argument as (15) above, consider $\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \beta )=\alpha$ and $\hat \varphi (\gamma_1^{b_1} \gamma_2^{b_2} \beta )=\beta$ where $a_1, a_2, b_1, b_2 \in \mathbb{Z}$. Then $$\begin{aligned}
\alpha\beta=&\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \beta \gamma_1^{b_1} \gamma_2^{b_2} \beta) \\
=&\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \beta \gamma_2^{b_2} \beta) \hspace{.5cm} (\beta \gamma_1 \beta^{-1}=\gamma_1^{-1})\\
=&\hat \varphi (\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \gamma_2^{b_2}) \hspace{.9cm} (\beta \gamma_2 \beta^{-1}=\gamma_2).\end{aligned}$$ Since $\alpha \beta$ is a torsion element and $\hat \varphi$ is an isomorphism, then $\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \gamma_2^{b_2}$ is also a torsion element. Let $$\begin{aligned}
g=&\gamma_1^{a_1} \gamma_2^{a_2} \gamma_1^{-b_1} \gamma_2^{b_2} \\
=&
\Bigl ( \left(\begin{array}{c}
\frac{1}{2}a_1\\
(-1)^{a_1}\frac{1}{2}a_2
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& (-1)^{a_1}\end{array}\right)\Bigr)
\Bigl ( \left(\begin{array}{c}
-\frac{1}{2}b_1\\
(-1)^{-b_1}\frac{1}{2}b_2
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& (-1)^{-b_1}\end{array}\right)\Bigr)\\
=&
\Bigl ( \left(\begin{array}{c}
\frac{1}{2}(a_1 - b_1)\\
\frac{1}{2}((-1)^{a_1}a_2+(-1)^{(a_1-b_1)}b_2)
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& (-1)^{a_1-b_1}\end{array}\right)\Bigr).\end{aligned}$$ We have $$\begin{aligned}
g^2=&\Bigl ( \left(\begin{array}{c}
a_1 - b_1\\
(1+(-1)^{(a_1-b_1)})\frac{1}{2}((-1)^{a_1}a_2+(-1)^{(a_1-b_1)}b_2)
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& 1 \end{array}\right)\Bigr)\\
=&
\Bigl ( \left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& 1 \end{array}\right)\Bigr).\end{aligned}$$ Therefore we get $a_1=b_1$ and $(-1)^{a_1}a_2+b_2=0$. Hence $\hat \varphi (g)=\hat \varphi
\Bigl ( \left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0\\
0& 1 \end{array}\right)\Bigr)=
\Bigl ( \left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{cc}
-1& 0\\
0& -1 \end{array}\right)\Bigr)=\alpha \beta$, that is, $\hat \varphi$ sends the identity element to the non-trivial element $\alpha \beta$. This yields a contradiction.\
. $M(A_{13})$ is not diffeomorphic to $M(A_{a4})$.\
Let $$\begin{aligned}
\Gamma_{a4}= <&\tilde {g}_1,\tilde {g}_2,\tilde {g}_3,t_4>\\
=\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\\
%=<&\tilde {g}_1,\tilde {g}_2\tilde {g}_3,\tilde {g}_3,t_4>\\
%=\Bigl < &\left(\begin{array}{cccc}
%\frac{1}{2}\\
%0\\
%0\\
%0
%\end{array}\right)
%\left(\begin{array}{cccc}
%1& 0 & 0 & 0\\
%0& -1 & 0 & 0\\
%0& 0 & 1 & 0\\
%0& 0 & 0 & 1
%\end{array}\right),
%\left(\begin{array}{cccc}
%0\\
%\frac{1}{2}\\
%\frac{1}{2}\\
%0
%\end{array}\right)
%\left(\begin{array}{cccc}
%1& 0 & 0 & 0\\
%0& 1 & 0 & 0\\
%0& 0 & 1 & 0\\
%0& 0 & 0 & 1
%\end{array}\right),
%\\
%&\left(\begin{array}{cccc}
%0\\
%0\\
%\frac{1}{2}\\
%0
%\end{array}\right)
%\left(\begin{array}{cccc}
%1& 0 & 0 & 0\\
%0& 1 & 0 & 0\\
%0& 0 & 1 & 0\\
%0& 0 & 0 & -1
%\end{array}\right),
%\left(\begin{array}{cccc}
%0\\
%0\\
%0\\
%\frac{1}{2}
%\end{array}\right)
%\left(\begin{array}{cccc}
%1& 0 & 0 & 0\\
%0& 1 & 0 & 0\\
%0& 0 & 1 & 0\\
%0& 0 & 0 & 1
%\end{array}\right)
%\Bigr >
\\
\Gamma_{13}= <&\tilde {h}_1,\tilde {h}_2,t_3,t_4>\\
=\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Since $\tilde {g}_1\tilde {g}_2\tilde {g}_1^{-1}=\tilde {g}_2^{-1}$, $\tilde {g}_1 t_4\tilde {g}_1^{-1}=t_4$, $\tilde {g}_3 \tilde {g}_2\tilde {g}_3^{-1}=\tilde {g}_2$, $\tilde {g}_3 t_4\tilde {g}_3^{-1}=t_4^{-1}$, then $\Gamma_{a4}=\langle \tilde {g}_2,t_4 \rangle \rtimes \langle \tilde {g}_1,\tilde {g}_3 \rangle$. Then the center $\displaystyle \mathcal C(\Gamma_{a4})= \langle t_1,
t_3\rangle$ where $t_1=\tilde {g}_1^2$ and $t_3=\tilde {g}_3^2$. There is an extension $$\begin{aligned}
1\rightarrow \langle t_1,t_3\rangle \rightarrow \Gamma_{a4}\rightarrow \Delta_{a4}\rightarrow 1,\end{aligned}$$ where $\Delta_{a4}$ is isomorphic to $$\begin{aligned}
\Bigl <&\gamma _1=\left(\begin{array}{c}
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cc}
1& 0 \\
0& -1
\end{array}\right), \gamma _2=\left(\begin{array}{c}
0\\
\frac{1}{2}
\end{array}\right) \left(\begin{array}{cc}
1& 0 \\
0& 1
\end{array}\right)
\Bigr > \rtimes \langle \alpha_1,\alpha_2 \rangle\end{aligned}$$ with $\alpha_1=\Bigl(\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
-1& 0 \\
0& 1
\end{array}\right)\Bigr)$ and $\alpha_2=\Bigl(\left(\begin{array}{c}
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 \\
0& -1
\end{array}\right)\Bigr)$.
For $\Gamma_{13}$, the center $\mathcal C(\Gamma_{13})= \langle t_1,t_2\rangle$. It induces an extension $$\begin{aligned}
1\rightarrow \langle t_1,t_2\rangle \rightarrow \Gamma_{13}\rightarrow \Delta_{13}\rightarrow 1,\end{aligned}$$ where $\Delta_{13}$ is isomorphic to $$\begin{aligned}
\Bigl <& s_1=\left(\begin{array}{c}
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 \\
0& 1
\end{array}\right), s_2=\left(\begin{array}{c}
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{ccc}
1& 0\\
0& 1
\end{array}\right)
\Bigr >\rtimes \langle \alpha ,\beta \rangle\end{aligned}$$ with $\alpha=\alpha_1$ and $\beta=\alpha_2$.
Suppose that $\Gamma_{13}$ is isomorphic to $\Gamma_{a4}$ with isomorphism $\varphi :\Gamma_{13}\rightarrow \Gamma_{a4}$ which induces an isomorphism $\hat \varphi : \Delta _{13}\rightarrow \Delta _{a4}$ $$\begin{aligned}
\begin{CD}
1 @>>> \langle t_1,t_3 \rangle @>>> \Gamma_{a4}@>>> \Delta_{a4}@>>> 1\\
@. @. @A \varphi AA @A \hat \varphi AA\\
1 @>>> \langle t_1,t_2 \rangle @>>> \Gamma_{13}@>>> \Delta_{13}@>>> 1.
\end{CD}\end{aligned}$$
Since $\tilde g_2 \tilde g_3=\tilde g_3 \tilde g_2$ then $$\begin{aligned}
\tilde g_2^{-1} \tilde g_3 &=\tilde g_3^{-1} \tilde g_3 \tilde g_2^{-1} \tilde g_3\\
&=\tilde g_3^{-1} \tilde g_2^{-1} \tilde g_3^{2}\\
&=(\tilde g_2 \tilde g_3)^{-1}\tilde g_3^{2}\\
\tilde g_2^{-1}\tilde g_1 \tilde g_3 \tilde g_1^{-1} &=(\tilde g_2 \tilde g_3)^{-1}\tilde g_3^{2}.\end{aligned}$$ We get $$\label{Eq1}
\tilde g_1(\tilde g_2\tilde g_3)\tilde {g_1}^{-1}= (\tilde g_2\tilde g_3)^{-1} \tilde g_3^{2}.$$
Now we want to find the elements $h,h'\in \Gamma_{13}$ such that $\varphi(h)=\tilde g_1$ and $\varphi(h')=\tilde g_2\tilde g_3$. According to the above diagram, let us consider the following diagram $$\begin{CD}
\footnotesize \biggl ( \left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right),I \biggr)= \tilde g_2\tilde g_3 @>>>
\gamma_1\alpha_2=
\biggl ( \left(\begin{array}{cc}
\frac{1}{2}\\
0
\end{array}\right),I \biggr)\\
@A \varphi AA @A \hat \varphi AA\\
\biggl ( \left(\begin{array}{cccc}
c\\
d\\
\frac{1}{2}a\\
\frac{1}{2}b
\end{array}\right),I \biggr)= {t_3}^a{t_4}^b{t_1}^c{t_2}^d @>>>
{s_1}^a {s_2}^b=
\biggl ( \left(\begin{array}{cc}
\frac{1}{2}a\\
\frac{1}{2}b
\end{array}\right),I \biggr)
\end{CD}$$ so, $$\label{Eq2}
\tilde g_2\tilde g_3=\varphi ({t_3}^a{t_4}^b{t_1}^c{t_2}^d ),$$ and from the diagrams $$\begin{CD}
\footnotesize \biggl ( \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right),\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right) \biggr)= \tilde g_1 @>>>
\alpha_1=
\biggl ( \left(\begin{array}{cc}
0\\
0
\end{array}\right),\left(\begin{array}{cc}
-1& 0 \\
0& 1
\end{array}\right) \biggr)\\
@A \varphi AA @A \hat \varphi AA\\
{\tilde h_1}{t_3}^{a'}{t_4}^{b'}{t_1}^{c'}{t_2}^{d'}= @>>> \beta{s_1}^{a'} {s_2}^{b'}=\\
%@. @.\\
\biggl ( \left(\begin{array}{cccc}
c'+\frac{1}{2}\\
d'\\
\frac{1}{2}{a'}\\
-\frac{1}{2}{b'}
\end{array}\right),\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right) \biggr) @.
%\hspace {2cm}
\biggl ( \left(\begin{array}{cc}
\frac{1}{2}a'\\
-\frac{1}{2}b'
\end{array}\right),\left(\begin{array}{cc}
1& 0 \\
0& -1
\end{array}\right) \biggr)
\end{CD}$$ or $$\begin{CD}
\footnotesize \biggl ( \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right),\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right) \biggr)= \tilde g_1 @>>>
\alpha_1=
\biggl ( \left(\begin{array}{cc}
0\\
0
\end{array}\right),\left(\begin{array}{cc}
-1& 0 \\
0& 1
\end{array}\right) \biggr)\\
@A \varphi AA @A \hat \varphi AA\\
{\tilde h_2}{t_3}^{a'}{t_4}^{b'}{t_1}^{c'}{t_2}^{d'}= @>>> \alpha{s_1}^{a'} {s_2}^{b'}=\\
%@. @.\\
\biggl ( \left(\begin{array}{cccc}
c'\\
d'+\frac{1}{2}\\
-\frac{1}{2}{a'}\\
\frac{1}{2}{b'}
\end{array}\right),\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right) \biggr) @.
%\hspace {2cm}
\biggl ( \left(\begin{array}{cc}
-\frac{1}{2}a'\\
\frac{1}{2}b'
\end{array}\right),\left(\begin{array}{cc}
-1& 0 \\
0& 1
\end{array}\right) \biggr)
\end{CD}$$ we get $$\label{Eq3}
\tilde {g_1}=\varphi (\tilde h_1{t_3}^{a'}{t_4}^{b'}{t_1}^{c'}{t_2}^{d'} )=\varphi (\tilde h_1t),$$ or $$\label{Eq3b}
\tilde {g_1}=\varphi (\tilde h_2{t_3}^{a'}{t_4}^{b'}{t_1}^{c'}{t_2}^{d'} )=\varphi (\tilde h_2t),$$ where $t={t_3}^{a'}{t_4}^{b'}{t_1}^{c'}{t_2}^{d'}$.
Next we get $$\begin{aligned}
\tilde g_1 (\tilde g_2\tilde g_3)\tilde {g_1}^{-1}=&\varphi (\tilde h_1 t)\varphi ({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d} )\varphi (\tilde h_1 t)^{-1}
\hspace{.5cm} (\text {by} \, \eqref{Eq2},\, \eqref{Eq3})\\
(\tilde g_2\tilde g_3)^{-1}t_3=&\varphi (\tilde h_1 t ({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d})t^{-1}\tilde h_1^{-1})
\hspace{1.1cm} (\text {by} \, \eqref{Eq1})\\
=&\varphi (\tilde h_1 ({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d})\tilde h_1^{-1})\\
=&\varphi (\tilde h_1 {t_3}^{a}{t_4}^{b}\tilde h_1^{-1}{t_1}^{c}{t_2}^{d})\\
=&\varphi (({t_3}^{a}{t_4}^{b})^{-1}t_3^{2a}{t_1}^{c}{t_2}^{d})
\hspace{1.5cm} (\tilde h_1 {t_3}^{a}{t_4}^{b}\tilde h_1^{-1}={t_3}^{a}{t_4}^{-b})\\
=&\varphi (({t_3}^{a}{t_4}^{b})^{-1}({t_1}^{c}{t_2}^{d})t_3^{2a})\\
=&\varphi (({t_3}^{a}{t_4}^{b})^{-1}({t_1}^{c}{t_2}^{d})^{-1}(({t_1}^{c}{t_2}^{d})t_3^{a})^2)\\
=&\varphi (({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d})^{-1}({t_1}^{c}{t_2}^{d}t_3^{a})^2)\\
=&(\tilde g_2\tilde g_3)^{-1}\varphi (({t_1}^{c}{t_2}^{d}t_3^{a})^2),\end{aligned}$$ or $$\begin{aligned}
\tilde g_1 (\tilde g_2\tilde g_3)\tilde {g_1}^{-1}=&\varphi (\tilde h_2 t)\varphi ({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d} )\varphi (\tilde h_2 t)^{-1}
\hspace{.5cm} (\text {by} \, \eqref{Eq2},\, \eqref{Eq3b})\\
(\tilde g_2\tilde g_3)^{-1}t_3=&\varphi (\tilde h_2 t ({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d})t^{-1}\tilde h_2^{-1})
\hspace{1.1cm} (\text {by} \, \eqref{Eq1})\\
=&\varphi (\tilde h_2 ({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d})\tilde h_2^{-1})\\
=&\varphi (\tilde h_2 {t_3}^{a}{t_4}^{b}\tilde h_2^{-1}{t_1}^{c}{t_2}^{d})\\
=&\varphi (({t_3}^{a}{t_4}^{b})^{-1}t_4^{2b}{t_1}^{c}{t_2}^{d})
\hspace{1.5cm} (\tilde h_2 {t_3}^{a}{t_4}^{b}\tilde h_2^{-1}={t_3}^{-a}{t_4}^{b})\\
=&\varphi (({t_3}^{a}{t_4}^{b})^{-1}({t_1}^{c}{t_2}^{d})t_4^{2b})\\
=&\varphi (({t_3}^{a}{t_4}^{b})^{-1}({t_1}^{c}{t_2}^{d})^{-1}(({t_1}^{c}{t_2}^{d})t_4^{b})^2)\\
=&\varphi (({t_3}^{a}{t_4}^{b}{t_1}^{c}{t_2}^{d})^{-1}({t_1}^{c}{t_2}^{d}t_4^{b})^2)\\
=&(\tilde g_2\tilde g_3)^{-1}\varphi (({t_1}^{c}{t_2}^{d}t_4^{b})^2).\end{aligned}$$ Hence $$\label{Eq4}
t_3=\varphi (({t_1}^{c}{t_2}^{d}t_3^{a})^2),$$ or $$\label{Eq4b}
t_3=\varphi (({t_1}^{c}{t_2}^{d}t_4^{b})^2).$$ Let us consider the diagrams below $$\begin{CD}
\footnotesize \biggl ( \left(\begin{array}{cccc}
p\\
\frac{1}{2}m\\
\frac{1}{2}m+l\\
\frac{1}{2}q
\end{array}\right),I \biggr)= t_3^l(\tilde g_2\tilde g_3)^{m}(t_1^pt_4^q) @>>>
(\gamma_1\alpha_2)^m\gamma_2^q=
\biggl ( \left(\begin{array}{cc}
\frac{1}{2}m\\
\frac{1}{2}q
\end{array}\right),I \biggr)\\
@A \varphi AA @A \hat \varphi AA\\
\biggl ( \left(\begin{array}{cccc}
c\\
d\\
\frac{1}{2}{a}\\
0
\end{array}\right),I \biggr) ={t_1}^{c}{t_2}^{d}t_3^a @>>> {s_1}^{a}=
\biggl ( \left(\begin{array}{cc}
\frac{1}{2}a\\
0
\end{array}\right),I \biggr)
\end{CD}$$ or $$\begin{CD}
\footnotesize \biggl ( \left(\begin{array}{cccc}
p\\
\frac{1}{2}m\\
\frac{1}{2}m+l\\
\frac{1}{2}q
\end{array}\right),I \biggr)= t_3^l(\tilde g_2\tilde g_3)^{m}(t_1^pt_4^q) @>>>
(\gamma_1\alpha_2)^m\gamma_2^q=
\biggl ( \left(\begin{array}{cc}
\frac{1}{2}m\\
\frac{1}{2}q
\end{array}\right),I \biggr)\\
@A \varphi AA @A \hat \varphi AA\\
\biggl ( \left(\begin{array}{cccc}
c\\
d\\
0\\
\frac{1}{2}{b}
\end{array}\right),I \biggr) ={t_1}^{c}{t_2}^{d}t_4^b @>>> {s_2}^{b}=
\biggl ( \left(\begin{array}{cc}
0\\
\frac{1}{2}b
\end{array}\right),I \biggr),
\end{CD}$$ we have $$\begin{aligned}
\varphi ({t_1}^{c}{t_2}^{d}t_3^a)=&t_3^l(\tilde g_2\tilde g_3)^{m}(t_1^pt_4^q)\\
\varphi (({t_1}^{c}{t_2}^{d}t_3^a)^2)=&t_3^{2l}(\tilde g_2\tilde g_3)^{2m}(t_1^{2p}t_4^{2q})\\
=&t_3^{2l}(t_2t_3)^{m}(t_1^{2p}t_4^{2q}) \hspace{1cm} ((\tilde g_2\tilde g_3)^{2}=t_2t_3)\\
=&t_1^{2p}t_2^m t_3^{m+2l}t_4^{2q},\end{aligned}$$ or $$\begin{aligned}
\varphi ({t_1}^{c}{t_2}^{d}t_4^b)=&t_3^l(\tilde g_2\tilde g_3)^{m}(t_1^pt_4^q)\\
\varphi (({t_1}^{c}{t_2}^{d}t_4^b)^2)=&t_3^{2l}(\tilde g_2\tilde g_3)^{2m}(t_1^{2p}t_4^{2q})\\
=&t_3^{2l}(t_2t_3)^{m}(t_1^{2p}t_4^{2q}) \hspace{1cm} ((\tilde g_2\tilde g_3)^{2}=t_2t_3)\\
=&t_1^{2p}t_2^m t_3^{m+2l}t_4^{2q}.\end{aligned}$$ Therefore from [\[Eq4\]](#Eq4){reference-type="eqref" reference="Eq4"} or [\[Eq4b\]](#Eq4b){reference-type="eqref" reference="Eq4b"} $t_3=t_1^{2p}t_2^m t_3^{m+2l}t_4^{2q}$. Hence $p=m=q=0$ and $m+2l=1$, that is, $2l=1$, which is impossible for $l\in \mathbb{Z}$. This yields a contradiction.\
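For completeness, the relation $(\tilde g_2\tilde g_3)^{2}=t_2t_3$ invoked in the last computation can be verified directly in the same pair notation (here $t_2=\tilde g_2^{2}$ and $t_3=\tilde g_3^{2}$ are the unit translations in the second and third coordinates):
$$\tilde g_2\tilde g_3
=\Bigl(\bigl(0,\tfrac{1}{2},0,0\bigr)^{T}+\operatorname{diag}(1,1,1,-1)\bigl(0,0,\tfrac{1}{2},0\bigr)^{T},\;\operatorname{diag}(1,1,1,-1)^{2}\Bigr)
=\Bigl(\bigl(0,\tfrac{1}{2},\tfrac{1}{2},0\bigr)^{T},\;I\Bigr),$$
so $(\tilde g_2\tilde g_3)^{2}=\bigl((0,1,1,0)^{T},\,I\bigr)=t_2t_3$, in agreement with the expression for $\tilde g_2\tilde g_3$ displayed in the diagrams above.\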
.\
$\bullet$ $M(A_{a10})\approx M(A_{a12})$.\
For $$\begin{aligned}
&A_{a10}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a12}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a12}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,iz_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,\bar z_2,iz_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_2,-\bar{iz}_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a10}\gamma ^{-1}=\Gamma_{a12}$.\
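The role of the quarter translation in $\gamma$ can be made explicit (still a sketch in the pair notation used above): for $\gamma=(c,I)$ one has
$$\gamma\,(a,A)\,\gamma^{-1}=\bigl(a+(I-A)c,\;A\bigr),$$
so with $c=\bigl(0,0,\tfrac{1}{4},0\bigr)^{T}$ only the generators whose linear part does not fix $e_3$ are affected. For the second generator of $\Gamma_{a10}$, $A=\operatorname{diag}(1,1,-1,1)$ gives $(I-A)c=\bigl(0,0,\tfrac{1}{2},0\bigr)^{T}$, so its translation part becomes $\bigl(0,\tfrac{1}{2},\tfrac{1}{2},0\bigr)^{T}$, which is the second generator of $\Gamma_{a12}$; the other three generators satisfy $Ae_3=e_3$, hence $(I-A)c=0$, and they are unchanged, as in the two presentations above. The same mechanism underlies the later conjugations by quarter translations.\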
$\bullet$ $M(A_{a10})\approx M(A_{a26})$.\
For $$\begin{aligned}
&A_{a10}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{1cm}
&A_{a26}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a10}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a26}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\bar {iz}_2,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,iz_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-iz_2,\bar z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a10}\gamma ^{-1}=\Gamma_{a26}$.\
$\bullet$ $M(A_{a12})\approx M(A_{a32})$.\
For $$\begin{aligned}
&A_{a12}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a32}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a12}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a32}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,z_4) @>\varphi>> (-z_1,-\bar {iz}_2,z_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,iz_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (z_1,-iz_2,\bar{z}_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a12}\gamma ^{-1}=\Gamma_{a32}$.\
$\bullet$ $M(A_{a14})\approx M(A_{a16})$.\
For $$\begin{aligned}
&A_{a14}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a16}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a14}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a16}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,iz_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_1VV @Vh_1 VV\\
(-z_1,\bar z_2,z_3,\bar z_4) @>\varphi>> (-z_1,\bar {z}_2,iz_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-z_2,-\bar{iz}_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a14}\gamma ^{-1}=\Gamma_{a16}$.\
$\bullet$ $M(A_{a14})\approx M(A_{a30})$.\
For $$\begin{aligned}
&A_{a14}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a30}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4).\hspace{.1cm} &h_1h_2(z_1,z_2,z_3,z_4)=(-z_1,-\bar z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a14}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a30}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,iz_2,z_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_1VV @Vh_1h_2 VV\\
(-z_1,\bar z_2,z_3,\bar z_4) @>\varphi>> (-z_1,-\bar {iz}_2,z_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,iz_2,-z_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_2VV @Vh_2 VV\\
(z_1,-z_2,\bar z_3,z_4) @>\varphi>> (z_1,-iz_2,\bar{z}_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,iz_2,z_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,iz_2,z_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
\frac{1}{4}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a14}\gamma ^{-1}=\Gamma_{a30}$.\
$\bullet$ $M(A_{a28})\approx M(A_{a30})$.\
For $$\begin{aligned}
&A_{a28}=
\left(\begin{array}{cccc}
1& 1 & 1 & 0 \\
0& 1 & 1 & 1\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a30}=
\left(\begin{array}{cccc}
1& 1 & 1 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,\bar z_3,z_4).\hspace{.1cm} &h_1h_3(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,-\bar z_3,z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,\bar z_4). &h_2h_3(z_1,z_2,z_3,z_4)=(z_1,-z_2,-\bar z_3,\bar z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a28}=
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a30}=
\Bigl < & \left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$
Let $\varphi(z_1,z_2,z_3,z_4)=(z_1,z_2,iz_3,z_4)$; then we get the following commutative diagrams
$$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_1VV @Vh_1h_3 VV\\
(-z_1,\bar z_2,\bar z_3,z_4) @>\varphi>> (-z_1,\bar {z}_2,-\bar {iz}_3,z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_3VV @Vh_3 VV\\
(z_1,z_2,-z_3,\bar z_4) @>\varphi>> (z_1,z_2,-iz_3,\bar z_4)
\end{CD}$$ $$\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_2VV @Vh_2h_3 VV\\
(z_1,-z_2,\bar z_3,\bar z_4) @>\varphi>> (z_1,-z_2,-\bar{iz}_3,\bar z_4)
\end{CD}
\hspace{.1cm}
\begin{CD}
(z_1,z_2,z_3,z_4) @>\varphi>>(z_1,z_2,iz_3,z_4)\\
@Vg_4VV @Vh_4 VV\\
(z_1,z_2,z_3,-z_4) @>\varphi>> (z_1,z_2,iz_3,-z_4).
\end{CD}$$
Therefore $\exists$ $\gamma
\footnotesize
=
\Bigl( \left(\begin{array}{cccc}
0\\
0\\
\frac{1}{4}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr)
\normalsize
\in \mathbb{A}(n)$ s.t. $\gamma \Gamma_{a28}\gamma ^{-1}=\Gamma_{a30}$.\
Similar to (4), because $\Phi_{a10}=\Phi_{a14}=\mathbb{Z}_2 \times \mathbb{Z}_2 \times \mathbb{Z}_2$.\
**(30).** $M(A_{a10})$ is not diffeomorphic to $M(A_{a14})$.\
For $$\begin{aligned}
&A_{a10}=
\left(\begin{array}{cccc}
1& 1 & 0 & 0 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\hspace{.1cm}
&A_{a14}=
\left(\begin{array}{cccc}
1& 1 & 0 & 1 \\
0& 1 & 1 & 0\\
0& 0 & 1 & 1\\
0& 0 & 0 & 1
\end{array}\right)
\\
&g_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,z_4).\hspace{.1cm} &h_1(z_1,z_2,z_3,z_4)=(-z_1,\bar z_2,z_3,\bar z_4)\\
&g_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4). &h_2(z_1,z_2,z_3,z_4)=(z_1,-z_2,\bar z_3,z_4)\\
&g_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4). &h_3(z_1,z_2,z_3,z_4)=(z_1,z_2,-z_3,\bar z_4)\\
&g_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4). &h_4(z_1,z_2,z_3,z_4)=(z_1,z_2,z_3,-z_4)\end{aligned}$$ then $$\begin{aligned}
\Gamma_{a10}=&\langle \tilde {g}_1, \tilde {g}_2, \tilde {g}_3, t_4 \rangle\\
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >\end{aligned}$$
$$\begin{aligned}
\Gamma_{a14}=&\langle \tilde {h}_1, \tilde {h}_2, \tilde {h}_3, t_4 \rangle\\
\Bigl < &\left(\begin{array}{cccc}
\frac{1}{2}\\
0\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& -1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & -1 & 0\\
0& 0 & 0 & 1
\end{array}\right),
\\
&\left(\begin{array}{cccc}
0\\
0\\
\frac{1}{2}\\
0
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & -1
\end{array}\right),
\left(\begin{array}{cccc}
0\\
0\\
0\\
\frac{1}{2}
\end{array}\right)
\left(\begin{array}{cccc}
1& 0 & 0 & 0\\
0& 1 & 0 & 0\\
0& 0 & 1 & 0\\
0& 0 & 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$ Since $\tilde {g}_1\tilde {g}_2\tilde {g}_1^{-1}=\tilde {g}_2^{-1}$, $\tilde {g}_1 \tilde {g}_3\tilde {g}_1^{-1}=\tilde {g}_3$, and $\tilde {g}_1 t_4\tilde {g}_1^{-1}=t_4$, we have $\Gamma_{a10}=\langle \tilde {g}_2, \tilde {g}_3, t_4 \rangle \rtimes \langle \tilde {g}_1 \rangle$. Hence the center is $\displaystyle \mathcal C(\Gamma_{a10})= \langle t_1 \rangle$, where $t_1=\tilde {g}_1^2$. There is an extension $$\begin{aligned}
1\rightarrow \langle t_1 \rangle \rightarrow \Gamma_{a10}\rightarrow \Delta_{a10}\rightarrow 1,\end{aligned}$$ where $\Delta_{a10}=\Delta^f_{a10}\rtimes \langle \alpha \rangle
=\langle p,q,r \rangle \rtimes \langle \alpha \rangle$ is isomorphic to $$\begin{aligned}
\Bigl <\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& -1 & 0 \\
0& 0 & 1
\end{array}\right), \left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right) \left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right) \left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr > \rtimes \\
\Big < \left(\begin{array}{c}
0\\
0\\
0
\end{array}\right) \left(\begin{array}{ccc}
-1& 0 & 0\\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr >.\end{aligned}$$ Then, since $\tilde {h}_1\tilde {h}_2\tilde {h}_1^{-1}=\tilde {h}_2^{-1}$, $\tilde {h}_1 \tilde {h}_3\tilde {h}_1^{-1}=\tilde {h}_3$, and $\tilde {h}_1 t_4\tilde {h}_1^{-1}=t_4^{-1}$, then $\Gamma_{a14}=\langle \tilde {h}_2, \tilde {h}_3, t_4 \rangle \rtimes \langle \tilde {h}_1 \rangle$. Then the center $\displaystyle \mathcal C(\Gamma_{a14})= \langle t_1 \rangle$ where $t_1=\tilde {h}_1^2$. There is an extension $$\begin{aligned}
1\rightarrow \langle t_1 \rangle \rightarrow \Gamma_{a14}\rightarrow \Delta_{a14}\rightarrow 1,\end{aligned}$$ where $\Delta_{a14}=\Delta^f_{a14}\rtimes \langle \beta \rangle=\langle s,t,u \rangle \rtimes \langle \beta \rangle$ is isomorphic to $$\begin{aligned}
\Bigl <\left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right)
\left(\begin{array}{ccc}
1& 0 & 0\\
0& -1 & 0 \\
0& 0 & 1
\end{array}\right), \left(\begin{array}{c}
0\\
\frac{1}{2}\\
0
\end{array}\right) \left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right),
\left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right) \left(\begin{array}{ccc}
1& 0 & 0\\
0& 1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr > \rtimes \\
\Big < \left(\begin{array}{c}
0\\
0\\
0
\end{array}\right) \left(\begin{array}{ccc}
-1& 0 & 0\\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right)
\Bigr > .\end{aligned}$$ Suppose that $\Gamma_{a10}$ is isomorphic to $\Gamma_{a14}$ with isomorphism $\varphi:\Gamma_{a10}\rightarrow \Gamma_{a14}$. Then it induces an isomorphism $\hat\varphi:\Delta_{a10}\rightarrow
\Delta_{a14}$ $$\begin{aligned}
\begin{CD}
1 @>>> \langle t_1 \rangle @>>> \Gamma_{a10}@>>> \Delta_{a10}@>>> 1\\
@. @. @V \varphi VV @V \hat \varphi VV\\
1 @>>> \langle t_1 \rangle @>>> \Gamma_{a14}@>>> \Delta_{a14}@>>> 1.
\end{CD}\end{aligned}$$ **Claim 1:**\
If $\hat \varphi:\Delta_{a10}\rightarrow \Delta_{a14}$ is an isomorphism, then $\hat \varphi (\Delta_{a10}^f)=\Delta_{a14}^f$; that is, $\hat \varphi$ maps $\Delta_{a10}^f$ isomorphically onto $\Delta_{a14}^f$.\
Since $\hat \varphi (\alpha )$ is a torsion element, it cannot lie in the torsion-free subgroup $\Delta_{a14}^f$, so we may write $\hat \varphi (\alpha )=h\beta$ for some $h\in \Delta_{a14}^f$. Since $\hat \varphi$ is an isomorphism, there exists $g\in \Delta_{a10}^f$ such that $\hat \varphi (g\alpha )=\beta$ (the preimage of $\beta$ is again a torsion element, hence of the form $g\alpha$). Hence $\beta =\hat \varphi (g)\hat \varphi (\alpha )=\hat \varphi (g)h\beta$. So $\hat \varphi (g)h=1$, i.e. $h =\hat \varphi (g)^{-1}=\hat \varphi (g^{-1})$. Hence $$\label{E1}
\hat \varphi (\alpha )=\hat \varphi (g^{-1})\beta.$$ By using ([\[E1\]](#E1){reference-type="ref" reference="E1"}), $$\begin{aligned}
\hat \varphi (\Delta_{a10})=&\hat \varphi (\Delta_{a10}^f) \rtimes \hat \varphi (\alpha )\\
=&\hat \varphi (\Delta_{a10}^f) \rtimes \hat \varphi (g^{-1})\beta \\
=&\hat \varphi (\Delta_{a10}^f) \rtimes \beta \hspace{1cm} (\hat \varphi (g^{-1})\in \hat \varphi (\Delta_{a10}^f))\end{aligned}$$ while $\hat \varphi (\Delta_{a10})=\Delta_{a14}$. Therefore $$\label{E2}
\hat \varphi (\Delta_{a10}^f) \rtimes \beta =\Delta_{a14}^f \rtimes \beta.$$
Let $s\in \Delta_{a14}^f$, then $s\beta =
\footnotesize
\Big ( \left(\begin{array}{c}
\frac{1}{2}\\
0\\
0
\end{array}\right) \left(\begin{array}{ccc}
-1& 0 & 0\\
0& -1 & 0 \\
0& 0 & -1
\end{array}\right)
\Bigr)$ which is a torsion element. By using ([\[E2\]](#E2){reference-type="ref" reference="E2"}) $\hat \varphi (g'_2)\beta =s\beta$ for some $g'_2\in \Delta_{a10}^f$. Hence $$\label{E3}
\hat \varphi (g'_2)=s.$$ Let $t\in \Delta_{a14}^f$, then $st\beta =
\footnotesize
\Big ( \left(\begin{array}{c}
\frac{1}{2}\\
-\frac{1}{2}\\
0
\end{array}\right) \left(\begin{array}{ccc}
-1& 0 & 0\\
0& -1 & 0 \\
0& 0 & 1
\end{array}\right)
\Bigr)$ which is a torsion element. By using ([\[E2\]](#E2){reference-type="ref" reference="E2"}) there exists $g'_3\in \Delta_{a10}^f$ such that $\hat \varphi (g'_3)\beta =st\beta$. So, $\hat \varphi (g'_3)=st$ or $s^{-1}\hat \varphi (g'_3) =t$, and by ([\[E3\]](#E3){reference-type="ref" reference="E3"}) $$\label{E4}
\hat \varphi ((g'_2)^{-1} g'_3)=t$$ where $(g'_2)^{-1}g'_3\in \Delta_{a10}^f$.
Similarly, $u\beta =
\footnotesize
\Big ( \left(\begin{array}{c}
0\\
0\\
\frac{1}{2}
\end{array}\right) \left(\begin{array}{ccc}
-1& 0 & 0\\
0& 1 & 0 \\
0& 0 & -1
\end{array}\right)
\Bigr)$ is a torsion element. By using ([\[E2\]](#E2){reference-type="ref" reference="E2"}) there exists $g''\in \Delta_{a10}^f$ such that $\hat \varphi (g'')\beta =u\beta$. Hence $$\label{E5}
\hat \varphi (g'')=u.$$
Finally, every $g\in \Delta_{a10}^f$ satisfies $\hat \varphi (g)=h\in \Delta_{a14}^f$. Indeed, if this were not true, then $\hat \varphi (g)=h\beta$. But $h=\hat \varphi (g')$ for some $g'\in \Delta_{a10}^f$, so $h^{-1}\hat \varphi (g)=\beta$, i.e. $\hat \varphi (g'^{-1}g)=\beta$ where $g'^{-1}g\in \Delta_{a10}^f$. Since $\Delta_{a10}^f$ is torsion free while $\beta$ is a torsion element, this is impossible. Therefore $\hat \varphi (\Delta_{a10}^f)\subset \Delta_{a14}^f$, and since ([\[E3\]](#E3){reference-type="ref" reference="E3"}), ([\[E4\]](#E4){reference-type="ref" reference="E4"}) and ([\[E5\]](#E5){reference-type="ref" reference="E5"}) imply surjectivity, $\hat \varphi (\Delta_{a10}^f)= \Delta_{a14}^f$.\
\
**Claim 2:**\
Suppose $\Delta_{a10}$ and $\Delta_{a14}$ are isomorphic via $\hat \varphi$. Since $\Delta_{a10}$ and $\Delta_{a14}$ are crystallographic groups (groups of rigid motions of $\mathbb{R}^{3}$), they are affinely conjugate, i.e. there exists an equivariant diffeomorphism (= affine transformation) $$\label{E6}
(\hat \varphi , S):(\Delta_{a10}, \mathbb{R}^{3})\longrightarrow (\Delta_{a14}, \mathbb{R}^{3})$$ such that $S(\gamma v)=\hat \varphi (\gamma )S(v)$ or equivalently $S\gamma =\hat \varphi (\gamma )S$, $S\gamma S^{-1}=\hat \varphi (\gamma )$.\
First we note that $\hat \varphi$ induces an isomorphism $$\begin{aligned}
\bar \varphi : \Delta_{a10}/{\Delta_{a10}^f}=\langle \bar \alpha \rangle \longrightarrow
\Delta_{a14}/{\Delta_{a14}^f}=\langle \bar \beta \rangle\end{aligned}$$ by $\bar \varphi (\bar \gamma )=\overline {\hat {\varphi} (\gamma )}$. Indeed, if $\bar \gamma =\bar \gamma'$ $(\gamma ,\gamma'\in \Delta_{a10})$, then $\gamma =r\gamma'$ $(r\in \Delta_{a10}^f)$, so $\hat \varphi (\gamma) =\hat \varphi (r) \hat \varphi (\gamma')=r'\hat \varphi (\gamma')$ for some $r'\in \Delta_{a14}^f$ (because $\hat \varphi (\Delta_{a10}^f)= \Delta_{a14}^f$). Hence $\overline {\hat {\varphi} (\gamma )}=\overline {\hat {\varphi} (\gamma' )}$, which shows that $\bar \varphi$ is well defined.
Note that, if $\gamma=\tilde g \alpha$, $\tilde g\in \Delta _{a10}^f$, then $\bar \gamma =\overline{\tilde g \alpha }=\bar \alpha$ and $$\begin{aligned}
\hat \varphi (\gamma)=&\hat \varphi (\tilde g) \hat \varphi (\alpha)\\
=&\hat \varphi (\tilde g) \hat \varphi (g^{-1}) \beta \hspace{1cm} \text {by} \hspace{0.2cm}\eqref{E1}\\
=&\hat \varphi (\tilde g g^{-1})\beta.\end{aligned}$$ Since $g,\tilde g\in \Delta _{a10}^f$ then $\hat \varphi (\tilde g g^{-1})\in \Delta _{a14}^f$ and $\hat \varphi (\gamma) \in \Delta _{a14}$ by assumption. Hence $\overline {\hat \varphi (\gamma) }=\bar \beta$, and by definition $$\label{E7}
\bar \varphi {(\bar \alpha) }=\bar \beta.$$
Note also, $S$ induces a diffeomorphism $$\begin{aligned}
\bar S: M(\Delta _{a10}^f)=\mathbb{R}^{3}/\Delta _{a10}^f \longrightarrow M(\Delta _{a14}^f)=\mathbb{R}^{3}/\Delta _{a14}^f\end{aligned}$$ by $\bar S(\bar v)=\overline {S(v)}$. For this, if $\bar v=\bar v'$ then $v'=rv$ $(r\in\Delta _{a10}^f )$. $S(v')=S(rv)=\hat \varphi (r)S(v)$ by [\[E6\]](#E6){reference-type="eqref" reference="E6"} where $\hat \varphi (r)\in \Delta _{a14}^f$ (because of Claim 1). Thus $\overline {S(v')}=\overline {S(v)}$.
Moreover $\bar S$ is an equivariant diffeomorphism $$\begin{aligned}
(\bar \varphi ,\bar S): (\langle \bar \alpha \rangle, M(\Delta _{a10}^f))\longrightarrow (\langle \bar \beta \rangle,M(\Delta _{a14}^f)).\end{aligned}$$ For this, $$\begin{aligned}
\bar S (\bar \alpha \bar v)=&\bar S (\overline{\alpha v})=\overline {S(\alpha v)}\\
=&\overline {\hat \varphi (\alpha ) S(v)} \hspace {0.9cm} (\text {by} \quad \eqref{E6})\\
=&\overline {\hat {\varphi} (\alpha )} \quad \overline {S(v)} \hspace {0.5cm} (\text {by \,\,($\ast$)\,\, below})\\
=&\bar \beta \bar S(\bar v)\end{aligned}$$ $(\ast )$\
The action of $\Delta _{a10}/\Delta _{a10}^f=\langle \bar \alpha \rangle$ on $M(\Delta _{a10}^f)$ is obtained: $\bar \gamma \bar v =\overline {\gamma v}$ where $\bar \gamma \in \Delta _{a10}/\Delta _{a10}^f$ and $\bar v \in \mathbb{R}^3/\Delta _{a10}^f$. If $\bar\gamma=\bar {\gamma'}$ then $\gamma'=g \gamma$, where $\gamma', \gamma \in \Delta _{a10}$ and $g\in \Delta _{a10}^f$. Therefore $\overline{\gamma' v}=\overline {g\gamma v}=\overline{g(\gamma v)}=\overline{\gamma v}$. If $\bar v=\bar {v'}$ then $v'=g'v$ where $g'\in \Delta _{a10}^f$. Therefore $\overline {\gamma' v'}=
\overline {\gamma' g' v}=\overline {(\gamma'g' \gamma'^{-1}) \gamma'v}=\overline {g'' \gamma' v}=\overline {\gamma' v}$, where $g''\in \Delta _{a10}^f$ normal subgroup. So, when $\bar\gamma=\bar {\gamma'}$, $\bar v=\bar {v'}$ then $\bar \gamma' \bar {v'}=
\overline {\gamma' v'}=\overline {\gamma' v}=\overline {\gamma v}=\bar \gamma \bar v$.
We have this diagram:\
$$\begin{CD}
\Delta _{a10}^f @>\varphi>>\Delta _{a14}^f\\
@VVV @VVV\\
(\Delta _{a10},\mathbb{R}^{3}) @>(\varphi,S)>> (\Delta _{a14},\mathbb{R}^{3})\\
@VVV @VVV\\
(\Delta _{a10}/\Delta _{a10}^f,\mathbb{R}^{3}/\Delta _{a10}^f) @>(\bar \varphi,\bar S)>> (\Delta _{a14}/\Delta _{a14}^f,\mathbb{R}^{3}/\Delta _{a14}^f)\\
\end{CD}$$ where $\bar S (\bar \alpha \bar v)=\bar \beta \bar S (\bar v)$ as $\bar S$ is an equivariant diffeomorphism.\
We note that the fixed point set of $\bar \alpha$, $Fix(\bar \alpha)=\{ \bar v\in M(\Delta _{a10}^f)\,|\,\bar \alpha \bar v=\bar v\}$, is mapped diffeomorphically onto $Fix(\bar \beta)$ by $\bar S$.\
Now, to get $Fix(\bar \alpha)$ and $Fix(\bar \beta)$, the argument is as follows. By considering the exact sequence $${\mathbb Z}^3{\longrightarrow}\Delta_{a10}^f {\longrightarrow}{\mathbb Z}_2 \times {\mathbb Z}_2,$$ the action of ${\mathbb Z}_2 \times {\mathbb Z}_2=\langle \tau , \mu \rangle$ on $T^3$ by $\tau \footnotesize \overline {\left(\begin{array}{ccc}
x_1\\
x_2\\
x_3
\end{array}\right)}=\overline {\left(\begin{array}{ccc}
\frac{1}{2}+x_1\\
-x_2\\
x_3
\end{array}\right)}={\left(\begin{array}{ccc}
-z_1\\
\overline {z_2}\\
z_3
\end{array}\right)},$ $\mu \footnotesize \overline {\left(\begin{array}{ccc}
x_1\\
x_2\\
x_3
\end{array}\right)}=\overline {\left(\begin{array}{ccc}
x_1\\
\frac{1}{2}+x_2\\
-x_3
\end{array}\right)}={\left(\begin{array}{ccc}
z_1\\
-z_2\\
\overline {z_3}
\end{array}\right)}$, and the action of $\langle \bar\alpha \rangle$ on ${\mathbb R}^3/\Delta_{a10}^f$ by $\bar \alpha
\footnotesize \overline {\left(\begin{array}{ccc}
x_1\\
x_2\\
x_3
\end{array}\right)}=\footnotesize \overline {\left(\begin{array}{ccc}
-x_1\\
x_2\\
x_3
\end{array}\right)}$, we have the commutative diagram $$\begin{CD}
{\mathbb Z}_2 \times {\mathbb Z}_2 @. {\mathbb Z}_2 \times {\mathbb Z}_2\\
@VVV @VVV\\
T^3={\mathbb R}^3/{\mathbb Z}^3 @>\tilde \alpha>> {\mathbb R}^3/{\mathbb Z}^3 \\
@VpVV @VpVV\\
{\mathbb R}^3/\Delta_{a10}^f @>\bar \alpha>> {\mathbb R}^3/\Delta_{a10}^f\\
\end{CD}$$ where $\tilde \alpha$ is a lift of $\bar \alpha$ and $p$ is defined by $p(\footnotesize \left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right))=\overline {\left(\begin{array}{ccc}
x_1\\
x_2\\
x_3
\end{array}\right)}$ such that $p\circ \tilde \alpha=\bar \alpha\circ p$.
Note that $\tilde \alpha\footnotesize \left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\footnotesize \left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)\in T^3$, since $\tilde \alpha\footnotesize \overline {\left(\begin{array}{ccc}
{x_1}\\
x_2\\
x_3
\end{array}\right)}=\footnotesize \overline {\left(\begin{array}{ccc}
{-x_1}\\
x_2\\
x_3
\end{array}\right)}= \left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)\in T^3$.
Let $x=\footnotesize\left(\begin{array}{ccc}
{x_1}\\
x_2\\
x_3
\end{array}\right)$ and $z=\footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)$ . If $z\in Fix(\tilde \alpha)$, $\tilde\alpha z=z$, then $p(\tilde\alpha z)=p(z)$. This implies $$\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)=\tilde\alpha\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\tau^m \mu^n
\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right) \,\,\,\, m,n \in \{0,1\}.$$\
So the possibilities for $Fix(\tilde\alpha)$ are obtained from $$\label{FixAl}
\footnotesize\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)= \normalsize
\begin{cases}
z\\
\tau z\\
\mu z\\
\tau \mu z.
\end{cases}$$
Before determining $Fix(\tilde\alpha)$, note that $\tilde\alpha(\tau z)=\tau (\tilde\alpha z)$ and $\tilde\alpha(\mu z)=\mu (\tilde\alpha z)$, since $-\bar z_i=\overline {-z_i}$. Hence if $z\in Fix(\tilde\alpha)$, $\tau z\in Fix(\tilde\alpha)$ and $\mu z\in Fix(\tilde\alpha)$.
Now we calculate $Fix(\tilde\alpha)$ for any case in [\[FixAl\]](#FixAl){reference-type="eqref" reference="FixAl"}.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)$, $\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{\pm 1}\\
z_2\\
z_3
\end{array}\right)$. Since $\tau \footnotesize\left(\begin{array}{ccc}
{1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{-1}\\
\bar z_2\\
z_3
\end{array}\right)$, so $Fix(\tilde\alpha)=\footnotesize\Bigl \{ \left(\begin{array}{ccc}
{1}\\
z_2\\
z_3
\end{array}\right)\Bigr \}$.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)=\normalsize\tau \footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)= \left(\begin{array}{ccc}
{-z_1}\\
\bar z_2\\
z_3
\end{array}\right)$, $\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{\pm i}\\
\pm 1\\
z_3
\end{array}\right)$. Since $\tau \footnotesize\left(\begin{array}{ccc}
{i}\\
\pm 1\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{-i}\\
\pm 1\\
z_3
\end{array}\right)$, and $\mu \footnotesize\left(\begin{array}{ccc}
{i}\\
1\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{i}\\
-1\\
\bar z_3
\end{array}\right)$ so $Fix(\tilde\alpha)=\footnotesize\Bigl \{ \left(\begin{array}{ccc}
{i}\\
1\\
z_3
\end{array}\right)\Bigr \}$.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)=\normalsize\mu \footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)= \left(\begin{array}{ccc}
{z_1}\\
-z_2\\
\bar z_3
\end{array}\right)$, there is no $z_2$ with $z_2=-z_2$, so $Fix(\tilde\alpha)=\emptyset$.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
z_3
\end{array}\right)=\normalsize\tau \mu \footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)= \left(\begin{array}{ccc}
{-z_1}\\
-\bar z_2\\
\bar z_3
\end{array}\right)$, $\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{\pm i}\\
\pm i\\
\pm 1
\end{array}\right)$. Since $$\begin{aligned}
\mu \footnotesize\left(\begin{array}{ccc}
{i}\\
i\\
1
\end{array}\right)&=\left(\begin{array}{ccc}
{i}\\
-i\\
1
\end{array}\right),
&\mu \footnotesize\left(\begin{array}{ccc}
{i}\\
i\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{i}\\
-i\\
-1
\end{array}\right)\\
\mu \footnotesize\left(\begin{array}{ccc}
{-i}\\
i\\
1
\end{array}\right)&=\left(\begin{array}{ccc}
{-i}\\
-i\\
1
\end{array}\right),
&\mu \footnotesize\left(\begin{array}{ccc}
{-i}\\
i\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{-i}\\
-i\\
-1
\end{array}\right)\\
\tau \mu \footnotesize\left(\begin{array}{ccc}
{i}\\
-i\\
1
\end{array}\right)&=\left(\begin{array}{ccc}
{-i}\\
-i\\
1
\end{array}\right),
&\tau \mu \footnotesize\left(\begin{array}{ccc}
{i}\\
-i\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{-i}\\
-i\\
-1
\end{array}\right)\end{aligned}$$ so $Fix(\tilde\alpha)=\footnotesize\Bigl \{ \left(\begin{array}{ccc}
{i}\\
-i\\
1
\end{array}\right), \left(\begin{array}{ccc}
{i}\\
-i\\
-1
\end{array}\right)\Bigr \}$.
To determine $Fix(\bar \alpha)$, the argument is as follows. If $z\in Fix(\tilde\alpha)$, $$\begin{aligned}
\bar \alpha\bar x &=\bar \alpha p(z)
=p(\tilde\alpha z)
=p(z)\\
&=\bar x,\end{aligned}$$ hence $p(Fix(\tilde\alpha))=Fix(\bar \alpha)$. Now we get $$\begin{aligned}
Fix(\bar \alpha)=
\begin{cases}
\Bigl \{ p \footnotesize\left(\begin{array}{ccc}
{1}\\
z_2\\
z_3
\end{array}\right)
\Bigr \}=
\Bigl \{ \footnotesize\overline {\left(\begin{array}{ccc}
{1}\\
x_2\\
x_3
\end{array}\right)}
\Bigr \}=T^2\\
\Bigl \{ p \footnotesize\left(\begin{array}{ccc}
{i}\\
1\\
z_3
\end{array}\right)
\Bigr \}=
\Bigl \{ \footnotesize\overline {\left(\begin{array}{ccc}
\frac{1}{4}\\
1\\
x_3
\end{array}\right)}
\Bigr \}=S^1\\
\Bigl \{ p \footnotesize\left(\begin{array}{ccc}
{i}\\
i\\
1
\end{array}\right),
p \footnotesize\left(\begin{array}{ccc}
{i}\\
i\\
-1
\end{array}\right)
\Bigr \}=
\Bigl \{ \footnotesize\overline {\left(\begin{array}{ccc}
\frac{1}{4}\\
\frac{1}{4}\\
1
\end{array}\right)},
\overline {\left(\begin{array}{ccc}
\frac{1}{4}\\
\frac{1}{4}\\
-1
\end{array}\right)}
\Bigr \}.
\end{cases}\end{aligned}$$\
Similarly, we can find $Fix(\bar \beta)$ as follows. If $z\in Fix(\tilde \beta)$, $\tilde\beta z=z$, then $p(\tilde\beta z)=p(z)$. This implies $$\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
\bar z_3
\end{array}\right)=\tilde\beta\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\tau^m \mu^n
\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right) \,\,\,\, m,n \in \{0,1\}.$$\
So the possibilities for $Fix(\tilde\beta)$ are obtained from $$\label{FixBe}
\footnotesize\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
\bar z_3
\end{array}\right)= \normalsize
\begin{cases}
z\\
\tau z\\
\mu z\\
\tau \mu z.
\end{cases}$$
Note that $\tilde\beta(\tau z)=\tau (\tilde\beta z)$ and $\tilde\beta(\mu z)=\mu (\tilde\beta z)$. Hence if $z\in Fix(\tilde\beta)$, $\tau z\in Fix(\tilde\beta)$ and $\mu z\in Fix(\tilde\beta)$.
Now we calculate $Fix(\tilde\beta)$ for any case in [\[FixBe\]](#FixBe){reference-type="eqref" reference="FixBe"}.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
\bar z_3
\end{array}\right)=\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)$, $\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{\pm 1}\\
z_2\\
\pm 1
\end{array}\right)$. Since $\tau \footnotesize\left(\begin{array}{ccc}
{1}\\
z_2\\
1
\end{array}\right)=\left(\begin{array}{ccc}
{-1}\\
\bar z_2\\
1
\end{array}\right)$, $\tau \footnotesize\left(\begin{array}{ccc}
{1}\\
z_2\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{-1}\\
\bar z_2\\
-1
\end{array}\right)$ so $Fix(\tilde\beta)=\footnotesize\Bigl \{ \left(\begin{array}{ccc}
{1}\\
z_2\\
1
\end{array}\right),
\left(\begin{array}{ccc}
{1}\\
z_2\\
-1
\end{array}\right)\Bigr \}$.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
\bar z_3
\end{array}\right)=\normalsize\tau \footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)= \left(\begin{array}{ccc}
{-z_1}\\
\bar z_2\\
z_3
\end{array}\right)$, $\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{\pm i}\\
\pm 1\\
\pm 1
\end{array}\right)$. Since $$\begin{aligned}
\mu \footnotesize\left(\begin{array}{ccc}
{i}\\
1\\
1
\end{array}\right)&=\left(\begin{array}{ccc}
{i}\\
-1\\
1
\end{array}\right),
&\mu \footnotesize\left(\begin{array}{ccc}
{i}\\
1\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{i}\\
-1\\
-1
\end{array}\right)\\
\mu \footnotesize\left(\begin{array}{ccc}
{-i}\\
1\\
1
\end{array}\right)&=\left(\begin{array}{ccc}
{-i}\\
-1\\
1
\end{array}\right),
&\mu \footnotesize\left(\begin{array}{ccc}
{-i}\\
1\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{-i}\\
-1\\
-1
\end{array}\right)\\
\tau \mu \footnotesize\left(\begin{array}{ccc}
{i}\\
-1\\
1
\end{array}\right)&=\left(\begin{array}{ccc}
{-i}\\
1\\
1
\end{array}\right),
&\tau \mu \footnotesize\left(\begin{array}{ccc}
{i}\\
-1\\
-1
\end{array}\right)=\left(\begin{array}{ccc}
{-i}\\
1\\
-1
\end{array}\right)\end{aligned}$$ so $Fix(\tilde\beta)=\footnotesize\Bigl \{ \left(\begin{array}{ccc}
{i}\\
1\\
1
\end{array}\right), \left(\begin{array}{ccc}
{i}\\
1\\
-1
\end{array}\right)\Bigr \}$.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
\bar z_3
\end{array}\right)=\normalsize\mu \footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)= \left(\begin{array}{ccc}
{z_1}\\
-z_2\\
\bar z_3
\end{array}\right)$, there is no $z_2$ with $z_2=-z_2$, so $Fix(\tilde\beta)=\emptyset$.\
If $\left(\begin{array}{ccc}
{\bar z_1}\\
z_2\\
\bar z_3
\end{array}\right)=\normalsize\tau \mu \footnotesize\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)= \left(\begin{array}{ccc}
{-z_1}\\
-\bar z_2\\
\bar z_3
\end{array}\right)$, $\left(\begin{array}{ccc}
{z_1}\\
z_2\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{\pm i}\\
\pm i\\
z_3
\end{array}\right)$. Since $$\begin{aligned}
\mu \footnotesize\left(\begin{array}{ccc}
{i}\\
i\\
z_3
\end{array}\right)&=\left(\begin{array}{ccc}
{i}\\
-i\\
z_3
\end{array}\right),
&\mu \footnotesize\left(\begin{array}{ccc}
{-i}\\
i\\
z_3
\end{array}\right)=\left(\begin{array}{ccc}
{-i}\\
-i\\
z_3
\end{array}\right)\\
\tau \footnotesize\left(\begin{array}{ccc}
{i}\\
i\\
z_3
\end{array}\right)&=\left(\begin{array}{ccc}
{-i}\\
-i\\
z_3
\end{array}\right)\end{aligned}$$ so $Fix(\tilde\beta)=\footnotesize\Bigl \{ \left(\begin{array}{ccc}
{i}\\
i\\
z_3
\end{array}\right)\Bigr \}$.
To determine $Fix(\bar \beta)$, the argument is as follows. If $z\in Fix(\tilde\beta)$, $$\begin{aligned}
\bar \beta\bar x &=\bar \beta p(z)
=p(\tilde\beta z)
=p(z)\\
&=\bar x,\end{aligned}$$ hence $p(Fix(\tilde\beta))=Fix(\bar \beta)$. Now we get $$\begin{aligned}
Fix(\bar \beta)=
\begin{cases}
\Bigl \{ p \footnotesize\left(\begin{array}{ccc}
{1}\\
z_2\\
-1
\end{array}\right),
p \footnotesize\left(\begin{array}{ccc}
{1}\\
z_2\\
1
\end{array}\right)
\Bigr \}=
\Bigl \{ \footnotesize\overline {\left(\begin{array}{ccc}
{1}\\
x_2\\
-1
\end{array}\right)},
\overline {\left(\begin{array}{ccc}
{1}\\
x_2\\
1
\end{array}\right)}
\Bigr \}=2S^1\\
\Bigl \{ p \footnotesize\left(\begin{array}{ccc}
{i}\\
1\\
1
\end{array}\right),
p \footnotesize\left(\begin{array}{ccc}
i\\
1\\
-1
\end{array}\right)
\Bigr \}=
\Bigl \{ \footnotesize\overline {\left(\begin{array}{ccc}
\frac{1}{4}\\
1\\
1
\end{array}\right)},
\overline {\left(\begin{array}{ccc}
\frac{1}{4}\\
1\\
-1
\end{array}\right)}
\Bigr \}\\
\Bigl \{ p \footnotesize\left(\begin{array}{ccc}
i\\
i\\
z_3
\end{array}\right) \Bigr \}=
\Bigl \{ \footnotesize\overline {\left(\begin{array}{ccc}
\frac{1}{4}\\
\frac{1}{4}\\
x_3
\end{array}\right)}\Bigr \}=S^1.
\end{cases}\end{aligned}$$ This yields a contradiction: $Fix(\bar \alpha)$ contains a two-dimensional component $T^2$, while every component of $Fix(\bar \beta)$ is at most one-dimensional, so $Fix(\bar \alpha)$ cannot be mapped diffeomorphically onto $Fix(\bar \beta)$.\
We observed that the sufficient condition of the conjecture is true for $n=4$. ◻
---
abstract: |
We study the behavior of the smallest possible constants $d(a,b)$ and $d_n$ in Hardy inequalities $$\int_a^b\left(\frac{1}{x}\int_a^xf(t)dt\right)^p\,dx\leq
d(a,b)\,\int_a^b [f(x)]^p dx$$ and $$\sum_{k=1}^{n}\Big(\frac{1}{k}\sum_{j=1}^{k}a_j\Big)^p\leq
d_n\,\sum_{k=1}^{n}a_k^p.$$ The exact rate of convergence of $d(a,b)$ and $d_n$ is established and the "almost extremal" function and sequence are found.
address: Department of Mathematics and Informatics, University of Sofia, 5 James Bourchier Blvd., 1164 Sofia, Bulgaria
author:
- Ivan Gadjev
title: On the Constants and Extremal Function and Sequence for Hardy Inequalities in $L_p$ and $l_p$
---
[^1]
# Introduction and statement of the results
Between 1919 and 1925, in the series of papers [@H1919; @H1920; @H1925], G. H. Hardy established the following inequalities, which are nowadays celebrated as Hardy's inequalities. Let $p>1$; then the integral inequality states that $$\label{eq01}
\int_0^\infty\left(\frac{1}{x}\int_0^xf(t)dt\right)^p\,dx\leq
\left(\frac{p}{p-1}\right)^{p}\,\int_0^\infty f^p(x)dx$$ holds for every $f$, such that $f(x)\geq0$ for $x\in (0,\infty)$ and $f^p$ is integrable over $(0,\infty)$.
The corresponding discrete version claims that $$\label{eq02}
\sum_{k=1}^{\infty}\Big(\frac{1}{k}\sum_{j=1}^{k}a_j\Big)^p\leq
\left(\frac{p}{p-1}\right)^{p}\,\sum_{k=1}^{\infty}{a_k^p}$$ for every sequence $\{ a_k \}$ of non-negative numbers, for which the series on the right-hand side converges.
In his 1920 paper [@H1920] Hardy claimed that he and Marcel Riesz derived ([\[eq02\]](#eq02){reference-type="ref" reference="eq02"}) independently, but in their results the larger constant $(p^2/(p-1))^p$ appears on the right-hand side. E. Landau, in the letter [@Lan2] dated 1921, published later in [@Lan1], was the first to establish ([\[eq02\]](#eq02){reference-type="ref" reference="eq02"}) with the exact constant $(p/(p-1))^{p}$ in the sense that there is no smaller one for which [\[eq02\]](#eq02){reference-type="eqref" reference="eq02"} holds for every sequence of non-negative numbers $a_k$. For the latter statement he considered the sequence $a_k^\ast=k^{-1/p-\varepsilon}$, suggested earlier by Hardy, and showed that $$\left( \frac{a_1^\ast + \cdots + a_k^\ast}{k} \right)^p > \left(\frac{p}{p-1}\right)^p \left( (a_k^\ast)^p - \frac{p}{k^{2-1/p}} \right).$$ Since $\sum_{k=1}^\infty (a_k^\ast)^p \to \infty$ as $\varepsilon \to 0$, the summation of the latter inequalities implies the sharpness of $(p/(p-1))^{p}$ for [\[eq02\]](#eq02){reference-type="eqref" reference="eq02"}. In the same letter Landau pointed out that equality in [\[eq02\]](#eq02){reference-type="eqref" reference="eq02"} occurs only for the trivial sequence, that is, when $a_k=0$ for every $k \in \mathbb{N}$. Similarly, equality in [\[eq01\]](#eq01){reference-type="eqref" reference="eq01"} occurs if and only if $f(x)\equiv0$ almost everywhere.
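
As an added numerical illustration of this sharpness argument (the parameter values below are arbitrary choices), one can evaluate the ratio of the two sides of [\[eq02\]](#eq02){reference-type="eqref" reference="eq02"} for truncations of Hardy's test sequence $a_k^\ast=k^{-1/p-\varepsilon}$; as $\varepsilon$ decreases and the truncation length grows, the ratio increases towards $(p/(p-1))^p$.

```python
import numpy as np

def hardy_ratio(p, eps, n):
    """sum_k ((a_1+...+a_k)/k)^p / sum_k a_k^p for a_k = k^(-1/p-eps), 1 <= k <= n."""
    k = np.arange(1, n + 1, dtype=float)
    a = k ** (-1.0 / p - eps)
    means = np.cumsum(a) / k
    return np.sum(means ** p) / np.sum(a ** p)

p = 3.0
print("limit constant (p/(p-1))^p =", (p / (p - 1.0)) ** p)
for eps, n in [(0.1, 10**4), (0.03, 10**5), (0.01, 10**6)]:
    print(f"eps = {eps:5.2f}, n = {n:>8}: ratio = {hardy_ratio(p, eps, n):.4f}")
```
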
The lack of nontrivial extremizers and the fact that the above argument of Landau does not work for finite sequences motivate one to consider, for any $a$ and $b$ with $-\infty\le a < b\le \infty$ and weight functions $u(x), v(x)$ that are positive a.e., the so-called general Hardy integral inequality $$\label{eqI159}
\int_a^b\left(\int_a^xf(t)dt\right)^pu(x)\,dx\leq
d(a,b)\,\int_a^b f^p(x)v(x) dx,\quad f \in L^p[v;a,b]$$ and its discrete counterpart $$\label{eqI169}
\sum_{k=1}^{n}\Big(\sum_{j=1}^{k}a_j\Big)^pu_k\leq
d_n\,\sum_{k=1}^{n}{a_k^p}v_k, \qquad a_k,u_k,v_k\geq0,\,\,\,k=1,2,...,n.$$ Obviously the inequalities [\[eqI159\]](#eqI159){reference-type="eqref" reference="eqI159"} and [\[eqI169\]](#eqI169){reference-type="eqref" reference="eqI169"} are the "finite versions" of [\[eq01\]](#eq01){reference-type="eqref" reference="eq01"} and [\[eq02\]](#eq02){reference-type="eqref" reference="eq02"}. The natural questions are: what are the best constants $d(a,b)$ and $d_n$, and what are the corresponding extremizers? This is exactly our endeavor in this paper, mainly because of the importance of Hardy's inequalities, their far-reaching generalizations, especially to the so-called Hardy-Sobolev inequalities, and thus the necessity of understanding them more thoroughly. Answering the above questions in a satisfactory manner for arbitrary weight functions and sequences such that the inequalities [\[eqI159\]](#eqI159){reference-type="eqref" reference="eqI159"} and [\[eqI169\]](#eqI169){reference-type="eqref" reference="eqI169"} hold is impossible. In this paper we consider the important "unweighted" versions of inequalities [\[eqI159\]](#eqI159){reference-type="eqref" reference="eqI159"} and [\[eqI169\]](#eqI169){reference-type="eqref" reference="eqI169"}, i.e. $$\label{eqI15}
\int_a^b\left(\frac{1}{x}\int_a^xf(t)dt\right)^p\,dx\leq
d(a,b)\,\int_a^b [f(x)]^p dx, \qquad f(x)\geq0,\ \ f \in L^p[a,b]$$ and $$\label{eqI16}
\sum_{k=1}^{n}\Big(\frac{1}{k}\sum_{j=1}^{k}a_j\Big)^p\leq
d_n\,\sum_{k=1}^{n}{a_k^p}, \qquad a_k\geq0,\,\,\,k=1,2,...,n.$$
The behavior of the constant $d(a,b)$ has been studied in many papers; see, for instance, [@Tom], [@Tal], [@Muc]. The best results about $d(a,b)$ for $p>1$ can be summarized in the following way (see, for instance, [@KMP2007] or [@KP2003]). Let $$B=\sup_{a<x<b} \left\{(x-a)^{p-1}\left(x^{1-p}-b^{1-p} \right)\right\};$$ then for the constant $d(a,b)$ the following estimates hold: $$\frac{1}{p-1}B\le d(a,b) \le \left(\frac{p}{p-1} \right)^p B.$$ It is easy to see that only the upper estimate gives asymptotically (when $a\rightarrow 0$ or $b\rightarrow \infty$ or both) the exact constant, but not the rate of convergence.
In [@DGM] we studied the inequality [\[eqI15\]](#eqI15){reference-type="eqref" reference="eqI15"} for $p=2$ and established the exact constant $d(a,b)$ and the extremal function.
**Theorem 1**. *[@DGM][\[th01\]]{#th01 label="th01"} Let $a$ and $b$ be any fixed numbers with $0<a<b<\infty$. Then the inequality $$\label{MR01}
\int_a^b\left(\frac{1}{x}\int_a^xf(t)dt\right)^2 dx\leq
\frac{4}{1+4\alpha^2}\,\int_a^bf^2(x)\, dx,$$ where $\alpha$ is the only solution of the equation $$\tan\left(\alpha\ln\frac{b}{a} \right)+2\alpha=0\ \ \mathrm{in\ the\ interval}\ \ \left(\frac{\pi}{2\ln\frac{b}{a}},\frac{\pi}{\ln\frac{b}{a}} \right),$$ holds for every $f \in L^2[a,b]$. Moreover, equality in [\[MR01\]](#MR01){reference-type="eqref" reference="MR01"} is attained for $$f_{a,b}(x)=x^{-1/2}\left(2\alpha\cos(\alpha\ln x)+\sin(\alpha\ln x) \right).$$*
The behavior of the constant $d_n$ for $p=2$ as a function of $n$ has also been studied extensively; see, for instance, [@HW1], [@HW2], [@Wilf], [@Wilf2], [@Pec]. In [@Wilf] Herbert S. Wilf established the exact rate of convergence of the constant $d_n$ for $p=2$: $$d_n=4-\frac{16\pi^2}{\ln^2n}+O\left(\frac{\ln\ln n}{\ln^3n} \right).$$
In [@FS] F. Stampach gave a slightly sharper estimate, namely $$d_n=4-\frac{16\pi^2}{\ln^2n}+\frac{32\pi^2(\gamma+6\ln2)}{\ln^3n}+
O\left(\frac{1}{\ln^4n} \right).$$
In [@DIGR] we also studied the asymptotic behavior of the constant $d_n$ for $p=2$. It was proved there that $d_n$ can be expressed in terms of the smallest zero of a continuous dual Hahn polynomial of degree $n$ (see [@Long]) for a specific choice of the parameters in terms of which these polynomials are defined. Despite that nice interpretation of $d_n$, it was only proved in [@DIGR Theorem 1.1] that the following inequalities hold for every natural $n\geq 3$: $$\label{eq022}
4\Bigg(1-\frac{4}{\ln n +4}\Bigg)\leq d_n \leq
4\Bigg(1-\frac{8}{(\ln n + 4)^2}\Bigg).$$
In all proofs of the above-mentioned estimates for the constant $d_n$, the authors relied substantially on the special properties of the space $l_2$. In [@DGM] and [@GG] we applied a different approach, which allowed us to give a simpler proof of some of these estimates and to find an almost extremal sequence. We proved the following theorem.
**Theorem 2**. *[@DGM][\[th02\]]{#th02 label="th02"} Let $$\label{eq028}
a_k=\int_k^{k+1}h(x)dx,$$ where $$\label{eq020}
h(x)=x^{-1/2}\left(2\alpha\cos(\alpha\ln x)+\sin(\alpha\ln x) \right),\quad 1\le x\le n+1,$$ and $\alpha$ is the only solution of the equation $$\tan(\alpha\ln (n+1))+2\alpha=0\ \ \mathrm{in\ the\ interval}\ \ \left(\frac{\pi}{2\ln(n+1)},\frac{\pi}{\ln(n+1)} \right).$$ Then $$\label{eq018}
\sum_{k=1}^{n}\Big(\frac{1}{k}\sum_{j=1}^{k}a_j\Big)^2\geq
\frac{4}{1+4\alpha^2}\,\sum_{k=1}^{n}{a_k^2}.$$*
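
The following added sketch builds the sequence of [Theorem 2](#th02){reference-type="ref" reference="th02"} for one concrete $n$ (the value $n=5000$ is an arbitrary choice) and confirms [\[eq018\]](#eq018){reference-type="eqref" reference="eq018"} numerically; the integrals defining $a_k$ are evaluated in closed form, using that $2\sqrt{x}\,\sin(\alpha\ln x)$ is an antiderivative of the function $h$ in [\[eq020\]](#eq020){reference-type="eqref" reference="eq020"}.

```python
import numpy as np
from math import tan, log, pi, sqrt, sin

n = 5000
L = log(n + 1)

def equation(al):                      # tan(alpha*ln(n+1)) + 2*alpha
    return tan(al * L) + 2.0 * al

lo, hi = pi / (2 * L) * (1 + 1e-9), pi / L * (1 - 1e-9)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if equation(mid) < 0 else (lo, mid)
alpha = 0.5 * (lo + hi)

def H(x):                              # antiderivative of h(x) = x^(-1/2)(2a cos(a ln x) + sin(a ln x))
    return 2.0 * sqrt(x) * sin(alpha * log(x))

a = np.array([H(k + 1) - H(k) for k in range(1, n + 1)])   # a_k = int_k^{k+1} h(x) dx
k = np.arange(1, n + 1, dtype=float)
lhs = np.sum((np.cumsum(a) / k) ** 2)
rhs = np.sum(a ** 2)
print(lhs / rhs, 4.0 / (1.0 + 4.0 * alpha ** 2))
assert lhs / rhs >= 4.0 / (1.0 + 4.0 * alpha ** 2) - 1e-12
```
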
By combining the results [\[eq022\]](#eq022){reference-type="eqref" reference="eq022"} and [\[eq018\]](#eq018){reference-type="eqref" reference="eq018"}, the exact rate of convergence of $\{ d_n \}$ is established; namely, the very sharp estimates $$\label{estdn}
4-\frac{16\pi^2}{\ln^2(n+1)} \le d_n \le
4-\frac{32}{(\ln n + 4)^2}$$ hold for every natural $n\geq 3$. Moreover, the sequence $\{a_k\}_1^n$ defined in [\[eq028\]](#eq028){reference-type="eqref" reference="eq028"} is an almost extremal sequence.
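
For $p=2$ the constant $d_n$ is the largest eigenvalue of $C^{T}C$, where $C$ is the $n\times n$ lower-triangular matrix with entries $C_{kj}=1/k$ for $j\le k$, so [\[estdn\]](#estdn){reference-type="eqref" reference="estdn"} can be tested directly. The following added check (for a few sample values of $n$ only) does this with standard linear algebra routines.

```python
import numpy as np
from math import log, pi

def d_n(n):
    """Best constant in the finite discrete Hardy inequality for p = 2."""
    C = np.tril(np.ones((n, n))) / np.arange(1, n + 1, dtype=float)[:, None]
    return float(np.linalg.eigvalsh(C.T @ C)[-1])

for n in (10, 100, 1000, 2000):
    lower = 4 - 16 * pi ** 2 / log(n + 1) ** 2
    upper = 4 - 32 / (log(n) + 4) ** 2
    value = d_n(n)
    assert lower <= value <= upper
    print(f"n = {n:>5}: {lower:8.4f} <= d_n = {value:.4f} <= {upper:.4f}")
```
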
In this paper we establish very sharp estimates for $d(a,b)$ and $d_n$ for $2\le p<\infty$, and we obtain an "almost extremal" function for [\[eqI15\]](#eqI15){reference-type="eqref" reference="eqI15"} and an "almost extremal" sequence for [\[eqI16\]](#eqI16){reference-type="eqref" reference="eqI16"}. Our main results are summarized in the following two theorems.
**Theorem 3**. *Let $2\le p<\infty$ and $0<a<b<\infty$. Then there exist positive constants $c_1=c_1(p)$ and $c_2=c_2(p)$, depending only on $p$, such that the next estimates for the constant $d(a,b)$ in [\[eqI15\]](#eqI15){reference-type="eqref" reference="eqI15"} hold $$\label{MR1}
\left(\frac{p}{p-1} \right)^p \left(1-\frac{c_1}{\ln^2 \frac{b}{a}}\right)\le
d(a,b)\,\le\left(\frac{p}{p-1} \right)^p \left(1+\frac{c_2}{\ln^2 \frac{b}{a}}\right)^{-1}.$$ Moreover, the function $$f^\ast(x)=\frac{1}{x^{1/p}}\left(\frac{\alpha p}{p-1} \cos(\alpha\ln x)+\sin(\alpha\ln x)\right)$$ where $\alpha$ is the only solution of the equation $$\tan\left(\alpha\ln \frac{b}{a} \right)+\frac{\alpha p}{p-1}=0$$ in the interval $\left(\frac{\pi}{2\ln \frac{b}{a}},\frac{\pi}{\ln \frac{b}{a}} \right)$, is an \"almost extremal\" function in the sense that $$\int_a^b\left(\frac{1}{x}\int_a^x f^\ast(t)dt\right)^p\,dx\geq
\left(\frac{p}{p-1} \right)^p \left(1-\frac{c_1}{\ln^2 \frac{b}{a}}\right)\,\int_a^b [f^\ast(x)]^p dx.$$*
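
To illustrate [Theorem 3](#th1){reference-type="ref" reference="th1"} numerically (an added sketch; we take $a=1$ for convenience, so that $\alpha\ln x$ runs over $[0,\pi)$, and integrate on a logarithmic grid), the ratio attained by $f^\ast$ stays below the Hardy constant $\left(\frac{p}{p-1}\right)^p$ and increases towards it as $\ln\frac{b}{a}$ grows.

```python
import numpy as np
from math import tan, log, pi

def extremal_ratio(p, b, m=200001):
    L, q = log(b), p / (p - 1.0)
    equation = lambda al: tan(al * L) + al * q      # tan(alpha*ln(b/a)) + alpha*p/(p-1)
    lo, hi = pi / (2 * L) * (1 + 1e-9), pi / L * (1 - 1e-9)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if equation(mid) < 0 else (lo, mid)
    alpha = 0.5 * (lo + hi)

    u = np.linspace(0.0, L, m)                      # u = ln x, so dx = x du
    x, du = np.exp(u), L / (m - 1)
    f = x ** (-1.0 / p) * (alpha * q * np.cos(alpha * u) + np.sin(alpha * u))
    f = np.maximum(f, 0.0)                          # clip tiny negative rounding at x = b
    trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1])) * du)
    F = np.concatenate(([0.0], np.cumsum(0.5 * ((f * x)[1:] + (f * x)[:-1]) * du)))
    return trap((F / x) ** p * x) / trap(f ** p * x)

p = 3.0
print("Hardy constant:", (p / (p - 1.0)) ** p)
for b in (1e2, 1e4, 1e6):
    print(f"b = {b:.0e}: ratio attained by f* = {extremal_ratio(p, b):.4f}")
```
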
An immediate consequence of [Theorem 3](#th1){reference-type="ref" reference="th1"} is the following corollary.
**Corollary 4**. *When either of the limit relations $a\rightarrow0$, $b\rightarrow\infty$, or both, holds, i.e. $\ln(b/a)\rightarrow\infty$, then $$d(a,b) \sim
\left(\frac{p}{p-1} \right)^p -\frac{C}{\ln^2 \frac{b}{a}}.$$ More precisely, there exist constants $C_1(p)>0$ and $C_2(p)>0$, depending only on $p$, such that $$\left(\frac{p}{p-1} \right)^p -\frac{C_1}{\ln^2 \frac{b}{a}} \le d(a,b) \le
\left(\frac{p}{p-1} \right)^p -\frac{C_2}{\ln^2 \frac{b}{a}}.$$ Also, for the constant $C_1$ the following estimate holds: $$C_1\le \left(\frac{p}{p-1} \right)^{p+1}p\pi^2,$$ which is the exact constant for $p=2$.*
*Remark 5*. Because of the close connection to the maximal function, it is also natural to consider the inequality $$\int_a^b\left(\frac{1}{x-a}\int_a^xf(t)dt\right)^p\,dx\leq
d(a,b)\,\int_a^b [f(x)]^p dx, \qquad f(x)\geq0,\ \ f \in L^p[a,b]$$ which is equivalent (by change of variables) to $$\int_0^b\left(\frac{1}{x}\int_0^xf(t)dt\right)^p\,dx\leq
d(b)\,\int_0^b [f(x)]^p dx, \qquad f(x)\geq0,\ \ f \in L^p[0,b].$$ Then from the above corollary we obtain that $d(b)=\left(\frac{p}{p-1} \right)^p$ and the only function for which the equality is attained is $f=0$ a.e.
**Theorem 6**. *Let $2\le p<\infty$. Then there exist positive constants $c_3=c_3(p)$ and $c_4=c_4(p)$, depending only on $p$, such that for every natural $n\geq 2$ the next estimates for the constant $d_n$ in [\[eqI16\]](#eqI16){reference-type="eqref" reference="eqI16"} hold $$\label{MR2}
\left(\frac{p}{p-1} \right)^p-\frac{c_3}{\ln^2 n}\le
d_n\,\le\left(\frac{p}{p-1} \right)^p-\frac{c_4}{\ln^2 n}.$$ Moreover, the sequence $$a_k^\ast = \int_k^{k+1}f^*(x)\, dx,\quad k=1,2,...,n,$$ where $$\label{eq20}
f^*(x)=\frac{1}{x^{1/p}}\left(\frac{\alpha p}{p-1}\cos(\alpha\ln x)+\sin(\alpha\ln x)\right),\quad 1\le x\le n+1$$ and $\alpha$ is the only solution of the equation $$\tan(\alpha\ln (n+1))+\frac{\alpha p}{p-1}=0$$ in the interval $\left(\frac{\pi}{2\ln(n+1)},\frac{\pi}{\ln(n+1)} \right)$, is an \"almost extremal\" sequence in the sense that $$\label{eqR1}
\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_j^\ast\right)^p\geq
\left(\left(\frac{p}{p-1} \right)^p-\frac{c_3}{\ln^2 n}\right)\,\sum_{k=1}^{n}{[a_k^\ast]^p}.$$ Also, letting $n\rightarrow\infty$, for the constant $c_3$ the following estimate holds: $$c_3\le \left(\frac{p}{p-1} \right)^{p+1}p\pi^2,$$ which is the exact constant for $p=2$.*
*Remark 7*. The constants $c_2(p)$ and $c_4(p)$ in [Theorem 3](#th1){reference-type="ref" reference="th1"} and [Theorem 6](#th2){reference-type="ref" reference="th2"} are by no means the best ones. They could be improved in several ways, but doing so would have made the proofs longer and more complicated. Our goal was to establish the exact rate of convergence while keeping the proofs as simple as possible.
Henceforth, $c=c(p)$ will always denote a constant depending only on $p$; it may be different on each occurrence. The relation $O(f)$ means that there exists a positive constant $c(p)$, depending only on $p$, such that $\left|O(f)\right|\le c(p)|f|$.
The paper is organized as follows. In Section 2 some technical results are proved; in Section 3 the right inequality of [Theorem 3](#th1){reference-type="ref" reference="th1"} is proved; in Section 4 the left inequality of [Theorem 3](#th1){reference-type="ref" reference="th1"}; in Section 5 the left inequality of [Theorem 6](#th2){reference-type="ref" reference="th2"}; and in Section 6 the right inequality of [Theorem 6](#th2){reference-type="ref" reference="th2"}.
# Auxiliary Results
We need some technical lemmas.
**Lemma 8**. *For $0\le x\le 1$ and $\alpha\geq 0$ the following inequality holds: $$\label{eq1}
(1-x)^\alpha \le 1-\alpha x+\frac{1}{2}(\alpha x)^2.$$*
*Proof.* From $1-x \le e^{-x}$ we have for $\alpha\geq 0$ and $\alpha x<1$ $$(1-x)^{\alpha}\le e^{-\alpha x}=1-\alpha x+\frac{(\alpha x)^2}{2}+\sum_{k=3}^{\infty}(-1)^k\frac{(\alpha x)^k}{k!}
\le 1-\alpha x+\frac{1}{2}(\alpha x)^2.$$ For $\alpha x\geq1$ the function $f(x)=1-\alpha x+\frac{1}{2}(\alpha x)^2-(1-x)^\alpha$ satisfies $f'(x)=\alpha\left[\alpha x-1+(1-x)^{\alpha-1}\right]\geq0$, so $f$ is increasing on this range; since $f\geq 0$ when $\alpha x\le 1$ (by the first part and continuity), it follows that $f(x)\geq 0$ there as well. ◻
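
Elementary inequalities of this kind are easy to test numerically; the following added grid check covers [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} and [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} (the grid ranges are arbitrary choices).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 501)[None, :]           # 0 <= x <= 1
al = np.linspace(0.0, 25.0, 501)[:, None]         # alpha >= 0
assert np.all((1.0 - x) ** al <= 1.0 - al * x + 0.5 * (al * x) ** 2 + 1e-12)   # Lemma 8

y = np.linspace(-1.0, 5.0, 601)[None, :]          # x >= -1
be = np.linspace(0.0, 1.0, 101)[:, None]          # 0 <= alpha <= 1
assert np.all((1.0 + y) ** be <= 1.0 + be * y + 1e-12)                         # Lemma 9
print("grid checks for Lemmas 8 and 9 passed")
```
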
**Lemma 9**. *For $x\geq -1$, $0\le \alpha\le 1$ the following inequality holds: $$\label{eq2}
(1+x)^\alpha \le 1+\alpha x.$$*
*Proof.* For $\alpha=0$ the inequality is trivial. For $0<\alpha\le1$ let $\alpha =1/\beta$ with $\beta\geq1$. By the Bernoulli inequality, $\left(1+\frac{x}{\beta}\right)^{\beta}\geq 1+x$, and taking the $\beta$-th root of both (nonnegative) sides gives [\[eq2\]](#eq2){reference-type="ref" reference="eq2"}. ◻
**Lemma 10**. *For $x\geq 0$, $\alpha\geq 0$ and $\alpha x<1$ the following inequality holds: $$\label{eq234}
(1+x)^\alpha \le 1+\alpha x+(\alpha x)^2.$$*
*Proof.* $$(1+x)^\alpha< e^{\alpha x}
=1+\alpha x+\sum_{k=2}^{\infty}\frac{(\alpha x)^k}{k!}
\le 1+\alpha x+(\alpha x)^2\sum_{k=2}^{\infty}\frac{1}{k!}
\le 1+\alpha x+(\alpha x)^2.$$ ◻
**Lemma 11**. *Let $b>1$, $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and $$\label{eq200}
g(x)=\frac{1}{x^{1/(pq)}}\left(\cos(\alpha\ln x) \right)^{1/q}$$ where $\alpha=\frac{1}{\ln b}\arctan\frac{1}{p}$ . Then there exists a constant $c=c(p)>0$, depending only on $p$ such that for $1\le x\le b$ the next inequality holds: $$\label{eqI3}
\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha^2q^2} \right)^{p/q}\le
-q\left(1+c\alpha^2 \right)^{-1}x^{1+1/q}\left[(g(x))^p \right]'.
%\frac{\left(\cos(\alpha\ln x)\right)^{p/q-1}}{1+c\alpha^2} (\cos(\alpha\ln x)+\alpha p\sin(\alpha\ln x))$$*
*Proof.* The above inequality [\[eqI3\]](#eqI3){reference-type="eqref" reference="eqI3"} is equivalent to $$\begin{aligned}
&\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha^2q^2} \right)^{p/q}\\
&\le \left(1+c\alpha^2 \right)^{-1}\left(\cos(\alpha\ln x)\right)^{p/q-1} (\cos(\alpha\ln x)+\alpha p\sin(\alpha\ln x))\end{aligned}$$ or $$\left(\frac{1+\alpha qy}{1+\alpha^2q^2} \right)^{p/q}\le
\frac{1+\alpha py}{1+c\alpha^2}$$ where $y=\tan(\alpha\ln x)$. Obviously $y=\tan(\alpha\ln x)\le\tan(\alpha\ln b)=1/p$. We consider two cases.
**Case 1.** $\alpha\geq1.$\
Since $$\frac{1+\alpha qy}{1+\alpha^2q^2}\le \frac{1+\alpha q/p}{1+\alpha^2}
\le \frac{1+\alpha }{1+\alpha^2}\le1$$ we have $$\left(\frac{1+\alpha qy}{1+\alpha^2q^2} \right)^{p/q}
=\frac{1+\alpha qy}{1+\alpha^2q^2} \left(\frac{1+\alpha qy}{1+\alpha^2q^2} \right)^{p/q-1}
\le \frac{1+\alpha py}{1+\alpha^2q^2}.$$
**Case 2.** $\alpha<1.$\
From [Lemma 10](#lem021){reference-type="ref" reference="lem021"} $$(1+\alpha qy)^{p/q}\le 1+\alpha py+(\alpha py)^2$$ and from Bernoulli's inequality $$\left(1+\alpha^2q^2 \right)^{p/q}\geq 1+\alpha^2 pq.$$ So, it is enough to prove that there exists a constant $c=c(p)>0$ such that $$1+\alpha py+(\alpha py)^2\le
\frac{1+\alpha^2 pq}{1+c\alpha^2}(1+\alpha py)$$ which after some simplifications is $$(py)^2\le \frac{pq-c}{1+c\alpha^2}.$$ Since $py\le 1$ and $\alpha<1$ the above inequality is true if we take, for instance, $c=(pq-1)/2$.
By taking $c=\min\{q^2, (pq-1)/2 \}$ we complete the proof of the lemma. ◻
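
The proof above reduces [\[eqI3\]](#eqI3){reference-type="eqref" reference="eqI3"} to the scalar inequality $\left(\frac{1+\alpha qy}{1+\alpha^2q^2}\right)^{p/q}\le\frac{1+\alpha py}{1+c\alpha^2}$ for $0\le y\le 1/p$, $\alpha>0$ and $c=\min\{q^2,(pq-1)/2\}$, and this reduced form can be sampled directly. An added numerical check (for sampled values of $p$, $y$ and $\alpha$ only):

```python
import numpy as np

for p in (2.0, 2.5, 4.0, 8.0):
    q = p / (p - 1.0)
    c = min(q ** 2, (p * q - 1.0) / 2.0)
    y = np.linspace(0.0, 1.0 / p, 401)[None, :]
    al = np.concatenate((np.linspace(1e-4, 1.0, 400), np.linspace(1.0, 50.0, 400)))[:, None]
    lhs = ((1.0 + al * q * y) / (1.0 + al ** 2 * q ** 2)) ** (p / q)
    rhs = (1.0 + al * p * y) / (1.0 + c * al ** 2)
    assert np.all(lhs <= rhs + 1e-12)
print("reduced inequality of Lemma 11 holds on the sampled grid")
```
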
*Remark 12*. The inequality [\[eqI3\]](#eqI3){reference-type="eqref" reference="eqI3"} could be written in the following way $$\label{eqI31}
\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha^2q^2} \right)^{p/q}\le
-q\left(1+\frac{c}{\ln^2b} \right)^{-1}x^{1+1/q}\left(g^{p}(x) \right)'$$ where $c=\arctan (1/p)\min\{q^2, (pq-1)/2 \}$.
**Lemma 13**. *Let $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$, $0<\epsilon<1$, $$b_0=e^{\pi/\sqrt{\min\{q(p-q)^{-1}(pq+1)^{-2},4(pq)^{-2} \}\epsilon}}$$ and $$\label{eq21}
f^*(x)=\frac{1}{x^{1/p}}(\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x))$$ where $\alpha$ is the only solution of the equation $$\tan\left(\alpha\ln b \right)+\alpha q=0$$ in the interval $\left(\frac{\pi}{2\ln b},\frac{\pi}{\ln b} \right)$. Then for every $b>b_0$ the next inequality holds $$\begin{aligned}
\label{eqI5}
(\sin(\alpha\ln x))^{p/q}\geq
-\frac{qx^{1+1/q}\left[(f^*(x))^{p/q} \right]'} {1+(pq+\epsilon)\alpha^2 }\end{aligned}$$*
*Remark 14*. The function $f^*$ defined by [\[eq21\]](#eq21){reference-type="eqref" reference="eq21"} is well defined. Indeed, if $\alpha\ln x\in \left(0,\frac{\pi}{2}\right]$ this is obvious. If $\alpha\ln x\in \left(\frac{\pi}{2},\pi \right)$, then, since the function $h(x)=\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x)$ is decreasing there and $h(b)=0$, we get $h(x)>h(b)=0$ for $x<b$; hence $h(x)\geq0$ for $1\le x\le b$ and consequently $f^*(x)\geq0$.
*Proof.* It is easy to see that for every $b>b_0$ $$\label{eqalpha}
\alpha \le \sqrt{\min\left\{\frac{q}{(p-q)(pq+1)^2}, \frac{4}{(pq)^2}\right \}\epsilon}.$$ Since $$\begin{aligned}
\label{eqI4}
&-\frac{qx^{1+1/q}}{1+\alpha^2 pq}\left[(f^*(x))^{p/q} \right]'\notag\\
&=(\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x))^{p/q-1}
\left(\sin(\alpha\ln x)-\frac{\alpha (p-q)}{1+\alpha^2 pq}\cos(\alpha\ln x) \right)\end{aligned}$$ we need to prove that for every $b>b_0$ the next inequality is true $$\begin{aligned}
\label{eqI6}
&\frac{1+(pq+\epsilon)\alpha^2 }{1+pq\alpha^2}(\sin(\alpha\ln x))^{p/q}\notag\\
&\geq (\alpha q\cos(\alpha\ln x)+\sin(\alpha\ln x))^{p/q-1}
\left(\sin(\alpha\ln x)-\frac{\alpha (p-q)}{1+\alpha^2 pq}\cos(\alpha\ln x) \right).\end{aligned}$$ We consider two cases.
**Case 1.** $\alpha\ln x\in \left[\frac{\pi}{2},\pi \right)$.\
Let $\alpha\ln x=\pi-\phi,\, 0<\phi\le\pi/2$. Then [\[eqI6\]](#eqI6){reference-type="eqref" reference="eqI6"} is equivalent to $$\begin{aligned}
\frac{1+(pq+\epsilon)\alpha^2 }{1+pq\alpha^2}(\sin(\alpha\ln x))^{p/q}
\geq (-\alpha q\cos\phi+\sin\phi)^{p/q-1}
\left(\sin\phi+\frac{\alpha (p-q)}{1+\alpha^2 pq}\cos\phi \right)\end{aligned}$$ or $$\begin{aligned}
\label{I61}
\left[1+(pq+\epsilon)\alpha^2 \right]y^{p/q}\geq \left(y-\alpha q \right)^{p/q-1}
\left[\left(1+\alpha^2pq \right)y+\alpha (p-q) \right]\end{aligned}$$ where $y=\tan\phi$. Since $y>\alpha q$ in this case, we have from Bernoulli's inequality $$\left(\frac{y}{y-\alpha q} \right)^{p/q}\geq \frac{y+\alpha(p-q)}{y-\alpha q}.$$ So, it is enough to prove that $$\left[1+(pq+\epsilon)\alpha^2 \right][y+\alpha(p-q)]\geq \left(1+\alpha^2 pq \right)y+\alpha (p-q)$$ which is easy to verify.
**Case 2.** $\alpha\ln x\in \left[0,\frac{\pi}{2} \right)$.\
In this case [\[eqI6\]](#eqI6){reference-type="eqref" reference="eqI6"} is equivalent to $$\begin{aligned}
\label{eqI9}
\left(1+(pq+\epsilon)\alpha^2\right )y^{p/q}
\geq (y+\alpha q )^{p/q-1}
\left[\left(1+\alpha^2 pq\right )y-\alpha(p-q) \right]\end{aligned}$$ where $y=\tan(\alpha\ln x)$. If $y\le \frac{\alpha (p-q)}{1+\alpha^2 pq}$ then [\[eqI9\]](#eqI9){reference-type="eqref" reference="eqI9"} is obvious. Let $y> \frac{\alpha (p-q)}{1+\alpha^2 pq}$.
**Case 2.1.** $2\le p< 3$\
[\[eqI9\]](#eqI9){reference-type="eqref" reference="eqI9"} is equivalent to $$\begin{aligned}
1+(pq+\epsilon)\alpha^2\geq \left(1+\frac{\alpha q}{y} \right)^{p/q-1}
\left(1+\alpha^2 pq-\frac{\alpha(p-q)}{y} \right).\end{aligned}$$ Since $$\left(1+\frac{\alpha q}{y} \right)^{p/q-1} \le 1+\frac{\alpha(p-q)}{y}$$ it is enough to prove that $$\begin{aligned}
1+(pq+\epsilon)\alpha^2\geq \left(1+\frac{\alpha(p-q)}{y}\right)
\left(1+\alpha^2pq-\frac{\alpha(p-q)}{y} \right).\end{aligned}$$ Simplifying it $$\begin{aligned}
\epsilon
\geq \frac{\alpha pq(p-q)}{y}-\frac{(p-q)^2}{y^2}
\end{aligned}$$ which is true since $\alpha pq<2\sqrt\epsilon$.
**Case 2.2.** $p\geq 3$\
[\[eqI9\]](#eqI9){reference-type="eqref" reference="eqI9"} is equivalent to $$\begin{aligned}
\left[1+(pq+\epsilon)\alpha^2\right ]\left(\frac{y}{y+\alpha q}\right)^{p/q-1}
\geq 1+\alpha^2pq-\frac{\alpha(p-q)}{y} .\end{aligned}$$ From Bernoulli's inequality $$\begin{aligned}
\left(\frac{y}{y+\alpha q}\right)^{p/q-1}
\geq 1-\frac{ \alpha(p-q)}{y+\alpha q }. \end{aligned}$$ So, it is enough to prove that $$\begin{aligned}
\left[1+(pq+\epsilon)\alpha^2\right ]\left( 1-\frac{\alpha (p-q)}{y+\alpha q} \right)
\geq 1+\alpha^2 pq-\frac{\alpha (p-q)}{y}.\end{aligned}$$ Simplifying, this reduces to $$\epsilon+\frac{q(p-q)}{y(y+\alpha q)}\geq \frac{(p-q)(pq+\epsilon)\alpha}{y+\alpha q}.$$ If $y\le q[(pq+\epsilon)\alpha]^{-1}$ it is obvious. For $y> q[(pq+\epsilon)\alpha]^{-1}$ $$\begin{aligned}
\frac{(p-q)(pq+\epsilon)\alpha}{y+\alpha q}<\frac{(p-q)(pq+\epsilon)\alpha}
{q[(pq+\epsilon)\alpha]^{-1}+\alpha q}<\epsilon
\end{aligned}$$ since $$\alpha^2<\frac{q\epsilon}{(p-q)(pq+1)^2}.$$
The lemma is proved. ◻
**Lemma 15**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and all natural numbers $i$ and $n$ such that $i\le n$ the following inequality holds: $$\label{eq5}
\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}
\le q\left(\frac{1}{i^{1/q}}-\frac{1}{(n+1)^{1/q}} \right).$$*
*Proof.* We will prove that there is a $k_0 \in \mathbb{N}$ such that for every $k\geq k_0$ the next inequality is true: $$\label{eq51}
\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}
\le q\left(\frac{1}{k^{1/q}}-\frac{1}{(k+1)^{1/q}} \right).$$ From [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} $$\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}\le 1-\frac{1}{qk^{1/q}}+\frac{1}{2q^2k^{2/q}}$$ so, it is enough to prove that there is a $k_0 \in \mathbb{N}$ such that for every $k\geq k_0$ $$\label{eq52}
\frac{1}{k^{1+1/q}}\left(1-\frac{1}{qk^{1/q}}+\frac{1}{2q^2k^{2/q}} \right)
\le q\left(\frac{1}{k^{1/q}}-\frac{1}{(k+1)^{1/q}} \right)$$ i.e. $$\label{eq53}
1-\frac{1}{qk^{1/q}}+\frac{1}{2q^2k^{2/q}}
\le qk\left[1-\left(\frac{k}{k+1} \right)^{1/q} \right].$$ But from [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} $$\left(\frac{k}{k+1} \right)^{1/q}=\left(1-\frac{1}{k+1} \right)^{1/q}
\le 1-\frac{1}{q(k+1)}.$$ and then $$qk\left[1-\left(\frac{k}{k+1} \right)^{1/q} \right]\geq \frac{k}{k+1}=1-\frac{1}{k+1}.$$ So, it is enough to prove that there is a $k_0 \in \mathbb{N}$ such that for every $k\geq k_0$ $$\label{eq54}
1-\frac{1}{qk^{1/q}}+\frac{1}{2q^2k^{2/q}}\le 1-\frac{1}{k+1},$$ i.e. $$\frac{1}{k+1}+\frac{1}{2q^2k^{2/q}} \le \frac{1}{qk^{1/q}}$$ which is obviously true for $k$ big enough. Actually, by considering the function $f(x)=\frac{1}{xk^{1/x}}+\frac{1}{2x^2k^{2/x}}$ it is not difficult to prove that it is true for every $k\geq 8$. From the above it follows that there is $i_0$ such that for every $i\geq i_0$ [\[eq51\]](#eq51){reference-type="eqref" reference="eq51"} is true and consequently [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"} as well.
Now suppose that [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"} holds for $i+1$. We will prove that it then holds for $i$ as well. We have $$\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}
\le \frac{1}{i^{1+1/q}}\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}
+q\left(\frac{1}{(i+1)^{1/q}}-\frac{1}{(n+1)^{1/q}} \right)$$ So, it is enough to prove that $$\frac{1}{i^{1+1/q}}\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}+
\frac{q}{(i+1)^{1/q}}\le\frac{q}{i^{1/q}}$$ i.e. $$\frac{1}{qi}\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}+
\left(\frac{i}{i+1} \right)^{1/q}\le 1.$$ and from [\[eq2\]](#eq2){reference-type="eqref" reference="eq2"} it follows that it is enough to prove that $$\label{eq55}
\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}\le \frac{i}{i+1}.$$ From Bernoulli's inequality for $\alpha\geq 1$ and $0\le x<1$ $$(1-x)^\alpha\le 1-\frac{\alpha x}{1+(\alpha-1)x}$$ we have $$\left(1-\frac{1}{pi^{1/q}}\right)^{p/q}\le 1-\frac{p}{pqi^{1/q}+p-q}.$$ So, it is enough to prove $$\frac{1}{i+1}\le \frac{p}{pqi^{1/q}+p-q}$$ i.e. $$i^{1/q}\le \frac{i}{q}+\frac{1}{p}$$ which is easy to verify.
The proof of the lemma is complete. ◻
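
A direct numerical check of [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"} for a few (arbitrarily chosen) values of $p$, $i$ and $n$, added for illustration:

```python
import numpy as np

def sides_of_eq5(p, i, n):
    q = p / (p - 1.0)
    k = np.arange(i, n + 1, dtype=float)
    lhs = np.sum(k ** (-1.0 - 1.0 / q) * (1.0 - 1.0 / (p * k ** (1.0 / q))) ** (p / q))
    rhs = q * (i ** (-1.0 / q) - (n + 1) ** (-1.0 / q))
    return lhs, rhs

for p in (2.0, 3.0, 7.0):
    for i, n in ((1, 10), (2, 100), (5, 10**5)):
        lhs, rhs = sides_of_eq5(p, i, n)
        assert lhs <= rhs
        print(f"p = {p}, i = {i}, n = {n}: {lhs:.6f} <= {rhs:.6f}")
```
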
**Lemma 16**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and all natural numbers $i$ and $n$ such that $i\le n$ the following inequality holds: $$\begin{aligned}
\label{eq6}
\sum_{k=i}^{n}&\frac{\ln^2k-2q\ln k+2q^2}{k^{1+1/q}}\notag\\
&>\frac{q\left(\ln^2i+2q^2 \right)}{i^{1/q}}
+\frac{\ln^2i-2q\ln i+2q^2}{2i^{1+1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}.\end{aligned}$$*
*Proof.* For the function $$f(x)=\frac{\ln^2x-2q\ln x+2q^2}{x^{1+1/q}}$$ we have $$f'(x)=\frac{1}{x^{2+1/q}}\left[-\left(1+\frac{1}{q}\right)\ln^2 x+2(q+2)\ln x-2q(q+2) \right]$$ and $$f''(x)=\frac{1}{x^{3+1/q}}
\left[\left(2+\frac{3}{q}+\frac{1}{q^2}\right)\ln^2 x-2(6+2q+\frac{3}{q})\ln x+4\left(q^2+3q+2\right) \right].$$ It is easy to see that $f'(x)<0$, $f''(x)>0$ and consequently $f'(x)$ is increasing and $|f'(x)|$ is decreasing. From Euler's summation formula (see, for instance [@Apostol p.149]) $$\sum_{k=i}^nf(k)=\int_i^nf(x)dx+\frac{f(i)+f(n)}{2}+\int_i^n\left(x-[x]-\frac{1}{2} \right)f'(x)dx$$ where $[x]$ is the floor function.
$$\int_i^n\left(x-[x]-\frac{1}{2} \right)f'(x)dx=\sum_{k=i}^{n-1}\left(\int_k^{k+1/2}+\int_{k+1/2}^{k+1} \right)>0$$ since $$\int_k^{k+1/2}>0,\,\,\int_{k+1/2}^{k+1}<0\quad\mbox{and}\quad \int_k^{k+1/2}>\left|\int_{k+1/2}^{k+1} \right|.$$ Then $$\begin{aligned}
\sum_{k=i}^nf(k)&>\int_i^nf(x)dx+\frac{f(i)+f(n)}{2}
>\int_i^nf(x)dx+\frac{\ln^2i-2q\ln i+2q^2}{2i^{1+1/q}}\\
&=\frac{q\left(\ln^2i+2q^2 \right)}{i^{1/q}}
+\frac{\ln^2i-2q\ln i+2q^2}{2i^{1+1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}.\end{aligned}$$ The lemma is proved. ◻
**Lemma 17**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and all natural numbers $i$ and $n$ such that $i\le n$ the following inequality holds: $$\begin{aligned}
\label{eq7}
\sum_{k=i}^{n}&\frac{\ln^2k-2q\ln k+2q^2}{k^{1+2/q}}
<\frac{\ln^2i-2q\ln i+2q^2}{i^{1+2/q}}
+\frac{q\left(\ln^2i-q\ln i+3/2q^2\right)}{2i^{2/q}}.\end{aligned}$$*
*Proof.* For the function $$g(x)=\frac{\ln^2x-2q\ln x+2q^2}{x^{1+2/q}}$$ we have $$g'(x)=\frac{1}{x^{2+2/q}}\left[-\left(1+\frac{2}{q}\right)\ln^2 x+2(q+3)\ln x-2q(q+3) \right]<0$$ and consequently $g(x)$ is decreasing and $$\begin{aligned}
\sum_{k=i}^ng(k)&
<\frac{\ln^2i-2q\ln i+2q^2}{i^{1+2/q}}+\int_i^ng(x)dx\\
&<\frac{\ln^2i-2q\ln i+2q^2}{i^{1+2/q}}
+\frac{q\left(\ln^2i-q\ln i+3/2q^2\right)}{2i^{2/q}}.\end{aligned}$$ The lemma is proved. ◻
**Lemma 18**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and all natural numbers $i$ and $n$ such that $i\le n$ the following inequality holds: $$\begin{aligned}
\label{eq9}
\sum_{k=i}^n\frac{1}{k^p} &\left[\left(k^{1/q} \right)^{p/q-1}-\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1} \right]
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2 \right)-2q^2 \right]\notag\\
&<\frac{\ln^2i-2q\ln i+2q^2}{qi^{1+2/q}}+\frac{\ln^2i-q\ln i+3/2q^2}{2i^{2/q}}
-\frac{2q^2}{3i^{3/q}}+\frac{2q^2}{3n^{3/q}}.\end{aligned}$$*
*Proof.* Since $$\left(1-\frac{1}{pk^{1/q}}\right)^{p/q-1}
>\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}
>1-\frac{1}{qk^{1/q}}$$ we have $$\begin{aligned}
\left(k^{1/q} \right)^{p/q-1}-\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}
=\left(k^{1/q} \right)^{p/q-1}\left[1-\left(1-\frac{1}{pk^{1/q}} \right)^{p/q-1} \right]
<\frac{1}{q}\left(k^{1/q} \right)^{p/q-2}.\end{aligned}$$ Then $$\begin{aligned}
\sum_{k=i}^n\frac{1}{k^p}& \left[\left(k^{1/q} \right)^{p/q-1}-\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1} \right]
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2 \right)-2q^2 \right]\notag\\
&<\frac{1}{q}\sum_{k=i}^n\frac{1}{k^{1+3/q}}\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2 \right)-2q^2 \right]\notag\\
&=\frac{1}{q}\sum_{k=i}^n\frac{\ln^2k-2q\ln k+2q^2}{k^{1+2/q}}
-2q\sum_{k=i}^n\frac{1}{k^{1+3/q}}\\
&<\frac{1}{q}\sum_{k=i}^n\frac{\ln^2k-2q\ln k+2q^2}{k^{1+2/q}}
-2q\int_{i}^n\frac{dx}{x^{1+3/q}}\\
&=\frac{1}{q}\sum_{k=i}^n\frac{\ln^2k-2q\ln k+2q^2}{k^{1+2/q}}
-\frac{2q^2}{3i^{3/q}}+\frac{2q^2}{3n^{3/q}}\end{aligned}$$ and the lemma follows from [\[eq7\]](#eq7){reference-type="eqref" reference="eq7"}. ◻
**Lemma 19**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ there is a constant $c=c(p)>0$, depending only on $p$, such that for every natural $i\geq2$ the next inequality is true: $$\begin{aligned}
\label{eq10}
2q^2-\frac{3}{4}+\frac{2q}{3i^{2/q}}-\frac{2q}{i^{1+1/q}}-\frac{q^2}{i^{1/q}}
-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{1/q}}>c(p).\end{aligned}$$*
*Proof.* It is obvious that there is an $i_0$ and a constant $c(p,i_0)$ such that for every $i>i_0$ the above inequality is true. For $i\le i_0$ we have $$\begin{aligned}
&2q^2-\frac{3}{4}+\frac{2q}{3i^{2/q}}-\frac{2q}{i^{1+1/q}}-\frac{q^2}{i^{1/q}}
-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{1/q}}\\
&>2q^2-\frac{3}{4}-\frac{4q}{3i^{1+1/q}}-\frac{q^2}{i^{1/q}}
-\frac{\left(\ln i-q/2\right)^2+5/4q^2}{2qi^{1/q}}\\
&=\frac{1}{i^{1/q}}\left[ \left(2q^2-\frac{3}{4}\right)i^{1/q}-\frac{4q}{3i}-q^2
-\frac{\left(\ln i-q/2\right)^2+5/4q^2}{2q}\right].\end{aligned}$$ Now we will prove that for every $i\le i_0$ $$\begin{aligned}
\left(2q^2-\frac{3}{4}\right)i^{1/q}>q^2+\frac{4q}{3i}
+\frac{\left(\ln i-q/2\right)^2+5/4q^2}{2q}.\end{aligned}$$ We consider the cases $i=2$ and $i\geq3$ separately.
**Case 1.** $i=2$\
We need to prove that $$\begin{aligned}
\left(2q^2-\frac{3}{4}\right)2^{1/q}>q^2+\frac{2}{3}q
+\frac{\left(\ln 2-q/2\right)^2+5/4q^2}{2q}.\end{aligned}$$ Since $\left(\ln 2-q/2\right)^2<0.1$ it is enough to prove $$\begin{aligned}
\left(2q^2-\frac{3}{4}\right)2^{1/q}>q^2+\frac{2}{3}q+\frac{1}{20q}+\frac{5}{8}q
=q^2+\frac{31}{24}q+\frac{1}{20q}.\end{aligned}$$ Considering for $1\le x\le2$ the function $$f(x)=\left(2x^2-\frac{3}{4}\right)2^{1/x}-x^2-\frac{31}{24}x-\frac{1}{20x}$$ we have $$\begin{aligned}
f'(x)&=4x2^{1/x}-\left(2-\frac{3}{4x^2} \right)2^{1/x}\ln 2-2x-\frac{31}{24}+\frac{1}{20x^2}\\
&>4x2^{1/x}-\frac{7}{10}\left(2-\frac{3}{4x^2} \right)2^{1/x}-2x-\frac{31}{24}\\
&=\left(4x-\frac{7}{5}+\frac{21}{40x^2} \right)2^{1/x}-2x-\frac{31}{24}\\
&>4\sqrt2x-\frac{7\sqrt2}{5}-2x-\frac{31}{24}> 4\sqrt2-\frac{7\sqrt2}{5}-2-\frac{31}{24}>0
\end{aligned}$$ and consequently the function $f(x)$ is increasing and $f(x)>f(1)>0$.
**Case 2.** $i\geq3$\
We have $$i^{1/q}=e^{\ln i/q}>1+\frac{\ln i}{q}+\frac{\ln^2 i}{2q^2}\quad\mbox{and}\quad
\left(\ln i-q/2\right)^2<\ln^2i$$ so it is enough to prove that for $3\le i$ the next inequality is true $$\left(2q^2-\frac{3}{4} \right)\left(1+\frac{\ln i}{q}+\frac{\ln^2 i}{2q^2} \right)
>q^2+\frac{4q}{3i}+\frac{\ln^2i}{2q}+\frac{5}{8}q$$ i.e. $$q^2-\frac{3}{4}+ \left(2q-\frac{3}{4q} \right)\ln i
>\frac{4q}{3i}+\frac{5}{8}q$$ because $$\frac{2q^2-\frac{3}{4}}{2q^2}>\frac{1}{2q}.$$ But $2q-\frac{3}{4q}>5/4$ and $$q^2-\frac{3}{4}+ \left(2q-\frac{3}{4q} \right)\ln i>q^2-\frac{3}{4}+\frac{5}{4}\ln 3
>q^2+\frac{1}{2}.$$ Also, $\frac{4q}{3i}+\frac{5}{8}q<\frac{4q}{9}+\frac{5}{8}q=\frac{77}{72}q$ and since $q^2+\frac{1}{2}>\frac{77}{72}q$ the lemma is proved. ◻
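The positivity established in the proof can also be observed numerically; the following illustrative Python snippet (assuming NumPy is available) evaluates the left-hand side of the inequality of the lemma over a range of $i$ and several sample values of $p$.

```python
# Numerical illustration: the left-hand side of the inequality of the lemma
# stays bounded away from zero for i >= 2 and several sample values of p.
import numpy as np

for p in [2.0, 2.5, 4.0, 10.0]:          # illustrative values of p >= 2
    q = p / (p - 1.0)
    i = np.arange(2, 10_001, dtype=float)
    lhs = (2 * q ** 2 - 0.75
           + 2 * q / (3 * i ** (2 / q))
           - 2 * q / i ** (1 + 1 / q)
           - q ** 2 / i ** (1 / q)
           - (np.log(i) ** 2 - q * np.log(i) + 1.5 * q ** 2) / (2 * q * i ** (1 / q)))
    print(p, lhs.min())                   # expected: strictly positive minima
```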
**Lemma 20**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ there is a constant $c=c(p)>0$ such that for every natural $n$ the next inequality is true: $$\begin{aligned}
\label{eq11}
\frac{\ln^22+2q^2}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}-\ln^22-2q\sum_{k=2}^n\frac{1}{k^{1+2/q}}
-\frac{\ln^22-q\ln2+3/2q^2}{q2^{1+2/q}}>c(p).\end{aligned}$$*
*Proof.* We have $$\sum_{k=2}^n\frac{1}{k^{1+2/q}}=\frac{1}{2^{1+2/q}}+\sum_{k=3}^n\frac{1}{k^{1+2/q}}
<\frac{1}{2^{1+2/q}}+\int_2^\infty \frac{dx}{x^{1+2/q}}=\frac{1+q}{2\cdot 2^{2/q}}$$ and $$\frac{\ln^22-q\ln2+3/2q^2}{q2^{1+2/q}}
=\frac{\left(\ln2-q/2\right)^2+5/4q^2}{q2^{1+2/q}}
<\frac{1}{20q2^{2/q}}+\frac{5q}{2^{3+2/q}}.$$ Since $$\frac{2}{3}\frac{q}{2^{3/q}}>\frac{1}{20q2^{2/q}}$$ and $$\left(1-\frac{1}{2^{1/q}} \right)\ln^22<\frac{1}{4}$$ it follows $$\begin{aligned}
&\frac{\ln^22+2q^2}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}-\ln^22-2q\sum_{k=2}^n\frac{1}{k^{1+2/q}}
-\frac{\ln^22-q\ln2+3/2q^2}{q2^{1+2/q}}\\
&>\frac{q}{2^{2/q}}\left[\left(2^{1+1/q}-1 \right)q-\frac{13}{8} \right]-\frac{1}{4}
>\frac{1}{4}\left[\left(2^{1+1/q}-1 \right)q-\frac{21}{8} \right]>0\end{aligned}$$ since the function $f(x)=\left(2^{1+1/x}-1 \right)x$ is increasing and consequently\
$\left(2^{1+1/q}-1 \right)q>f(1)=3$. ◻
**Lemma 21**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and all natural numbers $i$ and $n$ such that $2\le i\le n$ the next inequality is true: $$\begin{aligned}
\label{eq12}
\sum_{k=i}^{n}\frac{1}{k^p}&\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right]\notag\\
&>\frac{q\left(\ln^2i+2q^2 \right)}{i^{1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}
%-2q^2\left(\frac{1}{i^{1+2/q}}+\frac{q}{2i^{2/q}} \right)\notag\\
-2q^2\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\notag\\
&-\frac{\ln^2i-q\ln i+3/2q^2}{2i^{2/q}}+\frac{2q^2}{3i^{3/q}}-\frac{2q^2}{3n^{3/q}}.\end{aligned}$$*
*Proof.* $$\begin{aligned}
\sum_{k=i}^{n}\frac{1}{k^p}&\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right]=J-I\end{aligned}$$ where $$\begin{aligned}
J&=\sum_{k=i}^{n}\frac{1}{k^p}\left(k^{1/q} \right)^{p/q-1}
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right]\\
&=\sum_{k=i}^{n}\frac{\ln^2k-2q\ln k+2q^2}{k^{1+1/q}}-2q^2\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\end{aligned}$$ and $$\begin{aligned}
I&=\sum_{k=i}^{n}\frac{1}{k^p}
\left[\left(k^{1/q}\right)^{p/q-1}-\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}\right]
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right].\notag\\\end{aligned}$$ From [\[eq6\]](#eq6){reference-type="eqref" reference="eq6"} $$\begin{aligned}
J&>\frac{q\left(\ln^2i+2q^2 \right)}{i^{1/q}}
+\frac{\ln^2i-2q\ln i+2q^2}{2i^{1+1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}
-2q^2\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\end{aligned}$$ and from [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"} $$\begin{aligned}
I&<\frac{\ln^2i-2q\ln i+2q^2}{qi^{1+2/q}}+\frac{\ln^2i-q\ln i+3/2q^2}{2i^{2/q}}
-\frac{2q^2}{3i^{3/q}}+\frac{2q^2}{3n^{3/q}}.\end{aligned}$$ For $i\geq2$ we have $qi^{1/q}\geq2$ and consequently $$\begin{aligned}
\frac{\ln^2i-2q\ln i+2q^2}{2i^{1+1/q}}\geq\frac{\ln^2i-2q\ln i+2q^2}{qi^{1+2/q}}.\end{aligned}$$ The lemma is proved. ◻
**Lemma 22**. *For $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and every natural $n$ the next inequality is true: $$\begin{aligned}
\label{eq13}
\sum_{k=1}^{n}\frac{1}{k^p}&\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right]\notag\\
&>\frac{q\left(\ln^22+2q^2 \right)}{2^{1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}
-2q^2\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}\notag\\
&-\frac{\ln^22-q\ln 2+3/2q^2}{2^{1+2/q}}+\frac{2}{3}\frac{q^2}{2^{3/q}}-\frac{2q^2}{3n^{3/q}}.\end{aligned}$$*
*Proof.* Since $$\begin{aligned}
&\sum_{k=1}^{n}\frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right]\\
&=\sum_{k=2}^{n}\frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q-1}
\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2\right)-2q^2\right]\end{aligned}$$ the lemma follows from the previous one. ◻
**Lemma 23**. *Let $p\geq 2$, $\frac{1}{p}+\frac{1}{q}=1$ and $A>1$. Then for all natural numbers $i$ and $n$ such that $i\le n$ the next equality is true: $$\begin{aligned}
\label{eq15.1}
&\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{1}{A^j\ln^{2j}(n+1)}
\sum_{k=i}^n \frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}\notag\\
&=\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}i}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^{2}(n+1)i^{1/q}} \right)+O\left(\frac{1}{A^2n^{1/q}} \right).\end{aligned}$$*
*Proof.* Let us denote by $g$ the function $$g(x)=\frac{\ln^{2j}x-2qj\ln^{2j-1}x}{x^{1+1/q}}.$$ Again, from Euler's summation formula we have $$\sum_{k=i}^ng(k)=\int_i^ng(x)dx+\frac{g(i)+g(n)}{2}+\int_i^n\left(x-[x]-\frac{1}{2} \right)g'(x)dx$$ where $[x]$ is the floor function. Now $$|g(i)|=\left|\frac{\ln^{2j}i-2qj\ln^{2j-1}i}{i^{1+1/q}} \right|<\frac{c(p)j\ln^{2j}i}{i^{1+1/q}}
<\frac{c(p)j^2\ln^{2j-2}n}{i^{1/q}}$$ and $$|g(n)|=\left|\frac{\ln^{2j}n-2qj\ln^{2j-1}n}{n^{1+1/q}} \right|<\frac{c(p)j\ln^{2j}n}{n^{1+1/q}}
<\frac{c(p)j^2\ln^{2j-2}n}{i^{1/q}}.$$ For $g(x)$ we have $$g'(x)=\frac{\ln^{2j-2}x}{x^{2+1/q}}
\left[-\left(1+\frac{1}{q}\right)\ln^2x+2j(q+2)\ln x-2qj(2j-1) \right]$$ and consequently $$|g'(x)|<\frac{c(p)j^2\ln^{2j}x}{x^{2+1/q}}<\frac{c(p)j^2\ln^{2j-2}x}{x^{1+1/q}}<\frac{c(p)j^2\ln^{2j-2}n}{x^{1+1/q}}.$$ Then $$\left|\int_i^n\left(x-[x]-\frac{1}{2} \right)g'(x)dx\right|
<c(p)j^2\ln^{2j-2}n\int_i^n\frac{dx}{x^{1+1/q}}<\frac{c(p)j^2\ln^{2j-2}n}{i^{1/q}}.$$ Also $$\int_i^ng(x)dx=\frac{q\ln^{2j}i}{i^{1/q}}-\frac{q\ln^{2j}n}{n^{1/q}}.$$ Consequently $$\begin{aligned}
\sum_{k=i}^n \frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}
=\frac{q\ln^{2j}i}{i^{1/q}}-\frac{q\ln^{2j}n}{n^{1/q}}+O\left(\frac{j^2\ln^{2j-2}n}{i^{1/q}} \right).\end{aligned}$$ Then $$\begin{aligned}
&\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{1}{A^j\ln^{2j}(n+1)}
\sum_{k=i}^n \frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}\notag\\
&=\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}i}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^{2}(n+1)i^{1/q}} \right)+O\left(\frac{1}{A^2n^{1/q}} \right).\end{aligned}$$ ◻
# Proof of the right inequality in theorem [Theorem 3](#th1){reference-type="ref" reference="th1"} $$d(a,b)\,\le\left(\frac{p}{p-1} \right)^p \left(1+\frac{c}{\ln^2 \frac{b}{a}}\right)^{-1}\,,\qquad c=c(p)>0.$$ {#proof-of-the-right-inequality-in-theorem-th1-dableleftfracpp-1-rightp-left1fraccln2-fracbaright-1qquad-ccp0.}
By simple change of variables and notations it is easy to see that it is enough to prove [\[MR1\]](#MR1){reference-type="eqref" reference="MR1"} for the interval $(1,b)$.
From Hölder's inequality we have, for every two functions $f(x)\geq0$ and $g(x)> 0$, $x\in(1,b)$, such that $f^p(x)$ and $g^q(x)$ are integrable over $(1,b)$, $$\left(\int_1^x f(t)dt \right)^p \le\left(\int_1^x g^q(t)dt \right)^{p/q}\left(\int_1^x \frac{f^p(t)}{g^p(t)}dt \right).$$ After multiplying both sides by $x^{-p}$, integrating from 1 to $b$ and changing the order of integration on the right-hand side, we get $$\begin{aligned}
\int_1^b\left(\frac{1}{x}\int_1^x f(t)dt \right)^pdx \le
\int_1^b\left[\frac{1}{g^p(t)}\int_t^b\left(\int_1^x g^q(u)du \right)^{p/q}\frac{dx}{x^p} \right]f^p(t)dt.\end{aligned}$$ Let us denote for brevity $M(g,t)=g^{-p}(t)M^*(g,t)$ where $$M^*(g,t)=\int_t^b\left(\int_1^x g^q(u)du \right)^{p/q}\frac{dx}{x^p}.$$ Then for every two functions $f(x)\geq 0$ and $g(x)> 0$, $1<x<b$, such that $f^p(x)$ and $g^q(x)$ are integrable, the following upper estimate holds $$\begin{aligned}
\int_1^b\left(\frac{1}{x}\int_1^x f(t)dt \right)^pdx \le
\max_{1< t< b}M(g,t) \int_1^b f^p(t)dt\end{aligned}$$ and consequently for every function $g(x)> 0,\,1<x<b$ $$d(1,b)\le \max_{1< t< b}M(g,t).$$ Now we want to minimize $$\max_{1< t< b}M(g,t)$$ over all functions $g(x)> 0$ on the interval $(1,b)$ or to find $$\min_{g(x)> 0}\,\max_{1< t< b}\frac{1}{g^p(t)}\int_t^b\left(\int_1^x g^q(u)du \right)^{p/q}\frac{dx}{x^p}.$$
*Remark 24*. For $g(x)=x^{-1/(pq)}$ we obtain the original Hardy inequality. Indeed, we have $$\int_1^x g^q(u)du=qx^{1/q}-q<qx^{1/q},$$ $$\int_t^b\left(\int_1^x g^q(u)du \right)^{p/q}\frac{dx}{x^p}
<\int_t^b\left(qx^{1/q} \right)^{p/q}\frac{dx}{x^p}=q^p\left(t^{-1/q}-b^{-1/q} \right)<q^pt^{-1/q}$$ for every $1< t< b$. Consequently $M(g,t)<q^p$ for every $1< t< b$, which means that $$\max_{1< t< b}M(g,t)<q^p \quad\mbox{i.e.}\quad d(1,b)\le q^p.$$
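The estimate of the remark can also be illustrated numerically. The following Python sketch (not part of the argument; SciPy is assumed to be available) computes $M(g,t)$ for $g(x)=x^{-1/(pq)}$ at a few sample points $t$; the values of $p$ and $b$ are arbitrary choices.

```python
# Numerical illustration of Remark 24: M(g, t) stays below q^p = (p/(p-1))^p.
from scipy.integrate import quad

p = 3.0                                   # illustrative value, p >= 2
q = p / (p - 1.0)
b = 50.0                                  # illustrative right endpoint of the interval

def g(x):
    return x ** (-1.0 / (p * q))

def inner(x):
    # int_1^x g(u)^q du = q * (x**(1/q) - 1), evaluated in closed form
    return q * (x ** (1.0 / q) - 1.0)

def M(t):
    integral, _ = quad(lambda x: inner(x) ** (p / q) / x ** p, t, b)
    return integral / g(t) ** p

for t in [1.5, 2.0, 5.0, 10.0, 25.0]:
    print(t, M(t), q ** p)                # M(t) should stay below q**p
```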
Now, for the function $g(x)$, defined by [\[eq200\]](#eq200){reference-type="eqref" reference="eq200"} we have $$\begin{aligned}
\int_1^xg^q(u)du&=\frac{q}{1+\alpha^2q^2}\left[x^{1/q}(\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x))-1 \right]\\
&<\frac{qx^{1/q}}{1+\alpha^2q^2}\left[\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x) \right]\end{aligned}$$ and for every $1< t< b$ $$\label{eqI1}
M^*(g,t)<q^{p/q}\int_t^b\left(\frac{\cos(\alpha\ln x)+\alpha q\sin(\alpha\ln x)}{1+\alpha^2q^2} \right)^{p/q}
\frac{dx}{x^{1+1/q}}.$$ Then from [\[eqI31\]](#eqI31){reference-type="eqref" reference="eqI31"} it follows that for every $1< t< b$ $$M^*(g,t)<-q^p\left(1+\frac{c}{\ln^2b}\right)^{-1}\int_t^b\left(g^p(x) \right)'dx
\le q^p\left(1+\frac{c}{\ln^2b}\right)^{-1}g^p(t)$$ and consequently $$M(g,t)\le q^p\left(1+\frac{c}{\ln^2b}\right)^{-1}
=\left(\frac{p}{p-1} \right)^p \left(1+\frac{c}{\ln^2 \frac{b}{a}}\right)^{-1}.$$ The last means that $$\max_{1< t< b}M(g,t)\le
\left(\frac{p}{p-1} \right)^p \left(1+\frac{c}{\ln^2 \frac{b}{a}}\right)^{-1}$$ i.e. $$d(1,b)\le \left(\frac{p}{p-1} \right)^p \left(1+\frac{c}{\ln^2 \frac{b}{a}}\right)^{-1}.$$
# Proof of the left inequality in theorem [Theorem 3](#th1){reference-type="ref" reference="th1"} $$d(a,b)\,\geq\left(\frac{p}{p-1} \right)^p-\frac{c}{\ln^2 \frac{b}{a}}\,,\qquad c=c(p)>0.$$ {#proof-of-the-left-inequality-in-theorem-th1-dabgeqleftfracpp-1-rightp-fraccln2-fracbaqquad-ccp0.}
By changing the order of integration, we write the left side of [\[eqI15\]](#eqI15){reference-type="eqref" reference="eqI15"} for $a=1$ and $f(x)>0,\,\,1<x<b$ in the following way $$\begin{aligned}
\int_1^b\left(\frac{1}{x}\int_1^xf(t)dt\right)^p \,dx=\int_1^b M(t)f^p(t)dt\end{aligned}$$ where $$\begin{aligned}
M(t)=\frac{1}{[f(t)]^{p/q}}\int_t^b\left(\int_1^x f(u)du \right)^{p/q}\frac{dx}{x^p}. \end{aligned}$$ Obviously $$d(1,b)\geq \min_{1< t< b}M(t).$$ Then for the function $f^*(x)$ defined in [\[eq21\]](#eq21){reference-type="eqref" reference="eq21"} we have $$\int_1^x f^*(u)du=qx^{1/q}\sin(\alpha\ln x)$$ and $$\begin{aligned}
\label{eqI7}
\int_t^b\left(\int_1^x f^*(u)du \right)^{p/q}\frac{dx}{x^p}
=q^{p/q}\int_t^b (\sin(\alpha\ln x))^{p/q}\frac{dx}{x^{1+1/q}}.\end{aligned}$$ Now let $0<\epsilon<1$. Then from [\[eqI7\]](#eqI7){reference-type="eqref" reference="eqI7"} and [\[eqI5\]](#eqI5){reference-type="eqref" reference="eqI5"} it follows that for $b>b_0$ where $$b_0=e^{\pi/\sqrt{\min\{q(p-q)^{-1}(pq+1)^{-2},4(pq)^{-2} \}\epsilon}}$$ the next inequality holds $$\int_t^b\left(\int_1^x f^*(u)du \right)^{p/q}\frac{dx}{x^p}
\geq -\frac{q^p}{1+(pq+\epsilon)\alpha^2 }\int_t^b\left[(f^*(x))^{p/q} \right]'dx
=\frac{q^p[f^*(t)]^{p/q}}{1+(pq+\epsilon)\alpha^2 }.$$ Consequently for every $b>b_0$ $$\begin{aligned}
M(t)\geq \frac{q^p}{1+(pq+\epsilon)\alpha^2 }\geq q^p\left(1-(pq+\epsilon)\alpha^2 \right)
\geq q^p-\frac{q^p(pq+\epsilon)\pi^2}{\ln^2b}\end{aligned}$$ i.e. $$\label{I17}
%\left(\frac{p}{p-1} \right)^p-\frac{c(p)}{\ln^2 b}\le
d(1,b)\geq q^p-\frac{q^p(pq+\epsilon)\pi^2}{\ln^2b}.$$ Since for $b\le b_0$ we have $$d(1,b)\geq q^p-\frac{q^p\ln^2b_0}{\ln^2b}$$ by taking $c=\max\{q^p(pq+\epsilon)\pi^2, q^p\ln^2b_0\}$ we complete the proof.
# Proof of the left inequality in theorem [Theorem 6](#th2){reference-type="ref" reference="th2"} $$d_n\geq\left(\frac{p}{p-1} \right)^p-\frac{c}{\ln^2 n}\,,\qquad c=c(p)>0.$$ {#proof-of-the-left-inequality-in-theorem-th2-d_ngeqleftfracpp-1-rightp-fraccln2-nqquad-ccp0.}
By changing the order of summation we write the left side of [\[eqI16\]](#eqI16){reference-type="eqref" reference="eqI16"} in the following way $$\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_j\right)^p=
\sum_{i=1}^{n} \left[\frac{1}{a_i^{p/q}} \sum_{k=i}^{n}\frac{1}{k^p}\left(\sum_{j=1}^{k}a_j\right)^{p/q}\right]a_i^p
=\sum_{i=1}^{n}M_ia_i^p$$ where $$M_i=\frac{1}{a_i^{p/q}} M_i^* \quad\mbox{and}\quad M_i^*=\sum_{k=i}^{n}\frac{1}{k^p}\left(\sum_{j=1}^{k}a_j\right)^{p/q}.$$ Then $$\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{j=1}^{k}a_j\right)^p\geq \min_{1\le i\le n}M_i\sum_{k=1}^{n}a_i^p$$ and consequently $$\label{eq3}
d_n\geq \min_{1\le i\le n}M_i.$$ Now, we will prove that the sequence $a_k^*$ defined in Theorem [Theorem 6](#th2){reference-type="ref" reference="th2"} is the "almost extremal" sequence, i. e. the inequality [\[eqR1\]](#eqR1){reference-type="eqref" reference="eqR1"} holds. We have $$\sum_{j=1}^{k}a_j^*=\int_1^{k+1}f^*(x)dx=q(k+1)^{1/q}\sin(\alpha\ln (k+1))$$ where $f^*$ is the function defined by [\[eq20\]](#eq20){reference-type="eqref" reference="eq20"}. It is easy to see that the function $x^{p-1-1/q}\left(\sin(\alpha\ln x)\right)^{p/q}=\left(x^{1/q}\sin(\alpha\ln x)\right)^{p/q}$ is increasing and consequently $$\begin{aligned}
\label{eqR2}
M_i^*&=q^{p/q}\sum_{k=i}^{n}\frac{(k+1)^{p-1-1/q}}{k^p}\left(\sin(\alpha\ln (k+1))\right)^{p/q}\notag\\
&\geq q^{p/q}\int_i^{n+1}(\sin(\alpha\ln x))^{p/q}\frac{dx}{x^{1+1/q}}.\end{aligned}$$ Since the function $f^*(x)$ is continuous there exists a point $\eta_i\in [i,i+1]$ such that $a_i^*=f^*(\eta_i)$. Now let $0<\epsilon<1$. Then from [\[eqR2\]](#eqR2){reference-type="eqref" reference="eqR2"} and Lemma [Lemma 13](#lem20){reference-type="ref" reference="lem20"} it follows that for every integer $$n>n_0=e^{\pi/\sqrt{\min\{q(p-q)^{-1}(pq+1)^{-2},4(pq)^{-2} \}\epsilon}}$$ $$\begin{aligned}
%\label{eqR3}
M_i^*\geq -\frac{q^p}{1+(pq+\epsilon)\alpha^2 }\int_{\eta_i}^b\left[(f^*(x))^{p/q} \right]'dx
=\frac{q^p\left[f^*(\eta_i)\right]^{p/q}}{1+(pq+\epsilon)\alpha^2 }\end{aligned}$$ and consequently $$\begin{aligned}
%\label{eqR3}
M_i&\geq \frac{q^p}{1+(pq+\epsilon)\alpha^2 }\geq q^p\left(1-(pq+\epsilon)\alpha^2 \right)
\geq q^p-\frac{q^p(pq+\epsilon)\pi^2}{\ln^2(n+1)}\end{aligned}$$ i.e. $$d_n\geq q^p-\frac{q^p(pq+\epsilon)\pi^2}{\ln^2(n+1)}.$$ Since for $n\le n_0$ we have $$d_n\geq q^p-\frac{q^p\ln^2n_0}{\ln^2n}$$ by taking $c=\max\{q^p(pq+\epsilon)\pi^2, q^p\ln^2n_0\}$ we complete the proof.
# Proof of the right inequality in theorem [Theorem 6](#th2){reference-type="ref" reference="th2"} $$d_n<\left(\frac{p}{p-1} \right)^p-\frac{c}{\ln^2 n}\,,\qquad c=c(p)>0.$$ {#proof-of-the-right-inequality-in-theorem-th2-d_nleftfracpp-1-rightp-fraccln2-nqquad-ccp0.}
From Hölder's inequality we have, for every two sequences $\mu_i>0$ and $\eta_i\geq0$, $i=1,\dots,n$, $$\sum_{i=1}^{k}\mu_i\eta_i\le\left(\sum_{i=1}^{k}\eta_i^p\right)^{1/p}\left(\sum_{i=1}^{k}\mu_i^q\right)^{1/q}$$ or $$\left(\frac{1}{k}\sum_{i=1}^{k}\mu_i\eta_i\right)^p
\le \frac{1}{k^p}\left(\sum_{i=1}^{k}\eta_i^p\right)\left(\sum_{i=1}^{k}\mu_i^q\right)^{p/q}.$$ Denoting $a_i=\mu_i\eta_i$ and after changing the order of summation we get $$\sum_{k=1}^{n}\left(\frac{1}{k}\sum_{i=1}^{k}a_i\right)^p
\le \sum_{i=1}^{n}M_ia_i^p \le \left( \max_{1\le i\le n}M_i\right) \sum_{i=1}^{n}a_i^p,$$ where $$M_i=\frac{1}{\mu_i^p}M_i^*,\quad M_i^*=\sum_{k=i}^{n}\frac{1}{k^p}\left(\sum_{j=1}^{k}\mu_j^q\right)^{p/q}.$$ Obviously $$d_n\le \max_{1\le i\le n}M_i,\quad\mbox{so we want to minimize }\quad
\max_{1\le i\le n}M_i$$ over all sequences $\mu=\{\mu_i>0\},\,i=1,2, ... ,n$, i.e. to find $$\min_{\mu>0}\,\max_{1\le i\le n}M_i$$ or, at least, to make it as small as possible.
*Remark 25*. By choosing, for instance, $$\mu_k= k^{-1/(pq)},\quad k=1,2,\dots,n$$ we obtain Hardy's inequality with the constant $\left(\frac{p}{p-1} \right)^p$.
Indeed, $$\label{eq14}
\sum_{j=1}^{k}\mu_j^q=1+\sum_{j=2}^{k}\frac{1}{j^{1/p}}<1+\int_1^k\frac{dx}{x^{1/p}}=qk^{1/q}-q+1=
qk^{1/q}\left(1-\frac{1}{pk^{1/q}} \right)$$ and from [\[eq5\]](#eq5){reference-type="eqref" reference="eq5"} of Lemma [Lemma 15](#lem05){reference-type="ref" reference="lem05"} $$M_i^*\le q^{p/q}\sum_{k=i}^{n}\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}}\right)^{p/q}
\le\left(\frac{p}{p-1} \right)^pi^{-1/q}.$$ Consequently $$M_i\le\left(\frac{p}{p-1} \right)^p \quad\mbox{and}\quad d_n\le\left(\frac{p}{p-1} \right)^p.$$
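The same bound can be observed numerically for the discrete quantities; the following illustrative Python snippet (assuming NumPy is available) computes $M_i$ for $\mu_k=k^{-1/(pq)}$ and arbitrary sample values of $p$ and $n$.

```python
# Numerical illustration of Remark 25: for mu_k = k^{-1/(pq)} the quantities
# M_i do not exceed (p/(p-1))^p.
import numpy as np

p = 2.5                                   # illustrative value, p >= 2
q = p / (p - 1.0)
n = 2_000                                 # illustrative number of terms

k = np.arange(1, n + 1, dtype=float)
mu = k ** (-1.0 / (p * q))
S = np.cumsum(mu ** q)                    # S_k = sum_{j <= k} mu_j^q
terms = (S ** (p / q)) / k ** p           # summands of M_i^*
M_star = np.cumsum(terms[::-1])[::-1]     # M_i^* = sum_{k >= i} terms_k
M = M_star / mu ** p
print(M.max(), (p / (p - 1.0)) ** p)      # max_i M_i should not exceed (p/(p-1))^p
```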
But in order to prove the right inequality of [\[MR2\]](#MR2){reference-type="eqref" reference="MR2"} we need to make a more complicated choice of the sequence $\mu_k$. Let $$\mu_k=
\left(\frac{A}{k^{1/p}}-\frac{1}{\ln^2(n+1)}\int_{k}^{k+1}\frac{\ln^2 x}{x^{1/p}}dx\right)^{1/q}$$ where $A=A(p)>2$ is a constant which depends only on $p$ and will be chosen later. It is obvious that the sequence $\mu_k,\, k=1,...,n$ is well defined. Then for every $i$ such that $1\le i\le n$ $$\label{eq31}
\mu_i^p<\frac{A^{p/q}}{i^{1/q}}=\frac{c(p)}{i^{1/q}}$$ and $$\begin{aligned}
\label{eq15}
\mu_i^p&>\left(\frac{A}{i^{1/p}}-\frac{\ln^2(i+1)}{i^{1/p}\ln^2(n+1)}\right)^{p/q}
=\frac{A^{p/q}}{i^{1/q}}\left(1-\frac{\ln^2(i+1)}{A\ln^2(n+1)}\right)^{p/q}\notag\\
&=\frac{A^{p/q}}{i^{1/q}}\sum_{j=0}^\infty(-1)^j \binom{p/q}{j}
\left(\frac{\ln^2(i+1)}{A\ln^2(n+1)} \right)^j.\end{aligned}$$ Now $$\begin{aligned}
\sum_{j=1}^{k}\mu_j^q
&=A\sum_{j=1}^{k}\frac{1}{j^{1/p}}-\frac{1}{\ln^2(n+1)}\int_{1}^{k+1}\frac{\ln^2 x}{x^{1/p}}dx\\
&<A\sum_{j=1}^{k}\frac{1}{j^{1/p}}-\frac{1}{\ln^2(n+1)}\int_{1}^{k}\frac{\ln^2 x}{x^{1/p}}dx.\end{aligned}$$ Since $$\int_{1}^{k}\frac{\ln^2 x}{x^{1/p}}dx=
qk^{1/q}\left(\ln^2k-2q\ln k+2q^2 \right)-2q^3$$ and from [\[eq14\]](#eq14){reference-type="eqref" reference="eq14"} we have $$\begin{aligned}
\sum_{j=1}^{k}\mu_j^q
&<Aq\left(k^{1/q}-\frac{1}{p} \right)-\frac{q}{\ln^2(n+1)}\left[k^{1/q}\left(\ln^2k-2q\ln k+2q^2 \right)-2q^2 \right]\\
&=Aq\left(k^{1/q}-\frac{1}{p} \right)(1-S(k))\end{aligned}$$ where for brevity we denote $$S(k)=\frac{k^{1/q}\left(\ln^2k-2q\ln k+2q^2 \right)-2q^2}{A\left(k^{1/q}-\frac{1}{p} \right)\ln^2(n+1)}.$$ We have $$\begin{aligned}
\label{eq17}
(Aq)^{-p/q}M_i^*&<(Aq)^{-p/q}\sum_{k=i}^{n} \frac{1}{k^p}\left[Aq\left(k^{1/q}-\frac{1}{p} \right)\right]^{p/q}
\left(1-S(k)\right)^{p/q}\notag\\
&=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}S^j(k) \notag\\
&=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
-\frac{p}{q}\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}S(k) \notag\\
&+\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}S^j(k)=L_1-L_2+L_3.\end{aligned}$$ From [\[eq5\]](#eq5){reference-type="ref" reference="eq5"} of Lemma [Lemma 15](#lem05){reference-type="ref" reference="lem05"} $$\label{L1}
L_1=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\le q\left(\frac{1}{i^{1/q}}- \frac{1}{(n+1)^{1/q}}\right).$$
From [\[eq12\]](#eq12){reference-type="ref" reference="eq12"} of Lemma [Lemma 21](#lem12){reference-type="ref" reference="lem12"} we have for $2\le i\le n$ $$\begin{aligned}
\label{L21}
L_2(i\geq2)
&=\frac{p}{q}\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}S(k)\notag\\
&>\frac{p}{qA\ln^2(n+1)}
\Big[\frac{q\left(\ln^2i+2q^2 \right)}{i^{1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}
-2q^2\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\notag\\
&-\frac{\ln^2i-q\ln i+3q^2/2}{2i^{2/q}}+\frac{2q^2}{3i^{3/q}}-\frac{2q^2}{3n^{3/q}}\Big]\end{aligned}$$ and since $S(1)=0$ $$\begin{aligned}
\label{L22}
L_2(i=1)
&=\frac{p}{q}\sum_{k=1}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}S(k)\notag\\
&>\frac{p}{qA\ln^2(n+1)}
\Big[\frac{q\left(\ln^22+2q^2 \right)}{2^{1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}
-2q^2\sum_{k=2}^{n}\frac{1}{k^{1+2/q}}\notag\\
&-\frac{\ln^22-q\ln 2+3q^2/2}{2^{1+2/q}}+\frac{2}{3}\frac{q^2}{2^{3/q}}-\frac{2q^2}{3n^{3/q}}\Big]\end{aligned}$$ for $i=1$.
For $k\geq2$ from [\[eq1\]](#eq1){reference-type="eqref" reference="eq1"} of Lemma [Lemma 8](#lem01){reference-type="ref" reference="lem01"} and since $$\left|\frac{2q}{\ln k}-\frac{2q^2-2q^2k^{-1/q} }{\ln^2k}\right|<\frac{c(p)}{\ln k}$$ it follows that for every natural $j\geq 2$ $$\begin{aligned}
S^j(k)&=\left[\frac{k^{1/q}\ln^2k}{A\left(k^{1/q}-\frac{1}{p} \right)\ln^2(n+1)} \right]^j
\left[1-\frac{2qj}{\ln k}+\frac{\left(2q^2-2q^2k^{-1/q} \right)j}{\ln^2k}+O\left(\frac{j^2}{\ln^2k} \right) \right]\\
&=\left[\frac{k^{1/q}\ln^2k}{A\left(k^{1/q}-\frac{1}{p} \right)\ln^2(n+1)} \right]^j
\left[1-\frac{2qj}{\ln k}+O\left(\frac{j^2}{\ln^2k} \right) \right]\\
&=\left[\frac{k^{1/q}}{A\left(k^{1/q}-\frac{1}{p} \right)\ln^2(n+1)} \right]^j
\left[\ln^{2j}k-2qj\ln^{2j-1}k \right]\\
&+\left[\frac{k^{1/q}\ln^2k}{A\left(k^{1/q}-\frac{1}{p} \right)\ln^2(n+1)} \right]^j
O\left(\frac{j^2}{\ln^2k} \right)\\
&=\left[\frac{k^{1/q}}{A\left(k^{1/q}-\frac{1}{p} \right)\ln^2(n+1)} \right]^j
\left[\ln^{2j}k-2qj\ln^{2j-1}k \right]+\left(\frac{2}{A}\right)^jO\left(\frac{j^2}{\ln^2(n+1)} \right).\end{aligned}$$ Then for $i\geq 2$ $$\begin{aligned}
%\label{eq27}
L_3&=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}S^j(k)\\
&=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j\binom{p/q}{j}
\frac{k^{j/q}\left(\ln^{2j}k-2qj\ln^{2j-1}k\right)}{A^j\left(k^{1/q}-\frac{1}{p} \right)^j\ln^{2j}(n+1)} \\
&+\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\left(\frac{2}{A}\right)^jO\left(\frac{j^2}{\ln^2(n+1)} \right)
=L_{31}+L_{32}.\end{aligned}$$ Now $$\begin{aligned}
\label{eq27}
\left|L_{32}\right|&=\left|\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\left(\frac{2}{A}\right)^jO\left(\frac{j^2}{\ln^2(n+1)} \right)\right|\notag\\
&<\frac{c(p)}{\ln^2(n+1)}\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q} \right)^{p/q}
\sum_{j=2}^\infty \left|\binom{p/q}{j}\right|\left(\frac{2}{A}\right)^j j^2\notag\\
&=\frac{c(p)}{A^2\ln^2(n+1)}\sum_{k=i}^n\frac{1}{k^{1+1/q}}
<\frac{c(p)}{i^{1/q}A^2\ln^2(n+1)}=\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)} \right).\end{aligned}$$ For $\alpha\geq 1$ and $0\le x\le 1/2$ the next inequality holds $$\frac{1}{(1-x)^\alpha}\le 1+\alpha 2^{\alpha+1}x$$ which is easy to verify. Then for $p\geq 2$ and $j\geq 1$ we have $$\frac{k^{j/q}}{\left(k^{1/q}-\frac{1}{p} \right)^j}=\frac{1}{\left(1-\frac{1}{pk^{1/q}} \right)^j}
=1+O\left(\frac{2^j j}{k^{1/q}} \right)
=1+O\left(\frac{2^j j}{\ln^2k} \right).$$ Consequently $$\begin{aligned}
%\label{eq28}
&\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{k^{j/q}\left(\ln^{2j}k-2qj\ln^{2j-1}k\right)}{A^j\left(k^{1/q}-\frac{1}{p} \right)^j\ln^{2j}(n+1)}\notag \\
&=\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^j\ln^{2j}(n+1)}+
O\left(\frac{1}{A^2\ln^2(n+1)}\right).\end{aligned}$$ Then by [\[eq15.1\]](#eq15.1){reference-type="eqref" reference="eq15.1"} of Lemma [Lemma 23](#lem15){reference-type="ref" reference="lem15"} $$\begin{aligned}
\label{eq29}
L_{31}&=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^j\ln^{2j}(n+1)}\notag\\
&+\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
O\left(\frac{1}{A^2\ln^2(n+1)}\right)\notag\\
&=\sum_{k=i}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^j\ln^{2j}(n+1)}\notag\\
&+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)\notag\\
&=\sum_{k=i}^n \frac{1}{k^{1+1/q}}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^j\ln^{2j}(n+1)}\notag\\
&+\sum_{k=i}^n \frac{1}{k^{1+1/q}}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^j\ln^{2j}(n+1)}O\left(\frac{1}{k^{1/q}} \right)\notag\\
&+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)\notag\\
&=\sum_{k=i}^n \frac{1}{k^{1+1/q}}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}
\frac{\ln^{2j}k-2qj\ln^{2j-1}k}{A^j\ln^{2j}(n+1)}\notag\\
&+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)\notag\\
&=\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{1}{A^j\ln^{2j}(n+1)}
\sum_{k=i}^n \frac{\ln^{2j}k-2qj\ln^{2j-1}k}{k^{1+1/q}}\notag\\
&+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)\notag\\
&=\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}i}{A^j\ln^{2j}(n+1)}
+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)+O\left(\frac{1}{A^2 n^{1/q}}\right)\end{aligned}$$ because $$\begin{aligned}
\frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
&=\frac{1}{k^{1+1/q}}\left(1-\frac{1}{pk^{1/q}} \right)^{p/q}
=\frac{1}{k^{1+1/q}}\left(1+ O\left(\frac{1}{k^{1/q}} \right)\right),\end{aligned}$$
$$\begin{aligned}
\left| \binom{p/q}{j}\frac{(-1)^j\left(\ln^{2j}k-2qj\ln^{2j-1}k\right)}{A^j\ln^{2j}(n+1)}O\left(\frac{1}{k^{1/q}} \right)\right|
<\frac{cj^{p/q+1}\ln^2k}{A^j\ln^{2}(n+1)k^{1/q}}
<\frac{cj^{p/q+1}}{A^j\ln^{2}(n+1)}\end{aligned}$$ and $$\begin{aligned}
\sum_{k=i}^n \frac{1}{k^{1+1/q}}\sum_{j=2}^\infty\frac{cj^{p/q+1}}{A^j\ln^{2}(n+1)}
<\frac{c}{A^2\ln^{2}(n+1)}\sum_{k=i}^n \frac{1}{k^{1+1/q}}
<\frac{c}{A^2\ln^2(n+1)i^{1/q}}.\end{aligned}$$ Consequently for $i\geq2$ $$\begin{aligned}
\label{eq30}
L_{3}(i\geq2)=\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}i}{A^j\ln^{2j}(n+1)}
+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)+O\left(\frac{1}{A^2 n^{1/q}}\right).\end{aligned}$$ Since $S(1)=0$ we have for $i=1$ $$\begin{aligned}
\label{eq32}
L_{3}(i=1)&=\sum_{k=1}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}S^j(k)\notag\\
&=\sum_{k=2}^n \frac{1}{k^p}\left(k^{1/q}-\frac{1}{p} \right)^{p/q}
\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}S^j(k)\notag\\
&=\frac{q}{2^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^2(n+1)}\right)+O\left(\frac{1}{A^2 n^{1/q}}\right)\notag\\
&=\frac{q}{2^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^2(n+1)}\right)\end{aligned}$$
We consider the cases $i=1$ and $i\geq2$ separately.
**Case $i=1$.**\
From [\[L1\]](#L1){reference-type="eqref" reference="L1"}, [\[L22\]](#L22){reference-type="eqref" reference="L22"} and [\[eq32\]](#eq32){reference-type="eqref" reference="eq32"} we obtain $$\begin{aligned}
&(Aq)^{-p/q}M_1^*\\
&<q-\frac{p}{A\ln^2(n+1)}\left[\frac{\ln^22+2q^2}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}
-2q\sum_{k=2}^n\frac{1}{k^{1+2/q}}
-\frac{\ln^22-q\ln2+3q^2/2}{q2^{1+2/q}} \right]\\
&\,\,\,\,\,\,\,\,\,+\frac{q}{2^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^2(n+1)}\right)-T\\
&=q\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
-q\left(1-\frac{1}{2^{1/q}}\right)\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}\\
&-\frac{p}{A\ln^2(n+1)}\left[\frac{\ln^22+2q^2}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}
-\ln^22-2q\sum_{k=2}^n\frac{1}{k^{1+2/q}}
-\frac{\ln^22-q\ln2+3/2q^2}{q2^{1+2/q}} \right]\\
&+O\left(\frac{1}{A^2\ln^2(n+1)}\right)-T\\
&=q\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^2(n+1)}\right)-T\\
&-\frac{p}{A\ln^2(n+1)}\left[\frac{\ln^22+2q^2}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}
-\ln^22-2q\sum_{k=2}^n\frac{1}{k^{1+2/q}}
-\frac{\ln^22-q\ln2+3/2q^2}{q2^{1+2/q}} \right]\\\end{aligned}$$ where $$T=\frac{q}{(n+1)^{1/q}}
%+\frac{c(p)}{(n+1)^{1/q}A^2\ln^2(n+1)}
-\frac{p}{A\ln^2(n+1)} \left[\frac{\ln^2n+2q^2 }{n^{1/q}}
+\frac{2q}{3n^{3/q}}\right].$$ It is obvious that by taking $A=A(p)$ big enough we can make $T$ positive. By Lemma [Lemma 20](#lem11){reference-type="ref" reference="lem11"} there is a constant $c(p)$ such that $$\begin{aligned}
\frac{\ln^22+2q^2}{2^{1/q}}+\frac{2}{3}\frac{q}{2^{3/q}}
-\ln^22-\sum_{k=2}^n\frac{2q}{k^{1+2/q}}
-\frac{\ln^22-q\ln2+3/2q^2}{q2^{1+2/q}}>c(p).\end{aligned}$$ Then $$\begin{aligned}
&(Aq)^{-p/q}M_1^*<
q\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
+O\left(\frac{1}{A^2\ln^2(n+1)}\right)-\frac{pc(p)}{A\ln^2(n+1)}.\end{aligned}$$ Again, by taking $A=A(p)$ big enough we have $$\begin{aligned}
&(Aq)^{-p/q}M_1^*<
q\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}2}{A^j\ln^{2j}(n+1)}
-\frac{c(p)}{A\ln^2(n+1)}\end{aligned}$$ and from [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} and [\[eq31\]](#eq31){reference-type="eqref" reference="eq31"} we get $$M_1<\left(\frac{p}{p-1} \right)^p-\frac{c(A,p)}{\ln^2(n+1)}
=\left(\frac{p}{p-1} \right)^p-\frac{c(p)}{\ln^2(n+1)}.$$
**Case $i\geq2$.**\
From [\[L1\]](#L1){reference-type="eqref" reference="L1"}, [\[L21\]](#L21){reference-type="eqref" reference="L21"}, [\[eq30\]](#eq30){reference-type="eqref" reference="eq30"} we obtain $$\begin{aligned}
\label{eq18}
&(Aq)^{-p/q}M_i^*
<q\left[\frac{1}{i^{1/q}}-\frac{1}{(n+1)^{1/q}}\right]\\
&-\frac{p}{qA\ln^2(n+1)}
\Big[\frac{q\left(\ln^2i+2q^2 \right)}{i^{1/q}}-\frac{q\left(\ln^2n+2q^2 \right)}{n^{1/q}}
-2q^2\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\notag\\
&-\frac{\ln^2i-q\ln i+3/2q^2}{2i^{2/q}}+\frac{2q^2}{3i^{3/q}}-\frac{2q^2}{3n^{3/q}}\Big]
+O\left(\frac{1}{A^2n^{1/q}} \right)\\
&+\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}i}{A^j\ln^{2j}(n+1)}
+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)\\
&=\frac{q}{i^{1/q}}\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}i}{A^j\ln^{2j}(n+1)}
-\frac{q}{(n+1)^{1/q}}\\
&-\frac{p}{A\ln^2(n+1)}
\Big[\frac{2q^2}{i^{1/q}}-\frac{\ln^2n+2q^2}{n^{1/q}}
-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}+\frac{2q}{3i^{3/q}}-\frac{2q}{3n^{3/q}}\notag\\
&-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{2/q}}\Big]
+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)+O\left(\frac{1}{A^2n^{1/q}} \right)\\
&=\frac{q}{i^{1/q}}\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}(i+1)}{A^j\ln^{2j}(n+1)}
-\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}(i+1)-\ln^{2j}i}{A^j\ln^{2j}(n+1)}\notag\\
&-\frac{p}{A\ln^2(n+1)}
\Big[\frac{2q^2-\ln^2(i+1)+\ln^2i}{i^{1/q}}
-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\notag\\
&-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{2/q}}+\frac{2q}{3i^{3/q}}\Big]+
\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)}\right)-T\end{aligned}$$ where $$T=\frac{q}{(n+1)^{1/q}}
%+\frac{c(p)}{(n+1)^{1/q}A^2\ln^2(n+1)}
-\frac{p}{A\ln^2(n+1)} \left[\frac{\ln^2n+2q^2 }{n^{1/q}}
+\frac{2q}{3n^{3/q}}\right]+O\left(\frac{1}{A^2n^{1/q}} \right).$$ By taking $A=A(p)$ big enough we can make $T$ positive. Now $$\begin{aligned}
\left|\frac{q}{i^{1/q}}\sum_{j=2}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}(i+1)-\ln^{2j}i}{A^j\ln^{2j}(n+1)}\right|
=\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)} \right).\end{aligned}$$ Also we have $$\ln^2(i+1)-\ln^2i<\frac{3}{4}$$ and $$\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}\le \frac{1}{i^{1+2/q}}+\int_i^n\frac{dx}{x^{1+2/q}}
\le\frac{1}{i^{1+2/q}}+\frac{q}{2i^{2/q}}$$ and consequently $$\begin{aligned}
&\frac{2q^2-\ln^2(i+1)+\ln^2i}{i^{1/q}}
-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}
-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{2/q}}+\frac{2q}{3i^{3/q}}\\
&>\frac{1}{i^{1/q}}\left[2q^2-\frac{3}{4}+\frac{2q}{3i^{2/q}}-\frac{2q}{i^{1+1/q}}-\frac{q^2}{i^{1/q}}
-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{1/q}}\right].\end{aligned}$$ By Lemma [Lemma 19](#lem10){reference-type="ref" reference="lem10"} there is a constant $c(p)>0$ such that $$\begin{aligned}
\frac{2q^2-\ln^2(i+1)+\ln^2i}{i^{1/q}}
-2q\sum_{k=i}^{n}\frac{1}{k^{1+2/q}}
-\frac{\ln^2i-q\ln i+3/2q^2}{2qi^{2/q}}+\frac{2q}{3i^{3/q}}>\frac{c(p)}{i^{1/q}}.\end{aligned}$$
Then $$\begin{aligned}
&(Aq)^{-p/q}M_i^*\\
&<\frac{q}{i^{1/q}}\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}(i+1)}{A^j\ln^{2j}(n+1)}
-\frac{pc(p)}{A\ln^2(n+1)i^{1/q}}+\frac{1}{i^{1/q}}O\left(\frac{1}{A^2\ln^2(n+1)} \right).\end{aligned}$$ By taking $A=A(p)$ big enough we obtain $$M_i^*<\frac{A^{p/q}q^p}{i^{1/q}}\sum_{j=0}^\infty (-1)^j \binom{p/q}{j}\frac{\ln^{2j}(i+1)}{A^j\ln^{2j}(n+1)}
-\frac{c(A,p)}{i^{1/q}\ln^2(n+1)}$$ and from [\[eq15\]](#eq15){reference-type="eqref" reference="eq15"} and [\[eq31\]](#eq31){reference-type="eqref" reference="eq31"} we get $$M_i<q^p-\frac{c(A,p)}{\ln^2(n+1)}
=\left(\frac{p}{p-1} \right)^p-\frac{c(A,p)}{\ln^2(n+1)}
=\left(\frac{p}{p-1} \right)^p-\frac{c(p)}{\ln^2(n+1)}.$$
G. H. Hardy, *Notes on some points in the integral calculus, LI. On Hilbert's double-series theorem, and some connected theorems concerning the convergence of infinite series and integrals*, Messenger Math. 48 (1919), 107--112.
G. H. Hardy, *Notes on a theorem of Hilbert*, Math. Z. 6 (1920), 314--317.
G. H. Hardy, *Notes on some points in the integral calculus, LX. An inequality between integrals*, Messenger Math. 54 (1925), 150--156.
E. Landau, *Letter to G. H. Hardy*, June 21, 1921.
E. Landau, *A note on a theorem concerning series of positive terms: Extract from a letter of Prof. E. Landau to Prof. I. Schur*, J. London Math. Soc. 1 (1926), 38--39.
G. Tomaselli, *A class of inequalities*, Boll. Un. Mat. Ital. 2 (1969), 622--631. MR 41 \# 411
G. Talenti, *Osservazioni sopra una classe di disuguaglianze* (Italian), Rend. Sem. Mat. Fis. Milano 39 (1969), 171--185. MR 43 \# 6380
B. Muckenhoupt, *Hardy's inequality with weights*, Studia Math. 44 (1972), 31--38. MR 47 \# 418
A. Kufner, L. Maligranda, and L.-E. Persson, *The Hardy Inequality: About its History and Some Related Results*, Vydavatelský servis, 2007.
A. Kufner, L.-E. Persson, and N. Samko, *Weighted Inequalities of Hardy Type*, 2nd ed., World Scientific, Singapore, 2017.
Dimitar Dimitrov, Ivan Gadjev and Ismail Murad, *Sharp Hardy's Inequalities in Hilbert Spaces*, to appear, preprint https://arxiv.org/abs/2306.08172
H. Widom, *On the eigenvalues of certain Hermitian operators*, Trans. Amer. Math. Soc. 88 (1958), 491--522.
H. Widom, *Extreme eigenvalues of translation kernels*, Trans. Amer. Math. Soc. 100 (1961), 252--262.
H. S. Wilf, *On finite sections of the classical inequalities*, Nederl. Akad. Wet. Amsterdam Proc. Ser. A, 1962.
N. G. de Bruijn and H. S. Wilf, *On Hilbert's inequality in $n$ dimensions*, Bull. Amer. Math. Soc.
A. Čižmešija and J. Pečarić, *Mixed means and Hardy's inequality*, Math. Inequal. Appl. 1 (1998), no. 4, 491--506.
F. Stampach, *Asymptotic spectral properties of the Hilbert L-matrix*, SIAM J. Matrix Anal. Appl. 43 (2022), 1658--1679.
D. K. Dimitrov, I. Gadjev, G. Nikolov, and R. Uluchev, *Hardy's inequalities in finite dimensional Hilbert spaces*, Proc. Amer. Math. Soc. 149 (2021), 2515--2529. https://doi.org/10.1090/proc/15467
W.-G. Long, D. Dai, Y.-T. Li, and X.-S. Wang, *Asymptotics of orthogonal polynomials with asymptotic Freud-like weights*, Stud. Appl. Math. 144 (2020), 133--163.
I. Gadjev and V. Gochev, *On the constant in the Hardy inequality for finite sequences*, Math. Inequal. Appl. 26 (2023), no. 2, 493--498.
T. Apostol, *Mathematical Analysis*, 2nd ed., Addison-Wesley, Reading, Mass., 1974.
[^1]: Research supported by the Bulgarian National Research Fund through Contract KP-06-N62/4.
---
abstract: |
  The SIML method (abbreviation of Separating Information Maximum Likelihood) has been introduced by N. Kunitomo and S. Sato and their collaborators to estimate the integrated volatility of high-frequency data that is assumed to be an Itô process but observed with so-called microstructure noise. The SIML estimator turned out to share many properties with the estimator introduced by P. Malliavin and M.E. Mancino. The present paper establishes the consistency and the asymptotic normality under a general sampling scheme but without microstructure noise. Specifically, a fast convergence shown for the Malliavin--Mancino estimator by E. Clement and A. Gloter is also established for the SIML estimator.
**Mathematics Subject Classification (2020):** 62G20, 60F05, 60H05.
**Keywords:** SIML method, Malliavin--Mancino's Fourier estimator, non-parametric estimation, consistency, asymptotic normality.
author:
- "Jirô Akahori[^1], Ryuya Namba[^2], and Atsuhito Watanabe[^3] [^4]"
title: The SIML method without microstructure noise
---
# Introduction
## The Problem {#Problem}
Throughout the present paper, we consider a complete probability space $(\Omega, \mathcal{F}, \mathbf{P})$, which supports a $d$-dimensional Wiener process $\mathbf{W} \equiv (W^1, W^2, \dots, W^d)$ on the time interval $[0,1]$. We denote by $\mathcal{F}_t,\, t \in [0, 1],$ the complete $\sigma$-algebra generated by $\{\mathbf{W}_s : 0 \leq s \leq t \}$ and by $L^2_a [0,1]$ the space of $\{\mathcal{F}_t \}$-adapted processes $\theta$ with $\mathbf{E} [\int_0^1 |\theta(s)|^2 \mathrm{d}s ] < +\infty$.
Let $J \in \mathbb{N}$. Consider an Itô process $$\label{Ito}
\begin{split}
X^j_t = X^j_0
+ \int_0^t b^j (s) \,\mathrm{d}s
+ \sum_{r=1}^d \int_0^t \sigma^j_r (s) \, \mathrm{d} W^r_s,
\end{split}$$ for $j=1, 2, \dots, J$ and $t \in [0,1]$, where $b^j, \sigma^j_r \in L^2_a [0,1]$ for all $j =1, 2, \dots, J$ and $r =1, 2, \dots, d$.
We take the observations for the $j$-th component of the process at times $0 = t^j_0 <t^j_1 < \cdots< t^j_{n_j} = 1$ for $j = 1, 2, \dots, J$. Here we conventionally assume that we observe the initial price and the final price, but the assumption can be relaxed. We are interested in constructing an estimator $(V^{j,j'})_{j, j'=1, 2, \dots, J}$ of the integrated volatility matrix defined by $$\int_0^t \Sigma^{j,j'} (s) \,\mathrm{d}s
:=\sum_{r=1}^d \int_0^t \sigma_r^j (s) \sigma_r^{j'} (s)\, \mathrm{d}s,
\qquad t \in [0, 1],$$ out of the observations, which is consistent in the sense that each $V^{j,j'}$ converges to $\int_0^t \Sigma^{j,j'} (s) \,{\rm d}s$ in probability as $n := \min_{1\leq j \leq J} n_j \to \infty$, under the condition that $$\label{rho}
\rho_n := \max_{j,k} |t^j_k - t^j_{k-1}| \to 0$$ as $n \to \infty$.
## SIML method {#SIMLintro}
Let us briefly review the *separating information maximum likelihood* (SIML for short) estimator, introduced by N. Kunitomo together with his collaborator S. Sato in a series of papers [@KS08a; @KS08b; @KS10; @KS2; @KS13] where the observations are assumed to be with *microstructure noise*. Namely, the observations are $$Y^j (t^j_k) \equiv X^j (t^j_k)+ v^j_k \label{microstructure noise}$$ for $k = 0, 1, \dots, n_j$ and $j = 1, 2, \dots, J$, where $\{v^j_k\}_{j, k}$ is a family of zero-mean $i.i.d.$ random variables with finite fourth moment, which are independent of the Wiener process $\mathbf{W}$.
Let the observations be equally spaced, that is, $t^j_k \equiv k/n$. The estimator of the SIML method is given by $$\label{SIMLestimator}
\begin{split}
V_{n,m_n}^{j,j'}
:= \frac{n}{m_n} \sum_{l=1}^{m_n} \left( \sum_{k=1}^{n_j}
p^{n_j}_{k,l} \Delta Y^j_k \right)
\left(\sum_{k'=1}^{n_{j'}} p^{n_{j'}}_{k',l} \Delta Y^{j'}_{k'} \right),
\end{split}$$ where $m_n (\ll n)$ is an integer, $$p^n_{k,l} = \sqrt{\frac{2}{n+ \frac{1}{2}}}
\cos \left( \left(l - \frac{1}{2}\right)
\pi \left( \frac{k-\frac{1}{2}}{n+\frac{1}{2}}
\right) \right)$$ for $k,l=1, 2, \dots, n$, $n \in \mathbf{N}$, and we understand $\Delta$ to be the difference operator given by $(\Delta a)_k = a_k -a_{k-1}$ for a sequence $\{a_k\}_k$. We then write $$\Delta Y^j_k = Y^j_{t^j_{k}} - Y^j_{t^j_{k-1}}.$$
They have proved the following two properties.
1. (*the consistency*): the convergence in probability of $V^{j,j'}_{n, m_n}$ to $\int_0^1 \Sigma^{j,j'} (s) \,\mathrm{d}s$ as $n \to \infty$ is attained, provided that $m_n = o (n^{1/2})$, and
2. (*the asymptotic normality of the error*): the stable convergence of $$\begin{split}
& \sqrt{m_n}\left(V^{j,j'}_{n, m_n}-\int_0^1 \Sigma^{j,j'} (s) \,\mathrm{d}s \right) \\
& \to N \left(0, \int_0^1 \left(\Sigma^{j,j}(s) \Sigma^{j',j'}(s) + (\Sigma^{j,j'} (s))^2 \right) \mathrm{d}s
\right)
\end{split}$$ holds true as $n \to \infty$ if $m_n = o (n^{2/5})$,
under some mild conditions on $b$ and $\Sigma$. See [@KSK] for more details. In the book [@KSK], more properties of the SIML estimator are proven. Here we just pick up some of them.
## SIML as a variant of Malliavin--Mancino method
The Malliavin--Mancino's Fourier (MMF for short) method, introduced in [@MMFS] and [@MMAS], is an estimation method for the spot volatility $\Sigma^{j, j'} (s)$ appeared in Section [1.1](#Problem){reference-type="ref" reference="Problem"}, by constructing an estimator of the Fourier series of $\Sigma^{j, j'}$. The series consists of estimators of Fourier coefficients given by $$\label{Fourier-estimator}
\begin{split}
\widehat{\Sigma^{j,j'}_{n,m_n}}(q)
:=\frac{1}{m_n} \sum_{l=1}^{m_n} \left( \sum_{k=1}^{n_j}e^{2\pi\sqrt{-1} (l+q) t^j_{k-1} }
\Delta Y^j_k \right)
\left(\sum_{k'=1}^{n_{j'}}e^{-2\pi\sqrt{-1} l t_{k'-1}^{j'} } \Delta Y^{j'}_{k'} \right)
\end{split}$$ for $q \in \mathbf{Z}$. As we see, $\widehat{\Sigma^{j,j'}_{n,m_n}}(0)$ is quite similar to the SIML estimator [\[SIMLestimator\]](#SIMLestimator){reference-type="eqref" reference="SIMLestimator"}.
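To make the comparison concrete, here is an analogous illustrative Python sketch (again not from the cited papers) of the estimator $\widehat{\Sigma^{j,j'}_{n,m_n}}(0)$ for $J=1$ and noiseless observations; all parameter values are arbitrary choices.

```python
# Minimal sketch of the Malliavin--Mancino estimator of the 0-th Fourier coefficient
# (q = 0, j = j') on simulated one-dimensional data without microstructure noise.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, m = 10_000, 0.3, 40             # illustrative sample size, volatility, cut-off

t = np.arange(n + 1) / n                  # uniform sampling times t_k = k/n
X = np.concatenate(([0.0], np.cumsum(sigma * rng.normal(0.0, np.sqrt(1.0 / n), n))))
dX = np.diff(X)                           # Delta X_k
tk = t[:-1]                               # left endpoints t_{k-1}

l = np.arange(1, m + 1).reshape(-1, 1)
c_plus = np.exp(2j * np.pi * l * tk) @ dX     # sum_k e^{ 2*pi*i*l*t_{k-1}} Delta X_k
c_minus = np.exp(-2j * np.pi * l * tk) @ dX   # sum_k e^{-2*pi*i*l*t_{k-1}} Delta X_k
Sigma0_hat = np.mean(c_plus * c_minus).real   # average over l = 1, ..., m
print(Sigma0_hat, sigma ** 2)                 # should be close to sigma**2 * 1 = 0.09
```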
The main concern of the SIML estimator is to eliminate the microstructure noise, and it was derived from a heuristic observation that it might maximize a virtual likelihood function (see [@KSK Chapter 3, Section 2]). On the other hand, the MMF method aims at the estimation of spot volatilities, though the cut-off effects have been recognized well among the Italian school, especially by M. Mancino and S. Sanfelici (see [@MS1] and [@MS2])[^5]. Nonetheless, the two methods reached a similar solution, *independently*. This is really striking and worth further investigation.
The task of the present paper is to establish limit theorems for the SIML estimator, given below as [\[SIMLestimator2\]](#SIMLestimator2){reference-type="eqref" reference="SIMLestimator2"} with a more general sampling scheme than [\[KSSS\]](#KSSS){reference-type="eqref" reference="KSSS"}, under the *no-microstructure noise circumstance*. We mostly employ the techniques from [@ClGl]. Some of them are directly applicable to our framework, but some are not. The main difficulty comes from the nature of the kernel [\[new Dirichlet1\]](#new Dirichlet1){reference-type="eqref" reference="new Dirichlet1"}: unlike the Dirichlet kernel, its integral over $[0,1]$ is not the unit times $1/m$, which causes serious difficulties. Among the contributions of the present paper, the most important one is establishing the fast convergence corresponding to the one studied in [@ClGl], as well as the limit theorems under the general sampling scheme. The study of the limit theorems under the general sampling scheme in the presence of microstructure noise is postponed to a forthcoming paper.
## Organization of the rest of the present paper
The rest of the present paper is divided into two parts. The former part, Section [2](#sec:GFE){reference-type="ref" reference="sec:GFE"}, studies the consistency of the estimator. The latter part, Section [3](#sec:LTS){reference-type="ref" reference="sec:LTS"}, investigates the asymptotic normality of the estimator. Both sections are structured to be *pedagogical*: the intuitions behind the setting and the assumptions for the main theorems are explained, and the essence of the proof is given in advance of the statement. The proofs are given concisely in the last subsection.
# Consistency of the SIML estimator in the absence of microstructure noise {#sec:GFE}
## Setting
To state our results and to give proofs for them in a neat way, we restate the setting with some new notations. First, for a given observation time grid $\Pi := \{ (t_k^j)_{k= 0,1,\cdots,n_j}: j=1, 2, \dots, J \}$, we define $$\begin{split}
{\Pi}^* &:= \{\varphi = (\varphi^1 (s), \cdots, \varphi^J(s)) : [0,1] \to [0,1]^J \mid \text{{\bf (A1)} and {\bf (A2)}} \},
\end{split}$$ where we put:
- **(A1)** The image $\varphi^j ([t^j_{k-1}, t^j_{k}) )$ is a single point in $[t^j_{k-1}, t_k^j ]$ for $k=1, 2, \dots, n_j$ and $j=1, 2, \dots, J$,
- **(A2)** It holds that $\varphi^j ([t^j_{k-1}, t^j_{k}) ) \ne \varphi^j ([t^j_{k}, t^j_{k+1}) )$ for $k=1, 2, \dots, n_j-1$ and $j=1, 2, \dots, J$.
By using a function in $\Pi^*$, we can rewrite the Riemann sums in [\[SIMLestimator\]](#SIMLestimator){reference-type="eqref" reference="SIMLestimator"} as stochastic integrals for which Itô's formula is applicable.
As remarked in the introduction, we will henceforth be working in the situation where $v^j_k \equiv 0$. Thus, the SIML estimator [\[SIMLestimator\]](#SIMLestimator){reference-type="eqref" reference="SIMLestimator"} can now be redefined as $$\begin{aligned}
&V_{n,m_n}^{j,j'} \nonumber\\
&:= \frac{2n}{n+\frac{1}{2}}\frac{1}{m_n} \sum_{l=1}^{m_n} \left(\int_0^1
\cos \left(l-\frac{1}{2} \right) \pi \varphi^j (s) \,\mathrm{d}X^j_s
\right) \left(\int_0^1
\cos \left(l-\frac{1}{2} \right) \pi \varphi^{j'} (s) \, \mathrm{d}X^{j'}_s
\right),
\label{SIMLestimator2}
\end{aligned}$$ where $\varphi \in ((k/n)_{k=1}^n, \cdots,(k/n)_{k=1}^n)^*$ is defined by $$\label{KSSS}
\varphi^j \left( \left[\frac{k-1}{n}, \frac{k}{n}\right) \right) = \frac{2k-1}{2n+1}
= \frac{1}{n} \left( k-1 + \frac{n-k+1}{2n+1}\right) \in \left[\frac{k-1}{n}, \frac{k}{n}\right)$$ for $k=1, 2, \dots, n$ and $j=1, 2, \dots, J$.
In the sequel, we rather work with a general sampling scheme, that is, general $\Pi$ and $\varphi \in \Pi^*$, under the condition [\[rho\]](#rho){reference-type="eqref" reference="rho"}. In doing so, we take equation [\[SIMLestimator2\]](#SIMLestimator2){reference-type="eqref" reference="SIMLestimator2"} as the definition of the estimator $V^{j,j'}_{n,m_n}$, leaving [\[SIMLestimator\]](#SIMLestimator){reference-type="eqref" reference="SIMLestimator"} as a special case.
We also introduce a symmetric kernel $\mathcal{D}^{j,j'}_m : [0,1]\times[0,1] \to \mathbf{R}$ associated with $\varphi \in \Pi^*$ by $$\begin{aligned}
\label{new Dirichlet1}
\begin{aligned}
\mathcal{D}_{m}^{j, j'}(u, s)
&:=\frac{1}{2m}\frac{\sin{m\pi\left(\varphi^j(u)+\varphi^{j'}(s)\right)}}{\sin{\pi\left(\varphi^j(u)+\varphi^{j'}(s)\right)/2}} + \frac{1}{2m}\frac{\sin{m\pi\left(\varphi^j(u)-\varphi^{j'}(s)\right)}}{\sin{\pi\left(\varphi^j(u)-\varphi^{j'}(s)\right)/2}} \\
\end{aligned}
\end{aligned}$$ for $u,s \in [0,1]$. Then, by applying Itô's formula to the products of the stochastic integrals in [\[SIMLestimator2\]](#SIMLestimator2){reference-type="eqref" reference="SIMLestimator2"}, we have $$\begin{aligned}
\label{ItoF}
\begin{aligned}
\frac{n+\frac{1}{2}}{n}{V}_{n,m_n}^{j,j'} &=
\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) \, \Sigma^{j,j'} (s) \,
\mathrm{d}s
%\langle X^j, X^{j'} \rangle_s
+ \left( \int_0^1 \int_0^s +
\int_0^1 \int_0^u \right)
% {\rm d}X^{j'}_s {\rm d}X^{j}_u
\mathcal{D}_{m_n}^{j, j'}(u, s) \, {\rm d}X^j_u {\rm d}X^{j'}_s
% ({\rm d}X^j_u {\rm d}X^{j'}_s + {\rm d}X^{j'}_u {\rm d}X^{j}_s)
\end{aligned}\end{aligned}$$ since $$\begin{aligned}
\label{key equality}
&\frac{2}{m}\sum_{l=1}^{m}\cos\left(l-\frac{1}{2}\right)\pi u \cos\left(l-\frac{1}{2}\right)\pi s \nonumber\\
&=\frac{1}{2m}\frac{\sin{m\pi\left(u+s\right)}}{\sin{\pi\left(u+s\right)/2}} + \frac{1}{2m}\frac{\sin{m\pi\left(u-s\right)}}{\sin{\pi\left(u-s\right)/2}}
\qquad u, s \in [0, 1].\end{aligned}$$
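The identity above can be checked numerically; the following illustrative Python snippet (assuming NumPy is available) evaluates both sides at an arbitrary pair $(u,s)$.

```python
# Numerical check of the identity (key equality) at an arbitrary pair (u, s).
import numpy as np

m = 7
u, s = 0.37, 0.81          # arbitrary points with u != s and u + s not an even integer
l = np.arange(1, m + 1)
lhs = (2.0 / m) * np.sum(np.cos((l - 0.5) * np.pi * u) * np.cos((l - 0.5) * np.pi * s))
rhs = (np.sin(m * np.pi * (u + s)) / (2 * m * np.sin(np.pi * (u + s) / 2))
       + np.sin(m * np.pi * (u - s)) / (2 * m * np.sin(np.pi * (u - s) / 2)))
print(lhs, rhs)            # the two values agree up to rounding error
```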
## Discussions for possible sampling schemes
In this section, we will discuss how the sampling scheme $\Pi$ and $\varphi \in \Pi^*$ should be chosen. As we will see, the condition $$\begin{aligned}
\label{sampligcond}
\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) g (s) \,\mathrm{d}s
\to \int_0^1 g(s) \, \mathrm{d}s \quad\text{as $ n \to \infty $ for any $ g \in C [0,1]$}\end{aligned}$$ is necessary in order to obtain $V^{j,j'}_{n, m_n} \to \int_0^1 \Sigma^{j,j'} (s) \,\mathrm{d}s$ in probability.
First, we consider the cases where $$\begin{aligned}
\label{MC}
\text{ $ m_n \to \infty$ as $ n \to \infty$}\end{aligned}$$ and $$\begin{aligned}
\label{MMC}
\text{$ \rho_n m_n \to 0 $ as $ n \to \infty$.}\end{aligned}$$
**Lemma 1**. *Under the conditions [\[rho\]](#rho){reference-type="eqref" reference="rho"}, [\[MC\]](#MC){reference-type="eqref" reference="MC"} and [\[MMC\]](#MMC){reference-type="eqref" reference="MMC"}, we have [\[sampligcond\]](#sampligcond){reference-type="eqref" reference="sampligcond"}.*
*Proof.* Put $$\begin{aligned}
% \begin{aligned}
\mathcal{D}_m (u,s)
&:= \frac{2}{m}\sum_{l=1}^{m}\cos\left(l-\frac{1}{2}\right)\pi u \cos\left(l-\frac{1}{2}\right)\pi s \label{D-exp1}\\
% &=\frac{1}{2m}\frac{\sin{2m\pi\left(u+s)\right)}}{\sin{\pi\left(u+s\right)}} + \frac{1}{2m}\frac{\sin{2m\pi\left(u-s\right)}}{\sin{\pi\left(u-s\right)}}
& = \frac{1}{m} \sum_{l=1}^{m}
\left( \cos \left(l-\frac{1}{2}\right)\pi (u+s)
+ \cos \left(l-\frac{1}{2}\right)\pi (u-s) \right)
\qquad u, s \in [0, 1] \nonumber
\end{aligned}$$ Then, on one hand, we have $$\begin{aligned}
\mathcal{D}_{m}^{j, j'}(u, s) = \mathcal{D}_m (\varphi^j(u),
\varphi^{j'}(s) ), \end{aligned}$$ and $$\begin{aligned}
\begin{aligned}
|\mathcal{D}_{m}^{j, j'}(u, s) -\mathcal{D}_m (u,s)| \leq 2m|\varphi^j (u)-u| + 2m|\varphi^{j'} (s) - s|,
\qquad u, s \in [0, 1],
\end{aligned}\end{aligned}$$ since it holds in general that $$\begin{aligned}
|\cos c x - \cos c y|\leq c|x-y| \end{aligned}$$ for a constant $c > 0$. Therefore, under the assumption [\[rho\]](#rho){reference-type="eqref" reference="rho"}, $$\begin{aligned}
\begin{aligned}
& \left|\int_0^1 (\mathcal{D}_{m_n}^{j, j'}(s, s) -\mathcal{D}_{m_n} (s,s) )g (s) \,\mathrm{d}s\right| \leq
4 \rho_n m_n \Vert g \Vert_{L^2}
\to 0
\end{aligned}\end{aligned}$$ as $n \to \infty$. On the other hand, since $$\begin{aligned}
\begin{aligned}
\mathcal{D}_{m}(s, s)
&= 1 + \frac{1}{2m}\frac{\sin (2m\pi s)}{\sin{(\pi s)}},
%\frac{1}{m} \sum_{l=1}^m \cos (2l-1) \pi s,
% &= 1+ \frac{1}{2m} \sum_{l=-m+1}^m e^{\sqrt{-1}(2l-1)\pi s}
\end{aligned}\end{aligned}$$ we have $$\begin{aligned}
\begin{aligned}
& \int_0^1 \mathcal{D}_{m}(s, s) g (s) \,\mathrm{d}s
-\int_0^1 g (s) \, \mathrm{d}s = \frac{1}{2m} \int_0^1 g (s)\frac{\sin (2m\pi s)}{\sin{(\pi s)}} \, \mathrm{d}s
\end{aligned}\end{aligned}$$ Since $$\begin{aligned}
\left| \frac{\sin (2m\pi s)}{\sin{(\pi s)}}\right| \leq
\begin{cases}
\frac{1}{2 \pi s}\,
& s \in (0, 1/2]
\\
\frac{1}{2\pi (1-s)} & s \in [1/2, 1)
\end{cases},\end{aligned}$$ by setting $A_\varepsilon := [0,\varepsilon) \cup (1-\varepsilon, 1]$, we have $$\begin{aligned}
\label{loge}
\begin{aligned}
& \int_{[0,1]\setminus A_\varepsilon} \left| \frac{\sin (2m\pi s)}{2m \sin{(\pi s)}}\right| \, \mathrm{d}s
\leq
\int_{[\varepsilon, 1/2]}\frac{1}{4m\pi s} \,
\mathrm{d}s +\int_{[1/2, 1-\varepsilon]} \frac{1}{4m\pi (1-s)} \,
\mathrm{d}s \\
& \leq \frac{1}{2m\pi} (\log \varepsilon^{-1} - \log 2)
\end{aligned}\end{aligned}$$ for arbitrary $\varepsilon \in (0, 1/4)$. Using [\[loge\]](#loge){reference-type="eqref" reference="loge"} and the bound $$\begin{aligned}
\left| \frac{\sin (2m\pi s)}{2m \sin{(\pi s)}}\right| = \left|
\frac{1}{m} \sum_{l=1}^m \cos (2l-1) \pi s \right| \leq 1 \end{aligned}$$ on $A_\varepsilon$, we obtain $$\begin{aligned}
\begin{aligned}
\left| \frac{1}{2m}\int_0^1 g (s)\frac{\sin (2m\pi s)}{\sin{(\pi s)}} \, \mathrm{d}s \right|
\leq \Vert g \Vert_\infty
\left(- \frac{\log (2\varepsilon)}{2m\pi}
+ 2 \varepsilon \right).
\end{aligned}\end{aligned}$$ In particular, by taking $\varepsilon = m^{-1}$ for $m > 4$, we see that, for $\alpha < 1$, $$\begin{aligned}
\label{qDir}
\begin{aligned}
& m^{\alpha}_n \left(\int_0^1 \mathcal{D}_{m}(s, s) g (s) \,\mathrm{d}s
-\int_0^1 g (s) \mathrm{d}s\right) \to 0 \quad
\text{as $ m_n, n \to \infty $. }
\end{aligned}\end{aligned}$$ Given the above two observations, the proof is complete since $$\begin{aligned}
\begin{aligned}
\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) g (s) \,\mathrm{d}s
= \int_0^1 \mathcal{D}_{m_n}(s, s) g (s) \,\mathrm{d}s
+ \int_0^1 (\mathcal{D}_{m_n}^{j, j'}(s, s) -\mathcal{D}_{m_n} (s,s) )g (s) \,\mathrm{d}s.
\end{aligned}\end{aligned}$$ ◻
To work on the "optimal rate\" (see [@ClGl A3], see also [@MRS Remark 3.2]) $$\label{H4}
0< \liminf_{m_n, n \to \infty} m_n\rho_n \leq \limsup_{m_n, n \to \infty} m_n\rho_n < \infty.$$ we need to assume [\[sampligcond\]](#sampligcond){reference-type="eqref" reference="sampligcond"} instead of proving. This is the strategy taken in [@ClGl]. Proposition [Proposition 2](#eq-sp){reference-type="ref" reference="eq-sp"} below justifies the strategy.
For an integer $l$, we denote by $[ l ]_n$ the remainder of its division by $n$, that is, $[l]_n \equiv l \pmod n$ with $0 \leq [l]_n < n$.
**Proposition 2**. *Let $t^j_k \equiv k/n$.*
*(i) Let $\varphi^j ([t^j_{k-1}, t^j_k)) = t^j_{k-1}$ for all $j$ and $k$ and assume that $m_n \to \infty$ and $[m_n]_n/m_n \to 0$ as $n \to \infty$. Then, the statement [\[sampligcond\]](#sampligcond){reference-type="eqref" reference="sampligcond"} holds true.*
*(ii) On the contrary, let $J \geq 2$, $\varphi^j ([t^j_{k-1}, t^j_k)) = t^j_{k-1}$ while $\varphi^{j'} ([t^{j'}_{k-1}, t^{j'}_k)) = t^{j'}_{k}$ for some $1 \leq j \neq j' \leq J$. Then, when $m_n = 2n$, $\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) \,\mathrm{d}s \equiv 0$, that is, [\[sampligcond\]](#sampligcond){reference-type="eqref" reference="sampligcond"} fails to be true.*
*Proof.* (i) First we note that in this case $$\begin{aligned}
\begin{aligned}
\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) g(s) \,\mathrm{d}s
= \int_0^1 g(s) \,\mathrm{d}s + \frac{1}{m_n } \sum_{l=1}^{m_n} \sum_{j=1}^{n} \cos \left( (2l-1) \pi \frac{j-1}{n}\right) \int_{(j-1)/n}^{j/n} g(s) \,\mathrm{d}s.
\end{aligned}\end{aligned}$$ By denoting $\zeta_{2n} = e^{\sqrt{-1}\pi/n}$, we see that $$\begin{aligned}
\begin{aligned}
& \sum_{l= cn+1}^{(c+1)n} \cos \left( (2l-1) \pi \frac{j-1}{n} \right)
= \frac{1}{2}\sum_{l= cn+1}^{(c+1)n} (\zeta_{2n}^{(2l-1)(j-1)} + \zeta_{2n}^{(2l-1)(2n-j+1)})= 0
\end{aligned}\end{aligned}$$ for $c \in \mathbf{N}$ and $j \neq 1$. Then, $$\begin{aligned}
\begin{aligned}
& \left| \frac{1}{m_n }\sum_{l=1}^{m_n} \sum_{j=1}^{n} \cos \left( (2l-1) \pi \frac{j-1}{n}\right) \int_{(j-1)/n}^{j/n} g(s) \,\mathrm{d}s\right| \\
& = \left|\frac{1}{m_n }\sum_{l=1}^{[m_n]_n} \sum_{j=1}^{n} \cos \left( (2l-1) \pi \frac{j-1}{n}\right) \int_{(j-1)/n}^{j/n} g(s) \,\mathrm{d}s + \frac{m_n- [m_n]_n}{m_n}\int_0^{1/n} g(s)\,\mathrm{d}s \right| \\
& \leq
\frac{1}{m_n } \sum_{j=1}^{n}
\sum_{l=1}^{[m_n]_n}
\left|\cos \left( (2l-1) \pi \frac{j-1}{n}\right)\right| \int_{(j-1)/n}^{j/n} |g(s)| \,\mathrm{d}s + \int_0^{1/n} |g(s)|\,\mathrm{d}s \\
&\leq \left( \frac{[m_n]_n}{m_n}
+ \frac{1}{n} \right)\Vert g \Vert_2,
\end{aligned}\end{aligned}$$ which converges to zero as $n \to \infty$ by the assumption.
(ii) In this case, with $\zeta_{4n} := e^{\sqrt{-1}\pi/(2n)}$, $$\begin{aligned}
\begin{aligned}
& \int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s)\,\mathrm{d}s \\
& = \frac{1}{n m_n} \sum_{j=1}^n\sum_{l=1}^{2n}
\left( \cos \left(l-\frac{1}{2}\right)\pi \left(\frac{2j-1}{n} \right)
+ \cos \left(l-\frac{1}{2}\right)\pi \left(\frac{1}{n}\right) \right) \\
&= \frac{1}{2n m_n} \sum_{j=1}^n\sum_{l=1}^{2n}
(\zeta_{4n}^{(2l-1)(2j-1)} + \zeta_{4n}^{(2l-1)(4n-2j+1)}
+ \zeta_{4n}^{(2l-1)} + \zeta_{4n}^{-(2l-1)}) =0.
\end{aligned}\end{aligned}$$ ◻
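Both parts of the proof rest on the same elementary geometric-sum fact, which we record here for convenience: for integers $k$ and $c \geq 0$, $$\begin{aligned}
\sum_{l=cn+1}^{(c+1)n} \zeta_{2n}^{(2l-1)k}
= \zeta_{2n}^{(2cn+1)k} \sum_{i=0}^{n-1} \big(\zeta_{2n}^{2k}\big)^{i}
= \zeta_{2n}^{(2cn+1)k}\, \frac{\zeta_{2n}^{2kn} - 1}{\zeta_{2n}^{2k} - 1} = 0
\quad \text{whenever } \zeta_{2n}^{2k} = e^{2 \sqrt{-1} \pi k / n} \neq 1,\end{aligned}$$ that is, whenever $n$ does not divide $k$. The same computation with $\zeta_{2n}$ replaced by $\zeta_{4n}$ (and blocks of length $2n$) is what makes the sums over $l$ in part (ii) vanish, since there the exponents are odd.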
## Discussions for the residues
Given the discussions in the previous subsection, the consistency of the estimator $V^{j,j'}_{n, m_n}$ is now reduced to the convergence (to zero) of the residue terms $$\begin{aligned}
\label{Fundecomp}
\begin{aligned}
& \left(\int_0^1 \int_0^s
+ \int_0^t\int_0^u \right) \mathcal{D}_{m_n}^{j, j'}(u, s)\, %{\rm d}X^j_u {\rm d}X^{j'}_s
{\rm d}X^j_u {\rm d}X^{j'}_s% + {\rm d}X^{j'}_u {\rm d}X^{j}_s)
\\
&= M_{m_n}^{j,j'}(1)
+ M_{m_n}^{j',j}(1) + I_{m_n}^{1,j,j'}(1) + I_{m_n}^{2,j,j'} (1) + I_{m_n}^{3,j,j'} (1) \\
&\hspace{1cm}+I_{m_n}^{1,j',j}(1) + I_{m_n}^{2,j',j} (1) + I_{m_n}^{3,j',j} (1)
\end{aligned}\end{aligned}$$ for $j, j' = 1, 2, \dots, J$, where $$\begin{aligned}
\begin{aligned}
M_{m}^{j,k} (t)&:= \int_0^t \left (\int_0^s \mathcal{D}_{m}^{j, k}(s, u) \sum_r \sigma_r^{k}(u) \,{\rm d}W_u^r \right) \sum_r \sigma_r^{j}(s)\,{\rm d}W_s^r,\\
I_{m}^{1,j,k} (t)
&=\int_0^t \left (\int_0^s \mathcal{D}_{m}^{j, k}(s,u) b^{k} (u) \,{\rm d}u \right) \sum_r \sigma_r^{j}(s)\,{\rm d}W_s^r, \\
I_{m}^{2,j,k} (t)
&= \int_0^t \left (\int_0^s \mathcal{D}_{m}^{j, k}(s, u) \sum_r \sigma_r^{k}(u)\, {\rm d}W_u^r \right) b^j(s) \, {\rm d}s,
\end{aligned}\end{aligned}$$ and $$\begin{aligned}
I_{m}^{3,j,k} (t)
&=\int_0^t \left (\int_0^s \mathcal{D}_{m}^{j, k}(s, u) b^{k}(u)\, {\rm d}u \right)
b^{j}(s) \,{\rm d}s.\end{aligned}$$ for $j, k =1, 2, \cdots, J$, and $t \in [0,1]$. We assume the following
**Assumption 3**. *(i) For $p \geq 1$, it holds that $$\begin{aligned}
\label{H1-1}
A_p:= %\Big\{
{\bf E}\Big[(\sup_{t\in[0,1]} \sum_j|b^j(t)|^2)^{p/2}\Big] + {\bf E}\Big[(\sup_{t\in[0,1]}
\sum_{r,j} |\sigma_r^{j} (t)|^2)^{p/2}\Big]
%\Big\}
< +\infty.
\end{aligned}$$ (ii) Each function $t \mapsto \sigma^j_r(t)$, $j=1, 2, \dots, J, \, r=1, 2, \dots, d$, is continuous on $[0,1]$ almost surely.*
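A simple sufficient condition may be worth recording: if the coefficients $b^j$ and $\sigma^j_r$ are deterministic, bounded and, in the case of $\sigma^j_r$, continuous on $[0,1]$, then Assumption [Assumption 3](#H1){reference-type="ref" reference="H1"} holds trivially, since then $$\begin{aligned}
A_p = \Big(\sup_{t\in[0,1]} \sum_j|b^j(t)|^2\Big)^{p/2} + \Big(\sup_{t\in[0,1]} \sum_{r,j} |\sigma_r^{j} (t)|^2\Big)^{p/2} < +\infty
\end{aligned}$$ for every $p \geq 1$. More generally, uniformly bounded $b$ and uniformly bounded, pathwise continuous $\sigma$ suffice.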
**Lemma 4**. *Under Assumption [Assumption 3](#H1){reference-type="ref" reference="H1"}, we have, as $m \to \infty$, $$\begin{aligned}
\label{estI1I3}
\mathbf{E} [| I_{m}^1 (t) |^2] + \mathbf{E} [| I_{m}^3 (t) |^2]
\leq O \left(\int_0^t \left(\int_0^s |\mathcal{D}_{m}^{j, j'}(s, u) |\,{\rm d}u \right)^2 \mathrm{d}s \right), \end{aligned}$$ and $$\begin{aligned}
\label{estMI2}
\mathbf{E} [| M_{m} (t) |^2] + \mathbf{E} [| I_{m}^2 (t) |^2]
\leq O \left(\int_0^t \int_0^s |\mathcal{D}_{m}^{j, j'}(s, u) |^2\,{\rm d}u \mathrm{d}s \right),\end{aligned}$$ where $O (\cdot)$ is Landau's big $O$. Here we omit the superscript $j, j'$ for clarity.*
*Proof.* By Itô's isometry, we have $$\begin{aligned}
\begin{aligned}
\mathbf{E} [| I_{m}^1 (t) |^2]
& \leq \mathbf{E} \left[ \int_0^t \sum_r |\sigma^j_r (s) |^2 \left(\int_0^s |\mathcal{D}_{m}^{j, j'}(s, u) ||b^{j'} (
u)| \, \mathrm{d}u \right)^2 \mathrm{d}s \right] \\
% &\leq \mathbf{E} \left[ \left(\int_0^t |b^{j'} (s)|\int_0^s |\mathcal{D}_{m_n}^{j, j'}(s, u) ||b^{j'} ( u)| \, \mathrm{d}u \right)^2 \right] \\
& \leq A_4 \left(\int_0^t \int_0^s |\mathcal{D}_{m}^{j, j'}(s, u) |\,{\rm d}u \mathrm{d}s\right)^2 ,
\end{aligned}\end{aligned}$$ and $$\begin{aligned}
\begin{aligned}
\mathbf{E} [| I_{m}^3 (t) |^2]
&\leq \mathbf{E} \left[ \left(\int_0^t |b^{j} (s)|\int_0^s |\mathcal{D}_{m}^{j, j'}(s, u) ||b^{j'} (
u)| \, \mathrm{d}u \mathrm{d}s \right)^2 \right] \\
& \leq A_4 \int_0^t \left(\int_0^s |\mathcal{D}_{m}^{j, j'}(s, u) |\,{\rm d}u \right)^2 \mathrm{d}s,
\end{aligned}\end{aligned}$$ while with the Cauchy--Schwarz and Burkholder--Davis--Gundy (henceforth BDG) inequalities, we also obtain $$\begin{aligned}
\begin{aligned}
& \mathbf{E} [| M_{m} (t) |^2] + \mathbf{E} [| I_{m}^2 (t) |^2] \\
& \leq \int_0^1 \mathbf{E} \left[ \left(\int_0^s \mathcal{D}_{m}^{j, j'}(s, u) \sum_r \sigma_r^{j'}(u) \,{\rm d}W_u^r \right)^4\right]^{1/2} \mathbf{E} \left[\left(\sum_r (\sigma_r^{j} (s))^2 + (b^{j} (s))^2
\right)^2 \right]^{1/2} \mathrm{d}s
\\
& \leq C_{4,\mathrm{BDG}} \int_0^1 \mathbf{E} \left[ \left(\int_0^s (\mathcal{D}_{m_n}^{j, j'}(s, u))^2 \sum_r (\sigma_r^{j'}(u))^2 \,{\rm d}u \right)^2 \right]^{1/2}
\sqrt{2} A_4^{1/2} \mathrm{d}s \\
% & \leq \sqrt{2} A_4 \int_0^1 \mathbf{E} \left[ (\sup_{u \in [0,1] }\sum_r (\sigma_r^j(u))^2 )^2\right]^{1/2} \int_0^s (\mathcal{D}_{m_n}^{j, j'}(s, u))^2 \,{\rm d}u \mathrm{d}s
& \leq C_{4,\mathrm{BDG}} \sqrt{2} A_4 \int_0^1 \int_0^s (\mathcal{D}_{m}^{j, j'}(s, u) )^2\,{\rm d}u \mathrm{d}s,
\end{aligned}\end{aligned}$$ where $C_{4,\mathrm{BDG}}$ is the universal constant appearing in the BDG inequality. ◻
## Statement and a proof
**Theorem 5** (Consistency of the estimator). *Assume [\[rho\]](#rho){reference-type="eqref" reference="rho"}, [\[MC\]](#MC){reference-type="eqref" reference="MC"} and [\[sampligcond\]](#sampligcond){reference-type="eqref" reference="sampligcond"}. Then, under Assumption [Assumption 3](#H1){reference-type="ref" reference="H1"}, for $j, j'=1, 2, \dots, J$, we have $$\begin{aligned}
V_{n,m_n}^{j,j'}
% &=\frac{a_{\Pi_{n}}}{m_n}\sum_{l=1}^{m_n} \left(\int_0^1\sqrt{2}\cos\Big(l-\frac{1}{2}\Big)\pi\varphi^j(s){\rm d}X_s^j\right)\nonumber\\ &\hspace{1cm}\times \left(\int_0^1 \sqrt{2}\cos\Big(l-\frac{1}{2}\Big)\pi\varphi^{j'}(s)\overline{h(\varphi^{j'}(s))} {\rm d}X_s^{j'}\right)\nonumber \\
&\to
\int_0^1 \Sigma^{j,j'} (s) \,\mathrm{d}s\end{aligned}$$ in probability as $n \to \infty$.*
*Proof.* The convergence to zero of the second term in [\[ItoF\]](#ItoF){reference-type="eqref" reference="ItoF"} follows from Lemma [Lemma 4](#A1){reference-type="ref" reference="A1"}: since $\mathcal{D}_{m_n}^{j,j'}(s,u) \to 0$ as $m_n \to \infty$ uniformly on every compact subset of $\{(s,u) \in (0,1)^2 \colon s \neq u\}$, and since $\mathcal{D}_{m_n}^{j,j'}(s,u)$ is uniformly bounded, the dominated convergence theorem implies $$\int_0^1 \int_0^s (\mathcal{D}_{m_n}^{j,j'}(s,u))^2{\rm d}u \mathrm{d}s \to 0$$ as $n \to \infty$. The convergence in $L^1(P)$ of the first term in [\[ItoF\]](#ItoF){reference-type="eqref" reference="ItoF"} to $\int_0^1 \Sigma^{j,j'} (s) \,\mathrm{d}s$ is implied by [\[sampligcond\]](#sampligcond){reference-type="eqref" reference="sampligcond"} and Assumption [Assumption 3](#H1){reference-type="ref" reference="H1"} (ii). ◻
# Asymptotic Normality {#sec:LTS}
## Discussions on the scale
We start with a heuristic argument for finding the proper scale $R_n$ such that $$\begin{aligned}
R_n \left( V_{n,m_n}^{j,j'}
-\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) \Sigma^{j,j'} (s) \, {\rm d}s\right),
\end{aligned}$$ converges stably in law to a (conditionally) Gaussian variable. Looking at the decomposition [\[Fundecomp\]](#Fundecomp){reference-type="eqref" reference="Fundecomp"}, we see that the quadratic covariation (process) of $M^{j,j'}_{m_n}$ and $M^{k,k'}_{m_n}$, $$\begin{aligned}
\label{QVofM}
\begin{aligned}
\langle M^{j,j'}_{m_n}, M^{k,k'}_{m_n} \rangle_t
=: \int_0^t \Big(\int_0^s
\mathcal{D}_{m_n}^{j, j'}(s,u)
\mathcal{D}_{m_n}^{k, k'}(s,u)
\Sigma^{j', k'} (u)
\,{\rm d}u\Big)
\Sigma^{j,k} (s)
\, {\rm d}s
+ \mathrm{Res}_t, \qquad t>0,
\end{aligned}\end{aligned}$$ especially its first term, is the main quantity to control.
The following is the first key to finding the scale.
**Proposition 6**. *Suppose that $\rho_n m_n^2 \to 0$, together with $m_n \to \infty$ as $n \to \infty$. Then, for any $g \in C([0,1]^2)$, $$\begin{aligned}
\begin{aligned}
m_n \int_0^1 \int_0^s
\mathcal{D}_{m_n}^{j, j'}(s,u)
\mathcal{D}_{m_n}^{k, k'}(s,u)
g(s,u) \, {\rm d}u
\,{\rm d}s
\to
\frac{1}{2}\int_0^1 g(s,s)
\, {\rm d}s
\end{aligned}\end{aligned}$$ as $n \to \infty$.*
A proof will be given in section [4.3](#PrPr){reference-type="ref" reference="PrPr"} in the Appendices. The choice $R_n = m_n^{1/2}$ is convincing once we establish the following.
**Lemma 7**. *Suppose that $$\begin{aligned}
\limsup_{n \to \infty} \rho_n m_n < \infty.
\end{aligned}$$ Then, for $j, j'=1, 2, \dots, J$ and $p>1$, there exists a positive constant $C_p>0$, only depending on the choice of $p>1$, such that $$\limsup_{m_n,n \to \infty}m_n \sup_{s \in [0,1]}
\int_0^1\big|\mathcal{D}_{m_n}^{j, j'}(u, s)\big|^p\,{\rm d}u \leq C_p.$$*
A proof will be given in section [4.1](#pf3.2){reference-type="ref" reference="pf3.2"}. The following is a direct consequence of Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"}, given the estimate of [\[estI1I3\]](#estI1I3){reference-type="eqref" reference="estI1I3"}.
**Corollary 8**. *Under the same assumptions as in Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"}, we have $$\begin{aligned}
m_n^{1/2}\, \mathbf{E} \big[|I^{1,j,j'}_{m_n}(1)| + |I^{3,j,j'}_{m_n}(1)|\big] \to 0\quad (n \to \infty);\end{aligned}$$ in particular, $m_n^{1/2}\big(I^{1,j,j'}_{m_n}(1) + I^{3,j,j'}_{m_n}(1)\big) \to 0$ in probability.*
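For the reader's convenience, here is a sketch of the deduction; it only combines [\[estI1I3\]](#estI1I3){reference-type="eqref" reference="estI1I3"} with Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"} for $p = 1$, whose bound, by the very same proof, also holds with the two arguments of $\mathcal{D}_{m_n}^{j,j'}$ interchanged. For some constant $C > 0$ independent of $n$, $$\begin{aligned}
\mathbf{E} \big[|I^{1,j,j'}_{m_n}(1)|\big] + \mathbf{E} \big[|I^{3,j,j'}_{m_n}(1)|\big]
\leq C \Big( \int_0^1 \Big( \int_0^s \big|\mathcal{D}_{m_n}^{j, j'}(s, u)\big| \,{\rm d}u \Big)^2 \mathrm{d}s \Big)^{1/2}
\leq C \sup_{s \in [0,1]} \int_0^1 \big|\mathcal{D}_{m_n}^{j, j'}(s, u)\big| \,{\rm d}u
= O(m_n^{-1}),\end{aligned}$$ so that $m_n^{1/2}\, \mathbf{E} \big[|I^{1,j,j'}_{m_n}(1)| + |I^{3,j,j'}_{m_n}(1)|\big] = O(m_n^{-1/2}) \to 0$.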
## Discussions on the sampling scheme, continued
The assumption in Proposition [Proposition 6](#PRSS){reference-type="ref" reference="PRSS"} is too demanding. Following the strategy of [@ClGl] once more, we assume the required convergence instead of proving it.
**Assumption 9**. *There exist integrable functions $\gamma^{j,j',k,k'}$ on $[0,1]$ such that, for every $t \in [0, 1]$, $$\begin{aligned}
\label{SS-AS}
& m_n \int_0^t \int_0^s
\mathcal{D}_{m_n}^{j, j'}(s,u)
\mathcal{D}_{m_n}^{k, k'}(s,u) \,{\rm d}u\,{\rm d}s
\to \int_0^t \gamma^{j,j',k,k'} (s) \,{\rm d}s
\end{aligned}$$ as $m_n, n \to \infty$.*
The condition [\[SS-AS\]](#SS-AS){reference-type="eqref" reference="SS-AS"} is easier to check than the following, seemingly stronger, convergence, which it in fact implies.
**Lemma 10**. *Assume [\[rho\]](#rho){reference-type="eqref" reference="rho"} and [\[SS-AS\]](#SS-AS){reference-type="eqref" reference="SS-AS"}. Then, for any $t \in [0, 1]$ and any continuous function $g:[0,1]^2 \to \mathbb{R}$, the following convergence holds as $m_n, n \to \infty$: $$\label{convfan}
\begin{aligned}
& m_n \int_0^t \int_0^s
\mathcal{D}_{m_n}^{j, j'}(s,u)
\mathcal{D}_{m_n}^{k, k'}(s,u) g(s,u)\,{\rm d}u\,{\rm d}s
\to \int_0^t \gamma^{j,j',k,k'} (s) g(s,s)\,{\rm d}s.
\end{aligned}$$*
*Proof.* For $j, j'=1, 2, \dots, J$, $s \in [0,1)$ and $\varepsilon>0$, it holds that $$\label{keylm3+}
m_n \int_0^{s- \varepsilon}
| \mathcal{D}_{m_n}^{j, j'}(u, s)
\mathcal{D}_{m_n}^{k, k'}(u, s)|\,{\rm d}u
\to 0 \quad \text{as }m_n, n \to \infty.$$ By the expression [\[new Dirichlet1\]](#new Dirichlet1){reference-type="eqref" reference="new Dirichlet1"}, we see that, for sufficiently large $n$ and $u<s-\varepsilon$, it holds that $|\mathcal{D}_{m_n}^{j,j'} (u,s)| \leq C_\varepsilon m_n^{-1}$, where the constant $C_\varepsilon$ only depends on $\varepsilon$, from which the statement [\[keylm3+\]](#keylm3+){reference-type="eqref" reference="keylm3+"} immediately follows. That [\[keylm3+\]](#keylm3+){reference-type="eqref" reference="keylm3+"} implies [\[convfan\]](#convfan){reference-type="eqref" reference="convfan"} is also immediate. ◻
Our strategy of working with Assumption [Assumption 9](#SSfAN){reference-type="ref" reference="SSfAN"} may be justified by the following convincing examples. Let us consider the case where $$\begin{aligned}
\Pi = \left\{ \left(\frac{k}{n}, \frac{k}{n}, \cdots, \frac{k}{n} \right) \in [0,1]^J: k=0,1,\dots, n \right\},\end{aligned}$$ and for all $j$, $\varphi^j \equiv \varphi$ for some $\varphi$, that is, a synchronous sampling case. In this case, $$\begin{aligned}
\begin{aligned}
& \int_0^t \int_0^s \mathcal{D}_{m_n}^{i, i'}(u, s)
\mathcal{D}_{m_n}^{j, j'}(u, s)\, {\rm d}u\,{\rm d}s \\
&= \int_0^{[nt]/n} \left(\int_0^{[ns]/n} \left|\mathcal{D}_{m_n}\left( \varphi (u), \varphi(s) \right) \right|^2 {\rm d}u + \left(s- \frac{[ns]}{n}\right)\left|\mathcal{D}_{m_n}\left(\varphi (s), \varphi(s)\right) \right|^2\right)\,{\rm d}s \\
&+ \left(t- \frac{[nt]}{n}\right)\left(\int_0^{[nt]/n} \left|\mathcal{D}_{m_n} \left(\varphi (u), \varphi(t)\right) \right|^2 {\rm d}u\right)+ \left|\mathcal{D}_{m_n} \left(\varphi (t), \varphi(t)\right)\right|^2 \int_{[nt]/n}^t \left(s- \frac{[nt]}{n}\right)\,{\rm d}s.
\end{aligned} \end{aligned}$$
**Example 11**. *We first consider the example of sampling studied in [@ClGl]. Let $$\begin{aligned}
\varphi \left( \left[\frac{k-1}{n},\frac{k}{n} \right) \right)
= \frac{k-1}{n}, \quad k=1, \cdots, n,\end{aligned}$$ and $$\begin{aligned}
\rho_n m_n = \frac{m_n}{n} = a \in \mathbf{N}.\end{aligned}$$ Then, since $$\begin{aligned}
\begin{aligned}
& \mathcal{D}_{m_n} \left(\frac{k}{n}, \frac{k'}{n} \right)
= \mathcal{D}_{m_n} \left(\frac{ak}{m_n}, \frac{a k'}{m_n} \right)
= \frac{1}{2m}\frac{\sin\left(a\pi\left(k+k'\right)\right)}{\sin\left(\pi\left(k+k'\right)/(2n)\right)} + \frac{1}{2m}\frac{\sin\left(a\pi\left(k-k'\right)\right)}{\sin\left(\pi\left(k-k'\right)/(2n)\right)} \\
&= \begin{cases}
0 & k\ne k', \\
1& k = k'
\end{cases},
\end{aligned}\end{aligned}$$ we have $$\begin{aligned}
\begin{aligned}
& \int_0^t \int_0^s \mathcal{D}_{m_n}^{i, i'}(u, s)
\mathcal{D}_{m_n}^{j, j'}(u, s)\, {\rm d}u\,{\rm d}s =
\frac{t}{2n}.
\end{aligned}\end{aligned}$$ Thus, this example satisfies Assumption [Assumption 9](#SSfAN){reference-type="ref" reference="SSfAN"} with $\gamma (s) \equiv a/2$.*
**Example 12**. *Let us consider the case [\[KSSS\]](#KSSS){reference-type="eqref" reference="KSSS"}; $$\begin{aligned}
\varphi^j \left( \left[\frac{k-1}{n}, \frac{k}{n}\right) \right) = \frac{2k-1}{2n+1},\end{aligned}$$ and $$\begin{aligned}
\frac{2m}{2n+1} = a \in \mathbf{N}. \end{aligned}$$ In this case, since $$\begin{aligned}
\begin{aligned}
& \mathcal{D}_{m_n} \left(\frac{2k-1}{2n+1}, \frac{2k'-1}{2n+1} \right)
= \mathcal{D}_{m_n} \left(\frac{a(2k-1)}{2m_n}, \frac{a (2k'-1)}{2m_n} \right) \\
& = \frac{1}{2m}\frac{\sin\left(a\pi\left(k+k'-1\right)\right)}{\sin\left(\pi(k+k'-1)/(2n+1)\right)} + \frac{1}{2m}\frac{\sin\left(a\pi\left(k-k'\right)\right)}{\sin\left(\pi\left(k-k'\right)/(2n+1)\right)}
= \begin{cases}
0 & k\ne k' \\
1 & k=k',
\end{cases}
\end{aligned}\end{aligned}$$ we have $$\begin{aligned}
\begin{aligned}
& \int_0^t \int_0^s \mathcal{D}_{m_n}^{i, i'}(u, s)
\mathcal{D}_{m_n}^{j, j'}(u, s)\, {\rm d}u\,{\rm d}s = \frac{t}{2n}.
\end{aligned}\end{aligned}$$ Thus, it satisfies Assumption [Assumption 9](#SSfAN){reference-type="ref" reference="SSfAN"} with $\gamma (s) \equiv a/2$.*
## More on the estimates on the residues; we may need a bit of Malliavin calculus
Contrary to the case of $I^1$ and $I^3$, the combination of the estimate [\[estMI2\]](#estMI2){reference-type="eqref" reference="estMI2"} and Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"} is insufficient to prove the convergence $m_n^{1/2} \mathbf{E} [|I^2|] \to 0$. Instead of the standard "BDG approach\" taken in the proof of Lemma [Lemma 4](#A1){reference-type="ref" reference="A1"}, we resort to a bit of Malliavin calculus, its integration by parts (IBP for short) formula to be precise, which is the approach taken in [@ClGl][^6]. Specifically, to estimate $\mathbf{E}[|I^2_{m_n}|^2]$, we use the IBP instead of the Schwarz inequality to get $$\begin{aligned}
\label{IBP-1}
\begin{aligned}
\mathbf{E}[|I^{2,j,k}_{m_n}|^2]
&= \int_{[0,1]^2}
\mathbf{E} \left[\left (\int_0^s \mathcal{D}_{m_n}^{j, k}(s, u) \sum_r \sigma_r^{k}(u)\, {\rm d}W_u^r \right)
L_{m_n}^{j,k}(s',s')
%L(s') %,s')
%(\int_0^{s'} \mathcal{D}_{m_n}^{j, k}(s', u) \sum_r \sigma_r^{k}(u)\, {\rm d}W_u^r \right)
b^j(s) b^j(s') \right]\mathrm{d}s
\mathrm{d}s' \\
&= \int_{[0,1]^2}
\mathbf{E} \left[ \int_0^s \mathcal{D}_{m_n}^{j, k}(s, u) \sum_r \sigma_r^{k}(u)\,
%{\rm d}W_u^r \right)
\nabla_{u,r} ( L_{m_n}^{j,k}(s',s')
%L(s')%,s')
b^j(s) b^j(s') ) \mathrm{d}u \right]\mathrm{d}s
\mathrm{d}s',
\end{aligned}\end{aligned}$$ where $$\begin{aligned}
%\label{L_{m_n}^{j,j'}}
L_{m_n}^{j,j'}(t,s):=\int_0^s \mathcal{D}_{m_n}^{j, j'}(t, u)
\sum_r \sigma_r^{j'}(u) \,{\rm d}W_u^r, \qquad s,t \in [0, 1],\end{aligned}$$ and $\nabla_{u,r}$ denotes[^7] the Malliavin--Shigekawa derivative in the direction of "$\mathrm{d} W_u^r$\". The merit of the expression on the right-hand side of [\[IBP-1\]](#IBP-1){reference-type="eqref" reference="IBP-1"} is that we obtain an estimate with $\int |\mathcal{D}|$ instead of $\int |\mathcal{D}|^2$, though we need to further assume some differentiability and integrability of $b$ and $\sigma$[^8]:
**Assumption 13**. *(i) For any $p>1$, $\sigma^j_r \in \mathbb{D}^{1,p}$, $j=1, 2, \dots, J$, $r=1, 2, \dots, d$, and it holds that $$\label{H3}
{\bf E}\Big[
(\sup_{s,t\in[0,1]}\sum_{r,r',j}| \nabla_{s,r'} \sigma_r^j(t)|^2)^{p/2} \Big] < +\infty,$$ where $\mathbb{D}^{1, p}$ stands for the domain of the Malliavin derivative $\nabla$ in $L^p(\Omega)$ (see [@ND] for details).*
*(ii) For any $p>1$ and $j=1, 2, \dots, J$, $b^j \in \mathbb{D}^{1,p}$ and it holds that $$\label{H3b}
{\bf E}\Big[
(\sup_{s,t\in[0,1]}\sum_{r,j}| \nabla_{s,r} b^j(t)|^2)^{p/2} \Big] < +\infty.$$*
**Lemma 14**. *Under Assumptions [Assumption 3](#H1){reference-type="ref" reference="H1"} and [Assumption 13](#MalDer){reference-type="ref" reference="MalDer"}, we have $$\begin{aligned}
m_n \mathbf{E} [| I_{m_n}^2 (t) |^2] \to 0 \quad (m_n,n \to \infty). \end{aligned}$$*
*Proof.* We start with [\[IBP-1\]](#IBP-1){reference-type="eqref" reference="IBP-1"}. We can go further: $$\begin{aligned}
\begin{aligned}
\mathbf{E}[|I^{2,j,k}_{m_n}|^2] = \int_{[0,1]^2}
\int_0^s \mathcal{D}_{m_n}^{j, k}(s, u)
\mathbf{E}\left[ \sum_r
\sigma_r^{k}(u)\Phi_r(u) \right]
\mathrm{d}u \mathrm{d}s
\mathrm{d}s',
\end{aligned}\end{aligned}$$ where $$\begin{aligned}
\begin{aligned}
\Phi_r(u)
&:= \nabla_{u,r} \big( L_{m_n}^{j,k}(s',s')\, b^j(s)\, b^j(s') \big) \\
&= 1_{\{u\leq s'\}} \left( \mathcal{D}_{m_n}^{j, k}(s', u) \sigma^k_r(u)
+ \int_0^{s'}\mathcal{D}_{m_n}^{j, k}(s', u') \sum_{r'} \nabla_{u,r} \sigma^k_{r'} (u') \mathrm{d}W^{r'}_{u'} \right) b^j(s)\, b^j(s') + L_{m_n}^{j,k}(s',s')\, \nabla_{u,r}\big( b^j(s)\, b^j(s') \big).
\end{aligned}\end{aligned}$$ Since, by the BDG inequality and Assumptions [Assumption 3](#H1){reference-type="ref" reference="H1"} and [Assumption 13](#MalDer){reference-type="ref" reference="MalDer"}, it holds that $$\begin{aligned}
\begin{aligned}
& (\mathbf{E}[|L_{m_n}^{j,k}(s',s')|^p])^{2/p} +
\left(\mathbf{E}[| \int_0^{s'}\mathcal{D}_{m_n}^{j, k}(s', u') \sum_{r'} \nabla_{u,r} \sigma^k_{r'} (u') \mathrm{d}W^{r'}_{u'}|^p] \right)^{2/p} \\
& \hspace{2cm} = O \left( \int_0^{s'} |\mathcal{D}_{m_n}^{j, k}(s', u)|^2 \mathrm{d}u
\right), \qquad p>1,
\end{aligned}\end{aligned}$$ we have $$\begin{aligned}
\begin{aligned}
\mathbf{E}[|I^{2,j,k}_{m_n}|^2]
&= O\bigg( \int_{[0,1]^2}\int_0^{s\wedge s'}
|\mathcal{D}_{m_n}^{j, k}(s, u) |
|\mathcal{D}_{m_n}^{j, k}(s', u)|
\mathrm{d}u \mathrm{d}s
\mathrm{d}s'\\
&+ \int_{[0,1]^2}\int_0^{s}
|\mathcal{D}_{m_n}^{j, k}(s, u) |
\mathrm{d}u
\left( \int_0^{s'} |\mathcal{D}_{m_n}^{j, k}(s', u')|^2
\mathrm{d}u'\right)^{1/2} \mathrm{d}s \, \mathrm{d}s'\bigg),
\end{aligned}\end{aligned}$$ which is seen to be $o (m_n^{-1})$ by Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"}. ◻
## Statement and a proof
The error distribution is obtained as follows.
**Theorem 15** (Asymptotic normality). *Under Assumptions [Assumption 3](#H1){reference-type="ref" reference="H1"}, [Assumption 9](#SSfAN){reference-type="ref" reference="SSfAN"}, and [Assumption 13](#MalDer){reference-type="ref" reference="MalDer"}, for $j, j'=1, 2, \dots, J$, the sequence of random variables $$\begin{aligned}
m_n^{1/2}\left( V_{n,m_n}^{j,j'}
-\int_0^1 \mathcal{D}_{m_n}^{j, j'}(s, s) \Sigma^{j,j'} (s) \, {\rm d}s\right),
\end{aligned}$$ converges to $$\begin{aligned}
\int_0^1
\sqrt{\big(\gamma^{j,j',j,j'} (s) + \gamma^{j',j,j',j}(s)\big)
\Sigma^{j,j}(s) \Sigma^{j',j'}(s)
+ 2 \gamma^{j,j',j',j}(s) \big(\Sigma^{j,j'} (s)\big)^2 }
\, {\rm d}B_s
\end{aligned}$$ stably in law as $m_n,n \to \infty$, where $B=(B_t)_{t \in [0,\, 1]}$ is a one-dimensional Brownian motion independent of $(W_t)_{t \in [0, 1]}$.*
*Proof.* Given Corollary [Corollary 8](#cor3){reference-type="ref" reference="cor3"}, Lemma [Lemma 14](#cor4){reference-type="ref" reference="cor4"}, and Assumption [Assumption 9](#SSfAN){reference-type="ref" reference="SSfAN"}, it suffices to show that $m_n \mathbf{E}[ |\mathrm{Res}_t |] \to 0$ and $$\begin{aligned}
&{\bf E}\Big[\langle m_n^{1/2} M^{j,j'}_{m_n},W^r \rangle^2_t\Big]\nonumber\\
&=m_n \int_{[0,t]^2} {\bf E}
\Big[L_{m_n}^{j,j'}(s,s)L_{m_n}^{j,j'}(s',s')\sigma_r^{j}(s)\sigma_r^{j}(s')\Big]
{\rm d}s \,{\rm d}s' \to 0 \label{new notaion}
\end{aligned}$$ as $m_n, n \to \infty$ (by Jacod's theorem [@JJ], see also [@JP]); the proof is essentially the same as the one in [@ClGl], and so we omit it. ◻
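For orientation, let us indicate where the variance in Theorem [Theorem 15](#CLT with no micro){reference-type="ref" reference="CLT with no micro"} comes from; this merely rephrases [\[QVofM\]](#QVofM){reference-type="eqref" reference="QVofM"} and [\[convfan\]](#convfan){reference-type="eqref" reference="convfan"} and is not a substitute for the omitted argument. Neglecting the residue terms, applying [\[convfan\]](#convfan){reference-type="eqref" reference="convfan"} to the main term of [\[QVofM\]](#QVofM){reference-type="eqref" reference="QVofM"} yields $$\begin{aligned}
m_n \big\langle M^{j,j'}_{m_n} + M^{j',j}_{m_n} \big\rangle_t
\to \int_0^t \Big[ \big(\gamma^{j,j',j,j'}(s) + \gamma^{j',j,j',j}(s)\big) \Sigma^{j,j}(s)\Sigma^{j',j'}(s) + 2 \gamma^{j,j',j',j}(s) \big(\Sigma^{j,j'}(s)\big)^2 \Big] \, {\rm d}s,\end{aligned}$$ which is exactly the integrand appearing under the square root in the statement.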
# References
Clément, E. and Gloter, A.: *Limit theorems in the Fourier transform method for the estimation of multivariate volatility*, Stochastic Process. Appl. **121** (2011), 1097--1124.
Jacod, J.: *On continuous conditional gaussian martingales and stable convergence in law*, Seminaire de Probabilites, XXXI, Vol. 1655 of Lecture Notes in Math., Springer, Berlin **31** (1997), 232--246.
Jacod, J. and Protter, P.: *Asymptotic error distributions for the Euler method for stochastic differential equations*, Annals of Probability, Vol.26,**1** (1998), 267--307.
Kunitomo, N. and Sato, S.: *Separating information maximum likelihood estimation of realized volatility and covariance with micro-market noise*, Discussion Paper CIRJE-F-581, Graduate School of Economics, University of Tokyo, 2008.
Kunitomo, N. and Sato, S.: *Realized Volatility, Covariance and Hedging Coefficient of Nikkei 225 Futures with Micro-Market Noise*, Discussion Paper CIRJE-F-601, Graduate School of Economics, University of Tokyo, 2008.
Kunitomo, N. and Sato, S.: *Robustness of the separating information maximum likelihood estimation of realized volatility with micro-market noise*, CIRJE Discussion Paper F-733, University of Tokyo, 2010.
Kunitomo, N. and Sato, S.: *The SIML estimation of realized volatility of the Nikkei-225 futures and hedging coefficient with micro-market noise*, Math. Comput. Simulation **81** (2011), 1272--1289.
Kunitomo, N. and Sato, S.: *Separating information maximum likelihood estimation of realized volatility and covariance with micro-market noise*, North American Journal of Economics and Finance **26** (2013), 282--309.
Kunitomo, N., Sato, S. and Kurisu, D.: *Separating Information Maximum Likelihood Method for High-Frequency Financial Data*, Springer Briefs in Statistics, JSS Research Series in Statistics, Springer, Tokyo, 2018.
Malliavin, P. and Mancino, M. E.: *Fourier series method for measurement of multivariate volatilities*, Finance Stoch. **6** (2002), 49--61.
Malliavin, P. and Mancino, M. E.: *A Fourier transform method for nonparametric estimation of multivariate volatility*, Ann. Statist. **37** (2009), 1983--2010.
Mancino, M. E., Recchioni, M. C. and Sanfelici, S.: *Fourier--Malliavin Volatility Estimation, Theory and Practice*, Springer Briefs in Quantitative Finance, Springer, Cham, 2017.
Mancino, M.E. and Sanfelici, S.: *Robustness of Fourier estimator of integrated volatility in the presence of microstructure noise*, Comput. Statist. Data Anal. **52** (2008), 2966--2989.
Mancino, M.E. and Sanfelici, S.: *Estimation of quarticity with high-frequency data*, Quant. Finance **12** (2012), 607--622.
Nualart, D.: *The Malliavin Calculus and Related Topics*, second edition, Probability and its Applications (New York), Springer-Verlag, Berlin, 2006.
# Appendices
**Lemma 16**. *Assume $\limsup_{n \to \infty} \rho_n m_n < \infty$. Then, for $j, j'=1, 2, \dots, J$ and $p>1$, there exists a positive constant $C_p>0$, only depending on the choice of $p>1$, such that $$\limsup_{m_n,n \to \infty}m_n \sup_{s \in [0,1]}
\int_0^1\big|\mathcal{D}_{m_n}^{j, j'}(u, s)\big|^p\,{\rm d}u \leq C_p.$$*
## A proof of Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"} {#pf3.2}
Let $$\begin{aligned}
D_m (x) := \frac{1}{2m} \frac{\sin m \pi x}{\sin \frac{\pi}{2}x}.\end{aligned}$$ By extending $\varphi^j$ and $\varphi^{j'}$ periodically, $\mathcal{D}^{j,j'}_m (u,s) = D_m(\varphi^j(u) + \varphi^{j'} (s)) + D_m(\varphi^j(u) - \varphi^{j'}(s))$ is periodic in both $u$ and $s$ with the period $2$, and therefore $$\begin{aligned}
& \sup_{s \in [0,1]}
\int_0^1\big|\mathcal{D}_{m_n}^{j, j'}(u, s)\big|^p\,{\rm d}u \\
& \leq \sup_{c \in \mathbf{R}}
\left( \int_{c-1}^{c+1} \big|D_{m_n}(\varphi^j(u)-c)\big|^p\,{\rm d}u +
\int_{-c-1}^{-c+1}\big|D_{m_n}(\varphi^j(u)+c)\big|^p\,{\rm d}u \right).\end{aligned}$$ Therefore, it is sufficient to show that $$\label{bound1}
% \limsup_{m_n,n \to \infty}m^{-1}_{n}J^{-}_n(u)\equiv
\limsup_{m_n,n \to \infty}m_{n}\sup_{c \in \mathbb{R}}
\int_{c-1}^{c+1}
%\int_{\left(c-\frac{1}{2}\right)\vee 0}^{\left(c+\frac{1}{2}\right)\wedge 1}
\big|D_{m_n}(\varphi^j(u)-c)\big|^p {\rm d}u < \infty.$$ Put $a:=\limsup_{m_n, n \to \infty} m_n \rho_n$ which is in $[ 0, \infty)$ by the assumption, and let $$\begin{aligned}
J_n^{1}(c):=
\int_{c-1}^{c+1}
% {\left(c-\frac{1}{2}\right)\vee 0}^{\left(c+\frac{1}{2}\right)\wedge 1}
\bm{1}_{\{|u-c|>\frac{2a+2}{m_n}\}}\big|D_{m_n} ( \varphi^j(u)-c)\big|^p {\rm d}u
\end{aligned}$$ and $$\begin{aligned}
J_n^{2}(c):= \int_{c-1}^{c+1}
%{\left(c-\frac{1}{2}\right)\vee 0}^{\left(c+\frac{1}{2}\right)\wedge 1}
\bm{1}_{
\{|u-c|\leq\frac{2a+2}{m_n}\}}\big|D_{m_n} ( \varphi^j(u)-c)\big|^p {\rm d}u.\end{aligned}$$ For $m_n$ and $n$ large enough, we see that $$\begin{aligned}
\label{mn2}
\begin{aligned}
%& \frac{2a+2}{m_n} <
|u-c| &=\big|\varphi^{j}(u)-c-(\varphi^{j}(u)-u)|
%\\&
\leq\big|\varphi^{j}(u)-c\big|+
\big|\varphi^{j}(u)-u\big|\\
&\leq\big|\varphi^{j}(u)-c\big|+\rho_n.
\end{aligned}
\end{aligned}$$ Therefore we obtain $$\begin{aligned}
\label{2pa}
\begin{aligned}
|u-c|>\frac{2a+2}{m_n} &\Rightarrow \big|\varphi^{j}(u)-c\big|
\geq\frac{2}{m_n}+\frac{2a-m_n\rho_n}{m_n}>\frac{2+a}{m_n} \\
& \Rightarrow \frac{1}{2 m_n \big|\varphi^{j}(u)-c\big|} \leq \frac{1}{2} \frac{1}{2+a} < 1.
\end{aligned}\end{aligned}$$ Since $$\label{Dirbound}
\big|D_{m_n}(x)\big| \leq 1 \wedge \frac{1}{2m_n |x|}$$ for $0 < |x| \leq 1$ (and $\big|D_{m_n}(x)\big| \leq 1$ for all $x \in \mathbf{R}$), it follows from [\[2pa\]](#2pa){reference-type="eqref" reference="2pa"} that $$\label{bound3}
\begin{split}
J_n^{1}(c) \leq
\int_{c-1}^{c+1}
%_{\left(c-\frac{1}{2}\right)\vee 0}^{\left(c+\frac{1}{2}\right)\wedge 1}
\bm{1}_{\{|u-c|>\frac{2a+2}{m_n}\}}\left|\frac{1}{2m_n(\varphi^j(u)-c)}\right|^p {\rm d}u.
\end{split}$$
Since [\[mn2\]](#mn2){reference-type="eqref" reference="mn2"} implies, for sufficiently large $n$, $|u-c|>\frac{2a+2}{m_n} \Rightarrow\left|\varphi^{j}(u)-c\right|\geq|u-c|-\frac{2a}{m_n}$, one has $$\begin{aligned}
\begin{aligned}
&\int_{c-1}^{c+1}
\bm{1}_{\{|u-c|>\frac{2a+2}{m_n}\}}\left|\frac{1}{2m_n(\varphi^j(u)-c)}\right|^p {\rm d}u \\
&\leq\int_{c-1}^{c+1}
\bm{1}_{\{|u-c|>\frac{2a+2}{m_n}\}}\left|\frac{1}{2m_n\left(|u-c|-\frac{2a}{m_n}\right)}\right|^p {\rm d}u,\\
&= \int_{c-1}^{c+1}
\bm{1}_{\{\frac{m_n}{2}|u-c|-a>1\}}\frac{1}{\left|4\left(\frac{m_n}{2}|u-c|-a\right)\right|^p} \,{\rm d}u, \\
\end{aligned}\end{aligned}$$ (by changing variables with $w = \frac{m_n}{2} |u-c|-a$) $$\begin{aligned}
\label{bound4}
\begin{aligned}
&= 4 \int_{-a}^{\frac{m_n}{2}-a} 1_{\{w > 1\}}
\frac{1}{4^p m_n} w^{-p} \,{\rm d}w \leq\left(\frac{1}{4}\right)^p\frac{4}{m_n}
\int_1^{\infty}\omega^{-p}{\rm d}\omega.
\end{aligned}
\end{aligned}$$ This establishes [\[bound1\]](#bound1){reference-type="eqref" reference="bound1"} since clearly one has, by [\[Dirbound\]](#Dirbound){reference-type="eqref" reference="Dirbound"}, $$\begin{aligned}
\begin{aligned}
J_n^2(c) \leq \int_{c-1}^{c+1}
%_{\left(c-\frac{1}{2}\right)\vee 0}^{\left(c+\frac{1}{2}\right)\wedge 1}
\bm{1}_{\{|u-c|\leq\frac{2a+2}{m_n}\}} {\rm d}u
\leq \frac{4a+4}{m_n}.
\end{aligned}\end{aligned}$$ ◻
## Proof of Lemma [Lemma 4](#A1){reference-type="ref" reference="A1"} {#proof-of-lemma-a1}
Let us recall that in the decomposition [\[Fundecomp\]](#Fundecomp){reference-type="eqref" reference="Fundecomp"} we have put $$\begin{aligned}
\begin{aligned}
M_{m_n}(t)&= \int_0^t \left (\int_0^s \mathcal{D}_{m_n}^{j, j'}(s, u) \sum_r \sigma_r^{j'}(u) \,{\rm d}W_u^r \right) \sum_r \sigma_r^{j}(s)\,{\rm d}W_s^r,\\
I_{m_n}^1(t)
&=\int_0^t \left (\int_0^s \mathcal{D}_{m_n}^{j, j'}(s,u) b^{j'}(u) \,{\rm d}u \right) \sum_r \sigma_r^{j}(s)\,{\rm d}W_s^r, \\
I_{m_n}^2(t)
&= \int_0^t \left (\int_0^s \mathcal{D}_{m_n}^{j, j'}(s, u) \sum_r \sigma_r^{j'}(u)\, {\rm d}W_u^r \right) b^j(s) \, {\rm d}s,
\end{aligned}\end{aligned}$$ and $$\begin{aligned}
I_{m_n}^3(t)
&=\int_0^t \left (\int_0^s \mathcal{D}_{m_n}^{j, j'}(s, u) b^{j'}(u)\, {\rm d}u \right)
b^{j}(s) \,{\rm d}s.\end{aligned}$$ Here we omit the superscript $j, j'$ for clarity.
By Itô's isometry, we have $$\begin{aligned}
\begin{aligned}
\mathbf{E} [| I_{m_n}^1 (t) |^2]
& \leq \mathbf{E} \left[ \int_0^t \sum_r |\sigma^j_r (s) |^2 \left(\int_0^s |\mathcal{D}_{m_n}^{j, j'}(s, u) ||b^{j'} (
u)| \, \mathrm{d}u \right)^2 \mathrm{d}s \right] \\
% &\leq \mathbf{E} \left[ \left(\int_0^t |b^{j'} (s)|\int_0^s |\mathcal{D}_{m_n}^{j, j'}(s, u) ||b^{j'} ( u)| \, \mathrm{d}u \right)^2 \right] \\
& \leq A_4 \left(\int_0^t \int_0^s |\mathcal{D}_{m_n}^{j, j'}(s, u) |\,{\rm d}u \mathrm{d}s\right)^2 ,
\end{aligned}\end{aligned}$$ and $$\begin{aligned}
\begin{aligned}
\mathbf{E} [| I_{m_n}^3 (t) |^2]
&\leq \mathbf{E} \left[ \left(\int_0^t |b^{j} (s)|\int_0^s |\mathcal{D}_{m_n}^{j, j'}(s, u) ||b^{j'} (
u)| \, \mathrm{d}u \mathrm{d}s \right)^2 \right] \\
& \leq A_4 \int_0^t \left(\int_0^s |\mathcal{D}_{m_n}^{j, j'}(s, u) |\,{\rm d}u \right)^2 \mathrm{d}s,
\end{aligned}\end{aligned}$$ while with the Cauchy--Schwarz and Burkholder--Davis--Gundy inequalities, we also obtain $$\begin{aligned}
\begin{aligned}
& \mathbf{E} [| M_{m_n} (t) |^2] + \mathbf{E} [| I_{m_n}^2 (t) |^2] \\
& \leq \int_0^1 \mathbf{E} \left[ \left(\int_0^s \mathcal{D}_{m_n}^{j, j'}(s, u) \sum_r \sigma_r^{j'}(u) \,{\rm d}W_u^r \right)^4\right]^{1/2} \mathbf{E} \left[\left(\sum_r (\sigma_r^{j} (s))^2 + (b^{j} (s))^2
\right)^2 \right]^{1/2} \mathrm{d}s
\\
& \leq C_{4,\mathrm{BDG}} \int_0^1 \mathbf{E} \left[ \left(\int_0^s (\mathcal{D}_{m_n}^{j, j'}(s, u))^2 \sum_r (\sigma_r^{j'}(u))^2 \,{\rm d}u \right)^2 \right]^{1/2}
\sqrt{2} A_4^{1/2} \mathrm{d}s \\
% & \leq \sqrt{2} A_4 \int_0^1 \mathbf{E} \left[ (\sup_{u \in [0,1] }\sum_r (\sigma_r^j(u))^2 )^2\right]^{1/2} \int_0^s (\mathcal{D}_{m_n}^{j, j'}(s, u))^2 \,{\rm d}u \mathrm{d}s
\\ & \leq C_{4,\mathrm{BDG}} \sqrt{2} A_4 \int_0^1 \int_0^s (\mathcal{D}_{m_n}^{j, j'}(s, u) )^2\,{\rm d}u \mathrm{d}s,
\end{aligned}\end{aligned}$$ where $C_{4,\mathrm{BDG}}$ is the universal constant appearing in the Burkholder--Davis--Gundy inequality. ◻
## A proof of Proposition [Proposition 6](#PRSS){reference-type="ref" reference="PRSS"} {#PrPr}
Under the condition that $\rho_n m_n^2 \to 0$, we have, by an argument similar to the one used in the proof of Lemma [Lemma 1](#l21){reference-type="ref" reference="l21"}, $$\begin{aligned}
m_n \int_0^1 \int_0^s \left(\mathcal{D}^{j,j'} (s,u) \mathcal{D}^{k,k'} (s,u) -(\mathcal{D} (s,u))^2 \right) f(s,u) \,\mathrm{d}u \mathrm{d}s \to 0 \quad (n \to \infty).\end{aligned}$$ Therefore, it suffices to prove $$\begin{aligned}
& m \int_0^1 \int_0^s (\mathcal{D} (s,u))^2 f(s,u) \,\mathrm{d}u \mathrm{d}s -\frac{1}{2}\int_0^1 f(s,s) \, \mathrm{d}s \to 0 \quad (n \to \infty). \end{aligned}$$ We note that, extending $f(s,u)$ from $\{ (s,u) : u \leq s \}$ to $[0,1]^2$ symmetrically, $$\begin{aligned}
\begin{aligned}
\int_0^1 \int_0^s (\mathcal{D} (s,u))^2 f(s,u) \,\mathrm{d}u \mathrm{d}s
= \frac{1}{2} \int_0^1 \int_0^1 |\mathcal{D} (s,u)|^2 f(s,u) \,\mathrm{d}u \mathrm{d}s.
\end{aligned}\end{aligned}$$ Then, letting $$\begin{aligned}
g(s) :=m\int_0^1 |\mathcal{D} (u,s)|^2\,\mathrm{d}u,\end{aligned}$$ we have $$\begin{aligned}
& m \int_0^1 \int_0^s (\mathcal{D} (s,u))^2 f(s,u) \,\mathrm{d}u \mathrm{d}s -\frac{1}{2}\int_0^1 f(s,s) \, \mathrm{d}s \nonumber \\
&= \frac{m}{2} \int_0^1 \int_0^1 |\mathcal{D} (s,u)|^2 (f(s,u) -f(s,s))\,\mathrm{d}u \mathrm{d}s -\frac{1}{2}\int_0^1 (1-g(s))f(s,s) \, \mathrm{d}s. \label{1-g}
\begin{aligned}
& %\int_0^1 |\mathcal{D} (u,s)|^2\,\mathrm{d}u
g(u) \\
& = \frac{4}{m}\sum_{l=1}^{m}
\sum_{l'=1}^{m}\cos\left(l-\frac{1}{2}\right)\pi u \cos\left(l'-\frac{1}{2}\right)\pi u
\int_0^1 \cos\left(l-\frac{1}{2}\right)\pi s \cos\left(l'-\frac{1}{2}\right)\pi s \,\mathrm{d}s.
\end{aligned}\end{aligned}$$ Since it holds that $$\begin{aligned}
\begin{aligned}
& \int_0^1 \cos\left(l-\frac{1}{2}\right)\pi s \cos\left(l'-\frac{1}{2}\right)\pi s \,\mathrm{d}s \\
&= \frac{1}{2} \int_0^1 \left(\cos(l+l'-1)\pi s + \cos(l-l')\pi s \right)\,\mathrm{d}s \\
& = \begin{cases}
1/2 & l=l' \\
0 & l \ne l'
\end{cases},
\end{aligned}\end{aligned}$$ we have $$\begin{aligned}
\begin{aligned}
& g(u) %\int_0^1 |\mathcal{D} (u,s)|^2\,\mathrm{d}u
= \frac{2}{m}\sum_{l=1}^{m}\cos^2\left(l-\frac{1}{2}\right)\pi u
\\
&= 1 + \frac{1}{m} \sum_{l=1}^m
\cos (2l-1) \pi u \\
&= 1+ \frac{1}{2m} \frac{\sin 2 m \pi u}{\sin \pi u}
= 1 + D_m (2u).
\end{aligned}\end{aligned}$$ Then, by Lemma [Lemma 16](#lemma3.2){reference-type="ref" reference="lemma3.2"}, we see that $$\begin{aligned}
\frac{1}{2}\int_0^1 (1-g(s))f(s,s) \, \mathrm{d}s \to 0 \quad (m \to \infty).\end{aligned}$$
Finally we shall prove the convergence of the first term in [\[1-g\]](#1-g){reference-type="eqref" reference="1-g"}. Recalling [\[key equality\]](#key equality){reference-type="eqref" reference="key equality"}, we have $$\begin{aligned}
\label{2a2}
\begin{aligned}
& m \left| \int_0^1 \int_0^1 |\mathcal{D} (s,u)|^2 (f(s,u) -f(s,s))\,\mathrm{d}u \mathrm{d}s \right| \\
&\leq \int_{[0,1]^2} 2m ((D_m (u+s))^2
+ (D_m (u-s))^2) | f(s,u) -f(s,s) |
\,\mathrm{d}u \mathrm{d}s.
\end{aligned}\end{aligned}$$ We rely on the uniformly continuity of $f$. For arbitrary sufficiently small $\varepsilon > 0$, we can take $\delta > 0$ such that $$\begin{aligned}
|s-u| <\delta
\Rightarrow |f(s,u)-f(s,s)| < \varepsilon.\end{aligned}$$ Let $$\begin{aligned}
A^+_\delta := \{ (s,u) \in [0,1]^2:
s+u < \delta \ \text{ or } \ s+u > 2-\delta \},\end{aligned}$$ and $$\begin{aligned}
A^-_\delta := \{ (s,u) \in [0,1]^2:
|s-u| < \delta \}.\end{aligned}$$ Then clearly $(s,u) \in A^{\pm}_\delta$ satisfies $|s-u|< \delta$ and therefore $|f(s,u)-f(s,s)| < \varepsilon$. Then, we can bound the right-hand-side of [\[2a2\]](#2a2){reference-type="eqref" reference="2a2"} by $$\begin{aligned}
\begin{aligned}
2 \Vert f\Vert_\infty \left(\int_{[0,1]^2 \setminus A^+_\delta} \frac{1}{2m}
\frac{\sin^2 m \pi (u+s)}{\sin^2 \pi (u+s)/2}\,\mathrm{d}u \mathrm{d}s
+
\int_{[0,1]^2 \setminus A^-_\delta} \frac{1}{2m}
\frac{\sin^2 m \pi (u-s)}{\sin^2 \pi (u-s)/2}\,\mathrm{d}u \mathrm{d}s \right) \\
+ \varepsilon \left(\int_{[0,1]^2} \frac{1}{2m} \left(
\frac{\sin^2 m \pi (u+s)}{\sin^2 \pi (u+s)/2} +
\frac{\sin^2 m \pi (u-s)}{\sin^2 \pi (u-s)/2} \right)\,\mathrm{d}u \mathrm{d}s \right).
\end{aligned}\end{aligned}$$ Since $$\begin{aligned}
\begin{aligned}
\int_{[0,1]^2 \setminus A^+_\delta} \frac{1}{2m}
\frac{\sin^2 m \pi (u+s)}{\sin^2 \pi (u+s)/2}\,\mathrm{d}u \mathrm{d}s
+ \int_{[0,1]^2 \setminus A^-_\delta} \frac{1}{2m}
\frac{\sin^2 m \pi (u-s)}{\sin^2 \pi (u-s)/2}\,\mathrm{d}u \mathrm{d}s
\leq 4 \int_\delta^{1} \frac{1}{2my^2}\,\mathrm{d}y \leq \frac{2}{m\delta}
\end{aligned}\end{aligned}$$ and $$\begin{aligned}
\begin{aligned}
& \int_{[0,1]^2} \frac{1}{2m} \left(
\frac{\sin^2 m \pi (u+s)}{\sin^2 \pi (u+s)/2} +
\frac{\sin^2 m \pi (u-s)}{\sin^2 \pi (u-s)/2} \right)\,\mathrm{d}u \mathrm{d}s \\
&= \frac{1}{2m} \int_{[0,1]^2} \left(\Big|\sum_{l=-m+1}^m e^{\sqrt{-1}(l-\frac{1}{2})\pi (s+u)}
\Big|^2 + \Big|\sum_{l=-m+1}^m e^{\sqrt{-1}(l-\frac{1}{2})\pi (s-u)} \Big|^2\right)
\,\mathrm{d}u \mathrm{d}s
= 2,
\end{aligned}\end{aligned}$$ we have $$\begin{aligned}
\begin{aligned}
& m \left| \int_0^1 \int_0^1 |\mathcal{D} (s,u)|^2 (f(s,u) -f(s,s))\,\mathrm{d}u \mathrm{d}s \right| \leq \frac{4 \Vert f\Vert_\infty}{m\delta} + 2\varepsilon,
\end{aligned}\end{aligned}$$ which, since $\varepsilon > 0$ is arbitrary, shows the convergence to zero (as $m \to \infty$) of the first term in [\[1-g\]](#1-g){reference-type="eqref" reference="1-g"}. ◻
[^1]: Department of Mathematical Science, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga, 525-8577, Japan (e-mail: `[email protected]`)
[^2]: Department of Mathematical Science, Kyoto Sangyo University, Motoyama, Kamigamo, Kita-ku, Kyoto, 603-8555 Japan (e-mail: `[email protected]`)
[^3]: Graduate School of Science and Engineering, Ritsumeikan University, 1-1-1 Noji-Higashi, Kusatsu, Shiga, 525-8577, Japan (e-mail: `[email protected]`)
[^4]: Corresponding author
[^5]: It has been pointed out that the "bias\" $\mathbf{E} [ V^{j,j'} - \widehat{\Sigma^{j,j'}_{n,m_n}}(0)]$ converges to zero and the mean square error $\mathbf{E} [ (V^{j,j'} - \widehat{\Sigma^{j,j'}_{n,m_n}}(q) )^2]$ does not diverge when $m_n = o (n)$ as $n \to \infty$, which is not the case with the realized volatility.
[^6]: To be precise, they used the IBP technique to prove the convergence of $\mathrm{Res}_t$ in [\[QVofM\]](#QVofM){reference-type="eqref" reference="QVofM"} and $\langle M, W\rangle$, but used another approach to prove the convergence of $I^2$ for which the Malliavin differentiability for $b$ is not required.
[^7]: Here we avoid using the commonly used notation $D$ for the derivative so as not to mix it up with the Dirichlet kernel.
[^8]: As is remarked in [@ClGl], the conditions [\[H3\]](#H3){reference-type="eqref" reference="H3"} and [\[H3b\]](#H3b){reference-type="eqref" reference="H3b"} are not too strong; for example, the solution of a stochastic differential equation with smooth bounded coefficients naturally satisfies these conditions.
| arxiv_math | {
"id": "2310.02160",
"title": "The SIML method without microstructure noise",
"authors": "Jir\\^o Akahori and Ryuya Namba and Atsuhito Watanabe",
"categories": "math.ST math.PR stat.TH",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We develop a limit theory for controlled mean field stochastic partial differential equations in a variational framework. More precisely, we prove existence results for mean field limits and particle approximations, and we establish a set-valued propagation of chaos result which shows that sets of empirical distributions converge to sets of mean field limits in the Hausdorff metric topology. Further, we discuss limit theorems related to stochastic optimal control theory. To illustrate our findings, we apply them to a controlled interacting particle system of stochastic porous media equations.
address: Albert-Ludwigs University of Freiburg, Ernst-Zermelo-Str. 1, 79104 Freiburg, Germany
author:
- David Criens
title: |
A limit theory for controlled\
McKean--Vlasov SPDEs
---
[^1]
# Introduction
The area of controlled McKean--Vlasov dynamics, also known as mean field control, has rapidly developed in the past years, see, e.g., the monograph [@car_della] and the references therein. There is also increasing interest in infinite dimensional systems such as controlled McKean--Vlasov stochastic PDEs (mean field SPDEs), see, e.g., [@car_etal_18; @fouque_zhang] for equations appearing in financial mathematics. We also refer to the recent paper [@CGKPR23], where the authors investigate mean field SPDEs within the semigroup approach of Da Prato and Zabczyk [@DaPratoEd1], for which they establish well-posedness of the state equation, the dynamic programming principle and a Bellman equation.
Mean field dynamics are typically motivated by particle approximations (related to propagation of chaos), see, for instance, Sznitman's seminal monograph [@SnzPoC]. It is a natural and important task to make this motivation rigorous. For finite dimensional systems, a general limit theory (for controlled equations) was developed in the paper [@LakSIAM17] and extended in [@DPT22] to a setup with common noise. The purpose of the present paper is to establish a limit theory for controlled interacting infinite dimensional SPDEs in the variational framework initiated by Pardoux [@par75] and Krylov--Rozovskii [@krylov_rozovskii], see also the more recent monographs [@liu_rockner; @par].
To give a more precise account of our contributions, consider a particle system $Y = (Y^1, \dots, Y^n)$ of the form $$\begin{aligned}
d Y^k_t &= \int b (f, t, Y^k_t, \mathscr{X}_n (Y_t)) \mathfrak{m}(t, df) dt + \bar{\sigma}(\mathfrak{m}(t, \cdot\,), t, Y^k_t, \mathscr{X}_n (Y_t)) d W^k_t,
\quad
Y^k_0 = x,
\\
\mathscr{X}_n (Y_t) &= \frac{1}{n} \sum_{i = 1}^n \delta_{Y^i_t},
\end{aligned}$$ for $k = 1, \dots, n$ with $$\bar{\sigma}\bar{\sigma}^* (\mathfrak{m}(t, \cdot\,), t, Y^k_t, \mathscr{X}_n (Y_t)) = \int \sigma \sigma^* (f, t, Y^k_t, \mathscr{X}_n (Y_t)) \mathfrak{m}(t, df).$$ Here, $\mathfrak{m}$ denotes a kernel that models the control variable, and $W^1, \dots, W^n$ are independent cylindrical Brownian motions. This corresponds to a relaxed control framework in the spirit of [@nicole1987compactification; @ElKa15]. Let $\mathcal{R}^n(x)$ be the set of joint empirical distributions of particles and controls (latter are captured via $\mathfrak{m}(t, df) dt$ in a suitable space of Radon measures). The associated set of mean field limits is denoted by $\mathcal{R}^0 (x)$. It consists of a set of probability measures supported on laws of $(Y, \mathfrak{m}(t, df)dt)$, where $Y$ solves a McKean--Vlasov equation of the form $$d Y_t = \int b (f, t, Y_t, P^Y_t) \mathfrak{m}(t, df) dt + \bar{\sigma}( \mathfrak{m}(t, \cdot\,), t, Y_t, P^Y_t) d W_t, \quad Y_0 = x,$$ with $P^Y_t = \mathcal{L} \textit{aw}\, (Y_t)$.
Conceptually, our main results are split into two groups. The first one is probabilistic and deals with the convergence of controlled particle systems, while the second one sheds light on mean field limits from a stochastic optimal control perspective.
On the probabilistic side, we show that $\mathcal{R}^n (x)$ and $\mathcal{R}^0 (x)$ are nonempty and compact, and that, for any sequence $(x^n)_{n = 0}^\infty$ of initial values such that $x^n \to x^0$, the set-valued sequence $(\mathcal{R}^n (x^n))_{n = 1}^\infty$ converges to $\mathcal{R}^0 (x^0)$ in the Hausdorff metric topology (on a suitable Wasserstein space of probability measures). This result can be seen as set-valued propagation of chaos. Indeed, when $\mathcal{R}^n (x^n)$ and $\mathcal{R}^0 (x^0)$ are singletons, we recover a classical formulation of the propagation of chaos property. To the best of our knowledge, the concept and formulation of set-valued propagation of chaos is new. The observation that $\mathcal{R}^0 (x) \not = \varnothing$ provides existence for McKean--Vlasov SPDEs in a variational framework. In particular, it entails an existence result for classical (uncontrolled) McKean--Vlasov SPDEs in the spirit of a very recent result from [@HLL23]. Our proof for $\mathcal{R}^0 (x) \not = \varnothing$ is based on particle approximation. In the absence of controls, the particle approximation can be compared to a recent propagation of chaos result for stochastic 2D Navier--Stokes equations that was established in [@HLL23].
As a second main contribution, we investigate approximation properties of optimal control problems. Namely, if $\psi$ is a continuous input function of suitable growth (on a Wasserstein space of probability measures), we prove that the value functions associated to the particle approximation $$x \mapsto \sup_{Q \in \mathcal{R}^n (x) } E^Q \big[ \psi \big]$$ converge compactly (in their initial values) to the value function of the mean field limit $$x \mapsto \sup_{Q \in \mathcal{R}^0 (x)} E^Q \big[ \psi \big].$$ We also derive ramifications of this statement for upper and lower semicontinuous input functions $\psi$ of suitable growth. These results allow us to deduce limit theorems in the spirit of the seminal work [@LakSIAM17]. Namely, we show that all accumulation points of sequences of $n$-state nearly optimal controls maximize the mean field value function, and that any optimal mean field control can be approximated by a sequence of $n$-state nearly optimal controls.
Let us now comment on our assumptions on the coefficients $b$ and $\sigma$. The main condition has two layers, each formulated with the help of a Gelfand triple. On the first, we impose continuity, coercivity and growth assumptions that can be compared to mild existence conditions from [@GRZ]. These assumptions suffice to obtain that $\mathcal{R}^n (x)$ and $\mathcal{R}^0 (x)$ are nonempty and compact, as well as the particle approximation. As explained in [@GRZ], such conditions can be verified for stochastic porous media and Navier--Stokes equations.
The second Gelfand triple is used to formulate growth and weak monotonicity conditions, which we employ to establish set-valued propagation of chaos and the limit theory for stochastic control problems. These assumptions can be verified for stochastic porous media equations, but they fail for stochastic Navier--Stokes equations that lack weak monotonicity properties (see [@liu_rockner]). It is an interesting open problem to replace the weak monotonicity conditions in our work by local monotonicity conditions ([@liu_r:JFA]) to cover for instance stochastic 2D Navier--Stokes systems. Such a strengthening appears to be challenging due to the non-local structure of McKean--Vlasov equations. We leave this question for future research.
To illustrate our theory, we verify our assumptions for a slow diffusion framework where the controlled particle system is of the form $$d Y^k_t = \Big[ \Delta ( |Y^k_t|^{q - 2} Y^k_t) + \frac{1}{n} \sum_{i = 1}^n\, (Y^k_t - Y^i_t)+ \int c (f)\, \mathfrak{m}(t, df) \Big] dt + \sigma d W^k_t,$$ with $q \geq 2$ and $k = 1, \dots, n$. It is worth mentioning that our results apply to the classical finite dimensional framework.
We now comment on related literature and proofs. Our work is heavily inspired by [@LakSIAM17]. In particular, as in [@LakSIAM17], parts of our proofs rely on compactification and martingale problem methods that were established in [@nicole1987compactification]. In contrast to [@nicole1987compactification; @LakSIAM17], we deal with an infinite dimensional setting that is technically different. Let us point out that our work applies to new cases that are not covered by the well-studied finite dimensional setup, since our monotonicity conditions are weaker than their Lipschitz counterparts (that are used in [@LakSIAM17]). In the following, we highlight some important differences compared to the finite dimensional framework. Due to multiple state spaces (recall that we work with two Gelfand triples), we have to keep track of several topologies that influence convergence and continuity properties. Furthermore, our assumptions are not sufficient to establish a priori moment estimates, which we therefore have to build into our setting, nor strong existence and uniqueness properties for equations with random coefficients. To overcome this obstacle, we work with an improved weak existence result that keeps track of the driving noise, adapting a method from [@jacod1981weak], and we apply a change of topology, where we work with the second Gelfand triple. Finally, compared to the finite dimensional case, we also require different tightness methods, which are influenced by [@GRZ; @LakSPA15].
The paper is structured as follows. Our framework and the main results are explained in Section [2](#sec: main1){reference-type="ref" reference="sec: main1"}. The slow diffusion example is discussed in Section [3](#sec: ex){reference-type="ref" reference="sec: ex"} and the proof of our main theorem is given in Section [4](#sec: pf){reference-type="ref" reference="sec: pf"}.
**Remark on Notation 1**. In this paper, $C$ denotes a generic positive constant that might change from line to line. In case the constant depends on important quantities, this is mentioned specifically.
# A Limit Theory for Controlled SPDEs in a Variational Framework {#sec: main1}
## The Setting and Notation
Let $\mathbb{H}, \mathbb{X}$ and $\mathbb{V}$ be separable Hilbert spaces and let $\mathbb{Y}$ be a separable reflexive Banach space (whose topological duals are denoted by $\mathbb{H}^*, \mathbb{X}^*, \mathbb{V}^*$ and $\mathbb{Y}^*$, respectively). We assume that $$\mathbb{Y}\subset \mathbb{H}\subset \mathbb{X}\subset \mathbb{V}$$ continuously and densely. For any Banach space $\mathbb{E}$, we denote its norm by $\|\cdot\|_{\mathbb{E}}$ and the dualization between $\mathbb{E}$ and $\mathbb{E}^*$ by $_\mathbb{E}\langle \, \cdot, \cdot\, \rangle_{\mathbb{E}^*}$, i.e., $$_\mathbb{E} \langle e, e^* \rangle_{\mathbb{E}^*} := e^* (e), \quad e^* \in \mathbb{E}^*, e \in \mathbb{E}.$$ In general, we endow all Banach spaces with their norm topologies.
Below, we will work with two Gelfand triples $$\mathbb{V}^* \subset \mathbb{H}^* \cong\, \mathbb{H}\subset \mathbb{V}, \quad \mathbb{Y}\subset \mathbb{X}\cong\, \mathbb{X}^* \subset \mathbb{Y}^*.$$ In particular, for $x \in \mathbb{H}, y \in \mathbb{V}^*$ and $z \in \mathbb{X}, v \in \mathbb{Y}$, we have $${_\mathbb{V}} \langle x, y \rangle_{\mathbb{V}^*} = \langle x, y \rangle_\mathbb{H}, \qquad {_{\mathbb{Y}^*}} \langle z, v \rangle_\mathbb{Y}= \langle z, v \rangle_\mathbb{X},$$ where $\langle \cdot, \cdot\rangle_\mathbb{H}$ and $\langle \cdot, \cdot \rangle_\mathbb{X}$ are the scalar products of $\mathbb{H}$ and $\mathbb{X}$, respectively.
Throughout this paper, we extend scalar functions from a smaller to a larger Banach space by setting them $\infty$ outside their original domain. For instance, we set $\|x\|_\mathbb{Y}:= \infty$ for $x \in \mathbb{V}\backslash \mathbb{Y}$. In this way, $\|\cdot\|_\mathbb{Y}$ becomes a lower semicontinuous map from $\mathbb{V}$ into $[0, \infty]$ ([@liu_rockner Exercise 4.2.3]).
Before we proceed, let us give an overview on all parametric constants that we use in this paper: $$\begin{aligned}
T, \lambda, \alpha, \gamma, \beta, \eta, \varrho\in \mathbb{R}_+.\end{aligned}$$ We impose the following restrictions: $$\begin{aligned}
\label{eq: cond constants}
T, \lambda > 0, \quad \alpha, \gamma > 1, \quad \beta \geq 2 \vee \gamma, \quad \eta \geq \frac{\beta}{2} \vee \frac{\alpha}{2}, \quad \eta > 2, \quad \alpha > \varrho\geq 1.\end{aligned}$$ Define $$\Omega := \Big\{ \omega \in C ([0, T]; \mathbb{V}) \colon \int_0^T \|\omega(s)\|_\mathbb{Y}^\alpha ds < \infty \Big\}.$$ We endow $\Omega$ with the topology induced by the metric $$\mathsf{d}(\omega, \omega') := \sup_{s \in [0, T]} \|\omega(s) - \omega' (s)\|_{\mathbb{V}} + \Big( \int_0^T \| \omega (s) - \omega' (s)\|^\alpha_\mathbb{Y}ds \Big)^{1/ \alpha}, \ \ \omega, \omega' \in \Omega,$$ which turns it into a Polish space. Further, denote $\mathcal{F}:= \mathcal{B}(\Omega)$. The coordinate map $X = (X_t)_{t \in [0, T]}$ on $\Omega$ is defined by $X_t (\omega) = \omega (t)$ for $\omega \in \Omega$, and the corresponding filtration is defined by $\mathbf{F} = (\mathcal{F}_t)_{t \in [0, T]}$ with $\mathcal{F}_t := \sigma (X_s, s \in [0, t])$.
Take another separable Hilbert space $\mathbb{U}$, which we use as state space for the randomness that drives our systems. The space of Hilbert--Schmidt operators from $\mathbb{U}$ into $\mathbb{H}$ is denoted by $L_2 (\mathbb{U}; \mathbb{H})$ and the Hilbert--Schmidt norm is denoted by $\|\cdot\|_{L_2 (\mathbb{U}; \mathbb{H})}$.
For any metric space $(E, m)$, let $\mathcal{P}(E)$ be the set of Borel probability measures on $E$ and endow this space with the weak topology, i.e., the topology of convergence in distribution. It is well-known that $\mathcal{P}(E)$ is a Polish space once $E$ has the same property ([@dellacheriemeyer Theorem III.60, p. 73]). For $r \geq 1$ and an arbitrary reference point $e_0 \in E$, we set (with abuse of notation) $$\|\mu\|_{r, E} := \Big( \int m (e, e_0)^r\, \mu (d e) \Big)^{1/r}, \quad \mu \in \mathcal{P}(E),$$ and $$\mathcal{P}^r (E) := \big\{ \mu \in \mathcal{P}(E) \colon \|\mu\|_{r, E} < \infty \big\}.$$ Further, let $\mathsf{w}^E_r$ be the $r$-Wasserstein metric[^2] on $\mathcal{P}^r (E)$, i.e., $$\mathsf{w}^E_r (\mu, \nu) := \Big( \inf_{\pi \in \Pi (\mu, \nu)} \int m (x, y)^r \pi (dx, dy) \Big)^{1/r}, \quad \mu, \nu \in \mathcal{P}^r (E),$$ where $\Pi (\mu, \nu)$ is the set of couplings of $\mu$ and $\nu$, i.e., the set of all Borel probability measures $\pi$ on $E^2$ such that $\pi (dx \times E) = \mu$ and $\pi (E\times dy) = \nu$. It is well-known ([@villani09 Theorem 6.18]) that $(\mathcal{P}^r (E), \mathsf{w}^E_r)$ is a complete separable metric space once $(E, m)$ has the same property. If not mentioned otherwise, we endow $\mathcal{P}^r (E)$ with the topology induced by $\mathsf{w}^E_r$. In case $E$ is a (separable) Banach space, we take $e_0 = 0$ by convention.
We fix a function ${\mathcal{N}}\colon \mathbb{Y}\to [0, \infty]$ with the following properties: it is lower semicontinuous, ${\mathcal{N}}(x) = 0$ implies $x = 0$, and $${\mathcal{N}}(cy) \leq c^\alpha {\mathcal{N}}(y), \quad \forall \, c \geq 0, \, y \in \mathbb{Y},$$ and $$\big\{ y \in \mathbb{Y}\colon {\mathcal{N}}(y) \leq 1 \big\} \text{ is relatively compact in \(\mathbb{Y}\)}.$$ Moreover, we set $${\mathcal{N}}_p (y) := \|y\|^{2 (p - 1)}_\mathbb{H}{\mathcal{N}}(y), \quad p \in \mathbb{N}, \, y \in \mathbb{Y}.$$
To model the control variable, we fix an action space $F$ that is assumed to be a compact metrizable space. Let $\mathbb{M}([0, T] \times F)$ be the set of all Radon measures on $[0, T] \times F$ and define $\mathbb{M}$ as its subset of all measures in $\mathbb{M}([0, T] \times F)$ whose projections on $[0, T]$ coincide with the Lebesgue measure. We endow $\mathbb{M}$ with the vague topology, which turns it into a compact metrizable space ([@EKNJ88 Theorem 2.2]). The Borel $\sigma$-field on $\mathbb{M}$ is denoted by $\mathcal{M}$ and the identity map on $\mathbb{M}$ is denoted by $M$. Further, we define the $\sigma$-fields $$\mathcal{M}_t := \sigma \big(M (C) \colon C \in \mathcal{B}([0, t] \times F)\big), \quad t \in [0, T].$$ In the following, we consider the product space $\Theta:= \Omega \times \mathbb{M}$. Let $\mathsf{r}$ be a metric on $\mathbb{M}$ that induces its topology. Then, we endow $\Theta$ with the metric $\mathsf{d}+ \mathsf{r}$, which induces the product topology. We also set $\mathcal{O} := \mathcal{F}\otimes \mathcal{M}$. The product filtration $\mathbf{O} := (\mathcal{O}_t)_{t \in [0, T]}$ is given by $\mathcal{O}_t := \mathcal{F}_t \otimes \mathcal{M}_t$. With little abuse of notation, we denote the coordinate map on $\Theta$ by $(X, M)$.
Let $\varkappa\colon \mathbb{H}\to \mathbb{R}_+$ be a continuous function that is bounded on bounded subsets of $\mathbb{H}$ and such that $$\begin{aligned}
\varkappa(x) \geq &\ \exp \big\{ 6 \lambda \eta T ( 1 + 18 \eta) \big\} (1 + 2 \|x\|^{2\eta}_\mathbb{H})
+ \tfrac{1}{2} \big[ \|x\|^{2}_\mathbb{H}+ 3 \lambda T (1 + 3 \exp \big\{ 114 \lambda T \big\} (1 + 2 \|x\|^{2}_\mathbb{H}) ) \big]
\\&\ \ + \tfrac{1}{\beta + 2} \big[ \|x\|^{\beta + 2}_\mathbb{H}+ \lambda (\tfrac{\beta}{2} + 1) T (\beta + 3) (1 + 3 \exp \big\{ 6 \lambda (\tfrac{\beta}{2} + 1) T ( 9 \beta + 19) \big\} (1 + 2 \|x\|^{\beta + 2}_\mathbb{H}) ) \big].\end{aligned}$$ For $n \in \mathbb{N}$ and $x \in \mathbb{H}$, define $$\begin{aligned}
\mathscr{G}& := \Big\{ \mu \in \mathcal{P}(\Theta) \colon E^\mu \Big[ \sup_{s \in [0, T]} \|X_s\|^{2\eta}_\mathbb{H}+ \int_0^T \big[ {\mathcal{N}}(X_s) + {\mathcal{N}}_{\beta /2 + 1} (X_s) \big] ds \Big] < \infty \Big\},\end{aligned}$$ and $$\mathscr{J}(x) := \Big\{ Q \in \mathcal{P}(\mathcal{P}(\Theta)) \colon \int E^\mu \Big[ \sup_{s \in [0, T]} \|X_s\|^{2\eta}_\mathbb{H}+ \int_0^T \big[ {\mathcal{N}}(X_s) + {\mathcal{N}}_{\beta /2 + 1} (X_s) \big] ds \Big] Q (d \mu) \leq \varkappa(x) \Big\}.$$ By [@bogachev Theorem 8.10.61], $\mathscr{G}$ and $\mathscr{J}(x)$ are Borel subsets of $\mathcal{P}(\Theta)$ and $\mathcal{P}(\mathcal{P}(\Theta))$, respectively.
We write $\mathcal{P}^{2 \eta}_{\varrho} (\mathbb{H})$ for the space $\mathcal{P}^{2 \eta} (\mathbb{H})$ endowed with the topology induced by $\mathsf{w}^{\mathbb{V}}_{\varrho}$, i.e., the subspace topology coming from $\mathcal{P}^\varrho(\mathbb{V})$.
## The Coefficients
Next, we introduce the coefficients for the equations under consideration. We take two coefficients $$\begin{aligned}
&b \colon F \times [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H}) \to \mathbb{V}, \\
&\sigma \colon F \times [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H}) \to L_2 (\mathbb{U}; \mathbb{H})\end{aligned}$$ both of which are assumed to be Borel measurable. Further, we assume that $$\sup_{f \in F} \| \sigma (f, t, y, \mu)\|_{L_2 (\mathbb{U}; \mathbb{H})} < \infty$$ for all $(t, y, \mu) \in [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_{\varrho} (\mathbb{H})$. Finally, let $$\bar{\sigma}\colon \mathcal{P}(F) \times [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H}) \to L_2 (\mathbb{U}; \mathbb{H})$$ be a Borel map such that $$\begin{aligned}
\label{eq: os}
\bar{\sigma}\bar{\sigma}^* (\nu, t, y, \mu) = \int \sigma \sigma^* (f, t, y, \mu) \nu (df) \end{aligned}$$ for all $(\nu, t, y, \mu) \in \mathcal{P}(F) \times [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$. We emphasize that $\bar{\sigma}$ always exists. For example, one can take the nonnegative operator root ([@liu_rockner Proposition 2.3.4] or [@ReSi Theorem VI.9]) of the right-hand side in [\[eq: os\]](#eq: os){reference-type="eqref" reference="eq: os"}. As the root map $A \mapsto A^{1/2}$ is continuous in the norm and strong operator topologies ([@ReSi Problem 14, p. 217]), this choice preserves not only measurability but also continuity properties.
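In a finite-dimensional toy setting, the construction of $\bar{\sigma}$ via the operator root can be made very concrete. The following sketch is purely illustrative (a matrix analogue of the defining identity above, not the operator-theoretic construction actually used); $F$ is a finite action set, $\nu$ a probability vector on $F$, the root is computed with `scipy.linalg.sqrtm`, and all names and dimensions are ad hoc choices.

```python
import numpy as np
from scipy.linalg import sqrtm

# Matrix analogue: given matrices sigma(f) and a probability vector nu on a
# finite action set, set sigma_bar(nu) := nonnegative square root of
# sum_f nu(f) sigma(f) sigma(f)^T.
rng = np.random.default_rng(2)
dim_H, dim_U, n_actions = 4, 3, 5

sigmas = rng.standard_normal((n_actions, dim_H, dim_U))  # sigma(f), f = 0, ..., 4
nu = rng.dirichlet(np.ones(n_actions))                   # a probability measure on F

mixed = sum(nu[f] * sigmas[f] @ sigmas[f].T for f in range(n_actions))
sigma_bar = np.real(sqrtm(mixed))                        # symmetric nonnegative root

# Check the defining identity sigma_bar sigma_bar^T = sum_f nu(f) sigma(f) sigma(f)^T.
print(np.allclose(sigma_bar @ sigma_bar.T, mixed, atol=1e-8))
```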
Our main result has two layers that require different assumptions. The first layer is given by the following condition.
**Condition 1**.
1. *$\mathbb{V}^* \subset \mathbb{Y}$ (continuously and) compactly.*
2. *For every $v \in \mathbb{V}^*, t \in [0, T]$ and $\nu \in \mathcal{P}(F)$, the maps $$\begin{aligned}
F \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H}) \ni (f, y, \mu) &\mapsto {_\mathbb{V}}\langle b (f, t, y, \mu), v \rangle_{\mathbb{V}^*} \in \mathbb{R}, \\
F \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H}) \ni (f, y, \mu) &\mapsto \sigma^* (f, t, y, \mu) v \in \mathbb{U}, \\
\mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H}) \ni (y, \mu) &\mapsto \bar{\sigma}^* (\nu, t, y, \mu) v \in \mathbb{U}
\end{aligned}$$ are continuous. Further, for every $v \in \mathbb{V}^*$ and all compact sets $\mathscr{K} \subset \mathbb{Y}$ and $\mathscr{H} \subset \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$, the maps ${_\mathbb{V}} \langle b, v \rangle_{\mathbb{V}^*}$ and $\bar{\sigma}^* v$ are bounded on $F \times [0, T] \times \mathscr{K} \times \mathscr{H}$ and $\mathcal{P}(F) \times [0, T] \times \mathscr{K} \times \mathscr{H}$, respectively.*
3. *For the constant $\lambda> 0$ as in [\[eq: cond constants\]](#eq: cond constants){reference-type="eqref" reference="eq: cond constants"}, it holds that $$\begin{aligned}
{_\mathbb{V}}\langle b (f, s, w, \mu), w\rangle_{\mathbb{V}^*} & \leq \lambda \big(1 +\|w\|^2_\mathbb{H}+ \|\mu\|^2_{2, \mathbb{H}}\big) - {\mathcal{N}}(w), \label{eq: cond coe}
\\
\|\sigma (f, s, v, \mu)\|^{2 \gamma}_{L_2 (\mathbb{U}; \mathbb{H})} + \| b (f, s, v, \mu)\|^{\gamma}_{\mathbb{V}} &\leq \lambda \big( (1 + {\mathcal{N}}(v) ) (1 + \|v\|^\beta_\mathbb{H})+ \|\mu\|^{\beta}_{\beta, \mathbb{H}} \big), \label{eq: cond growth drift}
\\
\| \bar{\sigma}(\nu, s, v, \mu) \|^2_{L_2 (\mathbb{U}; \mathbb{H})} &\leq \lambda \big(1 + \|v\|^2_\mathbb{H}+ \|\mu\|^2_{2, \mathbb{H}} \big), \label{eq: cond growth diffusion}
\end{aligned}$$ for all $f \in F, \nu \in \mathcal{P}(F), s \in [0, T], w \in \mathbb{V}^*, v \in \mathbb{Y}$ and $\mu \in \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$.*
The following is our second main condition.
**Condition 2**.
1. *$b (F \times [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})) \subset \mathbb{Y}^*$.*
2. *There exists a constant $C > 0$ such that $$\begin{aligned}
\| b (f, s, y, \mu) \|^{\alpha / (\alpha - 1)}_{\mathbb{Y}^*} &\leq C \big( (1 + \mathcal{N} (y) ) (1 + \|y\|^\beta_\mathbb{H}) + \|\mu\|^\beta_{\beta, \mathbb{H}}\big), \label{eq: cond 2 (ii.1)}
\\
{_{\mathbb{Y}^*}}\langle b (f, s, y, \mu) - b (f, s, v, \mu^*), y - v \rangle_{\mathbb{Y}} &\leq C \big( \|y - v\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mu, \mu^*) \big), \label{eq: cond 2 (ii.2)}
\\
\| \bar{\sigma}(\nu, s, y, \mu) - \bar{\sigma}( \nu, s, v, \mu^*)\|^2_{L_2 (\mathbb{U}; \mathbb{X})} &\leq C \big( \|y - v\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mu, \mu^*) \big) \label{eq: cond 2 (ii.3)}
\end{aligned}$$ for all $f \in F, \nu \in \mathcal{P}(F), s \in [0, T]$ and $y, v \in \mathbb{Y}, \mu, \mu^* \in \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$.*
We are now in a position to introduce the particle systems of interest and their proposed mean field limits.
## Controlled Particle Systems and Mean Field Limits
For $n \in \mathbb{N}$, define $$\begin{aligned}
&\mathscr{S}_n \colon \Theta^n \to \mathcal{P}(\Theta), \quad \mathscr{S}_n (\theta^1, \dots, \theta^n) := \frac{1}{n} \sum_{k = 1}^n \delta_{\theta^k},
\\
&\mathscr{X}_n \colon \Omega^n \to \mathcal{P}(\Omega), \quad \mathscr{X}_n (\omega^1, \dots, \omega^n) := \frac{1}{n} \sum_{k = 1}^n \delta_{\omega^k}.\end{aligned}$$ It is known ([@LakSPA15 Lemma 3.2]) that there exists a predictable probability kernel $\mathfrak{m}$ from $[0, T] \times \Theta$ into $F$ such that $$M (dt, df) = \mathfrak{m}(t, M, df) dt.$$ In the following, we will also write $\mathfrak{m}(t, M)$ for the measure $\mathfrak{m}(t, M, df)$.
The following are the particle systems of interest.
**Definition 3**. *For $n \in \mathbb{N}$ and $x \in \mathbb{H}$, let $\mathcal{C}^n (x)$ be the set of all $Q \in \mathcal{P}(\Theta^n)$ with the following properties:*
1. *$Q \circ \mathscr{S}_n^{-1} \in \mathscr{J}(x)$;*
2. *possibly on a standard extension of $(\Theta^n, \mathcal{O}^n, \mathbf{O}^n, Q)$, there exist independent cylindrical standard Brownian motions $W^1, \dots, W^n$ such that a.s., for $k = 1, \dots, n$, all $t \in [0, T]$ and $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X^k_t, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x, v\rangle_{\mathbb{V}^*} &+ \int_0^t \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X^k_s, \mathscr{X}_n (X_s)) \mathfrak{m}(s, M^k, df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big\langle \int_0^t \bar{\sigma}( \mathfrak{m}(s, M^k), s, X^k_s, \mathscr{X}_n (X_s)) d W^k_s, v \Big \rangle_\mathbb{H}.
\end{aligned}$$ Of course, it is implicit that the integrals are well-defined.*
*Further, we define $$\mathcal{R}^n (x) := \big\{ Q \circ \mathscr{S}_n^{-1} \colon Q \in \mathcal{C}^n (x) \big\} \subset \mathcal{P}(\mathcal{P}(\Theta)).$$*
The next definition introduces their proposed mean field limits.
**Definition 4**. *For $x \in \mathbb{H}$, we define $\mathcal{C}^0 (x)$ to be the set of all measures $Q \in \mathcal{P}(\Theta)$ with the following properties:*
1. *$Q \in \mathscr{G}$;*
2. *possibly on a standard extension of $(\Theta, \mathcal{O}, \mathbf{O}, Q)$, there exists a cylindrical standard Brownian motion $W$ such that a.s., for all $t \in [0, T]$ and $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X_t, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x, v\rangle_{\mathbb{V}^*}&+ \int_0^t \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X_s, Q^X_s) \mathfrak{m}(s, M, df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big\langle \int_0^t \bar{\sigma}( \mathfrak{m}(s, M),s, X_s, Q^X_s) d W_s, v \Big \rangle_\mathbb{H}.
\end{aligned}$$ Of course, it is implicit that the integrals are well-defined.*
*Further, we set $$\mathcal{R}^0 (x) := \big\{ P \in \mathcal{P}(\mathcal{P}(\Theta)) \colon P (\mathcal{C}^0 (x)) = 1\big\} \cap \mathscr{J}(x).$$*
The Definitions [Definition 3](#def: C^n){reference-type="ref" reference="def: C^n"} and [Definition 4](#def: C MK){reference-type="ref" reference="def: C MK"} are in the spirit of the variational framework for SPDEs as studied by Pardoux [@par75] and Krylov--Rozovskii [@krylov_rozovskii], see also [@liu_rockner; @par]. In Section [4.1](#sec: mp){reference-type="ref" reference="sec: mp"} below, we establish characterizations via controlled martingale problems as studied in [@EKNJ88; @ElKa15].
## Main Result
The following theorem is the main result of this paper.
**Theorem 5**. *Assume that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} holds and take a sequence $(x^n)_{n = 0}^\infty \subset \mathbb{H}$ such that $x^n \to x^0$ (in $\|\cdot\|_\mathbb{H}$). Then, the following hold:*
1. *For every $n \in \mathbb{N}$, the sets $\mathcal{R}^0 (x^0)$ and $\mathcal{R}^n (x^n)$ are nonempty and compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.*
2. *Every sequence $(Q^n)_{n = 1}^\infty$ with $Q^n \in \mathcal{R}^n (x^n)$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ and each of its accumulation points lies in the set $\mathcal{R}^0 (x^0)$.*
3. *For every upper semicontinuous function $\psi \colon \mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta)) \to \mathbb{R}$, such that $$\begin{aligned}
\label{eq: bound property}
\exists \, C > 0 \colon \ \ |\psi (\nu)| \leq C \big[ 1 + \|\nu\|^\alpha_{\alpha, \mathcal{P}^\varrho(\Theta)}\big] \ \ \forall \, \nu \in \mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta)),\end{aligned}$$ it holds that $$\limsup_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big] \leq \sup_{Q \in \mathcal{R}^0 (x^0)} E^Q \big[ \psi \big].$$*
*Assume that Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} holds additionally.*
4. *For every $Q^0 \in \mathcal{R}^0 (x^0)$ and every subsequence $(x^{M_n})_{n = 1}^\infty$ of $(x^n)_{n = 1}^\infty$, there exist a further subsequence $(x^{N_n})_{n = 1}^\infty \subset (x^{M_n})_{n = 1}^\infty$ and measures $Q^{N_n} \in \mathcal{R}^{N_n} (x^{N_n})$ such that $Q^{N_n} \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.*
5. *For every lower semicontinuous function $\psi \colon \mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta)) \to \mathbb{R}$ with the property [\[eq: bound property\]](#eq: bound property){reference-type="eqref" reference="eq: bound property"}, it holds that $$\sup_{Q \in \mathcal{R}^0 (x^0)} E^Q \big[ \psi \big] \leq \liminf_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big].$$*
6. *For every compact set $\mathscr{K} \subset \mathbb{H}$ and every continuous function $\psi \colon \mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta)) \to \mathbb{R}$ with the property [\[eq: bound property\]](#eq: bound property){reference-type="eqref" reference="eq: bound property"}, it holds that $$\begin{aligned}
\label{eq: compact convergence}
\sup_{x \in \mathscr{K} } \Big| \sup_{Q\in \mathcal{R}^n (x)} E^Q \big[ \psi \big] - \sup_{Q\in \mathcal{R}^0(x)} E^Q \big[ \psi \big] \Big| \to 0, \quad n \to \infty,
\end{aligned}$$ and the map $$\mathbb{H}\ni x \mapsto \sup_{Q\in \mathcal{R}^0(x)} E^Q \big[ \psi \big]$$ is continuous.*
7. *$\mathcal{R}^n (x^n) \to \mathcal{R}^0 (x^0)$ as $n \to \infty$ in the Hausdorff metric topology[^3] on the space of nonempty compact subsets of $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.[^4]*
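To give some intuition for the set convergence in part (vii), the following is a small generic sketch of the Hausdorff distance between two finite point sets, with the Euclidean distance on $\mathbb{R}^d$ standing in for the metric of $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$; it is purely illustrative and not tied to the spaces above.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Hausdorff distance between two nonempty finite point sets in R^d:
    max( max_{a in A} dist(a, B), max_{b in B} dist(b, A) )."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))

# Toy check: the sets {0, 1} and {0, 1.25} on the real line are at Hausdorff
# distance 0.25 from each other.
print(hausdorff_distance(np.array([[0.0], [1.0]]), np.array([[0.0], [1.25]])))
```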
**Remark 6**.
1. Part (i) of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (which in fact follows from part (ii)) includes an existence result for certain mean field controlled stochastic PDEs. In particular, taking $F$ to be a singleton, it yields an existence result for usual (i.e., uncontrolled) McKean--Vlasov equations. In this regard, the theorem contains an existence result comparable to those from the recent paper [@HLL23].
2. Part (ii) of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} provides a particle approximation for controlled equations. In this regard, the result is comparable to a propagation of chaos result from [@HLL23] for uncontrolled stochastic 2D Navier--Stokes equations (that are covered by our result, see part (iv) of this remark).
3. The items (iii), (v) and (vi) from Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} are related to stochastic optimal control theory, as they provide some understanding of mean field control problems and their approximations. We will discuss more details in Corollary [Corollary 7](#coro: control){reference-type="ref" reference="coro: control"} below.
The flavor of part (vii) from Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} is probabilistic. It can be viewed as *set-valued propagation of chaos*. To the best of our knowledge, such a result and the concept itself are new in the literature on mean field control.
4. Let us also comment on our main assumptions. Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} is rather mild and heavily inspired by (C1) -- (C3) from [@GRZ]. It is satisfied for stochastic porous media equations ([@GRZ Section 5]) and Navier--Stokes equations on bounded domains ([@GRZ Section 6]).
Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} is a type of growth and weak monotonicity condition that appears frequently in the literature ([@krylov_rozovskii; @liu_rockner; @par]). For example, it can be verified for stochastic porous media equations ([@liu_rockner Example 4.1.11]). However, it fails for stochastic Navier--Stokes equations which lack weak monotonicity ([@liu_rockner Section 5.1.3]). It is an interesting open problem to establish (iv) -- (vii) under, say, a local monotonicity condition (as derived in [@liu_r:JFA] for SPDEs without control and measure dependence) that covers stochastic 2D Navier--Stokes frameworks.
In Section [3](#sec: ex){reference-type="ref" reference="sec: ex"} below, we show that both Conditions [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} and [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} hold for certain controlled slow diffusion equations.
In finite dimensional cases, Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} and [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} correspond to classical growth and monotonicity conditions. In fact, as monotonicity conditions are weaker than their Lipschitz counterparts, Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} covers some new cases even for finite dimensional frameworks (cf. [@LakSIAM17 Assumption B]).
The next corollary should be compared to [@LakSIAM17 Theorems 2.11, 2.12], which are limit theorems in the context of stochastic optimal control. The first part shows that accumulation points of $n$-state nearly optimal controls are optimal for the McKean--Vlasov system, while the second part explains that every optimal McKean--Vlasov control can be obtained as limit of $n$-state nearly optimal controls.
**Corollary 7**. *Suppose that the Conditions [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} and [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} hold, take a continuous function $$\psi \colon \mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta)) \to \mathbb{R}$$ with the property [\[eq: bound property\]](#eq: bound property){reference-type="eqref" reference="eq: bound property"} and an initial value $x \in \mathbb{H}$.*
1. *Let $(\varepsilon_n)_{n = 1}^\infty \subset \mathbb{R}_+$ be a sequence such that $\varepsilon_n \to 0$. For $n \in \mathbb{N}$, suppose that $Q^n \in \mathcal{R}^n (x)$ is such that $$\sup_{Q \in \mathcal{R}^n (x)} E^{Q} \big[ \psi \big] - \varepsilon_n \leq E^{Q^n} \big[ \psi \big].$$ In other words, $Q^n$ is a so-called *$n$-state $\varepsilon_n$-optimal control*. Then, the sequence $(Q^n)_{n = 1}^\infty$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ and every accumulation point $Q^0$ is in $\mathcal{R}^0 (x)$ and optimal in the sense that $$\begin{aligned}
\label{eq: def opti}
E^{Q^0} \big[ \psi \big] = \sup_{Q \in \mathcal{R}^0 (x)} E^Q \big[ \psi \big].
\end{aligned}$$*
2. *Take a measure $Q^0 \in \mathcal{R}^0 (x)$ that is optimal (i.e., it satisfies [\[eq: def opti\]](#eq: def opti){reference-type="eqref" reference="eq: def opti"}). Then, there are sequences $(\varepsilon_n)_{n = 1}^\infty \subset \mathbb{R}_+$ and $(Q^n)_{n = 1}^\infty \subset \mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ such that $\varepsilon_n \to 0$, each $Q^n$ is an $n$-state $\varepsilon_n$-optimal control and $Q^n \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.*
*Proof.* (i). By Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (vi), we have $$\sup_{Q \in \mathcal{R}^0 (x)} E^Q \big[ \psi \big] \leftarrow \sup_{Q \in \mathcal{R}^n (x)} E^{Q} \big[ \psi \big] - \varepsilon_n \leq E^{Q^n} \big[ \psi \big] \leq \sup_{Q \in \mathcal{R}^n (x)} E^Q \big[ \psi \big] \to \sup_{Q \in \mathcal{R}^0 (x)} E^Q \big[ \psi \big],$$ which shows that $$\lim_{n \to \infty} E^{Q^n} \big[ \psi \big] = \sup_{Q \in \mathcal{R}^0 (x)} E^Q \big[ \psi \big].$$ By part (ii) of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"}, $(Q^n)_{n = 1}^\infty$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ and every accumulation point $Q^0$ is in $\mathcal{R}^0 (x)$. Thus, by virtue of [@villani09 Definition 6.8], we get that $$E^{Q^0} \big[ \psi \big] = \lim_{n \to \infty} E^{Q^n} \big[ \psi \big] = \sup_{Q \in \mathcal{R}^0 (x)} E^Q \big[ \psi \big].$$ This is the claim.
(ii). By Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iv), there exists a sequence $(Q^n)_{n = 1}^\infty$ such that $Q^n \in \mathcal{R}^n (x)$ and $Q^n \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. Using that $Q^0$ is optimal, Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (vi) and [@villani09 Definition 6.8], we get that $$\lim_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x)} E^{Q} \big[ \psi \big] = E^{Q^0} \big[ \psi \big] = \lim_{n \to \infty} E^{Q^n} \big[ \psi \big].$$ Consequently, $$0 \leq \varepsilon_n := \sup_{Q \in \mathcal{R}^n (x)} E^{Q} \big[ \psi \big] - E^{Q^n} \big[ \psi \big]\to 0,$$ which shows that $Q^n$ is an $n$-state $\varepsilon_n$-optimal control. The claim is proved. ◻
# Example: Mean Field Control for Porous Media Models {#sec: ex}
In this section we verify the Conditions [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} and [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} for interacting controlled stochastic porous media systems of the form $$d Y^k_t = \Big[ \Delta ( |Y^k_t|^{q - 2} Y^k_t) + \frac{1}{N} \sum_{i = 1}^N \, (Y^k_t - Y^i_t)+ \int c\, (f)\, \mathfrak{m}(t, M^k, df) \Big] dt + \sigma d W^k_t,$$ where $q \geq 2$ and $k = 1, \dots, N$. For $q = 2$ this equation corresponds to the stochastic heat equation. The cases $q > 2$ are typically called *slow diffusion models*. More specifically, $q = 3$ describes the flow of an ideal gas in a porous medium. There are also other interesting situations, e.g., thermal propagation in plasma ($q = 7$), or plasma radiation ($q = 5$). We refer to the monograph [@BPR] for more information about (stochastic) porous media equations.
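To make the dynamics concrete, the following is a minimal simulation sketch of this particle system on $\mathcal{O} = (0, 1)$ (so $d = 1$) with $q = 3$: explicit finite differences in space combined with an Euler--Maruyama step in time, a constant stand-in for the control term $\int c (f) \, \mathfrak{m}(t, M^k, df)$, and a crude additive noise on the grid standing in for $\sigma \, d W^k_t$. It is purely illustrative; all names and numerical parameters are ad hoc choices and no claim is made about convergence of this scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

N_particles = 20         # number of interacting copies Y^1, ..., Y^N
J = 50                   # interior spatial grid points on (0, 1)
T, n_steps = 0.1, 2000   # time horizon and number of Euler steps
q = 3                    # porous medium exponent (ideal gas flow)
sigma = 0.05             # noise intensity (crude stand-in for sigma dW)
c_control = 0.1          # constant stand-in for the control term

dx, dt = 1.0 / (J + 1), T / n_steps
grid = np.linspace(dx, 1.0 - dx, J)
Y = np.tile(np.sin(np.pi * grid), (N_particles, 1))  # common initial condition

def laplacian_dirichlet(u):
    """Second-order finite-difference Laplacian with zero boundary values."""
    padded = np.pad(u, ((0, 0), (1, 1)))  # zero-padding encodes the Dirichlet condition
    return (padded[:, :-2] - 2.0 * u + padded[:, 2:]) / dx**2

for _ in range(n_steps):
    nonlinearity = np.abs(Y) ** (q - 2) * Y              # |Y^k|^{q-2} Y^k
    interaction = Y - Y.mean(axis=0, keepdims=True)      # (1/N) sum_i (Y^k - Y^i)
    drift = laplacian_dirichlet(nonlinearity) + interaction + c_control
    noise = sigma * np.sqrt(dt / dx) * rng.standard_normal(Y.shape)
    Y = Y + dt * drift + noise

print("empirical mean of the L^2 norms at time T:",
      np.mean(np.sqrt(np.sum(Y**2, axis=1) * dx)))
```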
We now introduce a precise setting for this system. Let $d \in \mathbb{N}$ be a fixed dimension and let $\mathcal{O} \subset \mathbb{R}^d$ be a bounded open set with smooth boundary. For $k \in \mathbb{N}$ and $p > 1$, let $W^{k, p}_0 (\mathcal{O})$ be the usual Sobolev space on $\mathcal{O}$ with Dirichlet boundary conditions. The dual space of $W^{k, p}_0 (\mathcal{O})$ can be identified with $W^{-k, p'} (\mathcal{O})$, where $p' = p / (p - 1)$ ([@adams Theorem 3.12] or [@krylov_rozovskii Section III.1]). In this way, the dualization between $W^{k, p}_0 (\mathcal{O})$ and $W^{-k, p'} (\mathcal{O})$ is defined by means of the usual scalar product in $L^2 (\mathcal{O})$. In the following, we take some $q \geq 2$ and $$\begin{aligned}
{3}
\mathbb{Y}&:= L^q (\mathcal{O}), \qquad \qquad &&\mathbb{H}:= L^2 (\mathcal{O}), \qquad \qquad &&\mathbb{X}:= W^{-1, 2} (\mathcal{O}),
\\
\mathbb{V}&:= W^{- d - 2, 2} (\mathcal{O}), && \mathbb{V}^* := W^{d + 2, 2}_0 (\mathcal{O}). &&\end{aligned}$$ Further, we set $$\begin{aligned}
{\mathcal{N}}(y) := \begin{cases} \int_{\mathcal{O}} \big| \nabla ( |y(u)|^{q/2 - 1} y (u) ) \big|^2 du, & \text{ if } |y|^{q/2 - 1} y \in W^{1, 2}_0 (\mathcal{O}), \\
+ \infty, &\text{ otherwise}.
\end{cases}\end{aligned}$$ Thanks to [@GRZ Lemma 5.1], for $\alpha := q$, the function ${\mathcal{N}}$ is as in the previous section, i.e., it has all presumed properties. The action space $F$ is still assumed to be a compact metrizable space.
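As a quick sanity check (a standard observation, recorded only for orientation): for $q = 2$ the functional reduces to the Dirichlet energy, $${\mathcal{N}}(y) = \int_{\mathcal{O}} | \nabla y (u) |^2 \, du \quad \text{ for } y \in W^{1, 2}_0 (\mathcal{O}),$$ and the relative compactness of the sublevel set $\{ {\mathcal{N}}\leq 1 \}$ in $\mathbb{Y}= L^2 (\mathcal{O})$ is precisely the Rellich--Kondrachov theorem; for general $q \geq 2$, the compactness provided by [@GRZ Lemma 5.1] can be viewed as a nonlinear analogue of this fact.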
Next, we introduce the coefficients $b$ and $\sigma$. We set $$b (f, s, x, \mu) \equiv b (f, x, \mu) (\,\cdot\,) := \Delta (|x (\,\cdot\,)|^{q - 2} x (\,\cdot\,)) + \int (x (\,\cdot\,) - z (\,\cdot\,)) \mu(dz) + c (f) (\,\cdot\,),$$ where $F \times \mathcal{O} \ni (f, u) \mapsto c (f) (u)$ is a bounded continuous function. Further, for some separable Hilbert space $\mathbb{U}$, let $\sigma \in L_2 (\mathbb{U}; \mathbb{H})$ be a constant Hilbert--Schmidt coefficient.
**Lemma 8**. *In the above setting, Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} holds for $\gamma = q / (q - 1)$, large enough $\lambda$ and all $\beta, \eta$ that satisfy [\[eq: cond constants\]](#eq: cond constants){reference-type="eqref" reference="eq: cond constants"}.*
*Proof.* We have $\mathbb{V}^* \subset \mathbb{Y}$ compactly by [@adams Theorem 6.3], i.e., (i) holds. Moreover, except for the measure-dependent component $- \int z \mu (dz)$, parts (ii) and (iii) from Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} follow from [@GRZ Lemma 5.2]. In the following, we comment on the measure-dependent component.
Take $v \in \mathbb{V}^*$ and a sequence $(\mu^n)_{n = 0}^\infty \subset \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$ such that $\mu^n \to \mu^0$ in $\mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$. By the continuity of the map $z \mapsto {_\mathbb{V}} \langle z, v \rangle_{\mathbb{V}^*}$ from $\mathbb{V}$ into $\mathbb{R}$, and the fact that $| {_\mathbb{V}} \langle z, v \rangle_{\mathbb{V}^*} | \leq \|z\|_\mathbb{V}\| v \|_{\mathbb{V}^*}$, we deduce from $\mu^n \to \mu^0$ in $\mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$ that $$\begin{aligned}
\mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big \langle \int z \mu^n (dz) - \int z \mu^0 (dz), v \Big \rangle_{\mathbb{V}^*} = \int {_\mathbb{V}}\langle z, v \rangle_{\mathbb{V}^*} ( \mu^n (dz) -\mu^0 (dz)) \to 0.
\end{aligned}$$ We conclude that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (ii) holds.
Next, we discuss Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (iii). Take $v \in \mathbb{V}^*, w \in \mathbb{Y}$ and $\mu \in \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$. First, notice that $$\begin{aligned}
- \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big \langle \int z \mu (dz), w \Big \rangle_{\mathbb{V}^*} &= - \int \langle z, w \rangle_\mathbb{H}\, \mu (dz) \leq \|w\|_\mathbb{H}\| \mu \|_{1, \mathbb{H}} \leq \tfrac{1}{2} \big( \|w\|^2_\mathbb{H}+ \|\mu\|^2_{2, \mathbb{H}} \big).\end{aligned}$$ We conclude that [\[eq: cond coe\]](#eq: cond coe){reference-type="eqref" reference="eq: cond coe"} holds (where the term $- {\mathcal{N}}$ comes from the non-measure dependent part of the coefficient $b$, cf. [@GRZ Lemma 5.2]). Finally, notice that $$\Big\| \int z \mu (dz) \Big\|^\gamma_\mathbb{V}\leq \Big( \int \|z\|_\mathbb{V}\mu (dz) \Big)^{\gamma} \leq \|\mu\|_{1, \mathbb{H}}^\gamma \leq \|\mu\|_{2, \mathbb{H}}^\gamma,$$ i.e., we conclude [\[eq: cond growth drift\]](#eq: cond growth drift){reference-type="eqref" reference="eq: cond growth drift"}. This completes the proof. ◻
Next, we also discuss Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"}. In the following, we consider the Gelfand triple $$\mathbb{Y}= L^q (\mathcal{O}) \subset \mathbb{X}= W^{-1, 2}(\mathcal{O}) \, \cong \, \mathbb{X}^* = W^{1, 2}_0 (\mathcal{O}) \subset \mathbb{Y}^* = (L^q (\mathcal{O}))^*,$$ where we understand $\,\cong\,$ via the Riesz map $\mathcal{R} := (- \Delta)^{-1}$ as discussed in [@liu_rockner Lemma 4.1.12]. We also recall the well-known fact ([@liu_rockner Lemma 4.1.13]) that $\Delta$ extends to a linear isometry from $L^{q / (q - 1)} (\mathcal{O})$ into $\mathbb{Y}^*$.
**Lemma 9**. *In the above setting, Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} holds.*
*Proof.* Notice that $$x \in \mathbb{Y}= L^q (\mathcal{O}) \quad \Longrightarrow \quad |x|^{q - 2} x \in L^{q / (q - 1)} (\mathcal{O}).$$ Hence, $\Delta (|x|^{q - 2} x) \in \mathbb{Y}^*$ for all $x \in \mathbb{Y}$. This implies that Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} (i) holds.
Next, we discuss Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} (ii). Take $x \in \mathbb{Y}$ and recall that $\alpha = q$. Thanks to [@GRZ Lemma C.1], we obtain $$\begin{aligned}
\| \Delta (|x|^{q - 2} x) \|^{q / (q - 1)}_{\mathbb{Y}^*} &= \int_\mathcal{O} |x (u)|^{q} du = \|x\|_\mathbb{Y}^q \leq C \big( {\mathcal{N}}(x) + \|x\|^q_\mathbb{V}\big) \leq C \big( {\mathcal{N}}(x) + \|x\|^q_\mathbb{H}\big).
\end{aligned}$$ Further, using that $\mathbb{H}\subset \mathbb{X}\subset \mathbb{Y}^*$, we get that $$\begin{aligned}
\Big\| x - \int z \mu(dz) + c (f) \Big\|_{\mathbb{Y}^*}^{q / (q - 1)}
&\leq C\, \Big\| x - \int z \mu(dz) + c (f) \Big\|_{\mathbb{H}}^{q / (q - 1)}
\\&\leq C\, \big(\|x\|^{q / (q - 1)}_\mathbb{H}+ \|\mu\|_{2, \mathbb{H}}^{q/ (q - 1)} + 1 \big).\end{aligned}$$ Therefore, [\[eq: cond 2 (ii.1)\]](#eq: cond 2 (ii.1)){reference-type="eqref" reference="eq: cond 2 (ii.1)"} holds.
It is easy to check that $$\forall \, s, t \in \mathbb{R}\colon \quad (|t|^{q - 2} t - |s|^{q - 2} s)(t - s) \geq 0.$$ Hence, for $y, v \in \mathbb{Y}$, we obtain $$\begin{aligned}
{_{\mathbb{Y}^*}} \langle \Delta (|y|^{q - 2} y) - \Delta (|v|^{q - 2} v), y - v \rangle_{\mathbb{Y}} &= - \int_{\mathcal{O}} \big( |y (u)|^{q - 2} y (u) - |v (u)|^{q - 2} v (u)\big) \big( y(u) - v (u) \big) du
\\&\leq 0.\end{aligned}$$ Take $\mu, \nu \in \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$ and recall that the Wasserstein metric can be realized by some optimal coupling ([@car_della part I, p. 353]), i.e., there exists a coupling $\pi$ of $\mu$ and $\nu$ such that $$\mathsf{w}^{\mathbb{X}}_1 (\mu, \nu) = \int \|z - w\|_\mathbb{X}\pi(dz, dw).$$ Consequently, we observe that $$\begin{aligned}
\Big\| \int z \mu (dz) - \int z \nu (dz) \Big\|_\mathbb{X}& = \Big\| \iint z\pi (dz, dw) - \iint w \pi (dz, dw) \Big\|_\mathbb{X}
\\&\leq \iint \|z - w \|_\mathbb{X}\pi (dz, dw)
= \mathsf{w}^{\mathbb{X}}_1 (\mu, \nu). \end{aligned}$$ Using this observation, we get $$\begin{aligned}
\mathop{\vphantom{\int}}\nolimits_{\mathbb{Y}^*}\hspace{-0.1cm}\Big \langle y - v + \int z \mu (dz) - \int z \nu(dz), y - v \Big\rangle_{\mathbb{Y}}&= \Big \langle y - v + \int z \mu (dz) - \int z \nu(dz), y - v \Big \rangle_\mathbb{X}
\\&\leq \Big\| y - v + \int z \mu (dz) - \int z \nu(dz)\Big\|_\mathbb{X}\, \| y - v \|_\mathbb{X}
\\&\leq \|y - v \|_\mathbb{X}^2 + \frac{1}{2} \Big( \|y - v\|_\mathbb{X}^2 + \mathsf{w}^\mathbb{X}_1 (\mu, \nu)^2 \Big)
\\&\leq \frac{3}{2}\, \|y - v\|^2_\mathbb{X}+ \frac{1}{2}\, \mathsf{w}^\mathbb{X}_2 (\mu, \nu)^2.\end{aligned}$$ Altogether, we conclude that [\[eq: cond 2 (ii.2)\]](#eq: cond 2 (ii.2)){reference-type="eqref" reference="eq: cond 2 (ii.2)"} holds.
Finally, as $\sigma$ is constant, [\[eq: cond 2 (ii.3)\]](#eq: cond 2 (ii.3)){reference-type="eqref" reference="eq: cond 2 (ii.3)"} holds trivially and the proof is complete. ◻
# Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} {#sec: pf}
In this section, we prove our main result, Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"}. Let us briefly comment on the structure of this section. In Section [4.1](#sec: mp){reference-type="ref" reference="sec: mp"}, we start with martingale problem characterizations of the sets $\mathcal{C}^0$ and $\mathcal{C}^n$. Thereafter, in Section [4.2](#sec: compactness){reference-type="ref" reference="sec: compactness"}, we investigate compactness properties. In Section [4.3](#sec: existence){reference-type="ref" reference="sec: existence"}, we provide a general existence result for stochastic PDEs with certain random coefficients. In the remaining sections, we complete the proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} step by step.
## Martingale Problem Characterizations of $\mathcal{C}^0$ and $\mathcal{C}^n$ {#sec: mp}
Let $\mathscr{B}:= \{e_i \colon i \in \mathbb{N}\} \subset \mathbb{V}^*$ be a countable dense set. Furthermore, suppose that $\mathcal{C}^2_b$ is a countable subset of $C^2_b(\mathbb{R}; \mathbb{R})$ that is dense for the norm $\|f\|_\infty + \|f'\|_\infty + \|f''\|_\infty$. Finally, for $s \in [0, T]$, let $\mathcal{T}_s \subset C_b (\Theta; \mathbb{R})$ be a countable separating class for $\mathcal{O}_s$. The existence of such a class follows as in the proof of [@LakSIAM17 Lemma A.1].
We also need the following notation to define relaxed control rules. For $g \in C^2_b (\mathbb{R}; \mathbb{R}), y \in \mathbb{V}^*$ and $(f, t, v, \mu) \in F \times [0, T] \times \mathbb{Y}\times \mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$, we set $$\begin{aligned}
\mathcal{L}_{g, y} (f, t, v, \mu) := g' ({_\mathbb{V}} \langle v, y \rangle_{\mathbb{V}^*}) {_\mathbb{V}}\langle b (f, t, v, \mu), y \rangle_{\mathbb{V}^*}
+ \tfrac{1}{2} g'' ({_\mathbb{V}}\langle v, y \rangle_{\mathbb{V}^*}) \| \sigma^* (f, t, v, \mu) y \|^2_\mathbb{U}.\end{aligned}$$ Finally, for $g \in C^2_b (\mathbb{R}; \mathbb{R}), f \in F, y^1, \dots, y^n \in \mathbb{V}^*, v^1, \dots, v^n \in \mathbb{Y}, \mu \in \mathcal{P}^{2 \eta}_{\varrho} (\mathbb{H})$ and $i = 1, \dots, n$, we set $$\begin{aligned}
\mathcal{L}^i_{g, y^1, \dots, y^n} &(f, t, v^1, \dots, v^n, \mu)
\\&\hspace{-0.75cm} :=
g' \Big( \sum_{k = 1}^n {_\mathbb{V}}\langle v^k, y^k \rangle_{\mathbb{V}^*} \Big) {_\mathbb{V}}\langle b (f, t, v^i, \mu), y^i \rangle_{\mathbb{V}^*} + \frac{1}{2}\, g'' \Big( \sum_{k = 1}^n {_\mathbb{V}}\langle v^k, y^k \rangle_{\mathbb{V}^*} \Big) \| \sigma^* (f, t, v^i, \mu) y^i \|^2_\mathbb{U}.\end{aligned}$$ We are now in a position to provide a martingale characterization of the sets $\mathcal{C}^0$ and $\mathcal{C}^n$.
**Lemma 10**. *Suppose that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} [(iii)]{.upright} holds. Let $x \in \mathbb{H}$ and $Q\in \mathcal{P}(\Theta)$. The following are equivalent:*
1. *$Q \in \mathcal{C}^0 (x)$.*
2. *The following properties hold:*
1. *$Q (X_0 = x) = 1$;*
2. *$Q \in \mathscr{G}$;*
3. *for all $y \in \mathbb{V}^*$ and $g \in C^2_b (\mathbb{R}; \mathbb{R})$, the process $$\mathsf{M}^{g, y} := g ({_\mathbb{V}}\langle X, y\rangle_{\mathbb{V}^*}) - \int_0^\cdot \int \mathcal{L}_{g, y} (f, s, X_s, Q^X_s) M (ds, df)$$ is a (square integrable) $Q$-$\mathbf{O}$-martingale.*
3. *The following properties hold:*
1. *$Q (X_0 = x) = 1$;*
2. *$Q \in \mathscr{G}$;*
3. *for all $y \in \mathscr{B}, g \in \mathcal{C}^2_b, s, t \in \mathbb{Q}_+ \cap [0, T], s < t$ and all $\psi \in \mathcal{T}_s$, $$E^Q \big[ (\mathsf{M}^{g, y}_t - \mathsf{M}^{g, y}_s) \psi \big] = 0.$$*
*Proof.* We will prove the following implications: $$\begin{aligned}
\textup{(i)} \Rightarrow \textup{(ii)}, \quad \textup{(ii)} \Rightarrow \textup{(i)}, \quad \textup{(iii)} \Rightarrow \textup{(ii)}.
\end{aligned}$$ As (ii) $\Rightarrow$ (iii) is trivial, these complete the proof.
*$\textup{(i)} \Rightarrow \textup{(ii)}$:* Notice that (a) and (b) hold. We now work on a standard extension of $(\Theta, \mathcal{O}, \mathbf{O}, Q)$ and we use the notation from Definition [Definition 4](#def: C MK){reference-type="ref" reference="def: C MK"}, i.e., a.s., for all $t \in [0, T]$ and $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X_t, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x, v\rangle_{\mathbb{V}^*}&+ \int_0^t \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X_s, Q^X_s) \mathfrak{m}(s, M, df), v \Big\rangle_{\mathbb{V}^*} ds
\\&+ \Big\langle \int_0^t \bar{\sigma}(\mathfrak{m}(s, M), s, X_s, Q^X_s) d W_s, v \Big \rangle_\mathbb{H}.
\end{aligned}$$ In the following, fix $g \in C^2_b (\mathbb{R}; \mathbb{R})$ and $y \in \mathbb{V}^*$, and apply the above identity with $v = y$. The classical Itô formula for real-valued semimartingales yields that a.s. $$\begin{aligned}
\mathsf{M}^{g, y} &= g ({_\mathbb{V}} \langle x, y \rangle_{\mathbb{V}^*}) + \int_0^\cdot \Big( g' ({_\mathbb{V}} \langle X_s, y \rangle_{\mathbb{V}^*}) \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X_s, Q^X_s) \mathfrak{m}(s, M, df), v \Big\rangle_{\mathbb{V}^*}
\\&\hspace{3.5cm} + \frac{1}{2} g'' ({_\mathbb{V}}\langle X_s, y \rangle_{\mathbb{V}^*}) \Big\langle \int \sigma \sigma^* (f, s, X_s, Q^X_s) \mathfrak{m}(s, M, df) y, y \Big\rangle_\mathbb{H}\Big) ds
\\
&= g ({_\mathbb{V}} \langle x, y \rangle_{\mathbb{V}^*}) + \int_0^\cdot \Big( g' ({_\mathbb{V}} \langle X_s, y \rangle_{\mathbb{V}^*}) \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X_s, Q^X_s) \mathfrak{m}(s, M, df), v \Big\rangle_{\mathbb{V}^*}
\\&\hspace{3.5cm} + \frac{1}{2} g'' ({_\mathbb{V}}\langle X_s, y \rangle_{\mathbb{V}^*}) \| \bar{\sigma}^* (\mathfrak{m}(s, M), s, X_s, Q^X_s)y\|_\mathbb{U}^2 \Big) ds
\\
&= g ({_\mathbb{V}} \langle x, y \rangle_{\mathbb{V}^*}) + \int_0^\cdot g' ( {_\mathbb{V}}\langle X_s, v\rangle_{\mathbb{V}^*} ) \langle \bar{\sigma}^* (\mathfrak{m}(s, M), s, X_s, Q^X_s) v, d W_s \rangle_\mathbb{U}.
\end{aligned}$$ Denoting by $[\,\cdot\, , \, \cdot\,]$ the quadratic variation process, we obtain that a.s. $$\begin{aligned}
\big[ \mathsf{M}^{g, y}, \mathsf{M}^{g, y} \big]_T = \int_0^T \big( g' ({_\mathbb{V}} \langle X_s, v\rangle_{\mathbb{V}^*})\big)^2 \|\bar{\sigma}^* (\mathfrak{m}(s, M), s, X_s, Q^X_s) v\|^2_\mathbb{U}ds.
\end{aligned}$$ Using the growth bound [\[eq: cond growth diffusion\]](#eq: cond growth diffusion){reference-type="eqref" reference="eq: cond growth diffusion"} and that $Q \in \mathscr{G}$, we obtain that $$E^Q \Big[ \big[ \mathsf{M}^{g, y}, \mathsf{M}^{g, y}\big]_T \Big] < \infty.$$ We conclude that $\mathsf{M}^{g, y}$ is a (square integrable) $Q$-$\mathbf{O}$-martingale, i.e., (ii) follows.
*$\textup{(ii)} \Rightarrow \textup{(i)}$:* By virtue of [@JS Theorem II.2.42], (ii) implies that, for every $v \in \mathbb{V}^*$, the process $${_\mathbb{V}}\langle X, v \rangle_{\mathbb{V}^*} - {_\mathbb{V}} \langle x, v \rangle_{\mathbb{V}^*} - \int_0^\cdot \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big \langle \int b (f, s, X_s, Q^X_s) \mathfrak{m}(s, M, df), v \Big\rangle_{\mathbb{V}^*} ds$$ is a continuous local $Q$-$\mathbf{O}$-martingale with quadratic variation process $$\int_0^\cdot \| \bar{\sigma}^* (\mathfrak{m}(s, M), s, X_s, Q^X_s) v \|^2_\mathbb{U}ds.$$ The growth condition [\[eq: cond growth diffusion\]](#eq: cond growth diffusion){reference-type="eqref" reference="eq: cond growth diffusion"} and $Q \in \mathscr{G}$ imply that $Q$-a.s. $$\int_0^T \| \bar{\sigma}(\mathfrak{m}(s, M), s, X_s, Q^X_s) \|^2_{L_2 (\mathbb{U}; \mathbb{H})} ds < \infty.$$ As $\mathbb{V}^* \subset \mathbb{H}^*$ densely, we deduce from [@OndRep Corollary 6] that, possibly on a standard extension of $(\Theta, \mathcal{O}, \mathbf{O}, Q)$, there exists a cylindrical Brownian motion $W$ such that, for all $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X, v \rangle_{\mathbb{V}^*} - {_\mathbb{V}} \langle x, v \rangle_{\mathbb{V}^*} - \int_0^\cdot \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big \langle \int b (f, s, X_s, Q^X_s) & \mathfrak{m}(s, M, df), v \Big\rangle_{\mathbb{V}^*} ds
\\&= \Big \langle \int_0^\cdot \bar{\sigma}(\mathfrak{m}(s, M), s, X_s, Q^X_s) d W_s, v \Big \rangle_\mathbb{H}.
\end{aligned}$$ This proves that part (ii) from Definition [Definition 4](#def: C MK){reference-type="ref" reference="def: C MK"} holds. Consequently, $Q \in \mathcal{C}^0 (x)$.
*$\textup{(iii)} \Rightarrow \textup{(ii)}$:* This implication follows readily by a density argument. ◻
A similar result can also be proved for the set $\mathcal{C}^n (x)$.
**Lemma 11**. *Suppose that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} [(iii)]{.upright} holds. Let $n \in \mathbb{N}, x \in \mathbb{H}$ and $Q\in\mathcal{P}(\Theta^n)$. The following are equivalent:*
1. *$Q \in \mathcal{C}^n (x)$.*
2. *The following hold:*
1. *$Q (X^k_0 = x, k = 1, \dots, n) = 1$;*
2. *$Q \circ \mathscr{S}_n^{-1} \in \mathscr{J}(x)$;*
3. *for all $v^1, \dots, v^n \in \mathbb{V}^*$ and $g \in C^2_b (\mathbb{R}; \mathbb{R})$, the process $$\begin{aligned}
g \Big( \sum_{k = 1}^n\, {_\mathbb{V}}\langle X^k, v^k\rangle_{\mathbb{V}^*}\Big)
- \sum_{k = 1}^n \int_0^\cdot \int \mathcal{L}^k_{g, v^1, \dots, v^n} (f, s, X_s, \mathscr{X}_n (X_s)) M^k (ds, df)
\end{aligned}$$ is a (square integrable) $Q$-$\mathbf{O}$-martingale.*
*Proof.* The lemma follows similarly to the proof of (i) $\Leftrightarrow$ (ii) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}. We omit the details for brevity. ◻
The following observation follows directly from Lemma [Lemma 11](#lem: mg chara N){reference-type="ref" reference="lem: mg chara N"}.
**Corollary 12**. *Suppose that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} [(iii)]{.upright} holds. For every $x \in \mathbb{H}$ and $n \in \mathbb{N}$, the set $\mathcal{C}^n (x)$ is convex.*
## Compactness Properties {#sec: compactness}
In this section we investigate (relative) compactness of the sets $\mathcal{R}^n$ and $\mathcal{R}^0$.
**Lemma 13**. *Suppose that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} [(i)]{.upright} and [(iii)]{.upright} hold. For every nonempty bounded set $B \subset \mathbb{H}$, the set $$\mathcal{R}(B) := \bigcup_{n \in \mathbb{N}} \bigcup_{x \in B} \mathcal{R}^n (x)$$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.*
*Proof.* *Step 1: Tightness in $\mathcal{P}(\mathcal{P}(\Theta))$.* We define the map $I \colon \mathcal{P}(\mathcal{P}(\Theta)) \to \mathcal{P}(\Omega)$ by $$\begin{aligned}
\label{eq: I}
I (Q) (G) := \int m (G \times \mathbb{M}) Q(dm), \quad G \in \mathcal{B}(\Omega).
\end{aligned}$$ Thanks to [@SnzPoC (2.5), p. 178] and the compactness of $\mathbb{M}$, tightness of $\mathcal{R}(B)$ in $\mathcal{P}(\mathcal{P}(\Theta))$ follows from tightness of $$\mathcal{I} := \{I (Q) \colon Q \in \mathcal{R}(B)\} \subset \mathcal{P}(\Omega).$$ By virtue of [@GRZ Lemma 4.3], Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (i) and the definition of $\mathscr{J}(x)$, it suffices to prove that there exists a constant $l > 0$ such that $$\begin{aligned}
\label{eq: to show for tightness}
\sup_{Q \in \mathcal{R} (B)} E^{I (Q)} \Big[ \sup_{s \not = t} \frac{\|X_s - X_t\|_\mathbb{V}}{|t - s|^{l}} \Big] < \infty.
\end{aligned}$$ Take $Q = P \circ \mathscr{S}^{-1}_n \in \mathcal{R}^n (x)$. Then, $$E^{I (Q)} \Big[ \sup_{s \not = t} \frac{\|X_s - X_t\|_\mathbb{V}}{|t - s|^{l}} \Big] = \frac{1}{n} \sum_{k = 1}^n E^P \Big[ \sup_{s \not = t} \frac{\|X^k_s - X^k_t\|_\mathbb{V}}{|t - s|^{l}} \Big].$$ By definition of $\mathcal{C}^n (x)$, we have $P$-a.s., for all $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X^k_t, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x, v \rangle_{\mathbb{V}^*} &+ \int_0^t \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X^k_s, \mathscr{X}_n (X_s)) \mathfrak{m}(s, M^k, df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big \langle \int_0^t \bar{\sigma}(\mathfrak{m}(s, M^k), s, X^k_s, \mathscr{X}_n (X_s)) d W^k_s, v \Big\rangle_\mathbb{H}, \end{aligned}$$ for independent cylindrical Brownian motions $W^1, \dots, W^n$. This means that $P$-a.s. $$X^k = x + \int_0^\cdot \int b (f, s, X^k_s, \mathscr{X}_n (X_s)) \mathfrak{m}(s, M^k, df) ds + \int_0^\cdot \bar{\sigma}(\mathfrak{m}(s, M^k),s, X^k_s, \mathscr{X}_n(X_s)) d W^k_s,$$ for $k = 1, \dots, n$, where the equality is meant to be in $\mathbb{V}$. Consequently, $P$-a.s., for $s < t$, $$\begin{aligned}
\|X_t^k- X^k_s\|_\mathbb{V}\leq &\int_s^t \int \| b (f, r, X^k_r, \mathscr{X}_n (X_r)) \|_\mathbb{V}\mathfrak{m}(r, M^k, df) dr
\\&+ \Big\| \int_s^t \bar{\sigma}(\mathfrak{m}(r, M^k), r, X^k_r, \mathscr{X}_n (X_r)) d W^k_r \Big\|_\mathbb{H}.
\end{aligned}$$ By Hölder's and Jensen's inequality and [\[eq: cond growth drift\]](#eq: cond growth drift){reference-type="eqref" reference="eq: cond growth drift"}, we obtain that $$\begin{aligned}
\int_s^t \int \| b (f,& r, X^k_r, \mathscr{X}_n (X_r)) \|_{\mathbb{V}} \mathfrak{m}(r, M^k, df) dr
\\&\leq |t - s|^{1 - 1 / \gamma} \, \Big( \int_s^t \int \|b (f, r, X^k_r, \mathscr{X}_n (X_r))\|^\gamma_\mathbb{V}\mathfrak{m}(r, M^k, df) dr \Big)^{1/\gamma}
\\&\leq |t - s|^{1 - 1 /\gamma} \Big( 1 + \int_0^T \lambda \, \Big( (1 + {\mathcal{N}}(X^k_r)) ( 1 + \|X^k_r\|^\beta_\mathbb{H}) + \frac{1}{n} \sum_{i = 1}^n \|X^i_r\|^\beta_\mathbb{H}\Big) dr \Big).
\end{aligned}$$ Thus, as $Q \circ \mathscr{S}_n^{-1} \in \mathscr{J}(x)$, we get that $$\begin{aligned}
\frac{1}{n} \sum_{k = 1}^n E^P & \Big[ \sup_{s \not = t} \int_s^t \int \|b (f, r, X^k_r, \mathscr{X}_n (X_r))\|_\mathbb{V}\mathfrak{m}(r, M^k, df) dr \Big \slash |t - s|^{1 - 1 /\gamma} \Big] \leq C,
E^P \Big[ \Big\| \int_s^t \bar{\sigma}(\mathfrak{m}(r, &M^k), r, X^k_r, \mathscr{X}_n (X_r)) d W^k_r \Big\|^{2 \eta}_\mathbb{H}\, \Big]
\\&\leq E^P \Big[ \Big( \int_s^t \| \bar{\sigma}(\mathfrak{m}(r, M^k), r, X^k_r, \mathscr{X}_n (X_r)) \|^2_{L_2 (\mathbb{U}; \mathbb{H})} dr \Big)^{\eta} \, \Big]
\\&\leq E^P \Big[ \Big( \int_s^t \lambda\, \Big( 1 + \|X^k_r\|^2_\mathbb{H}+ \frac{1}{n} \sum_{i = 1}^n \|X^i_r\|^2_\mathbb{H}\Big) dr \Big)^{\eta}\, \Big].\end{aligned}$$ Consequently, using that $Q \circ \mathscr{S}_n^{-1} \in \mathscr{J}(x)$, there exists a constant $C > 0$, with the same dependencies as above, such that $$\begin{aligned}
\frac{1}{n} \sum_{k = 1}^n E^P \Big[ \Big\| \int_s^t \bar{\sigma}(\mathfrak{m}(r, M^k), r, X^k_r, \mathscr{X}_n (X_r)) d W^k_r \Big\|^{2 \eta}_\mathbb{H}\, \Big] &\leq C \, | t - s |^{\eta}.
\frac{1}{n} \sum_{k = 1}^n & \, E^P \Big[ \sup_{s \not = t} \Big\| \int_s^t \bar{\sigma}(\mathfrak{m}(r, M^k), r, X^k_r, \mathscr{X}_n (X_r)) d W^k_r \Big\|_\mathbb{H}\Big \slash |t - s|^{(\eta - 2) / 2 \eta} \Big]
\\&\leq \frac{1}{n} \sum_{k = 1}^n E^P \Big[ \int_0^T \int_0^T \Big\| \int_s^t \bar{\sigma}(\mathfrak{m}(r, M^k), r, X^k_r, \mathscr{X}_n (X_r)) d W^k_r \Big\|^{2\eta}_\mathbb{H}\Big \slash |t - s|^{\eta} dtds \Big]
\leq C T^2.\end{aligned}$$ We conclude that [\[eq: to show for tightness\]](#eq: to show for tightness){reference-type="eqref" reference="eq: to show for tightness"} holds with $l = (1 - 1 / \gamma) \wedge (\eta - 2)/ 2\eta$.
*Step 2: Relative compactness in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.* First of all, recall that $\varrho< \alpha$. Moreover, using the definition of $\mathscr{J}$ and [@GRZ Lemma C.1], we obtain that $$\begin{aligned}
\sup_{Q \in \mathcal{R}(B)} E^{I (Q)} \big[ \mathsf{d}(X, 0)^\alpha \big] &\leq C \, \sup_{Q \in \mathcal{R}(B)} E^{I (Q)} \Big[ \sup_{s \in [0, T]} \| X_s\|^\alpha_\mathbb{V}+ \int_0^T \| X_s \|^\alpha_\mathbb{Y}ds \Big]
\\&\leq C \, \sup_{Q \in \mathcal{R}(B)} E^{I (Q)} \Big[ \sup_{s \in [0, T]} \| X_s\|^\alpha_\mathbb{V}+ \int_0^T \big[ {\mathcal{N}}(X_s) + \|X_s\|^\alpha_\mathbb{V}\big] ds \Big]
\\&\leq C \, \sup_{Q \in \mathcal{R}(B)} E^{I (Q)} \Big[ \sup_{s \in [0, T]} \| X_s\|^\alpha_\mathbb{V}+ \int_0^T {\mathcal{N}}(X_s) ds \Big]
< \infty. \end{aligned}$$ Hence, by [@LakSPA15 Corollary B.2] and Step 1, we conclude that $\mathcal{R}(B)$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. The proof is complete. ◻
**Lemma 14**. *Let $(x^n)_{n = 0}^\infty \subset \mathbb{H}$ be such that $x^n \to x^0$ in $\|\,\cdot\,\|_\mathbb{H}$. Furthermore, let $(Q^n)_{n = 0}^\infty \subset \mathcal{P}(\mathcal{P}(\Theta))$ be such that $Q^n \in \mathscr{J}(x^n)$ for all $n \in \mathbb{N}$ and $Q^n \to Q^0$ in $\mathcal{P}(\mathcal{P}(\Theta))$. Then, $Q^0 \in \mathscr{J}(x^0)$.*
*Proof.* Notice that $(t, \omega) \mapsto \omega (t)$ is continuous from $[0, T] \times \Omega$ into $\mathbb{V}$ by the Arzelà--Ascoli theorem. Hence, $(t, \omega) \mapsto \|\omega (t)\|_\mathbb{H}$ is lower semicontinuous from $[0, T] \times \Omega$ into $[0, \infty]$ (as $x \mapsto \|x\|_\mathbb{H}$ is lower semicontinuous from $\mathbb{V}$ into $[0, \infty]$; recall our convention that $\|x\|_\mathbb{H}= \infty$ for $x \in \mathbb{V}\backslash \mathbb{H}$). By Berge's maximum theorem (see [@hu_papa Proposition 1.3.1] for a suitable version), the same is true for the map $(t, \omega) \mapsto \sup_{s \in [0, t]} \| \omega(s)\|_\mathbb{H}$. Also notice that $$\omega \mapsto \int_0^T \big[ {\mathcal{N}}(\omega (s)) + {\mathcal{N}}_{\beta /2 + 1} (\omega (s)) \big] ds$$ is lower semicontinuous from $\Omega$ into $[0, \infty]$ (by Fatou's lemma). Now, using again Fatou's lemma ([@feinberg20 Theorem 2.4]), we obtain that the map $$\mu \mapsto E^\mu \Big[ \sup_{s \in [0, T]} \|X_s\|^{2\eta}_\mathbb{H}+ \int_0^T \big[ {\mathcal{N}}(X_s) + {\mathcal{N}}_{\beta /2 + 1} (X_s) \big] ds \Big]$$ is lower semicontinuous. Consequently, once again by Fatou's lemma, the assumption that $Q^n \in \mathscr{J}(x^n)$, and the continuity of $x \mapsto \varkappa(x)$, $$\begin{aligned}
\int E^{\mu} \Big[ \sup_{s \in [0, T]} & \|X_s\|^{2\eta}_\mathbb{H}+ \int_0^T \big[ {\mathcal{N}}(X_s) + {\mathcal{N}}_{\beta /2 + 1} (X_s) \big] ds \Big] Q^0 (d \mu)
\\&\leq \liminf_{n \to \infty} \int E^{\mu} \Big[ \sup_{s \in [0, T]} \|X_s\|^{2 \eta}_\mathbb{H}+ \int_0^T \big [ {\mathcal{N}}(X_s) + {\mathcal{N}}_{\beta /2 + 1} (X_s) \big] ds \Big] Q^n (d \mu)
\\&\leq \liminf_{n \to \infty} \varkappa(x^{n})
= \varkappa( x^0). \phantom{\int}\end{aligned}$$ We conclude that $Q^0 \in \mathscr{J}(x^0)$. ◻
**Lemma 15**. *Assume that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} holds. For every compact set $\mathscr{K} \subset \mathbb{H}$, the set $$\begin{aligned}
\mathcal{R}^0 (\mathscr{K}) := \bigcup_{x \in \mathscr{K}} \mathcal{R}^0 (x)
\end{aligned}$$ is compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.*
*Proof.* *Step 1.* We first show that $\mathcal{R}^0 (\mathscr{K})$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. Here, we argue along the lines of the proof for Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"}. Recall the definition of the map $I$ from [\[eq: I\]](#eq: I){reference-type="eqref" reference="eq: I"}. First, we prove tightness of $\mathcal{J} := \{ I (Q) \colon Q \in \mathcal{R}^0(\mathscr{K})\}$. Take $Q \in \mathcal{R}^0(x)$ with $x \in \mathscr{K}$. By the definition of $\mathcal{R}^0(x)$, for $Q$-a.a. $\mu \in \mathcal{P}(\Theta)$, $\mu \in \mathcal{C}^0(x)$. Now, we can argue as in the proof of Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"} to conclude that $$\begin{aligned}
E^{I (Q)} \Big[ \sup_{s \not = t} & \frac{\|X_t - X_s\|_\mathbb{V}}{|t - s|^{(1 - 1/ \gamma) \wedge (\eta - 2) / 2\eta}} \Big]
\\&= \int E^{\mu} \Big[ \sup_{s \not = t} \frac{\|X_t - X_s\|_\mathbb{V}}{|t - s|^{(1 - 1/ \gamma) \wedge (\eta - 2) / 2 \eta}} \Big] Q (d \mu)
\\&\leq C \Big( 1 + \int E^\mu \Big[ \sup_{s \in [0, T]} \|X_s\|^{2 \eta}_\mathbb{H}+ \int_0^T {\mathcal{N}}(X_s) ds + \int_0^T {\mathcal{N}}_{\beta/2 + 1}(X_s) ds \Big] Q (d \mu) \Big).
\end{aligned}$$ By the definition of $\mathscr{J}$, the properties of $\varkappa$ and the boundedness of $\mathscr{K}$, there exists a constant independent of $Q$ such that $$E^{I (Q)} \Big[ \sup_{s \not = t} \frac{\|X_s - X_t\|_\mathbb{V}}{|t - s|^{(1 - 1/ \gamma) \wedge (\eta - 2) / 2 \eta}} \Big] \leq C.$$ Using the same facts again, it follows from Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (i) and [@GRZ Lemma 4.3] that $\mathcal{J}$ is tight in $\mathcal{P}(\Omega)$. Finally, relative compactness in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ follows from [@LakSPA15 Corollary B.2], [@GRZ Lemma C.1] and, once again, the definition of $\mathscr{J}$.
*Step 2.* In this step, we prove that $\mathcal{R}^0 (\mathscr{K})$ is closed, where we use the martingale problem characterization of $\mathcal{C}^0$ as given by Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}. Take a sequence $(Q^n)_{n = 1}^\infty\subset \mathcal{R}^0(\mathscr{K})$ such that $Q^n \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. By definition of $\mathcal{R}^0 (\mathscr{K})$, there exists a sequence $(x^n)_{n = 1}^\infty \subset \mathscr{K}$ such that $\mu \circ X^{-1}_0 = \delta_{x^n}$ for $Q^n$-a.a. $\mu \in \mathcal{P}(\Theta)$. By the compactness of $\mathscr{K}$, possibly passing to a subsequence, we may assume that $x^n \to x^0 \in \mathscr{K}$ in $\|\cdot\|_\mathbb{H}$.
Take $\varepsilon > 0$ and set $G = G (\varepsilon) := \{ Q \in \mathcal{P}(\Theta) \colon \mathsf{p}_{\mathbb{V}} (Q \circ X^{-1}_0, \delta_{x^0}) \leq \varepsilon \}$, where $\mathsf{p}_{\mathbb{V}}$ denotes some metric on $\mathcal{P}(\mathbb{V})$ that induces the weak topology. As the map $Q \mapsto \mathsf{p}_{\mathbb{V}} (Q \circ X^{-1}_0, \delta_{x^0})$ is continuous from $\mathcal{P}(\Theta)$ into $\mathbb{R}_+$, the set $G$ is closed in $\mathcal{P}(\Theta)$. Hence, by the Portmanteau theorem, $$\begin{aligned}
Q^0 (G) &\geq \limsup_{n \to \infty} Q^n (G)
= \limsup_{n \to \infty} \mathds{1} \{ \mathsf{p}_{\mathbb{V}} (\delta_{x^n}, \delta_{x^0} ) \leq \varepsilon \} = 1.
\end{aligned}$$ As $\varepsilon > 0$ was arbitrary, it follows that $$\begin{aligned}
Q^0 ( \{ \mu \in \mathcal{P}(\Theta) \colon \mu\circ X^{-1}_0 = \delta_{x^0} \} ) = 1,
\end{aligned}$$ that is, almost all realizations of $Q^0$ satisfy (a.i) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"} with initial value $x^0$.
We deduce from Lemma [Lemma 14](#lem: M closed){reference-type="ref" reference="lem: M closed"} that $Q^0 \in \mathscr{J}(x^0)$. In particular, this implies that $Q^0 (\mathscr{G}) = 1$, i.e., (a.ii) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"} holds for $Q^0$-a.a. realizations.
It remains to show that $Q^0$-a.a. $\mu \in \mathcal{P}(\Theta)$ satisfy (a.iii) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}. Take $g \in C^2_b(\mathbb{R}; \mathbb{R})$ and $v \in \mathbb{V}^*$. For $k > 0$ and $r \in [0, T]$, we define a map $\mathsf{M}_r^k \colon \Theta \times \mathcal{P}(\Theta) \to \mathbb{R}$ by $$\begin{aligned}
\mathsf{M}_r^k (\omega, m, \mu) := g ({_\mathbb{V}}\langle \omega(r), v \rangle_{\mathbb{V}^*}) - \int_0^r\int \, \Big[ (- k) \vee \mathcal{L}_{g, v} (f, s, \omega(s), \mu_s) \wedge k \Big] \,m (ds, df),
\end{aligned}$$ where $\mu_s := \mu \circ X^{-1}_s$.
**Lemma 16**. *Suppose that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} [(ii)]{.upright} holds. For every $k > 0$ and $r \in [0, T]$, the map $\mathsf{M}^k_r$ is continuous from $\Theta \times \mathscr{G}$ into $\mathbb{R}$, where we endow $\mathscr{G}$ with the subspace topology coming from the Wasserstein space $\mathcal{P}^\varrho(\Theta)$.*
*Proof.* Take a sequence $(\omega^n, m^n, \mu^n)_{n = 0}^\infty \subset \Theta \times \mathscr{G}$ with $(\omega^n, m^n, \mu^n) \to (\omega^0, m^0, \mu^0)$ and notice that $$\begin{aligned}
\big| \mathsf{M}^k_r (\omega^n&, m^n, \mu^n) - \mathsf{M}^k_r (\omega^0, m^0, \mu^0) \big|
\\&\leq \big| g ({_\mathbb{V}}\langle \omega^n(r), v\rangle_{\mathbb{V}^*}) - g ({_\mathbb{V}}\langle \omega^0 (r), v \rangle_{\mathbb{V}^*}) \big|
\\&\qquad+ \int_0^r \sup_{f \in F} \big| (-k) \vee \mathcal{L}_{g, v} (f, s, \omega^n (s), \mu^n_s) \wedge k
- (-k) \vee \mathcal{L}_{g, v} (f, s, \omega^0 (s), \mu^0_s) \wedge k\big| ds
\\&\qquad + \Big| \int_0^r \int (- k) \vee \mathcal{L}_{g, v} (f, s, \omega^0 (s), \mu^0_s) \wedge k\, \big( m^n (ds, df) - m^0 (ds, df) \big)\Big|
\\&=: I_n + II_n + III_n.
\end{aligned}$$ First, as $\omega^n \to \omega^0$ in $\Omega$, we have $\omega^n (r) \to \omega^0 (r)$ in $\mathbb{V}$, which immediately implies that $I_n \to 0$.
Next, $\omega^n \to \omega^0$ in $\Omega$ also implies that $\|\omega^n - \omega^0\|_\mathbb{Y}\to 0$ in Lebesgue measure. By the continuity of $[0, T] \times \Omega \ni (s, \omega) \mapsto \omega (s) \in \mathbb{V}$, it follows that $\mu^n_s \to \mu^0_s$ in $\mathcal{P}^{2 \eta}_\varrho(\mathbb{H})$ for each $s \in [0, T]$. Using Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (ii) and Berge's maximum theorem ([@charalambos2013infinite Theorem 17.31]), we obtain that the integrand in $II_n$ converges to zero in Lebesgue measure. As it is bounded (by $2k$), the dominated convergence theorem yields that $II_n \to 0$.
Finally, notice that the integrand in $III_n$ is continuous in the $F$-variable for each fixed time point. Consequently, $III_n \to 0$ follows from [@SPS_1981__15__529_0 Corollary 2.9]. The proof is complete. ◻
Take $0 \leq s < t \leq T$ and $\mathfrak{t} \in \mathcal{T}_s$. For $\mu \in \mathcal{P}(\Theta)$, define $$\mathsf{Z}^k (\mu) := \iint \big[ \mathsf{M}^k_t (\omega, m, \mu) - \mathsf{M}^k_s (\omega, m, \mu) \big] \mathfrak{t} (\omega, m) \mu (d \omega, dm),$$ and $$\mathsf{Z}(\mu) := \liminf_{k \to \infty} \mathsf{Z}^k (\mu).$$ By virtue of [@bogachev Theorem 8.10.61], $\mathsf{Z}^k$ and $\mathsf{Z}$ are Borel maps. Thanks to [\[eq: cond growth drift\]](#eq: cond growth drift){reference-type="eqref" reference="eq: cond growth drift"}, we obtain $$\label{eq: bound}
\begin{split}
\big| \mathcal{L}_{g, v} (f, s, v, \mu_s) \big|^{\gamma} \leq C \Big( (1 + {\mathcal{N}}(v)) (1 + \|v\|^\beta_\mathbb{H}) + E^\mu \big [ \|X_s\|^\beta_\mathbb{H}\big] \Big).
\end{split}$$ As $Q^n \in \mathscr{J}(x^n)$ and $Q^0 \in \mathscr{J}(x^0)$, we obtain, for $Q^n$-a.a. and $Q^0$-a.a. $\mu \in \mathcal{P}(\Theta)$, $$\begin{aligned}
\label{eq: oZ after bound}
\mathsf{Z}(\mu) = E^\mu \Big[ \Big( g ( {_\mathbb{V}}\langle X_t , v \rangle_{\mathbb{V}^*} ) - g ({_\mathbb{V}}\langle X_s, v \rangle_{\mathbb{V}^*} ) - \int_s^t \int \mathcal{L}_{g, v} (f, r, X_r, \mu_r) M (dr, df) \Big)\, \mathfrak{t}\ \Big].
\end{aligned}$$ In the following, we prove that $$E^{Q^0} \big[ | \mathsf{Z}| \big] = 0.$$ By Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}, as $\mathscr{B}, \mathcal{C}^2_b$ and $\mathcal{T}_s$ are countable, this implies that $Q^0$-a.a. $\mu \in \mathcal{P}(\Theta)$ satisfy (iii) from Definition [Definition 4](#def: C MK){reference-type="ref" reference="def: C MK"}, which will complete the proof.
Notice that $Q^n$-a.s. $\mathsf{Z}= 0$, as $Q^n (\mathcal{C}^0 (x^n)) = 1$. Thus, by the triangle inequality, we observe that $$\label{eq: main triangle}
\begin{split}
E^{Q^0} \big [ | \mathsf{Z}| \big] &\leq E^{Q^n} \big[ | \mathsf{Z}^k- \mathsf{Z}| \big] + \big| E^{Q^n} \big [ | \mathsf{Z}^k | \big ] - E^{Q^0} \big[ | \mathsf{Z}^k | \big] \big| + \big | E^{Q^0} \big [ | \mathsf{Z}^k | \big ] - E^{Q^0} \big [ | \mathsf{Z}| \big ]\big |
\\&=: I_{n, k} + II_{n, k} + III_{n, k}.
\end{split}$$
By virtue of [@bogachev Theorem 8.10.61] and Lemma [Lemma 16](#lem: oM mit k cont){reference-type="ref" reference="lem: oM mit k cont"}, $II_{n, k} \to 0$ as $n \to \infty$ for every $k > 0$. Using [\[eq: bound\]](#eq: bound){reference-type="eqref" reference="eq: bound"}, we obtain $$\begin{aligned}
I_{n, k} &\leq \int E^\mu \Big[ \int_0^T \int \big| \mathcal{L}_{g, v} (f, r, X_r, \mu_r) \big| \mathds{1}_{\{|\mathcal{L}_{g, v} (f, r, X_r, \mu_r)| > k\}} M(dr, df) \Big] Q^n (d\mu)
\\&\leq \frac{C}{k^{\gamma - 1}}, \end{aligned}$$ where $C > 0$ depends on $\varkappa$ from $\mathscr{J}$ but is independent of $n$. Similarly, we obtain that $$III_{n, k} \leq \frac{C}{k^{\gamma - 1}}.$$ Hence, $I_{n, k} + III_{n, k}\to 0$ as $k \to \infty$ uniformly in $n$. Thus, choosing first a large $k$ and taking then $n \to \infty$ shows that $I_{n, k} + II_{n, k} + III_{n, k}$ can be made arbitrarily small, which entails that $Q^0$-a.s. $\mathsf{Z}= 0$. In summary, $Q^0$-a.a. realizations satisfy (iii) from Definition [Definition 4](#def: C MK){reference-type="ref" reference="def: C MK"} and consequently, $Q^0 \in \mathcal{R}^0 (x^0) \subset \mathcal{R}^0 (\mathscr{K})$. The proof is complete. ◻
We record another observation.
**Lemma 17**. *Suppose that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} holds. For every $x \in \mathbb{H}$ and $n \in \mathbb{N}$, the sets $\mathcal{C}^n (x)$ and $\mathcal{R}^n (x)$ are compact in $\mathcal{P}^\varrho(\Theta^n)$ and $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$, respectively.*
*Proof.* Similar to Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"}, one proves that the set $\mathcal{C}^n (x)$ is relatively compact in $\mathcal{P}^\varrho(\Theta^n)$. Further, a martingale problem argument (related to Lemma [Lemma 11](#lem: mg chara N){reference-type="ref" reference="lem: mg chara N"}, see also the proof of Lemma [Lemma 15](#lem: C comp){reference-type="ref" reference="lem: C comp"}) shows that $\mathcal{C}^n (x)$ is closed in $\mathcal{P}^\varrho(\Theta^n)$. We omit the details for brevity. In summary, $\mathcal{C}^n (x)$ is compact. These claims transfer directly to $\mathcal{R}^n (x)$ by the continuity of $P \mapsto P \circ \mathscr{S}_n^{-1}$ from $\mathcal{P}^\varrho(\Theta^n)$ into $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$, cf. [@LakSPA15 Proposition A.1]. ◻
## An existence result {#sec: existence}
Fix an $N \in \mathbb{N}$ and a filtered probability space $(\Sigma, \mathcal{A}, (\mathcal{A}_t)_{t \in [0, T]}, P)$ that supports independent cylindrical Brownian motions $W^1, \dots, W^N$ and predictable kernels $\mathfrak{q}^1, \dots, \mathfrak{q}^N$ from $[0, T] \times \Sigma$ into $F$. We define the product setup $$\Psi := \Sigma \times \Omega^N, \quad \mathcal{G} := \mathcal{A} \otimes \mathcal{F}^N, \quad \mathcal{G}_t := \bigcap_{s > t} \mathcal{A}_s \otimes \mathcal{F}^N_s.$$ With a slight abuse of notation, let $X = (X^1, \dots, X^N)$ be the projection onto the second coordinate, and extend $W^1, \dots, W^N$ in the obvious manner to the product setup. The following can be seen as an extension of [@GRZ Theorem 4.6] to a setting with random coefficients. The result is very much in the spirit of the seminal paper [@jacod1981weak], whose ideas we also adapt.
**Proposition 18**. *Assume that Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} holds and fix $x \in \mathbb{H}$. Then, there exists a probability measure $Q$ on $(\Psi, \mathcal{G}, (\mathcal{G}_{t})_{t \in [0, T]})$ such that $W^1, \dots, W^N$ are independent cylindrical Brownian motions under $Q$ and $Q$-a.s., for all $k = 1, \dots, N, t \in [0, T]$ and $v \in \mathbb{V}^*$, $$\label{eq: existence identity} \begin{split}
{_\mathbb{V}}\langle X^k_t, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x, v \rangle_{\mathbb{V}^*} &+ \int_0^t \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big \langle \int b (f, s, X^k_s, \mathscr{X}_N (X_s)) \mathfrak{q}^k_s (df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big \langle \int_0^t \bar{\sigma}(\mathfrak{q}_s^k, s, X^k_s, \mathscr{X}_N (X_s)) d W^k_s, v \Big\rangle_\mathbb{H}.
\end{split}$$ Moreover, $Q \circ \mathscr{S}_N (X, K)^{-1} \in \mathscr{J}(x)$ (where we take $K := (\delta_{\mathfrak{q}^1_t} dt, \dots, \delta_{\mathfrak{q}^N_t} dt)$) and there exists a probability transition kernel $\mathscr{Q}$ such that $$\begin{aligned}
\label{eq: kernel dec}
Q (d z, d \omega) = \mathscr{Q} (z, d \omega) P (dz)\end{aligned}$$ and $\mathscr{Q} (\, \cdot\,, G)$ is $P$-a.s. $\mathcal{A}_t$-measurable for every $G \in \mathcal{F}^N_t$.*
*Proof.* The proof of this proposition is based on ideas from [@GRZ Theorem 4.6] and [@jacod1981weak Theorem 1.8].
*Step 0:* Before we can start the proof, we briefly recall the concept of *weak-strong convergence* as, for instance, studied in [@SPS_1981__15__529_0]. We say that a sequence $(Q^n)_{n = 1}^\infty$ of probability measures on $(\Psi, \mathcal{G})$ converges in the *weak-strong sense* to a probability measure $Q^0$ on $(\Psi, \mathcal{G})$ if $$E^{Q^n} \big[ g \big] \to E^{Q^0} \big[ g \big]$$ for all bounded Carathéodory functions from $\Psi$ into $\mathbb{R}$, i.e., all bounded functions $\Psi \to \mathbb{R}$ that are $\mathcal{A}$-measurable in the $\Sigma$-variable and continuous in the $\Omega^N$-variable. We refer also to [@CPS23 Section 2] for some discussion (and a collection of useful results) on weak-strong convergence.
*Step 1:* Let $\{\ell_n \colon n \in \mathbb{N}\} \subset \mathbb{V}^*$ be an orthonormal basis of $\mathbb{H}$ such that $$\| \Pi_n x \|_\mathbb{V}\leq C \|x\|_\mathbb{V}, \quad \forall \, n \in \mathbb{N}, x \in \mathbb{V},$$ where $$\Pi_n x := \sum_{k = 1}^n {_\mathbb{V}} \langle x, \ell_k \rangle_{\mathbb{V}^*} \ell_k, \quad x \in \mathbb{V}.$$ Such an orthonormal basis exists thanks to [@GRZ Lemma 4.4]. Furthermore, let $\{ g_n \colon n \in \mathbb{N}\} \subset \mathbb{U}$ be an orthonormal basis of $\mathbb{U}$. Now, set $$\begin{aligned}
\mathbb{H}_n &:= \operatorname{span} \, \{ \ell_1, \dots, \ell_n \} \subset \mathbb{V}^* \subset \mathbb{Y}\subset \mathbb{H}\subset \mathbb{V}, \\
\mathbb{U}_n &:= \operatorname{span} \, \{ g_1, \dots, g_n\} \subset \mathbb{U}.\end{aligned}$$ We define the coefficients $b_n^k \colon \Sigma \times [0, T] \times \mathbb{H}_n^N \to \mathbb{H}_n$ and $\sigma_n^k \colon \Sigma \times [0, T] \times \mathbb{H}_n^N \to L (\mathbb{U}_n; \mathbb{H}_n)$ by $$\begin{aligned}
b_n^k (w, t, x_1, \dots, x_N) &:= \Pi_n\, \int b (f, t, x_k, \mathscr{X}_N (x)) \mathfrak{q}^k_t (df) (w), \\
\sigma_n^k (w, t, x_1, \dots, x_N) &:= \Pi_n\, \bar{\sigma}(\mathfrak{q}^k_t (w), t, x_k, \mathscr{X}_N (x)),\end{aligned}$$ where $x = (x_1, \dots, x_N) \in \mathbb{H}_n^N$. Let us summarize some properties of $b_n^k$ and $\sigma^k_n$. First, for every $(w, t) \in \Sigma \times [0, T]$, the maps $$x \mapsto b^k_n (w, t, x), \quad x \mapsto \sigma^k_n (w, t, x)$$ are continuous by Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (ii). Moreover, thanks to Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (iii), we also have $$\label{eq: coercivity finite}
\begin{split}
\langle b^k_n (w, t, x), x_k \rangle_{\mathbb{H}_n}
&\leq \lambda \Big( 1 + \|x_k\|_{\mathbb{H}_n}^2 + \frac{1}{N} \sum_{i = 1}^N \|x_i\|^2_{\mathbb{H}_n} \Big) - {\mathcal{N}}(x_k), \\
\| \sigma^k_n (w, t, x) \|_{L_2 (\mathbb{U}_n; \mathbb{H}_n)}^2 &\leq \lambda \Big( 1 + \|x_k\|_{\mathbb{H}_n}^2 + \frac{1}{N} \sum_{i = 1}^N \|x_i\|^2_{\mathbb{H}_n} \Big).
\end{split}$$ For $m > 0$, let $\phi_m \in C^\infty_c (\mathbb{R}; [0, 1])$ be a cutoff function such that $$\phi_m (y) = \begin{cases} 1, & |y| \leq m, \\ 0, & |y| \geq 2m.\end{cases}$$ We also set $$\begin{aligned}
b^k_{n, m} (w, t, x) := \phi_m (\|x\|_{\mathbb{H}_n^N})\, b^k_n (w, t, x), \quad \sigma^k_{n, m} (w, t, x) := \phi_m (\|x\|_{\mathbb{H}_n^N})\, \sigma^k_n (w, t, x), \end{aligned}$$ where $\|x\|_{\mathbb{H}_n^N} = \sum_{i = 1}^N \|x_i\|_{\mathbb{H}_n}$. Notice that $b^k_{n, m}$ and $\sigma^k_{n, m}$ have the same continuity properties as $b^k_n$ and $\sigma^k_n$. Furthermore, $b^k_{n, m}$ and $\sigma^k_{n, m}$ are bounded, which follows from Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (ii).
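For concreteness, one admissible choice of the cutoff $\phi_m$ above (a standard construction; any function with the stated properties works equally well) is $$\phi_m (y) := \psi ( |y| / m ), \qquad \psi (r) := 1 - \int_{-\infty}^r \chi (u - 1)\, du,$$ where $\chi \in C^\infty_c (\mathbb{R}; [0, \infty))$ has support in $(0, 1)$ and satisfies $\int \chi (u)\, du = 1$. Then $\psi \equiv 1$ on $(-\infty, 1]$, $\psi \equiv 0$ on $[2, \infty)$, and $\phi_m$ is smooth because $\psi$ is constant in a neighborhood of the origin.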
*Step 2:* We deduce from [@jacod1981weak Theorem 1.8] that there exists a probability measure $Q^{n, m}$ on the product space $(\Psi, \mathcal{G}, (\mathcal{G}_{t})_{t \in [0, T]})$ such that $Q^{n, m}$ admits a decomposition of the type [\[eq: kernel dec\]](#eq: kernel dec){reference-type="eqref" reference="eq: kernel dec"},[^5] $W^1, \dots, W^N$ are independent cylindrical Brownian motions under $Q^{n, m}$ (which follows from the kernel decomposition, cf. [@jacod1981weak Lemma 2.17]) and $Q^{n, m}$-a.s., for all $k = 1, \dots, N$ and $t \in [0, T]$, $$\begin{aligned}
X^k_t = \Pi_n x + \int_0^t b^k_{n, m} (s, X_s) ds + \int_0^t \sigma^k_{n, m} (s, X_s) d W^{k, n}_s, \end{aligned}$$ where $$W^{k, n} := \sum_{i = 1}^n\, \langle W^k, g_i \rangle_\mathbb{U}\, g_i.$$ In particular, each $X^k$ is $Q^{n, m}$-a.s. $\mathbb{H}_n$-valued and hence finite dimensional.
*Step 3:* Thanks to [\[eq: coercivity finite\]](#eq: coercivity finite){reference-type="eqref" reference="eq: coercivity finite"}, one may follow the proof of [@GRZ Theorem C.3] to conclude that the family $\{Q^{n, m} \circ X^{-1} \colon m \in \mathbb{N}\} \subset \mathcal{P}(\Omega^N)$ is tight. Consequently, as the $\Sigma$-marginal of each measure from $(Q^{n, m})_{m = 1}^\infty$ coincides with $P$, we deduce from [@CPS23 Theorem 2.5] that there exists a subsequence $(Q^{n, N_m})_{m = 1}^\infty$ that converges in the weak-strong sense to a probability measure $Q^{n}$. As explained in the proof of [@jacod1981weak (3.25), p. 190], the limit $Q^n$ also admits a decomposition of the type [\[eq: kernel dec\]](#eq: kernel dec){reference-type="eqref" reference="eq: kernel dec"}, and $W^1, \dots, W^N$ remain independent cylindrical Brownian motions under $Q^n$, cf. [@jacod1981weak Lemma 2.17].
*Step 4:* For $g \in C^2_b (\mathbb{R}; \mathbb{R})$, $u \in \mathbb{U}_n$ and $h \in \mathbb{H}_n$, we set $$\begin{aligned}
\mathsf{K}^k_{n, m} &(w, \omega, t)
\\&:= g ( \langle W^{k, n}_t (w), u \rangle_{\mathbb{U}_n} + \langle X^k_t (\omega), h \rangle_{\mathbb{H}_n} ) - g( \langle W^{k, n}_0 (w), u \rangle_{\mathbb{U}_n} + \langle X^k_0 (\omega), h \rangle_{\mathbb{H}_n} ) \\&\qquad- \int_0^t \Big( g' ( \langle W^{k, n}_s (w) , u \rangle_{\mathbb{U}_n} + \langle X^k_s (\omega), h \rangle_{\mathbb{H}_n} ) \langle b^k_{n, m} (w, s, X_s (\omega)), h \rangle_{\mathbb{H}_n}
\\&\qquad\hspace{0.75cm}+ \frac{1}{2}\, g'' ( \langle W^{k, n}_s (w), u \rangle_{\mathbb{U}_n} + \langle X^k_s (\omega), h \rangle_{\mathbb{H}_n} ) \| u + (\sigma^k_{n, m})^* (w, s, X_s (\omega)) h \|^2_{\mathbb{U}_n} \Big) ds.\end{aligned}$$ Here, $\mathsf{K}^k_{n, \infty}$ denotes the analogous expression with $b^k_{n}$ and $\sigma^k_{n}$ in place of $b^k_{n, m}$ and $\sigma^k_{n, m}$. Notice that, for each $t \in [0, T]$, the map $(w, \omega) \mapsto \mathsf{K}^k_{n, \infty} (w,\omega, t)$ is a Carathéodory function. Furthermore, whenever $(\omega^m)_{m = 1}^\infty \subset C ([0, T]; \mathbb{H}^N_n)$ is a sequence that converges uniformly, it also follows that $$\big| \mathsf{K}^k_{n, m} (w, \omega^m, t) - \mathsf{K}^k_{n, \infty} (w, \omega^m, t) \big| \to 0, \quad \text{as } m \to \infty,$$ where we can apply the dominated convergence theorem thanks to the continuity of $b^k_n$ and $\sigma^k_n$. Using [\[eq: coercivity finite\]](#eq: coercivity finite){reference-type="eqref" reference="eq: coercivity finite"}, it follows from a standard Gronwall argument (see, e.g., [@GRZ pp. 1762 -- 1763]) that there exists a constant $C > 0$, independent of $n, m$, such that $$\begin{aligned}
\label{eq: second moment}
E^{Q^{n, m}} \Big[ \sup_{s \in [0, T]} \|X_s\|^2_{\mathbb{H}_n^N} \Big] \leq C. \end{aligned}$$ Notice also that, by Itô's formula, $$\begin{aligned}
\label{eq: ito iden}
\mathsf{K}^k_{n, m} = \int_0^\cdot g' ( \langle W^{k, n}_s, u \rangle_{\mathbb{U}_n} + \langle X^k_s, h \rangle_{\mathbb{H}_n} ) \langle u + (\sigma^k_{n, m})^* (s, X_s) h, d W^{k, n}_s \rangle_{\mathbb{U}_n}.\end{aligned}$$ Hence, taking [\[eq: second moment\]](#eq: second moment){reference-type="eqref" reference="eq: second moment"}, [\[eq: ito iden\]](#eq: ito iden){reference-type="eqref" reference="eq: ito iden"} and the second line from [\[eq: coercivity finite\]](#eq: coercivity finite){reference-type="eqref" reference="eq: coercivity finite"} into consideration, we may conclude that (for each $n \in \mathbb{N}$) the family $\{ Q^{n, m} \circ \mathsf{K}^k_{n, m} (\, \cdot\,,\, \cdot\, , s)^{-1} \colon s \in [0, T], m \in \mathbb{N}\}$ is uniformly integrable.
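Let us briefly sketch the latter claim (a routine estimate): by the Itô isometry applied to [\[eq: ito iden\]](#eq: ito iden){reference-type="eqref" reference="eq: ito iden"}, the boundedness of $g'$ and the second line of [\[eq: coercivity finite\]](#eq: coercivity finite){reference-type="eqref" reference="eq: coercivity finite"}, $$E^{Q^{n, m}} \big[ | \mathsf{K}^k_{n, m} (\,\cdot\,, \,\cdot\,, s) |^2 \big] \leq \|g'\|^2_\infty\, E^{Q^{n, m}} \Big[ \int_0^T \Big( 2 \|u\|^2_{\mathbb{U}_n} + 2 \lambda \Big( 1 + \|X^k_r\|^2_{\mathbb{H}_n} + \frac{1}{N} \sum_{i = 1}^N \|X^i_r\|^2_{\mathbb{H}_n} \Big) \|h\|^2_{\mathbb{H}_n} \Big) dr \Big],$$ and the right-hand side is bounded uniformly in $s \in [0, T]$ and $m \in \mathbb{N}$ thanks to [\[eq: second moment\]](#eq: second moment){reference-type="eqref" reference="eq: second moment"}. Hence, the family is bounded in $L^2$, which yields uniform integrability.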
In summary, we deduce from [@CPS23 Theorem 3.20] that each $\mathsf{K}^k_{n, \infty}$ is a $Q^n$-martingale, and [@JS Theorem II.2.42] and a representation result for local martingales (e.g., [@OndRep Corollary 6]) show that, under $Q^n$, $$\begin{aligned}
X^k = \Pi_n x + \int_0^\cdot b^k_{n} (s, X_s) ds + \int_0^\cdot \sigma^k_{n} (s, X_s) d W^{k, n}_s, \end{aligned}$$ cf. also [@jacod80 Theorem 6.3].
*Step 5:* Next, we show that $Q^n \circ \mathscr{S}_N (X, K)^{-1} \in \mathscr{J}(x)$. Fix some $p \geq 1$. By Itô's formula, [\[eq: coercivity finite\]](#eq: coercivity finite){reference-type="eqref" reference="eq: coercivity finite"} and Young's inequality, we obtain that $$\label{eq: big ineq ito} \begin{split}
\|X^k_t\|^{2p}_{\mathbb{H}} &= \|\Pi_n x \|^{2p}_{\mathbb{H}} + 2p \int_0^t \|X^k_s\|^{2 (p - 1)} \langle b^k_n (s, X_s), X^k_s \rangle_\mathbb{H}ds
\\&\qquad + p \int_0^t \|X_s^k\|^{2 (p - 1)}_\mathbb{H}\| \sigma^k_n (s, X_s) \|^2_{L_2 (\mathbb{U}_n; \mathbb{H}_n)} ds
\\&\qquad + 2 p ( p - 1 ) \int_0^t \|X^k_s\|^{2 ( p - 2 )} \| (\sigma^k_n)^* (s, X_s) X^k_s \|^2_{\mathbb{U}} ds
\\&\qquad + 2 p \int_0^t \| X^k_s \|^{2 (p - 1)}_\mathbb{H}\langle X^k_s, \sigma^k_n (s, X_s) d W^{k, n}_s \rangle_\mathbb{H}
\\&\leq \|x \|^{2p}_{\mathbb{H}} + \int_0^t \|X^k_s\|^{2 (p - 1)} \Big[3p \lambda\, \Big( 1 + \|X^k_s\|^2_\mathbb{H}+ \frac{1}{N} \sum_{i = 1}^N \|X^i_s\|^2_{\mathbb{H}} \Big) - 2p {\mathcal{N}}(X^k_s) \Big] ds
\\&\qquad + 2 p ( p - 1 ) \int_0^t \|X^k_s\|^{2 ( p - 1 )} \, \lambda\, \Big( 1 + \|X^k_s\|^2_\mathbb{H}+ \frac{1}{N} \sum_{i = 1}^N \|X^i_s\|^2_{\mathbb{H}} \Big) ds
\\&\qquad + 2 p \int_0^t \| X^k_s \|^{2 (p - 1)}_\mathbb{H}\langle X^k_s, \sigma^k_n (s, X_s) d W^{k, n}_s \rangle_\mathbb{H}
\\&\leq \|x \|^{2p}_{\mathbb{H}} + \int_0^t \Big[\lambda p (2p + 1) \Big( 1 + \Big(2 + \frac{1}{p}\Big) \|X^k_s\|^{2p}_\mathbb{H}+ \frac{p-1}{Np} \sum_{i = 1}^N \|X^i_s\|^{2p}_{\mathbb{H}} \Big) - 2p {\mathcal{N}}_p (X^k_s) \Big] ds
\\&\qquad + 2 p \int_0^t \| X^k_s \|^{2 (p - 1)}_\mathbb{H}\langle X^k_s, \sigma^k_n (s, X_s) d W^{k, n}_s \rangle_\mathbb{H}.
\end{split}$$ Consequently, using Burkholder's inequality ([@krylov_rozovskii Theorem 2.5]), we obtain that $$\label{eq: first main to kappa} \begin{split}
\frac{1}{N} \sum_{k = 1}^N E^{Q^n} \Big[ \sup_{s \in [0, t]} \|X^k_s\|^{2p}_\mathbb{H}\Big]
&\leq \|x\|^{2p}_\mathbb{H}+ \int_0^t \lambda p (2p + 1) \Big( 1 + \frac{3}{N} \sum_{i = 1}^N E^{Q^n} \big[ \|X^i_s\|^{2p}_{\mathbb{H}} \big] \Big) ds
\\&\qquad + \frac{6 p}{N} \sum_{k = 1}^N E^{Q^n} \Big[ \Big(\int_0^t \|X^k_s\|^{4 p - 2}_\mathbb{H}\| \sigma^k_n (s, X_s) \|^2_{L_2 (\mathbb{U}_n; \mathbb{H}_n)} ds \Big)^{1/2} \Big].
\end{split}$$ Using again [\[eq: coercivity finite\]](#eq: coercivity finite){reference-type="eqref" reference="eq: coercivity finite"} and Young's inequality, we obtain that $$\begin{aligned}
E^{Q^n} & \Big[ \Big(\int_0^t \|X^k_s\|^{4 p - 2}_\mathbb{H}\| \sigma^k_n (s, X_s) \|^2_{L_2 (\mathbb{U}_n; \mathbb{H}_n)} ds \Big)^{1/2} \Big]
\\&\leq E^{Q^n} \Big[ \sup_{r \in [0, t]} \|X^k_r\|^p_\mathbb{H}\Big(\int_0^t \|X^k_s\|^{2 (p - 1)}_\mathbb{H}\| \sigma^k_n (s, X_s) \|^2_{L_2 (\mathbb{U}_n; \mathbb{H}_n)} ds \Big)^{1/2} \Big]
\\&\leq \frac{1}{12 p} E^{Q^n} \Big[ \sup_{r \in [0, t]} \|X^k_r\|^{2p}_\mathbb{H}\Big] + 3 p E^{Q^n} \Big[ \int_0^t \|X^k_s\|^{2 (p - 1)}_\mathbb{H}\| \sigma^k_n (s, X_s) \|^2_{L_2 (\mathbb{U}_n; \mathbb{H}_n)} ds \Big]
\\&\leq \frac{1}{12 p} E^{Q^n} \Big[ \sup_{r \in [0, t]} \|X^k_r\|^{2p}_\mathbb{H}\Big] + 3 p E^{Q^n} \Big[ \int_0^t \lambda \Big( 1 + \Big(2 + \frac{1}{p}\Big) \|X^k_s\|^{2p}_\mathbb{H}+ \frac{p-1}{Np} \sum_{i = 1}^N \|X^i_s\|^{2p}_{\mathbb{H}} \Big) ds \Big].\end{aligned}$$ Together with [\[eq: first main to kappa\]](#eq: first main to kappa){reference-type="eqref" reference="eq: first main to kappa"}, we conclude that $$\begin{aligned}
\frac{1}{N} \sum_{k = 1}^N E^{Q^n} \Big[ \sup_{s \in [0, t]} \|X^k_s\|^{2p}_\mathbb{H}\Big] &\leq 2 \|x\|^{2p}_\mathbb{H}+ \int_0^t 2 \lambda (p (2p + 1) + 16p^2)\Big( 1 + \frac{3}{N} \sum_{i = 1}^N E^{Q^n} \big[ \|X^i_s\|^{2p}_{\mathbb{H}} \big] \Big) ds
\\
&\leq 2 \|x\|^{2p}_\mathbb{H}+ \int_0^t6 \lambda p (1 + 18 p)\Big( 1 + \frac{1}{N} \sum_{i = 1}^N E^{Q^n} \Big[ \sup_{r \in [0, s]} \|X^i_r\|^{2p}_{\mathbb{H}} \Big] \Big)ds.\end{aligned}$$ Now, we deduce from Gronwall's lemma[^6] that $$\begin{aligned}
\frac{1}{N} \sum_{k = 1}^N E^{Q^n} \Big[ \sup_{s \in [0, T]} \|X^k_s\|^{2p}_\mathbb{H}\Big] &\leq (1 + 2 \|x\|^{2p}_\mathbb{H}) \exp \big\{ 6 \lambda p T (1 + 18 p) \big\}.
\end{aligned}$$ Finally, taking expectation in [\[eq: big ineq ito\]](#eq: big ineq ito){reference-type="eqref" reference="eq: big ineq ito"}, and using the martingale property of the Itô integral, we observe that $$\begin{aligned}
\frac{1}{N} \sum_{k = 1}^N E^{Q^n} \big[ \|X^k_T\|^{2p}_\mathbb{H}\big] \leq \|x\|^{2p}_\mathbb{H}&+ \int_0^T \lambda p (2p + 1) \Big(1 + \frac{3}{N} \sum_{i = 1}^N E^{Q^n} \big[ \|X^i_s\|^{2p}_\mathbb{H}\big] \Big) ds \\&- \frac{2p}{N} \sum_{k = 1}^N E^{Q^n} \Big[ \int_0^T {\mathcal{N}}_p (X^k_s) ds \Big], \end{aligned}$$ and consequently, $$\begin{aligned}
\frac{1}{N} \sum_{k = 1}^N &E^{Q^n} \Big[ \int_0^T {\mathcal{N}}_p (X^k_s) ds \Big]
\\&\leq \frac{1}{2p} \Big[ \|x\|^{2p}_\mathbb{H}+ \lambda p T (2p + 1) ( 1 + 3 (1 + 2 \|x\|^{2p}_\mathbb{H}) \exp \big\{ 6 \lambda p T (1 + 18 p) \big\} ) \Big].\end{aligned}$$ In summary, putting these estimates together, it follows that $Q^n \circ \mathscr{S}_N(X, K)^{-1} \in \mathscr{J}(x)$.
*Step 6:* We are now in a position to sketch the final step of the proof. First of all, it follows similarly to Step 1 of the proof of Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"} that the family $\{ Q^n \circ X^{-1} \colon n \in \mathbb{N}\}$ is tight in $\mathcal{P}(\Omega^N)$. Thus, as in Step 3 of this proof, there exists a subsequence $(Q^{M_n})_{n = 1}^\infty$ that converges to a probability measure $Q$ in the weak-strong sense. As before, $W^1, \dots, W^N$ remain independent cylindrical Brownian motions under $Q$ and the measure $Q$ admits the decomposition [\[eq: kernel dec\]](#eq: kernel dec){reference-type="eqref" reference="eq: kernel dec"} (see again [@jacod1981weak p. 190]). Now, by virtue of Lemma [Lemma 11](#lem: mg chara N){reference-type="ref" reference="lem: mg chara N"}, we can use a martingale problem argument as in Step 4 above (see also Step 3 from the proof of [@GRZ Theorem 4.5]) to conclude that [\[eq: existence identity\]](#eq: existence identity){reference-type="eqref" reference="eq: existence identity"} holds under $Q$. We omit the details for brevity. Finally, $Q \circ \mathscr{S}_N (X, K)^{-1} \in \mathscr{J}(x)$ follows from Lemma [Lemma 14](#lem: M closed){reference-type="ref" reference="lem: M closed"} and Step 5. ◻
The following is an immediate consequence of Proposition [Proposition 18](#prop: existence){reference-type="ref" reference="prop: existence"}.
**Corollary 19**. *If Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} holds, then $\mathcal{C}^n (x) \not = \varnothing$ for all $n \in \mathbb{N}$ and $x \in \mathbb{H}$.*
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (i) {#proof-of-theorem-theo-main1-i}
First of all, $\mathcal{R}^n (x^n)$ is nonempty by Corollary [Corollary 19](#coro: non empty){reference-type="ref" reference="coro: non empty"} and compact by Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"}. The compactness of $\mathcal{R}^0 (x^0)$ follows from Lemma [Lemma 15](#lem: C comp){reference-type="ref" reference="lem: C comp"}. Finally, anticipating the following section, the claim $\mathcal{R}^0 (x^0) \not = \varnothing$ follows from Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (ii). 0◻
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (ii) {#proof-of-theorem-theo-main1-ii}
Take a sequence $(x^n)_{n = 0}^\infty \subset \mathbb{H}$ such that $x^n \to x^0$ in $\|\cdot\|_\mathbb{H}$ and let $(Q^n)_{n = 1}^\infty$ be such that $Q^n \in \mathcal{R}^n (x^n)$. By Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"}, the set $\bigcup_{n = 1}^\infty \mathcal{R}^n (x^n) \subset \bigcup_{n, m = 1}^\infty \mathcal{R}^n (x^m) = \mathcal{R}( \{x^n \colon n \in \mathbb{N}\} )$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. Consequently, $(Q^n)_{n = 1}^\infty$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. It remains to prove that each of its accumulation points is in $\mathcal{R}^0 (x^0)$. To lighten our presentation, we prove that $Q^0\in \mathcal{R}^0 (x^0)$ whenever $Q^n \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. The argument is based on Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}.
First of all, $$\begin{aligned}
Q^0 ( \{ \mu \in \mathcal{P}(\Theta) \colon \mu \circ X^{-1}_0 = \delta_{x^0} \} ) = 1
\end{aligned}$$ follows as in the proof for Lemma [Lemma 15](#lem: C comp){reference-type="ref" reference="lem: C comp"}, i.e., (iii.a) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"} holds for $Q^0$-a.a. $\mu \in \mathcal{P}(\Theta)$.
Lemma [Lemma 14](#lem: M closed){reference-type="ref" reference="lem: M closed"} yields that $Q^0 \in \mathscr{J}(x^0)$. In particular, (iii.b) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"} holds for $Q^0$-a.a. $\mu \in \mathcal{P}(\Theta)$.
It remains to investigate part (iii.c) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}. Take $s, t \in \mathbb{Q}_+ \cap [0, T]$ with $s < t$ and $\mathfrak{t} \in \mathcal{T}_s$. We define $\mathsf{Z}^k$ and $\mathsf{Z}$ as in the proof of Lemma [Lemma 15](#lem: C comp){reference-type="ref" reference="lem: C comp"}. In particular, recall [\[eq: oZ after bound\]](#eq: oZ after bound){reference-type="eqref" reference="eq: oZ after bound"}. We now prove that $Q^0$-a.s. $\mathsf{Z}= 0$. Once this is shown, Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}, together with the countability of $\mathscr{B}, \mathcal{C}^2_b$ and $\mathcal{T}_s$, implies that almost all realizations of $Q^0$ satisfy (iii.c) from Lemma [Lemma 10](#lem: mg chara){reference-type="ref" reference="lem: mg chara"}. In summary, we can then conclude that $Q^0 \in \mathcal{R}^0 (x^0)$.
The proof of $Q^0$-a.s. $\mathsf{Z}= 0$ is split into two steps. First, we prove that $$\begin{aligned}
\label{eq: first main oZ = 0}
\lim_{n \to \infty} E^{Q^n} \big[ | \mathsf{Z}| \big] = E^{Q^0} \big[ | \mathsf{Z}| \big], \end{aligned}$$ and afterwards, we show that $$\begin{aligned}
\label{eq: second main oZ = 0}
\lim_{n \to \infty} E^{Q^n} \big[ | \mathsf{Z}|^2 \big] = 0.\end{aligned}$$ Obviously, [\[eq: first main oZ = 0\]](#eq: first main oZ = 0){reference-type="eqref" reference="eq: first main oZ = 0"} and [\[eq: second main oZ = 0\]](#eq: second main oZ = 0){reference-type="eqref" reference="eq: second main oZ = 0"} yield that $E^{Q^0} [ | \mathsf{Z}| ] = 0$, which proves $Q^0$-a.s. $\mathsf{Z}= 0$.
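Indeed, by Jensen's inequality, [\[eq: second main oZ = 0\]](#eq: second main oZ = 0){reference-type="eqref" reference="eq: second main oZ = 0"} gives $$E^{Q^n} \big[ | \mathsf{Z}| \big] \leq \big( E^{Q^n} \big[ | \mathsf{Z}|^2 \big] \big)^{1/2} \to 0,$$ and then [\[eq: first main oZ = 0\]](#eq: first main oZ = 0){reference-type="eqref" reference="eq: first main oZ = 0"} yields $E^{Q^0} [ | \mathsf{Z}| ] = \lim_{n \to \infty} E^{Q^n} [ | \mathsf{Z}| ] = 0$.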
*Step 1: Proof of [\[eq: first main oZ = 0\]](#eq: first main oZ = 0){reference-type="eqref" reference="eq: first main oZ = 0"}.* By the triangle inequality, we observe that $$\label{eq: main triangle}
\begin{split}
| E^{Q^n} [ | \mathsf{Z}| ] - E^{Q^0} [ | \mathsf{Z}| ] | &\leq | E^{Q^n} [ | \mathsf{Z}| ] - E^{Q^n} [ | \mathsf{Z}^k | ] | \\&\hspace{1cm}+ | E^{Q^n} [ | \mathsf{Z}^k | ] - E^{Q^0} [ | \mathsf{Z}^k | ] |
\\&\hspace{1cm}+ | E^{Q^0} [ | \mathsf{Z}^k | ] - E^{Q^0} [ | \mathsf{Z}| ] |
\\&=: I_{n, k} + II_{n, k} + III_{n, k}.
\end{split}$$
By virtue of [@bogachev Theorem 8.10.61] and Lemma [Lemma 16](#lem: oM mit k cont){reference-type="ref" reference="lem: oM mit k cont"}, we have $II_{n, k} \to 0$ as $n \to \infty$ for every $k > 0$. Next, we discuss the term $I_{n, k}$. With $Q^n = P^n \circ \mathscr{S}_n^{-1}$, $Q^n \in \mathscr{J}(x^n)$ and [\[eq: bound\]](#eq: bound){reference-type="eqref" reference="eq: bound"}, we obtain that $$\begin{aligned}
I_{n, k} &\leq \frac{C}{n}\sum_{i = 1}^n E^{P^n} \Big[ \int_0^T \int \big| \mathcal{L}_{g, v} (f, r, X^i_r, \mathscr{X}_n (X_r)) \big| \mathds{1}_{\{| \mathcal{L}_{g, v} (f, r, X^i_r, \mathscr{X}_n (X_r))| > k\}} M^i (dr, df) \Big]
\\&\leq \frac{1}{k^{\gamma - 1}} \frac{C}{n}\sum_{i = 1}^n E^{P^n} \Big[ \int_0^T \int \big| \mathcal{L}_{g, v} (f, r, X^i_r, \mathscr{X}_n (X_r)) \big|^\gamma M^i (dr, df) \Big]
\\&\leq \frac{C}{k^{\gamma- 1}}, \end{aligned}$$ where we emphasize that the constant $C > 0$ is independent of $n$. Similarly, using that $Q^0 \in \mathscr{J}(x^0)$ and [\[eq: bound\]](#eq: bound){reference-type="eqref" reference="eq: bound"}, we obtain that $$\begin{aligned}
III_{n, k} \leq \frac{C}{k^{\gamma - 1}}.\end{aligned}$$ Again, the constant is independent of $n$.
In summary, $I_{n, k} + III_{n, k} \to 0$ as $k \to \infty$ uniformly in $n$. Thus, choosing first a large $k$ and taking then $n \to \infty$ shows that $I_{n, k} + II_{n, k} + III_{n, k}$ can be made arbitrarily small, which entails that [\[eq: first main oZ = 0\]](#eq: first main oZ = 0){reference-type="eqref" reference="eq: first main oZ = 0"} holds.
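Spelled out, the argument reads as follows: given $\varepsilon > 0$, first choose $k$ so large that $I_{n, k} + III_{n, k} \leq 2 C k^{1 - \gamma} < \varepsilon / 2$ for all $n \in \mathbb{N}$, and then $n_0 \in \mathbb{N}$ such that $II_{n, k} < \varepsilon / 2$ for all $n \geq n_0$; by [\[eq: main triangle\]](#eq: main triangle){reference-type="eqref" reference="eq: main triangle"}, this gives $| E^{Q^n} [ | \mathsf{Z}| ] - E^{Q^0} [ | \mathsf{Z}| ] | < \varepsilon$ for all $n \geq n_0$.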
*Step 2: Proof of [\[eq: second main oZ = 0\]](#eq: second main oZ = 0){reference-type="eqref" reference="eq: second main oZ = 0"}.* For $r \in [0, T]$ and $i = 1, \dots, n$, we set $$\mathsf{K}^{i, n}_r := g ( {_\mathbb{V}}\langle X^i_r, v \rangle_{\mathbb{V}^*}) - g ({_\mathbb{V}}\langle X^i_0, v\rangle_{\mathbb{V}^*}) - \int_0^r\int \mathcal{L}_{g, v} (f, u, X^i_u, \mathscr{X}_n (X_u)) M^i (du, df).$$ Notice that $$\begin{aligned}
E^{Q^n} \big[ \mathsf{Z}^2 \big] = \frac{1}{n^2} \sum_{i, j = 1}^n E^{P^n} \Big[ (\mathsf{K}^{i, n}_t - \mathsf{K}^{i, n}_s) (\mathsf{K}^{j, n}_t - \mathsf{K}^{j, n}_s) \, \mathfrak{t} (X^i, M^i) \mathfrak{t} (X^j, M^j) \Big].\end{aligned}$$ Recall from Definition [Definition 3](#def: C^n){reference-type="ref" reference="def: C^n"} that there are independent cylindrical Brownian motions $W^1, \dots, W^n$ such that $P^n$-a.s., for $k = 1, \dots, n$, $$\begin{aligned}
{_\mathbb{V}}\langle X^k_t, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}} \langle x^n, v\rangle_{\mathbb{V}^*} &+ \int_0^t \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X^k_s, \mathscr{X}_n (X_s)) \mathfrak{m}(s, M^k, df) , v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big\langle \int_0^t \bar{\sigma}(\mathfrak{m}(s, M^k), s, X^k_s, \mathscr{X}_n (X_s)) d W^k_s, v \Big \rangle_\mathbb{H}.
\end{aligned}$$ Thus, by Itô's formula, we obtain that $P^n$-a.s. $$\mathsf{K}^{i, n} = \int_0^\cdot g' ({_\mathbb{V}}\langle X^i_u, v \rangle_{\mathbb{V}^*}) \langle \bar{\sigma}^* (\mathfrak{m}(u, M^i), u, X^i_u, \mathscr{X}_n (X_u)) v, d W^i_u \rangle_\mathbb{U}.$$ Take $i \not = j$. By the independence of $W^i$ and $W^j$, the quadratic covariation of $\mathsf{K}^{i, n}$ and $\mathsf{K}^{j, n}$ vanishes. As $\mathsf{K}^{i, n}$ and $\mathsf{K}^{j, n}$ are square integrable $P^n$-martingales (see Lemma [Lemma 11](#lem: mg chara N){reference-type="ref" reference="lem: mg chara N"}), this means that the product $\mathsf{K}^{i, n} \mathsf{K}^{j, n}$ is also a $P^n$-martingale. Consequently, using that $\mathsf{K}^{i, n}, \mathsf{K}^{j, n}$ and $\mathsf{K}^{i, n}\mathsf{K}^{j, n}$ are $P^n$-martingales (recall that $\mathfrak{t}$ is $\mathcal{O}_s$-measurable), we obtain $$\begin{aligned}
E^{P^n} \Big[ (\mathsf{K}^{i, n}_t &- \mathsf{K}^{i, n}_s) (\mathsf{K}^{j, n}_t - \mathsf{K}^{j, n}_s) \, \mathfrak{t} (X^i, M^i) \mathfrak{t} (X^j, M^j) \Big]
\\&= E^{P^n} \Big[ (\mathsf{K}^{i, n}_t \mathsf{K}^{j, n}_t - \mathsf{K}^{i, n}_t \mathsf{K}^{j, n}_s - \mathsf{K}^{i, n}_s \mathsf{K}^{j, n}_t + \mathsf{K}^{i, n}_s \mathsf{K}^{j, n}_s) \, \mathfrak{t} (X^i, M^i) \mathfrak{t} (X^j, M^j) \Big]
\\&= E^{P^n} \Big[ (\mathsf{K}^{i, n}_s \mathsf{K}^{j, n}_s - \mathsf{K}^{i, n}_s \mathsf{K}^{j, n}_s - \mathsf{K}^{i, n}_s \mathsf{K}^{j, n}_s + \mathsf{K}^{i, n}_s \mathsf{K}^{j, n}_s) \, \mathfrak{t} (X^i, M^i) \mathfrak{t} (X^j, M^j) \Big] = 0.\end{aligned}$$ Finally, using Burkholder's inequality, $Q^n \in \mathscr{J}(x^n)$ and [\[eq: cond growth diffusion\]](#eq: cond growth diffusion){reference-type="eqref" reference="eq: cond growth diffusion"}, we get that $$\begin{aligned}
E^{Q^n} \big[ \mathsf{Z}^2 \big] &= \frac{1}{n^2} \sum_{i = 1}^n E^{P^n} \Big[ (\mathsf{K}^{i, n}_t - \mathsf{K}^{i, n}_s)^2 \, \mathfrak{t} (X^i, M^i)^2 \Big]
\\&\leq \frac{C}{n^2} \sum_{i = 1}^n E^{P^n} \Big[ (\mathsf{K}^{i, n}_t - \mathsf{K}^{i, n}_s)^2 \Big]
\\&\leq \frac{C}{n^2} \sum_{i = 1}^n E^{P^n} \Big[ \int_s^t \| \bar{\sigma}^* (\mathfrak{m}(u, M^i), u, X^i_u, \mathscr{X}_n (X_u)) \|_{L_2 (\mathbb{U}; \mathbb{H})}^2 du \Big]
\\&\leq \frac{C}{n^2} \sum_{i = 1}^n E^{P^n} \Big[ \int_0^T \lambda \Big( 1+ \|X^i_u\|^2_\mathbb{H}+ \frac{1}{n} \sum_{k = 1}^n \|X^k_u\|^2_\mathbb{H}\Big) du \Big]
\\&\leq \frac{C}{n} \Big( 1 + \frac{1}{n} \sum_{k = 1}^n E^{P^n} \Big[ \sup_{u \in [0, T]} \|X^k_u\|^2_\mathbb{H}\Big] \Big)
\\&\leq \frac{C}{n}.\end{aligned}$$ This bound proves [\[eq: second main oZ = 0\]](#eq: second main oZ = 0){reference-type="eqref" reference="eq: second main oZ = 0"}. The proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (ii) is complete. 0◻
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iii) {#proof-of-theorem-theo-main1-iii}
We start with an elementary fact that is probably well known (cf. [@LakSIAM17 p. 1657]) but for which we did not find a precise reference.
**Lemma 20**. *Let $(E, \mathsf{e})$ be a complete separable metric space and let $\mathcal{P}^\ell (E)$ be the corresponding $\ell$-Wasserstein space for $\ell \geq 1$. Assume that $g \colon E \to \mathbb{R}$ is an upper semicontinuous function such that $$\exists\, x_0 \in E \colon \ \mathsf{c} := \sup \Big\{ \frac{| g (x) |}{1 + \mathsf{e} (x, x_0)^\ell} \colon x \in E \Big\} < \infty.$$ Let $(\mu^n)_{n = 0}^\infty \subset \mathcal{P}^\ell (E)$ be a sequence such that $\mu^n \to \mu^0$ in $\mathcal{P}^\ell (E)$. Then, $$\limsup_{n \to \infty} E^{\mu^n} [ g ] \leq E^{\mu^0} [ g ].$$*
*Proof.* By Skorokhod's coupling theorem, on some probability space (whose expectation we denote by $E$), there are random variables $X^0, X^1, \dots$ with laws $\mu^0, \mu^1, \dots$ such that a.s. $X^n \to X^0$. Notice that $$Y^n := \mathsf{c}\, \big[ 1 + \mathsf{e} (X^n, x_0)^\ell\, \big] - g (X^n) \geq 0.$$ Hence, Fatou's lemma yields that $$\begin{aligned}
E \Big[ \liminf_{n \to \infty} Y^n \Big] \leq \liminf_{n \to \infty} E [ Y^n ].
\end{aligned}$$ As $g$ is presumed to be upper semicontinuous, we have a.s. $$\liminf_{n \to \infty} Y^n = \mathsf{c}\, \big[ 1 + \mathsf{e} (X^0, x_0)^\ell\, \big] - \limsup_{n \to \infty} g (X^n) \geq\mathsf{c}\, \big[ 1 + \mathsf{e} (X^0, x_0)^\ell\, \big] - g (X^0),$$ which implies $$\mathsf{c}\, E \big[ 1 + \mathsf{e} (X^0, x_0)^\ell\, \big] - E [ g (X^0) ] \leq \liminf_{n \to \infty} E [ Y^n ].$$ As $\mu^n \to \mu^0$ in $\mathcal{P}^\ell (E)$, and hence $E [ \mathsf{e} (X^n, x_0)^\ell ] \to E [ \mathsf{e} (X^0, x_0)^\ell ]$, we get $$\liminf_{n \to \infty} E [ Y^n ] = \mathsf{c}\, E \big[ 1 + \mathsf{e} (X^0, x_0)^\ell\, \big] - \limsup_{n \to \infty} E [ g (X^n) ],$$ and finally, $$- E [ g (X^0) ] \leq - \limsup_{n \to \infty} E [ g (X^n) ].$$ The proof is complete. ◻
*Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} [(iii)]{.upright}.* We use the notation from Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iii). Using the compactness of $\mathcal{R}^n (x^n)$, which is due to Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (i), and standard properties of the limes superior, there exists a subsequence $(N_n)_{n = 1}^\infty$ of $1, 2, \dots$ and measures $Q^{N_n} \in \mathcal{R}^{N_n} (x^{N_n})$ such that $$\limsup_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big] = \lim_{n \to \infty} E^{Q^{N_n}} \big[ \psi \big].$$ By Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (ii), there is a subsequence of $(Q^{N_n})_{n = 1}^\infty$ that converges in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ to a measure $Q^0 \in \mathcal{R}^0 (x^0)$. Recalling that $\psi$ is upper semicontinuous from $\mathcal{P}^\varrho(\Theta)$ into $\mathbb{R}$ and that it satisfies the growth condition [\[eq: bound property\]](#eq: bound property){reference-type="eqref" reference="eq: bound property"}, we get from Lemma [Lemma 20](#lem: gen Fatou W){reference-type="ref" reference="lem: gen Fatou W"} that $$\lim_{n \to \infty} E^{Q^{N_n}} \big[ \psi \big] \leq E^{Q^0} \big[ \psi \big] \leq \sup_{Q \in \mathcal{R}^0 (x^0)} E^Q \big[ \psi \big].$$ This completes the proof. ◻
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iv) {#proof-of-theorem-theo-main1-iv}
The following strategy is inspired by the proof for [@LakSIAM17 Theorem 2.12]. In particular, we learned the idea to use the Krein--Milman theorem from that proof. Let us start with an auxiliary result whose proof is postponed to the end of this section.
**Lemma 21**. *Assume that the Conditions [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} and [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} hold. Let $(x^n)_{n = 0}^\infty \subset \mathbb{H}$ be a sequence such that $x^n \to x^0$ in $\|\cdot\|_\mathbb{H}$ and take $P \in \mathcal{C}^0 (x^0)$. Then, there exists a sequence $(Q^{n})_{n = 1}^\infty$ with $Q^{n} \in \mathcal{R}^{n} (x^{n})$ such that $Q^{n} \to \delta_{P}$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$.*
With this lemma at hand, we are ready to prove Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iv). Let $(x^n)_{n = 1}^\infty \subset \mathbb{H}$ be such that $x^n \to x^0$ in $\|\cdot\|_\mathbb{H}$ and take $Q^0 \in \mathcal{R}^0 (x^0)$. Furthermore, let $(x^{M_n})_{n = 1}^\infty$ be a subsequence of $(x^n)_{n = 1}^\infty$. Recall from Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (i) that the set $\mathcal{R}^0 (x^0)$ is nonempty and compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. Furthermore, it is easy to see that $\mathcal{R}^0 (x^0)$ is convex. We denote its extreme points by $\operatorname{ex}\, (\mathcal{R}^0 (x^0))$. Thanks to the Krein--Milman theorem ([@charalambos2013infinite Theorem 7.68]), we have $$\mathcal{R}^0 (x^0) = \overline{\operatorname{co}}\, \big[\operatorname{ex}\, (\mathcal{R}^0 (x^0))\big],$$ where $\overline{\operatorname{co}} \,[\, \cdot\,]$ denotes the closure (in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$) of the convex hull. Let $$\Delta (\mathcal{R}^0 (x^0)) := \Big\{ \mu \in \mathcal{P}(\mathcal{C}^0 (x^0)) \colon \mu = \sum_{k = 1}^2 \lambda^k \delta_{\mu^k}, \ \ \lambda^k \geq 0,\ \lambda^1 + \lambda^2 = 1, \ \mu^k \in \mathcal{C}^0 (x^0) \Big\}.$$ Thanks to [@winkler Theorem 2.1, Examples 2.1 (a)], we have $$\operatorname{ex}\, ( \mathcal{R}^0 (x^0)) \subset \Delta (\mathcal{R}^0 (x^0)).$$ Hence, there exists a sequence $$(R^n)_{n = 1}^\infty \subset \operatorname{co}\big[ \{ \delta_{P} \colon P \in \mathcal{C}^0 (x^0) \} \big]$$ such that $R^n \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. We write $$R^n = \sum_{k = 1}^{p_n} a^n_k \delta_{P_{n, k}}, \quad \text{with } p_n \in \mathbb{N}, \ a^n_k \geq 0, \ \sum_{k = 1}^{p_n} a^n_k = 1, \ P_{n, k} \in \mathcal{C}^0 (x^0).$$ By Lemma [Lemma 21](#lem: approx point mass){reference-type="ref" reference="lem: approx point mass"}, there are sequences $(Q^m_{n, k})_{m = 1}^\infty$ such that $Q^m_{n, k} \in \mathcal{R}^m (x^m)$ and $Q^m_{n, k} \to \delta_{P_{n, k}}$ as $m \to \infty$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. Now, set $$R^{n, m} := \sum_{k = 1}^{p_n} a^n_k Q^{M_m}_{n, k}.$$ Recall from Corollary [Corollary 12](#coro: conv){reference-type="ref" reference="coro: conv"} that $\mathcal{C}^{M_m} (x^{M_m})$ is convex. Hence, $\mathcal{R}^{M_m} (x^{M_m})$ is also convex and $R^{n, m} \in \mathcal{R}^{M_m} (x^{M_m})$. Furthermore, it is clear that $R^{n, m} \to R^n$ as $m \to \infty$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. To keep our notation simple, let $\mathsf{w}$ be the $\varrho$-Wasserstein metric on $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. For every $n \in \mathbb{N}$, there exists an $N_n \in \mathbb{N}$ such that $\mathsf{w}(R^{n, N_n}, R^n) \leq \tfrac{1}{n}.$ Hence, $$\mathsf{w}(R^{n, N_n}, Q^0) \leq \mathsf{w}(R^{n, N_n}, R^n) + \mathsf{w}(R^n, Q^0) \leq \tfrac{1}{n} + \mathsf{w}(R^n, Q^0) \to 0.$$ Consequently, as $R^{n, N_n} \in \mathcal{R}^{M_{N_n}} (x^{M_{N_n}})$, the sequence $(R^{n, N_n})_{n = 1}^\infty$ has the claimed properties. The proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iv) is complete. 0◻
It is left to prove Lemma [Lemma 21](#lem: approx point mass){reference-type="ref" reference="lem: approx point mass"}.
*Proof of Lemma [Lemma 21](#lem: approx point mass){reference-type="ref" reference="lem: approx point mass"}.* Let $(x^n)_{n = 0}^\infty \subset \mathbb{H}$ be a sequence such that $x^n \to x^0$ in $\|\cdot\|_\mathbb{H}$ and take $R \in \mathcal{C}^0 (x^0)$. By definition, there exists a standard extension $(\Sigma, \mathcal{A}, (\mathcal{A}_t)_{t \in [0, T]}, P)$ of $(\Theta, \mathcal{O}, \mathbf{O}, R)$, supporting a standard cylindrical Brownian motion $W$, such that $P$-a.s., for all $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x^0, v\rangle_{\mathbb{V}^*}&+ \int_0^\cdot \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X_s, R_s) \mathfrak{m}(s, M, df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big\langle \int_0^\cdot \bar{\sigma}(\mathfrak{m}(s, M), s, X_s, R_s) d W_s, v \Big \rangle_\mathbb{H},
\end{aligned}$$ where $R_s = R \circ X^{-1}_s = P \circ X^{-1}_s$. For $n \in \mathbb{N}$, consider the $n$-fold product $(\Sigma^n, \mathcal{A}^n, (\mathcal{A}^n_t)_{t \in [0, T]}, P^n)$, where $$\mathcal{A}^n_t := \bigcap_{s > t} \bigotimes_{k = 1}^n \mathcal{A}_s, \quad P^n := \bigotimes_{k = 1}^n P.$$ With obvious notation, we have $P^n$-a.s., for $k = 1, \dots, n$ and $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle X^k, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x^0, v\rangle_{\mathbb{V}^*}&+ \int_0^\cdot \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big\langle \int b (f, s, X_s^k, R_s) \mathfrak{m}(s, M^k, df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big\langle \int_0^\cdot \bar{\sigma}(\mathfrak{m}(s, M^k), s, X^k_s, R_s) d W^k_s, v \Big \rangle_\mathbb{H}, \end{aligned}$$ where, by construction, $W^1, \dots, W^n$ are independent cylindrical Brownian motions. Finally, we set $$\Psi := \Sigma^n \times \Omega^n, \quad \mathcal{G} := \mathcal{A}^n \otimes \mathcal{F}^n, \quad \mathcal{G}_t := \bigcap_{s > t} \, \mathcal{A}^n_s \otimes \mathcal{F}^n_s.$$ We denote the coordinate process on the second coordinate by $(Y^1, \dots, Y^n)$. By Proposition [Proposition 18](#prop: existence){reference-type="ref" reference="prop: existence"}, there exists a probability measure $Q^n$ on $(\Psi, \mathcal{G})$ such that $Q^n \circ \mathscr{S}_n (Y, M)^{-1} \in \mathscr{J}(x^n)$, where $M := (M^1, \dots, M^n)$, and $Q^n$-a.s., for all $k = 1, \dots, n$ and $v \in \mathbb{V}^*$, $$\begin{aligned}
{_\mathbb{V}}\langle Y^k, v \rangle_{\mathbb{V}^*} = {_\mathbb{V}}\langle x^n, v \rangle_{\mathbb{V}^*} &+ \int_0^\cdot \mathop{\vphantom{\int}}\nolimits_\mathbb{V}\hspace{-0.1cm}\Big \langle \int b (f, s, Y^k_s, \mathscr{X}_n (Y_s)) \mathfrak{m}(s, M^k, df), v \Big \rangle_{\mathbb{V}^*} ds
\\&+ \Big \langle \int_0^\cdot \bar{\sigma}(\mathfrak{m}(s, M^k), s, Y^k_s, \mathscr{X}_n (Y_s)) d W^k_s, v \Big\rangle_\mathbb{H}.
\end{aligned}$$ Furthermore, $Q^n$ has a decomposition of the type [\[eq: kernel dec\]](#eq: kernel dec){reference-type="eqref" reference="eq: kernel dec"}, which entails that the dynamics of $X^1, \dots, X^n$ are still valid under $Q^n$, see [@jacod79 Proposition 10.46].
We notice that $R^n := Q^n \circ \mathscr{S}_n (Y, M)^{-1} \in \mathcal{R}^n (x^n)$. In the following, we prove that $R^n \to \delta_R$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. This finishes the proof. We split the remainder into two steps.
*Step 1.* Notice that $\{R^n \colon n \in \mathbb{N}\}$ is relatively compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ by Lemma [Lemma 13](#lem: Rk rel comp){reference-type="ref" reference="lem: Rk rel comp"}. Therefore, $R^n \to \delta_R$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ follows once we prove that all accumulation points of $(R^n)_{n = 1}^\infty$ coincide with $\delta_R$. Let us explain our argument for this in detail. First of all, we prove that $\delta_R, R^n\in \mathcal{P}(\mathcal{P}(\Upsilon))$, where $$\Upsilon := \Big\{ \omega \in C ([0, T]; \mathbb{X}) \colon \int_0^T \| \omega (s) \|^\alpha_\mathbb{Y}ds < \infty \Big\} \times \mathbb{M},$$ which we endow (for a moment) with the metric $\mathsf{d}+ \mathsf{r}$. At this point, recall that $$\mathsf{d}(\omega, \alpha) = \sup_{s \in [0, T]} \|\omega(s) - \alpha (s)\|_{\mathbb{V}} + \Big( \int_0^T \| \omega (s) - \alpha (s)\|^\alpha_\mathbb{Y}ds \Big)^{1/ \alpha},$$ and that $\mathsf{r}$ is a metric on $\mathbb{M}$ that induces its topology. We notice that $\Upsilon \in \mathcal{B}(\Theta)$ (and consequently, also $\mathcal{P}(\Upsilon) \in \mathcal{B} (\mathcal{P}(\Theta))$) by [@schwarz Theorem 5, p. 101] (using that $\mathbb{X}\subset \mathbb{V}$). If we have a sequence $(\mu^n)_{n = 0}^\infty \subset \mathcal{P}(\mathcal{P}(\Upsilon))$, then $\mu^n \to \mu^0$ in $\mathcal{P}(\mathcal{P}(\Theta))$ if and only if $\mu^n \to \mu^0$ in $\mathcal{P}(\mathcal{P}(\Upsilon))$, cf. [@EK Corollary 3.3.2]. Consequently, provided $\delta_R, R^n \in \mathcal{P}(\mathcal{P}(\Upsilon))$, it suffices to prove that all accumulation points of $(R^n)_{n = 1}^\infty$ in $\mathcal{P}(\mathcal{P}(\Upsilon))$ coincide with $\delta_R$.
Now, define $$\begin{aligned}
\mathsf{d}' (\omega, \alpha) &:= \sup_{s \in [0, T]} \| \omega (s) - \alpha (s)\|_\mathbb{V}, \\
\mathsf{d}^* (\omega, \alpha) &:= \sup_{s \in [0, T]} \| \omega (s) - \alpha (s)\|_{\mathbb{X}}.\end{aligned}$$ On the space $\Upsilon$, the topologies generated either by $\mathsf{d}+ \mathsf{r}$ or $\mathsf{d}^* + \mathsf{r}$ are both stronger than those generated by $\mathsf{d}' + \mathsf{r}$ (using that $\mathbb{X}\subset \mathbb{V}$). Furthermore, we notice that the Borel $\sigma$-fields on $\Upsilon$ (and also on $\mathcal{P}(\Upsilon)$) coincide for all these topologies (when $\mathcal{P}(\Upsilon)$ is endowed with the weak topology corresponding to one of these topologies), see [@schwarz Corollary 2, p. 101].[^7]
Take a sequence $(\mu^n)_{n = 1}^\infty \subset \mathcal{P}(\mathcal{P}(\Upsilon))$ such that $\mu^n \to \mu$ in $\mathcal{P}(\mathcal{P}(\Upsilon))$ when $\Upsilon$ is endowed with the topology generated by $\mathsf{d}+ \mathsf{r}$ and $\mu^n \to \mu^*$ when $\Upsilon$ is endowed with the topology generated by $\mathsf{d}^* + \mathsf{r}$. Then, we also have $\mu^n \to \mu$ and $\mu^n \to \mu^*$ in $\mathcal{P}(\mathcal{P}(\Upsilon))$ when $\Upsilon$ is endowed with the topology generated by $\mathsf{d}' + \mathsf{r}$. By the uniqueness of the limit, we get that $\mu = \mu^*$.
Consequently, to complete our proof, it suffices to prove that $R^n \to \delta_R$ in $\mathcal{P}(\mathcal{P}(\Upsilon))$ when $\Upsilon$ is endowed with $\mathsf{d}^* + \mathsf{r}$, and that these measures indeed belong to $\mathcal{P}(\mathcal{P}(\Upsilon))$. This is the program of our second (and final) step.
*Step 2.* Notice that $Q^n$-a.s. $X^k, Y^k \in L^\alpha ([0, T]; \mathbb{Y})$. By virtue of Condition [Condition 1](#cond: main1){reference-type="ref" reference="cond: main1"} (i) and [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"}, we also observe that the dynamics of $X^k$ and $Y^k$ hold in $\mathbb{Y}^*$ up to a $dt \otimes Q^n$ null set. Moreover, by the integrability properties of $X^k$ and $Y^k$, and Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} (ii), we obtain that $Q^n$-a.s. $$\begin{aligned}
&\int_0^T \int \| b (f, s, Y^k_s, \mathscr{X}_n (Y_s)) \|^{\alpha / (\alpha - 1)}_{\mathbb{Y}^*} \mathfrak{m}(s, M^k, df) ds < \infty, \\
&\int_0^T \int \| b (f, s, X^k_s, R_s) \|^{\alpha / (\alpha - 1)}_{\mathbb{Y}^*} \mathfrak{m}(s, M^k, df) ds < \infty.\end{aligned}$$ Finally, recalling that $\mathbb{H}\subset \mathbb{X}$, we observe that the Itô integrals within the dynamics of $X^k$ and $Y^k$ are $\mathbb{X}$-valued (local) martingales.
In summary, we deduce from the Krylov--Rozovskii Itô formula ([@krylov_rozovskii Theorem I.3.1]) that $(X^k, M^k)$ and $(Y^k, M^k)$ have $Q^n$-a.s. paths in $\Upsilon$ and, together with Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} (ii), that $Q^n$-a.s. $$\begin{aligned}
\|Y^{k}_t &- X^k_t\|^2_{\mathbb{X}} - \|x^n - x^0\|_{\mathbb{X}}^2
\\ &\ = \int_0^t 2 \langle Y^{k}_s - X^k_s, (\bar{\sigma}(\mathfrak{m}(s, M^k), s, Y^{k}_s, \mathscr{X}_n (Y_s)) - \bar{\sigma}(\mathfrak{m}(s, M^k), s, X^k_s, R_s) ) d W^k_s \rangle_{\mathbb{X}}
\\&\ \qquad+ \int_0^t 2 \mathop{\vphantom{\int}}\nolimits_{\mathbb{Y}^*}\hspace{-0.1cm}\Big \langle \int \big( b (f, s, Y^{k}_s, \mathscr{X}_n (Y_s)) - b (f, s, X^k_s, R_s) \big) \mathfrak{m}(s, M^k, df), Y^{k}_s - X^k_s \Big \rangle_{\mathbb{Y}} ds
\\&\ \qquad + \int_0^t \| \bar{\sigma}(\mathfrak{m}(s, M^k),s, Y^{k}_s, \mathscr{X}_n (Y_s)) - \bar{\sigma}(\mathfrak{m}(s, M^k),s, X^k_s, R_s)\|^2_{L_2 (\mathbb{U}; \mathbb{X})} ds
\\&\ \leq \int_0^t 2 \langle Y^{k}_s - X^k_s, (\bar{\sigma}(\mathfrak{m}(s, M^k), s, Y^{k}_s, \mathscr{X}_n (Y_s)) - \bar{\sigma}(\mathfrak{m}(s, M^k), s, X^k_s, R_s) ) d W^k_s \rangle_{\mathbb{X}}
\\&\ \qquad + \int_0^t C \Big( \|Y^{k}_s - X^k_s\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mathscr{X}_n (Y_s), R_s)^2 \Big) ds.\end{aligned}$$ Using Burkholder's inequality, again Condition [Condition 2](#cond: main2){reference-type="ref" reference="cond: main2"} and Young's inequality, we obtain that $$\begin{aligned}
E^{Q^n} \Big[ \sup_{s \in [0, t]} & \Big| \int_0^s 2 \langle Y^{k}_u - X^k_u, (\bar{\sigma}(\mathfrak{m}(u, M^k), u, Y^{k}_u, \mathscr{X}_n (Y_u)) - \bar{\sigma}(\mathfrak{m}(u, M^k), u, X^k_u, R_u) ) d W^k_u \rangle_{\mathbb{X}} \Big| \, \Big]
\\&\leq CE^{Q^n} \Big[ \Big( \int_0^t \|Y^{k}_u - X^k_u\|_{\mathbb{X}}^2 \,\big[ \|Y^{k}_u - X^k_u\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mathscr{X}_n (Y_u), R_u)^2\, \big] du \Big)^{1/2}\, \Big]
\\&\leq CE^{Q^n} \Big[ \sup_{s \in [0, t]} \|Y^{k}_s - X^k_s\|_{\mathbb{X}} \Big( \int_0^t \big[ \|Y^{k}_u - X^k_u\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mathscr{X}_n (Y_u), R_u)^2\, \big] du \Big)^{1/2}\, \Big]
\\&\leq E^{Q^n} \Big[\, \frac{1}{2} \sup_{s \in [0, t]} \|Y^{k}_s - X^k_s\|_{\mathbb{X}}^2 + \frac{C}{2}\, \int_0^t \big[ \|Y^{k}_u - X^k_u\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mathscr{X}_n (Y_u), R_u)^2\, \big] du \, \Big].\end{aligned}$$ In summary, we conclude that $$\begin{aligned}
E^{Q^n} \Big[ \sup_{s \in [0, t]} \|Y^{k}_s - X^k_s\|^2_{\mathbb{X}} \Big] \leq 2 \|x^n - x^0\|^2_{\mathbb{X}} + C E^{Q^n} \Big[ \int_0^t \big[ \|Y^{k}_u - X^k_u\|^2_{\mathbb{X}} + \mathsf{w}^{\mathbb{X}}_2 (\mathscr{X}_n (Y_u), R_u)^2 \big] du \, \Big].\end{aligned}$$ Gronwall's lemma yields that $$\begin{aligned}
\label{eq: Gron 1}
E^{Q^n} \Big[ \sup_{s \in [0, t]} \|Y^{k}_s - X^k_s\|^2_{\mathbb{X}} \Big] \leq C \Big( \|x^n - x^0\|_{\mathbb{X}}^2 + \int_0^t E^{Q^n} \Big[\mathsf{w}_2^{\mathbb{X}} (\mathscr{X}_n (Y_s), R_s)^2 \Big] ds \Big).\end{aligned}$$ For $\mu, \nu \in \mathcal{P}(\Omega)$, define $$\mathsf{w}_t (\mu, \nu)^2:= \inf_{\pi \in \Pi (\mu, \nu)} \iint \sup_{s \in [0, t]} \| \omega (s) - \alpha (s)\|^2_{\mathbb{X}}\, \pi (d\omega, d \alpha),$$ where $\Pi (\mu, \nu)$ denotes the set of all couplings of $\mu$ and $\nu$. Using the coupling $\frac{1}{n} \sum_{k = 1}^n \delta_{(Y^{k}, X^k)}$, we observe that $$\begin{aligned}
\label{eq: coupl 1}
\mathsf{w}_t(\mathscr{X}_n (Y), \mathscr{X}_n (X))^2 \leq \frac{1}{n} \sum_{k= 1}^n \sup_{s \in [0, t]} \|Y^{k}_s - X^k_s\|^2_{\mathbb{X}}.\end{aligned}$$ Hence, using the triangle inequality, [\[eq: Gron 1\]](#eq: Gron 1){reference-type="eqref" reference="eq: Gron 1"} and [\[eq: coupl 1\]](#eq: coupl 1){reference-type="eqref" reference="eq: coupl 1"}, it follows that $$\begin{aligned}
E^{Q^n} \Big[ \mathsf{w}_t & (\mathscr{X}_n (Y), R \circ X^{-1})^2 \Big]
\\&\leq 2 E^{Q^n} \Big[ \mathsf{w}_t (\mathscr{X}_n (Y), \mathscr{X}_n (X))^2 \Big] + 2 E^{Q^n} \Big[ \mathsf{w}_t (\mathscr{X}_n (X), R \circ X^{-1})^2 \Big]
\\&\leq 2 E^{Q^n} \Big[ \frac{1}{n} \sum_{k = 1}^n \sup_{s \in [0, t]} \|Y^{k}_s - X^k_s\|^2_{\mathbb{X}}\Big] + 2 E^{Q^n} \Big[ \mathsf{w}_t (\mathscr{X}_n (X), R \circ X^{-1})^2 \Big]
\\&\leq C \Big( \|x^n - x^0\|_{\mathbb{X}}^2 + \int_0^t E^{Q^n} \Big[\mathsf{w}_2^{\mathbb{X}} (\mathscr{X}_n (Y_s), R_s)^2 \Big] ds + E^{Q^n} \Big[ \mathsf{w}_t (\mathscr{X}_n (X), R \circ X^{-1})^2 \Big] \Big)
\\&\leq C \Big( \|x^n - x^0\|_{\mathbb{X}}^2 + \int_0^t E^{Q^n} \Big[\mathsf{w}_s (\mathscr{X}_n (Y), R \circ X^{-1})^2 \Big] ds + E^{Q^n} \Big[ \mathsf{w}_t (\mathscr{X}_n (X), R \circ X^{-1})^2 \Big] \Big).\end{aligned}$$ Using Gronwall's lemma once again, we get that $$\begin{aligned}
\label{eq: main1}
E^{Q^n} \Big[ \mathsf{w}_T (\mathscr{X}_n (Y), R \circ X^{-1})^2 \Big] \leq C \Big( \|x^n - x^0\|_{\mathbb{X}}^2+ E^{Q^n} \Big[ \mathsf{w}_T (\mathscr{X}_n (X), R \circ X^{-1})^2 \Big] \Big).\end{aligned}$$ Under $Q^n$, the processes $X^1, X^2, \dots, X^n$ are independent with law $R \circ X^{-1}$. Hence, as also $$E^R \Big[ \sup_{s \in [0, T]} \|X_s\|^2_{\mathbb{X}} \Big] \leq C E^R \Big[ \sup_{s \in [0, T]} \|X_s\|^2_{\mathbb{H}} \Big] < \infty,$$ by the definition of $\mathscr{G}$, it follows from [@LakLN Corollary 2.14] that $$E^{Q^n} \Big[ \mathsf{w}_T (\mathscr{X}_n (X), R \circ X^{-1})^2 \Big] \to 0 \text{ as } n \to \infty.$$ In summary, we conclude from [\[eq: main1\]](#eq: main1){reference-type="eqref" reference="eq: main1"} that $$E^{Q^n} \Big[ \mathsf{w}_T (\mathscr{X}_n (Y), R \circ X^{-1})^2 \Big] \to 0 \text{ as } n \to \infty.$$ Endowing $\Upsilon$ with the metric $\mathsf{d}^* + \mathsf{r}$, the triangle inequality, and once again [@LakLN Corollary 2.14], we conclude that $$E^{Q^n} \Big[ \mathsf{w}_2^{\Upsilon} (\mathscr{S}_n (Y, M), R)^2 \Big] \to 0 \text{ as } n \to \infty.$$ Let $\Xi := C ([0, T]; \mathbb{X}) \times \mathbb{M},$ and endow this space with the topology induced by $\mathsf{d}^* + \mathsf{r}$ (which turns it into a Polish space). Then, we proved that $R^n \to \delta_R$ in $\mathcal{P}(\mathcal{P}(\Xi))$ (as on this space Wasserstein convergence implies weak convergence, see [@villani09 Theorem 6.9]). Finally, it follows (again) from [@EK Corollary 3.3.2] that $R^n \to \delta_R$ in $\mathcal{P}(\mathcal{P}(\Upsilon))$ (where $\Upsilon$ is endowed with $\mathsf{d}^* + \mathsf{r}$). This completes the proof. ◻
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (v) {#proof-of-theorem-theo-main1-v}
We use the notation from Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (v). First, there exists a subsequence $(N_n)_{n = 1}^\infty$ of $1, 2, \dots$ such that $$\liminf_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big] = \lim_{n \to \infty} \sup_{Q \in \mathcal{R}^{N_n} (x^{N_n})} E^Q \big[ \psi \big].$$ Take an arbitrary measure $Q^0 \in \mathcal{R}^0 (x^0)$. Then, by Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iv), there exists a subsequence $(M_n)_{n = 1}^\infty$ of the subsequence $(N_n)_{n = 1}^\infty$ and measures $Q^{M_n} \in \mathcal{R}^{M_n} (x^{M_n})$ such that $Q^{M_n} \to Q^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. By the assumptions on $\psi$, and Lemma [Lemma 20](#lem: gen Fatou W){reference-type="ref" reference="lem: gen Fatou W"}, $$\begin{aligned}
E^{Q^0} \big[ \psi \big] \leq \liminf_{n \to \infty} E^{Q^{M_n}} \big[ \psi \big]
&\leq \liminf_{n \to \infty} \sup_{Q\in \mathcal{R}^{M_n} (x^{M_n})} E^{Q} \big[ \psi \big]
\\&= \lim_{n \to \infty} \sup_{Q \in \mathcal{R}^{N_n} (x^{N_n})} E^Q \big[ \psi \big]
\\&= \liminf_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big].\end{aligned}$$ As $Q^0$ was arbitrary, we get $$\sup_{Q \in \mathcal{R}^0 (x^0)} E^Q \big[ \psi \big] \leq \liminf_{n \to \infty} \sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big].$$ The proof is complete. 0◻
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (vi) {#proof-of-theorem-theo-main1-vi}
By Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iii) and (v), for every sequence $(x^n)_{n = 0}^\infty \subset \mathbb{H}$ with $x^n \to x^0$ in $\|\cdot\|_\mathbb{H}$, we get $$\sup_{Q \in \mathcal{R}^n (x^n)} E^Q \big[ \psi \big] \to \sup_{Q \in \mathcal{R}^0 (x^0)} E^Q \big[ \psi \big], \quad n \to \infty.$$ Now, it follows from [@remmert Theorem on pp. 98--99] that $x \mapsto \sup_{Q \in \mathcal{R}^0 (x)} E^Q [ \psi ]$ is continuous and that [\[eq: compact convergence\]](#eq: compact convergence){reference-type="eqref" reference="eq: compact convergence"} holds. The proof is complete. 0◻
## Proof of Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (vii) {#proof-of-theorem-theo-main1-vii}
To keep our notation simple, we write $\mathsf{w}:= \mathsf{w}_{\varrho}^{\mathcal{P}^\varrho(\Theta)}$. By definition of the Hausdorff metric topology, we have to prove that $$\begin{aligned}
\max \Big\{ \max_{Q \in \mathcal{R}^n (x^n)} \mathsf{w}(Q, \mathcal{R}^0 (x^0)) , \max_{Q \in \mathcal{R}^0 (x^0)} \mathsf{w}(Q, \mathcal{R}^n (x^n))\Big\} \to 0.
\end{aligned}$$ Notice that the maxima are attained by the compactness of the sets $\mathcal{R}^n (x^n)$ and $\mathcal{R}^0 (x^0)$.
We start by investigating the first term. By the compactness of $\mathcal{R}^n (x^n)$, for every $n \in \mathbb{N}$, there exists a measure $Q^n \in \mathcal{R}^n (x^n)$ such that $$\max_{Q \in \mathcal{R}^n (x^n)} \mathsf{w}(Q, \mathcal{R}^0 (x^0)) = \mathsf{w}(Q^n, \mathcal{R}^0 (x^0)).$$ By Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (ii), every subsequence of $1, 2, \dots$ has a further subsequence $(N_n)_{n = 1}^\infty$ such that $(Q^{N_n})_{n = 1}^\infty$ converges in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ to a measure $Q^0 \in \mathcal{R}^0 (x^0)$. Now, by the continuity of the distance function, we have $$\mathsf{w}(Q^{N_n}, \mathcal{R}^0 (x^0)) \to \mathsf{w}(Q^0, \mathcal{R}^0 (x^0)) = 0.$$ As every subsequence of $(\mathsf{w}(Q^n, \mathcal{R}^0 (x^0)))_{n = 1}^\infty$ therefore has a further subsequence converging to zero, we conclude that $$\max_{Q \in \mathcal{R}^n (x^n)} \mathsf{w}(Q, \mathcal{R}^0 (x^0)) = \mathsf{w}(Q^n, \mathcal{R}^0 (x^0)) \to 0 \text{ as } n \to \infty.$$
We turn to the second term. By the compactness of $\mathcal{R}^0 (x^0)$, for every $n \in \mathbb{N}$, there exists a measure $R^n \in \mathcal{R}^0 (x^0)$ such that $$\max_{Q \in \mathcal{R}^0 (x^0)} \mathsf{w}(Q, \mathcal{R}^n (x^n)) = \mathsf{w}(R^n, \mathcal{R}^n (x^n)).$$ Let $(N^n_1)_{n = 1}^\infty$ be an arbitrary subsequence of $1, 2, \dots$. Again by compactness of $\mathcal{R}^0 (x^0)$, there exists a subsequence $(N^n_2)_{n = 1}^\infty \subset (N^n_1)_{n = 1}^\infty$ such that $(R^{N^n_2})_{n = 1}^\infty$ converges in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ to a measure $R^0 \in \mathcal{R}^0 (x^0)$. By Theorem [Theorem 5](#theo: main1){reference-type="ref" reference="theo: main1"} (iv), there exists another subsequence $(N^n_3)_{n = 1}^\infty \subset (N^n_2)_{n = 1}^\infty$ and measures $(Q^{N^n_3})_{n = 1}^\infty$ such that $Q^{N^n_3} \in \mathcal{R}^{N^n_3}(x^{N^n_3})$ and $Q^{N^n_3} \to R^0$ in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$. Finally, $$\mathsf{w}(R^{N^n_3}, \mathcal{R}^{N^n_3} (x^{N^n_3})) \leq \mathsf{w}(R^{N^n_3}, Q^{N^n_3}) \leq \mathsf{w}(R^{N^n_3}, R^0) + \mathsf{w}(R^0, Q^{N^n_3}) \to 0.$$ As $(N^n_1)_{n = 1}^\infty$ was arbitrary, this proves that $$\mathsf{w}(R^n, \mathcal{R}^n (x^n)) \to 0.$$ In summary, $\mathcal{R}^n (x^n) \to \mathcal{R}^0 (x^0)$ in the Hausdorff metric topology. The proof is complete. 0◻
R. A. Adams. . Academic Press, 2nd ed., 2003.
C. D. Aliprantis and K. B. Border. . Springer Berlin Heidelberg, 3rd ed., 2006.
V. Barbu, G. Da Prato and M. Röckner. . Springer International Publishing Switzerland, 2016.
V. I. Bogachev. . Springer Berlin Heidelberg, 2007.
R. Carmona and F. Delarue. . Springer International Publishing, 2018.
R. Carmona, J.-P. Fouque, S. M. Mousavi and L.-H. Sun. Systemic Risk and Stochastic Games with Delay. , 179:366--399, 2018.
P. Clement and W. Desch. An elementary proof for the triangle inequality for the Wasserstein metric. , 136(1):333--339, 2008.
A. Cosso, F. Gozzi, I. Kharroubi, H. Pham and M. Rosestolato. Optimal control of path-dependent McKean-Vlasov SDEs in infinite dimension. , 33(4): 2863-2918, 2023.
D. Criens, P. Pfaffelhuber and T. Schmidt. The martingale problem method revisited. , 28(19):1--46, 2023.
G. Da Prato and J. Zabczyk. . Cambridge University Press, 1992.
C. Dellacherie and P. A. Meyer. . North-Holland Publishing Company - Amsterdam, New York, Oxford, 1978.
M. F. Djete, D. Possamaı̈ and X. Tan. McKean--Vlasov optimal control: limit theory and equivalence between different formulations. , 47(4):2891--2930, 2022.
N. El Karoui, D. Nguyen and M. Jeanblanc-Picqué. Compactification methods in the control of degenerate diffusions: existence of an optimal control. , 20(3):169--219, 1987.
N. El Karoui, D. Nguyen and M. Jeanblanc-Picqué. Existence of an optimal Markovian filter for the control under partial observations. , 26(5):1025--1061, 1988.
N. El Karoui and X. Tan. Capacities, measurable selection and dynamic programming part II: application in stochastic control problems. arXiv:1310.3364v2, 2015.
S. N. Ethier and T. G. Kurtz. . Wiley, 2005.
E. A. Feinberg, P. O. Kasyanov and Y. Liang. Fatou's lemma for weakly converging measures under the uniform integrability condition. , 64(4):615--630, 2020.
J. P. Fouque and Z. Zhang. Mean field game with delay: a toy example. , 6(3), 90, 2018.
P. K. Fritz and N. B. Victor. Multidimensional Stochastic Processes as Rough Paths. Cambridge University Press, 2010.
B. Gołdys, M. Röckner and X. Zhang. Martingale solutions and Markov selections for stochastic partial differential equations. , 119:1725--1764, 2009.
W. Hong, S. Li and W. Liu. McKean-Vlasov stochastic partial differential equations: existence, uniqueness and propagation of chaos. arXiv:2306.15508, 2023.
S. Hu and N. S. Papageorgiou. . Kluwer Academic Publishers, 1997.
J. Jacod. . Springer Berlin Heidelberg New York, 1979.
J. Jacod. Weak and strong solutions of stochastic differential equations. , 3:171--191, 1980.
J. Jacod and J. Mémin. Sur un type de convergence intermédiaire entre la convergence en loi et la convergence en probabilité. , 15:529--546, 1981.
J. Jacod and J. Mémin. Weak and strong solutions of stochastic differential equations: Existence and stability. In D. Williams, editor, *Stochastic Integrals*, volume 851 of *Lecture Notes in Mathematics*, pages 169--212. Springer Berlin Heidelberg, 1981.
J. Jacod and A. N. Shiryaev. . Springer Berlin Heidelberg, 2nd ed., 2003.
N. V. Krylov and B. L. Rozovskii. Stochastic Evolution Equations. , 16:1233--1277, 1981.
D. Lacker. Mean field games via controlled martingale problems: Existence of Markovian equilibria. , 125:2856--2894, 2015.
D. Lacker. Limit theory for controlled McKean--Vlasov dynamics. , 55(3):1641--1672, 2017.
D. Lacker. Mean field games and interacting particle systems. Lecture notes, Columbia University, 2018.
W. Liu and M. Röckner. SPDE in Hilbert spaces with locally monotone coefficients. , 259:2902--2922, 2010.
W. Liu and M. Röckner. Springer International Publishing Switzerland, 2015.
M. Ondreját. Brownian representations of cylindrical local martingales, martingale problem and strong Markov property of weak solutions of SPDEs in Banach spaces. , 55:1003--1039, 2005.
É. Pardoux. . Thèse, University Paris--Sud, 1975.
É. Pardoux. . Springer Nature Switzerland, 2021.
M. Reed and B. Simon. . Academic Press, revised and enlarged edition, 1980.
R. Remmert. . Springer Science+Business Media New York, 1991.
L. Schwartz. Oxford University Press, 1973.
A.-S. Sznitman. Topics in propagation of chaos. In P.-L. Hennequin, editor, *Ecole d'Eté de Probabilités de Saint-Flour XIX --- 1989*, 165--251, Springer Berlin Heidelberg, 1991.
C. Villani. Springer Berlin Heidelberg, 2009.
G. Winkler. Extreme points of moment sets. , 13(4):581--587, 1988.
[^1]:
[^2]: In fact, it is a metric once $(E, m)$ is separable, see [@CD08].
[^3]: *Definition 3.70 in [@charalambos2013infinite]*
[^4]: *Recall that the sets $\mathcal{R}^n (x^n)$ and $\mathcal{R}^0 (x^0)$ are nonempty and compact in $\mathcal{P}^\varrho(\mathcal{P}^\varrho(\Theta))$ thanks to part (i).*
[^5]: By this we also mean that the kernel has the measurability property explained below [\[eq: kernel dec\]](#eq: kernel dec){reference-type="eqref" reference="eq: kernel dec"}. In [@jacod1981weak Theorem 1.8] this kernel decomposition follows from the fact that the solution measure is *very good*, see [@jacod1981weak Corollary 2.20].
[^6]: Of course, Gronwall's lemma requires that the involved function is integrable. In our case, it is well-known that the expectations are finite (see, e.g., [@GRZ] or [@liu_rockner Lemma 5.1.5]). Alternatively, it would also be possible to work with stopping times. At this point, our main interest is an explicit upper bound to relate $Q^n$ to the set $\mathscr{J}(x)$.
[^7]: Notice that $\mathcal{P}(\Upsilon)$ is really the same space irrespective of the topology of $\Upsilon$, as their Borel $\sigma$-fields coincide.
---
abstract: |
An edge colouring of $K_n$ with $k$ colours is a *Gallai $k$-colouring* if it does not contain any rainbow triangle. Gyárfás, Pálvölgyi, Patkós and Wales proved that there exists a number $g(k)$ such that $n\geq g(k)$ if and only if for any colour distribution sequence $(e_1,\cdots,e_k)$ with $\sum_{i=1}^ke_i=\binom{n}{2}$, there exists a Gallai $k$-colouring of $K_n$ with $e_i$ edges having colour $i$. They also showed that $\Omega(k)\leq g(k)\leq O(k^2)$ and posed the problem of determining the exact order of magnitude of $g(k)$. Feffer, Fu and Yan improved both bounds significantly by proving $\Omega(k^{1.5}/\log k)\leq g(k)\leq O(k^{1.5})$. We resolve this problem by showing $g(k)=\Theta(k^{1.5}/(\log k)^{0.5})$.
Moreover, we generalise these definitions by considering rainbow $H$-free colourings of $K_n$ for any general graph $H$, and the natural corresponding quantity $g(H,k)$. We prove that $g(H,k)$ is finite for every $k$ if and only if $H$ is not a forest, and determine the order of $g(H,k)$ when $H$ contains a subgraph with minimum degree at least 3.
author:
- "Zhuo Wu[^1]"
- "Jun Yan[^2]"
title: Distribution of colours in rainbow $H$-free colourings
---
# Introduction
An edge colouring of $K_n$ using $k$ colours is a map $c: E(K_n)\to [k]$, where the colour of an edge $uv$ is $c(uv)$. Given a simple graph $H$ and an edge colouring of $K_n$, we say that this colouring contains a *rainbow copy* of $H$ if there exists a subgraph of $K_n$ isomorphic to $H$, whose edges all have different colours. Otherwise, we say this colouring is *rainbow $H$-free*.
For non-negative integers $e_1,\cdots,e_k$, we say the sequence $(e_1,\cdots,e_k)$ is *$n$-good* iff $\sum_{i=1}^ke_i=\binom{n}{2}$. An edge colouring of the complete graph $K_n$ using $k$ colours is said to have *colour distribution sequence* $(e_1,\cdots,e_k)$ if exactly $e_i$ edges have colour $i$ for every $i\in[k]$. In this case, we also say that this is an *$(e_1,\cdots,e_k)$-colouring* of $K_n$.
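For small $n$ these notions are easy to experiment with by brute force. The following Python sketch (purely illustrative; the function names are chosen only for this discussion) computes the colour distribution sequence of an edge colouring of $K_n$, stored as a dictionary mapping each pair $(u,v)$ with $u<v$ to a colour in $[k]$, and tests whether the colouring is Gallai, that is, rainbow $K_3$-free.

```python
from itertools import combinations

def colour_distribution(colouring, k):
    """Colour distribution sequence (e_1, ..., e_k) of an edge colouring,
    given as a dict mapping each edge (u, v) with u < v to a colour in 1..k."""
    e = [0] * k
    for colour in colouring.values():
        e[colour - 1] += 1
    return e

def is_gallai(colouring, n):
    """True iff this colouring of K_n contains no rainbow triangle."""
    for x, y, z in combinations(range(n), 3):
        c1, c2, c3 = colouring[(x, y)], colouring[(x, z)], colouring[(y, z)]
        if c1 != c2 and c1 != c3 and c2 != c3:
            return False
    return True
```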
In [@GS], Gyárfás and Simonyi initiated the study of rainbow $K_3$-free colourings of $K_n$. They named these colourings Gallai colourings, referring to Gallai's related work [@G] on comparability graphs. Since then, many Ramsey-type results [@GSSS; @GS; @MS] and enumeration results [@BBH; @BBMS; @BL] for Gallai colourings have been obtained.
A new avenue of research on the distribution of colours in Gallai colourings was proposed by Gyárfás, Pálvölgyi, Patkós and Wales in [@GPPW]. They proved the following theorem, which says that as long as we have sufficiently many vertices, any colour distribution sequence can be realised as a Gallai colouring.
**Theorem 1** ([@GPPW]). *For every integer $k\geq 2$, there exists an integer $N$ such that for all $n\geq N$ and all non-negative integers $e_1,\cdots,e_k$ satisfying $\sum_{i=1}^ke_i=\binom{n}{2}$, there exists a Gallai colouring of $K_n$ with colour distribution sequence $(e_1,\cdots,e_k)$.*
In [@GPPW], Gyárfás, Pálvölgyi, Patkós and Wales denoted the smallest such integer $N$ by $g(k)$. They also gave the estimate $2k-2\leq g(k)\leq 8k^2+1$, and posed the problem of determining the exact order of magnitude of $g(k)$. The gap was closed considerably when Feffer, Fu and Yan [@FFY] proved that $\Omega(k^{1.5}/\log k)\leq g(k)\leq O(k^{1.5})$. The first main result of this paper resolves this problem.
**Theorem 2**. *$g(k)=\Theta(k^{1.5}/(\log k)^{0.5})$.*
Motivated by this result, we study a generalised version of this problem. We first generalise the quantity $g(k)$ to $g(H,k)$ as follows.
**Definition 3**. *Let $H$ be a graph and let $k$ be a positive integer. Define $g(H,k)$ to be the smallest integer $N$, such that for all $n\geq N$ and all non-negative integers $e_1,\cdots,e_k$ satisfying $\sum_{i=1}^ke_i=\binom{n}{2}$, there exists a rainbow $H$-free colouring of $K_n$ with colour distribution sequence $(e_1,\cdots,e_k)$. If such $N$ does not exist, define $g(H,k)=\infty$.*
Note that setting $H=K_3$ recovers the definition of $g(k)$. Moreover, we have the following simple property of $g(H,k)$.
**Lemma 4**. *If $H_1$ is a subgraph of $H_2$, then $g(H_1,k)\geq g(H_2,k)$.*
*Proof.* This follows from the observation that any rainbow $H_1$-free colouring is also rainbow $H_2$-free: a rainbow copy of $H_2$ would contain a rainbow copy of $H_1$ on the corresponding edges. ◻
The natural questions of interest regarding $g(H,k)$ are the following.
**Question 5**. *For what graphs $H$ and integers $k$ is $g(H,k)$ finite? If $g(H,k)$ is finite, what is the order of magnitude of $g(H,k)$ in terms of $k$?*
It turns out that the behaviour of $g(H,k)$ depends on a graph parameter called degeneracy. We recall this definition below.
**Definition 6**. *Let $H$ be a graph and let $k$ be a positive integer.*
- *$H$ is $k$-degenerate if every subgraph $H'$ of $H$ has a vertex of degree at most $k$.*
- *The degeneracy of $H$ is the smallest positive integer $k$ for which $H$ is $k$-degenerate.*
As two examples, 1-degenerate graphs are exactly forests, and every graph with degeneracy at least $k$ contains a subgraph of minimum degree at least $k$. With this definition in mind, the results we will prove in this paper are the following.
**Theorem 7**. *If $H$ is a forest, or equivalently has degeneracy at most 1, then $g(H,k)=\infty$ for all sufficiently large $k$.*
**Theorem 8**. *If $H$ is a graph on $m$ vertices with degeneracy at least $3$, then $g(H,k)=\Theta_m(k)$.*
**Theorem 9**. *If $H$ is a graph on $m$ vertices with degeneracy $2$, then $$\Omega_m(k)\leq g(H,k)\leq O(k^{1.5}/(\log k)^{0.5}).$$*
The paper is organised as follows. We prove Theorem [Theorem 7](#tree1){reference-type="ref" reference="tree1"}, Theorem [Theorem 8](#degen3){reference-type="ref" reference="degen3"} and Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"} in Section [2](#tree){reference-type="ref" reference="tree"}, Section [3](#d3){reference-type="ref" reference="d3"} and Section [4](#upper){reference-type="ref" reference="upper"}, respectively. Note that the upper bound in Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"} also gives the upper bound in Theorem [Theorem 2](#triangle){reference-type="ref" reference="triangle"}. Finally, we prove the lower bound of Theorem [Theorem 2](#triangle){reference-type="ref" reference="triangle"} in Section [5](#t){reference-type="ref" reference="t"}.
# Trees {#tree}
In this section, we prove Theorem [Theorem 7](#tree1){reference-type="ref" reference="tree1"}. Note that any forest is a subgraph of a tree; therefore, by Lemma [Lemma 4](#subgraph){reference-type="ref" reference="subgraph"}, it suffices to prove Theorem [Theorem 7](#tree1){reference-type="ref" reference="tree1"} in the case when $H$ is a tree. In fact, we will prove the following stronger result.
**Theorem 10**. *For every integer $m\geq 2$, there exists a constant $D=D(m)$ such that the following holds: If $H$ is any tree on $m$ vertices, then any edge colouring of $K_n$ such that every colour is used to colour at most $\frac1D\binom{n}{2}$ edges contains a rainbow copy of $H$.*
Note that the condition in Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"} implies implicitly that $\frac1D\binom{n}{2}\geq 1$, or equivalently $n\geq\Omega(\sqrt D)$.
We first show that Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"} implies Theorem [Theorem 7](#tree1){reference-type="ref" reference="tree1"}.
*Proof of Theorem [Theorem 7](#tree1){reference-type="ref" reference="tree1"}.* Let $D=D(m)$ be the constant in Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"}. Let $k\geq\lfloor 2D \rfloor$ be an integer. For any sufficiently large integer $n$, assume that $\binom{n}{2}=qk+r$, where $0\le r\le k-1$. Set $e_1=e_2=\cdots=e_{k-r}=q$ and $e_{k-r+1}=\cdots=e_k=q+1$. Then $(e_1,\cdots,e_k)$ is $n$-good and $$e_i\le 1+\frac1k\binom{n}{2}\le \frac1D\binom{n}{2}$$ for all $i\in[k]$. Hence, by Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"}, any colouring of $K_n$ with colour distribution sequence $(e_1,\cdots,e_k)$ contains a rainbow copy of $H$. By definition, this proves $g(H,k)=\infty$. ◻
We will prove Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"} with $D(m)=(6m)^{6m}$. Given an edge colouring $c$ of $K_n$ and any vertex $v\in V$, let $$C(v)=\bigcup_{v'\in V\setminus\{v\}}\{c(vv')\}$$ be the set of colours assigned to edges adjacent to $v$. The following technical lemma shows that at least half of the vertices satisfy $|C(v)|\geq2m+1$.
**Lemma 11**. *Let $m\geq 2$ be an integer and let $D=(6m)^{6m}$. Let $c$ be an edge colouring of $K_n$ such that every colour is used to colour at most $\frac1D\binom{n}{2}$ edges. Then there are at most $\frac12n$ vertices $v$ such that $|C(v)|\le 2m$.*
*Proof.* We prove this lemma by contradiction. Assume that there is a set $V_0$ of at least $\frac12n$ vertices $v\in V(K_n)$ such that $|C(v)|\le 2m$. We show by induction that for every integer $0\leq j\leq 2m$, we can find a subset $V_j$ of $V_0$ such that
$$\left|\bigcap_{v\in V_j}C(v)\right|\ge j\text{\quad and\quad}|V_j|\ge \frac{n}{2(6m)^j}.$$
The base case follows from the definition of $V_0$. Suppose now we have picked $V_j$ as required for some $0\leq j\leq 2m-1$. Without loss of generality, assume that $$\{1,2,\dots,j\}\subseteq \bigcap_{v\in V_j}C(v).$$ From the assumption, there are at most $\frac{j{\binom n2}}{D}$ edges coloured with one of the colours $1,2,\cdots,j$. Hence, by pigeonhole, there exists a vertex $u\in V_j$ such that there are at most $\frac{2j\binom{n}{2}}{D|V_j|}$ edges adjacent to $u$ with a colour in $\{1,2,\cdots,j\}$. Note that for any $u'\in V_j\setminus\{u\}$, the colour $c(uu')$ of the edge $uu'$ must belong to the set $C(u)$, which has size at most $2m$. By pigeonhole again, there exists some colour $q\in C(u)\setminus[j]$ such that at least $$\frac{|V_j|-1-\frac{2j\binom{n}{2}}{D|V_j|}}{2m-j}$$ vertices $u'\in V_j\setminus\{u\}$ satisfy $c(uu')=q$. Set $$V_{j+1}=\{u'\mid u'\in V_j\setminus\{u\}\text{ and } c(uu')=q\}.$$ We verify that $V_{j+1}$ works.
Indeed, the first condition holds as $$\{1,2,\cdots,j,q\}\subseteq \bigcap_{v\in V_{j+1}}C(v).$$ For the second condition, we have $$\begin{aligned}
|V_{j+1}|&\ge \frac{|V_j|-1-\frac{2j\binom{n}{2}}{D|V_j|}}{2m-j}\\
&\ge \frac{|V_j|-1-\frac{2j(n-1)(6m)^j}{D}}{2m} && \text{by }|V_j|\ge \frac{n}{2(6m)^j} \\
&\ge \frac{|V_j|-1-\frac n{(6m)^{2m}}}{2m} && \text{by }D=D(m)=(6m)^{6m}\\
&\ge \frac{|V_j|}{6m}\ge \frac{n}{2(6m)^{j+1}}, && \text{by }|V_j|\ge \frac{n}{2(6m)^j} \end{aligned}$$ as required. Hence, we can pick the sets $V_0,V_1,\cdots,V_{2m}$ by induction.
Note that for any $u\in V_{2m}$, we have $$2m\geq|C(u)|\geq\left|\bigcap_{v\in V_{2m}}C(v)\right|\geq 2m.$$ Hence, there exists a set $C$ of size $2m$ such that $C(u)=C$ for all $u\in V_{2m}$. In particular, for any two vertices $u,u'\in V_{2m}$, the colour of the edge $uu'$ must belong to this set $C$. Therefore, we have $$\binom{|V_{2m}|}{2}\le 2m\frac{\binom{n}{2}}{D}.$$ However, $$\binom{|V_{2m}|}{2}\ge \frac{1}{4}|V_{2m}|^2\ge \frac{n^2}{16(6m)^{4m}}\ge \frac{n^2}{(6m)^{5m}}> 2m\frac{\binom{n}{2}}{D},$$ a contradiction. This completes the proof of Lemma [Lemma 11](#smallcolour){reference-type="ref" reference="smallcolour"}. ◻
Now we can prove Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"}.
*Proof of Theorem [Theorem 10](#tree2){reference-type="ref" reference="tree2"}.* We will show that $D(m)=(6m)^{6m}$ works. Proceed by induction on $m$. The case $m=2$ is trivial. Suppose we have already established the theorem for all trees with fewer than $m$ vertices. Since $H$ is a tree, it has a leaf $v_0$. Let the vertices of $H$ be $v_0,v_1,\cdots, v_{m-1}$, and suppose the unique edge adjacent to $v_0$ is $v_0v_1$. Let $H'=H-v_0$.
Let $G'$ be the subgraph of $K_n$ induced by the set of vertices $v$ satisfying $|C(v)|\ge 2m+1$. Then $G'$ contains at least $\frac12n$ vertices by Lemma [Lemma 11](#smallcolour){reference-type="ref" reference="smallcolour"}. By assumption, each colour is used at most $$\frac{\binom{n}{2}}{D(m)}\le \frac{\binom{|V(G')|}{2}}{D(m-1)}$$ times in $G'$. Therefore, we may apply the induction hypothesis to find a rainbow copy of $H'$ in $G'$. Suppose the vertex $v_i$ of $H'$ is mapped to the vertex $u_i$ of $G'$ for all $1\le i\le m-1$. Note that
$$\left|\bigcup_{v\in V\setminus\{u_1,u_2,\cdots,u_{m-1}\}}\{c(u_1v)\}\right|\ge \big|C(u_1)\big|-(m-2)\geq m-1.$$ Hence, we can find a vertex $u_0\in V\setminus\{u_1,u_2,\cdots,u_{m-1}\}$ such that the edge $u_0u_1$ has a different colour from all $m-2$ edges in this rainbow copy of $H'$. It follows that there exists a rainbow copy of $H$ in $K_n$ with vertices $u_0,u_1,\cdots,u_{m-1}$. ◻
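For small instances, the conclusion of the theorem above can also be checked directly by exhaustive search. The sketch below (illustrative only, and exponential in the number of vertices of $H$) looks for a rainbow copy of a graph $H$, given by its edge list on the vertex set $\{0,\dots,m-1\}$, inside a coloured $K_n$; for a path on four vertices one would take `H_edges = [(0, 1), (1, 2), (2, 3)]`.

```python
from itertools import permutations

def has_rainbow_copy(colouring, n, H_edges):
    """Brute-force test for a rainbow copy of H in an edge colouring of K_n.
    H is given by its edge list on {0, ..., m-1}; the colouring maps each
    pair (u, v) with u < v to a colour."""
    m = 1 + max(max(u, v) for u, v in H_edges)
    for image in permutations(range(n), m):
        colours = [colouring[tuple(sorted((image[u], image[v])))]
                   for u, v in H_edges]
        if len(set(colours)) == len(colours):  # all edge colours distinct
            return True
    return False
```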
# Graphs with degeneracy at least 3 {#d3}
In this section we look at the case when $H$ is a graph with degeneracy at least 3 and prove Theorem [Theorem 8](#degen3){reference-type="ref" reference="degen3"}. We begin with the lower bound.
**Lemma 12**. *Suppose $n\geq m\geq 3$ and $e_1,\cdots,e_k\geq0$ are integers such that $(e_1,\cdots,e_k)$ is $n$-good and $$\sum_{i=1}^k\binom{e_i}{2}<\frac{n(n-1)(n-2)}{m(m-1)(m-2)}.$$ Then any $(e_1,\cdots,e_k)$-colouring of $K_n$ contains a rainbow copy of $K_m$.*
*Proof.* Fix an $(e_1,\cdots,e_k)$-colouring of $K_n$. For each $i\in[k]$, let $E_i$ be the set of edges in $K_n$ that have colour $i$. Pick a size $m$ subset $U$ of $[n]$ uniformly at random. For each $i\in[k]$ and any distinct edges $e,e'\in E_i$, the probability that both $e$ and $e'$ lie inside $U$ is either $\frac{\binom{n-3}{m-3}}{\binom{n}{m}}=\frac{m(m-1)(m-2)}{n(n-1)(n-2)}$ if $e$ and $e'$ share a vertex, or $\frac{\binom{n-4}{m-4}}{\binom{n}{m}}=\frac{m(m-1)(m-2)(m-3)}{n(n-1)(n-2)(n-3)}\leq\frac{m(m-1)(m-2)}{n(n-1)(n-2)}$ if they don't. By linearity of expectation, the expected number of monochromatic edge pairs in $U$ is at most $$\frac{m(m-1)(m-2)}{n(n-1)(n-2)}\sum_{i=1}^k\binom{e_i}{2}<1.$$ Hence, there exists a realisation of $U$ such that the complete graph induced by the $m$ vertices in $U$ is rainbow. ◻
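The hypothesis of the lemma above is a first-moment condition and is easy to evaluate numerically. The following short sketch (illustrative; the name is ours) returns the bound computed in the proof, so a value strictly below $1$ guarantees a rainbow $K_m$ in every colouring with the given distribution.

```python
from math import comb

def monochromatic_pair_bound(n, m, e):
    """Upper bound from the proof above on the expected number of monochromatic
    edge pairs spanned by a uniformly random m-subset of the n vertices."""
    return sum(comb(ei, 2) for ei in e) * m * (m - 1) * (m - 2) / (n * (n - 1) * (n - 2))
```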
**Corollary 13**. *Let $H$ be a graph on $m$ vertices. Then $g(H,k)\geq k/m^3$ for all sufficiently large $k$.*
*Proof.* When $m\le 2$, the theorem is trivial. When $m\ge 3$, let $n=\lfloor k/m^3\rfloor$ and assume $k$ is large enough so that $\binom n2\geq k$. Suppose that $\binom{n}{2}=qk+r$, where $0\le r\le k-1$. Set $e_1=e_2=\cdots=e_{k-r}=q$ and $e_{k-r+1}=\cdots=e_k=q+1$. Then $$\begin{aligned}
\sum_{i=1}^k\binom{e_i}{2}
&\le \sum_{i=1}^k\frac{1}{2}\left(\frac{\binom{n}{2}}{k}+1\right)\frac{\binom{n}{2}}{k}\\
&\leq\sum_{i=1}^k\frac{\binom{n}{2}^2}{k^2}<\frac{n^4}{4k}\le \frac{n^3}{4m^3}\\
&\le \frac{n(n-1)(n-2)}{m(m-1)(m-2)}.\end{aligned}$$ So by Lemma [Lemma 12](#lemma:linearlowerbound){reference-type="ref" reference="lemma:linearlowerbound"}, we can find a rainbow copy of $K_m$, and hence a rainbow copy of $H$ in this colouring of $K_n$. ◻
For the upper bound, we first prove it under the stronger assumption that $H$ has minimum degree at least 3.
**Lemma 14**. *Let $H$ be a graph on $m$ vertices with minimum degree at least 3. Assume $n\geq 2k$ and $(e_1,\cdots,e_k)$ is an $n$-good sequence. Then there exists a rainbow $H$-free $(e_1,\cdots,e_k)$-colouring of $K_n$.*
*Proof.* We prove this by induction on $k$. Note that every graph $H$ with minimum degree 3 contains at least 6 edges, so any colouring using less than 6 colours will be rainbow $H$-free. Thus the statement is trivially true for all $k\leq5$. Assume now $k\geq 6$.
Without loss of generality, assume $e_1\geq\cdots\geq e_k>0$. Then $e_1\geq\frac1k\binom{n}{2}\geq n-1$ and $e_k\leq\frac1k\binom n2$. Let $t\geq1$ be the smallest integer satisfying $\binom{t}{2}+t(n-t)\geq e_k$. By minimality, $\binom{t-1}{2}+(t-1)(n-t+1)<e_k$, and thus $$\binom{t}{2}+t(n-t)-e_k<n-t\le e_1.$$ This means that we can colour $e_k$ of the $\binom{t}{2}+t(n-t)$ edges adjacent to $v_{n-t+1},\cdots,v_n$ in $K_n$ with colour $k$, and colour the remaining ones with colour 1.
Also, since $$\binom{\frac nk}{2}+\frac nk\left(n-\frac nk\right)=\frac n{2k}\left(2n-\frac nk-1\right)\geq\frac n{2k}(n-1)=\frac1k\binom{n}2\geq e_k,$$ we have $t\leq\frac nk$ and thus $n-t\geq n(1-\frac1k)\geq 2(k-1)$. Let $e_1'=e_1-\binom t2-t(n-t)+e_k$ and $e_i'=e_i$ for all $2\leq i\leq k-1$. It is easy to verify that $(e_1', e_2', \cdots, e_{k-1}')$ is an $(n-t)$-good sequence. Hence, we can use induction hypothesis to find a rainbow $H$-free $(e_1',\cdots,e_{k-1}')$-colouring of $K_{n-t}$. Combining this with the colouring above for edges adjacent to $v_{n-t+1},\cdots, v_n$, we obtain an $(e_1,\cdots,e_k)$-colouring of $K_n$.
We claim that this colouring is rainbow $H$-free. Indeed, let $H'$ be any subgraph of $K_n$ isomorphic to $H$. If all vertices of $H'$ are from $v_1,\cdots,v_{n-t}$, then it is not rainbow by the inductive construction. Otherwise, $H'$ contains at least one of $v_{n-t+1},\cdots,v_n$. But edges adjacent to each of these vertices can only have colour 1 or $k$, and such a vertex has degree at least 3 in $H'\cong H$, so two of its incident edges in $H'$ share a colour and $H'$ is not rainbow. ◻
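The proof of the lemma above is constructive and straightforward to implement. The following Python sketch (our own illustration; the hypotheses $n\geq 2k$ and $\sum_i e_i=\binom n2$ are assumed rather than re-checked) peels off the $t$ highest-labelled vertices at each step exactly as in the proof, with the currently least-used colour playing the role of colour $k$ and the currently most-used colour playing the role of colour $1$.

```python
from itertools import combinations

def lemma14_colouring(n, counts):
    """Sketch of the recursive construction above.  `counts` maps colours to
    edge counts summing to n*(n-1)//2, with n >= 2*len(counts) assumed.
    Returns a dict sending every edge (u, v), u < v, of K_n to a colour."""
    assert sum(counts.values()) == n * (n - 1) // 2
    colouring = {}
    active = {c: e for c, e in counts.items() if e > 0}
    m = n  # we always work inside the clique on vertices 0, ..., m-1
    while m > 1 and active:
        if len(active) == 1:
            (c,) = active
            for edge in combinations(range(m), 2):
                colouring[edge] = c
            break
        rare = min(active, key=active.get)      # plays the role of colour k
        e_rare = active.pop(rare)
        common = max(active, key=active.get)    # plays the role of colour 1
        t = 1                                   # smallest t with C(t,2)+t(m-t) >= e_rare
        while t * (t - 1) // 2 + t * (m - t) < e_rare:
            t += 1
        # all edges meeting the last t vertices of the current clique
        boundary = [edge for edge in combinations(range(m), 2) if edge[1] >= m - t]
        for i, edge in enumerate(boundary):
            colouring[edge] = rare if i < e_rare else common
        active[common] -= len(boundary) - e_rare
        if active[common] == 0:
            del active[common]
        m -= t
    return colouring
```

For instance, `lemma14_colouring(12, {1: 20, 2: 20, 3: 26})` produces a colouring of $K_{12}$ with the prescribed distribution which, by the argument above, is rainbow $H$-free for every $H$ of minimum degree at least $3$.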
We can now deduce Theorem [Theorem 8](#degen3){reference-type="ref" reference="degen3"}.
*Proof of Theorem [Theorem 8](#degen3){reference-type="ref" reference="degen3"}.* Let $H$ be a graph on $m$ vertices with degeneracy at least 3. Then $H$ must contain a subgraph $H'$ with minimum degree at least 3. Lemma [Lemma 14](#lemma:mindegree3){reference-type="ref" reference="lemma:mindegree3"} implies that $g(H',k)\leq 2k$. So by Lemma [Lemma 4](#subgraph){reference-type="ref" reference="subgraph"}, $g(H,k)\leq g(H',k)\leq2k$, which gives the upper bound. The lower bound follows from Corollary [Corollary 13](#generallower){reference-type="ref" reference="generallower"}. ◻
# Graphs with degeneracy 2 {#upper}
In this section, we study the case when $H$ is a graph with degeneracy 2 and prove Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"}. We only need to prove the upper bound as the lower bound follows from Corollary [Corollary 13](#generallower){reference-type="ref" reference="generallower"}. Let $H$ be a graph with degeneracy 2. To prove an upper bound of the form $g(H,k)\leq N$, we need to construct a rainbow $H$-free $(e_1,\cdots,e_k)$-colouring of $K_n$ for every $n\geq N$ and every $n$-good sequence $(e_1,\cdots,e_k)$. It turns out that repeated suitable applications of the following colouring steps would work.
**Definition 15**. *Suppose we have a complete graph $K_n$ and a sequence $(e_1,\cdots,e_k)$ satisfying $\sum_{i=1}^ke_i\geq\binom{n}{2}$.*
- *A standard colouring step on $K_n$ involves splitting it into $K_t$ and $K_{n-t}$ for some $1\leq t\leq\left\lfloor\frac n2\right\rfloor$, and colouring all $t(n-t)$ edges between them with some colour $i$ satisfying $e_i\geq t(n-t)$. The size of such a standard colouring step is $t$.*
- *A simple colouring step is a standard colouring step of size 1.*
We are interested in colourings of $K_n$ that can be obtained entirely from standard colouring steps.
**Definition 16**. *Given a complete graph $K_n$ and a sequence $(e_1,\cdots,e_k)$ satisfying $\sum_{i=1}^ke_i\geq\binom{n}{2}$, we say $K_n$ has a standard colouring (with respect to $(e_1,\cdots,e_k)$) if it has an edge colouring that can be obtained from a sequence of standard colouring steps. More formally, $K_n$ has a standard colouring if it has an edge colouring that can be obtained using the following stepwise algorithm, where we view each set $S_\ell$ as a multiset.*
- *Initialise by setting $S_0=\{K_n\}$ and $e_{0,j}=e_j$ for all $j\in[k]$*
- *For $\ell\geq 0$, if $S_\ell=\{K_1,\cdots,K_1\}$, terminate.*
*Otherwise, pick some $K_m\in S_\ell$ with $m\geq 2$, some $i\in[k]$ and some $1\leq t\leq\left\lfloor\frac m2\right\rfloor$, such that $t(m-t)\leq e_{\ell,i}$.*
- *Set $S_{\ell+1}=(S_\ell\setminus\{K_m\})\cup\{K_t,K_{m-t}\}$, $e_{\ell+1,i}=e_{\ell,i}-t(m-t)$ and $e_{\ell+1,j}=e_{\ell,j}$ for all $j\in[k]\setminus\{i\}$. Note that this corresponds to a standard colouring step on $K_m$ of size $t$ using colour $i$.*
- *Repeat the second and third step.*
*Similarly, we say $K_n$ has a simple colouring if it has an edge colouring that can be obtained from a sequence of simple colouring steps.*
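The algorithm above is pure bookkeeping on a multiset of cliques and on the vector of unused edge counts, and it is convenient to have it in executable form when experimenting with small cases. The following Python sketch is one possible implementation (illustrative only; colours are indexed $0,\dots,k-1$ here).

```python
class StandardColouring:
    """State of the stepwise algorithm of the definition above: a list of
    cliques (each a list of vertex labels) together with the number of edges
    of each colour that may still be used."""

    def __init__(self, n, counts):
        assert sum(counts) == n * (n - 1) // 2   # the sequence is n-good
        self.cliques = [list(range(n))]
        self.remaining = list(counts)
        self.colouring = {}                      # edge (u, v), u < v  ->  colour

    def step(self, clique_index, t, colour):
        """Perform one standard colouring step of size t on the chosen clique."""
        clique = self.cliques[clique_index]
        m = len(clique)
        assert 1 <= t <= m // 2 and t * (m - t) <= self.remaining[colour]
        part, rest = clique[:t], clique[t:]
        for u in part:
            for v in rest:
                self.colouring[(min(u, v), max(u, v))] = colour
        self.remaining[colour] -= t * (m - t)
        self.cliques[clique_index:clique_index + 1] = [part, rest]

    def finished(self):
        return all(len(c) == 1 for c in self.cliques)
```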
We first prove a lemma that explains why we are interested in standard colouring steps and standard colourings.
**Lemma 17**. *Every standard colouring of $K_n$ is rainbow cycle-free, and therefore rainbow $H$-free for all graphs $H$ with degeneracy 2.*
*Proof.* Let $C$ be an arbitrary cycle in $K_n$. Let $\ell\geq1$ be the smallest index such that the vertices of $C$ do not all belong to the same complete graph in $S_\ell$. By minimality, they all belong to the same complete graph in $S_{\ell-1}$, say $K_m$, which was split into two complete graphs $K_t$ and $K_{m-t}$ by a standard colouring step in the $\ell$-th iteration of the algorithm. By definition, all edges between $K_t$ and $K_{m-t}$ have the same colour. Since at least two edges of $C$ go between $K_t$ and $K_{m-t}$, $C$ is not rainbow. ◻
Assume now that the algorithm starts with a complete graph $K_n$ and an $n$-good sequence $(e_1,\cdots,e_k)$. Then throughout the algorithm, there is a relation between the sizes of the complete graphs in $S_{\ell}$ and the $e_{\ell,j}$'s.
**Lemma 18**. *For every $\ell\geq 0$, if $S_\ell=\{K_{m_1},\cdots,K_{m_r}\}$, then $$\sum_{j=1}^r\binom{m_j}{2}=\sum_{j=1}^ke_{\ell,j}.$$*
*Proof.* We use induction on $\ell$. The case $\ell=0$ follows from the assumption that $(e_1,\cdots,e_k)=(e_{0,1},\cdots,e_{0,k})$ is $n$-good. Suppose this is true for some $\ell\geq0$, and without loss of generality assume that the $(\ell+1)$-th standard colouring step is splitting $K_{m_1}$ into $K_t$ and $K_{m_1-t}$ and colouring the $t(m_1-t)$ edges with colour $1$. Then we have $$\begin{aligned}
\binom{t}{2}+\binom{m_1-t}{2}+\sum_{j=2}^r\binom{m_j}{2}&=\binom{t}{2}+\binom{m_1-t}{2}-\binom{m_1}{2}+\sum_{j=1}^ke_{\ell,j}\\
&=-t(m_1-t)+e_{\ell,1}+\sum_{j=2}^ke_{\ell,j}\\
&=\sum_{j=1}^ke_{\ell+1,j},\end{aligned}$$ as required. ◻
This relation can also be proved by noting that the edges that remain to be coloured after $\ell$ iterations are exactly those within the complete graphs in $S_\ell$.
Using this, we can prove that a simple colouring step can always be performed during the algorithm under a mild size condition.
**Lemma 19**. *For every $\ell\geq 0$, if $S_\ell=\{K_{m_1},\cdots,K_{m_r}\}$ and $m_1\geq 2k$, then a simple colouring step can be performed on $K_{m_1}$.*
*Proof.* By Lemma [Lemma 18](#relation){reference-type="ref" reference="relation"}, we have $$\sum_{j=1}^ke_{\ell,j}=\sum_{j=1}^r\binom{m_j}{2}\geq\binom{m_1}{2}.$$ So there exists $i\in[k]$ such that $e_{\ell,i}\geq\frac1{k}\binom{m_1}{2}\geq m_1-1$. Hence, we can perform a simple colouring step on $K_{m_1}$ using colour $i$. ◻
To obtain a result analogous to Lemma [Lemma 19](#remove1){reference-type="ref" reference="remove1"} below size $2k$, we need the following important concept of a cushion.
**Definition 20**. *For $\ell\geq 0$, if $S_\ell=\{K_{m_1},\cdots,K_{m_r}\}$, then we say the cushion we have for $K_{m_1}$ is $\sum_{j=1}^ke_{\ell,j}-\binom{m_1}{2}=\sum_{j=2}^r\binom{m_j}{2}$, and we say that this cushion is provided by $K_{m_2},\cdots,K_{m_r}$.*
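In terms of the executable sketch given above, the cushion is simply the total number of still-unused edges minus the number of edges inside the chosen clique (equivalently, by the relation proved above, the number of edges inside all the other cliques). A one-line illustration:

```python
from math import comb

def cushion(state, clique_index):
    """Cushion available for one clique of a partial standard colouring, where
    `state` is an instance of the StandardColouring sketch given earlier."""
    return sum(state.remaining) - comb(len(state.cliques[clique_index]), 2)
```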
**Lemma 21**. *For all $m\geq 2$, suppose $K_m\in S_{\ell}$ and the cushion we have for $K_m$ is at least $\min\{\frac12(k^2-k),km\}$. Then $m-1$ consecutive simple colouring steps can be performed on $K_m$ to produce a simple colouring.*
*Proof.* It suffices to show that for each $0\leq t\leq m-2$, it is possible to perform a simple colouring step on $K_{m-t}$. Indeed, performing simple colouring steps on $K_{m},\cdots,K_{m-t+1}$ colours $\sum_{j=1}^{t}(m-j)=\frac12t(2m-t-1)$ edges in total.
If the cushion we have is at least $\frac12(k^2-k)$, then $$\begin{aligned}
\sum_{j=1}^ke_{\ell+t,j}&=\sum_{j=1}^ke_{\ell,j}-\frac12t(2m-t-1)\\
&\geq\binom{m}{2}+\frac12(k^2-k)-\frac12t(2m-t-1)\\
&=\frac12(m^2-m+k^2-k-2mt+t^2+t)\\
&=\frac12(t-m+k)(t-m+k+1)+k(m-t-1)\\
&\geq k(m-t-1).\end{aligned}$$
If instead the cushion we have is at least $km$, then $$\begin{aligned}
\sum_{j=1}^ke_{\ell+t,j}&=\sum_{j=1}^ke_{\ell,j}-\frac12t(2m-t-1)\\
&\geq\binom{m}{2}+km-\frac12t(2m-t-1)\\
&=\binom{m-t}{2}+km\\
&\geq km\geq k(m-t-1).\end{aligned}$$ Hence, in both cases there exists a colour $i$ with $e_{\ell+t,i}\geq m-t-1$. So we can perform a simple colouring step on $K_{m-t}$ using colour $i$, as required. ◻
We remark that a weaker version of this lemma in the case when $m=2k$ was proved by Feffer, Fu and Yan in [@FFY] using a much more complicated method.
We prove one final lemma before proving Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"}. This lemma allows us to perform several consecutive standard colouring steps of the same size, which will be useful for creating a cushion.
**Lemma 22**. *Let $n,m,t,k\geq 1$ and $e_1,\cdots,e_k\geq 0$ be integers satisfying $n>tm$ and $\sum_{i=1}^ke_i>tn(m+k)$. Then we can perform $m$ consecutive standard colouring steps of size $t$ on $K_n$.*
*Proof.* For each $j\in[m]$, the $j$-th size $t$ standard colouring step splits $K_{n-(j-1)t}$ into $K_t$ and $K_{n-jt}$, and colour the $t(n-jt)$ edges between them using one of the $k$ colours. Since $t(n-jt)\leq tn$, colour $i$ can be used to perform a size $t$ standard colouring step at least $\left\lfloor\frac{e_i}{tn}\right\rfloor$ times for each $i\in[k]$. Since $$\sum_{i=1}^k\left\lfloor\frac{e_i}{tn}\right\rfloor\geq\sum_{i=1}^k\frac{e_i}{tn}-k>m+k-k=m,$$ it is possible to perform $m$ such steps. ◻
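As a quick sanity check, the counting in this proof amounts to the following test (illustrative): each step of size $t$ colours at most $tn$ edges, so colour $i$ alone supports at least $\lfloor e_i/(tn)\rfloor$ such steps.

```python
def supports_steps(e, n, t, m):
    """Sufficient condition from the proof above for performing m consecutive
    standard colouring steps of size t on K_n with edge budgets e[0..k-1]."""
    return n > t * m and sum(ei // (t * n) for ei in e) >= m
```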
We now sketch the upper bound proof in Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"}, which proceeds in three stages:
- Stage 1: We perform $k$ consecutive standard colouring steps of size $r=\Theta\left(\frac{k^{0.5}}{(\log k)^{0.5}}\right)$ using Lemma [Lemma 22](#cr){reference-type="ref" reference="cr"}. Let $\mathcal{R}$ be the set of $k$ complete graphs $K_r$ split off in these steps. These complete graphs in $\mathcal{R}$ will provide a small but still significant amount of cushion.
- Stage 2: We perform several standard colouring steps of larger size, so that the set $\mathcal{C}$ of complete graphs split off in this stage provides a big cushion of at least $\frac12k^2$.
- Stage 3: By Lemma [Lemma 19](#remove1){reference-type="ref" reference="remove1"}, we can reduce the size of the largest remaining complete graph down to $K_{2k}$. Using the cushion provided by $\mathcal{C}$ and Lemma [Lemma 21](#2kspare){reference-type="ref" reference="2kspare"}, we can find a simple colouring for this $K_{2k}$. Then, we use Lemma [Lemma 21](#2kspare){reference-type="ref" reference="2kspare"} again and the cushion provided by $\mathcal{R}$ to find simple colourings for each complete graph in $\mathcal{C}$. Finally, we show that the cushion provided by $\mathcal{R}$ is enough for us to find simple colourings for the complete graphs in $\mathcal{R}$ itself as well.
The idea of this proof is already present implicitly in the upper bound proof in [@FFY] but with $\mathcal{C}$ and $\mathcal{R}$ being the same set, which turns out to be less than optimal.
Finally, we can prove Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"}.
*Proof of Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"}.* The lower bound follows directly from Corollary [Corollary 13](#generallower){reference-type="ref" reference="generallower"}, so we just need to prove the upper bound. The proof proceeds in 3 stages. Assume $k$ is sufficiently large and set $$\beta=5000000,\text{ }n=\left\lfloor\frac{\beta k^{1.5}}{(\log{k})^{0.5}}\right\rfloor.$$ Let $(e_1,\cdots,e_k)$ be any $n$-good sequence, and write $e=\binom{n}{2}$. Initialise the colouring algorithm by setting $e_{0,j}=e_j$ for all $j\in[k]$ and $S_0=\{K_n\}$.
**Stage 1: The Small Cushion Steps**
Set $r=\left\lfloor\frac{\beta k^{0.5}}{30(\log k)^{0.5}}\right\rfloor$. Since $$\sum_{j=1}^ke_{0,j}=e=\binom{n}{2}>\frac{\beta^2k^3}{3\log k}\geq10krn>rn(k+k),$$ we may perform $k$ consecutive standard colouring steps of size $r$ on $K_n$ using Lemma [Lemma 22](#cr){reference-type="ref" reference="cr"}. Let $\mathcal{R}$ be this collection of $k$ complete graphs of size $r$. Note that we have coloured at most $krn\leq\frac{\beta^2k^3}{30\log k}<0.1e$ edges. Therefore, at the end of this stage, we have $\sum_{j=1}^ke_{k,j}\geq0.9e$.
**Stage 2: The Big Cushion Steps**
Let $J_1=\left\{j\in[k]\mid e_{k,j}\geq\frac{\beta^2k^{2.25}}{(\log k)^{1.25}}\right\}$. Since $$|J_1|\frac{\beta^2k^{2.25}}{(\log k)^{1.25}}\leq\sum_{j\in J_1}e_{k,j}\leq e,$$ it follows that $|J_1|\leq\frac12k^{0.75}(\log k)^{0.25}$. We split into two cases.
**Case 1.** $\sum_{j\in J_1}e_{k,j}\geq0.1e$.
Set $c=\left\lfloor\frac{k^{0.75}}{(\log k)^{0.75}}\right\rfloor$. Since $$\sum_{j\in J_1}e_{k,j}\geq0.1e\geq\frac{\beta^2k^3}{30\log k}\geq cn\cdot\frac{\beta k^{0.75}(\log k)^{0.25}}{30}>cn(\left\lceil k^{0.75}(\log k)^{0.25}\right\rceil+|J_1|),$$ we can perform $\left\lceil k^{0.75}(\log k)^{0.25}\right\rceil$ consecutive standard colouring steps of size $c$ using only the colours in $J_1$ by Lemma [Lemma 22](#cr){reference-type="ref" reference="cr"}. Let $\mathcal{C}$ be this collection of $\left\lceil k^{0.75}(\log k)^{0.25}\right\rceil$ complete graphs of size $c$. Observe that the complete graphs in $\mathcal{C}$ create a cushion of $$k^{0.75}(\log k)^{0.25}\binom{c}{2}\geq\frac{k^{2.25}}{3(\log k)^{1.25}}\gg k^2,$$ which is enough for us to apply Lemma [Lemma 21](#2kspare){reference-type="ref" reference="2kspare"} later on. At this point, the set of complete graphs yet to be coloured consists of the complete graphs in $\mathcal{R}$ coming from Stage 1, the complete graphs in $\mathcal{C}$ coming from Stage 2 and one other large complete graph of size $n-rk-c\left\lceil k^{0.75}(\log k)^{0.25}\right\rceil\gg2k$.
**Case 2.** $\sum_{j\in J_1}e_{k,j}<0.1e$.
Let $J_2=\left\{j\in[k]\mid e_{k,j}\leq\frac{\beta^2k^2}{30\log k}\right\}$, then $\sum_{j\in J_2}e_{k,j}\leq\frac{\beta^2k^3}{30\log k}\leq0.1e$. Let $J_3=[k]\setminus(J_1\cup J_2)$. Without loss of generality, assume $J_3=[\ell]$. Then $\frac{\beta^2k^2}{30\log k}<e_{k,j}<\frac{\beta^2k^{2.25}}{(\log k)^{1.25}}$ for all $j\in J_3=[\ell]$ and $$\sum_{j=1}^\ell e_{k,j}=\sum_{j=1}^ke_{k,j}-\sum_{j\in J_1}e_{k,j}-\sum_{j\in J_2}e_{k,j}\geq0.9e-0.1e-0.1e=0.7e.$$
For each $j\in J_1\cup J_2$, use colour $j$ to perform as many simple colouring steps as possible to the largest remaining complete graph. Suppose a total of $N-k$ simple colouring steps are performed. Then for each $j\in J_1\cup J_2$, we have $e_{N,j}<n$, as otherwise we could perform another simple colouring step using colour $j$. Let $K_{x_0}$ be the largest complete graph in $S_{N}$. Then $S_{N}$ consists of $K_{x_0}$, $k$ complete graphs $K_r$ coming from Stage 1, and many $K_1$ from the simple colouring steps above. Therefore, by Lemma [Lemma 18](#relation){reference-type="ref" reference="relation"}, we have $$\binom{x_0}{2}+k\binom{r}{2}=\sum_{j\in J_1\cup J_2}e_{N,j}+\sum_{j=1}^\ell e_{N,j}\geq\sum_{j=1}^\ell e_{N,j}=\sum_{j=1}^\ell e_{k,j}\geq0.7e.$$ Since $k\binom{r}{2}\ll e$, it follows that $0.5x_0^2\geq\binom{x_0}{2}\geq0.66e\geq0.32\frac{\beta^2k^3}{\log k}$, and thus $x_0\geq\frac{4\beta k^{1.5}}{5(\log k)^{0.5}}$.
Now consider the following process.
- Start with the complete graph $K_{x_0}$.
- In the $j$-th step, if $x_{j-1}<\frac{2\sqrt{\beta}k^{1.25}}{(\log k)^{0.25}}$, we stop. Otherwise, perform a standard colouring step on $K_{x_{j-1}}$ of maximum possible size $c_j<\frac12x_{j-1}$ to split $K_{x_{j-1}}$ into $K_{x_j}$ and $K_{c_j}$, where $x_j=x_{j-1}-c_j$.
The maximality of $c_j$ implies that $$c_jx_j\leq e_{N+j-1,j} \text{\quad and \quad} (c_j+1)(x_j-1)>e_{N+j-1,j}.$$ From this, it follows that $$e_{N+j,j}=e_{N+j-1,j}-c_jx_j<x_j-c_j-1\leq x_j-1< n,$$ and $$\frac{e_{N+j-1,j}}{2x_j}\leq\frac{e_{N+j-1,j}}{x_j-1}-1\leq c_j\leq\frac{e_{N+j-1,j}}{x_j}, \tag{$\spadesuit$}$$ where the first inequality holds because $$e_{N+j-1,j}=e_{k,j}\ge \frac{\beta^2k^2}{30\log k}>2n\ge 2x_j.$$
Assume the process stops after $\ell'$ colouring steps. We claim that $x_{\ell'}\le\frac{2\sqrt{\beta}k^{1.25}}{(\log k)^{0.25}}$. Indeed, this follows trivially if the process stops because $x_{j-1}<\frac{2\sqrt{\beta}k^{1.25}}{(\log k)^{0.25}}$ for some $j\in[\ell]$. If instead we performed all $\ell$ standard colouring steps described above, then at the end we have $$\sum_{j=1}^ke_{N+\ell,j}=\sum_{j\in J_1\cup J_2}e_{N,j}+\sum_{j=1}^\ell e_{N+j,j}< kn.$$ This implies that $x_{\ell}<\frac{2\sqrt{\beta}k^{1.25}}{(\log k)^{0.25}}$, as required, because otherwise $$\binom{x_{\ell}}{2}>kn\geq\sum_{j=1}^ke_{N+\ell,j},$$ contradicting Lemma [Lemma 18](#relation){reference-type="ref" reference="relation"}.
Since $x_0\geq\frac{4\beta k^{1.5}}{5(\log k)^{0.5}}$, we have $\frac{x_0}{x_{\ell'}}\geq \frac{2\sqrt{\beta}k^{0.25}}{5(\log k)^{0.25}}$. Moreover, we have $$2k\ll\frac{\sqrt{\beta}k^{1.25}}{(\log k)^{0.25}}\leq \frac12x_{\ell'-1}\leq x_{\ell'}<\frac{2\sqrt{\beta}k^{1.25}}{(\log k)^{0.25}},$$ It follows that for $j\in[\ell']$, we have $$c_j\leq\frac{e_{N+j-1,j}}{x_j}\leq\frac{\beta^2k^{2.25}}{(\log k)^{1.25}}\frac{(\log k)^{0.25}}{\sqrt{\beta}k^{1.25}}\leq\frac{\beta^{1.5}k}{\log k}. \tag{$\clubsuit$}$$
Let $\mathcal{C}$ be the collection $K_{c_1},\cdots,K_{c_{\ell'}}$ of small complete graphs split off in this process. The amount of cushion provided by $\mathcal{C}$ is at least
$$\begin{aligned}
\sum_{j=1}^{\ell'}\binom{c_j}{2}\geq\frac13\sum_{j=1}^{\ell'}c_j^2
&\geq\frac13\sum_{j=1}^{\ell'}c_j\frac{e_{N+j-1,j}}{2x_j} && \text{by $(\spadesuit)$}\\
&=\frac13\sum_{j=1}^{\ell'}c_j\frac{e_{k,j}}{2x_j}\\
&\geq\frac{\beta^2k^2}{180\log k}\sum_{j=1}^{\ell'}\frac{c_j}{x_j} && \text{by }e_{k,j}\ge \frac{\beta^2k^2}{30\log k}\\
&=\frac{\beta^2k^2}{180\log k}\sum_{j=1}^{\ell'}\int_{x=x_j}^{x_{j-1}}\frac{1}{x_j}\text{d}x &&\text{by }c_j=x_{j-1}-x_j\\
&\geq\frac{\beta^2k^2}{180\log k}\sum_{j=1}^{\ell'}\int_{x=x_j}^{x_{j-1}}\frac{1}{x}\text{d}x\\
&=\frac{\beta^2k^2}{180\log k}\int_{x=x_{\ell'}}^{x_0}\frac 1x\text{d}x\\
&=\frac{\beta^2k^2}{180\log k}\log{\frac{x_0}{x_{\ell'}}}\\
&\geq\frac{\beta^2k^2}{180\log k}\log\frac{2\sqrt{\beta}k^{0.25}}{5(\log k)^{0.25}}\\
&\geq\frac{\beta^2k^2}{720\log k}(\log k-\log\log k)\gg k^2,\end{aligned}$$ which is enough for us to apply Lemma [Lemma 21](#2kspare){reference-type="ref" reference="2kspare"} later on. At this point, the set of complete graphs left to be coloured consists of those in $\mathcal{R}$, those in $\mathcal{C}$ and the complete graph $K_{x_{\ell'}}$ with $x_{\ell'}\gg2k$.
**Stage 3: Colour the Remaining Complete Graphs**
In both cases, there is only one complete graph left with size at least $2k$. We can first use Lemma [Lemma 19](#remove1){reference-type="ref" reference="remove1"} to perform as many simple colouring steps on it as needed to reduce its size down to $2k$. Then we can use Lemma [Lemma 21](#2kspare){reference-type="ref" reference="2kspare"} to find a simple colouring for it, because the complete graphs in $\mathcal{C}$ provide a cushion of at least $\frac12k^2$.
It remains to colour the complete graphs in $\mathcal{C}$ and $\mathcal{R}$. First we find a simple colouring for each of those in $\mathcal{C}$. Recall that, for every $K_s\in \mathcal{C}$,
- If we are in Case 1, then $s=c=\left\lfloor\frac{k^{0.75}}{(\log k)^{0.75}}\right\rfloor$,
- If we are in Case 2, then $s=c_j\leq\frac{\beta^{1.5}k}{\log k}$ for some $j\in[\ell']$ by $(\clubsuit)$.
Note that the $k$ complete graphs of size $r$ in $\mathcal{R}$ together provide a cushion of at least $$k\binom{r}{2}\geq\frac{\beta^2k^2}{2000\log k}\geq ks$$ for $K_s$. Hence, by Lemma [Lemma 21](#2kspare){reference-type="ref" reference="2kspare"} we can find a simple colouring for $K_s$.
Finally, we prove that we can colour all $k$ complete graphs of size $r$ in $\mathcal{R}$. We do this by showing that it is always possible to perform a simple colouring step on the largest remaining complete graphs in $\mathcal{R}$. Indeed, suppose the largest remaining complete graph has size $2\leq t\leq r$, then the other $k-1$ complete graphs all have size at least $t-1$. Hence by Lemma [Lemma 18](#relation){reference-type="ref" reference="relation"}, we have $$\sum_{j=1}^ke_{L,j}\geq(k-1)\binom{t-1}{2}+\binom{t}{2}\geq(k-1)(t-2)+(t-1)=k(t-2)+1.$$ So there exists some $i\in[k]$ with $e_{L,i}\geq t-1$, which we can use to perform a simple colouring step on $K_t$. As this is true for every $t$, we can repeat and colour all complete graphs in $\mathcal{R}$. This completes a standard colouring of $K_n$ with colour distribution sequence $(e_1,\cdots,e_k)$. By Lemma [Lemma 17](#standardhfree){reference-type="ref" reference="standardhfree"}, this colouring is rainbow $H$-free, which completes the proof. ◻
# Triangle {#t}
In this section, we resolve the question posed by Gyárfás et al. in [@GPPW] by proving Theorem [Theorem 2](#triangle){reference-type="ref" reference="triangle"}. Recall that rainbow $K_3$-free colourings are also known as Gallai colourings, and $g(K_3,k)=g(k)$. As we have proved the upper bound in Section [4](#upper){reference-type="ref" reference="upper"}, it suffices to prove the lower bound here. Note that Theorem [Theorem 2](#triangle){reference-type="ref" reference="triangle"} also shows that the upper bound in Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"} is tight up to the choice of constant in the case when $H=K_3$.
We will use the following fundamental decomposition theorem of Gallai colourings.
**Theorem 23** ([@FFY]). *Suppose we have a Gallai $k$-colouring of $K_n$. Then there exist at most two colours, which we call base colours, and a decomposition of $K_n$ into $m\ge 2$ vertex disjoint complete graphs $K_{n_1},\cdots,K_{n_m}$, such that for each $1\leq i<j\leq m$, there exists a base colour with all edges between $V(K_{n_i})$ and $V(K_{n_j})$ having this colour. Moreover, each base colour is used to colour at least $n-1$ edges between these smaller complete graphs.*
Now we can prove Theorem [Theorem 2](#triangle){reference-type="ref" reference="triangle"}.
*Proof of Theorem [Theorem 2](#triangle){reference-type="ref" reference="triangle"}.* By Theorem [Theorem 9](#degen2){reference-type="ref" reference="degen2"}, we only need to show $g(k)=\Omega(k^{1.5}/(\log k)^{0.5})$. The proof consists of 3 steps.
Consider the colour distribution sequence $(e_1,\cdots,e_k)$ on $n$ vertices given by $e_1=\cdots=e_{c}=a+1$, $e_{c+1}=\cdots=e_{\left\lceil\frac{k}{2}\right\rceil}=a$ and $e_{\left\lceil\frac{k}{2}\right\rceil+1}=\cdots=e_{k}=b$, where $$\alpha=\frac1{10},\text{ } n=\left\lfloor\frac{\alpha k^{1.5}}{(\log k)^{0.5}}\right\rfloor,\text{ } b=\left\lfloor\frac{k}{2}\right\rfloor,\text{ } a=\left\lfloor\frac{\binom{n}{2}-b\left\lfloor\frac{k}{2}\right\rfloor}{\left\lceil\frac{k}{2}\right\rceil}\right\rfloor,\text{ } c=\binom{n}{2}-b\left\lfloor\frac{k}{2}\right\rfloor-a\left\lceil\frac{k}{2}\right\rceil.$$ It is easy to verify that $(e_1,e_2,\cdots,e_k)$ is an $n$-good sequence. We will show that for sufficiently large $k$, there is no Gallai colouring of $K_n$ with colour distribution sequence $(e_1,\cdots,e_k)$.
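For concreteness, this colour distribution sequence is easy to generate explicitly. The sketch below (parameter names follow the proof; $k$ is taken large enough that all entries are non-negative) constructs it and confirms that it is $n$-good.

```python
from math import comb, log, floor

def hard_distribution(k, alpha=0.1):
    """The colour distribution sequence used in the lower bound argument."""
    n = floor(alpha * k ** 1.5 / log(k) ** 0.5)
    b = k // 2                                        # floor(k/2)
    half_up = k - b                                   # ceil(k/2)
    a = (comb(n, 2) - b * (k // 2)) // half_up
    c = comb(n, 2) - b * (k // 2) - a * half_up
    e = [a + 1] * c + [a] * (half_up - c) + [b] * (k // 2)
    assert len(e) == k and sum(e) == comb(n, 2)       # the sequence is n-good
    return n, e
```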
Suppose for a contradiction that there is such a Gallai colouring, then we can perform the following stepwise splitting process by repeatedly applying Theorem [Theorem 23](#strongdecomp){reference-type="ref" reference="strongdecomp"}:
- Initialise with $K_{x_0}=K_n$.
- For each $i\geq0$, if $x_i\le b+1$, we stop. Otherwise, by Theorem [Theorem 23](#strongdecomp){reference-type="ref" reference="strongdecomp"}, we can find a decomposition of $K_{x_i}$ using at most two base colours into $m\ge 2$ vertex disjoint complete graphs $K_{y_1},\cdots,K_{y_m}$, where $y_1\geq\cdots\geq y_m$.
- Let $t_{i+1}=y_m$ and $x_{i+1}=x_i-t_{i+1}$. Split $K_{x_i}$ into $K_{y_m}$ and $K_{x_{i+1}}$, with the latter being the union of $K_{y_1},\cdots,K_{y_{m-1}}$. Note that by Theorem [Theorem 23](#strongdecomp){reference-type="ref" reference="strongdecomp"}, every edge between $V(K_{x_{i+1}})$ and $V(K_{y_m})$ is coloured with a base colour.
- Repeat the second and third step.
Hence, we can find a nested sequence of complete graphs $K_{x_0}\supset K_{x_1}\supset\cdots\supset K_{x_\ell}$, where $n=x_0>x_1>\cdots>x_{\ell-1}\ge b+2>x_{\ell}$. Moreover, by the last part of Theorem [Theorem 23](#strongdecomp){reference-type="ref" reference="strongdecomp"}, in each iteration of the process above, the base colour chosen must be used to colour at least $b+1$ edges. Hence colours $\left\lceil\frac k2\right\rceil+1,\cdots,k$ will never be picked as base colours in this process.
We claim that $t_{i+1}\leq\frac{4(a+1)}{x_{i}}$ for all $0\leq i\leq\ell-1$. Indeed, every edge between the complete graphs $V(K_{x_{i+1}})$ and $V(K_{y_m})$ must be coloured with a base colour. However, the at most 2 base colours together can be used to colour at most $2(a+1)$ edges. This implies that $$\frac12x_it_{i+1}\leq(x_i-t_{i+1})t_{i+1}=x_{i+1}y_m\le 2(a+1),$$ where we used $t_{i+1}=y_m\leq\frac12x_i$. Thus, $t_{i+1}\leq\frac{4(a+1)}{x_i}$, as claimed.
We can now give a lower bound on the number of edges coloured with base colours in this splitting process. This number is $$\begin{aligned}
\sum_{i=1}^{\ell}t_i(x_{i-1}-t_i)&=\sum_{i=1}^{\ell}\int_{x=x_i}^{x_{i-1}}(x_{i-1}-t_i)\text{d}x\\
&\geq\sum_{i=1}^{\ell}\int_{x=x_i}^{x_{i-1}}\left(x_{i-1}-\frac{4(a+1)}{x_{i-1}}\right)\text{d}x\\
&\geq\sum_{i=1}^{\ell}\int_{x=x_i}^{x_{i-1}}\left(x-\frac{4(a+1)}{x}\right)\text{d}x\\
&=\int_{x=x_{\ell}}^{x_0}\left(x-\frac{4(a+1)}{x}\right)\text{d}x\\
&\geq\int_{x=b+1}^{n}\left(x-\frac{4(a+1)}{x}\right)\text{d}x\\
&=\frac{n^2-(b+1)^2}{2}-4(a+1)\log\frac{n}{b+1}\\
&\geq\frac{n^2}{2}-\frac{2b^2}{3}-4(a+1)\log\frac n{b}.\end{aligned}$$
Recall that only the first $\left\lceil\frac{k}{2}\right\rceil$ colours can be used as base colours in this process.
The total number of edges that can be coloured with the first $\left\lceil\frac{k}{2}\right\rceil$ colours is $$\sum_{i=1}^{\left\lceil\frac k2\right\rceil}e_i=\binom{n}{2}-b\left\lfloor\frac k2\right\rfloor\leq\frac{n^2}{2}-b^2.$$ So we will have a contradiction if $$\frac{n^2}{2}-\frac{2b^2}{3}-4(a+1)\log\frac n{b}>\frac{n^2}{2}-b^2.$$
Note that for sufficiently large $k$ we have $$b^2=\left\lfloor\frac{k}{2}\right\rfloor^2\geq\left(\frac{k-1}2\right)^2\geq\frac{k^2}{5},$$ $$4(a+1)\leq 5a\leq 5\frac{\binom{n}{2}}{\left\lceil{\frac k2}\right\rceil}\leq\frac{5n^2}{k}\leq\frac{5\alpha^2k^2}{\log k},$$ $$\log\frac nb\leq\log\left(\frac{\alpha k^{1.5}}{(\log k)^{0.5}}\frac{3}{k}\right)=\log\left(\frac{3\alpha k^{0.5}}{(\log k)^{0.5}}\right)=\frac12(\log k-\log\log k)+\log(3\alpha)\leq\log k.$$ Putting these together and using $\alpha=\frac1{10}$, we get $$\left(\frac{n^2}{2}-\frac{2b^2}{3}-4(a+1)\log\frac n{b}\right)-\left(\frac{n^2}{2}-b^2\right)=\frac{b^2}3-4(a+1)\log\frac nb\geq\frac{k^2}{15}-\frac{k^2}{20}>0,$$ as required. This proves that if $k$ is sufficiently large, then there is no Gallai colouring with colour distribution sequence $(e_1,\cdots,e_k)$. Therefore, $g(k)\geq\frac{\alpha k^{1.5}}{(\log k)^{0.5}}$. ◻
J. de Oliveira Bastos, F. S. Benevides and J. Han. The number of Gallai $k$-colorings of complete graphs, *Journal of Combinatorial Theory, Series B*, 144 (2020), 1-13.
J. de Oliveira Bastos, F. S. Benevides, G. O. Mota and I. Sau. Counting Gallai 3-colorings of complete graphs, *Discrete Mathematics*, 342.9 (2019), 2618-2631.
J. Balogh and L. Li. The typical structure of Gallai colorings and their extremal graphs, *SIAM Journal on Discrete Mathematics*, 33.4 (2019), 2416-2443.
J. Feffer, Y. Fu and J. Yan. Remarks on the distribution of colors in Gallai colorings, *Discrete Mathematics*, 343.10 (2020), 111996.
T. Gallai. Transitiv orientierbare Graphen, *Acta Mathematica Hungarica*, 18 (1967), 25-66.
A. Gyárfás, D. Pálvölgyi, B. Patkós and M. Wales. Distribution of colors in Gallai colorings, *European Journal of Combinatorics*, 86 (2020), 103087.
A. Gyárfás, G. N. Sárközy, A. Sebő and S. Selkow. Ramsey-type results for Gallai colorings, *Journal of Graph Theory*, 64.3 (2010), 233-243.
A. Gyárfás and G. Simonyi. Edge colorings of complete graphs without tricolored triangles, *Journal of Graph Theory*, 46.3 (2004), 211-216.
C. Magnant and I. Schiermeyer. Gallai-Ramsey number for $K_5$, *Journal of Graph Theory*, 101.3 (2022), 455-492.
A. Frieze and M. Krivelevich. On rainbow trees and cycles, *The Electronic Journal of Combinatorics*, 15 (2008), R59.
[^1]: Mathematics Institute and DIMAP, University of Warwick, UK. Email: `[email protected]`. Supported by the Warwick Mathematics Institute Centre for Doctoral Training and funding from University of Warwick.
[^2]: Mathematics Institute, University of Warwick, UK. Email: `[email protected]`. Supported by the Warwick Mathematics Institute Centre for Doctoral Training and funding from the UK EPSRC (Grant number: EP/W523793/1).
---
abstract: |
A generic orthotope is an orthogonal polytope whose tangent cones are described by read-once Boolean functions. The purpose of this note is to develop a theory of Ehrhart polynomials for integral generic orthotopes. The most remarkable part of this theory is a relation between the number of lattice points in an integral generic orthotope $P$ and the number of unit cubes in $P$ of various floral types. This formula is facilitated through the introduction of a set of "local polynomials" defined for every read-once Boolean function.
author:
- David Richter
title: Ehrhart Polynomials of Generic Orthotopes
---
2020 AMS Mathematics Subject Classification: 51M20, 52B11, 52B70.
# Introduction
This author introduced a theory of generic orthotopes in [@richter_genericorthotopes]. The goal of this note is to establish a theory of Ehrhart polynomials specialized to integral generic orthotopes, extending the results of [@richter_genericorthotopes]. The most prominent result here is a formula, stated as Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}, which demonstrates that counting lattice points in a generic orthotope $P$ is equivalent to counting integral unit cubes in $P$ while keeping track of their "floral types". We demonstrate this formula by introducing an algebra $\mathcal{F}$ generated by a set of equivalence classes of read-once boolean functions (essentially series-parallel diagrams) and a couple of useful bases $\{h(e_\alpha)\}$ and $\{h^{-1}(e_\alpha)\}$ for this algebra. This analysis gives rise to a plethora of identities concerning integral generic orthotopes. It also showcases the calculus of series-parallel diagrams used for studying generic orthotopes as introduced in [@richter_genericorthotopes].
A major theme of this work (including [@richter_genericorthotopes]) is that while many workers have studied aspects of orthogonal polytopes in a fixed dimension, usually $d=2$ or $d=3$, relatively few have considered common structural questions among orthogonal polytopes irrespective of dimension. As one may notice, we may prove all of the results shown here and in [@richter_genericorthotopes] in an elementary manner. Thus, the results shown here appear among the lowest-hanging fruit in the deep, dark, and largely unexplored forest of general orthogonal polytopes. On another point, this author wants to persuade the reader that generic orthotopes in particular possess a distinctive intrinsic charm and hopes that they will notice this throughout this theory; for example, this author is impressed at the way that ideas and results tend to flow as soon as one demands that every vertex of an orthogonal polytope be floral. (On this point, it should be obvious that this author is biased!)
We have attempted to be as pedantic as possible in this note. Thus, as we develop our adaptation of Ehrhart polynomials to generic orthotopes, we concurrently walk the reader through a running example in 2 dimensions. The main reason for this is that we wish to highlight as brightly as possible the distinctiveness of the Ehrhart theory as it pertains to generic orthotopes in particular. Indeed, the underlying algebra $\mathcal{F}$ and the polynomials $h(e_\alpha)$ or $h^{-1}(e_\alpha)$ which are instrumental in expressing our results have not been found among any extant works about Ehrhart theory and lattice point enumeration. (This should not be surprising because the introduction of read-once Boolean functions in the study of orthogonal polytopes appears to be novel. For that matter, we have not found these notions among the extant literature on Boolean functions or Boolean algebras.) The example serves not only to display the motivation of the theory but also to help the reader better understand the notation and concepts used throughout. We also offer an example in 3 dimensions in a separate section, should the reader wish to see something more illustrative or more substantial. This author also feels that the examples serve to aid in understanding the foundational article [@richter_genericorthotopes].
## Some notions and conventions
This subsection summarizes some notation conventions and notions used throughout. The reader should refer to [@richter_genericorthotopes] for supplemental details.
We use the symbol $d$ to denote the dimension of the ambient space, and in most cases assume $d$ is fixed; certainly $d$ is fixed when we speak of a particular orthogonal polytope. Accordingly, we denote $\left[d\right]:=\{1,2,3,...,d\}$. In most cases, we use the symbol $P$ for an arbitrary $d$-dimensional bounded integral generic orthotope.
A "congruence class" may refer to a floral arrangement congruence class or a floral vertex congruence class. In all cases, we use "congruence" to mean Euclidean-geometric congruence. As we explained in [@richter_genericorthotopes], the Coxeter group $BC_d$ acts on floral arrangements, so we say that $\alpha$ is congruent to $\alpha'$ if there is a group element $g\in BC_d$ such that $\alpha'=g\cdot\alpha$. In other words, $\alpha$ is congruent to $\alpha'$ if both $\alpha$ and $\alpha'$ lie in the same orbit under the standard action of $BC_d$ on $\mathbb{R}^d$. Notice that the phrase "floral arrangement congruence class" is meaningful only when $d$ is fixed. We use lower case Greek letters $\alpha,\beta$ to denote either a floral vertex, a floral arrangement, a floral vertex congruence class, or a floral arrangement congruence class. We justify the overuse of this notation by the fact that this theory is already laden with notation; we try to explain the usage where it may be unclear.
The *standard $d$-dimensional unit cube in $\mathbb{R}^d$* is $I^d=[0,1]^d$. A *$d$-dimensional integral unit cube* is any translate $v+I^d$, where $v\in\mathbb{Z}^d$. If $k\in\{0,1,2,...,d\}$, then a *$k$-dimensional integral unit cube* is any $k$-dimensional face of a $d$-dimensional integral unit cube. A *relatively open $k$-dimensional integral unit cube* is the relative interior of a $k$-dimensional integral unit cube with respect to its affine hull. Notice that every $x\in\mathbb{R}^d$ belongs to a unique relatively open $k$-dimensional integral unit cube for some $k\in\{0,1,2,...,d\}$. The dimension of the cube containing $x=(x_1,x_2,...,x_d)$ is given by $$k(x):=\#\left(\{i:x_i\notin\mathbb{Z}\}\right).$$ Thus, $k(x)=0$ precisely when $x\in\mathbb{Z}^d$ is a lattice point and $k(x)=d$ precisely when $x$ lies interior to a $d$-dimensional integral unit cube.
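These observations translate directly into a short computation; the sketch below (with names chosen only for illustration) returns $k(x)$ and the opposite corners, $\lfloor x\rfloor$ and $\lceil x\rceil$ taken coordinatewise, of the unique integral unit cube whose relative interior contains $x$.

```python
from math import floor, ceil

def open_cube_dimension(x):
    """k(x): the number of non-integer coordinates of the point x."""
    return sum(1 for xi in x if xi != floor(xi))

def containing_cube(x):
    """Opposite corners of the k(x)-dimensional integral unit cube whose
    relative interior contains the point x."""
    return tuple(floor(xi) for xi in x), tuple(ceil(xi) for xi in x)
```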
On multiple occasions throughout, we refer to the "floral type" of a point or a relatively open cube. The following, which may be inferred from the results in [@richter_genericorthotopes], is intended to clarify the meaning of this.
**Proposition 1**. *Suppose $P\subset\mathbb{R}^d$ is a $d$-dimensional integral generic orthotope and $C\subset P$ is a relatively open integral unit cube. Then there is a floral arrangement $\alpha\subset\mathbb{R}^d$ such that the tangent cone at every point of $C$ is congruent to $\alpha$.*
Thus, if $C\subset P$ is a relatively open integral unit cube, then define the *floral type* of $C$ as the floral arrangement congruence class which contains the tangent cone of any given point of $C$. Notice that this meaning extends to the case when $C$ is a point (i.e. a cube of dimension zero) of $P$. Figure [1](#floraltypes){reference-type="ref" reference="floraltypes"} displays a 3-dimensional generic orthotope whose faces are marked with various floral types.
We recall from [@richter_genericorthotopes] that we associate a *dimension* to every floral vertex $\alpha$, defined as the number of segments in the series-parallel diagram which defines it. Thus, $\dim(\scalerel*{\includegraphics{ar.pdf}}{]})=0$, $\dim(\scalerel*{\includegraphics{arx.pdf}}{]})=1$, $\dim(\scalerel*{\includegraphics{arxx.pdf}}{]})=\dim(\scalerel*{\includegraphics{arxxn.pdf}}{]})=2$, and so on. On several occasions, we also refer to the *degree* $\deg\alpha$ of a floral arrangement, this being its degree of genericity as defined in [@richter_genericorthotopes]. Moreover, if $\alpha$ is a floral arrangement, then there exists a positive integer $k$ and a floral vertex $\beta$ supported on $k$ half-spaces such that $\alpha$ is congruent to $\beta\times\mathbb{R}^{d-k}$. In such a circumstance, we may rely on the relation $\deg\alpha=d-\dim\beta=d-k$.
![Floral types for a 3-dimensional generic orthotope.](floraltypes.pdf){#floraltypes width="\\textwidth"}
Suppose $s:\left[d\right]\rightarrow\{-1,0,1\}$ is a sign vector. Define the *generalized orthant of $s$* as $$\Omega_s:=\{(x_1,x_2,...,x_d):\mathrm{sign}(x_i)=s_i\hbox{ for all }i\}.$$ Notice $\Omega_s$ is homeomorphic to an open cell $\mathbb{R}^{k(s)}$, where $k(s)=\#\left(\{i:s_i\neq 0\}\right)$, and $\mathbb{R}^d$ contains precisely $2^k\cdot\binom{d}{k}=2^k\cdot\frac{d!}{k!(d-k)!}$ generalized orthants of dimension $k$. Also, each $x=(x_1,x_2,...,x_d)\in\mathbb{R}^d$ lies in a unique generalized orthant $\Omega_s$, namely the one with $s_i=\mathrm{sign}(x_i)\in\{-1,0,1\}$ for all $i$.
# Introductory theory and example
If $P\subset\mathbb{R}^d$ is a bounded subset, then define the *lattice point enumerator* of $P$ by $$L(P):=\#(P\cap\mathbb{Z}^d).$$ If $P$ is an integral convex polytope and $tP$ is obtained by uniformly dilating $P$ by the factor of $t\in\mathbb{N}$, then $L(tP)$ is well known to be a polynomial function of $t$ called the Ehrhart polynomial of $P$, cf. [@BR_2015; @MS_2005]. This note adapts this theory to the case when $P$ is an integral generic orthotope.
Throughout this note we use an example in 2 dimensions to illustrate the theory. Thus, define an orthogonal polygon by $P:=\bigcup_{v\in S}\left(v+[0,1]^2\right),$ where $$S=\left\{
\begin{array}{l}
(0,0),(0,1),(0,2),(0,3),(1,3),(2,1),(2,2), \\
(2,3),(3,1),(3,2),(3,3),(4,0),(4,1),(4,2), \\
(5,2),(6,1),(6,2),(6,3),(6,4) \\
\end{array}
\right\}.$$ Figure [2](#orthogoncolor){reference-type="ref" reference="orthogoncolor"} displays a sketch of $P$. Apparently $P$ is an integral generic orthotope with $L(P)=37$.
If $\alpha$ is a floral arrangement congruence class, then let $L_\alpha(P)$ denote the number of lattice points $x\in P\cap\mathbb{Z}^d$ such that the tangent cone at $x$ lies in $\alpha$. For example, $L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(P)$ counts the number of lattice points lying interior to $P$ and $L_\alpha(P)$ counts the number of vertices of $P$ congruent to a member of $\alpha$ when $\alpha$ is a floral vertex congruence class. In the example, we distinguish the lattice points of various types by color. Thus, $$\begin{array}{rclc}
L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(P) &=& 3 & \hbox{(green),} \\
L_{\scalerel*{\includegraphics{arx.pdf}}{]}}(P) &=& 16 & \hbox{ (black),} \\
L_{\scalerel*{\includegraphics{arxx.pdf}}{]}}(P) &=& 11 & \hbox{ (blue),} \\
L_{\scalerel*{\includegraphics{arxxn.pdf}}{]}}(P) &=& 7 & \hbox{ (red).} \\
\end{array}$$
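The classification above is easy to reproduce by machine. The following Python snippet is a minimal sketch, not part of the theory: it encodes $P$ by the set $S$ of lower-left corners of its unit squares and uses the fact, valid for a generic orthogon, that the tangent cone at a lattice point is determined by the number of incident unit squares. It recovers $L(P)=37$ together with the counts $3$, $16$, $11$, and $7$.

```python
from collections import Counter

# Lower-left corners of the unit squares of the running example P.
S = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 1), (2, 2),
     (2, 3), (3, 1), (3, 2), (3, 3), (4, 0), (4, 1), (4, 2),
     (5, 2), (6, 1), (6, 2), (6, 3), (6, 4)}

# Every lattice point of P is a corner of one of its unit squares.
lattice_points = {(x + dx, y + dy) for (x, y) in S for dx in (0, 1) for dy in (0, 1)}

def incident(p):
    """Number of unit squares of P having the lattice point p as a corner."""
    return sum((p[0] + dx, p[1] + dy) in S for dx in (-1, 0) for dy in (-1, 0))

# 4 incident squares: interior point; 2: edge point; 1: convex vertex; 3: reflex vertex.
kind = {4: "interior (green)", 2: "edge (black)",
        1: "convex vertex (blue)", 3: "reflex vertex (red)"}
print(len(lattice_points))                                 # 37
print(Counter(kind[incident(p)] for p in lattice_points))  # 3, 16, 11, 7
```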
If $t$ is a positive integer, then let $tP$ denote the image of $P$ under the dilation map $x\mapsto tx$. That $L(tP)$ is a polynomial function of $t$ follows from well-known results concerning Ehrhart polynomials. Thus, we shall refer to $L(tP)$ as the *Ehrhart polynomial* of $P$. We are interested in decomposing $L(tP)$ according to floral arrangement congruence classes. Thus, we write $$L(tP)=\sum_{\alpha}L_{\alpha}(tP),$$ where we sum over floral arrangement congruence classes $\alpha$. As $L_\alpha(tP)$ is a polynomial function of $t$ for each $\alpha$, let $L_{\alpha,k}(P)$ denote the coefficients such that $$L_{\alpha}(tP)=\sum_k L_{\alpha,k}(P)t^k.$$ The coefficients $L_{\alpha,k}(P)$ for the running example appear in Figure [\[lpenumerator\]](#lpenumerator){reference-type="ref" reference="lpenumerator"}. Thus, $$L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(tP)=1-17t+19t^2\hbox{ and }L_{\scalerel*{\includegraphics{arx.pdf}}{]}}(tP)=-18+34t,$$ while $L_{\scalerel*{\includegraphics{arxx.pdf}}{]}}(tP)=11$ and $L_{\scalerel*{\includegraphics{arxxn.pdf}}{]}}(tP)=7$ are constant. Notice that evaluating $L_{\alpha}(tP)$ at $t=1$ yields the values $L_{\alpha}(P)$ given above. Also notice that the degree of $L_{\alpha}(tP)$ coincides with the degree of genericity of a tangent cone lying in the equivalence class $\alpha$.
We shall demonstrate a couple of ways to determine $L_{\alpha}(tP)$ generally. The first is based on counting unit cubes in $P$ while keeping track of the floral types of their tangent cones. The second method is the formula appearing in Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}, which uses a set of "local polynomials" $h(e_\alpha)$ and $h^{-1}(e_\alpha)$ lying in a certain associative algebra which we introduce below. The first method is essentially trivial, as we are still counting lattice points in more or less the same way; it is significant, however, because it facilitates the establishment of the second method.
![Illustration of the running example.](orthogoncolor.pdf){#orthogoncolor width="50%"}
$$\begin{array}{c|ccc}
\alpha & L_{\alpha,0} & L_{\alpha,1} & L_{\alpha,2}\\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& 1 & -17 & 19 \\
\scalerel*{\includegraphics{arx.pdf}}{]}& -18 & 34 & \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& 11 & & \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& 7 & & \\
\end{array}$$
# Counting cubes
Suppose $P\subset\mathbb{R}^d$ is an integral generic orthotope. For each floral arrangement congruence class $\alpha$ and for each integer $k\geq 0$, let $C_{\alpha,k}(P)$ denote the number of relatively open $k$-dimensional integral unit cubes $C\subset P$ such that the tangent cone along each point of $C$ lies in $\alpha$. Figure [\[localcounts\]](#localcounts){reference-type="ref" reference="localcounts"} shows the values $C_{\alpha,k}(P)$ for the running example. For each positive integer $t$, define $$C_{\alpha}(P,t):=\sum_k C_{\alpha,k}(P)t^k.$$
$$\begin{array}{c|ccc}
\alpha & C_{\alpha,0} & C_{\alpha,1} & C_{\alpha,2}\\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& 3 & 21 & 19 \\
\scalerel*{\includegraphics{arx.pdf}}{]}& 16 & 34 & \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& 11 & & \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& 7 & & \\
\end{array}$$
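As a companion to the previous sketch, the following Python snippet tabulates the cube counts $C_{\alpha,k}(P)$ of Figure [\[localcounts\]](#localcounts){reference-type="ref" reference="localcounts"} directly from $S$. The labels are shorthand for the pictured congruence classes, and the same incidence-count observation as before is used for the lattice points.

```python
from collections import Counter

S = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 1), (2, 2),
     (2, 3), (3, 1), (3, 2), (3, 3), (4, 0), (4, 1), (4, 2),
     (5, 2), (6, 1), (6, 2), (6, 3), (6, 4)}

# 2-dimensional cubes: the open unit squares, all of interior type.
C2 = Counter({"interior": len(S)})

# 1-dimensional cubes: open unit edges.  The horizontal edge from (x, y) to (x+1, y)
# lies in P iff the square below or above it is in S; it is interior iff both are.
C1 = Counter()
for (x, y) in {(vx, vy + dy) for (vx, vy) in S for dy in (0, 1)}:   # horizontal edges
    C1["interior" if (x, y) in S and (x, y - 1) in S else "boundary"] += 1
for (x, y) in {(vx + dx, vy) for (vx, vy) in S for dx in (0, 1)}:   # vertical edges
    C1["interior" if (x, y) in S and (x - 1, y) in S else "boundary"] += 1

# 0-dimensional cubes: lattice points, classified by the number of incident squares.
kind = {4: "interior", 2: "edge", 1: "convex vertex", 3: "reflex vertex"}
C0 = Counter(kind[sum((px + dx, py + dy) in S for dx in (-1, 0) for dy in (-1, 0))]
             for (px, py) in {(vx + dx, vy + dy) for (vx, vy) in S
                              for dx in (0, 1) for dy in (0, 1)})

print(C0)   # interior 3, edge 16, convex vertex 11, reflex vertex 7
print(C1)   # interior 21, boundary 34
print(C2)   # interior 19
```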
Although $C_\alpha(P,t)$ does not count cubes of type $\alpha$ in the dilate $tP$, it is closely related to the function $L_{\alpha}(tP)$:
**Proposition 2**. *Suppose $P$ is an integral generic orthotope and $t$ is a positive integer. Then $$L_{\alpha}(tP)=C_{\alpha}(P,t-1).
\label{shiftrelation}$$*
*Proof.* Suppose $C\subset P$ is a relatively open integral unit cube of dimension $k$ and $t$ is a positive integer. Then there are precisely $(t-1)^k$ lattice points $x\in tC$ such that the tangent cone at $x$ coincides with the tangent cone at each point of $C$. Using the partition of $P$ into relatively open integral unit cubes (of varying dimensions), the relation ([\[shiftrelation\]](#shiftrelation){reference-type="ref" reference="shiftrelation"}) follows. ◻
Figure [3](#dilation){reference-type="ref" reference="dilation"} displays the result of dilating the running example by the factor $t=3$. Notice that if $C\subset P$ is a relatively open integral unit square, for example, then the interior of $3C$ contains precisely $(3-1)^2=4$ lattice points.
Using the binomial theorem, we obtain a relation between the coefficients $L_{\alpha,k}(P)$ and the counts $C_{\alpha,k}(P)$:
**Corollary 3**. *Suppose $P$ is an integral generic orthotope, $\alpha$ is a floral arrangement, and $k\in\{0,1,2,...,\deg\alpha\}$. Then $$L_{\alpha,k}(P)=\sum_{j=k}^{\deg\alpha}(-1)^{j+k}\binom{j}{k}C_{\alpha,j}(P).$$ [\[clrelation\]]{#clrelation label="clrelation"}*
![Dilating the example.](dilation.pdf){#dilation width="\\textwidth"}
For the running example, we compute $$L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(tP)=C_{\scalerel*{\includegraphics{ar.pdf}}{]}}(P,t-1)=3+21(t-1)+19(t-1)^2=1-17t+19t^2$$ and $$L_{\scalerel*{\includegraphics{arx.pdf}}{]}}(tP)=C_{\scalerel*{\includegraphics{arx.pdf}}{]}}(P,t-1)=16+34(t-1)=-18+34t,$$ yielding the first two rows of the table in Figure [\[lpenumerator\]](#lpenumerator){reference-type="ref" reference="lpenumerator"}.
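The binomial transform of Corollary [\[clrelation\]](#clrelation){reference-type="ref" reference="clrelation"} is likewise easy to automate. A minimal Python sketch, with the rows of Figure [\[localcounts\]](#localcounts){reference-type="ref" reference="localcounts"} entered as data and the labels serving only as shorthand for the pictured classes:

```python
from math import comb

# Rows of Figure [localcounts]: C_{alpha,0}, C_{alpha,1}, ..., C_{alpha,deg alpha}.
C = {"interior": [3, 21, 19], "edge": [16, 34], "convex": [11], "reflex": [7]}

def binomial_transform(row):
    """L_{alpha,k} = sum_{j>=k} (-1)^{j+k} binom(j,k) C_{alpha,j}  (Corollary 3)."""
    return [sum((-1) ** (j + k) * comb(j, k) * c for j, c in enumerate(row) if j >= k)
            for k in range(len(row))]

for alpha, row in C.items():
    print(alpha, binomial_transform(row))
# interior [1, -17, 19]; edge [-18, 34]; convex [11]; reflex [7]
```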
# Local polynomials
This section introduces and studies a set of "local polynomials" $h(e_\alpha)$, defined for each floral vertex congruence class $\alpha$. These are instrumental in the relation, stated as Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}, between the number of lattice points and the numbers of relatively open integral unit cubes of various floral types in an integral generic orthotope.
Each local polynomial has an expression $$h(e_\alpha):=\sum_{\beta}m_{\alpha,\beta}e_\beta,$$ summing over floral vertex congruence classes $\beta$. Here, $m_{\alpha,\beta}$ is an integer that measures the occurrence of floral vertices congruent to $\beta$ in $\alpha$. As such, their definition requires careful analysis of faces and cross-sections of floral vertices, thus explaining the length of this section.
We call the expressions $h(e_\alpha)$ "polynomials" because they lie in an infinite-dimensional, commutative, unital associative algebra $\mathcal{F}$ which behaves much like a ring of polynomials. (We note that the algebra $\mathcal{F}$, while containing a ring of polynomials in one variable as a subring, has two different ways to multiply. Thus, although $\mathcal{F}$ qualifies as having the structure of an associative algebra, this terminology neglects this additional structure.) We call the expressions $h(e_\alpha)$ "local" because they contain information about incidences of floral arrangements in a given floral vertex.
A basis for $\mathcal{F}$ as a vector space consists of all expressions $e_\alpha$, where $\alpha$ is a floral vertex congruence class, together with $e_{\scalerel*{\includegraphics{ar.pdf}}{]}}$. The multiplication in $\mathcal{F}$ is given by the rule $$e_\alpha\cdot e_\beta:=e_{\alpha\wedge\beta},$$ and the element $e_{\scalerel*{\includegraphics{ar.pdf}}{]}}$ serves as the multiplicative identity element. Notice that there is a unique monomorphism $\mathbb{Q}[t]\hookrightarrow\mathcal{F}$ which maps $1\mapsto e_{\scalerel*{\includegraphics{ar.pdf}}{]}}$ and $t\mapsto e_{\scalerel*{\includegraphics{arx.pdf}}{]}}$. (We assume throughout that the field of scalars is $\mathbb{Q}$.)
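For readers who prefer to experiment, here is a minimal computational sketch of $\mathcal{F}$ (not part of the exposition): elements are stored as dictionaries of rational coefficients indexed by shorthand labels, and the product of basis elements is given by the conjunction of the corresponding classes. Only the few products among classes of dimension at most 2 are hard-coded here; the conjunction of two half-lines is the planar convex cone, in agreement with the product computations appearing later in this section.

```python
from fractions import Fraction
from collections import defaultdict

# Conjunction (wedge) of basis classes, hard-coded for dimension at most 2.
# "ar" is the identity class; the conjunction of two half-lines ("arx") is the
# planar convex cone ("arxx").
WEDGE = {("arx", "arx"): "arxx"}

def wedge(a, b):
    if a == "ar":
        return b
    if b == "ar":
        return a
    return WEDGE[tuple(sorted((a, b)))]

def multiply(u, v):
    """Product in F of two elements stored as {label: coefficient} dictionaries."""
    w = defaultdict(Fraction)
    for a, ca in u.items():
        for b, cb in v.items():
            w[wedge(a, b)] += Fraction(ca) * Fraction(cb)
    return dict(w)

# The monomorphism Q[t] -> F sends 1 to e_ar and t to e_arx, so t^2 maps to e_arxx:
one, t = {"ar": 1}, {"arx": 1}
print(multiply(t, t))                              # {'arxx': Fraction(1, 1)}
print(multiply(one, t) == {"arx": Fraction(1)})    # True: e_ar is the identity
```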
The definition of the coefficients $m_{\alpha,\beta}$ is facilitated by the following observation, which was established in [@richter_genericorthotopes]:
**Proposition 4**. *Suppose $\alpha$ is a floral vertex and $x\in\alpha$. Then there is a floral vertex $\beta$ such that the tangent cone at $x$ is congruent to $\beta\times\mathbb{R}^{d-\dim\beta}$.*
With this, say that $x\in\alpha$ *has floral type $\beta$* if the tangent cone at $x$ is congruent to $\beta\times\mathbb{R}^{d-k}$, where $\beta$ is a $k$-dimensional floral vertex. If $x$ lies in the interior of $\alpha$, then we decree that $x$ has floral type $\scalerel*{\includegraphics{ar.pdf}}{]}$.
Recalling [@richter_genericorthotopes], we may assert:
**Proposition 5**. *Suppose $\alpha$ is a floral vertex and $x\in\alpha$. (a) If $x$ lies in the relative interior of an edge $e$ of $\alpha$, then the floral type of $x$ coincides with the cross-section of $\alpha$ across $e$. (b) If $x$ lies in the relative interior of a facet of $\alpha$, then $x$ has floral type $\scalerel*{\includegraphics{arx.pdf}}{]}$. (c) The origin $x=\mathbf{0}$ has floral type $\alpha$.*
Suppose $\alpha$ and $\beta$ are floral vertices with $\dim\alpha=d$ and $\dim\beta=k$. From the results above, we may define $$m_{\alpha,\beta}:=\sum_f \mu_{d-k}(f),$$ summing over all $(d-k)$-dimensional faces $f$ which have floral type $\beta$, where $\mu_{d-k}$ denotes the $(d-k)$-dimensional orthant-counting function (defined in [@richter_genericorthotopes]). In other words, $m_{\alpha,\beta}$ is the number of $(d-k)$-dimensional generalized orthants $\Omega\subset\alpha$ such that each point of $\Omega$ has floral type $\beta$. Apparently we have $m_{\alpha',\beta'}=m_{\alpha,\beta}$ whenever $\alpha'$ is congruent to $\alpha$ and $\beta'$ is congruent to $\beta$. Thus, we use $m_{\alpha,\beta}$ to denote this common value when $\alpha$ and $\beta$ are floral vertex congruence classes.
The table in Figure [\[volumepolys\]](#volumepolys){reference-type="ref" reference="volumepolys"} displays the polynomials $h(e_\alpha)$ for $\dim\alpha\leq 3$. Inspecting Figure [4](#good3dvertices){reference-type="ref" reference="good3dvertices"}, for example, we may see $$m_{\scalerel*{\includegraphics{arxxnxn.pdf}}{]},\scalerel*{\includegraphics{arx.pdf}}{]}}=5,$$ as the floral vertex defined by $\scalerel*{\includegraphics{arxxnxn.pdf}}{]}$ has precisely 5 two-dimensional generalized orthants with floral type $\beta=\scalerel*{\includegraphics{arx.pdf}}{]}$. The polynomials $h(e_\alpha)$ when $\dim\alpha=4$ appear in a table in the appendix.
![Three-dimensional floral vertices.](good3dvertices.pdf){#good3dvertices width="75%"}
$$\begin{array}{c|c}
\alpha & h(e_\alpha) \\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]}) \\
\scalerel*{\includegraphics{arx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+(\scalerel*{\includegraphics{arx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& 3(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+3(\scalerel*{\includegraphics{arx.pdf}}{]})+3(\scalerel*{\includegraphics{arxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnx.pdf}}{]}& 3(\scalerel*{\includegraphics{ar.pdf}}{]})+5(\scalerel*{\includegraphics{arx.pdf}}{]})+2(\scalerel*{\includegraphics{arxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxn.pdf}}{]}& 5(\scalerel*{\includegraphics{ar.pdf}}{]})+5(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxxn.pdf}}{]}& 7(\scalerel*{\includegraphics{ar.pdf}}{]})+3(\scalerel*{\includegraphics{arx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxn.pdf}}{]}) \\
\end{array}$$
The polynomials $h(e_\alpha)$ and the coefficients $m_{\alpha,\beta}$ enjoy several properties which we need in order to state and prove Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}. Like the results in [@richter_genericorthotopes], all of these properties are established using elementary means.
**Proposition 6**. *Suppose $\alpha$ and $\beta$ are floral vertices. If $\dim\beta>\dim\alpha$, then no point of $\alpha$ has floral type $\beta$.*
A corollary to this is that $m_{\alpha,\beta}=0$ when $\dim\beta>\dim\alpha$, and $m_{\alpha,\beta}=0$ for all but a finite number of $\beta$ for each $\alpha$. By a similar token, if $\dim\alpha=\dim\beta$, then the values of $m_{\alpha,\beta}$ are quickly determined:
**Proposition 7**. *Suppose $\alpha$ and $\beta$ are floral vertex congruence classes with $\dim\alpha=\dim\beta$. Then $$m_{\alpha,\beta}=
\left\{\begin{array}{ccl}
1 & \hbox{if} & \beta=\alpha, \\
0 & \hbox{if} & \beta\neq\alpha. \\
\end{array}\right.$$*
Recall from [@richter_genericorthotopes] that $\overline{\alpha}$ denotes the floral vertex complementary to $\alpha$. Since $m_{\alpha,\scalerel*{\includegraphics{ar.pdf}}{]}}=\mu_d(\alpha)$ coincides with the number of $(\dim\alpha)$-dimensional orthants occupied by $\alpha$, this yields:
**Proposition 8**. *If $\alpha$ is a floral vertex congruence class with $\alpha\neq\scalerel*{\includegraphics{ar.pdf}}{]}$, then $m_{\alpha,\scalerel*{\includegraphics{ar.pdf}}{]}}+m_{\overline{\alpha},\scalerel*{\includegraphics{ar.pdf}}{]}}=2^{\dim\alpha}$.*
Similarly, that $\partial\overline{\alpha}=\partial\alpha=\alpha\cap\overline{\alpha}$ for all $\alpha$ gives us:
**Proposition 9**. *Suppose $\alpha\subset\mathbb{R}^d$ is a floral vertex and $x\in\partial\alpha$. Then $x$ has floral type $\beta$ in $\alpha$ if and only if $x$ has floral type $\overline{\beta}$ in $\overline{\alpha}$.*
As a corollary, we have $m_{\overline{\alpha},\overline{\beta}}=m_{\alpha,\beta}$ whenever $\beta\neq\scalerel*{\includegraphics{ar.pdf}}{]}$.
Recall from [@richter_genericorthotopes] that if $\alpha$ and $\beta$ are floral vertices, then $\alpha\times\beta$ is a floral vertex given by the conjunction of the series-parallel diagrams underlying $\alpha$ and $\beta$. This in turn implies that $$h(e_{\alpha}\cdot e_{\beta})= h(e_{\alpha\wedge\beta})=h(e_\alpha)\cdot h(e_\beta)$$ for all floral vertex congruence classes $\alpha$ and $\beta$. Since $h(e_{\scalerel*{\includegraphics{ar.pdf}}{]}})=e_{\scalerel*{\includegraphics{ar.pdf}}{]}}$ we may say:
**Proposition 10**. *The map $h:\mathcal{F}\rightarrow\mathcal{F}$ is an algebra automorphism.*
We illustrate how the properties outlined above serve to compute small examples of the expressions $h(e_\alpha)$. For example, we may write: $$\begin{aligned}
h(\scalerel*{\includegraphics{arxxuxxn.pdf}}{]}) &= h(\scalerel*{\includegraphics{arxxn.pdf}}{]})\cdot h(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
&=\left[3(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \right]\cdot\left[3(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \right] \\
& = 9(\scalerel*{\includegraphics{ar.pdf}}{]})\cdot(\scalerel*{\includegraphics{ar.pdf}}{]})+ 4(\scalerel*{\includegraphics{arx.pdf}}{]})\cdot(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]})\cdot(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
& +12(\scalerel*{\includegraphics{ar.pdf}}{]})\cdot(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{ar.pdf}}{]})\cdot(\scalerel*{\includegraphics{arxxn.pdf}}{]})+4(\scalerel*{\includegraphics{arx.pdf}}{]})\cdot(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
& = 9(\scalerel*{\includegraphics{ar.pdf}}{]})+12(\scalerel*{\includegraphics{arx.pdf}}{]})+4(\scalerel*{\includegraphics{arxx.pdf}}{]})+6(\scalerel*{\includegraphics{arxxn.pdf}}{]})+4(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxuxxn.pdf}}{]}),\end{aligned}$$ as $h$ is an algebra automorphism. Using this and the facts $m_{\overline{\alpha},\overline{\beta}}=m_{\alpha,\beta}$ and $m_{\alpha,\scalerel*{\includegraphics{ar.pdf}}{]}}+m_{\overline{\alpha},\scalerel*{\includegraphics{ar.pdf}}{]}}=2^{\dim \alpha}$ we also obtain: $$\begin{aligned}
h(\scalerel*{\includegraphics{arxxuxx.pdf}}{]}) &= (16-9)(\scalerel*{\includegraphics{ar.pdf}}{]})+12\overline{(\scalerel*{\includegraphics{arx.pdf}}{]})}+4\overline{(\scalerel*{\includegraphics{arxx.pdf}}{]})}+6\overline{(\scalerel*{\includegraphics{arxxn.pdf}}{]})}+4\overline{(\scalerel*{\includegraphics{arxxnx.pdf}}{]})}+\overline{(\scalerel*{\includegraphics{arxxuxxn.pdf}}{]})} \\
& = 7(\scalerel*{\includegraphics{ar.pdf}}{]})+12(\scalerel*{\includegraphics{arx.pdf}}{]})+4(\scalerel*{\includegraphics{arxxn.pdf}}{]})+6(\scalerel*{\includegraphics{arxx.pdf}}{]})+4(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxuxx.pdf}}{]}).\end{aligned}$$ Notice that both of these appear in a table in the appendix.
## The polynomials $h^{-1}(e_\alpha)$
For each non-negative integer $d$, let $\mathcal{F}_d$ denote the subspace of $\mathcal{F}$ spanned by those $e_\alpha$ for which $\dim\alpha\leq d$. The dimension of $\mathcal{F}_d$ is $$\dim\mathcal{F}_d=\sum_{k=0}^d A_k=1+1+2+4+\cdots+A_d,$$ where $A_k$ denotes the number of series-parallel diagrams with precisely $k$ segments. As we observed in [@richter_genericorthotopes], $A_k$ is the $k$th term in sequence A000084 in the Online Encyclopedia of Integer Sequences. Several values of $A_d$ and $\sum_{k=0}^d A_k$ appear in Figure [\[oeis\]](#oeis){reference-type="ref" reference="oeis"}.
$$\begin{array}{c|c|c}
& & \dim\mathcal{F}_d \\
d & A_d & =\sum_{k=0}^d A_k \\
\hline
0 & 1 & 1 \\
1 & 1 & 2 \\
2 & 2 & 4 \\
3 & 4 & 8 \\
4 & 10 & 18 \\
5 & 24 & 42 \\
6 & 66 & 108 \\
7 & 180 & 288 \\
8 & 522 & 810 \\
9 & 1532 & 2342 \\
10 & 4624 & 6966 \\
\end{array}$$
From the results above, we know that $h^{-1}$ is an automorphism of the algebra $\mathcal{F}$. Moreover, the restriction of $h-\mathrm{Id}$ to the subspace $\mathcal{F}_d$ is nilpotent, whence we have $$h^{-1}=\sum_{k=0}^{d}(-1)^k(h-\mathrm{Id})^k.$$ We tabulate $h^{-1}(e_\alpha)$ for $\dim\alpha\leq 3$ in Figure [\[eulerpolys3\]](#eulerpolys3){reference-type="ref" reference="eulerpolys3"} and those with $\dim\alpha=4$ in the appendix. We shall exhibit more properties of $h^{-1}(e_\alpha)$ after we establish the main result.
$$\begin{array}{c|c}
\alpha & h^{-1}(e_\alpha) \\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]}) \\
\scalerel*{\includegraphics{arx.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})+(\scalerel*{\includegraphics{arx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})-2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})-2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxx.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})+3(\scalerel*{\includegraphics{arx.pdf}}{]})-3(\scalerel*{\includegraphics{arxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+(\scalerel*{\includegraphics{arx.pdf}}{]})-2(\scalerel*{\includegraphics{arxx.pdf}}{]})-(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxn.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+(\scalerel*{\includegraphics{arx.pdf}}{]})-(\scalerel*{\includegraphics{arxx.pdf}}{]})-2(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxxn.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})+3(\scalerel*{\includegraphics{arx.pdf}}{]})-3(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxn.pdf}}{]}) \\
\end{array}$$
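The nilpotency of $h-\mathrm{Id}$ makes the inversion formula above easy to check by machine. The following Python sketch builds the matrix of $h$ on $\mathcal{F}_2$, in the basis ordered as in Figure [\[volumepolys\]](#volumepolys){reference-type="ref" reference="volumepolys"}, and recovers the corresponding entries of Figure [\[eulerpolys3\]](#eulerpolys3){reference-type="ref" reference="eulerpolys3"}.

```python
import numpy as np

# Matrix of h on F_2 in the basis ordered as in Figure [volumepolys]; the j-th
# column lists the coefficients of h applied to the j-th basis element.
I = np.eye(4, dtype=int)
H = np.array([[1, 1, 1, 3],
              [0, 1, 2, 2],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

N = H - I
print(np.linalg.matrix_power(N, 3))           # zero matrix: h - Id is nilpotent on F_2
H_inv = I - N + np.linalg.matrix_power(N, 2)  # the alternating series with d = 2
print(H_inv)        # each column lists the coefficients of h^{-1}(e_alpha),
                    # matching the corresponding row of Figure [eulerpolys3]
print((H @ H_inv == I).all())                 # True
```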
# The Main Result
We use the expressions $h^{-1}(e_\alpha)$ to state our main result. For notational convenience, define $D\in\mathrm{End}(\mathcal{F})$ by $$D(e_\alpha):=2^{\dim{\alpha}}e_\alpha$$ for each $\alpha$. With this, we assert:
**Theorem 11**. *Suppose $P$ is an integral generic orthotope and $k\in\{0,1,2,...,d\}$. Then $$\sum_{\alpha}L_{\alpha,k}(P)e_\alpha=2^{k-d}\sum_{\deg\alpha=k}C_{\alpha,\deg\alpha}(P)D(h^{-1}(e_\alpha)).$$ [\[mainresult\]]{#mainresult label="mainresult"}*
Before proving it, we demonstrate this formula using our running example. Using the formula from Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"} with the values tabulated in Figure [\[localcounts\]](#localcounts){reference-type="ref" reference="localcounts"} and $k=0$, we have $$\begin{aligned}
& 2^{k-d}\sum_{\deg\alpha=k}C_{\alpha,\deg\alpha}(P)D(h^{-1}(e_\alpha)) \\
&= 2^{0-2}\sum_{\deg\alpha=0}C_{\alpha,0}(P)D(h^{-1}(e_\alpha)) \\
&= 2^{-2}\left[C_{\scalerel*{\includegraphics{arxx.pdf}}{]},0}(P) D(h^{-1}(\scalerel*{\includegraphics{arxx.pdf}}{]}))+C_{\scalerel*{\includegraphics{arxxn.pdf}}{]},0}(P)D(h^{-1}(\scalerel*{\includegraphics{arxxn.pdf}}{]}))\right] \\
&= \frac{1}{4}\left[11D(h^{-1}(\scalerel*{\includegraphics{arxx.pdf}}{]}))+7D(h^{-1}(\scalerel*{\includegraphics{arxxn.pdf}}{]}))\right] \\
&= \frac{11}{4}D\left((\scalerel*{\includegraphics{ar.pdf}}{]})-2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxx.pdf}}{]})\right)
+\frac{7}{4}D\left(-(\scalerel*{\includegraphics{ar.pdf}}{]})-2(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]})\right) \\
&= \frac{11}{4}\left((\scalerel*{\includegraphics{ar.pdf}}{]})-4(\scalerel*{\includegraphics{arx.pdf}}{]})+4(\scalerel*{\includegraphics{arxx.pdf}}{]})\right)
+\frac{7}{4}\left(-(\scalerel*{\includegraphics{ar.pdf}}{]})-4(\scalerel*{\includegraphics{arx.pdf}}{]})+4(\scalerel*{\includegraphics{arxxn.pdf}}{]})\right) \\
&= 1(\scalerel*{\includegraphics{ar.pdf}}{]})-18(\scalerel*{\includegraphics{arx.pdf}}{]})+11(\scalerel*{\includegraphics{arxx.pdf}}{]})+7(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
&= \sum_{\alpha}L_{\alpha,0}(P)e_\alpha,\end{aligned}$$ yielding the first column of the table in Figure [\[lpenumerator\]](#lpenumerator){reference-type="ref" reference="lpenumerator"}. Likewise, with $k=1$, $$\begin{aligned}
2^{-1}C_{\scalerel*{\includegraphics{arx.pdf}}{]},1}(P)D(h^{-1}(\scalerel*{\includegraphics{arx.pdf}}{]})) = & \frac{1}{2}\cdot 34\cdot D(-(\scalerel*{\includegraphics{ar.pdf}}{]})+(\scalerel*{\includegraphics{arx.pdf}}{]})) \\
=& \frac{1}{2}\cdot 34\left[-(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})\right] \\
=& -17(\scalerel*{\includegraphics{ar.pdf}}{]})+34(\scalerel*{\includegraphics{arx.pdf}}{]}) \\
=& L_{\scalerel*{\includegraphics{ar.pdf}}{]},1}(P)e_{\scalerel*{\includegraphics{ar.pdf}}{]}}+L_{\scalerel*{\includegraphics{arx.pdf}}{]},1}(P)e_{\scalerel*{\includegraphics{arx.pdf}}{]}}. \end{aligned}$$
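The computation above is mechanical, and the following Python sketch carries it out for every $k$ at once, with the tabulated data entered by hand and the labels once again serving as shorthand for the pictured classes. Its output reproduces the columns of Figure [\[lpenumerator\]](#lpenumerator){reference-type="ref" reference="lpenumerator"}.

```python
from fractions import Fraction

d = 2
dim = {"ar": 0, "arx": 1, "arxx": 2, "arxxn": 2}
deg = {a: d - dim[a] for a in dim}

# h^{-1}(e_alpha), entered from Figure [eulerpolys3] as {label: coefficient}.
h_inv = {"ar":    {"ar": 1},
         "arx":   {"ar": -1, "arx": 1},
         "arxx":  {"ar": 1, "arx": -2, "arxx": 1},
         "arxxn": {"ar": -1, "arx": -2, "arxxn": 1}}

# Top-degree cube counts C_{alpha, deg alpha}(P) from Figure [localcounts].
C_top = {"ar": 19, "arx": 34, "arxx": 11, "arxxn": 7}

for k in range(d + 1):
    L_k = {b: Fraction(0) for b in dim}
    for a in dim:
        if deg[a] == k:
            for b, s in h_inv[a].items():
                # D scales the basis element e_b by 2^{dim b}.
                L_k[b] += Fraction(2) ** (k - d) * C_top[a] * 2 ** dim[b] * s
    print(k, {b: int(c) for b, c in L_k.items() if c})
# 0 {'ar': 1, 'arx': -18, 'arxx': 11, 'arxxn': 7}
# 1 {'ar': -17, 'arx': 34}
# 2 {'ar': 19}
```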
We devote the remainder of this section to proving Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}. We accomplish this by first computing $L_{\alpha,0}(P)$ for an integral generic orthotope and then considering cross-sections to obtain the general formula.
Define the *Euler vector* of $P$ by $$\phi(P):=\sum_{\alpha}L_{\alpha,0}(P)e_\alpha\in\mathcal{F},$$ summing over all floral arrangement congruence classes $\alpha$ with $\deg\alpha\leq d$. The Euler vector for our running example is thus $$\phi(P)=e_{\scalerel*{\includegraphics{ar.pdf}}{]}}-18e_{\scalerel*{\includegraphics{arx.pdf}}{]}}+11e_{\scalerel*{\includegraphics{arxx.pdf}}{]}}+7e_{\scalerel*{\includegraphics{arxxn.pdf}}{]}}.$$
**Lemma 12**. *Suppose $P$ is an integral generic orthotope, $\beta$ is a floral vertex congruence class, and $k\in\{0,1,2,...,d\}$. Then $$\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P)
=2^{-\dim\beta} \binom{\deg\beta}{k}C_{\beta,\deg\beta}(P),$$ summing over all floral vertex congruence classes $\alpha$ with $\dim\alpha\leq d$. [\[countinglemma\]]{#countinglemma label="countinglemma"}*
*Proof.* This is a double-counting formula. Notice that $C_{\beta,\deg\beta}(P)$ is the number of $(\deg\beta)$-dimensional relatively open integral unit cubes of type $\beta$. For each $k$-dimensional relatively open unit cube $C$ of type $\alpha$, the number of $(\deg\beta)$-dimensional relatively open integral unit cubes of type $\beta$ incident to $C$ coincides with the number $m_{\alpha,\beta}$ of orthants of floral type $\beta$ contained in the floral vertex $\alpha$ (abusing notation slightly). The auxiliary values $2^{-\dim\alpha}$, $2^{-\dim\beta}$, and $\binom{\deg\beta}{k}$ are normalizing factors. In particular, notice that $\binom{\deg\beta}{k}$ counts the number of orthographic subspaces of dimension $k$ in $\mathbb{R}^{\deg\beta}$. ◻
We demonstrate the formula above with our running example, using $\beta=\scalerel*{\includegraphics{arx.pdf}}{]}$ and $k=0$. We have $$\begin{aligned}
&\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P) \\
&= \sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\scalerel*{\includegraphics{arx.pdf}}{]}}C_{\alpha,0}(P) \\
&= 2^{-\dim(\scalerel*{\includegraphics{ar.pdf}}{]})}m_{\scalerel*{\includegraphics{ar.pdf}}{]},\scalerel*{\includegraphics{arx.pdf}}{]}}C_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)+2^{-\dim(\scalerel*{\includegraphics{arx.pdf}}{]})}m_{\scalerel*{\includegraphics{arx.pdf}}{]},\scalerel*{\includegraphics{arx.pdf}}{]}}C_{\scalerel*{\includegraphics{arx.pdf}}{]},0}(P) \\
& +2^{-\dim(\scalerel*{\includegraphics{arxx.pdf}}{]})}m_{\scalerel*{\includegraphics{arxx.pdf}}{]},\scalerel*{\includegraphics{arx.pdf}}{]}}C_{\scalerel*{\includegraphics{arxx.pdf}}{]},0}(P)+2^{-\dim(\scalerel*{\includegraphics{arxxn.pdf}}{]})}m_{\scalerel*{\includegraphics{arxxn.pdf}}{]},\scalerel*{\includegraphics{arx.pdf}}{]}}C_{\scalerel*{\includegraphics{arxxn.pdf}}{]},0}(P) \\
&= 2^{-0}\cdot 0\cdot 3+2^{-1}\cdot 1\cdot 16+2^{-2}\cdot 2\cdot 11+2^{-2}\cdot 2\cdot 7
= 17. \end{aligned}$$ On the other hand, $$\begin{aligned}
&\binom{\deg\beta}{k}2^{-\dim\beta} C_{\beta,\deg\beta}(P) \\
&=\binom{\deg(\scalerel*{\includegraphics{arx.pdf}}{]})}{0}2^{-\dim(\scalerel*{\includegraphics{arx.pdf}}{]})} C_{\scalerel*{\includegraphics{arx.pdf}}{]},\deg(\scalerel*{\includegraphics{arx.pdf}}{]})}(P) \\
&=\binom{1}{0}2^{-1} C_{\scalerel*{\includegraphics{arx.pdf}}{]},1}(P)
=1\cdot\frac{1}{2}\cdot 34 =17.\end{aligned}$$
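This check extends to every pair $(\beta,k)$ of the running example. A short Python sketch, with the coefficients $m_{\alpha,\beta}$ entered from the rows of Figure [\[volumepolys\]](#volumepolys){reference-type="ref" reference="volumepolys"} and the cube counts from Figure [\[localcounts\]](#localcounts){reference-type="ref" reference="localcounts"}:

```python
from fractions import Fraction
from math import comb

d = 2
dim = {"ar": 0, "arx": 1, "arxx": 2, "arxxn": 2}
deg = {a: d - dim[a] for a in dim}

# m_{alpha,beta}, entered from the rows of Figure [volumepolys].
m = {"ar":    {"ar": 1},
     "arx":   {"ar": 1, "arx": 1},
     "arxx":  {"ar": 1, "arx": 2, "arxx": 1},
     "arxxn": {"ar": 3, "arx": 2, "arxxn": 1}}

# Cube counts C_{alpha,k}(P) of the running example (Figure [localcounts]).
C = {"ar": [3, 21, 19], "arx": [16, 34, 0], "arxx": [11, 0, 0], "arxxn": [7, 0, 0]}

for beta in dim:
    for k in range(d + 1):
        lhs = sum(Fraction(m[a].get(beta, 0), 2 ** dim[a]) * C[a][k] for a in dim)
        rhs = Fraction(comb(deg[beta], k), 2 ** dim[beta]) * C[beta][deg[beta]]
        assert lhs == rhs, (beta, k)
print("Lemma 12 holds for every (beta, k) in the running example.")
```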
The formula in Lemma [\[countinglemma\]](#countinglemma){reference-type="ref" reference="countinglemma"} is a generalization of a formula from [@richter_genericorthotopes] for the volume expressed with the orthant-counting function $\mu_d$. Thus, with $\beta=\scalerel*{\includegraphics{ar.pdf}}{]}$ and $k=0$, we have $$\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P)=\sum_{\alpha}2^{-\dim\alpha}m_{\alpha,\scalerel*{\includegraphics{ar.pdf}}{]}}C_{\alpha,0}(P)
=2^{-d}\sum_\alpha\mu_d(\alpha)n_\alpha(P),$$ where $2^{-\dim\alpha}m_{\alpha,\scalerel*{\includegraphics{ar.pdf}}{]}}=2^{-d}\mu_d(\alpha)$ is the fraction of all $d$-dimensional orthants occupied by $\alpha\in\mathbb{R}^d$ and $C_{\alpha,0}=n_\alpha(P)$ is the number of lattice points in $P$ having floral type $\alpha$. On the other hand, we have $$2^{-\dim\beta} \binom{\deg\beta}{k}C_{\beta,\deg\beta}(P)=2^{-0} \binom{d}{0}C_{\scalerel*{\includegraphics{ar.pdf}}{]},d}(P)=\mathrm{Vol}_d(P).$$
**Proposition 13**. *If $P$ is an integral generic orthotope, then $$\sum_{k=0}^d(-1)^k\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)e_\alpha
=2^{-d}\sum_{\deg\beta=0}C_{\beta,0}(P)h^{-1}(e_\beta).$$ [\[localcount\]]{#localcount label="localcount"}*
We remark that the stipulation $\deg\beta=0$ in this proposition is equivalent to requiring that $\beta$ be a floral vertex. In other words, the domain of summation in the right-hand side of this formula coincides with $d$-dimensional floral vertex congruence classes.
*Proof.* This is a straightforward computation using the results above. In detail, notice $$\begin{aligned}
&\sum_{k=0}^d(-1)^k\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)h(e_\alpha) \\
&=\sum_{k=0}^d (-1)^k\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)\sum_{\beta} m_{\alpha,\beta}e_\beta
\hbox{ (definition of $h$)} \\
&=\sum_{k=0}^d (-1)^k\sum_{\beta}\sum_{\alpha} 2^{-\dim\alpha}m_{\alpha,\beta}C_{\alpha,k}(P)e_\beta \\
&=\sum_{k=0}^d (-1)^k\sum_{\beta}\binom{\deg\beta}{k}2^{-\dim\beta}C_{\beta,\deg\beta}(P)e_\beta
\hbox{ (from Lemma \ref{countinglemma})} \\
&=\sum_{\beta}2^{-\dim\beta}C_{\beta,\deg\beta}\left[\sum_{k=0}^{\deg\beta} (-1)^k \binom{\deg\beta}{k}\right]e_\beta. \end{aligned}$$ The expression in square brackets vanishes when $\deg\beta>0$ and equals 1 when $\deg\beta=0$, so we obtain $$\begin{aligned}
&\sum_{k=0}^d(-1)^k\sum_{\alpha}2^{-\dim\alpha}C_{\alpha,k}(P)h(e_\alpha) \\
&=\sum_{\deg\beta=0}2^{-\dim\beta}C_{\beta,\deg\beta}(P)e_\beta \\
&=2^{-d}\sum_{\deg\beta=0}C_{\beta,0}(P)e_\beta\end{aligned}$$ Applying $h^{-1}$, the result follows. ◻
**Corollary 14**. *Suppose $P$ is an integral generic orthotope. Then $$\phi(P)=2^{-d}\sum_{\deg\alpha=0}C_{\alpha,0}(P)D(h^{-1}(e_{\alpha})).$$ [\[eulervector\]]{#eulervector label="eulervector"}*
*Proof.* Notice $$\begin{aligned}
\phi(P) &= \sum_{\alpha}L_{\alpha,0}(P)e_\alpha
\hbox{ (definition of $\phi$)} \\
&= \sum_{\alpha}\sum_{k=0}^{d}(-1)^k C_{\alpha,k}(P)e_\alpha
\hbox{ (from Corollary \ref{clrelation})} \\
&= \sum_{k=0}^{d}(-1)^k\sum_{\alpha} C_{\alpha,k}(P)e_\alpha \\
&= \sum_{k=0}^{d}(-1)^k\sum_{\alpha} 2^{-\dim\alpha}C_{\alpha,k}(P)D(e_\alpha)
\hbox{ (definition of $D$)} \\
&=2^{-d}\sum_{\deg\alpha=0}C_{\alpha,0}(P)D(h^{-1}(e_\alpha))
\hbox{ (from Proposition \ref{localcount})} \\\end{aligned}$$ ◻
With this, we prove Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"} by considering cross-sections. Suppose $k$ is a positive integer, $I\subset[d]$ with $|I|=k$, and $\lambda:I\rightarrow(\mathbb{Z}+\frac{1}{2})$ is a tuple of half integers. Recall from [@richter_genericorthotopes] that the cross-section $P\cap\Pi_{I,\lambda}$ is a generic orthotope of dimension $d-k$, where $\Pi_{I,\lambda}$ is the generalized hyperplane determined by $(I,\lambda)$. Thus, for an appropriate choice of $v_\lambda\in\mathbb{R}^d$ (depending on $\lambda$), the translation $v_\lambda+\left(P\cap\Pi_{I,\lambda}\right)$ is integral and congruent to $P\cap\Pi_{I,\lambda}$. Counting integral unit cubes of dimension $k$ in $P$ thus amounts to counting lattice points in $P\cap\Pi_{I,\lambda}$. This then yields, for any floral arrangement $\alpha$, $$C_{\alpha,k}(P)=\sum_{(I,\lambda)} C_{\alpha,0}(v_\lambda+P\cap\Pi_{I,\lambda}),$$ summing over all choices of $(I,\lambda)$ with $|I|=k$. Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"} then follows immediately. $\square$
# Auxiliary results
This section has several results which supplement the preceding theory.
## Euler vector
The following helps to explain why we refer to $\phi(P)$ as the Euler vector:
**Proposition 15**. *If $P$ is an integral generic orthotope of dimension $d$ and Euler characteristic $\chi(P)$, then $$\chi(P)=(-1)^d L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P).$$*
*Proof.* For a $d$-dimensional integral generic orthotope, define $$f_d(P):=\sum_{k=0}^d(-1)^k C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}(P).$$ We will see that $(-1)^d f_d$ agrees with the Euler characteristic valuation. The proof follows by a routine induction argument on $d$.
Suppose first that $d=1$. Then $P$ is a disjoint union of closed intervals and $-f_d(P)$ is the number of such intervals of $P$, i.e. the Euler characteristic of $P$.
Next, suppose $P$ is an integral generic orthotope of dimension $d$ and $\Pi$ is a $(d-1)$-dimensional integral hyperplane such that the half-spaces on either side of $\Pi$ intersect $P$ in $d$-dimensional orthogonal polytopes, say $P^+$ and $P^-$. Thus, if $H^\pm$ are the open half-spaces on either side of $\Pi$, then $P^+$ (resp. $P^-$) is the closure of the intersection $P\cap H^+$ (resp. $P\cap H^-$). We observe that $$C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}\left(P^+\cup P^-\right)=C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}\left(P^+\right)+C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}\left(P^-\right)+C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}\left(P^+\cap P^-\right)$$ for all $k$, although we must interpret this equation properly. Thus, while $C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}(P)$ generally denotes the number of relatively open $k$-dimensional integral unit cubes of floral type $\scalerel*{\includegraphics{ar.pdf}}{]}$ in $P$, we notice that each such cube necessarily lies in the relative interior of $P$. In particular, we notice that the orthogonal polytopes $P=P^+\cup P^-$, $P^+$, and $P^-$ are all $d$-dimensional generic orthotopes, whereas $P^+\cap P^-$ is a generic orthotope of dimension $d-1$. Thus, for example, if $x$ lies in the relative interior of $P^+\cap P^-$, then $x\in \partial P^+$ and $x\in\partial P^-$ and $x$ cannot have floral type $\scalerel*{\includegraphics{ar.pdf}}{]}$ in $P^+$ or in $P^-$. From this observation, we see that $$f_d(P^+\cup P^-)=f_d(P^+)+f_d(P^-)+f_{d-1}(P^+\cap P^-).$$ Thus, $$\begin{aligned}
&(-1)^df_d(P^+\cup P^-) \\
&=(-1)^df_d(P^+)+(-1)^df_d(P^-)-(-1)^{d-1}f_{d-1}(P^+\cap P^-),\end{aligned}$$ which mirrors the inclusion-exclusion identity $\chi(P^+\cup P^-)=\chi(P^+)+\chi(P^-)-\chi(P^+\cap P^-)$ satisfied by the Euler characteristic; by the induction hypothesis, $(-1)^{d-1}f_{d-1}$ agrees with $\chi$ on the $(d-1)$-dimensional orthotope $P^+\cap P^-$. Notice that $C_{\scalerel*{\includegraphics{ar.pdf}}{]},k}(I^d)$ is either 0 or 1 depending respectively on whether $k<d$ or $k=d$, where $I^d$ is the standard unit cube. In particular, we have $f_d(I^d)=(-1)^d$ for all $d$. Therefore $(-1)^df_d=\chi$ for all $d$. ◻
This in turn yields:
**Corollary 16**. *If $P$ is a $d$-dimensional integral generic orthotope and $\alpha$ is a floral arrangement congruence class, then $(-1)^{\deg\alpha}L_{\alpha,0}(P)$ is the sum of the Euler characteristics of the faces of $P$ whose tangent cones lie in $\alpha$.*
Thus, the Euler vector $\phi(P)=\sum_{\alpha}L_{\alpha,0}(P)e_\alpha$ is a tabulation of these values ranging over all floral arrangement congruence classes. This is apparent in the running example. Thus, notice that each vertex has Euler characteristic $1$, while $P$ has precisely $L_{\scalerel*{\includegraphics{arxx.pdf}}{]},0}(P)$ and $L_{\scalerel*{\includegraphics{arxxn.pdf}}{]},0}(P)$ vertices of types $\scalerel*{\includegraphics{arxx.pdf}}{]}$ and $\scalerel*{\includegraphics{arxxn.pdf}}{]}$ respectively. Similarly, $P$ has $-L_{\scalerel*{\includegraphics{arx.pdf}}{]},0}=18$ edges, with each having Euler characteristic $1$. Finally, $L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)=1$, as $P$ is homeomorphic to the disc $I^2$.
## Ehrhart-Macdonald reciprocity
The preceding result quickly leads to a derivation of Ehrhart-Macdonald reciprocity for integral generic orthotopes. Recall that $tP$ contains precisely $L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(tP)$ interior lattice points. If $t>0$, then let $L(-tP)$ denote the evaluation of the polynomial $L(tP)$ at $-t$.
**Proposition 17**. *If $P$ is a $d$-dimensional integral generic orthotope, then $$L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(tP)=(-1)^d L(-tP).$$*
*Proof.* The Euler characteristic of $P$ is given by $$\begin{aligned}
\chi(P) &=\sum_{\alpha}\sum_k(-1)^k C_{\alpha,k}(P) \\
&=\sum_{\alpha}L_{\alpha,0}(P) \\
&=L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)+\sum_{\alpha\neq\scalerel*{\includegraphics{ar.pdf}}{]}}L_{\alpha,0}(P), \end{aligned}$$ where $\alpha$ denotes a floral arrangement congruence class. Using $\chi(P)=(-1)^d L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)$ from above, this yields $$\sum_{\alpha\neq\scalerel*{\includegraphics{ar.pdf}}{]}}L_{\alpha,0}(P)=\left[(-1)^d-1\right]L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P).$$ As in the proof of Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}, we consider cross-sections of dimension $d-k$ to obtain $$\sum_{\alpha\neq\scalerel*{\includegraphics{ar.pdf}}{]}}L_{\alpha,k}(P)=\left[(-1)^{d-k}-1\right]L_{\scalerel*{\includegraphics{ar.pdf}}{]},k}(P)$$ for all $k$. The result then follows immediately. ◻
In the running example, we notice $$L(tP)=L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(tP)+L_{\scalerel*{\includegraphics{arx.pdf}}{]}}(tP)+L_{\scalerel*{\includegraphics{arxx.pdf}}{]}}(tP)+L_{\scalerel*{\includegraphics{arxxn.pdf}}{]}}(tP)=1+17t+19t^2,$$ while $$L_{\scalerel*{\includegraphics{ar.pdf}}{]}}(tP)=1-17t+19t^2.$$
## Properties of $h^{-1}(e_\alpha)$
For any $\alpha$, let $s_{\alpha,\beta}\in\mathbb{Z}$ denote the coefficients such that $$h^{-1}(e_\alpha)=\sum_\beta s_{\alpha,\beta}e_\beta,$$ summing over floral vertex congruence classes $\beta$.
Recall from [@richter_genericorthotopes] that the bouquet sign function $\sigma$ satisfies $\sigma(\alpha)=(-1)^{\rho(\alpha)}$, where $\rho(\alpha)$ is the number of loops in the series-parallel diagram defining $\alpha$ (which coincides with the number of disjunctions used in its read-once expression). Using Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"} and the fact that $(-1)^d L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)$ and the sum of the bouquet signs of the vertices of $P$ both equal the Euler characteristic of $P$, we obtain:
**Proposition 18**. *$s_{\alpha,\scalerel*{\includegraphics{ar.pdf}}{]}}=(-1)^{\dim\alpha}\sigma(\alpha)$ for every floral vertex congruence class.*
We summarize several other properties of the polynomials $h^{-1}(e_\alpha)$ and the coefficients $s_{\alpha,\beta}$:
- $s_{\alpha,\alpha}=1$ for all $\alpha$.
- $s_{\alpha,\beta}=0$ if $\dim\alpha=\dim\beta$ and $\alpha\neq\beta$.
- $s_{\alpha,\beta}=0$ whenever $\dim\beta>\dim\alpha$.
- $s_{\alpha,\beta}=s_{\overline{\alpha},\overline{\beta}}$ whenever $\beta\neq\scalerel*{\includegraphics{ar.pdf}}{]}$.
- $-s_{\alpha,\beta}$ is the number of cross-sections of type $\beta$ in $\alpha$ when $\dim\beta=\dim\alpha-1$; that is, $s_{\alpha,\beta}=-m_{\alpha,\beta}$ in this case.
- $s_{\alpha,\scalerel*{\includegraphics{arx.pdf}}{]}}=-(-1)^{\dim\alpha}\sum_{\beta}\sigma(\beta)$, summing over all facets $\beta$ of $\alpha$.
- $\sum_{\beta}2^{\dim\beta}s_{\alpha,\beta}=\sigma(\alpha)$, summing over all floral vertex congruence classes $\beta$.
One may establish all of these in a manner similar to our study of the polynomials $h(e_\alpha)$. Notice that the last of these is a reflection of Ehrhart-Macdonald reciprocity. All of these properties are apparent in the table in Figure [\[eulerpolys3\]](#eulerpolys3){reference-type="ref" reference="eulerpolys3"} and in the tables in the appendix.
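As a sanity check, the following Python sketch verifies the complementation symmetry and the last two identities against the table in Figure [\[eulerpolys3\]](#eulerpolys3){reference-type="ref" reference="eulerpolys3"}; the bouquet signs $\sigma(\alpha)$ and the complementation pairing are supplied as data, read off from the corresponding series-parallel diagrams, and the labels are shorthand for the pictured classes.

```python
# Data for the classes of dimension at most 3, in the order of Figure [eulerpolys3].
dim   = {"ar": 0, "arx": 1, "arxx": 2, "arxxn": 2,
         "arxxx": 3, "arxxnx": 3, "arxxnxn": 3, "arxxxn": 3}
sigma = {"ar": 1, "arx": 1, "arxx": 1, "arxxn": -1,
         "arxxx": 1, "arxxnx": -1, "arxxnxn": -1, "arxxxn": 1}
comp  = {"arx": "arx", "arxx": "arxxn", "arxxn": "arxx", "arxxx": "arxxxn",
         "arxxxn": "arxxx", "arxxnx": "arxxnxn", "arxxnxn": "arxxnx"}

# s_{alpha,beta}, entered from Figure [eulerpolys3].
s = {"ar":      {"ar": 1},
     "arx":     {"ar": -1, "arx": 1},
     "arxx":    {"ar": 1, "arx": -2, "arxx": 1},
     "arxxn":   {"ar": -1, "arx": -2, "arxxn": 1},
     "arxxx":   {"ar": -1, "arx": 3, "arxx": -3, "arxxx": 1},
     "arxxnx":  {"ar": 1, "arx": 1, "arxx": -2, "arxxn": -1, "arxxnx": 1},
     "arxxnxn": {"ar": 1, "arx": 1, "arxx": -1, "arxxn": -2, "arxxnxn": 1},
     "arxxxn":  {"ar": -1, "arx": 3, "arxxn": -3, "arxxxn": 1}}

for a, row in s.items():
    assert sum(2 ** dim[b] * c for b, c in row.items()) == sigma[a]   # last identity
    assert row["ar"] == (-1) ** dim[a] * sigma[a]                     # Proposition 18
    for b, c in row.items():                                          # complementation
        if a != "ar" and b != "ar":
            assert s[comp[a]].get(comp[b], 0) == c
print("The listed identities hold for all classes of dimension at most 3.")
```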
## Special Cases
Let $P\subset\mathbb{R}^d$ be a $d$-dimensional integral generic orthotope. We have already seen that $(-1)^dL_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)$ coincides with the Euler characteristic of $P$. The purpose of this section is to detail some other special formulas involving some familiar valuations of $P$, if only to better understand the notation. In particular, we notice:
- $\sum_{\deg\alpha=0} L_{\alpha,0}(P)=\#(\hbox{vertices of }P).$
- $\sum_{\deg\alpha=1} L_{\alpha,0}(P)=-\#(\hbox{edges of }P).$
- $L_{\scalerel*{\includegraphics{arx.pdf}}{]},d-1}(P)=-2L_{\scalerel*{\includegraphics{ar.pdf}}{]},d-1}(P)=\mathrm{Volume}_{d-1}(\partial P).$
- $L_{\scalerel*{\includegraphics{ar.pdf}}{]},d}(P)=C_{\scalerel*{\includegraphics{ar.pdf}}{]},d}(P)=\mathrm{Volume}_d(P).$
These are all trivial given an understanding of the notation $L_{\alpha,k}(P)$. Notice the third of these recovers the well-known fact that the degree-$(d-1)$ coefficient of the Ehrhart polynomial of a $d$-dimensional polytope $P$ is half of the $(d-1)$-dimensional measure of $\partial P$: the coefficient of $t^{d-1}$ in $L(tP)$ is $L_{\scalerel*{\includegraphics{ar.pdf}}{]},d-1}(P)+L_{\scalerel*{\includegraphics{arx.pdf}}{]},d-1}(P)=-L_{\scalerel*{\includegraphics{ar.pdf}}{]},d-1}(P)=\frac{1}{2}\mathrm{Volume}_{d-1}(\partial P)$.
# A multivariable generalization
The enumeration of lattice points described above can be adapted to the case where coordinates are dilated independently. Thus, if $\mathbf{t}=(t_1,t_2,...,t_d)$ is a tuple of positive integers, then define the *generalized Ehrhart function* of $P$ by $$L(\mathbf{t}P):=\#(\mathbf{t}P\cap\mathbb{Z}^d),$$ where $\mathbf{t}P$ denotes the image of $P$ under the dilation map $$(x_1,x_2,...,x_d)\mapsto(t_1x_1,t_2x_2,...,t_dx_d).$$
All of the results in this section follow from an analysis identical to that used above. As above, we write $$L(\mathbf{t}P):=\sum_\alpha L_\alpha(\mathbf{t}P),$$ where $L_\alpha(\mathbf{t}P)$ denotes the number of lattice points in $\mathbf{t}P$ at which the tangent cone lies in $\alpha$.
**Proposition 19**. *Suppose $P$ is an integral generic orthotope and $\alpha$ is a floral arrangement. Then $\mathbf{t}\mapsto L_\alpha(\mathbf{t}P)$ is a polynomial function of $\mathbf{t}=(t_1,t_2,...,t_d)$ of degree $\deg\alpha$ in which every term is a square-free monomial in $\{t_1,t_2,...,t_d\}$.*
For $I\subset[d]$, let $\mathbf{t}_I=\prod_{i\in I}t_i$ be the square-free monomial corresponding to $I$. Then we write $$L_\alpha(\mathbf{t}P):=\sum_{k=0}^{\deg\alpha}\sum_{|I|=k}L_{\alpha,I}(P)\mathbf{t}_I.$$ We obtain explicit expressions for $L_{\alpha,I}(P)$ as a corollary of Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}. Thus, given a floral arrangement $\alpha$ and $I\subset[d]$, let $C_{\alpha,I}(P)$ denote the number of relatively open integral unit cubes of dimension $|I|$ in $P$ whose tangent cones are congruent to $\alpha$ and which are parallel to the coordinate subspace $\Pi_I=\mathrm{span}\{e_i:i\in I\}$. Then the formula gives us: $$\sum_\alpha L_{\alpha,I}(P)e_\alpha=2^{|I|-d}\sum_{\deg\alpha=|I|}C_{\alpha,I}(P)D(h^{-1}(e_\alpha)).
\label{generalizedehrhart}$$
We illustrate this for the running example. The cube counts $C_{\alpha,I}(P)$ appear in the table in Figure [\[multivariablecounts2d\]](#multivariablecounts2d){reference-type="ref" reference="multivariablecounts2d"} and the coefficients $L_{\alpha,I}(P)$ appear in the table in Figure [\[multivariablecoefs2d\]](#multivariablecoefs2d){reference-type="ref" reference="multivariablecoefs2d"}.
$$\begin{array}{c|cccc}
\alpha & C_{\alpha,\{\}} & C_{\alpha,\{1\}} & C_{\alpha,\{2\}} & C_{\alpha,\{1,2\}}\\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& 3 & 12 & 9 & 19 \\
\scalerel*{\includegraphics{arx.pdf}}{]}& 16 & 14 & 20 & \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& 11 & & & \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& 7 & & & \\
\end{array}$$
$$\begin{array}{c|cccc}
\alpha & L_{\alpha,\{\}} & L_{\alpha,\{1\}} & L_{\alpha,\{2\}} & L_{\alpha,\{1,2\}}\\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& 1 & -7 & -10 & 19 \\
\scalerel*{\includegraphics{arx.pdf}}{]}& -18 & 14 & 20 & \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& 11 & & & \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& 7 & & & \\
\end{array}$$
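A brute-force verification is again straightforward: summing the columns of Figure [\[multivariablecoefs2d\]](#multivariablecoefs2d){reference-type="ref" reference="multivariablecoefs2d"} gives $L(\mathbf{t}P)=1+7t_1+10t_2+19t_1t_2$, and the following Python sketch compares this polynomial with a direct count of the lattice points of $\mathbf{t}P$ for a few dilation vectors.

```python
# Lower-left corners of the unit squares of the running example P.
S = {(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 1), (2, 2),
     (2, 3), (3, 1), (3, 2), (3, 3), (4, 0), (4, 1), (4, 2),
     (5, 2), (6, 1), (6, 2), (6, 3), (6, 4)}

def lattice_count(t1, t2):
    """Number of lattice points in the image of P under (x, y) -> (t1*x, t2*y)."""
    count = 0
    for x in range(0, 7 * t1 + 1):
        for y in range(0, 5 * t2 + 1):
            if any(t1 * vx <= x <= t1 * (vx + 1) and t2 * vy <= y <= t2 * (vy + 1)
                   for (vx, vy) in S):
                count += 1
    return count

# The two printed values agree for each dilation vector.
for t1, t2 in [(1, 1), (2, 1), (1, 3), (3, 2)]:
    print((t1, t2), lattice_count(t1, t2), 1 + 7 * t1 + 10 * t2 + 19 * t1 * t2)
```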
# Example in 3 dimensions
Define a 3-dimensional integral orthogonal polytope as $P:=\bigcup_{v\in S}\left(v+[0,1]^3\right)$, where $S\subset\mathbb{R}^3$ appears in Figure [\[examplecorners\]](#examplecorners){reference-type="ref" reference="examplecorners"}. Evidently $P$ lies in the primary octant in $\mathbb{R}^3$. One should imagine $P$ as an assembly of several stacks of unit cubes resting "skyscraper style" on a flat surface representing the $(x,y)$-plane in $\mathbb{R}^3$. A sketch of $P$ appears in Figure [5](#exampleehrhart){reference-type="ref" reference="exampleehrhart"}. The shading in the figure is intended to illustrate the annular facet where $P$ makes contact with the $(x,y)$ plane. The numbers appearing in the sketch tell the height of each stack. Notice that $P$ is a solid 3-dimensional torus.
$$S=\left\{\begin{array}{ccccccc}
(0,0,0), & (1,0,0), & (2,0,0), & (3,0,0), & (4,0,0), & (0,1,0), & (1,1,0), \\
(2,1,0), & (3,1,0), & (0,2,0), & (2,2,0), & (0,3,0), & (1,3,0), & (2,3,0), \\
(1,0,1), & (2,0,1), & (3,0,1), & (4,0,1), & (2,1,1), & (3,1,1), & (2,2,1), \\
(0,3,1), & (1,3,1), & (2,3,1), & (1,0,2), & (2,0,2), & (3,0,2), & (4,0,2) \\
\end{array}\right\}.$$
![A 3-dimensional integral generic orthotope.](exampleehrhart.pdf){#exampleehrhart width="33.3%"}
Figure [\[combinatorics\]](#combinatorics){reference-type="ref" reference="combinatorics"} displays the values of $C_{\alpha,k}(P)$ and $L_{\alpha,k}(P)$. Using the formula from Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"} with the tabulated values, for example, one may compute the Euler vector: $$\begin{aligned}
\phi(P)=
& 2^{-3}\left[15D(h^{-1}(\scalerel*{\includegraphics{arxxx.pdf}}{]}))+11D(h^{-1}(\scalerel*{\includegraphics{arxxnx.pdf}}{]}))\right. \\
&\left.+5D(h^{-1}(\scalerel*{\includegraphics{arxxnxn.pdf}}{]}))+1D(h^{-1}(\scalerel*{\includegraphics{arxxxn.pdf}}{]}))\right] \\
=& \frac{15}{8}\left[-(\scalerel*{\includegraphics{ar.pdf}}{]})+6(\scalerel*{\includegraphics{arx.pdf}}{]})-12(\scalerel*{\includegraphics{arxx.pdf}}{]})+8(\scalerel*{\includegraphics{arxxx.pdf}}{]})\right] \\
+& \frac{11}{8}\left[(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})-8(\scalerel*{\includegraphics{arxx.pdf}}{]})-4(\scalerel*{\includegraphics{arxxn.pdf}}{]})+8(\scalerel*{\includegraphics{arxxnx.pdf}}{]})\right] \\
+& \frac{5}{8}\left[(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arx.pdf}}{]})-4(\scalerel*{\includegraphics{arxx.pdf}}{]})-8(\scalerel*{\includegraphics{arxxn.pdf}}{]})+8(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})\right] \\
+& \frac{1}{8}\left[-(\scalerel*{\includegraphics{ar.pdf}}{]})+6(\scalerel*{\includegraphics{arx.pdf}}{]})-12(\scalerel*{\includegraphics{arxxn.pdf}}{]})+8(\scalerel*{\includegraphics{arxxxn.pdf}}{]})\right] \\
=& 0(\scalerel*{\includegraphics{ar.pdf}}{]})+16(\scalerel*{\includegraphics{arx.pdf}}{]})-36(\scalerel*{\includegraphics{arxx.pdf}}{]})-12(\scalerel*{\includegraphics{arxxn.pdf}}{]}) \\
& +15(\scalerel*{\includegraphics{arxxx.pdf}}{]})+11(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+5(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+1(\scalerel*{\includegraphics{arxxxn.pdf}}{]}).\end{aligned}$$ In particular, notice that $L_{\scalerel*{\includegraphics{ar.pdf}}{]},0}(P)=(-1)^3\chi(P)=0$, as $P$ is a solid 3-dimensional torus.
We may also use the formula ([\[generalizedehrhart\]](#generalizedehrhart){reference-type="ref" reference="generalizedehrhart"}) to compute the multivariable Ehrhart polynomials. These appear in Figure [\[coefs\]](#coefs){reference-type="ref" reference="coefs"}. Reading the table, for example, one sees that $$L_{\scalerel*{\includegraphics{arx.pdf}}{]}}(\mathbf{t}P)=16-(32t_1+24t_2+28t_3)+(28t_1t_2+32t_1t_3+20t_2t_3)$$ yields the number of lattice points in $\mathbf{t}P$ whose tangent cones have type $\scalerel*{\includegraphics{arx.pdf}}{]}$.
$$\begin{array}{c|cccc|cccc}
\alpha & C_{\alpha,0} & C_{\alpha,1} & C_{\alpha,2} & C_{\alpha,3} & L_{\alpha,0} & L_{\alpha,1} & L_{\alpha,2} & L_{\alpha,3} \\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& 1 & 17 & 44 & 28 & 0 & 13 & -40 & 28 \\
\scalerel*{\includegraphics{arx.pdf}}{]}& 12 & 76 & 80 & & 16 & -84 & 80 & \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& 32 & 68 & & & -36 & 68 & & \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& 4 & 16 & & & -12 & 16 & & \\
\scalerel*{\includegraphics{arxxx.pdf}}{]}& 15 & & & & 15 & & & \\
\scalerel*{\includegraphics{arxxnx.pdf}}{]}& 11 & & & & 11 & & & \\
\scalerel*{\includegraphics{arxxnxn.pdf}}{]}& 5 & & & & 5 & & & \\
\scalerel*{\includegraphics{arxxxn.pdf}}{]}& 1 & & & & 1 & & & \\
\end{array}$$
$$\begin{array}{c|c|ccc|ccc|c}
\alpha & L_{\alpha,\emptyset} & L_{\alpha,1} & L_{\alpha,2} & L_{\alpha,3} & L_{\alpha,12} & L_{\alpha,13} & L_{\alpha,23}
& L_{\alpha,123} \\
\hline
\scalerel*{\includegraphics{ar.pdf}}{]}& 0 & 6 & 5 & 2 & -14 & -16 & -10 & 28 \\
\scalerel*{\includegraphics{arx.pdf}}{]}& 16 & -32 & -24 & -28 & 28 & 32 & 20 \\
\scalerel*{\includegraphics{arxx.pdf}}{]}& -36 & 28 & 22 & 18 \\
\scalerel*{\includegraphics{arxxn.pdf}}{]}& -12 & 4 & 2 & 10 \\
\scalerel*{\includegraphics{arxxx.pdf}}{]}& 15 \\
\scalerel*{\includegraphics{arxxnx.pdf}}{]}& 11 \\
\scalerel*{\includegraphics{arxxnxn.pdf}}{]}& 5 \\
\scalerel*{\includegraphics{arxxxn.pdf}}{]}& 1 \\
\end{array}$$
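As an elementary check on the tables above, the following Python sketch counts the lattice points of $P$ directly from the list $S$: the total, $81$, agrees with the sum of the column $C_{\alpha,0}$ of Figure [\[combinatorics\]](#combinatorics){reference-type="ref" reference="combinatorics"}, while the number of unit cubes, $28$, gives the volume $C_{\scalerel*{\includegraphics{ar.pdf}}{]},3}(P)$.

```python
# Lower corners of the 28 unit cubes of the 3-dimensional example.
S = [(0,0,0),(1,0,0),(2,0,0),(3,0,0),(4,0,0),(0,1,0),(1,1,0),
     (2,1,0),(3,1,0),(0,2,0),(2,2,0),(0,3,0),(1,3,0),(2,3,0),
     (1,0,1),(2,0,1),(3,0,1),(4,0,1),(2,1,1),(3,1,1),(2,2,1),
     (0,3,1),(1,3,1),(2,3,1),(1,0,2),(2,0,2),(3,0,2),(4,0,2)]

# Every lattice point of P is a corner of one of its unit cubes, so it suffices
# to collect the corners of the cubes.
corners = {(x + dx, y + dy, z + dz)
           for (x, y, z) in S
           for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)}

print(len(S))        # 28 unit cubes, the volume of P
print(len(corners))  # 81 lattice points, in agreement with Figure [combinatorics]
```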
# Conclusion
We have described a theory of Ehrhart polynomials adapted to integral generic orthotopes. We anticipate further development concerning combinatorics of generic orthotopes. This note was concerned solely with generic orthotopes which are integral, and one can imagine extending these results to generic orthotopes which are rational. Even further, we anticipate a version of the Euler-Maclaurin summation formulae for generic orthotopes in a vein similar to that of [@BV_1997].
This author is particularly curious about the space of generic orthotopes with a given count data set $\{C_{\alpha,\deg\alpha}\}$. As we have seen, the number of lattice points in $P$ depends only on these values. However, even in dimension $d=2$, it is easy to find non-congruent generic orthogons which have the same data $\{C_{\alpha,\deg\alpha}\}$ (and therefore the same number of lattice points). Can we say anything definitive about the space of integral generic orthotopes having the same combinatorial data?
We introduced the algebra $\mathcal{F}$ and the "local polynomials" $h(e_\alpha),h^{-1}(e_\alpha)\in\mathcal{F}$ in order to express our formula in Theorem [\[mainresult\]](#mainresult){reference-type="ref" reference="mainresult"}. One notices that these objects can be defined entirely within the theory of Boolean functions. That is, they are intrinsic to the study of read-once Boolean functions. It is therefore natural to ask how these polynomials, and especially the coefficients $m_{\alpha,\beta},s_{\alpha,\beta}\in\mathbb{Z}$, may arise in studies of complexity measures of read-once Boolean functions, cf. [@LM_2021].
# References
Alexander Barvinok. Lattice Points, Polyhedra, and Complexity. In: *Geometric Combinatorics.* Edited by Ezra Miller and Victor Reiner. IAS/Park City Mathematics Series, vol. 14. American Mathematical Society, 2007.
Matthias Beck and Sinai Robins. *Computing the Continuous Discretely: Integer-Point Enumeration in Polyhedra.* 2nd ed. Springer, New York, 2015.
Michel Brion and Michèle Vergne. Lattice points in simple polytopes. *J. Amer. Math. Soc.* **10** (1997), no. 2, 371--392.
Yael Karshon, Shlomo Sternberg, and Jonathan Weitsman. Exact Euler-Maclaurin formulas for simple lattice polytopes. *Adv. in Appl. Math.* **39** (2007), no. 1, 1--50.
Vadim Lozin and Mikhail Moshkov. Critical Properties and Complexity Measures of Read-Once Boolean Functions. *Ann. Math. Artif. Intell.* **89** (2021), 595--614.
Ezra Miller and Bernd Sturmfels. *Combinatorial Commutative Algebra.* Springer, Berlin, 2005.
David Richter. Generic Orthotopes. Submitted for publication, under revision.
# Appendix
This section displays the polynomials $h(e_\alpha)$ and $h^{-1}(e_\alpha)$ where $\dim\alpha=4$.
$$\begin{array}{c|c}
\alpha & h(e_\alpha) \\
\hline
\scalerel*{\includegraphics{arxxxx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+4(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{arxx.pdf}}{]})+4(\scalerel*{\includegraphics{arxxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxx.pdf}}{]}& 3(\scalerel*{\includegraphics{ar.pdf}}{]})+8(\scalerel*{\includegraphics{arx.pdf}}{]})+7(\scalerel*{\includegraphics{arxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]})+2(\scalerel*{\includegraphics{arxxx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxnx.pdf}}{]}& 5(\scalerel*{\includegraphics{ar.pdf}}{]})+10(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{arxx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxnx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxxnx.pdf}}{]}& 7(\scalerel*{\includegraphics{ar.pdf}}{]})+10(\scalerel*{\includegraphics{arx.pdf}}{]})+3(\scalerel*{\includegraphics{arxx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxn.pdf}}{]})+3(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxnx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxuxxn.pdf}}{]}& 9(\scalerel*{\includegraphics{ar.pdf}}{]})+12(\scalerel*{\includegraphics{arx.pdf}}{]})+4(\scalerel*{\includegraphics{arxx.pdf}}{]})+6(\scalerel*{\includegraphics{arxxn.pdf}}{]})+4(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxuxxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxuxx.pdf}}{]}& 7(\scalerel*{\includegraphics{ar.pdf}}{]})+12(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{arxx.pdf}}{]})+4(\scalerel*{\includegraphics{arxxn.pdf}}{]})+4(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxuxx.pdf}}{]})\\
\scalerel*{\includegraphics{arxxxnxn.pdf}}{]}& 9(\scalerel*{\includegraphics{ar.pdf}}{]})+10(\scalerel*{\includegraphics{arx.pdf}}{]})+3(\scalerel*{\includegraphics{arxx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxnxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxnxn.pdf}}{]}& 11(\scalerel*{\includegraphics{ar.pdf}}{]})+10(\scalerel*{\includegraphics{arx.pdf}}{]})+2(\scalerel*{\includegraphics{arxx.pdf}}{]})+6(\scalerel*{\includegraphics{arxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxnxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxxn.pdf}}{]}& 13(\scalerel*{\includegraphics{ar.pdf}}{]})+8(\scalerel*{\includegraphics{arx.pdf}}{]})+(\scalerel*{\includegraphics{arxx.pdf}}{]})+7(\scalerel*{\includegraphics{arxxn.pdf}}{]})+2(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+2(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxxn.pdf}}{]})\\
\scalerel*{\includegraphics{arxxxxn.pdf}}{]}& 15(\scalerel*{\includegraphics{ar.pdf}}{]})+4(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{arxxn.pdf}}{]})+4(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxxn.pdf}}{]}) \\
\end{array}$$
$$\begin{array}{c|c}
\alpha & h^{-1}(e_\alpha) \\
\hline
\scalerel*{\includegraphics{arxxxx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})-4(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{arxx.pdf}}{]})-4(\scalerel*{\includegraphics{arxxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxx.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})+3(\scalerel*{\includegraphics{arxx.pdf}}{]})+(\scalerel*{\includegraphics{arxxn.pdf}}{]})-2(\scalerel*{\includegraphics{arxxx.pdf}}{]})-2(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxnx.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arxx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxn.pdf}}{]})-(\scalerel*{\includegraphics{arxxx.pdf}}{]})-2(\scalerel*{\includegraphics{arxxnx.pdf}}{]})-(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxnx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxxnx.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})-4(\scalerel*{\includegraphics{arx.pdf}}{]})+3(\scalerel*{\includegraphics{arxx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxn.pdf}}{]})-3(\scalerel*{\includegraphics{arxxnx.pdf}}{]})-(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxnx.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxuxxn.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+4(\scalerel*{\includegraphics{arx.pdf}}{]})+4(\scalerel*{\includegraphics{arxx.pdf}}{]})-2(\scalerel*{\includegraphics{arxxn.pdf}}{]})-4(\scalerel*{\includegraphics{arxxnx.pdf}}{]})+(\scalerel*{\includegraphics{arxxuxxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxuxx.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})+4(\scalerel*{\includegraphics{arx.pdf}}{]})-2(\scalerel*{\includegraphics{arxx.pdf}}{]})+4(\scalerel*{\includegraphics{arxxn.pdf}}{]})-4(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxuxx.pdf}}{]})\\
\scalerel*{\includegraphics{arxxxnxn.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})-4(\scalerel*{\includegraphics{arx.pdf}}{]})+3(\scalerel*{\includegraphics{arxx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxn.pdf}}{]})-(\scalerel*{\includegraphics{arxxx.pdf}}{]})-3(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxnxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxnxn.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+2(\scalerel*{\includegraphics{arxx.pdf}}{]})+2(\scalerel*{\includegraphics{arxxn.pdf}}{]})-(\scalerel*{\includegraphics{arxxnx.pdf}}{]})-2(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})-(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxnxn.pdf}}{]}) \\
\scalerel*{\includegraphics{arxxnxxn.pdf}}{]}& (\scalerel*{\includegraphics{ar.pdf}}{]})+(\scalerel*{\includegraphics{arxx.pdf}}{]})+3(\scalerel*{\includegraphics{arxxn.pdf}}{]})-2(\scalerel*{\includegraphics{arxxnxn.pdf}}{]})-2(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxnxxn.pdf}}{]})\\
\scalerel*{\includegraphics{arxxxxn.pdf}}{]}& -(\scalerel*{\includegraphics{ar.pdf}}{]})-4(\scalerel*{\includegraphics{arx.pdf}}{]})+6(\scalerel*{\includegraphics{arxxn.pdf}}{]})-4(\scalerel*{\includegraphics{arxxxn.pdf}}{]})+(\scalerel*{\includegraphics{arxxxxn.pdf}}{]}) \\
\end{array}$$
[\[eulerpolys4\]]{#eulerpolys4 label="eulerpolys4"}
| arxiv_math | {
"id": "2309.09026",
"title": "Ehrhart Polynomials of Generic Orthotopes",
"authors": "David Richter",
"categories": "math.CO",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
We consider the initial boundary value problem to the motion of an inextensible hanging string of finite length under the action of the gravity. In this problem, the tension of the string is also an unknown quantity. It is determined as a unique solution to a two-point boundary value problem, which is derived from the inextensibility of the string together with the equation of motion, and degenerates linearly at the free end. We derive a priori estimates of solutions to the initial boundary value problem in weighted Sobolev spaces under a natural stability condition. The necessity for the weights results from the degeneracy of the tension. Uniqueness of solution is also proved.
---
**A priori estimates of solutions to the motion\
of an inextensible hanging string**
Tatsuo Iguchi and Masahiro Takayama[^1]
# Introduction
We are concerned with the motion of a homogeneous and inextensible string of finite length $L$ under the action of the gravity and a tension of the string. Suppose that one end of the string is fixed and the other one is free. Let $s$ be the arc length of the string measured from the free end of the string so that the string is described as a curve $$\bm{x}(s,t)=(x_1(s,t),x_2(s,t),x_3(s,t)), \qquad s\in [0,L]$$ at time $t$. We can assume without loss of generality that the fixed end of the string is placed at the origin in $\mathbb{R}^3$. Let $\rho$ be the constant density of the string, $\bm{g}$ the acceleration of gravity vector, and $\tau(s,t)$ a scalar tension of the string at the point $\bm{x}(s,t)$ at time $t$. See Figure [1](#intro:hanging string){reference-type="ref" reference="intro:hanging string"}.
![Hanging String](HangingString.eps){#intro:hanging string width="0.45\\linewidth"}
Then, the motion of the string is described by the equations $$\begin{cases}
\rho\ddot{\bm{x}}-(\tau\bm{x}')'=\rho\bm{g} &\mbox{in}\quad (0,L)\times(0,T), \\
|\bm{x}'|=1 &\mbox{in}\quad (0,L)\times(0,T),
\end{cases}$$ where $\dot{\bm{x}}$ and $\bm{x}'$ denote the derivatives of $\bm{x}$ with respect to $t$ and $s$, respectively, so that $\bm{x}'$ is a unit tangential vector of the string. The first equation is the equation of motion. The novelty of this problem is that we do not assume any elasticity of the string so that the tension $\tau$ in the first equation is also an unknown quantity of the problem. In other words, we do not assume any constitutive equation for the tension $\tau$. Instead, we impose the second equation, which describes the fact that the string is inextensible. We also note that the tension is caused by the inextensibility of the string and the gravity as we will see below. The boundary conditions at both ends of the string are given by $$\begin{cases}
\bm{x}=\bm{0} &\mbox{at}\quad \{s=L\}\times(0,T), \\
\tau=0 &\mbox{at}\quad \{s=0\}\times(0,T).
\end{cases}$$ The first boundary condition represents that one end of the string is fixed at the origin, and the second one represents that the other end is free. By making the change of variables $s\to Ls$, $t\to\sqrt{L/|\bm{g}|}t$, $\bm{x}\to L\bm{x}$, $\tau\to \rho|\bm{g}|L\tau$, and $\bm{g}/|\bm{g}|\to\bm{g}$ we may assume that $\rho=1$, $L=1$, and $|\bm{g}|=1$. Therefore, in the following we consider the equations $$\label{Eq}
\begin{cases}
\ddot{\bm{x}}-(\tau\bm{x}')' = \bm{g} &\mbox{in}\quad (0,1)\times(0,T), \\
|\bm{x}'|=1 &\mbox{in}\quad (0,1)\times(0,T),
\end{cases}$$ under the boundary conditions $$\label{BC}
\begin{cases}
\bm{x}=\bm{0} &\mbox{on}\quad \{s=1\}\times(0,T), \\
\tau=0 &\mbox{on}\quad \{s=0\}\times(0,T).
\end{cases}$$ Here, $\bm{g}$ is a constant unit vector. Finally, we impose the initial conditions of the form $$\label{IC}
(\bm{x},\dot{\bm{x}})|_{t=0}=(\bm{x}_0^\mathrm{in},\bm{x}_1^\mathrm{in}) \quad\mbox{in}\quad (0,1).$$ This is the initial boundary value problem that we are going to consider in this paper.
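For the reader's convenience, the effect of this rescaling can be checked directly: writing $s=L\tilde{s}$, $t=\sqrt{L/|\bm{g}|}\,\tilde{t}$, $\bm{x}=L\tilde{\bm{x}}$, and $\tau=\rho|\bm{g}|L\tilde{\tau}$, we have $$\ddot{\bm{x}}=|\bm{g}|\,\partial_{\tilde{t}}^2\tilde{\bm{x}}, \qquad (\tau\bm{x}')'=\rho|\bm{g}|\,\partial_{\tilde{s}}\bigl(\tilde{\tau}\,\partial_{\tilde{s}}\tilde{\bm{x}}\bigr),$$ so that dividing the equation of motion by $\rho|\bm{g}|$ gives $\partial_{\tilde{t}}^2\tilde{\bm{x}}-\partial_{\tilde{s}}(\tilde{\tau}\,\partial_{\tilde{s}}\tilde{\bm{x}})=\bm{g}/|\bm{g}|$, while the constraint becomes $|\partial_{\tilde{s}}\tilde{\bm{x}}|=1$ with $\tilde{s}\in[0,1]$.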
As was explained above, we do not assume any elasticity of the string, which implies that the tension $\tau$ is also an unknown quantity. On the other hand, we assume that the string is inextensible so that we impose the constraint $|\bm{x}'|=1$, which together with the gravity causes a tension of the string. In other words, by using the constraint we can derive an equation for the tension $\tau$ as follows. Let $(\bm{x},\tau)$ be a solution to [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} and [\[BC\]](#BC){reference-type="eqref" reference="BC"}. Then, we see that $\tau$ satisfies the following two-point boundary value problem $$\label{BVP}
\begin{cases}
-\tau''+|\bm{x}''|^2\tau = |\dot{\bm{x}}'|^2 &\mbox{in}\quad (0,1)\times(0,T), \\
\tau=0 &\mbox{on}\quad \{s=0\}\times(0,T), \\
\tau'=-\bm{g}\cdot\bm{x}' &\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ where we regard the time $t$ as a parameter. In fact, by differentiating the constraint $|\bm{x}'|^2=1$ with respect to $s$ and $t$, we have $\bm{x}'\cdot\bm{x}''=0$, $\bm{x}'\cdot\bm{x}'''+|\bm{x}''|^2 = 0$, $\bm{x}'\cdot\dot{\bm{x}}'=0$, and $\bm{x}'\cdot\ddot{\bm{x}}'+|\dot{\bm{x}}'|^2 = 0$. Therefore, differentiating the first equation in [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} with respect to $s$ and then taking an inner product with $\bm{x}'$, we obtain the first equation in [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}. Taking an inner product of the first equation in [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} with $\bm{x}'$, taking its trace on $s=1$, and using the first boundary condition in [\[BC\]](#BC){reference-type="eqref" reference="BC"}, we obtain the last boundary condition in [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}. It is easy to see that for each fixed time $t$, the two-point boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} can be solved uniquely, so that $\tau$ is determined by $\bm{x}'(\cdot,t)$ and $\dot{\bm{x}}'(\cdot,t)$. Unlike standard theories of nonlinear wave equations, in our problem the tension $\tau$ depends nonlocally in space and time on $\bm{x}'$. Particularly, we need information on the curvature vector $\bm{x}''(\cdot,t)$ and the deformation velocity $\dot{\bm{x}}'(\cdot,t)$ of the tangential vector of the string to determine the tension $\tau$.
As we will see in Section [3](#sect:BVP){reference-type="ref" reference="sect:BVP"} below, if the deformation velocity is initially zero, that is, if $\dot{\bm{x}}(\cdot,0)=\bm{0}$, then the initial tension satisfies $\tau(s,0)\simeq -(\bm{g}\cdot\bm{x}'(1,0))s$. This consideration leads us to the following stability condition $$\label{SC}
-\bm{g}\cdot\bm{x}'(1,t) \geq c_0>0.$$ In fact, if $\dot{\bm{x}}(\cdot,0)=\bm{0}$ and $-\bm{g}\cdot\bm{x}'(1,0)<0$, then the initial tension $\tau(s,0)$ is negative everywhere except at the free end $s=0$, so that the equation of motion in [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} becomes elliptic in space and time. As a result, the initial boundary value problem becomes ill-posed. This stability condition can easily be understood geometrically; see Figures [\[fig:HS1\]](#fig:HS1){reference-type="ref" reference="fig:HS1"} and [3](#fig:HS2){reference-type="ref" reference="fig:HS2"}.
![The stability condition is satisfied.](StabilityCondition1.eps){#fig:HS1 width="30mm"}
![The stability condition is not satisfied.](StabilityCondition2.eps){#fig:HS2 width="30mm"}
Our main objective is to show the local well-posedness of the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} in appropriate weighted Sobolev spaces under the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"}. Toward this goal, in this paper we will derive a priori estimates of the solution $(\bm{x},\tau)$ to the problem, which are given in Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"}. We also prove the uniqueness of the solution to the problem in the class where the a priori estimates are obtained. The uniqueness is given in Theorem [**Theorem** 3](#th:unique){reference-type="ref" reference="th:unique"}.
Even if a priori estimates of the solution $(\bm{x},\tau)$ are obtained, it is not straightforward to construct an existence theory and we need more technical calculations than what will be done in this paper. Therefore, we postpone this existence part to future work. Nevertheless, we will give a remark concerning the existence of solutions. In the derivation of the two-point boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, we use essentially the constraint $|\bm{x}'|=1$ so that it is natural to expect that [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} contains the information of the constraint. In view of this, we will consider the initial boundary value problem of a hyperbolic equation $$\label{HP}
\begin{cases}
\ddot{\bm{x}}-(\tau\bm{x}')' = \bm{g} &\mbox{in}\quad (0,1)\times(0,T), \\
\bm{x}=\bm{0} &\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ for $\bm{x}$ under the initial condition [\[IC\]](#IC){reference-type="eqref" reference="IC"}, coupled with the two-point boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} for $\tau$, in place of the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}. One may think that a boundary condition on the free end $s=0$ is missing for the well-posedness of the problem for $\bm{x}$. However, this is not the case because the tension is degenerate at $s=0$. For more details, see M. Takayama [@Takayama2018]. We will use the initial value problem to the hyperbolic and elliptic coupled system [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} to construct the solution $(\bm{x},\tau)$. In order to show the equivalence of the two problems, we need to show that the solution $(\bm{x},\tau)$ to the transformed system [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} satisfies the constraint $|\bm{x}'|=1$ under appropriate conditions on the initial data. Here, we note that if there exists a smooth solution to the original problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}, then the initial data have to satisfy the constraints $|\bm{x}_0^{\mathrm{in}\,\prime}|=1$ and $\bm{x}_0^{\mathrm{in}\,\prime}\cdot\bm{x}_1^{\mathrm{in}\,\prime}=0$ in $(0,1)$. Conversely, we will show in Theorem [**Theorem** 4](#th:equiv){reference-type="ref" reference="th:equiv"} that if the initial data satisfy these constraints, then any regular solution $(\bm{x},\tau)$ to the transformed system [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} satisfying the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"} satisfies the constraint $|\bm{x}'|=1$.
Contrary to the studies on elastic strings, there are few results on the well-posedness of the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} to the motion of an inextensible string. M. Reeken [@Reeken1979-1; @Reeken1979-2] considered the motion of an inextensible string of *infinite* length having one end fixed at the point $(0,0,\infty)$ in a gravity field. For technical reasons he assumed that the acceleration of gravity vector $\bm{g}$ is not constant. To be precise, he assumed that $\bm{g}=\bm{g}(s)\in C^\infty([0,\infty))$ is constant for $s\in[0,l]$ and grows linearly beyond $s=l$ for some positive $l$. Under this non-physical condition, he proved the existence locally in time and uniqueness of solution provided that the initial data are sufficiently close to a trivial stationary solution. He applied the hard implicit function theorem, which is also known as the Nash--Moser theorem, to construct the solution so that higher regularity must be imposed on the initial data and that a loss of derivatives was allowed. S. C. Preston [@Preston2011] considered the motion of an inextensible string of finite length in *the absence of gravity*, that is, in the case $\bm{g}=\bm{0}$. Under this particular situation, he proved the existence locally in time and uniqueness of solution for arbitrary initial data in some weighted Sobolev spaces, whose weights are stronger than those used by M. Reeken [@Reeken1979-1; @Reeken1979-2]. S. C. Preston used weights for all derivatives of the position $\bm{x}(s,t)$ so that $C^1$-regularity of the solution up to the free end $s=0$ cannot be guaranteed, whereas the weighted Sobolev spaces used by M. Reeken include solutions which are sufficiently regular up to the free end. Y. Şengül and D. Vorotnikov [@SengulVorotnikov2017] considered exactly the same initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} as ours and proved the existence of an admissible Young measure solution after transforming the problem into a system of conservation laws with a discontinuous flux. We note that the existence of such a generalized Young measure solution does not imply the classical well-posedness of the problem. To our knowledge, these are the only results on the existence of solution to the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}, so that its well-posedness has not been resolved so far. We aim to show the well-posedness of the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} in weighted Sobolev spaces including solutions which are sufficiently regular up to the free end.
As related topics, there are several results on the rotations of an inextensible hanging string about a vertical axis with one free end under the action of the gravity. We can observe stable configurations, in which the shape of the string does not change with time, when we force the string to rotate about the vertical axis through the upper fixed end. These configurations are related to the angular velocity of the rotation. A representative result on this problem was given by I. I. Kolodner [@Kolodner1955], who proved that the corresponding nonlinear eigenvalue problem with a constant angular velocity $\omega$ has exactly $n$ non-trivial solutions if and only if $\omega$ satisfies $\omega_n<\omega\leq\omega_{n+1}$ with $\omega_n\equiv\sigma_n\sqrt{|\bm{g}|/4L}$, where $\sigma_n$ is the $n$-th zero of the Bessel function $J_0(z)$, $\bm{g}$ is the acceleration of gravity vector, and $L$ is the length of the string. For more results on the rotating string, see references in [@AmoreBoydMarquez2023].
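As a small numerical illustration of Kolodner's critical angular velocities (not taken from [@Kolodner1955]; the values of $|\bm{g}|$ and $L$ in the sketch below are assumptions made only for this example), one may evaluate $\omega_n=\sigma_n\sqrt{|\bm{g}|/4L}$ as follows.

```python
# Minimal sketch: critical angular velocities omega_n = sigma_n * sqrt(|g|/(4L)),
# where sigma_n is the n-th zero of the Bessel function J_0.  The numerical
# values of g_abs and L are illustrative assumptions, not data from the text.
import numpy as np
from scipy.special import jn_zeros

g_abs = 9.81   # |g| in m/s^2 (assumed)
L = 1.0        # string length in m (assumed)

sigma = jn_zeros(0, 5)                       # first five zeros of J_0
omega = sigma * np.sqrt(g_abs / (4.0 * L))   # critical angular velocities in rad/s

for n, (sig, om) in enumerate(zip(sigma, omega), start=1):
    print(f"n = {n}:  sigma_n = {sig:.4f},  omega_n = {om:.4f} rad/s")
```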
The contents of this paper are as follows. In Section [2](#sect:results){reference-type="ref" reference="sect:results"} we begin with introducing a weighted Sobolev space $X^m$, which plays an important role in the problem, and then state our main results in this paper; a priori estimates of solutions in Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"}, uniqueness of solution in Theorem [**Theorem** 3](#th:unique){reference-type="ref" reference="th:unique"}, and the equivalence of the original problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} and the transformed problem [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} in Theorem [**Theorem** 4](#th:equiv){reference-type="ref" reference="th:equiv"}. In Section [3](#sect:BVP){reference-type="ref" reference="sect:BVP"} we analyze Green's function related to the two-point boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} to derive precise pointwise estimates of the solution in terms of norms of the weighted Sobolev space $X^m$ for the coefficients. In Section [4](#sect:FS){reference-type="ref" reference="sect:FS"} we present basic properties of the space $X^m$ and related calculus inequalities. We also introduce another weighted Sobolev space $Y^m$ for convenience in estimating higher order derivatives of the solution to the two-point boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}. In Section [5](#sect:equiv){reference-type="ref" reference="sect:equiv"} we prove Theorem [**Theorem** 4](#th:equiv){reference-type="ref" reference="th:equiv"}. In Section [6](#sect:EE){reference-type="ref" reference="sect:EE"} we analyze a linearized system to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}, [\[BC\]](#BC){reference-type="eqref" reference="BC"}, and [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and derive an energy estimate of the solution. In Section [7](#sect:unique){reference-type="ref" reference="sect:unique"} we prove Theorem [**Theorem** 3](#th:unique){reference-type="ref" reference="th:unique"} by applying the energy estimate obtained in Section [6](#sect:EE){reference-type="ref" reference="sect:EE"} with a slight modification. In Section [8](#sect:ETau){reference-type="ref" reference="sect:ETau"} we derive estimates for the tension $\tau$ assuming some bounds of norms for $\bm{x}$ and the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"}. In the critical case of the regularity index, we use a weaker norm for $\tau$ than those in the other cases. In Section [9](#sect:EstID){reference-type="ref" reference="sect:EstID"} we evaluate initial values for time derivatives of $\bm{x}$ and $\tau$ in terms of the initial data $(\bm{x}_0^\mathrm{in},\bm{x}_1^\mathrm{in})$. Finally, in Section [10](#sect:APE){reference-type="ref" reference="sect:APE"} we prove Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"}.
**Acknowledgement**\
T. I. is partially supported by JSPS KAKENHI Grant Number JP22H01133.
# Main results {#sect:results}
**Notation**. For $1\leq p\leq\infty$, we denote by $L^p$ the Lebesgue space on the open interval $(0,1)$. For a non-negative integer $m$, we denote by $H^m$ the $L^2$ Sobolev space of order $m$ on $(0,1)$. The norm of a Banach space $B$ is denoted by $\|\cdot\|_B$. The inner product in $L^2$ is denoted by $(\cdot,\cdot)_{L^2}$. We put $\partial_t=\frac{\partial}{\partial t}$ and $\partial_s=\frac{\partial}{\partial s}$. The norm of a weighted $L^p$ space with a weight $s^\alpha$ is denoted by $\|s^\alpha u\|_{L^p}$, so that $\|s^\alpha u\|_{L^p}^p=\int_0^1s^{\alpha p}|u(s)|^p \mathrm{d}s$ for $1\leq p<\infty$. It is sometimes also denoted by $\|\sigma^\alpha u\|_{L^p}$; this should cause no confusion. $[P,Q]=PQ-QP$ denotes the commutator. We denote by $C(a_1, a_2, \ldots)$ a positive constant depending on $a_1, a_2, \ldots$. $f\lesssim g$ means that there exists a non-essential positive constant $C$ such that $f\leq Cg$ holds. $f\simeq g$ means that $f\lesssim g$ and $g\lesssim f$ hold. $a_1 \vee a_2 = \max\{a_1,a_2\}$.
For a non-negative integer $m$ we define a weighted Sobolev space $X^m$ as the set of all functions $u=u(s)\in L^2$ equipped with the norm $\|\cdot\|_{X^m}$ defined by $$\label{WSS}
\|u\|_{X^m}^2 =
\begin{cases}
\displaystyle
\|u\|_{H^k}^2 + \sum_{j=1}^k\|s^j\partial_s^{k+j}u\|_{L^2}^2 &\mbox{for}\quad m=2k, \\
\displaystyle
\|u\|_{H^k}^2 + \sum_{j=1}^{k+1}\|s^{j-\frac12}\partial_s^{k+j}u\|_{L^2}^2 &\mbox{for}\quad m=2k+1.
\end{cases}$$ For a function $u=u(s,t)$ depending also on time $t$, we introduce norms $|\!|\!|\cdot|\!|\!|_m$ and $|\!|\!|\cdot|\!|\!|_{m,*}$ by $$|\!|\!| u(t) |\!|\!|_m^2 = \sum_{j=0}^m \|\partial_t^j u(t)\|_{X^{m-j}}^2, \qquad
|\!|\!| u(t) |\!|\!|_{m,*}^2 = \sum_{j=0}^{m-1} \|\partial_t^j u(t)\|_{X^{m-j}}^2.$$ The first norm $|\!|\!|\cdot|\!|\!|_m$ will be used to evaluate $\bm{x}$, whereas the second norm $|\!|\!|\cdot|\!|\!|_{m,*}$ will be used to evaluate $\tau$. However, in the critical case of the regularity index $m$, we need to use a weaker norm than $|\!|\!|\cdot|\!|\!|_{m,*}$. For $\epsilon>0$, we introduce norms $\|\cdot\|_{X_\epsilon^k}$ for $k=1,2,3$ as $$\|u\|_{X_\epsilon^k}^2 =
\begin{cases}
\|s^\epsilon u\|_{L^\infty}^2 + \|s^{\frac12+\epsilon}u'\|_{L^2}^2 &\mbox{for}\quad k=1, \\
\|u\|_{L^\infty}^2 + \|s^\epsilon u'\|_{L^2}^2 + \|s^{1+\epsilon} u''\|_{L^2}^2 &\mbox{for}\quad k=2, \\
\|u\|_{L^\infty}^2 + \|s^\epsilon u'\|_{L^\infty}^2 + \|s^{\frac12+\epsilon}u''\|_{L^2}^2 + \|s^{\frac32+\epsilon}u'''\|_{L^2}^2
&\mbox{for}\quad k=3,
\end{cases}$$ and put $$|\!|\!| u(t) |\!|\!|_{3,*,\epsilon}^2 = \|u(t)\|_{X_\epsilon^3}^2 + \|\partial_tu(t)\|_{X_\epsilon^2}^2 + \|\partial_t^2 u(t)\|_{X_\epsilon^1}^2.$$ The following theorem is one of the main theorems in this paper and gives a priori estimates of the solution $(\bm{x},\tau)$ to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}.
****Theorem** 1**. *For any integer $m\geq4$ and any positive constants $M_0$ and $c_0$, there exist a sufficiently small positive time $T$ and a large constant $C$ such that if the initial data satisfy $$\label{CondID}
\begin{cases}
\|\bm{x}_0^\mathrm{in}\|_{X^m}+\|\bm{x}_1^\mathrm{in}\|_{X^{m-1}} \leq M_0, \\
-\bm{g}\cdot\bm{x}_0^{\mathrm{in}\,\prime}(1) \geq 2c_0,
\end{cases}$$ then any regular solution $(\bm{x},\tau)$ to the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} satisfies the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"}, $|\!|\!| \bm{x}(t) |\!|\!|_m \leq C$, and $$\begin{cases}
C^{-1}s \leq \tau(s,t) \leq Cs, \quad \sum_{j=1}^{m-3}|\partial_t^j\tau(s,t)|\leq Cs, \quad |\partial_t^{m-2}\tau(s,t)|\leq Cs^\frac12, \\
|\!|\!| \tau'(t) |\!|\!|_{m-1,*} \leq C \qquad\ \mbox{in the case $m\geq5$}, \\
|\!|\!| \tau'(t) |\!|\!|_{3,*,\epsilon} \leq C(\epsilon) \qquad\mbox{in the case $m=4$}
\end{cases}$$ for $0\leq t\leq T$ and $\epsilon>0$, where the constant $C(\epsilon)$ depends also on $\epsilon$.*
****Remark** 2**. *The requirement $m\geq4$ corresponds to the quasilinear regularity in the sense that $m=4$ is the minimal integer regularity index $m$ that ensures the embedding $C^0([0,T];X^m)\cap C^1([0,T];X^{m-1}) \hookrightarrow C^1([0,1]\times[0,T])$; see Remark [**Remark** 15](#remark:embedding){reference-type="ref" reference="remark:embedding"}. Therefore, $m=4$ is a critical regularity index in the classical sense.*
We then consider the uniqueness of the solution $(\bm{x},\tau)$ to the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}. To this end, we need to specify a class that the solutions belong to. Here, we consider the solutions satisfying $$\label{SolClass}
\begin{cases}
\bm{x}' \in C^0([0,T];X^2)\cap C^1([0,T];X^1), \\
\displaystyle
\inf_{t\in[0,T]}(-\bm{g}\cdot\bm{x}'(1,t))>0.
\end{cases}$$ We note that if $\bm{x} \in C^0([0,T];X^4) \cap C^1([0,T];X^3)$ satisfies the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"}, then it also satisfies [\[SolClass\]](#SolClass){reference-type="eqref" reference="SolClass"}; see Remark [**Remark** 15](#remark:embedding){reference-type="ref" reference="remark:embedding"}.
****Theorem** 3**. *The solution to the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} is unique in the class [\[SolClass\]](#SolClass){reference-type="eqref" reference="SolClass"}.*
The following theorem ensures the equivalence of the original problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} and the transformed problem [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"}.
****Theorem** 4**. *Let $(\bm{x},\tau)$ be a solution to the transformed problem [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} in the class [\[SolClass\]](#SolClass){reference-type="eqref" reference="SolClass"}. Suppose that the initial data satisfy $|\bm{x}_0^{\mathrm{in}\,\prime}(s)|\equiv1$ and $\bm{x}_0^{\mathrm{in}\,\prime}(s)\cdot\bm{x}_1^{\mathrm{in}\,\prime}(s)\equiv0$. Then, we have $|\bm{x}'(s,t)|\equiv1$.*
We prove Theorems [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"}, [**Theorem** 3](#th:unique){reference-type="ref" reference="th:unique"}, and [**Theorem** 4](#th:equiv){reference-type="ref" reference="th:equiv"} in Sections [10](#sect:APE){reference-type="ref" reference="sect:APE"}, [7](#sect:unique){reference-type="ref" reference="sect:unique"}, and [5](#sect:equiv){reference-type="ref" reference="sect:equiv"}, respectively.
# Two-point boundary value problem {#sect:BVP}
In view of [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} we will consider the two-point boundary value problem $$\label{TBVP}
\begin{cases}
-\tau''+|\bm{x}''|^2\tau = h \quad\mbox{in}\quad (0,1), \\
\tau(0)=0, \quad \tau'(1)=a,
\end{cases}$$ where $\bm{x}(s)$ and $h(s)$ are given functions and $a$ is a constant.
## Green's function
As is well-known, Green's function to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} can be constructed as follows. Let $\varphi$ and $\psi$ be unique solutions to the initial value problems $$\label{IVPphi}
\begin{cases}
-\varphi''+|\bm{x}''|^2\varphi = 0 \quad\mbox{in}\quad (0,1), \\
\varphi(0)=0, \quad \varphi'(0)=1,
\end{cases}$$ and $$\label{IVPpsi}
\begin{cases}
-\psi''+|\bm{x}''|^2\psi = 0 \quad\mbox{in}\quad (0,1), \\
\psi(1)=1, \quad \psi'(1)=0,
\end{cases}$$ respectively. The Wronskian $W(s;\varphi,\psi)=\varphi(s)\psi'(s)-\varphi'(s)\psi(s)$ is a non-zero constant since the uniqueness of solution to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} is easily verified. Particularly, we have $W(s;\varphi,\psi)\equiv-\varphi'(1)$. A sharp estimate for $\varphi'(1)$ will be given below; see Lemma [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"}. In terms of these fundamental solutions, Green's function to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} is given by $$\label{GF}
G(s,r)=
\begin{cases}
\dfrac{\varphi(s)\psi(r)}{\varphi'(1)} &\mbox{for}\quad 0\leq s\leq r, \\[2ex]
\dfrac{\psi(s)\varphi(r)}{\varphi'(1)} &\mbox{for}\quad r\leq s\leq 1.
\end{cases}$$ Particularly, the unique solution to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} can be expressed as $$\label{SolBVP}
\tau(s)=a\frac{\varphi(s)}{\varphi'(1)}+\frac{\psi(s)}{\varphi'(1)}\int_0^s\varphi(\sigma)h(\sigma)\mathrm{d}\sigma
+\frac{\varphi(s)}{\varphi'(1)}\int_s^1\psi(\sigma)h(\sigma)\mathrm{d}\sigma.$$ We proceed to evaluate these fundamental solutions $\varphi$ and $\psi$.
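Before doing so, we note that the representation [\[SolBVP\]](#SolBVP){reference-type="eqref" reference="SolBVP"} is also convenient for numerical purposes. The following minimal sketch (not part of the analysis) assembles $\tau$ from $\varphi$ and $\psi$; the coefficient standing in for $|\bm{x}''|^2$, the right-hand side $h$, and the datum $a$ are illustrative assumptions.

```python
# Minimal numerical sketch of (SolBVP): tau is assembled from the fundamental
# solutions phi and psi of -w'' + q w = 0, with q, h and a chosen only for
# illustration (q stands in for |x''|^2).
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid

q = lambda s: 4.0 * s ** 2        # assumed surrogate for |x''(s)|^2
h = lambda s: np.ones_like(s)     # assumed right-hand side
a = 1.0                           # assumed boundary datum tau'(1) = a

s = np.linspace(0.0, 1.0, 2001)
ode = lambda t, y: [y[1], q(t) * y[0]]   # y = (w, w'), so that w'' = q w

# phi: phi(0) = 0, phi'(0) = 1, integrated forward on [0, 1]
sol_phi = solve_ivp(ode, (0.0, 1.0), [0.0, 1.0], t_eval=s, rtol=1e-10, atol=1e-12)
phi, dphi_at_1 = sol_phi.y[0], sol_phi.y[1][-1]

# psi: psi(1) = 1, psi'(1) = 0, integrated backward from s = 1 to s = 0
sol_psi = solve_ivp(ode, (1.0, 0.0), [1.0, 0.0], t_eval=s[::-1], rtol=1e-10, atol=1e-12)
psi = sol_psi.y[0][::-1]

# tau = ( a*phi + psi*int_0^s phi*h + phi*int_s^1 psi*h ) / phi'(1)
I_phi = cumulative_trapezoid(phi * h(s), s, initial=0.0)   # int_0^s phi h
I_psi = cumulative_trapezoid(psi * h(s), s, initial=0.0)   # int_0^s psi h
tau = (a * phi + psi * I_phi + phi * (I_psi[-1] - I_psi)) / dphi_at_1

# sanity checks: tau(0) should vanish and tau'(1) should equal a
print("tau(0) =", tau[0], "  tau'(1) ~", np.gradient(tau, s)[-1])
```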
****Lemma** 5**. *Let $\varphi$ be a unique solution to [\[IVPphi\]](#IVPphi){reference-type="eqref" reference="IVPphi"}. Then, for any $s\in[0,1]$ we have $$\begin{cases}
1 \leq \varphi'(s) \leq \exp(\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2), \\
s \leq \varphi(s) \leq s\exp(\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2).
\end{cases}$$*
*Proof.* It is sufficient to show the first estimate because the second one can easily follow from the first one by integrating it over $[0,s]$ and by using the initial condition $\varphi(0)=0$.
We first show that $\varphi(s)>0$ for all $s\in(0,1]$. In view of the initial conditions at $s=0$, we have $\varphi(s)>0$ for $0<s\ll1$. Now, suppose that there exists $s_*\in(0,1]$ such that $\varphi(s_*)=0$. We can assume without loss of generality that $\varphi(s)>0$ for $0<s<s_*$, so that $\varphi'(s_*)\leq0$. Then, we have $\varphi''(s)=|\bm{x}''(s)|^2\varphi(s) \geq 0$ for $0<s<s_*$. This implies that $\varphi'(s)$ is non-decreasing in the interval $[0,s_*]$, so that $\varphi'(s_*)\geq\varphi'(0)=1$. This contradicts $\varphi'(s_*)\leq0$. Therefore, $\varphi(s)>0$ holds for all $s\in(0,1]$. Particularly, $\varphi'(s)$ is non-decreasing in the whole interval $[0,1]$, so that we obtain $\varphi'(s)\geq\varphi'(0)=1$ for all $s\in[0,1]$.
We proceed to show the upper bound of $\varphi'(s)$. Since $\varphi'(s)$ is a non-decreasing function, we have $\varphi(s)=\int_0^s\varphi'(\sigma)\mathrm{d}\sigma \leq s\varphi'(s)$. Therefore, we see that $$\begin{aligned}
\varphi'(s)
&= 1+\int_0^s\varphi''(\sigma) \mathrm{d}\sigma \\
&= 1+\int_0^s|\bm{x}''(\sigma)|^2\varphi(\sigma) \mathrm{d}\sigma \\
&\leq 1+\int_0^s \sigma|\bm{x}''(\sigma)|^2\varphi'(\sigma) \mathrm{d}\sigma,\end{aligned}$$ which together with Gronwall's inequality yields $\varphi'(s) \leq \exp(\int_0^s\sigma|\bm{x}''(\sigma)|^2 \mathrm{d}\sigma)$. This gives the desired estimate. ◻
****Lemma** 6**. *Let $\psi$ be a unique solution to [\[IVPpsi\]](#IVPpsi){reference-type="eqref" reference="IVPpsi"}. Then, for any $s\in[0,1]$ and any $\alpha\geq0$ we have $$\begin{cases}
1 \leq \psi(s) \leq \exp(\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2), \\
0 \geq s^\alpha\psi'(s) \geq -\|\sigma^{\frac{\alpha}{2}}\bm{x}''\|_{L^2}^2\exp(\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2).
\end{cases}$$*
*Proof.* We first show that $\psi(s)>0$ for all $s\in[0,1]$. In view of the initial condition $\psi(1)=1$, we have $\psi(s)>0$ for $0\leq 1-s\ll1$. Now, suppose that there exists $s_*\in[0,1)$ such that $\psi(s_*)=0$. We can assume without loss of generality that $\psi(s)>0$ for $s_*<s\leq1$, so that $\psi'(s_*)\geq0$. Then, we have $\psi''(s)=|\bm{x}''(s)|^2\psi(s) \geq 0$ for $s_*<s\leq1$. This implies that $\psi'(s)$ is non-decreasing in the interval $[s_*,1]$, so that $\psi'(s)\leq\psi'(1)=0$ for all $s\in[s_*,1]$. This implies that $\psi(s)$ is non-increasing in the interval $[s_*,1]$, so that $\psi(s_*)\geq\psi(1)=1$. This contradicts $\psi(s_*)=0$. Therefore, $\psi(s)>0$ holds for all $s\in[0,1]$. Particularly, $\psi''(s)\geq0$ holds for all $s\in[0,1]$, which implies in turn that $\psi'(s)\leq0$ and $\psi(s)\geq1$ for all $s\in[0,1]$.
We then show the upper bound of $\psi(s)$. Noting that $\psi(s)$ is a non-increasing function and that $\psi'(1)=0$, we see that $$\begin{aligned}
\label{PfEstPsi}
\psi'(s)
&= -\int_s^1\psi''(\sigma) \mathrm{d}\sigma \\
&= -\int_s^1|\bm{x}''(\sigma)|^2\psi(\sigma) \mathrm{d}\sigma \nonumber \\
&\geq -\int_s^1|\bm{x}''(\sigma)|^2 \mathrm{d}\sigma \psi(s), \nonumber\end{aligned}$$ which together with Gronwall's inequality and $\psi(1)=1$ yields $$\begin{aligned}
\psi(s)
&\leq \exp\left(\int_s^1\int_\sigma^1|\bm{x}''(\tilde{\sigma})|^2 \mathrm{d}\tilde{\sigma}\mathrm{d}\sigma\right) \\
&= \exp\left( \int_s^1(\sigma-s)|\bm{x}''(\sigma)|^2 \mathrm{d}\sigma\right) \\
&\leq \exp\left( \int_0^1\sigma|\bm{x}''(\sigma)|^2 \mathrm{d}\sigma\right).\end{aligned}$$ This shows the desired upper bound.
We finally show the lower bound of $\psi'(s)$. It follows from [\[PfEstPsi\]](#PfEstPsi){reference-type="eqref" reference="PfEstPsi"} that $$s^\alpha \psi'(s) \geq -\int_s^1\sigma^\alpha|\bm{x}''(\sigma)|^2 \mathrm{d}\sigma \psi(s),$$ which together with the upper bound of $\psi(s)$ gives the desired one. ◻
When the function $\bm{x}$ depends also on the time $t$, the fundamental solution $\varphi$ depends on the time $t$, too. In the following lemma, we will give estimates for the time derivative $\dot{\varphi}$.
****Lemma** 7**. *Let $\varphi$ be a unique solution to [\[IVPphi\]](#IVPphi){reference-type="eqref" reference="IVPphi"}. Then, for any $(s,t)\in[0,1]\times[0,T]$ we have $$\begin{cases}
|\dot{\varphi}'(s,t)| \leq \|\sigma^{\frac12}\dot{\bm{x}}''(t)\|_{L^2}\|\sigma^{\frac12}\bm{x}''(t)\|_{L^2}
\exp(2\|\sigma^{\frac12}\bm{x}''(t)\|_{L^2}^2), \\
|\dot{\varphi}(s,t)| \leq s\|\sigma^{\frac12}\dot{\bm{x}}''(t)\|_{L^2}\|\sigma^{\frac12}\bm{x}''(t)\|_{L^2}
\exp(2\|\sigma^{\frac12}\bm{x}''(t)\|_{L^2}^2).
\end{cases}$$*
*Proof.* It is sufficient to show the first estimate because the second one can easily follow from the first one by integrating it over $[0,s]$ and by using the initial condition $\dot{\varphi}(0,t)=0$. We note that $\dot{\varphi}$ is a solution to the initial value problem $$\begin{cases}
-\dot{\varphi}''+|\bm{x}''|^2\dot{\varphi} = h \quad\mbox{in}\quad (0,1), \\
\dot{\varphi}(0)=\dot{\varphi}'(0)=0,
\end{cases}$$ where $h=-2(\dot{\bm{x}}''\cdot\bm{x}'')\varphi$ and we regard the time $t$ as a parameter, so that we omit $t$ from the notation in the following of this proof. Integrating the equation over $[0,s]$ and using the initial condition $\dot{\varphi}(0)=0$, we have $$\dot{\varphi}'(s) = \int_0^s|\bm{x}''(\sigma)|^2\dot{\varphi}(\sigma)\mathrm{d}\sigma - \int_0^s h(\sigma)\mathrm{d}\sigma.$$ Since $|\dot{\varphi}(\sigma)|=|\int_0^\sigma \dot{\varphi}'(\tilde{\sigma})\mathrm{d}\tilde{\sigma}|
\leq \sigma\sup_{0\leq r\leq \sigma}|\dot{\varphi}'(r)|$, we obtain $$\sup_{0\leq r\leq s}|\dot{\varphi}'(r)|
\leq \int_0^s \sigma|\bm{x}''(\sigma)|^2 \sup_{0\leq r\leq \sigma}|\dot{\varphi}'(r)|\mathrm{d}\sigma + \int_0^s|h(\sigma)|\mathrm{d}\sigma.$$ Therefore, Gronwall's inequality yields $\sup_{0\leq r\leq s}|\dot{\varphi}'(r)| \leq \exp(\int_0^s \sigma|\bm{x}''(\sigma)|^2\mathrm{d}\sigma)\int_0^s|h(\sigma)|\mathrm{d}\sigma$. It follows from Lemma [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"} that $\|h\|_{L^1} \leq \|\sigma^{\frac12}\dot{\bm{x}}''\|_{L^2}\|\sigma^{\frac12}\bm{x}''\|_{L^2} \exp(\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2)$, so that we obtain the desired estimate. ◻
## Estimate of solutions
We recall that we are considering solutions satisfying the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"}. Therefore, in view of [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} we first consider the case where $h(s)$ and $a$ are both non-negative.
****Lemma** 8**. *Let $\tau$ be a unique solution to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"}. Suppose that $h(s)$ and $a$ are both non-negative. Then, for any $s\in[0,1]$ we have $$\begin{cases}
as\exp(-\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2) \leq \tau(s) \leq (a+\|h\|_{L^1})s, \\
a-(a+\|h\|_{L^1})\|\sigma^{\frac12}\bm{x}''\|_{L^2}^2 \leq \tau'(s) \leq a+\|h\|_{L^1}.
\end{cases}$$*
*Proof.* We recall that the solution $\tau$ is expressed by Green's function as [\[SolBVP\]](#SolBVP){reference-type="eqref" reference="SolBVP"}. Under the assumptions we have $\tau(s) \geq a\varphi(s)/\varphi'(1)$. Therefore, the lower bound of $\tau(s)$ follows from Lemma [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"}. Integrating the equation for $\tau$ over $[s,1]$, we have $$\tau'(s)+\int_s^1|\bm{x}''(\sigma)|^2\tau(\sigma)\mathrm{d}\sigma = a + \int_s^1h(\sigma)\mathrm{d}\sigma.$$ Since the positivity of $\tau(s)$ is already guaranteed, this implies the upper bound of $\tau'(s)$, and then that of $\tau(s)$. As for the lower bound of $\tau'(s)$, we see that $$\begin{aligned}
\tau'(s)
&\geq a - \int_s^1|\bm{x}''(\sigma)|^2\tau(\sigma)\mathrm{d}\sigma \\
&\geq a - (a+\|h\|_{L^1}) \int_s^1 \sigma|\bm{x}''(\sigma)|^2\mathrm{d}\sigma.\end{aligned}$$ This gives the desired estimate. ◻
We proceed to give estimates of the solution $\tau$ to the problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} without assuming the non-negativity of $h(s)$ and $a$. Such estimates will be used to evaluate the derivatives of $\tau$ with respect to $t$.
****Lemma** 9**. *For any $M>0$ there exists a constant $C=C(M)>0$ such that if $\|\sigma^{\frac12}\bm{x}''\|_{L^2} \leq M$, then the solution $\tau$ to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} satisfies $$\begin{cases}
|\tau(s)| \leq C(|a|s+\|\sigma^\alpha h\|_{L^1}s^{1-\alpha}), \\
s^\alpha|\tau'(s)| \leq C(|a|s^\alpha + \|\sigma^\alpha h\|_{L^1})
\end{cases}$$ for any $s\in[0,1]$ and any $\alpha\in[0,1]$.*
*Proof.* It follows from Lemmas [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"} and [**Lemma** 6](#lem:EstPsi){reference-type="ref" reference="lem:EstPsi"} that $\varphi(s)\simeq s$, $\varphi'(s)\simeq1$, $\psi(s)\simeq1$, and $s|\psi'(s)| \lesssim 1$. Since the solution $\tau(s)$ is expressed as [\[SolBVP\]](#SolBVP){reference-type="eqref" reference="SolBVP"}, we have $$\begin{aligned}
|\tau(s)|
&\lesssim |a|s + \int_0^s \sigma|h(\sigma)| \mathrm{d}\sigma + s\int_s^1 |h(\sigma)| \mathrm{d}\sigma \\
&\lesssim |a|s + s^{1-\alpha}\int_0^1 \sigma^\alpha|h(\sigma)| \mathrm{d}\sigma.\end{aligned}$$ This gives the first estimate of the lemma. In view of $$\label{Sol'BVP}
\tau'(s)=a\frac{\varphi'(s)}{\varphi'(1)}+\frac{\psi'(s)}{\varphi'(1)}\int_0^s\varphi(\sigma)h(\sigma)\mathrm{d}\sigma
+\frac{\varphi'(s)}{\varphi'(1)}\int_s^1\psi(\sigma)h(\sigma)\mathrm{d}\sigma,$$ we see that $$\begin{aligned}
s^\alpha|\tau'(s)|
&\lesssim |a|s^\alpha + s^\alpha|\psi'(s)| \int_0^s \sigma|h(\sigma)| \mathrm{d}\sigma
+ s^\alpha \int_s^1 |h(\sigma)| \mathrm{d}\sigma \\
&\lesssim |a|s^ \alpha + \int_0^1 \sigma^\alpha|h(\sigma)| \mathrm{d}\sigma.\end{aligned}$$ This gives the second estimate of the lemma. ◻
In general, if $h(s)$ has a singularity at $s=0$, then so does $\tau'(s)$. To evaluate the singularity in terms of the $L^p$-norm, the above pointwise estimate does not give a sharp one. Next, we will derive a sharp $L^p$ estimate of $\tau'(s)$. To this end, we prepare the following calculus inequality.
****Lemma** 10**. *Let $1\leq p\leq\infty$ and $\alpha+\frac{1}{p}>0$, and put $H(s)=\int_s^1 h(\sigma) \mathrm{d}\sigma$. Then, we have $$\|s^\alpha H\|_{L^p} \leq \frac{1}{(\alpha+\frac{1}{p})^\frac{1}{p}} \|s^{\alpha+\frac{1}{p}}h\|_{L^1}.$$*
*Proof.* The case $p=\infty$ is trivial, so that we assume $p<\infty$. We may also assume without loss of generality that $h(s)$ is non-negative. By integration by parts, we see that $$\begin{aligned}
\|s^\alpha H\|_{L^p}^p
&= \int_0^1s^{\alpha p}\left(\int_s^1 h(\sigma) \mathrm{d}\sigma \right)^p \mathrm{d}s \\
&= \left[ \frac{1}{\alpha p+1}s^{\alpha p+1}\left(\int_s^1 h(\sigma) \mathrm{d}\sigma \right)^p \right]_0^1
+ \frac{p}{\alpha p+1}\int_0^1 s^{\alpha p+1}h(s) \left(\int_s^1 h(\sigma) \mathrm{d}\sigma \right)^{p-1} \mathrm{d}s \\
&= \frac{p}{\alpha p+1} \int_0^1 s^{\alpha+\frac{1}{p}}h(s) \left(s^{\alpha+\frac{1}{p}}\int_s^1 h(\sigma) \mathrm{d}\sigma \right)^{p-1}
\mathrm{d}s \\
&\leq \frac{p}{\alpha p+1} \left( \int_0^1 s^{\alpha+\frac{1}{p}}h(s) \mathrm{d}s \right)^p.\end{aligned}$$ Therefore, we obtain the desired estimate. ◻
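As a quick sanity check of Lemma [**Lemma** 10](#lem:CalIneq){reference-type="ref" reference="lem:CalIneq"} (not part of the proof), one can test the inequality numerically; the test function $h$ and the exponents in the sketch below are assumptions made only for this experiment.

```python
# Numerical sanity check of the inequality in Lemma 10 for one assumed choice
# of h, alpha and p.
import numpy as np

alpha, p = 0.25, 3.0
s = np.linspace(1e-6, 1.0, 200_001)
ds = s[1] - s[0]
h = s ** (-0.3)                 # integrable, mildly singular at s = 0 (assumed)

# H(s) = int_s^1 h(sigma) d sigma, approximated by a Riemann sum from the right
H = np.flip(np.cumsum(np.flip(h))) * ds

lhs = (np.sum((s ** alpha * H) ** p) * ds) ** (1.0 / p)
rhs = (alpha + 1.0 / p) ** (-1.0 / p) * np.sum(s ** (alpha + 1.0 / p) * h) * ds
print(f"lhs = {lhs:.4f}  <=  rhs = {rhs:.4f}  :  {lhs <= rhs}")
```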
****Lemma** 11**. *For any $M>0$ there exists a constant $C=C(M)>0$ such that if $\|\sigma^{\frac12}\bm{x}''\|_{L^2} \leq M$, then the solution $\tau$ to the boundary value problem [\[TBVP\]](#TBVP){reference-type="eqref" reference="TBVP"} satisfies $$\|s^\alpha\tau'\|_{L^p} \leq C(|a|+\|s^{\alpha+\frac{1}{p}}h\|_{L^1})$$ for any $p\in[1,\infty]$ and any $\alpha\geq0$ satisfying $\alpha+\frac{1}{p}\leq1$.*
*Proof.* The case $p=\infty$ has already been proved in Lemma [**Lemma** 9](#lem:EstSolBVP2){reference-type="ref" reference="lem:EstSolBVP2"}, so that we assume $p<\infty$. It follows from [\[Sol\'BVP\]](#Sol'BVP){reference-type="eqref" reference="Sol'BVP"} and Lemmas [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"} and [**Lemma** 6](#lem:EstPsi){reference-type="ref" reference="lem:EstPsi"} that $$\begin{aligned}
s^\alpha|\tau'(s)|
&\lesssim |a|s^\alpha + s^\alpha|\psi'(s)|\int_0^s\sigma|h(\sigma)| \mathrm{d}\sigma
+ s^\alpha\int_s^1|h(\sigma)| \mathrm{d}\sigma \\
&\lesssim |a|s^\alpha + s^{1-\frac{1}{p}}|\psi'(s)| \int_0^s \sigma^{\alpha+\frac{1}{p}}|h(\sigma)| \mathrm{d}\sigma
+ s^\alpha\int_s^1|h(\sigma)| \mathrm{d}\sigma.\end{aligned}$$ Here, in view of [\[PfEstPsi\]](#PfEstPsi){reference-type="eqref" reference="PfEstPsi"} we have $|\psi'(s)| \lesssim \int_s^1 |\bm{x}''(\sigma)|^2 \mathrm{d}\sigma$, so that by Lemma [**Lemma** 10](#lem:CalIneq){reference-type="ref" reference="lem:CalIneq"} we get $$\|s^{1-\frac{1}{p}}\psi'\|_{L^p} \lesssim \|s|\bm{x}''|^2\|_{L^1} = \|s^{\frac12}\bm{x}''\|_{L^2}^2 \lesssim 1.$$ Therefore, by Lemma [**Lemma** 10](#lem:CalIneq){reference-type="ref" reference="lem:CalIneq"} again we obtain $$\|s^\alpha\tau'\|_{L^p}
\lesssim |a| + \biggl( 1+ \frac{1}{(\alpha+\frac{1}{p})^\frac{1}{p}} \biggr) \|s^{\alpha+\frac{1}{p}}h\|_{L^1}.$$ Since $(\alpha+\frac{1}{p})^\frac{1}{p} \geq (\frac{1}{p})^\frac{1}{p} = \exp(-\tfrac{\log p}{p}) \geq \exp(-\exp(-1))$, we obtain the desired estimate. ◻
# Function spaces {#sect:FS}
In this section, we present several properties related to the weighted Sobolev space $X^m$, which is equipped with the norm $\|\cdot\|_{X^m}$ defined by [\[WSS\]](#WSS){reference-type="eqref" reference="WSS"}.
## Weighted Sobolev space $X^m$
This function space $X^m$ is essentially the same as the one introduced by M. Reeken [@Reeken1979-1]. By the definition of the norm, it holds obviously that $\|u\|_{X^m} \leq \|u\|_{X^{m+1}}$ for $m=0,1,2,\ldots$. Moreover, this space is characterized as follows. Let $D$ be the unit disc in $\mathbb{R}^2$ and $H^m(D)$ the $L^2$ Sobolev space of order $m$ on $D$. For a function $u$ defined in the open interval $(0,1)$, we define $u^\sharp(x,y)=u(x^2+y^2)$, which is a function on $D$.
****Lemma** 12** ([@Takayama2018 Proposition 3.2]). *Let $m$ be a non-negative integer. The map $X^m\ni u \mapsto u^\sharp \in H^m(D)$ is bijective and it holds that $\|u\|_{X^m} \simeq \|u^\sharp\|_{H^m(D)}$ for any $u\in X^m$.*
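To see where the weights in [\[WSS\]](#WSS){reference-type="eqref" reference="WSS"} come from, it may help to check the case $m=1$ directly: since $\partial_xu^\sharp=2xu'(x^2+y^2)$ and $\partial_yu^\sharp=2yu'(x^2+y^2)$, passing to polar coordinates and substituting $s=r^2$ gives $$\|u^\sharp\|_{L^2(D)}^2=\pi\|u\|_{L^2}^2, \qquad \|\nabla u^\sharp\|_{L^2(D)}^2=4\pi\|s^{\frac12}u'\|_{L^2}^2,$$ so that $\|u^\sharp\|_{H^1(D)}^2\simeq\|u\|_{L^2}^2+\|s^{\frac12}u'\|_{L^2}^2=\|u\|_{X^1}^2$.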
****Lemma** 13**. *For a non-negative integer $m$, we have $\|(su')'\|_{X^m} \lesssim \|u\|_{X^{m+2}}$.*
*Proof.* Put $(A_2u)=-(su'(s))'$. Since $\Delta u^\sharp = -4 (A_2u)^\sharp$, by Lemma [**Lemma** 12](#lem:NormEq){reference-type="ref" reference="lem:NormEq"} we have $\|A_2u\|_{X^m} \simeq \|\Delta u^\sharp\|_{H^m(D)} \leq \|u^\sharp\|_{H^{m+2}(D)} \simeq \|u\|_{X^{m+2}}$, so that we obtain the desired estimate. ◻
****Lemma** 14**. *For any $q\in[1,\infty)$ and any $\epsilon>0$ there exist positive constants $C_q=C(q)$ and $C_\epsilon=C(\epsilon)$ such that for any $u\in X^1$ we have $\|u\|_{L^q} \leq C_q\|u\|_{X^1}$ and $\|s^\epsilon u\|_{L^\infty} \leq C_\epsilon\|u\|_{X^1}$.*
*Proof.* It is easy to see that $\|u^\sharp\|_{L^q(D)}=\pi^\frac1q\|u\|_{L^q}$ for any $q\in[1,\infty]$. Therefore, the Sobolev embedding theorem $H^1(D) \hookrightarrow L^q(D)$ and Lemma [**Lemma** 12](#lem:NormEq){reference-type="ref" reference="lem:NormEq"} give the first estimate of the lemma. As for the second one, we let $q\geq\frac32$. For $s\in[0,1]$ we see that $$\begin{aligned}
s|u(s)|^q
&= \int_0^s(\sigma|u(\sigma)|^q)'\mathrm{d}\sigma \\
&\leq \int_0^1( |u(\sigma)|^q + q|u(\sigma)|^{q-1}\sigma|u'(\sigma)| )\mathrm{d}\sigma \\
&\leq \|u\|_{L^{2(q-1)}}^{q-1}(\|u\|_{L^2}+q\|\sigma u'\|_{L^2}) \\
&\lesssim \|u\|_{X^1}^q,\end{aligned}$$ where we used the Cauchy--Schwarz inequality and the first estimate of the lemma. This gives $s^\frac1q|u(s)| \lesssim \|u\|_{X^1}$. Since $q\geq\frac32$ is arbitrary, we obtain the second estimate of the lemma. ◻
****Remark** 15**.
1. *In Lemma [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} we cannot take $q=\infty$ or equivalently $\epsilon=0$. In other words, the embedding $X^1 \hookrightarrow L^\infty(0,1)$ does not hold. A counter-example is given by $u(s)=\log(\log(\frac{\mathrm{e}}{s}))$; a short verification is sketched after this remark.*
2. *Unlike the standard Sobolev spaces, $u\in X^{m+1}$ does not necessarily imply $u'\in X^m$. A counter-example is given by $u(s)=\int_0^s\log(\log(\frac{\mathrm{e}}{\sigma}))\mathrm{d}\sigma$, which is in $X^3$. However, in view of the embedding $X^2 \hookrightarrow L^\infty(0,1)$ we easily check that its first derivative $u'$ is not in $X^2$.*
3. *Alternatively, we have $\|su'\|_{X^m} \leq \|u\|_{X^{m+1}}$ and $\|u'\|_{X^m} \leq \|u\|_{X^{m+2}}$.*
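As for the counter-example in the first item of Remark [**Remark** 15](#remark:embedding){reference-type="ref" reference="remark:embedding"}, a short verification reads as follows: the function $u(s)=\log(\log(\frac{\mathrm{e}}{s}))$ is unbounded as $s\to+0$, whereas $u'(s)=-\bigl(s\log\frac{\mathrm{e}}{s}\bigr)^{-1}$, so that the substitution $t=\log\frac{\mathrm{e}}{s}$ gives $$\|s^{\frac12}u'\|_{L^2}^2=\int_0^1\frac{\mathrm{d}s}{s(\log\frac{\mathrm{e}}{s})^2}=\int_1^\infty\frac{\mathrm{d}t}{t^2}=1<\infty.$$ Since also $\|u\|_{L^2}<\infty$, we have $u\in X^1\setminus L^\infty(0,1)$.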
****Lemma** 16**. *For a non-negative integer $j$, we have $\|\partial_s^j u\|_{L^\infty} \lesssim \|u\|_{X^{2j+2}}$.*
*Proof.* Since $X^2 \hookrightarrow H^1 \hookrightarrow L^\infty$, we have $\|u\|_{L^\infty} \lesssim \|u\|_{X^2}$. This together with $\|u'\|_{X^m} \leq \|u\|_{X^{m+2}}$ yields the desired estimate. ◻
The following lemma gives a weighted $L^p$ estimate of derivatives of functions defined in $(0,1)$ in terms of the $X^m$ norm.
****Lemma** 17**. *For a positive integer $k$ there exists a positive constant $C$ such that for any $p\in[2,\infty]$ and any $j=1,2,\ldots,k$ we have $$\begin{cases}
\|s^{j-\frac12-\frac1p}\partial_s^{k+j-1}u\|_{L^p}
\leq C\bigl( \|s^{j-1}\partial_s^{k+j-1}u\|_{L^2} + \|s^{j-1}\partial_s^{k+j-1}u\|_{L^2}^{\frac12+\frac1p}
\|s^{j}\partial_s^{k+j}u\|_{L^2}^{\frac12-\frac1p} \bigr), \\[1ex]
\|s^{j-\frac1p}\partial_s^{k+j}u\|_{L^p}
\leq C\bigl( \|s^{j-\frac12}\partial_s^{k+j}u\|_{L^2}
+ \|s^{j-\frac12}\partial_s^{k+j}u\|_{L^2}^{\frac12+\frac1p} \|s^{j+\frac12}\partial_s^{k+j+1}u\|_{L^2}^{\frac12-\frac1p} \bigr).
\end{cases}$$ Particularly, $$\begin{cases}
\displaystyle
\sum_{j=1}^k\|s^{j-\frac12-\frac1p}\partial_s^{k+j-1}u\|_{L^p} \leq C\|u\|_{X^{2k}}, \\
\displaystyle
\sum_{j=1}^k\|s^{j-\frac1p}\partial_s^{k+j}u\|_{L^p} \leq C\|u\|_{X^{2k+1}}.
\end{cases}$$*
*Proof.* It is sufficient to show the first two estimates. For the first one, we see that $$\begin{aligned}
s^{2j-1}|\partial_s^{k+j-1}u(s)|^2
&= \int_0^s (\sigma^{2j-1}|\partial_\sigma^{k+j-1}u(\sigma)|^2)' \mathrm{d}\sigma \\
&\lesssim \int_0^1\{ \sigma^{2(j-1)}|\partial_\sigma^{k+j-1}u(\sigma)|^2
+ \sigma^{j-1}|\partial_\sigma^{k+j-1}u(\sigma)| \sigma^j|\partial_\sigma^{k+j}u(\sigma)| \}\mathrm{d}\sigma \\
&\leq \|\sigma^{j-1}\partial_\sigma^{k+j-1}u\|_{L^2}^2
+ \|\sigma^{j-1}\partial_\sigma^{k+j-1}u\|_{L^2}\|\sigma^{j}\partial_\sigma^{k+j}u\|_{L^2},\end{aligned}$$ which shows the first estimate in the case $p=\infty$. The case $p=2$ is trivial. Therefore, interpolating the estimates in the cases $p=2$ and $p=\infty$ we obtain the first estimate in the case $2\leq p\leq \infty$. The second estimate can be proved in the same way so that we omit the proof. ◻
****Lemma** 18**. *Let $m$ be a non-negative integer. Then, we have $$\begin{cases}
\|uv\|_{L^2} \lesssim \|u\|_{X^1} \|v\|_{X^1}, \\
\|uv\|_{X^m} \lesssim \|u\|_{X^{m \vee 2}} \|v\|_{X^m}.
\end{cases}$$*
*Proof.* These estimates follow directly from Lemma [**Lemma** 12](#lem:NormEq){reference-type="ref" reference="lem:NormEq"} together with the well-known inequalities $\|UV\|_{L^2(D)} \lesssim \|U\|_{H^1(D)} \|V\|_{H^1(D)}$ and $\|UV\|_{H^m(D)} \lesssim \|U\|_{H^{m\vee2}(D)} \|V\|_{H^m(D)}$. ◻
****Lemma** 19**. *Let $m$ be a non-negative integer, $\Omega$ an open set in $\mathbb{R}^N$, and $F\in C^m(\Omega)$. There exists a positive constant $C=C(m,N)$ such that if $u\in X^m$ takes its value in a compact set $K$ in $\Omega$, then we have $$\|F(u)\|_{X^m} \leq C\|F\|_{C^m(K)}(1+\|u\|_{X^m})^m.$$ If, in addition, $u$ depends also on time $t$, then we have also $$\begin{cases}
|\!|\!| F(u(t)) |\!|\!|_m \leq C\|F\|_{C^m(K)}(1+|\!|\!| u(t) |\!|\!|_m)^m, \\
|\!|\!| F(u(t)) |\!|\!|_{m,*} \leq C\|F\|_{C^m(K)}(1+|\!|\!| u(t) |\!|\!|_{m,*})^m.
\end{cases}$$*
*Proof.* It is well-known that if $U\in H^m(D)$ takes its value in a compact set $K$, then we have $\|F(U)\|_{H^m(D)} \leq C\|F\|_{C^m(K)}(1+\|U\|_{H^m(D)})^m$. This together with Lemma [**Lemma** 12](#lem:NormEq){reference-type="ref" reference="lem:NormEq"} implies the first estimate of the lemma. Similarly, we can obtain the latter estimates. ◻
****Lemma** 20**. *Let $k$ be a non-negative integer. It holds that $$\begin{aligned}
& \|s^k[\partial_s^{2k+1},u]v\|_{L^2} \lesssim
\begin{cases}
\min\{ \|u'\|_{L^\infty}\|v\|_{L^2}, \|u'\|_{L^2}\|v\|_{L^\infty} \} &\mbox{for}\quad k=0, \\
\|u'\|_{X^{2k}}\|v\|_{X^{2k}} &\mbox{for}\quad k\geq1,
\end{cases} \\
& \|s^{k+\frac12}[\partial_s^{2k+2},u]v\|_{L^2} \lesssim
\begin{cases}
\min\{ \|u'\|_{X^2}\|v\|_{X^1}, \|u'\|_{X^1}\|v\|_{X^2} \} &\mbox{for}\quad k=0, \\
\|u'\|_{X^{2k+1}}\|v\|_{X^{2k+1}} &\mbox{for}\quad k\geq1.
\end{cases}\end{aligned}$$*
*Proof.* We first prove the first estimate of the lemma. The case $k=0$ is trivial, so that we consider the case $k\geq1$. We evaluate it as $$\begin{aligned}
\|s^k[\partial_s^{2k+1},u]v\|_{L^2} \lesssim \|s^k\partial_s^{2k}u'\|_{L^2}\|v\|_{L^\infty}
+ \sum_{k_1+k_2=2k, k_1\leq 2k-1}\|s^{\alpha_1}\partial_s^{k_1}u'\|_{L^\infty} \|s^{\alpha_2}\partial_s^{k_2}v\|_{L^2},\end{aligned}$$ where $\alpha_1$ and $\alpha_2$ should be taken so that $\alpha_1+\alpha_2 \leq k$. We choose these indices as $$\alpha_1 =
\begin{cases}
0 &\mbox{if}\quad k_1\leq k-1, \\
k_1-k+\frac12 &\mbox{if}\quad k_1\geq k,
\end{cases}
\qquad
\alpha_2 =
\begin{cases}
0 &\mbox{if}\quad k_2\leq k, \\
k_2-k &\mbox{if}\quad k_2\geq k+1.
\end{cases}$$ Then, we see that these indices satisfy the desired property. Moreover, by Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"} we have $\|s^{\alpha_1}\partial_s^{k_1}u'\|_{L^\infty} \lesssim \|u'\|_{X^{2k}}$ and $\|s^{\alpha_2}\partial_s^{k_2}v\|_{L^2} \lesssim \|v\|_{X^{2k}}$. These estimates together with Lemma [**Lemma** 16](#lem:embedding2){reference-type="ref" reference="lem:embedding2"} with $j=0$ yield the first estimate of the lemma.
We then prove the second estimate of the lemma. In the case $k=0$, by Lemmas [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"} and [**Lemma** 16](#lem:embedding2){reference-type="ref" reference="lem:embedding2"} with $j=0$ we see that $$\begin{aligned}
\|s^\frac12[\partial_s^2,u]v\|_{L^2}
&\leq \|s^\frac12u''\|_{L^\infty}\|v\|_{L^2} + 2\|u'\|_{L^\infty}\|s^\frac12v'\|_{L^2} \\
&\lesssim \|u'\|_{X^2} \|v\|_{X^1}, \\
\|s^\frac12[\partial_s^2,u]v\|_{L^2}
&\leq \|s^\frac12u''\|_{L^2}\|v\|_{L^\infty} + 2\|u'\|_{L^2}\|s^\frac12v'\|_{L^\infty} \\
&\lesssim \|u'\|_{X^1} \|v\|_{X^2},\end{aligned}$$ which yield the estimate in the case $k=0$. In the case $k\geq1$, we evaluate it as $$\|s^{k+\frac12}[\partial_s^{2k+2},u]v\|_{L^2}
\lesssim \|s^{k+\frac12}\partial_s^{2k+1}u'\|_{L^2} \|v\|_{L^\infty}
+\sum_{k_1+k_2=2k+1,k_1\leq 2k} \|s^{\alpha_1}\partial_s^{k_1}u'\|_{L^\infty} \|s^{\alpha_2}\partial_s^{k_2}v\|_{L^2},$$ where $\alpha_1$ and $\alpha_2$ should be taken so that $\alpha_1+\alpha_2 \leq k+\frac12$. We choose these indices as $$\alpha_1 =
\begin{cases}
0 &\mbox{if}\quad k_1\leq k-1, \\
\frac12 &\mbox{if}\quad k_1=k, \\
k_1-k &\mbox{if}\quad k_1\geq k+1,
\end{cases}
\qquad
\alpha_2 =
\begin{cases}
0 &\mbox{if}\quad k_2\leq k, \\
k_2-k-\frac12 &\mbox{if}\quad k_2\geq k+1.
\end{cases}$$ Then, we see that these indices satisfy the desired property. Here, by Lemmas [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} with $\epsilon=\frac12$ we have $\|s^\frac12\partial_s^ku'\|_{L^\infty} \lesssim \|\partial_s^ku'\|_{X^1} \leq \|u'\|_{X^{2k+1}}$. Therefore, by Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"} we obtain $\|s^{\alpha_1}\partial_s^{k_1}u'\|_{L^\infty} \lesssim \|u'\|_{X^{2k+1}}$ and $\|s^{\alpha_2}\partial_s^{k_2}v\|_{L^2} \lesssim \|v\|_{X^{2k+1}}$. These estimates yield the second estimate of the lemma. ◻
## Weighted Sobolev space $Y^m$
For a non-negative integer $m$ we define another weighted Sobolev space $Y^m$ as the set of all functions $u$ defined in the open interval $(0,1)$ equipped with the norm $\|\cdot\|_{Y^m}$ defined by $$\|u\|_{Y^m}^2 =
\begin{cases}
\|s^\frac12 u\|_{L^2}^2 &\mbox{for}\quad m=0, \\
\displaystyle
\|u\|_{H^k}^2 + \sum_{j=1}^{k+1}\|s^j\partial_s^{k+j}u\|_{L^2}^2 &\mbox{for}\quad m=2k+1, \\
\displaystyle
\|u\|_{H^k}^2 + \sum_{j=1}^{k+2}\|s^{j-\frac12}\partial_s^{k+j}u\|_{L^2}^2 &\mbox{for}\quad m=2k+2.
\end{cases}$$ Obviously, it holds that $\|u\|_{Y^m} \leq \|u\|_{Y^{m+1}}$ and $\|u\|_{Y^m} \leq \|u\|_{X^m} \leq \|u\|_{Y^{m+1}}$ for $m=0,1,2,\ldots$. This function space $Y^m$ is introduced so that the identity $$\label{Ym1}
\|u\|_{X^{m+1}}^2 = \|u\|_{L^2}^2 + \|u'\|_{Y^m}^2$$ holds for any $m=0,1,2,\ldots$. Particularly, $u\in X^{m+1}$ implies $u'\in Y^m$. Note also that we have the identity $$\label{Ym2}
\|u\|_{Y^m}^2 =
\begin{cases}
\displaystyle
\|u\|_{X^{2k}}^2 + \|s^{k+1}\partial_s^{2k+1}u\|_{L^2}^2 &\mbox{for}\quad m=2k+1, \\
\displaystyle
\|u\|_{X^{2k+1}}^2 + \|s^{k+\frac32}\partial_s^{2k+2}u\|_{L^2}^2 &\mbox{for}\quad m=2k+2.
\end{cases}$$
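For orientation, applying the identity [\[Ym1\]](#Ym1){reference-type="eqref" reference="Ym1"} together with the above definition of $\|\cdot\|_{Y^m}$ yields explicit expressions for the lowest-order norms, which are the ones appearing most frequently in the energy estimates below: $$\begin{aligned}
\|u\|_{X^1}^2 &= \|u\|_{L^2}^2 + \|s^\frac12 u'\|_{L^2}^2, \\
\|u\|_{X^2}^2 &= \|u\|_{L^2}^2 + \|u'\|_{L^2}^2 + \|s u''\|_{L^2}^2, \\
\|u\|_{X^3}^2 &= \|u\|_{L^2}^2 + \|u'\|_{L^2}^2 + \|s^\frac12 u''\|_{L^2}^2 + \|s^\frac32 u'''\|_{L^2}^2.\end{aligned}$$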
****Lemma** 21**. *It holds that $$\|u'v'\|_{Y^m} \lesssim
\begin{cases}
\|u\|_{X^{m+2}} \|v\|_{X^{m+2}} &\mbox{for}\quad m=0,1, \\
\|u\|_{X^{m+1\vee4}} \|v\|_{X^{m+1}} &\mbox{for}\quad m=0,1,2,\ldots.
\end{cases}$$*
*Proof.* By Lemmas [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} and [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}, we see that $$\begin{aligned}
\|u'v'\|_{Y^0}
&\leq \|s^\frac14u'\|_{L^4} \|s^\frac14v'\|_{L^4} \\
&\lesssim \|u\|_{X^2} \|v\|_{X^2}, \end{aligned}$$ $$\begin{aligned}
\|u'v'\|_{Y^1}
&\leq \|u'\|_{L^4} \|v'\|_{L^4} + \|s^\frac12u'\|_{L^\infty}\|s^\frac12v''\|_{L^2} + \|s^\frac12u''\|_{L^2}\|s^\frac12v''\|_{L^\infty} \\
&\lesssim \|u'\|_{X^1} \|v'\|_{X^1} + \|u'\|_{X^1}\|v\|_{X^3} + \|u\|_{X^3}\|v'\|_{X^1} \\
&\lesssim \|u\|_{X^3} \|v\|_{X^3},\end{aligned}$$ which give the first estimate of the lemma.
Similarly, by Lemmas [**Lemma** 16](#lem:embedding2){reference-type="ref" reference="lem:embedding2"} and [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"} we see that $$\begin{aligned}
\|u'v'\|_{Y^0}
&\leq \|u'\|_{L^\infty} \|s^\frac12v'\|_{L^2} \\
&\lesssim \|u\|_{X^4} \|v\|_{X^1}, \end{aligned}$$ $$\begin{aligned}
\|u'v'\|_{Y^1}
&\leq \|u'\|_{L^\infty}( \|v'\|_{L^2} + \|sv''\|_{L^2}) + \|s^\frac12u''\|_{L^\infty}\|v'\|_{L^2} \\
&\lesssim \|u\|_{X^4} \|v\|_{X^2},\end{aligned}$$ $$\begin{aligned}
\|u'v'\|_{Y^2}
&\leq \|u'\|_{L^\infty}( \|v'\|_{L^2} + \|s^\frac12v''\|_{L^2} + \|sv'''\|_{L^2} ) \\
&\quad\;
+ \|s^\frac12u''\|_{L^\infty}( \|v'\|_{L^2} + \|s^\frac12v''\|_{L^2} ) + \|s^\frac32u'''\|_{L^\infty}\|v'\|_{L^2}\\
&\lesssim \|u\|_{X^4} \|v\|_{X^3},\end{aligned}$$ which give the second estimate for $m=0,1,2$.
We then consider the case $m=2k+1$ with $k\geq1$. By [\[Ym2\]](#Ym2){reference-type="eqref" reference="Ym2"} and Lemmas [**Lemma** 18](#lem:Algebra){reference-type="ref" reference="lem:Algebra"} and [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}, we see that $$\begin{aligned}
\|u'v'\|_{Y^{2k+1}}
&\leq \|u'v'\|_{X^{2k}} + \|s^{k+1}\partial_s^{2k+1}(u'v')\|_{L^2} \\
&\lesssim \|u'\|_{X^{2k}} \|v'\|_{X^{2k}}
+ \sum_{j=0}^k \bigl( \|s^j\partial_s^{j+1}u\|_{L^\infty} \|s^{k+1-j}\partial_s^{2k+2-j}v\|_{L^2} \\
&\qquad
+ \|s^j\partial_s^{j+1}v\|_{L^\infty} \|s^{k+1-j}\partial_s^{2k+2-j}u\|_{L^2} \bigr) \\
&\lesssim \|u\|_{X^{2k+2}} \|v\|_{X^{2k+2}},\end{aligned}$$ which gives the second estimate for $m=2k+1$.
Finally, we consider the case $m=2k$ with $k\geq2$. Similarly as above, we see that $$\begin{aligned}
\|u'v'\|_{Y^{2k}}
&\lesssim \|u'v'\|_{X^{2k-1}} + \|s^{k+\frac12}\partial_s^{2k}(u'v')\|_{L^2} \\
&\lesssim \|u'\|_{X^{2k-1}} \|v'\|_{X^{2k-1}}
+ \sum_{j=0}^k \bigl( \|s^j\partial_s^{j+1}u\|_{L^\infty} \|s^{k+\frac12-j}\partial_s^{2k+1-j}v\|_{L^2} \\
&\qquad
+ \|s^j\partial_s^{j+1}v\|_{L^\infty} \|s^{k+\frac12-j}\partial_s^{2k+1-j}u\|_{L^2} \bigr) \\
&\lesssim \|u\|_{X^{2k+1}} \|v\|_{X^{2k+1}},\end{aligned}$$ which gives the second estimate for $m=2k$. The proof is complete. ◻
****Lemma** 22**. *If $\tau|_{s=0}=0$, then we have $$\|\tau u''v''\|_{Y^m} \lesssim
\begin{cases}
\|\tau'\|_{L^2} \|u\|_{X^4} \|v\|_{X^3} &\mbox{for}\quad m=0, \\
\|\tau'\|_{L^\infty} \min\{ \|u\|_{X^4} \|v\|_{X^2}, \|u\|_{X^3} \|v\|_{X^3} \} &\mbox{for}\quad m=0, \\
\|\tau'\|_{L^2} \|u\|_{X^4} \|v\|_{X^4} &\mbox{for}\quad m=1, \\
\|\tau'\|_{L^\infty} \|u\|_{X^4} \|v\|_{X^3} &\mbox{for}\quad m=1, \\
\|\tau'\|_{L^\infty \cap X^{m-1}} \|u\|_{X^{m+2}} \|v\|_{X^{m+2}} &\mbox{for}\quad m\geq2.
\end{cases}$$*
*Proof.* We first note that under the assumption $\tau|_{s=0}=0$ we have $\tau(s)=\int_0^s\tau'(\sigma)\mathrm{d}\sigma$, so that $|\tau(s)| \leq s^{1-\frac1p}\|\tau'\|_{L^p}$ for $1\leq p\leq\infty$. Therefore, we see that $$\|\tau u''v''\|_{Y^0} \leq \min\{ \|\tau'\|_{L^2}\|s u''v''\|_{L^2},\|\tau'\|_{L^\infty}\|s^\frac32 u''v''\|_{L^2} \}$$ and that by Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"} $$\begin{aligned}
\|s u''v''\|_{L^2}
&\leq \|s^\frac12u''\|_{L^\infty} \|s^\frac12v''\|_{L^2} \\
&\lesssim \|u\|_{X^4}\|v\|_{X^3}\end{aligned}$$ and $$\begin{aligned}
\|s^\frac32 u''v''\|_{L^2}
&\leq \min\{ \|s^\frac12u''\|_{L^\infty} \|sv''\|_{L^2}, \|su''\|_{L^\infty} \|s^\frac12v''\|_{L^2} \} \\
&\lesssim \min\{ \|u\|_{X^4} \|v\|_{X^2}, \|u\|_{X^3} \|v\|_{X^3} \},\end{aligned}$$ which give the estimates of the lemma in the case $m=0$. Similarly, we see that $$\begin{aligned}
\|\tau u''v''\|_{Y^1}
&\leq \|\tau'\|_{L^2}( \|s^\frac12u''v''\|_{L^2} + \|s^\frac32 u'''v''\|_{L^2} + \|s^\frac32u''v'''\|_{L^2} ) \\
&\leq \|\tau'\|_{L^2}( \|s^\frac12u''\|_{L^\infty} \|v''\|_{L^2}
+ \|s^\frac32u'''\|_{L^\infty}\|v''\|_{L^2} + \|u''\|_{L^2}\|s^\frac32v'''\|_{L^\infty} ) \\
&\lesssim \|\tau'\|_{L^2} \|u\|_{X^4} \|v\|_{X^4}, \end{aligned}$$ and $$\begin{aligned}
\|\tau u''v''\|_{Y^1}
&\leq \|\tau'\|_{L^\infty}( \|su''v''\|_{L^2} + \|s^2u'''v''\|_{L^2} + \|s^2u''v'''\|_{L^2} ) \\
&\leq \|\tau'\|_{L^\infty}( \|s^\frac12u''\|_{L^\infty} \|s^\frac12v''\|_{L^2}
+ \|s^\frac32u'''\|_{L^\infty}\|s^\frac12v''\|_{L^2} + \|s^\frac12u''\|_{L^\infty}\|s^\frac32v'''\|_{L^2} ) \\
&\lesssim \|\tau'\|_{L^\infty} \|u\|_{X^4} \|v\|_{X^3}, \end{aligned}$$ which give the estimates of the lemma in the case $m=1$.
We then consider the case $m=2k$ with $k\geq1$. We have $$\begin{aligned}
\|\tau u''v''\|_{Y^{2k}}
&\lesssim \sum_{j_0+j_1+j_2\leq k-1}\|(\partial_s^{j_0}\tau)(\partial_s^{j_1+2}u)(\partial_s^{j_2+2}v)\|_{L^2} \\
&\quad\;
+ \sum_{j=1}^{k+1}\sum_{j_0+j_1+j_2=k+j-1} \|s^{j-\frac12}(\partial_s^{j_0}\tau)(\partial_s^{j_1+2}u)(\partial_s^{j_2+2}v)\|_{L^2}.\end{aligned}$$ We first evaluate $I_1(j_0,j_1,j_2)=\|(\partial_s^{j_0}\tau)(\partial_s^{j_1+2}u)(\partial_s^{j_2+2}v)\|_{L^2}$, where $j_0+j_1+j_2\leq k-1$. In the following calculations, we use frequently Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}.
1. The case $(j_0,j_1,j_2)=(0,k-1,0),(0,0,k-1)$. $$\begin{aligned}
I_1(0,k-1,0)
&\leq \|\tau'\|_{L^2} \|\partial_s^{k+1}u\|_{L^2}\|s^\frac12v''\|_{L^\infty} \\
&\lesssim \|\tau'\|_{L^2} \|u\|_{H^{k+1}} \|v\|_{X^4} \\
&\lesssim \|\tau'\|_{L^2} \|u\|_{X^{2k+2}} \|v\|_{X^{2k+2}}.\end{aligned}$$ Similar estimate holds for $I_1(0,0,k-1)$.
2. The other cases. Since $j_1,j_2 \leq k-2$, we have $$\begin{aligned}
I_1(j_0,j_1,j_2)
&\lesssim \|\partial_s^{j_0}\tau\|_{L^\infty} \|\partial_s^{j_1+2}u\|_{H^1} \|\partial_s^{j_2+2}v\|_{H^1} \\
&\lesssim \|\tau\|_{W^{k-1,\infty}} \|u\|_{H^{k+1}} \|v\|_{H^{k+1}} \\
&\lesssim \|\tau'\|_{X^{2(k-1)}} \|u\|_{X^{2k+2}} \|v\|_{X^{2k+2}}.\end{aligned}$$
In any of these cases, we have $I_1(j_0,j_1,j_2) \lesssim \|\tau'\|_{X^{2(k-1)}} \|u\|_{X^{2k+2}} \|v\|_{X^{2k+2}}$.
We then evaluate $I_2(j_0,j_1,j_2;j)=\|s^{j-\frac12}(\partial_s^{j_0}\tau)(\partial_s^{j_1+2}u)(\partial_s^{j_2+2}v)\|_{L^2}$, where $1\leq j\leq k+1$ and $j_0+j_1+j_2=k+j-1$. In the following calculations, we will use Lemmas [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} and [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}.
1. The case $j=k+1$ and $j_1=j_2=0$. $$\begin{aligned}
I_2(2k,0,0;k+1)
&\leq \|s^{k-\frac12}\partial_s^{2k}\tau\|_{L^2} \|s^\frac12u''\|_{L^\infty} \|s^\frac12v''\|_{L^\infty} \\
&\lesssim \|\tau'\|_{X^{2k-1}} \|u\|_{X^4} \|v\|_{X^4} \\
&\leq \|\tau'\|_{X^{2k-1}} \|u\|_{X^{2(k+1)}} \|v\|_{X^{2k+2}}.\end{aligned}$$
2. The case $(j_0,j_1,j_2)=(0,k+j-1,0),(0,0,k+j-1)$. $$\begin{aligned}
I_2(0,k+j-1,0;j)
&\leq \|\tau'\|_{L^\infty} \|s^j\partial_s^{k+j+1}u\|_{L^2} \|s^\frac12v''\|_{L^\infty} \\
&\lesssim \|\tau'\|_{L^\infty} \|u\|_{X^{2k+2}} \|v\|_{X^{2k+2}}.\end{aligned}$$ Similar estimate holds for $I_2(0,0,k+j-1;j)$.
3. The case $j_0=0$ and $j_1,j_2\ne k+j-1$. We evaluate it as $$I_2(0,j_1,j_2;j) \leq \|\tau'\|_{L^\infty} \|s^{\alpha_1}\partial_s^{j_1+2}u \|_{L^\infty} \|s^{\alpha_2}\partial_s^{j_2+2}v\|_{L^2},$$ where $\alpha_1$ and $\alpha_2$ should be chosen so that $\alpha_1+\alpha_2\leq j+\frac12$. We choose these indices as $$\alpha_1 =
\begin{cases}
0 &\mbox{if}\quad j_1+2 \leq k, \\
j_1-k+\frac32 &\mbox{if}\quad j_1+2 \geq k+1,
\end{cases}
%
\qquad
\alpha_2 =
\begin{cases}
0 &\mbox{if}\quad j_2+2 \leq k+1, \\
j_2-k+1 &\mbox{if}\quad j_2+2 \geq k+2.
\end{cases}$$ Then, we see that $\alpha_1$ and $\alpha_2$ satisfy in fact $\alpha_1+\alpha_2\leq j+\frac12$. Moreover, we have $\|s^{\alpha_1}\partial_s^{j_1+2}u \|_{L^\infty} \lesssim \|u\|_{X^{2k+2}}$ and $\|s^{\alpha_2}\partial_s^{j_2+2}v\|_{L^2} \lesssim \|v\|_{X^{2k+2}}$.
4. The other cases. In view of $1\leq j_0\leq 2k-1$ and $j_1,j_2\leq k+j-2$, we evaluate it as $$I_2(j_0,j_1,j_2;j) \leq \|s^{\alpha_0}\partial_s^{j_0}\tau\|_{L^\infty} \|s^{\alpha_1}\partial_s^{j_1+2}u \|_{L^\infty} \|s^{\alpha_2}\partial_s^{j_2+2}v\|_{L^2},$$ where $\alpha_0$, $\alpha_1$, and $\alpha_2$ should be chosen so that $\alpha_0+\alpha_1+\alpha_2\leq j-\frac12$. We choose the indices $\alpha_0$ as $$\alpha_0 =
\begin{cases}
0 &\mbox{if}\quad j_0=1 \mbox{ or } 1\leq j_0\leq k-1, \\
\frac12 &\mbox{if}\quad j_0=k\geq2, \\
j_0-k &\mbox{if}\quad j_0\geq k+1,
\end{cases}$$ $\alpha_1$ and $\alpha_2$ as in the above case (iii). Then, we see that these indices satisfy in fact $\alpha_0+\alpha_1+\alpha_2\leq j-\frac12$. Moreover, we have $\|s^{\alpha_0}\partial_s^{j_0}\tau\|_{L^\infty} \lesssim \|\tau'\|_{L^\infty \cap X^{2k-1}}$, $\|s^{\alpha_1}\partial_s^{j_1+2}u \|_{L^\infty} \lesssim \|u\|_{X^{2(k+1)}}$ and $\|s^{\alpha_2}\partial_s^{j_2+2}v\|_{L^2} \lesssim \|v\|_{X^{2k+2}}$.
In any of these cases, we have $I_2(j_0,j_1,j_2;j) \lesssim \|\tau'\|_{L^\infty \cap X^{m-1}} \|u\|_{X^{2k+2}} \|v\|_{X^{2k+2}}$. To summarize, we obtain the desired estimate of the lemma in the case $m=2k$.
The case $m=2k+1$ with $k\geq1$ can be proved in the same way as above, so we omit the proof in that case. ◻
## Averaging operator $\mathscr{M}$ {#sect:AOM}
For a function $u$ defined in the open interval $(0,1)$ we define an averaging operator $\mathscr{M}$ by $$\label{defM}
(\mathscr{M}u)(s) = \frac{1}{s}\int_0^su(\sigma) \mathrm{d}\sigma = \int_0^1u(sr) \mathrm{d}r.$$ Then, we have $$\label{derM}
\partial_s^j(\mathscr{M}u)(s)
= \int_0^1r^j(\partial_\sigma^ju)(sr) \mathrm{d}r
= \frac{1}{s^{j+1}}\int_0^s \sigma^j\partial_\sigma^ju(\sigma) \mathrm{d}\sigma.$$ We will evaluate this in a weighted $L^p$ space.
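As a quick check, the second expression in [\[derM\]](#derM){reference-type="eqref" reference="derM"} for $j=1$ follows from a single integration by parts: $$\partial_s(\mathscr{M}u)(s)
= \frac{u(s)}{s}-\frac{1}{s^{2}}\int_0^s u(\sigma)\,\mathrm{d}\sigma
= \frac{1}{s^{2}}\Bigl( s\,u(s)-\int_0^s u(\sigma)\,\mathrm{d}\sigma \Bigr)
= \frac{1}{s^{2}}\int_0^s \sigma\,\partial_\sigma u(\sigma)\,\mathrm{d}\sigma,$$ and the general case is obtained by induction on $j$.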
****Lemma** 23**. *Let $1\leq p\leq\infty$. Suppose that $\alpha$ and $\beta$ satisfy $\alpha+1>\beta+\frac1p$. For a function $u$ defined in $(0,1)$ we put $$U_\alpha(s)=\frac{1}{s^{\alpha+1}}\int_0^s\sigma^\alpha u(\sigma) \mathrm{d}\sigma.$$ Then, we have $$\|s^\beta U_\alpha\|_{L^p} \leq \frac{1}{\alpha-\beta+1-\frac1p}\|s^\beta u\|_{L^p}.$$*
*Proof.* Since $s^\beta U_\alpha(s) = \frac{1}{s^{\alpha-\beta+1}}\int_0^s \sigma^{\alpha-\beta}(\sigma^\beta u(\sigma)) \mathrm{d}\sigma$, it is sufficient to show the estimate in the case $\beta=0$. We may assume also that $u(s)$ is non-negative. We first consider the case $p=\infty$, so that we assume $\alpha+1>0$. $$\begin{aligned}
|U_\alpha(s)|
&\leq \frac{1}{s^{\alpha+1}}\int_0^s \sigma^\alpha \mathrm{d}\sigma \|u\|_{L^\infty} = \frac{1}{\alpha+1}\|u\|_{L^\infty},\end{aligned}$$ which shows the estimate in the case $p=\infty$. Therefore, we suppose that $1\leq p<\infty$. By using integration by parts and noting the condition $(\alpha+1)p-1>0$, we see that $$\begin{aligned}
\|U_\alpha\|_{L^p}^p
&= \left[ -\frac{1}{(\alpha+1)p-1}\frac{1}{s^{(\alpha+1)p-1}}\biggl( \int_0^s \sigma^\alpha u(\sigma) \mathrm{d}\sigma \biggr)^p \right]_0^1 \\
&\quad\;
+ \frac{p}{(\alpha+1)p-1}\int_0^1\frac{s^\alpha u(s)}{s^{(\alpha+1)p-1}}
\biggl( \int_0^s \sigma^\alpha u(\sigma) \mathrm{d}\sigma \biggr)^{p-1} \mathrm{d}s \\
&\leq \frac{p}{(\alpha+1)p-1}\int_0^1 u(s)(U_\alpha(s))^{p-1} \mathrm{d}s \\
&\leq \frac{p}{(\alpha+1)p-1}\|u\|_{L^p}\|U_\alpha\|_{L^p}^{p-1},\end{aligned}$$ where we used Hölder's inequality. This shows the desired estimate. ◻
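The constant in Lemma [**Lemma** 23](#lem:EstM){reference-type="ref" reference="lem:EstM"} cannot be improved in the case $p=\infty$ and $\beta=0$: taking $u\equiv1$ on $(0,1)$ gives $$U_\alpha(s)=\frac{1}{s^{\alpha+1}}\int_0^s\sigma^\alpha\,\mathrm{d}\sigma=\frac{1}{\alpha+1}
\quad\mbox{for}\quad 0<s<1,$$ so that $\|U_\alpha\|_{L^\infty}=\frac{1}{\alpha+1}\|u\|_{L^\infty}$ and equality holds in the estimate.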
****Corollary** 24**. *Let $j$ be a non-negative integer, $1\leq p\leq \infty$, and $\beta<j+1-\frac1p$. Then, we have $$\|s^\beta\partial_s^j(\mathscr{M}u)\|_{L^p} \leq \frac{1}{j+1-\beta-\frac1p}\|s^\beta\partial_s^j u\|_{L^p}.$$ Particularly, $\|\mathscr{M}u\|_{X^m} \leq 2\|u\|_{X^m}$ for $m=0,1,2,\ldots$.*
*Proof.* In view of [\[derM\]](#derM){reference-type="eqref" reference="derM"}, the estimate follows directly from Lemma [**Lemma** 23](#lem:EstM){reference-type="ref" reference="lem:EstM"}. ◻
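We also note where the factor $2$ in the last statement of Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} comes from: as suggested by [\[Ym1\]](#Ym1){reference-type="eqref" reference="Ym1"} and [\[Ym2\]](#Ym2){reference-type="eqref" reference="Ym2"}, each term of the $X^m$ norm is of the form $\|s^\beta\partial_s^ju\|_{L^2}$ with $0\leq\beta\leq j$, so that the corresponding constant in the corollary with $p=2$ satisfies $$\frac{1}{j+1-\beta-\frac12}=\frac{1}{(j-\beta)+\frac12}\leq2,$$ the worst case $\beta=j$ being attained by the undifferentiated term $\|u\|_{L^2}$.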
# Equivalence of the systems {#sect:equiv}
In this section we prove Theorem [**Theorem** 4](#th:equiv){reference-type="ref" reference="th:equiv"}, which ensures the equivalence of the original system [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} and the transformed system [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} under the stability condition [\[SC\]](#SC){reference-type="eqref" reference="SC"}.
## Auxiliary estimates of solutions
Taking into account the class [\[SolClass\]](#SolClass){reference-type="eqref" reference="SolClass"} of solutions, we assume that $$\label{SolClassEst}
\begin{cases}
\|\bm{x}'(t)\|_{X^2}+\|\dot{\bm{x}}'(t)\|_{X^1} \leq M, \\
-\bm{g}\cdot\bm{x}'(1,t) \geq c_0 \qquad\mbox{for}\quad 0\leq t\leq T
\end{cases}$$ with positive constants $c_0$ and $M$. Particularly, we have $$\|\bm{x}''(t)\|_{L^2}, \|s\bm{x}'''(t)\|_{L^2}, \|\dot{\bm{x}}'(t)\|_{L^2}, \|s^\frac12\dot{\bm{x}}''(t)\|_{L^2} \leq M
\quad\mbox{for}\quad 0\leq t\leq T.$$
****Lemma** 25**. *Let $\bm{x}$ satisfy the first condition in [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}. Then, we have $$\begin{cases}
\|s^{\frac12-\frac1p}\bm{x}''(t)\|_{L^p} \leq C(M) &\mbox{for}\quad 2\leq p\leq\infty, \\
\|\dot{\bm{x}}'(t)\|_{L^q} \leq C(q,M) &\mbox{for}\quad 1\leq q<\infty, \\
\|s^\epsilon\dot{\bm{x}}'(t)\|_{L^\infty} \leq C(\epsilon,M) &\mbox{for}\quad \epsilon>0.
\end{cases}$$*
*Proof.* The first estimate follows from Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}, and the last two estimates follow from Lemma [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"}. ◻
****Lemma** 26**. *Let $M$ and $c_0$ be positive constants and $q\in[1,\infty)$. There exist positive constants $C_1=C(M,c_0)$ and $C_q=C(M,c_0,q)$ such that if $\bm{x}$ satisfies [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}, then the solution $\tau$ to the boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} satisfies $$C_1^{-1}s \leq \tau(s,t) \leq C_1s, \quad |\tau'(s,t)| \leq C_1, \quad \|\tau''(t)\|_{L^q} \leq C_q$$ for any $(s,t)\in[0,1]\times[0,T]$.*
*Proof.* The first two estimates follow from Lemma [**Lemma** 8](#lem:EstSolBVP1){reference-type="ref" reference="lem:EstSolBVP1"}. As for the last one, we see that $$\begin{aligned}
\|\tau''(t)\|_{L^q}
&\leq \|\tau(t)|\bm{x}''(t)|^2\|_{L^q} + \||\dot{\bm{x}}'(t)|^2\|_{L^q} \\
&\leq C_1\|s^\frac12\bm{x}''(t)\|_{L^{2q}}^2 + \|\dot{\bm{x}}'(t)\|_{L^{2q}}^2,\end{aligned}$$ which together with Lemma [**Lemma** 25](#lem:EstSol1){reference-type="ref" reference="lem:EstSol1"} gives the last estimate. ◻
****Lemma** 27**. *For any positive constants $M$ and $c_0$, there exists a positive constant $C_1=C(M,c_0)$ such that for any solution $(\bm{x},\tau)$ to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[BC\]](#BC){reference-type="eqref" reference="BC"} satisfying [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} we have $\|\ddot{\bm{x}}'(t)\|_{L^2} \leq C_1$ for $0\leq t\leq T$.*
*Proof.* We remind here that $\tau$ is also the solution to the boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}. It follows from the hyperbolic equation for $\bm{x}$ that $\ddot{\bm{x}}'=(\tau\bm{x}')'' = \tau\bm{x}'''+2\tau'\bm{x}''+\tau''\bm{x}'$, so that by Lemma [**Lemma** 26](#lem:EstSol2){reference-type="ref" reference="lem:EstSol2"} we have $|\ddot{\bm{x}}'| \lesssim s|\bm{x}'''|+|\bm{x}''|+|\tau''|$, which yields the desired estimate. ◻
****Lemma** 28**. *For any positive constants $M$ and $c_0$, there exists a positive constant $C_1=C(M,c_0)$ such that for any solution $(\bm{x},\tau)$ to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[BC\]](#BC){reference-type="eqref" reference="BC"} satisfying [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} we have $$|\dot{\tau}(s,t)| \leq C_1s, \quad |\dot{\tau}'(s,t)| \leq C_1$$ for any $(s,t)\in[0,1]\times[0,T]$.*
*Proof.* Note that $\dot{\tau}$ is a solution to the boundary value problem $$\begin{cases}
-\dot{\tau}''+|\bm{x}''|^2\dot{\tau} = h_1 \quad\mbox{in}\quad (0,1), \\
\dot{\tau}(0)=0, \quad \dot{\tau}'(1)=a_1,
\end{cases}$$ where $h_1=2(\dot{\bm{x}}'\cdot\ddot{\bm{x}}')-2(\bm{x}''\cdot\dot{\bm{x}}'')\tau$ and $a_1=-\bm{g}\cdot\dot{\bm{x}}'|_{s=1}$. By Lemmas [**Lemma** 26](#lem:EstSol2){reference-type="ref" reference="lem:EstSol2"} and [**Lemma** 27](#lem:EstSol3){reference-type="ref" reference="lem:EstSol3"}, we see that $\|h_1(t)\|_{L^1} \lesssim \|\dot{\bm{x}}'(t)\|_{L^2}\|\ddot{\bm{x}}'(t)\|_{L^2} + \|\bm{x}''(t)\|_{L^2}\|s\dot{\bm{x}}''(t)\|_{L^2} \lesssim1$. Moreover, by the standard Sobolev embedding theorem we have $|a_1|\lesssim \|\dot{\bm{x}}'(t)\|_{X^1} \lesssim 1$. Therefore, the desired estimates follow from Lemma [**Lemma** 9](#lem:EstSolBVP2){reference-type="ref" reference="lem:EstSolBVP2"} in the case $\alpha=0$. ◻
## Proof of Theorem [**Theorem** 4](#th:equiv){reference-type="ref" reference="th:equiv"} {#proof-of-theorem-thequiv}
Suppose that $(\bm{x},\tau)$ is a solution to the transformed system [\[HP\]](#HP){reference-type="eqref" reference="HP"}, [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}, and [\[IC\]](#IC){reference-type="eqref" reference="IC"} in the class [\[SolClass\]](#SolClass){reference-type="eqref" reference="SolClass"}. Then, there exist positive constants $M$ and $c_0$ such that [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} holds. Therefore, we can use the estimates for $\tau$ and $\dot{\tau}$ obtained in Lemmas [**Lemma** 26](#lem:EstSol2){reference-type="ref" reference="lem:EstSol2"} and [**Lemma** 28](#lem:EstSol4){reference-type="ref" reference="lem:EstSol4"}. Moreover, we have $\tau'(1,t)=-\bm{g}\cdot\bm{x}'(1,t)\geq c_0$. We will use such estimates freely in the following.
Put $h(s,t)=|\bm{x}'(s,t)|^2-1$. It is sufficient to show that $h(s,t)\equiv0$. By a straightforward calculation, we have $$\begin{cases}
\ddot{h} = \tau h'' + 2\tau' h' + 2\tau''h &\mbox{in}\quad (0,1)\times(0,T), \\
\tau h' + 2\tau'h = 0 &\mbox{on}\quad \{s=1\}\times(0,T), \\
(h,\dot{h})|_{t=0} = (0,0) &\mbox{in}\quad (0,1).
\end{cases}$$ Therefore, $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\int_0^1(\tau|\dot{h}|^2+\tau^2|h'|^2) \mathrm{d}s
&= \int_0^1(4\tau\tau''h\dot{h}+\dot{\tau}|\dot{h}|^2+2\tau\dot{\tau}|h'|^2) \mathrm{d}s
+ 2(\tau^2h'\dot{h})|_{s=1},\end{aligned}$$ where we used the hyperbolic equation for $h$, integration by parts, and the boundary condition $\tau|_{s=0}=0$. As for the boundary term, by using the boundary condition for $h$ at $s=1$ we see that $$\begin{aligned}
2(\tau^2h'\dot{h})|_{s=1}
&= -4(\tau\tau'h\dot{h})|_{s=1} \\
&= -2\frac{\mathrm{d}}{\mathrm{d}t}(\tau\tau'h^2)|_{s=1} + 2((\dot{\tau}\tau'+\tau\dot{\tau}')h^2)|_{s=1}. \end{aligned}$$ Putting $E(t)=\int_0^1(\tau|\dot{h}|^2+\tau^2|h'|^2) \mathrm{d}s + 2(\tau\tau'h^2)|_{s=1}$, we obtain $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}E(t)
&= \int_0^1(4\tau\tau''h\dot{h}+\dot{\tau}|\dot{h}|^2+2\tau\dot{\tau}|h'|^2) \mathrm{d}s + 2((\dot{\tau}\tau'+\tau\dot{\tau}')h^2)|_{s=1}. \end{aligned}$$ Moreover, we have the equivalence $E(t) \simeq \|s^\frac12 \dot{h}(t)\|_{L^2}^2 + \|s h'(t)\|_{L^2}^2 + h(1,t)^2$. Therefore, we see easily that $$\int_0^1(\dot{\tau}|\dot{h}|^2+2\tau\dot{\tau}|h'|^2) \mathrm{d}s + 2((\dot{\tau}\tau'+\tau\dot{\tau}')h^2)|_{s=1}
\lesssim E(t).$$ To evaluate the integral of $\tau\tau''h\dot{h}$, we observe that $$\begin{aligned}
|h(s,t)|
&\leq |h(1,t)|+\int_s^1|h'(\sigma,t)|\mathrm{d}\sigma \\
&\leq |h(1,t)|+\left(\int_s^1\sigma^{-2}\mathrm{d}\sigma\right)^{\frac12}\left(\int_s^1\sigma^2|h'(\sigma,t)|^2\mathrm{d}\sigma\right)^{\frac12} \\
&\lesssim |h(1,t)|+s^{-\frac12}\|\tau h'\|_{L^2}.\end{aligned}$$ Therefore, $$\begin{aligned}
\int_0^1|\tau\tau''h\dot{h}| \mathrm{d}s
&\lesssim \int_0^1 |\tau''|(s^{\frac12}|h|)(\tau^{\frac12}|\dot{h}|) \mathrm{d}s \\
&\lesssim \|\tau''\|_{L^2}(|h|_{s=1}|+\|\tau h'\|_{L^2})\|\tau^{\frac12}\dot{h}\|_{L^2} \\
&\lesssim E(t).\end{aligned}$$ Summarizing the above estimates, we get $\frac{\mathrm{d}}{\mathrm{d}t}E(t) \lesssim E(t)$, which together with the initial condition $E(0)=0$ and Gronwall's inequality yields the desired result. $\Box$
# Energy estimate for a linearized system {#sect:EE}
## Differential operator $\mathscr{A}_\tau$
We introduce a variable coefficient differential operator $\mathscr{A}_\tau$ by $$\label{defA}
\mathscr{A}_\tau\bm{x}=-(\tau\bm{x}')',$$ so that [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} can be written as $$\begin{cases}
\ddot{\bm{x}}+\mathscr{A}_\tau\bm{x}=\bm{g} &\mbox{in}\quad (0,1)\times(0,T), \\
|\bm{x}'|=1 &\mbox{in}\quad (0,1)\times(0,T).
\end{cases}$$
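Expanding the derivative in [\[defA\]](#defA){reference-type="eqref" reference="defA"} gives $$\mathscr{A}_\tau\bm{x}=-(\tau\bm{x}')'=-\tau\bm{x}''-\tau'\bm{x}',$$ so that under the bounds $C_1^{-1}s\leq\tau(s)\leq C_1s$ and $|\tau'(s)|\leq C_1$ the operator $\mathscr{A}_\tau$ is a degenerate second-order operator whose leading coefficient vanishes linearly at $s=0$; the norm equivalence in the next lemma makes this precise.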
****Lemma** 29**. *For any $C_1\geq1$ there exists a constant $C\geq1$ such that if $\tau(s)$ satisfies $$C_1^{-1}s \leq \tau(s) \leq C_1s, \quad |\tau'(s)| \leq C_1$$ for any $s\in[0,1]$, then we have an equivalence of the norms $$C^{-1}( \|s\bm{x}''\|_{L^2}+\|\bm{x}'\|_{L^2} ) \leq \|\mathscr{A}_\tau\bm{x}\|_{L^2} \leq C( \|s\bm{x}''\|_{L^2}+\|\bm{x}'\|_{L^2}).$$*
*Proof.* It is sufficient to show the first estimate. Moreover, in view of $\|s\bm{x}''\|_{L^2} \leq C_1\|\tau\bm{x}''\|_{L^2} \leq C_1 (\|(\tau\bm{x}')'\|_{L^2}+\|\tau'\bm{x}'\|_{L^2})$, it is sufficient to show $\|\bm{x}'\|_{L^2} \lesssim\|\mathscr{A}_\tau\bm{x}\|_{L^2}$. Integrating the definition [\[defA\]](#defA){reference-type="eqref" reference="defA"} of $\mathscr{A}_\tau$ over $[0,s]$, we have $$\tau(s)\bm{x}'(s) = -\int_0^s (\mathscr{A}_\tau\bm{x})(\sigma) \mathrm{d}\sigma = - s\mathscr{M}(\mathscr{A}_\tau\bm{x})(s),$$ where $\mathscr{M}$ is the averaging operator defined in Section [4.3](#sect:AOM){reference-type="ref" reference="sect:AOM"}. Therefore, $|\bm{x}'(s)| \leq C_1|\mathscr{M}(\mathscr{A}_\tau\bm{x})(s)|$ for any $s\in[0,1]$. Now, the desired estimate follows from Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} in the case $\beta=j=0$ and $p=2$. ◻
## Linearized system
In this subsection we derive an energy estimate of solutions to a linearized system for [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}, [\[BC\]](#BC){reference-type="eqref" reference="BC"}, and [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"}. We denote variations of $(\bm{x},\tau)$ by $(\bm{y},\nu)$ in the linearization. Then, the linearized system has the form $$\label{LEq}
\begin{cases}
\ddot{\bm{y}}+\mathscr{A}_\tau\bm{y}+\mathscr{A}_\nu\bm{x} = \bm{f} &\mbox{in}\quad (0,1)\times(0,T), \\
\bm{x}'\cdot\bm{y}'=f &\mbox{in}\quad (0,1)\times(0,T), \\
\bm{y}=\bm{0} &\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ and $$\label{LBVP}
\begin{cases}
-\nu''+|\bm{x}''|^2\nu = 2\dot{\bm{x}}'\cdot\dot{\bm{y}}' - 2(\bm{x}''\cdot\bm{y}'')\tau + h &\mbox{in}\quad (0,1)\times(0,T), \\
\nu = 0 &\mbox{on}\quad \{s=0\}\times(0,T), \\
\nu' = -\bm{g}\cdot\bm{y}' &\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ where $\bm{f}$, $f$, and $h$ can be regarded as given functions. As for $\bm{x}$, in addition to [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} we assume that $$\label{SolClassEst2}
\begin{cases}
\|\bm{x}(t)\|_{X^3} + \|\dot{\bm{x}}(t)\|_{X^2} \leq M_1, \\
-\bm{g}\cdot\bm{x}'(1,t) \geq c_0 \qquad\mbox{for}\quad 0\leq t\leq T.
\end{cases}$$ Note that these conditions are less restrictive than [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}. An advantage to assume this condition is that we can take the constant $M_1$ smaller than the constant $M$ in [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} in applications. We are going to evaluate the functional $E(t)$ defined by $$\label{DefE}
E(t) = \|\dot{\bm{y}}(t)\|_{X^1}^2 + \|\bm{y}(t)\|_{X^2}^2.$$
****Proposition** 30**. *For any positive constants $M_1$, $M$, and $c_0$ and any $\epsilon \in (0,\frac12)$, there exist positive constants $C_1=C(M_1,c_0)$ and $C_2(\epsilon)=C(M,M_1,c_0,\epsilon)$ such that if $(\bm{x},\tau)$ is a solution to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[BC\]](#BC){reference-type="eqref" reference="BC"} satisfying [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} and [\[SolClassEst2\]](#SolClassEst2){reference-type="eqref" reference="SolClassEst2"}, then for any solution $(\bm{y},\nu)$ to [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"}--[\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"} we have $$E(t) \leq C_1 \mathrm{e}^{C_2(\epsilon) t}\left( E(0) + S_1(0) + C_2(\epsilon)\int_0^t S_2(t')\mathrm{d}t' \right),$$ where $$\label{DefS12}
\begin{cases}
S_1(t) = \|\bm{f}\|_{L^2}^2 + \|sh\|_{L^1}^2,\\
S_2(t) = \|\dot{\bm{f}}\|_{L^2}^2 + \|s^{\frac12-\epsilon}\dot{f}\|_{L^2}^2 + |\dot{f}|_{s=1}|^2
+ \|s\dot{h}\|_{L^1}^2 + \|s^{\frac12+\epsilon}h\|_{L^2}^2.
\end{cases}$$*
****Remark** 31**. *The linearized system [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"} is overdetermined due to the second equation in [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"}. Therefore, it is natural to ask if one could obtain a similar energy estimate as above without using the second equation. The answer is affirmative if we impose further regularities on $\bm{x}$ in addition to [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} and [\[SolClassEst2\]](#SolClassEst2){reference-type="eqref" reference="SolClassEst2"}. In other words, thanks to the almost orthogonality condition $\bm{x}'\cdot\bm{y}'=f$ we can minimize the regularity imposed on $\bm{x}$. For more details, we refer to T. Iguchi and M. Takayama [@IguchiTakayama2023].*
*Proof of Proposition [**Proposition** 30](#prop:EE){reference-type="ref" reference="prop:EE"}.* By taking $L^2$-inner product of the first equation in [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"} with $\mathscr{A}_{\tau}\dot{\bm{y}}$ and using integration by parts, we have $$\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\{ \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}^2 \}
&= (\dot{\tau}\dot{\bm{y}}',\dot{\bm{y}}')_{L^2} + 2(\mathscr{A}_{\dot{\tau}}\bm{y},\mathscr{A}_{\tau}\bm{y})_{L^2}
+ 2(\mathscr{A}_{\tau}\dot{\bm{y}},\bm{f})_{L^2} - 2(\mathscr{A}_{\tau}\dot{\bm{y}}, \mathscr{A}_{\nu}\bm{x})_{L^2},\end{aligned}$$ where the boundary terms vanish due to the boundary conditions. In view of Lemmas [**Lemma** 29](#lem:EstA){reference-type="ref" reference="lem:EstA"}, [**Lemma** 26](#lem:EstSol2){reference-type="ref" reference="lem:EstSol2"}, and [**Lemma** 28](#lem:EstSol4){reference-type="ref" reference="lem:EstSol4"}, the first two terms on the right-hand side can be easily handled, so we will focus on the last two terms. We evaluate the third term as $(\mathscr{A}_{\tau}\dot{\bm{y}},\bm{f})_{L^2} = \frac{\rm d}{{\rm d}t}(\mathscr{A}_{\tau}\bm{y},\bm{f})_{L^2}
- (\mathscr{A}_{\dot{\tau}}\bm{y},\bm{f})_{L^2} - (\mathscr{A}_{\tau}\bm{y},\dot{\bm{f}})_{L^2}$. As for the last term, by integration by parts, we see that $$\begin{aligned}
(\mathscr{A}_{\tau}\dot{\bm{y}}, \mathscr{A}_{\nu}\bm{x})_{L^2}
&= ((\tau\dot{\bm{y}}')',(\nu\bm{x}')')_{L^2} \\
&= -(\tau\dot{\bm{y}}',(\nu\bm{x}')'')_{L^2} + ( \tau\dot{\bm{y}}'\cdot(\nu\bm{x}')' )|_{s=1} \\
&= -(\tau\dot{\bm{y}}',\nu''\bm{x}'+2\nu'\bm{x}''+\nu\bm{x}''')_{L^2}
+ ( \tau\dot{\bm{y}}'\cdot(\nu\bm{x}''+\nu'\bm{x}') )|_{s=1}.\end{aligned}$$ Here, the term $(\tau\dot{\bm{y}}',\nu''\bm{x}')_{L^2}$ would be troublesome if we evaluate it directly. To bypass the trouble, we make use of the second equation in [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"}. Differentiating it with respect to $t$ we have $$\label{OR}
\bm{x}'\cdot\dot{\bm{y}}'=\dot{f}-\dot{\bm{x}}'\cdot\bm{y}',$$ so that the term can be written as $(\tau\dot{\bm{y}}',\nu''\bm{x}')_{L^2}=(\tau\dot{f},\nu'')_{L^2}-(\tau\bm{y}',\nu''\dot{\bm{x}}')_{L^2}$, which can now be easily handled. Thanks to the relation [\[OR\]](#OR){reference-type="eqref" reference="OR"} again, we have $( (\tau\dot{\bm{y}}')\cdot(\nu'\bm{x}') )|_{s=1} = ( \tau\nu'(\dot{f}-\bm{y}'\cdot\dot{\bm{x}}'))|_{s=1}$, which can also be easily handled. Therefore, we obtain $$\label{LEE1}
\frac{\mathrm{d}}{\mathrm{d}t}\{ \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}^2
- 2(\mathscr{A}_\tau\bm{y},\bm{f})_{L^2} \}
= I_1 - 2( \nu\tau\bm{x}''\cdot\dot{\bm{y}}' )|_{s=1},$$ where $$\begin{aligned}
I_1
&= (\dot{\tau}\dot{\bm{y}}',\dot{\bm{y}}')_{L^2} + 2(\mathscr{A}_{\dot{\tau}}\bm{y},\mathscr{A}_{\tau}\bm{y})_{L^2}
- 2(\mathscr{A}_{\dot{\tau}}\bm{y},\bm{f})_{L^2} - 2(\mathscr{A}_{\tau}\bm{y},\dot{\bm{f}})_{L^2} \\
&\quad\;
- 2(\tau\bm{y}',\nu''\dot{\bm{x}}')_{L^2} + 2(\tau\dot{\bm{y}}',2\nu'\bm{x}''+\nu\bm{x}''')_{L^2} + 2(\tau\nu'',\dot{f})_{L^2} \\
&\quad\;
+ 2( \tau\nu'(\dot{\bm{x}}'\cdot\bm{y}'-\dot{f}))|_{s=1}.\end{aligned}$$ The only remaining term that we have to evaluate is $( \nu\tau\bm{x}''\cdot\dot{\bm{y}}' )|_{s=1}$, which requires a more careful analysis.
****Lemma** 32**. *For any positive constants $M$ and $c_0$, there exists a positive constant $C_2=C(M,c_0)$ such that if $(\bm{x},\tau)$ is a solution to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[BC\]](#BC){reference-type="eqref" reference="BC"} satisfying [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}, then for any solution $(\bm{y},\nu)$ to [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"}--[\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"} we have $$\label{LEE2}
- 2( \nu\tau\bm{x}''\cdot\dot{\bm{y}}' )|_{s=1}
= \frac{\mathrm{d}}{\mathrm{d}t}\left( \frac{\varphi'}{\varphi}\nu^2 \right)\biggr|_{s=1} + I_2,$$ where $\varphi$ is the fundamental solution defined by [\[IVPphi\]](#IVPphi){reference-type="eqref" reference="IVPphi"} and $I_2$ satisfies $$\begin{aligned}
|I_2| \leq C_2( \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}^2
+ \|\nu\|_{L^\infty}^2 + \|\nu'\|_{L^2}^2 + \|\bm{f}\|_{L^2}^2 + |\dot{f}|_{s=1}|^2 + \|s\dot{h}\|_{L^1}^2 ).\end{aligned}$$*
*Proof.* By taking the trace of the hyperbolic equation for $\bm{x}$ on $s=1$ and using the boundary condition, we have $(\tau\bm{x}''+\tau'\bm{x}')|_{s=1}+\bm{g}=\bm{0}$. Therefore, by [\[OR\]](#OR){reference-type="eqref" reference="OR"} again $$\label{LI21}
\bm{g}\cdot\dot{\bm{y}}'+\tau(\bm{x}''\cdot\dot{\bm{y}}')=I_{2,1} \quad\mbox{on}\quad s=1,$$ where $I_{2,1} = (\tau'( \dot{\bm{x}}'\cdot\bm{y}' - \dot{f} ))|_{s=1}$. Differentiating [\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"} with respect to $t$, we see that $\dot{\nu}$ satisfies $$\begin{cases}
-\dot{\nu}''+|\bm{x}''|^2\dot{\nu} = h_I + h_{II}' &\mbox{in}\quad (0,1)\times(0,T), \\
\dot{\nu} = 0 &\mbox{on}\quad \{s=0\}\times(0,T), \\
\dot{\nu}' = -\bm{g}\cdot\dot{\bm{y}}' &\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ where $$\begin{aligned}
h_I &= 2(\ddot{\bm{x}}'\cdot\dot{\bm{y}}') - 2(\dot{\bm{x}}''\cdot\bm{y}'')\tau - 2(\bm{x}''\cdot\bm{y}'')\dot{\tau}
- 2(\bm{x}''\cdot\dot{\bm{x}}'')\nu - 2(\dot{\bm{x}}''\cdot\ddot{\bm{y}}) + 2(\tau\bm{x}'')'\cdot\dot{\bm{y}}' + \dot{h}, \\
h_{II} &= 2(\dot{\bm{x}}'\cdot\ddot{\bm{y}}) - 2(\bm{x}''\cdot\dot{\bm{y}}')\tau.\end{aligned}$$ Now, we use the solution formula [\[SolBVP\]](#SolBVP){reference-type="eqref" reference="SolBVP"} to $\dot{\nu}$ at $s=1$ to obtain $$\dot{\nu}=\frac{\varphi}{\varphi'}(-\bm{g}\cdot\dot{\bm{y}}'+h_{II})
+ \frac{1}{\varphi'}\int_0^1( \varphi(\sigma)h_I(\sigma)-\varphi'(\sigma)h_{II}(\sigma))\mathrm{d}\sigma
\quad\mbox{on}\quad s=1,$$ where we used integration by parts. Since $h_{II}=- 2(\bm{x}''\cdot\dot{\bm{y}}')\tau$ on $s=1$, we get $$\label{LI22}
\bm{g}\cdot\dot{\bm{y}}'+2\tau(\bm{x}''\cdot\dot{\bm{y}}')=-\frac{\varphi'}{\varphi}\dot{\nu} + I_{2,2}
\quad\mbox{on}\quad s=1,$$ where $$I_{2,2} = \frac{1}{\varphi|_{s=1}}\int_0^1( \varphi(\sigma)h_I(\sigma)-\varphi'(\sigma)h_{II}(\sigma))\mathrm{d}\sigma.$$ By [\[LI21\]](#LI21){reference-type="eqref" reference="LI21"} and [\[LI22\]](#LI22){reference-type="eqref" reference="LI22"}, we obtain [\[LEE2\]](#LEE2){reference-type="eqref" reference="LEE2"} with $$I_2 = \left[ -\left\{ \frac{\mathrm{d}}{\mathrm{d}t}\left( \frac{\varphi'}{\varphi} \right) \right\}\nu^2
+ 2\nu(I_{2,1}-I_{2,2}) \right]\biggr|_{s=1}.$$
It remains to show the estimate for $I_2$ of the lemma. Since $\bm{x}$ is supposed to satisfy [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}, we will use freely the estimates in Lemmas [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"}, [**Lemma** 7](#lem:EstDtPhi){reference-type="ref" reference="lem:EstDtPhi"}, [**Lemma** 25](#lem:EstSol1){reference-type="ref" reference="lem:EstSol1"}--[**Lemma** 28](#lem:EstSol4){reference-type="ref" reference="lem:EstSol4"} without any comment. Then, we have $|I_2| \lesssim \|\nu\|_{L^\infty}^2 + \|\nu\|_{L^\infty}( |I_{2,1}|+|I_{2,2}| )$. By the standard Sobolev embedding theorem, we have $|I_{2,1}| \lesssim |\bm{y}'|_{s=1}| + |\dot{f}|_{s=1}| \lesssim \|\bm{y}'\|_{L^2}+\|s\bm{y}''\|_{L^2}+|\dot{f}|_{s=1}|$. We have also $$\begin{aligned}
|I_{2,2}|
&\lesssim \|s h_I\|_{L^1}+\|h_{II}\|_{L^1} \\
&\lesssim \|s^\frac12\dot{\bm{y}}'\|_{L^2}+\|s\bm{y}''\|_{L^2} + \|\ddot{\bm{y}}\|_{L^2} + \|\nu\|_{L^\infty} + \|s\dot{h}\|_{L^1}.\end{aligned}$$ It follows from the first equation in [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"} for $\bm{y}$ that $\|\ddot{\bm{y}}\|_{L^2} \leq \|\mathscr{A}_{\tau}\bm{y}\|_{L^2} + \|\mathscr{A}_{\nu}\bm{x}\|_{L^2} + \|\bm{f}\|_{L^2}$. Here, we have $\|\mathscr{A}_{\nu}\bm{x}\|_{L^2} \leq \|\nu\|_{L^\infty}\|\bm{x}''\|_{L^2} + \|\nu'\|_{L^2}
\lesssim \|\nu\|_{L^\infty} + \|\nu'\|_{L^2}$. These estimates together with Lemma [**Lemma** 29](#lem:EstA){reference-type="ref" reference="lem:EstA"} yield the desired estimate. ◻
We need to evaluate $\nu$ in terms of $\bm{y}$, which will be given in the next lemma.
****Lemma** 33**. *Under the same hypotheses and the same notations in Lemma [**Lemma** 32](#lem:BT){reference-type="ref" reference="lem:BT"}, we have $$\begin{cases}
\|s^{-\frac12}\nu\|_{L^\infty} + \|s^\frac12\nu'\|_{L^\infty} + \|\nu'\|_{L^2}
\leq C_1( \|\tau^\frac12\dot{\bm{y}}'\|_{L^2} + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2} + \|s^\frac12h\|_{L^1} ), \\
\|s^{\frac12+\epsilon}\nu''\|_{L^2}
\leq C_1(\epsilon)( \|\tau^\frac12\dot{\bm{y}}'\|_{L^2} + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2} ) + \|s^{\frac12+\epsilon}h\|_{L^2},
\end{cases}$$ where $\epsilon\in(0,\frac12)$ is arbitrary and $C_1(\epsilon)=C(M,c_0,\epsilon)$. Particularly, we have $$\|\ddot{\bm{y}}\|_{L^2} \leq C_1( \|\tau^\frac12\dot{\bm{y}}'\|_{L^2} + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}
+ \|\bm{f}\|_{L^2} + \|s^\frac12h\|_{L^1}).$$*
*Proof.* We note that $\nu$ is the solution to the boundary value problem [\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"} and that we have $$\begin{cases}
\|s^\frac12( \dot{\bm{x}}'\cdot\dot{\bm{y}}'-(\bm{x}''\cdot\bm{y}'')\tau )\|_{L^1}
\lesssim \|s^\frac12\dot{\bm{y}}'\|_{L^2}+\|s\bm{y}''\|_{L^2}, \\
|\bm{g}\cdot\bm{y}'|_{s=1}| \lesssim \|\bm{y}'\|_{L^2}+\|s\bm{y}''\|_{L^2}.
\end{cases}$$ Therefore, the first estimate of the lemma follows from Lemmas [**Lemma** 9](#lem:EstSolBVP2){reference-type="ref" reference="lem:EstSolBVP2"}, [**Lemma** 11](#lem:EstSolBVP3){reference-type="ref" reference="lem:EstSolBVP3"}, and [**Lemma** 29](#lem:EstA){reference-type="ref" reference="lem:EstA"}. Moreover, by the first equation in [\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"} we have $$\|s^{\frac12+\epsilon}(\nu''+h)\|_{L^2}
\lesssim \|s^\frac14\bm{x}''\|_{L^4}^2\|s^{-\frac12}\nu\|_{L^\infty}
+ \|s^\epsilon\dot{\bm{x}}'\|_{L^\infty}\|s^\frac12\dot{\bm{y}}'\|_{L^2} + \|s^\frac12\bm{x}''\|_{L^\infty}\|s\bm{y}''\|_{L^2},$$ which together with the first estimate, Lemmas [**Lemma** 25](#lem:EstSol1){reference-type="ref" reference="lem:EstSol1"} and [**Lemma** 29](#lem:EstA){reference-type="ref" reference="lem:EstA"}, and $\|s^\frac12 h\|_{L^1} \leq (1-2\epsilon)^{-\frac12}\|s^{\frac12+\epsilon}h\|_{L^2}$ gives the second one. ◻
In view of [\[LEE1\]](#LEE1){reference-type="eqref" reference="LEE1"} and [\[LEE2\]](#LEE2){reference-type="eqref" reference="LEE2"}, we define an energy functional $\mathscr{E}(t)$ by $$\begin{aligned}
\mathscr{E}(t) &= \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}^2 - 2(\mathscr{A}_\tau\bm{y},\bm{f})_{L^2}
- \left( \frac{\varphi'}{\varphi}\nu^2 \right)\biggr|_{s=1}
+ \lambda( \|\dot{\bm{y}}\|_{L^2}^2 + \|\bm{y}\|_{X^1}^2 ),\end{aligned}$$ where the parameter $\lambda>0$ will be chosen so large that the following lemma holds.
****Lemma** 34**. *For any positive constants $M_1$ and $c_0$, there exist $\lambda_0=\lambda(M_1,c_0)>0$ and $C_1=C(M_1,c_0)\geq1$ such that if $\lambda=\lambda_0$, then we have $$\mathscr{E}(t) \leq C_1( E(t) + S_1(t) ), \qquad E(t) \leq C_1( \mathscr{E}(t) +S_1(t)),$$ where $E(t)$ and $S_1(t)$ are defined by [\[DefE\]](#DefE){reference-type="eqref" reference="DefE"} and [\[DefS12\]](#DefS12){reference-type="eqref" reference="DefS12"}.*
*Proof.* It is sufficient to evaluate $(\nu|_{s=1})^2$. Since $\nu$ is a solution to the boundary value problem [\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"}, the solution formula [\[SolBVP\]](#SolBVP){reference-type="eqref" reference="SolBVP"} yields $$\begin{aligned}
\nu
&= -\frac{\varphi}{\varphi'}( \bm{g}\cdot\bm{y}' + 2\tau(\bm{x}''\cdot\bm{y}') )
+ \frac{1}{\varphi'} \int_0^1 ( 2(\varphi\tau\bm{x}'')'\cdot\bm{y}' - 2(\varphi\dot{\bm{x}}')'\cdot\dot{\bm{y}} + \varphi h)\mathrm{d}\sigma
\quad\mbox{on}\quad s=1,\end{aligned}$$ where we used integration by parts. This together with Lemmas [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"} and [**Lemma** 8](#lem:EstSolBVP1){reference-type="ref" reference="lem:EstSolBVP1"} gives $$\begin{aligned}
|\nu|_{s=1}|
&\leq C_1(1+|\bm{x}''|_{s=1}|)|\bm{y}'|_{s=1}|
+ C_1(\|s ^\frac12\bm{x}''\|_{L^2}+\|s^\frac32\bm{x}'''\|_{L^2})\|s^\frac12\bm{y}'\|_{L^2} \\
&\quad\;
+ C_1(\|\dot{\bm{x}}'\|_{L^2}+\|s\dot{\bm{x}}''\|_{L^2})\|\dot{\bm{y}}\|_{L^2} + \|sh\|_{L^1} \\
&\leq C_1(\|s^\frac12\bm{y}'\|_{L^2}^\frac12\|s\bm{y}''\|_{L^2}^\frac12 + \|s^\frac12\bm{y}'\|_{L^2} + \|\dot{\bm{y}}\|_{L^2} + \|sh\|_{L^1}),\end{aligned}$$ where we used the Sobolev embedding theorem $\|u\|_{L^\infty} \lesssim \|u\|_{L^2}^\frac12\|u'\|_{L^2}^\frac12 + \|u\|_{L^2}$. Therefore, by Lemma [**Lemma** 29](#lem:EstA){reference-type="ref" reference="lem:EstA"} we obtain the desired equivalence. ◻
We go back to the proof of Proposition [**Proposition** 30](#prop:EE){reference-type="ref" reference="prop:EE"}. We fix the parameter $\lambda$ in the energy functional $\mathscr{E}(t)$ as $\lambda=\lambda_0$. It follows from [\[LEE1\]](#LEE1){reference-type="eqref" reference="LEE1"} and [\[LEE2\]](#LEE2){reference-type="eqref" reference="LEE2"} that $\frac{\mathrm{d}}{\mathrm{d}t}\mathscr{E}(t)
= I_1+I_2 + 2\lambda_0\{(\dot{\bm{y}},\ddot{\bm{y}})_{L^2}+(s^\frac12\bm{y}',s^\frac12\dot{\bm{y}}')_{L^2}+(\bm{y},\dot{\bm{y}})_{L^2} \}$. Since $\bm{x}$ is supposed to satisfy [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}, we will use freely the estimates in Lemmas [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"}, [**Lemma** 7](#lem:EstDtPhi){reference-type="ref" reference="lem:EstDtPhi"}, [**Lemma** 25](#lem:EstSol1){reference-type="ref" reference="lem:EstSol1"}--[**Lemma** 28](#lem:EstSol4){reference-type="ref" reference="lem:EstSol4"} without any comment. Then, by Lemmas [**Lemma** 29](#lem:EstA){reference-type="ref" reference="lem:EstA"} and [**Lemma** 33](#lem:EstNu){reference-type="ref" reference="lem:EstNu"} we see that $$\begin{aligned}
|I_1|
&\lesssim \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_\tau\bm{y}\|_{L^2}^2 + \|(\bm{f},\dot{\bm{f}})\|_{L^2}^2
+ \|\bm{y}'\|_{L^2} \|s^\frac34\nu''\|_{L^2} \|s^\frac14\dot{\bm{x}}'\|_{L^\infty} \\
&\quad\;
+ \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}( \|s^\frac12\nu'\|_{L^\infty}\|\bm{x}''\|_{L^2} + \|s^{-\frac12}\nu\|_{L^\infty}\|s\bm{x}'''\|_{L^2}) \\
&\quad\;
+ \|s^{\frac12+\epsilon}\nu''\|_{L^2}\|s^{\frac12-\epsilon}\dot{f}\|_{L^2}
+ \|s^\frac12\nu'\|_{L^\infty}(|\bm{y}'|_{s=1}| + |\dot{f}|_{s=1}| ) \\
&\lesssim \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_\tau\bm{y}\|_{L^2}^2 + S_1(t) + S_2(t),\end{aligned}$$ where we used again $\|s^\frac12h\|_{L^1} \leq (1-2\epsilon)^{-\frac12}\|s^{\frac12+\epsilon}h\|_{L^2}$. Similarly, by Lemma [**Lemma** 32](#lem:BT){reference-type="ref" reference="lem:BT"} we obtain $|I_2| \leq C_2( E(t) + S_1(t) + S_2(t))$, so that $\frac{\mathrm{d}}{\mathrm{d}t}\mathscr{E}(t) \leq C_2(\epsilon)( E(t)+S_1(t)+S_2(t))$. Moreover, we see easily that $S_1(t) \leq 2 S_1(0) + 4t\int_0^tS_2(t')\mathrm{d}t'$. These together with Lemma [**Lemma** 34](#lem:EquiNorm2){reference-type="ref" reference="lem:EquiNorm2"} and Gronwall's inequality give the desired estimate. ◻
# Uniqueness of solution {#sect:unique}
In this section we prove Theorem [**Theorem** 3](#th:unique){reference-type="ref" reference="th:unique"}, which ensures the uniqueness of solution to the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}. Suppose that $(\bm{x}_1,\tau_1)$ and $(\bm{x}_2,\tau_2)$ are solutions to the problem in the class [\[SolClass\]](#SolClass){reference-type="eqref" reference="SolClass"}. Then, we can assume without loss of generality that $\bm{x}_1$ and $\bm{x}_2$ satisfy [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"} with positive constants $M$ and $c_0$. Therefore, we will use freely the estimates in Lemmas [**Lemma** 5](#lem:EstPhi){reference-type="ref" reference="lem:EstPhi"}, [**Lemma** 7](#lem:EstDtPhi){reference-type="ref" reference="lem:EstDtPhi"}, [**Lemma** 25](#lem:EstSol1){reference-type="ref" reference="lem:EstSol1"}--[**Lemma** 28](#lem:EstSol4){reference-type="ref" reference="lem:EstSol4"} without any comment. Putting $\bm{y}=\bm{x}_1-\bm{x}_2$, $\nu=\tau_1-\tau_2$, $\bm{x}=\frac12(\bm{x}_1+\bm{x}_2)$, $\tau=\frac12(\tau_1+\tau_2)$, we are going to show that $\bm{y}=\bm{0}$ and $\nu=0$. We see that $\bm{y}$ satisfies $$\label{EqDiff1}
\begin{cases}
\ddot{\bm{y}}+\mathscr{A}_{\tau}\bm{y}+\mathscr{A}_\nu\bm{x}=\bm{0} &\mbox{in}\quad (0,1)\times(0,T), \\
\bm{x}'\cdot\bm{y}'=0 &\mbox{in}\quad (0,1)\times(0,T), \\
\bm{y}=\bm{0} &\mbox{on}\quad \{s=1\}\times(0,T), \\
(\bm{y},\dot{\bm{y}})=(\bm{0},\bm{0}) &\mbox{on}\quad (0,1)\times\{t=0\},
\end{cases}$$ and that $\nu$ satisfies $$\label{EqDiff2}
\begin{cases}
-\nu''+|\bm{x}''|^2\nu = 2(\dot{\bm{x}}'\cdot\dot{\bm{y}}')-2(\bm{x}''\cdot\bm{y}'')\tau + h &\mbox{in}\quad (0,1)\times(0,T), \\
\nu = 0 &\mbox{on}\quad \{s=0\}\times(0,T), \\
\nu' = -\bm{g}\cdot\bm{y}' &\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ where $h=-\frac14|\bm{y}''|^2\nu$. Although $\bm{x}$ satisfies [\[SolClassEst\]](#SolClassEst){reference-type="eqref" reference="SolClassEst"}, $(\bm{x},\tau)$ is not a solution to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"} in general. Therefore, we cannot apply the energy estimate obtained in Proposition [**Proposition** 30](#prop:EE){reference-type="ref" reference="prop:EE"} directly and we need to modify the estimate. In this case, in place of [\[LEE1\]](#LEE1){reference-type="eqref" reference="LEE1"} we have $$\frac{\mathrm{d}}{\mathrm{d}t}\{ \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}^2 \}
= I_1 - 2( \nu\tau\bm{x}''\cdot\dot{\bm{y}}' )|_{s=1},$$ where $$\begin{aligned}
I_1
&= (\dot{\tau}\dot{\bm{y}}',\dot{\bm{y}}')_{L^2} + 2(\mathscr{A}_{\dot{\tau}}\bm{y},\mathscr{A}_{\tau}\bm{y})_{L^2} \\
&\quad\;
- 2(\tau\bm{y}',\nu''\dot{\bm{x}}')_{L^2} + 2(\tau\dot{\bm{y}}',2\nu'\bm{x}''+\nu\bm{x}''')_{L^2}
+ 2( \tau\nu'(\dot{\bm{x}}'\cdot\bm{y}'))|_{s=1}.\end{aligned}$$ To evaluate the boundary term $( \nu\tau\bm{x}''\cdot\dot{\bm{y}}' )|_{s=1}$, we showed Lemma [**Lemma** 32](#lem:BT){reference-type="ref" reference="lem:BT"}, where we essentially used the fact that $(\bm{x},\tau)$ is a solution to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}. Therefore, we need to modify Lemma [**Lemma** 32](#lem:BT){reference-type="ref" reference="lem:BT"} as follows.
****Lemma** 35**. *It holds that $$\label{LEE3}
-2( \tau\nu\bm{x}''\cdot\dot{\bm{y}}' )|_{s=1}
= 2\frac{\mathrm{d}}{\mathrm{d}t}\left( \frac{\tau^2}{\tau_1^2+\tau_2^2}\frac{\varphi'}{\varphi}\nu^2 \right)\biggr|_{s=1} + I_2,$$ where $\varphi$ is the fundamental solution defined by [\[IVPphi\]](#IVPphi){reference-type="eqref" reference="IVPphi"} for $\bm{x}$ and $I_2$ satisfies $$\begin{aligned}
|I_2| \leq C_2( \|\tau^\frac12\dot{\bm{y}}'\|_{L^2}^2 + \|\mathscr{A}_{\tau}\bm{y}\|_{L^2}^2
+ \|\nu\|_{L^\infty}^2+\|\nu'\|_{L^2}^2 + \|s\dot{h}\|_{L^1}^2).\end{aligned}$$*
*Proof.* By taking the trace of the hyperbolic equation for $\bm{x}_j$ on $s=1$ and using the boundary condition, we have $-(\tau_j\bm{x}_j''+\tau_j'\bm{x}_j')|_{s=1}=\bm{g}$, so that $$\begin{aligned}
2\bm{x}''
&= -\left(\frac{1}{\tau_1}+\frac{1}{\tau_2}\right)\bm{g} - \left(\frac{\tau_1'}{\tau_1}+\frac{\tau_2'}{\tau_2}\right)\bm{x}'
- \frac12\left(\frac{\tau_1'}{\tau_1}-\frac{\tau_2'}{\tau_2}\right)\bm{y}' \quad\mbox{on}\quad s=1.\end{aligned}$$ Differentiating the second equation in [\[EqDiff1\]](#EqDiff1){reference-type="eqref" reference="EqDiff1"} with respect to $t$ we have $\bm{x}'\cdot\dot{\bm{y}}'=-\dot{\bm{x}}'\cdot\bm{y}'$. These two identities yield $$\label{I21}
\frac{\tau}{2}\left(\frac{1}{\tau_1}+\frac{1}{\tau_2}\right)\bm{g}\cdot\dot{\bm{y}}' + \tau(\bm{x}''\cdot\dot{\bm{y}}') = I_{2,1}
\quad\mbox{on}\quad s=1,$$ where $$I_{2,1}=\left\{\frac{\tau}{2}\left(\frac{\tau_2'}{\tau_2}\dot{\bm{x}}_1'+\frac{\tau_1'}{\tau_1}\dot{\bm{x}}_2'\right)
\cdot\bm{y}'\right\}\biggr|_{s=1}.$$ On the other hand, [\[LI22\]](#LI22){reference-type="eqref" reference="LI22"} is still valid. By [\[I21\]](#I21){reference-type="eqref" reference="I21"} and [\[LI22\]](#LI22){reference-type="eqref" reference="LI22"}, we obtain [\[LEE3\]](#LEE3){reference-type="eqref" reference="LEE3"} with $$I_2 = \left[ -\left\{\frac{\mathrm{d}}{\mathrm{d}t}\left( \frac{2\tau^2}{\tau_1^2+\tau_2^2}\frac{\varphi'}{\varphi} \right) \right\}\nu^2
+ 2\nu\left( \frac{2\tau_1\tau_2}{\tau_1^2+\tau_2^2} I_{2,1}-\frac{2\tau^2}{\tau_1^2+\tau_2^2} I_{2,2} \right) \right]\biggr|_{s=1}.$$ The estimate for $I_2$ is exactly the same as in the proof of Lemma [**Lemma** 32](#lem:BT){reference-type="ref" reference="lem:BT"}, so we omit it. ◻
Thanks to this lemma, we see that the energy estimate obtained in Proposition [**Proposition** 30](#prop:EE){reference-type="ref" reference="prop:EE"} is still valid in this case. We use the estimate with, for example, $\epsilon=\frac14$ and obtain $$E(t) \leq C_1\mathrm{e}^{C_2t}\left( E(0) + S_1(0) + C_2\int_0^tS_2(t')\mathrm{d}t' \right),$$ where $E(t)=\|\dot{\bm{y}}(t)\|_{X^1}^2 + \|\bm{y}(t)\|_{X^2}^2$, $S_1(t)=\|sh(t)\|_{L^1}^2$, and $S_2(t)=\|s\dot{h}(t)\|_{L^1}^2+\|s^\frac34h(t)\|_{L^2}^2$. Here, we remind that $h$ was defined by $h=-\frac14|\bm{y}''|^2\nu$. By the initial conditions in [\[EqDiff1\]](#EqDiff1){reference-type="eqref" reference="EqDiff1"}, we have $E(0)=S_1(0)=0$. In view of $$\begin{cases}
|h| \lesssim s(|\bm{x}_1''|+|\bm{x}_2''|)|\bm{y}''|, \\
|\dot{h}| \lesssim s(|\dot{\bm{x}}_1''|+|\dot{\bm{x}}_2''|+|\bm{x}_1''|+|\bm{x}_2''|)|\bm{y}''|,
\end{cases}$$ we have $$\begin{cases}
\|s^\frac12h\|_{L^2} \lesssim ( \|s^\frac12\bm{x}_1''\|_{L^\infty} + \|s^\frac12\bm{x}_2''\|_{L^\infty} ) \|s\bm{y}''\|_{L^2}, \\
\|s^\frac12\dot{h}\|_{L^1} \lesssim ( \|s^\frac12\dot{\bm{x}}_1''\|_{L^2} + \|s^\frac12\dot{\bm{x}}_2''\|_{L^2}
+ \|\bm{x}_1''\|_{L^2} + \|\bm{x}_2''\|_{L^2} )\|s\bm{y}''\|_{L^2}.
\end{cases}$$ Therefore, we obtain $E(t) \lesssim \int_0^tE(t')\mathrm{d}t'$, which together with Gronwall's inequality implies $E(t)\equiv0$ so that $\dot{\bm{y}}=\bm{0}$ and that $\bm{y}=\bm{0}$ due to the initial conditions. Then, the uniqueness of the solution to the boundary value problem [\[EqDiff2\]](#EqDiff2){reference-type="eqref" reference="EqDiff2"} implies $\nu=0$. Therefore, the proof of Theorem [**Theorem** 3](#th:unique){reference-type="ref" reference="th:unique"} is complete. $\Box$
# Estimates for the tension $\tau$ {#sect:ETau}
****Lemma** 36**. *Let $M$ and $c_0$ be positive constants and $m$ and $j$ integers such that $m\geq5$ and $0\leq j\leq m-2$. There exists a positive constant $C=C(M,c_0,m)$ such that if $\bm{x}$ satisfies $$\sum_{l=0}^{j+1} \|\partial_t^l\bm{x}(t)\|_{X^{m-l}} \leq M, \qquad -\bm{g}\cdot\bm{x}'(1,t)\geq c_0,$$ then the solution $\tau$ to the boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} satisfies the following estimates: $$\begin{cases}
\|\partial_t^j\tau'(t)\|_{L^\infty \cap X^{m-1-j}} \leq C &\mbox{in the case $j\leq m-3$}, \\
\|\partial_t^{m-2}\tau'(t)\|_{X^1} \leq C &\mbox{in the case $j=m-2$}.
\end{cases}$$*
*Proof.* We prove this lemma by induction on $j$. Assuming that $$\label{IndAss1}
\sum_{l=0}^{j-1} \|\partial_t^l\tau'(t)\|_{L^\infty \cap X^{m-1-l}} \lesssim 1$$ holds in the case $j\geq1$, we are going to evaluate $\partial_t^j\tau$. We note that the above induction hypothesis together with the boundary condition $\partial_t^l\tau|_{s=0}=0$ implies that $|\partial_t^l\tau(s,t)| \lesssim s$ for $l=0,1,\ldots,j-1$ and that $\partial_t^j\tau$ is a solution to the boundary value problem $$\label{BVPdtj}
\begin{cases}
-\partial_t^j\tau''+|\bm{x}''|^2\partial_t^j\tau = h_j &\mbox{in}\quad (0,1)\times(0,T), \\
\partial_t^j\tau=0 &\mbox{on}\quad \{s=0\}\times(0,T), \\
\partial_t^j\tau'= a_j&\mbox{on}\quad \{s=1\}\times(0,T),
\end{cases}$$ where $h_j=\partial_t^j|\dot{\bm{x}}'|^2-[\partial_t^j,|\bm{x}''|^2]\tau$ and $a_j=-\bm{g}\cdot\partial_t^j\bm{x}'|_{s=1}$. Here, in view of $2\leq m-j$ and the standard Sobolev embedding theorem we see easily that $|a_j(t)| \lesssim \|\partial_t^j\bm{x}(t)\|_{X^{m-j}} \lesssim 1$.
**Step 1.** We first derive pointwise estimates for $\partial_t^j\tau$. By [\[IndAss1\]](#IndAss1){reference-type="eqref" reference="IndAss1"}, we see that $$|h_j| \lesssim \sum_{j_1+j_2=j, j_1\leq j_2}|\partial_t^{j_1+1}\bm{x}'\cdot\partial_t^{j_2+1}\bm{x}'|
+ \sum_{j_1+j_2\leq j, j_1\leq j_2}s|\partial_t^{j_1}\bm{x}''\cdot\partial_t^{j_2}\bm{x}''|.$$
\(i\) The case $j\leq m-3$. In this case, we have $j_1\leq j_2\leq m-3$ so that $2\leq m-(j_2+1) \leq m-(j_1+1)$ and that $3\leq m-j_2 \leq m-j_1$. Therefore, we obtain $$\begin{aligned}
\|\partial_t^{j_1+1}\bm{x}'\cdot\partial_t^{j_2+1}\bm{x}'\|_{L^1}
&\leq \|\partial_t^{j_1+1}\bm{x}'\|_{L^2} \|\partial_t^{j_2+1}\bm{x}'\|_{L^2} \\
&\leq \|\partial_t^{j_1+1}\bm{x}\|_{X^{m-(j_1+1)}} \|\partial_t^{j_2+1}\bm{x}\|_{X^{m-(j_2+1)}}\end{aligned}$$ and $$\begin{aligned}
\|s\partial_t^{j_1}\bm{x}''\cdot\partial_t^{j_2}\bm{x}''\|_{L^1}
&\leq \|s^\frac12\partial_t^{j_1}\bm{x}''\|_{L^2} \|s^\frac12\partial_t^{j_2}\bm{x}''\|_{L^2} \\
&\leq \|\partial_t^{j_1}\bm{x}\|_{X^{m-j_1}}\|\partial_t^{j_2}\bm{x}\|_{X^{m-j_2}}\end{aligned}$$ which imply $\|h_j\|_{L^1}\lesssim 1$. By Lemma [**Lemma** 9](#lem:EstSolBVP2){reference-type="ref" reference="lem:EstSolBVP2"} with $\alpha=0$ we obtain $\|\partial_t^j\tau'(t)\|_{L^\infty} \lesssim 1$.
\(ii\) The case $j=m-2$. In this case, we have $j_1\leq m-3$ and $j_2\leq m-2$ so that $2\leq m-(j_1+1)$, $1\leq m-(j_2+1)$ and that $3\leq m-j_1$ and $2\leq m-j_2$. Therefore, we obtain $$\begin{aligned}
\|s^\frac12\partial_t^{j_1+1}\bm{x}'\cdot\partial_t^{j_2+1}\bm{x}'\|_{L^1}
&\leq \|\partial_t^{j_1+1}\bm{x}'\|_{L^2} \|s^\frac12\partial_t^{j_2+1}\bm{x}'\|_{L^2} \\
&\leq \|\partial_t^{j_1+1}\bm{x}\|_{X^{m-(j_1+1)}} \|\partial_t^{j_2+1}\bm{x}\|_{X^{m-(j_2+1)}}\end{aligned}$$ and $$\begin{aligned}
\|s^\frac32\partial_t^{j_1}\bm{x}''\cdot\partial_t^{j_2}\bm{x}''\|_{L^1}
&\leq \|s^\frac12\partial_t^{j_1}\bm{x}''\|_{L^2} \|s\partial_t^{j_2}\bm{x}''\|_{L^2} \\
&\leq \|\partial_t^{j_1}\bm{x}\|_{X^{m-j_1}}\|\partial_t^{j_2}\bm{x}\|_{X^{m-j_2}}\end{aligned}$$ which imply $\|s^\frac12h_{m-2}\|_{L^1}\lesssim 1$. By Lemma [**Lemma** 11](#lem:EstSolBVP3){reference-type="ref" reference="lem:EstSolBVP3"} with $\alpha=0$ and $p=2$, we obtain $\|\partial_t^{m-2}\tau'(t)\|_{L^2} \lesssim 1$.
**Step 2.** We then derive an estimate for $\|\partial_t^j\tau'\|_{X^{m-1-j}}$. To this end, we will evaluate $\|\partial_t^j\tau'\|_{X^{k+1}}$ inductively on $k$ for $0\leq k\leq m-2-j$. By [\[Ym1\]](#Ym1){reference-type="eqref" reference="Ym1"}, we have $\|\partial_t^j\tau'\|_{X^{k+1}}^2 = \|\partial_t^j\tau'\|_{L^2}^2 + \|\partial_t^j\tau''\|_{Y^k}^2$ so that it is sufficient to evaluate $\|\partial_t^j\tau''\|_{Y^k}$. Moreover, by [\[BVPdtj\]](#BVPdtj){reference-type="eqref" reference="BVPdtj"} we have $\|\partial_t^j\tau''\|_{Y^k} \leq \|(\partial_t^j\tau)|\bm{x}''|^2\|_{Y^k} + \|h_j\|_{Y^k}$. We evaluate the first term $\|(\partial_t^j\tau)|\bm{x}''|^2\|_{Y^k}$ by using the estimates obtained in the previous Step 1 and Lemma [**Lemma** 22](#lem:CalIneq2){reference-type="ref" reference="lem:CalIneq2"} as follows.
\(i\) The case $k=0$. $$\|(\partial_t^j\tau)|\bm{x}''|^2\|_{Y^0} \lesssim \|\partial_t^j\tau'\|_{L^2}\|\bm{x}\|_{X^4}^2.$$
\(ii\) The case $k=1$. Due to the restriction $0\leq k\leq m-2-j$, we have $j\leq m-3$, so that $$\|(\partial_t^j\tau)|\bm{x}''|^2\|_{Y^1} \lesssim \|\partial_t^j\tau'\|_{L^\infty} \|\bm{x}\|_{X^4}^2.$$
\(iii\) The case $2\leq k\leq m-2-j$. In this case we have $j\leq m-4$ and $k+2\leq m$, so that $$\|(\partial_t^j\tau)|\bm{x}''|^2\|_{Y^k} \lesssim \|\partial_t^j\tau'\|_{L^\infty \cap X^{k-1}} \|\bm{x}\|_{X^{k+2}}^2.$$ Therefore, we have $$\|(\partial_t^j\tau)|\bm{x}''|^2\|_{Y^k} \lesssim
\begin{cases}
1 &\mbox{for}\quad k=0,1, \\
1+\|\partial_t^j\tau'\|_{X^{k-1}} &\mbox{for}\quad 2\leq k\leq m-2-j,
\end{cases}$$ so that by induction we obtain $\|\partial_t^j\tau'\|_{X^{k+1}} \lesssim 1+\|h_j\|_{Y^k}$ for $0\leq k\leq m-2-j$. Particularly, we get $\|\partial_t^j\tau'\|_{X^{m-1-j}} \lesssim 1+\|h_j\|_{Y^{m-2-j}}$.
**Step 3.** We finally derive an estimate for $\|h_j\|_{Y^{m-2-j}}$. In view of $h_j=\partial_t^j|\dot{\bm{x}}'|^2-[\partial_t^j,|\bm{x}''|^2]\tau$, we have $$\|h_j\|_{Y^{m-2-j}} \lesssim \sum_{j_1+j_2=j,j_1\leq j_2}I_1(j_1,j_2;j)
+ \sum_{j_0+j_1+j_2=j, j_0\leq j-1, j_1\leq j_2}I_2(j_0,j_1,j_2;j),$$ where $$\begin{cases}
I_1(j_1,j_2;j) = \|\partial_t^{j_1+1}\bm{x}'\cdot\partial_t^{j_2+1}\bm{x}'\|_{Y^{m-2-j}}, \\
I_2(j_0,j_1,j_2;j) = \|(\partial_t^{j_0}\tau)\partial_t^{j_1}\bm{x}''\cdot\partial_t^{j_2}\bm{x}''\|_{Y^{m-2-j}}.
\end{cases}$$
We evaluate $I_1(j_1,j_2;j)$ by using Lemma [**Lemma** 21](#lem:CalIneq1){reference-type="ref" reference="lem:CalIneq1"} as follows.
\(i\) The case $j\leq m-4$. In this case we have $j_1\leq m-5$, so that $4\leq m-(j_1+1)$. Therefore, $$\begin{aligned}
I_1(j_1,j_2;j)
&\lesssim \|\partial_t^{j_1+1}\bm{x}\|_{X^{m-1-j \vee 4}} \|\partial_t^{j_2+1}\bm{x}\|_{X^{m-1-j}} \\
&\leq \|\partial_t^{j_1+1}\bm{x}\|_{X^{m-(j_1+1)}} \|\partial_t^{j_2+1}\bm{x}\|_{X^{m-(j_2+1)}}.\end{aligned}$$
\(ii\) The case $j=m-2,m-3$. In the case $j_2=j$, $$\begin{aligned}
I_1(0,j;j)
&\lesssim \|\partial_t\bm{x}\|_{X^4} \|\partial_t^{j+1}\bm{x}\|_{X^{m-1-j}} \\
&\leq \|\partial_t\bm{x}\|_{X^{m-1}} \|\partial_t^{j+1}\bm{x}\|_{X^{m-(j+1)}},\end{aligned}$$ and in the case $j_2\leq j-1$ $$\begin{aligned}
I_1(j_1,j_2;j)
&\lesssim \|\partial_t^{j_1+1}\bm{x}\|_{X^{m-j}} \|\partial_t^{j_2+1}\bm{x}\|_{X^{m-j}} \\
&\leq \|\partial_t^{j_1+1}\bm{x}\|_{X^{m-(j_1+1)}} \|\partial_t^{j_2+1}\bm{x}\|_{X^{m-(j_2+1)}}.\end{aligned}$$ In any of these cases, we have $I_1(j_1,j_2;j) \lesssim 1$.
We proceed to evaluate $I_2(j_0,j_1,j_2;j)$ by using Lemma [**Lemma** 22](#lem:CalIneq2){reference-type="ref" reference="lem:CalIneq2"} as follows.
\(i\) The case $j\leq m-3$. In this case we have $m-2-j\geq1$ and $j_1\leq m-4$, so that $$\begin{aligned}
I_2(j_0,j_1,j_2;j)
&\lesssim \|\partial_t^{j_0}\tau'\|_{L^\infty \cap X^{m-3-j}} \|\partial_t^{j_1}\bm{x}\|_{X^{m-j \vee 4}} \|\partial_t^{j_2}\bm{x}\|_{X^{m-j}} \\
&\lesssim \|\partial_t^{j_0}\tau'\|_{L^\infty \cap X^{m-1-j_0}} \|\partial_t^{j_1}\bm{x}\|_{X^{m-j_1}} \|\partial_t^{j_2}\bm{x}\|_{X^{m-j_2}}.\end{aligned}$$
\(ii\) The case $j=m-2$. In the case $j_2=j$, $$\begin{aligned}
I_2(0,0,j;j)
&\lesssim \|\tau'\|_{L^\infty} \|\bm{x}\|_{X^4} \|\partial_t^j\bm{x}\|_{X^2} \\
&\lesssim \|\tau'\|_{L^\infty} \|\bm{x}\|_{X^m} \|\partial_t^j\bm{x}\|_{X^{m-j}},\end{aligned}$$ and in the case $j_2\leq j-1=m-3$, $$\begin{aligned}
I_2(j_0,j_1,j_2;j)
&\lesssim \|\partial_t^{j_0}\tau'\|_{L^\infty} \|\partial_t^{j_1}\bm{x}\|_{X^3} \|\partial_t^{j_2}\bm{x}\|_{X^3} \\
&\lesssim \|\partial_t^{j_0}\tau'\|_{L^\infty} \|\partial_t^{j_1}\bm{x}\|_{X^{m-j_1}} \|\partial_t^{j_2}\bm{x}\|_{X^{m-j_2}}.\end{aligned}$$ In any of these cases, we have $I_2(j_0,j_1,j_2;j) \lesssim 1$. Therefore, we have shown that $\|h_j\|_{Y^{m-2-j}} \lesssim 1$.
Summarizing the above calculations, we have proved that $$\begin{cases}
\|\partial_t^j\tau'(t)\|_{L^\infty \cap X^{m-1-j}} \lesssim 1 &\mbox{in the case $j\leq m-3$}, \\
\|\partial_t^{m-2}\tau'(t)\|_{X^1} \lesssim 1&\mbox{in the case $j=m-2$}
\end{cases}$$ under the inductive hypothesis [\[IndAss1\]](#IndAss1){reference-type="eqref" reference="IndAss1"}. Therefore, we obtain the desired estimates. ◻
In the case $m=4$ we cannot expect that the estimates for the tension $\tau$ obtained in Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} hold. In this critical case, we obtain weaker estimates for the tension $\tau$, which are given in the following lemma.
****Lemma** 37**. *Let $M$ and $c_0$ be positive constants and $j$ an integer such that $0\leq j\leq 2$. For any $\epsilon>0$, there exists a positive constant $C(\epsilon)=C(M,c_0,\epsilon)$ such that if $\bm{x}$ satisfies $$\label{EstTauAss}
\sum_{l=0}^{j+1} \|\partial_t^l\bm{x}(t)\|_{X^{4-l}} \leq M, \qquad -\bm{g}\cdot\bm{x}'(1,t)\geq c_0,$$ then the solution $\tau$ to the boundary value problem [\[BVP\]](#BVP){reference-type="eqref" reference="BVP"} satisfies the following estimates: $$\begin{cases}
\|\partial_t^j\tau'(t)\|_{X_\epsilon^{3-j}} \leq C(\epsilon) &\mbox{in the case $j=0,1$}, \\
\|\partial_t^2\tau'(t)\|_{X_\epsilon^1} \leq C(\epsilon) &\mbox{in the case $j=2$}.
\end{cases}$$ In addition to [\[EstTauAss\]](#EstTauAss){reference-type="eqref" reference="EstTauAss"} with $j=2$, if we assume $\|\partial_t\bm{x}'(t)\|_{L^\infty} \leq M$, then we have $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*} \leq C$, where $C=C(M,c_0)>0$.*
*Proof.* As before, we prove this lemma by induction on $j$. In the following we will use estimates in Lemmas [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} and [**Lemma** 21](#lem:CalIneq1){reference-type="ref" reference="lem:CalIneq1"} and $\|u'\|_{X^k} \lesssim \|u\|_{X^{k+2}}$ without any comment.
\(i\) The case $j=0$. The estimate $\|\tau'\|_{L^\infty} \lesssim 1$ in the proof of Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} is still valid. By using the equation for $\tau$, we have $|\tau''|\lesssim |\dot{\bm{x}}'|^2+s|\bm{x}''|^2$, so that $$\begin{aligned}
\label{EstTau''}
\|s^\epsilon \tau''\|_{L^\infty}
&\lesssim \|s^\frac{\epsilon}{2}\dot{\bm{x}}'\|_{L^\infty}^2 + \|s^\frac12\bm{x}''\|_{L^\infty}^2 \\
&\lesssim \|\dot{\bm{x}}\|_{X^3}^2 + \|\bm{x}\|_{X^4}^2. \nonumber\end{aligned}$$ Differentiating the equation for $\tau$ with respect to $s$, we have $|\tau'''| \lesssim |\dot{\bm{x}}'\cdot\dot{\bm{x}}''|+|\bm{x}''|^2+s|\bm{x}''\cdot\bm{x}'''|$, so that $$\begin{aligned}
\label{EstTau'''}
\|s^{\frac12+\epsilon}\tau'''\|_{L^2}
&\lesssim \|s^\epsilon\dot{\bm{x}}'\|_{L^\infty} \|s^\frac12\dot{\bm{x}}''\|_{L^2}
+ \|s^\frac12\bm{x}''\|_{L^\infty}( \|s\bm{x}'''\|_{L^2} + \|\bm{x}''\|_{L^2} ) \\
&\lesssim \|\dot{\bm{x}}\|_{X^3}^2 + \|\bm{x}\|_{X^4}^2. \nonumber\end{aligned}$$ Similarly, we have $|\tau''''| \lesssim |\dot{\bm{x}}'\cdot\dot{\bm{x}}'''|+|\dot{\bm{x}}''|^2 + s(|\bm{x}''\cdot\bm{x}''''|+|\bm{x}'''|^2)
+ |\bm{x}''\cdot\bm{x}'''| + |\bm{x}''|^2|\tau''|$, so that $$\begin{aligned}
\|s^{\frac32+\epsilon}\tau''''\|_{L^2}
&\lesssim \|s^\epsilon\dot{\bm{x}}'\|_{L^\infty} \|s^\frac32\dot{\bm{x}}'''\|_{L^2}
+ \|s\dot{\bm{x}}''\|_{L^\infty} \|s^\frac12\dot{\bm{x}}''\|_{L^2} \\
&\quad\;
+ \|s^\frac12\bm{x}''\|_{L^\infty}( \|s^2\bm{x}''''\|_{L^2} + \|s\bm{x}'''\|_{L^2} + \|s^\frac12\tau''\|_{L^\infty}\|\bm{x}''\|_{L^2} ) \\
&\quad\;
+ \|s^\frac32\bm{x}''\|_{L^\infty}\|s\bm{x}'''\|_{L^2} \\
&\lesssim \|\dot{\bm{x}}\|_{X^3}^2 + (1+\|s^\frac12\tau''\|_{L^\infty})\|\bm{x}\|_{X^4}^2.\end{aligned}$$ Therefore, we obtain $\|\tau'(t)\|_{X_\epsilon^3} \lesssim 1$.
\(ii\) The case $j=1$. The estimate $\|\dot{\tau}'\|_{L^\infty} \lesssim 1$ in the proof of Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} is still valid. By the equation for $\dot{\tau}$, we have $|\dot{\tau}''|\lesssim |\dot{\bm{x}}'\cdot\ddot{\bm{x}}'|+s(|\bm{x}''\cdot\dot{\bm{x}}''|+|\bm{x}''|^2)$, so that $$\begin{aligned}
\label{EstdtTau''}
\|s^\epsilon\dot{\tau}''\|_{L^2}
&\lesssim \|s^\epsilon\dot{\bm{x}}'\|_{L^\infty} \|\ddot{\bm{x}}'\|_{L^2}
+ \|s^\frac12\bm{x}''\|_{L^\infty}( \|s^\frac12\dot{\bm{x}}''\|_{L^2} + \|\bm{x}''\|_{L^2} ) \\
&\lesssim \|\dot{\bm{x}}\|_{X^3} \|\ddot{\bm{x}}\|_{X^2}
+ \|\bm{x}\|_{X^4} ( \|\dot{\bm{x}}\|_{X^3} + \|\bm{x}\|_{X^4} ). \nonumber\end{aligned}$$ Similarly, we have $|\dot{\tau}'''| \lesssim |\dot{\bm{x}}'\cdot\ddot{\bm{x}}''| + |\dot{\bm{x}}''\cdot\ddot{\bm{x}}'|
+ s(|\bm{x}''\cdot\dot{\bm{x}}'''| + |\bm{x}'''\cdot\dot{\bm{x}}''| + |\bm{x}''\cdot\bm{x}'''| )
+ |\bm{x}''\cdot\dot{\bm{x}}''| + |\bm{x}''|^2$, so that $$\begin{aligned}
\|s^{1+\epsilon}\dot{\tau}'''\|_{L^2}
&\lesssim \|s^\epsilon\dot{\bm{x}}'\|_{L^\infty} \|s\ddot{\bm{x}}''\|_{L^2}
+ \|s\dot{\bm{x}}''\|_{L^\infty} ( \|\ddot{\bm{x}}'\|_{L^2} + \|s\bm{x}'''\|_{L^2} ) \\
&\quad\;
+ \|s^\frac12\bm{x}''\|_{L^\infty} ( \|s^\frac32\dot{\bm{x}}'''\|_{L^2} + \|s\bm{x}'''\|_{L^2}
+ \|s^\frac12\dot{\bm{x}}''\|_{L^2} + \|\bm{x}''\|_{L^2} ) \\
&\lesssim \|\dot{\bm{x}}\|_{X^3} \|\ddot{\bm{x}}\|_{X^2} + \|\bm{x}\|_{X^4} \|\dot{\bm{x}}\|_{X^3} + \|\bm{x}\|_{X^4}^2.\end{aligned}$$ Therefore, we obtain $\|\dot{\tau}'(t)\|_{X_\epsilon^2} \lesssim 1$.
\(iii\) The case $j=2$. The estimate $\|\ddot{\tau}'\|_{L^2} \lesssim 1$ in the proof of Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} is still valid. By the equation for $\ddot{\tau}$, we have $|\ddot{\tau}''| \lesssim |\dot{\bm{x}}'\cdot \dddot{\bm{x}}'| + |\ddot{\bm{x}}'|^2
+ s( |\bm{x}''\cdot\ddot{\bm{x}}''| + |\dot{\bm{x}}''|^2 + |\bm{x}''\cdot\dot{\bm{x}}''|)
+ s^\frac12|\bm{x}''|^2$, so that $$\begin{aligned}
\|s^{\frac12+\epsilon}\ddot{\tau}''\|_{L^2}
&\lesssim \|s^\epsilon\dot{\bm{x}}'\|_{L^\infty} \|s^\frac12\dddot{\bm{x}}'\|_{L^2}
+ \|s^\frac12\ddot{\bm{x}}'\|_{L^\infty} \|\ddot{\bm{x}}'\|_{L^2} + \|s\dot{\bm{x}}''\|_{L^\infty}\|s^\frac12\dot{\bm{x}}''\|_{L^2} \\
&\quad\;
+ \|s^\frac12\bm{x}''\|_{L^\infty} ( \|s\ddot{\bm{x}}''\|_{L^2} + \|s^\frac12\dot{\bm{x}}''\|_{L^2} + \|\bm{x}''\|_{L^2} ) \\
&\lesssim \|\dot{\bm{x}}\|_{X^3} \|\dddot{\bm{x}}\|_{X^1} + (\|\ddot{\bm{x}}\|_{X^2}+\|\dot{\bm{x}}\|_{X^3}+\|\bm{x}\|_{X^4})^2.\end{aligned}$$ Moreover, it follows from [\[BVPdtj\]](#BVPdtj){reference-type="eqref" reference="BVPdtj"} with $j=2$ and Lemma [**Lemma** 11](#lem:EstSolBVP3){reference-type="ref" reference="lem:EstSolBVP3"} that $\|s^\epsilon\ddot{\tau}'\|_{L^\infty} \lesssim |a_2| + \|s^\epsilon h_2\|_{L^1}$. Here, we see that $\|s^\epsilon h_2\|_{L^1} \leq \epsilon^{-\frac12}\|s^{\frac12(1+\epsilon)}h_2\|_{L^2}$, which can be evaluated as above. Therefore, we obtain $\|\ddot{\tau}'(t)\|_{X_\epsilon^1} \lesssim 1$.
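For the reader's convenience, the constant $\epsilon^{-\frac12}$ in the last step is nothing more than the Cauchy--Schwarz constant produced by the weight (this is a routine computation and uses no additional hypotheses): $$\|s^\epsilon h_2\|_{L^1}=\int_0^1 s^{\frac{\epsilon-1}{2}}\,s^{\frac12(1+\epsilon)}|h_2(s)|\,\mathrm{d}s\leq\Bigl(\int_0^1 s^{\epsilon-1}\,\mathrm{d}s\Bigr)^{\frac12}\|s^{\frac12(1+\epsilon)}h_2\|_{L^2}=\epsilon^{-\frac12}\|s^{\frac12(1+\epsilon)}h_2\|_{L^2}.$$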
The only reason to use an additional weight $s^\epsilon$ in the above estimates is the lack of an estimate for $\|\partial_t\bm{x}'(t)\|_{L^\infty}$; we note that $\partial_t\bm{x}(t) \in X^3$ does not necessarily imply $\partial_t\bm{x}'(t) \in L^\infty$. Therefore, we obtain the latter assertion of the lemma. The proof is complete. ◻
# Estimates for initial values {#sect:EstID}
In this section we evaluate the initial value $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(0) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m$ in terms of the initial data $\|\bm{x}(0)\|_{X^m}$ and $\|\dot{\bm{x}}(0)\|_{X^{m-1}}$. Although it is sufficient to evaluate $\partial_t^j\bm{x}$ only at time $t=0$, we will evaluate them at a general time $t$. We recall that the operator $\mathscr{A}_\tau$ was defined by [\[defA\]](#defA){reference-type="eqref" reference="defA"}.
****Lemma** 38**. *If $\tau|_{s=0}=0$, then we have $$\|\mathscr{A}_\tau\bm{x}\|_{X^m} \lesssim
\begin{cases}
\min\{ \|\tau'\|_{L^\infty} \|\bm{x}\|_{X^2}, \|\tau'\|_{X^1} \|\bm{x}\|_{X^3} \} &\mbox{for}\quad m=0, \\
\min\{ \|\tau'\|_{X^{m \vee 2}} \|\bm{x}\|_{X^{m+2}}, \|\tau'\|_{X^m} \|\bm{x}\|_{X^{m+2 \vee 4}} \} &\mbox{for}\quad m=0,1,2,\ldots.
\end{cases}$$*
*Proof.* We put $\mu(s)=\frac{\tau(s)}{s}=\frac{1}{s}\int_0^s\tau'(\sigma)\mathrm{d}\sigma=(\mathscr{M}\tau')(s)$, where $\mathscr{M}$ is the averaging operator defined by [\[defM\]](#defM){reference-type="eqref" reference="defM"}, and $(A_2u)(s)=-(su'(s))'$. Then, we have the identity $$\label{IdAtau}
\mathscr{A}_\tau\bm{x} = \mu A_2\bm{x}+(\mu-\tau')\bm{x}'.$$ Therefore, by Lemmas [**Lemma** 13](#lem:EstA2){reference-type="ref" reference="lem:EstA2"} and [**Lemma** 18](#lem:Algebra){reference-type="ref" reference="lem:Algebra"} and Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} we see that $$\begin{aligned}
\|\mathscr{A}_\tau\bm{x}\|_{L^2}
&\leq \|\mu\|_{L^\infty}\|A_2\bm{x}\|_{L^2} + (\|\mu\|_{L^\infty}+\|\tau'\|_{L^\infty})\|\bm{x}'\|_{L^2} \\
&\lesssim \|\tau'\|_{L^\infty} \|\bm{x}\|_{X^2}, \\
\|\mathscr{A}_\tau\bm{x}\|_{L^2}
&\lesssim \|\mu\|_{X^1}\|A_2\bm{x}\|_{X^1} + (\|\mu\|_{X^1}+\|\tau'\|_{X^1})\|\bm{x}'\|_{X^1} \\
&\lesssim \|\tau'\|_{X^1}\|\bm{x}\|_{X^3}, \end{aligned}$$ and that $$\begin{aligned}
\|\mathscr{A}_\tau\bm{x}\|_{X^m}
&\lesssim \|\mu\|_{X^{m \vee 2}} \|A_2\bm{x}\|_{X^m} + ( \|\mu\|_{X^{m \vee 2}} + \|\tau'\|_{X^{m \vee 2}} )\|\bm{x}'\|_{X^m} \\
&\lesssim \|\tau'\|_{X^{m \vee 2}} \|\bm{x}\|_{X^{m+2}}, \\
\|\mathscr{A}_\tau\bm{x}\|_{X^m}
&\lesssim \|\mu\|_{X^m} \|A_2\bm{x}\|_{X^{m \vee 2}} + ( \|\mu\|_{X^m} + \|\tau'\|_{X^m} )\|\bm{x}'\|_{X^{m \vee 2}} \\
&\lesssim \|\tau'\|_{X^m} \|\bm{x}\|_{X^{m+2 \vee 4}}.\end{aligned}$$ Therefore, we obtain the desired estimates. ◻
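Since the definition [\[defA\]](#defA){reference-type="eqref" reference="defA"} of $\mathscr{A}_\tau$ is not reproduced in this excerpt, it may help to record the algebra behind the identity [\[IdAtau\]](#IdAtau){reference-type="eqref" reference="IdAtau"}: using only $\tau=s\mu$ and $(A_2\bm{x})(s)=-(s\bm{x}'(s))'$, one computes $$\mu A_2\bm{x}+(\mu-\tau')\bm{x}'=-\mu(s\bm{x}')'+(\mu-\tau')\bm{x}'=-s\mu\bm{x}''-\tau'\bm{x}'=-(\tau\bm{x}''+\tau'\bm{x}')=-(\tau\bm{x}')',$$ which is the second-order operator $\mathscr{A}_\tau$ applied to $\bm{x}$, up to the sign convention fixed in [\[defA\]](#defA){reference-type="eqref" reference="defA"} (not restated here).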
****Lemma** 39**. *Let $M$ and $c_0$ be positive constants and $m$ an integer such that $m\geq4$. There exists a positive constant $C=C(M,c_0,m)$ such that if $(\bm{x},\tau)$ is a solution to [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} and [\[BC\]](#BC){reference-type="eqref" reference="BC"} satisfying $$\begin{cases}
\|\bm{x}(t)\|_{X^m} + \|\partial_t\bm{x}(t)\|_{X^{m-1}} \leq M, \\
-\bm{g}\cdot\bm{x}'(1,t) \geq c_0,
\end{cases}$$ then we have $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m \leq C$.*
*Proof.* We will prove $\|\partial_t^j\bm{x}(t)\|_{X^{m-j}} \leq C$ inductively for $j=2,3,\ldots,m$.
**The case $m\geq5$.** We first consider the case $m\geq5$. By Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} with $j=0$, we have $\|\tau'(t)\|_{L^\infty \cap X^{m-1}} \lesssim 1$. Now, assuming $2\leq j\leq m$ and $$\sum_{l=0}^{j-1} \|\partial_t^l\bm{x}(t)\|_{X^{m-l}}
+ \sum_{l=0}^{j-2} \|\partial_t^l\tau'(t)\|_{L^\infty \cap X^{m-(l+1)}} \lesssim 1,$$ we will evaluate $(\partial_t^j\bm{x},\partial_t^{j-1}\tau')$. In the case $2\leq j\leq m-1$, we evaluate $\partial_t^j\bm{x}$ as $$\begin{aligned}
\|\partial_t^j\bm{x}\|_{X^{m-j}}
&= \|\partial_t^{j-2}(\mathscr{A}_\tau\bm{x}-\bm{g})\|_{X^{m-j}} \\
&\lesssim \sum_{j_1+j_2=j-2}\|((\partial_t^{j_1}\tau)\partial_t^{j_2}\bm{x}')'\|_{X^{m-j}}+1.\end{aligned}$$ Here, we have $j_1\leq m-3$ so that $2\leq m-(j_1+1)$. Therefore, by Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} $$\begin{aligned}
\|((\partial_t^{j_1}\tau)\partial_t^{j_2}\bm{x}')'\|_{X^{m-j}}
&\lesssim \|\partial_t^{j_1}\tau'\|_{X^{m-j \vee 2}} \|\partial_t^{j_2}\bm{x}\|_{X^{m-j+2}} \\
&\leq \|\partial_t^{j_1}\tau'\|_{X^{m-(j_1+1)}} \|\partial_t^{j_2}\bm{x}\|_{X^{m-j_2}}.\end{aligned}$$ These estimates give $\|\partial_t^j\bm{x}(t)\|_{X^{m-j}} \lesssim 1$. Now, we can apply Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} with $j$ replaced by $j-1$ to obtain $\|\partial_t^{j-1}\tau'(t)\|_{L^\infty \cap X^{m-j}} \lesssim 1$.
In the case $j=m$, we evaluate $\partial_t^m\bm{x}$ as $$\begin{aligned}
\|\partial_t^m\bm{x}\|_{L^2}
&= \|\partial_t^{m-2}\mathscr{A}_\tau\bm{x}\|_{X^{m-j}} \\
&\lesssim \|((\partial_t^{m-2}\tau)\bm{x}')'\|_{L^2} + \sum_{j_1+j_2=m-2, j_1\leq m-3}\|((\partial_t^{j_1}\tau)\partial_t^{j_2}\bm{x}')'\|_{L^2}.\end{aligned}$$ Here, by Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} we see that $\|((\partial_t^{m-2}\tau)\bm{x}')'\|_{L^2} \lesssim \|\partial_t^{m-2}\tau'\|_{X^1}\|\bm{x}\|_{X^3} \lesssim 1$ and that $$\begin{aligned}
\|((\partial_t^{j_1}\tau)\partial_t^{j_2}\bm{x}')'\|_{L^2}
&\lesssim \|\partial_t^{j_1}\tau'\|_{X^2} \|\partial_t^{j_2}\bm{x}\|_{X^2} \\
&\lesssim \|\partial_t^{j_1}\tau'\|_{X^{m-(j_1+1)}} \|\partial_t^{j_2}\bm{x}\|_{X^{m-j_2}}.\end{aligned}$$ These estimates give $\|\partial_t^m\bm{x}(t)\|_{L^2} \lesssim 1$. Therefore, by induction we obtain $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m \lesssim 1$.
**The case $m=4$.** We then consider the case $m=4$. By Lemma [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"} with $j=0$, we have $\|\tau'(t)\|_{X_\epsilon^3} \leq C(\epsilon)$ for $\epsilon>0$. Since $X_\epsilon^3 \hookrightarrow X^2$ for $0<\epsilon\leq\frac12$, we have also $\|\tau'(t)\|_{X^2}\lesssim1$. By Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"}, $$\begin{aligned}
\|\partial_t^2\bm{x}\|_{X^2}
&\leq \|\mathscr{A}_\tau\bm{x}\|_{X^2} + 1 \\
&\lesssim \|\tau'\|_{X^2} \|\bm{x}\|_{X^4} + 1.\end{aligned}$$ Therefore, we obtain $\|\partial_t^2\bm{x}(t)\|_{X^2} \lesssim 1$. Then, by Lemma [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"} with $j=1$, we have $\|\dot{\tau}'(t)\|_{X_\epsilon^2} \leq C(\epsilon)$ for $\epsilon>0$. Since $X_\epsilon^2 \hookrightarrow X^1$ for $0<\epsilon\leq\frac12$, we have also $\|\dot{\tau}'(t)\|_{X^1}\lesssim1$. By Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"}, $$\begin{aligned}
\|\partial_t^3\bm{x}\|_{X^1}
&\leq \|\mathscr{A}_\tau\dot{\bm{x}}\|_{X^1} + \|\mathscr{A}_{\dot{\tau}}\bm{x}\|_{X^1} \\
&\lesssim \|\tau'\|_{X^2} \|\dot{\bm{x}}\|_{X^3} + \|\dot{\tau}'\|_{X^1} \|\bm{x}\|_{X^4}.\end{aligned}$$ Therefore, we obtain $\|\partial_t^3\bm{x}(t)\|_{X^1} \lesssim 1$. Then, by Lemma [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"} with $j=2$, we have $\|\ddot{\tau}'(t)\|_{X_\epsilon^1} \leq C(\epsilon)$ for $\epsilon>0$. Since $X_\epsilon^1 \hookrightarrow L^2$ for $0<\epsilon<\frac12$, we have also $\|\ddot{\tau}'(t)\|_{L^2}\lesssim1$. By Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"}, $$\begin{aligned}
\|\partial_t^4\bm{x}\|_{L^2}
&\leq \|\mathscr{A}_\tau\ddot{\bm{x}}\|_{L^2} + 2\|\mathscr{A}_{\dot{\tau}}\dot{\bm{x}}\|_{L^2} + \|\mathscr{A}_{\ddot{\tau}}\bm{x}\|_{L^2} \\
&\lesssim \|\tau'\|_{X^2} \|\ddot{\bm{x}}\|_{X^2} + \|\dot{\tau}'\|_{X^1} \|\dot{\bm{x}}\|_{X^3} + \|\ddot{\tau}'\|_{L^2} \|\bm{x}\|_{X^4}.\end{aligned}$$ Therefore, we obtain $\|\partial_t^4\bm{x}(t)\|_{L^2} \lesssim 1$. Summarizing these estimates, we get $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_4 \lesssim 1$. The proof is complete. ◻
# A priori estimates of solutions {#sect:APE}
In this last section, we prove Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"}.
****Lemma** 40**. *For any integer $m\geq4$ and any positive constants $M_1$, $M_2$, and $c_0$, there exists a positive constant $C_2=C_2(M_1,M_2,c_0,m)$ such that if $(\bm{x},\tau)$ is a regular solution to [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} and [\[BC\]](#BC){reference-type="eqref" reference="BC"} satisfying $$\label{EstAss}
\begin{cases}
\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1} \leq M_1, \qquad -\bm{g}\cdot\bm{x}'(1,t) \geq c_0, \\
\|\partial_t^{m-1}\bm{x}(t)\|_{X^1}^2 + \|\partial_t^{m-2}\bm{x}(t)\|_{X^2}^2 \leq M_2,
\end{cases}$$ then we have $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m \leq C_2$.*
The proof of this lemma is divided into two cases: (i) $m\geq5$ and (ii) $m=4$.
## Proof of Lemma [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"} in the case $m\geq5$ {#proof-of-lemma-lemapex-in-the-case-mgeq5}
In this subsection, we prove Lemma [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"} in the case $m\geq5$, so that we suppose $m\geq5$. For $3\leq j\leq m+1$ we are going to show $$\label{IndAss}
\sum_{l=1}^{j-1} \|\partial_t^{m-l}\bm{x}(t)\|_{X^{l}} \lesssim 1$$ by induction on $j$. Therefore, assuming that [\[IndAss\]](#IndAss){reference-type="eqref" reference="IndAss"} holds for some $3\leq j\leq m$ we evaluate $\|\partial_t^{m-j}\bm{x}(t)\|_{X^j}$, which can be written as $$\begin{aligned}
\label{NormDeco}
\|\partial_t^{m-j}\bm{x}(t)\|_{X^j}^2
&= \|\partial_t^{m-j}\bm{x}(t)\|_{L^2}^2 + \|\partial_t^{m-j}\bm{x}'(t)\|_{X^{j-2}}^2 \\
&\quad\; +
\begin{cases}
\|s^k\partial_s^{2k}\partial_t^{m-2k}\bm{x}(t)\|_{L^2}^2 &\mbox{for}\quad j=2k, \\
\|s^{k+\frac12}\partial_s^{2k+1}\partial_t^{m-(2k+1)}\bm{x}(t)\|_{L^2}^2 &\mbox{for}\quad j=2k+1.
\end{cases} \nonumber\end{aligned}$$ The first term in the right-hand side can be easily evaluated.
We proceed to evaluate the second term. By the assumptions and Lemmas [**Lemma** 8](#lem:EstSolBVP1){reference-type="ref" reference="lem:EstSolBVP1"}, [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"}, and [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"}, we have $\tau(s,t)\simeq s$ and $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*} \lesssim 1$ in the case $m\geq6$ and $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*,\epsilon} \leq C_0(\epsilon)$ for any $\epsilon>0$ in the case $m=5$. We introduce a new quantity $\mu(s,t)$ by $$\label{defMu}
\mu(s,t) = \frac{\tau(s,t)}{s} = \frac1s \int_0^s \tau'(\sigma,t)\mathrm{d}\sigma = \mathscr{M}(\tau'(\cdot,t))(s),$$ where $\mathscr{M}$ is the averaging operator defined by [\[defM\]](#defM){reference-type="eqref" reference="defM"} and we have used the boundary condition $\tau|_{s=0}=0$. Then, by Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} we have also $\mu(s,t)\simeq 1$ and $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \mu(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*} \lesssim 1$ in the case $m\geq6$ and $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \mu(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*,\epsilon} \leq C_0(\epsilon)$ for any $\epsilon\in(0,1)$ in the case $m=5$. Integrating the hyperbolic equation for $\bm{x}$ in [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} with respect to $s$ over $[0,s]$ and using the boundary condition $\tau|_{s=0}=0$, we obtain $\tau\bm{x}'=\int_0^s(\ddot{\bm{x}}-\bm{g})\mathrm{d}\sigma$, so that $$\label{ExpX}
\bm{x}' = \mu^{-1}(\mathscr{M}\ddot{\bm{x}}-\bm{g}).$$ Roughly speaking, this expression allows us to convert estimates for the time derivatives of $\bm{x}$ into estimates for the spatial derivatives of $\bm{x}$ with a smaller weight in $s$. Differentiating this with respect to $t$ and using $\mu(s,t)\simeq1$ and $|\partial_t\mu(s,t)|\lesssim1$, we have $|\partial_t\bm{x}'| \lesssim |\mathscr{M}(\partial_t^3\bm{x})| + |\mathscr{M}(\partial_t^2\bm{x})| + 1$. Here, we see that $\|\partial_t^3\bm{x}\|_{L^\infty} \lesssim \|\partial_t^3\bm{x}\|_{X^2} \lesssim 1$ and $\|\partial_t^2\bm{x}\|_{L^\infty} \lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_4 \lesssim 1$. Therefore, by Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} we get $\|\partial_t\bm{x}'(t)\|_{L^\infty} \lesssim 1$. By Lemma [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"} again, we obtain $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*} \lesssim 1$ so that $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \mu(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*} \lesssim 1$. In other words, we have $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \mu(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*} \lesssim 1$ in all cases. We will use these estimates in the following without any comment.
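For completeness, the passage from the integrated equation to [\[ExpX\]](#ExpX){reference-type="eqref" reference="ExpX"} uses only the definition [\[defMu\]](#defMu){reference-type="eqref" reference="defMu"} and the fact that the gravitational acceleration $\bm{g}$ does not depend on $s$ (so that $\mathscr{M}\bm{g}=\bm{g}$; we make this assumption explicit here): $$\mu\bm{x}'=\frac{\tau}{s}\,\bm{x}'=\frac1s\int_0^s(\ddot{\bm{x}}-\bm{g})\,\mathrm{d}\sigma=\mathscr{M}\ddot{\bm{x}}-\bm{g},$$ and since $\mu\simeq1$ we may divide by $\mu$.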
Now, we go back to evaluate the second term in the right-hand side of [\[NormDeco\]](#NormDeco){reference-type="eqref" reference="NormDeco"}. In the case $j=3$, by Lemma [**Lemma** 18](#lem:Algebra){reference-type="ref" reference="lem:Algebra"} and Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} we see that $$\begin{aligned}
\|\partial_t^{m-3}\bm{x}'\|_{X^1}
&\lesssim \|\partial_t^{m-3}(\mu^{-1})\|_{X^1} \|\mathscr{M}\ddot{\bm{x}}-\bm{g}\|_{X^2} \\
&\quad\;
+ \sum_{j_1+j_2=m-3, j_1\leq m-4} \|\partial_t^{j_1}(\mu^{-1})\|_{X^2} \|\mathscr{M}(\partial_t^{j_2+2}\bm{x})\|_{X^1} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \mu^{-1} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*}
\Biggl( 1+\|\ddot{\bm{x}}\|_{X^2} + \sum_{j_2=1}^{m-3} \|\partial_t^{j_2+2}\bm{x}\|_{X^1} \Biggr),\end{aligned}$$ which together with Lemma [**Lemma** 19](#lem:EstCompFunc1){reference-type="ref" reference="lem:EstCompFunc1"} yields $\|\partial_t^{m-3}\bm{x}'(t)\|_{X^1} \lesssim 1$. We then consider the case $4\leq j\leq m$. By Lemma [**Lemma** 18](#lem:Algebra){reference-type="ref" reference="lem:Algebra"} and Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} we see that $$\begin{aligned}
\|\partial_t^{m-j}\bm{x}'\|_{X^{j-2}}
&\lesssim \sum_{j_1+j_2=m-j} \|\partial_t^{j_1}(\mu^{-1})\|_{X^{j-2}} \|\mathscr{M}(\partial_t^{j_2+2}\bm{x}) - \partial_t^{j_2}\bm{g}\|_{X^{j-2}} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \mu^{-1} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*}( 1+\|\partial_t^{m-(j-2)}\bm{x}\|_{X^{j-2}}+\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1} ),\end{aligned}$$ which together with Lemma [**Lemma** 19](#lem:EstCompFunc1){reference-type="ref" reference="lem:EstCompFunc1"} yields $\|\partial_t^{m-j}\bm{x}'(t)\|_{X^{j-2}} \lesssim 1$.
It remains to evaluate the last term in [\[NormDeco\]](#NormDeco){reference-type="eqref" reference="NormDeco"}. Applying $\partial_s^{j-2}\partial_t^{m-j}$ to the hyperbolic equation for $\bm{x}$, we have $\partial_s^{j-2}\partial_t^{m-j+2}\bm{x} = \tau\partial_s^j\partial_t^{m-j}\bm{x} + [\partial_s^{j-1},\tau]\partial_t^{m-j}\bm{x}' + \partial_s^{j-2}[\partial_t^{m-j},\mathscr{A}_\tau]\bm{x}$, so that $$\begin{aligned}
s|\partial_s^j\partial_t^{m-j}\bm{x}|
&\lesssim |\partial_s^{j-2}\partial_t^{m-j+2}\bm{x}| + |[\partial_s^{j-1},\tau]\partial_t^{m-j}\bm{x}'| + |\partial_s^{j-2}[\partial_t^{m-j},\mathscr{A}_\tau]\bm{x}|.\end{aligned}$$ By making use of this expression, we evaluate the last term in [\[NormDeco\]](#NormDeco){reference-type="eqref" reference="NormDeco"}. In the case $j=2k$ with $k\geq2$, by Lemmas [**Lemma** 20](#lem:commutator){reference-type="ref" reference="lem:commutator"} and [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} we see that $$\begin{aligned}
\|s^k\partial_s^{2k}\partial_t^{m-2k}\bm{x}\|_{L^2}
&\lesssim \|\partial_t^{m-2(k-1)}\bm{x}\|_{X^{2(k-1)}} + \|s^{k-1}[\partial_s^{2k-1},\tau]\partial_t^{m-2k}\bm{x}'\|_{L^2} \\
&\quad\;
+ \|[\partial_t^{m-2k},\mathscr{A}_\tau]\bm{x}\|_{X^{2(k-1)}} \\
&\lesssim \|\partial_t^{m-2(k-1)}\bm{x}\|_{X^{2(k-1)}} + \|\tau'\|_{X^{2(k-1)}} \|\partial_t^{m-2k}\bm{x}'\|_{X^{2(k-1)}} \\
&\quad\;
+ \sum_{j_1+j_2=m-2k-1}\|\partial_t^{j_1+1}\tau'\|_{X^{2(k-1)}} \|\partial_t^{j_2}\bm{x}\|_{X^{2k}} \\
&\lesssim \|\partial_t^{m-(j-2)}\bm{x}\|_{X^{j-2}}
+ \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau' \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*}( \|\partial_t^{m-j}\bm{x}'\|_{X^{j-2}} + \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1} ),\end{aligned}$$ which implies $\|s^k\partial_s^{2k}\partial_t^{m-2k}\bm{x}\|_{L^2} \lesssim 1$. We then consider the case $j=2k+1$ with $k\geq1$. By Lemmas [**Lemma** 20](#lem:commutator){reference-type="ref" reference="lem:commutator"} and [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} we see that $$\begin{aligned}
\|s^{k+\frac12}\partial_s^{2k+1}\partial_t^{m-(2k+1)}\bm{x}\|_{L^2}
&\lesssim \|\partial_t^{m-(2k-1)}\bm{x}\|_{X^{2k-1}} + \|s^{k-\frac12}[\partial_s^{2k},\tau]\partial_t^{m-(2k+1)}\bm{x}'\|_{L^2} \\
&\quad\;
+ \|[\partial_t^{m-(2k+1)},\mathscr{A}_\tau]\bm{x}\|_{X^{2k-1}} \\
&\lesssim \|\partial_t^{m-(2k-1)}\bm{x}\|_{X^{2k-1}} + \|\tau'\|_{X^{2k-1 \vee 2}} \|\partial_t^{m-(2k+1)}\bm{x}'\|_{X^{2k-1}} \\
&\quad\;
+ \|[\partial_t^{m-j},\mathscr{A}_\tau]\bm{x}\|_{X^{j-2}} \\
&\lesssim \|\partial_t^{m-(j-2)}\bm{x}\|_{X^{j-2}} + \|\tau'\|_{X^{m-2}} \|\partial_t^{m-j}\bm{x}'\|_{X^{j-2}} \\
&\quad\;
+ \|[\partial_t^{m-j},\mathscr{A}_\tau]\bm{x}\|_{X^{j-2}}.\end{aligned}$$ Here, the last term is evaluated as $$\begin{aligned}
\|[\partial_t^{m-j},\mathscr{A}_\tau]\bm{x}\|_{X^{j-2}}
&\lesssim \|\partial_t^{m-j}\tau'\|_{X^{j-2}} \|\bm{x}\|_{X^{j \vee 4}} \\
&\quad\;
+ \sum_{j_1+j_2=m-j-1, j_2\geq1} \|\partial_t^{j_1+1}\tau'\|_{X^{j-2 \vee 2}} \|\partial_t^{j_2}\bm{x}\|_{X^{j-2}} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau' \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*} \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1}.\end{aligned}$$ These estimates yield $\|s^{k+\frac12}\partial_s^{2k+1}\partial_t^{m-(2k+1)}\bm{x}\|_{L^2} \lesssim 1$.
Summarizing the above argument, we see that under the induction hypothesis [\[IndAss\]](#IndAss){reference-type="eqref" reference="IndAss"} it holds that $\|\partial_t^{m-j}\bm{x}\|_{X^j} \lesssim 1$. Therefore, we obtain $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m,*} \leq C_2$. Finally, we evaluate $\|\partial_t^m\bm{x}\|_{L^2}$ as follows. Thanks to the estimate $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m,*} \lesssim 1$, we can improve the estimate for $\tau'$ as $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau' \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1,*} \lesssim 1$ by Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"}. Therefore, by Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} we see that $$\begin{aligned}
\|\partial_t^m\bm{x}\|_{L^2}
&= \|\partial_t^{m-2}(\mathscr{A}_\tau\bm{x})\|_{L^2} \\
&\lesssim \|\tau'\|_{X^2} \|\partial_t^{m-2}\bm{x}\|_{X^2}
+ \sum_{j_1+j_2=m-2, j_2\leq m-3} \|\partial_t^{j_1}\tau'\|_{X^1} \|\partial_t^{j_2}\bm{x}\|_{X^3} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau' \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1,*} \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m,*},\end{aligned}$$ which gives $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m \lesssim 1$. The proof of Lemma [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"} in the case $m\geq5$ is complete. $\Box$
## Proof of Lemma [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"} in the case $m=4$ {#proof-of-lemma-lemapex-in-the-case-m4}
In this subsection, we prove Lemma [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"} in the critical case $m=4$, so that we suppose $m=4$. The proof consists of several steps. Before going into the proof, we note that by Lemma [**Lemma** 8](#lem:EstSolBVP1){reference-type="ref" reference="lem:EstSolBVP1"} we have $\tau(s,t)\simeq s$ and $|\tau'(s,t)|\lesssim1$. Particularly, the quantity $\mu$ defined by [\[defMu\]](#defMu){reference-type="eqref" reference="defMu"} satisfies $\mu(s,t)\simeq1$.
**Step 1.** Estimate for $\|\bm{x}'(t)\|_{X^2}$. We rewrite [\[ExpX\]](#ExpX){reference-type="eqref" reference="ExpX"} as $$\label{ExpX2}
\mu\bm{x}' = \mathscr{M}\ddot{\bm{x}}-\bm{g}.$$ Differentiating this with respect to $s$, we have $\mu\bm{x}''+\mu'\bm{x}'=(\mathscr{M}\ddot{\bm{x}})'$. Taking an inner product of this equation with $\bm{x}'$ and using the constraint $|\bm{x}'|^2=1$ together with $\bm{x}'\cdot\bm{x}''=0$, we obtain $\mu'=\bm{x}'\cdot(\mathscr{M}\ddot{\bm{x}})'$. Therefore, by Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} we see that $\|\mu'\|_{L^2} \leq \|(\mathscr{M}\ddot{\bm{x}})'\|_{L^2} \leq \frac23\|\ddot{\bm{x}}'\|_{L^2}\lesssim1$. Then, in view of $\bm{x}''=\mu^{-1}( (\mathscr{M}\ddot{\bm{x}})' - \mu'\bm{x}')$ we obtain $\|\bm{x}''\|_{L^2} \lesssim \|(\mathscr{M}\ddot{\bm{x}})'\|_{L^2} + \|\mu'\|_{L^2} \lesssim 1$.
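The precise statement of Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} is not reproduced in this excerpt; as a sketch of where a constant such as $\frac23$ can come from (under the assumption that the corollary is of this weighted Hardy type), note that for sufficiently regular $u$ an integration by parts gives $$(\mathscr{M}u)'(s)=\frac{\mathrm{d}}{\mathrm{d}s}\Bigl(\frac1s\int_0^s u(\sigma)\,\mathrm{d}\sigma\Bigr)=\frac1{s^2}\int_0^s\sigma\,u'(\sigma)\,\mathrm{d}\sigma,$$ and the weighted Hardy inequality for the kernel $s^{-2}\sigma\mathbb{1}_{\{\sigma<s\}}$, whose $L^2(0,1)$ operator norm is at most $\bigl(2-\tfrac12\bigr)^{-1}=\tfrac23$, then yields $\|(\mathscr{M}u)'\|_{L^2}\leq\frac23\|u'\|_{L^2}$.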
Differentiating [\[ExpX2\]](#ExpX2){reference-type="eqref" reference="ExpX2"} twice with respect to $s$, we have $\mu\bm{x}'''+2\mu'\bm{x}''+\mu''\bm{x}'=(\mathscr{M}\ddot{\bm{x}})''$. Taking an inner product of this equation with $\bm{x}'$ and using $\bm{x}'\cdot\bm{x}'''+|\bm{x}''|^2=0$, we obtain $\mu''=\bm{x}'\cdot(\mathscr{M}\ddot{\bm{x}})''+\mu|\bm{x}''|^2$. Therefore, by Corollary [**Corollary** 24](#cor:WEM1){reference-type="ref" reference="cor:WEM1"} and Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"} we see that $$\begin{aligned}
\|s\mu''\|_{L^2}
&\lesssim \|s(\mathscr{M}\ddot{\bm{x}})''\|_{L^2} + \|s\bm{x}''\|_{L^\infty}\|\bm{x}''\|_{L^2} \\
&\lesssim \|\ddot{\bm{x}}\|_{X^2} + \|\bm{x}\|_{X^3}\|\bm{x}''\|_{L^2},\end{aligned}$$ which implies $\|s\mu''\|_{L^2} \lesssim 1$. Then, in view of $\bm{x}'''=\mu^{-1}( (\mathscr{M}\ddot{\bm{x}})''-2\mu'\bm{x}''-\mu''\bm{x}')$ we obtain $\|s\bm{x}'''\|_{L^2} \lesssim \|s(\mathscr{M}\ddot{\bm{x}})''\|_{L^2} + \|\mu'\|_{L^2}\|s\bm{x}''\|_{L^\infty} + \|s\mu''\|_{L^2} \lesssim 1$. Therefore, we obtain $$\label{EstX'}
\|\bm{x}'(t)\|_{X^2} + \|\mu(t)\|_{X^2} \leq C_2.$$ Particularly, we get $\|s^\frac12\bm{x}''(t)\|_{L^\infty}+\|s^\frac12\mu'(t)\|_{L^\infty} \leq C_2$.
**Step 2.** Estimate for $\|\partial_t\bm{x}'(t)\|_{X^1}$. Differentiating [\[ExpX2\]](#ExpX2){reference-type="eqref" reference="ExpX2"} with respect to $t$, we have $\mu\dot{\bm{x}}'+\dot{\mu}\bm{x}'=\mathscr{M}(\partial_t^3\bm{x})$. Taking an inner product of this equation with $\bm{x}'$ and using $\bm{x}'\cdot\dot{\bm{x}}'=0$, we obtain $\dot{\mu} = \bm{x}'\cdot\mathscr{M}(\partial_t^3\bm{x})$. Therefore, we see that $\|\dot{\mu}\|_{L^2} \leq \|\mathscr{M}(\partial_t^3\bm{x})\|_{L^2} \leq 2\|\partial_t^3\bm{x}\|_{L^2} \lesssim 1$. Differentiating [\[ExpX2\]](#ExpX2){reference-type="eqref" reference="ExpX2"} with respect to $t$ and $s$, we have $\mu\dot{\bm{x}}''+\mu'\dot{\bm{x}}'+\dot{\mu}\bm{x}''+\dot{\mu}'\bm{x}'=(\mathscr{M}(\partial_t^3\bm{x}))'$. Taking an inner product of this equation with $\bm{x}'$ and using $\bm{x}'\cdot\dot{\bm{x}}''+\bm{x}''\cdot\dot{\bm{x}}'=0$, we obtain $\dot{\mu}'=\bm{x}'\cdot(\mathscr{M}(\partial_t^3\bm{x}))'+\mu\bm{x}''\cdot\dot{\bm{x}}'$. Therefore, we see that $$\begin{aligned}
\|s^\frac12\dot{\mu}'\|_{L^2}
&\leq \|s^\frac12(\mathscr{M}(\partial_t^3\bm{x}))'\|_{L^2} + \|\mu\|_{L^\infty} \|s^\frac12\bm{x}''\|_{L^\infty} \|\dot{\bm{x}}'\|_{L^2} \\
&\lesssim \|\partial_t^3\bm{x}\|_{X^1} + \|\bm{x}'\|_{X^2}\|\dot{\bm{x}}\|_{X^2}, \end{aligned}$$ which implies $\|s^\frac12\dot{\mu}'\|_{L^2} \lesssim 1$. Then, in view of $\dot{\bm{x}}''=\mu^{-1}( (\mathscr{M}(\partial_t^3\bm{x}))' - \mu'\dot{\bm{x}}'-\dot{\mu}\bm{x}''-\dot{\mu}'\bm{x}')$ we obtain $\|s^\frac12\dot{\bm{x}}''\|_{L^2} \lesssim \|s^\frac12(\mathscr{M}(\partial_t^3\bm{x}))'\|_{L^2} + \|s^\frac12\mu\|_{L^\infty}\|\bm{x}''\|_{L^2}
+ \|\dot{\mu}\|_{L^2}\|s^\frac12\bm{x}''\|_{L^\infty} + \|s^\frac12\dot{\mu}'\|_{L^2} \lesssim1$. Therefore, we obtain $$\label{EstdtX'}
\|\partial_t\bm{x}'(t)\|_{X^1} + \|\partial_t\mu(t)\|_{X^1} \leq C_2.$$ Particularly, we get $\|s^\epsilon\partial_t\bm{x}'(t)\|_{L^\infty} + \|s^\epsilon\partial_t\mu(t)\|_{L^\infty} \leq C_2(\epsilon)$ for any $\epsilon>0$.
**Step 3.** Estimate for $\|\bm{x}(t)\|_{X^4}$. We first derive estimates for $\tau$. We remind that we have already $\tau(s,t)\simeq s$ and $|\tau'(s,t)|\lesssim1$. Therefore, the first lines in [\[EstTau\'\'\]](#EstTau''){reference-type="eqref" reference="EstTau''"} and [\[EstTau\'\'\'\]](#EstTau'''){reference-type="eqref" reference="EstTau'''"} are still valid, so that we obtain $$\|s^\epsilon\tau''(t)\|_{L^\infty} + \|s^{\frac12+\epsilon}\tau'''(t)\|_{L^2} \leq C_2(\epsilon) \quad\mbox{for}\quad \epsilon>0.$$ In view of [\[NormDeco\]](#NormDeco){reference-type="eqref" reference="NormDeco"} and [\[EstX\'\]](#EstX'){reference-type="eqref" reference="EstX'"}, it is sufficient to evaluate $\|s^2\bm{x}''''\|_{L^2}$ to obtain an estimate for $\|\bm{x}\|_{X^4}$. Differentiating the hyperbolic equation in [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} twice with respect to $s$, we have $s|\bm{x}''''| \lesssim |\ddot{\bm{x}}''|+|\bm{x}'''|+|\tau''\bm{x}''|+|\tau'''|$, so that $$\begin{aligned}
\|s^2\bm{x}''''\|_{L^2}
&\lesssim \|s\ddot{\bm{x}}''\|_{L^2} + \|s\bm{x}'''\|_{L^2} + \|s^\frac12\tau''\|_{L^\infty}\|\bm{x}''\|_{L^2} + \|s\tau'''\|_{L^2} \\
&\lesssim \|\ddot{\bm{x}}\|_{X^2} + (1+\|s^\frac12\tau''\|_{L^\infty}) \|\bm{x}'\|_{X^2} + \|s\tau'''\|_{L^2}.\end{aligned}$$ Therefore, we obtain $\|\bm{x}(t)\|_{X^4} \leq C_2$.
**Step 4.** Estimate for $\|\partial_t\bm{x}(t)\|_{X^3}$. We first derive estimates for $\dot{\tau}$. The estimates $|\dot{\tau}(s,t)|\lesssim s$ and $|\dot{\tau}'(s,t)|\lesssim1$ in the proof of Lemma [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"} and the first line in [\[EstdtTau\'\'\]](#EstdtTau''){reference-type="eqref" reference="EstdtTau''"} are still valid, so that we obtain $$|\dot{\tau}(s,t)| \leq C_2s, \quad \|\dot{\tau}'(t)\|_{L^\infty} \leq C_2, \quad
\|s^\epsilon\dot{\tau}''(t)\|_{L^2} \leq C_2(\epsilon) \quad\mbox{for}\quad \epsilon>0.$$ In view of [\[NormDeco\]](#NormDeco){reference-type="eqref" reference="NormDeco"} and [\[EstdtX\'\]](#EstdtX'){reference-type="eqref" reference="EstdtX'"}, it is sufficient to evaluate $\|s^\frac32\dot{\bm{x}}'''\|_{L^2}$ to obtain an estimate for $\|\partial_t\bm{x}\|_{X^3}$. Differentiating the hyperbolic equation in [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"} with respect to $s$ and $t$, we have $s|\dot{\bm{x}}'''| \lesssim |\partial_t^3\bm{x}'| + |\dot{\bm{x}}''|+|\tau''\dot{\bm{x}}'| + s|\bm{x}'''|+|\bm{x}''|+|\dot{\tau}''|$, so that $$\begin{aligned}
\|s^\frac32\dot{\bm{x}}'''\|_{L^2}
&\lesssim \|s^\frac12\partial_t^3\bm{x}'\|_{L^2} + \|s^\frac12\dot{\bm{x}}''\|_{L^2} + \|s^\frac12\tau''\|_{L^\infty}\|\dot{\bm{x}}'\|_{L^2} \\
&\quad\;
+ \|s\bm{x}'''\|_{L^2} + \|\bm{x}''\|_{L^2} + \|s^\frac12\dot{\tau}''\|_{L^2} \\
&\lesssim \|\partial_t^3\bm{x}\|_{X^1} + (1+\|s^\frac12\tau''\|_{L^\infty})\|\dot{\bm{x}}'\|_{X^1}
+ \|\bm{x}'\|_{X^2} + \|s^\frac12\dot{\tau}''\|_{L^2}.\end{aligned}$$ Therefore, we obtain $\|\partial_t\bm{x}(t)\|_{X^3} \leq C_2$.
**Step 5.** Estimate for $\|\partial_t^4\bm{x}(t)\|_{L^2}$. We have shown that $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{4,*}\lesssim1$. Therefore, by Lemma [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"} we have $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*,\epsilon} \leq C_2(\epsilon)$ for any $\epsilon>0$. Particularly, $\|\dot{\tau}'(t)\|_{X^1}+\|\ddot{\tau}'(t)\|_{L^2} \lesssim 1$. Therefore, by Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} we see that $$\begin{aligned}
\|\partial_t^4\bm{x}\|_{L^2}
&= \|\partial_t^2(\mathscr{A}_\tau\bm{x})\|_{L^2} \\
&\leq \|\mathscr{A}_\tau\ddot{\bm{x}}\|_{L^2} + 2\|\mathscr{A}_{\dot{\tau}}\dot{\bm{x}}\|_{L^2} + \|\mathscr{A}_{\ddot{\tau}}\bm{x}\|_{L^2} \\
&\lesssim \|\tau'\|_{L^\infty} \|\ddot{\bm{x}}\|_{X^2} + \|\dot{\tau}'\|_{X^1} \|\dot{\bm{x}}\|_{X^3}
+ \|\ddot{\tau}'\|_{L^2} \|\bm{x}\|_{X^4},\end{aligned}$$ which implies $\|\partial_t^4\bm{x}(t)\|_{L^2} \leq C_2$. Summarizing the above estimates, we obtain $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_4 \leq C_2$. The proof of Lemma [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"} in the case $m=4$ is complete. $\Box$
## Proof of Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"} {#proof-of-theorem-thape}
We are ready to prove Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"}, which ensures a priori estimates of the solution $(\bm{x},\tau)$ to the initial boundary value problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}. We are going to show that for any regular solution $(\bm{x},\tau)$ to the problem, if the initial data satisfy [\[CondID\]](#CondID){reference-type="eqref" reference="CondID"}, then the estimates in [\[EstAss\]](#EstAss){reference-type="eqref" reference="EstAss"} in fact hold for $0\leq t\leq T$, provided the positive constants $M_1$, $M_2$, and the positive time $T$ are chosen appropriately. In the following, we simply denote the constants $C_0=C(M_0,c_0,m)$, $C_1=C(M_1,c_0,m)$, and $C_2=C(M_2,M_1,c_0,m)$. These constants may change from line to line.
Suppose that the initial data $(\bm{x}_0^\mathrm{in},\bm{x}_1^\mathrm{in})$ satisfy [\[CondID\]](#CondID){reference-type="eqref" reference="CondID"} and that $(\bm{x},\tau)$ is a regular solution to the problem [\[Eq\]](#Eq){reference-type="eqref" reference="Eq"}--[\[IC\]](#IC){reference-type="eqref" reference="IC"}. By Lemmas [**Lemma** 39](#lem:EstID){reference-type="ref" reference="lem:EstID"}, [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"}, and [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"}, we have $$\label{EstID}
\begin{cases}
\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(0) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m + \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(0) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-2,*} \leq C_0, \\
C_0^{-1}s \leq \tau(s,0) \leq C_0s, \quad \sum_{j=1}^{m-3}|\partial_t^j\tau(s,0)| \leq C_0s.
\end{cases}$$ Suppose also that the solution $(\bm{x},\tau)$ satisfies [\[EstAss\]](#EstAss){reference-type="eqref" reference="EstAss"} for $0\leq t\leq T$, where the constants $M_1$, $M_2$, and the time $T$ will be defined later. Then, by Lemmas [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"}, [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"}, and [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"}, we have $$\begin{cases}
\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m \leq C_2, \\
C_1^{-1}s \leq \tau(s,t) \leq C_1s, \quad \sum_{j=1}^{m-3}|\partial_t^j\tau(s,t)|\leq C_2s, \quad |\partial_t^{m-2}\tau(s,t)|\leq C_2s^\frac12, \\
\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1,*} \leq C_2 \qquad\ \mbox{in the case $m\geq5$}, \\
\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau'(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{3,*,\epsilon} \leq C_2 \qquad\mbox{in the case $m=4$ with $\epsilon=\frac14$}
\end{cases}$$ for $0\leq t\leq T$. Here, we note that there is no special reason for the choice $\epsilon=\frac14$; we can choose $\epsilon$ arbitrarily with $0<\epsilon<\frac12$. We put $\bm{y}=\partial_t^{m-2}\bm{x}$ and $\nu=\partial_t^{m-2}\tau$. Then, we see that $(\bm{y},\nu)$ satisfies the linearized system [\[LEq\]](#LEq){reference-type="eqref" reference="LEq"} and [\[LBVP\]](#LBVP){reference-type="eqref" reference="LBVP"} with $(\bm{f},f,h)$ given by $$\begin{aligned}
\bm{f} &= \{ \partial_t^{m-2}(\tau\bm{x}')-(\partial_t^{m-2}\tau)\bm{x}'-\tau\partial_t^{m-2}\bm{x}' \}', \\
f &= -\frac12\{ \partial_t^{m-2}(\bm{x}'\cdot\bm{x}') - 2\bm{x}'\cdot\partial_t^{m-2}\bm{x}' \}, \\
h &= \{ \partial_t^{m-2}(\dot{\bm{x}}'\cdot\dot{\bm{x}}') - 2\dot{\bm{x}}'\cdot\partial_t^{m-2}\dot{\bm{x}}' \} \\
&\quad\;
- \{ \partial_t^{m-2}(\tau\bm{x}''\cdot\bm{x}'') - (\partial_t^{m-2}\tau)\bm{x}''\cdot\bm{x}'' - 2\tau\bm{x}''\cdot\partial_t^{m-2}\bm{x}'' \}.\end{aligned}$$ Therefore, by Proposition [**Proposition** 30](#prop:EE){reference-type="ref" reference="prop:EE"} we obtain the energy estimate $$\label{EI1}
E(t) \leq C_1 \mathrm{e}^{C_2 t}\left( E(0) + S_1(0) + C_2\int_0^t S_2(t')\mathrm{d}t' \right),$$ where $E(t) = \|\dot{\bm{y}}(t)\|_{X^1}^2+\|\bm{y}(t)\|_{X^2}^2 = \|\partial_t^{m-1}\bm{x}(t)\|_{X^1}^2 + \|\partial_t^{m-2}\bm{x}(t)\|_{X^2}^2$, and $S_1(t)$ and $S_2(t)$ are defined by [\[DefS12\]](#DefS12){reference-type="eqref" reference="DefS12"}.
****Lemma** 41**. *It holds that $E(0)+S_1(0)\leq C_0$ and $S_2(t)\leq C_2$.*
*Proof.* We first evaluate $S_2(t)$. In the case $m\geq5$, by Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} we see that $$\begin{aligned}
\|\dot{\bm{f}}\|_{L^2}
&\lesssim \sum_{j_1+j_2=m-1, j_1,j_2\leq m-2} \|((\partial_t^{j_1}\tau)\partial_t^{j_2}\bm{x}')'\|_{L^2} \\
&\lesssim \|\partial_t\tau'\|_{X^2} \|\partial_t^{m-2}\bm{x}\|_{X^2}
+ \sum_{j_1\leq m-2, j_2\leq m-3} \|\partial_t^{j_1}\tau'\|_{X^1} \|\partial_t^{j_2}\bm{x}\|_{X^3} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \tau' \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1,*} \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m.\end{aligned}$$ In the case $m=4$, we have $\|\dot{\bm{f}}\|_{L^2} \lesssim \|((\partial_t^2\tau)\partial_t\bm{x}')'\|_{L^2} + \|((\partial_t\tau)\partial_t^2\bm{x}')'\|_{L^2}$. Here, we note that $\|\partial_t^2\tau'\|_{L^3} \leq \|s^\frac14\partial_t^2\tau'\|_{L^\infty} \|s^{-\frac14}\|_{L^3} \leq C_2$. Therefore, the first term can be evaluated as $$\begin{aligned}
\|((\partial_t^2\tau)\partial_t\bm{x}')'\|_{L^2}
&\leq \|s^{-\frac12}\partial_t^2\tau\|_{L^\infty} \|s^\frac12\partial_t\bm{x}''\|_{L^2} + \|\partial_t^2\tau'\|_{L^3} \|\partial_t\bm{x}'\|_{L^6} \\
&\lesssim \|\partial_t^2\tau'\|_{L^3} \|\partial_t\bm{x}\|_{X^3},\end{aligned}$$ where we used Lemmas [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} and [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}. The second term can be evaluated by Lemma [**Lemma** 38](#lem:EstAtau){reference-type="ref" reference="lem:EstAtau"} as $\|((\partial_t\tau)\partial_t^2\bm{x}')'\|_{L^2} \lesssim \|\partial_t\tau'\|_{L^\infty}\|\partial_t^2\bm{x}\|_{X^2}$. In any case, we have $\|\dot{\bm{f}}(t)\|_{L^2} \leq C_2$ for $0\leq t\leq T$.
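The norm $\|s^{-\frac14}\|_{L^3}$ used in the Hölder step above is finite and can be computed explicitly (a routine check, recorded here for convenience): $$\|s^{-\frac14}\|_{L^3(0,1)}=\Bigl(\int_0^1 s^{-\frac34}\,\mathrm{d}s\Bigr)^{\frac13}=4^{\frac13},$$ so that $\|\partial_t^2\tau'\|_{L^3}\leq 4^{\frac13}\|s^\frac14\partial_t^2\tau'\|_{L^\infty}$.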
As for $\dot{f}$, by Lemma [**Lemma** 14](#lem:embedding){reference-type="ref" reference="lem:embedding"} we see that $$\begin{aligned}
\|s^\frac14\dot{f}\|_{L^2}
&\lesssim \sum_{j_1+j_2=m-1,j_1\leq j_2\leq m-2} \|s^\frac14\partial_t^{j_1}\bm{x}'\|_{L^\infty} \|\partial_t^{j_2}\bm{x}'\|_{L^2} \\
&\lesssim \sum_{j_1\leq m-3, j_2\leq m-2} \|\partial_t^{j_1}\bm{x}\|_{X^3}\|\partial_t^{j_2}\bm{x}\|_{X^2} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m^2,\end{aligned}$$ which yields $\|s^\frac14\dot{f}(t)\|_{L^2} \leq C_2$ for $0\leq t\leq T$. Similarly, by the standard Sobolev embedding theorem we have $|\dot{f}(1,t)| \lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m^2 \leq C_2$ for $0\leq t\leq T$.
As for $h$, we see that $$\begin{aligned}
\|s\dot{h}\|_{L^1}
&\lesssim \sum_{j_1+j_2=m-1,j_1\leq j_2\leq m-2} \|\partial_t^{j_1+1}\bm{x}'\|_{L^2} \|s^\frac12\partial_t^{j_2+1}\bm{x}'\|_{L^2} \\
&\quad\;
+ \sum_{j_0+j_1+j_2=m-1,j_0\leq m-2, j_1\leq j_2\leq m-2}
\|s^{-\frac12}\partial_t^{j_0}\tau\|_{L^\infty} \|s^\frac12\partial_t^{j_1}\bm{x}''\|_{L^2} \|s\partial_t^{j_2}\bm{x}''\|_{L^2} \\
&\lesssim \sum_{j_1\leq m-3, j_2\leq m-2} \|\partial_t^{j_1+1}\bm{x}\|_{X^2} \|\partial_t^{j_2+1}\bm{x}\|_{X^1} \\
&\quad\;
+ \sum_{j_0,j_2\leq m-2, j_1\leq m-3}
\|\partial_t^{j_0}\tau'\|_{L^2} \|\partial_t^{j_1}\bm{x}\|_{X^3} \|\partial_t^{j_2}\bm{x}\|_{X^2} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m^2 + \sum_{j_0\leq m-2} \|\partial_t^{j_0}\tau'\|_{L^2} \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m^2\end{aligned}$$ and that $$\begin{aligned}
\|s^\frac12h\|_{L^2}
&\lesssim \sum_{j_1+j_2=m-2,j_1\leq j_2\leq m-3} \|s^\frac12\partial_t^{j_1+1}\bm{x}'\|_{L^\infty} \|\partial_t^{j_2+1}\bm{x}'\|_{L^2} \\
&\quad\;
+ \sum_{j_0+j_1+j_2=m-2,j_0\leq m-3, j_1\leq j_2\leq m-3}
\|s^{-1}\partial_t^{j_0}\tau\|_{L^\infty} \|s\partial_t^{j_1}\bm{x}''\|_{L^\infty} \|s^\frac12\partial_t^{j_2}\bm{x}''\|_{L^2} \\
&\lesssim \sum_{j_1,j_2 \leq m-3} \|\partial_t^{j_1+1}\bm{x}\|_{X^2} \|\partial_t^{j_2+1}\bm{x}\|_{X^2} \\
&\quad\;
+ \sum_{j_0,j_1,j_2\leq m-3} \|\partial_t^{j_0}\tau'\|_{L^\infty} \|\partial_t^{j_1}\bm{x}\|_{X^3} \|\partial_t^{j_2}\bm{x}\|_{X^3} \\
&\lesssim \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m^2 + \sum_{j_0\leq m-3} \|\partial_t^{j_0}\tau'\|_{L^\infty} \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x} \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m^2,\end{aligned}$$ where we used Lemma [**Lemma** 17](#lem:CalIneqLp1){reference-type="ref" reference="lem:CalIneqLp1"}. Therefore, we obtain $\|s\dot{h}(t)\|_{L^1}+\|s^\frac12h(t)\|_{L^2} \leq C_2$ for $0\leq t\leq T$. Summarizing the above estimates, we get $S_2(t) \leq C_2$ for $0\leq t\leq T$.
It remains to evaluate $E(0)$ and $S_1(0)$. Since we have [\[EstID\]](#EstID){reference-type="eqref" reference="EstID"}, by similar evaluations as above, we obtain $E(0)+S_1(0) \leq C_0$. The proof is complete. ◻
This lemma and [\[EI1\]](#EI1){reference-type="eqref" reference="EI1"} imply $E(t) \leq C_1 \mathrm{e}^{C_2 t}( C_0 + C_2t)$. On the other hand, it is easy to see that $\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1} \leq \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(0) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1}+\int_0^t \mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t') \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_m \mathrm{d}t' \leq C_0 + C_2t$ and that $-\bm{g}\cdot\bm{x}'(1,t) \geq -\bm{g}\cdot\bm{x}'(1,0) - \int_0^t|\partial_t\bm{x}'(1,t')|\mathrm{d}t' \geq 2c_0 - C_2t$. Summarizing the above estimates, we have shown $$\begin{cases}
\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1} \leq C_0 + C_2t, \qquad -\bm{g}\cdot\bm{x}'(1,t) \geq 2c_0-C_2t, \\
\|\partial_t^{m-1}\bm{x}(t)\|_{X^1}^2 + \|\partial_t^{m-2}\bm{x}(t)\|_{X^2}^2 \leq C_1\mathrm{e}^{C_2 t}( C_0 + C_2t).
\end{cases}$$ Now, we define the constants $M_1$ and $M_2$ by $M_1=2C_0$ and $M_2=4C_0C_1$ and then choose the time $T$ so small that $C_2T\leq\min\{C_0,c_0,\log2\}$. Then, by the standard argument we see that the solution $(\bm{x},\tau)$ in fact satisfies [\[EstAss\]](#EstAss){reference-type="eqref" reference="EstAss"} for $0\leq t\leq T$, and the estimates in Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"} follow from Lemmas [**Lemma** 40](#lem:APEx){reference-type="ref" reference="lem:APEx"}, [**Lemma** 36](#lem:EstTau1){reference-type="ref" reference="lem:EstTau1"}, and [**Lemma** 37](#lem:EstTau2){reference-type="ref" reference="lem:EstTau2"}. The proof of Theorem [**Theorem** 1](#th:APE){reference-type="ref" reference="th:APE"} is complete. $\Box$
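As a sanity check of the final choice of constants (nothing is used here beyond the three displayed estimates and the definitions of $M_1$, $M_2$, and $T$): since $C_2T\leq\min\{C_0,c_0,\log2\}$, we have $\mathrm{e}^{C_2t}\leq2$ for $0\leq t\leq T$, and hence $$\mathopen{|\mkern-1.5mu|\mkern-1.5mu|} \bm{x}(t) \mathclose{|\mkern-1.5mu|\mkern-1.5mu|}_{m-1}\leq C_0+C_2t\leq 2C_0=M_1,\qquad -\bm{g}\cdot\bm{x}'(1,t)\geq 2c_0-C_2t\geq c_0,$$ $$\|\partial_t^{m-1}\bm{x}(t)\|_{X^1}^2+\|\partial_t^{m-2}\bm{x}(t)\|_{X^2}^2\leq C_1\mathrm{e}^{C_2t}(C_0+C_2t)\leq 4C_0C_1=M_2,$$ which is precisely [\[EstAss\]](#EstAss){reference-type="eqref" reference="EstAss"}.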
Tatsuo Iguchi
Department of Mathematics
Faculty of Science and Technology, Keio University
3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan
E-mail: `[email protected]`
Masahiro Takayama
Department of Mathematics
Faculty of Science and Technology, Keio University
3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan
E-mail: `[email protected]`
[^1]: Corresponding author
| arxiv_math | {
"id": "2309.01400",
"title": "A priori estimates of solutions to the motion of an inextensible hanging\n string",
"authors": "Tatsuo Iguchi, Masahiro Takayama",
"categories": "math.AP",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
Given a frame in a finite dimensional Hilbert space we construct additive perturbations which decrease the condition number of the frame. By iterating this perturbation, we introduce an algorithm that produces a tight frame in a finite number of steps. Additionally, we give sharp bounds on additive perturbations which preserve frames and we study the effect of appending and erasing vectors to a given tight frame. We also discuss under which conditions our finite-dimensional results are extendable to infinite-dimensional Hilbert spaces.
address:
- Oleg Asipchuk, Florida International University, Department of Mathematics and Statistics, Miami, FL 33199, USA
- Jacob Glidewell, The University of Alabama, Department of Mathematics, Tuscaloosa, AL 35487-0350, USA
- Luis Rodriguez, Florida International University, Department of Mathematics and Statistics, Miami, FL 33199, USA
author:
- Oleg Asipchuk
- Jacob Glidewell
- Luis Rodriguez
bibliography:
- Main.bib
title: Additive Stability of Frames
---
# Introduction
Hilbert space frames are powerful mathematical tools that provide a natural extension to the concept of bases. This subject originated in the work of Duffin and Schaeffer [@Duffin] and has been greatly studied since the seminal work of Daubechies, Grossmann, and Meyer [@DGM1986].
**Definition 1**. *A finite or countable family of vectors ${\mathcal F} =\{x_j\}_{j\in J}$ in a Hilbert space $\mathbb{H}$ is a **frame** if there are constants $0<A\le B<\infty$ satisfying: $$A\|x\|^2 \le \sum_{j \in J}|\langle x,x_j\rangle|^2 \le B\|x\|^2,\mbox{ for all }x\in \mathbb{H} .$$*
If $A=B$ this is an *A-tight frame* and if $A=B=1$ this is a *Parseval frame*. The largest $A$ and smallest $B$ satisfying this inequality are called the optimal *lower* (respectively, *upper*) *frame bounds*. The frame is said to be *equal-norm* if all elements have the same norm.
Typically, frames are not linearly independent, but linear independence often does not align well with the demands of applied problems; instead, the redundancy makes frames ideal for handling signals and other types of data, as it allows multiple representations of vectors in terms of the frame vectors.
Frames are stable, in the sense that a small perturbation of a frame is still a frame. This property allows for accurate and reliable signal recovery even in the presence of noise or other disturbances. Thus, quantitative estimates of the stability of frames under perturbations are crucial for applications such as signal reconstruction and de-noising.
Three questions naturally arise with regard to the stability of frames:
1. How much can a frame be perturbed and still remain a frame?
2. How much does a frame need to be perturbed to become a tight frame?
3. How does a frame change when more vectors are added?
The goal of this paper is to make new progress on these questions in applicable ways.
Frame perturbations have been studied in many contexts, most notably with specific applications to Gabor frames, [@Christensen1996; @DV2018; @SZ2001; @SZ2003], and exponential frames and bases; the Kadec $\frac{1}{4}$-theorem [@Kadec] states that $\{e^{2\pi i\lambda_n x}\}_{n\in \mathbb Z}$ is a basis in $L^2[0,1]$ if $\sup_n|n-\lambda_n|<\frac{1}{4}$. O. Christensen extended this result to frames. See [@ChristensenGabor] and also [@BR1997; @SZ1999].
We focus our study on the stability of frames under additive perturbations. That is, given a frame $\{f_j\}_{j\in J}$ in a Hilbert space ${\mathbb{H}}$ and $\epsilon>0$, we consider the set $\{f_j+\delta_j\}_{j\in J}$ where $||\delta_j||<\epsilon$ for each $j$. We can think of the $\delta_j$ as "noise" that is added to the frame.
To our knowledge, no explicit results are available for additive perturbations of bases or frames in finite-dimensional Hilbert spaces.
The Krein-Milman-Rutman theorem in [@Young] states that for a basis $\{x_n\}_{n\in\mathbb N}$ in a Banach space $X$, there exists $\epsilon_n>0$ for each $n$ such that if $\{y_n\}_{n \in \mathbb N}$ is a sequence in $X$ with $\left|\left| x_n-y_n \right|\right|<\epsilon_n$, then $\{y_n\}_{n \in \mathbb N}$ is also a basis in $X$. If $X$ has infinite dimension, it follows from the proof of the theorem in [@Young] that the lower bound of the sequence $\{ \epsilon_n \}_{n\in\mathbb N}$ is $\epsilon=0$. If $X$ is finite-dimensional, we estimate $\epsilon$ in Proposition [Proposition 8](#thm: StabilitywithPW){reference-type="ref" reference="thm: StabilitywithPW"} and Corollary [Corollary 1](#cor: UTFstability){reference-type="ref" reference="cor: UTFstability"}.
## Our results
We have two theorems that we can call main results. Both of them solve Problem (2) but under different conditions. First, we consider the case when we have an upper bound on the norm of perturbation.
**Theorem 2**. *Suppose $\{v_j\}_{j=1}^k$ is a frame in $\mathbb R^n$ with frame bounds $A<B$. For every $\epsilon >0$ there exists $\{\delta_j\}_{j=1}^k$ where $||\delta_j||<\epsilon$ such that $\{\widetilde{v}_j\}=\{v_j+\delta_j\}_{j=1}^k$ is a frame with frame constants $0<A_1\le B_1$ where $$\frac{B_1}{A_1}<\frac{B}{A}.$$*
The *condition number* of a frame with optimal frame constants $A$ and $B$ is $\kappa= \frac BA$. The condition number indicates how "close" a frame is to being tight. It also measures how sensitive the frame operator is to changes or errors in the input, and how much error in the output results from an error in the input. The problem of finding optimal frame bounds and condition number for Gabor frames is well studied. See for example the recent [@Faulhuber2018; @FS2023]. In applications it is desirable to have frames with a condition number as close to $1$ as possible; in the proof of Theorem [Theorem 2](#thm: betterFrame){reference-type="ref" reference="thm: betterFrame"} we provide an explicit expression for perturbations $\delta_j$ that reduce the condition number of a given frame.
Our next result shows that we can obtain a tight frame with a finite number of perturbations.
**Theorem 3**. *Suppose $\{v_j\}_{j=1}^k$ is a frame in $\mathbb R^n$ with optimal frame bounds $A< B$. We can construct $\{\delta_j\}_{j=1}^k$ such that $\{\widetilde{v}_j\}=\{v_j+\delta_j\}_{j=1}^k$ is a nonzero tight frame, with an algorithm in at most $n-1$ steps.*
The algorithm used in the proof of Theorem [Theorem 3](#thm: tightAlg){reference-type="ref" reference="thm: tightAlg"} produces the explicit perturbations $\delta_j$ that make the frame tight.
In Theorem [Theorem 2](#thm: betterFrame){reference-type="ref" reference="thm: betterFrame"} the frame constants do not have to be optimal, but the optimality of the frame constants in Theorem [Theorem 3](#thm: tightAlg){reference-type="ref" reference="thm: tightAlg"} is crucial at each step of the proof.
The organization of this paper is as follows: In Section 2 we briefly recall the definitions and the tools that we use for our results. In Section 3 we prove our main results. In Section 4 we discuss Problems (1) and (3). Finally, in Section 5 we discuss open problems and remarks.
*Acknowledgments*. Foremost, we would like to express our sincere gratitude to our mentor Professor Laura De Carli for her continuous support of our research.
The research of this project was done during the Summer 2023 REU program "AMRPU @ FIU" that took place at the Department of Mathematics and Statistics, Florida International University, and was supported by the NSF (REU Site) grant DMS-2050971. In particular, support of J.G. came from the above grant. We acknowledge the collaboration of Mikhail Samoshin in the initial stage of this project and his participation in the technical report (available at go.fiu.edu/amrpu).
# Preliminaries
## Frames
We have used the excellent textbook [@Christensen] for the definitions and some of the results presented in this section.
Let ${\mathcal F}=\{f_j\}_{j\in J}$ be a frame in a Hilbert space ${\mathbb{H}}$. We recall the definition of three important operators that are associated with ${\mathcal F}$.
1. The synthesis operator $T: \ell^2(\mathbb N) \to \mathbb{H}$ is given by $$T\{c_j\}_{j \in J} = \sum_{j \in J}c_jf_j.$$
2. The analysis operator $T^*: \mathbb{H}\to\ell^2(\mathbb N)$ is given by $$T^*f= \{\langle f, f_j\rangle\}_{j \in J}.$$
3. The frame operator $S:\mathbb{H}\to \mathbb{H}$ is given by $S=TT^*$ or equivalently, $$Sf = \sum_{j \in J}\langle f, f_j\rangle f_j.$$
If $\mathbb{H}=\mathbb R^n$, the operator $T$ is represented by a matrix whose columns are the vectors of the frame.
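For readers who prefer a computational view, the following minimal NumPy sketch (our own illustration; the specific frame is a hypothetical example, not taken from the references) builds such a synthesis matrix for a small frame in $\mathbb R^2$ and recovers its optimal frame bounds as the extreme eigenvalues of the frame operator $S=TT^*$; indeed, since $\sum_j|\langle x,f_j\rangle|^2=\langle Sx,x\rangle$, the optimal bounds are exactly the smallest and largest eigenvalues of $S$.

```python
import numpy as np

# Hypothetical frame {(1,0), (0,1), (1,1)} in R^2: columns of T are the frame vectors.
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

S = T @ T.T                         # frame operator S = T T^*
eigvals = np.linalg.eigvalsh(S)     # eigenvalues of the symmetric matrix S

A, B = eigvals[0], eigvals[-1]      # optimal lower and upper frame bounds
print("A =", A, "B =", B, "condition number =", B / A)
```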
Frames, like bases, can be used to represent the vectors in the space. However, with the added redundancy, we have to choose canonical ways in which to do this. If $\{f_j\}_{j \in J}$ is a Parseval frame, the frame operator $S=I$ where $I$ is the identity on $\mathbb{H}$ so we obtain an exact representation $$f=\sum_{j \in J}\langle f, f_j\rangle f_j$$ for every $f\in \mathbb{H}$, see [@Han Section 3.2]. Although the representation is not unique and $\{f_j\}_{j \in J}$ need not be an orthonormal basis, we still can recover coefficients using the inner product.
If $\{f_j\}_{j \in J}$ is a tight frame with frame bound $A$, then $S=AI$ where $I$ is the identity on $\mathbb{H}$. If $\mathbb{H}=\mathbb R^n$, the rows of the synthesis matrix $T$ are orthogonal and have the same norm $A$. For every $f\in \mathbb{H},$ $$f=\sum_{j \in J}\frac{1}{A}\langle f, f_j\rangle f_j.$$ If the frame is not tight, this representation formula needs to be modified. Let $\{f_j\}_{j \in J}$ be a frame with frame operator $S$. Then according to [@Han Section 6.1], $S$ is invertible, $\{S^{-1}f_j\}_{j \in J}$ is a frame called the canonical dual frame, and for every $f\in \mathbb{H}$, $$\label{repr-formula}
f= \sum_{j \in J}\langle f, S^{-1}f_j\rangle f_j= \sum_{j \in J}\langle f, f_j\rangle S^{-1}f_j.$$
The following lemma establishes the connection between the norms of the vectors in a tight frame and the bound $A$. It appears as an exercise in [@Han], but we provide a proof for the convenience of the reader.
**Lemma 4**. *If $\{f_j\}_{j=1}^k$ is a tight frame for $\mathbb R^n$ with frame bound $A$ then $$A=\frac{1}{n} \sum_{j=1}^{k} ||f_j||^2.$$*
*Proof.* Let $\{e_i\}_{i=1}^n$ be the canonical orthonormal basis for $\mathbb R^n$. Then:
$$\begin{aligned}
\frac{1}{n} \sum_{j=1}^k \left|\left| f_j \right|\right|^2
&= \frac{1}{n} \sum_{j=1}^k \left|\left| \sum_{i=1}^n \langle f_j,e_i \rangle e_i \right|\right|^2
= \frac{1}{n} \sum_{j=1}^k \sum_{i=1}^n \left|\left| \langle f_j,e_i \rangle e_i \right|\right|^2 \\
&= \frac{1}{n} \sum_{j=1}^k \sum_{i=1}^n | \langle f_j,e_i \rangle |^2
= \frac{1}{n} \sum_{i=1}^n \left( \sum_{j=1}^k | \langle e_i,f_j \rangle |^2 \right)\\
&= \frac{1}{n} \sum_{i=1}^n \left( A \left|\left| e_i \right|\right|^2 \right) = A.
\end{aligned}$$ ◻
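As a quick numerical sanity check of this identity (our own illustration), one can take the three equiangular unit vectors in $\mathbb R^2$, which form a well-known tight frame with bound $A=\frac{3}{2}$:

```python
import numpy as np

# Three equiangular unit vectors in R^2 (the "Mercedes-Benz" frame) form a tight frame.
angles = np.pi / 2 + 2 * np.pi * np.arange(3) / 3
F = np.vstack([np.cos(angles), np.sin(angles)])   # columns are the frame vectors

S = F @ F.T                                        # frame operator, approximately (3/2) I
A_from_lemma = np.sum(np.linalg.norm(F, axis=0) ** 2) / F.shape[0]

print(np.round(S, 12))          # diag(1.5, 1.5)
print(A_from_lemma)             # 1.5, matching the lemma
```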
## The Paley-Wiener theorem
The Paley-Wiener theorem is one of the fundamental stability criteria. It was originally proved by R. Paley and N. Wiener in 1934 and it provides a sufficient condition for the stability of Riesz bases. A proof of this theorem can be found e.g. in [@Young Theorem 10].
**Theorem 5** (Paley-Wiener). *Let $\{f_j\}_{j \in J}$ be a Riesz basis for a Banach space $X$ and let $\{g_j\}_{j \in J}$ be a sequence in $X$. Suppose there exists $0\le \lambda<1$ such that $$\left|\left| \sum_{j \in J}c_j(f_j-g_j) \right|\right|\le \lambda\left|\left| \sum_{j \in J}c_jf_j \right|\right|$$ for all finite sequences $\{c_j\}_{j \in J}$. Then, $\{g_j\}_{j \in J}$ is a Riesz basis for $X$.*
O. Christensen proved the following extension of Theorem [Theorem 5](#thm:PWclassically){reference-type="ref" reference="thm:PWclassically"} to frames in [@Christensen].
**Theorem 6**. *Let $\{f_j\}_{j \in J}$ be a frame for $\mathbb{H}$ with frame bounds $A, B>0$ and let $\{g_k\}_{k \in J}$ be a sequence in $\mathbb{H}$. Suppose there exists $\lambda, \mu\ge 0$ such that $\lambda+\frac{\mu}{\sqrt{A}}<1$ and $$\label{eq: PWIneq}
\left|\left| \sum_{k \in J}c_k(f_k-g_k) \right|\right|\le \lambda\left|\left| \sum_{k \in J}c_kf_k \right|\right|+\mu\left(\sum_{k \in J}|c_k|^2\right)^{\frac{1}{2}}$$*
*for all finite sequences $\{c_k\}_{k \in J}$. Then, $\{g_k\}_{k \in J}$ is a frame for $\mathbb{H}$ with frame bounds $$A\left(1-\lambda-\frac{\mu}{\sqrt{A}}\right)^2, \text{ and } \; B\left(1+\lambda+\frac{\mu}{\sqrt{B}}\right)^2.$$*
# Proofs of Theorems [Theorem 2](#thm: betterFrame){reference-type="ref" reference="thm: betterFrame"} and [Theorem 3](#thm: tightAlg){reference-type="ref" reference="thm: tightAlg"} {#proofs-of-theorems-thm-betterframe-and-thm-tightalg}
First, we prove a simple lemma on which the proof of Theorems [Theorem 2](#thm: betterFrame){reference-type="ref" reference="thm: betterFrame"} and [Theorem 3](#thm: tightAlg){reference-type="ref" reference="thm: tightAlg"} are based.
**Lemma 7**. *Let $S$ be an invertible, self-adjoint $n\times n$ matrix. Then for every $r>0$, $S+2rI_{n\times n}+r^2S^{-1}$ has all eigenvalues of the form $\lambda+2r+\frac{r^2}{\lambda}$ where $\lambda$ is an eigenvalue of $S$.*
*Proof.* Since $S$ is self-adjoint, by the Spectral theorem, $S$ is diagonalizable. Hence, there exist an invertible matrix $Q$ and a diagonal matrix $D= \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, whose diagonal entries are the eigenvalues of $S$, such that $S= QDQ^{-1}.$ Since $S$ is invertible, $D$ is invertible. Thus, $$S+2rI_{n\times n}+r^2S^{-1}= QDQ^{-1}+2rI_{n\times n}+r^2QD^{-1}Q^{-1}= Q(D+2rI_{n\times n}+r^2D^{-1})Q^{-1}.$$ Hence, the eigenvalues of $S+2rI_{n\times n}+r^2S^{-1}$ are of the form $\lambda+2r+\frac{r^2}{\lambda}$ where $\lambda$ is an eigenvalue of $S$. ◻
## Proof of Theorem $\ref{thm: betterFrame}$
*Proof.* Let $T$ be the synthesis operator and $S=TT^*$ be the frame operator for $\{v_j\}_{j=1}^k$. By [@Han Proposition 3.19], every $x\in\mathbb R^n$ can be written as $$x=\sum_{j=1}^k \langle x, S^{-1}v_j\rangle v_j.$$ Let $\delta_j = rS^{-1}v_j$ where $0<r< \min\left(\frac{\epsilon}{\max_{j}||S^{-1}v_j||}, A\right)$. Now, for $x\in \mathbb R^n$ we have: $$\begin{aligned}
\sum_{j=1}^k|\langle x, \widetilde{v}_j \rangle|^2 &= \sum_{j=1}^k|\langle x, v_j \rangle|^2+\sum_{j=1}^k 2\langle x,v_j\rangle\langle x, \delta_j\rangle +\sum_{j=1}^k|\langle x, \delta_j\rangle|^2\\
&=\sum_{j=1}^k|\langle x, v_j \rangle|^2+\sum_{j=1}^k2\langle x,v_j\rangle\langle x, rS^{-1}v_j\rangle +\sum_{j=1}^k|\langle x, rS^{-1}v_j\rangle|^2\\
&= \sum_{j=1}^k|\langle x, v_j \rangle|^2+2r\langle x, \sum_{j=1}^k\langle x, S^{-1}v_j\rangle v_j\rangle +r^2\sum_{j=1}^k|\langle x, S^{-1}v_j\rangle|^2\\
&=\langle Sx, x\rangle +2r\langle x, x\rangle +r^2\langle S^{-1}x, x\rangle \\
&=\langle (S+2rI_{n\times n}+r^2S^{-1})x, x\rangle\\
&= \langle (T+rS^{-1}T)(T+rS^{-1}T)^*x, x\rangle.
\end{aligned}$$
Let $\lambda_{min},\lambda_{max}$ be the minimum and maximum eigenvalues of $S$. By [@Han Proposition 3.27], $\lambda_{min}>0$. By Lemma [Lemma 7](#lem: newEigenvalues){reference-type="ref" reference="lem: newEigenvalues"} and [@Han Proposition 3.27], since $r<A\le \lambda_{min}$ and the map $\lambda\mapsto \lambda+2r+\frac{r^2}{\lambda}$ is increasing for $\lambda\ge r$, the optimal lower and upper frame constants are $$A_1=\lambda_{min}+2r+\frac{r^2}{\lambda_{min}}\ge A+2r+\frac{r^2}{A},$$ and $$B_1=\lambda_{max}+2r+\frac{r^2}{\lambda_{max}}\le B+2r+\frac{r^2}{B}.$$
Hence, $$\frac{B_1}{A_1}\le \frac{B+2r+\frac{r^2}{B}}{A+2r+\frac{r^2}{A}}.$$ Note finally $$\frac{B+2r+\frac{r^2}{B}}{A+2r+\frac{r^2}{A}}<\frac{B}{A} \iff 2r(B-A)+r^2\left(\frac{B}{A}-\frac{A}{B}\right)>0,$$ so the result follows. ◻
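The perturbation used in the proof is easy to test numerically. The sketch below (our own example; the frame and the value of $\epsilon$ are arbitrary choices) applies $\delta_j=rS^{-1}v_j$ with $r<\min\left(\frac{\epsilon}{\max_j\|S^{-1}v_j\|},A\right)$ and verifies that the condition number decreases.

```python
import numpy as np

def condition_number(T):
    eig = np.linalg.eigvalsh(T @ T.T)
    return eig[-1] / eig[0]

# Hypothetical non-tight frame in R^2; columns of T are the frame vectors.
T = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

S = T @ T.T
A = np.linalg.eigvalsh(S)[0]              # optimal lower frame bound
eps = 0.5                                 # prescribed perturbation budget
dual = np.linalg.solve(S, T)              # columns are S^{-1} v_j
r = 0.9 * min(eps / np.linalg.norm(dual, axis=0).max(), A)

T_pert = T + r * dual                     # perturbed frame v_j + r S^{-1} v_j
print(condition_number(T), "->", condition_number(T_pert))   # strictly smaller
```

Each perturbation has norm $r\|S^{-1}v_j\|<\epsilon$, as required in the proof.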
Using the approach in Theorem [Theorem 2](#thm: betterFrame){reference-type="ref" reference="thm: betterFrame"}, we have an algorithm which produces a tight frame in finitely many steps. Each step depends on the frame bounds and the inverse of the frame operator of the previous step.
## Proof of Theorem $\ref{thm: tightAlg}$
*Proof.* Let $T_0$ be the synthesis operator and $S_0=T_0T_0^*$ be the frame operator for $\{v_j\}_{j=1}^k$. Let $A_0=A$ and $B_0=B$.
For each $1\le m\le n-1$, define $r_m$ and $\{v_j^{(m)}\}_{j=1}^k$ by, $$\begin{cases}
r_m=\sqrt{A_{m-1}B_{m-1}}\\
v_j^{(m)}= v_j^{(m-1)}+r_mS_{m-1}^{-1}v_j^{(m-1)} & j=1,2,\dots, k,
\end{cases}$$ where $A_m, B_m$ are the frame bounds for $\{v_j^{(m)}\}_{j=1}^k$.
Moreover, let the synthesis operator for $\{v_j^{(m)}\}_{j=1}^k$ be $$T_m = \begin{bmatrix}
\vert & \vert & & \vert\\
v_1^{(m)} & v_2^{(m)} & \cdots & v_k^{(m)}\\
\vert & \vert & & \vert
\end{bmatrix}$$ and the frame operator $S_m = T_mT_m^*$.
We will show via induction that for each $1\le m\le n-1$, $\{v_{j}^{(m)}\}_{j=1}^k$ is a frame and at least $m+1$ of the eigenvalues of $S_m$ are equal to $B_m$. Indeed, for $m=1$, $r_1=\sqrt{AB}.$ Note that $T_1 = T_{0}+r_1S_{0}^{-1}T_{0}.$ Hence, $$\begin{aligned}
S_1 = T_1T_1^* &= (T_{0}+r_1S_{0}^{-1}T_{0})(T_{0}^*+r_1T_0^*S_{0}^{-1})\\
&= S_0+r_1S_0^{-1}T_0T_0^*+ r_1T_0T_0^*S_0^{-1}+r_1^2S_0^{-1}T_0T_0^*S_0^{-1}\\
&=S_0+2r_1I_{n\times n}+r_1^2S_0^{-1}.
\end{aligned}$$
By Lemma [Lemma 7](#lem: newEigenvalues){reference-type="ref" reference="lem: newEigenvalues"}, the eigenvalues of $S_1$ are of the form $$\lambda^\prime =\lambda+2r_1+\frac{r_1^2}{\lambda},$$ where $\lambda$ ranges over the eigenvalues of $S_0$. Since $\lambda\ge A>0$, $\lambda^\prime >0$, and so $\{v_j^{(1)}\}_{j=1}^k$ is a frame. Moreover, two of the eigenvalues of $S_1$ are $$A+2r_1+\frac{r_1^2}{A}, \text{ and } \; B+2r_1+\frac{r_1^2}{B}$$ which are both equal and so, equal to $B_1$.
Now suppose for some $1\le m\le n-2$, we have $\{v_j^{(m)}\}_{j=1}^k$ a frame, and $m+1$ of the eigenvalues of $S_m$ equal to $B_m$. Note $r_{m+1}=\sqrt{A_mB_m}$. By the same computation as in the base case, the eigenvalues of $S_{m+1}$ are of the form $$\lambda^\prime =\lambda+2r_{m+1}+\frac{r_{m+1}^2}{\lambda},$$ where $\lambda$ ranges over the eigenvalues of $S_m$. Since $\lambda\ge A_m>0$, we again have $\{v_j^{(m+1)}\}_{j=1}^k$ a frame. After the mapping $$\lambda \mapsto \lambda+2r_{m+1}+\frac{r_{m+1}^2}{\lambda},$$ the $m+1$ eigenvalues of $S_m$ equal to $B_m$ are mapped to a common value. Moreover, since $r_{m+1}=\sqrt{A_mB_m}$, $$A_m+2r_{m+1}+\frac{r_{m+1}^2}{A_m}= B_m+2r_{m+1}+\frac{r_{m+1}^2}{B_m}=B_{m+1}.$$ Thus, we have at least $m+2$ eigenvalues of $S_{m+1}$ equal to $B_{m+1}$ (namely, those mapped from the $m+1$ equal eigenvalues of the previous step and the one mapped from $A_m$). This completes the induction.
Therefore, $S_{n-1}$ is a multiple of the identity since it has all equal eigenvalues. Hence, $\{v_j^{(n-1)}\}_{j=1}^k$ is a tight frame. Then, let $\delta_j= v_j^{(n-1)}-v_j$ for each $1\le j\le k$.
Finally, we show $\{v_j^{(n-1)}\}_{j=1}^k$ is nonzero. In particular, after every iteration, none of the frame elements become $0$. This follows from induction. Indeed, for $m=0$, all of the frame elements are nonzero. Suppose for some $m\ge 0$, $\{v_j^{(m)}\}_{j=1}^k$ is nonzero. Observe for every $1\le j\le k$, $$v_j^{(m+1)}= (I_{n\times n}+r_{m+1}S_{m}^{-1})v_{j}^{(m)}.$$ It suffices to show $I_{n\times n}+r_{m+1}S_{m}^{-1}$ is nonsingular. By the diagonalization argument as in Lemma [Lemma 7](#lem: newEigenvalues){reference-type="ref" reference="lem: newEigenvalues"}, the eigenvalues of $I_{n\times n}+r_{m+1}S_{m}^{-1}$ are of the form $1+\frac{r_{m+1}}{\lambda}$ where $\lambda$ ranges over the eigenvalues of $S_m$. Since these are all positive, we are done. ◻
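The algorithm of the proof can be written in a few lines. The following sketch (our own implementation of the iteration $v_j^{(m)}=v_j^{(m-1)}+r_mS_{m-1}^{-1}v_j^{(m-1)}$ with $r_m=\sqrt{A_{m-1}B_{m-1}}$, applied to a randomly generated frame) reaches a numerically tight frame after at most $n-1$ steps.

```python
import numpy as np

def frame_bounds(T):
    eig = np.linalg.eigvalsh(T @ T.T)
    return eig[0], eig[-1]                    # optimal lower and upper frame bounds

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 6))               # hypothetical frame in R^3 (columns of T)

n = T.shape[0]
for _ in range(n - 1):                        # at most n - 1 steps (Theorem 3)
    A, B = frame_bounds(T)
    r = np.sqrt(A * B)
    T = T + r * np.linalg.solve(T @ T.T, T)   # v_j <- v_j + r S^{-1} v_j

A, B = frame_bounds(T)
print(A, B, "ratio:", B / A)                  # A and B coincide up to rounding
```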
# Perturbations in Finite Dimensions
In this section we focus on Problems (1) and (3) stated in the introduction. First we consider Problem (1): \"How much can a frame be perturbed and still remain a frame?\".
Given a frame, we quantify additive perturbations that preserve the frame structure. We begin by establishing a sharp bound on the norm of the perturbations.
**Proposition 8**. *Suppose $\{v_j\}_{j=1}^k$ is a frame in $\mathbb R^n$ with frame bounds $A, B>0$. If $\epsilon\le \frac{\sqrt{A}}{\sqrt{k}}$, then for every $\{\delta_j\}_{j=1}^k$ with $||\delta_j||< \epsilon$ , the set $\{v_j+\delta_j\}_{j=1}^k$ is a frame.*
*Proof.* Fix $\{\delta_j\}_{j=1}^k$ with $||\delta_j||< \epsilon$. Let $\alpha = \max_{j}||\delta_j||$. We apply Theorem [Theorem 6](#thm: Paley-Wiener){reference-type="ref" reference="thm: Paley-Wiener"}. Fix $\{c_j\}_{j=1}^k\subset \mathbb R$. Then by the triangle inequality and Cauchy-Schwarz, $$\begin{aligned}
\left|\left|\sum_{j=1}^k c_j\delta_{j}\right|\right| &\le \sum_{j=1}^k |c_j|\cdot ||\delta_j||\le \left(\sum_{j=1}^k |c_j|^{2}\right)^{\frac{1}{2}}\left(\sum_{j=1}^k ||\delta_j||^2\right)^{\frac{1}{2}}\\
&\le\alpha\sqrt{k}\left(\sum_{j=1}^k |c_j|^{2}\right)^{\frac{1}{2}}.\end{aligned}$$ Since $\frac{\alpha\sqrt{k}}{\sqrt{A}}< \frac{\epsilon\sqrt{k}}{\sqrt{A}}\le 1$, we are done. ◻
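A brute-force numerical check of this bound (our own illustration; the frame, the number of trials and the safety factor $0.99$ are arbitrary choices) perturbs every frame vector in a random direction with norm just below $\sqrt{A/k}$ and verifies that the lower frame bound stays positive.

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])              # hypothetical frame in R^2 with k = 3 vectors

n, k = T.shape
A = np.linalg.eigvalsh(T @ T.T)[0]            # optimal lower frame bound
eps = np.sqrt(A / k)                          # bound of Proposition 8

for _ in range(1000):
    D = rng.standard_normal((n, k))
    D *= 0.99 * eps / np.linalg.norm(D, axis=0)           # ||delta_j|| < eps for every j
    lower = np.linalg.eigvalsh((T + D) @ (T + D).T)[0]
    assert lower > 0                          # the perturbed family is still a frame
print("all perturbed families remained frames")
```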
There are frames for which this bound on $\epsilon$ is sharp. To show this, we have the following corollary:
**Corollary 1**. *Suppose $\{v_j\}_{j=1}^k$ is a unit tight frame in $\mathbb R^n$. If $\epsilon\le \frac{1}{\sqrt{n}}$, then for every $\{\delta_j\}_{j=1}^k$ with $||\delta_j||< \epsilon$ , $\{v_j+\delta_j\}_{j=1}^k$ is a frame. This bound is optimal for $\epsilon$.*
*Proof.* Note by Lemma [Lemma 4](#lem: TightFrameBound){reference-type="ref" reference="lem: TightFrameBound"}, $A=\frac{k}{n}$ for a unit tight frame. Then by Proposition [Proposition 8](#thm: StabilitywithPW){reference-type="ref" reference="thm: StabilitywithPW"}, we reach the desired bound on $\epsilon.$ We will now give an example of equality. Let the frame be $\left\{\begin{bmatrix}
1\\ 0
\end{bmatrix},\begin{bmatrix}
0\\ 1
\end{bmatrix}\right\}$. By [@Han Lemma 4.1], this is a unit tight frame. Moreover, if $\epsilon=\frac{1}{\sqrt{2}}$, then we can perturb both frame elements by vectors of norm exactly $\frac{1}{\sqrt{2}}$ so that they both become $\frac{1}{2}\begin{bmatrix}
1\\ 1
\end{bmatrix}.$ Clearly, the resulting set is no longer a frame, so the strict bound $||\delta_j||<\epsilon$ with $\epsilon=\frac{1}{\sqrt{2}}$ cannot be relaxed. ◻
**Remark 1**. *Our Corollary [Corollary 1](#cor: UTFstability){reference-type="ref" reference="cor: UTFstability"} is a more general version of the following problem that appears as exercise 6.B.14 in [@Axler]:*
*Suppose $\{e_j\}_{j=1}^n$ is an orthonormal basis of $\mathbb R^n$ and $\{v_j\}_{j=1}^n$ is a family of vectors in $\mathbb R^n$ such that $\left|\left| e_j-v_j \right|\right|< \frac{1}{\sqrt{n}}$ for each $j$. Prove $\{v_j\}_{j=1}^n$ is a basis.*
## Adding Vectors to a Frame
In this section, we focus our attention on Problem (3) stated in the introduction with regard to tight frames. Theorem 3.3 in [@LS2009] establishes that if a tight frame is augmented by $N$ vectors and the resulting frame is still tight, then $N$ must be greater than the dimension of the space.
In the case of $\mathbb R^n$, we show that the only way that a tight frame can be augmented and still remain tight is when the vectors added form a tight frame themselves. Note that since the added vectors must form a tight frame, their number needs to be at least $n$, the dimension of $\mathbb R^n$, since being a frame in finite dimensions is equivalent to being a spanning set.
**Proposition 9**. *Let $\{v_i\}_{i=1}^k$ be a tight frame for $\mathbb R^n$ with frame bound $A$. Append $p$ vectors $v_{k+1},v_{k+2},...,v_{k+p}$ obtaining $\{v_i\}_{i=1}^{k+p}$. Then, $\{v_i\}_{i=1}^{k+p}$ is a tight frame for $\mathbb R^n$ if and only if $\{v_i\}_{i=k+1}^{k+p}$ is a tight frame for $\mathbb R^n$.*
*Proof.* The reverse direction follows directly from the definition of a tight frame. It suffices to show the forward direction. Since $\{v_i\}_{i=1}^k$ is tight with bound $A$ we have $\sum_{i=1}^{k} |\langle v,v_i \rangle|^2 = A||v||^2$ for all $v \in \mathbb R^n$. Suppose $\{v_i\}_{i=1}^{k+p}$ is tight with frame bound $\hat{A}$ so that $\sum_{i=1}^{k+p} |\langle v,v_i \rangle|^2 = \hat{A}||v||^2$ for all $v \in \mathbb R^n$. Then for all $v \in \mathbb R^n$: $$\begin{gathered}
\hat{A}||v||^2 = \sum_{i=1}^{k+p} |\langle v,v_i \rangle|^2 = \sum_{i=1}^{k} |\langle v,v_i \rangle|^2 + \sum_{i=k+1}^{k+p} |\langle v,v_i \rangle|^2 = A||v||^2 + \sum_{i=k+1}^{k+p} |\langle v,v_i \rangle|^2 \\
\implies \sum_{i=k+1}^{k+p} |\langle v,v_i \rangle|^2 = \left( \hat{A}-A \right)||v||^2\end{gathered}$$ which shows that $\{v_i\}_{i=k+1}^{k+p}$ is tight with frame bound $(\hat{A}-A)$. ◻
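The dichotomy of Proposition 9 is easy to observe numerically. In the sketch below (our own example), the canonical basis of $\mathbb R^2$ is augmented first by a rotated orthonormal basis, which is itself a tight frame, and then by a single vector:

```python
import numpy as np

def is_tight(T, tol=1e-10):
    S = T @ T.T
    return np.allclose(S, S[0, 0] * np.eye(T.shape[0]), atol=tol)

T1 = np.eye(2)                                   # canonical basis: a tight frame in R^2
theta = 0.7                                      # arbitrary rotation angle
T2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]]) # rotated basis: also a tight frame

print(is_tight(np.hstack([T1, T2])))             # True: the appended vectors form a tight frame
print(is_tight(np.hstack([T1, T2[:, :1]])))      # False: only p = 1 < n vector is appended
```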
It is known that every finite frame for $\mathbb R^n$ can be turned into a tight frame with the addition of at most $n-1$ vectors. See Proposition 6.1 in [@Casazza2016]. If one starts with a tight frame, nothing needs to be added to make it tight and of course, adding zero vectors (not adding a vector at all) is in accordance with the "at most $n-1$ vectors" condition. However, it is interesting to note that [Proposition 9](#TightFramesProp){reference-type="ref" reference="TightFramesProp"} implies that if the frame is already tight and vectors are to be added, we need $n$ or more vectors to produce another tight frame.
Applications making use of frames require the associated algorithms to be numerically stable, which tight frames satisfy optimally as noted in [@KUTYNIOK2013]. Thus, a key question in frame theory is the following: given a frame, can the frame vectors be modified to become a tight frame? One way to do so is by scaling each frame vector as in [@DeCarliCassaza]. This motivates the following definition for a scalable frame which we borrow from [@DeCarliCassaza]:
**Definition 10**. *A frame $\left\{x_i\right\}_{i=1}^{m}$ for $\mathbb R^n$ is scalable if there exist non-negative constants $\left\{a_i\right\}_{i=1}^{m}$ for which $\left\{a_i x_i\right\}_{i=1}^{m}$ is a tight frame.*
Our Proposition [Proposition 9](#TightFramesProp){reference-type="ref" reference="TightFramesProp"} immediately produces the following result:
**Corollary 2**. *If $\{v_i\}_{i=1}^k$ is a frame for $\mathbb R^n$ that contains a scalable sub-frame with $p$ elements and $k-p<n$, then $\{v_i\}_{i=1}^k$ itself is not scalable.*
Although Proposition [Proposition 9](#TightFramesProp){reference-type="ref" reference="TightFramesProp"} is from the perspective of adding vectors to a tight frame, we obtain corollaries that extend results regarding erasures. Frames have gained significant traction not only due to their resilience to noise in signal processing applications but also due to their ability to withstand transmission losses, which present themselves mathematically as erasures of frame elements.
The topic of erasures has been addressed in the past, see [@GOYAL2001] and [@HOLMES2004]. More recently, the situation where there is a single erasure has been analyzed in [@Datta2020]. Via our corollaries, we provide improvements to some results in the aforementioned paper.
**Corollary 3**. *If $p=1$ so that one vector is being added to a tight frame, the resulting frame will never be tight unless $n=1$ (in which case we are in $\mathbb R^1$ where every frame is tight).*
With this corollary we can establish the same result as Theorem 2.3 in [@Datta2020] which states that if a vector is removed from a tight frame, the resulting set is no longer a tight frame. However, we can improve upon this with the following:
**Corollary 4**. *If the number of vectors appended, $p$, is less than the dimension of the space, $n$, that is $p<n$, then the resulting frame is never tight.*
From this corollary, we can deduce that if we start with a tight frame for $\mathbb R^n$ and $p$ vectors are removed, as long as $p<n$ the resulting set is never a tight frame. We see that erasures in numbers below the dimension of the space never result in a tight frame. This raises the following question: Is it possible for a tight frame to undergo a certain number of erasures so that the resulting set is still a tight frame? To the best of our knowledge, this question is not answered in [@Datta2020] or elsewhere. However, we can provide one with the following:
**Corollary 5**. *If $p \geq n$ then the resulting frame is tight only when the set of vectors added make a tight frame themselves.*
Thus we answer in the affirmative: a tight frame can encounter erasures so that the resulting set is still a tight frame, but that occurs if and only if the removed vectors form a tight frame themselves.
We have provided a complete characterization of a tight frame encountering erasures and whether the resulting set after the erasures is a tight frame or not:
**Corollary 6**.
- *If a tight frame for $\mathbb R^n$ encounters $p < n$ erasures, the resulting set is never a tight frame.*
- *If a tight frame for $\mathbb R^n$ encounters $p \geq n$ erasures, the resulting set is a tight frame if and only if the set of vectors removed form a tight frame themselves.*
The condition number of a frame corresponds to the classical condition number of the frame operator matrix, which is a measure of the effect of perturbations on the inverse. A low condition number is desired for purposes of numerical stability in the reconstruction formula, which makes use of the inverse of the frame operator matrix. Changes in the condition number after erasures are of interest, and [@Datta2020] states in Proposition 2.5 that for a unit norm tight frame the condition number always increases after a single erasure. But we can now extend Proposition 2.5 and drop the requirement that the tight frame must be unit norm:
**Proposition 11**. *The condition number of a tight frame always increases after a single erasure.*
*Proof.* A tight frame has condition number 1. By the preceding corollaries, a single erasure leaves a set that is no longer a tight frame, and hence its condition number is strictly greater than 1. ◻
This raises an important question: what is the new condition number of a tight frame that has had some frame elements removed? Proposition 2.10 in [@Datta2020] establishes a worst case and best case of the new condition number after a single erasure but it is worthwhile to explore this problem further in more generality.
# Open problems and Remarks
The representation formula [\[repr-formula\]](#repr-formula){reference-type="eqref" reference="repr-formula"} requires the inversion of the frame operator matrix. The optimal scenario from a computational perspective is encountered with tight frames in which the frame operator is a multiple of the identity and hence the easiest to invert. The next best case would be having a diagonal frame operator as they are also easy to invert and thus offer benefits similar to tight frames. In [@Datta2017] the following question is posed as a possible research direction: Given a frame $\left\{ v_i \right\}$, what are the operators $M$ for which $\left\{M v_i\right\}$ is a frame whose frame operator is a diagonal matrix? This question can also be approached from the perspective of additive perturbations: Given a frame $\left\{ v_i \right\}$, what are the perturbations $\left\{ d_i \right\}$ for which $\left\{v_i + d_i\right\}$ is a frame whose frame operator is a diagonal matrix?
We show a simple example of a perturbation that transforms a frame in $\mathbb R^2$ into a frame with a diagonal frame operator.
We denote with $e_1=(1, 0), e_2=(0,1)$ the vectors of the canonical basis of $\mathbb R^2$.
**Proposition 12**. *If $\{v_j\}_{j=1}^k$ is a nonzero frame in $\mathbb R^2$, then there exists $\epsilon\in \mathbb R$ such that the frame operator of $\{v_j\}_{j=1}^{k-1}\cup \{v_k+\epsilon e_j\}$ is diagonal for some $j\in\{1, 2\}$.*
*Proof.* Let $T=\begin{bmatrix}
u_1 \\ u_2
\end{bmatrix}$ be the synthesis operator for $\{v_j\}_{j=1}^k$, where $u_1, u_2$ are the rows of the matrix $T$. Let $v_s(i)$ be an entry of $T$ with the largest absolute value, i.e., maximizing $|v_j(i)|$ over all $j$ and $i$; note that $v_s(i)\neq 0$ since the frame is nonzero. Reorder the frame elements so that this vector is the last one, $v_k$.
Let $$\epsilon= -\frac{\langle u_1, u_2\rangle}{v_k(i)}$$ and let $T^\prime =\begin{bmatrix}
u_1^\prime \\ u_2^\prime
\end{bmatrix}$ be the synthesis operator for $\{v_j\}_{j=1}^{k-1}\cup \{v_k+\epsilon e_j\}$, where $j\in\{1,2\}$ is the index different from $i$. Adding $\epsilon e_j$ to $v_k$ only modifies the $k$-th entry of the row $u_j$, so $$\begin{aligned}
\langle u_1^\prime, u_2^\prime \rangle &= \langle u_1, u_2\rangle + \epsilon v_k(i)=0.
\end{aligned}$$
Hence, the frame operator $T^\prime(T^\prime)^*$ is diagonal. ◻
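A small numerical sketch of this construction (our own illustration; the frame is an arbitrary choice) perturbs, in the coordinate complementary to that of the largest-magnitude entry, the vector containing that entry, and obtains a diagonal frame operator; the reordering step of the proof is skipped since the perturbed vector is addressed directly by its column index.

```python
import numpy as np

# Hypothetical frame in R^2; columns are the frame vectors, rows are u_1 and u_2.
T = np.array([[1.0, 2.0, 0.5],
              [1.0, -1.0, 3.0]])

i, k = np.unravel_index(np.argmax(np.abs(T)), T.shape)   # coordinate i and vector k of the largest entry
j = 1 - i                                                # perturb along the other coordinate e_j

eps = -np.dot(T[0], T[1]) / T[i, k]
T_new = T.copy()
T_new[j, k] += eps                                       # v_k -> v_k + eps * e_j

print(np.round(T_new @ T_new.T, 12))                     # the off-diagonal entries vanish
```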
With this generalization, we can rephrase one of the questions that we posed in the introduction regarding frame stability in terms of diagonal frames:
1. How much does a frame need to be perturbed to become a frame with a diagonal frame operator?
## Infinite Dimensional Frames
To conclude this paper, we give some remarks about extending our main results to infinite dimensional Hilbert spaces.
The construction given in Theorem [Theorem 2](#thm: betterFrame){reference-type="ref" reference="thm: betterFrame"} relies on the following fact in finite dimensions: $$\max_{j \in \{1,...,k\}} \left|\left| S^{-1}v_j \right|\right|<\infty.$$ In other words, the canonical dual frame is always bounded. However, in infinite dimensional spaces, it is not immediately obvious whether this is true.
The construction given in Theorem [Theorem 3](#thm: tightAlg){reference-type="ref" reference="thm: tightAlg"} needs the frame operator to have finitely many eigenvalues. In infinite dimensions, the theorem holds for frames with frame operators having finite spectrum. However, most frame operators do not have this property, so more powerful tools will need to be used.
As mentioned in the introduction, a corollary to the proof of the Krein-Milman-Rutman theorem forbids an infinite dimension analog to Proposition [Proposition 8](#thm: StabilitywithPW){reference-type="ref" reference="thm: StabilitywithPW"}. We leave as an open problem to characterize the additive perturbations $\{\epsilon_n\}_{n \in \mathbb N}$ in the Krein-Milman-Rutman theorem that preserve frames.
We state a simple convexity property of perturbed frames which could allude to thinking of frames topologically:
**Proposition 13**. *Suppose $\{x_n\}_{n \in \mathbb N}$ is a frame in $\mathbb{H}$ with frame bounds $A,B$. Moreover, suppose $\{y_n\}_{n \in \mathbb N}$ is a sequence in $\mathbb{H}$ such that $\{x_n\}_{n \in \mathbb N}$ and $\{y_n\}_{n \in \mathbb N}$ satisfy [\[eq: PWIneq\]](#eq: PWIneq){reference-type="eqref" reference="eq: PWIneq"} with $\lambda=0$ and $0\le \mu<\sqrt{A}$. Let $\tau < \frac{\sqrt{A}}{\mu}$. Then, for every sequence of complex numbers $\{t_n\}_{n \in \mathbb N}$ with $|t_n|\le \tau$, $$\left\{(1-t_n)x_n+t_ny_n\right\}_{n \in \mathbb N}$$ is a frame.*
*Proof.* We apply Theorem [Theorem 6](#thm: Paley-Wiener){reference-type="ref" reference="thm: Paley-Wiener"}. Fix $\{c_n\}_{n=1}^N$ in $\mathbb C$. Note $\sup_{n \in \mathbb N}|t_n|\le \tau$. Then, $$\begin{aligned}
\left|\left| \sum_{n \in \mathbb N} c_n(x_n - (1-t_n)x_n-t_ny_n) \right|\right| &=\left|\left| \sum_{n \in \mathbb N}c_nt_n(x_n-y_n) \right|\right|\\
&\le \mu \left(\sum_{n \in \mathbb N} |c_nt_n|^2\right)^{\frac{1}{2}}\\
&\le \mu \tau \left(\sum_{n \in \mathbb N} |c_n|^2\right)^{\frac{1}{2}}.
\end{aligned}$$
Since $\frac{\mu\tau}{\sqrt{A}}<1$, we are done. ◻
**Remark 2**. *Observe $\frac{\sqrt{A}}{\mu}>1$ so $\tau =1$ always satisfies Proposition [Proposition 13](#prop:convexExtension){reference-type="ref" reference="prop:convexExtension"}.*
| arxiv_math | {
"id": "2309.06331",
"title": "Additive Stability of Frames",
"authors": "Oleg Asipchuk, Jacob Glidewell, and Luis Rodriguez",
"categories": "math.FA",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
  We analyze the problem of network identifiability with nonlinear functions associated with the edges. We consider a static model for the output of each node and by assuming a perfect identification of the function associated with the measurement of a node, we provide conditions for the identifiability of the edges in a specific class of functions. First, we analyze the identifiability conditions in the class of all nonlinear functions and show that even for a path graph, it is necessary to measure all the nodes except the source. Then, we consider analytic functions satisfying $f(0)=0$ and we provide conditions for the identifiability of paths and trees. Finally, by restricting the problem to a smaller class of functions where none of the functions is linear, we derive conditions for the identifiability of directed acyclic graphs. Some examples are presented to illustrate the results.
author:
- "Renato Vizuete and Julien M. Hendrickx [^1] [^2]"
bibliography:
- CDC.bib
title: " Nonlinear network identifiability: The static case "
---
# Introduction
Networked systems composed of nodes or subsystems interacting with each other are ubiquitous [@bullo2022lectures]. In several of these systems, the knowledge of the dynamics associated with the edges is essential for the analysis of the system and the design of control algorithms. However, the identification of networked systems from partial measurements, without disconnecting parts of the network, can be very challenging since a measured signal depends on the combined dynamics of potentially many edges.
There have been some recent works in the linear case on the conditions for identifiability: when is it possible to unambiguously recover local dynamics from a set of measured nodes? This question is important in order to design experiments and position sensors and excitations [@ramaswamy2019generalized; @bombois2023informativity; @kivits2023identifiability]. This depends mainly on the topology of the network and on the position of the excitation and measured signals. Graph-theoretical conditions are available in the full measurement or full excitation case [@hendrickx2019identifiability], but not in the general case yet [@legat2020local; @legat2021path; @bazanella2019network; @cheng2023necessary]. However, most actual systems of interest are nonlinear, including many different research fields like coupled oscillators [@dorfler2014synchronization], gene regulatory networks [@pan2012reconstruction], biochemical reaction networks [@aalto2020gene], social networks [@bizyaeva2023nonlinear], among others. While linear systems usually provide a local approximation of nonlinear phenomena, no one to the best of our knowledge has studied the identifiability question for nonlinear systems.
The identification of a nonlinear system is itself a challenging problem due to the variety of potential models (e.g., Hammerstein, Wiener, Volterra series) and the constant emergence of new formulations for particular applications. Depending on the type of nonlinearities and their location (i.e., at the level of inputs, outputs or in the middle of interactions), certain models can be more suited for specific applications, while others may not give a good description of some systems [@janczak2004identification; @nelles2020nonlinear; @paduart2010identification]. In addition to the complexity of a single nonlinear model, a network involves several nonlinear systems associated with the edges, which generates complex collective behaviors and considerably increases the difficulty of the identification problem.
In the nonlinear case, the conditions for identifiability of networks do not depend only on the network topology but also on the types of nonlinear functions. For instance, trigonometric functions in coupled oscillators [@dorfler2014synchronization] are very different from the activation functions in neural networks, which can be nondifferentiable [@aggarwal2018neural]. Furthermore, in heterogeneous networks, different types of functions could be associated with edges in the same network. In addition, the class of functions considered for the problem of identifiability can be decisive. It is clear that if we restrict the problem to a small class of functions, the conditions for identifiability of a network could be relaxed, but the functions in the class might not fit real models. Moreover, properties of functions such as continuity, differentiability, analyticity, etc., could play an important role in the determination of conditions for the identifiability of networks.
We study here the question of identifiability in the nonlinear setting, assuming in this first work that the local dynamics have a very simple structure (i.e., the output of a node is entirely determined by static interactions with the neighbors). We show that, surprisingly, the conditions for identifiability in directed acyclic graphs are weaker than in the linear case, provided that the dynamics are indeed not linear and do not involve a constant output component (when they do, the problem is indeed unsolvable). We explain this by the fact that in the linear case, the loss of identifiability often results from ambiguities made possible by the superposition principle (superposition of signals), which is no longer possible in the nonlinear case.
In this work, we provide a formulation of the network identifiability problem in the nonlinear case, by considering a static model in the edges. By restricting the problem to a specific class of functions, we provide identifiability conditions for paths and trees. Furthermore, by considering a smaller class of functions, we derive conditions for the identifiability of directed acyclic graphs.
# Problem formulation
## Model class
Since the type of nonlinear dynamics in a network can be really complex, in this preliminary work, we will consider a static additive model to focus on the effect of the nonlinearities. Therefore, we exclude dynamical processes that involve any memory at the level of nodes or edges. Our objective is to generalize the results of this paper to more complex dynamical models in future works.
For a network composed by $n$ nodes, we consider that the output of each node $i$ is given by: $$\label{eq:nonlinear_model}
y_i^k=\sum_{j\in \mathcal{N}_i}f_{i,j}(y_j^{k-1})+u_i^{k-1}, \;\; \text{for all } i\in\{1,\ldots,n\},$$ where the superscripts denote the value of the inputs and outputs at the specific time instant, $f_{i,j}$ is a nonlinear function, $\mathcal{N}_i$ is the set of in-neighbors of node $i$, and $u_i$ is an external excitation signal. The node $i$ is not included in $\mathcal{N}_i$, since it would imply a dynamical process at the level of the node. The model [\[eq:nonlinear_model\]](#eq:nonlinear_model){reference-type="eqref" reference="eq:nonlinear_model"} corresponds to a nonlinear static version of the model considered in [@hendrickx2019identifiability; @legat2020local; @legat2021path], where the nonlinearities are located in the edges. In this case, the output of a node $i$ is determined by its own excitation signal $u_i$, and the outputs of the neighbors $y_j$ affected by a nonlinear function $f_{i,j}$ associated with the edge that connects the neighbor. Notice that when the functions $f_{i,j}$ in [\[eq:nonlinear_model\]](#eq:nonlinear_model){reference-type="eqref" reference="eq:nonlinear_model"} are linear, the conditions for identifiability of linear networks derived in [@hendrickx2019identifiability; @legat2020local; @legat2021path] also hold. In this work, we will consider analytic functions with a Taylor series that converges to the function for all $x\in\mathbb R$. This representation as power series will allow us to derive conditions for the identifiability of nonlinear networks.
Model [\[eq:nonlinear_model\]](#eq:nonlinear_model){reference-type="eqref" reference="eq:nonlinear_model"} corresponds to the full excitation case where all the nodes are excited. The nonzero functions $f_{i,j}$ between the agents define the topology of the network $G$, forming the set of edges $E$. In this work, we do not consider multi-edges between two nodes.
**Assumption 1**. *The topology of the network is known, where the presence of an edge implies a nonzero function.*
Assumption [Assumption 1](#ass:full_excitation){reference-type="ref" reference="ass:full_excitation"} implies that we know which nodes are connected by nonzero functions. The objective is to determine which nodes need to be measured to identify all the nonlinear functions in the network.
Similarly to [@hendrickx2019identifiability; @legat2020local; @legat2021path], for the identification process we assume that in an ideal scenario the relations between excitations and outputs of the nodes have been perfectly identified. In this work, we restrict our attention to networks that do not contain any cycle (i.e., directed acyclic graphs). This implies that when we measure a node $i$, we identify the function $F_i^k$: $$\begin{gathered}
\label{eq:function_Fi}
\!\!\!\!\!y_i^k\!=\!u_i^{k-1}\!+F_i^k(u_1^{k-2},\ldots,u_1^{k-m_1},\ldots,u_{n_i}^{k-2},\ldots,u_{n_i}^{k-m_{n_i}}),\\
1,\dots,n_i\in \mathcal{N}_i^p,\end{gathered}$$ where $\mathcal{N}_i^p$ denotes the set of nodes that have a path to the measured node $i$. The function $F_i^k$ is implicitly defined by [\[eq:nonlinear_model\]](#eq:nonlinear_model){reference-type="eqref" reference="eq:nonlinear_model"} and only depends on a finite number of inputs due to the absence of memory on the edges and nodes, and the absence of cycles. With a slight abuse of notation, we use the superscript in the function $F_i^{k-s}$ to indicate that all the inputs in [\[eq:function_Fi\]](#eq:function_Fi){reference-type="eqref" reference="eq:function_Fi"} are delayed by $s$.
**Example 1**. *Let us consider the graph in Fig. [\[fig:DAG_Fone\]](#fig:DAG_Fone){reference-type="ref" reference="fig:DAG_Fone"} where the measurement of the node 3 provides the output: $$\begin{aligned}
y_3^k&=u_3^{k-1}+F_3^k\nonumber\\
&=u_3^{k-1}+f_{3,2}(u_2^{k-2}+f_{2,1}(u_1^{k-3}))+f_{3,1}(u_1^{k-2}).\label{eq:DAG_different_paths}
\end{aligned}$$ We can observe that the function $F_3^k$ depends on the inputs of the nodes 1 and 2 that have a path to the node 3.*
## Identifiability
The identifiability problem is related to the possibility of identifying the functions $f_{i,j}$ based on several measurements. For this, we introduce the following relationship between the measurements and the functions $f_{i,j}$.
**Definition 1** (Set of measured functions). *Given a set of measured nodes $\mathcal{N}^m$, the set of measured functions $F(\mathcal{N}^m)$ associated with $\mathcal{N}^m$ is given by: $$F(\mathcal{N}^m):=\{F_i^k\;|\;i\in \mathcal{N}^m\}.$$*
We say that a function $f_{i,j}$ associated with an edge satisfies $F(\mathcal{N}^m)$ if $f_{i,j}$ can lead to $F(\mathcal{N}^m)$ through [\[eq:nonlinear_model\]](#eq:nonlinear_model){reference-type="eqref" reference="eq:nonlinear_model"}.
For completely arbitrary functions, the identifiability problem can be really challenging or even unrealistic. For this reason, we restrict the identifiability problem to a certain class of functions $\mathcal{F}$, which implies that the functions associated with the edges belong to $\mathcal{F}$ and that the identifiability is considered only among the functions belonging to $\mathcal{F}$. The different classes of functions will be specified depending on the results.
**Definition 2** (Edge identifiable). *In a network $G$, an edge $f_{i,j}$ is identifiable in a class $\mathcal{F}$ if given a set of measured functions $F(\mathcal{N}^m)$, every set of functions in $\mathcal{F}$ leading to $F(\mathcal{N}^m)$ has the same $f_{i,j}$.*
**Definition 3** (Network identifiable). *A network $G$ is identifiable in a class $\mathcal{F}$ if all the edges are identifiable in the class $\mathcal{F}$.*
The function $F_i^k$ in [\[eq:function_Fi\]](#eq:function_Fi){reference-type="eqref" reference="eq:function_Fi"} is the most complete information that we can obtain when we measure a node $i$. This implies that if it is not possible to identify the functions $f_{i,j}$ with $F_i^k$, these edges are unidentifiable. On the contrary, if the functions $f_{i,j}$ are identifiable, it seems reasonable that under some conditions, the function $F_i^k$ can be well approximated after sufficiently long experiments, which could allow us to identify the functions $f_{i,j}$ approximately.
## First results
We provide a result about the information that we can obtain with the measurement of sinks and sources[^3].
**Proposition 1** (Sinks and sources). *The measurement of the sources is never necessary for the identifiability of the network. The measurement of all the sinks is necessary for the identifiability of the network.*
*Proof.* First, the measurement of any source $j$ generates the output $y_j^k=u_j^{k-1},$ which does not provide any information about functions associated with edges in the network. Next, let us consider a sink $i$ with $m$ incoming edges. The measurement of this sink provides an output: $$\label{eq:output_sink_functions}
y_i^k=u_i^{k-1}+f_{i,1}(y_1^{k-1})+\cdots+f_{i,m}(y_m^{k-1}),$$ and it is the only way of obtaining information of the functions $f_{i,1},\ldots,f_{i,m}$. Thus, the measurement of all the sinks is necessary. ◻
The following lemma provides a result about the structure of the function $F_i^k$ associated with the measurement of a node $i$ with respect to the excitation signals of the in-neighbors.
**Lemma 1**. *Let $j$ be an in-neighbor of a measured node $i$ and consider the function $F_i^k$. Then, holding all the variables but $u_j^{k-2}$ constant, we have: $$F_i^k=\alpha+f_{i,j}(u_j^{k-2}+\beta),$$ where $\alpha$ and $\beta$ are constants with respect to $u_j^{k-2}$.*
*Proof.* According to [\[eq:function_Fi\]](#eq:function_Fi){reference-type="eqref" reference="eq:function_Fi"}, the function $F_i^k$ of a measured node $i$ is given by: $$\begin{aligned}
F_i^k&=\sum_{\ell=1}^m f_{i,\ell}(y_\ell^{k-1})\nonumber\\
&=\sum_{\ell=1}^m f_{i,\ell}(u_\ell^{k-2}+F_{\ell}^{k-1}) \label{eq:sum_measure_node},
\end{aligned}$$ where $m$ is the number of in-neighbors of the node $i$. All the functions $F_{\ell}^{k-1}$ depend on inputs delayed by 1, which implies that no $F_{\ell}^{k-1}$ depends on $u_j^{k-2}$. Finally, no $f_{i,p}$ with $p\neq j$ can be a function of $u_j^{k-2}$ since there are no multi-edges. ◻
Lemma [Lemma 1](#lemma:unique_functions){reference-type="ref" reference="lemma:unique_functions"} implies that $f_{i,j}$ in [\[eq:sum_measure_node\]](#eq:sum_measure_node){reference-type="eqref" reference="eq:sum_measure_node"} is the only function that depends on $u_j^{k-2}$, and $F_{j}^{k-1}$ does not depend on $u_j^{k-2}$.
# Paths and trees
## Strong requirements for general nonlinear functions
Since the conditions for the identifiability of linear networks are based on the existence of paths in the network that carry information from the excited nodes to the measured nodes [@hendrickx2019identifiability; @legat2021path], we first focus on the conditions for the identifiability of a path graph [^4] in the nonlinear case.
In the linear case, for this graph topology we only need to measure the sink to identify all the transfer functions of the network thanks to the superposition principle [@hendrickx2019identifiability]. However, this is not true for the nonlinear case.
**Example 2** (Path graph). *Fig. [\[fig:path_graph\]](#fig:path_graph){reference-type="ref" reference="fig:path_graph"} presents a simple path graph with 3 nodes where the measurement of the sink is not enough to identify the network when general nonlinear functions are considered.*
**Proposition 2** (General nonlinear functions). *For identifiability of a path graph in the class of general nonlinear functions, it is necessary to measure all the nodes except the source.*
*Proof.* Let us consider a path graph with $n>2$ nodes and a node $i$ in the middle, which is neither the source nor the sink. The output of the node $i+1$ is given by: $$\begin{aligned}
y_{i+1}^k&=u_{i+1}^{k-1}+F_{i+1}^k\label{eq:F_ik}\\
&=u_{i+1}^{k-1}+f_{i+1,i}(u_i^{k-2}+f_{i,i-1}(u_{i-1}^{k-3}+F_{i-1}^{k-2})). \nonumber\end{aligned}$$ If the node $i$ is not measured and we consider the functions $\tilde f_{i,i-1}(x)=f_{i,i-1}(x)+\gamma$ and $\tilde f_{i+1,i}(x)=f_{i+1,i}(x-\gamma)$ with $\gamma\neq 0$, the function $F_{i+1}^k$ in [\[eq:F_ik\]](#eq:F_ik){reference-type="eqref" reference="eq:F_ik"} is the same, which implies that the path graph cannot be identified.
On the other hand, if we measure all the nodes, we know the function $F_j^k$ associated with any node $j$ in the network. Let us consider a node $i$ with in-neighbor $i-1$ and set all the inputs to 0 except $u_{i-1}^{k-2}$ and $u_i^{k-1}$. Then, the measurement of the node $i$ gives us: $$\begin{aligned}
y_i^k&=u_i^{k-1}+F_i^k\label{eq:F_iik}\\
&=u_i^{k-1}+f_{i,i-1}(u_{i-1}^{k-2}+F_{i-1}^{k-1}(0))\nonumber\end{aligned}$$ If there is another function $\tilde f_{i,i-1}$ satisfying $F_{i}^k$ in [\[eq:F_iik\]](#eq:F_iik){reference-type="eqref" reference="eq:F_iik"}, we would have for all $u_{i-1}^{k-2}\in\mathbb R$: $$f_{i,i-1}(u_{i-1}^{k-2}+F_{i-1}^{k-1}(0))=\tilde f_{i,i-1}(u_{i-1}^{k-2}+F_{i-1}^{k-1}(0)),$$ which implies that $f_{i,i-1}=\tilde f_{i,i-1}$ and we can identify $f_{i,i-1}$. Following a similar approach for the other nodes, we can identify all the nonlinear functions in the path graph. Finally, by Proposition [Proposition 1](#prop:sinks_sources){reference-type="ref" reference="prop:sinks_sources"}, it is never necessary to measure the source. ◻
Notice that in the proof of Proposition [Proposition 2](#prop:path_general){reference-type="ref" reference="prop:path_general"} we do not use properties of analytic functions, so the result is also valid for nonlinear functions that are not analytic. Proposition [Proposition 2](#prop:path_general){reference-type="ref" reference="prop:path_general"} shows that even in a simple graph topology like a path graph, which is key to the identification of more complex network topologies, the identification of general nonlinear functions cannot be performed by only measuring the sink. This is due to the constant offset that can be exchanged between consecutive edge functions without affecting the measured output.
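This ambiguity is easy to reproduce numerically. In the sketch below (our own example; the functions $\tanh$ and $\sin$ and the offset $\gamma$ are hypothetical choices), the two sets of edge functions $\{f_{2,1},f_{3,2}\}$ and $\{f_{2,1}+\gamma,\,f_{3,2}(\cdot-\gamma)\}$ produce exactly the same measurement at the sink of a path $1\to2\to3$, so the sink alone cannot distinguish them. Note that the modified function no longer satisfies $f(0)=0$, which is precisely what the class introduced below rules out.

```python
import numpy as np

gamma = 0.8                                        # arbitrary constant offset
f21, f32 = np.tanh, np.sin                         # hypothetical edge functions
f21_t = lambda x: np.tanh(x) + gamma               # shifted by gamma
f32_t = lambda x: np.sin(x - gamma)                # compensating shift

rng = np.random.default_rng(3)
for _ in range(5):
    u1, u2, u3 = rng.standard_normal(3)            # inputs u_1^{k-3}, u_2^{k-2}, u_3^{k-1}
    y3   = u3 + f32(u2 + f21(u1))                  # sink output with the original functions
    y3_t = u3 + f32_t(u2 + f21_t(u1))              # sink output with the modified functions
    print(np.isclose(y3, y3_t))                    # True on every draw
```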
## Identifiability conditions for functions with no constant effect
We restrict the identifiability problem to a smaller class of functions without a static component.
**Definition 4** (Class of functions $\mathcal{F}_Z$). *Let $\mathcal{F}_Z$ be the class of functions $f:\mathbb R\to\mathbb R$ with the following properties:*
1. *$f$ is analytic in $\mathbb R$.*
2. *$f(0)=0$.*
Thus, we consider that for any function in $\mathcal{F}_Z$, the Taylor series at 0 converges to the function for all $x\in \mathbb R$. Since the power series is unique, there is no loss of generality by considering the Taylor series at 0 and not centered at a different point in $\mathbb R$.
Notice that the class $\mathcal{F}_Z$ encompasses numerous nonlinear functions [@abramowitz1965handbook], including polynomial functions which are used for the approximation of continuous functions through the Weierstrass Approximation theorem [@llavona1986approximation]. Also, there is no loss of generality with the class $\mathcal{F}_Z$ since the results are valid for the variable part of the functions.
**Lemma 2**. *For identifiability in the class $\mathcal{F}_Z$, the measurement of a node provides the identification of all the incoming edges of the node.*
*Proof.* Let us consider a node $i$ with $m$ incoming edges. The output of the node $i$ is given by: $$\label{eq:F_incoming_edges}
y_i^k=u_i^{k-1}+F_i^k(u_1^{k-2},\ldots,u_m^{k-2},\ldots),$$ where $F_i^k$ is determined by the set of functions $\{ f \}$ associated with the edges of the network. Let us assume that there exists another set of functions $\{ \tilde f \}\neq \{ f \}$ such that: $$F_i^k(u_1^{k-2},\ldots,u_m^{k-2},\ldots)=\tilde F_i^k(u_1^{k-2},\ldots,u_m^{k-2},\ldots),$$ where $\tilde F_i^k$ is composed by the functions in the set $\{ \tilde f\}$. Let us choose a point $(u_j^{k-2},0,\ldots,0)$ with $j=1,\ldots,m$, such that all the inputs are set to zero except one of the inputs of the incoming edges of the node $i$. Then, since each function is in $\mathcal{F}_Z$ and by Lemma [Lemma 1](#lemma:unique_functions){reference-type="ref" reference="lemma:unique_functions"} we have: $$F_i^k=f_{i,j}(u_j^{k-2})\quad \text{and} \quad \tilde F_i^k=\tilde f_{i,j}(u_j^{k-2}).$$ Since we assume that $F_i^k=\tilde F_i^k$, it yields: $$f_{i,j}(u_j^{k-2})=\tilde f_{i,j}(u_j^{k-2}), \text{ for all } u_j^{k-2}\in\mathbb R,$$ which implies that $f_{i,j}=\tilde f_{i,j}$. Following a similar argument for each incoming edge of node $i$, we prove that all the functions associated with the incoming edges of the node $i$ are unique and can be identified. ◻
Notice that due to other possible paths from an in-neighbor $j$ of the node $i$, additional terms of the form $u_j^{k-r}, r>2$ could appear in [\[eq:F_incoming_edges\]](#eq:F_incoming_edges){reference-type="eqref" reference="eq:F_incoming_edges"}. However, they will always be delayed by virtue of Lemma [Lemma 1](#lemma:unique_functions){reference-type="ref" reference="lemma:unique_functions"}.
The following lemmas involve properties of analytic functions that will be used in the proof of the results in this section.
**Lemma 3**. *(Periodic functions)[\[lemma:periodic\]]{#lemma:periodic label="lemma:periodic"} If for some $p_0 \in \mathbb R$, an analytic function $f:\mathbb R\to \mathbb R$ is periodic for periods $p\in[p_0-\epsilon,p_0+\epsilon]$ with $\epsilon>0$, then the function $f$ is constant.*
*Proof.* The proof is left to Appendix [5.1](#app:1){reference-type="ref" reference="app:1"}. ◻
**Lemma 4**. *Given three non-zero analytic functions $f:\mathbb R\to\mathbb R$ and $g,\tilde g:\mathbb R^m\to \mathbb R$ satisfying $g(0)=\tilde{g}(0)=0$. If for all $x\in\mathbb R$, $y\in\mathbb R^m$, the functions $f$, $g$ and $\tilde g$ satisfy: $$f(x+g(y_1,\ldots,y_m))=f(x+\tilde{g}(y_1,\ldots,y_m)),$$ then either $g=\tilde g$ or $f$ is constant.*
*Proof.* The proof is left to Appendix [5.2](#app:2){reference-type="ref" reference="app:2"}. ◻
**Corollary 1**. *Under the same conditions as in Lemma [Lemma 4](#lemma:identification_one_function){reference-type="ref" reference="lemma:identification_one_function"}, if $f(0)=0$, then $$g=\tilde{g}.$$*
**Proposition 3** (Paths). *For identifiability of a path graph in the class $\mathcal{F}_Z$, it is necessary and sufficient to measure the sink.*
*Proof.* Let us consider a path with $n$ nodes. The measurement of the sink gives us the output: $$\begin{aligned}
y_n^k&=u_n^{k-1}+F_n^k\nonumber\\
&=u_n^{k-1}+f_{n,n-1}(u_{n-1}^{k-2}+F_{n-1}^{k-1})\label{eq:sink_path}. \end{aligned}$$ Let us assume that there is a set $\{ \tilde f\}\neq \{ f\}$ such that $F_n^k=\tilde F_n^k$, which by [\[eq:sink_path\]](#eq:sink_path){reference-type="eqref" reference="eq:sink_path"} implies: $$f_{n,n-1}(u_{n-1}^{k-2}+F_{n-1}^{k-1})=\tilde f_{n,n-1}(u_{n-1}^{k-2}+\tilde F_{n-1}^{k-1}).$$ By Lemma [Lemma 2](#lemma:sinks){reference-type="ref" reference="lemma:sinks"}, we can guarantee that $f_{n,n-1}=\tilde f_{n,n-1}$, and we have: $$f_{n,n-1}(u_{n-1}^{k-2}+F_{n-1}^{k-1})= f_{n,n-1}(u_{n-1}^{k-2}+\tilde F_{n-1}^{k-1}).$$ Then, we use Corollary [Corollary 1](#corr:g_egal_g){reference-type="ref" reference="corr:g_egal_g"} to guarantee that $F_{n-1}^{k-1}=\tilde F_{n-1}^{k-1}$. Notice that now the identifiability problem is equivalent to having measured the node $n-1$ and by following a similar approach, we can continue with the identification of all the edges and guarantee that $\{ f \}=\{ \tilde f \}$, such that all the path can be identified. ◻
**Proposition 4** (Trees). *For identifiability of a tree in the class $\mathcal{F}_Z$, it is necessary and sufficient to measure all the sinks.*
*Proof.* From Proposition [Proposition 1](#prop:sinks_sources){reference-type="ref" reference="prop:sinks_sources"}, it is necessary to measure all the sinks. Let us consider an arbitrary tree and the measurement of a sink $i$. Let us assume that there are $m$ in-neighbors of the sink $i$ and there is a set $\{ \tilde f\}\neq \{ f \}$ such that $F_i^{k}=\tilde F_i^{k}$, which implies: $$\label{eq:branches_tree}
\sum_{\ell=1}^mf_{i,\ell}(u_\ell^{k-2}+F_\ell^{k-1})=\sum_{\ell=1}^m\tilde f_{i,\ell}(u_\ell^{k-2}+ \tilde F_\ell^{k-1}).$$ Since in a tree, the functions $F_\ell^{k-1}$ do not have common inputs because they come from different branches, we can select an in-neighbor $j$ and set to zero the inputs of all the nodes that do not have a path to $j$, such that we have: $$f_{i,j}(u_j^{k-2}+F_j^{k-1})=\tilde f_{i,j}(u_j^{k-2}+\tilde F_j^{k-1}).$$ Then, by using Lemma [Lemma 2](#lemma:sinks){reference-type="ref" reference="lemma:sinks"} and Corollary [Corollary 1](#corr:g_egal_g){reference-type="ref" reference="corr:g_egal_g"} we can guarantee that $f_{i,j}=\tilde f_{i,j}$ and $F_j^{k-1}=\tilde F_j^{k-1}$ for all $j=1,\ldots,m$, which is equivalent to having measured the in-neighbors of $i$. Then, we can continue with the identification of each branch independently and by following the same approach we can identify all the paths that finish in the sink $i$. Finally, by measuring the other sinks and following a similar approach, we can identify all the edges in the tree. ◻
**Remark 1** (Linear functions). *Notice that Propositions [Proposition 3](#prop:path_zero_function){reference-type="ref" reference="prop:path_zero_function"} and [Proposition 4](#corr:trees){reference-type="ref" reference="corr:trees"} are also valid if all or some of the edges in the network contain pure linear functions. In the next section, we will provide stronger results in the identification of nonlinear networks when linear functions are excluded.*
# Directed acyclic graphs
Directed acyclic graphs encompass a large number of graph topologies that present specific characteristics that can be used for the derivation of conditions for identifiability [@mapurunga2022excitation]. Unlike a tree, in a directed acyclic graph, the functions $F_{\ell}^{k-1}$ in [\[eq:branches_tree\]](#eq:branches_tree){reference-type="eqref" reference="eq:branches_tree"} can have common variables due to several possible paths of the same length between two nodes, which makes the application of Corollary [Corollary 1](#corr:g_egal_g){reference-type="ref" reference="corr:g_egal_g"} impossible. In order to obtain a result similar to Corollary [Corollary 1](#corr:g_egal_g){reference-type="ref" reference="corr:g_egal_g"} that allows us to identify a directed acyclic graph, we consider a smaller class of functions.
**Definition 5** (Class of functions $\mathcal{F}_{Z,NL}$). *Let $\mathcal{F}_{Z,NL}$ be the class of functions $f:\mathbb R\to\mathbb R$ with the following properties:*
1. *$f$ is analytic in $\mathbb R$.*
2. *$f(0)=0$.*
3. *The associated Taylor series $f(x)=\sum_{n=1}^\infty a_nx^n$ contains at least one coefficient $a_n\neq 0$ with $n>1$.*
The third property of the functions in $\mathcal{F}_{Z,NL}$ implies that none of the functions is linear.
Clearly $\mathcal{F}_{Z,NL}$ is a subclass of $\mathcal{F}_Z$ and all the results of the previous section for functions in $\mathcal{F}_Z$ are also valid for functions in $\mathcal{F}_{Z,NL}$.
**Lemma 5**. *Given the non-zero analytic functions $f_i:\mathbb R\to\mathbb R$ and $g_i,\tilde g_i:\mathbb R^m\to \mathbb R$ satisfying $f_i(0)=g_i(0)=\tilde{g}_i(0)=0$ for $i=1,\ldots,n$. Let us assume that none of the functions $f_i$ is linear. If for all $x\in\mathbb R$, $y\in\mathbb R^m$, the functions $f_i$, $g_i$ and $\tilde g_i$ satisfy: $$\sum_{i=1}^nf_i(x_i+g_i(y_1,\ldots,y_m))=\sum_{i=1}^nf_i(x_i+\tilde{g}_i(y_1,\ldots,y_m)),$$ then $g_i=\tilde g_i$ for all $i=1,\ldots,n$.*
*Proof.* The proof is left to Appendix [5.3](#app:3){reference-type="ref" reference="app:3"}. ◻
Notice that when $n=1$, Lemma [Lemma 5](#lemma:identification_sum_functions){reference-type="ref" reference="lemma:identification_sum_functions"} is also covered by Corollary [Corollary 1](#corr:g_egal_g){reference-type="ref" reference="corr:g_egal_g"}.
**Proposition 5**. *For the functions in $\mathcal{F}_{Z,NL}$, in a directed acyclic graph, the measurement of a node provides the identification of all the nonlinear functions of any path that finishes in the measured node.*
*Proof.* Let us assume an arbitrary directed acyclic graph. The measurement of a node $i$ provides an output of the type: $$\begin{aligned}
y_i^k&=u_i^{k-1}+F_i^k\\
&=u_i^{k-1}+\sum_{j=1}^mf_{i,j}(u_j^{k-2}+F_j^{k-1}),\end{aligned}$$ where $m$ is the number of in-neighbors of $i$. Let us assume that there is a set $\{ \tilde f\}\neq \{ f\}$ such that $F_i^k=\tilde F_i^k$, which implies: $$\sum_{j=1}^mf_{i,j}(u_j^{k-2}+F_j^{k-1})=\sum_{j=1}^m\tilde f_{i,j}(u_j^{k-2}+\tilde F_j^{k-1}),$$ By applying Lemma [Lemma 2](#lemma:sinks){reference-type="ref" reference="lemma:sinks"} we have $f_{i,j}=\tilde f_{i,j}$ for all $j=1,\ldots,m$ and: $$\sum_{j=1}^mf_{i,j}(u_j^{k-2}+F_j^{k-1})=\sum_{j=1}^mf_{i,j}(u_j^{k-2}+\tilde F_j^{k-1}),$$ and by using Lemma [Lemma 5](#lemma:identification_sum_functions){reference-type="ref" reference="lemma:identification_sum_functions"} we guarantee: $$F_j^{k-1}=\tilde F_j^{k-1} \quad \text{for all }j=1,\ldots,m.$$ Notice that the identification of each $F_j^{k-1}$ is equivalent to having measured the node $j$ and can be treated in a similar way to the node $i$, independently of other paths corresponding to the other in-neighbors of $i$. By following a similar approach, we can guarantee that $\{ f \}=\{ \tilde f \}$ for every path that ends in the node $i$. ◻
**Theorem 1** (Directed acyclic graph). *For identifiability of a directed acyclic graph in the class $\mathcal{F}_{Z,NL}$, it is necessary and sufficient to measure all the sinks.*
*Proof.* From Proposition [Proposition 1](#prop:sinks_sources){reference-type="ref" reference="prop:sinks_sources"}, it is necessary to measure all the sinks. In a directed acyclic graph, we can always find a path from any node $i$ to some sink [@bang2008digraphs]. Therefore, according to Proposition [Proposition 5](#prop:path_2_degree){reference-type="ref" reference="prop:path_2_degree"}, it is sufficient to measure the sinks to identify all the paths in a directed acyclic graph. ◻
Unlike the linear case, where the measurement of the sinks is not enough to guarantee identifiability of directed acyclic graphs [@hendrickx2019identifiability], Theorem [Theorem 1](#thm:DAG){reference-type="ref" reference="thm:DAG"} provides weaker conditions for the identifiability in the nonlinear case when linear functions are excluded.
**Example 3** (Directed acyclic graph). *Let us consider the graph in Fig. [\[fig:bridge_graph\]](#fig:bridge_graph){reference-type="ref" reference="fig:bridge_graph"}. In the linear case, this network cannot be identified by only measuring the sink since the functions $f_{2,1}$ and $f_{3,1}$ cannot be distinguished. However, in the nonlinear case, the measurement of the sink is enough to identify the entire network.*
# Conclusions and future work
We have derived identifiability conditions for a network characterized by nonlinear interactions through a static model. We showed that in a path graph it is necessary to measure all the nodes, except for the source, when the nonlinear functions have a static component. Then, by restricting the identifiability problem to a specific class of functions, we showed that the measurement of the sinks is necessary and sufficient to identify all the edges in paths and trees. Finally, by considering a smaller class of functions, we showed that the measurement of the sinks is necessary and sufficient for the identifiability of directed acyclic graphs. This simple model of nonlinear interactions allowed us to highlight fundamental differences with respect to the linear case.
For future work, it would be interesting to extend the results to the case of general digraphs with cycles, where the function $F_i^k$ in [\[eq:function_Fi\]](#eq:function_Fi){reference-type="eqref" reference="eq:function_Fi"} depends on an infinite number of inputs. Also, it would be important to consider dynamical models that include past inputs. In this case, the Volterra series seems to be the most adequate model since it only depends on past inputs, and many of our results could still hold.
## Proof of Lemma [\[lemma:periodic\]](#lemma:periodic){reference-type="ref" reference="lemma:periodic"} {#app:1}
Let us choose an arbitrary point $\hat x$. Since $f$ is periodic for $p\in[p_0-\epsilon,p_0+\epsilon]$, we have that for all $y\in[\hat x-\epsilon +p_0,\hat x+\epsilon +p_0]$: $$f(y)=f(\hat x),$$ which implies that the function $f$ is constant and its derivative $f'$ is zero in $[\hat x-\epsilon +p_0,\hat x+\epsilon +p_0]$. Since $f$ is analytic, its derivative $f'$ is also analytic and due to the Principle of isolated zeros for 1-dimensional real analytic functions [@krantz2002primer], the derivative $f'$ is zero for all $x$, which implies that $f$ is constant for all $x$.
## Proof of Lemma [Lemma 4](#lemma:identification_one_function){reference-type="ref" reference="lemma:identification_one_function"} {#app:2}
Let us assume that there exists a point $\hat y\in \mathbb R^m$ such that $g(\hat y)=a$ and $\tilde g(\hat y)=b$ with $a\neq b$. Then, we would have: $$f(x+a)=f(x+b) , \text{ for all } x\in\mathbb R,$$ which is equivalent to $$f(z)=f(z+b-a), \text{ for all } z\in\mathbb R,$$ implying that $f$ is periodic with period $b-a$. Since $g(0)=\tilde g(0)=0$, and $g$ and $\tilde{g}$ are continuous, the function $\tilde g-g$ is also continuous and all the values between 0 and $b-a$ belong to its range. Thus, $f$ should be periodic in the interval $[0,b-a]$. But by virtue of Lemma [\[lemma:periodic\]](#lemma:periodic){reference-type="ref" reference="lemma:periodic"}, the function $f$ should be constant. If $f$ is not constant, we have a contradiction which implies that $g=\tilde{g}$.
## Proof of Lemma [Lemma 5](#lemma:identification_sum_functions){reference-type="ref" reference="lemma:identification_sum_functions"} {#app:3}
Let us take the derivative with respect to only one variable $x_j$ where $j=1,\ldots,n$. Then, we have: $$f'_j(x_j+g_j(y))=f'_j(x_j+\tilde g_j(y)), \text{ for all } x_j\in\mathbb R,\text{ } y\in\mathbb R^m.$$ Since the function $f_j$ is analytic, its derivative $f'_j$ is also analytic, and by Lemma [Lemma 4](#lemma:identification_one_function){reference-type="ref" reference="lemma:identification_one_function"}, we can have two cases: either $f'_j$ is constant or $g_j=\tilde g_j$. If $f'_j$ is constant, then $f_j$ is linear or constant (i.e., $f_j=0$), which is a contradiction. Therefore $g_j=\tilde g_j$. Following the same procedure for the other variables $x_j$, we complete the proof.
[^1]: \*This work was supported by F.R.S.-FNRS via the *KORNET* project and via the Incentive Grant for Scientific Research (MIS) *Learning from Pairwise Comparisons*, and by the *RevealFlight* Concerted Research Action (ARC) of the Fédération Wallonie-Bruxelles.
[^2]: R. Vizuete and J. M. Hendrickx are with ICTEAM institute, UCLouvain, B-1348, Louvain-la-Neuve, Belgium. R. Vizuete is a FNRS Postdoctoral Researcher - CR. `[email protected]`, `[email protected]`.
[^3]: A source is a node with no incoming edges. A sink is a node with no outgoing edges.
[^4]: A path graph is a graph that can be drawn so that all the nodes and edges lie on a single straight line.
| arxiv_math | {
"id": "2309.06854",
"title": "Nonlinear network identifiability: The static case",
"authors": "Renato Vizuete and Julien M. Hendrickx",
"categories": "math.OC cs.SY eess.SY",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
In 1976, Lai constructed a nontrivial confidence sequence for the mean $\mu$ of a Gaussian distribution with unknown variance $\sigma$. Curiously, he employed both an improper (right Haar) mixture over $\sigma$ and an improper (flat) mixture over $\mu$. Here, we elaborate carefully on the details of his construction, which use generalized nonintegrable martingales and an extended Ville's inequality. While this does yield a sequential t-test, it does not yield an "e-process" (due to the nonintegrability of his martingale). In this paper, we develop two new e-processes and confidence sequences for the same setting: one is a test martingale in a reduced filtration, while the other is an e-process in the canonical data filtration. These are respectively obtained by swapping Lai's flat mixture for a Gaussian mixture, and swapping the right Haar mixture over $\sigma$ with the maximum likelihood estimate under the null, as done in universal inference. We also analyze the width of resulting confidence sequences, which have a curious dependence on the error probability $\alpha$. Numerical experiments are provided along the way to compare and contrast the various approaches.
author:
- Hongjian Wang
- Aaditya Ramdas
bibliography:
- ttest.bib
title: |
Anytime-valid t-tests and confidence sequences\
for Gaussian means with unknown variance
---
# Introduction
The classical location tests for Gaussian data fall into one of two categories: the Z-test, where the population variance is assumed to be known *a priori*, and a sample average is rescaled by the known standard deviation to create a standardized test statistic; or the t-test [@student1908probable], where the population variance is unknown, and a plug-in estimator of the variance is used in lieu of the variance, accounting for the heavier ("regularly varying") tail of the t-test statistic compared to the Gaussian Z-test statistic.
This paper concerns sequential analogs of t-tests. Sequential Z-tests have been widely established since [@wald1945sequential]. One-sided or "power-one" variants have even been generalized nonparametrically to *subGaussian* data [@robbins1970statistical; @howard2021time], and recently to any data in the square-integrable class [@wang2022catoni].
Sequential extensions of the t-test were studied by [@rushton1950sequential; @hoel1954property; @ghosh1960some; @sacks1965note], who were interested in calculating the *approximate* thresholds to control the sequential type 1 error. Inspired by the sequential Z-test work of Robbins, [@lai1976confidence] constructed a sequential t-test that controls a different type of error that will be of particular interest to us: nonasymptotic, conservative (i.e. non-approximate) time-uniform control of the type 1 error. We will define this formally later.
Lai and more recent authors [@grunwald2020safe; @perez2022statistics] situate the Gaussian t-test problem into the broader framework of *group-invariant* tests; indeed, the t-test null is invariant under rescaling of observations by a constant. The tests are mostly constructed using variants of the sequential likelihood ratio test (LRT) by [@wald1945sequential], utilizing the fact that the likelihood ratios are nonnegative martingales (see [2.5](#sec:lrm){reference-type="ref" reference="sec:lrm"}).
The main contributions of this paper are as follows. First, we apply the split-sample LRT in [@wasserman2020universal] to construct a t-test e-process for the point null $\mu = \mu_0$, as well as for the one-sided composite null $\mu \leqslant\mu_0$. Second, we fill in the missing details of @lai1976confidence's [-@lai1976confidence] sequential t-test, looking into the curious question why @lai1976confidence's approach yields a confidence sequence without an e-process via the theory of non-$L^1$ nonnegative supermartingales [@ensm]. Third, we introduce an easy fix to @lai1976confidence's aforesaid issue, giving a closed-form e-process under the point null $\mu = \mu_0$.
# Preliminaries {#sec:prelims}
## Notations
A universal probability space $(\Omega, \mathcal{A}, \mathbb P )$ is used when we consider the randomness of data. We use the indexed letters $X_1, X_2, \dots$ to denote an infinite stream of observations, taking values in $\mathbb R$. We denote by $\{ \mathcal{F}_n \}_{n \geqslant 0}$ the canonical filtration generated by the data, i.e. $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$. Our shorthand notations on the sample include the partial sum $S_n = \sum_{i=1}^n X_i$, the sample average $\widehat{\mu}_{n}= S_n/n$, the partial squared sum $V_n = \sum_{i=1}^n X_i^2$, the average square $\overline{X^2_{n}} = V_n/n$, the sample variance without Bessel's correction $v_{n} = V_n/n - S_n^2/n^2$, and the t-statistic $T_{n-1}=\sqrt{n-1} \frac{S_n - n\mu_0}{ \sqrt {n V_n - S_n^2} }$ where $\mu_0$ is the actual mean of $X_1, X_2,\dots$.
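For concreteness, these running quantities can be computed in a few lines; the sketch below (ours) uses simulated data and the choice $\mu_0 = 0$ purely for illustration.

``` python
# Running sample statistics S_n, V_n, mu_hat_n, mean-of-squares, v_n, T_{n-1}.
import numpy as np

def running_statistics(x, mu_0=0.0):
    x = np.asarray(x, dtype=float)
    n = np.arange(1, len(x) + 1)
    S = np.cumsum(x)                           # partial sum S_n
    V = np.cumsum(x**2)                        # partial squared sum V_n
    mu_hat = S / n                             # sample average
    mean_sq = V / n                            # average square
    v = V / n - S**2 / n**2                    # sample variance, no Bessel correction
    with np.errstate(divide="ignore", invalid="ignore"):
        T = np.sqrt(n - 1) * (S - n * mu_0) / np.sqrt(n * V - S**2)   # t-statistic
    return S, V, mu_hat, mean_sq, v, T

x = np.random.default_rng(1).normal(loc=0.3, scale=2.0, size=100)
S, V, mu_hat, mean_sq, v, T = running_statistics(x)
print(mu_hat[-1], v[-1], T[-1])
```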
Usually, we consider i.i.d. observations $X_1, X_2, \dots$; and we use italic $P$ etc. to denote distributions over $\mathbb R$, while calligraphic $\mathcal{P}$ etc. to denote classes of distributions. For sample-dependent events and random variables, we carefully adopt the notations $P(\,\cdot\,)$ as $$\mathbb P [\,\cdot\,] \quad \text{when } X_1, X_2, \dots \overset{\mathrm{iid}}{\sim}P,$$ and $\mathrm E _P(\,\cdot\,)$ as $$\mathbb E [\,\cdot\,] \quad \text{when } X_1, X_2, \dots \overset{\mathrm{iid}}{\sim}P.$$ The conditional expectation $\mathrm E _P( \, \cdot \, | \mathcal{F})$ given $\mathcal{F}\subseteq \mathcal{A}$ is defined similarly. Univariate normal distribution with mean $\mu$ and variance $\sigma^2$ will be referred to as $N_{\mu,\sigma^2}$ -- so "$N_{0,1}(X_1 > 0) = 0.5$" means "$\mathbb P [X_1 > 0] = 0.5$ when $X_1, X_2 , \dots \overset{\mathrm{iid}}{\sim}$ standard normal"; its density is denoted by $p_{\mu, \sigma^2}$. The class of all normal distributions on $\mathbb R$ (excluding the degenerate $\sigma = 0$ case) is denoted by $\mathcal{N}$. Classes of normal distributions with specified mean (or range of means) but any positive variance are denoted by, e.g., $\mathcal{N}_{\mu = \mu_0}$, $\mathcal{N}_{\mu \leqslant\mu_0}$; those with specified standardized mean by, e.g., $\mathcal{N}_{\mu/\sigma = \theta_0}$.
Occasionally, we allow the observations $X_1, X_2, \dots$ to be non-i.i.d., in which case we use blackboard bold italic letters like $\mathbbmsl P$ to denote distributions on $\mathbb R \times \mathbb R \times \dots$, i.e. the distributions of the entire stochastic processes. The notations $\mathbbmsl P(\, \cdot \, )$, $\mathrm E _{\mathbbmsl P} (\, \cdot \, )$, and $\mathrm E _{\mathbbmsl P} (\, \cdot \, | \mathcal{F})$ are used for the probabilities and expectations when $X_1, X_2,\dots \sim \mathbbmsl P$.
## Sequential Statistics {#sec:seq-stat}
Classical sequential analysis traces back to [@wald1947sequential], and is formulated as testing the null $H_0 : P \in \mathcal{P}_0$ against the alternative $H_1: P \in \mathcal{P}_1$, making a decision (to reject or not) each time a new data point $X_n \sim P$ is seen until a rejection is made. The type 1 error $P(H_0 \text{ is ever rejected})$ for $P \in \mathcal{P}_0$ is to be controlled under a prescribed $\alpha$. This leads to the definition of an *anytime-valid p-value* [@johari2015always], a stochastic process $\{ p_n \}_{n \geqslant 1}$ such that $P(p_\tau \leqslant p) \leqslant p$ for any stopping time $\tau$, any $p \in (0,1)$, and any $P \in \mathcal{P}_0$. Rejecting at the hitting time $\inf\{ n: p_n \leqslant\alpha \}$ clearly controls the type 1 error.
The duality between tests and interval estimates also manifests in the sequential setting. Let $\mathcal{P}$ be a family of distributions and let $\theta : \mathcal{P} \to \mathbb R$ be the parameter of interest. A $(1-\alpha)$-*confidence sequence* (CS) [@darling1967confidence] over $\mathcal{P}$ for $\theta$ is a sequence of confidence intervals $\{ \operatorname{CI}_n \}_{n \geqslant 1}$ such that, for any $P \in \mathcal{P}$, $P( \forall n \geqslant 1, \; \theta(P) \in \operatorname{CI}_n )\geqslant 1-\alpha$; or equivalently, $P( \theta(P) \in \operatorname{CI}_\tau ) \geqslant 1-\alpha$ for any stopping time $\tau$. If the null $\mathcal{P}_0 \subseteq \mathcal{P}$ is defined as a preimage $\theta^{-1}(\theta_0)$, a sequential test can be constructed by rejecting $H_0$ whenever $\theta_0 \notin \operatorname{CI}_n$, and vice versa.
The tightness of a confidence sequence is typically measured in terms of its rate of growth (length when $\alpha$ gets small), and rate of shrinkage (length when $n$ gets large). For example, it is known that CSs for the mean over all 1-subGaussian random variables, and over all random variables with variance $\leqslant 1$ can both attain the minimax rates of growth $\sqrt{\log(1/\alpha)}$ and shrinkage $\sqrt{\mathop{\mathrm{polylog}}(n)/n}$ [@howard2021time; @waudby2020estimating; @wang2022catoni].
## Test Processes
Sequential tests and confidence sequences are usually constructed via nonnegative supermartingales (NSMs) and more generally, e-processes [@howard2021time; @ramdas2022testing]. Let us, following [@ruf2022composite], define them and state their key properties in composite terms.
Consider a stochastic process $\{ M_n \}_{n \geqslant 0}$ where each $M_n$ is defined as a $[0,\infty]$-valued function of $X_1, \dots, X_n$. We say it is a nonnegative supermartingale (NSM) for $P$ on a filtration $\{ \mathcal{G}_n \}_{n\geqslant 0}$ if $\mathrm E _P(M_0) < \infty$, $M_n$ is $\mathcal{G}_n$-measurable and $\mathrm E _P( M_{n+1} | \mathcal{G}_{n}) \leqslant M_{n}$ for all $n\geqslant 0$, and a nonnegative martingale (NM) for $P$ if equality holds. Recently, [@ensm] defined a wider class of processes by dropping the integrability assumption $\mathrm E _P(M_0) < \infty$, as the conditional expectation $\mathrm E _P(M_{n+1} | \mathcal{G}_{n})$ can still be well-defined as long as $M_{n+1}$ is nonnegative. When $\mathrm E _P(M_{n+1} | \mathcal{G}_{n}) \leqslant M_{n}$ is satisfied regardless of the finiteness of $\mathrm E _P(M_0)$, we say $\{M_n\}$ is an extended nonnegative supermartingale (ENSM) for $P$. Finally, if $\{M_n\}$ is NSM (or NM, ENSM) for $P$, for all $P \in \mathcal{P}$, it is an NSM (or NM, ENSM) for $\mathcal{P}$. We define an e-process for $\mathcal{P}$ on a filtration $\{ \mathcal{G}_n \}$ as a $\{ \mathcal{G}_n \}$-adapted process $\{E_n\}$ that satisfies $E_0 = 1$ and is upper bounded, for each $P \in \mathcal{P}$, by a NSM $\{
M_n^P \}$ on $\{ \mathcal{G}_n \}$ for $P$. Note that the filtration $\{ \mathcal{G}_n \}$ is usually the canonical one $\{ \mathcal{F}_n \}$, but as we shall see later, other non-trivial choices will lead to interesting results.
A parametrized family of NSMs or ENSMs can lead to a new NSM or ENSM by *mixture*. If $\{
M_n(\theta) \}$ is an NSM for $\mathcal{P}$ for any $\theta \in \Theta$, so is the mixture $\{
\int M_n(\theta) \mu(\mathrm{d}\theta) \}$ for any finite measure $\mu$ over $\Theta$, under mild measurability assumptions; if each is an ENSM for $\mathcal{P}$, the mixture is still an ENSM for $\mathcal{P}$ as long as $\mu$ is $\sigma$-finite [@ensm Section 5.2]. Similar results hold straightforwardly for the mixtures of e-processes as well.
An important property of these processes is Ville's inequality [@ville1939etude] and the recently discovered *extended* Ville's inequality [@ensm Theorem 4.1]. These maximal inequalities bound the crossing probability of an NSM (and hence e-process) or ENSM over an unbounded time horizon.
**Lemma 1** (Ville's inequality). *Let $\{ M_n \}_{n \geqslant 0}$ be an NSM for $\mathcal{P}$. Then for all $P \in \mathcal{P}$, $\varepsilon > 0$, $$P( \exists n \geqslant 0 , \ M_n \geqslant\varepsilon ) \leqslant\varepsilon^{-1} \mathrm E _P(M_0).$$*
It thus follows that the inequality holds for e-processes for $\mathcal{P}$ as well, with right hand side replaced by just $\varepsilon^{-1}$.
**Lemma 2** (Extended Ville's inequality). *Let $\{ M_n \}_{n \geqslant 0}$ be an ENSM for $\mathcal{P}$. Then for all $P \in \mathcal{P}$, $\varepsilon > 0$, $$\label{eqn:evi}
P( \exists n \geqslant 0 , \ M_n \geqslant\varepsilon ) \leqslant\varepsilon^{-1} \mathrm E _P( \mathbbmss 1_{ \{ M_0 < \varepsilon \} } M_0) + P( M_0 \geqslant\varepsilon ).$$*
[Lemma 1](#lem:ville){reference-type="ref" reference="lem:ville"} and [Lemma 2](#lem:evi){reference-type="ref" reference="lem:evi"} suggest that NSMs, e-processes, and ENSMs for $\mathcal{P}_0$ can be seen as measurements of evidence against the null $H_0: P \in \mathcal{P}_0$. To see this, if we have at hand an NSM or e-process $\{ M_n \}$ for $\mathcal{P}_0$ such that $M_0 = 1$, we may reject the null $\mathcal{P}_0$ whenever the process $M_n$ exceeds $1/\alpha$. This, due to [Lemma 1](#lem:ville){reference-type="ref" reference="lem:ville"}, controls the sequential type 1 error $P( H_0 \text{ is ever rejected} )$ within $\alpha$ for any $P \in \mathcal{P}_0$. Similarly, if we have an ENSM $\{M_n\}$ for $\mathcal{P}_0$, we may set the right hand side of [\[eqn:evi\]](#eqn:evi){reference-type="eqref" reference="eqn:evi"} to $\alpha$ and solve for the corresponding $\varepsilon$, rejecting $H_0: P \in \mathcal{P}_0$ whenever $M_n$ exceeds $\varepsilon$. Desirably, an NSM, e-process or ENSM for $\mathcal{P}_0$ used as a powerful *test process* shrinks under the null $\mathcal{P}_0$ and grows under the alternative $\mathcal{P}_1$. A confidence sequence for a parameter $\theta$ follows in the same way if we have an NSM or an ENSM $\{ M^{\theta_0}_n \}$ for every $\mathcal{P}_{\theta_0} = \{ P \in \mathcal{P}: \theta(P) = \theta_0 \}$: by solving the inequality $M_n^{\theta_0} \leqslant 1/\alpha$ or $M_n^{\theta_0} \leqslant\varepsilon$ in $\theta_0$, that is. An illustrative comparison between NSMs and ENSMs in the Gaussian case with known variance for both constructing tests and confidence sequences was recently provided by @ensm [Section 5].
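As a quick sanity check of the rejection rule above, the following Monte Carlo sketch (ours; the null $N_{0,1}$, the alternative mean $0.5$, the horizon, and the number of repetitions are all illustrative choices) confirms empirically that a simple likelihood ratio martingale rarely crosses the $1/\alpha$ boundary under the null.

``` python
# Monte Carlo check of "reject when the test process exceeds 1/alpha":
# likelihood ratio martingale of Q = N(0.5, 1) against the null P = N(0, 1).
import numpy as np

rng = np.random.default_rng(2)
alpha, n_max, n_rep, delta = 0.05, 1000, 2000, 0.5
false_rejections = 0
for _ in range(n_rep):
    x = rng.normal(size=n_max)                    # data drawn from the null
    log_lr = np.cumsum(delta * x - delta**2 / 2)  # cumulative log of dQ/dP
    if np.any(log_lr >= np.log(1 / alpha)):       # process ever exceeds 1/alpha
        false_rejections += 1
print(false_rejections / n_rep, "<=", alpha)      # empirical rate below alpha
```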
Of course, all of the above is applicable to the non-i.i.d. case as well. NMs etc. for $\mathbbmsl P$ are defined in similar manners. NMs, NSMs, ENSMs, and e-processes for $P$ (or $\mathcal{P}$, $\mathbbmsl P$) are all referred to as test processes for $P$ (or $\mathcal{P}$, $\mathbbmsl P$).
## Sequential t-Test and t-Confidence Sequences
Different objectives are pursued for the sequential Gaussian t-test problem. Early authors have studied the problem of testing the null $\mathcal{N}_{\mu = 0}$ (or more generally $\mathcal{N}_{\mu/\sigma = \theta_0}$) against the alternative $\mathcal{N}_{\mu/\sigma = \theta_1}$ [@rushton1950sequential; @ghosh1960some; @sacks1965note], and its scale-invariant nature translates into the group-invariant setting as the problem of testing orbital nulls and alternatives [@perez2022statistics].
In this paper, we shall roughly follow the same routine as [@lai1976confidence] and be less concerned with the alternatives, chiefly focusing instead on the construction of (1) test processes for the null $\mathcal{N}_{\mu=0}$ (and a study of their behavior under both null and non-null underlying distributions); (2) confidence sequences for the population mean $\mu$ over all Gaussians $\mathcal{N}$. One may note that (1) is sufficient to produce tests for an arbitrary point location null $\mathcal{N}_{\mu = \mu_0}$ by shifting, and consequently (2) by [Lemma 1](#lem:ville){reference-type="ref" reference="lem:ville"} or [Lemma 2](#lem:evi){reference-type="ref" reference="lem:evi"} and inversion. Occasionally we are interested in test processes for the one-sided null $\mathcal{N}_{\mu \leqslant 0}$ as these give rise to one-sided tests and CSs. In fact, we shall later construct test processes for $\mathcal{N}_{\mu/\sigma = \theta_0}$, which is the strongest statistical quantity in the setting, as it leads to both CSs and tests for the mean $\mu$ and the normalized mean $\mu/\sigma$.
See [\[fig:cd-various-t-test-tools\]](#fig:cd-various-t-test-tools){reference-type="ref" reference="fig:cd-various-t-test-tools"} below.
## Likelihood Ratio Martingales {#sec:lrm}
It is well known that the likelihood ratio process is a nonnegative martingale.
**Lemma 3**. *Let $Q \ll P$ be probability distributions on $\mathbb R$. Then, the process $$L_n = \prod_{i=1}^n \frac{\mathrm{d}Q}{\mathrm{d}P}(X_i)$$ is a nonnegative martingale for $P$ on the canonical filtration $\{\mathcal{F}_n\}_{n \geqslant 0}$.*
We shall use two stronger forms of [Lemma 3](#lem:lrm){reference-type="ref" reference="lem:lrm"}, stated below and proved in [6](#sec:pf){reference-type="ref" reference="sec:pf"}. First, the distribution on the numerator, $Q$, can be varying and depend on previous observations.
**Lemma 4**. *Let $P$ be a probability distribution on $\mathbb R$. For each $n \geqslant 1$ let $Q_n: \Omega \times \mathcal{B}(
\mathbb R) \to \mathbb R$ be a Markov kernel such that: 1) $Q_n(\, \cdot \,, B)$ is $\mathcal{F}_{n-1}$-measurable for all $B \in \mathcal{B}(\mathbb R)$, and 2) $Q_n(\omega, \, \cdot \,) \ll P$ for all $\omega \in \Omega$. Then, the process $$L_n = \prod_{i=1}^n \frac{\mathrm{d}Q_i}{\mathrm{d}P}(X_i)$$ is a nonnegative martingale for $P$ on the canonical filtration $\{\mathcal{F}_n\}_{n \geqslant 0}$.*
Second, the data does *not* have to be i.i.d. [@lai1976confidence Section 2]. Recall that if a stochastic process $X_1,X_2,\dots$ has distribution $\mathbbmsl P$, the joint distribution of $(X_1,\dots, X_n)$ is the push-forward measure of $\mathbbmsl P$ under the coordinate map $(x_1,\dots) \mapsto (x_1, \dots, x_n)$, which we denote by $\mathbbmsl P_{(n)}$.
**Lemma 5**. *Let $\mathbbmsl P$ and $\mathbbmsl Q$ be probability distributions on $\mathbb R \times \mathbb R \times \dots$ such that $\mathbbmsl Q_{(n)} \ll \mathbbmsl P_{(n)}$ for all $n$. Then, the process $$L_n = \frac{\mathrm{d}\mathbbmsl Q_{(n)}}{\mathrm{d}\mathbbmsl P_{(n)}}(X_1, \dots, X_n)$$ is a nonnegative martingale for $\mathbbmsl P$ on the canonical filtration $\{\mathcal{F}_n\}_{n \geqslant 0}$.*
# t-Test e-Processes via Universal Inference
We shall demonstrate how the method of universal inference [@wasserman2020universal] leads to e-processes and confidence sequences for the t-test. Throughout this section, for $n \geqslant 0$, we denote by $\widetilde{\mu}_n$ and $\widetilde{\sigma}_n$ any point estimators for $\mu$ and $\sigma$ adapted to the canonical filtration $\mathcal{F}_n$ (i.e. based on $X_1,\dots, X_n$). For example, they can simply be the sample mean $\widehat{\mu}_{n}$ and sample standard deviation $\sqrt{v_n}$; we may also use the Bayesian approach by sequentially updating them as the posterior mean from, say, a normal-inverse-gamma prior on $(\mu, \sigma)$[^1]. In the Gaussian case, sequential universal inference [@wasserman2020universal Section 8] uses the following plug-in likelihood ratio:
**Corollary 6**. *For any $\mu$, $\sigma$ and their point estimators $\{ \widetilde{\mu}_n \}$, $\{ \widetilde{\sigma}_n \}$ adapted to the canonical filtration $\{\mathcal{F}_n\}_{n \geqslant 0}$ (such as the sample mean and standard deviation), the process $$\label{eqn:plugin-lr}
\ell_n^{\mu, \sigma^2} = \frac{\prod_{i=1}^n p_{\widetilde{\mu}_{i-1}, \widetilde{\sigma}^2_{i-1}}(X_i)}{\prod_{i=1}^n p_{\mu, \sigma^2}(X_i)} = \frac{\sigma^n}{\prod_{i=1}^n\widetilde{\sigma}_{i-1}} \exp \left\{ \sum_{i=1}^n \left( \frac{(X_i - \mu)^2}{2\sigma^2} - \frac{(X_i - \widetilde{\mu}_{i-1})^2}{2\widetilde{\sigma}^2_{i-1}} \right) \right\}$$ is a martingale for $N_{\mu,\sigma^2}$ on $\{\mathcal{F}_n\}_{n \geqslant 0}$.*
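A minimal sketch (ours) of the plug-in likelihood ratio above, using the running sample mean and standard deviation as $\widetilde{\mu}_{i-1}$ and $\widetilde{\sigma}_{i-1}$; the initial values $\widetilde{\mu}_0=0$ and $\widetilde{\sigma}_0=1$ are an arbitrary, but $\mathcal{F}_0$-measurable, choice.

``` python
# Plug-in likelihood ratio ell_n^{mu, sigma^2} with predictable estimators:
# the i-th factor only uses x[0..i-1] to form mu_tilde_{i-1}, sigma_tilde_{i-1}.
import numpy as np

def plugin_lr(x, mu, sigma, mu0_tilde=0.0, sigma0_tilde=1.0):
    x = np.asarray(x, dtype=float)
    mu_t, sig_t = mu0_tilde, sigma0_tilde
    log_ell = np.zeros(len(x))
    for i in range(len(x)):
        log_ell[i] = (np.log(sigma / sig_t)
                      + (x[i] - mu) ** 2 / (2 * sigma**2)
                      - (x[i] - mu_t) ** 2 / (2 * sig_t**2))
        past = x[: i + 1]                     # update estimates after using x[i]
        mu_t = past.mean()
        sd = past.std()
        sig_t = sd if sd > 0 else sigma0_tilde
    return np.exp(np.cumsum(log_ell))

x = np.random.default_rng(3).normal(loc=0.0, scale=1.5, size=200)
print(plugin_lr(x, mu=0.0, sigma=1.5)[-1])    # a nonnegative martingale with mean 1 under the truth
```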
It is not hard to see that [Corollary 6](#lem:plugin-lr){reference-type="ref" reference="lem:plugin-lr"} follows from [Lemma 4](#lem:lrm-general){reference-type="ref" reference="lem:lrm-general"}, so we omit its proof. The process $\{ \ell_n^{\mu,\sigma^2} \}$ itself can be used for the sequential Z-test. To pass from martingales for $N_{\mu,\sigma^2}$ *for each $\mu$ and $\sigma$* to test processes for the t-test nulls, $\mathcal{N}_{\mu = 0}$ and $\mathcal{N}_{\mu \leqslant 0}$, we simply take the infima of [\[eqn:plugin-lr\]](#eqn:plugin-lr){reference-type="eqref" reference="eqn:plugin-lr"} over $\{ \mu = 0, \sigma > 0 \}$ and $\{ \mu \leqslant 0, \sigma > 0 \}$, which leads to e-processes for these nulls. They both have a closed-form expression and are stated below as [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"} and [Theorem 9](#thm:ui-ttest-onesided){reference-type="ref" reference="thm:ui-ttest-onesided"}, both proved in [6](#sec:pf){reference-type="ref" reference="sec:pf"}.
**Theorem 7** (Universal inference t-test e-process). *For any point estimators $\{ \widetilde{\mu}_n \}$ and $\{ \widetilde{\sigma}_n \}$ adapted to the canonical filtration $\{\mathcal{F}_n\}_{n \geqslant 0}$, the process $$\label{eqn:e-proc-ui-point}
R_n
=\left( \overline{X^2_{n}} \right)^{n/2} \mathrm e^{n/2} \prod_{i=1}^n \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\}$$ is an e-process for $\mathcal{N}_{\mu = 0}$ on $\{ \mathcal{F}_n \}$. Consequently, define $$\label{eqn:tn}
W_n = \frac{1}{\alpha^{2/n} \mathrm e} \exp \left\{ \frac{\sum_{i=1}^n \log \widetilde{\sigma}_{i-1}^2 + \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 }{n} \right\}.$$ Then $\operatorname{CI}_n = \left[ \widehat{\mu}_{n} \pm \sqrt{ \widehat{\mu}_{n}^2 - \overline{X^2_{n}} + W_n } \right]$ forms a $(1-\alpha)$-CS for $\mu$ over $\mathcal{N}$.*
It is worth remarking that for the general location null $\mathcal{N}_{\mu = \mu_0}$, one must resist the temptation to replace all $X_i$ in [\[eqn:e-proc-ui-point\]](#eqn:e-proc-ui-point){reference-type="eqref" reference="eqn:e-proc-ui-point"} above by $X_i - \mu_0$, which, while it does produce an e-process for $\mathcal{N}_{\mu = \mu_0}$, loses power. The correct modification is to *only* shift $X_i$ in the $\overline{X^2_{n}}$ term, i.e. using the e-process for $\mathcal{N}_{\mu = \mu_0}$ $$R_n^{\mu_0}
=\left(\frac{\sum_{i=1}^n (X_i - \mu_0)^2 }{ n} \right)^{n/2} \mathrm e^{n/2} \prod_{i=1}^n \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\}.$$ This can be understood as shifting $X_i$ and also $\widetilde{\mu}_{i-1}$ by $\mu_0$. This is how we arrive at the confidence sequence $\operatorname{CI}_n = \left[ \widehat{\mu}_{n} \pm \sqrt{ \widehat{\mu}_{n}^2 - \overline{X^2_{n}} + W_n } \right]$ above.
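Putting the pieces together, below is a minimal sketch (ours, not the authors' reference implementation) of $R_n^{\mu_0}$ and of the confidence sequence of [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"}, again with the running sample mean and standard deviation as plug-in estimators and an arbitrary initialization.

``` python
# Universal inference t-test e-process R_n^{mu_0} and the induced (1-alpha)-CS.
import numpy as np

def ui_eprocess_and_cs(x, mu_0=0.0, alpha=0.05, sig0=1.0):
    x = np.asarray(x, dtype=float)
    N = len(x)
    R, lo, hi = np.zeros(N), np.full(N, np.nan), np.full(N, np.nan)
    mu_t, sig_t, log_num = 0.0, sig0, 0.0
    for i in range(N):
        n = i + 1
        # running plug-in log likelihood (the (2*pi)^{n/2} factors cancel)
        log_num += -np.log(sig_t) - 0.5 * ((x[i] - mu_t) / sig_t) ** 2
        mean_sq_shift = np.mean((x[:n] - mu_0) ** 2)      # (1/n) sum (X_i - mu_0)^2
        R[i] = np.exp(0.5 * n * np.log(mean_sq_shift) + n / 2 + log_num)
        # invert R_n^{mu} < 1/alpha in mu to obtain the interval of Theorem 7
        W = alpha ** (-2 / n) * np.exp(-1) * np.exp(-2 * log_num / n)
        mu_hat, mean_sq = x[:n].mean(), np.mean(x[:n] ** 2)
        rad_sq = mu_hat**2 - mean_sq + W
        if rad_sq >= 0:
            lo[i], hi[i] = mu_hat - np.sqrt(rad_sq), mu_hat + np.sqrt(rad_sq)
        mu_t = mu_hat                                     # predictable updates
        sd = x[:n].std()
        sig_t = sd if sd > 0 else sig0
    return R, lo, hi

x = np.random.default_rng(4).normal(loc=0.4, scale=2.0, size=500)
R, lo, hi = ui_eprocess_and_cs(x)
print(R[-1], (lo[-1], hi[-1]))   # typically a large e-value against mu = 0; interval around 0.4
```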
As a test process, the e-process $\{ R_n \}$ distinguishes the null $\mathcal{N}_{\mu = 0}$ and the alternative $\mathcal{N}_{\mu \neq 0}$ due to the following limit result.
**Proposition 8** (Asymptotic behavior of the universal inference t-test e-process). *Under any $P = N_{\mu,\sigma^2}$, suppose there is a $\gamma > 0$ such that $\{ \widetilde{\mu}_n \}$ converges to $\mu$ in $L^3$ with rate $\mathrm E _P (|\widetilde{\mu}_n - \mu|^3) \lesssim n^{-\gamma}$, $\{ \widetilde{\sigma}_n^{-2} \}$ converges to $\sigma^{-2}$ both in $L^2$ with rate $\mathrm E _P (\widetilde{\sigma}_n^{-2} - \sigma^{-2})^2 \lesssim n^{-\gamma}$ and almost surely, and has uniformly bounded 3^rd^ moment $\limsup_n \mathrm E _P (\widetilde{\sigma}_n^{-6}) < \infty$. Then, $$\lim_{n \to \infty} \frac{\log R_n}{n} = \frac{1}{2} \log(1 + \mu^2/\sigma^2) \quad \text{almost surely}.$$ Consequently, $\{ R_n \}$ diverges almost surely to $R_\infty = \infty$ exponentially fast under $\mathcal{N}_{\mu \neq 0}$.*
The assumptions on the point estimators are mild, as they are satisfied by the empirical mean and variance due to the moment properties of the inverse-$\chi^2$ distribution, as well as their smoothed or Bayesian extensions (posterior means under reasonable priors). The exponential growth of $\{ R_n \}$ under the alternative $\mathcal{N}_{\mu \neq 0}$ is in contrast to its restrained behavior under the null $\mathcal{N}_{\mu = 0}$, characterized by Ville's inequality for e-processes ([Lemma 1](#lem:ville){reference-type="ref" reference="lem:ville"}).
The $\frac{1}{2} \log(1 + \mu^2/\sigma^2)$ limit in [Proposition 8](#prop:div-ui-eproc){reference-type="ref" reference="prop:div-ui-eproc"}, we remark, is universal among test processes for t-tests and Z-tests alike. If one uses the plug-in likelihood ratio $\{ \ell_n^{0,\sigma^2} \}$ in [Corollary 6](#lem:plugin-lr){reference-type="ref" reference="lem:plugin-lr"} to conduct the Z-test for the null $N_{0,\sigma^2}$, then, under the actual distribution $N_{\mu,\sigma^2}$, it is not hard to see that the convergence $\frac{\log \ell_n^{0,\sigma^2}}{n} \to \frac{1}{2} \log(1 + \mu^2/\sigma^2)$ holds under similar assumptions of the point estimators. The mixture-based test processes for Z-test in @ensm [Propositions 5.6, 5.8, and 6.3] all have similar limits. We shall see in the rest of the paper many more occurrences of the same $\frac{1}{2} \log(1 + \mu^2/\sigma^2)$ limit.
Apart from the asymptotics of the test process, we can also analyze the confidence sequence it implies. Let us briefly study the asymptotics of the radius $\sqrt{ \widehat{\mu}_{n}^2 - \overline{X^2_{n}} + W_n } = \sqrt{ W_n - \left(\overline{X^2_{n}} - \widehat{\mu}_{n}^2\right) }$ of the CS in [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"}, as $\alpha \to 0$, $\sigma \to \infty$, and $n \to \infty$. Note that both $W_n$ and $\overline{X^2_{n}} - \widehat{\mu}_{n}^2$ converge to $\sigma^2$, as long as the estimators $\widetilde{\sigma}_i$ and $\widetilde{\mu}_i$ are consistent. From [\[eqn:tn\]](#eqn:tn){reference-type="eqref" reference="eqn:tn"}, the deviation of $W_n$ from $\sigma^2$ is approximately $\sigma^2(\alpha^{-2/n} \exp(1/\sqrt{n}) - 1)$, while by the chi-squared tail bound and the Chernoff bound, $$\mathbb P \left[ \left| \frac{n}{n-1}\left(\overline{X^2_{n}} - {\widehat{\mu}_{n}}^2 \right) - \sigma^2 \right|\geqslant\sigma^2 t \right] \lesssim \mathrm e^{-nt^2},$$ so the deviation of $\overline{X^2_{n}} - {\widehat{\mu}_{n}}^2$ from $\sigma^2$ is approximately $\sigma^2/n$. Hence the radius of the CS scales, as a function of $\alpha$, $\sigma$ and $n$, at the rate of $\sigma \alpha^{-1/n} n^{-1/2}$. We remark that the "growth rate" of the CS, i.e., its dependence on $\alpha \to 0$, is in the time-dependent form of $\alpha^{-1/n}$ which is worse than the typical known-variance rate $\sqrt{\log(1/\alpha)}$.
Let us now state the e-process for the one-sided null.
**Theorem 9** (Universal inference one-sided t-test e-process). *For any point estimators $\{ \widetilde{\mu}_n \}$ and $\{ \widetilde{\sigma}_n \}$ adapted to the canonical filtration $\{\mathcal{F}_n\}_{n \geqslant 0}$, the process $$R^-_n =
\left( \overline{X^2_{n}} - (\widehat{\mu}_{n}\wedge 0)^2 \right)^{n/2} \mathrm e^{n/2} \prod_{i=1}^n \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\}
\label{eqn:rn-minus}$$ is an e-process for $\mathcal{N}_{\mu \leqslant 0}$ on $\{ \mathcal{F}_n \}$.*
There is a clear similarity between the expression of the point null e-process [\[eqn:e-proc-ui-point\]](#eqn:e-proc-ui-point){reference-type="eqref" reference="eqn:e-proc-ui-point"}, and that of the one-sided [\[eqn:rn-minus\]](#eqn:rn-minus){reference-type="eqref" reference="eqn:rn-minus"}. The only difference is that the $\overline{X^2_{n}}$ term of the former is offset by an additional $- (\widehat{\mu}_{n}\wedge 0)^2$ term in [\[eqn:rn-minus\]](#eqn:rn-minus){reference-type="eqref" reference="eqn:rn-minus"}. When the data are predominantly negative, this term inhibits the exponential growth of the e-process. Therefore, while we shall witness exponential growth of [\[eqn:e-proc-ui-point\]](#eqn:e-proc-ui-point){reference-type="eqref" reference="eqn:e-proc-ui-point"} when the actual mean is negative (significant on the scale of the standard deviation), no such growth is likely to happen in [\[eqn:rn-minus\]](#eqn:rn-minus){reference-type="eqref" reference="eqn:rn-minus"}. We can formalize this by a limit result similar to [Proposition 8](#prop:div-ui-eproc){reference-type="ref" reference="prop:div-ui-eproc"}.
**Proposition 10** (Asymptotic behavior of the universal inference one-sided t-test e-process). *Under any $N_{\mu,\sigma^2}$, suppose the point estimators $\{ \widetilde{\mu}_n \}$ and $\{ \widetilde{\sigma}_{n}^{-2} \}$ satisfy the same assumptions as in [Proposition 8](#prop:div-ui-eproc){reference-type="ref" reference="prop:div-ui-eproc"}. Then, $$\lim_{n \to \infty} \frac{\log R_n^-}{n} = \frac{1}{2} \log(1 + (\mu \vee 0)^2/\sigma^2) \quad \text{almost surely}.$$ Consequently, $\{ R_n^- \}$ diverges almost surely to $R_\infty^- = \infty$ exponentially fast under $\mathcal{N}_{\mu > 0}$.*
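A sketch (ours) of the one-sided e-process is given below; it is identical to the point-null sketch above except that the average-square term is reduced by $(\widehat{\mu}_{n}\wedge 0)^2$, so it stays restrained when the truth lies in the one-sided null. The simulated negative mean is an illustrative choice.

``` python
# One-sided universal inference e-process R_n^- for the null mu <= 0.
import numpy as np

def ui_eprocess_one_sided(x, sig0=1.0):
    x = np.asarray(x, dtype=float)
    R_minus = np.zeros(len(x))
    mu_t, sig_t, log_num = 0.0, sig0, 0.0
    for i in range(len(x)):
        n = i + 1
        log_num += -np.log(sig_t) - 0.5 * ((x[i] - mu_t) / sig_t) ** 2
        mu_hat, mean_sq = x[:n].mean(), np.mean(x[:n] ** 2)
        base = mean_sq - min(mu_hat, 0.0) ** 2          # the only change vs. R_n
        R_minus[i] = np.exp(0.5 * n * np.log(base) + n / 2 + log_num)
        mu_t = mu_hat
        sd = x[:n].std()
        sig_t = sd if sd > 0 else sig0
    return R_minus

x = np.random.default_rng(5).normal(loc=-1.0, scale=2.0, size=500)   # mean < 0
print(ui_eprocess_one_sided(x)[-1])   # stays moderate, whereas R_n would explode here
```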
A slightly weaker similarity between the expressions of these e-processes and the plug-in martingale [\[eqn:plugin-lr\]](#eqn:plugin-lr){reference-type="eqref" reference="eqn:plugin-lr"} can also be observed under careful comparison, which we do in the "Test Process" row of [\[tab:big-comp-ui\]](#tab:big-comp-ui){reference-type="ref" reference="tab:big-comp-ui"}, where we compare all aspects of universal inference methods for the Z-test (including a one-sided test obtained by taking the infimum over $t \leqslant 0$ in [\[eqn:plugin-lr\]](#eqn:plugin-lr){reference-type="eqref" reference="eqn:plugin-lr"}) and the t-test.
# Sequential t-Tests via Scale Invariance
## @lai1976confidence's [-@lai1976confidence] Confidence Sequence
Let us first quote a theorem due to [@lai1976confidence] who presented it almost without proof.
**Theorem 11** (Lai's t-CS; Theorem 2 of [@lai1976confidence]). *Choose a starting time $m \geqslant 2$ and a constant $a > 0$. Recall that $v_n = \frac{1}{n} \sum_{i=1}^n (X_i - \widehat{\mu}_{n})^2$. Further, define $$\begin{aligned}
b &:= \frac{1}{m} \left(1 + \frac{a^2}{m-1} \right)^m,
\\
\xi_n &:= \sqrt{v_n [(bn)^{1/n} - 1]}.\end{aligned}$$ Then, the intervals $\operatorname{CI}_n = [ \widehat{\mu}_{n} \pm \xi_n ]$ satisfy, for any $\mu$ and $\sigma > 0$, $$N_{\mu,\sigma^2}\left( \exists n \geqslant m, \mu \notin \operatorname{CI}_n \right) \leqslant 2(1-F_{m-1}(a) + af_{m-1}(a)),$$ where $F_{m-1},f_{m-1}$ denote the CDF, PDF of t-distribution with $m-1$ degrees of freedom.*
If we want $\{ \operatorname{CI}_n \}$ to be a $(1-\alpha)$-CS over $\mathcal{N}$ for $\mu$, we need to solve $a$ from the equation $2(1-F_{m-1}(a) + af_{m-1}(a)) = \alpha$ beforehand. To see the relationship between $a$ and $\alpha$, note that when $a$ is large, $$\begin{gathered}
f_{m-1}(a) \asymp a^{-m},
\\
1-F_{m-1}(a) \asymp a^{-(m-1)}.\end{gathered}$$ Hence $$\alpha \asymp a^{-(m-1)} \implies a \asymp \alpha^{- 1/(m - 1)}.$$ The radius hence grows as (for fixed $m, n$ and $\alpha \to 0$) $$\xi_n \approx \sigma \sqrt{\left( \frac{1}{m} \left(1 + \frac{a^2}{m-1} \right)^m n \right)^{1/n} - 1 } \asymp \alpha^{-\frac{m}{n(m-1)}}.$$ In terms of its shrinkage rate (fixed $m, \alpha$ but $n \to \infty$), we have $$\begin{aligned}
\xi_n \approx \sigma \sqrt{ \frac{\log bn}{n} + \Tilde{\mathcal{O}}(n^{-2})}.\end{aligned}$$
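In practice, $a$ can be obtained by a one-dimensional root search, since the left hand side of $2(1-F_{m-1}(a) + af_{m-1}(a)) = \alpha$ is strictly decreasing in $a>0$, and the intervals of [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"} follow directly. The sketch below (ours) does exactly this; the choices $m=10$, $\alpha=0.05$, and the simulated data are illustrative.

``` python
# Lai's t-CS: solve for a given alpha and m, then compute the radius xi_n.
import numpy as np
from scipy import stats
from scipy.optimize import brentq

def lai_cs(x, m=10, alpha=0.05):
    x = np.asarray(x, dtype=float)
    g = lambda a: 2 * (1 - stats.t.cdf(a, df=m - 1)
                       + a * stats.t.pdf(a, df=m - 1)) - alpha
    a = brentq(g, 1e-8, 1e8)                       # unique root: g is decreasing in a
    b = (1 + a**2 / (m - 1)) ** m / m
    n = np.arange(1, len(x) + 1)
    S, V = np.cumsum(x), np.cumsum(x**2)
    v = V / n - (S / n) ** 2                       # sample variance, no Bessel correction
    xi = np.sqrt(v * ((b * n) ** (1 / n) - 1))
    xi[: m - 1] = np.nan                           # the CS is only valid from time m
    return S / n, xi                               # centers and radii

x = np.random.default_rng(6).normal(loc=1.0, scale=3.0, size=2000)
center, xi = lai_cs(x)
print(center[-1] - xi[-1], center[-1] + xi[-1])    # covers mu = 1 w.p. >= 1 - alpha
```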
The CS is only valid from some time $m\geqslant 2$, which we shall soon explain (and, in some sense, remedy). The proof of this theorem, as it turns out, hinges on the extended Ville's inequality ([Lemma 2](#lem:evi){reference-type="ref" reference="lem:evi"}) for nonintegrable nonnegative supermartingales. Equally interestingly, it mixes a parametrized family of martingales *under a coarser filtration*. We shall begin our reworking of Lai's CS from this concept, and eventually state and prove [Theorem 17](#thm:lai-ensm){reference-type="ref" reference="thm:lai-ensm"}, a more revealing version of [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"}; as well as [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"}, a variant of [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"} that involves neither an improper mixture nor nonintegrability.
## Scale Invariant Filtration {#sec:si-filt}
We say that a function $f:\mathbb R^n \to \mathbb R$ is scale invariant if it is measurable and for any $x_1, \dots ,x_n \in \mathbb R$ and $\lambda > 0$, $f(x_1, \dots, x_n) = f(\lambda x_1, \dots, \lambda x_n)$. Let us define the following sub-filtration of the canonical filtration $\{ \mathcal{F}_n \}$.
**Definition 12** (Scale invariant filtration). *For $n \geqslant 1$, let $$\label{eqn:si-filt}
\mathcal{F}_n^* = \sigma( f(X_1, \dots, X_n) : f \text{ \emph{is scale invariant}} ).$$ Then, the filtration $\{ \mathcal{F}^*_n \}_{n \geqslant 1}$ is called the *scale invariant filtration* of data $X_1, X_2, \dots$.*
[Definition 12](#def:si-filt){reference-type="ref" reference="def:si-filt"} states that $\mathcal{F}^*_n$ is the coarsest $\sigma$-algebra with respect to which all scale invariant functions of $X_1, \dots, X_n$ are measurable. For example, recall that we denote by $T_{n-1}$ the t-statistic of the data $\sqrt{n-1} \frac{S_n - n \mu_0}{ \sqrt {n V_n - S_n^2} }$, which is $\mathcal{F}^*_n$-measurable when $\mu_0 = 0$ and is a quantity frequently used to construct scale-invariant statistics later. To see that $\{ \mathcal{F}^*_n \}$ is indeed a filtration, let $f$ be any scale invariant function $\mathbb{R}^n \to \mathbb{R}$, and define $g:\mathbb{R}^{n+1}\to\mathbb{R}$ as $g(x_1, \dots, x_n, x_{n+1}) = f(x_1, \dots, x_n)$, which is also scale invariant. So $f(X_1,\dots, X_n) = g(X_1,\dots, X_n, X_{n+1})$ is $\mathcal{F}^*_{n+1}$-measurable. The arbitrariness of $f$ implies that $\mathcal{F}^*_{n} \subseteq \mathcal{F}^*_{n+1}$.
The filtration $\{ \mathcal{F}_n^* \}$ contains all the information up to time $n$ about the *relative* sizes of the observations. For example, $\{ \max\{ X_1, \dots, X_4 \} \geqslant 2X_3 \} \in \mathcal{F}_4^*$ while $\{ \max\{ X_1, \dots, X_4 \} \geqslant 2 \} \notin \mathcal{F}_4^*$. The reader may recall the exchangeable sub-$\sigma$-algebra in the theory of exchangeability and backwards martingales such as in @klenke2013probability [Chapter 12] for an analogy. Actually, when each of the observations $X_1, X_2,\dots$ is non-zero, it has the following clean expression.
**Proposition 13**. *If $X_1 \neq 0$, $$\label{eqn:si-filt-simple}
\mathcal{F}_n^* = \sigma\left( \frac{X_1}{|X_1|}, \frac{X_2}{|X_1|}, \dots, \frac{X_n}{|X_1|} \right).$$*
*Proof.* Let $\mathcal{F}_n^{**} = \sigma\left( {X_1}/{|X_1|}, {X_2}/{|X_1|}, \dots, {X_n}/{|X_1|} \right)$. Suppose $f:\mathbb{R}^n \to \mathbb{R}$ is scale invariant. Then, $f(X_1, \dots, X_n) = f(X_1/|X_1|, \dots, X_n/|X_1|)$, which is clearly $\mathcal{F}_n^{**}$-measurable, implying $\mathcal{F}_n^{*} \subseteq \mathcal{F}_n^{**}$. The inclusion $\mathcal{F}_n^{**} \subseteq \mathcal{F}_n^{*}$ is trivial: $x_1/|x_1|$, $x_2/|x_1|$, \..., $x_n/|x_1|$ are themselves scale invariant functions of $x_1, \dots, x_n$. ◻
In our scenario of application, the t-test under a Gaussian distribution, the data are not guaranteed to be non-zero but are almost surely non-zero. We do not distinguish [\[eqn:si-filt\]](#eqn:si-filt){reference-type="eqref" reference="eqn:si-filt"} and [\[eqn:si-filt-simple\]](#eqn:si-filt-simple){reference-type="eqref" reference="eqn:si-filt-simple"} as the definition of the scale invariant filtration because processes adapted to it are often derived by manipulations of $X_1/|X_1|, \dots, X_n/|X_1|$, which we call the *scale invariant reduction* of the observations, ignoring the $N_{\mu,\sigma^2}$-negligible event $\{X_1 = 0\}$. Indeed, [@lai1976confidence] defines such a filtration via [\[eqn:si-filt-simple\]](#eqn:si-filt-simple){reference-type="eqref" reference="eqn:si-filt-simple"} for the t-test case. However, we remark that one would need the more general [\[eqn:si-filt\]](#eqn:si-filt){reference-type="eqref" reference="eqn:si-filt"} in [Definition 12](#def:si-filt){reference-type="ref" reference="def:si-filt"} for the case when $X_1 \neq 0$ does not hold almost surely (e.g. when exploring a nonparametric extension of our set-up and methods).
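As a quick illustration (ours), the t-statistic with $\mu_0 = 0$ is indeed a function of the scale invariant reduction only: it is unchanged when the data are rescaled by any $\lambda>0$, and in particular when they are divided by $|X_1|$.

``` python
# Scale invariance of the t-statistic (mu_0 = 0): T_{n-1} is F*_n-measurable.
import numpy as np

def t_stat(x, mu_0=0.0):
    x = np.asarray(x, dtype=float)
    n, S, V = len(x), x.sum(), np.sum(x**2)
    return np.sqrt(n - 1) * (S - n * mu_0) / np.sqrt(n * V - S**2)

x = np.random.default_rng(7).normal(loc=2.0, scale=5.0, size=50)
reduced = x / abs(x[0])                              # scale invariant reduction
print(np.isclose(t_stat(x), t_stat(reduced)),        # True
      np.isclose(t_stat(x), t_stat(3.7 * x)))        # True for any lambda > 0
```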
## Scale Invariant Likelihood Ratios {#sec:scale-lr}
Let us suppose, temporarily in this subsection for the sake of generality, that $P_{\mu, \sigma^2}$ is a distribution on $\mathbb R$ parametrized by $\mu \in \mathbb R$ and $\sigma > 0$, and $\eta$ a reference measure on $\mathbb R$, such that $P_{\mu, \sigma^2} \ll \eta$ with density $$\frac{\mathrm{d}P_{\mu, \sigma^2}}{\mathrm{d}\eta}(x) = \sigma^{-1} g(\sigma^{-1}(x-\mu)).$$ Further, suppose $\eta$ does not charge the singleton $\{ 0 \}$, so $P_{\mu, \sigma^2} (X_i = 0)$ is always $0$. The following was observed by @lai1976confidence [Section 5].
**Lemma 14** (Density of the scale invariant reduction). *Let $V_n:\mathbb R^n \to \mathbb R^{n}$ be the function $$V_n(x_1, \dots, x_n) = (x_1/|x_1|, x_2/|x_1|, \dots, x_n/|x_1|).$$ (It does not matter how $V_n$ is defined when $x_1 = 0$.)*
*Let $Q_{\mu ,\sigma^2}^n$ be the push-forward measure of $P_{\mu, \sigma^2}^{\otimes n}$ under the map $V_n$. Let $\eta^{\langle n-1 \rangle}$ be the measure on $\{ \pm 1 \} \times \mathbb R^{n-1}$ that charges both $\{ 1 \} \times \mathbb R^{n-1}$ and $\{ - 1 \} \times \mathbb R^{n-1}$ with $\eta^{\otimes (n-1)}$. Then, $$\label{eqn:max-inv-density}
\frac{\mathrm{d}Q_{\mu ,\sigma^2}^n}{\mathrm{d}\eta^{\langle n-1 \rangle}} (x_1, \dots, x_n) = \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(x_i /\tau - \mu/\sigma) \right\} \frac{ \mathrm{d}\tau}{\tau} = \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{\tau\mu/\sigma, \tau^2}}{\mathrm{d}\eta}(x_i) \right\} \frac{ \mathrm{d}\tau}{\tau} .$$*
The lemma says, in probabilistic terms, that the "maximal invariant" reduction of the sample, $(X_1/|X_1|, X_2/|X_1|, \dots, X_n/|X_1|)$, has a density that relates to the density of the original sample in the form of [\[eqn:max-inv-density\]](#eqn:max-inv-density){reference-type="eqref" reference="eqn:max-inv-density"} --- which depends on $\mu$ and $\sigma^2$ only through $\mu/\sigma$.
Denoting the class of distributions having the same standardized mean by $\mathcal{P}= \{ P_{\mu',{\sigma'}^2} : \mu'/\sigma' = \mu/\sigma \}$, we can further write the right hand side of [\[eqn:max-inv-density\]](#eqn:max-inv-density){reference-type="eqref" reference="eqn:max-inv-density"} in the form of $$\int_{\mathcal{P}} \left\{\prod_{i=1}^n \frac{\mathrm{d}P}{\mathrm{d}\eta}(x_i) \right\} \mathrm{J}(\mathrm{d}P),$$ where $\mathrm{J}$ is the *Jeffreys prior* over $\mathcal{P}$ with density $\frac{1}{\sigma(P)}$. This improper prior is known in the Bayesian literature as an uninformative prior on the scale parameter. We have shown above that mixing the likelihood ratios (in some sense) over the Jeffreys prior $\int_{\tau > 0} (\dots) \frac{\mathrm{d}\tau}{\tau}$ is equivalent to taking the likelihood ratio of the scale invariant reduction $X_2/|X_1|, \dots, X_n/|X_1|$. We shall make further remarks on how our approaches are related to previous Bayesian work with Jeffreys prior in [4.6](#sec:jzs){reference-type="ref" reference="sec:jzs"}.
Now using the fact that the general non-i.i.d. likelihood ratios are martingales ([Lemma 5](#lem:lrm-joint){reference-type="ref" reference="lem:lrm-joint"}) on $Q_{\mu ,\sigma^2}^n$, we have:
**Lemma 15** (Scale invariant likelihood ratio). *For any $\theta$ and $\theta_0$, the process $$\label{eqn:si-lr}
h_n(\theta; \theta_0) = \frac{ \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{\tau\theta, \tau^2}}{\mathrm{d}\eta}(X_i) \right\} \frac{ \mathrm{d}\tau}{\tau} }{ \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{\tau\theta_0, \tau^2}}{\mathrm{d}\eta}(X_i) \right\} \frac{ \mathrm{d}\tau}{\tau} }$$ is an NM for $\{ P_{\mu_0, \sigma_0^2} : \mu_0/\sigma_0 = \theta_0 \}$ on the scale invariant filtration $\{ \mathcal{F}^*_n \}_{n \geqslant 1}$.*
## A Test Extended Martingale for t-Test {#sec:lai-mix}
It is now convenient for us to replace the general distribution $P_{\mu, \sigma^2}$ in the previous subsection with the Gaussian $N_{\mu,\sigma^2}$, and $\eta$ with the Lebesgue measure on $\mathbb R$. A direct calculation with the Gaussian density function in [Lemma 15](#lem:si-lr){reference-type="ref" reference="lem:si-lr"} gives the following.
**Corollary 16** (Scale invariant t-likelihood ratio). *Let $\theta$ be any real number. The process $\{ h_{\theta,n} \}_{n \geqslant 1}$, defined by $$\label{eqn:t-lr}
h_{\theta,n} = \frac{\mathrm e^{- n \theta^2 /2 }}{\Gamma(n/2)} \int_{y > 0} y^{n/2-1} \exp\left\{ -y + \theta S_n \sqrt{\frac{2y}{V_n}} \right\} \mathrm{d}y,$$ is an NM for $\mathcal{N}_{\mu = 0}$ on the scale invariant filtration $\{ \mathcal{F}^*_n \}_{n \geqslant 1}$.*
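The integral in [\[eqn:t-lr\]](#eqn:t-lr){reference-type="eqref" reference="eqn:t-lr"} is easy to evaluate by quadrature; the sketch below (ours, with illustrative sample size and parameter values) confirms that $h_{0,n}=1$ and that $h_{\theta,n}$ is large when $\theta$ matches the true standardized mean.

``` python
# Numerical evaluation of the scale invariant t-likelihood ratio h_{theta,n}.
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def h_theta(x, theta):
    x = np.asarray(x, dtype=float)
    n, S, V = len(x), x.sum(), np.sum(x**2)
    log_f = lambda y: ((n / 2 - 1) * np.log(y) - y + theta * S * np.sqrt(2 * y / V)
                       - gammaln(n / 2) - n * theta**2 / 2)
    val, _ = quad(lambda y: np.exp(log_f(y)), 0, np.inf)
    return val

x = np.random.default_rng(8).normal(loc=1.0, scale=1.0, size=30)   # mu / sigma = 1
print(h_theta(x, 0.0), h_theta(x, 1.0))   # approximately 1, and a large value
```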
The $\theta$ in the martingale [\[eqn:t-lr\]](#eqn:t-lr){reference-type="eqref" reference="eqn:t-lr"} parametrizes the standardized mean $\mu/\sigma$ of the *alternative*. That is, when the actual distribution is in the class $\mathcal{N}_{\mu/\sigma = \theta}$, the process $\{h_{\theta, n}\}$ grows the fastest. A flat integral over $\theta$ yields an ENSM that stands behind Lai's t-CS ([Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"}), which we restate in our language as follows.
**Theorem 17** (Scale invariant t-test extended martingale). *The process $\{ H_n \}_{n \geqslant 1}$, defined as $H_1 = \infty$ and $$H_n = \sqrt{\frac{2 \pi }{n}} \left( \frac{n V_n }{n V_n - S_n^2} \right)^{n/2}, \quad (\text{for }n \geqslant 2)$$ is an ENSM for $\mathcal{N}_{\mu = 0}$ on the scale invariant filtration $\{ \mathcal{F}^*_n \}_{n \geqslant 1}$. Lai's t-CS stated in [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"} follows from applying the extended Ville's inequality ([Lemma 2](#lem:evi){reference-type="ref" reference="lem:evi"}) to this ENSM.*
To see how $\{H_n\}$ behaves under $\mathcal{N}_{\mu = 0}$, recall that $T_{n-1}=\sqrt{n-1} \frac{S_n}{ \sqrt {n V_n - S_n^2} }$ has Student's t-distribution with $n-1$ degrees of freedom. $H_n$ can thus be re-expressed as $$\label{eqn:lai-ensm-T-expr}
H_n = \sqrt{\frac{2 \pi }{n}} \left( 1 + \frac{T_{n-1}^2}{n-1} \right)^{n/2}.$$ $\{ H_n \}$ works favorably as a test process for the null $\mathcal{N}_{\mu = 0}$ and the alternative $\mathcal{N}_{\mu \neq 0}$ due to the following limit result.
**Proposition 18** (Asymptotic behavior of the scale invariant t-test extended martingale). *Under any $N_{\mu,\sigma^2}$, $$\lim_{n \to \infty} \frac{\log H_n}{n} = \frac{1}{2}\log( 1 + \mu^2/\sigma^2 ) \quad \text{almost surely}.$$ Consequently, $\{ H_n \}$ diverges almost surely to $H_\infty = \infty$ exponentially fast under $\mathcal{N}_{\mu \neq 0}$. Furthermore, $\{ H_n \}$ converges almost surely to $H_\infty = 0$ under $\mathcal{N}_{\mu = 0}$.*
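For concreteness, the following minimal numerical sketch (ours, not part of the original development; NumPy only, with illustrative parameter values) evaluates $\log H_n$ on simulated data and displays the dichotomy of [Proposition 18](#prop:conv-t-ensm){reference-type="ref" reference="prop:conv-t-ensm"}: exponential growth when $\mu/\sigma \neq 0$, decay towards $0$ when $\mu = 0$.

```python
import numpy as np

def log_H(x):
    """log of the ENSM H_n of Theorem 17, computed from raw observations.

    Uses H_n = sqrt(2*pi/n) * (n*V_n / (n*V_n - S_n^2))^(n/2), working on the
    log scale to avoid overflow when the process grows exponentially.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n < 2:
        return np.inf                        # H_1 is defined to be +infinity
    S, V = x.sum(), (x ** 2).sum()
    return 0.5 * np.log(2 * np.pi / n) + (n / 2) * np.log(n * V / (n * V - S ** 2))

rng = np.random.default_rng(1)
null_data = rng.normal(0.0, 2.0, size=2000)   # mu/sigma = 0:   H_n -> 0 a.s. (Proposition 18)
alt_data = rng.normal(1.0, 2.0, size=2000)    # mu/sigma = 0.5: log H_n grows linearly in n
print(log_H(null_data), log_H(alt_data))
```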
The reader may compare the ENSM $\{ H_n \}$ with the ENSM for the Z-test case by @ensm [Proposition 5.7], obtained via a flat mixture over the standard Gaussian likelihood ratio. Both are free of any parameter. Under the null, both, as extended *martingales*, start from $\infty$ and shrink to 0 almost surely. Under the alternative, both start from $\infty$ and diverge back to $\infty$. Both can be seen as frequentist embodiments of the Bayesian idea of uninformative, improper prior.
We remark that the reduction of filtration from $\{ \mathcal{F}_n \}$ to $\{ \mathcal{F}^*_n \}$ does limit the stopping rules that are safe when working with the test processes. Stopping on a scale variant event would violate the safety of $\{
H_n \}$. However, the effect of filtration reduction seems to disappear after applying extended Ville's inequality to obtain a CS. This is because Ville's inequality or extended Ville's inequality, at a fundamental level, makes a claim only about the first exit time from $[0,\varepsilon]$ of the process (NSM or ENSM) --- a stopping time *on the process* itself, one adapted to the *canonical filtration of the process* which is even coarser than $\{ \mathcal{F}^*_n \}$.
## Classical Test Martingales for t-Test {#sec:N-mix}
In some sense, a classical, integrable test martingale issued at 1 is preferable. Besides admitting a simple, universally valid rejection rule "reject when the test process exceeds $1/\alpha$\", the use of the classical Ville's inequality often leads to closed-form CSs, as opposed to the one in [Theorem 17](#thm:lai-ensm){reference-type="ref" reference="thm:lai-ensm"} that involves root finding. Further, these classical NMs often come with "tunable hyperparameters\" arising from the mixture distributions. We replace the flat mixture on $\theta$ that leads to [Theorem 17](#thm:lai-ensm){reference-type="ref" reference="thm:lai-ensm"} by a Gaussian one, obtaining the following classical test martingales.
**Theorem 19** (Scale invariant t-test martingales). *For any $c > 0$, the process $\{ G_n^{(c)} \}_{n \geqslant 1}$ defined by $$\label{eqn:gaussian-mix-eproc}
G_n^{(c)} = \sqrt{\frac{c^2}{n+c^2}} \left( \frac{(n+c^2) V_n }{(n+c^2) V_n - S_n^2} \right)^{n/2}$$ is an NM for $\mathcal{N}_{\mu = 0}$ on the scale invariant filtration $\{ \mathcal{F}^*_n \}_{n \geqslant 1}$. Consequently, let $$\label{eqn:gaussian-mix-radius}
\operatorname{radius}_n = \sqrt{\frac{ (n+c^2)\left(1- \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n}\right) }{ \left\{ \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n} (n+c^2) - c^2 \right\} \vee 0 } \left(\overline{X^2_{n}} - \widehat{\mu}_{n}^2 \right)},$$ the intervals $$\label{eqn:gaussian-mix-cs}
\left[ \widehat{\mu}_{n} \pm \operatorname{radius}_n \right]$$ form a $(1-\alpha)$-CS for $\mu$ over $\mathcal{N}$. (When the denominator in [\[eqn:gaussian-mix-radius\]](#eqn:gaussian-mix-radius){reference-type="eqref" reference="eqn:gaussian-mix-radius"} takes 0, $\operatorname{radius}_n = \infty$, the CI is the entire $\mathbb R$ at time $n$.)*
When $n, c$ are fixed, $\operatorname{radius}_n$ grows as a function of $1/\alpha$ at the rate of $\alpha^{-1/n}$; when $\alpha, c$ are fixed, $\operatorname{radius}_n$ shrinks as a function of $n$ at the rate of $$\sqrt{ 1- \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n} } \approx \sqrt{\frac{\log( \alpha^2 c^2 /n )}{n}}.$$
A numerical caveat of this theorem: while the NM [\[eqn:gaussian-mix-eproc\]](#eqn:gaussian-mix-eproc){reference-type="eqref" reference="eqn:gaussian-mix-eproc"} works without any limitation on $n$ or $c$, the CS [\[eqn:gaussian-mix-cs\]](#eqn:gaussian-mix-cs){reference-type="eqref" reference="eqn:gaussian-mix-cs"} is non-trivial only when the denominator $\left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n} (n+c^2) - c^2$ is positive; for a fixed $\alpha$ and a fixed $c$, this means the CS is non-trivial on $n \geqslant n_0$ for some $n_0$, instead of all $n \geqslant 1$. For example, when $c = 0.01$ and $\alpha = 0.05$, the range of [\[eqn:gaussian-mix-cs\]](#eqn:gaussian-mix-cs){reference-type="eqref" reference="eqn:gaussian-mix-cs"} is $n \geqslant 3$. Thus, the starting time $m$ in [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"} seems to be avoided by switching from the extended Ville's inequality to the classical Ville's inequality, but it remains in another form.
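Since the sign of this denominator depends on $(\alpha, c, n)$ in a slightly opaque way, it is convenient to check it numerically for one's own parameter choices. A minimal sketch (ours; the parameter values below are only illustrative):

```python
import numpy as np

def denominator(n, alpha, c2):
    # D(n) = (alpha^2 c^2 / (n + c^2))^(1/n) * (n + c^2) - c^2;
    # the CS of Theorem 19 is informative at time n iff D(n) > 0.
    return (alpha ** 2 * c2 / (n + c2)) ** (1.0 / n) * (n + c2) - c2

def first_informative_n(alpha, c2, n_max=100_000):
    # smallest n at which the confidence interval is not all of R
    for n in range(1, n_max + 1):
        if denominator(n, alpha, c2) > 0:
            return n
    return None

for c2 in [1e-4, 1e-2, 1.0, 100.0]:
    print(c2, first_informative_n(alpha=0.05, c2=c2))
```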
One may express $G_n^{(c)}$ in terms of the t-statistic under $\mathcal{N}_{\mu = 0}$ as well, $$\label{eqn:lai-nm-tstat}
G_n^{(c)} = \sqrt{\frac{c^2}{n+c^2}} \left( 1 + \frac{ n}{ \frac{(n+c^2)(n-1) }{T_{n-1}^2 } + {c^2} } \right)^{n/2}$$ which again leads to the following limit result that shows their merits as test process candidates.
**Proposition 20** (Asymptotic behavior of the scale invariant t-test martingales). *Under any $N_{\mu,\sigma^2}$, $$\lim_{n \to \infty} \frac{\log G_n^{(c)} }{n} = \frac{1}{2}\log( 1 + \mu^2/\sigma^2 ) \quad \text{almost surely}.$$ Consequently, $\{ G_n^{(c)} \}$ diverges almost surely to $G^{(c)}_\infty = \infty$ exponentially fast under $\mathcal{N}_{\mu \neq 0}$. Furthermore, $\{ G_n^{(c)} \}$ converges almost surely to $G^{(c)}_\infty = 0$ under $\mathcal{N}_{\mu = 0}$.*
The NMs $\{ G_n^{(c)} \}$ thus have the same asymptotic properties as the ENSM $\{ H_n \}$, but the free parameter $c$, which is absent for the ENSM, does introduce a difference that emerges only non-asymptotically. To wit, if one fixes $n$ and the data-dependent quantity $T^2_{n-1}$, the value of $G_n^{(c)}$ approaches 0 as $c \to 0$ and 1 as $c \to \infty$, so the power vanishes if $c$ is chosen too large or too small.
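This non-asymptotic effect of $c$ is easy to see numerically from the t-statistic expression [\[eqn:lai-nm-tstat\]](#eqn:lai-nm-tstat){reference-type="eqref" reference="eqn:lai-nm-tstat"}: fixing $n$ and $T_{n-1}^2$ and sweeping $c$ over several orders of magnitude shows $G_n^{(c)}$ degrading at both extremes. A minimal sketch (ours; the fixed values of $n$ and $T_{n-1}^2$ are arbitrary illustrations):

```python
import numpy as np

def G_from_tstat(c, n, t2):
    # G_n^{(c)} written through the t-statistic, as in eqn. (lai-nm-tstat)
    c2 = c ** 2
    return np.sqrt(c2 / (n + c2)) * (1 + n / ((n + c2) * (n - 1) / t2 + c2)) ** (n / 2)

for c in [1e-3, 1e-2, 1e-1, 1.0, 10.0, 1e3]:
    print(f"c = {c:10.3f}   G = {G_from_tstat(c, n=30, t2=9.0):.4f}")
```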
The reader may compare [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"} with [Theorem 17](#thm:lai-ensm){reference-type="ref" reference="thm:lai-ensm"}, and compare this comparison with the comparison between the Gaussian mixed NSM [@ensm Proposition 5.6] and flat mixed ENSM [@ensm Proposition 5.7] in the Z-test case. Multiple similarities manifest themselves. A full comparison shall be presented next in [\[tab:big-comp\]](#tab:big-comp){reference-type="ref" reference="tab:big-comp"}. It is unclear why [@lai1976confidence] skipped the more universally accepted method of a proper Gaussian mixture and used an improper flat mixture instead, a choice that, in hindsight, seems well ahead of its time.
## Remark: Bayesian t-Test with the JZS Prior, and Cauchy Mixture {#sec:jzs}
In a highly influential paper, [@rouder2009bayesian] provide a Bayesian framework for the t-test that makes extensive use of *Bayes factors*. To explain this approach, we consider as we did in [4.3](#sec:scale-lr){reference-type="ref" reference="sec:scale-lr"} a general location-scale family $P_{\mu, \sigma^2}$ dominated by a reference measure $\eta$. The Bayes factor for the null $\mathcal{P}_{\mu = 0} = \{ P_{0,\sigma^2} : \sigma>0 \}$ and the alternative $\mathcal{P}_{\mu \neq 0}$ is defined as [@rouder2009bayesian p.229] $$B_n = \frac{M^{0}_n}{M^{1}_n} = \frac{\int \left\{ \prod_{i=1}^n \frac{\mathrm{d}P_{0, \tau^2}}{\mathrm{d}\eta} (X_i) \right\} \pi_0(\mathrm{d}\tau^2) }{\int \left\{ \prod_{i=1}^n \frac{\mathrm{d}P_{m, \tau^2}}{\mathrm{d}\eta} (X_i) \right\} \pi_1(\mathrm{d}m, \mathrm{d}\tau^2) },$$ where $\pi_0$ and $\pi_1$ are priors on $\mathbb R^+$ and $\mathbb R \times \mathbb R^+$, that can be chosen freely by the statistician. @rouder2009bayesian [p.231], regarding themselves as "objective Bayesians\", recommend the following choice: $$\pi_0(\mathrm{d}\tau^2) = \tau^{-2} \mathrm{d}\tau^2 = 2\tau^{-1}\mathrm{d}\tau,\quad \text{and} \quad\pi_1(\mathrm{d}m, \mathrm{d}\tau^2) = C_{0, 1}(\mathrm{d}(m/\tau)) \cdot \pi_0 (\mathrm{d}\tau^2),$$ where $C_{0, 1}$ is the standard Cauchy distribution. This is dubbed the "JZS prior\", an acronym for the Jeffreys prior $\pi_0$, and the Cauchy prior on $\mu/\sigma$ due to [@zellner1980posterior]. We can immediately write $B_n$ as $$B_n^{\text{JZS}} = \frac{ \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{0, \tau^2}}{\mathrm{d}\eta}(X_i) \right\} \frac{ \mathrm{d}\tau}{\tau} }{ \int_{\theta} \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{\tau\theta, \tau^2}}{\mathrm{d}\eta}(X_i) \right\} \frac{ \mathrm{d}\tau}{\tau} C_{0, 1}(\mathrm{d}\theta) }.$$
Compare this to the mixture of the scale-invariant likelihood ratio [\[eqn:si-lr\]](#eqn:si-lr){reference-type="eqref" reference="eqn:si-lr"} (letting $\theta_0 = 0$) under a prior $\varpi$ on the alternative $\theta$, $$M_n^\varpi = \int h_n(\theta; 0) \varpi(\mathrm{d}\theta) = \frac{ \int_{\theta} \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{\tau\theta, \tau^2}}{\mathrm{d}\eta}(X_i) \right\} \frac{ \mathrm{d}\tau}{\tau} \varpi(\mathrm{d}\theta) }{ \int_{\tau > 0} \left\{\prod_{i=1}^n \frac{\mathrm{d}P_{0, \tau^2}}{\mathrm{d}\eta}(X_i) \right\} \frac{ \mathrm{d}\tau}{\tau} }.$$ Clearly, $B_n^{\text{JZS}}$ equals $1/ M_n^\varpi$ when the mixture measure $\varpi$ is taken to be $C_{0,1}$. We make a few remarks regarding this comparison. First, [@rouder2009bayesian] did not mention the sequential benefits of their approach: $\{ 1/B_n^{\text{JZS}} \}$ is an NM for $\mathcal{P}_{\mu = 0}$ by virtue of [Lemma 15](#lem:si-lr){reference-type="ref" reference="lem:si-lr"}, or equivalently, $\{ B_n^{\text{JZS}} \}$ is an anytime-valid p-value, which we briefly defined in [2.2](#sec:seq-stat){reference-type="ref" reference="sec:seq-stat"}. Second, while choosing $\pi_0$ to be the Jeffreys prior seems necessary to attain a test process according to our [4.3](#sec:scale-lr){reference-type="ref" reference="sec:scale-lr"}, different (perhaps objectivistic) methodological choices lead to different priors on $\theta = \mu/\sigma$. @lai1976confidence's choice in [4.4](#sec:lai-mix){reference-type="ref" reference="sec:lai-mix"} is a flat $\varpi$, while in [4.5](#sec:N-mix){reference-type="ref" reference="sec:N-mix"} we choose $\varpi = N_{0,c^{-2}}$, both leading to closed-form expressions. @rouder2009bayesian's Cauchy prior, in turn, arises from a hyper-prior on the precision $c^{2}$ of this Gaussian prior $\theta \sim N_{0,c^{-2}}$, namely $c^2 \sim \chi^2_1$ as proposed by [@zellner1980posterior], but this leads to a $B_n^{\text{JZS}}$ that lacks a closed form.
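Although $B_n^{\text{JZS}}$ lacks a closed form, it is easy to evaluate numerically in the Gaussian case, because the standard Cauchy is a scale mixture of Gaussians: $\theta \mid c \sim N_{0, c^{-2}}$ with $c \sim |N_{0,1}|$ (equivalently $c^2 \sim \chi^2_1$) gives $\theta \sim C_{0,1}$, so one may mix the closed-form $G_n^{(c)}$ of [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"} over this hyper-prior. The following sketch is ours (not from [@rouder2009bayesian]) and uses SciPy for the one-dimensional quadrature; for large $n$ one would instead work on the log scale.

```python
import numpy as np
from scipy.integrate import quad

def G(c2, n, S, V):
    # Gaussian-mixture test martingale G_n^{(c)} of Theorem 19, as a function of c^2
    return np.sqrt(c2 / (n + c2)) * ((n + c2) * V / ((n + c2) * V - S ** 2)) ** (n / 2)

def jzs_martingale(x):
    """1 / B_n^{JZS}: the Cauchy mixture of scale-invariant likelihood ratios."""
    x = np.asarray(x, dtype=float)
    n, S, V = len(x), x.sum(), (x ** 2).sum()
    half_normal = lambda c: np.sqrt(2 / np.pi) * np.exp(-c ** 2 / 2)   # density of |N(0,1)|
    val, _ = quad(lambda c: G(c ** 2, n, S, V) * half_normal(c), 0, np.inf)
    return val

rng = np.random.default_rng(2)
print(jzs_martingale(rng.normal(0.0, 1.0, size=50)))   # null: typically small
print(jzs_martingale(rng.normal(0.7, 1.0, size=50)))   # alternative: typically large
```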
# Comparison of Results
## Theoretical Comparison
We have presented three t-confidence sequences (i.e., confidence sequences for $\mu$ over $\mathcal{N}$) so far, based on universal inference and scale invariance, and we summarize them in Table [1](#tab:t-cs){reference-type="ref" reference="tab:t-cs"}. While all three CSs have similar shrinkage rates in $n$, the original CS by [@lai1976confidence] has a worse growth rate than the other two, which is subject to a trade-off with the starting time $m$.
| Result | [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"} | [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"} [@lai1976confidence] | [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"} |
|---|---|---|---|
| Method | Universal inference | Scale invariant likelihood mixture | Scale invariant likelihood mixture |
| Mixture | N/A | Flat | Gaussian |
| Free parameters | Point estimators $\tilde{\mu}_i$ and $\tilde{\sigma}_i^2$ | Starting time $m \geqslant 2$ | Prior precision $c^2$ |
| Rate of growth | $\alpha^{-1/n}$ | $\alpha^{-m/n(m-1)}$ | $\alpha^{-1/n}$ |
| Rate of shrinkage | $n^{-1/2} \mathop{\mathrm{polylog}}(n)$ | $n^{-1/2} \mathop{\mathrm{polylog}}(n)$ | $n^{-1/2} \mathop{\mathrm{polylog}}(n)$ |

: Comparison of t-confidence sequences in this paper.
We now zoom out and cross-compare sequential t-tests with sequential Z-tests. As we mentioned earlier, both universal inference and likelihood mixture can be used on Z-test and t-test to construct e-processes or extended e-processes, and confidence sequences. These are summarized in [\[tab:big-comp-ui,tab:big-comp\]](#tab:big-comp-ui,tab:big-comp){reference-type="ref" reference="tab:big-comp-ui,tab:big-comp"} respectively.
| Result | [Corollary 6](#lem:plugin-lr){reference-type="ref" reference="lem:plugin-lr"} | (Unnumbered) | [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"} | [Theorem 9](#thm:ui-ttest-onesided){reference-type="ref" reference="thm:ui-ttest-onesided"} |
|---|---|---|---|---|
| Null | $N_{0,\sigma^2}$ | $\mathcal{N}_{\mu \leqslant 0, \sigma^2 = \sigma^2}$ | $\mathcal{N}_{\mu = 0}$ | $\mathcal{N}_{\mu \leqslant 0}$ |
| Alternative | $\mathcal{N}_{\mu \neq 0, \sigma^2 = \sigma^2}$ | $\mathcal{N}_{\mu > 0, \sigma^2 = \sigma^2}$ | $\mathcal{N}_{\mu \neq 0}$ | $\mathcal{N}_{\mu > 0}$ |
| Result | WR23 Prop. 5.6 | WR23 Prop. 5.7 | WR23 Prop. 6.3 | [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"} | [Theorem 17](#thm:lai-ensm){reference-type="ref" reference="thm:lai-ensm"} |
|---|---|---|---|---|---|
| Null | | | $\mathcal{N}_{\mu \leqslant 0, \sigma^2 = \sigma^2}$ | | |
| Alternative | | | $\mathcal{N}_{\mu > 0, \sigma^2 = \sigma^2}$ | | |
| Mixture | $\mu \sim N_{0,c^{-2}}$ | $\mu \sim F$ | $\mu \sim F_{>0}$ | $\theta \sim N_{0,c^{-2}}$ | $\theta \sim F$ |
| Behavior under null | $1\to0$ a.s. | | | $1\to0$ a.s. | $\infty \to 0$ a.s. |
| Behavior under alternative | $1\to \infty$ a.s. | | | $1\to \infty$ a.s. | $\infty \to \infty$ a.s. |
## Simulations
We demonstrate the three confidence sequences in this paper with some experiments. For the universal inference CS, we calculate the point estimators by putting a prior where we "imagine\" having seen pre-observations $-1$ and $1$, defining $\tilde \mu_i$ and $\tilde \sigma_i^2$ as the empirical means and variances of the data augmented with these pre-observations. For the Gaussian mixture of scale invariant likelihood ratios, we take $c^2$ to be 1 and 100.
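To make this setup concrete, here is a minimal NumPy sketch (ours, not the code used for the figures; we assume the pre-observation device means that $\tilde\mu_i$ and $\tilde\sigma_i^2$ are the running empirical mean and biased empirical variance of the sample augmented with $-1$ and $1$) that computes, along a single stream of $N_{0,1}$ observations, the widths of the universal inference CS of [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"} and of the Gaussian-mixture CS of [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"}.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, c2 = 0.05, 1.0
x = rng.standard_normal(500)              # i.i.d. N(0,1) observations
n_arr = np.arange(1, len(x) + 1)

S = np.cumsum(x)                          # S_n = sum of X_i
V = np.cumsum(x ** 2)                     # V_n = sum of X_i^2
mu_hat = S / n_arr                        # running sample mean
x2_bar = V / n_arr                        # running mean of squares

# Gaussian-mixture CS of Theorem 19 (radius formula); inf/nan at very small n
r = (alpha ** 2 * c2 / (n_arr + c2)) ** (1.0 / n_arr)
num = (n_arr + c2) * (1.0 - r)
den = np.maximum(r * (n_arr + c2) - c2, 0.0)
with np.errstate(divide="ignore", invalid="ignore"):
    gauss_radius = np.sqrt(num / den * (x2_bar - mu_hat ** 2))

# Universal inference CS with pre-observations {-1, +1}
aug_mean, aug_var, aug_count = 0.0, 1.0, 2     # running stats of {-1, 1} plus data
log_terms = np.empty(len(x))
for i, xi in enumerate(x):
    # plug-in estimates use only data strictly before X_i
    log_terms[i] = np.log(aug_var) + (xi - aug_mean) ** 2 / aug_var
    new_mean = (aug_mean * aug_count + xi) / (aug_count + 1)
    aug_var = (aug_count * (aug_var + aug_mean ** 2) + xi ** 2) / (aug_count + 1) - new_mean ** 2
    aug_mean, aug_count = new_mean, aug_count + 1
T = alpha ** (-2.0 / n_arr) / np.e * np.exp(np.cumsum(log_terms) / n_arr)
ui_radius = np.sqrt(np.maximum(mu_hat ** 2 - x2_bar + T, 0.0))

print(gauss_radius[[9, 99, 499]])   # widths at n = 10, 100, 500
print(ui_radius[[9, 99, 499]])
```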
![Three confidence sequences for t-test under $N_{0,1}$ observations.](ttestmedias/tCSs.pdf){#fig:csplots width="67%"}
![The rate of growth (left) at $n=500$ and rate of shrinkage (right) at $\alpha = 0.05$ of the widths of three confidence sequences. In the left plot, the $y$-axis is widths whereas in the right plot, the $y$-axis is widths multiplied by $\sqrt{\text{sample size}}$ for clarity. Under $N_{0,1}$ observations, repeated 20 times.](ttestmedias/growth.pdf "fig:"){#fig:rates width="42%"} ![The rate of growth (left) at $n=500$ and rate of shrinkage (right) at $\alpha = 0.05$ of the widths of three confidence sequences. In the left plot, the $y$-axis is widths whereas in the right plot, the $y$-axis is widths multiplied by $\sqrt{\text{sample size}}$ for clarity. Under $N_{0,1}$ observations, repeated 20 times.](ttestmedias/shrink.pdf "fig:"){#fig:rates width="42%"}
First, we present them visually in Figure [1](#fig:csplots){reference-type="ref" reference="fig:csplots"}, for a single run of i.i.d. standard normal observations. We see that the Gaussian mixture CS we derived in [4.5](#sec:N-mix){reference-type="ref" reference="sec:N-mix"} with $c=1$ performs better than Lai's improper, flat mixture one; increasing $c$ results in interval explosion at earlier times but slightly tighter intervals at later times. The universal inference CS seems unfavorable at later times, but we shall soon see in repeated experiments that this is not necessarily the case.
We next compare their rates of growth and shrinkage. Still drawing observations from i.i.d. $N_{0,1}$, we first fix $n=500$ and let $\alpha$ vary, then fix $\alpha = 0.05$ and let $n$ vary, plotting the widths of the CSs in 20 independent runs, since all of these CSs have random widths. The results are shown in Figure [3](#fig:rates){reference-type="ref" reference="fig:rates"}. We see with greater clarity that Lai's CS is looser than the Gaussian mixture-based CSs, especially when $\alpha$ is small, even though the extended Ville's inequality seems a tighter and more advanced technique; a larger precision $c^2$ in the Gaussian mixture gradually gains an advantage as $n$ increases. Even with the "smoothing\" prior, the universal inference CS is still the most volatile: its width varies considerably across runs.
### Acknowledgement {#acknowledgement .unnumbered}
The authors acknowledge support from NSF grants IIS-2229881 and DMS-2310718.
# Omitted Proofs {#sec:pf}
*Proof of [Lemma 5](#lem:lrm-joint){reference-type="ref" reference="lem:lrm-joint"}.* Let $A \subseteq \mathbb R^{n-1}$ be measurable. Then, $$\begin{aligned}
& \mathrm E _{\mathbbmsl P}\left( \mathbbmss 1_{\{ (X_1,\dots, X_{n-1}) \in A \}} \cdot \frac{\mathrm{d}\mathbbmsl Q_{(n-1)}}{\mathrm{d}\mathbbmsl P_{(n-1)}} (X_1,\dots, X_{n-1}) \right) \\
= & \int_A \frac{\mathrm{d}\mathbbmsl Q_{(n-1)}}{\mathrm{d}\mathbbmsl P_{(n-1)}} (x_1,\dots, x_{n-1}) \, \mathrm{d}\mathbbmsl P_{(n-1)}(x_1,\dots, x_{n-1}) = \mathbbmsl Q_{(n-1)}(A).\end{aligned}$$ And $$\begin{aligned}
& \mathrm E _{\mathbbmsl P}\left( \mathbbmss 1_{\{ (X_1,\dots, X_{n-1}) \in A \}} \cdot \frac{\mathrm{d}\mathbbmsl Q_{(n)}}{\mathrm{d}\mathbbmsl P_{(n)}} (X_1,\dots, X_{n}) \right) \\
= & \int_{A\times \mathbb R} \frac{\mathrm{d}\mathbbmsl Q_{(n)}}{\mathrm{d}\mathbbmsl P_{(n)}} (x_1,\dots, x_{n}) \, \mathrm{d}\mathbbmsl P_{(n)}(x_1,\dots, x_{n}) = \mathbbmsl Q_{(n)}(A\times \mathbb R) = \mathbbmsl Q_{(n-1)}(A).\end{aligned}$$ Since $A$ is arbitrary, we conclude that $$\mathrm E _{\mathbbmsl P}\left( \frac{\mathrm{d}\mathbbmsl Q_{(n)}}{\mathrm{d}\mathbbmsl P_{(n)}} (X_1,\dots, X_{n}) \middle \vert X_1, \dots, X_{n-1} \right) = \frac{\mathrm{d}\mathbbmsl Q_{(n-1)}}{\mathrm{d}\mathbbmsl P_{(n-1)}} (X_1,\dots, X_{n-1}),$$ concluding the proof. ◻
*Proof of [Theorem 7](#thm:ui-ttest){reference-type="ref" reference="thm:ui-ttest"}.* For each $\sigma > 0$ consider the likelihood ratio martingale under $\mathcal{N}(\mu, \sigma^2)$ $$M_n^{\mu, \sigma^2} = \frac{\prod_{i=1}^n p_{\widetilde{\mu}_{i-1}, \widetilde{\sigma}^2_{i-1}}(X_i)}{\prod_{i=1}^n p_{\mu, \sigma^2}(X_i)}.$$ The following process is an e-process under any $\mathcal{N}(\mu, \cdot )$ $$\begin{aligned}
R_n^\mu = \frac{\prod_{i=1}^n p_{\widetilde{\mu}_{i-1}, \widetilde{\sigma}^2_{i-1}}(X_i)}{\sup_{s>0} \prod_{i=1}^n p_{\mu, s^2}(X_i)} = \inf_{s > 0} M_n^{\mu,s^2}.\end{aligned}$$ Note that $$\label{eqn:ville-type}
\sup_{\sigma>0} \mathop{\mathrm{ \mathbb P }}_{\mathcal{N}(\mu, \sigma^2)}[ \exists n, R_n^\mu \geqslant 1/\alpha ] \leqslant\mathop{\mathrm{ \mathbb P }}_{\mathcal{N}(\mu, \sigma^2)}[ \exists n, M_n^{\mu,\sigma^2} \geqslant 1/\alpha ] \leqslant\alpha.$$ Since $s= \sqrt{\frac{\sum (X_i - \mu)^2}{n}}$ maximizes the denominator of $R_n^\mu$, it has a closed-form expression: $$\begin{aligned}
& R_n^\mu
= \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \sup_{s > 0} \prod \frac{1}{s} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \mu}{s} \right)^2 \right\} }
= \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \left( \frac{\sum (X_i - \mu)^2}{n} \right)^{-n/2} \exp\left\{-\frac{n}{2}\right\} } .\end{aligned}$$ Hence due to [\[eqn:ville-type\]](#eqn:ville-type){reference-type="eqref" reference="eqn:ville-type"}, with probability at least $1-\alpha$, for all $n$, $$\begin{aligned}
\frac{\sum (X_i - \mu)^2}{n} \leqslant& \left\{ \ \frac{1}{\alpha} \exp\left\{-\frac{n}{2}\right\} \prod {\widetilde{\sigma}_{i-1}} \exp \left\{ \frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} \ \right\}^{2/n}
\\
= & \frac{1}{\alpha^{2/n} \mathrm e} \prod_{i=1}^n {\widetilde{\sigma}_{i-1}^{2/n}} \exp \left\{ \frac{1}{n}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\}
\\
= & \underbrace{ \frac{1}{\alpha^{2/n} \mathrm e} \exp \left\{ \frac{\sum_{i=1}^n \log \widetilde{\sigma}_{i-1}^2 + \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 }{n} \right\} }_{T_n}\end{aligned}$$ Recall that the random variable on the RHS is $T_n$. The UI-CS then reads $$\label{eqn:ui-cs}
\operatorname{CI}_n = \left[ \widehat{\mu}_{n} \pm \sqrt{ \widehat{\mu}_{n}^2 - \overline{X^2_{n}} + T_n } \right].$$ ◻
*Proof of [Proposition 8](#prop:div-ui-eproc){reference-type="ref" reference="prop:div-ui-eproc"}.* Note that $$\frac{1}{n}\log R_n = \frac{1}{2}\log {\overline{X^2_{n}}} + \frac{1}{2} - \frac{ \sum_{i=1}^n \log \widetilde{\sigma}_{i-1} }{n} - \frac{\sum_{i=1}^n \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 }{2n}.$$ The first three terms converge a.s. to $\frac{1}{2}\log (\mu^2 + \sigma^2)$, $1/2$, and $\log \sigma$. It remains to show that the last term above converges a.s. to $1/2$.
Define $A_n = \sum_{i=1}^n \frac{1}{i} \left| \left( \frac{X_i - \mu}{\widetilde{\sigma}_{i-1}} \right)^2 - \left( \frac{X_i - \mu}{\sigma} \right)^2 \right|$. Then,
$$\begin{aligned}
& \mathrm E _P(A_n) = \sum_{i=1}^n \frac{1}{i} \mathrm E _P \left( |X_i - \mu|^2 \cdot| \widetilde{\sigma}_{i-1}^{-2} - \sigma^{-2} | \right)
\\
\leqslant& \sum_{i=1}^n \frac{1}{i} \mathrm E _P\left( \frac{ (X_i - \mu)^4 i^{-\gamma/2} + (\widetilde{\sigma}_{i-1}^{-2} - \sigma^{-2} )^2 i^{\gamma/2} }{2} \right)
\\
= & \frac{1}{2} \sum_{i=1}^n \frac{1}{i} \left( \mathrm E _P( X_i - \mu )^4 \cdot i^{-\gamma/2} + \mathrm E _P( \widetilde{\sigma}_{i-1}^{-2} - \sigma^{-2} )^2 \cdot i^{\gamma/2} \right)
\\
= & \frac{1}{2} \sum_{i=1}^n \frac{1}{i} ( \mathcal{O}(1) \cdot i^{-\gamma/2} + \mathcal{O}(i^{-\gamma}) \cdot i^{\gamma/2} ) = \frac{1}{2}\sum_{i=1}^n \mathcal{O}(i^{-1-\gamma/2}) = \mathcal{O}(1).\end{aligned}$$ So there exists a constant $K > 0$ such that $\mathrm E _P(A_n) \leqslant K$ for all $n$. By Markov's inequality and Fatou's lemma, $$P( \lim_{n \to \infty} A_n \geqslant a ) \leqslant\frac{ \mathrm E _P( \lim_{n \to \infty} A_n )}{a} \leqslant\frac{\liminf_{n \to \infty} \mathrm E _P( A_n )}{a} \leqslant\frac{K}{a},$$ which implies $P( \lim_{n \to \infty} A_n < \infty ) = 1$. By Kronecker's lemma, this implies that $$\label{eqn:as1}
\lim_{n \to \infty} \frac{ \sum_{i=1}^n \left\{ \left( \frac{X_i - \mu}{\widetilde{\sigma}_{i-1}} \right)^2 - \left( \frac{X_i - \mu}{\sigma} \right)^2 \right\} }{n} = 0 \quad \text{almost surely.}$$
We now define $B_n = \sum_{i=1}^n \frac{1}{i} \left| \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 - \left( \frac{X_i - \mu}{\widetilde{\sigma}_{i-1}} \right)^2 \right|$. Without loss of generality assume $\sup_{n \geqslant 0} \mathrm E _P (\widetilde{\sigma}_n^{-6}) < \infty$ (otherwise, if only $\sup_{n \geqslant n_0} \mathrm E _P (\widetilde{\sigma}_n^{-6}) < \infty$ holds, use summation $\sum_{i=n_0+1}^n$ instead). Then, $$\begin{aligned}
& \mathrm E _P(B_n) =
\sum_{i=1}^n \frac{1}{i} \mathrm E _P \left( \widetilde{\sigma}^{-2}_{i-1} \cdot | 2X_i - \widetilde{\mu}_{i-1} - \mu | \cdot | \widetilde{\mu}_{i-1} - \mu | \right)
\\
& \leqslant\sum_{i=1}^n \frac{1}{i} \mathrm E _P \left( \frac{ \widetilde{\sigma}^{-6}_{i-1} i^{-\gamma/4} + | 2X_i - \widetilde{\mu}_{i-1} - \mu |^3 i^{-\gamma/4} + | \widetilde{\mu}_{i-1} - \mu |^3 i^{\gamma/2} }{3} \right)
\\
& = \frac{1}{3} \sum_{i=1}^n \frac{1}{i} \left( \mathrm E _P(\widetilde{\sigma}^{-6}_{i-1} ) \cdot i^{-\gamma/4} + \mathrm E _P (| 2X_i - \widetilde{\mu}_{i-1} - \mu |^3) \cdot i^{-\gamma/4} + \mathrm E _P (| \widetilde{\mu}_{i-1} - \mu |^3 ) \cdot i^{\gamma/2} \right)
\\
& = \frac{1}{3} \sum_{i=1}^n \frac{1}{i} \left( \mathcal{O}(1) \cdot i^{-\gamma/4} + \mathcal{O}(1) \cdot i^{-\gamma/4} + \mathcal{O}(i^{-\gamma}) \cdot i^{\gamma/2} \right) = \frac{1}{3}\sum_{i=1}^n \mathcal{O}(i^{-1-\gamma/4}) = \mathcal{O}(1).\end{aligned}$$ Similarly by Markov's inequality, Fatou's and Kronecker's lemma, $$\label{eqn:as2}
\lim_{n \to \infty} \frac{\sum_{i=1}^n \left\{ \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 - \left( \frac{X_i - \mu}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{n} = 0 \quad \text{almost surely.}$$ Finally, strong law of large numbers implies that $$\label{eqn:as3}
\lim_{n \to \infty} \frac{ \sum_{i=1}^n \left(\frac{X_i - \mu}{\sigma}\right)^2 }{n} = 1 \quad \text{almost surely.}$$ Combining [\[eqn:as1\]](#eqn:as1){reference-type="eqref" reference="eqn:as1"}, [\[eqn:as2\]](#eqn:as2){reference-type="eqref" reference="eqn:as2"}, and [\[eqn:as3\]](#eqn:as3){reference-type="eqref" reference="eqn:as3"}, we see that $$\lim_{n \to \infty} \frac{\sum_{i=1}^n \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 }{n} = 1 \quad \text{almost surely.}$$ This concludes the proof. ◻
*Proof of [Theorem 9](#thm:ui-ttest-onesided){reference-type="ref" reference="thm:ui-ttest-onesided"}.* The following process is an e-process under any $\mathcal{N}(\mu, \sigma^2)$, ($\mu \leqslant 0$ and $\sigma > 0$). $$\begin{aligned}
\frac{\prod_{i=1}^n p_{\widetilde{\mu}_{i-1}, \widetilde{\sigma}^2_{i-1}}(X_i)}{\sup_{\substack{ m \leqslant 0 \\ s > 0 }} \prod_{i=1}^n p_{m, s^2}(X_i)} = \inf_{\substack{ m \leqslant 0 \\ s > 0 }} M_n^{m,s^2}.\end{aligned}$$ We have $$\begin{aligned}
& \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \sup_{m \leqslant 0}\sup_{s > 0} \prod \frac{1}{s} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - m}{s} \right)^2 \right\} }
= \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \sup_{m \leqslant 0}\left( \frac{\sum (X_i - m)^2}{n} \right)^{-n/2} \exp\left\{-\frac{n}{2}\right\} }
\\
& = \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \sup_{m \leqslant 0}\left( m^2 - 2 \widehat{\mu}_{n} m + \overline{X^2_{n}} \right)^{-n/2} \exp\left\{-\frac{n}{2}\right\} }\end{aligned}$$ The denominator is maximized by $m = \widehat{\mu}_{n} \wedge 0$. Therefore $$\begin{aligned}
\dots =& \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \left( ( \widehat{\mu}_{n} \wedge 0 - \widehat{\mu}_{n})^2 + \overline{X^2_{n}} - \widehat{\mu}_{n}^2 \right)^{-n/2} \exp\left\{-\frac{n}{2}\right\} } = \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \left( (0\vee \widehat{\mu}_{n})^2 + \overline{X^2_{n}} - \widehat{\mu}_{n}^2 \right)^{-n/2} \exp\left\{-\frac{n}{2}\right\} }
\\
=& \frac{ \prod \frac{1}{\widetilde{\sigma}_{i-1}} \exp \left\{ -\frac{1}{2}\left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 \right\} }{ \left( \overline{X^2_{n}} - (\widehat{\mu}_{n}\wedge 0)^2 \right)^{-n/2} \exp\left\{-\frac{n}{2}\right\} } = R_n^-.\end{aligned}$$ ◻
*Proof of [Proposition 10](#prop:div-ui-eproc-1s){reference-type="ref" reference="prop:div-ui-eproc-1s"}.* The four terms of $$\begin{aligned}
\frac{1}{n}\log R_n^- = \frac{1}{2}\log\left(\overline{X^2_{n}} - (\widehat{\mu}_{n}\wedge 0)^2 \right) + \frac{1}{2} - \frac{ \sum_{i=1}^n \log \widetilde{\sigma}_{i-1} }{n} - \frac{\sum_{i=1}^n \left( \frac{X_i - \widetilde{\mu}_{i-1}}{\widetilde{\sigma}_{i-1}} \right)^2 }{2n}\end{aligned}$$ converge a.s. to $\frac{1}{2}\log( \mu^2 + \sigma^2 - (\mu\wedge 0)^2 )$, $1/2$, $\log \sigma$, and $1/2$ respectively, concluding the proof. ◻
*Proof of [Lemma 14](#lem:jeffreys){reference-type="ref" reference="lem:jeffreys"}.* Let $Q^+_n$ be the distribution of $(X_1, X_2, \dots, X_n)$ conditioned on $\{ X_1 > 0 \}$. Define the constant $p_{> 0} = \mathbb P [X_1 > 0] = P_{\mu, \sigma}\{(0,\infty) \}$. Then, $$\frac{\mathrm{d}Q^+_n}{\mathrm{d}\eta^{\otimes n}}(x_1,x_2,\dots, x_n) = p_{> 0}^{-1} \cdot \mathbbmss 1(x_1 > 0) \cdot \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i-\mu)) \right\},$$ meaning that for any measurable $A\subseteq \mathbb{R^+} \times \mathbb R^{n-1}$, $$Q_n^+ \{ A \} = \int_A p_{> 0}^{-1} \cdot \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i-\mu)) \right\} \eta(\mathrm{d}x_1) \eta(\mathrm{d}x_2) \dots \eta (\mathrm{d}x_n).$$ Now recall $V_n:(x_1, x_2, \dots, x_n) \mapsto ( x_1/|x_1|, x_2/|x_1|, \dots, x_n/|x_1| )$. Let $Q_n^{*+}$ be the push-forward measure of $Q_n^+$ by $V_n$, a measure on $\{1\}\times \mathbb R^{n-1}$ (which is the conditional distribution of $(X_1/|X_1|,\dots, X_n/|X_1|)$ given $\{ X_1 > 0 \}$). Suppose the set $V_n(A) = \{1\} \times B$. We have the following change of variables (letting $(x_1^*, \dots, x_n^*) = V_n(x_1, \dots, x_n)$, so $x_1^* = 1$ below), $$\begin{aligned}
& Q_n^{*+} \{ V_n(A) \} = Q_n^+\{A\}= \int_A p_{> 0}^{-1} \cdot \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i-\mu)) \right\} \eta(\mathrm{d}x_1) \eta(\mathrm{d}x_2) \dots \eta (\mathrm{d}x_n)
\\
& = p_{> 0}^{-1} \cdot \int_{x_1 > 0} \eta(\mathrm{d}x_1) \int_{B} \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i^* |x_1| -\mu)) \right\} \eta(\mathrm{d}x_2^*) \dots \eta (\mathrm{d}x_n^*) \left|\frac{ \mathrm{d}(x_2, \dots, x_n)}{ \mathrm{d}(x_2^*, \dots, x_n^*)}\right|
\\
&= p_{> 0}^{-1} \cdot\int_{x_1 > 0} \eta(\mathrm{d}x_1) \int_{B} \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i^* |x_1| -\mu)) \right\} \eta(\mathrm{d}x_2^*) \dots \eta (\mathrm{d}x_n^*) \begin{vmatrix}
\frac{1}{|x_1|} & 0 & \dots & 0 \\
0 & \frac{1}{|x_1|} & & 0\\
\vdots & & \ddots & 0 \\
0 & \dots & 0 & \frac{1}{|x_1|}
\end{vmatrix}^{-1}
\\
&= p_{> 0}^{-1} \cdot\int_{x_1 > 0} \eta(\mathrm{d}x_1) \int_{B} \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i^* x_1 -\mu)) \right\} x_1^{n-1} \eta(\mathrm{d}x_2^*) \dots \eta (\mathrm{d}x_n^*).\end{aligned}$$ We thus conclude that (letting $\delta_1$ be the Dirac point mass on 1), $$\frac{\mathrm{d}Q_n^{*+}}{\delta_1 \otimes \eta^{\otimes (n-1)}}(x_1^*, x_2^*, \dots, x_n^*) = p_{> 0}^{-1} \cdot\int_{x_1 > 0} \prod_{i=1}^n \left\{ \sigma^{-1} g(\sigma^{-1}(x_i^* x_1 -\mu)) \right\} x_1^{n-1} \eta(\mathrm{d}x_1).$$ We can simplify the integral above by substituting $\tau = \sigma/x_1$, obtaining, $$\begin{aligned}
& \frac{\mathrm{d}Q_n^{*+}}{\delta_1 \otimes \eta^{\otimes (n-1)}}(x_1^*, x_2^*, \dots, x_n^*)
\\
= & p_{> 0}^{-1}\cdot \int_{\tau > 0} \left\{ \sigma^{-1} g(1/\tau-\sigma^{-1}\mu) \right\} \left\{\prod_{i=2}^n \sigma^{-1} g(x_i^*/\tau-\sigma^{-1}\mu)) \right\} (\sigma/\tau)^{n-1} \sigma (1/\tau^2) \eta(\mathrm{d}\tau)
\\
= & p_{> 0}^{-1}\cdot \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(x_i^* /\tau - \mu/\sigma)) \right\} (1/\tau) \mathrm{d}\tau.\end{aligned}$$
Similarly, let $Q_n^{*-}$ be the conditional distribution of $(X_1/|X_1|,\dots, X_n/|X_1|)$ given $\{ X_1 < 0 \}$). We have $$\frac{\mathrm{d}Q_n^{*-}}{\delta_{-1} \otimes \eta^{\otimes (n-1)}}(x_1^*, x_2^*, \dots, x_n^*)
= (1-p_{> 0})^{-1}\cdot \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(x_i^* /\tau - \mu/\sigma)) \right\} (1/\tau) \mathrm{d}\tau.$$ Finally, the measure $Q_n^* = \mathbbmss 1_{ \{ x_1 > 0 \} } \cdot p_{> 0 } \cdot Q_n^{*+} + \mathbbmss 1_{ \{ x_1 < 0 \} } \cdot (1-p_{> 0 }) \cdot Q_n^{*-}$. So we see that $$\frac{\mathrm{d}Q_n}{\mathrm{d}\eta^{\langle n-1 \rangle}} (x_1^*, \dots, x_n^*) = \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(x_i^* /\tau - \mu/\sigma)) \right\} \frac{ \mathrm{d}\tau}{\tau}$$ holds for both $x_1^* = \pm 1$. This concludes the proof. ◻
From now, let us denote $$\label{eqn:pstartheta}
p^*_{\theta}(x_1^*, \dots, x_n^*) = \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(x_i^* /\tau - \theta) \right\} (1/\tau) \mathrm{d}\tau,$$ i.e., the density $\frac{\mathrm{d}Q_n}{\mathrm{d}\eta^{\langle n-1 \rangle}}$ when $P_{\mu, \sigma^2}$ satisfies $\mu/\sigma = \theta$. Here is a property of $p_{\theta}^*$.
**Proposition 21**. *Let $\lambda > 0$ be any constant. Then, $p^*_{\theta}(\lambda x_1^*, \dots, \lambda x_n^*) = \lambda^{-n} p^*_{\theta}(x_1^*, \dots, x_n^*)$. In particular, taking $\lambda = |x_1|$, we have $p^*_{\theta}(x_1, \dots, x_n) = |x_1|^{-n} p^*_{\theta}(x_1^*, \dots, x_n^*)$.*
*Proof of [Proposition 21](#lem:scaling){reference-type="ref" reference="lem:scaling"}.* $$\begin{aligned}
& p^*_{\theta}(\lambda x_1^*, \dots, \lambda x_n^*)
\\
= & \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(\lambda x_i^* /\tau - \theta)) \right\} (1/\tau) \mathrm{d}\tau
\\
= & \int_{\gamma = \lambda ^{-1} \tau > 0} \left\{\prod_{i=1}^n (1/\lambda \gamma) g( x_i^* /\gamma - \theta)) \right\} (1/\lambda \gamma) \lambda \mathrm{d}\gamma
\\
= & \lambda^{-n} p^*_{\theta}(x_1^*, \dots, x_n^*).\end{aligned}$$ ◻
*Proof of [Lemma 15](#lem:si-lr){reference-type="ref" reference="lem:si-lr"}.* Denote $X_i/|X_1|$ by $X_i^*$. $$h_n(\theta; \theta_0) = \underbrace{ \frac{p^*_{\theta} (X_1, \dots, X_n) }{p_{\theta_0}^*(X_1, \dots, X_n)} = \frac{p^*_{\theta} (X_1^*, \dots, X_n^*) }{p_{\theta_0}^*(X_1^*, \dots, X_n^*)} }_{\text{equality due to \cref{lem:scaling}}} = \frac{\mathrm{d}Q^n_{\mu, \sigma^2} }{\mathrm{d}Q^n_{\mu_0, \sigma^2_0}}(X_1^*, \dots, X_n^*).$$ for any $\mu/\sigma = \theta$ and $\mu_0/\sigma_0 = \theta_0$. Now it follows from [Lemma 5](#lem:lrm-joint){reference-type="ref" reference="lem:lrm-joint"} that $\{ h_n(\theta; \theta_0) \}$ is an NM for $P_{\mu_0, \sigma^2_0}$ on the canonical filtration generated by $\{X_n^*\}$, which according to [Proposition 13](#prop:si-filt-rule){reference-type="ref" reference="prop:si-filt-rule"} is just $\{ \mathcal{F}_n^* \}$. ◻
*Proof of [Corollary 16](#lem:t-lr){reference-type="ref" reference="lem:t-lr"}.* Using $g(x) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2)$ in [\[eqn:pstartheta\]](#eqn:pstartheta){reference-type="eqref" reference="eqn:pstartheta"}, $$\begin{aligned}
& p^*_{\theta}(x_1, \dots, x_n) = \int_{\tau > 0} \left\{\prod_{i=1}^n (1/\tau) g(x_i /\tau - \theta)) \right\} (1/\tau) \mathrm{d}\tau
\\
=& \frac{1}{(2\pi)^{n/2}} \int_{\tau > 0} \left\{\prod_{i=1}^n \mathrm e^{-(x_i /\tau - \theta)^2/2} \right\} \frac{1}{\tau^{n+1}} \mathrm{d}\tau
\\
= & \frac{\mathrm e^{- n \theta^2 /2 }}{(2\pi)^{n/2}} \int_{\tau > 0} \exp\left\{ -\frac{\sum_{i=1}^n {x_i }^2}{2\tau^2} + \frac{\theta \sum_{i=1}^n x_i}{\tau} \right\} \frac{1}{\tau^{n+1}} \mathrm{d}\tau.\end{aligned}$$ Therefore, applying [Lemma 15](#lem:si-lr){reference-type="ref" reference="lem:si-lr"} with $\theta_0 = 0$, we have $$\begin{aligned}
& h_n(\theta; 0) = \mathrm e^{- n \theta^2 /2 } \frac{\int_{\tau > 0} \exp\left\{ -\frac{\sum_{i=1}^n {X_i }^2}{2\tau^2} + \frac{\theta \sum_{i=1}^n X_i}{\tau} \right\} \frac{1}{\tau^{n+1}} \mathrm{d}\tau}{\int_{\tau > 0} \exp\left\{ -\frac{\sum_{i=1}^n {X_i }^2}{2\tau^2} \right\} \frac{1}{\tau^{n+1}} \mathrm{d}\tau}
\\
& \text{(the denominator being a generalized Gaussian integral)}
\\
= & \frac{\mathrm e^{- n \theta^2 /2 }}{\Gamma(n/2)} \int_{y > 0} y^{n/2-1} \exp\left\{ -y + \theta \left(\sum_{i=1}^n X_i \right) \sqrt{\frac{2y}{\sum_{i=1}^n {X_i}^2}} \right\} \mathrm{d}y,\end{aligned}$$ coinciding with $h_{\theta, n}$. ◻
*Proof of [Theorem 17](#thm:lai-ensm){reference-type="ref" reference="thm:lai-ensm"} (as well as [Theorem 11](#thm:lai-cs){reference-type="ref" reference="thm:lai-cs"}).* Putting a flat, improper prior $\mathrm{d}\pi(\theta) = \mathrm{d}\theta$ over $\mathbb R$, we have the following improperly mixed, extended nonnegative supermartingale, $$\begin{aligned}
& H_n = \int h_{\theta,n} \pi(\mathrm{d}\theta)
\\
= & \int_{\theta} \frac{\mathrm e^{- n \theta^2 /2 }}{\Gamma(n/2)} \left( \int_{y > 0} y^{n/2-1} \exp\left\{ -y + \theta \left(\sum_{i=1}^n X_i \right) \sqrt{\frac{2y}{\sum_{i=1}^n {X_i}^2}} \right\} \mathrm{d}y \right)\cdot \mathrm{d}\theta
\\
= & \frac{1}{ \Gamma(n/2)} \int_{y>0} y^{n/2-1} \mathrm e^{-y} \left(\int_{\theta} \exp\left\{ -\frac{n}{2}\theta^2 + \theta \left(\sum_{i=1}^n X_i \right) \sqrt{\frac{2y}{\sum_{i=1}^n {X_i}^2}}\right\} \mathrm{d}\theta \right) \mathrm{d}y
\\
= & \frac{1}{ \Gamma(n/2)} \int_{y>0} y^{n/2-1} \mathrm e^{-y} \left( \sqrt{\frac{2\pi}{n}} \exp\left\{ \frac{\left(\sum_{i=1}^n X_i \right)^2 \frac{2y}{\sum_{i=1}^n {X_i}^2} }{2n} \right\} \right) \mathrm{d}y
\\
= & \frac{1}{ \Gamma(n/2)} \sqrt{\frac{2\pi}{n}} \int_{y>0} y^{n/2-1} \exp\left\{ - \left(1- \frac{S_n^2}{n V_n } \right) y \right\} \mathrm{d}y
\\
= & \frac{1}{ \Gamma(n/2)} \sqrt{\frac{2\pi}{n}} \Gamma(n/2) \left(1- \frac{S_n^2}{n V_n } \right)^{-n/2} = \sqrt{\frac{2 \pi }{n}} \left(1- \frac{S_n^2}{n V_n } \right)^{-n/2}
\\
= & \sqrt{\frac{2 \pi }{n}} \left( \frac{n V_n }{n V_n - S_n^2} \right)^{n/2}\end{aligned}$$ Note that when $n = 1$, $n V_n - S_n^2 = 0$, and $H_1$ is understood to be $+\infty$. Note also that $$t_{n-1} \sim \frac{\frac{1}{n}S_n}{\frac{1}{\sqrt{n}}\sqrt{\frac{1}{n-1}(V_n - \frac{1}{n}S_n^2)}} = \sqrt{n-1} \, \frac{S_n}{ \sqrt{ n V_n - S_n^2 } } =: T_{n-1}.$$ Then $H_n = \sqrt{\frac{2\pi}{n}} \left(\frac{T_{n-1}^2}{n-1} + 1\right)^{n/2}$.
In terms of the CS, every $X_i$ (which appears as $X_i - 0$ under the null $\mu = 0$) should be replaced by $X_i - \mu$, and hence $$H_n^{\mu} = \sqrt{\frac{2\pi}{n}}\left( \frac{n \sum (X_i - \mu)^2 }{n \sum (X_i - \mu)^2 - (\sum (X_i - \mu))^2} \right)^{n/2}$$ is also an (improperly mixed) ENSM.
The extended Ville's inequality, applied to the process $\sqrt{m/2\pi}\, H_n^\mu$, reads as follows: with probability at most $$P = \mathbb P \left[\frac{T_{m-1}^2}{m-1} + 1 \geqslant\eta\right] + \eta^{-m/2} \mathbb E \left[\mathbbmss 1_{\left\{ \frac{T_{m-1}^2}{m-1} + 1 < \eta \right\}} \left(\frac{T_{m-1}^2}{m-1} + 1\right)^{m/2}\right],$$ there exists $n \geqslant m$ such that $$\sqrt{\frac{m}{n}} \left( \frac{n \sum (X_i - \mu)^2 }{n \sum (X_i - \mu)^2 - (\sum (X_i - \mu))^2} \right)^{n/2} \geqslant\eta^{m/2}.$$
Define $a =\sqrt{(m-1)(\eta - 1)}$. Then $\eta^{m/2} = (1 + a^2/(m-1))^{m/2} = h(a)$ and $$\begin{aligned}
P &= \mathbb P [ |T_{m-1}| \geqslant a ] + h(a)^{-1} \mathbb E [ \mathbbmss 1_{\{|T_{m-1}| < a\}} h(T_{m-1}) ]
\\
& = 2 \mathbb P [ T_{m-1} > a ] + 2 \frac{ \mathbb E [ \mathbbmss 1_{\{0 \leqslant T_{m-1} < a\}} h(T_{m-1}) ] }{h(a)}\end{aligned}$$ Let $f_{m-1}$, $F_{m-1}$ be the density and CDF of $T_{m-1}$. The expectation $\mathbb E [ \mathbbmss 1_{\{0 \leqslant T_{m-1} < a\}} h(T_{m-1}) ]$ must be smaller than $\frac{a}{a-a'} \mathbb E [ \mathbbmss 1_{\{a' \leqslant T_{m-1} < a\}} h(T_{m-1}) ]$, for all $a' \in (0, a)$. Note that $$\lim_{a' \to a} \frac{a}{a-a'} \mathbb E [ \mathbbmss 1_{\{a' \leqslant T_{m-1} < a\}} h(T_{m-1}) ] = a h(a) f_{m-1}(a).$$ Hence $$P \leqslant 2(1-F_{m-1}(a)) + 2a f_{m-1}(a).$$ This concludes the missing proof of Lai CS. ◻
*Proof of [Theorem 19](#thm:lai-e){reference-type="ref" reference="thm:lai-e"}.* Let us put a Gaussian prior on $\theta$, $$\frac{\mathrm{d}\pi(\theta)}{\mathrm{d}\theta} = \frac{c}{ \sqrt{2\pi}} \mathrm e^{-c^2\theta^2/2},$$ and define the martingale $$\begin{aligned}
& H_n = \int h_{\theta,n} \pi(\mathrm{d}\theta)
\\
= & \int_{\theta} \frac{\mathrm e^{- n \theta^2 /2 }}{\Gamma(n/2)} \left( \int_{y > 0} y^{n/2-1} \exp\left\{ -y + \theta \left(\sum_{i=1}^n X_i \right) \sqrt{\frac{2y}{\sum_{i=1}^n {X_i}^2}} \right\} \mathrm{d}y \right)\cdot \pi(\mathrm{d}\theta)
\\
= & \frac{c}{ \sqrt{2\pi} \Gamma(n/2)} \int_{y>0} y^{n/2-1} \mathrm e^{-y} \left(\int_{\theta} \exp\left\{ -\frac{n+c^2}{2}\theta^2 + \theta \left(\sum_{i=1}^n X_i \right) \sqrt{\frac{2y}{\sum_{i=1}^n {X_i}^2}}\right\} \mathrm{d}\theta \right) \mathrm{d}y
\\
= & \frac{c}{ \sqrt{2\pi} \Gamma(n/2)} \int_{y>0} y^{n/2-1} \mathrm e^{-y} \left( \sqrt{\frac{\pi}{\frac{n+c^2}{2}}} \exp\left\{ \frac{\left(\sum_{i=1}^n X_i \right)^2 \frac{2y}{\sum_{i=1}^n {X_i}^2} }{2(n+c^2)} \right\} \right) \mathrm{d}y
\\
= & \frac{c}{ \sqrt{2\pi} \Gamma(n/2)} \sqrt{\frac{\pi}{\frac{n+c^2}{2}}} \int_{y>0} y^{n/2-1} \exp\left\{ - \left(1- \frac{S_n^2}{(n+c^2) V_n } \right) y \right\} \mathrm{d}y
\\
= & \frac{c}{ \Gamma(n/2)} \sqrt{\frac{1}{n+c^2}} \Gamma(n/2) \left(1- \frac{S_n^2}{(n+c^2) V_n } \right)^{-n/2} = \sqrt{\frac{c^2}{n+c^2}} \left(1- \frac{S_n^2}{(n+c^2) V_n } \right)^{-n/2}.\end{aligned}$$ This is a valid test martingale.
While this is a test martingale for $\mathcal{N}_{\mu = 0}$, a shifting argument ($X_i \to X_i - \mu_0$) yields that $$H_n^{\mu_0} = \sqrt{\frac{c^2}{n+c^2}} \left(1- \frac{(S_n - n\mu_0)^2}{(n+c^2) (V_n - 2 S_n \mu_0 + n \mu_0^2) } \right)^{-n/2}$$ is a test martingale for $\mathcal{N}_{\mu = \mu_0}$. Applying Ville's inequality, with probability at least $1-\alpha$, for all $n$, $$\begin{aligned}
\frac{nc^2 \mu_0^2 - 2c^2 S_n \mu_0 + (n+c^2)V_n - S_n^2}{ (n+c^2)(n \mu_0^2 - 2S_n\mu_0 +V_n) } &\geqslant\left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n},
\\
\frac{ nc^2 (\mu_0 - \widehat{\mu}_{n})^2 +(n+c^2)(V_n - S_n^2/n)}{ (n+c^2)( n (\mu_0 - \widehat{\mu}_{n})^2 + V_n - S_n^2/n ) } &\geqslant\left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n},
\\
\left\{ \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n} (n+c^2)n - nc^2 \right\} (\mu_0 - \widehat{\mu}_{n})^2 &\leqslant\left\{ (n+c^2)\left(1- \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n}\right) \right\} ( V_n - S_n^2/n ),
\\
(\mu_0 - \widehat{\mu}_{n})^2 &\leqslant\frac{ (n+c^2)\left(1- \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n}\right) }{ \left( \frac{\alpha^2 c^2}{n+c^2} \right)^{1/n} (n+c^2) - c^2 } \left(\overline{X^2_{n}} - \widehat{\mu}_{n}^2 \right)\end{aligned}$$ This concludes the proof ◻
*Proof of [Proposition 18](#prop:conv-t-ensm){reference-type="ref" reference="prop:conv-t-ensm"}.* First, under $N_{\mu,\sigma^2}$, note that $T_{n-1}=\sqrt{n-1} \frac{S_n - n\mu}{ \sqrt {n V_n - S_n^2} }$ and $v_n = V_n/n - S_n^2/n^2$. We replace $\frac{T_{n-1}}{\sqrt{n-1}}$ by $\frac{T_{n-1}}{\sqrt{n-1}} + \frac{\mu}{\sqrt{v_{n}}}$ in [\[eqn:lai-ensm-T-expr\]](#eqn:lai-ensm-T-expr){reference-type="eqref" reference="eqn:lai-ensm-T-expr"} to have $$\begin{gathered}
H_n = \sqrt{\frac{2 \pi }{n}} \left( 1 + \left(\frac{T_{n-1}}{\sqrt{n-1}} + \frac{\mu}{\sqrt{v_{n}}}\right)^2 \right)^{n/2},
\\
\frac{\log H_n}{n} = \frac{\log(2\pi/n)}{2n} + \frac{1}{2} \log \left( 1 + \left(\frac{T_{n-1}}{\sqrt{n-1}} + \frac{\mu}{\sqrt{v_{n}}}\right)^2 \right).
\end{gathered}$$ It is well known that $v_{n}$ converges to $\sigma^2$ almost surely. Therefore $\frac{T_{n-1}}{\sqrt{n-1}} = \frac{\widehat{\mu}_{n} - \mu}{\sqrt{v_{n}}}$ converges almost surely to 0. These imply that $\frac{\log H_n}{n}$ converges almost surely to $\frac{1}{2}\log(1 + \mu^2/\sigma^2 )$.
Second, under any $P \in \mathcal{N}_{\mu = 0}$, let $\delta > 0$. By [\[eqn:lai-ensm-T-expr\]](#eqn:lai-ensm-T-expr){reference-type="eqref" reference="eqn:lai-ensm-T-expr"} we have $$\begin{aligned}
P( H_n > \delta ) = P\Bigg( T_{n-1}^2 > \underbrace{ \left(\left( \frac{\delta^2 n}{2\pi} \right)^{1/n} - 1 \right)(n-1) }_{\gtrsim\log n} \Bigg) \lesssim P( T_1^2 > \log n ).
\end{aligned}$$ The underbraced term grows logarithmically because $n^{1/n} - 1 = \frac{\log n}{n} + \mathcal{O}\!\left(\frac{\log^2 n}{n^2}\right)$. The "$\lesssim$\" holds because the tail of Student's t-distribution strictly decreases when the degrees of freedom increase. So $P( H_n > \delta )$ goes to 0 as $n\to\infty$, concluding that $\{ H_n \}$ converges to 0 in probability. By @ensm [Proposition A.14], an ENSM converges almost surely, so $\{ H_n \}$ converges to 0 almost surely. The proof is complete. ◻
*Proof of [Proposition 20](#prop:conv-t-nm){reference-type="ref" reference="prop:conv-t-nm"}.* First, under $N_{\mu,\sigma^2}$, we replace in [\[eqn:lai-nm-tstat\]](#eqn:lai-nm-tstat){reference-type="eqref" reference="eqn:lai-nm-tstat"} $\frac{T_{n-1}}{\sqrt{n-1}}$ by $\frac{T_{n-1}}{\sqrt{n-1}} + \frac{\mu}{\sqrt{v_{n}}}$, $$\begin{gathered}
G_n^{(c)} = \sqrt{\frac{c^2}{n+c^2}} \left( 1 + \frac{ n}{ \frac{n+c^2}{ \left(\frac{T_{n-1}}{\sqrt{n-1}} + \frac{\mu}{\sqrt{v_{n}}}\right)^2 } + {c^2} }\right)^{n/2},
\\
\frac{\log G_n^{(c)} }{n} = \frac{\log(\frac{c^2}{n+c^2})}{2n} + \frac{1}{2}\log \left( 1 + \frac{ n}{ \frac{n+c^2}{ \left(\frac{T_{n-1}}{\sqrt{n-1}} + \frac{\mu}{\sqrt{v_{n}}}\right)^2 } + {c^2} }\right).
\end{gathered}$$ As in the proof of [Proposition 18](#prop:conv-t-ensm){reference-type="ref" reference="prop:conv-t-ensm"}, $\frac{T_{n-1}}{\sqrt{n-1}}$ converges to 0 and $v_n$ to $\sigma^2$ almost surely. Therefore $\frac{\log G_n^{(c)} }{n}$ converges a.s. to $\frac{1}{2}\log(1 + \mu^2/\sigma^2)$.
Second, under any $P \in \mathcal{N}_{\mu = 0}$, let $\delta > 0$. By [\[eqn:lai-nm-tstat\]](#eqn:lai-nm-tstat){reference-type="eqref" reference="eqn:lai-nm-tstat"} we have $$\begin{aligned}
P( G_n^{(c)} > \delta ) = P\Bigg( \frac{ n}{ \frac{(n+c^2)(n-1) }{T_{n-1}^2 } + {c^2} } > \underbrace{ \left( \frac{(n+c^2)\delta^2}{c^2} \right)^{1/n} -1 }_{\gtrsim \frac{\log n}{n}} \Bigg) \lesssim P( T_1^2 > \log n ),
\end{aligned}$$ which converges to 0. So $\{ G_n^{(c)} \}$ converges to 0 in probability. It thus also converges to 0 almost surely due to Doob's martingale convergence theorem, concluding the proof. ◻
[^1]: which is the conjugate prior for normal distributions when both $\mu$ and $\sigma$ are unknown
| arxiv_math | {
"id": "2310.03722",
"title": "Anytime-valid t-tests and confidence sequences for Gaussian means with\n unknown variance",
"authors": "Hongjian Wang and Aaditya Ramdas",
"categories": "math.ST cs.LG stat.ME stat.ML stat.TH",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
The near-completion of the program of endoscopy poses the question of what lies next. This article takes a broad view of ideas beyond the program of endoscopy, highlighting the connections among them, and emphasizing the relationship between local and global aspects. Central among those ideas is the one proposed in a 2000 lecture of R. P. Langlands, aiming to extract from the stable trace formula of a group $G$ the bulk of those automorphic representations in the image of the conjectural functorial lift corresponding to a morphism of $L$-groups ${^LH}\to{^LG}$. With the extension of the problem of functoriality to the "relative" setting of spherical varieties and related spaces, some structure behind such comparisons has started to reveal itself. In a seemingly unrelated direction, a program initiated by Braverman--Kazhdan, also around 2000, to generalize the Godement--Jacquet proof of the functional equation to arbitrary $L$-functions, has received renewed attention in recent years. We survey ideas and developments in this direction, as well, and discuss the relationship between the two programs.
address: Department of Mathematics, Johns Hopkins University, Baltimore, MD 21218, USA.
author:
- Yiannis Sakellaridis
bibliography:
- biblio.bib
title: "Local and global questions \"beyond endoscopy\""
---
[^1]
# Introduction
## The Langlands program and $L$-functions
The desire to understand $L$-functions animated the birth of the Langlands program, as is evident from the 1967 letter of Robert P. Langlands to André Weil [@Langlands-Weil]. Once in place, however, the conjectures of reciprocity and functoriality seem to throw the direct study of $L$-functions out of the window; in the creator's own words, "since functoriality, once established, entails immediately the analytic continuation of all the functions $L(s, \pi , \rho)$, \[..\] there would seem to be little point in pursuing seriously the various methods introduced for dealing with this or that special case" [@Langlands-wherestands], that is, the methods that one uses to represent automorphic $L$-functions.
Fortunately, such methods continued to be the object of intense study by mathematicians such as Piatetski-Shapiro, Jacquet, and Shalika, despite the fact that "the mathematical community was not very supportive because the project deviated from the accepted canon and was regarded as perverse" [@JaNotices].
Then, in a strange twist, $L$-functions came again to play a prominent role in Langlands' vision of the path to functoriality "Beyond Endoscopy" [@Langlands-BE]: It was no longer sufficient to work with the trace formula as developed by Selberg and Arthur, out of "pure" (albeit extremely complicated) ingredients such as compactly supported test functions and their orbital integrals. One also had to introduce $L$-functions "by hand," whose poles had the role of picking up the images of functorial lifts.
"By hand" could mean writing down the Dirichlet series of an automorphic $L$-function $$L(s, \pi , \rho) = \sum_{n\ge 1} \frac{a_n}{n^s},$$ where the $a_n$'s, for prime powers $n=p^i$, are polynomials in the Satake parameters (or: Frobenius--Hecke parameters, as Langlands prefers to call them) of the automorphic representation $\pi$ at $p$. Inverting the Satake isomorphism (and glossing over ramified places, for the purposes of this introduction), we obtain a corresponding series $h_{r,s}$ of Hecke operators, whose trace on the automorphic representation $\pi$ is the $L$-value above. The insertion of $h_{r,s}$ as a "nonstandard test function" into the trace formula gives, at least formally, a spectral expansion where the usual term for $\pi$ is weighed by the factor $L(s, \pi , \rho)$.
The idea of Langlands (presented and expanded by Arthur in [@Arthur-BE]) was that, weighing the trace formula by $L$-functions, and taking residues at the poles of those, one could detect the images of functorial lifts. The original idea, actually, involved weighing by the *logarithmic derivatives* of $L$-functions, so that the residues would be weighed only by the orders of the poles; it is generally understood, however, that the geometric expressions for the logarithmic derivative are prohibitively difficult to manage.
The Beyond Endoscopy project has seen some limited successes in the two decades since its inception; to mention a few: In his thesis [@Venkatesh], which used the Kuznetsov, rather than the Selberg, trace formula, Venkatesh successfully employed "Beyond Endoscopy" techniques to re-establish functoriality from tori to $\operatorname{GL}_2$. A series of papers of Herman [@Herman-RS; @Herman] used similar techniques to give a new proof of analytic continuation and the functional equation of the standard and Rankin--Selberg $L$-functions for $\operatorname{GL}_2$. Altuğ's work [@Altug1; @Altug2; @Altug3] managed to rigorously complete the argument outlined by Langlands for $\operatorname{GL}_2$, isolating the contribution of "special representations" from the Arthur--Selberg trace formula, and showing that the standard $L$-function does not "pick up" functorial lifts in the cuspidal spectrum. The scarcity of such successes attests to the impressive nature of these feats, and the abilities of the authors. At first sight, however, these works employ case-specific brute-force techniques in order to manage the difficult analytic expressions that arise after inserting $L$-functions into the trace formula, and do not necessarily point to a paradigm that can be scaled up.
The present article is predicated on the conviction that the insertion of $L$-functions into the trace formula gives rise to structures that need to be understood using tools from representation theory and geometry. This is grounded on the limited experience that I have obtained through my own involvement in the project, and the difficulty of reaching concrete conclusions based simply on techniques of analytic number theory. In particular, we will attempt to mimic the paradigm of endoscopy, where local results and comparisons are formulated, and fed into the trace formula to produce the desired global results.
For example, as was understood by Ngô [@Ngo-PS], following his joint work with Frenkel [@FN], a "geometrization" of the Euler factors of the aforementioned series $h_{r,s}$ of Hecke elements is possible by replacing the reductive group $G$ by a suitable reductive monoid $G'\supset G$ (possibly singular), generalizing the Godement--Jacquet theory for $\operatorname{Mat}_n\supset \operatorname{GL}_n$. The Hecke series (for a local field of equal characteristic) is then the Frobenius trace on an intersection complex associated to the arc space of this monoid [@BNS]. This makes the above trace formula, weighed by $L$-functions, amenable, at least in principle, to the geometric techniques and insights that came with the proof of the Fundamental Lemma.
This generalization of Godement--Jacquet theory, including the suggestion to look at reductive monoids, had been the topic of a series of papers by Braverman and Kazhdan, also around the year 2000 [@BK2; @BK4]. Moreover, reductive monoids are but a special case of affine spherical varieties, whose putative Schwartz spaces, as noted in [@SaRS], should explain and generalize a large portion of the Rankin--Selberg method of integral representations of $L$-functions.
Broadening the problem of functoriality to more general spaces than reductive groups, and admitting the relative trace formula of Jacquet into our purview of desired "Beyond Endoscopy" comparisons, gives us a broader playing ground for testing the core ideas. In particular, a *local* theory of "Beyond Endoscopy" comparisons has been developed by the author in low-rank cases [@SaBE1; @SaBE2; @SaTransfer1; @SaTransfer2; @SaRankone], revealing surprising uniformity of the pertinent "transfer operators," and suggesting structures that might scale up to higher rank. These structures seem to be controlled by $L$-functions. Thus, paradoxically, in order to address functoriality "beyond endoscopy," one is led to deviate from the canon in a rather systematic way.
## Outline of this article
"Beyond Endoscopy" (capitalized when referring to the original proposal, lowercase for a broader set of ideas) was never a proposal with the specificity and solid evidence of endoscopy, that Langlands himself provided, along with Labesse, Shelstad, and Kottwitz, in the late 70s and 80s [@LL; @LS; @Kottwitz-rational]. Its development heretofore took a toll on the careers of a few young and promising mathematicians, and we still do not have a broadly accepted view of its short- and medium-term goals, and its accomplishments to date.
Thus, it would be futile to attempt an all-encompassing overview of the subject, that would also be mathematically meaningful. The present article takes an ahistorical approach, trying to piece together several concrete problems, strategies, and results developed beyond the purview of endoscopy, and to highlight their connections, whether they were developed with Langlands' proposal in mind, or not. In doing so, it does not do justice to the entire body of work that was and continues to be developed under the "Beyond Endoscopy" label, and which has produced useful insights into various problems of analytic or arithmetic nature. This account is certainly colored by my own limitations and my personal vision, and will not present everyone's point of view of Beyond Endoscopy.
We start by discussing local aspects. After introducing transfer operators as the main vehicle of Beyond Endoscopy, our point of departure becomes Langlands' "Singularités et transfert" [@Langlands-ST], where those operators are studied for the functorial lift between tori and the stable trace formula of $\operatorname{SL}_2$ (or $\operatorname{GL}_2$). Section [2](#sec:localtori){reference-type="ref" reference="sec:localtori"} concludes with a different way to realize this local transfer, replacing the trace formula by the Kuznetsov formula.
In Section [3](#sec:samerank){reference-type="ref" reference="sec:samerank"} we continue with the local theory, discussing the problem of stable transfer for identical $L$-groups. This problem only makes sense in the broader setting of the relative trace formula, where different spherical varieties can share the same $L$-group. A particular case is the transfer between the stable and Kuznetsov formulas for $\operatorname{SL}_2$ (or $\operatorname{GL}_2$), which, I argue later, lies behind the global theory of Beyond Endoscopy developed by Altuğ, as well as the thesis of Rudnick [@Rudnick], that predated Beyond Endoscopy.
In Section [4](#sec:Schwartz){reference-type="ref" reference="sec:Schwartz"} we take a look at various "Schwartz spaces" of functions that encode $L$-functions, while in Section [5](#sec:Hankel){reference-type="ref" reference="sec:Hankel"} we discuss ways to give meaning to the functional equations of $L$-functions, either by means of "nonlinear Fourier transforms" on appropriate spherical varieties, or by their analogs, that we call Hankel transforms, at the level of trace formulas.
Finally, in Section [6](#sec:global){reference-type="ref" reference="sec:global"} we discuss global comparisons of trace formulas "Beyond Endoscopy," in the light of the preceding local discussion.
A particularly bewildering theme beyond endoscopy is the role of Fourier transforms and the Poisson summation formula, on spaces that have no obvious linear structure, such as the invariant-theoretic quotient of $G$ by conjugation (the space of characteristic polynomials, when $G=\operatorname{GL}_n$). Such a formula is completely paradoxical, since the space of conjugacy classes does not carry any natural linear structure. In § [3.3](#ss:transferrankone){reference-type="ref" reference="ss:transferrankone"}, we will discuss symplectic structures which make the appearance of Fourier transforms more natural. The exact relationship between "beyond endoscopy" methods and symplectic geometry, however, is a largely unexplored topic, and poses a promising challenge for creating better foundations for the program in the near future.
## Notation
- For an affine $G$-variety $X$ over a field $F$, the invariant-theoretic quotient $\operatorname{Spec\,}F[X]^G$ will be denoted by $X\sslash G$. For $G$ acting on itself by conjugation, this quotient will be denoted $\frac{G}{G}$.
- When the meaning is clear from the context, for a variety $X$ over a local field $F$, we denote the set $X(F)$ simply by $X$.
- For a variety $X$ over a local field $F$, we *usually* denote by $\mathcal S(X)$ the space of Schwartz *measures* on the $F$-points of $X$. This space makes sense when $X$ is smooth, in which case "Schwartz measures" means "smooth of rapid decay" in the Archimedean case (see [@AGSchwartz]), and "smooth of compact support" in the non-Archimedean case. At various points in the article, we speculate on extensions of this notion for non-smooth varieties. *Sometimes* we use this notation for functions, instead of measures -- and we say so. Finally, the notation $\mathcal D(X)$ is used for Schwartz half-densities on $X(F)$. Often, $X(F)$ is homogeneous and admits an invariant positive measure $dx$ under a group $G(F)$, in which case one can easily pass from functions to half-densities and from half-densities to measures by multiplying by $(dx)^\frac{1}{2}$.
- For a group $G$ acting (on the left or right) on a space $X$, we denote by $\star$ the convolution action of measures on $G$ on functions/half-densities/measures on $X$. To be clear, convolution is the pushforward under the action map, i.e., $$(\mu\star f)(x) = \int_G f(x\cdot g^{-1}) \mu(g).$$
## Convention on citations
Since a significant portion of work on the subject has appeared in lecture notes and non-refereed preprints, I will take a cautious approach in stating and attributing results. Although unpublished results and claims will be discussed, only published results, and their authors, appear with the label "Theorem."
## Acknowledgments
I thank B. C. Ngô for useful feedback, and for sharing with me his notes on L. Lafforgue's Fourier kernel.
# Local stable transfer; the lift from tori to $\operatorname{SL}_2$. {#sec:localtori}
## Stable test measures, functions, and half-densities
We start with the local theory of "Beyond Endoscopy." Fix a local (locally compact) field $F$, and a quasisplit connected reductive group $G$ over $F$. Let $\mathcal S(G)$ be the space of Schwartz measures on $G=G(F)$, and denote the image of the pushforward map $$\mathcal S(G) \xrightarrow{p} \operatorname{Meas}(\frac{G}{G}),$$ by $\mathcal S(\frac{G}{G})^{\operatorname{st}}$, the space of *stable* test measures (for the adjoint quotient of the group). Explicitly, by the Weyl integration formula, if $f=\Phi \, dg$ is a Schwartz measure, written as the product of a Schwartz function by a Haar measure, its pushforward $f'=p_! f$ to $\frac{G}{G}$ reads, on the strongly regular semisimple set, $$\label{pushfmeasure} f' ([t]) = {\operatorname{SO}}_t (\Phi) |D(t)| d[t],$$ where
- $[t]$ denotes the stable conjugacy class of a rational, strongly regular element $t \in T^{\operatorname{reg}}(F)$, for some maximal torus $T \subset G$;
- $d[t]$ denotes a measure on the image of $T^{\operatorname{reg}}(F)$ in $\frac{G}{G} = T\sslash W$ whose pullback[^2] to $T^{\operatorname{reg}}$ coincides with some Haar measure $dt$ on $T(F)$;
- ${\operatorname{SO}}_t$ denotes the stable orbital integral of $\Phi$ over the stable orbit of $t$, defined with respect to the quotient measure $dg/dt$ on the conjugacy classes (which are isomorphic to $T(F)\backslash G(F)$);
- $D(t) = \det(I - {\operatorname{Ad}}_{\mathfrak g/\mathfrak t} (t))$ is the Weyl discriminant.
The complement of the strongly regular set is of $f'$-measure zero, and this completely describes $f'$.
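To illustrate [\[pushfmeasure\]](#pushfmeasure){reference-type="eqref" reference="pushfmeasure"} in the simplest case (a standard computation, recorded only for orientation), take $G=\operatorname{SL}_2$ and $t = \operatorname{diag}(y, y^{-1})$ in the split torus: the adjoint action of $t$ on $\mathfrak g/\mathfrak t$ has eigenvalues $y^{\pm 2}$, so $$D(t) = (1-y^2)(1-y^{-2}) = -(y-y^{-1})^2, \qquad |D(t)| = |y-y^{-1}|^2 = |x^2-4|,$$ where $x = y + y^{-1}$ is the trace coordinate on $\frac{G}{G}\simeq \mathbb A^1$. Thus, in this case, the pushforward measure weights the stable orbital integrals by $|x^2-4|$; the same discriminant reappears in the explicit formulas of § [2.3](#ss:toriSL2){reference-type="ref" reference="ss:toriSL2"}.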
As is clear, the pushforward measure contains all the information about the stable orbital integrals of the test function $\Phi$, discussed in the lectures of Kaletha and Taïbi [@Kaletha], but provides a more economical way to talk about those, without having to fix Haar measures. Moreover, we will see that measures appear more naturally in the transfer operators beyond endoscopy; all formulas involving measures can be translated to formulas involving orbital integrals, but one has to remember to multiply by the absolute value of the Weyl discriminant, according to [\[pushfmeasure\]](#pushfmeasure){reference-type="eqref" reference="pushfmeasure"}.
Nonetheless, there will also be instances when formulas are better expressed in terms of *half-densities*, instead of measures. We will encounter this in our discussion of Hankel transforms (§ [5.5.2](#sssHankel){reference-type="ref" reference="sssHankel"}), but also when we attempt to interpret some transfer operators in terms of quantization (§ [3.3](#ss:transferrankone){reference-type="ref" reference="ss:transferrankone"}).
Since there is no canonical notion of pushforward of a function or half-density, we need to invent one. This is what people typically do with orbital integrals, where they fix measures compatibly between the stable classes. In the setting above, we need to choose the Haar measures $dt$, as $T$ varies, in a compatible way. We do that by expressing such a Haar measure $dt$ as the absolute value of an invariant volume form $\omega_T$.[^3] The set of such volume forms can be identified with the $F$-points of the top exterior power $\bigwedge^{{\operatorname{rk}}G} \mathfrak t_{\bar F}^*$. For any two maximal tori $T$, $T'$, if we fix an element over the algebraic closure that conjugates them, $g^{-1} T_{\bar F} g = T'_{\bar F}$, $g\in G(\bar F)$, this induces an isomorphism $\bigwedge^{{\operatorname{rk}}G} \mathfrak t_{\bar F}^* = \bigwedge^{{\operatorname{rk}}G} {\mathfrak t_{\bar F}'}^*$. A compatible choice of Haar measures $dt$, $dt'$, then, is defined to be $dt = |\omega_T|$, $dt' = |\omega_{T'}|$, where $\omega_T$, $\omega_{T'}$ are $F$-rational invariant volume forms, equal under the above $\bar F$-isomorphism of top exterior powers, up to an element of $\bar F^\times$ of absolute value $1$.
Making such compatible choices allows us to talk about the stable orbital integrals of $\Phi$ as a function on the regular semisimple subset of $\frac{G}{G}(F)$, up to a scalar choice that is independent of the orbit. We will have to live with this ambiguity, locally, and will denote by $p_! \Phi$ the corresponding "pushforward function" on the strongly regular semisimple set, $$\label{pushffunction} p_!\Phi ([t]) = {\operatorname{SO}}_t (\Phi).$$
Finally, for the half-density $\varphi = \Phi (dg)^\frac{1}{2}$, we will interpolate between [\[pushfmeasure\]](#pushfmeasure){reference-type="eqref" reference="pushfmeasure"} and [\[pushffunction\]](#pushffunction){reference-type="eqref" reference="pushffunction"}, and define, having fixed compatible choices of Haar measures, $$\label{pushfdensity} p_!\varphi ([t]) = {\operatorname{SO}}_t (\Phi) |D(t)|^\frac{1}{2} (d[t])^\frac{1}{2}.$$
*Remark 1*. There is a Schwartz space of test measures $\mathcal S(\frac{G}{G})$ associated to the quotient (stack) of $G$ by conjugation, which corresponds to the zeroth homology of a Schwartz complex defined in [@SaStacks § 3]. This is identified with the direct sum of coinvariant spaces $\mathcal S(G')_{G'}$ of the Schwartz spaces of all pure inner forms $G'$ of the group, where we abuse notation again to write $G$ for $G(F)$, etc. However, in the group case the reader does not need to worry about these definitions, since we are only interested in *stable* coinvariants, and those are identified with the image of the pushforward map from the quasisplit form only.
The fact that it is enough to consider one form of the group is a special feature of the group case. For more general (say, smooth affine) $G$-varieties $X$, there is a well-defined pushforward map $\mathcal S(X/G)\to \operatorname{Meas}(X\sslash G)$, whose image will be called the space $\mathcal S(X/G)^{\operatorname{st}}$ of stable test measures for this quotient, but it cannot, in general, be identified with the coinvariant space of a single form. A simple example is Jacquet's relative trace formula for torus periods [@Jacquet-Waldspurger], where $X=(T\backslash \operatorname{PGL}_2)^2$ for $T$ a nonsplit torus, and $G=\operatorname{PGL}_2$, acting diagonally; here, considering the quaternion division algebra is essential for obtaining the complete space $\mathcal S(X/G)$. Of course, one can always choose an embedding $G\hookrightarrow \operatorname{GL}_n$; then the space $\mathcal S(X/G)$ can be identified with the coinvariant space $\mathcal S(X\times^G \operatorname{GL}_n)_{\operatorname{GL}_n}$, and $\mathcal S(X/G)^{\operatorname{st}}$ is its pushforward to $X\sslash G = (X\times^G \operatorname{GL}_n)\sslash\operatorname{GL}_n$.
## Transfer operators
In the (conjectural, in general) local Langlands correspondence, every tempered $L$-packet $\Pi_\phi$ for $G(F)$ gives rise to a stable character $\Theta_\phi$, defined as a suitable linear combination of the characters in the $L$-packet [@Kaletha], and it is expected [@Shahidi-Plancherel Conjecture 9.2] that those stable characters span a (weak-\*) dense subspace of the dual of $\mathcal S(\frac{G}{G})^{\operatorname{st}}$. Given a morphism between the $L$-groups of two quasisplit reductive groups, $$\iota: {^LH} \to {^LG},$$ the local Langlands conjecture associates to every tempered $L$-packet of $H$ a tempered $L$-packet of $G$, hence to any tempered stable character $\Theta_\phi$ on $H$ a tempered stable character $\Theta_{\iota_*\phi}$ of $G$.
*Remark 2*. These statements should also hold without the "tempered" adjective; however, it may not be reasonable to expect the techniques of "Beyond Endoscopy" to capture character relations for nontempered $L$-packets; the reason is that it is the characters of standard modules, not of their Langlands quotients, which behave nicely in families.
The question that Langlands posed in [@Langlands-ST] is whether one can describe a "transfer" map between spaces of stable test measures $$\label{transfermap}
\mathcal T_\iota: \mathcal S(\frac{G}{G})^{\operatorname{st}}\to \mathcal S(\frac{H}{H})^{\operatorname{st}},$$ whose adjoint would give rise to the transfer of stable characters predicted by functoriality, i.e., $$\label{transfercharacterization}
\Theta_\phi \circ \mathcal T_\iota = \Theta_{\iota_*\phi}$$ for every tempered $L$-packet $\phi$ of $H$.
We should clarify the assumptions and logical order here: One needs to be given the map $\Theta_\phi\mapsto \Theta_{\iota_*\phi}$ of stable local Langlands functoriality, and the density of stable characters, in order to make sense of [\[transfercharacterization\]](#transfercharacterization){reference-type="eqref" reference="transfercharacterization"} as a property characterizing the transfer map. This is indeed the case in the setting of [@Langlands-ST], where Langlands focuses on the case $H=$ a torus and $G=\operatorname{GL}_2$, where local functoriality is explicitly known. However, in general, one could hope that the transfer map [\[transfermap\]](#transfermap){reference-type="eqref" reference="transfermap"} would be described first, as was the case for transfer factors in the theory of endoscopy, and then employed in a comparison of (stable) trace formulas to prove local Langlands functoriality. In any case, whether local functoriality is taken as input or not, the question of the existence and explicit calculation of the transfer [\[transfermap\]](#transfermap){reference-type="eqref" reference="transfermap"} is important for establishing global functoriality.
As in the case of endoscopy, the transfer [\[transfermap\]](#transfermap){reference-type="eqref" reference="transfermap"} is only part of the local input needed for a global comparison. One would additionally need to control the image of the transfer for some "basic function" (or "basic measure") at unramified places, and for its translates under Hecke operators. In endoscopy, this basic measure is the unit element of the Hecke algebra, and the statement about its transfer is the Fundamental Lemma. Beyond endoscopy, as we will see, one needs to work with more general "basic measures." Indeed, according to Langlands' idea, one needs to insert $L$-functions into the trace formula in order to extract, via their poles, images of functorial lifts, and this is accomplished by modifying the basic measure. We will discuss this in the sections that follow. In any case, the fundamental lemma for the standard basic function (and the entire Hecke algebra) is a triviality if one knows the transfer relation [\[transfercharacterization\]](#transfercharacterization){reference-type="eqref" reference="transfercharacterization"}. In practice, however, as in the case of endoscopy, one needs to work in the reverse order: first establish the fundamental lemma for the Hecke algebra, then draw conclusions about character relations.
## Local transfer from tori to $\operatorname{SL}_2$ {#ss:toriSL2}
The questions posed above have only been fully addressed in the case $H=T=$ a $1$-dimensional torus and $G=\operatorname{SL}_2$. The map of $L$-groups is $$\iota: {^LH} = \mathbb{G}_m\rtimes \operatorname{Gal}(E/F) \to \check G = \operatorname{PGL}_2,$$ where $E/F$ is the splitting field of the torus $T$, and we feel free to abbreviate $L$-groups by their relevant quotients in every case. The torus could be split, in which case $E=F$; otherwise, the map above can be identified with the projectivization of the orthogonal group $\operatorname{O}_2$ embedded into $\operatorname{PGL}_2$.
In [@Langlands-ST], Langlands addressed the question of transfer for this case. In the non-Archimedean case, he relied on the character formulas of Sally and Shalika (for local fields of characteristic zero and residual characteristic $\ne 2$). In this case, the stable characters of $L$-packets for $\operatorname{SL}_2$ are known, hence it makes sense to appeal to [\[transfercharacterization\]](#transfercharacterization){reference-type="eqref" reference="transfercharacterization"} for a characterization of the transfer map. The results of Langlands can be summarized as follows, both in the Archimedean and in the non-Archimedean case:
**Theorem 3** (Langlands). *In the setting above, the transfer operator $\mathcal T_\iota$ satisfying [\[transfercharacterization\]](#transfercharacterization){reference-type="eqref" reference="transfercharacterization"} exists.*
This is proven in § 2 of that article, working backwards from [\[transfercharacterization\]](#transfercharacterization){reference-type="eqref" reference="transfercharacterization"}, using explicit character formulas. As is common in the theory of the trace formula, Langlands works with orbital integrals of test functions, but the statements become simpler if we work with measures, instead. The formula [\[pushfmeasure\]](#pushfmeasure){reference-type="eqref" reference="pushfmeasure"} relating the pushforward of a measure $f = \Phi dg$ and stable orbital integrals, in the notation of [@Langlands-ST (2.6)] reads $$p_!f (a_G) = |\Delta(a_G)|\lambda(a_G) \operatorname{Orb}^{\operatorname{st}}(a_G, \Phi) da_G.$$
Despite the case-by-case analysis used in the proof, the formula for the transfer operator is surprisingly uniform and simple. It is essentially a formula discovered many decades earlier by Gelfand, Graev, and Piatetski-Shapiro [@GGPS 2.5.4.(7)]. The formula states that if the torus $T$ is nonsplit, and we denote by $\eta$ the quadratic character associated to its splitting field, then the correspondence $\chi \mapsto \Theta_\chi$ between characters of $T$ and stable characters for $\operatorname{SL}_2$ is given by $$\label{GGPS}
\Theta_\chi = \frac{2}{\operatorname{Vol}(T)} \frac{\eta(x)}{|x|} {\star_+} S_\chi.$$ Here, $\star_+$ denotes additive convolution on the affine line, identified both with the Steinberg--Hitchin base of $\operatorname{SL}_2$ (through the trace map) and with the quotient $T\sslash \mu_2$ for $T$, where $\mu_2$ acts by inversion. The identification of the latter is by means of the trace composed with any embedding $T\hookrightarrow \operatorname{SL}_2$. Equivalently, fixing an isomorphism $T_{\bar F} = \mathbb{G}_m{_{\bar F}}$ over the algebraic closure, the coordinate $y$ on $\mathbb{G}_m$ corresponds to $x= y+ y^{-1}$ on $\mathbb A^1$. This gives a well-defined morphism $T\ni t\mapsto x(t) \in \mathbb A^1$ over $F$, and a well-defined "discriminant" function $D(t) = x(t)^2-4$, which over the algebraic closure is equal to $(y-y^{-1})^2$. Finally, $S_\chi$ is the pushforward to $\mathbb A^1$ of the measure $\chi(t) dt$ on $T$; the choice of Haar measure $dt$ does not matter, since it is cancelled by the factor $\operatorname{Vol}(T)$ in the denominator.
The formula [\[GGPS\]](#GGPS){reference-type="eqref" reference="GGPS"} does not literally make sense when $T$ is split (the distribution $\frac{dx}{|x|}$ on the affine line is not well-defined, and $\operatorname{Vol}(T)$ is infinite), but the first factor in the convolution can be understood as a delta function at $0$, and then the formula remains true: $\Theta_\chi$ is equal to $S_\chi$ in this case, divided by a suitable Haar measure $dx$, by a well-known formula for the character of principal series representations; explicitly: $$\Theta_\chi(x) = \begin{cases} \frac{\chi(t) + \chi^{-1}(t)}{|t-t^{-1}|},& \mbox{ if } x= t+t^{-1}, \,\, t\in F^\times; \\ 0,& \mbox{otherwise.} \end{cases}$$
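As a consistency check (an elementary computation, included only to make the comparison explicit), one can verify the split case directly. Here $T=F^\times$ and $dt$ is its multiplicative Haar measure; the map $t\mapsto x = t+t^{-1}$ is $2$-to-$1$ away from $t=\pm 1$, with $dx = |t - t^{-1}|\, dt$, so the pushforward of $\chi(t)\, dt$ is $$S_\chi = \frac{\chi(t)+\chi^{-1}(t)}{|t-t^{-1}|}\, dx \qquad (x = t+t^{-1}),$$ whose density against $dx$ is exactly the character $\Theta_\chi$ displayed above.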
The uniformity of the two cases was explained in [@SaTransfer2 §10.4], from which we can extract the following explicit formula for the transfer operator $\mathcal T_\iota$, applied to an element $f\in \mathcal S(\frac{G}{G})^{\operatorname{st}}$, written as $f(x) = \Phi(x) dx$, with $dx$ a measure on the affine line:
$$\label{transfer-TSL2}
\mathcal T_\iota(f) (t) = dt \cdot \lambda(\eta,\psi) \int_F \eta(uv) \int_F \psi(v x(t) - uv) \Phi(u) \, du \,\, dv,$$ understood as Fourier transform of $\Phi$, followed by multiplication by the character $\eta$, followed by inverse Fourier transform. Here, $\psi$ denotes a fixed unitary character of the additive group $F$, and $\lambda(\eta,\psi)$ is a scalar ratio of gamma factors, see [@SaTransfer2 (7.14)]. The Haar measure $dt$ on the torus is the one that can locally be written as $|D(t)|^{-\frac{1}{2}} dx(t)$, using the fact that the map $t\mapsto x(t)$ is a local homeomorphism away from the zeroes of the discriminant.
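This is only a rewriting of [\[transfer-TSL2\]](#transfer-TSL2){reference-type="eqref" reference="transfer-TSL2"}, but it displays the composition structure: since $\eta$ is quadratic, $\eta(uv)=\eta(u)\eta(v)$, and the double integral can be regrouped as $$\mathcal T_\iota(f) (t) = dt \cdot \lambda(\eta,\psi) \int_F \eta(v)\, \psi(v\, x(t)) \left( \int_F \eta(u)\, \Phi(u)\, \psi(-uv)\, du \right) dv,$$ that is, an $\eta$-twisted Fourier transform in the variable $u$, followed by an $\eta$-twisted inverse Fourier transform in $v$, evaluated at $x(t)$.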
## Kuznetsov transfer from tori to $\operatorname{SL}_2$ {#ss:Kuztori}
Let us now discuss a different type of transfer for this functorial lift, which uses the Kuznetsov formula for $\operatorname{SL}_2$ instead of the stable trace formula, and was initiated in the thesis of Venkatesh [@Venkatesh]. The local theory that follows was developed in [@SaTransfer2 § 10].
The functorial lift that we want to discuss belongs to the generalization of the Langlands conjectures afforded by the "relative" Langlands program. More specifically, in the discussion above we will replace the group $G=\operatorname{SL}_2$ by its Whittaker model, and the adjoint quotient of the trace formula by the corresponding quotient for the *Kuznetsov formula*. That is, instead of Schwartz measures on $G$ we will consider the space $\mathcal S((N,\psi)\backslash G)$ of Whittaker Schwartz measures, and instead of the stable coinvariant space $\mathcal S(\frac{G}{G})$ we consider the coinvariant space $\mathcal S((N,\psi)\backslash G/(N,\psi)) = \mathcal S((N,\psi)\backslash G)_{(N,\psi)}$.
Here, $N$ is the upper triangular unipotent subgroup, identified with the additive group in the standard way, and $\psi:F\to \mathbb C^\times$ is a nontrivial unitary character. It will be convenient, at first, to think of the space $\mathcal S((N,\psi)\backslash G/(N,\psi))$ as a space of measures on the invariant-theoretic quotient $N\backslash G\sslash N$; there is no need to add the adjective "stable" here, as general orbits are stable, but because of the character $\psi$ we need to adopt some (non-canonical) conventions, in order to define a pushforward map $$\mathcal S((N,\psi)\backslash G/(N,\psi))\to \operatorname{Meas}(N\backslash G\sslash N).$$ We will follow the convention of [@SaTransfer1 § 2.2.1], according to which the image of a measure $\Phi dg \in \mathcal S(G)$ in $\mathcal S((N,\psi)\backslash G/(N,\psi))$ (where $\Phi$ is a Schwartz function on $G$) will be identified with the measure $\phi(\zeta) |\zeta| d\zeta$ on $\mathbb A^1\simeq N\backslash G\sslash N$, where $\phi$ is the function of orbital integrals $$\phi(\zeta)=\int_{F^2} \Phi \left(\begin{pmatrix} 1& x \\ &1\end{pmatrix} \begin{pmatrix} & -\zeta^{-1} \\ \zeta \end{pmatrix} \begin{pmatrix} 1& y \\&1 \end{pmatrix} \right) \psi^{-1}(x+y) dx dy.$$ The Haar measure $dg$ here is such that on the open Bruhat cell it coincides with $|\zeta| d\zeta\, dx \, dy$, in the coordinates of the formula.
The dual group of the Whittaker space remains $\check G$; therefore, the same map $\iota$ of $L$-groups as above predicts the existence of a transfer operator $$\mathcal T_\iota: \mathcal S((N,\psi)\backslash G/(N,\psi)) \to \mathcal S(T),$$ behaving well with respect to characters. For the Kuznetsov formula, "characters" refers to the "relative characters," also called "Bessel distributions/characters" in this case. For a general discussion of relative characters, I point to the article of Beuzart-Plessis [@Beuzart-Plessis]. Recall that, unlike in the group case, the relative characters are only canonical up to a scalar depending on the representation, and a choice of Plancherel measure is needed to fix that scalar. For the Whittaker model we can use the group Plancherel measure, reducing the ambiguity to a scalar that does not depend on the representation.
If we then denote by $J_\chi$ the Bessel character attached to the tempered $L$-packet associated with the unitary character $\chi$ of $T$ (more precisely, to the unique generic element of that $L$-packet), the following theorem was proven in [@SaTransfer1 Theorem 5.0.1], [@SaTransfer2] (stated up to choices of Haar measures that we will not prescribe here):
**Theorem 4**. *There is a transfer map $$\mathcal T_\iota: \mathcal S((N,\psi)\backslash G/(N,\psi)) \to \mathcal S(T)$$ with the property that $\mathcal T_\iota^*\chi = J_\chi$ for every unitary character $\chi$. Writing $f(\zeta) = \Phi(\zeta) d\zeta$, it is given by the formula $$\label{transfer-Venkatesh}
\frac{\mathcal T_\iota(f) (t)}{dt} = \lambda(\eta,\psi) \int \Phi\left(\frac{x(t)}{y}\right) \eta(x(t)y) |y|^{-1} \psi(y) dy.$$ The function $T\ni t\mapsto x(t) \in \mathbb A^1$ and the scalar $\lambda(\eta,\psi)$ are the ones mentioned in § [2.3](#ss:toriSL2){reference-type="ref" reference="ss:toriSL2"}, and $dt$ is a Haar measure on $T=T(F)$, (suitably) compatible with $d\zeta$.*
In other words, up to inverting the argument, the transfer is given by a simple Fourier transform of the measure $f\cdot \eta$.
There is more to the above theorem than stated: The transfer map extends to a larger, "nonstandard" space of test measures $$\mathcal S_{L({\operatorname{Ad}},\eta |\bullet|)} ((N,\psi)\backslash \operatorname{SL}_2/(N,\psi)) \supset \mathcal S ((N,\psi)\backslash \operatorname{SL}_2/(N,\psi)),$$ which, at unramified non-Archimedean places, contains a distinguished "basic vector," $f_{L({\operatorname{Ad}},\eta |\bullet|)}$, tailored to insert $L$-functions into the spectral side of the Kuznetsov formula. The insertion of $L$-functions is essential for the global comparison -- we will have more to say about it later.
Moreover, we have a fundamental lemma for this basic vector: its image under the transfer operator is equal, up to an explicit constant $c$, to the basic vector $1_{T(\mathfrak o)} \in \mathcal S(T)$. Furthermore, this fundamental lemma extends to the Hecke algebra: We have $$\label{FL-torus}
\mathcal T_\iota(h\star f_{L({\operatorname{Ad}},\eta |\bullet|)}) = c\cdot \iota^*(h)\star 1_{T(\mathfrak o)},$$ where $\iota^*$ is the map from the unramified Hecke algebra of $G$ to the unramified Hecke algebra of $T$ induced by the map of $L$-groups $\iota$ via the Satake isomorphism.
## Some early takeaways
Some takeaways from our discussion of transfer operators for the lift from tori to $\operatorname{SL}_2$ are the following:
1. At least in this low-rank case, the transfer operators are computable; most significantly, *the formulas involve abelian Fourier transform*, despite the fact that they realize functorial lifts to the non-abelian group $\operatorname{SL}_2$. This is a very promising sign! It would be desirable, however, to achieve more conceptual understanding of these formulas, in order to be able to guess the transfer operators for higher-rank cases, where the functoriality conjecture is still open.
2. The form of the transfer operators makes them suitable, at least in principle, for an application of the Poisson summation formula over the adeles. This was accomplished in Venkatesh's thesis [@Venkatesh] for the transfer operator [\[transfer-Venkatesh\]](#transfer-Venkatesh){reference-type="eqref" reference="transfer-Venkatesh"}, as we will see in § [6.4](#ss:Venkatesh){reference-type="ref" reference="ss:Venkatesh"}. Even if the operators have the form of Fourier transforms, proving a global Poisson summation formula in this setting is quite nontrivial, since the transforms are applied to nonstandard spaces of test functions/measures. We will discuss such global comparisons in Section [6](#sec:global){reference-type="ref" reference="sec:global"}.
3. The formula [\[transfer-Venkatesh\]](#transfer-Venkatesh){reference-type="eqref" reference="transfer-Venkatesh"} for the transfer to the Kuznetsov formula is simpler than the one [\[transfer-TSL2\]](#transfer-TSL2){reference-type="eqref" reference="transfer-TSL2"} for the transfer to the stable trace formula. In fact, [\[transfer-TSL2\]](#transfer-TSL2){reference-type="eqref" reference="transfer-TSL2"} has been interpreted in [@SaTransfer2] as the composition of [\[transfer-Venkatesh\]](#transfer-Venkatesh){reference-type="eqref" reference="transfer-Venkatesh"} with another transfer operator, which we will encounter in Theorem [Theorem 5](#Rudnick-local){reference-type="ref" reference="Rudnick-local"}. The Kuznetsov formula seems to be the nodal point for all transfer operators known to date.
## Comparisons involving the adjoint quotient in higher rank
Generalizing the formula [\[GGPS\]](#GGPS){reference-type="eqref" reference="GGPS"} of Gelfand, Graev, and Piatetski-Shapiro to higher rank (in the non-Archimedean case) was the objective of the thesis of D. Johnstone [@Johnstone]. His work studies the stable transfer for a map of $L$-groups $${^LT}\to\operatorname{PGL}_n,$$ when $T$ is an $n$-dimensional elliptic torus. It relies on character formulas and local character expansions for tame supercuspidal representations due to Adler, DeBacker, Murnaghan, and Spice, and is more complete when $n$ is a prime (different from the residue characteristic). The generalization of [\[GGPS\]](#GGPS){reference-type="eqref" reference="GGPS"} to higher rank is considerably more complicated, and I point the reader to Johnstone's paper for the formula. An important question is whether the resulting transfer operator can be brought to a form where it could be shown to satisfy a Poisson summation formula on the Steinberg--Hitchin base, at least "in principle."
D. Johnstone and Z. Luo [@Johnstone-Luo] have studied transfer operators for another, very important family of functorial lifts, namely, the lifts corresponding to the symmetric power representations of the dual group of $\operatorname{GL}_2$: $$\operatorname{Sym}^n: \operatorname{GL}_2\to \operatorname{GL}_{n+1}.$$ In that paper, they compute a transfer operator or, rather, a distribution kernel, which pulls back "most" tempered characters of $\operatorname{GL}_2$ to the tempered characters of their $\operatorname{Sym}^n$-lift to $\operatorname{GL}_{n+1}$. The formula has a very simple form, but does not produce the desired outcome for twists of the Steinberg representation. It would be interesting to find the correct tweak of their formula that does that, and to rigorously show that it corresponds to a transfer operator between spaces of stable test measures.
# Same-rank comparisons and separation of the Arthur classes {#sec:samerank}
## Stable Kuznetsov--Selberg transfer for $\operatorname{SL}_2$ {#ss:KuzSL2}
In his 1990 PhD thesis [@Rudnick], Z. Rudnick showed how to use the Petersson formula -- that is, the simplified version of the Kuznetsov formula for holomorphic cusp forms -- to prove the Eichler--Selberg trace formula (the trace formula for holomorphic cusp forms).
In his thesis and subsequent work [@Altug1; @Altug2; @Altug3], A. Altuğ studied, among other questions, the problem of extracting the "Ramanujan" spectrum (that is, the complement of the 1-dimensional characters in the set of automorphic representations) from the Selberg trace formula for $\operatorname{GL}_2$. This was preceded by the work of Frenkel, Langlands, and Ngô [@FLN], who sketched an argument for removing the contribution of the trivial representation, via an adelic Poisson summation formula.
All the aforementioned works are related, and have an underlying local theory. The local theory can be expressed in terms of a transfer operator $$\mathcal S((N,\psi)\backslash\operatorname{SL}_2/(N,\psi)) \to \mathcal S(\frac{\operatorname{SL}_2}{\operatorname{SL}_2})^{\operatorname{st}}$$ between spaces of test measures for the Kuznetsov and the stable trace formula for $\operatorname{SL}_2$. This transfer operator should give rise to a putative comparison between the stable trace formula and the Kuznetsov formula for $\operatorname{SL}_2$ (generalizing Rudnick's comparison), with an extra term corresponding to the trivial representation, since that is included in the spectrum of the former, but not of the latter. We will discuss this comparison, and its relation to the aforementioned works, in the global discussion in § [6.1](#ss:Altug){reference-type="ref" reference="ss:Altug"}.
Consider the space of test measures for the Kuznetsov formula of the group $G=\operatorname{SL}_2$, $\mathcal S((N,\psi)\backslash G/(N,\psi))$, understood as a subspace of measures on the affine line with coordinate $\zeta$, as in § [2.4](#ss:Kuztori){reference-type="ref" reference="ss:Kuztori"}. Recall that $\mathcal S(\frac{G}{G})^{\operatorname{st}}$ denotes the space of stable test measures for the Selberg trace formula, i.e., the image of the pushforward $\mathcal S(G) \xrightarrow{\operatorname{tr}_*} \operatorname{Meas}(\mathbb A^1)$. The two spaces of measures on the affine line are completely different: The Kuznetsov space involves measures that are smooth of rapid decay away from a neighborhood of $\zeta=0$, and over non-Archimedean fields involves local Kloosterman sums (see [@SaRankone § 1.3]). The Selberg space involves measures that are smooth of rapid decay outside of the discriminant locus $x=\pm 2$ (recall that the coordinate $x$, here, corresponds to the trace), and in neighborhoods of this locus behave like multiplicative characters, see [@SaRankone §5.3]. Nonetheless, one can describe a transfer operator between the two; to express it, denote, for any $s\in \mathbb C$, $$\label{Ds}
D_s(x) = \psi(x) |x|^s d^\times x = \psi(x) |x|^{s-1} dx,$$ a measure on $F^\times$.
**Theorem 5** ([@SaTransfer1 Theorem 4.2.1]). *Consider the action of $F^\times$ on $F$, and denote by $\star$ the pushforward of measures under the action map (multiplicative convolution). Multiplicative convolution by $D_1$, $f\mapsto D_1\star f$, gives rise to an injection $$\mathcal T: \mathcal S((N,\psi)\backslash G/(N,\psi)) \to \mathcal S(\frac{G}{G})^{\operatorname{st}},$$ with the property that $\mathcal T^* \Theta_\Pi = J_\Pi$ for every tempered $L$-packet $\Pi$. Here, $\Theta_\Pi$ is the stable character of the $L$-packet (the sum of the characters over elements of the $L$-packet), and $J_\Pi$ is the Kuznetsov relative character (Bessel distribution) for the generic element of the $L$-packet (see § [2.4](#ss:Kuztori){reference-type="ref" reference="ss:Kuztori"}).*
To explicate, this convolution operator applied to a measure of the form $f(\zeta)=\Phi(\zeta) d\zeta$ is the Fourier transform of the function $y\mapsto |y|^{-1} \Phi(y^{-1})$: $$D_1\star f (x) = dx\cdot \int_y |y|^{-1} \Phi(y^{-1}) \psi(yx) dy.$$ The inverse operator sends a measure $f'(x)=\Phi'(x) dx$ in the trace variable $x$ to the measure $$\label{inversetransfer} |\zeta|^{-1} d\zeta\cdot \int_x \Phi'(x) \psi^{-1}(\frac{x}{\zeta}) dx$$ in the variable $\zeta$ for the Kuznetsov quotient.
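To verify the first of these formulas (an elementary change of variables, recorded for the reader's convenience): by [\[Ds\]](#Ds){reference-type="eqref" reference="Ds"}, $D_1 = \psi(y)\, dy$, and the multiplicative convolution $D_1\star f$ is the pushforward of $\psi(y)\, dy \times \Phi(\zeta)\, d\zeta$ under $(y,\zeta)\mapsto x = y\zeta$; substituting $y = x/\zeta$, and then $u=\zeta^{-1}$, gives $$D_1\star f(x) = dx\cdot \int \psi\left(\frac{x}{\zeta}\right) \Phi(\zeta)\, |\zeta|^{-1}\, d\zeta = dx\cdot \int |u|^{-1}\, \Phi(u^{-1})\, \psi(ux)\, du,$$ as claimed; inverting this Fourier transform (with the appropriately dual Haar measures) yields [\[inversetransfer\]](#inversetransfer){reference-type="eqref" reference="inversetransfer"}.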
This additive Fourier transform of $\Phi'$ is prominent in much of the literature of "Beyond Endoscopy" [@FLN; @Altug1; @Altug2; @Altug3], where the Kuznetsov formula is not explicitly invoked. One of the goals of these papers has been the removal of the trivial representation from the stable trace formula of $\operatorname{SL}_2$.[^4] The relevant local observation here is that the Fourier transform of the measure $f' \in \mathcal S(\frac{G}{G})^{\operatorname{st}}$, evaluated at $0$ (that is, at $\zeta^{-1}=0$, in the coordinates above), is simply the integral of $f'$, i.e., the trace of $\tilde f'$ at the trivial representation (where $\tilde f' \in \mathcal S(G)$ is any preimage of $f'$). How to utilize this easy observation globally is one of the questions that the aforementioned papers tried to address; it will be discussed in § [6.1](#ss:Altug){reference-type="ref" reference="ss:Altug"}.
The theorem above reveals more about the nature of this Fourier transform: It is the operator of functoriality from the Selberg to the Kuznetsov formula. Note that [\[inversetransfer\]](#inversetransfer){reference-type="eqref" reference="inversetransfer"} does not preserve rapid decay in the variable $\zeta$: for example, it makes sense to evaluate the integral appearing in [\[inversetransfer\]](#inversetransfer){reference-type="eqref" reference="inversetransfer"} at $\zeta=\infty$; as mentioned, this computes the total mass of the measure $f'$ (the character of the trivial representation). This means that, to turn the transfer operator $\mathcal T$ into an isomorphism, we need to apply it to an *enlargement* $$\mathcal S_{L({\operatorname{Ad}},1)}((N,\psi)\backslash G/(N,\psi))\supset \mathcal S((N,\psi)\backslash G/(N,\psi))$$ of the standard space of test measures, which has both a representation-theoretic and an arithmetic role. Both can be understood more easily in terms of global trace formulas: A direct comparison between the stable Selberg and Kuznetsov trace formulas for $\operatorname{SL}_2$ would be impossible, because:
1. The former contains a contribution from the trivial representation, which does not appear in the latter.
2. The traces of Hecke operators on the latter are weighted by factors of the form $$\frac{(\mbox{\small{local factors at }}S)}{L^S(\pi, {\operatorname{Ad}}, 1)},$$ where $S$ is a finite number of places, and $L^S(\pi, {\operatorname{Ad}}, 1)$ denotes the partial adjoint $L$-function of the automorphic (say, cuspidal) representation $\pi$.
Using the enlarged space $\mathcal S_{L({\operatorname{Ad}},1)}((N,\psi)\backslash G/(N,\psi))$ of test measures resolves the two issues, at least in principle. We will not get into the explicit description of this space here (see [@SaTransfer1 § 2.2]), because it is not very conceptual, and certainly not well-understood in higher rank. What is understood, and is the source of the notation, is that at non-Archimedean places it contains a "basic vector" $f_{L({\operatorname{Ad}},1)}$ which is a generating series for the local unramified adjoint $L$-value at $1$. More precisely, assume that $F$ is non-Archimedean with residue field of cardinality $q$, and consider the following formal series in the representation ring of $\check G=\operatorname{PGL}_2$: $$\hat f:=\sum_{n\ge 0} q^{-n} \operatorname{Sym}^n {\operatorname{Ad}}.$$ The trace of a semisimple conjugacy class $c$ in $\check G$ on $\hat f$ computes the local adjoint $L$-value $L(\pi,{\operatorname{Ad}},1)$ for the irreducible unramified representation $\pi$ of $G=G(F)$ with Satake parameter $c$. The inverse Satake transform $f$ of $\hat f$ is a series of elements in the Hecke algebra of $G$, which converges as a measure on $G$. The corresponding series of pushforwards to $\mathcal S((N,\psi)\backslash G/(N,\psi))$ converges as a measure on the affine line, to the measure that we call the "basic vector" $f_{L({\operatorname{Ad}},1)}\in \mathcal S_{L({\operatorname{Ad}},1)}((N,\psi)\backslash G/(N,\psi))$.
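The identity behind this claim is the standard generating series for symmetric powers (recorded here for convenience): for a finite-dimensional representation $r$ of $\check G$ and a semisimple element $c$, $$\sum_{n\ge 0} q^{-n}\operatorname{tr}\left(\operatorname{Sym}^n r(c)\right) = \det\left(1 - q^{-1} r(c)\right)^{-1},$$ which, for $r = {\operatorname{Ad}}$ and $c$ the Satake parameter of $\pi$, is precisely the unramified local value $L(\pi, {\operatorname{Ad}}, 1)$.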
Theorem [Theorem 5](#Rudnick-local){reference-type="ref" reference="Rudnick-local"} extends to identify (up to an explicit local zeta factor) the transfer of this basic vector with the basic vector of $\mathcal S(\frac{G}{G})$ (the image of the unit in the Hecke algebra), see [@SaTransfer1 Theorem 4.2.1].
## Stable Kuznetsov--Arthur transfer for $\operatorname{SL}_n$ {#ss:KuzSLn}
We briefly discuss possible generalizations to $G=\operatorname{SL}_n$, for larger values of $n$.
The question originally posed in this setting is how to remove the nontempered contributions from the stable trace formula of $G$, but one could also formulate the deeper question of how to compare the stable Arthur and Kuznetsov trace formulas. In [@Arthur-stratification], Arthur formulates a hypothesis that the non-tempered contributions to the trace formula can be obtained by evaluating the Fourier transform of a test measure $f\in \mathcal S(\frac{G}{G})^{\operatorname{st}}$ at certain subvarieties of the "dual" to the Steinberg--Hitchin base $\frac{G}{G} \simeq \mathbb A^{n-1}$.
In ongoing work, the author and Chen Wan are developing a full comparison between the Kuznetsov and stable Arthur--Selberg trace formulas. This comparison uses local transfer operators described by integral formulas which have been explicitly evaluated in low rank, and which paint a different picture from the one obtained by treating $\frac{G}{G}$ as an $(n-1)$-dimensional vector space.
For simplicity, I only describe the case of $n=3$: Consider the additive character $\psi: F\to \mathbb C^\times$ as a generic character of the upper triangular unipotent $N\subset \operatorname{SL}_3$ in the standard way (i.e., applying it to the sum of the entries right above the diagonal), and use the matrices $$\begin{pmatrix} && \zeta_3 \\ & \zeta_2 \\ \zeta_1 \end{pmatrix}$$ (with $\zeta_1\zeta_2\zeta_3=1$) to represent the space of test measures $\mathcal S((N,\psi)\backslash G/(N,\psi))$ for the Kuznetsov formula as a space of measures on a $2$-dimensional space, as in § [2.4](#ss:Kuztori){reference-type="ref" reference="ss:Kuztori"}. We identify the matrix above with the element ${\operatorname{diag}}(\zeta_1, \zeta_2,\zeta_3)$ of the Cartan group $T = B/N$, taking $B$ to be the Borel subgroup of upper triangular matrices; we denote the simple roots by $\alpha_1= \frac{\zeta_1}{\zeta_2}$ and $\alpha_2 = \frac{\zeta_2}{\zeta_3}$.
For the Steinberg--Hitchin base $\frac{\operatorname{SL}_3}{\operatorname{SL}_3}$ we will use *rational* coordinates $(u_1, u_2, u_3)$, with $u_1(g) = \operatorname{tr}g$, $u_2(g) = \frac{\operatorname{tr}\wedge^2 g}{\operatorname{tr}g}$, $u_3(g) = \frac{\det(g)}{\operatorname{tr}\wedge^2 g}$. (Obviously, $\det(g)=1$ and $u_1 u_2 u_3 =1$, but the formulas that follow carry over verbatim to $\operatorname{GL}_3$.) In upcoming work with C. Wan, we describe a transfer operator $$\mathcal T: \mathcal S((N,\psi)\backslash G/(N,\psi)) \to \mathcal S(\frac{G}{G})^{\operatorname{st}},$$ with the property that $\mathcal T^* \Theta_\Pi = J_\Pi$ for every tempered $L$-packet $\Pi$ (notation as in Theorem [Theorem 5](#Rudnick-local){reference-type="ref" reference="Rudnick-local"}), and given by the following formula: $$\label{SWaneq}
\mathcal Tf = D'_{\check\alpha_1, 1} \star D_{\check\alpha_2,1} \star \left(\psi^{-1}(e^{\alpha_1}) \psi(e^{\alpha_2}) D_{\check\alpha_1+\check\alpha_2,2} \star f \right).$$ Here, for $s\in \mathbb C$ and a cocharacter $\lambda:\mathbb{G}_m\to T$, we denote by $D_{\lambda, s}$ the pushforward of the measure $D_s$ of [\[Ds\]](#Ds){reference-type="eqref" reference="Ds"} to $T$ (under $\lambda$); we denote by $D_{\lambda, s}'$ the pushforward of the analogous measure, when $\psi$ is replaced by $\psi^{-1}$; and we denote by $\psi(e^\alpha)$ the operator of multiplication by the function $\psi\circ e^\alpha$, where $e^\alpha: T\to \mathbb{G}_m$ is the character corresponding to the root $\alpha$.[^5]
The form of the transfer operator (including the choice of coordinates) may seem, and is, mysterious, but at least some degenerate limit of it, when we let the spaces degenerate to their asymptotic cones, can be understood along the lines of [@SaTransfer1 § 4.3, § 5]; in that limit, the factor $\psi^{-1}(e^{\alpha_1}) \psi(e^{\alpha_2})$ would disappear, and we just have Fourier convolutions along the positive coroot cocharacters. In any case, an important observation is that it is not a simple, $2$-dimensional Fourier transform on the $2$-dimensional Steinberg--Hitchin base that produces the transfer operator to the Kuznetsov formula (and hence, the separation of the non-tempered spectrum, which will again be responsible for a nonstandard space of Kuznetsov test measures, if we invert the operator $\mathcal T$), but a $3$-dimensional Fourier transform (together with the little-understood "correction factor" $\psi^{-1}(e^{\alpha_1}) \psi(e^{\alpha_2})$), along directions determined by the positive coroots. Higher-rank calculations for $\operatorname{GL}_n$ also support the idea that the relevant transfer operator involves $\frac{n(n-1)}{2}$ Fourier convolutions, together with correction factors.
## Transfer between relative trace formulas in rank 1, and a symplectic interpretation {#ss:transferrankone}
The comparison of Theorem [Theorem 5](#Rudnick-local){reference-type="ref" reference="Rudnick-local"} generalizes to arbitrary affine, homogeneous spherical varieties $X$ whose dual group is $\operatorname{SL}_2$ or $\operatorname{PGL}_2$, up to possibly replacing the variety with a finite cover. That is (up to such replacement), there is a transfer operator $$\mathcal T: \mathcal S((N,\psi)\backslash G^*/(N,\psi)) \to \mathcal S((X\times X)/G)^{\operatorname{st}},$$ from Kuznetsov test measures on the group $G^*=\operatorname{PGL}_2$ or $\operatorname{SL}_2$ (depending on the dual group of $X$), to stable test measures for the relative trace formula of $X$. The case of the Selberg trace formula (Theorem [Theorem 5](#Rudnick-local){reference-type="ref" reference="Rudnick-local"}) corresponds to the special case $X={\operatorname{SO}}_3\backslash{\operatorname{SO}}_4$, but it is quite remarkable that essentially the same transfer operator, with small modifications, like replacing the distribution $D_1$ by $D_s$ for an appropriate $s$ (which, in turn, has interesting connections to the $L$-value associated with the spherical variety $X$), gives rise to the pertinent operators of functoriality.
We will recall the theorem of [@SaRankone] for the family of spaces $X={\operatorname{SO}}_{2n-1}\backslash{\operatorname{SO}}_{2n}$ only, where the transfer operator is given by a $1$-dimensional Fourier transform, but several more cases are treated in loc. cit., with a $2$-dimensional Fourier transform. The benefit of discussing this family is that the result was subsequently reproven by W. T. Gan and X. Wan [@Gan-Wan] using the theta correspondence, and their work allows us to formulate statements about characters (while [@SaRankone] only established the geometric transfer).
For the theorem that follows, we can take $\mathcal S((X\times X)/G)^{\operatorname{st}}$ to be the pushforward of Schwartz measures from $X\times X$ to the affine line, where $X$ is the unit "sphere" on a *split* quadratic space of dimension $2n$, and the affine line is identified with the quotient $(X\times X)\sslash G$ via the quadratic pairing. (As in the case of $\operatorname{SL}_2$, this is enough, and one does not obtain a larger space of test measures by considering pure inner forms.) We will formulate functoriality for characters in terms of the theta correspondence, following [@Gan-Wan]; for this, we need to choose the data of the theta correspondence so that the theta lift of an $(N,\psi)$-generic tempered representation on $\operatorname{SL}_2$ to ${\operatorname{SO}}_{2n}$ is distinguished by the split form of ${\operatorname{SO}}_{2n-1}$; see [@Gan-Wan Proposition 5.1] for details.
**Theorem 6** (S., Gan--Wan). *Let $G^*=\operatorname{SL}_2$, $G={\operatorname{SO}}_{2n}$, and $X={\operatorname{SO}}_{2n-1}\backslash {\operatorname{SO}}_{2n}$, as above. The operator $$\mathcal T f (x) = |x|^{n-2} D_{3-n}\star f (x)$$ gives rise to an injection $$\mathcal T: \mathcal S((N,\psi)\backslash G^*/(N,\psi)) \to \mathcal S((X\times X)/G)^{\operatorname{st}},$$ with the property that $\mathcal T^* I_{\theta(\Pi)} = J_\Pi$ for every tempered $L$-packet $\Pi$ of $\operatorname{SL}_2$. Here, $J_\Pi$ is the Bessel character as in Theorem [Theorem 5](#Rudnick-local){reference-type="ref" reference="Rudnick-local"}, and $I_{\theta(\Pi)}$ is a relative character for the theta-lift of the $(N,\psi)$-generic element of $\Pi$ to $G$.*
For the normalization of the relative character $I_{\theta(\Pi)}$, I point the reader to [@Gan-Wan Theorem 8.1]; suffice it to say here that it is a normalization compatible with the Plancherel formula for $L^2(X)$. This theorem also has an extension, describing an enlargement of the standard space of Kuznetsov test measures, such that $\mathcal T$ becomes an isomorphism $$\label{transferisomorphism} \mathcal T: \mathcal S_{L({\operatorname{Ad}},n-1)}((N,\psi)\backslash G^*/(N,\psi)) \xrightarrow\sim \mathcal S((X\times X)/G)^{\operatorname{st}}.$$ The inverse of this isomorphism takes the basic vector on the right hand side (the image of the probability measure which is equal to the characteristic function of $(X\times X)(\mathfrak o)$ times an invariant measure) to an explicit multiple of the generating series of the local unramified $L$-value $L({\operatorname{Ad}},n-1)$.
The uniform nature of these transfer operators is enticing, but mysterious. What are we to make of the fact that functorial transfers for homogeneous spaces of non-abelian reductive groups are given by Fourier convolutions -- operators of abelian nature -- on spaces of (stable) test measures? An interpretation using symplectic geometry and the idea of geometric quantization was proposed in [@SaICM]: According to this idea, the "cotangent stacks" associated to the quotients $\mathfrak Y=(N,\psi)\backslash G^*/(N,\psi)$ and $\mathfrak X = (X\times X)/G$ are, in some sense, "birational" to each other. More precisely, the Kuznetsov cotangent stack $T^*\mathfrak Y$ can be canonically identified with the group scheme $$J_X = \left(\operatorname{Res}_{\mathfrak a_X^*/\mathfrak c_X^*} T^* A_X\right)^{W_X}$$ of regular centralizers in $\operatorname{SL}_2$; here, $A_X=$ the (universal) Cartan of $\operatorname{SL}_2$, $W_X=$ its Weyl group, and $\mathfrak c_X^* = \mathfrak a_X^*\sslash W_X$, which is isomorphic to the affine line. I point the reader to [@SaICM] for more details and definitions. On the other hand, using work of Knop [@KnWeyl; @KnAut], one can show that there is a canonical diagram $$J_X \leftarrow J_X\times_{\mathfrak c_X^*} T^*X/G \to T^*\mathfrak X,$$ where the left arrow is generically an isomorphism, and the right arrow has dense image. One can interpret this by saying that the cotangent stack of $\mathfrak X$ is "birational to, but a bit larger than" the Kuznetsov cotangent stack $T^*\mathfrak Y$. Morally, Theorem [Theorem 6](#rankone){reference-type="ref" reference="rankone"} and its extension [\[transferisomorphism\]](#transferisomorphism){reference-type="eqref" reference="transferisomorphism"} are the quantum versions of this fact. In [@SaICM], an interpretation of the transfer operators is given, as "canonical intertwiners" between two different "geometric quantizations" of these stacks.
This explanation is purely phenomenological at this point, but it would be nice to adapt it to all other available examples discussed in this survey; in particular, one would like an explanation, based on symplectic geometry, of the mysterious appearance of Fourier transforms in problems of functorial transfer.
# Schwartz spaces encoding $L$-functions {#sec:Schwartz}
In the introduction, we discussed the necessity of introducing "nonstandard test functions" (or measures) into the trace formula, in order to weight the spectral side by $L$-functions. We already encountered nonstandard spaces of test measures in the local theory of the preceding sections; for example, for the full "Beyond Endoscopy" matching between the Kuznetsov and the stable trace formula of $\operatorname{SL}_2$ [\[transferisomorphism\]](#transferisomorphism){reference-type="eqref" reference="transferisomorphism"}, we had to use a nonstandard space $\mathcal S_{L({\operatorname{Ad}},s)}((N,\psi)\backslash G^*/(N,\psi))$ of test measures for the Kuznetsov formula. It is time to take a closer look at such spaces.
A basic example of nonstandard test measures is the space of Schwartz measures on the space $M_n$ of $n\times n$-matrices (over a local field $F$), considered as measures on $G=\operatorname{GL}_n$. The local theory of Godement and Jacquet [@GJ] shows that this space "encodes" the standard $L$-function; at non-Archimedean $F$ with ring of integers $\mathfrak o$ and residue field of cardinality $q$, this means:
1. the integral against the matrix coefficient of any irreducible smooth representation $\pi$, twisted by powers of the determinant: $$\label{GJintegral}
\mathcal S(M_n) \otimes \tilde\pi \otimes \pi \otimes |\det|^s \ni f \otimes \tilde v \otimes v \mapsto \int f(g) \langle \tilde v, \pi(g) v\rangle |\det(g)|^s$$ is a rational function in $q^{\pm s}$, and has image equal to the fractional ideal generated by the standard (local) $L$-function, $L(\pi, {\operatorname{Std}}, s+\frac{n+1}{2}) \mathbb{C}[q^{\pm s}]$;
2. there is an unramified "basic vector" $f_{\operatorname{Std}}\in \mathcal S(M_n)^{G(\mathfrak o)\times G(\mathfrak o)}$ such that, for $\pi$ unramified, the integral [\[GJintegral\]](#GJintegral){reference-type="eqref" reference="GJintegral"}, against the normalized unramified matrix coefficient of $\pi$ is equal to $L(s+\frac{n+1}{2} , \pi, {\operatorname{Std}})$. Up to the choice of additive Haar measure on $M_n$, the basic vector, here, is the characteristic function of $M_n(\mathfrak o)$.
Moreover, the functional equation of this $L$-function follows from a local and a global ingredient: the Fourier transform on $\mathcal S(M_n)$, and the Poisson summation formula.
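The case $n=1$ is just the unramified part of Tate's thesis, and may help fix normalizations (a standard computation, recorded for orientation): here $\mathcal S(M_1)=\mathcal S(F)$, the basic vector is $1_{\mathfrak o}\,dx$, and for an unramified character $\chi$ of $F^\times$ (which is its own matrix coefficient), choosing $dx$ so that $\operatorname{Vol}(\mathfrak o^\times, dx)=1$, $$\int_{F} 1_{\mathfrak o}(x)\, \chi(x)\, |x|^{s}\, dx = \sum_{k\ge 0} \chi(\varpi)^k q^{-k(s+1)} = L(\chi, {\operatorname{Std}}, s+1),$$ in agreement with the shift $s+\frac{n+1}{2}$ above; the Fourier transform on $\mathcal S(F)$ and Poisson summation then yield the functional equation, as in Tate's thesis.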
It was an idea due to Braverman and Kazhdan [@BK2] that one should be able to define more general "Schwartz spaces" $\mathcal S_r$ of measures -- or, more conveniently, to center everything at $s=\frac{1}{2}$, spaces $\mathcal D_r$ of half-densities -- on the $F$-points of a reductive group $G$, possessing the properties above when we replace the standard representation of $\operatorname{GL}_n$ by an arbitrary algebraic representation $r$ of the $L$-group ${^LG}$. They proposed that, at least when $r$ is irreducible, the space $\mathcal D_r$ should be obtained as the sum of the standard Schwartz space $\mathcal D_1=\mathcal D(G)$ and its image under a conjectural "$r$-Fourier transform," a $G\times G$-equivariant isometry on $L^2(G)$. We will discuss those conjectural Fourier transforms in the next section.
In particular, when $G$ is unramified over a non-Archimedean field $F$, the space $\mathcal D_r$ should contain the following "basic vector" $f_r$. For technical reasons, let us assume, as in the Godement--Jacquet case, that there is a central cocharacter $\mathbb{G}_m\to {^LG}$ which, composed with $r$, gives the standard scaling action on the space $V$ of $r$. Dually, this translates to a character $G\to \mathbb{G}_m$ which we will call "determinant." If $h_n$ is the element of the unramified Hecke algebra of $G$ that corresponds to the representation $\operatorname{Sym}^n r$ under the Satake isomorphism, then, up to a suitable Haar half-density on $G$, the basic vector of $\mathcal D_r$ should be $$f_r = \sum_{n\ge 0} q^{-\frac{n}{2}} h_n.$$ Finally, Braverman and Kazhdan observed that the support of this half-density would have compact closure in a certain affine (but possibly singular) $G\times G$-equivariant embedding $G\hookrightarrow G_r$. Such embeddings automatically extend the multiplication operation on the group, turning them into monoids.
Their ideas were picked up by others about a decade later, and gave rise to several developments in the field. Motivated by his work with Frenkel and Langlands [@FLN; @FN] on "Beyond Endoscopy," and the need to insert $L$-functions into the trace formula, Ngô [@Ngo-PS] conjectured that, over a local field in equal characteristic, $F=\mathbb{F}_q((t))$, the basic function[^6] $f_r$ should arise as a Frobenius trace from the intersection complex of the arc space $L^+G_r$, for $G_r$ now defined over $\mathbb{F}_q$ (so that $L^+G_r(\mathbb{F}_q)=G_r(\mathfrak o)$). The idea that basic functions representing $L$-functions should arise as "IC functions" associated to possibly singular affine spherical varieties had previously appeared in [@SaRS], and was inspired by some other work of Braverman and Kazhdan, on normalized Eisenstein series [@BK1; @BK3], which has its roots in the theory of geometric Eisenstein series and ideas of Drinfeld [@BG-Eisenstein]. In any case, a proper definition of the IC function was not available until [@BNS], where Ngô's conjecture was proven.
A complete such Schwartz space $\mathcal S_r$ should represent the $L$-function associated to $r$ for ramified representations as well, as in the Godement--Jacquet case. A recent preprint of Bezrukavnikov, Braverman, Finkelberg, and Kazhdan [@BBFK] constructs the Iwahori-invariants of such a Schwartz space, proving the desired relation to $L$-functions.
Assuming, now, the existence of $r$-Schwartz spaces of functions, half-densities, or measures on our reductive group, we can consider various pushforwards. For example, unipotent integration against a generic character $\psi^{-1}$ of the maximal unipotent group $N$ gives rise to a morphism to the space of Whittaker functions or measures, $$\mathcal S_r(G) \to C^\infty((N,\psi)\backslash G),$$ whose image we can denote by $\mathcal S_r((N,\psi)\backslash G)$. It is a "nonstandard Schwartz space of Whittaker functions," whose basic vector (the image of the vector $f_r$ above) already appeared in the musings of Piatetski-Shapiro around Poincaré series in the 80s [@PS-invariant]. Taking a two-sided pushforward, we obtain a nonstandard space $\mathcal S_r((N,\psi)\backslash G/(N,\psi))$ of test vectors for the Kuznetsov formula. This space *should* coincide, in particular cases, with the spaces that we encountered in Sections [2](#sec:localtori){reference-type="ref" reference="sec:localtori"} and [3](#sec:samerank){reference-type="ref" reference="sec:samerank"}.
The aforementioned "Schwartz" spaces of measures could, more generally, be understood in the context of the "relative" Langlands program, introduced in the article of Beuzart-Plessis in these proceedings [@Beuzart-Plessis]. Indeed, the affine monoids that we encountered above are special cases of *affine spherical $G$-varieties* $X$, and those should also be equipped with Schwartz spaces, often related to some $L$-function. To explain how, we need to make an adjustment to our understanding of the group case: First of all, we will now denote the group of interest by $X=H$, with $G=H\times H$ acting on it. Secondly, to fit into the general setting, the putative Schwartz space of the monoid $H_r$ should be understood as being related, formally, to $L(\pi, r) \sqrt{L(\pi, {\operatorname{Ad}}, 1)}$; for example, $\mathcal S(H)$ is related to $\sqrt{L(\pi,{\operatorname{Ad}}, 1)}$, not the trivial $L$-value. To give a meaning to this relationship, one needs to think of the Plancherel decomposition of the basic vector, which in the group case takes the form (for a suitable choice of Haar measure) $$\label{Plancherel-group} \Vert 1_{H(\mathfrak o)}\Vert^2 = \int_{\check A_H^0/|W_H|} L(\chi,{\operatorname{Ad}},1) d_{\text{Weyl}}\chi.$$ Here, $\check A_H^0/|W_H|$ is the set of conjugacy classes in the compact real form of the dual group of $H$, and $d_{\text{Weyl}}\chi$ is the pushforward of the probability Haar measure on this compact group. This, of course, is just the well-known formula for the unramified Plancherel measure, due to Macdonald [@Macdonald], and, in more generality, to Harish-Chandra [@Waldspurger-Plancherel].
The appearance of the adjoint (local) $L$-value in the Plancherel formula above is of significance in various aspects of the Langlands program. For example, it is related to the fact that the corresponding global $L$-value is equal, essentially, to the Petersson square norm of a normalized newform. "Normalized," here, means that the first Fourier/Whittaker coefficient is $1$; the local counterpart of this ratio between $L^2$-norms and Whittaker coefficients is the Plancherel density of the basic vector $W_0 \in \mathcal S((N,\psi)\backslash G)$ of the Whittaker model of $G$, which, in contrast to [\[Plancherel-group\]](#Plancherel-group){reference-type="eqref" reference="Plancherel-group"}, reads $$\label{Plancherel-Whittaker} \Vert W_0\Vert^2= \int_{\check A_G^0/|W_G|} d_{\text{Weyl}}\chi.$$ In the geometric Langlands program, the appearance of the adjoint $L$-value in [\[Plancherel-group\]](#Plancherel-group){reference-type="eqref" reference="Plancherel-group"} is understood to be related to the *derived geometric Satake equivalence* [@BezFin].
As is explained in the paper of Beuzart-Plessis in these proceedings, the Schwartz spaces of other spherical $G$-varieties $X$ admit similar Plancherel decompositions, with the Plancherel density of the "basic vector" encoding a different $L$-value, as in [\[Plancherel-group\]](#Plancherel-group){reference-type="eqref" reference="Plancherel-group"}, [\[Plancherel-Whittaker\]](#Plancherel-Whittaker){reference-type="eqref" reference="Plancherel-Whittaker"}. Moreover, the generalized form of the Ichino--Ikeda conjecture, proposed in [@SV], relates local Plancherel densities to global period integrals of automorphic forms -- the same integrals that appear on the spectral side of the relative trace formula. Therefore, the insertion of $L$-functions into the Arthur--Selberg trace formula can be viewed as a relative trace formula for the monoid $H_r$. From this point of view, considering the Beyond Endoscopy program in the more general setting of the relative trace formula appears most natural.
The Schwartz spaces of spherical varieties, when those are smooth, are well-defined. To complete the discussion of this section, let us return to the idea that singular spherical (and, maybe, more general) varieties also possess meaningful Schwartz spaces, which we already encountered in the case of monoids. We mention two classes of spaces where the basic vectors of these putative Schwartz spaces have been studied for $\mathfrak o = \mathbb F_q[[t]]$, and related to $L$-functions:
The first is the case of spaces of the form $X = \overline{U\backslash G}$ (or $X=\overline{[P,P]\backslash G}$), where $P\supset U$ denote a parabolic subgroup with its unipotent radical, and the line denotes "affine closure." The basic functions of those spaces were studied in [@BFGM]. We will discuss them further in § [5.4](#ss:Fourier-nonlinear){reference-type="ref" reference="ss:Fourier-nonlinear"}, to highlight the nature (and absence!) of the conjectural nonlinear Fourier transforms.
Finally, [@SaWang] studied the basic vectors of Schwartz spaces of spherical varieties $X$ with dual group $\check G_X=\check G$. In all of those cases, the basic function, defined as the Frobenius trace on the intersection complex of the arc space of $X$ (or rather, on finite-dimensional formal models of that), admits a Plancherel decomposition as in [\[Plancherel-group\]](#Plancherel-group){reference-type="eqref" reference="Plancherel-group"}, [\[Plancherel-Whittaker\]](#Plancherel-Whittaker){reference-type="eqref" reference="Plancherel-Whittaker"}, with density equal to some local $L$-value that depends on the geometry of $X$.
# Fourier and Hankel transforms {#sec:Hankel}
## Desiderata
Let $X=\operatorname{Mat}_n$ be the space of $n\times n$-matrices, under the action of $\operatorname{GL}_n^2/\mathbb{G}_m$ by left or right multiplication (which we will write as a right action). Let $X^\vee$ be the dual vector space. If we fix a local field $F$ and a unitary additive character $\psi: F\to \mathbb C^\times$, we obtain a Fourier transform, an equivariant isomorphism $$\mathcal F: \mathcal D(X) \xrightarrow\sim \mathcal D(X^\vee)$$ between spaces of Schwartz half-densities on $X(F)$ and $X^\vee(F)$, which extends to an $L^2$-isometry. Given that both spaces contain $G=\operatorname{GL}_n$ as a canonical open subset, we can think of these half-densities as living on $G=G(F)$, and then Fourier transform is given by (multiplicative) convolution with the measure $$\gimel_{\operatorname{Std}}(g) = \psi(\operatorname{tr}(g)) |\det g|^\frac{n}{2} dg,$$ where $dg$ is the Haar measure on $G$, depending on the choice of $\psi$, that makes the transform an $L^2$-isometry.
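It may be worth spelling out the case $n=1$, if only to fix normalizations (a routine check; I write $d^\times x = dx/|x|$, with $dx$ self-dual for $\psi$): for a Schwartz half-density $\phi = \Phi(x)\,|dx|^{\frac{1}{2}} = \Phi(x)|x|^{\frac{1}{2}}\,|d^\times x|^{\frac{1}{2}}$ on $F$, we have $$\mathcal F\phi\,(y) = \left(\int_F \Phi(x)\,\psi(xy)\,dx\right)|dy|^{\frac{1}{2}} = \left(\int_{F^\times} \Phi(x)|x|^{\frac{1}{2}}\cdot \psi(xy)\,|xy|^{\frac{1}{2}}\,d^\times x\right)|d^\times y|^{\frac{1}{2}},$$ so that the kernel $\psi(u)|u|^{\frac{1}{2}}\,d^\times u$ -- the case $n=1$ of $\gimel_{\operatorname{Std}}$ -- appears, evaluated at the product $u=xy$; this is a multiplicative convolution, up to the inversion of one of the variables that is implicit in the convolution convention, and which we will not belabor.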
The same conjugation-invariant measure can be used to convolve functions and measures on $G$, and by the theory of Godement and Jacquet we have (essentially, by definition) $$\label{gamma-standard} \gimel_{\operatorname{Std}}\star \Theta_\pi = \gamma(\frac{1}{2},\pi, {\operatorname{Std}}, \psi) \Theta_\pi,$$ for $\Theta_\pi$ the character (or, for that matter, any matrix coefficient) of an irreducible representation $\pi$ of $G$, and with $\gamma(s,\pi, r, \psi)$ the gamma factor of the functional equation, $$\gamma(s,\pi, r, \psi) = \frac{\epsilon(s,\pi, r,\psi) L(1-s,\tilde\pi, r)}{L(s,\pi, r) }.$$ Here, we need some regularization to make sense of the convolution above (namely, using the Godement--Jacquet zeta integrals to meromorphically extend characters into the duals of Schwartz spaces of measures on $X$ and $X^\vee$), and the relation makes sense for almost all $\pi$, as they vary in families of the form $\pi\otimes |\det|^s$, $s\in\mathbb{C}$.
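With the same normalizations as in the aside above, and $\psi$ of conductor $\mathfrak o$, the relation [\[gamma-standard\]](#gamma-standard){reference-type="eqref" reference="gamma-standard"} can be checked by hand for $n=1$ and $\Theta_\pi=\chi$ an unramified unitary character: summing over the shells $\varpi^n\mathfrak o^\times$, $$\int_{F^\times}\chi^{-1}(x)\,\psi(x)\,|x|^{\frac{1}{2}}\,\frac{dx}{|x|} = (1-q^{-1})\sum_{n\ge 0}\chi(\varpi)^{-n} q^{-\frac{n}{2}} - \chi(\varpi)\, q^{-\frac{1}{2}} = \frac{1-\chi(\varpi)q^{-\frac{1}{2}}}{1-\chi(\varpi)^{-1}q^{-\frac{1}{2}}} = \gamma(\tfrac{1}{2},\chi,\psi),$$ up to replacing $\chi$ by $\chi^{-1}$, depending on the convolution convention.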
Now, fix a reductive group $G$, for simplicity split, and an irreducible representation $r: \check G\to\operatorname{GL}(V)$ of its Langlands dual group. Braverman and Kazhdan [@BK2; @BK4] considered the question of whether it is possible to describe an invariant distribution $\gimel_r$ on $G$, satisfying the analogous property $$\label{gamma-r} \gimel_r \star \Theta_\pi = \gamma(\frac{1}{2}, \pi,r,\psi) \Theta_\pi.$$
Ideally, as discussed in Section [4](#sec:Schwartz){reference-type="ref" reference="sec:Schwartz"}, there would be entire "$r$-Schwartz spaces" of half-densities $\mathcal D_r(G)$, with convolution by $\gimel_r$ defining an $L^2$-isometric isomorphism, the "$r$-Fourier transform," $$\mathcal F_r : \mathcal D_r(G) \xrightarrow\sim \mathcal D_{r^\vee}(G),$$ where $r^\vee$ is the dual representation.
## The case of finite fields
Although our goal is to have a definition of the $r$-Fourier transform over local fields (and, eventually, some type of Poisson summation formula over the adeles of a global field), Braverman--Kazhdan [@BK2; @BK4] first focused on the (rather nontrivial) "toy case," when $G$ is defined, and split, over a finite field. The study of this case was later advanced and completed by Cheng--Ngô and Chen [@CN; @Chen]. We will only highlight the main points of this theory, pointing the reader to the original references, as well as [@Ngo-Takagi] for more details.
Let $G$ be a connected, split reductive group over a finite field $\mathbb{F}=\mathbb{F}_q$. The basic premise is that the distribution $\gimel_r$ on $G(\mathbb{F})$ should be obtained as the Frobenius trace of an irreducible perverse sheaf $\mathcal J_r$ on $G$ (up to a cohomological shift, which we will ignore below). Moreover, Braverman and Kazhdan constructed a candidate for this sheaf; the starting point is the analogous sheaf for the abelian case, namely, the case of the Cartan group $T$ of $G$, when we restrict the representation $r$ to its dual torus $\check T\subset \check G$; we will describe the construction below.
Of course, one also needs to define the analogs of gamma factors $\gamma(\pi,r)$ for the case of finite fields. This is done in [@BK2 § 9], first for $\operatorname{GL}_n$, using the analog of the Godement--Jacquet construction, and finite-field Fourier transforms. (Here, one can use $\overline{\mathbb{Q}_l}$-coefficients, for compatibility with the sheaf-to-function constructions that follow.) The gamma-factors are then transferred to any (split reductive) group $G$ via a version of "functoriality for Deligne--Lusztig packets" (= the set of irreducible constituents of a Deligne--Lusztig representation), via the map of dual groups $r: \check G \to \operatorname{GL}_n$. I point to the original reference for the definition, mentioning only that it relies on the following crucial fact [@BK2 Theorem 9.3]; we state the fact for finite fields, and its analog for non-Archimedean local fields.
Over $\mathbb{F}$ finite
:   The Braverman--Kazhdan gamma factor of an irreducible constituent of a Deligne--Lusztig representation attached to a character $\theta$ of a Cartan subgroup $T_w$ of $\operatorname{GL}_n$ depends only on the data $(T_w,\theta)$.
Over $F$ local, non-Arch.
: The Godement--Jacquet gamma factor of an irreducible subquotient of a representation of $\operatorname{GL}_n$ parabolically induced from a supercuspidal representation $\sigma$ of a Levi $L$ depends only on the data $(L,\sigma)$.
Let us now describe the construction of the sheaf $\mathcal J_r$. It uses the invariant-theoretic quotient $G\xrightarrow{p} G\sslash G \simeq T\sslash W$ (where $G$ acts by conjugation on itself, and $T$ is its abstract Cartan). If $\iota: G^{\operatorname{rs}}\hookrightarrow G$ is the embedding of the regular semisimple part, then the sheaf $\mathcal J_r$ has the form $$\mathcal J_r = \iota_{!*} (p\iota)^* \mathcal J_r^{T\sslash W},$$ where $\mathcal J_r^{T\sslash W}$ is a certain sheaf on $T^{\operatorname{rs}}/W$, obtained from the analogous gamma-sheaf $\mathcal J_r^T$ for $T$.
The gamma-sheaf $\mathcal J_r^T$ for $T$ is easy to describe; we keep assuming the existence of a central $\mathbb{G}_m\to \check G$ that, composed with $r:\check G \to \operatorname{GL}(V)$, gives the scaling on $V$. The restriction $r|_{\check T}$ gives rise, dually, to a map $\pi: T_V\to T$, where $T_V$ is the Cartan of the Langlands dual (over $\mathbb{F}$) of $\operatorname{GL}(V)$. There is a trace map $T_V \to \mathbb{G}_a$. We fix an Artin--Schreier sheaf $\mathcal L_\psi$ on $\mathbb{G}_a$ by choosing a nontrivial additive character $\mathbb{F}\to \overline{\mathbb{Q}_l}^\times$, and take $\mathcal J_r^T = \pi_! \operatorname{tr}^* \mathcal L_\psi$. A natural $W$-equivariant structure, described in [@Ngo-Takagi 6.3], allows us to descend this sheaf from the regular set $T^{\operatorname{rs}}$, where $W$ acts freely, to the quotient $T^{\operatorname{rs}}/W$, obtaining the sheaf $\mathcal J_r^{T\sslash W}$. Finally, this sheaf, and hence the sheaf $\mathcal J_r$ on $G$, carry a natural Weil structure; thus, we can take their Frobenius trace.
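In the most degenerate example, $G=T=\mathbb{G}_m$ with $r$ the tautological character of $\check G = \mathbb{G}_m$, the map $\pi$ is the identity and the trace map is the inclusion $\mathbb{G}_m\subset\mathbb{G}_a$, so the trace function of $\mathcal J_r^T$ is simply $t\mapsto\psi(t)$; pairing it against a multiplicative character $\chi$ of $\mathbb{F}^\times$ produces the Gauss sum $$\sum_{t\in\mathbb{F}^\times}\psi(t)\,\chi(t),$$ the familiar finite-field avatar of the abelian gamma factor. (This is only a consistency check; the content of the construction lies in the descent to $T\sslash W$ and the extension to $G$.)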
**Theorem 7** (Braverman--Kazhdan, Cheng--Ngô, Chen). *Let $\gimel_r$ denote the Frobenius trace of the sheaf $\mathcal J_r$ constructed above, multiplied by counting measure on $G(\mathbb F)$. It is a conjugation-invariant measure on $G(\mathbb F)$, and acts on every irreducible representation $\pi$ of $G(\mathbb F)$ by the gamma factor $\gamma(\pi,r)$.*
A different proof of this result has appeared in a preprint of Laumon and Letellier [@Laumon-Letellier]. In a more recent preprint [@Laumon-Letellier2], the same authors focus on the case of $r=$ symmetric-square representation of $G=\operatorname{SL}_2$, $\operatorname{PGL}_2$ or $\operatorname{GL}_2$, and investigate the question of whether the $r$-Fourier transform (composed with the Chevalley involution) extends to an involutive automorphism of the space of functions on the $\mathbb{F}$-points of a stacky embedding of $G$.
## The case of local fields
Following the resolution of this conjecture for finite fields, it is natural to ask for a description of the $r$-Fourier kernel for local fields. Some speculative candidates for this $r$-Fourier kernel were proposed in [@BK2 § 7] and [@Ngo-Takagi § 6.2], modelled after the finite-field paradigm of obtaining the $r$-Fourier kernel on $G$ from the analogous $r$-Fourier kernel on $T$. These proposals should be viewed as motivational; as we will see, some correction is due to make the kernel compatible with parabolic induction. This correction was calculated by L. Lafforgue for the case of $\operatorname{GL}_2$. There is ongoing work by Ngô and Z. Luo, generalizing this to $\operatorname{GL}_n$, for arbitrary $n$.
Let us discuss the solution proposed by Lafforgue, which appears in [@Lafforgue-esquisse Exposé III]. I hasten to clarify that some conventions here will differ from those in [@Lafforgue-esquisse]; namely, all our kernels will be represented as *a function multiplied by a Haar measure*, and I will use the same notation for the function and the distribution; that, of course, introduces a scalar ambiguity, which I will ignore in this expository article. The reader can consult the original reference for details, but should be careful to notice that the measures used there, both on the reductive group and on its tori, are eigenmeasures that are not invariant.
Let $r=r_k$ be the $k$-th symmetric power of the standard representation of $\operatorname{GL}_2$. This is actually a faithful representation of the group $\check G = \operatorname{GL}_2/\mu_k$, and we will construct the $r$-Fourier kernel on the dual group $G=\operatorname{GL}_2\times_{\mathbb{G}_m} \mathbb{G}_m$, where the map $\operatorname{GL}_2\to\mathbb{G}_m$ is the determinant, and the map $\mathbb{G}_m\to\mathbb{G}_m$ is the $k$-th power -- i.e., $G$ classifies automorphisms of a $2$-dimensional vector space, together with a $k$-th root of the determinant.
Recall the canonical identification $G\sslash G \simeq T\sslash W$. We will start by defining the "naive" kernel $\gimel_r^{T\sslash W}$ as a function on $(T\sslash W)(F)$ (or rather, its regular semisimple points), following [@Lafforgue-esquisse § II.4] and [@Ngo-Takagi § 6.2]. To be clear, only its restriction to the image of $T'(F)\to (T\sslash W)(F)$, for $T'$ a split maximal torus, matters for the statement of "compatibility with parabolic induction" below, but presenting the entire kernel provides food for thought.
We define $\gimel_r^{T\sslash W}$ to be the function on $T^{\operatorname{rs}}/ W$ (that is, on its $F$-points) whose pullback to the regular points of any Cartan subgroup $T'\subset G$ is the $r$-Fourier kernel $\gimel_r^{T'}$. Let us first explain this: The "baby case" is the Fourier kernel $|x|^\frac{1}{2} \psi(x) d^\times x$ for the action of $F^\times$ on $\mathcal D(F)$. Now, for any maximal torus $T'\subset G$, we have an embedding, up to conjugation, of its dual torus $\check T'\hookrightarrow \check G$. Restricting $r$ to $\check T'$ and diagonalizing, we obtain a map $\check T'\to \mathbb{G}_m^{k+1}$, whose dual is a map ${\mathbb{G}_m}_{\bar F}^{k+1} \to T'_{\bar F}$ (i.e., defined over the algebraic closure). The Galois group $\Gamma$ acts on $\check T'$ and on the set of weights of $r|_{\check T'}$ (which are all of multiplicity one in this case -- this simplifies things a little), hence this map descends to a map of tori $p:T_E'\to T'$, where $T_E' = \operatorname{Res}_{E/F}\mathbb{G}_m$ is the restriction of scalars of $\mathbb{G}_m$ from some separable $F$-algebra $E$. On the other hand, $T_E'$ comes with a "trace" map to $\mathbb{G}_a$ (descending from the sum of coordinates on ${\mathbb{G}_m}_{\bar F}^{k+1}$). We then take $$\gimel_r^{T'} = p_* \left(|\bullet|_E^\frac{1}{2} \operatorname{tr}^* \psi\right),$$ i.e., we pull back the additive character $\psi$ from $F$, multiply it by the square root of the modulus character for the action of $T_E'(F)\simeq E^\times$ on $E$, and push it forward (as a measure, after multiplying by a Haar measure) to $T'(F)$. As mentioned, we will not bother to keep track of Haar measures here, but there is a standard way to choose them compatibly for all tori.
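For orientation, let us unwind this definition in the simplest case (up to the obvious identifications): when $k=1$, so that $G\simeq\operatorname{GL}_2$ and $r={\operatorname{Std}}$, and $T'$ is the split diagonal torus, the two weights of ${\operatorname{Std}}$ give $E=F\oplus F$, the map $p$ becomes the evident isomorphism $(\mathbb{G}_m)^2\simeq T'$, and the trace map is $(x,y)\mapsto x+y$, so that $$\gimel_{{\operatorname{Std}}}^{T'} = \psi(x+y)\,|xy|^{\frac{1}{2}}\,d^\times x\, d^\times y,$$ a product of two copies of the baby-case kernel, one for each weight.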
We choose coordinates $(c,a)\in \mathbb{G}_a\times\mathbb{G}_m$ for the quotient $G\sslash G = T\sslash W$, where we send $(g,a)\in \operatorname{GL}_2\times_{\mathbb{G}_m} \mathbb{G}_m$ to $c=\operatorname{tr}g$ and $a$. Having defined the naive kernel $\gimel_r^{T\sslash W}$, Lafforgue corrects it to a kernel $\gimel_r^G$ given by the formula $$\mathcal F_c \gimel_r^G(\xi,a) = |\xi|\cdot |a|^\frac{k}{2} \mathcal F_c \gimel_r^{T\sslash W}(\xi,a),$$ where $\mathcal F_c$ is the usual Fourier transform in the $c$-variable (with $\xi$ denoting the dual variable). I do not know a conceptual explanation for this formula, but the correction is necessary for compatibility with parabolic induction. Namely, in [@Lafforgue-esquisse Proposition III.2], Lafforgue performs a formal calculation that suggests the following property (which fails for the naive kernel): $$(\gimel_r^G \star \Phi)_N = \gimel_r^T \star \Phi_N,$$ for all Schwartz functions $\Phi$ on $G$, where $\Phi_N$ is the normalized constant term $$\Phi_N(t) = |\delta_B(t)|^{-\frac{1}{2}} \int_N \Phi(nt) dn.$$ More conceptually stated, the action of $\gimel_r^G$ on half-densities on $N\backslash G$ matches the action of $\gimel_r^T$ under the "left" action of $T$ on $N\backslash G$. This is necessary for $\gimel_r^G$ to act by the correct gamma factors on principal series representations.
## Fourier transforms on other nonlinear spaces {#ss:Fourier-nonlinear}
There are more conjectural nonlinear Fourier transforms than the $r$-Fourier transforms in the case of the group. In fact, the local functional equations of various classical integral representations of $L$-functions can be construed as some kind of "Fourier transforms."
Take, for example, Hecke's classical integral representation for the standard $L$-function of a modular form, as reinterpreted in the adelic language by Jacquet and Langlands [@JL]. Hence, let $V_2=V_1\oplus V_1'$ be a $2$-dimensional vector space written as a direct sum of two $1$-dimensional subspaces, and let $X=\operatorname{GL}(V_1)\backslash \operatorname{GL}(V_2)$, $X^\vee =\operatorname{GL}(V_1')\backslash \operatorname{GL}(V_2)$. If $w\in G=\operatorname{GL}(V_2)$ is an element interchanging the subspaces $V_1$ and $V_1'$, it defines a $G$-equivariant isomorphism $X\xrightarrow\sim X^\vee$, sending the coset of $g$ to the coset of $wg$. This gives rise to an isometric isomorphism of Schwartz spaces of functions or half-densities, $$\mathcal D(X) \xrightarrow\sim \mathcal D(X^\vee),$$ which is the basis of the local functional equation of the standard $L$-function, see [@JPSS]. To formulate the exact functional equation with the right gamma factors, one needs to explain how to choose Whittaker models compatibly with the choice of $w$, a topic that deserves close attention, but not in the current exposition. I also point the reader to the recent thesis of G. Dor [@Dor], for a very novel and deep interpretation of the relationship between the Godement--Jacquet and Jacquet--Langlands (Hecke) integral representations (without, however, a comparison of local gamma factors).
For an example that looks more like a classical Fourier transform, consider the Rankin--Selberg integral for $G=\operatorname{GL}_n\times_{\det}\operatorname{GL}_n$, which can be written as the integral of an automorphic form on $G$ against a theta series coming from the space of Schwartz functions on the "Rankin--Selberg variety" $X={\operatorname{Std}}\times^{\operatorname{GL}_n^{\operatorname{diag}}} G$, where ${\operatorname{Std}}$ denotes the vector space of the standard representation. (See the article of Beuzart-Plessis [@Beuzart-Plessis] for the definition of theta series.) A "dual" space is obtained by considering the dual representation, $X^\vee = {\operatorname{Std}}^\vee \times^{\operatorname{GL}_n^{\operatorname{diag}}} G$, and fixing an additive character we get an equivariant Fourier transform of Schwartz half-densities (defined along the dual fibers of the maps $X\to \operatorname{GL}_n^{\operatorname{diag}}\backslash G \leftarrow X^\vee$, that is, their points over a local field): $$\mathcal D(X)\xrightarrow\sim \mathcal D(X^\vee).$$ This Fourier transform gives rise to the functional equation of the Rankin--Selberg $L$-function.
More generally, let $X$ be any affine spherical $G$-variety, for a quasisplit group $G$, possibly with some additional restrictions (e.g., without "spherical roots of type $N$"). Let $X^\vee$ denote the same $G$-variety, but with the $G$-action composed with a Chevalley involution. "A Chevalley involution" is an involution of $G$ that acts as minus the longest element of the Weyl group on the (abstract) root system of $G$. All such involutions are conjugate over the algebraic closure, but the choice of its $G(F)$-conjugacy class may matter for what follows. For some groups (but not all, e.g., not for $\operatorname{SL}_n$), there is a "duality involution" that satisfies $\pi^c \simeq \tilde\pi$; I point the reader to the relevant article of D. Prasad [@Prasad-contragredient]. In any case, a Chevalley involution $c$ is expected to interchange the $L$-packets of an irreducible representation $\pi$ and its contragredient $\tilde\pi$.
In the setting above, one often expects to be able to define a "Fourier transform," which is again an $L^2$-isometric, $G$-equivariant isomorphism between spaces of Schwartz half-densities on $X$ and $X^\vee$. As we saw in Section [4](#sec:Schwartz){reference-type="ref" reference="sec:Schwartz"}, when the affine variety $X$ is singular, the appropriate definition of Schwartz spaces is still unclear. These uncertainties notwithstanding, we should highlight a number of promising cases in the study of such nonlinear Fourier transforms:
### The parabolic case
This case is probably the origin of all ideas discussed in this section. At the level of functions, it is discussed again in a couple of other papers of Braverman and Kazhdan [@BK1; @BK3], but it is closely related to ideas of Laumon and Drinfeld on "compactified" Eisenstein functors in the geometric setting [@BG]. In order to stay closer to the existing literature, and to make connections to the theory of Eisenstein series, let us work with functions instead of densities here, using the letter $\mathcal S$ to denote Schwartz spaces of functions.
The idea here is to attach a suitable Schwartz space of functions $\mathcal S(\overline{U\backslash G})\subset C^\infty(U\backslash G)$, where $U\subset P\subset G$ is the unipotent radical of a parabolic subgroup. In principle, this space is not associated to the homogeneous space $U\backslash G$, but to its affine closure $\overline{U\backslash G} = \operatorname{Spec\,}F[U\backslash G]$.
The "baby case" of this is as classical as Eisenstein series themselves: It corresponds to the difference between the definitions $$E(z,s) = \sum_{(m,n)=1} \frac{y^s}{|mz+n|^{2s}}$$ and $$E^*(z,s) = \pi^{-s} \Gamma(s) \sum_{(m,n)\ne(0,0)} \frac{y^s}{|mz+n|^{2s}}.$$ The two are related by $E^*(z,s) = Z(2s) E(z,s)$ (where $Z(s)$ denotes the complete zeta function), making the functional equation for $E^*(z,s)$ nicer: $$E^*(z,s) = E^*(1-z.s).$$
Adelically, the "normalized" Eisenstein series $E^*(z,s)$ can be obtained from the Schwartz space of the affine plane $\mathbb A^2 = \overline{U\backslash\operatorname{SL}_2}$ by $$\mathcal S(\mathbb A_k^2) \to I_B^G(\chi) \xrightarrow{\mathcal E_\chi} C^\infty([G]),$$ where $\mathbb A_k$ denotes the adeles of a global field $k$, $I_B^G(\chi)$ denotes the principal series representation unitarily induced from a suitable character $\chi$ (corresponding to the parameter $s$ above), and $\mathcal E_\chi$ is the Eisenstein series morphism to functions on $[G]=G(k)\backslash G(\mathbb A_k)$. This is to be juxtapposed to the "usual" Eisenstein series $E(z,s)$, which comes from the similar diagram by replacing $\mathcal S(\mathbb A_k^2)$ by $\mathcal S(U\backslash G(\mathbb A_k))$. The functional equation for $E^*(z,s)$ is a corollary of the Poisson summation formula for the lattice $k^2\subset \mathbb A_k^2$.
In general, however, the affine closure $\overline{U\backslash G}$ is a singular variety, and does not have a vector space structure. Nonetheless, it was observed in the geometric Langlands program that Eisenstein series defined by "Schwartz sheaves" on such spaces are better behaved. Braverman and Kazhdan [@BK1; @BK3] applied this wisdom to the function-theoretic setting, where they managed to provide a definition for the Schwartz spaces $\mathcal S(\overline{U\backslash G})$ when $U$ is the unipotent radical of a Borel subgroup, and more generally $\mathcal S(\overline{[P,P]\backslash G})$, for any parabolic $P$. They also defined Fourier transforms (normalized intertwining operators) of the form $$\mathcal S(\overline{[P,P]\backslash G}) \xrightarrow\sim \mathcal S(\overline{[P^-,P^-]\backslash G}),$$ where $P^-$ is an opposite parabolic to $P$, and proved a Poisson summation formula (eventually, by reducing it to the case of $\operatorname{SL}_2$ discussed above), giving rise to the functional equation of the normalized Eisenstein series. (A more complete version of this Poisson summation formula was proven by Getz--Liu [@GL-Poisson].)
For the general case $X=\overline{U\backslash G}$ (that is, when $P$ is not a Borel subgroup), it is worth noting that *even in the geometric Langlands program, the conjectural Fourier transform is absent*; undoubtedly a deep problem, whose resolution would be very interesting. We do know, however, from the calculations of intersection complexes in [@BFGM], that the IC function of $X$ gives rise to *normalized Eisenstein series*; more precisely, the function-theoretic interpretation of [@BFGM Theorem 1.12] is $$IC_{\overline{U\backslash G}} = L(\check{\mathfrak u},1) \star 1_{U\backslash G(\mathfrak o)},$$ where the notation is as follows:
- $\mathfrak o = \mathbb{F}_q[[t]]$ is a DVR in equal characteristic, whose fraction field we will denote below by $F$;
- $1_{U\backslash G(\mathfrak o)}$, the basic function of $U\backslash G$, is the characteristic function of its integers;
- $IC_{\overline{U\backslash G}}$ is the basic function of $\overline{U\backslash G}$, defined via the intersection complex of its arc space as in [@BNS]; it is a smooth function on $U\backslash G(F)$, whose support belongs to $\overline{U\backslash G}(\mathfrak o)$;
- finally, $L(\check{\mathfrak u},1)$ is a series of elements of the unramified Hecke algebra of the Levi quotient $L$ of $P$; it is the series that, under the Satake isomorphism, corresponds to $\sum_{i\ge 0} q^{-i} \operatorname{Sym}^i \check{\mathfrak u}$, where $\check{\mathfrak u}$ denotes the nilpotent radical of the dual parabolic, viewed as a representation of the dual Levi $\check L$. The convolution action (denoted by $\star$) of the Hecke algebra of $L$ on functions on $U\backslash G$ is obtained from the $L^2$-normalized action of $L$ (i.e., we are secretly working with half-densities); the reader can pin down the details by keeping in mind that, here, we define the action of $L$ on $U\backslash G$ as a left action, and for $G=\operatorname{SL}_2$, $L(\check{\mathfrak u},1) \star 1_{U\backslash G(\mathfrak o)}$ stands for the characteristic function of $\mathfrak o^2$, as the example following this list spells out.
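To unwind the last point for $G=\operatorname{SL}_2$ (a sketch, with the identifications of the last bullet point left implicit): here $U\backslash G\simeq\mathbb{A}^2\smallsetminus\{0\}$, the basic function $1_{U\backslash G(\mathfrak o)}$ is the characteristic function of the primitive vectors of $\mathfrak o^2$ (those with at least one unit coordinate), and every nonzero vector of $\mathfrak o^2$ is $\varpi^i$ times a primitive vector for a unique $i\ge 0$, so that $$1_{\mathfrak o^2} = \sum_{i\ge 0} 1_{\varpi^i\cdot(\text{primitive vectors})}$$ on $U\backslash G(F)$; the $i$-th summand is the contribution of $q^{-i}\operatorname{Sym}^i\check{\mathfrak u}$, the factor $q^{-i}$ being offset by the factor $q^{i}$ coming from the $L^2$-normalization of the scaling action on the two-dimensional space.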
This suggests that the normalized Fourier transforms $$\mathcal S(\overline{U\backslash G}) \xrightarrow\sim \mathcal S(\overline{U^-\backslash G})$$ should spectrally decompose in terms of the *normalized intertwining operators* of Langlands and Shahidi [@Langlands-Eisenstein; @Shahidi-Plancherel]. In the case of degenerate Eisenstein series (that is, replacing $U$ by $[P,P]$), for classical groups, this was verified by Shahidi (and W. W. Li) in [@Shahidi-generalizedFourier]. A direct proof of the Poisson summation formula for these Fourier transforms, leading to the meromorphic continuation of the normalized Eisenstein series, would greatly simplify and generalize the Langlands--Shahidi method, as it would imply the meromorphic continuation of the ratio between normalized and unnormalized Eisenstein series.
### Other cases
There are several other affine spherical varieties $X$ which turn out to be fiber bundles of the form $X\to H\backslash G$, with $H$ reductive and fiber isomorphic to $Y'=\overline{[P',P']\backslash H'}$, for some parabolic $P'$ in a reductive group $H'$ containing $H$. This explains a large part of the Rankin--Selberg method, as was observed in [@SaRS], with the functional equation obtained from Fourier transform on $Y'$. For example, the affine closure of the quotient of $\operatorname{SL}_2^3$ by the subgroup of triples of upper-triangular unipotent matrices whose upper-right entries satisfy $x_1+x_2+x_3=0$ happens to be isomorphic to the affine closure of $[S,S]\backslash\operatorname{Sp}_6$, where $S$ denotes the Siegel parabolic, and this fact is behind Garrett's integral representation of the triple product $L$-function [@Garrett].
An entirely new case of nonlinear Fourier transforms, on a space which does not belong to the family of spaces such as $Y'$ above, was considered by Getz and Liu in [@GL-triples] (and refined in [@GH; @GHL]). In it, they consider a triple $(V_i, q_i)_{i=1}^3$ of even-dimensional (nondegenerate) quadratic spaces, and the affine variety $X$ given by the equations $$q_1 = q_2 = q_3.$$ This variety is spherical under the action of $G=$ the fiber product of orthogonal similitude groups ${\operatorname{GO}}(V_1)\times_{\mathbb{G}_m}{\operatorname{GO}}(V_2)\times_{\mathbb{G}_m}{\operatorname{GO}}(V_3)$. In the series of aforementioned papers, for $F$ a local field in characteristic zero, a space $\mathcal S(X)$ of smooth functions on $X^\infty(F)$ is defined (where $X^\infty\subset X$ is the smooth locus), as the image of a morphism $$\mathcal S(Y)\otimes \mathcal S(V) \to \mathcal S(X),$$ where the notation is as follows: $V = \bigoplus_i V_i$, and $Y=$ the affine closure of $[S,S]\backslash \operatorname{Sp}_6$, with notation as above. The construction of the morphism is via the Weil representation for $\operatorname{SL}_2^3 \times G$, with $\operatorname{SL}_2^3$ considered as a subgroup of $\operatorname{Sp}_6$, and the morphism is $\operatorname{SL}_2^3$-invariant, hence factors through the coinvariant quotient $$\left(\mathcal S(Y)\otimes \mathcal S(V) \right)_{\operatorname{SL}_2^3} \to \mathcal S(X).$$ Although it is not proven in these papers, it may help, to fix ideas, to think of $\mathcal S(X)$ as an avatar for this coinvariant space.
Then, the authors of [@GH] define a Fourier transform on $\mathcal S(X)$ by descending the Braverman--Kazhdan Fourier transform of $\mathcal S(Y)$. Note that $Y^\vee\simeq Y$ under the action of $\operatorname{Sp}_6$, although this inverts the commuting action of $\mathbb{G}_m=S/[S,S]$; for uniformity with the setup above, we should be thinking of the Fourier transform of Getz et al. as a $G$-equivariant isomorphism $\mathcal F_X: \mathcal S(X)\xrightarrow\sim \mathcal S(X^\vee)$. In [@GHL] it is shown that this transform extends to an $L^2$-isometry. Finally, when all spaces are defined over a number field $k$, a Poisson summation formula is proven in [@GL-triples; @GH], which has the form $$\sum_{\gamma\in Y^\bullet(k)} \Phi(\gamma) + \mbox{(boundary terms)}
= \sum_{\gamma\in Y^\bullet(k)} \mathcal F_X \Phi(\gamma) + \mbox{(boundary terms)},$$ where $Y^\bullet$ is the open $G$-orbit. The expression of the boundary terms is not intrinsic to $\Phi$ at this point, but at least one can impose local restrictions on $\Phi$ that will cause them to vanish.
## Hankel transforms for the (relative) trace formula {#ss:Hankel}
### Hankel transforms for the standard relative trace formula
The Fourier transforms $\mathcal D(X) \xrightarrow\sim \mathcal D(X^\vee)$ that we speculated on in the previous subsection are likely not available for every space $X$, and there are reasons for that on the side of $L$-functions: It is sometimes the square of the absolute value of a period that is related to an $L$-value, not the period itself, and there is no reason to expect the "square root" represented by the period to carry some sort of "functional equation." In those cases, one should have the appropriate Fourier transform appearing only at the level of the quotients of the relative trace formula, which spectrally encodes this square of the period.
Rather than speculating in general, let me explain this idea in a setting that has already been worked out by H. Xue in [@Xue] (with the lowest-rank case appearing previously in [@SaBE2]): Let $E/F$ be a quadratic extension of local fields (in characteristic zero, in the references that we will cite), and $A$ a central simple algebra over $F$ of dimension $4n^2$, together with an embedding $E\to A$. Let $B$ be the centralizer of $E$ in $A$. Let $G=A^\times/\mathbb{G}_m$, $H=B^\times/\mathbb{G}_m$, considered as algebraic groups over $F$. When $A=\operatorname{Mat}_2(F)$, $H$ is simply a maximal torus in $G=\operatorname{PGL}_2$, and we are in the setting of Waldspurger's periods [@Waldspurger]. The dual group of the variety $X=H\backslash G$ is $\operatorname{Sp}_{2n}$, and it is a general expectation that, in the same setting over global fields, for non-residual automorphic representations $\pi$ on $G$ whose exterior-square $L$-function has a pole at $s=1$ (indicating that its parameter factors through $\operatorname{Sp}_{2n}$), and satisfying appropriate local conditions, one has an Ichino--Ikeda-type formula decomposing the square of the absolute value of the $H$-period, $$\pi \ni \phi \mapsto \left|\int_{[H]} \phi(h) dh\right|^2$$ into an Euler product of local functionals. (I do not know of such a precise factorization in the literature, but see [@GJ; @FMW] for weaker versions of the conjecture and global results; one could, more generally, integrate against an automorphic character of $H$, but we will restrict ourselves to the trivial character, both globally and locally.) These local functionals should evaluate, at almost every place (where we now let $F$ and $E$ denote the corresponding local fields or algebras), to the quotient of local $L$-factors $$\frac{L(\pi_E, \frac{1}{2}) L(\pi_F, \mathfrak{sl}_{2n}/\mathfrak{sp}_{2n},1)}{L(\pi_F, \mathfrak{sp}_{2n},1)},$$ where $\pi_E$ is the base change of $\pi_F$ to $E$.
The local counterpart of this conjecture, providing the local conditions for it, is a conjecture of Prasad--Takloo-Bighash [@PTB], generalizing a theorem of Tunnell and Saito [@Tunnell; @Saito], which states that a tempered irreducible representation $\pi$ of $G$ is $H$-distinguished if and only if its Langlands parameter factors through $\operatorname{Sp}_{2n} \to \operatorname{GL}_{2n}$, and $\epsilon(\pi_E)\eta(-1)^n=(-1)^r$, where $\epsilon(\pi_E)$ is the local root number of the base change of $\pi$ to $E$, $\eta$ is the quadratic character attached to $E/F$, and $(r-1)$ is the split rank of $G$. This conjecture has been proven for discrete series by H. Xue and M. Suzuki [@Xue; @SX]; at the heart of the argument in [@Xue] is an involution on the space $\mathcal S(H\backslash G/H)$ of test measures for the relative trace formula of the quotient $H\backslash G/H = (X\times X)/G$. We will think of this involution as the analog of the Fourier transform on the space of functions or half-densities on $X$, and will call it a "Hankel transform." (That name has been used by Ngô to describe the $r$-Fourier transforms discussed previously, but we will reserve this name for analogous transforms at the level of trace formulas.)
The space of test measures $\mathcal S(H\backslash G/H)$ should be understood as the direct sum, over isomorphism classes of central simple algebras of dimension $4n^2$ with an embedding $E\hookrightarrow A$ as above, of the corresponding coinvariant spaces $\mathcal S(H\backslash G)_H$. The said involution acts by $(-1)^r$ on the summands with split rank $(r-1)$, and on the other hand Xue shows that it acts by a factor of $\epsilon(\pi_E)\eta(-1)^n$ on the relative character of $\pi$. It is quite notable that there is a meaningful, nontrivial version of Fourier transform in this setting, where $X^\vee \simeq X$ and the associated $L$-function only appears through its value at the central point $\frac{1}{2}$; still, its functional equation is expressed in a nontrivial way by the epsilon factor. (It is best to think of the "Hankel transform" here as being the product of Xue's involution with $\eta(-1)^n$.)
### Hankel transforms for the (relative) trace formula with nonstandard test measures {#sssHankel}
Let us place ourselves in the conjectural setting of the Braverman--Kazhdan--Ngô program, with a nonstandard Schwartz space $\mathcal S_r(G)$ associated to a representation $r$ of the $L$-group of $G$, together with an $r$-Fourier transform (which, for now, we take to be between spaces of measures) $$\mathcal F_r: \mathcal S_r(G) \xrightarrow\sim \mathcal S_{r^\vee}(G).$$
Descending to test measures for the Arthur--Selberg or the Kuznetsov trace formulas, this would give rise to "Hankel transforms" $$\mathcal H_r: \mathcal S_r(\frac{G}{G}) \xrightarrow\sim \mathcal S_{r^\vee}(\frac{G}{G}),$$ or $$\mathcal H_r: \mathcal S_r((N,\psi)\backslash G/(N,\psi)) \xrightarrow\sim \mathcal S_{r^\vee}((N,\psi)\backslash G/(N,\psi)),$$ between nonstandard spaces of test measures for the corresponding trace formula. If we could prove a Poisson summation formula for these transforms, in the sense that, for the corresponding spaces over the adeles, the diagram $$\xymatrix{ \mathcal S_r((N,\psi)\backslash G/(N,\psi)) \ar[rr]^{\mathcal H_r}\ar[dr]_{{\operatorname{KTF}}} & & \ar[dl]^{{\operatorname{KTF}}}\mathcal S_{r^\vee}((N,\psi)\backslash G/(N,\psi))\\
& \mathbb{C}&}$$ commutes, that would give rise to a proof of the functional equation of the $L$-function corresponding to $r$. To be clear, there are serious difficulties making sense of the "trace formula" functional on nonstandard spaces of test measures, which we will discuss in Section [6](#sec:global){reference-type="ref" reference="sec:global"} below.
Such transforms have been computed, in the setting of the Kuznetsov formula in a few cases, namely:
1. For $r=$ the standard representation of $\operatorname{GL}_n$, by Jacquet [@Jacquet-smoothtransfer]. In this case, the nonstandard space of test functions and the Hankel transform descend from the Schwartz space of $\operatorname{Mat}_n$, and its Fourier transform.
2. For $r=$ the symmetric-square representation when $G=\operatorname{SL}_2$, by the author [@SaTransfer2].
A formula for the case of $r= {\operatorname{Std}}\oplus {\operatorname{Std}}\otimes \eta$, $G=\operatorname{PGL}_2$, with $\eta$ a (possibly trivial) quadratic character of the Galois group, was also proven in [@SaBE2]. However, this case can easily be reduced to repeated applications of the first case above.
A remarkable feature of these formulas is that, despite the non-abelian nature of the $L$-functions whose functional equation they encode, they are given by a combination of abelian Fourier transforms and various "correction factors" -- much like the transfer operators that we encountered in Sections [2](#sec:localtori){reference-type="ref" reference="sec:localtori"}, [3](#sec:samerank){reference-type="ref" reference="sec:samerank"}. For example, let us discuss the case of the standard representation of $\operatorname{GL}_2$. It turns out that the formula is nicer if we work with half-densities, instead of measures (see [@SaHanoi Theorem 8.1] for details on how to translate Jacquet's result to half-densities). Hence, varying our discussion in § [2.4](#ss:Kuztori){reference-type="ref" reference="ss:Kuztori"}, we will introduce a nonstandard space $\mathcal D_{{\operatorname{Std}}}=\mathcal D_{{\operatorname{Std}}}((N,\psi)\backslash G/(N,\psi))$ of "test half-densities" for the Kuznetsov formula, considered as smooth half-densities on the torus $T$ of diagonal elements via the embedding $t\mapsto w t$, with $w$ the permutation matrix, and defined by the formula $$f(t)= (|\zeta| d\zeta)^\frac{1}{2} \cdot \int_{F^2} \Phi \left(\begin{pmatrix} 1& x \\ &1\end{pmatrix} wt\begin{pmatrix} 1& y \\&1 \end{pmatrix} \right) \psi^{-1}(x+y) dx dy,$$ where $\Phi$ ranges over Schwartz functions on $\operatorname{Mat}_2$. We also define the analogous space $\mathcal D_{{\operatorname{Std}}^\vee}$, by using the natural embedding of $\operatorname{GL}_2$ into the dual vector space. There is then a Hankel transform $\mathcal H_{{\operatorname{Std}}}: \mathcal D_{{\operatorname{Std}}} \to \mathcal D_{{\operatorname{Std}}^\vee}$, descending from the equivariant Fourier transform $\mathcal D(\operatorname{Mat}_2)\to \mathcal D(\operatorname{Mat}_2^\vee)$, and Jacquet's formula reads $$\label{Hankel-Jacquet}
\mathcal H_{{\operatorname{Std}}} f = D_{-\check\epsilon_1,\frac{1}{2}} \star \left( \psi^{-1}(e^{-\alpha}) \circ D_{-\check\epsilon_2,\frac{1}{2}} \star f\right),$$ where our notation is as in § [3.2](#ss:KuzSLn){reference-type="ref" reference="ss:KuzSLn"}. Namely, $D_{-\check\epsilon_i,s}$ is convolution on $T$ with the measure $D_s$ (see [\[Ds\]](#Ds){reference-type="eqref" reference="Ds"}), pushed forward to $T$ by the cocharacter $(-\check\epsilon_i)$, where $\epsilon_i$ is the standard cocharacter into the $i$-th coordinate of $T$.
The aforementioned "correction factor" here is $\psi^{-1}(e^{-\alpha})$, which simply means multiplication by the function $\begin{pmatrix} a & \\ &d \end{pmatrix}\mapsto \psi^{-1}(\frac{d}{a})$. These correction factors are poorly understood; it would be desirable to have an interpretation of these transfer operators in terms of quantization, as for the transfer operators discussed in § [3.3](#ss:transferrankone){reference-type="ref" reference="ss:transferrankone"}. I will refrain from presenting these formulas here, pointing the reader to the expository article [@SaHanoi].
# Global comparisons {#sec:global}
Beyond Endoscopy is a strategy to solve the global problem of functoriality for automorphic representations, and some of the most involved work in this context has been in the global setting. It is quite a challenge, however, to summarize and put in a common context the global results that have appeared so far in the literature, and the techniques developed to achieve them. In order to do so, I will focus on aspects that can be understood in combination with the local results mentioned so far; since the relation between global and local aspects has not been sufficiently explored in the literature, this discussion will at times be slightly speculative. As a result, the present account purports to be neither comprehensive nor entirely loyal to the point of view of the original authors.
We will focus on 4 pieces of work, in reverse chronological order: The work of Frenkel--Langlands--Ngô and A. Altuğ on the trace formula for $\operatorname{GL}_2$, the author's work on the relative trace formula for torus periods on $\operatorname{PGL}_2$, the work of E. Herman on the functional equation of the standard $L$-function for $\operatorname{GL}_2$-automorphic forms, and the work of A. Venkatesh on stable functoriality from tori to $\operatorname{GL}_2$.
## Poisson summation for the trace formula of $\operatorname{GL}_2$ {#ss:Altug}
One of the main goals of the work of Frenkel--Langlands--Ngô and A. Altuğ [@FLN; @Altug1; @Altug2; @Altug3] on the trace formula for $\operatorname{GL}_2$ was to extract from it an expression that contains only the "Ramanujan" part of the spectrum, removing the contributions of residual automorphic representations (characters, in the case of $\operatorname{GL}_2$).
To simplify notation, we will work with the stable trace formula of $\operatorname{SL}_2$, so that the only $1$-dimensional automorphic representation is the trivial one, and the Steinberg--Hitchin base $\mathfrak C=G\sslash G$ is isomorphic to $\mathbb A^1$ via the trace function. Every mention of "orbital integrals" and "trace formula" below will refer to stable orbital integrals, and the stable trace formula.
The fundamental observation of [@FLN] is that the trivial representation should correspond to the value at $0$ of the "Fourier transform of orbital integrals." This is, first of all, a local statement, and it is clear if we think of test measures instead of test functions and their orbital integrals: Considering test spaces of measures, the composition of pushforwards under $G\to G\sslash G \to *$ is the integral of a measure -- i.e., the trace of its action on the trivial representation -- and at the same time it is equal to the value at $0$ of the Fourier transform of its pushforward to $\mathfrak C=G\sslash G$.
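In a formula (merely restating the previous sentence): for a test measure $\mu$ on $G(F)$, with $p: G\to\mathfrak C$ the projection and the Fourier transform on $\mathfrak C(F)\simeq F$ taken with respect to $\psi$, $$\widehat{p_*\mu}\,(0) = \int_{\mathfrak C(F)} d(p_*\mu) = \int_{G(F)} d\mu = \operatorname{tr}\left(\mathbf{1}\right)(\mu),$$ the trace of $\mu$ acting on the trivial representation.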
In order to express the global trace formula, however, in terms of the Fourier transform, one needs a version of the Poisson summation formula. Namely, simplifying greatly, we can think of the geometric side of the trace formula as a sum over the rational points of $\mathfrak C$, and we need to convert it to a sum over "the dual vector space" $\mathfrak C^*$. (Note, also, the conceptual mystery of the fact that we treat $\mathfrak C$ as a vector space; there is no clear reason why we should do so.) In reality, of course, this sum only applies to the elliptic terms in the trace formula, with the actual expression being more complicated, depending on the version of the trace formula that one chooses to use (e.g., non-invariant or invariant). This is only part of the difficulty; another fundamental difficulty is that orbital integrals/test measures do not belong to the standard Schwartz space of functions/measures on $\mathbb A^1$, hence are not immediately suited for an application of the Poisson summation formula.
What form should one expect the trace formula to take, after application of the Poisson summation formula? Altuğ's work made the biggest progress toward answering this question, but before we get to it, we will reformulate the question, in view of the local theory already developed. Namely, we saw in § [3.1](#ss:KuzSL2){reference-type="ref" reference="ss:KuzSL2"} that Fourier transform on $\mathfrak C$, locally, has a special meaning: It is a transfer operator between stable test measures for the trace formula of $\operatorname{SL}_2$, and test measures for its Kuznetsov formula. From this point of view, the sought Poisson summation formula would, roughly, correspond to the statement that the following diagram commutes: $$\label{STFKTF}\xymatrix{ \mathcal S(\frac{G}{G})^{\operatorname{st}}\ar[rr]^{\mathcal T^{-1}}\ar[dr]_{{\operatorname{STF}}} & & \ar[dl]^{{\operatorname{KTF}}}\mathcal S_{L({\operatorname{Ad}},1)}((N,\psi)\backslash G/(N,\psi))\\
& \mathbb C &}$$ where the notation is as follows:
- $\mathcal S(\frac{G}{G})^{\operatorname{st}}$ is the stable space of test measures for the trace formula, over the adeles, defined as a restricted tensor product of the local spaces;
- $\mathcal T^{-1}$ is the Fourier transform, which corresponds to the inverse of the transfer operator discussed in § [3.1](#ss:KuzSL2){reference-type="ref" reference="ss:KuzSL2"}, and $\mathcal S_{L({\operatorname{Ad}},1)}((N,\psi)\backslash G/(N,\psi))$ is its image, a nonstandard space of test measures for the Kuznetsov formula. This is also a restricted tensor product over all places, but with respect to a nonstandard "basic vector," equal to the image of the basic vector of $\mathcal S(\frac{G}{G})^{\operatorname{st}}$ under $\mathcal T^{-1}$. (We will have more to say about this nonstandard basic vector below.)
- "STF" stands for the functional of the stable trace formula, and "KTF" stands for the functional of the Kuznetsov formula.
The last point is not easy to state rigorously, since neither of the two functionals is well-defined. The stable trace formula that we have in mind is an idealistic "naive invariant" stable trace formula, where the contribution of a semisimple stable orbit represented by an element $\gamma$ would be the stable orbital integral at $\gamma$, multiplied by the volume of $G_\gamma(k)\backslash G_\gamma(\mathbb A_k)$ -- but this volume is infinite, for hyperbolic conjugacy classes. The Kuznetsov trace formula is well-defined on the standard space of test measures, but not on the nonstandard one, where the basic vector at almost every place $v$ is the generating series of the local unramified $L$-function $L(\pi_v, {\operatorname{Ad}},1)$, as in § [3.1](#ss:KuzSL2){reference-type="ref" reference="ss:KuzSL2"}; the expression cannot be absolutely convergent, since on the spectral side the corresponding Euler products do not converge, and in fact $L(\pi, {\operatorname{Ad}},s)$ has a pole at $s=1$ when $\pi$ is in the continuous (Eisenstein) spectrum.
Nonetheless, we can view the work of Altuğ as an attempt to make sense of the diagram [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"}, or an approximation of it. This elucidates several aspects of his work, such as the appearance of Kloosterman sums in the expressions he obtains after Poisson summation -- such sums are inherent in the Kuznetsov formula. For fans of the Selberg trace formula this might be a disappointing point of view: already in his 2001 letter to Langlands [@Sarnak], Sarnak pointed out that the Kuznetsov formula could be used to avoid the non-Ramanujan part of the spectrum, so one could say that we have not come a long way since then. Nonetheless, the global comparison of trace formulas, i.e., the effort to make sense of the Poisson summation formula of diagram [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"}, has a lot to teach us about global techniques that will be necessary for the Beyond Endoscopy program. Moreover, as we stressed in § [3.1](#ss:KuzSL2){reference-type="ref" reference="ss:KuzSL2"}, such a comparison *is* a special case of functoriality -- between different homogeneous spaces (the group $\operatorname{SL}_2$ and its Whittaker model), rather than different reductive groups.
Many of the interesting takeaways from Altuğ's work have been summarized by Arthur [@Arthur-BE], who took them as a starting point to propose analogous questions for the trace formula in higher rank. I have little to add to the insights of the absolute expert in the field, and point the readers to his article. I will only attempt to highlight a couple of points which arise from thinking about Fourier transforms in terms of the comparison with the Kuznetsov formula:
Thinking about the spectral side of the Selberg trace formula and the Kuznetsov formula, two differences stand out: First, the traces of Hecke operators on cuspidal automorphic representations come weighted by the factor $\frac{1}{L(\pi,{\operatorname{Ad}},1)}$ in the Kuznetsov formula, but this factor does not appear on the spectral side of the Selberg trace formula. Secondly, the Selberg trace formula (for $\operatorname{SL}_2$) has a contribution from the trivial representation, and also has a discrete contribution from Eisenstein series induced from characters of the Cartan that are fixed under the Weyl group; there is no analog of those terms on the spectral side of the Kuznetsov formula.
It is easy to explain how the first difference is resolved, in the putative comparison [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"}: as already mentioned, the image of the basic vector, at almost every place $v$, under the transfer operator $\mathcal T^{-1}$ is the generating series of the local unramified $L$-factor $L(\pi_v, {\operatorname{Ad}},1)$, hence the spectral contributions to the Kuznetsov formula with nonstandard test measures will come with an additional factor of $L(\pi, {\operatorname{Ad}},1)$, canceling the difference between the 2 sides. (For Eisenstein series induced by a character $\chi$, it should be mentioned, the spectral contributions to the standard Kuznetsov formula come weighted by $\frac{1}{L(\chi,\check{\mathfrak g}/\check{\mathfrak t},1)}$, and multiplying them by $L(\pi, {\operatorname{Ad}},1) = L(\chi,\check{\mathfrak g},1)$ introduces a pole; as we mentioned, this is the reason why the "KTF" functional is not well-defined for this space of nonstandard test measures, and similarly the contribution of Eisenstein series to the "naive invariant" Selberg trace formula is infinite.)
For the second difference, we have explained the reasons that, locally, the contribution of the trivial representation is picked up by the value of the Fourier transform of a test measure at $0\in\mathfrak C^*$. As can be seen by the explicit form of the transfer operator in [\[inversetransfer\]](#inversetransfer){reference-type="eqref" reference="inversetransfer"}, the spaces $\mathfrak C^*\simeq \mathbb A^1$ and $N\backslash G\sslash N\simeq \mathbb A^1$ (the latter being the space of orbits for the standard Kuznetsov formula) cannot be identified; rather, there is a birational morphism $\xi\mapsto \xi^{-1}$ between them, so that the point $\xi=0\in \mathfrak C^*$ corresponds to "infinity" on the other space. This means that the contribution of the trivial representation is picked up by the behavior of the nonstandard test measures in $\mathcal S_{r}((N,\psi)\backslash G/(N,\psi))$ at "infinity;" note that the subspace $\mathcal S((N,\psi)\backslash G/(N,\psi))$ of standard test measures (locally) consists of measures that vanish there.
Globally, that means that, to achieve our putative comparison [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"}, we need to include a contribution from the "point at $\infty$" of the space $N\backslash G\sslash N$, in a neighborhood of which our nonstandard test measures, unlike the standard ones, have nontrivial (nonzero) behavior. Vice versa, $\xi=\infty$ corresponds to the "singular" point of $N\backslash G\sslash N$ represented by the identity coset, and this is evidenced by the fact that the dual variable $\xi$ in the variant of the Kloosterman sums defined in [@Altug1 Theorem 1.1] appears in the numerator, rather than the denominator, of the argument of the exponential -- compare with the variable $c$ in [@KLP Proposition 3.7]. (This by no means purports to be a full explanation of the idea that Altuğ's expression is related to the Kuznetsov formula; I am just highlighting some similarities that the interested reader can investigate further.)
However, as is also highlighted by Arthur in [@Arthur-BE], the point $\xi=0\in\mathfrak C^*$ in Altuğ's work contains not only the contribution of the trivial representation, but also the discrete contribution of the Eisenstein series unitarily induced from quadratic characters of the Cartan group (i.e., characters stable under the Weyl group) [@Altug1 Theorem 6.1]. This is indicative of how much more difficult the Poisson summation formula is in this context; rather than just the value of the Fourier transform at $\xi=0$ (which, as discussed, corresponds to the trace of the trivial representation), the point $\xi=0$ has a much richer contribution, including all the terms that the nonstandard nature of the space $\mathcal S_{r}((N,\psi)\backslash G/(N,\psi))$ is responsible for. On the spectral side of the nonstandard Kuznetsov formula, for the Eisenstein series unitarily induced from quadratic characters, it is reasonable to expect (and this will be made rigorous in my work in preparation with C. Wan, developing a version of the diagram [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"} for $\operatorname{GL}_n$) that the double pole of the adjoint $L$-function at those points -- as opposed to its simple pole at other points of the continuous spectrum -- produces a discrete contribution to the nonstandard Kuznetsov formula.
Altuğ goes on to insert the standard $L$-function into the trace formula in [@Altug3] (note that here we cannot keep working with $\operatorname{SL}_2$; we need to work with $G=\operatorname{PGL}_2$ or $\operatorname{GL}_2$, as Altuğ does), restricted to holomorphic modular forms (by appropriate choice of test function at $\infty$). Note that the standard $L$-function of a cuspidal representation is entire, which at the level of $L$-groups reflects the fact that the stabilizer of a nonzero point in the standard representation belongs to a proper parabolic subgroup. Without taking the entire continuation as given, Beyond Endoscopy would like to study the residue $$\sum_\pi \operatorname{Res}_{s=1} L(s,\pi),$$ with the sum ranging over modular forms of fixed weight $k$, and show that it is zero. This is what Altuğ achieves by trace formula methods (for $k\ge 3$) showing, at the same time, how these methods can be used to obtain the holomorphic continuation of such an $L$-function beyond the domain of convergence.
## Comparison between the Jacquet and Kuznetsov trace formulas {#ss:Jacquet}
A global comparison analogous to [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"} was established in [@SaBE2], where the trace formula is replaced by the relative trace formula of Jacquet for Waldspurger periods, corresponding to a torus $T=\operatorname{Res}_{E/F}\mathbb{G}_m/\mathbb{G}_m\subset \operatorname{PGL}_2$, and $r={\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta$, where $\eta$ is the Galois character associated to the quadratic extension $E/F$.
$$\label{RTFKTF}
\xymatrix{ \mathcal S(T\backslash G/T) \ar[rr]^{\mathcal T^{-1}}\ar[dr]_{{\operatorname{RTF}}} & & \ar[dl]^{{\operatorname{KTF}}}\mathcal S_{L({\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta, \frac{1}{2})}((N,\psi)\backslash G/(N,\psi))\\
& \mathbb C. &}$$
In this case, there is no problem defining the functional ${\operatorname{RTF}}$ (as in [@Jacquet-Waldspurger]), while the functional ${\operatorname{KTF}}$ is beyond the domain of convergence, since it contains special values of the $L$-function associated to $r={\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta$ at the center of symmetry. The transfer operator $\mathcal T$, here, is given by the composition of $2$ Fourier transforms; more precisely, orbital integrals on both sides are parametrized by the affine line, and $\mathcal T$ is given by a "Kloosterman convolution:" $$\begin{gathered}
\mathcal Tf (u) =\eta(u) \cdot (\psi(\bullet)d\bullet)\star (\eta(\bullet)\psi(\bullet)d\bullet) \star f(u)\\ = \eta(u) \iint f\left(\frac{u}{xy}\right) \psi(x+y) \eta(x) dx dy.\end{gathered}$$
Therefore, the statement that [\[RTFKTF\]](#RTFKTF){reference-type="eqref" reference="RTFKTF"} commutes can also be seen as a Poisson summation formula. Let us collect a few takeaways from the proof of this statement in [@SaBE2].
1. While the expression of ${\operatorname{KTF}}$ with nonstandard test measures corresponding to $L({\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta, \frac{1}{2})$ diverges, we can deform these test measures to correspond to $L({\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta, \frac{1}{2}+s)$, obtaining convergent expressions for $\Re(s)\gg 0$. There is nothing surprising here -- this just corresponds, classically, to feeding a series of Poincaré series into the Kuznetsov formula, corresponding to the Dirichlet series of the $L$-function -- but the same "deformation" can be applied to the transfer operator and the orbital integrals on the left hand side, obtaining a family of spaces $\mathcal S_s(T\backslash G/T)$ of test measures which have nothing to do, a priori, with relative trace formulas. The natural extension of the functional ${\operatorname{RTF}}$ is evidently meromorphic on the entire plane, and the Poisson summation formula can be proven for $\Re(s)\gg 0$, leading to the meromorphic continuation of the right hand side.
2. One can encode the functional equation of the pertinent $L$-function into a "Hankel transform" on the space of test measures, as we saw in § [5.5.2](#sssHankel){reference-type="ref" reference="sssHankel"}: $$L({\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta, \frac{1}{2}+s) \xrightarrow\sim L({\operatorname{Std}}\oplus {\operatorname{Std}}\otimes\eta, \frac{1}{2}-s).$$ This allows one to control the functional ${\operatorname{KTF}}$ in the non-convergent region and, by an application of the Phragmén--Lindelöf principle, to obtain the full spectral decomposition of the Kuznetsov formula in this region, together with a new proof of the functional equation of the $L$-function.
The fundamental question here is whether such "transfer operators" can be used to deal, similarly, with other $L$-functions inserted into the Kuznetsov formula.
## Functional equation of the standard $L$-function {#ss:Herman}
A Beyond-Endoscopy proof of the functional equation of an $L$-function had already appeared earlier in a paper of P. E. Herman [@Herman], which has been little understood. The $L$-function here is the standard $L$-function for $\operatorname{GL}_2$, and Herman restricts himself to holomorphic cusp forms, by using the classical Petersson formula, instead of the full Kuznetsov formula. His proof is a masterful set of manipulations, which appear as rabbits pulled out of a hat, and lead to the following close analog of the Voronoi summation formula: $$\begin{gathered}
\sum_{f} \overline{a_l(f)} \sum_{n\geq 1} a_n(f)g(n) \\ = \sum_{f} \overline{a_{l}(f)}\left[ \frac{2\pi i^k\eta(f)}{\sqrt{D} }\sum_{n} a_n(f_D) \int_0^\infty g(x)J_{k-1}(\frac{4\pi\sqrt{nx}}{\sqrt{D}})dx\right].\end{gathered}$$ Here, $f$ ranges over an orthonormal basis of holomorphic modular forms of weight $k$, with some level $D$ and nebentypus $\chi$, and $g\in C_c^\infty(\mathbb R^+)$ is a smoothing function. (The integer $D$ is square-free, $\chi$ is a primitive Dirichlet character modulo $D$, and $k\ge 2$ is even; I point to the original article for the rest of the notation.)
It would be interesting to explore the connections between Herman's arguments and the Hankel transform $\mathcal H_{{\operatorname{Std}}}$ of Jacquet that we saw in [\[Hankel-Jacquet\]](#Hankel-Jacquet){reference-type="eqref" reference="Hankel-Jacquet"}. My expectation is that the analytic manipulations in the proof of the formula above [@Herman Theorem 3.1] prove a Poisson summation formula for this Hankel transform, i.e., the statement that the following diagram commutes $$\label{HankelKTF}
\xymatrix{ \mathcal S_{L({\operatorname{Std}}, \frac{1}{2}-s)}((N,\psi)\backslash G/(N,\psi)) \ar[rr]^{\mathcal H_{{\operatorname{Std}}}}\ar[dr]_{{\operatorname{KTF}}} & & \ar[dl]^{{\operatorname{KTF}}}\mathcal S_{L({\operatorname{Std}}, \frac{1}{2}+s)}((N,\psi)\backslash G/(N,\psi))\\
& \mathbb C,&}$$ of course under some simplifying assumptions (holomorphic modular forms, field of rationals, etc.). His manipulations certainly bear a resemblance to Jacquet's formula: for example, the Poisson summation formula is applied twice, with a clever modification of an exponential factor in between; this is similar to the two Fourier transforms, with the intermediate correction factor, of [\[Hankel-Jacquet\]](#Hankel-Jacquet){reference-type="eqref" reference="Hankel-Jacquet"}.
## Lifting from tori to $\operatorname{GL}_2$ {#ss:Venkatesh}
We now return to the setting of § [2.4](#ss:Kuztori){reference-type="ref" reference="ss:Kuztori"}, where, for $G=\operatorname{SL}_2$ and $T$ a 1-dimensional torus, for the natural map $\iota:{^LT}\to {^L\operatorname{SL}_2}$ of $L$-groups, we described a transfer operator $$\mathcal S_{L({\operatorname{Ad}},\eta |\bullet|)} (N,\psi\backslash \operatorname{SL}_2/N,\psi) \xrightarrow{\mathcal T_\iota}\mathcal S(T),$$ given by the formula [\[transfer-Venkatesh\]](#transfer-Venkatesh){reference-type="eqref" reference="transfer-Venkatesh"}.
Venkatesh's thesis can be understood as a statement that entails the commutativity of a diagram $$\label{toriKTF}
\xymatrix{ \mathcal S_{L({\operatorname{Ad}},\eta |\bullet|)}((N,\psi)\backslash G/(N,\psi)) \ar[rr]^{\mathcal T_\iota}\ar[dr]_{\operatorname{Res}{\operatorname{KTF}}} & & \ar[dl]^{{\operatorname{TF}}}\mathcal S(T)\\
& \mathbb C,&}$$ where "$\operatorname{Res}{\operatorname{KTF}}$" is supposed to contain only those $L$-packets of automorphic representations $\pi$ of $\operatorname{SL}_2$ for which $L(s,{\operatorname{Ad}}\otimes\eta, \pi)$ has a pole at $s=1$.
To make sense of this, we enlarge our group to $\tilde G= \mathbb{G}_m\times \operatorname{SL}_2$. There is a certain nonstandard space of test measures $\mathcal S_{L(\operatorname{Sym}^2, 1)} (N,\psi\backslash \tilde G/N,\psi)$, constructed in [@SaTransfer2 § 8.3], together with a $(\mathbb{G}_m,\eta)$-equivariant surjection $$\mathcal S_{L(\operatorname{Sym}^2, 1)} (N,\psi\backslash \tilde G/N,\psi)\xrightarrow{p} \mathcal S_{L({\operatorname{Ad}},\eta |\bullet|)}((N,\psi)\backslash G/(N,\psi)).$$ As the notation suggests, the space $\mathcal S_{L(\operatorname{Sym}^2, 1)} (N,\psi\backslash \tilde G/N,\psi)$ contains the image $f_{L(\operatorname{Sym}^2, 1)}$ of the generating series of Hecke elements for $L(1,\operatorname{Sym}^2, \pi)$, where $\operatorname{Sym}^2$ is the symmetric square representation, which factors $$\operatorname{GL}_2 \to {^L G} = \mathbb{G}_m\times \operatorname{PGL}_2 \to \operatorname{GL}_3.$$ If we use coordinates $(z,\zeta)$ for $N\backslash \tilde G \sslash N \simeq \mathbb{G}_m\times N\backslash \operatorname{SL}_2 \sslash N$, we can identify elements of $\mathcal S_{L(\operatorname{Sym}^2, 1)} (N,\psi\backslash \tilde G/N,\psi)$ with measures in the variables $(z,\zeta)$, so that the basic vector of $\mathcal S_{L({\operatorname{Ad}},\eta |\bullet|)}((N,\psi)\backslash G/(N,\psi))$ is the pushforward of the product $\eta(z) f_{L(\operatorname{Sym}^2, 1)}(z,\zeta)$ via the map $(z,\zeta)\mapsto \zeta$. More generally, we will consider the pushforward of $\eta(z)|z|^s f_{L(\operatorname{Sym}^2, 1)}(c,z)$, which we will denote by $f_{\eta|\bullet|^s}$. We would then like to calculate $$\operatorname{Res}_{s=1} {\operatorname{KTF}}(f_{\eta|\bullet|^s}).$$
We now write the composition of the transfer operator $\mathcal T_\iota$ with the projection $p$ as follows: write $f(z,\zeta) = \Phi(z,\zeta) \, d^\times z \, d\zeta$, then $$\begin{gathered}
\label{transform}\frac{\mathcal T_\iota \circ p (f)(t)}{dt} = \lambda(\eta,\psi) \int_r \int_y \Phi\left( r, \frac{x(t)}{y} \right) |y|^{-1} \eta(yrx(t)) \psi(y) dy d^\times r \\
= \lambda(\eta,\psi) \lim_{s\to 1} \int_r \eta(r) |r|^s\int_y \Phi\left( \frac{y r}{x(t)} , \frac{x(t)}{y} \right)|y|^{-1} \psi(y) dy \, d^\times r.\end{gathered}$$
The inner integral of [\[transform\]](#transform){reference-type="eqref" reference="transform"} is a Fourier transform, and suggests applying a Poisson summation formula, globally. Denoting $$\hat f(r,x) = \int_y \Phi\left( \frac{y r}{x} , \frac{x}{y} \right)|y|^{-1} \psi(y) dy = \int_u \Phi\left( ur , u^{-1} \right) |u|^{-1} \psi(x u) du,$$ the first step that Venkatesh performs is a Poisson summation formula which can be roughly thought of as an equality of the form $$\begin{aligned}
\nonumber {\operatorname{KTF}}(f_{\eta|\bullet|^s}) \approx \sum_{\zeta\in \mathbb{Q}} \frac{f_{\eta|\bullet|^s}}{d\zeta}(\zeta) &= \int_{\mathbb{Q}^\times\backslash\mathbb A^\times} \eta(r)|r|^s \sum_{(n,c)\in (\mathbb{Q}^\times \times \mathbb{Q})} \Phi(rn,\frac{c}{n}) d^\times r \\ \label{KTF-Poisson}
&\approx \int_{\mathbb{Q}^\times\backslash\mathbb A^\times} \eta(r)|r|^s \sum_{(\nu,c)\in \mathbb{Q}^2} \hat f(rc,\nu) d^\times r,\end{aligned}$$ where we have used the fact that the Fourier transform of $n\mapsto \Phi(rn, \frac{c}{n})|n|^{-1}$, with dual variable denoted $\frac{\nu}{c}$, is $$\int_n \Phi(rn, \frac{c}{n}) |n|^{-1}\psi(n \frac{\nu}{c}) dn = \int_y \Phi(\frac{rc y }{\nu}, \frac{\nu}{y}) |y|^{-1} \psi(y) dy = \hat f(rc, \nu).$$ I warn the reader that in the thesis of Venkatesh, the local transfer operators were not available, and this is not the way that his theorem is presented; I point to [@SaTransfer2 § 10.3] for a slightly more expanded discussion of the relation.
The outer integral in [\[transform\]](#transform){reference-type="eqref" reference="transform"} is already embedded in [\[KTF-Poisson\]](#KTF-Poisson){reference-type="eqref" reference="KTF-Poisson"}. Classically, it corresponds to a Dirichlet series in the parameter $c$, with the parameter $\nu$ fixed. Studying this Dirichlet series is the second main step of Venkatesh, where it is denoted by $Z(s-1)$ [@Venkatesh §4.5.2]. As Venkatesh observes, this Dirichlet series exhibits a lot of cancellation and does not have a pole at $s=0$, unless $\nu$ is in the image of the map $T(\mathbb Q)\to T\sslash W (\mathbb Q)$. Locally, however, there is no pole, and the evaluation at $s=0$ corresponds to the integral of $\hat f$ against $\eta$ in the variable $c$. The reasons for these statements are essentially local: fixing the parameter $\nu$, it turns out that the function $c\mapsto \eta_\nu(c) |c|^{-1} \hat f(c,\nu)$ is a smooth function around $c=0$; here, $\eta_\nu$ is the quadratic (or trivial) character corresponding to the splitting field of the characteristic polynomial $x^2-\nu x +1$. Moreover, for the purposes of analyzing the poles of the global zeta integral, this function at almost every non-Archimedean place $v$ can be approximated by the characteristic function $1_{\mathfrak o_v}$, see [@Venkatesh Proposition 5]. Thus, fixing $\nu\in \mathbb Q$, the last integral of [\[KTF-Poisson\]](#KTF-Poisson){reference-type="eqref" reference="KTF-Poisson"} looks like a Tate zeta integral, evaluated at the character $|\bullet|^{1+s}\eta\cdot \eta_\nu$, which explains why it only has a pole when $\eta_\nu = \eta$.
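To make the last point slightly more explicit, recall (this is only a standard reminder from Tate's thesis, not part of the argument being summarized) that a global Tate zeta integral attached to a Schwartz--Bruhat function $\Phi$ and a unitary character $\chi$ of $\mathbb{Q}^\times\backslash\mathbb A^\times$, $$Z(\Phi, \chi, s) = \int_{\mathbb A^\times} \Phi(c)\, \chi(c)\, |c|^s \, d^\times c,$$ is, up to an entire factor, the completed $L$-function $\Lambda(s,\chi)$, and can therefore only have poles when $\chi$ is trivial. In the situation above the relevant character is $\eta\cdot\eta_\nu$ (twisted by $|\bullet|^{1+s}$), and since $\eta$ and $\eta_\nu$ are both quadratic, this character is trivial precisely when $\eta_\nu=\eta$.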
Thus, using [\[transform\]](#transform){reference-type="eqref" reference="transform"}, and the fact that the Euler product of factors $\lambda(\eta,\psi)$ is $1$, we can write $$\operatorname{Res}_{s=1} {\operatorname{KTF}}(f_{\eta|\bullet|^s}) = \sum_{\gamma\in T(\mathbb{Q})} \frac{\mathcal T_\iota \circ p (f)}{dt}(\gamma),$$ realizing the commutativity of [\[toriKTF\]](#toriKTF){reference-type="eqref" reference="toriKTF"}; of course, the "trace formula" for the torus is simply the sum of the test function over its rational points.
Finally, using the action of the Hecke algebra and the fundamental lemma [\[FL-torus\]](#FL-torus){reference-type="eqref" reference="FL-torus"} for the transfer operator, we can isolate representations in the equality above, and prove functoriality for the morphism $\iota$ of $L$-groups [@Venkatesh Theorem 1].
## General discussion
What does the experience collected so far add to Langlands' original vision? The results to date are still too limited to pass verdict on the prospects of the program, but some structure has started emerging, which allows for cautious optimism. Based on this experience, I would like to end this article by ruminating on how we can move further.
1. To begin with, since it is more manageable (and, geometrically, more natural) to insert $L$-functions, rather than their logarithmic derivatives, into trace formulas, one needs to rethink what the residues of this type of "$r$-trace formula" (where $r$ denotes the representation of the $L$-group whose associated $L$-function is to be inserted) should be compared with. If poles of $L$-functions detect functorial lifts from smaller groups, the residues (at least, when the poles are simple) are also themselves special values of $L$-functions, which should be inserted into the trace formula for the smaller group. (In that sense, the thesis of Venkatesh discussed in § [6.4](#ss:Venkatesh){reference-type="ref" reference="ss:Venkatesh"} is special, because the adjoint $L$-values appearing as denominators on the spectral side of the Kuznetsov formula essentially cancel out the residues.) A thoughtful strategy of how to do this systematically needs to be formulated.
2. Related to the previous item, but already present in Langlands' original approach, is the question of "dimension data" for subgroups of the $L$-group. The question is whether the multiplicities of poles of $L$-functions are sufficient to detect the "Zariski closure of a Langlands parameter." Although the notion of global Langlands parameters is conjectural, the question makes sense, thinking of poles as manifestations of the trivial representation, and asking questions about its multiplicity, when representations $r$ of the $L$-group are restricted to various closed subgroups. As was demonstrated by J. An, J.-K. Yu, and J. Yu [@AYY], dimension data determine the conjugacy class of a subgroup up to finitely many possibilities. Langlands coined the term "hadronic" for those automorphic representations (of Ramanujan type) whose parameter does not land in a proper subgroup (and, hence, whose $L$-functions, for $r$ irreducible nontrivial, should have no poles). At the end of the day, the idea goes, there should be a way to break down the stable trace formula of a group into "hadronic pieces" for $G$ and smaller (dual) groups.
3. The point of view on Altuğ's work presented in § [6.1](#ss:Altug){reference-type="ref" reference="ss:Altug"} suggests that, when removing non-Ramanujan and other "special" contributions from the stable trace formula, one ends up with the Kuznetsov formula. It is therefore unclear if anything is to be gained by working with the Arthur--Selberg trace formula, rather than directly with the Kuznetsov formula, as suggested early in the day by Sarnak [@Sarnak]. This is quite natural from the point of view of the geometric Langlands program, as well, where the Whittaker model has a very special place, essentially corresponding to the structure sheaf on the spectral side of the correspondence.
4. As we have seen, to date there has only been one successful global "Beyond Endoscopy" comparison of (relative) trace formulas with *unequal* underlying dual groups -- namely, the one in Venkatesh's thesis. Once the commutativity of a diagram such as [\[STFKTF\]](#STFKTF){reference-type="eqref" reference="STFKTF"} is established (expanding, slightly, upon Altuğ's work), the combination of the two will constitute a proof of functoriality from tori to $\operatorname{GL}_2$ in the way that Langlands originally envisioned -- using the Arthur--Selberg trace formula.
5. As we have seen in this section, there always seems to be a local comparison between the global comparisons established, even if that was not realized or used by the original authors. The local "transfer operators" could eventually guide our quest for a global comparison. In the most optimistic (probably too optimistic) scenario, they will be given by a combination of abelian Fourier transforms and other, rather innocuous, factors, which "in principle" admit a Poisson summation formula. Understanding the local transfer operators in arbitrary rank is the single most important task, in order to create good foundations for the Beyond Endoscopy program, and to formulate conjectures analogous to the Fundamental Lemma, that will guide the further development of the program.
6. Our discussion of global methods involved not only transfer operators, but also the Hankel transforms which encode the functional equations of $L$-functions, that we encountered in § [5.5](#ss:Hankel){reference-type="ref" reference="ss:Hankel"}. Their appearance is very natural, once $L$-functions are inserted into the trace formula, since one wants to move beyond the domain of their convergence. Indeed, Hankel transforms play a role in § [6.2](#ss:Jacquet){reference-type="ref" reference="ss:Jacquet"} in establishing a *full* meromorphic continuation of the nonstandard (Kuznetsov) trace formula, avoiding difficult and ad hoc analytic arguments, and they are probably behind the "Beyond Endoscopy" proof of the functional equation discussed in § [6.3](#ss:Herman){reference-type="ref" reference="ss:Herman"}. This supports the idea that the topics that we encountered in this article -- Langlands' strategy for proving functoriality, and the study of $L$-functions by means of Schwartz spaces and Fourier/Hankel transforms -- are very closely related, and will all have a role to play beyond endoscopy.
[^1]:
[^2]: By "pullback" of a Borel measure under a map $p:X\to Y$ which is a local homeomorphism, we will mean the measure on $X$ which, on every open $U\subset X$ which maps bijectively onto its image, coincides with the given measure on $p(U)$.
[^3]: Fixing a measure $|dx|$ on $F$ and a volume form $\omega_X$ on a smooth variety $X$, we obtain a measure $|\omega_X|$ on $X(F)$, which in local coordinates looks as follows: $\omega_X = \phi(\underline{x}) dx_1 \cdots dx_n$ $\Rightarrow$ $|\omega_X| = |\phi(\underline{x})| |dx_1 |\cdots |dx_n|$. Of course, as is common, for the most part we will be writing $dx$ instead of $|dx|$, etc., when it is clear that it refers to a measure, not a differential form.
[^4]: Most of these papers concern $\operatorname{GL}_2$ or $\operatorname{PGL}_2$, but for simplicity we will restrict ourselves to $\operatorname{SL}_2$; the passage to $\operatorname{GL}_2$ is rather innocuous.
[^5]: The exponential notation is used because the character group is written additively.
[^6]: We feel free to switch between half-densities and functions or measures by dividing/multiplying by a suitable eigen-half-density. There is something slightly subtle here, in that this eigen-half-density is not necessarily invariant. For example, in the Godement--Jacquet case, the space $\mathcal D_{{\operatorname{Std}}}$ should be defined as the product of the space of Schwartz functions by $(dx)^\frac{1}{2}$, where $dx$ is *additive* Haar measure on $M_n$. In other cases, with $r$ irreducible, the space $\mathcal D_r(G)$ should contain the "basic half-density" $\Phi_r(g) |\det g|^{\frac{1}{2}+s_r}$, where $\Phi_r$ is the "IC function" of the $L$-monoid $G_r$ determined by the highest weight $\lambda_r$ of $r$ (see [@BNS]), and $s_r =\left< \rho_G, \lambda_r\right>$, where $\rho_G$ is the half-sum of positive roots of $G$. (The reason for this normalization lies in the calculation of the Godement--Jacquet integral of $\Phi_r$ -- see [@BNS-erratum].)
| arxiv_math | {
"id": "2310.02438",
"title": "Local and global questions \"beyond endoscopy\"",
"authors": "Yiannis Sakellaridis",
"categories": "math.NT math.RT",
"license": "http://creativecommons.org/licenses/by-nc-sa/4.0/"
} |
---
author:
- Amrinder Kaur
- Ayyadurai Sankaranarayanan
title: On the Rankin--Selberg $L$-function related to the Godement--Jacquet $L$-function II
---
# Introduction
For $n \geq 2$, an element $z$ of the generalized upper half-plane $\mathcal{H}^n \cong GL(n,\mathbb{R})/\left( O(n,\mathbb{R}) \cdot \mathbb{R}^{\times} \right)$ takes the form $z=x \cdot y$, where $$x=
\begin{pmatrix}
1 & x_{1,2} & x_{1,3} & \cdots & x_{1,n} \\
& 1 & x_{2,3} & \cdots & x_{2,n} \\
& & \ddots & & \vdots \\
& & & 1 & x_{n-1,n} \\
& & & & 1 \\
\end{pmatrix}
,$$ $$y=
\begin{pmatrix}
y_1y_2 \cdots y_{n-1} & & & & \\
& y_1 y_2 \cdots y_{n-2} & & & \\
& & \ddots & & \\
& & & y_1 & \\
& & & & 1 \\
\end{pmatrix},$$ with $x_{i,j} \in \mathbb{R}$ for $1 \leq i < j \leq n$ and $y_i>0$ for $1 \leq i \leq n-1$.\
Let $v=(v_1,v_2,\dots,v_{n-1}) \in \mathbb{C}^{n-1}$. Let $\mathfrak{D}^n$ be the center of the universal enveloping algebra of $\mathfrak{gl}(n,\mathbb{R})$, where $\mathfrak{gl}(n,\mathbb{R})$ is the Lie algebra of $GL(n,\mathbb{R})$. The function $$J_v(z) = \prod_{i=1}^{n-1} \prod_{j=1}^{n-1} y_i^{b_{i,j}v_j}$$ with $$b_{i,j}=
\begin{cases}
ij & \text{if} \ i+j \leq n, \\
(n-i)(n-j) & \text{if} \ i+j \geq n,
\end{cases}$$ is an eigenfunction of every $D \in \mathfrak{D}^n$. We write $$D J_v(z) = \lambda_D J_v(z) \ \text{for every} \ D \in \mathfrak{D}^n.$$\
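As a quick sanity check of these exponents (this illustration is not part of the original exposition and is not used later), take $n=3$: the definition gives $b_{1,1}=b_{2,2}=1$ and $b_{1,2}=b_{2,1}=2$, so that $$J_v(z) = y_1^{\,v_1+2v_2} \, y_2^{\,2v_1+v_2},$$ the power function familiar from the $GL(3)$ theory.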
**Definition 1**. *[@Dg] Let $n \geq 2$, and let $v=(v_1,v_2,\dots,v_{n-1}) \in \mathbb{C}^{n-1}$. A Maass form for $SL(n,\mathbb{Z})$ of type $v$ is a smooth function $f \in \mathcal{L}^2(SL(n,\mathbb{Z}) \backslash \mathcal{H}^n)$ which satisfies*
1. *$f(\gamma z) = f(z)$, for all $\gamma \in SL(n,\mathbb{Z}), z \in \mathcal{H}^n$,*
2. *$Df(z) = \lambda_D f(z)$, for all $D \in \mathfrak{D}^n$,*
3. *$\int \limits_{(SL(n,\mathbb{Z}) \cap U) \backslash U} f(uz) du = 0,$\
for all upper triangular groups $U$ of the form $$U = \left\{
\begin{pmatrix}
I_{r_1} & & & \\
& I_{r_2} & & *\\
& & \ddots & \\
& & & I_{r_b} \\
\end{pmatrix} \right\},$$ with $r_1+r_2+\cdots+r_b=n$. Here $I_r$ denotes the $r \times r$ identity matrix, and $*$ denotes arbitrary real entries.\
*
A *Hecke--Maass form* is a Maass form which is an eigenfunction of the algebra of Hecke operators.
Let $f(z)$ be a Hecke--Maass form of type $v=(v_1,v_2,\dots,v_{n-1}) \in \mathbb{C}^{n-1}$ for $SL(n,\mathbb{Z})$. Then it has the Fourier expansion
$$\begin{aligned}
f(z) &= \sum_{\gamma \in U_{n-1}(\mathbb{Z}) \backslash SL(n-1,\mathbb{Z})} \sum_{m_1=1}^{\infty} \dots \sum_{m_{n-2}=1}^{\infty} \sum_{m_{n-1} \neq 0} \frac{A(m_1,\dots,m_{n-1})}{\prod_{j=1}^{n-1} \abs{m_j}^\frac{j(n-j)}{2}} \\
& \quad \times W_J \left( M \cdot \begin{pmatrix} \gamma & \\ & 1 \end{pmatrix} z, v, \psi_{1,\dots,1,\frac{m_{n-1}}{\abs{m_{n-1}}}}\right) ,\end{aligned}$$
where $$M =
\begin{pmatrix}
m_1\cdots m_{n-2} \cdot \abs{m_{n-1}} & & & & \\
& \ddots & & & \\
& & m_1m_2 & & \\
& & & m_1 & \\
& & & & 1
\end{pmatrix} ,$$
$$A(m_1,\dots,m_{n-1}) \in \mathbb{C} , \qquad A(1,\dots,1)=1,$$
$$\psi_{1,\dots,1,\epsilon} \left(
\begin{pmatrix}
1 & u_{n-1} & & & \\
& 1 & u_{n-2} & & * \\
& & \ddots & \ddots & \\
& & & 1 & u_1 \\
& & & & 1
\end{pmatrix}
\right) = e^{2 \pi i (u_1+\cdots +u_{n-2}+\epsilon u_{n-1})},$$
$U_{n-1}(\mathbb{Z})$ denotes the group of $(n-1) \times (n-1)$ upper triangular matrices with $1$s on the diagonal and integer entries above the diagonal, and $W_J$ is the Jacquet--Whittaker function.\
**Definition 2**. *If $f(z)$ is a Maass form of type $(v_1,\dots,v_{n-1}) \in \mathbb{C}^{n-1}$, then $$\tilde{f}(z) := f(w \cdot (z^{-1})^T \cdot w),$$ $$w = \begin{pmatrix}
&&& (-1)^{ \left[ \frac{n}{2} \right]} \\
&& 1 & \\
& \reflectbox{$\ddots$} && \\
1&&&
\end{pmatrix}$$ is a Maass form of type $(v_{n-1},\dots,v_1)$ for $SL(n,\mathbb{Z})$ called the dual Maass form. If $A(m_1, \dots, m_{n-1})$ is the $(m_1, \dots, m_{n-1})$--Fourier coefficient of $f$, then $A(m_{n-1} , \dots, m_1)$ is the corresponding Fourier coefficient of $\tilde{f}$.\
*
**Definition 3**. *[@YjGl] The Godement--Jacquet $L$-function $L_f(s)$ attached to $f$ is defined for $\Re(s) >1$ by $$L_f(s) = \sum_{m=1}^{\infty} \frac{A(m,1,\dots,1)}{m^s} = \prod_p \prod_{i=1}^n (1-\alpha_{p,i}p^{-s})^{-1} ,$$ where the $\{ \alpha_{p,i} \}, 1\leq i \leq n$ are the complex roots of the monic polynomial*
*$$X^n + \sum_{r=1}^{n-1} (-1)^r A(\overbrace{1,\dots,1}^{r-1 \; \text{terms}},p,1,\dots,1) X^{n-r} +(-1)^n \in \mathbb{C}[X], \quad \text{and}$$*
*$$A(\overbrace{1,\dots,1}^{r-1},p,1,\dots,1) = \sum_{1 \leq i_1 < \dots < i_r \leq n} \alpha_{p,i_1} \dots \alpha_{p,i_r}, \qquad \text{for} \; \; 1 \leq r \leq n-1 .$$\
*
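For orientation (this special case is recorded only for comparison and is not treated in the paper, whose main results assume $n \geq 3$), when $n=2$ the monic polynomial above reduces to $X^2 - A(p)X + 1$, so that $$\alpha_{p,1}+\alpha_{p,2} = A(p), \qquad \alpha_{p,1}\,\alpha_{p,2} = 1, \qquad L_f(s) = \prod_p \left( 1 - A(p)p^{-s} + p^{-2s} \right)^{-1},$$ the classical (normalised) Hecke $L$-function of a $GL(2)$ Maass form.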
**Definition 4**. *[@Dg] For $n \geq 2$, let $f,g$ be two Maass forms for $SL(n,\mathbb{Z})$ of type $v_f,v_g \in \mathbb{C}^{n-1}$, respectively, with Fourier expansions:*
*$$\begin{aligned}
f(z) &= \sum_{\gamma \in U_{n-1}(\mathbb{Z}) \backslash SL(n-1,\mathbb{Z})} \sum_{m_1=1}^{\infty} \dots \sum_{m_{n-2}=1}^{\infty} \sum_{m_{n-1} \neq 0} \frac{A(m_1,\dots,m_{n-1})}{\prod_{j=1}^{n-1} \abs{m_j}^\frac{j(n-j)}{2}} \\
& \quad \times W_J \left( M \cdot \begin{pmatrix} \gamma & \\ & 1 \end{pmatrix} z, v_f, \psi_{1,\dots,1,\frac{m_{n-1}}{\abs{m_{n-1}}}}\right) ,\end{aligned}$$*
*$$\begin{aligned}
g(z) &= \sum_{\gamma \in U_{n-1}(\mathbb{Z}) \backslash SL(n-1,\mathbb{Z})} \sum_{m_1=1}^{\infty} \dots \sum_{m_{n-2}=1}^{\infty} \sum_{m_{n-1} \neq 0} \frac{B(m_1,\dots,m_{n-1})}{\prod_{j=1}^{n-1} \abs{m_j}^\frac{j(n-j)}{2}} \\
& \quad \times W_J \left( M \cdot \begin{pmatrix} \gamma & \\ & 1 \end{pmatrix} z, v_g, \psi_{1,\dots,1,\frac{m_{n-1}}{\abs{m_{n-1}}}}\right). \end{aligned}$$*
*Let $s \in \mathbb{C}$. Then the Rankin--Selberg $L$-function, denoted as $L_{f\times g}(s)$, is defined by $$L_{f\times g}(s) = \zeta(ns) \sum_{m_1=1}^{\infty} \dots \sum_{m_{n-1}=1}^{\infty} \frac{A(m_1,\dots,m_{n-1}) \cdot \overline{B(m_1,\dots,m_{n-1})}}{(m_1^{n-1} m_2^{n-2} \dots m_{n-1})^s},$$\
which converges absolutely provided $\Re(s)$ is sufficiently large.\
*
In the special case $g=f$, we have $$L_{f\times f}(s) = \zeta(ns) \sum_{m_1=1}^{\infty} \dots \sum_{m_{n-1}=1}^{\infty} \frac{\abs{A(m_1,\dots,m_{n-1})}^2}{(m_1^{n-1} m_2^{n-2} \dots m_{n-1})^s}$$ for $\Re(s) > 1$.\
Let $E_v(z)$ denote the minimal parabolic Eisenstein series. The $L$-function associated to $E_v$ (see [@Dg Equation (10.8.5)]) is computed as $$L_{E_v}(s)= \sum_{c_1=1}^{\infty} \cdots \sum_{c_{n-1}=1}^{\infty} \sum_{m=1}^{\infty} (m c_1 \cdots c_{n-1})^{-s} J_{v-\frac{1}{n}} \left(
\begin{pmatrix}
\frac{c_1}{m} & & & \\
& \ddots & & \\
& & \frac{c_{n-1}}{m} & \\
& & & 1
\end{pmatrix}
\right) .$$
From [@Dg Theorem 10.8.6], there exist functions $\lambda_i: \mathbb{C}^{n-1} \to \mathbb{C}$ satisfying $\Re \left( \lambda_i(v) \right) =0$ if $\Re(v_i) =\frac{1}{n}$ for $i=1,\dots,n-1$, such that the $L$-function associated to $E_v$ is just a product of shifted Riemann zeta functions of the form $$L_{E_v}(s) = \prod_{i=1}^n \zeta \left( s-\lambda_i(v) \right) .$$\
We write $$L_{f \times f}(s) := \sum_{m=1}^{\infty} \frac{b(m)}{m^s} \qquad \text{for} \ \Re(s) > 1.$$ Throughout, we write $s=\sigma+it$ and take $t$ to be sufficiently large.\
In [@AkAs], we proved the following two theorems. Theorem A is an unconditional result while Theorem B is a conditional result.
**Theorem A.** *Let $n \geq 3$ be an arbitrary but fixed integer. For $k \geq k_0(n) = \frac{n^2(n+1)}{2} +n$, we have $$\sum_{m \leq x} \frac{b(m)}{k!} \left( 1-\frac{m}{x} \right)^k = \frac{Cx}{(k+1)!} + O_{n} (\log x) .$$ Here $C$ is an effective constant depending only on $f$.*\
**Theorem B.** *Assume the coefficient growth hypothesis and the Lindelöf hypothesis for $L_{f \times f}(s)$. Let $n \geq 3$ be an arbitrary but fixed integer. Then the asymptotic formula $$\sum_{m \leq x} \frac{b(m)}{k!} \left( 1-\frac{m}{x} \right)^k = \frac{Cx}{(k+1)!} + O_{n,\epsilon}(x^{\frac{1}{2}+\epsilon})$$ holds for every positive integer $k \geq 1$.*\
The aim of this article is twofold. First, we want to improve the range of $k$ in Theorem A with a better error term. Then, by a reduction argument, we will obtain an unconditional result, namely an asymptotic formula for the sum $\sum \limits_{m \leq x} b(m)$. Thus, we prove:
**Theorem 1**. *Let $n \geq 3$ be an arbitrary but fixed integer. For $k \geq k_1(n) = \left[ \frac{n^2}{2} \right]+1$, we have $$\sum_{m \leq x} \frac{b(m)}{k!} \left( 1-\frac{m}{x} \right)^k = \frac{Cx}{(k+1)!} + O_{n} (1) .$$ Here $C$ is an effective constant depending only on $f$.\
*
**Theorem 2**. *For sufficiently large $x$, we have $$\sum_{m \leq x} b(m) = \frac{2^{k_1}C}{(k_1+1)} x + O_n \left( x^{1-\frac{1}{2^{k_1}}} \right)$$ where $k_1 = k_1(n) = \left [ \frac{n^2}{2} \right] +1.$\
*
**Remark 1**.
**Remark 2**.
*Throughout the paper, we assume that $f$ is a self-dual Hecke--Maass form for $SL(n,\mathbb{Z})$ and $\epsilon$ is any small positive constant.*
# Preliminaries
In this section, we present some necessary properties of the Rankin--Selberg $L$-function which are used later.
## Euler Product
Fix $n \geq 2$. Let $f,g$ be two Maass forms for $SL(n,\mathbb{Z})$ with Euler products
$$L_f(s) = \sum_{m=1}^{\infty} \frac{A(m,1,\dots,1)}{m^s} = \prod_p \prod_{i=1}^n (1-\alpha_{p,i}p^{-s})^{-1} ,$$
$$L_g(s) = \sum_{m=1}^{\infty} \frac{B(m,1,\dots,1)}{m^s} = \prod_p \prod_{i=1}^n (1-\beta_{p,i}p^{-s})^{-1} ,$$
then $L_{f \times g}(s)$ will have an Euler product of the form:
$$L_{f \times g}(s) = \prod_p \prod_{i=1}^n \prod_{j=1}^n (1-\alpha_{p,i} \overline{\beta_{p,j}} p^{-s})^{-1} .$$\
## Functional Equation
For $n \geq 2$, let $f,g$ be two Maass forms of types $v_f,v_g$ for $SL(n,\mathbb{Z})$ whose associated $L$-functions $L_f,L_g$ satisfy the functional equations: $$\begin{aligned}
\Lambda_f(s) &:= \prod_{i=1}^n \pi^{\frac{-s+\lambda_i(v_f)}{2}} \Gamma \left( \frac{s-\lambda_i(v_f)}{2} \right) L_f(s) \\
&= \Lambda_{\tilde{f}}(1-s), \\
\Lambda_g(s) &:= \prod_{j=1}^n \pi^{\frac{-s+\lambda_j(v_g)}{2}} \Gamma \left( \frac{s-\lambda_j(v_g)}{2} \right) L_g(s) \\
&= \Lambda_{\tilde{g}}(1-s),\end{aligned}$$ where $\tilde{f},\tilde{g}$ are the Dual Maass forms.\
Then the Rankin--Selberg $L$-function $L_{f \times g}(s)$ has a meromorphic continuation to all $s \in \mathbb{C}$ with at most a simple pole at $s=1$ with residue proportional to $\langle f,g \rangle$, the Petersson inner product of $f$ with $g$. $L_{f \times g}(s)$ satisfies the functional equation: $$\begin{aligned}
\Lambda_{f \times g}(s) &:= \prod_{i=1}^n \prod_{j=1}^n \pi^{\frac{-s+\lambda_i(v_f)+\overline{\lambda_j(v_g)}}{2}} \Gamma \left( \frac{s-\lambda_i(v_f)-\overline{\lambda_j(v_g)}}{2} \right) L_{f \times g}(s) \\
&= \Lambda_{\tilde{f} \times \tilde{g}}(1-s).\end{aligned}$$
From Equation (10.8.5) and Remark 10.8.7 of [@Dg], the powers of $\pi$ take the much simpler form: $$\prod_{i=1}^n \pi^{\frac{-s+\lambda_i(v)}{2}} = \pi^{\frac{-ns}{2}} , \qquad \prod_{i=1}^n \prod_{j=1}^n \pi^{\frac{-s+\lambda_i(v_f)+\overline{\lambda_j(v_g)}}{2}} = \pi^{\frac{-n^2 s}{2}} .$$
Hence, we get $$\begin{aligned}
\Lambda_{f \times g}(s) &:= \pi^{\frac{-n^2 s}{2}} \prod_{i=1}^n \prod_{j=1}^n \Gamma \left( \frac{s-\lambda_i(v_f)-\overline{\lambda_j(v_g)}}{2} \right) L_{f \times g}(s) \\
&= \Lambda_{\tilde{f} \times \tilde{g}}(1-s).\end{aligned}$$
We take $g=f$ and $f$ to be a self-dual Maass form of type $v$ so that $$\begin{aligned}
\Lambda_{f \times f}(s) &:= \pi^{\frac{-n^2 s}{2}} \prod_{i=1}^n \prod_{j=1}^n \Gamma \left( \frac{s-\lambda_i(v)-\overline{\lambda_j(v)}}{2} \right) L_{f \times f}(s) \\
&= \Lambda_{f \times f}(1-s). \\\end{aligned}$$
## Bound for the conversion factor
Let $f$ be a self-dual Hecke--Maass form. Then we have the functional equation $$\Lambda_{f \times f}(s) = \Lambda_{f \times f}(1-s) .$$
If we write $L_{f \times f}(s) = \chi_{f \times f}(s) L_{f \times f}(1-s)$, then from our work in [@AkAs] the conversion factor $\chi_{f \times f}(s)$ can be written as
$$\chi_{f \times f}(\sigma+it) \ll \abs{t}^{n^2 \left( \frac{1}{2}-\sigma \right)}.$$
This bound is true in any fixed vertical strip $a \leq \sigma \leq b$ and sufficiently large $t$.\
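For the reader's convenience, we sketch where this exponent comes from (the display below is only a heuristic summary; the detailed argument may be found in [@AkAs]). The functional equation gives $$\chi_{f \times f}(s) = \pi^{n^2 \left( s-\frac{1}{2} \right)} \prod_{i=1}^n \prod_{j=1}^n \frac{\Gamma \left( \frac{1-s-\lambda_i(v)-\overline{\lambda_j(v)}}{2} \right)}{\Gamma \left( \frac{s-\lambda_i(v)-\overline{\lambda_j(v)}}{2} \right)},$$ and by Stirling's formula each of these $n^2$ Gamma ratios is $\ll \abs{t}^{\frac{1}{2}-\sigma}$ in any fixed vertical strip for sufficiently large $\abs{t}$ (the real parts of the $\lambda_i(v)$ being bounded); multiplying the $n^2$ contributions yields the exponent $n^2 \left( \frac{1}{2}-\sigma \right)$.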
*Hereafter, throughout the paper, we assume $n \geq 3$.*\
# Some Lemmas
**Lemma 1**. *For $\Re(s) \geq 1+\epsilon$, $L_{f \times f}(s)$ is absolutely convergent.*
*Proof.* The Rankin--Selberg $L$-function $L_{f \times f}(s)$ has a meromorphic continuation to all $s \in \mathbb{C}$ with a simple pole at $s=1$. It is easy to see that $$L_{f\times f}(s) = \zeta(ns) \sum_{m_1=1}^{\infty} \dots \sum_{m_{n-1}=1}^{\infty} \frac{\abs{A(m_1,\dots,m_{n-1})}^2}{(m_1^{n-1} m_2^{n-2} \dots m_{n-1})^s}$$ implies that the coefficients $b(m)$ are non-negative. Landau's lemma asserts that a Dirichlet series with non-negative coefficients must be absolutely convergent up to its first pole. Hence, $L_{f \times f}(s)$ is absolutely convergent in the half-plane $\Re(s) \geq 1+\epsilon$. ◻
**Lemma 2**. *For sufficiently large $t$, we have $$L_{f \times f}(s) \ll \left(\, \abs{t}+10 \right)^{\frac{n^2}{2}(1+\epsilon-\sigma)}$$ uniformly for $-\epsilon \leq \sigma \leq 1+\epsilon$.*
*Proof.* We prove along the same lines as in [@As Lemma 3.5]. From Lemma [Lemma 1](#l1){reference-type="ref" reference="l1"}, we have $$\abs{L_{f \times f}(1+\epsilon+it)} \ll 1 ,$$ and by the functional equation $$\begin{aligned}
\abs{L_{f \times f}(-\epsilon+it)} &= \abs{\chi_{f \times f}(-\epsilon+it) L_{f \times f}(1+\epsilon-it)} \\
&\ll \left(\, \abs{t}+10 \right)^{n^2 \left( \frac{1}{2} + \epsilon \right)}. \\\end{aligned}$$
Now we apply the maximum modulus principle to the function $$F(w) = L_{f \times f}(w) e^{(w-s)^2} X^{w-s}$$ in a rectangle whose vertical sides lie on the lines $\Re(w)=-\epsilon$ and $\Re(w)=1+\epsilon$, and whose horizontal sides are chosen so that, as quantified below, the factor $e^{(w-s)^2}$ is negligibly small on them. This gives $$\abs{L_{f \times f}(s)} \ll V_1+V_2+H_1+H_2 ,$$ where $V_1,\,V_2$ are the contributions from the vertical lines and $H_1,\,H_2$ are the contributions from the horizontal lines.\
Let $w=u+iv$ and $s=\sigma+it$. As $$\begin{aligned}
\exp \{ (w-s)^2 \} &= \exp \{ (u-\sigma)^2 - (v-t)^2 +2i(u-\sigma)(v-t) \} \\
\abs{ \exp \{ (w-s)^2 \} } &= \exp \{ (u-\sigma)^2 - (v-t)^2 \} \\
&\ll \exp \{ -(\log t)^2 \},\end{aligned}$$
we see that $\exp \{ (w-s)^2 \}$ decays exponentially for large $t$ on horizontal lines. Thus, $$\begin{aligned}
H_1 &\ll 1, \; H_2 \ll 1, \; V_1 \ll X^{1+\epsilon-\sigma} \\
V_2 &\ll \left(\, \abs{t}+10 \right)^{n^2 \left( \frac{1}{2} + \epsilon \right)} X^{-\epsilon-\sigma}.\end{aligned}$$
Therefore, $$\abs{L_{f \times f}(s)} \ll X^{1+\epsilon-\sigma} + \left(\, \abs{t}+10 \right)^{n^2 \left( \frac{1}{2} + \epsilon \right)} X^{-\epsilon-\sigma} +1 .$$\
We choose $X$ such that $$\begin{aligned}
X^{1+\epsilon-\sigma} &\sim \left(\, \abs{t}+10 \right)^{n^2 \left( \frac{1}{2} + \epsilon \right)} X^{-\epsilon-\sigma} \\
\text{i.e.,} \; X &\sim \left(\, \abs{t}+10 \right)^{\frac{n^2}{2}} \end{aligned}$$ so that $$\begin{aligned}
\abs{L_{f \times f}(s)} &\ll \left(\, \abs{t}+10 \right)^{\frac{n^2}{2}(1+\epsilon-\sigma)} .\end{aligned}$$ This completes the proof of this lemma.\
◻
**Lemma 3**. *For $0 \leq \Re(s) \leq 1+\epsilon$, we have uniformly $$L_{f \times f}(s) \ll \left(\, \abs{t}+10 \right)^{\frac{n^2}{2} + \epsilon}.$$*
*Proof.* Follows from Lemma [Lemma 2](#l2){reference-type="ref" reference="l2"}.\
◻
**Lemma 4**. *Let $c$ and $y$ be any positive real numbers and let $T$ be sufficiently large. Then we have $$\frac{1}{2 \pi i} \int_{c-iT}^{c+iT} \frac{y^s}{s(s+1) \dots (s+k)} ds =
\begin{cases}
\frac{1}{k!} \left( 1-\frac{1}{y} \right)^k + O \left( \frac{4^ky^c}{T^k} \right) &, y \geq 1 \\
O \left( \frac{1}{T^k} \right) &, 0<y \leq 1.
\end{cases}$$*
*Proof.* See [@AsSk Lemma 3.2].\
◻
**Remark 3**.
**Lemma 5**. *Let $A(x)$ be a monotonically increasing function and let $$B(x) = \frac{1}{x} \int_1^x A(t) \, dt .$$ If $$B(x) = cx + O \left( \frac{x}{E(x)} \right),$$ then $$A(x) = 2cx + O \left( \frac{x}{\sqrt{E(x)}} \right).$$*
*Proof.* Since $$B(x) = \frac{1}{x} \int_1^x A(t) \ dt ,$$ we have $$(x+\delta)B(x+\delta) -xB(x) = \int_x^{x+\delta} A(t) \ dt >A(x) \delta$$ where $\delta=o(x)$ is chosen later. Thus $$\begin{aligned}
A(x) &< \left( 1+\frac{x}{\delta} \right) \left( cx +c\delta + O \left( \frac{x}{E(x)} \right) \right) -\frac{x}{\delta} \left( cx + O \left( \frac{x}{E(x)} \right) \right)\\
&= cx + c\delta + O \left( \frac{x}{E(x)} \right) + \frac{cx^2}{\delta} +cx + O \left( \frac{x^2}{\delta E(x)} \right) -\frac{cx^2}{\delta} + O \left( \frac{x^2}{\delta E(x)} \right) \\
&= 2cx + c\delta + O \left( \frac{x^2}{\delta E(x)} \right).\end{aligned}$$
The parameter $\delta$ is chosen such that $$\frac{x^2}{\delta E(x)} < \delta$$ i.e., $$\delta > \frac{x}{\sqrt{E(x)}}.$$
Thus, we get $$A(x) < 2cx + O \left( \frac{x}{\sqrt{E(x)}} \right).$$\
Also, $$xB(x) - (x-\delta)B(x-\delta) = \int_{x-\delta}^x A(t) \ dt < A(x) \delta$$ gives $$\begin{aligned}
A(x) &> \frac{x}{\delta} \left( cx + O \left( \frac{x}{E(x)} \right) \right) + \left( 1- \frac{x}{\delta} \right) \left( cx -c\delta + O \left( \frac{x}{E(x)} \right) \right) \\
&= \frac{cx^2}{\delta} + O \left( \frac{x^2}{\delta E(x)} \right) +cx -c\delta + O \left( \frac{x}{E(x)} \right) -\frac{cx^2}{\delta} +cx + O \left( \frac{x^2}{\delta E(x)} \right) \\
&= 2cx - c\delta + O \left( \frac{x^2}{\delta E(x)} \right). \end{aligned}$$
We choose $\delta$ so that $$\frac{x^2}{\delta E(x)} < \delta$$ i.e., $$\delta > \frac{x}{\sqrt{E(x)}}.$$
Thus, combining this lower bound with the upper bound obtained above, we get $$A(x) = 2cx + O \left( \frac{x}{\sqrt{E(x)}} \right).$$\
◻
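A toy example may help explain the factor $2$ in Lemma [Lemma 5](#l5){reference-type="ref" reference="l5"} (this illustration is ours and is not needed in the sequel): if $A(t)=2ct$ exactly, then $$B(x) = \frac{1}{x} \int_1^x 2ct \, dt = cx - \frac{c}{x} = cx + O \left( \frac{1}{x} \right),$$ so averaging a linear function halves its slope; Lemma [Lemma 5](#l5){reference-type="ref" reference="l5"} inverts this averaging, at the price of a square root in the error term.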
# Proof of Theorem [Theorem 1](#t1){reference-type="ref" reference="t1"} {#proof-of-theorem-t1}
Let $y =\frac{x}{m} \geq 1$ and $c=1+\epsilon$ in Lemma [Lemma 4](#l4){reference-type="ref" reference="l4"} so that
$$\begin{aligned}
\frac{1}{k!} \left( 1-\frac{m}{x} \right)^k &= \frac{1}{2 \pi i} \int_{1+\epsilon-iT}^{1+\epsilon+iT} \frac{ \left( \frac{x}{m} \right)^s}{s(s+1)\dots(s+k)} ds + O \left( \frac{4^k x^{1+\epsilon}}{T^k m^{1+\epsilon}} \right). \end{aligned}$$ Hence, $$\begin{aligned}
\sum_{m \leq x} \frac{b(m)}{k!} \left( 1-\frac{m}{x} \right)^k &= \sum_{m \leq x} \frac{b(m)}{2 \pi i} \int_{1+\epsilon-iT}^{1+\epsilon+iT} \frac{ \left( \frac{x}{m} \right)^s}{s(s+1)\dots(s+k)} ds \\
&\quad+ O \left( \frac{4^k x^{1+\epsilon}}{T^k} \sum_{m \leq x} \frac{b(m)}{m^{1+\epsilon}} \right) \\
&= \frac{1}{2 \pi i} \int_{1+\epsilon-iT}^{1+\epsilon+iT} \frac{L_{f \times f}(s) x^s}{s(s+1)\dots(s+k)} ds + O \left( \frac{4^k x^{1+\epsilon}}{T^k} \right).\end{aligned}$$ Summation and integral can be interchanged because of absolute convergence. Now we move the line of integration to $\Re(s)=0$.
By Cauchy's residue theorem, $$\begin{aligned}
&\frac{1}{2 \pi i} \left[ \int_{1+\epsilon-iT}^{1+\epsilon+iT} + \int_{1+\epsilon+iT}^{iT} + \int_{iT}^{-iT} + \int_{-iT}^{1+\epsilon-iT} \right] \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds \\
&= \text{Res}_{s=1} \frac{L_{f \times f}(s) x^s}{s(s+1)\dots (s+k)} \\
&= \lim_{s \to 1} \frac{(s-1)L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} \\
&= \frac{Cx}{(k+1)!}\end{aligned}$$ where $C=\lim \limits_{s \to 1} (s-1) L_{f \times f}(s)$, depends on $f$.\
Hence, $$\begin{aligned}
&\frac{1}{2 \pi i} \int_{1+\epsilon-iT}^{1+\epsilon+iT} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds \\
&= \frac{Cx}{(k+1)!} + \frac{1}{2 \pi i} \left[ \int_{iT}^{1+\epsilon+iT} + \int_{-iT}^{iT} + \int_{1+\epsilon-iT}^{-iT} \right] \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds . \\\end{aligned}$$
Horizontal line contributions are in absolute value: $$\begin{aligned}
&\abs{ \frac{1}{2 \pi i} \int_{iT}^{1+\epsilon+iT} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds } \\
&\quad= \abs{ \frac{1}{2 \pi i} \int_0^{1+\epsilon} \frac{L_{f \times f}(\sigma+iT)x^{\sigma+iT}}{(\sigma+iT)(\sigma+iT+1)\dots (\sigma+iT+k)} d\sigma } \\
&\quad \leq \frac{1}{2 \pi} \int_0^{1+\epsilon} \frac{\abs{L_{f \times f}(\sigma+iT)}x^{\sigma}}{T^{k+1}} d\sigma \\
&\quad \ll T^{\frac{n^2}{2}-k-1+\epsilon}x^{1+\epsilon} . \\\end{aligned}$$
The left vertical line contribution is: $$\begin{aligned}
\frac{1}{2 \pi i} \int_{-iT}^{iT} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds &= \frac{1}{2 \pi i} \int_{\abs{t} \leq t_0, \atop \sigma=0} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds \\
& \quad + \frac{1}{2 \pi i} \int_{t_0 \leq \abs{t} \leq T, \atop \sigma = 0} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds.\\\end{aligned}$$
We note that $$\begin{aligned}
&\abs{ \frac{1}{2 \pi i} \int_{\abs{t} \leq t_0, \atop \sigma =0} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds } \\
&= \abs{ \frac{1}{2 \pi i} \int_{\abs{t} \leq t_0} \frac{L_{f \times f} (it) x^{it}}{ (it) (it+1) \dots ( it+k)} i dt } \\
& \leq \frac{1}{2 \pi} \int_{\abs{t} \leq t_0} \frac{t^{\frac{n^2}{2}-1+\epsilon}}{k!} dt \\
& \ll_n 1\end{aligned}$$ and $$\begin{aligned}
&\abs{ \frac{1}{2 \pi i} \int_{t_0 \leq \, \abs{t} \leq T, \atop \sigma = 0} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots (s+k)} ds } \\
&= \abs{ \frac{1}{2 \pi i} \int_{t_0 \leq \, \abs{t} \leq T} \frac{L_{f \times f} (it) x^{it}}{(it) (it+1) \dots (it+k)} i dt } \\
& \leq \frac{1}{2 \pi} \int_{t_0 \leq \, \abs{t} \leq T} \frac{t^{\frac{n^2}{2}+\epsilon}}{t^{k+1}} dt \\
& \ll T^{\frac{n^2}{2} - k +\epsilon}. \\\end{aligned}$$
Hence, $$\begin{aligned}
\frac{1}{2 \pi i} \int_{1+\epsilon-iT}^{1+\epsilon+iT} \frac{L_{f \times f}(s)x^s}{s(s+1)\dots(s+k)} ds &= \frac{Cx}{(k+1)!} + O(T^{\frac{n^2}{2}-k-1+\epsilon}x^{1+\epsilon}) \\
&\quad + O (T^{\frac{n^2}{2}-k+\epsilon}) +O_n(1). \end{aligned}$$ This implies that $$\begin{aligned}
\sum_{m \leq x} \frac{b(m)}{k!} \left( 1- \frac{m}{x} \right)^k &= \frac{Cx}{(k+1)!} + O(T^{\frac{n^2}{2}-k-1+\epsilon}x^{1+\epsilon}) + O (T^{\frac{n^2}{2}-k+\epsilon}) \\
&\quad + O ( T^{-k} x^{1+\epsilon} ) +O_n(1). \\\end{aligned}$$
First we choose $T=\frac{x}{10}$ so that $$\begin{aligned}
\sum_{m \leq x} \frac{b(m)}{k!} \left( 1-\frac{m}{x} \right)^k = \frac{Cx}{(k+1)!} + O(x^{\frac{n^2}{2}-k+\epsilon}) + O(x^{\frac{n^2}{2}-k+\epsilon}) +O(x^{1-k+\epsilon}) + O_n(1) .\\\end{aligned}$$
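We record the elementary exponent check behind the next assertion (it is implicit in the argument and spelled out here only for convenience): for $k \geq k_1(n) = \left[ \frac{n^2}{2} \right]+1$ and $\epsilon$ small enough, $$\frac{n^2}{2}-k+\epsilon \; \leq \; \frac{n^2}{2}-\left[ \frac{n^2}{2} \right]-1+\epsilon \; \leq \; -\frac{1}{2}+\epsilon \; < \; 0, \qquad 1-k+\epsilon < 0,$$ so every power of $x$ appearing in the error terms above is $O_n(1)$.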
Thus for $k \geq k_1(n) = \left[ \frac{n^2}{2} \right]+1$, we finally arrive at $$\sum_{m \leq x} \frac{b(m)}{k!} \left( 1-\frac{m}{x} \right)^k = \frac{Cx}{(k+1)!} + O_{n} (1),$$ which holds for all integers $k \geq k_1(n)$.\
# Proof of Theorem [Theorem 2](#t2){reference-type="ref" reference="t2"} {#proof-of-theorem-t2}
From Theorem [Theorem 1](#t1){reference-type="ref" reference="t1"} with $k=k_1$ we have, $$\sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1} = \frac{Cx}{(k_1+1)!} + O_{n} (1) .$$
Note that $$\begin{aligned}
\sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1} &= \sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1-1} \left( 1-\frac{m}{x} \right) \\
&= \frac{1}{x} \sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1-1} (x-m) \\
&= \frac{1}{x} \sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1-1} \int_m^x dt \\
&= \frac{1}{x} \int_1^x \left( \sum_{m \leq t} \frac{b(m)}{k_1!} \left( 1-\frac{m}{t} \right)^{k_1-1} \right) dt. \\\end{aligned}$$
Using Lemma [Lemma 5](#l5){reference-type="ref" reference="l5"} with $E(x) = 10x$, we can find the $(k_1-1)$-th Riesz mean. In particular, we get $$\sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1-1} = \frac{2Cx}{(k_1+1)!} + O_{n} (x^{1-\frac{1}{2}} ) .$$
Once again using Lemma [Lemma 5](#l5){reference-type="ref" reference="l5"}, we get $$\sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1-2} = \frac{2^2Cx}{(k_1+1)!} + O_{n} (x^{1-\frac{1}{2^2}} ) .$$
Repeatedly using the result in Lemma [Lemma 5](#l5){reference-type="ref" reference="l5"} $k_1$ times, we get $$\sum_{m \leq x} \frac{b(m)}{k_1!} = \frac{2^{k_1}Cx}{(k_1+1)!} + O_n \left( x^{1-\frac{1}{2^{k_1}}} \right) .$$ This proves the theorem.\
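The repeated application of Lemma [Lemma 5](#l5){reference-type="ref" reference="l5"} can be summarised as follows (this compact restatement is ours): for $0 \leq j \leq k_1$, $$\sum_{m \leq x} \frac{b(m)}{k_1!} \left( 1-\frac{m}{x} \right)^{k_1-j} = \frac{2^{j}Cx}{(k_1+1)!} + O_{n} \left( x^{1-\frac{1}{2^{j}}} \right),$$ each application doubling the constant in the main term and halving the gap between the error exponent and $1$; the case $j=k_1$ is the display above, and multiplying through by $k_1!$ gives exactly the statement of Theorem [Theorem 2](#t2){reference-type="ref" reference="t2"}.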
The authors are thankful to Prof. R. Balasubramanian for some fruitful discussions related to this paper. The first author is thankful to the UGC for its support through the NET Senior Research Fellowship (UGC Ref. No. 1004/(CSIR--UGC NET Dec. 2017)).\
# Declarations {#declarations .unnumbered}
- Funding - The first author is supported by University Grants Commission's NET Senior Research Fellowship (Ref. No. 1004/(CSIR--UGC NET Dec. 2017)).
- Conflict of interest/Competing interests - The authors have no conflicts of interest to declare.
- Ethics approval - Not Applicable
- Consent to participate - Not Applicable
- Consent for publication - The authors give their consent for the publication of this article.
- Availability of data and materials - Not Applicable
- Code availability - Not Applicable
- Authors' contributions - The authors have equally contributed to this work.\
R. Balasubramanian, K. Ramachandra, On the number of integers $n$ such that $n d(n) \leq x$. *Acta Arithmetica*, **49(4)** (1988) 313--322. <https://doi.org/10.4064/aa-49-4-313-322>
R. Balasubramanian, O. Ramaré, P. Srivastav, Product of three primes in large arithmetic progressions. *International Journal of Number Theory*, **19(04)** (2023) 843--857. <https://doi.org/10.1142/S1793042123500422>
R. Balasubramanian, P. Srivastav, On Selberg's approximation to the twin prime problem, *arXiv preprint*, arXiv:1504.04347 (2015). <https://doi.org/10.48550/arXiv.1504.04347>
D. Goldfeld, *Automorphic forms and $L$-functions for the group GL(n,$\mathbb{R}$)* (Vol. 99), Cambridge University Press, 2006. <https://doi.org/10.1017/CBO9780511542923>
A. E. Ingham, *The distribution of prime numbers* (No. 30), Cambridge University Press, 1990. <https://doi.org/10.2307/3606518>
Y. Jiang, G. Lü, Exponential sums formed with the von Mangoldt function and Fourier coefficients of ${GL}(m)$ automorphic forms, *Monatshefte für Mathematik*, **184(4)** (2017) 539--561. <https://doi.org/10.1007/s00605-017-1068-4>
A. Kaur, A. Sankaranarayanan, On the Rankin--Selberg $L$-function related to the Godement--Jacquet $L$-function. *Acta Mathematica Hungarica*, **169** (2023) 88--107. <https://doi.org/10.1007/s10474-023-01296-9>
A. Sankaranarayanan, Zeros of quadratic zeta-functions on the critical line, *Acta Arithmetica*, **69** (1995) 21--38. <https://doi.org/10.4064/aa-69-1-21-38>
A. Sankaranarayanan, S. K. Singh, On the Riesz means of $\frac {n}{\phi (n)}$, *Hardy--Ramanujan Journal*, **36** (2013) 8--20. <https://doi.org/10.46298/hrj.2013.179>
| arxiv_math | {
"id": "2309.00243",
"title": "On the Rankin-Selberg $L$-function related to the Godement-Jacquet\n $L$-function II",
"authors": "Amrinder Kaur, Ayyadurai Sankaranarayanan",
"categories": "math.NT",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We study concentration properties for laws of non-linear Gaussian functionals on metric spaces. Our focus lies on measures with non-Gaussian tail behaviour which are beyond the reach of Talagrand's classical Transportation-Cost Inequalities (TCIs). Motivated by solutions of Rough Differential Equations and relying on a suitable contraction principle, we prove generalised TCIs for functionals that arise in the theory of regularity structures and, in particular, in the cases of rough volatility and the two-dimensional Parabolic Anderson Model. In doing so, we also extend existing results on TCIs for diffusions driven by Gaussian processes.
address:
- Department of Mathematics, Imperial College London
- Department of Mathematics, Imperial College London, and the Alan Turing Institute
author:
- Ioannis Gasteratos
- Antoine Jacquier
bibliography:
- TPC.bib
title: Transportation-Cost inequalities for non-linear Gaussian functionals
---
# Introduction
Talagrand's Transportation-Cost Inequalities (TCIs) have been well studied in the literature, especially due to their connections with concentration of measure, exponential integrability and deviation estimates. Typically, they are of the following form: a probability measure $\mu$, defined on a metric space $(\mathcal{X}, g)$, satisfies the $p$-TCI for $p\geq 1$ if there exists $C>0$ such that for any other probability measure $\nu$ on $\mathcal{X}$, $$\label{eq: pTCI}
W_p(\mu, \nu)\leq \sqrt{C H(\nu\;|\;\mu)},$$ where $W_p$ is the $p$-Wasserstein distance with respect to $g$ and $H$ the relative entropy or Kullback-Leibler divergence (see Section [2](#Section: TCIs){reference-type="ref" reference="Section: TCIs"} for the precise definitions). Such inequalities were first considered by Marton [@marton1996bounding] and Talagrand [@talagrand1996transportation] and further investigated by Bobkov-Götze [@bobkov1999exponential], Otto-Villani [@otto2000generalization] to name only a few.
A crucial feature of $p$-TCIs is their close relation to the Gaussian distribution. In particular, Talagrand [@talagrand1996transportation] first showed that Gaussian measures satisfy a $2$-TCI with a dimension-free constant $C$, while Feyel-Üstünel [@feyel2002measure] proved a similar inequality on abstract Wiener spaces and with respect to the Cameron-Martin distance. It was moreover established [@bobkov1999exponential; @djellout2004transportation] that the 1-TCI, the weakest among $p$-TCIs, is equivalent to Gaussian concentration.
Apart from Gaussian measures themselves, the laws of many Gaussian functionals of interest, such as solutions to stochastic differential equations, satisfy $p$-TCI inequalities. In particular, the $1$-TCI, on pathspace for the law of a multidimensional diffusion process $$\label{eq: SDE}
dY_t= b(Y_t) dt+\sigma(Y_t)dX_t,$$ with $X$ a standard Brownian motion and $b, \sigma$ bounded and Lipschitz continuous, was proved in [@djellout2004transportation]. The case where $X$ is a fractional Brownian motion (fBm) with Hurst parameter $H>\frac{1}{2}$ was studied by Saussereau [@saussereau2012transportation]. There, it was shown that the law of $Y$ satisfies a $1$-TCI on pathspace if either $X$ is one-dimensional or $X$ is multi-dimensional and $\sigma$ is not state-dependent. Subsequently, Riedel [@riedel2017transportation] extended the results of [@saussereau2012transportation] by showing that, for a wide class of Gaussian drivers $X$ with paths of finite $r$-variation for some $r\leq 2$, the law of $Y$ satisfies a $(2-\epsilon)$-TCI for all $\epsilon \in (0,2)$ and with respect to finite $r$-variation metrics.
The work of [@riedel2017transportation] establishes TCIs under the assumption that the driver $X$ has equal or higher path regularity than standard Brownian motion. In this regime, pathwise solution theories are available via Young integration. For the case of rougher signals, namely fBm with Hurst parameter $H\in(\frac{1}{4}, \frac{1}{2}]$, [\[eq: SDE\]](#eq: SDE){reference-type="eqref" reference="eq: SDE"} can be treated in the framework of Lyons' theory of rough paths [@lyons1998differential]. In a nutshell, a pathwise solution theory is available upon considering an enhanced driver $\mathbf{X}$ consisting of $X$ along with its iterated integrals. The integral in [\[eq: SDE\]](#eq: SDE){reference-type="eqref" reference="eq: SDE"} is then considered in the sense of rough integration against the rough path $\mathbf{X}$ (the interested reader is referred to [@friz2020course; @friz2010multidimensional] for a thorough exposition of rough paths theory).
In contrast to the aforementioned examples, solutions of Gaussian Rough Differential Equations (RDEs) provide a class of Gaussian functionals that fall beyond the reach of Talagrand's $p$-TCIs for any $p\geq 1$. Indeed, consider a Gaussian process $X$ with paths of finite $r$-variation and Cameron-Martin space $\mathcal{H}$. Cass, Litterer and Lyons [@cass2013integrability] (see also [@friz2013integrability]) establish non-Gaussian upper bounds for tail probabilities of $Y$. In particular, if $X$ admits a rough path lift $\mathbf{X}$ and there exists $q\in[1, 2)$ with $1/r+1/q>1$ such that $\mathcal{H}\hookrightarrow\mathcal{C}^{q-var}$ then the upper bound is that of a Weibull distribution with shape parameter $2/q$ (note that for Brownian or smoother paths one can take $q=1$ and hence one obtains a Gaussian tail estimate). Moreover, this non-Gaussian tail behaviour was shown to be sharp in the recent work [@boedihardjo2022lack], where a non-Gaussian tail-lower bound was provided for an elementary RDE. In light of these facts, one deduces that the law of $Y$ does not enjoy Gaussian concentration and hence cannot satisfy the $p$-TCI for any $p\geq 1$.
Another important class of Gaussian functionals with non-Gaussian tail behaviour is provided, in mathematical finance, by rough (stochastic) volatility models [@bayer2016pricing]. These describe the dynamics of an asset price $S$ via SDEs of the form $$\label{eq:RVintro}
dS_t=S_tf(\hat{W}^H_t, t)dB_t,$$ where $\hat{W}^H$ is an fBm (of Riemann-Liouville type) with $H< \frac{1}{2}$ and $B$ is a standard Brownian motion that is typically correlated with $\hat{W}^H$. Such models have been proposed due to their remarkable consistency with financial time series data (see [@bayer2016pricing; @gatheral2018volatility] and references therein) and calibrated volatility models suggest a Hurst parameter $H$ of order $0.1$.
Besides the fact that $S$ solves an SDE with unbounded (linear) diffusion coefficients, typical choices for the volatility function $f$ are also unbounded (e.g. exponential as in the rough Bergomi model [@bayer2016pricing], or polynomial [@jaber2022joint]). It is thus clear that neither the (driftless) log-price $fdB$ nor $S$ itself fit into the framework of Talagrand's TCIs. Moreover, it is well known that, for a broad class of volatility functions, $p$-th moments of $S$ for $p>1$ are infinite for $t>0$; see for example [@lions2007correlations] and [@gassiat2019martingale] for the cases $H=\frac{1}{2}$ and $H<\frac{1}{2}$ respectively.
Motivated by RDEs and rough volatility, our primary goal here is to identify a class of TCIs that is both general enough to include Talagrand's $p$-TCIs and also sufficient to capture non-Gaussian concentration and tail behaviour. In particular, we establish TCIs of the form $$\label{eq: acTCI}
\alpha\big(W_c(\mu, \nu)\big)\leq H(\nu\;|\;\mu),$$ where $\alpha:\mathbb{R}^+\rightarrow\mathbb{R}^+$ is a non-decreasing deviation function vanishing at the origin, $W_c$ the transportation-cost with respect to a measurable cost $c:\mathcal{X}\times\mathcal{X}\rightarrow[0,\infty]$ (typically a concave function of a metric), namely $$W_{c}(\mu, \nu)=\inf_{\pi\in\Pi(\mu, \nu)}\iint_{\mathcal{X}\times\mathcal{X}}c(x,y)d\pi(x,y),$$ and $\Pi(\mu, \nu)$ the family of all couplings between $\mu, \nu$ (see Section [2](#Section: TCIs){reference-type="ref" reference="Section: TCIs"} for precise definitions). A measure $\mu$ for which [\[eq: acTCI\]](#eq: acTCI){reference-type="eqref" reference="eq: acTCI"} holds for all measures $\nu$ on $\mathcal{X}$ is said to satisfy an $(\alpha, c)$-TCI. Similar types of TCIs have been considered by Gozlan and Léonard [@10.1214/ECP.v11-1198; @gozlan2007large] (see also the survey paper [@gozlan2010transport]) under slightly different assumptions on $\alpha$ and $c$ (in particular, $\alpha$ convex and cost $c$ convex function of a metric, neither satisfied in our examples of interest).
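To see how [\[eq: acTCI\]](#eq: acTCI){reference-type="eqref" reference="eq: acTCI"} generalises Talagrand's inequality [\[eq: pTCI\]](#eq: pTCI){reference-type="eqref" reference="eq: pTCI"}, one may note (an elementary remark recorded here only for the reader's convenience) that taking $c=g^p$ and $\alpha(r)=r^{2/p}/C$ gives $W_c=W_p^p$ and $$\alpha\big(W_c(\mu, \nu)\big)=\frac{W_p(\mu, \nu)^{2}}{C}\leq H(\nu\;|\;\mu) \quad \Longleftrightarrow \quad W_p(\mu, \nu)\leq \sqrt{C H(\nu\;|\;\mu)},$$ so that the $p$-TCI is the particular case of an $(\alpha, c)$-TCI with a power cost and a power deviation function.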
A natural question upon inspection of the previous examples is whether [\[eq:RVintro\]](#eq:RVintro){reference-type="eqref" reference="eq:RVintro"} can be treated in a pathwise sense, similar to [\[eq: SDE\]](#eq: SDE){reference-type="eqref" reference="eq: SDE"}. Bayer, Friz, Gassiat, Martin and Stemper [@bayer2020regularity] argued that, while [\[eq:RVintro\]](#eq:RVintro){reference-type="eqref" reference="eq:RVintro"} falls outside the scope of classical geometric rough paths, a pathwise treatment is possible via Hairer's theory of regularity structures [@hairer2014theory]. In brief, after constructing an appropriate lift of the noise to a random Gaussian model $\Pi$ (akin to the lift $X\mapsto\mathbf{X}$) and \"expanding\" $f$ with respect to the higher-order functionals in $\Pi$ (in the sense of Hairer's modelled distributions), they obtain a pathwise formulation of [\[eq:RVintro\]](#eq:RVintro){reference-type="eqref" reference="eq:RVintro"}. Moreover the solution is continuous with respect to an appropriate topology in the space of models.
This leads to the second objective of the present work, which is to obtain TCIs for Gaussian functionals that arise in the theory of regularity structures. With rough volatility in mind, an initial observation is that both Gaussian rough paths $\mathbf{X}$ (Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"}(1)) and models $\Pi$ (Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}(2)) satisfy $(\alpha, c)$-TCIs in rough path/model topology. In both these results, $\alpha$ and $c$ reflect the smallest order of Wiener chaos to which the components of $\Pi$ (or $\mathbf{X}$) belong. Furthermore, we show in Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"}(2) that solutions of Gaussian RDEs satisfy a similar $(\alpha, c)$-TCI where $\alpha$ and $c$ are related to the smoothness of the driving path and in particular to the exponent $q$ mentioned above. An $(\alpha, c)$-TCI for a class of modelled distributions, on the rough volatility regularity structure, under the assumption that $f$ grows at most polynomially is provided in Theorem [Theorem 41](#thm:TCImodelled){reference-type="ref" reference="thm:TCImodelled"}. In this case, $\alpha$ and $c$ reflect both the growth of $f$ and the order of the fixed Wiener chaos to which $\Pi$ belongs. Finally, we obtain in Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}(3)-(4) $(\alpha, c)$-TCIs for the driftless log-price $fdB$ in the case of polynomially or exponentially growing $f$.
Our approach for proving TCIs for the aforementioned functionals relies on two main steps and is summarised as follows: Letting $(E, \mathcal{H}, \gamma)$ be the abstract Wiener space that corresponds to the underlying Gaussian noise, we consider a functional $\Psi$, defined on $E$, along with a shifted version $\Psi^h(\omega)=\Psi(\omega+h)$ in the direction of a Cameron-Martin space element $h\in\mathcal{H}$. First, we obtain estimates of the distance between $\Psi$ and $\Psi^h$ in the topology of interest (essentially $\mathcal{H}$-continuity estimates). Then, we use either a generalised contraction principle, Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"}, to obtain an $(\alpha, c)$-TCI with appropriate cost and deviation functions or a generalised Fernique theorem [@friz2010generalized] to obtain a $1$-TCI for a non-negative function of $\Psi$. Moreover, we show that the $(\alpha, c)$-TCIs we consider imply non-Gaussian concentration properties in Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}.
A similar methodology can be applied to obtain a different type of inequalities for functionals $\Psi: E\to\mathbb{R}^m$. As explained in Section [6](#Section:WLSIs){reference-type="ref" reference="Section:WLSIs"}, if $\Psi$ is $\mathcal{H}$-continuously Fréchet (or Malliavin) differentiable then it is possible to obtain Weighted Logarithmic Sobolev Inequalities (WLSIs) via contraction (see Proposition [Proposition 48](#Prop:logSobolev){reference-type="ref" reference="Prop:logSobolev"} for the corresponding contraction principle) under some additional assumptions on the $\mathcal{H}$-gradient. $\mathrm{WLSIs}$ and their connections with concentration properties and weighted Poincaré inequalities have been explored by Bobkov-Ledoux [@bobkov2009weighted] and further studied by Cattiaux-Guillin-Wu [@cattiaux2011some] (see also [@cattiaux2019entropic; @kolesnikov2016riemannian; @wang2008super]).
The contribution of this work is thus threefold: (a) We prove new TCIs for Gaussian functionals arising in rough volatility and in rough path and regularity structure contexts. In passing, Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"} and Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"}(2) extend [@riedel2017transportation] to the setting of RDEs driven by Gaussian noise rougher than Brownian motion; (b) The $(\alpha, c)$-TCIs we consider imply well-known tail estimates (via Corollary [Corollary 6](#Cor:Exponential Moments){reference-type="ref" reference="Cor:Exponential Moments"}): (i) the TCIs for Gaussian rough paths and models imply the tail upper bounds for random variables on a fixed Wiener chaos from [@latala2006estimates]; (ii) the TCI for Gaussian RDEs allows us to recover the tail upper bounds from [@cass2013integrability; @friz2013integrability] (Corollary [Corollary 17](#cor: LatalaCLL){reference-type="ref" reference="cor: LatalaCLL"}); (c) Apart from rough volatility, we transfer some of our arguments and prove TCIs in the setting of the 2d-Parabolic Anderson Model (2d-PAM) (Theorem [Theorem 44](#thm:PAM){reference-type="ref" reference="thm:PAM"}), a well-studied singular SPDE that can be solved in the framework of regularity structures. This case highlights significant differences from rough volatility due both to the infinite dimensionality of the dynamics and to the essential requirement for renormalisation needed to define the solution map.
The rest of this article is organised as follows: We introduce $(\alpha, c)$-TCIs along with their consequences and characterisation in Section [2](#Section: TCIs){reference-type="ref" reference="Section: TCIs"}. In Section [3](#Section: RPs){reference-type="ref" reference="Section: RPs"}, we present TCIs for Gaussian rough paths and RDEs. Section [4](#Section: Rough Vol){reference-type="ref" reference="Section: Rough Vol"} is devoted to the rough volatility regularity structure. Apart from presenting our results on TCIs, this section also serves as an elementary introduction to some notions and language of the general theory. In Section [5](#Section:PAM){reference-type="ref" reference="Section:PAM"} we present our results on TCIs for the 2d-PAM. Finally, in Section [6](#Section:WLSIs){reference-type="ref" reference="Section:WLSIs"} we obtain a generalised contraction principle for WLSIs and leverage tools from Malliavin calculus to demonstrate examples of Gaussian functionals that satisfy such inequalities. The proofs of some technical lemmas from Section [4](#Section: Rough Vol){reference-type="ref" reference="Section: Rough Vol"} are collected in Appendix [7](#Section:App){reference-type="ref" reference="Section:App"}.
# Transportation-cost inequalities {#Section: TCIs}
In this section, we introduce a family of transportation-cost inequalities (TCIs) for probability measures on an arbitrary metric space $\mathcal{X}$. In contrast to the majority of the literature, our definition does not require $\mathcal{X}$ to be Polish. The reason for choosing this degree of generality is that some of our results apply to situations where the underlying space does not necessarily satisfy this property (e.g. the \"total\" space of modelled distributions [\[eqref: MDspace\]](#eqref: MDspace){reference-type="eqref" reference="eqref: MDspace"} on the rough volatility regularity structure). After introducing the necessary notation, we provide some characterisations and consequences for the class of $(\alpha, c)$-TCIs of interest in Section [2.1](#subsec:TCIchar){reference-type="ref" reference="subsec:TCIchar"}. In Section [2.2](#subsec:TCIcon){reference-type="ref" reference="subsec:TCIcon"} we prove a generalised contraction principle which is used to obtain several of our main results in the following sections.
Throughout the rest of this work, the lattice notation $\wedge, \vee$ is used to denote the minimum and maximum of real numbers and $\lesssim$ denotes inequality up to a multiplicative constant. The Borel $\sigma$-algebra and space of Borel probability measures on $\mathcal{X}$ are denoted by $\mathscr{B}(\mathcal{X}), \mathscr{P}(\mathcal{X})$ respectively. We use the notation $\nu\ll\mu$ to denote absolute continuity of a measure $\nu$ with respect to $\mu$. For $i=1,\dots, n$, the $i$-th marginal of a measure $\pi\in\mathscr{P}(\mathcal{X}^n)$ is denoted by $[\pi]_i$ and the $m$-product measure by $\pi^{\otimes m}$. For an interval $I\subset\mathbb{R}$ (resp. $I\subset\mathbb{R}^+$) the convex conjugate (resp. monotone convex conjugate) of a convex function $f:I\rightarrow\mathbb{R}$ is defined for $s\in I$ by $f^*(s):=\sup_{t\in I}\{st-f(t)\}$ (resp. $f^\star(s):=\sup_{t\in I}\{st-f(t)\}$).
**Definition 1**. *Let $(\mathcal{X}, g)$ be a metric space, $c:\mathcal{X}\times\mathcal{X}\rightarrow [0,\infty]$ a measurable function and $\mu, \nu\in\mathscr{P}(\mathcal{X})$.*
1. *The transportation cost between $\mu$ and $\nu$ with respect to the cost function $c$ reads $$W_{c}(\mu, \nu) :=\inf_{\pi\in\Pi(\mu, \nu)}\iint_{\mathcal{X}\times\mathcal{X}}c(x,y)d\pi(x,y),$$ where $\Pi(\mu,\nu)$ is the collection of couplings between $\mu$ and $\nu$: $$\Pi(\mu,\nu):=\bigg\{\pi\in\mathscr{P}(\mathcal{X}\times\mathcal{X}): [\pi]_1=\mu,\;[\pi]_2=\nu \bigg\}.$$*
2. *The relative entropy of $\nu$ with respect to $\mu$ is given by $$H(\nu\;|\; \mu) :=
\left\{
\begin{array}{ll}
\displaystyle
\large \int_{\mathcal{X}}\log\left(\frac{d\nu}{d\mu}\right)d\nu, & \text{if }\nu\ll\mu,\\
+\infty, & \text{otherwise}.
\end{array}
\right.$$*
**Remark 2**. For $p\in[1,\infty)$, a Polish space $\mathcal{X}$ and a metric $g$ that induces the topology of $\mathcal{X}$, $W_{g^p}^{1/p}$ is the $p$-Wasserstein distance between $\mu$ and $\nu$.
**Definition 3**. *Let $(\mathcal{X}, g)$ be a metric space and $\mu\in\mathscr{P}(\mathcal{X})$.*
1. *Let $c:\mathcal{X}\times\mathcal{X}\rightarrow[0,\infty]$ be a measurable function with $c(x,x)=0$ for all $x\in\mathcal{X}$ and $\alpha:[0,\infty]\rightarrow[0,\infty]$ be a lower semicontinuous function with $\alpha(0)=0$. We say that $\mu$ satisfies the $(\alpha, c)$-TCI (and write $\mu\in\mathscr{T}_{\alpha}(c)$) with cost function $c$ and deviation function $\alpha$ if for all $\mathscr{P}(\mathcal{X})\ni\nu\ll\mu$, $$\label{alphap}
\alpha\bigg(W_c(\mu, \nu)\bigg)\leq H(\nu\;|\; \mu).$$*
2. *Let $p\in[1, \infty)$. We say that $\mu$ satisfies Talagrand's $p$-TCI and write $\mu\in\mathscr{T}_p(C)$ for some $C>0$ if $\mu\in\mathscr{T}_{\alpha}(g^p)$ with $\alpha(t)=Ct^{2/p}$.*
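For concreteness, unwinding the previous definition in the two most common cases: $\mu\in\mathscr{T}_2(C)$ means that $C\,W_{g^2}(\mu, \nu)\leq H(\nu\;|\;\mu)$ for all $\nu$, that is $$W_{g^2}(\mu, \nu)^{1/2}\leq\sqrt{\tfrac{1}{C}H(\nu\;|\;\mu)},$$ a bound on the $2$-Wasserstein distance, while $\mu\in\mathscr{T}_1(C)$ reads $W_{g}(\mu, \nu)\leq\sqrt{\tfrac{1}{C}H(\nu\;|\;\mu)}$. In this notation, Talagrand's classical inequality $W_2^2\leq 2H(\cdot\;|\;\gamma_d)$ for the standard Gaussian measure $\gamma_d$ on $\mathbb{R}^d$ corresponds to $\gamma_d\in\mathscr{T}_2(\tfrac{1}{2})$.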
## Consequences and characterisation {#subsec:TCIchar}
$(\alpha, c)$-TCIs with convex $\alpha$ and $c$ given by a metric (or a convex function thereof), as well as the larger family of norm-entropy inequalities, were studied in [@10.1214/ECP.v11-1198; @gozlan2007large]. Here, we are interested in a class of TCIs where $\alpha$ is piecewise convex and, for most of the applications of interest, $c$ is a concave function of a metric. At this point, we provide a characterisation for the TCIs of interest and then show some of their consequences in terms of exponential integrability and deviation estimates.
**Proposition 4**. *Let $\mathcal{X}$ be a Polish space, $\alpha_1,\alpha_2: [0,\infty]\rightarrow [0,\infty]$ be convex, increasing, continuous functions such that $\alpha_1(0)=\alpha_2(0)=0$ and $c:\mathcal{X}\times\mathcal{X}\rightarrow[0,\infty]$ be lower semicontinuous. The following are equivalent:*
1. *$\mu\in\mathscr{P}(\mathcal{X})$ satisfies $\mathscr{T}_{\alpha}(c)$ with $\alpha=\alpha_1\wedge\alpha_2$.*
2. *For all $s\geq 0$ and $f, g\in L^1(\mu)$ such that for $\mu^{\otimes 2}$-almost every $(x,y)\in\mathcal{X}^2$, $$\label{eq:fgc}
f(x)+g(y)\leq c(x,y),$$ we have $$\int_{\mathcal{X}} e^{s g} d\mu\leq
\exp\left\{-s\int_{\mathcal{X}}fd\mu+ \alpha^\star_1\vee \alpha^\star_2(s)\right\}.$$*
3. *Let $g\in L^1(\mu)$ such that $P_c(g)(\cdot):=\sup_{x\in\mathcal{X}}\big\{ g(x)-c(x,\cdot) \}\in L^1(\mu)$. For all $s\geq 0$, $$\int_{\mathcal{X}} e^{s g} d\mu\leq
\exp\left\{s\int_{\mathcal{X}}P_c(g)d\mu+ \alpha^\star_1\vee \alpha^\star_2(s)\right\}.$$*
*Proof.* $(i)\iff (ii)$ Let $\nu\in\mathscr{P}(\mathcal{X}), \nu\ll\mu$ and denote by $\tilde{\alpha}$ the extension of $\alpha$ to $\mathbb{R}$ by setting $\tilde{\alpha}=0$ on $(-\infty, 0)$. By Kantorovich duality [@gozlan2010transport Theorem 2.2], $$W_c(\mu, \nu)=\sup\bigg\{\int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu : f\in L^1(\mu), g\in L^1(\nu),\;f(x)+g(y)\leq c(x,y)\;\;\mu^{\otimes 2}\text{-a.e. on}\; \mathcal{X}^2 \bigg\}.$$ By continuity and monotonicity there exists, modulo re-labelling, $t^*\in[0,\infty]$ such that the set $\{\alpha_1\leq \alpha_2\}$ coincides with $[0, t^*]$. Since $\tilde{\alpha}$ is non-decreasing, $\mathscr{T}_\alpha(c)$ is equivalent to $$\tilde{\alpha}\bigg(\int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu \bigg)\leq H(\nu\;|\;\mu)$$ for all such test functions $f, g$.
*Case* $1:$ Assume that $f,g, \nu$ are such that $\int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu \leq t^*$. Then $$\tilde{\alpha}\bigg(\int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu\bigg)=\tilde{\alpha}_1\bigg( \int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu\bigg)$$ and $\tilde{\alpha}_1:\mathbb{R}\rightarrow\mathbb{R}$ is continuous and convex. By properties of convex-conjugate functions, $$\tilde{\alpha}_1(t)=\tilde{\alpha}_1^{**}(t)=\sup_{s\in\mathbb{R}}\big\{st-\tilde{\alpha}^*_1(s)\big\}$$ holds for all $t\in\mathbb{R}$, and thus, for all $s\in\mathbb{R}$, $$s\int_{\mathcal{X}}gd\nu-H(\nu\;|\;\mu)\leq- s\int_{\mathcal{X}}fd\mu+\tilde{\alpha}^*_1(s).$$ *Case* $2:$ $f,g, \nu$ are such that $\int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu \geq t^*$. Similarly to Case 1, we obtain $$s\int_{\mathcal{X}}gd\nu-H(\nu\;|\;\mu)\leq- s\int_{\mathcal{X}}fd\mu+\tilde{\alpha}^*_2(s).$$ Hence for all $f,g, \nu$ and all $s\in\mathbb{R}$ we have $$s \int_{\mathcal{X}}gd\nu-H(\nu\;|\;\mu)\leq- s\int_{\mathcal{X}}fd\mu+\tilde{\alpha}^*_1(s)\vee \tilde{\alpha}^*_2(s).$$ Optimising over $\nu$ and noting that $H^*(\cdot|\mu)(g)=\log\int e^gd\mu$ and that for $i=1, 2, s\geq 0$ $\tilde{\alpha}^*_i(s)=\alpha_i^\star(s)$ the conclusion follows.\
$(ii)\iff (iii)$. Assume (ii); notice that $h=-P_cg$ is the largest function satisfying $g(x)+h(y)\leq c(x,y)$. The inequality thus follows by applying $(ii)$ to $g$ and $f=-P_c(g)$. The converse follows from the fact that for all $f$ satisfying [\[eq:fgc\]](#eq:fgc){reference-type="eqref" reference="eq:fgc"} for some fixed $g$, $P_cg\leq -f$. ◻
**Remark 5**. The previous proposition generalises the characterisation of convex TCIs given in Theorem 3.2 of [@gozlan2010transport], which is obtained by setting $\alpha_1=\alpha_2$.
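For later reference, the monotone conjugates of the deviation functions appearing in our main results can be computed explicitly. Maximising $t\mapsto st-\alpha_i(t)$ over $t\geq 0$ gives, for $\alpha_1(t)=Ct^2$ and $\alpha_2(t)=Ct^{2r}$ with $r\geq 1$ and $C>0$, $$\alpha_1^\star(s)=\frac{s^2}{4C},
\qquad
\alpha_2^\star(s)=\frac{2r-1}{2r}\bigg(\frac{s^{2r}}{2rC}\bigg)^{\frac{1}{2r-1}},
\qquad s\geq 0,$$ so that $\alpha_1^\star\vee\alpha_2^\star$ grows at most quadratically away from the origin; this is the growth property exploited in the proof of Corollary [Corollary 6](#Cor:Exponential Moments){reference-type="ref" reference="Cor:Exponential Moments"}(ii) below.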
In the case where $c=d^p$, for some $p\in(0, 1]$, $\mathscr{T}_{\alpha}(c)$ implies the following exponential integrability properties.
**Corollary 6**. *Let $(\mathcal{X}, d)$ be a Polish space, $x_0\in\mathcal{X}$, $\mu\in\mathscr{P}(\mathcal{X})$ such that $d(x_0,\cdot)\in L^1(\mu)$. If $\mu\in\mathscr{T}_{\alpha}(d^p)$ for some $p\in(0, 1]$ and $\alpha=\alpha_1\wedge\alpha_2$ as in Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}, then*
1. *for all $s\geq 0$, $\int_{\mathcal{X}}\exp\left\{s d^p(x_0, x)\right\}d\mu(x)$ is finite;*
2. *if for some $t_0\geq 0, C>0$ and all $t\in (0, t_0)$ we have $\alpha(t)\geq C t^2$, then there exists $\lambda_0$ such that for all $\lambda<\lambda_0$, $$\int_{\mathcal{X}}\exp\left\{\frac{\lambda^2}{2}d^{2p}(x_0, x)\right\}d\mu(x)<\infty.$$*
*Proof.*
1. Let $f=d^p(x_0, \cdot)$, $\langle d^p(x_0, \cdot)\rangle:=\int_{\mathcal{X}}d^p(x_0, y)d\mu(y)$. From the elementary inequality $|x^p-y^p|\leq |x-y|^p$, valid for all $x,y\geq 0$, along with the triangle inequality, we obtain $$f(x) - d^p(x,y) = d^p(x, x_0) - d^p(x,y) \leq d^p(x_0, y),
\qquad\text{for all }x\in\mathcal{X}.$$ Taking the supremum over $\mathcal{X}$, it follows that $P_cf(y)\leq d^p(x_0, y)$, hence by assumption $P_cf\in L^1(\mu)$. In view of Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}(iii) we obtain $$\int_{\mathcal{X}}\exp\Big\{s d^p(x_0, x)\Big\}d\mu(x)
\leq \exp\Big\{s\langle d^p(x_0, \cdot)\rangle+ \alpha^\star_1\vee \alpha^\star_2(s)\Big\},$$ and the proof is complete.
2. We apply an argument from [@djellout2004transportation Page 2704]. Let $\gamma\in\mathscr{P}(\mathbb{R})$ be a standard Gaussian measure. An application of Fubini's theorem then yields $$\begin{aligned}
\int_{\mathcal{X}}\exp\left\{\frac{\lambda^2}{2}d^{2p}(x_0, x)\right\}d\mu(x)
& = \int_{\mathcal{X}}\int_{\mathcal{\mathbb{R}}}\exp\Big\{\lambda s d^{p}(x_0, x)\Big\}d\gamma(s)d\mu(x)\\
& \leq \int_{\mathcal{\mathbb{R}}}\int_{\mathcal{X}}\exp\Big\{|\lambda s| d^{p}(x_0, x)\Big\}d\mu(x)d\gamma(s)\\
& \leq \int_{\mathcal{\mathbb{R}}} \exp\Big\{|\lambda s|\langle d^p(x_0, \cdot)\rangle+ \alpha^\star_1\vee \alpha^\star_2\big(|\lambda s|\big) \Big\}d\gamma(s),
\end{aligned}$$ where the last inequality follows from $(i)$. From the assumptions on $\alpha$, both $\alpha_1$ and $\alpha_2$ are super-quadratic near the origin, hence $\alpha^*_1, \alpha^*_2$ are sub-quadratic away from the origin and in particular there exist $C, s_0>0$ such that $\alpha^*_1(s)\vee\alpha_2^*(s)\leq C s^2$ for all $s>s_0$. The integral in the last display is then clearly finite for $|s| < (1\vee s_0)/|\lambda|$. As for $| s|\geq (1\vee s_0)/|\lambda|$ we have $|\lambda s|\leq \lambda^2 s^2$, $\alpha^\star_1\vee \alpha^\star_2\big(|\lambda s|\big)\leq C\lambda^2 s^2$, hence the integrand is upper bounded by $\exp\{(\langle d^p(x_0, \cdot)\rangle + C)\lambda^2 s^2\}$. Since $\gamma$ is a Gaussian measure, the latter is integrable, provided that $\lambda$ is small enough (and in fact for all $|\lambda|<1/\sqrt{2(\langle d^p(x_0, \cdot)\rangle +C)}$).
◻
Apart from exponential integrability, $(\alpha, c)$-TCIs are useful to obtain deviation estimates from the Law of Large Numbers (LLN). In particular, let $\{X_n; n\in\mathbb{N}\}$ be an independent and identically distributed sample from a measure $\mu\in\mathscr{P}(\mathcal{X})$ and $$\label{eq:Ln}
L_n := \frac{1}{n}\sum_{k=1}^{n}\delta_{X_k}\in\mathscr{P}(\mathcal{X})$$ the $n$-sample empirical measure. A consequence of Sanov's theorem in large deviations (see e.g. [@dupuis2011weak], Theorem 2.2.1) is that, for all $r>0$ and $d$ a metric for the topology of weak convergence in $\mathscr{P}(\mathcal{X})$, $$\limsup_{n\to\infty}\frac{1}{n}\log \mathbb{P}\Big[ d(L_n, \mu)\geq r \Big]
\leq -\inf_{\{\nu : d(\nu, \mu)\geq r\}} H(\nu\;|\;\mu).$$ In other words, $H(\cdot\;|\;\mu)$ provides an asymptotic, exponential decay rate for the probability of being \"far\" from the LLN limit $\mu$. In many practical applications, the pre-asymptotic terms that are ignored from large deviation estimates play an important role. The following proposition and Corollary [Corollary 8](#cor:deviation){reference-type="ref" reference="cor:deviation"} show that it is possible to get non-asymptotic (i.e. for all $n$ as opposed to \"large\" $n$) deviation estimates under the assumption that $\mu\in\mathscr{T}_{\alpha}(c)$.
**Proposition 7**. *(Deviation estimates) Let $\mathcal{X}$ be a Polish space, $\mu\in\mathscr{P}(\mathcal{X})$ and $L_n$ as in [\[eq:Ln\]](#eq:Ln){reference-type="eqref" reference="eq:Ln"}. The following are equivalent:*
1. *$\mu\in\mathscr{T}_{\alpha}(c)$ with $\alpha=\alpha_1\wedge \alpha_2$ as in Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}.*
2. *For all $f,g\in L^1(\mu)$ such that $f(x)+g(y)\leq c(x,y)$ for $\mu^{\otimes 2}$-almost every $(x,y)\in\mathcal{X}^2$, and for all $n\in\mathbb{N}$ and $r>0$, $$\frac{1}{n}\log \mathbb{P}\left[ \int_{\mathcal{X}}fdL_n+\int_{\mathcal{X}}gd\mu\geq r \right]
= \frac{1}{n}\log \mathbb{P}\left[ \frac{1}{n}\sum_{k=1}^{n}f(X_k)+\int_{\mathcal{X}}gd\mu\geq r \right]
\leq -\alpha(r).$$*
*Proof.* The proof is identical to that of Theorem 2 from [@gozlan2007large] with the difference that $\alpha$ is not convex but rather piecewise convex. To avoid repetition, we shall only sketch the main steps. To this end, we have from Kantorovich duality, monotonicity and continuity of $\alpha$ that $$\mu\in\mathscr{T}_\alpha(c)\iff
\tilde{\alpha}\bigg(\int_{\mathcal{X}}fd\mu+\int_{\mathcal{X}}gd\nu \bigg)\leq H(\nu\;|\;\mu)$$ for all $f, g$ that satisfy the assumptions of (2) and $\tilde{\alpha}$ the extension of $\alpha$ to $\mathbb{R}$ by $0$. In turn the latter is equivalent to $$t\tilde{\alpha}(t)\leq\inf\bigg\{ H(\nu\;|\;\mu) :\nu \;\text{s.t.} \int_{\mathcal{X}} f d\nu + \int_{\mathcal{X}} g d\mu=t\bigg\}=\Lambda_{\phi}^*(t),
\quad\text{for all }t\in\mathbb{R},$$ where for each fixed $\phi=(f,g)$ (see Equation (21) from the aforementioned reference) $\Lambda_{\phi}^*$ is the convex conjugate of the log-Laplace transform $\Lambda(s)=\int_{\mathcal{X}}\exp\{ sf(x)+ \int gd\mu \}d\mu(x)$ (this is essentially a consequence of Cramér's theorem for large deviations of iid random variables, [@dupuis2011weak], Theorem 3.5.1). The forward implication is then complete by a Markov inequality argument which makes no use of convexity for $\alpha$ and is thus omitted. For the converse, one has that the deviation estimate in (2) implies $\alpha\leq \Lambda_{\phi}^*$ (which is also independent of convexity assumptions on $\alpha$) which in turn is equivalent to $\mu\in\mathscr{T}_{\alpha}(c)$. ◻
**Corollary 8**. *Let $(\mathcal{X}, d)$ be a Polish space, $\mu\in\mathscr{P}(\mathcal{X})$, $L_n$ as in [\[eq:Ln\]](#eq:Ln){reference-type="eqref" reference="eq:Ln"} and $x_0\in\mathcal{X}$ such that $d(x_0,\cdot)\in L^1(\mu)$. Moreover, assume that $\mu\in\mathscr{T}_{\alpha}(d^p)$ for some $p\in(0, 1]$, with $\alpha:=\alpha_1\wedge \alpha_2$ a super-quadratic deviation function as in Corollary [Corollary 6](#Cor:Exponential Moments){reference-type="ref" reference="Cor:Exponential Moments"}(ii). Then there exist $C_p, s_0>0$ such that for all $n\in\mathbb{N}$ and $s>s_0$, $$\begin{aligned}
\frac{1}{n}\log \mathbb{P}\left[\frac{1}{n}\sum_{k=1}^{n}d(X_k,x_0)\geq s \right] \leq -C_ps^{2p}.
\end{aligned}$$*
*Proof.* The functions $f=-g=d^p(x_0, \cdot)$ satisfy all the assumptions of Proposition [Proposition 7](#prop:deviation){reference-type="ref" reference="prop:deviation"}. Indeed, [\[eq:fgc\]](#eq:fgc){reference-type="eqref" reference="eq:fgc"} holds by the triangle inequality and integrability is satisfied by assumption. With this choice we obtain, for all $s\geq -\mathbb{E}^{\mu} g=\int_{\mathcal{X}}d^p(x_0,y)d\mu(y)$, $$\begin{aligned}
\frac{1}{n}\log \mathbb{P}\bigg[ \frac{1}{n}\sum_{k=1}^{n}d^p(X_k,x_0)\geq 2s \bigg]\leq \frac{1}{n}\log \mathbb{P}\bigg[ \frac{1}{n}\sum_{k=1}^{n}f(X_k)+\int_{\mathcal{X}}gd\mu\geq s \bigg]\leq -\alpha(s).
\end{aligned}$$ From Hölder's inequality and the super-quadratic growth of $\alpha$ it follows that $$\frac{1}{n}\log \mathbb{P}\bigg[ \frac{1}{n}\sum_{k=1}^{n}d(X_k,x_0)\geq (2s)^{1/p} \bigg]\leq -Cs^2,$$ and the proof is complete upon substituting $s$ by $s^p/2$. ◻
**Remark 9**. The assumption that $\mathcal{X}$ is Polish is sufficient to guarantee the validity and well-posedness of the Kantorovich-dual formulation of the transportation cost $W_c$, used implicitly for example in the proof of Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}.
## A generalised contraction principle {#subsec:TCIcon}
Contraction principles for Talagrand's inequalities [\[eq: pTCI\]](#eq: pTCI){reference-type="eqref" reference="eq: pTCI"} have been proved in [@djellout2004transportation Lemma 2.1] and [@riedel2017transportation Lemma 4.1]. These results concern images, under Lipschitz maps, of measures that satisfy Talagrand's $\mathscr{T}_{2}(C)$. We now prove a generalised contraction principle for maps that satisfy a certain type of \"uniform continuity\" condition. We start with an assumption on the domain space of the contraction principle.
**Assumption 10**. *$\mathcal{X}$ is a Polish space, $c_{\mathcal{X}}: \mathcal{X}\times\mathcal{X}\rightarrow[0,\infty]$ is a measurable function and there exists a measure $\mu\in\mathscr{P}(\mathcal{X})$ and a constant $C>0$ such that, for every $\nu \in\mathscr{P}(\mathcal{X})$, $$\left(\inf_{\pi\in\Pi(\nu,\mu)}\iint_{\mathcal{X}\times\mathcal{X}}c^2_{\mathcal{X}}(x_1, x_2)d\pi(x_1,x_2)\right)^{\frac{1}{2}}\leq \sqrt{CH(\nu\; |\; \mu)}.$$*
**Lemma 11** (Extended contraction principle). *Under Assumption [Assumption 10](#assu:Setting){reference-type="ref" reference="assu:Setting"}, let $\mathcal{Y}$ be a metric space, $c_{\mathcal{Y}}: \mathcal{Y}\times\mathcal{Y}\rightarrow[0,\infty]$ a measurable function and assume that there exists a measurable map $\Psi:\mathcal{X}\rightarrow\mathcal{Y}$ and $r\geq 1$ such that for all $x_1, x_2\in\mathcal{X}_0\subset \mathcal{X},$ with $\mu(\mathcal{X}_0)=1$, $$c_{\mathcal{Y}}\big(\Psi(x_1), \Psi(x_2)\big)\leq L(x_1) \bigg[c_{\mathcal{X}}(x_1, x_2)\vee c_{\mathcal{X}}(x_1, x_2)^{\frac{1}{r}}\bigg],$$ where $L\in L^{p^*}(\mathcal{X}, \mu)$ for $p^*=2\vee\frac{r}{r-1}$. Then $\tilde{\mu}=\mu\circ\Psi^{-1}\in\mathscr{T}_{\alpha}(c_{\mathcal{Y}})$, where, for some constant $C>0$, $\alpha(t)=C(t^2\wedge t^{2r})$.*
*Proof.* Without loss of generality we may assume $C = 1$. Let $\tilde{\nu}\in\mathscr{P}(\mathcal{Y})$ and assume that $H(\tilde{\nu}\;|\;\tilde{\mu})$ is finite. Choose $\nu\in\mathscr{P}(\mathcal{X})$ such that $\tilde{\nu}=\nu\circ\Psi^{-1}$ and $\nu\ll \mu$ (note that there is at least one $\nu$ which fulfills this condition; e.g. $\nu_0(dx):= d\tilde{\nu}/d\tilde{\mu} (\Psi(x))\mu(dx))$. Then, an application of Hölder's inequality yields $$\begin{aligned}
\inf_{\tilde{\pi}\in\Pi(\tilde{\nu},\tilde{\mu})}\iint_{\mathcal{Y}\times\mathcal{Y}}c_{\mathcal{Y}}(y_1, y_2)&d\tilde{\pi}(y_1,y_2)\leq \inf_{\pi\in\Pi(\nu,\mu)}\iint_{\mathcal{X}\times\mathcal{X}}c_{\mathcal{Y}}\big(\Psi(x_1), \Psi(x_2)\big)d\pi(x_1,x_2)\\&
\leq \inf_{\pi\in\Pi(\nu,\mu)}\iint_{\mathcal{X}\times\mathcal{X}}L(x_1)\bigg[c_{\mathcal{X}}(x_1, x_2)\vee c_{\mathcal{X}}(x_1, x_2)^{\frac{1}{r}}\bigg]d\pi(x_1,x_2)\\&
= \inf_{\pi\in\Pi(\nu,\mu)}\iint_{\{c_{\mathcal{X}}\leq 1\}}L(x_1) c_{\mathcal{X}}(x_1, x_2)^{\frac{1}{r}}d\pi(x_1,x_2)\\&
+\inf_{\pi\in\Pi(\nu,\mu)}\iint_{\{c_{\mathcal{X}}>1\}}L(x_1) c_{\mathcal{X}}(x_1, x_2)d\pi(x_1,x_2)\\&
\leq \|L\|_{L^{r/(r-1)}}\bigg(\inf_{\pi\in\Pi(\nu,\mu)}\iint_{\{c_{\mathcal{X}}\leq 1\}}c_{\mathcal{X}}(x_1, x_2)d\pi(x_1,x_2)\bigg)^{\frac{1}{r}}\\&+
\|L\|_{L^{2}}\bigg(\inf_{\pi\in\Pi(\nu,\mu)}\iint_{\{c_{\mathcal{X}}> 1\}}c_{\mathcal{X}}(x_1, x_2)^{2}d\pi(x_1,x_2)\bigg)^{\frac{1}{2}}
\\&
\lesssim\bigg(H(\nu\; |\; \mu)\bigg)^{\frac{1}{2r}}+
\bigg(H(\nu\; |\; \mu)\bigg)^{\frac{1}{2}}\lesssim \bigg(H(\nu\; |\; \mu)\bigg)^{\frac{1}{2r}}\vee\bigg(H(\nu\; |\; \mu)\bigg)^{\frac{1}{2}}.
\end{aligned}$$ The proof is complete upon invoking the identity $$\label{entropyid}
H(\tilde{\nu}\;|\;\tilde{\mu})=\inf\big\{ H(\nu\; |\; \mu); \nu\in\mathscr{P}(\mathcal{X}): \nu\circ\Psi^{-1} =\tilde{\nu} \big\}$$ which holds when $\mathcal{X}$ is a Polish space. ◻
**Remark 12**. The previous lemma is used in the proof of most of our main results. We emphasise here that for the identity [\[entropyid\]](#entropyid){reference-type="eqref" reference="entropyid"} to hold, it is sufficient to require that $\mathcal{X}$ is Polish (and not $\mathcal{Y})$. In view of the latter, the same is true for the contraction principle.
# TCIs for Gaussian rough differential equations {#Section: RPs}
Our first result concerns the solutions of Rough Differential Equations (RDEs) driven by a Gaussian process with continuous paths. Throughout this section, $\mathcal{C}^{\text{p-var}}(I;\mathbb{R}^d)$ is the Banach space of continuous $\mathbb{R}^d$-valued paths of finite $p$-variation, defined on the compact interval $I\subset[0,\infty];$ the $p$-variation distance is denoted by $g_{\text{p-var}}$.
**Definition 13**. *(Gaussian rough paths) Let $T>0, p\in[1, 3)$ and $X$ be a $d$-dimensional, continuous Gaussian process on $[0,T]$ with paths of finite $p$-variation.*
1. *A geometric $p$-rough path $\mathbf{X}$ over $X$ is a pair $$\mathbf{X}=\mathscr{L}(X):=\left(X, \mathbb{X}\right)\in C\left([0,T]^2;\mathbb{R}^d\oplus \mathbb{R}^{d\otimes d}\right),$$ such that the following hold $\mathbb{P}$-almost surely:*
1. *(Chen's relation) $X_{s,t}=X_{s,u}+X_{u,t}$ and $\mathbb{X}_{s,t}=\mathbb{X}_{s,u}+\mathbb{X}_{u,t}+X_{s,u}\otimes X_{u,t}$ for all $0\leq s\leq u\leq t\leq T$.*
2. *($p$-variation regularity) $\|\mathbf{X}\|_{\text{p-var}}^p:=\sup_{(t_i)\in\mathcal{P}[0,T]}\sum_{i}\bigg(|X_{t_{i},t_{i+1}}|+\big|\mathbb{X}_{t_i,t_{i+1}}\big| \bigg)^{p}$ is finite, where $\mathcal{P}[0,T]$ is the collection of finite dissections of $[0,T]$.*
2. *The inhomogeneous $p$-variation metric $\mathbf{g}_{\text{p-var}}$ is defined for two geometric $p$-rough paths by $\mathbf{g}_{\text{p-var}}(\mathbf{X}, \mathbf{Y}) := \|\mathbf{X}-\mathbf{Y}\|_{\text{p-var}}$.*
3. *The space $\mathcal{D}^{0,p}_g([0,T];\mathbb{R}^d)$ of geometric $p$-rough paths is defined as the completion of the set $\{ \mathscr{L}(f), f\in C^{\infty}\}$ with respect to $\mathbf{g}_{\text{p-var}}$.*
**Remark 14**. The metric space $(\mathcal{D}^{0,p}_g([0,T];\mathbb{R}^d), \mathbf{g}_{\text{p-var}}$) is Polish [@friz2010multidimensional Proposition 8.27]. The second-order process $\mathbb{X}$ is typically given by the iterated integral $\mathbb{X}_{s,t}=\int_{s}^{t}(X_r-X_s)\otimes dX_r$ which is defined as a limit (in probability) of piecewise linear approximations.
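To illustrate Chen's relation, let $X$ be a smooth path, $X_{s,t}=X_t-X_s$ and $\mathbb{X}_{s,t}=\int_{s}^{t}X_{s,r}\otimes dX_r$. Then, for $0\leq s\leq u\leq t\leq T$, $$\mathbb{X}_{s,t}
=\int_{s}^{u}X_{s,r}\otimes dX_r+\int_{u}^{t}\big(X_{s,u}+X_{u,r}\big)\otimes dX_r
=\mathbb{X}_{s,u}+\mathbb{X}_{u,t}+X_{s,u}\otimes X_{u,t},$$ which is precisely the algebraic constraint of Chen's relation above; the lift of a Gaussian process is constructed so that this identity survives the limit of piecewise linear approximations.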
**Theorem 15**. *Let $T>0, p\in[1, 3)$ and $X=(X_t)_{t\in[0,T]}$ be a $d$-dimensional, continuous, mean-zero Gaussian process with Cameron-Martin space $\mathcal{H}$ such that*
1. *$X$ has a natural lift to a geometric $p$-rough path $\mathbf{X}$;*
2. *there exists $q$ with $\frac{1}{p}+\frac{1}{q}>1$ such that $\mathcal{H}\hookrightarrow \mathcal{C}^{q-var}([0,T];\mathbb{R}^d)$.*
*Next, let $\gamma>p$, $V=(V_1, \dots, V_d)$ be $Lip^\gamma$-vector fields on $\mathbb{R}^m$ [@friz2010multidimensional Definition 10.2] and consider the solution $(Y_t)_{t\in[0,T]}$ of the RDE $$\label{eq:RDE1}
dY_t= V(Y_t)d\mathbf{X}_t,
\qquad Y_0\in\mathbb{R}^m.$$ Then the following hold:*
1. *The law $\mu\in\mathscr{P}(\mathcal{D}^{0,p}_g[0,T])$ of $\mathbf{X}$ satisfies $\mathscr{T}_{\alpha}(\mathbf{g}_{\text{p-var}}^{1/2})$ with $\alpha(t)=C(t\wedge t^2)$, $C>0$.*
2. *The law $\mu\in \mathscr{P}(\mathcal{C}^{\text{p-var}}([0,T];\mathbb{R}^m))$ of $Y$ satisfies $\mathscr{T}_{\alpha}(g_{\text{p-var}}^{1/q})$ with $\alpha(t)=C(t^{2q}\wedge t^2)$, $C>0$.*
**Remark 16**. Talagrand's inequalities for $q=1$, which corresponds to Brownian or \"smoother\" paths, have been proved in [@riedel2017transportation Theorem 2.14]. Our result shows that solutions of RDEs with \"rougher\" drivers (e.g. fBm with Hurst parameter $H\in (\frac{1}{3},\frac{1}{2})$) also satisfy TCIs with a different cost and deviation function. In fact, setting $q=1$, we recover $\mathscr{T}_1(C)$ from [@riedel2017transportation].
*Proof.* (1) Let $(s,t)\in[0,T]^2$ and $$T_h\mathbf{X}_{s,t} := \bigg(X_{s,t}+h_{s,t}, \mathbb{X}_{s,t}+\int_{s}^{t} h_{s,r}\otimes dX_r+\int_{s}^{t} X_{s,r}\otimes dh_r+\int_{s}^{t} h_{s,r}\otimes dh_r\bigg)$$ denote the translation of $\mathbf{X}$ in the direction of $h\in\mathcal{H}$ (note that the assumptions on $X$ guarantee that the last two integrals on the right-hand side are well-defined Young integrals). From [@riedel2017transportation Lemma 2.10], we have for some constant $C>0$ $$\mathbf{g}_{\text{p-var}}(T_h\mathbf{X}, \mathbf{X})^{1/2}\leq C(1\vee\|\mathbf{X}\|_{\text{p-var}})\big(\|h\|_{\mathcal{H}}\vee \|h\|^{1/2}_{\mathcal{H}}\big).$$ Appealing to Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"} with $\mathcal{X}=\mathcal{C}^{\text{p-var}}([0,T];\mathbb{R}^d)$, $\mathcal{Y}=\mathcal{D}^{0,p}_g[0,T]$, $\Psi$ the lift map $X\mapsto\mathbf{X}$, $c_{\mathcal{X}}$ the Cameron-Martin pseudometric and $c_{\mathcal{Y}}=\mathbf{g}_{\text{p-var}}^{1/2}$, the conclusion follows.\
(2) Let $Y^h$ be the solution of [\[eq:RDE1\]](#eq:RDE1){reference-type="eqref" reference="eq:RDE1"} driven by $T_h\mathbf{X}$. From [@riedel2017transportation Lemma 2.11], we have $$g_{\text{p-var}}(Y^h, Y)^{1/q}\leq C\exp\Big(N_1(\mathbf{X};[0,T])+1\Big)\big(\|h\|_{\mathcal{H}}\vee \|h\|^{1/q}_{\mathcal{H}}\big),$$ for some constant $C>0$, where the random variable $N_1(\mathbf{X};[0,T])$ has finite moments of all orders. Appealing to Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"} with $\mathcal{X}=\mathcal{D}^{0,p}_g[0,T]$, $\mathcal{Y}=\mathcal{C}^{\text{p-var}}([0,T];\mathbb{R}^m)$, $\Psi$ the solution map $\mathbf{X}\mapsto Y$, $c_{\mathcal{X}}$ the Cameron-Martin pseudometric on $\mathcal{D}^{0,p}_g[0,T]$ and $c_{\mathcal{Y}}=g_{\text{p-var}}^{1/q}$, the result follows. ◻
The well-known estimates of [@cass2013integrability Theorem 6.23] (see also [@friz2013integrability] and the lower bound from [@boedihardjo2022lack Theorem 1.1]) show that the laws of solutions of RDEs with bounded and sufficiently smooth vector fields are Weibull-tailed with shape parameter $2/q$. This non-Gaussian tail behaviour is a consequence of rough integration which takes into account not just the noise $X$ but also iterated integrals of $X$ with itself. Due to the lack of Gaussian integrability, such measures are not expected to satisfy Talagrand's $\mathscr{T}_r(C)$ inequalities for any $r\geq 1$. Nevertheless, the more general TCI $\mathscr{T}_\alpha(c)$ allows us to recover the \"correct\" tail behaviour. Indeed, we have the following:
**Corollary 17**. *For $X, Y, q$ as in Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"}, there exist $C_1, C_2>0$ such that for all $R>0$, $$\mathbb{P}\bigg[ \|\mathbf{X}\|_{\text{p-var}}\geq R \bigg] \leq C_1e^{-C_2 R}
\qquad\text{and}\qquad
\mathbb{P}\bigg[ \|Y\|_{\text{p-var}}\geq R \bigg] \leq C_1 e^{-C_2 R^{2/q}}.$$*
*Proof.* From Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"}(1) and Corollary [Corollary 6](#Cor:Exponential Moments){reference-type="ref" reference="Cor:Exponential Moments"}(ii) with $\mathcal{X}=\mathcal{D}_g^{0,p}[0,T]$, $x_0=0$, $d=\mathbf{g}_{\text{p-var}}$, $p=\frac{1}{2}$, there exists $\lambda_0>0$ such that $\mathbb{E}[\exp( \lambda\|\mathbf{X}\|_{\text{p-var}})]$ is finite for all $\lambda<\lambda_0$. An application of Markov's inequality allows us to conclude that $$\mathbb{P}\bigg[ \|\mathbf{X}\|_{\text{p-var}}\geq R \bigg]\leq e^{-\lambda R}\mathbb{E}\Bigg[\exp\bigg( \lambda\|\mathbf{X}\|_{\text{p-var}} \bigg)\Bigg].$$ Similarly to Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"}(2) and Corollary [Corollary 6](#Cor:Exponential Moments){reference-type="ref" reference="Cor:Exponential Moments"}(ii) with $\mathcal{X}=\mathcal{C}^{\text{p-var}}[0,T]$, $x_0=0$, $d=g_{\text{p-var}}$, $p=1/q$, there exists $\lambda_0>0$ such that $\mathbb{E}[\exp(\lambda\|Y\|^{2/q}_{\text{p-var}})]$ is finite for all $\lambda<\lambda_0$. Once again the conclusion follows from Markov's inequality $$\mathbb{P}\bigg[ \|Y\|_{\text{p-var}}\geq R \bigg]\leq e^{-\lambda R^{2/q}}\mathbb{E}\Bigg[\exp\bigg( \lambda\|Y\|^{2/q}_{\text{p-var}} \bigg)\Bigg].$$ ◻
**Remark 18**. Examples of Gaussian processes $X$ that satisfy the assumptions of Theorem [Theorem 15](#Thm: TCIRDE){reference-type="ref" reference="Thm: TCIRDE"} include Brownian motion (in which case $Y$ is interpreted in the Stratonovich sense), fractional Brownian motion with Hurst exponent $H\in(\frac{1}{3}, 1)$, Brownian (and more generally Gaussian) bridges, Ornstein-Uhlenbeck processes driven by Brownian motion and bifractional Brownian motion [@riedel2017transportation Example 2.6] with parameters $H,K$ satisfying $HK\in(\frac{1}{3}, 1)$.
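As an illustration of these bounds, let $X$ be a $d$-dimensional fBm with Hurst parameter $H\in(\frac{1}{3},\frac{1}{2}]$, whose Cameron-Martin space is known to embed into $\mathcal{C}^{q-var}$ for any $q>(H+\frac{1}{2})^{-1}$ (see e.g. [@friz2010multidimensional]). Corollary [Corollary 17](#cor: LatalaCLL){reference-type="ref" reference="cor: LatalaCLL"} then yields, for every $\varepsilon>0$, constants $C_1, C_2>0$ such that $$\mathbb{P}\bigg[ \|Y\|_{\text{p-var}}\geq R \bigg] \leq C_1 e^{-C_2 R^{2H+1-\varepsilon}},$$ a Weibull-type tail whose shape parameter interpolates between the Gaussian value $2$ at $H=\frac{1}{2}$ and $\frac{5}{3}$ as $H\downarrow\frac{1}{3}$.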
# TCIs for regularity structures: Rough Volatility {#Section: Rough Vol}
In this section we focus on a simple rough volatility model. To this end, let $H\in(0,\frac{1}{2}]$ and consider the evolution of log-prices governed by the stochastic differential equation $$\label{roughvolmodel}
dS_t/S_t= f\left( \widehat{W}_t^H, t\right)dW_t,
\qquad S_0=s_0 > 0.$$ Here $W$ is a standard Brownian motion, $fdW$ is an Itô integral and $\widehat{W}^H$ is a Riemann-Liouville (or type-II) fractional Brownian motion, in particular, $$\widehat{W}^H_t = \int_{0}^{t}K^H(t-r)dW_r, \qquad \text{for }t\geq 0,$$ where $K^H:[0,\infty)\rightarrow\mathbb{R}$ denotes the power-law Volterra kernel $K^H(t)=\sqrt{2H}t^{H-\frac{1}{2}}$. As is well known, the solution map of an Itô SDE is not continuous with respect to the driving Brownian motion $W$. The theory of rough paths, initially developed by Lyons [@lyons1998differential], provides a remedy for this lack of continuity for a large class of SDEs. In particular, Lyons' universal limit theorem [@friz2020course Theorem 8.5] asserts that the solution of an SDE is a continuous image of the canonical rough path lift of the noise with respect to an appropriate rough path topology.
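The normalisation of the Volterra kernel is chosen so that the marginal variance matches that of a standard fBm: by the Itô isometry, $$\mathbb{E}\Big[\big(\widehat{W}^H_t\big)^2\Big]=\int_{0}^{t}K^H(t-r)^2dr=2H\int_{0}^{t}(t-r)^{2H-1}dr=t^{2H},
\qquad t\geq 0.$$ For $H<\frac{1}{2}$ the kernel is singular at the origin, which is responsible for the low Hölder regularity (of order just below $H$) of the paths of $\widehat{W}^H$.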
As explained in [@bayer2020regularity Section 2], the SDE [\[roughvolmodel\]](#roughvolmodel){reference-type="eqref" reference="roughvolmodel"} is beyond the reach of rough paths theory since $\widehat{W}^H$ and $W$ are not independent and because calibrated rough volatility models feature Hurst indices $H<\frac{1}{4}$ [@gatheral2018volatility]. Nevertheless, as shown in [@bayer2020regularity], the continuity of the rough volatility solution map can be recovered in the framework of Hairer's theory of regularity structures [@hairer2014theory]. The main idea is to enhance the noise $"dW"$ with sufficient higher-order functionals defined with respect to a fixed regularity structure $\mathcal{T}$. The enhanced solution map is then continuous with respect to the topology of models defined on $\mathcal{T}$ (see Definition [\[dfn:modeltop\]](#dfn:modeltop){reference-type="ref" reference="dfn:modeltop"}).
**Definition 19**. *Let $A\subset\mathbb{R}$ be a locally finite (e.g. discrete) index set that is bounded from below. A regularity structure is defined as a pair $(\mathcal{T}, G)$ of a vector space $\mathcal{T}$ (the structure space) and a group $G$ (the structure group) with the following properties:*
1. *$\mathcal{T}=\bigoplus_{\alpha\in A} \mathcal{T}_\alpha$, where for each $\alpha\in A, \mathcal{T}_\alpha$ is a Banach space. Each element $\tau\in \mathcal{T}_\alpha$ is said to have degree (or homogeneity) $\alpha$ and we write $|\tau|=\alpha$. For each $\tau\in\mathcal{T}$, $\|\tau\|_\alpha$ denotes the norm of the component of $\tau$ in $\mathcal{T}_{\alpha}$.*
2. *$G$ is a group of linear transformations on $\mathcal{T}$ such that for each $\Gamma\in G$, $\alpha\in A$, $\tau_\alpha\in\mathcal{T}_\alpha$, $$\label{reexpansion}
\Gamma\tau_\alpha-\tau_{\alpha}\in\bigoplus_{\beta<\alpha}\mathcal{T}_{\beta}.$$*
**Remark 20**. A useful analogy is that of a regularity structure as an abstraction of Taylor expansions. In particular, one can think that for each $\alpha\in A$, $\mathcal{T}_\alpha$ and $\mathcal{T}$ contain monomials of degree $\alpha$ and abstract Taylor polynomials respectively. The action of $G$ on $\mathcal{T}$ can be thought of as \"re-expansion\" of a Taylor polynomial with respect to a different base point. Then, at a formal level, [\[reexpansion\]](#reexpansion){reference-type="eqref" reference="reexpansion"} expresses the fact that the difference between a monomial of degree $\alpha$ and its re-expanded version will be a polynomial of degree $\beta<\alpha$. For example, re-expanding the second degree monomial $x^2\in\mathcal{T}_2$ around $1$ gives us $$\Gamma_1x^2-x^2=(x-1)^2-x^2=-2x+1\in\mathcal{T}_0\oplus\mathcal{T}_1=\bigoplus_{\beta<2}\mathcal{T}_\beta.$$
## The rough volatility regularity structure {#rvolrs}
We now define the concrete rough volatility regularity structure, tailor-made to [\[roughvolmodel\]](#roughvolmodel){reference-type="eqref" reference="roughvolmodel"}, as constructed in [@bayer2020regularity]. First, we introduce a finite set of symbols that provide the building blocks for the abstract Taylor expansions. To this end, let $M\in\mathbb{N}, \kappa\in(0, H)$ and $$\mathcal{S} := \Big\{\mathbf{1}, \Xi, I(\Xi),\dots, I(\Xi)^M, \Xi I(\Xi), \Xi I(\Xi)^2,\dots, \Xi I(\Xi)^M\Big\}.$$ Here, the symbol $\Xi$ corresponds, up to realisation (Definition [Definition 22](#modeldef){reference-type="ref" reference="modeldef"}) to the underlying noise $dW=\dot{W}$ and $I$ denotes convolution with respect to the kernel $K^H$. The degrees of the symbols are postulated as follows:
-------- -------------- ----------------------- -------------------------- ----------------------------- ---------------------------------- --
Symbol $\mathbf{1}$ $\Xi$ $I(\Xi)$ $I(\Xi)^M$ $\Xi I(\Xi)^M$
Degree 0 $-\frac{1}{2}-\kappa$ $\displaystyle H-\kappa$ $\displaystyle M(H-\kappa)$ $M(H-\kappa)-\frac{1}{2}-\kappa$
-------- -------------- ----------------------- -------------------------- ----------------------------- ---------------------------------- --
The number $M$ is chosen to be the smallest integer for which $\Xi I(\Xi)^{M+1}$ has positive homogeneity (more precisely, this choice implies that the modelled distribution lift of $fdW$ belongs to a space $\mathcal{D}_T^\gamma(\Gamma)$ of positive regularity with $\gamma>0$, see Definition [Definition 37](#Ddef){reference-type="ref" reference="Ddef"}, [\[eq:Dgammanorm\]](#eq:Dgammanorm){reference-type="eqref" reference="eq:Dgammanorm"} below, as required by the reconstruction theorem for modelled distributions). Thus it suffices to consider $M$ such that $| \Xi I(\Xi)^{M+1}|=(M+1)(H-\kappa)-\frac{1}{2}-\kappa>0$, and we take $$\label{Mchoice}
M:= \mathfrak{M}(H,\kappa):= \max\bigg\{ m\in\mathbb{N}: m(H-\kappa)-\frac{1}{2}-\kappa\leq 0\bigg\}.$$
**Remark 21**. For very small $\kappa \in (0,H)$, we have that $m(H-\kappa)-\frac{1}{2}-\kappa\leq 0$ if and only if $m \leq (\kappa+\frac{1}{2})/(H-\kappa)$, so that $M$ is the integer part of this quantity, which tends to $\frac{1}{2H}$ from above as $\kappa\to 0$. When $\kappa$ is close to $H$, say of the form $\kappa = H-\varepsilon$, then the condition reads $m \leq (H-\varepsilon+\frac{1}{2})/\varepsilon$, whose integer part tends to infinity as $\varepsilon\to 0$.
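For instance, at the calibrated value $H=0.1$ mentioned in the introduction and for small $\kappa$, the condition $m(H-\kappa)-\frac{1}{2}-\kappa\leq 0$ reads $m\leq(0.5+\kappa)/(0.1-\kappa)$, so that $$\mathfrak{M}(0.1,\kappa)=5
\qquad\text{for all }\kappa<\tfrac{1}{70},$$ and the regularity structure must carry the symbols $I(\Xi)^m$ and $\Xi I(\Xi)^m$ up to $m=5$.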
The index set and structure space are then defined by $A:=\{|\tau|\;: \tau\in\mathcal{S}\}$ and $$\label{Tvol}
\mathcal{T}=span\{\mathcal{S}\}=\bigoplus_{\alpha\in A}\mathcal{T}_\alpha:=\bigoplus_{\alpha\in A}span\{\tau\in\mathcal{S}: |\tau|= \alpha \}.$$ Turning to the structure group, we let $G:=\{\Gamma_h\;, h\in\mathbb{R}\}\subset\mathscr{L}(\mathcal{T})$ such that for all $h\in\mathbb{R}$: $$\label{eq:structuregroup1}\Gamma_h\mathbf{1}=\mathbf{1},\qquad
\Gamma_h\Xi=\Xi,\qquad
\Gamma_hI(\Xi)=I(\Xi)+h\mathbf{1},$$ and $$\label{eq:structuregroup}
\Gamma_h\tau\tau'=\Gamma_h\tau\Gamma_h\tau',$$ for all $\tau,\tau'\in\mathcal{S}$ such that $\tau\tau'\in\mathcal{S}$. The expression $\tau\tau'$ should be interpreted as a formal product between symbols. The maps $\Gamma_h$ are then extended to $\mathcal{T}$ by linearity. The group property of $G$ is inherited by the additive structure of the real numbers.
A model for a regularity structure is a concrete interpretation of the abstract Taylor polynomials and their re-expansion rules. Part of the flexibility of the theory is owed to the fact that monomials can be very irregular functions and even Schwartz distributions. In the next definition, $\mathscr{L}(X,Y)$ denotes the space of continuous, linear operators between two topological vector spaces $X$ and $Y$, $X^*$ denotes the continuous dual space of $X$ and $\langle \cdot,\cdot \rangle: X^*\times X\rightarrow \mathbb{R}$ is the duality pairing $\langle x^*, x \rangle:=x^*(x)$.
**Definition 22**. *Let $(\mathcal{T}, G)$ be the rough volatility regularity structure. A model for $(\mathcal{T}, G)$ over $\mathbb{R}$ is a pair $(\Pi, \Gamma)$ of \"realisation\" and \"re-expansion\" maps $$\Pi: \mathbb{R}\rightarrow\mathscr{L}\big(\mathcal{T}; (\mathcal{C}^\infty_c(\mathbb{R}))^*\big),\quad\Gamma:\mathbb{R}\times\mathbb{R}\rightarrow G,$$ that satisfy the following properties:*
- *For $s,t,z\in\mathbb{R}$, the abstract \"Chen's relation\" $\Pi_t=\Pi_s\Gamma_{s,t}$ holds and $\Gamma_{s,t}=\Gamma_{s,z}\Gamma_{z,t}$;*
- *for all $\tau\in\mathcal{T}, \lambda\in(0,1)$, $s, t$ in a compact set, $\Gamma_{s,t}\in G$, $\phi^\lambda_s(\cdot)=\lambda^{-1}\phi(\lambda^{-1}(\cdot-s))\in C_c^\infty(\mathbb{R})$ with $\mathop{\mathrm{supp}}(\phi)\subset (-1, 1)$ $$\big| \langle \Pi_s\tau, \phi^\lambda_s\rangle \big|\lesssim\lambda^{|\tau|},\quad \Gamma_{s,t}\tau=\tau+\sum_{\tau': |\tau'|<|\tau|}c_{\tau'}(s,t)\tau',$$ with $|c_{\tau'}(s,t)|\lesssim |s-t|^{|\tau|-|\tau'|}$.*
We shall now introduce a (random) Itô model $(\Pi, \Gamma )$, defined on an underlying probability space $(\Omega, \mathcal{F}, \mathbb{P})$, for the rough volatility regularity structure $\mathcal{T}$ [\[Tvol\]](#Tvol){reference-type="eqref" reference="Tvol"}. With $W$ a Brownian motion extended to $\mathbb{R}$ by letting $W=0$ on $(-\infty, 0]$ and $t\geq 0$, let $$\Pi_t\mathbf{1}=1,\;\Pi_t\Xi:= \dot{W}=(d/dt)W,$$ where the derivative is meant in the sense of distributions, i.e. for each $\phi\in \mathcal{C}^\infty_c(\mathbb{R})$ we have $$\label{eq:Itomodel1}
\langle \Pi_t\Xi, \phi\rangle=\langle \dot{W}, \phi\rangle=-\int_\mathbb{R}W_s\dot{\phi}(s)ds.$$ Note that we can restrict the white noise $\dot{W}$ to any finite time interval $[0,T]$ by setting $W=0$ on $\mathbb{R}\setminus[0,T]$ and due to stationarity the value of $\Pi_t\Xi$ does not depend on $t$. Next, let $$\label{eq:IXi}
\Pi_t I(\Xi) := K^H*\dot{W}-K^H*\dot{W}_t = \widehat{W}^H-\widehat{W}^H(t),$$ and for each $m=2,\dots, M$, $\Pi_t I(\Xi)^m = (\widehat{W}^H-\widehat{W}^H(t))^m$. Similarly to the rough path framework, one then considers Itô integrals $$\mathbb{W}^m_{s,t}:=\int_{s}^{t}( \widehat{W}^H(r)-\widehat{W}^H(s) )^mdW_r\;, s<t,\; m=1,\dots, M,$$ and defines $$\Pi_s\Xi I(\Xi)^m:=\frac{d}{dt}\mathbb{W}^m_{s,t}$$ in the sense of distributions. The maps $\Pi_{\cdot}$ are then extended to $\mathcal{T}$ by linearity. Finally, the (random) re-expansion maps are given by $$\label{eq:Gamma}
\Gamma_{s,t}:=\Gamma_{\widehat{W}^H_s-\widehat{W}^H_t}$$ (i.e. the right hand side results from $\Gamma_h$ [\[eq:structuregroup1\]](#eq:structuregroup1){reference-type="eqref" reference="eq:structuregroup1"}, [\[eq:structuregroup\]](#eq:structuregroup){reference-type="eqref" reference="eq:structuregroup"} evaluated at $h=\widehat{W}^H_s-\widehat{W}^H_t)$.
In order to emphasise the dependence of the Itô model on the realisation of the noise we will often write $(\Pi, \Gamma)=(\Pi^{\dot W}, \Gamma^{\dot W})$. For a fixed time horizon $T>0$, we denote the space of models $(\Pi, \Gamma)$ for the rough volatility regularity structure $\mathcal{T}$ by $\mathcal{M}_{T}(\mathcal{T})$. The topology on this space will be discussed in the next section. The object with the lowest homogeneity in $\mathcal{T}$ is $\Xi$, which corresponds to the white noise $\dot{W}$. For $\kappa\in(0,H)$, the latter can be considered as a Gaussian random element that takes values in the Besov space $E:=\mathcal{C}^{-\frac{1}{2}-\kappa}$ (see [@chandra2017stochastic Section 2.1] for a proof in a more general setting), defined as the subspace of distributions $f\in (C_c^1(\mathbb{R}))^*$ such that for all $K\subset\mathbb{R}$ compact, $$\label{eq:Besovnorm}
\|f\|_{-\frac{1}{2}-\kappa, K}:=\sup_{s\in K}\sup_{\substack{\phi\in \mathcal{C}_c^1(\mathbb{R}), \|\phi\|_{\mathcal{C}^1}\leq 1\\ \mathop{\mathrm{supp}}(\phi)\subset(-1,1),\lambda\in(0,1]}}\lambda^{\frac{1}{2}+\kappa}\big| \langle f, \phi^\lambda_s\rangle \big|<\infty,$$ and $\phi_s^\lambda(t)=\lambda^{-1}\phi(\lambda^{-1}(t-s))$ as in Definition [Definition 22](#modeldef){reference-type="ref" reference="modeldef"}. The collection $\{\|f\|_{-\frac{1}{2}-\kappa, K}; K\subset\mathbb{R}\;\text{compact}\}$ can be shown to be a family of seminorms that turn $\mathcal{C}^{-\frac{1}{2}-\kappa}$ into a Fréchet space [@friz2020course Section 13.3.1], but for our purposes we shall restrict to $K=[0,T]$.
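To see why this is the natural space for $\dot{W}$, note that for fixed $s$ and $\lambda\in(0,1]$ the pairing $\langle \dot{W}, \phi^\lambda_s\rangle$ is a centred Gaussian random variable with variance $$\big\|\phi^\lambda_s\big\|_{L^2(\mathbb{R})}^2=\lambda^{-2}\int_{\mathbb{R}}\phi\big(\lambda^{-1}(u-s)\big)^2du=\lambda^{-1}\|\phi\|^2_{L^2(\mathbb{R})},$$ so that $|\langle \dot{W}, \phi^\lambda_s\rangle|$ is typically of order $\lambda^{-\frac{1}{2}}$. Heuristically, the additional $\kappa>0$ in [\[eq:Besovnorm\]](#eq:Besovnorm){reference-type="eqref" reference="eq:Besovnorm"} absorbs the cost of the suprema over $s$ and $\lambda$, which is why the norm is almost surely finite (see [@chandra2017stochastic]).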
**Remark 23**. The triple $(E,\mathcal{H}, \mu)$, where $\mathcal{H}:=L^2[0,T]$ is the Cameron-Martin space of $\dot{W}$ and $\mu:\mathscr{B}(E)\rightarrow[0,1]$ is the law of $\dot{W}$, is an abstract Wiener space.
We shall now introduce the notions of \"model lift\" and of \"model translation\" by elements in $\mathcal{H}$ (such operations have already been considered in [@friz2021precise; @friz2022precise]).
**Definition 24**. *Let $(\mathcal{T}, G)$ be the rough volatility regularity structure and $(\Pi, \Gamma)$ the Itô model for $(\mathcal{T}, G)$. The map $$E\ni \xi\longmapsto\mathcal{L}(\xi)=(\Pi^{\xi}, \Gamma^\xi)\in\mathcal{M}_{T}(\mathcal{T}),$$ where the noise $\dot{W}$ is replaced by $\xi$ is called the model lift. The model translation in the direction of $h\in\mathcal{H}$ is defined by $$\label{modeltranslationdilation}
\mathcal{M}_{T}(\mathcal{T})\ni \Pi^\xi \longmapsto T_h(\Pi^\xi):=\Pi^{\xi+h}\in \mathcal{M}_{T}(\mathcal{T}).$$ The translation $T_h\Gamma$ is defined analogously.*
For fixed $h\in\mathcal{H}$, the (deterministic) model $(\Pi^h, \Gamma^h)$ is given by $$\Pi^h_t\mathbf{1}=1,\quad
\Pi^h_t\Xi=h,\quad
\Pi^h_tI(\Xi)^m=(\widehat{h}-\widehat{h}(t))^m,\quad
\Pi^h_t\Xi I(\Xi)^m=\frac{d}{ds}\mathbb{H}^m_{t,s},
\quad \text{for } m=1,\dots, M,$$ where $$\mathbb{H}^m_{t,s} :=
\int_{t}^{s}\left(\widehat{h}(r)-\widehat{h}(t) \right)^mh(r)dr.$$ This last integral is well-defined since $\widehat{h}=\int_0^\cdot K^H(\cdot-r)h(r)dr\in \mathcal{C}^{H}[0,T]$ and $h$ is square-integrable. Note that, for each $t\in[0,T]$ and $m=1,\dots, M$, $\Pi^h_t \Xi I(\Xi)^m$ is function-valued. Nevertheless, the homogeneity of the symbol $\Xi I(\Xi)^m$ is $m(H-\kappa)-\frac{1}{2}-\kappa$, which is negative for small enough $m$.
**Remark 25**. Similar to the case of random rough paths, the lift map $\mathcal{L}$ is well defined on smooth functions since the integrals in $\Pi$ are then standard Riemann integrals. In general, however, due to the probabilistic step (i.e. the use of Itô integration) in the construction of $\Pi$, such a map is only well defined $\mu$-almost everywhere in $E$. As a result, $\mathcal{L}$ is only a measurable map on the path space $(\Omega, \mathscr{F}, \mathbb{P})=(E, \mathscr{B}(E), \mu )$.
## TCIs for Itô models and the driftless log-price {#shiftsec}
In this section we prove some continuity estimates for Cameron-Martin shifts of Itô models and Itô integrals of the form $\int_{0}^{\cdot}f( \widehat{W}^H_s, s)dW_s$. Then, we present our result, Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}, on TCIs for Itô models and the driftless log-price corresponding to [\[roughvolmodel\]](#roughvolmodel){reference-type="eqref" reference="roughvolmodel"}.
**Definition 26**. *(Model topology)[\[dfn:modeltop\]]{#dfn:modeltop label="dfn:modeltop"} Let $(\mathcal{T}, G)$ be a regularity structure and $T>0$. The distances $\|\cdot\|, \VERT\cdot\VERT:\mathcal{M}_{T}(\mathcal{T})^2\rightarrow[0, \infty]$ between two models $(\Pi^1, \Gamma^1), (\Pi^2, \Gamma^2)\in\mathcal{M}_{T}(\mathcal{T})$ are defined by*
*$$\label{twobar}
\left\|\Pi^1, \Pi^2\right\| \equiv
\left\|\Pi^1-\Pi^2\right\|
:= \sup_{\substack{\phi\in C_c^1(\mathbb{R}), \|\phi\|_{\mathcal{C}^1}\leq 1\\ \mathop{\mathrm{supp}}(\phi)\subset(-1,1),\lambda\in(0,1]\\ \tau\in\mathcal{S}, s\in[0,T] }}
\lambda^{-|\tau|}
\left| \left\langle \left(\Pi^1_{s}-\Pi^2_{s}\right)\tau, \phi^\lambda_s\right\rangle \right|$$ and $$\label{threebar}
\left\VERT\left(\Pi^1, \Gamma^1\right), \left(\Pi^2, \Gamma^2\right) \right\VERT
:= \left\|\Pi^1-\Pi^2\right\| + \sup_{\substack{s,t\in[0,T], \tau\in\mathcal{S}\\A\ni\beta<|\tau|}}\frac{\big| \big(\Gamma^1_{t,s}- \Gamma^2_{t,s}\big)\tau\big|_\beta}{|t-s|^{|\tau|-\beta}},$$ where $|\tau|_\beta$ denotes the absolute value of the coefficient of $\tau$ with $|\tau|=\beta$.*
**Remark 27**. It turns out that the two notions of distance introduced in the previous definition are equivalent, provided that the models considered satisfy appropriate admissibility conditions. In the context of singular stochastic PDEs, this observation has been proved in [@cannizzaro2017malliavin Remark 3.5]. A similar observation holds for the case of the Itô models considered for the rough volatility regularity structure [@bayer2020regularity Lemma 3.19].
**Remark 28**. In order to avoid technical difficulties related to the well-posedness of the Kantorovich dual problem (see for instance the proof of Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}), it is of interest to define white noise as a random element taking values in a Polish space. As pointed out in [@hairer2015large Remark 2.6], this space is not separable. However, this can be fixed by defining $\mathcal{C}^{-\frac{1}{2}-\kappa}$ as the completion of smooth functions with respect to the seminorms $\|\cdot\|_{-\frac{1}{2}-\kappa, K}$. Another alternative would be to define $\dot{W}$ as a random element with values in the (much larger) space $\mathcal{S}'$ of tempered distributions, which is separable. The same issue is present in the case of the metric space $(\mathcal{M}_{T}(\mathcal{T}), \|\cdot\|)$. Once again, the remedy is to define $\mathcal{M}_{T}(\mathcal{T})$ as the completion of the set of smooth, admissible models (i.e. $\Pi$ such that $[0,T]\ni s\mapsto \Pi_s\in\mathscr{L}\big(\mathcal{T}; (C_c^\infty(\mathbb{R}))^*\big)$ is smooth) under the metric $\|\cdot\|$.
The estimates in the following two lemmata are used in the proof of Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}. Their proofs are deferred to Appendices [7.1](#proof:shiftlem){reference-type="ref" reference="proof:shiftlem"} and [7.2](#proof:Itoshiftlem){reference-type="ref" reference="proof:Itoshiftlem"}, respectively.
**Lemma 29**. *(Model shifts)[\[shiftlem\]]{#shiftlem label="shiftlem"} Let $T>0$. The map $\mathcal{H}\ni h\mapsto T_h\Pi^\xi\in\mathcal{M}_T(\mathcal{T})$ is locally Lipschitz continuous. In particular, for $H\in(0,1), M\in\mathbb{N}$ as in [\[Mchoice\]](#Mchoice){reference-type="eqref" reference="Mchoice"}, there exist $C_{H,M,T}>0$ and a random variable $K\in\cap_{p\geq 1}L^p(\Omega)$ such that we have the almost sure estimate $$\big\|T_{h_2}\Pi^{\xi}-T_{h_1}\Pi^{\xi}\big\|\leq C_{H,M,T}K_{h_1}\big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{M+1}\big),
\quad\text{for all }h_1, h_2\in\mathcal{H}.$$*
Before we proceed to the analysis of the driftless log-price we introduce the following assumption on the volatility function $f$.
**Assumption 30**. *The volatility function $f:\mathbb{R}\times\mathbb{R}^{+}\rightarrow\mathbb{R}$ in [\[roughvolmodel\]](#roughvolmodel){reference-type="eqref" reference="roughvolmodel"} satisfies $$\label{eq:assu f}
f(x,t)-f(y,t)=f_1(x-y,t)f_2(y,t),$$ for all $t>0, x, y\in \mathbb{R}$, where*
- *$f_1:\mathbb{R}\times\mathbb{R}^+\rightarrow \mathbb{R}$ is differentiable and $|\nabla f_1|\leq G$ for some nondecreasing $G:\mathbb{R}\to[0, \infty]$;*
- *for all $T>0$, there exist $C_1, C_2>0$ such that $\sup_{t\in[0,T]}|f_2(x,t)|\leq C_1(1+e^{C_2|x|})$ for all $x\in\mathbb{R}$.*
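As a quick check of this assumption in the exponential case relevant to the rough Bergomi model, take $f(x,t)=g(t)e^x$ with $g$ bounded on $[0,T]$. Then $$f(x,t)-f(y,t)=g(t)e^y\big(e^{x-y}-1\big)=:f_1(x-y,t)\,f_2(y,t),$$ with $f_1(z,t)=e^z-1$, so that $|\partial_zf_1(z,t)|=e^{z}=:G(z)$ with $G$ nondecreasing, and $f_2(y,t)=g(t)e^y$, which satisfies the exponential bound with $C_1=\sup_{t\in[0,T]}|g(t)|$ and $C_2=1$. Volatility functions of polynomial growth are covered by the choice $G(x)=|x|\vee|x|^r$ appearing in Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}(3) below.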
**Lemma 31**. *(Itô integral shifts)[\[Itoshiftlem\]]{#Itoshiftlem label="Itoshiftlem"} Let $(\Omega, \mathcal{H}, \mathbb{P})$ be the abstract Wiener space of $\dot{W}$ (Remark [Remark 23](#rem:RVabstractWiener){reference-type="ref" reference="rem:RVabstractWiener"}) and $f:\mathbb{R}\times\mathbb{R}^{+}\rightarrow\mathbb{R}$ satisfy Assumption [Assumption 30](#assu:f){reference-type="ref" reference="assu:f"}. Then for all $\gamma<\frac{1}{2}$ there exists $c>0$ and a random variable $K\in\cap_{p\geq 1}L^p(\Omega)$ such that for all $h\in\mathcal{H}$, we have $\mathbb{P}$-a.s., $$\bigg\|\bigg(\int_{0}^{\cdot}f( \widehat{W}^H_r, r)dW_r\bigg)(\omega+h)- \bigg(\int_{0}^{\cdot}f( \widehat{W}^H_r, r)dW_r\bigg)(\omega)\bigg\|_{\mathcal{C}^\gamma[0,T]}\leq K\big|G(c\|h\|_{\mathcal{H}})\big|\bigg(\|h\|_{\mathcal{H}}+1\bigg).$$*
The following is the main result of this section:
**Theorem 32**. *Let $H\in(0, \frac{1}{2}], \kappa\in (0, H), M=\mathfrak{M}(H,\kappa)$ (as in [\[Mchoice\]](#Mchoice){reference-type="eqref" reference="Mchoice"}) and $g^+=g\vee 0$. Then*
1. *Let $c_\mathcal{H}:\mathcal{M}_T(\mathcal{T})\times\mathcal{M}_T(\mathcal{T})\rightarrow\mathbb{R}$ denote the Cameron-Martin pseudo-metric: $$\label{CMpseudometric}
c_\mathcal{H}(\Pi^1,\Pi^2)=
\begin{dcases}
\|\Pi^{1}_0\Xi-\Pi^{2}_0\Xi\|_{\mathcal{H}},
& \text{if }\Pi_0^{1}\Xi-\Pi_0^{2}\Xi\in\mathcal{H},\\
\infty,
& \text{otherwise}.
\end{dcases}$$ The law $\mu\in\mathscr{P}( \mathcal{M}_T(\mathcal{T}))$ of the Itô model $(\Pi^\xi, \Gamma^\xi)$ satisfies $\mathscr{T}_2(2)$ with respect to $c_\mathcal{H}$.*
2. *The law $\mu\in\mathscr{P}( \mathcal{M}_T(\mathcal{T}))$ of the Itô model $(\Pi^\xi, \Gamma^\xi)$ satisfies $\mathscr{T}_\alpha(c)$ with $\alpha(t)= t^2\wedge t^{2(M+1)}$ and $c(\Pi^1,\Pi^2):=\|\Pi^1-\Pi^2\|^{\frac{1}{M+1}}$.*
3. *Let $\gamma\in (0, \frac{1}{2})$ and $f$ satisfy Assumption [Assumption 30](#assu:f){reference-type="ref" reference="assu:f"} with $G(x)=|x|\vee|x|^r$ for some $r\geq 1$. The law $\mu\in\mathscr{P}(\mathcal{C}^\gamma[0,T])$ of the driftless log-price $\int_{0}^{\cdot}f(\widehat{W}^H_r,r)dW_r$ satisfies $\mathscr{T}_\alpha(c)$ with $\alpha(t)= t^2\wedge t^{2(r+1)}$ and $c(x,y)=\|x-y\|_{\mathcal{C}^\gamma}^{\frac{1}{r+1}}$.*
4. *Let $f(x,t)=g(t)e^x$ for some $g\in \mathcal{C}^1[0,T]$. Then, for some $C>0$, the law $\mu\in\mathscr{P}(C[0,T])$ of $(\log|\int_{0}^{\cdot}f(\widehat{W}^H_r,r)dW_r|)^+$ satisfies $\mathscr{T}_1(C)$.*
*Proof.* Throughout the proof, $(E, \mathcal{H}, \mathbb{P})$ is the abstract Wiener space of $\xi$ (Remark [Remark 28](#seprem){reference-type="ref" reference="seprem"}), where $\mathbb{P}\in\mathscr{P}(E)$ is the law of $\xi$.
1. By definition of $c_\mathcal{H}$ we have $$\inf_{\pi\in\Pi(\nu,\mu)}\iint_{\mathcal{M}_T(\mathcal{T})\times\mathcal{M}_T(\mathcal{T})}c^2_\mathcal{H}(y_1, y_2)d\pi(y_1,y_2)
= \inf_{\pi\in\Pi(\nu,\mathbb{P})}\iint_{E\times E}\|x_1-x_2\|^2_{\mathcal{H}}d\pi(x_1,x_2).$$ Since $\mathbb{P}\in\mathscr{T}_2(2)$ with respect to the Cameron-Martin distance $\|\cdot\|_{\mathcal{H}}$ [@djellout2004transportation Section 5.1], the conclusion follows.
2. Let $\mathcal{X}=E$, $\mathcal{Y}=\mathcal{M}_T(\mathcal{T})$, $c_{\mathcal{X}}=c_{\mathcal{H}}, c_{\mathcal{Y}}=c$ and $\Psi: \mathcal{X}\rightarrow \mathcal{Y}$ be the model lift $\xi\mapsto\Pi^\xi$. Moreover, set $x_1=\xi$, $x_2=\xi+h$ for some $h\in\mathcal{H}$. In view of Lemma [\[shiftlem\]](#shiftlem){reference-type="ref" reference="shiftlem"} with $h_1=0, h_2=h$, we have $$\begin{aligned}
c_{\mathcal{Y}}\big(\Psi(x_1),\Psi(x_2)\big)
= \big\|\Pi^{x_1}-\Pi^{x_2}\big\|^{\frac{1}{M+1}}
& \leq C K(x_1)\left(\|x_2-x_1\|^{\frac{1}{M+1}}_{\mathcal{H}}\vee\|x_2-x_1\|_{\mathcal{H}}\right)\\
&=C K(x_1)\bigg(c_{\mathcal{X}}(x_1,x_2)\vee c_{\mathcal{X}}(x_1,x_2)^{\frac{1}{M+1}}\bigg),
\end{aligned}$$ where $K\in\bigcap_{p\geq 1}L^p(\mathcal{X}, \mathbb{P})$. Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"} with $r=M+1$ concludes the proof (note that Assumption [Assumption 10](#assu:Setting){reference-type="ref" reference="assu:Setting"} is satisfied since $\mathbb{P}\in\mathscr{T}_2(2)$ with respect to $c_\mathcal{H}$).
3. Let $\mathcal{X}=\mathcal{C}^\gamma[0,T]=\mathcal{Y}$, $c_{\mathcal{X}}=c_{\mathcal{H}}$, $c_{\mathcal{Y}}(x,y)=\|x-y\|_{\mathcal{C}^\gamma}^{\frac{1}{r+1}}$ and $\Psi: \mathcal{X}\rightarrow \mathcal{Y}$ be the map $\omega\mapsto\int f( \widehat{W}^H_r, r)dW_r$. Moreover, set $x_1=\omega$, $x_2=\omega+h$ for some $h\in\mathcal{H}$. From Lemma [\[Itoshiftlem\]](#Itoshiftlem){reference-type="ref" reference="Itoshiftlem"} with $h_1=0, h_2=h$ we obtain $$c_{\mathcal{Y}}\big(\Psi(x_1),\Psi(x_2)\big)
\leq C K(x_1)\bigg(c_{\mathcal{X}}(x_1,x_2)\vee c_{\mathcal{X}}(x_1,x_2)^{\frac{1}{r+1}}\bigg).$$ The conclusion follows once again by appealing to Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"}.
4. Let $\mathcal{X}=\mathcal{C}^0[0,T]=\mathcal{Y}, c_{\mathcal{X}}=\|\cdot\|_{\mathcal{C}^0}$, $c_{\mathcal{Y}}, \Psi$ as in the proof above and fix $h\in\mathcal{H}$. Using a similar argument as in the proof of Lemma [\[Itoshiftlem\]](#Itoshiftlem){reference-type="ref" reference="Itoshiftlem"} it follows that $$\|\Psi(\omega+h)\|_{\mathcal{X}}\leq C_1Ke^{C_2\|h\|_{\mathcal{H}}},$$ where $C_1>0$ and $K\in \bigcap_{p\geq 1}L^p(\mathcal{X}, \mathbb{P})$ is a non-negative random variable. Letting $\Phi=(\log|\Psi|)^+$ the latter implies $$\|\Phi(\omega+h)\|_{\mathcal{X}}\leq \log(C_1K+1)+C_2\|h\|_{\mathcal{H}}.$$ Appealing to the generalised Fernique theorem ([@friz2020course], Theorem 11.7) we deduce that $\Phi$ has Gaussian tails and in particular that $\mathbb{E}\exp(\lambda \|\Phi\|^2_{\mathcal{X}})<\infty$ for sufficiently small and positive values of $\lambda$. The proof is complete by recalling that the latter is equivalent to Talagrand's $\mathscr{T}_1(C)$ (using [@10.1214/ECP.v11-1198 Theorem 1.13] with $\alpha(t)=t^2$).
◻
**Remark 33**. It is straightforward to replace the right-hand side of [\[eq:assu f\]](#eq:assu f){reference-type="eqref" reference="eq:assu f"} by a finite sum of functions $f_{1,k}(x-y,t)f_{2,k}(y,t)$ that satisfy the same properties as $f_1$ and $f_2$. Lemma [\[Itoshiftlem\]](#Itoshiftlem){reference-type="ref" reference="Itoshiftlem"} then holds with $G$ replaced by $\max\{G_k\}$. Assumption [Assumption 30](#assu:f){reference-type="ref" reference="assu:f"} covers a wide range of volatility functions that are of interest in rough volatility modelling. In particular, it includes polynomial volatility models [@jaber2022joint], where $f(x,t)=g(t)P(x)$ for a $\mathcal{C}^1$-function $g$ and a polynomial $P$ of degree $r$, as well as exponential functions that correspond to lognormal volatility models (as in the rough Bergomi model [@bayer2016pricing]).
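To make the lognormal case concrete, the following is a minimal Monte Carlo sketch of the driftless log-price $X_t=\int_0^t f(\widehat{W}^H_r,r)dW_r$ with $f(x,t)=g(t)e^x$. It is purely illustrative and not part of the analysis above: the Riemann--Liouville kernel normalisation $\sqrt{2H}\,(t-s)^{H-1/2}$ used to build $\widehat{W}^H$, the choice of $g$, the left-point Euler discretisation and all parameter values are assumptions of the sketch.

```python
import numpy as np

def simulate_driftless_log_price(H=0.1, T=1.0, n=500, n_paths=5000, seed=0):
    """Crude left-point Euler sketch of X_t = int_0^t f(hat{W}^H_r, r) dW_r with
    f(x, t) = g(t) exp(x) (lognormal / rough Bergomi-type volatility).  hat{W}^H is
    approximated as a Riemann-Liouville fBM driven by the same Brownian increments."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))

    # Assumed kernel K^H(t, s) = sqrt(2H) (t - s)^(H - 1/2); hat{W}^H_{t_i} ~ sum_{j<i} K^H(t_i, t_j) dW_j.
    K = np.zeros((n + 1, n))
    for i in range(1, n + 1):
        K[i, :i] = np.sqrt(2.0 * H) * (t[i] - t[:i]) ** (H - 0.5)
    W_hat = dW @ K.T                                   # shape (n_paths, n + 1)

    g = lambda s: 0.2 * (1.0 + 0.1 * s)                # an illustrative g in C^1[0, T]
    vol = g(t[:-1]) * np.exp(W_hat[:, :-1])            # f(hat{W}^H_{t_i}, t_i)
    return t[1:], np.cumsum(vol * dW, axis=1)          # Ito sums (X_0 = 0 omitted)

if __name__ == "__main__":
    _, X = simulate_driftless_log_price()
    M = np.max(np.abs(X), axis=1)                      # sup_t |X_t| per path
    print({q: round(float(np.quantile(M, q)), 3) for q in (0.5, 0.9, 0.99, 0.999)})
```

The spread between the reported quantiles of $\sup_{t\le T}|X_t|$ gives a rough numerical feeling for the heavier-than-Gaussian concentration quantified in Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}(3)--(4).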
**Corollary 34**. *Let $H\in(0, \frac{1}{2}]$, $\gamma<\frac{1}{2}$ and $f\in C(\mathbb{R}\times\mathbb{R}^+)$ satisfy Assumption [Assumption 30](#assu:f){reference-type="ref" reference="assu:f"}. Then there exist $C, s_0>0$ such that any iid sample $\{X_k;\, k\in\mathbb{N}\}$ of the driftless log-price $X_\cdot=\int_{0}^{\cdot}f(\widehat{W}^H_r,r)dW_r$ satisfies, for $n\in\mathbb{N}, s>s_0$, $$\mathbb{P}\left[ \left\|\frac{1}{n}\sum_{k=1}^{n}X_k\right\|_{\mathcal{C}^\gamma[0,T]} >s \right]
\leq \exp\left\{-Cns^{\frac{2}{r+1}}\right\}.$$*
*Proof.* By the triangle inequality, $$\mathbb{P}\left[ \left\|\frac{1}{n}\sum_{k=1}^{n}X_k\right\|_{\mathcal{C}^\gamma[0,T]} >s \right] \leq \mathbb{P}\left[ \frac{1}{n}\sum_{k=1}^{n}\left\|X_k\right\|_{\mathcal{C}^\gamma[0,T]} >s \right].$$ The conclusion then follows from Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}(3) and Corollary [Corollary 8](#cor:deviation){reference-type="ref" reference="cor:deviation"} with $d$ being the $\gamma$-Hölder metric, $x_0=0$ and $p=r+1$. ◻
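For a rough empirical look at this deviation bound, the following self-contained sketch simulates iid copies of $X$ under the degree-two polynomial volatility $f(x,t)=1+x^2$ (a case covered by Remark 33), forms sample means, and reports empirical tail frequencies of a discrete $\gamma$-Hölder seminorm. The kernel normalisation, the grid-based `holder_norm` proxy and all parameter values are assumptions of the sketch and carry no claim about the constants $C, s_0$.

```python
import numpy as np

rng = np.random.default_rng(3)
H, T, m, gamma = 0.1, 1.0, 200, 0.4            # grid size m, Hoelder exponent gamma < 1/2
n_sample, n_batches = 50, 400                  # n_sample iid copies per sample mean
t = np.linspace(0.0, T, m + 1)
dt = T / m
K = np.zeros((m + 1, m))                       # assumed Riemann-Liouville kernel on the grid
for i in range(1, m + 1):
    K[i, :i] = np.sqrt(2.0 * H) * (t[i] - t[:i]) ** (H - 0.5)

def holder_norm(path, gamma):
    """Discrete gamma-Hoelder seminorm of a path sampled at t[1:]."""
    diffs = np.abs(path[:, None] - path[None, :])
    gaps = np.abs(t[1:, None] - t[None, 1:]) ** gamma
    np.fill_diagonal(gaps, np.inf)
    return float(np.max(diffs / gaps))

norms = []
for _ in range(n_batches):
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_sample, m))
    W_hat = dW @ K.T                           # hat{W}^H at the grid points
    vol = 1.0 + W_hat[:, :-1] ** 2             # f(hat{W}^H_{t_i}, t_i) with f(x, t) = 1 + x^2
    X = np.cumsum(vol * dW, axis=1)            # n_sample iid copies of the driftless log-price
    norms.append(holder_norm(X.mean(axis=0), gamma))

norms = np.array(norms)
for s in (0.5, 1.0, 2.0):
    print(f"empirical P[ ||n^-1 sum X_k||_{gamma} > {s} ] = {np.mean(norms > s):.3f}")
```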
## TCI for the log-price as a modelled distribution {#subsec: Modelled}
Given a fixed regularity structure $\mathcal{T}$, along with a model $(\Pi, \Gamma)\in\mathcal{M}(\mathcal{T})$, it is possible to consider $\mathcal{T}$-valued functions (or distributions) that can be approximated, up to a given precision, by abstract "Taylor polynomials" consisting of symbols in $\mathcal{T}$. The regularity of such functions can be expressed in terms of the approximation error, similarly to the case of classical polynomials and smooth functions. This concept is captured by the notion of modelled distributions.
**Definition 35**. *Let $\gamma>0$, $\mathcal{T}$ the rough volatility regularity structure and $(\Pi, \Gamma)\in\mathcal{M}_T(\mathcal{T})$. A map $f:[0,T]\rightarrow\mathcal{T}$ is said to be a modelled distribution of order $\gamma$ (written $f\in \mathcal{D}_T^\gamma$) if $$\label{eq:Dgammanorm}
\|f\|_{\mathcal{D}_T^\gamma}:=\sup_{t\in[0,T]}\sup_{ A\ni\beta<\gamma}\big| f(t)\big|_{\beta}+
\sup_{\substack{s\neq t\in[0,T]\\A\ni\beta<\gamma}}\frac{\big| f(t)-\Gamma_{t,s}f(s)\big|_{\beta}}{|t-s|^{\gamma-\beta}}<\infty,$$ where $A$ is the index set of $\mathcal{T}$ and , as above, $|\tau|_{\beta}$ denotes the absolute value of the coefficient of $\tau\in\mathcal{T}$ with degree $\beta$. To emphasise the dependence of the space $\mathcal{D}_T^\gamma$ on the given model, we shall often use the notation $\mathcal{D}_T^\gamma(\Gamma)$. Finally, for $i=1,2$, $(\Pi^i, \Gamma^i)\in\mathcal{M}_T(\mathcal{T})$, $f_i\in\mathcal{D}_T^\gamma(\Gamma^i)$, the distance between $f_1$ and $f_2$ is defined by $$\label{eq:modelleddistance}
\big\|f_1;f_2\big\|_{\mathcal{D}_T^\gamma(\Gamma^1), \mathcal{D}_T^\gamma(\Gamma^2) }:=\sup_{t\in[0,T]}\sup_{ A\ni\beta<\gamma}\big| f_1(t)-f_2(t)\big|_{\beta}+
\sup_{\substack{s\neq t\in[0,T]\\A\ni\beta<\gamma}}\frac{\big| f_1(t)-\Gamma^1_{t,s}f_1(s)-f_2(t)+\Gamma^2_{t,s}f_2(s)\big|_{\beta}}{|t-s|^{\gamma-\beta}}.$$*
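For orientation, it may be useful to recall how [\[eq:Dgammanorm\]](#eq:Dgammanorm){reference-type="eqref" reference="eq:Dgammanorm"} reads in the classical polynomial setting (this uses the polynomial regularity structure rather than $\mathcal{T}$, so it is only an analogy and is not used in what follows): there the symbols are $X^k$ with degree $k$, the model is $\Pi_sX^k(t)=(t-s)^k$ and $\Gamma_{t,s}X^k=\big(X+(t-s)\mathbf{1}\big)^k$. For non-integer $\gamma>0$, a function $F\in\mathcal{C}^\gamma[0,T]$ lifts to the modelled distribution $$f(t)=\sum_{k<\gamma}\frac{F^{(k)}(t)}{k!}X^k,$$ and the second supremum in [\[eq:Dgammanorm\]](#eq:Dgammanorm){reference-type="eqref" reference="eq:Dgammanorm"} then reduces (up to factorials) to the classical Taylor remainder bounds $$\Big|F^{(\ell)}(t)-\sum_{k<\gamma-\ell}\frac{F^{(k+\ell)}(s)}{k!}(t-s)^k\Big|\lesssim|t-s|^{\gamma-\ell},\qquad \ell<\gamma.$$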
**Remark 36**. For a fixed model $(\Pi, \Gamma)\in\mathcal{M}_{T}(\mathcal{T})$, the space $\mathcal{D}_T^\gamma(\Gamma)$ is a linear space and a Banach space with respect to the norm $\|\cdot\|_{\mathcal{D}_T^\gamma(\Gamma)}$ (in general, for a non-compact domain, one obtains a Fréchet space with respect to a family of seminorms parameterised by compacta).
At this point, we have defined the rough volatility regularity structure $\mathcal{T}$ (Section [4.1](#rvolrs){reference-type="ref" reference="rvolrs"}) and studied properties of the Itô model and driftless log-price (Section [4.2](#shiftsec){reference-type="ref" reference="shiftsec"}). One crucial feature of the theory is roughly stated as follows: once lifted to a (random) modelled distribution, the price process can be expressed as a locally Lipschitz continuous map of the underlying noise with probability $1$. The latter is done with the aid of Hairer's reconstruction operator $\mathscr{R}$, which maps modelled distributions to Schwartz distributions or functions. These operations can be summarised by the following diagram
where $X$, the dashed arrow and $\mathscr{L}$ denote the driftless log-price process $\int fdW$ (as in [\[roughvolmodel\]](#roughvolmodel){reference-type="eqref" reference="roughvolmodel"}), the (Itô) solution map and the model lift (Definition [Definition 24](#def:modeltranslation){reference-type="ref" reference="def:modeltranslation"}), respectively. The rest of this section is devoted to the study of the map $\mathscr{D}_f$, which takes as input a (random) Itô model and returns an expansion of the volatility function as a linear combination of elements of $\mathcal{T}$. This (random) modelled distribution has the property that (at least in a local sense) [@bayer2020regularity Lemma 3.23] $$\label{eq:reconstruction}
\mathscr{R}\mathscr{D}_f(\Pi^\xi)=X,$$ and both maps $\mathscr{R}$ and $\mathscr{D}_f$ are continuous with respect to the underlying model in the topology of $\mathcal{M}_{T}(\mathcal{T})$. The main result of this section is Theorem [Theorem 41](#thm:TCImodelled){reference-type="ref" reference="thm:TCImodelled"}.
**Definition 37**. *Let $T>0, H\in(0, \frac{1}{2}], \kappa\in(0, H)$, $M=\mathfrak{M}(H,\kappa)$ as in [\[Mchoice\]](#Mchoice){reference-type="eqref" reference="Mchoice"} and $\mathcal{T}$ be the rough volatility regularity structure. For $f\in \mathcal{C}^{M+1}(\mathbb{R}\times\mathbb{R}^+)$ and $\Pi\in\mathcal{M}_{T}(\mathcal{T})$, let $f^\Pi$ be the modelled distribution (with $*$ denoting convolution) $$f^\Pi(t):=\sum_{k=0}^{M}\frac{1}{k!}\partial^k_1f\big((K^H*\Pi_t\Xi)(t),t\big)I(\Xi)^k,
\quad\text{for } t\in[0,T].$$ The map $\mathscr{D}_f$ is then defined by ($\star$ denotes the formal product of symbols in $\mathcal{T}$) $$\mathcal{M}_{T}(\mathcal{T})\ni\Pi \longmapsto \mathscr{D}_f(\Pi):=f^\Pi\star\Xi\in\bigcup_{\gamma>0}\mathcal{D}_T^\gamma(\Gamma).$$*
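As a toy illustration of Definition 37 (not used anywhere in the proofs), the coefficients of $f^\Pi(t)$ are just rescaled $x$-derivatives of $f$ evaluated along the reconstructed path $K^H*\Pi_t\Xi$. The few lines of Python below make this explicit for the lognormal choice $f(x,t)=g(t)e^x$, where every $x$-derivative equals $f$ itself; the function names, the value fed in for $\widehat{W}^H_t$ and the choice of $g$ are ad hoc.

```python
import math
import numpy as np

def modelled_lift_coefficients(d1f, what_t, t, M):
    """Coefficients of f^Pi(t) = sum_{k<=M} (1/k!) d_1^k f(hat{W}^H_t, t) I(Xi)^k,
    returned as a dict mapping the symbol I(Xi)^k to its coefficient.
    `d1f(k, x, t)` must return the k-th partial derivative of f in its first argument."""
    return {f"I(Xi)^{k}": d1f(k, what_t, t) / math.factorial(k) for k in range(M + 1)}

# Lognormal volatility f(x, t) = g(t) e^x: all x-derivatives coincide with f itself.
g = lambda t: 0.2 * (1.0 + 0.1 * t)                     # illustrative choice of g
d1f = lambda k, x, t: g(t) * np.exp(x)

print(modelled_lift_coefficients(d1f, what_t=0.3, t=0.5, M=3))
```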
The following lemma is used to obtain Theorem [Theorem 41](#thm:TCImodelled){reference-type="ref" reference="thm:TCImodelled"} below and its proof can be found in Appendix [7.3](#Subsection: modelbndlem){reference-type="ref" reference="Subsection: modelbndlem"}. Regarding notation, we use $\partial_{1,2}^{m,n}f$ to denote the $m$-th (resp. $n$-th) order partial derivatives with respect to the first (resp. second) variable of a function $f\in \mathcal{C}^{M+1, 1}(\mathbb{R}\times\mathbb{R}^+)$.
**Lemma 38**. *Let $f\in \mathcal{C}^{M+1, 1}(\mathbb{R}\times\mathbb{R}^+)$, and $0<\gamma<((M+1)(H-\kappa)-\frac{1}{2}-\kappa)\wedge(\frac{1}{2}-\kappa)$.*
1. *If $G:\mathbb{R}\rightarrow\mathbb{R}$ is a non-decreasing continuous function such that for some $C_{f,T}>0$, $$\label{fgrowth}
\sup_{s\in[0,T]}\left|\partial_1^{m}f(x,s)\right| + \sup_{s\in[0,T]}\left|\partial_1^{M+1}f(x,s)\right|
+ \sup_{s\in[0,T]}\left|\partial^{m,1}_{1,2}f\big(x,s\big)\right|
\leq C_{f,T}(1+G(|x|))$$ holds for all $x\in\mathbb{R}$, $m=0,\dots, M$, then there exists $C_{f,T,M}>0$ such that $$\big\|\mathscr{D}_f(\Pi)\big\|_{\mathcal{D}_T^\gamma}\leq 2C_{f,T, M}\bigg[ 1+G\bigg(4\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg]\bigg(1\vee\big\|K^H*\Pi\Xi\big\|^{M+1}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).$$*
2. *Let $N>0$, $P:\mathbb{R}^2\rightarrow\mathbb{R}$ a polynomial of degree $N$ in two variables with $P(0,0)=0$ and assume that there exists $c_{f,T}>0$ such that for all $m=0,\dots, M+1$, $x,y\in\mathbb{R}$, $$\label{unicontcondition}
\sup_{s\in[0,T]}\left|\partial_1^{m}f(x,s)-\partial_1^{m}f(y,s)\right|
+ \sup_{s\in[0,T]}\left|\partial^{m,1}_{1,2}f(x,s)-\partial^{m,1}_{1,2}f(y,s)\right|\leq c_{f,T}P(|y|, |x-y|).$$ Then for $(\Pi^1, \Gamma^1), (\Pi^2, \Gamma^2)\in\mathcal{M}_T(\mathcal{T})$, there exists $C_{M,f,T}>0$ such that $$\begin{aligned}
\big\|\mathscr{D}_f(\Pi^1)&;\mathscr{D}_f(\Pi^2)\big\|_{\mathcal{D}_T^\gamma(\Gamma^1), \mathcal{D}_T^\gamma(\Gamma^2) } \leq C_{f,M, T}
\bigg(1\vee \big\|K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{N+M+1} \bigg)\\
&
\times\bigg(\big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}\vee \big\|K^H*\Pi^2\Xi-K^H\ast\Pi^1\Xi\big\|^{N+M+1}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).
\end{aligned}$$*
The topology of $\mathcal{D}_T^\gamma(\Gamma)$ (and the space itself) depends crucially on the choice of the (random) model. This implies in particular that the concept of a TCI for a $\mathcal{D}_T^\gamma(\Gamma)$-valued random element is not meaningful since both the latter and the space are random. Nevertheless, it is possible to obtain TCIs for random elements that take values on the *total space* $$\label{eqref: MDspace}
\mathcal{M}\ltimes\mathcal{D}^\gamma:=\coprod_{(\Pi, \Gamma)\in \mathcal{M}_T(\mathcal{T}) } \mathcal{D}_T^\gamma(\Gamma)=\bigcup_{(\Pi, \Gamma)\in \mathcal{M}_T(\mathcal{T})}\big\{(\Pi, \Gamma)\big\}\times \mathcal{D}_T^\gamma(\Gamma)$$ A metric on this space can be defined with the help of the distance [\[eq:modelleddistance\]](#eq:modelleddistance){reference-type="eqref" reference="eq:modelleddistance"}. In particular, we define $d^\flat_{\gamma}: (\mathcal{M}\ltimes\mathcal{D}^\gamma)\times (\mathcal{M}\ltimes\mathcal{D}^\gamma)\rightarrow [0,\infty]$ by $$\label{eq:flatmetric}
d^\flat_{\gamma}( f_1, f_2) := \left\VERT p(f_1)-p(f_2)\right\VERT+\|f_1;f_2\|_{\mathcal{D}_T^\gamma(p(f_1)),\mathcal{D}_T^\gamma(p(f_2))},$$ where $p:\mathcal{M}\ltimes\mathcal{D}^\gamma\rightarrow \mathcal{M}_T(\mathcal{T})$ denotes the base space-projection of a modelled distribution to its corresponding model and $\left\VERT\cdot\right\VERT$ is the model metric [\[threebar\]](#threebar){reference-type="eqref" reference="threebar"}.
**Lemma 39**. *Let $\gamma>0$. The map $d^\flat_{\gamma}$ defines a metric on the total space $\mathcal{M}\ltimes\mathcal{D}^\gamma$.*
*Proof.* The fact that $d^\flat_{\gamma}$ is symmetric is immediate from its definition. Moreover, $d^\flat_{\gamma}=0$ if and only if $\left\VERT p(f_1)-p(f_2)\right\VERT=0$ and $\|f_1;f_2\|_{\mathcal{D}_T^\gamma(p(f_1)),\mathcal{D}_T^\gamma(p(f_2))}=0$. Since $\left\VERT\cdot\right\VERT$ is a metric, the first equality is true if and only if $p(f_1)=p(f_2)$. The latter implies that the second equality holds if and only if $f_1=f_2$ (in this case $\|\cdot;\cdot\|$ is in fact a norm from Remark [Remark 36](#rem: Modeltop){reference-type="ref" reference="rem: Modeltop"}). It remains to show that $d^\flat_{\gamma}$ satisfies the triangle inequality. Since $\left\VERT\cdot\right\VERT$ is a metric, we shall only show it for the second term in [\[eq:flatmetric\]](#eq:flatmetric){reference-type="ref" reference="eq:flatmetric"}. Indeed for $f_1,f_2,f_3\in \mathcal{M}\ltimes\mathcal{D}^\gamma, \beta<\gamma, s\neq t\in[0,T]$, $$\big| f_1(t)-f_2(t)\big|_{\beta}\leq \big| f_1(t)-f_3(t)\big|_{\beta}+\big| f_2(t)-f_3(t)\big|_{\beta}$$ and $$\big| f_1(t)-\Gamma^1_{t,s}f_1(s)-f_2(t)+\Gamma^2_{t,s}f_2(s)\big|_{\beta}\leq \sum_{j=1}^{2}\big| f_j(t)-\Gamma^j_{t,s}f_j(s)-f_3(t)+\Gamma^3_{t,s}f_3(s)\big|_{\beta},$$ and the conclusion follows after taking supremum in $s,t,\beta$. ◻
**Remark 40**. Our notation for $d^\flat_\gamma$ is taken from [@varzaneh2022geometry Definition 4.13]. There, an analogous metric has been introduced for the total space of controlled rough paths and called the \"flat\" metric. In that setting, the base space is given by a space of (geometric) rough paths and the fibres are spaces of controlled rough paths.
**Theorem 41**. *Let $\mathcal{T}$ be the rough volatility regularity structure, $(\Pi, \Gamma)$ be the Itô model [\[eq:Itomodel1\]](#eq:Itomodel1){reference-type="eqref" reference="eq:Itomodel1"}-[\[eq:Gamma\]](#eq:Gamma){reference-type="eqref" reference="eq:Gamma"}, $T>0$, $\kappa, H, M, f, \mathscr{D}_f$ as in Definition [Definition 37](#Ddef){reference-type="ref" reference="Ddef"}, $\gamma\in \big(0, ((M+1)(H-\kappa)-\frac{1}{2}-\kappa)\wedge(\frac{1}{2}-\kappa)\big)$ and $d^\flat_\gamma$ as in [\[eq:flatmetric\]](#eq:flatmetric){reference-type="eqref" reference="eq:flatmetric"}. If $f$ satisfies the assumptions of Lemma [Lemma 38](#modelbndlem){reference-type="ref" reference="modelbndlem"}(2), then the law $\mu\in\mathscr{P}(\mathcal{M}\ltimes\mathcal{D}^\gamma)$ of the random element $(\Pi, \Gamma, \mathscr{D}_f(\Pi))$ satisfies $\mathscr{T}_\alpha(c)$ with $c(x,y)=d^\flat_\gamma(x,y)^{\frac{1}{N+M+1}}$ and, for some constant $C>0$, $\alpha(t)=C(t^{2(N+M+1)}\wedge t^2)$.*
*Proof.* Let $h\in\mathcal{H}$, $(\Pi^1, \Gamma^1)=(\Pi, \Gamma)$ (Definition [Definition 24](#def:modeltranslation){reference-type="ref" reference="def:modeltranslation"}), $(\Pi^2, \Gamma^2)=(T_h\Pi, T_h\Gamma)\in\mathcal{M}_T(\mathcal{T})$. From Lemma [Lemma 38](#modelbndlem){reference-type="ref" reference="modelbndlem"}(2) along with the continuous embedding $\mathcal{C}^{H-\kappa}\hookrightarrow\mathcal{H}$ we have $$\begin{aligned}
\big\|\mathscr{D}_f(\Pi^1)&;\mathscr{D}_f(\Pi^2)\big\|_{\mathcal{D}_T^\gamma(\Gamma^1), \mathcal{D}_T^\gamma(\Gamma^2) } \leq C_{f,M,T}
\bigg(1\vee \big\|\widehat{W}^H\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{N+M+1}\bigg)\bigg(\|h\|_{\mathcal{H}}\vee \|h\|^{N+M+1}_{\mathcal{H}}\bigg).
\end{aligned}$$ Combining this estimate with the one obtained from Lemma [\[shiftlem\]](#shiftlem){reference-type="ref" reference="shiftlem"} (with $h_1=0, h_2=h$) and the equivalence of model norms (Remark [Remark 36](#rem: Modeltop){reference-type="ref" reference="rem: Modeltop"}) it follows that $$d^\flat_{\gamma}( \mathscr{D}_f(\Pi^1),\mathscr{D}_f(\Pi^2))\lesssim \| \Pi^2-\Pi^1\|+\|\mathscr{D}_f(\Pi^2);\mathscr{D}_f(\Pi^1)\|_{\mathcal{D}_T^\gamma(\Gamma^1), \mathcal{D}_T^\gamma(\Gamma^2) }\leq K \big(\|h\|_{\mathcal{H}}\vee \|h\|^{N+M+1}_{\mathcal{H}}\big),$$ with $K$ a random variable with finite moments of all orders. Writing $\Pi^1=\mathscr{L}(\xi)$, $\Pi^2=\mathscr{L}(\xi+h)$, letting $x_1=\xi$, $x_2=\xi+h$, $\mathcal{X}=E$, $\mathcal{Y}=\mathcal{M}\ltimes\mathcal{D}^\gamma$, $\Psi=\mathscr{D}_f\circ\mathscr{L}$, $c_{\mathcal{Y}}=c$ and $c_{\mathcal{X}}=c_{\mathcal{H}}$, the last bound translates to $$c\big(\Psi(x_1), \Psi(x_2)\big)\leq L(x_1) \bigg[c_{\mathcal{X}}(x_1, x_2)\vee c_{\mathcal{X}}(x_1, x_2)^{\frac{1}{M+N+1}}\bigg].$$ The conclusion then follows by virtue of Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"}. ◻
**Remark 42**. To the best of our knowledge, the question of whether the metric space $(\mathcal{M}\ltimes\mathcal{D}^\gamma, d^\flat_{\gamma})$, with respect to a given regularity structure, is Polish remains open (see however [@varzaneh2022geometry Theorem 4.18] for a positive answer in the setting of rough paths controlled by geometric rough paths). Nevertheless, we prove Theorem [Theorem 41](#thm:TCImodelled){reference-type="ref" reference="thm:TCImodelled"} using a generalised contraction principle, Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"}, which does not require $\mathcal{Y}$ to be Polish.
We conclude this section with a remark on the reconstruction operator $\mathscr{R}$.
**Remark 43**. In view of [\[eq:reconstruction\]](#eq:reconstruction){reference-type="eqref" reference="eq:reconstruction"}, an alternative route to prove Lemma [\[Itoshiftlem\]](#Itoshiftlem){reference-type="ref" reference="Itoshiftlem"} is to use the local Lipschitz continuity estimates of the reconstruction operator [@hairer2014theory Theorem 3.10, Equation (3.4)]. In summary, for two models $(\Pi^i, \Gamma^i)\in\mathcal{M}_T(\mathcal{T}), i=1,2$ and corresponding reconstruction operators $\mathscr{R}^i: \mathcal{D}_T^\gamma(\Gamma_i) \rightarrow C([0,T];\mathbb{R})$, the following estimate holds for all $\phi\in C_c^\infty(\mathbb{R}), \lambda\in(0, 1], s\in[0,T]$: $$\begin{aligned}
& \bigg|\left\langle\mathscr{R}^2\mathscr{D}_f(\Pi^2)-\Pi^2_s\mathscr{D}_f(\Pi^2)(s)-\mathscr{R}^1\mathscr{D}_f(\Pi^1)+\Pi^1_s\mathscr{D}_f(\Pi^1)(s), \phi_s^\lambda \right\rangle \bigg| \\
&\leq C\left(\left\|\mathscr{D}_f(\Pi^1)
\right\|_{\mathcal{D}_T^\gamma(\Gamma^1)}
\left\|(\Pi^1, \Gamma^1); (\Pi^2, \Gamma^2)\right\|
+ \left\|\Pi^2\right\|\left\|\mathscr{D}_f(\Pi^1);\mathscr{D}_f(\Pi^2)\right\|_{\mathcal{D}_T^\gamma(\Gamma^1), \mathcal{D}_T^\gamma(\Gamma^2) }\right).\end{aligned}$$
This estimate, however, takes into account both the growth of $f$ and the order of Wiener chaos of the random models. The latter may lead to sub-optimal $\mathcal{H}$-continuity estimates for $X$ as follows: Letting $\Pi^2=T_h\Pi, \Pi^1=\Pi$ and invoking Lemmata [\[shiftlem\]](#shiftlem){reference-type="ref" reference="shiftlem"} and [Lemma 38](#modelbndlem){reference-type="ref" reference="modelbndlem"}(2), one sees that $\|\Pi^2\|\lesssim 1+\|h\|_{\mathcal{H}}\vee \|h\|^{M+1}_{\mathcal{H}}$ and $\left\|\mathscr{D}_f(\Pi^1);\mathscr{D}_f(\Pi^2)\right\|_{\mathcal{D}_T^\gamma(\Gamma^1), \mathcal{D}_T^\gamma(\Gamma^2) }\lesssim \|h\|_{\mathcal{H}}\vee\|h\|^{N+1}_{\mathcal{H}}$, where $N$ is the polynomial growth exponent of $f$. This leads to an estimate of the form $$\bigg\|\bigg(\int_{0}^{\cdot}f( \widehat{W}^H_r, r)dW_r\bigg)(\omega+h)- \bigg(\int_{0}^{\cdot}f( \widehat{W}^H_r, r)dW_r\bigg)(\omega)\bigg\|_{\mathcal{C}^\gamma[0,T]}\lesssim \|h\|_{\mathcal{H}}\vee\|h\|^{N+M+1}_{\mathcal{H}},$$ which implies that the tail probabilities $\mathbb{P}[\|fdW\|_{\mathcal{C}^\gamma}>r]$ decay at most like $\exp\{-r^{\frac{2}{N+M+2}}\}$. It is then straightforward to check that, if $f$ is a polynomial of degree $N$, one can obtain a better decay rate of order $\exp\{-r^{\frac{2}{N+1}}\}$ via the arguments of Theorem [Theorem 32](#thm:TCIRV1){reference-type="ref" reference="thm:TCIRV1"}(3).
# TCIs for regularity structures: 2D PAM {#Section:PAM}
In the previous section we introduced some elements of regularity structures for rough volatility and proved TCIs for the Itô model, the driftless log-price and the joint law of the Itô model and the modelled distribution lift. This section is devoted to the proof of TCIs in the setting of another regularity structure, which arises in the study of singular SPDEs. In the sequel, given $\gamma\in\mathbb{R}$, $\mathcal{C}^\gamma$ will denote the space of distributions on $\mathbb{R}^2$, endowed with a seminorm topology similar to that defined in [\[eq:Besovnorm\]](#eq:Besovnorm){reference-type="eqref" reference="eq:Besovnorm"} (in fact it coincides with a local version of the Besov space $B^\gamma_{\infty, \infty}$).
To be precise, we are concerned with the two-dimensional Parabolic Anderson Model (2d-PAM) with periodic boundary conditions, solution to the parabolic SPDE $$\label{eq:PAM}
\partial_tu = \Delta u+u\xi,
\qquad
u(0,\cdot) = u_0(\cdot),$$ where $\xi$ is spatial white noise on the $2$-torus $\mathbb{T}^2$ and $u_0\in \mathcal{C}^\gamma(\mathbb{T}^2)$ for some $\gamma\geq 0$. In general, $d$-dimensional white noise is $-(d/2)^{-}$ regular in the sense that, almost surely, $\xi\in \mathcal{C}^{\beta}$ for any $\beta<-d/2$. On the other hand, since the heat kernel is $2$-regularising, the solution $u$ is expected to live in $\mathcal{C}^{\gamma}$, for any $\gamma<-d/2+2$. Thus, for $d=2$, $\gamma+\beta<0$ and as a result the product $u\xi$ (and a fortiori the concept of solutions) is not classically defined (products of distributions on this scale are well defined when $\gamma+\beta>0$, as in [@bahouri2011fourier Theorem 2.85]). The theory of regularity structures provides a notion of solutions that are defined as limits of properly renormalised equations in which the noise $\xi$ is substituted by a smooth approximation $\xi_\epsilon:=\xi*\rho_\epsilon$, where $\rho_\epsilon$ is a mollifying sequence. In particular, letting $$\label{eq:PAMconstant}
C_\epsilon:=-\frac{1}{\pi}\log(\epsilon)$$ be a divergent renormalisation constant, the solutions of $$\label{eq:PAMrenorm}
\partial_t\tilde{u}_\epsilon=\Delta \tilde{u}_\epsilon+\tilde{u}_\epsilon\big(\xi_\epsilon-C_\epsilon\big),
\qquad
\tilde{u}_\epsilon(0,\cdot)=u_0(\cdot),$$ converge uniformly on compacts of $\mathbb{R}^+\times\mathbb{T}^2$ as $\epsilon\to0$ to a well-defined limit $u\in C^0(\mathbb{R}^+\times\mathbb{T}^2)$, which is then defined to be the solution of [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"}. While a detailed overview of the solution theory of [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} is beyond the scope of this work, we shall provide a few facts that are relevant for the proof of our result, Theorem [Theorem 44](#thm:PAM){reference-type="ref" reference="thm:PAM"}, along with pointers to the literature where necessary.
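Before turning to the regularity structure itself, the claim that spatial white noise on $\mathbb{T}^2$ barely fails to have regularity $-1$ can be probed numerically. The sketch below is only a heuristic: it samples complex Fourier modes directly (ignoring the reality constraint on $\xi$), and the quantity it monitors, $2^{j\beta}\|\Delta_j\xi\|_\infty$ over dyadic Fourier blocks $\Delta_j$, is a grid proxy for the $\mathcal{C}^\beta$ (Besov $B^\beta_{\infty,\infty}$) norm.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512                                               # grid points per direction on T^2
xi_hat = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))   # iid Fourier modes of xi
k = np.fft.fftfreq(N) * N                             # integer frequencies
KX, KY = np.meshgrid(k, k, indexing="ij")
radius = np.sqrt(KX**2 + KY**2)

for beta in (-0.8, -1.0, -1.2):
    vals, j = [], 1
    while 2 ** (j + 1) <= N // 2:
        mask = (radius >= 2**j) & (radius < 2 ** (j + 1))          # dyadic annulus
        block = np.fft.ifft2(np.where(mask, xi_hat, 0.0)) * N**2   # Delta_j xi on the grid
        vals.append(2.0 ** (j * beta) * np.abs(block).max())
        j += 1
    print(f"beta = {beta:+.1f}:", np.round(vals, 1))
```

For $\beta$ above $-1$ the block quantities grow with $j$, while below $-1$ they decay, in line with $\xi\in\mathcal{C}^\beta$ only for $\beta<-d/2=-1$.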
The 2d-PAM regularity structure space $\mathscr{T}$ (detailed in [@hairer2014theory Remark 8.8 and Section 9.1] and in [@cannizzaro2017malliavin Section 3.1]) is generated by the symbols $$\label{eq:PAMbasis}
\mathcal{S} := \mathcal{W}\cup\mathcal{U}
= \Big\{\Xi, I(\Xi)\Xi, X_i\Xi:\; i=1,2\Big\} \cup \Big\{ \mathbf{1}, I(\Xi), X_i:\; i=1,2\Big\},$$ where $X_i$ stands for first degree monomials in each of the spatial variables, $\Xi$ for white noise and $I$ for convolution with the heat kernel. The symbols' degrees are postulated as follows:
| Symbol | $\mathbf{1}$ | $X_i$ | $\Xi$ | $X_i\Xi$ | $I(\Xi)$ | $I(\Xi)\Xi$ |
|:------:|:------------:|:-----:|:-----:|:--------:|:--------:|:-----------:|
| Degree | $0$ | $1$ | $-1-\kappa$ | $-\kappa$ | $1-\kappa$ | $-2\kappa$ |
for $\kappa>0$ small enough. The structure group $G$ is then defined with the help of an additional set of symbols, that encode derivatives of the heat kernel, and its action on the basis vectors of $\mathscr{T}$ has an explicit matrix representation [@cannizzaro2017malliavin Equation (3.3)]. The mollified noise $\xi_\epsilon$ has a canonical model lift $(\Pi_\epsilon, \Gamma_\epsilon)\in\mathcal{M}(\mathcal{T})$ and the topology on the space of models $\mathcal{M}(\mathcal{T})$ is given by the metric $$\label{twobarPAM}
\left\|\Pi^1, \Pi^2\right\| \equiv
\left\|\Pi^1-\Pi^2\right\|
:= \sup_{\substack{\phi\in C_c^2(\mathbb{R}^+\times \mathbb{T}^2), \|\phi\|_{\mathcal{C}^2}\leq 1\\ \mathop{\mathrm{supp}}(\phi)\subset B_{\mathfrak{s}}(0,1),\lambda\in(0,1]\\ \tau\in\mathcal{S}, z\in\mathbb{R}^+\times \mathbb{T}^2 }}
\lambda^{-|\tau|}
\left| \left\langle \left(\Pi^1_{z}-\Pi^2_{z}\right)\tau, \phi^\lambda_z\right\rangle \right|,$$ where for $z=(t,x),w=(s,y)\in\mathbb{R}^+\times\mathbb{T}^2$, $\phi_z^\lambda(w)=\lambda^{-2}\phi(\lambda^{-2}(t-s), \lambda^{-1}(x-y))$ and $B_{\mathfrak{s}}(0,1)$ is the centered unit ball on $\mathbb{R}^+\times \mathbb{T}^2$ endowed with the parabolic distance $|(t,x)-(s,y)|_{\mathfrak{s}}=\sqrt{|t-s|}+|x-y|$ (this scaling is due to the fact that time and space play different roles in parabolic PDEs such as [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"}).
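The degrees in the table above are determined by the additivity rules $\deg(\tau\sigma)=\deg\tau+\deg\sigma$ and $\deg I(\tau)=\deg\tau+2$ (the heat kernel being $2$-regularising), together with $\deg\Xi=-1-\kappa$ and $\deg X_i=1$. A two-line, purely illustrative check:

```python
kappa = 0.01                                   # any sufficiently small kappa > 0
deg = {"1": 0.0, "X_i": 1.0, "Xi": -1.0 - kappa}
deg["I(Xi)"] = deg["Xi"] + 2                   # convolution with the heat kernel adds 2
deg["X_i Xi"] = deg["X_i"] + deg["Xi"]         # degrees add under the formal product
deg["I(Xi) Xi"] = deg["I(Xi)"] + deg["Xi"]
print(deg)                                     # reproduces the table for kappa = 0.01
```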
The need for renormalisation of [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} is linked to convergence properties of the sequence $\{(\Pi_\epsilon, \Gamma_\epsilon): \epsilon>0\}$ of canonical models. In fact, while the latter diverges as $\epsilon\to 0$, it is possible to construct a one-parameter group of transformations $\{M_\epsilon:\epsilon>0\}$ on $\mathcal{M}(\mathcal{T})$, known as the renormalisation group, such that $(\hat{\Pi}_\epsilon, \hat{\Gamma}_\epsilon):=M_\epsilon[(\Pi_\epsilon, \Gamma_\epsilon)]$ converges in probability to a Gaussian model $(\hat{\Pi}, \hat{\Gamma})\in\mathcal{M}(\mathcal{T})$ (see [@hairer2014theory Sections 8.3 and 10.4] for the relevant renormalisation theory). The solutions of [\[eq:PAMrenorm\]](#eq:PAMrenorm){reference-type="eqref" reference="eq:PAMrenorm"} and [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} can then be expressed respectively as $$\label{eq:PAMrecon} \tilde{u}_\epsilon=\hat{\mathscr{R}}_\epsilon\mathscr{S}(u_0, \hat{\Pi}_\epsilon),\qquad u=\mathscr{R}\mathscr{S}(u_0, \hat{\Pi}),$$ where $\mathscr{S}$ (an abstract solution map between spaces of modelled distributions) and $\hat{\mathscr{R}}_\epsilon, \mathscr{R}$ (the reconstruction operators associated to $\hat{\Pi}_\epsilon, \hat{\Pi}$ respectively) are continuous with respect to the underlying models (see e.g. [@cannizzaro2017malliavin Section 3.7] and references therein for a more detailed exposition of the solution theory; note also that for [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} one has global existence, i.e. its explosion time is infinite with probability $1$).
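As a purely numerical illustration of the renormalisation procedure (and in no way a substitute for the convergence theory just cited), one can discretise [\[eq:PAMrenorm\]](#eq:PAMrenorm){reference-type="eqref" reference="eq:PAMrenorm"} directly: sample a grid approximation of $\xi$, mollify it at scale $\epsilon$, subtract $C_\epsilon=-\frac{1}{\pi}\log\epsilon$ and evolve with a splitting scheme. The grid white-noise normalisation, the Gaussian mollifier and the matching of $C_\epsilon$ to that mollifier (finite, mollifier-dependent parts are ignored) are all assumptions of the sketch.

```python
import numpy as np

def renormalised_pam(N=128, eps=0.05, T=0.2, dt=1e-4, seed=1):
    """Heuristic Lie-splitting scheme for the renormalised 2d PAM on [0, 2*pi)^2 with u_0 = 1."""
    rng = np.random.default_rng(seed)
    dx = 2 * np.pi / N
    k = np.fft.fftfreq(N, d=dx) * 2 * np.pi            # integer wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    lap = -(KX**2 + KY**2)                             # Fourier symbol of the Laplacian

    xi = rng.normal(0.0, 1.0, (N, N)) / dx             # grid white noise (std 1/dx per cell)
    rho_hat = np.exp(-0.5 * eps**2 * (KX**2 + KY**2))  # Gaussian mollifier of width eps
    xi_eps = np.real(np.fft.ifft2(np.fft.fft2(xi) * rho_hat))

    C_eps = -np.log(eps) / np.pi                       # divergent constant from eq:PAMconstant
    u = np.ones((N, N))
    heat = np.exp(lap * dt)                            # exact heat flow over one time step
    for _ in range(int(T / dt)):
        u = u * np.exp((xi_eps - C_eps) * dt)          # reaction step
        u = np.real(np.fft.ifft2(np.fft.fft2(u) * heat))
    return u

if __name__ == "__main__":
    for eps in (0.1, 0.05, 0.025):
        u = renormalised_pam(eps=eps)
        print(f"eps = {eps}: median u(T) = {np.median(u):.3f}, max u(T) = {u.max():.3f}")
```

Without the $-C_\epsilon$ term the output would blow up as $\epsilon\downarrow 0$; with it, the statistics should roughly stabilise, up to mollifier-dependent finite shifts that this crude sketch does not track.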
An inspection of the basis elements $\mathcal{S}$ [\[eq:PAMbasis\]](#eq:PAMbasis){reference-type="eqref" reference="eq:PAMbasis"} shows that canonical models for PAM do not enjoy Gaussian concentration. Indeed, the symbol $I(\Xi)\Xi$ is on the second Wiener chaos with respect to $\xi$. Moreover, since [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} exhibits (unbounded) multiplicative noise, a similar observation is true for the solution $u$. In the following theorem we prove appropriate TCIs that reflect this heavier tail behaviour for both of these functionals.
**Theorem 44**. *Let $\mathcal{T}$ be the 2d-PAM regularity structure and $\mathcal{M}(\mathcal{T})$ be the space of canonical models on $\mathcal{T}$ and $f^+=f\vee 0$. The following hold:*
1. *The law $\mu\in\mathscr{P}(\mathcal{M}(\mathcal{T}))$ of the limiting model $(\hat{\Pi}, \hat{\Gamma})$ satisfies $\mathscr{T}_\alpha(c)$ with $c(\Pi^1, \Pi^2)=\|\Pi^1-\Pi^2\|^{1/2}$ and, for a constant $C>0$, $\alpha(t)=C (t\wedge t^2)$.*
2. *Let $u, v\in \mathcal{C}(\mathbb{R}^+\times\mathbb{T}^2)$ solve [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} and $$\partial_tv=\Delta v+2v\xi,\;\;v(0,\cdot)=1$$ respectively. For all $(t,x)\in\mathbb{R}^+\times\mathbb{T}^2$ such that $|v(t,x)|^{\frac{1}{2}}\in L^1(\Omega)$, the law $\mu\in \mathscr{P}(\mathbb{R})$ of $(\log|u(t,x)|)^+$ satisfies Talagrand's $\mathscr{T}_1(C)$.*
*Proof.* Throughout the proof we set $\mathcal{H}=L^2(\mathbb{T}^2)$, the Cameron-Martin space of $\xi$.
1. Let $\phi\in C_c^2(\mathbb{R}^+\times \mathbb{T}^2)$ be a test function with support on the unit ball $B_{\mathfrak{s}}(0,1)$. In view of [@hairer2014theory Theorem 10.7, Equation (10.7), Proposition 10.11], we have the following Wiener chaos decomposition: for all $\tau\in\mathcal{S}, z=(t,x)\in \mathbb{R}^+\times \mathbb{T}^2, \lambda\in(0,1)$, $$\left\langle\hat{\Pi}_z\tau, \phi_{z}^{\lambda} \right\rangle
= \sum_{k\leq \#(\tau)}I_k\bigg( \int_{\mathbb{R}^+\times \mathbb{T}^2}\phi_z^\lambda(y)S_z^{\otimes k}(\hat{\mathfrak{W}}^k\tau)(y)dy\bigg),$$ where for each $k,y$, $(\hat{\mathfrak{W}}^k\tau)(y)\in\mathcal{H}^{\otimes k}$ is a function (or distribution) on $k$ copies of $\mathbb{T}^2$ denoted by $(\hat{\mathfrak{W}}^k\tau)(y; x_1,\dots, x_k)$, $S_z\phi(y)=\phi(y-x)$, $\phi\in\mathcal{H}$ and $\#(\tau)$ is the number of occurrences of the symbol $\Xi$ in $\tau$. The map $I_k: \mathcal{H}^{\otimes k}\rightarrow L^2(\Omega)$ can be thought of as a multiple Wiener integral with respect to the white noise $\xi$. In particular, for $f\in\mathcal{H}^{\otimes k}$ one can write [@nualart2006malliavin Section 1.1.2] $$I_k(f)=\langle f, \xi^{\otimes k}\rangle:=\int_{(\mathbb{T}^2)^k} f(x_1,\dots, x_k)d\xi(x_1)\dots d\xi(x_k).$$ For the sequel, we shall focus on the symbol $\tau=I(\Xi)\Xi$ since this is the one which determines the tail behaviour of $\hat{\Pi}$. Similar but simpler arguments then hold for the symbols that correspond to a Wiener chaos of lower order and thus will be omitted. In this case we have $\#(\tau)=2$ and, in light of [@hairer2014theory Theorem 10.19], $$\begin{aligned}
\langle\hat{\Pi}_z\tau, \phi_{z}^{\lambda} \rangle=I_0\bigg( \int_{\mathbb{R}^+\times \mathbb{T}^2}\phi_z^\lambda(y)(\hat{\mathfrak{W}}^0\tau)(y)dy\bigg)+I_2\bigg( \int_{\mathbb{R}^+\times \mathbb{T}^2}\phi_z^\lambda(y)S_{z}^{\otimes 2}(\hat{\mathfrak{W}}^2\tau)(y)dy\bigg)
\end{aligned},$$ where the first term is a non-random constant, say $c$, and $\hat{\mathfrak{W}}^0$ can be explicitly specified in terms of the heat kernel [@hairer2014theory Theorem 10.19]. Letting $h\in\mathcal{H}$ and denoting the argument of $I_2$ by $f_{\lambda, \tau, z}$, then $$\begin{aligned}
\langle T_h\hat{\Pi}_z\tau, \phi_{z}^{\lambda} \rangle-c
& =\langle f_{\lambda, \tau, z}, (\xi+h)^{\otimes 2}\rangle\\
& = \langle f_{\lambda, \tau, z}, \xi^{\otimes 2}\rangle+ \langle f_{\lambda, \tau, z}, h^{\otimes 2}\rangle+\langle f_{\lambda, \tau, z}, h\otimes\xi\rangle+\langle f_{\lambda, \tau, z}, \xi\otimes h\rangle.\end{aligned}$$ Thus $$\begin{aligned}
\langle (T_h\hat{\Pi}_z-\hat{\Pi}_z)I(\Xi)\Xi, \phi_{z}^{\lambda} \rangle= \langle f_{\lambda, \tau, z}, h^{\otimes 2}\rangle+
\langle f_{\lambda, \tau, z}, h\otimes\xi\rangle+
\langle f_{\lambda, \tau, z}, \xi\otimes h\rangle.
\end{aligned}$$ The first term on the right-hand side is deterministic and can be bounded by $\|f_{\lambda, \tau, z}\|_{\mathcal{H}}\|h\|^2_{\mathcal{H}}$ by Cauchy-Schwarz. For the other two terms, stochastic Fubini and Cauchy-Schwarz imply $$\begin{aligned}
\int f_{\lambda, \tau, z}(x_1, x_2)h(x_1)dx_1d\xi(x_2)&=\int I_1\bigg(f_{\lambda, \tau, z}(x_1, \cdot)\bigg)h(x_1) dx_1\\&\leq \|h\|_\mathcal{H}\big\|I_1\big(f_{\lambda, \tau, z}(\bullet, \cdot)\big)\big\|_{\mathcal{H}}
\end{aligned}$$ and the symmetric bound $$\int f_{\lambda, \tau, z}(x_1, x_2)d\xi(x_1)h(x_2)dx_2\leq \|h\|_\mathcal{H}\big\|I_1\big(f_{\lambda, \tau, z}(\cdot, \bullet)\big)\big\|_{\mathcal{H}},$$ where $\bullet$ indicates the variable that is integrated with respect to Lebesgue measure. Putting these estimates together, it follows that $$\begin{aligned}
|\langle (T_h\hat{\Pi}_z&-\hat{\Pi}_z)I(\Xi)\Xi, \phi_{z}^{\lambda} \rangle|\\&\leq
\bigg( \big\|I_1\big(f_{\lambda, \tau, z}(\cdot, \bullet)\big)\big\|_{\mathcal{H}}+\big\|I_1\big(f_{\lambda, \tau, z}(\bullet, \cdot)\big)\big\|_{\mathcal{H}} + \|f_{\lambda, \tau, z}\|_{\mathcal{H}}
\bigg)\big(\|h\|_{\mathcal{H}}\vee\|h\|^2_{\mathcal{H}}\big).
\end{aligned}$$
By Itô-Wiener isometry and the estimates of [@hairer2014theory Proposition 10.11], the $L^2(\Omega)$-norms of the random prefactors on the right-hand side are bounded (up to a constant) by $\|f_{\lambda, \tau, z}\|_{\mathcal{H}}$, which in turn is bounded by $\lambda^{\kappa+2|\tau|}$. Finally, from [@hairer2014theory Theorem 10.7, (10.4), Proposition 3.32], we conclude that, for a random constant $L$ with finite moments of all orders, $$\|T_h\hat{\Pi}-\hat{\Pi}\|\leq L\big(\|h\|_{\mathcal{H}}\vee\|h\|^2_{\mathcal{H}}\big).$$ In view of Lemma [Lemma 11](#Lem: contraction1){reference-type="ref" reference="Lem: contraction1"} the proof is complete.
2. A solution theory for a shifted version of [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} in the direction of a Cameron-Martin space element $h\in\mathcal{H}$ was developed in [@cannizzaro2017malliavin Theorem 3.26]. Thus, solutions of $$\label{eq:PAMh}
\partial_tu^h=\Delta u^h+u^h(\xi+h), \;\; u^h(0,\cdot)=u_0(\cdot)$$ can be understood as limits, as $\epsilon\to 0$, of the renormalised equations $$\label{eq:PAMhren}
\partial_t\tilde{u}^{h_\epsilon}_\epsilon=\Delta \tilde{u}^{h_\epsilon}_\epsilon+\tilde{u}^{h_\epsilon}_\epsilon\big(\xi_\epsilon+h_\epsilon-C_\epsilon\big), \;\; \tilde{u}^{h_\epsilon}_\epsilon(0,\cdot)=u_0(\cdot),$$ where $h_\epsilon=h*\rho_\epsilon$ and $C_\epsilon$ is as in [\[eq:PAMconstant\]](#eq:PAMconstant){reference-type="eqref" reference="eq:PAMconstant"}. Moreover, one can write $\tilde{u}^{h_\epsilon}_\epsilon:=\mathscr{R}^{M_\epsilon}\mathscr{S}(u_0, M_\epsilon T_h\Pi_\epsilon)$ similarly to [\[eq:PAMrecon\]](#eq:PAMrecon){reference-type="eqref" reference="eq:PAMrecon"}.
Proceeding to the main body of the proof, let $T>0, \epsilon<1$ and $(t,x)\in (0, T]\times\mathbb{T}^2$. Due to the linearity of [\[eq:PAMhren\]](#eq:PAMhren){reference-type="eqref" reference="eq:PAMhren"}, we obtain via the Feynman-Kac formula $$\begin{aligned}
\tilde{u}^{h_\epsilon}_\epsilon(\tau,y)=\mathbb{E}^y_{B}\bigg[\int_{\tau}^Tu_0(B_r)\exp\bigg(-\int_{\tau}^{r}\big\{ h_\epsilon(B_s)+\xi_\epsilon(B_s)-C_\epsilon\big\}ds \bigg) dr \bigg],
\end{aligned}$$ where $\tau=T-t, y\in\mathbb{T}^2$ and $B$ is a Brownian motion independent of $\xi$. From Cauchy-Schwarz inequality and the fact that $C_\epsilon>0$, we obtain $$\label{eq:Ferniqueprelim}
\begin{aligned}
\big|\tilde{u}^{h_\epsilon}_\epsilon(\tau, y)\big|& \leq\mathbb{E}^y_{B}\bigg[\int_{\tau}^Tu^2_0(B_r)\exp\bigg(-2\int_{\tau}^{r} h_\epsilon(B_s)ds \bigg) dr \bigg]^{\frac{1}{2}}\\&\times\mathbb{E}^y_B\bigg[\int_{\tau}^T\exp\bigg(-2\int_{\tau}^{r}\big\{ \xi_\epsilon(B_s)-2C_\epsilon\big\}ds \bigg) dr \bigg]^{\frac{1}{2}}\\&\leq |v_1^{h_\epsilon}(\tau,y)|^{\frac{1}{2}}|v_2^{\epsilon}(\tau,y)|^{\frac{1}{2}},
\end{aligned}$$ where, from another application of Feynman-Kac, $v_1^{h_\epsilon}$ solves $$\partial_tv^{h_\epsilon}_1=\Delta v^{h_\epsilon}_1+2v^{h_\epsilon}_1h_{\epsilon},
\qquad
v^{h_\epsilon}_1(0,\cdot)=u^2_0(\cdot),$$ and $$\partial_tv^{\epsilon}_2=\Delta v^{\epsilon}_2+2v^{\epsilon}_2\big(\xi_\epsilon-2C_\epsilon\big),
\qquad
v^{\epsilon}_2(0,\cdot)=1.$$ As $\epsilon\to 0$, $v^{h_\epsilon}_1$ converges uniformly in $(\tau, y)$ to the smooth solution of the well-posed parabolic PDE $$\partial_tv^{h}_1=\Delta v^{h}_1+2v^{h}_1h,
\qquad
v^{h}_1(0,\cdot)=u^2_0(\cdot).$$ Writing the solution in mild formulation and applying Grönwall's inequality, $$\| v^{h}_1(t, \cdot)\|_{C(\mathbb{T}^{2})}\leq C\|u_0\|^2_{C(\mathbb{T}^{2})}\bigg(\frac{T}{\tau}\bigg) e^{2\|h\|_{L^2}t}.$$ Moreover, from the renormalisation theory of [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} [@cannizzaro2017malliavin Theorem 3.24], $v^{\epsilon}_2$ converges to a well-defined limit $v_2(t,x)$ as $\epsilon\to 0$. Thus, taking $\epsilon\to 0$ in [\[eq:Ferniqueprelim\]](#eq:Ferniqueprelim){reference-type="eqref" reference="eq:Ferniqueprelim"}, we obtain $$\big|u^{h}(\tau, y)\big| \leq C|v_2(\tau,y)|^{\frac{1}{2}}\|u_0\|_{C(\mathbb{T}^{2})}\bigg(\frac{T}{\tau}\bigg) e^{\|h\|_{L^2}T},$$ which implies that $$\label{eq:Fernique}
\begin{aligned}
\big(\log\big|u^{h}(\tau, y)\big|\big)^+& \leq\log\bigg(1+C|v_2(\tau,y)|^{\frac{1}{2}}\|u_0\|_{C(\mathbb{T}^{2})}\bigg(\frac{T}{\tau}\bigg)\bigg)+ T\|h\|_{L^2}.
\end{aligned}$$ Since $\big(\log|u^{h}(\tau, y)|\big)^+=\big(\log|\mathscr{R}\mathscr{S}(u_0, T_h\hat{\Pi})(\tau, y)|\big)^+=:\mathscr{G}_{\tau, y}(\xi+h)$ where
$$\mathscr{G}_{\tau,y}: (\mathcal{C}^{-1-\kappa}, \mathcal{H}, \gamma)\rightarrow \mathbb{R}$$ is a measurable map from the Wiener space of the noise, we can rewrite [\[eq:Fernique\]](#eq:Fernique){reference-type="eqref" reference="eq:Fernique"} as $$\begin{aligned}
\mathscr{G}_{\tau, y}(\xi+h)& \leq\log\bigg(1+C|v_2(\tau,y)|^{\frac{1}{2}}\|u_0\|_{C(\mathbb{T}^{2})}\bigg(\frac{T}{\tau}\bigg)\bigg)+ T\|h\|_{L^2}.
\end{aligned}$$ By assumption, the non-negative random variable $$\label{condition}
X_{\tau,y}:=\log\bigg(1+C|v_2(\tau,y)|^{\frac{1}{2}}\|u_0\|_{C(\mathbb{T}^{2})}\bigg(\frac{T}{\tau}\bigg)\bigg)\in L^1(\Omega),$$ and appealing to the generalised Fernique theorem [@friz2020course Theorem 11.7], we deduce that $\mathscr{G}_{\tau,y}$ has Gaussian tails. In particular, letting $\Phi$ denote the cumulative distribution function of a standard Normal distribution, then for all $a>0$ and $r>a$, $$\mathbb{P}\big[\mathscr{G}_{\tau,y}>r\big]
\leq\exp\bigg\{ -\frac{1}{2}\left(\hat{a}+\frac{r-a}{T}\right)^2\bigg\},$$ with $\hat{a}=\Phi^{-1}(P_a)=\Phi^{-1}\big(\mathbb{P}[X_{\tau,y}\leq a ] \big)$, and hence for some $\lambda>0$ sufficiently small, $$\mathbb{E}\left[ \exp\left(\lambda \mathscr{G}_{\tau,y}^2 \right)\right] < \infty.$$ The proof follows by noting that the latter is equivalent to Talagrand's $\mathscr{T}_1(C)$ (see [@10.1214/ECP.v11-1198 Theorem 1.13] with $\mathcal{X}=\mathbb{R}$ equipped with the standard metric and $\alpha(t)=t^2$).
◻
**Remark 45**. Theorem [Theorem 44](#thm:PAM){reference-type="ref" reference="thm:PAM"}(2) implies that, with enough integrability, the solution to [\[eq:PAM\]](#eq:PAM){reference-type="eqref" reference="eq:PAM"} has pointwise log-normal tails: with the notation from the proof, for all $a>0, r>e^a$, $$\mathbb{P}\big[ |u(\tau, y)|>r \big] \leq\exp\left\{ -\frac{1}{2}\left[\hat{a}+\frac{\log(r)-a}{T}\right]^2\right\}.$$
**Remark 46**. Assumption [\[condition\]](#condition){reference-type="eqref" reference="condition"} is satisfied for example when $\tau$ is sufficiently small for all $y\in\mathbb{T}^2$. Indeed, as shown in [@gu2018moments], the solution of the 2d PAM has finite moments of all orders for small $t$. Moreover, from [@matsuda2022integrated], $\mathbb{E}|u(t, 0)|$ explodes when $t$ is sufficiently large.
# Weighted logarithmic Sobolev inequalities {#Section:WLSIs}
This section can be read independently from the rest of this work and is devoted to weighted logarithmic Sobolev inequalities ($\mathrm{WLSIs}$) for Gaussian functionals. In particular, we consider functionals $\Psi$, defined on an abstract Wiener space $(\Omega, \mathcal{H}, \gamma)$, that take values in a finite-dimensional vector space. We include this analysis here for the following reasons: 1) As explained in the introduction, the tools for obtaining such inequalities are similar in flavour to the ones we used to obtain TCIs: instead of studying $\mathcal{H}$-continuity properties for the functionals of interest, WLSIs rely on $\mathcal{H}$-differentiability properties. 2) WLSIs imply Talagrand's 2-TCI with respect to a weighted metric on $\mathbb{R}^m$ (Theorem [Theorem 49](#thm:WLSIconsequences){reference-type="ref" reference="thm:WLSIconsequences"}(2)).
First we extend a contraction principle for $\mathrm{WLSIs}$ proved in [@bartl2020functional] and present some implications of WLSIs. Then, we leverage tools from Malliavin calculus to show that a wide class of Gaussian functionals satisfy $\mathrm{WLSIs}$ with appropriate weights.
**Definition 47**. *A probability measure $\mu\in\mathscr{P}(\mathbb{R}^m)$ satisfies $\mathrm{WLSI}(G)$ with $G:\mathbb{R}^m\rightarrow[0,\infty]$ if there exists a constant $C>0$ such that $$Ent_\mu(f^2):=\int_{\mathbb{R}^m}f^2\log\bigg(\frac{f^2}{\int f^2}\bigg)d\mu\leq
2C\int_{\mathbb{R}^m}|\nabla f|^2 G d\mu$$ holds for all differentiable $f:\mathbb{R}^m\rightarrow \mathbb{R}$ for which the right-hand side is finite.*
**Proposition 48** (WLSI contraction principle). *Let $(\Omega, \mathcal{H}, \gamma)$ be an abstract Wiener space. Let $\Psi: \Omega\rightarrow \mathbb{R}^m$ be Malliavin differentiable and $G:\mathbb{R}^m\rightarrow[0, \infty]$ such that $$|D\Psi|^2_{\mathcal{H}}\leq c G(\Psi)$$ holds $\gamma$-a.s. for a constant $c>0$. Then*
- *$\mu=\gamma\circ\Psi^{-1}$ satisfies $\mathrm{WLSI}(2cG)$;*
- *For $i=1,\dots, m$, let $\mu_i$ denote the marginal of $\mu$ on the coordinates other than the $i$-th (i.e. with the $i$-th coordinate integrated out). If $G\in L^1(\mu)$, then there exists a measurable function $G_i:\mathbb{R}^{m-1}\rightarrow[0,\infty]$ such that $\mu_i$ satisfies $\mathrm{WLSI}(G_i)$.*
*Proof.* (i) Let $f:\mathbb{R}^m\rightarrow\mathbb{R}$ be differentiable. Since $\Psi$ is Malliavin differentiable, so is $f(\Psi)$, and moreover $Df(\Psi)=\nabla f(\Psi) D\Psi$. Recalling [@gross1975logarithmic] that $\gamma$ satisfies $\mathrm{LSI}(2)$, we have $$\begin{aligned}
Ent_{\mu}(f^2)= Ent_{\gamma}((f\circ\Psi)^2)
&\leq 2\int_{\Omega }\big\|Df(\Psi)\big\|^2_{\mathcal{H}}d\gamma\\&\leq2\int_{\Omega}|\nabla f(\Psi)|^2\big\|D\Psi\big\|^2_{\mathcal{H}}d\gamma\leq 2c\int_{\Omega}|\nabla f(\Psi)|^2G(\Psi)d\gamma.
\end{aligned}$$ (ii) Without loss of generality, we shall prove the inequality for $i=1$. Since $\mu$ is a Borel probability measure on a Polish space, we can apply the disintegration theorem [@dupuis2011weak Theorem A.5.4] to write $$\mu(dx)=\pi(dx_1| x_2, \dots, x_{m})\mu_1(dx_2,\dots, dx_{m}),$$ where $\pi$ is a stochastic kernel on the first coordinate, conditional on $(x_2,\dots, x_{m})$. Now, for any $f=f(x_2,\dots,x_{m})$, the log-Sobolev inequality from $(i)$, applied to the function $\tilde{f}(x_1,\dots, x_m):=f(x_2,\dots, x_m)$, yields $$\begin{aligned} Ent_{\mu_1}(f^2)&=\int_{\mathbb{R}^{m-1}}f^2\log\bigg(\frac{f^2}{\int f^2}\bigg)\bigg(\int_{\mathbb{R}}\pi(dx_1|x_2,\dots, x_m)\bigg)d\mu_1 \\&=
Ent_{\mu}(\tilde{f}^2)\\&\leq 2c\int_{\mathbb{R}^{m}}\big|\nabla\tilde{f}|^2Gd\mu\\&=2c\int_{\mathbb{R}^{m}}\big|\nabla f(x_2, \dots, x_m)\big|^2G(x_1, \dots, x_m)d\mu(x_1, \dots, x_m)\\&
=2c\int_{\mathbb{R}^{m-1}}\big|\nabla f(x_2, \dots, x_m)\big|^2G_1(x_2, \dots, x_m)d\mu_1(x_2, \dots, x_m),
\end{aligned}$$ where $G_1:=\int_{\mathbb{R}}G(x_1,x_2, \dots, x_m)\pi(dx_1|x_2,\dots,x_{m})$ is finite $\mu_1$-a.e. since $G\in L^1(\mu)$. ◻
Setting $G\equiv1$ we recover the contraction principle from [@bartl2020functional Lemma 6.1] for Lipschitz transformations of the Wiener measure. Our generalisation covers non-linear transformations with polynomial growth and, in particular, allows us to prove weighted $\mathrm{LSIs}$ for functionals of elements in the $m$-th Wiener chaos over $\Omega$. The following theorem summarises some useful consequences of WLSIs:
**Theorem 49**. *Let $\mu\in\mathscr{P}(\mathbb{R}^m)$ satisfy $\mathrm{WLSI}(G)$ and $d_{G}$ denote the weighted Riemannian distance associated to $G$, i.e. for all $x,y\in\mathbb{R}^m$ and $C_{xy}$ the set of all absolutely continuous paths $\gamma:[0,1]\rightarrow \mathbb{R}^m$ with $\gamma(0)=x, \gamma(1)=y$, $$d_{G}(x,y):=\inf_{\gamma\in C_{xy}}\int_{0}^{1}\sqrt{G^{-1}\big(\gamma(t)\big)}\,|\gamma'(t)|\,dt.$$ Then the following hold:*
1. *(Bobkov-Ledoux [@bobkov1999exponential]) If $G\in L^p(\mu)$, for some $p\geq 2$, then for all $f:\mathbb{R}^m\rightarrow\mathbb{R}$, $\mu$-centered and $1$-Lipschitz one has $\|f\|_{L^p(\mu)}\leq \sqrt{p-1}\|G\|_{L^p(\mu)}$.*
2. *(Cattiaux-Guillin-Wu [@cattiaux2011some]) $\mu\in\mathscr{T}_2(1)$ with respect to $d_{G/2}$.*
**Remark 50**. i) Without loss of generality, it suffices to assume that the weight $G$ is strictly positive, so that the metric $d_G$ above is well-defined. ii) In view of Theorem [Theorem 49](#thm:WLSIconsequences){reference-type="ref" reference="thm:WLSIconsequences"}(1), WLSIs are weaker functional inequalities compared to TCIs. In particular, WLSIs only imply finiteness of moments, while TCIs (Proposition [Proposition 4](#alphaprop){reference-type="ref" reference="alphaprop"}) imply finiteness of exponential moments for a measure of interest.
A wide class of Gaussian functionals whose law satisfies a $\mathrm{WLSI}$ is given below.
**Example 51**. (Polynomial functionals of Gaussian processes) Let $T>0$ and $\gamma$ be the law of a one-dimensional continuous, non-degenerate Gaussian process $X$ on $\mathcal{C}_0[0,T]$, $\mathcal{H}$ its Cameron-Martin space and $\mathcal{H}'$ the Hilbert space of deterministic integrands with respect to $X$. For $\{h_k\}_{k=1}^{m}\subset\mathcal{H}'$, let $X(h_k)$ denote the Wiener integral $\int h_kdX$ and consider the random vector $$\Psi:=\Big(X^{p_1}(h_1),\dots, X^{p_m}(h_m)\Big),$$ for some $p_1,\ldots, p_m\geq 1$. The functional $\Psi$ is Malliavin differentiable with $$D\Psi = \Big(p_1 X^{p_1-1}(h_1)i(h_1), \dots, p_m X(h_m)^{p_m-1}i(h_m)\Big),$$ where $i: \mathcal{H}' \rightarrow \mathcal{H}$ is a Hilbert-space isometry and $$\|D\Psi\|_{\mathcal{H}}
\leq \left(\max_{k=1,\dots, m}{p_k}\|i(h_k)\|_{\mathcal{H}}\right)\sum_{k=1}^{m}\left|X(h_k)^{p_k-1}\right|
\leq C\left(1+\sum_{k=1}^{m}\left|X(h_k)^{p_k}\right|\right).$$ Thus, in view of Proposition [Proposition 48](#Prop:logSobolev){reference-type="ref" reference="Prop:logSobolev"} the law $\mu$ of $\Psi$ satisfies $\mathrm{WLSI}(G)$ with $G(x)=(1+|x|_{\ell^1})^{2}$. Applying the same proposition, along with the product rule for Malliavin derivatives, it is straightforward to deduce that any polynomial in $m$ variables of $\Psi$ satisfies a $\mathrm{WLSI}$ with an appropriate weight function.
**Remark 52**. For a standard Wiener process $X$, we have $\mathcal{H}=H^1_0[0,T]$, i.e. the space of absolutely continuous functions $f$ with a square-integrable weak derivative and $f(0)=0$, $\mathcal{H}'=L^2[0,T]$ and $i(h)=\int_{0}^{\cdot}h_sds$.
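A quick Monte Carlo sanity check of the weighted inequality in the simplest instance of Example 51 ($m=1$, $p_1=2$, so $\Psi=X(h)^2$ and $G(x)=(1+|x|)^2$) is sketched below. The value of $\|h\|_{L^2}$ and the test functions are arbitrary choices, and the constant used is the $c$ appearing in the bound $\|D\Psi\|^2_{\mathcal{H}}\le c\,G(\Psi)$, i.e. $c=4\|h\|^2_{L^2}$, so the inequality checked is the one delivered by the proof of Proposition 48(i).

```python
import numpy as np

rng = np.random.default_rng(0)
h_norm = 0.7                                   # ||h||_{L^2[0,T]} for some fixed h
Z = rng.normal(size=10**6)
Psi = (h_norm * Z) ** 2                        # Psi = X(h)^2 with X(h) ~ N(0, ||h||^2)

c = 4 * h_norm**2                              # ||D Psi||_H^2 = 4 ||h||^2 Psi <= c (1 + |Psi|)^2
G = (1.0 + np.abs(Psi)) ** 2                   # weight from Example 51

def entropy(w):
    """Ent_mu(w) = E[w log w] - E[w] log E[w], estimated from samples of w > 0."""
    m = w.mean()
    return np.mean(w * np.log(w)) - m * np.log(m)

tests = [(lambda y: 2.0 + np.sin(y), lambda y: np.cos(y)),
         (lambda y: 1.0 / (1.0 + y), lambda y: -1.0 / (1.0 + y) ** 2)]

for f, df in tests:
    lhs = entropy(f(Psi) ** 2)
    rhs = 2 * c * np.mean(df(Psi) ** 2 * G)
    print(f"Ent_mu(f^2) = {lhs:.4f}  <=  2c E[|f'|^2 G] = {rhs:.4f}")
```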
**Example 53**. (Gaussian rough paths) Let $T>0$ and $X$ be a $d$-dimensional continuous Gaussian process on $[0,T]$ that lifts to an $\alpha$-Hölder geometric rough path $\mathbf{X}=(X, \mathbb{X})$ (Definition [Definition 13](#dfn:RP){reference-type="ref" reference="dfn:RP"}). Here, for each $0\leq s\leq t\leq T$, $\mathbb{X}_{s,t}=\int_{s}^{t}(X_r-X_s)\otimes dX_r$. For fixed $s,t$ consider the $\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}^{d\otimes d}$-valued functional $$\Psi=(\Psi_1, \Psi_2, \Psi_3)=\big(\|X\|_{\mathcal{C}^\alpha}, \mathbf{X}_{s,t}\big)=\big(\|X\|_{\mathcal{C}^\alpha}, X_{s,t}, \mathbb{X}_{s,t}\big),$$ defined on the abstract Wiener space $(\mathcal{C}^\alpha([0,T]; \mathbb{R}^d), \mathcal{H}, \gamma)$, where, as in the previous example, $\gamma, \mathcal{H}$ denote the law of $X$ on $\mathcal{C}^\alpha([0,T]; \mathbb{R}^d)$ and its Cameron-Martin space, respectively.
Regarding the Malliavin differentiability of the first component, note that $\|\cdot\|_{\mathcal{C}^\alpha}$ is $\mathcal{H}$-Lipschitz continuous in the sense of [@enchev1993rademacher] (see also [@nualart2006malliavin Exercise 1.2.]). Hence, it is indeed differentiable and the triangle inequality gives the estimate $$\|D\Psi_1\|_{\mathcal{H}}=\|D\|X\|_{\mathcal{C}^\alpha}\|_{\mathcal{H}}\leq 1.$$
Turning to $\Psi_3$, we show that it is $\mathcal{H}$-continuously Fréchet differentiable. To this end, let $\epsilon>0$, $h\in\mathcal{H}$, $\omega\in\Omega$ and note that $$\frac{\mathbb{X}_{s,t}(\omega+\epsilon h)-\mathbb{X}_{s,t}(\omega)}{\epsilon} = \int_{s}^{t}X_{s,r}\otimes dh_r+\int_{s}^{t} h_{s,r}\otimes dX_r+\epsilon\int_{s}^{t} h_{s,r}\otimes dh_r,$$ where all the terms on the right-hand side are well-defined Young integrals. Hence the directional derivative $$D^h\mathbb{X}_{s,t}:=\frac{d}{d\epsilon}\mathbb{X}_{s,t}(\omega+\epsilon h)\bigg|_{\epsilon=0}$$ exists and is linear in $h$ by linearity of the integrals. Moreover, standard Young estimates along with complementary Cameron-Martin regularity furnish $$\big|D^h\mathbb{X}_{s,t}\big|\leq C\|X\|_{\mathcal{C}^{\alpha}}\|h\|_{\mathcal{H}},$$ where $C>0$ is a constant that depends on the embedding $\mathcal{H}\hookrightarrow \mathcal{C}^{q-var}$, for all $q$ such that $1/q+1/(2\rho)>1$ and $\alpha<1/(2\rho)$ (for more details on such estimates we refer the interested reader to [@friz2010generalized Section 2.2]). As a result, the linear operator $\mathcal{H}\ni h\mapsto D^h\mathbb{X}_{s,t}\in\mathbb{R}^{d\otimes d}$ is bounded $\gamma$-a.e., which in particular means that $\mathbb{X}_{s,t}$ is $\mathcal{H}$-continuously Fréchet differentiable. Combining the latter with the square-integrability of $\mathbb{X}_{s,t}$, we deduce from [@nualart2006malliavin Lemma 4.1.2 and Proposition 4.1.3] that $\Psi_{3}$ is Malliavin differentiable with $$\|D\Psi_3\|_{\mathcal{H}}=\sup_{\|h\|_{\mathcal{H}}\leq 1}|D^h\Psi_3|\leq C\|X\|_{\mathcal{C}^\alpha}.$$ The same property holds trivially for $\Psi_2$ since $h\mapsto X_{s,t}+h_{s,t}$ is Lipschitz continuous and hence $\|D\Psi_2\|_{\mathcal{H}}\leq C$ for a constant $C$ that depends on the embedding $\mathcal{H}\hookrightarrow\Omega$. Putting these estimates together, we conclude that $$\|D\Psi\|_{\mathcal{H}}\leq \sum_{k=1}^{3}\|D\Psi_k\|_{\mathcal{H}}\leq C\bigg( 1+ \|X\|_{\mathcal{C}^\alpha} \bigg)=C(1+\Psi_1),$$ hence, in view of Proposition [Proposition 48](#Prop:logSobolev){reference-type="ref" reference="Prop:logSobolev"}(i), the law $\mu$ of $\Psi$ satisfies $\mathrm{WLSI}(G)$ with weight function $G:\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}^{d\otimes d}\rightarrow [0, \infty]$, $G(x_1, \mathbf{x}_2)=(1+x_1)^2$.
Finally, it is possible to obtain a $\mathrm{WLSI}$ for the law $\mu_1$ of the Gaussian rough path $\mathbf{X}_{s,t}=(\Psi_2, \Psi_3)$, for fixed times $s,t\in [0,T]$, by noting that $\|G\|_{L^1(\mu)}=\mathbb{E}\big[(1+\|X\|_{\mathcal{C}^\alpha})^2\big]<\infty$. Proposition [Proposition 48](#Prop:logSobolev){reference-type="ref" reference="Prop:logSobolev"}(ii) then implies that $\mu_1$ satisfies $\mathrm{WLSI}(\tilde{G})$ with $\tilde{G}(\mathbf{x_2})=\int_{\mathbb{R}}(1+x_1)^2\pi(dx_1|\mathbf{x_2})$.
At this point we recall a slight generalisation of [@nualart2006malliavin Proposition 4.1.3], that we need for our last example.
**Lemma 54**. *Let $\Psi:\Omega\rightarrow \mathbb{R}^m$ be an $\mathcal{H}$-locally Lipschitz continuous functional in the sense that there exists a real-valued random variable $\Phi$ that is $\gamma$-a.e. finite and such that $$\label{eq:H-lip}
|\Psi(\omega+h)-\Psi(\omega)|\leq |\Phi(\omega)|\|h\|_{\mathcal{H}}$$ holds $\gamma$-a.e. for all $h$ in bounded sets of $\mathcal{H}$. Then $\Psi$ is locally Malliavin differentiable a.e. in the sense that there exist a sequence of measurable sets $A_n\subset\Omega$ with $A_n\uparrow A\subset \Omega$, $\gamma(A)=1$, and Malliavin differentiable functionals $\Psi_n:\Omega\to\mathbb{R}^m$ such that $\Psi=\Psi_n$ on $A_n$ (the derivative is then defined by $D\Psi:=D\Psi_n$ on $A_n$). Moreover, if $\Phi\in L^2(\Omega)$ then $D\Psi\in L^2(\Omega)$.*
*Proof.* The proof is essentially the same as that of [@nualart2006malliavin Proposition 4.1.3]. In particular, the same arguments work by replacing the condition of $\mathcal{H}$-differentiability with $\mathcal{H}$-local Lipschitz continuity, replacing the $\mathcal{H}$-Fréchet derivative $D\Psi$ by the random variable $\Phi$ and working with the localising sequence $$A_n=\bigg\{ \omega\in \Omega: \sup_{\|h\|_\mathcal{H}\leq1/n}|\Psi(\omega+i(h))|\leq n,\;|\Phi(\omega)|\leq n \bigg\}, \quad n\in\mathbb{N}.$$ Note that the finiteness of $\Phi$ guarantees that $A=\bigcup_{n\in\mathbb{N}}A_n$ is a set of probability $1$. These arguments imply existence of the Malliavin derivative of $\Psi_n$. Moreover, we obtain the almost sure estimate $\|D\Psi_n\|_{\mathcal{H}}\leq C_n$ (where $C_n\rightarrow\infty$ as $n\to\infty$), hence $D\Psi_n\in L^2(\Omega)$. To conclude that $D\Psi\in L^2(\Omega)$, we take advantage of the a.s. existence and linearity of the map $$h\longmapsto D^h\Psi_n=\frac{d}{d\epsilon}\Psi_n(\omega+\epsilon h)\bigg|_{\epsilon=0}.$$ In view of [\[eq:H-lip\]](#eq:H-lip){reference-type="eqref" reference="eq:H-lip"} it follows that this linear map is bounded and thus we obtain $$\|D\Psi_n\|_{\mathcal{H}}=\sup_{\|h\|_\mathcal{H}\leq 1}|D^h\Psi_n|\leq |\Phi|,\;\; \gamma\text{-a.e.}$$ Since this bound is uniform in $n$ and $D\Psi=D\Psi_n$ on $A_n$, the conclusion follows. ◻
**Example 55**. (Gaussian RDEs) Let $p\in (1, 3)$, $T>0$ and $Y$ solve the RDE $$\label{Eq: RDE}
dY=V(Y)d\mathbf{X}, \quad Y_0\in\mathbb{R}^m,$$ on the interval $[0,T]$. Here, for simplicity, we assume that $V=(V_1,\dots, V_d)$ is a collection of $C_b^\infty$ vector fields on $\mathbb{R}^m$ and, as in the previous example, $X$ is a continuous Gaussian process that lifts to a geometric $p$-variation (or $1/p$-Hölder) rough path $\mathbf{X}$. With the same abstract Wiener space as in the previous example, consider the $\mathbb{R}^{m+1}$-valued functional $$\Psi=(\Psi_1, \Psi_2)=(\|\mathbf{X}\|_{\text{p-var}}, Y_T),$$ where $\|\mathbf{X}\|_{\text{p-var}}$ is the (inhomogeneous) $p$-variation "norm" (Definition [Definition 13](#dfn:RP){reference-type="ref" reference="dfn:RP"}). Regarding the differentiability of $\Psi_1$, the triangle inequality along with the estimates on the shifted rough path from the previous example furnish $$\begin{aligned}
|\Psi_1(\omega+h)-\Psi_1(\omega)|&\leq \bigg[\sup_{(t_i)\in\mathcal{P}[0,T]}\sum_{i}\bigg(|h_{t_i,t_{i+1}}|+\big|\mathbb{X}_{t_i,t_{i+1}}(\omega+h)-\mathbb{X}_{t_i,t_{i+1}}(\omega)\big| \bigg)^{p}\bigg]^\frac{1}{p}\\&
\leq C(\|h\|_{\mathcal{H}}+ 2\|X\|_{\text{p-var}}\|h\|_{\mathcal{H}}+\|h\|_{\mathcal{H}}^2),
\end{aligned}$$ for a constant that depends on the various embeddings of the underlying $p$-variation spaces. Since $\|X\|_{\text{p-var}}$ is square-integrable, we deduce that $\Psi_1$ satisfies the assumptions of Lemma [Lemma 54](#Lem:Dloc){reference-type="ref" reference="Lem:Dloc"}. Thus we have $$\|D\Psi_1\|_{\mathcal{H}}\leq \|X\|_{\text{p-var}}\leq \|\mathbf{X}\|_{\text{p-var}} ,\;\; \gamma\text{-a.e.}$$ Turning to $\Psi_2$, the functional $Y_T$ is $\gamma$-a.s. continuously $\mathcal{H}$-differentiable and $$D^hY_t=\int_{0}^{t}J_{t\leftarrow s}^{\mathbf{X}}V_j(Y_s)dh_s,$$ where $J_{t\leftarrow s}^{\mathbf{X}}$ is the Jacobian of the solution flow of [\[Eq: RDE\]](#Eq: RDE){reference-type="eqref" reference="Eq: RDE"} with respect to the initial condition. The latter itself satisfies a linear RDE and obeys the estimate $$\|J_{\cdot\leftarrow 0}^{\mathbf{X}}\|_{\text{p-var}}\leq C\exp\bigg(c\|\mathbf{X}\|^p_{\text{p-var}}\bigg)$$ for some constants $c,C>0$ (see e.g. [@friz2010multidimensional Equation (20.17)]; notice that the estimate there is formulated in terms of the homogeneous $p$-variation norm, which is bounded above by the inhomogeneous one considered here). In view of the latter, along with the shift invariance of the Jacobian flow and the boundedness of the vector fields $V$, one has $$\|D\Psi_2\|_{\mathcal{H}}=\|DY_T\|_{\mathcal{H}}\leq C_T\exp\bigg(c\|\mathbf{X}\|^p_{\text{p-var}}\bigg).$$ Thus $$\|D\Psi\|_{\mathcal{H}}\leq C\bigg(\|\mathbf{X}\|_{\text{p-var}}+\exp\bigg(c\|\mathbf{X}\|^p_{\text{p-var}}\bigg)\bigg),$$ which in view of Proposition [\[Prop:logSobolev\]](#Prop:logSobolev){reference-type="eqref" reference="Prop:logSobolev"}(i) implies that the law of $\Psi$ satisfies a $\mathrm{WLSI}$ with weight function $G(x)=(x_1+e^{x_1^p})$.
# Technical proofs {#Section:App}
This section is devoted to the proofs of some technical lemmas used throughout this work. As is customary, we remark that values of unimportant constants may change from line to line without a change in notation.
## Proof of Lemma [\[shiftlem\]](#shiftlem){reference-type="ref" reference="shiftlem"} {#proof:shiftlem}
From the definition of the distance [\[twobar\]](#twobar){reference-type="eqref" reference="twobar"} it suffices to obtain estimates for the basis elements $\mathcal{S}\subset\mathcal{T}$. Let $\lambda\in(0,1]$, $T>0$, $s\in[0,T]$, $h_1, h_2\in\mathcal{H}$ and $\phi\in C_c^1(\mathbb{R})$ such that $\|\phi\|_{\mathcal{C}^1}\leq 1$ and $\mathop{\mathrm{supp}}(\phi)\subset(-1,1)$. Starting with the symbol $\Xi$, $$\label{Lip1}
\begin{aligned}
\big|\left\langle \big(T_{h_2}\Pi^{\xi}_{s}-T_{h_1}\Pi^{\xi}_{s}\big)\Xi , \phi^\lambda_s\right\rangle \big|
& = \left|
\left\langle \dot{W}+h_2, \phi_s^\lambda\right\rangle - \left\langle \dot{W}+h_1,\phi_s^\lambda\right\rangle\right|
\\
&=\frac{1}{\lambda}\bigg|\int_{(s-\lambda, s+\lambda)}
\big( h_2(t)-h_1(t) \big)\phi(\lambda^{-1}(t-s)) dt\bigg|\\&\leq\lambda^{-1}\|h_2-h_1\|_{\mathcal{H}} \bigg(\lambda \int_{(-1, 1)}\phi(z)^2 dz\bigg)^{\frac{1}{2}}\\&\leq \sqrt{2} \lambda^{-\frac{1}{2}}\|\phi\|_{\mathcal{C}^1}\|h_2-h_1\|_{\mathcal{H}}\leq \sqrt{2} \lambda^{-\frac{1}{2}}\|h_2-h_1\|_{\mathcal{H}},
\end{aligned}$$ where we used Cauchy-Schwarz and $z=\lambda^{-1}(t-s)$. Next let $m\in\{1,\dots, M\}$ and consider $$\begin{aligned}
&\big|\big\langle \big(T_{h_2}\Pi^{\xi}_{s}-T_{h_1}\Pi^{\xi}_{s}\big)I(\Xi)^m, \phi^\lambda_s\big\rangle\big|\\
&=\frac{1}{\lambda}\bigg|\int_{(s-\lambda,s+\lambda )}
\left[\left(\widehat{W}^H(t)-\widehat{W}^H(s)+\widehat{h}_2(t)-\widehat{h}_2(s)\right)^m-\left(\widehat{W}^H(t)-\widehat{W}^H(s)+\widehat{h}_1(t)-\widehat{h}_1(s)\right)^m \right]\\
&\quad\times\phi(\lambda^{-1}(t-s))dt\bigg|.\end{aligned}$$ Letting $x_i=\widehat{W}^H(s,t)+\widehat{h}_i(s,t):=\widehat{W}^H(t)-\widehat{W}^H(s)+\widehat{h}_i(t)-\widehat{h}_i(s), i=1, 2$ and using the binomial identity $$\label{binom}
x_2^m= x_1^{m}+\sum_{k=1}^{m}\binom{m}{k}x_1^{m-k}(x_2-x_1)^{k},$$ we obtain $$\label{Lip2con}
\begin{aligned}
&\frac{1}{\lambda}\bigg|\int_{(s-\lambda,s+\lambda )} \big[\big(\widehat{W}^H(s,t)+\widehat{h}_2(s,t)\big)^m-\big(\widehat{W}^H(s,t)+\widehat{h}_1(s,t)\big)^m \big]\phi(\lambda^{-1}(t-s))dt\bigg|
\\&=\frac{1}{\lambda}\bigg|\int_{(s-\lambda,s+\lambda )}\sum_{k=1}^{m}\binom{m}{k}\big[\widehat{W}^H(s,t)+\widehat{h}_1(s,t)\big]^{m-k}\big[ \widehat{h}_2(s,t)-\widehat{h}_1(s,t) \big]^k\phi(\lambda^{-1}(t-s))dt\bigg|\\&
\leq \lambda^{-1+m(H-\kappa)}\sum_{k=1}^{m}\binom{m}{k}\big\|\widehat{W}^H+\widehat{h}_1\big\|_{\mathcal{C}^{H-\kappa}}^{m-k}\big\| \widehat{h}_2-\widehat{h}_1\big\|_{\mathcal{C}^{H-\kappa}}^k\int_{(s-\lambda,s+\lambda )}|\phi(\lambda^{-1}(t-s))| dt\\&\leq m!\lambda^{-1+m(H-\kappa)}\sum_{k=1}^{m}C_H^k\big\|\widehat{W}^H+\widehat{h}_1\big\|_{\mathcal{C}^{H-\kappa}}^{m-k}\big\| h_2-h_1\big\|_{\mathcal{H}}^k\bigg(\lambda\int_{(-1, 1 )} |\phi(z)|dz \bigg)
\\& \leq C_m\left(1\vee C^m_H\right)\lambda^{m(H-\kappa)}\|\phi\|_{\mathcal{C}^1}
\left(1\vee\left\|\widehat{W}^H+\widehat{h}_1\right\|_{\mathcal{C}^{H-\kappa}}^{m-1} \right)
\left(\left\| h_2-h_1\right\|_{\mathcal{H}}\vee \left\|h_2-h_1\right\|^{m}_{\mathcal{H}}\right),
\end{aligned}$$ where we used the change of variables $z=\lambda^{-1}(t-s)$ and the continuity of the linear operator $K^H:\mathcal{H}\rightarrow \mathcal{C}^H$ and $C_H$ is an upper bound for the operator norm. Turning to $\Xi I(\Xi)^m$, $$\begin{aligned}
&\left\langle \big(T_{h_2}\Pi^{\xi}_{s}-T_{h_1}\Pi^{\xi}_{s}\big)\Xi I(\Xi)^m, \phi^\lambda_s\right\rangle \\
&
=\left\langle (\dot{W}+h_2)(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_2-\widehat{h}_2(s))^m-(\dot{W}+h_1)(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m, \phi^\lambda_s\right\rangle\\
&
=\left\langle (h_2-h_1)(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m, \phi^\lambda_s\right\rangle\\
&+\left\langle (h_2-h_1)\big[(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_2-\widehat{h}_2(s))^m-(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m\big], \phi^\lambda_s\right\rangle\\
&
+\left\langle (\dot{W}+h_1)\big[(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_2-\widehat{h}_2(s))^m-(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m\big], \phi^\lambda_s\right\rangle
\\
&=:I_1+I_2+I_3.\end{aligned}$$ An application of the Cauchy-Schwarz inequality then yields $$\label{I1}
\begin{aligned}
|I_1|&=\bigg|\big\langle (h_2-h_1)(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m, \phi^\lambda_s\big\rangle \bigg|\\&=\frac{1}{\lambda}\bigg|\int_{(s-\lambda, s+\lambda )} \big(h_2(t)- h_1(t)\big)\big(\widehat{W}^H(s,t)+\widehat{h}_1(s,t)\big)^m\phi(\lambda^{-1}(t-s))dt \bigg| \\&
\leq \lambda^{-1+m(H-\kappa)}\big\|\widehat{W}^H+\widehat{h}_1\big\|^m_{\mathcal{C}^{H-\kappa}}\|h_2-h_1\|_{\mathcal{H}}\bigg(\lambda\int_{(-1, 1 )} \phi(z)^2dz \bigg)^{\frac{1}{2}}\\&
\leq \sqrt{2}\lambda^{-\frac{1}{2}+m(H-\kappa)}\|\phi\|_{\mathcal{C}^1}\big\|\widehat{W}^H+\widehat{h}_1\big\|^m_{\mathcal{C}^{H-\kappa}}\|h_1-h_2\|_{\mathcal{H}}.
\end{aligned}$$ For $I_2$ we have the estimate $$\label{I2}
|I_2|\leq C\lambda^{-\frac{1}{2}+m(H-\kappa)}
\left(1\vee\big\|\widehat{W}^H+\widehat{h}_1\big\|_{\mathcal{C}^{H-\kappa}}^{m-1} \right)
\left(\big\| h_2-h_1\big\|_{\mathcal{H}}\vee \big\|h_2-h_1\big\|^{m+1}_{\mathcal{H}}\right).$$ This can be proved using very similar arguments as the ones used to obtain [\[Lip2con\]](#Lip2con){reference-type="eqref" reference="Lip2con"}. To avoid repetition, its proof will be omitted. Turning to $I_3$ we have $$\begin{aligned}
I_3&=\left\langle h_1\big[(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_2-\widehat{h}_2(s))^m-(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m\big], \phi^\lambda_s\right\rangle\\&
+\left\langle \dot{W}\big[(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_2-\widehat{h}_2(s))^m-(\widehat{W}^H-\widehat{W}^H(s)+\widehat{h}_1-\widehat{h}_1(s))^m\big], \phi^\lambda_s\right\rangle\\&
=\frac{1}{\lambda}\int_{(s-\lambda, s+\lambda )}
\left[(\widehat{W}^H(s,t)+\widehat{h}_2(s,t))^m-(\widehat{W}^H(s,t)+\widehat{h}_1(s,t))^m\right]
h_1(t)\phi(\lambda^{-1}(t-s))dt\\
&
-\frac{1}{\lambda^2}\int_{(s-\lambda, s+\lambda )}\phi'(\lambda^{-1}(t-s))\int_{s}^{t}\left[(\widehat{W}^H(s,r)+\widehat{h}_2(s,r))^m-(\widehat{W}^H(s,r)+\widehat{h}_1(s,r))^m\right]dW_rdt\\
& =:J_1+J_2.
\end{aligned}$$ In view of [\[binom\]](#binom){reference-type="eqref" reference="binom"} we have $$\label{J1}
\begin{aligned}
&|J_1| = \frac{1}{\lambda}\bigg|\int_{(s-\lambda, s+\lambda )}\sum_{k=1}^{m}\binom{m}{k}\big[\widehat{W}^H(s,t)+\widehat{h}_1(s,t)\big]^{m-k}\big[\widehat{h}_2(s,t)-\widehat{h}_1(s,t)\big]^{k}h_1(t)\phi(\lambda^{-1}(t-s))dt\bigg|\\&
\leq m!\lambda^{-1+m(H-\kappa)}\sum_{k=1}^{m}\big\|\widehat{W}^H+\widehat{h}_1\big\|_{\mathcal{C}^{H-\kappa}}^{m-k}\|\widehat{h}_2-\widehat{h}_1\|^{k}_{\mathcal{C}^{H-\kappa}}\int_{(s-\lambda, s+\lambda )}|h_1(t)|\big|\phi(\lambda^{-1}(t-s))\big|dt\\&
\leq m!\lambda^{-\frac{1}{2}+m(H-\kappa)}
\left(1\vee \big\|\widehat{W}^H+\widehat{h}_1\big\|_{\mathcal{C}^{H-\kappa}}^{m-1}\right)
\left(1\vee C^m_H\right)\big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m}\big)\|h_1\|_{\mathcal{H}}\|\phi\|_{L^2(-1,1)},
\end{aligned}$$ where we used Cauchy-Schwarz and the continuity of the embedding $\mathcal{H}\hookrightarrow \mathcal{C}^H$ once again. Finally, let $$M_{st} := \int_{0}^{t}\left[(\widehat{W}^H(s,r)+\widehat{h}_2(s,r))^m-(\widehat{W}^H(s,r)+\widehat{h}_1(s,r))^m\right]dW_r, \quad s\leq t.$$ By the BDG inequality and [\[binom\]](#binom){reference-type="eqref" reference="binom"} we have $$\begin{aligned}
&\mathbb{E}\left[\big|M_{st}-M_{ss}\big|^p\right]
\leq \left(\int_{s}^{t}\mathbb{E}\left[\left|\left(\widehat{W}^H(s,r)+\widehat{h}_2(s,r)\right)^m
- \left(\widehat{W}^H(s,r)+\widehat{h}_1(s,r)\right)^m\right|^2\right]dr\right)^{\frac{p}{2}}\\
&
\leq C_m^p\left\{\int_{s}^{t}\left(\sum_{k=1}^{m}
\mathbb{E}\left[\left|\widehat{W}^H(s,r)+\widehat{h}_1(s,r)\right|^{m-k}\right]\left|\widehat{h}_2(s,r)-\widehat{h}_1(s,r)\right|^{k}\right)^2dr\right\}^{\frac{p}{2}}\\
&
\leq C_m^p\left\{\sum_{k=1}^{m}
\bigg[\mathbb{E}\big\|\widehat{W}^H+\widehat{h}_1\big\|^{(m-k)}_{\mathcal{C}^{H-\kappa}}\|\widehat{h}_2-\widehat{h}_1\|^{k}_{\mathcal{C}^{H-\kappa}}\bigg]^{p}\bigg[\int_{s}^{t}(r-s)^{2m(H-\kappa)}dr\bigg]\right\}^{\frac{p}{2}}
\\
&
\leq C^p_{H,m}\left(1\vee \mathbb{E}\left[\left\|\widehat{W}^H+\widehat{h}_1\right\|_{\mathcal{C}^{H-\kappa}}^{p(m-1)}\right]\right)\left(\|h_2-h_1\|^p_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{mp}\right)(t-s)^{pm(H-\kappa)+p/2}.
\end{aligned}$$ An application of the Kolmogorov continuity criterion for two-parameter processes [@friz2020course Theorem 3.13] furnishes $$\begin{aligned}
\big|M_{st}-M_{ss}\big|\leq K_{h_1}\big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m}\big)(t-s)^{m(H-\kappa)+\frac{1}{2}-\kappa}
\end{aligned}$$ almost surely, where $K_{h_1}$ is a random variable with finite moments of all orders (this estimate can also be obtained by the Young bounds used for the proof of Lemma [\[Itoshiftlem\]](#Itoshiftlem){reference-type="ref" reference="Itoshiftlem"} in Section [7.2](#proof:Itoshiftlem){reference-type="ref" reference="proof:Itoshiftlem"} below). Plugging this estimate into the expression for $J_2$ then yields $$\label{J2}
\begin{aligned}
|J_2|&\leq \lambda^{-2}\int_{(s-\lambda, s+\lambda )}|\phi'(\lambda^{-1}(t-s))|\big|M_{st}-M_{ss}\big|dt\\&\leq K\big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m}\big)\lambda^{-2}\int_{(s-\lambda, s+\lambda )}|\phi'(\lambda^{-1}(t-s))|(t-s)^{m(H-\kappa)+\frac{1}{2}-\kappa}dt\\&
\leq K\big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m}\big)\lambda^{-2+1+m(H-\kappa)+\frac{1}{2}-\kappa}\int_{(-1,1)}|\phi'(t)|t^{m(H-\kappa)+\frac{1}{2}-\kappa}dt\\&
\leq CK\big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m}\big)\lambda^{m(H-\kappa)-\frac{1}{2}-\kappa}\|\phi\|_{\mathcal{C}^1}.
\end{aligned}$$ A combination of [\[I1\]](#I1){reference-type="eqref" reference="I1"}, [\[J1\]](#J1){reference-type="eqref" reference="J1"}, [\[J2\]](#J2){reference-type="eqref" reference="J2"} implies the almost sure bound $$\big|\big\langle \big(T_{h_2}\Pi^{\xi}_{s}-T_{h_1}\Pi^{\xi}_{s}\big)\Xi I(\Xi)^m, \phi^\lambda_s\big\rangle\big|\leq C_{m,H}\lambda^{m(H-\kappa)-\frac{1}{2}-\kappa}K_{h_1}\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m+1},$$ where $K_{h_1}$ is a random variable with finite moments of all orders. The proof follows from this with [\[Lip1\]](#Lip1){reference-type="eqref" reference="Lip1"}-[\[Lip2con\]](#Lip2con){reference-type="eqref" reference="Lip2con"}-[\[I2\]](#I2){reference-type="eqref" reference="I2"} and recalling the definition of model distance [\[twobar\]](#twobar){reference-type="eqref" reference="twobar"}.
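We also record, for the reader's convenience, the exponent count behind the application of the Kolmogorov criterion above; this is a heuristic one-parameter computation, and the two-parameter statement of [@friz2020course Theorem 3.13] is what takes care of the uniformity in $s$. From the moment bound $$\mathbb{E}\big[|M_{st}-M_{ss}|^p\big]\lesssim \big(\|h_2-h_1\|_{\mathcal{H}}\vee \|h_2-h_1\|_{\mathcal{H}}^{m}\big)^p(t-s)^{pm(H-\kappa)+\frac{p}{2}},$$ the criterion yields Hölder-type bounds in $t-s$ of any order strictly below $m(H-\kappa)+\frac{1}{2}-\frac{1}{p}$, and choosing $p>\kappa^{-1}$ gives the exponent $m(H-\kappa)+\frac{1}{2}-\kappa$ used in the estimate of $J_2$.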
## Proof of Lemma [\[Itoshiftlem\]](#Itoshiftlem){reference-type="ref" reference="Itoshiftlem"} {#proof:Itoshiftlem}
Let $p\geq(\frac{1}{2}-\kappa)^{-1}$. For $t\in[0,T], h\in L^2[0,T]$ we have $$\label{eq:Itoshift}
\begin{aligned}
&\int_{s}^{t}f( \widehat{W}^H_r+\widehat{h}_r, r)d\bigg(W_r+\int_{0}^{r}h_zdz\bigg)-\int_{s}^{t}f( \widehat{W}^H_r, r)dW_r\\&= \int_{s}^{t}f_1( \widehat{h}_r, r)f_2(\widehat{W}^H_r,r)h_rdr+\int_{s}^{t}f_1(\widehat{h}_r,r)f_2\big( \widehat{W}^H_r, r\big)dW_r
\end{aligned}$$ From the Besov variation embedding, there exists $q$ with $1/p+1/q>1$ such that $\|\widehat{h}\|_{q-var;[s,t]}\leq C|t-s|^{H-\kappa}\|h\|_{\mathcal{H}}$. Moreover, for each $u,v\in[s,t]$, we have by the mean value inequality $$\begin{aligned}
|f_1(\widehat{h}_u,u)-f_1(\widehat{h}_v,v)|^q&\leq C\big|G(\|\widehat{h}\|_{\infty})\big|^q\bigg( |\widehat{h}_u-\widehat{h}_v|^q+|u-v|^q \bigg).
\end{aligned}$$ Hence, by monotonicity of $G$ it follows that $$\begin{aligned}
\big\|f_1(\widehat{h}_\cdot,\cdot)\big\|_{q-var; [s,t]}&\leq C\big|G(c\|h\|_{\mathcal{H}})\big|\bigg( \|\widehat{h}\|_{q-var;[s,t]}+|t-s|\bigg)\leq C_{T}\big|G(c\|h\|_{\mathcal{H}})\big|\bigg(\|h\|_{\mathcal{H}}+1\bigg)|t-s|^{H-\kappa}.
\end{aligned}$$ Finally, the growth assumptions on $f_2$ guarantee that $\int_{s}^{\cdot}f_2(\widehat{W}^H_r, r)dW_r$ is $1/p$-Hölder continuous on $[s,t]$, which in turn implies that $$\bigg\|\int_{s}^{\cdot}f_2(\widehat{W}^H_r, r)dW_r \bigg\|_{\text{p-var};[s,t]}\leq \bigg\|\int_{s}^{\cdot}f_2(\widehat{W}^H_r, r)dW_r\bigg\|_{\mathcal{C}^{1/p}[s,t]}|t-s|^{1/p}.$$ Combining the last two estimates with Young's inequality we obtain the almost sure bound $$\bigg\|\int_{0}^{\cdot}f_2(\widehat{W}^H_r, r)f_1(\widehat{h}_r, r)dW_r\bigg\|_{\mathcal{C}^{H-\kappa+1/p}[0,T]}\leq C\big|G(c\|h\|_{\mathcal{H}})\big|\bigg(\|h\|_{\mathcal{H}}+1\bigg)K,$$ where $K$ is a random variable with finite moments of all orders. As for the Riemann integral in [\[eq:Itoshift\]](#eq:Itoshift){reference-type="eqref" reference="eq:Itoshift"}, Cauchy-Schwarz yields a similar estimate for its $\mathcal{C}^{1/2}[0,T]$ norm.
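For completeness, we also recall the form of the Young estimate invoked in the last step (the Young--Loève inequality): if $Y$ has finite $q$-variation and $Z$ has finite $p$-variation on $[s,t]$ with $\frac{1}{p}+\frac{1}{q}>1$, then the Young integral $\int_s^{\cdot}Y_rdZ_r$ is well defined and $$\bigg|\int_{s}^{t}Y_rdZ_r\bigg|\leq C_{p,q}\big(|Y_s|+\|Y\|_{q\text{-var};[s,t]}\big)\|Z\|_{p\text{-var};[s,t]};$$ here it is applied with $Y=f_1(\widehat{h}_\cdot,\cdot)$ and $Z=\int_{s}^{\cdot}f_2(\widehat{W}^H_r,r)dW_r$.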
## Proof of Lemma [Lemma 38](#modelbndlem){reference-type="ref" reference="modelbndlem"}(1) {#Subsection: modelbndlem}
Let $t\neq s\in[0,T]$. By linearity of $\Gamma_{t,s}$ and [\[eq:structuregroup\]](#eq:structuregroup){reference-type="eqref" reference="eq:structuregroup"} we have $$\begin{aligned}
\Gamma_{t,s}f^{\Pi}(s)&=\sum_{k=0}^{M}\frac{1}{k!}\partial^k_1f\big((K^H*\Pi_s\Xi)(s),s\big)\Gamma_{t,s}I(\Xi)^k=\sum_{k=0}^{M}\frac{1}{k!}\partial^k_1f\big((K^H*\Pi_s\Xi)(s),s\big)\big(\Gamma_{t,s}I(\Xi)\big)^k\\&=\sum_{k=0}^{M}\sum_{m=0}^{M-k}\frac{1}{k!m!}\partial^{k+m}_1f\big((K^H*\Pi_t\Xi)(t),s\big)\big[ (K^H*\Pi_s\Xi)(s)-(K^H*\Pi_t\Xi)(t) \big]^m\big(\Gamma_{t,s}I(\Xi)\big)^k\\&
+\sum_{k=0}^{M}\frac{1}{k!(M-k)!}\bigg(\int_{(K^H*\Pi_t\Xi)(t)}^{(K^H*\Pi_s\Xi)(s)}\partial^{M+1}_1f\big(x,s\big)\big[ (K^H*\Pi_s\Xi)(s)-x\big]^{M-k}dx\bigg)\big(\Gamma_{t,s}I(\Xi)\big)^k,
\end{aligned}$$ where we Taylor-expanded $\partial_1^kf$ around $(K^H*\Pi_t\Xi)(t)$ up to the $(M-k)$th degree. Writing $\partial_1^{k+m}f(\cdot,s)=\partial_1^{k+m}f(\cdot,t)+\int_{t}^{s}\partial_{1,2}^{k+m,1}f(\cdot,x)dx$, we obtain $$\begin{aligned}
\Gamma_{t,s}f^{\Pi}(s)&=\sum_{k=0}^{M}\sum_{m=0}^{M-k}\frac{1}{k!m!}\partial^{k+m}_{1}f\big((K^H*\Pi_t\Xi)(t),t\big)\big[ (K^H*\Pi_s\Xi)(s)-(K^H*\Pi_t\Xi)(t) \big]^m\big(\Gamma_{t,s}I(\Xi)\big)^k\\&+R(t,s),
\end{aligned}$$ where the remainder is given by $$\label{Rkform}
\begin{aligned}
R&(t,s):=\sum_{k=0}^{M}\frac{1}{k!(M-k)!}\bigg(\int_{(K^H*\Pi_t\Xi)(t)}^{(K^H*\Pi_s\Xi)(s)}\partial^{M+1}_1f\big(x,s\big)\big[ (K^H*\Pi_s\Xi)(s)-x\big]^{M-k}dx\bigg)\big(\Gamma_{t,s}I(\Xi)\big)^k
\\& +\sum_{k=0}^{M}\sum_{m=0}^{M-k}\frac{1}{k!m!}\bigg(\int_{t}^{s}\partial^{k+m,1}_{1,2}f\big((K^H*\Pi_t\Xi)(t),x\big)dx\bigg)\big[ (K^H*\Pi_s\Xi)(s)-(K^H*\Pi_t\Xi)(t) \big]^m\big(\Gamma_{t,s}I(\Xi)\big)^k\\&
= \sum_{k=0}^{M}\tilde{R}_k(t,s)\big(\Gamma_{t,s}I(\Xi)\big)^k.
\end{aligned}$$ In view of the relations $\Gamma_{t,s}I(\Xi)=I(\Xi)-\big(\Pi_tI(\Xi) \big) (s)\mathbf{1}$ and $$(K^H*\Pi_s\Xi)(s)-(K^H*\Pi_t\Xi)(t)=\Pi_tI(\Xi)(s)\equiv \Pi_tI(\Xi)(s)\mathbf{1},$$ which are directly inferred from [\[eq:IXi\]](#eq:IXi){reference-type="eqref" reference="eq:IXi"}-[\[eq:structuregroup\]](#eq:structuregroup){reference-type="eqref" reference="eq:structuregroup"}-[\[eq:Gamma\]](#eq:Gamma){reference-type="eqref" reference="eq:Gamma"} and the binomial identity [\[binom\]](#binom){reference-type="eqref" reference="binom"}, we obtain $$\begin{aligned}
\Gamma_{t,s}f^{\Pi}(s)&-R(t,s)=\sum_{k=0}^{M}\sum_{m=0}^{M-k}\frac{1}{k!m!}\partial^{k+m}_{1}f\big((K^H*\Pi_t\Xi)(t),t\big)\big[ \Pi_tI(\Xi)(s)\big]^m\big[I(\Xi)-\big(\Pi_tI(\Xi) \big)(s) \mathbf{1}\big]^k\\&
=\sum_{k=0}^{M}\sum_{m'=k}^{M}\frac{1}{k!(m'-k)!}\partial^{m'}_{1}f\big((K^H*\Pi_t\Xi)(t),t\big)\big[ \Pi_tI(\Xi)(s)\big]^{m'-k}\big[I(\Xi)-\big(\Pi_tI(\Xi) \big)(s) \mathbf{1}\big]^k\\&
=\sum_{m'=0}^{M}\frac{\partial^{m'}_{1}f\big((K^H*\Pi_t\Xi)(t),t\big)}{m'!}\sum_{k=0}^{m'}\binom{m'}{k}\big[ \Pi_tI(\Xi)(s)\big]^{m'-k}\big[I(\Xi)-\big(\Pi_tI(\Xi) \big)(s) \mathbf{1}\big]^k\\&
=\sum_{m'=0}^{M}\frac{1}{m'!}\partial^{m'}_{1}f\big((K^H*\Pi_t\Xi)(t),t\big)I(\Xi)^{m'}=f^{\Pi}(t),
\end{aligned}$$ where we set $m+k=m'$ and then interchanged the order of summation to obtain the third equality. Turning to the remainder, we have $$\begin{aligned}
\label{remainderform}
R(t,s)
& = \sum_{k=0}^{M}\tilde{R}_k(t,s)\big(\Gamma_{t,s}I(\Xi)\big)^k=\sum_{k=0}^{M}\tilde{R}_k(t,s)\big[I(\Xi)-\big(\Pi_tI(\Xi) \big) (s)\mathbf{1}\big]^k\nonumber\\
& = \sum_{k=0}^{M}\tilde{R}_k(t,s)\big[I(\Xi)+\big(\Pi_sI(\Xi) \big) (t)\mathbf{1}\big]^k
=\sum_{k=0}^{M}\tilde{R}_k(t,s)\sum_{\ell=0}^{k}\binom{k}{\ell}\big(\Pi_sI(\Xi)(t) \big)^{k-\ell}I(\Xi)^{\ell}\nonumber\\
& = \sum_{\ell=0}^{M} \left\{\sum_{k=\ell}^{M}\tilde{R}_k(t,s)\binom{k}{\ell}\big(\Pi_sI(\Xi)(t) \big)^{k-\ell}\right\}I(\Xi)^{\ell}.\end{aligned}$$ The analogous expressions for $\mathscr{D}(\Pi)$ follow by multiplying throughout by the symbol $\Xi$. The latter is possible since, in view of [\[eq:structuregroup\]](#eq:structuregroup){reference-type="eqref" reference="eq:structuregroup"}, we have $\Gamma_{t,s}(I(\Xi)^k\Xi)=(\Gamma_{t,s}I(\Xi))^k\Xi$. In view of [\[remainderform\]](#remainderform){reference-type="eqref" reference="remainderform"}, along with the fact that $f^{\Pi}(t)-\Gamma_{t,s}f^{\Pi}(s)=-R(t,s)$, we see that in order to estimate $\|f^\Pi\|_{\mathcal{D}^\gamma_{T}}$ (recall [\[eq:Dgammanorm\]](#eq:Dgammanorm){reference-type="eqref" reference="eq:Dgammanorm"}), one has to bound the terms $\tilde{R}_k$. To this end, let $k\in\{\ell,\dots,M\}$ and $a=(K^H*\Pi_t\Xi)(t), b=(K^H*\Pi_s\Xi)(s)$. Assuming first $a<b$, a change of variable gives $$\begin{aligned}
\int_{a}^{b}\partial^{M+1}_1f\big(x,s\big)\big[ b-x\big]^{M-k}dx&=\int_{0}^{b-a}\partial^{M+1}_1f\big(x+a,s\big)(b-a-x)^{M-k}dx\\&
\leq \int_{0}^{b-a}\big|\partial^{M+1}_1f\big(x+a,s\big)\big|(b-a-x)^{M-k}dx\\&
\leq C_{f,T}\int_{0}^{b-a}\big(1+G(|x+a|)\big)(b-a-x)^{M-k}dx\\&
\leq \frac{C_{f,T}}{M+1-k}(b-a)^{M+1-k}(1+G(|b|+2|a|)),
\end{aligned}$$ where we used the assumption [\[fgrowth\]](#fgrowth){reference-type="eqref" reference="fgrowth"}. The case $b<a$ is symmetric, namely $$\begin{aligned}
-\int_{b}^{a}\partial^{M+1}_1 f\big(x,s\big)\big[ b-x\big]^{M-k}dx
& =-\int_{0}^{a-b}\partial^{M+1}_1f\big(x+b,s\big)(-x)^{M-k}dx\\
& \leq \int_{0}^{a-b}\big|\partial^{M+1}_1f\big(x+b, s\big)\big|x^{M-k}dx\\
& \leq C_{f,T}\int_{0}^{a-b}(1+G(|x+b|))x^{M-k}dx\\
& \leq \frac{C_{f,T}}{M+1-k}(a-b)^{M+1-k}(1+G(|a|+2|b|)).\end{aligned}$$ Since $G$ is non-decreasing, we combine both cases to obtain $$\bigg|\int_{a}^{b}\partial^{M+1}_1f\big(x,s\big)\big[ b-x\big]^{M-k}dx\bigg|\leq \frac{C_{f,T}}{M+1-k}\bigg[1+G(2|a|+2|b|)\bigg]\big|b-a\big|^{M+1-k}.$$ Substituting back $a$ and $b$, the latter yields $$\begin{aligned}
\label{Rk1bnd}
&\int_{(K^H*\Pi_t\Xi)(t)}^{(K^H*\Pi_s\Xi)(s)}\partial^{M+1}_1f\big(x,s\big)\big[ (K^H*\Pi_s\Xi)(s)-x\big]^{M-k}dx\nonumber\\
& \leq \frac{C_{f,T}}{M+1-k}\bigg[1+G\bigg(2\big|(K^H*\Pi_s\Xi)(s)\big|+2\big|(K^H*\Pi_t\Xi)(t)\big|\bigg)\bigg]\big|(K^H*\Pi_t\Xi)(t)-(K^H*\Pi_s\Xi)(s)\big|^{M+1-k}\nonumber\\
& \leq \frac{C_{f,T}}{M+1-k}\bigg[1+G\bigg(4\big\|(K^H*\Pi\Xi)\big\|_{C[0,T]}\bigg)\bigg]\big\|K^H*\Pi\Xi\big\|^{M+1-k}_{\mathcal{C}^{H-\kappa}[0,T]}|t-s|^{(M+1-k)(H-\kappa)},\end{aligned}$$ with $\kappa<H$, recalling that $K^H*\Pi\Xi$ is function-valued for the models we consider. It remains to estimate the second summand in [\[Rkform\]](#Rkform){reference-type="eqref" reference="Rkform"}, which is easier to handle since $f$ and its derivatives are bounded in their second argument over compact time intervals. Indeed, $$\begin{aligned}
\bigg|\int_{t}^{s}\partial^{k+m,1}_{1,2}f\big((K^H*\Pi_t\Xi)(t),x\big)dx\bigg|&\leq |t-s| \sup_{x\in[0,T]}\big|\partial^{k+m,1}_{1,2}f\big((K^H*\Pi_t\Xi)(t),x\big)\big|\\&
\leq C_{f,T}|t-s|\bigg[ 1+G\bigg(\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg].
\end{aligned}$$ Thus, $$\label{Rk2bnd}
\begin{aligned}
&\sum_{m=0}^{M-k}\frac{1}{k!m!}\bigg(\int_{t}^{s}\partial^{k+m,1}_{1,2}f\big((K^H*\Pi_t\Xi)(t),x\big)dx\bigg)\big[(K^H*\Pi_s\Xi)(s)-(K^H*\Pi_t\Xi)(t) \big]^m\\&
\leq C_{f,T}\bigg[ 1+G\bigg(\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg]\sum_{m=0}^{M-k}\frac{1}{k!m!}\big\|K^H*\Pi\Xi\big\|^{m}_{\mathcal{C}^{H-\kappa}[0,T]} |t-s|^{1+m(H-\kappa)}\\
& \leq \frac{C_{f,T}|t-s|(M+1-k)}{k!}\bigg[ 1+G\bigg(\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg]
\big(1\vee T^{(M-k)(H-\kappa)}\big) \bigg(1\vee\big\|K^H*\Pi\Xi\big\|^{M-k}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).
\end{aligned}$$ Putting [\[Rk1bnd\]](#Rk1bnd){reference-type="eqref" reference="Rk1bnd"}, [\[Rk2bnd\]](#Rk2bnd){reference-type="eqref" reference="Rk2bnd"} together we have $$\label{Rkbnd}
\begin{aligned}
\frac{|\tilde R_k(t,s)|}{|t-s|+|t-s|^{(M+1-k)(H-\kappa)}}
\leq \frac{C_{f,T}(M+1-k)}{k!}&\left(1\vee T^{(M-k)(H-\kappa)}\right) \bigg(1\vee\big\|K^H*\Pi\Xi\big\|^{M-k+1}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg)\\&\times\bigg[ 1+G\bigg(4\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg].
\end{aligned}$$ Since the models we consider are admissible (in the sense of [@bayer2020regularity Lemma 3.19]), then $\Pi_sI(\Xi)(t)=K^H*\Pi_s\Xi(t)-K^H*\Pi_s\Xi(s)$. In view of the latter, plugging the last estimate into [\[remainderform\]](#remainderform){reference-type="eqref" reference="remainderform"} yields, for $\ell=0,\dots, M$ and $\beta=\ell(H-\kappa)$ $$\begin{aligned}
|R(t,s)|_{\beta}&=\bigg| \sum_{k=\ell}^{M}\tilde{R}_k(t,s)\binom{k}{\ell}\big(\Pi_sI(\Xi)(t) \big)^{k-\ell} \bigg|\leq \sum_{k=\ell}^{M}|\tilde{R}_k(t,s)|\binom{k}{\ell}|\Pi_sI(\Xi)(t)|^{k-\ell}\\&
\leq \sum_{k=\ell}^{M}\binom{k}{\ell}|\tilde{R}_k(t,s)|\big\|K^H*\Pi\Xi\big\|^{k-\ell}_{\mathcal{C}^{H-\kappa}[0,T]}|t-s|^{(k-\ell)(H-\kappa)}\\&
\leq C_{f,T}\big(1\vee T^{(M-\ell)(H-\kappa)}\big)\bigg(1\vee\big\|K^H*\Pi\Xi\big\|^{M-\ell+1}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg)\bigg[ 1+G\bigg(4\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg]\times\\&\times\sum_{k=\ell}^{M}\binom{k}{\ell}\frac{M+1-k}{k!}\bigg( |t-s|^{1+(k-\ell)(H-\kappa)} + |t-s|^{(M+1-\ell)(H-\kappa)} \bigg).
\end{aligned}$$ Therefore, modulo a constant, $$\begin{aligned}
\frac{|R(t,s)|_{\beta}}{|t-s|^{\frac{1}{2}+\kappa-\beta}}\lesssim\sum_{k=\ell}^{M}\binom{k}{\ell}\frac{M+1-k}{k!}\bigg( |t-s|^{\frac{1}{2}-\kappa+k(H-\kappa)} + |t-s|^{(M+1)(H-\kappa)-\frac{1}{2}-\kappa} \bigg).
\end{aligned}$$ Since $\kappa<H$ and $M$ is chosen according to [\[Mchoice\]](#Mchoice){reference-type="eqref" reference="Mchoice"}, the exponents on the right-hand side are positive. Hence, for any $0<\gamma<((M+1)(H-\kappa)-\frac{1}{2}-\kappa)\wedge(\frac{1}{2}-\kappa)$, $$\begin{aligned}
\frac{|R(t,s)|_{\beta}}{|t-s|^{\frac{1}{2}+\kappa+\gamma-\beta}}&\lesssim(M+1-\ell)\sum_{k=\ell}^{M}\frac{1}{\ell!(k-\ell)!}\bigg( |t-s|^{\frac{1}{2}-\kappa-\gamma+k(H-\kappa)} + |t-s|^{(M+1)(H-\kappa)-\frac{1}{2}-\kappa-\gamma} \bigg)\\&
\leq \frac{(M+1-\ell)(M-\ell)}{\ell!}\bigg(1\vee T^{\alpha} \bigg),
\end{aligned}$$ where $\alpha=\frac{1}{2}-\kappa-\gamma+M(H-\kappa)$. Combining the last estimates and applying crude bounds for the terms that depend on $\ell$, we obtain $$\sup_{\substack{s\neq t\in[0,T]\\A\ni\beta<\frac{1}{2}+\kappa+\gamma}}\frac{\big| f^\Pi(t)-\Gamma_{t,s}f^\Pi(s)\big|_{\beta}}{|t-s|^{\frac{1}{2}+\kappa+\gamma-\beta}}
\leq C_{f,T, M}\bigg[ 1+G\bigg(4\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg]\bigg(1\vee\big\|K^H*\Pi\Xi\big\|^{M+1}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).$$ After multiplying by $\Xi$, the same estimate also implies that for $\ell=0,\dots,M$ and $\beta'=\ell(H-\kappa)-\frac{1}{2}-\kappa$, $$\begin{aligned}
\sup_{\substack{s\neq t\in[0,T]\\A\ni\beta'<\gamma}}&\frac{\big| \mathscr{D}(\Pi)(t)-\Gamma_{t,s}\mathscr{D}(\Pi)(s)\big|_{\beta'}}{|t-s|^{\gamma-\beta'}}=\sup_{\substack{s\neq t\in[0,T]\\A\ni\beta<\frac{1}{2}+\kappa+\gamma}}\frac{\big| f^\Pi(t)-\Gamma_{t,s}f^\Pi(s)\big|_{\beta}}{|t-s|^{\frac{1}{2}+\kappa+\gamma-\beta}}\\&\leq C_{f,T,M}\bigg[ 1+G\bigg(4\big\| K^H*\Pi\Xi\big\|_{C[0,T]} \bigg)\bigg]\bigg(1\vee\big\|K^H*\Pi\Xi\big\|^{M+1}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).
\end{aligned}$$
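We also make explicit, for clarity, the elementary condition behind the positivity of the exponents used above: since the smallest exponents appearing are $\frac{1}{2}-\kappa$ and $(M+1)(H-\kappa)-\frac{1}{2}-\kappa$, their positivity amounts to $$\kappa<\frac{1}{2}\qquad\text{and}\qquad M+1>\frac{\frac{1}{2}+\kappa}{H-\kappa},$$ which is what the choice of $M$ in [\[Mchoice\]](#Mchoice){reference-type="eqref" reference="Mchoice"} (not restated here) is assumed to guarantee.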
## Proof of Lemma [Lemma 38](#modelbndlem){reference-type="ref" reference="modelbndlem"}(2) {#proof-of-lemma-modelbndlem2}
Let $i=1,2$, $t\in[0,T]$. Since $$\mathscr{D}_f(\Pi^i)(t)=f^{\Pi^i}\star\Xi(t)=\sum_{m=0}^{M}\frac{1}{m!}\partial_1^{m}f\big((K^H*\Pi^i_t\Xi)(t),t\big)I(\Xi)^m\Xi,$$ our assumptions on $f$ directly yield $$\begin{aligned}
\sup_{t\in[0,T]}\sup_{ A\ni\beta<\gamma}\big| \mathscr{D}_f(\Pi^1)(t)&-\mathscr{D}_f(\Pi^2)(t)\big|_{\beta}\leq c_{f,T, M}(1\vee\big\|K^H*\Pi^1\Xi\big\|_{C[0,T]}^N)\\&\times\big\|K^H*\Pi^1\Xi-K^H*\Pi^2\Xi\big\|_{C[0,T]}\vee\big\|K^H*\Pi^1\Xi-K^H*\Pi^2\Xi\big\|^N_{C[0,T]}.
\end{aligned}$$ Turning to the remainder terms, a computation similar to [\[remainderform\]](#remainderform){reference-type="eqref" reference="remainderform"} furnishes $$\mathscr{D}_f(\Pi^i)(t)-\Gamma^i_{t,s}\mathscr{D}_f(\Pi^i)(s)=\sum_{\ell=0}^{M}\bigg\{\sum_{k=\ell}^{M}R^i_k(t,s)\binom{k}{\ell}\big(\Pi^i_sI(\Xi)(t) \big)^{k-\ell}\bigg\}I(\Xi)^{\ell}\Xi,$$ where, using the notation $(K^H*\Pi^i\Xi)(t,s):=(K^H*\Pi^i_s\Xi)(s)-(K^H*\Pi^i_t\Xi)(t)$, $i=1,2$, $$\label{Rkform2}
\begin{aligned}
R^i_k(t,s)
&:= \frac{\left(K^H*\Pi^i\Xi\right)(t,s)^{M+1-k}}{k!(M-k)!}
\left[\int_{0}^{1}\partial^{M+1}_1f\bigg((K^H*\Pi^i_t\Xi)(t)+\theta\bigg[ (K^H*\Pi^i\Xi)(t,s)\bigg],s\bigg)(1-\theta)^{M-k}d\theta\right]\\
& +\sum_{m=0}^{M-k}\frac{1}{k!m!}\big[ (K^H*\Pi^i\Xi)(t,s) \big]^m \bigg(\int_{t}^{s}\partial^{k+m,1}_{1,2}f\big((K^H*\Pi^i_t\Xi)(t),x\big)dx\bigg)\\&=:A^i_k(t,s)+B^i_k(t,s).
\end{aligned}$$ Thus, $$\label{eq:Remdecomp}
\begin{aligned}
\mathscr{D}_f(\Pi^2)(t)&-\Gamma^2_{t,s}\mathscr{D}_f(\Pi^2)(s)-\mathscr{D}_f(\Pi^1)(t)+\Gamma^1_{t,s}\mathscr{D}_f(\Pi^1)(s)\\&=\sum_{\ell=0}^{M}\bigg\{\sum_{k=\ell}^{M}\big[R^2_k(t,s)-R^1_k(t,s)\big]\binom{k}{\ell}\big(\Pi^2_sI(\Xi)(t) \big)^{k-\ell}\bigg\}I(\Xi)^{\ell}\Xi\\&
+\sum_{\ell=0}^{M}\bigg\{\sum_{k=\ell}^{M}R^1_k(t,s)\binom{k}{\ell}\big[\big(\Pi^2_sI(\Xi)(t)\big)^{k-\ell}-\big(\Pi^1_sI(\Xi)(t) \big)^{k-\ell}\big]\bigg\}I(\Xi)^{\ell}\Xi.
\end{aligned}$$ Starting from the first term on the right-hand side we have $$\label{ABdecomp}
\begin{aligned}
R^2_k(t,s)-R^1_k(t,s)=A^2_k(t,s)-A^1_k(t,s)+B^2_k(t,s)-B^1_k(t,s)
\end{aligned}$$ and $$\begin{aligned}
A^2_k(t,s)-A^1_k(t,s) & = \frac{1}{k!(M-k)!}\bigg[\bigg( (K^H*\Pi^2\Xi)(t,s)\bigg)^{M+1-k}-\bigg( (K^H*\Pi^1\Xi)(t,s)\bigg)^{M+1-k}
\bigg]\\
&\times\bigg(\int_{0}^{1}\partial^{M+1}_1f\bigg((K^H*\Pi^1_t\Xi)(t)+\theta\bigg[ (K^H*\Pi^1\Xi)(t,s)\bigg],s\bigg)(1-\theta)^{M-k}d\theta\bigg)\\
&
+\frac{\bigg( (K^H*\Pi^1\Xi)(t,s)\bigg)^{M+1-k}}{k!(M-k)!}
\int_{0}^{1}\bigg\{\partial^{M+1}_1f\bigg((K^H*\Pi^2_t\Xi)(t)+\theta\bigg[ (K^H*\Pi^2\Xi)(t,s)\bigg],s\bigg)\\
&-\partial^{M+1}_1f\bigg((K^H*\Pi^1_t\Xi)(t)+\theta\bigg[ (K^H*\Pi^1\Xi)(t,s)\bigg],s\bigg)\bigg\}(1-\theta)^{M-k}d\theta.
\end{aligned}$$ From the growth assumptions on $f$ we obtain $$\begin{aligned}
&k!(M-k)!|A^2_k(t,s)-A^1_k(t,s)|\\&\leq C_{f,T,M}1\vee \bigg\|K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{N}\bigg|\bigg( (K^H*\Pi^2\Xi)(t,s)\bigg)^{M+1-k}-\bigg( (K^H*\Pi^1\Xi)(t,s)\bigg)^{M+1-k}
\bigg|\\&
+C'_{f,T,M}\bigg| (K^H*\Pi^1\Xi)(t,s)\bigg|^{M+1-k}\bigg(1\vee\big\|K^H*\Pi^1\Xi\big\|_{C^{H-\kappa}[0,T]}^N\bigg)\\&
\times \bigg| (K^H*\Pi^2\Xi)(t,s)-(K^H*\Pi^1\Xi)(t,s)\bigg|\vee \bigg| (K^H*\Pi^2\Xi)(t,s)-(K^H*\Pi^1\Xi)(t,s)\bigg|^N.
\end{aligned}$$ Now, letting $x_i=(K^H*\Pi^i_s\Xi)(s)-(K^H*\Pi^i_t\Xi)(t)$, and re-expanding $x_2^{M+1-k}$ around $x_1$ using [\[binom\]](#binom){reference-type="eqref" reference="binom"} we continue the last estimate as follows: $$\label{Aestimate}
\begin{aligned}
k!(M-k)!|A^2_k(t,s)&-A^1_k(t,s)|\leq C_{f,T,M}1\vee \bigg\|K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{N}\\&\times\sum_{j=1}^{M+1-k}\binom{M+1-k}{j}\bigg\|K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{M+1-k-j}\bigg\| K^H*\Pi^2\Xi-K^H*\Pi^1\Xi \bigg\|_{\mathcal{C}^{H-\kappa}}^j\\&
+C'_{f,T,M}|t-s|^{(M+1-k)(H-\kappa)}\bigg\| K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{M+1-k}\bigg(1\vee\big\|K^H*\Pi^1\Xi\big\|_{C^{H-\kappa}[0,T]}^N\bigg)\\&
\times \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}\vee \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|^{N}_{\mathcal{C}^{H-\kappa}}\\&
\leq C'_{f,T,M}|t-s|^{(M+1-k)(H-\kappa)}1\vee \bigg\|K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{N+M+1-k}\\&\times \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}\vee \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|^{N+M+1-k}_{\mathcal{C}^{H-\kappa}}.
\end{aligned}$$ As for the last terms in [\[ABdecomp\]](#ABdecomp){reference-type="eqref" reference="ABdecomp"} we write $$\begin{aligned}
&B^2_k(t,s)-B^1_k(t,s)\\&=\sum_{m=0}^{M-k}\frac{1}{k!m!}\int_{t}^{s}\bigg(\partial^{k+m,1}_{1,2}f\big((K^H*\Pi^2_t\Xi)(t),x\big)-\partial^{k+m,1}_{1,2}f\big((K^H*\Pi^1_t\Xi)(t),x\big)\bigg)dx\big[ (K^H*\Pi^1\Xi)(t,s) \big]^m\\&
+\sum_{m=0}^{M-k}\frac{1}{k!m!}\bigg(\int_{t}^{s}\big[\partial^{k+m,1}_{1,2}f\big((K^H*\Pi^2_t\Xi)(t),x\big)-\partial^{k+m,1}_{1,2}f\big((K^H*\Pi^1_t\Xi)(t),x\big)\big]dx\bigg)\\&\quad\quad\times\bigg(\big[ (K^H*\Pi^2\Xi)(t,s) \big]^m-\big[ (K^H*\Pi^1\Xi)(t,s)\big]^m\bigg)\\&
+\sum_{m=0}^{M-k}\frac{1}{k!m!}\bigg(\int_{t}^{s}\partial^{k+m,1}_{1,2}f\big((K^H*\Pi^1_t\Xi)(t),x\big) dx\bigg)\\&\quad\quad\times\bigg(\big[ (K^H*\Pi^2\Xi)(t,s) \big]^m-\big[ (K^H*\Pi^1\Xi)(t,s)\big]^m\bigg).
\end{aligned}$$ These terms can be bounded by using the following facts: 1) From [\[unicontcondition\]](#unicontcondition){reference-type="eqref" reference="unicontcondition"}, $f$ and its derivatives are bounded in their second argument over compact time intervals; in particular, all the terms above can be bounded, up to a constant, in time by $|t-s|$. 2) $f$ and its derivatives have at most polynomial growth of degree $N$ in their first argument. The latter, along with another polynomial re-expansion argument, yields $$\label{Bestimate}
\begin{aligned}
&|B^2_k(t,s)-B^1_k(t,s)|\leq C_{f,M,T}|t-s| 1\vee \bigg\|K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{N+M-k}\\&
\times \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}\vee \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|^{N+M-k}_{\mathcal{C}^{H-\kappa}}.
\end{aligned}$$ In view of [\[ABdecomp\]](#ABdecomp){reference-type="eqref" reference="ABdecomp"}, [\[Aestimate\]](#Aestimate){reference-type="eqref" reference="Aestimate"} and [\[Bestimate\]](#Bestimate){reference-type="eqref" reference="Bestimate"}, it follows that
$$\begin{aligned}
&|R^2_k(t,s)-R^1_k(t,s)|\leq |A^2_k(t,s)-A^1_k(t,s)|+|B^2_k(t,s)-B^1_k(t,s)|\\&
\leq C'_{f,T, M}\bigg(|t-s|^{(M+1-k)(H-\kappa)}+|t-s|\bigg)\bigg(1\vee \bigg\|K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}^{N+M+1-k}\bigg)\\&\times \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|_{\mathcal{C}^{H-\kappa}}\vee \bigg\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\bigg\|^{N+M+1-k}_{\mathcal{C}^{H-\kappa}}.
\end{aligned}$$
Returning to the first term in [\[eq:Remdecomp\]](#eq:Remdecomp){reference-type="eqref" reference="eq:Remdecomp"} we have, using that $\Pi^i_sI(\Xi)(t)=K^H*\Pi^i_s\Xi(t)-K^H*\Pi^i_s\Xi(s)$, $$\begin{aligned}
\bigg|&\sum_{k=\ell}^{M}\big[R^2_k(t,s)-R^1_k(t,s)\big]\binom{k}{\ell}\big(\Pi^2_sI(\Xi)(t) \big)^{k-\ell}\bigg|\\&\leq \sum_{k=\ell}^{M}2^{k-\ell+1}\binom{k}{\ell}\big|R^2_k(t,s)-R^1_k(t,s)\big|\bigg( \big|\Pi^2_sI(\Xi)(t)-\Pi^1_sI(\Xi)(t) \big|^{k-\ell}+ \big|\Pi^1_sI(\Xi)(t)\big|^{k-\ell}\bigg)\\&
\leq 2^{M-\ell+1}C'_{f,T, M}\sum_{k=\ell}^{M}\binom{k}{\ell}\bigg(|t-s|^{(M+1-\ell)(H-\kappa)}+|t-s|^{1+(k-\ell)(H-\kappa)}\bigg)\bigg(1\vee \big\|K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{N+M+1-k} \bigg)\\& \times \bigg(\big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}\vee \big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|^{N+M+1-k}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg)\\&\times \bigg( \big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{k-\ell}+ \big\|K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{k-\ell}\bigg)\\&
\leq C'_{f, T, M}|t-s|^{(M+1-\ell)(H-\kappa)}
\bigg(1\vee \big\|K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{N+M+1} \bigg)\\&\times
\bigg(\big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}\vee \big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|^{N+M+1-\ell}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).
\end{aligned}$$ Finally, for the second term in [\[eq:Remdecomp\]](#eq:Remdecomp){reference-type="eqref" reference="eq:Remdecomp"}, similar estimates along with Lemma [Lemma 38](#modelbndlem){reference-type="ref" reference="modelbndlem"} furnish $$\begin{aligned}
\bigg|\sum_{k=\ell}^{M} & R^1_k(t,s)\binom{k}{\ell}
\left[\left(\Pi^2_sI(\Xi)(t)\right)^{k-\ell} - \left(\Pi^1_sI(\Xi)(t) \right)^{k-\ell}\right]\bigg|\\
& \leq C_{f,M, T}|t-s|^{(M+1-\ell)(H-\kappa)} \bigg(1\vee \big\|K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}^{N+M+1} \bigg)\\
& \times \bigg(\big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|_{\mathcal{C}^{H-\kappa}[0,T]}\vee \big\|K^H*\Pi^2\Xi-K^H*\Pi^1\Xi\big\|^{N+M+1-\ell}_{\mathcal{C}^{H-\kappa}[0,T]}\bigg).
\end{aligned}$$ A combination of the estimates in the last two displays concludes the proof.
---
abstract: |
We investigate the finite time explosion of the stochastic heat equation $\frac{\partial u}{\partial t} = \Delta u(t,x) + \sigma(u(t,x))\dot{W}(t,x)$ in the critical setting where $\sigma$ grows like $\sigma(u) \approx C(1 + |u|^\gamma)$ and $\gamma = \frac{3}{2}$. Mueller previously identified $\gamma=\frac{3}{2}$ as the critical growth rate for explosion and proved that solutions cannot explode in finite time if $\gamma< \frac{3}{2}$ and solutions will explode with positive probability if $\gamma>\frac{3}{2}$. This paper proves that explosion does not occur in the critical $\gamma=\frac{3}{2}$ setting.
author:
- |
M. Salins\
Boston University\
msalins\@bu.edu
bibliography:
- superlinear.bib
title: Solutions to the stochastic heat equation with polynomially growing multiplicative noise do not explode in the critical regime
---
# Introduction {#S:intro}
We investigate whether solutions to the stochastic heat equation explode in finite time. The equation is $$\label{eq:SPDE}
\begin{cases}
\frac{\partial u}{\partial t}(t,x) = \Delta u(t,x) + \sigma(u(t,x)) \dot{W}(t,x), & x \in [-\pi,\pi], t>0\\
u(t,-\pi) = u(t,\pi), & t>0\\
u(0,x) = u_0(x) \text{ bounded and periodic}.
\end{cases}$$ where $\sigma$ is locally Lipschitz continuous and satisfies the critical superlinear growth restriction that there exists $C>0$ such that for all $u \in \mathbb{R}$ $$\label{eq:sigma-growth}
|\sigma(u)| \leq C( 1 + |u|^{\frac{3}{2}}).$$ The spatial domain is $D =[-\pi,\pi]$ and we impose periodic boundary conditions. The stochastic noise $\dot{W}$ is space-time white noise and the initial data $u_0(x)$ is a bounded, continuous, periodic function.
In [@mueller-1991; @mueller-1998; @ms-1993; @mueller-2000], Mueller and Sowers proved that the polynomial growth rate of $|u|^{\frac{3}{2}}$ is critical in the sense that if $\sigma(u) \leq C(1 + |u|^\gamma)$ for some $C>0$ and $\gamma< \frac{3}{2}$, the solution to the SPDE [\[eq:SPDE\]](#eq:SPDE){reference-type="eqref" reference="eq:SPDE"} cannot explode in finite time. If $\sigma(u) \geq c|u|^\gamma$ for some $c>0$ and $\gamma>\frac{3}{2}$ then solutions will explode with positive probability. The question of whether solutions can explode in finite time in the critical case of $\gamma=3/2$ was left unsolved. In this paper we prove that solutions cannot explode in the critical regime where $\gamma = \frac{3}{2}$.
Mueller's results have been generalized to other settings including fractional heat equations [@bezdek-2018; @fln-2019], nonlinear Schrödinger equation [@bd-2002], and stochastic wave equation [@mueller-1997]. More recently, researchers have investigated the effects that adding superlinear deterministic forcing terms $f(u(t,x))$ to the right-hand side of [\[eq:SPDE\]](#eq:SPDE){reference-type="eqref" reference="eq:SPDE"} has on the finite time explosion of the stochastic heat equation [@bg-2009; @dkz-2019; @salins-2022; @fn-2021; @sz-2022; @ch-2023; @fp-2015; @lz-2022; @av-2023]. Similar explosion problems have been investigated for the stochastic wave equation [@fn-wave-2022; @ms-wave-2021]. Interestingly, in [@dkz-2019], for example, the authors prove that if the additional force $f(u)$ grows like $|u| \log(|u|)$ then $\sigma$ can grow like $|u| (\log(|u|))^{\frac{1}{4}}$ and solutions will never explode -- a much slower growth rate than the allowable $|u|^{\frac{3}{2}}$ growth rate when $f \equiv 0$. This $|u|(\log|u|)^{\frac{1}{4}}$ growth rate is not known to be optimal and it will be interesting to see whether explosion can occur if $\sigma(u) \approx (1 + |u|^\frac{3}{2})$ when $f$ grows superlinearly.
In the opposite setting where $f$ is strongly dissipative, $\sigma$ can grow faster than $|u|^\frac{3}{2}$ and solutions will not explode because the dissipative forcing counteracts the expansion due to the noise [@salins-2022-dissip]. Specifically, in this space-time white noise setting, if $f(u)\text{sign}(u) \leq -\mu |u|^\beta$ for some $\beta>3$, then $\sigma$ can grow like $C(1 +|u|^\gamma)$ for any $\gamma < \frac{\beta+3}{4}$ and solutions will not explode. In the setting of the current paper, $f \equiv 0$ and the maximal allowable growth rate for $\sigma$ is [\[eq:sigma-growth\]](#eq:sigma-growth){reference-type="eqref" reference="eq:sigma-growth"}.
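As a quick consistency check on these exponents, note that the dissipative threshold degenerates to the critical rate studied in this paper as the dissipation weakens: $$\lim_{\beta \downarrow 3} \frac{\beta+3}{4} = \frac{3}{2},$$ which matches the growth restriction [\[eq:sigma-growth\]](#eq:sigma-growth){reference-type="eqref" reference="eq:sigma-growth"} available when $f \equiv 0$.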
The mild solution to [\[eq:SPDE\]](#eq:SPDE){reference-type="eqref" reference="eq:SPDE"} is defined to be the solution to the integral equation $$\label{eq:mild-intro}
u(t,x) = \int_D G(t,x-y)u(0,y)dy + \int_0^t \int_D G(t-s,x-y) \sigma(u(s,y))W(dyds)$$ where $G(t,x)$ is the fundamental solution to the heat equation on $D$ with periodic boundary conditions. Because $\sigma$ is locally Lipschitz continuous, standard localization arguments prove that there exists a unique *local* mild solution to [\[eq:mild-intro\]](#eq:mild-intro){reference-type="eqref" reference="eq:mild-intro"} that exists until the explosion time $$\tau^\infty_\infty : =\sup_{n>0} \tau^\infty_n$$ where $$\tau^\infty_n := \inf\left\{t>0: \sup_{x \in D} |u(t,x)| \geq n\right\}.$$ A local mild solution *explodes in finite time* if $\tau^\infty_\infty < \infty$. A local mild solution is called a *global* mild solution if the solution never explodes with probability one, ${\mathbb{P}}(\tau^\infty_\infty=\infty)=1$.
The main result of this paper, Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} proves that when [\[eq:sigma-growth\]](#eq:sigma-growth){reference-type="eqref" reference="eq:sigma-growth"} is satisfied, the mild solution is global.
**Theorem 1**. *Assume that the initial data is $x \mapsto u(0,x)$ is a bounded, continuous, periodic function on $[-\pi,\pi]$ and assume that $\sigma$ is locally Lipschitz continuous and satisfies [\[eq:sigma-growth\]](#eq:sigma-growth){reference-type="eqref" reference="eq:sigma-growth"}. Then there exists a unique global mild solution to [\[eq:SPDE\]](#eq:SPDE){reference-type="eqref" reference="eq:SPDE"}.*
The method of proof is inspired by [@mueller-1991; @mueller-1998], but a new strategy is needed to prove non-explosion in the critical $\gamma =\frac{3}{2}$ setting. The first step is to prove that the $L^1$ norm of the solutions cannot explode. The fact that the $L^1$ norm cannot explode is easiest to see in the special case where $u(t,x)\geq0$ for all $t>0$ and all $x \in D$. Imposing the additional assumptions that $\sigma(0)=0$ and $u(0,x)\geq 0$, for example, would imply that $u(t,x)\geq0$ for all $t>0$ with probability one because of the comparison principle [@mueller-1991-support; @kotelenez-1992]. In the case of a positive solution, formally integrating mild solutions in space indicates that $$|u(t)|_{L^1} = \int_D u(t,x)dx = \int_D u(0,x)dx + \int_0^t \int_D \sigma(u(s,x))W(dsdx)$$ is a nonnegative one-dimensional martingale and therefore cannot explode in finite time. This argument can be made rigorous with stopping times.
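A formal way to see why the Laplacian does not contribute to the spatial average (we use this only as a heuristic; the rigorous argument goes through the mild formulation and stopping times) is to integrate the equation over the periodic domain: $$\frac{d}{dt}\int_D u(t,x)dx = \int_D \Delta u(t,x)dx + \int_D \sigma(u(t,x))\dot{W}(t,x)dx = \int_D \sigma(u(t,x))\dot{W}(t,x)dx,$$ because $\int_D \Delta u(t,x)dx = 0$ by the periodic boundary conditions.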
In the more general setting of this paper, where solutions $u(t,x)$ may take both positive and negative values, we follow the ideas of [@mueller-1998] to construct nonnegative processes $v(t,x)$ and $v_-(t,x)$ that almost surely dominate $u(t,x)$ in the sense that $$-v_-(t,x) \leq u(t,x) \leq v(t,x).$$ Specifically, let $\alpha>3$ and let $f(u) = u^{-\alpha}$. Let $v(t,x)$ be the mild solution to $$\frac{\partial v}{\partial t} = \Delta v(t,x) + f(v(t,x)) + \sigma(v(t,x))\dot{W}(t,x)$$ with initial data $v(0,x) = \max\{u(0,x),1\}$. Corollary 1.1 of [@mueller-1998] proves that solutions $v(t,x)$ remain nonnegative. The comparison principle of [@kotelenez-1992 Theorem 2.5] proves that $u(t,x) \leq v(t,x)$ with probability one. $v_-(t,x)$ is constructed similarly. Then if we can prove that $v(t,x)$ and $v_-(t,x)$ do not explode to $+\infty$ in finite time, then $u(t,x)$ cannot explode in finite time either.
We construct several stopping times to analyze these solutions. For any $n \in \mathbb{N}$ we define the $L^\infty$ stopping times $$\begin{aligned}
&\tau^\infty_n = \inf\{t>0: \sup_{x \in D} v(t,x) \geq n\},\\
&\tau^\infty_\infty = \sup_n \tau^\infty_n.\end{aligned}$$ The solution explodes in finite time if and only if $\tau^\infty_\infty <\infty$. Therefore, the goal of this paper is to prove that ${\mathbb{P}}(\tau^\infty_\infty = \infty) =1$. Because $f$ is unbounded near $0$, we also need to define the infimum stopping times for any ${\varepsilon}>0$, $$\label{eq:tau-inf-v}
\tau^{\inf}_{\varepsilon}= \inf\left\{t\in [0,\tau^\infty_\infty): \inf_{x \in D} v(t,x) \leq {\varepsilon}\right\}$$ Because $f(u)$ is Lipschitz continuous on $[{\varepsilon},\infty)$ for any ${\varepsilon}>0$ and $\sigma(u)$ is Lipschitz continuous for $u \in [0,n]$ for any $n>0$, there exists a local mild solution for $v(t,x)$ until the time $\tau^\infty_\infty \wedge \tau^{\inf}_0$ where $\wedge$ denotes the minimum.
Corollary 1.1 of [@mueller-1998] proves that $v(t,x)$ never hits zero. Specifically, for any $T>0$, $$\lim_{{\varepsilon}\to 0} {\mathbb{P}}(\tau^{\inf}_{\varepsilon}\leq T \wedge \tau^\infty_\infty) = 0.$$
For $M>0$, we define the $L^1$ stopping times $$\label{eq:tau1-v}
\tau^1_M := \inf\{t \in [0, \tau^\infty_\infty): |v(t)|_{L^1} >M\}$$ and we prove that the $L^1$ norm $\int_D v(t \wedge \tau^{\inf}_{\varepsilon}\wedge \tau^\infty_n,x)dx$ is a submartingale. Using Doob's submartingale inequality we can prove that for any $T>0$ and ${\varepsilon}>0$ the $L^1$ norm cannot explode before $T \wedge \tau^{\inf}_{\varepsilon}$. The estimates are independent of $n$.
The novel observation, which is necessary to extend Mueller's results to the critical case where $\gamma = \frac{3}{2}$, is that we can show that the expected value of the *quadratic variation* of the $L^1$ norm is also bounded in a way that is independent of $n$. We prove in Lemma [Lemma 7](#lem:L1){reference-type="ref" reference="lem:L1"} that $$\begin{aligned}
\label{eq:quad-var-intro}
&{\mathbb{E}}\int_0^{\tau^{\inf}_{\varepsilon}\wedge \tau^1_M }\int_D (\sigma(v(s,y)))^2dyds \leq M^2,\end{aligned}$$ an estimate that is independent of $n$ and ${\varepsilon}$.
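At a purely formal level, the source of this bound can be seen as follows (the proof of Lemma [Lemma 7](#lem:L1){reference-type="ref" reference="lem:L1"} makes this precise). Write $X_t = |v(t)|_{L^1}$ and $\tau = \tau^{\inf}_{\varepsilon}\wedge \tau^1_M$, and assume $M \geq |v(0)|_{L^1}$ (otherwise $\tau^1_M=0$ and there is nothing to prove). The process $X$ decomposes into $X_0$, a nondecreasing drift coming from $f \geq 0$, and a martingale part whose quadratic variation is $\int_0^t \int_D (\sigma(v(s,y)))^2dyds$. Since $0 \leq X_{t \wedge \tau} \leq M$ and the drift is nonnegative, applying Itô's formula to $X^2$ and taking expectations formally gives $${\mathbb{E}}\int_0^{t \wedge \tau}\int_D (\sigma(v(s,y)))^2dyds \leq {\mathbb{E}}\big[X_{t \wedge \tau}^2\big] \leq M^2.$$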
In Section [5](#S:moment){reference-type="ref" reference="S:moment"}, we prove an improved $L^\infty$ moment bound on the stochastic convolution, inspired by [@ch-2023], which may be of independent interest.
**Theorem 2**. *Let $p>6$. Assume that $\varphi(t,x)$ is an adapted random field such that $${\mathbb{E}}\int_0^T \int_D|\varphi(t,x)|^pdxdt < +\infty.$$ Define the stochastic convolution $$Z^\varphi(t,x) = \int_0^t \int_D G(t-s,x,y) \varphi(s,y)W(dyds).$$ For any $p>6$ there exists $C_p>0$, independent of $T>0$ and $L>0$, such that $$\label{eq:Linfty-moment}
{\mathbb{E}}\sup_{t \in [0,T]} \sup_{x \in D} |Z^\varphi(t,x)|^p
\leq C_p T^{\frac{p}{4} - \frac{3}{2}} {\mathbb{E}}\int_{0}^{T} \int_{D} |\varphi(s,y)|^p dy ds.$$*
We remark that in the case where there exists $L>0$ such that $${\mathbb{P}}\left(\sup_{t \in [0,T]} \sup_{x \in D} |\varphi(t,x)| \leq L\right) = 1,$$ an obvious upper bound of [\[eq:Linfty-moment\]](#eq:Linfty-moment){reference-type="eqref" reference="eq:Linfty-moment"} is $$C_p T^{\frac{p}{4} - \frac{3}{2}} {\mathbb{E}}\int_{0}^{T} \int_{D} |\varphi(s,y)|^p dy ds \leq C_pL^{p} T^{\frac{p}{4} - \frac{1}{2}}.$$ This looser upper bound can be used to prove non-explosion in the subcritical $\gamma< \frac{3}{2}$ regime. Unfortunately, this looser bound will not be helpful when we prove the main non-explosion result in the critical setting and we will need the tighter upper bound [\[eq:Linfty-moment\]](#eq:Linfty-moment){reference-type="eqref" reference="eq:Linfty-moment"}.
We then define a sequence of stopping times $\rho_n$ that keep track of when the $|v(t)|_{L^\infty}$ doubles or halves. The stopping times are defined so that $|v(\rho_n)|_{L^\infty} = 2^m$ for some $m \in \mathbb{N}$. Using all of the estimates mentioned above we can prove that for any ${\varepsilon}>0$ and $M>0$, the $L^\infty$ norm $|v(\rho_n)|_{L^\infty}$ can only double a finite number of times before the time $\tau^{\inf}_{\varepsilon}\wedge \tau^1_M$. This estimate relies on estimates of the quadratic variation of the $L^1$ norm [\[eq:quad-var-intro\]](#eq:quad-var-intro){reference-type="eqref" reference="eq:quad-var-intro"}, which were not required in the subcritical setting. Therefore, for any ${\varepsilon}>0$ and $M>0$, the explosion time $$\tau^\infty_\infty> ( \tau^{\inf}_{\varepsilon}\wedge \tau^1_M )\text{ with probability one.}$$ Taking the limit as $M \to \infty$ and ${\varepsilon}\to 0$, we can prove that explosion cannot occur in finite time.
In Section [2](#S:notation){reference-type="ref" reference="S:notation"} we introduce some notations and recall the properties of the heat kernel. In Section [3](#S:comparison){reference-type="ref" reference="S:comparison"} we introduce the positive solutions $v(t,x)$ and $v_-(t,x)$ and prove that they dominate $u(t,x)$. In Section [4](#S:L1){reference-type="ref" reference="S:L1"} we prove that the $L^1$ norm of the solutions and its quadratic variation remain finite in a way that does not depend on the $L^\infty$ norm of the solutions. In Section [5](#S:moment){reference-type="ref" reference="S:moment"} we prove the stochastic convolution moment bound Theorem [Theorem 2](#thm:stoch-conv){reference-type="ref" reference="thm:stoch-conv"}. Finally, in Section [6](#S:non-explosion){reference-type="ref" reference="S:non-explosion"} we prove that $v(t,x)$ cannot explode in finite time.
# Some notations and definitions {#S:notation}
The spatial domain is $D=[-\pi,\pi]$.
Let $L^p := L^p(D)$, $p\geq 1$ denote the standard $L^p$ spaces on $D$ endowed with the norms $$\begin{aligned}
&|\varphi|_{L^p} := \left(\int_D|\varphi(y)|^pdy\right)^{\frac{1}{p}}, \ \ \ p \in [1,\infty), \\
&|\varphi|_{L^\infty} := \sup_{x \in D} |\varphi(x)|.\end{aligned}$$
The driving noise $\dot{W}$ is a space-time white noise defined on a filtered probability space $(\Omega, \mathcal{F}, \mathcal{F}_t, {\mathbb{P}})$. This means that for any non-random $\psi, \varphi \in L^2([0,T]\times D)$, $$\int_0^T \int_D \varphi(s,y)W(dyds) \text{ and } \int_0^T \int_D \psi(s,y)W(dyds)$$ are mean-zero Gaussian random variables with covariance
$$\begin{aligned}
&{\mathbb{E}}\left( \int_0^T \int_D \varphi(s,y)W(dyds) \right)\left(\int_0^T \int_D \psi(s,y)W(dyds) \right) \nonumber\\
&= \int_0^T \int_D \varphi(s,y)\psi(s,y)dyds.\end{aligned}$$
If $\varphi(t,x)$ is an $\mathcal{F}_t$-adapted process then the stochastic integral $$\int_0^t \int_D \varphi(s,y)W(dyds)$$ is an Ito-Walsh integral [@walsh-1986].
The heat kernel on $D$ with periodic boundary is defined to be $$\label{eq:kernel}
G(t,x) = \frac{1}{\sqrt{2\pi}} + \sum_{k=1}^\infty \sqrt{\frac{2}{\pi}} e^{-|k|^2t } \cos(kx).$$
For any $\varphi \in L^2(D)$, $h(t,x) = \int_D G(t,x-y)\varphi(y)dy$ solves the linear heat equation $\frac{\partial h}{\partial t} = \Delta h$ with initial data $h(0,x) = \varphi(x)$.
**Lemma 3**. *The heat kernel has the following properties.*
1. *The heat kernel is nonnegative: $G(t,x) \geq0$ for all $t>0$, $x \in D$.*
2. *$|G(t,\cdot)|_{L^1} = \sqrt{2\pi}$.*
3. *There exists $C>0$ such that for any $t>0$, $$|G(t,\cdot)|_{L^\infty} \leq C t^{-\frac{1}{2}},$$*
*Proof.* The positivity of the heat kernel is a consequence of the comparison principle for linear heat equations. Specifically, let $\varphi: D \to \mathbb{R}$ be any nonnegative periodic function. $h(t,x) = \int_D G(t,x-y)\varphi(y)dy$ solves the heat equation and therefore satisfies the comparison principle. Therefore $h(t,x) \geq 0$ for all $t>0$ and $x \in D$ because $\varphi(x)\geq 0$ for all $x \in D$. This is true for any nonnegative $\varphi$, implying that $G(t,x)\geq 0$.
The $L^1$ norm claim can be calculated exactly because $G(t,x)$ is nonnegative and $\int_{-\pi}^\pi \frac{1}{\sqrt{2\pi}}dx = \sqrt{2\pi}$ and $\int_{-\pi}^\pi \cos(kx)dx =0$ for $k \geq 1$.
For the $L^\infty$ norm we notice that for any $t>0$ and $x \in D$ $$\begin{aligned}
& |G(t,x)| \leq G(t,0) \leq\sqrt{\frac{2}{\pi}} + \frac{1}{\sqrt{\pi}}\sum_{k=1}^\infty e^{-|k|^2 t} \nonumber\\
&\leq \sqrt{\frac{2}{\pi}}+ \sqrt{\frac{1}{\pi}}\int_0^\infty e^{-|x|^2t}dx \nonumber\\
&\leq \sqrt{\frac{2}{\pi}}+ \frac{1}{2}t^{-\frac{1}{2}}
\nonumber\\
&\leq C t^{-\frac{1}{2}}.
\end{aligned}$$ ◻
Throughout the paper we use the notation $a \wedge b = \min\{a,b\}$ and $C$ denotes an arbitrary constant whose value may change from line to line.
# Comparison to positive solutions {#S:comparison}
We follow the arguments of [@mueller-1998] to construct nonnegative stochastic processes that dominate $u(t,x)$. Specifically, let $f(u) = u^{-\alpha}$ for some $\alpha>3$.
Let $v(t,x)$ be the solution to $$\label{eq:v}
\frac{\partial v}{\partial t} (t,x) = \Delta v(t,x) + f(v(t,x)) + \sigma(v(t,x))\dot{W}(t,x)$$ with initial data $$v(0,x) = \max\{u(0,x),1\}$$
and let $v_-(t,x)$ be the solution to $$\label{eq:v-}
\frac{\partial v_-}{\partial t} (t,x) = \Delta v_-(t,x) + f(v_-(t,x)) + \sigma(-v_-(t,x))\dot{W}(t,x)$$ with initial data $$v_-(0,x) = \max\{-u(0,x),1\}.$$
$v(t,x)$ and $v_-(t,x)$ have the same properties. For this reason, we only prove results for $v(t,x)$ because the proofs for $v_-(t,x)$ are identical.
We now recall the standard arguments for the construction of the unique *local mild solution* to [\[eq:v\]](#eq:v){reference-type="eqref" reference="eq:v"}. For any ${\varepsilon}>0$ define $$f_{\varepsilon}(u) = (\max\{{\varepsilon},u\})^{-\alpha}.$$ Notice that for any ${\varepsilon}>0$, $f_{\varepsilon}$ is globally Lipschitz continuous. For any $n>0$ define $$\sigma_n(u) = \begin{cases}
\sigma(-n) & \text{ if } u<-n \\
\sigma(u) & \text{ if } u \in [-n,n] \\
\sigma(n) & \text{ if } u> n
\end{cases}.$$ For each $n>0$, $\sigma_n$ is globally Lipschitz continuous. Therefore by standard arguments [@dalang-1999; @cerrai-2003; @walsh-1986] for any ${\varepsilon}>0$ and $n>0$ there exists a unique global mild solution $v_{{\varepsilon},n}$ solving $$\begin{aligned}
v_{{\varepsilon},n}(t,x) = &\int_D G(t,x-y)v(0,y)dy + \int_0^t \int_D G(t-s,x-y)f_{\varepsilon}(v_{{\varepsilon},n}(s,y))dyds \nonumber\\
&+ \int_0^t \int_D G(t-s,x-y) \sigma_n(v_{{\varepsilon},n}(s,y))W(dyds)\end{aligned}$$ where $G(t,x)$ is the heat kernel defined in [\[eq:kernel\]](#eq:kernel){reference-type="eqref" reference="eq:kernel"}.
For any $0<{\varepsilon}<n$ define the stopping times $$\tilde{\tau}_{{\varepsilon},n} : = \inf\left\{t>0: \inf_{x \in D}v_{{\varepsilon},n}(t,x) < {\varepsilon}\text{ or } \sup_{x \in D}v_{{\varepsilon},n} >n \right\}$$ For any $0< {\varepsilon}_2 < {\varepsilon}_1<n_1< n_2$, the functions $f_{{\varepsilon}_1}(u) = f_{{\varepsilon}_2}(u)$ and $\sigma_{n_1}(u) = \sigma_{n_2}(u)$ for all $u \in [{\varepsilon}_1,n_1]$. Therefore, the uniqueness of solutions implies that these solutions are *consistent* in the sense that if $0< {\varepsilon}_2 < {\varepsilon}_1<n_1< n_2$ then $$v_{{\varepsilon}_1,n_1}(t,x) = v_{{\varepsilon}_2,n_2}(t,x) \text{ for all } x \in D \text{ and } t \in [0,\tilde{\tau}_{{\varepsilon}_1,n_1}].$$ We can, therefore, uniquely define the unique local mild solution by $$v(t,x) := v_{{\varepsilon},n}(t,x) \text{ for all } x \in D \text{ and } t \in [0,\tilde{\tau}_{{\varepsilon},n}]$$ and the local mild solution is well defined for all $t \in [0,\sup_{0<{\varepsilon}<n} \tilde{\tau}_{{\varepsilon},n}]$ and solves the integral equation $$\begin{aligned}
\label{eq:v-mild}
v(t,x) = &\int_D G(t,x-y)v(0,y)dy + \int_0^t \int_D G(t-s,x-y)f(v(s,y))dyds \nonumber\\
&+ \int_0^t \int_D G(t-s,x-y)\sigma(v(s,y))W(dyds).\end{aligned}$$ The construction of $v_-(t,x)$ is identical so we do not repeat the proof.
Define the infimum stopping times for ${\varepsilon}\in (0,1)$ $$\begin{aligned}
\label{eq:tau-inf}
&\tau^{\inf}_{\varepsilon}: = \inf\left\{t>0: \inf_{x \in D} v(t,x) < {\varepsilon}\right\}\\
&\tau^{\inf}_{{\varepsilon},-} : = \inf\left\{t>0: \inf_{x \in D} v_-(t,x)<{\varepsilon}\right\}\end{aligned}$$ and the $L^\infty$ stopping times for $n>1$ $$\begin{aligned}
&\tau^\infty_n: = \inf\left\{t>0: \sup_{x \in D} v(t,x) > n\right\} \label{eq:tau-inf-L}\\
&\tau^\infty_\infty : = \sup_{n>0} \tau^\infty_n \label{eq:tau-inf-inf}.\end{aligned}$$
$$\begin{aligned}
&\tau^\infty_{n,-}: = \inf\left\{t>0: \sup_{x \in D}v_-(t,x) > n\right\}\\
&\tau^\infty_{\infty,-} : = \sup_{n>0} \tau^\infty_{n,-}.\end{aligned}$$
The comparison principle of [@kotelenez-1992 Theorem 2.5] guarantees that the following holds.
**Proposition 4**. *With probability one $$-v_-(t,x) \leq u(t,x) \leq v(t,x)$$ for all $t \in [0, \tau^{\inf}_0 \wedge \tau^{\inf}_{0,-}\wedge \tau^\infty_\infty \wedge \tau^\infty_{\infty,-}]$ and for all $x \in [-\pi,\pi]$.*
*Proof.* The comparison principle of [@kotelenez-1992] is stated for heat equations with globally Lipschitz continuous $f(v)$ and $\sigma(v)$. But $f(v)$ and $\sigma(v)$ are both Lipschitz continuous for $v \in [{\varepsilon},n]$ for any $0< {\varepsilon}< n< \infty$. Therefore, with probability one, $$-v_-(t,x) \leq u(t,x) \leq v(t,x)$$ for all $t \in [0, \tau^{\inf}_{\varepsilon}\wedge \tau^{\inf}_{{\varepsilon},-} \wedge\tau^\infty_n \wedge\tau^\infty_{n,-}]$. Taking the limit as ${\varepsilon}\to 0$ and $n \to \infty$ proves the result. ◻
Corollary 1.1 of [@mueller-1998] proves that the $f(u) = u^{-\alpha}$ forcing and the nonnegative initial data of $v(0,x)$ prevent $v(t,x)$ from hitting zero. We restate this result below.
**Proposition 5** (Corollary 1.1 of [@mueller-1998]). *For any $T>0$ $$\lim_{{\varepsilon}\to 0}{\mathbb{P}}\left(\inf_{t \in [0,T\wedge \tau^\infty_\infty]}\inf_{x\in D} v(t,x)<{\varepsilon}\right)=0.$$*
We will prove that under the assumptions of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, the solutions of $v(t,x)$ cannot explode in finite time. Because $v_-(t,x)$ satisfies the same assumptions as $v(t,x)$, $v_-(t,x)$ cannot explode in finite time either.
**Theorem 6**. *Let $v(t,x)$, $v_-(t,x)$ be the local mild solutions to [\[eq:v\]](#eq:v){reference-type="eqref" reference="eq:v"} and [\[eq:v-\]](#eq:v-){reference-type="eqref" reference="eq:v-"}. Then both $\tau^\infty_\infty = \infty$ and $\tau^\infty_{\infty,-}=\infty$ with probability one.*
We will prove Theorem [Theorem 6](#thm:v-no-explode){reference-type="ref" reference="thm:v-no-explode"} in Section [6](#S:non-explosion){reference-type="ref" reference="S:non-explosion"}. Then the comparison principle, Proposition [Proposition 4](#prop:comparison){reference-type="ref" reference="prop:comparison"}, guarantees that $u(t,x)$ cannot explode in finite time. The main result of our paper, Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, will hold once we prove Theorem [Theorem 6](#thm:v-no-explode){reference-type="ref" reference="thm:v-no-explode"}.
*Proof of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}, assuming that Theorem [Theorem 6](#thm:v-no-explode){reference-type="ref" reference="thm:v-no-explode"} holds.* By the comparison principle, Proposition [Proposition 4](#prop:comparison){reference-type="ref" reference="prop:comparison"}, $$-v_-(t,x) \leq u(t,x) \leq v(t,x)$$ for all $t \in [0, \tau^{\inf}_0 \wedge \tau^{\inf}_{0,-}\wedge \tau^\infty_\infty \wedge \tau^\infty_{\infty,-}]$ and all $x \in [-\pi,\pi]$. Theorem [Theorem 6](#thm:v-no-explode){reference-type="ref" reference="thm:v-no-explode"} proves that $\tau^\infty_\infty = \tau^\infty_{\infty,-} = \infty$ with probability one. Proposition [Proposition 5](#prop:v-positive){reference-type="ref" reference="prop:v-positive"} proves that, with probability one, $$\tau^{\inf}_0 \geq T \wedge \tau^\infty_\infty \qquad \text{and} \qquad \tau^{\inf}_{0,-} \geq T \wedge \tau^\infty_{\infty,-}$$ for any $T>0$, the second bound holding because $v_-$ satisfies the same assumptions as $v$. This is true for arbitrary $T>0$ and therefore $\tau^{\inf}_0 = \tau^{\inf}_{0,-} = \infty$.
Therefore $u(t,x)$ can never explode. ◻
The rest of the paper is devoted to proving Theorem [Theorem 6](#thm:v-no-explode){reference-type="ref" reference="thm:v-no-explode"}.
# The $L^1$ norm of $v(t,x)$ {#S:L1}
The first step to prove that the solutions to $v(t,x)$ do not explode is to prove that the $L^1$ norms of solutions do not explode.
Let $v(t,x)$ be the nonnegative local mild solution to [\[eq:v\]](#eq:v){reference-type="eqref" reference="eq:v"}. Define for $t \in [0, \tau^\infty_\infty]$ $$|v(t)|_{L^1}: = \int_D v(t,x)dx.$$
Define the $L^1$ stopping times for $M>0$ $$\label{eq:tau-1-M}
\tau^1_M: = \inf\{t \in [0,\tau^\infty_\infty]: |v(t)|_{L^1} >M\}.$$
**Lemma 7**. *For any $T>0$, ${\varepsilon}>0$ and $M>0$, $${\mathbb{P}}\left(\sup_{t \in [0,T \wedge \tau^\infty_\infty \wedge \tau^{\inf}_{\varepsilon}] } |v(t)|_{L^1} > M\right) \leq \frac{|u(0)|_{L^1} + 2 \pi T {\varepsilon}^{-\alpha}}{M},$$ In particular, $${\mathbb{P}}\left(\sup_{t \in [0,T \wedge \tau^\infty_\infty \wedge \tau^{\inf}_{\varepsilon}] } |v(t)|_{L^1}< \infty\right)=1.$$ Furthermore, for any $M>0$ and ${\varepsilon}>0$, the quadratic variation of $|v(t)|_{L^1}$ satisfies $$\begin{aligned}
\label{eq:L3-bound}
&{\mathbb{E}}\int_0^{ \tau^1_M \wedge \tau^{\inf}_{\varepsilon}} \int_D |\sigma(v(s,y))|^2 dyds \leq M^2.
\end{aligned}$$*
*Proof.* Let $n>0$ be big and ${\varepsilon}>0$ be small enough so that ${\varepsilon}< v(0,x) < n$ for all $x \in D$. Let $$\begin{aligned}
&I_{n,{\varepsilon}}(t ):=\int_Dv(t \wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon},x)dx.
\end{aligned}$$ The $\tau^{\inf}_{\varepsilon}$ stopping time guarantees that $v(t \wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon},x)\geq {\varepsilon}$ so that $I_{n,{\varepsilon}}$ is the $L^1$ norm $|v(t \wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon})|_{L^1}$. Integrating the mild solution [\[eq:v-mild\]](#eq:v-mild){reference-type="eqref" reference="eq:v-mild"} and using the fact that $\int_DG(t,x-y)dx = 1$, $$\begin{aligned}
\label{eq:L1-martingale}
I_{n,{\varepsilon}}(t)= &\int_D v(0,y)dy + \int_0^{t\wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon}} \int_D f(v(s,x))dxds \nonumber\\
&+ \int_0^{t\wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon}} \int_D \sigma(v(s,y))W(dyds).
\end{aligned}$$ $I_{n,{\varepsilon}}(t)$ is a nonnegative submartingale because $f(v)>0$ and because the stochastic integral in [\[eq:L1-martingale\]](#eq:L1-martingale){reference-type="eqref" reference="eq:L1-martingale"} is a martingale. Therefore, for any $M>0$ and $T>0$, by Doob's inequality $$
{\mathbb{P}}\left( \sup_{t \in [0,T]} I_{n,{\varepsilon}}(t) > M\right) \leq \frac{{\mathbb{E}}I_{n,{\varepsilon}}(T)}{M} \leq \frac{|v(0)|_{L^1} + 2\pi T{\varepsilon}^{-\alpha}}{M}$$ because $f(v) \leq {\varepsilon}^{-\alpha}$ when $v>{\varepsilon}$, the length of $D=[-\pi,\pi]$ is $2\pi$, and because the expectation of the stochastic integral in [\[eq:L1-martingale\]](#eq:L1-martingale){reference-type="eqref" reference="eq:L1-martingale"} is zero. This bound does not depend on $n$. Therefore, $${\mathbb{P}}\left(\sup_{t \in [0, T \wedge \tau^{\inf}_{\varepsilon}\wedge \tau^\infty_\infty]} \int_D v(t,x)dx > M \right) \leq \frac{|v(0)|_{L^1} + 2\pi T{\varepsilon}^{-\alpha}}{M}.$$ Now take $M \uparrow \infty$ to see that $${\mathbb{P}}\left(\sup_{t \in [0, T \wedge \tau^{\inf}_{\varepsilon}\wedge \tau^\infty_\infty]} \int_D v(t,x)dx < \infty \right)=1.$$
Now we apply Itô's formula to [\[eq:L1-martingale\]](#eq:L1-martingale){reference-type="eqref" reference="eq:L1-martingale"}. For any $M>0$, $n>0$, ${\varepsilon}>0$, $$\begin{aligned}
&{\mathbb{E}}(I_{n,{\varepsilon}}(t \wedge \tau^1_M))^2 \nonumber\\
&= {\mathbb{E}}(I_{n,{\varepsilon}}(0))^2 + 2{\mathbb{E}}\int_0^{t \wedge \tau^\infty_n\wedge \tau^{\inf}_{\varepsilon}\wedge \tau^1_M} \int_D f(v(s,y))I_{n,{\varepsilon}}(s)dyds \nonumber\\
&\qquad+ {\mathbb{E}}\int_0^{t \wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon}\wedge \tau^1_M} \int_D |\sigma(v(s,y))|^2dyds.
\end{aligned}$$ Each term on the right-hand side is nonnegative and ${\mathbb{E}}(I_{n,{\varepsilon}}(t \wedge \tau^1_M))^2 \leq M^2$ by the definition of $\tau^1_M$. Therefore, $${\mathbb{E}}\int_0^{t \wedge \tau^\infty_n \wedge \tau^{\inf}_{\varepsilon}\wedge \tau^1_M} \int_D |\sigma(v(s,y))|^2dyds \leq M^2.$$ This bound does not depend on $n$, ${\varepsilon}$, or $t$. ◻
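As an aside, the Doob-type bound used above, ${\mathbb{P}}\left(\sup_{t \leq T} X_t > M\right) \leq {\mathbb{E}}X_T/M$ for a nonnegative submartingale, can be checked numerically on a toy example; the random-walk submartingale in the sketch below is unrelated to the SPDE and serves only as a sanity check of the inequality.

``` python
import numpy as np

# Monte Carlo illustration (independent of the SPDE) of the Doob-type bound
# P(sup_{k<=n} X_k > M) <= E[X_n]/M on a toy nonnegative submartingale
# X_k = |S_k|, where S_k is a simple symmetric random walk.
rng = np.random.default_rng(1)
n_paths, n_steps, M = 20000, 200, 15.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
S = np.cumsum(steps, axis=1)
X = np.abs(S)                                  # nonnegative submartingale
lhs = np.mean(X.max(axis=1) > M)               # empirical P(sup X > M)
rhs = np.mean(X[:, -1]) / M                    # empirical E[X_n]/M
print(f"P(sup X > M) ~ {lhs:.4f}  <=  E[X_n]/M ~ {rhs:.4f}")
```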
# Moment estimates on the stochastic convolution {#S:moment}
In this section we prove the moment estimate Theorem [Theorem 2](#thm:stoch-conv){reference-type="ref" reference="thm:stoch-conv"}.
*Proof of Theorem [Theorem 2](#thm:stoch-conv){reference-type="ref" reference="thm:stoch-conv"}.* Let $p>6$ and assume that $\varphi(t,x)$ is adapted and $${\mathbb{E}}\int_0^T \int_D |\varphi(t,x)|^pdxdt < +\infty.$$ We use Da Prato and Zabczyk's factorization method [@dpz Theorem 5.10]. Given $p>6$ let $\beta \in \left(\frac{3}{2p}, \frac{1}{4}\right)$ and define $$Z^\varphi_\beta(t,x)= \int_0^t \int_D (t-s)^{-\beta}G(t-s,x,y) \varphi(s,y)W(dyds).$$ Then $$Z^\varphi(t,x) = \frac{\sin(\pi\beta)}{\pi} \int_0^t\int_D (t-s)^{\beta - 1} G(t-s,x,y)Z^\varphi_\beta(s,y)dyds.$$ We can get supremum bounds on $Z^\varphi(t,x)$ by Hölder's inequality. This method was used, for example, by Chen and Huang [@ch-2023 Proof of Theorem 1.6]. $$\begin{aligned}
\sup_{t \in [0,T]} \sup_{x \in D} |Z^\varphi(t,x)|
\leq &C\left(\int_0^t \int_D(t-s)^{\frac{(\beta-1)p}{p-1}}G^{\frac{p}{p-1}}(t-s,x-y)dyds\right)^{\frac{p-1}{p}} \nonumber\\
&\times\left( \int_0^T\int_D |Z^\varphi_\beta(t,x)|^pdxdt\right)^{\frac{1}{p}}.
\end{aligned}$$ The integral $$\int_D G^{\frac{p}{p-1}}(t-s,x-y)dy \leq |G(t-s)|_{L^1}|G(t-s)|_{L^\infty}^{\frac{p}{p-1} -1} \leq C (t-s)^{-\frac{1}{2(p-1)}}$$ because of Lemma [Lemma 3](#lem:kernel-estimates){reference-type="ref" reference="lem:kernel-estimates"}. Because we chose $p\beta > \frac{3}{2}$, it follows that $\frac{(\beta -1)p - \frac{1}{2}}{p-1} > -1$ and therefore $$\begin{aligned}
\label{eq:Z-factored}
&{\mathbb{E}}\sup_{t \in [0,T]} \sup_{x \in D} |Z(t,x)|^p\nonumber\\
&\leq C \left(\int_0^t (t-s)^{\frac{(\beta -1)p - \frac{1}{2}}{p-1}}ds\right)^{p-1}{\mathbb{E}}\int_0^T \int_D |Z^\varphi_\beta(t,x)|^pdxdt \nonumber\\
&\leq C T^{\beta p - \frac{3}{2}} {\mathbb{E}}\int_0^T \int_D |Z^\varphi_\beta(t,x)|^pdxdt
\end{aligned}$$ It remains to estimate ${\mathbb{E}}\int_0^T \int_D |Z^\varphi_\beta(t,x)|^p dx dt$. By the BDG inequality, $${\mathbb{E}}|Z^\varphi_\beta(t,x)|^p \leq C_p {\mathbb{E}}\left( \int_0^t\int_D G^2(t-s,x-y)(t-s)^{-2\beta} |\varphi(s,y)|^2 dyds \right)^{\frac{p}{2}}.$$ By Young's inequality for convolutions, $$\begin{aligned}
&\int_0^T \int_D {\mathbb{E}}|Z^\varphi_\beta(t,x)|^pdxdt \nonumber\\
&\leq C_p \left(\int_0^T \int_D G^{2}(s,y)s^{-2 \beta}dyds \right)^{\frac{p}{2}} \left(\int_0^T \int_D {\mathbb{E}}(|\varphi(s,y)|^p) dyds \right)
\nonumber\\
&\leq C_p \left(\int_0^T s^{-\frac{1}{2}-2 \beta}ds \right)^{\frac{p}{2}} \left(\int_0^T \int_D {\mathbb{E}}(|\varphi(s,y)|^p) dyds \right) \nonumber\\
&\leq C_p T^{\frac{p}{4} - p \beta } {\mathbb{E}}\int_0^T \int_D |\varphi(s,y)|^pdyds.
\end{aligned}$$ In the second-to-last line we used Lemma [Lemma 3](#lem:kernel-estimates){reference-type="ref" reference="lem:kernel-estimates"} to estimate that $$\int_DG^2(s,y)dy \leq |G(s,\cdot)|_{L^\infty}|G(s,\cdot)|_{L^1} \leq C s^{-\frac{1}{2}}.$$ Combining this with [\[eq:Z-factored\]](#eq:Z-factored){reference-type="eqref" reference="eq:Z-factored"} we conclude that $${\mathbb{E}}\sup_{t \in [0,T]} \sup_{x \in D} |Z(t,x)|^p \leq C T^{\frac{p}{4} - \frac{3}{2}} {\mathbb{E}}\int_0^T \int_D |\varphi(s,y)|^p dyds.$$ ◻
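The two kernel estimates from Lemma [Lemma 3](#lem:kernel-estimates){reference-type="ref" reference="lem:kernel-estimates"} that drive the exponents above, $|G(s,\cdot)|_{L^\infty} \leq Cs^{-\frac{1}{2}}$ and $\int_D G^2(s,y)dy \leq Cs^{-\frac{1}{2}}$, can also be checked numerically. The sketch below assumes that the periodic heat kernel on $D=[-\pi,\pi]$ is given by the usual image sum of Gaussians (the precise formula [\[eq:kernel\]](#eq:kernel){reference-type="eqref" reference="eq:kernel"} is stated earlier in the paper and is not reproduced here); it only illustrates the scaling in $s$.

``` python
import numpy as np

# Quick numerical sanity check (illustration only) of the kernel estimates
# |G(s,.)|_{L^inf} <= C s^{-1/2} and int_D G(s,y)^2 dy <= C s^{-1/2}.
# The periodic heat kernel on D = [-pi, pi] is approximated by the image sum
# of Gaussians; this concrete formula for G is an assumption made for the demo.
def G_periodic(s, y, images=20):
    k = np.arange(-images, images + 1)
    return np.sum(np.exp(-(y[:, None] + 2 * np.pi * k) ** 2 / (4 * s)),
                  axis=1) / np.sqrt(4 * np.pi * s)

y = np.linspace(-np.pi, np.pi, 4001)
dy = y[1] - y[0]
for s in [0.1, 0.01, 0.001]:
    Gs = G_periodic(s, y)
    sup_norm = Gs.max()
    l2_sq = np.sum(Gs ** 2) * dy
    # both rescaled quantities should stay (roughly) constant as s decreases
    print(f"s={s:6.3f}  sqrt(s)*|G|_inf={np.sqrt(s)*sup_norm:.3f}  "
          f"sqrt(s)*int G^2={np.sqrt(s)*l2_sq:.3f}")
```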
# Non-explosion of $v(t,x)$ {#S:non-explosion}
Let $M>0$ and ${\varepsilon}>0$ be arbitrary. We will show that $v(t,x)$ cannot explode before time $\tau^1_M \wedge \tau^{\inf}_{\varepsilon}$. After we prove this, we can take the limits as $M \to \infty$ in Lemma [Lemma 7](#lem:L1){reference-type="ref" reference="lem:L1"} and ${\varepsilon}\to 0$ in Proposition [Proposition 5](#prop:v-positive){reference-type="ref" reference="prop:v-positive"} to prove that explosion cannot ever occur.
Fix ${\varepsilon}>0$, $M>0$ and define a sequence of stopping times $\rho_n$. These stopping times depend on the choices of ${\varepsilon}$ and $M$. $$\begin{aligned}
&\rho_{0} =\inf\{t \in [0, \tau^{\inf}_{\varepsilon}\wedge \tau^1_M]: |v(t)|_{L^\infty} = 2^m \text{ for some } m \in \{1,2,3, ...\}\}.\end{aligned}$$ Then if $|v(\rho_n)|_{L^\infty} = 2^m$ for $m \geq 2$ we define $$\begin{aligned}
&\rho_{n+1} =
\inf\left\{t \in [\rho_{n}, \tau^{\inf}_{\varepsilon}\wedge \tau^1_M]: \begin{matrix} |v(t)|_{L^\infty}\geq 2^{m+1} \\
\text{ or } |v(t)|_{L^\infty} \leq 2^{m-1} \end{matrix} \right\},\label{eq:rho-def}\end{aligned}$$ and if $|v(\rho_n)|_{L^\infty} = 2$ then $$\begin{aligned}
&\rho_{n+1} = \inf\left\{t \in [\rho_{n}, \tau^{\inf}_{\varepsilon}\wedge \tau^1_M]: |v(t)|_{L^\infty}\geq 2^{2} \right\}.\end{aligned}$$ These times keep track of how long it takes for the $L^\infty$ norm of the process to either double or halve. We use the convention that $\rho_{n+1} = \tau^{\inf}_{\varepsilon}\wedge \tau^1_M$ if the process stops doubling or halving after $\rho_n$. If the process were to explode before time $\tau^{\inf}_{\varepsilon}\wedge \tau^1_M$, then the $L^\infty$ norm would need to double infinitely many times before $\tau^{\inf}_{\varepsilon}\wedge \tau^1_M$. We prove that the process does not explode by proving that there can only be a finite number of times that the $L^\infty$ norm doubles when $m$ is big.
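For intuition, the bookkeeping behind the stopping times $\rho_n$ can be mimicked on a discretely sampled path of $t \mapsto \sup_{x\in D}|v(t,x)|$. The helper below is only a sketch: the path is synthetic and the discrete sampling is an approximation of the continuous-time definition.

``` python
import numpy as np

# Sketch of the doubling/halving bookkeeping behind the stopping times rho_n.
# Given a sampled path of the sup-norm (here a fabricated array), record the
# successive times at which the running dyadic level 2^m doubles or halves.
def doubling_halving_times(sup_norm, times):
    events = []               # list of (time, new level exponent m)
    m = None
    for t, val in zip(times, sup_norm):
        if m is None:
            if val >= 2.0:    # rho_0: first time the norm reaches a level 2^m, m >= 1
                m = int(np.floor(np.log2(val)))
                events.append((t, m))
            continue
        if val >= 2 ** (m + 1):          # doubling
            m += 1
            events.append((t, m))
        elif m >= 2 and val <= 2 ** (m - 1):   # halving (only tracked above level 2)
            m -= 1
            events.append((t, m))
    return events

times = np.linspace(0.0, 1.0, 2001)
path = 1.5 + 6.0 * times + 2.0 * np.sin(40 * times)   # synthetic sup-norm path
for t, m in doubling_halving_times(path, times):
    print(f"t = {t:.3f}: level changed to 2^{m} = {2**m}")
```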
Next we recall a result that proves that the $L^\infty$ norm falls quickly if the $L^1$ norm is bounded.
**Lemma 8**. *There exists $C>0$ such that if $v \in L^1(D)$ then for any $t\in [0,1]$,\
$$\label{eq:Linfty-falls}
\int_{D}G(t,x-y)v(y)dy \leq C t^{-\frac{1}{2}} |v|_{L^1}.$$*
*Proof.* We proved in Lemma [Lemma 3](#lem:kernel-estimates){reference-type="ref" reference="lem:kernel-estimates"} that $|G(t,\cdot)|_{L^\infty} \leq C t^{-\frac{1}{2}}$. Therefore, for any $v \in L^1$, [\[eq:Linfty-falls\]](#eq:Linfty-falls){reference-type="eqref" reference="eq:Linfty-falls"} holds. ◻
**Lemma 9**. *For any $p>6$ there exists a nonrandom constant $C_p>0$ and for any ${\varepsilon}>0$ and $M>0$ there exists a nonrandom constant $m_0=m_0({\varepsilon},M)>0$ such that for any $n \in \mathbb{N}$, and $m>m_0$ $$\begin{aligned}
&{\mathbb{P}}\left( |v(\rho_{n+1})|_{L^\infty} = 2^{m+1} \Big| |v(\rho_n)|_{L^\infty} = 2^m \right) \nonumber\\
&\leq
C_p M^{\frac{p}{2} - 3} {\mathbb{E}}\left( \int_{\rho_{n} }^{\rho_{n+1}} \int_D(\sigma(v(s,y)))^2 dyds \Bigg| |v(\rho_n)|_{L^\infty} = 2^m \right)
\end{aligned}$$ Importantly, the constant $C_p$ is independent of $m > m_0$.*
*Proof.* Let $M>0$ and assume that $2^m = |v(\rho_{n} )|_{L^\infty}$. By the semigroup property of the heat semigroup, the mild solution for $t \in [0, \rho_{n+1}- \rho_{n} ]$, satisfies $$\begin{aligned}
&v((t + \rho_{n}) , x) \nonumber\\
&= \int_D G(t,x-y) v(\rho_{n} ,y)dy \nonumber\\
&\qquad + \int_0^t \int_D G(t-s,x-y) f(v(s + \rho_n,y))dyds \nonumber\\
&\quad+ \int_0^t \int_D G(t-s,x-y) \sigma(v(s + \rho_n,y))\mathbbm{1}_{\{s\leq \rho_{n+1}-\rho_n\}} W(dy(ds + \rho_{n})) \nonumber\\
&=: S_n(t,x) + K_n(t,x) + Z_n(t,x)
\end{aligned}$$
By Lemma [Lemma 8](#lem:Linfty-falls){reference-type="ref" reference="lem:Linfty-falls"} and the fact that $|v(\rho_{n})|_{L^1}\leq M$ (remember that by definition $\rho_n \leq \tau^{\inf}_{\varepsilon}\wedge \tau^1_M$), it follows that for $t\in (0,1)$, $$\label{eq:S-bound}
|S_n(t)|_{L^\infty} \leq CM t^{-\frac{1}{2}}.$$ Let $T_m = \frac{C^2M^2}{2^{2m-6}}$ so that $|S_n(T_m)|_{L^\infty} \leq 2^{m-3}$. We can bound $$\sup_{t \leq T_m} \sup_{x \in D} |K_n(t,x)| \leq 2\pi T_m {\varepsilon}^{-\alpha} = \frac{2\pi C^2 M^2}{{\varepsilon}^\alpha 2^{2m-6}} ,$$ because $f(v(s,y)) \leq {\varepsilon}^{-\alpha}$ for all $s \leq \rho_{n+1} \leq \tau^{\inf}_{\varepsilon}$. Choose $m_0 = m_0({\varepsilon},M)$ large enough so that for all $m > m_0$, $T_m \leq 1$ and $$\label{eq:K-bound}
\sup_{t \leq T_m} \sup_{x \in D} |K_n(t,x)| \leq \frac{2\pi C^2 M^2}{{\varepsilon}^\alpha 2^{2m-6}} < 2^{m-3}.$$
Theorem [Theorem 2](#thm:stoch-conv){reference-type="ref" reference="thm:stoch-conv"} with $$\varphi(t,x) := \sigma( v(\rho_{n} + t,x) )\mathbbm{1}_{\{t\leq \rho_{n+1}-\rho_n\}}$$ and the Chebyshev inequality guarantee that $$\begin{aligned}
&{\mathbb{P}}\left(\sup_{t \leq T_m} \sup_{x \in D} |Z_n((t + \rho_n) ,x)| > 2^{m-2} \Big| |v(\rho_n)|_{L^\infty} = 2^m\right) \nonumber\\
&\leq 2^{-p(m-2)}{\mathbb{E}}\left(\sup_{t \leq T_m} \sup_{x \in D} |Z_n((t + \rho_n) ,x)|^p \Big| |v(\rho_n)|_{L^\infty}=2^m\right)\nonumber\\
&\leq C 2^{-p(m-2)} T_m^{\left(\frac{p}{4} - \frac{3}{2}\right)} \nonumber\\
&\qquad\times{\mathbb{E}}\left( \int_{\rho_{n}}^{(\rho_{n} + T_m) \wedge \rho_{n+1}} \int_D (\sigma(v(s,y)))^p dyds \Big| |v(\rho_n)|_{L^\infty} = 2^m\right) .
\end{aligned}$$ Because $|v(s,y)| \leq 2^{m+1}$ for $s \leq \rho_{n+1}$, our $\sigma$ growth restriction [\[eq:sigma-growth\]](#eq:sigma-growth){reference-type="ref" reference="eq:sigma-growth"} guarantees that $|\sigma(v(s,y))| \leq C(1 + 2^{\frac{3(m+1)}{2}}) \leq C 2^{\frac{3m}{2}}$. We bound $$|\sigma(v(s,y))|^p \leq|\sigma(v(s,y))|^{p-2}|\sigma(v(s,y))|^2 \leq C2^{\left(\frac{3pm}{2} - 3m\right)}|\sigma(v(s,y))|^2$$ and therefore $$\begin{aligned}
&{\mathbb{P}}\left(\sup_{t \leq T_m} \sup_{x \in D} |Z_n((t + \rho_n) ,x)| > 2^{m-2} \Big| |v(\rho_n)|_{L^\infty} = 2^m\right) \nonumber\\
&\leq C 2^{-p(m-2)} 2^{\left(\frac{3pm}{2} - 3m\right)}T_m^{\left(\frac{p}{4} - \frac{3}{2}\right)} \nonumber\\
&\qquad\times{\mathbb{E}}\left( \int_{\rho_{n}}^{(\rho_{n} + T_m) \wedge \rho_{n+1}} \int_D (\sigma(v(s,y)))^2 dyds \Big| |v(\rho_n)|_{L^\infty} = 2^m\right)
\nonumber\\
&\leq C 2^{\left(\frac{pm}{2} - 3m\right)}T_m^{\left(\frac{p}{4} - \frac{3}{2}\right)} \nonumber\\
&\qquad\times{\mathbb{E}}\left( \int_{\rho_{n}}^{(\rho_{n} + T_m) \wedge \rho_{n+1}} \int_D (\sigma(v(s,y)))^2 dyds \Big| |v(\rho_n)|_{L^\infty} = 2^m\right)
\end{aligned}$$ $T_m$ is defined in such a way that $T_m^{\frac{1}{2}}2^m \leq C M$. This means that $$2^{\left(\frac{pm}{2} - 3m\right)}T_m^{\left(\frac{p}{4} - \frac{3}{2}\right)} \leq C M^{\frac{p}{2}-3} .$$ We also can bound $(\rho_{n} + T_m) \wedge \rho_{n+1} \leq \rho_{n+1}$. Therefore, $$\begin{aligned}
&{\mathbb{P}}\left(\sup_{t \leq T_m} \sup_{x \in D} |Z_n((t + \rho_n) ,x)| > 2^{m-2}\Big| |v(\rho_n)|_{L^\infty} = 2^m \right) \nonumber\\
&\leq C M^{ \frac{p}{2} - 3} {\mathbb{E}}\left(\int_{\rho_{n}}^{\rho_{n+1}} \int_D (\sigma(v(s,y)))^2 dyds \Big| |v(\rho_n)|_{L^\infty} = 2^m\right) .
\end{aligned}$$ Finally we prove that if the event $$\label{eq:event}
\left\{\sup_{t \leq T_m} \sup_{x \in D} |Z_n((t + \rho_n) ,x)| \leq 2^{m-2} \right\}$$ occurs, then the $L^\infty$ norm falls to $2^{m-1}$ before it can reach $2^{m+1}$. Specifically, because [\[eq:S-bound\]](#eq:S-bound){reference-type="eqref" reference="eq:S-bound"}--[\[eq:K-bound\]](#eq:K-bound){reference-type="eqref" reference="eq:K-bound"} prove that $|S_n(T_m)|_{L^\infty} + |K_n(T_m)|_{L^\infty} \leq 2^{m-2}$, it follows that $|v(\rho_n + T_m) |_{L^\infty} \leq 2^{m-1}$ on the event [\[eq:event\]](#eq:event){reference-type="eqref" reference="eq:event"}. On the other hand, $|S_n(t)|_{L^\infty} \leq |v(\rho_n)|_{L^\infty} = 2^m$ for all $t \in [0,T_m]$ because the heat semigroup is a contraction on $L^\infty$, and it follows that on the event [\[eq:event\]](#eq:event){reference-type="eqref" reference="eq:event"}, $\sup_{t \leq T_m} |v(\rho_n + t)|_{L^\infty} \leq 2^m + 2^{m-3} + 2^{m-2} < 2^{m+1}$. This implies that if the event [\[eq:event\]](#eq:event){reference-type="eqref" reference="eq:event"} occurs, then $|v(\rho_n + t)|_{L^\infty}$ falls to the level $2^{m-1}$ before it can rise to the level $2^{m+1}$. Therefore, for $m>m_0$ $$\begin{aligned}
&{\mathbb{P}}\left( |v(\rho_{n+1} )|_{L^\infty} = 2^{m+1} \Big| |v(\rho_{n} )|_{L^\infty} = 2^m \right) \nonumber\\
&\leq {\mathbb{P}}\left(\sup_{t \leq T_m} \sup_{x \in D} |Z_n((t + \rho_n) ,x)| > 2^{m-2} \Big| |v(\rho_{n} )|_{L^\infty} = 2^m \right)\nonumber\\
&\leq
C_p M^{\frac{p}{2} - 3} {\mathbb{E}}\left(\int_{\rho_{n} }^{\rho_{n+1}} \int_D (\sigma(v(s,y)))^2 dyds \Big| |v(\rho_n)|_{L^\infty} = 2^m\right).
\end{aligned}$$ ◻
*Proof of Theorem [Theorem 6](#thm:v-no-explode){reference-type="ref" reference="thm:v-no-explode"}.* Fix $M>0, {\varepsilon}>0$ and let $\rho_{n}$ be defined from [\[eq:rho-def\]](#eq:rho-def){reference-type="eqref" reference="eq:rho-def"}. Let $m_0$ be from [\[eq:K-bound\]](#eq:K-bound){reference-type="eqref" reference="eq:K-bound"}. We add up the conditional probabilities from Lemma [Lemma 9](#lem:doubling-probability){reference-type="ref" reference="lem:doubling-probability"} to see that for any $n \in \mathbb{N}$, $$\begin{aligned}
&{\mathbb{P}}\left(|v(\rho_{n + 1} ) |_{L^\infty}= 2 |v(\rho_{n})|_{L^\infty} \text{ and } |v(\rho_{n})|_{L^\infty} > 2^{m_0} \right)\nonumber\\
&\leq \sum_{m=m_0+1}^\infty C M^{\frac{p}{2}-3} {\mathbb{E}}\left(\int_{\rho_n}^{\rho_{n+1}} \int_D (\sigma(v(s,y)))^2dyds \Big| |v(\rho_n)|_{L^\infty} = 2^m\right) {\mathbb{P}}\left(|v(\rho_n)|_{L^\infty} = 2^m \right)\nonumber\\
&\leq C M^{\frac{p}{2}-3} {\mathbb{E}}\left(\int_{\rho_n}^{\rho_{n+1}} \int_D (\sigma(v(s,y)))^2dyds \right).
\end{aligned}$$ Now add these probabilities with respect to $n$ $$\begin{aligned}
\label{eq:BC}
&\sum_{n=1}^\infty {\mathbb{P}}\left(|v(\rho_{n + 1} ) |_{L^\infty}= 2 |v(\rho_{n})|_{L^\infty} \text{ and } |v(\rho_{n})|_{L^\infty} > 2^{m_0} \right)\nonumber\\
&\leq C M^{\frac{p}{2} - 3} \sum_{n=1}^\infty {\mathbb{E}}\int_{\rho_{n} }^{\rho_{n+1}} \int_D (\sigma(v(s,y)))^2 dyds \nonumber \\
&\leq C M^{\frac{p}{2} - 3} {\mathbb{E}}\int_{0}^{\tau^{\inf}_{\varepsilon}\wedge \tau^1_M} \int_D (\sigma(v(s,y)))^2 dyds.
\end{aligned}$$ The last line is a consequence of the fact that all of the $\rho_n$ are defined to be smaller than $\tau^{\inf}_{\varepsilon}\wedge \tau^1_M$. The right-hand side of [\[eq:BC\]](#eq:BC){reference-type="eqref" reference="eq:BC"} is proportional to the expectation of the quadratic variation from [\[eq:L3-bound\]](#eq:L3-bound){reference-type="eqref" reference="eq:L3-bound"}, which is finite. Therefore,
$$\sum_{n=1}^\infty {\mathbb{P}}\left(|v(\rho_{n + 1} ) |_{L^\infty} = 2|v(\rho_{n } ) |_{L^\infty} \text{ and } |v(\rho_{n } ) |_{L^\infty} > 2^{m_0}\right) < +\infty.$$ The Borel-Cantelli Lemma guarantees that with probability one, the events $\{|v(\rho_{n + 1} ) |_{L^\infty} = 2|v(\rho_{n }) |_{L^\infty} \text{ and } |v(\rho_{n } ) |_{L^\infty}> 2^{m_0}\}$ only happen a finite number of times. This means that the $L^\infty$ norm cannot possibly explode before time $\tau^{\inf}_{\varepsilon}\wedge \tau^1_M$ because the $L^\infty$ norm stops doubling when $m$ gets big. This proves that for any ${\varepsilon}>0$ and $M<\infty$, $$\label{eq:prob-eps-M}
{\mathbb{P}}\left( (\tau^{\inf}_{\varepsilon}\wedge \tau^1_M) < \tau^\infty_\infty\right) =1.$$ Next, we argue via Proposition [Proposition 5](#prop:v-positive){reference-type="ref" reference="prop:v-positive"} and Lemma [Lemma 7](#lem:L1){reference-type="ref" reference="lem:L1"} that for arbitrary $T>0$ and small enough ${\varepsilon}>0$ and large enough $M>0$, the stopping times $\tau^{\inf}_{\varepsilon}$ and $\tau^1_M$ are both larger than $T$ with high probability. Let $\eta\in (0,1)$ and $T>0$ be arbitrary. Proposition [Proposition 5](#prop:v-positive){reference-type="ref" reference="prop:v-positive"} implies that there exists ${\varepsilon}>0$ small enough so that $$\label{eq:eps-small-rare}
{\mathbb{P}}\left( \tau^{\inf}_{\varepsilon}< (T \wedge \tau^\infty_\infty) \right)={\mathbb{P}}\left(\inf_{t \in [0,T\wedge \tau^\infty_\infty)} \inf_{x \in D} v(t,x) \leq {\varepsilon}\right) < \frac{\eta}{2}.$$ With this choice of ${\varepsilon}>0$, we estimate the probability that the $L^1$ norm of $v(t,x)$ is large. For $M>0$ to be chosen later, $$\begin{aligned}
& {\mathbb{P}}\left( \tau^1_M < (T \wedge \tau^\infty_\infty) \text{ or } \tau^{\inf}_{\varepsilon}< (T \wedge \tau^\infty_\infty)\right) \nonumber\\
&={\mathbb{P}}\left( \sup_{t \in [0,(T \wedge \tau^\infty_\infty)]} \int_Dv(t,x)dx > M \text{ or } \tau^{\inf}_{\varepsilon}< (T \wedge \tau^\infty_\infty)\right)\nonumber\\
&\leq {\mathbb{P}}\left( \tau^{\inf}_{\varepsilon}<(T \wedge \tau^\infty_\infty)\right) \nonumber\\
&\qquad
+{\mathbb{P}}\left(\sup_{t \in [0,T \wedge \tau^\infty_\infty]} \int_Dv(t,x)dx > M\text{ and } \tau^{\inf}_{\varepsilon}\geq (T\wedge \tau^\infty_\infty)\right)\nonumber\\
&\leq \frac{\eta}{2}
+ {\mathbb{P}}\left(\sup_{t \in [0,T \wedge \tau^\infty_\infty \wedge \tau^{\inf}_{\varepsilon}]} \int_Dv(t,x)dx > M\right)
\end{aligned}$$ The last line follows from [\[eq:eps-small-rare\]](#eq:eps-small-rare){reference-type="eqref" reference="eq:eps-small-rare"} and the fact that on the event $\{\tau^{\inf}_{\varepsilon}\geq T \wedge \tau^\infty_\infty\}$, $T \wedge \tau^\infty_\infty = T \wedge \tau^\infty_\infty \wedge \tau^{\inf}_{\varepsilon}$. By Lemma [Lemma 7](#lem:L1){reference-type="ref" reference="lem:L1"}, we can choose $M>0$ large enough so that $$\label{eq:big-L1-rare}
{\mathbb{P}}\left( \tau^1_M < (T \wedge \tau^\infty_\infty) \text{ or } \tau^{\inf}_{\varepsilon}< (T \wedge \tau^\infty_\infty)\right) < \eta.$$ Therefore, with these choices of ${\varepsilon}>0$ and $M>0$, [\[eq:eps-small-rare\]](#eq:eps-small-rare){reference-type="eqref" reference="eq:eps-small-rare"} and [\[eq:big-L1-rare\]](#eq:big-L1-rare){reference-type="eqref" reference="eq:big-L1-rare"} imply $${\mathbb{P}}\left( \tau^{\inf}_{\varepsilon}\geq T \wedge \tau^\infty_\infty \text{ and } \tau^1_M \geq T \wedge \tau^\infty_\infty \right) \geq 1 - \eta.$$ This combined with [\[eq:prob-eps-M\]](#eq:prob-eps-M){reference-type="eqref" reference="eq:prob-eps-M"} implies that $${\mathbb{P}}(T < \tau^\infty_\infty) > 1 - \eta.$$ The choice of $\eta>0$ was arbitrary. Therefore, $${\mathbb{P}}(T < \tau^\infty_\infty) = 1.$$ This is true for arbitrary $T>0$ and therefore, $${\mathbb{P}}\left(\tau^\infty_\infty=\infty \right)=1$$ and $v(t,x)$ cannot explode in finite time. ◻
---
abstract: |
On retrouve des résultats de Alain Hénaut sur les groupes de symétrie des tissus implicites dans le cadre usuel des actions de groupes de Lie sur les équations différentielles. On donne aussi un lien entre l'algèbre de Lie des symétries d'un tissu et l'existence de polynômes de Darboux.
address:
- Université de Pau et des Pays de l'Adour - E2S, Laboratoire de Mathématiques et de leurs Applications, UMR CNRS 5142, Batiment IPRA, avenue de l'Université, 64000 Pau, France
- CY Tech, Département de Mathématiques, 2 Bd Lucien, 64000 Pau, France
author:
- Jacky Cresson
- Jordy Palafox
title: Application des groupes de Lie à la recherche des symétries des tissus implicites du plan
---
# Introduction
Dans [@henaut], Alain Hénaut obtient via des outils de géométrie algébrique et/ou différentielle une **caractérisation des tissus du plan parallélisables** via la dimension de leur **groupe de symétries**. Dans cet article, nous donnons des démonstrations alternatives de ses résultats dans le **cadre classique des symétries d'équations différentielles** exposé notamment par P.J. Olver dans son livre [@olver1]. Par ailleurs, l'**algèbre de Lie du groupe des symétries d'un tissu** est une sous-algèbre de Lie du **module de dérivation** associé à la **courbe discriminante du tissu**. Ce module possède une **interprétation dynamique** : il correspond à l'ensemble des champs de vecteurs laissant invariant (au sens dynamique) la courbe. On montre que le **discriminant de la courbe** est un **polynôme de Darboux** pour ces champs. L'article est organisé de la manière suivante:\
La Section [2](#tissusimplicites){reference-type="ref" reference="tissusimplicites"} donne la définition des tissus implicites et les principaux objets qui y sont attachés. Dans la Section [3](#14){reference-type="ref" reference="14"}, après un bref rappel sur les groupes de symétrie des équations différentielles suivant P-J. Olver [@olver1], nous donnons une caractérisation explicite des générateurs infinitésimaux des groupes de symétries d'un tissu implicite en fonction de son polynôme de présentation. Dans la Section [4](#sectionexemple){reference-type="ref" reference="sectionexemple"}, des exemples de calculs effectifs de ces groupes de symétries sont donnés pour des tissus particuliers (parallèle, de Clairaut, de Zariski, etc). Dans la Section [5](#19){reference-type="ref" reference="19"}, nous explicitons le lien entre les symétries et l'algèbre de dérivations de la courbe discriminante en utilisant la théorie classique de Darboux sur les courbes invariantes d'équations différentielles. Enfin la Section [6](#perspective){reference-type="ref" reference="perspective"} discute quelques perspectives de ce travail.
# Tissus implicites du plan {#tissusimplicites}
La géométrie des tissus est l'étude simultanée de feuilletages plongés dans un même espace. La référence classique est le livre de Bol et Blaschke [@BB]. On renvoie au livre de J.V. Pereira et L. Pirio [@webg] pour une présentation moderne du sujet et plus de détails. On se limite dans cet article aux tissus implicites définis par A. Hénaut dans [@henaut].\
On considère une **équation différentielle du premier ordre polynomiale de degré $d$ à coefficients analytiques**, définie par $$F(x,y,y')=a_0(x,y)(y')^d+a_1(x,y)(y')^{d-1}+\cdots +a_d(x,y)=0,$$ où $y'=\frac{dy}{dx}$ et où les coefficients $a_i\in \mathbb{C}\lbrace x,y \rbrace$ pour tout $i\in \lbrace 0,...,d \rbrace$ (anneau des fonctions analytiques en $x$ et $y$). On appelle **polynôme de présentation** de $F$ et on note $P_F$, le polynôme en la variable $z$ et de paramètres $x$ et $y$ associé au tissu $F$ et défini par: $$P_F(z;x,y)=a_0(x,y)z^d+a_1(x,y)z^{d-1}+\cdots +a_d(x,y).$$ On a $F(x,y,y')=P_F (y';x,y)$. Le $z$-discriminant de ce polynôme, noté $\Delta$, est défini par: $$\Delta=a_0^{2d-2}\underset{1\leq i < j \leq d}{\prod}(p_i(x,y)-p_j(x,y))^2,$$ où les $p_1(x,y), \ldots, p_d(x,y)$ sont les racines de $P_F(z;x,y)$ (voir [@gelfand p. 403]) et le résultant est donné par : $R_{P_F}:=Result(P_F,\partial_{z}P_F)=(-1)^{\frac{d(d-1)}{2}}a_0 \Delta$. On supposera dans la suite que l'on se place hors du lieu des singularités, c'est-à-dire $R_{P_F} \neq 0$, appelée **condition de factorisation**. En effet, le polynôme $P_F$ admet alors $d$ racines distinctes et on peut dans ce cas factoriser le polynôme $P_F$ par rapport à la variable $z$ sous la forme $P_F(z;x,y)=a_0(x,y)\underset{i=1}{\overset{d}{\prod}}(z-p_i(x,y))$. On déduit de cette construction la forme factorisée de l'équation différentielle initiale: $$\label{fac}
F(x,y,y')=a_0(x,y)\underset{i=1}{\overset{d}{\prod}} \Delta_i (x,y,y'),$$ où $\Delta_i (x,y,y')=y'-p_i(x,y)$, qu'on appellera **forme préparée** de $F$.\
Chaque équation différentielle $\Delta_i (x,y,y')=0$ induit un feuilletage $F_i$ du plan $(x,y)$. La forme préparée définit donc naturellement $d$ familles de feuilletages $F_i$ en position générale en chaque point $(x,y)\in \mathbb{C}^2$, associées au système différentiel $\mathscr{S}=\{ \Delta_i (x,y,y')=0,\ i=1,\dots ,d \}$, ce qui constitue un $\mathbf{d}$**-tissu** (voir [@webg], Section 1.1 p.15). On a donc la définition suivante:
**Définition 1** ($d$-tissu implicite [@henaut]). *On appelle $d$-tissu implicite associé à une équation différentielle polynomiale $F(x,y,y')=0$ de degré $d$ du premier ordre à coefficients analytiques satisfaisant la condition de factorisation [\[fac\]](#fac){reference-type="eqref" reference="fac"}, le tissu noté $\mathscr{W}_F ( F_1 ,\dots ,F_d )$ où chaque feuilletage $F_i$ est associé à l'équation différentielle $\Delta_i (x,y,y')=0$.*
De manière réciproque, la donnée d'un $d$-tissu explicite $\mathcal{W}(F_1,\ldots, F_d)$, où les $F_i$ sont des feuilletages en position générale, permet de définir un $d$-tissu implicite. Il suffit de considérer la forme préparée définie par les racines : $$p_i (x,y)=-\frac{\partial_x(F_i)}{\partial_y(F_i)} (x,y).$$
L'intérêt des tissus **implicites** est de posséder une représentation **globale** (l'équation différentielle). L'idée est donc de voir si des propriétés de cet objet global permettent de **caractériser certaines propriétés des tissus**.
# Symétrie des tissus implicites {#14}
On caractérise les groupes de symétrie des tissus implicites en utilisant la théorie classique sur les équations différentielles invariantes sous l'action d'un groupe de Lie telle que présentée dans le livre de P-J. Olver [@olver1]. On retrouve ainsi (Théorème [Théorème 2](#theorem5){reference-type="ref" reference="theorem5"} et Théorème [Théorème 3](#normalcarac){reference-type="ref" reference="normalcarac"}) des résultats énoncés par A. Hénaut ([@henaut], (LS) section 2 p.118 et p.119).
## Rappels sur les groupes de symétrie d'une équation différentielle {#13}
On rappelle rapidement la notion de groupe de symétrie en suivant le livre de P-J. Olver [@olver1] directement appliqué aux équations différentielles d'ordre 1.
### Groupes de symétries et générateurs infinitésimaux
Soit une équation différentielle d'ordre $1$ de la forme $\Delta(x,y,y') = y'-p(x,y)=0$ avec $y'=\frac{dy}{dx}$, $x,\ y \in \mathbb{C}$.
**Définition 2**. *Soit $\mathscr{S}$ une équation différentielle. Un groupe de symétries de $\mathscr{S}$ est un groupe local de transformations $G$ agissant sur un sous-ensemble ouvert $M$ du produit cartésien $\mathbb{C}^2$ tel que si $y:M \rightarrow \mathbb{C}$ est une solution de $\mathscr{S}$ et dès que $g\cdot y$ est définie pour $g\in G$, alors $g \cdot y(x)$ est aussi une solution du système.*
Les groupes de transformations que l'on considère dépendent d'un paramètre $\varepsilon$ tel que : $$\begin{aligned}
G_\varepsilon : x \in M \mapsto g_\varepsilon\cdot (x,f(x)) = (\gamma_\varepsilon(x,f(x)), \phi_\varepsilon(x,f(x))).\end{aligned}$$
On peut voir l'action d'un élément du groupe comme l'action d'un champ de vecteurs $X$ sur $\mathbb{R}^2$ par dérivation par rapport à ce paramètre $\varepsilon$ : $$\begin{aligned}
X=\frac{d (g_\varepsilon \cdot x)}{d \varepsilon}\vert_{\varepsilon=0} \partial_x+\frac{d (g_\varepsilon \cdot f(x))}{d \varepsilon}\vert_{\varepsilon=0} \partial_y = \alpha_1(x,y) \partial_x+\alpha_2(x,y) \partial_y.\end{aligned}$$ Le champ de vecteurs $X$ est appelé **générateur infinitésimal** du groupe de symétrie $G$.\
Si $g_\varepsilon$ transforme $y$ solution du système en $\Tilde{y}$ une autre solution, on cherche à déterminer l'action de $g_\varepsilon$ sur $y'$. Cette transformation est appelée **prolongement** et nous allons la caractériser sur le champ de vecteurs $X$.
Comme les transformations sont tangentes à l'identité, on a $\Tilde{x} = x + \varepsilon \alpha_1(x,y)+\mathcal{O}(\varepsilon^2)$ et $\Tilde{y} = y + \varepsilon \alpha_2(x,y)+\mathcal{O}(\varepsilon^2)$. En notant $D_x$ la différentielle totale par rapport à $x$, on a : $$\begin{aligned}
\frac{d\Tilde{y}}{d \Tilde{x}} = \frac{D_x(\Tilde{y})}{D_x(\Tilde{x})}
%&= \frac{y'+\varepsilon D_x(\alpha_2(x,y(x))+\mathcal{O}(\varepsilon^2)}{1+\varepsilon D_x(\alpha_1(x,y(x))+\mathcal{O}(\varepsilon^2)}\\
% & = (y'+\varepsilon D_x(\alpha_2(x,y(x)))(1-\varepsilon D_x(\alpha_1(x,y(x))) \\
& = y'+ \varepsilon ( D_x(\alpha_2(x,y(x)))-y'D_x(\alpha_1(x,y(x)))) + \mathcal{O}(\varepsilon ^2) \\
&= y' + \varepsilon \eta + \mathcal{O}(\varepsilon^2).\end{aligned}$$
En développant l'expression de $\eta$, on obtient : $$\begin{aligned}
\eta = \partial_x(\alpha_2(x,y))+y'\left(\partial_y(\alpha_2(x,y))- \partial_x(\alpha_1(x,y))\right)-(y')^2 \partial_y(\alpha_1(x,y)).\end{aligned}$$
Ce terme nous permet de définir le prolongateur de $X$ à l'ordre $1$ :
**Définition 3**. *On appelle prolongateur à l'ordre $1$ du champ de vecteurs $X=\alpha_1(x,y) \partial_x+\alpha_2(x,y) \partial_y$ le champ $\Tilde{X}$: $$\begin{aligned}
\Tilde{X} = \alpha_1 \partial_x + \alpha_2 \partial_y + \left(\partial_x(\alpha_2) +y'(\partial_y(\alpha_2)- \partial_x(\alpha_1)) - (y')^2 \partial_y(\alpha_1) \right)\partial_{y'}.\end{aligned}$$*
L'étude des $d$-tissus implicites fait intervenir seulement des équations différentielles d'ordre $1$ de la forme $y'=p(x,y)$. On a:
**Théorème 1**. *Un champ de vecteurs $X=\alpha_1 \partial_x + \alpha_2 \partial_y$ est un générateur infinitésimal d'un groupe de symétries $G$ d'une équation différentielle de la forme $y'(x)=p(x,y(x))$ si et seulement si les fonctions $\alpha_1$ et $\alpha_2$ en les variables $x$ et $y$ sont solutions de: $$\begin{aligned}
-\alpha_1\partial_x(p)-\alpha_2\partial_y(p)+\partial_x(\alpha_2)+(\partial_y(\alpha_2)-\partial_x(\alpha_1))p-\partial_y(\alpha_1)p^2=0.\end{aligned}$$*
Ce théorème est un cas particulier explicite du critère d'invariance général formulé par P-J. Olver [@olver1 Theorem 2.31, p.104] dans le cas d'une équation différentielle d'ordre $1$.\
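À titre d'illustration (ceci n'apparaît ni dans [@olver1] ni dans la suite du texte), le critère du Théorème [Théorème 1](#dim1){reference-type="ref" reference="dim1"} se prête bien au calcul formel. L'esquisse Python/SymPy ci-dessous, dont la fonction `est_symetrie` est purement hypothétique, vérifie simplement l'annulation de l'équation déterminante pour une pente $p$ et un champ $\alpha_1\partial_x+\alpha_2\partial_y$ donnés.

``` python
import sympy as sp

# Esquisse (hypothétique) de vérification du critère du Théorème 1 :
# -a1*p_x - a2*p_y + d_x(a2) + (d_y(a2) - d_x(a1))*p - d_y(a1)*p**2 = 0.
x, y = sp.symbols('x y')

def est_symetrie(p, a1, a2):
    expr = (-a1 * sp.diff(p, x) - a2 * sp.diff(p, y)
            + sp.diff(a2, x)
            + (sp.diff(a2, y) - sp.diff(a1, x)) * p
            - sp.diff(a1, y) * p**2)
    return sp.simplify(expr) == 0

# Exemple : pour une pente constante p = 2, le champ x*dx + y*dy est une symétrie,
# mais pas le champ x*dx + y**2*dy.
print(est_symetrie(sp.Integer(2), x, y))        # True
print(est_symetrie(sp.Integer(2), x, y**2))     # False
```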
Le calcul explicite des groupes de symétrie dépend fortement de la forme du système d'équations. Deux équations différentielles sont dites équivalentes s'il existe un changement de variables qui transforme l'une en l'autre. Une question naturelle est donc de savoir si les groupes de symétries calculés pour des équations différentielles équivalentes sont isomorphes. C'est le cas (voir [@olver2 Proposition 6.13, p.185]) et nous utiliserons ce résultat à plusieurs reprises pour simplifier nos calculs.
**Proposition 1**. *Deux équations différentielles équivalentes ont des groupes de symétries isomorphes.*
L'ensemble des générateurs infinitésimaux d'un groupe de symétrie possède une structure classique d'algèbre de Lie (voir [@olver1 Corollary 2.40, p.115]) dont la dimension est un invariant.
## Caractérisation des symétries d'un tissu implicite
On commence par définir le groupe de symétrie d'un tissu du plan (voir [@henaut]):
**Définition 4**. *Soit $\mathscr{W}(F_1,...,F_d)$ un $d$-tissu. Un groupe $G$ est un groupe de symétries du tissu si et seulement si il laisse globalement invariant le tissu.*
Le Lemme suivant permet le transfert de l'étude du feuilletage à celui de la famille d'équations différentielles associées. Précisément, en utilisant le critère d'invariance formulé par P-J. Olver ([@olver1], Theorem 2.71 p.161) on a le résultat suivant:
**Lemme 1**. *Soit $\mathscr{W}_F (F_1 ,\dots ,F_d)$ le $d$-tissu implicite défini par $F$. Alors $G$ est un groupe de symétrie de $\mathscr{W}_F (F_1 ,\dots ,F_d )$ si et seulement si $G$ est un groupe de symétrie de chacune des équations différentielles $\Delta_i (x,y,y')=0$ pour $i=1,\dots ,d$.*
En utilisant le lemme [Lemme 1](#critere){reference-type="ref" reference="critere"} et le théorème [Théorème 1](#dim1){reference-type="ref" reference="dim1"} on obtient donc la caractérisation suivante des symétries d'un $d$-tissu:
**Théorème 2** (Groupe de symétries d'un $d$-tissu). *Soit $\mathscr{W}_F (F_1,...,F_d)$ le $d$-tissu implicite associé à $F$. Soit $G$ un groupe de symétries de $\mathcal{W}_F$, alors tout générateur infinitésimal $X=\alpha_1(x,y)\partial_x+\alpha_2(x,y)\partial_y$ de $G$ satisfait le système d'équations: $$\label{eq}
-\alpha_1\partial_x(p_i)-\alpha_2\partial_y(p_i)+\partial_x(\alpha_2)+(\partial_y(\alpha_2)-\partial_x(\alpha_1))p_i-\partial_y(\alpha_1)p_i^2=0,$$ où $p_i (x,y)=-\frac{\partial_x F_i}{\partial_y F_i}$, $i=1,\dots ,d$.*
Ce système peut se mettre sous forme \"normale\":\
On note $V_{i:j}$ les matrices de type Vandermonde définies par $V_{i:j}=\left (
\begin{array}{ccc}
1 & p_i & p_i^2 \\
1 & \vdots & \vdots \\
1 & p_j & p_j^2
\end{array}
\right )$ et $M_{i:j}$ les matrices définies par $M_{i:j}=\left (
\begin{array}{cc}
\partial_x p_i & \partial_y p_i \\
\vdots & \vdots \\
\partial_x p_j & \partial_y p_j
\end{array}
\right )$. On note $\mathbf{\alpha} =(\alpha_1 ,\alpha_2 )$. L'écriture matricielle du système [\[eq\]](#eq){reference-type="eqref" reference="eq"} est donnée par $$\left \{
\begin{array}{l}
-M_{1:3} \mathbf{\alpha} + V_{1:3} \left (
\begin{array}{c}
-\partial_x \alpha_2 \\
\partial_x \alpha_1 -\partial_y \alpha_2 \\
\partial_y \alpha_1
\end{array}
\right )
=0 ,\\
-M_{4:d} \mathbf{\alpha} + V_{4:d} \left (
\begin{array}{c}
-\partial_x \alpha_2 \\
\partial_x \alpha_1 -\partial_y \alpha_2 \\
\partial_y \alpha_1
\end{array}
\right )
=0 .
\end{array}
\right .$$
La matrice $V_{1:3}$ est une matrice de Vandermonde qui est inversible car $p_1$, $p_2$ et $p_3$ sont deux à deux distincts. Le système est donc équivalent à la forme suivante:
$$\left \{
\begin{array}{l}
-V_{1:3}^{-1} M_{1:3} \mathbf{\alpha} + \left (
\begin{array}{c}
-\partial_x \alpha_2 \\
\partial_x \alpha_1 -\partial_y \alpha_2 \\
\partial_y \alpha_1
\end{array}
\right )
=0 ,\\
C_d \mathbf{\alpha} =0 ,
\end{array}
\right .$$ où $$C_d = -M_{4:d} +V_{4:d} V_{1:3}^{-1} M_{1:3} ,$$ est appelée **matrice de compatibilité**.\
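À titre indicatif, la construction de $C_d$ se programme directement ; l'esquisse SymPy ci-dessous (la fonction `matrice_compatibilite` est hypothétique et ne provient pas de l'article) assemble $V_{1:3}$, $V_{4:d}$, $M_{1:3}$ et $M_{4:d}$ à partir de pentes données et renvoie $C_d$.

``` python
import sympy as sp

# Esquisse (hypothétique) du calcul de la matrice de compatibilité
# C_d = -M_{4:d} + V_{4:d} V_{1:3}^{-1} M_{1:3} pour des pentes p_i(x,y) données.
x, y = sp.symbols('x y')

def matrice_compatibilite(pentes):
    V = sp.Matrix([[1, p, p**2] for p in pentes])                      # type Vandermonde
    M = sp.Matrix([[sp.diff(p, x), sp.diff(p, y)] for p in pentes])    # dérivées des pentes
    V13, M13 = V[:3, :], M[:3, :]
    V4d, M4d = V[3:, :], M[3:, :]
    return (-M4d + V4d * V13.inv() * M13).applyfunc(sp.simplify)

# Exemple : 4-tissu de pentes 0, 1, -1 et x + y ; C_4 est une matrice 1 x 2.
print(matrice_compatibilite([sp.Integer(0), sp.Integer(1), sp.Integer(-1), x + y]))
```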
On obtient donc le système noté $(\mathscr{S})$ par A. Henaut:
**Théorème 3** (Equation normalisée du groupe de symétries d'un tissu implicite). *Pour $d\geq 3$, le système d'équations [\[eq\]](#eq){reference-type="ref" reference="eq"} peut s'écrire sous une forme normalisée: $$\begin{aligned}
\tag{$\star_d$}
\left\{\begin{array}{ccc}
-\partial_x( \alpha_2) &+g_d \alpha_1+h_d \alpha_2=0 \\
\partial_x( \alpha_1)-\partial_y(\alpha_2)&+g_{d-1}\alpha_1+h_{d-1}\alpha_2 =0 \\
\partial_y(\alpha_1)&+g_{d-2}\alpha_1+h_{d-2}\alpha_2=0 \\
&g_{d-3}\alpha_1+h_{d-3}\alpha_2=0 \\
&\vdots\\
&g_1 \alpha_1+h_1 \alpha_2=0.
\end{array}\right.\end{aligned}$$ où les coefficients $g_i$ et $h_i$, pour $i=1,...,d$, dépendent des pentes $p_i$.*
On peut bien entendu expliciter les coefficients $g_i$ et $h_i$. Par exemple, on obtient: $$\left .
\begin{array}{l}
g_1 = - \displaystyle\frac{p_2 p_3}{(p_3 -p_1 ) (p_2 -p_1)} \partial_x p_1 +
\displaystyle\frac{p_1 p_3}{(p_3 -p_2)(p_2 -p_1)} \partial_x p_2
- \displaystyle\frac{p_1 p_2}{(p_3 -p_1) (p_3 -p_2 )} \partial_x p_3 ,\\
g_2 = \displaystyle\frac{p_2 +p_3}{(p_3 -p_1 ) (p_2 -p_1)} \partial_x p_1
-\displaystyle\frac{p_1 +p_3}{(p_3 -p_2)(p_2 -p_1)} \partial_x p_2
+ \displaystyle\frac{p_1 +p_2}{(p_3 -p_1) (p_3 -p_2 )} \partial_x p_3 ,\\
g_3 = - \displaystyle\frac{1}{(p_3 -p_1 ) (p_2 -p_1)} \partial_x p_1 +
\displaystyle\frac{1}{(p_3 -p_2)(p_2 -p_1)} \partial_x p_2
- \displaystyle\frac{1}{(p_3 -p_1) (p_3 -p_2 )} \partial_x p_3 ,
\end{array}
\right .$$ et des expressions analogues pour $h_1$, $h_2$ et $h_3$ en remplaçant les $\partial_x p_i$ par des $\partial_y p_i$.\
On calcule de manière analogue les autres coefficients.\
Le système ainsi formé est par ailleurs de la même forme que celui obtenu par A. Henaut ([@henaut2], équation ($\star_d$) p.433) caractérisant les **relations abéliennes** d'un tissu.\
Les relations $C_d \mathbf{\alpha} =0$ sont appelées **relations de compatibilité** par A. Hénaut.\
L'étude des solutions de ce système peut s'effectuer dans le cadre du théorème de Cauchy-Kovalevskaya tel qu'exposé dans ([@olver1], Chap.2, p.162) dont on reprend les notations. Pour $\mathbf{\alpha}=(\alpha_1 ,\alpha_2 )$, on note $\mathbf{\alpha}^{(1)} = (\alpha_1 ,\alpha_2 ,\partial_x \alpha_1 ,\partial_x \alpha_2 ,\partial_y \alpha_1 ,\partial_y \alpha_2 )$. Le système ($\star_d$) est donc équivalent à $d$ équations $\Delta_1 (x,\mathbf{\alpha}^{(1)} )=0,\dots, \Delta_d (x,\mathbf{\alpha}^{(1)} )=0$. Pour $d\geq 3$, on obtient un système sur-déterminé au sens de P.J. Olver ([@olver1], Definition 2.86, p.170-171). Les conditions de compatibilité ainsi que la troisième relation peuvent être vues comme des **conditions d'intégrabilité** par rapport aux solutions du système $$\left\{
\begin{array}{ccc}
-\partial_x( \alpha_2) &+g_d \alpha_1+h_d \alpha_2=0 ,\\
\partial_x( \alpha_1)-\partial_y(\alpha_2)&+g_{d-1}\alpha_1+h_{d-1}\alpha_2 =0
\end{array}
\right .$$
Ces conditions vont compromettre la résolubilité des systèmes ($\star_d$) pour $d\geq 3$. En particulier, on a:
**Proposition 2**. *Pour $d\geq 3$, le système [\[eq\]](#eq){reference-type="eqref" reference="eq"} n'admet génériquement pas de solutions.*
Une autre façon de formuler le résultat précédent est: *Pour $d\geq 3$, un $d$-tissu générique n'admet pas de symétries*.\
Dans les sections suivantes, nous donnons des exemples de résolutions explicites du système ($\star_d$) pour des tissus donnés.
# Exemples de Groupes de symétries de tissus {#sectionexemple}
Nous calculons explicitement l'algèbre de Lie du groupe de symétries de $d$-tissus. Les cas $d=1$ et $d=2$ sont triviaux et correspondent à des algèbres de Lie de dimensions infinies. Dans le cas $d\geq 3$, les algèbres de Lie obtenues sont de dimension $1$ ou $3$ illustrant ainsi le résultat général de A. Hénaut ([@henaut],Proposition 1 p.124) montrant que l'algèbre de Lie des groupes de symétries d'un tissu implicite est toujours de dimension $0$, $1$ ou $3$.
## Cas des $1-$ et $2-$tissus
Pour les tissus du plan, on voit le cas particulier des 1-tissus et 2-tissus dont les algèbres de Lie des symétries sont de dimension infinie:
**Lemme 2** (1-tissu). *L'algèbre de Lie des symétries d'un 1-tissu est de dimension infinie, chaque symétrie est de la forme $X=\alpha_1(x,y)\partial_x+\alpha_2(y)\partial_y$, avec $\alpha_1 \in \mathbb{C}\lbrace x,y \rbrace$ et $\alpha_2\in \mathbb{C}\lbrace y \rbrace$.*
*Démonstration.* Pour un 1-tissu, on peut supposer, quitte à faire un changement de variables, que $p_1=0$. Ainsi le système [\[eq\]](#eq){reference-type="eqref" reference="eq"} se réduit à $\partial_x(\alpha_2)=0$ et aucune condition n'apparaît sur $\alpha_1$. L'algèbre de Lie est de dimension infinie et engendrée par: $X=\alpha_1(x,y)\partial_x+\alpha_2(y)\partial_y$ où $\alpha_2(y)$ est analytique en $y$. La proposition [Proposition 1](#isogroupe){reference-type="ref" reference="isogroupe"} termine la démonstration. ◻
**Lemme 3** (2-tissu). *L'algèbre de Lie des symétries d'un 2-tissu est de dimension infinie.*
*Démonstration.* Pour $d=2$, avec $p_1 \neq p_2$ on peut supposer, quitte à faire un changement de variables \"redressant\" les feuilles, que $p_1=0$ et $p_2=1$. On obtient les deux équations $\partial_x(\alpha_2)=0$ pour $p_1=0$ et $-\partial_x(\alpha_2)+(\partial_x(\alpha_1)-\partial_y(\alpha_2))+\partial_y(\alpha_1)=0$ pour $p_2=1$. Comme pour le cas précédent, on a que $\alpha_2$ est une fonction de la variable $y$. La seconde équation est $\partial_x(\alpha_1)+\partial_y(\alpha_1)=\partial_y(\alpha_2)$. Si les solutions ont la forme générale $\alpha_1=\underset{j,k \geq 0}{\sum}p_{j,k}x^jy^k$ et $\alpha_2=\underset{j\geq 0}{\sum}q_jy^j$ alors la dernière équation implique $\forall i \geq 1$, $\forall j\geq 0$, $p_{i+1,j}(i+1)+p_{i,j+1}(j+1)=0$ et si $i=0$ alors $\forall j \geq 1$, $p_{1,j}+p_{0,j+1}(j+1)=q_{j+1}(j+1)$. Ces dernières relations définissent les relations sur les coefficients des symétries d'un 2-tissu. La proposition [Proposition 1](#isogroupe){reference-type="ref" reference="isogroupe"} termine la démonstration. ◻
## Le cas parallèle
Les tissus parallèles, donnés par des feuilletages de droites parallèles, jouent un rôle particulier dans l'étude des tissus.
**Définition 5** (Tissu parallèle). *Un $d$-tissu est dit *parallèle* s'il est donné par la superposition de $d$ pinceaux de droites en position générale, écrit sous la forme $\mathcal{W}(a_1x-b_1y,...,a_dx-b_dy)$ où $(a_i,b_i) \in \mathbb{C}^2 \setminus \lbrace 0 \rbrace$. L'hypothèse de position générale est équivalente à $p_i\neq p_j, \ \forall i\neq j$ où $p_i=-\frac{a_i}{b_i}$, $i=1,\ldots, d$, représentent les pentes du pinceau.*
La terminologie de tissu parallèle vient du fait que chaque feuilletage est constitué de droites parallèles. Les symétries d'un tel tissu sont données par le Lemme suivant:
**Lemme 4**. *Soit un $d$-tissu parallèle, avec $d\geq 3$, défini par les pentes constantes $p_i(x,y)$ $i=1,...,d$. Alors son algèbre de Lie des symétries est engendrée par les trois générateurs infinitésimaux $\mathfrak{g}=\lbrace \partial_x, \ \partial_y, \ x\partial_x+y\partial_y \rbrace$.*
*Démonstration.* Considérons un $d$-tissu parallèle avec $d\geq 3$, donné par des pentes constantes $p_i(x,y)$. On a donc par définition $M_{1:3}=0$ et $M_{4:d} =0$ car ces matrices ne dépendent que des dérivées des pentes, d'où $C_d =0$. Le système ($\star_d$) se réduit donc à: $$\begin{aligned}
-\partial_x( \alpha_2) =0,\ \
\partial_x( \alpha_1)-\partial_y(\alpha_2)=0, \ \
\partial_y(\alpha_1)=0.\end{aligned}$$ La fonction $\alpha_2$ ne dépend que de $y$ et $\alpha_1$ de $x$. On cherche des solutions du système dans la classe polynomiale. On pose $\alpha_1(x)=\underset{j=0}{\overset{k}{\sum}}r_jx^j$ et $\alpha_2(y)=\underset{j=0}{\overset{m}{\sum}}q_jy^j$. On obtient $\alpha_1=r_0+xr_1$ et $\alpha_2=q_0+yq_1$ avec $q_1 =r_1$. L'algèbre de Lie étant au maximum de dimension $3$, on déduit que les symétries sont engendrées par $\partial_x$, $\partial_y$ et $x\partial_x +y \partial_y$. ◻
## Un 3-tissu donné par Élie Cartan
Cet exemple est donné par A.Hénaut dans [@henaut]. On considère le $3$-tissu donné par son polynôme de présentation $P_F(z;x,y)=(z^2-1)(z-u(x))$ où $u$ est une fonction analytique de $x$.
**Lemme 5**. *L'algèbre de Lie des symétries du tissu de Cartan est donnée par $\mathfrak{g}=\lbrace \partial_y \rbrace$.*
*Démonstration.* Par définition du tissu, les pentes sont données par $p_1=1$, $p_2=-1$ et $p_3=u(x)$. Une symétrie $X=\alpha_1(x,y)\partial_x+\alpha_2(x,y)\partial_y$ de ce 3-tissu doit satisfaire : $$\begin{aligned}
\partial_x(\alpha_2)=\partial_y(\alpha_1), \ \
\partial_x(\alpha_1)=\partial_y(\alpha_2), \ \
(u^2-1)\partial_y(\alpha_1)+\partial_x(u)\alpha_1=0.\end{aligned}$$
La résolution par rapport à $y$ de la troisième équation donne : $$\begin{aligned}
\alpha_1(x,y)=c(x)e^{-\frac{u'(x)}{u(x)^2-1}y},\end{aligned}$$ où $c$ dépend seulement de $x$.\
On calcule les dérivées aux premier et second ordres par rapport à $x$ et $y$ de $\alpha_1$: $$\begin{aligned}
&\partial_x(\alpha_1)=B(x,y)\alpha_1, \\
&\partial^2_{x,x}(\alpha_1)=\left(B^2(x,y)+\partial_x(B(x,y))\right)\alpha_1, \\
& \partial^2_{y,y}(\alpha_1)=\left( \frac{u'(x)}{u(x)^2-1}\right)^2\alpha_1,\end{aligned}$$ où $B(x,y)=\frac{c'(x)}{c(x)}-\left(\frac{u'(x)}{u(x)^2-1}\right)'y$.\
Les équations $\partial_x(\alpha_1)=\partial_y(\alpha_2)$ et $\partial_y(\alpha_1)=\partial_x(\alpha_2)$ donnent: $$\begin{aligned}
\partial^2_{x,x}\alpha_1-\partial^2_{y,y}\alpha_1=0.\end{aligned}$$ On a alors: $$\begin{aligned}
\left(B(x,y)^2+\partial_x(B(x,y))-\left(\frac{u'(x)}{u(x)^2-1}\right)^2\right)\alpha_1=0.\end{aligned}$$ Pour une fonction $u=u(x)$ générique, on a $\alpha_1=0$ et $\partial_y(\alpha_2)=\partial_x(\alpha_2)=0$, donc $\alpha_2$ est une constante et l'algèbre des symétries est engendrée par $\partial_y$. ◻
## Le tissu de Muzsnay {#muzs}
Dans [@muzsnay], suite à une série d'articles dans l'optique de la résolution de la conjecture de Blaschke sur la linéarisation des 3-tissus dans $\mathbb{C}^2$, Z. Muzsnay considère le 3-tissu $\mathcal{W}_M$ donné par les trois feuilletages suivant: $$F_1(x,y):=x, \ F_2(x,y):=y \text{ et } F_3(x,y):=(x+y)e^{-x}.$$ Ce tissu a fait l'objet de nombreux articles concernant sa linéarisation notamment [@goly] et [@grim]. Les différents auteurs ont donné des caractérisations des tissus linéarisables via des méthodes différentes et ne donnent pas le même résultat. Dans [@muzsnay], l'auteur pour clore la controverse exhibe un biholomorphisme de linéarisation explicite. Nous proposons ici le calcul de son groupe de symétrie.\
Le calcul des pentes du tissu fait apparaître une pente infinie. En effectuant le changement de variables $(x,y) \mapsto (x+y, y-x)=(u,v)$, on obtient :
**Lemme 6** (Forme préparée du tissu). *Le tissu $\mathcal{W}_M$ est biholomorphiquement conjugué au tissu donné par les trois feuilletages définis par $G_1(u,v):=\frac{u-v}{2}$, $G_2(u,v):=\frac{u+v}{2}$ et $G_3(u,v):=ue^{-\frac{u-v}{2}}$.*
On peut alors calculer le groupe de symétrie du tissu dans ce nouveau système de coordonnées:
**Lemme 7**. *L'algèbre de Lie des symétries du tissu $\mathcal{W}_M$ mis sous forme préparée est donnée par $\mathfrak{g}=\lbrace \partial_v \rbrace$.*
*Démonstration.* Le calcul des trois pentes associées aux feuilletages donne $p_1=1$, $p_2=-1$ et $p_3=\frac{u-2}{u}$. En introduisant ces pentes dans les équations définissant les symétries, on obtient le système: $\partial_u\alpha_2=\partial_v\alpha_1$, $\partial_v\alpha_2=\partial_u\alpha_1$ et $-2 \alpha_1+(4u-4)\partial_v\alpha_1=0$.
La troisième équation s'intègre directement et donne $\alpha_1(u,v)=c(u)e^{\frac{v}{2(u-1)}}$ où $c(u)$ est une fonction qui dépend de $u$. En réintroduisant dans les deux premières équations, on obtient : $\alpha_2=\alpha_1 \times \left( 2-\frac{v}{u-1}\right)+2c'(u)(u-1)e^{\frac{v}{2(u-1)}}+k$ où $k$ est une constante. En procédant comme pour le tissu de Cartan ci-dessus, on obtient l'équation différentielle $\alpha_1 \times \left(\frac{1}{4(u-1)^2}- \partial_u \beta- \beta^2 \right)=0$ avec $\beta=\frac{c'(u)}{c(u)}-\frac{v}{2(u-1)^2}$. Donc soit $\alpha_1=0$, soit le second facteur est nul, ce qui est impossible au vu des degrés en $u$ et $v$. Donc $\alpha_1=0$ et on récupère une unique symétrie, engendrée par $\partial_v$. ◻
En vertu de la Proposition [Proposition 1](#isogroupe){reference-type="ref" reference="isogroupe"}, l'algèbre de Lie des symétries du tissu mis sous forme préparée est isomorphe à celle de $\mathcal{W}_M$ ; cette dernière ne compte donc qu'une seule symétrie.
## Tissu de Clairaut {#regularite}
Le **tissu de Clairaut** étudié par A. Hénaut [@henaut] est un 3-tissu défini par les submersions $F_1(x,y):=y-x$, $F_2(x,y):=x+y$ et $F_3(x,y):=\frac{y}{x}$. Les trois pentes associées sont $p_1=1$, $p_2=-1$ et $p_3=\frac{y}{x}$. Le groupe de symétrie est donné par (voir [@henaut]):
**Lemme 8**. *L'algèbre de Lie des symétries du 3-tissu de Clairaut défini par les submersions $F_1=y-x$, $F_2=y+x$ et $F_3=\frac{y}{x}$ est de dimension 3 et engendrée par les champs de vecteurs: $$\begin{aligned}
& x\partial_x+y\partial_y, \ y\partial_x+x\partial_y, \\
&\ \left( x \ell n\left(\bigg| x^2-y^2 \bigg|\right) +y \ell n\left(\bigg|\frac{x+y}{x-y}\bigg|\right) \right)\partial_x + \left( y \ell n\left(\bigg| x^2-y^2 \bigg|\right) +x \ell n\left(\bigg|\frac{x+y}{x-y}\bigg|\right)\right) \partial_y .\end{aligned}$$*
Nous donnons ici une démonstration détaillée de ce résultat.
*Démonstration.* D'après le théorème [Théorème 2](#theorem5){reference-type="ref" reference="theorem5"} et par simplification du système, les symétries vérifient:\
$$\partial_x(\alpha_2)=\partial_y(\alpha_1), \ \ \partial_y(\alpha_2)=\partial_x(\alpha_1), \ \
-x^2\partial_x(\alpha_2)+y^2\partial_y(\alpha_1)-y\alpha_1+x\alpha_2=0.$$ Les deux premières lignes donnent $\partial^2_{x,x}(\alpha_1)-\partial^2_{y,y}(\alpha_1)=0$ et $\partial^2_{x,x}(\alpha_2)-\partial^2_{y,y}(\alpha_2)=0$. Les symétries dans la classe polynomiale sont engendrées par les champs $X_1 =x\partial_x+y\partial_y$ et $X_2=y\partial_x+x\partial_y$.
Pour élargir la classe des symétries, on peut utiliser un résultat classique sur les équations aux dérivées partielles du second ordre qui donne la forme générale d'une solution : $$\begin{aligned}
\alpha_i=f_i(x+y)+g_i(x-y), \ i=1,2,\end{aligned}$$ où $f_i$ et $g_i$ sont analytiques. Les deux premières équations de symétries impliquent, à des constantes près, $f_1=f_2$ et $g_2=-g_1$. On a donc $\alpha_1$ et $\alpha_2$ de la forme $\alpha_1 (x,y)=f(x+y)+g(x-y)$ et $\alpha_2 (x,y)=f(x+y)-g(x-y)$. La troisième équation devient: $$\begin{aligned}
(y-x)\left( (x+y)f'(x+y)-f(x+y)\right)+(x+y)\left((x-y)g'(x-y)-g(x-y)\right)=0.\end{aligned}$$ Si $y=x$, on obtient $xg(0)=0$ donc $g(0)=0$ et $g$ est de la forme $g(v)=vG(v)$ avec $v=x-y$ et $G$ est une fonction de $v$ sans terme constant. De la même manière en prenant $y=-x$ on obtient $f(t)=tF(t)$ avec $t=x+y$ et $F$ une fonction de la variable $t$. On a donc: $$\begin{aligned}
\alpha_1 (x,y)=(x+y)F(x+y)+(x-y)G(x-y), \ \ \alpha_2=(x+y)F(x+y)-(x-y)G(x-y).\end{aligned}$$ En remplaçant dans la troisième équation et en posant $t=x+y$ et $v=x-y$, on a: $$\begin{aligned}
tF'(t)-vG'(v)=0,\end{aligned}$$ ainsi $tF'(t)=c$ et $vG'(v)=c$ où $c$ est une constante dans $\mathbb{C}$. En résolvant ces équations différentielles, on obtient $F(t)=c \ \ell n(\vert t \vert)$ et $G(v)=c \ \ell n(\vert v \vert)$. Finalement, en prenant $c=1$, on obtient: $$\alpha_1 = x \ell n\left(\bigg| x^2-y^2 \bigg|\right) +y \ell n\left(\bigg|\frac{x+y}{x-y}\bigg|\right) \text{ et } \alpha_2= y \ell n\left(\bigg| x^2-y^2 \bigg|\right) +x \ell n\left(\bigg|\frac{x+y}{x-y}\bigg|\right) .$$ Ce qui termine la démonstration. ◻
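On peut contrôler ce calcul par calcul formel : l'esquisse ci-dessous (purement illustrative, elle ne figure pas dans l'article) réutilise l'équation déterminante du Théorème [Théorème 2](#theorem5){reference-type="ref" reference="theorem5"} et vérifie que les trois générateurs annoncés l'annulent pour chacune des pentes $1$, $-1$ et $y/x$ ; l'affichage devrait être `True`.

``` python
import sympy as sp

# Esquisse de vérification : les trois générateurs du lemme annulent
# l'équation déterminante pour chaque pente du tissu de Clairaut.
x, y = sp.symbols('x y')
L1 = sp.log(x**2 - y**2)
L2 = sp.log((x + y) / (x - y))
generateurs = [(x, y), (y, x), (x * L1 + y * L2, y * L1 + x * L2)]
pentes = [sp.Integer(1), sp.Integer(-1), y / x]

def eq_determinante(p, a1, a2):
    return sp.simplify(-a1 * sp.diff(p, x) - a2 * sp.diff(p, y) + sp.diff(a2, x)
                       + (sp.diff(a2, y) - sp.diff(a1, x)) * p - sp.diff(a1, y) * p**2)

print(all(eq_determinante(p, a1, a2) == 0 for p in pentes for a1, a2 in generateurs))
```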
## Le tissu de Zariski {#zariski}
Le tissu de Zariski est défini implicitement par $F(x,y,p)=p^3+x^my^n$ avec $m$ et $n$ dans $\mathbb{N}$. Ce tissu admet la factorisation suivante: $$\begin{aligned}
F(x,y,p)=(p+x^{\frac{m}{3}}y^{\frac{n}{3}})(p-e^{\frac{i \pi}{3}}x^{\frac{m}{3}}y^{\frac{n}{3}})(p-e^{-\frac{i \pi}{3}}x^{\frac{m}{3}}y^{\frac{n}{3}}).\end{aligned}$$
Le Lemme suivant est donné sans démonstration dans ([@henaut Exemple 3 p.14]):
**Lemma 9**. *The Zariski 3-web defined implicitly by $F(x,y,p)=p^3+x^my^n$ admits a $3$-dimensional Lie algebra of symmetries given by:\
$\bullet$ $\mathfrak{g}=\lbrace x^{-m/3} \partial_x, y^{n/3} \partial_y, x (-n+3)\partial_x +y (3+m) \partial_y\rbrace$ if $n\not=3$,*
*$\bullet$ $\mathfrak{g}=\lbrace x^{-m/3} \partial_x, y \partial_y, \frac{3 x}{m+3} \partial_x +y \ell n (y) \partial_y\rbrace$ if $n=3$.*
*Proof.* A symmetry $X=\alpha_1 \partial_x+\alpha_2\partial_y$ must satisfy, after simplification, the system formed by the following three partial differential equations: $\partial_x(\alpha_2)=0$, $\partial_y(\alpha_1)=0$ and $$(\partial_x(\alpha_1)-\partial_y(\alpha_2))x^{\frac{m}{3}}y^{\frac{n}{3}}+\frac{m}{3}x^{\frac{m}{3}-1}y^{\frac{n}{3}}\alpha_1+\frac{n}{3}x^{\frac{m}{3}}y^{\frac{n}{3}-1}\alpha_2=0.$$ The first two conditions imply that $\alpha_1 =\alpha _1 (x)$ and $\alpha_2 =\alpha_2 (y)$. Outside of $xy=0$, the last equation can be written in the form $\partial_x(\alpha_1)-\partial_y(\alpha_2)+\frac{m}{3x}\alpha_1+\frac{n}{3y}\alpha_2=0$. To determine $\alpha_1$, we must solve $\partial_x(\alpha_1)+\frac{m}{3x}\alpha_1+ b(y)=0$ with $b(y)=-\partial_y(\alpha_2)+\frac{n}{3y}\alpha_2$. Symmetrically, we have $\partial_y(\alpha_2)-\frac{n}{3y}\alpha_2- d(x)=0$ with $d(x)=\partial_x(\alpha_1) +\frac{m}{3x}\alpha_1$. This amounts to solving a differential equation of the form $z'+a\frac{z}{t} +b=0$ for $z(t)$. For $a\not=-1$ the solutions are of the form $z(t)=ct^{-a} -\frac{b}{a+1} t$, and for $a=-1$ they are given by $z(t)=-bt\ell n (t) +c t$. For $\alpha_1$, the condition $a=m/3\not =-1$ is always satisfied, so $\alpha_1 (x)=\mu x^{-m/3} +\delta \frac{3x}{m+3}$, where $\delta:=-b$, which gives $d(x)=\delta$. For the component $\alpha_2$, we must distinguish the cases $n\not=3$, i.e. $a\not=-1$, and $n=3$, i.e. $a=-1$. For $n\not=3$, we obtain $\alpha_2 (y)= \gamma y^{n/3}+\frac{3\delta}{3-n} y$, and $\alpha_2 (y)= \delta y \ell n (y) +\gamma y$ if $n=3$. We thus obtain fields of the form $$X=\mu (x^{-m/3} ,0) +\gamma (0,y^{n/3}) +\delta (x(-n+3),y(m+3))$$ if $n\not=3$ (up to a rescaling of $\delta$), and $X=\mu (x^{-m/3} ,0) +\gamma (0,y) +\delta \left ( \frac{3x}{m+3},y \ell n (y) \right )$ otherwise. ◻
## The standard hexagonal web
Following [@beau], p. 103, we introduce a particular class of 3-webs, called *hexagonal* webs. We work in $(\mathbb{C}^2,0)$, but the construction remains valid for $(\mathbb{R}^2,0)$.
**Definition 6**. *A 3-web in $(\mathbb{C}^2,0)$ is said to be hexagonal if every sufficiently small hexagon around $0$ passing through a leaf of each foliation is closed.*
The simplest hexagonal 3-web is the one given by the foliations $F_1(x,y)=x$, $F_2(x,y)=y$ and $F_3(x,y)=x+y$. However, as in Section [4.4](#muzs){reference-type="ref" reference="muzs"} ("Sur un tissu linéarisation"), the computation of the slopes again involves an infinite slope. We therefore present the web in a new coordinate system in order to carry out the computation of the symmetries.
**Lemma 10** (Reduction to prepared form). *The hexagonal web given by $F_1, \ F_2$ and $F_3$ is biholomorphically conjugate to the web given by the foliations $G_1(x,y)=x+y$, $G_2(x,y)=y-x$ and $G_3(x,y)=y$.*
*Proof.* We use once again the change of variables $u=x+y$, $v=y-x$. ◻
We can then compute the symmetry group of the web in this new coordinate system:
**Lemma 11**. *The Lie algebra $\mathfrak{g}$ of symmetries of the hexagonal web defined by the submersions $G_1(x,y)=x+y$, $G_2(x,y)=y-x$ and $G_3(x,y)=y$ is given by $\mathfrak{g}=\lbrace \partial_x, \partial_y,x\partial_x+ y\partial_y \rbrace$.*
*Proof.* The three slopes are $p_1=-1$, $p_2=1$ and $p_3=0$, with the associated vector fields $X_1=\partial_x-\partial_y$, $X_2=\partial_x+\partial_y$ and $X_3=\partial_x$. To find the symmetries of this web, we must solve the following system: $$\begin{aligned}
\left\{\begin{array}{cccc}
-\partial_x(\alpha_2)-\partial_x(\alpha_1)+\partial_y(\alpha_2)+\partial_y(\alpha_1)=0, \\
-\partial_x(\alpha_2)+\partial_x(\alpha_1)-\partial_y(\alpha_2)+\partial_y(\alpha_1)=0, \\
\partial_x(\alpha_2)=0.
\end{array}\right.\end{aligned}$$
The third equation implies that $\alpha_2=\alpha_2(y)$ is a function of the variable $y$ alone. Adding and subtracting the first two equations then shows that $\alpha_1$ depends only on the variable $x$ and that $\partial_x(\alpha_1)=\partial_y(\alpha_2)$; hence $\alpha_1=p+qx$ and $\alpha_2=p'+qy$ where $p,p',q$ are constants. Separating according to the independent constants, we obtain the three generators. ◻
# Symmetries of implicit webs and Darboux polynomials {#19}
In this section, after recalling some facts about Darboux polynomials, we review the links between symmetries and the module of derivations associated with a web via the discriminant curve of its presentation polynomial. Some of these results were obtained by A. Hénaut in [@henaut].
## Around the Darboux Theorem
In this section, we recall classical results on the Darboux theory of integrability of polynomial vector fields. We refer to [@yako] for more details. In what follows, $\mathbb{K}= \mathbb{R}$ or $\mathbb{C}$.\
Consider a vector field $X=\underset{i=1}{\overset{d}{\sum}}f_i(x)\partial_{x_i}$ in $\mathbb{K}^d$ with $f_i(x) \in \mathbb{K}[x]$, $x=(x_1,...,x_d)$.
**Definition 7**. *An algebraic curve $\mathscr{C}$ defined by $\lbrace g=0 \rbrace$ with $g \in \mathbb{K}[x]$ is said to be invariant under the vector field $X$ if there exists a polynomial $K \in \mathbb{K}[x]$, called the cofactor of $\mathscr{C}$, such that $X(g)=K.g$. The polynomial $g$ is then called a Darboux polynomial for the field $X$.*
**Theorem 4**. *Let $g\in \mathbb{K}[x]$ be a polynomial written in its irreducible factorization $g=g_1^{n_1}\cdots g_r^{n_r}$ in $\mathbb{K}[x]$. The vector field $X$ admits the curve $\mathscr{C}=\lbrace g=0 \rbrace$ as an invariant curve with a cofactor denoted $K_g$ if and only if each curve $\mathscr{C}_i=\lbrace g_i=0 \rbrace$ is invariant with a cofactor $K_{g_i}$ such that $K_g=n_1 K_{g_1}+\cdots +n_r K_{g_r}$.*
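For instance, for $X=x\partial_x+y\partial_y$ and $g=x^2y$, the curves $\{x=0\}$ and $\{y=0\}$ are invariant with cofactors $K_x=K_y=1$, and indeed $$X(x^2y)=2x^2y+x^2y=3x^2y, \quad \text{so that} \quad K_g=3=2\,K_x+1\cdot K_y.$$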
The following classical theorem, called the Darboux Theorem, gives a dynamical interpretation of invariance, in particular in terms of integrability via the existence of invariant algebraic curves.
**Theorem 5** (Darboux Theorem). *Consider a vector field $X=P(x,y)\partial_x + Q(x,y)\partial_y$ where $P(x,y),Q(x,y) \in \mathbb{K}[x,y]$. The degree of the field $X$ is $m=\max(\deg(P(x,y)),\deg(Q(x,y)))$. Suppose that $X$ admits $p$ invariant irreducible algebraic curves $\lbrace g_i=0 \rbrace$ with cofactors $K_i$, $i=1,...,p$.\
$i)$ There exist $\lbrace \lambda_i \rbrace_{i=1,...,p} \subset \mathbb{K}$, not all zero, such that $\underset{i=1}{\overset{p}{\sum}}\lambda_i K_i=0$ if and only if $g_1^{\lambda_1}...g_p^{\lambda_p}$ is a first integral of $X$.\
$ii)$ If $p \geq \frac{m(m+1)}{2}+1$, then there exist $\lbrace \lambda_i \rbrace_{i=1,...,p} \subset \mathbb{K}$, not all zero, such that $\underset{i=1}{\overset{p}{\sum}}\lambda_i K_i=0$.*
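As a simple illustration of $i)$, take $X=x\partial_x-y\partial_y$: the invariant curves $\{x=0\}$ and $\{y=0\}$ have cofactors $K_1=1$ and $K_2=-1$, so $\lambda_1=\lambda_2=1$ satisfies $\lambda_1K_1+\lambda_2K_2=0$, and indeed $g_1^{\lambda_1}g_2^{\lambda_2}=xy$ is a first integral since $$X(xy)=xy-xy=0.$$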
## Implicit webs and Darboux polynomials
We give here a new proof of the result from [@henaut] which links the symmetries of a web with its discriminant curve, which turns out to be a Darboux polynomial of each symmetry.\
**Theorem 6**. *Let $\mathcal{W}(F)$ be a $d$-web defined by the presentation polynomial $P_F(z;x,y)=a_0(x,y)z^d+...+a_d(x,y)$. If the vector field $X=\alpha_1\partial_x+\alpha_2\partial_y$ is a symmetry of the web $\mathcal{W}(F)$, then the discriminant of $P_F$, denoted $\Delta$, is a Darboux polynomial of $X$.*
*Proof.* We keep the notations introduced in Section [\[formepre\]](#formepre){reference-type="ref" reference="formepre"}. The discriminant is given by $\Delta=a_0^{2d-2}\underset{1\leq i < j \leq d}{\prod} (p_i-p_j)^2=a_0^{2d-2}\underset{1\leq i < j \leq d}{\prod} \Delta_{i,j}^2$, with $\Delta_{i,j}=p_i-p_j$. We have: $$\begin{aligned}
X.\Delta_{i,j}=X(p_i-p_j) =X(p_i)-X(p_j)=\alpha_1\partial_x(p_i)+\alpha_2\partial_y(p_i)-\alpha_1\partial_x(p_j)-\alpha_2\partial_y(p_j).\end{aligned}$$ By invariance, using [\[eq\]](#eq){reference-type="eqref" reference="eq"}, we obtain: $$\begin{aligned}
\alpha_1\partial_x(p_i)+\alpha_2\partial_y(p_i)=\partial_x(\alpha_2)+(\partial_y(\alpha_2)-\partial_x(\alpha_1))p_i-\partial_y(\alpha_1)p_i^2,\end{aligned}$$ and the same equation for $j$. We then have: $$\begin{aligned}
X(p_i-p_j)&=-(\partial_x(\alpha_1)-\partial_y(\alpha_2))(p_i-p_j)-\partial_y(\alpha_1)(p_i^2-p_j^2),\\
&=-\left [ \partial_x(\alpha_1)-\partial_y(\alpha_2)+\partial_y(\alpha_1)(p_i+p_j)\right ] (p_i-p_j), \\
&=\lambda_{i,j}\Delta_{i,j},\end{aligned}$$ where $\lambda_{i,j}=-(\partial_x(\alpha_1)-\partial_y(\alpha_2)+\partial_y(\alpha_1)(p_i+p_j)).$ Thus $\Delta_{i,j}$ is a Darboux polynomial for all $1\leq i<j\leq d$, and using the Darboux theorem, we obtain that $\Delta$ is a Darboux polynomial with cofactor $\lambda=\underset{1\leq i < j \leq d}{\sum}\lambda_{i,j}$. ◻
## Module of derivations and discriminant curves
Geometers have introduced an algebraic object, the module of derivations, which is expected to contain the essential topological information about the complement of an algebraic curve in $\mathbb{C}^2$. We refer to [@saito] for more details. The module of derivations is defined as follows:
**Definition 8**. *Let $\mathscr{C}$ be an algebraic curve defined by a polynomial $g$ $\in$ $\mathbb{K}[x,y]$. We denote by $Der(\mathscr{C})$ the set of logarithmic vector fields, i.e. the vector fields for which $g$ is a Darboux polynomial: $$\begin{aligned}
Der(\mathscr{C})=\lbrace X \in Der(\mathbb{C}) \ \vert \ \text{there exists a cofactor } K \text{ such that } \ X.g=K.g \rbrace .\end{aligned}$$*
In other words, the module of derivations consists of the vector fields leaving the curve $g=0$ invariant. The module of derivations $Der(\mathscr{C})$ is a Lie algebra (see [@saito]). A direct corollary of Theorem [Theorem 6](#discinv){reference-type="ref" reference="discinv"} is:
**Theorem 7**. *The Lie algebra of symmetries of a web is a Lie subalgebra of $Der(\Delta)$.*
Indeed, if $X$ is an infinitesimal generator of a symmetry of the web, then $X.\Delta =\lambda \Delta$ by Theorem [Theorem 6](#discinv){reference-type="ref" reference="discinv"} and $X \in Der(\Delta)$. Stability under the Lie bracket yields the theorem.
# Perspectives {#perspective}
The properties of implicit webs can partly be read off from their symmetry groups. These groups are obtained as symmetry groups of the polynomial differential equation with analytic coefficients representing the web. A web is a **distribution of vector fields** in the sense of Olver (see [@olver1], [@olver2]). It would certainly be interesting to study the G-structures associated with these distributions via Cartan's equivalence method.
The problem of the regularity of the Lie algebras of symmetries falls within the study of the regularity of solutions of partial differential equations with analytic coefficients. It would be worthwhile to examine the existing **Maillet-type theorems** (see [@gerard1; @gerard2], [@ger2] and Section [4.5](#regularite){reference-type="ref" reference="regularite"}) in order to make precise Alain Hénaut's intuitions about these objects.\
**Acknowledgements**: The authors thank Alain Hénaut for introducing them to webs, and for numerous discussions and remarks.
W. Blaschke, G. Bol, *Geometrie der Gewebe*, Springer, Berlin, 1938.
A. Beauville, *Géométrie des tissus (d'après S.S. Chern et P.A. Griffiths)*, Séminaire Bourbaki, 31e année, 1978/79, n°531.
E. Cartan, *Les sous-groupes des groupes continus de transformations*, Ann. Sci. Ecole Norm. Sup. 25, 57-194, 1908.
I. M. Gelfand, M. M. Kapranov, A. V. Zelevinsky, *Discriminants, Resultants and Multidimensional Determinants*, Birkhauser, Mathematics: Theory and Applications, 1994.
R. Gérard, *Sur le théorème de Maillet*, Funkcial. Ekvac., 34 (1991), 117-125.
R. Gerard, H. Tahara, *Formal Power Series Solutions of Nonlinear First Order Partial Differential Equations*, Funkcialaj Ekvacioj, 41 (1998), 133-166.
R. Gerard, H. Tahara, *Singular Nonlinear Partial Differential Equations*, Aspects of Mathematics, Vieweg, 280p ,1996.
V.V. Goldberg, V.V. Lychagin, *On the Blaschke conjecture for 3-webs*, J. Geom. Anal. 16 (1) 69--115, 2006.
J. Grifone, Z. Muzsnay, J. Saab, *On the linearizability of 3-webs*, Nonlinear Anal. 47 (4), 2643--2654, 2001.
A. Hénaut, *On planar web geometry through abelian relations and connections*, Annals of Mathematics, 159 (2004), 425--445.
A. Hénaut, *Lie Symmetries for Implicit Planar Webs*, J. Math. Sci. Univ. Tokyo, 29 (2022), 115-148.
Y. Ilyashenko, S. Yakovenko, *Lectures on Analytic Differential Equations*, American Mathematical Society, Volume 86, 2007.
B. Malgrange, *Sur le théorème de Maillet*, Asymptotic Analysis, 2 (1989), 1-4.
Z. Muzsnay, *On the linearizability of 3-webs: End of controversy*, C. R. Acad. Sci. Paris, Ser. I 356, 97-99, 2018.
P.J. Olver, *Applications of Lie Groups to Differential Equations*, Second Edition, Springer, 1998.
P.J. Olver, *Equivalence, Invariants and Symmetry*, Cambridge University Press, (1995).
J.V. Pereira, L.Pirio, *An Invitation to Web Geometry*, Springer, 2015.
K. Saito, *Theory of logarithmic differential forms and logarithmic vector fields*, J. Fac. Sci. Univ. Tokyo 27 (1980), 265-291.
| arxiv_math | {
"id": "2310.04093",
"title": "Application des groupes de Lie {\\`a} la recherche des sym{\\'e}tries des\n tissus implicites du plan",
"authors": "Jacky Cresson (LMAP), Jordy Palafox (CY)",
"categories": "math.DS",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
The generalized Turán number ${\rm ex}(n,K_s,F)$ denotes the maximum number of copies of $K_s$ in an $n$-vertex $F$-free graph. Let $kF$ denote $k$ disjoint copies of $F$. Gerbner, Methuku and Vizer \[DM, 2019, 3130-3141\] gave a lower bound for ${\rm ex}(n,K_3,2C_5)$ and obtained the magnitude of ${\rm ex}(n, K_s, kK_r)$. In this paper, we determine the exact value of ${\rm ex}(n,K_3,2C_5)$ and describe the unique extremal graph for large $n$. Moreover, we also determine the exact value of ${\rm ex}(n,K_r,(k+1)K_r)$, which generalizes some known results.
Keywords: Generalized Turán number, disjoint union, extremal graph.
author:
- Fangfang Zhang
- Yaojun Chen
- Ervin Győri
- Xiutao Zhu
date:
-
-
title: Maximum cliques in a graph without disjoint given subgraph
---
# Introduction
Let $G$ be a graph with the set of vertices $V(G)$. For two graphs $G$ and $H$, let $G\cup H$ denote the disjoint union of $G$ and $H$, and $kG$ denote $k$ disjoint copies of $G$. We write $G+H$ for the join of $G$ and $H$, the graph obtained from $G\cup H$ by adding all edges between $V(G)$ and $V(H)$. We use $K_n$, $C_n$, $P_n$ to denote the complete graph, cycle, and path on $n$ vertices, respectively. Let $K_s(G)$ denote the number of copies of $K_s$ in $G$.
For a graph $F$, the Turán number of $F$, denoted by ${\rm ex}(n,F)$, is the maximum number of edges in an $F$-free graph $G$ on $n$ vertices. In 1941, Turán [@turan] proved that the balanced complete $r$-partite graph on $n$ vertices, called the Turán graph $T_r(n)$, is the unique extremal graph of ${\rm ex}(n,K_{r+1})$. Starting from this, the Turán problem has attracted a lot of attention. The study of disjoint copies of a given graph in the context of Turán numbers is very rich. The first result is due to Erdős and Gallai [@Gallai] who determined the Turán number ${\rm ex}(n,kK_2)$ for all $n$. Later Simonovits [@Simonovits] and independently Moon [@Moon] determined the Turán number of disjoint copies of cliques. In [@Gorgol] Gorgol initiated the systematic investigation of Turán numbers of disjoint copies of graphs and proved the following.
**Theorem 1**. *(Gorgol [@Gorgol])[\[Thm1\]]{#Thm1 label="Thm1"} For every graph $F$ and $k\ge 1$, $${\rm ex}(n,kF)={\rm ex}(n,F)+O(n).$$*
In this paper we study the generalized Turán number of disjoint copies of graphs. The generalized Turán number ${\rm ex}(n,T,F)$ is the maximum number of copies of $T$ in any $F$-free graph on $n$ vertices. Obviously, ${\rm ex}(n,K_2,F)={\rm ex}(n,F)$. The earliest result in this topic is due to Zykov [@Zykov] who proved that ${\rm ex}(n,K_s,K_r)=K_s(T_{r-1}(n))$.
**Theorem 2**. *(Zykov [@Zykov])[\[Zykov\]]{#Zykov label="Zykov"} For all $n$, $${\rm ex}(n,K_s,K_r)=K_s(T_{r-1}(n)),$$ and $T_{r-1}(n)$ is the unique extremal graph.*
In recent years, the problem of estimating generalized Turán number has received a lot of attention. Many classical results have been extended to generalized Turán problem, see [@Alon; @chase; @Grzesik; @Hatami; @Luo; @Ma; @Wang; @zhu].
Theorem [\[Thm1\]](#Thm1){reference-type="ref" reference="Thm1"} implies that the classical Turán numbers ${\rm ex}(n,kF)$ and ${\rm ex}(n,F)$ always have the same order of magnitude. However, this is not true for generalized Turán numbers. The function ${\rm ex}(n,K_3,C_5)$ has attracted a lot of attention, see [@Bollobas; @Beka; @Beka2]; the best known upper bound is due to Lv and Lu:
**Theorem 3**. *(Lv and Lu [@Lv] )[\[Thm2\]]{#Thm2 label="Thm2"} ${\rm ex}(n,K_3,C_5)\le \frac{1}{2\sqrt{6}}n^{\frac{3}{2}}+o(n^{\frac{3}{2}})$.*
On the other hand, Gerbner, Methuku and Vizer [@Gerbner] proved that ${\rm ex}(n,K_3,2C_5)=\Theta(n^{2})$. This implies that the orders of magnitude of ${\rm ex}(n,H,F)$ and ${\rm ex}(n,H, kF)$ may differ. They also obtained a lower bound for ${\rm ex}(n,K_3,2C_5)$, given by joining a vertex to a copy of $T_2(n-1)$. In this paper, we show that the graph $K_1+T_2(n-1)$ is indeed the unique extremal graph for ${\rm ex}(n,K_3,2C_5)$.
**Theorem 4**. *For sufficiently large $n$, $${\rm ex}(n,K_3,2C_5)=\left\lfloor\frac{(n-1)^2}{4}\right\rfloor,$$ and $K_1+T_2(n-1)$ is the unique extremal graph.*
We also focus on the generalized Turán number of disjoint copies of cliques. Since ${\rm ex}(n,K_s,K_r)$ is known [@Zykov], it is natural to study the function ${\rm ex}(n,K_s,kK_r)$. Gerbner, Methuku and Vizer [@Gerbner] obtained the asymptotic value of ${\rm ex}(n,K_s, kK_r)$.
**Theorem 5**. *(Gerbner, Methuku and Vizer [@Gerbner]) If $s< r$, then $${\rm ex}(n,K_s,kK_r)=(1+o(1))\binom{r-1}{s}\left(\frac{n}{r-1}\right)^s.$$ If $s\ge r\ge 2$ and $k\ge 2$, then $${\rm ex}(n,K_s,kK_r)=\Theta(n^x),$$ where $x=\left\lceil\frac{kr-s}{k-1}\right\rceil-1$.*
Liu and Wang [@liu] determined the exact value of ${\rm ex}(n,K_r,2K_r)$ for $r\ge 3$ and $n$ sufficiently large. A new proof of ${\rm ex}(n,K_r,2K_r)$ can be found in [@Yuan] by Yuan and Yang. Gerbner and Patkós [@GP] determined ${\rm ex}(n,K_s,2K_r)$ for all $s\ge r\ge 3$ and $n$ sufficiently large. In this paper, we determine the value of ${\rm ex}(n,K_r,(k+1)K_r)$ for all $r\ge 2$, $k\ge 1$ and $n$ sufficiently large.
**Theorem 6**. *There exists a constant $n_0(k,r)$ depending on $k$ and $r\ge 2$ such that when $n\ge n_0(k,r)$, $${\rm ex}(n,K_r,(k+1)K_r)=K_r(K_k+T_{r-1}(n-k)),$$ and $K_k+T_{r-1}(n-k)$ is the unique extremal graph.*
The detailed proofs of Theorems [Theorem 4](#Thm3){reference-type="ref" reference="Thm3"} and [Theorem 6](#Thm5){reference-type="ref" reference="Thm5"} will be presented in Sections 3 and 4, respectively.
# Proof of Theorem [Theorem 4](#Thm3){reference-type="ref" reference="Thm3"} {#proof-of-theorem-thm3}
Suppose $n$ is large enough and let $G$ be an $n$-vertex $2C_5$-free graph with ${\rm ex}(n,K_3,2C_5)$ triangles. Since $K_1+T_2(n-1)$ contains no $2C_5$, we have $K_3(G)\ge \lfloor(n-1)^2/4\rfloor$. Next we will show that $G=K_1+T_2(n-1)$. Since $n$ is sufficiently large, Theorem [\[Thm2\]](#Thm2){reference-type="ref" reference="Thm2"} implies that $G$ must contain a copy of $C_5$, say $C=v_1v_2v_3v_4v_5v_1$, for otherwise $K_3(G)=O(n^{\frac{3}{2}})<\lfloor(n-1)^2/4\rfloor$. Since $G$ is $2C_5$-free, $G\setminus C$ contains no $C_5$. By Theorem [\[Thm2\]](#Thm2){reference-type="ref" reference="Thm2"} again, we have $$K_3(G\setminus C)\le\frac{1}{2\sqrt{2}}(n-5)^{\frac{3}{2}}+o((n-5)^{\frac{3}{2}}).$$ We claim that there is at least one vertex in $V(C)$ whose neighborhood contains a copy of $6P_4$. To prove this, we need a theorem obtained by Bushaw and Kettle [@Kettle].
**Theorem 7**. *(Bushaw and Kettle[@Kettle]) For $k\ge 2$, $\ell\ge 4$ and $n\ge 2\ell+2k\ell(\lceil\ell/2\rceil+1)\binom{\ell}{\lfloor\ell/2\rfloor}$, $${\rm ex}(n,kP_\ell)=\binom{k\lfloor\ell/2\rfloor-1}{2}+(k\lfloor\ell/2\rfloor-1)(n-k\lfloor\ell/2\rfloor+1)+\lambda,$$ where $\lambda=1$ if $\ell$ is odd, and $\lambda=0$ if $\ell$ is even.*
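Applied with $k=6$ and $\ell=4$ (so that $k\lfloor\ell/2\rfloor-1=11$ and the hypothesis on $n$ reads $n\ge 872$), this gives $${\rm ex}(n,6P_4)=\binom{11}{2}+11(n-11)=11(n-6),$$ while for smaller $n$ the trivial bound ${\rm ex}(n,6P_4)\le\binom{n}{2}\le\binom{872}{2}$ applies.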
By Theorem 7, we know ${\rm ex}(n,6P_4)\le \max\left\{\binom{872}{2},11(n-6)\right\}$. Now suppose that no vertex in $V(C)$ has a copy of $6P_4$ in its neighborhood. Then the number of triangles containing $v_i$ is at most $$e(G[N(v_i)])\le {\rm ex}(n,6P_4)=11n+o(n).$$ Therefore, the total number of triangles satisfies $$\begin{split}
K_3(G)&\le \frac{1}{2\sqrt{2}}n^{\frac{3}{2}}+o(n^{\frac{3}{2}})+55n+o(n)\\
&=\frac{1}{2\sqrt{2}}n^{\frac{3}{2}}+o(n^{\frac{3}{2}})\\
&<\frac{(n-1)^2}{4}.
\end{split}$$ The last inequality holds when $n$ is large. A contradiction.
Therefore, we may assume that $v_1$ is a vertex in $V(C)$ such that $G[N(v_1)]$ contains a copy of $6P_4$. If $G\setminus v_1$ contains a copy of $C_5$, then at least one copy of $P_4$ in $G[N(v_1)]$ does not intersect this $C_5$; together with $v_1$, this $P_4$ forms a second $C_5$, and hence we find two disjoint copies of $C_5$, a contradiction. Thus $G\setminus v_1$ is $C_5$-free. So we have $$\label{2.1}
K_3(G)\le e(G\setminus v_1)+K_3(G\setminus v_1).$$ So if we have $e(G\setminus v_1)+K_3(G\setminus v_1)\le \left\lfloor\frac{(n-1)^2}{4}\right\rfloor$, then the proof is completed. To prove this, we need the following lemma.
**Lemma 1**. *Let $n\ge 2\binom{68}{3}$. If $G$ is a $C_5$-free graph on $n$ vertices, then $$e(G)+K_3(G)\le \left\lfloor\frac{n^2}{4}\right\rfloor,$$ and equality holds if and only if $G=T_2(n)$.*
For each integer $n$, let $G_n$ be a $C_5$-free graph on $n$ vertices such that $e(G_n)+K_3(G_n)$ is maximum. If $G_n$ is also triangle-free, then by Turán's theorem [@turan], $e(G_n)\le \left\lfloor\frac{n^2}{4}\right\rfloor$. Hence, $e(G_n)+K_3(G_n)\le \left\lfloor\frac{n^2}{4}\right\rfloor$ and equality holds if and only if $G_n=T_2(n)$, so we are done.
Next we shall prove that for $n\ge 2\binom{68}{3}$, each $G_n$ is triangle-free. To do this, let us define a function $$\phi(n):=e(G_n)+K_3(G_n)-\left\lfloor\frac{n^2}{4}\right\rfloor.$$ Since $T_2(n)$ is $C_5$-free and $e(T_2(n))+K_3(T_2(n))= \left\lfloor\frac{n^2}{4}\right\rfloor$, we have $\phi(n)\ge 0$. We claim that for $n\ge 68$, if $G_n$ contains a triangle, then $$\begin{aligned}
\label{eq2.2}
\phi(n)<\phi(n-1)-1. \end{aligned}$$
First suppose that $\delta(G_n)\ge \frac{n}{4}-1$. Let $xy$ be an edge of $G_n$ contained in the largest number of triangles. Set $W=N(x)\cap N(y)=\{z_1,\ldots,z_w\}$. Since $G_n$ is $C_5$-free, $G_n[W]$ contains no edge unless $w\le 2$. Let $D_0=N(x)\setminus(W\cup\{y\})$, $D_i=N(z_i)\setminus(W\cup\{x,y\})$ for $1\le i\le w$ and $D_{w+1}=N(y)\setminus (W\cup \{x\})$. We next show that the sets $D_i$ satisfy the following properties for $0\le i\le w+1$:
**(P1)** $|D_i|\ge \frac{n}{4}-w-2$ for $i=0,w+1$ and $|D_j|\ge \frac{n}{4}-4$ for $1\le j\le w$;
**(P2)** $D_i\cap D_j=\emptyset$ for $0\le i\not=j\le w+1$;
**(P3)** There are no edges between $D_i$ and $D_j$ for $0\le i\not=j\le w+1$.
Since $\delta(G_n)\ge \frac{n}{4}-1$, **(P1)** is clearly true. Since $G_n$ is $C_5$-free, it is easy to see that $D_i\cap D_j=\emptyset$ for $1\le i\not=j\le w$. Suppose that $D_0\cap D_i\neq \emptyset$ or $D_{w+1}\cap D_i\neq \emptyset$ for some $1\le i\le w$; by symmetry, let $v\in D_0\cap D_i$. Then by the choice of $xy$, we have $w\ge 2$. For $1\le j\le w$ and $j\neq i$, $vz_iyz_jxv$ is a copy of $C_5$, a contradiction. Thus **(P2)** holds. Suppose $uv$ is an edge with $u\in D_i$, $v\in D_j$ and $i\neq j$. Then $uz_iyz_jvu$ is a copy of $C_5$ if $i,j\in[1,w]$; $uz_iyxvu$ or $uz_ixyvu$ is a copy of $C_5$ if $i\in[1,w]$ and $j\in \{0,w+1\}$; and $uxz_1yvu$ is a copy of $C_5$ if $i=0$, $j=w+1$, a contradiction. This implies that **(P3)** holds.
Let $N=V(G_n)\setminus\left(W\cup \{x,y\}\cup\bigcup_{i=0}^{w+1} D_i\right)$. By **(P1)** and **(P2)**, we have $$n=|N|+\sum_{i=0}^{w+1}|D_i|+w+2\ge |N|+2(\frac{n}{4}-w-2)+w(\frac{n}{4}-4)+w+2,$$ which implies $w\le 2$, $|N|\le \frac{n}{4}+7$ and $D_i\not= \emptyset$ when $n\ge 61$. By the choice of $xy$, no edge of $G_n$ lies in more than two triangles, so each vertex of $D_i$ has at most two neighbors in $G_n[D_i]$ for $0\le i\le w+1$. By **(P3)** and $\delta(G_n)\ge \frac{n}{4}-1$, each vertex in $D_i$ has at least $\frac{n}{4}-4$ neighbors in $N$. Let $v_0\in D_0$ and $v_1\in D_{w+1}$. Because $n\ge 68$, we can deduce that $2(\frac{n}{4}-4)>\frac{n}{4}+7 \ge |N|$ and hence $N(v_0)\cap N(v_1)\cap N\neq \emptyset$. Then $uv_0xyv_1u$ is a copy of $C_5$, where $u\in N(v_0)\cap N(v_1)\cap N$, a contradiction. We are done if the minimum degree is at least $\frac{n}{4}-1$.
Therefore, there is a vertex $v$ in $G_n$ with $d(v)< \frac{n}{4}-1$ when $n\ge 68$. Because $G_n$ is $C_5$-free, $G_n[N(v)]$ is a disjoint union of stars and triangles, which implies $e(G_n[N(v)])\le d(v)$. If we delete $v$ from $G_n$, we destroy at most $d(v)$ triangles and delete $d(v)$ edges. Hence, $$\begin{aligned}
&\phi(n-1)-\phi(n)\\
=&\left\lfloor\frac{n^2}{4}\right\rfloor-\left\lfloor\frac{(n-1)^2}{4}\right\rfloor-\{(e(G_n)+K_3(G_n))-(e(G_{n-1})+K_3(G_{n-1}))\} \notag\\
\ge& \frac{2n-2}{4}-\{(e(G_n)+K_3(G_n))-(e(G_{n}-v)+K_3(G_{n}-v))\}\notag\\
\ge& \frac{2n-2}{4}-2d(v)> \frac{2n-2}{4}-2(\frac{n}{4}-1)>1.\end{aligned}$$ Hence our claim (inequality [\[eq2.2\]](#eq2.2){reference-type="ref" reference="eq2.2"}) holds for $n\ge 68$.
Note that for $n_0\ge 68$, if $G_{n_0}$ contains no triangle, then $\phi(n_0)=0$. Moreover, for every $n\ge n_0$, the graph $G_n$ contains no triangle either. Otherwise, we could find an integer $n>n_0$ such that $G_n$ contains a triangle but $G_{n-1}$ is triangle-free; but then $\phi(n)< \phi(n-1)-1<0$ by inequality [\[eq2.2\]](#eq2.2){reference-type="ref" reference="eq2.2"}, which contradicts $\phi(n)\ge 0$. Now let $n_0$ be the first integer after $68$ such that $G_{n_0}$ is triangle-free. Iterating inequality [\[eq2.2\]](#eq2.2){reference-type="ref" reference="eq2.2"} for $68< n\le n_0-1$ gives $$0\le \phi(n_0-1)\le \phi(68)-(n_0-1-68)\le \binom{68}{2}+\binom{68}{3}+69-n_0.$$ This implies $n_0\le 2\binom{68}{3}$. Thus $G_n$ must be triangle-free for $n\ge 2\binom{68}{3}\ge n_0$. So $e(G_n)+K_3(G_n)=e(G_n)=\lfloor n^2/4\rfloor$ and $G_n=T_2(n)$ by Turán's theorem [@turan]. The proof of Lemma [Lemma 1](#lemma){reference-type="ref" reference="lemma"} is completed.$\hfill\hbox{\vrule height8pt depth0pt
\vbox{\hrule width7.2pt\vskip 7.2pt\hrule width7.2pt}\vrule
height8pt depth0pt}\smallskip$
Combining equation ([\[2.1\]](#2.1){reference-type="ref" reference="2.1"}) and Lemma [Lemma 1](#lemma){reference-type="ref" reference="lemma"}, we can see that when $n$ is large, $K_3(G)\le \left\lfloor\frac{(n-1)^2}{4}\right\rfloor$ and equality holds if and only if $G=K_1+T_2(n-1)$. The proof of Theorem [Theorem 4](#Thm3){reference-type="ref" reference="Thm3"} is completed. $\hfill\blacksquare$
# Proof of Theorem [Theorem 6](#Thm5){reference-type="ref" reference="Thm5"} {#proof-of-theorem-thm5}
We prove it by induction on $r$; throughout, we assume that $n\ge n_{0}(k,r)$ for a sufficiently large constant $n_0(k,r)$. The base case $r=2$ is the celebrated Erdős-Gallai Theorem [@Gallai], which says that $${\rm ex}(n,K_2,(k+1)K_2)=\max\left\{\binom{2k+1}{2},(n-k)k+\binom{k}{2}\right\}.$$ As $n\ge n_0(k,2)$, we know ${\rm ex}(n,K_2,(k+1)K_2)=K_2(K_k+T_1(n-k))$.
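Indeed, $K_k+T_1(n-k)$ is the join of a clique on $k$ vertices with an independent set on $n-k$ vertices, so $$K_2(K_k+T_1(n-k))=\binom{k}{2}+k(n-k),$$ which is the second term in the Erdős-Gallai bound and exceeds $\binom{2k+1}{2}$ once $n$ is large enough.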
Let $r\ge 3$ and suppose that the result holds for all $r'<r$. Next we consider ${\rm ex}(n,K_r,(k+1)K_r)$. Let $G$ be a $(k+1)K_r$-free graph on $n$ vertices with ${\rm ex}(n,K_r,(k+1)K_r)$ copies of $K_r$. We may assume that $G$ contains $k$ disjoint copies of $K_r$. Otherwise, we can add edges to $G$ one by one until the resulting graph contains $k$ disjoint copies of $K_r$; since adding a single edge increases the maximum number of pairwise disjoint copies of $K_r$ by at most one, the resulting graph is still $(k+1)K_r$-free. But at least one $K_r$ among these $k$ disjoint copies is new, which implies that the number of copies of $K_r$ has increased, a contradiction. Let $$I=\{X_1,\ldots,X_k\}$$ be a set of $k$ disjoint $r$-cliques in $G$, where $X_i$ is a copy of $K_r$. Let $V(I)=\cup_{i=1}^kV(X_i)$ and $N=G\setminus V(I)$. Clearly, $N$ contains no $K_r$. We say a vertex $v$ in $I$ is joined to an $(r-1)$-clique in $N$ if $v$ is adjacent to all vertices of this $(r-1)$-clique. For each $X_i$, $i\in [k]$, we have the following property.
**Claim 1**. *Each $X_i$ contains at most one vertex which is joined to at least $kr+1$ disjoint $(r-1)$-cliques in $N$.*
If not, suppose $u_1,u_1'\in V(X_1)$ are both joined to at least $kr+1$ disjoint $(r-1)$-cliques in $N$. First we can find an $(r-1)$-clique joined to $u_1$ in $N$. Since $u'_1$ is also joined to at least $kr+1$ disjoint $(r-1)$-cliques in $N$, we can find another $(r-1)$-clique joined to $u'_1$ which does not intersect the $(r-1)$-clique joined to $u_1$. Together with $\{X_2,\ldots, X_k\}$, we find a copy of $(k+1)K_r$, a contradiction. $\hfill\hbox{\vrule height8pt depth0pt
\vbox{\hrule width7.2pt\vskip 7.2pt\hrule width7.2pt}\vrule
height8pt depth0pt}\smallskip$
By Claim 1, we may relabel the cliques so that $A=\{X_1,\ldots,X_a\}$ is the set of all members of $I$ containing a (then unique) vertex, say $u_i\in X_i$, that is joined to at least $kr+1$ disjoint $(r-1)$-cliques in $N$. Let $U=\{u_1,\ldots,u_a\}$.
Since $N$ is $K_r$-free, each $K_r$ in $G$ must contain a vertex of $V(I)$. Then all $r$-cliques can be divided into two classes: the set of cliques in which all vertices are contained in $V(N)\cup U$, and the set of cliques containing at least one vertex in $V(I)\setminus U$. We simply use $K_r(U)$ and $K_r(\overline{U})$ to denote the number of copies of $K_r$ in these two classes, respectively.
If a $K_r$ in the first class contains $s$ vertices in $U$ and $r-s$ vertices in $N$, then the number of $K_r$'s of this type is at most $\binom{a}{s}K_{r-s}(N)$. Since $N$ is $K_r$-free and by Theorem [\[Zykov\]](#Zykov){reference-type="ref" reference="Zykov"}, which says ${\rm ex}(n,K_s,K_r)=K_s(T_{r-1}(n))$, we have $K_{r-s}(N)\le K_{r-s}\left (T_{r-1}(n-kr)\right)\le \binom{r-1}{r-s}\left(\frac{n-kr}{r-1}\right)^{r-s}$. Then $$\begin{aligned}
\label{3.1}
K_r(U)&\le \sum_{s=1}^r\binom{a}{s}K_{r-s}(N)\nonumber\\
&\le a\left(\frac{n-kr}{r-1}\right)^{r-1}+\binom{a}{2}\binom{r-1}{r-2}\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}).\end{aligned}$$
Next we estimate $K_r(\overline{U})$. Each vertex $v\in V(I)\setminus U$ is joined to at most $kr$ independent $(r-1)$-cliques in $N$. Hence the number of copies of $K_r$ containing $v$ and $r-1$ vertices of $N$ is at most $$\begin{split}
K_{r-1}(G[N(v)\cap V(N)])&\le {\rm ex}(n-kr,K_{r-1}, (kr+1)\cdot K_{r-1})\\
&=K_{r-1}\left (K_{kr}+T_{r-2}(n-2kr)\right )\\
&\le(kr)\left(\frac{n-2kr}{r-2}\right)^{r-2},
\end{split}$$ where the second equality comes from the induction hypothesis. Any other copy of $K_r$ in the second class contains at most $r-2$ vertices in $N$ and at least one vertex in $V(I)\setminus U$. So the number of such $r$-cliques is at most $$\sum_{s=2}^r\left(\binom{kr}{s}-\binom{a}{s}\right)K_{r-s}(N)
\le \left(\binom{kr}{2}-\binom{a}{2}\right)\binom{r-1}{r-2}\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}).$$ Hence, $$\label{3.2}
K_r(\overline{U})\le \left(kr+\left(\binom{kr}{2}-\binom{a}{2}\right)\binom{r-1}{r-2}\right)\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}).$$
Therefore, by inequality ([\[3.1\]](#3.1){reference-type="ref" reference="3.1"}) and ([\[3.2\]](#3.2){reference-type="ref" reference="3.2"}), we have $$\begin{aligned}
\label{3.3}
K_r(G)\le a\left(\frac{n-kr}{r-1}\right)^{r-1}+\left(kr+\binom{kr}{2}\binom{r-1}{r-2}\right)\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}).\end{aligned}$$ On the other hand, since $K_k+T_{r-1}(n-k)$ is $(k+1)K_r$-free, we know that $$\begin{aligned}
\label{3.4}
K_r(G)\ge k\left(\frac{n-k}{r-1}\right)^{r-1}+O(n^{r-2}).\end{aligned}$$ When $n$ is greater than some constant $n_0(k,r)$, comparing the leading terms of inequalities ([\[3.3\]](#3.3){reference-type="ref" reference="3.3"}) and ([\[3.4\]](#3.4){reference-type="ref" reference="3.4"}) yields $a\ge k$; since $a\le k$ trivially, we get $a=k$ and then $U=\{u_1,\ldots,u_k\}$.
Let $G'=G\setminus U$. We claim that $G'$ is also $K_r$-free. Suppose not, i.e., $G'$ contains an $r$-clique, denoted by $X_0'$. Since each $u_i$ is joined to at least $kr+1$ independent copies of $K_{r-1}$ in $N$, at least $(k-1)r+1$ of them are disjoint from $X_0'$ for each $i\in [k]$. Then we can find an $r$-clique $X_1'$ such that $u_1\in X_1'$ and $V(X_1')\cap V(X_0')=\emptyset$. Next, we claim that we may find $k$ pairwise disjoint $r$-cliques, each disjoint from $X_0'$. Suppose we have found pairwise disjoint $r$-cliques $X_1',\ldots,X_{i-1}'$ such that $u_j\in X_j'$ for $j\in [i-1]$ and $i\le k$. Then, in $G'[N(u_i)]$, there are at least $(k-1)r+1-(i-1)(r-1)\ge 1$ independent $(r-1)$-cliques which are disjoint from $\{X_0',X_1',\ldots,X_{i-1}'\}$. That is, we can choose an $(r-1)$-clique and thus an $r$-clique $X_i'$ such that $u_i\in X_i'$ and $X_0',X_1',\ldots,X_{i}'$ are pairwise disjoint. The procedure can be continued until we find $k$ independent $r$-cliques $X_1',\ldots,X_{k}'$. Then $X_0',X_1',\ldots,X_{k}'$ form a copy of $(k+1)K_r$, a contradiction.
Since $G'$ is $K_r$-free, by Zykov's Theorem, $K_{r-i}(G')\le K_{r-i}(T_{r-1}(n-k))$, with equality for $i=0$ if and only if $G'=T_{r-1}(n-k)$. Thus $$K_r(K_k+T_{r-1}(n-k))\le K_r(G)\le \sum_{i=0}^{r}\binom{k}{i}K_{r-i}(G')\le \sum_{i=0}^{r}\binom{k}{i}K_{r-i}(T_{r-1}(n-k))=K_r(K_k+T_{r-1}(n-k)).$$ Equality throughout forces $G'=T_{r-1}(n-k)$ and hence $G=K_k+T_{r-1}(n-k)$. The proof of Theorem [Theorem 6](#Thm5){reference-type="ref" reference="Thm5"} is completed. $\hfill\blacksquare$
# Acknowledgements
The research of the authors is partially supported as follows: Győri by the National Research, Development and Innovation Office NKFIH, grants K132696, SNN-135643 and K126853; Chen by NSFC under grant numbers 12161141003 and 11931006; and Zhang by NSFC under grant number 12101298.
N. Alon and C. Shikhelman, Many $T$ copies in $H$-free graphs, *J. Combin. Theory Ser. B* 121(2016), 146-172.
B. Bollobás and E. Győri, Pentagons vs. triangles, *Discrete Math.* 308(2008), 4332-4336.
N. Bushaw and N. Kettle, Turán numbers of multiple paths and equibipartite forests, *Combin. Probab. Comput.* 20(2011), 837-853.
Z. Chase, The maximum number of triangles in a graph of given maximum degree, *Adv. Com.* (2020), paper No.10, 5pp.
P. Erdős and T. Gallai, On maximal paths and circuits of graphs, *Acta Math. Acad. Sci. Hungar.* 10(1959), 337-356.
B. Ergemlidze, E. Győri, A. Methuku and N. Salia, A note on the maximum number of triangles in a $C_5$-free graph, *J. Graph Theory* 90(2019), 227-230.
B. Ergemlidze and A. Methuku, Triangles in $C_5$-free graphs and hypergraphs of girth six, *J. Graph Theory* 99(2022), 26-39.
D. Gerbner, A. Methuku and M. Vizer, Generalized Turán problems for disjoint copies of graphs, *Discrete Math.* 342(2019), 3130-3141.
D. Gerbner and B. Patkós, Generalized Turán results for intersecting cliques, arXiv preprint (2021), arXiv:2105.07297v1.
I. Gorgol, Turán numbers for disjoint copies of graphs, *Graphs Combin.* 27(2011), 661-667.
A. Grzesik, On the maximum number of five-cycles in a triangle-free graph, *J. Combin. Theory Ser. B* 102(2012), 1061-1066.
H. Hatami, J. Hladký, D. Král, S. Norine and A. Razborov, On the number of Pentagons in triangle-free graphs, *J. Combin. Theory Ser. A* 120(2013), 722-732.
E.L. Liu and J. Wang, The generalized Turán problem of two intersecting cliques, arXiv preprint (2021), arXiv:2101.08004.
Z. Lv and M. Lu, Many triangles in $C_5$-free graphs, personal communication.
R. Luo, The maximum number of cliques in graphs without long cycles, *J. Combin. Theory Ser. B* 128(2017), 219-226.
J. Ma and Y. Qiu, Some sharp results on the generalized Turán numbers, *European J. Combin.* 84(2020), 103026, 16pp.
J.W. Moon, On independent complete subgraphs in a graph, *Canad. J. Math.* 20(1968), 95-102.
M. Simonovits, A method for solving extremal problems in extremal graph theory, *In Theory of graphs*, Academic Press (1966), 279-319.
P. Turán, On an extremal problem in graph theory, *Mat. Fiz. Lapok* 48(1941), 436-452.
J. Wang, The shifting method and generalized Turán number of matchings, *European J. Combin.* 85(2020), 7pp.
X.L. Yuan and W.H. Yang, On generalized Turán number of two disjoint cliques, *Graphs and Combin.* 38(2022), 116, 9pp.
X.T. Zhu and Y.J. Chen, Generalized Turán number for linear forests. *Discrete Math.* 345(2022), 112997, 12pp.
A. Zykov, On some properties of linear complexes, *Mat. Sbornik N. S.* 24(1949), 163-188.
| arxiv_math | {
"id": "2309.09603",
"title": "Maximum cliques in a graph without disjoint given subgraph",
"authors": "Fangfang Zhang, Yaojun Chen, Ervin Gyori, Xiutao Zhu",
"categories": "math.CO",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
$A111384(n)$ is an upper bound for the number of primes that can be written as a sum of three distinct primes selected from a set of $n$ primes. Is this bound sharp?
address: Freiburg, Germany
author:
- Markus Sigg
date: September 26, 2023.
title: A note on OEIS sequence A111384
---
AMS subject classification 2010: 05A20, 11B83, 11L20.\
Keywords: Encyclopedia of Integer Sequences, Prime Sums of Primes.
# Introduction
With the natural numbers $\mathbb{N}:= \{0,1,2,\dots\}$, let $\mathbb{T}:= \{ 3 \} \cup (\mathbb{N}
\setminus 3\mathbb{N})$. For non-empty $\mathbb{M}\subset \mathbb{T}$ and finite $A \subset \mathbb{M}$ set $$S_\mathbb{M}(A) \ := \ \mathbb{M}\cap \{ a + b + c : a,b,c \in A, \ a < b < c \}$$ and for $n \in \mathbb{N}$: $$s_\mathbb{M}(n) \ := \ \max \, \{ |S_\mathbb{M}(A)| : A \subset \mathbb{M}, \ |A| = n \}$$
We shall prove that OEIS sequence A111384, see [@A111384], gives an upper bound for $s_\mathbb{M}(n)$, i. e. $$s_\mathbb{M}(n) \ \le \ A111384(n) \ = \
\binom{n}{3} -
\binom{\lfloor \frac n2 \rfloor}{3} -
\binom{\lceil \frac n2 \rceil}{3} ,$$ which therefore is true in particular for $\mathbb{M}= \mathbb{P}$, the set of prime numbers. Furthermore we will show that this inequality is in fact an equality in the case of $\mathbb{M}= \mathbb{T}$, and dare to ask if it is an equality even in the case of $\mathbb{M}= \mathbb{P}$.
Let's remark beforehand that A111384 starts A111384(0) = A111384(1) = A111384(2) = 0, and in general $$\label{eq1}
A111384(n) = \begin{cases}
\displaystyle \frac18 (n-2)n^2 & \text{for even $n$},\\[1em]
\displaystyle \frac18 (n-2)(n^2-1) & \text{for odd $n$},
\end{cases}$$ thus for all $n \in \mathbb{N}$: $$\label{eq2}
A111384(n) \ \ge \ \frac18 (n-2)(n^2-1)$$
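For instance, $A111384(4)=\binom{4}{3}-2\binom{2}{3}=4=\tfrac18\cdot 2\cdot 4^2$ and $A111384(5)=\binom{5}{3}-\binom{2}{3}-\binom{3}{3}=9=\tfrac18\cdot 3\cdot(5^2-1)$, in accordance with ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}).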
# Statements
**Proposition 1**. *$s_\mathbb{M}(n) \le A111384(n)$ for all $\mathbb{M}\subset \mathbb{T}$ and $n \in \mathbb{N}$.*
*Proof.*
This is trivial for $n < 3$, so let $n \ge 3$ and $A \subset \mathbb{M}$ with $|A| = n$. For $m \in \{0,1,2\}$ set $A_m := \{ a \in A : a \equiv m \pmod 3 \}$ and $\alpha_m := |A_m|$. We have to show $|S_\mathbb{M}(A)| \le A111384(n)$.
The case of $3 \not\in A$: Here, $A = A_1 \uplus A_2$. There are $t(n) := n(n-1)(n-2)/6$ triples $(a,b,c)$ with $a,b,c \in A$ and $a < b < c$, see [@A000292]. Because $3 \ | \ a + b + c$ for $a,b,c \in A_1$ or $a,b,c \in A_2$, at least $t(\alpha_1) + t(\alpha_2) = t(\alpha_1) + t(n-\alpha_1)$ of these triples cannot contribute to $S_\mathbb{M}(A)$, hence $$|S_\mathbb{M}(A)|
\ \le \ t(n) - t(\alpha_1) - t(n-\alpha_1)
\ = \ f_n(\alpha_1),$$ where for $\alpha \in \{0, \dots, n\}$ $$f_n(\alpha) \ := \ \frac18 (n-2)n^2 - \frac12 (n - 2) \left(\alpha - \frac n2\right)^2.$$ For even $n$, the maximum of $f_n$ is $f_n(n/2) = (n-2)n^2/8$. For odd $n$, the maximum of $f_n$ is $f_n((n-1)/2) = (n-2)(n^2-1)/8$. With ([\[eq1\]](#eq1){reference-type="ref" reference="eq1"}) we get $|S_\mathbb{M}(A)| \le f_n(\alpha_1) \le A111384(n)$.
The case of $3 \in A$: Here, $A = \{3\} \uplus A_1 \uplus A_2$. For $a,b \in A_1 \cup A_2$, $3 + a + b \in \mathbb{M}$ is possible only when $a,b \in A_1$ or $a,b \in A_2$, so $$\begin{aligned}
|S_\mathbb{M}(A)|
&\le& |S_\mathbb{M}(A_1 \cup A_2)| + \frac12 \alpha_1(\alpha_1-1) + \frac12 \alpha_2(\alpha_2-1)\\
&\le& f_{n-1}(\alpha_1) + \frac12 \alpha_1(\alpha_1-1) + \frac12 (n-1-\alpha_1)(n-2-\alpha_1)\\
&=& g_n(\alpha_1),
\end{aligned}$$ where for $\alpha \in [0, n-1]$ $$g_n(\alpha) \ := \ \frac12 (n-1)(n-2) + \frac12 (n-5)(n-1-\alpha)\alpha.$$ The maximum of $g_n$ is $g_n((n-1)/2) = (n^3 - 3n^2 - n + 3)/8 =: v(n)$. With inequality ([\[eq2\]](#eq2){reference-type="ref" reference="eq2"}) we get $A111384(n) - v(n) \ge (n^2-1)/8 > 0$, so $|S_\mathbb{M}(A)| \le g_n(\alpha_1) \le v(n) < A111384(n)$.
**Proposition 2**. *$s_\mathbb{T}(n) = A111384(n)$ for all $n \in \mathbb{N}$.*
*Proof.*
For $k \in \mathbb{N}$ set $a_k := 3^{k+1}+1$ for odd $k$ and $a_k := 3^{k+1}+2$ for even $k$, and with this $A_n := \{a_1,\dots,a_n\} \subset \mathbb{T}$ for $n \in \mathbb{N}$.
Looking at its representation in base $3$ makes it obvious that $a+b+c$ for $a,b,c \in A_n, a<b<c$ is unique, i. e. $a+b+c \ne x+y+z$ for $a,b,c,x,y,z \in A_n$ with $a < b < c, x < y < z$ and $(a,b,c) \ne (x,y,z)$. The considerations in the first part of the proof of Proposition [Proposition 1](#prop){reference-type="ref" reference="prop"} show $|S_\mathbb{T}(A_n)| = A111384(n)$.
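For instance, for $n=3$ we have $A_3=\{10,29,82\}$, written $101$, $1002$ and $10001$ in base $3$; the only admissible sum is $10+29+82=121\equiv 1 \pmod 3$, so $121\in\mathbb{T}$ and $|S_\mathbb{T}(A_3)|=1=A111384(3)$.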
Proposition [Proposition 1](#prop){reference-type="ref" reference="prop"} shows in particular that $s_\mathbb{P}(n) \le A111384(n)$ for all $n \in \mathbb{N}$. From [@vps] it is known that $s_\mathbb{P}(n) = A111384(n)$ for $n \le 9$, for example $$|S_\mathbb{P}(\{499,1483,2777,4363,5237,5507,6043,6197\})| = 48 = A111384(8).$$ This leads to
**Question 1**. *$s_\mathbb{P}(n) = A111384(n)$ for all $n \in \mathbb{N}$?*
OEIS Foundation Inc. (2023), Entry [A000292](https://oeis.org/A000292) in The On-Line Encyclopedia of Integer Sequences, <https://oeis.org/A000292>.
OEIS Foundation Inc. (2023), Entry [A111384](https://oeis.org/A111384) in The On-Line Encyclopedia of Integer Sequences, <https://oeis.org/A111384>.
James Youlton, Tribal Primes Contest, [http://v-sonline.com](http://v-sonline.com/VSPCs/index.pl?C4).
| arxiv_math | {
"id": "2309.14840",
"title": "A note on OEIS sequence A111384",
"authors": "Markus Sigg",
"categories": "math.CO",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
We find a finite free resolution of the counit of the free unitary quantum groups of van Daele and Wang and, more generally, Bichon's universal cosovereign Hopf algebras with a generic parameter matrix. This allows us to compute Hochschild cohomology with 1-dimensional coefficients for all these Hopf algebras. In fact, the resolutions can be endowed with a Yetter-Drinfeld structure. General results of Bichon then allow us to compute also the corresponding bialgebra cohomologies.
Finding the resolution rests on two pillars. We take as a starting point the resolution for the free orthogonal quantum group presented by Collins, Härtel, and Thom or its algebraic generalization to quantum symmetry groups of bilinear forms due to Bichon. Then we make use of the fact that the free unitary quantum groups and some of its non-Kac versions can be realized as a glued free product of a (non-Kac) free orthogonal quantum group with $\mathbb Z_2$, the finite group of order 2. To obtain the resolution also for more general universal cosovereign Hopf algebras, we extend Gromada's proof from compact quantum groups to the framework of matrix Hopf algebras. As a byproduct of this approach, we also obtain a projective resolution for the freely modified bistochastic quantum groups.
Only a special subclass of free unitary quantum groups and universal cosovereign Hopf algebras decompose as a glued free product in the described way. In order to verify that the sequence we found is a free resolution in general (as long as the parameter matrix is generic, two conditions which are automatically fulfilled in the free unitary quantum group case), we use the theory of Hopf bi-Galois objects and Bichon's results on monoidal equivalences between the categories of Yetter-Drinfeld modules over universal cosovereign Hopf algebras for different parameter matrices.
address:
- "Université de Franche-Comté, CNRS, UMR 6623, LmB, F-25000 Besançon, France, [![image](orcidlogo.pdf){width=\"\\\\fontcharht\\\\font`W\"} 0000-0002-4365-8270](https://orcid.org/0000-0002-4365-8270) "
- "Université de Franche-Comté, CNRS, UMR 6623, LmB, F-25000 Besançon, France, [![image](orcidlogo.pdf){width=\"\\\\fontcharht\\\\font`W\"} 0000-0001-8586-8025](https://orcid.org/0000-0001-8586-8025)"
- |
Saarland University, Fachbereich Mathematik\
[ ![image](orcidlogo.pdf){width="\\fontcharht\\font`W"} 0000-0003-4029-1108](https://orcid.org/0000-0003-4029-1108)
- "Instytut Matematyczny, Uniwersytet Wrocławski, pl. Grunwaldzki 2, 50-348 Wrocław, Poland, [![image](orcidlogo.pdf){width=\"\\\\fontcharht\\\\font`W\"} 0000-0001-9688-6915](https://orcid.org/0000-0001-9688-6915)"
- "Instytut Matematyczny, Uniwersytet Wrocławski, pl. Grunwaldzki 2, 50-348 Wrocław, Poland, [![image](orcidlogo.pdf){width=\"\\\\fontcharht\\\\font`W\"} 0000-0001-7738-8515](https://orcid.org/0000-0001-7738-8515)"
author:
- Isabelle Baraquin
- Uwe Franz
- Malte Gerhold
- Anna Kula
- Mariusz Tobolski
bibliography:
- biblio.bib
title: Free resolutions for free unitary quantum groups and universal cosovereign Hopf algebras
---
# Introduction
Interest in homological properties of Hopf algebras in general and Hopf algebras of compact quantum groups in particular stems from several directions. On the one hand, geometric concepts such as dimension can be transferred to these rather abstract algebraic objects. On the other hand, the first and the second Hochschild cohomology with trivial coefficients, $\mathrm{H}^1(A,\mathbb C_\varepsilon),\mathrm{H}^2(A,\mathbb C_\varepsilon)$, play an important role in the classification of Lévy processes on a Hopf algebra $A$ in the sense of Schürmann [@schurmann].[^1] If one is only interested in the first and the second Hochschild cohomology with specific coefficient bimodules, a viable approach is to calculate cocycles and coboundaries explicitly. This was successfully done in several cases, for example, in [@BFG17], where vanishing of $\mathrm H^1(A,\mathbb C_\varepsilon),\mathrm H^2(A,\mathbb C_\varepsilon)$ for *quantum permutation algebras* is proved, or in [@Mang23pre], where Mang calculated $\mathrm H^1(A,\mathbb C_\varepsilon)$ for the Hopf algebras of arbitrary *easy unitary quantum groups*. In [@VanDaeleWang1996], van Daele and Wang constructed two families of *universal* compact quantum groups: the *free unitary quantum groups* $U_K^+$ and *free orthogonal quantum groups* $O_L^+$, where $K,L\in\operatorname{GL}_n(\mathbb C)$ are complex invertible matrices and $\overline L L\in \mathbb CI_n$ is a multiple of the $n\times n$ identity matrix $I_n$.[^2] We denote the associated Hopf algebras $\operatorname{Pol}(U_K^+)$ and $\operatorname{Pol}(O_L^+)$, respectively; for $n\in\mathbb N$, we refer to $U_n^+:=U_{I_n}^+$ and $O_n^+:=O_{I_n}^+$ as the free unitary and free orthogonal quantum group of *Kac type*. In [@DFKS18; @DFKS23], the first and the second Hochschild cohomologies for the Hopf algebras $\operatorname{Pol}(U_K^+)$ and $\operatorname{Pol}(O_L^+)$ are determined with concrete formulas for cocycles and coboundaries for most parameter matrices $K,L$, however, the case of $U_K^+$ where $K^*K$ has three distinct Eigenvalues in "geometric progression" ($q^{-1},1,q$, with $q\in\mathbb R^+$) remained open.
By a general procedure, described in detail by Bichon [@Bic13 Section 2.2], all Hochschild cohomologies $\mathrm{H}^k(A,M)$ for arbitrary coefficient bimodules $M$ can be deduced from a resolution of the counit of a Hopf algebra $A$. In [@CHT09], Collins, Härtel, and Thom present a finite resolution of the counit of the Hopf algebra $\operatorname{Pol}(O_n^+)$ associated with the Kac type free orthogonal quantum group $O_n^+$, allowing them to deduce information about Hochschild cohomology and $\ell^2$-Betti numbers. Bichon [@Bic13] generalized this to a class of Hopf algebras denoted by $\mathcal B(E)$, $E\in\operatorname{GL}_n(\mathbb C)$, (see ) which includes the Hopf algebras of the possibly non-Kac free orthogonal quantum groups $O_L^+$, $\overline L L\in \mathbb CI_n$, defined by Banica [@Banica1996]. In this work, we use Bichon's free resolution for $\mathcal B(E)$ as a starting point to find projective or free resolutions for a number of related Hopf algebras; most notably, we find a finite free resolution for the *free unitary quantum groups* $U_K^+$ and, more generally, their algebraic counterpart, the *universal cosovereign Hopf algebras* $\mathcal H(F)$ introduced by Bichon in [@Bichon01] for all matrices $F\in\operatorname{GL}_n(\mathbb C)$ which are *generic* (a certain condition equivalent to the Hopf algebras $\mathcal H(F)$ and $\mathcal B(E)$ for $F=E^tE^{-1}$ being cosemisimple). For the universal cosovereign Hopf algebras, Bichon was able to compute partial information on the Hochschild and Gerstenhaber-Schack cohomologies in [@Bichon18], for example their corresponding cohomological dimensions, but even Hochschild cohomology for trivial coefficients was not completely known. With knowledge of the resolution, we can not only improve the computations in [@DFKS18; @DFKS23] of the first and second Hochschild cohomologies for the Hopf algebras $\operatorname{Pol}(O_L^+)$ and $\operatorname{Pol}(U_K^+)$, but also gather a lot of information about higher cohomologies of universal quantum groups and universal cosovereign Hopf algebras.
Let us now summarize the main results of this paper.
**Theorem 1**. *Let $F\in\operatorname{GL}_n(\mathbb C)$ be generic. Then the counit of $\mathcal H(F)$ has a finite free resolution of length 3. In particular, we have a finite free resolution of the counit of $\operatorname{Pol}(U_K^+)$ of length 3 for arbitrary $K\in \operatorname{GL}_n(\mathbb C)$.*
We start out by establishing such a resolution in the special case where $F$ is an *asymmetry*, i.e. $F=E^{t}E^{-1}$ for some $E\in\operatorname{GL}_n(\mathbb C)$ (). In that case, $\mathcal H(F)$ can be realized as a *glued free product* of $\mathcal B(E)$ with the group algebra $\mathbb C\mathbb Z_2$, which is a Hopf subalgebra of the free product $\mathcal A(E):=\mathcal B(E)\ast\mathbb C\mathbb Z_2$. We can use the resolution for $\mathcal B(E)$ to first construct a projective resolution of $\mathcal A(E):=\mathcal B(E)\ast\mathbb C\mathbb Z_2$ and apply a result of Chirvasitu [@Chirvasitu14] to turn it into a free resolution for $\mathcal H(F)$. This part requires generalizing the work of Gromada [@Gromada2022] from compact matrix quantum groups to matrix Hopf algebras.
Using the theory of Yetter-Drinfeld modules, we can finally show in that the formulas we obtain also establish a free resolution if $F$ is not necessarily of the form $F=E^{t}E^{-1}$, all we need is that $F$ is generic.
As an application of our resolution, we calculate Hochschild cohomology for one-dimensional bimodules () and bialgebra cohomology () of the Hopf algebras $\mathcal H(F)$ with generic $F$.
**Theorem 2**. *Let $F\in \operatorname{GL}_n(\mathbb C)$ be generic, and let $\tau\colon \mathcal H(F) \to \mathbb C$ be a character. Set $S:=\tau(u)$, $T:=\tau(v)=S^{-t}$ and denote by $\mathcal{K}$ the space of matrices commuting with $F^tS$. Let $$d=\operatorname{dim}\, \mathcal{K}, \quad p=\begin{cases}
1 & \mbox{ if } S=T=I_n\\
0 & \mbox{ otherwise; }
\end{cases}, \quad
t=\begin{cases}
1 & \mbox{ if } F^2=\alpha T \mbox{ for some $\alpha\in\mathbb C$ }\\
2 & \mbox{ otherwise. }
\end{cases}$$ Then the Hochschild cohomology for $\mathcal H=\mathcal H(F)$ with 1-dimensional coefficients is given as follows: $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= p ,
&\dim \mathrm{H}^1(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= d+p-1 ,
\\
\dim \mathrm H^2(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= d-t ,
&\dim \mathrm H^3(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= 2-t ,\end{aligned}$$ and $\dim \mathrm H^k(\mathcal H, {_\varepsilon\mathbb C_\tau}) = 0$ for all $k \geq 4$.*
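For example, for the trivial bimodule, i.e. $\tau=\varepsilon$ and hence $S=T=I_n$, the theorem specializes to $$\dim \mathrm{H}^0(\mathcal H, {_\varepsilon\mathbb C_\varepsilon})=1,\quad \dim \mathrm{H}^1(\mathcal H, {_\varepsilon\mathbb C_\varepsilon})=d,\quad \dim \mathrm{H}^2(\mathcal H, {_\varepsilon\mathbb C_\varepsilon})=d-t,\quad \dim \mathrm{H}^3(\mathcal H, {_\varepsilon\mathbb C_\varepsilon})=2-t,$$ where now $d$ is the dimension of the commutant of $F^t$ in $M_n(\mathbb C)$ and $t=1$ if and only if $F^2$ is a scalar multiple of $I_n$.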
In particular, for $F = K^*K = \operatorname{diag}(q^{-1},1,q)$, $q\in\mathbb R^+$, it turns out that the second Hochschild cohomology of $\operatorname{Pol}(U_{K}^+)\cong\mathcal H(F)$ with trivial coefficients is one-dimensional; this was the missing piece in [@DFKS23], where the second Hochschild cohomology of all other free unitary quantum groups was determined. The theorem also yields explicitly for every generic matrix $F$ a (one-dimensional) bimodule $M$ with $\mathrm{H}^3(\mathcal H(F),M)\neq0$, which confirms that the cohomological dimension of $\mathcal H(F)$ is three; a fact Bichon conjectured in [@Bichon18 Remark 5.15] and proved by more abstract considerations in [@Bichon22 Theorem 8.1].
**Theorem 3**. *Let $F\in \operatorname{GL}_n(\mathbb C)$ be generic. Then the bialgebra cohomology of $\mathcal H=\mathcal H(F)$ is given as follows: $$\begin{aligned}
\dim \mathrm{H}_{\mathrm b}^0(\mathcal H) &= 1 ,
&\dim \mathrm{H}_{\mathrm b}^1(\mathcal H) &= 1 ,\\
\dim \mathrm H_{\mathrm b}^2(\mathcal H) &= 0,
&\dim \mathrm H_{\mathrm b}^3(\mathcal H) &= 1,\end{aligned}$$ and $\dim \mathrm H_{\mathrm b}^k(\mathcal H) = 0$ for all $k \geq 4$.*
In particular, the second bialgebra cohomology is trivial for all free unitary quantum groups $U_K^+$, $K\in\operatorname{GL}_n(\mathbb C)$, because $\operatorname{Pol}(U_K^+)\cong \mathcal H(K^*K)$ and $\operatorname{tr}(K^*K)>0$.
**Remark 1**. After the completion of this paper, Julien Bichon communicated to us that our , and hence , hold with the weaker assumption of normalizability instead of genericity, cf. [@Bichon23pre Prop. 4.3].
# Preliminaries
## Notation and basic definitions
$\mathbb{N}=\{1,2,\ldots\}$ denotes the set of positive integers. $\mathbb R$ and $\mathbb C$ denote the fields of real and complex numbers, respectively, and we write $\mathbb R^\times:=\mathbb R\setminus\{0\}$, $\mathbb C^\times:=\mathbb C\setminus\{0\}$.
All our vector spaces are over $\mathbb C$. For any vector space $X$, $M_{n,n'}(X)$ denotes the vector space of $n\times n'$-matrices with entries from $X$. The entries of $A\in M_{n,n'}(X)$ are denoted by $A_{ij}$ ($i=1,\ldots n; j=1,\ldots n'$). The transpose of $A\in M_{n,n'}(X)$ is denoted by $A^t$, $(A^{t})_{ij}=A_{ji}$. If $n=n'$, we simply write $M_n(X)$. In case $X$ carries an algebra structure, matrix multiplication $M_{n,n'}(X)\times M_{n',n''}(X)\to M_{n,n''}(X)$ is defined, $(AB)_{ij}:=\sum a_{ik}b_{kj}$, in particular, $M_n(X)$ inherits an algebra structure. For a square matrix $A\in M_n(X)$, its *trace* is $\operatorname{tr}(A)=\sum_{i=1}^n A_{ii}$. Obviously, $\operatorname{tr}(A)=\operatorname{tr}(A^t)$. Note that $\operatorname{tr}(AB)=\operatorname{tr}(A^t B^t)$ also holds for all $A\in M_{n,n'}(X),B\in M_{n',n}(X)$ even if $X$ is noncommutative (in which case $\operatorname{tr}(AB)$ and $\operatorname{tr}(BA)$ might not coincide). Given a map $f\colon X\to Y$ and a matrix $A\in M_{n,n'}(X)$, we usually write $f(A)$ for the entrywise application, i.e. $f(A)_{ij}:=f(A_{ij})$. There is a notable exception from this convention: If $X$ is a $*$-algebra and $A\in M_{n,n'}(X)$, we write $\overline A$ for the entrywise adjoint $(\overline A)_{ij}:=(A_{ij})^*$ and put $A^*:=(\overline A)^t$; $M_n(X)$ is then considered a $*$-algebra with involution $A\mapsto A^*$.
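For instance, the last identity can be verified directly from the definitions, without commuting any entries of $X$: $$\operatorname{tr}(AB)=\sum_{i=1}^{n}\sum_{k=1}^{n'} A_{ik}B_{ki}=\sum_{k=1}^{n'}\sum_{i=1}^{n} (A^t)_{ki}(B^t)_{ik}=\operatorname{tr}(A^tB^t),$$ since each summand $A_{ik}B_{ki}$ appears in the same order on both sides.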
The group of complex invertible matrices is denoted by $\operatorname{GL}_n(\mathbb C)\subset M_n(\mathbb C)$. For $A\in \operatorname{GL}_n(\mathbb C)$, we write $A^{-t}:=(A^{-1})^{t}=(A^{t})^{-1}$. (Be aware that $(A^{-1})^{t}=(A^{t})^{-1}$ does not hold in general when $A$ has entries in a noncommutative algebra.)
For $A$ an algebra, we denote the category of its right modules by $\mathcal{M}_A$, and the class of morphisms between two $A$-modules $M,N$ by $$\mathrm{hom}_A(M,N) = \{f\colon M\to N \text{ is }A\text{-linear}\}.$$ Similarly, for $C$ a coalgebra, we write $\mathcal{M}^C$ for the category of its right comodules and $$\mathrm{hom}^C(M,N)= \{ f\colon M\to N \text{ is }C\text{-colinear}\}$$ for the class of morphisms between two comodules $M,N$. When we work with the category $\mathcal{YD}^H_H$ of Yetter-Drinfeld modules (defined in ) of a Hopf algebra $H$, whose objects are simultaneously $H$-modules and $H$-comodules, then the class of morphisms between two objects $M,N\in\mathcal{YD}^H_H$ is $$\mathrm{hom}^H_H (M,N)= \{ f\colon M\to N \text{ is } H\text{-linear and }H\text{-colinear}\}.$$
For a right $A$-module $M$ and a left $A$-module $N$ over an algebra $A$, the module tensor product is the quotient space $M\otimes_A N= M\otimes N/(m\otimes an=ma\otimes n)$. Dually, for a right $C$-module $M$ and a left $C$-module $N$ over a coalgebra $C$ with coactions denoted by $\gamma_M\colon M\to M\otimes C,\gamma_N\colon N\to C\otimes N$, the cotensor product is the subspace $M\mathbin{\Box}_C N:=\{X\in M\otimes N: \gamma_M\otimes \mathrm{id}(X)=\mathrm{id}\otimes\gamma_N(X)\}\subset M\otimes N$, cf. [@Montgomery1993 Definition 8.4.2]. When the coalgebra $C$ is understood from the context without doubt, we omit the subscript $C$ and simply write $M\mathbin\Box N$.
Actions on modules are mostly written by juxtaposition, without any symbol, or indicated by a dot. Comultiplication and counit of a coalgebra are typically denoted by $\Delta$ and $\varepsilon$, respectively. The antipode of a Hopf algebra is denoted by $S$. The coaction of a coalgebra on a comodule is typically denoted by $\gamma$. We will use the $\Sigma$-free Sweedler notation, i.e. in a coalgebra $C$ we abbreviate the coproduct $\Delta(a)=\sum_i a_{(1),i}\otimes a_{(2),i}$ as $a_{(1)}\otimes a_{(2)}$; also, for a comodule $M\in \mathcal M^C$, the coaction $\gamma(m)=\sum_i m_{(0),i}\otimes m_{(1),i}$ is written $m_{(0)}\otimes m_{(1)}$.
For a Hopf algebra $H$, the *trivial* right module / left module / bi-module is denoted by $\mathbb C_\varepsilon$ / ${_\varepsilon\mathbb C}$ / ${_\varepsilon\mathbb C_\varepsilon}$, respectively, and is defined as the vector space $\mathbb C$ with right and / or left action $zh=z\varepsilon(h), hz=\varepsilon(h)z$ for $h\in H,z\in\mathbb C$.
In several instances we write a direct sum of objects $V_k$ ($k=1,\ldots, n$) in some category of vector spaces (for example $\mathcal M_A$ or $\mathcal {YD}_A^A$) as a column $$\begin{pmatrix}
V_1\\\vdots \\V_n
\end{pmatrix}:=\bigoplus_{k=1}^n V_k;$$ this way, a linear map $f\colon \bigoplus_{k=1}^n V_k \to \bigoplus_{\ell=1}^m W_\ell$ inherits a natural matrix decomposition into components $f_{\ell k}\colon V_k\to W_{\ell}$ such that $f(\bigoplus_k v_k)=\bigoplus_\ell \sum_k f_{\ell k} (v_k)$.
For an algebra $A$ and an $A$-bimodule $M$, the *Hochschild cohomology* of $A$ with *coefficients* in $M$ is the cohomology of the complex $$\begin{aligned}
\label{eq:Hochschild-defining-complex}
0\rightarrow \hom (\mathbb C,M)
\xrightarrow{\partial_0} \hom (A,M)
\xrightarrow{\partial_1} \hom (A^{\otimes 2},M)
\xrightarrow{\partial_2} \ldots
\xrightarrow{\partial_k} \hom (A^{\otimes (k+1)},M)
\xrightarrow{\partial_{k+1}} \ldots\end{aligned}$$ with the coboundary map $\partial_k\colon \hom (A^{\otimes k},M) \to \hom
(A^{\otimes (k+1)},M)$ defined as $$\begin{gathered}
\partial_k (f) (a_0\otimes a_1\otimes \ldots \otimes a_k) =
a_0.f(a_1\otimes \ldots \otimes a_k)+
\sum_{j=1}^{k} (-1)^{j} f(a_0\otimes \ldots\otimes a_{j-1}a_j \otimes \ldots
\otimes a_k)
\\ + (-1)^{k+1}
f(a_0\otimes a_1\otimes \ldots \otimes a_{k-1}).a_k;\end{gathered}$$ i.e. the $k$-th Hochschild cohomology is the vector space $$\mathrm H^k(A,M):=\ker \partial_k/\operatorname{im} \partial_{k-1}.$$
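For orientation, let us record (as a standard unpacking of the above formula, included here only for the reader's convenience) the lowest-degree coboundary maps: identifying $\hom(\mathbb C,M)\cong M$, $$\partial_0(m)(a_0)=a_0.m-m.a_0,\qquad \partial_1(f)(a_0\otimes a_1)=a_0.f(a_1)-f(a_0a_1)+f(a_0).a_1,$$ so that $\mathrm H^0(A,M)=\{m\in M: a.m=m.a \text{ for all }a\in A\}$ and $\mathrm H^1(A,M)$ is the space of derivations $A\to M$ modulo inner derivations.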
For a Hopf algebra $H$ and a Yetter-Drinfeld module $M\in\mathcal {YD}_H^H$, the Gerstenhaber-Schack cohomology and bialgebra cohomology are denoted by $\mathrm{H}_{\mathrm{GS}}(H,M)$ and $\mathrm H_{\mathrm b}(H):=\mathrm{H}_{\mathrm{GS}}(H,\mathbb C_\varepsilon)$, respectively (cf. the subsection on Yetter-Drinfeld modules and Gerstenhaber-Schack cohomology below for the corresponding definitions).
A *resolution* of an $A$-module $N$ over an algebra $A$ is an exact sequence of $A$-modules $$\ldots \rightarrow P_{n+1}\xrightarrow{\Phi_{n+1}} P_n \xrightarrow{\Phi_n} \ldots \xrightarrow{\Phi_2} P_1\xrightarrow{\Phi_1}
P_0 \xrightarrow{\Phi_0} N \rightarrow 0$$ and will be abbreviated $P_*\xrightarrow{\Phi} N \rightarrow 0$. A resolution is called *finite* if there is only a finite number of non-zero modules $P_k$; it is called *free* if all the modules $P_k$ are free, and it is called *projective* if all the modules $P_k$ are projective. (Recall that a module $P$ is *projective* if there is a free module $F$ and a module $N$ such that $F\cong P\oplus N$; this happens if and only if the functor $\hom_A(P,-)$ is exact.) Given a Hopf algebra $H$, a resolution of the trivial right module $\mathbb C_\varepsilon$ is also called a *resolution of the counit* or a *resolution for $H$*.
We briefly fix some notation for the main Hopf algebras under consideration in this article; full definitions and more details on how these are related to the free unitary and free orthogonal quantum groups will be given in the following subsection. By $\mathcal{B}(E)$, with $E\in\operatorname{GL}_n(\mathbb{C})$, we denote the Hopf algebra which, as an algebra, is the universal unital algebra defined by the relations $E^{-1}x^tEx=I=xE^{-1}x^tE$, where $x=(x_{ij})_{1\le i,j\le n}$ is the matrix of generators. For the special case $E=I_n$, one recovers $\mathcal{B}(I_n)\cong \mathrm{Pol}(O^+_n)$, the Hopf algebra of the free orthogonal quantum group $O_n^+$. $\mathcal{A}(E)$ is defined as the free product $\mathcal{A}(E)=\mathcal{B}(E)*\mathbb{C}\mathbb{Z}_2$, and for $E=I$ this is isomorphic to the Hopf algebra of the freely modified bistochastic compact quantum group, $\mathcal A(I)\cong\operatorname{Pol}(B_{n+1}^{\#+})$. Note that the algebra $\mathbb C\mathbb Z_2$ has one generator $g$ with relation $g^2=1$. $\mathcal{H}(F)$, with $F\in\operatorname{GL}_n(\mathbb{C})$, is the universal unital algebra generated by the entries of $u=(u_{ij})_{1\le i,j\le n}$ and $v=(v_{ij})_{1\le i,j\le n}$ subject to the relations $u v^t = v^t u = I$, $vFu^tF^{-1}=Fu^tF^{-1}v=I$, and $\mathcal{H}(F)$ carries a natural Hopf algebra structure that makes $u$ and $v$ corepresentations. Under some assumption on $F$, we have $\mathcal{H}(F)=\mathcal{B}(E)\mathbin{\tilde{\ast}}\mathbb{C}\mathbb{Z}_2\cong\operatorname{Pol}(U^+_K)$, with $F=E^tE^{-1}=K^*K$, where $\mathbin{\tilde{\ast}}$ denotes the *glued free product*, see the section on matrix Hopf algebras and glued products below. In particular, $\mathcal{H}(I_n)\cong \mathrm{Pol}(U_n^+)\cong \operatorname{Pol}(O_n^+)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$. With few local exceptions, which will be made explicit, $x$ / $g$ / $x,g$ / $u,v$ always denote the (matrices of) standard generators of $\mathcal B(E)$ / $\mathbb C\mathbb Z_2$ / $\mathcal A(E)$ / $\mathcal H(F)$.
We close this section with some more conventions and observations, summarized in a lemma, that will appear ubiquitously in our calculations.
**Lemma 2**. *Let $A$ be a unital algebra and let $\tau\colon A\to \mathbb C$ be a character, i.e. a nonzero multiplicative linear functional. (We also write $\tau$ for the map $M_n(A)\to M_n(\mathbb C)$ given by entrywise application, i.e. $\tau(D)=(\tau(d_{ij}))_{i,j=1}^{n}$ if $D=(d_{ij})_{i,j=1}^{n}\in M_n(A)$.)*
1. *For all $C,D\in M_n(A)$, the following holds. $$\begin{gathered}
\operatorname{tr}(\tau(C))=\tau(\operatorname{tr}(C))\\
\tau(CD)=\tau(C\tau(D))=\tau(\tau(C)D)=\tau(C)\tau(D)\\
\tau\operatorname{tr}(CD)=\tau\operatorname{tr}(DC)=\tau\operatorname{tr}((DC)^t)=\tau\operatorname{tr}(C^tD^t)
\end{gathered}$$*
2. *Given a free $A$-module $F$ with finite module basis $(e_{j})_{j\in J}$, the mapping $$\begin{aligned}
\hom_A(F,\mathbb C_\tau)\ni f &\mapsto (f(e_j))_{j\in J} \in \mathbb C^J
\end{aligned}$$ is an isomorphism of vector spaces with inverse $$\mathbb C^J\ni(\lambda_j)_{j\in J} \mapsto \lambda^\tau,\quad \lambda^\tau\left(\sum_{j\in J} e_j a_j\right)=\sum_{j\in J}\lambda_j\tau(a_j)$$*
3. *For a free module of the form $M_{n_1}(A)\oplus\ldots \oplus M_{n_k}(A)$ with basis $$\left(e_{ij}^{(r)}:r\in\{1,\ldots k\},i,j\in\{1,\ldots n_r\}\right)$$ formed by the standard matrix units, the notation from (2) specializes to $$(\Lambda_1,\ldots,\Lambda_k)^\tau(M_1,\ldots, M_k)=\sum_{r=1}^k \tau\operatorname{tr}(\Lambda_r^t M_r).$$*
4. *Let $F_1,F_2$ be free $A$-modules with bases $(e_{j}^{(1)})_{j\in J_1}, (e_{j}^{(2)})_{j\in J_2}$, respectively. Then, for every $\Phi\in\hom_A(F_1,F_2)$, there is a unique linear map $\Phi^*\colon \mathbb C^{J_2}\to \mathbb C^{J_1}$ such that, for all $\lambda\in \mathbb C^{J_2}$, $$\lambda^\tau\circ\Phi=\bigl(\Phi^*(\lambda)\bigr)^{\tau}.$$ In particular, denoting $$-\circ\Phi\colon \hom_A(F_2,\mathbb C_\tau)\to \hom_A(F_1,\mathbb C_\tau),\quad f\mapsto f\circ\Phi$$ the precomposition with $\Phi$, the given isomorphisms $\hom(F_i,\mathbb C_\tau)\cong\mathbb C^{J_i}$ induce isomorphisms $$\ker(-\circ\Phi)\cong \ker(\Phi^*),\quad\operatorname{Im}(-\circ\Phi)\cong\operatorname{Im}(\Phi^*).$$*
*Proof.*
1. After direct verification of the first two lines of equations, the third is obvious because application of $\tau$ turns the matrices into scalar matrices. Note that the trace property would not in general hold without application of $\tau$ if $A$ is noncommutative.
2. Of course, any $f\in \hom_A(F,\mathbb C_\tau)$ is determined by its values on the basis elements, $A$-linearity yields $$f\left(\sum e_j.a_j\right)=\sum f(e_j).a_j=\sum f(e_j) \tau(a_j).$$ This also shows that $f=\lambda^\tau$ with $\lambda_j:=f(e_j)$. On the other hand, $\lambda^\tau(e_j)=\lambda_j\tau(1)=\lambda_j$ proves that the given maps are mutually inverse isomorphisms.
3. If the free module is given in matrix form and $\Lambda_r=(\lambda_{ij}^{(r)})_{i,j=1}^{n_r}\in M_{n_r}(\mathbb C)$, $M_r=(M_{ij}^{(r)})_{i,j=1}^{n_r}\in M_{n_r}(A)$ for $r=1,\ldots, k$, then part (2) yields $$(\Lambda_1,\ldots,\Lambda_k)^\tau(M_1,\ldots, M_k)=\sum_{r=1}^k\sum_{i,j=1}^{n_r} \lambda_{ij}^{(r)}\tau\bigl(M_{ij}^{(r)}\bigr)=\sum_{r=1}^k \tau\operatorname{tr}(\Lambda_r^t M_r).$$
4. Obvious.
◻
## The Hopf algebras $\mathcal B(E)$, $\mathcal A(E)$, and $\mathcal H(F)$ and related compact quantum groups {#sec:B(E)_and_H(F)}
In this section we provide the definition of the three main families of Hopf algebras we are going to use throughout the paper. As detailed below, these are Hopf algebraic generalizations of the free orthogonal quantum groups, the freely modified bistochastic quantum groups, and the free unitary quantum group, respectively.
**Definition 3**. Let $E\in \operatorname{GL}_n(\mathbb{C})$ be an invertible matrix, $n\geq2$. The Hopf algebra $\mathcal B(E)$ is the universal algebra generated by the entries $x_{ij}$ of a matrix of generators $x=(x_{ij})_{i,j=1}^n$ subject to the relations $$\label{eq_relations_in_BE}
E^{-1}x^tEx=I=xE^{-1}x^tE;$$ here $x^t$ denotes the transpose of the matrix $x$, $(x^t)_{ij}:=x_{ji}$. The Hopf algebra structure on $\mathcal B(E)$ is given by $$\begin{aligned}
\Delta(x_{ij})&=\sum_kx_{ik}\otimes x_{kj},& \varepsilon(x_{ij})&=\delta_{ij},&S(x)&=E^{-1}x^tE.
\end{aligned}$$
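As a quick sanity check (a standard verification, spelled out only for convenience), the relations [\[eq_relations_in_BE\]](#eq_relations_in_BE){reference-type="eqref" reference="eq_relations_in_BE"} are precisely what is needed for $S$ to satisfy the antipode axiom on the generators: $$\sum_k S(x_{ik})x_{kj}=(E^{-1}x^tEx)_{ij}=\delta_{ij}=\varepsilon(x_{ij})1,\qquad \sum_k x_{ik}S(x_{kj})=(xE^{-1}x^tE)_{ij}=\delta_{ij}.$$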
The algebra $\mathcal B(E)$ corresponds to the quantum symmetry group of the bilinear form associated with $E$ (see [@DVLa90]). Let us note that $\mathcal B(E)$ is $\mathbb Z_2$-graded as it is a quotient of a naturally $\mathbb Z_2$-graded free algebra generated by the $x_{ij}$ by the even relations [\[eq_relations_in_BE\]](#eq_relations_in_BE){reference-type="eqref" reference="eq_relations_in_BE"}.
Clearly, for any $G\in\operatorname{GL}_n(\mathbb{C})$, the Hopf algebras $\mathcal{B}(E)$ and $\mathcal{B}(G^tEG)$ are isomorphic, via the unique isomorphism defined by $x\mapsto G^{-1}xG$.
Recall that the (unique) dense $*$-Hopf subalgebras associated with compact quantum groups are called CQG-algebras, cf. [@klimykschm Definition 11.9]. See also [@klimykschm Theorem 11.27] for conditions which guarantee that a $*$-Hopf algebra is a CQG-algebra.
If $\overline{E}E= \lambda I$ for some $\lambda \in \mathbb R^{\times}$, then $\mathcal B(E)$, with the involution determined by $\overline{x}=E^txE^{-t}$, is the CQG-algebra of the universal orthogonal quantum group $O^+_L$, defined in [@Banica1996], with $L=E^t$. The latter is defined as the universal unital $*$-algebra generated by $x=(x_{ij})_{i,j=1}^n$ subject to the relations $$x^*x=I=xx^*,\quad \overline{x}=L x L^{-1}.$$ In general, the Hopf algebra $\mathcal B(E)$ admits an involution w.r.t. which it is a CQG-algebra if and only if there exist $M\in GL_n(\mathbb C)$, $\lambda\in \mathbb R^{\times}$, and $\mu \in \mathbb C^*$ such that $\overline{M}M=\lambda I$, $M^tE^*M=E$, and $\mu M^{-t}E$ is positive, see [@Bichon03 Proposition 6.1 and the remarks below]. The $*$-structure is given by $\overline{x}=MxM^{-1}$. In this case, $\mathcal{B}(E)$ is isomorphic to $\mathcal{B}(\widetilde{E})$, with $\widetilde{E}=KE^{-t}K^t=G^tEG$, where $K$ is any matrix such that $\mu M^{-t} E = K^*K$ and $G=E^{-1}K^t$. The choice of $K$ implies $E =\mu^{-1} M^t K^*K$, and therefore $$\begin{aligned}
\overline{\widetilde{E}}\widetilde{E} &= \overline{K}E^{*,-1} K^* K E^{-t} K^t = \overline{K} \left(\overline{\mu}\overline{M}^{-1} (K^*K)^{-1}\right) K^* K \left(\mu M^{-1} \overline{K}^{-1}K^{-t}\right) K^t \\
&= |\mu|^2 \overline{K} \left(M\overline{M}\right)^{-1}\overline{K}^{-1} = \frac{|\mu|^2}{\lambda} I,\end{aligned}$$ which implies that $\mathcal{B}(\widetilde{E})$ (and therefore also $\mathcal{B}(E)$) is isomorphic to $\mathrm{Pol}(O_L^+)$, with $L=\widetilde{E}^t=KE^{-1}K^t$.
**Remark 4**. While not directly relevant for the remainder of this article, it is worthwhile noting that the possibilities for the parameters $\mu$ and $\lambda$ are quite restricted. Indeed, in the notation of the preceding paragraph, $M^tE^*M=E$ is equivalent to $E^{*,-1} = M E^{-1} M^t$ and this yields $$\lambda\mu I=\overline M ME^{-1}M^t K^*K=\overline M E^{*,-1} K^*K=\overline \mu I,$$ i.e., $\lambda\mu=\overline \mu$. Since $\lambda \in\mathbb R$, there are two distinct cases: either $\lambda=1$ and $\mu=\overline{\mu}\in\mathbb R$, or $\lambda=-1$ and $\mu=-\overline{\mu}\in i\mathbb R$. Cf. the classification of free orthogonal quantum groups as presented by De Rijdt in [@DeRijdt2007 Remark 1.5.2].
**Definition 5**. For $F\in \operatorname{GL}_n(\mathbb{C})$, $n\geq2$, the *universal cosovereign Hopf algebra* $\mathcal H(F)$ is the universal algebra generated by the entries of $u=(u_{ij})_{i,j=1}^n$ and $v=(v_{ij})_{i,j=1}^n$ subject to the relations $$\label{eq_relations_in_HF}
uv^t=v^tu=I,\qquad vFu^tF^{-1}=Fu^tF^{-1}v=I.$$ The Hopf algebra structure is defined by $$\begin{aligned}
\Delta(u_{ij})=\sum_ku_{ik}\otimes u_{kj},&\quad \Delta(v_{ij})=\sum_k
v_{ik}\otimes v_{kj},
\\
\varepsilon(u_{ij})=\varepsilon(v_{ij})=\delta_{ij},&\quad S(u)=v^t,\quad
S(v)=Fu^tF^{-1}.
\end{aligned}$$
The notion of universal cosovereign Hopf algebras was introduced in [@Bichon01]. Obviously, $\mathcal H(F)=\mathcal H(\lambda F)$ for any $\lambda\in \mathbb C^*$. Moreover, $\mathcal H(F)\cong \mathcal H(F^{-t})$ and $\mathcal H(F)\cong\mathcal H(GFG^{-1})$ for every $G\in\operatorname{GL}_n(\mathbb C)$, see [@Bichon01 Proposition 3.3].
Recall that for $K\in \operatorname{GL}_n(\mathbb C)$ the free unitary compact quantum group (first defined slightly differently by van Daele and Wang in [@VanDaeleWang1996]) is the pair $U_K^+=(A_u(K),u)$ with $A_u(K)$ being the universal unital C$^*$-algebra generated by the entries of the matrix $u=(u_{ij})_{i,j=1}^n$, subject to the relations making $u$ and $K\overline{u}K^{-1}$ unitaries [@Banica1997-free_unitary_quantum_group Déf. 1]. The prescription $u\mapsto u, \overline u\mapsto v$ establishes a Hopf algebra isomorphism from the dense $*$-Hopf subalgebra $\operatorname{Pol}(U_K^+)=\operatorname{*-alg}(u_{ij}:i,j=1,\ldots,n)\subset A_u(K)$ to $\mathcal H(K^*K)$; this is well-known, cf. for example [@Banica1997-free_unitary_quantum_group Remarque following Déf. 1]. When $F=K^*K$ is positive, we thus recover the (algebraic version of the) free unitary compact quantum groups with involution determined by $\overline u=v$. More generally, $\mathcal{H}(F)$ admits an involution that turns it into a CQG-algebra if and only if there exist $G\in \operatorname{GL}_n(\mathbb{C})$ and $\mu\in\mathbb{C}^\times$ such that $\mu GFG^{-1}$ is positive, see [@Bichon01 Proposition 3.6], in which case we always have $\mathcal H(F)\cong \mathcal H(\mu GFG^{-1})\cong \operatorname{Pol}(U_K^+)$ for $K$ such that $\mu GFG^{-1}=K^*K$.
A matrix $F\in\mathrm{GL}_n(\mathbb C)$, $n\geq2$, is called:
- *normalizable* if either $\operatorname{tr}(F)\neq 0$ and $\operatorname{tr}(F^{-1})\neq 0$, or $\operatorname{tr}(F)=0=\operatorname{tr}(F^{-1})$,
- *generic* if it is normalizable and the solutions of the equation $$\begin{aligned}
\label{eq:generic}
q^2-\sqrt{\operatorname{tr}(F)\operatorname{tr}(F^{-1})}q+1&=0
\end{aligned}$$ are not roots of unity of order $\geq 3$,
- an *asymmetry* if there exists $E\in \operatorname{GL}_n(\mathbb{C})$ such that $F=E^tE^{-1}$.
An asymmetry is automatically normalizable, but need not be generic. A positive definite $n\times n$-matrix ($n\geq2$) is automatically generic,[^3] but need not be an asymmetry.
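To illustrate the genericity condition in the simplest situation (this merely spells out the claim of the preceding footnote), suppose that $F$ is positive definite with eigenvalues $\lambda_1,\ldots,\lambda_n>0$. Then $\operatorname{tr}(F)\operatorname{tr}(F^{-1})=\bigl(\sum_i\lambda_i\bigr)\bigl(\sum_i\lambda_i^{-1}\bigr)\geq n^2\geq 4$ by the Cauchy-Schwarz inequality, so the two solutions of [\[eq:generic\]](#eq:generic){reference-type="eqref" reference="eq:generic"} are real and positive (their product is $1$ and their sum is at least $2$); since roots of unity of order $\geq 3$ are non-real, $F$ is generic. For $F=I_n$, equation [\[eq:generic\]](#eq:generic){reference-type="eqref" reference="eq:generic"} reads $q^2-nq+1=0$.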
**Remark 6**. For $F\in\operatorname{GL}_n(\mathbb C)$, $\mathcal H(F)$ is cosemisimple if and only if $F$ is generic [@Bichon07 Theorem 1.1(ii)]. For $E\in \operatorname{GL}_n(\mathbb C)$, $\mathcal B(E)$ is cosemisimple if and only if the associated asymmetry $E^tE^{-1}$ is generic (this follows from [@KondratowiczPodles], see [@Bichon03] for details).
**Remark 7**. In [@CortellaTignol2002 Theorem 1], Cortella and Tignol characterize asymmetries. In general, a rather complicated condition on generalized eigenspaces appears. In the special case where $F\in M_n(\mathbb C)$ is diagonalizable, however, those conditions become (almost) trivial and it easily follows that $F$ is an asymmetry if and only if $F$ is invertible with $F$ and $F^{-1}$ conjugate in $M_n(\mathbb C)$ and the eigenspace for the eigenvalue $-1$ has even dimension. For a positive definite matrix $F$, the only condition is that $F$ and $F^{-1}$ be conjugate.
Furthermore, two matrices $E,E'\in \operatorname{GL}_n(\mathbb C)$ are congruent in $M_n(\mathbb C)$, i.e., $E'=G^tEG$ for some $G\in \operatorname{GL}_n(\mathbb C)$, if and only if their asymmetries $F=E^tE^{-1}$ and $F'={E'}^t {E'}^{-1}$ are conjugate in $M_n(\mathbb C)$ [@HornSergeichuk2006 Lemma 2.1]. Therefore, also $\mathcal B(E)$ depends, up to isomorphism, only on the conjugacy class of the associated asymmetry $F=E^tE^{-1}$.
Let $A,B$ be Hopf algebras. Then the free product $A*B$ of the underlying unital algebras is again a Hopf algebra, where the comultiplication is extended multiplicatively; note the subtlety that one has to identify $A\otimes A$ and $B\otimes B$ with the corresponding subsets of $(A*B)\otimes (A*B)$, see [@Agore11 Theorem 2.2] for details.
**Remark 8**. If $A$ and $B$ are cosemisimple, then so is $A*B$ (see [@Bichon07 After Proposition 4.1]).
**Definition 9**. For $E\in \operatorname{GL}_n(\mathbb C)$, we define $\mathcal A(E):=\mathcal B(E)\ast\mathbb C\mathbb Z_2$.
If $F=E^tE^{-1}$ is generic, then $\mathcal A(E)$ is cosemisimple as the free product of two cosemisimple Hopf algebras. In the special case $E=I$, we recover the Hopf algebra of the freely modified bistochastic compact quantum group $\mathcal A(I)= \mathcal B(I)\ast\mathbb C\mathbb Z_2\cong \operatorname{Pol}(O_n^+*\mathbb Z_2)\cong\operatorname{Pol}(B_{n+1}^{\#+})$, see [@TerragoWeber2017 Remark 6.19.] or [@Weber13 Lemma 2.3 and Remark 2.4].
## Hochschild cohomology of Hopf algebras via resolutions
If one is interested in the Hochschild cohomology of a Hopf algebra $H$, besides working directly with the defining complex [\[eq:Hochschild-defining-complex\]](#eq:Hochschild-defining-complex){reference-type="eqref" reference="eq:Hochschild-defining-complex"}, there is another approach using resolutions (see [@Bic13 Proposition 2.1] and the references therein), which we adopt in this paper:
**Theorem 10**. *Let $A$ be a Hopf algebra, $M$ an $A$-bimodule, and $P_*\xrightarrow{\Phi} \mathbb C_\varepsilon\rightarrow 0$ a projective resolution of the counit. Denote by $M''$ the right $A$-module given by the vector space $M$ with the right action $m\leftarrow a:=S(a_{(1)}).m.a_{(2)}$. Then the Hochschild cohomology of $A$ with the coefficients in $M$ is the cohomology of the complex $$0\longrightarrow \hom_A (P_0,M'')
\xrightarrow{-\circ \Phi_1} \hom_A (P_1,M'')
\xrightarrow{-\circ \Phi_2} \hom_A (P_2,M'')
\xrightarrow{-\circ \Phi_3} \ldots,$$ i.e. $$\mathrm H^n(A,M)
\cong \operatorname{Ext}^n_A(\mathbb C_\varepsilon,M'')
=\ker (-\circ \Phi_{n+1})/\operatorname{im}(-\circ \Phi_{n}).$$*
It can be proved that the spaces $\operatorname{Ext}^n_A(\mathbb C_\varepsilon,M'')$ do not depend on the choice of the projective resolution $P_*\xrightarrow{\Phi} \mathbb C_\varepsilon\rightarrow 0$.
**Remark 11**. In the sequel we will focus on the *Hochschild cohomology with 1-dimensional coefficients*, which means that the bimodule $M=\mathbb C$ is a one-dimensional vector space. In that case, the left and right actions are necessarily given by $a.z.b=\sigma(a)z\tau(b)$ for a pair of characters (=unital homomorphisms) $\sigma,\tau\colon A\to \mathbb C$. The bimodule $\mathbb C$ with actions given by $\sigma$ and $\tau$ in that way is denoted by $_\sigma\mathbb C_\tau$. Note that $(_\sigma\mathbb C_\tau)''=\mathbb C_{(\sigma\circ S)*\tau}$; indeed, $$z\leftarrow a
=S(a_{(1)}).z.a_{(2)}
=\sigma\big( S(a_{(1)})\big) z \tau(a_{(2)})
=z\sigma\big( S(a_{(1)}) \big)\tau(a_{(2)})
=z ((\sigma\circ S)*\tau)(a).$$ In particular, $(_\sigma\mathbb C_\tau)''=(_\varepsilon\mathbb C_{(\sigma\circ S)*\tau})''$. (Recall that $\sigma\circ S$ is the convolution inverse of $\sigma$.) Thus, we can and we will assume without loss of generality that $\sigma=\varepsilon$ from the start and, accordingly, restrict our considerations to the bimodules $_\varepsilon\mathbb C_\tau$ and the associated right-modules $(_\varepsilon\mathbb C_\tau)''=\mathbb C_\tau$.
## Free resolution and Hochschild cohomology for $\mathcal B(E)$
The results of this paper heavily rely on the following fact for $\mathcal B(E)$, due to Bichon [@Bic13]. The Kac case ($E=I$) was treated first by Collins, Härtel and Thom in [@CHT09].
**Theorem 12** ([@Bic13]). *Let $E\in \operatorname{GL_n}(\mathbb C)$ and $\mathcal B=\mathcal B(E)$. Then the sequence $$\begin{aligned}
\label{resO+E}
0\rightarrow \mathcal B\xrightarrow{\Phi^{\mathcal B}_3}M_n(\mathcal B)
\xrightarrow{\Phi^{\mathcal B}_2}M_n(\mathcal B)\xrightarrow{\Phi^{\mathcal B}_1} \mathcal B
\xrightarrow{\varepsilon}\mathbb C_\varepsilon\rightarrow 0 \end{aligned}$$ with the maps $$\begin{aligned}
\Phi^{\mathcal B}_3 & (1) =\sum_{j,k=1}^n e_{jk}\otimes \big(
(Ex E^{-t})_{jk}-(E^tE^{-1})_{jk}\big) \\
\Phi^{\mathcal B}_2 & (e_{jk}\otimes 1) = e_{jk}\otimes 1 + \sum_{l,m=1}^n e_{lm} \otimes
(x E^{-t})_{jm} E_{kl}\\
\Phi^{\mathcal B}_1 & (e_{jk}\otimes 1) = x_{jk}-\delta_{jk}.\end{aligned}$$ is a finite free resolution of the counit of $\mathcal B$.*
It will be very useful for the calculations to rewrite the $\Phi_*^{\mathcal B}$ in matrix notation.
**Proposition 13**. *For $a\in \mathcal B(E), A\in M_n(\mathcal B(E))$, $$\begin{aligned}
\Phi^{\mathcal B}_3 (a) &=\bigl(Ex E^{-t}-E^tE^{-1}\bigr)a, &
\Phi^{\mathcal B}_2 (A) &= A + (E^{-1}x^t A E)^t,&
\Phi^{\mathcal B}_1 (A) &= \operatorname{tr}(x^tA)-\operatorname{tr}(A).
\end{aligned}$$*
*Proof.* Recall that we identify $M_n(\mathcal B(E))\ni A\equiv\sum_{j,k=1}^n e_{jk}\otimes A_{jk}\in M_n(\mathbb C)\otimes \mathcal B(E)$. The formula for $\Phi^{\mathcal B}_3$ is then obvious. For $\Phi^{\mathcal B}_1$ the matter is also quite clear, $$\Phi^{\mathcal B}_1(A)=\sum_{j,k=1}^n x_{jk}A_{jk}-\delta_{jk}A_{jk}=\operatorname{tr}(x^tA)-\operatorname{tr}(A).$$ For $\Phi^{\mathcal B}_2$ we have $$\begin{aligned}
\MoveEqLeft
\Phi^{\mathcal B}_2(A)
=
\Phi^{\mathcal B}_2\big (\sum_{j,k=1}^n e_{jk}\otimes A_{jk}\big)=\sum_{j,k=1}^n e_{jk}\otimes A_{jk}
+\sum_{j,k=1}^n \sum_{l,m=1}^n e_{lm} \otimes (x E^{-t})_{jm} E_{kl}
A_{jk}\end{aligned}$$ and, keeping track of the indices as well as the order of multiplication in $\mathcal B(E)$, $$\begin{aligned}
(E^{-1}x^tAE)^t_{lm}&=(E^{-1}x^tAE)_{ml}= \sum_{j,k} (E^{-1}x^t)_{mj}A_{jk}E_{kl}= \sum_{j,k}(x E^{-t})_{jm} E_{kl}
A_{jk}.\qedhere\end{aligned}$$ ◻
We will use the resolution [\[resO+E\]](#resO+E){reference-type="eqref" reference="resO+E"} for $\mathcal B(E)$ to exhibit similar resolutions for $\mathcal A(E)$ and $\mathcal H(F)$ with $F=E^tE^{-1}$ in .
The following is a consequence of [@Bic13 Cor. 6.3 and Prop. 6.4]; the main (small) improvement is that we express all cohomology spaces in terms of $F$ instead of $E$.
**Theorem 14**. *Let $E\in \operatorname{GL_n}(\mathbb C)$, $\mathcal B:=\mathcal B(E)$, and $\tau\colon \mathcal B\to \mathbb C$ a character. Set $T:=\tau(x)$ and $F=E^tE^{-1}$. Let furthermore $$p=\begin{cases}
1 & \mbox{ if } T=I_n\\
0 & \mbox{ otherwise, }
\end{cases} \quad
d=\operatorname{dim}\{\mathrm{K}\in M_n:\mathrm{K}^t=-T^t F\mathrm{K}\}, \quad
s=\begin{cases}
0 & \mbox{ if } T^t=F^{-2}\\
1 & \mbox{ otherwise. }
\end{cases}$$ Then the Hochschild cohomology for $\mathcal B$ with 1-dimensional coefficients is given as follows: $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal B, {_\varepsilon\mathbb C_\tau}) &= p ,
&\dim \mathrm{H}^1(\mathcal B, {_\varepsilon\mathbb C_\tau}) &= d-1+p,
\\
\dim \mathrm H^2(\mathcal B, {_\varepsilon\mathbb C_\tau}) &= d-s,
&\dim \mathrm H^3(\mathcal B, {_\varepsilon\mathbb C_\tau}) &= 1-s,\end{aligned}$$ and $\dim \mathrm H^k(\mathcal B, {_\varepsilon\mathbb C_\tau}) = 0$ for all $k \geq 4$.*
*Proof.* We present two proofs, the first one using Bichon's calculations of Hochschild homology and Poincaré duality, and the second one by directly applying the functor $\operatorname{hom}_{\mathcal B}(\cdot,\mathbb C_\tau)$ to the resolution [\[resO+E\]](#resO+E){reference-type="eqref" reference="resO+E"}. The concrete calculations in the second proof will provide useful references when we investigate $\mathcal A(E)$ and $\mathcal H(F)$ later.
By Poincaré duality [@Bic13 Cor. 6.3], we know that $\mathrm H^n(\mathcal B, {_\varepsilon\mathbb C_\tau})\cong H_{3-n}(\mathcal B,{_{\varepsilon\circ \alpha}\mathbb C_\tau})$, where $\alpha$ is the *modular automorphism* determined by $\alpha(x)= E^{-1}E^{t}xE^{-1}E^t=F^{-t}xF^{-t}$. Note that $\varepsilon\circ\alpha (x)=(E^{-1}E^t)^2=F^{-2t}$ or, equivalently, $\varepsilon\circ\alpha=\Phi^{2}$ is the convolution square of the *sovereign character* $\Phi$ determined by $\Phi(x)=E^{-1}E^t$ [@Bic13 2.3 (1)]. The Hochschild homology with one-dimensional coefficients ${_{\varepsilon\circ \alpha}\mathbb C_\tau}$ can be read off from [@Bic13 Prop. 6.4]. It depends on the matrix (denoted by $\gamma(u)$ in [@Bic13]) $$\begin{aligned}
G:=\bigl(\tau^{-1}*(\varepsilon\circ \alpha)\bigr)(x)=T^{-1}(E^{-1}E^{t})^2=T^{-1}F^{-2t}.
\end{aligned}$$ Finding the dimensions is mostly straightforward. The only point which deserves attention is $\dim\mathrm{H}^1(\mathcal B, {_\varepsilon\mathbb C_\tau})=\dim \mathrm{H}_2(\mathcal B, {_{\varepsilon\circ\alpha}\mathbb C_\tau})=d'-(1-p)$ with $$\begin{aligned}
d'&=\dim\{\mathrm{M}\in M_n:\mathrm{M}+E^{t}\mathrm{M}^tGE^{-t}=0\}.
\end{aligned}$$ The map $\mathrm{M}\mapsto E^t\mathrm{M}^t=:\mathrm{K}$ induces an isomorphism between the spaces $\{\mathrm{M}\in M_n:\mathrm{M}+E^{t}\mathrm{M}^tGE^{-t}=0\}$ and $\{\mathrm{K}\in M_n:\mathrm{K}^t=-T^tF\mathrm{K}\}$ because $$\begin{aligned}
\mathrm{M}+E^{t}\mathrm{M}^tGE^{-t}=0
&\iff \mathrm{M}+E^{t}\mathrm{M}^tT^{-1}E^{-1}E^{t}E^{-1}=0\\
&\iff \mathrm{K}^tE^{-1} + \mathrm{K}T^{-1}F^{-t}E^{-1}=0\\
&\iff \mathrm{K}^tF^{t}T+ \mathrm{K}=0\\
&\iff -T^tF\mathrm{K}=\mathrm{K}^t.
\end{aligned}$$ Therefore, $d=d'$ and $\dim \mathrm{H}^1(\mathcal B, {_\varepsilon\mathbb C_\tau}) = d-1+p$ as claimed.
Now we come to the second proof. From the resolution [\[resO+E\]](#resO+E){reference-type="eqref" reference="resO+E"} for $\mathcal B$, we construct the cochain complex $$\begin{gathered}
\label{eq_resolution_of_B}
0\to \hom_{\mathcal B}\left(\mathcal B,\mathbb C_\tau\right)
\xrightarrow{-\circ \Phi^{\mathcal B}_1}
\hom_{\mathcal B} \left(M_n(\mathcal B),\mathbb C_\tau\right)
\\
\xrightarrow{-\circ \Phi^{\mathcal B}_2}
\hom_{\mathcal B}\left(M_n(\mathcal B),\mathbb C_\tau\right)
\xrightarrow{-\circ\Phi^{\mathcal B}_3}
\hom_{\mathcal B}\left(\mathcal B,\mathbb C_\tau\right)\to0.\end{gathered}$$
Recall that from Lemma [Lemma 2](#lemma:traces-free-modules){reference-type="ref" reference="lemma:traces-free-modules"} we have vector space isomorphisms $$\hom_{\mathcal B}(M_{n}(\mathcal B),\mathbb C_\tau)\cong M_{n}(\mathbb C),\quad \hom_{\mathcal B}(\mathcal B,\mathbb C_\tau)\cong \mathbb C$$ determined by the "pairings" $$\Lambda^\tau A=\tau\operatorname{tr}(\Lambda^t A),\quad \lambda^{\tau} a=\lambda\tau(a)$$ for $\Lambda\in M_{n}(\mathbb C),A\in M_{n}(\mathcal B),\lambda\in\mathbb C,a\in\mathcal B$. Also, the precompositions $-\circ\Phi^{\mathcal B}_j$ can be described by the corresponding maps $(\Phi^{\mathcal B}_j)^{*}$ which yield the isomorphic complex $$0\rightarrow \mathbb C\xrightarrow{(\Phi^{\mathcal B}_1)^{*}}M_n(\mathbb C) \xrightarrow{(\Phi^{\mathcal B}_2)^{*}}M_n(\mathbb C)\xrightarrow{(\Phi^{\mathcal B}_3)^{*}} \mathbb C\rightarrow 0,$$ leading to $$\mathrm H^k\left( \mathcal B, {_\varepsilon\mathbb C_\tau}\right) \cong \faktor{\ker((\Phi^{\mathcal B}_{k+1})^{*})}{\mathrm{Im}((\Phi^{\mathcal B}_k)^{*})},$$ where $(\Phi^{\mathcal B}_{j})^{*}:=0$ for $j\notin\{1,2,3\}$.
We will now have a closer look at the maps $(\Phi^{\mathcal B}_j)^{*}$ ($j=1,2,3$). For $(\Phi^{\mathcal B}_1)^{*} \colon \mathbb C\to M_n(\mathbb C)$ we have $$\begin{aligned}
(\lambda^{\tau}\circ\Phi_1^{\mathcal B}) (A)
& = \lambda \, \tau\big( \operatorname{tr}(x^tA) -\operatorname{tr}(A)\big)
= \tau\operatorname{tr}(\lambda(T-I)^tA)= \bigl(\lambda(T-I)\bigr)^\tau A
\end{aligned}$$ i.e., $$\begin{aligned}
(\Phi^{\mathcal B}_1)^{*} (\lambda)= \lambda (T-I).\end{aligned}$$ So $$\begin{aligned}
\ker (\Phi^{\mathcal B}_1)^{*} &=\begin{cases}
\mathbb C, & \mbox{if}\quad T=I_n, \\
\{0\}, & \mbox{otherwise}.
\end{cases}\end{aligned}$$ We conclude using the rank-nullity theorem that $$\dim\ker (\Phi^{\mathcal B}_1)^{*} =p,
\qquad
\dim\operatorname{Im}(\Phi^{\mathcal B}_1)^{*} =1-p.$$ For $(\Phi^{\mathcal B}_2)^{*}\colon M_n(\mathbb C)\to M_n(\mathbb C)$ we find that $$\begin{aligned}
(\Lambda^\tau \circ \Phi^{\mathcal B}_2)
(A)
& = \Lambda^\tau ( A+ (E^{-1}x^tAE)^t)\\
& =
\tau \operatorname{tr}(\Lambda^t A)+ \tau\operatorname{tr}(\Lambda^t (E^{-1}x^tAE)^t)\\
&= \tau \operatorname{tr}(\Lambda^t A)+ \tau \operatorname{tr}(E\Lambda E^{-1}T^tA)\\
&= (\Lambda + T E^{-t}\Lambda^t E^{t})^{\tau}(A),\end{aligned}$$ i.e. $$\begin{aligned}
(\Phi^{\mathcal B}_2)^{*} (\Lambda)= \Lambda + T E^{-t}\Lambda^t E^{t}.\end{aligned}$$ Now, $\Lambda\in \ker (\Phi^{\mathcal B}_2)^{*}$ if and only if $$\Lambda + T E^{-t}\Lambda^t E^{t}=0.$$ Note that from [\[eq_relations_in_BE\]](#eq_relations_in_BE){reference-type="eqref" reference="eq_relations_in_BE"}, it follows that $E^{-1}T^tET=I$ and, therefore, $E^{t}T^{-1}=T^tE^{t}$. Using this relation, we see that the map $\Lambda\mapsto E\Lambda=:\mathrm{K}$ induces an isomorphism between $\ker (\Phi^{\mathcal B}_2)^{*}$ and $\{\mathrm{K}\in M_n:\mathrm{K}^t=-T^tF\mathrm{K}\}$ because $$\begin{aligned}
\Lambda + T E^{-t}\Lambda^t E^{t}=0
& \iff E^{-1}\mathrm{K}+ T E^{-t} \mathrm{K}^{t}=0\\
& \iff \mathrm{K}^{t}=-E^{t}T^{-1}E^{-1}\mathrm{K}=-T^tE^{t}E^{-1}\mathrm{K}=-T^tF\mathrm{K}
.\end{aligned}$$ By the rank-nullity theorem, $$\dim \ker (\Phi^{\mathcal B}_2)^{*} = d, \qquad \dim\operatorname{Im}(\Phi^{\mathcal B}_2)^{*} = n^2-d.$$
Finally, for $(\Phi^{\mathcal B}_3)^{*} \colon M_n(\mathbb C) \to \mathbb C$ we find $$\begin{aligned}
(\Lambda^{\tau} \circ \Phi^{\mathcal B}_3) (a)
& = \Lambda^{\tau}\left(\bigl(Ex E^{-t}-E^tE^{-1}\bigr)a\right)
\\ & = \tau\operatorname{tr}\left(\Lambda^t\bigl(ET E^{-t}-E^tE^{-1}\bigr)a\right)\\
&=\operatorname{tr}\left(\Lambda^t\bigl(ET E^{-t}-E^tE^{-1}\bigr)\right)\tau(a),
\end{aligned}$$ so $$\begin{aligned}
(\Phi^{\mathcal B}_3)^{*}
(\Lambda) = \operatorname{tr}\left(\Lambda^t\bigl(ET E^{-t}-E^tE^{-1}\bigr)\right).\end{aligned}$$ Note that $ETE^{-t}=E^tE^{-1}$ if and only if $T=(E^{-1}E^t)^2$. Using the rank-nullity theorem, we can thus conclude that $$\dim \ker (\Phi^{\mathcal B}_3)^{*} =n^2-s,
\qquad
\dim \operatorname{Im}(\Phi^{\mathcal B}_3)^{*} =s.$$ The result now follows from $\dim \mathrm H^k(\mathcal B, {_\varepsilon\mathbb C_\tau})=\dim\ker (\Phi^{\mathcal B}_{k+1})^* - \dim\operatorname{Im}(\Phi^{\mathcal B}_k)^*$. ◻
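For a first plausibility check, let us evaluate the formulas of the theorem in the simplest case $E=I_n$ and $\tau=\varepsilon$ (so $T=I_n$ and $F=I_n$): then $p=1$, $d=\dim\{\mathrm K\in M_n:\mathrm K^t=-\mathrm K\}=\tfrac{n(n-1)}{2}$ and $s=0$, so that $$\dim \mathrm H^0(\mathcal B, {_\varepsilon\mathbb C_\varepsilon})=\dim \mathrm H^3(\mathcal B, {_\varepsilon\mathbb C_\varepsilon})=1,\qquad \dim \mathrm H^1(\mathcal B, {_\varepsilon\mathbb C_\varepsilon})=\dim \mathrm H^2(\mathcal B, {_\varepsilon\mathbb C_\varepsilon})=\frac{n(n-1)}{2}$$ for $\mathcal B(I_n)\cong\operatorname{Pol}(O_n^+)$ with trivial coefficients; this is only a specialization of the statement above, not an independent computation.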
## Projective resolution for free products of Hopf algebras {#freeprod}
The higher Hochschild cohomologies of the free product of Hopf algebras can be easily computed from the Hochschild cohomologies of the factors.
**Lemma 15** ([@Bichon18 Theorem 5.2]). *Let $A$ and $B$ be Hopf algebras. For any $A*B$-bimodule $M$ and for $n\geq 2$ we have $$\mathrm H^n(A*B,M)\cong \mathrm H^n(A,M)\oplus \mathrm H^n(B,M),$$ where the bimodules on the right-hand side are equipped with the actions restricted to $A$ and $B$.*
In our considerations we will need the following more detailed result.
**Theorem 16**. *Let $A=B*C$ be a free product of the Hopf algebras $B$ and $C$. Furthermore, suppose that $$\begin{aligned}
\cdots\xrightarrow{\Phi_3}P_2\xrightarrow{\Phi_2} P_1\xrightarrow{\Phi_1} B \xrightarrow{\varepsilon_B}\mathbb C_{\varepsilon_B}\rightarrow 0\\
\cdots\xrightarrow{\Psi_3}Q_2\xrightarrow{\Psi_2} Q_1\xrightarrow{\Psi_1} C \xrightarrow{\varepsilon_C}\mathbb C_{\varepsilon_C}\rightarrow 0\end{aligned}$$ are projective resolutions for $B$ and $C$, respectively. Put $\widetilde P_k:=P_k\otimes_B A$, $\widetilde Q_k:= Q_k\otimes_C A$, $\widetilde \Phi_k:=\Phi_k\otimes_B{\mathrm{id}_A}$, $\widetilde\Psi_k:=\Psi_k\otimes_{C}\mathrm{id}_A$. Then $$\cdots\xrightarrow{\widetilde \Phi_3\oplus \widetilde \Psi_3}\widetilde P_2\oplus \widetilde Q_2\xrightarrow{\widetilde \Phi_2\oplus \widetilde \Psi_2} \widetilde P_1\oplus \widetilde Q_1\xrightarrow{\widetilde \Phi_1 + \widetilde \Psi_1} A \xrightarrow{\varepsilon_A}\mathbb C_{\varepsilon_A}\rightarrow 0$$ is a projective resolution for $A$.*
*Proof.* For the codomains of $\widetilde \Phi_1$ and $\widetilde \Psi_1$ we have $B\otimes_B A=A= C\otimes_C A$. It is easy to see that the modules are projective (they can be complemented to free modules using the fact that the modules in the original resolutions are projective). Note that the free product $A=B\ast C$, as a module over the algebras $B$ and $C$, respectively, is free, hence flat, which means in our context that the functors $\cdot\otimes_B A$ and $\cdot \otimes_C A$ are exact. Therefore, $$\operatorname{ker}(\widetilde \Phi_k\oplus \widetilde{\Psi}_k)
=\operatorname{ker}(\widetilde \Phi_k)\oplus \operatorname{ker}(\widetilde{\Psi}_k)
=\operatorname{im}(\widetilde \Phi_{k+1})\oplus \operatorname{im}(\widetilde{\Psi}_{k+1})
=\operatorname{im}(\widetilde \Phi_{k+1}\oplus \widetilde{\Psi}_{k+1})$$ proves exactness at $\widetilde P_k\oplus \widetilde Q_k$ with $k>1$. Under the natural identifications, we have (using the shorthand notation $A^+:=\operatorname{ker}\varepsilon_A$ and analogously for $B$ and $C$) $$A^+= B^+ A \oplus C^+ A= (B^+\otimes_B A) \oplus (C^+ \otimes_C A) = \operatorname{im}(\widetilde \Phi_1)\oplus \operatorname{im}(\widetilde{\Psi}_1)=\operatorname{im}(\widetilde \Phi_1\oplus \widetilde{\Psi}_1).$$ This directly shows exactness at $A$, but also allows us to conclude $$\operatorname{ker}(\widetilde \Phi_1+\widetilde{\Psi}_1)=\operatorname{ker}(\widetilde \Phi_1\oplus\widetilde{\Psi}_1)= \operatorname{im}(\widetilde \Phi_2\oplus\widetilde{\Psi}_2).\qedhere$$ ◻
This result allows us to compute the Hochschild cohomologies of the free product of Hopf algebras by passing to the hom-spaces as described in Theorem [Theorem 10](#thm:hom-spaces){reference-type="ref" reference="thm:hom-spaces"}.
## Yetter-Drinfeld modules and Gerstenhaber-Schack cohomology (generalities) {#subsec:YD-generalities}
A (right-right) *Yetter--Drinfeld module* over a Hopf algebra $H$ is a right $H$-comodule $X$ which is also a right $H$-module satisfying $$(xh)_{(0)}\otimes (xh)_{(1)}=x_{(0)}h_{(2)}\otimes S(h_{(1)})x_{(1)}h_{(3)},\qquad x\in X,\quad h\in H.$$ In other words, a Yetter-Drinfeld module is an $H$-module $H$-comodule such that the coaction is a module map for the $H$-action $$\begin{aligned}
(x\otimes h)\mathbin{\vartriangleleft}h':=xh'_{(2)}\otimes S(h'_{(1)})h h'_{(3)}\end{aligned}$$ on $X\otimes H$. Morphisms between Yetter-Drinfeld modules over $H$ are the $H$-linear $H$-colinear maps. We denote the category thus obtained by $\mathcal{YD}^H_H$. The coaction is often denoted by $\gamma(x):=x_{(0)}\otimes x_{(1)}$.
Before we continue to discuss the category of Yetter-Drinfeld modules, we record for future reference a simple lemma, which is a slight generalization of the forward implication of [@Bic16 Proposition 4.5].
**Lemma 17**. *Let $H$ be a Hopf subalgebra of $A$ with $$\begin{aligned}
\label{eq:assumption_lem_coinvariant}
a_{(2)}\otimes S(a_{(1)})Ha_{(3)}\subset A\otimes H
\end{aligned}$$ for all $a\in A$, i.e. $A\otimes H$ is invariant for the $\mathbin{\vartriangleleft}$-action of $A$ on $A\otimes A$ (or in the terminology of [@Bic16 Definition 4.6]: $H\subset A$ is *adjoint*). Assume further that $X$ is a Yetter-Drinfeld $A$-module with a subcomodule $C\subset X$ which generates $X$ as an $A$-module, i.e. $X=CA$. Denote $\gamma\colon X\to X\otimes A$ the $A$-coaction on $X$. Then $\gamma(X)\subset X\otimes H$ if and only if $\gamma(C)\subset C\otimes H$.*
*Proof.* The "only if" part is trivial, the "if" part easily follows from $\gamma(ca)=\gamma(c)\mathbin{\vartriangleleft}a=c_{(0)}a_{(2)}\otimes S(a_{(1)})c_{(1)}a_{(3)}$. ◻
For the purposes of cohomology computations, we need to identify the projective objects in the category $\mathcal{YD}^H_H$. To this end, recall that given a right $H$-comodule $C$ one can construct a Yetter-Drinfeld module $C\boxtimes H$, referred to as *the free Yetter-Drinfeld module over $C$*, as follows. As a right module it is just $C\otimes H$ (the free $H$-module over $C$) and the right coaction is given by $$\delta_\boxtimes(c\boxtimes h):=c_{(0)}\boxtimes h_{(2)}\otimes S(h_{(1)})c_{(1)}h_{(3)},$$ the unique extension of the coaction on $C=C\boxtimes 1\subset C\boxtimes H$ to a module map. It is easy to check that the free Yetter-Drinfeld module over an $H$-comodule $C$ enjoys the following universal property: for every comodule map $f\colon C\to X$ to a Yetter-Drinfeld module $X\in\mathcal{YD}_H^H$, there exists a unique extension to a morphism of Yetter-Drinfeld modules $\widetilde f\colon C\boxtimes H\to X$, namely $\widetilde f(c\boxtimes h):=f(c)h$.
A Yetter-Drinfeld $H$-module is called *free* if it is isomorphic to the free Yetter-Drinfeld module $C\boxtimes H$ over some $H$-comodule $C$. Note that a free Yetter-Drinfeld module is in particular a free $H$-module.
To lighten notation a bit, if $H$ is a Hopf algebra, we denote the free Yetter-Drinfeld module over $\mathbb C$, $\mathbb C\boxtimes H$, simply by $H$; it is often called *right coadjoint Yetter-Drinfeld module* and denoted by $H_{\mathrm{coad}}$. Subsequently, we call a Yetter-Drinfeld module *relatively projective* if it is a direct summand of a free Yetter--Drinfeld module. One can prove that, for a cosemisimple Hopf algebra $H$, the projective objects in $\mathcal{YD}^H_H$ are exactly the relatively projective Yetter-Drinfeld modules [@Bic16 Proposition 4.2].
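Spelling out the definition in the special case $C=\mathbb C$ (with trivial coaction $1\mapsto 1\otimes 1$), the comodule structure of $H_{\mathrm{coad}}=\mathbb C\boxtimes H$ is the right adjoint coaction $$\delta_\boxtimes(1\boxtimes h)=1\boxtimes h_{(2)}\otimes S(h_{(1)})h_{(3)},\qquad h\in H,$$ while the module structure is simply right multiplication in $H$.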
Note that $\mathbb C_\varepsilon$ is a (typically not projective) Yetter-Drinfeld module with coaction $1\mapsto 1\otimes 1\in \mathbb C\otimes H$, indeed, $$1\otimes 1\mathbin{\vartriangleleft}h = 1\varepsilon(h_{(2)})\otimes S(h_{(1)})1h_{(3)}=\varepsilon(h)1\otimes 1.$$ Therefore, it makes sense to search for projective resolutions of the counit by projective Yetter-Drinfeld modules.
Let us now discuss the *Gerstenhaber-Schack cohomology* [@G-Sch90]. Let $H$ be a Hopf algebra and let $X$ be a Yetter-Drinfeld module over $H$. The Gerstenhaber-Schack cohomology of $H$ with coefficients in $X$ is denoted by $\mathrm{H_{GS}^*}(H,X)$. In this paper, we use the following result of Bichon [@Bic16 Proposition 5.2] as our definition of GS cohomology (we state it for cosemisimple Hopf algebras but it works for more general co-Frobenius Hopf algebras).
**Proposition 18**. *Let $H$ be a cosemisimple Hopf algebra and let $$P_*:=\ldots\to P_{n+1}\to P_n\to\ldots\to P_1\to \mathbb C_\varepsilon\to 0$$ be a projective resolution of the counit in $\mathcal{YD}^H_H$. For any Yetter-Drinfeld $H$-module $X$, there is an isomorphism $$\mathrm{H^*_{GS}}(H,X)\cong \mathrm H^*\left(\operatorname{hom}_{\mathcal{YD}^H_H}(P_*,X)\right).$$*
We redirect the reader to [@G-Sch90 Section 1] for a definition of $\mathrm{H_{GS}^*}(H,X)$ via an explicit complex. The *bialgebra cohomology* is defined as $\mathrm{H_b^*}(H):=\mathrm{H_{GS}^*}(H,\mathbb C_\varepsilon)$.
Let us now discuss the restriction and the induction functors. For the remainder of the section, let $H$ be a Hopf subalgebra of a Hopf algebra $A$. The *restriction functor* is defined as follows $$\operatorname{_{\mathnormal H}\mathrm{res}_\mathnormal{A}}\colon\mathcal{YD}^A_A\longrightarrow \mathcal{YD}^H_H,\qquad X\longmapsto X^{(H)},$$ where $X^{(H)}:=\{x\in X~:~x_{(0)}\otimes x_{(1)}\in X\otimes H\}$. One can show that the above functor is isomorphic to the functor $(-)\mathbin{\Box}_A H$ (recall that $\mathbin{\Box}_A$ denotes the cotensor product over $A$); the isomorphism is given by $X^{(H)}\ni x\mapsto x_{(0)}\otimes x_{(1)}\in X\mathbin{\Box}_A H$. Hence, the restriction functor is always left exact and we say that $H$ is *coflat* over $A$ when it is exact. Next, we define the *induction functor* to be $$\operatorname{_{\mathnormal A}\mathrm{ind}_\mathnormal{H}}\colon\mathcal{YD}^H_H\longrightarrow \mathcal{YD}^A_A,\qquad X\longmapsto X\otimes_H A.$$ The Yetter-Drinfeld $A$-structure on $X\otimes_H A$ is given as follows: the $A$-module action is just the multiplication on the right and the $A$-comodule structure is defined by $$x\otimes_H a\longmapsto x_{(0)}\otimes_H a_{(2)}\otimes S(a_{(1)})x_{(1)}a_{(3)}.$$ It is clear that the above functor is always right exact and we say that $A$ is flat over $H$ if it is exact.
It is worth noting that the functors $(\operatorname{_{\mathnormal H}\mathrm{res}_\mathnormal{A}},
\operatorname{_{\mathnormal A}\mathrm{ind}_\mathnormal{H}})$ form a pair of adjoint functors (this is a special case of [@CMZ02 Section 2.5 Theorem 15]; in Section 5.3 Proposition 123 therein, the case of left-right Yetter-Drinfeld modules is discussed; the theorem applies to right-right Yetter-Drinfeld modules in an analogous way).
If $A$ is cosemisimple, then so is $H$ and $A$ is flat over $H$ [@Chirvasitu14], so the induction functor is exact. Hence, in the cosemisimple case, a general categorical result [@Pareigis1970 Problem 4.16 and its dual statement] implies that $\operatorname{_{\mathnormal A}\mathrm{ind}_\mathnormal{H}}$ preserves projective objects. This last observation is crucial in the next theorem (although in the special cases we treat later, projectivity of the involved modules will be easy to check by hand).
**Theorem 19** (Projective Yetter-Drinfeld resolution for free products). *Let $A=B*C$ be a free product of Hopf algebras $B$ and $C$. Furthermore, suppose that*
*$$\begin{aligned}
\cdots\xrightarrow{\Phi_3}X_2\xrightarrow{\Phi_2} X_1\xrightarrow{\Phi_1} B \xrightarrow{\varepsilon_B}\mathbb C_{\varepsilon_B}\rightarrow 0\\
\cdots\xrightarrow{\Psi_3}Y_2\xrightarrow{\Psi_2} Y_1\xrightarrow{\Psi_1} C \xrightarrow{\varepsilon_C}\mathbb C_{\varepsilon_C}\rightarrow 0\end{aligned}$$ are resolutions of $\mathbb{C}_{\varepsilon_B}$ and $\mathbb{C}_{\varepsilon_C}$ by relatively projective Yetter--Drinfeld modules, respectively. Put $\widetilde X_k:=X_k\otimes_B A$, $\widetilde Y_k:= Y_k\otimes_C A$, $\widetilde \Phi_k:=\Phi_k\otimes_B{\mathrm{id}_A}$, and $\widetilde\Psi_k:=\Psi_k\otimes_{C}\mathrm{id}_A$. Then, for the codomains of $\widetilde \Phi_1$ and $\widetilde \Psi_1$, we have $B\otimes_B A=A= C\otimes_C A$ (as Yetter-Drinfeld modules!) and $$\cdots\xrightarrow{\widetilde \Phi_3\oplus \widetilde \Psi_3}\widetilde X_2\oplus \widetilde Y_2\xrightarrow{\widetilde \Phi_2\oplus \widetilde \Psi_2} \widetilde X_1\oplus \widetilde Y_1\xrightarrow{\widetilde \Phi_1 + \widetilde \Psi_1} A \xrightarrow{\varepsilon_A}\mathbb C_{\varepsilon_A}\rightarrow 0$$ is a resolution for $A$ by projective objects in $\mathcal{YD}^A_A$.*
*Proof.* It is straightforward that the maps $\widetilde{\Phi}_k$ and $\widetilde{\Psi}_k$ are $A$-comodule maps. As in the proof of Theorem 16, since $A=B\ast C$ is flat over $B$ and $C$, respectively, the induction functors $\cdot\otimes_B A$ and $\cdot \otimes_C A$ are exact. Exactness holds both in the category of right modules and in the category of Yetter-Drinfeld modules, since the induction functors act identically. Therefore, $$\operatorname{ker}(\widetilde \Phi_k\oplus \widetilde{\Psi}_k)
=\operatorname{ker}(\widetilde \Phi_k)\oplus \operatorname{ker}(\widetilde{\Psi}_k)
=\operatorname{im}(\widetilde \Phi_{k+1})\oplus \operatorname{im}(\widetilde{\Psi}_{k+1})
=\operatorname{im}(\widetilde \Phi_{k+1}\oplus \widetilde{\Psi}_{k+1})$$ proves exactness at $\widetilde X_k\oplus \widetilde Y_k$ with $k>1$. Under the natural identifications, we have for $A^+:=\operatorname{ker}\varepsilon_A$ $$A^+= B^+ A \oplus C^+ A= (B^+\otimes_B A) \oplus (C^+ \otimes_C A) = \operatorname{im}(\widetilde \Phi_1)\oplus \operatorname{im}(\widetilde{\Psi}_1)=\operatorname{im}(\widetilde \Phi_1\oplus \widetilde{\Psi}_1).$$ Finally, the induction functors preserve projective objects, so we are done. ◻
Let us close this section by investigating what the induction functor does to free Yetter-Drinfeld modules.
**Proposition 20**. *Let $H\subset A$ be a Hopf subalgebra and $C\boxtimes H$ be a free Yetter-Drinfeld module. Then $$(C\boxtimes H)\otimes_H A=C\boxtimes A,$$ i.e. the canonical module isomorphism is a Yetter-Drinfeld isomorphism.*
*Proof.* The map $C=C\boxtimes 1\subset C\boxtimes A\to (C\boxtimes H)\otimes_H A$, $c\mapsto (c\boxtimes 1)\otimes_H 1$, is obviously a comodule map. Its unique extension to $C\boxtimes A$ as a module map is the canonical module isomorphism $c\boxtimes a\mapsto (c\boxtimes 1)\otimes_H a$. By the universal property of the free Yetter-Drinfeld module, it is a Yetter-Drinfeld map. ◻
# Projective resolution and Hochschild cohomology for $\mathcal A(E)$ {#sec:resolution-A(E)}
In this section we use the results from Section [2.5](#freeprod){reference-type="ref" reference="freeprod"} on projective resolutions of free products to describe a projective resolution for $\mathcal A(E)=\mathcal B(E)*\mathbb C\mathbb Z_2$, and compute the Hochschild cohomology with one-dimensional coefficients $\mathrm{H}^*(\mathcal A(E),{_\varepsilon\mathbb C_\tau})$. We denote by $g$ the unitary generator of $\mathbb{C}\mathbb{Z}_2$ ($g^2=1$), and by $x_{jk}$, $j,k=1,\ldots, n$ the matrix elements generating $\mathcal B(E)$ as an algebra.
Since $\mathbb Z_2$ is a finite group, it is easy to see that $\mathbb C_\varepsilon$ is a projective module (indeed, $\mathbb C\mathbb Z_2=\mathbb C(1+g)\oplus \mathbb C(1-g)$ and $\mathbb C(1+g)\cong \mathbb C_\varepsilon$). Hence, there is a projective resolution of length 0, $$0\rightarrow \mathbb C(1+g) \rightarrow \mathbb C_\varepsilon \rightarrow 0.$$ However, in order to apply Theorem 16, we need to find a projective resolution with $P_0=\mathbb C\mathbb Z_2$. The free resolution given in, e.g., [@Wei94 p. 167] would work, but is infinite. We prefer to use the finite projective resolution from the following lemma.
**Lemma 21**. *The sequence $$\begin{aligned}
\label{eq:resZ2}
0\to\mathbb{C}(1-g)\xrightarrow{\Psi_1}\mathbb{C}\mathbb{Z}_2\xrightarrow{
\varepsilon}\mathbb{C}_\varepsilon\to 0,\end{aligned}$$ where $$\varepsilon(1)=\varepsilon(g)=1,\qquad \Psi_1(1-g)=1-g,$$ is a projective resolution of the counit of $\mathbb{C}\mathbb{Z}_2$.*
*Proof.* Indeed, $\mathbb{C}(1-g)$ is a projective $\mathbb{C}\mathbb{Z}_2$-module since it can be complemented to a free $\mathbb{C}\mathbb{Z}_2$-module. Injectivity of $\Psi_1$ is clear. Since $\varepsilon(\Psi_1(1-g))=0$, we have that $\operatorname{Im}(\Psi_1)\subseteq\ker\varepsilon$ and it is straightforward to prove that this containment is in fact an equality. ◻
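For the reader's convenience, let us also record the elementary decomposition behind these projectivity statements: $e_\pm:=\tfrac12(1\pm g)$ are orthogonal idempotents with $e_++e_-=1$, and $$(1+g)g=1+g,\qquad (1-g)g=-(1-g),$$ so that $\mathbb C\mathbb Z_2=\mathbb C(1+g)\oplus\mathbb C(1-g)$ with $\mathbb C(1+g)\cong\mathbb C_\varepsilon$ and $\mathbb C(1-g)\cong\mathbb C_\sigma$ as right $\mathbb C\mathbb Z_2$-modules, where $\sigma$ denotes the sign character appearing in the next proposition.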
From this we get the Hochschild cohomology of $\mathbb C\mathbb Z_2$ with 1-dimensional coefficients.
**Proposition 22**. *There are exactly two characters on $\mathbb C\mathbb Z_2$, namely $\varepsilon$ and the sign representation $\sigma$, determined by $\sigma (g)=-1$. The Hochschild cohomology of $\mathbb C\mathbb Z_2$ with 1-dimensional coefficients is given by $$\begin{aligned}
\dim \mathrm{H}^0(\mathbb C\mathbb Z_2 , {}_\varepsilon\mathbb C_\varepsilon) &= 1 ,& \dim \mathrm{H}^0(\mathbb C\mathbb Z_2 , {}_\varepsilon\mathbb C_\sigma) &=0 \\
\dim \mathrm H^k(\mathbb C\mathbb Z_2 , {_\varepsilon\mathbb C_\tau}) &= 0, \text{ for any } k \geq 1, \, \tau\in\{\varepsilon,\sigma\}.&&\end{aligned}$$*
*Proof.* The first statement is obvious.
Let $\tau\in\{\varepsilon,\sigma\}$. The resolution [\[eq:resZ2\]](#eq:resZ2){reference-type="eqref" reference="eq:resZ2"} yields, by passing to hom-spaces, the complex $$0\rightarrow \hom_{\mathbb C\mathbb Z_2}(\mathbb C\mathbb Z_2,\mathbb C_\tau)\xrightarrow{-\circ \Psi_1}
\hom_{\mathbb C\mathbb Z_2}(\mathbb C(1-g),\mathbb C_\tau) \rightarrow 0,$$ whose cohomology coincides with the Hochschild cohomology spaces we aim to determine. Clearly, $\dim(\hom_{\mathbb C\mathbb Z_2}(\mathbb C\mathbb Z_2,\mathbb C_\tau))=1$ because $\mathbb C\mathbb Z_2$ is a free module with basis $1$. On the other hand, we find that $$\dim (\hom_{\mathbb C\mathbb Z_2}(\mathbb C(1-g),\mathbb C_\tau))=
\begin{cases}
0&\tau=\varepsilon,\\
1&\tau=\sigma;
\end{cases}$$ indeed, the linear map $f_z$ determined by $(1-g)\mapsto z$ is $\mathbb C\mathbb Z_2$-equivariant if and only if $$z=f_z(1-g)=f_z(-(1-g)g)=-f_z(1-g)\tau(g)=-\tau(g)z,$$ which is equivalent to $z=0$ or $\tau(g)=-1$, i.e. $\tau=\sigma$. For $\tau=\varepsilon$, this completes the proof. For $\tau=\sigma$, we also need to check injectivity of $-\circ\Psi_1$. Suppose that $f\in \hom_{\mathbb C\mathbb Z_2}(\mathbb C\mathbb Z_2,\mathbb C_\tau)$. Then, necessarily, $f(1+g)=f(1)+f(1)\sigma(g)=0$. If $f\circ\Psi_1=0$, then also $f(1-g)=0$ and, therefore, $f=0$, which proves injectivity. ◻
We apply Theorem [Theorem 16](#thm:res_for_free_product){reference-type="ref" reference="thm:res_for_free_product"} to the resolutions [\[eq:resZ2\]](#eq:resZ2){reference-type="eqref" reference="eq:resZ2"} and [\[resO+E\]](#resO+E){reference-type="eqref" reference="resO+E"} and obtain the following projective resolution of the counit for $\mathcal A(E)$.
**Theorem 23**. *Let $E\in\operatorname{GL}_n(\mathbb C)$ and $\mathcal A=\mathcal A(E)$. Then the sequence $$\begin{aligned}
\label{eq_resolution_of_A}
0\to \mathcal A \xrightarrow{\Phi^{\mathcal A}_3}M_n(\mathcal A )\xrightarrow{\Phi^{\mathcal A}_2} M_n(\mathcal A )\oplus(\mathbb{C}(1-g)\otimes_{\mathbb{C}\mathbb{Z}_2}\mathcal A )
\xrightarrow{\Phi^{\mathcal A}_1}\mathcal A \xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0,
\end{aligned}$$ with the mappings $$\begin{aligned}
\Phi^{\mathcal A}_3 (a) &=\bigl(Ex E^{-t}-E^tE^{-1}\bigr)a, \\
\Phi^{\mathcal A}_2 (A) &= (A + (E^{-1}x^t A E)^t, 0),\\
\Phi^{\mathcal A}_1 (A,(1-g)\otimes_{\mathbb C\mathbb Z_2}a) &= \operatorname{tr}(x^tA)-\operatorname{tr}(A)+(1-g)a
\end{aligned}$$ yields a projective resolution of the counit of $\mathcal A$.*
We now aim to compute the Hochschild cohomology of $\mathcal A(E)$ with coefficients in $_\varepsilon\mathbb C_\tau$, where $\tau$ is a character on the free product ${\mathcal A}(E)=\mathcal B(E)\ast\mathbb C\mathbb Z_2$. By Theorem 10, this is the cohomology of the complex obtained by applying the functor $\hom_{\mathcal A}(-,\mathbb C_\tau)$ to the resolution [\[eq_resolution_of_A\]](#eq_resolution_of_A){reference-type="eqref" reference="eq_resolution_of_A"}.
**Theorem 24**. *Let $E\in \operatorname{GL_n}(\mathbb C)$, $\mathcal A:=\mathcal A(E)$, and $\tau\colon \mathcal A\to \mathbb C$ a character. Also assume that $F:=E^tE^{-1}$ is generic, so that $\mathcal A(E)$ is cosemisimple. Set $T:=\tau(x)$, $t=\tau(g)$. Let furthermore $$\begin{aligned}
p&=\begin{cases}
1 & \mbox{ if } T=I_n\\
0 & \mbox{ otherwise, }
\end{cases}&
q&=\begin{cases}
1 & \mbox{ if } t=1\\
0 & \mbox{ otherwise, }
\end{cases}\\
d&=\operatorname{dim}\{\mathrm{K}\in M_n:\mathrm{K}^t=-T^tF\mathrm{K}\},&
s&=\begin{cases}
0 & \mbox{ if } T^t=F^{-2}\\
1 & \mbox{ otherwise. }
\end{cases}
\end{aligned}$$ Then the Hochschild cohomology for $\mathcal A$ with 1-dimensional coefficients is given as follows: $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal A, {_\varepsilon\mathbb C_\tau}) &= pq ,
&\dim \mathrm{H}^1(\mathcal A, {_\varepsilon\mathbb C_\tau}) &= d -(1-p)q,
\\
\dim \mathrm H^2(\mathcal A, {_\varepsilon\mathbb C_\tau}) &= d-s,
&\dim \mathrm H^3(\mathcal A, {_\varepsilon\mathbb C_\tau}) &= 1-s,\end{aligned}$$ and $\dim \mathrm H^k(\mathcal A, {_\varepsilon\mathbb C_\tau}) = 0$ for all $k \geq 4$.*
*Proof.* We extend the conventions from Lemma 2 to the projective module $\mathbb{C}(1-g)\otimes_{\mathbb{C}\mathbb{Z}_2}\mathcal A$ by defining $$\lambda^\tau\bigl((1-g)\otimes_{\mathbb C\mathbb Z_2}a\bigr):=\lambda\tau((1-g)a)=\lambda\cdot (1-t)\tau(a);$$ note that the map $\lambda\mapsto \lambda^\tau\colon \mathbb C\to \hom_{\mathcal A}(\mathbb{C}(1-g)\otimes_{\mathbb{C}\mathbb{Z}_2}\mathcal A,\mathbb C_\tau)$ is an isomorphism if $t=-1$ and the zero map if $t=1$. Applying the functor $\hom_{\mathcal A}(-,\mathbb C_\tau)$ and the discussed isomorphisms to the resolution [\[eq_resolution_of_A\]](#eq_resolution_of_A){reference-type="eqref" reference="eq_resolution_of_A"}, Hochschild cohomology coincides with the cohomology of the complex $$0 \rightarrow \mathbb C\xrightarrow{(\Phi_1^{\mathcal A})^*}
M_n(\mathbb C) \oplus \delta_{\{-t\}}\mathbb C
\xrightarrow{(\Phi_2^{\mathcal A})^*}
M_n(\mathbb C)
\xrightarrow{(\Phi_3^{\mathcal A})^*}
\mathbb C\to 0,$$ where $\delta_{\{-t\}}\mathbb C$ means that this summand is omitted if $t=\tau(g)=1$.
Let us have a closer look at the mappings $(\Phi_j^{\mathcal A})^*$. For $j=1$, we find $$\begin{aligned}
(\lambda^{\tau}\circ\Phi_1^{\mathcal A})(A,(1-g)\otimes_{\mathbb C\mathbb Z_2} a)
&= \lambda\tau \big(\operatorname{tr}(x^tA)-\operatorname{tr}(A)\big) + (1-t) \cdot \lambda \tau(a)
\\ &
=\lambda\tau \operatorname{tr}\big((T-I)^tA\big) +\lambda (1-t) \tau(a)
\\ &
= \lambda\bigl((T-I) \oplus \delta_{\{-t\}} \bigr)^{\tau} (A,(1-g)\otimes_{\mathbb C\mathbb Z_2} a),
\end{aligned}$$ so $$(\Phi_1^{\mathcal A})^*(\lambda)= \lambda\bigl((T-I)\oplus\delta_{\{-t\}}\cdot 1\bigr)\in M_n(\mathbb C)\oplus \delta_{\{-t\}} \mathbb C.$$ For $j>1$, we note that $(\Phi_2^{\mathcal A})^{*}=(\Phi_2^{\mathcal B})^{*} + \delta_{\{-t\}}\cdot 0$ and $(\Phi_3^{\mathcal A})^{*}=(\Phi_3^{\mathcal B})^{*}$ for the maps $(\Phi_j^{\mathcal B})^{*}$ defined in the second proof of Theorem 14. So we find that $$\begin{aligned}
\dim\ker (\Phi_1^{\mathcal A})^*&=pq,& \dim\operatorname{Im}(\Phi_1^{\mathcal A})^*&=1-pq\\
\dim \ker (\Phi_2^{\mathcal A})^{*}&=\dim \ker (\Phi_2^{\mathcal B})^{*}+1-q=d+1-q
&
\dim \operatorname{Im}(\Phi_2^{\mathcal A})^{*}&=\dim \operatorname{Im}(\Phi_2^{\mathcal B})^{*}=n^2-d\\
\dim \ker (\Phi_3^{\mathcal A})^{*}&=\dim \ker (\Phi_3^{\mathcal B})^{*} = n^2-s
&
\dim \operatorname{Im}(\Phi_3^{\mathcal A})^{*}&=\dim \operatorname{Im}(\Phi_3^{\mathcal B})^{*}=s
\end{aligned}$$ and taking differences completes the proof. Note that $\dim \mathrm H^k(\mathcal{A} , {_\varepsilon\mathbb C_\tau})$ for $k\geq2$ can alternatively be deduced from Lemma 15. ◻
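As for $\mathcal B(E)$, let us evaluate the formulas of the theorem in the simplest case $E=I_n$, $\tau(x)=I_n$ and $t=\tau(g)=\pm1$: then $p=1$, $d=\tfrac{n(n-1)}{2}$ and $s=0$, and hence $$\dim \mathrm H^0(\mathcal A, {_\varepsilon\mathbb C_\tau})=q,\qquad \dim \mathrm H^1(\mathcal A, {_\varepsilon\mathbb C_\tau})=\dim \mathrm H^2(\mathcal A, {_\varepsilon\mathbb C_\tau})=\frac{n(n-1)}{2},\qquad \dim \mathrm H^3(\mathcal A, {_\varepsilon\mathbb C_\tau})=1;$$ again, this is merely a specialization of the statement above.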
# Matrix Hopf algebras and glued products {#sec_glued}
The main aim of this section is to identify $\mathcal H(F)$ with a certain subalgebra of $\mathcal A(E)$, which in the case $E=F=I$ will specialize to $$\operatorname{Pol}(U_n^+)\cong \operatorname{Pol}(O_n^+\mathbin{\tilde{\ast}}\mathbb Z_2) \subset \operatorname{Pol}(O_n^+\ast\mathbb Z_2)\cong\operatorname{Pol}(B_{n+1}^{\#+})$$ (with $\mathbin{\tilde{\ast}}$ the glued free product of compact matrix quantum groups). To this end, we extend the notion of glued free products, defined by Tarrago and Weber [@TerragoWeber2017] for compact matrix quantum groups, to Hopf algebras. As for quantum groups, the glued free product refers to an additional matrix structure. The following definition of matrix Hopf algebra due to Škoda gives the right setting. Škoda's definition makes use of the concept of Hopf envelope, which we do not explain here, but we will show below that matrix Hopf algebras can equivalently be characterized without referring to Hopf envelopes.
**Definition 25** (cf. [@Skoda2003 Section 13]). A *matrix bialgebra* is a pair $(B,u)$ where $B$ is a bialgebra and $u=(u_{ij})_{i,j=1}^n\in M_n(B)$ is a finite-dimensional corepresentation such that $B$ is generated by the $u_{ij}$ as a unital algebra. A *matrix Hopf algebra* is a pair $(A,u)$ where $A$ is a Hopf algebra and $u$ is a finite-dimensional corepresentation of $A$ such that the canonical map $\operatorname{Hopf}(B)\to A$ is onto, where $B$ is the unital subalgebra of $A$ generated by the $u_{ij}$, and $\operatorname{Hopf}$ denotes the functor which maps a bialgebra to its Hopf envelope; note that $(B,u)$ is automatically a matrix bialgebra.
If $(A,u)$ is a matrix Hopf algebra, we also say that $A$ is a matrix Hopf algebra with *fundamental corepresentation*[^4] $u$.
**Proposition 26**. *Let $A$ be a Hopf algebra with antipode $S$, and $u\in M_n(A)$ a finite-dimensional corepresentation. Then the following are equivalent.*
1. *$(A,u)$ is a matrix Hopf algebra,*
2. *the $u_{ij}$, $i,j\in\{1,\ldots,n\}$, generate $A$ as a Hopf algebra, i.e. a Hopf subalgebra $A'\subseteq A$ which contains all $u_{ij}$ must coincide with $A$,*
3. *the $S^k(u_{ij})$, $k\in\mathbb N_0, i,j\in\{1,\ldots,n\}$, generate $A$ as an algebra.*
*Proof.* Denote by $B$ the subalgebra of $A$ generated by the $u_{ij}$ and by $\operatorname{Hopf}(B\hookrightarrow A)\colon \operatorname{Hopf}(B)\to A$ the canonical map. Suppose that $\operatorname{Hopf}(B\hookrightarrow A)$ is onto and $B\subset A'\subset A$ for a Hopf subalgebra $A'$. With $\iota_{A'}^A\colon A'\hookrightarrow A$ the embedding map, $\operatorname{Hopf}(B\hookrightarrow A)=\iota_{A'}^A\circ\operatorname{Hopf}(B\hookrightarrow A')$ by the universal property of the Hopf envelope. It follows that $\iota_{A'}^A\colon A'\hookrightarrow A$ is onto, i.e. $A=A'$, proving that (1) implies (2). Conversely, if $\operatorname{Hopf}(B\hookrightarrow A)$ is not onto, then its image is a Hopf subalgebra $A'\subset A$, $A'\neq A$, which contains all $u_{ij}$, i.e. $A$ is not generated by the $u_{ij}$ as a Hopf algebra.
Equivalence with the third statement follows from the simple observation that the algebra generated by all $S^k(u_{ij})$ is a Hopf algebra, hence it is the Hopf algebra generated by the $u_{ij}$. ◻
**Example 27**.
- If $G$ is a finitely generated group with generators $g_1,\ldots g_n$, then $(\mathbb CG,\operatorname{diag}(g_1,\ldots,g_n))$ is a matrix Hopf algebra.
- If $(\mathcal C(\mathbb G),u)$ is a compact matrix quantum group, then $(\operatorname{Pol}(\mathbb G),u)$ is a matrix Hopf algebra. This follows easily from the fact that $S(u_{ij})=u_{ji}^*$.
- $(\mathcal B(E),x)$ is a matrix Hopf algebra. Note that the linear span of the $x_{ij}$ is invariant under $S$, $S(x)=E^{-1}x^tE$, and $\mathcal B(E)$ is (by definition) generated by the $x_{ij}$ as an algebra. In particular, $(\mathcal B(E),x)$ is also a matrix bialgebra.
- $(\mathcal H(F),u)$ is a matrix Hopf algebra. Indeed, $S(u_{ij})=v_{ji}$, so $\mathcal H(F)$ is generated as an algebra by the collection of $u_{ij}$ and $S(u_{ij})$.
Before we come to the definition of glued free product of matrix Hopf algebras, we show that the free product of matrix Hopf algebras is again a matrix Hopf algebra.
**Proposition 28**. *Let $A$ and $B$ be matrix Hopf algebras with fundamental corepresentations $u_A=(u^A_{ij})_{i,j=1}^n$ and $u_B=(u^B_{kl})_{k,l=1}^m$, respectively. Then $A*B$ is a matrix Hopf algebra with fundamental corepresentation $u_A\oplus u_B\in M_{n+m}(A*B)$. In particular, $\mathcal A(E)=\mathcal B(E)\ast\mathbb C\mathbb Z_2$ is a matrix Hopf algebra.*
*Proof.* We have to show that $\{u^A_{ij},u^B_{kl}:i,j=1,\ldots n, k,l=1,\ldots m\}\subset H\subset A*B$ for a Hopf subalgebra $H$ implies $H=A*B$. This is obvious: If $H$ contains all $u^A_{ij}$, then it contains $A$, if $H$ contains all $u^B_{kl}$, then it contains $B$. This means that under the given assumption, $H$ contains both $A$ and $B$. Since $A*B$ is generated by $A$ and $B$ as a unital algebra, we conclude $H=A*B$. ◻
**Definition 29**. Let $A$ and $B$ be matrix Hopf algebras with fundamental corepresentations $u_A=(u^A_{ij})_{i,j=1}^n$ and $u_B=(u^B_{kl})_{k,l=1}^m$, respectively. We define the *glued free product* $A\mathbin{\tilde{\ast}}B$ as the Hopf subalgebra of $A*B$ generated by the products $u^A_{ij}u^B_{kl}$, which becomes a matrix Hopf algebra with fundamental corepresentation $(u^A_{ij}u^B_{kl})\in M_{nm}(A\mathbin{\tilde{\ast}}B)$. Similarly, the *glued tensor product* $A\mathbin{\tilde{\otimes}}B$ of matrix Hopf algebras is the Hopf subalgebra of $A\otimes B$ generated by the elements of the form $u^A_{ij}\otimes u^B_{kl}$, which becomes a matrix Hopf algebra with fundamental corepresentation $(u^A_{ij}\otimes u^B_{kl})\in M_{nm}(A\mathbin{\tilde{\otimes}}B)$.
If $(\mathcal C(\mathbb G),u)$ and $(\mathcal C(\mathbb H),v)$ are compact matrix quantum groups, then $$\operatorname{Pol}(\mathbb G)*\operatorname{Pol}(\mathbb H)=\operatorname{Pol}(\mathbb G*\mathbb H), \quad \mbox{ and } \quad \operatorname{Pol}(\mathbb G)\mathbin{\tilde{\ast}}\operatorname{Pol}(\mathbb H) = \operatorname{Pol}(\mathbb G\mathbin{\tilde{\ast}}\mathbb H)$$ agree with the usual notions due to Wang [@Wang95] and Tarrago and Weber [@TerragoWeber2017].
In the following, we adapt part of Gromada's theory [@Gromada2022] of gluing compact quantum groups to the case of matrix Hopf algebras, focusing only on those statements that we actually need for the analysis of the cosovereign Hopf algebra.
**Definition 30**. A matrix Hopf algebra $(A,u)$ is *$k$-reflective* for $k\in\mathbb N$ if the assignment $$u_{ij}\longmapsto\delta_{ij}g_k,$$ where $g_k$ is the generator of $\mathbb C\mathbb Z_k$, extends to a Hopf algebra map $\sigma\colon A\to \mathbb C\mathbb Z_k$.
The *degree of reflection* of $(A,u)$ is the largest $k$ such that $(A,u)$ is $k$-reflective and $0$ if $(A,u)$ is $k$-reflective for infinitely many $k\in\mathbb N$.
Note that if $(A,u)$ is $k$-reflective and $\ell$ divides $k$, then $(A,u)$ is also $\ell$-reflective. Furthermore, the quotient of $(A,u)$ by the relations $u_{ij}=\delta_{ij}u_{11}$ is easily seen to be the group algebra of a cyclic group. Therefore, if $(A,u)$ has degree of reflection $0$, then this quotient must be isomorphic to $\mathbb C\mathbb Z$ and $(A,u)$ is $k$-reflective for all $k\in\mathbb N$.
Recall that $g$ denotes the generator of $\mathbb C\mathbb Z_2$.
**Lemma 31**. *Let $(H,u)$ be a $2$-reflective matrix Hopf algebra. Then $H\cong H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2$ as Hopf algebras, where we view $(\mathbb C\mathbb Z_2,g)$ as a matrix Hopf algebra.*
*Proof.* The assumption of the lemma implies that the assignment $$H\longrightarrow H\otimes\mathbb C\mathbb Z_2: u_{ij}\longmapsto u_{ij}\otimes g$$ gives rise to an algebra homomorphism $\phi=(\mathrm{id}\otimes \sigma)\circ\Delta_H\colon H\to H\otimes\mathbb C\mathbb Z_2$. Note that the antipode on $\mathbb C\mathbb Z_2$ is the identity map and, therefore, $\sigma\circ S=\sigma$. On the algebra generators $S^nu_{ij}$, $\phi$ evaluates as $$\phi(S^nu_{ij})=(S^n\otimes\sigma)\Delta^{(op_n)}(u_{ij})=S^n u_{ij}\otimes g;$$ here $\Delta^{(op_n)}$ stands for the comultiplication $\Delta$ if $n$ is even and the opposite comultiplication $\Delta^{(op)}$ if $n$ is odd. It follows that the image of $\phi$ is the glued tensor product $H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2$, the algebra generated by all $S^n u_{ij}\otimes g$ inside $H\otimes \mathbb C\mathbb Z_2$.
Next, consider the homomorphism $$\psi:={\rm id}\otimes\varepsilon_{\mathbb C\mathbb Z_2}:H\otimes\mathbb C\mathbb Z_2\longrightarrow H.$$ This is easily seen to be a Hopf algebra map. Using the fact that $\varepsilon_{\mathbb C\mathbb Z_2}(g)=1$, we obtain $$\begin{aligned}
&(\phi\circ\psi)(S^nu_{ij}\otimes g)=\phi(S^nu_{ij})=S^nu_{ij}\otimes g,\\
&(\psi\circ\phi)(S^nu_{ij})=({\rm id}\otimes\varepsilon)(S^nu_{ij}\otimes g)=S^nu_{ij}.
\end{aligned}$$ Since the $S^nu_{ij}\otimes g$ generate $H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2$ as an algebra, we can conclude that $\psi|_{H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2}$ is a Hopf isomorphism with inverse $\phi$. ◻
**Lemma 32**. *Let $(H,u)$ be a matrix Hopf algebra. Then there is an isomorphism of Hopf algebras $$H\mathbin{\tilde{\ast}}\mathbb C\mathbb Z\cong (H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2.$$*
*Proof.* We view $(\mathbb C\mathbb Z\cong\mathbb C[z,z^{-1}],z)$, $(\mathbb C\mathbb Z_2,s)$, and $(\mathbb C\mathbb Z_2,r)$ as matrix Hopf algebras; here $s$ and $r$ denote the canonical generator of $\mathbb C\mathbb Z_2$ in the first and second copy in $(H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$, respectively. All antipodes are denoted by $S$ without distinction. First, we define a homomorphism $\phi\colon H\ast\mathbb C\mathbb Z\to (H\otimes\mathbb C\mathbb Z_2)\ast\mathbb C\mathbb Z_2$ via the assignments $$a\longmapsto a\otimes 1,\qquad z\longmapsto (1\otimes s)r,\qquad z^{-1}\longmapsto r(1\otimes s).$$ One easily checks that $\phi$ is a Hopf algebra map. We find $$\begin{aligned}
\label{phi-on-generators}
\phi(S^k(u_{ij}z))=S^k\phi(u_{ij}z)=S^k((u_{ij}\otimes s)r),
\end{aligned}$$ i.e. $\phi$ maps the generators of $H\mathbin{\tilde{\ast}}\mathbb C\mathbb Z$ to the generators of $(H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$. In particular, $\phi$ restricts to a surjective Hopf algebra map $H\mathbin{\tilde{\ast}}\mathbb C\mathbb Z\to (H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$.
To prove injectivity of $\phi$, we will construct its inverse. We define an algebra homomorphism $\psi\colon (H\otimes\mathbb C\mathbb Z_2)\ast\mathbb C\mathbb Z_2\to M_2(H\ast\mathbb C\mathbb Z)$: $$a\otimes 1\longmapsto \begin{bmatrix} a & 0\\0 & a \end{bmatrix},\qquad 1\otimes s\longmapsto\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix},\qquad
r\longmapsto \begin{bmatrix} 0 & z^{-1}\\z & 0\end{bmatrix}.$$ On the generators $S^k((u_{ij}\otimes s)r)$ of $(H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$, we find for even $k$ $$\psi(S^k((u_{ij}\otimes s)r))=
\psi((S^k(u_{ij}) \otimes s)r) =
\begin{pmatrix}
S^k(u_{ij})z&0\\0&S^k(u_{ij})z^{-1}
\end{pmatrix}
=\begin{pmatrix}
S^k(u_{ij}z)&0\\0&S^k(u_{ij}z^{-1})
\end{pmatrix}$$ and for odd $k$ $$\psi(S^k((u_{ij}\otimes s)r))=\psi(r(S^k(u_{ij}) \otimes s)) =
\begin{pmatrix}
z^{-1}S^k(u_{ij})&0\\0&zS^k(u_{ij})
\end{pmatrix}=
\begin{pmatrix}
S^k(u_{ij}z)&0\\0&S^k(u_{ij}z^{-1})
\end{pmatrix};$$ in summary, we have $$\begin{aligned}
\label{psi-on-generators}
\psi(S^k((u_{ij}\otimes s)r))
=\begin{pmatrix}
S^k(u_{ij}z)&0\\0&S^k(u_{ij}z^{-1})
\end{pmatrix}
\end{aligned}$$ for all $k\in\mathbb N$. In particular, we note that $\psi((H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2)$ is contained in the diagonal matrices. Let $\operatorname{pr}_1\colon M_2(H\ast\mathbb C\mathbb Z)\to H\ast\mathbb C\mathbb Z$ be the projection onto the upper left corner. Then the restriction $\operatorname{pr}_1\circ\psi|_{(H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2}$ is an algebra homomorphism due to the observed containment of $\psi((H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2)$ in the diagonals. Using [\[phi-on-generators\]](#phi-on-generators){reference-type="eqref" reference="phi-on-generators"} and [\[psi-on-generators\]](#psi-on-generators){reference-type="eqref" reference="psi-on-generators"}, it is apparent that $$\begin{aligned}
((\operatorname{pr}_1\circ\psi)\circ\phi)(S^k(u_{ij}z))=\operatorname{pr}_1(\psi(S^k((u_{ij}\otimes s)r)))
=S^k(u_{ij}z)
\end{aligned}$$ and we conclude that $\bigl((\operatorname{pr}_1\circ\psi)\circ\phi\bigr)|_{H\mathbin{\tilde{\ast}}\mathbb C\mathbb Z}=\mathrm{id}_{H\mathbin{\tilde{\ast}}\mathbb C\mathbb Z}$ because the maps are algebra homomorphisms. This shows that $\phi|_{H\mathbin{\tilde{\ast}}\mathbb C\mathbb Z}$ is injective and, hence, a Hopf isomorphism onto $(H\mathbin{\tilde{\otimes}}\mathbb C\mathbb Z_2)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$ as claimed. ◻
**Observation 33**. Let $A$, $B$ be matrix Hopf algebras with fundamental corepresentations $u^A\in M_n(A), u^B\in M_m(B)$. If the antipodes of $A$ and $B$ restrict to linear automorphisms of $\operatorname{Lin}(u^A):=\operatorname{Lin}\{u^A_{ij}\colon i,j=1,\ldots,n\}$ and $\operatorname{Lin}(u^B):=\operatorname{Lin}\{u^B_{ij}\colon i,j=1,\ldots,m\}$, respectively, then $A\mathbin{\tilde{\ast}}B$ is generated as an algebra by the collection of all $u^A_{ij}u^B_{kl}$ and $u^B_{kl}u^A_{ij}$, $i,j\in\{1,\ldots, n\}, k,l\in\{1,\ldots, m\}$. Indeed, it easily follows from the assumption that $$\begin{aligned}
S(u^A_{xy}u^B_{zw})=S(u^B_{zw})S(u^A_{xy})&\in \operatorname{Lin}\{u^B_{kl}u^A_{ij}:i,j\in\{1,\ldots, n\}, k,l\in\{1,\ldots, m\}\}\\ u^B_{zw}u^A_{xy}= S(S^{-1}(u^A_{xy})S^{-1}(u^B_{zw}))&\in\operatorname{Lin} (S(u^A_{ij}u^B_{kl}):i,j\in\{1,\ldots, n\}, k,l\in\{1,\ldots, m\})
\end{aligned}$$ and, therefore, the generated algebras agree.
In [@Banica1997 Theoreme 1(iv)] (see also [@TerragoWeber2017 Remark 6.18]) Banica observed that $U_n^+\cong O_n^+\mathbin{\tilde{\ast}}\mathbb Z$. In fact, Gromada [@Gromada2022 Theorem C/4.28] showed that one can replace $\mathbb Z$ by $\mathbb Z_2$, using that the degree of reflection of $O_n^+$ is 2. We will now generalize those observations to the matrix Hopf algebras $\mathcal B(E)$ and $\mathcal H(F)$.
Let us observe that $\mathcal B(E)$ and $\mathbb C\mathbb Z_2$ satisfy the assumption that the antipode restricts to linear automorphisms of $\operatorname{Lin}\{x_{ij}\colon i,j=1,\ldots,n\}$ and $\operatorname{Lin}\{g\}$, respectively. Indeed, in $\mathcal B(E)$ we have $S(x)=E^{-1}x^tE$ and in $\mathbb C\mathbb Z_2$ we have $S(g)=g$. Thus, $\mathcal B(E)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$ is generated as an algebra by the elements $x_{ij}g$ and $gx_{ij}$, $i,j\in\{1,\ldots,n\}$.
**Lemma 34**. *The Hopf algebra $\mathcal B(E)$ has degree of reflection $2$.*
*Proof.* Denote by $g$ the canonical generator of $\mathbb C\mathbb Z_2$ and $\hat x_{ij}:=\delta_{ij}g$, $\hat x=(\hat x_{ij})_{i,j=1}^n=I_ng\in M_n(\mathbb C\mathbb Z_2)$. Clearly, $$\begin{aligned}
E^{-1}\hat x^tE\hat x=g^2I_n=I_n=\hat xE^{-1}\hat x^tE,
\end{aligned}$$ which proves that there is a Hopf algebra map sending $x_{ij}$ to $\hat x_{ij}=\delta_{ij}g$, so $\mathcal B(E)$ is 2-reflective. On the other hand, let $A$ be an arbitrary unital algebra and $a\in A$. If $x_{ij}\mapsto \delta_{ij} a$ extends to a unital algebra homomorphism, it follows from the relations of $\mathcal B(E)$ that $a^2=1$. This shows that $\mathcal B(E)$ is not $k$-reflective for any $k>2$. ◻
The following observation reveals a glued free product structure on $\mathcal H(F)$.
**Theorem 35**. *For $F=E^tE^{-1}$ a generic asymmetry, we have $\mathcal H(F) \cong \mathcal B(E) \mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$.*
*Proof.* From , we conclude that $\mathcal B(E) \mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2\cong \mathcal B(E) \mathbin{\tilde{\ast}}\mathbb C\mathbb Z$. Denote the canonical generator of $\mathbb C\mathbb Z$ by $z$. Put $\hat u_{ij}:= x_{ij}z, \hat v_{ij}:=S(\hat u_{ji})=z^{-1} S(x_{ji})\in \mathcal B(E) \mathbin{\tilde{\ast}}\mathbb C\mathbb Z$. In matrix notation, $$\hat u=xz, \quad\hat u^t=x^tz,\quad \hat v=z^{-1}E^{t}x E^{-t} ,\quad\hat v^t=z^{-1}E^{-1}x^{t}E.$$ We claim that there is a (necessarily unique) algebra homomorphism $\Phi\colon \mathcal H(F) \to \mathcal B(E) \mathbin{\tilde{\ast}}\mathbb C\mathbb Z$ with $\Phi(u_{ij})=\hat u_{ij}$ and $\Phi(v_{ij})=\hat v_{ij}$ for all $i,j\in\{1,\ldots, n\}$. By definition of $\mathcal H(F)$, it is enough to check the following relations (note that $z$ commutes with scalar matrices): $$\begin{aligned}
\hat u\hat v^t & = xz\,z^{-1}E^{-1}x^tE = I\\
\hat v^t\hat u & = z^{-1}E^{-1}x^tE\,xz = I \\
\hat vF\hat u^tF^{-1} & = z^{-1}E^tx E^{-t}\, E^tE^{-1}\, x^t z\,E E^{-t}
= z^{-1}E^t z E^{-t} = I \\
F\hat u^tF^{-1}\hat v & = E^tE^{-1}\, x^t z\, E E^{-t}\,z^{-1} E^tx E^{-t}
= E^t(E^{-1} x^t Ex) E^{-t} = E^t E^{-t} =I.
\end{aligned}$$ The assumption of genericity implies that both $\mathcal H(F)$ and $\mathcal B(E)$ are cosemisimple (see Preliminaries). In order to show that $\Phi$ is an isomorphism, we invoke the theory of cosemisimple Hopf algebras. By [@Bichon03 Lemma 5.1], if $f\colon A\to B$ is a Hopf algebra morphism between cosemisimple Hopf algebras $A,B$ which induces an isomorphism of their respective representation semirings $R^+(A)\cong R^+(B)$, then $f$ is a Hopf algebra isomorphism. Let $q$ be a solution of the equation $q^2-\operatorname{tr}(F)q+1=0$. We know from [@Bichon07 Theorem 1.1 (i)] that for generic $F$, $\mathcal H(F)$ is monoidally equivalent to $\mathcal H(F_q)$, where $F_q=\begin{pmatrix} q^{-1} & 0 \\ 0 & q \end{pmatrix}$. Next, by [@Bichon03 Theorem 1.1], $\mathcal{B}(E)$ is monoidally equivalent to $\mathcal{B}(E_q)\cong\operatorname{Pol}(SL_q(2))$, where $E_q=\begin{pmatrix} 0 & 1 \\ q^{-1} & 0 \end{pmatrix}$. The isomorphism of [@Bichon07 Lemma 3.3] induces an isomorphism of Hopf algebras $\mathcal{H}(F_q)\cong \operatorname{Pol}(SL_q(2))\mathbin{\tilde{\ast}}\mathbb{C}\mathbb{Z}$. We infer that $R^+(\mathcal{H}(F))\cong R^+(\mathcal{B}(E)\mathbin{\tilde{\ast}}\mathbb{C}\mathbb{Z})$ as semirings. From [@Bichon07 Theorem 1.1 (iii)], we know that $R^+(\mathcal H(F))$ is generated by $u$ and $v$. From the construction of the monoidal equivalence of $\mathcal{B}(E)$ and $\mathcal{B}(E_q)$ (see e.g. [@Schauenburg96]) and the representation theory of $SL_q(2)$ (see e.g. [@Bichon03 p. 4845]), we obtain that $R^+(\mathcal B(E))$ is generated by $x$. Therefore, $\Phi$ induces the aforementioned isomorphism of semirings. ◻
**Remark 36**. In [@Bichon23pre Prop. 4.3], Bichon shows that this isomorphism holds without the genericity assumption.
In the sequel we will identify $\mathcal H(F)$ with $\mathcal B(E)\mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2$, writing (in the matrix notation) just $u=xg$ and $v=gE^tx E^{-t}$. Then it follows that $u^t=x^t g$, $v^t=gE^{-1}x^tE$ and $$\label{eq_gx}
gx = E^{-t}vE^t \quad \mbox{and} \quad gx^t = Ev^tE^{-1}.$$
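Both identities follow directly from the definitions; we record the short computation since it will be used repeatedly below. Since $g$ commutes with scalar matrices and $g^2=1$, $$E^{-t}vE^t=E^{-t}\bigl(gE^txE^{-t}\bigr)E^t=g\,E^{-t}E^txE^{-t}E^t=gx \qquad\mbox{and}\qquad Ev^tE^{-1}=E\bigl(gE^{-1}x^tE\bigr)E^{-1}=g\,EE^{-1}x^tEE^{-1}=gx^t.$$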
# Free resolution for $\mathcal H(F)$ {#sec:resolution-H(F)}
**Proposition 37**. *Consider $\mathcal A(E)$ as a right $\mathcal H(F)$-module, with the canonical module structure induced by the embedding $\mathcal H(F)\subset \mathcal A(E)$, i.e. $a.h:=ah\in\mathcal A(E)$ for all $a\in \mathcal A(E), h\in\mathcal H(F)$. Then $\mathcal H(F),g\mathcal H(F)\subset\mathcal A(E)$ are submodules and $$\mathcal A(E)= \mathcal H(F) \oplus g \mathcal H(F).$$*
*Proof.* Any element of $\mathcal A(E)=\mathcal B(E) \ast \mathbb C\mathbb Z_2$ is a linear combination of *monomials*, i.e. products of the generators $x_{jk}$ of $\mathcal B(E)$ and $g\in \mathbb C\mathbb Z_2$.
Take any such monomial $w$, and assume that its length (the number of generators of both kinds) is $k$. For $k=0$ we have $w=1=1\oplus g\cdot 0$. If $k=1$ then either $w=x_{ij}=0\oplus g(gx_{ij})$ or $w=g=0\oplus g1$. Now assume that any element $w$ of length $\leq k$ can be written as $w=a\oplus gb$ with $a,b\in \mathcal H(F)$. Consider an element $w'$ of length $k+1$. We have either $w'=gw=g(a\oplus gb) = b\oplus ga\in \mathcal H(F) \oplus g\mathcal H(F)$ or $$w'=x_{ij}w=g(gx_{ij})(a\oplus gb) =
g^2(x_{ij}g)b\oplus g(gx_{ij})a= \widetilde{b} \oplus g\widetilde{a},$$ with $\widetilde{b}=(x_{ij}g)b, \widetilde{a}=(gx_{ij})a\in \mathcal H(F)$.
We still need to show that $\mathcal H(F) \cap g \mathcal H(F)=\{0\}$. This is due to the fact that $\mathcal A(E)$ is a $\mathbb Z_2$-graded algebra, the free product of $\mathbb Z_2$-graded algebras $\mathcal B(E)$ and $\mathbb C\mathbb Z_2$. Elements of $\mathcal H(F)$ are combinations of monomials of even length, whereas elements of $g\mathcal H(F)$ are made up from monomials of odd length. ◻
**Theorem 38**. *Let $F=E^tE^{-1}\in GL_n(\mathbb C)$ be a generic asymmetry and $\mathcal H=\mathcal H(F)$. Then the following sequence $$\label{eq:resHF}
0\to \begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix}
\xrightarrow{\Phi^{\mathcal H}_3}
\begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H) \end{pmatrix}
\xrightarrow{\Phi^{\mathcal H}_2}
\begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H)
\\ \mathcal H \end{pmatrix} \xrightarrow{\Phi^{\mathcal H}_1}
\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix} \xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0$$ with the mappings $$\begin{aligned}
\Phi^{\mathcal H}_3 \begin{pmatrix}a \\ b \end{pmatrix}
& = \begin{pmatrix} -Fa+(Eu E^{-t})b \\ F^{-1}va-Fb
\end{pmatrix},\\
\Phi^{\mathcal H}_2 \begin{pmatrix} A \\ B \end{pmatrix}
& = \begin{pmatrix} A+ (E^{-1}u^tBE)^t \\ B+(v^tE^{-1}AE)^t \\0 \end{pmatrix},\\
\Phi^{\mathcal H}_1 \begin{pmatrix} A \\ B \\c \end{pmatrix}
& = \begin{pmatrix} \operatorname{tr} (-A+u^tB) +c \\ \operatorname{tr} (Ev^tE^{-1}A-B)-c \end{pmatrix} \end{aligned}$$ for $a,b,c\in \mathcal H$, $A,B\in M_n(\mathcal H)$, yields a projective resolution of the counit of $\mathcal H$.*
It is worth noting that the given resolution is in fact a free resolution.
*Proof.* We know from Section [4](#sec_glued){reference-type="ref" reference="sec_glued"} that, for $F=E^tE^{-1}$, $$\label{eq_glued-free-product}
\mathcal H(F)=\mathcal B(E) \mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2
\subset \mathcal B(E) * \mathbb C\mathbb Z_2 = \mathcal A(E)$$ is a Hopf subalgebra.
Our plan is to interpret the projective modules in the resolution [\[eq_resolution_of_A\]](#eq_resolution_of_A){reference-type="eqref" reference="eq_resolution_of_A"} as $\mathcal H(F)$-modules by restricting the action, to identify bases which show that they are actually free $\mathcal H(F)$-modules, and finally to express the maps $\Phi^{\mathcal A}_i$ in terms of the chosen bases.
To make the notation lighter, we write $\mathcal H=\mathcal H(F)$ and $\mathcal A=\mathcal A(E)$ with $F=E^tE^{-1}$. Then Proposition [Proposition 37](#prop:A=H+gH_general){reference-type="ref" reference="prop:A=H+gH_general"} states that $\mathcal A=\mathcal H\oplus g\mathcal H$ is a free $\mathcal H$-module with basis $(1,g)$. Consequently, $M_n(\mathcal A) \cong M_n(\mathcal H) \oplus g M_n(\mathcal H)$ is a free $\mathcal H$-module with basis $(e_{ij},ge_{ij}:i,j=1,\ldots, n)$, where we naturally set $g(a_{ij})_{i,j=1}^n := (ga_{ij})_{i,j=1}^n$. Moreover, since for $a,b\in \mathcal H$ we have $$(1-g)\otimes (a+gb)= (1-g)\otimes a + (1-g)g\otimes b = (1-g)\otimes (a-b)= 1\otimes (1-g) (a-b),$$ the mapping $(1-g)\otimes h\mapsto (1-g)h$ constitutes an isomorphism $\mathbb{C}(1-g)\otimes_{\mathbb{C}\mathbb{Z}_2}\mathcal A\cong (1-g) \mathcal H$ onto a free $\mathcal H$-module with basis $(1-g)$ (we use this isomorphism to identify those expressions in the following). This allows us to write down the resolution [\[eq_resolution_of_A\]](#eq_resolution_of_A){reference-type="eqref" reference="eq_resolution_of_A"} in terms of free $\mathcal H$-modules: $$0\to \mathcal H\oplus g\mathcal H \xrightarrow{\Phi^{\mathcal A}_3}
M_n(\mathcal H) \oplus gM_n(\mathcal H)
\xrightarrow{\Phi^{\mathcal A}_2}
M_n(\mathcal H) \oplus gM_n(\mathcal H) \oplus (1-g) \mathcal H \xrightarrow{\Phi^{\mathcal A}_1}
\mathcal H \oplus g\mathcal H
\xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0.$$
Let us now see how the action of the respective mappings can be expressed in the chosen bases. It will be convenient to use the matrix notation with $I=(\delta_{jk}1_{\mathcal H})_{j,k=1}^n$, $x:=(x_{jk})_{j,k=1}^n$, $u:=xg=(x_{jk}g)_{j,k=1}^n$, $v:=gE^tx E^{-t}$. Recall that $g$ commutes with scalar matrices and $g^2=1$. For $a,b\in \mathcal H$, $$\begin{aligned}
\Phi^{\mathcal A}_3(a\oplus gb)
&=\bigl(Ex E^{-t}-F\bigr)(a\oplus gb)\\
& =(-Fa+Ex E^{-t}gb)\oplus(Ex E^{-t}a-Fgb)\\
& =(-Fa+Eu E^{-t}b)\oplus g(E E^{-t}gE^tx E^{-t}a-Fb)\\
& = (-Fa+(Eu E^{-t})b) \oplus g(F^{-1}va-Fb).\end{aligned}$$
For $A,B\in M_n(\mathcal H)$ we have, using that $v:=gE^tx E^{-t}$ is equivalent to $gE^{-1}x^t=v^tE^{-1}$, $$\begin{aligned}
% \hat{\Phi}_2
\Phi^{\mathcal A}_2(A\oplus gB)
&=A+gB+(E^{-1}x^t(A+gB)E)^t\\
&=(A+(E^{-1}x^tgBE)^t)\oplus ((E^{-1}x^tAE)^t+gB)\oplus 0\\
&=(A+(E^{-1}u^tBE)^t)\oplus g((gE^{-1}x^tAE)^t+B)\oplus 0\\
&=(A+(E^{-1}u^tBE)^t)\oplus g((v^tE^{-1}AE)^t+B)\oplus 0. \end{aligned}$$
Finally, for $A,B\in M_n(\mathcal H)$ and $c\in \mathcal H$, we have $$\begin{aligned}
\Phi^{\mathcal A}_1 (A\oplus gB \oplus (1-g) c) &= \operatorname{tr}\bigl(x^t(A+gB)\bigr)-\operatorname{tr}(A+gB)+(1-g)c\\
& = (\operatorname{tr}(-A+x^tgB) +c) \oplus ( \operatorname{tr}(x^tA-gB)- gc) \\
&=(\operatorname{tr}(-A+u^tB) +c) \oplus g( \operatorname{tr}(gx^tA-B)- c) \\
&=(\operatorname{tr}(-A+u^tB) +c) \oplus g( \operatorname{tr}(Ev^tE^{-1}A-B)- c)\end{aligned}$$ using this time that $v:=gE^tx E^{-t}$ is equivalent to $gx^t=Ev^tE^{-1}$.
Now, let us observe that the bases described above give rise to $\mathcal H$-module isomorphisms $$\begin{gathered}
\mathcal H \oplus g \mathcal H \ni a\oplus gb \mapsto \begin{pmatrix} a \\ b \end{pmatrix} \in \begin{pmatrix}\mathcal H \\ \mathcal H \end{pmatrix},\\
M_n(\mathcal H)\oplus gM_n(\mathcal H)\ni A\oplus gB \mapsto \begin{pmatrix} A \\ B \end{pmatrix} \in \begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H)\end{pmatrix}\\
M_n(\mathcal H)\oplus gM_n(\mathcal H)\oplus (1-g)\mathcal H\ni A\oplus gB \oplus (1-g)h \mapsto \begin{pmatrix} A \\ B \\ h \end{pmatrix} \in \begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H) \\ \mathcal H\end{pmatrix}\end{gathered}$$ and the maps $\Phi^{\mathcal H}_*$ are simply the maps $\Phi^{\mathcal A}_*$ rewritten in terms of the bases. In particular, the sequence [\[eq:resHF\]](#eq:resHF){reference-type="eqref" reference="eq:resHF"} is exact. ◻
**Lemma 39**. *The maps $f_E,T\colon M_n(\mathcal H)\to M_n(\mathcal H)$ with $$\begin{aligned}
f_E(A)&=E^{-1}AE, & T(A)&=A^t
\end{aligned}$$ are $\mathcal H$-module automorphisms. These lead to the following transformed version of the resolution [\[eq:resHF\]](#eq:resHF){reference-type="eqref" reference="eq:resHF"}, which depends only on $F$: $$\label{eq:resHF_transformed}
0\to \begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix}
\xrightarrow{\Psi^{\mathcal H}_3}
\begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H) \end{pmatrix}
\xrightarrow{\Psi^{\mathcal H}_2}
\begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H)
\\ \mathcal H \end{pmatrix} \xrightarrow{\Psi^{\mathcal H}_1}
\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix} \xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0$$ with $$\begin{aligned}
\Psi_3^{\mathcal H}\begin{pmatrix} a \\ b \end{pmatrix}&=\begin{pmatrix} f_E \\ \mathrm{id} \end{pmatrix}\circ \Phi_3^{\mathcal H}\begin{pmatrix} a \\ b \end{pmatrix} =
\begin{pmatrix}
-F^{-t} a + u F^tb\\ F^{-1} va - Fb
\end{pmatrix}
\\
\Psi_2^{\mathcal H}\begin{pmatrix} A \\ B \end{pmatrix}&=\begin{pmatrix} T f_E \\ T \\ \mathrm{id} \end{pmatrix}\circ \Phi_2^{\mathcal H}\circ \begin{pmatrix} f_E^{-1} \\ \mathrm{id} \end{pmatrix}\begin{pmatrix} A \\ B \end{pmatrix}
=\begin{pmatrix} A^t + Fu^tBF^{-1} \\ v^t A + B^t \\ 0 \end{pmatrix}
\\
\Psi_1^{\mathcal H}\begin{pmatrix} A \\ B \\ c\end{pmatrix}&= \Phi_1^{\mathcal H}\circ \begin{pmatrix} f_E^{-1}T \\ T \\ \mathrm{id} \end{pmatrix}\begin{pmatrix} A \\ B \\ c \end{pmatrix}
=\begin{pmatrix} \operatorname{tr}(-A+uB)+c \\ \operatorname{tr}(vA-B)-c \end{pmatrix}
\end{aligned}$$*
*Proof.* It is clear that $f_E$ and $T$ are $\mathcal H$-module automorphisms and, accordingly, they transform a resolution into a resolution (note that $T^{-1}=T$). We are left with checking that the concrete expressions for the $\Psi_i^{\mathcal H}$ are correct. $$\begin{aligned}
\Psi_3^{\mathcal H}\begin{pmatrix} a \\ b \end{pmatrix}&=\begin{pmatrix} f_E \\ \mathrm{id} \end{pmatrix}\circ \Phi_3^{\mathcal H}\begin{pmatrix} a \\ b \end{pmatrix} =
\begin{pmatrix}
-E^{-1}E^t E^{-1} Ea + E^{-1} E u E^{-t}E b\\ F^{-1} va - Fb
\end{pmatrix}
=
\begin{pmatrix}
-F^{-t} a + u F^tb\\ F^{-1} va - Fb
\end{pmatrix}
\\
\Psi_2^{\mathcal H}\begin{pmatrix} A \\ B \end{pmatrix}&=\begin{pmatrix} T f_E \\ T \\ \mathrm{id} \end{pmatrix}\circ \Phi_2^{\mathcal H}\begin{pmatrix} EAE^{-1} \\ B \end{pmatrix}
=
\begin{pmatrix} T f_E \\ T \\ \mathrm{id} \end{pmatrix}\begin{pmatrix} EAE^{-1}+ (E^{-1}u^tBE)^t \\ (v^tE^{-1}EAE^{-1}E)^t + B \\0 \end{pmatrix}
\\
&=\begin{pmatrix} A^{t}+ (E^{-1}(E^{-1}u^tBE)^tE)^t \\ v^tE^{-1}EAE^{-1}E + B^t \\0 \end{pmatrix}
\\
&=\begin{pmatrix} A^{t}+ E^{t}E^{-1}u^tBEE^{-t} \\ v^tA + B^t \\0 \end{pmatrix}
\\
&=
\begin{pmatrix} A^t + Fu^tBF^{-1} \\ v^t A + B^t \\ 0 \end{pmatrix}
\\
\Psi_1^{\mathcal H}\begin{pmatrix} A \\ B \\ c \end{pmatrix}
&= \Phi_1^{\mathcal H}\begin{pmatrix} EA^tE^{-1} \\ B^t \\ c \end{pmatrix}
= \begin{pmatrix} \operatorname{tr} (-EA^tE^{-1}+u^tB^t) +c \\ \operatorname{tr} (Ev^tE^{-1}EA^tE^{-1}-B^t)-c \end{pmatrix}
=\begin{pmatrix} \operatorname{tr}(-A+uB)+c \\ \operatorname{tr}(vA-B)-c \end{pmatrix}.
\end{aligned}$$ ◻
In fact, we will prove later that the sequence [\[eq:resHF_transformed\]](#eq:resHF_transformed){reference-type="eqref" reference="eq:resHF_transformed"} is exact, hence a free resolution, whenever $F$ is normalizable.
# Hochschild cohomology for $\mathcal H(F)$ {#sec:Hochschild-cohomology-H(F)}
Recall that we restrict to 1-dimensional bimodules $_\varepsilon\mathbb C_\tau$ with trivial left action, whose associated right modules are simply $(_\varepsilon\mathbb C_\tau)''=\mathbb C_\tau$, as detailed in .
**Theorem 40**. *Let $F\in \operatorname{GL}_n(\mathbb C)$ be such that [\[eq:resHF_transformed\]](#eq:resHF_transformed){reference-type="eqref" reference="eq:resHF_transformed"} is exact,[^5] and let $\tau\colon \mathcal H(F) \to \mathbb C$ be a character. Set $S:=\tau(u)$, $T:=\tau(v)=S^{-t}$ and denote by $\mathcal{K}$ the space of matrices commuting with $F^tS$. Let $$d=\operatorname{dim}\, \mathcal{K}, \quad p=\begin{cases}
1 & \mbox{ if } S=T=I_n\\
0 & \mbox{ otherwise; }
\end{cases}, \quad
t=\begin{cases}
1 & \mbox{ if } F^2=\alpha T \mbox{ for some $\alpha\in\mathbb C$ }\\
2 & \mbox{ otherwise. }
\end{cases}$$ Then the Hochschild cohomology for $\mathcal H(F)$ with 1-dimensional coefficients is given as follows: $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= p ,
&\dim \mathrm{H}^1(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= d+p-1 ,
\\
\dim \mathrm H^2(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= d-t ,
&\dim \mathrm H^3(\mathcal H, {_\varepsilon\mathbb C_\tau}) &= 2-t ,\end{aligned}$$ and $\dim \mathrm H^k(\mathcal H, {_\varepsilon\mathbb C_\tau}) = 0$ for all $k \geq 4$.*
*Proof.* From the resolution [\[eq:resHF_transformed\]](#eq:resHF_transformed){reference-type="eqref" reference="eq:resHF_transformed"} for $\mathcal H$, we construct the cochain complex $$\begin{gathered}
\label{eq:complex-for-Hoschschild-H(F)}
0\to \hom_{\mathcal H}\left(\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix},\mathbb C_\tau\right)
\xrightarrow{-\circ \Psi^{\mathcal H}_1}
\hom_{\mathcal H}\left(\begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H)\\ \mathcal H \end{pmatrix},\mathbb C_\tau\right)
\\
\xrightarrow{-\circ \Psi^{\mathcal H}_2}
\hom_{\mathcal H}\left(\begin{pmatrix} M_n(\mathcal H) \\ M_n(\mathcal H) \end{pmatrix},\mathbb C_\tau\right)
\xrightarrow{-\circ\Psi^{\mathcal H}_3}
\hom_{\mathcal H}\left(\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix},\mathbb C_\tau\right)\to0.\end{gathered}$$
Recall that from Lemma [Lemma 2](#lemma:traces-free-modules){reference-type="ref" reference="lemma:traces-free-modules"} we have vector space isomorphisms $$\hom_{\mathcal H}(M_{n_1}(\mathcal H)\oplus\ldots \oplus M_{n_k}(\mathcal H),\mathbb C_\tau)\cong M_{n_1}(\mathbb C)\oplus\ldots \oplus M_{n_k}(\mathbb C)$$ determined by the "pairing" $$(\Lambda_1,\ldots,\Lambda_k)^\tau(A_1,\ldots, A_k)=\sum_{r=1}^k \tau\operatorname{tr}(\Lambda_r^t A_r)$$ for $\Lambda_r\in M_{n_r}(\mathbb C)$ and $A_r\in M_{n_r}(\mathcal H)$. Also, the precompositions $-\circ\Psi^{\mathcal H}_j$ can be described by the corresponding maps $(\Psi^{\mathcal H}_j)^{*}$ which yield the isomorphic complex
$$0\rightarrow \mathbb C\oplus \mathbb C\xrightarrow{(\Psi^{\mathcal H}_1)^{*}}M_n(\mathbb C) \oplus M_n(\mathbb C) \oplus \mathbb C\xrightarrow{(\Psi^{\mathcal H}_2)^{*}}M_n(\mathbb C) \oplus M_n(\mathbb C)\xrightarrow{(\Psi^{\mathcal H}_3)^{*}} \mathbb C\oplus \mathbb C\rightarrow 0,$$ leading to $$\mathrm H^k\left( \mathcal H, {_\varepsilon\mathbb C_\tau}\right) \cong \faktor{\ker((\Psi^{\mathcal H}_{k+1})^{*})}{\mathrm{Im}((\Psi^{\mathcal H}_k)^{*})}.$$
We will now have a closer look at the maps $(\Psi^{\mathcal H}_j)^{*}$ ($j=1,2,3$).
For $j=1$ we have $$\begin{aligned}
(\Psi^{\mathcal H}_1)^{*} \colon \mathbb C\oplus \mathbb C&\to M_n(\mathbb C) \oplus M_n(\mathbb C) \oplus \mathbb C\end{aligned}$$ and $$\begin{aligned}
(\lambda,\mu)^{\tau}\circ\Psi_1^{\mathcal H} \begin{pmatrix} A \\ B \\c \end{pmatrix}
& = \lambda \, \tau\big( \operatorname{tr}(-A+uB) +c\big)
+ \mu \, \tau\big(\operatorname{tr}(vA - B)-c\big)
\\
& = \lambda \, \tau\operatorname{tr}(-A)+\lambda \, \tau\operatorname{tr}(SB) +\lambda \tau(c)
+ \mu \, \tau \operatorname{tr}(TA)-
\mu \, \tau\operatorname{tr}(B)
-\mu\tau(c)
\\
& = \tau\operatorname{tr}\bigl((-\lambda I_n + \mu T^t)^tA \bigr) + \tau\operatorname{tr}\bigl((\lambda S^t-\mu I_n)^tB\bigr)+\tau\bigl((\lambda-\mu)c\bigr)\\
&=\left(-\lambda I_n + \mu T^t, \lambda S^t-\mu I_n, \lambda - \mu\right)^{\tau} \begin{pmatrix} A \\ B \\c \end{pmatrix},\end{aligned}$$ i.e., $$\begin{aligned}
(\Psi^{\mathcal H}_1)^{*} (\lambda, \mu)= \left(-\lambda I_n + \mu T^t, \lambda S^t-\mu I_n, \lambda - \mu\right).\end{aligned}$$ If $(\lambda,\mu)\in \ker (\Psi^{\mathcal H}_1)^{*}$, then necessarily $\lambda=\mu$. Furthermore, $\lambda=\mu\neq0$ implies $T=I_n=S$. So $$\begin{aligned}
\ker (\Psi^{\mathcal H}_1)^{*} &=\begin{cases}
\{(\lambda,\mu):\lambda=\mu\}\cong \mathbb C, & \mbox{if}\, S=T=I_n, \\
\{0\}, & \mbox{otherwise}.
\end{cases},\\\end{aligned}$$ We conclude using the rank-nullity theorem that $$\dim\ker (\Psi^{\mathcal H}_1)^{*} =p
\qquad
\dim\operatorname{Im}(\Psi^{\mathcal H}_1)^{*} =2-p$$ For $(\Psi^{\mathcal H}_2)^{*}\colon M_n(\mathbb C) \oplus M_n(\mathbb C) \oplus \mathbb C\to M_n(\mathbb C) \oplus M_n(\mathbb C)$ we find that
$$\begin{aligned}
(\Lambda,\mathrm{M},\nu)^\tau \circ \Psi^{\mathcal H}_2
\begin{pmatrix} A \\ B \end{pmatrix}
& = (\Lambda,\mathrm{M},\nu)^\tau \begin{pmatrix} A^t + Fu^tBF^{-1} \\ v^t A + B^t \\ 0 \end{pmatrix}\\
& =
\tau \operatorname{tr}(\Lambda^t [A^t+ Fu^tBF^{-1}])
+ \tau \operatorname{tr}(\mathrm{M}^t [v^t A + B^t])\\
& =
\tau \operatorname{tr}(\Lambda^t A^t)+ \tau \operatorname{tr}(\Lambda^tF S^t BF^{-1})
+ \tau \operatorname{tr}(\mathrm{M}^t T^tA)+\tau \operatorname{tr}(\mathrm{M}^tB^t)\\
&= \tau \operatorname{tr}\bigl((\Lambda + \mathrm{M}^t T^t) A\bigr)+ \tau \operatorname{tr}\bigl((F^{-1}\Lambda^tF S^{t}+ \mathrm{M}) B\bigr),\end{aligned}$$ i.e. $$\begin{aligned}
(\Psi^{\mathcal H}_2)^{*} (\Lambda, \mathrm{M}, \nu)= \left(\Lambda^t + T\mathrm{M}, SF^t\Lambda F^{-t}+\mathrm{M}^t \right).\end{aligned}$$ Now, $(\Lambda,\mathrm{M},\nu )\in \ker (\Psi^{\mathcal H}_2)^{*}$ if and only if $$\Lambda^t + T\mathrm{M}=0, SF^t\Lambda F^{-t} + \mathrm{M}^t =0.$$ Inserting $\Lambda$ from the first equation into the second one, we get $$0= -S F^t \mathrm{M}^t T^t F^{-t} + \mathrm{M}^t,$$ which holds if and only if $$\begin{aligned}
\mathrm{M}^t F^t S = S F^t \mathrm{M}^t \iff S^{-1} \mathrm{M}^t F^t S = F^t S S^{-1}\mathrm{M}^t \iff [S^{-1}\mathrm{M}^t ,F^tS]=0.\end{aligned}$$ Let us denote by $\mathcal{K}$ the space of matrices commuting with $F^t S$, and by $d=\operatorname{dim}\, \mathcal{K}$. Then the preceding calculations show that $$\begin{aligned}
\mathcal K \oplus \mathbb C\ni(K,\nu)\mapsto \left(-SKS^{-1},(SK)^t,\nu\right) \in \ker (\Psi^{\mathcal H}_2)^{*} \end{aligned}$$ defines an isomorphism of vector spaces. Hence by the rank-nullity theorem $$\dim \ker (\Psi^{\mathcal H}_2)^{*} = d+1, \qquad \dim \operatorname{Im}(\Psi^{\mathcal H}_2)^{*} = 2n^2+1-(d+1) = 2n^2-d.$$
Finally, for $(\Psi^{\mathcal H}_3)^{*} \colon M_n(\mathbb C) \oplus M_n(\mathbb C) \to \mathbb C\oplus \mathbb C$ we find $$\begin{aligned}
(\Lambda, \mathrm{M})^{\tau} \circ \Psi^{\mathcal H}_3 \begin{pmatrix}a \\ b \end{pmatrix}
& = (\Lambda, \mathrm{M})^{\tau}\begin{pmatrix} -F^{-t} a + u F^tb\\ F^{-1} va - Fb
\end{pmatrix}
\\ & = \tau \operatorname{tr}\big(\Lambda^t(-F^{-t} a + u F^tb)\big)
+ \tau \operatorname{tr}\big( \mathrm{M}^t(F^{-1} va - Fb)\big)
\\ & = \operatorname{tr}\big({-\Lambda^t F^{-t}}+\mathrm{M}^tF^{-1}T\big)\tau(a)
+ \operatorname{tr}\big(\Lambda^t SF^t -\mathrm{M}^tF\big)\tau(b)
\\ & = \operatorname{tr}\big({-F^{-1}\Lambda}+ T^{t}F^{-t} \mathrm{M}) \tau(a)
+ \operatorname{tr}\big(FS^t\Lambda - F^t \mathrm{M}) \tau(b)\end{aligned}$$ so (recall that $S^t = T^{-1}$) $$\begin{aligned}
(\Psi^{\mathcal H}_3)^{*}
(\Lambda, \mathrm{M}) = \left( \operatorname{tr}({-F^{-1}\Lambda}+ T^{t}F^{-t} \mathrm{M}), \operatorname{tr}(FT^{-1}\Lambda - F^t \mathrm{M}) \right).\end{aligned}$$ With a matrix $A=(a_{i,j})\in \mathbb C^{n\times n}$ we associate a row vector and a column vector, $$A^{\rm row}:=(a_{1,1},a_{1,2},\ldots, a_{2,1}, a_{2,2},\ldots, a_{n,n})\in \mathbb C^{1\times n^2},\quad A^{\rm col}:=(A^{\rm row})^t\in\mathbb C^{n^2\times 1},$$ so that $\operatorname{tr}(X^tY)=X^{\rm row} Y^{\rm col}$ is the standard bilinear form. Then the map $(\Psi^{\mathcal H}_3)^{*}$ is represented by the $2\times (2n^2)$-matrix $M:=\begin{pmatrix}
(-F^{-1})^{\rm row} & (T^tF^{-t})^{\rm row} \\ (FT^{-1})^{\rm row} & (-F^t)^{\rm row}
\end{pmatrix}$ and $\dim \operatorname{Im}(\Psi^{\mathcal H}_3)^{*}$ is the rank of $M$, which is either 2 or 1, depending on whether the two rows are linearly independent or not. Of course, the rows of $M$ are linearly independent if and only if $(-F^{-1},T^tF^{-t})$ and $(FT^{-1},-F^t)$ are linearly independent, since this is just a reordering of the entries. If there is a nonzero $\alpha\in\mathbb C$ with $-F^{-1}=\alpha FT^{-1}$, we see that $T=-\alpha F^2$. In this case, the second condition, $T^t F^{-t}=-\alpha F^t$, is automatically fulfilled, proving linear dependence. Using the rank-nullity theorem, we can thus conclude that $$\dim \ker (\Psi^{\mathcal H}_3)^{*} =2n^2-t
\qquad
\dim \operatorname{Im}(\Psi^{\mathcal H}_3)^{*} =t.\qedhere$$ ◻
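The dimension count of the preceding theorem is easy to evaluate for concrete data. The following minimal numerical sketch is not part of the original text; the helper names `commutant_dimension` and `hochschild_dims` are ours, and we assume that $F$ satisfies the exactness hypothesis. It computes $d$, $p$ and $t$ for a given pair $(F,S)$, with $S=\tau(u)$, and returns the four cohomology dimensions.

```python
# A minimal numerical sketch (not from the paper) of the dimension count above.
# Assumptions: F is invertible and satisfies the exactness hypothesis of the theorem;
# S = tau(u) is an invertible complex matrix, and T = S^{-t} is derived from it.
import numpy as np


def commutant_dimension(X, tol=1e-9):
    """dim{A : AX = XA}, i.e. the nullity of the linear map A -> AX - XA."""
    n = X.shape[0]
    # In the column-major vectorization, A -> AX - XA is kron(X^T, I) - kron(I, X).
    L = np.kron(X.T, np.eye(n)) - np.kron(np.eye(n), X)
    return n * n - np.linalg.matrix_rank(L, tol=tol)


def hochschild_dims(F, S, tol=1e-9):
    """Return (dim H^0, dim H^1, dim H^2, dim H^3) following the formulas of the theorem."""
    n = F.shape[0]
    T = np.linalg.inv(S).T                   # T = S^{-t}
    d = commutant_dimension(F.T @ S, tol)    # d = dim of the commutant of F^t S
    p = 1 if np.allclose(S, np.eye(n)) and np.allclose(T, np.eye(n)) else 0
    # t = 1 iff F^2 is a scalar multiple of T, i.e. F^2 and T are linearly dependent.
    rank = np.linalg.matrix_rank(np.vstack([(F @ F).reshape(1, -1), T.reshape(1, -1)]), tol=tol)
    t = 1 if rank == 1 else 2
    return p, d + p - 1, d - t, 2 - t


if __name__ == "__main__":
    n = 4
    print(hochschild_dims(np.eye(n), np.eye(n)))  # F = S = I_n: expect (1, n^2, n^2 - 1, 1)
```

For $F=S=I_n$ this reproduces the values of the first example below; the later examples reuse this hypothetical helper.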
We now consider several special cases.
**Example 41** (Cohomology of $\operatorname{Pol} (U_n^+)$ with trivial coefficients). For the special case $E=F=I$, i.e. $\mathcal H(F)=\operatorname{Pol} (U_n^+)$, and $M={}_\varepsilon\mathbb C_\varepsilon$ (so that $S=T=I$) we have $d=n^2$, $p=1$, $t=1$, so the Hochschild cohomology for $\operatorname{Pol} (U_n^+)$ with trivial coefficients is given as follows: $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= 1 ,
&\dim \mathrm{H}^1(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= n^2 ,
\\
\dim \mathrm H^2(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= n^2-1 ,
&\dim \mathrm H^3(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= 1 ,
\end{aligned}$$ and $\dim \mathrm H^k(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) = 0$ for all $k \geq 4$. Let us note that all of these values except $\dim \mathrm H^3(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) = 1$ were already known, cf. [@DFKS18; @Bichon18].
**Example 42** (Cohomology of $\operatorname{Pol} (U_F^+)$ with $F=\operatorname{diag}(q^{-1},1,q)$ and with trivial coefficients). In [@DFKS23] it was shown that for $F\in M_3(\mathbb C)$ positive, $\mathrm H^2(\operatorname{Pol} (U_F^+), {_\varepsilon\mathbb C_\varepsilon}) \cong sl_F(n) =\{A\in M_n(\mathbb C) \colon AF=FA, \operatorname{Tr}(AF) = \operatorname{Tr}(AF^{-1}) = 0\}$ provided the eigenvalues of $F$ do not form a geometric progression. This agrees with Theorem [Theorem 40](#thm:Hochschild-U+){reference-type="ref" reference="thm:Hochschild-U+"} as the only case in which $F$ is a positive matrix and satisfies $F^2=\pm I$ is for $F=I$. So for $F\neq I$ we have $\dim \mathrm H^2(\operatorname{Pol} (U_F^+), {_\varepsilon\mathbb C_\varepsilon}) = d-2$.
To treat the remaining case we assume without loss of generality that $F=\operatorname{diag}(q^{-1},1,q)$ with $q\in (0,1)$. Then $F=E^tE^{-1}$ for $$E=\begin{pmatrix}
0 & 0 & q \\ 0 & 1 & 0 \\ 1 & 0 & 0
\end{pmatrix}.$$ In this case $d=3$ (the commutant of $F^tS=F$ consists of the diagonal matrices, as the eigenvalues of $F$ are distinct), $p=1$, $t=2$, and so the Hochschild cohomology is $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= 1 ,
&\dim \mathrm{H}^1(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= 3 ,
\\
\dim \mathrm H^2(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= d-t=1 ,
&\dim \mathrm H^3(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) &= 2-t=0 ,
\end{aligned}$$ and $\dim \mathrm H^k(\mathcal H, {_\varepsilon\mathbb C_\varepsilon}) = 0$ for all $k \geq 4$. So in the remaining case the second Hochschild cohomology has dimension 1, as in the neighbouring cases.
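As a quick numerical sanity check of this example, the hypothetical helper `hochschild_dims` from the sketch above yields the same values (assuming, as before, the exactness of the resolution for this $F$):

```python
# Check of the example above with q = 1/2 (any q in (0,1) gives the same dimensions).
import numpy as np

q = 0.5
E = np.array([[0.0, 0.0, q],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
F = E.T @ np.linalg.inv(E)            # F = E^t E^{-1} = diag(q^{-1}, 1, q)
print(np.round(np.diag(F), 3))        # [2.  1.  0.5]
print(hochschild_dims(F, np.eye(3)))  # trivial coefficients, S = I: expect (1, 3, 1, 0)
```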
**Example 43** (Cohomology of $\operatorname{Pol} (U_F^+)$ with the action $\overline{\varepsilon}$). Denote by $\overline{\varepsilon}$ the character on $\operatorname{Pol} (U_F^+)$ defined by $\overline{\varepsilon}(u_{jk})=-\delta_{jk}$. In this case we have $S=T=-I$, $p=0$ and $t\in \{1,2\}$. For a positive $F\in GL_n(\mathbb C)$ with $k$ distinct eigenvalues of multiplicities $d_i$, $i=1,2,\ldots,k$, we have $d=\sum_{i=1}^k d_i^2$, the dimension of the commutant of $F^tS=-F^t$. Then the Hochschild cohomology is: $$\begin{aligned}
\dim \mathrm{H}^0(\mathcal H, {_{\overline{\varepsilon}}\mathbb C_\varepsilon}) &= 0 ,
&\dim \mathrm{H}^1(\mathcal H, {_{\overline{\varepsilon}}\mathbb C_\varepsilon}) &= d-1 ,\\
\dim \mathrm H^2(\mathcal H, {_{\overline{\varepsilon}}\mathbb C_\varepsilon}) &= d-t ,
&\dim \mathrm H^3(\mathcal H, {_{\overline{\varepsilon}}\mathbb C_\varepsilon}) &= 2-t ,
\end{aligned}$$ and $\dim \mathrm H^k(\mathcal H, {_{\overline{\varepsilon}}\mathbb C_\varepsilon}) = 0$ for all $k \geq 4$.
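Again as an illustration only (same hypothetical helper as above; we ignore the normalization of $F$, which does not affect the count), a positive $F$ with eigenvalue multiplicities $2$ and $1$ gives $d=2^2+1^2=5$:

```python
# Example 43 numerically: F = diag(1, 1, 2) and the character mapping u_{jk} to -delta_{jk}.
import numpy as np

F = np.diag([1.0, 1.0, 2.0])    # eigenvalue multiplicities d_1 = 2, d_2 = 1, so d = 5
S = -np.eye(3)                  # S = T = -I
print(hochschild_dims(F, S))    # expect (p, d+p-1, d-t, 2-t) = (0, 4, 3, 0), with t = 2 here
```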
# Yetter-Drinfeld resolutions {#sec:YD-resolutions}
In this section we will again use the shorthand notation $$\mathcal A:=\mathcal A(E)\ (=\operatorname{Pol}(B_{n+1}^{\#+})),\quad \mathcal B:=\mathcal B(E)\ (=\operatorname{Pol}(O_n^+)),\quad \mathcal H:=\mathcal H(F)\ (=\operatorname{Pol}(U_n^+)),$$ (equalities in brackets for $E=I_n$) so that $\mathcal H \cong \mathcal B \mathbin{\tilde{\ast}}\mathbb C\mathbb Z_2
\subset \mathcal B * \mathbb C\mathbb Z_2
\cong \mathcal A$.
## The asymmetry case
It was shown by Bichon [@Bic16] that the resolution [\[resO+E\]](#resO+E){reference-type="eqref" reference="resO+E"} can be interpreted as a resolution by free Yetter-Drinfeld modules. Let us quickly review the involved coactions. First, as discussed above, $\mathbb
C_\varepsilon$ is a Yetter-Drinfeld module with the trivial coaction $\gamma(1)=1\otimes 1$. Also, $\mathcal B$ is a (free over $\mathbb C$) Yetter-Drinfeld module with the coadjoint coaction $\gamma(a)=a_{(2)}\otimes S(a_{(1)})a_{(3)}$. To deal with $M_n(\mathcal B)$, first consider the $\mathcal B$-comodules $V=\operatorname{span}\{e_1,\ldots,e_n\}$ with coaction $\gamma(e_k):=\sum_p
e_p\otimes x_{pk}$ and $V^*:=\operatorname{span}\{e_1^*,\ldots,e_n^*\}$ with coaction $\gamma(e_j^*):= \sum_q e_q^*\otimes S(x_{jq})$. We equip the tensor product of comodules $C,D$ with the diagonal coaction $\gamma(c\otimes d)=c_{(0)}\otimes d_{(0)}\otimes c_{(1)}d_{(1)}$. In the following, this kind of comodule tensor product is simply written by juxtaposition. In particular, we write $V^*V$ for the $\mathcal B$-comodule with coaction $\gamma(e_j^*e_k)=\sum e_q^*e_p\otimes S(x_{jq})x_{pk}$. Note that, as a $\mathcal B$-module, $M_n(\mathcal B)$, the free module over $M_n$, is isomorphic to $V^*V\boxtimes \mathcal B$, the free Yetter-Drinfeld module over $V^*V$; the isomorphism is the obvious one, sending $e_{jk}\otimes b$ to $e_j^*e_k\boxtimes b$. One therefore has the resolution $$\begin{aligned}
\label{YD-resO+}
0\rightarrow \mathcal B\xrightarrow{\Phi^{\mathcal B}_3}V^*V\boxtimes \mathcal B\xrightarrow{\Phi^{\mathcal B}_2}V^*V \boxtimes \mathcal B\xrightarrow{\Phi^{\mathcal B}_1} \mathcal B \xrightarrow{\varepsilon}\mathbb C_\varepsilon\rightarrow 0\end{aligned}$$ with the same maps $\Phi_k$ as in [\[resO+E\]](#resO+E){reference-type="eqref" reference="resO+E"} under the discussed identification of $M_n$ with $V^*V$ (it is easy to check that the $\Phi_k$ are also comodule maps, note that it is enough to check the condition on the generating subcomodules $\mathbb C1$ and $V^*V$, respectively).
The projective resolution [\[eq:resZ2\]](#eq:resZ2){reference-type="eqref" reference="eq:resZ2"} for $\mathbb C\mathbb Z_2$ that we use is also easily identified as a resolution by projective Yetter-Drinfeld modules.
**Proposition 44**. *$\mathbb C(1-g)$ is a relatively projective Yetter-Drinfeld module with coaction determined by $(1-g)\mapsto (1-g)\otimes 1$. Hence, the resolution [\[eq:resZ2\]](#eq:resZ2){reference-type="eqref" reference="eq:resZ2"} is a resolution by projective Yetter-Drinfeld modules.*
*Proof.* We can identify $\mathbb C(1-g)$ with the submodule of the free Yetter-Drinfeld module $\mathbb C\mathbb Z_2=\mathbb C\boxtimes \mathbb C\mathbb Z_2$, complemented by $\mathbb C(1+g)$. Note that the coadjoint coaction yields $\gamma(g)=g\otimes S(g)g=g\otimes 1$, therefore also $\gamma(1-g)=(1-g)\otimes 1$. That the maps $\psi_1$ and $\varepsilon$ in [\[eq:resZ2\]](#eq:resZ2){reference-type="eqref" reference="eq:resZ2"} are Yetter-Drinfeld is obvious. ◻
**Corollary 45**. *Under the identification of $M_n$ with $V^*V$ discussed above, [\[eq_resolution_of_A\]](#eq_resolution_of_A){reference-type="eqref" reference="eq_resolution_of_A"} yields a resolution $$\begin{aligned}
\label{eq:YD-res-B^hash+}
0\to \mathcal A \xrightarrow{\Phi^{\mathcal A}_3}V^*V\boxtimes \mathcal A\xrightarrow{\Phi^{\mathcal A}_2} V^*V\boxtimes \mathcal A\oplus(\mathbb{C}(1-g)\otimes_{\mathbb{C}\mathbb{Z}_2}\mathcal A)\xrightarrow{\Phi^{\mathcal A}_1}\mathcal A\xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0,
\end{aligned}$$ by relatively projective Yetter-Drinfeld modules.*
*Proof.* We apply Theorem [Theorem 19](#thm:YD-res_free-product){reference-type="ref" reference="thm:YD-res_free-product"} to the resolutions [\[YD-resO+\]](#YD-resO+){reference-type="eqref" reference="YD-resO+"} for $\mathcal B$ and [\[eq:resZ2\]](#eq:resZ2){reference-type="eqref" reference="eq:resZ2"} for $\mathbb C\mathbb Z_2$ in its interpretation given in Proposition [Proposition 44](#prop:YD-res-Z2){reference-type="ref" reference="prop:YD-res-Z2"}. Proposition [Proposition 20](#prop:ind-free-YD){reference-type="ref" reference="prop:ind-free-YD"} allows us to identify the occurring free Yetter-Drinfeld $\mathcal A$-modules. ◻
In order to find the resolution for $\mathcal H$, we want to apply the restriction functor to [\[eq:YD-res-B\^hash+\]](#eq:YD-res-B^hash+){reference-type="eqref" reference="eq:YD-res-B^hash+"}.
**Lemma 46**. *$\mathcal H\subset \mathcal A$ is adjoint, i.e. $$a_{(2)}\otimes S(a_{(1)})ha_{(3)}\in \mathcal A\otimes \mathcal H$$ for all $h\in \mathcal H,a\in \mathcal A$.*
*Proof.* Since $\mathcal A$ is generated as an algebra by the $x_{ij}$ and $g$, it is enough to show that $$\sum x_{pq}\otimes S(x_{ip})hx_{qj}\in \mathcal A\otimes \mathcal H,\quad g\otimes ghg\in \mathcal A\otimes \mathcal H$$ for all $h\in \mathcal H$. This is obvious when we recall the fact that $\mathcal A$ is $\mathbb
Z_2$-graded and $\mathcal H$ consists of all even elements of $\mathcal A$ (cf. the proof of Proposition [Proposition 37](#prop:A=H+gH_general){reference-type="ref" reference="prop:A=H+gH_general"}). ◻
By [@Bic16 Proposition 4.7], the following result also proves adjointness for $\mathcal H(F)\subset \mathcal A(E)$ for all asymmetries $F=E^tE^{-1}$.
**Lemma 47**. *Let $p\colon \mathcal B(E)\to \mathbb C\mathbb Z_2$ denote the surjective Hopf-morphism $p(x_{ij})=\delta_{ij}g$ (which is well-defined by ). Then $P=p\ast\mathrm{id}\colon \mathcal A(E)\cong \mathcal B(E)\ast\mathbb C\mathbb Z_2\to \mathbb C\mathbb Z_2$ is a cocentral and surjective Hopf algebra map such that $\mathcal H(F)=\mathcal A(E)^{\operatorname{co}\mathbb C\mathbb Z_2}=\{a\in \mathcal A(E):a_{(1)}\otimes P(a_{(2)})=a\otimes 1\}$.*
*Proof.* For a monomial $M$ in $\mathcal A(E)$ it is easy to see that $$M_{(1)}\otimes P(M_{(2)})=
\begin{cases}
M\otimes 1&\text{if $M$ is even}\\
M\otimes g&\text{if $M$ is odd}
\end{cases}$$ with respect to the natural $\mathbb Z_2$-grading. We already discussed that $\mathcal H(F)$ is the even part of $\mathcal A(E)$, so we are done. ◻
**Lemma 48**. *The Yetter-Drinfeld modules in [\[eq:YD-res-B\^hash+\]](#eq:YD-res-B^hash+){reference-type="eqref" reference="eq:YD-res-B^hash+"} are $\mathcal H$-coinvariant.*
*Proof.* Due to Lemma [Lemma 46](#lem:H_adjoint_in_A){reference-type="ref" reference="lem:H_adjoint_in_A"}, we can apply Lemma [Lemma 17](#lem:coinvariant){reference-type="ref" reference="lem:coinvariant"}. The Yetter-Drinfeld modules $\mathcal A, \mathbb C(1-g)\otimes_{\mathbb C\mathbb Z_2}\mathcal A, V^*V\boxtimes \mathcal A$ are generated as $\mathcal A$-modules by $\mathbb C1, \mathbb C(1-g), (e_j^*e_k:j,k=1,\ldots,n)$, respectively. Therefore, the claimed coinvariance follows from $$\begin{aligned}
\gamma(1)&=1\otimes 1 \in \mathbb C\otimes \mathcal H,\\
\gamma(1-g)&=1\otimes 1 - g\otimes S(g)g=(1-g)\otimes 1\in \mathbb C(1-g)\otimes \mathcal H, \\
\gamma(e_j^*e_k)&=\sum e_q^* e_p\otimes S(x_{jq})x_{pk}=\sum e_q^* e_p\otimes S(gx_{jq})gx_{pk}\in V^*V\otimes \mathcal H.
\end{aligned}$$ For the free Yetter-Drinfeld modules, we could also invoke [@Bic16 Proposition 4.5]. ◻
This means that the images under the restriction functor are the same spaces with action and coaction (co-)restricted to $\mathcal H$. The general categorical result does not say that the restriction functor maps projectives to projectives. For the free Yetter-Drinfeld $\mathcal A$-modules, one could apply [@Bic16 Proposition 4.8] to $\mathcal H\subset \mathcal A$. In any case, we will now see that the situation is even better than a general result could guarantee.
**Lemma 49**. *In $\mathcal{YD}_{\mathcal H}^{\mathcal H}$, we have isomorphisms*
$$\begin{aligned}
\mathcal A&\cong \mathcal H\oplus g\mathcal H=\mathbb C\mathbb Z_2\boxtimes \mathcal H\cong \mathcal H\oplus \mathcal H, & (h+gh')&\mapsto h \oplus gh'\mapsto h\oplus h';\\
\mathbb C(1-g)\otimes_{\mathbb C\mathbb Z_2}\mathcal A&\cong \mathbb C(1-g)\boxtimes \mathcal H\cong \mathcal H, & (1-g)\otimes_{\mathbb C\mathbb Z_2}(h+gh')&\mapsto (1-g)\boxtimes (h-h')\mapsto h-h';\\
V^*V \boxtimes \mathcal A&\cong V^*V \boxtimes \mathcal H \oplus gV^*Vg\boxtimes \mathcal H, & e_j^*e_k\boxtimes (h+gh')&\mapsto e_j^*e_k\boxtimes h + ge_j^*e_kg\boxtimes h'.\end{aligned}$$
*for $h,h'\in \mathcal H$. In particular, all given modules are free Yetter-Drinfeld $\mathcal H$-modules.*
*Proof.* The given prescriptions obviously define module isomorphisms. In the first two cases, it is easy to see that they are also morphisms of Yetter-Drinfeld modules. Denote the third isomorphism by $\Psi$ for the rest of this proof. The right hand side is the free Yetter-Drinfeld module over $V^*V+gV^*Vg$. By the universal property of free Yetter-Drinfeld modules, it is enough to check that the restrictions of $\Psi^{-1}$ to $V^*V$ and to $gV^*Vg$ (identified with $V^*V\boxtimes 1$ and $gV^*Vg\boxtimes 1$, respectively) are comodule maps. For $V^*V$ this is obvious and for $gV^*Vg$ we calculate $$\gamma(ge_j^*e_kg)=\sum ge_q^*e_pg\otimes gS(x_{jq})x_{pk}g \xmapsto{\Psi^{-1}\otimes\mathrm{id}} \sum(e_q^*e_p\boxtimes g)\otimes gS(x_{jq})x_{pk}g=\gamma(e_j^*e_k\boxtimes g).$$ ◻
**Corollary 50**. *Let $F=E^t E^{-1}$ be a generic asymmetry and $\mathcal H=\mathcal H(F)$. Then the sequence $$\begin{aligned}
0\to
\begin{pmatrix}
\mathcal H\\
\mathcal H
\end{pmatrix}
\xrightarrow{\Phi^{\mathcal H}_3}
\begin{pmatrix}
V^*V\boxtimes \mathcal H\\
gV^*Vg\boxtimes \mathcal H
\end{pmatrix}
\xrightarrow{\Phi^{\mathcal H}_2}
\begin{pmatrix}
V^*V\boxtimes \mathcal H\\
gV^*Vg\boxtimes \mathcal H\\
\mathcal H
\end{pmatrix}
\xrightarrow{\Phi^{\mathcal H}_1}
\begin{pmatrix}
\mathcal H\\
\mathcal H
\end{pmatrix}
\xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0,
\end{aligned}$$ is a resolution of $\mathbb C_{\varepsilon_{\mathcal H}}$ by free Yetter-Drinfeld $\mathcal H$-modules, where the maps are those of the resolution [\[eq:resHF\]](#eq:resHF){reference-type="eqref" reference="eq:resHF"} under the natural identifications of $V^*V\boxtimes \mathcal H$ and $gV^*Vg\boxtimes \mathcal H$ with $M_n(\mathcal H)$ as $\mathcal H$-modules.*
The preceding corollary only solves the problem for $F$ a generic asymmetry. In order to find a free Yetter-Drinfeld resolution for more general matrices $F$, we take the sequence [\[eq:resHF_transformed\]](#eq:resHF_transformed){reference-type="eqref" reference="eq:resHF_transformed"} as a starting point.
## Yetter-Drinfeld structures on $M_n(\mathcal H)$
On $M_n(\mathbb C)$, we consider the following four $\mathcal H(F)$-coactions $$\begin{aligned}
\gamma_u(e_{ij})&=\sum_{k,\ell} e_{k\ell}\otimes S(u_{ik})u_{\ell j}
&
\gamma_v(e_{ij})&=\sum_{k,\ell} e_{k\ell}\otimes S(v_{ik})v_{\ell j}
\\
\gamma_{\tilde u}(e_{ij})&=\sum_{k,\ell} e_{k\ell}\otimes S(u_{j\ell})u_{k i}
&
\gamma_{\tilde v}(e_{ij})&=\sum_{k,\ell} e_{k\ell}\otimes S(v_{j \ell})v_{k i};\end{aligned}$$ the corresponding comodules are denoted by $M_n^{u}(\mathbb C),M_n^{v}(\mathbb C),M_n^{\tilde u}(\mathbb C),M_n^{\tilde v}(\mathbb C)$, respectively. To get a better grasp on these formulas, note that for a matrix $A\in M_n(\mathbb C)$, we have: $$\begin{aligned}
\gamma_u(A)&=\sum_{ij} A_{ij} \gamma_u (e_{ij}) = S(u)^t A u^t = vAu^t\\
\gamma_v(A)&=\sum_{ij} A_{ij} \gamma_v (e_{ij}) = S(v)^t A v^t = F^{-t}uF^t A v^t\\
\gamma_{\tilde u}(A) &= \sum_{ij} A_{ij} \gamma_{\tilde u} (e_{ij}) = (S(u)^t A^t u^t)^t = (vA^tu^t)^t\\
\gamma_{\tilde v}(A)&= \sum_{ij} A_{ij}\gamma_{\tilde v}(e_{ij}) = (F^{-t}uF^t A^t v^t)^t.\end{aligned}$$ For $\#\in\{u,v,\tilde u, \tilde v\}$, we denote the free Yetter-Drinfeld module $M_n^{\#}(\mathbb C)\boxtimes \mathcal H(F)$ over $M_n^{\#}(\mathbb C)$ by $M_n^{\#}(\mathcal H(F))$.
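For the reader's convenience, here is the short entrywise check of the first of these identities; the other three are obtained in the same way. Since $S(u)=v^t$, $$\gamma_u(A)=\sum_{i,j,k,\ell}A_{ij}\,e_{k\ell}\otimes S(u_{ik})u_{\ell j}=\sum_{k,\ell}e_{k\ell}\otimes\bigl(S(u)^tAu^t\bigr)_{k\ell}=S(u)^tAu^t=vAu^t,$$ where, as usual, we identify $M_n(\mathbb C)\otimes\mathcal H(F)$ with $M_n(\mathcal H(F))$ in the last two expressions.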
**Theorem 51**. *Let $F\in\operatorname{GL}_n(\mathbb C)$ be arbitrary, $\mathcal H=\mathcal H(F)$. The maps $\Psi_i^{\mathcal H}$ in the sequence $$\label{eq:YD-res_H(F)}
0\to \begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix}
\xrightarrow{\Psi^{\mathcal H}_3}
\begin{pmatrix} M_n^v(\mathcal H) \\ M_n^u(\mathcal H) \end{pmatrix}
\xrightarrow{\Psi^{\mathcal H}_2}
\begin{pmatrix} M_n^{\tilde v}(\mathcal H) \\ M_n^{\tilde u}(\mathcal H) \\ \mathcal H \end{pmatrix}
\xrightarrow{\Psi^{\mathcal H}_1}
\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix} \xrightarrow{\varepsilon}\mathbb{C}_\varepsilon\to0$$ defined as $$\begin{aligned}
\Psi_3^{\mathcal H}\begin{pmatrix} a \\ b \end{pmatrix}&=
\begin{pmatrix}
-F^{-t} a + u F^tb\\ F^{-1} va - Fb
\end{pmatrix}
\\
\Psi_2^{\mathcal H}\begin{pmatrix} A \\ B \end{pmatrix}&=
\begin{pmatrix} A^t + Fu^tBF^{-1} \\ v^t A + B^t \\ 0 \end{pmatrix}
\\
\Psi_1^{\mathcal H}\begin{pmatrix} A \\ B \\ c\end{pmatrix}&=
\begin{pmatrix} \operatorname{tr}(-A+uB)+c \\ \operatorname{tr}(vA-B)-c \end{pmatrix}
\end{aligned}$$ are morphisms of Yetter-Drinfeld modules.*
*Proof.* Actually, the only thing we have to prove is that the $\Psi_i^{\mathcal H}$ are comodule maps. According to the column vector form $P_i=((P_i)_k)$ of the involved Yetter-Drinfeld modules, the maps $\Psi_i^{\mathcal H}\colon P_i\to P_{i-1}$ inherit a matrix structure with components $(\Psi_i^{\mathcal H})_{k,\ell}\colon (P_i)_\ell\to (P_{i-1})_k$. Because we already know that the maps are module maps, it is enough to check $\mathcal H$-colinearity componentwise on generators. Let us start with $\Psi_3^{\mathcal H}$ and note that $\gamma(1)=1\otimes 1$ for the coaction on $\mathcal H=\mathbb C\boxtimes \mathcal H$. In two cases we only have to deal with scalar matrices and the calculations are very short: $$\begin{aligned}
\gamma_v((\Psi_3^{\mathcal H})_{11}(1))=\gamma_v(-F^{-t})= -F^{-t}uF^t F^{-t} v^t=-F^{-t}
= (\Psi_3^{\mathcal H})_{11}\otimes\mathrm{id}(\gamma(1))
\end{aligned}$$ $$\begin{aligned}
\gamma_u((\Psi_3^{\mathcal H})_{22}(1))=\gamma_u(-F)= -vFu^t= -F = (\Psi_3^{\mathcal H})_{22}\otimes\mathrm{id} (\gamma(1))
\end{aligned}$$ For the remaining two components, we have apply the coaction to non-scalar matrices, which makes the calculations a bit more cumbersome. (Recall from that for $X\in\mathcal{YD}_H^H, x\in X, h\in H$, we put $(x\otimes h)\mathbin{\vartriangleleft}h':=xh'_{(2)}\otimes S(h'_{(1)})h h'_{(3)}$, and the coaction $M\to M\otimes H$ is a module map for the $\mathbin{\vartriangleleft}$-action.) $$\begin{aligned}
\MoveEqLeft\gamma_u((\Psi_3^{\mathcal H})_{21}(1))=\gamma_u(F^{-1}v)\\
&=\sum \left(\gamma_u(e_{ij}\boxtimes 1)\right)\mathbin{\vartriangleleft}F^{-1}_{ip}v_{pj}\\
&=\sum e_{k\ell}\boxtimes 1\otimes S(u_{ik})u_{\ell j} \mathbin{\vartriangleleft}F^{-1}_{ip}v_{pj}\\
&=\sum e_{k\ell}\boxtimes v_{st}\otimes S(v_{ps})F^{-1}_{ip} S(u_{ik})\underbrace{u_{\ell j} v_{tj}}_{\delta_{\ell t}}\\
&=\sum e_{k\ell}\boxtimes v_{s\ell}\otimes S(\underbrace{u_{ik} F^{-1}_{ip}v_{ps}}_{(u^tF^{-1}v)_{ks}=F^{-1}_{ks}})\\
&=F^{-1}v\otimes 1 = (\Psi_3^{\mathcal H})_{21}\otimes \mathrm{id} (\gamma(1))
\end{aligned}$$ $$\begin{aligned}
\MoveEqLeft\gamma_v((\Psi_3^{\mathcal H})_{12}(1))=\gamma_v(uF^t)\\
&=\sum \left(\gamma_u(e_{ij}\boxtimes 1)\right)\mathbin{\vartriangleleft}u_{ip} F_{jp}\\
&=\sum e_{k\ell}\boxtimes 1 \otimes S(v_{ik}) v_{\ell j} \mathbin{\vartriangleleft}u_{ip} F_{jp}\\
&=\sum e_{k\ell}\boxtimes u_{st}\otimes S(u_{is}) S(v_{ik})\underbrace{v_{\ell j} F_{jp} u_{tp}}_{F_{\ell t}}\\
&=\sum e_{k\ell}\boxtimes u_{st} F_{\ell t}\otimes S(\underbrace{v_{ik}u_{is}}_{\delta_{ks}})\\
&=uF^t\otimes 1 = (\Psi_3^{\mathcal H})_{12}\otimes \mathrm{id} (\gamma(1))
\end{aligned}$$ Now we come to $\Psi_2^{\mathcal H}$. Clearly, $$\begin{aligned}
\gamma_{\tilde v}(\Psi_2^{\mathcal H})_{11}(A)&=\gamma_{\tilde v}(A^t)=\gamma_v(A)^t= (\Psi_2^{\mathcal H})_{11}\otimes\mathrm{id}(\gamma_{v}(A))\\
\gamma_{\tilde u}(\Psi_2^{\mathcal H})_{22}(B)&=\gamma_{\tilde u}(B^t)=\gamma_u(B)^t= (\Psi_2^{\mathcal H})_{22}\otimes\mathrm{id}(\gamma_{u}(B))
\end{aligned}$$ Note that $$(\Psi_2^{\mathcal H})_{21}(e_{ij}\boxtimes 1)=v^t e_{ij}= \sum_{p,q}v_{qp}e_{pq}e_{ij}=\sum_p v_{ip}e_{pj}=\sum_p e_{pj}\boxtimes v_{ip}.$$ With that in mind, we find $$\begin{aligned}
\MoveEqLeft\gamma_{\tilde u}(\Psi_2^{\mathcal H})_{21}(e_{ij})
=\sum \gamma_{\tilde u}(v_{ip}e_{pj})\\
&=\sum \gamma_{\tilde u}(e_{pj})\mathbin{\vartriangleleft}v_{ip}\\
&=\sum e_{k\ell}\boxtimes 1 \otimes S(u_{j\ell}) u_{kp}\mathbin{\vartriangleleft}v_{ip}\\
&=\sum e_{k\ell}\boxtimes v_{st}\otimes S(v_{is})S(u_{j\ell}) \underbrace{u_{kp} v_{tp}}_{\delta_{kt}}\\
&=\sum e_{k\ell}\boxtimes v_{sk}\otimes S(v_{is})v_{\ell j}\\
&=\sum (\Psi_2^{\mathcal H})_{21}(e_{s \ell}\boxtimes 1)\otimes S(v_{is})v_{\ell j}\\
&=((\Psi_2^{\mathcal H})_{21}\otimes \mathrm{id})(\gamma_{v}(e_{ij})).
\end{aligned}$$ We can deal with $(\Psi_2^{\mathcal H})_{12}$ similarly. Using $$(\Psi_2^{\mathcal H})_{12}(e_{ij}\boxtimes 1)=Fu^te_{ij}F^{-1}= \sum e_{pt}e_{ij}e_{sq}\boxtimes F_{pr}u_{tr}F^{-1}_{sq}=\sum e_{pq}\boxtimes F_{pr}u_{ir}F^{-1}_{jq},$$ we get $$\begin{aligned}
\MoveEqLeft\gamma_{\tilde v}(\Psi_2^{\mathcal H})_{12}(e_{ij})
=\sum \gamma_{\tilde v}(e_{pq})\mathbin{\vartriangleleft}F_{pr}u_{ir}F^{-1}_{jq}\\
&=\sum e_{k\ell}\boxtimes 1 \otimes S(v_{q\ell}) v_{kp}\mathbin{\vartriangleleft}F_{pr}u_{ir}F^{-1}_{jq}\\
&=\sum e_{k\ell}\boxtimes u_{st}\otimes S(u_{is}) \underbrace{ F^{-1}_{jq} S(v_{q\ell}) }_{u_{qj} F^{-1}_{q\ell}}\underbrace{ v_{kp} F_{pr} u_{tr}}_{F_{kt}}\\
&=\sum e_{k\ell}\boxtimes F_{kt} u_{st} F^{-1}_{q\ell}\otimes S(u_{is})u_{qj} \\
&=((\Psi_2^{\mathcal H})_{12}\otimes\mathrm{id})(\gamma_{u}(e_{ij}))
\end{aligned}$$ Finally, the calculations for $\Psi^{\mathcal H}_1$ are relatively short. For $(\Psi^{\mathcal H}_1)_{11}(e_{ij})=(\Psi^{\mathcal H}_1)_{22}(e_{ij})=-\delta_{ij}$ we get $$\begin{aligned}
((\Psi^{\mathcal H}_1)_{11}\otimes\mathrm{id})(\gamma_{\tilde v}(e_{ij}))&=\sum -\delta_{k\ell}\otimes S(v_{j\ell})v_{ki}=-\delta_{ij} 1\otimes 1=\gamma((\Psi^{\mathcal H}_1)_{11}(e_{ij}))\\
((\Psi^{\mathcal H}_1)_{22}\otimes\mathrm{id})(\gamma_{\tilde u}(e_{ij}))&=\sum -\delta_{k\ell}\otimes S(u_{j\ell})u_{ki}=-\delta_{ij} 1\otimes 1=\gamma((\Psi^{\mathcal H}_1)_{22}(e_{ij})).
\end{aligned}$$ Moreover, $(\Psi^{\mathcal H}_1)_{21}(e_{ij})=\operatorname{tr}(ve_{ij})=v_{ji}$ and $(\Psi^{\mathcal H}_1)_{12}(e_{ij})=\operatorname{tr}(ue_{ij})=u_{ji}$, so that $$\begin{aligned}
((\Psi^{\mathcal H}_1)_{21}\otimes\mathrm{id})(\gamma_{\tilde v}(e_{ij}))&=\sum v_{\ell k}\otimes S(v_{j\ell})v_{ki}=1\mathbin{\vartriangleleft}v_{ji}=\gamma((\Psi^{\mathcal H}_1)_{21}(e_{ij}))\\
((\Psi^{\mathcal H}_1)_{12}\otimes\mathrm{id})(\gamma_{\tilde u}(e_{ij}))&=\sum u_{\ell k}\otimes S(u_{j\ell})u_{ki}=1\mathbin{\vartriangleleft}u_{ji}=\gamma((\Psi^{\mathcal H}_1)_{12}(e_{ij})).
\end{aligned}$$ The remaining components are either zero or $\pm$ identity maps on $\mathcal H$, so these are trivially comodule maps. ◻
## Monoidal equivalence and the general case
The sequence from Theorem 51 is easily seen to be a complex for arbitrary $F\in \operatorname{GL}_n(\mathbb C)$, but our previous arguments only prove exactness if $F$ is a generic asymmetry. In order to prove exactness at least whenever $F$ is generic, we use the theory of Hopf-bi-Galois objects, which provide concrete monoidal equivalences between the categories of (free) Yetter-Drinfeld modules over $\mathcal H(F)$ and $\mathcal H(F')$ whenever $\operatorname{tr}(F)=\operatorname{tr}(F')$ and $\operatorname{tr}(F^{-1})=\operatorname{tr}({F'}^{-1})$. In fact, following Bichon's reasoning for $\mathcal B(E)$ [@Bic13], we use the language of cogroupoids; an overview of the theory of cogroupoids and their relation to Hopf-bi-Galois objects can be found in [@bichon14]. Whether or not the sequence remains exact when $F$ is not normalizable remains an open problem, but as far as the compact quantum groups $U_K^+$ are concerned, the problem is resolved completely, because $\operatorname{Pol}(U_K^+)\cong \mathcal H(K^*K)$ and a positive matrix $K^*K$ is always normalizable.
For $F\in \operatorname{GL}_n(\mathbb C),F'\in \operatorname{GL}_{n'}(\mathbb C)$, consider the algebra $\mathcal H(F,F')$ generated by the entries of the rectangular matrices $u^{F,F'}=(u^{F,F'}_{ij})$ and $v^{F,F'}=(v^{F,F'}_{ij})$, where $i$ runs from $1$ to $n$ and $j$ runs from $1$ to $n'$, with relations $$\begin{aligned}
u^{F,F'}(v^{F,F'})^t&=I_n = v^{F,F'} F' (u^{F,F'})^t F^{-1},& (v^{F,F'})^tu^{F,F'}&=I_{n'}=F' (u^{F,F'})^t F^{-1}v^{F,F'} .\end{aligned}$$ Note that $\mathcal H(F,F)$ is simply $\mathcal H(F)$.
Recall that, for a coalgebra $C$, the cotensor product of a right $C$-comodule $M$ with a left $C$-comodule $N$ is $M\mathbin{\Box} N:=\{X\in M\otimes N: \gamma_M\otimes \mathrm{id}(X)=\mathrm{id}\otimes\gamma_N(X)\}$.
The algebras $\mathcal H(F,F')$ carry a natural $\mathcal H(F)$-$\mathcal H(F')$-bicomodule structure; moreover, they form a *cogroupoid* [@bichon14 Lemma 3.8 and Definition 3.9]. From the general theory of cogroupoids, it follows that the functor $(-)\mathbin{\Box} \mathcal H(F,F')$ is an equivalence between the categories of Yetter-Drinfeld modules over $\mathcal H(F)$ and over $\mathcal H(F')$. Furthermore, for every right $\mathcal H(F)$-comodule $M$, there is a canonical isomorphism between the free Yetter-Drinfeld module over the cotensor product, $(M\mathbin\Box \mathcal H(F,F'))\boxtimes \mathcal H(F')$, and the cotensor product with the free Yetter-Drinfeld module, $(M\boxtimes \mathcal H(F))\mathbin\Box \mathcal H(F,F')$.
Because the functor $(-)\mathbin{\Box} \mathcal H(F,F')$ is an equivalence of abelian categories, it is exact. Therefore, it allows us to transform any exact sequence of free $\mathcal H(F)$ Yetter-Drinfeld modules $P_i$ into an exact sequence of $\mathcal H(F')$ Yetter-Drinfeld modules $P_i\mathbin{\Box} \mathcal H(F,F')$. In Lemma 53 below, we will see that if we apply this technique to the resolutions above, the resolution for $F$ transforms into the resolution for $F'$ (or the other way round if we exchange the roles of $F$ and $F'$).
**Lemma 52**. *The following prescriptions uniquely extend to isomorphisms of Yetter-Drinfeld modules: $$\begin{aligned}
\mathcal H(F')\ni 1 &\mapsto 1\otimes 1 \in \mathcal H(F)\mathbin{\Box} \mathcal H (F,F')\\
M_n^{u^{F'}}\ni e_{ij}&\mapsto \sum_{k,\ell} e_{k\ell}\otimes S^{F'F}(u_{ik}^{F'F})u_{\ell j}^{F F'}\in M_n^{u^{F}}\mathbin{\Box}
\mathcal H(F,F')\\
M_n^{v^{F'}}\ni e_{ij}&\mapsto \sum_{k,\ell} e_{k\ell}\otimes S^{F'F}(v_{ik}^{F'F})v_{\ell j}^{FF'}\in M_n^{v^{F}}\mathbin{\Box}
\mathcal H(F,F')\\
M_n^{\tilde u^{F'}}\ni e_{ij}&\mapsto \sum_{k,\ell} e_{k\ell}\otimes S^{F'F}(u_{j\ell}^{F'F})u_{ki}^{FF'}\in M_n^{\tilde u^{F}}\mathbin{\Box}
\mathcal H(F,F')\\
M_n^{\tilde v^{F'}}\ni e_{ij}&\mapsto \sum_{k,\ell} e_{k\ell}\otimes S^{F'F}(v_{j\ell}^{F'F})v_{ki}^{FF'}\in M_n^{\tilde v^{F}}\mathbin{\Box}
\mathcal H(F,F')
\end{aligned}$$*
*Proof.* It is easy to check that the linear extensions to $M_n(\mathbb C)$ are comodule maps, hence they uniquely extend to morphisms between the free Yetter-Drinfeld modules. In order to understand why they are isomorphisms, recall the following canonical isomorphisms.
- $\mathcal H(F,F'')\cong\mathcal H(F,F')\mathbin\Box \mathcal H(F',F''), u_{ij}^{FF''}\mapsto \sum u_{ik}^{FF'}\otimes u_{kj}^{F'F''}, v_{ij}^{FF''}\mapsto \sum v_{ik}^{FF'}\otimes v_{kj}^{F'F''}$ defines an algebra isomorphism.
- For every $\mathcal H(F)$ comodule $M$, $$M\cong M\mathbin\Box \mathcal H(F), m\mapsto \gamma(m)$$ defines an isomorphism of comodules. If $M$ is a Yetter-Drinfeld module, the isomorphism is an isomorphism of Yetter-Drinfeld modules.
Now one can check that the iteration $$M_n^{u^{F'}}\to M_n^{u^{F}}\mathbin{\Box}\mathcal H(F,F')\to M_n^{u^{F'}}\mathbin{\Box}\mathcal H(F',F)\mathbin{\Box}\mathcal H(F,F')\cong M_n^{u^{F'}}\mathbin{\Box}\mathcal H(F')\cong M_n^{u^{F'}}$$ yields the identity on $M_n^{u^{F'}}$: $$\begin{aligned}
\sum e_{pq}\otimes S^{F'F}(u_{kp}^{F'F})u_{q \ell}^{F F'}\otimes S^{F'F}(u_{ik}^{F'F})u_{\ell j}^{F F'}\cong \sum e_{pq}\otimes S^{F'}(u_{pi}^{F'}) u_{qj}^{F'}\cong e_{ij}.
\end{aligned}$$ The other cases work similarly. ◻
**Lemma 53**. *Under identification using the isomorphisms of the previous lemma, $\Psi_i^{\mathcal H(F)}\mathbin{\Box} \mathcal H(F,F')$ coincides with $\Psi_i^{\mathcal H(F')}$.*
*Proof.* Note that the formulas for the isomorphisms look exactly the same as the formulas for the coactions, once we suppress the superscripts and the ranges of the indices in the notation. The calculations are therefore almost identical to those in the proof of Theorem 51; only $F$ has to be replaced by $F'$ in the appropriate places. ◻
**Corollary 54**. *The sequence [\[eq:YD-res_H(F)\]](#eq:YD-res_H(F)){reference-type="eqref" reference="eq:YD-res_H(F)"} is exact for all generic $F$. In particular, we obtain a free resolution of the counit for the free unitary compact quantum groups $U_K^+$ with arbitrary $K\in \operatorname{GL}_n(\mathbb C)$.*
With this, the proof of is completed.
# Bialgebra cohomology for $\mathcal H(F)$ and $\mathcal A(E)$ {#sec:bialgebra-cohomology}
Let $F$ be generic, so that [\[eq:YD-res_H(F)\]](#eq:YD-res_H(F)){reference-type="eqref" reference="eq:YD-res_H(F)"} is a free resolution by Corollary 54. As explained in [@Bic16 Section 2.6], the bialgebra cohomology of $\mathcal H=\mathcal H(F)$ is the cohomology of the complex $$\begin{gathered}
\label{eq:YD-complex-for-H}
0\to \hom_{\mathcal H}^{\mathcal H}\left(
\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix},
\mathbb C_\varepsilon\right)
\xrightarrow{-\circ \Psi^{\mathcal H}_1}
\hom_{\mathcal H}^{\mathcal H}\left(
\begin{pmatrix} M_n^{\tilde v}(\mathcal H) \\ M_n^{\tilde u}(\mathcal H) \\ \mathcal H \end{pmatrix},
\mathbb C_\varepsilon\right)
\\
\xrightarrow{-\circ \Psi^{\mathcal H}_2}
\hom_{\mathcal H}^{\mathcal H}\left(
\begin{pmatrix} M_n^v(\mathcal H) \\ M_n^u(\mathcal H) \end{pmatrix} ,
\mathbb C_\varepsilon\right)
\xrightarrow{-\circ\Psi^{\mathcal H}_3}
\hom_{\mathcal H}^{\mathcal H}\left(
\begin{pmatrix} \mathcal H \\ \mathcal H \end{pmatrix},
\mathbb C_\varepsilon\right)\to0.\end{gathered}$$
**Observation 55**. By the universal property of free Yetter-Drinfeld modules, $$\hom_{\mathcal H}^{\mathcal H}(W\boxtimes \mathcal H,\mathbb C_\varepsilon)\cong \hom^{\mathcal H}(W,\mathbb C)$$ for every $\mathcal H$-comodule $W$. Obviously, $\hom^{\mathcal H}(\mathbb C,\mathbb C)\cong \mathbb C$, where the inverse isomorphism maps $\lambda\in\mathbb C$ to the map $(z\mapsto \lambda z)\in\hom^{\mathcal H}(\mathbb C,\mathbb C)$. Now let $f\colon M_n^u(\mathbb C)\to \mathbb C$ be a linear map, $f(e_{jk})=f_{jk}$. Then $$\begin{aligned}
f\in \hom^{\mathcal H}( M_n^u(\mathcal H), \mathbb C)
\iff \forall_{j,k}\ f\otimes\mathrm{id}(\gamma^{u}(e_{jk}))=\gamma(f(e_{jk}))
\end{aligned}$$ and $$\begin{aligned}
f\otimes\mathrm{id}(\gamma(e_{jk}))=\gamma(f(e_{jk}))&\iff \sum f_{qp}S(u_{jq})u_{pk}=f_{jk}1\\
&\iff\sum f_{jp} u_{pk} = \sum f_{qk} u_{jq}\\
&\iff f_{jk}=\delta_{jk} f_{jj}
\end{aligned}$$ show that $$\hom_{\mathcal H}^{\mathcal H}( M_n^u(\mathcal H),\mathbb C_\varepsilon)\cong \hom^{\mathcal H}( M_n^u(\mathbb C),\mathbb C)\cong \mathbb C$$ where the inverse isomorphism maps $\lambda\in\mathbb C$ to the linear map $f$ with $f_{jk}=\delta_{jk}\lambda$. The same holds of course for all $\hom_{\mathcal H}^{\mathcal H}(M_n^{\#}(\mathcal H),\mathbb C_\varepsilon)$ with $\#\in \{u,v,\tilde u,\tilde v\}$. When we follow the calculations in the proof of , we therefore have to replace matrices by constant diagonal matrices! Therefore, the bialgebra cohomology of $\mathcal H$ is the cohomology of the complex (using the notation $(\Psi^{\mathcal H}_j)^{*}$ in the appropriately adapted meaning) $$0\to\mathbb C^2\xrightarrow{(\Psi^{\mathcal H}_1)^{*}}\mathbb C^3\xrightarrow{(\Psi^{\mathcal H}_2)^{*}} \mathbb C^2\xrightarrow{(\Psi^{\mathcal H}_3)^{*}}\mathbb C^2\to 0$$ with maps $$\begin{aligned}
(\Psi^{\mathcal H}_1)^{*}
\begin{pmatrix}
\lambda\\\mu
\end{pmatrix}
=
\begin{pmatrix}
\mu-\lambda\\\lambda-\mu\\\lambda-\mu
\end{pmatrix}
,
\quad
(\Psi^{\mathcal H}_2)^{*}
\begin{pmatrix}
\lambda\\\mu\\\nu
\end{pmatrix}
=
\begin{pmatrix}
\lambda+\mu\\\lambda+\mu
\end{pmatrix}
,
\quad
(\Psi^{\mathcal H}_3)^{*}
\begin{pmatrix}
\lambda\\\mu
\end{pmatrix}
=\operatorname{tr}(F)
\begin{pmatrix}
\mu-\lambda\\\lambda-\mu
\end{pmatrix}.
\end{aligned}$$ It is straightforward to determine image and kernel.
**Theorem 56**. *Let $F$ be generic (in particular, $\operatorname{tr}(F)\neq 0$). Then the bialgebra cohomology of $\mathcal H=\mathcal H(F)$ is given as follows: $$\begin{aligned}
\dim \mathrm{H}_{\mathrm b}^0(\mathcal H) &= 1 ,
&\dim \mathrm{H}_{\mathrm b}^1(\mathcal H) &= 1 ,
&\dim \mathrm H_{\mathrm b}^2(\mathcal H) &= 0,& % 1-t ,&
\dim \mathrm H_{\mathrm b}^3(\mathcal H) &= 1, %2-t,\end{aligned}$$ and $\dim \mathrm H_{\mathrm b}^k(\mathcal H) = 0$ for all $k \geq 4$.*
**Observation 57**. Analogously to Observation [Observation 55](#obs:hom_H^H){reference-type="ref" reference="obs:hom_H^H"} we get $$f\in\hom^{\mathcal A}(V^*V,\mathbb C)\iff f_{jk}=\delta_{jk} f_{jj}$$ for a linear map $f\colon V^*V\to\mathbb C, f(e_j^*e_k)=f_{jk}$. Of course, $$\hom_{\mathcal A}^{\mathcal A}((1-g)\otimes_{\mathbb C\mathbb Z_2}\mathcal A,\mathbb C_\varepsilon)\subset \hom_{\mathcal A} ((1-g)\otimes_{\mathbb C\mathbb Z_2}\mathcal A,\mathbb C_\varepsilon)=0.$$
The bialgebra cohomology of $\mathcal A=\mathcal A(E)$ can now again be deduced in the same manner. (Co-)Restricting the $(\Phi_i^{(\mathcal B)})^*$ to constant diagonal matrices results in the complex $$0\to\mathbb C\xrightarrow{0}\mathbb C\xrightarrow{2} \mathbb C\xrightarrow{0}\mathbb C\to 0.$$ Therefore we obtain:
**Theorem 58**. *Let $F=E^tE^{-1}$ be a generic asymmetry. Then the bialgebra cohomology of $\mathcal A=\mathcal A(E)$ is given as follows: $$\begin{aligned}
\dim \mathrm{H}_{\mathrm b}^0(\mathcal A) &= 1 ,
&\dim \mathrm{H}_{\mathrm b}^1(\mathcal A) &= 0 ,
&\dim \mathrm H_{\mathrm b}^2(\mathcal A) &= 0 ,&
\dim \mathrm H_{\mathrm b}^3(\mathcal A) &= 1,\end{aligned}$$ and $\dim \mathrm H_{\mathrm b}^k(\mathcal A) = 0$ for all $k \geq 4$.*
# Acknowledgements {#acknowledgements .unnumbered}
We are grateful to Alexander Mang and Moritz Weber for fruitful discussions at an initial stage of this project, and for bringing the glued products to our attention. We are also indebted to Julien Bichon for many useful hints and explanations and for sharing a preliminary draft of [@Bichon23pre] with us.
[^1]: Indeed, Lévy processes on a compact quantum group $\mathbb{G}$ are classified by their generating functionals $\phi\colon \mathrm{Pol}(\mathbb{G})\to\mathbb{C}$, which can always be completed to so-called Schürmann triples $(\phi,\eta,\phi)$, where $\pi\colon \mathrm{Pol}(\mathbb{G})\to B(H)$ is a $*$-homomorphism with values in the $*$-algebra of bounded linear operators on some Hilbert space $H$, $\eta\colon \mathrm{Pol}(\mathbb{G})\to H$ is a Hochschild one-cocycle (w.r.t. a certain $\mathrm{Pol}(\mathbb{G})$-bimodule structure on $H={_\pi H_\varepsilon}$), and the bilinear map $\mathrm{Pol}(\mathbb{G})\otimes\mathrm{Pol}(\mathbb{G})\ni a\otimes b \mapsto - \langle\eta(a^*),\eta(b)\rangle\in\mathbb{C}$ is the Hochschild coboundary of $\phi$. See [@schurmann; @fgt15; @hunt] for a detailed description.
[^2]: Our definitions below follow conventions of Banica in [@Banica1996; @Banica1997-free_unitary_quantum_group] and differ slightly from the definitions in [@VanDaeleWang1996]. The free unitary quantum groups are universal in the sense that every compact matrix quantum group is a quantum subgroup of some $U_K^+$.
[^3]: It is elementary to show that $\operatorname{tr}(F)\operatorname{tr}(F^{-1})\geq n^2\geq4$ (we assumed $n\geq2$), therefore all solutions of [\[eq:generic\]](#eq:generic){reference-type="eqref" reference="eq:generic"} are real and, in particular, not roots of unity of order $\geq3$.
[^4]: In [@Skoda2003], Škoda calls $u$ a *basis* instead, but we think of that choice of terminology as somewhat misleading.
[^5]: *For now, we have shown exactness whenever $F$ is a generic asymmetry. In below, we will see that it is enough to assume $F$ to be generic, which will complete the proof of .*
---
abstract: |
In this paper we look at the topological type of algebraic sum of achievement sets. We show that there is a Cantorval such that the algebraic sum of its $k$ copies is still a Cantorval for any $k \in \mathbb N$. We also prove that for any $m,p \in (\mathbb N\setminus \{1\}) \cup \{\infty\}$, $p \geq m$, the algebraic sum of $k$ copies of a Cantor set can transit from a Cantor set to a Cantorval for $k=m$ and then to an interval for $k=p$. These two main results are based on a new characterization of sequences whose achievement sets are Cantorvals. We also define a new family of achievable Cantorvals which are not generated by multigeometric series. In the final section we discuss various decompositions of sequences related to the topological typology of achievement sets.
address:
- |
Faculty of Mathematics and Computer Science\
University of Warmia and Mazury in Olsztyn\
Słoneczna 54, 10-710 Olsztyn\
Poland\
ORCID: 0000-0003-3712-8289
- |
Faculty of Mathematics and Computer Science\
University of Łódź\
Banacha 22, 90-238 Łódź\
Poland\
ORCID: 0000-0002-3655-4991
- |
Instytut Matematyki\
Uniwersytet Szczeciński\
ul. Wielkopolska 15\
PL-70-453 Szczecin\
Poland\
ORCID 0000-0002-0275-6122
author:
- Jacek Marchwicki, Piotr Nowakowski and Franciszek Prus-Wiśniowski
title: Algebraic sums of achievable sets involving Cantorvals
---
# Introduction
Achievement sets, that is, the sets of all possible subsums of absolutely convergent series, have been investigated for over a century now. The historically first paper devoted to achievement sets ([@Kakeya]) aimed at finding all possible topological types of such sets. The problem was solved only in 1988 by Guthrie and Nymann in [@GN88]. After the publication of their fundamental theorem, the investigation of achievement sets gained momentum. Among others, achievement sets served as a counterexample (see [@Sannami92] and [@PWT18]) to the Palis hypothesis [@Palis87] that the arithmetic sum (or difference) of two Cantor sets, both with Lebesgue measure zero, either has Lebesgue measure zero or contains an interval. The Palis hypothesis came from the theory of dynamical systems and added much interest to the investigation of algebraic sums of Cantor sets, which is a thriving field of research (see, for example, [@PY97], [@E07], [@Pourba18], [@T19]). Achievement sets, due to their rigid nature, form a relatively small family of compact sets in the real line, as can be seen from the fact that the Palis hypothesis is generically true for dynamically defined Cantor sets [@MY01]. The elementary nature of achievement sets is often misleading because many seemingly straightforward facts require quite complicated proofs or constructions. In our opinion, the most challenging open problem related to achievement sets is to find an easy-to-use characterization of the absolutely convergent series that generate Cantor sets.
Let $\sum a_n$ be an absolutely convergent series of real terms. *The achievement set* of the sequence $(a_n)_{n=1}^\infty$ is defined by $$A\ =\ A(a_n)\ :=\ \left\{x\in\mathbb R:\qquad \exists I\subset\mathbb N \quad x\,=\,\sum_{n\in I} a_n\,\right\}.$$ The set of indices $I$ is called a *representation* of $x$. It does not have to be unique, although we will not dwell on it (unlike the papers [@cardfun] and [@GM23]). As was already known to Kakeya, an achievement set is compact, and if the sequence $(a_n)$ has infinitely many non-zero terms, then $A(a_n)$ must be perfect. Since we will be focused on the topological character of achievement sets, we assume throughout the paper that the terms of the series $\sum a_n$ are nonnegative and nonincreasing, since $A(a_n)$ is merely a translation of $A(|a_n|)$ by the number $\sum_{a_n<0}a_n$, and hence $A(a_n)$ and $A(|a_n|)$ are always homeomorphic. A set $S\subset\mathbb R$ is said to be *achievable* if there is a sequence $(a_n)$ such that $S=A(a_n)$ [@Jones p. 519]. Given any positive integer $k$, we define the set of *$k$-initial subsums* as $$F_k\ =\ F_k(a_n)\ :=\ \left\{x\in\mathbb R: \qquad \exists I\subset\{1,2,\ldots,k\,\}\quad x\,=\,\sum_{n\in I}a_n\,\right\}.$$ We will be using the increasing arrangement $F_k=(f_j^k)_{j=1}^{m_k}$. Always $k+1\le m_k\le 2^k$. More generally, if $S$ is any finite set (or multiset) of real numbers, the set of all numbers that are sums of at least one subset (also possibly a multiset) of $S$ will be denoted by $\Sigma S$. In particular, $F_k=\Sigma\{a_1,a_2,\ldots,a_k\,\}$. For all $k\in\mathbb N$ the equality $A(a_n)\,=\, F_k+E_k$ holds, where $E_k$ denotes the achievement set of the remainder sequence $(a_n)_{n=k+1}^\infty$.
Next, we define the *$k$-th iterate* of $A$ by $$I_k\ :=\ \bigcup_{f\in F_k}[f,\,f+r_k],$$ where $r_k:=\sum_{n>k}a_n$ denotes the $k$-th remainder of the series $\sum a_n$. Each $I_k$ is a *multi-interval set*, that is, the union of a finite family of closed and bounded intervals. In the classic case of $a_n=\tfrac2{3^n}$, when $A$ is the ternary Cantor set $C$, the set $I_k$ is exactly the set obtained in the $k$-th step of the standard geometric construction of $C$. Additionally, we will write $I_0:=[0,\,\sum a_n]$. Always $A(a_n)\,=\,\bigcap_nI_n$.
We denote the family of all connectivity components of $I_{k-1}\setminus I_k$ by $\mathcal{G}_k$. Open intervals belonging to $\mathcal{G}_k$ are called *$A$-gaps of order $k$*. It is not difficult to see that the family $\mathcal{G}_k$ is nonempty if and only if $r_k<a_k$. Intervals of the form $(r_n,\,a_n)$ are gaps whenever $r_n<a_n$ (The First Gap Lemma, see [@BFPW1]) and will be called the *principal gaps* of $A$. We will say that an $A$-gap $G$ is *dominating* if all $A$-gaps lying to the left of $G$ are shorter than $G$. Nontrivial components of $A$ will be called *$A$-intervals*. It was already proven by Kakeya that $A(a_n)$ is a multi-interval set if and only if $a_n\le r_n$ for all sufficiently large $n$. $A(a_n)$ is a single interval if and only if $a_n\le r_n$ for all indices $n$. We then say that $(a_n)$ is an *interval-filling* sequence; this terminology comes not from Kakeya, but from [@DJK]. Kakeya also observed that if $a_n>r_n$ for all sufficiently large $n$ then $A(a_n)$ is a *Cantor set*, that is, a set in $\mathbb R$ homeomorphic to the classic Cantor ternary set or, equivalently, a nonempty bounded, perfect and nowhere dense set (see [@Foran Thm. 3.3]). He conjectured in [@Kakeya] that if the series $\sum a_n$ has infinitely many positive terms, then $A(a_n)$ is either a multi-interval set or a Cantor set, which turned out to be false, but only after seventy years. Series (or sequences) satisfying $a_n>r_n$ for all $n$ are called *fast convergent*.
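To make the objects $F_k$, $r_k$, $I_k$ and the Kakeya criteria concrete, here is a minimal numerical sketch (ours, in Python; it is not part of the original text and the function names are chosen only for this illustration), run on the classical example $a_n=2/3^n$, whose achievement set is the Cantor ternary set.

```python
def a(n):                              # the classical example a_n = 2/3^n
    return 2 / 3**n

def F(k):                              # the k-initial subsums F_k (the empty sum 0 included)
    sums = {0.0}
    for n in range(1, k + 1):
        sums |= {s + a(n) for s in sums}
    return sorted(sums)

def components(intervals):             # connectivity components of a finite union of closed intervals
    comps = []
    for lo, hi in sorted(intervals):
        if comps and lo <= comps[-1][1]:
            comps[-1][1] = max(comps[-1][1], hi)
        else:
            comps.append([lo, hi])
    return comps

for k in range(1, 6):
    Fk = F(k)
    r_k = 3.0 ** (-k)                  # for this series r_k = sum_{n>k} 2/3^n = 3^(-k) exactly
    I_k = components([(f, f + r_k) for f in Fk])
    print(f"k={k}: |F_k|={len(Fk)}, components of I_k: {len(I_k)}, "
          f"principal gap (r_k, a_k) nonempty: {r_k < a(k)}")
```

For this series $a_k>r_k$ for every $k$ (fast convergence), each $I_k$ consists of $2^k$ intervals of length $3^{-k}$, and $A=\bigcap_k I_k$ is the Cantor ternary set, in line with the criteria recalled above.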
A set $P\subset \mathbb R$ is said to be a Cantorval if it is homeomorphic to the set $$GN\ :=\ C\,\cup\,\bigcup_n G_{2n-1}$$ where $C$ is the Cantor ternary set $C=A(\tfrac2{3^n})$ and $G_{2n-1}$ is the union of all $4^{n-1}$ $C$-gaps of order $2n-1$. It is known that a Cantorval is exactly a nonempty compact set in $\mathbb R$ such that it is the closure of its interior and both endpoints of every nontrivial component are accumulation points of its trivial components. Other topological characterizations of Cantorvals can be found in [@MO] and [@BFPW1].
The fundamental Guthrie-Nymann classification theorem ([@GN88 Thm.1], [@NS0]) asserts that the achievement set of an absolutely convergent series is always of one of the following four topological types: a finite set, a multi-interval set, a Cantor set or a Cantorval. Proving that a particular series generates a Cantorval is rather difficult and thus almost all known examples of such series are the *multigeometric series* whose general term is of the form $a_{(n-1)k+i}\,=\,l_iq^n$, $n\in\mathbb N$, $i\in\{1,2,\ldots,k\}$ for some $k\in\mathbb N$, $q\in(0,\,1)$ and some real numbers $l_1\ge l_2\ge \ldots\ge l_k>0$. Such a multigeometric sequence will be denoted by $(l_1,l_2,\ldots,l_k;\,q)$ and this notation will be used in Thm. [Theorem 12](#sufficientcantorval){reference-type="ref" reference="sufficientcantorval"}. The only two known examples of families of non-multigeometric Cantorvals can be found in [@VMPS19] and [@FN23].
One of known and easy to use sufficient conditions for the achievement set to be a Cantor set uses the notion of a semi-fast convergent series [@BFPW2]. A series $\sum a_n$ with monotonic and positive terms convergent to 0 is called *semi-fast convergent* if it satisfies the condition $$a_n\ >\ \sum_{k:\,a_k<a_n}a_k \qquad\text{for all $n$}.$$ If $\sum a_n$ is a semi-fast convergent series, then there exist two uniquely determined sequences, $(\alpha_k)$ of positive numbers decreasing to 0 and $(N_k)$ of positive integers, such that $$a_n\ =\ \alpha_k\qquad\qquad \text{for} \qquad \sum_{j=0}^{k-1}N_j\ <\ n\ \le\ \sum_{j=0}^kN_j$$ where $N_0:=0$. The numbers $\alpha_k$ are the values of the terms of the series $\sum a_n$ and $N_k$ is the multiplicity of the value $\alpha_k$ in the series $\sum a_n$. Thus, we can identify $$\sum a_n\ =\ \sum(\alpha_k,\,N_k) \qquad\qquad\text{or}\qquad\qquad (a_n)\ =\ (\alpha_k,\,N_k)$$ and the sum of the series is $\sum a_n\,=\,\sum\,(\alpha_k,\,N_k)\,=\,\sum_{k=1}^\infty\alpha_kN_k$. Theorem 16 of [@BFPW2] states that if $\sum a_n$ is semi-fast convergent then $A(a_n)$ is a Cantor set.
Let $(b_n)$ and $(c_n)$ be two sequences of real numbers. We will say that a sequence $(a_n)$ is the *union of the sequences* $(b_n)$ and $(c_n)$ if there is a partition $\mathbb N=N\sqcup M$ with both subsets $N$ and $M$ infinite such that $$\label{unionseq}
(b_n)_{n\in\mathbb N}\ =\ (a_k)_{k\in M} \qquad \text{and} \qquad (c_n)_{n\in\mathbb N}\ =\ (a_k)_{k\in N}.$$ We will write $(a_n) = (b_n) \cup (c_n)$. Loosely speaking, it means that $(a_n)$ is the mixture of all terms of $(b_n)$ and all terms of $(c_n)$. We will also say that the sequences $(b_n)$ and $(c_n)$ form a decomposition of the sequence $(a_n)$. The admissible partition $\mathbb N=M\sqcup N$ is not unique if the sequences $(b_n)$ and $(c_n)$ have a common value. If $(a_n)$ is the union of sequences $(b_n)$ and $(c_n)$, and $\mathbb N=M\sqcup N$ is a fixed partition satisfying [\[unionseq\]](#unionseq){reference-type="eqref" reference="unionseq"}, then we will write $a_k\in(b_n)$ whenever $k\in M$. It is not difficult to see that if $\sum b_n$ and $\sum c_n$ are convergent series of positive and nonincreasing terms, then $A(b_n)+A(c_n)=A(a_n)$.
We close this section with some additional notation used in this paper. Let $B \subset \mathbb R$. By $\lambda(B)$ we denote the Lebesgue measure of the set $B$. If $B$ is finite, then by $|B|$ we denote the cardinality of the set $B$. If $B$ is an interval, then by $|B|$ we denote the length of $B$.
# General remarks on achievable sets
We will start with a rather general observation on gaps of achievable sets. But first, let us recall The Second Gap Lemma.
**Lemma 1** (The Second Gap Lemma, [@BFPW1]). *Let $(\alpha,\,\beta)$ be an $A$-gap of order $k$. Then $\beta\in F_k$ and hence $\beta=f_j^k$ for a unique $j\in\{2,\,3,\,\ldots,\,m_k\,\}$. Moreover, $\alpha=f_{j-1}^k+r_k$.*
**Proposition 2** (The $k$-th order gap lemma). *The principal gap of order $k$ has the maximal length among all $A$-gaps of order $k$.*
*Proof.* Let $(\alpha,\,\beta)$ be an $A$-gap of order $k$. From The Second Gap Lemma, we have $\beta\in F_k$, and hence $\beta=f_j^k$ for a unique $j\in\{2,\,3,\,\ldots,\,m_k\,\}$. It follows that $\alpha=f_{j-1}^k+r_k$.
If $f_j^k$ has a representation $B$ with $k\in B$, then $f_{j-1}^k\ge f_j^k-a_k$ and hence, $$\beta-\alpha\ =\ f_j^k-(f_{j-1}^k+r_k)\ \le a_k-r_k,$$ that is, the gap $(\alpha,\,\beta)$ is no longer than the principal gap $(r_k,\,a_k)$.
If $f_j^k$ has no representation involving $k$, then $k\ge2$ and $f_j^k=f_i^{k-1}$ for some $i\in\{2,\,3,\,\ldots,\,m_{k-1}\}$. We need to consider two cases.
*Case* 1: $f_{j-1}^k$ has no representation with $k$. Since there are gaps of order $k$, we have $r_k < a_k$, and so $$A\ \supset\ F_k\ \ni\ f_{j-1}^k+a_k\ >\ f_{j-1}^k+r_k\ =\ \alpha,$$ and hence $f_{j-1}^k+a_k\ge\beta$, which implies that $$\beta\,-\,\alpha\ \le\ (f_{j-1}^k+a_k)\,-\,(f_{j-1}^k+r_k)\ =\ a_k\,-\,r_k.$$
*Case* 2: $f_{j-1}^k$ has a representation with $k$. Then $f_{j-1}^k=f_s^{k-1}+a_k$ for some $s\in\{1,\,2,\,\ldots,\,m_{k-1}\}$. We have $f_s^{k-1} < f_{j-1}^k < f_j^k =f_{i}^{k-1}$, thus $f_{i-1}^{k-1}\ge f_s^{k-1}$. Now, if $f_{i-1}^{k-1}=f_s^{k-1}$, then $$\alpha=f_{j-1}^k+r_k=f_{i-1}^{k-1}+a_k+r_{k}=f_{i-1}^{k-1}+r_{k-1}$$ and $\beta=f_i^{k-1}$ which means that $(\alpha,\,\beta)$ is a gap of order at most $k-1$, a contradiction. Therefore, we have $f_{i-1}^{k-1}>f_s^{k-1}$, but then $F_k\ni f_{i-1}^{k-1}+a_k > f_s^{k-1}+a_k=f_{j-1}^k$ which implies that $f_{i-1}^{k-1}+a_k\ge f_j^k=\beta$. Since $f_j^k=f_i^{k-1}$, it must be $f_{j-1}^k\ge f_{i-1}^{k-1}$ and hence $f_{i-1}^{k-1}+r_k\le f_{j-1}^k+r_k=\alpha$. Finally, $$\beta\,-\,\alpha\ \le (f_{i-1}^{k-1}+a_k)\,-\,(f_{i-1}^{k-1}+r_k)\ =\ a_k\,-\,r_k$$ and the proof is completed. ◻
As a simple corollary we obtain the following well-known and frequently used result [@recover Lemma 2.4].
**Corollary 3** (The Third Gap Lemma). *Every dominating gap is principal.*
Recall that a set $A$ in a topological space is said to be regularly closed if it is the closure of its interior, that is, $A=\overline{\text{int}\,A}$. This notion provides one more topological characterization of Cantorvals that we formulate below and that, unlike the other two principal topological characterizations of Cantorvals (cf. [@MO pp.331 and 343] and [@BFPW1 Thm. 21.17]), invokes neither $A$-gaps nor $A$-intervals.
**Theorem 4**. *A bounded set $A\subset\mathbb R$ is a Cantorval if and only if it is regularly closed and its boundary $\text{Fr}\,A$ is a Cantor set.*
Before proving it, let us note that, thanks to the above equivalence, the topological classification of achievement sets of absolutely convergent series amounts to choosing one property of the boundary: $\text{Fr}\,A$ is either a finite set or a Cantor set, and choosing the relative size of the interior: $A$ is either nowhere dense or regularly closed. Any of four such combinations of these properties characterizes exactly one topological type of achievement sets.
**Corollary 5**. *Let $A$ be an achievement set of an absolutely convergent series. Then $A$ is closed and bounded and:*
- *$A$ is nowhere dense and $\text{Fr}\,A$ is finite if and only if $A$ is a finite set;*
- *$A$ is nowhere dense and $\text{Fr}\,A$ is a Cantor set if and only if $A$ is a Cantor set;*
- *$A$ is regularly closed and $\text{Fr}\,A$ is finite if and only if $A$ is a multi-interval set;*
- *$A$ is regularly closed and $\text{Fr}\,A$ is a Cantor set if and only if $A$ is a Cantorval.*
Before proving Theorem [Theorem 4](#charCvl){reference-type="ref" reference="charCvl"} let us recall another characterization of Cantorvals.
**Theorem 6**. *[@BFPW1 Theorem 21.17] [\[21.17\]]{#21.17 label="21.17"} A nonempty perfect and compact set $P \subset \mathbb R$ is a Cantorval if and only if $P$-gaps and $P$-intervals do not have common endpoints and the union of all $P$-intervals is dense in $P$.*
Now, we can prove Theorem [Theorem 4](#charCvl){reference-type="ref" reference="charCvl"}.
*Proof of *Theorem [Theorem 4](#charCvl){reference-type="ref" reference="charCvl"}*.*
Let $A \subset \mathbb R$ be bounded.
($\Rightarrow$) Suppose that $A$ is a Cantorval. Then $A$ is closed and the union of all $A$-intervals is dense in $A$ (Theorem [\[21.17\]](#21.17){reference-type="ref" reference="21.17"}). Hence, the union of interiors of all $A$-intervals is dense in $A$ as well, but the last union is the interior of $A$. Thus, $A$ is regularly closed. The boundary of $A$ is a bounded nowhere dense closed subset of $\mathbb R$. It is nonempty, because it contains all endpoints of $A$-intervals. It remains to show that $\text{Fr}\,A$ is a perfect set. Indeed, if there was an isolated point in $\text{Fr}\,A$, it would be a common endpoint of an $A$-interval and an $A$-gap, because $A$ has no isolated points. Such common endpoints do not exist by Theorem [\[21.17\]](#21.17){reference-type="ref" reference="21.17"}, and hence $\text{Fr}\,A$ has no isolated points.
($\Leftarrow$) Suppose that $A$ is regularly closed and its boundary is a Cantor set. Then the interior of $A$ is equal to the union of the interiors of all $A$-intervals, which is contained in the union of all $A$-intervals, which in turn is a subset of $A$. Passing to closures, we obtain $$A\ =\ \overline{\text{int}\,A}\ \subset\ \overline{\text{the union of all $A$-intervals}}\ \subset \ A,$$ and thus the union of all $A$-intervals is dense in $A$. Since $\text{Fr}\,A$ is a Cantor set, it has no isolated points, and hence $A$-gaps and $A$-intervals cannot have common endpoints. From Theorem [\[21.17\]](#21.17){reference-type="ref" reference="21.17"} it follows that $A$ is a Cantorval. ◻
Note that the starting assumption of Theorem [Theorem 4](#charCvl){reference-type="ref" reference="charCvl"} cannot be removed, as the example of any Cantorval with its external gaps attached shows.
Each multi-interval set $W$ (that is, the union of a finite family of closed and bounded intervals) is the union of infinitely many different finite families of closed and bounded intervals. However, one of those families stands out and is uniquely determined by $W$, namely, the family of all connectivity components of $W$. Thus, unless specified otherwise, by writing $W=\bigcup_{i=1}^nP_i$, we mean that $\{P_i: \ 1\le i\le n\,\}$ is the family of connectivity components of $W$. Given a multi-interval set $W=\bigcup_{i=1}^nP_i$, we define $$||W||\ :=\ \max_{1\le i\le n}|P_i|.$$ If $(W_n)_{n\in\mathbb N}$ is a descending sequence of multi-interval sets, then $(||W_n||)_{n\in\mathbb N}$ is a nonincreasing sequence bounded from below and hence it converges to a nonnegative number.
**Lemma 7**. *Let $(W_n)_{n\in\mathbb N}$ be a descending sequence of multi-interval sets. Then $\bigcap_nW_n$ contains an interval if and only if $\lim\limits_{n\to\infty}||W_n||>0$.*
*Proof.* ($\Rightarrow$) If $[a,\,b]\subset \bigcap_nW_n$, then $||W_n||\ge b-a$ for all $n$.
($\Leftarrow$) Suppose that $g:=\lim\limits_{n\to\infty}||W_n||>0$. Let $P_n=[a_n,\,b_n]$ be the leftmost of all components of $W_n$ of maximal length. Then $|P_n|=||W_n||\ge g$. Let $s_n$ be the middle point of the interval $P_n$. Clearly, $s_n\in W_n\subset W_1$ and hence the sequence $(s_n)_{n\in\mathbb N}$ is bounded. There is a convergent subsequence $(s_{n_k})_{k\in\mathbb N}$. In particular, there exists $M\in\mathbb N$ such that $|s_{n_k}-s_{n_l}|<g/2$ for all $l>k>M$. Then $s_{n_k}\in P_{n_l}$ and thus $s_{n_k}\in P_{n_k}\cap P_{n_l}$. Now, since $W_{n_l}\subset W_{n_k}$ and $P_{n_k}\cap P_{n_l}$ is nonempty, it must be $P_{n_l}\subset P_{n_k}$. The sequence $(P_{n_i})_{i>M}$ is descending and $|P_{n_i}|\ge ||W_{n_i}||\ge g>0$ for all $i$. Therefore, $\bigcap_{i>M}P_{n_i}$ is a nondegenerate closed interval contained in $\bigcap_nW_n$. ◻
We are now going to give an analytic characterization of those achievement sets that contain an interval. Unfortunately, the limit whose value is decisive is too unwieldy to compute in most cases and makes the characterization unsatisfactory except for some very special cases like, for example, the Ferens series [@BP] or the Guthrie-Nymann-Jones series [@recover], [@B].
Given an $\epsilon>0$, a finite increasing sequence $(f_i)_{i=m}^n$ will be called *$\epsilon$-close* either if $m=n$ or if $m<n$ and the distance between any two consecutive terms of the sequence does not exceed $\epsilon$. An $\epsilon$-close subsequence of an increasing sequence $F=(f_i)_{i=1}^k$ will be called *maximal* if it is not contained in any longer $\epsilon$-close subsequence of $F$. The set of all maximal $\epsilon$-close subsequences of a sequence $F$ will be denoted by $M_\epsilon[F]$. Every finite increasing sequence of real numbers has a unique decomposition into finitely many disjoint maximal $\epsilon$-close subsequences. Given an $\epsilon$-close sequence $(f_i)_{i=m}^k$, we define the *stretch* of the sequence by $S[(f_i)]:= f_k-f_m$. Finally, given a finite increasing sequence $F$, we define $$\Delta_\epsilon F\ :=\ \max\bigl\{S[(f_i)]:\ (f_i)\in M_\epsilon[F]\,\bigr\}.$$ Now, let $\sum a_n$ be a convergent series of nonnegative and nonincreasing terms. For any $n\in\mathbb N$, the set $F_n$ of $n$-initial subsums $F_n=\{\sum_{i\in A}a_i\colon A\subset\{1,2, \ldots,n\} \,\}$ forms a finite increasing sequence. Recall that the symbol $r_n$ denotes the value of the $n$-th remainder of the series $\sum a_i$, that is, $r_n=\sum_{i=n+1}^\infty a_i$.
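The decomposition into maximal $\epsilon$-close subsequences is a purely mechanical procedure, so it may help to see it spelled out in code. The following sketch (ours; Python; the function names and the sample data are invented purely for illustration) computes the decomposition and the quantity $\Delta_\epsilon F$ for a finite increasing sequence.

```python
def eps_close_blocks(F, eps):
    """Split an increasing finite sequence F into its maximal eps-close subsequences:
    a new block starts exactly where two consecutive terms differ by more than eps."""
    blocks, current = [], [F[0]]
    for prev, x in zip(F, F[1:]):
        if x - prev <= eps:
            current.append(x)
        else:
            blocks.append(current)
            current = [x]
    blocks.append(current)
    return blocks

def delta(F, eps):
    """Delta_eps(F): the largest stretch (last term minus first term) among the blocks."""
    return max(block[-1] - block[0] for block in eps_close_blocks(F, eps))

F = [0.0, 0.1, 0.2, 0.9, 1.0, 2.5]     # toy data, not taken from the paper
print(eps_close_blocks(F, 0.15))       # -> [[0.0, 0.1, 0.2], [0.9, 1.0], [2.5]]
print(delta(F, 0.15))                  # -> 0.2
```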
**Proposition 8**. *Let $\sum a_n$ be a convergent series of nonnegative and nonincreasing terms. Then the following conditions are equivalent:*
- *the achievement set $A(a_n)$ contains an interval;*
- * $\lim\limits_{n\to\infty}\Delta_{r_n}F_n\,>\,0$;*
- * $\lim\limits_{k\to\infty}\Delta_{r_{n_k}}F_{n_k}\,>\,0$ for some increasing sequence $(n_k)$ of indices.*
*Proof.* It is well-known (see [@BFPW1]) that $$A(a_i)\ =\ \bigcap_{n=1}^\infty I_n,$$ where each $I_n=\bigcup_{f\in F_n}[f,\,f+r_n]$ is a multi-interval set. Since $||I_n||=\Delta_{r_n}F_n\,+\, r_n$ and $r_n\to0$, we have $\lim ||I_n||\,=\,\lim \,\Delta_{r_n}F_n$ and hence, by Lemma [Lemma 7](#multlem){reference-type="ref" reference="multlem"}, $A(a_i)$ contains an interval if and only if $\lim\limits_{n\to\infty}\Delta_{r_n}F_n>0$.
The equivalence (ii) $\Leftrightarrow$ (iii) and, actually, the equality of the limits follow from the fact that the sequence $(r_n+\Delta_{r_n}F_n)_{n\in\mathbb N}$ is nonincreasing. ◻
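Proposition [Proposition 8](#charint){reference-type="ref" reference="charint"} can also be probed numerically on truncations: by the proof above, $||I_n||=\Delta_{r_n}F_n+r_n$, so it is enough to track the longest component of $I_n$. Below is a small Python sketch of ours (the helper names are not from the paper; the exact remainders of the two sample series are plugged in by hand). The first series is the Cantor example $a_n=2/3^n$; the second is the multigeometric sequence $(14,\ldots,7;q)$ that will reappear in the proof of Proposition 13, with $q=1/30$ chosen inside its Cantorval range.

```python
def max_component_length(F, r):
    """||I_n||: the longest connectivity component of the union of [f, f+r], f in F."""
    F = sorted(F)
    best, lo, hi = 0.0, F[0], F[0] + r
    for f in F[1:]:
        if f <= hi:                            # intervals overlap or touch: same component
            hi = max(hi, f + r)
        else:                                  # a gap: close the current component
            best, lo, hi = max(best, hi - lo), f, f + r
    return max(best, hi - lo)

def subset_sums(terms):
    sums = {0.0}
    for x in terms:
        sums |= {s + x for s in sums}
    return sums

# (i) a_n = 2/3^n (Cantor set): r_n = 3^(-n) exactly; ||I_n|| tends to 0.
for n in (4, 8, 12):
    F_n = subset_sums([2 / 3**i for i in range(1, n + 1)])
    print("Cantor   ", n, max_component_length(F_n, 3.0 ** (-n)))

# (ii) the multigeometric sequence (14,...,7;q) with q = 1/30:
#      r_{8j} = 84*q^(j+1)/(1-q) exactly; ||I_n|| stays bounded away from 0.
q, block = 1 / 30, (14, 13, 12, 11, 10, 9, 8, 7)
for j in (2, 3):
    F_n = subset_sums([l * q**i for i in range(1, j + 1) for l in block])
    print("Cantorval", 8 * j, max_component_length(F_n, 84 * q**(j + 1) / (1 - q)))
```

The first values tend to $0$ (a Cantor set), while the second ones stay bounded away from $0$, as Lemma [Lemma 7](#multlem){reference-type="ref" reference="multlem"} predicts for a set containing an interval.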
# A family of achievable Cantorvals
We are going to present a new family of series whose achievement sets are Cantorvals. Unlike most of the known and widely used examples of achievable Cantorvals ([@WS], [@F], [@GN88], [@Jones], [@recover], [@BP], [@B], [@MM]), the series in our family do not need to be multigeometric. Our new family of achievable Cantorvals is large enough to serve later in this paper as a useful tool in finding a much-needed example that cannot be found among Cantorvals generated by multigeometric series (see Theorem [Theorem 25](#immortal){reference-type="ref" reference="immortal"}).
Given a sequence $m=(m_n)$ of positive integers greater than or equal to 2, a sequence $k=(k_n)$ of positive integers such that $k_n> m_n$ for all $n$, and a sequence $q=(q_n)$ of positive numbers, we will be saying that a series $\sum a_n$ is a *generalized Ferens series* (a GF series, for short) if for $i\in\mathbb N$, $n\in\big\{K_{i-1}+1,\,K_{i-1}+2,\,\ldots,\,K_i\,\big\}$, where $K_j:=\sum_{i=1}^jk_i$ for $j\in\mathbb N$ and $K_0:=0$, we have $$a_n\ =\ a_n(m,k,q)\ :=\ (m_i+K_i-n)q_i.$$ Given positive integers $p,r$ with $r>p\ge2$, we will use the symbol $s(p,r):=\sum_{i=1}^{r-1}(p+i)$. We will also write $s_n:=s(m_n,k_n)$. Given an $n\in\mathbb N$, the set of all possible sums formed by terms taken without repetition from the set $$\left\{a_p: \ p\in\{K_{n-1}+1,\, K_{n-1}+2,\,\ldots,\,K_n\,\}\,\right\}$$ is exactly the set $$\bigl(\{0\}\cup\{m_n,\,m_n+1,\,\ldots,\,s_n\}\cup\{s_n+m_n\}\bigr) q_n \qquad \text{(see \cite[Fact 3]{BP})}.$$
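The description of the one-block subset sums quoted above from [@BP Fact 3] can be confirmed by brute force for small parameters; the short Python sketch below (ours, purely illustrative) does exactly that for a few pairs $(m,k)$ with $k>m$.

```python
from itertools import combinations

def block_subset_sums(m, k):
    """All subset sums of one GF block {m, m+1, ..., m+k-1} (the factor q_n is omitted)."""
    terms = range(m, m + k)
    return sorted({sum(c) for r in range(k + 1) for c in combinations(terms, r)})

def predicted(m, k):
    """The set ({0} u {m, ..., s(m,k)} u {s(m,k)+m}) described in [BP, Fact 3]."""
    s = sum(m + i for i in range(1, k))
    return sorted({0} | set(range(m, s + 1)) | {s + m})

for m in range(2, 6):
    for k in range(m + 1, m + 4):
        assert block_subset_sums(m, k) == predicted(m, k), (m, k)
print("Fact 3 confirmed for all sampled pairs (m, k)")
```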
The next theorem gives a sufficient condition for the achievement set of a generalized Ferens series to be a Cantorval. The geometric idea behind this condition is similar to the one used in [@Now2].
**Theorem 9**. *If $\sum a_n(m,k,q)$ is a convergent GF series satisfying the conditions $$\label{gf1}
\tag{GF$_1$} \ \forall_{n\in\mathbb N}\qquad q_n\ \le\ (s_{n+1}-m_{n+1}+1)q_{n+1}$$ and $$\label{gf2}
\tag{GF$_2$} \ \forall_{n\in\mathbb N} \qquad m_nq_n\ >\ \sum_{i>n}(s_i+m_i)q_i,$$ then $A(a_n)$ is a Cantorval.*
*Proof.* Since $(a_n)$ is clearly decreasing on each block of indices $\{K_{i-1}+1, \ldots, K_i\}$, $i \in \mathbb N$, and [\[gf2\]](#gf2){reference-type="eqref" reference="gf2"} implies that $a_{K_i} = m_iq_i>(m_{i+1}+k_{i+1}-1)q_{i+1}=a_{K_i+1}$, we obtain that $(a_n)$ is decreasing. Our proof will rest on an application of Proposition [Proposition 8](#charint){reference-type="ref" reference="charint"} and hence we want to find long sequences in the sets $F_n$ with all consecutive terms sufficiently close. For that purpose, the following additional symbol will be quite useful: $$C_n\ :=\ \{m_nq_n,\,(m_n+1)q_n,\,\ldots,\,s_nq_n\,\} \subset \Sigma\{a_{K_{n-1}+1}, \ldots, a_{K_{n}}\} \qquad \text{for $n\in\mathbb N$}$$ (the inclusion follows from the description of the block subset sums recalled above). We are going to define inductively a special sequence $(D_n)_{n\in\mathbb N}$. First, we set $D_1:=C_1$. Clearly, $D_1\subset F_{K_1}$ and the distance between any two consecutive elements of $D_1$ equals $q_1$. Moreover, $\min D_1=m_1q_1$ and $\max D_1=s_1q_1$.
Suppose now that a set $D_n$ has been defined and satisfies the following conditions
- ($\alpha_n$) $D_n\,\subset \, F_{K_n}$;
- ($\beta_n$) the distance between any two consecutive points of $D_n$ does not exceed $q_n$;
- ($\gamma_n$) $\min D_n\,=\,\sum_{i=1}^nm_iq_i$ and $\max D_n\,=\,\sum_{i=1}^ns_iq_i$.
Then define $D_{n+1}:=D_n+C_{n+1}$. We see immediately that the set $D_{n+1}$ satisfies the conditions ($\alpha_{n+1}$) and ($\gamma_{n+1}$). In order to demonstrate ($\beta_{n+1}$), consider any two consecutive elements $f<g$ of $D_{n+1}$. Then $g=h_1+pq_{n+1}$ for some $h_1\in D_n$ and some $p\in\{m_{n+1},\ldots,\,s_{n+1}\}$. If $p>m_{n+1}$, then $h_1+(p-1)q_{n+1}\in D_{n+1}$ and it must be $h_1+(p-1)q_{n+1}\le f$, because $f<g$ are two consecutive elements of $D_{n+1}$. Hence $g-f\le g-h_1-(p-1)q_{n+1}=q_{n+1}$. If $p=m_{n+1}$, then from the fact that $g>f\ge\, \min D_{n+1}$, it follows that $h_1>\min D_n$. Hence, by ($\beta_n$), we can find $h_2\in D_n$ such that $h_1-q_n\le h_2<h_1$. Of course, $$\label{doda}
h_2\,+\,m_{n+1}q_{n+1}\ <\ h_1\,+\,m_{n+1}q_{n+1}\ =\ g.$$ On the other hand, $$\begin{aligned}
h_2\,+\,s_{n+1}q_{n+1}\ &\ge h_1\,-\,q_n\,+\,s_{n+1}q_{n+1}\\
&=\ h_1+\,m_{n+1}q_{n+1}\,+\,(s_{n+1}-m_{n+1}+1)q_{n+1}\,-\,q_n\,-\,q_{n+1}\ \overset{\eqref{gf1}}{\ge}\ g\,-\,q_{n+1}.\end{aligned}$$ Therefore, $$h_2\,+\,\min C_{n+1}\ \overset{\eqref{doda}}{<}\ g\ \le\ h_2\,+\,\max C_{n+1}\,+\,q_{n+1}.$$ Because any two consecutive elements of $h_2+C_{n+1}$ lie in the distance $q_{n+1}$, there is exactly one element of $h_2+C_{n+1}$ belonging to $[g-q_{n+1},\,g)$. This element belongs to $D_{n+1}$ which completes the proof of the property ($\beta_{n+1}$). Thus, by induction, the sets $D_n$ satisfying ($\alpha_n$), ($\beta_n$),($\gamma_n$) exist for every $n\in\mathbb N$. The elements of $D_n$ form an increasing finite sequence that is $q_n$-close and hence $$\Delta_{q_n}F_{K_n}\ \ge\ \max D_n\,-\,\min D_n\ =\ \sum_{i=1}^n(s_i-m_i)q_i.$$ Since $$r_{K_n}\,=\,\sum_{i>n}(s_i+m_i)q_i\ >\ (s_{n+1}-m_{n+1}+1)q_{n+1}\ \overset{\eqref{gf1}}{\ge}\ q_n,$$ it follows $$\Delta_{r_{K_n}}F_{K_n}\ \ge\ \sum_{i=1}^n(s_i-m_i)q_i$$ and finally $$\lim\limits_{n\to\infty}\Delta_{r_n}F_n\ =\ \lim\limits_{n\to\infty}\Delta_{r_{K_n}}F_{K_n}\ \ge\ \sum_{i=1}^\infty(s_i-m_i)q_i\ >\ 0.$$ Thus, the achievement set $A(a_n)$ contains an interval, by Proposition [Proposition 8](#charint){reference-type="ref" reference="charint"}. Since $$r_{K_n}\,=\,\sum_{i>n}(s_i+m_i)q_i\ \overset{\eqref{gf2}}{<}\ m_nq_n\,=\,a_{K_n}$$ for all $n$, the set $A(a_n)$ has infinitely many gaps. Therefore, by the Guthrie-Nymann Classification Theorem, $A(a_n)$ is a Cantorval. ◻
Now, we will show that there exist GF series satisfying the assumptions of Theorem [Theorem 9](#nmg1){reference-type="ref" reference="nmg1"}. We will also require the series to have some additional properties, which will be useful later.
**Theorem 10**. *For every sequence $m=(m_n)$ of positive integers greater than 1 and for any sequence $(c_n)$ with $c_n>1$ and $\alpha:=\sup_n\frac{c_n}{m_n}<1$, there are sequences $k=(k_n)\in\mathbb N^\mathbb N$ with $k_n>m_n$ and $q=(q_n)\in(0,\,1)^\mathbb N$ such that the GF series $\sum a_n(m,k,q)$ is convergent, $A(a_n)$ is a Cantorval, and $$\label{gwiazd}
(s_{n+1}+m_{n+1})q_{n+1}\ <\ c_nq_n \qquad \text{for all $n\in\mathbb N$},$$ and $$\label{2gwiazd}
c_n\ <\ (1-\alpha)(s_n+m_n) \qquad \text{for all $n\in\mathbb N$}.$$*
*Proof.* Take $q_1>0$. Choose $k_1>m_1$ such that $$c_{1}\ <\ (1-\alpha)\bigl(s(m_{1},k_1)+m_{1}).$$ We can do it because $s(m_{1},k)\nearrow +\infty$ as $k\to\infty$.
Now, suppose that $l$ is a positive integer such that $k_1,\,\ldots,\,k_l$ and $q_1,\,\ldots,\,q_l$ have been chosen and satisfy [\[gwiazd\]](#gwiazd){reference-type="eqref" reference="gwiazd"} for all $n<l$ and [\[2gwiazd\]](#2gwiazd){reference-type="eqref" reference="2gwiazd"} for all $n\le l$. Since $s(m_{l+1},k)\nearrow +\infty$ as $k\to\infty$, there is $T\in\mathbb N$, $T>m_{l+1}$ such that $$\label{3gwiazd}
\bigl(s(m_{l+1},\,T)-m_{l+1}+1\bigr)\ >\ \frac{2m_{l+1}}{c_l-1}$$ and $$c_{l+1}\ <\ (1-\alpha)\bigl(s(m_{l+1},\,T)+m_{l+1}).$$ Define $k_{l+1}:=T$ and $q_{l+1}:=\frac{q_l}{s_{l+1}-m_{l+1}+1}$ where $s_{l+1}=s(m_{l+1},k_{l+1})$. Then $$c_lq_l=c_lq_{l+1}(s_{l+1}-m_{l+1}+1)\,\overset{\eqref{3gwiazd}}{>}\,(s_{l+1}-m_{l+1}+1)q_{l+1}+2m_{l+1}q_{l+1}>(s_{l+1}+m_{l+1})q_{l+1},$$ that is, [\[gwiazd\]](#gwiazd){reference-type="eqref" reference="gwiazd"} holds for $n=l$. Thus, by induction, two sequences $k=(k_n)$ and $q=(q_n)$ have been defined and they satisfy [\[gwiazd\]](#gwiazd){reference-type="eqref" reference="gwiazd"} and [\[2gwiazd\]](#2gwiazd){reference-type="eqref" reference="2gwiazd"}.
We are now going to show that $\sum a_n(m,k,q)$ is a convergent GF series satisfying [\[gf1\]](#gf1){reference-type="eqref" reference="gf1"} and [\[gf2\]](#gf2){reference-type="eqref" reference="gf2"}. Clearly, $$\sum_{n=1}^\infty a_n\ =\ \sum_{n=1}^\infty\sum_{l=K_{n-1}+1}^{K_n} a_l\ \ =\ \ \sum_{n=1}^\infty(s_n+m_n)q_n$$ and $$\label{1-alfa}
\frac{(s_{n+1}+m_{n+1})q_{n+1}}{(s_n+m_n)q_n}\ \overset{\eqref{gwiazd}}{<}\ \frac{c_n}{s_n+m_n}\ \overset{\eqref{2gwiazd}}{<}\ 1\,-\alpha$$ for all $n$ which proves the convergence of $\sum a_n$. Observe that [\[gf1\]](#gf1){reference-type="eqref" reference="gf1"} is satisfied, by the definition of the sequence $(q_{n})$. Furthermore, $$\begin{aligned}
\sum_{i>n}(s_i+m_i)q_i\ \overset{\eqref{1-alfa}}{<}\ \sum_{j=0}^\infty(s_{n+1}+m_{n+1})q_{n+1}(1-\alpha)^j\ =\ \frac1\alpha(s_{n+1}+m_{n+1})q_{n+1}\ \overset{\eqref{gwiazd}}{<}\frac{c_n}{\alpha}q_n\ \le \ m_nq_n.\end{aligned}$$ Therefore, [\[gf2\]](#gf2){reference-type="eqref" reference="gf2"} holds as well and hence, by Theorem [Theorem 9](#nmg1){reference-type="ref" reference="nmg1"}, $A(a_n)$ is a Cantorval. ◻
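The inductive choice of $(k_n)$ and $(q_n)$ made in the proof above is completely explicit, so it can be carried out mechanically. The sketch below (ours; Python; the constant choices $m_n=10$, $c_n=2$, $q_1=1$ are an illustrative assumption only, and $\alpha$ is estimated on the computed range) produces the first few pairs and checks (GF$_1$) together with a truncated version of (GF$_2$).

```python
def s(p, r):                                     # s(p, r) = sum_{i=1}^{r-1} (p + i)
    return sum(p + i for i in range(1, r))

def construct(m, c, steps):
    """Follow the inductive choices of (k_n) and (q_n) from the proof of Theorem 10."""
    alpha = max(c(n) / m(n) for n in range(1, steps + 1))   # stands in for sup_n c_n/m_n
    q, k = [1.0], []                                        # q_1 = 1 (any q_1 > 0 works)
    t = m(1) + 1                                            # k_1: smallest k > m_1 with
    while not c(1) < (1 - alpha) * (s(m(1), t) + m(1)):     #   c_1 < (1-alpha)(s(m_1,k)+m_1)
        t += 1
    k.append(t)
    for l in range(1, steps):
        ml, cl = m(l + 1), c(l)
        t = ml + 1
        while not (s(ml, t) - ml + 1 > 2 * ml / (cl - 1)
                   and c(l + 1) < (1 - alpha) * (s(ml, t) + ml)):
            t += 1
        k.append(t)
        q.append(q[-1] / (s(ml, t) - ml + 1))
    return k, q

m, c = (lambda n: 10), (lambda n: 2.0)           # illustrative constant data, alpha = 0.2
k, q = construct(m, c, 6)
print(k, q)
# (GF_1) on the computed range: q_n <= (s_{n+1} - m_{n+1} + 1) q_{n+1}
print(all(q[n] <= (s(m(n + 2), k[n + 1]) - m(n + 2) + 1) * q[n + 1] * (1 + 1e-12)
          for n in range(5)))
# (GF_2) with the infinite tail truncated at the computed range (a partial check only)
print(all(m(n + 1) * q[n] > sum((s(m(i + 1), k[i]) + m(i + 1)) * q[i] for i in range(n + 1, 6))
          for n in range(5)))
```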
For all known achievable Cantorvals, the Lebesgue measure is equal to the measure of the interior. The same holds for Cantorvals satisfying the assumptions of Theorem [Theorem 9](#nmg1){reference-type="ref" reference="nmg1"}.
**Theorem 11**. *If $\sum a_n(m,k,q)$ is a convergent GF series satisfying the conditions [\[gf1\]](#gf1){reference-type="eqref" reference="gf1"} and [\[gf2\]](#gf2){reference-type="eqref" reference="gf2"}, then the Lebesgue measure of the Cantorval $A(a_n)$ equals to the measure of its interior.*
*Proof.* If the series $\sum a_n(m,k,q)$ satisfies the assumptions of our theorem, then $A(a_n)$ is a Cantorval by virtue of Theorem [Theorem 9](#nmg1){reference-type="ref" reference="nmg1"}. Observe that $A$-gaps of order $k$ exist if and only if $k=K_n$ for some $n\in\mathbb N$. Moreover, $A=\bigcap_{n\in\mathbb N}I_{K_n}$. Each $I_{K_n}$ is a multi-interval set and every component of $I_{K_n}$ divides into $3$ components of $I_{K_{n+1}}$ with distance between them equal to $m_{n+1}q_{n+1}-r_{K_{n+1}}$. Therefore, the set $I_{K_{n-1}}\setminus I_{K_n}$, that is, the union of all $A$-gaps of order $K_n$, consists of $2\cdot3^{n-1}$ open intervals, each of length $m_nq_n-r_{K_n}$. Thus, $$\begin{aligned}
\label{cruz}
\lambda (A)\ =\ &\lambda\left([0,\,\sum_{i=1}^\infty(m_i+s_i)q_i]\setminus\bigl([0,\,\sum_{i=1}^\infty(m_i+s_i)q_i]\setminus A\bigr)\right)\\
&=\ \sum_{i=1}^\infty(m_i+s_i)q_i\ -\ \lambda\left(\bigcup_{n=1}^\infty\bigl(I_{K_{n-1}}\setminus I_{K_n}\bigr)\right) \notag \\
&=\ \sum_{i=1}^\infty(m_i+s_i)q_i\ -\ \sum_{n=1}^\infty2\cdot3^{n-1}\bigl(m_nq_n\ -\ \sum_{i=n+1}^\infty(m_i+s_i)q_i\bigr). \notag\end{aligned}$$ Each iteration $I_{K_n}$ for $n\in\mathbb N_0$ has a central interval of the form $\left[\sum_{i=1}^nm_iq_i,\,\sigma-\sum_{i=1}^nm_iq_i\right]$, where $\sigma:=\sum_{i=1}^\infty(m_i+s_i)q_i$ denotes the sum of the series (with the convention $\sum_{i=1}^0m_iq_i=0$). The intersection of all these central intervals is the interval $\left[\sum_{i=1}^\infty m_iq_i,\,\sum_{i=1}^\infty s_iq_i\right]$, which is a connectivity component of $A$. The iteration $I_{K_1}$ consists of three intervals, the central of which is involved in producing the central interval of $A(a_n)$. The other two intervals of $I_{K_1}$: $B:=\bigl[0,\,\sum_{i=2}^\infty(m_i+s_i)q_i\bigr]$ and $C:=\bigl[(m_1+s_1)q_1,\,(m_1+s_1)q_1+\sum_{i=2}^\infty(m_i+s_i)q_i\bigr]$ give rise to two $A$-intervals $\bigl[\sum_{i=2}^\infty m_iq_i,\,\sum_{i=2}^\infty s_iq_i\bigr]$ and $\bigl[(m_1+s_1)q_1+\sum_{i=2}^\infty m_iq_i,\,(m_1+s_1)q_1+\sum_{i=2}^\infty s_iq_i\bigr]$ concentric with $B$ and $C$, respectively.
The iteration $I_{K_2}$ consists of nine disjoint closed intervals. Three of them are concentric with component intervals of $I_{K_1}$. Each of the remaining six is concentric with an $A$-interval of length $\sum_{i=3}^\infty(s_i-m_i)q_i$. Continuing in this fashion, we see that for any $n\in\mathbb N_0$, each component interval of $I_{K_n}$ is concentric with an $A$-interval. Let $\mathcal{P}$ be the family of all $A$-intervals concentric with at least one component of at least one iteration $I_{K_n}$, $n\in\mathbb N_0$. For an interval $P\in\mathcal{P}$ let us define $$n_P-1\ :=\ \min\bigl\{l:\ I_{K_l} \ \text{has a component concentric with $P$}\,\bigr\}.$$ Then $|P|=\sum_{i=n_P}^\infty(s_i-m_i)q_i$. Moreover, given a positive integer $n\ge2$, there are exactly $2\cdot3^{n-2}$ $A$-intervals $P$ for which $n_P=n$ and there is exactly one $A$-interval $P$ for which $n_P=1$. Thus, $$\begin{aligned}
\lambda(\text{int}\,A)\ &\ge\ \sum_{n=1}^\infty\,\sum_{\substack{P\in\mathcal{P}\\n_P=n}}|P|\ =\ \sum_{i=1}^\infty(s_i-m_i)q_i\ +\ \sum_{n=2}^\infty 2\cdot3^{n-2}\sum_{i=n}^\infty(s_i-m_i)q_i\\
&=\ \sum_{i=1}^\infty(s_i+m_i)q_i\ -\ 2\left(\sum_{i=1}^\infty m_iq_i\ -\ \sum_{n=2}^\infty3^{n-2}\sum_{i=n}^\infty(s_i-m_i)q_i\right)\\
&=\ \sum_{i=1}^\infty(s_i+m_i)q_i\ -\ 2\left(\sum_{n=1}^\infty m_nq_n\ +\ \sum_{n=1}^\infty2\cdot3^{n-1}\sum_{i=n+1}^\infty m_iq_i\ -\ \sum_{n=1}^\infty 3^{n-1}\sum_{i=n+1}^\infty(s_i+m_i)q_i\right)\\
&=\ \sum_{i=1}^\infty(s_i+m_i)q_i\ -\ 2\left(\sum_{n=1}^\infty m_nq_n\ +\ \sum_{i=2}^\infty m_iq_i\sum_{n=2}^i2\cdot3^{n-2}\ -\ \sum_{n=1}^\infty 3^{n-1}\sum_{i=n+1}^\infty(s_i+m_i)q_i\right)\\
&=\ \sum_{i=1}^\infty(s_i+m_i)q_i\ -\ 2\left(\sum_{n=1}^\infty m_nq_n\ +\ \sum_{i=2}^\infty (3^{i-1}-1)m_iq_i\ -\ \sum_{n=1}^\infty 3^{n-1}\sum_{i=n+1}^\infty(s_i+m_i)q_i\right)\\
&=\ \sum_{i=1}^\infty(s_i+m_i)q_i\ -\ 2\left(m_1q_1\ +\ \sum_{n=2}^\infty3^{n-1}m_nq_n\ -\ \sum_{n=1}^\infty 3^{n-1}\sum_{i=n+1}^\infty(s_i+m_i)q_i\right)\\
&=\ \sum_{i=1}^\infty(s_i+m_i)q_i\ -\ 2\left(\sum_{n=1}^\infty 3^{n-1}\bigl(m_nq_n\ -\ \sum_{i=n+1}^\infty(s_i+m_i)q_i\bigr)\right)\ \ \overset{\eqref{cruz}}{=} \ \lambda (A).\end{aligned}$$ ◻
# Algebraic sums of achievable sets
The main goal of this section is to check what types of sets we can obtain as algebraic sums of various combinations of achievement sets. In particular, we are interested in what we can get as a sum of two or more copies of the same Cantorval or of the same Cantor set.
We use the following result, which comes from [@BBFS].
**Theorem 12**. *Let $a_1\geq a_2\geq \ldots\geq a_m$ be positive integers such that $a_i-a_{i+1} \leq a_m$ for $i \in \{1,2,\ldots,m-1\}$. Assume that there exist positive integers $n_0$ and $r$ such that $\Sigma\{a_1,a_2,\ldots,a_m\}\supset\{n_0,n_0+1,\ldots,n_0+r\}$. If $q\geq\frac{1}{r+1}$, then $A(a_1,a_2,\ldots,a_m;q)$ has a nonempty interior. If $q<\frac{a_m}{\sum_{i=1}^{m}a_i+a_m}$, then $A(a_1,a_2,\ldots,a_m;q)$ is not a finite union of intervals. Consequently, if $\frac{1}{r+1}\leq q <\frac{a_m}{\sum_{i=1}^{m}a_i+a_m}$, then $A(a_1,a_2,\ldots,a_m;q)$ is a Cantorval.*
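Checking the hypotheses of Theorem [Theorem 12](#sufficientcantorval){reference-type="ref" reference="sufficientcantorval"} for a concrete tuple is routine arithmetic, which the following Python sketch automates (the code and the function name are ours; the tuple is assumed to satisfy $a_i-a_{i+1}\le a_m$, which is not re-checked here). The demo uses the two tuples that appear in the proof of Proposition 13 below.

```python
from fractions import Fraction
from itertools import combinations

def theorem12_range(a):
    """Return (1/(r+1), a_m/(sum a_i + a_m)), where r is the length of the longest run
    {n_0, ..., n_0 + r} of consecutive integers contained in Sigma{a_1, ..., a_m};
    Theorem 12 then yields a Cantorval for every ratio q with 1/(r+1) <= q < a_m/(sum+a_m)."""
    sums = sorted({sum(c) for size in range(len(a) + 1) for c in combinations(a, size)})
    best = run = 0
    for prev, x in zip(sums[1:], sums[2:]):        # runs among the positive subset sums
        run = run + 1 if x == prev + 1 else 0
        best = max(best, run)
    return Fraction(1, best + 1), Fraction(min(a), sum(a) + min(a))

print(theorem12_range([14, 13, 12, 11, 10, 9, 8, 7]))
# -> (Fraction(1, 71), Fraction(1, 13))
print(theorem12_range([14, 14, 13, 13, 12, 12, 11, 11, 10, 10, 9, 9, 8, 8, 7, 7]))
# -> (Fraction(1, 155), Fraction(1, 25))
```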
**Proposition 13**. *There exists an achievable Cantorval $D$ such that $D+D$ is also a Cantorval.*
*Proof.* Consider $D_q=A(14,13,12,11,10,9,8,7;q)$. We have\
$\Sigma \{7,8,9,10,11,12,13,14\}=\{0,7,8,9,\ldots,76,77,84\}$. By Theorem [Theorem 12](#sufficientcantorval){reference-type="ref" reference="sufficientcantorval"}, the set $D_q$ is a Cantorval for all $\frac{1}{71}\leq q <\frac{1}{13}$.
We have $D_q+D_q=A(14,14,13,13,12,12,11,11,10,10,9,9,8,8,7,7;q)$ and $$\Sigma \{14,14,13,13,12,12,11,11,10,10,9,9,8,8,7,7\}=\{0,7,8,9,\ldots,160,161,168\}.$$ Using again Theorem [Theorem 12](#sufficientcantorval){reference-type="ref" reference="sufficientcantorval"}, we get that $D_q+D_q$ is a Cantorval for every $\frac{1}{155}\leq q <\frac{1}{25}$. We have proved that both sets $D_q$ and $D_q+D_q$ are Cantorvals for each $\frac{1}{71}\leq q <\frac{1}{25}$. ◻
**Proposition 14**. *Let $k\in\mathbb{N}$. There exists an achievable Cantorval $D$ such that $\underbrace{D+D+\ldots+D}_{k \ \text{times}}$ is also a Cantorval.*
*Proof.* Take a natural number $m > \frac{1+3k+\sqrt{9k^2+42k+1}}{6}$. Consider $D_q=A(2m,2m-1,\ldots,m+2,m+1,m;q)$ and $B = \{m,m+1, \ldots, 2m\}.$ We have $\Sigma B=\{0,m,m+1,m+2,\ldots,\frac{3m^2+m}{2}-1,\frac{3m^2+m}{2},\frac{3m^2+3m}{2}\}$. By Theorem [Theorem 12](#sufficientcantorval){reference-type="ref" reference="sufficientcantorval"}, the set $D_q$ is a Cantorval for all $\frac{2}{3m^2-m+2}\leq q <\frac{2}{3m+5}$.
We have $\underbrace{D_q+\ldots+D_q}_{k \ \text{times}}=A(\underbrace{2m,\ldots,2m}_{k \ \text{times}},\underbrace{2m-1,\ldots,2m-1}_{k \ \text{times}},\ldots,\underbrace{m,\ldots,m}_{k \ \text{times}};q)$ and $$\Sigma \{\underbrace{m,\ldots,m}_{k \ \text{times}},\ldots,\underbrace{2m,\ldots,2m}_{k \ \text{times}}\} =\{0,m,m+1,m+2,\ldots,\frac{3m^2k+3mk-2m}{2}-1,\frac{3m^2k+3mk-2m}{2},\frac{3m^2k+3mk}{2}\}.$$ From Theorem [Theorem 12](#sufficientcantorval){reference-type="ref" reference="sufficientcantorval"} we know that $\underbrace{D_q+\ldots+D_q}_{k \ \text{times}}$ is a Cantorval for $\frac{2}{3m^2k+3mk-4m+2}\leq q <\frac{2}{3mk+3k+2}$. Thus, both $D_q$ and $\underbrace{D_q+\ldots+D_q}_{k \ \text{times}}$ are Cantorvals for $\frac{2}{3m^2-m+2}\leq q <\frac{2}{3mk+3k+2}$. Since $m > \frac{1+3k+\sqrt{9k^2+42k+1}}{6}$, we have $3m^2-m+2 > 3mk+3k+2$, therefore there is $q \in [\frac{2}{3m^2-m+2}, \frac{2}{3mk+3k+2}).$ ◻
From [@GM23 Theorem 6.4] (and its proof) we obtain the following proposition.
**Proposition 15**. *For any achievable Cantorval $D$ there exists an achievable Cantor set $C$ such that $D+C$ is a Cantorval.*
**Proposition 16**. *For any achievable Cantorval $D$ there exists an achievable Cantor set C such that $D+C$ is an interval.*
*Proof.* Let $D=A(a_n)$ and let $P:=\{n\in\mathbb N: \ (r_n,\,a_n) \text{ is a dominating gap}\,\}$. Arrange all elements of $P$ in increasing order, $P=(p_i)_{i=1}^\infty$. The series $\sum_{i\in\mathbb N}a_{p_i}$ is fast convergent.
*We are going to define an increasing sequence $(n_i)_{i=1}^\infty$ of positive integers and another sequence $(N_i)_{i=1}^\infty$ of positive integers, the latter one not necessarily monotone. We start with $n_1=N_1 :=1$ and continue by induction for $k\ge2$, defining $$\label{dos}
n_k\ :=\ \min\left\{ s>n_{k-1}:\ \sum_{i\ge s}(2a_{p_i}-r_{p_i})\ <\ r_{p_{n_{k-1}}}\ \right\}$$ and then choosing $N_k$ to be the unique positive integer such that $$\label{uno}
a_{p_{n_{k-1}}}-r_{p_{n_{k-1}}}\ \le\ N_ka_{p_{n_k}}\ <\ a_{p_{n_{k-1}}}-r_{p_{n_{k-1}}}+a_{p_{n_k}}.$$ The series $\sum_i(a_{p_{n_i}}; N_i)$ is semi-fast convergent and thus its set of subsums $C:=A\bigl(a_{p_{n_i}}; N_i\bigr)$ is a Cantor set [@BFPW2 Thm.16]. Indeed,*
*$$\begin{aligned}
\sum_{i>k}N_ia_{p_{n_i}}\ &\overset{\eqref{uno}}{<}\ \ \sum_{i=k}^\infty(a_{p_{n_i}}-r_{p_{n_i}})\ +\ \sum_{i>k}a_{p_{n_i}}\\[.1in]
&\le\ a_{p_{n_k}}-r_{p_{n_k}}\,+\,\sum_{j\ge n_{k+1}}(a_{p_j}-r_{p_j})\,+\ \sum_{j\ge n_{k+1}}a_{p_j}\ \overset{\eqref{dos}}{<}\ a_{p_{n_k}}-r_{p_{n_k}}\,+\,r_{p_{n_k}}=a_{p_{n_k}}.\end{aligned}$$*
*It remains to observe that the unique monotone mix $(c_n)$ of sequences $(a_n)$ and $(a_{p_{n_i}};N_i)$ is slowly convergent. Clearly, $(c_n)$ is also the monotone mix of two other sequences: $(a_n)_{n\not\in\{p_{n_k}:\,k\in\mathbb N\,\}}$ and $(a_{p_{n_i}};N_i+1)$. In particular, $c_n\le r_n^c$ \[the last symbol stands for the $n$-th remainder of the series $\sum c_n$ \] for all such $n$ that do not satisfy $$\label{tres}
\exists_{i\in\mathbb N} \qquad
n=\max\{k: \ c_k=a_{p_{n_i}}\,\},$$ because then $c_n=c_{n+1}$ or $c_n=a_j$ for some $j\not\in\{p_{n_i}:\ i\in\mathbb N\,\}$. In the latter case, if $a_j\le r_j^a$, then $c_n\le r_j^a\le r_n^c$. If, however, $a_j>r_j^a$, then denoting $i:=\max\{s:\ a_{p_{n_s}}\ge a_j\,\}$, we get $a_{p_{n_{i+1}}}< a_j$ and, because the $D$-gap $(r_{p_{n_i}}^a,\,a_{p_{n_i}})$ is dominating, $$a_j\,-\,r_j^a\ \le\ a_{p_{n_i}}\,-\,r_{p_{n_i}}^a\ \overset{\eqref{uno}}{\le}\ N_{i+1}a_{p_{n_{i+1}}}.$$ Therefore, $$c_n\ =\ a_j\ \le\ r_j^a\,+\, N_{i+1}a_{p_{n_{i+1}}}\ <\ r_j^a\,+\,\sum_{t>i}N_ta_{p_{n_t}}\ =\ r_n^c.$$ It remains to consider indices $n$ such that [\[tres\]](#tres){reference-type="eqref" reference="tres"} holds. In this case $c_n=a_{p_{n_i}}$ and $$c_n\ =\ a_{p_{n_i}}\ \overset{\eqref{uno}}{\le} \ \ r_{p_{n_i}}^a\,+\, N_{i+1}a_{p_{n_{i+1}}}\ <\ r_{p_{n_i}}^a\,+\,\sum_{j>i}N_ja_{p_{n_j}}\ =\ r_n^c.$$ ◻*
In [@BPW Theorem 16] the authors proved the following result.
**Proposition 17**. *There exists a Cantorval $D=A(a_n)$ (specifically, the Guthrie--Nymann Cantorval) such that for any partition of $(a_n)$ into two infinite sequences $(y_n)$ and $(z_n)$ both $A(y_n)$ and $A(z_n)$ are Cantor sets.*
**Remark 18**. *Note that in [@GM23] the authors showed that for $A=A(2^m,\ldots,4,3,2;\frac{1}{4})$ with any $m\in\mathbb{N}$ the set of all points with a unique representation is dense in $A$. In the same paper it was shown that if the set of all points of an achievement set $A(a_n)$ which have a unique representation is dense in $A(a_n)$, then for any partition of $(a_n)$ into two infinite sequences $(y_n)$ and $(z_n)$ both $A(y_n)$ and $A(z_n)$ are Cantor sets. Hence $A$ also has the property of the set $D$ in Proposition [Proposition 17](#rozkladnadwacantory){reference-type="ref" reference="rozkladnadwacantory"}.*
**Example 19**. *Note that one can find an interval-filling sequence $(a_n)$ such that for any of its decompositions into two infinite subsequences $(y_n)$ and $(z_n)$ both sets $A(y_n)$ and $A(z_n)$ are Cantor sets. Indeed, let $a_n=\frac{1}{2^n}$; then $A(a_n)=[0,1]$. Since $a_n=r_n$ for all $n\in\mathbb{N}$, any infinite, non-cofinite subsequence of $(a_n)$ is fast convergent.*
The next lemma is Lemma 9 from [@BP] and will be crucial for our next theorem; first, we need to introduce one more symbol, used in the lemma below and in the proof of the next theorem. If $S$ is a finite subset of $\mathbb R$, then we define $\delta(S):=\min\{|s-t|:\ s,t\in S, s\ne t\,\}$.
**Lemma 20**. *If all terms of the series $\sum a_n$ are positive and $r_n<\delta(F_n)$ for infinitely many $n$, then $A(a_n)$ is a Cantor set.*
*Proof.* Take any $n$ such that $r_n<\delta(F_n)$. We have $$A(a_k)=F_n+E_n\subset F_n+[0,r_n]=\bigcup_{i=1}^{m_n}[f_i^n,f_i^n+r_n]$$ and, by the assumption that $r_n<\delta(F_n)$, we get $[f_i^n,f_i^n+r_n]\cap [f_j^n,f_j^n+r_n]=\emptyset$ for $i,j\in\{1,\ldots, m_n\}$, $i\neq j$. Hence $A(a_k)$ contains no interval longer than $r_n$. But $\lim\limits_{n\to\infty}r_n=0$, so $A(a_k)$ has an empty interior. ◻
In the paper [@PWT18] it was shown that every central Cantor set is the algebraic sum of two central Cantor sets of Lebesgue measure zero. The following theorem generalizes that result.
**Theorem 21**. *Every infinite achievable set is the algebraic sum of two achievable Cantor sets of Lebesgue measure zero.*
*Proof.* Let $A(a_n)$ be an infinite achievement set. Define $n_0 = 1$ and $y_1=a_1$. Then $F^y_1=\{0,a_1\}$, so $\delta(F_1^y)=a_1$. One can find $n_1$ such that $r_{n_1}<a_1$. We define $z_1=a_2, z_2=a_3, \ldots, z_{n_1-1}=a_{n_1}$. Denote $F_{n_1-1}^z=A\big((z_k)_{k=1}^{n_1-1}\big)=A\big((a_k)_{k=2}^{n_1}\big)$. Let $n_2>n_1$ be such that $r_{n_2}<\frac{1}{2\cdot2^{n_1-1}}\cdot\delta(F_{n_1-1}^z)$. We put $y_2=a_{n_1+1}, y_3=a_{n_1+2}, \ldots, y_{n_2-n_1+1}=a_{n_2}$. In the next step we consider $F_{n_2-n_1+1}^y=A\big((y_k)_{k=1}^{n_2-n_1+1}\big)$ and find $n_3>n_2$ such that $r_{n_3}<\frac{1}{3\cdot2^{n_2-n_1+1}}\cdot\delta(F_{n_2-n_1+1}^y)$ to define consecutive terms of $(z_k)$.
Generally, assume that we have defined $n_k$ for some $k \in \mathbb N$ and we have used all of the first $n_k$ terms of the sequence $(a_n)$ to define sequences $y$ and $z$. If $k$ is odd, then we take $n_{k+1} > n_k$ such that $r_{n_{k+1}}<\frac{1}{(k+1)\cdot2^{p_k}}\cdot\delta(F_{p_k}^z)$, where $p_k=\sum_{i=0}^\frac{k-1}{2} (n_{2i+1}-n_{2i})$ is the number of elements of the sequence $z$ defined so far. Then we define consecutive terms of the sequence $y$ as $a_{n_k+1}, a_{n_k+2}, \ldots, a_{n_{k+1}}.$ If $k$ is even, then we take $n_{k+1} > n_k$ such that $r_{n_{k+1}}<\frac{1}{(k+1)\cdot2^{q_k}}\cdot\delta(F_{q_k}^y)$, where $q_k=1+\sum_{i=1}^\frac{k}{2} (n_{2i}-n_{2i-1})$ is the number of elements of the sequence $y$ defined so far. Then we define consecutive terms of the sequence $z$ as $a_{n_k+1}, a_{n_k+2}, \ldots, a_{n_{k+1}}.$
Note that both $(y_k)$ and $(z_k)$ satisfy the assumptions of Lemma [Lemma 20](#rozbicie){reference-type="ref" reference="rozbicie"}. Thus, $A(y)$ and $A(z)$ are Cantor sets. Moreover, for even $k$ we have $A(y) \subset F_{q_k}^y +[0,r^y_{q_k}]$, where $r^y_i$ is the $i$-th remainder for the series $\sum y_n$. Since $|F_{q_k}| \leq 2^{q_k}$ and $$r^y_{q_k}< r_{n_{k+1}} < \frac{1}{(k+1)\cdot2^{q_k}}\cdot\delta(F_{q_k}^y),$$ we have $$\lambda(A(y)) \leq \lambda(F_{q_k}^y +[0,r^y_{q_k}]) \leq |F_{q_k}^y| \cdot r^y_{q_k} < 2^{q_k} \cdot \frac{1}{(k+1)\cdot2^{q_k}}\cdot\delta(F_{q_k}^y) \leq \frac{a_1}{k+1} \to 0$$ where $\lambda$ denotes the Lebesgue measure. Similarly, we show that the measure of $A(z)$ is zero.
Observe that $y \cup z$ is a decomposition of the sequence $(a_n)$, and so $A(y) + A(z) = A(a_n)$. ◻
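The proof of Theorem 21 is constructive, and the block-by-block splitting can be imitated numerically. The sketch below is ours: it uses a simplified uniform bound $\delta(F)/((k+1)2^{|F|})$ at every step and a truncated geometric series, so it only illustrates the scheme and is not a verbatim implementation. It splits a prefix of the series $\sum 3^{-n}$ into two subsequences following the alternating construction.

```python
from fractions import Fraction

N = 20
a = [Fraction(1, 3) ** n for n in range(1, N + 1)]        # a_n = 3^(-n)
tail = [sum(a[i:], Fraction(0)) for i in range(N + 1)]    # tail[i] = a_{i+1} + a_{i+2} + ...

def delta(points):
    # smallest distance between two distinct points of a finite set
    pts = sorted(points)
    return min(q - p for p, q in zip(pts, pts[1:]))

def finite_sums(terms):
    # all subset sums of the finite list `terms`
    sums = {Fraction(0)}
    for t in terms:
        sums |= {s + t for s in sums}
    return sums

y, z = [a[0]], []            # y_1 = a_1, as in the proof
used, k = 1, 1
target, other = z, y         # the next block of terms goes to z
while used < N:
    F = finite_sums(other)                                  # finite sums of the finished part
    bound = delta(F) / (Fraction(k + 1) * 2 ** len(other))
    while True:                                             # consume at least one term per block
        target.append(a[used]); used += 1
        if used >= N or tail[used] < bound:
            break
    target, other = other, target
    k += 1

print(len(y), len(z), sum(y) + sum(z) == sum(a))            # the two parts partition the prefix
```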
**Remark 22**. *It is known (see [@AI]) that the achievement set of a fast convergent series is a central Cantor set. Note that if $(a_n)$ in the above theorem is fast convergent, then also $y$ and $z$ are fast convergent, so $A(y)$ and $A(z)$ are central Cantor sets.*
**Remark 23**. *From [@Now1 Proposition 2.11] we can infer something even stronger for an achievable set which is an interval. Namely, it can be represented as a sum of two central Cantor sets (that is, the achievement sets of fast convergent series) of Hausdorff dimension zero.*
**Remark 24**. *In [@Nymannlin Example 1] the author gave an example of a Cantor set $C$ such that for any $k\in\mathbb{N}$ the sum $\underbrace{C+\ldots+C}_{k \ \text{times}}$ is not an interval (actually, it can be shown that this sum is a Cantor set for any $k$). He also characterized the situation by the condition $\limsup\frac{a_n}{r_n}=\infty$. Note that if $A(a_n)$ is a multi-interval set, then the inequality $a_n>r_n$ holds only for finitely many $n$, so $\limsup\frac{a_n}{r_n}<\infty$. Hence there is no such construction for that case.*
**Theorem 25**. *There is an achievable Cantorval such that the algebraic sum of any finite number of copies of it remains a Cantorval.*
*Proof.* It is well-known that for any infinite achievement set $A(a_n)$ the algebraic sum of any finite number of copies of $A(a_n)$ is not a multi-interval set if and only if $\limsup\frac{a_n}{r_n}=+\infty$. Moreover, if $\limsup \frac{a_n}{r_n}<+\infty$, then the algebraic sum of sufficiently many copies of $A(a_n)$ is an interval.
Thus, in order to prove the theorem, it suffices to construct a convergent GF series $\sum a(m,k,q)$ such that $A(a_n)$ is a Cantorval and $\limsup_n\frac{a_n}{r_n}=+\infty$. To do that, take $m_n:=n+1$ and $c_n:=\frac32$ for all $n\in\mathbb N$. Then, by Theorem [Theorem 10](#nmg2){reference-type="ref" reference="nmg2"}, there is a convergent GF series $\sum a(m,k,q)$ with $A(a_n)$ being a Cantorval and satisfying additionally $$\label{imm1}
(s_{n+1}+m_{n+1})q_{n+1}\ <\ \frac32\,q_n \qquad \text{ for all $n$}$$ and $$\label{imm2}
\frac32\ <\ \left(1\,-\,\frac34\right)(s_n+m_n) \qquad \text{ for all $n$.}$$ Then, as in the proof of the Theorem [Theorem 10](#nmg2){reference-type="ref" reference="nmg2"}, $$\frac{(s_{n+1}+m_{n+1})q_{n+1}}{(s_n+m_n)q_n}\ <\ \frac14 \qquad \text{ for all $n$}$$ and $$r_{K_n}\,=\,\sum_{i>n}(s_i+m_i)q_i\ \overset{\eqref{imm2}}{<}\ \sum_{j=0}^\infty(s_{n+1}+m_{n+1})q_{n+1}\frac1{4^j}\ =\ \frac43(s_{n+1}+m_{n+1})q_{n+1}\ \overset{\eqref{imm1}}{<} 2q_n$$ for all $n$. Thus, $$\limsup\limits_{n\to \infty}\frac{a_n}{r_n}\ \ge\ \limsup\limits_{n\to \infty}\frac{a_{K_n}}{r_{K_n}}\ =\ \lim\limits_{n\to\infty}\frac{m_n}2\ =\ +\infty.$$ ◻
**Example 26**. *Let $C=A(\frac{1}{m^n})$, $m\in\mathbb{N}$, $m\geq 3$. Then $C_k=\underbrace{C+\ldots+C}_{k \ \text{times}}$ is a Cantor set for every $k<m-1$, while $C_k$ is an interval for $k\geq m-1$.*
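Example 26 can be verified directly from Kakeya's criterion, as in the sketch below (our illustration; it checks the inequality $x_n\le R_n$ for a truncation of the repeated sequence and adds the exact remainder of the omitted tail).

```python
from fractions import Fraction

def kakeya_interval_check(m, k, N=30):
    # C_k = A(a_i; k) with a_i = 1/m^i is an interval iff every term of the
    # repeated sequence is at most the sum of all later terms (Kakeya's criterion)
    a = [Fraction(1, m) ** i for i in range(1, N + 1)]
    x = [t for t in a for _ in range(k)]                 # each a_i repeated k times
    omitted = k * Fraction(1, m) ** N / (m - 1)          # exact tail beyond the truncation
    ok = True
    for n in range(len(x)):
        ok = ok and x[n] <= sum(x[n + 1:], omitted)
    return ok

for m in (3, 4, 5):
    print(m, [k for k in range(1, 7) if kakeya_interval_check(m, k)])
# prints 3 [2, 3, 4, 5, 6], 4 [3, 4, 5, 6], 5 [4, 5, 6]: an interval exactly when k >= m - 1
```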
**Theorem 27**. *Let $m,p\in(\mathbb{N}\setminus \{1\}) \cup \{\infty\}$, $p\geq m$. There exists an achievable Cantor set $C$ such that $C_k:=\underbrace{C+\ldots+C}_{k \ \text{times}}$ is*
- *a Cantor set for every $k< m$;*
- *a Cantorval for each $k$ such that $k\geq m$ and $k<p$;*
- *an interval for all $k \in \mathbb N$ such that $k\geq p$.*
*Proof.* If $m=p\in \mathbb N$, then we take $C=A(\frac{1}{(m+1)^n})$ as in Example [Example 26](#noCantorval){reference-type="ref" reference="noCantorval"}. If $m=p = \infty$, then we take $C$ from Remark [Remark 24](#infinity){reference-type="ref" reference="infinity"}.
Suppose that $6<m < p \leq \infty$. Define a sequence $(p_n)$ in the following way. If $p < \infty$, then put $p_n:=p$ for all $n \in \mathbb N$, and if $p = \infty$, then put $p_n = m+n$. We will now define a fast convergent sequence $a=(a_n)$ such that $A(a_n)$ satisfies the assertion of the theorem. First, we will define the first $k_1$ terms of the sequence for some $k_1 \in \mathbb N$. We will define these terms using backward induction, starting from $a_{k_1}$. We will define $a_n$ in such a way that $a_n=q_nr_n$ for all $n\in\mathbb N$, where $q_n > 1$. When the induction is over for some $a_{k_1-j}$, we will put $k_1 -j =1$, and so $k_1 := j+1$.
Take an arbitrary number $S_1 > 0$. We will define the sequence $(a_n)$ in such a way that $r_{k_1} = S_1$. For convenience, although we do not yet have all the terms of the sequence, we will use the notation with remainders $r_{k_i}$ instead of $S_i$.
Put $q_{k_1} := p_1$ and $a_{k_1} := p_1S_1 = q_{k_1}r_{k_1}$. Observe that, we can now define $r_{k_1-1}$ as $r_{k_1} +a_{k_1} = r_{k_1}+q_{k_1}r_{k_1} = r_{k_1}(1+q_{k_1})$. Generally, when we know $r_i$ and $a_i=q_ir_i$ for some $i > 1$, $q_i > 0$, then we have $$\label{r}
r_{i-1} = r_i + a_i = r_i (1+q_i).$$
Suppose that for some $i \geq 0$ we have defined $q_{k_1-i} > 1$ and $a_{k_1-i} := q_{k_1-i} r_{k_1-i}$. Put $$\label{q}
q_{k_1-(i+1)}:=\frac{(\lfloor \frac{m}{2}\rfloor-1 )q_{k_1-i}}{(1+q_{k_1-i})}+\frac{mr_{k_1}}{2r_{k_1-(i+1)}}.$$ Since $m > 6$, we clearly have $q_{k_1-(i+1)} >1.$ Now, put $a_{k_1-(i+1)} := q_{k_1-(i+1)} r_{k_1-(i+1)}$.
We continue this procedure until we define $q_{k_1-j_1}$ and $a_{k_1-j_1}$, where $j_1=\lceil \frac{2p_1}{m} \rceil-1.$ Then, we put $k_1:=j_1+1$, and thus we have defined $a_1,a_2, a_3, \ldots, a_{k_1}.$
Suppose that for some $n \in \mathbb N$ we have defined $k_n \in \mathbb N$, $S_n > 0$, $(q_i)_{i=1}^{k_n}$ and a sequence $(a_i)_{i=1}^{k_n}$. We will define $k_{n+1}>k_n$, a sequence $(a_i)_{i=k_n+1}^{k_{n+1}}$ and $S_{n+1} > 0$ in such a way that $r_{k_{n+1}}=S_{n+1}$. Assume that we have $S_{n+1}>0$ and $k_{n+1}>k_n$ (their actual value will be established later). Put $q_{k_{n+1}}: = p_{n+1}$ and $a_{k_{n+1}} := q_{k_{n+1}} S_{n+1} = q_{k_{n+1}} r_{k_{n+1}}$.
Suppose that for some $i \geq 0$ we have defined $q_{k_{n+1}-i} > 1$ and $a_{k_{n+1}-i} := q_{k_{n+1}-i} r_{k_{n+1}-i}$. Put $$q_{k_{n+1}-(i+1)}:=\frac{(\lfloor \frac{m}{2}\rfloor-1 )q_{k_{n+1}-i}}{(1+q_{k_{n+1}-i})}+\frac{mr_{k_{n+1}}}{2r_{k_{n+1}-(i+1)}}.$$ As before, we have $q_{k_{n+1}-(i+1)} >1.$ Now, put $a_{k_{n+1}-(i+1)} := q_{k_{n+1}-(i+1)} r_{k_{n+1}-(i+1)}$.
We continue this procedure until we define $q_{k_{n+1}-j_{n+1}}$ and $a_{k_{n+1}-j_{n+1}}$, where $j_{n+1}=\lceil \frac{2p_{n+1}}{m} \rceil-1.$
Now, put $q_{k_{n+1}-j_{n+1}-1}:=m-3$ and $a_{k_{n+1}-j_{n+1}-1}:= q_{k_{n+1}-j_{n+1}-1}r_{k_{n+1}-j_{n+1}-1}.$ For consecutive $i> 1$ define $q_{k_{n+1}-j_{n+1}-i}:= m-\frac{1}{2}$ and $a_{k_{n+1}-j_{n+1}-i}:= q_{k_{n+1}-j_{n+1}-i}r_{k_{n+1}-j_{n+1}-i}$, until we reach $t_{n+1}> 1$ such that
$$\label{miara}
\frac{m^{j_{n+1}+t_{n+1}+1}}{(1+q_{k_{n+1}})\cdot(1+q_{k_{n+1}-1})\cdot \ldots \cdot (1+q_{k_{n+1}-j_{n+1}-t_{n+1}})}<\frac{1}{2}.$$ We will find such $t_{n+1}$, because for $i > 1$ we have $$\frac{m^{j_{n+1}+i+1}}{(1+q_{k_{n+1}})\cdot(1+q_{k_{n+1}-1})\cdot \ldots \cdot (1+q_{k_{n+1}-j_{n+1}-i})}$$$$= \frac{m^{j_{n+1}+2}}{(1+q_{k_{n+1}})\cdot(1+q_{k_{n+1}-1})\cdot \ldots \cdot (1+q_{k_{n+1}-j_{n+1}-1})}\cdot\frac{m^{i-1}}{(1+q_{k_{n+1}-j_{n+1}-2})\cdot \ldots\cdot(1+q_{k_{n+1}-j_{n+1}-i})}$$$$= \frac{m^{j_{n+1}+2}}{(1+q_{k_{n+1}})\cdot(1+q_{k_{n+1}-1})\cdot \ldots \cdot (1+q_{k_{n+1}-j_{n+1}-1})}\cdot\frac{m^{i-1}}{(m+\frac{1}{2})^{i-1}}\xrightarrow{i\to\infty} 0.$$
We put $k_{n+1}:=k_n+j_{n+1}+t_{n+1}+1$, and $S_{n+1} : =\frac{S_n}{(1+q_{k_{n}+1})\cdot \ldots\cdot(1+q_{k_{n+1}})}.$ Since $$r_{k_{n}} = r_{k_n+1}+a_{k_n+1}=(1+q_{k_n+1})r_{k_{n}+1}=(1+q_{k_n+1})\cdot(r_{k_n+2}+a_{k_n+2})=(1+q_{k_n+1})(1+q_{k_n+2})r_{k_n+2}$$$$= \ldots=(1+q_{k_n+1})\cdot \ldots\cdot(1+q_{k_{n+1}})r_{k_{n+1}},$$ we have $$r_{k_{n+1}}=\frac{r_{k_n}}{(1+q_{k_n+1})\cdot \ldots\cdot(1+q_{k_{n+1}})} = \frac{S_n}{(1+q_{k_{n}+1})\cdot \ldots\cdot(1+q_{k_{n+1}})} =S_{n+1}.$$
Thus, we have inductively defined sequences $(q_i)$, $(k_n)$, $(S_n)$ and $(a_i)$ in such a way that $a_{i}=q_ir_i$ for all $i \in \mathbb N$ and $r_{k_n}=S_n$ for all $n \in \mathbb N$.
Consider the set $C:=A(a_n)$. It follows from the construction that for all $i\in \mathbb N$, $p\geq q_i >1$, thus $a_{i} = q_i r_i > r_i$ for all $i \in \mathbb N$, so the sequence $(a_i)$ is fast convergent, which implies that $C$ is a Cantor set. We will now examine the sets $C_k$ for $k > 1$. First, observe that $C_k = A(x^{(k)}_n)$, where $(x^{(k)}_n) = (a_i;k)$, that is, the sequence in which every term $a_i$ is repeated $k$ times. Denote by $R_n^{(k)}$ the $n$-th remainder of the sequence $(x^{(k)}_n)$, that is, $R_n^{(k)} = \sum _{i=n+1}^\infty x_i^{(k)}$. In particular, for $n \in \mathbb N$, $R_{k\cdot n}^{(k)} = \sum _{i=n+1}^\infty ka_i =kr_n.$ Also, let $F^{(k)}_n := \{\sum_{i=1}^n \varepsilon_ix_i^{(k)}\colon \varepsilon_i \in \{0,1\}\}.$
By Kakeya's Theorem we know that $C_k$ is an interval if and only if $R_n^{(k)} \geq x^{(k)}_n$ for all $n \in \mathbb N$. For $n$ which are not divisible by $k$ we have $R_n^{(k)} \geq x^{(k)}_{n+1} =x^{(k)}_n$. For indices of the form $n \cdot k$ for some $n \in \mathbb N$ we have $R_{k\cdot n}^{(k)} =kr_n$ and $x^{(k)}_{k\cdot n} = a_n$. So $C_k$ is an interval if and only if $a_n \leq kr_n$ for all $n \in \mathbb N$. From the construction we have $a_{k_n} = p_nr_{k_n}$ for any $n \in \mathbb N$. If $p \in \mathbb N$, then $p_n = p$ for all $n$, and hence there are infinitely many $n$ such that $a_n = pr_n$, while $a_n \leq pr_n$ for all $n \in \mathbb N$. Hence $C_k$ is an interval if and only if $k \geq p$. Moreover, $C_k$ is not a finite union of intervals for $k<p$. If $p = \infty$, then $p_n = m+n$ for all $n\in \mathbb N$, and thus for any $k \in \mathbb N$ there are infinitely many $n$ such that $a_n > kr_n$, so $C_k$ is not a finite union of intervals for any $k \in \mathbb N$.
We will now show that $C_k$ is a Cantor set for $k < m$. Since $0 \in C$, we have $C_k \subset C_{k+1}$ for all $k\in \mathbb N$, so it suffices to show that $C_{m-1}$ is a Cantor set. First, observe that $C_{m-1} \subset F^{(m-1)}_{n} + [0,R^{(m-1)}_n]$, for all $n \in \mathbb N$. In particular, $C_{m-1} \subset F^{(m-1)}_{(m-1)\cdot k_n} + [0, (m-1)r_{k_n}]$ for $n \in \mathbb N$. Therefore, by ([\[miara\]](#miara){reference-type="ref" reference="miara"}), we have $$\lambda(C_{m-1}) \leq \lambda(F^{(m-1)}_{(m-1)\cdot k_n} + [0, (m-1)r_{k_n}]) \leq |F^{(m-1)}_{(m-1)\cdot k_n}|\cdot (m-1)r_{k_n} \le \frac{m^{k_n}\cdot(m-1) r_{k_1}}{(1+q_{k_1+1})\cdot (1+q_{k_1+2})\cdot \ldots \cdot (1+q_{k_n})}$$$$= m^{k_1}(m-1)r_{k_1}\cdot \frac{m^{k_2-k_1}}{(1+q_{k_1+1})\cdot \ldots \cdot (1+q_{k_2})} \cdot \ldots \cdot \frac{m^{k_n-k_{n-1}}}{(1+q_{k_{n-1}+1})\cdot \ldots \cdot (1+q_{k_n})} \leq m^{k_1}(m-1)r_{k_1}\cdot \left( \frac{1}{2} \right) ^ {n-1} \xrightarrow{n \to \infty} 0$$ where $\lambda$ is the Lebesgue measure. Since $C_{m-1}$ has measure zero, it has to be a Cantor set.
Finally, we will show that $C_m$ is a Cantorval, and thus also $C_k$ is a Cantorval for $m \leq k < p$.
We will use Proposition [Proposition 8](#charint){reference-type="ref" reference="charint"}. We are going to define inductively a sequence of sets $(D_n)_{n\in\mathbb N}$ such that for any $n \in \mathbb N$
- $(\alpha_n)$ $D_n\,\subset \, F^{(m)}_{mk_n}$;

- $(\beta_n)$ the distance between any two consecutive points of $D_n$ does not exceed $\frac{mr_{k_n}}{2}$;

- $(\gamma_n)$ $\max D_n - \min D_n \geq (m-2) \sum_{i=1}^{k_n} a_i$.
Define $$L:= \{1,2, \ldots, \lfloor\frac{m}{2} \rfloor \},$$ $$R:=\{\lfloor\frac{m}{2} \rfloor,\lfloor\frac{m}{2} \rfloor +1, \ldots, m-1 \},$$ $$H^1_1:=\{\sum_{i=1}^{k_1} h_ia_i\colon \forall_{i\leq k_1}\,\, h_i \in
L\},$$ $$H^2_1:=\{\sum_{i=1}^{k_1} h_ia_i\colon \forall_{i\leq k_1}\,\,h_i \in
R\},$$ $$H^3_1:=\{\sum_{i=1}^{k_1} h_ia_i\colon \exists_{1<j\leq k_1}\,\, \left( h_j \in
L+\lfloor \frac{m}{2} \rfloor -1\wedge h_1 \in L-1 \wedge \forall_{1<i <j} \,\, h_i \in L+\lfloor \frac{m}{2} \rfloor-2 \wedge \forall_{i > j} \,\, h_i \in L \right) \},$$ $$H^4_1:=\{\sum_{i=1}^{k_1} h_ia_i\colon \exists_{1\leq j< k_1}\,\, \left( h_j \in
R+1 \wedge h_{k_1} \in R -\lfloor \frac{m}{2} \rfloor +1 \wedge \forall_{i <j} \,\, h_i \in R \wedge \forall_{k_1 > i > j} \,\, h_i \in R -\lfloor \frac{m}{2} \rfloor +2 \right) \},$$ $$D_1:=H^1_1\cup H^2_1\cup H^3_1 \cup H^4_1.$$
First, observe that $D_1 \subset F^{(m)}_{mk_1}$, because $F^{(m)}_{mk_1} = \{\sum_{i=1}^{k_1} h_i a_i\colon h_i \in \{0,1, \ldots,m\}\}.$
Now, we will prove $(\beta_1)$. We will do this by proving that for all but one $h \in D_1$ there is $g \in D_1$ such that $g>h$ and $g-h \leq \frac{mr_{k_1}}{2}.$
First, we will prove the following fact.\
**Claim 1** For any $h=\sum_{i=1}^{k_1} h_ia_i \in F^{(m)}_{mk_1}$ if $g=\sum_{i=1}^{k_1} g_ia_i \in F^{(m)}_{mk_1}$ is such that for some $j \in \{2,3, \ldots, k_1\}$, $$g_i := \begin{cases}
h_j - \lfloor \frac{m}{2} \rfloor +1 \;&\text{ if }\; i = j \\
h_{j-1}+1 \;&\text{ if }\; i=j-1\\
h_i \;&\text{ for the remaining }\; i,%
\end{cases}$$ then $g-h = \frac{mr_{k_1}}{2}.$
Using ([\[r\]](#r){reference-type="ref" reference="r"}) and ([\[q\]](#q){reference-type="ref" reference="q"}), we obtain $$g-h = \sum_{i=1}^{k_1} (g_i-h_i)a_i = (-\lfloor \frac{m}{2} \rfloor +1)a_j + a_{j-1} = (-\lfloor \frac{m}{2} \rfloor +1)q_jr_j + q_{j-1}r_{j-1}$$$$\stackrel{(\ref{r}), (\ref{q})}{=} \frac{(-\lfloor \frac{m}{2} \rfloor +1)q_jr_{j-1}}{1+q_j} + \frac{(\lfloor \frac{m}{2}\rfloor-1 )q_{j}r_{j-1}}{1+q_j}+\frac{mr_{k_1}}{2} =\frac{mr_{k_1}}{2},$$ which proves Claim 1.
Now, we will prove\
**Claim 2** For any $h=\sum_{i=1}^{k_1} h_ia_i \in F^{(m)}_{mk_1}$ if $g=\sum_{i=1}^{k_1} g_ia_i$ is such that
$$g_i := \begin{cases}
h_1 - 1 \;&\text{ if }\; i = 1 \\
h_{k_1} + \lfloor \frac{m}{2}\rfloor \;&\text{ if }\; i=k_1\\
h_i+ \lfloor \frac{m}{2}\rfloor-2 \;&\text{ for the remaining }\; i,%
\end{cases}%$$ then $g-h \in (0, \frac{mr_{k_1}}{2}].$
For $j \in \{1,2, \ldots, k_1-2\}$ define $g^j=\sum_{i=1}^{k_1} g^j_ia_i$, where $$g^j_{i} := \begin{cases}
h_{k_1} +1 \;&\text{ if }\; i=k_1\\
h_i\;&\text{ if }\; k_1>i >k_1-j \\
h_{k_1-j} + \lfloor \frac{m}{2}\rfloor -1\;&\text{ if }\; i=k_1-j \\
g_i\;&\text{ if }\; i<k_1-j.%
\end{cases}%$$ We also define $g^{k_1-1}$ such that $$g^{k_1-1}_{i} := \begin{cases}
h_{k_1} +1 \;&\text{ if }\; i=k_1\\
h_i\;&\text{ if }\; i<k_1.\\
\end{cases}%$$ Observe that $g^1_{k_1} = h_{k_1} +1 = g_{k_1} - \lfloor \frac{m}{2} \rfloor +1$, $g^1_{k_1-1} = h_{k_1-1} + \lfloor \frac{m}{2}\rfloor -1= g_{k_1-1}+1$ and $g^1_i = g_i$ for the remaining $i$. Similarly, for $j \in \{2, 3, \ldots, k_1-2\}$ we have $g^j_{k_1-j+1} = h_{k_1-j+1} = g^{j-1}_{k_1-j+1} - \lfloor \frac{m}{2} \rfloor+1$, $g^j_{k_1-j} = h_{k_1-j} + \lfloor \frac{m}{2}\rfloor -1 = g^{j-1}_{k_1-j} + 1$ and $g^{j}_i = g^{j-1}_i$ for the remaining $i$. Also, $g^{k_1-1}_{2} = h_{2} = g^{k_1-2}_2 - \lfloor \frac{m}{2} \rfloor+1$, $g^{k_1-1}_1 = h_{1} = g^{k_1-2}_{1} + 1$ and $g^{k_1-1}_i = g^{k_1-2}_i$ for the remaining $i$. Hence, by Claim 1, for $j \in \{1,2, \ldots, k_1-1\}$ we have $g^j - g^{j-1} = \frac{mr_{k_1}}{2}$ (where $g^0:=g$). Therefore, $$g^{k_1-1} - g = (k_1-1) \cdot \frac{mr_{k_1}}{2}.$$ On the other hand, $g^{k_1-1}$ is such that $g^{k_1-1}_{k_1} = h_{k_1}+1$ and $g^{k_1-1}_i = h_i$ for $i < k_1$. So, $$g^{k_1-1} - h = a_{k_1} = p_1r_{k_1}.$$ Since $k_1 = \lceil \frac{2p_1}{m} \rceil,$ we have $$g - h = g^{k_1-1} -h - (g^{k_1-1}-g)= p_1r_{k_1} - (k_1-1) \cdot \frac{mr_{k_1}}{2} =
p_1r_{k_1} - (\lceil \frac{2p_1}{m}-1 \rceil)\cdot\frac{mr_{k_1}}{2}$$$$\leq r_{k_1}\left(p_1- (\frac{2p_1}{m}-1) \cdot\frac{m}{2} \right) = \frac{m{r_{k_1}}}{2}.$$ We also have $$g - h > r_{k_1}\left(p_1- (\frac{2p_1}{m}) \cdot\frac{m}{2} \right) =0.$$ Thus, $g - h \in (0, \frac{mr_{k_1}}{2}],$ which finishes the proof of Claim 2.
For $j \in \{2, \ldots, k_1\}$ denote by $b^j$ the sequence of $k_1$ terms such that $$b^j_{i} := \begin{cases}
(-\lfloor \frac{m}{2} \rfloor +1) \;&\text{ if }\; i=j\\
1\;&\text{ if }\; i=j-1 \\
0\;&\text{ for the remaining }\; i.%
\end{cases}%$$ In Claim 1 we proved that for any $h = \sum_{i=1}^{k_1}h_ia_i \in F^{(m)}_{mk_1}$, if $g_i = h_i +b^j_i$ for some $j\in \{2, \ldots, k_1\}$ and $g = \sum_{i=1}^{k_1} g_ia_i$, then $g-h = \frac{mr_{k_1}}{2}.$
By $B$ denote the sequence of $k_1$ terms such that $$B_{i} := \begin{cases}
-1 \;&\text{ if }\; i=1\\
\lfloor \frac{m}{2} \rfloor\;&\text{ if }\; i=k_1 \\
(\lfloor \frac{m}{2} \rfloor -2) \;&\text{ for the remaining }\; i.%
\end{cases}%$$ From Claim 2 we know that for any $h = \sum_{i=1}^{k_1}h_ia_i \in F^{(m)}_{mk_1}$, if $g_i = h_i +B_i$ and $g = \sum_{i=1}^{k_1} g_ia_i$, then $g-h \in (0, \frac{mr_{k_1}}{2}].$
Now, we will show that for all but one $h \in D_1$ there is $g =\sum_{i=1}^{k_1}g_ia_i \in D_1$ such that $g-h\in (0, \frac{mr_{k_1}}{2}]$. Let $h \in D_1$. Consider the following cases.
1\. $h=\sum_{i=1}^{k_1} h_ia_i \in H^1_1$, that is, $h_i \in L$ for all $i\leq k_1$. Consider the subcases.
1.1. $h_{k_1} \neq \lfloor \frac{m}{2} \rfloor$. Let $g_i= h_i + B_i$ for all $i$. Then $g_{k_1} \in L - 1 + \lfloor \frac{m}{2} \rfloor$, $g_1 \in L-1$ and $g_i \in L +\lfloor \frac{m}{2} \rfloor -2$ for the remaining $i$, so $g \in H^3_1$ (with $j = k_1$) and, by Claim 2, $g-h \in (0, \frac{mr_{k_1}}{2}]$.
1.2. $h_{k_1} = \lfloor \frac{m}{2} \rfloor$ and there is $i < k_1$ such that $h_i < \lfloor \frac{m}{2} \rfloor$. Let $1<j\leq k_1$ be such that $h_j =\lfloor \frac{m}{2} \rfloor$ and $h_{j-1} < \lfloor \frac{m}{2} \rfloor$. Let $g_i = h_i + b^j_i$ for all $i \leq k_1$. Then $g_j = 1 \in L$, $g_{j-1} = h_{j-1} + 1 \in L$ and $g_i = h_i \in L$ for the remaining $i$, so $g \in H^1_1$. Moreover, by Claim 1, $g-h = \frac{mr_{k_1}}{2}.$
1.3. $h_i = \lfloor \frac{m}{2} \rfloor$ for all $i$. Let $g_i = h_i + b^{k_1}_i$ for all $i \leq k_1$. Then $g_{k_1} = 1 \in R-\lfloor\frac{m}{2} \rfloor +1$, $g_{k_1-1} = \lfloor\frac{m}{2} \rfloor +1 \in R+1$ and $g_i = \lfloor\frac{m}{2} \rfloor \in R$ for the remaining $i$, so $g \in H^4_1$ (with $j= k_1-1$). Moreover, by Claim 1, $g-h = \frac{mr_{k_1}}{2}.$
2\. $h=\sum_{i=1}^{k_1} h_ia_i \in H^2_1$. Let $g_i = h_i + b^{k_1}_i$ for all $i \leq k_1$. Then $g_{k_1} \in R-\lfloor\frac{m}{2} \rfloor +1$, $g_{k_1-1} \in R+1$ and $g_i\in R$ for the remaining $i$, so $g \in H^4_1$ (with $j= k_1-1$). Moreover, by Claim 1, $g-h = \frac{mr_{k_1}}{2}.$
3\. $h=\sum_{i=1}^{k_1} h_ia_i \in H^3_1$. So, there is $1<j \leq k_1$ such that $h_j \in L + \lfloor\frac{m}{2} \rfloor-1$, $h_1 \in L-1$, $h_i \in L + \lfloor\frac{m}{2} \rfloor -2$ for all $1<i<j$ and $h_i \in L$ for $i > j$. Let $g_i = h_i + b^{j}_i$ for all $i \leq k_1$. If $j > 2$, then $g_j \in L$ and $g_{j-1} \in L + \lfloor\frac{m}{2} \rfloor-1$, so still $g \in H^3_1.$ If $j = 2$, then for all $i\leq k_1$ we have $g_i \in L$, so $g \in H^1_1$. In both cases $g \in D_1$ and, by Claim 1, $g-h = \frac{mr_{k_1}}{2}.$
4\. $h=\sum_{i=1}^{k_1} h_ia_i \in H^4_1$. So, there is $1\leq j < k_1$ such that $h_j \in R +1$, $h_{k_1} \in R-\lfloor\frac{m}{2} \rfloor+1$, $h_i \in R - \lfloor\frac{m}{2} \rfloor +2$ for all $k_1>i>j$ and $h_i \in R$ for $i < j$. Consider the subcases.
4.1. $j > 1$. Let $g_i = h_i + b^{j}_i$ for all $i \leq k_1$. Then $g_j \in R-\lfloor\frac{m}{2} \rfloor+2$ and $g_{j-1} \in R+1$, so still $g \in H^4_1.$ Moreover, by Claim 1, $g-h = \frac{mr_{k_1}}{2}.$
4.2. $j=1$, $h_{k_1} \neq m-\lfloor\frac{m}{2} \rfloor.$ Let $g_i = h_i +B_i$ for all $i \leq k_1$. Then since $h_{k_1} < m-\lfloor\frac{m}{2} \rfloor$, we have $g_{k_1} \in R$. We also have $g_i \in R$ for $i < k_1$, so $g \in H^2_1$. Moreover, by Claim 2, $g-h \in (0, \frac{mr_{k_1}}{2}].$
4.3. $j=1$, $h_{k_1} = m-\lfloor\frac{m}{2} \rfloor$ and either there is $1< l <k_1$ such that $h_l \neq m - \lfloor \frac{m}{2}\rfloor +1$ or $h_1 \neq m$ (then $l=1$). If $l = k_1-1$, then let $g_i = h_i +b^{k_1}_i$ for all $i \leq k_1$. We have $g_{k_1} = m - 2\cdot \lfloor \frac{m}{2}\rfloor +1 \in \{1,2\} \subset R-\lfloor \frac{m}{2}\rfloor+1$ and since $h_{k_1-1} < m - \lfloor \frac{m}{2}\rfloor +1$, we get $g_{k_1-1} \in R- \lfloor \frac{m}{2}\rfloor +2$. Therefore, $g \in H^4_1$. Now, without loss of generality suppose that $1< l <k_1-1$ is such that $h_{l+1} = m - \lfloor \frac{m}{2}\rfloor +1$. Then let $g_i = h_i +b^{l+1}_i$ for all $i \leq k_1$. We have $g_{l+1} = m - 2\cdot \lfloor \frac{m}{2}\rfloor +2 \in \{2,3\} \subset R-\lfloor \frac{m}{2}\rfloor+2$ and since $h_{l} < m - \lfloor \frac{m}{2}\rfloor +1$, we get $g_{l} \in R- \lfloor \frac{m}{2}\rfloor +2$. Therefore, $g \in H^4_1$. If $l=1$, then we can suppose that $h_{2} = m - \lfloor \frac{m}{2}\rfloor +1$ (in the other case we will have the situation as above with $l > 1$). Let $g_i = h_i +b^{2}_i$ for all $i \leq k_1$. We have $g_{2} = m - 2\cdot \lfloor \frac{m}{2}\rfloor +2 \in \{2,3\} \subset R-\lfloor \frac{m}{2}\rfloor+2$ and since $h_{1} < m$, we get $g_{1} \in R+1$. Therefore, $g \in H^4_1$. In all of the above cases we also have, by Claim 1, $g-h = \frac{mr_{k_1}}{2}$.
This finishes the proof of $(\beta_1)$, as the point $h=\sum_{i=1}^{k_1} h_ia_i \in H^4_1$, where $h_{k_1} = m - \lfloor \frac{m}{2}\rfloor$, $h_1 = m$ and $h_i = m-\lfloor \frac{m}{2}\rfloor+1$ for the remaining $i$, is the largest number in $D_1$.
As $\sum_{i=1}^{k_1} a_i \in H^1_1 \subset D_1$ and $(m-1)\sum_{i=1}^{k_1} a_i \in H^2_1 \subset D_1$, we have $\max D_1 \geq (m-1) \sum_{i=1}^{k_1} a_i$ and $\min D_1 \leq \sum_{i=1}^{k_1} a_i$. Therefore, $\max D_1 - \min D_1 \geq (m-2) \sum_{i=1}^{k_1} a_i,$ which proves $(\gamma_1)$.
Suppose that for some $n \in \mathbb N$ we have defined the set $D_n$ satisfying conditions $(\alpha_n), (\beta_n)$ and $(\gamma_n)$.
Define $$v_{n+1}: =k_{n+1}-j_{n+1},$$ $$H^1_{n+1}:=\{\sum_{i=v_{n+1}}^{k_{n+1}} h_ia_i\colon \forall_{i\leq k_{n+1}}\,\, h_i \in
L\},$$ $$H^2_{n+1}:=\{\sum_{i=v_{n+1}}^{k_{n+1}} h_ia_i\colon \forall_{i\leq k_{n+1}}\,\,h_i \in
R\},$$ $$H^3_{n+1} :=\{\sum_{i=v_{n+1}}^{k_{n+1}} h_ia_i\colon \exists_{v_{n+1}<j\leq k_{n+1}}\,\, \left( h_j \in
L+\lfloor \frac{m}{2} \rfloor -1\wedge h_{v_{n+1}} \in L-1 \right.$$$$\left.\wedge \forall_{v_{n+1}<i <j} \,\, h_i \in L+\lfloor \frac{m}{2} \rfloor-2 \wedge \forall_{i > j} \,\, h_i \in L \right) \},$$
$$H^4_{n+1}:=\{\sum_{i=v_{n+1}}^{k_{n+1}} h_ia_i\colon \exists_{v_{n+1}\leq j< k_{n+1}}\,\, \left( h_j \in
R+1 \wedge h_{k_{n+1}} \in R -\lfloor \frac{m}{2} \rfloor +1 \wedge \forall_{i <j} \,\, h_i \in R\right.$$$$\wedge \left. \forall_{k_{n+1} > i > j} \,\, h_i \in R -\lfloor \frac{m}{2} \rfloor +2 \right) \},$$ $$H_{n+1}: =\bigcup_{i=1}^4 H^i_{n+1},$$ $$G^i = \{0,a_i, 2a_i, \ldots, ma_i\} \qquad \text{for $i\in\mathbb N$},$$ $$G_{n+1} = \{\sum_{i={k_n+1}}^{v_{n+1}-1} h_i a_i \colon \forall_{i} h_i \in \{0,1, \ldots,m\}\},$$ $$D_{n+1} = D_n + G_{n+1} + H_{n+1}.$$
By the definition, $(\alpha_{n+1})$ is satisfied.
Now, we will prove $(\beta_{n+1})$. First, observe that the definition of $a_{v_{n+1}}, \ldots, a_{k_{n+1}}$ is analogous to the definition of $a_1, \ldots, a_{k_1}$, so we can repeat the reasoning from the proof of $(\beta_1)$ to show that for all but one point $h \in H_{n+1}$ there is $g \in H_{n+1}$ such that $g-h \in (0, \frac{mr_{k_{n+1}}}{2}].$ We also know that $$[\min H_{n+1}, \max H_{n+1}] \supset [\sum_{i=v_{n+1}}^{k_{n+1}} a_i, (m-1)\sum_{i=v_{n+1}}^{k_{n+1}} a_i].$$
Consider the set $G^{v_{n+1}-1} + H_{n+1}.$ For $j \in \{1,2, \ldots, m\}$ we have $$(j-1) a_{v_{n+1}-1} + (m-1) \sum_{i=v_{n+1}}^{k_{n+1}} a_i - \left(ja_{v_{n+1}-1}+\sum_{i=v_{n+1}}^{k_{n+1}} a_i \right) = -a_{v_{n+1}-1} + (m-2)\sum_{i=v_{n+1}}^{k_{n+1}} a_i$$$$=-(m-3)r_{v_{n+1}-1}+(m-2)\sum_{i=v_{n+1}}^{k_{n+1}}a_i = -(m-3)\sum_{i=v_{n+1}}^\infty a_i+(m-2)\sum_{i=v_{n+1}}^{k_{n+1}}a_i = \sum_{i=v_{n+1}}^{k_{n+1}}a_i -(m-3) \cdot\sum_{i=k_{n+1}+1}^\infty a_i$$$$=\sum_{i=v_{n+1}}^{k_{n+1}}a_i -(m-3) r_{k_{n+1}}\geq a_{k_{n+1}}-(m-3)r_{k_{n+1}} = p_{n+1}r_{k_{n+1}}- (m-3)r_{k_{n+1}} > 0.$$ Therefore, $$\left( (j-1) a_{v_{n+1}-1}+[\sum_{i=v_{n+1}}^{k_{n+1}} a_i, (m-1)\sum_{i=v_{n+1}}^{k_{n+1}} a_i] \right) \cap \left( j a_{v_{n+1}-1}+[\sum_{i=v_{n+1}}^{k_{n+1}} a_i, (m-1)\sum_{i=v_{n+1}}^{k_{n+1}} a_i] \right)\neq \emptyset,$$ so, for any $h \in G^{v_{n+1}-1}+ H_{n+1}$, there is $g \in G^{v_{n+1}-1}+H_{n+1}$ such that $g-h \in (0, \frac{mr_{k_{n+1}}}{2}].$
Suppose that for some $l\in \{ k_{n} + 2, \ldots, v_{n+1}-1\}$ we have proved that for any $h \in G^l+G^{l+1}+ \ldots+G^{v_{n+1}-1}+ H_{n+1}$ there is $g \in G^l+G^{l+1}+ \ldots+G^{v_{n+1}-1}+H_{n+1}$ such that $g-h \in (0, \frac{mr_{k_{n+1}}}{2}].$ We will prove that for any $h \in G^{l-1}+G^{l}+ \ldots+G^{v_{n+1}-1}+ H_{n+1}$ there is $g \in G^{l-1}+G^{l}+ \ldots+G^{v_{n+1}-1}+H_{n+1}$ such that $g-h \in (0, \frac{mr_{k_{n+1}}}{2}].$
For $j \in \{1,2, \ldots, m\}$, since $m > 6$ and $j_{n+1}=\lceil \frac{2p_{n+1}}{m} \rceil -1 \geq 2,$ using ([\[r\]](#r){reference-type="ref" reference="r"}), we obtain $$(j-1) a_{l-1} +ma_l+ \ldots+ma_{v_{n+1}-1} +(m-1) \sum_{i=v_{n+1}}^{k_{n+1}} a_i - \left(ja_{l-1}+\sum_{i=v_{n+1}}^{k_{n+1}} a_i \right)$$$$= -a_{l-1} + (m-2)\sum_{i=l}^{k_{n+1}} a_i +2a_l+2a_{l+1}+ \ldots+2a_{v_{n+1}-1} \geq -(m-\frac{1}{2})r_{l-1}+(m-2)\sum_{i=l}^{k_{n+1}}a_i + 2a_l$$$$=-(m-\frac{1}{2})\sum_{i=l}^\infty a_i+(m-2)\sum_{i=l}^{k_{n+1}}a_i + 2a_l =-\frac{3}{2}\sum_{i=l}^\infty a_i-(m-2)\sum_{i=k_{n+1}+1}^{\infty}a_i + 2a_l$$$$= -\frac{3}{2} r_{l-1} -(m-2) r_{k_{n+1}} + 2a_l = -\frac{3}{2}(1+q_l)r_l+2q_lr_l -(m-2) r_{k_{n+1}}$$$$= \frac{1}{2}q_lr_l-\frac{3}{2}r_l - (m-2) r_{k_{n+1}} \geq \frac{1}{2}(m-3)r_l- \frac{3}{2}r_l -(m-2) r_{k_{n+1}} = (\frac{m}{2}-3)r_l- (m-2) r_{k_{n+1}}$$$$=(\frac{m}{2}-3)(1+q_{l+1})(1+q_{l+2})r_{l+2}- (m-2) r_{k_{n+1}} \geq(\frac{m}{2}-3)\cdot4r_{l+2}- (m-2) r_{k_{n+1}}$$$$\geq (2m-12)(p_{n+1}+1)r_{k_{n+1}} - (m-2) r_{k_{n+1}} \stackrel{m>6}{\geq} 2(p_{n+1}+1)r_{k_{n+1}}- (m-2) r_{k_{n+1}} > 0.$$ Therefore, $$\left( (j-1) a_{l-1}+[\sum_{i=v_{n+1}}^{k_{n+1}} a_i, m\sum_{i=l}^{v_{n+1}-1}a_i + (m-1)\sum_{i=v_{n+1}}^{k_{n+1}} a_i] \right) \cap \left( j a_{l-1}+[\sum_{i=v_{n+1}}^{k_{n+1}} a_i, m\sum_{i=l}^{v_{n+1}-1}a_i + (m-1)\sum_{i=v_{n+1}}^{k_{n+1}} a_i] \right)\neq \emptyset,$$ so for any $h \in G^{l-1}+G^{l}+ \ldots+G^{v_{n+1}-1}+ H_{n+1}$ there is $g \in G^{l-1}+G^{l}+ \ldots+G^{v_{n+1}-1}+H_{n+1}$ such that $g-h \in (0, \frac{mr_{k_{n+1}}}{2}].$
By induction we obtain that for any $h \in G_{n+1}+ H_{n+1}$ there is $g \in G_{n+1}+H_{n+1}$ such that $g-h \in (0, \frac{mr_{k_{n+1}}}{2}].$
Now, let $d_j, d_{j+1} \in D_{n}$ be such that $d_{j+1}>d_j$, and there is no point $f \in D_n$ such that $d_{j+1} > f > d_j$. By $(\beta_n)$, we know that $d_{j+1} - d_j \leq \frac{mr_{k_n}}{2}.$ We have $$d_j + m\sum_{i=k_n+1}^{v_{n+1}-1}a_i+(m-1) \sum_{i=v_{n+1}}^{k_{n+1}}a_i - \left(d_{j+1} + \sum_{i=v_{n+1}}^{k_{n+1}}a_i\right)$$$$\geq -\frac{mr_{k_n}}{2} + \left( m\sum_{i=k_n+1}^{\infty}a_i -m\sum_{i=v_{n+1}}^{k_{n+1}}a_i - m \sum_{i=k_{n+1}+1}^\infty a_i \right) + (m-2)\sum_{i=v_{n+1}}^{k_{n+1}}a_i = -\frac{mr_{k_n}}{2} + mr_{k_n} -2\sum_{i=v_{n+1}}^{k_{n+1}}a_i - m r_{k_{n+1}}$$$$=
\frac{mr_{k_n}}{2} - 2\sum_{i=v_{n+1}}^{k_{n+1}}a_i - mr_{k_{n+1}} > \frac{mr_{k_n}}{2} - 2r_{k_n} - \frac{m}{1+p_{n+1}} r_{k_{n+1}-1} >\frac{mr_{k_n}}{2}-3r_{k_n} > 0.$$ Therefore, $$\left( d_j+\left[\sum_{i=v_{n+1}}^{k_{n+1}} a_i,m\sum_{i=k_n+1}^{v_{n+1}-1}a_i+(m-1) \sum_{i=v_{n+1}}^{k_{n+1}}a_i\right]\right) \cap \left( d_{j+1}+\left[\sum_{i=v_{n+1}}^{k_{n+1}} a_i,m\sum_{i=k_n+1}^{v_{n+1}-1}a_i+(m-1) \sum_{i=v_{n+1}}^{k_{n+1}}a_i\right]\right) \neq \emptyset.$$ Since $$D_{n+1}\ \cap\ \left[\sum_{i=v_{n+1}}^{k_{n+1}} a_i,m\sum_{i=k_n+1}^{v_{n+1}-1}a_i+(m-1) \sum_{i=v_{n+1}}^{k_{n+1}}a_i\right]\ \subset\ G_{n+1}+H_{n+1},$$ by arbitrariness of $d_j$, we infer $(\beta_{n+1})$.
To prove $(\gamma_{n+1})$ observe that $$\max D_{n+1} \geq \max D_n +m\sum_{i=k_n+1}^{v_{n+1}-1}a_i+(m-1) \sum_{i=v_{n+1}}^{k_{n+1}}a_i \geq (m-1)\sum_{i=1}^{k_n}a_i + (m-1)\sum_{i=k_n+1}^{v_{n+1}-1}a_i + (m-1) \sum_{i=v_{n+1}}^{k_{n+1}}a_i = (m-1)\sum_{i=1}^{k_{n+1}}a_i.$$ Similarly, $\min D_{n+1} \leq \sum_{i=1}^{k_{n+1}}a_i$. Therefore, we have $\max D_{n+1} -\min D_{n+1} \geq (m-2)\sum_{i=1}^{k_{n+1}}a_i.$
Since $\lim\limits_{n\to \infty} (m-2)\sum_{i=1}^{k_{n}}a_i > 0,$ we get that $C_m$ is a Cantorval, by Proposition [Proposition 8](#charint){reference-type="ref" reference="charint"}.
Now, suppose that $2 \leq m \leq 6$ and $m < p$. Since $4m > 6$ we know that there is a Cantor set $C$ such that $C_k$ is a Cantor set for $k < 4m$, a Cantorval for $4m\leq k < 4p$ and an interval for $k \geq 4p.$ In particular, $K:=C_4$ is a Cantor set. Denote by $K_k$ an algebraic sum of $k$ copies of $K$. Observe that $K_k = C_{4k}$. So, if $k < m$, then $K_{k}$ is a Cantor set, if $m \leq k < p$, then $K_k$ is a Cantorval and if $k \geq p$, then $K_k$ is an interval, so $K$ is the required Cantor set. ◻
# Decomposition of an interval-filling sequence
In the whole section we will consider an interval-filling sequence $(a_n)$. We divide its terms into two infinite subsequences $(y_n)$ and $(z_n)$, that is, $(y_n)\cup (z_n)=(a_n)$. By $r_n,r_n^{(y)},r_n^{(z)}$ we denote the tails of the sequences $(a_n),(y_n)$ and $(z_n)$ respectively. We assume that all three considered sequences are nonincreasing.
**Theorem 28**. *Let $(a_n)$ be an interval-filling sequence. If there exists a decomposition $(a_n) = (y_n) \cup (z_n)$ such that both $(y_n)$ and $(z_n)$ are interval-filling, then there exists $k$ such that for each $n\geq k$ the inequality $a_{n-1}+a_n\leq r_{n}$ holds.*
*Proof.* Without loss of generality we may assume that $y_1=a_1$. Then there exists $k$ such that $y_l=a_l$ for all $l<k$ and $z_1=a_k$. Fix $v\geq k$. We may assume that $a_v=y_j$ for some $j$ (the proof when $a_v\in (z_n)$ is similar). Let $w<v$ be such that $a_w=z_i$ and for every $r\in\{w+1,w+2,\ldots,v-1\}$ we have $a_r\in(y_n)$, that is, $a_w$ is the smallest element from $(z_n)$ which is not less than $y_j$. By the assumption, we have $z_i\leq\sum_{q>i}z_q=r_i^{(z)}$ and $y_j\leq\sum_{p>j}y_p=r_j^{(y)}$. Note that $(a_n)_{n>v}=(y_p)_{p>j}\cup (z_q)_{q>i}$, since $z_{i+1}\leq y_j\leq z_{i}$. Hence $$a_{v-1}+a_v\leq z_i+y_j\leq r_i^{(z)}+r_j^{(y)}=r_{v}.$$ ◻
**Theorem 29**. *Let $(a_n)$ be an interval-filling sequence. If there exists a decomposition $(a_n) = (y_n) \cup (z_n)$ such that both $(y_n)$ and $(z_n)$ are fast convergent, then for infinitely many $n$ the inequality $a_{n-1}+a_n> r_{n}$ holds.*
*Proof.* Let $v$ be such that $a_{v-1}\in (y_n)$ and $a_{v}\in (z_n)$ (or vice-versa). Then $a_{v-1}=y_j$ and $a_v=z_i$ for some $i,j$. We have $$a_{v-1}+a_v=y_j+z_i> r_j^{(y)}+r_i^{(z)}=r_v.$$ Since both subsequences are infinite, there are infinitely many such $v$, which completes the proof. ◻
**Corollary 30**. *Let $(a_n)$ be an interval-filling sequence. Then at most one of the following conditions holds:*
- *there exists a decomposition of $(a_n)$ into two interval-filling sequences;*
- *there exists a decomposition of $(a_n)$ into two fast convergent sequences.*
**Corollary 31**. *Let $(a_n)$ be an interval-filling sequence. If there exists a decomposition of $(a_n)$ into two interval-filling sequences then $2a_n\leq r_n$ for all large enough $n$. Generally, for $k\geq 2$ if there exists a decomposition of $(a_n)$ into $k$ interval-filling sequences, then $\sum_{i=n-k+1}^{n}a_i\leq r_n$ for all large enough $n$. In particular, $ka_n\leq r_n$ for all large enough $n$.*
**Corollary 32**. *Let $(a_n)$ be an interval-filling sequence. If there exists a decomposition such that both $(y_n)$ and $(z_n)$ are fast convergent then for infinitely many $n$ the inequality $2a_{n-1}> r_{n}$ holds.*
So far we have considered necessary conditions for decompositions into interval-filling or fast convergent sequences. Now, we give some sufficient conditions for obtaining particular alternating decompositions. We will use the notation $r_n^{(2)}$ for the subtail $a_{n+1}+a_{n+3}+a_{n+5}+\ldots$.
**Theorem 33**. *Let $(a_n)$ be an interval-filling sequence. If $2a_{n-1}\leq r_{n}$ for all $n\geq 2$, then there exists a decomposition of $(a_n)$ into two interval-filling sequences.*
*Proof.* We will show that $(y_n)=(a_{2n-1})$ and $(z_n)=(a_{2n})$ is the desired decomposition. Indeed, $$2a_{n-1}\leq r_n = r_{n+1}^{(2)}+r_{n}^{(2)}\leq 2r_{n}^{(2)},$$ which means that $a_{n-1}\leq r_{n}^{(2)}$ for all $n$. The inequality for even $n$ is equivalent to $y_k\leq r_{k}^{y}$ for all $k$, while the case of odd $n$ gives the inequality $z_k\leq r_{k}^{z}$. Hence both sequences $(y_n)$ and $(z_n)$ are interval-filling. ◻
In a similar way we obtain $k$ alternating interval-filling subsequences $(a_{kn+i})$.
**Corollary 34**. *Let $(a_n)$ be an interval-filling sequence and $k\in\mathbb{N}$. If $ka_{n-k+1}\leq r_{n}$ for all $n\geq k$, then there exists a decomposition of $(a_n)$ into $k$ interval-filling sequences.*
We also have the following theorem.
**Theorem 35**. *Let $(a_n)$ be an interval-filling sequence. Assume that for $k \in \mathbb N$, $(2k-1)a_{n}\leq r_{n}$ for all $n$. Then, there exists a decomposition of $(a_n)$ into $k$ interval-filling sequences.*
*Proof.* Since for all $n \in \mathbb N$ $$(k-1)a_n \geq a_{n+1} + \ldots + a_{n+k-1}$$ and for all $k \leq i \leq 2k-2$ $$r_{n+i}^{(k)} \leq r_{n+k-1}^{(k)},$$ we get $$ka_n = (2k-1)a_n - (k-1)a_n \leq r_n-(k-1)a_n =(a_{n+1}+a_{n+2} + \ldots + a_{n+k-1}) + (r_{n+k-1}^{(k)} + r_{n+k}^{(k)} + \ldots r_{n+2k-2}^{(k)}) - (k-1)a_n$$$$\leq (k-1)a_n + kr_{n+k-1}^{(k)} - (k-1)a_n = kr_{n+k-1}^{(k)}.$$ So, we decompose $(a_n)$ into subsequences of the form $(a_{kn-j})_n$, where $j \in \{0,1, \ldots, k-1\}$, and then $$a_{kn-j} \leq r_{kn-j+k-1}^{(k)} = a_{kn-j+k} + a_{kn-j+2k}+ \ldots.$$ Hence subsequences $(a_{kn-j})_n$ are slowly convergent for all $j \in \{0,1, \ldots, k-1\}$, and thus they are interval-filling. ◻
From the above theorem we get that if $3a_{n}\leq r_{n}$ for all $n$ then there exists a decomposition of $(a_n)$ into two interval-filling sequences. However, we can improve this result.
**Theorem 36**. *If $(1+\sqrt{3})a_{n}\leq r_{n}$ for all $n$ then there exists a decomposition of $(a_n)$ into two interval-filling sequences.*
*Proof.* For all $n \in \mathbb N$ we have $$(1+\sqrt{3})a_{n}\leq r_{n} = r_n^{(2)} + r_{n+1}^{(2)} =a_{n+1} + r_{n+2}^{(2)} + r_{n+1}^{(2)} \leq 2r_{n+1}^{(2)} + a_{n+1} \leq 2r_{n+1}^{(2)}+\frac{1}{1+\sqrt{3}} r_{n+1}$$$$= 2r_{n+1}^{(2)}+\frac{1}{1+\sqrt{3}}r_{n+1}^{(2)} + \frac{1}{1+\sqrt{3}}r_{n+2}^{(2)} \leq 2r_{n+1}^{(2)}+\frac{2}{1+\sqrt{3}}r_{n+1}^{(2)}.$$ Dividing both sides by $1+\sqrt{3}$, we get $$a_{n} \leq r_{n+1}^{(2)}.$$ Similarly, as in the proof of Theorem [Theorem 35](#2k-1xn){reference-type="ref" reference="2k-1xn"} we get that subsequences $(a_{2n})$ and $(a_{2n-1})$ are interval-filling. ◻
Note that there is no need to consider the case of decompositions of sequences into more than two fast-convergent sequences since any subsequence of such a sequence is also fast convergent. Thus, if there exists a decomposition of a sequence into two fast-convergent subsequences, then there exists a decomposition into any number of such sequences.
To sum up and simplify the problem, note that the crucial quantity in our considerations is the ratio $q_n:=\frac{a_n}{r_n}$. The class of interval-filling sequences is defined by the condition $q_n\leq 1$ for all $n$. A sufficient condition for the existence of a decomposition into $k$ interval-filling subsequences is that the inequality $q_n\leq\frac{1}{2k-1}$ holds for all $n$. By the necessary condition we know that if the inequality $q_n>\frac{1}{k}$ holds for infinitely many $n$, then such a decomposition does not exist. To illustrate the problem we give some examples.
**Example 37**. *Let $a_{2n-1}=a_{2n}=\frac{1}{2^n}$ for all $n$. Then the sequence has a decomposition into two interval-filling subsequences containing the odd and the even terms respectively. We have $q_{2n-1}=\frac{1}{3}$ and $q_{2n}=\frac{1}{2}$ for each $n$.*
**Example 38**. *Let $a_{n}=(\frac{\sqrt{2}}{2})^n$ for all $n$. Then the sequence has a decomposition into two interval-filling subsequences containing the odd and the even terms respectively (both are geometric with the ratio $\frac{1}{2}$). We have $q_{n}=\frac{1}{\sqrt{2}+1}$ for every $n$.*
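The ratios in the last two examples are easily recomputed. The following sketch is ours: it evaluates $q_n=a_n/r_n$ for truncations of both sequences, adding the exact (respectively, numerically negligible) omitted tail.

```python
from fractions import Fraction
import math

# Example 37: a_{2n-1} = a_{2n} = 1/2^n, with the exact omitted tail 2 * (1/2)^20
a37 = [Fraction(1, 2) ** n for n in range(1, 21) for _ in range(2)]
omitted37 = 2 * Fraction(1, 2) ** 20
q37 = [a37[n] / sum(a37[n + 1:], omitted37) for n in range(6)]
print(q37)                                                      # 1/3, 1/2, 1/3, 1/2, ...

# Example 38: a_n = (sqrt(2)/2)^n; the truncation error is about rho^60, negligible here
rho = math.sqrt(2) / 2
a38 = [rho ** n for n in range(1, 61)]
print([round(a38[n] / sum(a38[n + 1:]), 5) for n in range(4)])  # about 0.41421 = 1/(1+sqrt(2))
```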
Now we consider a sufficient condition for the existence of the alternating decomposition into two fast convergent subsequences.
**Theorem 39**. *Let $(a_n)$ be an interval-filling sequence. If $a_n>r_{n+1}$ for all $n$ then there exists a decomposition of $(a_n)$ into two fast convergent sequences.*
*Proof.* The following inequalities hold: $$a_n>r_{n+1}>r_{n+1}^{(2)}.$$ Hence the sequences $(a_{2n-1})$ and $(a_{2n})$ are fast convergent. ◻
In the study of decompositions into two fast convergent sequences the major characteristic is the value of the ratio $p_n:=\frac{a_n}{r_{n+1}}$. Clearly, $p_n>q_n$. Note that if $p_n\leq 1$ holds for all $n$, then we get the definition of a locker, which is a notion stronger than that of an interval-filling sequence and was described in [@DJK] and [@DK]. If $p_n>1$ for all $n$, then we know that $(a_n)$ can be decomposed into two fast convergent sequences. On the other hand, if $p_n\leq\frac{1}{2}$ for all large enough $n$, then such a decomposition does not exist.
We finish these considerations with an example which does not satisfy the sufficient condition, but for which the decomposition exists.
**Example 40**. *Let $(c_n)$ be any decreasing sequence with elements from the interval $(\frac{1}{2},1)$. Let $a_{2n-1}=a_{2n}=\frac{c_n}{2^n}$. Note that $(a_n)$ is interval-filling. Indeed, for $n \in \mathbb N$ we have $a_{2n-1}=a_{2n} < r_{2n-1}$ and $$a_{2n}=\frac{c_n}{2^n}<\frac{1}{2^n}=\sum_{k=1}^{\infty}\frac{1}{2^{n+k}}=\frac{1}{2}\sum_{k=1}^{\infty}\frac{1}{2^{n+k}} +\frac{1}{2}\sum_{k=1}^{\infty}\frac{1}{2^{n+k}}<\sum_{k=1}^{\infty}\frac{c_{n+k}}{2^{n+k}} +\sum_{k=1}^{\infty}\frac{c_{n+k}}{2^{n+k}}=r_{2n}.$$ Moreover, it can be decomposed into two fast convergent sequences $(a_{2n-1})$ and $(a_{2n})$ of equal terms, since $$a_{2n}=\frac{c_n}{2^n}=\sum_{k=1}^{\infty}\frac{c_n}{2^{n+k}}>\sum_{k=1}^{\infty}\frac{c_{n+k}}{2^{n+k}}=r_{2n}^{(2)}.$$ On the other hand, since $c_{n+k}>\frac{1}{2}$ for all $k$, for the ratio $p_{2n-1}$ we get $$p_{2n-1}=\frac{a_{2n-1}}{r_{2n}}=\frac{c_{n}}{2^{n}}\cdot\frac{1}{2\sum_{k=1}^{\infty}\frac{c_{n+k}}{2^{n+k}}}<\frac{c_n}{2^n}\cdot\frac{1}{2\sum_{k=1}^{\infty}\frac{1}{2\cdot 2^{n+k}}}=\frac{c_n}{2^n}\cdot 2^n=c_n<1,$$ so the sufficient condition from the preceding theorem is not satisfied.*
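Since the estimate above depends only on $c_{n+k}>\frac12$ and $c_{n+k}<c_n$, it is easy to test numerically. The sketch below is ours and takes the concrete choice $c_n=\frac12+2^{-(n+2)}$, one admissible example among many; it confirms interval filling, the fast convergent splitting, and that the ratios $p_{2n-1}$ lie strictly between $\frac12$ and $1$.

```python
from fractions import Fraction

N = 25
c = [Fraction(1, 2) + Fraction(1, 2) ** (n + 2) for n in range(1, N + 1)]   # c_n in (1/2, 1), decreasing
a = [c[n] / 2 ** (n + 1) for n in range(N) for _ in range(2)]               # a_{2n-1} = a_{2n} = c_n / 2^n

def tail(seq, i):
    # sum of the terms after position i in the truncated sequence
    return sum(seq[i + 1:], Fraction(0))

# slow convergence (interval filling), checked away from the truncated end
print(all(a[i] <= tail(a, i) for i in range(len(a) - 10)))
# fast convergence of the two subsequences of equal terms
print(all(s[i] > tail(s, i) for s in (a[0::2], a[1::2]) for i in range(len(a) // 2 - 5)))
# p_1, p_3, p_5, p_7 lie strictly between 1/2 and 1 (in fact below the corresponding c_n)
print([float(a[2 * n] / tail(a, 2 * n + 1)) for n in range(4)])
```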
Now, we give an example of an interval-filling sequence which admits neither a decomposition into two interval-filling sequences nor a decomposition into two fast convergent sequences.
**Example 41**. *We will find a sequence which can be decomposed neither into two interval-filling subsequences nor into two fast convergent subsequences. Let $(a_n)$ be defined as follows: $a_{3n-2}=a_{3n-1}=q^{2n-1}$, $a_{3n}=q^{2n}$ for each $n$, that is, $(a_n)$ is the multigeometric sequence $(a_n)=\frac{1}{q}(1,1,q;q^2)$. We will now find a suitable $q \in (0,1)$. First, we need to break the necessary condition for a decomposition into two interval-filling sequences given in Theorem [Theorem 28](#nadwaprzedzialy){reference-type="ref" reference="nadwaprzedzialy"}. Several inequalities should be satisfied:*
- *$a_{3n-2}+a_{3n-1}> r_{3n-1}\Leftrightarrow 2q^{2n-1}> \frac{q^{2n}+2q^{2n+1}}{1-q^2}\Leftrightarrow 4q^2+q-2< 0\Leftrightarrow q< \frac{1}{8}(\sqrt{33}-1)\approx 0.593$*

- *$a_{3n-1}+a_{3n}> r_{3n}\Leftrightarrow q^{2n-1}+q^{2n}> \frac{2q^{2n+1}+q^{2n+2}}{1-q^2}\Leftrightarrow 2q^3+3q^2-q-1< 0\Leftrightarrow q< \frac{1}{2}(\sqrt{5}-1)\approx 0.618$*

- *$a_{3n}+a_{3n+1}> r_{3n+1}\Leftrightarrow q^{2n}+q^{2n+1}>q^{2n+1}+ \frac{q^{2n+2}+2q^{2n+3}}{1-q^2}\Leftrightarrow 2q^3+2q^2-1< 0\Leftrightarrow \\q< \frac{1}{6}(\sqrt[3]{46-6\sqrt{57}}+\sqrt[3]{46+6\sqrt{57}}-2)\approx 0.565$*
*Hence for $q$ satisfying the above three inequalities (the third of them determines the upper bound) we know that $(a_n)$ has no decomposition into two interval-filling sequences.\
Suppose that $(a_n)$ has a decomposition into $(y_n)$ and $(z_n)$, where both of them are fast convergent. Since some of the terms in $(a_n)$ repeat twice, each such pair of equal terms has to be split between the two sequences (a fast convergent sequence is strictly decreasing, so it cannot contain two equal terms). Hence for every $n$ exactly one of the terms $a_{3n-2}$ and $a_{3n-1}$ belongs to $(y_n)$ and the other one to $(z_n)$. Thus, $(y_n)\supset (q^{2n-1})$ and $(z_n)\supset (q^{2n-1})$. Note that $q^2$ needs to belong to one of them; let us assume that $q^2\in(y_n)$.\
Now, let us consider for which $q$ the sequence $(y_n)$ is not fast convergent. $$y_1=q\leq q^2+\sum_{k=1}^{\infty}q^{2k+1}\leq r_1^y\Leftrightarrow q\leq q^2+\frac{q^3}{1-q^2}\Leftrightarrow q\geq q_0\approx 0.555.$$ Hence we have obtained an interval from which a suitable $q$ can be chosen, in particular $q=0.56$. Then the sequence $(a_n)$ has no decomposition into two fast convergent sequences.*
*Note that the example shows even more. Because of its self-similar multigeometric structure, we cannot decompose $(a_n)$ into two sequences both of which satisfy either the slow or the fast convergence condition for all large enough indices.*
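The numerical claims in Example 41 can be double-checked with a few lines of code. The sketch below is ours and uses plain floating point: it verifies, for $q=0.56$, the three inequalities breaking the necessary condition of Theorem 28 and the inequality blocking a fast convergent splitting, and scans for the admissible window of $q$.

```python
def breaks_interval_filling_split(q):
    # the three inequalities a_{n-1} + a_n > r_n from Example 41
    return (4*q**2 + q - 2 < 0) and (2*q**3 + 3*q**2 - q - 1 < 0) and (2*q**3 + 2*q**2 - 1 < 0)

def blocks_fast_convergent_split(q):
    # y_1 = q <= q^2 + q^3/(1 - q^2): the subsequence containing q^2 is not fast convergent
    return q <= q**2 + q**3 / (1 - q**2)

print(breaks_interval_filling_split(0.56), blocks_fast_convergent_split(0.56))  # True True

window = [round(0.5 + i / 1000, 3) for i in range(100)
          if breaks_interval_filling_split(0.5 + i / 1000)
          and blocks_fast_convergent_split(0.5 + i / 1000)]
print(window[0], window[-1])     # approximately 0.555 and 0.565
```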
**Problem 42**. *Characterize the interval-filling sequences $(a_n)$ for which the decomposition into two or more interval-filling sequences or into two fast convergent sequences is possible, in terms of the sequences $(q_n)$ and $(p_n)$, respectively.*
R. Anisca, M. Ilie, *A technique of studying sums of central Cantor sets*, Canad. Math. Bull. **44** (2001), 12--18.
T. Banakh, A. Bartoszewicz, M. Filipczak, E. Szymonik, *Topological and measure properties of some self-similar sets*, Topol. Methods Nonlinear Anal. **46**(2) (2015), 1013--1028.
M. Banakiewicz, *The Lebesgue measure of some M-cantorval*, J. Math. Anal. Appl. **471** (2019), 170--179.
M. Banakiewicz, F. Prus-Wiśniowski, *M-Cantorvals of Ferens type*, Math. Slovaca **67**(4) (2017), 1--12.
A. Bartoszewicz, S. Gła̧b, J. Marchwicki, *Recovering a purely atomic finite measure from its range*, J. Math. Anal. Appl. **467** (2018), 825--841.
A. Bartoszewicz, M. Filipczak, F. Prus-Wiśniowski, *Topological and algebraic aspects of subsums of series*, Traditional and present-day topics in real analysis, Faculty of Mathematics and Computer Science, University of Łódź, Łódź, 2013, 345--366.
A. Bartoszewicz, M. Filipczak, F. Prus-Wiśniowski, *Semi-fast convergent sequences and $k$-sums of central Cantor sets*, Eur. J. Math. **6** (2020), 1523--1536.
W. Bielas, S. Plewik, M. Walczyńska, *On the center of distances*, Eur. J. Math. **4** (2018), no. 2, 687--698.
Z. Daróczy, A. Járai, I. Katái, *Intervallfüllende Folgen und volladditive Funktionen*, Acta Sci. Math. **50** (1986), 337--350.
Z. Daróczy, I. Katái, *Interval filling sequences and additive functions*, Acta Sci. Math. **52** (1988), 337--347.
K.I. Eroğlu, *On the arithmetic sum of Cantor sets*, Nonlinearity **20** (2007), 1145--1161.
C. Ferens, *On the range of purely atomic probability measures*, Studia Math. **77**(3) (1984), 261--263.
T. Filipczak, P. Nowakowski, *Conditions for the difference set of a central cantor set to be a Cantorval*, Results in Math. **78** (2023), article no. 166.
J. Foran, *Fundamentals of Real Analysis*, Pure and Applied Mathematics, vol. 144, Marcel Dekker, Inc., New York, Basel, Hongkong 1991.
J.A. Guthrie, J.E. Nymann, *The topological structure of the set of subsums of an infinite series*, Colloq. Math. **55** (1988), 323--327.
S. Gła̧b, J. Marchwicki, *Cardinal functions of atomic measures*, Results in Math. **75** (2020), article no. 141.
S. Gła̧b, J. Marchwicki, *Set of uniqueness for Cantorvals*, Results in Math. **78** (2023) article no. 9.
R. Jones, *Achievement sets of sequences*, Am. Math. Mon. **118**(6) (2011), 508--521.
S. Kakeya, *On the partial sums of an infinite series*, Tôhoku Sci. Rep. **3** (1914), 159--164.
J. Marchwicki, J. Miska, *On Kakeya conditions for achievement sets*, Results in Math. **76** (2021), article no. 181.
P. Mendes, F. Oliveira, *On the topological structure of the arithmetic sum of two Cantor sets*, Nonlinearity **7** (1994), 329--343.
C. Moreira, J. Yoccoz, *Stable intersections of Cantor sets with large Hausdorff dimension*, Ann. of Math. **154** (2001), 45--96.
P. Nowakowski, *Conditions for the difference set of a central Cantor set to be a Cantorval. Part II.*, arXiv:2307.08102.
P. Nowakowski, *When the algebraic difference of two central Cantor sets is an interval?*, Ann. Fenn. Math. **48** (2023), 163--185.
J. E. Nymann, *Linear combinations of Cantor sets*, Colloq. Math. **68** (1995), 259--264.
J.E. Nymann, R.A. Sáenz, *On the paper of Guthrie and Nymann on subsums of infinite series*, Colloq. Math. **83** (2000), 1--4.
J. Palis, *Homoclinic orbits, hyperbolic dynamics and dimensions of Cantor sets*, Contemp. Math. **53** (1987), 203--216.
J. Palis, J. C. Yoccoz, *On the arithmetic sum of regular Cantor sets*, Ann. Inst. Henri Poincaré **14**(4) (1997), 439--456.
M. Pourbarat, *On the arithmetic difference of middle Cantor sets*, Discrete Contin. Dyn. Syst. **38**(9) (2018) 4259--4278.
F. Prus-Wiśniowski, F. Tulone, *The arithmetic decomposition of central Cantor sets*, J. Math. Anal. Appl. **467** (2018), 26--31.
A. Sannami, *An example of a regular Cantor set whose difference is a Cantor set with positive Lebesgue measure*, Hokkaido Math. J. **21** (1992), 7--24.
Y. Takahashi, *Sums of two self-similar Cantor sets*, J. Math. Anal. Appl. **477** (2019), 613--626.
Y. Vinishin, V. Markitan, M. Pratsiovytyi, I. Savchenko, *Positive series, whose sets of subsums are Cantorvals*, Proc. International Geom. Center **12**(2) (2019), 26--42 (in Ukrainian).
A.D. Weinstein, B.E. Shapiro, *On the structure of a set of $\overline{\alpha}$-representable numbers*, Izv. Vysš. Učebn. Zaved. Matematika **24** (1980), 8--11.
---
abstract: |
Some combinatorial designs, such as Hadamard matrices, have been extensively researched and are familiar to readers across the spectrum of Science and Engineering. They arise in diverse fields such as cryptography, communication theory, and quantum computing. Objects like this also lend themselves to compelling mathematics problems, such as the Hadamard conjecture. However, complex generalized weighing matrices, which generalize Hadamard matrices, have not received anything like the same level of scrutiny. Motivated by an application to the construction of quantum error-correcting codes, which we outline in the latter sections of this paper, we survey the existing literature on complex generalized weighing matrices. We discuss and extend upon the known existence conditions and constructions, and compile known existence results for small parameters. Some interesting quantum codes are constructed to demonstrate their value.
author:
- |
[Ronan Egan]{.smallcaps} [^1]\
*School of Mathematical Sciences*\
*Dublin City University*\
title: A survey of complex generalized weighing matrices and a construction of quantum error-correcting codes
---
[^2]
[^3]
# Introduction {#intro}
Combinatorial designs are finite objects that satisfy some kind of combinatorial condition, and they take many forms. Many of them are comprehensively surveyed in the Handbook of Combinatorial Designs [@HandbookCD], to which we refer the reader for more information on almost every design mentioned in this paper. Some designs, such as Hadamard matrices, have been extensively researched, due either to their appearance in other fields or to their applications in diverse areas such as cryptography, communication theory, and quantum computing [@Horadam]. Objects like this also lend themselves to compelling mathematics problems: the Hadamard conjecture, proposing the existence of a Hadamard matrix of order $4n$ for all $n \in \mathbb{N}$, has captured the imagination of numerous researchers since it was posed by Paley almost a century ago [@Paley1933]. Other designs are well known only to researchers in closely related fields. Complex generalized weighing matrices, which include Hadamard matrices as a special case, are the subject of this survey. We focus entirely on the general case, and do not survey the extensive literature on the special cases here. This addresses what we feel is a notable gap in the literature, as complex generalized weighing matrices do not appear to have been surveyed elsewhere. They are referenced as an example of a pairwise combinatorial design in de Launey and Flannery's monograph on Algebraic Design Theory [@deLF], but are not analysed in any detail except in the context of results that apply to large families of pairwise combinatorial designs.
To begin, we give the necessary definitions and outline our notation. Throughout, $k$ is a positive integer and $\zeta_{k} = e^{\frac{2\pi\sqrt{-1}}{k}}$ is a primitive $k^{\rm th}$ root of unity. Let $\langle \zeta_{k} \rangle = \{\zeta_{k}^{j} \; : \; 0 \leq j \leq k-1\}$ be the set of all $k^{\rm th}$ roots of unity, and let $\mathcal{U}_{k} = \{0\} \cup \langle \zeta_{k} \rangle$. We denote the set of all $n \times n$ matrices over the complex numbers by $\mathcal{M}_{n}(\mathbb{C})$, and the subset of matrices with entries in $\mathcal{U}_{k}$ by $\mathcal{M}_{n}(k)$. The subset of monomial matrices in $\mathcal{M}_{n}(k)$ is denoted by $\mathrm{Mon}_{n}(k)$. More generally, the set of all $n \times n$ matrices with entries in an alphabet $\mathcal{A}$ containing a zero is denoted by $\mathcal{M}_{n}(\mathcal{A})$, and the set of monomial matrices therein by $\mathrm{Mon}_{n}(\mathcal{A})$. Given a matrix $M \in \mathcal{M}_{n}(\mathcal{A})$, the matrix $S$ obtained from $M$ by replacing each non-zero entry with $1$ is called the *support matrix* of $M$. If $S$ supports $M$, we also say that $S$ *lifts* to $M$.
Given an alphabet $\mathcal{A}$ with the property that $a^{-1} \in \mathcal{A}$ for any non-zero $a \in \mathcal{A}$, we let $\ast$ be the transposition acting on $\mathcal{A}$ such that $a^{\ast} = a^{-1}$ if $a \neq 0$, and $0^{\ast} = 0$. Typically, an alphabet will either be a field, or the set $\mathcal{U}_{k}$.
By a common abuse of notation, when $M = [m_{ij}] \in \mathcal{M}_{n}(\mathcal{A})$, we write $M^{\ast} = [m_{ji}^{\ast}]$ for the Hermitian transpose of $M$. For a complex number $z$, the complex conjugate is denoted by $\overline{z}$. We denote the ring of integers modulo $k$ by $\mathbb{Z}_{k}$. When $k = p$ is prime, the transposition $\ast$ acts on $\mathbb{Z}_{p}$ so that $a^{\ast}$ is the multiplicative inverse of $a$, for all $a \neq 0$. When $q = p^r$ for some prime $p$ and positive integer $r$, we denote by $\mathbb{F}_{q}$ the finite field of order $q$.
Rows and columns of $n \times n$ matrices or sequences of length $n$ are typically indexed by the integers $0,1,\ldots,n-1$. A circulant matrix is an $n \times n$ matrix $C$ which is generated by its first row $[c_{0},c_{1},\ldots,c_{n-1}]$, and is denoted by $C = \mathrm{circ}([c_{0},c_{1},\ldots,c_{n-1}])$. Each row is obtained by shifting the previous row forward cyclically. That is, $C = [c_{j-i}]_{i,j \in \mathbb{Z}_{n}}$.
**Definition 1**. An $n \times n$ matrix $W$ with entries in $\mathcal{U}_{k}$ is a *complex generalized weighing matrix* of *weight* $w$ if $WW^{*} = wI_{n}$, where $W^{*}$ is the complex conjugate transpose of $W$, and $I_{n}$ denotes the $n \times n$ identity matrix. The set of all such matrices is denoted by $\mathrm{CGW}(n,w;k)$.
We will abbreviate to CGW when parameters are unspecified. An $n \times n$ matrix $W$ is a $\mathrm{CGW}(n,w;k)$ if each row of $W$ has exactly $w$ non-zero entries in $\langle \zeta_{k} \rangle$, and the Hermitian inner product of any two distinct rows is zero. If $w=n$, then $W$ is a *Butson Hadamard matrix*, the set of which is denoted by $\mathrm{BH}(n,k)$. If $k = 2$, then $W$ is a *weighing matrix*, the set of which is denoted by $W(n,w)$. If both $k=2$ and $w=n$, then $W$ is a *Hadamard matrix*, the set of which is denoted by $\mathrm{H}(n)$. Hadamard matrices in particular have been studied extensively for over a century; for detailed expositions we refer the reader to any of [@Agaian; @BJL; @HandbookCD; @CraigenSG; @HallCombinatorialTheory; @Horadam; @SeberryOrthogonalDesigns; @SeberryYamada]. Both weighing matrices and Butson Hadamard matrices have been studied frequently, albeit far less than the Hadamard matrices that comprise their intersection. Weighing matrices feature prominently in works of Craigen, Seberry, and their coauthors; see any of [@CraigenWeaving93; @CraigenWeaving95; @CraigendeLauney; @CraigenPhD; @CraigenKha; @SeberryOrthogonalDesigns; @SeberryWhiteman; @SeberryYamada] for examples. For a general background into Butson Hadamard matrices, which are often called generalized Hadamard matrices by different authors, see any of [@Butson; @deLF; @PUGPpaper; @BHPaper2; @MorphismsPaper; @LSO; @FerencThesis], and for a comprehensive survey and an up to date catalog, see [@CHGuide] and [@Karol] respectively.
Despite being the superset containing all of these objects, CGWs have, in their own right, received very little scrutiny outside of these special cases. The first significant work we note is due to Berman [@Berman77; @Berman78]. Berman's constructions, which we will discuss in this paper, reveal several connections to finite geometry and finite fields, and demonstrate that these objects merit study outside of the Butson Hadamard or real weighing matrix cases. Around the same time, Seberry [@Seberry79] and Seberry and Whiteman [@SeberryWhiteman] considered the case where $k=4$ due to a relationship to orthogonal designs. We document these in Section [3](#section:Construct){reference-type="ref" reference="section:Construct"}. Only sporadic work on the topic has since appeared, perhaps most significantly due to Craigen and de Launey [@CraigendeLauney], who studied CGWs that are invariant under a regular group action. To our knowledge, there has been no recent comprehensive survey collating up to date results on CGWs that do not qualify as either real weighing matrices or Butson Hadamard matrices.
In Section [2](#section:Props){reference-type="ref" reference="section:Props"} we discuss and extend upon the known existence conditions. In Section [3](#section:Construct){reference-type="ref" reference="section:Construct"} we describe the known constructions, beginning with direct constructions and then recursive constructions. In some cases, known constructions of objects like weighing matrices are generalized. In Section [4](#section:Quant){reference-type="ref" reference="section:Quant"} we introduce a construction of Hermitian self-orthogonal $q$-ary codes from appropriate CGWs and describe the subsequent approach to building quantum codes. This application motivates our survey. In Section [5](#section:results){reference-type="ref" reference="section:results"} we report on early computational results from this construction. Finally, an appendix follows the paper containing tables collating the information of Sections [2](#section:Props){reference-type="ref" reference="section:Props"} and [3](#section:Construct){reference-type="ref" reference="section:Construct"}, giving existence or nonexistence of $\mathrm{CGW}(n,w;k)$, if known, for all $1 \leq n \leq 15$, $1 \leq w \leq n$ and $2 \leq k \leq 6$.
# Existence conditions {#section:Props}
Existence conditions for $\mathrm{CGW}(n,w;k)$ tend to be number theoretical, but we will combine these with techniques from design theory. A *generalized weighing matrix* $W$ is an $n\times n$ matrix with $w$ non-zero entries in each row and column coming from a finite group $G$ such that $WW^{\ast} = wI_{n}$ over $\mathbb{Z}[G]/\mathbb{Z}G$. Because generalized weighing matrices over groups of prime order $k$ coincide with CGWs over $\mathcal{U}_{k}$ (see, e.g., [@BHPaper2 Lemma 2.2]), non-existence results for generalized weighing matrices over groups of prime order will apply.
## Equivalence
The group action of $\mathrm{Mon}_{n}^{2}(k)$ on a matrix $M \in \mathcal{M}_{n}(k)$ is defined by $M(P,Q) = PMQ^{*}$. This action stabilizes the set $\mathrm{CGW}(n,w;k)$. The orbit of $W$ under this action is the *equivalence class* of $W$. That is, any matrix $W'$ obtainable from $W$ by permuting rows (respectively columns) or multiplying rows (respectively columns) by an element of $\langle \zeta_{k} \rangle$ is also an element of $\mathrm{CGW}(n,w;k)$, and is said to be *equivalent* to $W$. More succinctly, two matrices $W$ and $W'$ are equivalent if $$W' = PWQ^{*}$$ for matrices $P,Q \in \mathrm{Mon}_{n}(k)$, and we write $W \equiv W'$. It is typical to study the set $\mathrm{CGW}(n,w;k)$ through the lens of equivalence. It is a particularly useful tool when proving the non-existence of an element of $\mathrm{CGW}(n,w;k)$. Complete classifications up to equivalence, even for reasonably small $n$, are computationally difficult. Equivalence classes of Hadamard matrices of order up to $32$ have been classified, but the number of classes appears to grow extremely rapidly with $n$, and the problem becomes computationally infeasible very quickly. Classifying Butson matrices or weighing matrices is even more difficult. Harada et al. [@HLMT] used coding theory techniques to classify $\mathrm{BH}(18,3)$ up to equivalence. The equivalence classes of $\mathrm{BH}(n,4)$ for $n \in \{10,12,14\}$ were determined by Lampio, Szöllősi and Östergård in [@LSO], and they later classified $\mathrm{BH}(21,3)$, $\mathrm{BH}(16,4)$ and $\mathrm{BH}(14,6)$ in [@LOS], using computational methods. Other classifications restrict to matrices with extra properties. Group developed and cocyclic matrices are examples of special cases that have a lot of extra structure, reducing the search space significantly. The cocyclic equivalence classes in $\mathrm{BH}(n,p)$ were classified for all $np \leq 100$ where $p$ is an odd prime in [@BHPaper2], and the cocyclic real Hadamard matrices have been classified at all orders up to $36$ in [@POCRod], and at orders $44$ and $52$ in [@POCSanHei].
## Number theoretical conditions
The inner product of any pair of distinct rows or columns of a CGW must equal zero; hence the main number theoretical restrictions follow from a Theorem of Lam and Leung on vanishing sums of roots of unity [@LamLeung].
**Theorem 2**. *If $\sum_{j=0}^{k-1} c_{j}\zeta_{k}^{j} = 0$ for non-negative integers $c_{0},\ldots, c_{k-1}$, and $p_1,\dots,p_r$ are the primes dividing $k$, then $\sum_{j=0}^{k-1} c_{j} = \sum_{\ell=1}^{r}d_{\ell}p_{\ell}$ where $d_1,\dots,d_{r}$ are non-negative integers.*
A special case of this is the well known fact that if $\sum_{i=0}^{p-1} c_{i}\zeta_{p}^{i} = 0$ for a prime $p$, then $\sum_{i=0}^{p-1} c_{i} = dp$ for some non-negative integer $d$, and consequently $c_{0} = c_{1} = \cdots = c_{p-1} = d$. Hence, a $\mathrm{CGW}(n,w;p^r)$ exists only if the non-zero entries in any two distinct rows or columns coincide in a multiple of $p$ positions.
The question of existence is more complicated when $k$ is composite, as Theorem [Theorem 2](#LamLeungVan){reference-type="ref" reference="LamLeungVan"} is a less significant barrier. When $k=6$, it is no restriction at all, but some further conditions on their existence are described below. Formulating precise existence conditions in this case is far from straightforward: there is a known element of $\mathrm{BH}(7,6)$ so the order can be coprime to $6$, but the set $\mathrm{BH}(5,6)$ is empty. Perhaps the most general nonexistence results follow from a condition on the determinant of the Gram matrix.
**Lemma 3**. *Suppose that there exists $W \in \mathrm{CGW}(n,w;k)$. Then $|\mathrm{det}(W)|^2 = w^n$.*
*Proof.* If $W \in \mathrm{CGW}(n,w;k)$ then $WW^{\ast} = wI_{n}$. It follows that $$|\mathrm{det}(W)|^2 = \mathrm{det}(W)\mathrm{det}(W^{\ast}) = \mathrm{det}(WW^{\ast}) = w^n.$$ ◻
Lemma [Lemma 3](#lem:norms){reference-type="ref" reference="lem:norms"} implies the well known condition that a $\mathrm{CGW}(n,w;2)$ exists when $n$ is odd only if $w$ is a square. The Sum of Two Squares Theorem states that $w$ is expressible as the sum of two squares if and only if the square free part of $w$ is not divisible by any prime $p \equiv 3 \mod 4$. It follows that a $\mathrm{CGW}(n,w;4)$ exists when $n$ is odd only if $w$ is the sum of two integer squares. We will see Lemma [Lemma 3](#lem:norms){reference-type="ref" reference="lem:norms"} applied again later in this section.
Many of the better known, and easiest to apply, non-existence results apply when $k$ is prime. In the case of real weighing matrices, many of the strongest non-existence conditions were described in the 1970s by Geramita, Geramita and Seberry Wallis [@GGSW]. When $k \neq 2$, some of these results have been generalized and extended. Some of the best known conditions when $k$ is an odd prime are due to de Launey [@deLauneyExistence], who studied generalized weighing matrices. For consistency, we present the relevant results in the language of CGWs.
**Theorem 4** (cf. Theorem 1.2, [@deLauneyExistence]). *If there exists a $\mathrm{CGW}(n,w;k)$ with $n \neq w$ and $k$ a prime, then the following must hold:*
(i) *$w(w-1) \equiv 0 \ \mathrm{mod}\ k$.*
(ii) *$(n-w)^2 - (n-w) \geq \sigma (n-1)$ where $0 \leq \sigma \leq k-1$ and $\sigma \equiv n-2w \ \mathrm{mod}\ k$.*
(iii) *If $n$ is odd and $k = 2$, then $w$ is a square.*
**Theorem 5** (cf. Theorem 5.1, [@deLauneyExistence]). *Suppose there exists a $\mathrm{CGW}(n,w;k)$ with $n$ odd and $k$ a prime. Suppose that $m \not\equiv 0 \ \mathrm{mod}\ k$ is an integer dividing the square free part of $w$. Then the order of $m$ modulo $k$ is odd.*
Theorem [Theorem 5](#deLNE){reference-type="ref" reference="deLNE"} is stated in [@deLauneyExistence] for generalized weighing matrices over a group $G$ of prime order $p$; when $G = \langle \zeta_{k} \rangle$ for a prime $k>2$, necessarily $p = k$. If, for example, $n = w = 15$ and $k = 5$, then $w$ is square free, so the existence of a $\mathrm{BH}(15,5)$ requires that $m=3$ is of odd order modulo $5$, which is false. In the same paper, the possibility of a $\mathrm{CGW}(19,10;5)$ is eliminated.
These Theorems are effective for eliminating the possibility of finding elements of $\mathrm{CGW}(n,w;k)$ when $k$ is an odd prime, but they generally fail to extend to composite $k$. However, for Butson matrices, i.e., matrices of full weight, these results were partially extended by Winterhof [@Winterhof] using number theoretic techniques and the restriction of Lemma [Lemma 3](#lem:norms){reference-type="ref" reference="lem:norms"} to include certain cases where $k = p^{r}$ or $k=2p^{r}$ for a prime $p \equiv 3 \ \mathrm{mod}\ 4$. The main result is the following.
**Theorem 6** (Theorem 5, [@Winterhof]). *Let $k = p^{r}$ be a prime power where $p \equiv 3 \ \mathrm{mod}\ 4$, and let $n = p^{\ell}a^2m$ be odd where $p$ does not divide $m$, $m$ is square free, and $\ell$ is odd. Then there is no $\mathrm{BH}(n,k)$ or $\mathrm{BH}(n,2k)$ if there is a prime $q \, \mid \, m$ such that $q$ is a non-quadratic residue modulo $p$.*
The original statement of Theorem [Theorem 6](#thm:Wint){reference-type="ref" reference="thm:Wint"} does not include the condition that $\ell$ is odd; however, this is necessary. Without it, the existence of a $\mathrm{BH}(25,6)$ would be a counterexample. Of the various implications of Winterhof's Theorem, perhaps the most frequently cited is a restriction eliminating the existence of a $\mathrm{BH}(3^{\ell}p^{d},6)$ where $p \equiv 5 \ \mathrm{mod}\ 6$ is prime and $d$ is odd, which includes cases such as $\mathrm{BH}(5,6)$ and $\mathrm{BH}(15,6)$. We can similarly use Lemma [Lemma 3](#lem:norms){reference-type="ref" reference="lem:norms"} to eliminate existence of CGWs. The following is essentially the same argument as the proof of [@FerencThesis Corollary 1.4.6].
**Proposition 7**. *There is no $\mathrm{CGW}(n,w;6)$ when $n$ is odd and $w \equiv 2 \mod 3$.*
*Proof.* Suppose $W \in \mathrm{CGW}(n,w;6)$. Then $|\mathrm{det}(W)|^2 = w^n$. Since any element of $\mathcal{U}_{6}$ can be written in the form $a + b\zeta_{3}$ for integers $a$ and $b$, it follows that there are integers $a$ and $b$ such that $$w^n = |\mathrm{det}(W)|^2 = |a + b\zeta_{3}|^2 = a^2 + b^2 - ab.$$ It is not possible that $a^{2} + b^{2} - ab \equiv 2 \mod 3$, and so it cannot be that $n$ is odd and $w \equiv 2 \mod 3$. ◻
This result motivated the following Propositions. The proofs are similar, but none are omitted as there are subtle differences.
**Proposition 8**. *There is no $\mathrm{CGW}(n,w;6)$ when $n$ is odd and $w \equiv 2 \mod 4$.*
*Proof.* As before, if such a matrix exists we require that there are integers $a$ and $b$ such that $$w^n = |\mathrm{det}(W)|^2 = |a + b\zeta_{3}|^2 = a^2 + b^2 - ab.$$ First observe that $a^2 + b^2 - ab \equiv 0 \mod 4$ only if both $a$ and $b$ are even. Let $w = 2m$ for some odd $m$. We show that there is no solution to $$(2m)^{n} = a^2 + b^2 - ab,$$ or equivalently, $$(2m)^{n} - ab = (a-b)^{2}.$$ Now let $2^t$ be the largest power of $2$ dividing both $a$ and $b$. Then we require a solution to $$\label{eq:2mod4}
\frac{(2m)^{n}}{2^{2t}} - \frac{ab}{2^{2t}} = \frac{(a-b)^{2}}{2^{2t}}.$$ We split the remainder of the proof into two cases.
First suppose that one of $a$ or $b$ is divisible by $2^{t+1}$, but not both. Then $\frac{a-b}{2^{t}}$ is odd, and so the right hand side of Equation [\[eq:2mod4\]](#eq:2mod4){reference-type="eqref" reference="eq:2mod4"} is odd. However, on the left hand side, the term $\frac{ab}{2^{2t}}$ is even, and the term $\frac{(2m)^{n}}{2^{2t}}$ is either an even integer, or not an integer. Thus Equation [\[eq:2mod4\]](#eq:2mod4){reference-type="eqref" reference="eq:2mod4"} is not satisfied.
Next suppose that neither $a$ nor $b$ are divisible by $2^{t+1}$. Then $\frac{a-b}{2^{t}}$ is even, and so the right hand side of Equation [\[eq:2mod4\]](#eq:2mod4){reference-type="eqref" reference="eq:2mod4"} is even. However, on the left hand side, the term $\frac{ab}{2^{2t}}$ is odd, and the term $\frac{(2m)^{n}}{2^{2t}}$ is again either an even integer, or not an integer. Thus Equation [\[eq:2mod4\]](#eq:2mod4){reference-type="eqref" reference="eq:2mod4"} is again not satisfied. This proves the claim. ◻
**Proposition 9**. *There is no $\mathrm{CGW}(n,w;6)$ when $n$ is odd and $w \equiv 6 \mod 9$.*
*Proof.* As before, if such a matrix exists we require that there are integers $a$ and $b$ such that $$w^n = |\mathrm{det}(W)|^2 = |a + b\zeta_{3}|^2 = a^2 + b^2 - ab.$$ Let $w = 3m$ for some $m \equiv 2 \ \mathrm{mod}\ 3$. This time we show that there is no solution to $$(3m)^{n} - ab = (a-b)^{2}.$$ Let $3^t$ be the largest power of $3$ dividing both $a$ and $b$. Then we require a solution to $$\frac{(3m)^{n}}{3^{2t}} - \frac{ab}{3^{2t}} = \frac{(a-b)^{2}}{3^{2t}}.$$ We split the remainder of the proof into two cases.
First suppose that one of $a$ or $b$ is divisible by $3^{t+1}$, but not both. Then $\frac{a-b}{3^{t}}$ is not a multiple of $3$. However, if $n > 2t$ then both terms on the left hand side are multiples of $3$, and if $n < 2t$ the left hand side is not an integer, so the equation is not satisfied.
Next suppose that neither $a$ nor $b$ are divisible by $3^{t+1}$, and write $a = 3^{t}a'$ and $b = 3^{t}b'$, so that $a',b' \not\equiv 0 \ \mathrm{mod}\ 3$. If $a' \equiv b' \ \mathrm{mod}\ 3$, then the right hand side is a multiple of $3$, while the left hand side is either not an integer or congruent to $-a'b' \equiv 2 \ \mathrm{mod}\ 3$, so no solution exists. So suppose $a' \not\equiv b' \ \mathrm{mod}\ 3$. In this case, the term $\frac{(a-b)^{2}}{3^{2t}} \equiv 1 \ \mathrm{mod}\ 3$. Thus a solution is only possible if $n > 2t$ and if $\frac{ab}{3^{2t}} \equiv 2 \ \mathrm{mod}\ 3$, and without loss of generality, $a' \equiv 1 \ \mathrm{mod}\ 3$ and $b' \equiv 2 \ \mathrm{mod}\ 3$. Now consider the equivalent expression $$(3m)^{n} = (a+b)^{2} - 3ab.$$ In this expression $(a+b)^{2} = (3^{t}(a'+b'))^{2}$ is divisible by $3^{2t+2}$, but the highest power of $3$ dividing $3ab$ is $3^{2t+1}$. So if $$\frac{(3m)^{n}}{3^{2t+1}} = \frac{(a+b)^{2}}{3^{2t+1}} - \frac{3ab}{3^{2t+1}},$$ then the right hand side is congruent to $1 \ \mathrm{mod}\ 3$. However, either $n > 2t+1$ and $\frac{(3m)^{n}}{3^{2t+1}} \equiv 0 \ \mathrm{mod}\ 3$, or $n = 2t+1$ and $\frac{(3m)^{n}}{3^{2t+1}} = m^{2t+1} \equiv 2 \ \mathrm{mod}\ 3$. Thus no solution exists. ◻
Finally, the following generalizes a well known non-existence result for Butson Hadamard matrices.
**Proposition 10**. *Let $p \equiv 2 \ \mathrm{mod}\ 3$ be a prime and let the squarefree part of $w$ be divisible by $p$. Then there is no $\mathrm{CGW}(n,w;6)$ when $n$ is odd.*
*Proof.* Let $w = p^{r}m$ for odd $r$ and $m \not\equiv 0 \ \mathrm{mod}\ p$. The first part is similar to the proofs of the previous Propositions. We show that there is no solution to $$(p^{r}m)^{n} - ab = (a-b)^{2}.$$ Let $p^t$ be the largest power of $p$ dividing both $a$ and $b$. Then we require a solution to $$\frac{(p^{r}m)^{n}}{p^{2t}} - \frac{ab}{p^{2t}} = \frac{(a-b)^{2}}{p^{2t}}.$$
Suppose that one of $a$ or $b$ is divisible by $p^{t+1}$, but not both. Then $\frac{a-b}{p^{t}}$ is not a multiple of $p$. However, if $n > 2t$ then both terms on the left hand side are multiples of $p$, and if $n < 2t$ the left hand side is not an integer, so the equation is not satisfied.
Next suppose that neither $a$ nor $b$ are divisible by $p^{t+1}$. On the left hand side, $\frac{(p^{r}m)^{n}}{p^{2t}}$ is an integer only if it is a multiple of $p$, and $\frac{ab}{p^{2t}}$ is not a multiple of $p$. Letting $a = p^{t}a'$ and $b = p^{t}b'$ for $a',b' \not\equiv 0 \ \mathrm{mod}\ p$, a solution can only exist if $$-a'b' \equiv (a'-b')^{2} \ \mathrm{mod}\ p.$$ Rearranging, this implies that $$a'^{2} + b'^{2} \equiv a'b' \ \mathrm{mod}\ p.$$ Dividing by $b'^{2}$ and letting $x = a'(b'^{-1})$, this expression reduces to $$x^2 - x + 1 \equiv 0 \ \mathrm{mod}\ p.$$ Should a solution to this expression exist, then $x^{2} \equiv x - 1 \ \mathrm{mod}\ p$, so $(x-1)^{2} \equiv -x$ and hence $(x-1)^4 \equiv x^{2} \equiv x-1 \ \mathrm{mod}\ p$. Thus either $x \equiv 1 \ \mathrm{mod}\ p$, or $x-1 \equiv 1 \ \mathrm{mod}\ p$, or $(x-1)$ is an element of multiplicative order $3$ in $\mathbb{Z}_{p}$. The first two cases give $x \equiv 1$ or $x \equiv 2 \ \mathrm{mod}\ p$, neither of which satisfies $x^2 - x + 1 \equiv 0 \ \mathrm{mod}\ p$ since $p \neq 3$, so we must have the latter. However, if $p \equiv 2 \ \mathrm{mod}\ 3$, then $3$ does not divide $p-1$, so there is no element of multiplicative order $3$ in $\mathbb{Z}_{p}$, so there can be no solution. ◻
*Remark 11*. Letting $w=n$ and assuming that $n$ is odd, Proposition [Proposition 10](#prop:genNoCGW){reference-type="ref" reference="prop:genNoCGW"} recovers the known condition that a $\mathrm{BH}(n,6)$ exists only if the squarefree part of $n$ is not divisible by a prime $p \equiv 5 \ \mathrm{mod}\ 6$, which is a special case of Theorem [Theorem 6](#thm:Wint){reference-type="ref" reference="thm:Wint"}.
## Block designs and the lifting problem
When the results already outlined in this section are insufficient, some cases need to be investigated individually. For this purpose, particularly when $k$ is prime, it is often easier to consider whether or not the support matrix of a CGW, should it exist, can meet some necessary conditions. In small cases, we can use a well known restriction on the existence of block designs. First we need a definition. Let $n$, $w$ and $\lambda$ be integers where $n > w > \lambda \geq 0$. Let $X$ be a set of size $n$. A *symmetric balanced incomplete block design* $\mathrm{SBIBD}(n,w,\lambda)$ is a set of $n$ subsets of $X$ of size $w$, called *blocks*, such that each unordered pair of distinct elements of $X$ is contained in exactly $\lambda$ blocks. If $A$ is the incidence matrix of the $\mathrm{SBIBD}(n,w,\lambda)$, then $$AA^{\top} = wI_{n} + \lambda(J_{n} - I_{n}),$$ where $J_{n}$ denotes the $n \times n$ matrix of all ones. It is a well known necessary condition that a $\mathrm{SBIBD}(n,w,\lambda)$ exists only if $$\label{eq:SBIBD}
\lambda(n-1) = w(w-1).$$
This condition will be useful for eliminating the possibility of a $\mathrm{CGW}(n,w;k)$ for certain small parameters. A reason for this is that we can sometimes observe that a $\mathrm{CGW}(n,w;k)$ can only exist if the support matrix is the incidence matrix of some $\mathrm{SBIBD}(n,w,\lambda)$. For example, it can be shown that if a $\mathrm{CGW}(11,5;4)$ exists, then its support must be the incidence matrix of a $\mathrm{SBIBD}(11,5,2)$. Such a design exists, but it is unique, so we only need to check if it is possible that the incidence matrix of this design can support a $\mathrm{CGW}(11,5;4)$, which we can do by hand. This is one way to eliminate the existence of a $\mathrm{CGW}(11,5;4)$. It is an example of the following problem.
**Problem 12** (The lifting problem). Given a $(0,1)$-matrix $S$, does $S$ lift to a $\mathrm{CGW}(n,w;k)$?
In several cases, non-existence of a $\mathrm{CGW}(n,w;k)$ is verified by showing that there does not exist a $(0,1)$-matrix $S$ that lifts to a $\mathrm{CGW}(n,w;k)$ for the given parameters. Completing the unfilled entries in the existence tables in Appendix [6](#App:Tables){reference-type="ref" reference="App:Tables"} may require solving the lifting problem, as potential support matrices exist in those cases.
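To illustrate how the lifting problem can be attacked computationally, the following is a minimal sketch (assuming Python with numpy; the helper name `lifts_to_cgw` is ours and not drawn from the literature) of a backtracking search: it fills the non-zero positions of a candidate support row by row with $k^{\rm th}$ roots of unity, pruning whenever a completed row fails to be orthogonal to an earlier row. As a toy check, it confirms that $J_{5} - I_{5}$ lifts to a $\mathrm{CGW}(5,4;3)$.

```python
import numpy as np
from itertools import product

def lifts_to_cgw(S, k, tol=1e-9):
    """Backtracking search: does the (0,1) support matrix S lift to a CGW(n, w; k)?

    Returns a lifted matrix W with W W^* = w I if one exists, else None.
    The row weights of S are assumed constant (equal to w)."""
    n = S.shape[0]
    roots = np.exp(2j * np.pi * np.arange(k) / k)
    supports = [np.flatnonzero(S[i]) for i in range(n)]
    W = np.zeros((n, n), dtype=complex)

    def fill(i):
        if i == n:
            return True
        for choice in product(roots, repeat=len(supports[i])):
            W[i, :] = 0
            W[i, supports[i]] = choice
            # prune: the new row must be orthogonal to every earlier row
            if all(abs(np.vdot(W[j], W[i])) < tol for j in range(i)):
                if fill(i + 1):
                    return True
        W[i, :] = 0
        return False

    return W.copy() if fill(0) else None

S = np.ones((5, 5), dtype=int) - np.eye(5, dtype=int)   # J_5 - I_5
W = lifts_to_cgw(S, 3)
print(W is not None)                                     # True: the support lifts
print(np.allclose(W @ W.conj().T, 4 * np.eye(5)))        # True: W is a CGW(5,4;3)
```

The same routine, applied to a unique candidate support such as the incidence matrix of the $(11,5,2)$-design, settles the corresponding lifting question by exhaustion.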
## Sporadic non-existence conditions
One of our aims will be to settle the question of existence for as many orders $1 \leq n \leq 15$ and weights $1 \leq w \leq n$ as we can for small $k$; see Section [3.6](#section:Existence){reference-type="ref" reference="section:Existence"} and the Tables in Appendix [6](#App:Tables){reference-type="ref" reference="App:Tables"}. Non-existence is mostly determined by the results already described in this section, but occasionally some more specialized results are required. Existence is in most cases given as a result of one of the constructions of Section [3](#section:Construct){reference-type="ref" reference="section:Construct"}. In some cases we can prove non-existence for certain parameters individually, which often reduces to determining if a support matrix can exist, and if so, trying to solve the lifting problem. This section is not intended to be comprehensive, but to demonstrate the kind of methods that can be implemented at small orders. We give some examples here.
**Proposition 13**. *There exists a $\mathrm{CGW}(n,4;3)$ if and only if $n \equiv 0 \ \mathrm{mod}\ 5$.*
*Proof.* Let $W \in \mathrm{CGW}(n,4;3)$ and let $S$ be the support matrix of $W$. Then the dot product of any two distinct rows must be either $0$ or $3$. Any pair of the four rows that contain a $1$ in column $1$ must therefore share a $1$ in exactly two other columns, and so up to permutation equivalence, these rows are of the form $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{ccccc|ccc}
1 & 1 & 1 & 1 & 0 & 0 & \cdots & 0 \\
1 & 1 & 1 & 0 & 1 & 0 & \cdots & 0 \\
1 & 1 & 0 & 1 & 1 & 0 & \cdots & 0 \\
1 & 0 & 1 & 1 & 1 & 0 & \cdots & 0 \end{array}\right].}$$ In this configuration, any two of columns $2,3,4,5$ share a $1$ in exactly $2$ of these rows, and each has only one further $1$ remaining in the column, and so we can immediately deduce a fifth row of the matrix, which up to equivalence takes the form $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{ccccc|ccc}
1 & 1 & 1 & 1 & 0 & 0 & \cdots & 0 \\
1 & 1 & 1 & 0 & 1 & 0 & \cdots & 0 \\
1 & 1 & 0 & 1 & 1 & 0 & \cdots & 0 \\
1 & 0 & 1 & 1 & 1 & 0 & \cdots & 0 \\
0 & 1 & 1 & 1 & 1 & 0 & \cdots & 0 \end{array}\right] = [C ~ \mid ~ 0_{5,n-5}].}$$ Proceeding in the same way, we find that $S$ must be permutation equivalent to a block diagonal matrix with the $5 \times 5$ matrix $C$ as each block on the diagonal. The claim that $n \equiv 0 \ \mathrm{mod}\ 5$ follows immediately. To see that $n \equiv 0 \ \mathrm{mod}\ 5$ is sufficient, take a direct sum of copies of the $\mathrm{CGW}(5,4;3)$ of Example [Example 21](#ex:Berman){reference-type="ref" reference="ex:Berman"}, whose support is $C$. ◻
**Proposition 14**. *There is no matrix in $\mathrm{CGW}(10,6;3)$.*
*Proof.* Should such a matrix $W$ exist, then it must have exactly $4$ zeros in each row and column, and in any distinct row/column they must share the entry zero in either $1$ or $4$ positions. Suppose first that no two rows share a zero in $4$ positions, and so each pair share a zero in exactly one position. Then if $S$ is the support matrix of $W$, the matrix $J_{n} - S$ should be the incidence matrix of a $\mathrm{SBIBD}(10,4,1)$. However, these parameters contradict Equation [\[eq:SBIBD\]](#eq:SBIBD){reference-type="eqref" reference="eq:SBIBD"}.
Next suppose that some pair of distinct rows have their zeros in the same four columns. Then because these columns now share a zero in at least $2$ positions, they must also do so in $4$. As a result, up to equivalence, the support matrix must have $4$ rows such that it takes the form $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccc|ccc}
1 & 1 & 1 & 1 & 1 & 1 & 0 & \cdots & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 0 & \cdots & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 0 & \cdots & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 0 & \cdots & 0 \end{array}\right].}$$ Now, in any distinct pair of the first six columns, the entries equal to $1$ are in $4$ common rows, and so the remaining two $1$s must also be in common rows. Thus, up to equivalence, the two subsequent rows take the form $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccc|ccc}
1 & 1 & 1 & 1 & 1 & 1 & \ast & \cdots & \ast \\
1 & 1 & 1 & 1 & 1 & 1 & \ast & \cdots & \ast \end{array}\right].}$$ In order that the rows have weight $6$, the entries marked $\ast$ must be zero, but then the corresponding columns would have weight $\leq 4$, and so we have a contradiction. ◻
**Proposition 15**. *There is no $\mathrm{CGW}(10,7;4)$.*
*Proof.* Suppose that $W \in \mathrm{CGW}(10,7;4)$ and let $S$ be the support matrix of $W$. The positions of the ones in any two rows intersect in either $4$ or $6$ places. If positions of the ones in all pairs of distinct rows intersected in exactly $4$ places then $S$ would describe a $(10,7,4)$-design, which is forbidden by Equation [\[eq:SBIBD\]](#eq:SBIBD){reference-type="eqref" reference="eq:SBIBD"}. So at least two rows share ones in $6$ positions, and up to equivalence the first two rows of $S$ are $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccccccc}
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \end{array}\right].}$$ Now in any subsequent row, if there are an even number of ones in positions $3$ to $8$, then both entries in the positions $1$ and $2$ are zero. If there are an odd number of ones in positions $3$ to $8$, then both entries in the positions $1$ and $2$ are one. It follows that, up to equivalence, $S$ takes the form $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc|cccccccc}
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ \hline
1 & 1 & & & & & & & & \\
1 & 1 & & & & & & & & \\
1 & 1 & & & & & & & & \\
1 & 1 & & & & & & & & \\
1 & 1 & & & & & & & & \\
1 & 1 & & & & & & & & \\
0 & 0 & & & & & & & & \\
0 & 0 & & & & & & & & \end{array}\right].}$$ Now, in rows $3$ to $8$, there are an odd number of ones in columns $3$ to $8$, and so an even number of ones in columns $9$ and $10$. Since zeros in columns $9$ and $10$ already meet in rows $1$ and $2$, this cannot happen again and so both entries in rows $3$ to $8$ must equal $1$. Completing rows $9$ and $10$ is similar, and we find $S$ is of the form $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc|cccccc|cc}
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ \hline
1 & 1 & & & & & & & 1 & 1 \\
1 & 1 & & & & & & & 1 & 1 \\
1 & 1 & & & & & & & 1 & 1 \\
1 & 1 & & & & & & & 1 & 1 \\
1 & 1 & & & & & & & 1 & 1 \\
1 & 1 & & & & & & & 1 & 1 \\ \hline
0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \end{array}\right].}$$
Now consider the $6 \times 6$ submatrix in the centre, which must have exactly three entries equal to one in each row and column, and the ones in any pair of rows must meet in either $0$ or $2$ positions. If the ones in any two rows meet in zero positions, it is impossible to complete a third row that meets each of these two in $0$ or $2$ positions, so they must all meet in exactly two positions. However this would imply that the submatrix in the centre describes a $(6,3,2)$-design, which is also forbidden by Equation [\[eq:SBIBD\]](#eq:SBIBD){reference-type="eqref" reference="eq:SBIBD"}. ◻
We give one more example of this kind of argument.
**Proposition 16**. *There is no $\mathrm{CGW}(11,5;4)$.*
*Proof.* Using similar arguments to those of Proposition [Proposition 15](#prop:10-7-4){reference-type="ref" reference="prop:10-7-4"}, it can be shown that the support matrix of such a matrix must be the incidence matrix of a $(11,5,2)$-design. There is, up to equivalence, exactly one such design (see, e.g., [@HandbookCD]). Thus we must be able to solve the lifting problem for this particular support matrix if a $\mathrm{CGW}(11,5;4)$ is to exist. However, it is not difficult to verify that this is impossible. We omit details for brevity. ◻
# Constructions {#section:Construct}
In this section we outline the best known constructions of CGWs. We begin with direct constructions and infinite families, including a summary of the best known work on the topic, due to Berman, and to Seberry and Whiteman. We then consider various recursive constructions, including standard direct sum or tensor product constructions, and more general recursive constructions such as the powerful method of weaving introduced by Craigen. We start with a method strongly influenced by a familiar construction of conference matrices due to Paley.
## Generalized Paley
The most famous constructions of an infinite family of Hadamard matrices are due to Paley. There are two constructions yielding what are now known as the type I and type II Paley Hadamard matrices. Both constructions are built on circulant cores, obtained by applying the quadratic character to the elements of a finite field $\mathbb{F}_{q}$. The next construction we introduce is not strictly a generalization of Paley's construction of the circulant core, but bears a strong enough resemblance that we refer to this as a generalized Paley construction. Let $p$ and $q$ be primes, with $q \equiv 1 \mod p$. Let $x$ be a multiplicative generator of the non-zero elements of $\mathbb{Z}_{q}$. Consider the map $\phi : \mathbb{Z}_{q} \rightarrow \langle \zeta_{p} \rangle \cup \{0\}$ defined by setting $\phi(x^{j}) = \zeta_{p}^{j}$ for all $1 \leq j \leq q-1$, and setting $\phi(0) = 0$. Then $\phi$ has the following two properties:
- $\phi(yz) = \phi(y)\phi(z)$ for all $y,z \in \mathbb{Z}_{q}$; and

- $\phi(y^{*}) = \phi(y)^{*}$ for all $y \in \mathbb{Z}_{q}$.
**Lemma 17**. *Let $C = \mathrm{circ}([\phi(x) \; : \; 0 \leq x \leq q-1])$. Then $CC^{*} = qI_{q} - J_{q}$.*
*Proof.* Observe that each row has exactly $q-1$ non-zero entries, so the diagonal entries of $CC^{*}$ are clearly as claimed. It remains to show that the Hermitian inner product of any two distinct rows $r_{i}$ and $r_{j}$ of $C$ is $-1$. Since $C$ is circulant, this inner product is $$\langle r_{i},r_{j} \rangle = \sum_{x \in \mathbb{Z}_{q}}\phi(x)\phi(x-s)^{*}$$ for some $s \neq 0$. Using the properties of $\phi$ we observe that $$\phi(x)\phi(x-s)^{*} = \phi(x(x-s)^{*}).$$ Now, $x(x-s)^{*} = 0$ if and only if $x = 0,s$. If $x,y \not\in\{0,s\}$, then $$\begin{aligned}
x(x-s)^{*} &= y(y-s)^{*}\\
\Leftrightarrow x(y-s) &= y(x-s) \\
\Leftrightarrow xs &= ys \\
\Leftrightarrow x &= y.\end{aligned}$$ Further, $x(x-s)^{*} = 1$ only if $s = 0$. It follows that when $s \neq 0$, the multiset $\{x(x-s)^{*} \; : \; x \in \mathbb{Z}_{q}\setminus \{0,s\}\} = \{2,3,\ldots,q-1\}$. Consequently, by Theorem [Theorem 2](#LamLeungVan){reference-type="ref" reference="LamLeungVan"}, $\sum_{x \in \mathbb{Z}_{q}}\phi(x)\phi(x-s)^{*} = -1$, as required. ◻
The following is now immediate.
**Theorem 18**. *Let $C = \mathrm{circ}([\phi(x) \; : \; 0 \leq x \leq q-1])$. Then the matrix $$W = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{c|c}
0 & {\bf 1} \\ \hline
{\bf 1}^{\top} & C \end{array}\right]},$$ is a $\mathrm{CGW}(q+1,q;p)$.*
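As a quick illustration, the following is a minimal computational sketch (assuming Python with numpy; the function name `generalized_paley_cgw` is ours) of the construction of Theorem 18: it builds the circulant core $C$ and the bordered matrix $W$ for a few small pairs $(p,q)$ with $q \equiv 1 \ \mathrm{mod}\ p$, and checks that $WW^{*} = qI_{q+1}$.

```python
import numpy as np

def generalized_paley_cgw(p, q):
    """Build the CGW(q+1, q; p) of Theorem 18, for primes p, q with q = 1 (mod p)."""
    assert (q - 1) % p == 0
    # find a multiplicative generator x of the non-zero elements of Z_q
    x = next(g for g in range(2, q)
             if len({pow(g, j, q) for j in range(1, q)}) == q - 1)
    zeta = np.exp(2j * np.pi / p)
    phi = np.zeros(q, dtype=complex)          # phi(x^j) = zeta_p^j, phi(0) = 0
    for j in range(1, q):
        phi[pow(x, j, q)] = zeta ** (j % p)
    # circulant core C = [phi(j - i)] and the bordered matrix W of Theorem 18
    C = np.array([[phi[(j - i) % q] for j in range(q)] for i in range(q)])
    W = np.zeros((q + 1, q + 1), dtype=complex)
    W[0, 1:] = 1
    W[1:, 0] = 1
    W[1:, 1:] = C
    return W

for p, q in [(3, 7), (5, 11), (7, 29)]:
    W = generalized_paley_cgw(p, q)
    assert np.allclose(W @ W.conj().T, q * np.eye(q + 1))
print("verified CGW(q+1, q; p) for (p, q) in", [(3, 7), (5, 11), (7, 29)])
```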
## Berman's constructions
The earliest constructions of CGWs that we know of are due to Berman [@Berman77; @Berman78]. We have only been able to obtain a copy of the more recent paper [@Berman78], which claims to generalize the constructions in [@Berman77] which are limited to real weighing matrices. Families are constructed using connections to finite geometry.
Let $p$, $n$, and $t$ be positive integers with $p$ a prime. Let $F$ be the finite field $\mathbb{F}_{p^{n}}$ and let $P'$ be the set of all points in the affine space $F^{t}$, excluding the origin ${\bf 0} = (0,0,\ldots,0)$. Let $H'$ denote the set of hyperplanes of $F^{t}$ that do not include ${\bf 0}$. Hence $|P'| = |H'| = p^{tn}-1$. A hyperplane in $F^{t}$ can be described by a linear equation of the form $$u_{1}x_{1} + \cdots + u_{t}x_{t} = b$$ for an arbitrary constant $b \in F$. In order for this construction to work, we cannot choose $b=0$. Adhering to the choice of Berman in [@Berman78], we let $b = 1$. Thus every hyperplane can be described by a $t$-tuple ${\bf u} = (u_{1},\ldots,u_{t})$ where $u_{i} \in F$, where each ${\bf u} \in H'$ satisfies a linear equation $$u_{1}x_{1} + \cdots + u_{t}x_{t} = 1$$ where at least one $u_{j} \neq 0$. Letting ${\bf x}$ be a point in $P'$, it follows that this equation can be written as ${\bf u}{\bf x}^{\top} = 1$. We say that the point ${\bf x}$ is on the hyperplane ${\bf u}$ or that ${\bf u}$ contains the point ${\bf x}$ and write ${\bf x} \in {\bf u}$ if ${\bf x}$ and ${\bf u}$ satisfy this equation. It follows that a point ${\bf x} \in P'$ is on $p^{(t-1)n}$ hyperplanes of $H'$ and a hyperplane ${\bf u} \in H'$ contains $p^{(t-1)n}$ points of $P'$.
A collineation $\phi$ is a transformation of $F^{t}$ preserving collinearity; the order of $\phi$ is the smallest $r$ such that $\phi^{r}$ is the identity transformation. The map $\phi_{\lambda} : {\bf x} \mapsto \lambda {\bf x}$ for $\lambda \in F \setminus \{0\}$ is a collineation of order $r_{\lambda}$ which maps the hyperplane ${\bf u}$ onto the hyperplane $\lambda^{-1}{\bf u}$. Writing $[{\bf x}] = \{\phi_{\lambda}^{j} {\bf x} \; : \; j = 0,\ldots,r_{\lambda}-1\}$ for ${\bf x} \in P'$, we observe that ${\bf y} \in [{\bf x}]$ if and only if ${\bf x} \in [{\bf y}]$. It follows that $P' = [{\bf x}^{(1)}] \cup [{\bf x}^{(2)}] \cup \cdots \cup [{\bf x}^{(m)}]$ is a partition into $m$ classes, where $mr_{\lambda} = p^{tn}-1$. Similarly, we have the partition $H' = [{\bf u}^{(1)}] \cup [{\bf u}^{(2)}] \cup \cdots \cup [{\bf u}^{(m)}]$. If ${\bf x}^{(j)}$ is a point of $\phi_{\lambda}^{\ell}{\bf u}^{(i)}$, then $$1 = \lambda^{-\ell}{\bf u}^{(i)}({\bf x}^{(j)})^{\top} = \lambda^{-\ell-k}{\bf u}^{(i)}(\lambda^{k}{\bf x}^{(j)})^{\top}$$ for any $0 \leq k \leq r_{\lambda} - 1$, and so $\phi_{\lambda}^{k}{\bf x}^{(j)}$ is a point of $\phi_{\lambda}^{\ell+k}{\bf u}^{(i)}$. It follows that if a point ${\bf x}^{(j)}$ lies on any hyperplane in $[{\bf u}^{(i)}]$, then each point in $[{\bf x}^{(j)}]$ lies on exactly one hyperplane in $[{\bf u}^{(i)}]$. As such, if points of $[{\bf x}^{(j)}]$ are on hyperplanes of $[{\bf u}^{(i)}]$, we write $[{\bf x}^{(j)}] \in [{\bf u}^{(i)}]$.
Now let $d>1$ be any divisor of $r_{\lambda}$, and let $v({\bf u}^{(i)},{\bf x}^{(j)})$ be the unique integer $h$ such that $\phi_{\lambda}^{h}{\bf x}^{(j)}$ is a point of ${\bf u}^{(i)}$. Finally, let $A$ be the $m \times m$ matrix $(a_{ij})$ defined by $$a_{ij} = \begin{cases}
\zeta_{d}^{v({\bf u}^{(i)},{\bf x}^{(j)})} ~~ &\text{if} ~ [{\bf x}^{(j)}] \in [{\bf u}^{(i)}] \\
0 ~~ &\text{otherwise.}\end{cases}$$
Assuming all of the notation of this section, we have the following.
**Theorem 19** (cf. [@Berman78 Theorem 2.2]). *The matrix $A$ is an element of $\mathrm{CGW}((p^{tn}-1)/r_{\lambda},p^{(t-1)n};d)$.*
*Proof.* The parameters of $A$ are all clear from its construction. It remains to show that $A$ is orthogonal, i.e., that $$Q = \sum_{j} a_{ij}\overline{a_{kj}} = 0$$ for all $i \neq k$. The $i^{\rm th}$ and $k^{\rm th}$ rows of $A$ correspond to hyperplane classes. If $[{\bf u}^{(i)}]$ and $[{\bf u}^{(k)}]$ are parallel, then they have no points in common, and it follows that the sum $Q$ contains no non-zero terms. Suppose then that $[{\bf u}^{(i)}]$ and $[{\bf u}^{(k)}]$ do intersect, and so ${\bf u}^{(i)}$ intersects each of the hyperplanes $\phi_{\lambda}^{h}{\bf u}^{(k)}$, $h = 0,1,\ldots,r_{\lambda}-1$, in $p^{(t-2)n}$ points. Thus the sum $Q$ contains $r_{\lambda}p^{(t-2)n}$ non-zero terms. For any point $\phi_{\lambda}^{\ell}{\bf x}^{(j)}$ on each of ${\bf u}^{(i)}$ and $\phi_{\lambda}^{h}{\bf u}^{(k)}$, we have $${\bf u}^{(i)}(\lambda^{\ell}{\bf x}^{(j)})^{\top} = 1$$ and $$(\lambda^{h}{\bf u}^{(k)})(\lambda^{\ell}{\bf x}^{(j)})^{\top} = {\bf u}^{(k)}(\lambda^{\ell-h}{\bf x}^{(j)})^{\top} = 1$$ so that $v({\bf u}^{(i)},{\bf x}^{(j)}) = \ell$ and $v({\bf u}^{(k)},{\bf x}^{(j)}) = \ell-h$. Consequently, $a_{ij} = \zeta_{d}^{\ell}$ and $a_{kj} = \zeta_{d}^{\ell-h}$ and so $a_{ij}\overline{a_{kj}} = \zeta_{d}^{h}$. Thus for all $h = 0,1,\ldots,r_{\lambda}-1$, there are $p^{(t-2)n}$ terms of $Q$ which have the value $\zeta_{d}^{h}$, and so $$Q = p^{(t-2)n}(1+\zeta_{d}+\cdots + \zeta_{d}^{r_{\lambda}-1}) = 0.$$ ◻
**Corollary 20** (cf. [@Berman78 Corollary 2.3]). *Let $p$, $n$, $t$, $d$ and $r$ be any positive integers such that $p$ is prime, $d \mid r$, and $r \mid (p^{n}-1)$. Then there exists a matrix $W$ in $\mathrm{CGW}((p^{tn}-1)/r,p^{(t-1)n};d)$.*
**Example 21**. Letting $p = n = 2$, then for any choice of $t>1$ we can let $d=r=3$, and build a matrix in $\mathrm{CGW}((2^{2t}-1)/3,2^{2t-2};3)$. When $t=2$, we get a matrix $$W \equiv {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{ccccc}
0 & 1 & 1 & 1 & 1 \\
1 & 0 & 1 & \zeta_{3} & \zeta_{3}^2 \\
1 & 1 & 0 & \zeta_{3}^2 & \zeta_{3} \\
1 & \zeta_{3} & \zeta_{3}^2 & 0 & 1 \\
1 & \zeta_{3}^2 & \zeta_{3} & 1 & 0 \end{array}\right]}.$$
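The matrix above is easily verified by machine; a minimal check (assuming numpy, with $\omega = \zeta_{3}$) is the following.

```python
import numpy as np

w3 = np.exp(2j * np.pi / 3)                  # zeta_3
W = np.array([[0,     1,     1,     1,     1    ],
              [1,     0,     1,     w3,    w3**2],
              [1,     1,     0,     w3**2, w3   ],
              [1,     w3,    w3**2, 0,     1    ],
              [1,     w3**2, w3,    1,     0    ]], dtype=complex)

print(np.allclose(W @ W.conj().T, 4 * np.eye(5)))   # True: W is a CGW(5,4;3)
```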
*Remark 22*. Berman also provides constructions of $\zeta$-circulant CGWs in [@Berman78]. They involve some specialised modifications to the construction above, but do not produce matrices with parameters distinct from those provided by Corollary [Corollary 20](#cor:BermanPar){reference-type="ref" reference="cor:BermanPar"}. As such, in the interest of brevity we just refer the reader to the original paper for more details.
## Complementary sequences
This subsection summarises the work of [@GenGPpaper] as it pertains to CGWs. Perhaps the best known examples of complementary sequences are Golay pairs [@Golay49]. These are pairs of $\{ \pm 1 \}$-sequences $(a,b)$ of length $v$ such that $$\sum_{j=0}^{v-1-s}a_{j}a_{j+s} + b_{j}b_{j+s} = 0$$ for all $1 \leq s \leq v-1$. This equation says that the aperiodic autocorrelations of the sequences $a$ and $b$ sum to zero for all possible shifts $s$. The existence of Golay pairs is known when the length of the sequences is $v = 2^{x}10^{y}26^{z}$ for $x,y,z \geq 0$, but not for any other values. This motivated the generalization to complementary sequences with respect to periodic autocorrelation functions. We describe a very general extension of the idea here, as it pertains to the construction of CGWs.
In this section, we identify sequences with row vectors, for the purposes of describing certain operations with matrix multiplication. For any $\alpha \in \mathcal{U}_{k}$, define the $\alpha$-circulant matrix $$C_{\alpha} = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccc}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & & 0 & 0 \\
0 & 0 & 0 & & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & & 0 & 1 \\
\alpha & 0 & 0 & \cdots & 0 & 0\end{array}\right]}.$$ The $\alpha$-*phased periodic autocorrelation function* of a $\mathcal{U}_{k}$-sequence $a$ of length $v$ and shift $s$ is defined to be $$\mathrm{PAF}_{\alpha,s}(a) = a (aC_{\alpha}^s)^{\ast}.$$ Let $(a,b)$ be a pair of $\mathcal{U}_{k}$-sequences. Let $w_{a}$ denote the weight of a sequence $a$, i.e., the number of non-zero entries in $a$. We say $w = w_{a}+w_{b}$ is the weight of a pair $(a,b)$. A pair of sequences $(a,b)$ is a *weighted $\alpha$-phased periodic Golay pair* ($\mathrm{WPGP}(\mathcal{U}_{k},v,\alpha,w)$) if $$\mathrm{PAF}_{\alpha,s}(a) + \mathrm{PAF}_{\alpha,s}(b) = 0$$ for all $1 \leq s \leq v-1$.
For some $\alpha \in \mathcal{U}_{k}$, let $A$ and $B$ be the $\alpha$-circulant matrices with first row $a$ and $b$ respectively; that is, writing $A_{i}$ for the $i^{\rm th}$ row, $A_{i} = A_{i-1}C_{\alpha}$ and $B_{i} = B_{i-1}C_{\alpha}$ for all $2 \leq i \leq v$. When $a$ and $b$ are complementary, we construct a matrix with pairwise orthogonal rows as follows.
**Theorem 23** (Theorem 5.1 [@GenGPpaper]). *Let $(a,b) \in \mathrm{WPGP}(\mathcal{U}_{k},v,\alpha,w)$ and define the matrices $A$ and $B$ as above. If $$W = {\small\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc}
A & B \\
-B^{*} & A^{*}\end{array}\right],}$$ then $WW^{*} = wI_{2v}$. That is, $W$ is a $\mathrm{CGW}(2v,w;2k)$ if $k$ is odd, and $W$ is a $\mathrm{CGW}(2v,w;k)$ if $k$ is even.*
The constructions of $\mathrm{WPGP}(\mathcal{U}_{k},v,\alpha,w)$ that appear most frequently in the literature are limited to the cases when $k = 2$ or $k = 4$, and when $\alpha = 1$ or $\alpha = \zeta_{k}^{k/2} = -1$. Computational methods for searching are useful, but ultimately limited to small length sequences. However, we can take advantage of constructions of aperiodic complementary sequences. In particular, a *ternary Golay pair* is a pair of $(0, \pm 1)$-sequences $(a,b)$ of length $n$ such that $$\sum_{j=0}^{n-1-s}a_{j}a_{j+s} + b_{j}b_{j+s} = 0$$ for all $1 \leq s \leq n-1$. There are a range of studies of ternary Golay pairs in the mathematics and engineering literature for numerous reasons; we refer the reader to [@CraKou01] and [@GysinSeb] for more details. The following is a special case of [@GenGPpaper Theorem 3.5].
**Theorem 24**. *Let $(a,b)$ be a ternary Golay pair of length $n$ and weight $w$. Then $(a,b) \in \mathrm{WPGP}(\mathcal{U}_{k},n,\alpha,w)$ for any even $k$, and any $\alpha \in \langle \zeta_{k} \rangle$.*
As a consequence, given $(a,b)$ we can construct several distinct matrices in $\mathrm{CGW}(2n,w;k)$ that are not equivalent to a $\mathrm{CGW}(2n,w;2)$.
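As a small illustration (a sketch assuming numpy, with helper names of our own choosing), the ternary Golay pair $a = (1,0,1)$, $b = (1,0,-1)$ of length $3$ and weight $4$ can be fed into the construction of Theorem 23 with $k = 4$ and $\alpha = \zeta_{4}$, producing a $\mathrm{CGW}(6,4;4)$.

```python
import numpy as np

def alpha_circulant(first_row, alpha):
    """v x v alpha-circulant matrix: each row is the previous row times C_alpha."""
    v = len(first_row)
    C = np.zeros((v, v), dtype=complex)
    C[np.arange(v - 1), np.arange(1, v)] = 1
    C[v - 1, 0] = alpha
    rows = [np.asarray(first_row, dtype=complex)]
    for _ in range(v - 1):
        rows.append(rows[-1] @ C)
    return np.array(rows)

def cgw_from_pair(a, b, alpha):
    """The matrix of Theorem 23 built from a weighted alpha-phased periodic Golay pair."""
    A, B = alpha_circulant(a, alpha), alpha_circulant(b, alpha)
    return np.block([[A, B], [-B.conj().T, A.conj().T]])

# the ternary Golay pair (1,0,1), (1,0,-1) has length 3 and weight 4;
# with k = 4 and alpha = i it gives a CGW(6,4;4)
a, b, alpha = [1, 0, 1], [1, 0, -1], 1j
W = cgw_from_pair(a, b, alpha)
print(np.allclose(W @ W.conj().T, 4 * np.eye(6)))   # True
```

Varying $\alpha$ over $\langle \zeta_{k} \rangle$ in this sketch produces the several distinct matrices referred to above.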
## Seberry and Whiteman
For any prime power $q \equiv 1 \mod 8$, Seberry and Whiteman [@SeberryWhiteman] present a construction of a matrix in $\mathrm{CGW}(q+1,q;4)$. It is essentially a construction of an element of $\mathrm{WPGP}(\mathcal{U}_{4},\frac{q+1}{2},1,q)$, although this is not how the construction is described in the paper.
Let $i = \zeta_{4}$ in this section. The method involves constructing two circulant matrices $R$ and $S$ of order $\frac{q+1}{2}$ with all entries in $\{\pm 1,\pm i\}$ except for the diagonal of $R$ which is $0$, and building the order $q+1$ matrix $$\label{SW-matrix}
W \equiv {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc}
R & S \\
S^{*} & -R^{*} \end{array}\right]}.$$ The sequence of entries of their first rows is obtained by cleverly applying an eighth power character $\chi$ to elements of $\mathbb{F}_{q^{2}}$ in a particular order.
The method is as follows. Let $q \equiv 1 \mod 8$ be a prime power and let $n = \frac{q+1}{2}$. Let $\tau$ be a primitive element of $\mathbb{F}_{q^2}$ and let $\gamma = \tau^n$. For $x \in \mathbb{F}_{q^2} \setminus \{0\}$, let $\mathrm{ind}(x)$ be the least non-negative integer $t$ such that $\tau^{t} = x$ and define $\chi : \mathbb{F}_{q^2} \rightarrow \langle \zeta_{8} \rangle \cup \{0\}$ so that $$\label{chi-eq}
\chi(x) = \begin{cases}
\zeta_{8}^{\mathrm{ind}(x)} ~~ &\text{if} ~ x \neq 0\\
0 ~~ &\text{if} ~ x = 0. \end{cases}$$
We can write each element of $\mathbb{F}_{q^2}$ uniquely in the form $\alpha \gamma + \beta$ where $\alpha,\beta \in \mathbb{F}_{q}$. So let $\tau^{j} = \alpha_{j} \gamma + \beta_{j}$ for all $j$, and define the sequences $a$ and $b$ such that $a_{j} = \chi(\alpha_{j})$ and $b_{j} = \chi(\beta_{j})$. These sequences satisfy the following two identities for all $0\leq j \leq q^2-2$; $$\label{ident1}
b_{j+2n} = ib_{j},$$ $$\label{ident2}
b_{j+n} = ia_{j}.$$
The first rows $r$ and $s$ of $R$ and $S$ are chosen to be the subsequences $r = (a_{0},a_{8},\ldots,a_{8(n-1)})$ and $s= (b_{0},b_{8},\ldots,b_{8(n-1)})$ respectively. The matrix of the form in [\[SW-matrix\]](#SW-matrix){reference-type="eqref" reference="SW-matrix"} is orthogonal only if the sequences $r$ and $s$ are complementary, i.e., if $$\label{Paut}
\sum_{j=0}^{n-1}a_{8j}\overline{a_{8j+8t}} + b_{8j}\overline{b_{8j+8t}} = 0$$ for $1 \leq t \leq n-1$, where the indices are read modulo $8n$. Appealing to [\[ident2\]](#ident2){reference-type="eqref" reference="ident2"} this requirement reduces to $$\label{odd-even-eq}
\sum_{j=0}^{n-1}b_{8j}\overline{b_{8j+8t}} + b_{8j+n}\overline{b_{8j+n+8t}} = 0.$$ The identity of [\[ident1\]](#ident1){reference-type="eqref" reference="ident1"} implies that $b_{j}\overline{b_{\ell}} = b_{j+2n}\overline{b_{\ell+2n}}$ for all $j,\ell$. Thus we can write the indices modulo $2n = q+1$ in the sum above. Further, because $n$ is odd, the indices $8j$ in the left hand term of the sum of [\[odd-even-eq\]](#odd-even-eq){reference-type="eqref" reference="odd-even-eq"} cover the even integers between $0$ and $q-1$ and the indices $8j+n$ in the right hand term cover the odd integers between $1$ and $q$, and we get the equivalent expression $$\sum_{j=0}^{n-1}b_{2j}\overline{b_{2j+8t}} + b_{2j+1}\overline{b_{2j+1+8t}} = 0.$$ More succinctly, we get $$\label{b-eq}
\sum_{j=0}^{q}b_{j}\overline{b_{j+8t}} = 0.$$
It remains to show that Equation [\[b-eq\]](#b-eq){reference-type="eqref" reference="b-eq"} holds for all $1 \leq t \leq n-1$. First, observe that $b_{j}\overline{b_{j+\ell}} = \chi(\beta_{j})\overline{\chi(\beta_{j+\ell})}$. Suppose that $\tau^{j} = \alpha_{j}\gamma + \beta_{j}$ and $\tau^{\ell} = \alpha_{\ell}\gamma + \beta_{\ell}$. Then $$\begin{aligned}
\tau^{j+\ell} &= (\alpha_{j}\gamma + \beta_{j})(\alpha_{\ell}\gamma + \beta_{\ell})\\
&= (\alpha_{j}\beta_{\ell} + \beta_{j}\alpha_{\ell})\gamma + (\alpha_{j}\alpha_{\ell}\gamma^{2} + \beta_{j}\beta_{\ell}).\end{aligned}$$ and so $\beta_{j+\ell} = \alpha_{j}\alpha_{\ell}\gamma^{2} + \beta_{j}\beta_{\ell}$. It follows that $$\chi(\beta_{j})\overline{\chi(\beta_{j+\ell})} = \chi(\beta_{j})\overline{\chi(\alpha_{j}\alpha_{\ell}\gamma^{2} + \beta_{j}\beta_{\ell})}.$$ Consequently, for any fixed $\ell \neq 0$, $$\label{chibet}
\sum_{j=0}^{q^2-2}b_{j}\overline{b_{j+\ell}} = \sum_{j=0}^{q^2-2}\chi(\beta_{j})\overline{\chi(\alpha_{j}\alpha_{\ell}\gamma^{2} + \beta_{j}\beta_{\ell})}.$$ Alternatively, $$\begin{aligned}
\sum_{j=0}^{q^2-2}b_{j}\overline{b_{j+\ell}} &= \sum_{\alpha,\beta \in \mathbb{F}_{q}}\chi(\beta)\overline{\chi(\alpha\alpha_{\ell}\gamma^{2} + \beta\beta_{\ell})}\\
&= \sum_{\beta \in \mathbb{F}_{q}}\chi(\beta)\sum_{\alpha \in \mathbb{F}_{q}}\overline{\chi(\alpha\alpha_{\ell}\gamma^{2} + \beta\beta_{\ell})}\end{aligned}$$
where the inner sum is zero whenever $\alpha_{\ell} \neq 0$. Also note that the only value of the form $\ell = 8t$ with $0 \leq t \leq n-1$ for which $\alpha_{\ell}=0$ when $8 | (q-1)$ is when $t=0$. It follows that $$\sum_{j=0}^{q^2-2}b_{j}\overline{b_{j+8t}} = 0$$ for any $1 \leq t \leq n-1$. Now we note that $$\sum_{j=0}^{q^2-2}b_{j}\overline{b_{j+8t}} = \sum_{h=0}^{q-2}\sum_{j=0}^{q}b_{j+h(q+1)}\overline{b_{j+8(t+h(q+1))}}.$$ Appealing again to [\[ident1\]](#ident1){reference-type="eqref" reference="ident1"} we note that the inner sum takes the same value for every $h$ because $q+1 = 2n$, and so letting $h=0$, we conclude that $$\sum_{j=0}^{q}b_{j}\overline{b_{j+8t}} = 0.$$
This proves the following.
**Theorem 25** (Theorem 2, [@SeberryWhiteman]). *For any prime power $q \equiv 1 \mod 8$ there exists a matrix in $\mathrm{CGW}(q+1,q;4)$.*
**Example 26**. Let $q = 9$, so $n = 5$. Let $\tau$ be a primitive element of $\mathbb{F}_{81}$, let $\gamma = \tau^5$, and let $z = \tau^{10}$ so that $z$ is a primitive element of a subfield isomorphic to $\mathbb{F}_{9}$. Then $$\begin{aligned}
\tau^{0} &= 0\cdot\gamma + z^8 = 1,\\
\tau^{8} &= z^{5}\cdot\gamma + z^7,\\
\tau^{16} &= z^{8}\cdot\gamma + z,\\
\tau^{24} &= z^{8}\cdot\gamma + z^5,\\
\tau^{32} &= z^{5}\cdot\gamma + z^3.\end{aligned}$$ Adhering to Equation [\[chi-eq\]](#chi-eq){reference-type="eqref" reference="chi-eq"} we find $r = (0,i,1,1,i)$ and $s = (1,-i,i,i,-i)$. It is simple to verify that the matrix $W$ defined as in Equation [\[SW-matrix\]](#SW-matrix){reference-type="eqref" reference="SW-matrix"} is an element of $\mathrm{CGW}(10,9;4)$.
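As with the previous examples, this verification is easily mechanised; the following minimal sketch (assuming numpy) builds $R$ and $S$ as circulants from $r$ and $s$, forms the matrix of Equation [\[SW-matrix\]](#SW-matrix){reference-type="eqref" reference="SW-matrix"}, and checks the result.

```python
import numpy as np

def circulant(first_row):
    c = np.asarray(first_row, dtype=complex)
    return np.array([np.roll(c, shift) for shift in range(len(c))])

r = [0, 1j, 1, 1, 1j]
s = [1, -1j, 1j, 1j, -1j]
R, S = circulant(r), circulant(s)
W = np.block([[R, S], [S.conj().T, -R.conj().T]])

print(np.allclose(W @ W.conj().T, 9 * np.eye(10)))   # True: W is a CGW(10,9;4)
```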
## Recursive constructions
The constructions above, in addition to the many constructions known for real weighing, Hadamard and Butson Hadamard matrices, provide numerous CGWs at infinitely many orders. However, recursive constructions applied to these matrices are the most effective tool for producing large quantities of CGWs. Tensor product type constructions are the most frequently useful, but we begin with the simplest recursive constructions, which are of direct sum type.
### Direct sum type constructions
We define the direct sum of an $m \times m$ matrix $A$ and a $n \times n$ matrix $B$ to be $$A \oplus B = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc}
A & 0_{m,n} \\
0_{n,m} & B\end{array}\right].}$$ The following is immediate.
**Proposition 27**. *If $A \in \mathrm{CGW}(m,w;k_{1})$ and $B \in \mathrm{CGW}(n,w;k_{2})$, then $A \oplus B \in \mathrm{CGW}(m+n,w;k)$ where $k = \mathrm{lcm}(k_{1},k_{2})$.*
The following is also quite straightforward.
**Proposition 28**. *Let $A \in \mathrm{CGW}(n,w;k)$. Then the matrix $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc}
A & I_{n} \\
-I_{n} & A^{\ast}\end{array}\right]}$$ is a $\mathrm{CGW}(2n,w+1;k)$ if $k$ is even, or a $\mathrm{CGW}(2n,w+1;2k)$ if $k$ is odd.*
The following generalizes Proposition [Proposition 28](#prop:add1){reference-type="ref" reference="prop:add1"}.
**Proposition 29**. *Let $A \in \mathrm{CGW}(n,w_{1};k_{1})$ and $B \in \mathrm{CGW}(n,w_{2};k_{2})$ be such that $AB = BA$. Then the matrix $${\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cc}
A & B \\
-B^{\ast} & A^{\ast}\end{array}\right]}$$ is a $\mathrm{CGW}(2n,w;k)$ where $w = w_{1}+w_{2}$ and $k = \mathrm{lcm}(k_{1},k_{2},2)$.*
The conditions for Proposition [Proposition 29](#prop:add1gen){reference-type="ref" reference="prop:add1gen"} are met, for example, when $A$ and $B$ are $\zeta$-circulant for some $\zeta \in \mathcal{U}_{k_{1}} \cap \mathcal{U}_{k_{2}}$, although this would just be a very special case of the construction of Theorem [Theorem 23](#thm:WPPGP-construction){reference-type="ref" reference="thm:WPPGP-construction"}.
### Tensor product type constructions
One of the simplest recursive constructions is via the Kronecker product. For any two matrices $A$ and $B$ having entries with defined multiplication, the Kronecker product of $A$ and $B$ is defined to be the block matrix $$A \otimes B = [a_{ij}B].$$ It is a simple exercise to verify the following.
**Proposition 30**. *Let $A \in \mathrm{CGW}(n_{1},w_{1};k_{1})$ and $B \in \mathrm{CGW}(n_{2},w_{2};k_{2})$. Then $A \otimes B \in \mathrm{CGW}(n,w;k)$ where $n = n_{1}n_{2}$, $w = w_{1}w_{2}$ and $k = \mathrm{lcm}(k_{1},k_{2})$.*
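For instance (a minimal numpy check, not tied to any particular source), the Kronecker product of a $\mathrm{BH}(3,3)$ with $\mathrm{H}(2)$ is a $\mathrm{CGW}(6,6;6)$, i.e. a $\mathrm{BH}(6,6)$.

```python
import numpy as np

w3 = np.exp(2j * np.pi / 3)
A = np.array([[1, 1, 1], [1, w3, w3**2], [1, w3**2, w3]])   # BH(3,3)
B = np.array([[1, 1], [1, -1]], dtype=complex)               # H(2) = CGW(2,2;2)

W = np.kron(A, B)
# order 3*2 = 6, weight 3*2 = 6, entries are lcm(3,2) = 6th roots of unity
print(np.allclose(W @ W.conj().T, 6 * np.eye(6)))            # True
```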
This tensor product construction is generalised by a construction of Diţă [@Dita], originally proposed for complex Hadamard matrices. For this construction we require a matrix $A \in \mathrm{CGW}(n,w_{a};k_{a})$ and a set of matrices $\{B_{1},\ldots,B_{n}\}$ with each $B_{i} \in \mathrm{CGW}(m,w_{b};k_{b,i})$. Note that each $B_{i}$ must be of the same order and of the same weight.
**Proposition 31** (cf. Proposition 2 [@Dita]). *Let $A,B_{1},\ldots,B_{n}$ be as described above. Then $$D = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccc}
a_{11}B_{1} & a_{12}B_{2} & \cdots & a_{1n}B_{n} \\
a_{21}B_{1} & a_{22}B_{2} & \cdots & a_{2n}B_{n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1}B_{1} & a_{n2}B_{2} & \cdots & a_{nn}B_{n} \end{array}\right]}$$ is a $\mathrm{CGW}(mn,w;k)$ where $w = w_{a}w_{b}$ and $k = \mathrm{lcm}(k_{a},k_{b,1},\ldots,k_{b,n})$.*
*Remark 32*. If the matrices $B_{1},\ldots,B_{n}$ are all equal, then the matrix $D$ of Proposition [Proposition 31](#prop:dita){reference-type="ref" reference="prop:dita"} is the Kronecker product $A \otimes B_{1}$.
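The following is a minimal numpy sketch of the block construction of Proposition 31 (the helper name `dita` is ours). Here $A = \mathrm{H}(2)$ and $B_{1},B_{2}$ are two different $\mathrm{BH}(3,3)$ matrices, so the result is a $\mathrm{BH}(6,6)$ which is not simply $A \otimes B_{1}$.

```python
import numpy as np

def dita(A, Bs):
    """Dita-type block construction: block (i, j) of the result is A[i, j] * Bs[j].
    All matrices in Bs are assumed to have the same order and the same weight."""
    n = A.shape[0]
    return np.block([[A[i, j] * Bs[j] for j in range(n)] for i in range(n)])

w3 = np.exp(2j * np.pi / 3)
F3 = np.array([[1, 1, 1], [1, w3, w3**2], [1, w3**2, w3]])     # BH(3,3)
B1 = F3
B2 = np.diag([1, w3, 1]) @ F3                                   # another BH(3,3)
A = np.array([[1, 1], [1, -1]], dtype=complex)                  # H(2)

D = dita(A, [B1, B2])                                           # a BH(6,6); k = lcm(2,3) = 6
print(np.allclose(D @ D.conj().T, 6 * np.eye(6)))               # True
```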
A slightly more general construction of complex Hadamard matrices was introduced by McNulty and Weigert; see [@McNulty Theorem 3]. It is difficult to see how this construction might be simply generalized for building weighing matrices with parameters not catered for by Proposition [Proposition 31](#prop:dita){reference-type="ref" reference="prop:dita"}, but in principle it could be, so we include it for completeness. The key components are two sets of $q \times q$ unitary matrices $\{L_{1},\ldots,L_{p}\}$ and $\{K_{1},\ldots,K_{p}\}$ such that $K_{i}^{\ast}L_{j}$ is complex Hadamard for all $1 \leq i,j \leq p$, and another complex Hadamard matrix $M = [m_{ij}]$ of order $p$. The result is a $pq \times pq$ complex Hadamard matrix. The matrix constructed takes the form $$H = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccc}
m_{11}K_{1}^{\ast}L_{1} & m_{12}K_{1}^{\ast}L_{2} & \cdots & m_{1p}K_{1}^{\ast}L_{p} \\
m_{21}K_{2}^{\ast}L_{1} & m_{22}K_{2}^{\ast}L_{2} & \cdots & m_{2p}K_{2}^{\ast}L_{p} \\
\vdots & \vdots & \ddots & \vdots \\
m_{p1}K_{p}^{\ast}L_{1} & m_{p2}K_{p}^{\ast}L_{2} & \cdots & m_{pp}K_{p}^{\ast}L_{p} \end{array}\right].}$$ Restricting entries to $k^{\rm th}$ roots of unity is not a complete barrier to constructing Butson Hadamard matrices and several matrices not of Diţă type are constructed in [@McNulty]. One of the most useful aspects of this construction is the freedom to use sets of mutually unbiased bases of $\mathbb{C}^{q}$ for the unitary matrices $\{L_{1},\ldots,L_{p}\}$ and $\{K_{1},\ldots,K_{p}\}$.
The obvious drawback to these tensor product type constructions is that the order is typically the product of the orders of the factors, and as a consequence there are significant restrictions on the orders that can be achieved with this approach. This issue is particularly apparent when constructing real Hadamard matrices. A product of real Hadamard matrices of order $4n$ and $4m$ is necessarily of order divisible by $16$, so a construction like this fails to produce a Hadamard matrix of order $4k$ for any odd number $k$. Constructions that mitigate this issue are rare, but a method known as weaving, introduced by Craigen [@CraigenPhD], does just this. In general this construction involves orthogonal designs, but it was employed specifically for constructing weighing matrices in [@CraigenWeaving95] and it settled some previously undecided existence questions at reasonably small orders and weights.
### Weaving
The ideas in this section are drawn from the thesis of Craigen [@CraigenPhD], but some have since appeared in other published works. The idea of weaving is to knit together weighing matrices of different orders to form a larger one, without relying on a tensor product type construction that forces the order to be the product of the orders of its constituents. The version of the next theorem that appears in [@CraigenWeaving95] refers only to real weighing matrices, but we give a more general version that applies to CGWs; the proof is essentially identical, and is constructive.
**Theorem 33** (cf. [@CraigenWeaving95 Theorem 1]). *Let $M = (m_{ij})$ be an $m \times n$ $(0,1)$-matrix with row sums $r_{1},\ldots, r_{m}$ and column sums $c_{1},\ldots,c_{n}$. If for fixed integers $a$ and $b$ there are matrices $A_{i} \in \mathrm{CGW}(r_{i},a;k_{1})$ and $B_{j} \in \mathrm{CGW}(c_{j},b;k_{2})$ for $1 \leq i \leq m$ and $1 \leq j \leq n$, then there is a $\mathrm{CGW}(\sigma(M),ab;k)$ where $$\sigma(M) = \sum_{i=1}^{m} r_{i} = \sum_{j=1}^{n} c_{j},$$ and $k = \mathrm{lcm}(k_{1},k_{2})$.*
*Proof.* Construct $W = (W_{ij})$ as an $m \times n$ array of blocks as follows. If $m_{ij} = 0$ set $W_{ij} = 0_{r_{i} \times c_{j}}$. If $m_{ij} = 1$, then $m_{ij}$ is the $p^{\rm th}$ non-zero entry in the $i^{\rm th}$ row, and the $q^{\rm th}$ non-zero entry in the $j^{\rm th}$ column of $M$ for some $p = p(i,j)$ and $q = q(i,j)$. Denote the $p^{\rm th}$ column of $A_{i}$ and the $q^{\rm th}$ row of $B_{j}$ by $A_{i}[\cdot,p]$ and $B_{j}[q,\cdot]$ respectively, and set $W_{ij} = A_{i}[\cdot,p]B_{j}[q,\cdot]$, the rank one $r_{i} \times c_{j}$ matrix. Then $W$ is a square matrix of order $\sigma(M)$, and the entries are in $\mathcal{U}_{k}$. It remains to verify that $WW^{*} = abI_{\sigma(M)}$.
Since $W$ is an $m \times n$ array of blocks, the matrix $WW^{*}$ is expressed as an $m \times m$ array of blocks with the $(i,j)$ block given by $$\begin{aligned}
\sum_{\ell=1}^{n}W_{i\ell}W_{j\ell}^{*} &= \sum_{\{\ell \, : \, m_{i\ell}=m_{j\ell}=1\}} A_{i}[\cdot,p(i,\ell)]B_{\ell}[q(i,\ell),\cdot](B_{\ell}[q(j,\ell),\cdot])^{*}(A_{j}[\cdot,p(j,\ell)])^{*} \\
&= \sum_{\{\ell \, : \, m_{i\ell}=1\}} \delta_{ij}bA_{i}[\cdot,p(i,\ell)](A_{j}[\cdot,p(j,\ell)])^{*} \\
&= \delta_{ij}b \sum_{p=1}^{r_{i}} A_{i}[\cdot,p](A_{i}[\cdot,p])^{*} \\
&= \delta_{ij}abI_{r_{i}},\end{aligned}$$
where $\delta_{ij} = 1$ if $i=j$ and $\delta_{ij} = 0$ otherwise. It follows that $W$ is a weighing matrix. ◻
The conditions of Theorem [Theorem 33](#thm:Weaving){reference-type="ref" reference="thm:Weaving"} are such that the weight of the constructed matrix is the product of the two distinct weights of the components, however the order is no longer tied to this condition. The benefit of this is immediately demonstrated in [@CraigenWeaving95] by the construction of a $W(66,36)$ using four real weighing matrices - one from each of $W(13,9)$, $W(10,9)$, $W(6,4)$ and $W(4,4)$ - and a $6 \times 13$ matrix $M$ with the required row and column sums. This settled the then open question of existence of a $W(66,36)$.
**Example 34**. We can use this technique to build a $\mathrm{CGW}(15,9;3)$, which cannot be constructed through a tensor product. Let $$M = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{ccccc}
1 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 1 \end{array}\right]}$$ and let $$A_{i} = B_{j} = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{ccc}
1 & 1 & 1 \\
1 & \omega & \omega^{2} \\
1 & \omega^{2} & \omega \end{array}\right]}$$ for all $1 \leq i,j \leq 5$, where $\omega = \zeta_{3}$. Then via the method outlined in the proof of Theorem [Theorem 33](#thm:Weaving){reference-type="ref" reference="thm:Weaving"} we obtain the matrix $$W = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{ccc|ccc|ccc|ccc|ccc}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & \omega & \omega & \omega & \omega^2 & \omega^2 & \omega^2 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & \omega^2 & \omega^2 & \omega^2 & \omega & \omega & \omega & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline
0 & 0 & 0 & 1 & \omega & \omega^2 & 1 & \omega & \omega^2 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & \omega & \omega^2 & \omega & \omega^2 & 1 & \omega^2 & \omega^2 & \omega^2 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & \omega & \omega^2 & \omega^2 & 1 & \omega & \omega & \omega & \omega & 0 & 0 & 0 \\ \hline
0 & 0 & 0 & 0 & 0 & 0 & 1 & \omega^2 & \omega & 1 & \omega & \omega^2 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & \omega^2 & \omega & \omega & \omega^2 & 1 & \omega^2 & \omega^2 & \omega^2 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & \omega^2 & \omega & \omega^2 & 1 & \omega & \omega & \omega & \omega \\ \hline
1 & \omega & \omega^2 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \omega^2 & \omega & 1 & \omega & \omega^2 \\
1 & \omega & \omega^2 & 0 & 0 & 0 & 0 & 0 & 0 & \omega & 1 & \omega^2 & \omega^2 & 1 & \omega \\
1 & \omega & \omega^2 & 0 & 0 & 0 & 0 & 0 & 0 & \omega^2 & \omega & 1 & \omega & \omega^2 & 1 \\ \hline
1 & \omega^2 & \omega & 1 & \omega^2 & \omega & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \omega^2 & \omega \\
1 & \omega^2 & \omega & \omega & 1 & \omega^2 & 0 & 0 & 0 & 0 & 0 & 0 & \omega^2 & \omega & 1 \\
1 & \omega^2 & \omega & \omega^2 & \omega & 1 & 0 & 0 & 0 & 0 & 0 & 0 & \omega & 1 & \omega^2 \end{array}\right]}$$ which is a $\mathrm{CGW}(15,9;3)$.
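The weaving recipe above is mechanical enough to verify by computer. The following small script is our own illustration (it is not part of the paper's computations and assumes only `numpy`; the function and variable names are ours): it rebuilds a woven matrix from $M$ and the Fourier blocks, following the indexing in the proof of Theorem [Theorem 33](#thm:Weaving){reference-type="ref" reference="thm:Weaving"}, and checks the defining identity $WW^{*}=9I_{15}$.

```python
import numpy as np

# Minimal sketch of the weaving construction of Theorem 33, applied to Example 34.
w = np.exp(2j * np.pi / 3)                       # primitive cube root of unity
F3 = np.array([[1, 1, 1],
               [1, w, w**2],
               [1, w**2, w]])                    # a CGW(3,3;3)

M = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1],
              [1, 0, 0, 1, 1],
              [1, 1, 0, 0, 1]])

A = [F3] * 5     # A_i, one per row of M (all row sums equal 3)
B = [F3] * 5     # B_j, one per column of M (all column sums equal 3)

def weave(M, A, B):
    """Form the block matrix with W_ij = A_i[:, p] B_j[q, :] whenever m_ij = 1."""
    rows = []
    for i in range(M.shape[0]):
        p = 0                                    # counts non-zero entries of row i seen so far
        blocks = []
        for j in range(M.shape[1]):
            if M[i, j]:
                q = int(M[:i + 1, j].sum()) - 1  # position of m_ij among non-zeros of column j
                blocks.append(np.outer(A[i][:, p], B[j][q, :]))
                p += 1
            else:
                blocks.append(np.zeros((A[i].shape[0], B[j].shape[1]), dtype=complex))
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

W = weave(M, A, B)
print(np.allclose(W @ W.conj().T, 9 * np.eye(15)))   # True: W is a CGW(15,9;3)
```

The matrix produced this way need not agree entry-by-entry with the one displayed above, since that depends on ordering conventions, but the check $WW^{*}=9I_{15}$ is exactly the $\mathrm{CGW}(15,9;3)$ condition.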
Also building on the work in [@CraigenPhD], Craigen and de Launey developed an idea similar to weaving with the intention of constructing circulant and other group developed CGWs [@CraigendeLauney]. Being group developed is an added condition that we don't wish to apply in this paper, so we refer the interested reader to the article for more details. However a method of weaving together different objects to form CGWs without necessarily having this property is also described, and a special case of it is used to construct the group developed matrices. Fundamental to this construction is the general concept of an orthogonal set. An *orthogonal set* of weight $w$ is a set of $v \times v$ matrices $\{A_{1},\ldots,A_{n}\}$ such that $A_{i}A_{j}^{*} = 0$ for all $i \neq j$, and there exist positive integers $\lambda_{1},\ldots,\lambda_{n}$ such that $$\sum_{i=1}^{n}\lambda_{i}A_{i}A_{i}^{*} = wI_{v}.$$ The matrices in an orthogonal set can be woven together by summing the Kronecker products $P_{s}\otimes A_{s}$ provided that $\{P_{1},\ldots,P_{n}\}$ is a set of disjoint $N \times N$ monomial matrices, where $n \leq N$. The result is a $\mathrm{CGW}(Nv,w;k)$, if the constituent parts all have entries in $\mathcal{U}_{k}$. Note that the matrices in the orthogonal set are not necessarily CGW matrices; rather, the weight of the set is $w$ if there are exactly $w$ non-zero entries in the concatenation of the $r^{\rm th}$ rows (or columns) of the matrices $A_{1},\ldots,A_{n}$, for each $1 \leq r \leq v$. This gives a lot of freedom to the construction.
## Tables of existence {#section:Existence}
Harada and Munemasa classified weighing matrices of order up to $15$ in [@HarMun12], building on earlier work of Chan, Rogers and Seberry in [@ChaRodSeb]. As such, the question of existence or non-existence of real weighing matrices of order up to $15$ is known in all cases. Using a combination of the non-existence results of Section [2](#section:Props){reference-type="ref" reference="section:Props"} and the constructions of Section [3](#section:Construct){reference-type="ref" reference="section:Construct"}, we attempt to complete tables showing either existence or non-existence of matrices in $\mathrm{CGW}(n,w;k)$ for all $n \leq 15$, $w \leq n$, and $k \in \{2,3,4,5,6\}$. These Tables are presented in Appendix [6](#App:Tables){reference-type="ref" reference="App:Tables"}, with an entry E indicating existence, and N indicating non-existence. Some entries in these tables remain unresolved; they are indicated by a question mark.
In Table [2](#tab:Existence2){reference-type="ref" reference="tab:Existence2"}, the $k=2$ case is reported, which just compiles results from [@HarMun12]. Tables [3](#tab:Existence3){reference-type="ref" reference="tab:Existence3"}, [4](#tab:Existence4){reference-type="ref" reference="tab:Existence4"}, [5](#tab:Existence5){reference-type="ref" reference="tab:Existence5"}, [6](#tab:Existence6){reference-type="ref" reference="tab:Existence6"} report on the $k=3,\,4,\,5,\,6$ cases respectively. The as yet undetermined entries, which are marked with a ?, all have parameters that meet the known existence criteria. In each case, should a CGW exist, we can usually say something about the support matrix. If a $\mathrm{CGW}(12,9;3)$ exists, then up to permutation equivalence its support matrix takes the form $(J_{4} - I_{4}) \otimes J_{3}$. If a $\mathrm{CGW}(13,9;3)$ exists, then its support matrix must be an $\mathrm{SBIBD}(13,9,6)$, which is known to exist. If a $\mathrm{CGW}(15,7;3)$ exists, then its support matrix must be an $\mathrm{SBIBD}(15,7,3)$, which also exists (there is a Hadamard design with these parameters). In these cases, we need to solve the lifting problem.
The restrictions for small $n$ in the $k = 5$ case are such that very little extra analysis is required and almost all parameters are ruled out. In Table [4](#tab:Existence4){reference-type="ref" reference="tab:Existence4"}, there are only a few parameters for which a $\mathrm{CGW}(n,w;4)$ exists and a $\mathrm{CGW}(n,w;2)$ does not. The first one we encounter is a $\mathrm{CGW}(10,6;4)$, which can be built from a $\mathrm{WPGP}(\mathcal{U}_{4},5,1,6)$ where $a = (1,\zeta_{4},1,0,0)$ and $b = (1,-1,-1,0,0)$.
# Application: Quantum error-correcting codes {#section:Quant}
A classical linear $[n,k,d]_{q}$-*code* $C$ of *minimum distance* $d$ is a $k$-dimensional subspace of $\mathbb{F}_{q}^{n}$, the elements of which are called *codewords*, such that the minimum Hamming distance between any two distinct codewords is $d$. The *rate* of $C$ is the ratio $\frac{k}{n}$. For fixed parameters $n$ and $k$ a code where $d$ attains the theoretical upper bound is called *optimal*, and one where $d$ does not attain the sharpest known bound, but attains the highest value of any known code, is called *best known*. We refer the reader to [@HuffmanPless] for a complete background in coding theory and its applications, and we refer to the expertly maintained webpage at [@GrasslCodes] for up to date links to research and tables displaying the best known linear codes for several parameters.
Let $C$ be an $[n,k]_{q^{2}}$ code. The *Hermitian inner product* of vectors $x,y \in \mathbb{F}_{q^{2}}^{n}$ is defined by $$\langle x,y \rangle = \sum_{i=0}^{n-1}x_{i}y_{i}^{q}.$$ The *Hermitian dual* of $C$ is the code $$C^{H} = \{x \in \mathbb{F}_{q^{2}}^{n} \; \mid \; \langle x,y \rangle = 0 \, \forall \, y \in C\}.$$ The code $C$ is *Hermitian self-orthogonal* if $C \subseteq C^{H}$, and *Hermitian self-dual* if $C= C^{H}$.
Quantum codes are to quantum information theory what classical codes are to information theory. However, the problem is inherently more difficult due to the postulates of quantum mechanics. We cannot duplicate information by the No-Cloning Theorem [@NoClone], and the observation of a qubit forces it to collapse to a binary state. Shor's solution [@Shor] is to spread the information of one qubit across the entangled state of several qubits. The following definition is taken from [@CalderbankShor96]: A *quantum error-correcting code* is defined to be a unitary mapping (encoding) of $k$ qubits into a subspace of the quantum state space of $n$ qubits such that if any $t$ of the qubits undergo arbitrary decoherence, not necessarily independently, the resulting $n$ qubits can be used to faithfully reconstruct the original quantum state of the $k$ encoded qubits.
Unlike classical codes, quantum codes are usually linear [@NRSS06]. For a quantum code with parameters $n$, $k$ and $d$, we typically denote it as an $[[n,k,d]]_{q}$-code. Shor's model in [@Shor] requires the information of one qubit to be spread across nine qubits, so the rate is $1/9$, and it protects against just one error. In [@CalderbankShor96] Calderbank and Shor use binary linear codes to build improved quantum codes, and later produce quantum codes capable of correcting multiple errors using group theoretic ideas in [@CRSS97]. In [@CRSS] it is shown how, given a Hermitian self-orthogonal $[n,k]_{4}$-linear code $C$ such that no codeword in $C^{\perp}\setminus C$ has weight less than $d$, one can construct a quantum $[[n,n-2k,d]]_{2}$-code. Rains [@Rainsq2] later established that there are similar applications to Hermitian self-orthogonal $[n,k]_{q^2}$ codes. The following is a restatement of [@KKKSquant Corollary 19]. See also [@AshKnill].
**Theorem 35**. *If there exists a linear Hermitian self-orthogonal $[n,k]_{q^2}$ code $C$ such that the minimum weight of $C^{H}$ is $d$, then there exists an $[[n,n-2k,\geq d]]_{q}$ quantum code.*
*Remark 36*. A quantum code can be $0$-dimensional, and so it is possible to construct a quantum $[[n,0,d]]_{q}$-code given a Hermitian self-dual $[n,n/2,d]_{q^{2}}$ code. See [@LisSingh] for details.
Applications of these results have led to many of the best known constructions of quantum error-correcting codes, and so it is pertinent to study the construction of Hermitian self-orthogonal codes over $\mathbb{F}_{q^2}$. With some restrictions, CGWs provide the perfect tool.
To begin, we observe that when $k = q+1$, we can translate the set of $k^{\rm th}$ roots of unity into $\mathbb{F}_{q^2}$, because $k$ divides $q^{2} - 1$.
The following Propositions formalize and generalize some observations noted in [@FFApaper].
**Proposition 37**. *Let $q$ be a prime power, let $k = q+1$ and let $\alpha$ be a primitive $k^{\rm th}$ root of unity in $\mathbb{F}_{q^{2}}$. Define the homomorphism $f : \mathcal{U}_{k} \rightarrow \mathbb{F}_{q^{2}}$ so that $f(0) = 0$ and $f(\zeta_{k}^{j}) = \alpha^{j}$ for $j = 0,1,\ldots,q$. Let $x$ be a $\mathcal{U}_{k}$-vector of length $n$ and let $f(x) = [f(x_{i})]_{0 \leq i \leq n-1}$. Then for any $\mathcal{U}_{k}$-vectors $x$ and $y$, $$\langle x,y \rangle = 0 \quad \Longrightarrow \quad \langle f(x),f(y)\rangle_{H} = 0.$$*
*Proof.* By construction, $f(\zeta_{k}^{j})$ is a $k^{\rm th}$ root of unity in the field $\mathbb{F}_{q^{2}}$ for all $0 \leq j \leq q$. Since $\alpha$ has order $k = q+1$, we have $$f(\zeta_{k}^{j})^{q} = \alpha^{jq} = \alpha^{-j} = f((\zeta_{k}^{j})^{\ast}),$$ that is, $f(\omega)^{q} = f(\omega^{\ast})$ for all $\omega \in \mathcal{U}_{k}$. Then for any $\mathcal{U}_{k}$-vectors $x$ and $y$, $$\begin{aligned}
\langle f(x),f(y)\rangle_{H} &= \sum_{i=0}^{n-1}f(x_{i})f(y_{i})^{q} \\
&= \sum_{i=0}^{n-1}f(x_{i})f(y_{i}^{\ast}) \\
&= \sum_{i=0}^{n-1}f(x_{i}y_{i}^{\ast})\\
&= f^{+}\left(\sum_{i=0}^{n-1}x_{i}y_{i}^{\ast}\right)\\
&= f^{+}(\langle x, y \rangle).\end{aligned}$$
Thus if $\langle x,y \rangle = \sum_{j=0}^{k-1} c_{j}\zeta_{k}^{j}$ with non-negative integers $c_{j}$, then $\langle f(x),f(y)\rangle_{H} = \sum_{j=0}^{k-1} c_{j}\alpha^{j}$. Since $\gcd(k,q)=1$ and $\alpha$ has order exactly $k$, the element $\alpha$ is a root of the reduction modulo the characteristic of the cyclotomic polynomial $\Phi_{k}$, and any integer polynomial vanishing at $\zeta_{k}$ is divisible by $\Phi_{k}$, so it also vanishes at $\alpha$. Hence $$\langle x,y \rangle = 0 \quad \Longrightarrow \quad \langle f(x),f(y)\rangle_{H} = 0.$$ ◻
The following Proposition is also now immediate.
**Proposition 38**. *Let $W$ be a $\mathrm{CGW}(n,w;q+1)$ for some prime power $q$ and let $f$ be the homomorphism defined in Proposition [Proposition 37](#prop:IPs){reference-type="ref" reference="prop:IPs"}, with $f(W) = [f(W_{ij})]_{1\leq i,j \leq n}$. If $w$ is divisible by the characteristic of $\mathbb{F}_{q^{2}}$, then $f(W)$ generates a Hermitian self-orthogonal $\mathbb{F}_{q^{2}}$-code.*
*Proof.* Let $x$ and $y$ be distinct rows of $W$. Then $\langle x ,y \rangle = 0$ and so $\langle f(x),f(y)\rangle_{H} = 0$ by Proposition [Proposition 37](#prop:IPs){reference-type="ref" reference="prop:IPs"}. Further, because $x$ is a row of $W$, each non-zero entry $u$ of $x$ satisfies $u^{\ast} = u^{-1}$, so each of the $w$ non-zero terms of $\langle f(x),f(x)\rangle_{H}$ equals $1$, and $\langle f(x),f(x)\rangle_{H} = w = 0$ because $w$ is divisible by the characteristic of $\mathbb{F}_{q^{2}}$. ◻
*Remark 39*. The hypothesis that a row of $W$ has weight divisible by the characteristic of $\mathbb{F}_{q^2}$ is used only for the rows themselves; the argument does not extend verbatim to codewords in general. By construction, every non-zero entry $u$ in a row $x$ of $f(W)$ satisfies $u^{q+1}=1$, and so $\langle x,x\rangle_{H} = w$, which is zero in $\mathbb{F}_{q^{2}}$ when $w$ is divisible by the characteristic. Other codewords, obtained as linear combinations of the rows of $f(W)$, can contain entries that do not have this property. However, any linear combination of the rows of $f(W)$ is still orthogonal to itself: this is guaranteed by the sesquilinearity of the Hermitian inner product together with the fact that the rows of $f(W)$, which generate the code, are pairwise orthogonal and self-orthogonal.
As a consequence of Proposition [Proposition 38](#prop:GenHermCode){reference-type="ref" reference="prop:GenHermCode"} we can use a $\mathrm{CGW}(n,w;k)$ with appropriate weight to build quantum codes for any $k = q + 1$ where $q$ is a prime power, which includes any $k \in \{3,4,5,6,8,9,10\}$.
*Remark 40*. This implication of Proposition [Proposition 37](#prop:IPs){reference-type="ref" reference="prop:IPs"} is one directional, and the converse does not hold. Nevertheless, this relationship is crucial to the classification of matrices in $\mathrm{BH}(18,3)$ via Hermitian self-dual codes over $\mathbb{F}_{4}$ in [@HLMT].
The propositions above can now be implemented to construct quantum codes.
**Example 41**. Let $W$ be the $\mathrm{CGW}(5,4;3)$ obtained using Berman's construction in Example [Example 21](#ex:Berman){reference-type="ref" reference="ex:Berman"}. The code $C$ generated by $W$ is a $[5,2,4]_{4}$ code, and the Hermitian dual $C^{H}$ is a $[5,3,3]_{4}$ code. Applying Theorem [Theorem 35](#thm:GenQ){reference-type="ref" reference="thm:GenQ"}, we construct a $[[5,1,3]]_{2}$ quantum error-correcting code, which is optimal.
Since Theorem [Theorem 35](#thm:GenQ){reference-type="ref" reference="thm:GenQ"} constructs quantum codes over any prime power $q$, our intention now is to apply the propositions above in this greater generality.
**Example 42**. Let $W$ be the $\mathrm{CGW}(10,9;4)$ obtained using the Seberry and Whiteman construction in Example [Example 26](#ex:SW){reference-type="ref" reference="ex:SW"}. The code $C$ generated by $W$ is a $[10,5,4]_{9}$ code, and is Hermitian self-dual. We apply Theorem [Theorem 35](#thm:GenQ){reference-type="ref" reference="thm:GenQ"} and construct a $[[10,0,4]]_{3}$ quantum error-correcting code.
Any $\mathrm{BH}(n,4)$ where $3 \mid n$ may be used to construct ternary quantum codes in this manner. This is particularly useful because $\mathrm{BH}(n,4)$ matrices are plentiful. For example, there are exactly 319 equivalence classes in $\mathrm{BH}(12,4)$, see [@LSO Theorem 6.1].
**Example 43**. For example, let $$H = {\footnotesize\renewcommand{\arraycolsep}{.1cm}
\left[\begin{array}{cccccc}
1 & i & 1 & 1 & 1 & - \\
1 & 1 & i & - & 1 & 1 \\
i & 1 & 1 & 1 & - & 1 \\
1 & - & 1 & - & - & i \\
1 & 1 & - & i & - & - \\
- & 1 & 1 & - & i & - \end{array}\right]}.$$ The code $C$ generated by $H$ is a Hermitian self-dual $[6,3,4]_{9}$ code, which yields a $[[6,0,4]]_{3}$ quantum error-correcting code.
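As an illustrative sanity check of Propositions [Proposition 37](#prop:IPs){reference-type="ref" reference="prop:IPs"} and [Proposition 38](#prop:GenHermCode){reference-type="ref" reference="prop:GenHermCode"} on this example, the short script below is our own aside, not part of the paper: it assumes the identification $\mathbb{F}_{9}=\mathbb{F}_{3}[x]/(x^{2}+1)$ with $f(i)=x$, uses plain Python arithmetic, and verifies that the rows of $f(H)$ are pairwise Hermitian orthogonal and self-orthogonal over $\mathbb{F}_{9}$. The dimension count that upgrades self-orthogonality to self-duality is not checked here.

```python
# F_9 = F_3[x]/(x^2 + 1); an element a + b*x is stored as the pair (a, b).
def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)   # uses x^2 = -1

def frob(u):
    a, b = u
    return (a, (-b) % 3)                                 # u -> u^3, since x^3 = -x in F_9

# f sends the 4th roots of unity to powers of x: 1 -> 1, i -> x, -1 -> x^2, -i -> x^3.
f = {1: (1, 0), 1j: (0, 1), -1: (2, 0), -1j: (0, 2)}

H = [[ 1, 1j,  1,  1,  1, -1],
     [ 1,  1, 1j, -1,  1,  1],
     [1j,  1,  1,  1, -1,  1],
     [ 1, -1,  1, -1, -1, 1j],
     [ 1,  1, -1, 1j, -1, -1],
     [-1,  1,  1, -1, 1j, -1]]
fH = [[f[e] for e in row] for row in H]

def herm(x, y):
    s = (0, 0)
    for u, v in zip(x, y):
        s = add(s, mul(u, frob(v)))
    return s

print(all(herm(x, y) == (0, 0) for x in fH for y in fH))  # True
```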
**Example 44**. By Proposition [Proposition 38](#prop:GenHermCode){reference-type="ref" reference="prop:GenHermCode"}, we can use a $\mathrm{CGW}(n,w;6)$ with $w$ divisible by $5$ to construct a Hermitian self-orthogonal code over $\mathbb{F}_{25}$. As an example, we take the $\mathrm{BH}(25,6)$ constructed via [@FerencThesis Theorem 1.4.41], and construct a $[25,9,13]_{25}$ Hermitian self-orthogonal code $C$. The Hermitian dual $C^{H}$ is a $[25,16,6]_{25}$ code, and so we construct a $[[25,7,6]]_{5}$ quantum code. This has larger minimum weight than the current best known $[[25,7]]_{2}$ quantum code, which has minimum weight $5$, according to [@GrasslCodes].
# Computational results {#section:results}
In Table [1](#tab:QTab){reference-type="ref" reference="tab:QTab"} we list the parameters of the quantum codes constructed that are, according to the information available to us, at least as good as or better than the best known quantum codes. It is difficult to compare the codes constructed to others that may be known, as there does not appear to be any database comparable to [@GrasslCodes] that caters for quantum $q$-ary codes in general. Recently, the authors of [@QaryQCodes] have introduced a database, but at least for now it is not completely populated for all parameters. At the time of writing, the only $[[n,k]]_{q}$ code in this database that is comparable to a code in Table [1](#tab:QTab){reference-type="ref" reference="tab:QTab"} is a $[[24,0,6]]_{3}$ code; we found a $[[24,0,9]]_{3}$ code. For this reason, the parameters of codes constructed here are often compared to the best known $[[n,k]]_{2}$ codes listed in [@GrasslCodes].
Source matrix Self orthogonal $[n,k,d]_{q^2}$ code New $[[n,k,d]]_{q}$ code Best known $[[n,k]]_{2}$ from [@GrasslCodes]
------------------------- -------------------------------------- -------------------------- ----------------------------------------------
$\mathrm{CGW}(5,4;3)$ $[5,2,4]_{4}$ $[[5,1,3]]_{2}$ $[[5,1,3]]_{2}$
$\mathrm{BH}(6,4)$ $[6,3,4]_{9}$ $[[6,0,4]]_{3}$ $[[6,0,4]]_{2}$
$\mathrm{BH}(9,10)$ $[9,4,6]_{81}$ $[[9,1,5]]_{9}^{\ast}$ $[[9,1,3]]_{2}$
$\mathrm{CGW}(10,9;4)$ $[10,5,4]_{9}$ $[[10,0,4]]_{3}$ $[[10,0,4]]_{2}$
$\mathrm{BH}(10,6)$ $[10,5,5]_{25}$ $[[10,0,5]]_{5}^{\ast}$ $[[10,0,4]]_{2}$
$\mathrm{BH}(10,5)$ $[10,5,6]_{16}$ $[[10,0,6]]_{4}^{\ast}$ $[[10,0,4]]_{2}$
$\mathrm{CGW}(12,10;6)$ $[12,6,6]_{25}$ $[[12,0,6]]_{5}$ $[[12,0,6]]_{2}$
$\mathrm{BH}(14,8)$ $[14,7,8]_{49}$ $[[14,0,8]]_{7}^{\ast}$ $[[14,0,6]]_{2}$
$\mathrm{BH}(18,4)$ $[18,9,8]_{9}$ $[[18,0,8]]_{3}$ $[[18,0,8]]_{2}$
$\mathrm{BH}(20,6)$ $[20,10,8]_{25}$ $[[20,0,8]]_{5}$ $[[20,0,8]]_{2}$
$\mathrm{BH}(20,5)$ $[20,9,8]_{16}$ $[[20,2,6]]_{4}$ $[[20,2,6]]_{2}$
$\mathrm{CGW}(20,9;4)$ $[20,8,9]_{9}$ $[[20,4,6]]_{3}$ $[[20,4,6]]_{2}$
$\mathrm{CGW}(21,16;3)$ $[21,3,16]_{4}$ $[[21,15,3]]_{2}$ $[[21,15,3]]_{2}$
$\mathrm{BH}(24,4)$ $[24,12,9]_{9}$ $[[24,0,9]]_{3}$ $[[24,0,8]]_{2}$
$\mathrm{BH}(25,6)$ $[25,9,13]_{25}$ $[[25,7,6]]_{5}$ $[[25,7,5]]_{2}$
$\mathrm{CGW}(26,25;6)$ $[26,5,22]_{25}$ $[[26,16,6]]_{5}^{\ast}$ $[[26,16,4]]_{2}$
$\mathrm{BH}(30,4)$ $[30,15,12]_{9}$ $[[30,0,12]]_{3}$ $[[30,0,12]]_{2}$
$\mathrm{BH}(36,3)$ $[36,18,12]_{4}$ $[[36,0,12]]_{2}$ $[[36,0,12]]_{2}$
$\mathrm{BH}(42,4)$ $[42,21,14]_{9}$ $[[42,0,14]]_{3}$ $[[42,0,12]]_{2}$
: New quantum codes
*Remark 45*. All of the $[[n,k]]_{q}$ codes listed in Table [1](#tab:QTab){reference-type="ref" reference="tab:QTab"} have a minimum distance at least as large as any known $[[n,k]]_{2}$ code according to [@GrasslCodes]. The codes marked with an asterisk in Table [1](#tab:QTab){reference-type="ref" reference="tab:QTab"} are examples of $[[n,k]]_{q}$ quantum codes with a minimum distance that surpasses the known upper bound for a corresponding $[[n,k]]_{2}$ code. The matrices used to build the codes in Table [1](#tab:QTab){reference-type="ref" reference="tab:QTab"} come from a variety of sources, many of which are from constructions outlined in this paper. Many of the source matrices are Butson matrices, taken from existing databases such as the online database of complex Hadamard matrices at [@Karol].
## Concluding remarks
The computations of this section are not the result of exhaustive searches, as we do not have access to any convenient database of matrices to search through. Nor have we attempted to use any coding theory methods to either extend the codes we found, or to search for good subcodes. The codes with parameters listed in Table [1](#tab:QTab){reference-type="ref" reference="tab:QTab"} are the results of "proof of concept" experimentation using matrices we could either construct using some of the methods described in this paper, or matrices that could be easily accessed through online sources. The purpose is to demonstrate that good quantum codes can be constructed. A complete computational survey of codes constructible with these tools is beyond the scope of this paper, but the evidence presented here suggests that many good codes may be found with this approach. Mostly Butson matrices were used as source matrices because they can be easier to find in databases. A large database of CGWs with different parameters would be a worthwhile development. Finally, we note that the Tables in Appendix [6](#App:Tables){reference-type="ref" reference="App:Tables"} below are incomplete, and any contributions to their completion are very welcome.
# Acknowledgements {#acknowledgements .unnumbered}
The author thanks Rob Craigen, Wolf Holzmann and Hadi Kharaghani for sharing complex Golay sequences computed in [@CHK] which we used to build matrices in $\mathrm{BH}(n,4)$, and subsequently $[[n,k]]_{3}$ quantum codes.
# Appendix - Tables of existence of $\mathrm{CGW}(n,w;k)$ {#App:Tables}
$n \setminus w$ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
----------------- --- --- --- --- --- --- --- --- --- ---- ---- ---- ---- ---- ---- -- -- -- -- --
1 E
2 E E
3 E N N
4 E E E E
5 E N N N N
6 E E N E E N
7 E N N E N N N
8 E E E E E E E E
9 E N N N N N N N N
10 E E N E E N N E E N
11 E N N E N N N N N N N
12 E E E E E E E E E E E E
13 E N N E N N N N E N N N N
14 E E N E E N N E E E N N E N
15 E N N E N N N N E N N N N N N
: $k=2$
$n \setminus w$ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
----------------- --- --- --- --- --- --- --- --- --- ---- ---- ---- ---- ---- ---- -- -- -- -- --
1 E
2 E N
3 E N E
4 E N N N
5 E N N E N
6 E N E N N E
7 E N N N N N N
8 E N N N N N E N
9 E N E N N N N N E
10 E N N E N N N N N N
11 E N N N N N N N N N N
12 E N E N N E N N ? N N E
13 E N N N N N N N ? N N N N
14 E N N N N N ? N N N N N E N
15 E N N E N N ? N E N N E N N N
: $k=3$
$n \setminus w$ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
----------------- --- --- --- --- --- --- --- --- --- ---- ---- ---- ---- ---- ---- -- -- -- -- --
1 E
2 E E
3 E N N
4 E E E E
5 E N N N N
6 E E N E E E
7 E N N E N N N
8 E E E E E E E E
9 E N N N N N N N N
10 E E N E E E N E E E
11 E N N E N N N N N N N
12 E E E E E E E E E E E E
13 E N N E N N N ? E N N N N
14 E E N E E E ? E E E N ? E E
15 E N N E ? N N ? E N N N N N N
: $k=4$
$n \setminus w$ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
----------------- --- --- --- --- --- --- --- --- --- ---- ---- ---- ---- ---- ---- -- -- -- -- --
1 E
2 E N
3 E N N
4 E N N N
5 E N N N E
6 E N N N N N
7 E N N N N N N
8 E N N N N N N N
9 E N N N N N N N N
10 E N N N E N N N N E
11 E N N N N N N N N N N
12 E N N N N N N N N N E N
13 E N N N N N N N N N N N N
14 E N N N N N N N N N N N N N
15 E N N N N N N N N N N N N N N
: $k=5$
$n \setminus w$ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
----------------- --- --- --- --- --- --- --- --- --- ---- ---- ---- ---- ---- ---- -- -- -- -- --
1 E
2 E E
3 E N E
4 E E E E
5 E N N E N
6 E E E E E E
7 E N E E N N E
8 E E E E E E E E
9 E N E E N N ? N E
10 E E E E E ? ? E E E
11 E N E E N N ? N ? N N
12 E E E E E E E E E E E E
13 E N E E N N ? N E N N ? E
14 E E E E E E E E E E ? ? E E
15 E N E E N N E N E N N E ? N N
: $k=6$
[^1]: *E-mail: ronan.egan\@dcu.ie*
[^2]: 2010 Mathematics Subject Classification: 05B20
[^3]: Keywords: Complex generalized weighing matrix, Butson Hadamard matrix, Self-orthogonal code, Quantum code
| arxiv_math | {
"id": "2309.07522",
"title": "A survey of complex generalized weighing matrices and a construction of\n quantum error-correcting codes",
"authors": "Ronan Egan",
"categories": "math.CO",
"license": "http://creativecommons.org/publicdomain/zero/1.0/"
} |
---
bibliography:
- planar.bib
title: Transposed Poisson Structures on the planar Galilean conformal algebra
---
[^1]
**Henan Wu[^2] Wenting Zhang[^3]**
*Each $\frac{1}{2}$-derivation of the planar Galilean conformal algebra is proven to be a scalar multiple of the identity map. As a corollary, all transposed Poisson structures on the planar Galilean conformal algebra are trivial.*
*planar Galilean conformal algebra, $\frac{1}{2}$-derivation, transposed Poisson structure.*
17B10, 17B65, 17B68
# Introduction
The origin of Poisson algebras lies in the Poisson geometry of the 1970s. Since then, these algebras have shown their importance in several areas of mathematics and physics, such as Poisson manifolds, algebraic geometry, quantization theory, quantum groups, and classical and quantum mechanics. The study of Poisson algebras also led to other algebraic structures, such as noncommutative Poisson algebras ([@Casas2006Noncommutative]), Jacobi algebras, and Novikov-Poisson algebras ([@1997Novikov]). The description of all possible Poisson algebra structures with fixed Lie or associative part is an important problem in the theory of Poisson algebras ([@2021Poisson]).
Recently, the notion of transposed Poisson algebra was introduced in ([@2020Transposed]), by exchanging the roles of the two binary operations in the Leibniz rule defining the Poisson algebra. The relationship between $\frac{1}{2}$-derivations of Lie algebras and transposed Poisson algebras was developed in ([@2020]). Using this idea, possible transposed Poisson structures with fixed Lie algebras were studied by many authors. For example, all the possible transposed Poisson algebra structures were described on the Witt algebra and the Virasoro algebra in ([@2020]), on the twisted Heisenberg-Virasoro algebra, the Schrödinger-Virasoro algebra, the extended Schrödinger-Virasoro algebra and the twisted Schrödinger-Virasoro algebra in ([@2022]), on block Lie (super) algebras in ([@2022Transposed]), on Galilean and solvable Lie algebras in ([@3]), on Witt type algebras in ([@2023Transposed]), on generalized Witt algebras and Block Lie algebras in ([@MR4617175]), respectively.
In this paper, we study the $\frac{1}{2}$-derivations and transposed Poisson structures of the planar Galilean conformal algebra. The paper is organized as follows. In Section 2, we recall the definitions of transposed Poisson algebras, $\frac{1}{2}$-derivations and the planar Galilean conformal algebra. In Section 3, we compute the $\frac{1}{2}$-derivations of the planar Galilean conformal algebra and describe the related transposed Poisson algebra structures.
Throughout the paper, we denote by $\mathbb{C},\, \mathbb{C}^*,\,
\mathbb{Z},\, \mathbb{Z}^+$ the sets of complex numbers, nonzero complex numbers, integers, nonnegative integers, respectively. All vector spaces and tensor products are taken over the complex field $\mathbb{C}$.
# Preliminaries
In this section, we list some notations and results to be used in the paper. The reader can refer to [@2020Transposed] for details.
**Definition 1**. *([@2020Transposed]) Let $L$ be a vector space equipped with two nonzero bilinear operations $\cdot$ and $[,]$. The triple $(L,\cdot,[,])$ is called a transposed Poisson algebra if $(L,\cdot)$ is a commutative associative algebra and $(L, [,])$ is a Lie algebra such that for any $x,y,z\in L$, $$2z\cdot[x,y]=[z\cdot x,y]+[x,z\cdot y].$$*
**Definition 2**. *([@2020]) Let $(L,[,])$ be an algebra with multiplication $[,]$ and $\varphi$ be a linear transformation of $L$. Then $\varphi$ is called a $\frac{1}{2}$-derivation if $$\varphi[x,y]=\frac{1}{2}([\varphi(x),y]+[x,\varphi(y)])$$ for any $x,y\in L$.*
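A basic example: every scalar multiple of the identity map is a $\frac{1}{2}$-derivation of any algebra $(L,[,])$, since for $\varphi=c\,\mathrm{id}_L$ we have $$\varphi[x,y]=c[x,y]=\frac{1}{2}([cx,y]+[x,cy])=\frac{1}{2}([\varphi(x),y]+[x,\varphi(y)]).$$ The computations in Section 3 show that for the planar Galilean conformal algebra these are the only $\frac{1}{2}$-derivations.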
**Definition 3**. *([@2016Structure]) The planar Galilean conformal algebra $\mathcal{W}$ is an infinite-dimensional Lie algebra over $\mathbb{F}$ with the basis $\{L_m, H_m, I_m, J_m\,|\, m\in\mathbb{Z}\}$ subject to the following nontrivial relations $$\aligned
&[L_m, L_n]=(m-n)L_{m+n},\ \ &&[L_m, H_n]=-n H_{m+n},\\
&[L_m, I_n]=(m-n)I_{m+n},\ \ &&[L_m, J_n]=(m-n)J_{m+n},\\
&[H_m, I_n]=J_{m+n},\ \ &&[H_m, J_n]=-I_{m+n}.
\endaligned$$*
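The relations above are easy to sanity-check by computer. The following short script is our own illustration (it is not part of the paper; it assumes `sympy`, encodes only the nontrivial relations of Definition 3 with all unlisted brackets set to zero, and uses our own function names): it verifies the Jacobi identity symbolically for basis elements with symbolic indices $m,n,p$, over all choices of the four types $L,H,I,J$.

```python
import sympy as sp
from collections import defaultdict

m, n, p = sp.symbols('m n p')

def bracket_basis(X, Y):
    """Bracket of two basis elements ('L'|'H'|'I'|'J', index); unlisted brackets are zero."""
    def rule(a, i, b, j):
        if (a, b) == ('L', 'L'): return {('L', i + j): i - j}
        if (a, b) == ('L', 'H'): return {('H', i + j): -j}
        if (a, b) == ('L', 'I'): return {('I', i + j): i - j}
        if (a, b) == ('L', 'J'): return {('J', i + j): i - j}
        if (a, b) == ('H', 'I'): return {('J', i + j): sp.Integer(1)}
        if (a, b) == ('H', 'J'): return {('I', i + j): sp.Integer(-1)}
        return None
    (a, i), (b, j) = X, Y
    r = rule(a, i, b, j)
    if r is None:
        r = rule(b, j, a, i)
        if r is None:
            return {}
        return {k: -v for k, v in r.items()}      # antisymmetry
    return r

def bracket(x, y):
    """Bilinear extension to formal linear combinations (dicts: basis element -> coefficient)."""
    out = defaultdict(lambda: sp.Integer(0))
    for X, cx in x.items():
        for Y, cy in y.items():
            for Z, c in bracket_basis(X, Y).items():
                out[Z] += cx * cy * c
    return dict(out)

def jacobi_holds(X, Y, Z):
    x, y, z = {X: sp.Integer(1)}, {Y: sp.Integer(1)}, {Z: sp.Integer(1)}
    total = defaultdict(lambda: sp.Integer(0))
    for t in (bracket(x, bracket(y, z)), bracket(y, bracket(z, x)), bracket(z, bracket(x, y))):
        for K, c in t.items():
            total[K] += c
    return all(sp.expand(c) == 0 for c in total.values())

types = ['L', 'H', 'I', 'J']
print(all(jacobi_holds((a, m), (b, n), (c, p))
          for a in types for b in types for c in types))   # True
```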
# $\frac{1}{2}$-derivations of $\mathcal{W}$
First, we develop a result on $\frac{1}{2}$-derivations of a general graded Lie algebra. Assume $\mathcal{L}$ is a $G$-graded Lie algebra, i.e., $$\mathcal{L}=\bigoplus_{\alpha\in G} \mathcal{L}_{\alpha},\ \ [\mathcal{L}_{\alpha}, \mathcal{L}_{\beta}]\subset \mathcal{L}_{\alpha+\beta},$$ where $G$ is an abelian group. Assume $D$ is an arbitrary $\frac{1}{2}$-derivation of $\mathcal{L}$. For any homogeneous $x_\alpha\in\mathcal{L}_{\alpha}$, denote $D(x_\alpha)=\sum y_\beta$, $y_{\beta}\in\mathcal{L}_{\beta}$. Define $D_{\gamma}(x_\alpha)=y_{\alpha+\gamma}$. By direct computation, we have that $D_{\gamma}$ is a $\frac{1}{2}$-derivation of $\mathcal{L}$ and $D=\sum_{\gamma} D_{\gamma}$.
Note that $\mathcal{W}$ is a $\mathbb{Z}_2\times \mathbb{Z}_2 \times \mathbb{Z}$-graded vector space with $$\mathcal{W}_{(\overline{0},\overline{0},m)}=\mathbb{F}L_m,\ \mathcal{W}_{(\overline{0},\overline{1},m)}=\mathbb{F}I_m,\ \mathcal{W}_{(\overline{1},\overline{0},m)}=\mathbb{F}J_m,\ \mathcal{W}_{(\overline{1},\overline{1},m)}=\mathbb{F}H_m,\ \forall m\in\mathbb{Z}.$$ It is obvious that $[\mathcal{W}_{(\overline{a_1},\overline{b_1},\gamma_1)}, \mathcal{W}_{(\overline{a_2},\overline{b_2},\gamma_2)}]\subset \mathcal{W}_{(\overline{a_1}+\overline{a_2}, \overline{b_1}+\overline{b_2},\gamma_1+\gamma_2)}$ for any $(\overline{a_1},\overline{b_1},\gamma_1), (\overline{a_2},\overline{b_2},\gamma_2)\in \mathbb{Z}_2\times \mathbb{Z}_2 \times \mathbb{Z}$. Denote $G=\mathbb{Z}_2\times \mathbb{Z}_2 \times \mathbb{Z}$. Then $\mathcal{W}$ is a $G$-graded Lie algebra. Now we assume $D$ is an arbitrary $\frac{1}{2}$-derivation of $\mathcal{W}$. Then $D=\sum D_{(\overline{a}, \overline{b},\gamma)}$, where $(\overline{a}, \overline{b},\gamma)\in G$. In the following, we characterize the homogeneous $\frac{1}{2}$-derivations of $\mathcal{W}$ case by case, which determines all $\frac{1}{2}$-derivations of $\mathcal{W}$.
Firstly we assume that $D$ is a homogenous $\frac{1}{2}$-derivation of degree $(\overline{0},\overline{0},\gamma)$.
## $D=D_{(\overline{0},\overline{0},\gamma)}$
In this case, denote $$D(L_m)=a_mL_{m+\gamma},\ D(H_m)=b_mH_{m+\gamma},\ D(I_m)=c_mI_{m+\gamma},\ D(J_m)=d_mJ_{m+\gamma},$$ where $a_m,b_m,c_m,d_m\in\mathbb{F}$. Since $[L_m,L_n]=(m-n)L_{m+n}$, we have $2(m-n)D(L_{m+n})=[D(L_m),L_n]+[L_m,D(L_n)]$. Then one gets $$\label{e1}
2(m-n)a_{m+n}=(m+\gamma-n)a_m+(m-n-\gamma)a_n,\ \ \forall m,n\in\mathbb{Z}.$$
**Lemma 4**. *The solution of ([\[e1\]](#e1){reference-type="ref" reference="e1"}) is $a_m=a_0$ for any $m\in\mathbb{Z}$.*
*Proof.* Setting $n=0$ in ([\[e1\]](#e1){reference-type="ref" reference="e1"}), we have $(m-\gamma)(a_m-a_0)=0$. Then $a_m=a_0$ for any $m\neq \gamma$. If $\gamma=0$, then $a_m=a_0$ for any $m\in\mathbb{Z}$. If $\gamma\neq0$, setting $m=\gamma$, $n=3\gamma$ in ([\[e1\]](#e1){reference-type="ref" reference="e1"}), we get $a_{\gamma}=a_0$. Then the result follows. $\Box$
Denote $a=a_0$. Let $D$ act on $[L_m, I_n]=(m-n)I_{m+n}$, we have $$\label{e2}
2(m-n)c_{m+n}=(m+\gamma-n)a+(m-n-\gamma)c_n,\ \ \forall m,n\in\mathbb{Z}.$$ Similar to the proof of Lemma [Lemma 4](#lemm2){reference-type="ref" reference="lemm2"}, we have $c_m=a$ for any $m\in\mathbb{Z}$. Let $D$ act on $[L_m, J_n]=(m-n)J_{m+n}$, we have $$\label{e3}
2(m-n)d_{m+n}=(m+\gamma-n)a+(m-n-\gamma)d_n,\ \ \forall m,n\in\mathbb{Z}.$$ Similar to the proof of Lemma [Lemma 4](#lemm2){reference-type="ref" reference="lemm2"}, we have $d_m=a$ for any $m\in\mathbb{Z}$.
Let $D$ act on $[L_m, H_n]=-n H_{m+n}$, we have $$\label{e4}
-2nb_{m+n}=-na+(-n-\gamma)b_n,\ \ \forall m,n\in\mathbb{Z}.$$ Setting $n=1$, we have $-2b_{m+1}=-a+(-1-\gamma)b_1$, implying $b_m=b_n$ for any $m,n\in\mathbb{Z}$. Denote $b=b_m$. Then we get $(\gamma-n)b=-na$ for any $n\in\mathbb{Z}$. Consequently, $a=b=0$ if $\gamma\neq 0$ and $a=b$ if $\gamma=0$.
So we get $D(L_m)=aL_{m+\gamma},\ D(H_m)=aH_{m+\gamma},\ D(I_m)=aI_{m+\gamma},\ D(J_m)=aJ_{m+\gamma}$ when $D=D_{(\overline{0},\overline{0},\gamma)}$, where $a=0$ unless $\gamma=0$.
Then we assume that $D$ is a homogenous $\frac{1}{2}$-derivation of degree $(\overline{0},\overline{1},\gamma)$.
## $D=D_{(\overline{0},\overline{1},\gamma)}$
In this case, denote $$D(L_m)=x_m I_{m+\gamma},\ D(H_m)=y_m J_{m+\gamma},\ D(I_m)=z_m L_{m+\gamma},\ D(J_m)=w_m H_{m+\gamma},$$ where $x_m, y_m, z_m, w_m\in\mathbb{F}$. Let $D$ act on $[L_m, L_n]=(m-n)L_{m+n}$, we have $$\label{e5}
2(m-n)x_{m+n}=(m+\gamma-n)x_m+(m-n-\gamma)x_n,\ \ \forall m,n\in\mathbb{Z}.$$ Similar to the proof of Lemma [Lemma 4](#lemm2){reference-type="ref" reference="lemm2"}. We have $x_m=x_n$ for any $m,n\in\mathbb{Z}$. Denote $x=x_m$. Let $D$ act on $[L_m, H_n]=-n H_{m+n}$, we have $$\label{e6}
-2ny_{m+n}=-x+(m-n-\gamma)y_n,\ \ \forall m,n\in\mathbb{Z},$$ which implies $x=0, y_m=0, m\in\mathbb{Z}$.
Let $D$ act on $[L_m, I_n]=(m-n)I_{m+n}$, we have $$\label{e7}
2(m-n)z_{m+n}=(m-n-\gamma)z_n,\ \ \forall m,n\in\mathbb{Z}.$$
**Lemma 5**. *The solution of ([\[e7\]](#e7){reference-type="ref" reference="e7"}) is $z_m=0$ for any $m\in\mathbb{Z}$.*
*Proof.* Similar to the proof of Lemma [Lemma 4](#lemm2){reference-type="ref" reference="lemm2"}, setting $m=0$ in ([\[e7\]](#e7){reference-type="ref" reference="e7"}) gives $-2nz_n=(-n-\gamma)z_n$, that is, $(n-\gamma)z_n=0$. Then $z_n=0$ for any $n\neq \gamma$. If $\gamma=0$, setting $m=1$, $n=-1$ in ([\[e7\]](#e7){reference-type="ref" reference="e7"}) gives $4z_0=2z_{-1}=0$, so $z_0=0$ as well. If $\gamma\neq0$, setting $m=-\gamma$, $n=\gamma$ in ([\[e7\]](#e7){reference-type="ref" reference="e7"}), we get $z_{\gamma}=0$. We have $z_m=0$ for any $m\in\mathbb{Z}$.
Let $D$ act on $[L_m, J_n]=(m-n)J_{m+n}$, we have $$\label{e8}
2(m-n)w_{m+n}=(-n-\gamma)w_n,\ \ \forall m,n\in\mathbb{Z}.$$
**Lemma 6**. *The solution of ([\[e8\]](#e8){reference-type="ref" reference="e8"}) is $w_m=0$ for any $m\in\mathbb{Z}$.*
*Proof.* Setting $m=0$ in ([\[e8\]](#e8){reference-type="ref" reference="e8"}), we have $(n-\gamma)w_n=0$. Then $w_n=0$ for any $n\neq \gamma$. If $\gamma=0$, setting $m=1$, $n=-1$ in ([\[e8\]](#e8){reference-type="ref" reference="e8"}) gives $4w_0=w_{-1}=0$, so $w_0=0$ as well. If $\gamma\neq0$, setting $m=-\gamma$, $n=\gamma$ in ([\[e8\]](#e8){reference-type="ref" reference="e8"}), we get $w_{\gamma}=0$. Hence $w_m=0$ for any $m\in\mathbb{Z}$.
Now we can get $D(L_m)=0,\ D(H_m)=0,\ D(I_m)=0,\ D(J_m)=0$ when $D=D_{(\bar{0},\bar{1},\gamma)}$.
Then we assume that $D$ is a homogenous $\frac{1}{2}$-derivation of degree $(\overline{1},\overline{0},\gamma)$.
## $D=D_{(\overline{1},\overline{0},\gamma)}$
In this case, denote $$D(L_m)=h_mJ_{m+\gamma},\ D(H_m)=e_mI_{m+\gamma},\ D(I_m)=f_mH_{m+\gamma},\ D(J_m)=g_mL_{m+\gamma},$$ where $h_m,e_m,f_m,g_m\in\mathbb{F}$.
Let $D$ act on $[L_m, L_n]=(m-n)L_{m+n}$, we have $$\label{e9}
2(m-n)h_{m+n}=(m+\gamma-n)h_m+(m-n-\gamma)h_n,\ \ \forall m,n\in\mathbb{Z}.$$
Similar to the proof of Lemma [Lemma 4](#lemm2){reference-type="ref" reference="lemm2"}, we have $h_m=h_n$ for any $m,n\in\mathbb{Z}$. Denote $h=h_m$. Let $D$ act on $[L_m, H_n]=-n H_{m+n}$, we have $$\label{e10}
-2ne_{m+n}=h+(m-n-\gamma)e_n,\ \ \forall m,n\in\mathbb{Z},$$
which implies $h=0, e_m=0, m\in\mathbb{Z}$. Let $D$ act on $[L_m, I_n]=(m-n)I_{m+n}$, we have $$\label{e11}
2(m-n)f_{m+n}=(-n-\gamma)f_n,\ \ \forall m,n\in\mathbb{Z}.$$
Similar to the proof of Lemma [Lemma 6](#lemm5){reference-type="ref" reference="lemm5"}, setting $m=0$ in ([\[e11\]](#e11){reference-type="ref" reference="e11"}), we have $(n-\gamma)f_n=0$. Then $f_n=0$ for any $n\neq \gamma$. If $\gamma=0$, setting $m=1$, $n=-1$ in ([\[e11\]](#e11){reference-type="ref" reference="e11"}) gives $4f_0=f_{-1}=0$; if $\gamma\neq0$, setting $m=-\gamma$, $n=\gamma$ in ([\[e11\]](#e11){reference-type="ref" reference="e11"}), we get $f_{\gamma}=0$. We have $f_m=0$ for any $m\in\mathbb{Z}$.
Let $D$ act on $[L_m, J_n]=(m-n)J_{m+n}$, we have $$\label{e12}
2(m-n)g_{m+n}=(m-n-\gamma)g_n,\ \ \forall m,n\in\mathbb{Z}.$$
Similar to the proof of Lemma [Lemma 5](#lemm4){reference-type="ref" reference="lemm4"}, setting $m=0$ in ([\[e12\]](#e12){reference-type="ref" reference="e12"}), we have $(n-\gamma)g_n=0$. Then $g_n=0$ for any $n\neq \gamma$. If $\gamma=0$, setting $m=1$, $n=-1$ in ([\[e12\]](#e12){reference-type="ref" reference="e12"}) gives $4g_0=2g_{-1}=0$; if $\gamma\neq0$, setting $m=-\gamma$, $n=\gamma$ in ([\[e12\]](#e12){reference-type="ref" reference="e12"}), we get $g_{\gamma}=0$. Hence $g_m=0$ for any $m\in\mathbb{Z}$.
So we can get $D(L_m)=0,\ D(H_m)=0,\ D(I_m)=0,\ D(J_m)=0$ when $D=D_{(\overline{1},\overline{0},\gamma)}$.
Then we assume that $D$ is a homogenous $\frac{1}{2}$-derivation of degree $(\overline{1},\overline{1},\gamma)$.
## $D=D_{(\overline{1},\overline{1},\gamma)}$
In this case, denote $$D(L_m)=i_m H_{m+\gamma},\ D(H_m)=j_mL_{m+\gamma},\ D(I_m)=k_mJ_{m+\gamma},\ D(J_m)=l_mI_{m+\gamma},$$ where $i_m,j_m,k_m,l_m\in\mathbb{F}$. Note that $D$ preserves the subspace spanned by $\{L_m, H_m\,|\,m\in\mathbb{Z}\}$, so $D$ restricts to a $\frac{1}{2}$-derivation of this (centerless) Heisenberg-Virasoro subalgebra.
Let $D$ act on $[L_m, H_n]=-n H_{m+n}$, we have $$\label{e13}
-2nj_{m+n}=(m-n-\gamma)j_n,\ \ \forall m,n\in\mathbb{Z},$$
**Lemma 7**. *The solution of ([\[e13\]](#e13){reference-type="ref" reference="e13"}) is $j_m=0$ for any $m\in\mathbb{Z}$.*
*Proof.* Setting $m=0$ in ([\[e13\]](#e13){reference-type="ref" reference="e13"}), we have $(n-\gamma)j_n=0$. Then $j_n=0$ for any $n\neq \gamma$. If $\gamma\neq0$, then $j_0=0$, and setting $m=-\gamma$, $n=\gamma$ in ([\[e13\]](#e13){reference-type="ref" reference="e13"}) gives $j_{\gamma}=0$. If $\gamma=0$, then ([\[e13\]](#e13){reference-type="ref" reference="e13"}) becomes $$\label{e14}
2nj_{m+n}=(n-m)j_n,\ \ \forall m,n\in\mathbb{Z}.$$ Setting $m=0$ in ([\[e14\]](#e14){reference-type="ref" reference="e14"}) gives $nj_n=0$, so $j_{n}=0$ for any $n\neq 0$. Setting $n=-m=1$ in ([\[e14\]](#e14){reference-type="ref" reference="e14"}), we have $2j_{0}=2j_{1}=0$, so $j_{0}=0$. We have $j_m=0$ for any $m\in\mathbb{Z}$.
Let $D$ act on $[L_m, I_n]=(m-n)I_{m+n}$, we have $$\label{e15}
2(m-n)k_{m+n}=i_{m}+k_{n}(m-n-\gamma),\ \ \forall m,n\in\mathbb{Z}.$$
**Lemma 8**. *The solution of ([\[e15\]](#e15){reference-type="ref" reference="e15"}) is $k_n=0$ for any $n\in\mathbb{Z}$.*
*Proof.* Setting $m=0$ in ([\[e15\]](#e15){reference-type="ref" reference="e15"}), we have $(\gamma-n)k_{n}=i_{0}$ for all $n\in\mathbb{Z}$. Taking $n=\gamma$ gives $i_{0}=0$, and hence $k_{n}=0$ for any $n\neq\gamma$. Next, setting $n=\gamma$ and $m\neq 0$ in ([\[e15\]](#e15){reference-type="ref" reference="e15"}) gives $i_{m}=-(m-2\gamma)k_{\gamma}$. Finally, setting $m=\gamma-n$ with $n\neq\gamma$ in ([\[e15\]](#e15){reference-type="ref" reference="e15"}) and using the previous two observations yields $(\gamma-5n)k_{\gamma}=0$; choosing $n$ with $n\neq\gamma$ and $5n\neq\gamma$ gives $k_{\gamma}=0$. Hence $k_{n}=0$ for any $n\in\mathbb{Z}$.
Let $D$ act on $[L_m, J_n]=(m-n)J_{m+n}$, we have $$\label{e16}
2(m-n)l_{m+n}=-i_{m}+(m-n-\gamma)l_{n},\ \ \forall m,n\in\mathbb{Z}.$$ Setting $m=0$ in ([\[e16\]](#e16){reference-type="ref" reference="e16"}), we have $(n-\gamma)l_{n}=i_{0}$, so $i_{0}=0$ and $l_{n}=0$ for any $n\neq\gamma$; arguing as in the proof of the previous lemma gives $l_{\gamma}=0$ as well, so $l_{n}=0$ for any $n\in\mathbb{Z}$.
Let $D$ act on $[L_m, L_n]=(m-n)L_{m+n}$, we have $$\label{e17}
2(m-n)i_{m+n}=(m+\gamma)i_m+(-n-\gamma)i_n,\ \ \forall m,n\in\mathbb{Z}.$$
**Lemma 9**. *The solution of ([\[e17\]](#e17){reference-type="ref" reference="e17"}) is $i_m=0$ for any $m\in\mathbb{Z}$.*
*Proof.* Setting $m=0$ in ([\[e17\]](#e17){reference-type="ref" reference="e17"}), we have $$\label{e18}
-2ni_{n}=\gamma i_0+(-n-\gamma)i_n,\ \ \forall n\in\mathbb{Z}.$$ Hence $$\label{e19}
i_{n}=\frac{\gamma}{\gamma-n}i_{0},\ \ n\neq \gamma.$$
Setting $n=-m$ with $m\notin \{\gamma,-\gamma,0\}$ in ([\[e17\]](#e17){reference-type="ref" reference="e17"}) and using ([\[e19\]](#e19){reference-type="ref" reference="e19"}), we have $$\label{e20}
4mi_{0}=\frac{4m\gamma^2}{(\gamma-m)(\gamma+m)}i_{0}.$$ Since $m\neq 0$, so that $(\gamma-m)(\gamma+m)\neq\gamma^2$, ([\[e20\]](#e20){reference-type="ref" reference="e20"}) forces $i_{0}=0$. Then ([\[e19\]](#e19){reference-type="ref" reference="e19"}) gives $i_{n}=0$ for any $n\neq \gamma$. If $\gamma\neq 0$, setting $m=-\gamma$, $n=\gamma$ in ([\[e17\]](#e17){reference-type="ref" reference="e17"}) gives $i_{\gamma}=0$; if $\gamma=0$, then $i_{\gamma}=i_{0}=0$. We have $i_m=0$ for any $m\in\mathbb{Z}$.
So we can get $D(L_m)=0,\ D(H_m)=0,\ D(I_m)=0,\ D(J_m)=0$ when $D=D_{(\overline{1},\overline{1},\gamma)}$.
In summary, each $\frac{1}{2}$-derivation of the planar Galilean conformal algebra is a scalar multiple of the identity map. By Theorem 8 of ([@2020]), we get
**Theorem 10**. *The transposed Poisson structures on $\mathcal{W}$ are trivial.*
[^1]: This work was supported by National Natural Science Foundation grants of China (11701345)
[^2]: School of Mathematical Sciences, Shanxi University, Taiyuan 030006, China; wuhenan\@sxu.edu.cn
[^3]: School of Mathematical Sciences, Shanxi University, Taiyuan 030006, China. 202122201017\@email.sxu.edu.cn
| arxiv_math | {
"id": "2310.03282",
"title": "Transposed Poisson Structures on the planar Galilean conformal algebra",
"authors": "Henan Wu and Wenting Zhang",
"categories": "math.RA",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
arxiv_math | {
"id": "2309.03142",
"title": "Euler Characteristics and Homotopy Types of Definable Sublevel Sets,\n with Applications to Topological Data Analysis",
"authors": "Mattie Ji, Kun Meng, Kexin Ding",
"categories": "math.AT math.ST stat.TH",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
|
---
abstract: |
  The stated skein algebra is a generalization of the Kauffman bracket skein algebra introduced in the study of quantum trace maps. When the quantum parameter is a root of unity, the stated skein algebra has a big center and is finitely generated as a module over the center. We give the center a simple description and calculate the dimension of the stated skein algebra over its center.
address: Shenzhen International Center for Mathematics, Southern University of Science and Technology, 1088 Xueyuan Avenue, Shenzhen, Guangdong, China
author:
- Tao Yu
bibliography:
- ref.bib
title: Center of the Stated Skein Algebra
---
# Introduction
Let $R$ be a commutative domain with an invertible element $q^{1/2}$. The Kauffman bracket skein algebra $\mathring\mathscr{S}_q(\mathfrak{S})$ of a surface $\mathfrak{S}$, introduced by Przytycki [@Pr] and Turaev [@Tu], is an $R$-algebra spanned by framed unoriented links in the thickened surface $\mathfrak{S}\times(-1,1)$ modulo the Kauffman bracket relations $$\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=1.1; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\tikzmath{\xl=0.1;\xr=\xw-\xl;}
\begin{knot}
\strand[edge] (\xr,\yt)--(\xl,\yd); \strand[edge] (\xl,\yt)--(\xr,\yd);
\end{knot}
\end{tikzpicture}
\mathop{}\!
=q
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=1.1; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\foreach \x in {0.1,\xw-0.1} \draw[edge] (\x,\yt)..controls (ref)..(\x,\yd);
\end{tikzpicture}
\mathop{}\!
+q^{-1}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=1.1; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\tikzmath{\xl=0.1;\xr=\xw-\xl;}
\foreach \y in {\yt,\yd} \draw[edge] (\xl,\y)..controls (ref)..(\xr,\y);
\end{tikzpicture}
\mathop{}\!
,
\qquad
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\draw[edge] (0.45,0.45) circle (0.25);
\end{tikzpicture}
\mathop{}\!
=(-q^2-q^{-2})
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\end{tikzpicture}
\mathop{}\!
.$$ The Kauffman bracket skein algebra is connected to many areas in low dimensional topology. Quantizations of character varieties [@Bul; @BFK; @PS] and Teichmüller spaces [@BWqtr; @Mu] are a few examples.
In [@LeTriang], a generalization called the stated skein algebra $\mathscr{S}_q(\mathfrak{S})$ was introduced. The surface here is of the form $\mathfrak{S}=\overline{\mathfrak{S}}\setminus\mathcal{P}$, where $\overline{\mathfrak{S}}$ is an oriented compact surface, and $\mathcal{P}$ is a finite set with at least one point in each boundary component of $\overline{\mathfrak{S}}$. The stated skein algebra is spanned by framed tangles with endpoints on the boundary of $\mathfrak{S}$, satisfying the additional relations given in Section [3.2](#sec-stateS){reference-type="ref" reference="sec-stateS"}. The original application was to give a simple construction of the quantum trace map, first defined by Bonahon and Wong [@BWqtr]. More related results have been developed since. See e.g. [@CL; @Fa; @Kor].
We are interested in the center of the skein algebras and their dimension over the center. These have important applications in representation theory. Suppose $A$ is a finitely generated $\mathbb{C}$-algebra and $Z$ is the center of $A$. Any irreducible representation of $A$ restricts to a representation of $Z$ and, by Schur's lemma, defines an algebra homomorphism $Z\to\mathbb{C}$. This is a point in the max spectrum $\mathcal{Z}:=\mathop{\mathrm{MaxSpec}}Z$. Further assume that $A$ is a domain and finitely generated as a $Z$-module. Let $\tilde{Z}$ be the field of fractions of $Z$, and define the dimension of $A$ over $Z$ as $\dim_{\tilde{Z}}(A\otimes_Z\tilde{Z})$. This dimension is always a square, and its square root is called the PI degree of $A$.
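As a toy illustration of these notions (not the skein algebra itself, and not taken from the references above): for the quantum torus on two generators $x^{\pm1},y^{\pm1}$ with $xy=\zeta yx$ and $\zeta$ a primitive $N$-th root of unity, the center is $\mathbb{C}[x^{\pm N},y^{\pm N}]$, the dimension over the center is $N^{2}$, and the PI degree $N$ is realized by the standard clock-and-shift representation. The snippet below (assuming `numpy`; the names are ours) checks the defining relation and that the $N$-th powers act as the identity in this representation.

```python
import numpy as np

N = 5                                            # any N > 1 works here
zeta = np.exp(2j * np.pi / N)                    # primitive N-th root of unity
C = np.diag([zeta ** k for k in range(N)])       # "clock" matrix, representing x
S = np.roll(np.eye(N), 1, axis=0)                # "shift" matrix, representing y

print(np.allclose(C @ S, zeta * (S @ C)))        # the relation x y = zeta y x
print(np.allclose(np.linalg.matrix_power(C, N), np.eye(N)),
      np.allclose(np.linalg.matrix_power(S, N), np.eye(N)))   # x^N and y^N act as the identity
```

The skein algebras at roots of unity studied in this paper display the same pattern, with the image of the Frobenius homomorphism supplying central elements.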
**Theorem 1** (See e.g. [@BGQGroup; @BY; @DP]). *Every point in $\mathcal{Z}$ corresponds to at least one irreducible representation of $A$. Let $M$ be the PI degree of $A$. Every irreducible representation of $A$ has dimension at most $M$. The points in $\mathcal{Z}$ that correspond to $M$-dimensional irreducible representations form an open subset $U\subset\mathcal{Z}$, called the Azumaya locus of $A$. Moreover, the correspondence is one-to-one on $U$.*
When the quantum parameter $q=\zeta$ is a root of unity, the Kauffman bracket skein algebra $A=\mathring{\mathscr{S}}_\zeta(\mathfrak{S})$ satisfies the assumptions above, as shown in [@FKLUnicity]. The set of peripheral curves, denoted $\mathring{\mathcal{P}}$, is central for all $q$. Additional central elements are given by (a subalgebra of) the image of the Frobenius homomorphism. These elements generate the center. Moreover, $\mathring{\mathscr{S}}_\zeta(\mathfrak{S})$ is finitely generated over its center, and the dimension over the center is calculated in [@FKLDimension]. See also [@BWi; @BWii; @BWiii].
In this paper, we determine the center and the dimension over the center for the stated skein algebra $\mathscr{S}_q(\mathfrak{S})$, which was announced in [@LYSurvey]. We assume that $\mathfrak{S}$ is connected with nonempty boundary. The stated skein algebra for surfaces with empty boundary reduces to the Kauffman bracket skein algebra, so existing results apply.
Suppose $\zeta$ is a primitive $n$-th root of unity. Let $d=\gcd(n,4)$, $N=n/d$, and $\epsilon=\zeta^{N^2}$. [@BLFrob] defines the Frobenius homomorphism for the stated skein algebra $$\Phi_\zeta:\mathscr{S}_\epsilon(\mathfrak{S})\to\mathscr{S}_\zeta(\mathfrak{S}),$$ which is an algebra embedding.
**Theorem 2** (Theorems [Theorem 13](#thm-center-root-1){reference-type="ref" reference="thm-center-root-1"} and [Theorem 17](#thm-dim-z){reference-type="ref" reference="thm-dim-z"}).
1. *If $n$ is odd, the center of $\mathscr{S}_\zeta(\mathfrak{S})$ is the subalgebra generated by the image of $\Phi_\zeta$, the peripheral curves, and a finite set of central elements $B_\zeta$ (defined in Lemma [Lemma 12](#lemma-central-boundary){reference-type="ref" reference="lemma-central-boundary"}).*
2. *For all roots of unity, the center is spanned by elements $\gamma$ such that $\gamma\beta$ is of the form $c\Phi_\zeta(\gamma')$ where $c$ is a polynomial of the peripheral curves, $\gamma'\in\mathscr{S}_\epsilon(\mathfrak{S})$ belongs to a subalgebra $X_\zeta$ (defined in Corollary [Corollary 11](#cor-Xzeta){reference-type="ref" reference="cor-Xzeta"}), and $\beta$ is a product of elements in $B_\zeta$.*
3. *The dimension of $\mathscr{S}_\zeta(\mathfrak{S})$ over its center is $$D_\zeta=\begin{cases}
N^{{|{\bar{\mathcal{E}}}|}-b_2},&d=1,\\
2^{2\lfloor\frac{v-1}{2}\rfloor}N^{{|{\bar{\mathcal{E}}}|}-b_2},&d=2,\\
2^{2g+2v-2}N^{{|{\bar{\mathcal{E}}}|}-b_2},&d=4,
\end{cases}$$ where $g$ is the genus of $\mathfrak{S}$, $v\ge 1$ is the number of points of $\mathcal{P}$ on the boundary $\partial\overline{\mathfrak{S}}$, ${|{\bar{\mathcal{E}}}|}=3(v-\chi(\overline{\mathfrak{S}}))+2{|{\mathring{\mathcal{P}}}|}$, and $b_2$ is the number of boundary components of $\overline{\mathfrak{S}}$ containing an even number of points of $\mathcal{P}$.*
In [@KK], corresponding results when the order $n$ of $\zeta$ is odd are obtained for a special case ($d=1$ with one point of $\mathcal{P}$ on each boundary component) and for a quotient of $\mathscr{S}_q(\mathfrak{S})$ called the reduced skein algebra.
# Punctured bordered surfaces
## Basic definitions
A **(punctured bordered) surface** $\mathfrak{S}$ is a surface of the form $\overline{\mathfrak{S}}\setminus\mathcal{P}$, where $\overline{\mathfrak{S}}$ is a compact oriented surface, and $\mathcal{P}$ is a finite set of **ideal points** containing at least one point from each boundary component of $\overline{\mathfrak{S}}$. For convenience, we assume $\mathfrak{S}$ is connected and has nonempty boundary (so $\mathcal{P}$ is nonempty as well).
The set of ideal points on the boundary is denoted $\mathcal{P}_\partial:=\mathcal{P}\cap\partial\overline{\mathfrak{S}}$, while the set of ideal points in the interior, also called punctures, is denoted $\mathring{\mathcal{P}}:=\mathcal{P}\cap\mathring{\mathfrak{S}}$.
An **ideal arc** of $\mathfrak{S}$ is an embedding $a:(0,1)\to\mathfrak{S}$ that extends to an immersion $\bar{a}:[0,1]\to\overline{\mathfrak{S}}$ with **endpoints** $\bar{a}(0),\bar{a}(1)$ in $\mathcal{P}_\partial$. Note we do not allow ideal arcs to end on punctures. As usual, we identify an ideal arc with its image. Isotopies of ideal arcs are considered in the class of ideal arcs. If $\bar{a}(0)=\bar{a}(1)$ and $\bar{a}$ bounds a disk in $\mathfrak{S}$, $a$ is called a **trivial** ideal arc.
We say the surface $\mathfrak{S}$ is triangulable if it is not a monogon or bigon. A **(quasi)triangulation** of $\mathfrak{S}$ is a maximal collection $\mathcal{E}$ of nontrivial, pairwise non-isotopic ideal arcs called **edges**. A triangulation exists if $\mathfrak{S}$ is not a monogon. The bigon has a unique triangulation consisting of one edge connecting the ideal points. However, the combinatorics of the bigon triangulation is very different from that of the other surfaces, so we consider the bigon as exceptional.
For each boundary component of $\mathfrak{S}$, there is an edge of $\mathcal{E}$ isotopic to it. Such an edge is called **boundary**, and we always assume it is exactly on the boundary of $\mathfrak{S}$. The other edges are called **interior**. The collection of boundary edges is denoted $\mathcal{E}_\partial$.
The interior edges of a triangulation cut a triangulable surface into a disjoint union of triangles and punctured monogons. Let $\mathcal{F}(\mathcal{E})$ denote these components, which are called the **faces** of the triangulation. Since arcs must end on the boundary of $\overline{\mathfrak{S}}$, each triangle has three distinct edges.
## Matrices associated with a triangulation
Following [@LYSL2], define the following matrices associated to a triangulation $\mathcal{E}$ of a triangulable surface $\mathfrak{S}$.
The edges of a triangle are cyclically ordered. For each triangle $\tau\in\mathcal{F}(\mathcal{E})$ and $a,b\in\mathcal{E}$, define $$Q_\tau(a,b)=\begin{cases}
1,&a\to b\text{ is clockwise},\\
-1,&a\to b\text{ is counterclockwise},\\
0,&\text{otherwise}.
\end{cases}
\qquad
Q=\sum_{\tau\in\mathcal{F}(\mathcal{E})}Q_\tau.$$ The matrix $Q$ is $Q_\mathcal{E}$ in [@LYSL2].
Given an edge in $\mathcal{E}$, removing a point in the interior produces two **half-edges**. For each ideal point $v\in\mathcal{P}_\partial$, if $a'$ and $b'$ are disjoint half-edges that meet at $v$, define $$P'_{+,v}(a',b')=\begin{cases}
1,&b'\text{ is counterclockwise to }a',\\
0,&\text{otherwise}.
\end{cases}$$ Given two edges $a,b\in\mathcal{E}$, isotope them so that they are disjoint. Let $$P_{+,v}(a,b)=\sum P'_{+,v}(a',b'),\qquad
P_+=\sum_{v\in\mathcal{P}_\partial}P_{+,v},$$ where the first sum is over half-edges $a'$ of $a$ and half-edges $b'$ of $b$. The disjointness condition is essential for the definition to make sense when $a=b$.
Some examples are given in Figure [\[def-qp-illu\]](#def-qp-illu){reference-type="ref" reference="def-qp-illu"}.
For a surface $\mathfrak{S}$ with a triangulation $\mathcal{E}$, there is a relation between the face matrix $Q$ and the vertex matrix $P_+$. Define an $\mathcal{E}\times\mathcal{E}_\partial$ matrix $J$ by $$J(a,b)=\begin{cases}1,&a=b,\\0,&a\ne b.\end{cases}$$
**Lemma 1** ([@LYSL2 Lemma A.1]). *$P_+(JJ^T-Q)=2I$.*
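To illustrate, consider the ideal triangle, whose unique triangulation consists of its three boundary edges, so that $\mathcal{E}=\mathcal{E}_\partial$ and $J$ is the identity matrix. If the edges $a,b,c$ are labeled so that $a\to b\to c\to a$ is the clockwise cyclic order, then $$Q=\begin{pmatrix}0&1&-1\\-1&0&1\\1&-1&0\end{pmatrix},$$ and Lemma [Lemma 1](#lemma-H-inverse){reference-type="ref" reference="lemma-H-inverse"} then determines $$P_+=2(JJ^T-Q)^{-1}=\begin{pmatrix}1&1&0\\0&1&1\\1&0&1\end{pmatrix}.$$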
Let $\bar{\mathcal{E}}=\mathcal{E}\sqcup\hat\mathcal{E}_\partial$, where $\hat\mathcal{E}_\partial=\{\hat{e}\mid e\in\mathcal{E}_\partial\}$ is a second copy of $\mathcal{E}_\partial$. The extended version of $Q$ is the $\bar{\mathcal{E}}\times\bar{\mathcal{E}}$ block matrix, with respect to the above decomposition of $\bar{\mathcal{E}}$, $$\bar{Q}=\begin{pmatrix}Q&-J\\J^T&0\end{pmatrix}.$$ The matrix $\bar{Q}$ differs from the matrix of the same name in [@LYSL2] by the signs on $J$ and $J^T$, because of the choice of generators of the quantum torus defined in Section [3.8](#sec-qtr){reference-type="ref" reference="sec-qtr"}.
# Stated Skein algebra
## Stated tangles
Given a punctured bordered surface $\mathfrak{S}$, a **tangle** $\alpha$ over $\mathfrak{S}$ is a compact $1$-dimensional smooth submanifold embedded in $\mathfrak{S}\times(-1,1)$ with a framing (normal vector field) such that
1. $\partial\alpha\subset\partial\mathfrak{S}\times(-1,1)$;
2. For each boundary edge $b$, the points in $\partial\alpha\cap(b\times(-1,1))$ have distinct heights;
3. The framing at each $x\in\partial\alpha$ is vertical, i.e., tangent to $\{x\}\times(-1,1)$.
A **stated tangle** is a tangle $\alpha$ equipped with a **state** map $\partial\alpha\to\{+,-\}$. An isotopy of stated tangles is a homotopy through stated tangles. In particular, states and height ordering on a boundary edge are preserved. By convention, the empty set is included as a stated tangle.
When representing tangles by diagrams, the height ordering of the endpoints on a boundary edge is indicated by an arrow: the heights increase as one follows the direction of the arrow. Any tangle can be isotoped to have such a projection. If the arrows on all boundary edges agree with the orientation induced from the surface, then the diagram is called **positively ordered**. All diagrams in this paper are positively ordered.
## Stated skein algebra {#sec-stateS}
Let $R$ be an integral domain with an invertible element $q^{1/2}$. The main example is $R=\mathbb{C}$ with $q^{1/2}$ a nonzero complex number. The **stated skein algebra** $\mathscr{S}_q(\mathfrak{S})$ is the $R$-module generated by isotopy classes of stated tangles modulo the following relations. $$\begin{aligned}
&\text{Skein relation:}&
&
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=1.1; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\tikzmath{\xl=0.1;\xr=\xw-\xl;}
\begin{knot}
\strand[edge] (\xr,\yt)--(\xl,\yd); \strand[edge] (\xl,\yt)--(\xr,\yd);
\end{knot}
\end{tikzpicture}
\mathop{}\!
=q
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=1.1; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\foreach \x in {0.1,\xw-0.1} \draw[edge] (\x,\yt)..controls (ref)..(\x,\yd);
\end{tikzpicture}
\mathop{}\!
+q^{-1}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=1.1; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\tikzmath{\xl=0.1;\xr=\xw-\xl;}
\foreach \y in {\yt,\yd} \draw[edge] (\xl,\y)..controls (ref)..(\xr,\y);
\end{tikzpicture}
\mathop{}\!
.\label{eq-defrel1}\\
%
&\text{Trivial loop relation:}&
&
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\draw[edge] (0.45,0.45) circle (0.25);
\end{tikzpicture}
\mathop{}\!
=(-q^2-q^{-2})
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\end{tikzpicture}
\mathop{}\!
.\\
%
&\text{State exchange relation:}&
&
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$-$}{}\AND\equal{$+$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$-$} (B)node{\footnotesize$+$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!
=q^2
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$+$}{}\AND\equal{$-$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$+$} (B)node{\footnotesize$-$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!
+q^{-1/2}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{w}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{w}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{w}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{w}{}}{}{
\draw[w] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{}{}\AND\equal{}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize} (B)node{\footnotesize};
}
\draw[edge] (0,\yb) ..controls (0.7,\yb) and (0.7,\ya).. (0,\ya);
\end{tikzpicture}
\mathop{}\!
.\label{eq-defrel3}\\
%
&\text{Returning arc relations:}&
&
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$+$}{}\AND\equal{$+$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$+$} (B)node{\footnotesize$+$};
}
\draw[edge] (\xw,\yb) ..controls (0.1,\yb) and (0.1,\ya).. (\xw,\ya);
\end{tikzpicture}
\mathop{}\!
=
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$-$}{}\AND\equal{$-$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$-$} (B)node{\footnotesize$-$};
}
\draw[edge] (\xw,\yb) ..controls (0.1,\yb) and (0.1,\ya).. (\xw,\ya);
\end{tikzpicture}
\mathop{}\!
=0,\qquad
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$+$}{}\AND\equal{$-$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$+$} (B)node{\footnotesize$-$};
}
\draw[edge] (\xw,\yb) ..controls (0.1,\yb) and (0.1,\ya).. (\xw,\ya);
\end{tikzpicture}
\mathop{}\!
=q^{-1/2}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{w}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{w}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{w}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{w}{}}{}{
\draw[w] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
\end{tikzpicture}
\mathop{}\!
.\label{eq-defrel4}\end{aligned}$$ The product of two stated tangles is defined by stacking: $[L_1][L_2]=[i_+(L_1)\cup i_-(L_2)]$, where $i_\pm:M\to M$, with $M=\mathfrak{S}\times(-1,1)$, are the embeddings given by $i_\pm(x,t)=(x,\frac{t\pm1}{2})$. This extends linearly to a well-defined product on $\mathscr{S}_q(\mathfrak{S})$.
The height exchange moves $$\label{eq-height-ex}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$\mu$}{}\AND\equal{$+$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$\mu$} (B)node{\footnotesize$+$};
}
\ifthenelse{\equal{$\mu$}{a}}{
\draw[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\draw[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
}{
\begin{knot}
\strand[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\strand[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\ifthenelse{\(\equal{n}{n}\)}{\flipcrossings{1}}{}
\end{knot}
}
\path[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\path[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\end{tikzpicture}
\mathop{}\!
=q^{-\mu}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$+$}{}\AND\equal{$\mu$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$+$} (B)node{\footnotesize$\mu$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!
,\qquad
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$-$}{}\AND\equal{$\mu$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$-$} (B)node{\footnotesize$\mu$};
}
\ifthenelse{\equal{$-$}{a}}{
\draw[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\draw[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
}{
\begin{knot}
\strand[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\strand[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\ifthenelse{\(\equal{n}{n}\)}{\flipcrossings{1}}{}
\end{knot}
}
\path[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\path[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\end{tikzpicture}
\mathop{}\!
=q^{\mu}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$\mu$}{}\AND\equal{$-$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$\mu$} (B)node{\footnotesize$-$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!$$ are consequences of the defining relations. Here we identify the states $\pm$ with $\pm1$. They are also equivalent to the state exchange relation assuming the other defining relations.
## Exceptional surfaces
We have assumed throughout that $\mathfrak{S}$ has nonempty boundary. If $\mathfrak{S}$ has empty boundary, tangles can no longer have endpoints, and the boundary relations [\[eq-defrel3\]](#eq-defrel3){reference-type="eqref" reference="eq-defrel3"} and [\[eq-defrel4\]](#eq-defrel4){reference-type="eqref" reference="eq-defrel4"} are vacuous. The stated skein algebra reduces to the Kauffman bracket skein algebra.
If $\mathfrak{S}$ is a monogon, it is easy to see that every stated tangle can be reduced to a scalar multiple of the empty tangle using the defining relations. By Theorem [Theorem 2](#thm-basis){reference-type="ref" reference="thm-basis"}, the empty tangle forms a basis, so the stated skein algebra is simply $R$.
If $\mathfrak{S}$ is a bigon, the stated skein algebra is isomorphic to the quantum coordinate ring $\mathcal{O}_{q^2}(SL_2)$. Additional structures can be defined so that the isomorphism preserves the cobraided Hopf algebra structure. See [@CL Section 3]. The representation theory is well studied. For results on the center and the PI degree, see [@BGQGroup Chapter III.3] and [@DL Appendix] for $q^2$ an odd root of unity.
## Basis
A stated tangle diagram is **simple** if there are no crossings, **essential** if there are no components homotopic (rel endpoints) to a point or part of a boundary edge, and **increasingly stated** if on each boundary edge, the endpoints with the $+$ state are higher than the endpoints with the $-$ state.
**Theorem 2** ([@LeTriang Theorem 2.8]). *As an $R$-module, $\mathscr{S}_q(\mathfrak{S})$ is free. A basis is given by the equivalence classes of increasingly stated, positively ordered, simple essential stated tangle diagrams.*
A **peripheral curve** is a simple closed curve that bounds a disk containing exactly one interior ideal point. Peripheral curves are central because, up to isotopy, they are disjoint from any other diagram, so their heights can be adjusted freely when stacking. The corollary below then follows easily from Theorem [Theorem 2](#thm-basis){reference-type="ref" reference="thm-basis"}.
**Corollary 3**. *Let $X_v$ denote the peripheral curve around the interior ideal point $v\in\mathring{\mathcal{P}}$. Then the polynomial algebra $$R[\mathring{\mathcal{P}}]:=R[X_v,v\in\mathring{\mathcal{P}}]$$ is an embedded subalgebra of the center of $\mathscr{S}_q(\mathfrak{S})$. Therefore, $\mathscr{S}_q(\mathfrak{S})$ is an $R[\mathring{\mathcal{P}}]$-algebra.*
*As an $R[\mathring{\mathcal{P}}]$-module, $\mathscr{S}_q(\mathfrak{S})$ is free with a basis $B$ given by $R$-basis elements of $\mathscr{S}_q(\mathfrak{S})$ with no peripheral components.*
## Ideal tangle diagrams {#sec-move-left}
In a few places, we need tangle diagrams that end on $\mathcal{P}_\partial$ instead of on $\partial\mathfrak{S}$, which is away from $\mathcal{P}_\partial$. We call these **ideal tangle diagrams**. In particular, ideal arcs are ideal tangle diagrams.
Given an ideal tangle diagram $\alpha$, it defines a usual tangle diagram $D(\alpha)$ by moving the endpoints slightly in the negative direction of the boundary in a way that does not introduce extra crossings. See Figure [\[fig-move-left\]](#fig-move-left){reference-type="ref" reference="fig-move-left"}. This operation is clearly invertible (up to isotopy).
## Parameterization of basis elements
Given a triangulation $\mathcal{E}$ of $\mathfrak{S}$, the $R[\mathring{\mathcal{P}}]$-basis $B$ in Corollary [Corollary 3](#cor-peri){reference-type="ref" reference="cor-peri"} can be parameterized by edge colorings. An **edge coloring** is a vector $\bar{k}\in\mathbb{Z}^{\bar{\mathcal{E}}}$, which is also considered as a map $\bar{\mathcal{E}}\to\mathbb{Z}$. A basis element $\alpha\in B$ represented by a simple tangle diagram with state $s:\partial\alpha\to\{\pm\}$ defines an edge coloring $\bar{k}_\alpha:\bar{\mathcal{E}}\to\mathbb{N}$ by $$\label{eq-color-def}
\begin{aligned}
\bar{k}_\alpha(e)&:=I(\alpha,e),&e&\in\mathcal{E},\\
\bar{k}_\alpha(\hat{e})&:=I(\alpha,e)-\sum_{x\in\alpha\cap e}s(x),& e&\in\mathcal{E}_\partial,
\end{aligned}$$ where $I$ denotes the geometric intersection number. This is defined in [@LYSL2 Section 6.3] as $\bar{\mathbf{n}}_\alpha$, but we want to reserve $n$ for another use. It is easy to see that $\bar{k}_\alpha(\hat{e})$ is twice the number of endpoints of $\alpha$ on $e$ with the $-$ state. An edge coloring arising this way is called **admissible**. Let $\Lambda\subset\mathbb{N}^{\bar\mathcal{E}}$ be the set of admissible edge colorings, and let $\langle\Lambda\rangle\subset\mathbb{Z}^{\bar\mathcal{E}}$ be the subgroup generated by $\Lambda$. Edge colorings in $\langle\Lambda\rangle$ are called **balanced**.
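For instance, if $\alpha$ consists of a single arc with both endpoints on a boundary edge $e$, one endpoint stated $+$ and the other stated $-$, then $\bar{k}_\alpha(e)=I(\alpha,e)=2$ while the states sum to zero, so $\bar{k}_\alpha(\hat{e})=2$, which is indeed twice the single $-$ state.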
**Proposition 4** ([@LYSL2 Lemma 6.2, Proposition 6.3]). *$\langle\Lambda\rangle$ consists of edge colorings $\bar{k}\in\mathbb{Z}^{\bar\mathcal{E}}$ such that*
1. *$\bar{k}(a)+\bar{k}(b)+\bar{k}(c)$ is even if $a,b,c\in\mathcal{E}$ are edges of a face $\tau\in\mathcal{F}(\mathcal{E})$, and*
2. *$\bar{k}(a)$ is even if $a$ bounds a punctured monogon or if $a\in\hat\mathcal{E}_\partial$.*
*$\Lambda\subset\mathbb{N}^{\bar\mathcal{E}}\cap\langle\Lambda\rangle$ is the submonoid defined by the additional inequalities*
1. *$\bar{k}(a)\le\bar{k}(b)+\bar{k}(c)$ if $a,b,c\in\mathcal{E}$ are edges of a face $\tau\in\mathcal{F}(\mathcal{E})$, and*
2. *$\bar{k}(\hat{e})\le 2\bar{k}(e)$ for all $e\in\mathcal{E}_\partial$.*
*An edge coloring $\bar{k}\in\mathbb{Z}^{\bar\mathcal{E}}$ is in $\langle\Lambda\rangle$ if and only if there exists $\bar{l}\in\Lambda$ such that $\bar{k}-\bar{l}\in(2\mathbb{Z})^{\bar\mathcal{E}}$.*
The balanced subgroup $\langle\Lambda\rangle$ has another useful description. Each edge $a\in\mathcal{E}$ defines a basis element $X_a$ of $\mathscr{S}_q(\mathfrak{S})$ by assigning $+$ states to both endpoints of the tangle diagram $D(a)$ defined in Section [3.5](#sec-move-left){reference-type="ref" reference="sec-move-left"}. Each boundary edge $e\in\mathcal{E}_\partial$ defines an additional basis element $X_{\hat{e}}$ by a different state assignment shown in Figure [\[e-hat-def\]](#e-hat-def){reference-type="ref" reference="e-hat-def"}. Therefore, we get the corresponding edge colorings $\bar{k}_a:=\bar{k}_{X_a}$, $a\in\bar{\mathcal{E}}$. Define a $\bar\mathcal{E}\times\bar\mathcal{E}$ matrix $$\bar{K}(a,b)=\bar{k}_a(b).$$ Comparing Figure [\[e-hat-def\]](#e-hat-def){reference-type="ref" reference="e-hat-def"} with the definition of $P_+$, we see $$\bar{K}=\begin{pmatrix}P_+&0\\J^TP_+&2I\end{pmatrix}.$$
**Lemma 5**. *The subgroup of balanced vectors is a free abelian group with a basis given by the rows of $\bar{K}$, or in other words, $\bar{k}_a,a\in\bar\mathcal{E}$.*
*Consequently, a vector $\bar{k}$ is balanced if and only if it can be written as $\bar{k}=(kP_+,2\hat{k})$ for some $k\in\mathbb{Z}^{\mathcal{E}}$ and $\hat{k}\in\mathbb{Z}^{\hat\mathcal{E}_\partial}$.*
*Proof.* That these vectors generate $\langle\Lambda\rangle$ is a corollary of Theorem [Theorem 6](#thm-qtr){reference-type="ref" reference="thm-qtr"}. Their linear independence follows from the invertibility of $P_+$, which is a consequence of Lemma [Lemma 1](#lemma-H-inverse){reference-type="ref" reference="lemma-H-inverse"}. ◻
## Filtrations
The edge colorings define filtrations of the skein algebra. Let $\mathop{\mathrm{\deg^\circ}}:\mathbb{Z}^{\bar\mathcal{E}}\to\mathbb{Z}$ be the homomorphism $$\mathop{\mathrm{\deg^\circ}}(\bar{k})=\sum_{e\in\mathcal{E}}\bar{k}(e).$$ Note the edges in $\hat\mathcal{E}_\partial$ are not summed. The filtration $\{F^\circ_d\}_{d\in\mathbb{N}}$ is given by $$F^\circ_d=R[\mathring{\mathcal{P}}]\text{-span of }\{\alpha\in B\mid\mathop{\mathrm{\deg^\circ}}(\bar{k}_\alpha)\le d\}.$$
For later uses, we also need a refinement of $\{F^\circ_d\}$ that breaks all ties. Let $<$ be a well ordering on $\mathbb{Z}^{\bar\mathcal{E}}$ such that $\bar{k}\le\bar{l}$ implies $\mathop{\mathrm{\deg^\circ}}(\bar{k})\le\mathop{\mathrm{\deg^\circ}}(\bar{l})$. Let $\{F_{\bar{k}}\}_{\bar{k}\in\Lambda}$ be the filtration defined by $$F_{\bar{k}}=R[\mathring{\mathcal{P}}]\text{-span of }\{\alpha\in B\mid \bar{k}_\alpha \le \bar{k}\}.$$ For an edge coloring $\bar{k}$ with $\mathop{\mathrm{\deg^\circ}}(\bar{k})=d$, it is easy to see $$F^\circ_{d-1}\subset \bigcup_{\bar{l}<\bar{k}}F_{\bar{l}},\qquad
F_{\bar{k}}\subset F^\circ_d.$$
## Quantum trace map {#sec-qtr}
An advantage of working with surfaces that are not closed is that there is an algebra embedding, called the quantum trace map, from $\mathscr{S}_q(\mathfrak{S})$ into a quantum torus $\bar{\mathcal{Y}}$, which has a much simpler algebra structure. Define the Laurent polynomial algebra $$R[\mathring{\mathcal{P}}]^\diamond:=R[z_v^{\pm1},v\in\mathring{\mathcal{P}}]$$ It contains $R[\mathring{\mathcal{P}}]$ via the identification $X_v=z_v+z_v^{-1}$. Then $\bar{\mathcal{Y}}$ is defined as the $R[\mathring{\mathcal{P}}]^\diamond$-algebra with the presentation $$\bar{\mathcal{Y}}=R[\mathring{\mathcal{P}}]^\diamond\langle z_e^{\pm1},e\in\bar{\mathcal{E}}\rangle / (z_az_b=q^{\bar{Q}(a,b)}z_bz_a).$$ Number the edges $\bar\mathcal{E}=\{e_1,\dotsc,e_r\}$ and define the (Weyl-normalized) monomial $$z^{\bar{k}}=q^{-\frac{1}{2}\sum_{i<j}\bar{Q}(e_i,e_j)\bar{k}(e_i)\bar{k}(e_j)}z_{e_1}^{\bar{k}(e_1)}z_{e_2}^{\bar{k}(e_2)}\dotsm z_{e_r}^{\bar{k}(e_r)}\qquad\text{for }\bar{k}\in\mathbb{Z}^{\bar\mathcal{E}}.$$ The product of monomials is $$z^{\bar{k}}z^{\bar{l}}
=q^{\frac{1}{2}\langle\bar{k},\bar{l}\rangle_{\bar{Q}}}z^{\bar{k}+\bar{l}}
=q^{\langle\bar{k},\bar{l}\rangle_{\bar{Q}}}z^{\bar{l}}z^{\bar{k}},$$ where $$\langle\bar{k},\bar{l}\rangle_{\bar{Q}}=\sum_{a,b\in\bar\mathcal{E}}\bar{k}(a)\bar{Q}(a,b)\bar{l}(b)=\bar{k}^T\bar{Q}\bar{l}$$ is the skew-symmetric bilinear form associated to $\bar{Q}$.
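As a quick two-generator check, take $\bar{k}=(1,0,\dotsc)$ and $\bar{l}=(0,1,0,\dotsc)$, so that $z^{\bar{k}}=z_{e_1}$, $z^{\bar{l}}=z_{e_2}$, and $z^{\bar{k}+\bar{l}}=q^{-\frac{1}{2}\bar{Q}(e_1,e_2)}z_{e_1}z_{e_2}$. The product formula gives $z^{\bar{k}}z^{\bar{l}}=q^{\frac{1}{2}\bar{Q}(e_1,e_2)}z^{\bar{k}+\bar{l}}=z_{e_1}z_{e_2}$ and $z^{\bar{l}}z^{\bar{k}}=q^{-\frac{1}{2}\bar{Q}(e_1,e_2)}z^{\bar{k}+\bar{l}}=q^{-\bar{Q}(e_1,e_2)}z_{e_1}z_{e_2}=z_{e_2}z_{e_1}$, in agreement with the defining relation $z_{e_1}z_{e_2}=q^{\bar{Q}(e_1,e_2)}z_{e_2}z_{e_1}$.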
The product formula shows that $\bar{\mathcal{Y}}$ has a $\mathbb{Z}^{\bar\mathcal{E}}$-grading $$\bar{\mathcal{Y}}=\bigoplus_{\bar{k}\in\mathbb{Z}^{\bar\mathcal{E}}}R[\mathring{\mathcal{P}}]^\diamond z^{\bar{k}}.$$ The grading can be reduced to filtrations $\{F^\circ_d\}$ and $\{F_{\bar{k}}\}$, defined similarly to the skein algebra case.
**Theorem 6** ([@LYSL2 Theorems 6.5, 7.1 and Lemma 7.5]). *Suppose $\mathfrak{S}$ is a triangulable surface. There is an $R[\mathring{\mathcal{P}}]$-algebra embedding $$\phi:\mathscr{S}_q(\mathfrak{S})\to\bar{\mathcal{Y}}$$ such that*
1. *Suppose $\alpha\in B$ is a basis element with edge coloring $\bar{k}_\alpha$. Let $d=\mathop{\mathrm{\deg^\circ}}(\bar{k}_\alpha)$. Then there exists a nonzero $c(\alpha)\in R[\mathring{\mathcal{P}}]^\diamond$ such that $$\phi(\alpha)=c(\alpha)z^{\bar{k}_\alpha}\mod F^\circ_{d-1}.$$*
2. *If $\alpha=X_a$ is associated to $a\in\bar\mathcal{E}$, then $\phi(X_a)$ is a monomial. In other words, the equality above holds without "$\bmod F^\circ_{d-1}$\".*
3. *The image of $\phi$ is contained in the subalgebra generated by $\phi(X_a)^{\pm1},a\in\mathcal{E}$ and $z_v^{\pm1},v\in\mathring{\mathcal{P}}$.*
We define the **degree** and the **leading term** of a nonzero element $\alpha\in\mathscr{S}_q(\mathfrak{S})$ with respect to the filtration $\{F_{\bar{k}}\}$ as follows. The degree $\deg(\alpha)$ is the edge coloring $\bar{k}\in\mathbb{Z}^{\bar\mathcal{E}}$ such that $$\alpha=c_0\alpha_0\mod\bigcup_{\bar{l}<\bar{k}}F_{\bar{l}},$$ where $c_0\in R[\mathring{\mathcal{P}}]$ is nonzero, and $\alpha_0\in B$ is the basis element with edge coloring $\bar{k}$. The leading term of $\alpha$ is then defined as $\mathop{\mathrm{lt}}(\alpha)=c_0\alpha_0$.
By Theorem [Theorem 6](#thm-qtr){reference-type="ref" reference="thm-qtr"}(1), the degree of $\alpha$ can be obtained from the degree of $\phi(\alpha)\in\bar{\mathcal{Y}}$, which is much easier since there is actually a grading. The following result is an easy corollary.
**Corollary 7**. *Let $\alpha,\beta\in\mathscr{S}_q(\mathfrak{S})\setminus\{0\}$. Then*
1. *$\deg(\alpha\beta)=\deg\alpha+\deg\beta$.*
2. *$\mathop{\mathrm{lt}}(\alpha\beta)=q^{\langle\deg\alpha,\deg\beta\rangle_{\bar{Q}}}\mathop{\mathrm{lt}}(\beta\alpha)$.*
## The Frobenius homomorphism
Suppose $\zeta$ is a root of unity. Let $n=\mathop{\mathrm{ord}}(\zeta)$, $d=\gcd(n,4)$, $N=n/d=\mathop{\mathrm{ord}}(\zeta^4)$, and $\epsilon=\zeta^{N^2}$. Strictly speaking, we should also define $\epsilon^{1/2}=(\zeta^{1/2})^{N^2}$, which will be implied throughout the rest of the paper.
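For example, if $n=5$ then $d=1$, $N=5$, and $\epsilon=\zeta^{25}=1$; if $n=6$ then $d=2$, $N=3$, and $\epsilon=\zeta^{9}=\zeta^{3}=-1$; if $n=4$ then $d=4$, $N=1$, and $\epsilon=\zeta$. In particular, $d=1$ exactly when $n$ is odd.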
**Theorem 8** ([@BLFrob]). *There exists an algebra map $$\Phi_\zeta:\mathscr{S}_\epsilon(\mathfrak{S})\to\mathscr{S}_\zeta(\mathfrak{S}),$$ called the **Frobenius homomorphism**, satisfying the following properties.*
1. *If $\alpha$ is an arc, then $\Phi_\zeta(\alpha)$ is represented by the tangle with $N$ components, each of which is a copy of $\alpha$ shifted slightly in the direction of the framing.*
2. *More generally, for a tangle $\alpha$, $\Phi_\zeta(\alpha)$ is a linear combination of tangles obtained by replacing each arc component of $\alpha$ with exactly $N$ parallel copies and each closed component of $\alpha$ with $\le N$ parallel copies. The coefficient of the term where every component becomes $N$ copies is $1$. Consequently, $$\deg\Phi_\zeta(\alpha)=N\deg\alpha.$$*
3. *((Skew-)transparency.) Let $\alpha$ be a tangle disjoint from isotopic tangles $\beta_0,\beta_1$. If $\alpha\cup\beta_0$ and $\alpha\cup\beta_1$ have diagrams that differ by a crossing change, then $$\label{eq-skew-trans}
\Phi_\zeta(\alpha)\cup\beta_0=\zeta^{2N}\Phi_\zeta(\alpha)\cup\beta_1.$$*
Combining (1) with height exchange moves, it is easy to see that if $\alpha$ is a simple arc diagram, then $\Phi_\zeta(\alpha)$ and $\alpha^N$ differ by a power of $\zeta$.
# Characterization of the center
## Center at a generic $q$
As a warm-up, we show that at generic $q$, the center is the obvious one.
**Theorem 9**. *If $q$ is not a root of unity, then the center of $\mathscr{S}_q(\mathfrak{S})$ is $R[\mathring{\mathcal{P}}]$.*
*Proof.* Suppose $\alpha$ is central. Let $\bar{k}=\deg \alpha$. Since $\mathop{\mathrm{lt}}(\alpha\beta)=\mathop{\mathrm{lt}}(\beta\alpha)$ for any $\beta\in\mathscr{S}_q(\mathfrak{S})$ and $q$ is not a root of unity, $\bar{k}_\beta^T\bar{Q}\bar{k}=0$ by Corollary [Corollary 7](#cor-deg-lt){reference-type="ref" reference="cor-deg-lt"}. By choosing $\beta=X_a$ for $a\in\bar\mathcal{E}$, we get $$\bar{K}\bar{Q}\bar{k}=0.$$ Using block matrix notation, $$\begin{aligned}
\begin{pmatrix}I&0\\-J^T&I\end{pmatrix}\bar{K}\bar{Q}
&=\begin{pmatrix}P_+&0\\0&2I\end{pmatrix}\begin{pmatrix}Q&-J\\J^T&0\end{pmatrix}
=\begin{pmatrix}P_+Q&-P_+J\\2J^T&0\end{pmatrix}\\
&=\begin{pmatrix}P_+JJ^T-2I&-P_+J\\2J^T&0\end{pmatrix}\end{aligned}$$ where Lemma [Lemma 1](#lemma-H-inverse){reference-type="ref" reference="lemma-H-inverse"} was used in the last step. We can decompose $\bar{k}=(k,2\hat{k})$ according to $\bar\mathcal{E}=\mathcal{E}\sqcup\hat\mathcal{E}_\partial$, where the factor of $2$ comes from the admissible condition. Then we have $$\label{eq-center-generic}
\begin{aligned}
P_+Jk_\partial-2k-2P_+J\hat{k}&=0,\\
2k_\partial&=0.
\end{aligned}$$ Here, $k_\partial:=J^Tk$ consists of the components corresponding to boundary edges. Multiplying the first equation with $J^T$ and using $k_\partial=0$, we get $J^TP_+J\hat{k}=0$.
Consider the product $$\label{eq-PJk}
(P_+J\hat{k})(e)=\sum_{a\in\mathcal{E}}\sum_{b\in\mathcal{E}_\partial}P_+(e,a)J(a,b)\hat{k}(b)=\sum_{b\in\mathcal{E}_\partial}P_+(e,b)\hat{k}(b)=\hat{k}(b_1)+\hat{k}(b_2),$$ where $b_1,b_2\in\mathcal{E}_\partial$ are the boundary edges counterclockwise to $e$. See Figure [\[fig-matching\]](#fig-matching){reference-type="ref" reference="fig-matching"}. Multiplying by $J^T$ simply restricts to $e\in\mathcal{E}_\partial$. Thus, $J^TP_+J\hat{k}=0$ implies that $\hat{k}(b_1)+\hat{k}(b_2)=0$ whenever $b_1$ and $b_2$ are adjacent boundary edges. Since $\bar{k}$ is admissible, its components are nonnegative, so $\hat{k}=0$. The first equation of [\[eq-center-generic\]](#eq-center-generic){reference-type="eqref" reference="eq-center-generic"} then gives $k=0$, so the degree is trivial. The only elements with trivial degree are polynomials in the peripheral curves, that is, elements of $R[\mathring{\mathcal{P}}]$. ◻
## Gradings of the stated skein algebra
To describe the center when $q=\zeta$ is a root of unity, it is convenient to introduce gradings.
Each tangle diagram defines a homology class in $H=H_1(\overline{\mathfrak{S}},\partial\mathfrak{S};\mathbb{Z}/2)$. Note we use the compact surface $\overline{\mathfrak{S}}$ but only the punctured boundary $\partial\mathfrak{S}=\partial\overline{\mathfrak{S}}\setminus\mathcal{P}_\partial$. This homology class is preserved by all defining relations. Stacking diagrams results in the sum of homology classes. Thus, the stated skein algebra is graded by $H$. This is called the **homology class grading**.
There is an equivalent form of the homology class grading. The operation $D$ defined in Section [3.5](#sec-move-left){reference-type="ref" reference="sec-move-left"} induces an isomorphism $$D_\ast:H^\ast\to H,\qquad
H^\ast:=H_1(\overline{\mathfrak{S}},\mathcal{P}_\partial;\mathbb{Z}/2).$$ Thus, any tangle diagram $\alpha$ also defines an element $D_\ast^{-1}(\alpha)\in H^\ast$. The notation $H^\ast$ is intended to indicate that it is the dual of $H$ with respect to the $\bmod2$ intersection pairing $$i_2:H\otimes H^\ast\to\mathbb{Z}/2.$$
There is also a $\mathbb{Z}$-grading for each boundary edge $e$. Given a stated diagram $\alpha$, let $\delta_e(\alpha)$ be the sum of the states of $\alpha$ on $e$, where the states $\pm$ are identified with $\pm1$. This is compatible with the defining relations and with stacking. Such gradings are called **boundary gradings**.
The homology class grading and boundary gradings are clearly compatible, so together they form an $H\times\mathbb{Z}^{\mathcal{E}_\partial}$ grading. They are not independent, as a boundary grading $\bmod2$ reduces to an intersection number, which is homological.
The gradings can also be determined from edge colorings. Let $\alpha$ be a diagram with edge coloring $\bar{k}_\alpha$. Note each edge $e\in\mathcal{E}$ represents a homology class in $H^\ast$. From the definition [\[eq-color-def\]](#eq-color-def){reference-type="eqref" reference="eq-color-def"}, we see $$\begin{aligned}
i_2(\alpha,e)&\equiv\bar{k}_\alpha(e)\bmod2 ,&e&\in\mathcal{E},\\
\delta_e(\alpha)&=\bar{k}_\alpha(e)-\bar{k}_\alpha(\hat{e}),& e&\in\mathcal{E}_\partial.
\end{aligned}$$ The first equation determines the homology class of $\alpha$ since the edges of a triangulation span the homology group $H^\ast$ and the intersection pairing $i_2$ is non-degenerate.
We define two subalgebras using these gradings. A tangle diagram $\alpha$ on the surface $\mathfrak{S}$ is **matching** if all boundary gradings $\delta_e(\alpha)$ (equivalently, all intersection numbers with the boundary edges) have the same parity. It is **even** if its homology class grading is zero and all of its boundary gradings are divisible by $4$. The matching subalgebra $\mathscr{S}^\mathrm{ma}(\mathfrak{S})$ and the even subalgebra $\mathscr{S}^\mathrm{ev}(\mathfrak{S})$ are spanned by the corresponding types of diagrams.
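For instance, every closed tangle diagram is matching, since all of its boundary gradings vanish; it is even precisely when its class in $H$ is trivial.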
## Commutation relations at a root of unity
Recall that $\zeta$ denotes a root of unity, $n=\mathop{\mathrm{ord}}(\zeta)$, $d=\gcd(n,4)$, $N=n/d=\mathop{\mathrm{ord}}(\zeta^4)$, and $\epsilon=\zeta^{N^2}$.
**Lemma 10**. *Suppose $\alpha\in\mathscr{S}_\epsilon(\mathfrak{S})$ and $\beta\in\mathscr{S}_\zeta(\mathfrak{S})$ are given by tangle diagrams. Then $$\label{eq-Frob-comm}
\Phi_\zeta(\alpha)\beta=\zeta^{c(\alpha,\beta)N}\beta\Phi_\zeta(\alpha),\qquad
c(\alpha,\beta)=2i_2(\alpha,D^{-1}_\ast(\beta))-\sum_{e\in\mathcal{E}_\partial}\delta_e(\alpha)\delta_e(\beta).$$*
Note $c(\alpha,\beta)$ is only well-defined $\bmod4$, but this is enough to make sense of the coefficient in [\[eq-Frob-comm\]](#eq-Frob-comm){reference-type="eqref" reference="eq-Frob-comm"}.
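For instance, if $\alpha$ and $\beta$ are closed diagrams, then all boundary gradings vanish and [\[eq-Frob-comm\]](#eq-Frob-comm){reference-type="eqref" reference="eq-Frob-comm"} reduces to $\Phi_\zeta(\alpha)\beta=\zeta^{2Ni_2(\alpha,\beta)}\beta\Phi_\zeta(\alpha)$, consistent with applying the (skew-)transparency [\[eq-skew-trans\]](#eq-skew-trans){reference-type="eqref" reference="eq-skew-trans"} once for each crossing between $\alpha$ and $\beta$.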
*Proof.* For each boundary component of $\overline{\mathfrak{S}}$, choose a small regular neighborhood. Isotope the diagrams so that
1. $\alpha$ and $\beta$ do not intersect in the neighborhoods of boundary components.
2. As one follows the positive direction of each boundary edge, the endpoints of $\alpha$ are all before the endpoints of $\beta$.
This is illustrated in Figure [\[fig-frob-std\]](#fig-frob-std){reference-type="ref" reference="fig-frob-std"}, where one boundary edge is shown, and the dashed line indicates the choice of the neighborhood. Note the movement of $\beta$ into $D^{-1}(\beta)$ does not pass through $\alpha$. Thus, ${|{\alpha\cap\beta}|}\equiv i_2(\alpha,D^{-1}_\ast(\beta))\pmod2$.
Looking at Figure [\[fig-frob-comm\]](#fig-frob-comm){reference-type="ref" reference="fig-frob-comm"}, the process of turning $\Phi_\zeta(\alpha)\beta$ into $\beta\Phi_\zeta(\alpha)$ involves two parts: crossing changes corresponding to $\alpha\cap\beta$ and height exchanges near the boundary. Each crossing change involves a factor of $\zeta^{2N}$ by skew-transparency [\[eq-skew-trans\]](#eq-skew-trans){reference-type="eqref" reference="eq-skew-trans"}. For height exchanges, note that in $\Phi_\zeta(\alpha)$, arcs ending on the boundary come in $N$ parallel copies. By applying the height exchange moves [\[eq-height-ex\]](#eq-height-ex){reference-type="eqref" reference="eq-height-ex"} repeatedly, we get $$\label{eq-heightex-N}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$N\mu$}{}\AND\equal{$+$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$N\mu$} (B)node{\footnotesize$+$};
}
\ifthenelse{\equal{$N\mu$}{a}}{
\draw[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\draw[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
}{
\begin{knot}
\strand[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\strand[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\ifthenelse{\(\equal{n}{n}\)}{\flipcrossings{1}}{}
\end{knot}
}
\path[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\path[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\end{tikzpicture}
\mathop{}\!
=\zeta^{-\mu N}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$+$}{}\AND\equal{$N\mu$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$+$} (B)node{\footnotesize$N\mu$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!
,\qquad
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$-$}{}\AND\equal{$N\mu$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$-$} (B)node{\footnotesize$N\mu$};
}
\ifthenelse{\equal{$-$}{a}}{
\draw[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\draw[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
}{
\begin{knot}
\strand[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\strand[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\ifthenelse{\(\equal{n}{n}\)}{\flipcrossings{1}}{}
\end{knot}
}
\path[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\path[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\end{tikzpicture}
\mathop{}\!
=\zeta^{\mu N}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$N\mu$}{}\AND\equal{$-$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$N\mu$} (B)node{\footnotesize$-$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!
,$$ where $N\mu$ indicates $N$ parallel strands, all with the state $\mu$. We can combine the relations above into a single one $$\label{eq-Frob-ex}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$N\mu$}{}\AND\equal{$\nu$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$N\mu$} (B)node{\footnotesize$\nu$};
}
\ifthenelse{\equal{$N\mu$}{a}}{
\draw[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\draw[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
}{
\begin{knot}
\strand[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\strand[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\ifthenelse{\(\equal{n}{n}\)}{\flipcrossings{1}}{}
\end{knot}
}
\path[edge] (C) ..controls ({\xw/2},\ya) and ({\xw/2},\yb).. (B);
\path[edge] (D) ..controls ({\xw/2},\yb) and ({\xw/2},\ya).. (A);
\end{tikzpicture}
\mathop{}\!
=\zeta^{-\mu\nu N}
\mathop{}\!
\begin{tikzpicture}[baseline=(ref.base)]
\tikzmath{\xw=0.9; \yh=0.9; \yd=0; \yu=0;}
\ifthenelse{\equal{}{<-}\OR\equal{->}{<-}}{\tikzmath{\yd=-0.1;}}{}
\ifthenelse{\equal{}{->}\OR\equal{->}{->}}{\tikzmath{\yu=0.1;}}{}
\tikzmath{\yt=\yh+\yu;}
\fill[gray!20] (0,\yd)rectangle(\xw,\yt);
\node(ref) at ({\xw/2},{\yh/2}) {\phantom{$-$}};
%draw walls
\begin{scope}[wall]
\ifthenelse{\equal{}{w}}{
\draw (0,\yd) --(0,\yt);
}{\ifthenelse{\equal{}{}}{}{
\draw[] (0,\yd) --(0,\yt);
}}
\ifthenelse{\equal{->}{w}}{
\draw (\xw,\yd) --(\xw,\yt);
}{\ifthenelse{\equal{->}{}}{}{
\draw[->] (\xw,\yd) --(\xw,\yt);
}}
\end{scope}
%draw states
\tikzmath{\ya=\yh/2+0.2; \yb=\yh/2-0.2;}
\path (0,\ya) coordinate (C) (0,\yb) coordinate (D);
\path (\xw,\ya) coordinate (A) (\xw,\yb) coordinate (B);
\ifthenelse{\equal{$\nu$}{}\AND\equal{$N\mu$}{}}{}{
\draw[right,inner sep=2pt] (A)node{\footnotesize$\nu$} (B)node{\footnotesize$N\mu$};
}
\draw[edge] (C) -- (A);
\draw[edge] (D) -- (B);
\end{tikzpicture}
\mathop{}\!
.$$ Here the $\nu=-$ case is obtained from the second equation of [\[eq-heightex-N\]](#eq-heightex-N){reference-type="eqref" reference="eq-heightex-N"} and skew-transparency. Applying this to $\Phi_\zeta(\alpha)\beta$ in the neighborhoods of boundary edges, we get the remaining factors. ◻
**Corollary 11**. *Let $$X_\zeta=\begin{cases}
\mathscr{S}_\epsilon(\mathfrak{S}),&d=1,\\
\mathscr{S}^\mathrm{ma}_\epsilon(\mathfrak{S}),&d=2,\\
\mathscr{S}^\mathrm{ev}_\epsilon(\mathfrak{S}),&d=4.
\end{cases}$$ Then $\Phi_\zeta(X_\zeta)\subset\mathscr{S}_\zeta(\mathfrak{S})$ is central.*
*Proof.* Suppose $\alpha\in X_\zeta$ and $\beta\in\mathscr{S}_\zeta(\mathfrak{S})$ are represented by diagrams.
Case 1 ($d=1$). Since $n=N$, the coefficient in [\[eq-Frob-comm\]](#eq-Frob-comm){reference-type="eqref" reference="eq-Frob-comm"} is always $1$. Therefore, $\Phi_\zeta(\alpha)$ commutes with $\beta$ without extra assumptions. (See also [@KQ Theorem 1.2])
Case 2 ($d=2$). Since $n=2N$, we only need to determine $c(\alpha,\beta)\bmod2$. Thus, the $i_2$ term in [\[eq-Frob-comm\]](#eq-Frob-comm){reference-type="eqref" reference="eq-Frob-comm"} has no effect. In the other term, all $\delta_e(\alpha)$ are the same $\bmod2$ because of the matching condition, so this common value can be factored out of the sum. On the other hand, for any diagram $\beta$, the $\bmod2$ sum of the boundary gradings is zero, since it agrees with the $\bmod2$ count of endpoints, which is even because every arc component of $\beta$ has two endpoints. Thus, $c(\alpha,\beta)$ is even when $\alpha$ is matching, which means $\Phi_\zeta(\alpha)$ commutes with $\beta$.
Case 3 ($d=4$). The $i_2$ term in [\[eq-Frob-comm\]](#eq-Frob-comm){reference-type="eqref" reference="eq-Frob-comm"} vanishes since an even $\alpha$ has trivial $\bmod2$ homology. In addition, $\delta_e(\alpha)$ is divisible by 4 by definition. Hence, $c(\alpha,\beta)$ is divisible by $4$, which means $\Phi_\zeta(\alpha)$ commutes with $\beta$. ◻
**Lemma 12**. *Suppose $e_1,\dotsc,e_r$ are the boundary edges on a boundary component $C$ with $r$ ideal points, ordered consecutively with an arbitrary starting edge.*
1. *If $r$ is even, the element $X_{\hat{e}_1}^kX_{\hat{e}_2}^{n-k}\dotsm X_{\hat{e}_r}^{n-k}$ is central for $0\le k\le n$.*
2. *If $r$ is odd, the element $X_{e_1}^NX_{e_2}^N\dotsm X_{e_r}^N=\Phi_\zeta(X_{e_1}X_{e_2}\dotsm X_{e_r})$ is central.*
If $d=1,2$, let $B_\zeta$ be the set of the first type of elements. If $d=4$, let $B_\zeta$ be the set of both types of elements. The distinction comes from the proof of Lemma [Lemma 14](#lemma-lt-cancel){reference-type="ref" reference="lemma-lt-cancel"}.
*Remark 1*. Since we use positively ordered diagrams, a power of an arc such as $X_{\hat{e}_1}^k$ is not represented by the simple diagram consisting of parallel copies of the arc. However, by the height exchange moves, the two differ only by a power of $q=\zeta$. We will ignore this difference because it does not affect centrality.
*Proof.* Let $\alpha\in\mathscr{S}_\zeta(\mathfrak{S})$ be an arbitrary diagram. We want to show the elements above commute with $\alpha$.
Consider an element $\beta$ of the first type. By [@LYSL2 Lemma 4.5], for a boundary edge $e\in\mathcal{E}_\partial$ $$X_{\hat{e}}\alpha=\zeta^{\delta_e(\alpha)+\delta_{e'}(\alpha)}\alpha X_{\hat{e}}.$$ Here $e'$ is the edge counterclockwise to $e$, or equivalently, the edge other than $e$ with an endpoint of $X_{\hat{e}}$. Therefore, $$\begin{aligned}
\beta\alpha&=(X_{\hat{e}_1}^kX_{\hat{e}_2}^{n-k}\dotsm X_{\hat{e}_r}^{n-k})\alpha\\
&=(\zeta^{\delta_{e_r}(\alpha)+\delta_{e'_r}(\alpha)})^{n-k}X_{\hat{e}_1}^kX_{\hat{e}_2}^{n-k}\dotsm \alpha X_{\hat{e}_r}^{n-k}=\dotsb\\
&=
(\zeta^{\delta_{e_1}(\alpha)+\delta_{e'_1}(\alpha)})^{k}
\dotsm
(\zeta^{\delta_{e_r}(\alpha)+\delta_{e'_r}(\alpha)})^{n-k}
\alpha(X_{\hat{e}_1}^kX_{\hat{e}_2}^{n-k}\dotsm X_{\hat{e}_r}^{n-k}).\end{aligned}$$ Each $\delta_{e_i}$ appears twice, once with multiplicity $k$ and once with multiplicity $n-k$: once for the edge $e_i$ itself and once as the primed edge $e'_j$ of an adjacent edge $e_j$. Thus, the coefficient in the last line simplifies to $\zeta^{n\sum_i\delta_{e_i}(\alpha)}=1$, which means $\beta$ commutes with $\alpha$.
Now consider an element $\beta=\Phi_\zeta(\beta')$ of the second type, where $\beta'=X_{e_1}X_{e_2}\dotsm X_{e_r}\in\mathscr{S}_\epsilon(\mathfrak{S})$. By Lemma [Lemma 10](#lemma-Frob-comm){reference-type="ref" reference="lemma-Frob-comm"}, we need to show $c(\beta',\alpha)\equiv0\pmod4$. First note that $\beta'$ is homologous to the boundary component $C$. Thus, $i_2(\beta',D_\ast^{-1}(\alpha))$ is simply the number of endpoints of $\alpha$ on $C$. Next consider the sum over boundary edges. Clearly $\delta_e(\beta')=2$ if $e$ is on $C$ and zero otherwise, so we have $$c(\beta',\alpha)=2{|{\alpha\cap C}|}-2\sum_{e\subset C}\delta_e(\alpha),$$ which is 4 times the number of $-$ states on $C$. Thus, $c(\beta',\alpha)$ is a multiple of $4$, so $\beta$ commutes with $\alpha$. ◻
## Center at a root of unity
We are finally ready to determine the center at a root of unity.
**Theorem 13**. *Suppose $\mathfrak{S}$ is connected, triangulable, and has nonempty boundary. The center of $\mathscr{S}_q(\mathfrak{S})$ at $q=\zeta$ is $$Z_\zeta=\Phi_\zeta(X_\zeta)[\mathring{\mathcal{P}}][B_\zeta^{-1}]\cap\mathscr{S}_\zeta(\mathfrak{S}),$$ where $[\mathring{\mathcal{P}}]$ denotes adjoining the peripheral curves $X_v$, $v\in\mathring{\mathcal{P}}$, and $B_\zeta$ is the set of central elements defined in Lemma [Lemma 12](#lemma-central-boundary){reference-type="ref" reference="lemma-central-boundary"}. In other words, the center $Z_\zeta$ is spanned by elements $\gamma$ such that $\gamma\beta$ is of the form $c\Phi_\zeta(\gamma')$, where $c\in R[\mathring{\mathcal{P}}]$, $\gamma'\in X_\zeta$, and $\beta$ is a product of elements in $B_\zeta$.*
*If the order of $\zeta$ is odd ($d=1$), then the inverse on $B_\zeta$ can be removed. In other words, the center is the subalgebra of $\mathscr{S}_\zeta(\mathfrak{S})$ generated by $\Phi_\zeta(\mathscr{S}_1(\mathfrak{S}))$, the peripheral curves $\mathring{\mathcal{P}}$, and $B_\zeta$.*
*Proof.* $Z_\zeta$ is central by the previous lemmas. Suppose $\alpha\in \mathscr{S}_\zeta(\mathfrak{S})$ is central. By Lemma [Lemma 14](#lemma-lt-cancel){reference-type="ref" reference="lemma-lt-cancel"}, there exists an element $\gamma\in Z_\zeta$ such that $\mathop{\mathrm{lt}}(\alpha)=\mathop{\mathrm{lt}}(\gamma)$. Then the element $\alpha-\gamma$ is central and has lower degree. Since the degree is defined using a well ordering on edge colorings, after repeating the process finitely many times, we can write $\alpha$ as a sum of elements in $Z_\zeta$. ◻
**Lemma 14**. *Suppose $\alpha\in\mathscr{S}_\zeta(\mathfrak{S})$ is central. Then there exists an element $\gamma\in Z_\zeta$ such that $\mathop{\mathrm{lt}}(\alpha)=\mathop{\mathrm{lt}}(\gamma)$.*
*Proof.* Using the same notation as in the proof of Theorem [Theorem 9](#thm-center-generic){reference-type="ref" reference="thm-center-generic"}, let $\bar{k}=(k,2\hat{k})$ be the degree of $\alpha$, and let $k_\partial:=J^Tk$. The same argument leads to the following $\bmod n$ version of [\[eq-center-generic\]](#eq-center-generic){reference-type="eqref" reference="eq-center-generic"}: $$\label{eqn-center}
\begin{split}
P_+Jk_\partial-2k-2P_+J\hat{k}&\equiv0\pmod{n},\\
2k_\partial&\equiv0\pmod{n}.
\end{split}$$
**Case 1** ($d=1$.) Since $n=N$ is odd in this case, the relations are reduced to $$\label{eqn-center-1}
\begin{split}
k+P_+J\hat{k}&\equiv0\pmod{N},\\
k_\partial&\equiv0\pmod{N}.
\end{split}$$ Just as in the proof of Theorem [Theorem 9](#thm-center-generic){reference-type="ref" reference="thm-center-generic"}, by multiplying the first relation by $J^T$, we can show that if $a$ and $b$ are adjacent boundary edges, then $\hat{k}(a)+\hat{k}(b)$ is divisible by $N$. If a boundary component carries $r$ ideal points, then by going around that component, $\hat{k}(a)\equiv(-1)^r\hat{k}(a)\pmod{N}$. Since $N$ is odd, nontrivial $\bmod{N}$ solutions exist only when $r$ is even. Let $$\beta=\prod_{e\in\mathcal{E}_\partial}X_{\hat{e}}^{-\hat{k}(e)\bmod{n}}.$$ Here and in the rest of the proof, $\bmod{n}$ denotes the operation that takes the remainder of the division in the range $0,1,\dots,n-1$ whenever an integer is expected. The element $\beta$ is a product of elements in $B_\zeta$ if ordered correctly, and $\alpha\beta$ is still central since $\beta$ is. Its degree $\bar{k}'=(k',2\hat{k}')$ satisfies the same relations, but now $\hat{k}'\equiv0\pmod{N}$, which implies $k'\equiv0\pmod{N}$ as well. Let $\gamma'\in\mathscr{S}_\epsilon(\mathfrak{S})$ be the basis element without peripheral components corresponding to the coloring $\bar{k}'/N$. Then $\alpha\beta$ has the same degree as $\Phi_\zeta(\gamma')$, so $\mathop{\mathrm{lt}}(\alpha\beta)=\mathop{\mathrm{lt}}(c\Phi_\zeta(\gamma'))$ for some $c\in R[\mathring{\mathcal{P}}]$.
By [@LYSL2 Lemma 4.5], when a diagram $\alpha_0$ is multiplied by $X_{\hat{e}}$, assumed to be disjoint from $\alpha_0$ by an isotopy, the result is a scalar multiple of the diagram $\alpha_0\cup X_{\hat{e}}$. Thus, if $\alpha_0$ is the diagram for $\mathop{\mathrm{lt}}(\alpha)$, then the diagram of $\mathop{\mathrm{lt}}(\alpha\beta)$ is a simple diagram $\alpha_0\cup\beta$, which is also the diagram for $\mathop{\mathrm{lt}}(\Phi_\zeta(\gamma'))$ by construction. By Theorem [Theorem 8](#thm-thread){reference-type="ref" reference="thm-thread"}(2), the arc components in $\Phi_\zeta(\gamma')$ are the same in every term. Thus, $\Phi_\zeta(\gamma')$ contains a factor of $\beta$. Let $c\Phi_\zeta(\gamma')=\gamma\beta$. Then $\gamma\in Z_\zeta$ and $\mathop{\mathrm{lt}}(\gamma)=\mathop{\mathrm{lt}}(\alpha)$.
**Case 1, Stronger version**. Consider the diagram of $\mathop{\mathrm{lt}}(\alpha)$, which has edge coloring $\bar{k}$. Suppose two edges $a_1,a_2$ meet at an ideal point and form a corner, and $e$ is the boundary edge clockwise to both of them. First assume the corner is part of a triangle as in Figure [\[fig-pos-gen\]](#fig-pos-gen){reference-type="ref" reference="fig-pos-gen"}. Since $k\equiv -P_+J\hat{k}\pmod{N}$, from [\[eq-PJk\]](#eq-PJk){reference-type="eqref" reference="eq-PJk"}, we get $$\begin{gathered}
k(a_1)\equiv -\hat{k}(e')-\hat{k}(e_1),\qquad
k(a_2)\equiv -\hat{k}(e')-\hat{k}(e_2),\qquad
k(a)\equiv -\hat{k}(e_1)-\hat{k}(e_2)\pmod{N}.\\
k(a_1)+k(a_2)-k(a)\equiv -2\hat{k}(e')\equiv 2\hat{k}(e)\pmod{N}.\end{gathered}$$ It is well known that $k(a_1)+k(a_2)-k(a)$ is twice the number of corner arcs between $a_1$ and $a_2$. Cancelling the $2$ from the last equation, we see that the number of corner arcs is at least $(\hat{k}(e)\bmod{N})$. A similar calculation can be done when $a_1=a_2$ bounds a punctured monogon with the same result. These corner arcs connect to form at least $(\hat{k}(e)\bmod{N})$ copies of $D(e)$.
There are at least $(\hat{k}(e)\bmod{N})$ $-$ states on $e$. Since, by definition, a leading term diagram is increasingly stated, the innermost $(\hat{k}(e)\bmod{N})$ copies of $D(e)$ must have $-$ states on $e$. On the other hand, the number of endpoints on $e'$ is $k(e')=k_\partial(e')\equiv0\pmod{N}$, and the number of $-$ states is $\hat{k}(e')\equiv-\hat{k}(e)\pmod{N}$. This means there are at least $(\hat{k}(e)\bmod{N})$ $+$ states on $e'$. Again by the increasingly stated condition, the same copies of $D(e)$ must be assigned $+$ states on $e'$. Therefore, these copies of $D(e)$ are of the form $X_{\hat{e}}^{\hat{k}(e)\bmod{N}}$. Doing this for every boundary edge $e$, we see that the diagram of $\mathop{\mathrm{lt}}(\alpha)$ contains the diagram of $$\beta_1=\prod_{e\in\mathcal{E}_\partial}X_{\hat{e}}^{\hat{k}(e)\bmod{n}}.$$ Now let $\alpha'$ be the diagram obtained by removing $\beta_1$ from the diagram of $\mathop{\mathrm{lt}}(\alpha)$, and let $\bar{k}'=\deg\alpha'$. The rest of the proof is essentially the same as the weaker version.
**Case 2** ($d=2$.) In this case $n=2N$ and $N$ is odd. Thus, we can consider $\bmod N$ and $\bmod2$ separately to obtain $$\label{eqn-center-2}
\begin{split}
k+P_+J\hat{k}&\equiv0\pmod{N},\\
P_+Jk_\partial&\equiv0\pmod{2},\\
k_\partial&\equiv0\pmod{N}.
\end{split}$$ These relations have a similar form to those of the previous case. Thus, the same argument produces an element $\gamma'\in\mathscr{S}_\epsilon(\mathfrak{S})$ with degree $\bar{k}'/N$, a product $\beta$ of elements in $B_\zeta$, an element $\gamma\in\mathscr{S}_\zeta(\mathfrak{S})$, and $c\in R[\mathring{\mathcal{P}}]$ such that $c\Phi_\zeta(\gamma')=\gamma\beta$ and $\mathop{\mathrm{lt}}(\gamma)=\mathop{\mathrm{lt}}(\alpha)$.
By construction, $\bar{k}'$ satisfies the same relations [\[eqn-center-2\]](#eqn-center-2){reference-type="eqref" reference="eqn-center-2"}. If two boundary edges $b_1,b_2$ are connected by an edge $e\in\mathcal{E}$ as in Figure [\[fig-matching\]](#fig-matching){reference-type="ref" reference="fig-matching"}, then by [\[eq-PJk\]](#eq-PJk){reference-type="eqref" reference="eq-PJk"}, $$k'(b_1)+k'(b_2)=(P_+Jk'_\partial)(e)\equiv0\pmod2.$$ Then, using connectedness, $k'(b_1)+k'(b_2)$ is even for any pair of boundary edges $b_1,b_2$. Since $N$ is odd, the same is true when divided by $N$. This means $\gamma'$ is matching, so $\gamma\in Z_\zeta$.
**Case 3** ($d=4$.) Since $n=4N$, the relations can be rewritten as $$\label{eqn-center-4}
\begin{split}
P_+Jk_\delta-k&\equiv0\pmod{2N},\\
k_\partial&\equiv0\pmod{2N},
\end{split}$$ where $k_\delta=\frac{1}{2}k_\partial-\hat{k}$ is integral by the second relation. It is also half the boundary gradings. Following the same strategy, we multiply the first relation by $J^T$ to get $$(J^T P_+J)k_\delta\equiv0\pmod{2N}.$$ Since the modulus is even, there are more solutions than in the previous cases.
Let $C_1,\dotsc,C_b$ be the boundary components of $\mathfrak{S}$. Suppose $C_j$ has $r_j$ ideal points. Order the edges of $C_j$ consecutively as $e_1,\dotsc,e_{r_j}$. As in the previous cases, $k_\delta(e_i)+k_\delta(e_{i+1})\equiv0\pmod{2N}$, and $k_\delta(e_i)\equiv(-1)^{r_j}k_\delta(e_i)\pmod{2N}$. If $r_j$ is odd, then $2k_\delta(e_i)\equiv0\pmod{2N}$. Thus, either $k_\delta(e_i)\equiv0\pmod{2N}$ for all $i$, or $k_\delta(e_i)\equiv N\pmod{2N}$ for all $i$. Let $\beta_j=1$ if the first condition is true, or let $\beta_j=X_{\hat{e}_1}^N\dotsm X_{\hat{e}_{r_j}}^N\in B_\zeta$ if the second is true. If $r_j$ is even, then the same construction as in the previous cases with $\hat{k}$ replaced by $-k_\delta$ defines an element $\beta_j\in B_\zeta$. Finally, let $\beta_o$ be the product of all $\beta_j$ with $r_j$ odd, $\beta_e$ be the product of all $\beta_j$ with $r_j$ even, and $\beta=\beta_o\beta_e$.
Let $\bar{k}''=\deg(\alpha\beta_e)$, which is divisible by $N$ just as in the previous cases. Let $\gamma''\in\mathscr{S}_\epsilon(\mathfrak{S})$ be the basis element without peripheral curves corresponding to the coloring $\bar{k}''/N$. Then $\mathop{\mathrm{lt}}(\alpha\beta_e)=\mathop{\mathrm{lt}}(c\Phi_\zeta(\gamma''))$ for some $c\in R[\mathring{\mathcal{P}}]$. Again by construction, $\Phi_\zeta(\gamma'')$ contains a factor of $\beta_e$, so we can write $c\Phi_\zeta(\gamma'')=\gamma\beta_e$ for some $\gamma\in\mathscr{S}_\zeta(\mathfrak{S})$. It then follows that $\mathop{\mathrm{lt}}(\gamma)=\mathop{\mathrm{lt}}(\alpha)$.
Note that $\beta_o$ has the form $\Phi_\zeta(\beta'_o)$ where $\beta'_o\in\mathscr{S}_\epsilon(\mathfrak{S})$ is the corresponding product of elements in $B_\epsilon$. Let $\gamma'=\gamma''\beta'_o$. Then $$c\Phi_\zeta(\gamma')=c\Phi_\zeta(\gamma'')\Phi_\zeta(\beta'_o)
=\gamma\beta_e\beta_o=\gamma\beta.$$ To show $\gamma\in Z_\zeta$, we just need to show that $\gamma'$ is even. Unlike the previous cases, $\gamma'$ is not necessarily a single diagram because of the factor $\beta_o$. However, $\gamma'$ is still a product of basis elements, so it is homogeneous in both gradings. Therefore, it suffices to check that the leading term is even. By construction, $$\bar{k}':=\deg(\gamma')=\frac{1}{N}(\deg\alpha+\deg\beta_o+\deg\beta_e)$$ satisfies $$k'_\delta\equiv0\pmod{2},\qquad\text{hence }
k'-P_+Jk'_\delta\equiv k'\equiv0\pmod{2}.$$ The first implies that the boundary grading is divisible by 4. On the other hand, $k'\equiv0\pmod{2}$ translates to the even intersection condition. Thus, $\gamma'$ is even. ◻
# Dimension over the center {#sec-dim}
## Statements of the results
Assume $\mathfrak{S}$ is connected, triangulable, and has nonempty boundary. Define the following parameters.
1. $g$ is the genus of the surface.
2. $p={|{\mathring{\mathcal{P}}}|}$ is the number of interior ideal points.
3. $v={|{\mathcal{P}_\partial}|}$ is the number of boundary ideal points or boundary edges.
4. $b$ is the number of boundary components of $\overline{\mathfrak{S}}$.
5. $b_2$ is the number of boundary components with an even number of ideal points.
**Lemma 15**. *Let $$r(\mathfrak{S})=v-\chi(\overline{\mathfrak{S}})=v+2g-2+b.$$ Then $${|{\bar{\mathcal{E}}}|}=3r(\mathfrak{S})+2p,\qquad
{|{\mathcal{E}}|}={|{\bar{\mathcal{E}}}|}-v.$$*
*Proof.* This is a standard Euler characteristic calculation. A triangulation is the 1-skeleton of a CW-complex structure of $\overline{\mathfrak{S}}$. It has $v$ vertices and ${|{\mathcal{E}}|}$ edges. Let $f$ be the number of 2-cells, $p$ of which are monogons with the rest being triangles. Then $$v-{|{\mathcal{E}}|}+f=\chi(\overline{\mathfrak{S}}),\qquad
3(f-p)+p=2{|{\mathcal{E}}|}-v.$$ Solving the equations proves the lemma. ◻
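In detail, the second equation gives $f=\tfrac{1}{3}\left(2{|{\mathcal{E}}|}-v+2p\right)$. Substituting this into the Euler characteristic relation and using $r(\mathfrak{S})=v-\chi(\overline{\mathfrak{S}})$ yields $${|{\mathcal{E}}|}=2v+2p-3\chi(\overline{\mathfrak{S}})=3r(\mathfrak{S})+2p-v,$$ which, combined with ${|{\bar{\mathcal{E}}}|}={|{\mathcal{E}}|}+v$, gives both stated formulas.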
By [@LeTriang Proposition 4.4], $\mathscr{S}_q(\mathfrak{S})$ is a domain. Therefore, its center $Z$ has a field of fractions, denoted by $\tilde{Z}$. Then $\mathscr{S}_q(\mathfrak{S})\otimes_Z\tilde{Z}$ is a vector space over $\tilde{Z}$, whose dimension is called the **dimension** of $\mathscr{S}_q(\mathfrak{S})$ over its center and denoted $\dim_Z\mathscr{S}_q(\mathfrak{S})$.
**Lemma 16**. *At a root of unity $q=\zeta$, $\mathscr{S}_\zeta(\mathfrak{S})$ is finitely generated as a module over its center $Z$. Consequently, $\dim_Z\mathscr{S}_\zeta(\mathfrak{S})$ is finite.*
*Proof.* By [@LYSL2 Theorem 6.7], there exist one-component simple diagrams $\alpha_1,\dotsc,\alpha_s\in\mathscr{S}_\zeta(\mathfrak{S})$ such that elements of the form $\alpha_1^{i_1}\dotsm\alpha_s^{i_s}$ span $\mathscr{S}_\zeta(\mathfrak{S})$. Since $\Phi_\zeta(\alpha_j^d)$ is central and has degree $N\deg\alpha_j^d=\deg\alpha_j^n$, we can write $$\alpha_j^n=c_j\Phi_\zeta(\alpha_j^d)+(\text{lower degree terms})$$ where $c_j\in R[\mathring{\mathcal{P}}]$, so the first term is in $Z$. Therefore, an element $\alpha_1^{i_1}\dotsm\alpha_s^{i_s}$ with any exponent at least $n$ can be written as an element of $Z$ plus lower degree terms. Since the degrees are well-ordered, any element is a finite sum of elements of $Z$ and $R[\mathring{\mathcal{P}}]$-multiples of $\alpha_1^{i_1}\dotsm\alpha_s^{i_s}$ with all exponents less than $n$. Therefore, as a $Z$-module, $\mathscr{S}_\zeta(\mathfrak{S})$ is generated by at most $n^s$ elements. ◻
**Theorem 17**. *At a root of unity $q=\zeta$, the dimension of $\mathscr{S}_\zeta(\mathfrak{S})$ over its center is $$D_\zeta=\begin{cases}
N^{{|{\bar{\mathcal{E}}}|}-b_2},&d=1,\\
2^{2\lfloor\frac{v-1}{2}\rfloor}N^{{|{\bar{\mathcal{E}}}|}-b_2},&d=2,\\
2^{2g+2v-2}N^{{|{\bar{\mathcal{E}}}|}-b_2},&d=4.
\end{cases}$$*
Although it is not immediately obvious, ${|{\bar{\mathcal{E}}}|}-b_2$ is even. Thus, $D_\zeta$ is always a square.
## Proof of Theorem [Theorem 17](#thm-dim-z){reference-type="ref" reference="thm-dim-z"} {#proof-of-theorem-thm-dim-z}
Recall $\langle\Lambda\rangle\subset\mathbb{Z}^{\bar\mathcal{E}}$ is the subgroup of balanced vectors. Let $\Lambda_\zeta\subset\langle\Lambda\rangle$ be the subgroup of balanced solutions to [\[eqn-center\]](#eqn-center){reference-type="eqref" reference="eqn-center"}. Define the **residue group** at $q=\zeta$ to be $$R_\zeta=\langle\Lambda\rangle/\Lambda_\zeta.$$
**Lemma 18**. *${|{R_\zeta}|}=D_\zeta$.*
The proof of Lemma [Lemma 18](#lemma-res-size){reference-type="ref" reference="lemma-res-size"} is given in the next subsection.
The degree can be considered as a map $\deg:\mathscr{S}_\zeta(\mathfrak{S})\setminus\{0\}\to\langle\Lambda\rangle$. Clearly $\deg(Z)\subset \Lambda_\zeta$. By composing with the quotient map, we obtain $\deg_\zeta:\mathscr{S}_\zeta(\mathfrak{S})\setminus\{0\}\to R_\zeta$.
**Lemma 19**. *$\deg_\zeta$ is surjective.*
*Proof.* The image of $\deg_\zeta$ is a submonoid of $R_\zeta$. Thus, it is also a subgroup since $R_\zeta$ is finite. By Lemma [Lemma 5](#lemma-bal-basis){reference-type="ref" reference="lemma-bal-basis"}, the image of $\deg$ generates $\langle\Lambda\rangle$. Thus, the image of $\deg_\zeta$ generates $R_\zeta$, which implies $\deg_\zeta$ is surjective. ◻
By [@FKLDimension Corollary 5.2, Theorem 6.1], Theorem [Theorem 17](#thm-dim-z){reference-type="ref" reference="thm-dim-z"} follows from Lemmas [Lemma 18](#lemma-res-size){reference-type="ref" reference="lemma-res-size"} and [Lemma 19](#lemma-deg-sur){reference-type="ref" reference="lemma-deg-sur"}.
## Proof of Lemma [Lemma 18](#lemma-res-size){reference-type="ref" reference="lemma-res-size"} {#proof-of-lemma-lemma-res-size}
Let $\Lambda^\mathrm{ma}\subset\langle\Lambda\rangle$ be the subgroup of matching vectors, that is, balanced vectors $\bar{k}$ such that $k(e)\equiv k(e')\pmod{2}$ for any pair of boundary edges $e,e'$.
Using the notation of Lemma [Lemma 12](#lemma-central-boundary){reference-type="ref" reference="lemma-central-boundary"}, for a boundary component $C$ with an even number $r$ of ideal points, let $$\bar{k}_C=\sum_{i=1}^r(-1)^i\bar{k}_{\hat{e}_i},$$ where $\bar{k}_{\hat{e}_i}\in\langle\Lambda\rangle$ is the edge coloring of $X_{\hat{e}_i}$. Up to $\bmod n$, this is the edge coloring of the first type of central elements in Lemma [Lemma 12](#lemma-central-boundary){reference-type="ref" reference="lemma-central-boundary"} with $k=n-1$. By construction, $\bar{k}_C$ is balanced and solves [\[eqn-center\]](#eqn-center){reference-type="eqref" reference="eqn-center"}. Let $\Lambda_B$ be the subgroup generated by all such vectors.
**Lemma 20**. *${|{\mathbb{Z}^{\bar{\mathcal{E}}}/\langle\Lambda\rangle}|}=2^{{|{\bar\mathcal{E}}|}-r(\mathfrak{S})}$.*
*Proof.* By Proposition [Proposition 4](#prob-bal){reference-type="ref" reference="prob-bal"}, $(2\mathbb{Z})^{\bar\mathcal{E}}\subset\langle\Lambda\rangle\subset\mathbb{Z}^\mathcal{E}\times(2\mathbb{Z})^{\hat\mathcal{E}_\partial}$. Consider the quotient $\langle\Lambda\rangle/(2\mathbb{Z})^{\bar\mathcal{E}}$. By the second inclusion, each element of the quotient is uniquely determined by the $\bmod2$ reduction of the $\mathbb{Z}^\mathcal{E}$ components, which can be interpreted as a CW cochain $\mathcal{E}\to\mathbb{Z}/2$. Then the balanced condition translates exactly to the cocycle condition. Thus, we have an isomorphism $$\langle\Lambda\rangle/(2\mathbb{Z})^{\bar\mathcal{E}}\cong Z^1(\overline{\mathfrak{S}};\mathbb{Z}/2).$$ Consider the coboundary map $$\delta:C^0(\overline{\mathfrak{S}};\mathbb{Z}/2)\to Z^1(\overline{\mathfrak{S}};\mathbb{Z}/2).$$ By definition, $$\ker\delta=Z^0(\overline{\mathfrak{S}};\mathbb{Z}/2)=H^0(\overline{\mathfrak{S}};\mathbb{Z}/2),\quad \mathop{\mathrm{coker}}\delta=H^1(\overline{\mathfrak{S}};\mathbb{Z}/2).$$ Thus, over $\mathbb{Z}/2$, $$\dim Z^1=\dim C^0-\dim H^0+\dim H^1=v-1+(2g+b-1)=r(\mathfrak{S}).$$ Therefore, $${|{\mathbb{Z}^{\bar\mathcal{E}}/\langle\Lambda\rangle}|}
=\frac{{|{\mathbb{Z}^{\bar\mathcal{E}}/(2\mathbb{Z})^{\bar\mathcal{E}}}|}}{{|{\langle\Lambda\rangle/(2\mathbb{Z})^{\bar\mathcal{E}}}|}}
=\frac{2^{{|{\bar\mathcal{E}}|}}}{2^{r(\mathfrak{S})}}.\qedhere$$ ◻
**Lemma 21**. *$\Lambda_B$ is a direct summand of $\langle\Lambda\rangle$.*
*If $d=1$, $\Lambda_\zeta=\Lambda_B+N\langle\Lambda\rangle$. If $d=2$, $\Lambda_\zeta=\Lambda_B+N\Lambda^\mathrm{ma}$.*
*Proof.* By Lemma [Lemma 5](#lemma-bal-basis){reference-type="ref" reference="lemma-bal-basis"}, $\langle\Lambda\rangle$ is free with basis $\bar{k}_a,a\in\bar\mathcal{E}$. For each boundary component $C$ with an even number of ideal points, we can replace one of the basis elements of $\langle\Lambda\rangle$ with the corresponding generator of $\Lambda_B$. This shows that $\Lambda_B$ is a direct summand.
The last two statements follow from the argument in the proof of Theorem [Theorem 13](#thm-center-root-1){reference-type="ref" reference="thm-center-root-1"}. ◻
**Lemma 22**. *${|{\langle\Lambda\rangle/\Lambda^\mathrm{ma}}|}=2^{2\lfloor\frac{v-1}{2}\rfloor}$.*
*Proof.* Consider the $\bmod{2}$ boundary grading map $\Delta_2:\langle\Lambda\rangle\to(\mathbb{Z}/2)^{\mathcal{E}_\partial}$ given by $$\Delta_2(\bar{k})(e)=\bar{k}(e)\bmod{2}.$$ The image of $\Delta_2$ consists of vectors whose components sum to $0$, which is an index $2$ subgroup of $(\mathbb{Z}/2)^{\mathcal{E}_\partial}$. Thus, ${|{\mathop{\mathrm{im}}\Delta_2}|}=2^{v-1}$.
Let $\mathbf{1}\subset(\mathbb{Z}/2)^{\mathcal{E}_\partial}$ be the subgroup generated by the all $1$ vector. Then $\Lambda^\mathrm{ma}=\Delta_2^{-1}(\mathbf{1})$ by definition. Thus, the induced map $$\bar{\Delta}_2:\langle\Lambda\rangle/\Lambda^\mathrm{ma}\to\mathop{\mathrm{im}}\Delta_2/(\mathop{\mathrm{im}}\Delta_2\cap\mathbf{1})$$ is an isomorphism. The intersection $\mathop{\mathrm{im}}\Delta_2\cap\mathbf{1}$ is $\mathbf{1}$ when $v$ is even, and it is trivial when $v$ is odd. Thus, ${|{\langle\Lambda\rangle/\Lambda^\mathrm{ma}}|}=2^{v-2}$ when $v$ is even, and ${|{\langle\Lambda\rangle/\Lambda^\mathrm{ma}}|}=2^{v-1}$ when $v$ is odd. ◻
**Lemma 23**. *When $d=4$, ${|{\Lambda_\zeta/(n\mathbb{Z})^{\bar{\mathcal{E}}}}|}=2^{{|{\mathcal{E}}|}+b}N^{b_2}$.*
*Proof.* It is clear that $(n\mathbb{Z})^{\bar{\mathcal{E}}}\subset \Lambda_\zeta$, so the quotient makes sense. By Lemma [Lemma 5](#lemma-bal-basis){reference-type="ref" reference="lemma-bal-basis"}, a solution of the form $\bar{k}=(k,2\hat{k})$ to [\[eqn-center\]](#eqn-center){reference-type="eqref" reference="eqn-center"}, or equivalently [\[eqn-center-4\]](#eqn-center-4){reference-type="eqref" reference="eqn-center-4"}, is always balanced. Thus, we only need to count the $\bmod n$ solutions of the above form to [\[eqn-center-4\]](#eqn-center-4){reference-type="eqref" reference="eqn-center-4"}. The solutions can be generated in the following way.
1. For each boundary edge $e$, $k(e)=k_\partial(e)$ can be $0$ or $2N$.
2. For each boundary component, the boundary grading $2k_\delta=k_\partial+2\hat{k}$ is determined by a single edge, as argued in the proof of Theorem [Theorem 13](#thm-center-root-1){reference-type="ref" reference="thm-center-root-1"}.
1. If there is an even number of ideal points, $2k_\delta$ can be any of the $2N$ even numbers for a fixed edge, and the rest are determined.
2. If there is an odd number of ideal points, $2k_\delta$ can be either $0$ or $2N$.
3. For each boundary edge $e$, $2\hat{k}(e)$ is determined by the chosen $k_\partial(e)$ and $2k_\delta(e)$.
4. For each internal edge $e$, $\bar{k}(e)\bmod{2N}$ is determined by the boundary gradings $k_\delta\bmod{2N}$. Thus, there are two choices $\bmod{n}$.
Thus, the count is $${|{\Lambda_\zeta/(n\mathbb{Z})^{\bar{\mathcal{E}}}}|}
=2^v\cdot\left((2N)^{b_2}\cdot2^{b-b_2}\right)\cdot2^{{|{\mathcal{E}}|}-v}
=2^{{|{\mathcal{E}}|}+b}N^{b_2}.\qedhere$$ ◻
*Proof of Lemma [Lemma 18](#lemma-res-size){reference-type="ref" reference="lemma-res-size"}.* When $d=1$, $$R_\zeta\cong\frac{\langle\Lambda\rangle/\Lambda_B}{\Lambda_\zeta/\Lambda_B}
=\frac{\langle\Lambda\rangle/\Lambda_B}{N\langle\Lambda\rangle/\Lambda_B}.$$ By Lemma [Lemma 5](#lemma-bal-basis){reference-type="ref" reference="lemma-bal-basis"}, $\langle\Lambda\rangle$ is a free abelian group of rank ${|{\bar\mathcal{E}}|}$. Since $\Lambda_B$ is a direct summand of $\langle\Lambda\rangle$ of rank $b_2$, $\langle\Lambda\rangle/\Lambda_B$ is free with rank ${|{\bar{\mathcal{E}}}|}-b_2$. Therefore, ${|{R_\zeta}|}=N^{{|{\bar{\mathcal{E}}}|}-b_2}=D_\zeta$.
When $d=2$, the same argument using the finite index subgroup $\Lambda^\mathrm{ma}$ in place of $\langle\Lambda\rangle$ shows ${|{\Lambda^\mathrm{ma}/\Lambda_\zeta}|}=N^{{|{\bar{\mathcal{E}}}|}-b_2}$. Thus, $${|{R_\zeta}|}={|{\langle\Lambda\rangle/\Lambda^\mathrm{ma}}|}{|{\Lambda^\mathrm{ma}/\Lambda_\zeta}|}
=2^{2\lfloor\frac{v-1}{2}\rfloor}N^{{|{\bar{\mathcal{E}}}|}-b_2}=D_\zeta.$$
When $d=4$, we have $(n\mathbb{Z})^{\bar{\mathcal{E}}}\subset \Lambda_\zeta\subset\langle\Lambda\rangle\subset\mathbb{Z}^{\bar{\mathcal{E}}}$. Then $$\begin{aligned}
{|{R_\zeta}|}
&=\frac{{|{\mathbb{Z}^{\bar{\mathcal{E}}}/(n\mathbb{Z})^{\bar{\mathcal{E}}}}|}}{{|{\mathbb{Z}^{\bar{\mathcal{E}}}/\langle\Lambda\rangle}|}{|{\Lambda_\zeta/(n\mathbb{Z})^{\bar{\mathcal{E}}}}|}}
=\frac{n^{|{\bar{\mathcal{E}}}|}}{2^{{|{\bar\mathcal{E}}|}-r(\mathfrak{S})}\cdot2^{{|{\mathcal{E}}|}+b}N^{b_2}}\\
&=2^{2g+2v-2}N^{{|{\bar{\mathcal{E}}}|}-b_2}=D_\zeta.\qedhere\end{aligned}$$ ◻
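In the last equality, the exponents are collected as follows: writing $n^{{|{\bar{\mathcal{E}}}|}}=2^{2{|{\bar{\mathcal{E}}}|}}N^{{|{\bar{\mathcal{E}}}|}}$ and using ${|{\mathcal{E}}|}={|{\bar{\mathcal{E}}}|}-v$ together with $r(\mathfrak{S})=v+2g-2+b$, the exponent of $2$ is $$2{|{\bar{\mathcal{E}}}|}-\left({|{\bar{\mathcal{E}}}|}-r(\mathfrak{S})\right)-\left({|{\mathcal{E}}|}+b\right)={|{\bar{\mathcal{E}}}|}-{|{\mathcal{E}}|}+r(\mathfrak{S})-b=v+(v+2g-2+b)-b=2g+2v-2,$$ while the exponent of $N$ is ${|{\bar{\mathcal{E}}}|}-b_2$.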
| arxiv_math | {
"id": "2309.14713",
"title": "Center of the stated skein algebra",
"authors": "Tao Yu",
"categories": "math.QA math.GT",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
Quasiperiodic systems, related to irrational numbers, are space-filling structures without decay or translation invariance. How to accurately recover these systems, especially in non-smooth cases, presents a significant challenge in numerical computation. In this paper, we propose a new algorithm, the finite points recovery (FPR) method, which is applicable to both smooth and non-smooth cases, to address this challenge. The FPR method first establishes a homomorphism between the lower-dimensional definition domain of the quasiperiodic function and the higher-dimensional torus, then recovers the global quasiperiodic system by employing an interpolation technique with finite points in the definition domain without dimensional lifting. Furthermore, we develop accurate and efficient strategies for selecting finite points according to the arithmetic properties of irrational numbers. The corresponding mathematical theory, convergence analysis, and computational complexity analysis on choosing finite points are presented. Numerical experiments demonstrate the effectiveness and superiority of the FPR approach in recovering both smooth quasiperiodic functions and piecewise constant Fibonacci quasicrystals, whereas existing spectral methods encounter difficulties in accurately recovering non-smooth quasiperiodic functions.
address:
- "K. Jiang: Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China"
- "Q. Zhou: School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China"
- "P. Zhang: School of Mathematical Sciences, Peking University, Beijing 100871, China"
author:
- Kai Jiang\*
- Qi Zhou
- Pingwen Zhang\*
title: Accurately recover global quasiperiodic systems by finite points
---
[^1]
# Introduction
Quasiperiodic systems, related to irrational numbers, have attracted extensive attention due to their fascinating mathematical properties [@senechal1996quasicrystals; @bohr2018almost; @meyer2000algebraic; @penrose1974role; @steurer2009crystallography; @baake2013aperiodic]. Quasiperiodic behavior is widely observed in physics and materials sciences, such as many-body celestial systems, quasicrystals, incommensurate systems, polycrystalline materials, and quantum systems [@poincare1890probleme; @shechtman1984metallic; @cao2018unconventional; @sutton1995interfaces; @zeng2004supramolecular; @hofstadter1976energy]. Among all quasiperiodic systems, the non-smooth case is of particular interest, such as discrete Schrödinger operator with quasiperiodic potential, Fibonacci photonic quasicrystal, and discrete time quasicrystal [@avila2009ten; @damanik1999uniform; @merlin1985quasiperiodic; @vardeny2013optics; @tanese2014fractal; @giergiel2019discrete; @goblot2020emergence; @verbin2015topological].
Quasiperiodic systems pose significant challenges for numerical computation and the corresponding theoretical analysis, due to their space-filling order without decay. Several numerical methods have been developed to address quasiperiodic systems. A widely used method, the periodic approximation method (PAM) [@zhang2008efficient], employs periodic systems to approximate quasiperiodic systems over a finite domain, corresponding to using rational numbers to approximate irrational numbers in reciprocal space. PAM captures only part of a quasiperiodic system and inevitably introduces a rational approximation error, unless the period becomes infinite [@jiang2023on]. Numerical examples have demonstrated that the rational approximation error plays a dominant role in numerically computing quasiperiodic systems [@jiang2014numerical; @jiang2015stability]. An accurate algorithm is the quasiperiodic spectral method (QSM) [@jiang2018numerical]. Based on the Fourier-Bohr transformation, QSM expands quasiperiodic functions in trigonometric polynomials. Theoretical analysis has shown that QSM has exponential convergence for smooth cases [@jiang2022numerical]. However, when dealing with nonlinear problems, the computational cost of QSM becomes unaffordable due to the unavailability of the fast Fourier transform (FFT). Another accurate approach is the projection method (PM), which captures the essential feature of quasiperiodic systems, namely that they can be embedded into higher-dimensional periodic systems [@jiang2014numerical]. PM is an extension of the periodic Fourier pseudo-spectral method. The spectral collocation technique is employed to represent quasiperiodic functions by introducing the discrete Fourier-Bohr transformation. PM has exponential convergence for smooth systems and can utilize the FFT to reduce the computational complexity [@jiang2022numerical].
Existing Galerkin spectral approaches, especially PM, have made progress in numerically solving quasiperiodic systems, including quasicrystals [@jiang2014numerical; @jiang2015stability], incommensurate quantum systems [@xueyang2021numerical; @gao2023pythagoras], topological insulators [@wang2022effective], and grain boundaries [@cao2021computing; @jiang2022tilt]. However, none of these methods is suitable for solving non-smooth quasiperiodic problems. For example, consider the piecewise constant dielectric function of a 1D Fibonacci photonic quasicrystal (see Example [Example 37](#exa:fibonacci){reference-type="ref" reference="exa:fibonacci"} for details). Figure [\[fig:fibonacci6\]](#fig:fibonacci6){reference-type="ref" reference="fig:fibonacci6"} shows the numerical result obtained by the PM method, which is completely inconsistent with the exact value. Moreover, the Gibbs phenomenon appears in a neighbourhood of the jump discontinuity. Hence, accurately recovering global non-smooth quasiperiodic systems remains an open problem, which motivates the development of new numerical algorithms.
In this work, we pay attention to developing a new algorithm for recovering both smooth and non-smooth quasiperiodic systems. Our contributions are summarized as follows.
- We propose a new approach, finite points recovery (FPR) method, for accurately recovering arbitrary dimensional quasiperiodic systems. A homomorphism between the lower-dimensional definition domain of the quasiperiodic function and the higher-dimensional torus is established. Based on this homomorphism, FPR method recovers the global quasiperiodic system by employing interpolation technique with finite points in the definition domain without dimensional lifting.
- We classify quasiperiodic systems into two categories according to the arithmetic properties of irrational numbers: badly approximable systems and good approximable systems. For each category, we employ distinct strategies for finite point selection within the FPR method, to ensure the accuracy and efficiency in the recovery process.
- We provide a detailed exposition of the mathematical theory underlying the FPR method, along with rigorous proofs. Moreover, we present the convergence analysis of the algorithm and the computational complexity analysis on choosing finite points.
- We apply the FPR method to recover two classes of quasiperiodic systems, including smooth quasiperiodic functions and piecewise constant Fibonacci quasicrystals. Numerical experiments demonstrate the effectiveness and superiority of FPR approach in recovering the above two classes, while PM method fails to handle non-smooth systems.
The rest of this paper is organized as follows. In Section [2](#sec:pre){reference-type="ref" reference="sec:pre"}, we give necessary notations and preliminary knowledge. In Section [3](#sec_theo){reference-type="ref" reference="sec_theo"}, we establish a homomorphism between the lower-dimensional definition domain of the quasiperiodic function and the higher-dimensional torus. In Section [4](#sec:alg){reference-type="ref" reference="sec:alg"}, we propose the FPR method, analyze its convergence and computational complexity, and discuss the impact of the arithmetic properties of irrational numbers. In Section [5](#sec:num){reference-type="ref" reference="sec:num"}, we show the accuracy and superiority of the FPR method in handling both smooth and non-smooth quasiperiodic systems. In Section [6](#sec:con){reference-type="ref" reference="sec:con"}, we summarize our work and give an outlook on future work.
# Preliminaries {#sec:pre}
In this section, we introduce the requisite notations and preliminary knowledge. $\bm{I}_{n}$ is the identity matrix of order $n$. $\mathbb{N}_+$ denotes the set of all positive integers. The Cartesian product of two sets, $X$ and $Y$, denoted by $X\times Y$, is the set of all ordered pairs $(x,y)$, where $x$ and $y$ are elements of $X$ and $Y$, respectively. $X^d$ denotes the Cartesian product of $d$ copies of $X$. $\mathbb{T}^n=\mathbb{R}^n/\mathbb{Z}^n$ is the $n$-dimensional torus. $\rm{rank}_{\mathbb{R}}$ ($\rm{rank}_{\mathbb{Q}}$) denotes the rank of a set of vectors or a matrix over the number field $\mathbb{R}$ ($\mathbb{Q}$). $\dim_{\mathbb{R}}$ ($\dim_{\mathbb{Q}}$) denotes the dimension of a space over $\mathbb{R}$ ($\mathbb{Q}$). For a vector $\bm{\alpha}=(\alpha_i)_{i=1}^n\in \mathbb{R}^n$, $[\bm{\alpha}]$ denotes the componentwise floor (round down) of $\bm{\alpha}$, and the infinity norm of $\bm{\alpha}$ is defined by $\|\bm{\alpha}\|_{\infty}:=\max_{1\leq i\leq n}|\alpha_i|$. For a region $\mathcal{H}=\{\bm{s}=(s_i)^n_{i=1}:\alpha_i\leq s_i\leq \beta_i\}\subset \mathbb{R}^n$, the vector $\langle\mathcal{H}\rangle:=(\beta_i-\alpha_i)_{i=1}^n$ measures the size of $\mathcal{H}$ and $\langle\mathcal{H}\rangle_i$ denotes its $i$-th component.
For an $n$-dimensional periodic function $F(\bm{s})$, there exists a set of primitive vectors $\{\bm{a}_1,\cdots,\bm{a}_n\}$ that forms a basis of $\mathbb{R}^n$. The corresponding *Bravais lattice* is $$\mathcal{B}:=\{\bm{R}\in \mathbb{R}^n: \bm{R}=\ell_1\bm{a}_1+\cdots+\ell_n\bm{a}_n,~\bm{\ell}=(\ell_i)_{i=1}^n\in \mathbb{Z}^n\}.$$ For each $\bm{R}\in \mathcal{B}$, $F(\bm{s})$ satisfies $F(\bm{s}+\bm{R})=F(\bm{s})$. The *unit cell* of $F(\bm{s})$, denoted by $\Omega$, is the fundamental domain $$\Omega:=\{\bm{s}=w_1\bm{a}_1+\cdots+w_n\bm{a}_n: \bm{w}=(w_i)_{i=1}^n\in[0,1)^n\}.$$ There always exists an invertible linear transformation such that the basis $\{\bm{a}_1,\cdots,\bm{a}_n\}$ can be transformed into a standard orthonormal basis $\{\bm{e}_1,\cdots,\bm{e}_n\}$. This enables us to carry out all the theoretical analysis in this paper under the assumption that the unit cell is a cube $\Omega=[0,1)^n$.
Before we proceed, let us introduce some required definitions.
**Definition 1**. A matrix $\bm{P}\in \mathbb{R}^{d\times n}$ is the *projection matrix*, if it belongs to the set $\mathbb{P}^{d\times n}$ defined as $$\mathbb{P}^{d\times n}:=\{\bm{P}=(\bm{p}_1,\cdots,\bm{p}_n)\in\mathbb{R}^{d\times n}:\mathrm{rank}_{\mathbb{R}}(\bm{P})=d,~\mathrm{rank}_{\mathbb{Q}}(\bm{p}_1,\cdots,\bm{p}_n)=n\}.$$
**Definition 2**. A $d$-dimensional function $f(\bm{x})$ is *quasiperiodic*, if there exists an $n$-dimensional periodic function $F(\bm{s})$ and a projection matrix $\bm{P}\in \mathbb{P}^{d\times n}$, such that $f(\bm{x})=F(\bm{P}^T\bm{x})$ for all $\bm{x}\in\mathbb{R}^d$. $F(\bm{s})$ is called the *parent function* of $f(\bm{x})$, and we refer to $\mathbb{R}^d$ as the *physical space* and $\mathbb{R}^n$ as the *superspace*.
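For example, the one-dimensional function $f(x)=\cos(2\pi x)+\cos(2\sqrt{2}\pi x)$ is quasiperiodic: it can be written as $f(x)=F(\bm{P}^T x)$ with parent function $F(\bm{s})=\cos(2\pi s_1)+\cos(2\pi s_2)$, which is $1$-periodic in each variable, and projection matrix $\bm{P}=(1,\sqrt{2})$, which belongs to $\mathbb{P}^{1\times 2}$ since $\mathrm{rank}_{\mathbb{R}}(\bm{P})=1$ and $1,\sqrt{2}$ are linearly independent over $\mathbb{Q}$. This simple example will be used below for illustration.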
**Definition 3**. Let $\mathcal{G}$ be a set in the physical space $\mathbb{R}^d$ and $\bm{P}\in \mathbb{P}^{d\times n}$ be a projection matrix. The *lift map* $\mathcal{L}$ is defined by $$\begin{aligned}
&\mathcal{L}: &\mathcal{G}&\to \mathbb{R}^n,\\
&&\bm{x}&\mapsto \bm{P}^T\bm{x}.
\end{aligned}$$
*Remark 4*. Let $\mathcal{S}:=\mathcal{L}(\mathbb{R}^d)$ be a subspace of $\mathbb{R}^n$. From the definition of the lift map $\mathcal{L}$, $\dim_{\mathbb{R}}(\mathcal{S})=d$ and $\dim_{\mathbb{Q}}(\mathcal{S})=n$. For convenience, we refer to $\mathcal{S}$ as an *irrational slice* in superspace $\mathbb{R}^n$. It is straightforward to show that $\mathcal{L}$ is an isomorphism from $\mathbb{R}^d$ to $\mathcal{S}$.
*Remark 5*. The irrational slice $\mathcal{S}$ contains one and only one lattice point $\bm{R}\in\mathcal{B}$ (see Chapter 2.5 in [@meyer2000algebraic]). For convenience, we assume $\bm{0}=(0,\cdots,0)\in\mathcal{S}$; in the general case, a single translation of the entire Bravais lattice suffices.
**Definition 6**. For any set $\mathcal{H}$ of superspace $\mathbb{R}^n$, the *modulo map* $\mathcal{M}$ is defined by $$\begin{aligned}
&\mathcal{M}: &\mathcal{H}&\to \mathbb{T}^n,\\
&&\bm{s}&\mapsto \overline{\bm{s}},
\end{aligned}$$ where $\overline{\bm{s}}:=\bm{s}+\mathbb{Z}^n$ is a left coset of $\mathbb{Z}^n$. We denote the image of $\mathcal{H}$ under $\mathcal{M}$ by $\overline{\mathcal{H}}:=\mathcal{M}(\mathcal{H})=\mathcal{H}/\mathbb{Z}^n$.
*Remark 7*. The modulo map $\mathcal{M}$ is a natural homomorphism from $\mathbb{R}^n$ to $\mathbb{T}^n$, with $\overline{\mathbb{R}^n}=\mathbb{R}^n/\mathbb{Z}^n=\mathbb{T}^n$ and $\rm{Ker}(\mathcal{M})=\mathbb{Z}^n$.
*Remark 8*. $\mathbb{T}^n$ and $\Omega$ are equivalent under the sense of isomorphism. Specifically, we can define an isomorphism $\Phi:\mathbb{T}^n\rightarrow\Omega$ that maps each coset of $\mathbb{T}^n$ to its representative in $\Omega$.
# $\overline{\mathcal{S}}$ is dense in $\mathbb{T}^{n}$ {#sec_theo}
In this section, we first observe that the irrational slice $\mathcal{S}\subset\mathbb{R}^n$, after the modulo operation, is dense in $\mathbb{T}^n$. We then provide a rigorous proof of this observation. Next, we introduce a homomorphism between $\mathbb{R}^d$ and $\mathbb{T}^n$. Finally, we establish a close relationship between the arithmetic properties of irrational numbers and Diophantine approximation systems.
## Observation {#subsec:observation}
In this subsection, we are concerned with the distribution of $\overline{\mathcal{S}}=\mathcal{M}(\mathcal{S})$ in $\mathbb{T}^n$. To describe the process of the modulo map $\mathcal{M}$ acting on $\mathcal{S}$, we present an equivalent expression of $\mathcal{S}$. Since $\dim_{\mathbb{R}}(\mathcal{S})=d$, we can decompose each point $\bm{s}\in \mathcal{S}$ as $\bm{s}=(\bm{t},\bm{r})^T$, where $\bm{t}=(t_i)_{i=1}^d$, $\bm{r}=(r_i)_{i=1}^{n-d}$, and $r_i~(i=1,\cdots,n-d)$ is a linear function of $t_1,\cdots,t_d$, i.e., $$r_i=r_i(t_1,\cdots,t_d)=\alpha_{i1}t_1+\cdots+\alpha_{id}t_d,$$ where $\alpha_{ij}\in\mathbb{R},~j=1,\cdots,d$. Let us introduce matrices $\bm{A} \in \mathbb{R}^{(n-d)\times d}$ and $\bm{Q} \in \mathbb{R}^{n\times d}$, where $$\label{eq:matrix_a}
\bm{A}=
\begin{bmatrix}
\alpha_{11}&\cdots &\alpha_{1d}\\
\vdots& &\vdots\\
\alpha_{n-d,1}&\cdots &\alpha_{n-d, d}\\
\end{bmatrix},$$ and $$\label{eq:matrix_q}
\bm{Q}=
\begin{bmatrix}
\bm{I}_d \\ \bm{A}
\end{bmatrix}.$$ Using these two matrices, we can express the slice $\mathcal{S}$ as $$\label{eq:rewrite_slice}
\begin{aligned}
\mathcal{S}=&~\{\bm{s}=(\bm{t},\bm{r})^T : \bm{r}=\bm{A}\bm{t},~ \bm{t}\in \mathbb{R}^d\},\\
=&~\{\bm{s}=(\bm{t},\bm{r})^T : \bm{s}=\bm{Q}\bm{t}, ~\bm{t}\in \mathbb{R}^d\}.\\
\end{aligned}$$
*Remark 9*. The matrix $\bm{Q}$ has similar properties as $\bm{P}^T$. Specifically, its row vectors $\bm{q}_1,\cdots,$ $\bm{q}_n\in \mathbb{R}^d$ satisfy ${\rm rank}_{\mathbb{Q}}(\bm{q}_1,\cdots,\bm{q}_n)=n$ and ${\rm rank}_{\mathbb{R}}(\bm{Q})=d$. In fact, $\bm{P}^T$ can be transformed into $\bm{Q}$ by a linear transformation over $\mathbb{Q}$.
We give an example to illustrate the above observation. Consider a 2D irrational slice $\mathcal{S}=\{\bm{s}=(t,r)^T:r=\sqrt{2}t,~t\in\mathbb{R}\}$; Figure [\[fig:modulo\]](#fig:modulo){reference-type="ref" reference="fig:modulo"} illustrates the process of the modulo map $\mathcal{M}$ acting on $\mathcal{S}$ within 9 unit cells. The horizontal and vertical axes in the 2D plane are denoted as the $t$-axis and the $r$-axis, respectively. The modulo operation is divided into two steps, first along the $t$-axis (see Figure [\[fig:modulo(b)\]](#fig:modulo(b)){reference-type="ref" reference="fig:modulo(b)"}) and then along the $r$-axis (see Figure [\[fig:modulo(c)\]](#fig:modulo(c)){reference-type="ref" reference="fig:modulo(c)"}). The resulting slice family of $\mathcal{S}$ after the modulo operation is denoted as $\overline{\mathcal{S}}$. Figure [\[fig:torus\]](#fig:torus){reference-type="ref" reference="fig:torus"} shows that $\overline{\mathcal{S}}$ becomes denser in $\mathbb{T}^2$ as the range of $t$ increases.
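This observation can also be checked numerically. The following minimal Python sketch (the grid resolution and sampling density are illustrative choices) samples the slice $r=\sqrt{2}t$, reduces it modulo $\mathbb{Z}^2$, and reports the fraction of cells of a fixed grid on $\mathbb{T}^2$ that are visited; the fraction grows toward $1$ as the range of $t$ increases.

```python
import numpy as np

# Sample the slice r = sqrt(2)*t, reduce it modulo Z^2, and count how many cells
# of a fixed grid on the torus T^2 are visited by the reduced points.
def filled_fraction(t_max, cells=50):
    t = np.linspace(0.0, t_max, int(500 * t_max))     # dense sampling of the slice
    pts = np.column_stack((t % 1.0, (np.sqrt(2.0) * t) % 1.0))
    idx = np.minimum(np.floor(pts * cells).astype(int), cells - 1)
    return len(set(map(tuple, idx))) / cells**2

for t_max in (10, 100, 1000):
    print(t_max, filled_fraction(t_max))              # increases toward 1
```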
Let $\mathcal{R}:=\{\bm{0}\}\times \{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}$ and $\mathcal{D}:=\{\bm{0}\}\times\mathbb{T}^{n-d}$ ($\bm{0}=(0,\cdots,0)^T\in \mathbb{R}^{d}$), then $\overline{\mathcal{R}}=\mathcal{R}/\mathbb{Z}^n
=\{\bm{0}\}\times\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}=\overline{\mathcal{S}}\cap\mathcal{D}$. Figure [\[fig:modulo(c)\]](#fig:modulo(c)){reference-type="ref" reference="fig:modulo(c)"} shows that the distribution of $\overline{\mathcal{S}}$ in $\mathbb{T}^n$ can be completely determined by the distribution of $\overline{\mathcal{R}}$ in $\mathcal{D}$, i.e., the distribution of $\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ in $\mathbb{T}^{n-d}$. It inspires us to develop a rigorous mathematical theory of the observation "$\overline{\mathcal{S}}$ is dense in $\mathbb{T}^n$\", which is presented in the next subsection.
## Theoretical analysis {#subsec:math_theo}
In this subsection, we abstract the above observation into a mathematical theorem and present a rigorous proof. Then we introduce a homomorphism between $\mathbb{R}^d$ and $\mathbb{T}^n$. Let us first introduce this theorem.
**Theorem 10**. *For any irrational slice $\mathcal{S}\subset\mathbb{R}^n$, $\overline{\mathcal{S}}=\mathcal{M}(\mathcal{S})$ is dense in $\mathbb{T}^n$.*
Before the proof, we need to give the definition of dense subset in the space $\mathbb{R}^n$, as well as the $k$-variable Kronecker theorem.
**Definition 11**. Let $V_1\subset V_2\subseteq \mathbb{R}^n$, the subset $V_1$ is *dense* in $V_2$ if for any $\bm{s}_2\in V_2$ and any $\epsilon>0$, there exists a $\bm{s}_1\in V_1$ such that $\|\bm{s}_1-\bm{s}_2\|_{\infty}<\epsilon$.
**Lemma 12** ($k$-variable Kronecker theorem [@granville2020number]). *Given $k$ real numbers $\alpha_1,\cdots,$ $\alpha_k$, assume that $1,\alpha_1,\cdots,\alpha_k$ are $\mathbb{Q}$-linearly independent, then the point set $$\{(\alpha_1,\cdots,\alpha_k)^Tm: m\in \mathbb{Z}\}/\mathbb{Z}^k$$ is dense in $\mathbb{T}^k$.*
With these preparations, we start the proof of Theorem [Theorem 10](#thm:dens_fill){reference-type="ref" reference="thm:dens_fill"}.
*Proof.* First, let us establish the equivalence between "$\overline{\mathcal{S}}=\mathcal{M}(\mathcal{S})$ is dense in $\mathbb{T}^n$\" and "$\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ is dense in $\mathbb{T}^{n-d}$\". From Definition [Definition 11](#def:dense){reference-type="ref" reference="def:dense"}, "$\overline{\mathcal{S}}=\mathcal{M}(\mathcal{S})$ is dense in $\mathbb{T}^n$\" means that for each $\bm{s}^*=(\bm{t}^*,\bm{r}^*)^T\in\mathbb{T}^n$ and $\epsilon>0$, there exists a $\bm{s}=(\bm{t},\bm{A}\bm{t})^T\in\mathcal{S}$ such that $\|\overline{\bm{s}}-\bm{s}^*\|_{\infty}<\epsilon$. Let $\bm{b}^*:=\bm{r}^*-\bm{A}\bm{t}^*$, then $\bm{b}^*\in \mathbb{T}^{n-d}$. The arbitrariness of $\bm{s}^*$ in $\mathbb{T}^n$ implies the arbitrariness of $\bm{b}^*$ in $\mathbb{T}^{n-d}$. Meanwhile, let $\bm{t}':=[\bm{t}]\in\mathbb{Z}^d$, then $\bm{s}'=(\bm{t}',\bm{A}\bm{t}')^T\in\mathcal{S}$ satisfies $\overline{\bm{A}\bm{t}'}=\overline{\bm{A}\bm{t}'}-\bm{A}\overline{\bm{t}'}=\overline{\bm{A}\bm{t}}-\bm{A}\overline{\bm{t}}:=\bm{b}$. Note that, $\|\overline{\bm{s}}-\bm{s}^*\|_{\infty}<\epsilon$ implies $\|\bm{b}-\bm{b}^*\|_{\infty}<\epsilon$. Then for each $\bm{b}^*\in \mathbb{T}^{n-d}$ and $\epsilon>0$, there exists a $\bm{t}'\in\mathbb{Z}^d$ such that $\|\overline{\bm{A}\bm{t}'}-\bm{b}^*\|_{\infty}<\epsilon$, which exactly means "$\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ is dense in $\mathbb{T}^{n-d}$\". Here, we have completed the proof for one direction of the equivalence, and the proof for the other direction can be obtained similarly from the above process.
Next, we use induction on the dimension $d$ of $\bm{t}$ to prove that "$\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ is dense in $\mathbb{T}^{n-d}$\". When $d=1$, $\bm{A}=(\alpha_{11},\cdots,\alpha_{n-1,1})^T$, $\bm{Q}=(1,\alpha_{11},\cdots,\alpha_{n-1,1})^T$, and the elements of $\bm{Q}$ are $\mathbb{Q}$-linearly independent. By Lemma [Lemma 12](#lem:kron){reference-type="ref" reference="lem:kron"}, $\{(\alpha_{11},\cdots,\alpha_{n-1,1})^T m:m\in \mathbb{Z}\}/\mathbb{Z}^{n-1}$ is dense in $\mathbb{T}^{n-1}$. Hence, $\{\bm{r}=\bm{A}t:t\in \mathbb{Z}\}/\mathbb{Z}^{n-1}$ is dense in $\mathbb{T}^{n-1}$. Assuming that the conclusion holds when $d=k-1$, we prove that it also holds for $d=k$ in two cases.
Case 1: Suppose that the $k$-th column of matrix $\bm{A}$ consists entirely of rational numbers, i.e., $\alpha_{1k},\cdots,\alpha_{n-k,k}\in\mathbb{Q}$. Note that $$\begin{aligned}
\bm{r}=\bm{A}\bm{t}&=
\begin{bmatrix}
\alpha_{11} & \cdots & \alpha_{1,k-1} & \alpha_{1k}\\
\vdots & & \vdots & \vdots\\
\alpha_{n-k,1} & \cdots & \alpha_{n-k,k-1} & \alpha_{n-k,k}\\
\end{bmatrix}
\begin{bmatrix}
t_1 \\ \vdots \\ t_k
\end{bmatrix} \\
&=\begin{bmatrix}
\alpha_{11} & \cdots & \alpha_{1,k-1} \\
\vdots & & \vdots \\
\alpha_{n-k,1} & \cdots & \alpha_{n-k,k-1} \\
\end{bmatrix}
\begin{bmatrix}
t_1 \\ \vdots \\ t_{k-1}
\end{bmatrix}
+\begin{bmatrix}
\alpha_{1k} \\ \vdots \\ \alpha_{n-k,k}
\end{bmatrix}t_k. \\
\end{aligned}$$ Define matrices $\bm{A}_1 \in \mathbb{R}^{(n-k)\times(k-1)}$ and $\bm{Q}_1 \in \mathbb{R}^{(n-1)\times(k-1)}$ as follows $$\bm{A}_1:=
\begin{bmatrix}
\alpha_{11} & \cdots & \alpha_{1,k-1} \\
\vdots & & \vdots \\
\alpha_{n-k,1} & \cdots & \alpha_{n-k,k-1} \\
\end{bmatrix},\quad
\bm{Q}_1:=\begin{bmatrix}
\bm{I}_{k-1} \\ \bm{A}_1
\end{bmatrix}.$$ Let $\bm{t}_1=(t_i)_{i=1}^{k-1}$ and $\bm{r}_1=\bm{A}_1\bm{t}_1$. Since $\alpha_{1k},\cdots,\alpha_{n-k,k}$ are rational, $1$ and $\alpha_{i k}$ are $\mathbb{Q}$-linearly dependent for all $i\in \{1,\cdots,n-k\}$. This implies that the row vectors of $\bm{Q}_1$ are $\mathbb{Q}$-linearly independent. Otherwise, suppose that there exists $i\in \{1,\cdots,n-k\}$ such that the $i$-th row of $\bm{A}_1$ and all rows of $\bm{I}_{k-1}$ are $\mathbb{Q}$-linearly dependent. Meanwhile, $1$ and $\alpha_{i k}$ are also $\mathbb{Q}$-linearly dependent. It follows that the $i$-th row of $\bm{A}$ and all rows of $\bm{I}_{k}$ are $\mathbb{Q}$-linearly dependent, which is contradictory to the properties of $\bm{Q}$. According to the inductive hypothesis, $\{\bm{r}_1=\bm{A}_1\bm{t}_1: \bm{t}_1\in \mathbb{Z}^{k-1}\}/\mathbb{Z}^{n-k}$ is dense in $\mathbb{T}^{n-k}$, then $\{\bm{r}=\bm{r}_1+(\alpha_{1k},\cdots,\alpha_{n-k,k})^T t_k: t_k\in \mathbb{Z}\}/\mathbb{Z}^{n-k}$ is dense in $\mathbb{T}^{n-k}$.
Case 2: Suppose that in the $k$-th column of $\bm{A}$, there are at most $q$ elements that are $\mathbb{Q}$-linearly independent with $1$. Without loss of generality, we assume that $\alpha_{n-k-q+1,k},\cdots,\alpha_{n-k,k}$ are the $q$ linearly independent elements in the $k$-th column of $\bm{A}$. Denote $$\bm{A}_1:=
\begin{bmatrix}
\alpha_{11} & \cdots & \alpha_{1,k-1} & 0\\
\vdots & & \vdots \\
\alpha_{n-k-q,1} & \cdots & \alpha_{n-k-q,k-1} & 0\\
0 & \cdots & 0 & \alpha_{n-k-q+1,k}\\
\vdots & & \vdots & \vdots \\
0 & \cdots & 0 & \alpha_{n-k,k}
\end{bmatrix}\in \mathbb{R}^{(n-k)\times k} ,$$ and $$\bm{A}_2:=
\begin{bmatrix}
\alpha_{n-k-q+1,1} & \cdots & \alpha_{n-k-q+1,k-1}\\
\vdots & & \vdots\\
\alpha_{n-k,1} & \cdots & \alpha_{n-k,k-1}
\end{bmatrix}\in \mathbb{R}^{q\times (k-1)}.$$ Then, we can express $\bm{r}$ as $$\bm{r}=\bm{A}\bm{t}=\bm{A}_1\bm{t}+\bm{A}_2
\begin{bmatrix}
t_{1} \\ \vdots \\ t_{k-1}
\end{bmatrix}
+\begin{bmatrix}
\alpha_{1k} \\ \vdots \\ \alpha_{n-k-q,k}
\end{bmatrix}t_{k}.$$ Let $\bm{r}_1=(\bm{r}_{11},\bm{r}_{12})^T:=\bm{A}_1\bm{t}$, where $$\bm{r}_{11}
=\begin{bmatrix}
\alpha_{11} & \cdots & \alpha_{1,k-1} \\
\vdots & & \vdots \\
\alpha_{n-k-q,1} & \cdots & \alpha_{n-k-q,k-1} \\
\end{bmatrix}
\begin{bmatrix}
t_1 \\ \vdots \\ t_{k-1}
\end{bmatrix}:=\bm{A}_{11}\bm{t}_1,$$ $$\bm{r}_{12}
=\begin{bmatrix}
\alpha_{n-k-q+1,k} \\ \vdots \\ \alpha_{n-k,k}
\end{bmatrix}t_k.$$ Let $$\bm{Q}_{11}:=\begin{bmatrix}
\bm{I}_{k-1} \\ \bm{A}_{11}
\end{bmatrix}\in \mathbb{R}^{(n-q-1)\times (k-1)},$$ then the row vectors of $\bm{Q}_{11}$ are $\mathbb{Q}$-linearly independent. Otherwise, the same reason as in Case 1 gives a contradiction. Again from the inductive hypothesis, $\{\bm{r}_{11}=\bm{A}_{11}\bm{t}_1: \bm{t}_1\in \mathbb{Z}^{k-1}\}/\mathbb{Z}^{n-k-q}$ is dense in $\mathbb{T}^{n-k-q}$. Moreover, $1,\alpha_{n-k-q+1,k},\cdots,\alpha_{n-k,k}$ are $\mathbb{Q}$-linearly independent. From Lemma [Lemma 12](#lem:kron){reference-type="ref" reference="lem:kron"}, $\{\bm{r}_{12}=(\alpha_{n-k-q+1,k},\cdots,\alpha_{n-k,k})^Tt_{k}: ~t_{k}\in \mathbb{Z}\}/\mathbb{Z}^q$ is dense in $\mathbb{T}^{q}$. Therefore, $\{\bm{r}_1=(\bm{r}_{11},\bm{r}_{12})^T\}/\mathbb{Z}^{n-k}$ is dense in $\mathbb{T}^{n-k}$, and then $\{\bm{r}=\bm{r}_1+\bm{A}_2\bm{t}_1+(\alpha_{1k},\cdots,\alpha_{n-k-q,k})^T t_k\}/\mathbb{Z}^{n-k}$ is dense in $\mathbb{T}^{n-k}$.
To summarize, we have proven that $\{\bm{r}=\bm{A}\bm{t}: \bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ is dense in $\mathbb{T}^{n-d}$. Consequently, $\overline{\mathcal{S}}$ is dense in $\mathbb{T}^n$. ◻
Further, we introduce a homomorphism between $\mathbb{R}^d$ and $\mathbb{T}^n$.
**Definition 13**. A *combination map* $\mathcal{C}$ is defined as the composition of the lift map $\mathcal{L}:\mathcal{G}\to \mathbb{R}^n$ and the modulo map $\mathcal{M}:\mathbb{R}^n\to \mathbb{T}^n$, denoted by $$\mathcal{C}:= \mathcal{M} \circ \mathcal{L},$$ where the symbol $\circ$ represents the composition of two maps.
*Remark 14*. The combination map $\mathcal{C}$ is a homomorphism from $\mathbb{R}^d$ to $\mathbb{T}^n$. Moreover, from Theorem [Theorem 10](#thm:dens_fill){reference-type="ref" reference="thm:dens_fill"}, $\mathcal{C}(\mathbb{R}^d)$ is dense in $\mathbb{T}^n$.
Based on this homomorphism, we can obtain the following theorem which plays a crucial role in our proposed FPR method.
**Theorem 15**. *If $f(\bm{x})$ is a $d$-dimensional quasiperiodic function, there exists a parent function $F(\bm{s})$ and a projection matrix $\bm{P}\in \mathbb{P}^{d\times n}$, such that $f(\bm{x})=F(\bm{P}^T\bm{x})$ for all $\bm{x}\in\mathbb{R}^d$. Then $$f(\bm{x})=F\left(\mathcal{C}(\bm{x})\right),\quad \forall\bm{x}\in \mathbb{R}^d,$$ where the combination map $\mathcal{C}$ is defined by Definition [Definition 13](#def:comb_map){reference-type="ref" reference="def:comb_map"}.*
*Proof.* From the definition of $\mathcal{L}$, $$f(\bm{x})=F(\mathcal{L}(\bm{x})),\quad \forall\bm{x}\in \mathbb{R}^d.$$ Further, from the periodicity of $F(\bm{s})$, $$F(\bm{s})=F(\mathcal{M}(\bm{s})),\quad \forall\bm{s}\in \mathbb{R}^n.$$ Therefore, $$f(\bm{x})=F(\mathcal{M}(\mathcal{L}(\bm{x})))=F(\mathcal{C}(\bm{x})),\quad \forall\bm{x}\in \mathbb{R}^d.$$ ◻
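As a quick numerical illustration of Theorem [Theorem 15](#thm:relation){reference-type="ref" reference="thm:relation"}, take the simple example above, $F(\bm{s})=\cos(2\pi s_1)+\cos(2\pi s_2)$ and $\bm{P}=(1,\sqrt{2})$, so that $f(x)=\cos(2\pi x)+\cos(2\sqrt{2}\pi x)$ and $\mathcal{C}(x)=(x\bmod 1,\sqrt{2}x\bmod 1)$; the identity $f(x)=F(\mathcal{C}(x))$ can then be checked directly (the random test points are an arbitrary choice).

```python
import numpy as np

# Check f(x) = F(C(x)) for F(s1, s2) = cos(2*pi*s1) + cos(2*pi*s2), P = (1, sqrt(2)),
# so f(x) = cos(2*pi*x) + cos(2*sqrt(2)*pi*x) and C(x) = (x mod 1, sqrt(2)*x mod 1).
f = lambda x: np.cos(2 * np.pi * x) + np.cos(2 * np.pi * np.sqrt(2.0) * x)
F = lambda s1, s2: np.cos(2 * np.pi * s1) + np.cos(2 * np.pi * s2)
C = lambda x: (x % 1.0, (np.sqrt(2.0) * x) % 1.0)

rng = np.random.default_rng(0)
for x in rng.uniform(-1.0e3, 1.0e3, size=5):
    print(abs(f(x) - F(*C(x))))      # differences are at round-off level
```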
Using the homomorphism $\mathcal{C}$ and Theorem [Theorem 15](#thm:relation){reference-type="ref" reference="thm:relation"}, we can locate the point in the unit cell of the parent function that corresponds to a given point in physical space, and obtain the value of the parent function there. This allows us to directly carry out our algorithm in the physical space based on finite points, without the need for dimensional lifting.
## $\overline{\mathcal{S}}=\mathbb{T}^{n}$ or $\overline{\mathcal{S}}\neq \mathbb{T}^{n}$[\[subsec:bad_appr\]]{#subsec:bad_appr label="subsec:bad_appr"}
In this subsection, we further explore whether $\overline{\mathcal{S}}=\mathbb{T}^{n}$ or not. From the proof of Theorem [Theorem 10](#thm:dens_fill){reference-type="ref" reference="thm:dens_fill"}, this question is equivalent to whether $\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}=\mathbb{T}^{n-d}$ holds true or not. It is closely related to the Diophantine approximation problem. For a given matrix $\bm{A}\in\mathbb{R}^{k\times d}$, and for each $\varepsilon>0$, $\bm{r}_0\in \mathbb{T}^k$, we want to know whether there exists a solution $\bm{t}\in\mathbb{Z}^d$ satisfying the $k$-dimensional *Diophantine system* $$\label{eq:diop_ineq}
\|\bm{A}\bm{t}/\mathbb{Z}^k-\bm{r}_0\|_{\infty}\leq \varepsilon.$$
*Remark 16*. For a $d$-dimensional quasiperiodic function $f(\bm{x})$, if its projection matrix $\bm{P}\in\mathbb{R}^{d\times n}$ is such that $\bm{P}^T$ can be linearly transformed over $\mathbb{Q}$ into the form $\bm{Q}=(\bm{I}_d,\bm{A})^T$, then we say that $f(\bm{x})$ is related to the $(n-d)$-dimensional Diophantine system [\[eq:diop_ineq\]](#eq:diop_ineq){reference-type="eqref" reference="eq:diop_ineq"}.
**Definition 17**. For each positive $\varepsilon$, $\mathcal{G}(\varepsilon)$ is the *least region* of the Diophantine system [\[eq:diop_ineq\]](#eq:diop_ineq){reference-type="eqref" reference="eq:diop_ineq"}, if $\langle\mathcal{G}(\varepsilon)\rangle\in\mathbb{N}^d_+$ and $\mathcal{G}(\varepsilon)$ contains a solution $\bm{t}\in \mathbb{Z}^d$ of [\[eq:diop_ineq\]](#eq:diop_ineq){reference-type="eqref" reference="eq:diop_ineq"}.
**Lemma 18**. *(Chapter 1.3 in [@meyer2000algebraic]) [\[lem:lower_bound\]]{#lem:lower_bound label="lem:lower_bound"} For each matrix $\bm{A}\in\mathbb{R}^{k\times d}$, the least region $\mathcal{G}(\varepsilon)$ satisfies $$\langle\mathcal{G}(\varepsilon)\rangle_i\geq \frac{1}{2\varepsilon^k},~~i=1,\cdots,d.$$*
Next, we discuss the Diophantine approximation systems from two aspects, badly approximable systems and good approximable systems, based on the arithmetic properties of irrational numbers in $\bm{A}$.
### Badly approximable systems
**Definition 19**. A matrix $\bm{A}\in\mathbb{R}^{k\times d}$ is *badly approximable*, if there exists a sequence of positive integers $C_1,\cdots,C_d$, such that the least region $\mathcal{G}(\varepsilon)$ fulfills $$\frac{1}{2\varepsilon^{k}}\leq\langle\mathcal{G}(\varepsilon)\rangle_i\leq \frac{C_i}{\varepsilon^{k}},~~i=1,\cdots,d.$$ When $\bm{A}$ is a badly approximable matrix, we call [\[eq:diop_ineq\]](#eq:diop_ineq){reference-type="eqref" reference="eq:diop_ineq"} a badly approximable system.
*Remark 20*. When $k=1$, $\bm{A}$ becomes a badly approximable irrational number. The set of badly approximable irrational numbers has the same cardinality as $\mathbb{R}$. In particular, all quadratic irrational numbers are badly approximable [@burger2000exploring].
*Remark 21*. The upper bound on $\langle\mathcal{G}(\varepsilon)\rangle$ gives good control on the range in which a solution exists; this is also known as the *rapid filling* property of badly approximable systems [@meyer2000algebraic].
**Lemma 22**. *(Chapter 1.3 in [@meyer2000algebraic]) When $\bm{A}$ is a badly approximable matrix, there exists a positive constant $C$ such that, for each $\bm{t}\in \mathbb{Z}^d\backslash\{\bm{0}\}$ and $\bm{r}_0\in\mathbb{T}^k$, $$\label{eq:badly}
\|\bm{A}\bm{t}/\mathbb{Z}^k-\bm{r}_0\|_{\infty}\geq \frac{C}{\|\bm{t}\|^k_{\infty}}>0.$$*
For badly approximable systems, [\[eq:badly\]](#eq:badly){reference-type="eqref" reference="eq:badly"} implies that there is no $\bm{t}\in\mathbb{Z}^d$ such that $\bm{A}\bm{t}/\mathbb{Z}^k=\bm{r}_0$ holds for any $\bm{r}_0\in \mathbb{T}^k$. Therefore, when $k=n-d$, the first conclusion of this subsection comes out naturally.
**Theorem 23**. *When $\bm{A}\in\mathbb{R}^{(n-d)\times d}$ is a badly approximable matrix, $\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}\neq\mathbb{T}^{n-d}$, i.e., $\overline{\mathcal{S}}\neq\mathbb{T}^n$.*
### Good approximable systems
**Definition 24**. An irrational number $\alpha$ is *good approximable*, if its continued fraction expansion $\alpha=[a_0,a_1,\cdots,a_n,\cdots]$ satisfies $$\overline{\lim_{n\to\infty}}a_n=\infty.$$
*Remark 25*. The set of good approximable numbers contains all Liouville numbers (see [@liouville1844classes] for the definition of Liouville numbers). Meanwhile, some famous transcendental numbers, like $e$ and $\pi$, are not Liouville numbers but are good approximable numbers. Good approximable numbers form the complement of the set of badly approximable numbers within the set of irrational numbers [@meyer2000algebraic].
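The distinction between the two classes can be inspected numerically from the leading partial quotients of the continued fraction expansion. The following sketch works in floating-point arithmetic, so only the first several quotients are reliable; it reproduces the well-known expansions of $\sqrt{2}$ (all quotients after the first equal to $2$, hence badly approximable) and of $e$ (quotients of the form $2k$ appear and grow without bound, hence good approximable).

```python
import math

def partial_quotients(x, terms=8):
    """Leading partial quotients a_0, a_1, ... of the continued fraction of x.
    Floating-point input, so only the first several terms are trustworthy."""
    out = []
    for _ in range(terms):
        a = math.floor(x)
        out.append(a)
        frac_part = x - a
        if frac_part < 1e-12:
            break
        x = 1.0 / frac_part
    return out

print(partial_quotients(math.sqrt(2.0)))  # [1, 2, 2, 2, 2, 2, 2, 2]: bounded quotients
print(partial_quotients(math.e))          # [2, 1, 2, 1, 1, 4, 1, 1]: quotients 2k grow without bound
```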
**Definition 26**. A matrix $\bm{A}\in\mathbb{R}^{k\times d}$ is *good approximable*, if all irrational numbers in matrix $\bm{A}$ are good approximable. When $\bm{A}$ is good approximable, we call [\[eq:diop_ineq\]](#eq:diop_ineq){reference-type="eqref" reference="eq:diop_ineq"} a good approximable system.
Different from badly approximable systems, good approximable systems have the arbitrary approximation property [@hardy1979introduction]. This means that when $\bm{A}$ is good approximable, for each $\bm{r}_0\in\mathbb{T}^k$, there exists a $\bm{t}\in\mathbb{Z}^d$ such that $\bm{A}\bm{t}/\mathbb{Z}^k=\bm{r}_0$. Then we obtain the following conclusion when $k=n-d$.
**Theorem 27**. *When $\bm{A}\in\mathbb{R}^{(n-d)\times d}$ is a good approximable matrix, $\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathbb{Z}^d\}/\mathbb{Z}^{n-d}=\mathbb{T}^{n-d}$, i.e., $\overline{\mathcal{S}}=\mathbb{T}^n$.*
# Algorithm and analysis {#sec:alg}
From Theorem [Theorem 15](#thm:relation){reference-type="ref" reference="thm:relation"}, the global information of a quasiperiodic function is contained in the unit cell of its parent function. However, in practice, performing calculations in superspace may lead to unbearable computational complexity. In this section, we propose a new algorithm, the FPR method, to recover the global quasiperiodic function by employing an interpolation technique with finite points in physical space. We also present the convergence analysis of the FPR method and the computational complexity of choosing finite points.
## FPR method
In this subsection, we present the FPR method in a step-by-step way. Assume that the projection matrix $\bm{P}\in\mathbb{P}^{d\times n}$ of the quasiperiodic function $f(\bm{x})$ is known, and $\bm{P}^T$ can be linearly transformed over $\mathbb{Q}$ into the form $\bm{Q}=(\bm{I}_d,\bm{A})^T$. Given a target point $\bm{x}^*\in\mathbb{R}^d$, FPR method has three steps to obtain an approximate value of $f(\bm{x}^*)$.
**Select region $\mathcal{G}$.** The FPR method uses finite points in physical space to recover the global quasiperiodic function. These finite points fall in a finite region $\mathcal{G}\subset\mathbb{R}^d$, which needs to be determined in the first step. The selection principle of $\mathcal{G}$ is that $\mathcal{C}(\mathcal{G})$ is relatively uniformly distributed in $\Omega$. Actually, the arithmetic properties of the irrational numbers in the projection matrix $\bm{P}$ can provide a fast way to select $\mathcal{G}$. We will give a deeper discussion in Subsection [4.2](#subsec:rapid_filling){reference-type="ref" reference="subsec:rapid_filling"}.
**Shrink region $\mathcal{G}$.** For the target point $\bm{x}^*$, its image under the lift map $\mathcal{L}$ is denoted as $\bm{s}^*=(\bm{t}^*, \bm{r}^*)^T$. Define $\bm{b}^*:=\overline{\bm{A}[\bm{t}^*]}\in[0,1)^{n-d}$. Given $h>0$ and $\varepsilon>0$, we traverse the points in $\mathcal{G}\cap\mathbb{Z}^d$ to find $n-d$ pairs of $(\bm{x}^{i+},\bm{x}^{i-})$, $i=1,\cdots,n-d$, such that $$\begin{aligned}
&|b^{i+}_i-(b^*_i+h/2)|\leq \varepsilon/2 &\text{and}\quad &|b^{i+}_j-b^*_j|\leq\varepsilon/2,\forall j\in\{1,\cdots,n-d\}\backslash \{i\},\\
&|b^{i-}_i-(b^*_i-h/2)|\leq \varepsilon/2 &\text{and}\quad&|b^{i-}_j-b^*_j|\leq\varepsilon/2,\forall j\in\{1,\cdots,n-d\}\backslash \{i\},
\end{aligned}$$ where $$\begin{aligned}
&\bm{b}^{i+}:=\overline{\bm{A}\bm{t}^{i+}},\quad \bm{s}^{i+}=(\bm{t}^{i+},\bm{r}^{i+})^T:=\mathcal{L}(\bm{x}^{i+}),\\
&\bm{b}^{i-}:=\overline{\bm{A}\bm{t}^{i-}},\quad \bm{s}^{i-}=(\bm{t}^{i-},\bm{r}^{i-})^T:=\mathcal{L}(\bm{x}^{i-}).\\
\end{aligned}$$ Note that the traverse process in $\mathcal{G}\cap\mathbb{Z}^d$ can be performed simultaneously in each dimension.
Furthermore, according to $\overline{\bm{s}^*}=(\overline{\bm{t}^*},\overline{\bm{r}^*})^T:=\mathcal{M}(\bm{s}^*)$, the region $\mathcal{G}$ can be shrunk to $\bigcup\mathcal{G}^i$, where $$\mathcal{G}^i:=\{\bm{x}\in \mathbb{R}^d:\bm{x}=\bm{x}^i+\overline{\bm{t}^*}+\bm{x}_h,\bm{x}^i\in\{\bm{x}^{i+},\bm{x}^{i-}\}, \bm{x}_h\in [-h/2,h/2)^d\},~~i=1,\cdots,n-d.$$ $\mathcal{C}(\bigcup\mathcal{G}^i)$ can directly enclose an interpolation element $\Theta$ in $\Omega$, and the size of the interpolation element $\Theta$ fulfills $$h-\varepsilon \leq\|\langle\Theta\rangle\|_{\infty}\leq h+\varepsilon,$$ where $h$ is the target value of $\langle\Theta\rangle$ in each dimension, and $\varepsilon$ determines the deviation range of $\langle\Theta\rangle$ in the actual selection. In concrete implementations, we usually choose $\varepsilon<h/10$ to ensure that $\langle\Theta\rangle$ is mainly controlled by $h$ and hardly affected by $\varepsilon$. To facilitate a clearer comprehension of this step, Figure [\[fig:shrink\]](#fig:shrink){reference-type="ref" reference="fig:shrink"} presents the process of determining $\mathcal{G}^i$ in the $\bm{t}\times r_i$ plane. Here, the vertical axis shows only the $i$-th dimension of the $\bm{r}$-axis.
![image](./shrink.png){width="40%"}
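To make the traversal concrete, the following small Python sketch (ours, not part of any reference implementation) carries out the pair search for the simplest case $d=1$, $n=2$ with $\bm{A}=\sqrt{2}$: it scans the integers in $\mathcal{G}$ for a pair $(t^{+},t^{-})$ whose fractional parts $\overline{\bm{A}t}$ land within $\varepsilon/2$ of $b^{*}\pm h/2$, with all comparisons taken modulo $1$. The parameter names are our own.

```python
import math

def find_pair(A, b_star, h, eps, G_size):
    """Scan t = 0, 1, ..., G_size - 1 and return (t_plus, t_minus) such that
    frac(A*t) lies within eps/2 of b_star + h/2 and b_star - h/2, respectively."""
    def circ_dist(a, b):                      # distance on the unit circle [0, 1)
        return abs(((a - b + 0.5) % 1.0) - 0.5)

    t_plus = t_minus = None
    for t in range(G_size):
        b = (A * t) % 1.0
        if t_plus is None and circ_dist(b, (b_star + h / 2) % 1.0) <= eps / 2:
            t_plus = t
        if t_minus is None and circ_dist(b, (b_star - h / 2) % 1.0) <= eps / 2:
            t_minus = t
        if t_plus is not None and t_minus is not None:
            return t_plus, t_minus
    return None                               # enlarge G_size if the search fails

print(find_pair(math.sqrt(2), b_star=0.37, h=0.1, eps=0.01, G_size=2000))
```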
**Interpolation recovery.** To approximate $f(\bm{x}^*)$ by the $k$-degree Lagrange interpolation formula (LIF-$k$), we first select $K = (k+1)^n$ interpolation nodes $\{\bm{\tau}_i\in\mathbb{R}^d:i=1,\cdots,K\}$ in $\bigcup\mathcal{G}^i$. These nodes correspond to a point set $\{\mathcal{C}(\bm{\tau}_i)\}^K_{i=1}$, which forms an interpolation element $\Theta$ in $\Omega$. Before interpolating, the irregular interpolation element is mapped to a rectangular one by an affine transformation. Then the corresponding interpolation basis functions $\{\phi_i:\mathbb{R}^n\to\mathbb{R}:i=1,\cdots,K\}$ are determined. Finally, combining Theorem [Theorem 15](#thm:relation){reference-type="ref" reference="thm:relation"}, $$\begin{aligned}
f(\bm{x}^*)\approx\Pi_k F(\overline{\bm{s}^*})=\sum_{i=1}^{K}F(\mathcal{C}(\bm{\tau}_i))\phi_i(\overline{\bm{s}^*})
=\sum_{i=1}^{K} f(\bm{\tau}_i)\phi_i(\overline{\bm{s}^*}),
\end{aligned}$$ where $\Pi_k$ is the LIF-$k$ transformation. According to Theorem [Theorem 10](#thm:dens_fill){reference-type="ref" reference="thm:dens_fill"}, as the region $\mathcal{G}$ expands, $\mathcal{C}(\mathcal{G})$ gradually becomes dense in $\Omega$, and the approximation accuracy of $\Pi_k F(\overline{\bm{s}^*})$ is improved accordingly.
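For completeness, a minimal sketch of the tensor-product LIF-$1$ (bilinear) case on the reference square $[0,1]^2$ is given below; the corner ordering is our own convention.

```python
def bilinear_basis(xi, eta):
    """LIF-1 basis on [0,1]^2 at the corners (0,0), (1,0), (0,1), (1,1)."""
    return [(1 - xi) * (1 - eta), xi * (1 - eta), (1 - xi) * eta, xi * eta]

def lif1(corner_values, xi, eta):
    """Interpolate a function from its four corner values to the point (xi, eta)."""
    return sum(w * v for w, v in zip(bilinear_basis(xi, eta), corner_values))

print(lif1([0.0, 1.0, 2.0, 3.0], 0.5, 0.5))   # center of the square: 1.5
```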
*Remark 28*. The LIF used in the FPR method can be replaced by other interpolation methods, and the shape of interpolation element can also be changed. One can make appropriate adjustments according to specific problems in the FPR framework.
Algorithm [\[alg:FPR\]](#alg:FPR){reference-type="ref" reference="alg:FPR"} summarizes the above steps of FPR method.
**Require:** projection matrix $\bm{P}\in \mathbb{P}^{d\times n}$, target point $\bm{x}^*\in \mathbb{R}^d$\
1: Select region $\mathcal{G}\subset\mathbb{R}^d$\
2: Shrink region $\mathcal{G}$ to $\bigcup\mathcal{G}^i$\
3: Interpolation recovery
- Select interpolation nodes in $\bigcup\mathcal{G}^i$
- Use affine transformation to obtain rectangular interpolation element
- Calculate interpolation basis functions
- Calculate interpolation result $\Pi_k F(\overline{\bm{s}^*})$
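As an illustration of the whole procedure, the following self-contained Python sketch recovers the 1D quasiperiodic function $f(x)=\cos x+\cos\sqrt{2}x$ treated in the numerical experiments below. It is a simplified stand-in for Algorithm [\[alg:FPR\]](#alg:FPR){reference-type="ref" reference="alg:FPR"}: instead of the region-shrinking bookkeeping, it searches naively over integer multiples of the period to find physical samples whose torus images bracket the target, and then interpolates linearly, first along the $\bm{r}$ direction within each of two nearby columns and then between the columns. The parameters `h` and `kmax` are our own, and the node count is larger than what the region-selection strategy of the paper requires.

```python
import numpy as np

SQ2, TWO_PI = np.sqrt(2.0), 2.0 * np.pi

def f(x):                                   # quasiperiodic function to be recovered
    return np.cos(x) + np.cos(SQ2 * x)      # parent function F(s1, s2) = cos s1 + cos s2

def column_value(x_col, s2_target, kmax):
    """Linear interpolation of F along the torus line s1 = x_col mod 2*pi.
    The samples f(x_col + 2*pi*k) sit at heights sqrt(2)*(x_col + 2*pi*k) mod 2*pi,
    which equidistribute, so two of them bracket s2_target once kmax is large enough."""
    ks = np.arange(-kmax, kmax + 1)
    heights = (SQ2 * x_col + TWO_PI * SQ2 * ks) % TWO_PI
    d = (heights - s2_target + np.pi) % TWO_PI - np.pi    # signed circular distances
    below, above = d[d <= 0].max(), d[d > 0].min()
    k_below, k_above = ks[d == below][0], ks[d == above][0]
    t = -below / (above - below)                          # interpolation weight in s2
    return (1 - t) * f(x_col + TWO_PI * k_below) + t * f(x_col + TWO_PI * k_above)

def fpr_recover(x_star, h=0.1, kmax=2000):
    """Approximate f(x_star) from finitely many physical samples."""
    s2_target = (SQ2 * x_star) % TWO_PI
    left = column_value(x_star - h / 2, s2_target, kmax)
    right = column_value(x_star + h / 2, s2_target, kmax)
    return 0.5 * (left + right)                           # x_star sits midway in s1

if __name__ == "__main__":
    x_star = 1.0e6 + 17.3
    print(abs(fpr_recover(x_star) - f(x_star)))
```

With the default parameters the pointwise error at $x^*\approx 10^6$ is of order $10^{-3}$ and decreases roughly at the second-order rate expected for LIF-$1$ as `h` is refined.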
## How to select region $\mathcal{G}$? {#subsec:rapid_filling}
In the process of selecting region $\mathcal{G}$, we have observed that, in many cases, relatively small region $\mathcal{G}$ can result in a uniform distribution of $\mathcal{C}(\mathcal{G})$ in $\Omega$. Nevertheless, the distinct arithmetic properties of the irrational numbers in the projection matrices can influence the degree of uniformity in the distribution. For example, consider two different maps $\mathcal{C}_1$ and $\mathcal{C}_2$, which are determined by projection matrices $\bm{P}_1=(1,\sqrt{2})$ and $\bm{P}_2=(1,\pi)$, respectively. When $\mathcal{G}=[0, 30)$, the distribution of $\mathcal{C}_1(\mathcal{G})$ appears to be more uniform than that of $\mathcal{C}_2(\mathcal{G})$, as depicted in Figure [\[fig:torus_compare\]](#fig:torus_compare){reference-type="ref" reference="fig:torus_compare"}. Subsequently, we delve into the distribution characteristics of $\mathcal{C}(\mathcal{G})$ in two distinct categories: badly approximable systems and good approximable systems. Furthermore, we provide the respective selection strategies for the region $\mathcal{G}$ within each category.
### Badly approximable systems
As proved in Theorem [Theorem 10](#thm:dens_fill){reference-type="ref" reference="thm:dens_fill"}, the distribution of $\mathcal{C}(\mathcal{G})$ in $\Omega$ is characterized by the distribution of $\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in \mathcal{G}\cap \mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ in $[0,1)^{n-d}$. When $\bm{A}\in \mathbb{R}^{(n-d)\times d}$ is a badly approximable matrix, it has the rapid filling property mentioned in Remark [Remark 21](#rem:rapid_fill){reference-type="ref" reference="rem:rapid_fill"}. Specifically, given an $\varepsilon>0$, there is a smallest region $\mathcal{G}(\varepsilon)$ with $$(1/2)\varepsilon^{-(n-d)}\leq\langle\mathcal{G}(\varepsilon)\rangle_i\leq C_i\varepsilon^{-(n-d)},~~i=1,\cdots,d,$$ such that for each $\bm{r}_0\in [0,1)^{n-d}$, the $(n-d)$-dimensional Diophantine system $$\|\bm{A}\bm{t}/\mathbb{Z}^{n-d}-\bm{r}_0\|_{\infty}\leq \varepsilon$$ has a solution $\bm{t}\in\mathcal{G}(\varepsilon)\cap\mathbb{Z}^{d}$. Thus, $\{\bm{r}=\bm{A}\bm{t}:\bm{t}\in\mathcal{G}(\varepsilon)\cap\mathbb{Z}^d\}/\mathbb{Z}^{n-d}$ is dense in $[0,1)^{n-d}$, and then $\mathcal{C}(\mathcal{G}(\varepsilon))$ is dense in $\Omega$.
The rapid filling property offers an efficient way to select region $\mathcal{G}$ for badly approximable systems. Actually, $\varepsilon$ measures the *filling precision*, and $\mathcal{G}(\varepsilon)$ is the *filling cost* of achieving $\varepsilon$. For instance, we consider a badly approximable irrational number $\alpha=\sqrt{2}$. Table [1](#tab:region){reference-type="ref" reference="tab:region"} shows the $\langle\mathcal{G}(\varepsilon)\rangle$ required for different filling precision $\varepsilon$. Specifically, given an $\varepsilon>0$, we traverse through $\mathbb{N}_+$ to search for a minimum point $t^*$ such that $$\|\sqrt{2}t^*/\mathbb{Z}-0\|_{\infty}\leq \varepsilon$$ holds, and then $\langle\mathcal{G}(\varepsilon)\rangle=t^*$. Results demonstrate an inverse relationship between $\varepsilon$ and $\langle\mathcal{G}(\varepsilon)\rangle$.
$\varepsilon$ 1.0e-02 1.0e-03 1.0e-04 1.0e-05 1.0e-06
------------------------------------------ --------- ---------- ---------- ---------- ----------
$\langle\mathcal{G}(\varepsilon)\rangle$ 9.9e+01 9.85e+02 5.74e+03 1.14e+05 1.13e+06
: Required $\langle\mathcal{G}(\varepsilon)\rangle$ for the badly approximable irrational number $\alpha=\sqrt{2}$ under different filling precision $\varepsilon$.
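The inverse scaling in Table [1](#tab:region){reference-type="ref" reference="tab:region"} can be reproduced qualitatively with a direct search. The sketch below is our own and intentionally naive: it returns the smallest $T$ such that $\{\overline{\alpha t}:1\le t\le T\}$ is $\varepsilon$-dense in $[0,1)$, i.e. every $r_0$ lies within $\varepsilon$ of some sample; the exact constants may differ slightly from the table depending on the criterion used.

```python
import math

def filling_cost(alpha, eps, t_max=100_000):
    """Smallest T such that the fractional parts of alpha*t, t = 1..T, are eps-dense
    in [0,1): equivalently, the largest circular gap between them is at most 2*eps."""
    pts = []
    for t in range(1, t_max + 1):
        pts.append((alpha * t) % 1.0)
        pts.sort()
        gaps = [b - a for a, b in zip(pts, pts[1:])] + [pts[0] + 1.0 - pts[-1]]
        if max(gaps) <= 2.0 * eps:
            return t
    return None

if __name__ == "__main__":
    for eps in (1e-2, 1e-3):
        print(eps, filling_cost(math.sqrt(2), eps))   # grows roughly like 1/eps
```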
Therefore, when using FPR method to recover the quasiperiodic functions related to badly approximable systems, we can directly select the region $\mathcal{G}$ to reach the upper bound of $\langle\mathcal{G}(\varepsilon)\rangle$, i.e., $\mathcal{G}$ fulfills $$\label{eq:bad_region}
\langle\mathcal{G}\rangle_i= C_i\varepsilon^{-(n-d)},~~i=1,\cdots,d,$$ where $C_i$ is the smallest integer upper bound of $\langle\mathcal{G}(\varepsilon)\rangle_i\varepsilon^{(n-d)}$.
### Good approximable systems {#subsub:good_appro}
For each good approximable system, there is no uniform upper bound of $\langle\mathcal{G}(\varepsilon)\rangle$ that grows linearly with respect to $\varepsilon^{-(n-d)}$, indicating the absence of rapid filling property. Consequently, the determination of $\mathcal{G}$ under this category cannot proceed in the same manner as it does for badly approximable systems. For example, consider the good approximable system $$\label{eq:good_pi}
\|\pi t/\mathbb{Z}-r_0\|_{\infty}\leq \varepsilon,~~\forall r_0\in \mathbb{T},$$ where $\pi$ is a well-known transcendental number. Table [2](#tab:approx_region){reference-type="ref" reference="tab:approx_region"} presents the $\langle\mathcal{G}(\varepsilon)\rangle$ of the good approximable system [\[eq:good_pi\]](#eq:good_pi){reference-type="eqref" reference="eq:good_pi"} required for each given filling precision $\varepsilon$. $\langle\mathcal{G}(\varepsilon)\rangle$ shows an exponential-like growth behavior, without an inverse relationship between $\langle\mathcal{G}(\varepsilon)\rangle$ and $\varepsilon$.
$\varepsilon$ 1.0e-03 1.0e-04 1.0e-05 1.0e-06 1.0e-07
------------------------------------------ ---------- ---------- ---------- ---------- ----------
$\langle\mathcal{G}(\varepsilon)\rangle$ 2.05e+05 4.16e+05 6.25e+06 1.27e+08 $>$1.0e+10
: Required $\langle\mathcal{G}(\varepsilon)\rangle$ for the good approximable irrational number $\pi$ under different filling precision $\varepsilon$.
However, we can still propose a convenient and efficient approach for selecting $\mathcal{G}$ for quasiperiodic functions related to good approximable systems. According to Lemma [\[lem:lower_bound\]](#lem:lower_bound){reference-type="ref" reference="lem:lower_bound"}, the lower bound $\langle\mathcal{G}(\varepsilon)\rangle_i\geq (1/2)\varepsilon^{-(n-d)}$, $i=1,\cdots,d$, holds for each Diophantine system. This allows us, for a good approximable system, to adopt the lower bound $\langle\mathcal{G}\rangle_i=(1/2)\varepsilon^{-(n-d)}$, $i=1,\cdots,d$, to determine $\mathcal{G}$, which we call the computable region. In this way, for a given $\varepsilon$ the size of $\mathcal{G}$ is determined immediately, without traversing, resulting in a significant reduction of the computational cost.
## Computational complexity analysis
Here, we analyze the computational complexity of the FPR method for recovering a target point $\bm{x}^*$. Assume that the size of the interpolation element $\Theta\subset\Omega$ fulfils $$h-\varepsilon \leq\|\langle\Theta\rangle\|_{\infty}\leq h+\varepsilon,$$ where $\varepsilon$ is the filling precision satisfying $\varepsilon<h/10$. Here we fix $\varepsilon=h/\tilde{c}$, where the constant $\tilde{c}>10$. $N=1/h$ denotes the number of degrees of freedom of the spatial discretization. Then $\|\langle\mathcal{G}\rangle\|_{\infty}=C\varepsilon^{-(n-d)}=C(\tilde{c}N)^{n-d}$, where $C=\max_{i=1}^dC_i$. The traverse process in $\mathcal{G}\cap\mathbb{Z}^d$ can be performed simultaneously in each dimension, and computing $\|\bm{b}_i-\bm{b}^*\|_{\infty}$ requires $O(n-d)$ subtractions. Hence, the computational complexity of obtaining the interpolation element of $\bm{x}^*$ is $$O\left(d(n-d)\|\langle\mathcal{G}\rangle\|_{\infty}\right)=O\left(d(n-d)C\varepsilon^{-(n-d)}\right)=O\left(N^{n-d}\right).$$
To use an $n$-dimensional LIF-$k$, $K=(k+1)^n$ interpolation nodes are required. Thus, the computational complexity of interpolation is $O(K)$. Since $K\ll N$, the computational complexity of the FPR method is at the level of $O\left(N^{n-d}\right)$. It is worth emphasizing that when using the FPR method for practical problems, the interpolation nodes can be predetermined, which means that the computational cost of using the FPR method could be almost negligible.
## Convergence analysis
In this subsection, we provide the convergence analysis of the FPR method for recovering a target point within an interpolation element $\Theta\subset\Omega\subset\mathbb{R}^n$. To facilitate this analysis, we introduce the following multi-index notations. For a vector $\bm{\alpha}=(\alpha_i)^n_{i=1}\in \mathbb{R}^n$, $|\bm{\alpha}|:=\alpha_1+\cdots+\alpha_n$. $\mathcal{C}^{\bm{\alpha}}(\mathbb{R}^n)$ denotes the set of functions on $\mathbb{R}^n$ with $\alpha_i$-order continuous derivatives along the $i$-th coordinate direction. For a function $F(\bm{s})\in \mathcal{C}^{\bm{\alpha}}(\mathbb{R}^n)$, its $\bm{\alpha}$-order derivative is $$D^{\bm{\alpha}}F(\bm{s}):= \dfrac{\partial^{|\bm{\alpha}|}F(\bm{s})}{\partial s_1^{\alpha_1}\cdots\partial s_n^{\alpha_n}}.$$ Define space $$\mathcal{L}^2(\Omega):=\{F:\|F\|_{\mathcal{L}^2(\Omega)}< \infty\},$$ where $$\|F\|_{\mathcal{L}^2(\Omega)}=\Bigg(\int_{\Omega}|F(\bm{s})|^2d\bm{s}\Bigg)^{1/2}.$$ For any integer $m\geq 0$, the Hilbert space on $\Omega$ is $$\mathcal{H}^m(\Omega):=\{F\in \mathcal{L}^2(\Omega):\|F\|_{m,\Omega}< \infty\},$$ where $$\|F\|_{m,\Omega}=\left(\sum_{|\bm{\alpha}|\leq m}\|D^{\bm{\alpha}}F\|^2_{\mathcal{L}^2(\Omega)}\right)^{1/2}.$$ The semi-norm of $\mathcal{H}^m(\Omega)$ is defined as $$|F|_{m,\Omega}=\left(\sum_{|\bm{\alpha}|= m}\|D^{\bm{\alpha}}F\|^2_{\mathcal{L}^2(\Omega)}\right)^{1/2}.$$
**Definition 29**. Two elements $\Theta$ and $\hat{\Theta}$ are affine equivalent, if there is an invertible affine transformation $$\begin{aligned}
&\mathcal{A}: &\hat{\Theta}&\to \Theta,\\
&&\hat{\bm{s}}&\mapsto \bm{s}=\bm{B}\hat{\bm{s}}+\bm{b},
\end{aligned}$$ where $\bm{B}\in\mathbb{R}^{n\times n}$ is an invertible matrix and $\bm{b}\in \mathbb{R}^n$.
**Lemma 30**. *[@scott1990finite] If two elements $\Theta$ and $\hat{\Theta}$ are affine equivalent, for any $F\in \mathcal{H}^m(\Theta)$, let $$\hat{F}:=F\circ \mathcal{A},$$ then $\hat{F}\in\mathcal{H}^m(\hat{\Theta})$ and there is a positive constant $C=C(m,n)$ such that $$\begin{aligned}
|\hat{F}|_{m,\hat{\Theta}} &\leq C\|\bm{B}\|^m|\det(\bm{B})|^{-\frac{1}{2}}|F|_{m,\Theta},\\
|F|_{m,\Theta} &\leq C\|\bm{B}^{-1}\|^m|\det(\bm{B})|^{\frac{1}{2}}|\hat{F}|_{m,\hat{\Theta}}.
\end{aligned}$$ Here, $\|\cdot\|$ is the Euclidean norm.*
**Lemma 31**. *[@scott1990finite] The matrix $\bm{B}$ defined in Definition [Definition 29](#def:affine){reference-type="ref" reference="def:affine"} satisfies $$\|\bm{B}\|\leq \frac{h_{\Theta}}{\hat{\rho}_{\hat{\Theta}}},~~\|\bm{B}^{-1}\|\leq \frac{\hat{h}_{\hat{\Theta}}}{\rho_{\Theta}},$$ where $$\begin{cases}
h_{\Theta}=\rm{diam}\Theta,&\rho_{\Theta}=\sup\{\rm{diam}\Gamma:closed~ball~\Gamma\subset\Theta\},\\
\hat{h}_{\hat{\Theta}}=\rm{diam}\hat{\Theta},&\hat{\rho}_{\hat{\Theta}}=\sup\{\rm{diam}\Gamma:closed~ball~ \Gamma\subset\hat{\Theta}\}.
\end{cases}$$*
Note that the interpolation element $\Theta$ is a regular shape, i.e., there exists a constant $\kappa>0$ such that $h_{\Theta}/\rho_{\Theta}\leq \kappa$. Denote $P_k(\hat{\Theta})$ as the space consisting of polynomials of degree $k$ or less in $\hat{\Theta}$, and $\hat{\Pi}_k$ as the LIF-$k$ operator over $\hat{\Theta}$. We have the convergence result of the FPR method.
**Theorem 32**. *If $0\leq m\leq k+1$, then for $\hat{\Pi}_k\in\mathcal{L}(\mathcal{H}^{k+1}(\hat{\Theta});\mathcal{H}^m(\hat{\Theta}))$, $\hat{\Pi}_k\hat{F}=\hat{F}$ for each $\hat{F}\in P_k(\hat{\Theta})$. Here, $\mathcal{L}(A; B)$ denotes the set of linear operators from space $A$ to space $B$. Over the interpolation element $\Theta~(\text{affine equivalent with}~\hat{\Theta})$, operator $\Pi_k$ fulfils $$(\Pi_kF)^{\hat{}}=\hat{\Pi}_k\hat{F},~~ \hat{F}\in P_k(\hat{\Theta}).$$ Then there is a constant $C$ such that $$\|F-\Pi_kF\|_{m,\Theta}\leq Ch_{\Theta}^{k+1-m}|F|_{k+1,\Theta},~~F\in \mathcal{H}^{k+1}(\Theta).$$ [\[thm:conv\]]{#thm:conv label="thm:conv"}*
*Proof.* Since there is no error in the process of selecting finite points, the only error in the FPR method comes from the interpolation error. According to the interpolation error analysis in [@scott1990finite], the above conclusion can be established. ◻
# Numerical experiments {#sec:num}
In this section, we present two classes of quasiperiodic systems to show the performance of the FPR method. One class consists of analytic quasiperiodic functions; the other is a piecewise constant quasicrystal.
We use the $\ell^{\infty}$-norm to measure the error between the numerical result and the exact result, denoted by $e(\langle\Theta\rangle)$ with respect to the interpolation element $\Theta$. We can then estimate the order of accuracy by the logarithmic ratio of the errors of two successive refinements $$\text{Order}=\log_{2}\left(\frac{e(2\langle\Theta\rangle)}{e(\langle\Theta\rangle)}\right).$$
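As a quick arithmetic check of this formula, using the first two errors reported later for the 3D LIF-1 test (Table [4](#tab:example2_error){reference-type="ref" reference="tab:example2_error"}):

```python
import math
e_2h, e_h = 1.1214e-01, 2.7964e-02              # errors for element sizes 2<Θ> and <Θ>
print(f"Order = {math.log2(e_2h / e_h):.2f}")   # prints 2.00, as listed in the table
```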
To apply the FPR method for recovering $d$-dimensional quasiperiodic functions related to badly approximable systems, given a filling precision $\varepsilon$, we select the region $\mathcal{G}$ given by $$\mathcal{G}=[0,C_1\varepsilon^{-(n-d)})\times \cdots \times[0,C_d\varepsilon^{-(n-d)}),$$ where $n$ is the dimension of superspace and $C_i~(i=1,\cdots,d)$ is determined by $$C_i=\max_{j=0,\cdots,4}\langle\mathcal{G}(\varepsilon_j)\rangle_i\varepsilon^{n-d}_j,~~\varepsilon_j=\frac{1}{10^{2+j}}.$$ Since $j$ is small, determining constant $C_i~(i=1,\cdots,d)$ in this way consumes almost no computational cost.
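For instance, for $\bm{P}=(1,\sqrt{2})$ the constant $C_1$ can be read off from the tabulated filling costs (Table [3](#tab:2to1_region){reference-type="ref" reference="tab:2to1_region"}, which lists $\langle\mathcal{G}(\varepsilon_j)\rangle/2\pi$); a small sketch of this bookkeeping, with the rounding convention assumed by us:

```python
import math
sizes_over_2pi = [9.90e+01, 9.85e+02, 5.74e+03, 1.14e+05, 1.13e+06]   # Table 3 values
eps = [10.0 ** (-(2 + j)) for j in range(5)]                          # eps_j = 10^{-(2+j)}
C1 = max(s * 2.0 * math.pi * e for s, e in zip(sizes_over_2pi, eps))  # <G(eps_j)> * eps_j
print(math.ceil(C1))   # smallest integer upper bound: 8, matching G = [0, 8/eps) below
```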
As discussed in Subsubsection [4.2.2](#subsub:good_appro){reference-type="ref" reference="subsub:good_appro"}, for the quasiperiodic functions related to good approximable systems, we select the computable region $$\mathcal{G}=[0,0.5\varepsilon^{-(n-d)})^d.$$
## Smooth quasiperiodic functions
In this subsection, we present four different smooth quasiperiodic functions to demonstrate the accuracy and efficiency of FPR method. The first three examples are related to badly approximable systems, and the last one is related to a good approximable system.
**Example 33**. Consider a 1D quasiperiodic function $$\label{eq:2to1}
f(x)=\cos x+\cos\sqrt{2}x,\quad x\in \mathbb{R}.$$
$f(x)$ can be embedded into the 2D periodic function $F(\bm{s})=\cos s_1+\cos s_2$, $\bm{s}=(s_1,s_2)^T\in \mathbb{R}^2$ using the projection matrix $\bm{P}=(1,\sqrt{2})$. The unit cell of $F(\bm{s})$ is $\Omega=[0,2\pi)^2$. Since $\sqrt{2}$ is a badly approximable irrational number, the corresponding Diophantine system has the rapid filling property. Table [3](#tab:2to1_region){reference-type="ref" reference="tab:2to1_region"} shows the inverse relationship between the filling precision $\varepsilon$ and the size of region $\mathcal{G}(\varepsilon)$. Therefore, selecting a relatively small region $\mathcal{G}$ for the FPR method is sufficient to recover the global information of the quasiperiodic function [\[eq:2to1\]](#eq:2to1){reference-type="eqref" reference="eq:2to1"}.
$\varepsilon$ 1.0e-02 1.0e-03 1.0e-04 1.0e-05 1.0e-06
----------------------------------------------- ---------- ---------- ---------- ---------- ----------
$\langle\mathcal{G}(\varepsilon)\rangle/2\pi$ 9.90e+01 9.85e+02 5.74e+03 1.14e+05 1.13e+06
: Required $\langle\mathcal{G}(\varepsilon)\rangle$ with different filling precision $\varepsilon$ for projection matrix $\bm{P}=(1,\sqrt{2})$.
First, we use the FPR method with 2D LIF-1 in $\mathcal{G}=[0,8\varepsilon^{-1})$ to recover [\[eq:2to1\]](#eq:2to1){reference-type="eqref" reference="eq:2to1"} when $x\in[6284,6286)$. Figure [\[fig:linear\]](#fig:linear){reference-type="ref" reference="fig:linear"} shows the node selection of FPR method with two sizes of interpolation elements. For each target point $x^*\in [6284, 6286)$, its image $\overline{\bm{s}^*}$ under the combination map $\mathcal{C}$ is marked with red dot. We have circled each interpolation element $\Theta$ that contains the target point using yellow dashed lines. Four blue dots in each $\Theta$ are used as interpolation nodes. In Figure [\[fig:linear_a\]](#fig:linear_a){reference-type="ref" reference="fig:linear_a"}, the size of interpolation elements is $\langle\Theta\rangle=(0.4,0.3)$. After one refinement, the size of interpolation element becomes $\langle\Theta\rangle=(0.2,0.15)$, as shown in Figure [\[fig:linear_b\]](#fig:linear_b){reference-type="ref" reference="fig:linear_b"}.
Table [\[tab:example1_error\]](#tab:example1_error){reference-type="ref" reference="tab:example1_error"} records the errors and accuracy orders obtained by using the FPR method with 2D LIF-1, LIF-3, and LIF-5 to recover the quasiperiodic function [\[eq:2to1\]](#eq:2to1){reference-type="eqref" reference="eq:2to1"} when $x\in[6284,6286)$. As Theorem [\[thm:conv\]](#thm:conv){reference-type="ref" reference="thm:conv"} predicts, using higher-order interpolation methods can result in better approximation accuracy.
Next, we further demonstrate the power of the FPR method in recovering the global quasiperiodic function from finitely many points. Here, we select the region $\mathcal{G}=[0,8000)$ and the size of the interpolation elements $\langle\Theta\rangle=(0.4,0.3)$. In this case, only 320 interpolation nodes are needed to recover the global information of $f(x)$. Figure [\[fig:2to1_global\]](#fig:2to1_global){reference-type="ref" reference="fig:2to1_global"} compares the result recovered by the FPR method with the exact value of [\[eq:2to1\]](#eq:2to1){reference-type="eqref" reference="eq:2to1"} when $x\in[10^6, 10^6+80)$. The error between the two is 3.1147e-02.
![image](./2to1_global.png){width="80%"}
**Example 34**. Consider a 1D quasiperiodic function $$\label{eq:3to1}
f(x)=\cos x+\cos\sqrt{2}x+\cos\sqrt{3}x,\quad x\in \mathbb{R}.$$
Compared with Example [Example 33](#exmp:1Dbad){reference-type="ref" reference="exmp:1Dbad"}, which contains only one irrational frequency $\sqrt{2}$, this quasiperiodic function $f(x)$ has two irrational frequencies $\sqrt{2}$ and $\sqrt{3}$. Correspondingly, $f(x)$ is embedded into the 3D periodic function $F(\bm{s})=\cos s_1+\cos s_2+\cos s_3$ with unit cell $\Omega=[0,2\pi)^3$, through the projection matrix $\bm{P}=(1,\sqrt{2},\sqrt{3})$. Since $\sqrt{2}$ and $\sqrt{3}$ are both quadratic irrational numbers, $f(x)$ is related to a badly approximable system. There is an inverse relationship between the size of $\mathcal{G}(\varepsilon)$ and the filling precision $\varepsilon$, as shown in Table [\[tab:3to1_region\]](#tab:3to1_region){reference-type="ref" reference="tab:3to1_region"}. Concretely, the size of $\mathcal{G}(\varepsilon)$ is the maximum of the sizes produced by $\bm{P}_1=(1,\sqrt{2})$ and $\bm{P}_2=(1,\sqrt{3})$ for a given filling precision $\varepsilon$. From this example, one can see that, although the dimension of the superspace increases, the required interpolation points in $\mathcal{G}$ still belong to $\mathbb{R}$; this demonstrates that the FPR method is a non-lifting algorithm.
We then employ the FPR method with 3D LIF-1 in the region $\mathcal{G}=[0,9\varepsilon^{-1})$, to recover [\[eq:3to1\]](#eq:3to1){reference-type="eqref" reference="eq:3to1"} when $x\in [6284, 6286)$. The 3D LIF-1 requires eight interpolation nodes within each interpolation element $\Theta$. Figure [\[fig:3to1\]](#fig:3to1){reference-type="ref" reference="fig:3to1"} shows the node selection for FPR method with two sizes of interpolation elements. Table [4](#tab:example2_error){reference-type="ref" reference="tab:example2_error"} lists the error and accuracy order, consistent with the prediction by Theorem [\[thm:conv\]](#thm:conv){reference-type="ref" reference="thm:conv"}.
$\langle\Theta\rangle$ Error Order
------------------------ ------------ -------
(0.4, 0.4, 0.3) 1.1214e-01
(0.2, 0.2, 0.15) 2.7964e-02 2.00
(0.1, 0.1, 0.075) 6.9445e-03 2.01
(0.05, 0.05, 0.0375) 1.7093e-03 2.02
: Error of 3D LIF-1 FPR method for recovering quasiperiodic function [\[eq:3to1\]](#eq:3to1){reference-type="eqref" reference="eq:3to1"} with interpolation element sizes $\langle\Theta\rangle$.
**Example 35**. Consider a 2D quasiperiodic function $$\label{eq:3to2}
f(x, y)=\cos x+\cos\sqrt{2}x+\cos y,\quad (x,y)\in \mathbb{R}^2.$$
There exists a projection matrix $$\label{eq:3to2_pmatrix}
\bm{P}=\begin{bmatrix}
1 & \sqrt{2} & 0 \\
0 & 0 & 1\\
\end{bmatrix}$$ such that $f(x,y)=F(\bm{P}^T(x,y)^T)$, where $F(\bm{s})=\cos s_1+\cos s_2+\cos s_3$, $\bm{s}\in \mathbb{R}^3$, is the parent function of $f(x, y)$. The unit cell is $\Omega=[0,2\pi)^3$. Table [5](#tab:3to2_region){reference-type="ref" reference="tab:3to2_region"} shows the inverse relationship between the filling precision $\varepsilon$ and $\langle\mathcal{G}\rangle_1$. Since $f(x,y)$ is periodic in the $y$ direction, it is sufficient to take $\langle\mathcal{G}\rangle_2$ as $2\pi$. Therefore, given the filling precision $\varepsilon$, we select $\mathcal{G}=[0, 8\varepsilon^{-1})\times[0,2\pi)$.
$\varepsilon$ 1.0e-02 1.0e-03 1.0e-04 1.0e-05 1.0e-06
------------------------------------------------- ---------- ---------- ---------- ---------- ----------
$\langle\mathcal{G}(\varepsilon)\rangle_1/2\pi$ 9.90e+01 9.85e+02 5.74e+03 1.14e+05 1.13e+06
$\langle\mathcal{G}(\varepsilon)\rangle_2/2\pi$ 1.0 1.0 1.0 1.0 1.0
: Required $\langle\mathcal{G}(\varepsilon)\rangle$ of different filling precision $\varepsilon$ for projection matrix [\[eq:3to2_pmatrix\]](#eq:3to2_pmatrix){reference-type="eqref" reference="eq:3to2_pmatrix"}.
We use FPR method with 3D LIF-1 to recover [\[eq:3to2\]](#eq:3to2){reference-type="eqref" reference="eq:3to2"} when $(x,y)^T\in [6284, 6286)^2$. The 3D LIF-1 requires eight interpolation nodes in each interpolation element $\Theta$. Figure [\[fig:3to2\]](#fig:3to2){reference-type="ref" reference="fig:3to2"} shows the node selection for FPR method with different interpolation elements. Table [6](#tab:example3_error){reference-type="ref" reference="tab:example3_error"} records the error and accuracy order, which is consistent with theoretical results.
$\langle\Theta\rangle$ Error Order
------------------------ ------------ -------
(0.8, 0.3, 0.3) 1.1654e-01
(0.4, 0.15, 0.15) 2.6666e-02 2.13
(0.2, 0.075, 0.075) 5.8759e-03 2.18
: Error for FPR method with 3D LIF-1 when solving quasiperiodic function [\[eq:3to2\]](#eq:3to2){reference-type="eqref" reference="eq:3to2"} for interpolation element sizes $\langle\Theta\rangle$.
**Example 36**. Consider a 1D quasiperiodic function with a transcendental frequency $\pi$ $$\label{eq:approx}
f(x)=\cos x+\cos\pi x,\quad x\in \mathbb{R}.$$
The projection matrix is $\bm{P}=(1,\pi)$. Since $\pi$ is a good approximable number, we can directly select finite points in the computable region $\mathcal{G}=[0,0.5\varepsilon^{-1})$ as discussed in Section [4.2.2](#subsub:good_appro){reference-type="ref" reference="subsub:good_appro"}. The sizes of required computable region $\mathcal{G}$ with different filling precision $\varepsilon$ are shown in Table [7](#tab:suboptimal_region){reference-type="ref" reference="tab:suboptimal_region"}.
$\varepsilon$ 1.0e-02 1.0e-03 1.0e-04 1.0e-05 1.0e-06
---------------------------------- --------- --------- --------- --------- ---------
$\langle\mathcal{G}\rangle/2\pi$ 5.0e+01 5.0e+02 5.0e+03 5.0e+04 5.0e+05
: Required sizes $\langle\mathcal{G}\rangle$ of computable region $\mathcal{G}$ for different filling precision $\varepsilon$.
We apply the FPR method with 2D LIF-1 to recover [\[eq:approx\]](#eq:approx){reference-type="eqref" reference="eq:approx"} when $x\in[6284,6286)$. As demonstrated in Example [Example 33](#exmp:1Dbad){reference-type="ref" reference="exmp:1Dbad"}, we similarly choose interpolation elements within $\Omega=[0,2\pi)^2$ according to different target points. Table [8](#tab:example4_fpr_error){reference-type="ref" reference="tab:example4_fpr_error"} shows that the recovery error gradually decreases as the interpolation elements decrease in size. Note that the interpolation element size cannot be exactly halved as in the three examples above, due to the selection of the computable region $\mathcal{G}$, so the accuracy order of the FPR method exhibits fluctuations.
$\langle\Theta\rangle$ Error Order
------------------------ ------------ -------
(0.4048, 0.3) 1.1717e-01
(0.2024, 0.15) 2.0538e-02 2.51
(0.1012, 0.075) 5.1695e-03 1.99
(0.0667, 0.0375) 1.6952e-03 1.61
(0.0328, 0.0188) 4.2629e-04 1.99
: Error of FPR method with 2D LIF-1 of recovering quasiperiodic function [\[eq:approx\]](#eq:approx){reference-type="eqref" reference="eq:approx"} with interpolation element sizes $\langle\Theta\rangle$.
## Piecewise constant quasicrystals
**Example 37**. Consider a 1D Fibonacci photonic quasicrystal as shown in Figure [\[fig:fibonacci1\]](#fig:fibonacci1){reference-type="ref" reference="fig:fibonacci1"}, which is a black line in a 2D tiling [@vardeny2013optics; @tanese2014fractal]. The blue (white) square has side length $A$ ($B$), and its corresponding dielectric constant is $\varepsilon_A=4.84$ ($\varepsilon_B=2.56$). The sequence satisfies $F_n=F_{n-2}+F_{n-1},~n=3,4,\cdots$, with $F_1=B$, $F_2=A$, and $\lim\limits_{j\to\infty}F_{j+1}/F_{j}=(1+\sqrt{5})/2:=\lambda$.
Let the angle $\phi$ be defined by $\tan \phi =\lambda$. The 2D tiling can be transformed into a periodic structure by a rotation matrix defined as $$\begin{bmatrix}
\sin \phi & \cos \phi \\
\cos \phi & -\sin \phi \\
\end{bmatrix}.$$ All unit cells are surrounded by the yellow border in Figure [\[fig:fibonacci1\]](#fig:fibonacci1){reference-type="ref" reference="fig:fibonacci1"}. Figure [\[fig:fibonacci3\]](#fig:fibonacci3){reference-type="ref" reference="fig:fibonacci3"} shows the dielectric constant over the unit cell $\Omega=[0,1.9)^2$. Therefore, the projection matrix of the 1D Fibonacci photonic quasicrystal is $\bm{P}=(\sin \phi, \cos \phi)$.
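For reference, the following sketch (ours) generates the Fibonacci word by the concatenation rule above and reads off the piecewise constant dielectric profile; the segment lengths `len_A` and `len_B` are placeholders, since the actual lengths are fixed by the projection geometry and are not repeated here.

```python
def fibonacci_word(n):
    """F_1 = 'B', F_2 = 'A', F_n = F_{n-2} + F_{n-1} (concatenation of words)."""
    f_prev, f_curr = "B", "A"
    for _ in range(n - 2):
        f_prev, f_curr = f_curr, f_prev + f_curr
    return f_curr

def dielectric(x, word, len_A=1.0, len_B=0.62, eps_A=4.84, eps_B=2.56):
    """Dielectric constant of the segment of the quasicrystal containing x >= 0."""
    pos = 0.0
    for letter in word:
        seg = len_A if letter == "A" else len_B
        if pos <= x < pos + seg:
            return eps_A if letter == "A" else eps_B
        pos += seg
    raise ValueError("x lies beyond the generated word; increase n")

print(fibonacci_word(8), dielectric(3.3, fibonacci_word(25)))
```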
Given the filling precision $\varepsilon=10^{-3}$, we select the computable region $\mathcal{G}=[0,500)$ according to Table [7](#tab:suboptimal_region){reference-type="ref" reference="tab:suboptimal_region"}. When we select the size of the interpolation elements $\langle\Theta\rangle=(0.08,0.08)$, only 480 interpolation nodes are needed to recover the global dielectric constant of the 1D Fibonacci photonic quasicrystal. Figure [\[fig:fibonacci5_2\]](#fig:fibonacci5_2){reference-type="ref" reference="fig:fibonacci5_2"} shows the recovered dielectric constant of the 1D Fibonacci photonic quasicrystal when $x\in[1000,1020)$. The FPR method accurately recovers the dielectric constant at continuity points, compared with the exact value shown in Figure [\[fig:fibonacci5_1\]](#fig:fibonacci5_1){reference-type="ref" reference="fig:fibonacci5_1"}. Moreover, the FPR method is able to capture the positions of the discontinuity points. We also present Figure [\[fig:fibonacci_global\]](#fig:fibonacci_global){reference-type="ref" reference="fig:fibonacci_global"} to demonstrate the effectiveness of the FPR method for recovering the global function.
![image](./fibonacci_global.png){width="80%"}
# Conclusion and outlook {#sec:con}
This paper is concerned with developing a new algorithm for recovering both smooth and non-smooth quasiperiodic systems. Different from existing spectral Galerkin methods, we propose the FPR method, which accurately recovers the global quasiperiodic system by an interpolation technique based on finitely many points. To support our method theoretically, we establish a homomorphism between the physical space of the quasiperiodic function and the high-dimensional torus. Moreover, by exploiting the arithmetic properties of irrational numbers, we design the algorithmic steps of the FPR method to ensure accuracy and efficiency in the recovery process. We also present the corresponding convergence analysis and computational complexity analysis. We apply our algorithm to two classes of quasiperiodic problems: continuous quasiperiodic functions and a piecewise constant Fibonacci quasicrystal. Numerical results show the effectiveness and superiority of the FPR approach, while the PM method fails to recover the non-smooth quasiperiodic systems. Furthermore, the experiments demonstrate that the FPR method, as a non-lifting algorithm, exhibits substantial computational advantages compared to the state-of-the-art projection method.
There remains much work to be done based on the proposed method. First, we aim to extend the convergence analysis of the FPR method to a more general form that can accommodate non-smooth parent functions. Second, we intend to introduce a parallel strategy and improve the selection of sampling points in the algorithm implementation, especially for high-dimensional problems. Third, we plan to develop the FPR method to handle singular quasiperiodic systems by integrating adaptive strategies. Finally, we will apply the FPR method to solve physical problems and to discover exotic phenomena and physical laws. These endeavors will not only enrich the theoretical foundations of the FPR method, but also contribute to practical applications in diverse fields, such as physics, engineering, and materials science.
A. Avila and S. Jitomirskaya, *The ten martini problem*, Annals of Mathematics, **170** (2009), no. 1, 303--342. [doi:10.4007/annals.2009.170.303](https://doi.org/10.4007/annals.2009.170.303)
M. Baake and U. Grimm, *Aperiodic order*, vol. 1, Cambridge University Press, 2013. [doi:10.1017/9781139033862](https://doi.org/10.1017/9781139033862)
H. Bohr, *Almost periodic functions*, Courier Dover Publications, 2018.
E. B. Burger, *Exploring the number jungle: a journey into Diophantine analysis*, American Mathematical Society, 2000.
D. Cao, J. Shen, and J. Xu, *Computing interface with quasiperiodicity*, Journal of Computational Physics, **424** (2021), 109863. [doi:10.1016/j.jcp.2020.109863](https://doi.org/10.1016/j.jcp.2020.109863)
Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, *Unconventional superconductivity in magic-angle graphene superlattices*, Nature, **556** (2018), 43--50. [doi:10.1038/nature26160](https://doi.org/10.1038/nature26160)
D. Damanik and D. Lenz, *Uniform spectral properties of one-dimensional quasicrystals, I. Absence of eigenvalues*, Communications in Mathematical Physics, **207** (1999), 687--696. [doi:10.1007/s002200050742](https://doi.org/10.1007/s002200050742)
Z. Gao, Z. Xu, Z. Yang, F. Ye, *Pythagoras Superposition Principle for Localized Eigenstates of 2D Moiré Lattices*, Physical Review A, **108** (2023), 013513. [doi:10.1103/PhysRevA.108.013513](https://doi.org/10.1103/PhysRevA.108.013513)
K. Giergiel, A. Kuroś, and K. Sacha, *Discrete time quasicrystals*, Physical Review B, **99** (2019), no. 22, 220303. [doi:10.1103/PhysRevB.99.220303](https://doi.org/10.1103/PhysRevB.99.220303)
V. Goblot, A. Štrkalj, N. Pernet, J. L. Lado, C. Dorow, A. Lemaı̂tre, L. Le Gratiet, A. Harouri, I. Sagnes, S. Ravets, A. Amo, J. Bloch, and O. Zilberberg, *Emergence of criticality through a cascade of delocalization transitions in quasiperiodic chains*, Nature Physics, **16** (2020), no. 8, 832--836. [doi:10.1038/s41567-020-0908-7](https://doi.org/10.1038/s41567-020-0908-7)
A. Granville, *Number theory revealed: a masterclass*, vol. 127, American Mathematical Society, 2020.
G. H. Hardy and E. M. Wright, *An introduction to the theory of numbers*, Oxford University Press, 1979.
D. R. Hofstadter, *Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields*, Physical Review B, **14** (1976), no. 6, 2239. [doi:10.1103/PhysRevB.14.2239](https://doi.org/10.1103/PhysRevB.14.2239)
K. Jiang, S. Li, and P. Zhang, *Numerical analysis of computing quasiperiodic systems*, arXiv:2210.04384, (2022).
K. Jiang, S. Li, and P. Zhang, *On the approximation of quasiperiodic functions with Diophantine frequencies by periodic functions*, arXiv:2304.04334, (2023).
K. Jiang, W. Si, and J. Xu, *Tilt grain boundaries of hexagonal structures: a spectral viewpoint*, SIAM Journal on Applied Mathematics, **82** (2022), no. 4, 1267--1286. [doi:10.1137/21M1463288](https://doi.org/10.1137/21M1463288)
K. Jiang, J. Tong, P. Zhang, and A.-C. Shi, *Stability of two-dimensional soft quasicrystals in systems with two length scales*, Physical Review E, **92** (2015), no. 4, 042159. [doi:10.1103/PhysRevE.92.042159](https://doi.org/10.1103/PhysRevE.92.042159)
K. Jiang and P. Zhang, *Numerical methods for quasicrystals*, Journal of Computational Physics, **256** (2014), 428--440. [doi:10.1016/j.jcp.2013.08.034](https://doi.org/10.1016/j.jcp.2013.08.034)
K. Jiang and P. Zhang, *Numerical mathematics of quasicrystals*, Proceedings of the International Congress of Mathematicians: Rio de Janeiro 2018, (2018), 3591--3609. [doi:10.1142/9789813272880_0193](https://doi.org/10.1142/9789813272880_0193)
X. Li and K. Jiang, *Numerical simulation for quasiperiodic quantum dynamical systems*, Journal on Numerical Methods and Computer Applications, **42** (2021), no. 1, 3. [doi:10.12288/szjs.s2020-0694](https://computmath.cjoe.ac.cn/szjs/CN/10.12288/szjs.s2020-0694)
J. Liouville, *Sur des classes très-étendues de quantités dont la valeur n'est ni algébrique ni même réductible à des irrationnelles algébriques*, Comptes Rendus de l'Académie des Sciences, **18** (1844), 883--885.
R. Merlin, K. Bajema, R. Clarke, F.-Y. Juang, and P. Bhattacharya, *Quasiperiodic gaas-alas heterostructures*, Physical Review Letters, **55** (1985), no. 17, 1768. [doi:10.1103/PhysRevLett.55.1768](https://doi.org/10.1103/PhysRevLett.55.1768)
Y. Meyer, *Algebraic numbers and harmonic analysis*, Elsevier, 2000.
R. Penrose, *The role of aesthetics in pure and applied mathematical research*, Bulletin of the Institute of Mathematics and Its Applications, **10** (1974), 266--271.
H. Poincaré, *Sur le problème des trois corps et les équations de la dynamique*, Acta Mathematica, **13** (1890), no. 1, A3--A270.
L. R. Scott and S. Zhang, *Finite element interpolation of nonsmooth functions satisfying boundary conditions*, Mathematics of Computation, **54** (1990), no. 190, 483--493. [doi:10.1090/S0025-5718-1990-1011446-7](https://doi.org/10.1090/S0025-5718-1990-1011446-7)
M. Senechal, *Quasicrystals and geometry*, Cambridge University Press Archive, 1996.
D. Shechtman, I. Blech, D. Gratias, and J. W. Cahn, *Metallic phase with long-range orientational order and no translational symmetry*, Physical Review Letters, **53** (1984), no. 20, 1951. [doi:10.1103/PhysRevLett.53.1951](https://doi.org/10.1103/PhysRevLett.53.1951)
W. Steurer and S. Deloudi, *Crystallography of quasicrystals: concepts, methods and structures*, vol.126, Springer Verlag, 2009.
A. Sutton and R. Balluffi, *Interfaces in crystalline materials*, Clarendon Press, 1995.
D. Tanese, E. Gurevich, F. Baboux, T. Jacqmin, A. Lemaı̂tre, E. Galopin, I. Sagnes, A. Amo, J. Bloch, and E. Akkermans, *Fractal energy spectrum of a polariton gas in a Fibonacci quasiperiodic potential*, Physical Review Letters, **112** (2014), no. 14, 146404. [doi:10.1103/PhysRevLett.112.146404](https://doi.org/10.1103/PhysRevLett.112.146404)
Z. V. Vardeny, A. Nahata, and A. Agrawal, *Optics of photonic quasicrystals*, Nature Photonics, **7** (2013), no. 3, 177--187. [doi:10.1038/NPHOTON.2012.343](https://doi.org/10.1038/NPHOTON.2012.343)
M. Verbin, O. Zilberberg, Y. Lahini, Y. E. Kraus, and Y. Silberberg, *Topological pumping over a photonic Fibonacci quasicrystal*, Physical Review B, **91** (2015), no. 6, 064201.
C. Wang, F. Liu, and H. Huang, *Effective model for fractional topological corner modes in quasicrystals*, Physical Review Letters, **129** (2022), no. 5, 056403.
X. Zeng, G. Ungar, Y. Liu, V. Percec, A. E. Dulcey, and J. K. Hobbs, *Supramolecular dendritic liquid quasicrystals*, Nature, **428** (2004), no. 6979, 157--160. [doi:10.1038/nature02368](https://doi.org/10.1038/nature02368)
P. Zhang and X. Zhang, *An efficient numerical method of Landau--Brazovskii model*, Journal of Computational Physics, **227** (2008), no. 11, 5859--5870. [doi:10.1016/j.jcp.2008.02.021](https://doi.org/10.1016/j.jcp.2008.02.021)
[^1]: National Natural Science Foundation of China (Grant No. 12171412, Grant No. 12288101). KJ was supported in part by Natural Science Foundation for Distinguished Young Scholars of Hunan Province (2021JJ10037). QZ was supported in part by Hunan Provincial Innovation Foundation for Postgraduate (CX20220647).
| arxiv_math | {
"id": "2309.13236",
"title": "Accurately recover global quasiperiodic systems by finite points",
"authors": "Kai Jiang, Qi Zhou, Pingwen Zhang",
"categories": "math.NA cs.NA",
"license": "http://creativecommons.org/licenses/by-nc-nd/4.0/"
} |
---
abstract: |
The Banach-Mazur game, Schmidt's game and McMullen's absolute winning game are three quintessential intersection games. We investigate their determinacy on the real line when the target set for either player is a Bernstein set, a non-Lebesgue measurable set whose construction depends on the axiom of choice.
author:
- James Atchley, Lior Fishman, Saisneha Ghatti
title: Intersection Games and Bernstein Sets
---
# Introduction
The Banach-Mazur game was first mentioned in the so-called Scottish Book. The Scottish Book is a collection of problems raised by well-known mathematicians between the world wars in a Scottish bar (hence the name) in Poland. Some of these problems are solved, some unsolved, and some will probably stay unsolvable. As the story goes, one of the mathematicians would pose a question to the rest of his friends, and solving it would result in winning a prize, ranging from a small beer to a live goose. For this paper's purposes, the Scottish Book featured one of the first infinite positional games of perfect information. Referred to now as Problem 43 in the book, the question was proposed by S. Mazur. He defined a game, thereafter known as the Banach-Mazur game, in which it is easily seen that the second player wins on a co-meager set. He conjectured that this winning condition is not only sufficient but also necessary. Banach proved this conjecture for games played on the real line, and in 1957 J. Oxtoby [@Oxtoby] proved a generalized version of Mazur's conjecture. Coincidentally, D. Mauldin, a faculty member of our home university, the University of North Texas, was an editor of a modern translation of the Scottish Book [@Mauldin]. In 1966, W. Schmidt [@Schmidt] used a modified version of the Banach-Mazur game (thereafter known as Schmidt's game) to prove that the set of badly approximable numbers (vectors) has full Hausdorff dimension in $\mathbb{R}^d$. More recently, in 2009, C. T. McMullen [@McMullen] introduced yet another version of the Banach-Mazur game and showed that winning sets of this new game are preserved under quasiconformal maps. He named his version the Absolute Winning Game; we will call it McMullen's game. All games we consider in this paper can be played on any Polish space, i.e., any complete, separable metrizable space. In what follows we consider the real line as our playground; the generalization to other Polish spaces is obvious. By the axiom of determinacy, these games are determined (see definition [Definition 1](#STR){reference-type="ref" reference="STR"}), that is, one of the players always has a winning strategy. The Axiom of Determinacy, introduced in 1962 by J. Mycielski and H. Steinhaus, states that certain games (Gale-Stewart games) are determined. Essential for this paper is the result of Mycielski and S. Swierczkowski that the Axiom of Determinacy implies that all sets in the real line $\mathbb{R}$ are Lebesgue measurable, a fact incompatible with the axiom of choice.
The most well-known example of a non-Lebesgue measurable set is a Vitali set, constructed by using the axiom of choice. Another example of a set that is not Lebesgue measurable and can be constructed by applying the axiom of choice was introduced by F. Bernstein in 1908, thereafter known as a Bernstein set (see [@Oxtoby] for further details). While the axiom of determinacy deems all of these games determined, the focus of this paper is to prove that they are not determined on Bernstein sets whose construction heavily depends on the axiom of choice.
# Description of the Games
The games we consider here are played by two players, called Alice and Bob. Bob always goes first and Alice always goes second. The players take turns choosing intervals on the real line according to the rules of the game. After infinitely many stages a winner is decided by some rule. For us, this rule will always be connected to some subset of $\mathbb{R}$ called the target set.
Before we continue on to describe the specific games considered here, we must say what it means for a player to have a winning strategy in one of these games and what it means for a game to be determined on a set.
**Definition 1**. *A *strategy* for either player is a rule that specifies what move they will make in every possible situation. In the case of Alice, she knows which intervals $B_0, A_0, B_1, A_1\dots B_n$ have been chosen in the previous moves, and the target set is known. From this information, her strategy must tell her which interval to choose for $A_n$. Thus, a strategy for Alice is a sequence of closed-interval-valued functions $f_n(B_0, A_0, B_1, A_1 \dots B_n)$ outputting an interval $A_n$ that is a valid move for Alice as her $n$th play of the game.*
It should be noted that this is not the most general definition of a strategy; strategies can be defined for a much wider family of games than those described so far, but this definition is all we need for our purposes.
**Definition 2**. *We say a strategy for either player is winning if it guarantees that the player wins regardless of what moves the other player makes.*
**Definition 3**. *We say that a game is not determined on $S \subset \mathbb{R}$ if neither player has a winning strategy when $S$ is the target set. Otherwise we say it is determined.*
## The Banach-Mazur game
The Banach-Mazur game is played by Alice and Bob, with a target set $S\subset \mathbb{R}$. Bob begins by choosing a nonempty closed interval $B_0$. Alice then chooses a nonempty closed interval $A_0 \subset B_0$. Bob responds by playing another interval $B_1 \subset A_0$ and the game continues in this way indefinitely. Alice wins the game if $\bigcap\limits_{i=0}^\infty A_i=\bigcap\limits_{i=0}^\infty B_i$ has non-empty intersection with $S$, otherwise Bob wins. We note that if the target set $S$ is not dense the game is trivial as Bob can effectively win on his first move. Thus, assuming $S$ is dense, we can assume that Bob will play so as to ensure that $|B_i|\rightarrow0$ where $|I|$ denotes the length of the interval $I$.
It is known that Alice has a winning strategy if and only if $S$ is co-meager. For a more detailed discussion of the game see [@Oxtoby].
## McMullen's Absolute Winning game
McMullen's game (the Absolute Winning game) is played by Alice and Bob, who are given a parameter $\beta$ such that $0<\beta<\frac{1}{3}$ and a target set $S \subset \mathbb{R}.$ Bob starts by choosing a nonempty closed interval $B_0$. Alice then chooses $A_0 \subset B_0$ where the length of $A_0$ is $\beta$ times the length of $B_0.$ Bob responds by playing an interval $B_1$ of length $\beta$ times the length of $B_0$ contained in $B_0 \setminus A_0$. Continuing in this way, we get that for every $n>0$, $$A_n \subset B_n$$ and $$B_n \subset B_{n-1} \setminus A_{n-1}.$$ Alice wins the game if $\bigcap\limits_{n=0}^\infty B_n$ has non-empty intersection with $S$, and otherwise Bob wins. See [@McMullen] for a more detailed discussion of the game.
## Schmidt's game
Schmidt's game is played by Alice and Bob who are given two parameters $0 < \alpha, \beta <1$ and a target set $S \subset \mathbb{R}^d.$ As in the Banach-Mazur game, Bob starts by choosing a nonempty closed interval $B_0$. Alice then chooses a nonempty closed interval $A_0 \subset B_0$ with length $\alpha$ times that of $B_0$. Bob responds by playing an interval $B_1 \subset A_0$ with length $\beta$ times that of $A_0$. The game continues in this way indefinitely. Alice wins the game if $\bigcap\limits_{i=0}^\infty B_i$ has non-empty intersection with $S$; otherwise Bob wins.
If for a specific $\alpha$ and $\beta$ Alice has a winning strategy, we say that $S$ is $(\alpha,\beta)$-winning. If there exists some $\alpha$ such that $S$ is $(\alpha,\beta)$-winning for every $\beta$, we say that $S$ is $\alpha$-winning.
One of the important consequences of a set $S$ being $\alpha$-winning is that the Hausdorff dimension of $S$ is maximal. In $\mathbb{R}^d$ this means it has dimension $d$. We note that although at first glance Schmidt's game seems very similar to the Banach-Mazur game, it can be shown that some $\alpha$-winning sets are in fact meager, e.g., the set of badly approximable numbers. See [@Schmidt] for a more detailed discussion of the game.
# Bernstein Sets and the determinacy of games on a Bernstein set
It might seem that either Alice or Bob always has a winning strategy in these games regardless of the target set. This is not necessarily the case. If we assume the axiom of choice, we can give an example of a set for which neither player has a winning strategy in any of the above games.
**Definition 4**. *A Bernstein set is a set that has non-empty intersection with all closed uncountable sets but contains no closed uncountable set.*
As mentioned in the introduction, the construction of a Bernstein set heavily relies on the axiom of choice. See [@Oxtoby] for details. It is clear from the definition that both a Bernstein set and its complement (which is also a Bernstein set) are dense.
**Definition 5**. *A perfect set $\mathcal{P}$ is a set that is equal to the set of its accumulation points.*
It is well known that a perfect set in $\mathbb{R}$ is necessarily closed and uncountable; see, e.g., [@Rudin].
## The Banach-Mazur Game
**Theorem 1**. *The Banach-Mazur Game is not determined on a Bernstein set.*
*Proof.* The goal in this proof is to show that any winning set or losing set must either contain or be disjoint from some perfect set. Any winning strategy will generate a perfect set in either the target set or its complement, and therefore no Bernstein set can be winning or losing.
Let Alice and Bob play the Banach-Mazur game and suppose one of the players has a winning strategy. At some stage of the game, the player with the winning strategy chooses a closed, nonempty interval $I_k$ as part of that strategy. If the player with the winning strategy is Alice, let $T$ be the target set; if it is Bob, let $T$ be the complement of the target set.
There exist two disjoint closed intervals $I_{k_0}, I_{k_1}$ contained in $I_k$ that are legal moves in the game. The player with the winning strategy will have a response to both called $I_{k_0}, I_{k_1} \subset I_k$ as part of their winning strategy. There are then four disjoint intervals $I_{k_{00}}, I_{k_{01}} \subset I_{k_0}$ and $I_{k_{10}}, I_{k_{11}} \subset I_{k_1}$ which are legal moves in response to the winning strategy. We continue in this way and identify uniquely for any finite binary sequence $j$ the interval $I_j$ which is a possible move for the player without a winning strategy in response to the player with the winning strategy. We let $\tau$ be an infinite binary sequence. Then $\tau$ uniquely identifies a point in $T$ by the assumption that a winning strategy is being employed. Let $\omega$ be a finite binary sequence. We say that $\omega < \tau$ if the sequences match up to $\omega$'s last digit. Define $I_{k_{\tau}} = \bigcap\limits_{\omega < \tau}I_{k_{\omega}}$ and note that for any two distinct infinite binary sequences $\tau_1, \tau_2$, $I_{k_{\tau_1}} \neq I_{k_{\tau_2}}.$ As we used a winning strategy to construct $I_{k_{\tau}}$, it must be contained within $T$. Let $A$ be the union of all the $I_{k_{\tau}}$ over the set of all infinite binary sequences. By definition, $A \subset T$.
$A$ is closed as it is the intersection of closed sets and therefore contains all of its accumulation points. Let $x \in A$. There exists a binary sequence $\tau_{*}$ such that $\{ x \} = I_{k_{\tau_{*}}}.$ Let $\omega_0$ be a finite binary sequence so that $\omega_0 < \tau_{*}.$ There exists some $x_0 \in I_{k_{\omega_0}} \cap A$ not equal to $x$. We let $\omega_1 < \tau_{*}$ be a finite binary sequence strictly longer than $\omega_0$ and there exists $x_1 \in I_{k_{\omega_1}} \cap A$ not equal to $x$. We can continue in this way so that $x_i \in A$ for all $i$, and by construction $x_i \rightarrow x$ in $\mathbb{R}$ so we have that $x$ is an accumulation point of A. Thus, every $x \in A$ is an accumulation point, so since A is closed it is a perfect set. Suppose that $T$ is a Bernstein set and either player has a winning strategy. Then the Bernstein set would contain a perfect set which is a contradiction. Thus, the Banach-Mazur game is not determined on a Bernstein set. ◻
## McMullen's game
**Theorem 2**. *McMullen's game is not determined on a Bernstein set.*
*Proof.* The proof is similar to the proof of Theorem [Theorem 1](#banach){reference-type="ref" reference="banach"}, so we'll just sketch it and the reader can fill out the missing details. Let Alice and Bob play McMullen's game and let Alice have a winning strategy. Using a proof similar to that for the Banach-Mazur game, the target set must contain a perfect set. Now, we consider the case where Bob has a winning strategy. Let $B_0$ be some move for Bob in the winning strategy that is not his first move and let $C$ be Bob's move before $B_0$. Alice could have played in $C \setminus B_0$ as her previous move, so there also exists $B_1 \subset C \setminus B_0$ that is part of Bob's winning strategy in response to Alice's move, $C \setminus B_0.$ We can repeat this process for $B_0$ resulting in the disjoint moves $B_{00}, B_{01} \subset B_0$ and similarly for $B_1$ resulting in the disjoint moves $B_{10}, B_{11} \subset B_1.$ Continuing in this way, for every finite binary sequence $\omega$ we have an interval $B_{\omega}$ that is part of Bob's winning strategy. We can proceed similarly as in the Banach-Mazur game and show that the complement of the target set must contain a perfect set. Therefore, the target set cannot be a Bernstein set when Bob has a winning strategy. So McMullen's game is not determined on Bernstein sets. ◻
## Schmidt's Game
**Theorem 3**. *If $\beta > 2 - \frac{1}{\alpha}, \alpha > 2 - \frac{1}{\beta}$ then Schmidt's Game is not determined on a Bernstein set.*
Before we prove this theorem for Schmidt's game, we will prove a lemma that will motivate an assumption that is used later.
**Lemma 1**. *If in Schmidt's game $\beta \leq 2 - \frac{1}{\alpha}$ then for any $x\in \mathbb{R}$ Bob can win on the target set $T = \mathbb{R} \setminus \{x\}.$*
*Proof.* Fix an $x$. Bob begins by choosing any interval with $x$ in the center. Assume without loss of generality that this interval has length one. Alice's move will then necessarily contain the point $x$, since the assumption $\beta \leq 2 - \frac{1}{\alpha}$ implies $\alpha \geq \frac{1}{2-\beta} > \frac{1}{2}$. In addition, regardless of Alice's move, Bob can respond with an interval of which $x$ is the center. Indeed, the distance between $x$ and the nearest edge of Alice's move is at least $\alpha - \frac{1}{2}$, and $\beta \leq 2 - \frac{1}{\alpha}$ is equivalent to $\frac{\alpha \beta}{2} \leq \alpha - \frac{1}{2}$, so the closed ball $B(x, \frac{\alpha \beta}{2})$ is contained in Alice's move. The game can proceed in this way indefinitely, rescaling at each stage, and Bob wins since the intersection of all his moves will be $\{x\}$. In the same way, one can show that Alice can win on any dense set, even a countable one, when $\alpha \leq 2 - \frac{1}{\beta}$. ◻
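A small numerical illustration of this strategy is given below (our own sketch, with Alice's play hard-coded to be adversarial, pushing her interval as far to one side as the rules allow); the assertion checks that Bob's recentered move is always legal when $\beta \leq 2 - \frac{1}{\alpha}$.

```python
def simulate(alpha, beta, x=0.0, rounds=30):
    """Bob pins the point x in Schmidt's game when beta <= 2 - 1/alpha."""
    assert beta <= 2.0 - 1.0 / alpha
    b_lo, b_hi = x - 0.5, x + 0.5            # Bob's first move: length 1, centered at x
    for _ in range(rounds):
        a_len = alpha * (b_hi - b_lo)        # Alice pushes her interval to the right edge
        a_lo, a_hi = b_hi - a_len, b_hi
        r = beta * a_len / 2.0               # Bob recenters at x with radius beta*|A|/2
        assert a_lo <= x - r and x + r <= a_hi, "Bob's recentered move must be legal"
        b_lo, b_hi = x - r, x + r
    return b_lo, b_hi                        # shrinks to {x}, so Bob wins on R \ {x}

print(simulate(alpha=0.8, beta=0.7))
```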
The point of proving this is to show that the cases of Schmidt's game where $\alpha \leq 2 - \frac{1}{\beta}$ or $\beta \leq 2 - \frac{1}{\alpha}$ are somewhat uninteresting, which is why it is fine to exclude them from the following theorem.\
We are now ready to prove Theorem [Theorem 3](#schmidt){reference-type="ref" reference="schmidt"}.
*Proof.* If we assume $0<\beta<\frac{1}{2}$ and Alice has a winning strategy on a Bernstein set, then the argument is simple. As in the previous games, for each interval Alice plays, Bob has two disjoint legal responses. At each stage of the game, the union of these possible responses is a union of closed, disjoint intervals. As shown before, the intersection of these closed sets is a perfect set, which contradicts the assumption that there is a winning strategy. Now assume $\beta \geq \frac{1}{2}$. Let $I_1$ be Alice's first move as part of her winning strategy on a Bernstein set. Let $x$ be the center of $I_1$. Our goal will be to prove that Bob can eventually force the game to be played either to the left or to the right of $x$. We will prove this for the right side; the left side is equivalent.
Without loss of generality let $|I_1| = 1$. Let $I_2 \subset I_1$ be Bob's response. As $\beta \geq \frac{1}{2}$, Bob's move contains $x$. Now, there are two possibilities from this point forward, $\bigcap\limits_{n=1}^\infty I_n \subset I_2 \cap (-\infty, x], \bigcap\limits_{n=1}^\infty I_n \subset I_2 \cap (x, \infty)$ where $I_n$ is the sequence of all plays in the game. Suppose that for any of Alice's moves $I_{2n - 1},$ Bob responds with an $I_{2n}$ which shares the right endpoint of $I_{2n - 1}.$ We define $\ell_{n}$ as the left endpoint of an interval $I_n.$ Note that $$\lim_{n\rightarrow \infty} \{\ell_{n}\} \in \bigcap\limits_{n=1}^\infty I_n.$$ We will now show that $\bigcap\limits_{n=1}^\infty I_n \subset I_2 \cap (x, \infty)$ is the only possibility when Bob is using this right endpoint strategy. Now let Bob play some interval $I_k$ as part of this strategy. By the rules of the game, $\ell_{k+1} \geq \ell_k$ and so $$|\ell_{k} - \ell_{k+2}| \geq |\ell_{k+1} - \ell_{k + 2}| \geq \alpha^{k+1} \beta^k - \alpha^{k+1} \beta^{k+1}.$$ Thus, $$\sum_{k = 0}^\infty|\ell_{2k} - \ell_{2k+2}| \geq (\alpha - \alpha \beta)\sum_{k = 0}^\infty \alpha^k \beta^k = (1 - \beta) \frac{\alpha}{1 - \alpha\beta}.$$ We now will show that $(1 - \beta) \frac{\alpha}{1 - \alpha\beta} > \beta - \frac{1}{2}$. This will show that the sequence $\ell_n$ converges to a number greater than $x$ as $n \to \infty$. This implies that, after finitely many steps, the players will be choosing intervals on the right of $x$. Note that: $$\begin{aligned}
\alpha &> 2 - \frac{1}{\beta} \\
\alpha\beta &> 2\beta - 1 \\
1 - \alpha\beta &< 2 - 2\beta \\
\frac{1}{1 - \alpha\beta} &> \frac{1}{2 - 2\beta}.\end{aligned}$$ We can now use this to get $$\begin{aligned}
(1 - \beta) \frac{\alpha}{1 - \alpha\beta} &> (1 - \beta) \frac{\alpha\beta}{2 - 2\beta} \\
&= \frac{\alpha\beta}{2} \\
&> (2 - \frac{1}{\beta}) \frac{\beta}{2} \\
&= \beta - \frac{1}{2}.\end{aligned}$$ This shows that $$|\ell_0 - \lim_{n\rightarrow \infty} \ell_{2n}| > \beta - \frac{1}{2}$$ and so $$\lim_{n\rightarrow \infty} \ell_n > x.$$ Thus, there exists an $I_{k_1}$ in Alice's strategy so that for all $y \in I_{k_1},$ $y > x.$ If Bob instead uses a strategy where he fixes the left endpoints of Alice's moves, then we can proceed similarly and we will get an interval $I_{k_0}$ in Alice's strategy such that for all $y \in I_{k_0},$ $y < x$ so $I_{k_0} \cap I_{k_1} = \O.$ Continuing this method, we can find some disjoint intervals $I_{k_{00}}, I_{k_{01}}, I_{k_{10}}, I_{k_{11}}$ as part of Alice's strategy. If we continue constructing these intervals, similar to our proofs for the Banach-Mazur game and McMullen's game, we can show that Alice's target set contains a perfect set. Therefore, any $\alpha\beta-$winning set where $\beta > 2 - \frac{1}{\alpha}$ and $\alpha > 2 - \frac{1}{\beta}$ will contain a perfect set. Thus, for such $\alpha, \beta$ a Bernstein set cannot be $\alpha, \beta-$winning.
The case where Bob has a winning strategy is essentially identical. Assume now that Bob has a winning strategy. If $\alpha < \frac{1}{2},$ then for each interval Bob plays, Alice has two disjoint responses. At each stage of the game, the union of these possible responses is a union of closed, disjoint intervals. The intersection of these closed sets is a perfect set.
If $\alpha \geq \frac{1}{2},$ then Alice can use the same left endpoint and right endpoint strategies as Bob used above to eventually play two disjoint intervals, $I_{k_0}, I_{k_1},$ as in the previous case. Continuing in this way, we can construct a perfect set. Since we used Bob's winning strategy to construct this set, it must be contained in the complement of the target set. Thus, the complement of the target set contains a perfect set.
So we have shown that if either Alice or Bob has a winning strategy on a Bernstein set, then either that Bernstein set contains a perfect set or its complement does. Either way, we get a contradiction.
Therefore, for any $\alpha, \beta$ where $\alpha > 2 - \frac{1}{\beta}$ and $\beta > 2 - \frac{1}{\alpha},$ neither player has a winning strategy on a Bernstein set and under those parameters Schmidt's Game is not determined. ◻
J. C. Oxtoby, *Measure and Category*, 2nd edition, Springer, 1980.
C. T. McMullen, *Winning sets, quasiconformal maps and Diophantine approximation*, Geom. Funct. Anal. 20 (2010), no. 3, 726--740.
W. M. Schmidt, *On badly approximable numbers and certain games*, Trans. Amer. Math. Soc. 123 (1966), 27--50.
W. Rudin, *Principles of Mathematical Analysis*, 3rd edition, McGraw-Hill, 1976.
*The Scottish Book: Mathematics from The Scottish Café, with Selected Problems from The New Scottish Book*, 2nd edition, Birkhäuser, 2015.
| arxiv_math | {
"id": "2310.03039",
"title": "Intersection Games and Bernstein Sets",
"authors": "James Atchley, Lior Fishman, Saisneha Ghatti",
"categories": "math.LO",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
  In this paper, we first consider the pseudoprimeness of meromorphic solutions $u$ to a family of partial differential equations (PDEs) $H(u_{z_1},u_{z_2},\ldots,u_{z_n})=P(u)$ of Waring's-problem form, where $H(z_1,z_2,\ldots,z_n)$ is a nontrivial homogenous polynomial of degree $\ell$ in $\mathbf{C}^n$ and $P(w)$ is a polynomial of degree $\hbar$ in $\mathbf{C}$ with all zeros distinct. Then, we study when these PDEs can admit entire solutions in $\mathbf{C}^n$ and further find these solutions for important cases including particularly $u^\ell_{z_1}+u^\ell_{z_2}+\cdots+u^\ell_{z_n}=u^\hbar$, which are often said to be PDEs of super-Fermat form when $\hbar=0$ or $\hbar=\ell$, and an eikonal equation when $\ell=2$ and $\hbar=0$.
address: "Department of Computational, Engineering, and Mathematical Sciences Texas A&M University-San Antonio, San Antonio, Texas 78224, USA Email: qhan\\@tamusa.edu"
author:
- Qi Han
title: "**On partial differential equations of Waring's-problem form in several complex variables**"
---
[^1] [^2]
# Introduction {#Int}
In this work, we first consider the pseudoprimeness of meromorphic solutions $u$ to a family of partial differential equations $H(u_{z_1},u_{z_2},\ldots,u_{z_n})=P(u)$ of Waring's-problem form, where $H(z_1,z_2,\ldots,z_n)$ is a homogenous polynomial of degree $\ell\hspace{0.2mm}(\geq1)$ in $\mathbf{C}^n$ and $P(w)$ is a polynomial of degree $\hbar$ in $\mathbf{C}$ with all its zeros distinct. This paper is inspired by Hayman [@Ha2; @Ha3]; see also Gundersen-Hayman [@GH] and several other apposite results discussed later.
Now, let $u$ be a (generic) meromorphic function in $\mathbf{C}^n$. A factorization of $u$ is a representation $u(z)=f(g(z))$ with a meromorphic left factor $f:\mathbf{C}\to\mathbf{P}:=\mathbf{C}\cup\{\infty\}$ and an entire right factor $g:\mathbf{C}^n\to\mathbf{C}$ ($g$ may be meromorphic, provided $f$ is rational). $u$ is said to be *prime* if every such factorization leads to either $f$ bilinear or $g$ linear, and $u$ is said to be *pseudoprime* if every such factorization leads to either $f$ rational or $g$ a polynomial.
The first mathematically rigorous treatment of factorization of meromorphic functions in $\mathbf{C}$ using pseudoprimeness seems to be Gross [@Gr], which was later extended to $\mathbf{C}^n$ by Li-Yang [@LY]. This research topic has found uses in other areas of complex analysis, as demonstrated in the work of Bergweiler [@Be1; @Be2] on normal families and quasiregular maps. On the other hand, Li [@Li1] studied factorization of entire solutions to super-Fermat form partial differential equations in $\mathbf{C}^n$ and proved that all such solutions to $H(u_{z_1},u_{z_2},\ldots,u_{z_n})=1$ are prime; extensions of [@Li1] to meromorphic solutions were given by Saleeby [@Sa2] and Han [@Ha].
The first main result of this paper considers a general form $P(w)$ that includes those studied in the earlier works as special cases; it is formulated as follows. (This result seems to be the first in the literature involving general polynomials $H,P$, and it indicates a close relation between solutions to these PDEs and solutions to some well-known ODEs.)
**Theorem 1**. *Let $u(z)$ be a meromorphic solution in $\mathbf{C}^n$ to the partial differential equation $$\label{Eq1.1}
H(u_{z_1},u_{z_2},\ldots,u_{z_n})=P(u),$$ where $H(z_1,z_2,\ldots,z_n)$ is a nontrivial homogenous polynomial of degree $\ell\hspace{0.2mm}(\geq1)$ in $\mathbf{C}^n$ and $P(w)$ is a polynomial of degree $\hbar$ in $\mathbf{C}$ having all zeros distinct. Then, $u$ is generically pseudoprime. Moreover, for the cases where $u$ may not be pseudoprime, we have $\ell=\hbar=1$ and $\displaystyle{f(w)=A_1e^{A_0w}+\alpha_1}$; $\ell=1$, $\hbar=2$ and $\displaystyle{f(w)=\frac{\alpha_2A_1e^{A_0(\alpha_1-\alpha_2)w}-\alpha_1}{A_1e^{A_0(\alpha_1-\alpha_2)w}-1}}$; $\ell=\hbar=2$ and $\displaystyle{f(w)=\frac{\alpha_1-\alpha_2}{2}\sin\bigl(\sqrt{-A_0}\hspace{0.2mm}w+A_1\bigr)+\frac{\alpha_1+\alpha_2}{2}}$; $\ell=2$, $\hbar=3$ and $f(w)$ is a transcendental meromorphic solution to $$\label{Eq1.1-1}
(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2(w)=A_0\prod^3_{j=1}(f(w)-\alpha_j);$$ $\ell=2$, $\hbar=4$ and $f(w)$ is a transcendental meromorphic solution to $$\label{Eq1.1-2}
(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2(w)=A_0\prod^4_{j=1}(f(w)-\alpha_j).$$ Here, $f(w):\mathbf{C}\to\mathbf{P}$ is a meromorphic left factor of $u(z)=f(g(z))$ with associated entire right factor $g(z):\mathbf{C}^n\to\mathbf{C}$ transcendental, $m_1,m_2\geq0$ are integers with $m_1+m_2\leq2$, $\alpha_1,\alpha_2,\alpha_3,\alpha_4$ are pairwise distinct complex numbers, and $A_0\cdot A_1\neq0,a_1\neq a_2$ are constants.*
Discussions of meromorphic solutions $f$ to the ordinary differential equations (ODEs) [\[Eq1.1-1\]](#Eq1.1-1){reference-type="eqref" reference="Eq1.1-1"} and [\[Eq1.1-2\]](#Eq1.1-2){reference-type="eqref" reference="Eq1.1-2"} can be found in Bank-Kaufman [@BK1 Example 5] and [@BK2], and Ishizaki-Toda [@IT Section 3], where the meromorphic $f$ are closely related to the Weierstrass $\wp$-function.
**Corollary 2**. *Let $u(z)$ be a meromorphic solution in $\mathbf{C}^n$ to the partial differential equation $$\label{Eq1.2}
H(u_{z_1},u_{z_2},\ldots,u_{z_n})=P(u),$$ where $H(z_1,z_2,\ldots,z_n)$ is a nontrivial homogenous polynomial of degree $\ell\hspace{0.2mm}(\geq1)$ in $\mathbf{C}^n$ and $P(w)$ is a polynomial of degree $\hbar$ in $\mathbf{C}$. If either $\ell=1$ and $P(w)$ has a multiple zero, or $\ell=2$ and $\hbar\geq5$, or $\ell\geq3$ and $P(w)$ has all zeros distinct, then $u$ is pseudoprime.*
Theorem [Theorem 1](#Thm1){reference-type="ref" reference="Thm1"} and Corollary [Corollary 2](#Cor2){reference-type="ref" reference="Cor2"} supplement [@Li1; @Sa2; @Ha] from a broader perspective.
Next, we would like to know when equation [\[Eq1.1\]](#Eq1.1){reference-type="eqref" reference="Eq1.1"} has entire solutions in $\mathbf{C}^n$ and what these solutions look like: we are only able to do this successfully for the general linear form [\[Eq1.3\]](#Eq1.3){reference-type="eqref" reference="Eq1.3"} and for PDEs of super-Fermat/Waring's-problem form [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"}. The last main results of this paper, Theorems [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"} and [Theorem 8](#Thm8){reference-type="ref" reference="Thm8"}, with supplemental examples, are formulated as follows.
**Theorem 3**. *Let $u(z)$ be an entire solution in $\mathbf{C}^n$ to the partial differential equation $$\label{Eq1.3}
(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})^\ell=p(u),$$ where $\ell\hspace{0.2mm}(\geq1)$ is an integer, $\rho_1,\rho_2,\ldots,\rho_n$ are constants, and $p(w)$ is a (generic) meromorphic function in $\mathbf{C}$. Then, $p(w)$ must be a polynomial, say, of degree $\hbar$ in $\mathbf{C}$ and $\displaystyle{u(z)=\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)}$ with $\hbar=0$ and $p(w)=c_0$; $\displaystyle{u(z)=\Bigl(\frac{\ell-\hbar}{\ell}\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\Bigr)^{\frac{\ell}{\ell-\hbar}}+a_1}$ with $\hbar<\ell$, $\frac{\ell}{\ell-\hbar}$ being an integer (such as $\ell=\hbar+1$, or $\ell=\hbar+2$ for even $\ell$), and $p(w)=c_0(w-a_1)^\hbar$; $\displaystyle{u(z)=\Phi(z)e^{\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)}+a_1}$ with $\hbar=\ell$ and $p(w)=c_0(w-a_1)^\ell$; $\displaystyle{u(z)=\frac{a_1-a_2}{2}\cosh\bigl(\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\bigr)+\frac{a_1+a_2}{2}}$ with $\hbar=\ell$ even and $p(w)=c_0(w-a_1)^{\frac{\ell}{2}}(w-a_2)^{\frac{\ell}{2}}$. Here, $a_1\neq a_2,c_0,\sigma_1,\sigma_2,\ldots,\sigma_n$ are constants and $\Phi(z)$ is an entire function in $\mathbf{C}^n$ such that $\rho_1\sigma_1+\rho_2\sigma_2+\cdots+\rho_n\sigma_n=1$ and $\rho_1\Phi_{z_1}+\rho_2\Phi_{z_2}+\cdots+\rho_n\Phi_{z_n}=0$.*
Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"} generalizes Li-Saleeby [@LS] and Li [@Li2] with an easier/shorter proof.
**Example 4**. *Set $\Phi(z):=\aleph\bigl(\frac{z_2}{\rho_2}-\frac{z_1}{\rho_1},\frac{z_3}{\rho_3}-\frac{z_2}{\rho_2},\ldots,
\frac{z_n}{\rho_n}-\frac{z_{n-1}}{\rho_{n-1}},\frac{z_1}{\rho_1}-\frac{z_n}{\rho_n}\bigr)$ by virtue of an entire function $\aleph(\eta)$ of $\eta\in\mathbf{C}^n$ to see $\rho_1\Phi_{z_1}+\rho_2\Phi_{z_2}+\cdots+\rho_n\Phi_{z_n}=0$.*
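A minimal symbolic sketch of this construction for $n=3$ follows (the generic name `aleph` stands in for an arbitrary entire $\aleph$, and sympy is used only as a convenient check, not as part of the argument).

```python
import sympy as sp

# Example 4 with n = 3: the cyclic-difference construction is annihilated by
# the directional derivative rho1*d/dz1 + rho2*d/dz2 + rho3*d/dz3.
z1, z2, z3 = sp.symbols('z1 z2 z3')
r1, r2, r3 = sp.symbols('rho1 rho2 rho3', nonzero=True)
aleph = sp.Function('aleph')  # arbitrary (entire) function of three variables

Phi = aleph(z2/r2 - z1/r1, z3/r3 - z2/r2, z1/r1 - z3/r3)
L = r1*sp.diff(Phi, z1) + r2*sp.diff(Phi, z2) + r3*sp.diff(Phi, z3)
print(sp.simplify(L))  # expected output: 0
```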
**Example 5**. *Set $\Phi_1(z):=\Xi\bigl(\frac{z_2}{\rho_2}-\frac{z_1}{\rho_1},\frac{z_3}{\rho_3}-\frac{z_1}{\rho_1},\ldots,\frac{z_n}{\rho_n}-\frac{z_1}{\rho_1}\bigr)$ to be an entire function of $z\in\mathbf{C}^n$ (by virtue of an entire function $\Xi(\xi)$ of $\xi\in\mathbf{C}^{n-1}$) to see $\rho_1{\Phi_1}_{z_1}+\rho_2{\Phi_1}_{z_2}+\cdots+\rho_n{\Phi_1}_{z_n}=0$. Likewise, set $\Phi_2(z),\Phi_3(z),\ldots,\Phi_n(z)$ similarly to see $\rho_1{\Phi_j}_{z_1}+\rho_2{\Phi_j}_{z_2}+\cdots+\rho_n{\Phi_j}_{z_n}=0$ for $j=1,2,\ldots,n$. $\Phi(z)$ can be generated as linear combinations of $\Phi_1,\Phi_2,\ldots,\Phi_n$.*
**Example 6**. *When $n=2k$ is even, set $\Phi(z):=\Upsilon\bigl(\frac{z_2}{\rho_2}-\frac{z_1}{\rho_1},\frac{z_4}{\rho_4}-\frac{z_3}{\rho_3},\ldots,
\frac{z_{2k}}{\rho_{2k}}-\frac{z_{2k-1}}{\rho_{2k-1}}\bigr)$ to see $\rho_1\Phi_{z_1}+\rho_2\Phi_{z_2}+\cdots+\rho_n\Phi_{z_n}=0$ with $\Upsilon(\theta)$ an entire function of $\theta\in\mathbf{C}^k$. Apparently, other pairwise distinct rearrangements and their linear combinations generate new $\Phi(z)$.*
**Example 7**. *Set $\Phi(z):=f\bigl((n-1)\frac{z_1}{\rho_1}-\frac{z_2}{\rho_2}-\frac{z_3}{\rho_3}-\cdots-\frac{z_n}{\rho_n}\bigr)$ through an entire function $f(w)$ in $\mathbf{C}$ to see $\rho_1\Phi_{z_1}+\rho_2\Phi_{z_2}+\cdots+\rho_n\Phi_{z_n}=0$.*
Finally, we describe entire solutions to the partial differential equation $$\label{Eq1.4}
u^\ell_{z_1}+u^\ell_{z_2}+\cdots+u^\ell_{z_n}=u^\hbar,$$ which we consider the most important problem studied in this paper.
When $\ell=2$ and $\hbar=0$, then [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"} is a complex $n$-dimensional eikonal equation. Caffarelli-Crandall [@CC] found that linear functions are the only possible global solutions to [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"} in $\mathbf{R}^n$ in this case, motivated by an earlier work of Khavinson [@Kh] in $\mathbf{C}^2$; see [@CC Remark 2.3]. Hemmati [@He] and Saleeby [@Sa1] provided different proofs of [@Kh]. In $\mathbf{C}^n$ when $n\geq3$, as first described by Johnsson [@Jo], there are indeed nonlinear complex analytic solutions to eikonal equations. We shall provide more examples in this regard to supplement those well-known works.
Equation [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"} for general $u^\hbar$, particularly $u$ or $u^\ell$, formally relates to Waring's problem or the super-Fermat problem. (One should note that what we are interested in here is different from, and in a sense opposite to, those original fundamental questions in number theory.)
**Theorem 8**. *Assume that $u(z)$ is an entire solution to the partial differential equation [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"} in $\mathbf{C}^n$ for integers $\ell\geq1$ and $\hbar\geq0$ with $0\leq\hbar\leq\ell$. Then, one has $\displaystyle{u(z)=\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n+\Phi(z)}$ with $\hbar=0$; $\displaystyle{u(z)=\Bigl(\frac{z_1}{2}+c_1\Bigr)^2+\Bigl(\frac{z_2}{2}+c_2\Bigr)^2+\cdots+\Bigl(\frac{z_n}{2}+c_n\Bigr)^2}$ with $\hbar=1$ and $\ell=2$; $\displaystyle{u(z)=\Bigl(\frac{\ell-\hbar}{\ell}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\Bigr)^{\frac{\ell}{\ell-\hbar}}}$ with $\hbar<\ell$ and $\frac{\ell}{\ell-\hbar}\in\mathbf{N}$; $\displaystyle{u(z)=\Psi(z)e^{\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n}}$ with $\hbar=\ell$. Here, $c_1,c_2,\ldots,c_n,\sigma_1,\sigma_2,\ldots,\sigma_n$ are constants and $\Phi(z),\Psi(z)$ are entire functions in $\mathbf{C}^n$ with $\sum^n_{j=1}\sigma^\ell_j=1$, $\sum^n_{j=1}\sum^\ell_{\iota=1}\sigma^{\ell-\iota}_j\Phi^\iota_{z_j}=0$ and $\sum^n_{j=1}\sum^\ell_{\iota=1}(\sigma_j\Psi)^{\ell-\iota}\Psi^\iota_{z_j}=0$.*
It is easy to see from the proof that $u^{\ell_1}_{z_1}+u^{\ell_2}_{z_2}+\cdots+u^{\ell_n}_{z_n}=1$ has entire solutions of the same form as those in **Case 1** above, where $\ell_1,\ell_2,\ldots,\ell_n\geq1$ are integers, not necessarily the same.
**Example 9**. *Let $u(z):=\frac{2}{7}z_1+\frac{3}{7}z_2+\frac{6}{7}z_3+f(\varpi)$ be entire in $\mathbf{C}^3$ with $f(w)$ entire in $\mathbf{C}$ and $\varpi:=\frac{1}{2}\bigl(\frac{12-21i}{13}\bigr)^2z^2_1+\frac{1}{2}\bigl(\frac{18+14i}{13}\bigr)^2z^2_2+\frac{1}{2}z^2_3
+\frac{12-21i}{13}\frac{18+14i}{13}z_1z_2-\frac{12-21i}{13}z_1z_3-\frac{18+14i}{13}z_2z_3$. Then, routine calculations lead to $$\left\{\begin{array}{ll}
u_{z_1}(z)=\frac{2}{7}+\frac{12-21i}{13}\bigl(\frac{12-21i}{13}z_1+\frac{18+14i}{13}z_2-z_3\bigr)f'(\varpi) \medskip\\
u_{z_2}(z)=\frac{3}{7}+\frac{18+14i}{13}\bigl(\frac{12-21i}{13}z_1+\frac{18+14i}{13}z_2-z_3\bigr)f'(\varpi) \medskip\\
u_{z_3}(z)=\frac{6}{7}-\bigl(\frac{12-21i}{13}z_1+\frac{18+14i}{13}z_2-z_3\bigr)f'(\varpi)
\end{array}\right.$$ so that $u^2_{z_1}+u^2_{z_2}+u^2_{z_3}=1$.*
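The identity claimed in Example [Example 9](#Exm9){reference-type="ref" reference="Exm9"} can also be confirmed with a computer algebra system; below is a minimal sympy sketch in which only the generic function name `f` is introduced, everything else being taken verbatim from the example.

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
f = sp.Function('f')
A = (12 - 21*sp.I) / 13
B = (18 + 14*sp.I) / 13

# varpi as in Example 9; note that varpi = (A*z1 + B*z2 - z3)**2 / 2.
varpi = (A**2*z1**2 + B**2*z2**2 + z3**2)/2 + A*B*z1*z2 - A*z1*z3 - B*z2*z3
u = sp.Rational(2, 7)*z1 + sp.Rational(3, 7)*z2 + sp.Rational(6, 7)*z3 + f(varpi)

eikonal = sum(sp.diff(u, zj)**2 for zj in (z1, z2, z3))
print(sp.simplify(sp.expand(eikonal)))  # expected output: 1
```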
**Example 10**. *Let $u(x):=\frac{1}{2}x_1+\frac{2}{3}x_2+\frac{5}{6}x_3+f(\tilde{y})$ for $\tilde{y}:=ax_1+bx_2-x_3$ be differentiable in $\mathbf{R}^3$ with $f(y)$ differentiable in $\mathbf{R}$ to see $u^3_{x_1}+u^3_{x_2}+u^3_{x_3}=1$, where $a=-\frac{16}{9}b+\frac{25}{9}$ and $b$ is the unique real root of the cubic polynomial $91\kappa^3-100\kappa^2+80\kappa-152=0$.*
**Example 11**. *Let $u(z):=f(\varpi_1)\exp\bigl(\frac{2}{7}z_1+\frac{3}{7}z_2+\frac{6}{7}z_3\bigr)$ for $\varpi_1:=\frac{12-21i}{13}z_1+\frac{18+14i}{13}z_2-z_3$ and $\tilde{u}(z):=\tilde{f}(\varpi_2)\exp\bigl(\frac{1}{2}z_1+\frac{2}{3}z_2+\frac{5}{6}z_3\bigr)$ for $\varpi_2:=az_1+bz_2-z_3$ be entire functions in $\mathbf{C}^3$, with $a=-\frac{16}{9}b+\frac{25}{9}$ as above and $b$ a root (real or complex) of $91\kappa^3-100\kappa^2+80\kappa-152=0$. Then, we have $u^2_{z_1}+u^2_{z_2}+u^2_{z_3}=u^2$ and $\tilde{u}^3_{z_1}+\tilde{u}^3_{z_2}+\tilde{u}^3_{z_3}=\tilde{u}^3$ respectively.*
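The first identity of Example [Example 11](#Exm11){reference-type="ref" reference="Exm11"} admits the same kind of symbolic check (a sketch only; the second identity involves the constants of Example [Example 10](#Exm10){reference-type="ref" reference="Exm10"} and is not repeated here).

```python
import sympy as sp

z1, z2, z3 = sp.symbols('z1 z2 z3')
f = sp.Function('f')
A = (12 - 21*sp.I) / 13
B = (18 + 14*sp.I) / 13

u = f(A*z1 + B*z2 - z3) * sp.exp(sp.Rational(2, 7)*z1 + sp.Rational(3, 7)*z2
                                 + sp.Rational(6, 7)*z3)
residual = sum(sp.diff(u, zj)**2 for zj in (z1, z2, z3)) - u**2
print(sp.simplify(sp.expand(residual)))  # expected output: 0
```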
**Example 12**. *Let $u(z):=\frac{3}{2}z_1-z_2+\frac{3}{2}z_3-\frac{4}{3}z_4+\frac{3}{2}z_5-\frac{5}{3}z_6+\frac{3}{2}z_7+f(\varpi_1,\varpi_2)$ for $\varpi_1:=z_1+iz_3-z_5-iz_7$ and $\varpi_2:=az_2+bz_4-z_6$ be entire in $\mathbf{C}^7$ to observe $$u^2_{z_1}+u^3_{z_2}+u^2_{z_3}+u^3_{z_4}+u^2_{z_5}+u^3_{z_6}+u^2_{z_7}=1,$$ where $a,b$ are constants as above and $f(w_1,w_2)$ is an entire function in $\mathbf{C}^2$.*
Example [Example 9](#Exm9){reference-type="ref" reference="Exm9"} provides a 'nonlinear' extension to the one from Johnsson [@Jo]; see also Li [@Li1]. Example [Example 10](#Exm10){reference-type="ref" reference="Exm10"} is relevant to Caffarelli-Crandall [@CC], where it is shown $u^2_{x_1}+u^2_{x_2}+\cdots+u^2_{x_n}=1$ has only linear solutions in $\mathbf{R}^n$. Example [Example 11](#Exm11){reference-type="ref" reference="Exm11"} is related to Han [@Ha] and Li-Ye [@LYe], where it is shown all complex analytic solutions to $u^\ell_{z_1}+u^\ell_{z_2}=u^\ell$ are purely exponential in $\mathbf{C}^2$ for $\ell\geq2$. Example [Example 12](#Exm12){reference-type="ref" reference="Exm12"} is formally related to the generalized Fermat equation, for which one can consult Bennett-Mihăilescu-Siksek [@BMS] for further information (in number theory), while [@Li3; @Li4; @Li6; @LYe] described complex analytic solutions to formally related PDEs in $\mathbf{C}^2$.
The remainder of the paper is organized as follows: Section [2](#PT1CO2){reference-type="ref" reference="PT1CO2"} is devoted to the proofs of Theorem [Theorem 1](#Thm1){reference-type="ref" reference="Thm1"} and Corollary [Corollary 2](#Cor2){reference-type="ref" reference="Cor2"}, Section [3](#PT3){reference-type="ref" reference="PT3"} is devoted to that of Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"}, and, after a brief review of the notion of characteristics, Section [4](#PT8){reference-type="ref" reference="PT8"} is devoted to that of Theorem [Theorem 8](#Thm8){reference-type="ref" reference="Thm8"}.
# Proofs of Theorem [Theorem 1](#Thm1){reference-type="ref" reference="Thm1"} and Corollary [Corollary 2](#Cor2){reference-type="ref" reference="Cor2"} {#PT1CO2}
*Proof of Theorem [Theorem 1](#Thm1){reference-type="ref" reference="Thm1"}.* Let $u(z)=f(g(z))$ be a meromorphic solution to equation [\[Eq1.1\]](#Eq1.1){reference-type="eqref" reference="Eq1.1"} in $\mathbf{C}^n$ for entire $g(z):\mathbf{C}^n\to\mathbf{C}$ and meromorphic $f(w):\mathbf{C}\to\mathbf{P}$. Note that if $g$ is a polynomial, or if $g$ is meromorphic rather than entire (so that $f$ is rational), then $u$ is pseudoprime by definition. In view of this, assume subsequently that $g$ is a transcendental entire function in $\mathbf{C}^n$. Substitute $u(z)=f(g(z))$ into [\[Eq1.1\]](#Eq1.1){reference-type="eqref" reference="Eq1.1"} to have $$\label{Eq2.1}
H(g_{z_1},g_{z_2},\ldots,g_{z_n})=h(g)$$ with $h(w):=P(f(w))/(f^\prime(w))^\ell:\mathbf{C}\to\mathbf{P}$. $h$ is rational by Chang-Li-Yang [@CLY Theorem 4.1], as $H$ is a polynomial; that is, $P(f)/(f^\prime)^\ell$ is a rational function, say, $$\label{Eq2.2}
h(w)=\frac{P(f)}{(f^\prime)^\ell}(w)=c_0\frac{(w-a_1)^{m_1}(w-a_2)^{m_2}\cdots(w-a_s)^{m_s}}{(w-b_1)^{l_1}(w-b_2)^{l_2}\cdots(w-b_t)^{l_t}}$$ for pairwise distinct complex numbers $a_1,a_2,\ldots,a_s,b_1,b_2,\ldots,b_t$, a constant $c_0\neq0$, and integers $m_1,m_2,\ldots,m_s,l_1,l_2,\ldots,l_t\geq0$.
In view of the proof of Han [@Ha Pages 282-283], $t=0$ follows. For completeness, we sketch a proof here. In fact, combine [\[Eq2.1\]](#Eq2.1){reference-type="eqref" reference="Eq2.1"} and [\[Eq2.2\]](#Eq2.2){reference-type="eqref" reference="Eq2.2"} to have $$\label{Eq2.3}
H(g_{z_1},g_{z_2},\ldots,g_{z_n})=c_0\frac{(g-a_1)^{m_1}(g-a_2)^{m_2}\cdots(g-a_s)^{m_s}}{(g-b_1)^{l_1}(g-b_2)^{l_2}\cdots(g-b_t)^{l_t}}.$$ As the left-hand side is analytic in $\mathbf{C}^n$, $t$ is at most $1$, in which case $b_1$ must be the only finite Picard (omitted) value of $g$. Without loss of generality, suppose $t=1$ and $g(z)-b_1=e^{\beta(z)}$ for an entire function $\beta(z):\mathbf{C}^n\to\mathbf{C}$; then, substitute this into [\[Eq2.3\]](#Eq2.3){reference-type="eqref" reference="Eq2.3"} to deduce $$H(\beta_{z_1},\beta_{z_2},\ldots,\beta_{z_n})=c_0\frac{(e^\beta+b_1-a_1)^{m_1}\cdots(e^\beta+b_1-a_s)^{m_s}}{e^{(\ell+l_1)\beta}},$$ and an application of [@CLY Theorem 4.1] leads to $g(z)$ a constant. So, $t=0$.
Now, equation [\[Eq2.3\]](#Eq2.3){reference-type="eqref" reference="Eq2.3"} reads $$\label{Eq2.4}
H(g_{z_1},g_{z_2},\ldots,g_{z_n})=c_0(g-a_1)^{m_1}(g-a_2)^{m_2}\cdots(g-a_s)^{m_s}.$$ Following the proof of Li [@Li2 Page 135], we derive from [\[Eq2.4\]](#Eq2.4){reference-type="eqref" reference="Eq2.4"} that $$m_1+m_2+\cdots+m_s\leq\ell$$ as a straightforward application of the *logarithmic derivative lemma* by Vitter [@Vi]. So, [\[Eq2.1\]](#Eq2.1){reference-type="eqref" reference="Eq2.1"}, [\[Eq2.2\]](#Eq2.2){reference-type="eqref" reference="Eq2.2"} and [\[Eq2.4\]](#Eq2.4){reference-type="eqref" reference="Eq2.4"} combined lead to an ordinary differential equation $$\label{Eq2.5}
(f^\prime)^\ell(w)=\frac{A_0}{(w-a_1)^{m_1}\cdots(w-a_s)^{m_s}}(f(w)-\alpha_1)\cdots(f(w)-\alpha_\hbar)$$ of $f(w):\mathbf{C}\to\mathbf{P}$ for a constant $A_0\neq0$ and pairwise distinct complex numbers $\alpha_1,\alpha_2,\ldots,\alpha_\hbar$, where $P(w)=\alpha_0(w-\alpha_1)(w-\alpha_2)\cdots(w-\alpha_\hbar)$ for a constant $\alpha_0\neq0$.
Below, we consider three different cases and their associated subcases.
**Case 1.** $\ell=1$. In this case, one has $s\leq1$ and accordingly $m_1\leq1$.
**Subcase 1.1.** $\hbar=1$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$\label{Eq2.6}
\frac{f^\prime(w)}{f(w)-\alpha_1}=A_0\hspace{2mm}\mathrm{or}\hspace{2mm}\frac{f^\prime(w)}{f(w)-\alpha_1}=\frac{A_0}{w-a_1}.$$ Easy calculations yield $$f(w)=A_1e^{A_0w}+\alpha_1\hspace{2mm}\mathrm{or}\hspace{2mm}f(w)=A_1(w-a_1)^{A_0}+\alpha_1,$$ where $A_0\cdot A_1\neq0$ are constants with $A_0\in\mathbf{Z}$ for the latter subcase. Notice the second subcase implies that $u$ is pseudoprime, since $f$ is rational here.
**Subcase 1.2.** $\hbar=2$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$\label{Eq2.7}
\frac{f^\prime(w)}{(f(w)-\alpha_1)(f(w)-\alpha_2)}=A_0\hspace{2mm}\mathrm{or}\hspace{2mm}\frac{f^\prime(w)}{(f(w)-\alpha_1)(f(w)-\alpha_2)}=\frac{A_0}{w-a_1}.$$ Routine calculations lead to $$f(w)=\frac{\alpha_2A_1e^{A_0(\alpha_1-\alpha_2)w}-\alpha_1}{A_1e^{A_0(\alpha_1-\alpha_2)w}-1}\hspace{2mm}\mathrm{or}\hspace{2mm}
f(w)=\frac{\alpha_2A_1(w-a_1)^{A_0(\alpha_1-\alpha_2)}-\alpha_1}{A_1(w-a_1)^{A_0(\alpha_1-\alpha_2)}-1},$$ where $A_0\cdot A_1\neq0$ are constants with $A_0(\alpha_1-\alpha_2)\in\mathbf{Z}$ for the latter subcase. Note the second subcase again implies that $u$ is pseudoprime, as $f$ is rational.
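As a quick consistency check, not needed for the argument, the first expression for $f$ above does satisfy the left alternative in [\[Eq2.7\]](#Eq2.7){reference-type="eqref" reference="Eq2.7"}; a sketch with symbolic constants:

```python
import sympy as sp

w, A0, A1, al1, al2 = sp.symbols('w A_0 A_1 alpha_1 alpha_2')

# First solution of Subcase 1.2: f'(w) = A_0 * (f(w) - alpha_1) * (f(w) - alpha_2).
E = A1*sp.exp(A0*(al1 - al2)*w)
f = (al2*E - al1) / (E - 1)

residual = sp.diff(f, w) - A0*(f - al1)*(f - al2)
print(sp.simplify(residual))  # expected output: 0
```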
**Subcase 1.3.** $\hbar\geq3$. In this subcase, with $m_1\leq1$, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$f^\prime(w)=\frac{A_0}{(w-a_1)^{m_1}}(f(w)-\alpha_1)(f(w)-\alpha_2)\cdots(f(w)-\alpha_\hbar).$$ Now, take $w_j$ to be a root of $f(w)-\alpha_j=0$ for $j=1,2,\ldots,\hbar$; a comparison of its multiplicity on both sides implies, say, $w_1=a_1$ (at most). When $m_1=0$, $f$ is a constant, for it has $\hbar\hspace{0.2mm}(\geq3)$ distinct finite Picard values; when $m_1=1$ but $\hbar\geq4$, the same occurs. Finally, when $m_1=1$ and $\hbar=3$, then $\frac{f(w)-\alpha_2}{f(w)-\alpha_3}=e^{\gamma(w)}$ for an entire function $\gamma(w):\mathbf{C}\to\mathbf{C}$; so, $f(w)=\frac{\alpha_3e^{\gamma(w)}-\alpha_2}{e^{\gamma(w)}-1}$. Hence, it is easily seen from the preceding equation that $$\frac{(\alpha_2-\alpha_3)\gamma^\prime e^\gamma}{(e^\gamma-1)^2}
=\frac{A_0}{w-a_1}\frac{(\alpha_3-\alpha_2)^2e^\gamma((\alpha_3-\alpha_1)e^\gamma-(\alpha_2-\alpha_1))}{(e^\gamma-1)^3},$$ or equivalently, $$\frac{(w-a_1)\gamma^\prime(w)}{A_0(\alpha_3-\alpha_1)(\alpha_3-\alpha_2)}=-\frac{e^{\gamma(w)}-\frac{\alpha_2-\alpha_1}{\alpha_3-\alpha_1}}{e^{\gamma(w)}-1},$$ which leads to $\gamma$, and correspondingly $f$, a constant since $\frac{\alpha_2-\alpha_1}{\alpha_3-\alpha_1}\neq1$. Thus, $u$ is pseudoprime, as $f$ is a constant when $g$ is transcendental, so that $g$ must be a polynomial.
**Case 2.** $\ell=2$. In this case, one has $s\leq2$ and accordingly $m_1+m_2\leq2$.
**Subcase 2.1.** $\hbar=1$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2(w)=A_0(f(w)-\alpha_1).$$ Taking derivative on both sides of the above equation yields $$f^{\prime\prime}(w)+\frac{1}{2}\Bigl(\frac{m_1}{w-a_1}+\frac{m_2}{w-a_2}\Bigr)f^\prime(w)=\frac{A_0}{2}\frac{1}{(w-a_1)^{m_1}(w-a_2)^{m_2}}.$$ When $m_1=m_2=0$, one sees that $f$ is a quadratic polynomial, and hence, $u$ is pseudoprime. When $1\leq m_1+m_2\leq2$, one derives from routine calculations that $$\label{Eq2.8}
f^\prime(w)=\frac{1}{(w-a_1)^{\frac{m_1}{2}}(w-a_2)^{\frac{m_2}{2}}}\biggl(\frac{A_0}{2}\int\frac{1}{{(w-a_1)^{\frac{m_1}{2}}(w-a_2)^{\frac{m_2}{2}}}}dw+C\biggr),$$ which does not allow any meromorphic solution $f^\prime$, and accordingly $f$, in $\mathbf{C}$.
**Subcase 2.2.** $\hbar=2$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} can be rewritten as $$\label{Eq2.9}
F^2(w)-\frac{1}{A_0}(w-a_1)^{m_1}(w-a_2)^{m_2}(F^\prime)^2(w)=\Bigl(\frac{\alpha_1-\alpha_2}{2}\Bigr)^2$$ for $F(w):=f(w)-\frac{\alpha_1+\alpha_2}{2}$. In view of Theorem 1 (in a general domain $\mathbf{D}\subseteq\mathbf{C}$) and Example 2 of Li [@Li5] (see also Liao-Zhang [@LZ Theorem 3.1]), [\[Eq2.9\]](#Eq2.9){reference-type="eqref" reference="Eq2.9"} has no transcendental meromorphic solution $F$ in $\mathbf{C}$ when $1\leq m_1+m_2\leq2$, so that $u$ is pseudoprime as $f$ may be rational. When $m_1=m_2=0$, we get $F(w)=\frac{\alpha_1-\alpha_2}{2}\sin\bigl(\sqrt{-A_0}\hspace{0.2mm}w+A_1\bigr)$ by Liao-Tang [@LT Theorem 1], so that $$f(w)=\frac{\alpha_1+\alpha_2}{2}+\frac{\alpha_1-\alpha_2}{2}\sin\bigl(\sqrt{-A_0}\hspace{0.2mm}w+A_1\bigr),$$ where $A_0\neq0,A_1$ are constants.
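A similar symbolic spot-check (again a sketch with symbolic constants) confirms that this $f$ satisfies $(f^\prime)^2(w)=A_0(f(w)-\alpha_1)(f(w)-\alpha_2)$, i.e., equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} with $\ell=\hbar=2$ and $m_1=m_2=0$.

```python
import sympy as sp

w, A0, A1, al1, al2 = sp.symbols('w A_0 A_1 alpha_1 alpha_2')

f = (al1 + al2)/2 + (al1 - al2)/2 * sp.sin(sp.sqrt(-A0)*w + A1)
residual = sp.diff(f, w)**2 - A0*(f - al1)*(f - al2)
print(sp.simplify(residual))  # expected output: 0
```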
**Subcase 2.3.** $\hbar=3$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$\label{Eq2.10}
(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2(w)=A_0(f(w)-\alpha_1)(f(w)-\alpha_2)(f(w)-\alpha_3).$$ When $m_1=m_2=0$, the Weierstrass $\wp$-function is a transcendental meromorphic solution for suitable constants $A_0\cdot\alpha_1\cdot\alpha_2\cdot\alpha_3\neq0$. When $m_1=m_2=1$ and $a_1=-a_2=2$, Bank-Kaufman [@BK1 Section 5] constructed a solution, as a composite of $\wp$ and *fractional logarithm*, to equation [\[Eq2.10\]](#Eq2.10){reference-type="eqref" reference="Eq2.10"}, and they [@BK2 Theorem] further observed transcendental meromorphic solutions to [\[Eq2.10\]](#Eq2.10){reference-type="eqref" reference="Eq2.10"} with nonconstant rational coefficients satisfy $T(r,f)=O\bigl(\log^2r\bigr)$ and $T(r,f)\neq o\bigl(\log^2r\bigr)$.
**Subcase 2.4.** $\hbar=4$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$\label{Eq2.11}
(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2(w)=A_0(f(w)-\alpha_1)(f(w)-\alpha_2)(f(w)-\alpha_3)(f(w)-\alpha_4).$$ Ishizaki-Toda [@IT Section 3] provided a detailed discussion on this equation. Note once [\[Eq2.11\]](#Eq2.11){reference-type="eqref" reference="Eq2.11"} has a transcendental meromorphic solution, it will then have at least four such solutions.
**Subcase 2.5.** $\hbar\geq5$. In this subcase, equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"} reads $$(f^\prime)^2(w)=\frac{A_0}{(w-a_1)^{m_1}(w-a_2)^{m_2}}(f(w)-\alpha_1)(f(w)-\alpha_2)\cdots(f(w)-\alpha_\hbar).$$ By virtue of Hayman [@Ha1 Lemma 2.3 and Theorem 3.1], one has $$\label{Eq2.12}
\begin{split}
\hbar T(r,f)&=T(r,(f-\alpha_1)(f-\alpha_2)\cdots(f-\alpha_\hbar))+O(1)\\
&=T\bigl(r,(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2\bigr)+O(1)\\
&\leq2T(r,f^\prime)+O(\log r)\leq(4+\epsilon)T(r,f)+O(\log r)
\end{split}$$ for all $r$ outside of a possible set of finite Lebesgue measure with $\epsilon>0$ arbitrarily small, which implies that $f$ is rational, and therefore, $u$ is pseudoprime.
**Case 3.** $\ell\geq3$. In this case, $s\leq\ell$ and accordingly $m_1+m_2+\cdots+m_s\leq\ell$.
**Subcase 3.1.** $\hbar=1$. In this subcase, one has $Q(w)(f^\prime)^\ell(w)=A_0(f(w)-\alpha_1)$, where $Q(w)$ is a polynomial of degree no larger than $\ell$. Let $w_1$ be a root of $f(w)-\alpha_1=0$; a comparison of its multiplicity on both sides implies $Q(w_1)=0$, so that $f-\alpha_1$ has only finitely many zeros. Besides, one sees that $f$ has no pole. So, $f(w)-\alpha_1=q(w)e^{\delta(w)}$ for an entire function $\delta$ and a polynomial $q$ with $\deg(q)\leq\deg(Q)$. Routine calculations lead to $e^{(\ell-1)\delta(w)}=\frac{A_0q(w)}{Q(w)(\delta^\prime q+q^\prime)^\ell(w)}$, which implies $\delta,q^\prime$ are constants. That is, $f$ is linear, and thus, $u$ is prime.
In fact, all zeros of $f-\alpha_1$ are simple and $q$ is a product of distinct linear factors of $Q$. The form $e^{(\ell-1)\delta}=\frac{A_0q}{Q(\delta^\prime q+q^\prime)^\ell}$ leads to $\delta$ a constant, and then $q/Q,q^\prime$ constants.
**Subcase 3.2.** $\hbar=2$. In this subcase, one has $Q(w)(f^\prime)^\ell(w)=A_0(f(w)-\alpha_1)(f(w)-\alpha_2)$. As shown above, $(f-\alpha_1)(f-\alpha_2)$ has only finitely many zeros. Hence, $\frac{f(w)-\alpha_1}{f(w)-\alpha_2}=r(w)e^{\delta(w)}$ for an entire function $\delta$ and a rational function $r$ whose zeros and poles are from the zeros of $Q(w)$; so, $f(w)=\frac{\alpha_2r(w)e^{\delta(w)}-\alpha_1}{r(w)e^{\delta(w)}-1}$. Routine calculations then yield $$\biggl(\frac{e^{\delta(w)/2}}{r(w)e^{\delta(w)}-1}\biggr)^{2(\ell-1)}=\frac{A_0r(w)}{(\alpha_1-\alpha_2)^{\ell-2}Q(w)(\delta^\prime r+r^\prime)^\ell(w)},$$ which leads to $\delta$ a constant. That is, $f$ is rational, and thus, $u$ is pseudoprime.
**Subcase 3.3.** $\hbar\geq3$. Now, $Q(w)(f^\prime)^\ell(w)=A_0(f(w)-\alpha_1)(f(w)-\alpha_2)\cdots(f(w)-\alpha_\hbar)$. As in the preceding subcase, $(f(w)-\alpha_1)(f(w)-\alpha_2)\cdots(f(w)-\alpha_\hbar)$ has only finitely many zeros. By Nevanlinna's second fundamental theorem [@Ha1 Chapter 2], one has $$(\hbar-2)T(r,f)\leq\sum_{j=1}^\hbar N\Bigl(r,\frac{1}{f-\alpha_j}\Bigr)+S(r,f)=\epsilon T(r,f)+O(\log r)$$ for all $r$ outside of a possible set of finite Lebesgue measure with $\epsilon>0$ arbitrarily small, which implies that $f$ is rational, and therefore, $u$ is pseudoprime. ◻
*Proof of Corollary [Corollary 2](#Cor2){reference-type="ref" reference="Cor2"}.* As in the proof of Theorem [Theorem 1](#Thm1){reference-type="ref" reference="Thm1"} for equation [\[Eq2.5\]](#Eq2.5){reference-type="eqref" reference="Eq2.5"}, one has $$\label{Eq2.13}
(f^\prime)^\ell(w)=\frac{A_0}{(w-a_1)^{m_1}\cdots(w-a_s)^{m_s}}(f(w)-\alpha_1)^{k_1}\cdots(f(w)-\alpha_\mu)^{k_\mu}$$ for integers $\mu,k_1,k_2,\ldots,k_\mu\geq0$ and $P(w)=\alpha_0(w-\alpha_1)^{k_1}(w-\alpha_2)^{k_2}\cdots(w-\alpha_\mu)^{k_\mu}$ satisfying $m_1+m_2+\cdots+m_s\leq\ell$, $\hbar=k_1+k_2+\cdots+k_\mu$ and $\max\{k_1,k_2,\ldots,k_\mu\}\geq2$.
Now, we only need to consider two different cases as follows.
**Case 1.** $\ell=1$. In this case, $m_1\leq1$ with $s\leq1$.
If $\mu=1$, then $\frac{f^\prime(w)}{(f(w)-\alpha_1)^{k_1}}=\frac{A_0}{(w-a_1)^{m_1}}$. To get a meromorphic $f$, we notice $m_1=0$, $k_1=2$ and $f(w)=-\frac{1}{A_0w+A_1}+\alpha_1$ for two constants $A_0\neq0,A_1$. So, $u$ is prime.
If $\mu=2$, then $\frac{f^\prime(w)}{(f(w)-\alpha_1)^{k_1}(f(w)-\alpha_2)^{k_2}}=\frac{A_0}{(w-a_1)^{m_1}}$. Since $(f-\alpha_1)(f-\alpha_2)$ may have $w=a_1$ as its only zero, $\frac{f(w)-\alpha_1}{f(w)-\alpha_2}=r(w)e^{\delta(w)}$ for an entire function $\delta$ and a (reciprocal) linear function $r$; so, $f(w)=\frac{\alpha_2r(w)e^{\delta(w)}-\alpha_1}{r(w)e^{\delta(w)}-1}$. As in **Subcase 3.2**, using $k_1+k_2\geq3$, upon standard calculations, we see that $f$ is a linear fractional function, and therefore, $u$ is prime.
If $\mu\geq3$, then exactly as in **Subcase 1.3** with $\max\{k_1,k_2,\ldots,k_\mu\}\geq2$, we deduce that $f$ is a constant, and thus, $u$ is pseudoprime because $g$ must be a polynomial.
In summary, $u$ is pseudoprime when $\ell=1$ and $\max\{k_1,k_2,\ldots,k_\mu\}\geq2$.
**Case 2.** $\ell=2$. In this case, we can utilize exactly the same analysis as in [\[Eq2.12\]](#Eq2.12){reference-type="eqref" reference="Eq2.12"} to have $f$ rational, and therefore, $u$ pseudoprime, provided $\hbar=\deg(P)\geq5$.
On the other hand, notice when $\mu=3$, $k_1=k_2=1$ and $k_3=2$, we have $$\label{Eq2.14}
(w-a_1)^{m_1}(w-a_2)^{m_2}(f^\prime)^2(w)=A_0(f(w)-\alpha_1)(f(w)-\alpha_2)(f(w)-\alpha_3)^2.$$ Ishizaki-Toda [@IT Section 2] provided a detailed discussion on this equation. Note once [\[Eq2.14\]](#Eq2.14){reference-type="eqref" reference="Eq2.14"} has a transcendental meromorphic solution, it will then have at least two such solutions. ◻
# Proof of Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"} {#PT3}
*Proof of Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"}.* Let $u(z)$ be an entire solution to equation [\[Eq1.3\]](#Eq1.3){reference-type="eqref" reference="Eq1.3"} in $\mathbf{C}^n$. An application of [@CLY Theorem 4.1] implies that $p(w):\mathbf{C}\to\mathbf{P}$ must be a rational function, say, $$p(w)=c_0\frac{(w-a_1)^{m_1}(w-a_2)^{m_2}\cdots(w-a_s)^{m_s}}{(w-b_1)^{l_1}(w-b_2)^{l_2}\cdots(w-b_t)^{l_t}}$$ for pairwise distinct complex numbers $a_1,a_2,\ldots,a_s,b_1,b_2,\ldots,b_t$, a constant $c_0\neq0$, and integers $m_1,m_2,\ldots,m_s,l_1,l_2,\ldots,l_t\geq0$. Therefore, one has $$\label{Eq3.1}
(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})^\ell=c_0\frac{(u-a_1)^{m_1}(u-a_2)^{m_2}\cdots(u-a_s)^{m_s}}{(u-b_1)^{l_1}(u-b_2)^{l_2}\cdots(u-b_t)^{l_t}}.$$ As the left-hand side is analytic in $\mathbf{C}^n$, $t$ is at most $1$, in which case $b_1$ must be the only finite Picard (omitted) value of $u$. Without loss of generality, suppose $t=1$ and $u(z)-b_1=e^{\beta(z)}$ for an entire function $\beta(z):\mathbf{C}^n\to\mathbf{C}$; then, substitute this into [\[Eq3.1\]](#Eq3.1){reference-type="eqref" reference="Eq3.1"} to deduce $$(\rho_1\beta_{z_1}+\rho_2\beta_{z_2}+\cdots+\rho_n\beta_{z_n})^\ell=c_0\frac{(e^\beta+b_1-a_1)^{m_1}\cdots(e^\beta+b_1-a_s)^{m_s}}{e^{(\ell+l_1)\beta}},$$ and an application of [@CLY Theorem 4.1] leads to $u(z)$ a constant. So, $t=0$.
We have shown that $p(w):\mathbf{C}\to\mathbf{C}$ is a polynomial; so, equation [\[Eq3.1\]](#Eq3.1){reference-type="eqref" reference="Eq3.1"} reads $$\label{Eq3.2}
(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})^\ell=c_0(u-a_1)^{m_1}(u-a_2)^{m_2}\cdots(u-a_s)^{m_s}.$$ By virtue of the *logarithmic derivative lemma* (see [@Vi]), one immediately sees $$\label{Eq3.3}
\hbar=m_1+m_2+\cdots+m_s\leq\ell.$$
Now, let $z_{a_j}\in\mathbf{C}^n$ be a root of $u(z_{a_j})-a_j=0$ with multiplicity $\nu^{a_j}_u\in\mathbf{N}$. Then, it is clear that $\min\bigl\{\nu^{a_j}_{u_{z_1}},\nu^{a_j}_{u_{z_2}},\ldots,\nu^{a_j}_{u_{z_n}}\bigr\}=\nu^{a_j}_u-1$. Using [\[Eq3.2\]](#Eq3.2){reference-type="eqref" reference="Eq3.2"}, we also note that $\ell\cdot\nu^{a_j}_{\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}}=m_j\cdot\nu^{a_j}_u$. By [\[Eq3.2\]](#Eq3.2){reference-type="eqref" reference="Eq3.2"} and [\[Eq3.3\]](#Eq3.3){reference-type="eqref" reference="Eq3.3"}, one has either $s=1$, $m_1=\ell$ and $\nu^{a_j}_{\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}}=\nu^{a_j}_u$, or $s\geq1$, $m_j<\ell$ and $\nu^{a_j}_{\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}}=\nu^{a_j}_u-1$ so that $$\label{Eq3.4}
\frac{\ell}{2}\leq m_j=\ell\cdot\frac{\nu^{a_j}_u-1}{\nu^{a_j}_u}<\ell$$ provided $\nu^{a_j}_u\geq2$. If $\nu^{a_j}_u=1$, then $\nu^{a_j}_{\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}}=1$, $s=1$ and $m_1=\ell$.
Next, assume $a_1$ is the finite Picard value of $u$; so, $u(z)-a_1=e^{\gamma(z)}$ for an entire function $\gamma(z):\mathbf{C}^n\to\mathbf{C}$, and [\[Eq3.2\]](#Eq3.2){reference-type="eqref" reference="Eq3.2"} reads $$(\rho_1\gamma_{z_1}+\rho_2\gamma_{z_2}+\cdots+\rho_n\gamma_{z_n})^\ell=c_0\frac{(e^\gamma+a_1-a_2)^{m_2}\cdots(e^\gamma+a_1-a_s)^{m_s}}{e^{(\ell-m_1)\gamma}},$$ immediately implying $s=1$ and $m_1=\ell$ in view of [@CLY Theorem 4.1]. If $s=2$, then none of $a_j$ can be the finite Picard value of $u$, and [\[Eq3.3\]](#Eq3.3){reference-type="eqref" reference="Eq3.3"} and [\[Eq3.4\]](#Eq3.4){reference-type="eqref" reference="Eq3.4"} lead to $m_1=m_2=\frac{\ell}{2}$.
In summary, one has either $s=1$ and $m_1\leq\ell$, or $s=2$ and $m_1=m_2=\frac{\ell}{2}$.
Below, we consider three different cases and their associated subcases.
**Case 1.** $s=0$. In this case, one has $p(w)=c_0$ and $$\label{Eq3.5}
\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}=\sqrt[\ell]{c_0}.$$ The characteristic curve of [\[Eq3.5\]](#Eq3.5){reference-type="eqref" reference="Eq3.5"} (see Evans [@Ev Section 3.2]), for a parameter $\tau$, reads $$\frac{dz_1}{d\tau}=\rho_1,\,\,\frac{dz_2}{d\tau}=\rho_2,\,\,\ldots,\,\,\frac{dz_n}{d\tau}=\rho_n\,\,\mathrm{and}\,\,\frac{du}{d\tau}=\sqrt[\ell]{c_0}.$$ Given initial conditions, say, $z_1=0,z_2=d_2,\ldots,z_n=d_n$ and $u=\varphi(d_2,\ldots,d_n)$, one has $$\begin{split}
&z_1=\rho_1\tau,\,\,z_2=\rho_2\tau+d_2,\,\,\ldots,\,\,z_n=\rho_n\tau+d_n,\\
&\tau=\frac{z_1}{\rho_1},\,\,d_2=z_2-\frac{\rho_2}{\rho_1}z_1,\,\,\ldots,\,\,d_n=z_n-\frac{\rho_n}{\rho_1}z_1,
\end{split}$$ and $$\begin{split}
u&=\sqrt[\ell]{c_0}\hspace{0.2mm}\tau+\varphi(d_2,\ldots,d_n)=\sqrt[\ell]{c_0}\hspace{0.2mm}(\varrho_1\tau+\varrho_2\tau+\cdots+\varrho_n\tau)+\varphi(d_2,\ldots,d_n)\\
&=\sqrt[\ell]{c_0}\Bigl(\frac{\varrho_1}{\rho_1}z_1+\frac{\varrho_2}{\rho_2}z_2+\cdots+\frac{\varrho_n}{\rho_n}z_n\Bigr)
-\sqrt[\ell]{c_0}\Bigl(\frac{\varrho_2}{\rho_2}d_2+\cdots+\frac{\varrho_n}{\rho_n}d_n\Bigr)+\varphi(d_2,\ldots,d_n)\\
&=\sqrt[\ell]{c_0}\Bigl(\frac{\varrho_1}{\rho_1}z_1+\frac{\varrho_2}{\rho_2}z_2+\cdots+\frac{\varrho_n}{\rho_n}z_n\Bigr)+\psi(d_2,\ldots,d_n)\\
&=\sqrt[\ell]{c_0}\Bigl(\frac{\varrho_1}{\rho_1}z_1+\frac{\varrho_2}{\rho_2}z_2+\cdots+\frac{\varrho_n}{\rho_n}z_n\Bigr)
+\psi\Bigl(z_2-\frac{\rho_2}{\rho_1}z_1,\ldots,z_n-\frac{\rho_n}{\rho_1}z_1\Bigr)
\end{split}$$ with $\varrho_1,\varrho_2,\ldots,\varrho_n$ complex numbers satisfying $\varrho_1+\varrho_2+\cdots+\varrho_n=1$. That is, $$u(z)=\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z).$$
It is noteworthy that different initial conditions generate different $\Phi(z)$ as those in Example [Example 5](#Exm5){reference-type="ref" reference="Exm5"}, and there are other $\Phi(z)$ as described in Examples [Example 4](#Exm4){reference-type="ref" reference="Exm4"}, [Example 6](#Exm6){reference-type="ref" reference="Exm6"}, [Example 7](#Exm7){reference-type="ref" reference="Exm7"} and many more.
**Case 2.** $s=1$. In this case, one has $p(w)=c_0(w-a_1)^\hbar$ and $$(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})^\ell=c_0(u-a_1)^\hbar.$$
**Subcase 2.1.** $\hbar<\ell$. In this subcase, we have $$\label{Eq3.6}
\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}=\sqrt[\ell]{c_0}\hspace{0.2mm}(u-a_1)^{\frac{\hbar}{\ell}}$$ with $(u-a_1)^{\frac{\hbar}{\ell}}$ entire in $\mathbf{C}^n$. The characteristic curve of [\[Eq3.6\]](#Eq3.6){reference-type="eqref" reference="Eq3.6"}, for a parameter $\tau$, reads $$\frac{dz_1}{d\tau}=\rho_1,\,\,\frac{dz_2}{d\tau}=\rho_2,\,\,\ldots,\,\,\frac{dz_n}{d\tau}=\rho_n\,\,\mathrm{and}\,\,
\frac{du}{d\tau}=\sqrt[\ell]{c_0}\hspace{0.2mm}(u-a_1)^{\frac{\hbar}{\ell}}.$$ Given initial conditions $z_1=d_1,z_2=d_2,\ldots,z_n=d_n$ and $u=\varphi(d_1,d_2,\ldots,d_n)$, one has $$(u-a_1)^{1-\frac{\hbar}{\ell}}=\frac{\ell-\hbar}{\ell}\sqrt[\ell]{c_0}\hspace{0.2mm}(\varrho_1\tau+\varrho_2\tau+\cdots+\varrho_n\tau)+\tilde{\varphi}(d_1,d_2,\ldots,d_n)$$ with $\tilde{\varphi}(d_1,d_2,\ldots,d_n):=(\varphi(d_1,d_2,\ldots,d_n)-a_1)^{1-\frac{\hbar}{\ell}}$ entire in $\mathbf{C}^n$ for appropriate $\varphi$. So, $$u(z)=a_1+\Bigl(\frac{\ell-\hbar}{\ell}\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\Bigr)^{\frac{\ell}{\ell-\hbar}}.$$ It is apparent that $u(z)$ is an entire function in $\mathbf{C}^n$ when $\frac{\ell}{\ell-\hbar}$ is an integer.
**Subcase 2.2.** $\hbar=\ell$. In this subcase, we have $$\label{Eq3.7}
\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}=\sqrt[\ell]{c_0}\hspace{0.2mm}(u-a_1).$$ Similarly, the characteristic curve of [\[Eq3.7\]](#Eq3.7){reference-type="eqref" reference="Eq3.7"}, for a parameter $\tau$, reads $$\frac{dz_1}{d\tau}=\rho_1,\,\,\frac{dz_2}{d\tau}=\rho_2,\,\,\ldots,\,\,\frac{dz_n}{d\tau}=\rho_n\,\,\mathrm{and}\,\,\frac{du}{d\tau}=\sqrt[\ell]{c_0}\hspace{0.2mm}(u-a_1).$$ Given initial conditions $z_1=d_1,z_2=d_2,\ldots,z_n=d_n$ and $u=\varphi(d_1,d_2,\ldots,d_n)$, one has $$u-a_1=(\varphi(d_1,d_2,\ldots,d_n)-a_1)\exp\bigl(\sqrt[\ell]{c_0}\hspace{0.2mm}(\varrho_1\tau+\varrho_2\tau+\cdots+\varrho_n\tau)\bigr).$$ That is, $$u(z)=a_1+\Phi(z)\exp\bigl(\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)\bigr).$$ When $0$ is the finite Picard value of $\Phi(z)$, $a_1$ is the finite Picard value of $u(z)$. So, if we write $\Phi^*(z):=\ln(\Phi(z))$ to be an entire function in $\mathbf{C}^n$, then it follows that $$u(z)=a_1+\exp\bigl(\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi^*(z)\bigr).$$
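A minimal symbolic sketch of this subcase with $n=2$ and $\ell=3$ follows; the particular choices of $\sigma_j$ and of $\Phi$ (a function of $\rho_2z_1-\rho_1z_2$) below are arbitrary ones satisfying $\rho_1\sigma_1+\rho_2\sigma_2=1$ and $\rho_1\Phi_{z_1}+\rho_2\Phi_{z_2}=0$.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
r1, r2, c0, a1 = sp.symbols('rho1 rho2 c0 a1', positive=True)
g = sp.Function('g')
ell = 3

s1, s2 = 1/(2*r1), 1/(2*r2)        # rho1*s1 + rho2*s2 = 1
Phi = g(r2*z1 - r1*z2)             # rho1*Phi_z1 + rho2*Phi_z2 = 0

u = a1 + Phi*sp.exp(sp.root(c0, ell)*(s1*z1 + s2*z2))
# Check (3.2) with s = 1 and m_1 = ell, i.e., (rho . grad u)**ell = c0*(u - a1)**ell.
residual = (r1*sp.diff(u, z1) + r2*sp.diff(u, z2))**ell - c0*(u - a1)**ell
print(sp.simplify(residual))  # expected output: 0
```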
**Case 3.** $s=2$. In this case, one has $p(w)=c_0(w-a_1)^{\frac{\ell}{2}}(w-a_2)^{\frac{\ell}{2}}$ and $$\label{Eq3.8}
(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})^2=\sqrt[\ell]{c^2_0}\hspace{0.2mm}(u-a_1)(u-a_2),$$ which can be easily rewritten as $$\Bigl(\sqrt[\ell]{c_0}\Bigl(u-\frac{a_1+a_2}{2}\Bigr)\Bigr)^2-(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})^2=\Bigl(\sqrt[\ell]{c_0}\Bigl(\frac{a_1-a_2}{2}\Bigr)\Bigr)^2.$$ Recalling $u$ is an entire function in $\mathbf{C}^n$, we deduce that $$\sqrt[\ell]{c_0}\Bigl(u-\frac{a_1+a_2}{2}\Bigr)\pm(\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n})=\sqrt[\ell]{c_0}\Bigl(\frac{a_1-a_2}{2}\Bigr)e^{\pm\delta(z)}$$ for an entire function $\delta(z):\mathbf{C}^n\to\mathbf{C}$. As a consequence, one observes $$\label{Eq3.9}
\begin{split}
&u=\frac{a_1+a_2}{2}+\frac{a_1-a_2}{2}\cosh(\delta)\,\,\mathrm{and}\\
&\rho_1u_{z_1}+\rho_2u_{z_2}+\cdots+\rho_nu_{z_n}=\sqrt[\ell]{c_0}\hspace{0.2mm}\frac{a_1-a_2}{2}\sinh(\delta),
\end{split}$$ implying $$\sinh(\delta)(\rho_1\delta_{z_1}+\rho_2\delta_{z_2}+\cdots+\rho_n\delta_{z_n})=\sqrt[\ell]{c_0}\hspace{0.2mm}\sinh(\delta),$$ which leads back to equation [\[Eq3.5\]](#Eq3.5){reference-type="eqref" reference="Eq3.5"} now satisfied by $\delta(z)$. So, [\[Eq3.9\]](#Eq3.9){reference-type="eqref" reference="Eq3.9"} yields $$u(z)=\frac{a_1+a_2}{2}+\frac{a_1-a_2}{2}\cosh\bigl(\sqrt[\ell]{c_0}\hspace{0.2mm}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\bigr).$$
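The cosh form just obtained can be checked directly against [\[Eq3.8\]](#Eq3.8){reference-type="eqref" reference="Eq3.8"}; here is a minimal sketch for $\ell=2$ and $n=2$, with the same arbitrary choices of $\sigma_j$ and $\Phi$ as in the previous sketch.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
r1, r2, c0, a1, a2 = sp.symbols('rho1 rho2 c0 a1 a2', positive=True)
g = sp.Function('g')

s1, s2 = 1/(2*r1), 1/(2*r2)        # rho1*s1 + rho2*s2 = 1
Phi = g(r2*z1 - r1*z2)             # rho1*Phi_z1 + rho2*Phi_z2 = 0

delta = sp.sqrt(c0)*(s1*z1 + s2*z2) + Phi
u = (a1 + a2)/2 + (a1 - a2)/2 * sp.cosh(delta)

lhs = (r1*sp.diff(u, z1) + r2*sp.diff(u, z2))**2   # ell = 2 in equation (3.8)
rhs = c0*(u - a1)*(u - a2)
print(sp.simplify(lhs - rhs))  # expected output: 0
```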
All the preceding discussions conclude the proof of Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"}. ◻
# Proof of Theorem [Theorem 8](#Thm8){reference-type="ref" reference="Thm8"} {#PT8}
We start this final section by first briefly reviewing the concept of characteristics following Evans [@Ev Section 3.2] with symbols adapted to our setting.
Given a general first-order PDE $F(Du,u,z)=0$, for a parameter $\tau$, write $$\left\{\begin{array}{ll}
z(\tau):=(z_1(\tau),z_2(\tau),\ldots,z_n(\tau)), \medskip\\
u(\tau):=u(z(\tau))\,\,\mathrm{and} \medskip\\
Du(\tau):=(u_{z_1}(z(\tau)),u_{z_2}(z(\tau)),\ldots,u_{z_n}(z(\tau))).
\end{array}\right.$$ The associated *characteristics*, in terms of $F(x_1,x_2,\ldots,x_n,u,y_1,y_2,\ldots,y_n)$, read $$\label{Eq4.1}
\frac{dz(\tau)}{d\tau}=\Bigl(\frac{dz_1(\tau)}{d\tau},\frac{dz_2(\tau)}{d\tau},\ldots,\frac{dz_n(\tau)}{d\tau}\Bigr)=F_x(Du(\tau),u(\tau),z(\tau))$$ with $F_x(x_1,x_2,\ldots,x_n,u,y_1,y_2,\ldots,y_n):=(F_{x_1},F_{x_2},\ldots,F_{x_n})$, $$\label{Eq4.2}
\begin{split}
\frac{dDu(\tau)}{d\tau}&=\Bigl(\frac{du_{z_1}(z(\tau))}{d\tau},\frac{du_{z_2}(z(\tau))}{d\tau},\ldots,\frac{du_{z_n}(z(\tau))}{d\tau}\Bigr)\\
&=-F_u(Du(\tau),u(\tau),z(\tau))Du(\tau)-F_y(Du(\tau),u(\tau),z(\tau))
\end{split}$$ with $F_y(x_1,x_2,\ldots,x_n,u,y_1,y_2,\ldots,y_n):=(F_{y_1},F_{y_2},\ldots,F_{y_n})$, and $$\label{Eq4.3}
\frac{du(\tau)}{d\tau}=Du(\tau)\cdot\frac{dz(\tau)}{d\tau}=Du(\tau)\cdot F_x(Du(\tau),u(\tau),z(\tau)).$$
Equation [\[Eq4.1\]](#Eq4.1){reference-type="eqref" reference="Eq4.1"} is the key to the success of characteristics, and if $F(Du,u,z)=0$ is linear as in the situation of Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"}, then only equations [\[Eq4.1\]](#Eq4.1){reference-type="eqref" reference="Eq4.1"} and [\[Eq4.3\]](#Eq4.3){reference-type="eqref" reference="Eq4.3"} are needed.
*Proof of Theorem [Theorem 8](#Thm8){reference-type="ref" reference="Thm8"}.* For equation [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"}, it is readily seen that $\hbar\leq\ell$ using the same analysis as before and its associated characteristics are simplified to be $$\label{Eq4.4}
\begin{split}
&\frac{dz(\tau)}{d\tau}=\bigl(\ell u^{\ell-1}_{z_1}(z(\tau)),\ell u^{\ell-1}_{z_2}(z(\tau)),\ldots,\ell u^{\ell-1}_{z_n}(z(\tau))\bigr),\\
&\frac{dDu(\tau)}{d\tau}=\hbar u^{\hbar-1}Du(\tau)\,\,\mathrm{and}\,\,\frac{du(\tau)}{d\tau}=\ell u^\hbar.
\end{split}$$
Below, we consider four different cases to finish our discussions.
**Case 1.** $\hbar=0$. In this case, we further consider the partial differential equation $$\label{Eq4.5}
u^{\ell_1}_{z_1}+u^{\ell_2}_{z_2}+\cdots+u^{\ell_n}_{z_n}=1$$ with $\ell_1,\ell_2,\ldots,\ell_n\geq1$ integers, not necessarily the same. As now $\frac{dDu(\tau)}{d\tau}=0$ independent of $\tau$, we write $u_{z_j}(z(\tau))=\sigma_j$ for $j=1,2,\ldots,n$ and $\frac{dz(\tau)}{d\tau}=\bigl(\ell_1\sigma_1^{\ell_1-1},\ell_2\sigma_2^{\ell_2-1},\ldots,\ell_n\sigma_n^{\ell_n-1}\bigr)$. Given initial conditions, say, $z_1=0,z_2=d_2,\ldots,z_n=d_n$ and $u=\varphi(d_2,\ldots,d_n)$, one has $$z_1=\ell_1\sigma_1^{\ell_1-1}\tau,\,\,z_2=\ell_2\sigma_2^{\ell_2-1}\tau+d_2,\,\,\ldots,\,\,z_n=\ell_n\sigma_n^{\ell_n-1}\tau+d_n$$ and $$\begin{split}
u&=\bigl(\ell_1\sigma_1^{\ell_1}+\ell_2\sigma_2^{\ell_2}+\cdots+\ell_n\sigma_n^{\ell_n}\bigr)\tau+\varphi(d_2,\ldots,d_n)\\
&=\bigl(\ell_1\sigma_1^{\ell_1}+\ell_2\sigma_2^{\ell_2}+\cdots+\ell_n\sigma_n^{\ell_n}\bigr)(\varrho_1\tau+\varrho_2\tau+\cdots+\varrho_n\tau)+\varphi(d_2,\ldots,d_n)\\
&=\bigl(\ell_1\sigma_1^{\ell_1}+\ell_2\sigma_2^{\ell_2}+\cdots+\ell_n\sigma_n^{\ell_n}\bigr)
\biggl(\frac{\varrho_1z_1}{\ell_1\sigma_1^{\ell_1-1}}+\frac{\varrho_2z_2}{\ell_2\sigma_2^{\ell_2-1}}+\cdots+\frac{\varrho_nz_n}{\ell_n\sigma_n^{\ell_n-1}}\biggr)+\Phi(z)
\end{split}$$ following Theorem [Theorem 3](#Thm3){reference-type="ref" reference="Thm3"}, **Case 1** verbatim for constants $\varrho_1,\varrho_2,\ldots,\varrho_n$ with $\varrho_1+\varrho_2+\cdots+\varrho_n=1$. Take $\varrho_j:=\frac{\ell_j\sigma^{\ell_j}_j}{\ell_1\sigma_1^{\ell_1}+\ell_2\sigma_2^{\ell_2}+\cdots+\ell_n\sigma_n^{\ell_n}}$ for $j=1,2,\ldots,n$ to deduce $$u(z)=\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n+\Phi(z)$$ with $\sigma^{\ell_1}_1+\sigma^{\ell_2}_2+\cdots+\sigma^{\ell_n}_n=1$ and $\sum^n_{j=1}\sum^{\ell_j}_{\iota=1}\sigma^{\ell_j-\iota}_j\Phi^\iota_{z_j}=0$.
**Case 2.** $\hbar=1$. In this case, we further consider the partial differential equation $$\label{Eq4.6}
u^{\ell_1}_{z_1}+u^{\ell_2}_{z_2}+\cdots+u^{\ell_n}_{z_n}=u.$$ The second equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"} now reads $\frac{dDu(\tau)}{d\tau}=Du(\tau)$, and thus, $$\label{Eq4.7}
Du(\tau)=(u_{z_1}(z(\tau)),u_{z_2}(z(\tau)),\ldots,u_{z_n}(z(\tau)))=(\varsigma_1e^\tau,\varsigma_2e^\tau,\ldots,\varsigma_ne^\tau)$$ with $\varsigma_j:=u_{z_j}(z(0))$ for $j=1,2,\ldots,n$. Consequently, this leads to $$\begin{split}
\frac{dz(\tau)}{d\tau}&=\bigl(\ell_1u^{\ell_1-1}_{z_1}(z(\tau)),\ell_2u^{\ell_2-1}_{z_2}(z(\tau)),\ldots,\ell_nu^{\ell_n-1}_{z_n}(z(\tau))\bigr)\\
&=\bigl(\ell_1\varsigma^{\ell_1-1}_1e^{(\ell_1-1)\tau},\ell_2\varsigma^{\ell_2-1}_2e^{(\ell_2-1)\tau},\ldots,\ell_n\varsigma^{\ell_n-1}_ne^{(\ell_n-1)\tau}\bigr),
\end{split}$$ so that $$\label{Eq4.8}
z_j(\tau)=\frac{\ell_j}{\ell_j-1}\varsigma^{\ell_j-1}_j\bigl(e^{(\ell_j-1)\tau}-1\bigr)+d_j$$ with $d_j:=z_j(0)$ for $j=1,2,\ldots,n$. Finally, by [\[Eq4.3\]](#Eq4.3){reference-type="eqref" reference="Eq4.3"}, we have $$\frac{du(\tau)}{d\tau}=\ell_1\varsigma^{\ell_1}_1e^{\ell_1\tau}+\ell_2\varsigma^{\ell_2}_2e^{\ell_2\tau}+\cdots+\ell_n\varsigma^{\ell_n}_ne^{\ell_n\tau},$$ and thus, for $u_0:=u(z(0))=\varphi(d_1,d_2,\ldots,d_n)$, one observes $$\label{Eq4.9}
u(\tau)=\varsigma^{\ell_1}_1\bigl(e^{\ell_1\tau}-1\bigr)+\varsigma^{\ell_2}_2\bigl(e^{\ell_2\tau}-1\bigr)+\cdots+\varsigma^{\ell_n}_n\bigl(e^{\ell_n\tau}-1\bigr)+u_0.$$ Combine [\[Eq4.8\]](#Eq4.8){reference-type="eqref" reference="Eq4.8"} and [\[Eq4.9\]](#Eq4.9){reference-type="eqref" reference="Eq4.9"} with routine calculations to deduce $$u(z)=\sum^n_{j=1}\varsigma^{\ell_j}_j\biggl(\frac{(z_j-d_j)(\ell_j-1)}{\ell_j\varsigma^{\ell_j-1}_j}+1\biggr)^{\frac{\ell_j}{\ell_j-1}}$$ as $\varsigma^{\ell_1}_1+\varsigma^{\ell_2}_2+\cdots+\varsigma^{\ell_n}_n=u_0$ by [\[Eq4.6\]](#Eq4.6){reference-type="eqref" reference="Eq4.6"}, [\[Eq4.7\]](#Eq4.7){reference-type="eqref" reference="Eq4.7"} and [\[Eq4.9\]](#Eq4.9){reference-type="eqref" reference="Eq4.9"}. To have $u$ entire, it must be $\ell_1=\ell_2=\cdots=\ell_n=2$; therefore, $$\label{Eq4.10}
u(z)=\frac{z^2_1}{4}+\frac{z^2_2}{4}+\cdots+\frac{z^2_n}{4}+\Lambda(z),$$ where $\Lambda(z)$ is an entire function in $\mathbf{C}^n$ depending on $d_j,\varsigma_j$ for $j=1,2,\ldots,n$.
Below, we show $\Lambda(z)$ is linear. In fact, $u$ being an entire solution to [\[Eq4.6\]](#Eq4.6){reference-type="eqref" reference="Eq4.6"} implies $$\label{Eq4.11}
\sum^n_{j=1}\bigl(z_j\Lambda_{z_j}(z)+\Lambda^2_{z_j}(z)\bigr)-\Lambda(z)=0.$$ Equation [\[Eq4.2\]](#Eq4.2){reference-type="eqref" reference="Eq4.2"} immediately yields $\frac{dD\Lambda(\tau)}{d\tau}=0$ independent of $\tau$ using the same parameter; so, $\Lambda_{z_j}(z(\tau))=c_j$ and $\frac{dz_j(\tau)}{d\tau}=z_j(\tau)+2c_j$ for $j=1,2,\ldots,n$ by equation [\[Eq4.1\]](#Eq4.1){reference-type="eqref" reference="Eq4.1"}. Hence, $$z_j(\tau)=(d^*_j+2c_j)e^\tau-2c_j$$ with $d^*_j:=z_j(0)$ for $j=1,2,\ldots,n$. Finally, equation [\[Eq4.3\]](#Eq4.3){reference-type="eqref" reference="Eq4.3"} implies $$\begin{split}
\frac{d\Lambda(\tau)}{d\tau}&=c_1z_1(\tau)+c_2z_2(\tau)+\cdots+c_nz_n(\tau)+2c^2_1+2c^2_2+\cdots+2c^2_n\\
&=c_1(d^*_1+2c_1)e^\tau+c_2(d^*_2+2c_2)e^\tau+\cdots+c_n(d^*_n+2c_n)e^\tau,
\end{split}$$ which leads to $$\Lambda(\tau)=c_1(d^*_1+2c_1)(e^\tau-1)+c_2(d^*_2+2c_2)(e^\tau-1)+\cdots+c_n(d^*_n+2c_n)(e^\tau-1)+\Lambda_0$$ with $\Lambda_0:=\Lambda(z(0))$, so that $$\label{Eq4.12}
\Lambda(z)=c_1z_1+c_2z_2+\cdots+c_nz_n+\Lambda^*(z)$$ with $\Lambda^*(z)$ an entire function in $\mathbf{C}^n$ depending on $c_j,d^*_j$ for $j=1,2,\ldots,n$. Suppose, without loss of generality, $\Lambda^*(z)$ has no linear terms that can be easily achieved from absorbing those terms into $c_1z_1+c_2z_2+\cdots+c_nz_n$, if necessary. Then, one has $$\label{Eq4.13}
\Lambda^*(z)=c_0+\sum_{1\leq j\leq k\leq n}c_{jk}z_jz_k+\mathrm{terms}\,\,\mathrm{of}\,\,(z^3)\,\,\mathrm{or}\,\,\mathrm{higher}$$ by abuse of notation of the term $z^3$ and $$\label{Eq4.14}
\sum^n_{j=1}\bigl[z_j\Lambda^*_{z_j}(z)+\bigl(c_j+\Lambda^*_{z_j}(z)\bigr)^2\bigr]-\Lambda^*(z)=0.$$ Apply equation [\[Eq4.2\]](#Eq4.2){reference-type="eqref" reference="Eq4.2"} to [\[Eq4.14\]](#Eq4.14){reference-type="eqref" reference="Eq4.14"} to derive $\frac{dD\Lambda^*(\tau)}{d\tau}=0$ along any parametric curve/path, which together with [\[Eq4.13\]](#Eq4.13){reference-type="eqref" reference="Eq4.13"} yields $\Lambda^*(z)=c_0$. So, equations [\[Eq4.10\]](#Eq4.10){reference-type="eqref" reference="Eq4.10"} and [\[Eq4.12\]](#Eq4.12){reference-type="eqref" reference="Eq4.12"} lead to $$u(z)=\Bigl(\frac{z_1}{2}+c_1\Bigr)^2+\Bigl(\frac{z_2}{2}+c_2\Bigr)^2+\cdots+\Bigl(\frac{z_n}{2}+c_n\Bigr)^2$$ in view of $c^2_1+c^2_2+\cdots+c^2_n=c_0$ by virtue of [\[Eq4.13\]](#Eq4.13){reference-type="eqref" reference="Eq4.13"} and [\[Eq4.14\]](#Eq4.14){reference-type="eqref" reference="Eq4.14"}.
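As a sanity check of the quadratic solution just obtained (a sketch for $n=3$ with symbolic constants $c_j$), one can verify that it solves [\[Eq4.6\]](#Eq4.6){reference-type="eqref" reference="Eq4.6"} with $\ell_1=\ell_2=\ell_3=2$.

```python
import sympy as sp

z = sp.symbols('z1 z2 z3')
c = sp.symbols('c1 c2 c3')

u = sum((zj/2 + cj)**2 for zj, cj in zip(z, c))
residual = sum(sp.diff(u, zj)**2 for zj in z) - u
print(sp.expand(residual))  # expected output: 0
```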
**Case 4.** $\hbar=\ell$. Now, the last equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"} reads $\frac{du(\tau)}{d\tau}=\ell u^\ell$ so that $$\frac{1}{u^{\ell-1}(\tau)}=-\ell(\ell-1)\tau+\frac{1}{u_0^{\ell-1}}$$ with $u_0:=u(z(0))=\varphi(d_1,d_2,\ldots,d_n)$, which then yields $$\label{Eq4.15}
u(\tau)=\frac{u_0}{\bigl(1-\ell(\ell-1)u^{\ell-1}_0\tau\bigr)^{\frac{1}{\ell-1}}}.$$ Equation [\[Eq4.15\]](#Eq4.15){reference-type="eqref" reference="Eq4.15"} combined with the second equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"} further leads to $$\frac{du_{z_j}(z(\tau))}{d\tau}=\ell u^{\ell-1}u_{z_j}(z(\tau))=\frac{\ell u^{\ell-1}_0}{1-\ell(\ell-1)u^{\ell-1}_0\tau}u_{z_j}(z(\tau)),$$ so that $$u_{z_j}(z(\tau))=\frac{\varsigma_j}{\bigl(1-\ell(\ell-1)u^{\ell-1}_0\tau\bigr)^{\frac{1}{\ell-1}}}$$ with $\varsigma_j:=u_{z_j}(z(0))$ for $j=1,2,\ldots,n$. Finally, one observes $$\frac{dz_j(\tau)}{d\tau}=\ell u^{\ell-1}_{z_j}(z(\tau))=\frac{\ell\varsigma^{\ell-1}_j}{1-\ell(\ell-1)u^{\ell-1}_0\tau}$$ using the first equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"}, which implies $$z_j(\tau)=\frac{\varsigma^{\ell-1}_j}{u^{\ell-1}_0}\ln\Biggl(\frac{1}{\bigl(1-\ell(\ell-1)u^{\ell-1}_0\tau\bigr)^{\frac{1}{\ell-1}}}\Biggr)+d_j,$$ or, in a more convenient form for the purpose of comparing with [\[Eq4.15\]](#Eq4.15){reference-type="eqref" reference="Eq4.15"}, $$\label{Eq4.16}
\frac{1}{\bigl(1-\ell(\ell-1)u^{\ell-1}_0\tau\bigr)^{\frac{1}{\ell-1}}}=e^{\frac{u^{\ell-1}_0}{\varsigma^{\ell-1}_j}(z_j(\tau)-d_j)}$$ with $d_j:=z_j(0)$ for $j=1,2,\ldots,n$. Therefore, by [\[Eq4.15\]](#Eq4.15){reference-type="eqref" reference="Eq4.15"} and [\[Eq4.16\]](#Eq4.16){reference-type="eqref" reference="Eq4.16"}, we have $$u(\tau)=\frac{u_0}{\bigl(1-\ell(\ell-1)u^{\ell-1}_0\tau\bigr)^{\frac{\sum^n_{j=1}\varrho_j}{\ell-1}}}\\
=u_0\prod^n_{j=1}\exp\biggl(\frac{\varrho_ju_0^{\ell-1}}{\varsigma^{\ell-1}_j}(z_j(\tau)-d_j)\biggr)$$ with $\varrho_1+\varrho_2+\cdots+\varrho_n=1$, which combined with $\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n=u^\ell_0$ leads to $$\label{Eq4.17}
\begin{split}
u(z)&=u_0\exp\biggl(\sum^n_{j=1}\varrho_jz_j\biggl(\frac{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}{\varsigma^\ell_j}\biggr)^{\frac{\ell-1}{\ell}}
+\Lambda_0(z)\biggr)\\
&=u_0\exp\biggl(\frac{\varsigma_1z_1+\varsigma_2z_2+\cdots+\varsigma_nz_n}{\sqrt[\ell]{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}}
+\Lambda_1(z)\biggr)\\
&=u_0\exp\bigl(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n+\Lambda_2(z)\bigr)\\
&=\Psi(z)\exp(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)
\end{split}$$ by taking $\varrho_j:=\frac{\varsigma^\ell_j}{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}$ and $\sigma_j:=\frac{\varsigma_j}{\sqrt[\ell]{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}}$ for $j=1,2,\ldots,n$, with $\Lambda_{\mu}(z),\Psi(z)$ entire functions in $\mathbf{C}^n$ depending on $d_j,\varsigma_j$ for $j=1,2,\ldots,n$ and $\mu=0,1,2$.
It is worthwhile to note that $u_0=\sqrt[\ell]{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}$ can be a nontrivial entire function having zeros in $\mathbf{C}^n$; whenever a quotient form appears, we implicitly mean that all zeros of the numerator and the denominator cancel out, except on an analytic subset of $\mathbf{C}^n$ of codimension at least $2$. Accordingly, $\varrho_j,\sigma_j\in\mathbf{C}$ are defined via the constant terms in the Taylor expansions of $u_0$ and $\varsigma_j$ over $\mathbf{C}^n$ for $j=1,2,\ldots,n$, respectively (denoted by the same symbols as the corresponding entire functions), with the convention that the remaining terms are merged into $\Lambda_0(z),\Lambda_1(z),\Lambda_2(z)$ and finally into $\Psi(z):=u_0(z)\exp(\Lambda_2(z))$ such that $\sum^n_{j=1}\sum^\ell_{\iota=1}(\sigma_j\Psi)^{\ell-\iota}\Psi^\iota_{z_j}=0$.
On the other hand, when $0$ is the finite Picard value of $\Psi(z)$, we can write $\Phi(z):=\ln(\Psi(z))$ to have $\sum^n_{j=1}\sum^\ell_{\iota=1}\sigma^{\ell-\iota}_j\Phi^\iota_{z_j}=0$ from [\[Eq1.4\]](#Eq1.4){reference-type="eqref" reference="Eq1.4"} through routine calculations and $$u(z)=\exp\bigl(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n+\Phi(z)\bigr).$$
Finally, we discuss **Case 3**, whose proof follows those of **Cases 2&4** closely. In particular, we refer to the preceding discussions when defining $\varrho_j,\sigma_j\in\mathbf{C}$ and require additionally that $u_0$ be an entire function in $\mathbf{C}^n$ such that $u^{\frac{\hbar}{\ell}}_0$ is also entire in $\mathbf{C}^n$.
**Case 3.** $1\leq\hbar<\ell$. In this case, by the last equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"}, one has $$\label{Eq4.18}
\begin{split}
u(\tau)&=u_0e^{\ell\tau}\hspace{30.09mm}\mathrm{if}\,\,\hbar=1\\
u(\tau)&=\frac{u_0}{\bigl(1-\ell(\hbar-1)u^{\hbar-1}_0\tau\bigr)^{\frac{1}{\hbar-1}}}\,\,\mathrm{if}\,\,1<\hbar<\ell
\end{split}$$ with $u_0:=u(z(0))=\varphi(d_1,d_2,\ldots,d_n)$, which, by the second equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"}, further imply $$\begin{split}
u_{z_j}(z(\tau))&=\varsigma_je^\tau\hspace{35.06mm}\mathrm{if}\,\,\hbar=1\\
u_{z_j}(z(\tau))&=\frac{\varsigma_j}{\bigl(1-\ell(\hbar-1)u^{\hbar-1}_0\tau\bigr)^{\frac{\hbar}{\ell(\hbar-1)}}}\,\,\mathrm{if}\,\,1<\hbar<\ell
\end{split}$$ with $\varsigma_j:=u_{z_j}(z(0))$ for $j=1,2,\ldots,n$. Recalling $$\begin{split}
\frac{dz_j(\tau)}{d\tau}&=\ell\varsigma^{\ell-1}_je^{(\ell-1)\tau}\hspace{22.86mm}\mathrm{if}\,\,\hbar=1\\
\frac{dz_j(\tau)}{d\tau}&=\frac{\ell\varsigma^{\ell-1}_j}{\bigl(1-\ell(\hbar-1)u^{\hbar-1}_0\tau\bigr)^{\frac{\hbar(\ell-1)}{\ell(\hbar-1)}}}\,\,\mathrm{if}\,\,1<\hbar<\ell
\end{split}$$ by the first equation in [\[Eq4.4\]](#Eq4.4){reference-type="eqref" reference="Eq4.4"}, we deduce that $$\begin{split}
z_j(\tau)&=\frac{\ell}{\ell-1}\varsigma^{\ell-1}_j\bigl(e^{(\ell-1)\tau}-1\bigr)+d_j\hspace{36.89mm}\mathrm{if}\,\,\hbar=1 \medskip\\
z_j(\tau)&=\frac{\ell\varsigma^{\ell-1}_j}{(\ell-\hbar)u^{\hbar-1}_0}
\Biggl(\frac{1}{\bigl(1-\ell(\hbar-1)u^{\hbar-1}_0\tau\bigr)^{\frac{\ell-\hbar}{\ell(\hbar-1)}}}-1\Biggr)+d_j\,\,\mathrm{if}\,\,1<\hbar<\ell
\end{split}$$ and, in a more convenient form for the comparison with [\[Eq4.18\]](#Eq4.18){reference-type="eqref" reference="Eq4.18"}, that $$\label{Eq4.19}
\begin{split}
&e^{\tau}=\biggl(\frac{(z_j(\tau)-d_j)(\ell-1)}{\ell\varsigma^{\ell-1}_j}+1\biggr)^\frac{1}{\ell-1}\hspace{42.52mm}\mathrm{if}\,\,\hbar=1 \medskip\\
&\frac{1}{\bigl(1-\ell(\hbar-1)u^{\hbar-1}_0\tau\bigr)^{\frac{1}{\hbar-1}}}
=\biggl(\frac{(z_j(\tau)-d_j)(\ell-\hbar)u^{\hbar-1}_0}{\ell\varsigma^{\ell-1}_j}+1\biggr)^\frac{\ell}{\ell-\hbar}\,\,\mathrm{if}\,\,1<\hbar<\ell
\end{split}$$ with $d_j:=z_j(0)$ for $j=1,2,\ldots,n$.
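As a sanity check on the computations for $1<\hbar<\ell$, one may verify symbolically, for a sample pair such as $(\hbar,\ell)=(2,3)$, that differentiating the displayed $z_j(\tau)$ recovers $\frac{dz_j}{d\tau}=\ell u^{\ell-1}_{z_j}(z(\tau))$ with $u_{z_j}(z(\tau))$ as displayed above; the following sketch is purely illustrative and plays no role in the argument.

```python
# Illustrative SymPy spot-check (not part of the proof): for hbar = 2, l = 3,
# differentiate the displayed z_j(tau) and compare with l*u_{z_j}(z(tau))**(l-1).
import sympy as sp

tau = sp.symbols('tau')
hbar, l = 2, 3
u0, sj, dj = sp.symbols('u0 varsigma_j d_j', positive=True)

base = 1 - l*(hbar - 1)*u0**(hbar - 1)*tau
uzj = sj/base**sp.Rational(hbar, l*(hbar - 1))                    # u_{z_j}(z(tau))
zj = l*sj**(l - 1)/((l - hbar)*u0**(hbar - 1)) \
     * (1/base**sp.Rational(l - hbar, l*(hbar - 1)) - 1) + dj     # z_j(tau)

print(sp.simplify(sp.diff(zj, tau) - l*uzj**(l - 1)))             # expected: 0
```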
Now, when $\hbar=1$, the first equations in [\[Eq4.18\]](#Eq4.18){reference-type="eqref" reference="Eq4.18"} and [\[Eq4.19\]](#Eq4.19){reference-type="eqref" reference="Eq4.19"} yield $\ell=2$, and the first equation in [\[Eq4.18\]](#Eq4.18){reference-type="eqref" reference="Eq4.18"} can be further rewritten as $$u(\tau)=u_0\biggl(\sum^n_{j=1}\varrho_je^\tau\biggr)^2=u_0\biggl(\sum^n_{j=1}\varrho_j\biggl(\frac{z_j(\tau)-d_j}{2\varsigma_j}+1\biggr)\biggr)^2,$$ which further implies that, seeing $u_0=\varsigma^2_1+\varsigma^2_2+\cdots+\varsigma^2_n$, $$\label{Eq4.20}
\begin{split}
u(z)&=u_0\biggl(\frac{1}{2}\biggl(\frac{\varrho_1z_1}{\varsigma_1}+\frac{\varrho_2z_2}{\varsigma_2}+\cdots+\frac{\varrho_nz_n}{\varsigma_n}\biggr)+\Lambda_0(z)\biggr)^2\\
&=\biggl(\frac{1}{2}\frac{\varsigma_1z_1+\varsigma_2z_2+\cdots+\varsigma_nz_n}{\varsigma^2_1+\varsigma^2_2+\cdots+\varsigma^2_n}
\bigl(\varsigma^2_1+\varsigma^2_2+\cdots+\varsigma^2_n\bigr)^\frac{1}{2}+\Lambda_1(z)\biggr)^2\\
&=\Bigl(\frac{1}{2}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\Bigr)^2
\end{split}$$ by taking $\varrho_j:=\frac{\varsigma^2_j}{\varsigma^2_1+\varsigma^2_2+\cdots+\varsigma^2_n}$ and $\sigma_j:=\frac{\varsigma_j}{\sqrt{\varsigma^2_1+\varsigma^2_2+\cdots+\varsigma^2_n}}$ for $j=1,2,\ldots,n$, with $\Lambda_\mu(z),\Phi(z)$ entire functions in $\mathbf{C}^n$ depending on $d_j,\varsigma_j$ for $j=1,2,\ldots,n$ and $\mu=0,1$.
Next, when $1<\hbar<\ell$, the second equations in [\[Eq4.18\]](#Eq4.18){reference-type="eqref" reference="Eq4.18"} and [\[Eq4.19\]](#Eq4.19){reference-type="eqref" reference="Eq4.19"} show that $\frac{\ell}{\ell-\hbar}$ needs to be an integer, and the second equation in [\[Eq4.18\]](#Eq4.18){reference-type="eqref" reference="Eq4.18"} can be further rewritten as $$\begin{split}
u(\tau)&=u_0\Biggl(\sum^n_{j=1}\varrho_j\frac{1}{\bigl(1-\ell(\hbar-1)u^{\hbar-1}_0\tau\bigr)^{\frac{\ell-\hbar}{\ell(\hbar-1)}}}\Biggr)^{\frac{\ell}{\ell-\hbar}}\\
&=u_0\biggl(\sum^n_{j=1}\varrho_j\biggl(\frac{(z_j(\tau)-d_j)(\ell-\hbar)u^{\hbar-1}_0}{\ell\varsigma^{\ell-1}_j}+1\biggr)\biggr)^{\frac{\ell}{\ell-\hbar}},
\end{split}$$ which further implies that, seeing $u^\hbar_0=\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n$, $$\label{Eq4.21}
\begin{split}
u(z)&=u_0\biggl(\frac{\ell-\hbar}{\ell}\biggl(\frac{\varrho_1z_1}{\varsigma^{\ell-1}_1}+\frac{\varrho_2z_2}{\varsigma^{\ell-1}_2}
+\cdots+\frac{\varrho_nz_n}{\varsigma^{\ell-1}_n}\biggr)u^{\hbar-1}_0+\Lambda_0(z)\biggr)^{\frac{\ell}{\ell-\hbar}}\\
&=\biggl(\frac{\ell-\hbar}{\ell}
\frac{\varsigma_1z_1+\varsigma_2z_2+\cdots+\varsigma_nz_n}{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}u^{\hbar-1+\frac{\ell-\hbar}{\ell}}_0
+\Lambda_1(z)\biggr)^{\frac{\ell}{\ell-\hbar}}\\
&=\biggl(\frac{\ell-\hbar}{\ell}\frac{\varsigma_1z_1+\varsigma_2z_2+\cdots+\varsigma_nz_n}{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}
\bigl(\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n\bigr)^{1-\frac{1}{\ell}}+\Lambda_1(z)\biggr)^{\frac{\ell}{\ell-\hbar}}\\
&=\Bigl(\frac{\ell-\hbar}{\ell}(\sigma_1z_1+\sigma_2z_2+\cdots+\sigma_nz_n)+\Phi(z)\Bigr)^{\frac{\ell}{\ell-\hbar}}
\end{split}$$ by taking $\varrho_j:=\frac{\varsigma^\ell_j}{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}$ and $\sigma_j:=\frac{\varsigma_j}{\sqrt[\ell]{\varsigma^\ell_1+\varsigma^\ell_2+\cdots+\varsigma^\ell_n}}$ for $j=1,2,\ldots,n$, with $\Lambda_\mu(z),\Phi(z)$ entire functions in $\mathbf{C}^n$ depending on $d_j,\varsigma_j$ for $j=1,2,\ldots,n$ and $\mu=0,1$.
All the preceding discussions conclude the proof of Theorem [Theorem 8](#Thm8){reference-type="ref" reference="Thm8"}. ◻
**Acknowledgement.** The author wholeheartedly thanks the editor and the anonymous reviewers for valuable suggestions. The author thanks Dr. Wei Chen for providing the references [@LT; @LZ] and for reading the initial proof of Theorem [Theorem 1](#Thm1){reference-type="ref" reference="Thm1"} with helpful comments, and Dr. Jingbo Liu for providing the reference [@BMS] and for discussing the examples and the proof of Theorem [Theorem 8](#Thm8){reference-type="ref" reference="Thm8"}, **Cases 2-4**.
S.B. Bank and R.P. Kaufman. On meromorphic solutions of first-order differential equations. *Comment. Math. Helv.* **51** (1976), 289-299.
S.B. Bank and R.P. Kaufman. On the order of growth of meromorphic solutions of first-order differential equations. *Math. Ann.* **241** (1979), 57-67.
M. Bennett, P. Mihăilescu, and S. Siksek. The generalized Fermat equation. Open Problems in Mathematics (Edited by John F. Nash, Jr. and Michael Th. Rassias), 173-205. Springer, Cham (2016).
W. Bergweiler. Fixed points of composite meromorphic functions and normal families. *Proc. Roy. Soc. Edinburgh Sect. A* **134** (2004), 653-660.
W. Bergweiler. Fixed points of composite entire and quasiregular maps. *Ann. Acad. Sci. Fenn. Math.* **31** (2006), 523-540.
L.A. Caffarelli and M.G. Crandall. Distance functions and almost global solutions of eikonal equations. *Comm. Partial Differential Equations* **35** (2010), 391-414.
D.C. Chang, B.Q. Li, and C.C. Yang. On composition of meromorphic functions in several complex variables. *Forum Math.* **7** (1995), 77-94.
L.C. Evans. **Partial Differential Equations**. American Mathematical Society, Providence, RI (2010).
F. Gross. On factorization of meromorphic functions. *Trans. Amer. Math. Soc.* **120** (1965), 124-144.
G.G. Gundersen and W.K. Hayman. The strength of Cartan's version of Nevanlinna theory. *Bull. London Math. Soc.* **36** (2004), 433-454.
Q. Han. On complex analytic solutions of the partial differential equation $(u_{z_1})^m+(u_{z_2})^m=u^m$. *Houston J. Math.* **35** (2009), 277-289.
W.K. Hayman. **Meromorphic Functions**. Oxford University Press, Oxford (1964).
W.K. Hayman. Waring's problem für analytische funktionen. Mathematisch-Naturwissenschaftliche Klasse Sitzungsberichte 1984, 1-13. Bayerische Akademie der Wissenschaften, München (1985).
W.K. Hayman. Waring's theorem and the super Fermat problem for numbers and functions. *Complex Var. Elliptic Equ.* **59** (2014), 85-90.
J.E. Hemmati. Entire solutions of first-order nonlinear partial differential equations. *Proc. Amer. Math. Soc.* **125** (1997), 1483-1485.
K. Ishizaki and N. Toda. Transcendental meromorphic solutions of some algebraic differential equations. *J. Aust. Math. Soc.* **83** (2007), 157-180.
G. Johnsson. The Cauchy problem in $\mathbf{C}^N$ for linear second order partial differential equations with data on a quadric surface. *Trans. Amer. Math. Soc.* **344** (1994), 1-48.
D. Khavinson. A note on entire solutions of the eikonal equation. *Amer. Math. Monthly* **102** (1995), 159-161.
B.Q. Li. On entire solutions of Fermat type partial differential equations. *Internat. J. Math.* **15** (2004), 473-485.
B.Q. Li. Entire solutions of certain partial differential equations in $\mathbf{C}^n$. *Israel J. Math.* **143** (2004), 131-140.
B.Q. Li. Entire solutions of certain partial differential equations and factorization of partial derivatives. *Trans. Amer. Math. Soc.* **357** (2005), 3169-3177.
B.Q. Li. On meromorphic solutions of $f^2+g^2=1$. *Math. Z.* **258** (2008), 763-771.
B.Q. Li. On certain non-linear differential equations in complex domains. *Arch. Math. (Basel)* **91** (2008), 344-353.
B.Q. Li. On meromorphic solutions of generalized Fermat equations. *Internat. J. Math.* **25** (2014), 1450002, 8 pp.
B.Q. Li and E.G. Saleeby. Entire solutions of first-order partial differential equations. *Complex Var. Theory Appl.* **48** (2003), 657-661.
B.Q. Li and C.C. Yang. Factorization of meromorphic functions in several complex variables. Contemp. Math. **142**, 61-74. American Mathematical Society, Providence, RI (1993).
B.Q. Li and Z. Ye. On meromorphic solutions of $f^3+g^3=1$. *Arch. Math. (Basel)* **90** (2008), 39-43.
L. Liao and J. Tang. The transcendental meromorphic solutions of a certain type of nonlinear differential equations. *J. Math. Anal. Appl.* **334** (2007), 517-527.
L. Liao and X. Zhang. On a certain type of nonlinear differential equations admitting transcendental meromorphic solutions. *Sci. China Math.* **56** (2013), 2025-2034.
E.G. Saleeby. Entire and meromorphic solutions of Fermat type partial differential equations. *Analysis (Munich)* **19** (1999), 369-376.
E.G. Saleeby. On entire and meromorphic solutions of $\lambda u^k+\sum^n_{i=1}u^m_{z_i}=1$. *Complex Var. Theory Appl.* **49** (2004), 101-107.
A. Vitter. The lemma of the logarithmic derivative in several complex variables. *Duke Math. J.* **44** (1977), 89-104.
[^1]: Mathematics Subject Classification. 35F20, 32A15, 32A20, 11P05.
[^2]: Keywords. Characteristics, partial differential equations of super-Fermat or Waring's-problem form, eikonal equations, entire and meromorphic functions in several complex variables, pseudoprime.
---
abstract: |
We explore a linear inhomogeneous elasticity equation with random Lamé parameters. The latter are parameterized by a countably infinite number of terms in separated expansions. The main aim of this work is to estimate expected values (considered as an infinite dimensional integral on the parametric space corresponding to the random coefficients) of linear functionals acting on the solution of the elasticity equation. To achieve this, the expansions of the random parameters are truncated, a high-order quasi-Monte Carlo (QMC) rule is combined with a sparse grid approach to approximate the high dimensional integral, and a Galerkin finite element method (FEM) is introduced to approximate the solution of the elasticity equation over the physical domain. The error estimates from (1) truncating the infinite expansion, (2) the Galerkin FEM, and (3) the QMC sparse grid quadrature rule are all studied. For this purpose, we show certain required regularity properties of the continuous solution with respect to both the parametric and physical variables. To achieve our theoretical regularity and convergence results, some reasonable assumptions on the expansions of the random coefficients are imposed. Finally, some numerical results are presented.
author:
- "J. Dick, Q. T. Le Gia, K. Mustapha and T. Tran [^1]"
title: "Quasi-Monte Carlo sparse grid Galerkin finite element methods for linear elasticity equations with uncertainties[^2] "
---
# Introduction
In this work we investigate and analyze the application of the high-order quasi-Monte Carlo (QMC) sparse grid method combined with the conforming Galerkin finite element methods (FEMs) to solve a linear elastic model with uncertainties (see [@Mathies1997] for an interesting overview of how to incorporate uncertainty into material parameters in linear elasticity problems). More specifically, we consider the case where the properties of the elastic inhomogeneous material are varying spatially in an uncertain way by using random Lamé parameters which are parametrized by a countably infinite number of parameters. This leads to randomness in both the Young modulus ($E$) and the Poisson ratio ($\nu$). We intend to measure theoretically the efficiency of our numerical algorithm through the expectation of the random solution over the random field.
The equation governing small elastic deformations of a body $\Omega$ in $\mathbb{R}^d$ ($d\in \{2,3\}$) with polyhedral boundary can be written as $$-\nabla \cdot \boldsymbol{\sigma}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}({\boldsymbol{x}},{\boldsymbol{y}},{\boldsymbol{z}})) = {\bf f}({\boldsymbol{x}}) \quad \text{for } {\boldsymbol{x}}\in \Omega, \label{eq:L1}$$ subject to homogeneous Dirichlet boundary conditions; ${\bf u}({\boldsymbol{x}},{\boldsymbol{y}},{\boldsymbol{z}}) = 0$ for ${\boldsymbol{x}}\in \Gamma:=\partial \Omega$ and with ${\boldsymbol{y}}$ and ${\boldsymbol{z}}$ being parameter vectors describing randomness to be specified later. The parametric Cauchy stress tensor $\boldsymbol{\sigma}({\boldsymbol{y}},{\boldsymbol{z}};\cdot) \in [L^2(\Omega)]^{d\times d}$ is defined as $$\boldsymbol{\sigma}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}({\boldsymbol{x}},{\boldsymbol{y}},{\boldsymbol{z}})) =\lambda({\boldsymbol{x}},{\boldsymbol{z}})\Big(\nabla\cdot {\bf u}({\boldsymbol{x}},{\boldsymbol{y}},{\boldsymbol{z}})\Big) I + 2\mu({\boldsymbol{x}},{\boldsymbol{y}}) {\boldsymbol{\varepsilon}}({\bf u}({\boldsymbol{x}},{\boldsymbol{y}},{\boldsymbol{z}})),$$ with ${\bf u}(\cdot, {\boldsymbol{y}}, {\boldsymbol{z}})$ being the displacement vector field of dimension $d$, and the symmetric strain-rate tensor ${\boldsymbol{\varepsilon}}({\bf u}) := (\nabla {\bf u}+ (\nabla {\bf u})^T)/2\,.$ Here, ${\bf f}\in [L^2(\Omega)]^d$ is the body force per unit volume and $I$ is the identity tensor. The gradient ($\nabla$) and the divergence ($\nabla \cdot$) are understood to be with respect to the physical variable ${\boldsymbol{x}}\in \Omega$. The Lamé elasticity parameter $\lambda$ is related to the compressibility of the material, and the shear modulus $\mu$ is related to how the material behaves under normal stress and shear stress, for the material $\Omega$ containing uncertainties. To parametrize these uncertainties, we assume that $\mu = \mu({\boldsymbol{x}},{\boldsymbol{y}})$ and $\lambda = \lambda({\boldsymbol{x}},{\boldsymbol{z}})$ can be expressed in the following separate expansions $$\label{KLexpansion}
\mu({\boldsymbol{x}},{\boldsymbol{y}}) = \mu_0({\boldsymbol{x}}) +\sum_{j=1}^\infty y_j \psi_j({\boldsymbol{x}})~~{\rm and}~~
\lambda({\boldsymbol{x}},{\boldsymbol{z}}) = \lambda_0({\boldsymbol{x}}) +\sum_{j=1}^\infty z_j \phi_j({\boldsymbol{x}}), \quad{\boldsymbol{x}}\in\Omega,$$ where $\{\psi_j\}$ and $\{\phi_j\}$ are orthogonal basis functions for the $L^2(\Omega)$ space. The parameter vectors ${\boldsymbol{y}}= (y_j)_{j\ge1}$ and ${\boldsymbol{z}}= (z_k)_{k\ge1}$ belong to $U:=(-\frac{1}{2},\frac{1}{2})^\mathbb{N}$ and consist of a countable number of parameters $y_j$ and $z_k$, respectively, which are assumed to be i.i.d. uniformly distributed. Using independent random fields for $\lambda$ and $\mu$ in our model amounts to assuming that the compressibility and the behaviour under stress of the material are independent within the range of the parameters of the random $\lambda$ and $\mu$.
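For orientation, the sketch below evaluates the expansions in [\[KLexpansion\]](#KLexpansion){reference-type="eqref" reference="KLexpansion"} truncated to finitely many terms (the form actually used in computations later on) at a physical point for one draw of the uniformly distributed parameters. The mean fields and basis functions appearing here are hypothetical placeholder choices made only for illustration; they are not the ones used in our numerical experiments.

```python
# Minimal sketch (illustration only): evaluate the truncated expansions of mu and lambda
# at a point x for one draw of the parameters, uniformly distributed on (-1/2, 1/2).
# The mean fields and basis functions below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
s1, s2 = 16, 16                                              # truncation dimensions

mu0 = lambda x: 1.0                                          # placeholder mu_0
lam0 = lambda x: 1.0                                         # placeholder lambda_0
psi = lambda j, x: 0.2 * j**-2.0 * np.sin(j * np.pi * x)     # placeholder psi_j
phi = lambda j, x: 0.2 * j**-2.0 * np.cos(j * np.pi * x)     # placeholder phi_j

y = rng.uniform(-0.5, 0.5, s1)                               # y_1, ..., y_{s1}
z = rng.uniform(-0.5, 0.5, s2)                               # z_1, ..., z_{s2}

def mu(x):    # truncated mu(x, y)
    return mu0(x) + sum(y[j - 1] * psi(j, x) for j in range(1, s1 + 1))

def lam(x):   # truncated lambda(x, z)
    return lam0(x) + sum(z[j - 1] * phi(j, x) for j in range(1, s2 + 1))

print(mu(0.3), lam(0.3))   # for this decaying choice, mu stays positive and both fields stay bounded
```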
The model problem [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} is similar to the one in [@HoangNguyenXia2016; @XiaHoang2014], in which a priori analysis for so-called best $N$-term approximations of standard two-field mixed formulations was investigated. For the well-posedness of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"}, we assume that there are some positive numbers $\mu_{\min},$ $\mu_{\max}$ and $\lambda_{\max}$ so that $$\label{ass A2}
0 < \mu_{\min} \le \mu({\boldsymbol{x}},{\boldsymbol{y}}) \le \mu_{\max}~~{\rm and}~~0 \le \lambda({\boldsymbol{x}},{\boldsymbol{z}})\le \lambda_{\max}, \quad {\boldsymbol{x}}\in \Omega,\;\; {\boldsymbol{y}}, {\boldsymbol{z}}\in U.
\tag{A1}$$ Due to [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"}, the values of the Poisson ratio of the elastic material, $\nu=\frac{\lambda}{2(\lambda+\mu)}$, range between $0$ and $1/2$, which is the case for most materials. Indeed, if $\lambda$ is a constant multiple of $\mu$, that is, if the randomness of the Lamé parameters is due solely to that of the Young modulus $E$, then $\nu$ is constant. This case was studied in [@Khan2019; @Khan2021], where the authors introduced a three-field PDE model with a parameter-dependent $E$ which is amenable to discretization by stochastic Galerkin mixed FEMs. The focus in [@Khan2019] was on efficient linear algebra, while an a posteriori error estimation was detailed in [@Khan2021]. Relatedly, the authors in [@Eigel2014] presented a framework for residual-based a posteriori error estimation and adaptive stochastic Galerkin approximation.
If $\nu$ approaches $1/2$, then we are dealing with an elastic material that becomes nearly incompressible. In this case, and with constant Lamé parameters, the convergence rate of the conforming piecewise quadratic or cubic Galerkin FEMs falls one order short of being optimal [@ScottVogelius1985; @Vogelius1983]. However, the piecewise linear Galerkin FEM runs into trouble with the phenomenon of locking, where the convergence rates may deteriorate as $\lambda$ becomes too large. This is owing to its inability to represent non-trivial divergence-free displacement fields. Locking can be avoided by using a nonconforming Galerkin FEM [@Vogelius1983; @BrennerSung1992; @Falk1991] or by using mixed FEMs. These and several other methods were studied very extensively in the existing literature for the case of constant Lamé parameters; we refer the reader to the books [@Braess2007; @BrennerScott2008; @BrezziFortin1991]. Investigating the nearly incompressible case with stochastic Lamé parameters is beyond the scope of this paper; it is a topic of future research.
Outline of the paper. The next section is devoted to the statement of the main results of this work. In Section [3](#VFR){reference-type="ref" reference="VFR"}, we derive the variational formulation of the parametric model problem [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"}, and prove the existence and uniqueness of the weak solution. We also investigate certain regularity properties of the continuous solution ${\bf u}$ with respect to the random parameters ${\boldsymbol{y}}$ and ${\boldsymbol{z}}$, and the physical parameter ${\boldsymbol{x}}.$ These results are needed to guarantee the convergence of the errors from both the QMC integration and the Galerkin finite element discretization. The error from truncating the infinite series expansion in [\[KLexpansion\]](#KLexpansion){reference-type="eqref" reference="KLexpansion"} is investigated in Section [4](#Sec: Truncation){reference-type="ref" reference="Sec: Truncation"}. In Section [5](#Sec: FEM){reference-type="ref" reference="Sec: FEM"}, for every ${\boldsymbol{y}},{\boldsymbol{z}}\in U,$ we approximate the parametric solution ${\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})$ of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} over the physical domain $\Omega$ using the conforming Galerkin FEM, and discuss the stability and error estimates. In Section [6](#sec: QMC errors){reference-type="ref" reference="sec: QMC errors"}, we investigate the high-order QMC error from estimating the expected value of a given function over a high dimensional field. More precisely, we use a high-order QMC rule for the random coefficients arising from the expansion for $\lambda$ and another QMC rule for the random coefficients arising from the expansion for $\mu$. In principle it would be possible to use one high-order QMC rule to simulate both $\lambda$ and $\mu$ simultaneously, but this leads to a complicated design of the high-order QMC rule. Using separate high-order rules allows us to use existing constructions from [@DickKuoGiaNuynsSchwab2014]. We study two ways of combining the QMC rules: one is a tensor product structure (Theorem [Theorem 7](#prop:qmc){reference-type="ref" reference="prop:qmc"}), and the other is a sparse grid combination (Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"}). As one would expect, the QMC sparse grid combination leads to a better rate of convergence. We end the paper with some numerical simulations in Section [7](#Sec: Numeric){reference-type="ref" reference="Sec: Numeric"}. In a sample of four different examples, we illustrate numerically the achieved theoretical finite element QMC convergence results.
# Main results
We start this section by introducing the following vector function spaces and the associated norms, which we will be using throughout the remainder of the paper. Let ${\bf V}:=[H^1_0(\Omega)]^d$ and the associated norm be $\|{\bf w}\|_{{\bf V}}:=\Big(\sum_{i=1}^d \|w_i\|_{H^1(\Omega)}^2 \Big)^{1/2}$ with the $w_i$'s being the components of the vector function ${\bf w}.$ For $J=0,1,2,\cdots,$ the norm on the vector Sobolev space ${{\bf H}^J}:=[H^J(\Omega)]^d$, denoted by $\|\cdot\|_{{{\bf H}^J}},$ is defined in a similar fashion with $\|w_i\|_{H^J(\Omega)}$ in place of $\|w_i\|_{H^1(\Omega)}.$ We drop ${\bf H}^J$ from the norm notation on the space ${\bf H}^0={\bf L}^2(\Omega):=[L^2(\Omega)]^d$ for $d \ge 1.$ Here, $H^1_0(\Omega)$ and $H^J(\Omega)$ are the usual Sobolev spaces. Finally, ${\bf V}^*$ denotes the dual space of ${\bf V}$ with respect to the ${\bf L}^2(\Omega)$ inner product, with norm denoted by $\|\cdot\|_{{\bf V}^\ast}$.
As mentioned earlier, and more precisely, we are interested in efficient approximation of the expected value of the function $\mathcal{L}({\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})),$ for a certain linear functional $\mathcal{L}:{\bf L}^2(\Omega) \to \mathbb{R}$, with respect to the random variables ${\boldsymbol{y}}$ and ${\boldsymbol{z}},$ where ${\bf u}$ is the solution of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"}. In other words, we seek to approximate $$\label{EV}
\Xi_{\bf u}:=\int_{U}\int_{U}\mathcal{L}\big( {\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\big)\,d{\boldsymbol{y}}\,d{\boldsymbol{z}}\,,$$ where $d{\boldsymbol{y}}$ and $d{\boldsymbol{z}}$ are the uniform probability measures on $U$. As a practical example, we may choose $\mathcal{L}$ to be a local continuous average on some domain $\Omega_0 \subset \Omega.$
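For instance, one concrete admissible choice (given here purely for illustration; the paper does not prescribe a specific functional) is the local average of a fixed component $w_i$ over $\Omega_0$, $$\mathcal{L}({\bf w}):=\frac{1}{|\Omega_0|}\int_{\Omega_0} w_i({\boldsymbol{x}})\,d{\boldsymbol{x}},$$ which is linear and satisfies $|\mathcal{L}({\bf w})|\le |\Omega_0|^{-1/2}\,\|{\bf w}\|$ by the Cauchy-Schwarz inequality, so it is bounded on ${\bf L}^2(\Omega)$ (and hence on ${\bf V}$).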
To accomplish the above task, and for the practical implementation, the infinite sums in [\[KLexpansion\]](#KLexpansion){reference-type="eqref" reference="KLexpansion"} must be truncated. Then, we approximate ${\bf u}$ by ${\bf u}_{\bf s}$, which is the solution of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} obtained by truncating the infinite expansions in [\[KLexpansion\]](#KLexpansion){reference-type="eqref" reference="KLexpansion"} where ${\bf s}=(s_1,s_2)$, that is, assuming that $y_j=z_k=0$ for $j >s_1$ and $k>s_2$. Next, with $U_i=[0,1]^{s_i}$ for $i=1,2,$ being of (finite) fixed dimension $s_i$, we estimate the expected value of $\mathcal{L}({\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}))$ by approximating $$\label{F finite}
\Xi_{{\bf s},{\bf u}_{\bf s}}:=\int_{U_2}\int_{U_1}\mathcal{L}\Big({{\bf u}_{{\bf s}}}\Big(\cdot,{\boldsymbol{y}}-{\bf \frac{1}{2}},{\boldsymbol{z}}-{\bf \frac{1}{2}}\Big)\Big)d{\boldsymbol{y}}\,d{\boldsymbol{z}}\,.$$ In the above finite dimensional integral, $d{\boldsymbol{y}}$ and $d{\boldsymbol{z}}$ are the uniform probability measures on $U_1$ and $U_2$, respectively. The shifting of the coordinates by ${\bf \frac{1}{2}}$ translates $U_i$ to $\big[-\frac{1}{2},\frac{1}{2}\big]^{s_i}$ for $i=1,2.$ We approximate such $(s_1+s_2)$-dimensional integrals using a high-order QMC quadrature. Preceding this, we intend to solve the truncated problem over the physical domain $\Omega$ numerically via a continuous Galerkin FEM. So, for every ${\boldsymbol{y}}\in U_1$ and ${\boldsymbol{z}}\in U_2,$ we approximate the truncated solution ${\bf u}_{\bf s}\Big(\cdot,{\boldsymbol{y}}-{\bf \frac{1}{2}},{\boldsymbol{z}}-{\bf \frac{1}{2}}\Big)$ by the parametric spatial Galerkin finite element solution ${\bf u}_{{\bf s}_h}\Big(\cdot,{\boldsymbol{y}}-{\bf \frac{1}{2}},{\boldsymbol{z}}-{\bf \frac{1}{2}}\Big) \in {\bf V}_h\subset {\bf V}$ (see Section [5](#Sec: FEM){reference-type="ref" reference="Sec: FEM"} for the definition of the finite element space ${\bf V}_h$) with the sums in [\[KLexpansion\]](#KLexpansion){reference-type="eqref" reference="KLexpansion"} truncated to $s_1$ and $s_2$ terms, respectively. In the third step, we estimate the expectation of the approximation using first a tensor product of two high-order QMC methods and second, an efficient high-order QMC sparse grid combination. In summary, we approximate the expected value in [\[EV\]](#EV){reference-type="eqref" reference="EV"} by the following truncated QMC Galerkin finite element rule $$\label{eq:QMCInt F}
\Xi_{{\bf u}_{{\bf s}_h}, Q}:=\frac{1}{N_1\,N_2}\sum_{j=0}^{N_1-1} \sum_{k=0}^{N_2-1}\mathcal{L}\Big({\bf u}_{{\bf s}_h}\Big(\cdot,{\boldsymbol{y}}_j-{\bf \frac{1}{2}},{\boldsymbol{z}}_k-{\bf \frac{1}{2}}\Big)\Big)\,,$$ where the QMC points $\{{\boldsymbol{y}}_0, \ldots,{\boldsymbol{y}}_{N_1-1} \} \in U_1$ and $\{{\boldsymbol{z}}_0, \ldots,{\boldsymbol{z}}_{N_2-1} \} \in U_2$. Therefore, we have three sources of error: a dimension truncation error depending on $s_1$ and $s_2$, a Galerkin discretization error depending on the maximum finite element mesh diameter $h$ of the physical domain $\Omega$, and a QMC quadrature error which depends on $N_1$ and $N_2$. We split the error as: $$\label{combine}
|\Xi_{\bf u}-\Xi_{{\bf u}_{{\bf s}_h}, Q}|\le |\Xi_{\bf u}-\Xi_{{\bf s}, {\bf u}_{\bf s}}|+|\Xi_{{\bf s},{\bf u}_{\bf s}}-\Xi_{{\bf s},{\bf u}_{{\bf s}_h}}|+|\Xi_{{\bf s},{\bf u}_{{\bf s}_h}}-\Xi_{{\bf u}_{{\bf s}_h}, Q}|.$$ Since $d{\boldsymbol{y}}$ and $d{\boldsymbol{z}}$ are the uniform probability measures with i.i.d. uniformly distributed parameters on $U$, $$\Xi_{\bf u}-\Xi_{{\bf s}, {\bf u}_{\bf s}}
=\int_{U}\int_{U}\mathcal{L}\Big({\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})-{\bf u}_{{\bf s}}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2}) \Big)\,d{\boldsymbol{y}}\,d{\boldsymbol{z}},$$ where ${\boldsymbol{y}}= (y_j)_{j\ge1}, {\boldsymbol{z}}= (z_k)_{k\ge1} \in U$, and the truncated vectors ${\boldsymbol{y}}^{s_1}$ and ${\boldsymbol{z}}^{s_2}$ are $(y_1,y_2,\cdots,y_{s_1},0,0,\cdots)$ and $(z_1,z_2,\cdots,z_{s_2},0,0,\cdots)$, respectively. To estimate this term, we refer to the dimension truncation error which is analyzed in Theorem [Theorem 5](#Truncating error){reference-type="ref" reference="Truncating error"}. To reduce the errors from such a truncation, which is necessary from a practical point of view, we assume that the $L^2(\Omega)$ orthogonal basis functions $\psi_j$ and $\phi_j$ are ordered so that $\|\psi_j\|_{L^\infty(\Omega)}$ and $\|\phi_j\|_{L^\infty(\Omega)}$ are nonincreasing. That is, $$\label{ass A5}
\| \psi_j \|_{L^\infty(\Omega)} \ge
\| \psi_{j+1} \|_{L^\infty(\Omega)} ~~{\rm and}~~ \| \phi_j \|_{L^\infty(\Omega)} \ge
\| \phi_{j+1} \|_{L^\infty(\Omega)},\quad {\rm for}~~j\ge 1\,.
\tag{A2}$$ For the convergence from the series truncation, we assume that $\mu_0,\,\lambda_0 \in L^\infty(\Omega)$, and $$\label{ass A1}
\sum_{j=1}^\infty \|\psi_j\|^p_{L^\infty(\Omega)} < \infty~~{\rm and}~~
\sum_{j=1}^\infty \|\phi_j\|^q_{L^\infty(\Omega)} < \infty,\quad{\rm for~some}~~0<p,\,q\le 1\,.
\tag{A3}$$ When $p=1$ and$/$or $q=1,$ it is essential to have $$\label{ass A3}
\sum_{j=s_1+1}^\infty \|\psi_j\|_{L^\infty(\Omega)} \le C s_1^{1-1/\varrho_1}~~{\rm and/or}
\sum_{j=s_2+1}^\infty \|\phi_j\|_{L^\infty(\Omega)} \le Cs_2^{1-1/\varrho_2},
\tag{A4}$$ for some $0<\varrho_1,\varrho_2<1$. The second term on the right-hand side of [\[combine\]](#combine){reference-type="eqref" reference="combine"} is the finite dimensional integral of the linear functional $\mathcal{L}$ acting on the difference between the truncated solution ${\bf u}_{\bf s}$ and its approximation ${\bf u}_{{\bf s}_h}$. This can be deduced from Theorem [Theorem 6](#Convergence theorem){reference-type="ref" reference="Convergence theorem"} by replacing the vectors ${\boldsymbol{y}}$ and ${\boldsymbol{z}}$ with ${\boldsymbol{y}}^{s_1}$ and ${\boldsymbol{z}}^{s_2}$, respectively, and using the fact that ${\bf u}_{\bf s}$ satisfies the regularity properties in Theorem [Theorem 3](#lem: vu bound){reference-type="ref" reference="lem: vu bound"}. The Lamé parameters $\mu(\cdot,{\boldsymbol{y}})$ and $\lambda(\cdot,{\boldsymbol{z}})$ are required to be in the Sobolev space $W^{\theta,\infty}(\Omega)$ for every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$ and for some integer $1\le \theta\le r$ ($r$ is the degree of the finite element solution, so $\theta=1$ in the case of piecewise linear Galerkin FEM). To have this, we assume that $$\label{ass A4}
\mu_0,\,\lambda_0 \in W^{\theta,\infty}(\Omega),~~ \sum_{j=1}^\infty \|\psi_j\|_{W^{\theta,\infty}(\Omega)}~~ {\rm and}~~ \sum_{j=1}^\infty \| \phi_j\|_{W^{\theta,\infty}(\Omega)}~{\rm are~finite}\,.
\tag{A5}$$ It is clear from Theorem [Theorem 6](#Convergence theorem){reference-type="ref" reference="Convergence theorem"} that for every $({\boldsymbol{y}},{\boldsymbol{z}}) \in U_1\times U_2$, the sequence $|\mathcal{L}({\bf u}_{\bf s}-{\bf u}_{{\bf s}_h})|$ converges faster than $\|{\bf u}_{\bf s}-{\bf u}_{{\bf s}_h}\|_{\bf V}$ provided that the linear functional $\mathcal{L}$ is bounded in the $L^2(\Omega)$ sense (that is, $|\mathcal{L}({\bf w})|\le \|\mathcal{L}\|\|{\bf w}\|$ for any ${\bf w}\in {\bf L}^2(\Omega)$), which is not always guaranteed.
The third term in [\[combine\]](#combine){reference-type="eqref" reference="combine"} is the QMC quadrature error, which can be estimated by applying Theorem [Theorem 7](#prop:qmc){reference-type="ref" reference="prop:qmc"} and Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"} with ${F}({\boldsymbol{y}},{\boldsymbol{z}}):=\mathcal{L}\Big({\bf u}_{{\bf s}_h}\Big(\cdot,{\boldsymbol{y}}-{\bf \frac{1}{2}},{\boldsymbol{z}}-{\bf \frac{1}{2}}\Big)\Big).$ Noting that the mixed derivatives satisfy $|\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha}{F}({\boldsymbol{y}},{\boldsymbol{z}})|\le \|\mathcal{L}\|_{{\bf V}^*} \,\|\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha}{\bf u}_{{\bf s}_h}\big(\cdot,{\boldsymbol{y}}-{\bf \frac12},{\boldsymbol{z}}-{\bf \frac12}\big)\|_{{\bf V}}$, Theorem [Theorem 4](#lem: vu bound y z){reference-type="ref" reference="lem: vu bound y z"} can then be applied to verify the regularity conditions in [\[eq:like-norm\]](#eq:like-norm){reference-type="eqref" reference="eq:like-norm"} and [\[eq:like-norm mixed\]](#eq:like-norm mixed){reference-type="eqref" reference="eq:like-norm mixed"}, which are necessary for the QMC error results in Theorem [Theorem 7](#prop:qmc){reference-type="ref" reference="prop:qmc"} and the QMC sparse grid error results in Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"}, respectively. Here, Assumptions [\[ass A5\]](#ass A5){reference-type="eqref" reference="ass A5"} and [\[ass A1\]](#ass A1){reference-type="eqref" reference="ass A1"} (for $p=q=1$ only) are needed.
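To make the structure of the estimator [\[eq:QMCInt F\]](#eq:QMCInt F){reference-type="eqref" reference="eq:QMCInt F"} concrete, the sketch below assembles the tensor-product QMC average once two point sets in $[0,1]^{s_1}$ and $[0,1]^{s_2}$ and a routine returning $\mathcal{L}\big({\bf u}_{{\bf s}_h}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\big)$ are available. Both inputs are hypothetical placeholders here: in practice the point sets come from interlaced polynomial lattice rules as in [@DickKuoGiaNuynsSchwab2014], and the routine wraps the Galerkin finite element solver of Section [5](#Sec: FEM){reference-type="ref" reference="Sec: FEM"}.

```python
# Sketch of the tensor-product estimator in (eq:QMCInt F); illustration only.
# `qmc_points_y`, `qmc_points_z` and `L_of_FE_solution` are hypothetical placeholders.
import numpy as np

def tensor_qmc_estimate(qmc_points_y, qmc_points_z, L_of_FE_solution):
    """Average L(u_{s_h}(., y_j - 1/2, z_k - 1/2)) over all pairs of QMC points."""
    total = 0.0
    for y in qmc_points_y:                               # y in [0,1]^{s1}
        for z in qmc_points_z:                           # z in [0,1]^{s2}
            total += L_of_FE_solution(y - 0.5, z - 0.5)  # shift to [-1/2,1/2]^{s_i}
    return total / (len(qmc_points_y) * len(qmc_points_z))

# Toy usage with random points and a dummy integrand, only to exercise the code path:
rng = np.random.default_rng(1)
Y, Z = rng.random((8, 4)), rng.random((16, 4))
print(tensor_qmc_estimate(Y, Z, lambda y, z: float(np.sum(y) - np.sum(z))))
```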
We summarize the combined error estimate in the next theorem. In addition to Assumptions [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"}--[\[ass A4\]](#ass A4){reference-type="eqref" reference="ass A4"}, we assume that the physical domain $\Omega$ is ${\mathcal C}^{\theta,1}$ or the boundary of $\Omega$ is of class ${\mathcal C}^{\theta+1}$ for some integer $\theta\ge 1$ (for $\theta=1$, $\Omega$ can be convex instead) and the body force vector function ${\bf f}$ belongs to ${\bf H}^{\theta-1}(\Omega)$ (recall that ${\bf H}^0(\Omega)={\bf L}^2(\Omega)$). These additional assumptions are needed to guarantee that the strong solution ${\bf u}$ of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} is in the space ${\bf H}^{\theta+1}(\Omega)$, see Theorem [Theorem 3](#lem: vu bound){reference-type="ref" reference="lem: vu bound"}. This property is essential for the optimal convergence of the finite element solution of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} over $\Omega.$ Throughout the paper, $C$ is a generic constant that is independent of $h$, the number of QMC points $N_i$, and the dimension $s_i$ for $i=1,\,2,$ but may depend on the physical domain $\Omega$ and other parameters that will be mentioned accordingly.
**Theorem 1**. *Let ${\bf u}$ be the solution of problem [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} and ${\bf u}_{{\bf s}_h}$ be the Galerkin finite element solution of degree $\le r$ (with $r\ge 1$) defined as in [\[FE solution\]](#FE solution){reference-type="eqref" reference="FE solution"} with $y_j=z_k=0$ for $j>s_1\ge 1$ and $k>s_2\ge 1.$ For $i=1,\,2$, let $N_i = b^{m_i}$ with $m_i$ being positive integers and $b$ being prime. Then one can construct two interlaced polynomial lattice rules of order $\alpha \,:=\, \lfloor 1/p \rfloor +1$ with $N_1$ points, and of order $\beta:=\, \lfloor 1/q \rfloor +1$ with $N_2$ points where $|m_1 q - m_2 p| < 1$, and $p$ and $q$ are those in [\[ass A1\]](#ass A1){reference-type="eqref" reference="ass A1"}, such that the following QMC Galerkin finite element error bound holds: for $1\le \theta\le r,$ $$\begin{gathered}
|\Xi_{\bf u}-\Xi_{{\bf u}_{{\bf s}_h}, Q}|\le C \, h^{\theta+1} \|{\bf f}\|_{{\bf H}^{\theta-1}} \|\mathcal{L}\|\\
+ C\,\Big(s_1^{1-\max(1/p,1/\varrho_1)}+ s_2^{1-\max(1/q,1/\varrho_2)}+ N^{-\frac{1}{p+q}} \Big)\|{\bf f}\|_{{\bf V}^*} \|\mathcal{L}\|_{{\bf V}^*}\,,\end{gathered}$$ where $N = N_1 N_2$ is the total number of QMC quadrature points. The constant $C$ depends on $b,p,q,\lambda,$ and $\mu$, but is independent of $s_i$ and $m_i$ for $i\in \{1,2\},$ and $h$.*
*Further, there exists a combined QMC sparse grid approximation given by [\[eq: truncate infinite sums\]](#eq: truncate infinite sums){reference-type="eqref" reference="eq: truncate infinite sums"}, such that with the assumptions in Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"}, the above error bound remains valid with $M^{-\min(1/p, 1/q)}$ in place of $N^{-\frac{1}{p+q}}$, where $M$ is the total number of quadrature points in the QMC sparse grid approximation. The constant $C$ in the new bound depends on $b,p,q,\lambda,$ and $\mu$, but is independent of $s_1, s_2$, $M$, and $h$.*
# Weak formulation and regularity {#VFR}
This section is devoted to deriving the weak formulation of the parametric elasticity equation [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} for each value of the parameter ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U.$ Then we show some useful regularity properties of the weak solution with respect to both the physical variable ${\boldsymbol{x}}$ and parametric variables ${\boldsymbol{y}}$ and ${\boldsymbol{z}}$. Preceding this, we establish the existence and uniqueness of the weak solution.
For the weak formulation of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"}, for every ${\boldsymbol{y}},{\boldsymbol{z}}\in U$, we multiply both sides of [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} by a test function ${\bf v}\in {\bf V}$, and use Green's formula and the given homogeneous Dirichlet boundary conditions after integrating over the physical domain $\Omega.$ Then, the usage of the identities $$\boldsymbol{\sigma}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}):\nabla {\bf v}= \boldsymbol{\sigma}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}): {\boldsymbol{\varepsilon}}({\bf v})=\lambda \nabla \cdot {\bf u}\nabla \cdot {\bf v}+2\mu {\boldsymbol{\varepsilon}}({\bf u}):{\boldsymbol{\varepsilon}}({\bf v})$$ (the colon operator is the inner product between tensors) results in the following parameter-dependent weak formulation: for every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$, find ${\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}) \in {\bf V}$ satisfying $$\label{para weak}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}, {\bf v}) = \ell({\bf v}), \quad \text{for all} \quad {\bf v}\in {\bf V},$$ where the bilinear form $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$ and the linear functional $\ell(\cdot)$ are defined by $$\label{eq: bilinear}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}, {\bf v}) := \int_\Omega [2\mu\, {\boldsymbol{\varepsilon}}({\bf u}):{\boldsymbol{\varepsilon}}({\bf v})+\lambda \nabla \cdot {\bf u}\nabla \cdot {\bf v}] \,d{\boldsymbol{x}}~~{\rm and}~~
\ell({\bf v}) := \langle{\bf f},{\bf v} \rangle:=\int_\Omega {\bf f}\cdot {\bf v}\,d{\boldsymbol{x}}.$$
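The pointwise tensor identity used in this derivation, $\boldsymbol{\sigma}:\nabla {\bf v}=\boldsymbol{\sigma}:{\boldsymbol{\varepsilon}}({\bf v})=\lambda\,\nabla\cdot{\bf u}\,\nabla\cdot{\bf v}+2\mu\,{\boldsymbol{\varepsilon}}({\bf u}):{\boldsymbol{\varepsilon}}({\bf v})$, follows from the symmetry of $\boldsymbol{\sigma}$ and $\mathrm{tr}\,{\boldsymbol{\varepsilon}}({\bf v})=\nabla\cdot{\bf v}$; the snippet below is a purely illustrative numerical spot-check with random gradient matrices.

```python
# Illustrative pointwise check of sigma:grad(v) = sigma:eps(v)
#   = lambda*div(u)*div(v) + 2*mu*eps(u):eps(v), with random gradient matrices.
import numpy as np

rng = np.random.default_rng(2)
d = 3
lam, mu = 1.3, 0.7                                     # arbitrary sample values
Gu, Gv = rng.standard_normal((d, d)), rng.standard_normal((d, d))  # grad(u), grad(v)

eps = lambda G: 0.5 * (G + G.T)                        # symmetric strain tensor
div = lambda G: np.trace(G)                            # divergence from the gradient
sigma = lam * div(Gu) * np.eye(d) + 2 * mu * eps(Gu)

lhs = np.sum(sigma * Gv)                               # sigma : grad(v)
mid = np.sum(sigma * eps(Gv))                          # sigma : eps(v)
rhs = lam * div(Gu) * div(Gv) + 2 * mu * np.sum(eps(Gu) * eps(Gv))
print(np.isclose(lhs, mid), np.isclose(mid, rhs))      # True True
```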
The next theorem shows the existence and uniqueness of the solution of [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"}.
**Theorem 2**. *Assume that [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"} is satisfied. Then, for every ${\bf f}\in {\bf V}^*$ and ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$, the parametric weak formulation problem [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} has a unique solution.*
*Proof.* Using assumption [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"} and applying the Cauchy-Schwarz inequality leads to $$|\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf v}, {\bf w})| \le 2 \mu_{\max}\|{\boldsymbol{\varepsilon}}({\bf v})\|\,\|{\boldsymbol{\varepsilon}}({\bf w})\|+\lambda_{\max} \|\nabla \cdot {\bf v}\|\,\| \nabla \cdot {\bf w}\|\,.$$ Hence, using the inequalities $\|\nabla \cdot {\bf w}\| \le \sqrt{d}\,\|\nabla {\bf w}\|$ and $\|{\boldsymbol{\varepsilon}}({\bf w})\| \le
\|\nabla {\bf w}\|$, we have $$\label{eq: bounded}
|\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf v},{\bf w})| \le (d \lambda_{\max}+2 \mu_{\max})\|\nabla {\bf v}\|\,\|\nabla {\bf w}\|\le
(d \lambda_{\max}+2 \mu_{\max})\| {\bf v}\|_{{\bf V}}\,\|{\bf w}\|_{{\bf V}},$$ for any ${\bf v},{\bf w}\in {\bf V}$. So, the bilinear form $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$ is bounded on ${\bf V}\times {\bf V}$. For the coercivity property of $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$, we use again assumption [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"} in addition to Korn's inequality to obtain $$\label{eq: coer B}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf v},{\bf v})\ge 2 \mu_{\min} \|{\boldsymbol{\varepsilon}}({\bf v})\|^2 \ge C \mu_{\min} \| {\bf v}\|_{{\bf V}}^2, \quad {\bf v}\in {\bf V}.$$ Since $\ell(\cdot)$ is a bounded linear functional on ${\bf V}$, an application of the Lax-Milgram theorem completes the proof. $\quad \Box$
For the finite element error analysis, we discuss next some required regularity properties of the parametric solution of [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"}. For the nearly incompressible case (which is beyond the scope of this work), one has to be more specific about the constant $\widetilde C$ in the following theorem.
**Theorem 3**. *Assume that [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"} is satisfied. Then, for every ${\bf f}\in {\bf V}^*$ and every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$, the parametric weak solution ${\bf u}={\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})$ of problem [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} satisfies $$\label{a priori}
\|{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}} \le \frac{C}{\mu_{\min}} \|{\bf f}\|_{{\bf V}^*}\,.$$ If $\Omega$ is ${\mathcal C}^{\theta,1}$ (or the boundary of $\Omega$ is of class ${\mathcal C}^{\theta+1}$) for some integer $\theta\ge 1$ (for $\theta=1$, we may assume that $\Omega$ is convex instead), then ${\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}) \in {\bf V}\cap {\bf H}^{\theta+1}(\Omega)$ (that is, it is a strong solution of problem [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"}) provided that [\[ass A4\]](#ass A4){reference-type="eqref" reference="ass A4"} is satisfied and ${\bf f}\in {\bf H}^{\theta-1}(\Omega)$. Furthermore, $$\label{a priori H2}
\|{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf H}^{\theta+1}}
\le \widetilde C\,\|{\bf f}\|_{{\bf H}^{\theta-1}},\quad{\rm for~ every}~~ {\boldsymbol{y}},\,{\boldsymbol{z}}\in U \,.$$ The constant $\widetilde C$ depends on $\Omega$, $\mu$, $\lambda$, including $\|\mu(\cdot,{\boldsymbol{y}})\|_{W^{\theta,\infty}(\Omega)}$ and $\|\lambda(\cdot,{\boldsymbol{z}})\|_{W^{\theta,\infty}(\Omega)}$.*
*Proof.* From the coercivity property in [\[eq: coer B\]](#eq: coer B){reference-type="eqref" reference="eq: coer B"} and the weak formulation in [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"}, we have $$C \mu_{\min}\| {\bf u}\|_{{\bf V}}^2\le 2 \mu_{\min} \|{\boldsymbol{\varepsilon}}({\bf u})\|^2 \le \mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u},{\bf u})=\ell({\bf u}) \le \|{\bf f}\|_{{\bf V}^*} \|{\bf u}\|_{{\bf V}},$$ for every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$. Thus, the proof of the regularity estimate in [\[a priori\]](#a priori){reference-type="eqref" reference="a priori"} is completed.
For every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$, the operator $\nabla \cdot \boldsymbol{\sigma}$ in the elasticity equation [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} is strongly elliptic because the bilinear operator $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$ is coercive on ${\bf V}$. Thus, due to the imposed assumptions on $\Omega$, the solution ${\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})$ of [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} is in the space ${\bf H}^{\theta+1}$ and satisfies the regularity property in [\[a priori H2\]](#a priori H2){reference-type="eqref" reference="a priori H2"}. See [@Cialert1988 Theorem 6.3-6], [@Grisvard1985 Theorems 2.4.2.5, 2.5.1.1, and 3.2.1.3] and [@McLean2000 Chapter 4] for more details about strongly elliptic system and elliptic regularity. $\quad \Box$
In the QMC (see Theorem [Theorem 7](#prop:qmc){reference-type="ref" reference="prop:qmc"}) and QMC sparse grids (see Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"}) error estimations, we need to bound the mixed first partial derivatives of the parametric displacement ${\bf u}$ with respect to the random variables $y_j$ and $z_k$. This will be the topic of the next theorem. For convenience, we introduce ${\mathcal{S}}$ to be the set of (multi-index) infinite vectors $\boldsymbol \alpha=(\alpha_j)_{j\ge 1}$ with nonnegative integer entries such that $|\boldsymbol \alpha|:=\sum_{j\ge 1} \alpha_j<\infty.$ That is, sequences of nonnegative integers for which only finitely many entries are nonzero. For $\boldsymbol \alpha=(\alpha_j)_{j\ge 1}$ and $\boldsymbol \beta=(\beta_j)_{j\ge 1}$ belonging to $\mathcal{S},$ the mixed partial derivative $\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha}$ is defined by $$\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha}:=\partial_{{\boldsymbol{z}}}^{\boldsymbol \beta} \partial_{{\boldsymbol{y}}}^{\boldsymbol \alpha} =\frac{\partial^{|\boldsymbol \beta|}}{\partial_{z_1}^{\beta_1}\partial_{z_2}^{\beta_2}\cdots}\frac{\partial^{|\boldsymbol \alpha|}}{\partial_{y_1}^{\alpha_1}\partial_{y_2}^{\alpha_2}\cdots} .$$ It reduces to $\partial_{{\boldsymbol{y}}}^{\boldsymbol \alpha}$ and $\partial_{{\boldsymbol{z}}}^{\boldsymbol \beta}$ when $|\boldsymbol \beta|=0$ and $|\boldsymbol \alpha|=0,$ respectively.
**Theorem 4**. *Assume that [\[ass A5\]](#ass A5){reference-type="eqref" reference="ass A5"} and [\[ass A1\]](#ass A1){reference-type="eqref" reference="ass A1"} (for $p=q=1$) are satisfied. Then, for every ${\bf f}\in {\bf V}^*$, ${\boldsymbol{y}},{\boldsymbol{z}}\in U$, and $\boldsymbol \alpha,\boldsymbol \beta\in \mathcal{S}$, the solution ${\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})$ of the parametric weak problem [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} satisfies $$\label{mixed estimate}
\big\|{\boldsymbol{\varepsilon}}\big(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\big)\big\| \le
(|\boldsymbol \alpha|+|\boldsymbol \beta|)! \
\widetilde {\bf b}^{\boldsymbol \alpha} \widehat {\bf b}^{\boldsymbol \beta}\|{\boldsymbol{\varepsilon}}({\bf u})\|,$$ where $$\widetilde {\bf b}^{\boldsymbol \alpha}=\prod_{i\ge 1} (\widetilde b_i)^{\alpha_i}~~{\rm and}~~ \widehat {\bf b}^{\boldsymbol \beta}=\prod_{i\ge 1} (\widehat b_i)^{\beta_i},~~{\rm with}~~
\widetilde b_j=\frac{\|\psi_j\|_{L^\infty(\Omega)}}{\mu_{\min}}~~{\rm and}~~ \widehat b_j=\frac{d}{2}\,\frac{\|\phi_j\|_{L^\infty(\Omega)}}{\mu_{\min}}\,.$$*
*Consequently, $$\label{mixed estimate 2}
\|\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}} \le
\frac{C}{\mu_{\min}}(|\boldsymbol \alpha|+|\boldsymbol \beta|)! \
\widetilde {\bf b}^{\boldsymbol \alpha} \widehat {\bf b}^{\boldsymbol \beta}\|{\bf f}\|_{{\bf V}^*},$$ where the constant $C$ depends on $\Omega$ only.*
*Proof.* Differentiating both sides of [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} with respect to the variables $y_j$ and $z_k$, we find the following recurrence after a tedious calculation $$\begin{gathered}
\label{recurrence}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha}{\bf u}, {\bf v})
=-2\sum_{\boldsymbol \alpha}\alpha_j \int_\Omega \psi_j {\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha-{\bf e}_j} {\bf u}):{\boldsymbol{\varepsilon}}({\bf v})\,d{\boldsymbol{x}}\\
-\sum_{\boldsymbol \beta}\beta_k \int_\Omega \phi_k \nabla\cdot(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta-{\bf e}_k,\boldsymbol \alpha} {\bf u})\nabla\cdot{\bf v}\,d{\boldsymbol{x}},\quad \text{for all} \quad {\bf v}\in {\bf V},\end{gathered}$$ where $\sum_{\boldsymbol \alpha}=\sum_{j,\alpha_j\ne 0}$ (that is, the sum over the nonzero indices of $\boldsymbol \alpha$), and ${\bf e}_i \in \mathcal{S}$ denotes the multi-index with entry $1$ in position $i$ and zeros elsewhere.
Choosing ${\bf v}=\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha}{\bf u}$ in [\[recurrence\]](#recurrence){reference-type="eqref" reference="recurrence"}, then using the inequality $\|\nabla \cdot {\bf v}\|\le \sqrt{d}\,\|{\boldsymbol{\varepsilon}}({\bf v})\|$ and the coercivity property of $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$ in [\[eq: coer B\]](#eq: coer B){reference-type="eqref" reference="eq: coer B"}, after some simplifications, we obtain $$\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u})\|^2
\le
\sum_{\boldsymbol \alpha}\alpha_j \widetilde b_j \|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha-{\bf e}_j} {\bf u})\|\,
\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u})\|
+ \sum_{\boldsymbol \beta}\beta_k\, \widehat b_k
\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta-{\bf e}_k,\boldsymbol \alpha} {\bf u})\|\,\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u})\|\,,$$ and consequently, $$\label{estimate 1}
\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u})\|
\le
\sum_{\boldsymbol \alpha}\alpha_j \widetilde b_j \|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha-{\bf e}_j} {\bf u})\|\,
+ \sum_{\boldsymbol \beta}\beta_k\, \widehat b_k
\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta-{\bf e}_k,\boldsymbol \alpha} {\bf u})\|\,.$$ When $\boldsymbol \beta={\bf 0},$ the above inequality reduces to $$\label{estimate 2}
\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{y}}}^{\boldsymbol \alpha} {\bf u})\| \le
\sum_{\boldsymbol \alpha}\alpha_j \widetilde b_j \|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{y}}}^{\boldsymbol \alpha-{\bf e}_j} {\bf u})\|\,.$$ Recursively, we obtain $$\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{y}}}^{\boldsymbol \alpha} {\bf u})\| \le
[\,|\boldsymbol \alpha| \,(|\boldsymbol \alpha|-1)\,\cdots 1]\,\prod_{i\ge 1} (\widetilde b_i)^{\alpha_i}\|{\boldsymbol{\varepsilon}}({\bf u})\|= |\boldsymbol \alpha|!
\prod_{i\ge 1} (\widetilde b_i)^{\alpha_i}\|{\boldsymbol{\varepsilon}}({\bf u})\|,$$ and hence, [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"} holds true in this case. In a similar fashion, we can show [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"} when $\boldsymbol \alpha={\bf 0}.$
When $\boldsymbol \alpha$ and $\boldsymbol \beta$ are both not identically zero vectors, the above approach can be extended; however, it becomes harder to follow. Owing to this, following [@CohenDeVoreSchwab2010], we instead use induction on $n:=|\boldsymbol \alpha+\boldsymbol \beta|$. From the above discussion, it is clear that [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"} holds true when $|\boldsymbol \alpha+\boldsymbol \beta|=1$. Now, assume that [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"} is true for $|\boldsymbol \alpha+\boldsymbol \beta|=n$; the task is to establish it for $|\boldsymbol \alpha+\boldsymbol \beta|=n+1.$
From [\[estimate 1\]](#estimate 1){reference-type="eqref" reference="estimate 1"} and the induction hypothesis, we have $$\begin{gathered}
\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u})\|\le
n! \Big(\sum_{\boldsymbol \alpha}\alpha_j \widetilde b_j
\widetilde {\bf b}^{\boldsymbol \alpha-{\bf e}_j} \widehat {\bf b}^{\boldsymbol \beta}+ \sum_{\boldsymbol \beta}\beta_k\, \widehat b_k
\widetilde {\bf b}^{\boldsymbol \alpha} \widehat {\bf b}^{\boldsymbol \beta-{\bf e}_k}\Big)\|{\boldsymbol{\varepsilon}}({\bf u})\|\\
= n!\widetilde {\bf b}^{\boldsymbol \alpha} \widehat {\bf b}^{\boldsymbol \beta}\Big(\sum_{\boldsymbol \alpha}\alpha_j + \sum_{\boldsymbol \beta}\beta_k\Big)\|{\boldsymbol{\varepsilon}}({\bf u})\|\,.\end{gathered}$$ Since $\sum_{\boldsymbol \alpha}\alpha_j + \sum_{\boldsymbol \beta}\beta_k=n+1$, the proof of [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"} is completed.
Finally, since $\|{\boldsymbol{\varepsilon}}(\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}))\| \ge C\,\|\partial_{{\boldsymbol{z}},{\boldsymbol{y}}}^{\boldsymbol \beta,\boldsymbol \alpha} {\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}}$ (by Korn's inequality) and since $\|{\boldsymbol{\varepsilon}}({\bf u})\|\le \|\nabla {\bf u}\|\le \frac{C}{\mu_{\min}}\|{\bf f}\|_{{\bf V}^*}$ (by [\[a priori\]](#a priori){reference-type="eqref" reference="a priori"}), we derive [\[mixed estimate 2\]](#mixed estimate 2){reference-type="eqref" reference="mixed estimate 2"} from [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"}. $\quad \Box$
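The induction above can also be illustrated numerically for small multi-indices: iterating the recurrence coming from [\[estimate 1\]](#estimate 1){reference-type="eqref" reference="estimate 1"} (taken with equality) reproduces the closed-form bound $(|\boldsymbol \alpha|+|\boldsymbol \beta|)!\,\widetilde {\bf b}^{\boldsymbol \alpha} \widehat {\bf b}^{\boldsymbol \beta}$ appearing in [\[mixed estimate\]](#mixed estimate){reference-type="eqref" reference="mixed estimate"}. The sketch below checks only this combinatorial identity; the values of $\widetilde b_j$ and $\widehat b_k$ are arbitrary sample numbers.

```python
# Illustrative check of the combinatorial identity behind Theorem 4: the recursion
#   B(alpha, beta) = sum_j alpha_j*bt[j]*B(alpha - e_j, beta)
#                  + sum_k beta_k*bh[k]*B(alpha, beta - e_k),   B(0, 0) = 1,
# equals (|alpha| + |beta|)! * bt**alpha * bh**beta.
from math import factorial, prod, isclose

bt = [0.7, 0.3, 0.1]     # arbitrary sample values standing in for b~_j
bh = [0.5, 0.2]          # arbitrary sample values standing in for b^_k

def B(alpha, beta):
    if sum(alpha) + sum(beta) == 0:
        return 1.0
    total = 0.0
    for j, aj in enumerate(alpha):
        if aj > 0:
            total += aj * bt[j] * B(alpha[:j] + (aj - 1,) + alpha[j+1:], beta)
    for k, bk in enumerate(beta):
        if bk > 0:
            total += bk * bh[k] * B(alpha, beta[:k] + (bk - 1,) + beta[k+1:])
    return total

alpha, beta = (2, 1, 0), (1, 2)
closed = factorial(sum(alpha) + sum(beta)) \
    * prod(b**a for b, a in zip(bt, alpha)) * prod(b**a for b, a in zip(bh, beta))
print(isclose(B(alpha, beta), closed))   # True
```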
# A truncated problem {#Sec: Truncation}
This section is dedicated to investigating the error from truncating the first and second sums in [\[KLexpansion\]](#KLexpansion){reference-type="eqref" reference="KLexpansion"} at $s_1$ and $s_2$ terms, respectively, for some $s_1,s_2 \in \mathbb{N}.$ In other words, we set $y_j=0$ and $z_k=0$ for $j>s_1$ and $k>s_2$, respectively. We start by defining the truncated weak formulation problem: for every ${\boldsymbol{y}}^{s_1},\,{\boldsymbol{z}}^{s_2} \in U,$ find ${\bf u}_{\bf s}(\cdot, {\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2}) \in {\bf V}$, with ${\bf s}=(s_1,s_2)$, such that $$\label{truncated weak}
\mathcal{B}({\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2};{\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2}),{\bf v}) = \ell({\bf v}) \qquad \forall ~{\bf v}\in {\bf V}.$$
Thanks to Theorem [Theorem 2](#thm: unique solution){reference-type="ref" reference="thm: unique solution"}, the truncated problem [\[truncated weak\]](#truncated weak){reference-type="eqref" reference="truncated weak"} has a unique solution. Estimating the truncation error, which is needed for measuring the QMC finite element error in [\[combine\]](#combine){reference-type="eqref" reference="combine"}, is the topic of the next theorem. For brevity, we let $$\mu_c({\boldsymbol{x}},{\boldsymbol{y}}) = \sum_{j=s_1+1}^\infty y_j \psi_j({\boldsymbol{x}})~~{\rm and}~~
\lambda_c({\boldsymbol{x}},{\boldsymbol{z}}) = \sum_{j=s_2+1}^\infty z_j \phi_j({\boldsymbol{x}}), \quad{\boldsymbol{x}}\in\Omega,\ {\boldsymbol{y}},{\boldsymbol{z}}\in U.$$
**Theorem 5**. *Under Assumption [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"}, for every ${\bf f}\in {\bf V}^*$, ${\boldsymbol{y}},{\boldsymbol{z}}\in U$, and ${\bf s}=(s_1,s_2) \in \mathbb{N}^2$, the solution ${\bf u}_{\bf s}$ of the truncated parametric weak problem [\[truncated weak\]](#truncated weak){reference-type="eqref" reference="truncated weak"} satisfies $$\|{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}) - {\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})\|_{{\bf V}}
\le C\,\widehat C \|{\bf f}\|_{{\bf V}^*},~~{\rm with}~~
\widehat C= \sum_{j \ge s_1+1} \|\psi_j\|_{L^\infty(\Omega)}
+\sum_{j \ge s_2+1} \|\phi_j\|_{L^\infty(\Omega)}.$$ Moreover, if [\[ass A5\]](#ass A5){reference-type="eqref" reference="ass A5"}--[\[ass A3\]](#ass A3){reference-type="eqref" reference="ass A3"} are satisfied, and if $\mathcal{L}:{\bf V}\to \mathbb{R}$ is a bounded linear functional, (that is, $|\mathcal{L}({\bf w})|\le \|\mathcal{L}\|_{{\bf V}^*}\|{\bf w}\|_{\bf V}$ for all ${\bf w}\in {\bf V}$), then for every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U,$ we have $$\label{convergence calL u-us}
\begin{aligned}
|\mathcal{L}({\bf u}(\cdot, {\boldsymbol{y}},{\boldsymbol{z}}))-\mathcal{L}&({\bf u}_{\bf s}(\cdot, {\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2}))|
\le C\, \Big(s_1^{1-\max(1/p,1/\varrho_1)}+ s_2^{1-\max(1/q,1/\varrho_2)}\Big)\, \|{\bf f}\|_{{\bf V}^*} \|\mathcal{L}\|_{{\bf V}^*}, \end{aligned}$$ for some $0<p\,,q\le 1$ (see [\[ass A1\]](#ass A1){reference-type="eqref" reference="ass A1"}) and for some $0<\varrho_1\,,\varrho_2<1$ (see [\[ass A3\]](#ass A3){reference-type="eqref" reference="ass A3"}). Here, the (generic) constant $C$ depends on $\Omega$, $\mu_{\max}$, $\mu_{\min}$, and $\lambda_{\max}$.*
*Proof.* From the variational formulations in [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} and [\[truncated weak\]](#truncated weak){reference-type="eqref" reference="truncated weak"}, we notice that $$\mathcal{B}({\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2};{\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})-{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}),{\bf v})=\mathcal{B}({\boldsymbol{y}}-{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}-{\boldsymbol{z}}^{s_2};{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}),{\bf v}).$$ Following the steps in [\[eq: bounded\]](#eq: bounded){reference-type="eqref" reference="eq: bounded"} and using the achieved estimate in [\[a priori\]](#a priori){reference-type="eqref" reference="a priori"}, we have $$\begin{aligned}
|\mathcal{B}({\boldsymbol{y}}-{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}-&{\boldsymbol{z}}^{s_2};{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}),{\bf v})|
\\
&\le C\max_{{\boldsymbol{x}}\in\Omega,\ {\boldsymbol{y}},{\boldsymbol{z}}\in U}(|\mu_c({\boldsymbol{x}},{\boldsymbol{y}})|+|\lambda_c({\boldsymbol{x}},{\boldsymbol{z}})|)\,\|{\bf u}\|_{{\bf V}}\,\|{\bf v}\|_{{\bf V}}
\le C\,\widehat C \| {\bf f}\|_{{\bf V}^*}\|{\bf v}\|_{{\bf V}}\,.\end{aligned}$$ On the other hand, by the coercivity property in [\[eq: coer B\]](#eq: coer B){reference-type="eqref" reference="eq: coer B"}, we have $$\begin{gathered}
\mathcal{B}({\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2};{\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})-{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}),{\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})-{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})) \\
\ge C \mu_{\min} \| {\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})-{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}}^2\,.\end{gathered}$$ Combining the above equations and cancelling the common factor, the first desired result follows. To show [\[convergence calL u-us\]](#convergence calL u-us){reference-type="eqref" reference="convergence calL u-us"}, we simply use the imposed assumption on $\mathcal{L}$ and the first achieved estimate to obtain $$\begin{aligned}
\big|\mathcal{L}\big({\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}) - {\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})\big)\big|
&\le
\|\mathcal{L}\|_{{\bf V}^*}
\|{\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}) - {\bf u}_{\bf s}(\cdot,{\boldsymbol{y}}^{s_1},{\boldsymbol{z}}^{s_2})\|_{{\bf V}}
\le C\,\widehat C\, \|{\bf f}\|_{{\bf V}^*} \|\mathcal{L}\|_{{\bf V}^*}.\end{aligned}$$ Hence, [\[convergence calL u-us\]](#convergence calL u-us){reference-type="eqref" reference="convergence calL u-us"} is a direct consequence of the above estimate, the Stechkin inequality $$\sum_{j \ge s+1} b_j\le C_\varsigma\,
s^{1-\frac{1}{\varsigma}}\Big(\sum_{j \ge 1} b_j^\varsigma\Big)^{\frac{1}{\varsigma}},\quad{\rm for}~~0< \varsigma < 1,$$ where $\{b_j\}_{j\ge1}$ is a nonincreasing sequence of positive numbers, and assumptions [\[ass A5\]](#ass A5){reference-type="eqref" reference="ass A5"}--[\[ass A3\]](#ass A3){reference-type="eqref" reference="ass A3"}. $\quad \Box$
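As a quick numerical sanity check of the Stechkin inequality quoted above (this illustration is ours and not part of the analysis), one can take the nonincreasing sequence $b_j=j^{-2}$, the decay rate appearing in the numerical examples below, and verify that the tail is dominated by the stated bound; the truncation level `J` below is only a finite proxy for the infinite sums.

```python
# Numerical illustration (ours) of the Stechkin inequality for b_j = j^{-2} and varsigma = 0.6:
# tail(s) = sum_{j >= s+1} b_j should be dominated by s^{1-1/varsigma} * (sum_j b_j^varsigma)^{1/varsigma}.
import numpy as np

varsigma = 0.6
J = 200_000                               # finite proxy for the infinite sums
j = np.arange(1, J + 1, dtype=float)
b = j ** -2.0

rhs_factor = np.sum(b ** varsigma) ** (1.0 / varsigma)
for s in (10, 100, 1000, 10000):
    tail = np.sum(b[s:])                  # sum_{j >= s+1} b_j
    bound = s ** (1.0 - 1.0 / varsigma) * rhs_factor
    print(f"s = {s:6d}  tail = {tail:.3e}  bound = {bound:.3e}  ratio = {tail/bound:.3f}")
```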
# Finite element approximation {#Sec: FEM}
This section is devoted to introducing the Galerkin FEM of degree at most $r$ ($r\ge 1$) for the approximation of the solution to the model problem [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} over $\Omega$, and consequently, to problem [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"}. Stability and error estimates are investigated. The results obtained in this section are needed for measuring the QMC finite element error in [\[combine\]](#combine){reference-type="eqref" reference="combine"}.
We introduce a family of regular triangulations (made of simplices) $\mathcal{T}_h$ of the domain $\overline{\Omega}$ and set $h=\max_{K\in \mathcal{T}_h}(h_K)$, where $h_{K}$ denotes the diameter of the element $K$. Let $V_h \subset H^1_0(\Omega)$ denote the usual conforming finite element space of continuous, piecewise polynomial functions of degree at most $r$ on $\mathcal{T}_h$ that vanish on $\partial \Omega$. Let ${\bf V}_h=[V_h]^d$ be the finite element vector space. Then there exists a constant $C$ (depending on $\Omega$) such that $$\label{projection estimate}
\inf_{{\bf v}_h \in {\bf V}_h} \| {\bf v}-{\bf v}_h\|_{{\bf V}} \le Ch^\theta \|{\bf v}\|_{{\bf H}^{\theta+1}},\quad {\rm for}~~1\le \theta \le r.$$
Motivated by the weak formulation in [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"}, we define the parametric finite element approximate solution as: find ${\bf u}_h(\cdot, {\boldsymbol{y}},{\boldsymbol{z}}) \in {\bf V}_h$ such that $$\label{FE solution}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}_h, {\bf v}_h) = \ell({\bf v}_h), \quad \text{for all} \quad {\bf v}_h \in {\bf V}_h,\quad{\rm for~every}~~{\boldsymbol{y}},\,{\boldsymbol{z}}\in U.$$
Assuming that [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"} is satisfied, then, for every ${\bf f}\in {\bf V}^*$ and every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U$, the finite element scheme defined in [\[FE solution\]](#FE solution){reference-type="eqref" reference="FE solution"} has a unique parametric solution ${\bf u}_h(\cdot, {\boldsymbol{y}},{\boldsymbol{z}}) \in {\bf V}_h$. This can be shown by mimicking the proof of Theorem [Theorem 2](#thm: unique solution){reference-type="ref" reference="thm: unique solution"} because ${\bf V}_h \subset {\bf V}.$ Furthermore, the finite element solution is also stable; the bound in [\[a priori\]](#a priori){reference-type="eqref" reference="a priori"} remains valid with ${\bf u}_h$ in place of ${\bf u}$, that is, $$\label{eq: H1 bound of u_h}
\| {\bf u}_h(\cdot, {\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}} \le \frac{C}{\mu_{\min}} \|{\bf f}\|_{{\bf V}^*}\,.$$
In the next theorem, we discuss the ${\bf V}$-norm error estimate from the finite element discretization. Then, as in Theorem [Theorem 5](#Truncating error){reference-type="ref" reference="Truncating error"}, for measuring the QMC finite element error in [\[combine\]](#combine){reference-type="eqref" reference="combine"}, we derive an estimate that involves a linear functional $\mathcal{L}$ acting on the difference ${\bf u}-{\bf u}_h.$
**Theorem 6**. *For every ${\boldsymbol{y}},\,{\boldsymbol{z}}\in U,$ let ${\bf u}$ and ${\bf u}_h$ be the solutions of problems [\[eq:L1\]](#eq:L1){reference-type="eqref" reference="eq:L1"} and [\[FE solution\]](#FE solution){reference-type="eqref" reference="FE solution"}, respectively. Assume that ${\bf u}$ satisfies the regularity properties in [\[a priori H2\]](#a priori H2){reference-type="eqref" reference="a priori H2"} for some integer $1\le \theta\le r$, and that Assumptions [\[ass A2\]](#ass A2){reference-type="eqref" reference="ass A2"} and [\[ass A4\]](#ass A4){reference-type="eqref" reference="ass A4"} hold. Then we have $$\label{convergence}
\|{\bf u}(\cdot, {\boldsymbol{y}},{\boldsymbol{z}})-{\bf u}_h(\cdot, {\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}}
\le C\, h^\theta \|{\bf f}\|_{{\bf H}^{\theta-1}},$$ when ${\bf f}\in {\bf H}^{\theta-1}(\Omega)$. Moreover, if $\mathcal{L}:{\bf L}^2(\Omega) \to \mathbb{R}$ is a bounded linear functional (that is, $|\mathcal{L}({\bf w})|\le \|\mathcal{L}\|\,\|{\bf w}\|$), then $$\label{convergence calL}
|\mathcal{L}({\bf u}(\cdot, {\boldsymbol{y}},{\boldsymbol{z}}))-\mathcal{L}({\bf u}_h(\cdot, {\boldsymbol{y}},{\boldsymbol{z}}))|\le C\, h^{\theta+1} \|{\bf f}\|_{{\bf H}^{\theta-1}} \|\mathcal{L}\|,\quad{\rm for~every}~~{\boldsymbol{y}},\,{\boldsymbol{z}}\in U.
$$ The constant $C$ depends on $\Omega$, $\mu_{\max}$, $\mu_{\min}$, and $\lambda_{\max}$, but not on $h$.*
*Proof.* The proof of [\[convergence\]](#convergence){reference-type="eqref" reference="convergence"} follows a standard argument for finite element approximations and is included here for completeness. From the weak formulation in [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} and the finite element scheme in [\[FE solution\]](#FE solution){reference-type="eqref" reference="FE solution"}, we have the following orthogonality property $$\label{equ:Gal ort}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}-{\bf u}_h, {\bf v}_h) = 0, \quad \text{for all} \quad {\bf v}_h \in {\bf V}_h\,.$$ By using the above equation, the coercivity and boundedness of $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot, \cdot)$, we obtain, $$\begin{aligned}
\| {\bf u}-{\bf u}_h \|_{{\bf V}}^2 &\le C \mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}-{\bf u}_h, {\bf u}-{\bf u}_h)
= C \mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}-{\bf u}_h, {\bf u}-{\bf w}_h)
\le C \|{\bf u}-{\bf u}_h\|_{{\bf V}}\,\|{\bf u}-{\bf w}_h\|_{{\bf V}},\end{aligned}$$ for all ${\bf w}_h\in{\bf V}_h$. This implies $\| {\bf u}-{\bf u}_h \|_{{\bf V}}
\le C \|{\bf u}-{\bf w}_h\|_{{\bf V}}$ for all ${\bf w}_h\in{\bf V}_h$. Thus, [\[convergence\]](#convergence){reference-type="eqref" reference="convergence"} follows from [\[projection estimate\]](#projection estimate){reference-type="eqref" reference="projection estimate"} and the regularity estimate in [\[a priori H2\]](#a priori H2){reference-type="eqref" reference="a priori H2"}.
To show [\[convergence calL\]](#convergence calL){reference-type="eqref" reference="convergence calL"}, we use the so-called Nitsche trick. We first replace $\ell$ in [\[para weak\]](#para weak){reference-type="eqref" reference="para weak"} by $\mathcal{L}$, and consider a new parametric variational problem: find ${\bf u}_{\mathcal{L}} \in {\bf V}$ such that $$\label{para weak new}
\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}_{\mathcal{L}}, {\bf v}) = \mathcal{L}({\bf v}), \quad \text{for all} \quad {\bf v}\in {\bf V}.$$ By Theorem [Theorem 2](#thm: unique solution){reference-type="ref" reference="thm: unique solution"}, this problem has a unique solution for every ${\boldsymbol{y}},{\boldsymbol{z}}\in U$. Hence, using Theorem [Theorem 3](#lem: vu bound){reference-type="ref" reference="lem: vu bound"} (with ${\bf u}_{\mathcal{L}}$ in place of ${\bf u}$) and the given assumption on $\mathcal{L}$, we conclude that $\|{\bf u}_{\mathcal{L}}\|_{{\bf H}^2}\le C\|\mathcal{L}\|.$ Therefore, by repeating the above argument, we deduce $$\label{equ:uL}
\|{\bf u}_{\mathcal{L}}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})-{\bf u}_{\mathcal{L},h}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})\|_{{\bf V}}\le Ch\|{\bf u}_{\mathcal{L}}\|_{{\bf H}^2}\le Ch\|\mathcal{L}\|,$$ where ${\bf u}_{\mathcal{L},h} \in {\bf V}_h$ is the finite element approximation of ${\bf u}_{\mathcal{L}}$. By using successively the linearity of $\mathcal{L}$, equation [\[para weak new\]](#para weak new){reference-type="eqref" reference="para weak new"}, the symmetry of $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$, the Galerkin orthogonality [\[equ:Gal ort\]](#equ:Gal ort){reference-type="eqref" reference="equ:Gal ort"}, and the boundedness of $\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};\cdot,\cdot)$, we obtain $$\begin{aligned}
|\mathcal{L}({\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}))-\mathcal{L}({\bf u}_h(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}))|
&= |\mathcal{L}({\bf u}(\cdot,{\boldsymbol{y}},{\boldsymbol{z}})-{\bf u}_h(\cdot,{\boldsymbol{y}},{\boldsymbol{z}}))| \\
&= |\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}-{\bf u}_h,{\bf u}_{\mathcal{L}})|
= |\mathcal{B}({\boldsymbol{y}},{\boldsymbol{z}};{\bf u}-{\bf u}_h,{\bf u}_{\mathcal{L}}-{\bf u}_{\mathcal{L},h})| \\
&\le C \|{\bf u}-{\bf u}_h\|_{{\bf V}}\,\|{\bf u}_{\mathcal{L}}-{\bf u}_{\mathcal{L},h}\|_{{\bf V}}.\end{aligned}$$ The required estimate [\[convergence calL\]](#convergence calL){reference-type="eqref" reference="convergence calL"} now follows from [\[convergence\]](#convergence){reference-type="eqref" reference="convergence"} and [\[equ:uL\]](#equ:uL){reference-type="eqref" reference="equ:uL"} . $\quad \Box$
# QMC method and sparse grids {#sec: QMC errors}
Our aim is to measure the QMC finite element error which occurs in the third term on the right hand side of [\[combine\]](#combine){reference-type="eqref" reference="combine"}. To serve this purpose, the current section is dedicated to investigating both the high-order QMC and the high-order QMC sparse grid errors arising from estimating the finite dimensional integral $$\label{If}
{\mathcal I}_{\bf s} {F}:=\int_{U_2} \int_{U_1} {F}({\boldsymbol{y}},{\boldsymbol{z}})\,d{\boldsymbol{y}}\,d{\boldsymbol{z}}\,.$$ Recall that $U_i=[0,1]^{s_i}$ are of fixed dimensions $s_i$ for $i=1,2,$ and ${\bf s}=(s_1,s_2)$. We approximate ${\mathcal I}_{\bf s} {F}$ via an equal-weight rule of the form: $$\label{eq:QMCInt}
Q_{\bf s,\bf N} [{F}]:=\frac{1}{N_1\,N_2}\sum_{k=0}^{N_2-1}\sum_{j=0}^{N_1-1} {F}({\boldsymbol{y}}_j,{\boldsymbol{z}}_k),\quad{\rm with}~~{\bf N}=(N_1,N_2),$$ where $N_i=b^{m_i}$ for a given prime $b$ and a given positive integer $m_i$, with $i=1,2.$ The QMC points $\{{\boldsymbol{y}}_0, \ldots,{\boldsymbol{y}}_{N_1-1} \}$ belong to $U_1$ and $\{{\boldsymbol{z}}_0, \ldots,{\boldsymbol{z}}_{N_2-1} \}$ belong to $U_2$. We shall analyze, in particular, the case where $Q_{\bf s,\bf N}$ is built from deterministic, interlaced high-order polynomial lattice rules as introduced in [@Dick2008] and as considered for affine-parametric operator equations in [@DickKuoGiaNuynsSchwab2014]. To this end, to generate a polynomial lattice rule in base $b$ with $N_1$ points in $U_1$, we need a *generating vector* of polynomials ${\bf g}(x) = (g_1(x), \ldots, g_{s_1}(x)) \in [P_{m_1}({\mathbb Z}_{b})]^{s_1}$, where $P_{m_1}({\mathbb Z}_{b})$ is the space of polynomials of degree less than $m_1$ in $x$ with coefficients taken from the finite field ${\mathbb Z}_{b}$.
For each integer $0\le n\le b^{m_1}-1$, we associate $n$ with the polynomial $$n(x) = \sum_{i=1}^{m_1} \eta_{i-1} x^{i-1} \quad \in {\mathbb Z}_{b}[x],$$ where $(\eta_{m_1-1}, \ldots ,\eta_0)$ is the $b$-adic expansion of $n$, that is $n =\sum_{i=1}^{m_1} \eta_{i-1}\,b^{i-1}\,.$ We also need a map $v_{m_1}$ which maps elements in ${\mathbb Z}_{b}(x^{-1})$ to the interval $[0,1)$, defined for any integer $w$ by $$v_{m_1} \left( \sum_{\ell=w}^\infty t_{\ell} x^{-\ell} \right) =\sum_{\ell=\max(1,w)}^{m_1} t_{\ell} b^{-\ell}.$$
Let $P \in {\mathbb Z}_b[x]$ be an irreducible polynomial with degree $m_1$. The classical polynomial lattice rule $\mathcal{S}_{P,b,m_1,s_1}({\bf g})$ associated with $P$ and the generating vector ${\bf g}$ is comprised of the quadrature points $${\boldsymbol{y}}_n =\left(v_{m_1} \Big( \frac{n(x)g_j(x)}{P(x)} \Big)\right)_{1\le j\le s_1} \in [0,1)^{s_1},\quad {\rm for}~~n = 0,\ldots, N_1 - 1.$$ In a similar fashion, we define the quadrature points ${\boldsymbol{z}}_n \in [0,1)^{s_2}$ for $n=0,\ldots,N_2-1$. In this case, the *generating vector* of polynomials is ${\bf g}= (g_1, \ldots, g_{s_2})$.
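For readers who prefer pseudocode, the construction just described can be transcribed into a few lines of Python. The sketch below is ours and purely illustrative: it works in base $b=2$ (the base used in the numerical experiments below) with a monic modulus, and the modulus and generating vector are arbitrary choices, not the component-by-component constructions analyzed later.

```python
import numpy as np

b = 2  # prime base; b = 2 is also the base used in the numerical experiments below

def poly_mul(a, c):
    """Product of two polynomials over GF(b); coefficient index = exponent."""
    out = np.zeros(len(a) + len(c) - 1, dtype=int)
    for i, ai in enumerate(a):
        if ai:
            out[i:i + len(c)] = (out[i:i + len(c)] + ai * np.asarray(c, dtype=int)) % b
    return out

def laurent_digits(q, P, m):
    """Digits t_1,...,t_m of q(x)/P(x) = sum_l t_l x^{-l} over GF(b), with P monic of degree m."""
    digits = np.zeros(m, dtype=int)
    r = {i: int(ci) % b for i, ci in enumerate(q) if ci % b}   # remainder: exponent -> coefficient
    if not r:
        return digits
    degP = len(P) - 1
    for k in range(max(r) - degP, -m - 1, -1):                 # long division; k = quotient exponent
        c = r.get(k + degP, 0)                                  # quotient coefficient (P is monic)
        if c:
            for i, pi in enumerate(P):
                if pi:
                    r[k + i] = (r.get(k + i, 0) - c * pi) % b
        if -m <= k <= -1:
            digits[-k - 1] = c                                  # this is the digit t_{-k}
    return digits

def poly_lattice_points(g, P, m):
    """Quadrature points of the classical polynomial lattice rule S_{P,b,m,s}(g)."""
    s, N = len(g), b ** m
    pts = np.zeros((N, s))
    for n in range(N):
        npoly = [(n // b ** i) % b for i in range(m)]           # n(x) = sum_i eta_i x^i
        for j in range(s):
            t = laurent_digits(poly_mul(npoly, g[j]), P, m)
            pts[n, j] = sum(int(t[l]) * b ** -(l + 1) for l in range(m))   # v_m applied to the quotient
    return pts

# illustrative (not CBC-optimized) choices: P(x) = x^4 + x + 1, irreducible over GF(2), and s = 2
P = [1, 1, 0, 0, 1]                       # coefficients of 1 + x + x^4
g = [[1], [1, 1, 1]]                      # g_1(x) = 1, g_2(x) = 1 + x + x^2
print(poly_lattice_points(g, P, 4)[:4])
```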
Classical polynomial lattice rules give almost first-order convergence for integrands of bounded variation. To obtain a higher order of convergence, the following interlacing procedure is needed. Following [@Goda2015; @GodaDick2015], the *digit interlacing function* with digit interlacing factor $\alpha \in \mathbb{N}$, $\mathscr{D}_\alpha: [0,1)^{\alpha} \to [0,1)$, is defined by $$\mathscr{D}_\alpha(x_1,\ldots, x_{\alpha})= \sum_{i=1}^\infty \sum_{j=1}^\alpha
\xi_{j,i} b^{-j - (i-1) \alpha}\;,$$ where $x_j = \sum_{i\ge 1} \xi_{j,i}\, b^{-i}$ for $1 \le j \le \alpha$. For vectors, we set $\mathscr{D}^s_\alpha: [0,1)^{\alpha s} \to [0,1)^s$ with $$\mathscr{D}^s_\alpha(x_1,\ldots, x_{\alpha s}) =
(\mathscr{D}_\alpha(x_1,\ldots, x_\alpha), \ldots,
\mathscr{D}_\alpha(x_{(s-1)\alpha +1},\ldots, x_{s \alpha}))\;.$$ Then, an interlaced polynomial lattice rule of order $\alpha$ with $b^m$ points in $s$ dimensions is a QMC rule using $\mathscr{D}_\alpha(\mathcal{S}_{P,b,m,\alpha s}({\bf g}))$ as quadrature points, for some given modulus $P$ and generating vector ${\bf g}$.
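A direct finite-precision transcription of the digit interlacing maps (again ours, for illustration only) reads as follows; applying `interlace_blocks` to the points of a polynomial lattice rule in $\alpha s$ dimensions yields the interlaced rule of order $\alpha$ described above.

```python
import numpy as np

def interlace(xs, alpha, digits=32, b=2):
    """Digit interlacing D_alpha: interleave the base-b digits of alpha numbers in [0,1)."""
    assert len(xs) == alpha
    y = 0.0
    for i in range(1, digits + 1):            # i-th digit of each component
        for j in range(1, alpha + 1):         # j-th component
            xi_ji = int(np.floor(xs[j - 1] * b ** i)) % b      # digit xi_{j,i}
            y += xi_ji * float(b) ** (-(j + (i - 1) * alpha))  # lands in position j + (i-1)*alpha
    return y

def interlace_blocks(point, alpha):
    """D^s_alpha on a point in [0,1)^{alpha*s}: interlace consecutive blocks of length alpha."""
    return [interlace(point[k * alpha:(k + 1) * alpha], alpha) for k in range(len(point) // alpha)]

print(interlace([0.25, 0.75], alpha=2))       # 0.4375 = (0.0111)_2
```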
Next, we derive the error from approximating the integral ${\mathcal I}_{\bf s} {F}$ in [\[If\]](#If){reference-type="eqref" reference="If"} by the QMC quadrature formula $Q_{\bf s,\bf N} [{F}]$ in [\[eq:QMCInt\]](#eq:QMCInt){reference-type="eqref" reference="eq:QMCInt"}. The proof mainly relies on [@DickKuoGiaNuynsSchwab2014 Theorem 3.1].
**Theorem 7**. *Let $\boldsymbol \chi=(\chi_j)_{j\ge 1}$ and $\boldsymbol \varphi= (\varphi_j)_{j\ge 1}$ be two sequences of positive numbers with $\sum_{j=1}^\infty \chi_j^p$ and $\sum_{j=1}^\infty \varphi_j^q$ being finite for some $0<p,\,q<1$. Let $\boldsymbol \chi_{s_1}=(\chi_j)_{1\le j\le s_1}$ and $\boldsymbol \varphi_{s_2} = (\varphi_j)_{1\le j\le s_2},$ and let $\alpha \,:=\, \lfloor 1/p \rfloor +1$ and $\beta \,:=\, \lfloor 1/q \rfloor +1$. Assume that $F$ satisfies the following regularity properties: for any ${\boldsymbol{y}}\in U_1$, ${\boldsymbol{z}}\in U_2$, $\boldsymbol \alpha\in \{0, 1, \ldots, \alpha\}^{s_1}$, and $\boldsymbol \beta\in \{0, 1, \ldots, \beta\}^{s_2}$, the following inequalities hold $$\label{eq:like-norm}
|\partial^{\boldsymbol \alpha}_{\boldsymbol{y}}{F}({\boldsymbol{y}},{\boldsymbol{z}})| \le c|\boldsymbol \alpha|!
\boldsymbol \chi_{s_1}^{\boldsymbol \alpha}\quad{\rm and}\quad |\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{F}({\boldsymbol{y}},{\boldsymbol{z}})| \le c|\boldsymbol \beta|!
\boldsymbol \varphi_{s_2}^{\boldsymbol \beta},$$ where the constant $c$ is independent of ${\boldsymbol{y}}$, ${\boldsymbol{z}}$, $s_1$, $s_2$, and of $p$ and $q.$ Then one can construct two interlaced polynomial lattice rules of order $\alpha$ with $N_1$ points, and of order $\beta$ with $N_2$ points, using a fast component-by-component algorithm, with cost $\mathcal{O}(\alpha\,s_1 N_1(\log N_1+\alpha\,s_1))$ and $\mathcal{O}(\beta\,s_2 N_2(\log N_2+\beta\,s_2))$ operations, respectively, so that the following error bound holds $$|{\mathcal I}_{\bf s} {F}-Q_{\bf s,\bf N} [{F}]|\le C\,\Big(N_1^{-1/p}+N_2^{-1/q}\Big)\,,$$ where for $i\in \{1,2\},$ the generic constant $C$ depends on $b,p$ and $q$, but is independent of $s_i$ and $m_i.$*
*By choosing $m_1, m_2 \in \mathbb{N}$ such that $|m_1q - m_2 p| < 1$, we obtain that the total number of QMC points is $N = N_1 N_2 = b^{m_1+m_2}$ and $$|{\mathcal I}_{\bf s} {F}-Q_{\bf s,\bf N} [{F}]|\le C\, N^{-\frac{1}{p+q}}\,.$$*
*Proof.* By adding and subtracting $Q_{s_1,N_1}[{F}(\cdot,{\boldsymbol{z}})]:=\frac{1}{N_1}\sum_{j=0}^{N_1-1} {F}({\boldsymbol{y}}_j,{\boldsymbol{z}}),$ the QMC error can be decomposed as $$\begin{gathered}
|{\mathcal I}_{\bf s} {F}-Q_{\bf s,\bf N} [{F}]|\le \int_{U_2} \Big|\int_{U_1} {F}({\boldsymbol{y}},{\boldsymbol{z}})\,d{\boldsymbol{y}}-Q_{s_1,N_1}[{F}(\cdot,{\boldsymbol{z}})]\Big|\,d{\boldsymbol{z}}\\
+\frac{1}{N_1}\sum_{j=0}^{N_1-1}\Big|\int_{U_2} {F}({\boldsymbol{y}}_j,{\boldsymbol{z}})\,d{\boldsymbol{z}}-Q_{s_2,N_2}[{F}({\boldsymbol{y}}_j,\cdot)]
\Big|\,,\end{gathered}$$ where $Q_{s_2,N_2}[{F}({\boldsymbol{y}}_j,\cdot)]:=\frac{1}{N_2}\sum_{k=0}^{N_2-1}
{F}({\boldsymbol{y}}_j,{\boldsymbol{z}}_k)$. By using [@DickKuoGiaNuynsSchwab2014 Theorem 3.1] and the regularity assumptions in [\[eq:like-norm\]](#eq:like-norm){reference-type="eqref" reference="eq:like-norm"}, we have $$\Big|\int_{U_1} {F}({\boldsymbol{y}},{\boldsymbol{z}})\,d{\boldsymbol{y}}-Q_{s_1,N_1}[{F}(\cdot,{\boldsymbol{z}})]\Big| \le C\, N_1^{-1/p},$$ and $$\Big|\int_{U_2} {F}({\boldsymbol{y}}_j,{\boldsymbol{z}})\,d{\boldsymbol{z}}-Q_{s_2,N_2}[{F}({\boldsymbol{y}}_j,\cdot)]\Big|\le C N_2^{-1/q}\,.$$ Combining the above equations, we immediately deduce the first desired results.
Now, using $N= b^{m_1+m_2}$ and the conditions $|m_1 q - m_2 p| < 1,$ we have $$\begin{gathered}
N_1^{-1/p} = N^{-1/(p+q)} b^{(m_1+m_2)/(p+q)-m_1/p}\\
= N^{-1/(p+q)} b^{(m_2p-m_1q)/(p(p+q))}\le N^{-1/(p+q)} b^{1/(p(p+q))}\le C N^{-1/(p+q)}\,. \end{gathered}$$ Similarly, $$N_2^{-1/q}\le N^{-1/(p+q)} b^{1/(q(p+q))}\le CN^{-1/(p+q)}\,,$$ and therefore, the proof of the second desired estimate is completed. $\quad \Box$
In order to reduce the computational cost (and thereby improving the convergence rate), we next discuss a combination of the QMC rules with a sparse grid approach. Let $\{N_i^{(j)}\}_{j\ge 1}$ be increasing sequences of positive values for $i=1,2.$ Then $$\mathcal{I}_{\bf s} {F} = \lim_{j,k \to \infty} Q_{\mathbf{s}, {\bf N}^{j,k}}[{F}],\quad {\rm with}~~~{\bf N}^{j,k}=(N_1^{(j)}, N_2^{(k)}).$$ We can write this as a telescoping sum $$\label{eq: infinite sums}
\mathcal{I}_{\bf s} {F} = \sum_{j, k = 1}^\infty a_{jk},~{\rm with}~a_{jk}= \Big(Q_{\mathbf{s},{\bf N}^{j,k}} - Q_{\mathbf{s}, {\bf N}^{j-1,k}} - Q_{\mathbf{s},{\bf N}^{j,k-1}} + Q_{\mathbf{s},{\bf N}^{j-1,k-1}}\Big)[{F}]$$ where $$\label{eq: cond}
Q_{\mathbf{s}, {\bf N}^{j,k}} = 0~~{\rm if}~~ (j,k) \in \{(\zeta,0), (0,\omega): \zeta, \omega \in \mathbb{N}\cup \{0\}\}\,.$$ To get a computable quantity, we need to truncate the infinite sums in [\[eq: infinite sums\]](#eq: infinite sums){reference-type="eqref" reference="eq: infinite sums"}. This can be done in different ways; we choose to truncate the tensor grid of QMC rules ${\bf N}^{j,k}$ along the diagonal $j+k\le L$ of the index set, where $L$ is the "level" of the sparse grid rule. Explicitly, we truncate as follows: $$\label{eq: truncate infinite sums}
{\mathcal I}_{{\bf s},L}[{F}] = \sum_{\substack{j, k = 1 \\ j+k \le L }} a_{jk}
=\sum_{k = 1}^{L-1} \left( Q_{\mathbf{s},{\bf N}^{L-k,k}}[{F}] - Q_{\mathbf{s},{\bf N}^{L-k,k-1}}[{F}] \right),\quad$$ where in the second equality we used $\sum_{\substack{j, k = 1 \\ j+k \le L }}=\sum_{k = 1}^{L-1}\sum_{j = 1}^{L-k}$ and the condition in [\[eq: cond\]](#eq: cond){reference-type="eqref" reference="eq: cond"}.[^3] We prove next that the quadrature error incurred on this QMC sparse grid is relatively small for a sufficiently large $L$.
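The combination formula [\[eq: truncate infinite sums\]](#eq: truncate infinite sums){reference-type="eqref" reference="eq: truncate infinite sums"} is straightforward to implement once QMC rules for each block of variables are available. The sketch below (ours) only illustrates the telescoping structure: it uses plain van der Corput points and a toy integrand with $s_1=s_2=1$ as a stand-in for the interlaced polynomial lattice rules and the PDE-based integrand of the analysis.

```python
def vdc(n, base=2):
    """Radical inverse (van der Corput point) of n in the given base."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def tensor_qmc(F, N1, N2):
    """Stand-in tensor QMC rule Q_{s,N}[F] with s1 = s2 = 1, using van der Corput points."""
    ys = [vdc(n) for n in range(N1)]
    zs = [vdc(n) for n in range(N2)]
    return sum(F(y, z) for y in ys for z in zs) / (N1 * N2)

def sparse_qmc(F, L, N1_of, N2_of, qmc=tensor_qmc):
    """Combination formula I_{s,L}[F] = sum_{k=1}^{L-1} ( Q_{N^{L-k,k}} - Q_{N^{L-k,k-1}} )[F]."""
    total = 0.0
    for k in range(1, L):
        j = L - k
        total += qmc(F, N1_of(j), N2_of(k))
        if k >= 2:                               # Q_{s,N^{j,0}} = 0 by the convention above
            total -= qmc(F, N1_of(j), N2_of(k - 1))
    return total

# toy integrand with exact integral 1/4, and N_i^{(j)} = 2^j as in Example 4 below
F = lambda y, z: y * z
for L in range(3, 9):
    approx = sparse_qmc(F, L, lambda j: 2 ** j, lambda k: 2 ** k)
    print(L, abs(approx - 0.25))                 # errors decrease roughly like L * 2^{-L}
```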
In the next theorem, for some $\vartheta > 0$ such that $p \vartheta, q \vartheta \ge 1$, we choose $N_1^{(j)}=b^{ \lceil j p \vartheta \rceil}$ and $N_2^{(j)}=b^{ \lceil j q \vartheta \rceil}$ for $j\ge 1$. The purpose of $\vartheta > 0$ is to avoid a situation where $N_i^{(j)} = N_i^{(j+1)}$ for some admissible $i, j$. Choosing $\vartheta$ such that $p \vartheta, q \vartheta \ge 1$ guarantees that this cannot happen. On the other hand, since the constant $C$ increases with $\vartheta$, we consider $\vartheta$ as a constant. In other words, in order to reduce the error in Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"} one increases $L$ and therefore $M$, but keeps $\vartheta$ fixed.
**Theorem 8**. *In addition to the assumptions of Theorem [Theorem 7](#prop:qmc){reference-type="ref" reference="prop:qmc"}, we assume that $$\label{eq:like-norm mixed}
|\partial^{\boldsymbol \beta,\boldsymbol \alpha}_{{\boldsymbol{z}},{\boldsymbol{y}}} {F}({\boldsymbol{y}},{\boldsymbol{z}})| \le c(|\boldsymbol \alpha|+|\boldsymbol \beta|)!
\boldsymbol \chi_{s_1}^{\boldsymbol \alpha}
\boldsymbol \varphi_{s_2}^{\boldsymbol \beta}\,.$$ Then we have $$|{\mathcal I}_{\bf s}[{F}]-{\mathcal I}_{{\bf s},L}[{F}]|
\le C L \Big(N_1^{(L)}\Big)^{-1/p} +CL \Big(N_2^{(L)}\Big)^{-1/q},$$ where the constant $C$ depends on $b,p,q, \vartheta$, but is independent of $s_1$, $s_2$ and $L$.*
*Let $M$ denote the total number of quadrature points used in the QMC sparse grid rule. Then $$|{\mathcal I}_{\bf s}[{F}]-{\mathcal I}_{{\bf s},L}[{F}]|
\le C (\log M)^{1 + 1_{p=q}/p} M^{-\min(1/p, 1/q)},$$ where $1_{p=q}$ is $1$ if $p=q$ and $0$ otherwise, and where the constant $C$ depends on $b, p, q, \vartheta$, but is independent of $s_1$, $s_2$ and $M$.*
*Proof.* From [\[eq: infinite sums\]](#eq: infinite sums){reference-type="eqref" reference="eq: infinite sums"} and [\[eq: truncate infinite sums\]](#eq: truncate infinite sums){reference-type="eqref" reference="eq: truncate infinite sums"}, $$\label{TE}
|{\mathcal I}_{\bf s}[{F}]-{\mathcal I}_{{\bf s},L}[{F}]|\le
\sum_{\substack{j, k = 1 \\ j+k \ge L+1 }}^\infty |a_{jk}|\le \sum_{k= L}^\infty\sum_{j=1}^\infty |a_{jk}|+\sum_{k= 1}^{L-1}\,\,\sum_{j=L-k+1}^\infty |a_{jk}|\,.$$ To estimate the first term on the right hand side of the above equation, and for brevity, we let ${\bf g}_j({\boldsymbol{z}})= Q_{s_1, N_1^{(j)}}[{F}(\cdot,{\boldsymbol{z}})]- Q_{s_1, N_1^{(j-1)}}[{F}(\cdot,{\boldsymbol{z}})]$. Then, $$|a_{jk}|=\left| Q_{s_2, N_2^{(k)}}[\mathbf{g}_j]- Q_{s_2, N_2^{(k-1)}}[\mathbf{g}_j] \right|\,.$$ Adding and subtracting $\int_{U_2}{\bf g}_j({\boldsymbol{z}})\,d{\boldsymbol{z}}$ and using [@DickKuoGiaNuynsSchwab2014 Theorem 3.1], we obtain $$|a_{jk}|
\le \sum_{\ell=k-1}^k\Big|Q_{s_2, N_2^{(\ell)}}[\mathbf{g}_j]-\int_{U_2}{\bf g}_j({\boldsymbol{z}})\,d{\boldsymbol{z}}\Big|
\le C \sup_{{\boldsymbol{z}}\in U_2}|\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{\bf g}_j({\boldsymbol{z}})|\,\Big(N_2^{(k-1)}\Big)^{-1/q}\,.$$ Another application of [@DickKuoGiaNuynsSchwab2014 Theorem 3.1] but on the region $U_1$, where the regularity assumption in [\[eq:like-norm mixed\]](#eq:like-norm mixed){reference-type="eqref" reference="eq:like-norm mixed"} is needed here, yields $$\begin{gathered}
|\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{\bf g}_j({\boldsymbol{z}})|= \Big|Q_{s_1, N_1^{(j)}}[\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{F}(\cdot,{\boldsymbol{z}})]- Q_{s_1, N_1^{(j-1)}}[\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{F}(\cdot,{\boldsymbol{z}})]\Big|\\
\le \sum_{\ell=j-1}^j\Big|Q_{s_1, N_1^{(\ell)}}[\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{F}(\cdot,{\boldsymbol{z}})]-\mathcal I_{U_1}[\partial^{\boldsymbol \beta}_{\boldsymbol{z}}{F}(\cdot,{\boldsymbol{z}})]\Big|
\le C\,\Big(N_1^{(j-1)}\Big)^{-1/p}\,.\end{gathered}$$ Inserting this in the previous equation leads to $|a_{jk}|\le C\, \Big(N_2^{(k-1)}\Big)^{-1/q} \Big(N_1^{(j-1)}\Big)^{-1/p},$ and consequently, $$\begin{gathered}
|{\mathcal I}_{\bf s}[{F}]-{\mathcal I}_{{\bf s},L}[{F}]|
\le C\sum_{k=L}^\infty \Big(N_2^{(k-1)}\Big)^{-1/q}\sum_{j=1}^\infty \Big(N_1^{(j-1)}\Big)^{-1/p}\\+C\, \sum_{k=1}^{L-1} \Big(N_2^{(k-1)}\Big)^{-1/q}\sum_{j=L-k+1}^\infty \Big(N_1^{(j-1)}\Big)^{-1/p}\,.\end{gathered}$$ Using $N_1^{(j)}=b^{ \lceil j p \vartheta \rceil}$ and $N_2^{(j)}=b^{ \lceil jq \vartheta \rceil}$, we notice that $$\begin{aligned}
|{\mathcal I}_{\bf s}[{F}]-{\mathcal I}_{{\bf s},L}[{F}]|
\le & C\,\sum_{k=L-1}^\infty b^{-k \vartheta}
\sum_{j=0}^\infty b^{-j \vartheta}
+C\,\sum_{k=0}^{L-2} b^{-k \vartheta} \sum_{j=L-k}^\infty b^{-j \vartheta} \\
\le & C b^{-L\vartheta} \left( \frac{b^\vartheta}{(1-b^{-\vartheta})^2} + \frac{L}{1-b^{-\vartheta}} \right).\end{aligned}$$ Hence the desired QMC sparse grid estimate is obtained.
The total number of quadrature points used in the combined QMC sparse grid approach is $$M = \sum_{k=1}^{L-1} b^{\lceil (L-k) p \vartheta \rceil} b^{ \lceil k q \vartheta \rceil} \le b^{2+Lp\vartheta} \sum_{k=1}^{L-1} b^{k(q-p) \vartheta }.$$ For $q = p$ we have $M\le b^{2+Lp\vartheta }(L-1),$ and for $q \ne p$ we have $$\begin{aligned}
M \le b^{2+Lp \vartheta} \frac{b^{(q-p) \vartheta }-b^{L(q-p)\vartheta}}{1-b^{ (q-p) \vartheta }} \le
\frac{ b^{2+L\max(p,q) \vartheta -|p-q| \vartheta }}{1-b^{-|p-q| \vartheta }} \le C\,b^{L\max(p,q) \vartheta}\,.\end{aligned}$$ Since the error is of order $L b^{-L \vartheta }$ we have $$|{\mathcal I}_{\bf s}[{F}]-{\mathcal I}_{{\bf s},L}[{F}]|
\le C L b^{-L \vartheta } \leq C L^{1 + 1_{p=q}/p} M^{-\min(1/p, 1/q)},$$ where $1_{p=q}$ is $1$ if $p=q$ and $0$ otherwise. Since $M \ge b^{(L-1) p \vartheta }$ we have $L \le C \log M$, hence the result follows. $\quad \Box$
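As a small cross-check (ours) of the point count appearing in the proof, for the concrete parameters used in Example 4 below ($b=2$, $p=q=1/2$, $\vartheta=2$) the sequences reduce to $N_i^{(j)}=2^j$, and the total number of quadrature points at level $L$ is $M=(L-1)2^L$, the value reported in Table 4:

```python
# Cross-check of M = sum_{k=1}^{L-1} N_1^{(L-k)} N_2^{(k)} for b = 2, p = q = 1/2, vartheta = 2,
# where N_i^{(j)} = b^{ceil(j * p * vartheta)} = 2^j, so that M = (L-1) * 2^L.
from math import ceil

b, p, q, vartheta = 2, 0.5, 0.5, 2.0
N1 = lambda j: b ** ceil(j * p * vartheta)
N2 = lambda j: b ** ceil(j * q * vartheta)

for L in (9, 12, 15):
    M = sum(N1(L - k) * N2(k) for k in range(1, L))
    print(L, M, (L - 1) * 2 ** L, M == (L - 1) * 2 ** L)
```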
# Numerical experiments {#Sec: Numeric}
In this section, we illustrate numerically the theoretical finding in Theorem [Theorem 1](#main results){reference-type="ref" reference="main results"}. In all experiments, the physical domain $\Omega$ is chosen to be the unit square $[0,1]^2$, and $\mathcal{T}_h$ is a family of uniform triangular meshes with diameter $\sqrt{2} h$ obtained from uniform $J$-by-$J$ square meshes by cutting each mesh square into two congruent triangles with $h=1/J.$ In all numerical experiments we set the base of the polynomial lattice rules $b = 2$.
![A comparison between the body force ${\bf f}=(x_1-1/2,x_2(x_2-1/2))$ (left) and the displacement ${\bf u}_h$ (right) with $J=15.$](Example1Case2BodyForce.eps "fig:"){#Fig 1 width="6.5cm" height="7cm"}![A comparison between the body force ${\bf f}=(x_1-1/2,x_2(x_2-1/2))$ (left) and the displacement ${\bf u}_h$ (right) with $J=15.$](Example1Case2Disp.eps "fig:"){#Fig 1 width="6.5cm" height="7cm"}
**Example 1:** In this example, we corroborate the finite element errors and convergence rates when $r=1$ (piecewise linear Galerkin FEM) and the Lamé parameters $\mu$ and $\lambda$ are variable but deterministic. We choose $\mu(x_1,x_2)=x_1+x_2+1$ and $\lambda(x_1,x_2)=\sin(2\pi x_1)+2.$ In Figure [2](#Fig 1){reference-type="ref" reference="Fig 1"}, we compare graphically the body force load ${\bf f}$ and the approximate displacement ${\bf u}_h$ for the case of homogeneous Dirichlet boundary conditions. To illustrate the FEM convergence order in Theorem [Theorem 6](#Convergence theorem){reference-type="ref" reference="Convergence theorem"} (or Theorem [Theorem 1](#main results){reference-type="ref" reference="main results"}), we choose the body force ${\bf f}$ so that the exact solution is $${\bf u}(x_1,x_2)=\left[\begin{matrix}
u_1(x_1,x_2)\\
u_2(x_1,x_2)\end{matrix}\right]=\left[\begin{matrix}
2(\cos(2\pi x_1)-1)\sin(2\pi x_2)\\
(1-\cos(2\pi x_2)) \sin(2\pi x_1)\end{matrix}\right]\,.$$ Motivated by the equality $$\label{Ritz}
\|{\bf v}\|= \sup_{{\bf w}\in {\bf L}^2(\Omega),{\bf w}\ne {\bf 0}} \frac{|\langle{\bf v},{\bf w} \rangle|}{\|{\bf w}\|},\quad{\rm for ~any}~~{\bf v}\in {\bf L}^2(\Omega),$$ we define, for some fixed (but arbitrary) ${\bf w}\in {\bf L}^2(\Omega)$, the functional $\mathcal{L}$ by: $\mathcal{L}({\bf v})= \mathcal{L}_{\bf w}({\bf v}):=\langle{\bf w},{\bf v} \rangle$. Then, by using the convergence estimate [\[convergence calL\]](#convergence calL){reference-type="eqref" reference="convergence calL"} in Theorem [Theorem 6](#Convergence theorem){reference-type="ref" reference="Convergence theorem"}, we have $|\langle{\bf u}-{\bf u}_h,{\bf w} \rangle|\le C\, h^2 \|{\bf f}\| \|{\bf w}\|$ for ${\bf w}\in {\bf L}^2(\Omega)\,.$ Consequently, the equality in [\[Ritz\]](#Ritz){reference-type="eqref" reference="Ritz"} leads to the following optimal ${\bf L}^2(\Omega)$ estimate: $E_h:=\|{\bf u}-{\bf u}_h\|\le C\,h^2 \|{\bf f}\|\,.$ To demonstrate this numerically, we compute $E_{h}$ by approximating the ${\bf L}^2$-norm ($\|\cdot\|$) using the centroids of the elements in the mesh $\mathcal{T}_h$. The empirical convergence rate (CR) is calculated by halving $h$, and thus, $\text{CR}=\log_2(E_h/E_{h/2}).$
If we choose ${\bf w}={\bf 1}$ (the unitary constant vector function), then with ${\bf v}=[v_1~~ v_2]^T,$ $$\mathcal{L}({\bf v})=\mathcal{L}_{\bf 1}({\bf v})=\int_{\Omega} {\bf v}({\boldsymbol{x}})\cdot {\bf 1} \,d{\boldsymbol{x}}=\int_{\Omega} [v_1({\boldsymbol{x}})+v_2({\boldsymbol{x}})]\,d{\boldsymbol{x}}\,,$$ which is the mean of ${\bf v}$ over $\Omega=[0,1]^2.$ Since $\|\mathcal{L}_{\bf 1}\|\le 1,$ by Theorem [Theorem 6](#Convergence theorem){reference-type="ref" reference="Convergence theorem"}, $$|\mathcal{L}({\bf u}-{\bf u}_h)|=|\mathcal{L}_{\bf 1}({\bf u}-{\bf u}_h)|
= \Big|\sum_{i=1}^2\int_\Omega (u_i-u_{i_h})\,d{\boldsymbol{x}}\Big|\le C\, h^2 \|{\bf f}\|\,.$$
Again, we use the centroids of the elements in the mesh $\mathcal{T}_h$ to approximate the above integral. The reported numerical (empirical) convergence rates in Table [1](#Tab 1){reference-type="ref" reference="Tab 1"} illustrate the expected second order of accuracy. For a graphical illustration of the efficiency of the approximate solution over the global domain $\Omega$, we highlight the pointwise nodal displacement errors in Figure [4](#Fig 2){reference-type="ref" reference="Fig 2"} for $J=60.$
$J$ $\|{\bf u}-{\bf u}_h\|$ CR $|\mathcal{L}_{\bf 1}({\bf u}-{\bf u}_h)|$ CR
----- -------------------------------- -------- --------------------------------------------------- -------- -- --
8 3.8533e-01 1.1697e-02
16 1.1163e-01 1.7873 3.7017e-03 1.6599
32 2.9204e-02 1.9345 9.8934e-04 1.9037
64 7.3903e-03 1.9825 2.5179e-04 1.9743
128 1.8533e-03 1.9955 6.3238e-05 1.9933
: Example 1, Errors and empirical convergence rates for different values of $J.$
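The empirical rates in Table 1 can be recomputed directly from the tabulated errors via $\text{CR}=\log_2(E_h/E_{h/2})$; the short script below (ours) reproduces both CR columns up to the rounding of the displayed errors.

```python
# Recompute the empirical convergence rates of Table 1 from its error columns.
import numpy as np

err_L2 = np.array([3.8533e-01, 1.1163e-01, 2.9204e-02, 7.3903e-03, 1.8533e-03])  # ||u - u_h||
err_L1 = np.array([1.1697e-02, 3.7017e-03, 9.8934e-04, 2.5179e-04, 6.3238e-05])  # |L_1(u - u_h)|

print(np.log2(err_L2[:-1] / err_L2[1:]))   # approx [1.788, 1.934, 1.982, 1.996]
print(np.log2(err_L1[:-1] / err_L1[1:]))   # approx [1.660, 1.904, 1.974, 1.993]
```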
![Pointwise nodal errors in the displacement, $|u_1-u_{1_h}|$ on right and $|u_2-u_{2_h}|$ on left. ](Example1NodalErroru2.eps "fig:"){#Fig 2 width="6.5cm" height="6.5cm"}![Pointwise nodal errors in the displacement, $|u_1-u_{1_h}|$ on right and $|u_2-u_{2_h}|$ on left. ](Example1NodalErroru1.eps "fig:"){#Fig 2 width="6.5cm" height="6.5cm"}
**Example 2:** This example is devoted to confirm the QMC theoretical convergence results when $\mu$ is random and $\lambda$ is constant. More precisely, $\lambda=1$ and $$\mu({\boldsymbol{x}},{\boldsymbol{y}})= \frac{1}{10}\Big(1 + \sum_{j=1}^\infty
y_j \psi_j\Big),
\quad {\rm with}~~ \psi_j=\frac{1}{j^2} \sin (j \pi x_1) \sin((2j-1)\pi x_2),$$ and for $y_j\in [-1/2,1/2].$ Since $\|\psi_j\|\le {1}/{(2j^2)},$ $\sum_{j=1}^\infty \|\psi_j\|^p$ is convergent for $p>1/2$ and $\sum_{j=s_1+1}^\infty \|\psi_j\|\le C s_1^{-1}\,.$ Thus, [\[ass A1\]](#ass A1){reference-type="eqref" reference="ass A1"} and [\[ass A3\]](#ass A3){reference-type="eqref" reference="ass A3"} are satisfied when $p>1/2$ and $\varrho_1=1/2,$ respectively. We discretize on the physical domain using the quadratic FEM. Therefore, according to Theorem [Theorem 1](#main results){reference-type="ref" reference="main results"}, we expect the truncated QMC Galerkin finite element error to be of order $\mathcal{O}(s_1^{-1}+\log(s_1)(N_1^{-2}+h^3))$. The appearance of the logarithmic factor $\log(s_1)$ in front of $N_1^{-2}$ and $h^3$ is due to the facts that $\sum_{j=1}^{s_1} \|\psi_j\|^{1/2}\le C \log(s_1)$ and that $\sum_{j=1}^{s_1} \|\nabla \psi_j\|\le C \log(s_1)$, respectively. For measuring the error, and since the exact solution is unknown, we rely on the reference solution $\Xi_{{\bf u}_h^*}$ which is computed using $s_1=256$, $J=128,$ and $1024$ QMC points. Hence, by ignoring the logarithmic factor $\log(s_1)$, and in the absence of the truncated series error, we anticipate $\mathcal{O}(N_1^{-2})$-rates of convergence for $N_1\le J^{\frac{3}{2}}$, with $N_1 \ll 1024$. This is illustrated numerically in Table [2](#tab:Example2){reference-type="ref" reference="tab:Example2"} and graphically in Figure [5](#fig:errN example2){reference-type="ref" reference="fig:errN example2"} for different values of $N_1$, and for fixed $s_1=256$ and $J=128$, with $\mathcal{L}=\mathcal{L}_{\bf 1}$ and ${\bf f}=(2x_1+10,x_2-3).$ Note that the middle column of Table [2](#tab:Example2){reference-type="ref" reference="tab:Example2"} displays $|\Xi_{{\bf u}^*_h}-\Xi_{{{\bf u}_h}, Q}|$ where $Q$ is a quadrature for $\mathcal{L}= \mathcal{L}_1$ with $N_2=1$ and $N_1$ varying, see [\[eq:QMCInt F\]](#eq:QMCInt F){reference-type="eqref" reference="eq:QMCInt F"}.
$N_1$ $|\Xi_{{\bf u}^*_h}-\Xi_{{{\bf u}_h}, Q}|$ CR
------- ---------------------------------------------- --------
8 3.1045e-01
16 6.1906e-02 2.3262
32 1.5191e-02 2.0269
64 4.3387e-03 1.8079
128 9.3546e-04 2.2135
256 2.5232e-04 1.8904
: Example 2, errors and convergence rates for different values of $N_1$.
![Numerical errors (err$_{N_1}$) vs. $N_1^{-2}$ for Example 2](example2_errN.eps){#fig:errN example2 width="80%"}
**Example 3:** In this example, we focus on the randomness in $\lambda$ while $\mu=1$. We choose $$\lambda({\boldsymbol{x}},{\boldsymbol{z}})= 1 + \sum_{j=1}^\infty
\frac{z_j}{j^2} \sin (j \pi x_1) \sin((2j-1)\pi x_2),
\quad z_j \in [-1/2,1/2].$$ By arguing as in the preceding example, based on Theorem [Theorem 1](#main results){reference-type="ref" reference="main results"}, we fix $s_2 =256$, $J=128$ and $r=2$; then the QMC Galerkin finite element error is expected to be of order $\mathcal{O}(N_2^{-2})$ whenever $N_2\le J$, where the logarithmic factor $\log(s_2)$ is ignored. We rely again on the reference solution $\Xi_{{\bf u}_h^*}$, which is computed as in the previous example, in measuring the errors, and consequently, the convergence rates. As expected, an $\mathcal{O}(N_2^{-2})$ convergence rate is illustrated tabularly and graphically for different values of $N_2$ in Table [3](#tab:Example3){reference-type="ref" reference="tab:Example3"} and Figure [6](#fig:errN example3){reference-type="ref" reference="fig:errN example3"}, respectively, for fixed $s_2=J=256,$ with $\mathcal{L}=\mathcal{L}_{\bf 1}$ and ${\bf f}=(2x_1+10,x_2-3).$ Note that the middle column of Table [3](#tab:Example3){reference-type="ref" reference="tab:Example3"} displays $|\Xi_{{\bf u}^*_h}-\Xi_{{{\bf u}_h}, Q}|$ where $Q$ is a quadrature for $\mathcal{L}=\mathcal{L}_{\bf 1}$ with $N_1=1$ and $N_2$ varying, see [\[eq:QMCInt F\]](#eq:QMCInt F){reference-type="eqref" reference="eq:QMCInt F"}.
$N_2$ $|\Xi_{{\bf u}^*_h}-\Xi_{{{\bf u}_h}, Q}|$ CR
------- ---------------------------------------------- --------
8 7.5520e-04
16 2.0011e-04 1.9161
32 4.5223e-05 2.1457
64 1.1630e-05 1.9592
128 2.7057e-06 2.1038
256 6.5202e-07 2.0530
: Example 3, errors and convergence rates for different values of $N_2$.
![Numerical errors (err$_{N_2}$) vs. $N_2^{-2}$ for Example 3](example3_errN.eps){#fig:errN example3 width="80%"}
**Example 4:** The aim behind this example is to illustrate numerically the achieved QMC sparse grid convergence results in Theorems [Theorem 1](#main results){reference-type="ref" reference="main results"} (second part) and [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"}. As before, we set the body force ${\bf f}=(2x_1+10,x_2-3)$ but now both coefficients $\lambda$ and $\mu$ are random. We choose $$\begin{aligned}
\mu({\boldsymbol{x}},{\boldsymbol{y}})&= 1 + \sum_{j=1}^\infty
\frac{y_j}{j^2} \sin (j \pi x_1) \sin((2j-1)\pi x_2),
\quad y_j \in [-1/2,1/2],\\
\lambda({\boldsymbol{x}},{\boldsymbol{z}})&= 1 + \sum_{j=1}^\infty
\frac{z_j}{j^2} \sin (j \pi x_1) \sin((2j-1)\pi x_2),
\quad z_j \in [-1/2,1/2],\end{aligned}$$ and so, $p=q=1/2$ (note that strictly speaking we have $p=q=1/2+\varepsilon$ for an arbitrary $\varepsilon > 0$; in order to simplify the computation we ignore this technicality in the following). We fix the truncation degree $s_1 = s_2 = 256$, the spatial mesh element size $J=128,$ and the degree of the Galerkin FEM $r=2.$ The reference solution $\Xi_{{\bf u}_h^*}$ is generated using a full grid of $2048 \times 2048$ (that is, $b=2$ and $m_1=m_2=11$) high-order QMC points (generated by a Python package in [@Gantner2014]). The combined QMC sparse grid algorithm [\[eq: truncate infinite sums\]](#eq: truncate infinite sums){reference-type="eqref" reference="eq: truncate infinite sums"} (with ${\bf N}^{L-k,k}=(2^{L-k},2^k)$, that is, $\vartheta = 2$) is implemented to compute $\Xi_{{\bf u}_h,Q_L}$. The errors between approximation $\Xi_{{\bf u}_h,Q_L}$ and the reference solution for different values of $L$ are given in the second column of Table [4](#tab:sparse grid results){reference-type="ref" reference="tab:sparse grid results"}. The fourth column of the table gives the QMC sparse grid upper error bounds $4(\log(M))^3 M^{-2}$ predicted by Theorem [Theorem 8](#qmc sparse grid){reference-type="ref" reference="qmc sparse grid"} (where the constant $C$ in the error bound is ignored). Note that the PDE solvers can be run in parallel for distinct QMC points. To speed up the computation, finite element PDE solvers based on examples in the FEniCS package [@LangtangenLogg2017] are used on the high-performance computing platform Katana [@Katana] provided by UNSW, Sydney. The Python code used in the numerical experiments together with the PBS scripts is available at <https://github.com/qlegia/Elasticity-HigherOrder-QMC>.
$L$ $M=(L-1)2^L$ $|\Xi_{{\bf u}_h,Q_L} - \Xi_{{\bf u}_h^*}|$ $4(\log(M))^3 M^{-2}$
----- -------------- ------------------------------------------------ -----------------------
9 4096 3.5687e-06 1.3720e-04
10 9216 3.4606e-06 3.5826e-05
11 20480 7.4782e-06 9.3300e-06
12 45056 1.1456e-06 2.4244e-06
13 98304 6.8586e-08 6.2884e-07
14 212992 1.4453e-07 1.6284e-07
15 458752 5.4749e-08 4.2108e-08
: Example 4, numerical and theoretical error results for QMC sparse grid algorithm.
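For concreteness, a single parametric solve and the evaluation of the quantity of interest $\mathcal{L}_{\bf 1}({\bf u}_h)$, as used inside the QMC loop above, might be sketched as follows. This is our own minimal illustration assuming the legacy FEniCS (DOLFIN) Python interface is available, and it keeps only two terms of each random expansion; the authors' actual scripts are available at the repository linked above.

```python
# Minimal sketch (ours) of one parametric elasticity solve of Example 4 with the legacy FEniCS interface.
from math import pi
from fenics import (UnitSquareMesh, VectorFunctionSpace, TrialFunction, TestFunction,
                    Function, Constant, DirichletBC, SpatialCoordinate, Identity,
                    as_vector, sin, sym, grad, div, inner, dx, solve, assemble)

def qoi_one_sample(y, z, J=32, degree=2):
    """One parametric elasticity solve; returns the quantity of interest L_1(u_h)."""
    mesh = UnitSquareMesh(J, J)
    V = VectorFunctionSpace(mesh, "P", degree)
    bc = DirichletBC(V, Constant((0.0, 0.0)), "on_boundary")
    xc = SpatialCoordinate(mesh)

    # two-term truncations of the random Lame parameters, psi_j = j^{-2} sin(j pi x_1) sin((2j-1) pi x_2)
    def field(coeffs):
        return 1.0 + sum(c / (j + 1) ** 2
                         * sin((j + 1) * pi * xc[0]) * sin((2 * (j + 1) - 1) * pi * xc[1])
                         for j, c in enumerate(coeffs))
    mu, lam = field(y), field(z)

    f = as_vector((2 * xc[0] + 10.0, xc[1] - 3.0))                 # body force of Example 4
    u, v = TrialFunction(V), TestFunction(V)
    sigma = lambda w: lam * div(w) * Identity(2) + 2.0 * mu * sym(grad(w))
    a = inner(sigma(u), sym(grad(v))) * dx
    rhs = inner(f, v) * dx

    uh = Function(V)
    solve(a == rhs, uh, bc)
    return assemble((uh[0] + uh[1]) * dx)                          # L_1(u_h) = int (u_1 + u_2) dx

print(qoi_one_sample(y=(0.25, -0.1), z=(-0.3, 0.4)))
```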
D. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Elasticity Theory, Cambridge University Press, New York, 2007.
S. C. Brenner and Li-Y. Sung, Linear finite element methods for planar linear elasticity, Math. Comp., **59**, 321--338 (1992).
S. C. Brenner and L. R. Scott, The Mathematical Theory of Finite Element Methods, Third Edition, Springer, 2008.
F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods. Springer-Verlag, New York, 1991.
P. G. Ciarlet, Mathematical Elasticity, Volume I: Three-Dimensional Elasticity, North-Holland, Amsterdam, 1988.
A. Cohen, R. De Vore and Ch. Schwab, Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs, Found. Comput. Math., **10**, 615--646 (2010).
J. Dick, Walsh spaces containing smooth functions and Quasi-Monte Carlo rules of arbitrary high order, SIAM J. Numer. Anal., **46**, 1519--1553 (2008).
J. Dick, F.Y. Kuo, Q.T. Le Gia, D. Nuyens and C. Schwab, Higher order QMC Petrov-Galerkin discretization for affine parametric operator equations with random field inputs, SIAM J. Numer. Anal., **52**, 2676--2702 (2014).
M. Eigel, C. J. Gittelson, Ch. Schwab and E. Zander, Adaptive stochastic Galerkin FEM, Comput. Methods Appl. Mech. Eng., **270**, 247--269 (2014).
R. S. Falk, Nonconforming finite element methods for the equations of linear elasticity, Math. Comp., **57**, 529--550 (1991).
R. N. Gantner and Ch. Schwab, Computational Higher-Order Quasi-Monte Carlo Integration, Tech. Report 2014-25, Seminar for Applied Mathematics, ETH Zürich, 2014.
T. Goda, Good interlaced polynomial lattice rules for numerical integration in weighted Walsh spaces, J. Comput. Appl. Math., **285**, 279--294 (2015).
T. Goda and J. Dick, Construction of interlaced scrambled polynomial lattice rules of arbitrary high order, Foundations of Computational Mathematics, **15**, 1245--1278 (2015).
P. Grisvard, Elliptic Problems in Nonsmooth Domains. Monographs and Studies in Mathematics, 24. Pitman (Advanced Publishing Program), Boston, MA, 1985.
V. H. Hoang, T. C. Nguyen and B. Xia, Polynomial approximations of a class of stochastic multiscale elasticity problems, Z. Angew. Math. Phys., **67**, 67--78 (2016).
D. Smith and L. Betbeder-Matibet, *Katana*, 2010, <https://doi.org/10.26190/669x-a286>.
A. Khan, C. E. Powell and D. J. Silvester, Robust preconditioning for stochastic Galerkin formulations of parameter-dependent nearly incompressible elasticity equations, SIAM J. Sci. Comput. **41**, A402--A421 (2019).
A. Khan, A. Bespalov, C. E. Powell and D. J. Silvester, Robust a posteriori error estimation for parameter-dependent linear elasticity equations, Math. Comp., **90**, 613--636 (2021).
H. P. Langtangen and A. Logg, Solving PDEs in Python, Springer, 2017.
H. G. Matthies, C. Brenner, C. Bucher, and C. G. Soares, Uncertainties in probabilistic numerical analysis of structures and solid--stochastic finite elements, Struct. Safety, **19**, 283--336 (1997).
W. McLean, Strongly Elliptic Systems and Boundary Integral Equations, Cambridge University Press, 2000.
L. R. Scott and M. Vogelius, Norm estimates for a maximal right inverse of the divergence operator in spaces of piecewise polynomials, Math. Mod. Numer. Anal., **19**, 111--143 (1985).
M. Vogelius, An analysis of the p-version of the finite element method for nearly incompressible materials. Uniformly valid, optimal error estimates, Numer. Math., **41**, 39--53 (1983).
B. Xia and V. H. Hoang, Best N-term GPC approximations for a class of stochastic linear elasticity equations, Math. Models and Methods in Applied Sciences, **24**, 513--552 (2014).
[^1]: School of Mathematics and Statistics, University of New South Wales, Sydney, Australia
[^2]: This work was supported by the Australian Research Council grant DP220101811.
[^3]: In principle one could use a QMC rule $\frac{1}{N} \sum_{n=0}^{N-1} F(\boldsymbol{x}_n)$ in dimension $s_1+s_2$ directly, i.e. ${\boldsymbol{x}}_n \in [0,1]^{s_1+s_2}$, without combining QMC with sparse grids. In this approach we would have to combine the different weights arising from simulating $\mu$ and $\lambda$ and we would have to use the same number of points for each part. The sparse grid approach gives us more flexibility in that sense, and since the weights for both parts are of a similar form as in other problems [@DickKuoGiaNuynsSchwab2014], we can reuse existing constructions of higher order polynomial lattice rules.
---
abstract: |
In this paper we survey Eckardt points on a smooth complex cubic threefold with an approach aimed at computing all Eckardt points of a cubic threefold. In addition, we construct cubic threefolds with no Eckardt points but containing triple lines.
address:
- Gloire Grâce Bockondas, Département de Mathématiques, Université Marien Ngouabi, Brazzaville, Congo
- Basile Guy Richard Bossoto, Département de Mathématiques, Université Marien Ngouabi, Brazzaville, Congo
author:
- Gloire Grâce Bockondas
- Basile Guy Richard Bossoto
bibliography:
- biblio_Eckardt.bib
title: Eckardt points on a cubic threefold
---
# Introduction
Eckardt points originate from a paper of F.E. Eckardt [@eckardt1876ueber]. They have been thoroughly studied in the case of cubic surfaces in $\mathbb{P}^{3}$, defined as points corresponding to the intersection of three of the 27 lines [@segre1943non]. They have then been generalized to higher-dimensional and higher-degree hypersurfaces [@cools2010star]. They are also called star points or inflection points [@tjurin1971geometry]. On a smooth complex cubic threefold $X\subset\mathbb{P}^{4}$, an Eckardt point $p\in X$ is a point for which the intersection $X\cap T_{p}X$ of the projective tangent space of $X$ at $p$ with $X$ has multiplicity three at $p$. This is equivalent to saying that the intersection $X\cap T_{p}X \subset T_{p}X$ is a cone with vertex $p$ over an elliptic curve $E_{p}$ [@cools2010star]. Each Eckardt point $p\in X$ parametrizes thus an elliptic curve $E_{p}$ on the Fano surface of lines ${\rm{F}}(X)$ of $X$, which is the base of the cone $X\cap T_{p}X$ (see [@tjurin1971geometry]). A cubic threefold can contain at most finitely many Eckardt points, and in fact at most 30, which is achieved by the Fermat cubic whereas the general one has none [@clemens1972intermediate; @canonero1997inflection]. There are then at most 30 elliptic curves on the Fano surface ${\rm{F}}(X)$ while for the general cubic threefold there are none [@roulleau2009elliptic], the Fano surface of the Fermat cubic threefold being the only one that contains exactly 30 elliptic curves. This is the most common characterization of Eckardt points on a cubic threefold in the literature.
Furthermore, Eckardt points on a smooth cubic hypersurface $Y\subset\mathbb{P}^{n}$ can be studied through polar quadrics. They have been intensively studied in this way in [@canonero1997inflection] where the authors found the maximal number of Eckardt points of a cubic hypersurface in $\mathbb{P}^{n}$. In [@cools2010star], a connection between Eckardt points of a hypersurface of degree $d$ in $\mathbb{P}^{n}$ and polar hypersurfaces is used to determine all Eckardt points on the Fermat hypersurface of degree $d$ in $\mathbb{P}^{n}$.
Nevertheless, both equivalent characterizations and a method for finding all Eckardt points, certainly well-known to the experts, are difficult to find in the literature. This paper aims to fill this gap by studying Eckardt points on a cubic threefold using these two characterizations with an approach focusing on finding all Eckardt points of a cubic threefold. Moreover, we construct cubic threefolds with no Eckardt points but containing triple lines, which is as far as we know new. We also study through many examples the configuration of elliptic curves, triple lines, and the residual component of the union of elliptic curves in the curve ${\rm{M}}(X)$ of lines of the second type of $X$ called the main component. These computations show how elliptic curves, triple lines and the main component can be related in a cubic threefold.
**Acknowledgements**. We wish to warmly thank Samuel Boissière for many useful discussions. We would also like to thank Søren Gammelgaard and Yilong Zhang for interesting discussions. The first author has been supported by the Program EMS SIMONS for Africa and the "Laboratoire de Mathématiques et Applications de l'Université de Poitiers UMR CNRS 7348".
# Notations and Preliminaries
For $X\subset\mathbb{P}^{4}$ a smooth complex cubic threefold, the Fano surface ${\rm{F}}(X)$ is a smooth surface of general type that parametrizes the lines on $X$ (see [@clemens1972intermediate]). Lines on $X$ are either of the first type or of the second type [@clemens1972intermediate] depending on the decomposition of their normal bundles. A line $\ell\subset X$ is said to be of the second type if and only if there exists a unique 2-plane $P\supset\ell$ tangent to $X$ at all points of $\ell$. We write $P\cap X=2\ell\cup\ell^{'}$, where $\ell^{'}$ is the residual line of $\ell$. Otherwise we say that $\ell$ is a line of the first type. For $\ell\neq\ell^{'}$ the line $\ell$ is called a double line, and if $\ell=\ell^{'}$ we say that $\ell$ is a triple line. The locus ${\rm{M}}(X)$ of lines of the second type on $X$ is a curve whose singularities are exactly the points corresponding to triple lines on $X$ (see [@bockondas2023triple]). However, this curve is smooth for a generic cubic threefold $X\subset\mathbb{P}^{4}$ [@huybrechts2020geometry].
Denote by $p_{i,j}$, $0\leq i<j\leq 4$, the Plücker coordinates of the Grassmannian of lines $\mathbb{G}(1,4)\subset\mathbb{P}^{9}$. On the affine chart $p_{0,1}=1$ of $\mathbb{G}(1,4)$ with local coordinates $(p_{0,2},p_{0,3},p_{0,4},p_{1,2},p_{1,3},p_{1,4})$ we have the decomposition $f(p)=\displaystyle\sum_{i+j = 3}t_{0}^{i}t_{1}^{j}\phi^{i,j}(\ell)$ for any point $p\in\ell\subset X$ with coordinates $t_{0}v_{0}+t_{1}v_{1}$, where $\phi^{i,j}(\ell)$ are functions of the local Plücker coordinates of $\ell$ and $f=0$ is the equation of $X$. The Fano surface ${\rm{F}}(X)$ is then defined by the vanishing locus of the terms $\phi^{i,j}(\ell)$. On the other hand, any 2-plane $P$ that contains $\ell$ meets the plane $\pi=\lbrace x_{0}=0, x_{1}=0\rbrace$ at a unique point $v_{2}=(0:0:\alpha_{2}:\alpha_{3}:\alpha_{4})$ such that $\ell$ and $v_{2}$ span $P$. The plane cubic $P\cap X$ is then defined by $f(t_{0}v_{0} + t_{1}v_{1} + t_{2}v_{2})=0$ where $(t_{0}: t_{1}: t_{2})$ are the projective coordinates on $P$ relative to the basis $(v_{0}, v_{1}, v_{2})$. Expanding in $t_{2}$ we write:
$$0=f(t_{0}v_{0}+t_{1}v_{1}) + t_{2} \sum_{i=2}^{4}\dfrac{\partial f}{\partial x_{i}}(t_{0}v_{0}+t_{1}v_{1})\alpha_{i}+\dfrac{1}{2}t_{2}^{2}\sum_{2\leq i,j\leq 4}\dfrac{\partial^{2} f}{\partial x_{j}\partial x_{i} }(t_{0}v_{0}+t_{1}v_{1})\alpha_{i}\alpha_{j}+ t_{2}^{3}f(v_{2}).$$ The line $\ell\subset P$ of equation $t_{2}=0$ is a line of the second type on $X$ if and only if $f(t_{0}v_{0}+t_{1}v_{1})=0$ and the plane cubic equation is a multiple of $t_{2}^{2}$. Furthermore, the second type line $\ell\subset X$ of equation $t_{2}=0$ is a triple line if and only if the plane cubic equation is a multiple of $t_{2}^{3}$ (see [@bockondas2023triple]).
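The criterion just described is easy to check symbolically for a given line. The following sketch (ours, using `sympy`) determines whether a line $\ell={\rm span}(v_{0},v_{1})$ lying on $X$ is of the first type, a double line or a triple line, by looking for a plane $P\supset\ell$ tangent to $X$ along $\ell$ and then inspecting the coefficient of $t_{2}^{2}$.

```python
import sympy as sp

x = sp.symbols('x0:5')
t0, t1, t2 = sp.symbols('t0 t1 t2')

def line_type(f, v0, v1):
    """Classify the line spanned by v0, v1 (assumed to lie on X = {f = 0})."""
    pt = [t0 * a + t1 * b for a, b in zip(v0, v1)]
    on_line = {xi: p for xi, p in zip(x, pt)}
    assert sp.expand(f.subs(on_line)) == 0, "the line must lie on X"
    # coefficient of t2 in f(t0 v0 + t1 v1 + t2 v2) is sum_i alpha_i * (df/dx_i restricted to the line);
    # a plane tangent to X along the whole line exists iff this vanishes identically for some
    # v2 = (alpha_0,...,alpha_4) outside the span of v0 and v1
    partials = [sp.expand(sp.diff(f, xi).subs(on_line)) for xi in x]
    M = sp.Matrix([[sp.Poly(p, t0, t1).coeff_monomial(m) for p in partials]
                   for m in (t0**2, t0*t1, t1**2)])
    null = M.nullspace()                      # its span always contains v0 and v1
    if len(null) <= 2:
        return "first type"
    span01 = sp.Matrix([list(v0), list(v1)]).T
    v2 = next(n for n in null if sp.Matrix.hstack(span01, n).rank() == 3)
    cubic = sp.expand(f.subs({xi: p + t2 * c for xi, p, c in zip(x, pt, list(v2))}))
    # the t2^0 and t2^1 coefficients vanish by construction; for a smooth X the t2^3 one cannot
    return "triple line" if sp.expand(cubic.coeff(t2, 2)) == 0 else "double line"

# the Fermat cubic contains triple lines, e.g. {x0 + x1 = 0, x2 + x3 = 0, x4 = 0}:
f = sum(xi**3 for xi in x)
print(line_type(f, (1, -1, 0, 0, 0), (0, 0, 1, -1, 0)))   # 'triple line'
```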
# Characterizations of Eckardt points on a cubic threefold
## Eckardt points and elliptic curves
We recall the definition of an Eckardt point on a smooth complex cubic threefold $X\subset\mathbb{P}^{4}$ (see [@laza2018moduli Definition 1.5] and [@gammelgaard2018cubic Proposition 6.3.5]). Denote by $T_{p}X$ the projective tangent space of $X$ at $p\in X$.
**Definition 1**. A point $p\in X$ is an Eckardt point if it is a point of multiplicity three for the cubic $X\cap T_{p}X\subset T_{p}X$.
Choose coordinates $(x_{0}:\ldots: x_{4})\in\mathbb{P}^{4}$ such that $p = (1 : 0 : 0 : 0 : 0)$ and $T_{p}X=\lbrace x_{1}=0\rbrace$. The equation of $X$ may be written
$$
f(x_{0},\ldots,x_{4}) = x_{0}^{2}x_{1} + x_{0}Q(x_{1},\ldots,x_{4}) + C(x_{1},\ldots,x_{4})$$ where $Q(x_{1},\ldots,x_{4})$ and $C(x_{1},\ldots,x_{4})$ are homogeneous polynomials of degree two and three respectively. So if $p\in X$ is an Eckardt point then $Q(x_{1},\ldots,x_{4})=0$ and the equation of $X$ may take the form
$$\label{one}
f(x_{0},\ldots,x_{4}) = x_{0}^{2}x_{1} + C(x_{1},\ldots,x_{4}).$$ Following [@murre1972algebraic p.169-170] (see also [@gammelgaard2018cubic Proposition 6.3.5]) we have the following proposition.
**Proposition 2**. *A point $p\in X$ is an Eckardt point if and only if it is contained in infinitely many lines on $X$.*
*Proof.* Consider a line $\ell$ going through $p$. It cuts out the hyperplane $x_{0}=0$ in a unique point $q\in\mathbb{P}^{4}$; every point on $\ell$ has coordinates $\lambda p + \mu q$ with $(\lambda:\mu)\in\mathbb{P}^{1}$. The line $\ell$, defined by $x_{0}=\lambda, x_{i}=\mu q_{i}$ with $i=1,\ldots,4$, lies on $X$ if and only if $f(\lambda p + \mu q)=\lambda^{2}\mu q_{1}+\lambda\mu^{2}Q(q_{1},\ldots,q_{4})+\mu^{3}C(q_{1},\ldots,q_{4})=0$ for all $(\lambda:\mu)\in\mathbb{P}^{1}$, that is if and only if $$q_{1}=0,\quad Q(q_{1},\ldots,q_{4})=0,\quad C(q_{1},\ldots,q_{4})=0.$$ The lines $\ell\subset X$ through $p$ correspond thus to the points $(x_{2}:x_{3}:x_{4})\in\mathbb{P}^{2}$ satisfying the equations $Q(0,x_{2},x_{3},x_{4})=0$ and $C(0,x_{2},x_{3},x_{4})=0$, that is the intersection points of a conic and a cubic in the plane of equation $\lbrace x_{0}=0,x_{1}=0\rbrace$. If $p$ is an Eckardt point then $Q(0,x_{2},x_{3},x_{4})=0$. The intersection $X\cap T_{p}X$ is a cone with vertex $p$ over the elliptic curve $E_{p}$ of equation $\left\lbrace x_{0}=0, C(0,x_{2},x_{3},x_{4})=0\right\rbrace$; the point $p$ is then contained in infinitely many lines on $X$. Conversely, if $Q(0,x_{2},x_{3},x_{4})$ and $C(0,x_{2},x_{3},x_{4})$ have a common factor there are infinitely many lines through $p$ contained in $X$, otherwise there are six lines in $X$ going through $p$. Moreover, if this common factor is linear then $X$ contains a plane and, if it is quadratic $X$ contains a quadratic cone, and hence a plane; this is impossible because of the smoothness of $X$. Therefore $Q(0,x_{2},x_{3},x_{4})=0$ and $p$ is an Eckardt point. ◻
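The multiplicity-three condition of Definition 1 can also be verified directly in coordinates. The small `sympy` routine below (our illustration) restricts $f$ to the tangent hyperplane $T_{p}X$, expands around $p$, and checks that no terms of degree one or two survive; it confirms, for instance, that $(1:-1:0:0:0)$ is an Eckardt point of the Fermat cubic (cf. Example 9 below), while a generic point of the Fermat cubic is not.

```python
import sympy as sp

x = sp.symbols('x0:5')
u = sp.symbols('u0:5')

def is_eckardt(f, p):
    """True iff f restricted to the tangent hyperplane T_pX vanishes to order >= 3 at p."""
    at_p = dict(zip(x, p))
    assert sp.simplify(f.subs(at_p)) == 0, "p must lie on X"
    grad = [sp.diff(f, xi).subs(at_p) for xi in x]
    k = next(i for i, g in enumerate(grad) if g != 0)          # X smooth, so grad(p) != 0
    # solve the tangent hyperplane equation sum_i grad_i * x_i = 0 for x_k
    xk_on_T = sp.solve(sum(g * xi for g, xi in zip(grad, x)), x[k])[0]
    # local parametrisation of T_pX around p: x_i = p_i + u_i for i != k, x_k from the hyperplane
    local = {x[i]: p[i] + u[i] for i in range(5) if i != k}
    local[x[k]] = xk_on_T.subs(local)
    g = sp.expand(f.subs(local))
    poly = sp.Poly(g, *[u[i] for i in range(5) if i != k])
    return all(sum(mono) >= 3 for mono, _ in poly.terms())     # no terms of degree 0, 1 or 2

f = sum(xi**3 for xi in x)                                     # Fermat cubic
print(is_eckardt(f, (1, -1, 0, 0, 0)))                         # True: one of the 30 Eckardt points
print(is_eckardt(f, (1, 1, -2**sp.Rational(1, 3), 0, 0)))      # False: a generic point of X
```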
Every Eckardt point $p\in X$ parameterizes an elliptic curve $E_{p}\subset{\rm{F}}(X)$ of equation $\left\lbrace x_{0}=0, C(0,x_{2},x_{3},x_{4})=0\right\rbrace$, the base of the cone $X\cap T_{p}X$, and conversely every elliptic curve gives rise to an Eckardt point [@tjurin1971geometry; @roulleau2009elliptic]. Moreover, there are at most finitely many Eckardt points on a smooth cubic threefold whereas a general one has no Eckardt points [@zhang2023extension Lemma 2.7]. The following result has been proven in [@gammelgaard2018cubic].
**Proposition 3**. *[@clemens1972intermediate p.315][\[30\]]{#30 label="30"} A cubic threefold can contain at most 30 Eckardt points.*
*Proof.* We reproduce the proof and for completeness see [@gammelgaard2018cubic Proposition 6.3.8] and [@zhang2022topological Lemma 4.8.4]. Consider $P\subset\mathbb{P}^{4}$ a plane and $K_{{\rm{F}}}:=\lbrace [\ell]\in{\rm{F}}(X)\vert \ell\cap P\neq 0\rbrace$ the canonical divisor of ${\rm{F}}(X)$ (see [@clemens1972intermediate]). Let $C_{\ell}$ denote the curve of lines on $X$ incident to $\ell$. We have $K_{{\rm{F}}}\cdot E_{p}=3$, then $C_{\ell}\cdot E_{p}=1$ since $K_{{\rm{F}}}=3C_{\ell}$ [@clemens1972intermediate (10.9)]. On the other hand, any component of ${\rm{M}}(X)$ intersects $K_{{\rm{F}}}$ non-negatively since $K_{{\rm{F}}}$ is effective. Moreover, all elliptic curves $E_{p_{i}}$ are parametrised by Eckardt points $p_{i}$ and are contained in ${\rm{M}}(X)$. There are thus at most $C_{\ell}\cdot {\rm{M}}(X)$ Eckardt points in $X$, with $C_{\ell}\cdot {\rm{M}}(X)=2C_{\ell}\cdot K_{{\rm{F}}}=6C_{\ell}^{2}=30$ since ${\rm{M}}(X)=
2K_{{\rm{F}}}$ and $C_{\ell}^{2}=5$ by [@clemens1972intermediate Proposition 10.21, (10.8)]. ◻
The Fano surface ${\rm{F}}(X)$ contains therefore at most 30 elliptic curves. Note that the curve ${\rm{M}}(X)$ of lines of the second type may contain components other than elliptic curves. However, if it contains exactly 30 elliptic curves then it has no components besides the elliptic components. Only one cubic hypersurface of $\mathbb{P}^{4}$ has 30 Eckardt points: the Fermat cubic $F_{4}$. Its Fano surface ${\rm{F}}(F_{4})$ is the only Fano surface that contains 30 elliptic curves [@roulleau2009elliptic].
**Definition 4**. The residual component of the union of elliptic curves in the curve of lines of the second type is called the main component.
Apart from the Fermat cubic, for every smooth cubic threefold $X\subset\mathbb{P}^{4}$ containing Eckardt points, the main component is nonempty.
**Proposition 5**. *[@murre1972algebraic Lemma 1.18] If $\ell\subset X$ is a line of the first type and $p\in\ell$ then there are six lines on $X$ through $p$.*
Every point $p\in X$ not contained in a line of the second type is thus contained in six lines on $X$, and we have the following proposition.
**Proposition 6**. *[@murre1972algebraic] A line going through an Eckardt point is of the second type.*
We give an elementary proof in coordinates of the following theorem.
**Theorem 7**. *[@tjurin1971geometry][\[Tjurin\]]{#Tjurin label="Tjurin"} Let $p\in X$ be an Eckardt point. The triple lines on $X$ passing through $p$ correspond exactly to the inflection points of the elliptic curve $E_{p}$ which is the base of the cone $X\cap T_{p}X\subset T_{p}X$.*
*Proof.* Let $a\in E_{p}$ be a point, let $\ell_{a}$ be the tangent line of $E_{p}$ at $a$, and denote by $\ell$ the line joining $p$ and $a$. The tangent line $\ell_{a}$ is defined by $\displaystyle\sum_{i=2}^{4}
x_{i}\dfrac{\partial C}{\partial x_{i}}(a)=0.$ The 2-plane $P_{1}$ spanned by $p$ and $\ell_{a}$ is tangent to $X$ along all of $\ell$. Let $b\in \ell_{a}$ be a point such that $b\neq a$ and let $P_{2}$ be the 2-plane containing $E_{p}$. Since $\ell_{a}\subset P_{2}$, we have $b\in P_{2}$ and one can write
$$\label{2.10}
\displaystyle\sum_{i=2}^{4} b_{i}\dfrac{\partial C}{\partial x_{i}}(a) =0.$$ We thus have $P_{1}={\rm{span}}(a,b,p)$ and the plane cubic $P_{1}\cap X$ is defined by $f(t_{0}p + t_{1}a + t_{2}b)=0$ with $(t_{0}:t_{1}:t_{2})$ the projective coordinates of $P_{1}$. Expanding in $t_{2}$ and using Equations [\[one\]](#one){reference-type="eqref" reference="one"} and [\[2.10\]](#2.10){reference-type="eqref" reference="2.10"}, one can see that the line $\ell$ of equation $t_{2}=0$ is a double line on $X$. This line of the second type is a triple line if and only if
$$\label{2.50}
t_{1}\sum_{i=2}^{4}\dfrac{\partial^{2} C}{\partial x_{i}^{2}}\left(a\right) b_{i}^{2}+2t_{1}\displaystyle\sum_{2\leq i<j\leq 4}\dfrac{\partial^{2}C}{\partial x_{j}\partial x_{i}}\left(a\right) b_{i}b_{j}=0\quad\mbox{and}\quad \dfrac{\partial^{3}C}{\partial t_{2}^{3}}(t_{1}a)\neq 0$$ holds. We now study the inflection points of the elliptic curve $E_{p}$. The point $a\in E_{p}$ is an inflection point if it is a point of multiplicity three for the intersection $E_{p}\cap \ell_{a}$, defined by $C(t_{1}a+t_{2}b)=0$. Since $C(t_{1}a)$ and $\dfrac{\partial C}{\partial t_{2}}(t_{1}a)$ vanish, $a\in E_{p}$ is an inflection point if and only if [\[2.50\]](#2.50){reference-type="eqref" reference="2.50"} holds, which is precisely the condition for the line $\ell$ of equation $t_{2}=0$ to be a triple line on $X$. ◻
The planes $P_{1}$ and $P_{2}$ meet along $\ell_{a}$, the tangent line to $E_{p}$ at $a$. The point $a$ thus gives rise to the line of the second type $\ell\subset X$, and conversely the line of the second type gives rise to the point $a$. When $a\in E_{p}$ is not an inflection point, the tangent line $\ell_{a}$ meets the elliptic curve $E_{p}$ in a third point $a^{'}\in E_{p}$, which gives rise to the residual line $\ell^{'}$ of the double line $\ell$.
**Corollary 8**. *If a smooth complex cubic threefold contains Eckardt points then it contains triple lines.*
**Example 9**.
1. The Fermat cubic defined by $x_{0}^{3} + x_{1}^{3} + x_{2}^{3} + x_{3}^{3} + x_{4}^{3}=0$ has 30 Eckardt points with coordinates $(0,\ldots,\underbrace{1}_{x_{i}},\ldots,\underbrace{\xi}_{x_{j}},\ldots,0)$ with $x_{k}=0$ for $k\neq i,j$ and $\xi\in\mathbb{C}$ such that $\xi^{3}=-1$, and contains 135 triple lines.
2. The Klein cubic defined by $x_{0}^{2}x_{1} + x_{1}^{2}x_{2} + x_{2}^{2}x_{3} + x_{3}^{2}x_{4} + x_{4}^{2}x_{0}=0$ contains neither Eckardt points nor triple lines.
3. The cubic threefold defined by $x_{0}^{2}x_{2} + x_{2}^{2}x_{4} + x_{1}^{2}x_{3} + x_{3}^{2}x_{0} + x_{4}^{3}=0$ has one Eckardt point with coordinates $(0:1:0:0:0)$ and contains 9 triple lines.
**Remark 10**. However, the converse of Corollary [Corollary 8](#converse){reference-type="ref" reference="converse"} is not true: there exist smooth complex cubic threefolds with no Eckardt points but containing triple lines (see Section [5](#confi){reference-type="ref" reference="confi"}).
**Corollary 11**. *There are exactly nine triple lines going through an Eckardt point on a smooth cubic threefold.*
Every smooth cubic threefold containing Eckardt points therefore contains at least nine triple lines.
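For instance, for the Fermat cubic $F_{4}$ and the Eckardt point $p=(1:-1:0:0:0)$, the tangent hyperplane is $T_{p}F_{4}=\lbrace x_{0}+x_{1}=0\rbrace$ and the elliptic curve $E_{p}$ is the plane Fermat cubic $x_{2}^{3}+x_{3}^{3}+x_{4}^{3}=0$, whose nine inflection points are the common zeros of the curve and of its Hessian. The following SymPy sketch (an illustration written for this example only; it is not the MAGMA or SageMath code used for the computations of Section [5](#confi){reference-type="ref" reference="confi"}) counts these nine points, and hence the nine triple lines through $p$, stratum by stratum.

```python
# SymPy sketch: the nine inflection points of E_p for the Fermat cubic at the
# Eckardt point p = (1:-1:0:0:0), i.e. the nine triple lines through p
# (Theorem 7 and Corollary 11).  Illustrative code, not the one used in the paper.
import sympy as sp

x2, x3, x4 = sp.symbols('x2 x3 x4')
C = x2**3 + x3**3 + x4**3                      # equation of E_p in its plane
H = sp.hessian(C, (x2, x3, x4)).det()          # Hessian determinant, 216*x2*x3*x4

count = 0
# Stratum x2 = 1
count += len(sp.solve([C.subs(x2, 1), H.subs(x2, 1)], [x3, x4], dict=True))
# Stratum x2 = 0, x3 = 1 (the Hessian vanishes identically there)
eqs = [e for e in (C.subs({x2: 0, x3: 1}), H.subs({x2: 0, x3: 1})) if e != 0]
count += len(sp.solve(eqs, [x4], dict=True))
# Last stratum x2 = x3 = 0, x4 = 1: C = 1 does not vanish, no contribution
print(count)   # expected: 9 inflection points, i.e. 9 triple lines through p
```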
## Eckardt points and polar quadrics
Let $X\subset\mathbb{P}^{4}$ be a smooth complex cubic threefold and $p\in X$ a point. We recall the following definition (see [@cools2010star Definition 2.11]).
**Definition 12**. The polar quadric of a point $p=(p_{0}:\ldots:p_{4})\in\mathbb{P}^{4}$ with respect to $X$ is the hypersurface defined by $\displaystyle\sum_{i=0}^{4}p_{i}\dfrac{\partial f}{\partial x_{i}}=0$.
Denote by $\bigtriangleup_{p}(X)$ the polar quadric of $p$ with respect to $X$. From [@canonero1997inflection p.161-162] we have the following proposition.
**Proposition 13**. *A point $p\in X$ is an Eckardt point if and only if the polar quadric $\bigtriangleup_{p}(X)$ splits up as the tangent space $T_{p} X$ and a hyperplane not passing through $p$.*
*Proof.* Let $p=(1:0:0:0:0)\in X$ be an Eckardt point and $T_{p}X=\left\lbrace x_{1}=0\right\rbrace$ be the projective tangent space of $X$ at $p$. Then the equation of $X$ can take the form
$$f(x_{0},\ldots,x_{4})=x_{0}^{2}x_{1} + C(x_{1},\ldots,x_{4})$$ where $C(x_{1},\ldots,x_{4})$ is a homogeneous polynomial of degree three. The polar quadric $\bigtriangleup_{p}(X)$ is defined by $x_{0}x_{1}=0$. It therefore splits up as the tangent space $T_{p}X=\left\lbrace x_{1}=0\right\rbrace$ and the hyperplane $\left\lbrace x_{0}=0\right\rbrace$, which does not pass through $p$. Conversely, suppose that $\bigtriangleup_{p}(X)$ splits up as $T_{p}X$ and a hyperplane $H_{p}X$ not passing through $p$, and choose coordinates on $\mathbb{P}^{4}$ such that $p=(1:0:0:0:0)$ is a point of $X$, $T_{p}X=\left\lbrace x_{1}=0\right\rbrace$ and $H_{p}X=\left\lbrace x_{0}=0\right\rbrace$. The equation of $X$ can be written:
$$f(x_{0},\ldots,x_{4})=x_{0}^{2}x_{1}+x_{0}Q(x_{1},\ldots,x_{4})+C(x_{1},\ldots,x_{4})$$ where $Q(x_{1},\ldots,x_{4})$ and $C(x_{1},\ldots,x_{4})$ are homogeneous polynomials of degree two and three respectively. The polar quadric $\bigtriangleup_{p}(X)$ is given by the equation:
$$\label{0307}
2x_{0}x_{1}+Q(x_{1},\ldots,x_{4})=0.$$ Since it splits up as $T_{p}X=\left\lbrace x_{1}=0\right\rbrace$ and $H_{p}X=\left\lbrace x_{0}=0\right\rbrace$, comparing the two expressions gives $Q(x_{1},\ldots,x_{4})=0$, so $p$ is an Eckardt point. ◻
The two characterizations of Eckardt points studied in this paper are thus equivalent. The following lemma shows how polar quadrics can be used to find all Eckardt points on a cubic threefold.
**Lemma 14**. *A point $p\in X$ is an Eckardt point if and only if the polar quadric $\bigtriangleup_{p}(X)$ is of rank at most two.*
*Proof.* Let $p\in X$ be an Eckardt point, $T_{p}X=\left\lbrace l_{1}(x_{0},\ldots,x_{4})=0\right\rbrace$ the projective tangent space of $X$ at $p$ and $H_{p}X=\left\lbrace l_{2}(x_{0}, \ldots,x_{4})=0\right\rbrace$ a hyperplane not passing through $p$, with $l_{1}(x_{0},\ldots,x_{4})$ and $l_{2}(x_{0},\ldots,x_{4})$ two linear forms. Assume the polar quadric $\bigtriangleup_{p}(X)$ is defined by the equation $q(x_{0},\ldots,x_{4})=0$, where $q(x_{0},\ldots,x_{4})$ is a homogeneous polynomial of degree two. Using Proposition [Proposition 13](#Propo){reference-type="ref" reference="Propo"} we write $$q(x_{0},\ldots,x_{4})=l_{1}(x_{0},\ldots,x_{4})l_{2}(x_{0},\ldots,x_{4})$$ and the quadratic form is of rank at most two. Conversely, suppose $p\in X$ is not an Eckardt point. In the coordinates of the proof of Proposition [Proposition 13](#Propo){reference-type="ref" reference="Propo"} the polar quadric is given by $2x_{0}x_{1}+Q(x_{1},\ldots,x_{4})=0$. If this quadratic form were a product of two linear forms then, since the only monomial involving $x_{0}$ is $x_{0}x_{1}$, one factor would be proportional to $x_{1}$ and the other would not vanish at $p$; the polar quadric would thus split up as $T_{p}X$ and a hyperplane not passing through $p$, and $p$ would be an Eckardt point by Proposition [Proposition 13](#Propo){reference-type="ref" reference="Propo"}. Hence the quadratic form $q(x_{0},\ldots,x_{4})$ is not the product of two linear forms and $\bigtriangleup_{p}(X)$ is therefore of rank at least three. ◻
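In practice, Lemma [Lemma 14](#LELEMME){reference-type="ref" reference="LELEMME"} gives a criterion that is very easy to test. The following SymPy sketch (a minimal illustration written for this paper's running example, not the MAGMA or SageMath code used for the computations of Section [5](#confi){reference-type="ref" reference="confi"}) checks it on the Fermat cubic at an Eckardt point and at a non-Eckardt point.

```python
# Minimal SymPy illustration of Lemma 14: rank of the polar quadric of the
# Fermat cubic at an Eckardt point and at a non-Eckardt point.
import sympy as sp

x = sp.symbols('x0:5')
f = sum(xi**3 for xi in x)                        # Fermat cubic F_4

def polar_matrix(point):
    """Symmetric 5x5 matrix of the polar quadric of `point` with respect to f."""
    polar = sp.expand(sum(point[i] * sp.diff(f, x[i]) for i in range(5)))
    return sp.Matrix(5, 5, lambda i, j: sp.Rational(1, 2) * sp.diff(polar, x[i], x[j]))

eckardt = (1, -1, 0, 0, 0)                        # an Eckardt point of F_4
c = -sp.cbrt(2)                                   # c**3 = -2
other = (1, 1, c, 0, 0)                           # a point of F_4 which is not Eckardt

for pt in (eckardt, other):
    assert sp.simplify(f.subs(dict(zip(x, pt)))) == 0   # the point lies on F_4
    print(pt, "rank of the polar quadric:", polar_matrix(pt).rank())
# Expected: rank 2 at the Eckardt point, rank 3 at the other point.
```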
# Computing Eckardt points on a cubic threefold
We have studied Eckardt points on a cubic threefold through two different approaches: the first one involving elliptic curves and the second one involving polar quadrics. Both approaches can be used to compute Eckardt points on a cubic threefold. However, it is generally challenging to compute Eckardt points on a cubic threefold using the first approach because the expression of the tangent space $T_{p}X$ can make the computation of points of multiplicity three of $X\cap T_{p}X\subset T_{p}X$ difficult. It is therefore easier to check whether a point of the cubic is an Eckardt point than to find all Eckardt points using the first approach. Nevertheless, this approach has the benefit of revealing the equations of the elliptic curves of the Fano surface.
Unlike the first approach, the second one can be used to compute all Eckardt points of a cubic threefold and check whether a point on the cubic is an Eckardt point. It therefore has the advantage of revealing the number of Eckardt points of a cubic threefold.
Using Lemma [Lemma 14](#LELEMME){reference-type="ref" reference="LELEMME"}, we propose the following method for computing all Eckardt points on a cubic threefold.
## Method for computing all Eckardt points on a cubic threefold.
Let $X\subset\mathbb{P}^{4}$ be a smooth complex cubic threefold, $p=(p_{0}:\ldots:p_{4})\in X$ a point and $\mathcal{B}$ the matrix associated with the polar quadric $\bigtriangleup_{p}(X)$. Eckardt points on $X$ are given by the vanishing locus of all $3\times3$ minors of $\mathcal{B}$. In order to count each point only once, we use a stratification of $\mathbb{P}^{4}$ described as follows: the first stratum is the affine chart $p_{0}=1$ and, for $i=2,\ldots,5$, the $i$-th stratum is defined by $p_{0}=0,\ldots,p_{i-2}=0,\ p_{i-1}=1$.
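The following SymPy sketch (an illustrative reimplementation of this method, not the MAGMA code used for the computations reported below) carries it out for the Fermat cubic on the first stratum $p_{0}=1$ and recovers the 12 Eckardt points of item (1) of Example 15 below.

```python
# SymPy sketch of the method above for the Fermat cubic on the stratum p0 = 1.
# Illustrative reimplementation; the computations reported in this paper were
# carried out with MAGMA and SageMath.
import sympy as sp
from itertools import combinations

x = sp.symbols('x0:5')
p = sp.symbols('p0:5')
f = sum(xi**3 for xi in x)                        # Fermat cubic F_4

# Matrix B of the polar quadric of p with respect to X; its entries are linear in p
polar = sp.expand(sum(p[i] * sp.diff(f, x[i]) for i in range(5)))
B = sp.Matrix(5, 5, lambda i, j: sp.Rational(1, 2) * sp.diff(polar, x[i], x[j]))

chart = {p[0]: 1}                                 # first stratum p0 = 1
eqs = [B.extract(list(r), list(c)).det().subs(chart)
       for r in combinations(range(5), 3)
       for c in combinations(range(5), 3)]        # all 3x3 minors of B
eqs.append(f.subs(dict(zip(x, p))).subs(chart))   # p must lie on X
eqs = [e for e in eqs if e != 0]

solutions = sp.solve(eqs, list(p[1:]), dict=True)
print(len(solutions))   # expected: the 12 Eckardt points of Example 15 (1)
```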
**Example 15**. Consider the Fermat cubic $F_{4}=\lbrace x_{0}^{3}+x_{1}^{3}+x_{2}^{3}+x_{3}^{3}+x_{4}^{3}=0\rbrace\subset\mathbb{P}^{4}$. Denote by $\mathcal{B}_{1}$ the matrix associated with the polar quadric $\bigtriangleup_{p}F_{4}$.
1. On the affine chart $p_{0}=1$ the vanishing locus of all $3\times 3$ minors of $\mathcal{B}_{1}$ is defined by the following equations:
$p_2^4 + p_2=0,~p_3^4 + p_3=0,~p_4^4 + p_4=0,~p_1^3 + p_2^3 + p_3^3 + p_4^3 + 1=0,~p_1p_2=0,\newline p_1p_3=0,~p_2p_3=0,~ p_1p_4=0,~p_2p_4=0,~p_3p_4=0$.
If $p_{i}\neq 0$ then $p_{j}=0$ for $i\neq j$ and $p_{i}^{3}=-1$. We get 12 Eckardt points with coordinates $(1:0:\ldots:p_{i}:\ldots:0)$ with $p_{i}^{3}=-1$.
2. In the stratum $p_{0}=0, p_{1}=1$ the vanishing locus of all $3\times 3$ minors of $\mathcal{B}_{1}$ is defined by the following equations:
$p_3^4 + p_3=0,~p_{4}^4 + p_4=0,~p_2^3 + p_3^3 + p_4^3 + 1=0,~p_2p_3=0,\newline
p_2p_4=0,~p_3p_4=0$.
If $p_{i}\neq 0$ then $p_{j}=0$ for $i\neq j$ and $p_{i}^{3}=-1$. We get 9 Eckardt points with coordinates $(0:1:\ldots:p_{i}:\ldots:0)$ with $p_{i}^{3}=-1$.
3. In the stratum $p_{0}=0, p_{1}=0, p_{2}=1$ the vanishing locus of all $3\times 3$ minors of $\mathcal{B}_{1}$ is defined by the following equations:
$p_4^4 + p_4=0,~p_3^3 + p_4^3 + 1=0,~p_3p_4=0$.
We get three Eckardt points with coordinates $(0:0:1:\xi:0)$ and three others with coordinates $(0:0:1:0:\xi)$, with $\xi^{3}=-1$.
4. In the stratum $p_{0}=0, p_{1}=0, p_{2}=0, p_{3}=1$ we get $3$ Eckardt points with coordinates $(0:0:0:1:\xi)$ with $\xi^{3}=-1$. The last stratum is reduced to the point $(0:0:0:0:1)$, which does not lie on $F_{4}$ and therefore contributes no Eckardt point.
The Klein cubic contains no Eckardt point because the vanishing locus of all $3\times 3$ minors of the matrix associated with its polar quadric is empty in all strata.
Without giving many details, the authors in [@canonero1997inflection Example 4.4] state that the cubic threefold $X_{2}\subset\mathbb{P}^{4}$ defined by $$x_{0}^{2}x_{4} + x_{1}^{2}x_{3} + x_{3}^{3} + x_{3}^{2}x_{4} + x_{3}x_{4}^{2} - x_{4}^{3} + x_{2}^{3} = 0$$ has exactly two Eckardt points, with coordinates $(1:0:0:0:0)$ and $(0:1:0:0:0)$. It is easy to check that these points are Eckardt points of $X_{2}$ using both approaches. However, showing that $X_{2}$ has no Eckardt points besides $(1:0:0:0:0)$ and $(0:1:0:0:0)$ is quite challenging; this illustrates the importance of the method proposed in this paper for computing all Eckardt points of a cubic threefold. Let $p=(p_{0}:p_{1}:p_{2}:p_{3}:p_{4})\in X_{2}$ be a point and denote by $\mathcal{B}_{2}$ the matrix associated with the polar quadric $\bigtriangleup_{p}X_{2}$. On the affine chart $p_{0}=1$ the vanishing locus of all $3\times 3$ minors of $\mathcal{B}_{2}$ is defined by $p_{1}=0, p_{2}=0, p_{3}=0, p_{4}=0$, and in the stratum $p_{0}=0, p_{1}=1$ it is defined by $p_{2}=0, p_{3}=0, p_{4}=0$, while it is empty in the other strata. This proves that $X_{2}$ has no Eckardt points besides $(1:0:0:0:0)$ and $(0:1:0:0:0)$.
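A SymPy sketch of this computation on the first stratum (again only an illustration; the same verification can be carried out with MAGMA or SageMath) reads as follows.

```python
# SymPy sketch reproducing the computation for X_2 on the chart p0 = 1.
# Illustrative only; not the code used for the computations in this paper.
import sympy as sp
from itertools import combinations

x = sp.symbols('x0:5')
p = sp.symbols('p0:5')
f = (x[0]**2*x[4] + x[1]**2*x[3] + x[3]**3 + x[3]**2*x[4]
     + x[3]*x[4]**2 - x[4]**3 + x[2]**3)          # the cubic X_2

polar = sp.expand(sum(p[i] * sp.diff(f, x[i]) for i in range(5)))
B2 = sp.Matrix(5, 5, lambda i, j: sp.Rational(1, 2) * sp.diff(polar, x[i], x[j]))

chart = {p[0]: 1}
eqs = [B2.extract(list(r), list(c)).det().subs(chart)
       for r in combinations(range(5), 3)
       for c in combinations(range(5), 3)]
eqs.append(f.subs(dict(zip(x, p))).subs(chart))   # p must lie on X_2
eqs = [e for e in eqs if e != 0]

print(sp.solve(eqs, list(p[1:]), dict=True))
# Expected, as stated above: the single solution p1 = p2 = p3 = p4 = 0,
# i.e. the Eckardt point (1:0:0:0:0).
```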
# Main component, elliptic curves and triple lines configuration of some cubic threefolds {#confi}
## Strategy for finding some cubic threefolds with no Eckardt points but containing triple lines {#subsection}
Let $\ell$ be a line of the second type on $X$ given by $$x_{2}=0, x_{3}=0, x_{4}=0.$$ Following [@clemens1972intermediate (6.10)] the equation of $X$ may take the form:
$$f(x_{0},\ldots, x_{4}) = x_{0}^{2}x_{2} + x_{1}^{2}x_{3} + x_{0}q_{0}(x_{2},x_{3}, x_{4}) + x_{1}q_{1}(x_{2},x_{3}, x_{4}) + P(x_{2},x_{3}, x_{4})=0$$ where $q_{0}(x_{2},x_{3}, x_{4})=\sum_{2\leq j\leq k\leq 4}b_{0jk}x_{j}x_{k}$ and $q_{1}(x_{2},x_{3}, x_{4})=\sum_{2\leq j\leq k\leq 4}b_{1jk}x_{j}x_{k}$ are homogeneous polynomials of degree two and $P(x_{2},x_{3}, x_{4})$ is a homogeneous polynomial of degree three. Assume $\ell$ is a triple line, so that the plane given by $x_{2}=0, x_{3}=0$ is tangent to $X$ at all points of $\ell$. Then the equation of $X$ may be written
$$\label{(5.1)}
f(x_{0},\ldots, x_{4}) = x_{0}^{2}x_{2} + x_{1}^{2}x_{3} + x_{0}q_{0}(x_{2},x_{3}, x_{4}) + x_{1}q_{1}(x_{2},x_{3}, x_{4}) + kx_{4}^{3}=0$$ with $k\neq 0$ and $b_{044}=0, b_{144}=0$. Using Equation $\eqref{(5.1)}$ we obtain many examples of smooth cubic threefolds with no Eckardt points but containing triple lines.
## Main component, elliptic curves and triple lines configuration
The following table gives the list of smooth complex cubic threefolds $X_{i}\subset\mathbb{P}^{4}$ we will work with in this section. Cubics $X_{5}, X_{6}, X_{7}$ and $X_{8}$ are obtained through the method detailed in Section [5.1](#subsection){reference-type="ref" reference="subsection"}.
| Cubic | Defining equation |
|:---|:---|
| $X_{1}$ | $x_{0}^{2}x_{2} + x_{2}^{2}x_{4} + x_{1}^{2}x_{3} + x_{3}^{2}x_{0} + x_{4}^{3}=0$ |
| $X_{2}$ | $2x_{0}x_{2}^{2} + 2x_{2}x_{1}^{2} + x_{1}^{2}x_{3} + x_{3}x_{0}^{2} + 3x_{3}^{3} + x_{4}^{3}=0$ |
| $X_{3}$ | $x_{0}^{2}x_{4} + x_{1}^{2}x_{3} + x_{3}^{3} + x_{3}^{2}x_{4} + x_{3}x_{4}^{2} - x_{4}^{3} + x_{2}^{3} = 0$ |
| $X_{4}$ | $x_{0}^{3} + x_{1}^{3} + x_{2}^{3} + x_{3}^{3} + x_{4}^{3} + 3x_{0}x_{1}x_{2}=0$ |
| $X_{5}$ | $x_{0}^{2}x_{2} + x_{1}^{2}x_{3} + x_{1}x_{2}^{2} + x_{0}x_{3}^{2} + x_{1}x_{3}^{2} + x_{4}^{3}=0$ |
| $X_{6}$ | $x_{0}^{2}x_{2} + x_{1}^{2}x_{3} + x_{0}x_{2}^{2} + x_{1}x_{2}^{2} + x_{0}x_{3}^{2} + 2x_{1}x_{2}x_{4} + 2x_{0}x_{3}x_{4} + x_{4}^{3}=0$ |
| $X_{7}$ | $x_{0}^{2}x_{2} + x_{1}^{2}x_{3} + x_{1}x_{2}^{2} + x_{0}x_{3}^{2} + 2x_{0}x_{3}x_{4} + x_{4}^{3}=0$ |
| $X_{8}$ | $x_{0}^{2}x_{2} + x_{1}^{2}x_{3} + x_{1}x_{2}^{2} + x_{1}x_{3}^{2} + x_{0}x_{3}^{2} + x_{0}x_{4}^{2} + x_{1}x_{4}^{2} + x_{4}^{3}=0$ |

: Some smooth cubic threefolds
Table [1](#table2){reference-type="ref" reference="table2"} gives the information about the number $n_{E}$ of Eckardt points and the number $n_{T}$ of triple lines of these cubics.
cubic threefold $X_{1}$ $X_{2}$ $X_{3}$ $X_{4}$ $X_{5}$ $X_{6}$ $X_{7}$ $X_{8}$
----------------- --------- --------- --------- --------- --------- --------- --------- ---------
$n_{E}$ 1 1 2 12 0 0 0 0
$n_{T}$ 9 33 39 81 27 9 2 1
: Eckardt points and triple lines numbers
The following table gives the configuration of the main component, elliptic curves and triple lines for the above cubics. This table contains the number of triple lines, the number $n_{E_{p}}$ of elliptic curves, and the intersection number of the main component $P$ and elliptic curves in the affine chart $p_{0, 1}=1$.
| cubic threefold | $X_{1}$ | $X_{2}$ | $X_{3}$ | $X_{4}$ |
|:---|:---|:---|:---|:---|
| $n_{T}$ | 9 | 33 | 33 | 54 |
| $n_{E_{p}}$ | 1 | 1 | 2 | 6 |
| Intersection numbers | $E_{p}\cdot P =9$ | $E_{p}\cdot P =9$ | $E_{p_{i}}\cdot P =8$, $E_{p_{1}}\cdot E_{p_{2}} = 1$ | $E_{p_{i}}\cdot P =12$ (for 3 elliptic curves), $E_{p_{j}}\cdot P =6$ (for the other elliptic curves), $E_{p_{i}}\cdot E_{p_{j}} = 0$ |

: Main component, elliptic curves and triple lines configuration
The intersection points are computed over the rational field $\mathbb{Q}$. All the computations have been done using the software MAGMA [@MR1484478], except the number of triple lines, which was computed with SageMath [@sagemath]. The main component of $X_{1}, X_{2}, X_{3}$ and $X_{4}$ is irreducible over $\mathbb{Q}$. However, whether it is smooth is still an open question. Inspection of Table [2](#table1){reference-type="ref" reference="table1"} reveals that the 9 intersection points of the elliptic curve and the main component of $X_{1}$ are exactly the triple lines of $X_{1}$.
| arxiv_math | {
"id": "2309.08124",
"title": "Eckardt points on a cubic threefold",
"authors": "Gloire Grace Bockondas and Basile Guy Richard Bossoto",
"categories": "math.AG",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
author:
- |
Jin-Cai Kang$^a$, Yong-Yong Li$^b$, Chun-Lei Tang$^a$[^1]\
*$^a$School of Mathematics and Statistics, Southwest University, Chongqing, 400715, China,*\
*$^b$College of Mathematics and Statistics, Northwest Normal University, Lanzhou, 730070, China*
date:
-
-
title: " Prescribed mass standing waves for Schrödinger-Maxwell equations with combined nonlinearities[^2]"
---
> **Abstract:** In the present paper, we study the following Schrödinger-Maxwell equation with combined nonlinearities $$\begin{aligned}
> \begin{cases}
> \displaystyle - \Delta u+\lambda u+ \left(|x|^{-1}\ast |u|^2\right)u
> =|u|^{p-2}u +\mu|u|^{q-2}u\quad \text{in} \
> \mathbb{ R}^3,\\
> \displaystyle \int_{\mathbb{R}^3}|u|^2dx=a^2,
> \end{cases}
> \end{aligned}$$ where $a>0$, $\mu\in \mathbb{R}$, $2<q\leq \frac{10}{3}\leq p<6$ with $q\neq p$, $\ast$ denotes the convolution and $\lambda\in \mathbb{R}$ appears as a Lagrange multiplier. Under some mild assumptions on $a$ and $\mu$, we prove existence, nonexistence and multiplicity results for normalized solutions to the above equation. Moreover, the asymptotic behavior of normalized solutions is studied as $\mu\rightarrow 0$ and $q\rightarrow \frac{10}{3}$, and the stability/instability of the corresponding standing waves for the related time-dependent problem is also discussed.
>
> **Keywords:** Schrödinger-Maxwell equation; Combined nonlinearities; Normalized solution; Asymptotic behavior; Stability
# Introduction and main results
In this paper, we investigate standing waves with prescribed mass for a class of Schrödinger-Maxwell equations involving combined power nonlinearities $$\begin{aligned}
\label{kk1}
i \varphi_t+\Delta \varphi-\left(|x|^{-1}\ast| \varphi|^2\right)\varphi+|\varphi|^{p-2}\varphi+\mu|\varphi|^{q-2}\varphi=0,
\end{aligned}$$ where $2<q\leq\frac{10}{3}\leq p<6$ with $q\neq p$, $\mu \in \mathbb{R}$, $\varphi(x, t):\mathbb{R}^3\times [0, T]\rightarrow \mathbb{C}$ is the wave function and $(|x|^{-1}\ast |\varphi| ^2)$ is a repulsive nonlocal Coulombic potential. The case $\mu>0$ is the focusing case, while the case $\mu < 0$ is referred to as the defocusing case. This class of Schrödinger type equations with a repulsive nonlocal Coulomb potential is obtained by approximation of the Hartree-Fock equation describing a quantum mechanical system of many particles, see [@2002RMP-BF; @1981CMP-BBL; @1981RMP-L; @1984CMP-L]. As a pioneering work, Tao et al. in [@2007CPDETao] studied nonlinear Schrödinger equations with combined nonlinearities, after which such problems have attracted widespread attention, see, e.g., [@2007CPDETao; @2012DIE; @2016JDECheng; @2018JEE; @2017ARMA; @2016RMI; @2013CMP; @2017CV]. The standing wave solution $\varphi(t, x) =e^{i \lambda t}u(x)$ of Eq. [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} corresponds to a solution of the equation $$\begin{aligned}
\label{kkkk1}
- \Delta u+\lambda u+ \left(|x|^{-1}\ast |u|^2\right)u
=|u|^{p-2}u +\mu|u|^{q-2}u\quad \text{in} \ \mathbb{ R}^3.
\end{aligned}$$ As is well known, the fixed frequency problem (i.e. $\lambda\in \mathbb{R}$ is fixed) and the prescribed mass problem (i.e. the L$^2$-mass of the solution is prescribed) for Eq. [\[kkkk1\]](#kkkk1){reference-type="eqref" reference="kkkk1"} are two variational problems that have attracted much attention. The former was intensively studied in recent years, see, e.g., [@2008MJM-A; @2008CCM-AR; @2010AIHP-ADP; @2008JMAA-AP; @2016N-CM; @2006JFA; @2009-NA-CT; @2016-JDE-SM; @2018-NARWA-ZT; @2023-JGA-KLT; @2016-AMPA-LWZ]. However, the prescribed mass problem for Eq. [\[kkkk1\]](#kkkk1){reference-type="eqref" reference="kkkk1"} has been seldom studied; it plays a significant role in the study of Bose-Einstein condensation, which motivates us to search for solutions with prescribed $L^2$-norm (called normalized solutions). In this case, $\lambda$ is no longer a given constant but appears as an unknown Lagrange multiplier.
If $\mu=0$, Eq. [\[kkkk1\]](#kkkk1){reference-type="eqref" reference="kkkk1"} reduces to the following Schrödinger-Maxwell equation with zero perturbation $$\begin{aligned}
\label{k93}
- \Delta u+\lambda u+ \left(|x|^{-1}\ast |u|^2\right)u=|u|^{p-2}u \quad \text{in} \ \mathbb{ R}^3.
\end{aligned}$$ To search for normalized solutions of Eq. [\[k93\]](#k93){reference-type="eqref" reference="k93"}, we consider the critical points of the energy functional $$\begin{aligned}
J (u)=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx\end{aligned}$$ on the constraint $$S_a:=\left\{u\in H^1(\mathbb{R}^3, \mathbb{C}): \int_{\mathbb{R}^3}|u|^2dx=a^2 \right\}.$$ It is easy to show that $J$ is a well defined $C^1$ functional on $S_a$ for any $p\in(2,6]$ (see [@2006JFA] for example). When $p\in(2,\frac{10}{3})$, the functional $J$ is bounded from below and coercive on $S_a$, and the existence of minimizer for $J$ constrained on $S_a$ has been studied in [@2011ZAMP-BS; @1992CPDE-CL; @2011JFA-BS; @2004JSP]. If $p\in (2, 3)$, the authors in [@2011JFA-BS] proved that minimizers exist provided that $a > 0$ is small enough. Specifically, by using the techniques introduced in [@1992CPDE-CL], it has been proved in [@2004JSP] that minimizers exist for $p=\frac{8}{3}$ provided $a \in (0, a_0 )$ for some suitable $a_0 > 0$. The case $p\in(3,\frac{10}{3})$ is considered in [@2011ZAMP-BS], in which a minimizer is obtained for $a > 0$ large enough. Based on whether $J_{|_{S_a}}$ is bounded from below, the threshold $\frac{10}3$ is called as the L$^2$-critical exponent. If $p\in[3, \frac{10}{3}]$, Jeanjean and Luo in [@2013ZAMP-JL] gave a threshold value of $a > 0$ separating existence and nonexistence of minimizers. When $p\in( \frac{10}{3}, 6)$, the functional $J$ is unbounded from below on $S_a$, and Bellazzini, Jeanjean and Luo in [@2013PLMS] found critical points of $J$ on $S_a$ for $a> 0$ sufficiently small. After then, Chen et al. [@2020JMAA-CT] considered normalized solutions for the Schrödinger-Maxwell equations with general nonlinearity $$\begin{aligned}
- \Delta u+\lambda u+ \left(|x|^{-1}\ast |u|^2\right)u=f(u) \quad \text{in} \ \mathbb{ R}^3,
\end{aligned}$$ where $f\in C(\mathbb{R}, \mathbb{R})$ covers the case $f(u) = |u|^{s-2}u$ with $s\in (3, \frac{10}{3})\cup ( \frac{10}{3}, 6)$. Afterwards, the most recent progress on Eq. [\[k93\]](#k93){reference-type="eqref" reference="k93"} was made in [@2023IJM], where the authors not only recover the old results in a unified way but also exhibit new ones. For more investigations on normalized solutions of Schrödinger-Maxwell equations, we refer readers to [@2013ZAMP-JL; @2017JMAA-ZZ; @2020MMAS; @2023.3; @2014JMAA-L; @2018ZAMP-YL; @2017CMA-Y; @2023AMP-WQ] and the references therein.
Recently, a phenomenon familiar from concave-convex variational problems has reappeared in the study of normalized solutions. Consider the following Schrödinger equation with combined nonlinearities $$\begin{aligned}
\label{k110}
\begin{cases}
\displaystyle - \Delta u+ \lambda u=|u|^{p-2}u+\mu |u|^{q-2}u \quad \text{in} \ \mathbb{ R}^N, \\
\displaystyle \int_{\mathbb{R}^N}u^2dx=a^2,
\end{cases}
\end{aligned}$$ where $N\geq 1$, $a>0$, $\mu\in\mathbb{R}$, $\lambda\in\mathbb{R}$ is an unknown parameter and $2<q \leq p< 2^*$. If $q<2+\frac4N<p$, the fiber mapping under the scaling of type $t^{\frac N2}u(t\cdot)$ admits two critical points (a maximum point and a minimum point), which indicates that two normalized solutions can be obtained by splitting the corresponding Nehari-Pohožaev manifold associated with Eq. [\[k110\]](#k110){reference-type="eqref" reference="k110"} into two submanifolds. As mentioned above, Soave in [@2020Soave] first studied the existence and nonexistence of normalized solutions for Eq. [\[k110\]](#k110){reference-type="eqref" reference="k110"} by using the Nehari-Pohožaev constraint. Subsequently, as an extension of the work [@2020Soave], Soave in [@2021-JFA-Soave] further considered the existence and nonexistence of normalized solutions for Eq. [\[k110\]](#k110){reference-type="eqref" reference="k110"} with $p=2^*:=\frac{2N}{N-2}$ and $q\in(2,2^*)$. For more results on Eq. [\[k110\]](#k110){reference-type="eqref" reference="k110"}, we refer the readers to [@2022-MA-jean; @2022JFAWei; @2022.9; @2021CVLixinfu; @2022CValves].
When it comes to Schrödinger-Maxwell equations with combined nonlinearities, there seem to be no results to our knowledge. Motivated by the above survey, a natural question is whether Eq. [\[kkkk1\]](#kkkk1){reference-type="eqref" reference="kkkk1"} possesses normalized solutions for $\mu\in \mathbb{R}$ and $2<q\leq\frac{10}{3}\leq p<6$ with $q\neq p$. Accordingly, we study the following Schrödinger-Maxwell equation in this work $$\begin{aligned}
\label{k1}\tag{${ \mbox{SP}}$}
\begin{cases}
\displaystyle - \Delta u+\lambda u+ \left(|x|^{-1}\ast |u|^2\right)u
=|u|^{p-2}u +\mu|u|^{q-2}u\quad \text{in} \ \mathbb{ R}^3,\\
\displaystyle \int_{\mathbb{R}^3}|u|^2dx=a^2,
\end{cases}
\end{aligned}$$ where $a>0$, $\mu\in \mathbb{R}$, $2<q\leq\frac{10}{3}\leq p<6$ with $q\neq p$, $\ast$ denotes the convolution and $\lambda\in \mathbb{R}$ is an unknown Lagrange multiplier. Normalized solutions of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} correspond to critical points of the following functional on the constraint $S_a$, $$\begin{aligned}
%\label{k15}
\mathcal{J} (u)=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx-\frac{\mu}{q}\int_{\mathbb{R}^3}|u|^{q}dx.\end{aligned}$$
For convenience, we let $\mathcal{H}:=H^1(\mathbb{R}^3, \mathbb{C})$ denote the usual Sobolev space with the norm $$\|u\|= \Big(\int_{\mathbb{R}^3} |\nabla u|^2+ |u|^2 dx\Big)^{\frac{1}{2}}.$$ The subspace $\mathcal{H }_r :=\{u\in \mathcal{H}: u(x)=u(|x|)\}$ is endowed with the same scalar product and norm as $\mathcal{H}$.
Before stating our main results, we recall the following
**Definition 1**. We say that $\overline{u}$ is a ground state normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} on $S_a$ if $\overline{u}$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} possessing minimal energy among all of the normalized solutions, namely, $$\begin{aligned}
d\mathcal{J}_{|_{S_a }}(\overline{u})=0 \quad
\text{and}
\quad \mathcal{J}(\overline{u})=\inf\left\{\mathcal{J}(u): d\mathcal{J}_{|_{S_a }}(u)=0 \ \text{and}\ u\in S_a\right \}.\end{aligned}$$ The set of all ground state normalized solutions for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} will be denoted as $\mathcal{M}_a$.
The case of $\mu>0$ and $\frac{10}{3}<q <p< 6$ has already been considered in [@2020JMAA-CT], and the existence result for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} in this case is recalled here for the reader's convenience.
**Theorem 2**. *[@2020JMAA-CT Theorem 1.1] Let $\mu>0$ and $\frac{10}{3}<q <p< 6$. Assume that $a\in (0, \widetilde{\kappa})$ is small enough. Then there exists a mountain pass type solution $(\lambda_{q}, u_{q})\in \mathbb{R}^+\times \mathcal{H}_r$ for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with positive energy and $u_{q}>0$, where $J_q(u_q)= c_{q,r}>0$.*
Our main results in this paper are stated as follows:
**Theorem 3**. *Assume $\mu>0$, $q\in(2, \frac{8}{3} )$ and $p\in (\frac{10}{3}, 6)$. Let $a\in(0, \overline{a}_0)$ with $$\begin{aligned}
\overline{a}_0:=\min\left\{a_0, \ \bigg[ \frac{p(2-q\gamma_q)}{2C_{p}^p (p\gamma_p-q\gamma_q)}\bigg]^\frac{2-q\gamma_q}{\mathcal{B} }
\bigg[ \frac{q(p\gamma_p-2)}{2\mu C_{q}^q (p\gamma_p-q\gamma_q)}\bigg]^\frac{p\gamma_p-2}{ \mathcal{B}}\right\},
\end{aligned}$$ where $a_0$ will be defined in [\[k50\]](#k50){reference-type="eqref" reference="k50"} hereafter, $C_{p}$ is defined in [\[k102\]](#k102){reference-type="eqref" reference="k102"}, $\gamma_p=\frac{3(p-2)}{2p}$, $\gamma_q=\frac{3(q-2)}{2q}$ and $$\mathcal{B}=(q-q\gamma_q)(p\gamma_p-2)+(p-p\gamma_p)(2-q\gamma_q).$$ Then Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} possesses a real-valued ground state normalized solution $(\lambda, u)\in \mathbb{R}\times \mathcal{H}$ with $u>0$. Moreover, this ground state is a local minimizer of $\mathcal{J}$ in the set $D_{\rho_0}$ and any ground state normalized solution is a local minimizer of $\mathcal{J}$ on $D_{\rho_0}$, where $D_{\rho_0}$ is defined by [\[k91\]](#k91){reference-type="eqref" reference="k91"}.*
**Theorem 4**. *Assume $\mu>0$, $q\in(2, \frac{12}{5} ]$ and $p\in (\frac{10}{3}, 6)$. Let $a\in(0, \overline{a}_0)$ and $$\begin{aligned}
\label{k3}
a< \left(\frac{3}{4C_p^p}\right)^{\frac{1}{2p\gamma_p+p-6}}
\left(\frac{4}{4\gamma_p-1 }\right)^{\frac{p\gamma_p-1}{2p\gamma_p+p-6}}\Bigg(\frac{1- \gamma_p }{C_{\frac{12}{5}}^{\frac{12}{5}}}\Bigg)^{\frac{p\gamma_p-2 }{2p\gamma_p+p-6}}.
\end{aligned}$$ Then there exists a second solution of mountain pass type $( \widehat{\lambda}, \widehat{u})\in \mathbb{R}^+ \times \mathcal{H}_r$ for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\widehat{u}>0$.*
Here, we want to highlight the work [@2020Soave], in which Soave did pioneering work on Eq. [\[k110\]](#k110){reference-type="eqref" reference="k110"} with $\mu\in \mathbb{R}$ and $2<q\leq 2+\frac{4}{N}\leq p<2^*$ by using the Nehari-Pohožaev constraint. In particular, when $2<q< 2+\frac{4}{N}< p<2^*$ and $\mu>0$, since the energy functional $\mathcal{E}$ for Eq. [\[k110\]](#k110){reference-type="eqref" reference="k110"} is no longer bounded from below on $S_a$, he considered the minimization of $\mathcal{E}$ on subsets of $S_a$ and obtained two normalized solutions, denoted by $\overline{v}$ and $\overline{u}$, where $\overline{v}$ is an interior local minimizer of $\mathcal{E}$ on the set $\left\{u\in S_a:|\nabla u|_2<k \right\}$ for $k > 0$ small enough and $\overline{u}$ is of mountain pass type. To search for a local minimizer, with the help of the Nehari-Pohožaev manifold, a bounded Palais-Smale sequence $\{u_n\}\subset S_a\cap H^1(\mathbb{R}^N)$ is obtained by a standard argument. Using the Schwarz rearrangement, he obtained a bounded Palais-Smale sequence $\{u_n\}\subset S_a\cap H_r^1(\mathbb{R}^N)$. For the mountain pass type solution, he first established the mountain-pass geometry of $\mathcal{E}$ on $S_a\cap H_r^1(\mathbb{R}^N)$, and then constructed a special bounded Palais-Smale sequence $\{u_n\}\subset S_a\cap H_r^1(\mathbb{R}^N)$ at the mountain pass level. Since $H_r^1(\mathbb{R}^N)\hookrightarrow L^s(\mathbb{R}^N )$ with $s\in(2, 2^*)$ is compact, Soave proved that the associated Lagrange multiplier $\lambda >0$. In view of this fact, the compactness of the bounded Palais-Smale sequence can be proved. It is worth mentioning that the approaches in [@2020Soave] rely heavily on the compactness of $H_r^1(\mathbb{R}^N)\hookrightarrow L^s(\mathbb{R}^N )$ with $s\in(2, 2^*)$. We mention that Soave's methods in [@2020Soave] cannot be directly applied to prove our results, for two reasons. Firstly, the Schwarz rearrangements are invalid in our present paper due to the presence of the nonlocal term. Secondly, even working in $\mathcal{H}_r$, by using Soave's methods in [@2020Soave] directly we can only obtain the local minimizer of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"} for $q\in(2, \frac{12}{5} ]$ and $p\in (\frac{10}{3}, 6)$. Therefore, we will follow some ideas of [@2022-JMPA-jean; @2011JFA-BS] to cover the remaining range $q\in(\frac{12}{5}, \frac{8}{3} )$ and $p\in (\frac{10}{3}, 6)$ in the proof of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}.
**Theorem 5**. *Assume $\mu\leq0$, $q\in(2, \frac{8}{3} )$ and $p\in (\frac{10}{3}, 6)$. Then there exists some $\widetilde{a}_0>0$ such that Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} has a normalized solution $(\widetilde{\lambda}, \widetilde{u})\in\mathbb{R}^+ \times \mathcal{H}_r$ for any $a\in(0, \widetilde{a}_0)$.*
**Theorem 6**. *Assume $q\in(2, \frac{8}{3} )$ and $p=\overline{p}=\frac{10}{3}$, then*
1. *for any $\mu>0$, if $0<a\leq a^*:=\big(\frac{\overline{p}}{2 C_{\overline{p}}^{\overline{p}}}\big)^{\frac{3}{4}}$, it results that $e(a)=\inf_{S_a} \mathcal{J}<0$ and when $0<a< a^*$ the infimum is achieved by a real-valued function $u\in S_a$; moreover, when $a=a^*$, $q\in(2, \frac{12}{5} ]$ and $p=\frac{10}{3}$, the infimum $e(a^*)$ is achieved by a real-valued function $u\in S_a$;*
2. *for any $\mu>0$, if $a> a^*$, it holds that $\inf_{S_a} \mathcal{J}=-\infty$;*
3. *for any $\mu<0$, if $0<a\leq a^*:=\big(\frac{\overline{p}}{2 C_{\overline{p}}^{\overline{p}}}\big)^{\frac{3}{4}}$, it results that $\inf_{S_a} \mathcal{J}=0$ and Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} has no solution;*
4. *for any $\mu<0$, if $a> a^*$, it holds that $\inf_{S_a} \mathcal{J}=-\infty$.*
**Theorem 7**. *Let $\mu>0$, $q=\overline{q}=\frac{10}{3}$ and $p\in (\frac{10}{3}, 6)$. Assume that $$\begin{aligned}
\label{k201}
\bigg(\frac{1}{C_p^p \gamma_p a^{p(1-\gamma_p)}} \bigg)^{\frac{1}{p \gamma_p-2}} \Big(1-\frac{3}{5} C_{\overline{q}}^{\overline{q}}\mu a^{\frac{4}{3}}\Big)^{\frac{1}{p \gamma_p-2}}\frac{1}{a^3}\geq \frac{4\gamma_p-1}{4(1-\gamma_p)}C_{\frac{12}{5}}^{\frac{12}{5}}.\end{aligned}$$ Then there exists a mountain pass type solution $(\lambda_{\overline{q}}, u_{\overline{q}})\in \mathbb{R}^+\times \mathcal{H}_r$ for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, where $u_{\overline{q}} > 0$.*
**Remark 8**. The assumption $$\begin{aligned}
\bigg(\frac{1}{C_p^p \gamma_p a^{p(1-\gamma_p)}} \bigg)^{\frac{1}{p \gamma_p-2}} \Big(1-\frac{3}{5} C_{\overline{q}}^{\overline{q}}\mu a^{\frac{4}{3}}\Big)^{\frac{1}{p \gamma_p-2}}\frac{1}{a^3}\geq \frac{4\gamma_p-1}{4(1-\gamma_p)}C_{\frac{12}{5}}^{\frac{12}{5}}\end{aligned}$$ implies the condition $\mu a^{\frac{4}{3}}< \frac{\overline{q}}{2C_{\overline{q}}^{\overline{q}}}$.
**Remark 9**. To our knowledge, the latest progress in the study of normalized solutions to Schrödinger-Maxwell equations is due to [@2023IJM], which presents the existence of normalized solutions for the Schrödinger-Maxwell equation with a single power nonlinearity. There seem to be no existence results of normalized solutions for Schrödinger-Maxwell equations with combined power nonlinearities. Our results can be seen as an extension and improvement of existing results. Moreover, compared with [@2020Soave], our problems are more complex because of the presence of the nonlocal term, and the method of Schwarz rearrangement is invalid here. If we verify Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"} in a radial space, we can only obtain normalized solutions of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} for $\mu>0$, $q\in(2, \frac{12}{5} ]$ and $p\in (\frac{10}{3}, 6)$ by using the same technique as [@2020Soave], but we cannot obtain normalized solutions in the case of $q\in( \frac{12}{5}, \frac{8}{3} )$ and $p\in (\frac{10}{3}, 6)$. The difficulty lies in that the compactness of the Palais-Smale sequence cannot be easily verified when $q\in( \frac{12}{5}, \frac{8}{3} )$ and $p\in (\frac{10}{3}, 6)$, even in radial subspaces. To avoid dichotomy in Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, we follow the ideas of [@2011JFA-BS] to prove the strong subadditivity inequality. It should be noted that the authors in [@2011JFA-BS] studied Eq. [\[k93\]](#k93){reference-type="eqref" reference="k93"} with $p\in(2, 3)$, in which case the corresponding energy functional is bounded from below on the constraint $S_a$. However, the energy functional $\mathcal{J}$ under the assumptions of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"} is unbounded from below on $S_a$. In order to overcome this obstacle, we consider the minimization of $\mathcal{J}$ on the subset $S_a\cap D_{\rho_0}$, where $D_{\rho_0}$ is defined in [\[k91\]](#k91){reference-type="eqref" reference="k91"}. Note that $\rho_0$ does not depend on $a$, which is the key to applying the methods of [@2011JFA-BS]. Last but not least, for the case of $q\in(2, \frac{8}{3} )$, $p=\overline{p}=\frac{10}{3}$, $\mu>0$ and $a=a^*$, we deduce $-\infty<e(a)=\inf_{S_a} \mathcal{J}<0$, and then $e(a^*)$ is achieved when $\mu>0$, $q\in(2, \frac{12}{5} ]$ and $p=\frac{10}{3}$, which is different from [@2020Soave Theorem 1.1].
**Remark 10**. Since we cannot avoid dichotomy for the range $a=a^*$, $\mu>0$, $q\in(\frac{12}{5}, \frac{8}{3} )$ and $p=\frac{10}{3}$, we cannot say anything about the existence of normalized solutions to Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} in this case; the same applies to the case $\mu>0$, $q\in( \frac{12}{5}, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$ in Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"}. Furthermore, the existence or nonexistence of normalized solutions to Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\mu\in \mathbb{R}\backslash\{0\}$, $q\in( \frac{8}{3}, \frac{10}{3} )$ and $p\in[\frac{10}{3}, 6)$ remains an open problem.
Next, we verify the asymptotic behavior of normalized solutions as $\mu\rightarrow 0$ and $q\rightarrow \frac{10}{3}$.
**Theorem 11**. *Assume that $\mu>0$, $q\in(2, \frac{12}{5} ]$ and $p\in (\frac{10}{3}, 6)$, and further that $a\in(0, \min \{\overline{a}_0, \widetilde{a}_0\})$ and [\[k3\]](#k3){reference-type="eqref" reference="k3"} hold. Let $(\widehat{\lambda}_{\mu}, \widehat{u}_{\mu})\in \mathbb{R}^+ \times \mathcal{H}_r$ be the mountain pass type solution for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} obtained in Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"}. Then, up to a subsequence, $\widehat{u}_{\mu}\rightarrow \widehat{u}_{0}$ in $\mathcal{H}_{r}$ and $\widehat{\lambda}_{\mu}\rightarrow \widehat{\lambda}_{0}$ in $\mathbb{R}$ as $\mu\rightarrow0$, where $(\widehat{\lambda}_{0}, \widehat{u}_{0})\in \mathbb{R}^+ \times \mathcal{H}_r$ is a solution of $$\begin{aligned}
\begin{cases}
\displaystyle - \Delta u+\lambda u+ \big(|x|^{-1}\ast |u|^2\big)u=|u|^{p-2}u \ \ \mbox{in} \ \mathbb{ R}^3,\\
\displaystyle\int_{\mathbb{R}^3}|u|^2dx=a^2.\\
\end{cases}
\end{aligned}$$*
**Theorem 12**. *Assume that $\mu>0$, $\frac{10}{3}=\overline{q}<q<p<6$ and $a\in (0, \widetilde{\kappa})$ satisfying [\[k201\]](#k201){reference-type="eqref" reference="k201"}. Let $(\lambda_q, u_q) \in \mathbb{R}^+ \times \mathcal{H}_r$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $q>\overline{q}$ tending to $\overline{q}$ at the level $c_{q,r}$ obtained in Theorem [Theorem 2](#K-T1){reference-type="ref" reference="K-T1"}. Then, there is $(\lambda, u)\in \mathbb{R}^+ \times \mathcal{H}_r$ such that up to a subsequence, $u_q\rightarrow u$ in $\mathcal{H}$ and $\lambda_q\rightarrow \lambda$ in $\mathbb{R}$ as $q\rightarrow \overline{q}$, where $\mathcal{J}(u)= c_{\overline{q}, r}$ and $(\lambda, u)\in \mathbb{R}^+ \times \mathcal{H}_r$ solves Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $q=\overline{q}$.*
In what follows, we recall the notions of orbital stability and instability.
**Definition 13**. $Z\subset \mathcal{H}$ is stable if $Z\neq \emptyset$ and, for any $v \in Z$ and $\varepsilon>0$, there exists a $\delta > 0$ such that if $\varphi\in \mathcal{H}$ satisfies $\|\varphi - v\|<\delta$, then $u_{\varphi}(t)$ is globally defined and $\inf_{z\in Z}\|u_{\varphi}(t)-z\| <\varepsilon$ for all $t \in\mathbb{ R}$, where $u_{\varphi}(t)$ is the solution to Eq [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with the initial condition $\varphi$.
**Definition 14**. A standing wave $e^{i\lambda t}u$ is strongly unstable if, for every $\varepsilon> 0$, there exists $\varphi_0 \in \mathcal{H}$ such that $\|u- \varphi_0\|<\varepsilon$ and $\varphi(t,\cdot)$ blows-up in finite time, where $\varphi(t,\cdot)$ denotes the solution to Eq [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $\varphi_0$.
From Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, the set of all ground state normalized solutions for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} is given by $$\begin{aligned}
\mathcal{M}_a=\Big\{u\in \mathcal{H}: u\in D_{\rho_0}\ \mbox{and}\ \mathcal{J}(u)=m_a:= \inf_{D_{\rho_0} }\mathcal{J}(u)\Big\},\end{aligned}$$ where $D_{\rho_0}$ is defined in [\[k91\]](#k91){reference-type="eqref" reference="k91"}. Next, we shall focus on the stability of the ground state set $\mathcal{M}_a$.
**Theorem 15**. *Under the assumptions of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, $\mathcal{M}_a$ has the following characterization $$\begin{aligned}
\mathcal{M}_a=\big\{ e^{i \theta}|u|: \ \theta\in \mathbb{R}, \ |u|\in D_{\rho_0},\ \mathcal{J}(|u|)=m_a\ \text{and}\ |u| >0 \ \text{in}\ \mathbb{R}^3\big\}.\end{aligned}$$ Moreover, the set $\mathcal{M}_a$ is orbitally stable.*
We are also interested in the instability of standing waves obtained in Theorems [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"}, [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} and [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}.
**Theorem 16**. *Under the assumptions of Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} (or Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} or Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}), the standing wave $\psi(t, x) =e^{i \widehat{\lambda}t} \widehat{u}(x)$ (or $\psi(t, x) =e^{i \widetilde{\lambda} t} \widetilde{u}(x)$ or $\psi(t, x) =e^{i \lambda_{\overline{q}} t} u_{\overline{q}}(x)$) is strongly unstable.*
**Remark 17**. The proof of Theorem [Theorem 15](#K-TH3){reference-type="ref" reference="K-TH3"} relies on the classical Cazenave-Lions' stability argument introduced in [@1982-CMP-CL] and further developed in [@2004-ANS-Ha]. To prove Theorem [Theorem 16](#K-TH4){reference-type="ref" reference="K-TH4"}, we shall follow the original approach of Berestycki and Cazenave [@1981CRASSM-BC].
The organization of this paper is as follows. In Sec. [2](#sec2){reference-type="ref" reference="sec2"} and Sec. [3](#sec1){reference-type="ref" reference="sec1"}, we give some preliminaries and prove Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, respectively. Then we prove Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} in Sec. [4](#sec3){reference-type="ref" reference="sec3"}. The aim of Sec. [5](#sec6){reference-type="ref" reference="sec6"} is to show Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"}, and Sec. [6](#sec5){reference-type="ref" reference="sec5"} is devoted to show Theorem [Theorem 6](#K-TH7){reference-type="ref" reference="K-TH7"}. In Sec. [7](#sec7){reference-type="ref" reference="sec7"}, we give the proof of Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}. Finally, we show Theorems [Theorem 11](#K-TH6){reference-type="ref" reference="K-TH6"}, [Theorem 12](#K-TH14){reference-type="ref" reference="K-TH14"}, [Theorem 15](#K-TH3){reference-type="ref" reference="K-TH3"} and [Theorem 16](#K-TH4){reference-type="ref" reference="K-TH4"} in Sec. [8](#sec4){reference-type="ref" reference="sec4"}.
Now, we conclude this section with the following notations applied throughout this paper:
- $\mathcal{D}^{1,2}(\mathbb{R}^3):= \mathcal{D}^{1,2}(\mathbb{R}^3,\mathbb{C})$ is the usual Sobolev space with the norm $\|u\|_{\mathcal{D}^{1,2} }=\left(\int_{\mathbb{R}^3} |\nabla u|^2 dx\right)^{\frac12}$.
- $L^q(\mathbb{R}^3)= L^q(\mathbb{R}^3, \mathbb{C})$ is the Lebesgue space with the norm $|u|_q=\left(\int_{\mathbb{R}^3}|u|^qdx\right)^{\frac{1}{q}}$ for $q\in[1,+\infty)$.
- $\mathcal{H}^{-1}$ (or $\mathcal{H}_r^{-1}$) denotes the dual space of $\mathcal{H}$ (or $\mathcal{H}_r$) and $\mathbb{R}^+=(0, +\infty)$.
- $C$, $C_i, i=1,2, \ldots$, denote positive constants which may depend on $p$ and $q$ (but never on $a$ or $\mu$), and which may vary from place to place.
- $B_y(r):=\left\{x\in \mathbb{ R}^3:|x-y|<r\right\}$ for $y\in\mathbb{R}^3$ and $r>0$.
We also mention that, within a section, after having fixed the parameters $a$ and $\mu$ we may choose to omit the dependence of $\mathcal{J}_{a,\mu}$, $P_{a,\mu}$, $\mathcal{P}_{a,\mu}$, $\ldots$ on these quantities, writing simply $\mathcal{J}$, $P$, $\mathcal{P}$, $\ldots$.
# Preliminaries {#sec2}
Since $\mathcal{J}$ is unbounded from below on $S_a$, one cannot apply the minimization argument on the whole of $S_a$ any more. In this paper, we consider the minimization of $\mathcal{J}$ on a subset of $S_a$. For any $u\in S_a$, let $$(s\star u)(x):=e^{\frac{3s}{2}}u(e^sx).$$ By direct computation, one has $(s\star u)\in S_a$ and $$\begin{aligned}
\label{k8}
\psi_u(s):=\mathcal{J}(s\star u)
&=\frac{e^{2s}}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy \nonumber\\
&\quad-\frac{e^{p\gamma_ps}}{p}\int_{\mathbb{R}^3}|u|^{p}dx
-\frac{\mu e^{q\gamma_qs}}{q}
\int_{\mathbb{R}^3}|u|^{q}dx,\end{aligned}$$ where $\gamma_p=\frac{3(p-2)}{2p}$. Clearly, $$\begin{aligned}
p \gamma_p
\begin{cases}
>2 \ &\text{if}\ p>\frac{10}{3},\\
=2 \ &\text{if}\ p=\frac{10}{3},\\
<2 \ &\text{if}\ p<\frac{10}{3}.
\end{cases}\end{aligned}$$ It is easy to see that, if $u\in \mathcal{H}$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, then the following Nehari identity holds $$\begin{aligned}
\label{kk4}
\int_{\mathbb{R}^3}|\nabla u|^2dx+\lambda\int_{\mathbb{R}^3} |u|^2dx+ \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy =\int_{\mathbb{R}^3}|u|^{p}dx+\mu \int_{\mathbb{R}^3}|u|^{q}dx.\end{aligned}$$ Moreover, by [@2014-JFA-le Proposition 2.1], any solution $u$ of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} satisfies the following Pohožaev identity $$\begin{aligned}
\label{kkk4}
\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx +\frac{3\lambda }{2} \int_{\mathbb{R}^3} |u|^2dx
+\frac{5}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy=
\frac{3}{p}\int_{\mathbb{R}^3}|u|^{p}dx+\frac{ 3\mu}{q} \int_{\mathbb{R}^3}|u|^{q}dx.\end{aligned}$$ Hence, computing $\frac{3}{2}\times$[\[kk4\]](#kk4){reference-type="eqref" reference="kk4"}$-$[\[kkk4\]](#kkk4){reference-type="eqref" reference="kkk4"}, which eliminates $\lambda$, $u$ satisfies $$\begin{aligned}
\label{k4}
P(u):=\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx=0.
\end{aligned}$$ Usually, [\[k4\]](#k4){reference-type="eqref" reference="k4"} is called the Nehari-Pohožaev identity. In particular, [\[k4\]](#k4){reference-type="eqref" reference="k4"} is widely used in the literature to study the prescribed mass problem. In order to obtain normalized solutions for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, we introduce the following Nehari-Pohožaev constraint set: $$\begin{aligned}
\mathcal{P}=\left\{u\in S_a: P(u)=0\right\}.
\end{aligned}$$ By simple calculation, we have $$P(s\star u)=e^{2s}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p e^{p\gamma_ps} \int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q e^{q\gamma_qs}
\int_{\mathbb{R}^3}|u|^{q}dx.$$ Obviously, $\psi_u'(s)=P(s\star u)$. Hence, for any $u\in S_a$, $s\in \mathbb{R}$ is a critical point of $\psi_u(s)$ iff $s\star u\in \mathcal{P}$.
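The identity $\psi_u'(s)=P(s\star u)$ follows from the elementary scaling rules above; as a quick sanity check, the following SymPy sketch verifies it symbolically, abbreviating the four integrals by abstract symbols (an illustrative shorthand, not notation used elsewhere in this paper).

```python
# SymPy check of psi_u'(s) = P(s * u).  We write A = |grad u|_2^2,
# D = the double Coulomb integral, Fp = int |u|^p, Fq = int |u|^q.
import sympy as sp

s = sp.symbols('s', real=True)
A, D, Fp, Fq, mu, p, q, gp, gq = sp.symbols('A D Fp Fq mu p q gamma_p gamma_q', positive=True)

psi = sp.exp(2*s)/2*A + sp.exp(s)/4*D - sp.exp(p*gp*s)/p*Fp - mu*sp.exp(q*gq*s)/q*Fq
P   = sp.exp(2*s)*A + sp.exp(s)/4*D - gp*sp.exp(p*gp*s)*Fp - mu*gq*sp.exp(q*gq*s)*Fq

print(sp.simplify(sp.diff(psi, s) - P))   # prints 0, i.e. psi_u'(s) = P(s * u)
```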
The following classical Gagliardo-Nirenberg inequality, which can be found in [@1983W], plays an important role in this paper: for $2< t <6$, $$\begin{aligned}
\label{k10}
|u|_t\leq C_{t}|u|_2^{1-\gamma_t}|\nabla u|_2^{\gamma_t},\ \ \ \ \forall \ u\in \mathcal{H},\end{aligned}$$ where the sharp constant $C_{t}$ is defined by $$\begin{aligned}
\label{k102}
C_{t}^t=\frac{2t}{6-t}\left(\frac{6-t}{3(t-2)} \right)^{\frac{3(t-2)}{4}}\frac{1}{|Q_t|_2^{t-2}},
\end{aligned}$$ and $Q_t$ is the unique positive radial solution of equation $$-\Delta Q+Q=|Q|^{t-2}Q.$$
Now, we recall some properties of the operator $\int_{\mathbb{R}^3} \frac{|u(y)|^2} {|x-y|}dy$ in the following lemma.
**Lemma 18**. *(see [@2006JFA]) The following results hold*
- *$\int_{\mathbb{R}^3} \frac{|u(y)|^2} {|x-y|}dy\ge0$ for any $x\in{\mathbb{R}^{3}}$;*
- *there exist some constants $C_1,C_2>0$ such that $\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
\le{C_1|u|^4_\frac{12}{5}}\le{C_2\|u\|^4};$*
- *if $u_n\rightarrow u$ in $L^{\frac{12}{5}}(\mathbb{R}^3)$, then $\int_{\mathbb{R}^3} \frac{|u_n(y)|^2} {|x-y|}dy\rightarrow \int_{\mathbb{R}^3} \frac{|u(y)|^2} {|x-y|}dy$ in $\mathcal{D}^{1,2}(\mathbb{R}^{3});$*
- *if $u_n\rightharpoonup u$ in $\mathcal{H}$, then $\int_{\mathbb{R}^3} \frac{|u_n(y)|^2} {|x-y|}dy\rightharpoonup \int_{\mathbb{R}^3} \frac{|u(y)|^2} {|x-y|}dy$ in $\mathcal{D}^{1,2}(\mathbb{R}^{3})$.*
Define for $(a, t)\in \mathbb{R}^+\times \mathbb{R}^+$ the function $$\begin{aligned}
f(a,t)=\frac{1}{2}-\mu \frac{C_q^q}{q}a^{(1-\gamma_q)q}t^{\gamma_qq-2}
-\frac{C_p^p}{p}a^{(1-\gamma_p)p}t^{p\gamma_p-2}.\end{aligned}$$
**Lemma 19**. *Assume that $\mu>0$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Then for any fixed $a>0$, the function $f_a(t):=f(a, t)$ has a unique global maximum, and the maximum value satisfies $$\begin{aligned}
\label{0k50}
\max\limits_{t>0}f_a(t)
\begin{cases}
>0 &\text{if}\ a<a_0,\\
=0 &\text{if}\ a=a_0,\\
<0 &\text{if}\ a>a_0,
\end{cases}
\end{aligned}$$ where $$\begin{aligned}
\label{k50}
a_0=\left(\frac{1}{2K}\right)^{\frac{p\gamma_p-q\gamma_q}{(p-p\gamma_p)(2-q\gamma_q)+(q-q\gamma_q)(p\gamma_p-2)}}\end{aligned}$$ with $$\begin{aligned}
K=\left(\frac{\mu C_q^q}{q}\right)^{\frac{p\gamma_p-2 }{p\gamma_p-q\gamma_q }}\left[\frac{p(2-q\gamma_q)}{(p\gamma_p-2)C_p^p} \right]^{\frac{q\gamma_q-2 }{p\gamma_p-q\gamma_q}}+ \left(\frac{p}{C_p^p} \right)^{\frac{q\gamma_q-2 }{p\gamma_p-q\gamma_q}} \left[\frac{\mu(2-q\gamma_q)C_q^q}{q(p\gamma_p-2) } \right]^{\frac{p\gamma_p-2 }{p\gamma_p-q\gamma_q}}.\end{aligned}$$*
*Proof.* By definition of $f_a(t)$, we have $$f_a'(t)= \mu (2-q\gamma_q)\frac{C_q^q}{q}a^{(1-\gamma_q)q}t^{\gamma_qq-3}
-(p\gamma_p-2)\frac{C_p^p}{p}a^{(1-\gamma_p)p}t^{p\gamma_p-3}.$$ We know that the equation $f_a'(t)=0$ has a unique solution $$\begin{aligned}
\label{k90}
\rho_a=\left[\frac{\mu p(2-q\gamma_q)C_q^q}{q(p\gamma_p-2)C_p^p}a^{q(1-\gamma_q)-p(1-\gamma_p)} \right]^{\frac{1}{p\gamma_p-q\gamma_q}}.
\end{aligned}$$ By simple analysis, we obtain that $f_a(t)$ is increasing on $(0, \rho_a )$ and decreasing on $( \rho_a, +\infty )$, and $f_a(t)\rightarrow-\infty$ as $t\rightarrow 0$ and $f_a(t)\rightarrow-\infty$ as $t\rightarrow +\infty$, which implies that $\rho_a$ is the unique global maximum point of $f_a(t)$. Hence, the maximum value of $f_a(t)$ is $$\begin{aligned}
\max_{t>0}f_a(t)= f_a(\rho_a) =&\frac{1}{2}-\mu \frac{C_q^q}{q}a^{(1-\gamma_q)q}\rho_a^{\gamma_qq-2}
-\frac{C_p^p}{p}a^{(1-\gamma_p)p}\rho_a^{p\gamma_p-2}\\
=&\frac{1}{2}-\mu \frac{C_q^q}{q}a^{(1-\gamma_q)q}\left[\frac{\mu p(2-q\gamma_q)C_q^q}{q(p\gamma_p-2)C_p^p}a^{q(1-\gamma_q)-p(1-\gamma_p)} \right]^{\frac{\gamma_qq-2}{p\gamma_p-q\gamma_q}}\\
&-\frac{C_p^p}{p}a^{(1-\gamma_p)p}\left[\frac{\mu p(2-q\gamma_q)C_q^q}{q(p\gamma_p-2)C_p^p}a^{q(1-\gamma_q)-p(1-\gamma_p)} \right]^{\frac{p\gamma_p-2}{p\gamma_p-q\gamma_q}}\\
=&\frac{1}{2}-K a^{ \frac{(p-p\gamma_p)(2-q\gamma_q)+(q-q\gamma_q)(p\gamma_p-2)}{p\gamma_p-q\gamma_q}}.\end{aligned}$$ By the definition of $a_0$, we conclude [\[0k50\]](#0k50){reference-type="eqref" reference="0k50"}. Thus we complete the proof. ◻
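As a concrete illustration of the proof, the following SymPy sketch checks, for the sample exponents $q=\frac{5}{2}$ and $p=4$ and the illustrative choices $\mu=a=C_q=C_p=1$ (these constants are placeholders, not the sharp Gagliardo-Nirenberg constants), that $\rho_a$ given by [\[k90\]](#k90){reference-type="eqref" reference="k90"} is a critical point of $f_a$ and that it is a maximum.

```python
# Numerical illustration of Lemma 19 for q = 5/2, p = 4 (so q*gamma_q = 3/4 < 2
# and p*gamma_p = 3 > 2).  The constants mu = a = C_q = C_p = 1 are illustrative.
import sympy as sp

t = sp.symbols('t', positive=True)
q, p = sp.Rational(5, 2), sp.Integer(4)
gq, gp = 3*(q - 2)/(2*q), 3*(p - 2)/(2*p)
mu = a = Cq = Cp = sp.Integer(1)

f = sp.Rational(1, 2) - mu*Cq**q/q*a**((1 - gq)*q)*t**(gq*q - 2) \
    - Cp**p/p*a**((1 - gp)*p)*t**(gp*p - 2)

# rho_a from formula (k90)
rho = (mu*p*(2 - gq*q)*Cq**q / (q*(gp*p - 2)*Cp**p)
       * a**(q*(1 - gq) - p*(1 - gp)))**(1/(gp*p - gq*q))

print(sp.simplify(sp.diff(f, t).subs(t, rho)))   # 0: rho is a critical point
print(sp.diff(f, t, 2).subs(t, rho) < 0)         # True: it is a maximum
```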
**Lemma 20**. *Assume that $\mu>0$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Let $(\widehat{a}_1, \widehat{t}_{1})\in \mathbb{R}^+\times \mathbb{R}^+$ be such that $f(\widehat{a}_1, \widehat{t}_{1}) \geq0$. Then for any $\widehat{a}_2\in(0, \widehat{a}_1]$, it holds that $f(\widehat{a}_2, \widehat{t}_{2})\geq 0$ if $\widehat{t}_{2}\in\big[\frac{\widehat{a}_2}{\widehat{a}_1} \widehat{t}_{1}, \widehat{t}_{1}\big]$.*
*Proof.* Since the function $a \mapsto f(a, t)$ is non-increasing, we obtain that, for any $\widehat{a}_2\in(0, \widehat{a}_1]$, $$\begin{aligned}
\label{k76}
f(\widehat{a}_2, \widehat{t}_{1})\geq f(\widehat{a}_1, \widehat{t}_{1})\geq0.\end{aligned}$$ By a direct calculation, using that $(\widehat{a}_2/\widehat{a}_1)^{q-2}\leq 1$ and $(\widehat{a}_2/\widehat{a}_1)^{p-2}\leq 1$, we get $$\begin{aligned}
\label{k77}
f\big(\widehat{a}_2, \frac{\widehat{a}_2}{\widehat{a}_1} \widehat{t}_{1}\big)- f(\widehat{a}_1, \widehat{t}_{1})\geq0.\end{aligned}$$ Note that if $f(\widehat{a}_2, t')\geq 0$ and $f(\widehat{a}_2, t'' )\geq 0$, then $f(\widehat{a}_2, k)\geq 0$ for any $k\in [t', t'']$. Indeed, if $f(\widehat{a}_2, \widehat{t})<0$ for some $\widehat{t} \in [t', t'']$, then there exists a local minimum point on $(t', t'')$, which contradicts the fact that the function $f(\widehat{a}_2, t)$ has a unique global maximum by Lemma [Lemma 19](#KK-Lem2.1){reference-type="ref" reference="KK-Lem2.1"}. Hence, by [\[k76\]](#k76){reference-type="eqref" reference="k76"} and [\[k77\]](#k77){reference-type="eqref" reference="k77"}, taking $t'= \frac{\widehat{a}_2}{\widehat{a}_1} \widehat{t}_{1}$ and $t''= \widehat{t}_{1}$, we get the conclusion. This lemma is verified. ◻
**Lemma 21**. *Assume $\mu>0$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Let $a\in(0, \overline{a}_0)$, $\rho_0:=\rho_{a_0}$ with $\rho_{a_0}$ defined by [\[k90\]](#k90){reference-type="eqref" reference="k90"} and $$\begin{aligned}
h(t):=\frac{1}{2}t^2-\mu \frac{C_q^q}{q}a^{(1-\gamma_q)q}t^{\gamma_qq}
-\frac{C_p^p}{p}a^{(1-\gamma_p)p}t^{p\gamma_p},~~~~ \forall\ t\in \mathbb{R}^+.\end{aligned}$$ Then there exist $0 < R_0<\rho_0< R_1$, both depending on $a$ and $\mu$, such that $h(R_0) = 0 = h(R_1)$ and $h(t) > 0$ iff $t \in(R_0, R_1)$. Moreover, the function $h$ has a local strict minimum at a negative level and a global strict maximum at a positive level.*
*Proof.* From the analysis of Lemma [Lemma 19](#KK-Lem2.1){reference-type="ref" reference="KK-Lem2.1"}, we know that there exist $0 < R_0< R_1$, both depending on $a$ and $\mu$, such that $f_a(R_0) = 0 = f_a(R_1)$ and $f_a(t) > 0$ iff $t \in(R_0, R_1)$. Since $h(t)=t^2 f_a(t)$, the function $h$ satisfies $h(R_0) = 0 = h(R_1)$ and $h(t) > 0$ iff $t \in(R_0, R_1)$. Moreover, since $a \mapsto f(a, t)$ is decreasing and $f(a_0, \rho_0)=0$, we have $f(a, \rho_0)> f(\overline{a}_0, \rho_0)\geq f(a_0, \rho_0)=0$ when $a\in(0, \overline{a}_0)$. Hence, $\rho_0\in (R_0, R_1)$. The rest of the proof is similar to [@2020Soave Lemma 5.1], so we omit it. ◻
In the following, we split $\mathcal{P}$ into the disjoint union of three subsets $\mathcal{P}_{+}\cup \mathcal{P}_{-}\cup \mathcal{P}_{0}$, where $$\begin{aligned}
&\mathcal{P}_+:=\left\{u\in \mathcal{P}: \psi_u''(0)>0\right \}, \\
&\mathcal{P}_-:=\left\{u\in \mathcal{P}: \psi_u''(0)<0 \right\},\\
&\mathcal{P}_0:=\left\{u\in \mathcal{P}: \psi_u''(0)=0 \right\}.
\end{aligned}$$
**Lemma 22**. *Let $\mu>0$, $q\in(2, \frac{8}{3})$, $p\in (\frac{10}{3}, 6)$ and $a\in(0,\overline{a}_0)$. Then $\mathcal{P}_0=\emptyset$ and $\mathcal{P}$ is a smooth manifold of codimension 2 in $\mathcal{H}$.*
*Proof.* Suppose on the contrary that there exists $u\in \mathcal{P}_0$. By [\[k8\]](#k8){reference-type="eqref" reference="k8"} and [\[k4\]](#k4){reference-type="eqref" reference="k4"} one has $$\begin{aligned}
&P(u)=\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx=0,\label{k5}\\
&\psi_u''(0)=2\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-p\gamma_p^2\int_{\mathbb{R}^3}|u|^{p}dx
-\mu q \gamma_q^2\int_{\mathbb{R}^3}|u|^{q}dx=0.\label{k6}
\end{aligned}$$ By eliminating $|\nabla u|_2^2$ from [\[k5\]](#k5){reference-type="eqref" reference="k5"} and [\[k6\]](#k6){reference-type="eqref" reference="k6"}, we get $$\begin{aligned}
\label{k7}
\mu (2-q\gamma_q)\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx
= (p \gamma_p-2)\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
+\frac{1}{4}
\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|u(x)|^{2}|u(y)|^2}{|x-y|}dxdy.
\end{aligned}$$ On the one hand, it follows from [\[k5\]](#k5){reference-type="eqref" reference="k5"} and [\[k7\]](#k7){reference-type="eqref" reference="k7"} that $$\begin{aligned}
\label{k9}
\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\left(1-\frac{1}{2-q\gamma_q}\right)
\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
=\gamma_p\left(\frac{p\gamma_p-q\gamma_q}{2-q\gamma_q}\right)\int_{\mathbb{R}^3}|u|^{p}dx.\end{aligned}$$ Since $q\gamma_q<1$ and $p\gamma_p>q\gamma_q$, then by [\[k9\]](#k9){reference-type="eqref" reference="k9"}, $$\begin{aligned}
\label{k11}
|\nabla u|_2^2
\leq\gamma_p\left(\frac{p\gamma_p-q\gamma_q}{2-q\gamma_q}\right)\int_{\mathbb{R}^3}|u|^{p}dx
\leq C_p^p\gamma_p\left(\frac{p\gamma_p-q\gamma_q}{2-q\gamma_q}\right)a^{p(1-\gamma_p)}|\nabla u|_2^{p\gamma_p}.
\end{aligned}$$ On the other hand, by eliminating $\int_{\mathbb{R}^3}|u|^{p}dx$ from [\[k5\]](#k5){reference-type="eqref" reference="k5"} and [\[k7\]](#k7){reference-type="eqref" reference="k7"}, one obtains $$\begin{aligned}
\label{k12}
\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\left(1+\frac{1}{p\gamma_p-2}\right)\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
=\mu\gamma_q\left(\frac{p\gamma_p-q\gamma_q}{p\gamma_p-2}\right)\int_{\mathbb{R}^3}|u|^{q}dx.
\end{aligned}$$ Since $p\gamma_p>2$ and $p\gamma_p> q\gamma_q$, we deduce from [\[k12\]](#k12){reference-type="eqref" reference="k12"} that $$\begin{aligned}
\label{k13}
|\nabla u|_2^2
\leq\mu\gamma_q\left(\frac{p\gamma_p-q\gamma_q}{p\gamma_p-2}\right)\int_{\mathbb{R}^3}|u|^{q}dx
\leq\mu\gamma_q C_q^q\left(\frac{p\gamma_p-q\gamma_q}{p\gamma_p-2}\right) a^{q(1-\gamma_q)}|\nabla u|_2^{q\gamma_q}.
\end{aligned}$$ Hence, from [\[k11\]](#k11){reference-type="eqref" reference="k11"}, [\[k13\]](#k13){reference-type="eqref" reference="k13"} and the definition of $\overline{a}_0$, arguing as in [@2020Soave Lemma 5.2] we reach a contradiction (a sketch of the two incompatible bounds is given right after this proof). Thus, $\mathcal{P}_0=\emptyset$. Moreover, similarly to [@2020Soave Lemma 5.2], we obtain that $\mathcal{P}$ is a smooth manifold of codimension 2 in $\mathcal{H}$. Thus we complete the proof. ◻
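To make the contradiction more explicit, note that $|\nabla u|_2>0$ (since $u\in S_a$), so [\[k11\]](#k11){reference-type="eqref" reference="k11"} and [\[k13\]](#k13){reference-type="eqref" reference="k13"} respectively give a lower and an upper bound for $|\nabla u|_2$: $$\begin{aligned}
|\nabla u|_2\geq \left(C_p^p\gamma_p\Big(\frac{p\gamma_p-q\gamma_q}{2-q\gamma_q}\Big)a^{p(1-\gamma_p)}\right)^{-\frac{1}{p\gamma_p-2}}
\quad\quad\text{and}\quad\quad
|\nabla u|_2\leq \left(\mu\gamma_q C_q^q\Big(\frac{p\gamma_p-q\gamma_q}{p\gamma_p-2}\Big)a^{q(1-\gamma_q)}\right)^{\frac{1}{2-q\gamma_q}}.\end{aligned}$$ Since $p\gamma_p>2>q\gamma_q$ and $\gamma_p,\gamma_q<1$, the first bound blows up while the second one tends to $0$ as $a\rightarrow0^+$, and $\overline{a}_0$ is understood to be chosen, as in [@2020Soave Lemma 5.2], so that the two bounds are already incompatible for every $a\in(0,\overline{a}_0)$; this is the announced contradiction.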
**Lemma 23**. *Assume $\mu>0$ and $a\in(0, \overline{a}_0)$. Let $q\in(2, \frac{8}{3})$, $p\in (\frac{10}{3}, 6)$ and $\rho_0:=\rho_{a_0}$. Then, for any $u\in S_a$, the function $\psi_u$ has exactly two critical points $s_u < t_u$ and two zeros $c_u < d_u$, with $s_u < c_u < t_u < d_u$. Moreover,*
- *$s_u\star u\in \mathcal{P}_+$ and $t_u \star u\in \mathcal{P}_-$. If $s\star u\in \mathcal{P}$, then $s=s_u$ or $s=t_u$.*
- *$|\nabla (s \star u)|_2\leq R_0\leq \rho_0$ for every $s\leq c_u$, $\ln\big(\frac{\rho_0}{|\nabla u|_2}\big)< \ln\big(\frac{R_1}{|\nabla u|_2}\big)\leq d_u$ and $$\begin{aligned}
\mathcal{J}(s_u \star u)&=\min\left\{ \mathcal{J}(s\star u):s\in \mathbb{R} \ \text{and}\ |\nabla (s\star u)|_2\leq R_0 \right\}\\
&=\min\left\{ \mathcal{J}(s\star u):s\in \mathbb{R} \ \text{and}\ |\nabla (s\star u)|_2\leq \rho_0 \right\}\\
&<0.
\end{aligned}$$*
- *$\mathcal{J}(t_u\star u)=\max\left\{ \mathcal{J}(s\star u):s\in \mathbb{R} \right\}>0$, and $\psi_u$ is strictly decreasing and concave on $(t_u,+\infty)$. In particular, if $t_u < 0$, then $P(u) <0$.*
- *$u\in S_a\mapsto s_u\in \mathbb{R}$ and $u\in S_a\mapsto t_u\in \mathbb{R}$ are of class $C^1$.*
*Proof.* Since $s\star u\in \mathcal{P}\Leftrightarrow \psi_u'(s)=0$, then we first show that $\psi_u$ has at least two critical points. Note $$J(u)\geq \frac{1}{2}|\nabla u|_2^2-\mu \frac{C_q^q}{q} a^{q(1-\gamma_q)}|\nabla u|_2^{q\gamma_q}-\frac{C_p^p}{p} a^{p(1-\gamma_p)}|\nabla u|_2^{p\gamma_p}:=h(|\nabla u|_2),$$ then $$\psi_u(s)=J(s\star u)\geq h(|\nabla (s\star u)|_2)=h(e^s|\nabla u|_2).$$ By Lemma [Lemma 21](#K-Lem2.1){reference-type="ref" reference="K-Lem2.1"}, we have $h(t)>0\Leftrightarrow t\in(R_0, R_1)$, which implies $\psi_u>0$ on $(\log(\frac{R_0}{|\nabla u|_2}), \log(\frac{R_1}{|\nabla u|_2}))$. Furthermore, $\psi_u(-\infty)=0^-$ and $\psi_u(+\infty)=-\infty$. Hence, $\psi_u$ has at least two critical points $s_u<t_u$, where $s_u$ is a local minimum point on $(-\infty,\log(\frac{R_0}{|\nabla u|_2}))$ at negative level, and $t_u > s_u$ is a global maximum point at positive level. Next, we prove that $\psi_u$ has at most two critical points. In other words, we show that $\psi_u'(s)=0$ has at most two solutions. Let $\psi_u'(s)=0$, namely, $$e^{s}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p e^{(p\gamma_p-1)s} \int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q e^{(q\gamma_q-1)s}
\int_{\mathbb{R}^3}|u|^{q}dx=0.$$ Define $$f_1(s)=\gamma_p e^{(p\gamma_p-1)s} \int_{\mathbb{R}^3}|u|^{p}dx
+\mu\gamma_q e^{(q\gamma_q-1)s}
\int_{\mathbb{R}^3}|u|^{q}dx-e^{s}\int_{\mathbb{R}^3}|\nabla u|^2dx,$$ then $$f_1'(s)=e^sf_2(s),$$ where $$f_2(s)=\gamma_p(p\gamma_p-1)e^{(p\gamma_p-2)s} \int_{\mathbb{R}^3}|u|^{p}dx+\mu\gamma_q(q\gamma_q-1)e^{(q\gamma_q-2)s}
\int_{\mathbb{R}^3}|u|^{q}dx-\int_{\mathbb{R}^3}|\nabla u|^{2}dx.$$ Clearly, $f_2'(s)>0$ for all $s\in \mathbb{R}$, then $f_2$ is strictly increasing on $\mathbb{R}$. Moreover, since $f_2(-\infty)=-\infty$ and $f_2(+\infty)=+\infty$, there exists a unique $\widetilde{s}\in \mathbb{R}$ such that $f_2(\widetilde{s})=0$, and then $f_1'(\widetilde{s})=0$. Furthermore, $f_1$ is strictly decreasing on $(-\infty, \widetilde{s})$ and is strictly increasing on $( \widetilde{s}, +\infty)$. Since $f_1(-\infty)=+\infty$ and $f_1(+\infty)=+\infty$, then the equation $$f_1(s)= \frac{1}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy$$ has at most two solutions. That is, $\psi_u'(s)=0$ has at most two solutions. Therefore, $\psi_u$ has exactly two critical points $s_u< t_u\in \mathbb{R}$. Moreover, $\psi_{s_u\star u}''(0)=\psi_{ u}''(s_u)\geq 0$ and $\mathcal{P}_0=\emptyset$ imply $s_u\star u\in \mathcal{P}_+$. Similarly, $t_u\star u\in \mathcal{P}_-$. The remains are similar to the proof of [@2020Soave Lemma 5.3]. We finish the proof. ◻
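For later use, we also record the explicit expression of the fiber map $\psi_u$ under the dilation $s\star u=e^{\frac{3}{2}s}u(e^{s}\cdot)$ (the normalization written explicitly in the Mountain Pass argument below); a direct change of variables gives $$\begin{aligned}
\psi_u(s)=\mathcal{J}(s\star u)=\frac{e^{2s}}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{e^{s}}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\frac{e^{p\gamma_p s}}{p}\int_{\mathbb{R}^3}|u|^{p}dx-\mu\frac{e^{q\gamma_q s}}{q}\int_{\mathbb{R}^3}|u|^{q}dx.\end{aligned}$$ Differentiating in $s$ and dividing by $e^{s}$ recovers the equation $\psi_u'(s)=0$ displayed in the proof above.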
Let $A_k:=\left\{u\in \mathcal{H}: |\nabla u|_2\leq k\right\}$ and $$\begin{aligned}
\label{k91}
D_{\rho_0}=A_{\rho_0}\cap S_a.\end{aligned}$$ We shall consider the local minimization problem $$m(a):=\inf_{D_{\rho_0}}\mathcal{J}(u).$$ From Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"}, we have
**Corollary 24**. *Assume that $\mu>0$, $a\in(0, \overline{a}_0)$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Then $\mathcal{P}_+\subset D_{\rho_0}$ and $\sup\limits_{\mathcal{P}_+} \mathcal{J}(u)\leq 0\leq \inf\limits_{\mathcal{P}_-} \mathcal{J}(u)$.*
**Lemma 25**. *Let $\mu>0$, $a\in(0, \overline{a}_0)$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Then $-\infty<m(a)<0<\inf\limits_{\partial D_{\rho_0}}\mathcal{J}(u)$ and $m(a)=\inf\limits_{\mathcal{P}}\mathcal{J}(u)=\inf\limits_{\mathcal{P}_+}\mathcal{J}(u)$. Moreover, $m(a)<\inf\limits_{\overline{D_{\rho_0}}\backslash D_{\rho_0-\epsilon}} \mathcal{J}(u)$ for $\epsilon>0$ small enough.*
*Proof.* The proof is similar to [@2022-JMPA-jean Lemma 2.4] and [@2020Soave Lemma 5.3], so we omit the details. ◻
# The local minimizer of the case $\mu>0$, $q\in(2,\frac{8}{3})$ and $p\in(\frac{10}{3},6)$ {#sec1}
To solve the case of $\mu>0$, $q\in(2, \frac{8}{3} )$ and $p\in (\frac{10}{3}, 6)$, we use the ideas of [@2011JFA-BS]. We first present two definitions; they differ from those in [@2011JFA-BS] since we do not consider the minimization problem directly on $S_a$ as is done there.
**Definition 26**. Let $u\in \mathcal{H}$ and $u\neq0$. A continuous path $g_u: \theta\in \mathbb{R}^+\mapsto g_u(\theta)\in \mathcal{H}$ such that $g_u(1)=u$ is said to be a scaling path of $u$ if $|\nabla g_u(\theta)|_2^2\rightarrow |\nabla u|_2^{2}$ as $\theta\rightarrow 1$, $\Theta_{g_u}(\theta):=|g_u(\theta)|_2^2|u|_2^{-2}$ is differentiable and $\Theta_{g_u}'(1)\neq 0$. We denote by $\mathcal{G}_u$ the set of scaling paths of $u$.
The set $\mathcal{G}_u$ is nonempty. For example, $g_u( \theta) = \theta u\in \mathcal{G}_u$, since $\Theta_{g_u}(\theta)= \theta^ 2$. Also $g_u( \theta)= u( \frac{x}{\theta })$ is an element of $\mathcal{G}_u$ since $\Theta_{g_u}(\theta)= \theta^3$. As we will see, it is relevant to consider the family of scaling paths of $u$ parametrized with $\iota\in \mathbb{R}$ given by $$\begin{aligned}
%\label{k92}
\mathcal{G}_u^{\iota}=\left\{g_u(\theta): g_u(\theta)=\theta^{1-\frac{3}{2}\iota} u(\frac{x}{\theta^{\iota}}) \right\}\subset \mathcal{G}_u.\end{aligned}$$
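For the family $\mathcal{G}_u^{\iota}$, a direct change of variables (a sketch; we use the convention $\gamma_r=\frac{3(r-2)}{2r}$ for the Gagliardo–Nirenberg exponents, which is consistent with the exponent arithmetic used throughout) gives, for $g_u(\theta)=\theta^{1-\frac{3}{2}\iota} u(\cdot/\theta^{\iota})$, $$\begin{aligned}
&|g_u(\theta)|_2^2=\theta^{2}|u|_2^2,\quad\quad |\nabla g_u(\theta)|_2^2=\theta^{2-2\iota}|\nabla u|_2^2,\quad\quad |g_u(\theta)|_r^r=\theta^{\,r-\iota r\gamma_r}|u|_r^r\ \ (r=p,q),\\
&\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|g_u(\theta)(x)|^{2}|g_u(\theta)(y)|^2} {|x-y|}dxdy=\theta^{4-\iota}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy.\end{aligned}$$ In particular $\Theta_{g_u}(\theta)=\theta^2$ and $|\nabla g_u(\theta)|_2^2\rightarrow|\nabla u|_2^2$ as $\theta\rightarrow1$, so indeed $\mathcal{G}_u^{\iota}\subset \mathcal{G}_u$. These identities will also be used in the proof of Lemma 32 below.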
**Definition 27**. Let $u\neq0$ be fixed and $g_u\in \mathcal{G}_u$. We say that the scaling path $g_u$ is admissible for the functional $\mathcal{J}$ if $h_{g_u}$ is a differentiable function, where $h_{g_u}(\theta)= \mathcal{J}(g_u(\theta) )- \Theta_{g_u}(\theta)\mathcal{J}(u)$ for all $\theta\in \mathbb{R}^+$.
The following result is crucial for handling the dichotomy case.
**Lemma 28**. *Assume that $$\begin{aligned}
&-\infty<m(s)<0\ \text{for all} \ s\in(0,\overline{a}_0),\label{k51}\\
&s\in(0,\overline{a}_0)\mapsto m(s)\ \text{is continuous},\label{k52}\\
&\lim_{s\rightarrow 0}\frac{m(s)}{s^2}=0.\label{k53}
\end{aligned}$$ Then for any $a\in (0, \overline{a}_0)$, there exists $a_1\in(0, a]$ such that $m(a_1)<m(\beta)+m(\sqrt{a_1^2-\beta^2})$ for any $\beta\in (0, a_1)$.*
*Proof.* Let $a\in (0, \overline{a}_0)$ be fixed and define $$a_1=\min\left\{s\in[0,a]: \frac{m(s)}{s^2}=\frac{m(a)}{a^2}\right\}.$$ From [\[k52\]](#k52){reference-type="eqref" reference="k52"} and [\[k53\]](#k53){reference-type="eqref" reference="k53"}, $a_1>0$ follows. We claim that the function $s\mapsto \frac{m(s)}{s^2}$ achieves its minimum on the interval $[0, a_1]$ only at $s=a_1$. If the claim holds, then for any $\beta\in (0, a_1)$, we get $\frac{\beta^2}{a_1^2}m(a_1)<m(\beta)$ and $\frac{a_1^2-\beta^2}{a_1^2}m(a_1)< m(\sqrt{a_1^2-\beta^2})$. Hence, $$m(a_1)=\frac{\beta^2}{a_1^2}m(a_1)+\frac{a_1^2-\beta^2}{a_1^2}m(a_1)<m(\beta)
+m\Big(\sqrt{a_1^2-\beta^2}\Big).$$ We now verify the claim. Suppose that the minimum is achieved at some $a_{\ast}<a_1$, so that $\frac{m(a_{\ast})}{a_{\ast}^2}\leq \frac{m(a_{1})}{a_{1}^2}= \frac{m(a)}{a^2}<0$. If equality holds, this already contradicts the minimality in the definition of $a_1$. If the inequality is strict, then, since $\frac{m(s)}{s^2}\rightarrow 0$ as $s\rightarrow 0$ by [\[k53\]](#k53){reference-type="eqref" reference="k53"} and $s\mapsto\frac{m(s)}{s^2}$ is continuous by [\[k52\]](#k52){reference-type="eqref" reference="k52"}, the intermediate value theorem yields $\overline{a}<a_{\ast}<a_1$ such that $\frac{m(\overline{a})}{{\overline{a}}^2}=\frac{m(a_{1})}{a_{1}^2}= \frac{m(a)}{a^2}$, which again contradicts the definition of $a_1$. Therefore, we complete the proof. ◻
To obtain the strong subadditivity inequality $m(a)<m(a_1)+m\big(\sqrt{a^2-a_1^2}\big)$ for any $a_1\in (0, a)$, in the following we will show that the function $s\mapsto \frac{m(s)}{s^2}$ is monotone decreasing for $s\in(0, a]$.
**Lemma 29**. *(Avoiding dichotomy) Let [\[k51\]](#k51){reference-type="eqref" reference="k51"}, [\[k52\]](#k52){reference-type="eqref" reference="k52"} and [\[k53\]](#k53){reference-type="eqref" reference="k53"} hold. Then for any $a\in(0, \overline{a}_0)$, the set $$\begin{aligned}
M(a)=\cup_{\rho\in (0, a]}\left\{ u\in S_{\rho}\cap A_{\rho_0}: \mathcal{J}(u)=m(\rho)\right\}\neq\emptyset.
\end{aligned}$$ If in addition, $$\begin{aligned}
\label{k54}
\forall \ u\in M(a),\ \exists\ g_u\in \mathcal{G}_u\ \text{admissible, such that}\ \frac{d}{d \theta}h_{g_u}(\theta)|_{\theta=1}\neq 0,\end{aligned}$$ then the function $s\mapsto \frac{m(s)}{s^2}$ is monotone decreasing for $s\in(0, a]$.*
*Proof.* By Lemma [Lemma 28](#K-Lem2.7){reference-type="ref" reference="K-Lem2.7"}, for any $a\in (0, \overline{a}_0)$, there exists $a_1\in(0, a]$ such that for any $\beta\in (0, a_1)$, $$\begin{aligned}
\label{k59}
m(a_1)<m(\beta)+m\Big(\sqrt{a_1^2-\beta^2}\Big).\end{aligned}$$ We claim that $$\begin{aligned}
\label{k55}
\left\{u\in S_{a_1}\cap A_{\rho_0}: \mathcal{J}(u)=m(a_1)\right\}\neq\emptyset.\end{aligned}$$ Indeed, taking a minimizing sequence $\{u_n\}\subset S_{a_1}\cap A_{\rho_0}$ of $m(a_1)$, clearly, $\{u_n\}$ is bounded in $\mathcal{H}$. Hence, there exists some $u\in \mathcal{H}$ satisfying $$\begin{aligned}
&u_n\rightharpoonup u \quad\quad\quad\quad \text{in}\ \mathcal{H};\\
& u_n\rightarrow u \quad\quad \quad\quad\text{in}\ L_{loc}^t(\mathbb{R}^3), \ \forall\ t\in(2,6);\\
&u_n(x)\rightarrow u(x) \quad \ \mbox{a.e.} \ \text{in}\ \mathbb{R}^3.
\end{aligned}$$ If $u\neq0$, the minimizing sequence $\{u_n\}$ is the desired one. If $u=0$, then one of the following two cases occurs: $$\begin{aligned}
&(\mbox{i})\ \lim_{n\rightarrow\infty}\sup_{y\in \mathbb{ R}^3 }\int_{B_y{(1)}}|u_n|^2dx=0,\\
&(\mbox{ii}) \ \lim_{n\rightarrow\infty}\sup_{y\in \mathbb{ R}^3 }\int_{B_y{(1)}}|u_n|^2dx\geq\delta>0\ \text{for some}\ \delta>0.\end{aligned}$$ If $(\mbox{i})$ holds, by the well-known Lions' lemma [@1983Wi], $u_n \rightarrow 0$ in $L^t (\mathbb{R}^3)$ for any $t\in(2, 6)$ and so $$m(a_1)=\lim_{n\rightarrow\infty}\mathcal{ J}(u_n)=\lim_{n\rightarrow\infty}\left[\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u_n|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy\right]\geq 0,$$ which contradicts $m(a_1)<0$ from [\[k51\]](#k51){reference-type="eqref" reference="k51"}. Thus, $(\mbox{ii})$ holds. In this case we choose $\{y_n\} \subset\mathbb{ R}^3$ such that $$\int_{B_0(1)}|u_n(\cdot+y_n)|^2dx\geq \frac{\delta}{2}>0.$$ Moreover, the sequence $\{u_n(\cdot+y_n)\}\subset S_{a_1}\cap A_{\rho_0}$ is still a minimizing sequence for $m(a_1)$. Hence, there exists $\widehat{u}\in \mathcal{H}$ such that $\widehat{u}\neq 0$ and $$\begin{aligned}
&u_n(\cdot+y_n)\rightharpoonup \widehat{u} \quad\quad\quad\quad \text{in}\ \mathcal{H};\\
& u_n(\cdot+y_n)\rightarrow \widehat{u} \quad\quad \quad\quad\text{in}\ L_{loc}^t(\mathbb{R}^3),\ \forall\ t\in(2,6);\\
&u_n(x+y_n)\rightarrow \widehat{u}(x) \quad \quad \ \mbox{a.e.} \ \text{in}\ \mathbb{R}^3.
\end{aligned}$$ Thus we find a minimizing sequence $\{v_n\}\subset S_{a_1}\cap A_{\rho_0}$ of $m(a_1)$ such that there is $v\in \mathcal{H}\backslash\{0\}$ satisfying $$\begin{aligned}
v_n\rightharpoonup v \quad\quad\quad\quad &\text{in}\ \mathcal{H};\\
v_n\rightarrow v \quad\quad \quad\quad &\text{in}\ L_{loc}^t(\mathbb{R}^3)\ \text{with}\ t\in(2,6);\\
v_n(x)\rightarrow v(x) \quad \ &\mbox{a.e.} \ \text{in}\ \mathbb{R}^3.
\end{aligned}$$ Let $w_n=v_n-v$, then $w_n\rightharpoonup 0$ in $\mathcal{H}$. It follows from Brézis-Lieb lemma [@1983Wi] that, as $n\rightarrow\infty$, $$\begin{aligned}
&\|w_n\|^2=\|v_n\|^2-\|v\|^2+o_n(1);\\
&|w_n|_s^s=|v_n|_s^s-|v|_s^s+o_n(1),\end{aligned}$$ where $s\in [2,6]$. Then, by using [@2008-JMAA-ZHAO Lemma 2.2] we obtain, as $n\rightarrow\infty$, $$\begin{aligned}
\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|w_n(x)|^2|w_n(y)|^2} {|x-y|}dxdy = \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v_n(x)|^2|v_n(y)|^2} {|x-y|}dx dy -\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v(x)|^2|v(y)|^2} {|x-y|}dxdy+o_n(1).\end{aligned}$$ Hence, we have $$\begin{aligned}
\label{k56}
\mathcal{J}(v_n)=\mathcal{J}(w_n)+\mathcal{J}(v)+o_n(1).
\end{aligned}$$ Let $|v|_2=a_2\in(0, a_1]$. If $a_2\in(0, a_1)$, then $v\in S_{a_2}\cap A_{\rho_0}$. Hence, $$\begin{aligned}
\label{k57}
\mathcal{J}(v)\geq m(a_2).
\end{aligned}$$ Besides, since $|w_n|_2=|v_n-v|_2\rightarrow \sqrt{a_1^2-a_2^2}>0$ as $n\rightarrow\infty$ and $|\nabla w_n|_2\leq \rho_0$ for $n$ large enough, then $w_n\in S_{|w_n|_2}\cap A_{\rho_0}$, and we deduce from [\[k52\]](#k52){reference-type="eqref" reference="k52"} that $$\begin{aligned}
\label{k58}
\lim_{n\rightarrow\infty}\mathcal{J}(w_n)\geq \lim_{n\rightarrow\infty} m(|w_n|_2)= m\Big(\sqrt{a_1^2-a_2^2}\Big).
\end{aligned}$$ From [\[k56\]](#k56){reference-type="eqref" reference="k56"}-[\[k58\]](#k58){reference-type="eqref" reference="k58"}, we deduce that $$\begin{aligned}
m(a_1)=\lim_{n\rightarrow\infty}\mathcal{J}(v_n)
=\lim_{n\rightarrow\infty}\mathcal{J}(w_n)+\mathcal{J}(v)\geq m\Big(\sqrt{a_1^2-a_2^2}\Big)+m(a_2),
\end{aligned}$$ which contradicts [\[k59\]](#k59){reference-type="eqref" reference="k59"}. Consequently, $a_2=a_1$. Then we have $v\in S_{a_1}\cap A_{\rho_0}$ and so $$m(a_1)\leq \mathcal{J}(v)\leq \lim_{n\rightarrow\infty}\mathcal{J}(v_n)=m(a_1),$$ which implies that $\mathcal{J}(v)= m(a_1)$ and $\|v_n-v\|\rightarrow 0$ as $n\rightarrow\infty$. Hence, [\[k55\]](#k55){reference-type="eqref" reference="k55"} holds. In particular, $M(a)\neq \emptyset$.
To prove that the function $s\mapsto \frac{m(s)}{s^2}$ is monotone decreasing for $s\in(0, a]$, we only need to show that, for every $b\in(0, a]$, the function $s\mapsto \frac{m(s)}{s^2}$ achieves its unique minimum on the interval $[0, b]$ at $s=b$. Let $b\in(0, a]$ be fixed and $c:=\min_{[0, b]}\frac{m(s)}{s^2}<0$ by [\[k51\]](#k51){reference-type="eqref" reference="k51"}. Let $$b_0:=\min\Big\{ s\in[0,b]: \frac{m(s)}{s^2}=c\Big\}.$$ We have to prove that $b_0=b$. It follows from [\[k51\]](#k51){reference-type="eqref" reference="k51"}, [\[k52\]](#k52){reference-type="eqref" reference="k52"} and [\[k53\]](#k53){reference-type="eqref" reference="k53"} that $b_0>0$ and $$\begin{aligned}
%\label{k60}
\frac{m(b_0)}{b_0^2}< \frac{m(s)}{s^2} \quad \text{for all}\ s\in[0, b_0).\end{aligned}$$ That is, the function $s\mapsto \frac{m(s)}{s^2}$ in the interval $[0, b_0]$ achieves its unique minimum in $s=b_0$. Thus, for any $b_1\in (0, b_0)$, we get $\frac{b_1^2}{b_0^2}m(b_0)<m(b_1)$ and $\frac{b_0^2-b_1^2}{b_0^2}m(b_0)< m\big(\sqrt{b_0^2-b_1^2}\big)$. Hence, for any $b_1\in (0, b_0)$, $$m(b_0)=\frac{b_1^2}{b_0^2}m(b_0)+\frac{b_0^2-b_1^2}{b_0^2}m(b_0)<m(b_1)+m\Big(\sqrt{b_0^2-b_1^2}\Big).$$ Similar to the proof of [\[k55\]](#k55){reference-type="eqref" reference="k55"}, we get $$\begin{aligned}
\left\{u\in S_{b_0}\cap A_{\rho_0}: \mathcal{J}(u)=m(b_0)\right\}\neq\emptyset.
\end{aligned}$$ Hence, there exists $w\in S_{b_0}\cap A_{\rho_0}$ such that $\mathcal{J}(w)=m(b_0)$. In particular, $w\in M(a)$. Now assume that $b_0 <b$. Fix $g_w\in \mathcal{G}_w$ with its associated $\Theta(\theta)$. By Definition [Definition 26](#K-def1){reference-type="ref" reference="K-def1"}, for any $\varepsilon\in(0, \epsilon)$, where $\epsilon$ is given in Lemma [Lemma 25](#K-Lem2.5){reference-type="ref" reference="K-Lem2.5"}, there exists $\delta_{\varepsilon}'>0$ such that when $|\theta-1|< \delta_{\varepsilon}'$, $$\begin{aligned}
\left|\Theta(\theta) b_0^2-b_0^2 \right|=\left |\Theta(\theta) |w|_2^2-|w|_2^2 \right|=\left| |g_w(\theta)|_2^2- |w|_2^2\right|< \varepsilon.
\end{aligned}$$ Since $\frac{m(b_0)}{b_0^2}=c\leq \frac{m(s)}{s^2}$ for every $s\in(0, b]$, $b_0<b$, and (up to further shrinking $\varepsilon$) $|g_w(\theta)|_2=\sqrt{\Theta(\theta)}\,b_0\leq b$ for $|\theta-1|<\delta_{\varepsilon}'$, we have $$\begin{aligned}
\frac{m(b_0)}{b_0^2}\leq \frac{m\big(\sqrt{\Theta(\theta)}\, b_0\big)}{\Theta(\theta) b_0^2}\quad\quad \text{for all}\ \theta\in(1-\delta_{\varepsilon}', 1+\delta_{\varepsilon}').
\end{aligned}$$ Moreover, for any $\varepsilon\in(0, \epsilon)$, there is $\delta_{\varepsilon}''>0$ such that when $|\theta-1|< \delta_{\varepsilon}''$, $$\begin{aligned}
\left||\nabla g_w(\theta)|_2^2-|\nabla w |_2^2\right|<\varepsilon,
\end{aligned}$$ which, together with Lemma [Lemma 25](#K-Lem2.5){reference-type="ref" reference="K-Lem2.5"}, shows that $$|\nabla g_w(\theta)|_2^2\leq |\nabla w |_2^2+\varepsilon\leq \rho_0-\epsilon+\varepsilon\leq \rho_0 \quad\quad\text{for all}\ \theta\in(1-\delta_{\varepsilon}'', 1+\delta_{\varepsilon}'').$$ Therefore, choosing $\delta_{\varepsilon}=\min\{\delta_{\varepsilon}', \delta_{\varepsilon}'' \}$, one has $$\begin{aligned}
\frac{\mathcal{J}(g_w(\theta))}{\Theta(\theta) b_0^2 }\geq\frac{m\big(\sqrt{\Theta(\theta)}\, b_0\big)}{\Theta(\theta) b_0^2}\geq \frac{m(b_0)}{b_0^2}=\frac{\mathcal{J}(w)}{b_0^2}
\quad\quad \text{for all}\ \theta\in(1-\delta_{\varepsilon}, 1+\delta_{\varepsilon}),
\end{aligned}$$ which gives that $h_{g_w}(\theta)=\mathcal{J}(g_w(\theta))- \Theta(\theta) \mathcal{J}(w)$ is nonnegative on $(1-\delta_{\varepsilon}, 1+\delta_{\varepsilon})$ and attains a minimum at $\theta=1$ on this interval with $h_{g_w}(1)=0$. Then we get $\frac{dh_{g_w}(\theta)}{d \theta}|_{\theta=1}=0$. Since $g_w$ is arbitrary, this relation has to be true for every map $g_w$, and we obtain a contradiction with [\[k54\]](#k54){reference-type="eqref" reference="k54"}. Hence, $b_0=b$. That is, for every $b\in(0, a]$, the function $s\mapsto \frac{m(s)}{s^2}$ achieves its unique minimum on the interval $[0, b]$ at $s=b$. Hence, $s\mapsto \frac{m(s)}{s^2}$ is monotone decreasing for $s\in(0, a]$. We complete the proof. ◻
According to Lemma [Lemma 25](#K-Lem2.5){reference-type="ref" reference="K-Lem2.5"}, it is easy to see that [\[k51\]](#k51){reference-type="eqref" reference="k51"} holds. In what follows, we check that [\[k52\]](#k52){reference-type="eqref" reference="k52"}, [\[k53\]](#k53){reference-type="eqref" reference="k53"} and [\[k54\]](#k54){reference-type="eqref" reference="k54"} are valid.
**Lemma 30**. *Let $\mu>0$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Then $a\in(0,\overline{a}_0)\mapsto m(a)$ is a continuous mapping.*
*Proof.* Similarly as in [@2022-JMPA-jean Lemma 2.6], let $a \in (0, \overline{a}_0)$ be arbitrary and $\{a_n\} \subset(0, \overline{a}_0)$ be such that $a_n \rightarrow a$. From the definition of $m(a_n)$ and since $m(a_n) < 0$, for any $\varepsilon> 0$ sufficiently small, there exists $u_n \in S_{a_n}\cap A_{\rho_0}$ such that $$\begin{aligned}
\label{k61}
\mathcal{J}(u_n)\leq m(a_n)+\varepsilon \quad\quad \text{and} \quad\quad\mathcal{J}(u_n)<0.\end{aligned}$$ We set $v_n := \frac{a}{a_n} u_n$, and hence $v_n \in S_a$. We have that $v_n \in S_a\cap A_{\rho_0}$. Indeed, if $a_n \geq a$, then $$|\nabla v_n|_2=\frac{a}{a_n}|\nabla u_n|_2\leq |\nabla u_n|_2\leq \rho_0.$$ If $a_n \leq a$, in view of Lemma [Lemma 20](#KKK-Lem2.1){reference-type="ref" reference="KKK-Lem2.1"} and $f(a, \rho_0)>f(a_0, \rho_0)=0$ we have $$\begin{aligned}
\label{k62}
f(a_n, \rho) \geq 0 \quad\quad \text{for any}\ \rho \in \big[\frac{a_n}{a}\rho_0, \rho_0\big].\end{aligned}$$ Moreover, it follows from [\[k61\]](#k61){reference-type="eqref" reference="k61"} that $$\begin{aligned}
0>\mathcal{J}(u_n)\geq& \frac{1}{2}|\nabla u_n|_2^2-\mu \frac{C_q^q}{q} a_n^{q(1-\gamma_q)}|\nabla u_n|_2^{q\gamma_q}-\frac{C_p^p}{p} a_n^{p(1-\gamma_p)}|\nabla u_n|_2^{p\gamma_p}\\
\geq& |\nabla u_n|_2^2f(a_n, |\nabla u_n|_2),\end{aligned}$$ which shows that $f(a_n, |\nabla u_n|_2)<0$. Hence, by [\[k62\]](#k62){reference-type="eqref" reference="k62"}, we infer that $|\nabla u_n|_2< \frac{a_n}{a}\rho_0$. Then $$|\nabla v_n|_2=\frac{a}{a_n}|\nabla u_n|_2\leq \frac{a}{a_n}\frac{a_n}{a}\rho_0=\rho_0.$$ Since $v_n\in S_a\cap A_{\rho_0}$, we infer that $$m(a)\leq \mathcal{J}(v_n)=\mathcal{J}(u_n)+[\mathcal{J}(v_n)- \mathcal{J}(u_n)],$$ where $$\begin{aligned}
\mathcal{J}(v_n)- \mathcal{J}(u_n)=&\frac{1}{2}\Big[\big(\frac{a}{a_n}\big)^2-1\Big]|\nabla u_n|_2^2+\frac{1}{4}\Big[\big(\frac{a}{a_n}\big)^4-1\Big]\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy\\
&-\frac{1}{p}\Big[\big(\frac{a}{a_n}\big)^p-1\Big]| u_n|_p^p
-\mu\frac{1}{q}\Big[\big(\frac{a}{a_n}\big)^q-1\Big]| u_n|_q^q.\end{aligned}$$ Since $|\nabla u_n|_2\leq \rho_0$, the norms $| u_n|_p$ and $| u_n|_q$ are also uniformly bounded, and we obtain, as $n\rightarrow\infty$, $$\begin{aligned}
\label{k64}
m(a)\leq \mathcal{J}(v_n)=\mathcal{J}(u_n)+o(1)\leq m(a_n)+\varepsilon+o(1).\end{aligned}$$ Besides, let $u \in S_a\cap A_{\rho_0}$ be such that $$\begin{aligned}
\label{k63}
\mathcal{J}(u)\leq m(a)+\varepsilon \quad\quad \text{and} \quad\quad\mathcal{J}(u)<0.\end{aligned}$$ Let $u_n=\frac{a_n}{a}u$ and hence $u_n\in S_{a_n}$. Clearly, $|\nabla u|_2\leq \rho_0$ and $a_n\rightarrow a$ as $n\rightarrow\infty$ imply that $|\nabla u_n|_2\leq \rho_0$ for $n$ large enough. Thus $u_n\in S_{a_n}\cap A_{\rho_0}$. Moreover, $\mathcal{J}(u_n)\rightarrow \mathcal{J}(u)$ as $n\rightarrow\infty$, thus we deduce from [\[k63\]](#k63){reference-type="eqref" reference="k63"} that $$\begin{aligned}
\label{k65}
m(a_n)\leq \mathcal{J}(u_n)=\mathcal{J}(u)+[\mathcal{J}(u_n)-\mathcal{J}(u)]\leq m(a)+\varepsilon+o(1).\end{aligned}$$ Combining [\[k64\]](#k64){reference-type="eqref" reference="k64"} with [\[k65\]](#k65){reference-type="eqref" reference="k65"} and letting $\varepsilon\rightarrow 0$, we get $m(a_n)\rightarrow m(a)$ as $n\rightarrow\infty$ for any $a\in(0, \overline{a}_0)$. ◻
**Lemma 31**. *Let $\mu>0$, $a\in(0, \overline{a}_0)$, $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Then $\lim_{a\rightarrow 0}\frac{m(a)}{a^2}=0$.*
*Proof.* Define $$\begin{aligned}
I(u)=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx
-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx-\frac{\mu}{q}\int_{\mathbb{R}^3}|u|^{q}dx,\end{aligned}$$ and $\widetilde{m}(a):=\inf_{S_a\cap A_{\rho_0}} I$, where $a\in(0, \overline{a}_0)$. Then $\mathcal{J} (u)\geq I(u)$ for any $u\in \mathcal{H}$. Hence $0> m(a)\geq \widetilde{m}(a)$, which implies that $\frac{\widetilde{m}(a)}{a^2}\leq\frac{m(a)}{a^2}<0$. Hence, we only need to show that $\lim_{a\rightarrow0}\frac{\widetilde{m}(a)}{a^2}=0$. According to [@2020Soave], when $q\in(2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$, we obtain that there exists $\widetilde{u}_{a}\in S_a\cap A_{\rho_0}$ such that $I(\widetilde{u}_{a})=\widetilde{m}(a)<0$ for any $a\in(0, \overline{a}_0)$. Then the sequence $\{\widetilde{u}_{a}\}_{a>0}$ is bounded in $D^{1,2}(\mathbb{R}^3)$. Since the minimizer $\widetilde{u}_{a}$ satisfies $$\begin{aligned}
\label{k79}
-\Delta \widetilde{u}_{a}-|\widetilde{u}_{a}|^{p-2}\widetilde{u}_{a}
-\mu|\widetilde{u}_{a}|^{q-2}\widetilde{u}_{a}=\omega_a \widetilde{u}_{a},\end{aligned}$$ where $\omega_a$ is the associated Lagrange multiplier, by testing the equation with $\widetilde{u}_{a}$ we get $$\begin{aligned}
\label{k78}
\frac{\omega_a}{2}=\frac{|\nabla \widetilde{u}_{a}|_2^2-| \widetilde{u}_{a}|_p^p-\mu | \widetilde{u}_{a}|_q^q}{2| \widetilde{u}_{a}|_2^2}\leq\frac{\frac{1}{2}|\nabla \widetilde{u}_{a}|_2^2-\frac{1}{p}| \widetilde{u}_{a}|_p^p-\mu\frac{1}{q} | \widetilde{u}_{a}|_q^q}{| \widetilde{u}_{a}|_2^2}=\frac{I(\widetilde{u}_{a} )}{a^2}< 0.\end{aligned}$$ We now prove that $\lim_{a\rightarrow0} \omega_a=0$; then, since [\[k78\]](#k78){reference-type="eqref" reference="k78"} gives $\frac{\omega_a}{2}\leq\frac{\widetilde{m}(a)}{a^2}<0$, the claim $\lim_{a\rightarrow0}\frac{\widetilde{m}(a)}{a^2}=0$ follows by comparison. To show that $\lim_{a\rightarrow0} \omega_a=0$, suppose on the contrary that there exists a sequence $a_n \rightarrow 0$ such that $\omega_{a_n} <-\alpha$ for some $\alpha\in(0,1)$. Since the minimizers $\widetilde{u}_{a_n}\in S_{a_n}\cap A_{\rho_0}$ satisfy Eq. [\[k79\]](#k79){reference-type="eqref" reference="k79"}, we get $$\begin{aligned}
|\nabla\widetilde{u}_{a_n} |_2^2-\omega_{a_n} |\widetilde{u}_{a_n} |_2^2-|\widetilde{u}_{a_n}|_p^p-\mu |\widetilde{u}_{a_n}|_q^q&=0,\label{k80}\\
|\nabla\widetilde{u}_{a_n} |_2^2-\gamma_p|\widetilde{u}_{a_n}|_p^p-\mu\gamma_q|\widetilde{u}_{a_n}|_q^q&=0.\label{k81}\end{aligned}$$ By $\eqref{k80}-\frac{1}{2}\eqref{k81}$, one infers that $$\begin{aligned}
C\|\widetilde{u}_{a_n}\|^2
&\leq \frac{1}{2}|\nabla\widetilde{u}_{a_n} |_2^2+\alpha |\widetilde{u}_{a_n} |_2^2\\
&\leq(1-\frac{1}{2}\gamma_p)| \widetilde{u}_{a_n} |_p^p+\mu(1-\frac{1}{2}\gamma_q)| \widetilde{u}_{a_n} |_q^q \\
& \leq C(1-\frac{1}{2}\gamma_p)\| \widetilde{u}_{a_n} \|^p+\mu C(1-\frac{1}{2}\gamma_q)\| \widetilde{u}_{a_n} \|^q,\end{aligned}$$ which implies that $\|\widetilde{u}_{a_n}\|\geq C$ due to $p, q>2$ and $\gamma_p, \gamma_q<1$. This shows that $|\nabla\widetilde{u}_{a_n} |_2>C$ for $n$ large enough since $|\widetilde{u}_{a_n} |_2=a_n\rightarrow 0$ as $n\rightarrow\infty$. Moreover, for any $u\in S_a$, $$\begin{aligned}
I(u)=&\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx
-\frac{\mu}{q}\int_{\mathbb{R}^3}|u|^{q}dx-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx\\
\geq& \frac{1}{2}|\nabla u|_2^2-\mu \frac{C_q^q}{q} a^{q(1-\gamma_q)}|\nabla u|_2^{q\gamma_q}-\frac{C_p^p}{p} a^{p(1-\gamma_p)}|\nabla u|_2^{p\gamma_p}.\end{aligned}$$ Hence, for $n$ large enough, $$\begin{aligned}
0\geq I(\widetilde{u}_{a_n})
\geq\left(\frac{1}{2}-\mu \frac{C_q^q}{q} a_n^{q(1-\gamma_q)}|\nabla\widetilde{u}_{a_n}|_2^{q\gamma_q-2}-\frac{C_p^p}{p} a_n^{p(1-\gamma_p)}|\nabla\widetilde{u}_{a_n}|_2^{p\gamma_p-2}\right)|\nabla\widetilde{u}_{a_n}|_2^2
\geq \frac{1}{4}C>0,\end{aligned}$$ which is a contradiction. Therefore, $\lim_{a\rightarrow0} \omega_a=0$, which shows that $\lim_{a\rightarrow0}\frac{\widetilde{m}(a)}{a^2}=0$. Hence, we complete the proof. ◻
**Lemma 32**. *Let $\mu>0$, $q\in(2, \frac{8}{3})$, $p\in (\frac{10}{3}, 6)$ and $a\in(0, \overline{a}_0)$. Then, $$\begin{aligned}
\forall \ u\in M(a),\ \exists\ g_u\in \mathcal{G}_u\ \text{admissible, such that}\ \frac{d}{d \theta}h_{g_u}(\theta)|_{\theta=1}\neq 0.
\end{aligned}$$*
*Proof.* To prove this lemma, we argue by contradiction assuming that there exists a sequence $\{u_n\}\subset M(a)$ with $|\nabla u_n|_2\leq \rho_0$ and $a\geq |u_n|_2=a_n\rightarrow 0$ as $n\rightarrow\infty$ such that for all $\iota\in\mathbb{R}$ and $g_{u_n}\in \mathcal{G}_{u_n}^\iota$, $h_{g_{u_n}}'(1)=0$, namely, $$\begin{aligned}
\label{k66}
-\iota|\nabla u_n|_2^2+\frac{2-\iota}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy
=\frac{p-2+\frac{6-3p}{2}\iota}{p}|u_n|_p^p
+\mu\frac{q-2+\frac{6-3q}{2}\iota}{q}|u_n|_q^q.
\end{aligned}$$ Moreover, since $\{u_n\}\subset M(a)$, then it holds that $$\begin{aligned}
\label{k67}
|\nabla u_n|_2^2+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy
-\gamma_p|u_n|_p^p
-\mu\gamma_q|u_n|_q^q=0.
\end{aligned}$$ Using [\[k66\]](#k66){reference-type="eqref" reference="k66"} and [\[k67\]](#k67){reference-type="eqref" reference="k67"}, we have $$\begin{aligned}
\label{k70}
\frac{1}{2}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy-\frac{p-2}{p}|u_n|_p^p-\mu\frac{q-2}{q}|u_n|_q^q=0.
\end{aligned}$$ Since $|\nabla u_n|_2\leq \rho_0$ and $|u_n|_2\rightarrow 0$ as $n\rightarrow\infty$, one has $|u_n|_p, |u_n|_q\rightarrow 0$ as $n\rightarrow\infty$. When $q\in(2, \frac{12}{5}]$, by [\[k70\]](#k70){reference-type="eqref" reference="k70"}, Lemma [Lemma 18](#gle4){reference-type="ref" reference="gle4"} and the interpolation inequality, we have $$\frac{2\mu(q-2)}{q}|u_n|_q^q\leq\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy\leq C|u_n|_{\frac{12}{5}}^4\leq C|u_n|_{q}^{ \frac{6q}{6-q}}|u_n|_6^{4-\frac{6q}{6-q}},$$ which is a contradiction since $\frac{6q}{6-q}>q$, $4-\frac{6q}{6-q}\geq0$, $|u_n|_{q}\rightarrow 0$ as $n\rightarrow\infty$ and $|u_n|_6\leq C$. If $q\in(\frac{12}{5}, \frac{8}{3})$, it follows from [\[k70\]](#k70){reference-type="eqref" reference="k70"}, Lemma [Lemma 18](#gle4){reference-type="ref" reference="gle4"} and the interpolation inequality that $$\begin{aligned}
\frac{2\mu(q-2)}{q}|u_n|_q^q&\leq\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy\\
&\leq C|u_n|_{\frac{12}{5}}^4\leq C|u_n|_{q}^{4(1-t)}|u_n|_2^{4t},
\end{aligned}$$ where $t=\frac{5q-12}{6(q-2)}$, a contradiction since $q<4(1-t)$, $|u_n|_{q}\xrightarrow{n}0$ and $|u_n|_2\leq a$. Thus, we complete the proof. ◻
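For completeness, we indicate how [\[k66\]](#k66){reference-type="eqref" reference="k66"} arises from the scaling identities recorded after Definition [Definition 26](#K-def1){reference-type="ref" reference="K-def1"} (a sketch). For $g_{u_n}(\theta)=\theta^{1-\frac{3}{2}\iota}u_n(\cdot/\theta^{\iota})$ one has $\Theta_{g_{u_n}}(\theta)=\theta^2$ and $$\begin{aligned}
\mathcal{J}(g_{u_n}(\theta))=\frac{\theta^{2-2\iota}}{2}\int_{\mathbb{R}^3}|\nabla u_n|^2dx+\frac{\theta^{4-\iota}}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy
-\frac{\theta^{\,p-\iota p\gamma_p}}{p}|u_n|_p^p-\mu\frac{\theta^{\,q-\iota q\gamma_q}}{q}|u_n|_q^q,\end{aligned}$$ so that differentiating $h_{g_{u_n}}(\theta)=\mathcal{J}(g_{u_n}(\theta))-\theta^{2}\mathcal{J}(u_n)$ at $\theta=1$ and imposing $h_{g_{u_n}}'(1)=0$ yields exactly [\[k66\]](#k66){reference-type="eqref" reference="k66"}, after writing $p\gamma_p=\frac{3(p-2)}{2}$ and $q\gamma_q=\frac{3(q-2)}{2}$.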
**Lemma 33**. *Under the assumptions of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, let $\{u_n\}\subset D_{\rho_0}$ be a minimizing sequence of $m(a)$, then there exists $u\in D_{\rho_0}$ such that $u_n\rightarrow u$ in $\mathcal{H}$ as $n\rightarrow\infty$ up to translation and $\mathcal{J}(u)=m(a)$.*
*Proof.* From Lemma [Lemma 29](#K-Lem2.8){reference-type="ref" reference="K-Lem2.8"}, the function $s\mapsto \frac{m(s)}{s^2}$ is monotone decreasing for $s\in(0, a]$. Then for any $a_1\in (0, a)$, we get $\frac{a_1^2}{a^2}m(a)<m(a_1)$ and $\frac{a^2-a_1^2}{a^2}m(a)< m(\sqrt{a^2-a_1^2})$. Hence, for any $a_1\in (0, a)$, $$m(a)=\frac{a_1^2}{a^2}m(a)+\frac{a^2-a_1^2}{a^2}m(a)<m(a_1)+m\Big(\sqrt{a^2-a_1^2}\Big).$$ The rest is similar to the proof of [\[k55\]](#k55){reference-type="eqref" reference="k55"}. Hence, we complete the proof. ◻
Under the assumptions of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, from Lemma [Lemma 33](#K-Lem2.12){reference-type="ref" reference="K-Lem2.12"}, there exists $u\in D_{\rho_0}$ such that $\mathcal{J}(u)=m(a)$. Note that if $\{u_n\}\subset D_{\rho_0}$ is a minimizing sequence for $m(a)$, then $\{|u_n|\}\subset D_{\rho_0}$ is also a minimizing sequence for $m(a)$, so $u$ may be taken real-valued and non-negative. Hence, in view of Lemma [Lemma 25](#K-Lem2.5){reference-type="ref" reference="K-Lem2.5"}, we obtain that there is $\lambda\in \mathbb{R}$ such that $(\lambda, u)\in \mathbb{R}\times \mathcal{H}$ is a ground state normalized solution for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}. Moreover, the elliptic $L^s$ estimate implies that $u \in W^{2,s}_{loc} (\mathbb{R}^3)$ with $s\geq 2$. Then, by the Sobolev embedding theorem, we have $u\in C^{1,\alpha}_{loc}(\mathbb{R}^3)$ for $0 < \alpha < 1$. Hence, it follows from the strong maximum principle that $u > 0$. Besides, Corollary [Corollary 24](#K-Lem2.4){reference-type="ref" reference="K-Lem2.4"} and Lemma [Lemma 25](#K-Lem2.5){reference-type="ref" reference="K-Lem2.5"} imply that any ground state for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} is a local minimizer of $\mathcal{J}$ on $D_{\rho_0}$. $\hfill\Box$
# The second solution of the case $\mu>0$, $q\in(2, \frac{12}{5} ]$ and $p\in(\frac{10}{3},6)$ {#sec3}
In this part, we address the case $\mu>0$, $q\in(2, \frac{12}{5} ]$ and $p\in (\frac{10}{3}, 6)$ and consider the problem in $\mathcal{H}_r$. Let $S_{a,r}= \mathcal{H}_r\cap S_a$, $\mathcal{P}_{\pm,r}= \mathcal{H}_r\cap\mathcal{P}_{\pm}$ and $m(a,r)=\inf_{\mathcal{P}_{+,r} } \mathcal{J}(u)$. All conclusions in Sec. [2](#sec2){reference-type="ref" reference="sec2"} remain valid. Using a method similar to the proof of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, we obtain $(\lambda, u)\in \mathbb{R}\times S_{a,r}$ solving Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\mathcal{J}(u)= m(a,r)<0$. Now, we show that Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} has a second solution (of Mountain Pass type) in this case. Firstly, we verify the Mountain Pass geometry of $\mathcal{J}$ on the manifold $S_{a,r}$. Let $$\begin{aligned}
\Gamma:=\left\{\zeta\in C([0,1], S_{a,r}): \zeta(0)\in \mathcal{P}_{+,r}\ \ \text{and}\ \ \mathcal{J}(\zeta(1))<2m(a,r)\right\},\end{aligned}$$ and $$c_a:=\inf_{\zeta\in \Gamma}\max_{u\in \zeta([0,1])}\mathcal{J}(u).$$ Taking $v\in S_{a,r}$, there is $s_v\in \mathbb{R}$ such that $s_v\star v\in \mathcal{P}_{+,r}$ by Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"}. When $k>1$ is large enough, the path $$\zeta_v(\tau)=[(1-\tau)s_v+\tau k] \star v,\quad\quad \tau\in[0,1],$$ belongs to $\Gamma$ (a short justification is sketched right after this paragraph), which implies that $\Gamma\neq \emptyset$. Next, we claim that $$c_a=\sigma:=\inf_{u\in\mathcal{P}_{-,r}}\mathcal{J}(u)>0.$$ On the one hand, suppose that there exists $\widetilde{v}\in \mathcal{P}_{-,r}$ such that $\mathcal{J}(\widetilde{v})<c_a$. Here $s\star \widetilde{v}= e^{\frac{3}{2}s} \widetilde{v}(e^s x)$. Let $s_1>1$ be sufficiently large such that $\mathcal{J}(s_1\star \widetilde{v} )<2m(a,r)$. Hence, the path $$\zeta_{\widetilde{v}}(\tau)=[(1-\tau)s_{\widetilde{v}}+\tau s_1]\star \widetilde{v},\quad\quad \tau\in[0,1],$$ belongs to $\Gamma$, which, together with Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"}-(iii), shows that $$c_a\leq \max_{\tau\in [0,1]}\mathcal{J}(\zeta_{\widetilde{v}}(\tau))=\max_{\tau\in [0,1]}\mathcal{J}([(1-\tau)s_{\widetilde{v}}+\tau s_1]\star \widetilde{v})=\mathcal{J}(\widetilde{v} )< c_a,$$ a contradiction. Thus, for any $u\in \mathcal{P}_{-,r}$, one infers $\mathcal{J}(u)\geq c_a$, which implies that $\sigma\geq c_a$. On the other hand, for any $\zeta\in \Gamma$, since $\zeta(0)\in \mathcal{P}_{+,r}$, there exists $t_{\zeta(0)}> s_{\zeta(0)}=0$ satisfying $t_{\zeta(0)}\star \zeta(0)\in \mathcal{P}_{-,r}$. Moreover, since $\mathcal{J}(\zeta(1))< 2m(a,r)$, similarly to [@2020Soave Lemma 5.6], we know that $t_{\zeta(1)}<0$. Therefore, by continuity, there exists $\overline{\tau}\in(0,1)$ such that $t_{\zeta(\overline{\tau})}=0$. Hence, $\zeta(\overline{\tau})\in \mathcal{P}_{-,r}$. So, $$\max_{u\in \zeta([0,1])}\mathcal{J}(u)\geq \mathcal{J}(\zeta(\overline{\tau}))\geq \inf_{\mathcal{P}_{-,r} }\mathcal{J}(u)=\sigma,$$ which implies that $c_a\geq \sigma$. Hence, we deduce that $c_a= \sigma$. Furthermore, let $t_{\max}$ be the maximum point of the function $h$. For any $u\in \mathcal{P}_{-,r}$, there is $\tau_u\in \mathbb{R}$ such that $|\nabla (\tau_u \star u)|_2=t_{\max}$, and hence $$\mathcal{J}(u)\geq \mathcal{J}(\tau_u\star u)\geq h(|\nabla (\tau_u \star u) |_2)=h(t_{\max} )>0,$$ which gives that $\sigma>0$. That is, $c_a= \sigma>0$.
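A minimal sketch of the justification of $\zeta_v\in\Gamma$ (using the explicit fiber map recorded after Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"}): since $p\gamma_p>2$ and $|v|_p>0$, $$\begin{aligned}
\mathcal{J}(k\star v)=\frac{e^{2k}}{2}|\nabla v|_2^2+\frac{e^{k}}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v(x)|^{2}|v(y)|^2} {|x-y|}dxdy
-\frac{e^{p\gamma_p k}}{p}|v|_p^p-\mu\frac{e^{q\gamma_q k}}{q}|v|_q^q\rightarrow-\infty \quad\quad \text{as}\ k\rightarrow+\infty,\end{aligned}$$ so $\mathcal{J}(\zeta_v(1))=\mathcal{J}(k\star v)<2m(a,r)$ for $k$ large enough, while $\zeta_v(0)=s_v\star v\in \mathcal{P}_{+,r}$ and $\tau\mapsto\zeta_v(\tau)$ is continuous with values in $S_{a,r}$.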
For any $a\in(0, \overline{a}_0)$ fixed, it is easy to verify that the set $$L:=\{u\in \mathcal{P}_{-,r}:\mathcal{J}(u)\leq c_a+1\}$$ is bounded. Then let $M_0>0$ be such that $L\subset B(0, M_0)$, where $$B(0, M_0):=\{u\in \mathcal{H}_r: \|u\|\leq M_0\}.$$
In order to prove the following lemma, we need to develop a deformation argument on $S_a$. Following [@1983-BL-II], we recall that, for any $a > 0$, $S_a$ is a submanifold of $\mathcal{H}$ with codimension $1$ and the tangent space at a point $\overline{u}\in S_a$ is defined as $$T_{\overline{u}}=\{v\in \mathcal{H}: (\overline{u}, v)_{L^2}=0 \}.$$ The restriction $\mathcal{J}_{|_{S_a}}: S_a\rightarrow \mathbb{R}$ is a $C^1$ functional on $S_a$ and for any $\overline{u}\in S_a$ and any $v\in T_{\overline{u} }$, $$\langle\mathcal{J}_{|_{S_a}}'(\overline{u} ), v\rangle= \langle\mathcal{J}'(\overline{u}), v\rangle.$$ We use the notation $\|d\mathcal{J}_{|_{S_a}}(\overline{u} ) \|$ to indicate the norm in the cotangent space $T_{\overline{u}}'$, that is, the dual norm induced by the norm of $T_{\overline{u}}$, that is, $$\|d\mathcal{J}_{|_{S_a}}(\overline{u} ) \|:= \sup_{\|v\|\leq 1, v\in T_{\overline{u}}}| \langle d\mathcal{J}(\overline{u} ), v \rangle|.$$ Let $\widetilde{S}_a:=\{u\in S_a: d\mathcal{J}_{|_{S_a}}(u )\neq 0\}$. We know from [@1983-BL-II] that there exists a locally Lipschitz pseudo gradient vector field $Y \in C^1(\widetilde{S}_a, T(S_a))$ (here $T(S_a)$ is the tangent bundle) such that $$\begin{aligned}
\label{k41}
\|Y(u)\|\leq 2 \|d\mathcal{J}_{|_{S_a}}(u)\|\end{aligned}$$ and $$\begin{aligned}
\label{k42}
\langle\mathcal{J}_{|_{S_a}}'(u), Y(u)\rangle\geq \|d\mathcal{J}_{|_{S_a}}(u) \|^2,\end{aligned}$$ for any $u\in \widetilde{S}_a$. Note that $\|Y(u)\|\neq 0$ for $u\in \widetilde{S}_a$ thanks to [\[k42\]](#k42){reference-type="eqref" reference="k42"}. Now, for an arbitrary but fixed $\delta > 0$, we consider the sets $$\begin{aligned}
%\label{k43}
&\widetilde{N}_{\delta}:=\{u\in S_a: |\mathcal{J}(u)-c_a |\leq \delta,\ dist(u, \mathcal{P}_{-,r})\leq 2 \delta, \ \|Y(u)\|\geq 2\delta\},\\
& N_{\delta}:=\{u\in S_a: |\mathcal{J}(u)-c_a|< 2\delta\},\end{aligned}$$ where $dist(x, \mathcal{A}):=\inf\{\|x-y\|: y\in \mathcal{A}\}$. Assuming that $\widetilde{N}_{\delta}$ is nonempty, there exists a locally Lipschitz function $g : S_a \rightarrow [0, 1]$ such that $$\begin{aligned}
\label{k104} g(u)=
\begin{cases}
1,\quad\quad &\text{on}\ \widetilde{N}_{\delta},\\
0,\quad\quad &\text{on}\ N_{\delta}^c.
\end{cases}\end{aligned}$$ We also define on $S_a$ the vector field $W$ by $$\begin{aligned}
\label{k44}W(u)=
\begin{cases}
-g(u)\frac{Y(u)}{\|Y(u)\|},\quad\quad &\text{if}\ u\in \widetilde{S}_a,\\
0,\quad\quad &\text{if}\ u\in S_a\backslash \widetilde{S}_a,
\end{cases}\end{aligned}$$ and the pseudo gradient flow $$\begin{aligned}
\label{k45}
\begin{cases}
\frac{d}{dt}\eta(t,u)=W(\eta(t,u)),\\
\eta(0,u)=u.
\end{cases}\end{aligned}$$ The existence of a unique solution $\eta(t, \cdot)$ of [\[k45\]](#k45){reference-type="eqref" reference="k45"} defined for all $t\in \mathbb{R}$ follows from standard arguments and we refer to [@1983-BL-II Lemma 5] for this. Let us recall some of its basic properties:
- $\eta(t, \cdot)$ is a homeomorphism of $S_a$;
- $\eta(t, u)=u$ for all $t\in \mathbb{R}$ if $|\mathcal{J}(u)-c_a |\geq 2\delta$;
- $\frac{d}{dt} \mathcal{J}(\eta(t, u))=\langle \mathcal{J}'(\eta(t, u)), W(\eta(t, u) ) \rangle\leq 0$ for all $t\in \mathbb{R}$ and $u\in S_a$ (a sketch of this computation is given right after this list).
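A minimal sketch of the computation behind the last property: off $\widetilde{S}_a$ we have $W=0$ and there is nothing to prove, while for $\eta(t,u)\in \widetilde{S}_a$, by [\[k44\]](#k44){reference-type="eqref" reference="k44"}, [\[k45\]](#k45){reference-type="eqref" reference="k45"} and [\[k42\]](#k42){reference-type="eqref" reference="k42"}, $$\begin{aligned}
\frac{d}{dt} \mathcal{J}(\eta(t, u))=\langle \mathcal{J}_{|_{S_a}}'(\eta(t, u)), W(\eta(t, u) ) \rangle
=-g(\eta(t,u))\,\frac{\langle \mathcal{J}_{|_{S_a}}'(\eta(t, u)), Y(\eta(t, u))\rangle}{\|Y(\eta(t, u))\|}
\leq -g(\eta(t,u))\,\frac{\|d\mathcal{J}_{|_{S_a}}(\eta(t, u))\|^2}{\|Y(\eta(t, u))\|}\leq 0.\end{aligned}$$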
The following results help to obtain a special Palais-Smale sequence, inspired by [@2011JFA-BS].
**Lemma 34**. *Let $\mu>0$, $q\in (2, \frac{12}{5}]$, $p\in (\frac{10}{3}, 6)$, $a\in(0, \overline{a}_0)$ and $$\begin{aligned}
\Omega_{\delta}:=\left\{u\in S_{a,r}:\ |\mathcal{J}(u)-c_a |\leq\delta, \ dist(u, \mathcal{P}_{-,r})\leq 2\delta, \ \|\mathcal{J}'_{|_{S_{a,r}}}(u)\|_{\mathcal{H}_r^{-1}}\leq 2 \delta\right\}.\end{aligned}$$ Then, for any $\delta>0$, the set $\Omega_{\delta}\cap B(0, 3M_0)$ is nonempty.*
*Proof.* Define $$\Lambda_{\delta}:=\{u\in S_{a,r}: \ |\mathcal{J}(u)-c_a |\leq\delta, \ dist(u, \mathcal{P}_{-,r})\leq 2\delta\}.$$ Suppose on the contrary that there is $\overline{\delta}\in (0, \frac{c_a}{2})$ such that $$\begin{aligned}
\label{k105}
u\in \Lambda_{\overline{\delta}}\cap B(0, 3M_0)\Rightarrow \|\mathcal{J}'_{|_{S_{a,r}}}(u)\|_{\mathcal{H}_r^{-1}}> 2 \overline{\delta}.\end{aligned}$$ From [\[k42\]](#k42){reference-type="eqref" reference="k42"}, $$\begin{aligned}
\label{k106}
u\in \Lambda_{\overline{\delta}}\cap B(0, 3M_0)\Rightarrow u\in \widetilde{N}_{\overline{\delta}}.\end{aligned}$$ Note that, by [\[k45\]](#k45){reference-type="eqref" reference="k45"}, for any $u\in S_{a,r}$, it holds that $\big\|\frac{d}{dt}\eta(t,u)\big\|\leq 1$ for all $t\geq 0$, then there exists $s'>0$, depending on $\overline{\delta}>0$, such that, for all $s\in(0, s')$, $$\begin{aligned}
\label{k46}
u\in \Lambda_{\frac{\overline{\delta}}{2}}\cap B(0, 2M_0)\Rightarrow \eta(s,u)\in B(0, 3M_0)\quad \quad \text{and}\quad\quad dist(\eta(s,u), \mathcal{P}_{-,r} )\leq 2 \overline{\delta}.\end{aligned}$$ We claim that, taking $\varepsilon>0$ small enough, we can construct a path $\zeta_\varepsilon(t)\in \Gamma$ satisfying $$\max_{t\in[0,1]}\mathcal{J}(\zeta_\varepsilon(t))\leq c_a+\varepsilon,$$ and $$\begin{aligned}
\label{k47}
\mathcal{J}(\zeta_\varepsilon(t))\geq c_a\Rightarrow\zeta_\varepsilon(t)\in \Lambda_{\frac{ \overline{\delta}}{2}}\cap B(0, 2M_0).\end{aligned}$$ In fact, for $\varepsilon>0$ small, let $u_\varepsilon \in \mathcal{P}_{-,r}$ be such that $\mathcal{J}(u_\varepsilon)\leq c_a+\varepsilon$, and considering the path $$\begin{aligned}
\zeta_{\varepsilon}(t)=[(1-t)s_{u_{\varepsilon}}+t k]\star u_{\varepsilon},\end{aligned}$$ where $t\in [0,1]$, $k>0$ large enough and $s_{u_{\varepsilon}}<0$. Clearly, $$\max_{t\in[0,1]}\mathcal{J}(\zeta_\varepsilon(t))=\mathcal{J}(u_\varepsilon) \leq c_a+\varepsilon.$$ Since $u_\varepsilon \in \mathcal{P}_{-,r}$, similar to [\[k11\]](#k11){reference-type="eqref" reference="k11"}, one has $$\begin{aligned}
|\nabla u_\varepsilon|_2^2
\leq\gamma_p\left(\frac{p\gamma_p-q\gamma_q}{2-q\gamma_q}\right)
\int_{\mathbb{R}^3}|u_\varepsilon|^{p}dx
\leq C_p^p\gamma_p\left(\frac{p\gamma_p-q\gamma_q}{2-q\gamma_q}\right)a^{p(1-\gamma_p)}|\nabla u_\varepsilon|_2^{p\gamma_p},
\end{aligned}$$ which shows that $$\begin{aligned}
|\nabla u_\varepsilon|_2\geq \Big(\frac{1}{C_p^p \gamma_p a^{p(1-\gamma_p)}}\Big)^{\frac{1}{p \gamma_p-2}} \Big( \frac{2-q\gamma_q}{p\gamma_p-q\gamma_q} \Big)^{\frac{1}{p \gamma_p-2}}.\end{aligned}$$ Moreover, it is easy to see that $\{u_{\varepsilon}\}_{\varepsilon}$ is bounded in $\mathcal{H}_r$. Hence, letting $\varepsilon\rightarrow 0$ and passing to a subsequence, $$\begin{aligned}
&A(u_{\varepsilon}):=\int_{\mathbb{R}^3}|\nabla u_\varepsilon|^2dx\rightarrow A>0, \quad \quad B(u_{\varepsilon}):=\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_\varepsilon(x)|^{2}|u_\varepsilon(y)|^2} {|x-y|}dxdy\rightarrow B\geq0,\\
&D(u_{\varepsilon}):=\int_{\mathbb{R}^3}|u_\varepsilon|^{p}dx\rightarrow D>0, \quad\quad \ E(u_{\varepsilon}):=\int_{\mathbb{R}^3}|u_\varepsilon|^{q}dx\rightarrow E\geq0.\end{aligned}$$ Since $u_\varepsilon \in \mathcal{P}_{-,r}$ and $\lim_{\varepsilon\rightarrow 0}\mathcal{J}(u_\varepsilon)=c_a$, then we have $$\begin{aligned}
\lim_{\varepsilon\rightarrow 0}P(u_\varepsilon)&=A+\frac{1}{4}B
-\gamma_pD-\mu\gamma_qE=0, \label{k25}\\
\lim_{\varepsilon\rightarrow 0}\psi_{u_{\varepsilon}}''(0)&=2A+\frac{1}{4}B
-p\gamma_p^2D
-\mu q \gamma_q^2E\leq0,\label{k26}\\
\lim_{\varepsilon\rightarrow 0}\mathcal{J} (u_{\varepsilon})&=\frac{1}{2}A+\frac{1}{4}B
-\frac{1}{p}D-\frac{\mu}{q}E=c_a.\label{k27}
\end{aligned}$$ Considering the function $$\begin{aligned}
L(t)=\frac{1}{2}A t^2+\frac{1}{4}B t-\frac{1}{p}Dt^{p \gamma_p} -\frac{\mu}{q}Et^{q\gamma_q},\quad\quad \text{for all}\ t>0.
\end{aligned}$$ We claim that the function $L(t)$ has a unique global maximum point at $t=1$. Indeed, by simple calculate and from [\[k25\]](#k25){reference-type="eqref" reference="k25"}, we have $$\begin{aligned}
L'(t)&= A t+\frac{1}{4}B -\gamma_pDt^{p \gamma_p-1} - \mu \gamma_qEt^{q\gamma_q-1} \\
&=\frac{1}{4}(1-t)B
+\gamma_pD(t-t^{p \gamma_p-1})+\mu\gamma_qE(t-t^{q\gamma_q-1}).
\end{aligned}$$ Then $$\begin{aligned}
L''(t)&=-\frac{1}{4}B
+\gamma_pD(1-(p \gamma_p-1)t^{p \gamma_p-2})+\mu\gamma_qE(1-(q\gamma_q-1)t^{q\gamma_q-2}),\\
L'''(t)&=
-\gamma_pD(p \gamma_p-1)(p \gamma_p-2)t^{p \gamma_p-3}-\mu\gamma_qE(q\gamma_q-1)(q\gamma_q-2)t^{q\gamma_q-3}.
\end{aligned}$$ Notice that $L'''(t)<0$ for all $t>0$. Thus, the function $L''(t)$ is strictly decreasing for $t>0$. Since $\lim_{t\rightarrow 0}L''(t)=+\infty$, $\lim_{t\rightarrow \infty}L''(t)=-\infty$ and $$\begin{aligned}
L''(1)&=-\frac{1}{4}B
+\gamma_pD(2-p \gamma_p )+\mu\gamma_qE(2-q\gamma_q)\leq 0,
\end{aligned}$$ where [\[k25\]](#k25){reference-type="eqref" reference="k25"} and [\[k26\]](#k26){reference-type="eqref" reference="k26"} are applied, there exists $0<\widetilde{t}\leq 1$ such that $L''(\widetilde{t})=0$. Therefore, the function $L'(t)$ is increasing on $(0, \widetilde{t})$ and decreasing on $(\widetilde{t}, +\infty)$. It is easy to see from $\lim_{t\rightarrow0} L(t)=0^-$ and $L(1)=c_a>0$ that $L'(\widetilde{t})>0$. Furthermore, $L'(1)=0$ implies that $\widetilde{t}< 1$. Since $\lim_{t\rightarrow0^+}L'(t)=-\infty$, the function $L'(t)$ has exactly two zero points, denoted by $t^*$ and $1$. Hence, the function $L(t)$ has a local minimum at $t^*$ and a global maximum at $t=1$. By [\[k27\]](#k27){reference-type="eqref" reference="k27"}, we infer that $$\max_{t>0}L(t)=L(1)=c_a.$$ Setting $y_{\varepsilon}= (1-t)s_{u_{\varepsilon}}+t k \in (s_{u_{\varepsilon}}, k )$, we may assume (up to a subsequence) that $e^{y_{\varepsilon}}\rightarrow l\geq 0$ as $\varepsilon\rightarrow 0$. Then $$\begin{aligned}
c_a \leq \lim_{\varepsilon\rightarrow 0} \mathcal{J}(\zeta_\varepsilon(t))
=\frac{1}{2}l^2A+\frac{1}{4} l B - \frac{1}{p}l^{p\gamma_p}D
-\frac{\mu}{q}l^{q\gamma_q}E\leq L(1)=c_a,\end{aligned}$$ which implies that $l=1$ by the uniqueness of the global maximum point of the function $L(t)$. That is, $y_{\varepsilon}\rightarrow 0$ as $\varepsilon\rightarrow 0$. Hence, choosing $\varepsilon\in (0, \frac{1}{4} \overline{\delta}s')$ sufficiently small, we get $\zeta_{\varepsilon}(t)\in \Lambda_{ \frac{\overline{\delta}}{2}} \cap B(0, 2M_0)$. Besides, applying the pseudo-gradient flow to $\zeta_{\varepsilon}(t)$, we see that $$\begin{aligned}
\label{k22}
\eta(s, \zeta_{\varepsilon}(\cdot))\in \Gamma\quad\quad \text{for all} \ s>0,\end{aligned}$$ because $\eta(s, u)=u$ for all $s>0$ if $| \mathcal{J}(u)-c_a |\geq 2 \overline{\delta}$. Next, we claim that, taking $s^*:=\frac{4 \varepsilon}{\overline{\delta}}<s'$, $$\begin{aligned}
\label{k21}
\max_{t\in [0,1]} \mathcal{J}( \eta( s^*,\zeta_{\varepsilon}(t)))<c_a .\end{aligned}$$ Indeed, for simplicity, set $w=\zeta_{\varepsilon}(t)$ for $t\in [0,1]$.
\(1\) If $\mathcal{J}(w)<c_a$, then $\mathcal{J}( \eta( s^*, w))\leq \mathcal{J}(w)<c_a$.
\(2\) If $\mathcal{J}(w)\geq c_a$, assume by contradiction that $$\begin{aligned}
\mathcal{J}( \eta( s, w))\geq c_a \quad\quad \text{for all}\ s\in [0, s^*].\end{aligned}$$ Since $\mathcal{J}( \eta( s, w))\leq \mathcal{J}(w)\leq c_a+\varepsilon$, it follows from [\[k46\]](#k46){reference-type="eqref" reference="k46"} and [\[k47\]](#k47){reference-type="eqref" reference="k47"} that $\eta( s, w)\in \Lambda_{\overline{\delta}}\cap B(0, 3M_0)$ for all $s\in [0, s^*]$. Moreover, we can see from [\[k42\]](#k42){reference-type="eqref" reference="k42"}, [\[k104\]](#k104){reference-type="eqref" reference="k104"}, [\[k105\]](#k105){reference-type="eqref" reference="k105"} and [\[k106\]](#k106){reference-type="eqref" reference="k106"} that $\|Y(\eta( s, w) )\|\geq 2 \overline{\delta}$ and $g(\eta( s, w) )=1$ for all $s\in [0, s^*]$. Thus, by [\[k44\]](#k44){reference-type="eqref" reference="k44"}, [\[k45\]](#k45){reference-type="eqref" reference="k45"} and [\[k105\]](#k105){reference-type="eqref" reference="k105"}, one has $$\begin{aligned}
\frac{d}{ds} \mathcal{J}( \eta( s, w))= \Big\langle d\mathcal{J}( \eta( s, w)), -\frac{Y(\eta(s, w)) }{\|Y(\eta(s, w))\|} \Big\rangle.\end{aligned}$$ By integration, using $s^*=\frac{4\varepsilon}{\overline{\delta}}$, [\[k41\]](#k41){reference-type="eqref" reference="k41"}, [\[k42\]](#k42){reference-type="eqref" reference="k42"} and the fact that $\|Y(\eta( s, w) )\|\geq 2 \overline{\delta}$, we deduce that $$\mathcal{J}( \eta( s^*, w))\leq\mathcal{J}(w)-s^* \frac{\overline{\delta}}{2}\leq c_a +\varepsilon-2\varepsilon=c_a -\varepsilon<c_a,$$ which contradicts $\mathcal{J}( \eta( s^*, w))\geq c_a$. Thus, [\[k21\]](#k21){reference-type="eqref" reference="k21"} holds. It follows from [\[k22\]](#k22){reference-type="eqref" reference="k22"} and [\[k21\]](#k21){reference-type="eqref" reference="k21"} that $$c_a \leq \max_{t\in[0,1]}\mathcal{J}( \eta( s^*, \zeta_{\varepsilon}(t)))<c_a ,$$ a contradiction. So we complete the proof. ◻
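A minimal sketch of the integration step in case (2) above (writing, with a slight abuse of notation, $\|d\mathcal{J}_{|_{S_{a,r}}}\|$ for the norm of the constrained derivative): for $s\in[0, s^*]$ we have $\eta(s,w)\in \Lambda_{\overline{\delta}}\cap B(0, 3M_0)$, hence $g(\eta(s,w))=1$, $\|d\mathcal{J}_{|_{S_{a,r}}}(\eta(s,w))\|>2\overline{\delta}$ by [\[k105\]](#k105){reference-type="eqref" reference="k105"}, and $\|Y(\eta(s,w))\|\leq 2\|d\mathcal{J}_{|_{S_{a,r}}}(\eta(s,w))\|$ by [\[k41\]](#k41){reference-type="eqref" reference="k41"}; therefore, by [\[k42\]](#k42){reference-type="eqref" reference="k42"}, $$\begin{aligned}
\frac{d}{ds} \mathcal{J}(\eta(s, w))\leq-\frac{\|d\mathcal{J}_{|_{S_{a,r}}}(\eta(s, w))\|^{2}}{\|Y(\eta(s, w))\|}
\leq-\frac{\|d\mathcal{J}_{|_{S_{a,r}}}(\eta(s, w))\|}{2}\leq-\overline{\delta}\leq-\frac{\overline{\delta}}{2},\end{aligned}$$ and integrating over $[0, s^*]$ with $s^*=\frac{4\varepsilon}{\overline{\delta}}$ gives $\mathcal{J}(\eta(s^*, w))\leq \mathcal{J}(w)-s^*\frac{\overline{\delta}}{2}=\mathcal{J}(w)-2\varepsilon$, as used above.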
**Lemma 35**. *Let $\mu>0$, $q\in (2, \frac{12}{5}]$ and $p\in (\frac{10}{3}, 6)$. Then there exists a sequence $\{u_n\}\subset S_{a,r}$ and a constant $\alpha>0$ satisfying $$P(u_n)=o(1), \quad \mathcal{J}(u_n)=c_a +o(1), \quad \|\mathcal{J}'_{|_{S_{a,r}}}(u_n)\|_{\mathcal{H}_{r}^{-1}}= o(1), \quad \|u_n\|\leq \alpha.$$*
*Proof.* We know from Lemma [Lemma 34](#K-Lem3.1){reference-type="ref" reference="K-Lem3.1"} that there exists $\{u_n\}\subset S_{a,r}$ satisfying $\{u_n\}\subset B(0, 3M_0)$ and $$dist(u_n, \mathcal{P}_{-,r})=o(1),\quad \quad |\mathcal{J}(u_n)-c_a |=o(1),\quad\quad \|\mathcal{J}_{|_{S_{a,r}}}'(u_n) \|_{H^{-1}}=o(1).$$ In what follows, we show that $P(u_n)=o(1)$. Note that $\|dP\|_{H^{-1}}$ is bounded on any bounded set of $\mathcal{H}_r$. Now, for any $n\in \mathbb{N}$ and any $w\in \mathcal{P}_{-,r}$, by the mean value theorem we have $$P(u_n)=P(w)+ dP(\beta u_n+(1-\beta)w)(u_n-w)$$ for some $\beta\in [0,1]$. Since $P(w)=0$, then $$\begin{aligned}
\label{k23}
|P(u_n)|\leq \max_{u\in B(0, 3M_0)} \|dP\|_{H^{-1}} \|u_n-w\|.\end{aligned}$$ Choose $\{w_m\}\subset \mathcal{P}_{-,r}$ such that $$\begin{aligned}
\label{k24}
\|u_n-w_m\|\rightarrow dist(u_n, \mathcal{P}_{-,r} )\end{aligned}$$ as $m\rightarrow\infty$. Since $dist(u_n, \mathcal{P}_{-,r})\rightarrow 0$ as $n\rightarrow\infty$, [\[k23\]](#k23){reference-type="eqref" reference="k23"} and [\[k24\]](#k24){reference-type="eqref" reference="k24"} give that $P(u_n)\rightarrow 0$ as $n\rightarrow\infty$. ◻
**Lemma 36**. *Let $\mu>0$, $q\in (2, \frac{12}{5}]$, $p\in (\frac{10}{3}, 6)$ and let [\[k3\]](#k3){reference-type="eqref" reference="k3"} hold. Assume that $\{u_n\}\subset S_{a,r}$ is a Palais-Smale sequence for $\mathcal{J}$ restricted to $S_{a,r}$ at level $c\neq 0$ and $P(u_n)\rightarrow 0$ as $n\rightarrow\infty$. Then there exists $u\in S_{a,r}$ such that, up to a subsequence, $u_n\rightarrow u$ in $\mathcal{H}_r$, and there is $\lambda>0$ such that $(\lambda, u )\in \mathbb{R}^+ \times \mathcal{H}_{r}$ solves Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\mathcal{J} (u)=c$.*
*Proof.* Let us prove this result in three steps.
Step 1: $\{u_n\}$ is bounded in $\mathcal{H}_r$. Since $P(u_n)\rightarrow 0$ as $n\rightarrow\infty$, then $$\begin{aligned}
\label{k96}
c+o(1)
=&\mathcal{J} (u_n)-\frac{1}{p\gamma_p} P(u_n)\nonumber\\
=&\left(\frac{1}{2}-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3}|\nabla u_n|^2dx+\frac{1}{4}\left(1-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy\nonumber\\
&+\mu\left(\frac{\gamma_q}{p\gamma_p}-\frac{1}{q}\right)\int_{\mathbb{R}^3}|u_n|^{q}dx\nonumber\\
\geq& \left(\frac{1}{2}-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3}|\nabla u_n|^2dx-\mu \frac{p\gamma_p-q\gamma_q}{pq\gamma_p} C_q^qa^{(1-\gamma_q)q}|\nabla u_n|_2^{q \gamma_q},\end{aligned}$$ which shows that $\{u_n\}$ is bounded in $\mathcal{H}_r$ because of $q\gamma_q<2$.
Step 2: Since $\{u_n\}$ is bounded in $\mathcal{H}_r$, then there exists $u\in \mathcal{H}_r$ such that, up to a subsequence, as $n\rightarrow\infty$, $$\begin{aligned}
& u_n\rightharpoonup u \quad\quad \text{in}\ \mathcal{H}_r;\\
& u_n\rightarrow u\quad\quad \text{in}\ L^t(\mathbb{R}^3)\ \text{with}\ t\in(2, 6);\\
& u_n\rightarrow u \quad\quad \mbox{a.e.} \ \text{in}\ \mathbb{R}^3.\end{aligned}$$ Since $\mathcal{J}'_{|_{S_{a,r}}}(u_n)\rightarrow 0$, there exists $\lambda_n\in \mathbb{R}$ satisfying $$\begin{aligned}
\label{k19}
-\Delta u_n+\lambda_n u_n+ (|x|^{-1}\ast |u_n|^2)u_n=\mu |u_n|^{q-2}u_n+|u_n|^{p-2}u_n+o(1).\end{aligned}$$ Multiplying the above equation by $u_n$ and integrating in $\mathbb{R}^3$, $$\begin{aligned}
\label{k16}
\lambda_n a^2= -|\nabla u_n|_2^2-\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy
+\int_{\mathbb{R}^3}|u_n|^{p}dx+\mu \int_{\mathbb{R}^3}|u_n|^{q}dx+o( \|u_n\|).\end{aligned}$$ Since $\{u_n\}\subset \mathcal{H}_r$ is bounded, it follows from [\[k16\]](#k16){reference-type="eqref" reference="k16"} that $\{\lambda_n\}$ is bounded in $\mathbb{R}$. Hence, up to a subsequence, there is $\lambda\in \mathbb{R}$ satisfying $\lambda_n\rightarrow \lambda$ as $n\rightarrow\infty$. Then we have $$\begin{aligned}
\label{k20}
-\Delta u+\lambda u+(|x|^{-1}\ast |u|^2)u=\mu |u|^{q-2}u+|u|^{p-2}u.\end{aligned}$$ Clearly, $u\neq0$. Indeed, if $u=0$, then by Lemma [Lemma 18](#gle4){reference-type="ref" reference="gle4"} and the fact that the embedding $\mathcal{H}_r\hookrightarrow L^t(\mathbb{R}^3 )$ with $t\in (2,6)$ is compact, we deduce $$\begin{aligned}
c=\lim_{n\rightarrow\infty}\mathcal{J}(u_n)
&=\lim_{n\rightarrow\infty}\Big(\mathcal{J}(u_n)- \frac{1}{2} P(u_n)\Big)\\
&=\lim_{n\rightarrow\infty}\bigg[\frac{1}{8} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy+\Big(\frac{\gamma_p}{2}-\frac{1}{p}\Big)|u_n|_p^p+\mu\Big(\frac{\gamma_q}{2}-\frac{1}{q}\Big)|u_n|_q^q\bigg]\\
&=0,\end{aligned}$$ which is a contradiction. Besides, we show that $$\begin{aligned}
\label{k40}
\lambda>0.\end{aligned}$$ Since $(\lambda, u)\in \mathbb{R}\times \mathcal{H}_r$ satisfies [\[k20\]](#k20){reference-type="eqref" reference="k20"}, then the following equalities hold: $$\begin{aligned}
|\nabla u|_2^2+ \lambda \int_{\mathbb{R}^3} |u|^{2}dx +\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\int_{\mathbb{R}^3}|u|^{p}dx-\mu \int_{\mathbb{R}^3}|u|^{q}dx&=0,\label{k17}\\
|\nabla u|_2^2+ \frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\gamma_p \int_{\mathbb{R}^3}|u|^{p}dx-\mu \gamma_q \int_{\mathbb{R}^3}|u|^{q}dx&=0.\label{k18}\end{aligned}$$ We argue by contradiction assuming that $\lambda\leq 0$. Then it follows from [\[k17\]](#k17){reference-type="eqref" reference="k17"} and [\[k18\]](#k18){reference-type="eqref" reference="k18"} that, by eliminating $\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(y)|^2|u(x)|^{2}} {|x-y|}dydx$, $$\begin{aligned}
\frac{3}{4}|\nabla u|_2^2-\frac{\lambda}{4} \int_{\mathbb{R}^3} |u|^{2}dx
+(\frac{1}{4}-\gamma_p)\int_{\mathbb{R}^3}|u|^{p}dx+\mu (\frac{1}{4}-\gamma_q) \int_{\mathbb{R}^3}|u|^{q}dx=0,\end{aligned}$$ which implies that $$\begin{aligned}
\frac{3}{4}|\nabla u|_2^2 \leq&
(\gamma_p-\frac{1}{4})\int_{\mathbb{R}^3}|u|^{p}dx+\mu (\gamma_q-\frac{1}{4}) \int_{\mathbb{R}^3}|u|^{q}dx\\
\leq& (\gamma_p-\frac{1}{4})\int_{\mathbb{R}^3}|u|^{p}dx\\
\leq & (\gamma_p-\frac{1}{4}) C_p^p|\nabla u|_2^{p\gamma_p} a^{p(1-\gamma_p)}\end{aligned}$$ by using the fact that $q\in(2, \frac{12}{5}]$, $p\in(\frac{10}{3}, 6)$, $|u|_2\leq a$ and the Gagliardo-Nirenberg inequality [\[k10\]](#k10){reference-type="eqref" reference="k10"}. Then, since $p\gamma_p>2$ and $|\nabla u|_2\neq 0$, we get $$\begin{aligned}
\label{k48}
|\nabla u|_2^{p\gamma_p-2}\geq \frac{3}{4 (\gamma_p-\frac{1}{4})C_p^p a^{p(1-\gamma_p)} }.\end{aligned}$$ Moreover, by eliminating $\int_{\mathbb{R}^3}|u|^{p}dx$ from [\[k17\]](#k17){reference-type="eqref" reference="k17"} and [\[k18\]](#k18){reference-type="eqref" reference="k18"}, $$\begin{aligned}
(\gamma_p-1)|\nabla u|_2^2+ \lambda \gamma_p \int_{\mathbb{R}^3} |u|^{2}dx
+(\gamma_p-\frac{1}{4}) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy +\mu ( \gamma_q-\gamma_p) \int_{\mathbb{R}^3}|u|^{q}dx=0.\end{aligned}$$ Then, $$\begin{aligned}
0\leq-\lambda \gamma_p \int_{\mathbb{R}^3} |u|^{2}dx =&(\gamma_p-1)|\nabla u|_2^2+\big(\gamma_p-\frac{1}{4}\big) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy +\mu ( \gamma_q-\gamma_p) \int_{\mathbb{R}^3}|u|^{q}dx\\
\leq & (\gamma_p-1)|\nabla u|_2^2+\big(\gamma_p-\frac{1}{4}\big) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy\\
\leq & (\gamma_p-1)|\nabla u|_2^2+ \big(\gamma_p-\frac{1}{4}\big) C_{\frac{12}{5}}^{\frac{12}{5}}a^3|\nabla u|_2,\end{aligned}$$ which gives that $$\begin{aligned}
\label{k49}
(1-\gamma_p) |\nabla u|_2\leq \big(\gamma_p-\frac{1}{4}\big) C_{\frac{12}{5}}^{\frac{12}{5}}a^3.\end{aligned}$$ Combining [\[k48\]](#k48){reference-type="eqref" reference="k48"} and [\[k49\]](#k49){reference-type="eqref" reference="k49"}, using the assumption [\[k3\]](#k3){reference-type="eqref" reference="k3"}, we get a contradiction. Hence, $\lambda>0$.
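To see the contradiction explicitly, note that [\[k48\]](#k48){reference-type="eqref" reference="k48"} and [\[k49\]](#k49){reference-type="eqref" reference="k49"} give a lower and an upper bound for $|\nabla u|_2$; the following sketch only records the two bounds side by side, and the assumption [\[k3\]](#k3){reference-type="eqref" reference="k3"} is what rules out their compatibility: $$\Big(\frac{3}{4 (\gamma_p-\frac{1}{4})C_p^p a^{p(1-\gamma_p)} }\Big)^{\frac{1}{p\gamma_p-2}}\leq |\nabla u|_2\leq \frac{(\gamma_p-\frac{1}{4}) C_{\frac{12}{5}}^{\frac{12}{5}}a^3}{1-\gamma_p}.$$ Note that, since $p(1-\gamma_p)>0$ and $p\gamma_p>2$, the left-hand side tends to $+\infty$ while the right-hand side tends to $0$ as $a\rightarrow 0^+$, so the two bounds cannot hold simultaneously once $a$ is suitably small, which is consistent with the role played by [\[k3\]](#k3){reference-type="eqref" reference="k3"} above.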
Step 3: Multiplying [\[k19\]](#k19){reference-type="eqref" reference="k19"} and [\[k20\]](#k20){reference-type="eqref" reference="k20"} by $u_n-u$, integrating over $\mathbb{R}^3$ and subtracting, we obtain $$\left\langle \mathcal{J}'(u_n)-\mathcal{J}'(u), u_n-u\right\rangle +\lambda_n \int_{\mathbb{R}^3} u_n(u_n-u)dx-\lambda \int_{\mathbb{R}^3} u(u_n-u)dx=o(1).$$ Since $\lambda_n\rightarrow \lambda$ as $n\rightarrow\infty$ and the embedding $\mathcal{H}_r\hookrightarrow L^t( \mathbb{R}^3)$ with $t\in (2,6)$ is compact, we deduce $$|\nabla (u_n-u)|_2^2+ \lambda |u_n-u|_2^2=o(1),$$ which shows that $\|u_n-u\|\rightarrow 0$ as $n\rightarrow\infty$ because $\lambda>0$. Thus, $( \lambda, u)\in \mathbb{R}^+\times \mathcal{H}_r$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $u_n\rightarrow u$ in $\mathcal{H}_r$ and $\mathcal{J}(u)=c$. This completes the proof. ◻
In view of Lemmas [Lemma 35](#K-Lem3.2){reference-type="ref" reference="K-Lem3.2"} and [Lemma 36](#K-Lem2.6){reference-type="ref" reference="K-Lem2.6"}, there exists $( \widehat{\lambda}, \widehat{u})\in \mathbb{R}^+ \times \mathcal{H}_r$ such that $(\widehat{\lambda}, \widehat{u})$ solves Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} and $\mathcal{J}(\widehat{u})=c_a>m(a,r)\geq m(a)$. Arguing as in the proof of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, we find that $\widehat{u}>0$.
# Existence for the case $\mu\leq0$, $2<q<\frac{8}{3}$ and $\frac{10}{3}<p<6$ {#sec6}
In this section, we consider the case $\mu\leq0$, $2<q<\frac{8}{3}$ and $\frac{10}{3}<p<6$ in $\mathcal{H}_{r}$. Let $S_{a,r}=\mathcal{H}_{r} \cap S_a$, $\mathcal{P}_{0,r}=\mathcal{H}_{r}\cap \mathcal{P}_{0}$ and $\mathcal{P}_r=\mathcal{H}_{r}\cap \mathcal{P}$.
**Lemma 37**. *Let $\mu\leq0$, $2<q<\frac{8}{3}$ and $\frac{10}{3}<p<6$. Then $\mathcal{P}_{0,r}=\emptyset$, and $\mathcal{P}_r$ is a smooth manifold of codimension 2 in $\mathcal{H}_r$.*
*Proof.* Suppose on the contrary that there exists $u\in \mathcal{P}_{0,r}$. By [\[k8\]](#k8){reference-type="eqref" reference="k8"} and [\[k4\]](#k4){reference-type="eqref" reference="k4"} one has $$\begin{aligned}
P(u)=\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx=0,\label{kk5}\\
\psi_u''(0)=2\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-p\gamma_p^2\int_{\mathbb{R}^3}|u|^{p}dx
-\mu q \gamma_q^2\int_{\mathbb{R}^3}|u|^{q}dx=0.\label{kk6}
\end{aligned}$$ By eliminating $|\nabla u|_2^2$ from [\[kk5\]](#kk5){reference-type="eqref" reference="kk5"} and [\[kk6\]](#kk6){reference-type="eqref" reference="kk6"}, we get $$\begin{aligned}
\mu (2-q\gamma_q)\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx
= (p \gamma_p-2)\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy.
\end{aligned}$$ Since $\mu\leq0$ and $q \gamma_q < 2< p\gamma_p$, we get $u\equiv0$, which contradicts $u\in S_{a,r}$. Moreover, arguing as in [@2020Soave Lemma 5.2], we obtain that $\mathcal{P}_r$ is a smooth manifold of codimension 2 in $\mathcal{H}_r$. Thus we complete the proof. ◻
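For the reader's convenience, the elimination used in the proof is the linear combination $2\times$[\[kk5\]](#kk5){reference-type="eqref" reference="kk5"}$-$[\[kk6\]](#kk6){reference-type="eqref" reference="kk6"}; no assumptions beyond these two displays are needed: $$2P(u)-\psi_u''(0)=\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy+(p\gamma_p-2)\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx+(q\gamma_q-2)\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx=0,$$ which, after rearranging, is exactly the identity displayed above.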
**Lemma 38**. *Let $\mu\leq0$, $2<q<\frac{8}{3}$ and $\frac{10}{3}<p<6$. For every $u\in S_{a,r}$, there exists a unique $t_u\in \mathbb{R}$ such that $t_u\star u\in \mathcal{P}_r$, where $t_u$ is the unique critical point of the function $\psi_u$, and is a strict maximum point of the function $\psi_u$ at positive level. Moreover,*
- *(a) $\mathcal{P}_r= \mathcal{P}_{-,r}$;*
- *(b) $\psi_u$ is strictly decreasing and concave on $(t_u, +\infty)$, and $t_u < 0 \Leftrightarrow P(u)<0$;*
- *(c) The map $u \in S_{a,r}\mapsto t_u \in \mathbb{R}$ is of class $C^1$.*
*Proof.* Notice that, for every $u \in S_{a,r}$, $\psi_u(s)\rightarrow 0^+$ as $s \rightarrow -\infty$, and $\psi_u(s)\rightarrow -\infty$ as $s \rightarrow +\infty$. Therefore, $\psi_u(s)$ has a global maximum point at positive level. To show that this is the unique critical point of $\psi_u(s)$, we observe that $\psi_u'(s)=0$ if and only if $$\begin{aligned}
\label{k95}
f(s)=\mu\gamma_q
\int_{\mathbb{R}^3}|u|^{q}dx\end{aligned}$$ holds, where $$f(s)=e^{(2-q\gamma_q)s}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4} e^{(1-q\gamma_q)s}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p e^{(p\gamma_p-q\gamma_q)s} \int_{\mathbb{R}^3}|u|^{p}dx.$$ Hence, it suffices to show that [\[k95\]](#k95){reference-type="eqref" reference="k95"} has exactly one solution (the computation behind [\[k95\]](#k95){reference-type="eqref" reference="k95"} is recorded after this proof). Indeed, since $$\begin{aligned}
f'(s)=&(2-q\gamma_q)e^{(2-q\gamma_q)s}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}(1-q\gamma_q) e^{(1-q\gamma_q)s}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy \\
&-\gamma_p(p\gamma_p-q\gamma_q) e^{(p\gamma_p-q\gamma_q)s} \int_{\mathbb{R}^3}|u|^{p}dx\\
=&e^{(1-q\gamma_q)s} f_1(s),\end{aligned}$$ where $$f_1(s)=(2-q\gamma_q)e^{s}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1-q\gamma_q}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p(p\gamma_p-q\gamma_q) e^{(p\gamma_p-1)s} \int_{\mathbb{R}^3}|u|^{p}dx.$$ By an elementary analysis, we obtain that $f_1(s)\rightarrow \frac{1}{4}(1-q\gamma_q) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(y)|^2|u(x)|^{2}} {|x-y|}dydx >0$ as $s\rightarrow-\infty$, $f_1(s)\rightarrow -\infty$ as $s\rightarrow+\infty$, and there exists $s_0\in \mathbb{R}$ such that $f_1(s)$ is increasing on $(-\infty, s_0)$ and decreasing on $( s_0, +\infty)$. Hence, there is $s_1>s_0$ such that $f_1(s)>0$ on $(-\infty, s_1)$ and $f_1(s)<0$ on $( s_1, +\infty)$, which shows that $f'(s)>0$ on $(-\infty, s_1)$ and $f'(s)<0$ on $( s_1, +\infty)$. That is, $f(s)$ is increasing on $(-\infty, s_1)$ and decreasing on $( s_1, +\infty)$. Moreover, $f(s)\rightarrow 0^+$ as $s\rightarrow -\infty$ and $f(s)\rightarrow -\infty$ as $s\rightarrow +\infty$. Since $\mu\leq0$, by [\[k95\]](#k95){reference-type="eqref" reference="k95"}, we get that $\psi_u'(s)=0$ has only one solution, which implies that $\psi_u(s)$ has a unique critical point, denoted by $t_u$. In the same way, one can also check that $\psi_u$ has only one inflection point. Since $\psi_u'(s)<0$ if and only if $s>t_u$, we have that $P(u) = \psi_u'(0) < 0$ if and only if $t_u < 0$. Finally, for point $(c)$ we argue as in [@2020Soave Lemma 5.3]. ◻
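For completeness, we record the computation behind [\[k95\]](#k95){reference-type="eqref" reference="k95"}; this is only a sketch, using the expression of the fibering map $\psi_u(s)=\mathcal{J}(s\star u)$ employed throughout this paper: $$\psi_u'(s)=e^{2s}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{e^{s}}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\gamma_p e^{p\gamma_p s}\int_{\mathbb{R}^3}|u|^{p}dx-\mu\gamma_q e^{q\gamma_q s}\int_{\mathbb{R}^3}|u|^{q}dx,$$ so dividing the equation $\psi_u'(s)=0$ by $e^{q\gamma_q s}$ yields exactly $f(s)=\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx$.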
**Lemma 39**. *Let $\mu\leq0$, $2<q<\frac{8}{3}$ and $\frac{10}{3}<p<6$. It holds that $$\widetilde{m}_{a,r}:=\inf_{u\in \mathcal{P}_r}\mathcal{ J}(u)>0,$$ and there exists $k > 0$ sufficiently small such that $$\begin{aligned}
0<\sup_{\overline{D}_{k,r}} \mathcal{ J}<\widetilde{m}_{a,r} \quad\quad\text{and} \quad\quad u\in D_{k,r}\Rightarrow \mathcal{ J}(u), \ P(u)>0,
\end{aligned}$$ where $D_{k,r}=D_k\cap \mathcal{H}_r$ and $D_k$ is defined in [\[k91\]](#k91){reference-type="eqref" reference="k91"}.*
*Proof.* The proof is similar to those of [@2020Soave Lemmas 7.3 and 7.4]; using [\[k96\]](#k96){reference-type="eqref" reference="k96"}, we get the results. ◻
Now, we prove that the functional $\mathcal{J}$ has the Mountain Pass geometry. Let $$\begin{aligned}
\widetilde{\Gamma}:=\{\zeta\in C([0,1], S_{a,r}): \zeta(0)\in D_{k,r} \quad \text{and} \quad \mathcal{J}(\zeta(1))<0\},\end{aligned}$$ and $$\widetilde{c}_a:=\inf_{\zeta\in \widetilde{\Gamma}}\max_{u\in \zeta([0,1])}\mathcal{J}(u).$$ Take $v\in S_{a,r}$. Then $|\nabla (s\star v)|_2\rightarrow 0^+$ as $s\rightarrow -\infty$, and $\mathcal{J} (s\star v)\rightarrow -\infty$ as $s\rightarrow+\infty$. Hence there exist $s_0\ll -1$ and $s_1 \gg 1$ such that $$\begin{aligned}
\label{k97}
\zeta_v(\tau)=[(1-\tau)s_0+\tau s_1]\star v\in \widetilde{\Gamma},\end{aligned}$$ which implies that $\widetilde{\Gamma}\neq \emptyset$. Next, we claim that $$\widetilde{c}_a=\widetilde{m}_{a,r}:=\inf_{u\in\mathcal{P}_r}\mathcal{J}(u)>0.$$ On the one hand, if $u\in \mathcal{P}_r$, then by [\[k97\]](#k97){reference-type="eqref" reference="k97"}, $\zeta_u(\tau)$ is a path in $\widetilde{\Gamma}$. Hence, one obtains from Lemma [Lemma 38](#K-Lem4.2){reference-type="ref" reference="K-Lem4.2"} that $$\mathcal{J}(u)=\max_{\tau\in[0,1]}\mathcal{J}(\zeta_u(\tau))\geq \widetilde{c}_a,$$ which gives that $$\widetilde{m}_{a,r}\geq \widetilde{c}_a.$$ On the other hand, for any $\zeta\in \widetilde{\Gamma}$, $P( \zeta(0))>0$ by Lemma [Lemma 39](#K-Lem4.3){reference-type="ref" reference="K-Lem4.3"}. We claim that $P(\zeta(1))<0$. In fact, since $\psi_{\zeta(1) }(s)>0$ for any $s\in(-\infty, t_{\zeta(1)})$, and $\psi_{\zeta(1)}(0)=\mathcal{J}(\zeta(1))<0$, we deduce that $t_{\zeta(1)}<0$, which implies that $P(\zeta(1))<0$ due to Lemma [Lemma 38](#K-Lem4.2){reference-type="ref" reference="K-Lem4.2"}. Hence, there exists $\tau\in(0,1)$ such that $P(\zeta(\tau))=0$ by the continuity. That is, $\zeta(\tau)\in \mathcal{P}_r$. Thus, $$\begin{aligned}
\max_{t\in[0,1]} \mathcal{J}(\zeta(t))\geq \mathcal{J}(\zeta(\tau))\geq \inf_{\mathcal{P}_r}\mathcal{J}\geq \widetilde{m}_{a,r},\end{aligned}$$ which shows that $\widetilde{c}_a\geq \widetilde{m}_{a,r}$ in view of the arbitrariness of $\zeta$. This and Lemma [Lemma 39](#K-Lem4.3){reference-type="ref" reference="K-Lem4.3"} imply $\widetilde{c}_a= \widetilde{m}_{a,r}>0$.
Arguing as in the proofs of Lemmas [Lemma 34](#K-Lem3.1){reference-type="ref" reference="K-Lem3.1"} and [Lemma 35](#K-Lem3.2){reference-type="ref" reference="K-Lem3.2"}, we obtain
**Lemma 40**. *Let $\mu\leq0$, $q\in (2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. Then there exists a sequence $\{u_n\}\subset S_{a,r}$ and a constant $\alpha>0$ satisfying $$P(u_n)=o(1), \quad \mathcal{J}(u_n)=\widetilde{c}_a +o(1), \quad \|\mathcal{J}'_{|_{S_a}}(u_n)\|_{\mathcal{H}_{r}^{-1}}= o(1), \quad \|u_n\|\leq \alpha.$$*
**Lemma 41**. *Let $\mu\leq0$, $\lambda\in \mathbb{R}$, $q\in (2, \frac{8}{3})$ and $p\in (\frac{10}{3}, 6)$. If $v \in \mathcal{H}$ is a weak solution of $$\begin{aligned}
\label{k98}
- \Delta v+\lambda v+ (|x|^{-1}\ast|v|^2)v=|v|^{p-2}v +\mu|v|^{q-2}v\quad \quad \text{in} \ \mathbb{ R}^3,\end{aligned}$$ then $P(v)=0$. Moreover, if $\lambda\leq 0$, then there exists a constant $\widetilde{a}_0>0$, independent of $\lambda \in \mathbb{R}$, such that the only solution of [\[k98\]](#k98){reference-type="eqref" reference="k98"} fulfilling $|v|_2\leq\widetilde{a}_0$ is the null function.*
*Proof.* Since $v$ satisfies $$\begin{aligned}
|\nabla v|_2^2+ \lambda \int_{\mathbb{R}^3} |v|^{2}dx +\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v(x)|^{2}|v(y)|^2} {|x-y|}dxdy
-\int_{\mathbb{R}^3}|v|^{p}dx-\mu \int_{\mathbb{R}^3}|v|^{q}dx&=0,\label{k100}\\
\frac{1}{2}|\nabla v|_2^2 +\frac{3\lambda }{2} \int_{\mathbb{R}^3} |v|^2dx
+\frac{5}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v(x)|^{2}|v(y)|^2} {|x-y|}dxdy-
\frac{3}{p}\int_{\mathbb{R}^3}|v|^{p}dx-\frac{3\mu}{q} \int_{\mathbb{R}^3}|v|^{q}dx&=0,
\label{k101}\end{aligned}$$ it holds that $$\begin{aligned}
\label{k99}
P(v)=\int_{\mathbb{R}^3}|\nabla v|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v(x)|^{2}|v(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|v|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|v|^{q}dx=0.
\end{aligned}$$ Suppose that $(\lambda, v)\in \mathbb{R}\times \mathcal{H}$ with $\lambda\leq 0$ and $|v|_2\leq\widetilde{a}_0$ satisfies [\[k98\]](#k98){reference-type="eqref" reference="k98"}. Then, using [\[k99\]](#k99){reference-type="eqref" reference="k99"}, $\mu\leq0$ and the Gagliardo-Nirenberg inequality [\[k10\]](#k10){reference-type="eqref" reference="k10"}, one infers $$\begin{aligned}
|\nabla v|_2^2 \leq\gamma_p\int_{\mathbb{R}^3}|v|^{p}dx
\leq \gamma_p C_p^p|\nabla v|_2^{p\gamma_p} |v|_2^{p(1-\gamma_p)},\end{aligned}$$ which shows that $$\begin{aligned}
\label{kk48}
|\nabla v|_2^{p\gamma_p-2}\geq \frac{1}{\gamma_pC_p^p |v|_2^{p(1-\gamma_p)} }.\end{aligned}$$ Then, [\[kk48\]](#kk48){reference-type="eqref" reference="kk48"} implies that when $|v|_2$ is small enough, $|\nabla v|_2$ must be large. Moreover, by eliminating $\int_{\mathbb{R}^3}|v|^{p}dx$ from [\[k100\]](#k100){reference-type="eqref" reference="k100"} and [\[k101\]](#k101){reference-type="eqref" reference="k101"}, and using $\mu\leq0$ and the Gagliardo-Nirenberg inequality [\[k10\]](#k10){reference-type="eqref" reference="k10"}, one has $$\begin{aligned}
\lambda \gamma_p \int_{\mathbb{R}^3} |v|^{2}dx&=(1-\gamma_p)|\nabla v|_2^2+(\frac{1}{4}-\gamma_p) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|v(x)|^{2}|v(y)|^2} {|x-y|}dxdy+\mu ( \gamma_p-\gamma_q) \int_{\mathbb{R}^3}|v|^{q}dx\\
&\geq(1-\gamma_p)|\nabla v|_2^2 - (\gamma_p-\frac{1}{4})C_{\frac{12}{5}}^{\frac{12}{5}} |\nabla v|_2|v|_2^3-|\mu| ( \gamma_p-\gamma_q) C_q^q |\nabla v|_2^{q\gamma_q}|v|_2^{q(1-\gamma_q)},\end{aligned}$$ which, in view of $\lambda\leq 0$ and [\[kk48\]](#kk48){reference-type="eqref" reference="kk48"}, gives a contradiction when $|v|_2$ is small enough. This finishes the proof. ◻
From Lemma [Lemma 40](#K-Lem4.4){reference-type="ref" reference="K-Lem4.4"}, we get a sequence $\{u_n\}\subset S_{a,r}$ satisfying $$P(u_n)=o(1), \quad \mathcal{J}(u_n)=\widetilde{c}_a +o(1), \quad \|\mathcal{J}'_{|_{S_a}}(u_n)\|_{\mathcal{H}_{r}^{-1}}= o(1).$$ Similar to the proof of Lemma [Lemma 36](#K-Lem2.6){reference-type="ref" reference="K-Lem2.6"}, we deduce that $\{u_n\}$ is bounded in $\mathcal{H}_r$, and there exists $0\neq \widetilde{u}\in \mathcal{H}_r$ such that, up to a subsequence, $$\begin{aligned}
& u_n\rightharpoonup \widetilde{u} \quad\quad \text{in}\ \mathcal{H}_r;\\
& u_n\rightarrow \widetilde{u}\quad\quad \text{in}\ L^t(\mathbb{R}^3)\ \text{with}\ t\in(2, 6);\\
& u_n\rightarrow \widetilde{u} \quad\quad \text{a.e. in}\ \mathbb{R}^3.\end{aligned}$$ Since $\mathcal{J}'_{|_{S_{a,r}}}(u_n)\rightarrow 0$, there exists $\widetilde{\lambda}_n\in \mathbb{R}$ satisfying $$\begin{aligned}
%\label{kk19}
-\Delta u_n+\widetilde{\lambda}_n u_n+ (|x|^{-1}\ast |u_n|^2)u_n=\mu |u_n|^{q-2}u_n+|u_n|^{p-2}u_n+o(1).\end{aligned}$$ Multiplying the above equation by $u_n$ and integrating in $\mathbb{R}^3$, $$\begin{aligned}
\label{kk16}
\widetilde{ \lambda}_n a^2= -|\nabla u_n|_2^2-\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy
+\int_{\mathbb{R}^3}|u_n|^{p}dx+\mu \int_{\mathbb{R}^3}|u_n|^{q}dx+o( \|u_n\|).\end{aligned}$$ Since $\{u_n\}\subset \mathcal{H}_r$ is bounded, it follows from [\[kk16\]](#kk16){reference-type="eqref" reference="kk16"} that $\{\widetilde{\lambda}_n\}$ is bounded in $\mathbb{R}$. Hence, there is $\widetilde{\lambda}\in \mathbb{R}$ satisfying $\widetilde{\lambda}_n\rightarrow \widetilde{\lambda}$ as $n\rightarrow\infty$. Then we have $$\begin{aligned}
%\label{kk20}
-\Delta \widetilde{u}+\widetilde{\lambda} \widetilde{u}+ (|x|^{-1}\ast |\widetilde{u}|^2) \widetilde{u}=\mu |\widetilde{u}|^{q-2}\widetilde{u}+|\widetilde{u}|^{p-2}\widetilde{u}.\end{aligned}$$ Then, by Lemma [Lemma 41](#K-Lem4.5){reference-type="ref" reference="K-Lem4.5"}, there exists $\widetilde{a}_0>0$ such that $\widetilde{\lambda}>0$ if $a\in (0, \widetilde{a}_0)$. Arguing as in Step 3 of the proof of Lemma [Lemma 36](#K-Lem2.6){reference-type="ref" reference="K-Lem2.6"}, we deduce that $\|u_n-\widetilde{u}\|\rightarrow 0$ in $\mathcal{H}_r$ as $n\rightarrow\infty$. Thus, $(\widetilde{\lambda}, \widetilde{u} )\in \mathbb{R}^+ \times \mathcal{H}_r$ solves Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\mathcal{J}(\widetilde{u})=\widetilde{c}_a$. This completes the proof.$\hfill\Box$
# The case $q\in(2, \frac{8}{3} )$ and $p= \frac{10}{3}$ {#sec5}
In this section, we consider the case $q\in(2, \frac{8}{3} )$ and $p=\overline{p}=\frac{10}{3}$, and give some existence and nonexistence results for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}. Define $$E_0(u)=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx.$$ By the Gagliardo-Nirenberg inequality [\[k10\]](#k10){reference-type="eqref" reference="k10"}, it holds that, for any $u\in S_a$ and $p=\overline{p}=\frac{10}{3}$, $$E_0(u)=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx
\geq \Big(\frac{1}{2}- \frac{C_p^p}{p}a^{\frac{4}{3}}\Big) \int_{\mathbb{R}^3}|\nabla u|^2dx.$$ From the above analysis, when $0<a\leq a^*=(\frac{\overline{p}}{2 C_{\overline{p}}^{\overline{p}}})^{\frac{3}{4}}$, we have $E_0(u)\geq 0$.\
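The exponents appearing above can be checked directly; the following short verification assumes the usual Gagliardo-Nirenberg exponent $\gamma_p=\frac{3(p-2)}{2p}$ in [\[k10\]](#k10){reference-type="eqref" reference="k10"}, which is consistent with all the identities used in this paper: $$\overline{p}\gamma_{\overline{p}}=\frac{3(\overline{p}-2)}{2}=2,\qquad \overline{p}(1-\gamma_{\overline{p}})=\overline{p}-\overline{p}\gamma_{\overline{p}}=\frac{10}{3}-2=\frac{4}{3},$$ so that, on $S_a$, [\[k10\]](#k10){reference-type="eqref" reference="k10"} reads $\int_{\mathbb{R}^3}|u|^{\overline{p}}dx\leq C_{\overline{p}}^{\overline{p}}a^{\frac{4}{3}}|\nabla u|_2^{2}$, which is precisely the bound behind the threshold $a^*$.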
Let $q\in(2, \frac{8}{3} )$ and $p=\overline{p}=\frac{10}{3}$. For any $\mu<0$ and $0<a\leq a^*$, assume that $u$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}. Then $P(u)=0$, that is, $$\begin{aligned}
P(u)=\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx=0,
\end{aligned}$$ which, together with $\inf_{S_a} E_0 \geq 0$ (valid since $a\leq a^*$), shows that $$0>\mu\gamma_q\int_{\mathbb{R}^3}|u|^{q}dx-\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy=2E_0(u)\geq2 \inf_{S_a}E_0(u)\geq 0.$$ This is a contradiction. Hence, Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} has no solution. Clearly, $\inf_{S_a} \mathcal{J}=0$.
For any $\mu\in \mathbb{R}$, if $a> a^*$, then there exists $w\in S_a$ such that $E_0(w)<0$. Hence, since $q\gamma_q<1$, $$\begin{aligned}
\mathcal{J}(s\star w )
= e^{2s} E_0(w)+\frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|w(x)|^{2}|w(y)|^2} {|x-y|}dxdy-\mu\frac{e^{q\gamma_qs}}{q}
\int_{\mathbb{R}^3}|w|^{q}dx
\rightarrow -\infty\end{aligned}$$ as $s\rightarrow +\infty$. Thus, $\inf_{S_a}\mathcal{J}=-\infty$.
For any $\mu>0$, if $0<a\leq a^*$, we first claim that $\mathcal{J}$ is bounded below on $S_a$ and $e(a)<0$. In fact, since in this case $\inf_{S_a} E_0 \geq 0$, we deduce that, for any $u\in S_a$, $$\begin{aligned}
\mathcal{J}(s\star u )
& = e^{2s} E_0(u)+\frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\mu\frac{e^{q\gamma_qs}}{q}
\int_{\mathbb{R}^3}|u|^{q}dx\\
&\geq \frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\mu\frac{e^{q\gamma_qs}}{q}
\int_{\mathbb{R}^3}|u|^{q}dx.\end{aligned}$$ In view of $q\gamma_q<1$, we get $\mathcal{J}(s\star u )\rightarrow+\infty$ as $s\rightarrow+\infty$. Hence, $\mathcal{J}$ is coercive on $S_a$, and $e(a):= \inf_{S_a} \mathcal{J} >-\infty$. Furthermore, due to $q\gamma_q<1$, for any $u\in S_a$, $\mathcal{J}(s\star u )\rightarrow 0^-$ as $s\rightarrow-\infty$. Hence, $e(a)<0$. In particular, we further prove that $$\begin{aligned}
\text{when}\ 0<a<a^*, \ e(a) \text{ is attained by a real-valued
function}\ u\in S_a.
\end{aligned}$$ In other words, we have to verify that, for any minimizing sequence $\{u_n\}\subset S_a$ for $e(a)$, there exists $u\in S_a$ such that, along a subsequence, $u_n \rightarrow u$ in $\mathcal{H}$ up to translations. Indeed, arguing as in Sec. [3](#sec1){reference-type="ref" reference="sec1"}, we modify the definition of $g_u$ as follows.
**Definition 42**. Let $u\in \mathcal{H}\backslash \{0\}$. A continuous path $g_u: \theta\in \mathbb{R}^+\mapsto g_u(\theta)\in \mathcal{H}$ such that $g_u(1)=u$ is said to be a scaling path of $u$ if $$\begin{aligned}
\Theta_{g_u}(\theta):=|g_u(\theta)|_2^2|u|_2^{-2}\ \ \text{is differentiable} \quad\quad \text{and}\quad \quad \Theta_{g_u}'(1)\neq 0.
\end{aligned}$$ We denote by $\mathcal{G}_u$ the set of scaling paths of $u$.
Here, if the following statements hold for any $a\in (0, a^*)$ and $\mu>0$:
- (i) the map $a \mapsto e(a)$ is continuous;
- (ii) $\lim_{a\rightarrow 0}\frac{e(a) }{a^2}=0$,
then, arguing as in Sec. [3](#sec1){reference-type="ref" reference="sec1"} and the proof of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, we can obtain the desired result; that is, there exists $u\in S_a$ such that $\mathcal{J}(u)=e(a)$. Moreover, if $\{u_n\}\subset S_a$ is a minimizing sequence for $e(a)$, then $\{|u_n|\}\subset S_a$ is also a minimizing sequence for $e(a)$. Hence, there is a real-valued non-negative function $u\in S_a$ such that $\mathcal{J}(u)=e(a)$.
Now, we verify (i). Let $a \in (0, a^*)$ be arbitrary and $\{a_n\} \subset(0, \overline{a}_0)$ be such that $a_n \rightarrow a$ as $n\rightarrow\infty$. From the definition of $e(a_n)$ and since $e(a_n) < 0$, for any $\varepsilon> 0$ sufficiently small, there exists $u_n \in S_{a_n}$ such that $\mathcal{J}(u_n)\leq e(a_n)+\varepsilon$ and $\mathcal{J}(u_n)<0$. Taking $n$ large enough, we have $a_n \leq a+\varepsilon<a^*$. Since $q\gamma_q<1$ and $$\begin{aligned}
\label{k113}
0>\mathcal{J}(u_n)\geq& \frac{1}{2}|\nabla u_n|_2^2-\mu \frac{C_q^q}{q} a_n^{q(1-\gamma_q)}|\nabla u_n|_2^{q\gamma_q}-\frac{C_{\overline{p}}^{\overline{p}}}{{\overline{p}}} (a_n)^{ \frac{4}{3}}|\nabla u_n|_2^{2}\nonumber\\
\geq&\Big( \frac{1}{2}- \frac{C_{\overline{p}}^{\overline{p}}}{{\overline{p}}} (a_n)^{ \frac{4}{3}}\Big) |\nabla u_n|_2^{2} -\mu \frac{C_q^q}{q} a_n^{q(1-\gamma_q)}|\nabla u_n|_2^{q\gamma_q},\end{aligned}$$ we deduce that $\{u_n\}$ is bounded in $D^{1,2}(\mathbb{R}^3)$. On the one hand, setting $v_n := \frac{a}{a_n} u_n\in S_a$, by the boundedness of $\{u_n\}$ and $a_n\rightarrow a$ as $n\rightarrow\infty$, $$\begin{aligned}
\label{k112}
e(a)\leq \mathcal{J}(v_n)=&\mathcal{J}(u_n)+\frac{1}{2}\Big[\big(\frac{a}{a_n}\big)^2-1\Big]|\nabla u_n|_2^2+\frac{1}{4}\Big[\big(\frac{a}{a_n}\big)^4
-1\Big]\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^2|u_n(y)|^2} {|x-y|}dxdy\nonumber\\
&-\frac{1}{p}\Big[\big(\frac{a}{a_n}\big)^p-1\Big]| u_n|_p^p
-\mu\frac{1}{q}\Big[\big(\frac{a}{a_n}\big)^q-1\Big]| u_n|_q^q\nonumber\\
=&\mathcal{J}(u_n)+o(1)\nonumber\\
\leq & e(a_n)+\varepsilon+o(1).\end{aligned}$$ On the other hand, let $u \in S_a$ be such that $\mathcal{J}(u)\leq e(a)+\varepsilon$ and $\mathcal{J}(u)<0.$ Set $w_n=\frac{a_n}{a}u\in S_{a_n}$; then $$\begin{aligned}
\label{k111}
e(a_n)\leq \mathcal{J}(w_n)=\mathcal{J}(u)+[\mathcal{J}(w_n)-\mathcal{J}(u)]\leq e(a)+\varepsilon +o(1).\end{aligned}$$ Thus, since $\varepsilon>0$ is arbitrary, we get from [\[k112\]](#k112){reference-type="eqref" reference="k112"} and [\[k111\]](#k111){reference-type="eqref" reference="k111"} that $e(a_n)\rightarrow e(a)$ as $n\rightarrow\infty$ for any $a\in(0, a^*)$.
Next, we show that (ii) holds. Define $$\begin{aligned}
\mathcal{I}(u)=\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx
-\frac{1}{\overline{p}}\int_{\mathbb{R}^3}|u|^{\overline{p}}dx-\mu\frac{1}{q}\int_{\mathbb{R}^3}|u|^{q}dx,\end{aligned}$$ and $\overline{e}(a):=\inf_{S_a} \mathcal{I}$, where $a\in(0, a^*)$. Then $\mathcal{J} (u)\geq \mathcal{I}(u)$ for any $u\in S_a$. Hence $0> e(a)\geq \overline{e}(a)$, which implies that $\frac{\overline{e}(a)}{a^2}\leq \frac{ e(a)}{a^2} <0$. Hence, we only need to show that $\lim_{a\rightarrow0}\frac{\overline{e}(a)}{a^2}=0$. According to [@2020Soave], when $a\in(0, a^*)$, $\mu>0$, $q\in(2, \frac{8}{3})$ and $p=\overline{p}=\frac{10}{3}$, there exists $\overline{u}_{a}\in S_a$ such that $\mathcal{I} (\overline{u}_{a})=\overline{e}(a)<0$. Then, arguing as for [\[k113\]](#k113){reference-type="eqref" reference="k113"}, the family $\{\overline{u}_{a}\}_{a>0}$ is bounded in $D^{1,2}(\mathbb{R}^3)$. Since the minimizer $\overline{u}_{a}$ satisfies $$\begin{aligned}
\label{kk79}
-\Delta \overline{u}_{a}-|\overline{u}_{a}|^{\overline{p}-2}\overline{u}_{a}
-\mu|\overline{u}_{a}|^{q-2}\overline{u}_{a}=\omega_a \overline{u}_{a},\end{aligned}$$ where $\omega_a$ is the Lagrange multiplier associated with the minimizer, we have $$\begin{aligned}
\frac{\omega_a}{2}=\frac{|\nabla \overline{u}_{a}|_2^2-| \overline{u}_{a}|_{\overline{p}}^{\overline{p}}-\mu |\overline{u}_{a}|_q^q}{2| \overline{u}_{a}|_2^2}\leq\frac{\frac{1}{2}|\nabla \overline{u}_{a}|_2^2-\frac{1}{{\overline{p}}}|\overline{u}_{a}|_{\overline{p}}^{\overline{p}}-\mu\frac{1}{q} | \overline{u}_{a}|_q^q}{| \overline{u}_{a}|_2^2}=\frac{\mathcal{I}(\overline{u}_{a} )}{a^2}< 0.\end{aligned}$$ If $\lim_{a\rightarrow0} \omega_a=0$, we get the conclusion. To show that $\lim_{a\rightarrow0} \omega_a=0$, suppose by contradiction that there exists a sequence $a_n \rightarrow 0$ such that $\omega_{a_n} <-\alpha$ for some $\alpha\in(0,1)$. Since the minimizer $\overline{u}_{a_n}\in S_{a_n}$ satisfies Eq. [\[kk79\]](#kk79){reference-type="eqref" reference="kk79"}, one has $$\begin{aligned}
|\nabla\overline{u}_{a_n} |_2^2-\omega_{a_n} |\overline{u}_{a_n} |_2^2-|\overline{u}_{a_n}|_{\overline{p}}^{\overline{p}}-\mu |\overline{u}_{a_n}|_q^q&=0,\label{kkkk80}\\
|\nabla\overline{u}_{a_n} |_2^2-\gamma_{\overline{p}}|\overline{u}_{a_n}|_{\overline{p}}^{\overline{p}}-\mu\gamma_q|\overline{u}_{a_n}|_q^q&=0.\label{kkk81}\end{aligned}$$ It follows from $\eqref{kkkk80}-\frac{1}{2}\eqref{kkk81}$ that $$\begin{aligned}
C\|\overline{u}_{a_n}\|^2\leq \frac{1}{2}|\nabla\overline{u}_{a_n} |_2^2+\alpha |\overline{u}_{a_n} |_2^2
&\leq(1-\frac{1}{2}\gamma_{\overline{p}})| \overline{u}_{a_n} |_{\overline{p}}^{\overline{p}}+\mu(1-\frac{1}{2}\gamma_q)| \overline{u}_{a_n} |_q^q \\
& \leq C(1-\frac{1}{2}\gamma_{\overline{p}})\| \overline{u}_{a_n} \|^{\overline{p}}+\mu C(1-\frac{1}{2}\gamma_q)\| \overline{u}_{a_n} \|^q,\end{aligned}$$ which implies that $\|\overline{u}_{a_n}\|\geq C$ due to ${\overline{p}}, q>2$ and $\gamma_{\overline{p}}, \gamma_q<1$. This gives $|\nabla\overline{u}_{a_n} |_2>C$ for $n$ large enough, since $|\overline{u}_{a_n} |_2=a_n\rightarrow 0$ as $n\rightarrow\infty$. Moreover, for any $u\in S_a$, $$\begin{aligned}
\mathcal{I}(u)=&\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx
-\frac{1}{\overline{p}}\int_{\mathbb{R}^3}|u|^{\overline{p}}dx-\mu\frac{1}{q}\int_{\mathbb{R}^3}|u|^{q}dx\\
\geq& \frac{1}{2}|\nabla u|_2^2-\mu \frac{C_q^q}{q} a^{q(1-\gamma_q)}|\nabla u|_2^{q\gamma_q}-\frac{C_{\overline{p}}^{\overline{p}}}{\overline{p}} a^{\overline{p}(1-\gamma_{\overline{p}})}|\nabla u|_2^{2}.\end{aligned}$$ Thus, for $n$ large enough, $$\begin{aligned}
0\geq \mathcal{I}(\overline{u}_{a_n})
\geq\Big(\frac{1}{2}-\mu \frac{C_q^q}{q} a_n^{q(1-\gamma_q)}|\nabla\overline{u}_{a_n}|_2^{q\gamma_q-2}-\frac{C_{\overline{p}}^{\overline{p}}}{{\overline{p}}} a_n^{{\overline{p}}(1-\gamma_{\overline{p}})} \Big)|\nabla\overline{u}_{a_n}|_2^2
\geq \frac{1}{4}C>0,\end{aligned}$$ which is a contradiction. Therefore, $\lim_{a\rightarrow0} \omega_a=0$, which shows that $\lim_{a\rightarrow0}\frac{\overline{e}(a)}{a^2}=0$.
In order to complete the proof of Theorem [Theorem 6](#K-TH7){reference-type="ref" reference="K-TH7"}, we finally need to prove that when $\mu>0, a=a^*, q\in(2, \frac{12}{5} ]$ and $p=\overline{p}=\frac{10}{3}$, $e(a^*)$ is attained by a real-valued function $u\in S_a$. In fact, consider a minimizing sequence $\{|v_n|\}\subset S_a$ for $e(a^*)$; Ekeland's variational principle yields in a standard way the existence of a new minimizing sequence $\{u_n\}\subset S_a$ for $e(a^*)$ with the property that $\mathcal{J}_{|_{S_a}}'(u_n)\rightarrow 0$ in $\mathcal{H}_{r}^{-1}$ and $u_n$ is a real-valued nonnegative function for every $n\in \mathbb{N}$. Since $\mathcal{J}$ is coercive on $S_a$, $\{u_n\}$ is bounded in $\mathcal{H}_{r}$. Then there exists $u\in \mathcal{H}_r$ such that, up to a subsequence, as $n\rightarrow\infty$, $$\begin{aligned}
& u_n\rightharpoonup u \quad\quad \text{in}\ \mathcal{H}_r;\\
& u_n\rightarrow u\quad\quad \text{in}\ L^t(\mathbb{R}^3)\ \text{with}\ t\in(2, 6);\\
& u_n\rightarrow u \quad\quad \text{a.e. in}\ \mathbb{R}^3.\end{aligned}$$ Since $\mathcal{J}'_{|_{S_a}}(u_n)\rightarrow 0$, there exists $\lambda_n\in \mathbb{R}$ satisfying $$\begin{aligned}
%\label{kk219}
-\Delta u_n+\lambda_n u_n+ (|x|^{-1}\ast |u_n|^2)u_n=\mu |u_n|^{q-2}u_n+|u_n|^{\overline{p}-2}u_n+o(1).\end{aligned}$$ Multiplying the above equation by $u_n$ and integrating in $\mathbb{R}^3$, $$\begin{aligned}
\label{kkk16}
\lambda_n a^2= -|\nabla u_n|_2^2-\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy
+\int_{\mathbb{R}^3}|u_n|^{\overline{p}}dx+\mu \int_{\mathbb{R}^3}|u_n|^{q}dx+o( \|u_n\|).\end{aligned}$$ Since $\{u_n\}\subset \mathcal{H}_r$ is bounded, it follows from [\[kkk16\]](#kkk16){reference-type="eqref" reference="kkk16"} that $\{\lambda_n\}$ is bounded in $\mathbb{R}$. Hence, there is $\lambda\in \mathbb{R}$ satisfying $\lambda_n\rightarrow \lambda$ as $n\rightarrow\infty$. Then we have $$\begin{aligned}
\label{kkkk20}
-\Delta u+\lambda u+ (|x|^{-1}\ast |u|^2)u=\mu |u|^{q-2}u+|u|^{\overline{p}-2}u.\end{aligned}$$ Clearly, $u\neq0$. Indeed, if $u=0$, then by Lemma [Lemma 18](#gle4){reference-type="ref" reference="gle4"} and the fact that the embedding $\mathcal{H}_r\hookrightarrow L^t(\mathbb{R}^3 )$ with $t\in (2,6)$ is compact, we deduce $$\begin{aligned}
e(a^*)=\lim_{n\rightarrow\infty}\mathcal{J}(u_n)
=\lim_{n\rightarrow\infty} \frac{1}{2} \int_{\mathbb{R}^3} |\nabla u_n|^2dx\geq 0,\end{aligned}$$ which contradicts $e(a^*)<0$. Next, we show that $\lambda>0$. It follows from [\[kkkk20\]](#kkkk20){reference-type="eqref" reference="kkkk20"} that $$\begin{aligned}
|\nabla u|_2^2+ \lambda \int_{\mathbb{R}^3} |u|^{2}dx +\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\int_{\mathbb{R}^3}|u|^{\overline{p}}dx-\mu \int_{\mathbb{R}^3}|u|^{q}dx&=0,\label{kk17}\\
|\nabla u|_2^2+ \frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\gamma_{\overline{p}} \int_{\mathbb{R}^3}|u|^{\overline{p}}dx-\mu \gamma_q \int_{\mathbb{R}^3}|u|^{q}dx&=0.\label{kk18}\end{aligned}$$ Assume by contradiction that $\lambda\leq 0$. Eliminating $\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(y)|^2|u(x)|^{2}} {|x-y|}dydx$ between [\[kk17\]](#kk17){reference-type="eqref" reference="kk17"} and [\[kk18\]](#kk18){reference-type="eqref" reference="kk18"}, that is, computing [\[kk18\]](#kk18){reference-type="eqref" reference="kk18"}$-\frac{1}{4}$[\[kk17\]](#kk17){reference-type="eqref" reference="kk17"}, we obtain $$\begin{aligned}
\frac{3}{4}|\nabla u|_2^2-\frac{\lambda}{4} \int_{\mathbb{R}^3} |u|^{2}dx
+(\frac{1}{4}-\gamma_{\overline{p}})\int_{\mathbb{R}^3}|u|^{\overline{p}}dx+\mu (\frac{1}{4}-\gamma_q) \int_{\mathbb{R}^3}|u|^{q}dx=0,\end{aligned}$$ which implies that $$\begin{aligned}
\label{k103}
\frac{3}{4}|\nabla u|_2^2
&\leq
\big(\gamma_{\overline{p}}-\frac{1}{4}\big)\int_{\mathbb{R}^3}|u|^{\overline{p}}dx+\mu \big(\gamma_q-\frac{1}{4}\big) \int_{\mathbb{R}^3}|u|^{q}dx\nonumber\\
& \leq \big(\gamma_{\overline{p}}-\frac{1}{4}\big)\int_{\mathbb{R}^3}|u|^{\overline{p}}dx
\leq \big(\gamma_{\overline{p}}-\frac{1}{4}\big) C_{\overline{p}}^{\overline{p}}|\nabla u|_2^{2} (a^*)^{ \frac{4}{3}}\end{aligned}$$ by using the fact that $\mu>0$, $q\in(2, \frac{12}{5}]$, $\overline{p}=\frac{10}{3}$, $|u|_2\leq a^*$ and [\[k10\]](#k10){reference-type="eqref" reference="k10"}. In view of [\[k103\]](#k103){reference-type="eqref" reference="k103"}, we infer that $a^*\geq\big(\frac{15}{7 C_{\overline{p}}^{\overline{p}}}\big)^\frac{3}{4}$, which contradicts the definition of $a^*$. Hence, $\lambda>0$. Arguing as in Step 3 of the proof of Lemma [Lemma 36](#K-Lem2.6){reference-type="ref" reference="K-Lem2.6"}, we obtain that $\|u_n-u\|\rightarrow 0$ in $\mathcal{H}_r$ as $n\rightarrow\infty$. Then, we infer that $( \lambda, u)\in \mathbb{R}^+\times \mathcal{H}_r$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, where $\mathcal{J}(u)=e(a^*)$ and $u\in S_a$ is a nonnegative real-valued function. This completes the proof. $\hfill\Box$
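The numerical contradiction at the end of the proof can be verified at a glance; here we only use $\gamma_{\overline{p}}=\frac{3(\overline{p}-2)}{2\overline{p}}=\frac{3}{5}$, the value consistent with the estimates above: $$\gamma_{\overline{p}}-\frac{1}{4}=\frac{3}{5}-\frac{1}{4}=\frac{7}{20},\qquad (a^*)^{\frac{4}{3}}\geq \frac{3}{4}\cdot\frac{20}{7}\cdot\frac{1}{C_{\overline{p}}^{\overline{p}}}=\frac{15}{7C_{\overline{p}}^{\overline{p}}}>\frac{5}{3C_{\overline{p}}^{\overline{p}}}=\frac{\overline{p}}{2C_{\overline{p}}^{\overline{p}}}=(a^*)^{\frac{4}{3}},$$ since $\frac{15}{7}>\frac{5}{3}$, which is impossible.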
# The case $\mu>0$, $q=\frac{10}{3}$ and $p\in (\frac{10}{3}, 6)$ {#sec7}
In this part, we are going to solve the case $\mu>0$, $q=\overline{q}=\frac{10}{3}$ and $p\in (\frac{10}{3}, 6)$.
**Lemma 43**. *Let $\mu>0$, $q=\overline{q}=\frac{10}{3}$ and $p\in (\frac{10}{3}, 6)$. Then $\mathcal{P}_0=\emptyset$, and $\mathcal{P}$ is a smooth manifold of codimension 2 in $\mathcal{H}$.*
*Proof.* Suppose on the contrary that there exists $u\in \mathcal{P}_0$. By [\[k8\]](#k8){reference-type="eqref" reference="k8"} and [\[k4\]](#k4){reference-type="eqref" reference="k4"} one has $$\begin{aligned}
P(u)=\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|u|^{\overline{q}}dx=0,\label{k205}\\
\psi_u''(0)=2\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-p\gamma_p^2\int_{\mathbb{R}^3}|u|^{p}dx
-\mu q \gamma_q^2\int_{\mathbb{R}^3}|u|^{\overline{q}}dx=0.\label{k206}
\end{aligned}$$ By eliminating $|\nabla u|_2^2$ from [\[k205\]](#k205){reference-type="eqref" reference="k205"} and [\[k206\]](#k206){reference-type="eqref" reference="k206"} (that is, computing $2\times$[\[k205\]](#k205){reference-type="eqref" reference="k205"}$-$[\[k206\]](#k206){reference-type="eqref" reference="k206"}) and noting that the $\mu$-term cancels because $\overline{q}\gamma_{\overline{q}}=2$, we get $$\begin{aligned}
0
= (p \gamma_p-2)\gamma_p\int_{\mathbb{R}^3}|u|^{p}dx
+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy,
\end{aligned}$$ which implies that $|u|_p=0$. This is not possible since $u \in S_a$. The remainder of the proof is similar to [@2020Soave Lemma 5.2]. Thus we complete the proof. ◻
**Lemma 44**. *Let $\mu>0$, $q=\overline{q} =\frac{10}{3}$ and $p\in (\frac{10}{3}, 6)$. Assume that [\[k201\]](#k201){reference-type="eqref" reference="k201"} holds. For any $u\in S_a$, there exists a unique $t_u \in \mathbb{R}$ such that $t_u\star u\in \mathcal{P}$. $t_u$ is the unique critical point of the function $\psi_u$, and is a strict maximum point at positive level. Moreover:*
- *(i) $\mathcal{P}=\mathcal{P}_-$;*
- *(ii) the function $\psi_u$ is strictly decreasing and concave on $(t_u,+\infty)$. In particular, $t_u < 0\Leftrightarrow P(u) <0$;*
- *(iii) the map $u\in S_a\mapsto t_u\in \mathbb{R}$ is of class $C^1$.*
*Proof.* Since $$\begin{aligned}
\psi_u(s)
&=\frac{e^{2s}}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\frac{e^{p\gamma_ps}}{p}\int_{\mathbb{R}^3}|u|^{p}dx
-\frac{\mu e^{2s}}{\overline{q}}
\int_{\mathbb{R}^3}|u|^{\overline{q}}dx \\
&= \left(\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx-\frac{\mu }{\overline{q}}
\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\right) e^{2s}+ \frac{e^{s}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\frac{e^{p\gamma_ps}}{p}\int_{\mathbb{R}^3}|u|^{p}dx,\end{aligned}$$ where we used $\overline{q}\gamma_{\overline{q}}=2$ to group the kinetic and the $\overline{q}$-terms. Then, by the fact that $s\star u\in \mathcal{P}\Leftrightarrow \psi_u'(s)=0$, to prove the existence and uniqueness of $t_u$, together with the monotonicity and concavity properties of $\psi_u$, it suffices to show that the term inside the brackets is positive. This is clearly satisfied, since $$\begin{aligned}
\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx-\frac{\mu }{\overline{q}}
\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\geq \left( \frac{1}{2}-\frac{\mu}{\overline{q}}C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}\right)|\nabla u|_2^2>0\end{aligned}$$ by the assumption [\[k201\]](#k201){reference-type="eqref" reference="k201"}. The rest of the proof is similar to that of [@2020Soave Lemmas 5.3 and 6.1 ]. Thus, we complete the proof. ◻
**Lemma 45**. *Assume that $\mu>0$, $q=\overline{q}= \frac{10}{3}$, $p\in (\frac{10}{3}, 6)$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"} hold. Then $\sigma_{\overline{q}}:=\inf\limits_{\mathcal{P}} \mathcal{J}(u)>0$.*
*Proof.* Since $P(u)=0$, we have $$\begin{aligned}
|\nabla u|_2^2+ \frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\gamma_p \int_{\mathbb{R}^3}|u|^{p}dx-\mu \gamma_{\overline{q}} \int_{\mathbb{R}^3}|u|^{\overline{q}}dx=0,\end{aligned}$$ which, together with the Gagliardo-Nirenberg inequality [\[k10\]](#k10){reference-type="eqref" reference="k10"}, implies that (see also the sketch recorded after this proof) $$\begin{aligned}
\label{k200}
\left(1-\frac{3}{5}\mu C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}\right)|\nabla u|_2^2\leq C_{p}^p \gamma_pa^{p(1-\gamma_p)} |\nabla u|_2^{p\gamma_p}.\end{aligned}$$ We deduce from the above and [\[k201\]](#k201){reference-type="eqref" reference="k201"} that $\inf_{\mathcal{P}} |\nabla u|_2 >0.$ Hence, $$\begin{aligned}
\mathcal{J} (u)=\mathcal{J} (u)-\frac{1}{p\gamma_p} P(u)
\geq \frac{1}{2}\Big(1- \frac{2}{p\gamma_p}\Big)\big(1-\frac{3}{5} C_{\overline{q}}^{\overline{q}}\mu a^{\frac{4}{3}}\big)|\nabla u|_2^2>0,\end{aligned}$$ where the assumption [\[k201\]](#k201){reference-type="eqref" reference="k201"} is applied. Thus, the thesis follows. ◻
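For completeness, here is a short sketch of how [\[k200\]](#k200){reference-type="eqref" reference="k200"} follows from $P(u)=0$; we use that $\gamma_{\overline{q}}=\frac{3}{5}$ and $\overline{q}(1-\gamma_{\overline{q}})=\frac{4}{3}$, so that [\[k10\]](#k10){reference-type="eqref" reference="k10"} gives $\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\leq C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}|\nabla u|_2^{2}$ on $S_a$: $$|\nabla u|_2^2\leq \gamma_p\int_{\mathbb{R}^3}|u|^{p}dx+\mu\gamma_{\overline{q}}\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\leq C_{p}^p \gamma_pa^{p(1-\gamma_p)} |\nabla u|_2^{p\gamma_p}+\frac{3}{5}\mu C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}|\nabla u|_2^{2},$$ and moving the last term to the left-hand side gives exactly [\[k200\]](#k200){reference-type="eqref" reference="k200"}.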
**Lemma 46**. *Assume that $\mu>0$, $q=\overline{q}= \frac{10}{3}$, $p\in (\frac{10}{3}, 6)$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"} hold. Then there exists $k > 0$ sufficiently small such that $$\begin{aligned}
0<\sup_{ \overline{A}_k\cap S_a} \mathcal{J}< \sigma_{\overline{q}}\quad \text{and} \quad u\in \overline{A}_k\cap S_a\Rightarrow \mathcal{J}(u), P(u)>0,\end{aligned}$$ where $A_k:=\{u\in \mathcal{H}: |\nabla u|_2\leq k\}$.*
*Proof.* The proof is similar to that of [@2020Soave Lemma 6.4]; we omit it here. ◻
In what follows we address the case $\mu>0$, $q= \overline{q}=\frac{10}{3}$ and $p\in (\frac{10}{3}, 6)$, and consider the problem in $\mathcal{H}_r$. We prove the existence of a ground state of mountain pass type at level $\sigma_{\overline{q},r}:= \inf\limits_{\mathcal{P}_r} \mathcal{J}(u)>0$, where $\mathcal{P}_r=\mathcal{H}_r \cap \mathcal{P}$. Let $S_{a,r}= \mathcal{H}_r\cap S_a$ and $\mathcal{P}_{\pm,r}= \mathcal{H}_r\cap\mathcal{P}_{\pm}$.
Firstly, we verify the Mountain Pass geometry of $\mathcal{J}$ on the manifold $S_{a,r}$. Let $$\begin{aligned}
\Gamma_{\overline{q}}:=\big\{\zeta\in C([0,1], S_{a,r}): \zeta(0)\in \overline{A}_k\cap S_{a,r} \quad \text{and}\quad \mathcal{J}(\zeta(1))<0\big\},\end{aligned}$$ with associated minimax level $$c_{\overline{q},r}:=\inf_{\zeta\in \Gamma_{\overline{q}}}\max_{u\in \zeta([0,1])}\mathcal{J}(u).$$ Take $v\in S_{a,r}$. Since $|\nabla (s\star v)|_2\rightarrow 0^+$ as $s\rightarrow -\infty$ and $\mathcal{J}( s\star v)\rightarrow -\infty$ as $s\rightarrow +\infty$, there exist $s_0\ll-1$ and $s_1\gg 1$ such that $$\zeta_v(\tau)=[(1-\tau)s_0+\tau s_1]\star v\in \Gamma_{\overline{q}},$$ which implies that $\Gamma_{\overline{q}}\neq \emptyset$. Next, we claim that $$c_{\overline{q},r}=\sigma_{\overline{q},r}:=\inf_{u\in\mathcal{P}_{r}}\mathcal{J}(u)>0.$$ On the one hand, assume by contradiction that there exists $\widetilde{v}\in \mathcal{P}_{r}$ such that $\mathcal{J}(\widetilde{v})<c_{\overline{q},r}$. Then there exist $\widetilde{s}_0\ll-1$ and $\widetilde{s}_1\gg 1$ such that $\widetilde{s}_0 \star \widetilde{v}\in \overline{A}_k$ and $\mathcal{J}(\widetilde{s}_1\star\widetilde{v})<0$. Hence, $\zeta_{\widetilde{v}}(\tau)=[(1-\tau)\widetilde{s}_0+\tau \widetilde{s}_1]\star \widetilde{v}\in \Gamma_{\overline{q}}$, which, together with Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}, shows that $$\begin{aligned}
c_{\overline{q},r}\leq \max_{\tau\in[0,1]}\mathcal{J}(\zeta_{\widetilde{v}}(\tau) )=\max_{\tau\in[0,1]}\mathcal{J}([(1-\tau)\widetilde{s}_0+\tau \widetilde{s}_1]\star \widetilde{v})=\mathcal{J}(\widetilde{v} )<c_{\overline{q},r},\end{aligned}$$ a contradiction. Thus, for any $u\in \mathcal{P}_{r}$, one infers $\mathcal{J}(u)\geq c_{\overline{q},r}$, which implies that $\sigma_{\overline{q},r}\geq c_{\overline{q},r}$. On the other hand, for any $\zeta\in \Gamma_{\overline{q}}$, since $\zeta(0)\in \overline{A}_k\cap S_{a,r}$, Lemma [Lemma 46](#K-J3){reference-type="ref" reference="K-J3"} gives $P( \zeta(0))>0$. Thus, $t_{\zeta(0)}> 0$, where $t_{\zeta(0)}$, given by Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}, satisfies $t_{\zeta(0)}\star \zeta(0)\in \mathcal{P}_{r}$. Moreover, since $\psi_{\zeta(1)}(s)>0$ for any $s\in(-\infty, t_{ \zeta(1)})$ and $\psi_{\zeta(1)}(0)=\mathcal{J}(\zeta(1))<0$, where $t_{ \zeta(1)}$ satisfies $t_{\zeta(1)}\star \zeta(1)\in \mathcal{P}_{r}$, we know that $t_{\zeta(1)}<0$. Hence, from Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}-(iii), there exists $\overline{\tau}\in(0,1)$ such that $t_{\zeta(\overline{\tau})}=0$ by continuity. Hence, $\zeta(\overline{\tau})\in \mathcal{P}_{r}$. So, $$\max_{u\in \zeta([0,1])}\mathcal{J}(u)\geq \mathcal{J}(\zeta(\overline{\tau}))\geq \inf_{\mathcal{P}_{r} }\mathcal{J}(u)=\sigma_{\overline{q},r},$$ which implies that $c_{\overline{q},r}\geq \sigma_{\overline{q},r}$. Hence, we deduce that $c_{\overline{q},r}=\sigma_{\overline{q},r}$. Furthermore, it follows from Lemma [Lemma 45](#K-J2){reference-type="ref" reference="K-J2"} that $c_{\overline{q},r}=\sigma_{\overline{q},r}>0$. Then the claim follows.
Fix any $a>0$ satisfying [\[k201\]](#k201){reference-type="eqref" reference="k201"}. It is easy to verify that the set $$L:=\{u\in \mathcal{P}_{r}:\mathcal{J}(u)\leq c_{\overline{q},r}+1\}$$ is bounded. Then let $M_0>0$ be such that $L\subset B(0, M_0)$, where $$B(0, M_0):=\{u\in \mathcal{H}_r: \|u\|\leq M_0\}.$$
Arguing as in Lemmas [Lemma 34](#K-Lem3.1){reference-type="ref" reference="K-Lem3.1"} and [Lemma 35](#K-Lem3.2){reference-type="ref" reference="K-Lem3.2"}, we have the following two results, which are crucial for deriving a special Palais-Smale sequence.
**Lemma 47**. *Assume that $\mu>0$, $q=\overline{q}=\frac{10}{3}$, $p\in (\frac{10}{3}, 6)$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"} hold. Let $$\begin{aligned}
\Omega_{\delta}:=\left\{u\in S_{a,r}:\ |\mathcal{J}(u)-c_{\overline{q},r} |\leq\delta, \ \mathrm{dist}(u, \mathcal{P}_{r})\leq 2\delta, \ \|\mathcal{J}'_{|_{S_{a,r}}}(u)\|_{\mathcal{H}_r^{-1}}\leq 2 \delta\right\}.\end{aligned}$$ Then, for any $\delta>0$, $\Omega_{\delta}\cap B(0, 3M_0)$ is nonempty.*
**Lemma 48**. *Let $\mu>0$, $q=\overline{q}=\frac{10}{3}$, $p\in (\frac{10}{3}, 6)$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"} hold. Then there exist a sequence $\{u_n\}\subset S_{a,r}$ and a constant $\alpha>0$ satisfying $$P(u_n)=o(1), \quad \mathcal{J}(u_n)=c_{\overline{q},r} +o(1) \quad \text{and} \quad \|\mathcal{J}'_{|_{S_{a,r}}}(u_n)\|_{\mathcal{H}_{r}^{-1}}= o(1).$$*
**Lemma 49**. *Let $\mu>0$, $q=\overline{q}=\frac{10}{3}$, $p\in (\frac{10}{3}, 6)$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"} hold. Assume that $\{u_n\}\subset S_{a,r}$ is a Palais-Smale sequence at level $c\neq 0$ and $P(u_n)\rightarrow 0$ as $n\rightarrow\infty$. Then there exists $u\in S_{a,r}$ such that $u_n\rightarrow u$ in $\mathcal{H}_r$ and, for some $\lambda\in\mathbb{R}^+$, $(\lambda, u )\in \mathbb{R}^+ \times \mathcal{H}_{r}$ solves Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}.*
*Proof.* By [\[k96\]](#k96){reference-type="eqref" reference="k96"}, the fact that $P(u_n)\rightarrow 0$ as $n\rightarrow\infty$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"}, we easily get that $\{u_n\}$ is bounded in $\mathcal{H}_r$. Then there exists $u\in \mathcal{H}_r$ such that, up to a subsequence, as $n\rightarrow\infty$, $$\begin{aligned}
& u_n\rightharpoonup u \quad\quad \text{in}\ \mathcal{H}_r;\\
& u_n\rightarrow u\quad\quad \text{in}\ L^t(\mathbb{R}^3)\ \text{with}\ t\in(2, 6);\\
& u_n\rightarrow u \quad\quad \text{a.e. in}\ \mathbb{R}^3.\end{aligned}$$ Since $\mathcal{J}'_{|_{S_a}}(u_n)\rightarrow 0$, there exists $\lambda_n\in \mathbb{R}$ satisfying $$\begin{aligned}
%\label{k219}
-\Delta u_n+\lambda_n u_n+(|x|^{-1}\ast |u_n|^2)u_n =\mu |u_n|^{\overline{q}-2}u_n+|u_n|^{p-2}u_n+o(1).\end{aligned}$$ Multiplying the above equation by $u_n$ and integrating in $\mathbb{R}^3$, $$\begin{aligned}
\label{k215}
\lambda_n a^2= -|\nabla u_n|_2^2-\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_n(x)|^{2}|u_n(y)|^2} {|x-y|}dxdy
+\int_{\mathbb{R}^3}|u_n|^{p}dx+\mu \int_{\mathbb{R}^3}|u_n|^{\overline{q}}dx+o( \|u_n\|).\end{aligned}$$ Since $\{u_n\}\subset \mathcal{H}_r$ is bounded, it follows from [\[k215\]](#k215){reference-type="eqref" reference="k215"} that $\{\lambda_n\}$ is bounded in $\mathbb{R}$. Hence, there is $\lambda\in \mathbb{R}$ satisfying $\lambda_n\rightarrow \lambda$ as $n\rightarrow\infty$. Then we have $$\begin{aligned}
\label{k220}
-\Delta u+\lambda u+ (|x|^{-1}\ast |u|^2)u =\mu |u|^{\overline{q}-2}u+|u|^{p-2}u.\end{aligned}$$ Clearly, $u\neq0$. Next, we show that $\lambda>0$. Since $(\lambda, u)\in \mathbb{R}\times \mathcal{H}_r$ satisfies [\[k220\]](#k220){reference-type="eqref" reference="k220"}, the following equalities hold: $$\begin{aligned}
|\nabla u|_2^2+ \lambda \int_{\mathbb{R}^3} |u|^{2}dx +\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\int_{\mathbb{R}^3}|u|^{p}dx-\mu \int_{\mathbb{R}^3}|u|^{\overline{q}}dx&=0,\label{k217}\\
|\nabla u|_2^2+ \frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy-\gamma_p \int_{\mathbb{R}^3}|u|^{p}dx-\mu \gamma_{\overline{q}} \int_{\mathbb{R}^3}|u|^{\overline{q}}dx&=0.\label{k218}\end{aligned}$$ As in the derivation of [\[k200\]](#k200){reference-type="eqref" reference="k200"}, one gets from [\[k218\]](#k218){reference-type="eqref" reference="k218"} that $$\begin{aligned}
\label{k202}
|\nabla u|_2\geq \Big(\frac{1}{C_p^p \gamma_p a^{p(1-\gamma_p)}}\Big)^{\frac{1}{p \gamma_p-2}} \Big( 1-\frac{3}{5}\mu C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}\Big)^{\frac{1}{p \gamma_p-2}}>0.\end{aligned}$$ We argue by contradiction assuming that $\lambda\leq 0$. By eliminating $\int_{\mathbb{R}^3}|u|^{p}dx$ from [\[k217\]](#k217){reference-type="eqref" reference="k217"} and [\[k218\]](#k218){reference-type="eqref" reference="k218"}, $$\begin{aligned}
(\gamma_p-1)|\nabla u|_2^2+ \lambda \gamma_p \int_{\mathbb{R}^3} |u|^{2}dx
+\big(\gamma_p-\frac{1}{4}\big) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy +\mu ( \gamma_{\overline{q}}-\gamma_p) \int_{\mathbb{R}^3}|u|^{\overline{q}}dx=0.\end{aligned}$$ Then, $$\begin{aligned}
0\leq-\lambda \gamma_p \int_{\mathbb{R}^3} |u|^{2}dx =&(\gamma_p-1)|\nabla u|_2^2+(\gamma_p-\frac{1}{4}) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy +\mu ( \gamma_{\overline{q}}-\gamma_p) \int_{\mathbb{R}^3}|u|^{\overline{q}}dx\\
\leq & (\gamma_p-1)|\nabla u|_2^2+\big(\gamma_p-\frac{1}{4}\big) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy\\
\leq & (\gamma_p-1)|\nabla u|_2^2+ (\gamma_p-\frac{1}{4}) C_{\frac{12}{5}}^{\frac{12}{5}}a^3|\nabla u|_2,\end{aligned}$$ which gives that $$\begin{aligned}
\label{kk149}
(1-\gamma_p) |\nabla u|_2\leq (\gamma_p-\frac{1}{4}) C_{\frac{12}{5}}^{\frac{12}{5}}a^3.\end{aligned}$$ Combining [\[k202\]](#k202){reference-type="eqref" reference="k202"} and [\[kk149\]](#kk149){reference-type="eqref" reference="kk149"}, we get a contradiction in view of [\[k201\]](#k201){reference-type="eqref" reference="k201"}. Thus, $\lambda>0$. Arguing as in Step 3 of the proof of Lemma [Lemma 36](#K-Lem2.6){reference-type="ref" reference="K-Lem2.6"}, we have $\|u_n-u \|\rightarrow 0$ in $\mathcal{H}_r$ as $n\rightarrow\infty$. Hence, one infers that $( \lambda, u)\in \mathbb{R}^+\times \mathcal{H}_r$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, which satisfies $\mathcal{J}(u)=c_{\overline{q},r}$. We complete the proof. ◻
In view of Lemmas [Lemma 48](#KJ-Lem3.2){reference-type="ref" reference="KJ-Lem3.2"} and [Lemma 49](#KJ-Lem2.6){reference-type="ref" reference="KJ-Lem2.6"}, there exists $( \lambda_{\overline{q}}, u_{\overline{q}} )\in \mathbb{R}^+ \times \mathcal{H}_r$ such that $( \lambda_{\overline{q}}, u_{\overline{q}} )$ solves Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} and $\mathcal{J}( u_{\overline{q}})=c_{\overline{q},r}$ with $|u_{\overline{q}} |_2^2=a^2$. Arguing as in the proof of Theorem [Theorem 3](#K-TH1){reference-type="ref" reference="K-TH1"}, we know that $u_{\overline{q}} >0$.
# Properties of solutions {#sec4}
## Asymptotic behavior of solutions as $\mu\rightarrow 0$
In this subsection, since we study the dependence on $\mu$, for $\mu\geq 0$ we write $\mathcal{J}_{\mu}$, $\mathcal{P}_{\mu, \pm}$ and $c_{a, \mu}$ to denote the functional $\mathcal{J}$, the Pohožaev manifold $\mathcal{P}_{\pm}$ and the mountain pass level $c_{a}$ in Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"}, respectively. Under the assumption of Theorem [Theorem 11](#K-TH6){reference-type="ref" reference="K-TH6"}, let $(\widehat{\lambda}_{\mu}, \widehat{u}_{\mu})\in \mathbb{R}^+ \times \mathcal{H}_r$ be the mountain pass type solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} obtained in Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"}, where $$\begin{aligned}
\label{k108}
c_{a, \mu}= \inf_{u\in\mathcal{P}_{\mu, -,r}}\mathcal{J}_{\mu}(u) = \mathcal{J}_{\mu}(\widehat{u}_{\mu}).
\end{aligned}$$ Moreover, according to Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"}, let $( \widetilde{\lambda}, \widetilde{u} )\in \mathbb{R}^+ \times \mathcal{H}_r$ be the solution for Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, where $$\begin{aligned}
%\label{k109}
c_{a, 0}= \inf_{u\in\mathcal{P}_{0, -,r}}\mathcal{J}_{0}(u) = \mathcal{J}_{0}(\widetilde{u}).
\end{aligned}$$
**Lemma 50**. *Under the assumption of Theorem [Theorem 11](#K-TH6){reference-type="ref" reference="K-TH6"}, it holds that $c_{a, \mu}=\inf_{u\in S_{a, r}}\max_{s\in \mathbb{R}} \mathcal{J}_{\mu} (s\star u)$ and $c_{a, 0}=\inf_{u\in S_{a, r}}\max_{s\in \mathbb{R}} \mathcal{J}_{0} (s\star u)$.*
*Proof.* On the one hand, by [\[k108\]](#k108){reference-type="eqref" reference="k108"} and Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"}, we know that $$c_{a, \mu}=\mathcal{J}_{\mu} (\widehat{u}_{\mu})=\max_{s\in \mathbb{R}} \mathcal{J}_{\mu} (s\star \widehat{u}_{\mu})\geq\inf_{u\in S_{a, r}}\max_{s\in \mathbb{R}} \mathcal{J}_{\mu} (s\star u).$$ On the other hand, recalling that $c_{a, \mu}= \inf_{u\in\mathcal{P}_{\mu, -,r}}\mathcal{J}_{\mu}(u)$, it follows from Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"} that, for any $u\in S_{a, r}$, we have $t_u \star u \in \mathcal{P}_{\mu,-,r}$, and hence $$\max_{s\in \mathbb{R}} \mathcal{J}_{\mu} (s\star u)\geq \mathcal{J}_{\mu} (t_u\star u)\geq\inf_{u\in\mathcal{P}_{\mu, -,r}}\mathcal{J}_{\mu}(u)=c_{a, \mu}.$$ Similarly, we can prove $c_{a, 0}=\inf_{u\in S_{a, r}}\max_{s\in \mathbb{R}} \mathcal{J}_{0} (s\star u)$. Thus, we complete the proof. ◻
**Lemma 51**. *Under the assumption of Theorem [Theorem 11](#K-TH6){reference-type="ref" reference="K-TH6"}, for any $0< \mu_1<\mu_2$, we have $c_{a, \mu_2} \leq c_{a, \mu_1}\leq c_{a, 0}$.*
*Proof.* From Lemma [Lemma 50](#K-Lem6.1){reference-type="ref" reference="K-Lem6.1"}, for any $0< \mu_1<\mu_2$, we deduce that $$\begin{aligned}
c_{a, \mu_2}&\leq \max_{s\in \mathbb{R}} \mathcal{J}_{\mu_2}(s\star \widehat{u}_{ \mu_1})\\
&= \max_{s\in \mathbb{R}}\Big[\mathcal{J}_{\mu_1}(s\star \widehat{u}_{ \mu_1})+\frac{e^{q \gamma_q s}}{q}(\mu_1-\mu_2)|\widehat{u}_{ \mu_1}|_q^q \Big]\\
&\leq \max_{s\in \mathbb{R}}\mathcal{J}_{\mu_1}(s\star \widehat{u}_{ \mu_1})\\
&= \mathcal{J}_{\mu_1}( \widehat{u}_{ \mu_1})\\
&=c_{a, \mu_1}.\end{aligned}$$ Similarly, $c_{a, \mu_1}\leq c_{a, 0}$ can be obtained. Hence, we complete the proof. ◻
Assume that $\mu>0$, $q\in(2, \frac{12}{5} ]$, $p\in (\frac{10}{3}, 6)$, $a\in(0, \min \{\overline{a}_0, \widetilde{a}_0\})$ and [\[k3\]](#k3){reference-type="eqref" reference="k3"} hold. Let $\{(\widehat{\lambda}_{\mu}, \widehat{u}_{\mu})\}\subseteq \mathbb{R}^+ \times \mathcal{H}_r$ be the family of solutions of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\mathcal{J}_{\mu}(\widehat{u}_{\mu})=c_{a, \mu}$. Now, we prove that $\{\widehat{u}_{\mu}\}$ is bounded in $\mathcal{H}_r$. Since $P_{\mu}(\widehat{u}_{\mu} )=0$, by Lemma [Lemma 51](#K-Lem6.2){reference-type="ref" reference="K-Lem6.2"} we have $$\begin{aligned}
c_{a, 0}\geq c_{a, \mu}
=&\mathcal{J}_{\mu} (\widehat{u}_{\mu})-\frac{1}{p\gamma_p} P_{\mu}(\widehat{u}_{\mu} )\nonumber\\
=&\left(\frac{1}{2}-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3}|\nabla \widehat{u}_{\mu} |^2dx+\frac{1}{4}\left(1-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|\widehat{u}_{\mu}(x) |^{2}|\widehat{u}_{\mu} (y)|^2} {|x-y|}dxdy\\
&+\mu\left(\frac{\gamma_q}{p\gamma_p}-\frac{1}{q}\right)\int_{\mathbb{R}^3}|\widehat{u}_{\mu} |^{q}dx\nonumber\\
\geq &\left(\frac{1}{2}-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3}|\nabla \widehat{u}_{\mu} |^2dx-\mu \frac{p\gamma_p-q\gamma_q}{pq\gamma_p} C_q^qa^{(1-\gamma_q)q}|\nabla \widehat{u}_{\mu}|_2^{q \gamma_q},\end{aligned}$$ which shows that $\{\widehat{u}_{\mu} \}$ is bounded in $\mathcal{H}_r$, since $q\gamma_q<2$. Therefore, there exists $\widehat{u}_0\in \mathcal{H}_r$ such that, up to a subsequence, as $\mu\rightarrow 0$, $$\begin{aligned}
& \widehat{u}_{\mu}\rightharpoonup \widehat{u}_0 \quad\quad \text{in}\ \mathcal{H}_r;\\
& \widehat{u}_{\mu}\rightarrow \widehat{u}_0\quad\quad \text{in}\ L^t(\mathbb{R}^3)\ \text{with}\ t\in(2, 6);\\
& \widehat{u}_{\mu}\rightarrow \widehat{u}_0\quad\quad \text{a.e. in}\ \mathbb{R}^3.\end{aligned}$$ Since $(\widehat{\lambda}_{\mu}, \widehat{u}_{\mu})\in \mathbb{R}^+ \times \mathcal{H}_{r}$ satisfies $$\begin{aligned}
\label{kkk191}
-\Delta \widehat{u}_{\mu}+\widehat{\lambda}_{\mu}\widehat{u}_{\mu}+ (|x|^{-1}\ast|\widehat{u}_{\mu}|^2 )\widehat{u}_{\mu} =\mu |\widehat{u}_{\mu}|^{q-2}\widehat{u}_{\mu}+|\widehat{u}_{\mu}|^{p-2}\widehat{u}_{\mu},\end{aligned}$$ multiplying this equation by $\widehat{u}_{\mu}$ and integrating over $\mathbb{R}^3$ gives $$\begin{aligned}
\widehat{\lambda}_{\mu} a^2= -|\nabla \widehat{u}_{\mu}|_2^2-\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|\widehat{u}_{\mu}(x)|^{2}|\widehat{u}_{\mu}(y)|^2} {|x-y|}dxdy
+\int_{\mathbb{R}^3}|\widehat{u}_{\mu}|^{p}dx+\mu \int_{\mathbb{R}^3}|\widehat{u}_{\mu}|^{q}dx,\end{aligned}$$ which implies that $\{\widehat{\lambda}_{\mu}\}$ is bounded in $\mathbb{R}$. Hence, there is $\widehat{\lambda}_0 \geq 0$ such that, up to a subsequence, $\widehat{\lambda}_{\mu}\rightarrow \widehat{\lambda}_0$ as $\mu\rightarrow 0$. Moreover, letting $\mu\rightarrow 0$ in [\[kkk191\]](#kkk191){reference-type="eqref" reference="kkk191"}, we have $$\begin{aligned}
\label{kkk20}
-\Delta \widehat{u}_0+\widehat{\lambda}_0 \widehat{u}_0+(|x|^{-1}\ast |\widehat{u}_0|^2)\widehat{u}_0
= |\widehat{u}_0|^{p-2}\widehat{u}_0.\end{aligned}$$ Clearly, $\widehat{u}_0\neq 0$. Moreover, using the assumption [\[k3\]](#k3){reference-type="eqref" reference="k3"} and arguing as in the proof of [\[k40\]](#k40){reference-type="eqref" reference="k40"}, we find that the Lagrange multiplier satisfies $\widehat{\lambda}_0>0$. Then, using a method similar to Step 3 in the proof of Lemma [Lemma 36](#K-Lem2.6){reference-type="ref" reference="K-Lem2.6"}, we conclude that $\widehat{u}_{\mu}\rightarrow \widehat{u}_{0}$ in $\mathcal{H}_{r}$ and $\widehat{\lambda}_{\mu}\rightarrow \widehat{\lambda}_{0}$ in $\mathbb{R}$ as $\mu\rightarrow0$, up to a subsequence, where $(\widehat{\lambda}_{0}, \widehat{u}_{0})\in \mathbb{R}^+ \times \mathcal{H}_r$ is a solution of the following equation $$\begin{aligned}
\begin{cases}
\displaystyle - \Delta u+\lambda u+ (|x|^{-1}\ast|u|^2)u=|u|^{p-2}u \quad \quad \text{in} \ \mathbb{ R}^3,\\
\displaystyle \int_{\mathbb{R}^3}|u|^2dx=a^2.\\
\end{cases}
\end{aligned}$$ Thus, we complete the proof. $\hfill\Box$
## Limit behavior of solutions as $q\rightarrow \frac{10}{3}$
In this subsection, in order to study the limit behavior of normalized solutions as $q\rightarrow \frac{10}{3}$, we use $\mathcal{P}_{q,r}$, $\mathcal{J}_q$ and $P_q$ instead of $\mathcal{P}_{r}$, $\mathcal{J}$ and $P$ to indicate the dependence on $q$ for convenience. Similar to Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}, we have
**Lemma 52**. *Let $\mu>0$, $\frac{10}{3}=\overline{q} <q< p<6$. For any $u\in S_a$, there exists a unique $t_u \in \mathbb{R}$ such that $t_u\star u\in \mathcal{P}_{q, r}$. $t_u$ is the unique critical point of the function $\psi_u$, and is a strict maximum point at positive level. Moreover:*
- *$\mathcal{P}_{q,r}=\mathcal{P}_{q,r,-}$;*
- *the function $\psi_u$ is strictly decreasing and concave on $(t_u,+\infty)$. In particular, $t_u < 0\Leftrightarrow P_q(u) <0$;*
- *the map $u\in S_a\mapsto t_u\in \mathbb{R}$ is of class $C^1$.*
It is easy to see that $c_{q,r}=\sigma_{q,r}:=\inf_{u\in\mathcal{P}_{q,r}}\mathcal{J}_q(u)>0,$ where $c_{q,r}$ is the mountain pass level defined in Theorem [Theorem 2](#K-T1){reference-type="ref" reference="K-T1"}.
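Throughout this subsection we will repeatedly use the explicit values of the exponents involved. As a short worked computation (assuming, as the estimates below indicate, that $\gamma_q=\frac{3(q-2)}{2q}$ denotes the exponent appearing in the Gagliardo-Nirenberg inequality [\[k10\]](#k10){reference-type="eqref" reference="k10"}), we have $$\gamma_q=\frac{3(q-2)}{2q},\qquad q\gamma_q=\frac{3(q-2)}{2},\qquad \gamma_{\overline{q}}=\frac{3}{5},\qquad \overline{q}\,\gamma_{\overline{q}}=2,\qquad \overline{q}\,(1-\gamma_{\overline{q}})=\frac{4}{3},$$ so that $q\gamma_q>2$ precisely when $q>\overline{q}=\frac{10}{3}$, and $\frac{1-\gamma_{\overline{q}}}{\gamma_{\overline{q}}\,a^2}=\frac{2}{3a^2}$, which is the constant appearing in the proof of Lemma [Lemma 55](#KJ-Lem3){reference-type="ref" reference="KJ-Lem3"} below.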
**Lemma 53**. *Assume that $\mu>0$, $\frac{10}{3}=\overline{q}<q<p<6$ and $a\in (0, \widetilde{\kappa})$ satisfying [\[k201\]](#k201){reference-type="eqref" reference="k201"}. It holds that $$\begin{aligned}
\label{k223}
\liminf_{q\rightarrow \overline{q}} c_{q, r}>0,\end{aligned}$$ and $$\begin{aligned}
\label{k224}
\limsup_{q\rightarrow \overline{q}}c_{q, r}\leq c_{\overline{q}, r}.\end{aligned}$$*
*Proof.* The proof of [\[k223\]](#k223){reference-type="eqref" reference="k223"} is similar to that of [@2023-JDE-Qi Lemma 3.1], so we omit it here. Now, we prove [\[k224\]](#k224){reference-type="eqref" reference="k224"}. For any $\varepsilon\in (0, \frac{1}{2})$, there exists $u\in \mathcal{P}_{\overline{q},r}$ such that $\mathcal{J}_{\overline{q}}(u)\leq c_{\overline{q},r}+\varepsilon$. Then, since $\overline{q}<p<6$ implies $p\gamma_p>2$, the term $-\frac{e^{p\gamma_pT}}{p}\int_{\mathbb{R}^3}|u|^{p}dx$ dominates as $T\rightarrow+\infty$, and hence there is $T\in \mathbb{R}$ large enough satisfying $$\begin{aligned}
\mathcal{J}_{\overline{q}}(T\star u )
=&\frac{e^{2T}}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{e^{T}}{4} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy \nonumber\\
&-\frac{e^{p\gamma_pT}}{p}\int_{\mathbb{R}^3}|u|^{p}dx
-\frac{\mu e^{2T}}{\overline{q}}
\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\nonumber\\
\leq & -1.\end{aligned}$$ By the continuity of $\frac{e^{q\gamma_qs}}{q}\int_{\mathbb{R}^3}|u|^{q}dx$ on $(s,q)\in(-\infty, T]\times [\overline{q}, 6)$, there exists $\eta > 0$ such that $$\begin{aligned}
\label{k225}
| \mathcal{J}_{q}(s\star u)- \mathcal{J}_{\overline{q}}(s\star u)|=\mu\bigg|\frac{e^{q\gamma_qs}}{q}\int_{\mathbb{R}^3}|u|^{q}dx
-\frac{e^{2s}}{\overline{q}}\int_{\mathbb{R}^3}|u|^{\overline{q}}dx \bigg|<\varepsilon\end{aligned}$$ for all $\overline{q}<q<\overline{q}+\eta$ and $s\in(-\infty, T]$, which shows that $\mathcal{J}_{q}(T\star u)\leq -\frac{1}{2}$ for all $\overline{q}<q<\overline{q}+\eta$. Besides, when $s\in \mathbb{R}$ is negative enough, one gets $$\mathcal{J}_{q}(s\star u )>0.$$ Hence, there is $s_0\in (-\infty, T)$ such that $\frac{d}{ds} \mathcal{J}_{q}(s\star u)|_{s=s_0}=0$. This shows that $s_0\star u\in \mathcal{P}_{q, r}$. Thus, by [\[k225\]](#k225){reference-type="eqref" reference="k225"}, Lemmas [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"} and [Lemma 52](#KK-J1){reference-type="ref" reference="KK-J1"} we have $$c_{q, r}\leq \mathcal{J}_{q}(s_0\star u)\leq \mathcal{J}_{\overline{q}}(s_0\star u)+\varepsilon\leq \mathcal{J}_{\overline{q}}( u)+\varepsilon\leq c_{\overline{q},r}+2\varepsilon$$ for all $\overline{q}<q<\overline{q}+\eta$. Therefore, we complete the proof. ◻
**Lemma 54**. *Assume that $\mu>0$, $\frac{10}{3}=\overline{q}<q<p<6$ and [\[k201\]](#k201){reference-type="eqref" reference="k201"} hold. Let $(\lambda_q, u_q )$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} at the level $c_{q,r}$, then there is a constant $C > 0$ independent of $q$ such that $$\begin{aligned}
\lim_{q\rightarrow \overline{q}}\|\nabla u_q\|_2^2\leq C.\end{aligned}$$*
*Proof.* Assume to the contrary that there is a subsequence, denoted still by $\{u_q\}$, such that $$\begin{aligned}
\label{k229}
\|\nabla u_q\|_2^2\rightarrow\infty\quad\quad \text{as}\ q\rightarrow \overline{q}.\end{aligned}$$ Since $u_q$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, then $$\begin{aligned}
\label{k227}
P_q(u_q)=\int_{\mathbb{R}^3}|\nabla u_q|^2dx+\frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy
-\gamma_p\int_{\mathbb{R}^3}|u_q|^{p}dx
-\mu\gamma_q\int_{\mathbb{R}^3}|u_q|^{q}dx=0.\end{aligned}$$ Then, we deduce $$\begin{aligned}
c_{q,r}=&\mathcal{J}_q(u_q)-\frac{1}{q \gamma_q} P_q(u_q)\\
=& \left(\frac{1}{2}-\frac{1}{q\gamma_q}\right)\int_{\mathbb{R}^3}|\nabla u_q|^2dx+\frac{1}{4}\left(1-\frac{1}{q\gamma_q}\right)\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy\\
&+ \left(\frac{\gamma_p}{q\gamma_q}-\frac{1}{p}\right)\int_{\mathbb{R}^3}|u_q|^{p}dx\nonumber\\
\geq&\left(\frac{\gamma_p}{q\gamma_q}-\frac{1}{p}\right)\int_{\mathbb{R}^3}|u_q|^{p}dx,\end{aligned}$$ which, together with [\[k224\]](#k224){reference-type="eqref" reference="k224"}, implies that $$\begin{aligned}
\label{k228}
\lim_{q\rightarrow \overline{q}}\int_{\mathbb{R}^3}|u_q|^{p}dx\leq \frac{pq \gamma_q}{p\gamma_p-q\gamma_q} c_{\overline{q},r}.\end{aligned}$$ Moreover, due to $\overline{q}<q<p$, by the interpolation inequality and [\[k10\]](#k10){reference-type="eqref" reference="k10"}, we have $$\begin{aligned}
\label{k226}
\int_{\mathbb{R}^3}|u|^{q}dx&\leq\Big(\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\Big)^{\frac{p-q}{p-\overline{q}}}
\Big(\int_{\mathbb{R}^3}|u|^{p}dx\Big)^{\frac{q-\overline{q}}{p-\overline{q}}}\nonumber\\
&\leq C_{\overline{q}}^{\frac{\overline{q}(p-q)}{p-\overline{q}}} |u|_2^{\frac{4(p-q)}{3(p-\overline{q})}}|\nabla u|_2^{\frac{2(p-q)}{p-\overline{q}}} \Big(\int_{\mathbb{R}^3}|u|^{p}dx\Big)^{\frac{q-\overline{q}}{p-\overline{q}}}.\end{aligned}$$ Then, it follows from [\[k227\]](#k227){reference-type="eqref" reference="k227"} and [\[k226\]](#k226){reference-type="eqref" reference="k226"} that $$\begin{aligned}
c_{q,r}
=&\mathcal{J}_q(u_q)-\frac{1}{p \gamma_p} P_q(u_q)\\
=& \left(\frac{1}{2}-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3}|\nabla u_q|^2dx+\frac{1}{4}\left(1-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy\\
&+\mu\left(\frac{\gamma_q}{p\gamma_p}-\frac{1}{q}\right)\int_{\mathbb{R}^3}|u_q|^{q}dx\nonumber\\
\geq & \left(\frac{1}{2}-\frac{1}{p\gamma_p}\right)\int_{\mathbb{R}^3}|\nabla u_q|^2dx-\mu \frac{p\gamma_p-q\gamma_q}{pq\gamma_p} C_{\overline{q}}^{\frac{\overline{q}(p-q)}{p-\overline{q}}} |u_q|_2^{\frac{4(p-q)}{3(p-\overline{q})}}|\nabla u_q|_2^{\frac{2(p-q)}{p-\overline{q}}} \Big(\int_{\mathbb{R}^3}|u_q|^{p}dx\Big)^{\frac{q-\overline{q}}{p-\overline{q}}},\end{aligned}$$ which, together with [\[k228\]](#k228){reference-type="eqref" reference="k228"}, shows that $$c_{\overline{q},r}\geq \lim_{q\rightarrow \overline{q}}c_{q,r}\geq\frac{[3(p-2)-4](5-3\mu C_{\overline{q}}^{\overline{q}} a^{\frac{4}{3}} )}{30(p-2)}\lim_{q\rightarrow \overline{q}} |\nabla u_q|_2^2.$$ This gives a contradiction due to [\[k229\]](#k229){reference-type="eqref" reference="k229"} and [\[k201\]](#k201){reference-type="eqref" reference="k201"}. Hence, we complete the proof. ◻
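For the reader's convenience, we record how the coefficient in the last display arises; this is a direct limit computation using $p\gamma_p=\frac{3(p-2)}{2}$, $q\gamma_q\rightarrow 2$, $|u_q|_2=a$ and [\[k228\]](#k228){reference-type="eqref" reference="k228"}, which guarantees that $\limsup_{q\rightarrow \overline{q}}\big(\int_{\mathbb{R}^3}|u_q|^{p}dx\big)^{\frac{q-\overline{q}}{p-\overline{q}}}\leq 1$. Indeed, $$\frac{1}{2}-\frac{1}{p\gamma_p}=\frac{3(p-2)-4}{6(p-2)},\qquad \mu\,\frac{p\gamma_p-q\gamma_q}{pq\gamma_p}\rightarrow\mu\,\frac{3(p-2)-4}{10(p-2)},$$ while the exponents satisfy $\frac{2(p-q)}{p-\overline{q}}\rightarrow 2$, $\frac{4(p-q)}{3(p-\overline{q})}\rightarrow \frac{4}{3}$ and $\frac{\overline{q}(p-q)}{p-\overline{q}}\rightarrow \overline{q}$, so that $$\frac{3(p-2)-4}{6(p-2)}-\mu\,\frac{3(p-2)-4}{10(p-2)}\,C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}=\frac{[3(p-2)-4]\big(5-3\mu C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}}\big)}{30(p-2)}.$$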
**Lemma 55**. *Assume that $\mu>0$, $\frac{10}{3}=\overline{q}<q<p<6$ and $a\in (0, \widetilde{\kappa})$ satisfying [\[k201\]](#k201){reference-type="eqref" reference="k201"}. Let $(\lambda_q, u_q )$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} at the level $c_{q,r}$, then there exists a constant $\Lambda> 0$ independent of $q$ such that $0 < \lambda_q <\Lambda$ for any $q$ tending to $\overline{q}$.*
*Proof.* Firstly, since $u_q$ is a solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"}, then $u_q$ satisfies $$\begin{aligned}
|\nabla u_q|_2^2+ \lambda_q \int_{\mathbb{R}^3} |u_q|^{2}dx +\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy
-\int_{\mathbb{R}^3}|u_q|^{p}dx-\mu \int_{\mathbb{R}^3}|u_q|^{q}dx&=0,\label{k230}\\
P_q(u_q)=|\nabla u_q|_2^2+ \frac{1}{4}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy-\gamma_p \int_{\mathbb{R}^3}|u_q|^{p}dx-\mu \gamma_q \int_{\mathbb{R}^3}|u_q|^{q}dx&=0.\label{k231}\end{aligned}$$ By eliminating $\int_{\mathbb{R}^3}|u|^{q}dx$ from [\[k230\]](#k230){reference-type="eqref" reference="k230"} and [\[k231\]](#k231){reference-type="eqref" reference="k231"}, one infers that $$\begin{aligned}
(1-\gamma_q)|\nabla u_q|_2^2= \lambda_q \gamma_q \int_{\mathbb{R}^3} |u_q|^{2}dx
+\big(\gamma_q-\frac{1}{4}\big) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy +( \gamma_p-\gamma_q) \int_{\mathbb{R}^3}|u_q|^{p}dx,\end{aligned}$$ which, together with Lemma [Lemma 54](#KJ-Lem2){reference-type="ref" reference="KJ-Lem2"}, shows that $$\begin{aligned}
\lim_{q\rightarrow \overline{q}}\lambda_q \leq \frac{2}{3 a^2}\lim_{q\rightarrow \overline{q}}|\nabla u_q|_2^2\leq \frac{2}{3 a^2}C:=\Lambda.\end{aligned}$$ Next, we prove that $\lim_{q\rightarrow \overline{q}}\lambda_q >0$. Assume to the contrary that, up to a subsequence, $$\begin{aligned}
\label{k232}
\lim_{q\rightarrow \overline{q}}\lambda_q =0.\end{aligned}$$ From [\[k231\]](#k231){reference-type="eqref" reference="k231"} and the Gagliardo-Nirenberg inequality, we know $$|\nabla u_q|_2^2 -\gamma_pC_p^pa^{p(1-\gamma_p)} |\nabla u_q|_2^{p\gamma_p} -\mu \gamma_q C_q^q a^{q(1-\gamma_q)} |\nabla u_q|_2^{q\gamma_q}\leq 0,$$ which implies that $$\begin{aligned}
\label{k233}
(1-\frac{3}{5}\mu C_{\overline{q}}^{\overline{q}}a^{\frac{4}{3}})\lim_{q\rightarrow \overline{q}}|\nabla u_q|_2^2\leq \gamma_pC_p^p a^{p(1-\gamma_p)} \lim_{q\rightarrow \overline{q}}|\nabla u_q|_2^{p\gamma_p}.
\end{aligned}$$ We claim that $\lim_{q\rightarrow \overline{q}}|\nabla u_q|_2^2>0$. In fact, if we assume that $\lim_{q\rightarrow \overline{q}}|\nabla u_q|_2^2=0$, then by eliminating $\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy$ from [\[k230\]](#k230){reference-type="eqref" reference="k230"} and [\[k231\]](#k231){reference-type="eqref" reference="k231"}, in view of [\[k232\]](#k232){reference-type="eqref" reference="k232"}, one has $$\begin{aligned}
0=\frac{3}{4} \lim_{q\rightarrow \overline{q}}|\nabla u_q|_2^2
=\big(\gamma_p-\frac{1}{4}\big)\lim_{q\rightarrow \overline{q}}\int_{\mathbb{R}^3}|u_q|^{p}dx+\mu \frac{7}{20} \lim_{q\rightarrow \overline{q}}\int_{\mathbb{R}^3}|u_q|^{q}dx,\end{aligned}$$ which implies that $$\lim_{q\rightarrow \overline{q}}\int_{\mathbb{R}^3}|u_q|^{p}dx=0 \quad\quad \text{and} \quad\quad \lim_{q\rightarrow \overline{q}}\int_{\mathbb{R}^3}|u_q|^{q}dx=0.$$ And then, we deduce from [\[k231\]](#k231){reference-type="eqref" reference="k231"} that $\lim_{q\rightarrow\overline{q}}\int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy=0$. Hence, $\lim_{q\rightarrow\overline{q}}c_{q, r}=\lim_{q\rightarrow\overline{q}} J_q(u_q)=0$, which contradicts [\[k223\]](#k223){reference-type="eqref" reference="k223"}. Then the claim follows. Hence, it follows from [\[k201\]](#k201){reference-type="eqref" reference="k201"} and [\[k233\]](#k233){reference-type="eqref" reference="k233"} that $$\begin{aligned}
\label{k234}
\lim_{q\rightarrow\overline{q}} |\nabla u_q|_2^{p\gamma_p-2}\geq \frac{1-\mu \gamma_{\overline{q}}C_{\overline{q}}^{\overline{q}} a^{\frac{4}{3}}}{\gamma_p C_p^p a^{p(1-\gamma_p)}}>0.\end{aligned}$$ By eliminating $\int_{\mathbb{R}^3}|u_q|^{p}dx$ from [\[k230\]](#k230){reference-type="eqref" reference="k230"} and [\[k231\]](#k231){reference-type="eqref" reference="k231"}, due to Lemma [Lemma 18](#gle4){reference-type="ref" reference="gle4"}, [\[k10\]](#k10){reference-type="eqref" reference="k10"}, [\[k234\]](#k234){reference-type="eqref" reference="k234"} and Young's inequality, one infers that $$\begin{aligned}
0\leq\mu ( \gamma_p-\gamma_q) \int_{\mathbb{R}^3}|u_q|^{q}dx=&(\gamma_p-1)|\nabla u_q|_2^2
+(\gamma_p-\frac{1}{4}) \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy\\
\leq&(\gamma_p-1)|\nabla u_q|_2^2
+C|\nabla u_q|_2 |u_q|_2^3\\
\leq& -\frac{3(6-p)}{2(p-2)} |\nabla u_q|_2^2 +C a^6\\
\leq& -\frac{3(6-p)}{2(p-2)} \Big(\frac{1-\mu \gamma_{\overline{q}}C_{\overline{q}}^{\overline{q}} a^{\frac{4}{3}}}{\gamma_p C_p^p a^{p(1-\gamma_p)}} \Big)^{\frac{p\gamma_p-2}{2}}+C a^6.\end{aligned}$$ When $a\in(0, \widetilde{\kappa})$ small enough, the right-hand side of the above is negative, which is a contradiction. Then, we get $\lim_{q\rightarrow \overline{q}}\lambda_q >0$. Hence, we complete the proof. ◻
**Lemma 56**. *Assume that $\mu>0$ and $\frac{10}{3}=\overline{q}<q<p<6$. Let $(\lambda_q, u_q )$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} at the level $c_{q,r}$, there holds $$\begin{aligned}
\label{k221}
\limsup_{q\rightarrow\overline{q}}\|u_q\|_{\infty}\leq C.\end{aligned}$$ Moreover, $$\begin{aligned}
\label{k222}
\liminf_{q\rightarrow\overline{q}}\|u_q\|_{\infty}>0.\end{aligned}$$*
*Proof.* Let $u_q$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} at the level $c_{q,r}$. Without loss of generality, we assume $u_q(0)=\max_{x\in \mathbb{R}^3}u_q(x)$. Since the multiplier $\lambda_q >0$, by the regularity theory of elliptic partial differential equations, we have $u_q\in C_{loc}^{1, \alpha}(\mathbb{R}^3)$ for some $\alpha\in (0, 1)$. Assume to the contrary that there is a subsequence, denoted still by $\{u_q\}$, such that $$\begin{aligned}
\limsup_{q\rightarrow\overline{q}}\|u_q\|_{\infty}=\infty.\end{aligned}$$ Define $$\begin{aligned}
v_q=\frac{1}{\|u_q\|_{\infty}}u_q(\|u_q\|_{\infty}^{\frac{2-p}{2}}x) \quad \quad \text{for any}\ x\in \mathbb{R}^3,\end{aligned}$$ then $v_q(0)=1$ and $\|v_q\|_{\infty}\leq 1$. By a direct calculation, $v_q$ satisfies $$\begin{aligned}
\label{k235}
-\Delta v_q+\left(\lambda_q |u_q|_{\infty}^{2-p}+|u_q|_{\infty}^{6-2p}(|x|^{-1}\ast |v_q|^2)-\mu|u_q|_{\infty}^{q-p}|v_q|^{q-2}\right)v_q=|v_q|^{p-2}v_q.\end{aligned}$$ In view of Lemma [Lemma 55](#KJ-Lem3){reference-type="ref" reference="KJ-Lem3"}, we have $$\begin{aligned}
\max_{x\in \mathbb{R}^3}\Big| \lambda_q |u_q|_{\infty}^{2-p}+|u_q|_{\infty}^{6-2p}(|x|^{-1}\ast |v_q|^2)-\mu|u_q|_{\infty}^{q-p}|v_q|^{q-2} \Big|<C\end{aligned}$$ where $C$ is a constant independent of $q$. Then we deduce from $|v_q |_{\infty} \leq 1$ and the regularity theory of elliptic partial differential equations that $v_q\in C_{loc}^{2, \beta}(\mathbb{R}^3)$ for some $\beta\in (0, 1)$. Moreover, there is a constant $C > 0$ independent of $q$ such that $$|v_q|_{C_{loc}^{2, \beta}(\mathbb{R}^3)}\leq C.$$ Hence, there is $v\in C_{loc}^{2, \beta}(\mathbb{R}^3)$ such that, up to a subsequence, $$v_q\rightarrow v \quad\quad \text{in}\ C_{loc}^{2}(\mathbb{R}^3)$$ as $q\rightarrow \overline{q}$. Then it follows from [\[k235\]](#k235){reference-type="eqref" reference="k235"} that $v$ satisfies $$-\Delta v=|v|^{p-2}v\quad\quad \text{in} \ \mathbb{R}^3.$$ Hence, the Liouville theorem [@1982-PRSEA-E-L] implies that $v(x) = 0$ for any $x \in \mathbb{R}^3$, which contradicts the fact that $v(0)= \lim_{q\rightarrow \overline{q}}v_q(0)=1$. Hence, [\[k221\]](#k221){reference-type="eqref" reference="k221"} follows. Now, we prove [\[k222\]](#k222){reference-type="eqref" reference="k222"}. Assume to the contrary that there is a subsequence, denoted still by $\{u_q \}$, such that $$\liminf_{q\rightarrow\overline{q}}\|u_q\|_{\infty}=0.$$ Thus, for any $t>2$, one has $$\int_{\mathbb{R}^3}|u_q|^tdx\leq |u_q|_{\infty}^{t-2}a^2\rightarrow 0 \quad\quad \text{as}\ q\rightarrow \overline{q},$$ and then $$\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|u_q(x)|^2|u_q(y)|^2}{|x-y|}dxdy\leq C|u_q|_{\frac{12}{5}}^4\rightarrow 0 \quad\quad \text{as}\ q\rightarrow \overline{q}.$$ By [\[k231\]](#k231){reference-type="eqref" reference="k231"}, we have $|\nabla u_q|_2^2\rightarrow 0$ as $q\rightarrow \overline{q}$. Hence $$\lim_{q\rightarrow \overline{q}}c_{q,r}=\lim_{q\rightarrow \overline{q}} \mathcal{J}_{q}(u_q)=0,$$ which contradicts Lemma [Lemma 53](#KJ-Lem1){reference-type="ref" reference="KJ-Lem1"}. Thus, [\[k222\]](#k222){reference-type="eqref" reference="k222"} holds. We complete the proof. ◻
**Lemma 57**. *Assume that $\mu>0$ and $\frac{10}{3}=\overline{q}<q<p<6$. Let $(\lambda_q, u_q )$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} at the level $c_{q,r}$. Then, $$\begin{aligned}
\label{k236}
|u_q(x)|\leq A e^{-c|x|},\end{aligned}$$ where constants $A$ and $c$ are independent of $q$. Moreover, up to a subsequence, as $q\rightarrow \overline{q}$, $$u_q\rightarrow u \quad\quad \text{in} \ L^2(\mathbb{R}^3).$$*
*Proof.* The proof is similar to [@2023-JDE-Qi Lemma 3.6], so we omit it here. ◻
Let $(\lambda_q, u_q )$ be the radial mountain pass type normalized solution of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} at the level $c_{q,r}$. In view of Lemmas [Lemma 54](#KJ-Lem2){reference-type="ref" reference="KJ-Lem2"}, [Lemma 55](#KJ-Lem3){reference-type="ref" reference="KJ-Lem3"} and [Lemma 57](#KJ-Lem5){reference-type="ref" reference="KJ-Lem5"}, there exist $u\in S_a$, $\lambda> 0$ and a subsequence, denoted still by $\{(\lambda_q, u_q )\}$, such that, as $q\rightarrow \overline{q}$, $$u_q\rightharpoonup u \quad\quad \text{in}\ \mathcal{H}, \quad\quad
u_q \rightarrow u \quad\quad \text{in}\ L^2(\mathbb{R}^3) \cap C_{loc} (\mathbb{R}^3)$$ and $$\begin{aligned}
\label{k239}
\lambda_q\rightarrow \lambda.\end{aligned}$$ Thus, for any $t > 2$, one has $$\begin{aligned}
\label{k238}
\int_{\mathbb{R}^3}|u_q-u|^tdx\leq |u_q-u|_{\infty}^{t-2}\int_{\mathbb{R}^3}|u_q-u|^2dx\rightarrow 0\quad\quad \text{as}\ q\rightarrow \overline{q}.\end{aligned}$$ It follows from [\[k236\]](#k236){reference-type="eqref" reference="k236"} that for any $\varphi\in \mathcal{H}$ and $\varepsilon> 0$, there is an $R > 0$ such that $$\Re\int_{\mathbb{R}^3\backslash B_R}|u_q|^{q-2}u_q \overline{\varphi } dx\leq\frac{\varepsilon}{4}, \quad \quad \Re\int_{\mathbb{R}^3\backslash B_R}|u|^{\overline{q}-2}u \overline{\varphi} dx\leq\frac{\varepsilon}{4}, \quad \text{for all}\ q\ \text{sufficiently close to}\ \overline{q}.$$ The Vitali convergence theorem and Lemma [Lemma 56](#KJ-Lem4){reference-type="ref" reference="KJ-Lem4"} imply that $$\int_{B_R}|(|u_q|^{q-2}u_q - |u|^{\overline{q}-2}u)\overline{\varphi}|dx\leq \frac{\varepsilon}{2}, \quad \text{for all}\ q\ \text{sufficiently close to}\ \overline{q}.$$ Hence, $$\begin{aligned}
\label{k237}
\lim_{q\rightarrow \overline{q}} \Re\int_{\mathbb{R}^3} |u_q|^{q-2}u_q\overline{\varphi} dx=\Re\int_{\mathbb{R}^3} |u|^{\overline{q}-2}u\overline{\varphi} dx, \quad \forall \ \varphi \in \mathcal{H}.\end{aligned}$$ As a consequence, by Lemma [Lemma 18](#gle4){reference-type="ref" reference="gle4"}, [\[k239\]](#k239){reference-type="eqref" reference="k239"}, [\[k238\]](#k238){reference-type="eqref" reference="k238"} and [\[k237\]](#k237){reference-type="eqref" reference="k237"}, $u$ satisfies $$-\Delta u+\lambda u+(|x|^{-1}\ast |u|^2)u=\mu|u|^{\overline{q}-2}u+|u|^{p-2}u\quad\quad \text{in} \ \mathbb{R}^3,$$ where $|u|_2^2=a^2$. Now, we claim that $$\begin{aligned}
\label{k241}
\lim_{q\rightarrow \overline{q}} \int_{\mathbb{R}^3}|u_q|^qdx=\int_{\mathbb{R}^3}|u|^{\overline{q}}dx.\end{aligned}$$ Indeed, on the one hand, by the Fatou lemma, we have $$\int_{\mathbb{R}^3}|u|^{\overline{q}}dx \leq \liminf_{q\rightarrow \overline{q}} \int_{\mathbb{R}^3}|u_q|^qdx.$$ On the other hand, the Young inequality implies $$\int_{\mathbb{R}^3}|u_q|^qdx\leq \frac{q-\overline{q}}{p-\overline{q}}\int_{\mathbb{R}^3}|u_q|^pdx
+\frac{p-q}{p-\overline{q}}\int_{\mathbb{R}^3}|u_q|^{\overline{q}}dx.$$ From [\[k238\]](#k238){reference-type="eqref" reference="k238"}, one has $$\limsup_{q\rightarrow \overline{q}} \int_{\mathbb{R}^3}|u_q|^qdx
\leq \int_{\mathbb{R}^3}|u|^{\overline{q}}dx.$$ Therefore, the claim holds. By [\[k238\]](#k238){reference-type="eqref" reference="k238"}, [\[k241\]](#k241){reference-type="eqref" reference="k241"} and Lemma [Lemma 53](#KJ-Lem1){reference-type="ref" reference="KJ-Lem1"}, one obtains $$\begin{aligned}
c_{\overline{q},r}&\geq \lim_{q\rightarrow \overline{q}}c_{q,r}\\
&=\lim_{q\rightarrow \overline{q}}\mathcal{J}_q(u_q)\\
&=\lim_{q\rightarrow \overline{q}}\left(\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u_q|^2dx+\frac{1}{4}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{|u_q(x)|^{2}|u_q(y)|^2} {|x-y|}dxdy
-\frac{1}{p}\int_{\mathbb{R}^3}|u_q|^{p}dx
-\frac{\mu}{q}\int_{\mathbb{R}^3}|u_q|^{q}dx \right)\\
& \geq\frac{1}{2}\int_{\mathbb{R}^3}|\nabla u|^2dx+\frac{1}{4}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{|u(x)|^{2}|u(y)|^2} {|x-y|}dxdy
-\frac{1}{p}\int_{\mathbb{R}^3}|u|^{p}dx
-\frac{\mu}{\overline{q}}\int_{\mathbb{R}^3}|u|^{\overline{q}}dx\\
&=\mathcal{J}_{\overline{q}}(u)\\
&\geq c_{\overline{q}, r},\end{aligned}$$ which implies that $u_q\rightarrow u$ in $\mathcal{H}$ as $q\rightarrow \overline{q}$ and $\mathcal{J}_{\overline{q}}(u)= c_{\overline{q}, r}$. Hence, we complete the proof.
## The orbital stability of ground state set $\mathcal{M}_a$
In this subsection, we focus on the properties of ground states in the case $\mu>0$, $q\in(2, \frac{12}{5}]$ and $p\in(\frac{10}{3}, 6)$. In the first step, we describe the structure of the ground state set $\mathcal{M}_a$; then, the orbital stability of $\mathcal{M}_a$ is analyzed. The proof of Theorem [Theorem 15](#K-TH3){reference-type="ref" reference="K-TH3"} is inspired by the classical stability argument of Cazenave and Lions [@1982-CMP-CL], further developed in [@2004-ANS-Ha].\
We first describe the characteristic of $\mathcal{M}_a$ as $$\begin{aligned}
\label{k94}
\mathcal{M}_a=\big\{ e^{i \theta}|u|: \ \theta\in \mathbb{R}, \ |u|\in D_{\rho_0},\ \mathcal{J}(|u|)=m(a)\ \text{and}\ |u| >0 \ \text{in}\ \mathbb{R}^3\big\},\end{aligned}$$ where $\mathcal{M}_a=\{u\in \mathcal{H}: u\in D_{\rho_0}, \ \mathcal{J}(u)=m(a)\}$. For any $z\in \mathcal{H}$, let $z(x)=(v(x),w(x))= v(x)+iw(x)$, where $v, w\in \mathcal{H}$ are real-valued functions and $$\|z\|^2=|\nabla z|_2^2+| z|_2^2, \quad\quad | z|_2^2=|v|_2^2+| w|_2^2\quad \quad \text{and}\quad \quad | \nabla z|_2^2=|\nabla v|_2^2+|\nabla w|_2^2.$$ On the one hand, taking $z=(v,w)\in \mathcal{M}_a=\{u\in \mathcal{H}: u\in D_{\rho_0}, \ \mathcal{J}(u)=m(a)\}$, we see from [@2004-ANS-Ha Theorem 3.1] that $|z|\in D_{\rho_0},\ \mathcal{J}(|z|)=m(a)$ and $| \nabla |z||_2^2=| \nabla z|_2^2$. Then, by the fact that $| \nabla |z||_2^2-| \nabla z|_2^2=0$ one obtains $$\begin{aligned}
\label{h92}
\int_{\mathbb{R}^3}\sum_{i=1}^3 \frac{(v \partial_i w-w\partial_i v)^2}{v^2+w^2}dx =0.\end{aligned}$$ Hence, from [@2004-ANS-Ha Theorem 4.1], we know that
- either $v\equiv 0$ or $v(x) \neq 0$ for all $x \in \mathbb{R}^3$;
- either $w\equiv 0$ or $w(x) \neq 0$ for all $x \in \mathbb{R}^3$.
Let $u:= |z|$; then we have $\mathcal{J}(u)=m(a)$ and $u\in D_{\rho_0}$. If $w\equiv 0$, then we deduce that $u:= |z|=|v|>0$ on $\mathbb{R}^3$ and $z= e^{i \theta} |u|$, where $\theta =0$ if $v>0$ and $\theta =\pi$ if $v<0$ on $\mathbb{R}^3$. Otherwise, it follows from $(ii)$ that $w(x)\neq 0$ for all $x\in\mathbb{R}^3$. Since $$\begin{aligned}
\frac{(v \partial_i w-w\partial_i v)^2}{v^2+w^2}=\Big[\partial_i\big (\frac{v}{w}\big) \Big ]^2 \frac{w^2}{v^2+w^2}\quad\quad \text{where}\ i=1,2, 3,\end{aligned}$$ for all $x\in \mathbb{R}^3$, we get from [\[h92\]](#h92){reference-type="eqref" reference="h92"} that $\nabla (\frac{v}{w})=0$ on $\mathbb{R}^3$. Therefore, there exists $C\in \mathbb{R}$ such that $v=C w$ on $\mathbb{R}^3$. Then we have $$\begin{aligned}
\label{h93}
z=(v,w)=v+iw=(C+i)w \quad\quad\text{and}\quad\quad |z|= |C+i||w|.\end{aligned}$$ Let $\theta_1\in \mathbb{R}$ be such that $C+i= |C+i| e^{i \theta_1}$ and let $w=|w| e^{i \theta_2}$ with $$\begin{aligned}
\theta_2=
\begin{cases}
0, \quad\quad &\text{if}\ w>0;\\
\pi, \quad\quad &\text{if}\ w<0.
\end{cases}\end{aligned}$$ Then we can see from [\[h93\]](#h93){reference-type="eqref" reference="h93"} that $z=(C+i)w=|C+i| |w|e^{i (\theta_1+ \theta_2)} = |z|e^{i (\theta_1+ \theta_2)}$. Setting $\theta=\theta_1+\theta_2$ and $u:=|z|$, then $|u|>0$ and $z= e^{i \theta }|u|$. Thus, $$\mathcal{M}_a\subseteq \big\{ e^{i \theta}|u|: \ \theta\in \mathbb{R}, \ |u|\in D_{\rho_0},\ \mathcal{J}(|u|)=m(a)\ \text{and}\ |u| >0 \ \text{in}\ \mathbb{R}^3\big\}.$$ On the other hand, let $z= e^{i \theta}|u|$ with $\theta\in \mathbb{R}$, $|u|\in D_{\rho_0}$, $\mathcal{J}(|u|)=m(a)$ and $|u|>0$ in $\mathbb{R}^3$, then we have $|z|_2^2=a^2$, $|\nabla z|_2^2=|\nabla |u||_2^2 \leq \rho_0^2$ and $\mathcal{J}(z)=\mathcal{J}(u)= m(a)$, which implies that $$\big\{ e^{i \theta}|u|: \ \theta\in \mathbb{R}, \ |u|\in D_{\rho_0},\ \mathcal{J}(|u|)=m(a)\ \text{and}\ |u| >0 \ \text{in}\ \mathbb{R}^3\big\}\subseteq \mathcal{M}_a.$$ Hence, [\[k94\]](#k94){reference-type="eqref" reference="k94"} follows.
Next, we prove that $\mathcal{M}_a$ is orbitally stable. Under the assumptions of Theorem [Theorem 15](#K-TH3){reference-type="ref" reference="K-TH3"}, it is known that in this case, the Cauchy problem associated with [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} is locally well posed in $\mathcal{H}$, see [@2003-C]. That is, let $\varphi(t,x)$ be the unique solution of the initial-value problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_0$ on $(-T_{min}, T_{max})$, it holds that $$\begin{aligned}
\label{kkk80}
|u_0|_2=| \varphi|_2\quad\quad \text{and} \quad \quad \mathcal{J}(u_0)=\mathcal{J}(\varphi),\end{aligned}$$ and either $T_{max} =+ \infty$ or if $T_{max} <+ \infty$, $| \nabla \varphi|_2 \rightarrow +\infty$ as $t \rightarrow T^-_{max}$. We argue by contradiction: suppose that there exist $v \in \mathcal{M}_a$, a decreasing sequence $\{\delta_n\}\subset \mathbb{R}^+$ converging to $0$ and a sequence $\{\varphi_n\}\subset \mathcal{H}$ satisfying $$\|\varphi_n-v \|\leq \delta_n,$$ but $$\sup_{t\in[0, T_{max})}dist( u_{\varphi_n}(t),\mathcal{M}_a)>\varepsilon_0>0,$$ where $u_{\varphi_n}(t)$ denotes the solution to [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $\varphi_n$. We observe that $|\varphi_n|_2\rightarrow |v|_2=a$ and $\mathcal{J}(\varphi_n)\rightarrow \mathcal{J}(v)= m(a)$ as $n\rightarrow\infty$ by continuity. From the conservation laws [\[kkk80\]](#kkk80){reference-type="eqref" reference="kkk80"}, for $n \in \mathbb{N}$ large enough, $u_{\varphi_n }$ remains inside $D_{\rho_0}$ for all $t\in [0, T_{max})$. Indeed, if for some time $t > 0$, $|\nabla u_{\varphi_n}(t)|_2=\rho_0$, then, in view of Lemma [Lemma 25](#K-Lem2.5){reference-type="ref" reference="K-Lem2.5"} we have that $\mathcal{J}( u_{\varphi_n})\geq0$, which contradicts $m(a)<0$. Hence, $T_{max}=+\infty$. This shows that solutions starting in $D_{\rho_0}$ are globally defined in time. That is, $u_{\varphi_n}(t)$ is globally defined. Now let $t_n > 0$ be the first time such that $dist( u_{\varphi_n}(t_n),\mathcal{M}_a)=\varepsilon_0$ and set $u_n := u_{\varphi_n}(t_n)$. By the conservation laws [\[kkk80\]](#kkk80){reference-type="eqref" reference="kkk80"}, $\{u_n\} \subset A_{\rho_0}$ satisfies $|u_n|_2\rightarrow a$ as $n\rightarrow\infty$ and $\mathcal{J}(u_n)= \mathcal{J}(\varphi_n)\rightarrow m(a)$ and thus, in view of Lemma [Lemma 33](#K-Lem2.12){reference-type="ref" reference="K-Lem2.12"}, it converges, up to translation, to an element of $\mathcal{M}_a$. Since $\mathcal{M}_a$ is invariant under translations, this contradicts the equality $dist(u_n, \mathcal{M}_a)= \varepsilon_0 > 0$. Thus, we complete the proof. $\hfill\Box$
## Strong instability of the standing wave $e^{i \lambda t} u$
In this part, we prove that the standing wave $e^{i \lambda t} u$ of Eq. [\[k1\]](#k1){reference-type="eqref" reference="k1"} with $\lambda>0$ and $u\in \mathcal{H}_r$, obtained in Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} (or Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} or Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}), is strongly unstable. In what follows, we present the result of finite time blow-up. The proof of the finite time blow-up relies on the classical convexity method of Glassey [@1977-Glassey], which was further refined in Berestycki and Cazenave [@1981CRASSM-BC].
**Lemma 58**. *Under the assumptions of Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} (or Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} or Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}), let $u_0 \in S_{a,r}$ be such that $\mathcal{J}(u_0)<\inf_{u\in \mathcal{P}_{-,r}}\mathcal{J}(u)$, $|x|u_0\in L^2(\mathbb{R}^3)$ and $t_{u_0} < 0$, where $t_{u_0}$ is defined in Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"} (or Lemma [Lemma 38](#K-Lem4.2){reference-type="ref" reference="K-Lem4.2"} or Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}). Then the solution $\varphi$ of problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_0$ blows up in finite time.*
*Proof.* Under the assumptions of Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} (or Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} or Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}), let $u_0 \in S_{a,r}$ be such that $\mathcal{J}(u_0)<\inf_{u\in \mathcal{P}_{-,r}}\mathcal{J}(u)$. The initial-value problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_0$ is locally well-posed on $(-T_{min}, T_{max})$ with $T_{min}, T_{max}>0$; that is, if $\varphi(t,x)$ denotes the solution of the initial-value problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_0$ on $(-T_{min}, T_{max})$, then it holds that $$\begin{aligned}
\label{kk80}
|u_0|_2=| \varphi|_2\quad\quad \text{and} \quad \quad \mathcal{J}(u_0)=\mathcal{J}(\varphi),\end{aligned}$$ and if $T_{max} <+ \infty$, $| \nabla \varphi|_2 \rightarrow +\infty$ as $t \rightarrow T^-_{max}$. By $|x|u_0 \in L^2(\mathbb{R}^3 )$ and [@2003-C Proposition 6.5.1], we get $$\begin{aligned}
\label{kk81}
H(t):=\int_{\mathbb{R}^3}|x|^2|\varphi(t,x)|^2dx<+\infty \ \ \ \ \ \text{for all}\ t\in(-T_{min}, T_{max} ).
\end{aligned}$$ Moreover, the function $H\in C^2(-T_{min}, T_{max} )$ and the following Virial identity holds: $H'(t)= 4\Im \int_{\mathbb{R}^3}\overline{\varphi} (x\cdot \nabla\varphi )dx$ and $$\begin{aligned}
\label{k82}
\ H''(t)&= 8 P(\varphi)\nonumber\\
&=8\int_{\mathbb{R}^3}|\nabla \varphi|^2dx+2 \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|\varphi(t,x)|^2|\varphi(t,y)|^2 }{|x-y|} dxdy -8 \gamma_p\int_{\mathbb{R}^3}|\varphi|^{p}dx-8\mu \gamma_q \int_{\mathbb{R}^3}|\varphi|^{q}dx.
\end{aligned}$$ From Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"} (or Lemma [Lemma 38](#K-Lem4.2){reference-type="ref" reference="K-Lem4.2"} or Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}), for any $u\in S_{a,r}$, the function $\widetilde{\psi}_u(s):=\psi_u(\log s)$ with $s>0$ has a unique global maximum point $\widehat{t}_u=e^{t_u}$ and $\widetilde{\psi}_u(s)$ is strictly decreasing and concave on $(\widehat{t}_u, +\infty)$. According to the assumption $t_{u_0}<0$, one obtains $\widehat{t}_{u_0}\in(0,1)$. We claim that $$\begin{aligned}
\label{k85}
\text{if}\ u \in S_{a,r} \ \text{and}\ \widehat{t}_u\in(0,1), \ \text{then}\ P(u)\leq \mathcal{J}(u)-\inf_{\mathcal{P}_{-, r}}\mathcal{J}.\end{aligned}$$ In fact, since $\widehat{t}_u\in(0,1)$ and $\widetilde{\psi}_u(s)$ is strictly decreasing and concave on $(\widehat{t}_u, +\infty)$, we infer that $t_u<0$, $P(u)<0$ and $$\begin{aligned}
\mathcal{J}(u)=\psi_u(0) = \widetilde{\psi}_u(1)\geq \widetilde{\psi}_u(\widehat{t}_u )-\widetilde{\psi}_u'(1)(\widehat{t}_u-1 )
=\mathcal{J}(t_u \star u) -|P(u)|(1-\widehat{t}_u )
\geq \inf_{\mathcal{P}_{-,r}}\mathcal{J} +P(u),\end{aligned}$$ which completes the claim. Now, let us consider the solution $\varphi$ for the initial-value problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_0$. Since by assumption $t_{u_0} < 0$, and the map $u \mapsto t_u$ is continuous, we deduce that $t_{ \varphi(t)} < 0$ for every $|t|<\overline{t}$ with $\overline{t}>0$ small enough. Then $\widehat{t}_{ \varphi(t)}\in (0,1)$ for $|t|<\overline{t}$. By [\[kk80\]](#kk80){reference-type="eqref" reference="kk80"}, [\[k85\]](#k85){reference-type="eqref" reference="k85"} and recalling the assumption $\mathcal{J}(u_0)< \inf_{\mathcal{P}_{-,r}}\mathcal{J}$, we deduce that $$\begin{aligned}
P(\varphi(t))\leq \mathcal{J}(\varphi(t) )-\inf_{\mathcal{P}_{-,r}}\mathcal{J}=\mathcal{J}(u_0 )-\inf_{\mathcal{P}_{-,r}}\mathcal{J}:=-\eta<0 \quad \ \ \text{for all}\ |t|< \overline{t}.\end{aligned}$$ Next, we show that $$\begin{aligned}
\label{k86}
P(\varphi(t) )\leq-\eta \quad\quad \text{ for any }\ t\in (-T_{min}, T_{max}).\end{aligned}$$ Indeed, assume that there is $t_0\in (-T_{min}, T_{max})$ satisfying $P(\varphi(t_0) )=0$. It holds from [\[kk80\]](#kk80){reference-type="eqref" reference="kk80"} that $\varphi(t_0)\in S_{a,r}$ and $$\mathcal{J}(\varphi(t_0))\geq c_{a}=\inf_{\mathcal{P}_{-,r}}\mathcal{J}> \mathcal{J}(u_0)= \mathcal{J}(\varphi(t_0)),$$ which is a contradiction. This shows that $P(\varphi(t_0) )\neq0$ for any $t_0\in (-T_{min}, T_{max})$. Then, since $P(\varphi(t))<0$ for every $|t|< \overline{t}$, we obtain $P(\varphi(t) )<0$ for any $t\in (-T_{min}, T_{max})$ (if at some $t\in (-T_{min}, T_{max})$, $P(\varphi(t))>0$. By continuity, we have $P(\varphi(\widetilde{t}))=0$ for some $\widetilde{t}\in (-T_{min}, T_{max})$, a contradiction). Since $P(\varphi(t) )<0$ for any $t\in (-T_{min}, T_{max})$, it follows from Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"} (or Lemma [Lemma 38](#K-Lem4.2){reference-type="ref" reference="K-Lem4.2"} or Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}) that $t_{\varphi }<0$. That is, $\widehat{ t}_{\varphi }\in (0,1)$. Thus, [\[k85\]](#k85){reference-type="eqref" reference="k85"} holds and the above arguments yield $$\begin{aligned}
P(\varphi(t) )\leq-\eta \quad\quad \text{ for any }\ t\in (-T_{min}, T_{max}).\end{aligned}$$ Thus, by [\[k82\]](#k82){reference-type="eqref" reference="k82"} we get that $H$ is a concave function. Then, from [\[kk81\]](#kk81){reference-type="eqref" reference="kk81"}, [\[k82\]](#k82){reference-type="eqref" reference="k82"} and [\[k86\]](#k86){reference-type="eqref" reference="k86"}, we deduce $$\begin{aligned}
0\leq H(t)\leq H(0)+H'(0) t+\frac{1}{2}H''(0) t^2\leq H(0)+H'(0) t-4 \eta t^2 \quad\quad \text{ for any }\ t\in (-T_{min}, T_{max}),\end{aligned}$$ which implies that $T_{max}<+\infty$. Thus, the solution $\varphi$ of the initial-value problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_0$ blows-up in finite time. ◻
Under the assumption of Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} (or Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} or Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}), $e^{i\widehat{\lambda} t} \widehat{u}$ is a standing wave of problem [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"}, obtained in Theorem [Theorem 4](#K-TH2){reference-type="ref" reference="K-TH2"} (or Theorem [Theorem 5](#K-TH5){reference-type="ref" reference="K-TH5"} or Theorem [Theorem 7](#K-TH15){reference-type="ref" reference="K-TH15"}), where $\widehat{\lambda}>0$ and $\widehat{u}\in \mathcal{H}_{r}$. For any $\varrho> 0$, let $u_{\varrho} := \varrho\star \widehat{u}$, and let $\varphi_{\varrho}$ be the solution to [\[kk1\]](#kk1){reference-type="eqref" reference="kk1"} with initial datum $u_{\varrho}$. One has $u_{\varrho} \rightarrow \widehat{u }$ as $\varrho\rightarrow 0^+$. Hence it is sufficient to prove that $\varphi_{\varrho}$ blows-up in finite time. Clearly $t_{u_{\varrho}} = -\varrho < 0$, where $t_{u_{\varrho}}$ is defined in Lemma [Lemma 23](#K-Lem2.3){reference-type="ref" reference="K-Lem2.3"} (or Lemma [Lemma 38](#K-Lem4.2){reference-type="ref" reference="K-Lem4.2"} or Lemma [Lemma 44](#K-J1){reference-type="ref" reference="K-J1"}). By definition $$\mathcal{J}(u_{\varrho})=\mathcal{J}(\varrho\star \widehat{u})< \mathcal{J}( \widehat{u})=\inf_{ \mathcal{P}_{-,r}}\mathcal{J}( u).$$ Moreover, since $\widehat{\lambda}>0$ and $\widehat{u}\in \mathcal{H}_{r}$, we have that $\widehat{u}$ decays exponentially at infinity (see [@1983-BL]), and hence $|x|u_{\varrho} \in L^2(\mathbb{R}^3)$. Therefore, by Lemma [Lemma 58](#K-Lem5.1){reference-type="ref" reference="K-Lem5.1"} the solution $\varphi_{\varrho}$ blows-up in finite time. Hence, $e^{i\widehat{\lambda} t}\widehat{u}$ is strongly unstable from Definition [Definition 14](#de1.2){reference-type="ref" reference="de1.2"}. Thus, we complete the proof. $\hfill\Box$
Ambrosetti, A.: On Schrödinger-Poisson systems. Milan J. Math. **76**, 257--274 (2008)
Ambrosetti, A., Ruiz, D.: Multiple bound states for the Schrödinger-Poisson problem. Commun. Contemp. Math. **10**, 391--404 (2008)
Alves, C.O., Ji, C., Miyagaki, O.H.: Normalized solutions for a Schrödinger equation with critical growth in $\mathbb{R}^N$. Calc. Var. Partial Differ. Equ. **61**, 18 (2022)
Akahori, T., Ibrahim, S., Kikuchi H., Nawa, H.: Existence of a ground state and blow-up problem for a nonlinear Schrödinger equation with critical growth. Differential Integral Equations **25**, 383--402 (2012)
Azzollini, A., d'Avenia, P., Pomponio, A.: On the Schrödinger-Maxwell equations under the effect of a general nonlinear term. Ann. Inst. H. Poincaré Anal. Non Linéaire **27**, 779--791 (2010)
Azzollini, A., Pomponio, A.: Ground state solutions for the nonlinear Schrödinger-Maxwell equations. J. Math. Anal. Appl. **345**, 90--108 (2008)
Benci, V., Fortunato, D.: Solitary waves of the nonlinear Klein-Gordon equation coupled with Maxwell equations. Rev. Math. Phys. **14**, 409--420 (2002)
Benguria, R., Brezis, H., Lieb, E.H.: The Thomas-Fermi-von Weizsäcker theory of atoms and molecules. Comm. Math. Phys. **79**, 167--180 (1981)
Berestycki, H., Cazenave, T.: Instabilité des états stationnaires dans les équations de Schrödinger et de Klein-Gordon non linéaires, C. R. Acad. Sci., Sér. 1 Math. **293**, 489--492 (1981)
Berestycki, H., Lions, P.L.: Nonlinear scalar field equations I, existence of a ground state. Arch. Ration. Mech. Anal. **82**, 313--345 (1983)
Berestycki, H., Lions, P.L.: Nonlinear scalar field equations. II. Existence of infinitely many solutions. Arch. Ration. Mech. Anal. **82**, 347--375 (1983)
Bellazzini, J., Siciliano, G.: Stable standing waves for a class of nonlinear Schrödinger-Poisson equations. Z. Angew. Math. Phys. **62**, 267--280 (2011).
Bellazzini, J., Siciliano, G.: Scaling properties of functionals and existence of constrained minimizers. J. Funct. Anal. **261**, 2486--2507 (2011).
Bellazzini, J., Jeanjean, L., Luo, T.: Existence and instability of standing waves with prescribed norm for a class of Schrödinger-Poisson equations. Proc. Lond. Math. Soc. **107**, 303--339 (2013)
Cazenave, T.: Semilinear Schrödinger Equations, Courant Lecture Notes in Mathematics, vol. 10, New York University, Courant Institute of Mathematical Sciences/American Mathematical Society, New York/Providence, RI, (2003)
Cazenave, T., Lions, P.L.: Orbital stability of standing waves for some nonlinear Schrödinger equations. Commun. Math. Phys. **85**, 549--561 (1982)
Catto, I., Lions, P.L.: Binding of atoms and stability of molecules in Hartree and Thomas-Fermi type theories. I. A necessary and sufficient condition for the stability of general molecular systems. Comm. Partial Differential Equations **17**, 1051--1110 (1992)
Cerami, G., Molle, R.: Positive bound state solutions for some Schrödinger-Poisson systems. Nonlinearity **29**, 3103 (2016)
Chen, S.J., Tang, C.L.: High energy solutions for the superlinear Schrödinger-Maxwell equations. Nonlinear Analysis **71**, 4927--4934 (2009)
Chen, S.T., Tang, X.H., Yuan, S.: Normalized solutions for Schrödinger-Poisson equations with general nonlinearities. J. Math. Anal. Appl. **481**, 123447 (2020)
Chen, S.T., Tang, X.H.: New approaches for Schrödinger equations with prescribed mass: The Sobolev subcritical case and The Sobolev critical case with mixed dispersion. (2022) arXiv: 2210.14503v1.
Cheng, X., Miao, C., Zhao, L.: Global well-posedness and scattering for nonlinear Schrödinger equations with combined nonlinearities in the radial case. J. Differential Equations **261**, 2881--2934 (2016)
Esteban, M., Lions, P.L.: Existence and nonexistence results for semilinear elliptic problems in unbounded domains. Proc. R. Soc. Edinb., Sect. A **93**, 1--14 (1982/83)
Feng, B.H.: On the blow-up solutions for the nonlinear Schrödinger equation with combined power-type nonlinearities. J. Evol. Equ. **18**, 203--220 (2018)
Glassey, R.T.: On the blowing up of solution to the Cauchy problem for nonlinear Schrödinger operators. J. Math. Phys. **8**, 1794--1797 (1977)
Hajaiej, H., Stuart, C.A.: On the variational approach to the stability of standing waves for the nonlinear Schrödinger equation. Adv. Nonlinear Stud. **4**, 469--501 (2004)
Jeanjean, L., Jendrej, J., Le, T.T., Visciglia, N.: Orbital stability of ground states for a Sobolev critical Schrödinger equation. J. Math. Pures Appl. **164**, 158--179 (2022)
Jeanjean, L., Le T.T.: Multiple normalized solutions for a Sobolev critical Schrödinger-Poisson-Slater equation. J. Differential Equations **303**, 277--325 (2021)
Jeanjean, L., Luo, T.: Sharp nonexistence results of prescribed $L^2$-norm solutions for some class of Schrödinger-Poisson and quasi-linear equations, Z. Angew. Math. Phys. **64**, 937--954 (2013)
Jeanjean, L., Le, T.T.: Multiple normalized solutions for a Sobolev critical Schrödinger equation. Math. Ann. **384**, 101--134 (2022)
Kang. J.C., Liu, X.Q., Tang, C.L.: Ground state sign-changing solutions for critical Schrödinger-Poisson system with steep potential well. J. Geom. Anal. **33**, 59 (2023)
Killip, R., Oh, T., Pocovnicu, O., Visan, M.: Solitons and scattering for the cubic-quintic nonlinear Schrödinger equation on $\mathbb{R}^3$. Arch. Rational Mech. Anal. **225**, 469--548 (2017)
Le Coz, S., Martel, Y., Raphaël, P.: Minimal mass blow up solutions for a double power nonlinear Schrödinger equation. Rev. Mat. Iberoam. **32**, 795--833 (2016)
Lehrer, R., Maia, L.A.: Positive solutions of asymptotically linear equations via Pohozaev manifold. J. Funct. Anal. **266**, 213--246 (2014)
Li, X.F.: Existence of normalized ground states for the Sobolev critical Schrödinger equation with combined nonlinearities. Calc. Var. Partial Differ. Equ. **60**, 169 (2021)
Lieb, E.H.: Thomas-Fermi and related theories and molecules. Rev. Modern Phys. **53**, 603--641 (1981)
Lions, P.L.: Solutions of Hartree-Fock equations for Coulomb systems. Comm. Math. Phys. **109**, 33--97 (1984)
Liu, Z.L., Wang, Z.Q., Zhang, J.J.: Infinitely many sign-changing solutions for the nonlinear Schrödinger-Poisson system. Ann. Mat. Pura Appl. **195**, 775--794 (2016)
Luo, T.J.: Multiplicity of normalized solutions for a class of nonlinear Schrödinger-Poisson-Slater equations. J. Math. Anal. Appl. **416**, 195--204 (2014)
Miao, C., Xu, G., Zhao, L.: The dynamics of the 3D radial NLS with the combined terms. Commun. Math. Phys. **318**, 767--808 (2013)
Miao, C., Zhao, T., Zheng, J.: On the 4D nonlinear Schrödinger equation with combined terms under the energy threshold. Calc. Var. Partial Differ. Equ. **56**, 179 (2017)
Qi, S.J., Zou, W.M.: Mass threshold of the limit behavior of normalized solutions to Schrödinger equations with combined nonlinearities. J. Differential Equations **375**, 172--205 (2023)
Ruiz, D.: The Schrödinger-Poisson equation under the effect of a nonlinear local term. J. Funct. Anal. **237**, 655--674 (2006)
Sanchez, O., Soler, J.: Long-time dynamics of the Schrödinger-Poisson-Slater system. J. Statist. Phys. **114**, 179--204 (2004)
Siciliano, G., Silva, K.: On the structure of the Nehari set associated to a Schrödinger-Poisson system with prescribed mass: old and new results. Israel J. Math. (2023), DOI: 10.1007/s11856-023-2477-9.
Soave, N.: Normalized ground state for the NLS equations with combined nonlinearities. J. Differential Equations **269**, 6941--6987 (2020)
Soave, N.: Normalized ground states for the NLS equation with combined nonlinearities: The Sobolev critical case. J. Funct. Anal. **279**, 108610 (2020)
Sun, J.J., Ma, S.W.: Ground state solutions for some Schrödinger-Poisson systems with periodic potentials. J. Differential Equations **260**, 2119--2149 (2016)
Tao, T., Visan, M., Zhang, X.: The nonlinear Schrödinger equation with combined power type nonlinearities. Commun. Partial Differ. Equ. **32**, 1281--1343 (2007)
Wang, Q., Qian, A.X.: Normalized solutions to the Schrödinger-Poisson-Slater equation with general nonlinearity: mass supercritical case. Anal. Math. Phys. **13**, 35 (2023)
Wei, J.C., Wu, Y.Z.: Normalized solutions for Schrödinger equations with critical Sobolev exponent and mixed nonlinearities. J. Funct. Anal. **283**, 109574 (2022)
Weinstein, M.I.: Nonlinear Schrödinger equations and sharp interpolation estimates. Comm. Math. Phys. **87**, 567--576 (1983)
Willem, M.: Minimax Theorems, vol. 24, Birkhäuser, Boston, Mass., (1996)
Xie, W.H., Chen, H.B., Shi, H.X.: Existence and multiplicity of normalized solutions for a class of Schrödinger-Poisson equations with general nonlinearities. Math. Methods Appl. Sci. **43**, 3602--3616 (2020)
Yao, S., Hajaiej, H., Sun, J.T., Wu, T.F.: Standing waves for the NLS equation with competing nonlocal and local nonlinearities: the double $L^2$-supercritical case. (2023) arXiv:2102.10268v2.
Ye, H.Y.: The existence and the concentration behavior of normalized solutions for the $L^2$-critical Schrödinger-Poisson system. Comput. Math. Appl. **74**, 266--280 (2017)
Ye, H.Y., Luo, T.J.: On the mass concentration of $L^2$-constrained minimizers for a class of Schrödinger-Poisson equations. Z. Angew. Math. Phys. **69**, 66 (2018)
Zeng, X.Y., Zhang, L.: Normalized solutions for Schrödinger-Poisson-Slater equations with unbounded potentials. J. Math. Anal. Appl. **452**, 47--61 (2017)
Zhao, L.G., Zhao, F.K.: On the existence of solutions for the Schrödinger-Poisson equations. J. Math. Anal. Appl. **346**, 155--169 (2008).
Zhong, X.J., Tang, C.L.: Ground state sign-changing solutions for a Schrödinger-Poisson system with a critical nonlinearity in $\mathbb{R}^3$. Nonlinear Anal. Real World Appl. **39**, 166--184 (2018)
[^1]: Corresponding author. E-mail address: tangcl\@swu.edu.cn (C.-L. Tang)
[^2]: Supported by National Natural Science Foundation of China (No. 11971393).
| arxiv_math | {
"id": "2309.09758",
"title": "Prescribed mass standing waves for Schr\\\"{o}dinger-Maxwell equations\n with combined nonlinearities",
"authors": "Jin-Cai Kang, Yong-Yong Li, Chun-Lei Tang",
"categories": "math.AP",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
  A $k$-uniform hypergraph is a hypergraph where each hyperedge has exactly $k$ vertices. A $k$-homogeneous access structure is represented by a $k$-uniform hypergraph $\mathcal{H}$, in which the participants correspond to the vertices of hypergraph $\mathcal{H}$. A set of vertices can reconstruct the secret value from their shares if they are connected by a $k$-hyperedge, while a set of non-adjacent vertices does not obtain any information about the secret. One parameter for measuring the efficiency of a secret sharing scheme is the information rate, defined as the ratio between the length of the secret and the maximum length of the shares given to the participants. Secret sharing schemes with an information rate equal to one are called ideal secret sharing schemes. An access structure is considered ideal if an ideal secret sharing scheme can realize it. Characterizing ideal access structures is one of the important problems in secret sharing, and it has been studied by many authors [@BD; @CT; @JZB; @FP1; @FP2; @DS1; @TD]. In this paper, we characterize ideal $k$-homogeneous access structures using the independent sequence method. In particular, we prove that the reduced access structure of $\Gamma$ is a $(k, n)$-threshold access structure when the optimal information rate of $\Gamma$ is larger than $\frac{k-1}{k}$, where $\Gamma$ is a $k$-homogeneous access structure satisfying specific criteria.
author:
- "Younjin Kim [^1]"
- "Jihye Kwon [^2]"
- "Hyang-Sook Lee [^3]"
title: On Ideal Secret-Sharing Schemes for $k$-homogeneous access structures
---
# Introduction
A secret sharing scheme is a tool utilized in numerous cryptographic protocols. It involves a dealer who possesses a secret, a set of $n$ participants, and a collection $\mathcal{F}$ of subsets of participants defined as the access structure. A secret sharing scheme for $\mathcal{F}$ is a method in which the dealer distributes shares of a secret value $k$ among the $n$ participants so that any subset within $\mathcal{F}$ can reconstruct the secret value $k$ from their shares, while any subset not in $\mathcal{F}$ cannot reveal any information about the secret value $k$. The *qualified subsets* in the secret sharing scheme are defined as the subsets of participants capable of reconstructing the secret value $k$ from their shares. A collection of qualified subsets of participants is referred to as the *access structure* of the secret sharing scheme. In contrast, the *unqualified subsets* or *forbidden subsets* in the secret sharing scheme are defined as the subsets of participants who cannot obtain any information about the secret value $k$ from their shares.\
In 1979, Shamir [@AS] introduced a $(t,n)$-threshold secret sharing scheme as the pioneering work in secret sharing. In this scheme, the qualified subsets consist of all subsets with at least $t$ participants from a set of $n$ participants, and the size of each share is equal to the size of the secret. This implies that the $(t,n)$-threshold secret sharing scheme is determined by the basis consisting of all subsets with exactly $t$ distinct participants from a set of $n$ participants.\
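As an illustration of this construction, the following minimal Python sketch implements a $(t,n)$-threshold scheme over a prime field by evaluating a random polynomial of degree $t-1$ and recovering the secret with Lagrange interpolation at zero. The prime $p=2^{61}-1$, the helper names `make_shares` and `reconstruct`, and the concrete parameters are illustrative choices on our part and are not part of the original scheme description.

```python
import random

P = 2**61 - 1  # a Mersenne prime; any prime larger than the secret works


def make_shares(secret, t, n, prime=P):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    # Random polynomial f of degree t-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]

    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares, prime=P):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret


shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

Note that each share consists of a single field element (together with its public evaluation point), so the shares are as short as the secret, in line with the remark above.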
A hypergraph is a generalization of a graph in which hyperedges may connect more than two vertices. A $k$-uniform hypergraph is one in which each hyperedge has exactly $k$ vertices. An access structure is a $k$-uniform hypergraph access structure, denoted by $\mathcal{H}$, if the set of vertices connected by a $k$-hyperedge can reconstruct the secret, and the set of non-adjacent vertices in $\mathcal{H}$ does not reveal any information about the secret. Access structures of this type are also called $k$-homogeneous. A $k$-homogeneous access structure is determined by a family of minimal qualified subsets, each consisting of exactly $k$ different participants, or $k$-uniform hypergraphs $\mathcal{H}(V,E)$, where $V$ is a vertex set and $E \subseteq 2^V$ is the edge set of hyperedges of cardinality $k$. Several authors have constructed secret sharing schemes for $k$-homogeneous access structures using various techniques.\
One way to measure the efficiency of a secret sharing scheme is to use the information rate, defined as the ratio between the length of the secret and the maximum length of the shares given to the participants. Since the length of any share is greater than or equal to the length of the secret in a secret sharing scheme, the information rate cannot exceed one. Secret sharing schemes with an information rate equal to one are called *ideal secret sharing schemes*. For example, Shamir's threshold secret sharing scheme is ideal.\
An access structure is ideal if there exists an ideal secret sharing scheme to implement it. The characterization of ideal access structures is an important issue in secret sharing schemes, which has been studied by numerous authors [@BD; @CT; @JZB; @FP1; @FP2; @MP3; @DS1; @TD]. In 1992, Stinson [@DS1] exactly characterized all ideal $2$-homogeneous access structures. In 2007, Martí-Farré and Padró [@MP3] characterized all ideal $3$-homogeneous access structures in which the number of minimal qualified subsets contained in any set of four participants is not equal to three. Later, in 2009, Martí-Farré and Padró [@FP2] also characterized ideal rank-three access structures in some cases. Also, in 2009, Tassa and Dyn [@TD] studied an ideal secret sharing scheme that realizes compartmented access structures using bivariate interpolation. Recently, in 2021, Janbaz and Bagherpour [@JZB] characterized ideal graph-based $3$-homogeneous access structures. In this paper, we characterize the ideal $k$-homogeneous access structures by utilizing the independent sequence method.
**Theorem 1**. *Suppose $\Gamma$ is a $k$-homogeneous access structure on a set of participants $\mathcal{P}$ such that the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $1$ and $k$, containing $k+1$. Then the following conditions are equivalent.*
- *$\Gamma$ is a vector space access structure*
- *$\Gamma$ is an ideal access structure*
- *$\rho^* (\Gamma) > \frac{k-1}{k}$*
- *The reduced access structure of $\Gamma$ is a $(k,n)$-threshold access structure.*
Our paper is organized as follows. In Section 2, we introduce the definitions of an Ideal Secret Sharing Scheme and the Independent Sequence Method. In Section 3, we present several access structures related to this paper. In Section 4, we present the necessary results and lemmas required for proving our main theorem. Finally, in Section 5, we provide the proof of Theorem [Theorem 1](#main:mainthm){reference-type="ref" reference="main:mainthm"}.
# Secret Sharing Scheme
A secret sharing scheme consists of a dealer who possesses a secret, a set of $n$ participants, and a collection $\mathcal{F}$ of subsets of participants defined as the access structure. A secret sharing scheme for $\mathcal{F}$ is a method by which the dealer distributes shares of a secret value $k$ to the set of $n$ participants such that any qualified subset in $\mathcal{F}$ can reconstruct the secret value $k$ from their shares, while any unqualified subset not in $\mathcal{F}$ cannot reveal any information about the secret value $k$.
## Ideal Secret Sharing Schemes
One parameter used to measure the efficiency of a secret sharing scheme is the information rate, which is defined as the ratio between the length of the secret and the maximum length of the shares given to the participants, as follows.
**Definition 2** (information rate). *Let $\mathcal{P}$ be the set of all participants, $\mathcal{S}$ be the set of all secret keys, and $K(p)$ be the set of all possible shares given to a participant $p \in \mathcal{P}$. In the secret sharing scheme $\mathcal{F}$, the information rate, denoted by $\rho(\mathcal{F})$, is defined as*
*$$\rho(\mathcal{F}) = \frac{\log|\mathcal{S}|}{\max_{p\in \mathcal{P}}\log|K(p)|} \ .$$*
Since the length of any share is greater than or equal to the length of the secret in the secret sharing scheme, the information rate cannot be greater than one. Therefore, $\rho(\mathcal{F}) =1$ is the optimal situation. When designing a secret sharing scheme for the given access structure $\Gamma$, we may try to maximize the information rate, as defined below.
**Definition 3** (optimal information rate). *In the secret sharing scheme $\mathcal{F}$, the optimal information rate of the access structure $\Gamma$, denoted by $\rho^*(\Gamma)$, is defined as*
*$$\rho^*(\Gamma) = \sup ( \rho(\mathcal{F}))$$*
where the supremum is taken over all possible secret sharing schemes $\mathcal{F}$ with access structure $\Gamma$. Of course, the optimal information rate of an ideal access structure is equal to one.\
Secret sharing schemes with an information rate equal to one are called *ideal secret sharing schemes*. An access structure is ideal if there exists an ideal secret sharing scheme that realizes it. Characterizing ideal access structures and providing bounds on the optimal information rate are two important problems in secret sharing schemes. These problems have been studied extensively in several particular families of access structures by many authors [@BD; @CT; @JZB; @FP1; @FP2; @MP3; @DS1; @TD].
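As a simple worked illustration of these definitions: if, as in Shamir's scheme recalled in the Introduction, the secret and every share are elements of the same finite field with $\ell$ elements, then $$\rho(\mathcal{F})=\frac{\log \ell}{\max_{p\in \mathcal{P}}\log \ell}=1,$$ so $\rho^*(\Gamma)=1$ and every $(t,n)$-threshold access structure is ideal.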
## Independent Sequence Methods
To prove the main theorem, we will utilize the independent sequence method, which we introduce in this section. In 1997, Blundo, De Santis, De Simone, and Vaccaro [@BSSV2] presented the independent sequence method as a way to find upper bounds on the optimal information rate. Later, in 2000, Padró and Sáez [@PS] introduced a slight generalization of this method, as follows.\
**Definition 4** (Independent Sequence). *Let $\Gamma$ be an access structure on a set of participants $\mathcal{P}$. A sequence of non-empty sets $B_1, B_2, \cdots, B_m$, where $$\emptyset \neq B_1 \subset B_2 \subset \cdots \subset B_m \subset \mathcal{P},$$*
*is called *independent* if*
- *$B_m \not \in \Gamma$*
- *there exist $X_1, X_2, \cdots, X_m \subset \mathcal{P}$ such that $B_i \cup X_i \in \Gamma$ and $B_{i-1} \cup X_i \not \in \Gamma$ for all $i=1,2, \cdots, m$, where $B_0=\emptyset$.*
**Theorem 5** (Independent Sequence Method [@PS]). *Let $\Gamma$ be an access structure on a set of participants $\mathcal{P}$. Suppose that $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset B_m \subset \mathcal{P}$ is an independent sequence, and let $A\subseteq \mathcal{P}$ be a set of minimum cardinality such that $\bigcup^{m}_{i=1} X_i \subseteq A$ for some sets $X_1, X_2, \cdots, X_m$ that make this sequence independent. Then, we have*
- *$\rho^*(\Gamma) \leq \frac{|A|}{m+1}$ if $A \in \Gamma$*
- *$\rho^*(\Gamma) \leq \frac{|A|}{m}$ if $A \not \in \Gamma$.*
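To make the bound of Theorem 5 concrete, the following Python sketch (an illustration, not part of the original results) checks Definition 4 for a hand-picked sequence in a small $2$-homogeneous access structure whose minimal qualified subsets are the edges of the path $1-2-3-4$, and then reports the bound given by Theorem 5 with the simplest choice $A = X_1 \cup \cdots \cup X_m$; the sequence $B_1=\{4\}\subset B_2=\{1,4\}$ and the sets $X_1=\{3\}$, $X_2=\{2\}$ are illustrative choices of ours.

```python
# Minimal qualified subsets of a small 2-homogeneous (graph) access
# structure on the participants {1, 2, 3, 4}: the edges of the path 1-2-3-4.
MINIMAL = [{1, 2}, {2, 3}, {3, 4}]

def qualified(subset):
    """A set is qualified iff it contains some minimal qualified subset."""
    return any(m <= set(subset) for m in MINIMAL)

def independent_sequence_bound(Bs, Xs):
    """Check Definition 4 for the sequence Bs with sets Xs and, if it is
    independent, return the upper bound on rho*(Gamma) given by Theorem 5
    with A = X_1 u ... u X_m."""
    m = len(Bs)
    assert len(Xs) == m
    assert Bs[0] and all(Bs[i] < Bs[i + 1] for i in range(m - 1))
    assert not qualified(Bs[-1])                       # B_m is unqualified
    B_prev = set()                                     # B_0 = empty set
    for B, X in zip(Bs, Xs):
        assert qualified(B | X) and not qualified(B_prev | X)
        B_prev = B
    A = set().union(*Xs)
    return len(A) / (m + 1) if qualified(A) else len(A) / m

Bs = [{4}, {1, 4}]      # B_1 subset of B_2, with B_2 unqualified
Xs = [{3}, {2}]         # witnesses; A = X_1 u X_2 = {2, 3} is qualified
print(independent_sequence_bound(Bs, Xs))   # 0.666..., i.e. rho* <= 2/3
```

In this toy example the method certifies $\rho^*(\Gamma) \leq \frac{2}{3}$.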
# Access Structures
In a secret sharing scheme, the qualified subsets are the sets of participants who can reconstruct the secret value from their shares. The collection of qualified subsets of participants is called the access structure of the secret sharing scheme. Conversely, the unqualified subsets are the sets of participants who cannot obtain any information about the secret value from their shares. In any secret sharing scheme, the access structure is said to be monotone if every superset of a qualified subset is also a qualified subset, and it is determined by the family of minimal qualified subsets of participants. The collection of minimal qualified subsets of participants is called the basis of the access structure. Moreover, we assume that in a secret sharing scheme every participant belongs to at least one minimal qualified subset.
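As a small illustration of the relation between a monotone access structure and its basis, the following Python sketch (the qualified family is an illustrative choice of ours, not an example from the paper) extracts the minimal qualified subsets from a list of qualified subsets.

```python
# Illustrative only: a monotone access structure on {1, 2, 3, 4} listed by
# its qualified subsets, from which the basis (the family of minimal
# qualified subsets) is extracted.
qualified = [{1, 2}, {2, 3}, {3, 4}, {1, 2, 3}, {1, 2, 4},
             {1, 3, 4}, {2, 3, 4}, {1, 2, 3, 4}]

basis = [Q for Q in qualified
         if not any(R < Q for R in qualified)]   # keep only the minimal sets
print(basis)   # [{1, 2}, {2, 3}, {3, 4}]
```

Note that each of the four participants belongs to at least one minimal qualified subset, consistent with the standing assumption above.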
## $k$-Homogeneous Access Structures
A hypergraph is a generalization of a graph in which hyperedges may connect more than two vertices. A $k$-uniform hypergraph (or $k$-hypergraph) is a hypergraph in which each hyperedge, called a $k$-hyperedge, has exactly $k$ vertices. In particular, the complete $k$-uniform hypergraph on $n$ vertices has all $k$-subsets of $\{1,2,\dots,n\}$ as $k$-hyperedges. A $k$-hypergraph access structure is represented by a $k$-uniform hypergraph $\mathcal{H}$ in which the participants correspond to the vertices of $\mathcal{H}$. A set of vertices can reconstruct the secret value from their shares if and only if it contains a $k$-hyperedge, while a set of vertices containing no hyperedge receives no information on the secret. A $k$-hypergraph access structure is also called a *$k$-homogeneous access structure*.\
A $k$-homogeneous access structure is determined by the family of minimal qualified subsets, each consisting of exactly $k$ participants. Recall that in the $(k,n)$-threshold secret sharing scheme, the qualified subsets are formed by all subsets with at least $k$ participants among the set of $n$ participants. For example, consider an access structure on a set of five participants $P=\{p_1,p_2,p_3,p_4,p_5\}$ with minimal qualified subsets $A_1=\{p_1,p_2,p_3\}, A_2=\{p_2,p_3,p_4\}$, and $A_3=\{p_3,p_4,p_5\}$. This access structure is not $(3,5)$-threshold but is $3$-homogeneous. Note that there is a one-to-one correspondence between $k$-uniform hypergraphs and $k$-homogeneous access structures. Furthermore, complete $k$-uniform hypergraphs correspond to $(k,n)$-threshold access structures. Many authors [@BF1; @BFM; @BL] have constructed secret sharing schemes for $k$-homogeneous access structures using various techniques.
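This example can also be checked mechanically; the short Python sketch below (illustrative only) encodes the three minimal qualified subsets as $3$-hyperedges and confirms that the access structure is $3$-homogeneous but not $(3,5)$-threshold.

```python
from itertools import combinations

# The 3-homogeneous example from the text: five participants p1,...,p5
# (encoded as 1,...,5) and minimal qualified subsets A1, A2, A3.
participants = {1, 2, 3, 4, 5}
minimal = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]

# Every minimal qualified subset has exactly k = 3 participants, so the
# access structure is 3-homogeneous.
assert all(len(A) == 3 for A in minimal)

def qualified(subset):
    """A set is qualified iff it contains some 3-hyperedge."""
    return any(A <= set(subset) for A in minimal)

# It is not the (3,5)-threshold structure: some 3-subsets, e.g. {1, 2, 4},
# contain no hyperedge and are therefore unqualified.
print(all(qualified(c) for c in combinations(participants, 3)))   # False
print(qualified({1, 2, 4}))                                       # False
```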
## Vector Space Access Structures
Let $\Gamma$ be an access structure defined on a set of participants $\mathcal{P}$, and let $D \not \in \mathcal{P}$ be a dealer. We say that the access structure $\Gamma$ is a *vector space access structure* if there exists a function
$$f: \mathcal{P} \cup \{D\} \rightarrow E\backslash \{ 0 \}$$
where $E$ is a vector space over a finite field $K$, such that for every $A \subseteq \mathcal{P}$ we have $A \in \Gamma$ if and only if the vector $f(D)$ can be expressed as a linear combination of the vectors in the set $f(A) = \{f(p) \mid \ p \in A \}$. An example of a vector space access structure is the $(k,n)$-threshold access structure, which consists of all subsets with at least $k$ participants among the set of $n$ participants (see [@DS1]). The relationship between vector space access structures and ideal access structures is as follows.
**Theorem 6**. *[@MP2; @FP2] The vector space access structures are ideal.*
*Proof.* Let $\Gamma$ be a vector space access structure defined on a set of participants $\mathcal{P}$, and let $D \not \in \mathcal{P}$ be a dealer. Then, there exists a function $f: \mathcal{P} \cup \{D\} \rightarrow E\backslash \{ 0 \}$, where $E$ is a vector space over a finite field $K$, such that for every $A \subseteq \mathcal{P}$ we have $A \in \Gamma$ if and only if the vector $f(D)$ can be expressed as a linear combination of the vectors in the set $f(A) = \{f(p) \mid \ p \in A \}$. Given a secret value $k \in K$, the dealer $D$ selects an element $v\in E$ at random such that $v\cdot f(D) = k$, where $\cdot$ denotes the inner product on $E$ with respect to a fixed basis. The dealer then distributes the share $s_p = v \cdot f(p)$ to each participant $p \in \mathcal{P}$. If $A \in \Gamma$, then $f(D) = \sum_{p \in A} \lambda_p f(p)$ for some $\lambda_p \in K$, and the participants in $A$ recover the secret as $k = v \cdot f(D) = \sum_{p \in A} \lambda_p s_p$; if $A \not \in \Gamma$, then $f(D)$ does not lie in the span of $f(A)$, and one can check that the shares of $A$ give no information about $k$. Since both the secret and every share are single elements of $K$, the ratio between the length of the secret and the maximum length of the shares given to the participants is one. Therefore, we conclude that $\Gamma$ is ideal. ◻
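The construction in the proof above can be made concrete for the $(k,n)$-threshold access structure. The following Python sketch is purely illustrative and uses choices not fixed in the text (the assignment $f(D)=(1,0,\dots,0)$ and Vandermonde vectors $f(p_i)=(1,x_i,\dots,x_i^{k-1})$ over a prime field, which recovers Shamir's scheme; all function and variable names are ours). It shows the dealer's random choice of $v$, the shares $s_p = v\cdot f(p)$, and how a qualified set writes $f(D)$ as a linear combination of its vectors to recover the secret.

```python
import random

# Illustrative parameters: field GF(q) with q prime, threshold k, n players.
q, k, n = 101, 3, 5
xs = list(range(1, n + 1))                     # public, distinct points x_i

def f(x):
    """Vandermonde vector attached to the participant with point x."""
    return [pow(x, j, q) for j in range(k)]

def distribute(secret):
    """Dealer picks v at random with v . f(D) = v[0] = secret and hands
    the share s_p = v . f(p) to each participant."""
    v = [secret] + [random.randrange(q) for _ in range(k - 1)]
    return {x: sum(vj * fj for vj, fj in zip(v, f(x))) % q for x in xs}

def reconstruct(shares):
    """A qualified set writes f(D) as a linear combination of its vectors
    (here via Lagrange coefficients at 0) and combines its shares."""
    pts = list(shares)
    secret = 0
    for x in pts:
        lam = 1
        for y in pts:
            if y != x:
                lam = lam * y * pow(y - x, -1, q) % q   # inverse mod q (Python 3.8+)
        secret = (secret + lam * shares[x]) % q
    return secret

shares = distribute(42)
print(reconstruct({x: shares[x] for x in xs[:k]}))   # 42, from any k shares
```

Each share is a single element of $K$, so the information rate of this sketch is one, in line with the proof.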
## Reduced Access Structures
Let $\Gamma$ be an access structure defined on a set of participants $\mathcal{P}$. We say that the access structure $\Gamma$ is a *reduced access structure* if no two distinct participants are equivalent, where the equivalence relation $\sim$ on $\mathcal{P}$ associated with $\Gamma$ is defined as follows.
**Definition 7** (equivalence relation). *The two participants $a,b \in \mathcal{P}$ are said to be equivalent, denoted by $a \sim b$, if either $a=b$ or $a\neq b$ and the following two conditions are satisfied:*
- *$\{ a, b \} \nsubseteq A$ if $A \in \Gamma_0$*
- *if $A \subset \mathcal{P}\backslash \{a, b\}$, then $A \cup \{a \} \in \Gamma_0$ if and only if $A \cup \{b \} \in \Gamma_0$*
*where $\Gamma_0$ is the family of minimal qualified subsets of $\Gamma$.*
Let us define the equivalence classes, induced by $\sim$, on the set of participants $\mathcal{P}$ as $\mathcal{P} / \sim = \{ [a_1], [a_2],\cdots, [a_m]\}$. Then, an access structure $\Gamma_{\sim}$ on $\mathcal{P} / \sim$ can be obtained naturally from $\Gamma$ by identifying equivalent participants. The reduced access structure of $\Gamma$ is denoted as $\Gamma_{\sim}$, and it is isomorphic to $\Gamma ( \{a_1, a_2, \cdots, a_m \})$. Therefore, we have $\rho^* (\Gamma_{\sim}) \geq \rho^* (\Gamma)$. Furthermore, $\Gamma$ is a vector space access structure if and only if $\Gamma_{\sim}$ is as well.
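The equivalence relation of Definition 7 is easy to evaluate on small instances. The following Python sketch tests the two conditions directly; the $2$-homogeneous access structure used here is an illustrative choice of ours (not an example discussed in the paper) in which participants $4$ and $5$ play interchangeable roles.

```python
from itertools import combinations

# Definition 7 made executable for a small 2-homogeneous access structure.
P = {1, 2, 3, 4, 5}
minimal = [{1, 3}, {1, 4}, {2, 4}, {1, 5}, {2, 5}]   # the family Gamma_0

def in_gamma0(S):
    return any(S == m for m in minimal)

def equivalent(a, b):
    if a == b:
        return True
    if any({a, b} <= m for m in minimal):            # condition (1)
        return False
    rest = P - {a, b}
    for r in range(len(rest) + 1):                   # condition (2)
        for A in combinations(rest, r):
            if in_gamma0(set(A) | {a}) != in_gamma0(set(A) | {b}):
                return False
    return True

print([(a, b) for a, b in combinations(sorted(P), 2) if equivalent(a, b)])
# [(4, 5)]
```

Identifying $4$ and $5$ yields the reduced structure $\Gamma_\sim$ on the four equivalence classes $\{1\},\{2\},\{3\},\{4,5\}$.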
# Lemmas
In this section, we present several lemmas needed to prove the main result. Let us define $w(Q,\Gamma)$ as the number of minimal qualified subsets contained in the set $Q$, and let $\Omega(m,\Gamma)$ be the set of possible values of $w(Q,\Gamma)$ over all sets $Q$ with $|Q|=m$. In other words, $\Omega(m,\Gamma)$ collects the numbers of minimal qualified subsets contained in the sets of $m$ participants. The following lemma is the $s=2$ case of Lemma [Lemma 9](#theorem1){reference-type="ref" reference="theorem1"}, whose proof is given below, so we omit its proof.
**Lemma 8**. *Let $\Gamma$ be a $k$-homogeneous access structure defined on a set of participants $\mathcal{P}$ such that the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $k$, and $\rho^* (\Gamma) > \frac{k-1}{k}$. Let us consider any $k+2$ distinct participants $u_1, u_2, \cdots, u_k, u_{k+1}, v \in \mathcal{P}$ such that $$\label{eqq:1111}
\{u_1,u_2, u_{l_1}, u_{l_2}, \cdots, u_{l_{k-2}}\} \in \Gamma$$*
*for all $3 \leq l_1, l_2, \cdots, l_{k-2} \leq k+1$.\
Then either $w(\{u_1, u_2, u_3, \cdots, u_k, u_{k+1} \}, \ \Gamma ) = k+1$ or $\{ u_3,u_4, u_5, \cdots, u_k, u_{k+1}, v \} \not \in \Gamma$.\
*
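The quantities $w(Q,\Gamma)$ and $\Omega(m,\Gamma)$, and in particular the combinatorial hypothesis $k \not\in \Omega(k+1,\Gamma)$ appearing in Lemma 8, can be computed mechanically on small instances. The Python sketch below (illustrative only) does so for the $3$-homogeneous example of Section 3; the hypothesis $\rho^*(\Gamma) > \frac{k-1}{k}$ is a separate matter and is not checked here.

```python
from itertools import combinations

# The 3-homogeneous example of Section 3: minimal qualified subsets
# {p1,p2,p3}, {p2,p3,p4}, {p3,p4,p5}, with participants encoded as 1,...,5.
participants = {1, 2, 3, 4, 5}
minimal = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]

def w(Q):
    """Number of minimal qualified subsets contained in Q."""
    return sum(1 for A in minimal if A <= set(Q))

def Omega(m):
    """Possible values of w(Q, Gamma) over all sets Q of m participants."""
    return {w(Q) for Q in combinations(participants, m)}

k = 3
print(Omega(k + 1))            # {0, 1, 2}
print(k not in Omega(k + 1))   # True: the hypothesis k not in Omega(k+1)
```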
**Lemma 9** (General version of Lemma [Lemma 8](#lemma1){reference-type="ref" reference="lemma1"}). *Suppose that $\Gamma$ is a $k$-homogeneous access structure defined on a set of participants $\mathcal{P}$, such that the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $k$, and $\rho^* (\Gamma) > \frac{k-1}{k}$. Let us consider any $k+2$ distinct participants $u_1, u_2, \cdots, u_k, u_{k+1}, v \in \mathcal{P}$ such that $$\label{eqq:1}
\{u_1,u_2,\cdots, u_s, u_{l_1}, u_{l_2}, \cdots, u_{l_{k-s}}\} \in \Gamma$$*
*for all $s+1 \leq l_1, l_2, \cdots, l_{k-s} \leq k+1$, where $s \geq 2$.\
Then there exist two participants $u_i$ and $u_j$, where $1\leq i,j \leq k+1$, such that either $$w(\{u_1, u_2, u_3, \cdots, u_k, u_{k+1} \}, \ \Gamma ) = k+1$$ or $$\{ u_{m_1},u_{m_2}, u_{m_3}, \cdots, u_{m_{k-1}}, v \} \not \in \Gamma ,$$ where $i, j \not \in \{m_1, m_2, m_3, \cdots, m_{k-1}\}$ and $1 \leq m_1, m_2, m_3, \cdots, m_{k-1} \leq k+1$.*
*Proof.* Let us assume that $w(\{u_1, u_2, u_3, \cdots, u_k, u_{k+1} \},\:\Gamma) \neq k+1$ holds. This implies that the number of minimal qualified subsets contained in the set of $k+1$ participants, $\{u_1, u_2, u_3, \cdots, u_k, u_{k+1} \}$, is not equal to $k+1$. Also, by using the condition $k\not \in \Omega(k+1, \Gamma)$, which means that the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $k$, we can conclude that there exist $i$ and $j$ such that $$\begin{aligned}
\label{eqq:2}
\{u_i, u_{m_1},u_{m_2}, u_{m_3}, \cdots, u_{m_{k-1}}\} \notin \Gamma {\text{ \ and \ \ }}
\{u_j, u_{m_1},u_{m_2}, \cdots, u_{m_{k-1}}\} \notin \Gamma.
\end{aligned}$$ where $i, j \not \in \{m_1, m_2, \cdots, m_{k-1}\}$, $1 \leq i,j, m_1,\cdots, m_{s-2} \leq s$, $s+1\leq m_{s-1},\cdots,m_{k-1} \leq k+1$.\
Note that $\{u_1, u_2, u_3, \cdots, u_k, u_{k+1}\} = \{u_i,u_j,u_{m_1},u_{m_2}, u_{m_3},\cdots, u_{m_{k-1}}\}$. Now we need to show that $\{u_{m_1}, u_{m_2}, u_{m_3}, \cdots, u_{m_{k-1}}, v\}\notin \Gamma$. Let us consider the two cases as follows: $\{u_i,u_{m_1}, u_{m_2}, u_{m_3}, \cdots, u_{m_{k-2}},v\}\notin \Gamma$ or $\{u_i, u_{m_1}, u_{m_2},u_{m_3}, \cdots, u_{m_{k-2}}, v\}\in \Gamma$.\
**Case I: $\{u_i, u_{m_1}, u_{m_2}, \cdots, u_{m_{k-2}}, v \}\notin \Gamma$**.
In this case, let us first prove that $\{ u_i, u_{m_1}, \cdots, u_{m_{k-2}}, u_{m_{k-1}}, v\} \notin \Gamma$. Let us assume otherwise, that is, the set of $k+1$ different participants $\{u_i, u_{m_1}, \cdots, u_{m_{k-2}}, u_{m_{k-1}}, v\}$ is in $\Gamma$. We can consider the following subsets of participants $\mathcal{P}$: $B_1 = \{u_i\}$,$\:B_2=\{u_i,u_{m_1}\}$, $\:B_3=\{u_i,u_{m_1},u_{m_2}\},\cdots, B_{k-1}=\{u_i,u_{m_1},u_{m_2},\cdots,u_{m_{k-2}}\},$ and $\:B_k=\{u_i,u_{m_1},u_{m_2}\cdots,u_{m_{k-2}},v\}$. From the condition of Case I, we have $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset B_k =\{u_i,u_{m_1}, u_{m_2},\cdots,u_{m_{k-2}},v\}\not \in \Gamma$. Let us consider a subset $A=\{u_j, u_{m_1}, u_{m_2}, \cdots, u_{m_{k-3}}, u_{m_{k-1}} \} \subseteq \mathcal{P}$ consisting of $k-1$ participants, where $1 \leq j, m_1,\cdots, m_{s-2} \leq s$, and $s+1\leq m_{s-1},\cdots,m_{k-1} \leq k+1$. Note that $A \not \in \Gamma$. We can now consider the subsets of $A$ as follows: $X_1=\{u_j,u_{m_1},u_{m_2},\cdots,u_{m_{k-3}},u_{m_{k-1}}\},\:X_2=\{u_j,u_{m_2},u_{m_3},\cdots,u_{m_{k-3}},u_{m_{k-1}}\},\:X_3=\{u_j,u_{m_3},\cdots,u_{m_{k-3}},u_{m_{k-1}}\},\cdots,X_{k-2}=\{u_j,u_{m_{k-1}}\}$, $\:X_{k-1}=\{u_j\}$, and $\:X_k=\{u_{m_{k-1}}\}$.\
By using the condition ([\[eqq:1\]](#eqq:1){reference-type="ref" reference="eqq:1"}), we derive that $B_1 \cup X_1 = \{u_i, u_j, u_{m_1}, \cdots,u_{m_{k-3}},u_{m_{k-1}}\}\in\Gamma$, $\cdots,$ $B_{k-2} \cup X_{k-2} = \{u_i, u_j, u_{m_1}, \cdots,u_{m_{k-3}}, u_{m_{k-1}}\} \in \Gamma,$ $B_{k-1} \cup X_{k-1} = \{u_i,u_j,u_{m_1}, \cdots,u_{m_{k-3}}, u_{m_{k-2}}\} \in \Gamma$. From the assumption, we also observe that $B_k \cup X_k = \{u_i,u_{m_1},\cdots,u_{m_{k-2}}, u_{m_{k-1}}, v \} \in \Gamma$. Since the set of $k-1$ participants can not be in $\Gamma$, we derive that $B_0\cup X_1=\{u_j,u_{m_1},u_{m_2},\cdots, u_{m_{k-3}},u_{m_{k-1}}\} \notin \Gamma, B_1 \cup X_2 = \{u_i,u_j,u_{m_2},\cdots,u_{m_{k-3}},u_{m_{k-1}}\} \notin \Gamma,\cdots, B_{k-2} \cup X_{k-1} = \{u_i, u_j,u_{m_1},\cdots,u_{m_{k-3}}\} \notin \Gamma$. By using ([\[eqq:2\]](#eqq:2){reference-type="ref" reference="eqq:2"}), we can also derive that $B_{k-1} \cup X_k = \{u_i,u_{m_1}, u_{m_2}, \cdots, u_{m_{k-2}}, u_{m_{k-1}}\}\notin \Gamma$.\
Then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A=\{u_j, u_{m_1},\cdots,u_{m_{k-3}},u_{m_{k-1}}\} \notin \Gamma$. Therefore, using the independent sequence method, we can conclude that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Hence, we conclude that $\{u_i,u_{m_1},u_{m_2}, \cdots,u_{m_{k-2}},u_{m_{k-1}}, v \} \notin \Gamma$, which implies that $\{u_{m_1},u_{m_2},\cdots,u_{m_{k-2}}, u_{m_{k-1}}, v \} \notin \Gamma$.\
**Case II: $\{u_i, u_{m_1}, u_{m_2}, \cdots, u_{m_{k-2}}, v \}\in \Gamma$**.
In this case, we need to prove $\{u_{m_1},u_{m_2},\cdots, u_{m_{k-1}}, v \} \notin \Gamma$. Let us assume the opposite, that is, $\{u_{m_1},u_{m_2},\cdots, u_{m_{k-1}}, v \} \in \Gamma$. We can consider the following subsets of participants $\mathcal{P}$: $B_1 = \{u_{m_1}\},\:B_2=\{u_{m_1},u_{m_2}\},\cdots,B_{k-1}=\{u_{m_1},u_{m_2}\cdots,u_{m_{k-1}}\}$, and $B_k=\{u_j,u_{m_1},\cdots,u_{m_{k-1}}\}$. By using ([\[eqq:2\]](#eqq:2){reference-type="ref" reference="eqq:2"}), we have $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset B_k =\{u_j,u_{m_1},u_{m_2}\cdots,u_{m_{k-2}},u_{m_{k-1}}\}\not \in \Gamma$. Let us consider a subset $A=\{u_i, u_{m_2}, u_{m_3}, \cdots, u_{m_{k-3}}, u_{m_{k-2}}, v \} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $A \not \in \Gamma$. Now we can consider the subsets of $A$ as follows: $X_1=\{u_i,u_{m_2},\cdots,u_{m_{k-2}}, v \}$,$\:X_2=\{u_i,u_{m_3},\cdots,u_{m_{k-2}},v\}$,$\:X_3=\{u_i,u_{m_4},\cdots,u_{m_{k-2}},v\}$, $\cdots,X_{k-3}=\{u_i,u_{m_{k-2}},v\}$,$\:X_{k-2}=\{u_i,v\}$,$\:X_{k-1}=\{v\}$,$\:X_k=\{u_i\}$.\
Since $\{u_i, u_{m_1}, u_{m_2}, \cdots, u_{m_{k-2}}, v\}\in \Gamma$, we can observe that $B_1 \cup X_1 =$ $\{u_i,u_{m_1},u_{m_2},\cdots, u_{m_{k-2}}, v\}$ $\in\Gamma$, $\ B_2 \cup X_2=\{u_i,u_{m_1},u_{m_2},\cdots,u_{m_{k-2}},v\}$ $\in \Gamma$, $\cdots,
\ B_{k-2} \cup X_{k-2}$ $= \{u_i,u_{m_1},u_{m_2},\cdots,u_{m_{k-2}}, v\}\in \Gamma$. Moreover, since the set of $k-1$ participants can not be in $\Gamma$, we can also observe that $B_0 \cup X_1=\{u_i,u_{m_2},\cdots,u_{m_{k-2}}, v\} \notin \Gamma, \ B_1 \cup X_2 = \{u_i,u_{m_1},u_{m_3},\cdots,u_{m_{k-2}}, v\} \notin \Gamma,\cdots, B_{k-3} \cup X_{k-2} = \{u_i,u_{m_1},\cdots,u_{m_{k-3}}, v\} \notin \Gamma,\: B_{k-2} \cup X_{k-1} = \{u_{m_1},\cdots,u_{m_{k-2}}, v\} \notin \Gamma$. From equation ([\[eqq:2\]](#eqq:2){reference-type="ref" reference="eqq:2"}), we can see that $B_{k-1}\cup X_k = \{u_i,u_{m_1}, u_{m_2}, \cdots, u_{m_{k-2}}, u_{m_{k-1}}\}\notin \Gamma$ and $B_k = \{u_j, u_{m_1},u_{m_2}, \cdots, u_{m_{k-2}}, u_{m_{k-1}}\} \notin \Gamma$. Additionally, from the assumption, we have $B_{k-1} \cup X_{k-1} = \{u_{m_1},u_{m_2},\cdots, u_{m_{k-1}}, v\} \in \Gamma$. By using the condition ([\[eqq:1\]](#eqq:1){reference-type="ref" reference="eqq:1"}), we derive that $\{u_i, u_j, u_{m_1}, \cdots,u_{m_{k-3}},u_{m_{k-1}}\}\in\Gamma$, which implies that $B_k \cup X_k = \{ u_i, u_j, u_{m_1}, \cdots, u_{m_{k-2}}, u_{m_{k-1}}\} \in \Gamma$.\
Then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A=\{u_i,u_{m_2}, u_{m_3}, \cdots,u_{m_{k-3}},u_{m_{k-2}}, v\} \notin \Gamma$. Hence, using the independent sequence method, we can conclude that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Therefore, we conclude that $\{u_{m_1},u_{m_2}, \cdots,u_{m_{k-2}},u_{m_{k-1}}, v\} \notin \Gamma$. ◻
**Lemma 10**. *Let $\Gamma$ be a $k$-homogeneous access structure on a set of participants $\mathcal{P}$ such that $\rho^* (\Gamma) > \frac{k-1}{k}$. Let $a_1, a_2, \cdots, a_k, a_{k+1}, b_1, b_2 \in \mathcal{P}$ be $k+3$ different participants satisfying the following two conditions: $$\begin{aligned}
\label{eq:3}
w(\{a_1, a_2, \cdots, a_k, a_{k+1} \}, \Gamma ) = k+1
\end{aligned}$$ and $$\begin{aligned}
\label{eq:4}
\{a_1, a_{i_2}, a_{i_3}, \cdots, a_{i_{k-2}}, b_1, b_2 \} \in \Gamma,
\end{aligned}$$*
*where $2\leq i_2, i_3, \cdots, i_{k-2} \leq k+1.$\
Then there exist $j_1, j_2, \cdots, j_{k-1}$, where $1 \leq j_1, j_2, \cdots, j_{k-1} \leq k+1$, such that either*
*$\{a_{j_1}, a_{j_2}, \cdots, a_{j_{k-1}}, b_1 \} \in \Gamma$ or $\{a_{j_1}, a_{j_2}, \cdots, a_{j_{k-1}}, b_2 \} \in \Gamma$.*
*Proof.* Let us assume otherwise, that is, $$\begin{aligned}
\label{eq:5}
\{a_{j_1},\cdots, a_{j_{k-1}},b_1\} \notin \Gamma
\ \ {\text{and}} \ \
\{a_{j_1},\cdots, a_{j_{k-1}},b_2\} \notin \Gamma \ \ {\text{for\ all}} \ \ 1 \leq j_1,\cdots,j_{k-1} \leq k+1.\end{aligned}$$ We can consider the following subsets of participants $\mathcal{P}$: $B_1 = \{a_1\}$, $B_2=\{a_1,a_4\}$, $B_3=\{a_1,a_4,a_5\}$, $\cdots$, $B_{k-2}=\{a_1,a_4,\cdots,a_k\}$, $B_{k-1}=\{a_1,a_4,\cdots,a_k,b_1\},$ and $B_k=\{a_1,a_4,\cdots,a_k,a_{k+1},b_1\}$. Note that $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset B_k =\{a_1,a_4,a_5\cdots,a_k, a_{k+1},b_1\}$. Let us consider a subset $A=\{a_2, a_3, a_4, \cdots, a_{k-2}, b_1, b_2 \} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $A \not \in \Gamma$. We can now consider the subsets of $A$ as follows: $X_1=\{a_2,a_3,a_4,\cdots,a_{k-2},b_1,b_2\},\:X_2=\{a_2,a_3,a_5,\cdots,a_{k-2},b_1,b_2\},\:X_3=\{a_2,a_3,a_6,\cdots,a_{k-2},b_1,b_2\},\cdots,X_{k-3}=\{a_2,b_1,b_2\},\:X_{k-2}=\{b_1,b_2\},\:X_{k-1}=\{b_2\},\:X_k=\{a_3\}$.\
By using the condition ([\[eq:4\]](#eq:4){reference-type="ref" reference="eq:4"}), we can derive that $B_1 \cup X_1 = \{a_1,a_2,a_3,\cdots,a_{k-2},b_1,b_2\}\in\Gamma$, $B_2 \cup X_2=\{a_1,a_2,a_3, \cdots,a_{k-2},b_1,b_2\}\in \Gamma,$ $\cdots,$ $B_{k-2} \cup X_{k-2}=\{a_1,a_4,a_5, \cdots,a_{k},b_1,b_2\}\in \Gamma.$ Moreover, since the set of $k-1$ participants can not be in $\Gamma$, we also observe that $B_0 \cup X_1=\{a_2,a_3,\cdots,a_{k-2},b_1,b_2\} \notin \Gamma$, $B_1 \cup X_2 = \{a_1,a_2,a_3,a_5,\cdots,a_{k-2},b_1,b_2\} \notin \Gamma$,$\cdots$, $B_{k-2} \cup X_{k-1} = \{a_1,a_4,\cdots,a_k,b_2\} \notin \Gamma$. Furthermore, using the assumption ([\[eq:5\]](#eq:5){reference-type="ref" reference="eq:5"}), we obtain $B_{k-1} \cup X_k = \{a_1,a_3,\cdots,a_k,b_1\} \notin \Gamma$. Additionally, using the condition ([\[eq:4\]](#eq:4){reference-type="ref" reference="eq:4"}), we have $B_{k-1} \cup X_{k-1} = \{a_1,a_4, a_5, \cdots,a_k,b_1,b_2\}\in \Gamma$. By using ([\[eq:3\]](#eq:3){reference-type="ref" reference="eq:3"}), we derive that $\{a_1, a_3, a_4, \cdots, a_k, a_{k+1}\} \in \Gamma$, which implies that $B_k \cup X_k = \{a_1,a_3,a_4,\cdots, a_k, a_{k+1},b_1\} \in \Gamma$.\
Then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A=\{a_2,a_3,a_4,\cdots,a_{k-2},b_1,b_2\}$, which is also not in $\Gamma$. Therefore, using the independent sequence method, we can conclude that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$. This leads to a contradiction. Hence, we conclude that there exist $j_1, j_2, \cdots, j_{k-1}$, where $1 \leq j_1, j_2, \cdots, j_{k-1} \leq k+1$, such that either $$\begin{aligned}
\{a_{j_1},\cdots, a_{j_{k-1}},b_1\} \in \Gamma
\ \ {\text{or}} \ \
\{a_{j_1},\cdots, a_{j_{k-1}},b_2\} \in \Gamma.\end{aligned}$$
This completes the proof of Lemma [Lemma 10](#lemma2){reference-type="ref" reference="lemma2"}. ◻
**Lemma 11**. *Assume that $\Gamma$ is a $k$-homogeneous access structure on a set of participants $\mathcal{P}$ such that the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $k$, and $\rho^* (\Gamma) > \frac{k-1}{k}$. Let us consider $k+2$ different participants $a_1, a_2, \cdots, a_k, a_{k+1}, b \in \mathcal{P}$ satisfying the following three conditions:*
*$$\begin{aligned}
\label{eq:8}
w(\{a_1, a_2, \cdots, a_k, a_{k+1} \}, \Gamma ) = k+1,\end{aligned}$$ and $$\begin{aligned}
\label{eq:9}
\{a_1, a_2, a_{i_1}, a_{i_2}, \cdots, a_{i_{k-3}}, b \} \in \Gamma\:\:\text{where}\:\:3\leq i_1,i_2, \cdots, i_{k-3} \leq k+1,\end{aligned}$$ and $$\begin{aligned}
\label{eq:10}
\{a_{t_1}, a_{t_2}, \cdots, a_{t_{k-3}}, a_k, a_{k+1}, b \} \in \Gamma \:\:\text{where}\:\: 1 \leq t_1, t_2, \cdots, t_{k-3} \leq k-1.\end{aligned}$$*
*Then, the induced access structure $\Gamma(\{a_1, a_2, \cdots, a_k, a_{k+1}, b \})$ is the $(k,k+2)$-threshold access structure.*
*Proof.* To prove this, we need to show that $w(\{ a_{p_1}, a_{p_2}, a_{p_3}, \cdots, a_{p_k}, b \}, \Gamma) = k+1$, where $1 \leq p_1, p_2, \cdots, p_k \leq k+1$. We will prove it by showing the following two claims.
**Claim 1**. *$w(\{a_1,a_2,a_{q_1},a_{q_2},\cdots,a_{q_{k-2}},b \}, \Gamma) = k+1$, where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$.*
*Proof of Claim [Claim 1](#claim41){reference-type="ref" reference="claim41"}.* Using the condition ([\[eq:8\]](#eq:8){reference-type="ref" reference="eq:8"}), we obtain $$\begin{aligned}
\label{eq:11}
\{ a_1,a_2,a_{q_1},a_{q_2},\cdots,a_{q_{k-2}}\} \in \Gamma,
\end{aligned}$$ where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$.\
Moreover, using the condition ([\[eq:9\]](#eq:9){reference-type="ref" reference="eq:9"}), we also obtain $$\begin{aligned}
\label{eq:12}
\{a_1, a_2, a_{i_1}, a_{i_2}, \cdots, a_{i_{k-3}}, b \} \in \Gamma
\end{aligned}$$
where $i_1,i_2, \cdots, i_{k-3} \in \{q_1,q_2,\cdots, q_{k-2}\}$ with $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$.\
Thus, we need to show that $\{a_2,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}}, b\} \in \Gamma$ or $\{ a_1,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}}, b\} \in \Gamma$, where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$. Suppose, for the sake of contradiction, that $\{a_2,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}}, b\} \not \in \Gamma$ and $\{ a_1,a_{q_1},a_{q_2},\cdots,a_{q_{k-2}}, b \} \not \in \Gamma$, where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$. Now we consider the subsets of participants $\mathcal{P}$ as follows: $B_1 = \{a_{q_{k-2}}\},\:B_2=\{a_{q_{k-2}}, b \}$, $\:B_3=\{a_{q_{k-3}},a_{q_{k-2}}, b \},\cdots,\:B_{k-1}=\{a_{q_1},a_{q_2},\cdots,a_{q_{k-2}}, b \},$ and $\:B_k=\{a_2,a_{q_1},a_{q_2}, a_{q_3}, \cdots,a_{q_{k-2}},b\}$. From the given assumption, we have a sequence of sets $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset
\:B_k=\{a_2,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}},b\}
\not \in \Gamma$. Let us choose a participant, denoted by $a_{q_{k-1}}$, from the set of $k+1$ participants $a_1,a_2, \cdots, a_{k+1}$ who is not in the set $\{a_1, a_2, a_{q_1}, a_{q_2}, \cdots, a_{q_{k-2}}\}$. Next, we consider a subset $A=\{a_1,a_2, a_{q_2}, \cdots, a_{q_{k-3}}, a_{q_{k-1}} \} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $A \not \in \Gamma$. Now, we consider the subsets of $A$ as follows: $X_1=\{a_1,a_2, a_{q_2}, a_{q_3}, \cdots, a_{q_{k-3}}, a_{q_{k-1}}\},\:X_2=\{a_1,a_2, a_{q_2}, a_{q_3}, \cdots, a_{q_{k-4}}, a_{q_{k-1}}\},\cdots,X_{k-3}= \{a_1,a_2, a_{q_{k-1}}\}, X_{k-2}=\{a_1,a_2\}$, $\:X_{k-1}=\{a_{q_{k-1}}\}$, and $\:X_k=\{a_1\}$.\
Using condition ([\[eq:8\]](#eq:8){reference-type="ref" reference="eq:8"}), we observe $B_1 \cup X_1 = \{a_1,a_2, a_{q_{2}}\cdots,a_{q_{k-3}},a_{q_{k-2}},a_{q_{k-1}}\}\in\Gamma$. Additionally, using condition ([\[eq:9\]](#eq:9){reference-type="ref" reference="eq:9"}), we obtain that $\ B_2 \cup X_2$ $= \{a_1,a_2, a_{q_{2}}\cdots,a_{q_{k-4}},a_{q_{k-2}},a_{q_{k-1}}, b \}\in\Gamma$, $\cdots$, $\ B_{k-2} \cup X_{k-2} = \{a_1,a_2, a_{q_{2}}\cdots,a_{q_{k-3}},a_{q_{k-2}}, b \}\in\Gamma$. Also, we have $\ B_{k-1} \cup X_{k-1} =$ $\{a_{q_1}, a_{q_{2}}\cdots,a_{q_{k-2}},a_{q_{k-1}}, b \} = \{a_3,a_4,a_5, \cdots,a_k,a_{k+1},b\} \in\Gamma$. By using the condition ([\[eq:9\]](#eq:9){reference-type="ref" reference="eq:9"}), we also deduce that $\{a_1,a_2, a_{r_1}, a_{r_{2}}\cdots,a_{r_{k-3}}, b \} \in \Gamma$, where $a_{r_1},a_{r_2},\cdots, a_{r_{k-3}} \in \{a_{q_1}, a_{q_{2}}\cdots,a_{q_{k-2}}\}$, which implies that $\ B_{k} \cup X_{k} = \{a_1,a_2, a_{q_1}, a_{q_{2}}\cdots,a_{q_{k-2}}, b \} \in\Gamma$.\
Since the set of $k-1$ participants can not be in $\Gamma$, we derive that $B_0\cup X_1=\{a_1,a_2,a_{q_2},a_{q_3},\cdots, a_{q_{k-3}},a_{q_{k-1}}\} \notin \Gamma,$ $\ B_1 \cup X_2 = \{a_1,a_2,a_{q_2},\cdots,a_{q_{k-4}},a_{q_{k-2}},a_{q_{k-1}}\} \notin \Gamma,$ $\cdots, B_{k-2} \cup X_{k-1} = \{a_{q_2},\cdots,a_{q_{k-2}},a_{q_{k-1}}, b\} \notin \Gamma$. From the given assumption, we get $B_{k-1} \cup X_{k} = \{a_1,a_{q_2},\cdots,a_{q_{k-2}}, b\} \notin \Gamma$.\
Then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A=\{a_1,a_2, a_{q_2}, \cdots, a_{q_{k-3}}, a_{q_{k-1}} \}$. Therefore, using the independent sequence method, we can obtain that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Hence, we conclude that $\{a_2,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}}, b\} \in \Gamma$ or $\{ a_1,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}}, b\} \in \Gamma$, where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$. By using the condition $k\not \in \Omega(k+1, \Gamma)$, we derive that $\{a_2,a_{q_1},a_{q_2},\cdots,a_{q_{k-2}}, b\} \in \Gamma$ and $\{ a_1,a_{q_1},a_{q_2}, \cdots,a_{q_{k-2}}, b\} \in \Gamma$, where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$. Therefore, we conclude that $w(\{a_1,a_2,a_{q_1},a_{q_2},\cdots,a_{q_{k-2}},b \}, \Gamma) = k+1$, where $3\leq q_1,q_2, \cdots, q_{k-2} \leq k+1$.\
◻
**Claim 2**. *$w(\{a_{u_1},a_{3},a_{4}, a_{5},\cdots,a_{k},a_{k+1},b \}, \Gamma) = k+1$, where $u_1 = 1$ or $2$.*
*Proof of Claim [Claim 2](#claim42){reference-type="ref" reference="claim42"}.* By using the condition ([\[eq:10\]](#eq:10){reference-type="ref" reference="eq:10"}), we derive $$\begin{aligned}
\label{eq:12}
\{a_{j_1}, a_{j_2}, \cdots,a_{j_{k-3}}, a_{k}, a_{k+1}, b \} \in \Gamma
\end{aligned}$$
where $j_1, j_2, \cdots, j_{k-3} \in \{u_1,3,4,5,\cdots,k-1\}$ with $u_1=1$ or $2$.\
Thus, we need to prove that $\{a_{u_1},a_{3},\cdots,a_{k-1},a_{k+1}, b\}$ $\in \Gamma$ or $\{a_{u_1},a_{3},\cdots,a_{k-1},a_{k}, b\} \in \Gamma$, where $u_1 = 1$ or $2$. Suppose, for the sake of contradiction, that $\{a_{u_1},a_{3},a_{4},\cdots,a_{k-1},a_{k+1}, b\} \not \in \Gamma$ and $\{a_{u_1},a_{3},a_{4},\cdots,a_{k-1},a_{k}, b\} \not \in \Gamma$, where $u_1=1 {\text{\ or }} 2$. Let us consider the following subsets of participants $\mathcal{P}$: $B_1 = \{a_{u_1}\},\:B_2=\{a_{u_1}, b \}$, $\:B_3=\{a_{u_1},a_{3}, b \},\cdots, B_{k-2}=\{a_{u_1},a_{3},\cdots,a_{k-2}, b \},\:B_{k-1}=\{a_{u_1},a_{3},\cdots,a_{k-1}, b \},$ and $\:B_k=\{a_{u_1},a_{3}, \cdots,a_{k-1},a_{k+1},b\}$, where $u_1=1 {\text{\ or }} 2$. From the given assumption, we have $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset
\:B_k= \{a_{u_1},a_{3}, \cdots,a_{k-1},a_{k+1},b\}
\not \in \Gamma$, where $u_1=1 {\text{\ or }} 2$.\
Let us choose a participant, denoted by $u_2$, from the set of two participants $a_1,a_2$ who is not equal to $u_1$. Next, we consider a subset $A=\{a_{u_2},a_4, a_{5}, \cdots, a_{k}, a_{k+1} \} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $A \not \in \Gamma$. We can now consider the subsets of $A$ as follows: $X_1=\{a_{u_2},a_4, a_{5},\cdots, a_{k}, a_{k+1}\},\:X_2=\{a_{u_2},a_5, \cdots, a_{k}, a_{k+1}\},\cdots,X_{k-3}= \{a_{u_2},a_k,a_{k+1}\}, X_{k-2}=\{a_k,a_{k+1}\}$, $\:X_{k-1}=\{a_{u_2}\}$, and $\:X_k=\{a_k\}$. By using the condition ([\[eq:8\]](#eq:8){reference-type="ref" reference="eq:8"}), we observe that $B_1 \cup X_1 = \{a_{u_1},a_{u_2},a_4, a_5, \cdots,a_k, a_{k+1}\}= \{a_{1},a_{2},a_4, a_5, \cdots,a_k, a_{k+1}\}\in\Gamma$. Moreover, using the condition ([\[eq:9\]](#eq:9){reference-type="ref" reference="eq:9"}) and ([\[eq:10\]](#eq:10){reference-type="ref" reference="eq:10"}), we also derive that $\ B_2 \cup X_2 =
\{a_{u_1},a_{u_2},a_5, \cdots, a_{k}, a_{k+1}, b\} \in\Gamma$, $\cdots$, $\ B_{k-2} \cup X_{k-2} =\{a_{u_1},a_{3},a_{4},\cdots,a_{k-2},a_k,a_{k+1}, b\} \in \Gamma$, and $\ B_{k-1} \cup X_{k-1} = \{a_{u_1}, a_{u_{2}}, a_3,\cdots,a_{k-1}, b \} =
\{a_{1}, a_{2}, a_3,\cdots,a_{k-1}, b \}
\in\Gamma$. Additionally, using ([\[eq:10\]](#eq:10){reference-type="ref" reference="eq:10"}), we can derive that $\{a_{u_1},a_{r_1}, a_{r_{2}}\cdots,a_{r_{k-4}},a_k,a_{k+1}, b \} \in \Gamma$, where $a_{r_1},a_{r_2},\cdots, a_{r_{k-4}} \in \{a_{3}, a_{4}\cdots,a_{k-1}\}$, which implies that $\ B_{k} \cup X_{k} = \{a_{u_1},a_3, a_{4}, \cdots,a_{k-1}, a_k, a_{k+1}, b \} \in\Gamma$. Since the set of $k-1$ participants can not be in $\Gamma$, we deduce that $B_0\cup X_1=\{a_{u_2},a_4,a_5,\cdots, a_k,a_{k+1}\} \notin \Gamma, \ B_1 \cup X_2 = \{a_{u_1},a_{u_2},a_5,\cdots,a_{k},a_{k+1}\} = \{a_{1},a_{2},a_5,\cdots,a_{k},a_{k+1}\} \notin \Gamma,\cdots, B_{k-2} \cup X_{k-1} = \{a_{u_1},a_{u_2},a_3,\cdots,a_{k-2}, b\}
= \{a_{1},a_{2},a_3,\cdots,a_{k-2}, b\}\notin \Gamma$. From the given assumption, we have $B_{k-1} \cup X_{k} = \{a_{u_1},a_{3},\cdots,a_{k-1}, a_k, b\} \notin \Gamma$.\
Then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A= \{a_{u_2},a_4, a_{5}, \cdots, a_{k}, a_{k+1} \}$. Hence, using the independent sequence method, we can conclude that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Therefore, we conclude that $\{a_{u_1},a_{3},a_{4},\cdots,a_{k-1},a_{k+1}, b\} \in \Gamma$ or $\{a_{u_1},a_{3},a_{4},\cdots,a_{k-1},a_{k}, b\} \in \Gamma$, where $u_1 = 1$ or $2$. By using the condition $k\not \in \Omega(k+1, \Gamma)$, we derive that $\{a_{u_1},a_{3},a_{4},\cdots,a_{k-1},a_{k+1}, b\} \in \Gamma$ and $\{a_{u_1},a_{3},a_{4},\cdots,a_{k-1},a_{k}, b\} \in \Gamma$, where $u_1 = 1$ or $2$. Therefore, we conclude that $w(\{a_{u_1},a_3,a_{4},a_{5},\cdots,a_k, a_{k+1},b \}, \Gamma) = k+1$, where $u_1 = 1$ or $2$. ◻
Using Claim [Claim 1](#claim41){reference-type="ref" reference="claim41"} and Claim [Claim 2](#claim42){reference-type="ref" reference="claim42"}, we can conclude that $$w(\{ a_{p_1}, a_{p_2}, a_{p_3}, \cdots, a_{p_k}, b \}, \Gamma) = k+1,$$ where $1 \leq p_1, p_2, \cdots, p_k \leq k+1$.\
Thus, the induced access structure $\Gamma(\{a_1, a_2, \cdots, a_k, a_{k+1}, b \})$ is the $(k,k+2)$-threshold access structure. This completes the proof of Lemma [Lemma 11](#lemma3){reference-type="ref" reference="lemma3"}. ◻
Before stating the next lemma, let us recall the equivalence relation on participants introduced in Definition 7: two participants $p, q \in \mathcal{P}$ are *equivalent* if either (i) $p=q$, or (ii) $p\neq q$ and the following two conditions are satisfied: (1) $\{p, q \} \not \subset A$ for every $A\in \Gamma_0$, and (2) if $A \subset \mathcal{P} \backslash \{p,q\}$, then $A \cup \{p\} \in \Gamma_0$ if and only if $A \cup \{q\} \in \Gamma_0$, where $\Gamma_0$ is the collection of minimal qualified subsets.
**Lemma 12**. *Let $\Gamma$ be a $k$-homogeneous access structure on a set of participants $\mathcal{P}$, such that the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $1$ and $k$, and $\rho^* (\Gamma) > \frac{k-1}{k}$. Let us consider $k+2$ different participants $a_1, a_2, \cdots, a_k, a_{k+1}, b \in \mathcal{P}$ satisfying the following conditions:*
*$$\begin{aligned}
&(i) \ \ \ w(\{a_1, a_2, \cdots, a_k, a_{k+1} \}, \Gamma ) = k+1 \label{eq:15}\\ \nonumber & \\
&(ii)\ \ \ \{a_1, a_2, \cdots, a_{k-1}, b \} \in \Gamma.\label{eq:16}\\ \nonumber & \\
&(iii) \ \ \ a_k \text{ is not equivalent to } a_{k+1}. \label{eq:166}\\ \nonumber\end{aligned}$$*
*Moreover, the $k+2$ participants $a_1, a_2, \cdots, a_k, a_{k+1}, b$ satisfy one of the following three conditions.*
*$$\begin{aligned}
(iv) \ \ & \{a_{t_1}, a_{t_2}, a_{t_3}, \cdots, a_{t_{k-3}}, a_k, a_{k+1}, b \} \not \in \Gamma \ \ \ \text{where} \ \ 1 \leq t_1, t_2, \cdots, t_{k-3} \leq k-1, \label{eq:17}\\
\text{or} \ \ & \{a_{l_1}, a_{l_2}, a_{l_3}, \cdots, a_{l_{k-2}}, a_{k+1}, b \} \not \in \Gamma \ \ \ \text{where} \ \ 1 \leq l_1, l_2, \cdots, l_{k-2} \leq k-1, \label{eq:18}\\
\text{or} \ \ & \{a_{s_1}, a_{s_2}, a_{s_3}, \cdots, a_{s_{k-2}}, a_{k}, b \} \not \in \Gamma \ \ \ \text{where} \ \ 1 \leq s_1, s_2, \cdots, s_{k-2} \leq k-1. \label{eq:19}\end{aligned}$$*
*Then, either $b$ is equivalent to $a_{k+1}$ or $b$ is equivalent to $a_{k}$.*
*Proof.* First, let us consider $k+2$ participants in $\mathcal{P}$, denoted as $a_1, a_2, \cdots, a_k, a_{k+1}$, and $b$, who satisfy conditions ([\[eq:15\]](#eq:15){reference-type="ref" reference="eq:15"}), ([\[eq:16\]](#eq:16){reference-type="ref" reference="eq:16"}), ([\[eq:166\]](#eq:166){reference-type="ref" reference="eq:166"}), and ([\[eq:17\]](#eq:17){reference-type="ref" reference="eq:17"}). Now, let us focus on $k+1$ participants $a_{\Delta_1}, a_{\Delta_2}, \cdots a_{\Delta_{k-2}}, a_k, a_{k+1}, b$, where $1\leq \Delta_1, \cdots, \Delta_{k-2} \leq k-1$. Using ([\[eq:15\]](#eq:15){reference-type="ref" reference="eq:15"}), we obtain $$\begin{aligned}
\label{eq:20}
\{a_{\Delta_1}, a_{\Delta_2}, \cdots a_{\Delta_{k-2}}, a_k, a_{k+1}\} \in \Gamma.
\end{aligned}$$
Moreover, by ([\[eq:17\]](#eq:17){reference-type="ref" reference="eq:17"}), we derive that $$\begin{aligned}
\label{eq:21}
\{ a_{\beta_1}, a_{\beta_2}, \cdots a_{\beta_{k-3}}, a_k, a_{k+1}, b \} \not \in \Gamma,
\end{aligned}$$
where, $a_{\beta_1}, a_{\beta_2}, \cdots a_{\beta_{k-3}} \in \{ a_{\Delta_1}, a_{\Delta_2}, \cdots a_{\Delta_{k-2}}\}.$\
By using the conditions $1 \not \in \Omega(k+1, \Gamma)$ and $a_k$ is not equivalent to $a_{k+1}$, we obtain that
$$\begin{aligned}
\label{eq:23}
& \{ a_{\Delta_1}, a_{\Delta_2}, \cdots, a_{\Delta_{k-2}}, a_{k+1}, b \} \not \in \Gamma
\ \ \ \ \ \text{or} \ \ & \{ a_{\Delta_1}, a_{\Delta_2}, \cdots, a_{\Delta_{k-2}}, a_{k}, b \} \not \in \Gamma,\end{aligned}$$ where $1\leq \Delta_1, \Delta_2, \cdots, \Delta_{k-2} \leq k-1$.\
\
**Case I: $\{ a_{\Delta_1}, a_{\Delta_2}, \cdots, a_{\Delta_{k-2}}, a_{k+1}, b \} \not \in \Gamma$ for all $1\leq \Delta_1, \Delta_2, \cdots, \Delta_{k-2} \leq k-1$**.
In this case, we aim to demonstrate the equivalence of $a_{k+1}$ and $b$. Given that $a_{k+1}$ and $b$ represent distinct participants, it is necessary to establish that:
1. $\{a_{k+1}, b\} \not \subset A$ for any $A \in \Gamma_0$,
2. if $A \subset \mathcal{P}\backslash \{ a_{k+1}, b \}$, then $A \cup \{a_{k+1}\}\in \Gamma$ if and only if $A \cup \{ b \} \in \Gamma$,
where $\Gamma_0$ is the collection of minimal qualified subsets.\
Based on our assumption of Case I, we observe that $$\label{eqn1}
\{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k+1}, b \} \not \in \Gamma,$$ where $1 \leq l_1, l_2, \cdots, l_{k-2}\leq k-1$.\
Since $a_k$ is not equivalent to $a_{k+1}$, we can also deduce that $$\label{eqn2}
\{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k}, b \}\in\Gamma,$$ where $1 \leq l_1, l_2, \cdots, l_{k-2}\leq k-1$.\
Using the property ([\[eq:20\]](#eq:20){reference-type="ref" reference="eq:20"}), we also obtain that $$\label{eqn3}
\{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k},a_{k+1} \} \in \Gamma,$$ where $1 \leq l_1, l_2, \cdots, l_{k-2}\leq k-1$.\
To prove the property $(1)$, we will first establish the following claims.
**Claim 1**. *$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-3}}, x, a_{k+1}, b \} \not \in \Gamma$ and $\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-3}}, x, a_{k}, b \} \in \Gamma$, where $1 \leq l_1, l_2,\cdots, l_{k-3} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.*
*Proof of Claim [Claim 1](#claim53){reference-type="ref" reference="claim53"}.* Assuming the contrary, let us consider $\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-3}}, x, a_{k+1}, b \} \in \Gamma$, where $1 \leq l_1, l_2, \cdots, l_{k-3} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.\
Next, let us select a participant from $\{a_{1},a_{2}, \cdots, a_{k-1}\}$ who is not part of $\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-3}}\}$ and denote this participant as $a_{l_{k-2}}$. We will now consider the following subsets of participants in $\mathcal{P}$: $B_1 = \{a_{l_1}\},\:B_2=\{a_{l_1}, b \}$, $\:B_3=\{a_{l_1},a_{l_2}, b \},\cdots, B_{k-2}=\{a_{l_1},a_{l_2},\cdots,a_{l_{k-3}}, b \},\:B_{k-1}=\{a_{l_1},a_{l_2}\cdots,a_{l_{k-2}}, b \},$ and $\:B_k=\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, b, x\}$. Note that $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset
\:B_k=\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, b, x\}$.\
Now, let us consider a subset $A=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-2}}, a_k , a_{k+1}\} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $A \not \in \Gamma$. We can then consider the subsets of $A$ as follows: $X_1=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-2}}, a_k, a_{k+1}\}$, $X_2=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-2}}, a_{k}\},\:X_3=
\{a_{l_{3}},a_{l_{4}}, \cdots, a_{l_{k-2}}, a_{k}\},\cdots,$ $X_{k-3}= \{a_{l_{k-3}},a_{l_{k-2}}, a_{k}\},$ $X_{k-2}=\{a_{l_{k-2}},a_{k}\}$, $\:X_{k-1}=\{a_{k}\}$, and $\:X_k=\{a_{k+1}\}$.\
Using [\[eqn3\]](#eqn3){reference-type="ref" reference="eqn3"}, we observe that $B_1 \cup X_1 = \{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k},a_{k+1} \} \in\Gamma$. Using [\[eqn2\]](#eqn2){reference-type="ref" reference="eqn2"}, we also derive that $\ B_2 \cup X_2 = \{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k}, b \} \in\Gamma,$ and so on up to $\ B_{k-1} \cup X_{k-1} = \{a_{l_1},a_{l_2},\cdots, a_{l_{k-2}}, a_{k}, b\} \in\Gamma$. Based on the assumption, we have $\{a_{l_1},a_{l_2},\cdots,a_{l_{k-3}}, a_{k+1}, b, x\} \in\Gamma$, which implies that $\ B_{k} \cup X_{k} = \{a_{l_1},a_{l_2},\cdots,a_{l_{k-2}}, a_{k+1}, b, x \} \in\Gamma$. Since the set of $k-1$ participants can not be in $\Gamma$, we derive that $B_0\cup X_1 \notin \Gamma, \ B_1 \cup X_2 \notin \Gamma,\cdots, B_{k-2} \cup X_{k-1}
\notin \Gamma$. Using [\[eqn1\]](#eqn1){reference-type="ref" reference="eqn1"}, we also derive that $B_{k-1} \cup X_{k} = \{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k+1}, b \} \not \in \Gamma$.\
If $\:B_k=\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-3}},a_{l_{k-2}}, b, x\} \not \in \Gamma$, then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A= \{a_{l_2},a_{l_3}, \cdots, a_{l_{k-2}}, a_k, a_{k+1} \}$. Hence, using the independent sequence method, we can conclude that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Therefore, we can conclude that $$\begin{aligned}
\label{eq:27}
\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-3}},a_{l_{k-2}}, b, x\} \in \Gamma,\end{aligned}$$ where $1 \leq l_1,l_2,l_3,\cdots, l_{k-3},l_{k-2} \leq k-1$.\
Let us consider $k+2$ distinct participants: $a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, a_k, a_{k+1}, x, b \in \mathcal{P}$, where $1 \leq l_1,l_2,\cdots, l_{k-2} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$. Next, we can apply Lemma [Lemma 8](#lemma1){reference-type="ref" reference="lemma1"} with $u_1=b, u_2=x$, and $v=a_k$. Then either
$$w(\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, a_{k+1}, b, x \}, \ \Gamma ) = k+1, {\text {\ where\ }} 1 \leq l_1,l_2,\cdots, l_{k-2} \leq k-1,$$ or $$\{ a_{l_1},a_{l_2}, \cdots, a_{l_{k-2}}, a_{k+1}, a_k \} \not \in \Gamma, {\text {\ for \ }} a_k\in \mathcal{P}\backslash \{a_{l_1},a_{l_2},\cdots, a_{l_{k-2}}, a_{k+1}, b \}.$$\
Since $\{a_{l_1}, a_{l_2}, \cdots, a_{l_{k-2}}, a_{k+1}, b \} \not \in \Gamma$, where $1 \leq l_1, l_2, \cdots, l_{k-2}\leq k-1$, we can conclude that $$w(\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, a_{k+1}, b, x \}, \ \Gamma ) \neq k+1, {\text {\ where\ }} 1 \leq l_1,l_2,l_3,\cdots, l_{k-3},l_{k-2} \leq k-1.$$
Let us consider $k+1$ participants: $a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, a_k, a_{k+1}, b \in \mathcal{P}$, where $1 \leq l_1,l_2,\cdots, l_{k-2} \leq k-1$. Since the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $1$, we must have $$\{ a_{l_1},a_{l_2}, \cdots, a_{l_{k-2}}, a_{k+1}, a_k \} \in \Gamma, {\text {\ for \ }} a_k\in \mathcal{P}\backslash \{a_{l_1},a_{l_2}, \cdots, a_{l_{k-2}}, a_{k+1}, b \},$$
which leads to a contradiction. Therefore, we can conclude that
$$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-3}}, x, a_{k+1}, b \} \not \in \Gamma,$$ where $1 \leq l_1, l_2,\cdots, l_{k-3} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}.$\
Since $a_k$ is not equivalent to $a_{k+1}$, we also conclude that $$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-3}}, x, a_{k}, b \} \in \Gamma,$$ where $1 \leq l_1, l_2,\cdots, l_{k-3} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.
This completes the proof of Claim [Claim 1](#claim53){reference-type="ref" reference="claim53"}. ◻
**Claim 2**. *$$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-4}}, x, a_{k}, a_{k+1}, b \} \not \in \Gamma,$$ where $1 \leq l_1, l_2,\cdots, l_{k-4} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}.$*
*Proof of Claim [Claim 2](#claim54){reference-type="ref" reference="claim54"}.* Let us assume otherwise, that is, $\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-4}}, x, a_k, a_{k+1}, b \} \in \Gamma$, where $1 \leq l_1, l_2, \cdots, l_{k-4} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.\
Next, we can consider two participants from $\{a_{1},a_{2}, \cdots, a_{k-1}\}$ who are not in $\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-4}}\}$ and denote them as $a_{l_{k-3}}$ and $a_{l_{k-2}}$ . We will now consider the following subsets of participants in $\mathcal{P}$: $B_1 = \{ b \},\:B_2=\{a_{l_1}, b \}$, $\:B_3=\{a_{l_1},a_{l_2}, b \},\cdots$, $B_{k-2}=\{a_{l_1},a_{l_2},\cdots,a_{l_{k-4}}, a_{k}, b \},$ $B_{k-1} =$ $\{a_{l_1},a_{l_2},\cdots,a_{l_{k-4}}, a_k, a_{k+1}, b \},$ and $\:B_k=\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-4}},a_{l_{k-3}}, a_k, a_{k+1}, b\}$. Note that $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset
\:B_k=\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-3}}, a_k, a_{k+1}, b \} \not \in \Gamma$. Next, we consider a subset $A=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-4}}, a_{l_{k-2}}, a_k , a_{k+1}, x\} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $A \not \in \Gamma$. We now consider the subsets of $A$ as follows: $X_1=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-4}},a_{l_{k-2}}, a_k, a_{k+1}, x\}$, $X_2=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-4}}, a_{l_{k-2}}, a_{k}, x\},$ $X_3=$ $\{a_{l_{3}},a_{l_{4}}, \cdots, a_{l_{k-4}}, a_{l_{k-2}}, a_{k}, x\},$ $\cdots$, $X_{k-2}=\{a_{l_{k-2}},x\}$, $\:X_{k-1}=\{x\}$, and $\:X_k=\{a_{l_{k-2}}\}$.\
Based on the assumption, we observe $B_1 \cup X_1 = \{a_{l_2},a_{l_3}, \cdots,a_{l_{k-4}}, a_{l_{k-2}}, x, a_{k}, a_{k+1}, b\}\in\Gamma$. Additionally, using Claim [Claim 1](#claim53){reference-type="ref" reference="claim53"}, we obtain that $$\ B_2 \cup X_2 =
\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-4}}, a_{l_{k-2}}, x, a_{k}, b\} \in\Gamma,$$
and so on up to $\ B_{k-2} \cup X_{k-2} =\{a_{l_1},a_{l_2},\cdots, a_{l_{k-4}},a_{l_{k-2}}, x, a_{k}, b\} \in\Gamma$. Based on the assumption, we have $\ B_{k-1} \cup X_{k-1} =
\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-4}}, x, a_k, a_{k+1}, b \} \in \Gamma$. Using [\[eqn3\]](#eqn3){reference-type="ref" reference="eqn3"}, we obtain that $\{a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, a_k, a_{k+1}\} \in\Gamma$, which implies $\ B_{k} \cup X_{k} = \{a_{l_1},a_{l_2}, \cdots,a_{l_{k-2}}, a_k, a_{k+1}, b \} \in\Gamma$.\
Since the set of $k-1$ participants can not be in $\Gamma$, we derive that $B_0\cup X_1
\notin \Gamma,$ $\ B_1 \cup X_2
\notin \Gamma$, $\cdots,$ $B_{k-2}\cup X_{k-1} \not \in \Gamma$. Based on the assumption, we also deduce that $$B_{k-1} \cup X_{k} = \{a_{l_1},a_{l_2},\cdots,a_{l_{k-4}}, a_{l_{k-2}}, a_k, a_{k+1}, b \}
\not \in \Gamma.$$ Then the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ is made independent by the set $A=\{a_{l_2},a_{l_3}, \cdots, a_{l_{k-4}}, a_{l_{k-2}}, a_k , a_{k+1}, x\} \subseteq \mathcal{P}$. Hence, using the independent sequence method, we can conclude that $\rho^*(\Gamma) \leq \frac{|A|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Therefore, we can conclude that $$\begin{aligned}
\label{eq:29}
\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-4}}, x, a_{k}, a_{k+1}, b \} \not \in \Gamma\end{aligned}$$ where $1 \leq l_1, l_2,\cdots, l_{k-4} \leq k-1$ and $x \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.\
This completes the proof of Claim [Claim 2](#claim54){reference-type="ref" reference="claim54"}.\
◻
**Claim 3**. *Suppose that the following three statements hold for $1 \leq t \leq k-2$.*
1. *$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-t-1}}, x_1,x_2,\cdots, x_{t-1}, a_{k+1}, b \} \not \in \Gamma$,*
2. *$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-t-1}}, x_1,x_2,\cdots, x_{t-1}, a_{k}, b \} \in \Gamma$,*
3. *$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-t-2}}, x_1,x_2,\cdots, x_{t-1}, a_{k}, a_{k+1}, b \} \not \in \Gamma$,*
*where $1 \leq l_1, l_2, \cdots, l_{k-t-1} \leq k-1$ and $x_1, x_2, \cdots, x_{t-1} \in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.\
Then for $1 \leq t \leq k-2$, we have $$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-t-2}}, x_1,x_2,\cdots, x_{t}, a_{k+1}, b \} \not \in \Gamma,$$ and $$\{a_{l_1},a_{l_2}, \cdots, a_{l_{k-t-2}}, x_1,x_2,\cdots, x_{t}, a_{k}, b \} \in \Gamma,$$ where $1 \leq l_1, l_2, \cdots, l_{k-t-2} \leq k-1$ and $x_1, x_2, \cdots, x_{t}$ $\in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.*
*Proof of Claim [Claim 3](#claim55){reference-type="ref" reference="claim55"}.* Let us consider $k+2$ different participants in $\mathcal{P}$, denoted as $a_{l_1}, \cdots, a_{l_{k-t-1}},$ $x_1,\cdots,x_t, a_k, a_{k+1}, b$, where $1 \leq l_1, \cdots, l_{k-t-1} \leq k-1$ and $x_1, \cdots, x_t$ $\in \mathcal{P}\backslash \{a_1, a_2, \cdots, a_{k}, a_{k+1}, b \}$.\
Using the properties $(i), (ii), (iii)$, now we apply Lemma [Lemma 9](#theorem1){reference-type="ref" reference="theorem1"} with $u_i=a_{l_{k-t-1}}, u_j = a_k, v= x_t$. Then either
$$w(\{a_{l_1}, \cdots, a_{l_{k-t-1}}, x_1,\cdots, x_{t-1}, a_k, a_{k+1}, b \}, \ \Gamma ) = k+1$$ or $$\{ a_{l_1}, \cdots, a_{l_{k-t-2}}, x_1,\cdots, x_{t-1},x_t, a_{k+1}, b \} \not \in \Gamma ,$$
where $1 \leq l_1,\cdots, l_{k-t-1} \leq k-1$ and $x_1, \cdots, x_t \in \mathcal{P}\backslash \{a_1, \cdots, a_{k}, a_{k+1}, b \}$.\
From $(i)$ and $(iii)$, we observe that $w(\{a_{l_1}, \cdots, a_{l_{k-t-1}}, x_1,\cdots, x_{t-1}, a_k, a_{k+1}, b \}, \ \Gamma ) \neq k+1$. Therefore, we can conclude that
$$\{ a_{l_1}, \cdots, a_{l_{k-t-2}}, x_1,\cdots, x_t, a_{k+1}, b \} \not \in \Gamma ,$$
where $1 \leq l_1, \cdots, l_{k-t-1} \leq k-1$ and $x_1, \cdots, x_t \in \mathcal{P}\backslash \{a_1, \cdots, a_{k}, a_{k+1}, b \}$.\
Since $a_k$ is not equivalent to $a_{k+1}$, for $1 \leq t \leq k-2$, we also conclude that $$\{a_{l_1}, \cdots, a_{l_{k-t-2}}, x_1,\cdots, x_t, a_{k}, b \} \in \Gamma,$$ where $1 \leq l_1, \cdots, l_{k-t-2} \leq k-1$ and $x_1, \cdots, x_{t} \in \mathcal{P}\backslash \{a_1, \cdots, a_{k}, a_{k+1}, b \}$.\
This completes the proof of Claim [Claim 3](#claim55){reference-type="ref" reference="claim55"}.\
◻
Using Claim [Claim 3](#claim55){reference-type="ref" reference="claim55"}, we can conclude that $$\{a_{k+1}, b\} \not \subset A {{\text \ \ \ {for \ any} \ \ }}A \in \Gamma_0,$$
where $\Gamma_0$ is the collection of minimal qualified subsets.\
Now we are prepared to demonstrate the property $(2)$ by using the property $(1)$. To accomplish this, we will begin by establishing the following claims.\
**Claim 4**. *Let $A$ be a subset in $\mathcal{P}\backslash \{ a_{k+1}, b \}$. If $A \cup \{a_{k+1}\}\in \Gamma$, then $A \cup \{ b \} \in \Gamma$.*
*Proof of Claim [Claim 4](#claim511){reference-type="ref" reference="claim511"}.* Let us consider $k+2$ distinct participants in $\mathcal{P}$, denoted as $a_{p_1},a_{p_2},\cdots, a_{p_{k-1}},$ $a_k, a_{k+1}, b$, where $a_{p_1}, \cdots, a_{p_{k-1}} \in \mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. First, let $A=\{a_{p_1},\cdots, a_{p_{k-2}}, a_k \}$ be a subset in $\mathcal{P}\backslash \{ a_{k+1}, b \}$. Suppose that $A \cup \{a_{k+1}\} = \{a_{p_1},\cdots, a_{p_{k-2}}, a_k, a_{k+1} \} \in \Gamma$. Using the property $(1)$, we clearly deduce that $A \cup \{b\} = \{a_{p_1},\cdots, a_{p_{k-2}}, a_k, b \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-2}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Next, let $A=\{a_{p_1},\cdots, a_{p_{k-1}} \}$ be a subset in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Suppose that $A \cup \{a_{k+1}\} = \{a_{p_1},\cdots, a_{p_{k-1}}, a_{k+1} \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-1}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Assuming the contrary, i.e., $A \cup \{b\} = \{a_{p_1},\cdots, a_{p_{k-1}}, b \} \not \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-1}}$ are in $\mathcal{P}\backslash \{a_k, a_{k+1}, b \}$, we can proceed to prove the desired result.\
Let us first define the following subsets of participants in $\mathcal{P}$: $B_1 = \{b\},\:B_2=\{ a_{p_2}, b \}$, $\:B_3=\{a_{p_2}, a_{p_3}, b \},\cdots, B_{k-2}=\{a_{p_2},\cdots,a_{p_{k-3}}, a_{p_{k-2}}, b \},\:B_{k-1}=\{a_{p_1},a_{p_2},\cdots,a_{p_{k-2}}, b \},$ and $\:B_k=\{a_{p_1},a_{p_2},\cdots,a_{p_{k-2}}, a_{k+1}, b\}$. Based on the property $(1)$, we observe that $\emptyset \neq B_1 \subset B_2 \subset \cdots \subset
\:B_k= \{a_{p_1}, \cdots,a_{p_{k-2}}, a_{k+1}, b \}
\not \in \Gamma$. Next, we consider a subset $S=\{a_{p_2}, \cdots, a_{p_{k-1}}, a_{k} \} \subseteq \mathcal{P}$ consisting of $k-1$ participants. Note that $S \not \in \Gamma$. We now define the following subsets of $S$: $X_1=\{a_{p_2}, a_{p_3}, \cdots, a_{p_{k-1}}, a_{k}
\},$ $\:X_2=\{a_{p_3}, \cdots, a_{p_{k-1}}, a_{k}\},\:X_3=
\{a_{p_{4}}, \cdots, a_{p_{k-1}}, a_{k}\}$ $,\cdots,X_{k-3}= \{a_{p_{k-2}},a_{p_{k-1}}, a_{k}\},$ $X_{k-2}=\{a_{p_{k-1}},a_{k}\}$, $\:X_{k-1}=\{a_{k}\}$, and $\:X_k=\{a_{p_{k-1}}\}$.\
Using the property $(1)$, we observe that $B_1 \cup X_1 = \{a_{p_2},a_{p_3}, \cdots,a_{p_{k-1}}, a_k, b \}\in\Gamma$, and so on up to $B_{k-2} \cup X_{k-2} = \{a_{p_2},a_{p_3}, \cdots,a_{p_{k-1}}, a_k, b\} \in \Gamma$. Additionally, we obtain that $B_{k-1} \cup X_{k-1} = \{a_{p_1},a_{p_2}, \cdots,a_{p_{k-2}}, a_k, b\} \in \Gamma$. Based on the assumption, we also have $\{a_{p_1},\cdots, a_{p_{k-1}}, a_{k+1} \}$ $\in \Gamma$, which implies that $B_{k}\cup X_{k} = \{a_{p_1}, \cdots,a_{p_{k-1}}, a_{k+1}, b \} \in\Gamma$. Since the set of $k-1$ participants can not be in $\Gamma$, we derive that $B_0\cup X_1 \not \in \Gamma$, $\cdots$, $B_{k-2}\cup X_{k-1} \not \in \Gamma$. Based on the assumption, we also have $B_{k-1}\cup X_k = \{a_{p_1},a_{p_2},\cdots, a_{p_{k-1}}, b \} \not \in \Gamma$.\
Therefore, we can apply the independent sequence method to the sequence $\phi \neq B_1 \subseteq B_2 \subseteq \cdots \subseteq B_k \notin \Gamma$ with the set $S= \{a_{p_2},a_{p_3}, \cdots, a_{p_{k-1}}, a_{k} \}$, to obtain $\rho^*(\Gamma) \leq \frac{|S|}{k}=\frac{k-1}{k}$, which leads to a contradiction. Thus, we can conclude that $A \cup \{b\} = \{a_{p_1},a_{p_2},\cdots, a_{p_{k-1}}, b \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-1}}$ is in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. This completes the proof of Claim [Claim 4](#claim511){reference-type="ref" reference="claim511"}. ◻
**Claim 5**. *Let $A$ be a subset in $\mathcal{P}\backslash \{ a_{k+1}, b \}$. If $A \cup \{b \}\in \Gamma$, then we have $A \cup \{ a_{k+1} \} \in \Gamma$.*
*Proof of Claim [Claim 5](#claim522){reference-type="ref" reference="claim522"}.* Let us consider $k+2$ distinct participants in $\mathcal{P}$, denoted as $a_{p_1},a_{p_2},\cdots, a_{p_{k-1}},$ $a_k, a_{k+1}, b$, where $a_{p_1}, \cdots, a_{p_{k-1}} \in \mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. First, let $A=\{a_{p_1},\cdots, a_{p_{k-2}}, a_k \}$ be a subset in $\mathcal{P}\backslash \{ a_{k+1}, b \}$. Suppose that $A \cup \{b\} = \{a_{p_1},\cdots, a_{p_{k-2}}, a_k, b \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-2}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Assuming the contrary, i.e., $A \cup \{a_{k+1}\} = \{a_{p_1},\cdots, a_{p_{k-2}}, a_k, a_{k+1} \} \not \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-2}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Using the property $(1)$, we obtain $\{a_{p_1},\cdots, a_{p_{k-2}}, a_{k+1}, b \}$ $\not \in \Gamma$ and $\{a_{s_1},\cdots, a_{s_{k-3}}, a_k, a_{k+1}, b \}$ $\not \in \Gamma$, where $s_1, s_2, \cdots s_{k-3} \in \{p_1, p_2, \cdots, p_{k-2}\}$ and $a_{p_1}, \cdots, a_{p_{k-2}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Then, this contradicts the fact that $1 \not \in \Omega(k+1, \Gamma)$. Therefore, we can conclude that $A \cup \{a_{k+1}\} = \{a_{p_1},\cdots, a_{p_{k-2}}, a_k, a_{k+1} \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-2}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$.\
Next, let $A=\{a_{p_1},a_{p_2},\cdots, a_{p_{k-1}} \}$ be a subset in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Suppose that $A \cup \{b\} = \{a_{p_1},\cdots, a_{p_{k-1}}, b \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-1}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Assuming the contrary, i.e., $A \cup \{a_{k+1}\} = \{a_{p_1},\cdots, a_{p_{k-1}}, a_{k+1} \} \not \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-1}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. Moreover, using the property $(1)$, we derive that$$\begin{aligned}
\label{eq:21}
\{ a_{\beta_1}, \cdots a_{\beta_{k-2}}, a_{k+1}, b \} \not \in \Gamma,
\end{aligned}$$
where, $a_{\beta_1}, \cdots a_{\beta_{k-2}} \in \{ a_{p_1}, a_{p_2}, \cdots a_{p_{k-1}}\}.$\
Now, considering the $k+1$ participants $a_{p_1},a_{p_2}, \cdots, a_{p_{k-1}}, a_{k+1}, b$, the above results lead to a contradiction, since the number of minimal qualified subsets contained in any set of $k+1$ participants is not equal to $1$. Thus, we can conclude that $A \cup \{a_{k+1}\} = \{a_{p_1},\cdots, a_{p_{k-1}}, a_{k+1} \} \in \Gamma$, where $a_{p_1}, \cdots, a_{p_{k-1}}$ are in $\mathcal{P}\backslash \{ a_k, a_{k+1}, b \}$. This completes the proof of Claim [Claim 5](#claim522){reference-type="ref" reference="claim522"}. ◻
Using Claim [Claim 4](#claim511){reference-type="ref" reference="claim511"} and [Claim 5](#claim522){reference-type="ref" reference="claim522"}, we can conclude that\
if $A \subset \mathcal{P}\backslash \{ a_{k+1}, b \}$, then $A \cup \{a_{k+1}\}\in \Gamma$ if and only if $A \cup \{ b \} \in \Gamma$,\
which establishes the property $(2)$.\
Using the properties $(1)$ and $(2)$, we conclude that $b$ is equivalent to $a_{k+1}$.\
**Case II: $\{ a_{\Delta_1}, a_{\Delta_2}, \cdots, a_{\Delta_{k-2}}, a_{k}, b \} \not \in \Gamma$ for all $1 \leq \Delta_1, \Delta_2, \cdots, \Delta_{k-2} \leq k-1$**.
In this case, we aim to demonstrate the equivalence of $a_{k}$ and $b$. Given that $a_{k}$ and $b$ represent distinct participants, it is necessary to establish that:
1. $\{a_{k}, b\} \not \subset A$ for any $A \in \Gamma_0$,
2. if $A \subset \mathcal{P}\backslash \{ a_{k}, b \}$, then $A \cup \{a_{k}\}\in \Gamma$ if and only if $A \cup \{ b \} \in \Gamma$,
where $\Gamma_0$ is the collection of minimal qualified subsets.\
The proof of Case II is omitted, since it is obtained from the proof of Case I by interchanging the roles of $a_{k}$ and $a_{k+1}$.\
◻
# Proof of Theorem [Theorem 1](#main:mainthm){reference-type="ref" reference="main:mainthm"} {#proof-of-theorem-mainmainthm}
Now, we are ready to establish Theorem [Theorem 1](#main:mainthm){reference-type="ref" reference="main:mainthm"}. In this section, we will prove Theorem [Theorem 1](#main:mainthm){reference-type="ref" reference="main:mainthm"} by utilizing Lemma [Lemma 10](#lemma2){reference-type="ref" reference="lemma2"}, [Lemma 11](#lemma3){reference-type="ref" reference="lemma3"}, and [Lemma 12](#lemma4){reference-type="ref" reference="lemma4"}.
*Proof of Theorem [Theorem 1](#main:mainthm){reference-type="ref" reference="main:mainthm"}.* (1) $\Rightarrow$ (2) : Since vector space access structures are ideal by Theorem [Theorem 6](#idealtheorem){reference-type="ref" reference="idealtheorem"}, the implication follows.\
(2) $\Rightarrow$ (3) : As the optimal information rate of an ideal access structure equals one, the implication follows.\
(3) $\Rightarrow$ (4) : Let $\Gamma_\sim$ be the reduced access structure of $\Gamma$ on a set of participants $\mathcal{P}$. Since $k+1 \in \Omega(k+1,\Gamma_\sim)$, there exist $k+1$ distinct participants $a_1, a_2, \cdots , a_k, a_{k+1} \in \mathcal{P}$, such that $w(\{a_1,a_2, \cdots, a_k, a_{k+1}\},\Gamma_\sim)=k+1$. Consequently, we can conclude that the induced access structure $\Gamma_\sim(\{a_1,a_2, \cdots,a_k, a_{k+1}\})$ is a $(k,k+1)$-threshold access structure.\
Let us consider $\mathcal{P}'\subset \mathcal{P}$, consisting of $m$ participants, where $m\geq k+1$ and $\{a_1,a_2,\dots,a_{k+1}\}\subset\mathcal{P}'$, such that the induced structure $\Gamma_\sim(\mathcal{P}')$ is the $(k,m)$-threshold access structure, while $\Gamma_\sim(\mathcal{P}'\cup \{b\})$ is not a $(k, m+1)$-threshold access structure for any $b\in\mathcal{P}\setminus\mathcal{P}'$. We now assert that $\mathcal{P}=\mathcal{P}'$. To demonstrate this, let us assume the contrary, that is, $\mathcal{P}\neq\mathcal{P}'$. Then, we claim that there must exist $a_1,a_2,\dots,a_{k+1}\in\mathcal{P}'$ and $b\in\mathcal{P}\setminus\mathcal{P}'$ such that
- (i) $\{a_1,a_2,\dots,a_{k-1},b\}\in\Gamma$
- (ii) $\{a_{t_1},a_{t_2},\dots,a_{t_{k-3}},a_k,a_{k+1},b\}\notin\Gamma$, where $1\leq t_1,t_2,\dots,t_{k-3}\leq k-1$,
or $\{a_{l_1},a_{l_2},\dots,a_{l_{k-2}},a_k,b\}\notin\Gamma$, where $1\leq l_1,l_2,\dots,l_{k-2}\leq k-1$,
or $\{a_{s_1},a_{s_2},\dots,a_{s_{k-2}},a_{k+1},b\}\notin\Gamma$, where $1\leq s_1,s_2,\dots,s_{k-2}\leq k-1$.\
To begin, let us establish property $(i)$. Given that $\mathcal{P}\neq\mathcal{P}'$, there exists $A\in {\Gamma_\sim}_0$ such that $A \cap \mathcal{P}' \neq \phi$, and $A\not \subset \mathcal{P'}$. Now, let us consider $k+3$ participants, denoted as $a_1,a_2,\dots,a_{k+1},b_1,b_2 \in \mathcal{P}$, where $a_1,a_2,\dots,a_{k+1}\in \mathcal{P}'$, and $b_1,b_2 \in \mathcal{P}\setminus\mathcal{P}'$. We have two cases to consider. First, if $|A\cap\mathcal{P}'|=k-1$, then we can conclude that $A=\{a_1,a_2,\dots,a_{k-1},b_i\}\in\Gamma$ for some $a_1,a_2,\dots,a_{k-1}\in\mathcal{P}'$ and $b_i\in\mathcal{P}\setminus\mathcal{P}'$, where $i=1,2$. Second, if $|A\cap\mathcal{P}'|\neq k-1$, then we deduce that $A=\{x_1,x_2,\dots,x_{k-2},b_1,b_2\}\in\Gamma$, where $x_1,x_2,\dots,x_{k-2}\in\mathcal{P}'$ and $b_1,b_2\in \mathcal{P}\setminus\mathcal{P}'$.\
Let us consider three distinct participants, denoted as $x_{k-1},x_k,x_{k+1}\in \mathcal{P}' \setminus \{x_1,x_2,\cdots,x_{k-2}\}$. Since $x_1,x_2,\cdots,x_{k-1},x_k, x_{k+1}\in\mathcal{P}'$ and $\Gamma_\sim(\mathcal{P}')$ is a $(k,m)$-threshold access structure, we have $\omega(\{x_1,x_2,\cdots,x_k,x_{k+1}\},\Gamma_\sim)=k+1$, which satisfies the condition of Lemma [Lemma 10](#lemma2){reference-type="ref" reference="lemma2"}. Therefore, by utilizing Lemma [Lemma 10](#lemma2){reference-type="ref" reference="lemma2"}, there exist $j_1,j_2,\dots,j_{k-1}$ such that either $$\{x_{j_1},x_{j_2},\dots,x_{j_{k-1}},b_1\}\in\Gamma$$ or $$\{x_{j_1},x_{j_2},\dots,x_{j_{k-1}},b_2\}\in\Gamma.$$ Now, property $(i)$ holds for the second case with $x_{j_1}=a_1, x_{j_2}=a_2, \cdots, x_{j_{k-1}}=a_{k-1}$, and $b_1=b$.\
Now, we establish property $(ii)$. Since $\Gamma_\sim(\mathcal{P}')$ is a $(k,m)$-threshold access structure and $\Gamma_\sim(\mathcal{P}'\cup\{b\})$ is not a $(k,m+1)$-threshold access structure, where $b \notin\mathcal{P}'$, there must exist two participants, denoted as $y_k,y_{k+1}\in\mathcal{P}'$, such that at least one of the following conditions holds:
$$\begin{aligned}
\label{tt1}\{a_{t_1},a_{t_2},\dots,a_{t_{k-3}},y_k,y_{k+1},b\}\notin\Gamma,\: \text{where}\:\: 1\leq t_1,t_2,\dots,t_{k-3}\leq k-1 \end{aligned}$$ $$\begin{aligned}
\label{tt2}\text{or}\:\:\:\{a_{l_1},a_{l_2},\dots,a_{l_{k-2}},y_{k},b\}\notin\Gamma,\: \text{where}\:\: 1\leq l_1,l_2,\dots,l_{k-2}\leq k-1\end{aligned}$$ $$\begin{aligned}
\label{tt3}\text{or}\:\:\:\{a_{s_1},a_{s_2},\dots,a_{s_{k-2}},y_{k+1},b\}\notin\Gamma,\: \text{where}\:\: 1\leq s_1,s_2,\dots,s_{k-2}\leq k-1,
\end{aligned}$$ where $b\in\mathcal{P}\setminus\mathcal{P}'$.\
From property $(i)$, we can conclude that there exist $a_1,a_2,\dots,a_{k+1}\in\mathcal{P}'$ and $b\in\mathcal{P}\setminus\mathcal{P}'$ such that $\{a_1,a_2,\dots,a_{k-1},b\}\in\Gamma$. We must now consider two cases.\
First, if $\{a_1,a_2,\dots,a_{k-1}\} \cap \{y_k,y_{k+1}\}=\phi$, then property $(ii)$ holds with $y_k=a_k$ and $y_{k+1}=a_{k+1}$. Second, without loss of generality, we assume that $a_1,a_2,\dots,a_{k-1},y_k$ are distinct, and $y_{k+1}=a_1$. Since $|\mathcal{P}'|=m\geq k+1$, there exists $z_{k+1}$ such that $z_{k+1}\in \mathcal{P}'\setminus \{a_1,a_2,\dots, a_{k-1},y_k\}$. Consequently, we can deduce that $\Gamma_\sim(\{a_1,a_2,\dots,a_{k-1},y_k,z_{k+1},b\})$ is not a $(k,k+2)$-threshold access structure by utilizing Equations (\ref{tt1}), (\ref{tt2}) and (\ref{tt3}). Now, we assert that $$\{a_{t_1},a_{t_2},\dots,a_{t_{k-3}},y_k,z_{k+1},b\}\notin\Gamma,\: \text{where}\:\: 1\leq t_1,t_2,\dots,t_{k-3}\leq k-1$$ $$\text{or}\:\:\:\{a_{l_1},a_{l_2},\dots,a_{l_{k-2}},y_k,b\}\notin\Gamma,\: \text{where}\:\: 1\leq l_1,l_2,\dots,l_{k-2}\leq k-1$$ $$\text{or}\:\:\:\{a_{s_1},a_{s_2},\dots,a_{s_{k-2}},z_{k+1},b\}\notin\Gamma,\: \text{where}\:\: 1\leq s_1,s_2,\dots,s_{k-2}\leq k-1.$$\
To accomplish this, let us assume otherwise, that is, suppose that $$\{a_{t_1},a_{t_2},\dots,a_{t_{k-3}},y_k,z_{k+1},b\}\in\Gamma,\: \text{where}\:\: 1\leq t_1,t_2,\dots,t_{k-3}\leq k-1$$ $$\text{and}\:\:\:\{a_{l_1},a_{l_2},\dots,a_{l_{k-2}},y_k,b\}\in\Gamma,\: \text{where}\:\: 1\leq l_1,l_2,\dots,l_{k-2}\leq k-1$$ $$\text{and}\:\:\:\{a_{s_1},a_{s_2},\dots,a_{s_{k-2}},z_{k+1},b\}\in\Gamma,\: \text{where}\:\: 1\leq s_1,s_2,\dots,s_{k-2}\leq k-1.$$\
Using property $(i)$, we can find $a_1,a_2,\dots,a_{k-1}$ such that $\{a_1,a_2,\dots,a_{k-1},b\}\in\Gamma$, as required by Lemma [Lemma 11](#lemma3){reference-type="ref" reference="lemma3"}. Consequently, $\Gamma_\sim(\{a_1,a_2,\dots,a_{k-1},y_k,z_{k+1},b\})$ is a $(k,k+2)$-threshold access structure, which creates a contradiction. Therefore, we can conclude that property $(ii)$ holds with $y_k=a_k$ and $z_{k+1}=a_{k+1}$. Since $a_1,a_2, \cdots, a_{k+1}\in \mathcal{P}'$ and $\Gamma_\sim(\mathcal{P}')$ is a $(k,m)$-threshold access structure, we have $\omega(\{a_1,a_2,\dots,a_{k+1}\},\Gamma_\sim)=k+1$. Utilizing Lemma [Lemma 12](#lemma4){reference-type="ref" reference="lemma4"}, we can conclude that either $b$ is equivalent to $a_k$ or $b$ is equivalent to $a_{k+1}$. This contradicts the fact that $\Gamma_\sim$ is a reduced access structure. Therefore, we can deduce that $\mathcal{P}=\mathcal{P}'$. Consequently, $\Gamma_\sim(\mathcal{P})$ is also a $(k,m)$-threshold access structure.\
(4) $\Rightarrow$ (1) : The results from Section $3$ establish that (4) implies (1). According to the definitions, a $(k,n)$-threshold access structure is an example of a vector space access structure. Therefore, if the reduced access structure of $\Gamma$ is a $(k,n)$-threshold access structure, then it is also a vector space access structure. Furthermore, the reduced access structure of $\Gamma$ is a vector space access structure if and only if $\Gamma$ is a vector space access structure.\
This completes the proof of Theorem [Theorem 1](#main:mainthm){reference-type="ref" reference="main:mainthm"}. ◻
[^1]: Extremal Combinatorics and Probability Group (ECOPRO), Institute for Basic Science (IBS), Daejeon, South Korea. Email: `[email protected]`. Y.K. was supported by the Institute for Basic Science (IBS-R029-C4).
[^2]: Department of Mathematics, Ewha Womans University, Seoul, South Korea. Email: `[email protected]`. J.K. was supported by the National Research Foundation of Korea(NRF) grant funded by the Ministry of Education (No. 2019R1A6A1A11051177)
[^3]: Department of Mathematics, Ewha Womans University, Seoul, South Korea. Email: `[email protected]`. H.L. was supported by the National Research Foundation of Korea(NRF) grant funded by the Ministry of Education (No. 2019R1A6A1A11051177) and partially by the Korea Government(MSIT) (No. NRF-2021R1A2C1094821).
---
abstract: |
It is increasingly realized that taking stochastic effects into account is important in order to study biological cells. However, the corresponding mathematical formulation, the chemical master equation (CME), suffers from the curse of dimensionality and thus solving it directly is not feasible for most realistic problems. In this paper we propose a dynamical low-rank algorithm for the CME that reduces the dimensionality of the problem by dividing the reaction network into partitions. Only reactions that cross partitions are subject to an approximation error (everything else is computed exactly). This approach, compared to the commonly used stochastic simulation algorithm (SSA, a Monte Carlo method), has the advantage that it is completely noise-free. This is particularly important if one is interested in resolving the tails of the probability distribution. We show that in some cases (e.g. for the lambda phage) the proposed method can drastically reduce memory consumption and run time and provide better accuracy than SSA.
author:
- "Lukas Einkemmer[^1] [^2]"
- Julian Mangott
- "Martina Prugger[^3]"
bibliography:
- bibliography.bib
title: A low-rank complexity reduction algorithm for the high-dimensional kinetic chemical master equation
---
# Introduction
Chemical kinetics is an indispensable tool in order to understand reaction networks that govern, for example, the chemical processes inside a biological cell. The fundamental mathematical description of such systems is the chemical master equation (CME). However, since each chemical species adds a dimension to the CME, solving it numerically is extremely expensive. More precisely, the memory required and the computational cost scales exponentially in the number of species. This is often referred to as the *curse of dimensionality*. As a consequence, reduced models that only take averaged population numbers into account are most commonly used [@Chen_2010]. This assumption results in a set of ordinary differential equations (ODE) that can then be solved at low computational cost. ODE models are also called *deterministic*, owing to the fact that they only give averaged values and thus neglect both the inherent stochasticity of the system as well as the discrete nature of population numbers. It is increasingly realized, however, that both are required in order to describe many important features in biological systems [@Tonn_2019; @Grima_2008; @Niepel_2009; @Paszek_2010]. Thus computing a solution of the full chemical master equation is required in order to understand such systems.
Directly solving the chemical master equation for realistic system sizes is either very costly or prohibitive in terms of both memory and computational cost (primarily due to the curse of dimensionality). The most commonly used approach currently is the stochastic simulation algorithm (SSA; see, e.g., [@Gillespie_1976; @Harris_2006]). The SSA is a Monte Carlo approach that simulates individual trajectories of the system. While one such sample, owing to the inherent randomness, does not tell us much useful information, repeating it many times allows us to collect a statistic of the most likely outcomes of the system. As a Monte Carlo method SSA does not suffer from the curse of dimensionality. However, it only converges slowly (as $1/\sqrt{N}$, where $N$ is the number of samples) and is very noisy if not enough samples are used. The latter is a phenomenon where even if the probability density function is perfectly smooth, the algorithm approximates it by a jagged line. This, in particular, is an issue for the tail of the distribution, where the noise can completely bury the physical behavior of the system.
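For concreteness, the following minimal Python sketch implements a single trajectory of the SSA (Gillespie's direct method); the function name, the rate constants and the small example network $A+B\rightleftarrows C$ are our own illustrative choices and are not taken from the references above.

```python
import numpy as np

def ssa_trajectory(x0, propensities, stoich, t_final, rng):
    """Sample one SSA trajectory (Gillespie's direct method).

    x0           -- initial population numbers, shape (N,)
    propensities -- list of M callables a_mu(x) returning a non-negative float
    stoich       -- array of stoichiometric vectors nu_mu, shape (M, N)
    """
    t, x = 0.0, np.array(x0, dtype=np.int64)
    while True:
        a = np.array([a_mu(x) for a_mu in propensities])
        a0 = a.sum()
        if a0 == 0.0:                      # no reaction can fire anymore
            break
        tau = rng.exponential(1.0 / a0)    # waiting time until the next reaction
        if t + tau > t_final:
            break
        t += tau
        mu = rng.choice(len(a), p=a / a0)  # index of the reaction that fires
        x = x + stoich[mu]
    return x

# Illustrative usage: collect 1000 samples for the reaction A + B <-> C.
rng = np.random.default_rng(0)
propensities = [lambda x: 0.1 * x[0] * x[1],   # forward reaction A + B -> C
                lambda x: 0.05 * x[2]]         # backward reaction C -> A + B
stoich = np.array([[-1, -1, 1], [1, 1, -1]])
samples = np.array([ssa_trajectory([20, 15, 0], propensities, stoich, 10.0, rng)
                    for _ in range(1000)])
```

Histogramming many such samples yields an estimate of $P(t,x)$, but, as discussed above, resolving the tails of the distribution requires a very large number of trajectories.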
In this paper we propose a method that directly reduces the dimensionality of the problem by using a low-rank approximation. In this approach, lower-dimensional basis functions (which require far less memory to store) are combined in order to obtain an approximation to the high-dimensional problem. For the degrees of freedom in the low-rank approximation (also called the *low-rank factors*) we derive evolution equations that are then used to advance the approximation forward in time. This dynamical low-rank approach dates back to early work in quantum mechanics (see, e.g., [@Meyer_1990; @Meyer_2009; @Lubich_2008]) and a number of important mathematical advances in constructing and analyzing such methods have been made more recently [@Lubich_2014; @Kusch_2023; @Ceruti_2022b; @Ceruti_2022a; @Einkemmer_2022; @Einkemmer_2022a; @Ceruti_2022; @Ding_2021; @Einkemmer_2021b]. In the quantum mechanics context usually single-orbital basis functions, which only depend on the coordinates of a single electron, are combined to obtain an approximation to the high-dimensional wave function. In [@Jahnke_2008] this idea has been directly applied to the chemical master equation. The problem with that approach, however, is that each of the low-rank factors is only allowed to depend on a single species. It is doubtful that in biological applications, given the intricate structures of complex biological networks [@Barabasi_2009], we can consider each species independently, while still obtaining an accurate approximation with a small rank.
What we propose in this paper is to divide a reaction network into two partitions. The low-rank factors are then allowed to depend on all species in their respective partition. Thus, all reactions inside of a partition are treated exactly. An approximation is only performed if a reaction crosses the partition boundary. This allows us to keep species that tightly couple to each other together without introducing any error, while still taking advantage of the computational and memory savings of the dynamical low-rank approach. We emphasize that computational savings are not only due to lower-dimensional low-rank factors, but also depend crucially on how small the rank (i.e. the number of such low-rank factors used) can be chosen while still maintaining accurate results. A similar approach has been used in [@Prugger_2023] for Boolean models in biology. In the present work we extend this to the full kinetic chemical master equation. Let us also note that similar approaches have been used for problems in plasma physics (see, e.g., [@Einkemmer_2018; @Cassini_2021; @Coughlin_2022; @Einkemmer_2020]) and radiation transport (see, e.g., [@Peng_2020; @Peng_2021; @Kusch_2021; @Einkemmer_2021; @Einkemmer_2021a; @Kusch_2022]). In this case the partitioning is also based on the underlying physical problem (either a decomposition into spatial and velocity scales, as in [@Peng_2020; @Kusch_2021; @Einkemmer_2020; @Einkemmer_2018], or in coordinates parallel and perpendicular to the magnetic field, as in [@Einkemmer_2023]). Our view is that in biological applications there are a multitude of different reaction networks and, in general, for each a different partitioning will give optimal results.
The remainder of the paper is structured as follows. In section [\[sec:cme\]](#sec:cme){reference-type="ref" reference="sec:cme"} we introduce the chemical master equation and set our notation. The dynamical low-rank approximation is then described in detail in section [\[sec:DLR-approximation\]](#sec:DLR-approximation){reference-type="ref" reference="sec:DLR-approximation"}. In section [\[subsec:implementation-general-remarks\]](#subsec:implementation-general-remarks){reference-type="ref" reference="subsec:implementation-general-remarks"} we discuss the steps that are necessary in order to obtain an efficient implementation. In section [\[sec:numerical-experiments\]](#sec:numerical-experiments){reference-type="ref" reference="sec:numerical-experiments"} we investigate the accuracy and efficiency of the method for a number of examples. In particular, we show that for a lambda phage model the proposed algorithm is more accurate than SSA and drastically reduces the required run time. Finally, we conclude in section [\[sec:Outlook\]](#sec:Outlook){reference-type="ref" reference="sec:Outlook"}.
# Chemical master equation[\[sec:cme\]]{#sec:cme label="sec:cme"}
A well-stirred chemical reaction system of $N$ species $S_{1},\ldots,S_{N}$ is interacting through $M$ reaction channels $R_{1},\ldots,R_{M}$. In the stochastic description the system is represented by a random variable $\mathcal{X}(t)=\left(\mathcal{X}_{1}(t),\ldots,\mathcal{X}_{N}(t)\right)$ on the discrete state space $\mathbb{N}_{0}^{N}$, where the entries $\mathcal{X}_{i}(t)$ denote the population number (i.e. number of molecules) of the $i$-th species at time $t$. The probability density
$$P(t,x)=\mathbb{P}(\mathcal{X}_{1}(t)=x_{1},\ldots,\mathcal{X}_{N}(t)=x_{N}),\quad x=(x_{1},\ldots,x_{N})\in\mathbb{N}_{0}^{N},$$ where $x$ are the population numbers, is the solution of the kinetic chemical master equation (CME)
$$\partial_{t}P(t,x)=\sum_{\mu=1}^{M}\left(a_{\mu}(x-\nu_{\mu})P(t,x-\nu_{\mu})-a_{\mu}(x)P(t,x)\right).\label{eq:CME}$$
By defining the linear operator $$\left(\mathcal{\mathscr{A}}P(t,\cdot)\right)(x)=\sum_{\mu=1}^{M}\left(a_{\mu}(x-\nu_{\mu})P(t,x-\nu_{\mu})-a_{\mu}(x)P(t,x)\right),\label{eq:linear_operator}$$ the CME can be concisely written as $$\partial_{t}P(t,\cdot)=\mathcal{\mathscr{A}}P(t,\cdot).$$
The stoichiometric vector $\nu_{\mu}=(\nu_{\mu,1},\ldots,\nu_{\mu,N})\in\mathbb{Z}^{N}$ describes the population change caused by reaction $\mu$. The propensity function $a_{\mu}(x):\mathbb{N}_{0}^{N}\to[0,\,\infty)$ for reaction channel $R_{\mu}$ can be interpreted as a transition probability $T_{\mu}(x+\nu_{\mu}\mid x).$ Note that the arguments $x-\nu_{\mu}$ in the first term of the right-hand side of equation ([\[eq:CME\]](#eq:CME){reference-type="ref" reference="eq:CME"}) have to be omitted when they become negative (there can be no physical reaction that reduces the population number to negative values). The term "kinetic" indicates that the population number can be of any natural number (including 0), in contrast to models that treat only boolean states (where a species can be either "activated" or "not activated"; see, e.g., [@Clarke_2020; @Zanudo_2018; @Yachie-Kinoshita_2018; @Prugger_2023]). For more details on the CME in general, we refer the reader to, e.g., [@Gillespie_1976; @Gillespie_1992; @Gardiner_2004].
We will illustrate these concepts with a simple example. The bimolecular reaction $A+B\rightleftarrows C$ has propensity functions $a_{f}(x)$ for the forward and $a_{b}(x)$ for the backward reaction, with $x=(x_{A},x_{B},x_{C})$. The propensity functions describe how likely the associated reaction occurs for the given population number. In the forward reaction a particle of species $A$ reacts with a particle of species $B$ and yields one of species $C$, so $\nu_{f}=(-1,-1,1)$. Similarly, the stoichiometric vector for the backward reaction is $\nu_{b}=(1,1,-1)$. The entire CME therefore reads as $$\partial_{t}P(t,x)=a_{f}(x-\nu_{f})P(t,x-\nu_{f})+a_{b}(x-\nu_{b})P(t,x-\nu_{b})-\left(a_{f}(x)+a_{b}(x)\right)P(t,x).$$
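To make this concrete, the following NumPy sketch (our own; the rate constants, the box size and all variable names are illustrative) evaluates the right-hand side of the CME for this bimolecular example on a finite box of states, where states outside the box are simply assigned zero probability.

```python
import numpy as np

# Truncated state space 0 <= x_A, x_B, x_C <= n-1; P is stored as a 3D array.
n = 21
kf, kb = 0.1, 0.05                       # illustrative rate constants
xA, xB, xC = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
a_f = kf * xA * xB                       # propensity of the forward reaction A + B -> C
a_b = kb * xC                            # propensity of the backward reaction C -> A + B
nu_f, nu_b = (-1, -1, 1), (1, 1, -1)     # stoichiometric vectors

def shifted(arr, nu):
    """Return the array evaluated at x - nu, with zeros outside the truncated domain."""
    out = np.zeros_like(arr)
    src = tuple(slice(max(0, -v), arr.shape[i] - max(0, v)) for i, v in enumerate(nu))
    dst = tuple(slice(max(0, v), arr.shape[i] - max(0, -v)) for i, v in enumerate(nu))
    out[dst] = arr[src]
    return out

def cme_rhs(P):
    """Right-hand side of the CME for A + B <-> C (gain terms minus loss terms)."""
    gain = shifted(a_f * P, nu_f) + shifted(a_b * P, nu_b)
    loss = (a_f + a_b) * P
    return gain - loss
```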
An important physical relation is the conservation of probability (also called *conservation of mass*), $$\sum_{x\in\mathbb{N}_{0}^{N}}P(t,x)=\sum_{x\in\mathbb{N}_{0}^{N}}P(0,x)=1,\label{eq:mass}$$ which can be directly derived from the CME. To see this, we integrate equation ([\[eq:CME\]](#eq:CME){reference-type="ref" reference="eq:CME"}) over time and perform a summation over $x$, yielding $$\sum_{x\in\mathbb{N}_{0}^{N}}\left(P(t,x)-P(0,x)\right)=\text{\ensuremath{\int_{0}^{t}}}\sum_{\mu=1}^{M}\sum_{x\in\mathbb{N}_{0}^{N}}\left(a_{\mu}(x-\nu_{\mu})P(\tilde{t},x-\nu_{\mu})-a_{\mu}(x)P(\tilde{t},x)\right)\,\mathrm{d}\tilde{t}.\label{eq:mass_int}$$ The right-hand-side of this equation vanishes since $$\sum_{\mu=1}^{M}\sum_{x\in\mathbb{N}_{0}^{N}}a_{\mu}(x-\nu_{\mu})P(t,x-\nu_{\mu})=\sum_{\mu=1}^{M}\sum_{x\in\mathbb{N}_{0}^{N}}a_{\mu}(x)P(t,x),\label{eq:substitution}$$ which implies the desired result.
# Dynamical low-rank approximation[\[sec:DLR-approximation\]]{#sec:DLR-approximation label="sec:DLR-approximation"}
Solving the full CME is not possible in most cases due to the curse of dimensionality. Even the memory requirement for storing the full probability density function with a finite number $\tilde{n}$ of possible population numbers for the $N$ species scales with $\mathcal{O}(\tilde{n}^{N})$. Therefore, we have to reduce the system size in order to solve the CME using currently available hardware. We will do this using a dynamical low-rank approximation. The main idea is to split the species (and thus the reaction network) into two partitions. Reaction pathways lying within a partition are treated exactly, while reaction pathways that cross the two partitions are taken into account in an approximate way.
More specifically, we separate the reaction network into two partitions, such that there are $m_{1}<N$ species lying in partition 1. We write the population numbers as $x=(x_{(1)},x_{(2)}),$ such that $x_{(1)}=(x_{1},\ldots,x_{m_{1}})$ and $x_{(2)}=(x_{m_{1}+1},\ldots,x_{N})$ are the population numbers in partition 1 and 2, respectively. In the following, we will denote all tuples belonging to the first or second partition by parenthesized indices, i.e. by $(1)$ or $(2)$. Then the CME reads as $$\begin{aligned}
\partial_{t}P(t,x_{(1)},x_{(2)}) & =\sum_{\mu=1}^{M}a_{\mu}(x_{(1)}-\nu_{\mu,(1)},x_{(2)}-\nu_{\mu,(2)})P(t,x_{(1)}-\nu_{\mu,(1)},x_{(2)}-\nu_{\mu,(2)})\label{eq:CME_2p}\\
& \qquad-\sum_{\mu=1}^{M}a_{\mu}(x_{(1)},x_{(2)})P(t,x_{(1)},x_{(2)}).\nonumber \end{aligned}$$ The dynamical low-rank (DLR) approximation of $P$ is given by $$P(t,x_{(1)},x_{(2)})\approx\sum_{i,j=1}^{r}X_{i}^{1}(t,x_{(1)})S_{ij}(t)X_{j}^{2}(t,x_{(2)}),\label{eq:dlra}$$ where $S_{ij}\in\mathbb{R}$ is the coefficient matrix and $r$ is called the *rank of the representation*. The dependency of $P$ on $x_{(1)}$ and $x_{(2)}$ is approximated by the basis functions $\{X_{i}^{1}:\,i=1,\ldots,r\}$ and $\{X_{j}^{2}:\,j=1,\ldots,r\}$. These functions depend on time $t$ but only on the population numbers in partition 1, $x_{(1)}\in\mathbb{N}_{0}^{m_{1}}$ or on the population numbers in partition 2, $x_{(2)}\in\mathbb{N}_{0}^{m_{2}}$ (with $m_{2}=N-m_{1}$). The crucial benefit of this approach is that the memory requirements for storing the low-rank factors $X_{i}^{1}(t,x_{(1)})$, $S_{ij}(t)$ and $X_{j}^{2}(t,x_{(2)})$ scales with $\mathcal{O}((\tilde{n}^{m_{1}}+\tilde{n}^{m_{2}})\cdot r+r^{2})$. As $r$ is usually small, the memory requirements are reduced drastically compared to the full probability density, which would require $\mathcal{O}(\tilde{n}^{m_{1}+m_{2}})$. In the following we will add additional constraints so as to obtain uniqueness of the representation given by equation ([\[eq:dlra\]](#eq:dlra){reference-type="ref" reference="eq:dlra"}) and derive an algorithm for computing $X_{i}^{1}(t,x_{(1)})$, $S_{ij}(t)$ and $X_{j}^{2}(t,x_{(2)})$.
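The structure of this approximation is perhaps best seen by matricizing $P$ with respect to the two partitions. The following NumPy sketch (with arbitrary illustrative sizes; a generic random matrix merely stands in for $P$) constructs a best rank-$r$ approximation of such a matricization by a truncated singular value decomposition and compares the storage requirements; the dynamical low-rank integrator derived below evolves the factors directly in time and never forms the full matrix.

```python
import numpy as np

n1, n2, r = 2500, 2500, 5                # illustrative partition sizes and rank

# Stand-in for the matricized probability distribution P(x_(1), x_(2)).
rng = np.random.default_rng(1)
P = rng.random((n1, n2))
P /= P.sum()

# Truncated SVD: the factors play the roles of X^1, S and X^2 in equation (eq:dlra).
U, s, Vt = np.linalg.svd(P, full_matrices=False)
X1, S, X2 = U[:, :r], np.diag(s[:r]), Vt[:r, :].T
P_r = X1 @ S @ X2.T

print("storage full:    ", P.size)                       # n1 * n2 entries
print("storage low-rank:", X1.size + S.size + X2.size)   # (n1 + n2) * r + r^2 entries
print("approximation error:", np.linalg.norm(P - P_r))
```

For a random stand-in the rank-$5$ error is of course large; the accuracy attainable for actual reaction networks is investigated in section [\[sec:numerical-experiments\]](#sec:numerical-experiments){reference-type="ref" reference="sec:numerical-experiments"}.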
Let us assume that $S$ is invertible and $X_{i}^{1}$ and $X_{i}^{2}$ obey the orthogonality and gauge conditions
$$\begin{aligned}
\langle X_{i}^{1},X_{j}^{1}\rangle_{1}=\delta_{ij} & \quad\textrm{and}\quad\langle X_{i}^{2},X_{j}^{2}\rangle_{2}=\delta_{ij},\qquad\textrm{(orthogonality)},\label{eq:ortho}\\
\langle X_{i}^{1},\partial_{t}X_{j}^{1}\rangle_{1}=0 & \quad\textrm{and}\quad\langle X_{i}^{2},\partial_{t}X_{j}^{2}\rangle_{2}=0,\qquad\textrm{(gauge condition)},\label{eq:gauge}\end{aligned}$$ where $\delta_{ij}$ denotes the Kronecker delta and $\langle\cdot,\cdot\rangle_{k}$ the inner product on $\ell^{2}(\mathbb{N}_{0}^{m_{k}})$ $(k=1,2)$. Then the approximation $P\in\ell^{2}(\mathbb{N}_{0}^{N})$ is unique (see, e.g., [@Koch_2007; @Einkemmer_2018]) and lies for any time $t$ in the low-rank manifold $$\begin{aligned}
\mathcal{M}= & \bigg\{ P\in\ell^{2}(\mathbb{N}_{0}^{N}):\,P(x_{(1)},x_{(2)})=\sum_{i,j=1}^{r}X_{i}^{1}(x_{(1)})S_{ij}X_{j}^{2}(x_{(2)}),\\
& \quad\textrm{with invertible}\:S=S_{ij}\in\mathbb{R}^{r\times r},X_{i}^{k}\in\ell^{2}(\mathbb{N}_{0}^{m_{k}})\:\textrm{and}\:\langle X_{i}^{k},X_{j}^{k}\rangle_{k}=\delta_{ij}\;(k=1,2)\bigg\}\end{aligned}$$ with tangent space $$\begin{aligned}
\mathcal{T}_{P}\mathcal{M}= & \bigg\{\dot{P}\in\ell^{2}(\mathbb{N}_{0}^{N}):\,\dot{P}(x_{(1)},x_{(2)})=\sum_{i,j=1}^{r}\left(\dot{X}_{i}^{1}(x_{(1)})S_{ij}X_{j}^{2}(x_{(2)})+X_{i}^{1}(x_{(1)})\dot{S}_{ij}X_{j}^{2}(x_{(2)})+X_{i}^{1}(x_{(1)})S_{ij}\dot{X}_{j}^{2}(x_{(2)})\right),\\
& \quad\textrm{with}\:\dot{S}\in\mathbb{R}^{r\times r},\dot{X}_{i}^{k}\in\ell^{2}(\mathbb{N}_{0}^{m_{k}})\:\textrm{and}\:\langle X_{i}^{k},\dot{X}_{j}^{k}\rangle_{k}=0\;(k=1,2)\bigg\},\end{aligned}$$ [where dotted quantities denote the formal derivative with respect to time.]{style="color: black"} Using the orthogonality ([\[eq:ortho\]](#eq:ortho){reference-type="ref" reference="eq:ortho"}), the gauge conditions ([\[eq:gauge\]](#eq:gauge){reference-type="ref" reference="eq:gauge"}) and the linear operator defined in equation ([\[eq:linear_operator\]](#eq:linear_operator){reference-type="ref" reference="eq:linear_operator"}), we then obtain the following relations $$\begin{aligned}\partial_{t}S_{ij} & =\left\langle X_{i}^{1}X_{j}^{2},\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2}\right\rangle _{1,2},\\
\sum_{j=1}^{r}S_{ij}\partial_{t}X_{j}^{2} & =\left\langle X_{i}^{1},\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2}\right\rangle _{1}-\sum_{j=1}^{r}\partial_{t}S_{ij}X_{j}^{2},\\
\sum_{i=1}^{r}S_{ij}\partial_{t}X_{i}^{1} & =\left\langle X_{j}^{2},\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2}\right\rangle _{2}-\sum_{i=1}^{r}X_{i}^{1}\partial_{t}S_{ij}.
\end{aligned}
\label{eq:projections}$$
In principle we can solve this set of equations and thus we can determine the time evolution of the low-rank factors. However, if, e.g. a classic Runge--Kutta method is applied to equation ([\[eq:projections\]](#eq:projections){reference-type="ref" reference="eq:projections"}), we need to invert $S$. If $S$ has small singular values, inverting it is numerically very ill-conditioned. If, on the other hand, $S$ has only large singular values, then the approximation is very inaccurate (this corresponds to the case where the rank $r$ has been chosen too small to obtain an accurate approximation). This has been realized early in the development of such schemes, with regularization being a somewhat unsatisfactory remedy (see, e.g., [@Lubich_2008; @Meyer_2009]). In the seminal paper [@Lubich_2014] a projector splitting scheme was introduced that avoids the inversion of $S$ and thus results in a method that is robust with respect to the presence of small singular values. Later the basis updating Galerkin (BUG, also called the unconventional integrator) approach was introduced in [@Ceruti_2022] and improved in [@Ceruti_2022a]. Any of these robust integrators would be suitable for the task at hand. However, since [@Prugger_2023] (for the Boolean case) and [@Einkemmer_2023] (for a kinetic problem from plasma physics) seem to indicate that for reversible problems the projector splitting integrator is more accurate, consumes less memory and incurs less computational cost, we will mostly focus on this approach here. Let us, however, duly note that the integrators are very similar in the sense that the building blocks we derive below can also be used easily to implement any of the variants of the BUG integrator.
We can write equation ([\[eq:projections\]](#eq:projections){reference-type="ref" reference="eq:projections"}) as $$\partial_{t}P=\mathscr{P}(P)\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2},\label{eq:projector-equation}$$ where $P$ is given by the low-rank approximation in equation ([\[eq:dlra\]](#eq:dlra){reference-type="ref" reference="eq:dlra"}) and $\mathscr{P}(P)$ is the projector onto the tangent space $\mathcal{T}_{P}\mathcal{M}$ $$\mathscr{P}(P)g=\sum_{j=1}^{r}\langle X_{j}^{2},g\rangle_{2}X_{j}^{2}-\sum_{i,j=1}^{r}X_{i}^{1}\langle X_{i}^{1}X_{j}^{2},g\rangle_{1,2}X_{j}^{2}+\sum_{i=1}^{r}X_{i}^{1}\langle X_{i}^{1},g\rangle_{1},$$
for more details see, e.g., [@Lubich_2014; @Einkemmer_2018]. The idea of the projector splitting integrator is to treat each term in the projector separately. That is, we split equation ([\[eq:projector-equation\]](#eq:projector-equation){reference-type="ref" reference="eq:projector-equation"}) into the following three parts $$\begin{aligned}
\partial_{t}P & =\sum_{j=1}^{r}\left\langle X_{j}^{2},\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2}\right\rangle _{2}X_{j}^{2},\label{eq:LT_1}\\
\partial_{t}P & =-\sum_{i,j=1}^{r}X_{i}^{1}\left\langle X_{i}^{1}X_{j}^{2},\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2}\right\rangle _{1,2}X_{j}^{2},\label{eq:LT_2}\\
\partial_{t}P & =\sum_{i=1}^{r}X_{i}^{1}\left\langle X_{i}^{1},\mathcal{\mathscr{A}}\sum_{i,j=1}^{r}X_{i}^{1}S_{ij}X_{j}^{2}\right\rangle _{1}.\label{eq:LT_3}\end{aligned}$$
We will now explain the algorithm for computing $X_{i}^{1}(t,x_{(1)})$, $S_{ij}(t)$ and $X_{j}^{2}(t,x_{(2)})$ by using the first-order Lie-Trotter splitting. The initial value for the algorithm is given by $$P(0,x_{(1)},x_{(2)})=\sum_{i,j=1}^{r}X_{0,i}^{1}(x_{(1)})S_{0,ij}X_{0,j}^{2}(x_{(2)}).$$
In the first step of the algorithm, we solve equation ([\[eq:LT_1\]](#eq:LT_1){reference-type="ref" reference="eq:LT_1"}). We write $$P(t,x_{(1)},x_{(2)})=\sum_{j=1}^{r}K_{j}(t,x_{(1)})X_{j}^{2}(t,x_{(2)}),\quad\textrm{with}\quad K_{j}(t,x_{(1)})=\sum_{i=1}^{r}X_{i}^{1}(t,x_{(1)})S_{ij}(t),$$ this step is therefore commonly called the $K$ *step*. Inserting this expression into equation ([\[eq:LT_1\]](#eq:LT_1){reference-type="ref" reference="eq:LT_1"}) yields an equation whose solution is given by the *time-independent* functions $X_{j}^{2}(t,x_{(2)})=X_{j}^{2}(0,x_{(2)})=X_{0,j}^{2}(x_{(2)})$ (see, e.g., [@Einkemmer_2018; @Lubich_2014]). After applying an inner product $\langle X_{i}^{2}(x_{(2)}),\cdot\rangle_{2}$ and using the orthogonality condition ([\[eq:ortho\]](#eq:ortho){reference-type="ref" reference="eq:ortho"}), we further obtain $$\partial_{t}K_{i}(t,x_{(1)})=\sum_{\mu=1}^{M}\sum_{j=1}^{r}\left(c_{ij}^{1,\mu}(x_{(1)})K_{j}(t,x_{(1)}-\nu_{\mu,(1)})-d_{ij}^{1,\mu}(x_{(1)})K_{j}(t,x_{(1)})\right),\label{eq:K}$$ with the *time-independent* coefficients $$\begin{aligned}c_{ij}^{1,\mu}(x_{(1)}) & =\langle X_{0,i}^{2}(x_{(2)}),a_{\mu}(x_{(1)}-\nu_{\mu,(1)},x_{(2)}-\nu_{\mu,(2)})X_{0,j}^{2}(x_{(2)}-\nu_{\mu,(2)})\rangle_{2},\\
d_{ij}^{1,\mu}(x_{(1)}) & =\langle X_{0,i}^{2}(x_{(2)}),a_{\mu}(x_{(1)},x_{(2)})X_{0,j}^{2}(x_{(2)})\rangle_{2}.
\end{aligned}
\label{eq:K_coeff}$$ The coefficients can be simplified when the propensity function $a_{\mu}(x_{(1)},x_{(2)})$ factorizes in its arguments or depends only on a subset of the population numbers, as we will investigate later. We now integrate equation ([\[eq:K\]](#eq:K){reference-type="ref" reference="eq:K"}) with the initial value $$K_{j}(0,x_{(1)})=\sum_{i=1}^{r}X_{0,i}^{1}(x_{(1)})S_{0,ij}$$ until time $\tau$ to obtain $K_{1,j}(x_{(1)})=K_{j}(\tau,x_{(1)})$. Then, we perform a QR decomposition $$K_{1,j}(x_{(1)})=\sum_{i=1}^{r}X_{1,i}^{1}(x_{(1)})\hat{S}_{ij},$$ which gives orthonormal functions $X_{1,i}^{1}$ (remember that on the low-rank manifold the orthogonality condition ([\[eq:ortho\]](#eq:ortho){reference-type="ref" reference="eq:ortho"}) has to be fulfilled) and the matrix $\hat{S}_{ij}$.
In the second step of the algorithm we proceed in a similar way for equation ([\[eq:LT_2\]](#eq:LT_2){reference-type="ref" reference="eq:LT_2"}) and notice that the solution is given by time-independent functions $X_{i}^{1}(t,x_{(1)})=X_{i}^{1}(\tau,x_{(1)})=X_{1,i}^{1}(x_{(1)})$ and $X_{j}^{2}(t,x_{(2)})=X_{j}^{2}(0,x_{(2)})=X_{0,j}^{2}(x_{(2)})$. After a similar calculation as for the $K$ step, we obtain the central equation for the $S$ *step* $$\partial_{t}S_{ij}(t)=-\sum_{k,l=1}^{r}S_{kl}(t)\left(e_{ijkl}-f_{ijkl}\right),\label{eq:S}$$ with the time-independent coefficients $$\begin{aligned}e_{ijkl} & =\sum_{\mu=1}^{M}\langle X_{1,i}^{1}(x_{(1)})X_{0,j}^{2}(x_{(2)}),a_{\mu}(x_{(1)}-\nu_{\mu,(1)},x_{(2)}-\nu_{\mu,(2)})X_{1,k}^{1}(x_{(1)}-\nu_{\mu,(1)})X_{0,l}^{2}(x_{(2)}-\nu_{\mu,(2)})\rangle_{1,2},\\
f_{ijkl} & =\sum_{\mu=1}^{M}\langle X_{1,i}^{1}(x_{(1)})X_{0,j}^{2}(x_{(2)}),a_{\mu}(x_{(1)},x_{(2)})X_{1,k}^{1}(x_{(1)})X_{0,l}^{2}(x_{(2)})\rangle_{1,2}.
\end{aligned}
\label{eq:S_coeff}$$ Note the minus sign in front of the right-hand side of equation ([\[eq:S\]](#eq:S){reference-type="ref" reference="eq:S"}), which amounts to an integration backwards in time. Integrating equation ([\[eq:S\]](#eq:S){reference-type="ref" reference="eq:S"}) with the initial value $S_{ij}(0)=\hat{S}_{ij}$ until time $\tau$ yields $\tilde{S}_{ij}=S_{ij}(\tau)$.
In the third and last step, we set $$P(t,x_{(1)},x_{(2)})=\sum_{i=1}^{r}X_{i}^{1}(t,x_{(1)})L_{i}(t,x_{(2)}),\quad\textrm{with}\quad L_{i}(t,x_{(2)})=\sum_{j=1}^{r}S_{ij}(t)X_{j}^{2}(t,x_{(2)})$$ and use this representation for equation ([\[eq:LT_3\]](#eq:LT_3){reference-type="ref" reference="eq:LT_3"}). Now $X_{i}^{1}$ remains constant and a similar calculation as for the two previous steps yields the equation for the $L$ *step* $$\partial_{t}L_{i}(t,x_{(2)})=\sum_{\mu=1}^{M}\sum_{j=1}^{r}\left(c_{ij}^{2,\mu}(x_{(2)})L_{j}(t,x_{(2)}-\nu_{\mu,(2)})-d_{ij}^{2,\mu}(x_{(2)})L_{j}(t,x_{(2)})\right)\label{eq:L}$$ with the time-independent coefficients $$\begin{aligned}c_{ij}^{2,\mu}(x_{(2)}) & =\langle X_{1,i}^{1}(x_{(1)}),a_{\mu}(x_{(1)}-\nu_{\mu,(1)},x_{(2)}-\nu_{\mu,(2)})X_{1,j}^{1}(x_{(1)}-\nu_{\mu,(1)})\rangle_{1},\\
d_{ij}^{2,\mu}(x_{(2)}) & =\langle X_{1,i}^{1}(x_{(1)}),a_{\mu}(x_{(1)},x_{(2)})X_{1,j}^{1}(x_{(1)})\rangle_{1}.
\end{aligned}
\label{eq:L_coeff}$$ We integrate equation ([\[eq:L\]](#eq:L){reference-type="ref" reference="eq:L"}) with the initial value $$L_{i}(0,x_{(2)})=\sum_{j=1}^{r}\tilde{S}_{ij}X_{0,j}^{2}(x_{(2)})$$ until time $\tau$ to obtain $L_{1,j}(x_{(2)})=L_{j}(\tau,x_{(2)})$. Performing a QR decomposition $$L_{1,i}(x_{(2)})=\sum_{j=1}^{r}S_{1,ij}X_{1,j}^{2}(x_{(2)})$$ yields the orthonormal functions $X_{1,j}^{2}$ and the matrix $S_{1,ij}$, which completes the first-order Lie-Trotter projector splitting algorithm. The approximation to the solution at time $\tau$ is then given by $$P(\tau,x)\approx\sum_{i,j=1}^{r}X_{1,i}^{1}(x_{(1)})S_{1,ij}X_{1,j}^{2}(x_{(2)}).$$
We note that this approach can be easily extended to, e.g., the second-order Strang splitting (see, e.g., [@Einkemmer_2018; @Einkemmer_2023]).
# Algorithm and implementation[\[subsec:implementation-general-remarks\]]{#subsec:implementation-general-remarks label="subsec:implementation-general-remarks"}
The CME can be regarded as an infinite system of ordinary differential equations (ODEs) or as a discrete partial differential equation (PDE) with spatial differences instead of derivatives [@Jahnke_2008]. In order to turn the CME into a finite problem, we truncate the state space to a finite domain which allows a numerical solution and still captures enough of the information of the full (infinite) system. If we define the truncated state space as $\Omega^{\zeta,\eta}=\{x\text{\ensuremath{\in\mathbb{N}_{0}^{N}}}:\zeta_{i}\le x_{i}\le\eta_{i}\ \mathrm{for}\ i=1,\dots,N\}$, where $\zeta_{i}\in\mathbb{N}_{0}$ and $\eta_{i}\in\mathbb{N}_{0}$ and $\zeta_{i}<\eta_{i}$ ($i=1,\ldots,N)$, then the truncation error can be estimated as follows:
We denote by $\mathscr{A}^{\zeta,\eta}$ the restriction of the linear operator $\mathscr{A}$ (defined in equation [\[eq:linear_operator\]](#eq:linear_operator){reference-type="ref" reference="eq:linear_operator"}) to $\Omega^{\zeta,\eta}$ and by $P^{\zeta,\eta}(t)$ the solution of the restricted CME $\partial_{t}P^{\zeta,\eta}(t)=\mathscr{A}^{\zeta,\eta}P^{\zeta,\eta}(t)$ with initial condition $P^{\zeta,\eta}(0)$, which is the initial probability distribution restricted to the truncated state space. Defining the total mass $m^{\zeta,\eta}=\sum_{x\in\Omega^{\zeta,\eta}}P^{\zeta,\eta}(t,x)$ and assuming that $m^{\zeta,\eta}\ge1-\epsilon$, [@Munsky_2006] showed that $$P(t,x)-\epsilon\leq P^{\zeta,\eta}(t,x)\le P(t,x)\quad\text{for}\quad x\in\Omega^{\zeta,\eta}.$$ These inequalities give an estimation of how close the truncated state space solution approximates the true solution. The main issue of the truncation is how to determine suitable $\zeta$ and $\eta$ for given final time $t$ and tolerance $\epsilon>0$, such that $m^{\zeta,\eta}\ge1-\epsilon$. Solving the reaction network deterministically with ODEs (which is cheap) or biological insight into the system might give a good idea on how to choose $\zeta$ and $\eta$ a priori. Alternatively, one can implement a scheme with an adaptive truncated state space where the error in mass is used as an indicator.
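In an implementation, this mass criterion can be monitored with a small helper such as the following sketch (ours), applied to the (possibly reconstructed) solution on the truncated state space.

```python
import numpy as np

def truncation_ok(P_trunc, eps):
    """Check the criterion m^{zeta,eta} >= 1 - eps on the truncated state space.

    Returns a flag and the amount of probability mass lost to the truncation.
    """
    mass = float(np.sum(P_trunc))
    return mass >= 1.0 - eps, 1.0 - mass
```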
In our numerical implementation we work with truncated state spaces $\Omega_{1}^{\zeta,\eta}=\{x_{(1)}\text{\ensuremath{\in\mathbb{N}_{0}^{m_{1}}}}:\zeta_{i}\le x_{i}\le\eta_{i}\ \mathrm{for}\ i=1,\dots,m_{1}\}$ and $\Omega_{2}^{\zeta,\eta}=\{x_{(2)}\text{\ensuremath{\in\mathbb{N}_{0}^{m_{2}}}}:\zeta_{i}\le x_{i}\le\eta_{i}\ \mathrm{for}\ i=m_{1}+1,\dots,N\}$ for the two partitions of the reaction network, where $\zeta_{i}$ and $\eta_{i}$ are fixed. We denote the number of degrees of freedom by $n_{1}=(\eta_{1}-\zeta_{1}+1)\cdot\ldots\cdot(\eta_{m_{1}}-\zeta_{m_{1}}+1)$ and $n_{2}=(\eta_{m_{1}+1}-\zeta_{m_{1}+1}+1)\cdot\ldots\cdot(\eta_{N}-\zeta_{N}+1)$ for partition 1 and 2, respectively. The total number of degrees of freedom is $n=n_{1}n_{2}$.
In the implementation we will store quantities depending on the population number (such as $X_{i}^{1}(x_{(1)})$ and $X_{j}^{2}(x_{(2)})$) as matrices. As $x_{(1)}$ and $x_{(2)}$ are vectors of size $m_{1}$ and $m_{2}$, respectively, we have to linearize the population number dependency in order to store for example $X_{i}^{1}(x_{(1)})$ and $X_{j}^{2}(x_{(2)})$ as matrices. We achieve this by introducing bijective maps $\alpha:\,\Omega_{1}^{\zeta,\eta}\to\{1,\dots,n_{1}\}$ and $\beta:\,\Omega_{2}^{\zeta,\eta}\to\{1,\dots,n_{2}\}$ and thus construct matrices $X^{1}=(\underline{X}_{1}^{1},\dots,\underline{X}_{r}^{1})\in\mathbb{R}^{n_{1}\times r}$ and $X^{2}=(\underline{X}_{1}^{2},\dots,\underline{X}_{r}^{2})\in\mathbb{R}^{n_{2}\times r}$, whose columns are the low-rank factors evaluated on the truncated state spaces $\Omega_{1}^{\zeta,\eta}$ and $\Omega_{2}^{\zeta,\eta}$. Note that we indicate linearized quantities by underlining them, i.e. $\underline{X}_{i}^{1}=(X_{i,\underline{\alpha}_{1}}^{1},\dots,X_{i,\underline{\alpha}_{n_{1}}}^{1})^{T}$, where $\underline{\alpha}=(\alpha(x))_{x\in\Omega_{1}^{\zeta,\eta}}$. The matrices $K\in\mathbb{R}^{n_{1}\times r}$and $L\in\mathbb{R}^{n_{2}\times r}$ are then computed by matrix multiplication, $K=X^{1}S$ and $L=X^{2}S^{T}$.
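As an illustration of this linearization (our own sketch, which uses $0$-based indices instead of the $1$-based maps $\alpha$ and $\beta$ above), the maps can be realized with NumPy's multi-index routines:

```python
import numpy as np

# Illustrative bounds for partition 1: zeta_i <= x_i <= eta_i for two species.
zeta = np.array([0, 0])
eta = np.array([50, 50])
shape = tuple(eta - zeta + 1)            # number of admissible values per species

def alpha(x1):
    """Map a population-number tuple x_(1) to a linear (0-based) index."""
    return int(np.ravel_multi_index(tuple(np.asarray(x1) - zeta), shape))

def alpha_inv(idx):
    """Map a linear index back to the population-number tuple."""
    return tuple(np.array(np.unravel_index(idx, shape)) + zeta)

# A low-rank factor X^1 is stored as an (n1 x r) matrix over the linearized index.
n1, r = int(np.prod(shape)), 5
X1 = np.zeros((n1, r))
X1[alpha((30, 5)), 0] = 1.0              # address an entry by its population numbers
assert alpha_inv(alpha((30, 5))) == (30, 5)
```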
Using the substitution ([\[eq:substitution\]](#eq:substitution){reference-type="ref" reference="eq:substitution"}), the coefficients ([\[eq:K_coeff\]](#eq:K_coeff){reference-type="ref" reference="eq:K_coeff"}) can be written as
$$\begin{aligned}C^{1,\mu}(x_{(1)}) & =\left[\mathcal{T}_{2,\mu}^{-1}[X_{0}^{2}]\right]^{T}\mathrm{diag}\left(\underline{a}_{\mu}(x_{(1)})\right)X_{0}^{2},\\
D^{1,\mu}(x_{(1)}) & =\left(X_{0}^{2}\right)^{T}\mathrm{diag}\left(\underline{a}_{\mu}(x_{(1)})\right)X_{0}^{2},
\end{aligned}
\label{eq:K_coeff_imp}$$ with $X_{0}^{2}=X^{2}(t=0)\in\mathbb{R}^{n_{2}\times r}$ and $C^{1,\mu}(x_{(1)}),\,D^{1,\mu}(x_{(1)})\in\mathbb{R}^{r\times r}$. The *shift operator $\mathcal{T}_{2,\mu}=\mathcal{T}_{2,\mu}^{+1}$* and the *inverse shift operator* $\mathcal{T}_{2,\mu}^{-1}$ act element-wise and are defined as $$\mathcal{T}_{2,\mu}^{\pm1}[X_{\underline{\beta}_{i}}^{2}]=\begin{cases}
0\qquad\mathrm{if} & \exists x_{j}=\left(\beta^{-1}(\underline{\beta}_{i})\right)_{j}:\,(x_{j}\pm\nu_{\mu,j}^{(2)}<\zeta_{j}^{(2)})\lor(x_{j}\pm\nu_{\mu,j}^{(2)}>\eta_{j}^{(2)}),\,j=1,\dots,m_{2}\\
X_{\underline{\beta}_{i}\pm\beta(\nu_{\mu}^{(2)})}^{2} & \mathrm{otherwise},
\end{cases}$$ where $\zeta=(\zeta^{(1)},\zeta^{(2)})$ and $\eta=(\eta^{(1)},\eta^{(2)})$. This definition approximates all terms $X^{2}(x_{(2)}-\nu_{\mu,(2)})$ which lie outside the truncated state space by 0; this assumes that the probability density function and the low-rank factors decay sufficiently fast within the truncated state space.
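A straightforward realization of such a shift operator acting column-wise on a linearized low-rank factor is sketched below (our own code; it assumes the row-major linearization of the previous snippet, and the sign convention has to be matched to the definition above):

```python
import numpy as np

def shift_factor(X, nu, shape):
    """Evaluate a linearized low-rank factor X (shape (n, r)) at x - nu, filling
    entries that leave the truncated state space with zeros. Passing -nu instead
    of nu yields the shift in the opposite direction."""
    n, r = X.shape
    grid = X.reshape(shape + (r,))       # unflatten the population-number index
    out = np.zeros_like(grid)
    src = tuple(slice(max(0, -v), shape[i] - max(0, v)) for i, v in enumerate(nu))
    dst = tuple(slice(max(0, v), shape[i] - max(0, -v)) for i, v in enumerate(nu))
    out[dst] = grid[src]
    return out.reshape(n, r)

# Schematically, the coefficients of equation (eq:K_coeff_imp) then read, for each
# value of x_(1) entering the propensity vector a_mu (length n_2),
#   D1 = X2.T @ (a_mu[:, None] * X2)
#   C1 = shift_factor(X2, nu2, shape2).T @ (a_mu[:, None] * X2)
# up to the shift direction fixed by the convention above.
```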
Writing $\underline{C}^{1,\mu}=(C_{\underline{\alpha}_{1}}^{1,\mu},\dots,C_{\underline{\alpha}_{n_{1}}}^{1,\mu})$, the evolution equation of the $K$ step becomes $$\partial_{t}K=\sum_{\mu=1}^{M}\left(\mathcal{T}_{1,\mu}\left[\underline{K}\odot(\underline{C}^{1,\mu})^{T}\right]-\underline{K}\odot(\underline{D}^{1,\mu})^{T}\right),\label{eq:K_imp}$$ with element-wise matrix-vector multiplication $\underline{K}\odot(\underline{D}^{1,\mu})^{T}=(K_{\underline{\alpha}_{1}}(D_{\underline{\alpha}_{1}}^{1,\mu})^{T},\dots,K_{\underline{\alpha}_{n_{1}}}(D_{\underline{\alpha}_{n_{1}}}^{1,\mu})^{T})$. The shift operator $\mathcal{T}_{1,\mu}$ is defined in a similar way as $\mathcal{T}_{2,\mu}$.
If we perform the integration over partition 2 in equations ([\[eq:S_coeff\]](#eq:S_coeff){reference-type="ref" reference="eq:S_coeff"}) first, we can reuse the coefficients $C^{1,\mu}$ and $D^{1,\mu}$ for the calculation of the $S$ step coefficients $$\begin{aligned}E_{ijkl} & =\sum_{\mu=1}^{M}\left(\mathcal{T}_{1,\mu}^{-1}\left[\underline{X}_{1,i}^{1}\right]\right)^{T}\mathrm{diag}\left(\underline{C}_{jl}^{1,\mu}\right)\underline{X}_{1,k}^{1},\\
F_{ijkl} & =\sum_{\mu=1}^{M}\left(\underline{X}_{1,i}^{1}\right)^{T}\mathrm{diag}\left(\underline{D}_{jl}^{1,\mu}\right)\underline{X}_{1,k}^{1},
\end{aligned}
\label{eq:S_coeff_imp}$$ with $E^{\mu},\,F^{\mu}\in\mathbb{R}^{r\times r\times r\times r}$. With these coefficients we can write the evolution equation of the $S$ step as $$\partial_{t}S_{ij}=-\sum_{k,l=1}^{r}S_{kl}\left(E_{ijkl}-F_{ijkl}\right).\label{eq:S_imp}$$ The coefficients $C^{2,\mu}$, $D^{2,\mu}$ are calculated via $$\begin{aligned}C^{2,\mu}(x_{(2)}) & =\left[\mathcal{T}_{1,\mu}^{-1}[X_{1}^{1}]\right]^{T}\mathrm{diag}\left(\underline{a}_{\mu}(x_{(2)})\right)X_{1}^{1},\\
D^{2,\mu}(x_{(2)}) & =\left(X_{1}^{1}\right)^{T}\mathrm{diag}\left(\underline{a}_{\mu}(x_{(2)})\right)X_{1}^{1},
\end{aligned}
\label{eq:L_coeff_imp}$$ with $X_{1}^{1}=X^{1}(t=\tau)\in\mathbb{R}^{n_{1}\times r}$ and $C^{2,\mu}(x_{(2)}),\,D^{2,\mu}(x_{(2)})\in\mathbb{R}^{r\times r}$.
The evolution equation for the $L$ step has a similar structure as the corresponding equation for the $K$ step and reads as $$\partial_{t}L=\sum_{\mu=1}^{M}\left(\mathcal{T}_{2,\mu}\left[\underline{L}\odot(\underline{C}^{2,\mu})^{T}\right]-\underline{L}\odot(\underline{D}^{2,\mu})^{T}\right).\label{eq:L_imp}$$ Note that for the second term on the right-hand side of equation ([\[eq:L_imp\]](#eq:L_imp){reference-type="ref" reference="eq:L_imp"}) we could perform the summation over all reactions $R_{\mu}$ before multiplying $\underline{D}^{2,\mu}$ with $\underline{L}$. Moreover, the calculation of the evolution equation ([\[eq:L_imp\]](#eq:L_imp){reference-type="ref" reference="eq:L_imp"}) could be simplified by introducing a reaction-independent $\underline{D}^{2}=\sum_{\mu=1}^{M}\underline{D}^{2,\mu}$. However, as we will see in section [\[subsec:reagents_dependence\]](#subsec:reagents_dependence){reference-type="ref" reference="subsec:reagents_dependence"}, it is computationally more efficient to perform the summation over all reactions after multiplying $\underline{L}$ with the reaction-dependent $\underline{D}^{2,\mu}$. The same holds for the coefficient $\underline{D}^{1,\mu}$ and the second term on the right-hand side of the evolution equation ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"}) for the $K$ step. For the first terms on the right-hand side of equations ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"}) and ([\[eq:L_imp\]](#eq:L_imp){reference-type="ref" reference="eq:L_imp"}) we always have to keep the reaction-dependence of $\underline{C}^{1,\mu}$ and $\underline{C}^{2,\mu}$, since the shift operator is also reaction-dependent; therefore the computational effort would scale the same even when using reaction-independent $D$ coefficients.
Finally, we want to give a remark on the computational effort for the evolution equations and the calculation of the coefficients. Without making any further simplifications, the computational effort for calculating the $C$ and $D$ coefficients is $\mathcal{O}(Mr^{2}n)$, where $M$ is the total number of reaction channels. When we reuse $\underline{C}^{1,\mu}$ and $\underline{D}^{1,\mu}$ for the calculation of the $E$ and $F$ coefficients, the complexity for computing the $E$ and $F$ coefficients is $\mathcal{O}(M\,r^{4}n_{1})$. The right-hand side of the evolution equation for the $K$ step has to be evaluated for all population numbers in partition 1, therefore the computational cost is $\mathcal{O}(Mr^{2}n_{1})$. Similarly, the computation of the $L$ step scales with $\mathcal{O}(Mr^{2}n_{2})$, whereas for the $S$ step we have complexity $\mathcal{O}(r^{4})$. Note that in particular the computation of the $C$ and $D$ coefficients is very expensive, since it scales with the total number of degrees of freedom $n$. Thus it is imperative to reduce this computational burden, which is the topic of the next section.
## Efficient computation of the coefficients[\[subsec:efficient-computation\]]{#subsec:efficient-computation label="subsec:efficient-computation"}
In the previous section we have seen that computing the coefficients without making any further assumptions is computationally expensive. The main goal here is to describe ways to avoid the scaling of the computational effort with the total number of degrees of freedom $n$. We essentially discuss two possibilities to circumvent this scaling behaviour: First, most reactions only depend on a small subset of all species. When we denote for a given reaction $R_{\mu}$ the number of participating species (we call them *reagents*) by $\tilde{N}_{\mu}$, then this assumption can be expressed as $\tilde{N}_{\mu}\ll N$. Second, in many reaction networks the propensity functions exhibit a so-called factorization property, and exploiting this property again reduces the computational burden. Note that our present implementation does not exploit the factorization property since in all our examples $\tilde{N}_{\mu}\ll N$. However, what the discussion in this section shows is that even in the rare instances where this is not the case, the factorization property which is common to most reactions gives a way forward to efficiently implementing the dynamical low-rank approach.
### Dependence of the propensity on reagents[\[subsec:reagents_dependence\]]{#subsec:reagents_dependence label="subsec:reagents_dependence"}
Since in most reactions only a subset of all species is actually participating, the propensity $a_{\mu}(x)$ for such a reaction $R_{\mu}$ only depends on the population number of the $\tilde{N}_{\mu}$ reagents, so $a_{\text{\ensuremath{\mu}}}(x)=a_{\mu}(\tilde{x}_{\mu})$, where $\tilde{x}_{\mu}\in\mathbb{N}_{0}^{\tilde{N}_{\mu}}$ and $\tilde{N}_{\mu}\le N$. In many cases the propensities only depend on the population number of two or three species, therefore $\tilde{N}_{\mu}\ll N$. The computational effort of our algorithm can be reduced substantially by calculating coefficients $C$ and $D$ only for the possible values of $\tilde{x}_{\mu}$ that are actually needed.
For a given reaction $\mu$ we first determine in the implementation the reagents and precompute all possible values of the propensity function $a_{\mu}(\tilde{x}_{\mu})$, since those values do not change over time. The $C$ and $D$ coefficients have to be calculated only for $\tilde{n}_{1}^{\mu}$ or $\tilde{n}_{2}^{\mu}$ population number values, but $K$ and $L$ still depend on the population numbers $x_{(1)}$ and $x_{(2)}$, respectively. Therefore we have to introduce a (reaction-dependent) mapping between $\tilde{x}_{\mu}$ and $x_{(1)}$ in order to perform for example the multiplication $\underline{K}\odot(\underline{D}^{1,\mu})^{T}$ on the right-hand side in equation ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"}). Applying this map effectively introduces a reaction-dependency on the overall multiplication term. This is the reason why we cannot introduce the reaction independent $\underline{D}^{2}=\sum_{\mu=1}^{M}\underline{D}^{2,\mu}$ as discussed previously.
Note that the complexity for the integration over $x_{(2)}$ in equation ([\[eq:K_coeff_imp\]](#eq:K_coeff_imp){reference-type="ref" reference="eq:K_coeff_imp"}) still scales with $\mathcal{O}(n_{2})$ and for equation ([\[eq:L_coeff_imp\]](#eq:L_coeff_imp){reference-type="ref" reference="eq:L_coeff_imp"}) the integration over $x_{(1)}$ scales with $\mathcal{O}(n_{1})$, but the coefficients have to be calculated only for the $\tilde{n}_{1}^{\mu}$ or $\tilde{n}_{2}^{\mu}$ population number values. Therefore the complexity for calculating the $\underline{C}^{1,\mu}$ and $\underline{D}^{1,\mu}$ coefficients is reduced to $\mathcal{O}(\sum_{\mu=1}^{M}\tilde{n}_{1}^{\mu}n_{2}r^{2})$, and for the $\underline{C}^{2,\mu}$ and $\underline{D}^{2,\mu}$ coefficients to $\mathcal{O}(\sum_{\mu=1}^{M}\tilde{n}_{2}^{\mu}n_{1}r^{2})$. Thus, these computations no longer scale with $n$ (assuming that $\tilde{N}_{\mu}\ll N$).
### Factorization property of the propensity function
The equations for the coefficients ([\[eq:K_coeff\]](#eq:K_coeff){reference-type="ref" reference="eq:K_coeff"}), ([\[eq:S_coeff\]](#eq:S_coeff){reference-type="ref" reference="eq:S_coeff"}) and ([\[eq:L_coeff\]](#eq:L_coeff){reference-type="ref" reference="eq:L_coeff"}) can be simplified if the propensity function can be written as $$a_{\mu}(x_{(1)},x_{(2)})=a_{\mu,(1)}(x_{(1)})\,a_{\mu,(2)}(x_{(2)}).\qquad\textrm{(factorization property)}\label{eq:factor}$$ This property is valid for elementary reaction types and reactions of the Michaelis-Menten form[ and thus is ubiquitous in most biological systems.]{style="color: black"} The factorization property enables us to rewrite for example the coefficient $c_{ij}^{1,\mu}$ in equation ([\[eq:K_coeff\]](#eq:K_coeff){reference-type="ref" reference="eq:K_coeff"}) as $$c_{ij}^{1,\mu}(x_{(1)})=a_{\mu,(1)}(x_{(1)}-\nu_{\mu,(1)})\langle X_{0,i}^{2}(x_{(2)}),a_{\mu,(2)}(x_{(2)}-\nu_{\mu,(2)})X_{0,j}^{2}(x_{(2)}-\nu_{\mu,(2)})\rangle_{2},$$ which scales with $\mathcal{O}(M\,r^{2}(n_{1}+n_{2}))$ compared to $\mathcal{O}(M\,r^{2}n)$ (even when disregarding the considerations about the dependence of the propensity on reagents in the previous section). Moreover, the two inner products of the coefficient $e_{ijkl}$ in equation ([\[eq:S_coeff\]](#eq:S_coeff){reference-type="ref" reference="eq:S_coeff"}) can be calculated independently, $$\begin{split}e_{ijkl} & =\sum_{\mu=1}^{M}\langle X_{1,i}^{1}(x_{(1)}),a_{\mu,(1)}(x_{(1)}-\nu_{\mu,(1)})X_{1,k}^{1}(x_{(1)}-\nu_{\mu,(1)})\rangle_{1}\\
& \quad\times\langle X_{0,j}^{2}(x_{(2)}),a_{\mu,(2)}(x_{(2)}-\nu_{\mu,(2)})X_{0,l}^{2}(x_{(2)}-\nu_{\mu,(2)})\rangle_{2},
\end{split}$$ which has computational costs of $\mathcal{O}(Mr^{4}(n_{1}+n_{2}))$ instead of $\mathcal{O}(Mr^{4}n)$.
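To give a concrete (purely illustrative) example of this property: for a bimolecular propensity $a_{\mu}(x)=c\,x_{A}x_{B}$ with species $A$ assigned to partition 1 and species $B$ assigned to partition 2, equation ([\[eq:factor\]](#eq:factor){reference-type="ref" reference="eq:factor"}) holds with $$a_{\mu}(x_{(1)},x_{(2)})=\underbrace{c\,x_{A}}_{a_{\mu,(1)}(x_{(1)})}\,\underbrace{x_{B}}_{a_{\mu,(2)}(x_{(2)})}.$$ A propensity of Michaelis-Menten type, such as $b/(b+x_{2})$ in the toggle switch model considered below, depends only on the population number of a single species and therefore factorizes trivially, with the factor belonging to the other partition set to one.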
## First- and second-order projector splitting integrator
The first-order integrator is obtained by using Lie--Trotter splitting as explained in section [\[sec:DLR-approximation\]](#sec:DLR-approximation){reference-type="ref" reference="sec:DLR-approximation"}. The evolution equations ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"}) and ([\[eq:L_imp\]](#eq:L_imp){reference-type="ref" reference="eq:L_imp"}) for the $K$ and $L$ steps as well as ([\[eq:S_imp\]](#eq:S_imp){reference-type="ref" reference="eq:S_imp"}) for the $S$ step are solved with an explicit Euler method. Due to different reaction time scales stemming from both small and large propensity values, the CME becomes stiff for many systems. In order to remain in the stable region for large time step size $\tau$, we perform $k$ explicit Euler steps with a time step size of $\tau/k$ while keeping the coefficients constant. Note that evaluating the right-hand side of the evolution equations has the same scaling as the calculation of the coefficients, but with a smaller constant; the coefficients, in turn, only need to be computed once per time step (or twice, namely for $C^{1,\mu}$ and $D^{1,\mu}$, in case of the second-order integrator). We will explore the use of implicit integrators in future work.
The low-rank factors $X^{1}$ and $X^{2}$ and the coupling coefficients $S$ are obtained from $K$ and $L$ matrices by performing a QR decomposition. In order to perform the QR decomposition and the linear algebra operations required for an efficient calculation of the coefficients we made use of the dynamical low-rank framework `Ensign` [@Cassini_2021].
A detailed description of the first order Lie-Trotter projector splitting scheme is shown in algorithm [\[alg:first-order\]](#alg:first-order){reference-type="ref" reference="alg:first-order"}.
**Input:** $X_{0}^{1},$ $S_{0}$, $X_{0}^{2}$
**Output:** $X_{1}^{1}$, $S_{3}$, $X_{1}^{2}$
1. Calculate $C^{1,\mu}(x_{(1)})$ and $D^{1,\mu}(x_{(1)})$ with $X_0^2$ using equation ([\[eq:K_coeff_imp\]](#eq:K_coeff_imp){reference-type="ref" reference="eq:K_coeff_imp"})
2. Integrate $K$ from $0$ to $\tau$ with initial value $K(0) = X_0^1 S_0$ using equation ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"})
3. Decompose $K(\tau) = X_1^1 S_1$ via a QR factorization
4. Calculate $E^{\mu}$ and $F^{\mu}$ with $X_1^1$, $X_0^2$, $C^{1,\mu}(x_{(1)})$ and $D^{1,\mu}(x_{(1)})$ using equation ([\[eq:S_coeff_imp\]](#eq:S_coeff_imp){reference-type="ref" reference="eq:S_coeff_imp"})
5. Integrate $S$ from $0$ to $\tau$ with initial value $S(0) = S_1$ using equation ([\[eq:S_imp\]](#eq:S_imp){reference-type="ref" reference="eq:S_imp"}) and set $S_2 = S(\tau)$
6. Calculate $C^{2,\mu}(x_{(2)})$ and $D^{2,\mu}(x_{(2)})$ with $X_1^1$ using equation ([\[eq:L_coeff_imp\]](#eq:L_coeff_imp){reference-type="ref" reference="eq:L_coeff_imp"})
7. Integrate $L$ from $0$ to $\tau$ with initial value $L(0) = X_0^2 (S_2)^T$ using equation ([\[eq:L_imp\]](#eq:L_imp){reference-type="ref" reference="eq:L_imp"})
8. Decompose $L(\tau) = X_1^2 (S_3)^T$ via a QR factorization
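The control flow of this scheme can be summarized in a few lines of Python; the sketch below is our own and hides the evaluation of the right-hand sides and coefficients behind placeholder callables, so it only illustrates the ordering of the $K$, $S$ and $L$ steps, the explicit Euler substepping and the QR factorizations (the corresponding linear algebra in the actual implementation is carried out with `Ensign`, as mentioned above).

```python
import numpy as np

def lie_trotter_step(X1, S, X2, tau, substeps, K_rhs, S_rhs, L_rhs):
    """One first-order projector splitting step for the low-rank factors.

    X1: (n1, r) and X2: (n2, r) have orthonormal columns, S is (r, r).
    K_rhs, S_rhs, L_rhs are placeholders that evaluate the right-hand sides of
    equations (eq:K_imp), (eq:S_imp) and (eq:L_imp) with frozen coefficients.
    """
    def euler(y, rhs):
        for _ in range(substeps):                        # explicit Euler substeps
            y = y + (tau / substeps) * rhs(y)
        return y

    K = euler(X1 @ S, lambda K: K_rhs(K, X2))            # K step
    X1, S_hat = np.linalg.qr(K)                          # re-orthogonalize: K = X1 S_hat

    S_tilde = euler(S_hat, lambda S: S_rhs(S, X1, X2))   # S step (eq:S_imp carries the minus sign)

    L = euler(X2 @ S_tilde.T, lambda L: L_rhs(L, X1))    # L step
    X2, St = np.linalg.qr(L)                             # L = X2 S^T
    return X1, St.T, X2
```

The coefficients $C$, $D$, $E$ and $F$ are computed once from the current factors before the corresponding substeps, exactly as in the listing above, and are kept constant during the substeps.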
Our numerical scheme can be generalized to a second-order method by employing Strang splitting in the context of equation ([\[eq:projector-equation\]](#eq:projector-equation){reference-type="ref" reference="eq:projector-equation"}). This is shown in detail in algorithm [\[alg:second-order\]](#alg:second-order){reference-type="ref" reference="alg:second-order"}. Note that two of the steps are repeated while one step is only performed once (due to the symmetry of the splitting). Ideally, the step that has to be done only once is chosen to coincide with the step that incurs the largest computational effort (either the $K$ or the $L$ step).
**Input:** $X_{0}^{1},$ $S_{0}$, $X_{0}^{2}$
**Output:** $X_{2}^{1}$, $S_{5}$, $X_{1}^{2}$
1. Calculate $C^{1,\mu}(x_{(1)})$ and $D^{1,\mu}(x_{(1)})$ with $X_0^2$ using equation ([\[eq:K_coeff_imp\]](#eq:K_coeff_imp){reference-type="ref" reference="eq:K_coeff_imp"})
2. Integrate $K$ from $0$ to $\tau/2$ with initial value $K(0) = X_0^1 S_0$ using equation ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"})
3. Decompose $K(\tau/2) = X_1^1 S_1$ via a QR factorization
4. Calculate $E^{\mu}$ and $F^{\mu}$ with $X_1^1$, $X_0^2$, $C^{1,\mu}(x_{(1)})$ and $D^{1,\mu}(x_{(1)})$ using equation ([\[eq:S_coeff_imp\]](#eq:S_coeff_imp){reference-type="ref" reference="eq:S_coeff_imp"})
5. Integrate $S$ from $0$ to $\tau/2$ with initial value $S(0) = S_1$ using equation ([\[eq:S_imp\]](#eq:S_imp){reference-type="ref" reference="eq:S_imp"}) and set $S_2 = S(\tau/2)$
6. Calculate $C^{2,\mu}(x_{(2)})$ and $D^{2,\mu}(x_{(2)})$ with $X_1^1$ using equation ([\[eq:L_coeff_imp\]](#eq:L_coeff_imp){reference-type="ref" reference="eq:L_coeff_imp"})
7. Integrate $L$ from $0$ to $\tau$ with initial value $L(0) = X_0^2 (S_2)^T$ using equation ([\[eq:L_imp\]](#eq:L_imp){reference-type="ref" reference="eq:L_imp"})
8. Decompose $L(\tau) = X_1^2 (S_3)^T$ via a QR factorization
9. Recalculate $C^{1,\mu}(x_{(1)})$ and $D^{1,\mu}(x_{(1)})$ with $X_1^2$ using equation ([\[eq:K_coeff_imp\]](#eq:K_coeff_imp){reference-type="ref" reference="eq:K_coeff_imp"})
10. Recalculate $E^{\mu}$ and $F^{\mu}$ with $X_1^1$, $X_1^2$ and the new values for $C^{1,\mu}(x_{(1)})$ and $D^{1,\mu}(x_{(1)})$ using equation ([\[eq:S_coeff_imp\]](#eq:S_coeff_imp){reference-type="ref" reference="eq:S_coeff_imp"})
11. Integrate $S$ from $\tau/2$ to $\tau$ with initial value $S(\tau/2) = S_3$ using equation ([\[eq:S_imp\]](#eq:S_imp){reference-type="ref" reference="eq:S_imp"}) and set $S_4 = S(\tau)$
12. Integrate $K$ from $\tau/2$ to $\tau$ with initial value $K(\tau/2) = X_1^1 S_4$ using equation ([\[eq:K_imp\]](#eq:K_imp){reference-type="ref" reference="eq:K_imp"})
13. Decompose $K(\tau) = X_2^1 S_5$ via a QR factorization
# Numerical experiments[\[sec:numerical-experiments\]]{#sec:numerical-experiments label="sec:numerical-experiments"}
We tested our implementation with three models from the field of biochemistry. The smallest model, the genetic toggle switch, was primarily chosen for code validation and to investigate the approximation accuracy (as a reference solution without the low-rank approximation can be computed easily). For the two larger models, the bacteriophage-$\lambda$ ("lambda phage") and the BAX pore assembly, we compare the DLR approximation with the dominant numerical method for solving the CME, the stochastic simulation algorithm (SSA) (see, e.g., [@Gillespie_1976]).
## Toggle switch
The genetic toggle switch, as first described in [@Gardner_2000], has a function analogous to a flip-flop in electronics. It consists of two mutually repressing proteins $S_{1}$ and $S_{2}$, which leads to two stable steady-states. We studied the reaction system shown in table [1](#tab:ts){reference-type="ref" reference="tab:ts"}, which was also considered in [@Jahnke_2008].
No. Reaction Propensity function
----- ------------------------------ ---------------------
1 $S_{1}\longrightarrow\star$ $c\cdot x_{1}$
2 $S_{2}\longrightarrow\star$ $c\cdot x_{2}$
3 $\star\longrightarrow S_{1}$ $b/(b+x_{2})$
4 $\star\longrightarrow S_{2}$ $b/(b+x_{1})$
: Reactions and propensity functions of the toggle switch systems. The two parameters are chosen as $b=0.4$ and $c=0.05$.[\[tab:ts\]]{#tab:ts label="tab:ts"}
The first two reactions describe the decay of proteins $S_{1}$ and $S_{2}$, respectively. If the population number of $S_{2}$ is large, then the propensity of reaction 3 becomes small and transcription of new copies of $S_{1}$ is inhibited. Similarly, the production of $S_{2}$ by reaction 4 is inhibited by $S_{1}$.
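The stochastic simulation algorithm mentioned above generates exact sample paths of such a reaction system by repeatedly drawing an exponentially distributed waiting time and a reaction index from the current propensities. A minimal Gillespie sketch for the toggle switch of table [1](#tab:ts){reference-type="ref" reference="tab:ts"} is given below (purely illustrative and not the *StochKit2* implementation used later; for simplicity the trajectory is started at the mean of the Gaussian initial value specified below instead of sampling from it).

``` python
import numpy as np

def ssa_toggle_switch(t_end, b=0.4, c=0.05, x0=(30, 5), rng=None):
    """One Gillespie/SSA sample path of the toggle switch, returning the state at t_end."""
    rng = np.random.default_rng() if rng is None else rng
    x1, x2 = x0
    t = 0.0
    changes = [(-1, 0), (0, -1), (1, 0), (0, 1)]   # state changes of reactions 1-4
    while True:
        a = np.array([c * x1, c * x2, b / (b + x2), b / (b + x1)])  # propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)             # time to the next reaction
        if t > t_end:
            return x1, x2
        j = rng.choice(4, p=a / a0)                # which reaction fires
        x1 += changes[j][0]
        x2 += changes[j][1]

# e.g. estimate the marginal distribution of S1 at t = 500 from many runs:
# samples = [ssa_toggle_switch(500.0)[0] for _ in range(10_000)]
```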
As initial value we consider the Gaussian distribution $$\begin{aligned}
P(0,x) & =\gamma\cdot\exp\left(-\frac{1}{2}(x-\mu)^{T}C^{-1}(x-\mu)\right),\\
C & =\frac{1}{2}\begin{pmatrix}75 & -15\\
-15 & 75
\end{pmatrix},\end{aligned}$$ with $\mu=(30,\,5)$; the normalization constant $\gamma$ is determined by the condition $\sum_{x\in\Omega^{\zeta,\eta}}P(0,x)=1$.
We solved the CME on the time interval $[0,500]$ with truncation indices $\eta=(0,0)$ and $\zeta=(50,50)$ and with the trivial partitions $\mathcal{P}_{1}=\{S_{1}\}$ and $\mathcal{P}_{2}=\{S_{2}\}$. Using rank $r=5$, the total number of degrees of freedom is reduced from $51^{2}=2601$ to $2\cdot51\cdot5+5^{2}=535$, which is $20.6\%$ of the full system size. Due to the relatively small size of the truncated state space an "exact" reference solution of the full system on the truncated state space could be obtained via a Python implementation that uses the RK45 method of the `scipy.integrate.solve_ivp` routine to directly solve equation ([\[eq:CME\]](#eq:CME){reference-type="ref" reference="eq:CME"}).
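A minimal sketch of such a reference computation is shown below. This is not the authors' code: the generator is assembled state by state on the truncated state space, and reactions that would leave the truncated domain are simply suppressed, which is one common way to conserve probability on the truncation.

``` python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse import csr_matrix, lil_matrix

n, b, c = 51, 0.4, 0.05                     # truncated sizes and parameters of table 1
idx = lambda x1, x2: x1 * n + x2

reactions = [((-1, 0), lambda x1, x2: c * x1),        # S1 -> *
             ((0, -1), lambda x1, x2: c * x2),        # S2 -> *
             ((1, 0), lambda x1, x2: b / (b + x2)),   # *  -> S1
             ((0, 1), lambda x1, x2: b / (b + x1))]   # *  -> S2

A = lil_matrix((n * n, n * n))
for x1 in range(n):
    for x2 in range(n):
        i = idx(x1, x2)
        for (d1, d2), prop in reactions:
            y1, y2 = x1 + d1, x2 + d2
            if 0 <= y1 < n and 0 <= y2 < n:   # suppress reactions leaving the domain
                a = prop(x1, x2)
                A[idx(y1, y2), i] += a        # inflow into the target state
                A[i, i] -= a                  # outflow from the current state
A = csr_matrix(A)

# Gaussian initial value, normalised on the truncated state space
X1, X2 = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
Cinv = np.linalg.inv(0.5 * np.array([[75.0, -15.0], [-15.0, 75.0]]))
D = np.stack([X1 - 30.0, X2 - 5.0], axis=-1)
P0 = np.exp(-0.5 * np.einsum("...i,ij,...j->...", D, Cinv, D)).ravel()
P0 /= P0.sum()

sol = solve_ivp(lambda t, p: A @ p, (0.0, 500.0), P0, method="RK45", t_eval=[500.0])
P_ref = sol.y[:, -1].reshape(n, n)            # reference solution at t = 500
```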
Figure [\[fig:ts1\]](#fig:ts1){reference-type="ref" reference="fig:ts1"} shows the DLR approximation (using the second-order integrator with time step size $\tau=0.02$ and $10$ substeps) with ranks $r=4$ and $5$ and the reference solution of the one-dimensional marginal distributions $P_{\mathrm{MD}}(x_{1})$, $P_{\mathrm{MD}}(x_{2})$ at time $t=500$. It can be clearly seen that $r=4$ is not sufficient to capture the full behavior of the system, but for $r=5$ we obtain very good results. Figure [\[fig:ts2\]](#fig:ts2){reference-type="ref" reference="fig:ts2"} depicts the full probability distribution $P(x_{1},x_{2})$ at time $t=500$. This figure again demonstrates that the results of the DLR approximation for $r=5$ are in very good agreement with the exact solution of the truncated CME. With these settings, the total run time for the simulation with rank $r=5$ was approximately $1$ minute and $16$ seconds on a MacBook Pro with a $2$ GHz Intel Core i5 Skylake (6360U) processor. The results of the DLR approximation were computed with one thread.
Figure [\[fig:ts3\]](#fig:ts3){reference-type="ref" reference="fig:ts3"} shows the $2$-norm error of the best-approximation and of the DLR approximation for time step sizes $\tau=0.2$ and $0.02$, using the second-order integrator with $10$ substeps. The best-approximation was obtained by truncating all but the first $r=5$ singular values of a singular value decomposition (SVD) of the reference solution; for the DLR approximation we again used $r=5$. It can be seen that using a smaller time step size helps particularly in the first few steps of the simulation. After approximately $t=50$ the errors for the different time step sizes are almost identical, indicating that the overall error is dominated by the low-rank approximation. The error of the proposed dynamical low-rank algorithm is only slightly larger than that of the theoretical best-approximation with the same rank.
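The best-approximation used in this comparison can be computed directly from the reference solution via a truncated SVD; a short sketch (assuming `P_ref` contains the reference solution as a $51\times51$ matrix) reads:

``` python
import numpy as np

def best_rank_r(P, r):
    # best rank-r approximation (Eckart-Young): keep only the r leading singular values
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# 2-norm distance of the reference solution to the rank-5 best-approximation:
# err_best = np.linalg.norm(P_ref - best_rank_r(P_ref, 5), 2)
```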
## Lambda phage
As a second example, the DLR approximation was applied to the model for the life cycle of the lambda phage as described in [@Hegland_2007]. Table [3](#tab:lp){reference-type="ref" reference="tab:lp"} lists the ten reactions and five species of this system.
No. Reaction Propensity function
----- ------------------------------ ------------------------------------
1 $\star\longrightarrow S_{1}$ $a_{1}b_{1}/(b_{1}+x_{2})$
2 $\star\longrightarrow S_{2}$ $(a_{2}+x_{5})b_{2}/(b_{2}+x_{1})$
3 $\star\longrightarrow S_{3}$ $a_{3}b_{3}x_{2}/(b_{3}x_{2}+1)$
4 $\star\longrightarrow S_{4}$ $a_{4}b_{4}x_{3}/(b_{4}x_{3}+1)$
5 $\star\longrightarrow S_{5}$ $a_{5}b_{5}x_{3}/(b_{5}x_{3}+1)$
6 $S_{1}\longrightarrow\star$ $c_{1}\cdot x_{1}$
7 $S_{2}\longrightarrow\star$ $c_{2}\cdot x_{2}$
8 $S_{3}\longrightarrow\star$ $c_{3}\cdot x_{3}$
9 $S_{4}\longrightarrow\star$ $c_{4}\cdot x_{4}$
10 $S_{5}\longrightarrow\star$ $c_{5}\cdot x_{5}$
: Reactions, propensity functions and parameters of the lambda phage system.[\[tab:lp\]]{#tab:lp label="tab:lp"}
$i=1$ $i=2$ $i=3$ $i=4$ $i=5$
--------- ---------- ---------- ---------- -------- --------
$a_{i}$ $0.5$ $1$ $0.15$ $0.3$ $0.3$
$b_{i}$ $0.12$ $0.6$ $1$ $1$ $1$
$c_{i}$ $0.0025$ $0.0007$ $0.0231$ $0.01$ $0.01$
: Parameter values for the reactions of the lambda phage system.[\[tab:lp\]]{#tab:lp label="tab:lp"}
The life cycle of the lambda phage represents a naturally occurring toggle switch. The lambda phage infects *E. coli*, and depending on the environment, either stays dormant in the bacterial host (*lysogenic phase*) or multiplies, reassembles itself and breaks out of the host (*lytic phase*). If enough $S_{5}$ is present in the environment, $S_{2}$ is produced and the system is in the lysogenic phase. Abundance of $S_{2}$ in turn inhibits the formation of $S_{1}$ via reaction 1. If the amount of $S_{5}$ in the environment is scarce, the production of $S_{1}$ causes the system to enter the lytic phase and the transcription of new copies of $S_{2}$ via reaction 2 is inhibited.
As an initial value the multinomial distribution with parameters $n=3$ and $p=(0.05,\dots,0.05)$ has been chosen: $$P(0,x)=\begin{cases}
\frac{3!}{x_{1}!\cdots x_{5}!(3-|x|)!}0.05^{|x|}(1-5\cdot0.05)^{3-|x|} & \text{if}\quad|x|\le3,\\
0 & \text{else,}
\end{cases}$$ where $|x|=x_{1}+\dots+x_{5}$. We solved the CME on the time interval $[0,10]$ with truncation indices $\eta=(0,0,0,0,0)$ and $\zeta=(15,40,10,10,10)$. The reaction network was partitioned into $\mathcal{P}_{1}=\{S_{1},S_{2}\}$ and $\mathcal{P}_{2}=\{S_{3},S_{4},S_{5}\}$, therefore the two partitions have a comparable number of degrees of freedom, $n_{1}=16\cdot41=656$ and $n_{2}=11^{3}=1331$. Using rank $r=9$, the total number of degrees of freedom used in the DLR approximation is reduced from $n_{1}\cdot n_{2}=873\,136$ to $(n_{1}+n_{2})\cdot r+r^{2}=17\,964$, which is $2.1\%$ of the full system size.
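The bookkeeping behind these numbers can be reproduced with a small helper (illustrative only):

``` python
from math import prod

def dlr_dof(ns, partition, r):
    """Degrees of freedom of a two-partition DLR approximation vs. the full tensor."""
    n1 = prod(ns[i] for i in partition[0])
    n2 = prod(ns[i] for i in partition[1])
    return (n1 + n2) * r + r * r, n1 * n2

# lambda phage setup: truncated sizes (16, 41, 11, 11, 11), partitions {S1,S2} and {S3,S4,S5}
# dlr_dof((16, 41, 11, 11, 11), ([0, 1], [2, 3, 4]), 9) returns (17964, 873136)
```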
An "exact" reference solution was obtained again by solving the full CME on the truncated state space with `scipy.solve_ivp`. Due to the relatively large system size the computation of the full solution is very costly and therefore substantially slower than the DLR approximation. Moreover, we compare the DLR approximation with results obtained with SSA. These latter results were computed in the systems biology framework *PySB* [@Lopez_2013], which uses the SSA implementation *StochKit2* [@Sanft_2011]. Table [4](#tab:lp_runtime){reference-type="ref" reference="tab:lp_runtime"} gives an overview of the run times for the exact reference solution, the DLR approximation and SSA.
run time \[s\]
-------------------------- ----------------
DLR approx. ($r=4$) $52$
DLR approx. ($r=9$) $191$
SSA ($10\,000$ runs) $855$
SSA ($100\,000$ runs) $905$
SSA ($1\,000\,000$ runs) $2319$
exact $1164$
: Overview of the approximate run times in seconds for the DLR approximation, SSA, and the exact reference solution for the lambda phage system. The DLR approximation was computed with the second-order integrator using time step size $\tau=0.01$ and $10$ substeps. All computations were performed on a MacBook Pro with a $2$ GHz Intel Core i5 Skylake (6360U) processor. The results of the DLR approximation were computed with one thread.[\[tab:lp_runtime\]]{#tab:lp_runtime label="tab:lp_runtime"}
SSA is a Monte Carlo approach; therefore, the results of this method are polluted with noise which scales only as the inverse square root of the total number of independent runs or samples. Figure [\[fig:lp1\]](#fig:lp1){reference-type="ref" reference="fig:lp1"} shows the partially evaluated probability distribution $P_{\mathrm{S}}(x_{2})=P(x_{1}=0,x_{2},x_{3}=1,x_{4}=1,x_{5}=1)$ at time $t=10$ computed with the DLR approximation using rank $r=4$ and $r=9$, as well as with SSA using $10\,000$, $100\,000$ and $1\,000\,000$ samples. Furthermore, the exact reference solution of the CME on the truncated state space is shown for comparison. The DLR solution for $r=4$ is in very good agreement with the reference solution; only for high population numbers $x_{2}>30$, which have a relatively low probability, a discrepancy becomes visible. For population numbers $x_{2}>36$ the results are still close to zero, but become negative and therefore are not shown in this semi-logarithmic plot. When increasing the rank to $r=9$, the results for $x_{2}>36$ remain positive and only show a small deviation from the reference solution. The results for SSA with $10\,000$ runs exhibit a lot of noise and for several population numbers the probability is zero, because the corresponding state was not sampled at all during the simulation. When performing the simulation with more runs, this stochastic noise decreases, but even for $1\,000\,000$ runs SSA still has difficulty resolving the small values at the tails of the probability distribution (for example at $x_{2}=0$). This, in particular, shows that the dynamical low-rank approximation has a significant advantage if one is interested in resolving states with low probability. The reason for this is that no numerical noise is introduced by the low-rank approach. We also calculated the maximal error between the exact solution and the SSA result for $P_{\mathrm{S}}(x_{2})$. For $1\,000\,000$ runs the maximal error was $7.74\cdot10^{-5}$ and therefore higher than for the DLR approximation for both $r=4$ ($4.21\cdot10^{-5}$) and $r=9$ ($1.03\cdot10^{-5}$). The computational cost of the DLR approximation, despite the lower error, is lower by a factor of approximately $45$ ($r=4$) and $12$ ($r=9$) compared to SSA with $1\,000\,000$ samples.
The stochastic noise of SSA is also visible in figure [\[fig:lp2\]](#fig:lp2){reference-type="ref" reference="fig:lp2"}. Here the partially evaluated two-dimensional probability distribution $P_{\mathrm{S}}(x_{1},x_{2})=P(x_{1},x_{2},x_{3}=1,x_{4}=1,x_{5}=1)$ of the lambda phage example is shown for time $t=10$. The probability distributions calculated with the DLR approximation for rank $r=4$ and $r=9$ agree very well with the exact reference solution, whereas the noise for the SSA results with $10\,000$ and $100\,000$ samples is very pronounced. Only for $1\,000\,000$ runs are the results comparable to the ones obtained by the DLR approximation.
Figure [\[fig:lp3\]](#fig:lp3){reference-type="ref" reference="fig:lp3"} shows the 2-norm error of the probability density function for the DLR approximation, SSA and the best-approximation depending on time $t$. For rank $r=4$ the resulting error of the DLR and the best-approximation is slightly larger than for the SSA using $1\,000\,000$ samples. However, DLR in this configuration is much faster as has been noted before. If the rank is increased to $r=9$, the DLR approximation also significantly outperforms SSA in terms of accuracy.
## BAX pore assembly
The last and most challenging example is the BAX pore assembly, which is a system with 19 reactions and 11 species. The reactions and propensity functions of this system are listed in table [5](#tab:bax){reference-type="ref" reference="tab:bax"}. BAX plays a key role in mediating mitochondrial outer membrane permeabilization and is therefore a regulator of programmed cell death (*apoptosis*). The model was taken from [@Gaudet_2012] and is part of the *extrinsic apoptosis reaction model* (EARM, see, e.g., [@Albeck_2008]). Monomeric BAX ($S_{1}$) can assemble to larger complexes ($S_{2}$--$S_{5}$, reactions 1--5) and the complexes in turn can dissociate (reactions 6--10). Large enough complexes are able to transport cargo ($S_{10}$ and $S_{11}$), this process is described by reactions 11--19.
No. Reaction Propensity function
-------- --------------------------------------- ------------------------------- -----------------
1 $S_{1}+S_{1}\longrightarrow S_{2}$ $a_{f}\cdot x_{1}(x_{1}-1)/2$
2--5 $S_{i}+S_{1}\longrightarrow S_{i+1}$ $a_{f}\cdot x_{i}x_{1}$ $(i=2,\dots,5)$
6--10 $S_{j+1}\longrightarrow S_{j}+S_{1}$ $a_{r}\cdot x_{j+1}$ $(j=1,\dots,5)$
11--13 $S_{k}+S_{10}\longrightarrow S_{k+3}$ $b_{f}\cdot x_{k}x_{10}$ $(k=4,5,6)$
14--16 $S_{k+3}\longrightarrow S_{k}+S_{10}$ $b_{r}\cdot x_{k+3}$
17--19 $S_{k+3}\longrightarrow S_{k}+S_{11}$ $c_{r}\cdot x_{k+3}$
: Reactions and propensity functions of the BAX pore assembly system with parameters $a_{f}=2\cdot10^{-4}$, $a_{r}=b_{r}=10^{-3}$, $b_{f}=3\cdot10^{-5}$ and $c_{r}=10$.[\[tab:bax\]]{#tab:bax label="tab:bax"}
We solved the CME on the time interval $[0,145]$ with rank $r=5$ on the truncated state space with truncation indices $\eta=(0,0,0,0,0,0,0,0,0,0,0)$ and $\zeta=(46,16,16,11,11,11,4,4,4,56,56)$. Note that the purpose of this numerical example was to discover possible limitations of our approach; in order to reach the equilibrium one would have to consider a substantially longer interval of approximately $[0,20\,000]$. The reaction network was partitioned into $\mathcal{P}_{1}=\{S_{1},S_{2},S_{3},S_{4},S_{5}\}$ and $\mathcal{P}_{2}=\{S_{6},S_{7},S_{8},S_{9},S_{10},S_{11}\}$, therefore the two partitions have $n_{1}=46\cdot16^{2}\cdot11^{2}=1\,424\,896$ and $n_{2}=11\cdot4^{3}\cdot56^{2}=2\,207\,744$ degrees of freedom. Thus the total number of degrees of freedom is reduced from $n_{1}\cdot n_{2}=3.15\cdot10^{12}$ to $(n_{1}+n_{2})\cdot r+r^{2}=18\,163\,225$, which is a reduction by a factor of approximately $1.7\cdot10^{5}$.
We consider the following initial distribution $$P(0,x)=\gamma\cdot\exp\left(-\frac{1}{2}(x-\mu)^{T}C^{-1}(x-\mu)\right),$$ with $C=0.2$, $\mu=(40,0,0,0,0,0,0,0,0,50,0)$ and $\gamma$ was determined by the condition that $\sum_{x\in\Omega^{\zeta,\eta}}P(0,x)=1$. We performed the computations for the DLR approximation with the second-order integrator with $100$ substeps and using a variable time step size. This time step size was adjusted according to the maximal reaction rate obtained by solving the rate equations deterministically (which is very cheap); the minimal time step size is $\tau=1.0$. Due to the large system size, solving the full CME on the truncated state space was clearly not possible (we would need approximately $50$ TB of main memory). For comparison we thus consider SSA simulations with *StochKit2*. The total run time of the DLR approximation and SSA computations is listed in table [6](#tab:bax_runtime){reference-type="ref" reference="tab:bax_runtime"}.
run time \[s\]
--------------------------- ------------------
DLR approx. ($r=5$) $1.3\cdot10^{5}$
SSA ($10\,000$ runs) $84$
SSA ($100\,000$ runs) $129$
SSA ($1\,000\,000$ runs) $358$
SSA ($10\,000\,000$ runs) $2898$
: Overview of the approximate run times in seconds for the DLR approximation and SSA for the BAX pore assembly system. The DLR approximation was computed with the second-order integrator using a variable time step size and $100$ substeps. All computations were performed on a workstation with a $2.9$ GHz Intel Core i5 Comet Lake (10400F) processor. The results of the DLR approximation were computed with six threads.[\[tab:bax_runtime\]]{#tab:bax_runtime label="tab:bax_runtime"}
Figure [\[fig:bax1\]](#fig:bax1){reference-type="ref" reference="fig:bax1"} shows the partially evaluated probability distribution $P_{\mathrm{S}}(x_{1})=P(x_{1},x_{2}=9,x_{3}=2,x_{4}=1,x_{5}=0,x_{6}=0,x_{7}=0,x_{8}=0,x_{9}=0,x_{10}=50,x_{11}=0)$ at time $t=145$ computed with the DLR approximation using rank $r=5$ and with SSA using $10\,000$, $100\,000$, $1\,000\,000$ and $10\,000\,000$ runs. The results of both methods are in good agreement, which demonstrates that in principle even such large problems can be solved with our implementation of the DLR approximation. Even though we use a large number of samples, SSA again has difficulty resolving the tail of the distribution. Although we have no exact solution and thus cannot confirm this with certainty, the tail of the dynamical low-rank approximation follows a power law that looks correct.
In figure [\[fig:bax2\]](#fig:bax2){reference-type="ref" reference="fig:bax2"} the partially evaluated two-dimensional probability distribution $P_{\mathrm{S}}(x_{1},x_{2})=P(x_{1},x_{2},x_{3}=2,x_{4}=1,x_{5}=0,x_{6}=2,x_{7}=0,x_{8}=0,x_{9}=0,x_{10}=50,x_{11}=0)$ is shown for the same setup. For coloring a logarithmic mapping was employed; white areas indicate very small negative (and therefore unphysical) results for the DLR approximation and zero events in the case of SSA. Again, both methods yield similar results for large probability values, but it can be clearly seen that the DLR approximation captures areas of low probability which are not present in the SSA results.
Note that in terms of run time, SSA currently beats the DLR approximation for this large problem. However, when extending the DLR approach to a hierarchical scheme (where we divide the reaction network into more than two partitions) where subproblems have a similar size as the lambda phage problem, we expect that the run time would be comparable to the one for the lambda phage example with the additional benefit that the solutions are noise-free. We consider this the subject of future work.
# Conclusion and outlook[\[sec:Outlook\]]{#sec:Outlook label="sec:Outlook"}
The present work shows that using dynamical low-rank approximations can result in an algorithm that drastically reduces the memory and computational effort that is required in order to solve the chemical master equation. The proposed approach can even outperform SSA by a significant margin. It is further interesting to note that the DLR approach directly provides a low-storage approximation of the probability distribution function (which in SSA has to be reconstructed from the samples collected as a post-processing step).
The present work considers dividing the problem into two partitions. However, for large problems this is not sufficient in order to reduce the memory requirement and computational time to an acceptable level (and thus to outperform SSA). Thus, as future work, we will consider the techniques in [@Lubich_2013; @Einkemmer_2018; @Ceruti_2020; @Ceruti_2022b] in order to extend the proposed method to a hierarchical division into multiple partitions.
One significant advantage of the dynamical low-rank approach considered here is that it lends itself very well to implicit methods (compared to, e.g., a step-truncation low-rank approach as considered in [@Allmann-Rahn_2022; @Cai_2017; @Guo_2022; @Kormann_2015]). This is a significant advantage for solving the CME as reaction networks often include reactions with widely disparate time scales, thus making the resulting equations stiff. We note that this is also an issue for SSA (see, e.g., [@Harris_2006]).
[^1]: Department of Mathematics, Universität Innsbruck, Innsbruck, Tyrol, Austria
[^2]: lukas.einkemmer\@uibk.ac.at
[^3]: Department of Biochemistry, Universität Innsbruck, Innsbruck, Tyrol, Austria
---
abstract: |
In this paper, we introduce a bivariate tempered space-fractional Poisson process (BTSFPP) by time-changing the bivariate Poisson process with an independent tempered $\alpha$-stable subordinator. We study its distributional properties and its connection to differential equations. The Lévy measure for the BTSFPP is also derived. A bivariate competing risks and shock model based on the BTSFPP for predicting the failure times of items that undergo two types of random shocks is also explored. The system is supposed to break when the total number of shocks of the two types reaches a certain random threshold. Various results related to reliability, such as the reliability function, hazard rates, failure density, and the probability that the failure occurs due to a certain type of shock, are studied. We show that for a general Lévy subordinator, the failure time of the system is exponentially distributed, with mean depending on the Laplace exponent of the Lévy subordinator, when the threshold has a geometric distribution. Some special cases and several typical examples are also demonstrated.
address:
- "*Department of Mathematics and Statistics, Central University of Punjab, Bathinda, Punjab -151401, India.*"
- "*Department of Mathematics and Statistics, Central University of Punjab, Bathinda, Punjab -151401, India.*"
- "*Dipartimento di Matematica, Università degli Studi di Salerno, I-84084 Fisciano (SA), Italy.*"
- "*Dipartimento di Matematica, Università degli Studi di Salerno, I-84084 Fisciano (SA), Italy.*"
author:
- Ritik Soni
- Ashok Kumar pathak
- Antonio Di Crescenzo
- Alessandra Meoli
title: Bivariate Tempered Space-Fractional Poisson Process and Shock Models
---
# Introduction
The Poisson process is one of the most widely used counting processes, with nice mathematical properties and applications in diverse disciplines of applied sciences, such as insurance, economics, biology, queuing theory, reliability, and statistical physics (see [@Byrne; @Golding; @Jung; @Stanislavsky; @Wang]). In recent years, the construction and generalization of counting processes via subordination techniques have received a considerable amount of interest from both theoretical and applied viewpoints. Orsingher and Polito [@Orsingher] introduced a space-fractional version of the Poisson process by subordinating the homogeneous Poisson process (HPP) with an independent $\alpha$-stable subordinator. Meerschaert et al. [@Meerschaert] studied the Poisson process time-changed by an inverse stable subordinator and established its connection with the fractional Poisson process. Orsingher and Toaldo [@Orsingher1] proposed a unified approach by time-changing the HPP with an independent general Lévy subordinator. For more recent developments in this direction, one may refer to [@Gajda; @Kumar2; @Kumar1; @Maheshwari1; @Maheshwari; @Soni] and references therein.
Apart from univariate counting processes, researchers have explored multivariate versions of counting processes in recent years for effectively analyzing complex real-world phenomena arising in daily life. Beghin [@Beghin] defined a multivariate fractional Poisson counting process by considering a common random time-change of a finite-dimensional independent Poisson process. Leonenko and Merzbach [@Leonenko] considered a multi-parameter fractional Poisson process using inverse subordinators and Mittag-Leffler functions and studied its main characteristics. Di Crescenzo and Longobardi [@Crescenzo1] discussed a bivariate Poisson process with applications in shock models. Recently, Di Crescenzo and Meoli [@Crescenzo] considered a bivariate space-fractional Poisson process and studied competing risks and shock models associated with it. In reliability theory and survival analysis, system failure is discussed primarily using conventional competing risks and shock models. Lehmann [@Lehmann] presented a class of general shock models in which failure arises as a result of a competing cause of trauma-related degradation. Cha and Giorgio [@Cha] developed a new class of bivariate counting processes that possess the marginal regularity property and utilized it in a shock model. For recent developments in this area, one can refer to Cha and Finkelstein [@Cha1], Di Crescenzo and Meoli [@Crescenzo], and Di Crescenzo and Pellerey [@Crescenzo2].
In this paper, we introduce a bivariate tempered space-fractional Poisson process (BTSFPP) by time-changing the bivariate Poisson process with an independent tempered $\alpha$-stable subordinator (TSS) and study its important characteristics. We derive its Lévy measure and the governing differential equations of the probability mass function (pmf) and probability generating function (pgf). We also propose a shock model for predicting the failure time of items subject to two types of external random shocks arriving according to the BTSFPP. The system is supposed to break when the total number of shocks of the two types reaches a random threshold. Results related to reliability, such as the reliability function, hazard rates, failure density, and the probability that the failure occurs due to a certain type of shock, are studied. Several typical examples based on different random threshold distributions are also presented. Later on, for a general Lévy subordinator, we show that the failure time of the system is exponentially distributed, with mean depending on the Laplace exponent of the Lévy subordinator, when the threshold is geometrically distributed. Graphs of the survival function for different values of the tempering parameter $\theta$ and stability index $\alpha$ are shown.
The structure of the paper is as follows: In Section 2, we present some preliminary notations and definitions. In Section 3, we introduce the bivariate tempered space-fractional Poisson process (BTSFPP) and discuss its connection to differential equations. A bivariate shock system governed by the BTSFPP and some results related to reliability of the failure time of the system are provided in Section 4. Also, we present a bivariate Poisson time-changed shock model when the underlying process is governed by an independent general Lévy subordinator. Finally, some concluding remarks are discussed in the last section.
# Preliminaries
In this section, some notation and results are given which will be used in the subsequent sections. Let $\mathbb{N}$ denote the set of natural numbers and $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$. Let $\mathbb{R}$ and $\mathbb{C}$ denote the sets of real numbers and complex numbers, respectively.
## Generalized Wright Function
The generalized Wright function is defined by (see [@Kilbas]) $$\label{gwf11}
_p\psi_q \left[z\; \vline \;\begin{matrix}
\left(\alpha_i, \beta_i\right)_{1,p}\\
(a_j,b_j)_{1,q}
\end{matrix} \right] = \sum_{k=0}^{\infty} \frac{z^k}{k!} \frac{\prod_{i=1}^{p} \Gamma(\alpha_i + \beta_i k)}{\prod_{j=1}^{q}\Gamma(a_j + b_j k)},\;\; z, \alpha_i, a_i \in \mathbb{C}\; \text{and}\; \beta_i, b_i \in \mathbb{R},$$ under the convergence condition $$\sum_{j=1}^{q} b_j - \sum_{i=1}^{p} \beta_i >-1.$$
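For numerical experiments, the series ([\[gwf11\]](#gwf11){reference-type="ref" reference="gwf11"}) can simply be truncated. The sketch below (an illustrative truncation, not a library routine) evaluates the case $p=q=1$ used throughout this paper; the reciprocal gamma function takes care of poles of the gamma function in the denominator.

``` python
import numpy as np
from scipy.special import gamma, rgamma   # rgamma(z) = 1/Gamma(z), equal to zero at the poles

def wright_1psi1(z, alpha, beta, a, b, terms=80):
    """Truncated series for 1psi1 with numerator pair (alpha, beta) and denominator pair (a, b)."""
    k = np.arange(terms)
    return np.sum(z**k / gamma(k + 1.0) * gamma(alpha + beta * k) * rgamma(a + b * k))
```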
## Lévy Subordinator
A Lévy subordinator denoted by $\{S (t)\}_{t \geq 0}$ is a non-decreasing Lévy process with Laplace transform (see [@Applebaum]) $$\mathbb{E}\left(e^{-uS(t)}\right) = e^{-t \psi(u)},\;\; u \geq 0,$$ where $\psi(u)$ is the Laplace exponent given by $$\psi(u) = \eta u +\int_{0}^{\infty} (1-e^{-ux}) \nu (dx), \;\; \eta \geq 0.$$ Here $\eta$ is the drift coefficient and $\nu$ is a non-negative Lévy measure on the positive half-line satisfying $$\int_{0}^{\infty} \min \{x,1\} \nu (dx) < \infty, \;\;\; \text{ and } \;\;\; \nu ([0,\infty)) =\infty,$$ so that $\{S (t)\}_{t \geq 0}$ has strictly increasing sample paths almost surely (a.s.).\
### **Tempered $\alpha$-Stable Subordinator**
For $\alpha \in (0,1)$ and $\theta >0$, the tempered $\alpha$-stable subordinator $\{S^{\alpha, \theta}(t)\}_{t\geq 0}$ is defined by the Laplace transform (see [@Kumar]) $$\label{tss11}
\mathbb{E}[e^{-u S^{\alpha, \theta}(t) }] = e^{\displaystyle -t\left((u+\theta)^\alpha - \theta^\alpha \right)},$$ with Laplace exponent $\psi(u) = (u+\theta)^\alpha - \theta^\alpha$.\
Further, the Lévy measure associated with $\psi$ is (see [@Gupta2]) $$\label{lm111}
\nu(s) = \frac{\alpha}{\Gamma(1-\alpha)}\frac{e^{-\theta s}}{s^{\alpha+1}}, \;\; s >0.$$ Let $f_{S^{\alpha, \theta}(t)}(x,t)$ denote the probability density function (pdf) of the TSS. By the independence and stationarity of the increments of the Lévy subordinator, the joint density, for $0\leq t_1 < t_2$, is given by $$\label{li1}
f_{S^{\alpha, \theta}(t_1),S^{\alpha, \theta}(t_2) }(x_1, t_1; x_2,t_2)dx_1 dx_2 = f_{S^{\alpha, \theta}(t_2-t_1) }(x_2-x_1, t_2-t_1) f_{S^{\alpha, \theta}(t_1) }(x_1, t_1)dx_1 dx_2.$$
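The consistency between the Lévy measure ([\[lm111\]](#lm111){reference-type="ref" reference="lm111"}) and the Laplace exponent can be verified numerically, since here $\psi(u)=\int_0^\infty(1-e^{-us})\,\nu(ds)$ with no drift part. The following sketch (with arbitrary illustrative parameter values) compares a quadrature of this integral with the closed form $(u+\theta)^\alpha-\theta^\alpha$; the integral is split at $s=1$ because of the integrable singularity at the origin.

``` python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, theta, u = 0.6, 1.0, 2.0                     # illustrative values
nu = lambda s: alpha / gamma(1.0 - alpha) * np.exp(-theta * s) * s**(-alpha - 1.0)
integrand = lambda s: (1.0 - np.exp(-u * s)) * nu(s)

val = quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]
print(val, (u + theta)**alpha - theta**alpha)       # the two numbers should agree
```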
## Tempered Space-Fractional Poisson Process
Let $\{\mathcal{N}(t,\lambda)\}_{t \geq 0}$ be the homogeneous Poisson process with parameter $\lambda >0$. The tempered space-fractional Poisson process (TSFPP) denoted by $\{\mathcal{N}^{\alpha, \theta}(t,\lambda)\}_{t \geq 0}$ is defined by time-changing the homogeneous Poisson process with an independent TSS as (see [@Gupta]) $$\mathcal{N}^{\alpha, \theta}(t,\lambda) := \mathcal{N}(S^{\alpha, \theta}(t), \lambda).$$ Its pmf $p^{\alpha, \theta}(k,t)$ is given by (see [@Gupta1]) $$p^{\alpha, \theta}(k,t) = \frac{(-1)^k}{k!} e^{t\theta^{\alpha}}\sum_{i=0}^{\infty} \frac{\theta^i}{\lambda^i i!} \; _1\psi_1 \left[-\lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-k-i, \alpha)
\end{matrix} \right].$$
## Backward Shift Operators
Let $B$ be the backward shift operator defined by $B[\xi(k)] = \xi(k-1)$. For the fractional difference operator $(I-B)^\alpha$, we have (see [@Orsingher]) $$\label{do1}
(I-B)^\alpha = \sum_{i=0}^{\infty} \binom{\alpha}{i}(-1)^i B^i, \;\;\alpha \in (0,1),$$ where $I$ is the identity operator.\
Furthermore, let $\{B_i\}$, $i\in \{1,2,\dots, m\}$ be the operators defined as $$\label{do2}
B_i[\xi(k_1, k_2, \dots, k_m)] = \xi(k_1, k_2, \dots, k_{i}-1,\dots, k_m).$$ For the case $m=1$, the operator $B_1$ acts in the same way as the operator $B$.
# Bivariate Tempered Space-Fractional Poisson Process
Let $\{\mathcal{N}_i(t, \lambda_i)\}_{t\geq 0}, i=1,2$ be two independent homogeneous Poisson processes with parameter $\lambda_i, i=1,2$, respectively. Then, for $\alpha \in (0,1)$, we define the BTSFPP $\{\mathcal{Q}^{\alpha, \theta}(t)\}_{t\geq 0}$ as $$\label{bp1}
\mathcal{Q}^{\alpha, \theta}(t) := \left(\mathcal{N}_1(S^{\alpha, \theta}(t), \lambda_1), \mathcal{N}_2(S^{\alpha, \theta}(t), \lambda_2)\right) :=\left( \mathcal{N}_1^{\alpha, \theta}(t,\lambda_1), \mathcal{N}_2^{\alpha, \theta}(t,\lambda_2)\right).$$
Throughout the paper, we work with the bivariate process. Here, we denote any arbitrary bivariate vector of constants by $\textbf{a}=(a_1, a_2)$, where $a_1$, $a_2$ are nonnegative integers. Let $\textbf{b}=(b_1, b_2)$ and let $\textbf{0} = (0,0)$ denote the null vector. We write $\textbf{a} \geq \textbf{b}$ (or $\textbf{a} \leq \textbf{b}$) to mean that $a_i\geq b_i$ (or $a_i\leq b_i$) for $i=1,2$. Further, we denote $\textbf{k}=(k_1, k_2)$ and $\textbf{r}=(r_1, r_2)$.
Next, we derive the pmf, pgf, and associated differential equations for the BTSFPP.
**Proposition 1**. *For $\alpha \in (0,1)$ and $\textbf{k} \geq \textbf{0}$, the pmf $q^{\alpha, \theta}(\textbf{k},t) =\mathbb{P}\{\mathcal{Q}^{\alpha, \theta}(t) =\textbf{k}\}$ is given by $$\label{pmf2}
q^{\alpha, \theta}(\textbf{k},t) = \left(-\frac{1}{\lambda_1 +\lambda_2}\right)^{k_1+k_2} \frac{\lambda_1^{k_1}\lambda_2^{k_2}}{k_1! k_2!}e^{t \theta ^\alpha}\sum_{i=0}^{\infty} \frac{\theta^i}{i! (\lambda_1 +\lambda_2)^i} \; _1\psi_1 \left[-(\lambda_1+\lambda_2)^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-(k_1+k_2)-i, \alpha)
\end{matrix} \right].$$*
*Proof.* Firstly, we have $$\begin{aligned}
\label{pmf1}
q^{\alpha, \theta}(\textbf{k},t) =&\; \mathbb{P}\left(\{\mathcal{Q}^{\alpha, \theta}(t) =\textbf{k}\} \cap \left\{\mathcal{N}_1^{\alpha, \theta}(t,\lambda_1) + \mathcal{N}_2^{\alpha, \theta}(t,\lambda_2) = k_1+k_2\right\}\right) \nonumber \\
=&\; \mathbb{P}\left(\mathcal{Q}^{\alpha, \theta}(t) =\textbf{k}\; \vline \; \left\{\mathcal{N}_1^{\alpha, \theta}(t,\lambda_1) + \mathcal{N}_2^{\alpha, \theta}(t,\lambda_2) = k_1+k_2\right\}\right) \nonumber\\
&\;\;\;\;\times \mathbb{P} \left(\mathcal{N}_1^{\alpha, \theta}(t,\lambda_1) + \mathcal{N}_2^{\alpha, \theta}(t,\lambda_2) = k_1+k_2\right).
\end{aligned}$$
Using the conditioning argument, we get $$\begin{aligned}
\mathbb{P}\left(\mathcal{Q}^{\alpha, \theta}(t) =\textbf{k}\; \vline \; \left\{\mathcal{N}_1^{\alpha, \theta}(t,\lambda_1) + \mathcal{N}_2^{\alpha, \theta}(t,\lambda_2) = k_1+k_2\right\}\right) = \frac{(k_1+k_2)!}{k_1! k_2!}\frac{\lambda_1^{k_1} \lambda_2^{k_2}}{(\lambda_1+\lambda_2)^{k_1+k_2}}.
\end{aligned}$$ Now, we calculate\
$\mathbb{P} \left(\mathcal{N}_1^{\alpha, \theta}(t,\lambda_1) + \mathcal{N}_2^{\alpha, \theta}(t,\lambda_2) = k_1+k_2\right)$ $$\begin{aligned}
&= \mathbb{E}\left[\mathbb{P}\left(\mathcal{N}_1(r,\lambda_1) + \mathcal{N}_2(r,\lambda_2) = k_1 + k_2\right) {\Big|}_{ r = S^{\alpha, \theta}(t)} \right]\\
&= \mathbb{E}\left[\frac{((\lambda_1+\lambda_2)r )^{k_1+k_2}}{(k_1+k_2)!}e^{-r(\lambda_1+\lambda_2)}{\Big|}_{ r = S^{\alpha, \theta}(t)} \right]\\
&= \frac{(-1)^{k_1+k_2}}{(k_1+k_2)!} e^{t\theta^{\alpha}}\sum_{i=0}^{\infty} \frac{\theta^i}{(\lambda_1+\lambda_2)^i i!} \; _1\psi_1 \left[-(\lambda_1+\lambda_2)^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-(k_1+k_2)-i, \alpha)
\end{matrix} \right].
\end{aligned}$$ With the help of ([\[pmf1\]](#pmf1){reference-type="ref" reference="pmf1"}), we get the pmf. ◻
****Remark** 1**. *When $\theta =0,$ Equation ([\[pmf2\]](#pmf2){reference-type="ref" reference="pmf2"}) reduces to the pmf of bivariate space-fractional Poisson process studied in [@Crescenzo].*
****Theorem** 1**. *For $\textbf{u} = (u_1,u_2) \in [0,1]^2$, the pgf $G^{\alpha, \theta}(\textbf{u};t)$ for the BTSFPP is given by $$G^{\alpha, \theta}(\textbf{u};t) = e^{\displaystyle
-t(\left[\lambda_1(1-u_1)+ \lambda_2(1-u_2) + \theta\right]^{\alpha} - \theta^\alpha)},$$ and it satisfies the following differential equation $$\label{de23}
\frac{d}{dt} G^{\alpha, \theta}(\textbf{u};t) = -\left(\left[\lambda_1(1-u_1)+ \lambda_2(1-u_2) + \theta\right]^{\alpha} - \theta^\alpha\right ) G^{\alpha, \theta}(\textbf{u};t), \;\; G^{\alpha, \theta}(\textbf{u};0) =1.$$*
*Proof.* For $\lambda >0$, the pgf for the TSFPP is given by (see [@Gupta1]) $$\begin{aligned}
\mathbb{E}\left[u^{ \mathcal{N}^{\alpha, \theta}(t,\lambda)}\right]
&= \mathbb{E}\left[\mathbb{E}[u^{\mathcal{N}(S^{\alpha, \theta}(t), \lambda)}|S^{\alpha, \theta}(t)]\right]\\
&= \mathbb{E}\left[e^{-\lambda (1-u)S^{\alpha, \theta}(t)}\right]\\
&= e^{\displaystyle -t((\lambda(1-u)+\theta)^\alpha - \theta^\alpha)}.
\end{aligned}$$ We define the pgf as $$G^{\alpha, \theta}(\textbf{u};t) = \mathbb{E}\left[\textbf{u}^{\mathcal{Q}^{\alpha, \theta}(t)}\right] = \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k},t).$$ Hence, we get $$\begin{aligned}
\label{pgf1123}
G^{\alpha, \theta}(\textbf{u};t) =& \mathbb{E}\left[\mathbb{E}[\textbf{u}^{\mathcal{Q}^{\alpha, \theta}(t)}\;|\;S^{\alpha, \theta}(t)]\right]\nonumber\\=& \mathbb{E}\left[e^{(\lambda_1 (u_1-1)+\lambda_2(u_2-1))S^{\alpha, \theta}(t)}\right] \nonumber\\=& e^{\displaystyle
-t(\left[\lambda_1(1-u_1)+ \lambda_2(1-u_2) + \theta\right]^{\alpha} - \theta^\alpha)}.
\end{aligned}$$ Differentiating the last expression with respect to $t$ yields ([\[de23\]](#de23){reference-type="ref" reference="de23"}), and the initial condition clearly holds for $t=0$. ◻
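Since the pgf is available in closed form and is analytic on the closed unit polydisc (the base of the fractional power has positive real part for $\theta>0$), the joint pmf can also be recovered numerically by sampling the pgf on the torus and applying a two-dimensional FFT. The following sketch (with arbitrary illustrative parameter values) provides a convenient cross-check of the series representation of Proposition 1.

``` python
import numpy as np

def btsfpp_pmf(t, lam1, lam2, alpha, theta, N=64):
    """Joint pmf of the BTSFPP via FFT inversion of its probability generating function."""
    w = np.exp(2j * np.pi * np.arange(N) / N)        # N-th roots of unity
    U1, U2 = np.meshgrid(w, w, indexing="ij")
    z = lam1 * (1.0 - U1) + lam2 * (1.0 - U2) + theta
    G = np.exp(-t * (z**alpha - theta**alpha))       # pgf evaluated on the torus
    return (np.fft.fft2(G) / N**2).real              # entry [k1, k2] approximates q(k1, k2, t)

# q = btsfpp_pmf(t=1.0, lam1=1.2, lam2=0.8, alpha=0.7, theta=0.5)
# q.sum() equals 1 up to rounding, since the pgf at u1 = u2 = 1 equals 1
```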
****Theorem** 2**. *The pmf in ([\[pmf2\]](#pmf2){reference-type="ref" reference="pmf2"}) satisfies the following differential equation $$\frac{d}{dt} q^{\alpha, \theta}(\textbf{k},t) = -(\lambda_1+\lambda_2)^\alpha \left( \left(I- \frac{\lambda_1 B_1+ \lambda_2 B_2 + \theta}{\lambda_1+\lambda_2}\right)^\alpha -\left( \frac{\theta}{\lambda_1+\lambda_2}\right)^\alpha\right)q^{\alpha, \theta}(\textbf{k},t), \;\; q^{\alpha, \theta}(\textbf{0},t) = 1.$$*
*Proof.* From ([\[de23\]](#de23){reference-type="ref" reference="de23"}), we have $$\begin{aligned}
\label{deq11}
\frac{d}{dt} G^{\alpha, \theta}(\textbf{u};t) = -(\lambda_1+\lambda_2)^\alpha \left(\left(1- \frac{\lambda_1 u_1+ \lambda_2 u_2 + \theta}{\lambda_1+\lambda_2}\right)^\alpha -\left( \frac{\theta}{\lambda_1+\lambda_2}\right)^\alpha\right) G^{\alpha, \theta}(\textbf{u};t).
\end{aligned}$$ Now, we concentrate our attention to simplify the following $$\begin{aligned}
\left(1- \frac{\lambda_1 u_1+ \lambda_2 u_2 + \theta}{\lambda_1+\lambda_2}\right)^\alpha &= \left(1- \frac{ \theta}{\lambda_1+\lambda_2}- \frac{\lambda_1 u_1+ \lambda_2 u_2 }{\lambda_1+\lambda_2}\right)^\alpha \\
&= \sum_{j \geq 0}^{} \binom{\alpha}{j} \left(1- \frac{ \theta}{\lambda_1+\lambda_2}\right)^{\alpha-j}(-1)^j \left(\frac{\lambda_1 u_1+ \lambda_2 u_2 }{\lambda_1+\lambda_2}\right)^j\\
&= \sum_{j \geq 0}^{} \binom{\alpha}{j} \left(1- \frac{ \theta}{\lambda_1+\lambda_2}\right)^{\alpha-j}\frac{(-1)^j}{(\lambda_1+\lambda_2)^j} \sum_{\substack{\textbf{r} \geq \textbf{0}\\ r_1+r_2 =j}}^{} \frac{j!}{r_1! r_2!} \lambda_1^{r_1}\lambda_2^{r_2} u_1^{r_1}u_2^{r_2}.
\end{aligned}$$ Therefore, from ([\[deq11\]](#deq11){reference-type="ref" reference="deq11"}) we get\
$\displaystyle \frac{d}{dt} G^{\alpha, \theta}(\textbf{u};t)$ $$\begin{aligned}
&= -(\lambda_1+\lambda_2)^\alpha \left(\left(1- \frac{\lambda_1 u_1+ \lambda_2 u_2 + \theta}{\lambda_1+\lambda_2}\right)^\alpha \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k},t) -\left( \frac{\theta}{\lambda_1+\lambda_2}\right)^\alpha \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k},t)\right)\\
&= -(\lambda_1+\lambda_2)^\alpha \sum_{j \geq 0}^{} \binom{\alpha}{j} \left(1- \frac{ \theta}{\lambda_1+\lambda_2}\right)^{\alpha-j}\frac{(-1)^j}{(\lambda_1+\lambda_2)^j} \sum_{\substack{\textbf{r} \geq \textbf{0}\\ r_1+r_2 =j}}^{} \frac{j!}{r_1! r_2!} \lambda_1^{r_1}\lambda_2^{r_2} \\ &\times \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1+r_1} u_2^{k_2+r_2} q^{\alpha, \theta}(k,t) +(\lambda_1+\lambda_2)^\alpha \left( \frac{\theta}{\lambda_1+\lambda_2}\right)^\alpha \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k},t) \\
&= -(\lambda_1+\lambda_2)^\alpha \sum_{j \geq 0}^{} \binom{\alpha}{j} \left(1- \frac{ \theta}{\lambda_1+\lambda_2}\right)^{\alpha-j}\frac{(-1)^j}{(\lambda_1+\lambda_2)^j} \sum_{\substack{\textbf{r} \geq \textbf{0}\\ r_1+r_2 =j}}^{} \frac{j!}{r_1! r_2!} \lambda_1^{r_1}\lambda_2^{r_2} \\ &\times \sum_{\textbf{k} \geq \textbf{r}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k}-\textbf{r},t) +(\lambda_1+\lambda_2)^\alpha \left( \frac{\theta}{\lambda_1+\lambda_2}\right)^\alpha \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k},t)\\
&= -(\lambda_1+\lambda_2)^\alpha \sum_{\textbf{k} \geq \textbf{r}}^{} u_1^{k_1} u_2^{k_2} \sum_{j \geq 0}^{} \binom{\alpha}{j} \left(1- \frac{ \theta}{\lambda_1+\lambda_2}\right)^{\alpha-j}\frac{(-1)^j}{(\lambda_1+\lambda_2)^j} \\ &\times \sum_{\substack{\textbf{r} \geq \textbf{0}\\ r_1+r_2 =j}}^{} \frac{j!}{r_1! r_2!} \lambda_1^{r_1}\lambda_2^{r_2} q^{\alpha, \theta}(\textbf{k}-\textbf{r},t) + (\lambda_1+\lambda_2)^\alpha \left( \frac{\theta}{\lambda_1+\lambda_2}\right)^\alpha \sum_{\textbf{k} \geq \textbf{0}}^{} u_1^{k_1} u_2^{k_2} q^{\alpha, \theta}(\textbf{k},t).
\end{aligned}$$ Since, $$\label{bso1}
\sum_{\substack{\textbf{r} \geq \textbf{0}\\ r_1+r_2 =j}}^{} \frac{j!}{r_1! r_2!} \lambda_1^{r_1}\lambda_2^{r_2} q^{\alpha, \theta}(\textbf{k}-\textbf{r},t) = (\lambda_1 B_1+ \lambda_2 B_2)^j q^{\alpha, \theta}(\textbf{k},t),$$ with the help of ([\[bso1\]](#bso1){reference-type="ref" reference="bso1"}), we can obtain the desired differential equation. ◻
Next, we derive the Lévy measure for the BTSFPP.
****Theorem** 3**. *The discrete Lévy measure $\mathcal{V}_{\alpha, \theta}$ for the BTSFPP is given by $$\label{lm11}
\mathcal{V}_{\alpha, \theta} (\cdot) = \sum_{\substack{k_1,k_2 \geq 0\\ k_1+k_2\geq 1}} \frac{\lambda_1^{k_1} \lambda_2^{k_2}}{k_1! k_2!}\frac{\alpha \Gamma (k_1+k_2-\alpha)}{\Gamma(1-\alpha)} \delta_{\{\textbf{k}\}}(\cdot) (\theta +\lambda_1 +\lambda_2)^{\alpha -k_1-k_2},$$ where $\delta_{\{\textbf{k}\}}(\cdot)$ is the Dirac measure concentrated at $\textbf{k}$.*
*Proof.* The pmf for the bivariate Poisson process $\mathcal{N}(t) = (\mathcal{N}_1(t, \lambda_1), \mathcal{N}_2(t, \lambda_2))$ is (see [@Beghin]) $$\mathbb{P}\{\mathcal{N}_1(t, \lambda_1)= k_1, \mathcal{N}_2(t, \lambda_2)= k_2 \} = \frac{\lambda_1^{k_1} \lambda_2^{k_2}}{k_1! k_2!}t^{k_1+k_2} e^{-(\lambda_{1}+\lambda_{2})t}.$$ Using ([\[lm111\]](#lm111){reference-type="ref" reference="lm111"}) and applying the formula from [@Ken p. 197] to calculate the Lévy measure, we get $$\begin{aligned}
\mathcal{V}_{\alpha, \theta} (\cdot)
&= \int_{0}^{\infty} \sum_{\substack{k_1,k_2 \geq 0\\ k_1+k_2\geq 1}} \mathbb{P}\{\mathcal{N}_1(s, \lambda_1)= k_1, \mathcal{N}_2(s, \lambda_2)= k_2 \}\; \delta_{\{\textbf{k}\}}(\cdot)\;\nu(s)\;ds\\
&= \sum_{\substack{k_1,k_2 \geq 0\\ k_1+k_2\geq 1}} \frac{\lambda_1^{k_1} \lambda_2^{k_2}}{k_1! k_2!} \delta_{\{\textbf{k}\}}(\cdot) \frac{\alpha}{\Gamma(1-\alpha)} \int_{0}^{\infty} e^{\displaystyle-s(\theta+\lambda_1+\lambda_2)}s^{k_1+k_2-\alpha-1}\;ds.
\end{aligned}$$ Using the gamma integral $\int_{0}^{\infty} s^{\nu-1}e^{-\mu s}\,ds = \Gamma(\nu)\,\mu^{-\nu}$, $\mu,\nu>0$ (see [@Gradshteyn]), we simplify as $$\mathcal{V}_{\alpha, \theta} (\cdot) = \sum_{\substack{k_1,k_2 \geq 0\\ k_1+k_2\geq 1}} \frac{\lambda_1^{k_1} \lambda_2^{k_2}}{k_1! k_2!}\frac{\alpha \, \Gamma(k_1+k_2-\alpha)}{\Gamma(1-\alpha)} \delta_{\{\textbf{k}\}}(\cdot) (\theta +\lambda_1 +\lambda_2)^{\alpha -k_1-k_2}.$$ Hence, the theorem is proved. ◻
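As a quick numerical sanity check (with arbitrary illustrative parameter values), note that the total mass of the Lévy measure must equal $\psi(\lambda_1+\lambda_2)=(\lambda_1+\lambda_2+\theta)^\alpha-\theta^\alpha$, because summing the bivariate Poisson probabilities over all $(k_1,k_2)$ with $k_1+k_2\geq 1$ gives $1-e^{-(\lambda_1+\lambda_2)s}$. A truncated evaluation of the double sum confirms this.

``` python
from math import factorial, gamma

def levy_mass(k1, k2, lam1, lam2, alpha, theta):
    # mass assigned by the BTSFPP Levy measure to the jump (k1, k2)
    h = k1 + k2
    return (lam1**k1 * lam2**k2 / (factorial(k1) * factorial(k2))
            * alpha * gamma(h - alpha) / gamma(1.0 - alpha)
            * (theta + lam1 + lam2)**(alpha - h))

lam1, lam2, alpha, theta = 1.2, 0.8, 0.7, 0.5       # illustrative values
total = sum(levy_mass(k1, k2, lam1, lam2, alpha, theta)
            for k1 in range(40) for k2 in range(40) if k1 + k2 >= 1)
print(total, (lam1 + lam2 + theta)**alpha - theta**alpha)   # nearly equal
```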
With the aim of calculating the hazard rates, we establish the following lemma.
**Lemma 1**. *For $h \in \mathbb{N}_{0}$, we have $$\begin{aligned}
\frac{d^h}{du^h}\left[e^{-t\left((u+\theta)^\alpha-\theta^\alpha\right)}\right] =& \sum_{k=0}^{h}\frac{1}{k!} e^{-t\left((u+\theta)^\alpha-\theta^\alpha\right)} \sum_{j=0}^{k} \binom{k}{j}t^{k}(-1)^j \left(\left((u+\theta)^\alpha-\theta^\alpha\right)\right)^{k-j}\\
&\times \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_h
(u+\theta)^{\alpha i - h}(-\theta^\alpha)^{j-i},
\end{aligned}$$ where $(\alpha i)_h = \alpha i\,(\alpha i-1)\cdots(\alpha i-h+1)$ denotes the falling factorial.*
*Proof.* Let $V(u) = -t\left((u+\theta)^\alpha-\theta^\alpha\right)$ and $W(V(u)) = e^{-t\left((u+\theta)^\alpha-\theta^\alpha\right)}$. Then, applying Hoppe's formula (see [@Johnson]) to the function $W(V(u))$, we get $$\label{fdb1}
\frac{d^h}{du^h} W(V(u)) = \sum_{k=0}^{h}\frac{1}{k!} e^{-t\left((u+\theta)^\alpha-\theta^\alpha\right)}T_{h,k}(V(u)),$$ where $T_{h,k}(V(u))$ is computed as $$\begin{aligned}
T_{h,k}(V(u)) &= \sum_{j=0}^{k} \binom{k}{j}\left(-V(u)\right)^{k-j}\frac{d^h}{du^h}\left(V(u)\right)^j\\
&= \sum_{j=0}^{k} \binom{k}{j}\left(t\left((u+\theta)^\alpha-\theta^\alpha\right)\right)^{k-j}\frac{d^h}{du^h}\left(-t\left((u+\theta)^\alpha-\theta^\alpha\right)\right)^j\\
&= \sum_{j=0}^{k} \binom{k}{j}\left(t\left((u+\theta)^\alpha-\theta^\alpha\right)\right)^{k-j}(-t)^j\sum_{i=0}^{j}\binom{j}{i}(-\theta^\alpha)^{j-i} \frac{d^h}{du^h} (u+\theta)^{\alpha i}\\
&= \sum_{j=0}^{k} \binom{k}{j}t^{k}(-1)^j \left(\left((u+\theta)^\alpha-\theta^\alpha\right)\right)^{k-j} \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_h
(u+\theta)^{\alpha i - h}(-\theta^\alpha)^{j-i}.
\end{aligned}$$ Hence, through ([\[fdb1\]](#fdb1){reference-type="ref" reference="fdb1"}), we proved the lemma. ◻
# Bivariate Shock Models
We design a shock model for a system that is subject to shocks of two types, 1 and 2. Let $T$ be a non-negative absolutely continuous random variable which represents the failure time of the system subject to two possible causes of failure. We write $\zeta=n$ when the failure of the system occurs due to a shock of type $n$, $n=1,2$. We define the process $\{\mathcal{Z}(t)\}_{t \geq 0}$ counting the total number of shocks during the time interval $[0,t]$ as $$\mathcal{Z}(t) = \mathcal{N}_1^{\alpha, \theta}(t, \lambda_1) + \mathcal{N}_2^{\alpha, \theta}(t, \lambda_2),$$ where $\mathcal{N}_1^{\alpha, \theta}(t, \lambda_1)$ and $\mathcal{N}_2^{\alpha, \theta}(t, \lambda_2)$ count the number of shocks of type $1$ and type $2$, respectively, during the time interval $[0,t]$.
We introduce a random threshold $L$ which takes values in the set of natural numbers. The failure occurs at the first time at which $\mathcal{Z}(t) = L$. The probability distribution and the reliability function of $L$ are respectively defined by $$\label{ftpmf1}
q_k = \mathbb{P}(L=k),\;\;\; k \in \mathbb{N},$$ and $$\overline{q}_k = \mathbb{P}(L > k), \;\;\; k \in \mathbb{N}_0.$$ Let $g_{T}(t)$ be the pdf of $T$ defined as $$T = \inf \{t \geq 0: \mathcal{Z}(t) = L\}.$$ Then, we have\
$$g_T(t) = g_1(t)+g_2(t), \;\; t \geq 0,$$ where the sub-densities $g_n(t)$ are defined by $$g_n(t) = \frac{d}{dt}\mathbb{P}\{T \leq t, \zeta =n\}, \;n=1,2.$$ Also, the probability that the failure occurs due to shock of type $n$ is given by $$\label{zeta1}
\mathbb{P}(\zeta =n) = \int_{0}^{\infty} g_n(t)dt, \; n=1,2.$$ Furthermore, in terms of the joint pmf, the hazard rates are given by $$\begin{aligned}
\label{abc}
h_1(k_1, k_2; t) &= \lim_{\tau \rightarrow 0^+} \frac{\mathbb{P}\left\{\mathcal{Q}^{\alpha, \theta}(t+\tau) = (k_1+1, k_2) \;\vline \;\mathcal{Q}^{\alpha, \theta}(t) = (k_1, k_2)\right\} }{\tau},\\
h_2(k_1,k_2; t) &= \lim_{\tau \rightarrow 0^+} \frac{\mathbb{P}\left\{\mathcal{Q}^{\alpha, \theta}(t+\tau) = (k_1, k_2+1) \;\vline \;\mathcal{Q}^{\alpha, \theta}(t) = (k_1, k_2) \right\} }{\tau}, \nonumber\end{aligned}$$ with $(k_1, k_2) \in \mathbb{N}_0^2.$ Hence, conditioning on $L$ and with the help of ([\[ftpmf1\]](#ftpmf1){reference-type="ref" reference="ftpmf1"}), the failure densities take the following form $$\label{fd1}
g_n(t) = \sum_{k=1}^{\infty} q_k \sum_{k_1 + k_2 = k-1}^{}\mathbb{P}\left\{\mathcal{Q}^{\alpha, \theta}(t) = (k_1, k_2)\right\} h_n(k_1, k_2; t),\;\; n=1,2.$$ The reliability function of $T$ denoted by $\overline{R}_T(t) = \mathbb{P}\{T >t\}$ is given by $$\label{rf112}
\overline{R}_T(t) = \sum_{k=0}^{\infty} \overline{q}_k\sum_{k_1 + k_2 = k}^{}\mathbb{P}\left\{\mathcal{Q}^{\alpha, \theta}(t) = (k_1, k_2)\right\},\;\;\;\text{with } \overline{q}_0 =1.$$
**Proposition 2**. *Under the assumptions of the model in ([\[bp1\]](#bp1){reference-type="ref" reference="bp1"}), the hazard rates $h_n(k_1, k_2; t), t \geq 0, \text{ for } \; n=1,2$ are given by $$\begin{aligned}
\label{hrf1}
h_n(k_1, k_2; t) &= \alpha \lambda_n (\Lambda+\theta)^{\alpha-1}e^{-t (\Lambda+\theta)^\alpha}\left(\sum_{l=0}^{\infty} \frac{\theta^l}{\Lambda^l l!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-h-l, \alpha)
\end{matrix} \right] \right)^{-1} \nonumber \\
& \times \sum_{k=0}^{h}\frac{1}{k!} \sum_{j=0}^{k} \binom{k}{j}(-1)^j t^k \left(\left((\Lambda+\theta)^\alpha-\theta^\alpha\right)\right)^{k-j} \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_h
(\Lambda+\theta)^{\alpha i}(-\theta^\alpha)^{j-i},
\end{aligned}$$ where $\Lambda=\lambda_1 +\lambda_2$ and $h = k_1+k_2$.*
*Proof.* We fix $n=1.$ With the help of ([\[li1\]](#li1){reference-type="ref" reference="li1"}) and considering the BTSFPP as bivariate HPP with tempered $\alpha$-stable stopping time, we get $$\begin{aligned}
\label{112}
\mathbb{P}&\left\{\mathcal{Q}^{\alpha, \theta}(\tau) = (k_1+1, k_2), \mathcal{Q}^{\alpha, \theta}(t) = (k_1, k_2) \right\}\\
&= \int_{0}^{\infty} \int_{0}^{y} \mathbb{P}\left\{\mathcal{N}(y) = (k_1+1, k_2), \mathcal{N}(x) = (k_1, k_2)\right\} \\
&\times f_{S^{\alpha, \theta}(\tau-t) }(y-x, \tau-t) f_{S^{\alpha, \theta}(t) }(x, t)\;dy dx \\
& = \int_{0}^{\infty} \int_{0}^{y} \mathbb{P}\left\{\mathcal{N}_1(y-x, \lambda_1) =1, \mathcal{N}_2(y-x, \lambda_2)= 0\right\} \mathbb{P}\left\{\mathcal{N}_1(x, \lambda_1) = k_1, \mathcal{N}_2(x, \lambda_2)= k_2 \right\} \\
&\times f_{S^{\alpha, \theta}(\tau-t) }(y-x, \tau-t) f_{S^{\alpha, \theta}(t) }(x, t)\;dy dx \\
& = \int_{0}^{\infty} \int_{0}^{y} \frac{\lambda_1^{k_1+1 }\lambda_2^{k_2}}{k_1! k_2!} e^{-(\lambda_1+\lambda_2)y} x^{k_1+k_2}(y-x)f_{S^{\alpha, \theta}(\tau-t) }(y-x, \tau-t) f_{S^{\alpha, \theta}(t) }(x, t)\;dy dx.
\end{aligned}$$ By the change of order of integration, we get $$\begin{aligned}
\mathbb{P}&\left\{\mathcal{Q}^{\alpha, \theta}(\tau) = (k_1+1, k_2), \mathcal{Q}^{\alpha, \theta}(t) = (k_1, k_2) \right\}\\
&= \frac{\lambda_1^{k_1+1 }\lambda_2^{k_2}}{k_1! k_2!} \int_{0}^{\infty}\int_{x}^{\infty} x^{h} f_{S^{\alpha, \theta}(t) }(x, t) f_{S^{\alpha, \theta}(\tau-t) }(y-x, \tau-t) e^{-(\lambda_1+\lambda_2)y}(y-x)\;dydx\\
&= \frac{\lambda_1^{k_1+1 }\lambda_2^{k_2}}{k_1! k_2!} \int_{0}^{\infty}e^{-(\lambda_1+\lambda_2)x}x^{h} f_{S^{\alpha, \theta}(t) }(x, t)\; dx\int_{0}^{\infty} ye^{-(\lambda_1+\lambda_2)y}f_{S^{\alpha, \theta}(\tau-t) }(y, \tau-t)\; dy\\
&= \frac{\lambda_1^{k_1+1 }\lambda_2^{k_2}}{k_1! k_2!} (-1)^{h} \frac{d^h}{dx^h} \mathbb{E}\left[e^{-xS^{\alpha, \theta}(t)}\right] {\Big|}_{x =\Lambda }\;\times \;\left( -\frac{d}{dy}\mathbb{E}\left[e^{-yS^{\alpha, \theta}(\tau-t)}\right]\right) {\Big|}_{y= \Lambda}.
\end{aligned}$$ Hence, using the definition of the hazard rates in ([\[abc\]](#abc){reference-type="ref" reference="abc"}), the required form is obtained with the help of ([\[pmf2\]](#pmf2){reference-type="ref" reference="pmf2"}) and Lemma ([Lemma 1](#lm1){reference-type="ref" reference="lm1"}). The case $n=2$ follows along the same lines. ◻
In the next propositions, we derive the failure densities and the reliability function of the system and obtain the probability ([\[zeta1\]](#zeta1){reference-type="ref" reference="zeta1"}) that the failure occurs due to the $n$th type of shock.
**Proposition 3**. *Under the assumptions of the model in ([\[bp1\]](#bp1){reference-type="ref" reference="bp1"}), for $n=1,2$ and $t \geq 0$, we have the failure density of the form $$\begin{aligned}
g_n(t) = \alpha \lambda_n (\Lambda+\theta)^{\alpha-1}e^{-t\left( (\Lambda+\theta)^\alpha -\theta^\alpha \right)} &\sum_{k=1}^{\infty} q_k \frac{(-1)^{k-1}}{(k-1)!} \sum_{l=0}^{k-1}\frac{t^{l}}{l!} \sum_{j=0}^{l} \binom{l}{j}(-1)^j \left(\left((\Lambda+\theta)^\alpha-\theta^\alpha\right)\right)^{l-j}
\\
& \times \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_{k-1}
(\Lambda+\theta)^{\alpha i}(-\theta^\alpha)^{j-i}.
\end{aligned}$$*
*Proof.* On substituting pmf ([\[pmf2\]](#pmf2){reference-type="ref" reference="pmf2"}) and ([\[hrf1\]](#hrf1){reference-type="ref" reference="hrf1"}) to ([\[fd1\]](#fd1){reference-type="ref" reference="fd1"}), we get $$\begin{aligned}
g_n(t) &= \sum_{k=1}^{\infty} q_k \sum_{k_1 + k_2 = k-1}^{} q^{\alpha, \theta}(k,t) \;\alpha \lambda_n (\Lambda+\theta)^{\alpha-1}e^{-t (\Lambda+\theta)^\alpha}\left(\sum_{l=0}^{\infty} \frac{\theta^l}{\Lambda^l l!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-h-l, \alpha)
\end{matrix} \right] \right)^{-1} \nonumber \\
& \times \sum_{l=0}^{k_1+k_2}\frac{1}{l!} \sum_{j=0}^{l} \binom{l}{j}(-1)^j t^l \left(\left((\Lambda+\theta)^\alpha-\theta^\alpha\right)\right)^{l-j} \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_{k_1+k_2}
(\Lambda+\theta)^{\alpha i}(-\theta^\alpha)^{j-i}\\
&= \alpha \lambda_n (\Lambda+\theta)^{\alpha-1}e^{-t\left( (\Lambda+\theta)^\alpha -\theta^\alpha \right)} \sum_{k=1}^{\infty} q_k \frac{(-1)^{k-1}}{(k-1)! \Lambda^{k-1}} \sum_{k_1 =0}^{k-1} \frac{(k-1)! \lambda_{1}^{k_1} \lambda_{2 }^{k-1-k_1}}{k_1! (k-1-k_1)!} \\
& \times \sum_{l=0}^{k-1}\frac{t^l}{l!} \sum_{j=0}^{l} \binom{l}{j}(-1)^j \left(\left((\Lambda+\theta)^\alpha-\theta^\alpha\right)\right)^{l-j} \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_{k-1}
(\Lambda+\theta)^{\alpha i}(-\theta^\alpha)^{j-i}.
\end{aligned}$$ Using the binomial theorem, the failure density is obtained. ◻
**Proposition 4**. *Under the assumptions of the model in ([\[bp1\]](#bp1){reference-type="ref" reference="bp1"}), the reliability function of $T$ is given by $$\label{rf11}
\overline{R}_T(t) = \sum_{k=0}^{\infty} \overline{q}_k \frac{(-1)^k}{k!} \;e^{t\theta^{\alpha}} \sum_{i=0}^{\infty} \frac{\theta^i}{\Lambda^i i!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-k-i, \alpha)
\end{matrix} \right], \;\;t \geq 0.$$*
*Proof.* The reliability function ([\[rf11\]](#rf11){reference-type="ref" reference="rf11"}) can be obtained by substituting ([\[pmf2\]](#pmf2){reference-type="ref" reference="pmf2"}) into ([\[rf112\]](#rf112){reference-type="ref" reference="rf112"}) and simplifying using the binomial theorem, as carried out in the previous proof. ◻
**Proposition 5**. *Under the assumptions of the model in ([\[bp1\]](#bp1){reference-type="ref" reference="bp1"}), for $n=1,2$, we also have $$\begin{aligned}
\mathbb{P}(\zeta =n) = \alpha \lambda_n \frac{(\Lambda+ \theta)^{\alpha-1}}{(\Lambda+\theta)^\alpha-\theta^\alpha} &\sum_{k=1}^{\infty} q_k \frac{(-1)^{k-1}}{(k-1)!} \sum_{l=0}^{k-1} \sum_{j=0}^{l} \binom{l}{j}(-1)^j \left((\Lambda+\theta)^\alpha-\theta^\alpha\right)^{-j}\\& \times \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_{k-1}
(\Lambda+\theta)^{\alpha i}(-\theta^\alpha)^{j-i}.
\end{aligned}$$*
*Proof.* With the help of Proposition ([Proposition 3](#prop123){reference-type="ref" reference="prop123"}), the probability ([\[zeta1\]](#zeta1){reference-type="ref" reference="zeta1"}) gives $$\begin{aligned}
\mathbb{P}(\zeta =n) = \alpha \lambda_n (\Lambda+\theta)^{\alpha-1} &\sum_{k=1}^{\infty} q_k \frac{(-1)^{k-1}}{(k-1)!} \sum_{l=0}^{k-1}\frac{1}{l!} \sum_{j=0}^{l} \binom{l}{j}(-1)^j \left(\left((\Lambda+\theta)^\alpha-\theta^\alpha\right)\right)^{l-j}
\\
& \times \sum_{i=0}^{j}\binom{j}{i} (\alpha i)_{k-1}
(\Lambda+\theta)^{\alpha i}(-\theta^\alpha)^{j-i} \int_{0}^{\infty}e^{-t\left( (\Lambda+\theta)^\alpha -\theta^\alpha \right)}t^{l}dt.
\end{aligned}$$ Using integral formula 3.351.3 of [@Gradshteyn], we get the proposition. ◻
## Generalized Shock Models
Let $S:= \{S\left(t\right)\}_{t \geq 0}$ be a Lévy subordinator. In the next theorem, we evaluate the reliability function of $T$ when the threshold $L$ has a geometric distribution with parameter $p\in\left(0,1\right]$, *i.e.*, $$\label{SurGeo}
\overline{q}_k=\left(1-p\right)^k, \qquad k=0,1,2\dots,$$
and when the shocks arrive according to a process $N:=\{N\left(t\right)\}_{t \geq 0}$, where
$$\label{gm11}
N\left(t\right):=\left(\mathcal{N}_1\left(S\left(t\right), \lambda_{1}\right),\mathcal{N}_2\left(S\left(t\right), \lambda_{2}\right)\right),$$
Here the components of $N$ are two independent homogeneous Poisson processes with intensities $\lambda_1>0$ and $\lambda_2>0$, respectively, each time-changed by the same independent generic subordinator $S$.
****Theorem** 4**. *For $(x_1, x_2) \in \mathbb{N}_{0}^2$ and under the assumptions of the model in ([\[SurGeo\]](#SurGeo){reference-type="ref" reference="SurGeo"}) and ([\[gm11\]](#gm11){reference-type="ref" reference="gm11"}), we have the reliability function of T as $$\label{eq24}
\overline{F}_{T}\left(t\right) = e^{-t\psi\left(\left(\lambda_1+\lambda_2\right)p\right)},$$ where $\psi(\cdot)$ is the Laplace exponent of the subordinator $S$.*
*Proof.* Consider the reliability function of $T$ as $$\begin{aligned}
\overline{F}_{T}\left(t\right)&=\sum_{k=0}^{+\infty}\left(1-p\right)^k\sum_{x_1=0}^{k}\mathbb{P}\left(\mathcal{N}_1\left(S\left(t\right), \lambda_{1}\right)=x_1,\mathcal{N}_2\left(S\left(t\right), \lambda_{2}\right)=k-x_1\right)\\
&=\sum_{k=0}^{+\infty}\left(1-p\right)^k\sum_{x_1=0}^{k}\frac{\lambda_{1}^{x_{1}}}{x_1!}\frac{\lambda_{2}^{k-x_{1}}}{\left(k-x_1\right)!}\int_{0}^{+\infty}e^{-\left ( \lambda _{1}+\lambda_{2} \right )s}s^{k}\mathbb{P}\left(S\left(t\right)\in\mathrm{d}s\right).
\end{aligned}$$
We exchange the order of summation and rearrange the terms to get:
$$\begin{aligned}
\label{SurT}
\overline{F}_{T}\left(t\right)&=\sum_{x_1=0}^{+\infty}\frac{\lambda_{1}^{x_1}\left(1-p\right)^{x_1}}{x_{1}!}\sum_{h=0}^{+\infty}\frac{\left [ \lambda_{2} \left ( 1-p \right )\right ]^{h}}{h!}\int_{0}^{+\infty}s^{x_1+h}e^{-\left ( \lambda _{1}+\lambda_{2} \right )s}\mathbb{P}\left(S\left(t\right)\in\mathrm{d}s\right)\nonumber\\
&=\sum_{x_1=0}^{+\infty}\frac{\lambda_{1}^{x_1}\left(1-p\right)^{x_1}}{x_{1}!}\int_{0}^{+\infty}s^{x_1}e^{-\left ( \lambda _{1}+\lambda_{2} \right )s+\lambda_2\left(1-p\right)s}\mathbb{P}\left(S\left(t\right)\in\mathrm{d}s\right)\nonumber\\
&=\int_{0}^{+\infty}e^{-\left ( \lambda _{1}+\lambda_{2} \right )s+\lambda_2\left(1-p\right)s+\lambda_1\left(1-p\right)s}\mathbb{P}\left(S\left(t\right)\in\mathrm{d}s\right)\nonumber\\
&=\int_{0}^{+\infty}e^{-\left ( \lambda _{1}+\lambda _{2} \right )ps}\mathbb{P}\left(S\left(t\right)\in\mathrm{d}s\right)\nonumber\\
&=e^{-t\psi\left(\left(\lambda_1+\lambda_2\right)p\right)}.
\end{aligned}$$ Hence, the theorem is proved. ◻
**Remark 2**. *In Equation ([\[eq24\]](#eq24){reference-type="ref" reference="eq24"}), it is observed that the random failure time $T$ is exponentially distributed with rate $\psi\left(\left(\lambda_1+\lambda_2\right)p\right)$, i.e., with mean determined by the Laplace exponent of the subordinator.*
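For instance, for the tempered $\alpha$-stable subordinator of ([\[tss11\]](#tss11){reference-type="ref" reference="tss11"}), whose Laplace exponent is $\psi(u)=(u+\theta)^{\alpha}-\theta^{\alpha}$, Theorem [Theorem 4](#Thm1){reference-type="ref" reference="Thm1"} reduces to $$\overline{F}_{T}\left(t\right)=\exp\left[-t\left(\left(\left(\lambda_1+\lambda_2\right)p+\theta\right)^{\alpha}-\theta^{\alpha}\right)\right],\qquad t\geq 0,$$ which is precisely the exponential kernel integrated against $\mathrm{d}G(p)$ in the special cases evaluated below.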
**Remark 3**. *As a corollary of Theorem [Theorem 4](#Thm1){reference-type="ref" reference="Thm1"}, it is straightforward to show that if the distribution of $L$ is a mixture of the geometric distributions ([\[SurGeo\]](#SurGeo){reference-type="ref" reference="SurGeo"}), then the distribution of $T$ is a mixture of the exponential distributions ([\[SurT\]](#SurT){reference-type="ref" reference="SurT"}). That is,*
*$$\overline{F}_{T}\left(t\right)=\int_{0}^{1}e^{-t\psi\left(\left(\lambda_1+\lambda_2\right)p\right)}\mathrm{d}G\left(p\right),$$*
*where $G$ is a distribution on $(0,1)$.*
Next, we discuss examples of some special random thresholds under the assumptions of the model in ([\[bp1\]](#bp1){reference-type="ref" reference="bp1"}).
## Some Examples
First, we reproduce the following identity from [@Gupta] as $$\label{tsf1}
\exp\left[-t\left(\left(\Lambda(1-u)+\theta\right)^\alpha - \theta^\alpha\right) \right] = e^{t\theta^{\alpha}} \sum_{k=0}^{\infty} \frac{(-u)^k}{k!} \sum_{i=0}^{\infty} \frac{\theta^i}{\Lambda^i i!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-k-i, \alpha)
\end{matrix} \right].$$ Now, we derive the reliability function of $T$ for some particular cases of the random threshold $L$.\
(I) Let $L$ follow the discrete exponential distribution with reliability function $$\overline{q}_k = e^{-k},\; k=0,1,2,\ldots.$$ From ([Proposition 4](#rf1){reference-type="ref" reference="rf1"}) and with the help of ([\[tsf1\]](#tsf1){reference-type="ref" reference="tsf1"}) applied with $u=e^{-1}$, we get $$\overline{R}_T(t) = \exp\left[-t\left(\left(\Lambda(1-e^{-1})+\theta\right)^\alpha - \theta^\alpha\right) \right].$$ Also, the density is given by $$g_T(t) = -\frac{d}{dt} \overline{R}_T(t) = \left(\left(\Lambda(1-e^{-1})+\theta\right)^\alpha - \theta^\alpha\right)\exp\left[-t\left(\left(\Lambda(1-e^{-1})+\theta\right)^\alpha - \theta^\alpha\right) \right].$$ Therefore, the hazard rate function, denoted by $H_T(t)$, for the random variable $T$ is given by $$H_T(t) = \frac{ g_T(t)}{ \overline{R}_T(t)} = \left(\left(\Lambda(1-e^{-1})+\theta\right)^\alpha - \theta^\alpha\right),\;\; t \geq 0.$$ (II) Let $L$ follow the Yule-Simon distribution with parameter $p$ and the reliability function $$\overline{q}_k = kB(k,p+1), \; k=1,2,\ldots,$$ where $B(a,b) = \int_{0}^{1} t^{a-1}(1-t)^{b-1}dt$ is the beta function. Then, the reliability function $\overline{R}_T(t)$ takes the form $$\begin{aligned}
\overline{R}_T(t) &= \sum_{k=0}^{\infty} kB(k,p+1) \frac{(-1)^k}{k!} \;e^{t\theta^{\alpha}} \sum_{i=0}^{\infty} \frac{\theta^i}{\Lambda^i i!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-k-i, \alpha)
\end{matrix} \right]\\
&= \sum_{k=0}^{\infty} k\left(\int_{0}^{1} z^{k-1}(1-z)^{p}dz\right) \frac{(-1)^k}{k!} \;e^{t\theta^{\alpha}} \sum_{i=0}^{\infty} \frac{\theta^i}{\Lambda^i i!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-k-i, \alpha)
\end{matrix} \right]\\
&= e^{t\theta^{\alpha}} \int_{0}^{1}(1-z)^{p} \sum_{k=1}^{\infty} k z^{k-1} \frac{(-1)^k}{k!} \sum_{i=0}^{\infty} \frac{\theta^i}{\Lambda^i i!} \; _1\psi_1 \left[-\Lambda^\alpha t\; \vline \;\begin{matrix}
\left(1,\alpha\right)\\
(1-k-i, \alpha)
\end{matrix} \right]dz\\
&= \int_{0}^{1}(1-z)^{p} \left(\frac{d}{dz} \exp\left[-t\left(\left(\Lambda(1-z)+\theta\right)^\alpha - \theta^\alpha\right) \right]\right)dz.\end{aligned}$$ Hence, the density function is given by $$g_T(t) = -\int_{0}^{1}(1-z)^{p} \frac{d}{dt} \left(\frac{d}{dz} \exp\left[-t\left(\left(\Lambda(1-z)+\theta\right)^\alpha - \theta^\alpha\right) \right]\right)dz.$$ Combining these expressions for $g_T(t)$ and $\overline{R}_T(t)$, we get the hazard rate function in the case of the Yule-Simon threshold.
Now, we discuss some special cases of the mixing distribution from Remark [Remark 3](#rem1){reference-type="ref" reference="rem1"} under the assumptions of the model in ([\[gm11\]](#gm11){reference-type="ref" reference="gm11"}).
## Special Cases
We now analyze three special cases by specifying the mixing distribution, under the assumption that $S$ is the tempered $\alpha$-stable subordinator as in ([\[tss11\]](#tss11){reference-type="ref" reference="tss11"}). The evaluation of the reliability functions is performed using Mathematica.\
(I) $\mathrm{d}G\left(p\right)=\mathrm{d}p$ (uniform distribution)\
We obtain $$\label{Uniform}
\overline{F}_{T}\left(t\right) = \frac{e^{t\theta^{\alpha }}}{\alpha\left(\lambda_1+\lambda_2\right )}\times\left [ \theta E_{\frac{\alpha -1}{\alpha}}\left ( t\theta ^{\alpha } \right )-\left ( \lambda_{1}+\lambda_{2}+\theta \right )E_{\frac{\alpha -1}{\alpha}}\left ( t\left ( \lambda_{1}+\lambda_{2}+\theta \right )^{\alpha } \right )\right ],$$ where $E_{l}\left(z\right)=\int_{1}^{+\infty}\frac{e^{-uz}}{u^{l}}\mathrm{d}u$ is a generalized exponential integral.\
(II) $\mathrm{d}G\left(p\right)=\displaystyle \frac{ab\left ( 1+ap \right )^{-\left ( b+1 \right )}}{1-\left ( 1+a \right )^{-b}}\mathrm{d}p$, with $a>0$, $b>-1$ and $b\neq 0$ (truncated Lomax distribution).\
Set $a:=\frac{\lambda _{1}+\lambda _{2}}{\theta }$ and $b+1:=\alpha$. We obtain $$\label{Lomax}
\overline{F}_{T}\left(t\right)=\frac{e^{t\theta ^{\alpha}}\left ( \alpha -1 \right )}{\alpha \left [1-\left ( 1+\frac{\lambda _{1}+\lambda _{2}}{\theta } \right )^{1-\alpha } \right ]}
\times\left [ E_{2-\frac{1}{\alpha }} \left ( t\theta ^{\alpha} \right )-\left (1+ \frac{\lambda _{1}+\lambda _{2} }{\theta } \right )^{1-\alpha }E_{2-\frac{1}{\alpha }}\left ( t\theta ^{\alpha}\left ( 1+\frac{\lambda _{1} +\lambda _{2}}{\theta} \right )^{\alpha }\right )\right ].$$ (III) $\mathrm{d}G\left(p\right)=\frac{b}{a}\left(\frac{p-c}{a}\right)^{b-1}e^{-\left(\frac{p-c}{a}\right)^{b}}\mathrm{d}p$, where $a$ and $b$ are positive values, and $c$ is a real value. (truncated three-parameter Weibull).\
Set $a=\frac{1}{\lambda_1+\lambda_2}$, $b=\alpha$ and $c=-\frac{\theta}{\lambda_1+\lambda_2}$. We obtain $$\label{Weibull}
\overline{F}_{T}\left(t\right)=\frac{1-e^{-\left ( t+1 \right )\left [ \left ( \lambda _{1} +\lambda _{2 }+\theta \right ) ^{\alpha }-\theta ^{\alpha }\right ]}}{\left ( t+1 \right )\left [ 1-e^{-\left [ \left ( \lambda _{1} +\lambda _{2 }+\theta \right ) ^{\alpha }-\theta ^{\alpha }\right ]} \right ]}.$$
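As a cross-check of such closed-form expressions, they can be compared with a direct numerical integration of the mixture in Remark [Remark 3](#rem1){reference-type="ref" reference="rem1"}. The short Python sketch below (an illustrative sketch only; the computations in this paper were carried out in Mathematica, and the parameter values below are arbitrary) performs this comparison for the uniform case ([\[Uniform\]](#Uniform){reference-type="ref" reference="Uniform"}) using the mpmath library:

```python
# Check of the uniform-mixture reliability function: closed form (label "Uniform")
# versus direct quadrature of int_0^1 exp(-t*psi((l1+l2)*p)) dp, where
# psi(u) = (u+theta)^alpha - theta^alpha is the tempered alpha-stable Laplace exponent.
from mpmath import mp, exp, quad, expint

mp.dps = 30                        # working precision (decimal digits)
alpha, theta = 0.5, 1.0            # arbitrary illustrative parameters
l1, l2, t = 1.0, 1.0, 2.0

def psi(u):
    return (u + theta)**alpha - theta**alpha

# direct integration of the mixture over p in (0, 1)
mixture = quad(lambda p: exp(-t * psi((l1 + l2) * p)), [0, 1])

# closed form in terms of the generalized exponential integral E_l(z)
l = (alpha - 1) / alpha
closed = exp(t * theta**alpha) / (alpha * (l1 + l2)) * (
    theta * expint(l, t * theta**alpha)
    - (l1 + l2 + theta) * expint(l, t * (l1 + l2 + theta)**alpha)
)

print(mixture, closed)             # the two values should agree to high precision
```

Analogous checks can be run for ([\[Lomax\]](#Lomax){reference-type="ref" reference="Lomax"}) and ([\[Weibull\]](#Weibull){reference-type="ref" reference="Weibull"}).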
Figures 1, 2 and 3 illustrate these special cases for some particular values of the parameters.
![ Plots of the reliability function ([\[Uniform\]](#Uniform){reference-type="ref" reference="Uniform"}) with $\lambda_1=\lambda_2=1$ and $\theta=1$ on the left-hand side, $\alpha=0.5$ on the right-hand side.](MixtureWithUniform1.pdf "fig:") ![ Plots of the reliability function ([\[Uniform\]](#Uniform){reference-type="ref" reference="Uniform"}) with $\lambda_1=\lambda_2=1$ and $\theta=1$ on the left-hand side, $\alpha=0.5$ on the right-hand side.](MixtureWithUniform2.pdf "fig:")
![ Plots of the reliability function ([\[Lomax\]](#Lomax){reference-type="ref" reference="Lomax"}) with $\lambda_1=\lambda_2=1$ and $\theta=1$ on the left-hand side, $\alpha=0.5$ on the right-hand side.](MixtureWithLomax1.pdf "fig:") ![ Plots of the reliability function ([\[Lomax\]](#Lomax){reference-type="ref" reference="Lomax"}) with $\lambda_1=\lambda_2=1$ and $\theta=1$ on the left-hand side, $\alpha=0.5$ on the right-hand side.](MixtureWithLomax2.pdf "fig:")
![ Plots of the reliability function ([\[Weibull\]](#Weibull){reference-type="ref" reference="Weibull"}) with $\lambda_1=\lambda_2=1$ and $\theta=1$ on the left-hand side, $\alpha=0.5$ on the right-hand side.](MixtureWithWeibull1.pdf "fig:") ![ Plots of the reliability function ([\[Weibull\]](#Weibull){reference-type="ref" reference="Weibull"}) with $\lambda_1=\lambda_2=1$ and $\theta=1$ on the left-hand side, $\alpha=0.5$ on the right-hand side.](MixtureWithWeibull2.pdf "fig:")
# Concluding Remarks
In this paper, we have proposed a bivariate tempered space-fractional Poisson process (BTSFPP) by time-changing the bivariate Poisson process with an independent tempered $\alpha$-stable subordinator. First, we derived the probability mass function and expressed it in terms of the generalized Wright function; then, we obtained the governing differential equations for the pmf and the pgf. We also derived the Lévy measure density of the BTSFPP. Motivated by reliability applications, we presented a bivariate competing risks and shock model based on the BTSFPP and derived several reliability quantities to predict the lifetime of the system. Finally, we discussed a generalized shock model and several typical examples.
## Acknowledgements {#acknowledgements .unnumbered}
A.D.C. and A.M. are members of the group GNCS of INdAM (Istituto Nazionale di Alta Matematica).
## Funding {#funding .unnumbered}
This work is partially supported by CSIR India (File No: 09/1051(11349)/2021-EMR-I), DST-SERB and MIUR--PRIN 2017, Project Stochastic Models for Complex Systems (no. 2017JFFHSH).
## Competing Interests {#competing-interests .unnumbered}
The authors declare that no competing interests arose during the preparation or publication of this article.
Applebaum, D. *Lévy Processes and Stochastic Calculus*. Cambridge University Press, 116 (2009).
Beghin, L., Macci, C. Multivariate fractional Poisson processes and compound sums. *Advances in Applied Probability*, 48, 691-711 (2016).
Byrne, J. Properties of compound Poisson processes with applications in statistical physics. *Physica*, 41, 575-587 (1969).
Cha, J. H., Finkelstein, M. New shock models based on the generalized Polya process. *European Journal of Operational Research*, 251, 135-141 (2016).
Cha, J. H., Giorgio, M. Modelling of marginally regular bivariate counting process and its application to shock model. *Methodology and Computing in Applied Probability*, 20, 1137-1154 (2018).
Di Crescenzo, A., Longobardi, M. Competing risks within shock models. *Scientiae Mathematicae Japonicae*, 67, 125-135 (2008).
Di Crescenzo, A., Meoli, A. Competing risks and shock models governed by a generalized bivariate Poisson process. *Journal of Applied Probability*, 60, 709-722 (2023).
Di Crescenzo, A., Pellerey, F. Some results and applications of geometric counting processes. *Methodology and Computing in Applied Probability*, 21, 203-233 (2019).
Gajda, J., Kumar, A., Wyłomańska, A. Stable Lévy process delayed by tempered stable subordinator. *Statistics & Probability Letters*, 145, 284-292 (2019).
Golding, I., Cox, E. C. Physical nature of bacterial cytoplasm. *Physical Review Letters*, 96, 098102 (2006).
Gradshteyn, I. S., Ryzhik, I. M. *Table of Integrals, Series, and Products*. Academic Press (2014).
Gupta, N., Kumar, A., Leonenko, N. Skellam type processes of order k and beyond. *Entropy*, 22, 1193 (2020).
Gupta, N., Kumar, A., Leonenko, N. Stochastic models with mixtures of tempered stable subordinators. *Mathematical Communications*, 26, 77-99 (2021).
Gupta, N., Kumar, A., Leonenko, N. Tempered fractional Poisson processes and fractional equations with Z-transform. *Stochastic Analysis and Applications*, 38, 939-957 (2020).
Johnson, W. P. The curious history of Faà di Bruno's formula. *The American Mathematical Monthly*, 109, 217-234 (2002).
Jung, J., Lundberg, O. Risk processes connected with the compound Poisson process. *Scandinavian Actuarial Journal*, 118-131 (1969).
Ken-Iti, S. *Lévy Processes and Infinitely Divisible Distributions*. Cambridge University Press (1999).
Kilbas, A. A., Saigo, M., Trujillo, J. J. On the generalized Wright function. *Fractional Calculus and Applied Analysis*, 5, 437-460 (2002).
Kumar, A., Maheshwari, A., Wyłomańska, A. Linnik Lévy process and some extensions. *Physica A: Statistical Mechanics and its Applications*, 529, 121539 (2019).
Kumar, A., Nane, E., Vellaisamy, P. Time-changed Poisson processes. *Statistics & Probability Letters*, 81, 1899-1910 (2011).
Kumar, A., Vellaisamy, P. Inverse tempered stable subordinators. *Statistics & Probability Letters*, 103, 134-141 (2015).
Lehmann, A. Degradation-threshold-shock models. *Probability, Statistics and Modelling in Public Health*, 286-298. Springer US (2006).
Leonenko, N., Merzbach, E. Fractional Poisson fields. *Methodology and Computing in Applied Probability*, 17, 155-168 (2015).
Maheshwari, A. Tempered space fractional negative binomial process. *Statistics & Probability Letters*, 196, 109799 (2023).
Maheshwari, A., Vellaisamy, P. Fractional Poisson process time-changed by Lévy subordinator and its inverse. *Journal of Theoretical Probability*, 32, 1278-1305 (2019).
Meerschaert, M., Nane, E., Vellaisamy, P. The fractional Poisson process and the inverse stable subordinator. *Electronic Journal of Probability*, 16, 1600-1620 (2011).
Orsingher, E., Polito, F. The space-fractional Poisson process. *Statistics & Probability Letters*, 82, 852-858 (2012).
Orsingher, E., Toaldo, B. Counting processes with Bernštein intertimes and random jumps. *Journal of Applied Probability*, 52, 1028-1044 (2015).
Soni, R., Pathak, A. K. Generalized fractional negative binomial process. *arXiv preprint arXiv:2304.10487* (2023).
Stanislavsky, A., Weron, K. Two-time scale subordination in physical processes with long-term memory. *Annals of Physics*, 323, 643-653 (2008).
Wang, C. An explicit compound Poisson process-based shock deterioration model for reliability assessment of aging structures. *Journal of Traffic and Transportation Engineering*, 9, 461-472 (2022).
| arxiv_math | {
"id": "2309.10566",
"title": "Bivariate Tempered Space-Fractional Poisson Process and Shock Models",
"authors": "Ritik Soni, Ashok Kumar Pathak, Antonio Di Crescenzo, and Alessandra\n Meoli",
"categories": "math.PR",
"license": "http://creativecommons.org/licenses/by-nc-nd/4.0/"
} |
---
abstract: |
In this work we study different Implicit-Explicit (IMEX) schemes for incompressible flow problems with variable viscosity. Unlike most previous work on IMEX schemes, which focuses on the convective part, we here focus on treating parts of the diffusive term explicitly to reduce the coupling between the velocity components. We present different, both monolithic and fractional-step, IMEX alternatives for the variable-viscosity Navier--Stokes system, analysing their theoretical and algorithmic properties. Stability results are proven for all the methods presented, with all these results being unconditional, except for one of the discretisations using a fractional-step scheme, where a CFL condition (in terms of the problem data) is required for showing stability. Our analysis is supported by a series of numerical experiments.
author:
- |
**Gabriel R. Barrenechea** $^1$, **Ernesto Castillo** $^2$\
and **Douglas R. Q. Pacheco** $^{3,}$ [^1]\
${}^{1}$ Department of Mathematics and Statistics, University of Strathclyde, Glasgow, Scotland\
${}^{2}$ Department of Mechanical Engineering, University of Santiago de Chile, Santiago, Chile\
${}^{3}$ Department of Mathematical Sciences, NTNU, Trondheim, Norway
bibliography:
- references.bib
title: "**Implicit-explicit schemes for incompressible flow problems with variable viscosity**"
---
Incompressible flow, IMEX methods, Variable viscosity, Generalised Newtonian fluids, Finite element method, Temporal stability
# Introduction
Variable-viscosity flow problems are relevant in many physical and technological processes. Viscosity variations can be produced, for example, by temperature or pressure gradients, by non-Newtonian rheological behaviour, or by the interaction of multiple fluid phases. Numerically, non-constant viscosity often leads to extra non-linearities and ill-conditioning in the equations, which can affect the convergence and the performance of linear and nonlinear solvers [@Carey1989; @Schussnig2021JCP; @Pacheco2021CMAME]. These issues are amplified in the time-dependent setting, where those solvers are called many (even thousands of) times as the simulation advances. As a consequence, the last few years have seen an expansion in the literature on numerical methods for variable-viscosity incompressible flow problems [@Schussnig2021JCP; @Pacheco2021CMAME; @Deteix2018; @Plasman2020; @Anaya2021; @Anaya2023].
Over the last few decades, several alternatives have been proposed to reduce the computational complexity of fluid simulations. These date back to the seminal work [@Chorin67] and include operator-splitting schemes [@Glowinski] and more general projection methods, such as fractional-step schemes [@Deteix2018; @chorin1968; @temam1969; @Guermond2006; @BadiaRamon2007; @Diaz2023]. A more recent trend, which can also be combined with fractional stepping, is that of implicit-explicit (IMEX) methods. These have become increasingly popular in applications ranging from waves to fluids [@Boscarino2013; @Hochbruck2021; @Burman2022; @Guesmi2023; @Burman2023].
In the context of incompressible flows, IMEX usually refers to temporal discretisations that treat convection explicitly or semi-implicitly, while keeping viscous terms fully implicit [@John2016], which is the case for most of the IMEX literature for flow problems. To the best of our knowledge, only a few articles have so far considered IMEX methods in the presence of variable viscosity. It was recently proposed in [@Stiller2020; @Guesmi2023] to augment the viscous term with a grad-div stabilisation [@Olshanskii2009], treated explicitly. Another alternative consists in adding and subtracting a large enough constant-coefficient viscous term on one side of the equation, and then applying the IMEX time-marching method to the equivalent equation. This is the approach used in [@WWZS16; @WZWS20; @TCS22], showing good numerical performance, although the analysis carried out so far is limited to scalar equations with constant coefficients.
For flow problems with variable viscosity, treating the diffusive term semi-implicitly can present significant computational advantages. As a matter of fact, for those flow problems the viscous term cannot be written as the standard Laplacian. This induces a coupling of the different components of the velocity (due to the presence of the symmetric gradient), which makes the linear solvers more involved and memory- and time-consuming. Thus, the possibility of devising IMEX discretisations that make the transposed velocity gradient explicit, leading to linear problems with the same sparsity pattern as a scalar transport equation, has practical (and theoretical) appeal.
In this work we further develop an idea explored in [@Pacheco2021CMAME], where an alternative formulation using the viscosity gradient was used to propose an IMEX scheme decoupling the velocity components; the results indicate an improved temporal stability of the modified formulation in comparison to "naive" IMEX discretisations. More precisely, we study a variety of IMEX discretisations, both monolithic and fractional-step, arising from making one part of the viscous term explicit, while keeping the "Laplacian" part implicit. We prove stability for all the methods studied, but the norm with respect to which the stability is proven varies according to the formulation. In fact, we show that the most "natural" way of splitting the viscous term leads to a weak stability, while a more careful rewriting of the term to be made explicit enhances it, which is confirmed by our numerical studies. To keep the technical details to a minimum, and focus on the effects of the time discretisation, we only analyse the semi-discretised (in time) problem, and consider only temporally first-order schemes, both for the monolithic and the fractional-step alternatives.
The rest of the manuscript is organised as follows. In Section [2](#sec_pre){reference-type="ref" reference="sec_pre"} we introduce the model problem, notation and useful analytical tools. In Sections [3](#sec_mono){reference-type="ref" reference="sec_mono"} and [4](#sec_incremental){reference-type="ref" reference="sec_incremental"} we derive stability estimates for monolithic and fractional-step methods, respectively. Section [5](#sec_summary){reference-type="ref" reference="sec_summary"} summarises the theoretical results of the paper, and in Section [6](#sec_spatial){reference-type="ref" reference="sec_spatial"} we briefly discuss spatial discretisation matters. We test and compare numerically the different alternatives presented in Section [7](#sec_examples){reference-type="ref" reference="sec_examples"}, and we draw some final remarks in Section [8](#sec_Conclusion){reference-type="ref" reference="sec_Conclusion"}.
# Preliminaries {#sec_pre}
## Model problem
Let us consider a finite time interval $(0,T]$ and a fluid domain $\Omega\subset\mathbb{R}^{d}$, $d=2$ or $3$, with Lipschitz boundary $\Gamma=\partial\Omega$. As a model problem, we consider the variable-viscosity incompressible Navier--Stokes equations: $$\begin{aligned}
\partial_t\mbox{\boldmath $u$} + (\nabla\mbox{\boldmath $u$})\mbox{\boldmath $u$} - \nabla\cdot[2\nu(\mbox{\boldmath $x$},t)\nabla^{\mathrm{s}}\mbox{\boldmath $u$}] + \nabla p &= \mbox{\boldmath $f$} && \text{in} \ \ \Omega\times(0,T]=:Q\, ,\label{momentum}\\
\nabla\cdot\mbox{\boldmath $u$} &= 0 && \text{in} \ \ \Omega\times(0,T]\, , \label{incompressibility}\\
\mbox{\boldmath $u$} &= \mbox{\boldmath $0$} && \text{on} \ \ \Gamma\times(0,T]\, ,\label{DirichletBC}\\
\mbox{\boldmath $u$} &= \mbox{\boldmath $u$}_0 && \text{at} \ \ t=0\, ,\end{aligned}$$ where the unknowns are the velocity $\mbox{\boldmath $u$}$ and the pressure $p$, while the remaining quantities are problem data. The viscosity field $\nu(\mbox{\boldmath $x$},t)$ is assumed to satisfy $$\nu(\mbox{\boldmath $x$},t) \geq \nu_{\mathrm{min}} > 0 \ \ \text{in} \ \bar{Q}\, ,$$ where $\nu_{\mathrm{min}}$ is a known constant that represents, e.g., the minimum viscosity of a shear-thinning fluid. To simplify notation, we shall use $\nu$ instead of $\nu(\mbox{\boldmath $x$},t)$ throughout this work (but will keep in mind its variable character).
## Alternative formulations of the viscous term {#sec_Weak}
In incompressible flows, the divergence-free constraint [\[incompressibility\]](#incompressibility){reference-type="eqref" reference="incompressibility"} leads to multiple ways of describing the viscous term. Although popular in the constant viscosity scenario, the Laplacian form $\nu\Delta\mbox{\boldmath $u$}$ is not appropriate for problems with variable viscosity. For that reason, the full stress-divergence (SD) form $$\nabla\cdot\left(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\right) = \nabla\cdot(\nu\nabla\mbox{\boldmath $u$}) + \nabla\cdot(\nu\nabla^{\top}\mbox{\boldmath $u$})
\label{SD}$$ is normally used when $\nabla\nu \not= \mbox{\boldmath $0$}$. Yet, other possibilities exist, such as [@Guesmi2023] $$\begin{aligned}
\nabla\cdot\left(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\right) = \nabla\cdot\left(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\right) - \nabla(\nu\nabla\cdot\mbox{\boldmath $u$})\, ,
\label{gradDiv}\end{aligned}$$ or also [@Anaya2021] $$\begin{aligned}
\label{Anaya}
\nabla\cdot\left(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\right) &= 2\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\nabla\nu - \nu\nabla\times(\nabla\times\mbox{\boldmath $u$}) \, .\end{aligned}$$ In this work we consider the following rewriting of the viscous term $$\nabla\cdot\left(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\right) \equiv \nabla\cdot\left(\nu\nabla\mbox{\boldmath $u$}\right) + \nabla^{\top}\mbox{\boldmath $u$}\nabla\nu+ \nu\nabla(\nabla\cdot\mbox{\boldmath $u$}) \\
= \nabla\cdot\left(\nu\nabla\mbox{\boldmath $u$}\right) + \nabla^{\top}\mbox{\boldmath $u$}\nabla\nu\, ,
\label{genLapStrong}$$ which we denote as the generalised Laplacian (GL) form. Although equivalent at the continuous level, the formulations [\[SD\]](#SD){reference-type="eqref" reference="SD"}-[\[genLapStrong\]](#genLapStrong){reference-type="eqref" reference="genLapStrong"} can (and do) lead to dramatically different numerical performance. As mentioned in the introduction, the symmetric gradient leads to a coupling of the $d$ velocity components (especially if $\nabla\nu\not=\mbox{\boldmath $0$}$). In this work our aim is to make the parts of the viscous term responsible for this coupling explicit, and to analyse the implications of such IMEX approaches.
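To make this coupling explicit in the two-dimensional case, write $\mbox{\boldmath $u$}=(u_1,u_2)$; then $$\nabla\cdot\left(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\right)=\begin{pmatrix}
\partial_x\left(2\nu\,\partial_x u_1\right)+\partial_y\big(\nu\left(\partial_y u_1+\partial_x u_2\right)\big)\\
\partial_x\big(\nu\left(\partial_y u_1+\partial_x u_2\right)\big)+\partial_y\left(2\nu\,\partial_y u_2\right)
\end{pmatrix},$$ so each momentum component involves derivatives of both $u_1$ and $u_2$ inside the operator treated implicitly, whereas $\nabla\cdot\left(\nu\nabla\mbox{\boldmath $u$}\right)$ acts on each velocity component separately. In the GL form [\[genLapStrong\]](#genLapStrong){reference-type="eqref" reference="genLapStrong"}, the cross-coupling is confined to the first-order term $\nabla^{\top}\mbox{\boldmath $u$}\nabla\nu$, which is precisely the part we will treat explicitly.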
## Useful notation, identities and inequalities
We consider usual notation for Hilbert and Lebesgue spaces aligned, e.g., with [@EG21-I]. More precisely, we denote by $L^2_0(\Omega)$ the space of $L^2(\Omega)$ functions with zero mean on $\Omega$. The inner product in $L^2(\Omega)$ is denoted by $( \cdot ,\cdot )$, and we make no distinction between the product of scalar or vector/tensor-valued functions. Moreover, we denote by $\| \cdot \|$ and $\| \cdot \|_{\infty}$ the $L^2(\Omega)$ and $L^{\infty}(\bar{Q})$ norms, respectively. In our analyses, we will assume that $\mbox{\boldmath $f$}\in [L^2(Q)]^d$ and $\mbox{\boldmath $u$}_0\in [H^1_0(\Omega)]^d$.
We will carry out a discrete-in-time analysis. Approximate or discrete values of the variables at the different instants will be defined with sub-indices: $\mbox{\boldmath $u$}_{n}$, for instance, denotes the velocity approximation at the $n$-th time step. We will often use the identity $$\begin{aligned}
2(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $u$}_{n+1}) = \|\mbox{\boldmath $u$}_{n+1}\|^2 - \|\mbox{\boldmath $u$}_{n}\|^2 + \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 \, ,
\label{identity}\end{aligned}$$ where $\delta\mbox{\boldmath $u$}_{n+1}:=\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n}$. A classical and useful fact is the skew-symmetry of the convective term. More precisely, if $\mbox{\boldmath $v$}\in [H^1_0(\Omega)]^d$ is solenoidal, there holds $$\begin{aligned}
((\nabla\mbox{\boldmath $w$})\mbox{\boldmath $v$},\mbox{\boldmath $w$}) = 0 \ \ \text{for all $\mbox{\boldmath $w$}\in [H^1_0(\Omega)]^d$.}
\label{convective}\end{aligned}$$ If $\mbox{\boldmath $v$}$ is *not* divergence-free, we will consider the following skew-symmetrised form $\mathbf{c}$ of the convective derivative: $$\begin{aligned}
\mathbf{c}(\mbox{\boldmath $v$},\mbox{\boldmath $w$}) :=(\nabla\mbox{\boldmath $w$})\mbox{\boldmath $v$} + \frac{1}{2}(\nabla\cdot\mbox{\boldmath $v$})\mbox{\boldmath $w$}\, ,
\label{skewSymmetrisation}\end{aligned}$$ which enjoys the property $$\begin{aligned}
(\mathbf{c}(\mbox{\boldmath $v$},\mbox{\boldmath $w$}),\mbox{\boldmath $w$}) = 0 \ \ \text{for all} \ \mbox{\boldmath $w$},\mbox{\boldmath $v$} \in [H^1_0(\Omega)]^d \, .
\label{skewSymmetric}\end{aligned}$$ For details on different forms of the convective term and their properties, we refer the reader to, e.g., [@John2016].
To construct IMEX schemes, we can combine backward differentiation formulas in time with extrapolation rules of matching order of consistency. Our analysis will be restricted to the first-order case, for which extrapolation means simply replacing a certain quantity at $t_{n+1}$ by its previous value at $t_n$, and $$\begin{aligned}
\partial_t\mbox{\boldmath $u$}|_{t=t_{n+1}} \approx \frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n})\, ,\end{aligned}$$ where $\tau>0$ denotes the time-step size, which will be considered constant, for simplicity -- although all the results in this paper extend to variable $\tau$ in a straightforward manner.
We will analyse both monolithic and fractional-step schemes. The stability analysis will rely on different versions of the discrete Gronwall inequality. The proof of the two results below can be found in [@Heywood1990 Lemma 5.1] (see also [@John2016 Lemma A.56]).
**Lemma 1** (Discrete Gronwall inequality). *Let $N\in\mathbb{N}$, and $\alpha,B,a_{n},b_{n},c_{n}$ be non-negative numbers for $n=1,\ldots,N$. Let us suppose that these numbers satisfy $$\begin{aligned}
\label{N-1}
a_{N} + \sum_{n=1}^{N}b_n \leq B + \sum_{n=1}^{N}c_n + \alpha\sum_{n=1}^{N-1}a_n \, .\end{aligned}$$ Then, the following inequality holds: $$\begin{aligned}
a_{N} + \sum_{n=1}^{N}b_n \leq \mathrm{e}^{\alpha N}\left( B + \sum_{n=1}^{N}c_n\right) \ \ \text{for} \ \ N\geq 1 \, .\end{aligned}$$*
**Lemma 2** (Discrete Gronwall lemma, conditional version). *Let $N\in\mathbb{N}$, and\
$\alpha,B,a_{n},b_{n},c_{n}$ be non-negative numbers for $n=1,\ldots,N$. Let us suppose that these numbers satisfy $$\begin{aligned}
\label{N}
a_{N} + \sum_{n=1}^{N}b_n \leq B + \sum_{n=1}^{N}c_n + \alpha\sum_{n=1}^{N}a_n \, ,\end{aligned}$$ with $\alpha < 1$. Then, the following inequality holds: $$\begin{aligned}
a_{N} + \sum_{n=1}^{N}b_n \leq \mathrm{e}^{\frac{\alpha N}{1-\alpha}}\left( B + \sum_{n=1}^{N}c_n\right)\, .\end{aligned}$$*
Notice that the only difference between [\[N-1\]](#N-1){reference-type="eqref" reference="N-1"} and [\[N\]](#N){reference-type="eqref" reference="N"} is where the last sum on the right-hand side stops ($N-1$ or $N$).
# Monolithic schemes {#sec_mono}
In this section we will study three different IMEX schemes. All of them share the property that the velocity and the pressure are computed simultaneously at each time step (hence the term "monolithic" to describe them). The proof of their stability will be based on the discrete Gronwall inequality given in Lemma [Lemma 1](#Lem:Gronwall-unconditional){reference-type="ref" reference="Lem:Gronwall-unconditional"}. Hence, every time we refer to the *discrete Gronwall inequality* in this section, we will be referring to Lemma [Lemma 1](#Lem:Gronwall-unconditional){reference-type="ref" reference="Lem:Gronwall-unconditional"}.
## Implicit stress-divergence formulation
We will start by considering the simplest possible IMEX scheme, where (other than the convective velocity) only the viscosity is made explicit. In such a case, the time-discrete momentum equation reads $$\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n})-\nabla\cdot\left(2\nu_{n}\nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n+1}\right) + (\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n + \nabla p_{n+1} = \mbox{\boldmath $f$}_{n+1}\, .
\label{strongMomentumMono}$$ The standard variational problem for the $(n+1)$-th time step is given by: find $(\mbox{\boldmath $u$}_{n+1},p_{n+1})\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$ such that $$\begin{split}
\frac{1}{\tau}\left(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$}\right)+\left(2\nu_{n}\nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n+1},\nabla^{\mathrm{s}}\mbox{\boldmath $w$}\right) + \big((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n,\mbox{\boldmath $w$} \big) -\left(p_{n+1},\nabla\cdot\mbox{\boldmath $w$}\right) &= \left( \mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $w$} \right), \\
\left(q,\nabla\cdot\mbox{\boldmath $u$}_{n+1}\right) &= 0\,,
\label{fullyCoupled}
\end{split}$$ for all $(\mbox{\boldmath $w$},q)\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$.
**Remark 1**. *The decision to make the viscosity explicit in [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"} stems from the fact that in realistic situations (such as generalised Newtonian flows), the viscosity is seldom known beforehand, being instead velocity-dependent. This is why we consider the first-order extrapolation $\nu_{n+1}\approx \nu_n$.*
The scheme [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"} is rather classical, and well-understood. Nevertheless, we now reproduce its stability result (and its proof) for completeness, and to highlight the main differences that appear in more involved schemes presented later.
**Lemma 3** (Stability of the implicit stress-divergence formulation). *Let us suppose that the time-step size is given by $\tau=T/N$, $N\ge 2$. Then, the scheme [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"} satisfies the following stability inequality: $$\|\mbox{\boldmath $u$}_{N}\|^2 + 4\sum_{n=1}^{N}\tau\big\|\sqrt{\nu_{n-1}}\, \nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n}\big\|^2 \leq \mathrm{e}^{\varepsilon}\Bigg[\left(1+\varepsilon\frac{\tau}{T}\right)\|\mbox{\boldmath $u$}_{0}\|^2 + \left(\tau+\frac{ T}{\varepsilon}\right)\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2\Bigg]\, ,
\label{stabilitySD}$$ where $\varepsilon>0$ is an arbitrary constant.*
Setting $\mbox{\boldmath $w$} = 2\tau\mbox{\boldmath $u$}_{n+1}$ and $q = 2\tau p_{n+1}$ in [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"} and adding the resulting equations yields $$\begin{aligned}
2(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $u$}_{n+1}) + 4\tau\big\|\sqrt{\nu_{n}}\, \nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n+1}\big\|^2 + 2\tau((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n,\mbox{\boldmath $u$}_{n+1}) = 2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1})\, .\end{aligned}$$ The convective term vanishes due to its skew-symmetric property [\[convective\]](#convective){reference-type="eqref" reference="convective"}. Applying [\[identity\]](#identity){reference-type="eqref" reference="identity"} and the Cauchy-Schwarz and Young inequalities, we obtain $$\begin{aligned}
\|\mbox{\boldmath $u$}_{n+1}\|^2 - \|\mbox{\boldmath $u$}_{n}\|^2 + \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 + 4\tau\big\|\sqrt{\nu_{n}}\, \nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n+1}\big\|^2 &= 2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1}) \\
&=2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n}+\delta\mbox{\boldmath $u$}_{n+1}) \\
&\leq 2\tau\|\delta \mbox{\boldmath $u$}_{n+1}\|\,\|\mbox{\boldmath $f$}_{n+1}\| + 2\tau\|\mbox{\boldmath $f$}_{n+1}\|\, \|\mbox{\boldmath $u$}_{n}\| \\
&\leq \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 + \Big(\tau+\frac{T}{\varepsilon}\Big)\tau\|\mbox{\boldmath $f$}_{n+1}\|^2 + \tau\frac{\varepsilon}{T}\|\mbox{\boldmath $u$}_{n}\|^2 ,\end{aligned}$$ where $\varepsilon>0$ is arbitrary. Rearranging terms we get $$\begin{aligned}
\|\mbox{\boldmath $u$}_{n+1}\|^2-\|\mbox{\boldmath $u$}_{n}\|^2 + 4\tau\big\|\sqrt{\nu_{n}}\, \nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n+1}\big\|^2 \leq \varepsilon\frac{\tau}{T}\|\mbox{\boldmath $u$}_{n}\|^2 + \Big(\tau^2+\frac{\tau T}{\varepsilon }\Big)\|\mbox{\boldmath $f$}_{n+1}\|^2 \, .\end{aligned}$$ Adding this inequality from $n=0$ to $n=N-1$ yields $$\begin{aligned}
\|\mbox{\boldmath $u$}_{N}\|^2 + 4\tau\sum_{n=1}^{N}\big\| \sqrt{\nu_{n-1}}\,\nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n}\big\|^2 \leq \left(1+\varepsilon\frac{\tau}{T}\right)\|\mbox{\boldmath $u$}_{0}\|^2 + \varepsilon\frac{\tau}{T}\sum_{n=1}^{N-1}\|\mbox{\boldmath $u$}_{n}\|^2 + \Big(\tau+\frac{T}{ \varepsilon }\Big)\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2 ,\end{aligned}$$ and the result follows as a direct application of the discrete Gronwall lemma.
------------------------------------------------------------------------
**Remark 2**. *The above analysis applied to $\mbox{\boldmath $f$}=\mbox{\boldmath $0$}$ (in which case no Young inequality is needed, so one may let $\varepsilon\to0$) leads to the fact that [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"} is strongly stability preserving, i.e., $$\|\mbox{\boldmath $u$}_{N}\|^2 + 4\sum_{n=1}^{N}\tau\big\|\sqrt{\nu_{n-1}}\, \nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n}\big\|^2 \leq \|\mbox{\boldmath $u$}_{0}\|^2\,.$$ The main question, to be addressed next, is whether and how we can preserve stability even if a part of the viscous term is treated explicitly.*
## Implicit-explicit stress-divergence formulations
The simplest, naive way to decouple the velocity components is to split the viscous term into two parts, as in [\[SD\]](#SD){reference-type="eqref" reference="SD"}, and make the second term explicit. This choice leads to the following semi-discrete scheme: $$\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n})-\nabla\cdot\left(\nu_{n}\nabla\mbox{\boldmath $u$}_{n+1}\right) + (\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n + \nabla p_{n+1} = \nabla\cdot\left(\nu_{n}\nabla^{\top}\mbox{\boldmath $u$}_{n}\right) + \mbox{\boldmath $f$}_{n+1}\, ,
\label{SDnaive}$$ with the corresponding weak form: find $(\mbox{\boldmath $u$}_{n+1},p_{n+1})\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$ such that $$\begin{split}
\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$})+\left(\nu_{n}\nabla\mbox{\boldmath $u$}_{n+1},\nabla\mbox{\boldmath $w$}\right) + \big((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n,\mbox{\boldmath $w$} \big)-\left(p_{n+1},\nabla\cdot\mbox{\boldmath $w$}\right) &= \\
-\left(\nu_{n}\nabla^{\top}\mbox{\boldmath $u$}_{n},\nabla\mbox{\boldmath $w$}\right) + (\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $w$})\,, &\\
\left(q,\nabla\cdot\mbox{\boldmath $u$}_{n+1}\right) &= 0\,,
\end{split}$$ for all $(\mbox{\boldmath $w$},q)\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$.
Testing with the same functions as in the proof of the last result, we arrive at $$\begin{aligned}
\|\mbox{\boldmath $u$}_{n+1}\|^2 + 2\tau\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n+1} \big\|^2 + \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 = \|\mbox{\boldmath $u$}_{n}\|^2 - 2\tau(\nu_n\nabla^{\top}\mbox{\boldmath $u$}_{n},\nabla\mbox{\boldmath $u$}_{n+1}) + 2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1}) \,.\end{aligned}$$ In this case, an extra term (that requires bounding) appears on the right-hand side: $$-2(\nu_n\nabla^{\top}\mbox{\boldmath $u$}_{n},\nabla\mbox{\boldmath $u$}_{n+1}) \leq 2\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_n\big\|\,\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\| \leq \big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_n\big\|^2 + \big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2\,,$$ where we have used the Cauchy-Schwarz and Young inequalities. Then, estimating the forcing term as done for the implicit case, we get the following bound at step $(n+1)$: $$\label{20-1}
\|\mbox{\boldmath $u$}_{n+1}\|^2 + \tau\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n+1} \big\|^2 \leq \|\mbox{\boldmath $u$}_{n}\|^2 + \tau\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2 + \frac{\tau}{T}\|\mbox{\boldmath $u$}_{n}\|^2 + (\tau^2 + \tau T)\|\mbox{\boldmath $f$}_{n+1}\|^2\,.$$
At this point, there is no guarantee that the viscous term on the left-hand side will be able to control the one on the right-hand side, since the viscosity on the left-hand side term is $\nu_n$, not $\nu_{n+1}$. Some stability might still be attained if $\nu_{n+1} \leq \nu_n$, or if $\tau$ is small enough so that $\nu_{n+1}\approx\nu_{n}$; otherwise, the scheme could become unstable, especially for rapidly varying viscosity fields. This may explain the poor stability observed for a related implicit-explicit SD formulation in [@Pacheco2021CMAME], see also Section [7.2](#sec_cavity){reference-type="ref" reference="sec_cavity"} below.
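To make the previous observation more concrete, summing [\[20-1\]](#20-1){reference-type="eqref" reference="20-1"} from $n=0$ to $N-1$ and telescoping gives $$\|\mbox{\boldmath $u$}_{N}\|^2 + \tau\big\|\sqrt{\nu_{N-1}}\,\nabla\mbox{\boldmath $u$}_{N}\big\|^2 \leq \|\mbox{\boldmath $u$}_{0}\|^2 + \tau\big\|\sqrt{\nu_{0}}\,\nabla\mbox{\boldmath $u$}_{0}\big\|^2 + \tau\sum_{n=1}^{N-1}\Big(\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2-\big\|\sqrt{\nu_{n-1}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2\Big) + \frac{\tau}{T}\sum_{n=0}^{N-1}\|\mbox{\boldmath $u$}_{n}\|^2 + (\tau^2+\tau T)\sum_{n=1}^{N}\|\mbox{\boldmath $f$}_{n}\|^2\,,$$ where the sum involving the viscosity increments $\nu_{n}-\nu_{n-1}$ is the term that escapes a Gronwall-type argument: it is non-positive whenever the viscosity does not increase in time, and it remains small when $\tau$ is small enough for $\nu_{n}\approx\nu_{n-1}$, but in general it is not controlled by the left-hand side.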
According to the analysis above, guaranteeing stability of the IMEX-SD formulation would require knowing $\nu_{n+1}$ already at the $(n+1)$-th time-step. Although this is seldom realistic, a scheme that requires $\nu_{n+1}$ can still make practical sense if combined, for instance, with a Picard-like method linearising the current viscosity. Therefore, let us now present a scheme that assumes $\nu_{n+1}$ as already known at the point of computing $(\mbox{\boldmath $u$}_{n+1},p_{n+1})$. Consider the first-order extrapolation $$\begin{aligned}
\nu_{n+1}\nabla^{\top}\mbox{\boldmath $u$}_{n+1} = \sqrt{\nu_{n+1}}\sqrt{\nu_{n+1}}\, \nabla^{\top}\mbox{\boldmath $u$}_{n+1} \approx \sqrt{\nu_{n+1}}\sqrt{\nu_{n}}\, \nabla^{\top}\mbox{\boldmath $u$}_{n} \, ,\end{aligned}$$ from which we can propose the following alternative to [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"}: $$\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n})-\nabla\cdot\left(\nu_{n+1}\nabla\mbox{\boldmath $u$}_{n+1}\right) + (\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n + \nabla p_{n+1} = \nabla\cdot\left(\sqrt{\nu_{n+1}}\sqrt{\nu_{n}}\, \nabla^{\top}\mbox{\boldmath $u$}_{n}\right) + \mbox{\boldmath $f$}_{n+1}$$ and the corresponding weak form: find $(\mbox{\boldmath $u$}_{n+1},p_{n+1})\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$ such that $$\begin{split}
\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$})+\left(\nu_{n+1}\nabla\mbox{\boldmath $u$}_{n+1},\nabla\mbox{\boldmath $w$}\right) + ((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n,\mbox{\boldmath $w$} )-\left(p_{n+1},\nabla\cdot\mbox{\boldmath $w$}\right) &= \\
-\left(\sqrt{\nu_{n}}\,\nabla^{\top}\mbox{\boldmath $u$}_{n},\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $w$}\right) + (\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $w$})\, , & \\
(q,\nabla\cdot\mbox{\boldmath $u$}_{n+1}) &= 0\,,
\label{SDsqrt}
\end{split}$$ for all $(\mbox{\boldmath $w$},q)\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$. Based on that, we will prove the following stability estimate.
**Theorem 4** (Stability of an IMEX stress-divergence formulation). *Let us suppose that the time-step size is given by $\tau=T/N$, $N\ge 2$. Then, the scheme [\[SDsqrt\]](#SDsqrt){reference-type="eqref" reference="SDsqrt"} satisfies the following stability inequality $$\|\mbox{\boldmath $u$}_{N}\|^2 + \tau\big\| \sqrt{\nu_{N}}\,\nabla\mbox{\boldmath $u$}_{N}\big\|^2 + \varepsilon\sum_{n=1}^{N-1}\frac{\tau^2}{T}\big\| \sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2 \leq \mathrm{e}^{\varepsilon}\Bigg[C_0 + \Big(\tau + \frac{T}{\varepsilon}\Big)\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2\Bigg] ,
\label{stabilityMonoSqrt}$$ where $\varepsilon>0$ is an arbitrary constant, and $$\begin{aligned}
C_0 = \left(1+\varepsilon\frac{\tau}{T}\right)\|\mbox{\boldmath $u$}_{0}\|^2 + \tau\|\sqrt{\nu_0}\,\nabla\mbox{\boldmath $u$}_0 \|^2\, .\end{aligned}$$*
Using the same techniques as in the proof of [\[20-1\]](#20-1){reference-type="eqref" reference="20-1"} we can show that $$\begin{aligned}
\|\mbox{\boldmath $u$}_{n+1}\|^2 -\|\mbox{\boldmath $u$}_{n}\|^2 + \tau\big\|\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $u$}_{n+1} \big\|^2 \leq \tau\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2 + \varepsilon\frac{\tau}{T}\|\mbox{\boldmath $u$}_{n}\|^2 + \Big(\tau + \frac{ T}{\varepsilon }\Big)\tau\|\mbox{\boldmath $f$}_{n+1}\|^2 .\end{aligned}$$ Adding up from $n=0$ to $n=N-1$ gives $$\begin{aligned}
\|\mbox{\boldmath $u$}_{N}\|^2 + \tau\big\| \sqrt{\nu_{N}}\,\nabla\mbox{\boldmath $u$}_{N}\big\|^2 \leq \nonumber\\ \left(1+\varepsilon\frac{\tau}{T}\right)\|\mbox{\boldmath $u$}_{0}\|^2 + \tau\big\|\sqrt{\nu_0}\,\nabla\mbox{\boldmath $u$}_0\big\|^2 + \varepsilon\frac{\tau}{T}\sum_{n=1}^{N-1}\|\mbox{\boldmath $u$}_n\|^2 + \Big(\tau + \frac{T}{\varepsilon}\Big)\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2\,.\label{Sec3.2.last}\end{aligned}$$ The proof is completed by adding $$\begin{aligned}
\varepsilon\frac{\tau}{T}\sum_{n=1}^{N-1}\tau\big\| \sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2\end{aligned}$$ to both sides of [\[Sec3.2.last\]](#Sec3.2.last){reference-type="eqref" reference="Sec3.2.last"} and using the discrete Gronwall inequality with $a_n=\|\mbox{\boldmath $u$}_{n}\|^2 + \tau\big\| \sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2$, $b_n= \varepsilon\frac{\tau}{T}\tau\big\| \sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2$, and $c_n$ gathering the terms involving $\mbox{\boldmath $u$}_0$ and $\mbox{\boldmath $f$}$ on the right-hand side of [\[Sec3.2.last\]](#Sec3.2.last){reference-type="eqref" reference="Sec3.2.last"}.
------------------------------------------------------------------------
## Implicit-explicit generalised Laplacian formulation
Due to the squared time-step size accompanying the viscous sum in [\[stabilityMonoSqrt\]](#stabilityMonoSqrt){reference-type="eqref" reference="stabilityMonoSqrt"}, the stability estimate proven in Theorem [Theorem 4](#Th:Monosqrt){reference-type="ref" reference="Th:Monosqrt"} is somewhat weaker than the one proved for [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"}. It is, still, an unconditional stability attained *despite* the explicit treatment of the coupling term. Of course, this comes at the cost of requiring a-priori knowledge of the viscosity field. To circumvent this requirement, we now present a different IMEX scheme based on replacing the viscous term by the generalised Laplacian version [\[genLapStrong\]](#genLapStrong){reference-type="eqref" reference="genLapStrong"}. In that case, treating the coupling term explicitly will lead to the semi-discrete momentum equation $$\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n})-\nabla\cdot\left(\nu_{n}\nabla\mbox{\boldmath $u$}_{n+1}\right) + (\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_n + \nabla p_{n+1} = \nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_{n} + \mbox{\boldmath $f$}_{n+1}\, .$$ The corresponding variational problem is: find $(\mbox{\boldmath $u$}_{n+1},p_{n+1})\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$ such that $$\begin{split}
\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$})+(\nu_{n}\nabla\mbox{\boldmath $u$}_{n+1},\nabla\mbox{\boldmath $w$}) +((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$}) -(p_{n+1},\nabla\cdot\mbox{\boldmath $w$}) &= \\
(\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n,\mbox{\boldmath $w$}) + (\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $w$})\, , & \\
(q,\nabla\cdot\mbox{\boldmath $u$}_{n+1}) &= 0\,,
\label{decoupledGL}
\end{split}$$ for all $(\mbox{\boldmath $w$},q)\in [H^1_0(\Omega)]^d\times L^2_0(\Omega)$. The stability of [\[decoupledGL\]](#decoupledGL){reference-type="eqref" reference="decoupledGL"} is stated next.
**Theorem 5** (Stability of the IMEX generalised Laplacian formulation). *Let us suppose that $\nabla\nu\in [L^\infty(\bar{Q})]^d$, and that the time-step size is given by $\tau=T/N$ with $N\ge 2$. Then, the IMEX scheme [\[decoupledGL\]](#decoupledGL){reference-type="eqref" reference="decoupledGL"} satisfies the following stability inequality $$\|\mbox{\boldmath $u$}_{N}\|^2 +
\sum_{n=1}^{N}\tau\big\|\sqrt{\nu_{n-1}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2 \leq M\Bigg[C_0 + \left(2\tau + \frac{T}{\varepsilon}\right)\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_n\|^2\Bigg], \label{mainResult}$$ where $$\begin{aligned}
M =\mathrm{exp}\Big(\varepsilon + \frac{2T\|\nabla\nu \|_{\infty}^2}{\nu_{\mathrm{min}}}\Big)\, ,\quad
C_0 = \Big(1+\frac{\varepsilon\tau}{ T}+\frac{2\|\nabla\nu \|^2_{\infty}\tau}{\nu_{\mathrm{min}}}\Big)(\|\mbox{\boldmath $u$}_0 \|^2 +\tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_0\|^2)\, ,
\end{aligned}$$ and $\varepsilon>0$ is an arbitrary constant.*
Set $\mbox{\boldmath $w$}=2\tau\mbox{\boldmath $u$}_{n+1}$ and $q = 2\tau p_{n+1}$ in [\[decoupledGL\]](#decoupledGL){reference-type="eqref" reference="decoupledGL"} to get $$\begin{aligned}
\|\mbox{\boldmath $u$}_{n+1}\|^2 - \|\mbox{\boldmath $u$}_{n}\|^2+ \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 + 2\tau\big\|\sqrt{\nu_n}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2 &= 2\tau(\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n,\mbox{\boldmath $u$}_{n+1}) + 2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1})\,.\label{CC}\end{aligned}$$ We now bound the terms on the right-hand side of the inequality above. Using the Cauchy-Schwarz, Hölder, and Young inequalities, we get $$\begin{aligned}
2\tau(\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n,\mbox{\boldmath $u$}_{n+1}) &=2\tau(\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n,\delta\mbox{\boldmath $u$}_{n+1}+\mbox{\boldmath $u$}_{n})\nonumber\\
&\leq 2\tau\|\delta\mbox{\boldmath $u$}_{n+1} \|\,\|\nabla\nu\|_{\infty} \|\nabla\mbox{\boldmath $u$}_n \| + 2\tau\|\nabla\nu\|_{\infty} \|\nabla\mbox{\boldmath $u$}_n \|\, \|\mbox{\boldmath $u$}_{n} \| \nonumber\\
&\leq \frac{1}{2}\|\delta\mbox{\boldmath $u$}_{n+1} \|^2 + 2\tau^2\|\nabla\nu\|_{\infty}^2 \|\nabla\mbox{\boldmath $u$}_n \|^2 + \tau\nu_{\mathrm{min}} \|\nabla\mbox{\boldmath $u$}_n \|^2 + \frac{\tau\|\nabla\nu\|_{\infty}^2}{\nu_{\mathrm{min}}} \|\mbox{\boldmath $u$}_{n}\|^2\,, \label{AA}\end{aligned}$$ and $$\begin{aligned}
2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1}) &=2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n}+\delta\mbox{\boldmath $u$}_{n+1}) \nonumber\\
&\leq 2\tau\|\delta\mbox{\boldmath $u$}_{n+1}\|\,\|\mbox{\boldmath $f$}_{n+1}\| + 2\tau\|\mbox{\boldmath $f$}_{n+1}\|\, \|\mbox{\boldmath $u$}_{n}\| \nonumber\\
&\leq \frac{1}{2}\|\delta\mbox{\boldmath $u$}_{n+1}\|^2 + \left(2\tau+\frac{T}{\varepsilon }\right)\tau\|\mbox{\boldmath $f$}_{n+1}\|^2 + \frac{\varepsilon \tau}{T}\|\mbox{\boldmath $u$}_{n}\|^2\,.\label{BB}\end{aligned}$$ Inserting [\[AA\]](#AA){reference-type="eqref" reference="AA"} and [\[BB\]](#BB){reference-type="eqref" reference="BB"} in [\[CC\]](#CC){reference-type="eqref" reference="CC"} gives $$\begin{aligned}
&\|\mbox{\boldmath $u$}_{n+1}\|^2 - \|\mbox{\boldmath $u$}_{n}\|^2 + 2\tau\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2 \\
&\leq \tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2+ \Big(\frac{\varepsilon\tau}{ T}+\frac{\tau\|\nabla\nu\|_{\infty}^2}{\nu_{\mathrm{min}}}\Big)\|\mbox{\boldmath $u$}_{n}\|^2 + \frac{2\tau\|\nabla\nu\|_{\infty}^2}{\nu_{\mathrm{min}}}\tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2 + \left(2\tau+\frac{T}{\varepsilon}\right)\tau\|\mbox{\boldmath $f$}_{n+1}\|^2\\
&\leq \tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2+ \Big(\frac{\varepsilon\tau}{T}+\frac{2\tau\|\nabla\nu\|_{\infty}^2}{\nu_{\mathrm{min}}}\Big)(\|\mbox{\boldmath $u$}_{n}\|^2+\tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2) + \left(2\tau+\frac{T}{\varepsilon}\right)\tau\|\mbox{\boldmath $f$}_{n+1}\|^2
\, .\end{aligned}$$ Rearranging terms leads to $$\begin{aligned}
&\|\mbox{\boldmath $u$}_{n+1}\|^2-\|\mbox{\boldmath $u$}_{n}\|^2 + \tau\Big(\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2-\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2\Big) + \tau\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2 \\ \leq
& \left(\frac{\varepsilon\tau}{T}+\frac{2\tau\|\nabla\nu\|_{\infty}^2}{\nu_{\mathrm{min}}}\right)(\|\mbox{\boldmath $u$}_{n}\|^2 + \tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2) + \left(2\tau+\frac{T}{\varepsilon}\right)\tau\|\mbox{\boldmath $f$}_{n+1}\|^2\, .\end{aligned}$$ Adding up from $n=0$ to $n=N-1$ gives $$\begin{aligned}
&\|\mbox{\boldmath $u$}_{N}\|^2+\tau\sum_{n=0}^{N-1}\Big(\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n+1} \big\|^2 -\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n} \|^2\Big)+\sum_{n=1}^{N}\tau\big\|\sqrt{\nu_{n-1}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2\\ \leq \ & \|\mbox{\boldmath $u$}_0\|^2 + \Big(\frac{\varepsilon\tau}{ T} + \frac{2\tau\|\nabla\nu\|_{\infty}^2}{\nu_{\mathrm{min}}}\Big)\sum_{n=0}^{N-1}\left(\|\mbox{\boldmath $u$}_{n}\|^2 + \tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2\right) + \left(2\tau+\frac{T}{\varepsilon} \right)\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2\, .\end{aligned}$$ Finally, using $$\begin{aligned}
\sum_{n=0}^{N-1}\Big(\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n+1} \big\|^2 -\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n} \|^2\Big)
\geq \big\|\sqrt{\nu_{N-1}}\,\nabla\mbox{\boldmath $u$}_{N} \big\|^2 - \nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{0}\|^2\, ,\end{aligned}$$ estimate [\[mainResult\]](#mainResult){reference-type="eqref" reference="mainResult"} follows directly from the discrete Gronwall lemma.
------------------------------------------------------------------------
With the stability result just proven, we eliminate the need to know the viscosity at time step $n+1$ and *still* have an unconditionally stable scheme. The price to pay is a higher regularity requirement on the viscosity field, which must now be Lipschitz continuous in space. This higher requirement may not be satisfied in the fully discrete case for some practical scenarios, but in Section [6.2](#sec_nonNewtonian){reference-type="ref" reference="sec_nonNewtonian"} we will discuss possible remedies.
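Before moving on to fractional-step methods, and purely as an illustration of how one time step of the IMEX scheme [\[decoupledGL\]](#decoupledGL){reference-type="eqref" reference="decoupledGL"} can be assembled in practice, we include a minimal sketch written with legacy FEniCS (dolfin 2019.1). This is an illustrative sketch only, not the implementation used for the numerical experiments reported later; the mesh, data, viscosity field and the Taylor--Hood pair are placeholder choices.

```python
# Minimal sketch (assumption: legacy FEniCS/dolfin 2019.1) of one time step of the
# monolithic IMEX generalised-Laplacian scheme: div(nu_n grad u_{n+1}) is implicit,
# the coupling term (grad u_n)^T grad nu_n is explicit.
from dolfin import *

mesh = UnitSquareMesh(32, 32)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([P2, P1]))   # Taylor-Hood velocity-pressure pair
S = FunctionSpace(mesh, "Lagrange", 1)            # scalar space for the viscosity

(u, p) = TrialFunctions(W)
(w, q) = TestFunctions(W)

u_old = Function(W.sub(0).collapse())             # u_n (here initialised to zero)
nu_old = interpolate(Expression("1.0 + 0.5*x[0]", degree=1), S)  # nu_n > 0
f = Constant((0.0, -1.0))
tau = Constant(0.01)

F = ((1.0 / tau) * inner(u - u_old, w) * dx
     + inner(nu_old * grad(u), grad(w)) * dx        # implicit generalised Laplacian
     + inner(grad(u) * u_old, w) * dx               # convection with extrapolated velocity
     - p * div(w) * dx + q * div(u) * dx
     - inner(grad(u_old).T * grad(nu_old), w) * dx  # explicit coupling term
     - inner(f, w) * dx)

bcs = [DirichletBC(W.sub(0), Constant((0.0, 0.0)), "on_boundary"),
       # fix the pressure constant (pure Dirichlet problem) by pinning one point
       DirichletBC(W.sub(1), Constant(0.0),
                   "near(x[0], 0.0) && near(x[1], 0.0)", "pointwise")]

wh = Function(W)
solve(lhs(F) == rhs(F), wh, bcs)                  # one linear solve per time step
u_new, p_new = wh.split()
```

In an actual generalised Newtonian simulation, `nu_old` would be recomputed from the previous velocity field before each solve, in line with Remark 1.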
# Fractional-step schemes {#sec_incremental}
This section is devoted to extending the IMEX methods from Section [3](#sec_mono){reference-type="ref" reference="sec_mono"} to a class of fractional-step schemes: the incremental pressure correction method. We shall consider this prototypical scheme in its standard variant, which allows for variable viscosity [@Deteix2019].
## IMEX stress-divergence formulation
For the Navier--Stokes equations in stress-divergence form, consider the following first-order IMEX fractional-step scheme: find the solution $(\mbox{\boldmath $u$}_{n+1},\hat{\mbox{\boldmath $u$}}_{n+1},p_{n+1})$ of the system $$\begin{aligned}
\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\hat{\mbox{\boldmath $u$}}_{n})-\nabla\cdot(\nu_{n+1}\nabla\mbox{\boldmath $u$}_{n+1}) + \mathbf{c}(\mbox{\boldmath $u$}_n,\mbox{\boldmath $u$}_{n+1})
&= \nabla\cdot(\sqrt{\nu_{n+1}}\sqrt{\nu_{n}}\,\nabla^{\top}\mbox{\boldmath $u$}_{n}) - \nabla p_{n} + \mbox{\boldmath $f$}_{n+1}\,, \\
\frac{1}{\tau}(\hat{\mbox{\boldmath $u$}}_{n+1}-\mbox{\boldmath $u$}_{n+1})+\nabla(p_{n+1} - p_n) &= \mbox{\boldmath $0$}\,, %\label{projection1}
\\
\nabla\cdot\hat{\mbox{\boldmath $u$}}_{n+1} & = 0\, , %\label{projection2}
%\label{viscousStepProjection} \end{aligned}$$ with boundary conditions $\mbox{\boldmath $u$}_{n+1} = \mbox{\boldmath $0$}$ and $\mbox{\boldmath $n$}\cdot\hat{\mbox{\boldmath $u$}}_{n+1} = 0$ on $\Gamma$. Since $\mbox{\boldmath $u$}$ is no longer divergence-free, the skew-symmetrised form $\mathbf{c}$ of the convective term [\[skewSymmetrisation\]](#skewSymmetrisation){reference-type="eqref" reference="skewSymmetrisation"} is used. In addition, since the initial condition $\mbox{\boldmath $u$}_0$ is divergence-free, we assume $p_0=0$ for the rest of this section.
We can use simple algebraic manipulation to eliminate the end-of-step velocity $\hat{\mbox{\boldmath $u$}}$ from the system, reformulating the $(\hat{\mbox{\boldmath $u$}},p)$-subproblem as a pressure Poisson equation. The resulting weak formulation is: find $(\mbox{\boldmath $u$}_{n+1},p_{n+1})\in [H^1_0(\Omega)]^d\times [L^2_0(\Omega)\cap H^1(\Omega)]$ such that $$\begin{split}
\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$})+(\nu_{n+1}\nabla\mbox{\boldmath $u$}_{n+1},\nabla\mbox{\boldmath $w$}) +((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$}) + \frac{1}{2}((\nabla\cdot\mbox{\boldmath $u$}_n)\mbox{\boldmath $u$}_{n+1},\mbox{\boldmath $w$}) &=
\\ (2p_{n}-p_{n-1},\nabla\cdot\mbox{\boldmath $w$})-\left(\sqrt{\nu_n}\,\nabla^{\top}\mbox{\boldmath $u$}_{n},\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $w$}\right) + (\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $w$})\,, & \\
\tau(\nabla(p_{n+1}-p_{n}),\nabla q) + \left(q,\nabla\cdot\mbox{\boldmath $u$}_{n+1}\right) &= 0\,,
\label{fractionalSD}
\end{split}$$ for all $(\mbox{\boldmath $w$},q)\in [H^1_0(\Omega)]^d\times [L^2_0(\Omega)\cap H^1(\Omega)]$. This split-step scheme is also unconditionally stable, as the following result shows.
**Theorem 6** (Stability of an IMEX fractional-step scheme in SD form). *Let the time step be given by $\tau=T/N$ with $N\ge 2$. Then, the fractional-step method [\[fractionalSD\]](#fractionalSD){reference-type="eqref" reference="fractionalSD"} satisfies the following stability estimate $$\begin{split}
\|\mbox{\boldmath $u$}_{N}\|^2 + \tau\big\| \sqrt{\nu_{N}}\,\nabla\mbox{\boldmath $u$}_{N}\big\|^2 + \tau^2\|\nabla p_N\|^2 + \frac{\tau}{T}\sum_{n=1}^{N}\Big(\tau\big\| \sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2+\tau^2\|\nabla p_n\|^2\Big) \\ \leq M\Bigg[\|\mbox{\boldmath $u$}_{0}\|^2 + \tau\|\sqrt{\nu_0}\,\nabla\mbox{\boldmath $u$}_0 \|^2 + T\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2\Bigg]\,,
\label{stabilityMonoSqrtIncremental}
\end{split}$$ where $$\begin{aligned}
M &= \mathrm{exp}\Bigg(\frac{1}{1-\tau/T}\Bigg) = \mathrm{exp}\Bigg(\frac{N}{N-1}\Bigg) \leq \mathrm{e}^2\, .\end{aligned}$$*
Setting $\mbox{\boldmath $w$}=2\tau\mbox{\boldmath $u$}_{n+1}$ in [\[fractionalSD\]](#fractionalSD){reference-type="eqref" reference="fractionalSD"} yields $$\begin{split}
&\|\mbox{\boldmath $u$}_{n+1}\|^2 + \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 - \|\mbox{\boldmath $u$}_{n}\|^2 + 2\tau\big\|\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2 \\ &= 2\tau\left[\left(2p_n-p_{n-1},\nabla\cdot\mbox{\boldmath $u$}_{n+1}\right)-\left(\sqrt{\nu_{n}}\,\nabla^{\top}\mbox{\boldmath $u$}_{n},\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $u$}_{n+1}\right) + (\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1})\right] \\
&\leq 2\tau\left[\left(2p_n-p_{n-1},\nabla\cdot\mbox{\boldmath $u$}_{n+1}\right)+\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2+\big\|\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2 + (\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1})\right].
\label{auxResultFractional}
\end{split}$$
Compared to the monolithic cases presented in the last section, we have an additional pressure term to estimate. Denoting $\delta p_{n+1}:=p_{n+1}- p_{n}$, the second equation in [\[fractionalSD\]](#fractionalSD){reference-type="eqref" reference="fractionalSD"} implies $$\tau(\nabla\delta p_{n+1} - \nabla\delta p_{n},\nabla q) = -(\nabla\cdot\delta\mbox{\boldmath $u$}_{n+1},q) = \left(\delta\mbox{\boldmath $u$}_{n+1},\nabla q\right)\,.
\label{continuityDelta}$$ Now, setting $q = 2\tau(2p_{n}-p_{n-1})$ in [\[fractionalSD\]](#fractionalSD){reference-type="eqref" reference="fractionalSD"} gives $$\begin{aligned}
2\tau(2p_n-p_{n-1},\nabla\cdot\mbox{\boldmath $u$}_{n+1}) &= -2\tau^2(\nabla(2p_n-p_{n-1}),\nabla(p_{n+1}-p_n))\\
&\equiv -2\tau^2\big(\nabla[p_{n+1}-(\delta p_{n+1}-\delta p_{n})],\nabla(p_{n+1}-p_n)\big)\\
&= -2\tau^2(\nabla p_{n+1},\nabla p_{n+1}-\nabla p_n) + 2\tau^2(\nabla\delta p_{n+1}-\nabla\delta p_n,\nabla\delta p_{n+1}).\end{aligned}$$ For the first term on the right-hand side above we have $$\begin{aligned}
-2(\nabla p_{n+1},\nabla p_{n+1}-\nabla p_n) = \|\nabla p_{n}\|^2-\|\nabla p_{n+1}\|^2-\|\nabla\delta p_{n+1}\|^2\, ,\end{aligned}$$ and for the second one we set $q=\tau\delta p_{n+1}$ in [\[continuityDelta\]](#continuityDelta){reference-type="eqref" reference="continuityDelta"}, so that $$\begin{aligned}
2\tau^2(\nabla\delta p_{n+1}-\nabla\delta p_n,\nabla\delta p_{n+1}) = 2\tau(\delta\mbox{\boldmath $u$}_{n+1},\nabla\delta p_{n+1}) \leq \|\delta\mbox{\boldmath $u$}_{n+1}\|^2 + \tau^2\|\nabla\delta p_{n+1}\|^2\,. \end{aligned}$$ Collecting these last results, we estimate the pressure term in [\[auxResultFractional\]](#auxResultFractional){reference-type="eqref" reference="auxResultFractional"} as $$\begin{aligned}
2\tau(2p_n-p_{n-1},\nabla\cdot\mbox{\boldmath $u$}_{n+1}) \leq \tau^2\|\nabla p_{n}\|^2-\tau^2\|\nabla p_{n+1}\|^2 + \|\delta\mbox{\boldmath $u$}_{n+1}\|^2\,.\label{Two-star}\end{aligned}$$ Moreover, $$\begin{aligned}
2\tau(\mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1}) \leq \frac{\tau}{T}\|\mbox{\boldmath $u$}_{n+1}\|^2+T\tau\|\mbox{\boldmath $f$}_{n+1}\|^2\,.\label{Three-star}\end{aligned}$$ Replacing [\[Two-star\]](#Two-star){reference-type="eqref" reference="Two-star"} and [\[Three-star\]](#Three-star){reference-type="eqref" reference="Three-star"} in [\[auxResultFractional\]](#auxResultFractional){reference-type="eqref" reference="auxResultFractional"} we get $$\begin{aligned}
&\left(\|\mbox{\boldmath $u$}_{n+1}\|^2 + \big\|\sqrt{\nu_{n+1}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2+\tau^2\|\nabla p_{n+1}\|^2\right) - \left(\|\mbox{\boldmath $u$}_{n}\|^2 + \big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2+\tau^2\|\nabla p_{n}\|^2\right) \\
&\leq \frac{\tau}{T}\|\mbox{\boldmath $u$}_{n+1}\|^2+T\tau\|\mbox{\boldmath $f$}_{n+1}\|^2\, ,\end{aligned}$$ and adding from $n=1$ to $n=N-1$ leads to $$\begin{aligned}
&\|\mbox{\boldmath $u$}_{N}\|^2 + \tau\big\|\sqrt{\nu_{N}}\,\nabla\mbox{\boldmath $u$}_{N}\big\|^2+\tau^2\|\nabla p_{N}\|^2 \\ &\leq \|\mbox{\boldmath $u$}_{0}\|^2 + \tau\big\|\sqrt{\nu_{0}}\,\nabla\mbox{\boldmath $u$}_{0}\big\|^2+\tau^2\|\nabla p_{0}\|^2 +\frac{\tau}{T}\sum_{n=1}^{N}\|\mbox{\boldmath $u$}_{n}\|^2+T\sum_{n=1}^{N}\tau\|\mbox{\boldmath $f$}_{n}\|^2\,.\end{aligned}$$ Finally, adding $$\begin{aligned}
\frac{\tau}{T}\sum_{n=1}^{N}\Big(\tau\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2+\tau^2\|\nabla p_{n}\|^2\Big)\end{aligned}$$ to both sides of the last inequality, and noticing that $\tau/T<1$, we conclude by using the Gronwall inequality from Lemma [Lemma 2](#Lem:Gronwall-conditional){reference-type="ref" reference="Lem:Gronwall-conditional"}.
------------------------------------------------------------------------
As was the case with Theorem [Theorem 4](#Th:Monosqrt){reference-type="ref" reference="Th:Monosqrt"}, the stability result just shown is in a weak norm, and it also requires knowledge of $\nu_{n+1}$. To remedy this, we will present below an IMEX fractional-step scheme based on the generalised Laplacian rewriting of the diffusive term.
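This rewriting rests on the pointwise identity $$\nabla\cdot(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$}) = \nabla\cdot(\nu\nabla\mbox{\boldmath $u$}) + \nabla^{\top}\mbox{\boldmath $u$}\,\nabla\nu + \nu\nabla(\nabla\cdot\mbox{\boldmath $u$})\,,$$ so that, for divergence-free velocities, the viscous term splits into a Laplacian-like contribution, which will be treated implicitly, and the lower-order term $\nabla^{\top}\mbox{\boldmath $u$}\,\nabla\nu$, which only involves known quantities when evaluated at time level $n$.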
## IMEX generalised Laplacian formulation
After elimination of the end-of-step velocity, the weak problem for the GL version of the IMEX incremental projection scheme is: find $(\mbox{\boldmath $u$}_{n+1},p_{n+1})\in [H^1_0(\Omega)]^d\times [L^2_0(\Omega)\cap H^1(\Omega)]$ such that $$\begin{split}
\frac{1}{\tau}(\mbox{\boldmath $u$}_{n+1}-\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$})+\left(\nu_{n}\nabla\mbox{\boldmath $u$}_{n+1},\nabla\mbox{\boldmath $w$}\right) +((\nabla\mbox{\boldmath $u$}_{n+1})\mbox{\boldmath $u$}_{n},\mbox{\boldmath $w$}) + \frac{1}{2}((\nabla\cdot\mbox{\boldmath $u$}_n)\mbox{\boldmath $u$}_{n+1},\mbox{\boldmath $w$}) &=
\\ \left(2p_{n}-p_{n-1},\nabla\cdot\mbox{\boldmath $w$}\right)+\left(\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n,\mbox{\boldmath $w$}\right) + \left( \mbox{\boldmath $w$},\mbox{\boldmath $f$}_{n+1} \right)\,,&\\
\tau(\nabla(p_{n+1}-p_{n}),\nabla q) + \left(\nabla\cdot\mbox{\boldmath $u$}_{n+1},q\right) &= 0\,,
\label{fractionalGL}
\end{split}$$ for all $(\mbox{\boldmath $w$},q)\in [H^1_0(\Omega)]^d\times [L^2_0(\Omega)\cap H^1(\Omega)]$. The following result states the stability for this scheme.
**Theorem 7** (Stability of the IMEX fractional-step scheme in GL form). *Let us assume that $\nabla\nu\in [L^\infty(\bar{Q})]^d$. Then, the fractional-step IMEX-GL scheme [\[fractionalGL\]](#fractionalGL){reference-type="eqref" reference="fractionalGL"} yields the stability estimate $$\begin{split}
\|\mbox{\boldmath $u$}_{N}\|^2 +\gamma\tau\big\|\sqrt{\nu_{N-1}}\,\nabla \mbox{\boldmath $u$}_{N}\big\|^2 + \tau^2\|\nabla p_{N}\|^2 + \sum_{n=1}^{N}\tau\Big[(2-\gamma)\big\|\sqrt{\nu_{n-1}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2 +\alpha\tau^2\|\nabla p_{n}\|^2\Big] \\ \leq \mathrm{exp}\left(\frac{\alpha T}{1-\alpha\tau}\right)\left[\|\mbox{\boldmath $u$}_0 \|^2 + \tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_0\|^2 + \varepsilon T\sum_{n=1}^{N}\tau\| \mbox{\boldmath $f$}_n\|^2\right]\,,\label{conditionalStability}
\end{split}$$ for any positive time-step size $\tau$ satisfying $$\tau < \left(\frac{\|\nabla\nu\|_{\infty}^2}{\gamma\nu_{\mathrm{min}}}+\frac{1}{\varepsilon T}\right)^{-1} =: \alpha^{-1} \, ,
\label{conditionDeltaT}$$ with $\gamma \in (0,2)$ and $\varepsilon > 0$, both chosen arbitrarily.*
Compared to the stress-divergence version, the only term estimated differently is $$\begin{aligned}
2\tau(\nabla^{\top}\mbox{\boldmath $u$}_n\nabla\nu_n + \mbox{\boldmath $f$}_{n+1},\mbox{\boldmath $u$}_{n+1}) &\leq 2\tau\|\nabla\nu\|_{\infty}\|\nabla\mbox{\boldmath $u$}_n\|\,\|\mbox{\boldmath $u$}_{n+1}\| + 2\tau\|\mbox{\boldmath $f$}_{n+1}\|\,\|\mbox{\boldmath $u$}_{n+1}\|\\
&\leq \gamma\tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_n\|^2+\Big(\frac{\tau\|\nabla\nu\|_{\infty}^2}{\gamma\nu_{\mathrm{min}}}+\frac{\tau}{\varepsilon T}\Big)\|\mbox{\boldmath $u$}_{n+1}\|^2 + \tau\varepsilon T\|\mbox{\boldmath $f$}_{n+1}\|^2 , \end{aligned}$$ in which $0<\gamma < 2$ and $\varepsilon > 0$. Hence, $$\begin{aligned}
\|\mbox{\boldmath $u$}_{n+1}\|^2 + \tau^2\|\nabla p_{n+1}\|^2 + 2\tau\big\|\sqrt{\nu_{n}}\,\nabla\mbox{\boldmath $u$}_{n+1}\big\|^2 \\ \leq \|\mbox{\boldmath $u$}_{n}\|^2 + \tau^2\|\nabla p_{n}\|^2 + \gamma\tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{n}\|^2 +\Big(\frac{\tau\|\nabla\nu\|_{\infty}^2}{\gamma\nu_{\mathrm{min}}}+\frac{\tau}{\varepsilon T}\Big)\|\mbox{\boldmath $u$}_{n+1}\|^2 + \tau\varepsilon T\|\mbox{\boldmath $f$}_{n+1}\|^2 .\end{aligned}$$ Adding up from $n=0$ to $n=N-1$ and using that $p_0=0$ yields $$\begin{split}
& \|\mbox{\boldmath $u$}_{N}\|^2+\gamma\tau\big\|\sqrt{\nu_{N-1}}\,\nabla\mbox{\boldmath $u$}_{N}\big\|^2 + \tau^2\|\nabla p_{N}\|^2 + (2-\gamma)\tau\sum_{n=1}^{N}\big\|\sqrt{\nu_{n-1}}\,\nabla\mbox{\boldmath $u$}_{n}\big\|^2\\ \leq \ &
\|\mbox{\boldmath $u$}_{0}\|^2+\gamma\tau\nu_{\mathrm{min}}\|\nabla\mbox{\boldmath $u$}_{0} \|^2 + \tau\varepsilon T\sum_{n=1}^{N}\|\mbox{\boldmath $f$}_{n}\|^2+\Big(\frac{\tau\|\nabla\nu\|_{\infty}^2}{\gamma\nu_{\mathrm{min}}} +\frac{\tau}{\varepsilon T}\Big)\sum_{n=1}^{N}\|\mbox{\boldmath $u$}_{n}\|^2 \, .
\label{inequalityNplus1}
\end{split}$$ Finally, by adding $$\begin{aligned}
\Big(\frac{\tau\|\nabla\nu\|_{\infty}^2}{\gamma\nu_{\mathrm{min}}}+\frac{\tau}{\varepsilon T}\Big)\sum_{n=1}^{N}\tau^2\|\nabla p_{n}\|^2\end{aligned}$$ to both sides of [\[inequalityNplus1\]](#inequalityNplus1){reference-type="eqref" reference="inequalityNplus1"} and choosing $\tau$ according to [\[conditionDeltaT\]](#conditionDeltaT){reference-type="eqref" reference="conditionDeltaT"}, we get the stability estimate [\[conditionalStability\]](#conditionalStability){reference-type="eqref" reference="conditionalStability"} by applying Lemma [Lemma 2](#Lem:Gronwall-conditional){reference-type="ref" reference="Lem:Gronwall-conditional"}.
------------------------------------------------------------------------
**Remark 3**. *The last stability result requires a CFL condition on the time-step size $\tau$. It is worth noticing that this condition includes only the viscosity field (i.e., the problem data). In addition, since the constants $\gamma$ and $1/\varepsilon$ in [\[conditionDeltaT\]](#conditionDeltaT){reference-type="eqref" reference="conditionDeltaT"} can be chosen arbitrarily close to $2$ and $0$, respectively, this restriction can be simplified to $$\tau < \frac{2\nu_{\mathrm{min}}}{\|\nabla\nu \|_{\infty}^2}\,,
\label{condition}$$ which can also be obtained by letting $T\rightarrow\infty$, that is, letting the problem tend to a steady-state. Interestingly, [\[condition\]](#condition){reference-type="eqref" reference="condition"} is related to the condition imposed on the reaction term in [@Anaya2021] to analyse a mixed finite element method for the linear, stationary Oseen equation.*
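As a quick sanity check, the bound [\[conditionDeltaT\]](#conditionDeltaT){reference-type="eqref" reference="conditionDeltaT"} can be evaluated for a given viscosity field. The following is a minimal sketch, assuming arrays of viscosity values and gradients (sampled, for instance, at quadrature points) are available; the function name and default parameters are illustrative only.

``` python
import numpy as np

def gl_timestep_bound(nu, grad_nu, gamma=1.9, eps=1.0, T=np.inf):
    """Evaluate tau_max = 1 / ( ||grad nu||_inf^2 / (gamma * nu_min) + 1 / (eps * T) ).

    With T = inf (the steady-state limit) and gamma close to 2, this reduces to the
    simplified restriction tau < 2 * nu_min / ||grad nu||_inf^2.

    nu      : array of viscosity values
    grad_nu : array of viscosity gradients, shape (npoints, d)
    """
    nu_min = np.min(nu)
    grad_inf = np.max(np.linalg.norm(grad_nu, axis=-1))
    alpha = grad_inf**2 / (gamma * nu_min) + 1.0 / (eps * T)
    return 1.0 / alpha
```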
# Summary of theoretical results {#sec_summary}
Let us briefly collect the results of our analysis. We have proved the stability of several IMEX discretisations, and the corresponding results are summarised in Table [1](#table_stability){reference-type="ref" reference="table_stability"}. For four discretisations we have unconditional stability (albeit in different norms, some stronger than others), and for another one we have proved stability under a time-step restriction [\[condition\]](#condition){reference-type="eqref" reference="condition"}. The only method for which stability has not been proved is the SD formulation with fully explicit viscosity, that is, [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"}. Therefore, the main purpose of our numerical examples will be to verify whether our analysis is sharp with respect to condition [\[condition\]](#condition){reference-type="eqref" reference="condition"} and to the (lack of) stability of the "naive" IMEX-SD discretisation.
| **Type** | **Left-hand side term** | **Right-hand side term** | **Stability** |
|:---|:---|:---|:---|
| Monolithic | $-\nabla\cdot(2\nu_n\nabla^{\mathrm{s}}\mbox{\boldmath $u$}_{n+1})$ | (none) | unconditional |
| Monolithic | $-\nabla\cdot(\nu_n\nabla\mbox{\boldmath $u$}_{n+1})$ | $\nabla\cdot\big(\nu_n\nabla^{\top}\mbox{\boldmath $u$}_{n}\big)$ | not proved |
| Monolithic | $-\nabla\cdot(\nu_{n+1}\nabla\mbox{\boldmath $u$}_{n+1})$ | $\nabla\cdot\big(\sqrt{\nu_{n+1}}\sqrt{\nu_n}\, \nabla^{\top}\mbox{\boldmath $u$}_{n}\big)$ | unconditional |
| Monolithic | $-\nabla\cdot(\nu_n\nabla\mbox{\boldmath $u$}_{n+1})$ | $\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n$ | unconditional |
| Fractional | $-\nabla\cdot(\nu_{n+1}\nabla\mbox{\boldmath $u$}_{n+1})$ | $\nabla\cdot\big(\sqrt{\nu_{n+1}}\sqrt{\nu_n}\, \nabla^{\top}\mbox{\boldmath $u$}_{n}\big)$ | unconditional |
| Fractional | $-\nabla\cdot(\nu_n\nabla\mbox{\boldmath $u$}_{n+1})$ | $\nabla^{\top}\mbox{\boldmath $u$}_{n}\nabla\nu_n$ | $\tau < \tau_{\mathrm{max}}(\nu)$ |
: Theoretical results on the stability of different IMEX treatments of the viscous term.
[\[table_stability\]]{#table_stability label="table_stability"}
# On the fully discrete problem {#sec_spatial}
Although we restrict our stability analysis to the semi-discrete setting, some brief comments on the fully discrete problem are in order. We will restrict our discussion to conforming finite element spaces with continuous pressures, since that choice does not modify the matrix structure of the semi-discrete problems.
## Matrix structure of the IMEX methods
One of the most appealing computational features of the IMEX schemes presented herein is the resulting matrix structure. Consider a generic spatial discretisation applied to $\mbox{\boldmath $u$}_n$ and $p_n$, denoting by $\mbox{\boldmath $U$}_{n}$ and $\mbox{\boldmath $P$}_{n}$ the respective vectors of degrees of freedom. Since we are interested in problems where the viscosity may not be known, and also interested in IMEX schemes leading to solving only linear systems at each time step, we shall only discuss the schemes that do not require $\nu_{n+1}$.
In the monolithic case (e.g., the scheme [\[strongMomentumMono\]](#strongMomentumMono){reference-type="eqref" reference="strongMomentumMono"}), the fully-discrete linear system to be solved at each time step has the form $$\begin{aligned}
\begin{bmatrix}
\mathbf{A}_{n} & \mathbf{B}^{\top}\, \\
\mathbf{B} & \mathbf{0}
\end{bmatrix}
\begin{bmatrix}
\mbox{\boldmath $U$}_{n+1} \\ \mbox{\boldmath $P$}_{n+1}
\end{bmatrix} =
\begin{bmatrix}
\mbox{\boldmath $F$}_{n+1} \\ \mbox{\boldmath $0$}
\end{bmatrix},\end{aligned}$$ where $\mathbf{B}$ is (minus) the divergence matrix and $\mbox{\boldmath $F$}_{n+1}$ is the right-hand side vector depending on $\mbox{\boldmath $f$}_{n+1}$, $\mbox{\boldmath $U$}_{n}$, and $\nu_{n}$; the velocity-velocity matrix $\mathbf{A}_n$ will consist of $d$ identical blocks: $$\begin{aligned}
\mathbf{A}_{n} =
\begin{bmatrix}
\mathbf{K}_{n} & \mbox{\boldmath $0$} \\
\mbox{\boldmath $0$} & \mathbf{K}_{n}
\end{bmatrix}
\ \text{for} \ d=2\, , \ \, \text{or} \
\mathbf{A}_{n} =
\begin{bmatrix}
\mathbf{K}_{n} & \mbox{\boldmath $0$} & \mbox{\boldmath $0$} \\
\mbox{\boldmath $0$} & \mathbf{K}_{n} & \mbox{\boldmath $0$} \\
\mbox{\boldmath $0$} & \mbox{\boldmath $0$} & \mathbf{K}_{n}
\end{bmatrix}
\ \text{for} \ d=3\, ,
\label{blockDiagonal}\end{aligned}$$ where $\mathbf{K}_{n} := \mathbf{M} + \mathbf{D}(\nu_n) + \mathbf{C}(\mbox{\boldmath $U$}_{n})$, with $\mathbf{M}$, $\mathbf{D}$ and $\mathbf{C}$ denoting, respectively, the standard mass, diffusion, and convection matrices from a scalar transport equation. The same structure will be observed for higher-order temporal discretisations, with $\nu_n$ and $\mbox{\boldmath $U$}_{n}$ replaced by their corresponding extrapolations. This block structure reduces the costs of assembling and solving the algebraic system [@John2016], being especially advantageous for fractional-step schemes, as it allows us to split the velocity update into $d$ scalar problems: $$\begin{aligned}
\mathbf{K}_n\, \mbox{\boldmath $U$}^i_{n+1} = \mbox{\boldmath $F$}^i(\mbox{\boldmath $P$}_{n},\mbox{\boldmath $P$}_{n-1},\mbox{\boldmath $U$}_{n},\nu_n)\, ,\ \ i=1,...\, ,d\, ,\end{aligned}$$ where $\mbox{\boldmath $U$}^i_{n+1}$ contains the degrees of freedom of the $i$-th velocity component. After computing all the components, the pressure is updated by solving another scalar problem.
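A minimal sketch of this component-wise velocity update is given below, assuming the scalar matrix $\mathbf{K}_n$ (in sparse CSC format) and the component-wise right-hand sides have been assembled elsewhere; the function name and the SciPy-based direct solver are illustrative choices, not part of the schemes above.

``` python
from scipy.sparse.linalg import splu

def velocity_update(K_n, F_components):
    """Solve the d decoupled scalar systems K_n U^i = F^i that share one matrix.

    K_n          : sparse CSC matrix, K_n = M + D(nu_n) + C(U_n)
    F_components : list of d right-hand-side vectors, one per velocity component
    """
    lu = splu(K_n)                                  # factorise once per time step ...
    return [lu.solve(F_i) for F_i in F_components]  # ... and reuse it for every component
```

Since $\mathbf{K}_n$ contains the convection matrix $\mathbf{C}(\mbox{\boldmath $U$}_{n})$, the factorisation must be redone at every time step, but it is shared by all $d$ components, which is precisely the advantage of the block-diagonal structure [\[blockDiagonal\]](#blockDiagonal){reference-type="eqref" reference="blockDiagonal"}.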
**Remark 4**. *One may also wish to treat convection explicitly, to eliminate from $\mathbf{K}_n$ the non-symmetric matrix $\mathbf{C}$, so that more efficient solvers can be used. The price to pay for this choice is, in most cases, a CFL condition, as it was the case in recent works [@Burman2022; @Burman2023].*
## Generalised Newtonian fluids {#sec_nonNewtonian}
So far in our analysis we have considered that the viscosity $\nu$ is fixed and known. Nevertheless, in several interesting applications (such as hemodynamic flows and nanofluids, see e.g. [@ABUGATTAS2020529; @FARIAS2023; @GONZALEZ2021101635; @AGUIRRE2022104400]) the fluid is best described using a generalised Newtonian constitutive law. In such a case, the viscosity is given by $$\begin{aligned}
\nu(\mbox{\boldmath $x$},t) = \eta(|\nabla^{\mathrm{s}}\mbox{\boldmath $u$}(\mbox{\boldmath $x$},t)|)\, ,\end{aligned}$$ where $|\cdot|$ denotes the Euclidean norm, and $\eta$ is a continuous function. A popular example is the Carreau model [@Galdi2008] $$\begin{aligned}
\eta(|\nabla^{\mathrm{s}}\mbox{\boldmath $u$}|) = \nu_{\infty} + (\nu_{0}-\nu_{\infty})(1+2\lambda^2|\nabla^{\mathrm{s}}\mbox{\boldmath $u$}|^2)^{-m}\, ,
\label{Carreau}\end{aligned}$$ where $\nu_0$, $\nu_\infty$ and $\lambda$ are positive, experimental rheological parameters. For the common case of shear-thinning fluids ($\nu_{0} > \nu_{\infty}$, with $m>0$), we get $\nu_{\infty} \leq \nu \leq \nu_{0}$.
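As a small illustration, the Carreau law [\[Carreau\]](#Carreau){reference-type="eqref" reference="Carreau"} can be evaluated pointwise as follows; this is only a sketch, with the function name and calling convention chosen for illustration and the rheological parameters passed in by the user.

``` python
import numpy as np

def carreau_viscosity(shear_norm, nu_0, nu_inf, lam, m):
    """Carreau model: nu = nu_inf + (nu_0 - nu_inf) * (1 + 2*lam**2*shear_norm**2)**(-m),
    where shear_norm = |grad_s u| is the (Frobenius) norm of the symmetric gradient."""
    return nu_inf + (nu_0 - nu_inf) * (1.0 + 2.0 * lam**2 * shear_norm**2) ** (-m)
```

With the blood parameters used later in the hemodynamic example, this returns values between $\nu_\infty$ and $\nu_0$, as expected for a shear-thinning fluid.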
The viscosity just described, although covered by our analysis, may cause issues in the fully discrete case, where discontinuities in the velocity gradient lead to discontinuous viscosity. Several approaches can be used to overcome this. For example, local averaging can be used to produce a continuous viscosity field. Alternatively, computing the $L^2(\Omega)$-orthogonal projection onto a space of continuous finite element functions would also fit our hypotheses. This last alternative has been used, with satisfactory numerical results, for schemes related to the ones presented herein [@Schussnig2021JCP; @Pacheco2021CMAME], and it is also the approach we follow in our numerical experiments. The mathematical analysis of the resulting schemes has not been carried out so far, and it is out of the scope of the present paper.
# Numerical examples {#sec_examples}
In this section we present three series of numerical experiments testing the differences in stability properties of the IMEX schemes. In accordance with the discussion in the last section, we have considered conforming Taylor-Hood elements on quadrilateral meshes (biquadratic velocities and continuous bilinear pressures). For problems with a velocity-dependent viscosity we have projected the viscosity onto the continuous bilinear finite element space. Throughout this section, whenever an IMEX-SD formulation is mentioned, we will be referring to one for which we have proved unconditional stability -- unless otherwise stated.
## Temporal convergence
Our first test aims at assessing the temporal accuracy/convergence of the IMEX schemes presented herein. For this purpose, we use a problem with known exact solution. The viscosity field is given by $$\nu(x,y,t) = xyf(t) + g(t)\, ,$$ where $f$ and $g$ are any two functions that guarantee $\nu>0$ in $\Omega$ for all $t\in [0,T]$. With this choice, the exact solution of the Navier--Stokes equation is given by $$\begin{aligned}
\mbox{\boldmath $u$} = 2f(t)
\begin{pmatrix}
y\\x
\end{pmatrix} \quad \text{and} \quad p = (C-2xy)f'(t)\, ,\end{aligned}$$ with $\mbox{\boldmath $f$}\equiv\mbox{\boldmath $0$}$ and the boundary conditions chosen accordingly; the constant $C$ is set so that $p\in L^2_0(\Omega)$ for all $t$. We have chosen this solution since, in space, it belongs to the finite element space, so the entire error stems from time discretisation. We set $\Omega = (0,1)^2$, $T=10$, $f(t)=\sin ^2 t$ and $g(t) \equiv 0.001$. The square domain is discretised uniformly with $4\times4$ square elements, which completely resolves (at each time instant) the exact solution in space. The time-step size is halved at each refinement level, starting from $\tau=1$. In Figure [1](#temporalConvergence){reference-type="ref" reference="temporalConvergence"} we depict the errors in velocity and pressure at $t=10$. For all schemes considered, the convergence is at least linear in both fields, with the monolithic GL being the most accurate for velocity (although the remaining schemes show a higher order of convergence), and a comparable performance of all methods for pressure.
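For completeness, one can check directly that this pair solves the variable-viscosity Navier--Stokes equations with $\mbox{\boldmath $f$}\equiv\mbox{\boldmath $0$}$. Indeed, $$\nabla\mbox{\boldmath $u$}=\nabla^{\mathrm{s}}\mbox{\boldmath $u$}=2f(t)\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix},\qquad (\nabla\mbox{\boldmath $u$})\mbox{\boldmath $u$}=4f(t)^2\begin{pmatrix} x\\ y\end{pmatrix},\qquad \nabla\cdot(2\nu\nabla^{\mathrm{s}}\mbox{\boldmath $u$})=4f(t)\begin{pmatrix} \partial_y\nu\\ \partial_x\nu\end{pmatrix}=4f(t)^2\begin{pmatrix} x\\ y\end{pmatrix},$$ so the convective and viscous terms cancel, while $\partial_t\mbox{\boldmath $u$}=2f'(t)(y,x)^{\top}$ is cancelled by the pressure gradient $\nabla p=-2f'(t)(y,x)^{\top}$; moreover, $\nabla\cdot\mbox{\boldmath $u$}=0$.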
![Temporal convergence study for the IMEX schemes. The quadratic velocity convergence of the fractional-step methods is probably an initial superconvergence that would break down with further temporal refinement.](figures/temporalConvergence-variableViscosity.pdf){#temporalConvergence width="100%"}
**Remark 5**. *It is interesting to notice that all methods present a very stable behaviour even for the largest time-step sizes, despite the fact that the viscosity field considered in this example is such that $\nu_{\mathrm{min}}/\|\nabla \nu \|^2_{\infty} = 5\times 10^{-4}$. According to our analysis, guaranteeing the stability of the fractional-step GL method would require $\tau < 0.001$; no such restriction was observed in this study, however.*
## Stability test {#sec_cavity}
To put the stability of the fractional-step IMEX schemes to a more challenging test, we tackle a problem with less smooth solution. The lid-driven cavity flow is one of the most popular benchmarks for generalised Newtonian flow solvers, and a common setup involves a power-law fluid: $$\begin{aligned}
\eta(|\nabla^{\mathrm{s}}\mbox{\boldmath $u$}|) = \kappa\big|2^{\frac 12}\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\big|^{n-1} \, ,
\label{powerLaw}\end{aligned}$$ where $\kappa$ and $n$ are positive constants. For the square cavity $\Omega = (0,1)^2$, the boundary conditions are a horizontal velocity $U(t)$ on the lid ($y=1$), and no slip on the bottom and lateral walls. We consider a reference shear-thickening fluid with $\kappa=0.01$ and $n=1.5$ [@Neofytou2005]. To avoid a zero viscosity, we replace [\[powerLaw\]](#powerLaw){reference-type="eqref" reference="powerLaw"} by $$\begin{aligned}
\eta(|\nabla^{\mathrm{s}}\mbox{\boldmath $u$}|) = \mathrm{max}\left\lbrace 10^{-5},\, \kappa\big|2^{\frac 12}\nabla^{\mathrm{s}}\mbox{\boldmath $u$}\big|^{n-1} \right\rbrace .\end{aligned}$$ The lid velocity is prescribed according to $$\begin{aligned}
U(t) = \frac{1}{2}\left\lbrace 1+\text{sign}(t-t^{\star})+\left[1-\text{sign}(|2t-t^{\star}|-t^{\star})\right]\sin^2\Big(\frac{\pi t}{2t^{\star}}\Big)\right\rbrace,
\label{ramp}\end{aligned}$$ which ramps up smoothly from $U=0$ (at $t=0$) to $U=1.0$ (for $t\geq t^{\star}=2$). We run the simulation up to $T=30$, aiming for a steady-state flow that can be compared to reference stationary results [@Neofytou2005; @Li2014]. Two uniform meshes are considered: one with $200\times 200$ elements, and one twice as fine. To test the stability of the different schemes, we choose a rather large time-step size, namely $\tau = 0.25$. The aim of this test is twofold: first, we want to verify whether the time-step restriction [\[condition\]](#condition){reference-type="eqref" reference="condition"} found for the fractional-step GL scheme is sharp -- and we do that by tackling a problem where $\|\nabla\nu\|_{\infty}/\nu_{\mathrm{min}}\rightarrow \infty$. Second, we aim to illustrate the poor stability of IMEX-SD schemes when the coupling term $\nabla\cdot(\nu\nabla^{\top}\mbox{\boldmath $u$})$ is made fully explicit (and, since the viscosity $\nu_{n+1}$ is an unknown of the problem, we can only consider the scheme [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"} as IMEX-SD). To do that, three schemes are considered:
1. Fractional-step IMEX, generalised Laplacian form [\[fractionalGL\]](#fractionalGL){reference-type="eqref" reference="fractionalGL"}.
2. Fractional-step IMEX, stress-divergence form using only $\nu_{n}$ (not $\nu_{n+1}$).
3. Monolithic, fully coupled stress-divergence formulation [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"}.
The third variant is meant as a control case, since it is the method for which we have the strongest stability estimate.
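For reference, the regularised power-law viscosity and the lid ramp [\[ramp\]](#ramp){reference-type="eqref" reference="ramp"} used in this test can be evaluated as follows; this is only a sketch with the parameter values quoted above (the function names are illustrative, and mesh, discretisation and solvers are of course not included).

``` python
import numpy as np

def power_law_viscosity(shear_norm, kappa=0.01, n=1.5, nu_floor=1e-5):
    """Regularised power law: nu = max(nu_floor, kappa * |sqrt(2) * grad_s u|**(n-1))."""
    return np.maximum(nu_floor, kappa * (np.sqrt(2.0) * shear_norm) ** (n - 1.0))

def lid_velocity(t, t_star=2.0):
    """Smooth ramp from U = 0 at t = 0 to U = 1 for t >= t_star, as in the ramp formula."""
    bump = (1.0 - np.sign(np.abs(2.0 * t - t_star) - t_star)) * np.sin(np.pi * t / (2.0 * t_star)) ** 2
    return 0.5 * (1.0 + np.sign(t - t_star) + bump)
```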
To verify the accuracy of our solution, we use the location of the primary vortex as benchmark. Table [2](#table_lidVortex){reference-type="ref" reference="table_lidVortex"} shows the comparison between our results (with the coarser mesh) and reference ones [@Neofytou2005; @Li2014]. Among the three methods we used, the two with proven stability are in good agreement with the references; on the other hand, for the IMEX-SD [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"} (with fully explicit viscosity) no primary vortex position could be clearly identified. Figure [2](#stabilityTest){reference-type="ref" reference="stabilityTest"} shows how the kinetic energy $E(t):=\|\mbox{\boldmath $u$}\|^2/2$ evolves with time: while the two stable schemes reach a steady-state level, IMEX-SD [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"} exhibits a clearly unstable energy growth (notice that the spatial refinement does not remedy the instability). The two negative results for the IMEX-SD [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"} are important as they hint strongly that, in general, one cannot guarantee unconditional stability of schemes such as [\[SDnaive\]](#SDnaive){reference-type="eqref" reference="SDnaive"}. Furthermore, Figure [3](#horns){reference-type="ref" reference="horns"} depicts the viscosity field at the final time, where we see the spikes near the corners (which lead to very high gradients). The time-step size used for all methods in this experiment ($\tau=0.25$) clearly violates [\[condition\]](#condition){reference-type="eqref" reference="condition"}. Yet, the fractional-step GL method again showed a stable behaviour, which hints at the possibility that [\[condition\]](#condition){reference-type="eqref" reference="condition"} is pessimistic.
**Ref. [@Neofytou2005]** **Ref. [@Li2014]** **Fully coupled** **IMEX-GL** **IMEX-SD, $\nu_n$**
-------------------------- -------------------- ------------------- ----------------- ----------------------
$(0.565,0.724)$ $(0.5628,0.7282)$ $(0.562,0.728)$ $(0.560,0.728)$ undetermined
: Lid-driven cavity flow with power-law fluid: steady-state position $(x,y)$ of the primary vortex for three schemes: fully coupled SD [\[fullyCoupled\]](#fullyCoupled){reference-type="eqref" reference="fullyCoupled"}, fractional-step IMEX-GL [\[fractionalGL\]](#fractionalGL){reference-type="eqref" reference="fractionalGL"} and fractional-step IMEX-SD using only $\nu_n$. For the latter, the solution did not converge to a steady-state, so it was not possible to determine clear vortex locations.
[\[table_lidVortex\]]{#table_lidVortex label="table_lidVortex"}
![Lid-driven cavity flow with power-law fluid: temporal evolution of the kinetic energy, confirming that a stress-divergence scheme with fully explicit viscosity can be unstable.](figures/powerLawCavity_Re100_Ek.pdf){#stabilityTest width=".99\\textwidth"}
![Lid-driven cavity flow with power-law fluid: steady-state viscosity field obtained using the IMEX-GL method.](figures/powerLawCavity_nu.pdf){#horns width=".8\\textwidth"}
## Idealised hemodynamic flow
Our last numerical test tackles a truly dynamic flow problem. A popular application for variable viscosity is hemodynamics, with blood usually modelled as a shear-thinning fluid. We have applied the Carreau model [\[Carreau\]](#Carreau){reference-type="eqref" reference="Carreau"} with the following parameters: $\nu_{\infty}=3.286\times 10^{-6}$ m$^2$/s, $\nu_{0}=53.33\times 10^{-6}$ m$^2$/s, $\lambda=3.313$ s and $m=0.3216$ [@Cho1991]. With these parameters, we consider an idealised two-dimensional aneurysm, whose geometry is illustrated in Figure [4](#aneurysmGeometry){reference-type="ref" reference="aneurysmGeometry"}. The upper wall is described by the smooth curve $$\begin{aligned}
y(x) = H + \frac{H}{4}[1-\mathrm{sign}(|2x-6H|-3H)]\cos^2\Big(\pi\frac{x-3H}{3H}\Big)\, ,\end{aligned}$$ with $H = 2.5\times 10^{-3}$ m.
![Computational domain representing an idealised ICA aneurysm.](figures/aneurysmGeometry.png){#aneurysmGeometry width=".85\\textwidth"}
To simulate a physiological regime [@John2017], we prescribe a (ramped up) pulsating horizontal inflow profile with period $T_p=0.917$ s: $$\begin{aligned}
u(y,t)|_{x=0} = \frac12\Bigg[1-\Big(\frac{2y}{H}-1\Big)^2\Bigg]U(t) f(t)\, ,\end{aligned}$$ with $U(t)$ as in [\[ramp\]](#ramp){reference-type="eqref" reference="ramp"}, $t^{\star}=T_p$. The function $f(t)$ is a trigonometric polynomial fitted from internal carotid artery (ICA) measurements [@Thiriet2008]: $$\begin{aligned}
f(t) = 10^{-4}\Bigg[\frac{a_0}{2} - \sum_{k=1}^{12}a_k\cos\left(\frac{2k\pi t}{T_p}\right) + b_k\sin\left(\frac{2k\pi t}{T_p}\right)\Bigg]\, ,\end{aligned}$$ where $a_0 = 13349$, and the remaining coefficients are listed on Table [3](#FourierData){reference-type="ref" reference="FourierData"}. That leads to a peak inflow speed of $U_{\mathrm{peak}} = 0.5$ m/s. At the outlet ($x=6H$), we prescribe the outflow condition $$\begin{aligned}
(\nu\nabla\mbox{\boldmath $u$})\mbox{\boldmath $n$} - p\mbox{\boldmath $n$} = \mbox{\boldmath $0$}\, ,
\label{pseudotraction}\end{aligned}$$ while the bottom and top walls are no-slip boundaries ($\mbox{\boldmath $u$}=\mbox{\boldmath $0$}$).
$k$ 1 2 3 4 5 6 7 8 9 10 11 12
------- ------------ ----------- ----------- -------- ------- -------- ------ ------- ------- ------- --------- --------- -- --
$a_k$ 695.98 905.24 452.42 754.46 164.6 133.57 79.4 74.40 36.72 38.80 42.34 3.67
$b_k$ $-2067.65$ $-496.92$ $-340.74$ 202.96 98.82 249.55 81.7 67.84 9.95 15.34 $-3.49$ $-3.22$
: Trigonometric coefficients for the ICA inflow waveform ($a_0 = 13349$).
[\[FourierData\]]{#FourierData label="FourierData"}
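As an illustration, the inflow waveform can be reconstructed from the tabulated coefficients as follows. This is a sketch that simply transcribes Table [3](#FourierData){reference-type="ref" reference="FourierData"}; the helper name is illustrative, and the sum is read with a minus sign on the cosine terms and a plus sign on the sine terms, as displayed above.

``` python
import numpy as np

A0, TP = 13349.0, 0.917   # a_0 and cardiac period T_p in seconds
A = np.array([695.98, 905.24, 452.42, 754.46, 164.6, 133.57,
              79.4, 74.40, 36.72, 38.80, 42.34, 3.67])
B = np.array([-2067.65, -496.92, -340.74, 202.96, 98.82, 249.55,
              81.7, 67.84, 9.95, 15.34, -3.49, -3.22])

def ica_waveform(t):
    """Trigonometric polynomial f(t) fitted from the ICA measurements (scalar t)."""
    k = np.arange(1, 13)
    phase = 2.0 * np.pi * k * t / TP
    return 1e-4 * (A0 / 2.0 - np.sum(A * np.cos(phase)) + np.sum(B * np.sin(phase)))
```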
For this test we consider the GL formulation, as it offers a natural way to enforce the outflow condition [\[pseudotraction\]](#pseudotraction){reference-type="eqref" reference="pseudotraction"}. We use the monolithic IMEX-GL scheme [\[decoupledGL\]](#decoupledGL){reference-type="eqref" reference="decoupledGL"} with different time-step sizes $\tau$ and a mesh with 38,400 elements. Figure [5](#aneurysm_dp){reference-type="ref" reference="aneurysm_dp"} shows, over one cardiac cycle, the mean pressure drop across the aneurysm: $$\begin{aligned}
\Delta p = \frac{1}{H}\left(\int_{\Gamma_{\mathrm{in}}} p \, \mathrm{d}\Gamma - \int_{\Gamma_{\mathrm{out}}} p \, \mathrm{d}\Gamma\right),\end{aligned}$$ which is an important clinical marker [@Pacheco2022IJNMBE]. The results confirm very good temporal stability for the numerical solutions, even for time-step sizes as large as $\tau = 0.05$. For $t=6T$, Figure [6](#aneurysmProfiles){reference-type="ref" reference="aneurysmProfiles"} shows the horizontal velocity profiles at different sections of the vessel, showcasing also good spatial stability.
![Carreau fluid through an idealised ICA aneurysm: normalised pressure difference (inlet minus outlet) over one cardiac cycle, for different time-step sizes $\tau$. All three computations are based on the monolithic IMEX scheme in GL form, which allows enforcing the outflow condition [\[pseudotraction\]](#pseudotraction){reference-type="eqref" reference="pseudotraction"} naturally.](figures/pressureDropAneurysm.pdf){#aneurysm_dp width=".8\\textwidth"}
![Carreau fluid through an idealised ICA aneurysm: horizontal velocity profiles at $t=6T$, for a time-step size $\tau=0.002$.](figures/aneurysm_profiles.pdf){#aneurysmProfiles width=".85\\textwidth"}
# Concluding remarks {#sec_Conclusion}
In this work, we have presented, analysed and tested different implicit-explicit schemes for the variable-viscosity Navier--Stokes equations. While IMEX methods for constant viscosity focus on making the convective velocity (or different stabilisation terms associated to convection) explicit, we split also the viscous term into implicit and explicit parts. This renders the velocity subsystem block-diagonal, thereby enabling simpler, more efficient linear solvers. The temporal stability of the resulting schemes varies dramatically according to the term made explicit, and in which form. To investigate that, we have carried out a semi-discrete analysis of prototypical first-order schemes. Our results show that, when considering a stress-divergence (SD) description of the viscous term, naive extrapolations can cause instabilities. While unconditionally stable IMEX-SD schemes are possible, we have shown that they require a pre-computed viscosity field, which is not realistic in many applications. On the other hand, a generalised Laplacian (GL) formulation of the viscous term leads to stable IMEX methods requiring only known viscosity values. For a monolithic and a fractional-step IMEX-GL scheme, respectively, we have proved unconditional and conditional stability. Our numerical tests, however, suggest that the time-step restriction derived for the fractional-step variant may be an analytical artefact.
Our theoretical and numerical results contribute towards breaking the paradigm that considers the use of fully coupled, elasticity-like formulations for variable-viscosity flows *inevitable*. The GL variants presented in this work have provided accurate and stable results, performing well even for problems with non-smooth solutions. Several questions remain open, though. Proposing and analysing second- and higher-order schemes is a challenging and interesting direction. In addition, analysing the stability and convergence of fully discrete versions of the schemes proposed in this work is also of interest, especially for the case of nonlinear viscosity models.
# Acknowledgments {#acknowledgments .unnumbered}
The work of GRB has been partially supported by the Leverhulme Trust Research Project Grant No. RPG-2021-238. EC acknowledges the support given by the Agencia Nacional de Investigación y Desarrollo (ANID) through the project FONDECYT 1210156.
[^1]: Email address: `[email protected]`, corresponding author
---
abstract: |
Filippov $n$-algebroids are introduced by Grabowski and Marmo as a natural generalization of Lie algebroids. In this note, we characterize Filippov $n$-algebroid structures by considering certain multi-input connections, which we call Filippov connections, on the underlying vector bundle. Through this approach, we are able to express the $n$-ary bracket of any Filippov $n$-algebroid using a torsion-free type formula. Additionally, we transform the generalized Jacobi identity of the Filippov $n$-algebroid into the Bianchi-Filippov identity. Furthermore, in the case of rank $n$ vector bundles, we provide a characterization of linear Nambu-Poisson structures using Filippov connections.
address:
- Center for Mathematical Sciences, College of Mathematics and Information Science, Nanchang Hangkong University
- College of Mathematics and Information Science, Nanchang Hangkong University
- Department of Mathematical Sciences, Tsinghua University
- School of Mathematics and Statistics, Huazhong University of Science and Technology
author:
- Yanhui Bi
- Zhixiong Chen
- Zhuo Chen
- Maosong Xiang
title: The geometric constraints on Filippov algebroids
---
[^1]
Keywords: Filippov algebroids, Filippov connections, Bianchi-Filippov identity, Nambu-Poisson structures.
# Introduction {#introduction .unnumbered}
Filippov introduced a generalized Jacobi identity for $n$-ary skew-symmetric operation, which acts as a replacement for the classical Jacobi identity in the context of Lie algebras [@ref10]. He also proposed the concept of $n$-Lie algebra, also known as Filippov $n$-algebra, with the corresponding generalized Jacobi identity referred to as the Filippov identity. Nambu and Takhtajan extended the concept of Poisson manifold to an $n$-ary generalization called Nambu-Poisson structure in order to study Hamiltonian mechanics more comprehensively [@ref16; @Tak]. It is worth noting that both the Nambu-Poisson structure and the $n$-Lie algebra share the same generalized Jacobi identity. Grabowski and Marmo introduced the concept of Filippov $n$-algebroids, an $n$-ary generalization of Lie algebroids, in order to determine the relationship between linear Nambu-Poisson structures and Filippov algebras [@ref12]. Consequently, it is reasonable to anticipate that many tools used to study Lie algebroids could be enhanced or upgraded to the realm of Filippov algebroids. Therefore, this paper aims to address the absence of the concepts of connections and curvatures of Filippov algebroids in the literature and provide a primitive analysis of these topics from a geometric point of view.
Recall that a Lie algebroid is a (real) vector bundle $A\to M$ together with a bundle map $\rho\colon A\to TM$, called the anchor, and a Lie bracket $[\cdotp,\cdotp]$ on the section space $\Gamma(A)$ of $A$, such that $\rho\colon \Gamma(A)\to \Gamma(TM)$ is a morphism of Lie algebras and the Leibniz rule $$[X, fY] =f[X, Y] +(\rho(X)f)Y, \qquad \forall X, Y\in \Gamma(A) \mbox{ and } f\in C^\infty(M)$$ holds. By an easy smooth analysis, the bracket $[\cdotp,\cdotp]$ can always be reformulated in the form $$\label{Eqt:XYcommute}
[X,Y]=\nabla _XY-\nabla _YX,$$ where $\nabla\colon \Gamma(A)\times \Gamma(A)\to \Gamma(A)$ satisfies the properties $$\nabla _{fX}Y=f \nabla_XY \mbox{ and } \nabla _{X}(fY)=f \nabla_XY + (\rho(X)f) Y.$$ One calls $\nabla$ a *connection* on the anchored bundle $(A,\rho)$. (This is indeed a straightforward generalization of connections on vector bundles.)
When the bracket $[~,~]$ of a Lie algebroid $A$ is expressed in the form [\[Eqt:XYcommute\]](#Eqt:XYcommute){reference-type="eqref" reference="Eqt:XYcommute"}, one says that the connection $\nabla$ is *torsion free*. The curvature form $R^{\nabla} \in \Gamma(\wedge^2 A^* \otimes \operatorname{End}(A))$ of such a connection $\nabla$ is defined in the standard manner: $$R^{\nabla }(X,Y)(Z)=\nabla _X\nabla _YZ-\nabla _Y\nabla _XZ-\nabla _{[X,Y]}Z,$$ for all $X,Y,Z \in \Gamma(A)$. Now, $\rho$ being a morphism of Lie algebras is equivalent to the condition that $R^{\nabla}$ is a tensor in its third argument. Moreover, the Jacobi identity for $[~,~]$ is transformed into the following *Lie-Bianchi identity* $$%\label{Eqt:R3=0}
R^{\nabla}(X,Y)(Z)+R^{\nabla}(Y,Z)(X)+R^{\nabla}(Z,X)(Y)=0.$$ Therefore, Lie algebroids can be realized as anchored bundles equipped with special connections [@PP]. We wish to find an analogous characterization of the $n$-ary bracket of any Filippov algebroid. A significant difference between Lie algebroids and Filippov $n$-algebroids (for $n\geqslant 3$) is that the bracket and anchor of the latter take more arguments (see Definition [Definition 2](#def1.2){reference-type="ref" reference="def1.2"}). Thus there is no obvious way to extend Equation [\[Eqt:XYcommute\]](#Eqt:XYcommute){reference-type="eqref" reference="Eqt:XYcommute"}. We come up with a solution in Section [2](#Sec:mainpart){reference-type="ref" reference="Sec:mainpart"}. Below is a quick summary:
- First, we define (multi-input) connections compatible with a given (multi-input) $n$-anchor (see Definition [Definition 6](#def2.1){reference-type="ref" reference="def2.1"}). This is a quite straightforward extension of usual connections of Lie algebroids (when $n=2$).
- Second, we introduce the curvature form $R^\nabla$ stemming from a connection $\nabla$ (see Equation [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}). We believe this curvature to be the key nontrivial ingredient of this note.
- Third, we prove in Theorem [Theorem 8](#th2.3){reference-type="ref" reference="th2.3"} that certain good connections, which we call *Filippov connections*, fully determine Filippov algebroid structures. This includes two points: (1) the $n$-ary bracket of any Filippov algebroid can be realized in a torsion-free manner (see Equation [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"}); (2) the generalized Jacobi identity is transformed into a constraint on the associated curvature $R^\nabla$, called the *Bianchi-Filippov identity* (see Equation [\[eq12\]](#eq12){reference-type="eqref" reference="eq12"}).
We illustrate a simple method to construct Filippov connections in Section [2.3](#Sec:Dxipair){reference-type="ref" reference="Sec:Dxipair"}.
Finally, we show that there exists a one-to-one correspondence between Filippov $n$-algebroid structures on a vector bundle $A$ of rank $n \geqslant 3$ and linear Nambu-Poisson structures on its dual bundle $A^*$ (see Theorem [Theorem 18](#prop3.3){reference-type="ref" reference="prop3.3"}). As an interesting application of our result, one is able to construct linear Nambu-Poisson structures from Filippov connections (Corollary [Corollary 19](#propo3.3){reference-type="ref" reference="propo3.3"}).
In summary, torsion-free connections subject to the Bianchi-Filippov identity are implicit geometric constraints for Filippov algebroids. Torsion-free connections for Lie algebroids play a crucial role in various mathematical constructions; they are particularly important in the construction of Poincaré-Birkhoff-Witt isomorphisms and Kapranov dg manifolds for Lie algebroid pairs [@LSX]. Additionally, Bianchi identities are significant not only in Riemannian geometry, but also in Poisson geometry [@BDPR]. Therefore, we believe that our approach to Filippov algebroids will be beneficial in this context.
# Preliminaries: Anchored bundles and Filippov algebroids {#Sec:pre}
In this section, we recall the definition of Filippov algebroids from [@ref12]. There is an alternative characterization of Filippov algebroids in terms of certain $1$-derivations [@ref14]. Throughout, $n\geqslant 2$ is a fixed integer, although the only interesting situation is when $n\geqslant 3$. Let us start with the notion of $n$-anchored bundles.
**Definition 1**. *An **$n$-anchored vector bundle** over a smooth manifold $M$ is a pair $(A,\rho)$, where $A$ is a vector bundle over $M$ and $\rho\colon \wedge^{n-1}A \to TM$ is a vector bundle morphism, called the **$n$-anchor** of $A$.*
**Definition 2**. *A **Filippov $n$-algebroid** over a smooth manifold $M$ is an $n$-anchored bundle $(A,\rho)$ over $M$ together with an $\mathbb{R}$-multilinear and skew-symmetric $n$-bracket on the section space $\Gamma(A)$ of $A$: $$[\cdotp,\cdots,\cdotp]:~\underbrace{ \Gamma(A)\times \cdotp \cdotp \cdotp \times \Gamma(A)}_{n-\mbox{copies}} \to \Gamma(A)$$ satisfying the following compatibility conditions:*
1. *The $n$-anchor $\rho$ intertwines the $n$-bracket and the standard Lie bracket $[\cdotp,\cdotp]_{TM}$ on $\Gamma(TM)$: $$\begin{aligned}
\label{eq1}
&&[\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})]_{TM}\notag
\\
&=&\sum_{i=1}^{n-1}\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge [X_1,\cdots,X_{n-1},Y_i]\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1});
\end{aligned}$$*
2. *The $n$-bracket is a derivation with respect to $C^\infty(M)$-multiplications: $$\label{eq2}
[X_1,\cdots,X_{n-1},fY]=f[X_1,\cdots,X_{n-1},Y]+\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})(f)Y;$$*
3. *The following equation holds, to be called the **(generalized) Jacobi identity** (or Filippov identity): $$\begin{aligned}
\label{eq3}
&&[X_1,\cdots,X_{n-1},[Y_1,\cdots,Y_{n}]]\notag
\\
&=&\sum_{i=1}^{n}[Y_1,\cdots,Y_{i-1},[X_1,\cdots,X_{n-1},Y_i],Y_{i+1},\cdots,Y_n],
\end{aligned}$$*
*for all $X_{\cdot},Y_{\cdot}\in \Gamma(A)$ and $f\in C^\infty(M)$.*
Note that any Lie algebroid is a Filippov 2-algebroid. A Filippov $n$-algebra is a Filippov $n$-algebroid over the one-point base manifold. The following examples (due to [@ref12]) illustrate two Filippov $n$-algebroid structures on the trivial tangent bundle $T\mathbb{R}^m$ of $\mathbb{R}^m$ for $m \geqslant n\geqslant 2$.
**Example 3**. *Consider the trivial $n$-anchored bundle $(T\mathbb{R}^m, \rho = 0)$. For each Filippov $n$-algebra structure on $\mathbb{R}^m$ with structure constants $\{c^j_{i_1,\cdots,i_n}\}$ and each smooth function $g\in C^\infty(\mathbb{R}^m)$, we have a Filippov $n$-algebroid $(T\mathbb{R}^m,0)$ whose bracket is defined by $$\left[f_1\frac{\partial}{\partial x_{i_1}},\cdots,f_n\frac{\partial}{\partial x_{i_n}}\right]=gf_1\cdots f_n \sum_{j=1}^{m}c^j_{i_1,\cdots,i_n}\frac{\partial}{\partial x_{j}}.$$*
**Example 4**. *Equip $T\mathbb{R}^m$ with the $n$-anchor map $\rho$ defined by the tensor field $$dx_1 \wedge \cdotp \cdotp \cdotp \wedge dx_{n-1} \otimes \frac{\partial}{\partial x_1},$$ where $x_1,\cdots,x_{n-1},x_n,\cdots,x_m$ are coordinates of $\mathbb{R}^m$. Then the $n$-anchored bundle $(T\mathbb{R}^m ,\rho)$ together with the trivial $n$-bracket on generators $\frac{\partial}{\partial x_{i}}$ produces a (nontrivial) Filippov $n$-algebroid over $\mathbb{R}^m$.*
We emphasize a crucial but often overlooked point in the literature: the presence of a Filippov $n$-bracket on an $n$-anchored bundle $(A,\rho)$ imposes a constraint on the rank of $\rho$ for every integer $n\geqslant 3$.
**Proposition 5**. *Let $(A,[\cdotp,\cdotp \cdotp \cdotp,\cdotp],\rho)$ be a Filippov $n$-algebroid for $n\geqslant 3$. Then the rank of the image of $\rho$ as a distribution on $M$ cannot exceed $1$, i.e., $\operatorname{rank}(\rho(\wedge^{n-1}A))\leqslant 1$.*
*Proof.* Suppose that the image of $\rho$ at $p\in M$ is nontrivial. Then we can find an open neighborhood $U$ of $p$ and some $Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}\in \Gamma(\wedge^{n-1}A)|_U$ such that $\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})$ is nowhere vanishing on $U$. The desired statement amounts to showing that, if $\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})$ is also nowhere vanishing on $U$, then there exists some $c \in C^\infty(U)$ such that $$\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})=c \rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}).$$
In fact, by the definition of Filippov $n$-algebroids, we obtain $$\begin{aligned}
%\label{eq4}
&&[\rho(fX_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})] \\
&\xlongequal[]{\mbox{\eqref{eq1}}}&\sum_{i=1}^{n-1}\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{i-1}\wedge[fX_1,X_2,\cdots,X_{n-1},Y_i]\wedge Y_{i+1}\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}) \\
&\xlongequal[]{\mbox{\eqref{eq2}}}&f\sum_{i=1}^{n-1}\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{i-1}\wedge[X_1,X_2,\cdots,X_{n-1},Y_i]\wedge Y_{i+1}\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}) \\
&&\ +\sum_{i=1}^{n-1}(-1)^{n-1}\rho(X_2\wedge\cdots\wedge X_{n-1}\wedge Y_i)(f)\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{i-1}\wedge X_1\wedge Y_{i+1}\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}) \\
&\xlongequal[]{\mbox{\eqref{eq1}}}&f[\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})] \\
&&\ +\sum_{i=1}^{n-1}(-1)^{n-1}\rho(X_2\wedge\cdots\wedge X_{n-1}\wedge Y_i)(f)\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{i-1}\wedge X_1\wedge Y_{i+1}\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}).
\end{aligned}$$ Meanwhile, since $\rho$ is a morphism of vector bundles, we have $$\begin{aligned}
%\label{eq5}
&&[\rho(fX_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})] \\
&=& [f\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})] \\
&=&f[\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})]-\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})(f)\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}).
\end{aligned}$$ Setting $Y_1 = X_1$ in the above two Equations, we obtain $$\label{eq6}
\rho(X_ 1 \wedge \cdotp \cdotp \cdotp \wedge X_{n-1})(f)\rho(X_1\wedge Y_2 \wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})= -\rho(X_1\wedge Y_2 \wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})(f)\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}).$$
Using Equation [\[eq6\]](#eq6){reference-type="eqref" reference="eq6"}, we have $$\begin{aligned}
\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}) &= g_1\rho(X_1 \wedge Y_2\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}) = -g_1 \rho(Y_2 \wedge X_1 \wedge \cdots \wedge Y_{n-1}) \\
&= -g_1g_2 \rho(Y_2 \wedge Y_1 \wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}) = g_1g_2 \rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1}), %\\
% \rho(Y_1\wedge X_2\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})&\equalbyreason{\eqref{eq7}}&k\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\notag
% \\\mbox{and }\
% \rho(X_1\wedge Y_2\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})&\equalbyreason{\eqref{eq7}}&l\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})
\end{aligned}$$ for some $g_1,g_2 \in C^\infty(U)$.
If $\rho(X_1\wedge Y_2\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})$ is nowhere vanishing on $U$, then the vector fields $\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})$ and $\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})$ must be $C^\infty(U)$-linearly dependent.
If it happens that $\rho(X_1\wedge Y_2\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})|_p = 0$, then we let $\tilde{X}_1=X_1+Y_1$ and consider $\rho(\tilde{X}_1\wedge Y_2\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})$, which does not vanish at $p$ and hence, after possibly shrinking $U$, is nowhere vanishing on $U$. By the argument in the previous case, $\rho(\tilde{X}_1\wedge X_2\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})$ and $\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})$ are $C^\infty(U)$-linearly dependent, and we again get the desired statement.
◻
# The geometric constraints of Filippov algebroids {#Sec:mainpart}
## The main theorem
In this section, we characterize Filippov algebroids via connections on the underlying anchored bundles.
**Definition 6**. *A **connection** on an $n$-anchored bundle $(A,\rho)$ is a bilinear map $\nabla: \Gamma(\wedge^{n-1}A)\times \Gamma(A)\to \Gamma(A)$ satisfying two conditions: $$\begin{aligned}
\nabla_{fX_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n&=&f\nabla_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n, \\\mbox{ and }~
\nabla_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}(fX_n)&=&f\nabla_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n+\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1})(f)X_n
\end{aligned}$$ for all $X_1, \cdots, X_{n-1}\in \Gamma(A)$ and $f\in C^\infty(M)$.*
To see the existence of such a connection, one takes a $TM$-connection on $A$, say $\nabla^{TM}$, and then defines $\nabla$ on the $n$-anchored bundle $(A,\rho)$ as the pullback of $\nabla^{TM}$: $$\nabla_{X_1\wedge \cdots \wedge X_{n-1}}X_n:=\nabla^{TM}_{\rho(X_1\wedge \cdots \wedge X_{n-1})}X_n.$$
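Indeed, both defining properties are inherited directly from $\nabla^{TM}$: since $\rho$ is a morphism of vector bundles, $$\nabla_{fX_1\wedge \cdots \wedge X_{n-1}}X_n=\nabla^{TM}_{f\rho(X_1\wedge \cdots \wedge X_{n-1})}X_n=f\nabla_{X_1\wedge \cdots \wedge X_{n-1}}X_n\,,$$ while the Leibniz rule of $\nabla^{TM}$ gives $$\nabla_{X_1\wedge \cdots \wedge X_{n-1}}(fX_n)=f\nabla_{X_1\wedge \cdots \wedge X_{n-1}}X_n+\rho(X_1\wedge \cdots \wedge X_{n-1})(f)X_n\,.$$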
The key point of this note is that any connection $\nabla$ on $(A, \rho)$ induces a skew-symmetric $n$-bracket on $\Gamma(A)$ defined by $$\begin{aligned}
\label{eq9}
[X_1,\cdots,X_n]^{\nabla}\notag
&:=& \sum_{i=1}^{n}(-1)^{n+i}\nabla_{X_{1}\wedge \cdotp \cdotp \cdotp \widehat{X_{i}}\cdotp \cdotp \cdotp \wedge X_{n} }X_i\notag \\
% &=& \sum_{i=1}^{n}(-1)^{(n-1)i}\nabla_{X_{i+1}\wedge \cdotp \cdotp \cdotp \wedge X_{n}\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{i-1}}X_i\notag\\
&=& \nabla_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n +(-1)^{n-1}\nabla_{X_2\wedge \cdotp \cdotp \cdotp \wedge X_{n}}X_1+ \cdots\notag
\\
&&\ +\nabla_{X_{n-1}\wedge X_n\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-3}}X_{n-2} +(-1)^{n-1}\nabla_{X_{n}\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-2}}X_{n-1}.
\end{aligned}$$
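For instance, when $n=3$ the bracket takes the cyclic form $$[X_1,X_2,X_3]^{\nabla}=\nabla_{X_1\wedge X_2}X_3+\nabla_{X_2\wedge X_3}X_1+\nabla_{X_3\wedge X_1}X_2\,,$$ which generalises the torsion-free expression [\[Eqt:XYcommute\]](#Eqt:XYcommute){reference-type="eqref" reference="Eqt:XYcommute"} recovered in the case $n=2$.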
For computational convenience, we denote the covariant derivative on $\Gamma(A)$ along $X_1, \cdots, X_{n-1} \in \Gamma(A)$ by $$X^{\nabla}_{1\cdots n-1}:=[X_1,\cdots,X_{n-1},-]^{\nabla} \colon \Gamma(A) \to \Gamma(A).$$ This operator extends to all sections in $\wedge^\bullet A$ by $${X_{1\cdots n-1}^\nabla}(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{m}):=\sum_{i=1}^{m}Y_1\wedge\cdotp\cdotp\cdotp \wedge Y_{i-1} \wedge X^{\nabla}_{1\cdots n-1}( Y_i)\wedge Y_{i+1}\wedge \cdotp\cdotp\cdotp\wedge Y_{m}.$$ We then introduce the **curvature form** of $\nabla$, an operation $$R^\nabla(-\cdots-,-)(-):~\underbrace{\Gamma(A)\times \cdots \times\Gamma(A)}_{(n-1)-\mbox{copies}}\times \Gamma(\wedge^{n-1}A) \times \Gamma(A) \to \Gamma(A)$$ defined by $$\begin{aligned}
\label{eq10}
&&R^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})(Z)\notag
\\
&:=&[ {X_{1\cdots n-1}^\nabla},\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}](Z)-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})}Z\notag
\\
&:=& X^{\nabla}_{1\cdots n-1}\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}Z -\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}X^{\nabla}_{1\cdots n-1} Z
-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})}Z,
\end{aligned}$$ for all $X_1, \cdots, X_{n-1}, Y_1, \cdots, Y_{n-1}, Z\in \Gamma(A)$ and $n\geqslant 3$.
When $n=3$, it reads $$\begin{aligned}
R^\nabla(X_1,X_2,Y_1\wedge Y_2)(Z)&=&X^{\nabla}_{1 2}\nabla_{Y_1\wedge Y_{2}}Z -\nabla_{Y_1\wedge Y_{2}}X^{\nabla}_{12} Z-\nabla_{ {X_{12}^\nabla}(Y_1\wedge Y_{2})}Z
\\
&=&[X_1,X_2,\nabla_{Y_1\wedge Y_{2}}Z]^\nabla-\nabla_{Y_1\wedge Y_{2}}[X_1,X_2,Z]^\nabla
\\
&&\ -\nabla_{[X_1,X_2,Y_1]^\nabla\wedge Y_2+Y_1\wedge[X_1,X_2,Y_2]^\nabla}Z.
\end{aligned}$$ When $n=4$, the expression of $R^\nabla$ consists of twenty terms. As $n$ gets larger, more terms are involved.
It is easy to verify from the defining Equation $\eqref{eq10}$ that $R^\nabla$ is $C^\infty(M)$-linear with respect to the argument $Y_1\wedge\cdots \wedge Y_{n-1}$.
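Indeed, for any $f\in C^\infty(M)$, the Leibniz-type property of $X^{\nabla}_{1\cdots n-1}$ produces the same extra term $\rho(X_1\wedge\cdots\wedge X_{n-1})(f)\,\nabla_{Y_1\wedge\cdots\wedge Y_{n-1}}Z$ in both $X^{\nabla}_{1\cdots n-1}\nabla_{fY_1\wedge\cdots\wedge Y_{n-1}}Z$ and $\nabla_{X^{\nabla}_{1\cdots n-1}(fY_1\wedge\cdots\wedge Y_{n-1})}Z$, and these two contributions cancel in [\[eq10\]](#eq10){reference-type="eqref" reference="eq10"}, so that $$R^\nabla(X_1,\cdots,X_{n-1},fY_1\wedge\cdots\wedge Y_{n-1})(Z)=fR^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge\cdots\wedge Y_{n-1})(Z)\,.$$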
**Definition 7**. *A connection $\nabla$ on an $n$-anchored bundle $(A, \rho)$ is called a **Filippov connection** if the following two conditions are true:*
- *The curvature $R^\nabla$ is $C^\infty(M)$-linear with respect to its last argument, i.e., for all $f\in C^\infty(M)$ we have $$%\label{eq11}
R^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge\cdotp\cdotp\cdotp\wedge Y_{n-1})(fZ) = fR^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge\cdotp\cdotp\cdotp\wedge Y_{n-1})(Z);$$*
- *The following equality holds, to be called the **Bianchi-Filippov identity**: $$\begin{aligned}
\label{eq12}
&&\sum_{i=1}^{n}(-1)^{n+i}R^\nabla(X_1,\cdots,X_{n-1},Y_{1}\wedge \cdotp \cdotp \cdotp \widehat{Y_{i}}\cdotp \cdotp \cdotp \wedge Y_{n} )Y_i\notag \\
% &=&\sum_{i=1}^{n}(-1)^{(n-1)i}R^\nabla(X_1,\cdots,X_{n-1},Y_{i+1}\wedge \cdotp\cdotp \cdotp \wedge Y_{n}\wedge Y_1\wedge\cdotp\cdotp\cdotp\wedge Y_{i-1})Y_i\notag \\
&=&R^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge \cdotp\cdotp \cdotp \wedge Y_{n-1})Y_n+(-1)^{n-1}R^\nabla(X_1,\cdots,X_{n-1},Y_2\wedge \cdotp\cdotp \cdotp \wedge Y_{n})Y_1\notag
\\
&&\ +\cdots+ R^\nabla(X_1,\cdots,X_{n-1},Y_{n-1}\wedge Y_n\wedge Y_1\wedge \cdotp\cdotp \cdotp \wedge Y_{n-3})Y_{n-2}\notag
\\
&&\ \ +(-1)^{n-1}R^\nabla(X_1,\cdots,X_{n-1},Y_{n}\wedge Y_{1}\wedge\cdot\cdot\cdot\wedge Y_{n-2})Y_{n-1}\notag
\\
&=&0.
\end{aligned}$$*
We are ready to state our main theorem, which characterizes Filippov algebroids fully by Filippov connections.
**Theorem 8**. *Let $(A,\rho)$ be an $n$-anchored bundle. If $\nabla$ is a Filippov connection on $(A,\rho)$, then $(A,\rho,[\cdotp,\cdots,\cdotp]^{\nabla})$ is a Filippov $n$-algebroid, where $[\cdotp,\cdots,\cdotp]^\nabla$ is the $n$-bracket given by Equation [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"}. Moreover, any Filippov $n$-algebroid structure on $(A,\rho)$ arises from a Filippov connection in this way.*
## Proof of Theorem [Theorem 8](#th2.3){reference-type="ref" reference="th2.3"} {#proof-of-theorem-th2.3}
The proof of Theorem [Theorem 8](#th2.3){reference-type="ref" reference="th2.3"} follows immediately from the three lemmas below.
**Lemma 9**. *Let $\nabla$ be a connection on an $n$-anchored bundle $(A,\rho)$. The curvature $R^\nabla$ satisfies the first condition of Definition [Definition 7](#def2.2){reference-type="ref" reference="def2.2"} if and only if the anchor $\rho$ intertwines the induced $n$-bracket $[\cdotp,\cdots,\cdotp]^\nabla$ and the Lie bracket $[\cdotp,\cdotp]_{TM}$ on $\Gamma(TM)$, i.e., $$\begin{aligned}
&&[\rho(X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}),\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge Y_{n-1})]_{TM} \\
&=&\sum_{i=1}^{n-1}\rho(Y_1\wedge \cdotp \cdotp \cdotp \wedge [X_1,\cdots,X_{n-1},Y_i]^{\nabla}\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}),
\end{aligned}$$ for all $X_1, \cdots, X_{n-1}, Y_1, \cdots, Y_{n-1} \in \Gamma(A)$.*
*Proof.* By the definition of curvature, we have $$\begin{aligned}
&&R^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})(fZ) \\
&\xlongequal[]{\mbox{\eqref{eq10}}}&[ {X_{1\cdots n-1}^\nabla},\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}](fZ)-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})}(fZ) \\
&=&X^{\nabla}_{1\cdots n-1}\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}(fZ) -\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}X^{\nabla}_{1\cdots n-1}(fZ )
-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})}(fZ) \\
&=&fR^\nabla(X_1,\cdots,X_{n-1},Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})(Z)+\rho(X_1\wedge\cdotp \cdotp \cdotp \wedge X_{n-1})\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})(f)Z \\
&&\ -\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})\rho(X_1\wedge\cdotp \cdotp \cdotp \wedge X_{n-1})(f)Z-\sum_{i=1}^{n-1}\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge X^{\nabla}_{1\cdots n-1}( Y_i)\wedge \cdotp\cdotp \cdotp\wedge Y_{n-1})(f)Z.
\end{aligned}$$ Hence, the curvature $R^\nabla$ is $C^\infty(M)$-linear with respect to its last argument if and only if $$\begin{aligned}
%\label{eq14}
&&\rho(X_1\wedge\cdotp \cdotp \cdotp \wedge X_{n-1})\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})-\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})\rho(X_1\wedge\cdotp \cdotp \cdotp \wedge X_{n-1}) \\
&=&\sum_{i=1}^{n-1}\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge X^{\nabla}_{1\cdots n-1}(Y_i)\wedge \cdotp\cdotp \cdotp\wedge Y_{n-1}) \\
&=&\sum_{i=1}^{n-1}\rho(Y_1\wedge\cdotp \cdotp \cdotp \wedge [X_1,\cdots,X_{n-1},Y_i]^\nabla\wedge \cdotp\cdotp \cdotp\wedge Y_{n-1}).
\end{aligned}$$ ◻
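For instance, when $n=3$, the compatibility condition above reads $$[\rho(X_1\wedge X_2),\rho(Y_1\wedge Y_2)]_{TM}=\rho([X_1,X_2,Y_1]^{\nabla}\wedge Y_2)+\rho(Y_1\wedge[X_1,X_2,Y_2]^{\nabla}).$$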
**Lemma 10**. *Let $\nabla$ be a connection on an $n$-anchored bundle $(A,\rho)$. The curvature $R^\nabla$ satisfies the second condition of Definition [Definition 7](#def2.2){reference-type="ref" reference="def2.2"}, i.e. the **Bianchi-Filippov identity** [\[eq12\]](#eq12){reference-type="eqref" reference="eq12"}, if and only if the induced $n$-bracket $[\cdotp,\cdots,\cdotp]^\nabla$ satisfies the (generalized) Jacobi identity [\[eq3\]](#eq3){reference-type="eqref" reference="eq3"}.*
*Proof.* The statement follows directly from the following lines of computation: $$\begin{aligned}
&&\sum_{i=1}^{n}(-1)^{n+i}R^\nabla(X_1,\cdots,X_{n-1},Y_{1}\wedge \cdotp \cdotp \cdotp \widehat{Y_{i}}\cdotp \cdotp \cdotp \wedge Y_{n} )Y_i \\ &\xlongequal[]{\mbox{\eqref{eq12}}}&\sum_{i=1}^{n}(-1)^{(n-1)i}R^\nabla(X_1,\cdots,X_{n-1},Y_{i+1}\wedge \cdotp\cdotp \cdotp \wedge Y_{n}\wedge Y_1\wedge\cdotp\cdotp\cdotp\wedge Y_{i-1})Y_i \\
&\xlongequal[]{\mbox{\eqref{eq10}}}&\sum_{i=1}^{n}(-1)^{(n-1)i}([X^{\nabla}_{1\cdots n-1},\nabla_{Y_{1}\wedge\cdotp \cdotp \cdotp \wedge \widehat{Y_{i}}\wedge\cdotp\cdotp\cdotp\wedge Y_n}](Y_i)-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_{1}\wedge\cdotp \cdotp \cdotp \wedge \widehat{Y_{i}}\wedge\cdotp\cdotp\cdotp\wedge Y_n)}Y_i) \\
&=&[X_1,\cdots,X_{n-1},\nabla_{Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1}}]^{\nabla}(Y_n)-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-1})}Y_n \\
&& +(-1)^{n-1}([X_1,\cdots,X_{n-1},\nabla_{Y_{2}\wedge\cdotp \cdotp \cdotp \wedge Y_{n}}]^{\nabla}(Y_1)-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_{2}\wedge\cdotp \cdotp \cdotp \wedge Y_{n})}Y_1) +\cdots \\
&& +[X_1,\cdots,X_{n-1},\nabla_{Y_{n-1}\wedge Y_n\wedge Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-3}}]^{\nabla}(Y_{n-2})-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_{n-1}\wedge Y_n\wedge Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-3})}Y_{n-2} \\
&& +(-1)^{n-1}([X_1,\cdots,X_{n-1},\nabla_{Y_{n}\wedge Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-2}}]^\nabla(Y_{n-1})-\nabla_{ {X_{1\cdots n-1}^\nabla}(Y_{n}\wedge Y_1\wedge\cdotp \cdotp \cdotp \wedge Y_{n-2})}Y_{n-1}) \\
&\xlongequal[]{\mbox{\eqref{eq9}}} &[X_1,\cdots,X_{n-1},[Y_1,\cdots,Y_n]^\nabla]^\nabla - \sum_{i=1}^{n}[Y_1,\cdots, [X_1,\cdots,X_{n-1},Y_i]^\nabla, \cdots,Y_n]^\nabla.
\end{aligned}$$ ◻
The next lemma shows that any Filippov algebroid can be realized by a Filippov connection.
**Lemma 11**. *Let $(A,[\cdotp,\cdotp \cdotp \cdotp,\cdotp],\rho)$ be a Filippov $n$-algebroid. Then there exists a Filippov connection $\nabla$ on the underlying $n$-anchored bundle $(A,\rho)$ such that $[\cdotp,\cdots,\cdotp]=[\cdotp,\cdots,\cdotp]^{\nabla}$ (the **torsion-free** property).*
*Proof.* Given a connection $\nabla^\circ$ on $(A,\rho)$, we are able to obtain an $\mathbb{R}$-multilinear operation $K(\cdotp,\cdotp \cdotp \cdotp,\cdotp)$ on $\Gamma(A)$ by $$\begin{aligned}
K(X_1,\cdots,X_n) &:=&[X_1,\cdots,X_n]- [X_1,\cdots,X_n]^{\nabla^\circ}.
\end{aligned}$$ Using axioms of Filippov algebroids, it is easy to see that $K(\cdotp,\cdots,\cdotp)$ is indeed $C^\infty(M)$-multilinear. Then we define a new connection $\nabla$ on $(A,\rho)$ by $$\begin{aligned}
\nabla_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n &:=& \frac{1}{n}K(X_1,\cdots,X_n)+\nabla^\circ_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n.
\end{aligned}$$ It remains to check the desired equality: $$\begin{aligned}
&&[X_1,\cdots,X_n ]^{\nabla} \\
&=&\nabla_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n+\sum_{i=1}^{n-1}(-1)^{(n-1)i}\nabla_{X_{i+1}\wedge \cdotp \cdotp \cdotp \wedge X_{n}\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{i-1}}X_i \\
&=&K(X_1,\cdots,X_n)+\nabla^\circ_{X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-1}}X_n+\sum_{i=1}^{n-1}(-1)^{(n-1)i}\nabla^\circ_{X_{i+1}\wedge \cdotp \cdotp \cdotp \wedge X_{n}\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{i-1}}X_i \\
&=&[X_1,\cdots,X_n].
\end{aligned}$$ Here the second equality uses the skew-symmetry of $K$ (being the difference of two skew-symmetric $n$-brackets), so that the $n$ cyclic $K$-terms arising from Equation [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"} add up to $K(X_1,\cdots,X_n)$. ◻
The following examples illustrate three Filippov connections on the trivial tangent bundle $T\mathbb{R}^m$ of $\mathbb{R}^m$ for $m \geqslant n\geqslant 2$.
**Example 12**. *Consider the trivial $n$-anchored vector bundle $(T\mathbb{R}^m, \rho =0)$. Suppose that the vector space $\mathbb{R}^m$ is endowed with a Filippov $n$-algebra structure whose structure constants are $\{c^j_{i_1,\cdots,i_n}\}$ with respect to the standard basis of $\mathbb{R}^m$. Given a smooth function $g\in C^\infty(\mathbb{R}^m)$ and a set of constants $\{a^j_{i_1\cdots i_{n-1}}\}$ satisfying the equality: $$\label{eq19}
a^j_{i_1\cdots i_{n-1}}+\sum_{k=1}^{n-1}(-1)^{(n-1)k}a^j_{i_{k+1}\cdots i_n i_1\cdots i_{k-1}}=c^j_{i_1\cdots i_{n}},$$ we are able to obtain a connection on $(T\mathbb{R}^m,\rho=0)$ generated by the single nontrivial relation: $$\label{eq 20}
\nabla_{\frac{\partial}{\partial x_{i_1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{i_{n-1}}}}\frac{\partial}{\partial x_{i_n}}:=g\sum_{j=1}^{m}a^j_{i_1\cdots i_{n-1}}\frac{\partial}{\partial x_j}.$$ It follows from the recipe in Equation [\[eq9\]](#eq9){reference-type="eqref" reference="eq9"} and Equation [\[eq19\]](#eq19){reference-type="eqref" reference="eq19"} that $$\left[\frac{\partial}{\partial x_{i_1}},\cdots,\frac{\partial}{\partial x_{i_n}}\right]^{\nabla}=g\sum_{j=1}^{m}c^j_{i_1\cdots i_{n}}\frac{\partial}{\partial x_j}.$$ So, what we recover is the Filippov structure on $T\mathbb{R}^m$ as in Example [Example 3](#example1.3){reference-type="ref" reference="example1.3"}. Hence, $\nabla$ is indeed a Filippov connection.*
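For instance, when $n=3$, Equation [\[eq19\]](#eq19){reference-type="eqref" reference="eq19"} simply requires that $a^j_{i_1i_2}+a^j_{i_2i_3}+a^j_{i_3i_1}=c^j_{i_1i_2i_3}$ for all indices $i_1,i_2,i_3$ and $j$.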
**Example 13**. *Consider the $n$-anchor map $\rho$ on the tangent bundle $T\mathbb{R}^m$ defined by the tensor field $dx_1\wedge \cdotp \cdotp \cdotp \wedge dx_{n-1} \otimes \frac{\partial}{\partial x_1}$, where $x_1,\cdots,x_{n-1},x_n,\cdots,x_m$ are coordinate functions of $\mathbb{R}^m$. It is obvious that the (nontrivial) connection on $(T\mathbb{R}^m,\rho)$ generated by the trivial relation $$\nabla_{\frac{\partial}{\partial x_{i_1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{i_{n-1}}}}\frac{\partial}{\partial x_{i_n}}:=0$$ produces a nontrivial $n$-bracket which is compatible with $\rho$. Indeed, what we recover is the Filippov structure on $T\mathbb{R}^m$ as in Example [Example 4](#example1.4){reference-type="ref" reference="example1.4"}, and the said connection $\nabla$ is a Filippov connection.*
**Example 14**. *Continue to work with the anchored bundle $(A,\rho)$ as in the previous example. We consider a different connection with the only nontrivial generating relations: $$\nabla_{\frac{\partial}{\partial x_{i_1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{i_{n-1}}}}\frac{\partial}{\partial x_{k}}= \left\{
\begin{aligned}
&\frac{\partial}{\partial x_{k}}, \mbox{ if }\quad \frac{\partial}{\partial x_{i_1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{i_{n-1}}} = \frac{\partial}{\partial x_{1}}\wedge\cdots\wedge\frac{\partial}{\partial x_{n-1}};\\
& 0,\mbox{ otherwise,}
\end{aligned}
\right.$$ for all $x_k\in \{x_1,\cdots,x_m\}$. Then, the associated $n$-bracket can be derived: $$\left[\frac{\partial}{\partial x_{i_1}},\cdots,\frac{\partial}{\partial x_{i_n}}\right]^{\nabla} = \left\{
\begin{aligned}
&\frac{\partial}{\partial x_{i_n}}, \quad \text{if}\quad i_j = j, ~\forall 1 \leqslant j \leqslant n-1;
\\
&0,\text{otherwise}.
\end{aligned}
\right.$$ for all $i_1<i_2<\cdots<i_n$. A careful check shows that the associated curvature $R^{\nabla}$ vanishes. Hence $\nabla$ is indeed a Filippov connection and the above bracket defines a Filippov algebroid structure on $(A,\rho)$.*
## Construction of Filippov connections {#Sec:Dxipair}
Let $A\to M$ be a vector bundle. Consider the bundle $\mathop{\mathrm{CDO}}(A)$ of covariant differential operators (cf. [@Lie]\*III, [@pseudoalgebras], see also [@Differential], where the notation $\mathcal{D}(A)$ is used instead of $\mathop{\mathrm{CDO}}(A)$). An element $D$ of $\Gamma(\mathop{\mathrm{CDO}}(A))$, called a covariant differential operator, is an $\mathbb{R}$-linear operator $\Gamma(A)\rightarrow \Gamma(A)$ together with a vector field $\hat{D} \in \Gamma(TM)$, called the symbol of $D$, satisfying $$\begin{aligned}
D(fX)=fD(X)+\hat{D}(f)\cdot X, \qquad\forall X\in\Gamma(A), f\in C^\infty(M).
\end{aligned}$$ The operator $D$ can be extended naturally to an operator $D \colon \Gamma(\wedge ^{n-1} A^*)\rightarrow \Gamma(\wedge ^{n-1} A^*)$, defined by $$\begin{aligned}
&&\langle X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}|D(\bar{\eta})\rangle\notag
\\
&=&\hat{D}\langle X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}|\bar{\eta}\rangle-\sum_{i=1}^{n-1}\langle X_1\wedge\cdotp\cdotp\cdotp\wedge D(X_i)\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}|\bar{\eta}\rangle,
\end{aligned}$$ for all $X_1, \cdots, X_{n-1}\in \Gamma(A)$ and $\bar{\eta} \in\Gamma(\wedge^{n-1}A^*)$.
Given a pair $(D,\bar{\xi})$, where $D\in\Gamma(\mathop{\mathrm{CDO}}(A))$ and $\bar{\xi}\in \Gamma(\wedge^{n-1}A^*)$, one is able to construct a map $$\begin{aligned}
\rho^{(D,\bar{\xi})} \colon \Gamma(\wedge^{n-1}A) &\to& \Gamma(TM),\\
X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1} &\mapsto& \langle X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}|\bar{\xi} \rangle \hat{D}.
\end{aligned}$$ It is clear that $\rho^{(D,\bar{\xi})}$ makes $A$ an $n$-anchored bundle. And the rank of the image of $\rho^{(D,\bar{\xi})}$ does not exceed $1$.
Define a connection on the $n$-anchored bundle $(A,\rho^{(D,\bar{\xi})})$ by $$\label{Eqt:pairconnection}
\nabla^{(D,\bar{\xi})}_{X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}}X_n : =\langle X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}|\bar{\xi}\rangle D(X_n).$$
**Proposition 15**. *If the pair $(D,\bar{\xi})$ is subject to the relation $$\label{EigenCondition}
D(\bar{\xi})=g\bar{\xi} , \quad \mbox{ for some } g\in C^\infty(M),$$ then $\nabla^{(D,\bar{\xi})}$ defined as in [\[Eqt:pairconnection\]](#Eqt:pairconnection){reference-type="eqref" reference="Eqt:pairconnection"} is a Filippov connection on the $n$-anchored bundle $(A,\rho^{(D,\bar{\xi})})$.*
We omit the proof of this proposition as it follows from a straightforward but lengthy verification. As a consequence of Theorem [Theorem 8](#th2.3){reference-type="ref" reference="th2.3"}, a pair $(D,\bar{\xi})$ subject to Condition [\[EigenCondition\]](#EigenCondition){reference-type="eqref" reference="EigenCondition"} produces a Filippov $n$-algebroid structure on $A$. Its $n$-bracket reads: $$\begin{aligned}
&&[X_1,\cdots,X_n]^{\nabla^{(D,\bar{\xi})}}\\
&=& \langle X_1\wedge\cdotp\cdotp\cdotp\wedge X_{n-1}|\bar{\xi}\rangle D(X_n) +(-1)^{n-1}\langle X_2\wedge \cdotp \cdotp \cdotp \wedge X_{n}|\bar{\xi}\rangle D(X_1)+ \cdots \\
&&\ +\langle X_{n-1}\wedge X_n\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-3}|\bar{\xi}\rangle D(X_{n-2}) +(-1)^{n-1}\langle X_{n}\wedge X_1\wedge \cdotp \cdotp \cdotp \wedge X_{n-2}|\bar{\xi}\rangle D(X_{n-1}).
\end{aligned}$$
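For instance, when $n=3$, this bracket reads $$[X_1,X_2,X_3]^{\nabla^{(D,\bar{\xi})}}=\langle X_1\wedge X_2|\bar{\xi}\rangle D(X_3)+\langle X_2\wedge X_3|\bar{\xi}\rangle D(X_1)+\langle X_3\wedge X_1|\bar{\xi}\rangle D(X_2).$$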
# A construction of linear Nambu-Poisson structures {#Sec:Nambupart}
In this section, we establish, under certain conditions, a relation between linear Nambu-Poisson structures and Filippov connections.
## From Filippov algebroids to linear Nambu-Poisson structures
**Definition 16**. *[@Tak; @ref15] A Nambu-Poisson structure of order $n$ on a smooth manifold $P$ is an $\mathbb{R}$-multilinear and skew-symmetric $n$-bracket on the smooth function space $C^\infty(P)$: $$\{\cdotp,\cdotp\cdotp\cdotp,\cdotp\}\colon \underbrace{C^\infty(P)\times\cdotp\cdotp\cdotp\times C^\infty(P)}_{n-\mbox{copies}}\to C^\infty(P)$$ satisfying the following two conditions:*
*The $n$-bracket is a derivation with respect to $C^\infty(P)$-multiplications: $$\{f_1,\cdots,f_{n-1},g_1g_2\}=g_1\{f_1,\cdots,f_{n-1},g_2\}+\{f_1,\cdots,f_{n-1},g_1\}g_2;$$*
*The (generalized) Jacobi identity (also known as the fundamental identity) $$\{f_1,\cdots,f_{n-1},\{g_1,\cdots,g_n\}\}=\sum_{i=1}^{n}\{g_1,\cdots,\{f_1,\cdots,f_{n-1},g_i\},\cdots,g_{n}\},$$ holds for all $f_{i}$ and $g_{j} \in C^\infty(P)$.*
*The pair $(P,\{\cdotp,\cdotp\cdotp\cdotp,\cdotp\})$ is called a Nambu-Poisson manifold.*
Alternatively, one could express the said bracket via an $n$-vector field $\pi$ on $P$ such that $$\label{Eqt:piandnbracket}
\{f_1,\cdots, f_n\}=\pi(df_1,\cdots, df_n),\quad \forall f_1,\cdots, f_n\in C^\infty(P).$$
Given a smooth vector bundle $p\colon A\to M$, the section space $\Gamma(A)$ is identified with the space $C^\infty_{\mathrm{lin}}(A^*)$ of fiberwise linear functions on $A^*$, the dual vector bundle of $A$, while elements in $p^\ast(C^\infty(M))$ are called basic functions on $A^*$. To fix notation, for any section $X \in \Gamma(A)$, let $\phi_X \in C^\infty_{\mathrm{lin}}(A^*)$ denote the corresponding linear function on $A^\ast$.
**Definition 17**. *[@BBDM] A Nambu-Poisson structure of order $n$ on the vector bundle $A^\ast \to M$ is said to be linear, if it satisfies the following three conditions:*
*The bracket of $n$ linear functions is again a linear function;*
*The bracket of $(n-1)$ linear functions and a basic function is a basic function;*
*The bracket of $n$ functions is zero if more than one of the arguments is a basic function.*
Note that any Poisson manifold is a Nambu-Poisson manifold of order $2$. A well-known fact is the following: A Lie algebroid $A$ over $M$ gives rise to a linear Poisson manifold $A^*$, and vice versa. It is pointed out in [@BBDM]\*Theorem 4.4 that a linear Nambu-Poisson structure of order $n$ on $A^*$ corresponds to a Filippov $n$-algebroid structure on $A$ (see also [@BL]). However, the reverse process is generally not valid for $n\geqslant 3$, mainly because the condition of a Nambu-Poisson structure is very strong (cf. [@ref13]). Nevertheless, in this paper, we will require that $A$ be a rank $n$ vector bundle and establish the one-to-one correspondence between Filippov $n$-algebroid structures on $A$ and linear Nambu-Poisson structures on $A^*$. More specifically, under the said condition, our main theorem below serves as a complement to [@BBDM]\*Theorem 4.4.
**Theorem 18**. *Let $(A,\rho,[\cdotp,\cdots,\cdotp])$ be a Filippov $n$-algebroid over a smooth manifold $M$, where $A \to M$ is a vector bundle of rank $n\geqslant 3$. Then there exists a unique linear Nambu-Poisson structure on the dual bundle $A^*\to M$ such that for all sections $X_1,\cdots,X_n\in \Gamma(A)$, $$\label{Nambu condition-1}
\{\phi_{X_1},\cdots, \phi_{X_n}\}=\phi_{[X_1,\cdots, X_n]}.$$*
Note that, if Equation [\[Nambu condition-1\]](#Nambu condition-1){reference-type="eqref" reference="Nambu condition-1"} holds, then it is easy to deduce that the linear Nambu-Poisson structure on $A^*$ and the anchor map $\rho$ are also related: $$\label{Nambu condition-2}
\{\phi_{X_1},\cdots,\phi_{X_{n-1}},p^\ast f \}=p^\ast(\rho(X_1\wedge\cdotp\cdotp\cdotp \wedge X_{n-1})(f)),$$ for all $f\in C^\infty(M)$ and $X_1, \cdots, X_{n-1} \in \Gamma(A)$.
As a direct application of Theorems [Theorem 8](#th2.3){reference-type="ref" reference="th2.3"} and [Theorem 18](#prop3.3){reference-type="ref" reference="prop3.3"}, one can construct linear Nambu-Poisson structures out of Filippov connections:
**Corollary 19**. *If $\nabla$ is a Filippov connection on an $n$-anchored bundle $(A, \rho)$, where $A \to M$ is a vector bundle of rank $n\geqslant 3$, then the dual bundle $A^\ast$ admits a unique linear Nambu-Poisson structure of order $n$ defined by $$\{\phi_{X_1},\cdots, \phi_{X_n}\}=\phi_{[X_1,\cdots, X_n]^\nabla},$$ for all $X_1,\cdots,X_n\in \Gamma(A)$.*
## Proof of Theorem [Theorem 18](#prop3.3){reference-type="ref" reference="prop3.3"} {#proof-of-theorem-prop3.3}
The proof is divided into three steps.
**Step 1** --- Since functions of type $\phi_{X}$ (for $X\in \Gamma(A)$) and $p^*f$ (for $f\in C^\infty(M)$) generate $C^\infty(A^*)$, there exists a unique $\mathbb{R}$-multilinear $n$-bracket $\{\cdotp,\cdots,\cdotp\}$ on $C^\infty(A^*)$ satisfying Equations [\[Nambu condition-1\]](#Nambu condition-1){reference-type="eqref" reference="Nambu condition-1"} and [\[Nambu condition-2\]](#Nambu condition-2){reference-type="eqref" reference="Nambu condition-2"}. We wish to write the corresponding $n$-vector field $\pi$ on $A^*$ explicitly.
To this end, we work locally and consider the trivialization $A|_U\cong U\times \mathbb{R}^n$ over an open subset $U\subset M$ with coordinates $x^1, \cdots, x^m$; let $\{ X_1,\cdots,X_n\}$ be a local basis of $\Gamma(A|_U)$. Then $$\{ y_1=\phi_{X_1},\cdots,y_n=\phi_{X_n},p^*x^1,\cdots, p^*x^m\}$$ forms a chart on $A^*|_U$. For convenience, $p^*x^i$ is still denoted by $x^i$.
Suppose further that the Filippov algebroid $A|_U$ is described by the structure functions $c^k$ and $f_{1\cdots \widehat{l}\cdots n}\in C^\infty(U)$ such that $$[X_1,\cdots,X_n]=\sum_{k=1}^n c^k X_k$$ $$\mbox{and } \quad \rho(X_{1}\wedge\cdots \widehat{X_{l}}\cdots\wedge X_{n})=
f_{1\cdots \widehat{l}\cdots n}\frac{\partial}{\partial x^1}.$$ Here we have utilized Proposition [Proposition 5](#propo1.5){reference-type="ref" reference="propo1.5"}. Then one is able to find the expression of the $n$-vector field $\pi$ on $A^*|_U$: $$\label{n-vector}
\pi=\sum_{k=1}^nc^ky_k\frac{\partial}{\partial y_1}\wedge\cdots\wedge\frac{\partial}{\partial y_n}
+\sum_{l=1}^nf_{1\cdots \widehat{l}\cdots n}\frac{\partial}{\partial y_{1}}
\wedge\cdots\wedge\frac{\partial}{\partial \widehat{y_l}}\wedge\cdots\wedge\frac{\partial}{\partial y_{n}}\wedge \frac{\partial}{\partial x^1}$$ which corresponds to the $n$-bracket $\{\cdots\}$ on $A^*|_U$.
**Step 2** --- We need to set up a preparatory lemma.
**Lemma 20**. *There exists a local basis $\{X_1,\cdots,X_{n}\}$ of $\Gamma(A|_U)$ such that the corresponding structure functions $c^k$ and $f_{1\cdots \widehat{l}\cdots n}$ satisfy the following relations: for all $i\neq j$ (in $\{1,\cdots, n\}$), $$\begin{aligned}
f_{1\cdots \widehat{i}\cdots n}\frac{\partial f_{1\cdots \widehat{j}\cdots n}}{\partial x^1}&=&f_{1\cdots \widehat{j}\cdots n}\frac{\partial f_{1\cdots \widehat{i}\cdots n}}{\partial x^1};\label{volume-1}\\
(-1)^if_{1\cdots \widehat{i}\cdots n}c^j&=&(-1)^jf_{1\cdots \widehat{j}\cdots n}c^i.\label{volume-2}
\end{aligned}$$*
*Proof.* Consider the map $\rho: \wedge^{n-1}A|_U \to TU$. By Proposition [Proposition 5](#propo1.5){reference-type="ref" reference="propo1.5"}, for any point $p\in U$, we have $\operatorname{rank}(\rho(\wedge^{n-1}A)_p)\leqslant 1$, and hence $\dim(\ker(\rho_p)) \geqslant (n-1)$. So, we are able to find a local basis $\{Z_1,\cdots,Z_n\}$ of $\Gamma(\wedge^{n-1}A|_U)$ such that $\rho(Z_2)$, $\cdots$, $\rho(Z_n)$ are trivial.
Take an arbitrary $\Omega \in \Gamma(\wedge^{n}A |_U)$ which is nowhere vanishing on $U$. Consider $$\Omega ^\sharp : A^\ast |_U\to \wedge^{n-1}A |_U,\ \ \ \ \Omega ^\sharp(\alpha):=i_\alpha\Omega ,$$ which is an isomorphism of vector bundles. Then we obtain a basis $\{ \alpha_1$, $\cdots$, $\alpha_n \}$ of $\Gamma(A^\ast |_U)$ by setting $\alpha_i:=(\Omega^\sharp)^{-1}(Z_i)$. Let $\{ X_1,\cdots,X_n\}$ be the dual basis of $\Gamma(A |_U)$ corresponding to $\{\alpha_1,\cdots,\alpha_n\}$. There exists a nowhere vanishing smooth function $g\in C^\infty(U)$ such that $\Omega =gX_1\wedge\cdots\wedge X_n$, and hence $Z_i=i_{\alpha_i}\Omega =gX_1\wedge\cdots\widehat{X_i}\wedge\cdots\wedge X_n$.
Since we have $\rho(Z_i)=0$ ($\forall i\in \{2,\cdots, n\}$), we also have $$\label{Eqt:rhoXizero} \rho(X_1\wedge\cdots\widehat{X_i}\wedge\cdots\wedge X_n)=0,\quad \forall i\in \{2,\cdots, n\}.$$ Using the axiom of a Filippov algebroid, we have a relation $$\begin{aligned}
&&f_{1\cdots\widehat{i}\cdots n}\frac{\partial f_{1\cdots\widehat{j}\cdots n}}{\partial x^1}\frac{\partial }{\partial x^1}-f_{1\cdots\widehat{j}\cdots n}\frac{\partial f_{1\cdots\widehat{i}\cdots n}}{\partial x^1}\frac{\partial }{\partial x^1} \\
&=&[f_{1\cdots\widehat{i}\cdots n}\frac{\partial}{\partial x^1},f_{1\cdots\widehat{j}\cdots n}\frac{\partial}{\partial x^1}]_{TM} \\
&=&[\rho(X_1\wedge\cdots\wedge\widehat{X_i}\wedge\cdots\wedge X_n),\rho(X_1\wedge\cdots\wedge\widehat{X_j}\wedge\cdots\wedge X_n)]_{TM} \\
&\xlongequal[]{\mbox{\eqref{eq1}}}&\sum_{k=1,k< j}^{n}\rho(X_1\wedge\cdots\wedge[X_1,\cdots,\widehat{X_i},\cdots,X_n,X_k]\wedge\cdots\wedge \widehat{X_j}\wedge\cdots X_n) \\
&&\ +\sum_{k=1,k> j}^{n}\rho(X_1\wedge\cdots\wedge \widehat{X_j}\wedge\cdots\wedge[X_1,\cdots, \widehat{X_i},\cdots,X_n,X_k]\wedge\cdots X_n) \\
&=&(-1)^{(n-i)}c^if_{1\cdots\widehat{j}\cdots n}\frac{\partial }{\partial x^1}-(-1)^{(n-j)}c^jf_{1\cdots\widehat{i}\cdots n}\frac{\partial }{\partial x^1}.
\end{aligned}$$ According to the previous fact [\[Eqt:rhoXizero\]](#Eqt:rhoXizero){reference-type="eqref" reference="Eqt:rhoXizero"}, all the lines above must vanish; equating the first and the last expressions to zero yields the desired equalities [\[volume-1\]](#volume-1){reference-type="eqref" reference="volume-1"} and [\[volume-2\]](#volume-2){reference-type="eqref" reference="volume-2"}. ◻
**Step 3** --- We wish to show that the $n$-bracket $\{\cdotp,\cdots,\cdotp\}$ given in **Step 1**, or the $n$-vector field $\pi$ locally given in Equation [\[n-vector\]](#n-vector){reference-type="eqref" reference="n-vector"}, is a linear Nambu-Poisson structure on $A^*$.
We need the following proposition due to Dufour and Zung [@DZ]\*Proposition 2.1.
**Proposition 21**. *[@DZ][\[Omega propo\]]{#Omega propo label="Omega propo"} Let $\Omega$ be a volume form on an $l$-dimensional manifold $P$, and $\pi$ an $n$-vector field on $P$, where $l>n\geqslant 3$. Consider the $(l-n)$-form $\omega:=i_\pi\Omega$ on $P$. Then $\pi$ defines a Nambu-Poisson structure (via Equation [\[Eqt:piandnbracket\]](#Eqt:piandnbracket){reference-type="eqref" reference="Eqt:piandnbracket"}) if and only if $\omega$ satisfies the following two conditions: $$\begin{aligned}
(i_K\omega)\wedge\omega=0, \label{NP-1}
\\
(i_K\omega)\wedge d\omega=0 \label{NP-2}
\end{aligned}$$ for any $(l-n-1)$-vector field $K$ on $P$.*
Consider the volume form $\Omega= dy_1\wedge\cdots\wedge dy_n \wedge dx^1\wedge\cdots\wedge dx^m$ on $A^\ast|_U$, where $U$, $y_i$, and $x^j$ are as earlier, and we suppose that such a coordinate system stems from a basis $\{X_1,\cdots, X_n\}$ fulfilling Lemma [Lemma 20](#Lem:structureconditions){reference-type="ref" reference="Lem:structureconditions"}. According to Proposition [\[Omega propo\]](#Omega propo){reference-type="ref" reference="Omega propo"}, we need to examine the $m$-form defined by: $$\label{eq:omega}
\omega:=i_{\pi}\Omega=\sum_{k=1}^nc^ky_kdx^1\wedge\cdots\wedge dx^m+\sum_{j=1}^n(-1)^{n-j+1}f_{1\cdots \widehat{j}\cdots n}dy_j\wedge dx^2\wedge\cdots\wedge dx^m.$$ We can easily check that $\omega$ satisfies Equation [\[NP-1\]](#NP-1){reference-type="eqref" reference="NP-1"} and hence it remains to check Equation [\[NP-2\]](#NP-2){reference-type="eqref" reference="NP-2"}. One first finds that $$\begin{aligned}
\label{eq:domega}
d\omega&=&\sum_{k=1}^ny_kdc^k\wedge dx^1\wedge\cdots\wedge dx^m+\sum_{k=1}^nc^kdy_k\wedge dx^1\wedge\cdots\wedge dx^m\notag\\
&&\ +\sum_{j=1}^n(-1)^{n-j+1}df_{1\cdots \widehat{j}\cdots n}\wedge dy_j\wedge dx^2\wedge\cdots\wedge dx^m\notag\\
&=&\sum_{k=1}^nc^kdy_k\wedge dx^1\wedge\cdots\wedge dx^m+\sum_{j=1}^n(-1)^{n-j+1}\frac{\partial f_{1\cdots\widehat{j}\cdots n}}{\partial x^1}dx^1\wedge dy_j\wedge dx^2\wedge\cdots\wedge dx^m.
\end{aligned}$$
Consider the following special type of $(m-1)$-vector field on $A^\ast|_U$: $K =\frac{\partial}{\partial x^2}\wedge\cdots\wedge\frac{\partial}{\partial x^m}$. Then one computes: $$\begin{aligned}
&& (i_K\omega) \wedge d \omega \\
&=&(-1)^{m-1}\left(\sum_{k=1}^nc^ky_kdx^1+ \sum_{j=1}^n(-1)^{n-j+1}f_{1\cdots \widehat{j}\cdots n} dy_j \right) \wedge d\omega\\
&=&\sum_{j=1}^n (-1)^{m+n-j}f_{1\cdots \widehat{j}\cdots n}dy_j \wedge \sum_{i=1}^nc^idy_i\wedge dx^1\wedge\cdots\wedge dx^m\\
&&\ +\sum_{j=1}^n(-1)^{m+n-j}f_{1\cdots \widehat{j}\cdots n}dy_j\wedge \sum_{i=1}^n(-1)^{n-i+1}\frac{\partial f_{1\cdots\widehat{i}\cdots n}}{\partial x^1}dx^1\wedge dy_i\wedge dx^2\wedge\cdots\wedge dx^m \\
&=&(-1)^{m}\sum_{j=1}^n\sum_{i=1,i\neq j}^{n}((-1)^{(n-i)}c^if_{1\cdots\widehat{j}\cdots n}-(-1)^{(n-j)}c^jf_{1\cdots\widehat{i}\cdots n})dy_j\wedge dy_i\\
&&\ +\sum_{j=1}^n\sum_{i=1,i\neq j}^{n}(-1)^{m+i+j}(f_{1\cdots \widehat{j}\cdots n}\frac{\partial f_{1\cdots\widehat{i}\cdots n}}{\partial x^1}-f_{1\cdots \widehat{i}\cdots n}\frac{\partial f_{1\cdots\widehat{j}\cdots n}}{\partial x^1})dy_j\wedge dy_i\\
&\xlongequal[]{\mbox{\eqref{volume-1},\eqref{volume-2}}}&0.
\end{aligned}$$ This justifies Equation [\[NP-2\]](#NP-2){reference-type="eqref" reference="NP-2"} for this particular $K$. For other types of $K$, it is easy to verify Equation [\[NP-2\]](#NP-2){reference-type="eqref" reference="NP-2"} as well. This completes the proof of Theorem [Theorem 18](#prop3.3){reference-type="ref" reference="prop3.3"}.
[^1]: Supported by the Natural Science Foundation of China (NSFC) grants 11961049(Bi), 11901221(Xiang), and 12071241(Zhuo Chen), and by the Key Project of Jiangxi Natural Science Foundation grant 20232ACB201004(Bi, Xiang and Zhixiong Chen).
---
abstract: |
The $d$-th order Laplacian spectral moment of a $k$-uniform hypergraph is the sum of the $d$-th powers of all eigenvalues of its Laplacian tensor. In this paper, we obtain some expressions of the Laplacian spectral moments for $k$-uniform power hypergraphs, and these expressions can be represented by some parameters of graphs. And we show that some graphs can be determined by their high-order Laplacian spectrum by using the Laplacian spectral moments of power hypergraphs.
address: College of Mathematical Sciences, Harbin Engineering University, Harbin 150001, PR China
author:
- Jueru Liu
- Lixiang Chen
- Changjiang Bu
bibliography:
- spbib.bib
title: The Laplacian spectral moments of power hypergraphs
---
*Keywords:* hypergraph, spectral moment, Laplacian tensor, trace.\
*AMS classification (2020):* 05C50, 05C65, 15A69.
# Introduction
For a $k$-uniform hypergraph $\mathcal{H}$, the $d$-th order (Laplacian) spectral moment of $\mathcal{H}$ is equal to the sum of the $d$-th powers of all eigenvalues of its adjacency (Laplacian) tensor. Since the $d$-th order trace of a tensor is equal to the sum of the $d$-th powers of all its eigenvalues [@ref7], the $d$-th order (Laplacian) spectral moment of $\mathcal{H}$ is equal to the $d$-th order trace of its adjacency (Laplacian) tensor.
In 2013, Shao et al. [@ref8] gave a formula for the trace of tensors in terms of some graph parameters. The coefficients of the characteristic polynomial and topological indices of hypergraphs can be studied via the spectral moments of hypergraphs [@ref9; @ref10; @ref11; @ref22]. In 2021, Clark and Cooper [@ref10] expressed the spectral moments of a hypergraph in terms of the number of Veblen multi-hypergraphs and obtained the Harary-Sachs coefficient theorem for hypergraphs. A formula for the spectral moment of a hypertree was given in terms of the number of certain subhypertrees [@ref11], and some high-order cospectral invariants of trees were given by the spectral moments of hypertrees [@ref16]. In [@ref22], the Estrada index and subgraph centrality of uniform hypergraphs were studied, which are closely related to the traces of the adjacency tensor.
For Laplacian spectral moments of hypergraphs, the expressions of the traces of the first $k$ orders of the Laplacian tensor were given in terms of the degree sequence of $k$-uniform hypergraphs [@ref9], and an expression of the $(k+1)$-st order trace of the Laplacian tensor of $k$-uniform hypergraphs was given in [@ref12].
In this paper, we study the Laplacian spectral moments of power hypergraphs. We give expressions of the first $2k$ orders Laplacian spectral moments of $k$-uniform power hypergraphs, which can be represented by some parameters of graphs. Moreover, we show that some graphs which are not determined by their (signless) Laplacian spectrum can be determined by their high-order (signless) Laplacian spectrum, by considering the (signless) Laplacian spectral moments of power hypergraphs.
# Preliminaries
Next, we introduce some notations and concepts for tensors and hypergraphs. For a positive integer $n$, let $[n]=\{1,2,\ldots,n\}$ and $[n]^k=\{i_1i_2\cdots i_k|i_j\in[n],j=1,\ldots,k\}$. A $k$-order $n$-dimension complex *tensor* $\mathcal{T}=(t_{i\alpha})$ is a multi-dimensional array with $n^{k}$ entries on complex number field $\mathbb{C}$, where $i\in[n]$ and $\alpha\in[n]^{k-1}$.
A *hypergraph* $\mathcal{H}=(V,E)$ consists of vertex set $V=\{1,2,\ldots,n\}$ and edge set $E=\{e_{1},e_{2},\ldots,e_{m}\}$, where $e_{j}\subseteq V(\mathcal{H})$ for $j\in[m]$. If $|e_{j}|=k$ for each $j\in[m]$ and $k\ge2$, then $\mathcal{H}$ is called a *k-uniform* hypergraph. For a $k$-uniform hypergraph $\mathcal{H}$ with $n$ vertices, its *adjacency tensor* $\mathcal{A}_{\mathcal{H}}=(a_{i\alpha})$ is a $k$-order $n$-dimension tensor has entries $$a_{i\alpha}=\left\{\begin{array}{ll}
\frac{1}{(k-1)!},& \textnormal{if}\ \{i,i_2,\ldots,i_k\}\in E(\mathcal{H})\ \textnormal{for}\ \alpha=i_2\cdots i_k,\\
0,& \textnormal{otherwise}.
\end{array}\right.$$ The spectrum and eigenvalues of $\mathcal{A}_{\mathcal{H}}$ are called the spectrum and eigenvalues of $\mathcal{H}$, respectively [@ref3]. For a vertex $i\in V(\mathcal{H})$, the *degree* of $i$ is the number of edges of $\mathcal{H}$ containing the vertex $i$, denoted by $d_{i}$. The *degree tensor* $\mathcal{D}_{\mathcal{H}}=\textnormal{diag}(d_1,\ldots,d_n)$ of $\mathcal{H}$ is a $k$-order $n$-dimension diagonal tensor. The tensor $\mathcal{L}_{\mathcal{H}}=\mathcal{D}_{\mathcal{H}}-\mathcal{A}_{\mathcal{H}}$ is the *Laplacian tensor* of $\mathcal{H}$ [@ref5].
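For example, for the $3$-uniform hypergraph $\mathcal{H}$ consisting of the single edge $\{1,2,3\}$, the nonzero entries of $\mathcal{A}_{\mathcal{H}}$ are $a_{123}=a_{132}=a_{213}=a_{231}=a_{312}=a_{321}=\frac{1}{2}$, every vertex has degree $1$, and the Laplacian tensor $\mathcal{L}_{\mathcal{H}}$ has diagonal entries $l_{111}=l_{222}=l_{333}=1$ and entries $-\frac{1}{2}$ at the six positions listed above.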
In 2005, Lim [@ref1] and Qi [@ref2] introduced the eigenvalues of tensors independently. Denote the set of $n$-dimension complex vectors and the set of $k$-order $n$-dimension complex tensors by $\mathbb{C}^{n}$ and $\mathbb{C}^{[k,n]}$, respectively. For a tensor $\mathcal{T}=(t_{i\alpha})\in\mathbb{C}^{[k,n]}$ and $x=(x_{1},\ldots,x_{n})^{\mathsf{ T}}\in\mathbb{C}^{n}$, $\mathcal{T}x^{k-1}$ is a vector in $\mathbb{C}^{n}$ whose $i$-th component is $$(\mathcal{T}x^{k-1})_{i}=\sum\limits_{\alpha\in[n]^{k-1}}t_{i\alpha}x^{\alpha},$$ where $x^{\alpha}=x_{i_1}\cdots x_{i_{k-1}}$ if $\alpha=i_1\cdots i_{k-1}$. For a complex number $\lambda\in\mathbb{C}$, if there is a vector $x\in \mathbb{C}^n\setminus\{0\}$ such that $$\mathcal{T}x^{k-1}=\lambda x^{[k-1]},$$ then $\lambda$ is called an *eigenvalue* of $\mathcal{T}$ and $x$ is an *eigenvector* of $\mathcal{T}$ associated with $\lambda$, where $x^{[k-1]}=(x_{1}^{k-1},\ldots,x_{n}^{k-1})^{\mathsf{T}}$. The multi-set of all eigenvalues of tensor $\mathcal{T}$ is the *spectrum* of $\mathcal{T}$, denoted by $\sigma(\mathcal{T})$.
In [@ref6], an expression of the $d$-th order trace for tensors is given. Hu et al. [@ref7] proved that the $d$-th order trace of a $k$-order $n$-dimension tensor $\mathcal{T}$ is equal to the sum of the $d$-th powers of all its eigenvalues, that is, $\textnormal{Tr}_d(\mathcal{T})=\sum\nolimits_{\lambda\in\sigma(\mathcal{T})}\lambda^d$.
In 2013, Shao et al. [@ref8] gave a formula for $\textnormal{Tr}_d(\mathcal{T})$. Next, we introduce some related notations. For a positive integer $d$, let $$\mathcal{F}_{d}=\{(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})|\ 1\le i_{1}\le\cdots\le i_{d}\le n;\alpha_{1},\ldots,\alpha_{d}\in[n]^{k-1}\}.$$ For $f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}$ and a $k$-order $n$-dimension tensor $\mathcal{T}=(t_{i\alpha})$, let $\pi_{f}(\mathcal{T})=\prod\nolimits_{j=1}^{d} t_{i_j\alpha_j}$. Suppose $i_{j}\alpha_{j}=i_{j}v_{1}^{(j)}\cdots v_{k-1}^{(j)}$, let $E_{j}(f)=\{(i_{j},v_{1}^{(j)}),\ldots,(i_{j},v_{k-1}^{(j)})\}$ be the set of arcs from $i_{j}$ to $v_{1}^{(j)},\ldots,v_{k-1}^{(j)}$ and $E(f)=\bigcup\nolimits_{j=1}^{d} E_{j}(f)$ be an arc multi-set. Let $V_{j}(f)=\{i_{j},v_{1}^{(j)},\ldots,v_{k-1}^{(j)}\}$ and $V(f)=\bigcup\nolimits_{j=1}^{d} V_{j}(f)$ be a vertex set. Let multi-digraph $D(f)=(V(f),E(f))$. Let $b(f)$ be the product of the factorials of the multiplicities of all the arcs in $D(f)$. Let $c(f)$ be the product of the factorials of the outdegrees of all the vertices in $D(f)$. Let $W(f)$ be the set of all closed walks with the arc multi-set $E(f)$. In this paper, if a multi-set $\mathrm{A}$ contains $m$ distinct elements $a_{1},\ldots,a_{m}$ with multiplicities $r_{1},\ldots,r_{m}$ respectively, then we write $\mathrm{A}=\{a_{1}^{r_{1}},\ldots,a_{m}^{r_{m}}\}$.
The formula for the $d$-th order trace of tensors given by Shao et al. is shown as follows.
**Lemma 1**. *[[@ref8]]{.nodecor} [Let $\mathcal{T}=(t_{i\alpha})$ be a $k$-order $n$-dimension tensor. Then]{.nodecor} $$\textnormal{Tr}_{d}(\mathcal{T})=(k-1)^{n-1} \sum_{f \in \mathcal{F}_{d}} \frac{b(f)}{c(f)} \pi_{f}(\mathcal{T})|W(f)|.$$*
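For instance, when $d=1$, only the tuples of the form $f=(ii\cdots i)$ admit a closed walk, and for such $f$ we have $b(f)=c(f)=(k-1)!$ and $|W(f)|=1$, so the formula gives $\textnormal{Tr}_{1}(\mathcal{T})=(k-1)^{n-1}\sum_{i=1}^{n}t_{ii\cdots i}$.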
Since the $d$-th order Laplacian spectral moment of $\mathcal{H}$ is equal to the $d$-th order trace of its Laplacian tensor, we study the Laplacian spectral moments of uniform hypergraphs by means of the trace formula for tensors given by Shao et al.
For a $k$-uniform hypergraph $\mathcal{H}$ with $n$ vertices, let $\mathcal{L}_{\mathcal{H}}$ be the Laplacian tensor of $\mathcal{H}$. When $\mathcal{T}=\mathcal{L}_{\mathcal{H}}$ in Eq.(2.1), the $d$-th order Laplacian spectral moment of $\mathcal{H}$ is $$\textnormal{Tr}_{d}(\mathcal{L}_{\mathcal{H}})=(k-1)^{n-1}\sum_{f\in \mathcal{F}_{d}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|.$$
Next, we simplify Eq.(2.2) by classifying $f$ and introduce some related concepts. For $i_{j}\alpha_{j}\in[n]^{k}$ and a $k$-order $n$-dimension tensor $\mathcal{T}=(t_{i\alpha})$, the entry $t_{i_{j}\alpha_{j}}$ in tensor $\mathcal{T}$ is called the corresponding entry of $i_{j}\alpha_{j}$. Suppose $\alpha_{j}=v_1^{(j)}\cdots v_{k-1}^{(j)}$, for a $k$-uniform hypergraph $\mathcal{H}$, $e=\{i_j,v_1^{(j)},\ldots,v_{k-1}^{(j)}\}$ is called the corresponding edge of tuple $i_{j}\alpha_{j}$ if the corresponding entry of $i_{j}\alpha_{j}$ in its adjacency tensor is not equal to zero.
Let $\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|\ne0$ for $f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}$. Since $\pi_{f}(\mathcal{L}_{\mathcal{H}})=\prod\nolimits_{j=1}^{d}l_{i_{j}\alpha_{j}}\ne0$, we know $l_{i_{j}\alpha_{j}}\ne0$ for all $j\in[d]$. Then the tuple $i_{j}\alpha_{j}(j=1,\ldots,d)$ in $f$ corresponds either to a diagonal entry of $\mathcal{L}_{\mathcal{H}}$ or to an edge of $\mathcal{H}$. According to the number of the tuples which correspond to the diagonal entries of $\mathcal{L}_{\mathcal{H}}$, the set $\{f\in\mathcal{F}_{d}|\ \pi_{f}(\mathcal{L}_{\mathcal{H}})\ne0\}$ can be represented as the union of three disjoint sets, that is, $$\{f\in\mathcal{F}_{d}|\ \pi_{f}(\mathcal{L}_{\mathcal{H}})\ne0\}=\mathcal{F}_{d}^{(1)}\cup\mathcal{F}_{d}^{(2)}\cup\mathcal{F}_{d}^{(3)},$$ where $\mathcal{F}_{d}^{(1)}=\{f\in\mathcal{F}_{d}|\textnormal{ all tuples in}\ f\ \textnormal{correspond to diagonal entry of}\ \mathcal{L}_{\mathcal{H}}\}$, $\mathcal{F}_{d}^{(2)}=\{f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}|\ \alpha_{j}=v_1^{(j)}\cdots v_{k-1}^{(j)}\ \textnormal{and}\ \{i_{j},v_1^{(j)},\ldots,v_{k-1}^{(j)}\}\in E(\mathcal{H})\ \textnormal{for}\ j=1,\ldots,d\}$, $\mathcal{F}_{d}^{(3)}=\{f\in\mathcal{F}_{d}|\ \pi_{f}(\mathcal{L}_{\mathcal{H}})\ne0\}\setminus(\mathcal{F}_{d}^{(1)}\cup\mathcal{F}_{d}^{(2)}).$
**Lemma 2**. *[Let $\mathcal{H}$ be a $k$-uniform hypergraph with $n$ vertices. And the degree sequence of $\mathcal{H}$ is $d_1,d_2,\ldots,d_n$. Then]{.nodecor} $$(k-1)^{n-1}\sum\limits_{f\in\mathcal{F}_{d}^{(1)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|=(k-1)^{n-1}\sum\limits_{i=1}^{n}d_{i}^{d},$$ $$(k-1)^{n-1}\sum\limits_{f\in\mathcal{F}_{d}^{(2)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|=(-1)^{d}\textnormal{Tr}_{d}(\mathcal{A}_{\mathcal{H}}).$$*
*Proof.* For $f\in\mathcal{F}_{d}^{(1)}$, if $f=(i_{1}i_{1}\cdots i_{1},\ldots,i_{d}i_{d}\cdots i_{d})$, since the arc multi-set $E(f)$ only includes loops $(i_{j},i_{j})\ (j=1,\ldots,d)$, we know that $|W(f)|\ne0$ if and only if $i_{1}=\cdots=i_{d}$. Let $f_{i}=(ii\cdots i,\ldots,ii\cdots i)\in\mathcal{F}_{d}(i=1,\ldots,n)$, then $\mathcal{F}_{d}^{(1)}=\{f_{1},\ldots,f_{n}\}$. For $f_{i}\in\mathcal{F}_{d}^{(1)}$, since $b(f_{i})=c(f_{i})=(d(k-1))!$, $|W(f_{i})|=1$ and $\pi_{f_{i}}(\mathcal{L}_{\mathcal{H}})=l_{ii\cdots i}^{d}=d_{i}^{d}$, Eq.(2.4) can be obtained directly.
Let $f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}^{(2)}$, where $\alpha_{j}=v_1^{(j)}\cdots v_{k-1}^{(j)}$ for $j=1,\ldots,d$. Since $\{i_{j},v_1^{(j)},\ldots,v_{k-1}^{(j)}\}\in E(\mathcal{H})$ for $j=1,\ldots,d$, we have $\pi_{f}(\mathcal{L}_{\mathcal{H}})=\prod\nolimits_{j=1}^{d}l_{i_{j}\alpha_{j}}=(-\frac{1}{(k-1)!})^{d}=(-1)^{d}\pi_{f}(\mathcal{A}_{\mathcal{H}}).$ Moreover, $\pi_{f}(\mathcal{A}_{\mathcal{H}})\ne0$ if and only if $\{i_{j},v_1^{(j)},\ldots,v_{k-1}^{(j)}\}\in E(\mathcal{H})$ for $j=1,\ldots,d$, that is, $f\in\mathcal{F}_{d}^{(2)}$, and hence Eq.(2.5) can be obtained. ◻
According to Lemma 2.2, in order to obtain the expressions of the first $2k$ orders Laplacian spectral moments for $k$-uniform power hypergraphs, we first need some expressions of the spectral moments of $k$-uniform power hypergraphs.
For a graph $G$ and a positive integer $k\ge3$, the *$k$-power hypergraph* of $G$, denoted by $G^{(k)}$, is a $k$-uniform hypergraph obtained by adding $k-2$ new vertices whose degrees are $1$ to each edge of $G$ [@ref4]. The spectrum of a hypergraph is said to be *$k$-symmetric* if it is invariant under a rotation of an angle $2\pi/k$ in the complex plane. Shao et al. [@ref8] gave a characterization (in terms of the traces of the adjacency tensors) of the $k$-uniform hypergraphs whose spectra are $k$-symmetric, that is, the spectrum of a $k$-uniform hypergraph $\mathcal{H}$ is $k$-symmetric if and only if $\textnormal{Tr}_{d}(\mathcal{A}_{\mathcal{H}})=0$ for $k\nmid d$. It is obvious that the spectrum of a $k$-uniform power hypergraph is $k$-symmetric. Then, the $d$-th spectral moments of $G^{(k)}$ are equal to $0$ for $d=k+1,\ldots,2k-1$, that is, $$\textnormal{Tr}_d(\mathcal{A}_{G^{(k)}})=0\ \mathrm{for}\ d=k+1,\ldots,2k-1.$$
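As an illustration of the construction, for $k=3$ the $3$-power hypergraph of the path with vertex set $\{1,2,3\}$ and edge set $\{\{1,2\},\{2,3\}\}$ has edge set $\{\{1,2,4\},\{2,3,5\}\}$, where $4$ and $5$ are the two added vertices of degree $1$.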
And the expression of the $2k$-th order spectral moment of $G^{(k)}$ is given as follows.
**Lemma 3**. *[Let $G$ be a graph with $n$ vertices and $m$ edges. Let $d_i$ denote the degree of vertex $i$ in $G$ ($i=1,\ldots,n$). Then the $2k$-th order spectral moment of $G^{(k)}$ is]{.nodecor}*
*$$\textnormal{Tr}_{2k}(\mathcal{A}_{G^{(k)}})=k^{k-1}(k-1)^{N-k}\big(1-2k^{k-3}(k-1)^{1-k}\big)m+k^{2k-3}(k-1)^{N-2k+1}\sum\limits_{i=1}^{n}d_i^2,$$*
*[where $N=n+m(k-2)$.]{.nodecor}*
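For instance, for $k=3$ the above formula specializes to $\textnormal{Tr}_{6}(\mathcal{A}_{G^{(3)}})=9\cdot2^{N-4}m+27\cdot2^{N-5}\sum_{i=1}^{n}d_i^2$ with $N=n+m$.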
*Proof.* Let $\mathcal{G}=G^{(k)}$. Then $|V(\mathcal{G})|=n+m(k-2)=N$ and $|E(\mathcal{G})|=m$. Let $N_G(P_2)$ and $N_{\mathcal{G}}(P_2^{(k)})$ denote the number of paths with length $2$ in $G$ and $\mathcal{G}$, respectively. Then $N_{\mathcal{G}}(P_2^{(k)})=N_G(P_2)=\sum\limits_{i=1}^{n}\dbinom{d_i}{2}$. From Lemma 2.1, we get $$\textnormal{Tr}_{2k}(\mathcal{A}_{\mathcal{G}})=(k-1)^{N-1}\sum\limits_{f\in\mathcal{F}_{2k}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{A}_{\mathcal{G}})|W(f)|.$$
For $f=(i_{1}\alpha_{1},\ldots,i_{2k}\alpha_{2k})\in\mathcal{F}_{2k}$, if $\pi_{f}(\mathcal{A}_{\mathcal{G}})=\prod_{j=1}^{2k}a_{i_j\alpha_j}\ne0$, then $a_{i_j\alpha_j}\ne0$ for all $j\in[2k]$. For $|W(f)|\ne0$, there are the following two cases.
Case 1: $f=(i_{1}\alpha_{1},i_{1}\beta_{1},\ldots,i_{k}\alpha_{k},i_{k}\beta_{k})=f_{e}\in\mathcal{F}_{2k}$, where $\{i_{1},\ldots, i_{k}\}=e\in E(\mathcal{G})$ and $\alpha_{j},\beta_{j}\in\big(\{i_1,\ldots,i_k\}\setminus \{i_j\}\big)^{k-1}$ for $j=1,\ldots,k$. Then $$\begin{aligned}
&\sum_{e\in E(\mathcal{G})}\frac{b(f_e)}{c(f_e)}\pi_{f_e}(\mathcal{A}_{\mathcal{G}})|W(f_e)|\\
=&\sum_{e\in E(\mathcal{G})}\frac{(2!)^{k(k-1)}}{\big((2k-2)!\big)^k}\Big(\frac{1}{(k-1)!}\Big)^{2k}\frac{2k(k-1)}{(2!)^{k(k-1)}}2^{k-1}k^{k-2}\big((2k-3)!\big)^k\big((k-1)!\big)^{2k}\\
=&k^{k-1}(k-1)^{1-k}|E(\mathcal{G})|.\end{aligned}$$
Case 2: $f=(i_{1}\alpha_{1},j_{1}\beta_{1},i_{2}\alpha_{2},\ldots,i_{k}\alpha_{k},j_{2}\beta_{2},\ldots,j_{k}\beta_{k})=f_{e_1 e_2}\in\mathcal{F}_{2k}$, where $i_1=j_1$, $\{i_{1},i_{2},\ldots,i_{k}\}=e_1\in E(\mathcal{G}),\ \{i_{1},j_{2},\ldots,j_{k}\}=e_2\in E(\mathcal{G})$ and $\alpha_{l}\in\big(\{i_1,\ldots,i_k\}\setminus \{i_l\}\big)^{k-1},\ \beta_{l}\in\big(\{j_1,\ldots,j_k\}\setminus \{j_l\}\big)^{k-1}$ for $l=1,\ldots,k$. Then $$\begin{aligned}
&\sum_{e_1 e_2\subset \mathcal{G}}\frac{b(f_{e_1 e_2})}{c(f_{e_1 e_2})}\pi_{f_{e_1 e_2}}(\mathcal{A}_{\mathcal{G}})|W(f_{e_1 e_2})|\\
=&\sum_{e_1 e_2\subset \mathcal{G}}\frac{2k(k-1)(k^{k-2})^2(2k-3)!\big((k-2)!\big)^{2k-2}}{\big(2(k-1)\big)!\big((k-1)!\big)^{2k-2}}\Big(\frac{1}{(k-1)!}\Big)^{2k}2\big((k-1)!\big)^{2k}\\
=&2k^{2k-3}(k-1)^{2-2k}N_{\mathcal{G}}(P_2^{(k)}).\end{aligned}$$
Then $$\begin{aligned}
\textnormal{Tr}_{2k}(\mathcal{A}_{\mathcal{G}})
&=(k-1)^{N-1}\Big(k^{k-1}(k-1)^{1-k}|E(\mathcal{G})|+2k^{2k-3}(k-1)^{2-2k}N_{\mathcal{G}}(P_2^{(k)})\Big)\\
&=k^{k-1}(k-1)^{N-k}\big(1-2k^{k-3}(k-1)^{1-k}\big)m+k^{2k-3}(k-1)^{N-2k+1}\sum\limits_{i=1}^{n}d_i^2,
\end{aligned}$$ where $N=n+m(k-2)$. ◻
# Main results
In this section, we give an expression of the $d$-th order Laplacian spectral moment for $k$-uniform hypergraphs, and then give explicit expressions of the first $2k$ orders Laplacian spectral moments for $k$-uniform power hypergraphs.
Given two hypergraphs $\mathcal{H}=(V(\mathcal{H}),E(\mathcal{H}))$ and $H=(V(H),E(H))$, if $V(H)\subseteq V(\mathcal{H})$ and $E(H)\subseteq E(\mathcal{H})$, then $H$ is said to be a *subhypergraph* of $\mathcal{H}$. A $k$-uniform *multi-hypergraph* $\mathcal{H}$ is a pair $(V(\mathcal{H}),E(\mathcal{H}))$, where $E(\mathcal{H})$ is a multi-set of subsets of $V(\mathcal{H})$ with cardinality $k$. A *Veblen hypergraph* is a $k$-uniform, $k$-valent (i.e., the degree of every vertex is a multiple of $k$) multi-hypergraph [@ref10]. For a multi-hypergraph $H$, let $\underline{H}$ be the simple $k$-uniform hypergraph formed by removing duplicate edges of $H$. Moreover, $H$ is called a *multi-subgraph* of $\mathcal{H}$ if $\underline{H}$ is a subhypergraph of $\mathcal{H}$. Let $\mathcal{V}_{d}(\mathcal{H})$ denote the set of connected Veblen multi-subgraphs of $\mathcal{H}$ with $d$ edges.
For $f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}$ and a $k$-uniform hypergraph $\mathcal{H}$ (where $\alpha_{j}=v_1^{(j)}\cdots v_{k-1}^{(j)}$ for $j=1,\ldots,d$), the *multi-subgraph induced by $f$*, denoted by $H(f)$, is the multi-hypergraph with the vertex set $V(f)\subseteq V(\mathcal{H})$ and the edge multi-set $E(H(f))=\{\{i_j,v_1^{(j)},\ldots,v_{k-1}^{(j)}\}|\ (\mathcal{A}_{\mathcal{H}})_{i_{j}\alpha_{j}}\ne0,\ 1\le j\le d\}$, and $\underline{H}(f)$ is a subhypergraph of $\mathcal{H}$.
A *walk* in a digraph $D$ is a non-empty alternating sequence $v_0e_0v_1e_1\cdots e_{k-1}v_k$ of vertices and arcs in $D$ such that $e_i=(v_i,v_{i+1})$ for all $i<k$. A walk is *closed* if $v_0=v_k$. A closed walk in a digraph is an *Eulerian closed walk* if it traverses each arc of this digraph exactly once. A digraph $D$ is called *Eulerian* if $D$ has an Eulerian closed walk. Let $d^{+}(v)$ and $d^{-}(v)$ be the outdegree and indegree of the vertex $v\in V(D)$, respectively. The digraph $D$ is Eulerian if and only if $d^{+}(v)=d^{-}(v)$ for all $v\in V(D)$ and $D$ is weakly connected.
Since $W(f)$ is the set of all closed walks with the arc multi-set $E(f)$, we know that $|W(f)|$ is equal to the number of Eulerian closed walks in the multi-digraph $D(f)$. For $f\in\mathcal{F}_{d}^{(3)}$, we give the following conclusion.
**Lemma 4**. *[Let $\mathcal{H}$ be a $k$-uniform hypergraph with $n$ vertices. If $f\in\mathcal{F}_{d}^{(3)}$ and $|W(f)|\ne0$, then the multi-subgraph $H(f)$ induced by $f$ is a connected Veblen multi-subgraph of $\mathcal{H}$ with at most $d-1$ edges.]{.nodecor}*
*Proof.* For $f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_{d}$ (where $\alpha_{j}=v_1^{(j)}\cdots v_{k-1}^{(j)}$ for $j=1,\ldots,d$), if $|W(f)|\ne0$, then the multi-digraph $D(f)=(V(f),E(f))$ is Eulerian. For all $v\in V(f)$, we have $d^{+}_{D(f)}(v)=d^{-}_{D(f)}(v)$ and $$\begin{aligned}
d^{+}_{D(f)}(v)&=(k-1)|\{i_{j}\alpha_{j}|\ i_{j}=v\}|\\&=(k-1)\big(|\{i_{j}\alpha_{j}|\ i_{j}=v\ \textnormal{and}\ \{i_j,v_1^{(j)}\cdots v_{k-1}^{(j)}\}\in E(\mathcal{H})\}|+|\{i_{j}\alpha_{j}|\ i_{j}\alpha_{j}=vv\cdots v\}|\big),\\
d^{-}_{D(f)}(v)&=|\{i_{j}\alpha_{j}|\ i_{j}\ne v\ \textnormal{and}\ v\in V_{j}(f)\}|+(k-1)|\{i_{j}\alpha_{j}|\ i_{j}\alpha_{j}=vv\cdots v\}|.
\end{aligned}$$ Then $$(k-1)|\{i_{j}\alpha_{j}|\ i_{j}=v\ \textnormal{and}\ \{i_j,v_1^{(j)}\cdots v_{k-1}^{(j)}\}\in E(\mathcal{H})\}|=|\{i_{j}\alpha_{j}|\ i_{j}\ne v\ \textnormal{and}\ v\in V_{j}(f)\}|.$$ Fix a vertex $v \in V(H(f))$. We have $$\begin{aligned}
d_{H(f)}(v)&=|\{i_{j}\alpha_{j}|\ i_{j}=v\ \textnormal{and}\ \{i_j,v_1^{(j)}\cdots v_{k-1}^{(j)}\}\in E(\mathcal{H})\}|+|\{i_{j}\alpha_{j}|\ i_{j}\ne v\ \textnormal{and}\ v\in V_{j}(f)\}|\\&=k|\{i_{j}\alpha_{j}|\ i_{j}=v\ \textnormal{and}\ \{i_j,v_1^{(j)}\cdots v_{k-1}^{(j)}\}\in E(\mathcal{H})\}|.
\end{aligned}$$ So $k \mid d_{H(f)}(v)$, and it follows by definition that $H(f)$ is a Veblen hypergraph; it is connected because the Eulerian multi-digraph $D(f)$ is connected. Moreover, since $f\in\mathcal{F}_{d}^{(3)}$, $H(f)$ has at most $d-1$ edges. ◻
For a connected Veblen multi-subgraph $H$ of $\mathcal{H}$ and $f=(i_{1}\alpha_{1},\ldots,i_{d}\alpha_{d})\in\mathcal{F}_d$, we say that $f$ corresponds to $H$ if $f$ satisfies the following conditions:
\(a\) there is an integer $l$ $(1\le l\le d-1)$ such that $i_{j_1}\alpha_{j_1},\ldots,i_{j_l}\alpha_{j_l}$ correspond to edges of $H$;
\(b\) for every edge $e\in E(H)$, there exists $j\in[d]$ such that $i_j\alpha_j$ corresponds to $e$;
\(c\) all the other tuples in $f$ are of the form $v\beta_{v}$, where $\beta_{v}=v\cdots v\in[n]^{k-1}$ and $v\in V(H)$.
Let $\mathcal{F}_{d}(H)=\{f\in\mathcal{F}_d|\ f\ \textnormal{corresponds to}\ H\}$. From Lemma 3.1, we have $$\{f\in\mathcal{F}_d^{(3)}|\ |W(f)|\ne0\}=\bigcup\limits_{z=1}^{d-1}\bigcup\limits_{H\in\mathcal{V}_z(\mathcal{H})}\mathcal{F}_d(H).$$
For simplicity, $\tau(D(f))$ is abbreviated to $\tau(f)$; it denotes the number of arborescences of the multi-digraph $D(f)$ rooted at a fixed vertex (for an Eulerian digraph this number does not depend on the choice of root). According to the above process, the formula for the $d$-th order Laplacian spectral moment of $k$-uniform hypergraphs is given as follows.
**Theorem 5**. *[Let $\mathcal{H}$ be a $k$-uniform hypergraph with $n$ vertices. And the degree sequence of $\mathcal{H}$ is $d_1,d_2,\ldots,d_n$. Then]{.nodecor} $$\begin{aligned}
&\textnormal{Tr}_{d}(\mathcal{L}_{\mathcal{H}})=(k-1)^{n-1}\sum\limits_{i=1}^{n}d_{i}^{d}+(-1)^{d}\textnormal{Tr}_{d}(\mathcal{A}_{\mathcal{H}})+d(k-1)^{n}\sum\limits_{z=1}^{d-1}\sum\limits_{H\in\mathcal{V}_{z}(\mathcal{H})}\sum\limits_{f\in\mathcal{F}_{d}(H)}\frac{\tau(f)\pi_{f}(\mathcal{L}_{\mathcal{H}})}{\prod\limits_{v\in V(f)}d^{+}(v)}.
\end{aligned}$$*
*Proof.* From Eq.(2.3), the $d$-th order Laplacian spectral moment of $\mathcal{H}$ is $$\textnormal{Tr}_{d}(\mathcal{L}_{\mathcal{H}})=(k-1)^{n-1}\sum\limits_{j=1}^{3}\sum\limits_{f\in\mathcal{F}_{d}^{(j)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|.$$
For $f\in\mathcal{F}_{d}^{(3)}$, let $\widetilde{D}(f)$ be the digraph obtained by removing all repeated arcs of $D(f)$. Then $c(f)=\prod\nolimits_{v\in V(f)}d^{+}(v)!$, $b(f)=\prod_{e\in\widetilde{D}(f)}w(e)!$, where $w(e)$ denotes the multiplicity of the arc $e$ in $D(f)$, and $|E(f)|=d(k-1)$. From Theorem 6 in [@ref13], the number of Eulerian closed walks in $D(f)$ is $$|W(f)|=\frac{|E(f)|}{b(f)}|\mathfrak{E}(f)|,$$ where $|\mathfrak{E}(f)|$ is the number of Eulerian circuits in $D(f)$.
From BEST Theorem [@ref14; @ref15], the number of the Eulerian circuits in $D(f)$ is $$|\mathfrak{E}(f)|=\tau(f)\prod\limits_{v\in V(f)}(d^{+}(v)-1)!.$$
According to Eq.(3.2) and Eq.(3.3), we have $$\begin{aligned}
&(k-1)^{n-1}\sum\limits_{f\in\mathcal{F}_{d}^{(3)}}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|\\
=&(k-1)^{n-1}\sum\limits_{z=1}^{d-1}\sum\limits_{H\in\mathcal{V}_{z}(\mathcal{H})}\sum\limits_{f\in\mathcal{F}_{d}(H)}\frac{b(f)}{c(f)}\pi_{f}(\mathcal{L}_{\mathcal{H}})|W(f)|\\
=&d(k-1)^n\sum\limits_{z=1}^{d-1}\sum\limits_{H\in\mathcal{V}_{z}(\mathcal{H})}\sum\limits_{f\in\mathcal{F}_{d}(H)}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{H}}).
\end{aligned}$$
Then we can obtain the expression for the $d$-th order Laplacian spectral moment of $\mathcal{H}$ by substituting Eq.(2.4), Eq.(2.5) and Eq.(3.4) into Eq.(3.1). ◻
**Remark 6**. *[Note that $\textnormal{Tr}_{d}(\mathcal{A}_{\mathcal{H}})=0\ (d=1,\ldots,k-1)$, $\textnormal{Tr}_{k}(\mathcal{A}_{\mathcal{H}})=k^{k-1}(k-1)^{n-k}|E(\mathcal{H})|$ [@ref3], and $\mathcal{V}_d(\mathcal{H})=\emptyset$ for $d=1,\ldots,k-1$. Hence the expressions of the first $k$ orders Laplacian spectral moments of a $k$-uniform hypergraph $\mathcal{H}$ can be obtained directly from the formula given in Theorem 3.1; these expressions have been given in [@ref9].]{.nodecor}*
**Remark 7**. *[Note that $\textnormal{Tr}_{k+1}(\mathcal{A}_{\mathcal{H}})=(k+1)(k-1)^{n-k}C_k(\#\ \textnormal{of simplices in}\ \mathcal{H})$ [@ref3] and $\mathcal{V}_k(\mathcal{H})=\{ke|\ e\in E(\mathcal{H})\}$. Hence the expression of the $(k+1)$-st order Laplacian spectral moment of a $k$-uniform hypergraph $\mathcal{H}$ can be obtained by using the formula given in Theorem 3.1; this expression has been given in [@ref12].]{.nodecor}*
Taking the $k$-uniform hypergraph in Remark 3.1 and Remark 3.2 to be the $k$-power hypergraph $G^{(k)}$ of a graph $G$, the expressions of the first $k+1$ orders Laplacian spectral moments of $G^{(k)}$ can be obtained.
For a graph $G$, the expressions of the first $2k$ orders Laplacian spectral moments of its $k$-power hypergraph $G^{(k)}$ can be given by considering the formulas shown in Theorem 3.1, and these expressions are represented by some parameters of $G$.
**Theorem 8**. *[Let $G$ be a graph with $n$ vertices and $m$ edges. Let $d_i$ denote the degree of vertex $i$ in $G$ ($i=1,\ldots,n$). Then, for the $k$-power hypergraph $G^{(k)}$ of $G$,]{.nodecor} $$\begin{aligned}
\textnormal{Tr}_d(\mathcal{L}_{G^{(k)}})&=(k-1)^{N-1}\sum\limits_{i=1}^nd_i^d+(-1)^kdk^{k-2}(k-1)^{N-k}\Big(\sum\limits_{i=1}^nd_i^{d-k+1}+\sum\limits_{\{i,j\}\in E(G)}N_{d-k}(d_i,d_j)\Big)\\
&+(k-1)^{N-k}\big((k-1)^{k-1}+(-1)^kdk\big)(k-2)m,\end{aligned}$$ [for $d=k+1,\ldots,2k-1$, and]{.nodecor} $$\begin{aligned}
\textnormal{Tr}_{2k}(\mathcal{L}_{G^{(k)}})&=(k-1)^{N-1}\sum\limits_{i=1}^nd_i^{2k}+(-1)^k2k^{k-1}(k-1)^{N-k}\Big(\sum\limits_{i=1}^nd_i^{k+1}+\sum\limits_{\{i,j\}\in E(G)}N_{k}(d_i,d_j)\Big)\\
&+k^{2k-3}(k-1)^{N-2k+1}\sum\limits_{i=1}^nd_i^2+\ell m,\end{aligned}$$ [where]{.nodecor} $N=n+m(k-2)$, $N_{s}(d_i,d_j)=\sum\nolimits_{\begin{subarray}{c} 1\le c_i+c_j\le s \\ 0\le c_i,c_j<s \end{subarray}}d_i^{c_i}d_j^{c_j}\big(s=1,\ldots,k\big)$, $\ell=(k-1)^{N-k}\big((k-1)^{k-1}(k-2)+(-1)^k2k^{k-1}(k-2)+k^{k-1}-2k^{2k-3}(k-1)^{1-k}\big)$.*
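For instance, with this convention $N_{1}(d_i,d_j)=0$ (an empty sum), while $N_{2}(d_i,d_j)=d_i+d_j+d_id_j$ and $N_{3}(d_i,d_j)=d_i+d_j+d_i^2+d_j^2+d_id_j+d_i^2d_j+d_id_j^2$.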
*Proof.* Let $\mathcal{G}=G^{(k)}$, then $|V(\mathcal{G})|=N=n+m(k-2)$ and $|E(\mathcal{G})|=m$.
Since $\mathcal{G}$ is the $k$-power hypergraph of $G$, from the definition of Veblen hypergraph, we know that $\mathcal{V}_{d}(\mathcal{G})=\emptyset$ for $k\nmid d$ and $d\in[2k]$. For a Veblen multi-subgraph $H\in\mathcal{V}_{k}(\mathcal{G})$, it is easy to see that $E(\underline{H})$ consists of a single edge of $\mathcal{G}$. For convenience, let $ke$ denote the connected Veblen multi-subgraph with $k$ edges such that $\underline{ke}=e\in E(\mathcal{G})$. Then, $\mathcal{V}_{k}(\mathcal{G})=\{ke|\ e\in E(\mathcal{G})\}$. For $d=k+1,\ldots,2k$, we have
$$\begin{aligned}
\textnormal{Tr}_d(\mathcal{L}_{\mathcal{G}})=&(k-1)^{N-1}\Big(m(k-2)+\sum\limits_{i=1}^{n}d_{i}^{d}\Big)+(-1)^{d}\textnormal{Tr}_{d}(\mathcal{A}_{\mathcal{G}})\\&+d(k-1)^{N}\sum\limits_{e\in E(\mathcal{G})}\sum\limits_{f\in\mathcal{F}_{d}(ke)}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{G}}).\end{aligned}$$
For $f\in\mathcal{F}_{d}(ke)\ (\textnormal{where}\ e=\{i_{1},i_{2},\ldots,i_{k}\}\in E(\mathcal{G}))$, let $$f=((i_1 \beta_1)^{c_1}, i_1\alpha_1, (i_2 \beta_2)^{c_2}, i_2\alpha_2, \ldots,(i_k \beta_k)^{c_k}, i_k\alpha_k),$$ where $i_{1}<i_{2}<\cdots<i_{k}$. For any $j\in[k]$, $\alpha_{j}\in(\{i_1,\ldots,i_k\}\setminus\{i_j\})^{k-1}$, $\beta_{j}=i_{j}\cdots i_{j}\in[N]^{k-1}$, $c_{j}\ge0$ is the total number of times that $i_{j}\beta_{j}$ appears in $f$, and $\sum\nolimits_{j=1}^{k}c_j=d-k$. Next, we consider the following two cases for $f\in\mathcal{F}_{d}(ke)$.
Case 1: If there exists $j\in[k]$ such that $c_{j}=d-k$, then $$f=f_{e,i_{j}}=(i_1\alpha_1, \ldots, i_{j-1}\alpha_{j-1}, (i_j \beta_j)^{d-k}, i_j\alpha_j, \ldots, i_k\alpha_k)\in\mathcal{F}_{d}(ke).$$ We have $$\tau(f)=k^{k-2},\ \prod\limits_{v\in V(f)}d^{+}(v)=(d-k+1)(k-1)^{k},$$ and there are $(d-k+1)((k-1)!)^{k}$ elements in $\mathcal{F}_{d}$ which share the same arc multi-set as $f_{e,i_{j}}$, then $$\sum\limits_{j=1}^{k}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{G}})=(-1)^{k}k^{k-2}(k-1)^{-k}\sum\limits_{j=1}^{k} d_{i_j}^{d-k}.$$
Case 2: If for any $j\in[k]$, $0\le c_{j}<d-k$, then $$f=f_{e,\{c_1,c_2,\ldots,c_k\}}=((i_1 \beta_1)^{c_1}, i_1\alpha_1, (i_2 \beta_2)^{c_2}, i_2\alpha_2, \ldots,(i_k \beta_k)^{c_k}, i_k\alpha_k)\in\mathcal{F}_{d}(ke).$$ We have $$\tau(f)=k^{k-2},\ \prod\limits_{v\in V(f)}d^{+}(v)=(k-1)^{k}\prod\limits_{j=1}^{k}(c_{j}+1),$$ and there are $((k-1)!)^{k}\prod\limits_{j=1}^{k}(c_{j}+1)$ elements in $\mathcal{F}_{d}$ which share the same arc multi-set as $f_{e,\{c_1,c_2,\ldots,c_k\}}$, then $$\sum\limits_{\begin{subarray}{c} c_1+\cdots+c_k=d-k \\ \forall j\in[k],0\le c_{j}<d-k \end{subarray}}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{G}})=(-1)^{k}k^{k-2}(k-1)^{-k}\sum\limits_{\begin{subarray}{c} c_1+\cdots+c_k=d-k \\ \forall j\in[k],0\le c_{j}<d-k \end{subarray}}\prod\limits_{j=1}^{k}d_{i_j}^{c_j}.$$
Then
$$\begin{aligned}
&\sum\limits_{e\in E(\mathcal{G})}\sum\limits_{f\in\mathcal{F}_{d}(ke)}\frac{\tau(f)}{\prod\limits_{v\in V(f)}d^{+}(v)}\pi_{f}(\mathcal{L}_{\mathcal{G}})\\
=&(-1)^{k}k^{k-2}(k-1)^{-k}\sum\limits_{\{i_{1},\ldots,i_{k}\}\in E(\mathcal{G})}\bigg(\sum\limits_{j=1}^{k} d_{i_j}^{d-k}+\sum\limits_{\begin{subarray}{c} c_1+\cdots+c_k=d-k \\ \forall j\in[k],0\le c_{j}<d-k \end{subarray}}\prod\limits_{j=1}^{k}d_{i_j}^{c_j}\bigg)\\
=&(-1)^{k}k^{k-2}(k-1)^{-k}\bigg(\sum\limits_{i=1}^{N}\sum\limits_{e\in E_{i}} d_{i}^{d-k}+\sum\limits_{\{i_{1},\ldots,i_{k}\}\in E(\mathcal{G})}\sum\limits_{\begin{subarray}{c} c_1+\cdots+c_k=d-k \\ \forall j\in[k],0\le c_{j}<d-k \end{subarray}}\prod\limits_{j=1}^{k}d_{i_j}^{c_j}\bigg)\\
=&(-1)^{k}k^{k-2}(k-1)^{-k}\bigg(\sum\limits_{i=1}^{N}d_{i}^{d-k+1}+\sum\limits_{\{i_{1},\ldots,i_{k}\}\in E(\mathcal{G})}\sum\limits_{\begin{subarray}{c} c_1+\cdots+c_k=d-k \\ \forall j\in[k],0\le c_{j}<d-k \end{subarray}}\prod\limits_{j=1}^{k}d_{i_j}^{c_j}\bigg).
\end{aligned}$$
For $e=\{i,j\}\in E(G)$, let $e^{(k)}=\{i,j\}^{(k)}=\{i,j,v_{e,1},\ldots,v_{e,k-2}\}\in E(G^{(k)})$, where the $v_{e,l}$ are cored vertices (vertices whose degree is 1 [@ref4]). Hence $d_{e,l}=1$ ($l=1,\ldots,k-2$) and $1\le c_i+c_j=d-k-\sum\nolimits_{l=1}^{k-2}c_{e,l}\le d-k$. Then, for $d=k+1,\ldots,2k$, the $d$-th order Laplacian spectral moment of $G^{(k)}$ is $$\begin{aligned}
\textnormal{Tr}_d(\mathcal{L}_{G^{(k)}})=&(-1)^{d}\textnormal{Tr}_d(\mathcal{A}_{G^{(k)}})+(k-1)^{N-1}\Big(m(k-2)+\sum\limits_{i=1}^{n}d_{i}^{d}\Big)\\+&(-1)^{k}dk^{k-2}(k-1)^{N-k}\bigg(\sum\limits_{i=1}^{N}d_{i}^{d-k+1}+\sum\limits_{\{i,j\}\in E(G)}\sum\limits_{\begin{subarray}{c} 1\le c_i+c_j\le d-k \\ 0\le c_i,c_j<d-k \end{subarray}}d_{i}^{c_i}d_{j}^{c_j}\bigg).\end{aligned}$$
By substituting Eq.(2.6) and Eq.(2.7) into the above equation, the expressions of $\textnormal{Tr}_d(\mathcal{L}_{G^{(k)}})$ for $d=k+1,\ldots,2k$ can be obtained. ◻
Let $\sum\nolimits_{i=s}^{t}a_i=0$ if $t<s$. Let $G=(V(G),E(G))$ be a finite simple graph, and let $d_{v}$ denote the degree of the vertex $v$ in $G$. The first and second Zagreb indices, introduced in [@ref17; @ref18], are $M_1(G)=\sum\nolimits_{v\in V(G)}d_v^2=\sum\nolimits_{\{u,v\}\in E(G)}\big(d_u+d_v\big)$ and $M_2(G)=\sum\nolimits_{\{u,v\}\in E(G)}d_ud_v$, respectively. The first and second variable Zagreb indices, introduced in [@ref19; @ref20], are $M_1^{(r)}(G)=\sum\nolimits_{v\in V(G)}d_v^r=\sum\nolimits_{\{u,v\}\in E(G)}\big(d_u^{r-1}+d_v^{r-1}\big)$ and $M_2^{(r)}(G)=\sum\nolimits_{\{u,v\}\in E(G)}\big(d_ud_v\big)^{r}$ (where $r$ is a variable parameter), respectively. The generalized Zagreb index $M_{\{r,s\}}(G)=\sum\nolimits_{\{u,v\}\in E(G)}\big(d_u^rd_v^s+d_u^sd_v^r\big)$ (where $r$ and $s$ are variable parameters) was introduced in [@ref21]. With this notation, the expressions for the Laplacian spectral moments of power hypergraphs given in Theorem 3.2 can be represented by the Zagreb indices of graphs.
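To illustrate this notation with a small worked example, consider the path $P_3$ on three vertices, whose degree sequence is $1,2,1$ and whose two edges each join a vertex of degree $1$ to the vertex of degree $2$. Then $$M_1(P_3)=1^2+2^2+1^2=6,\qquad M_2(P_3)=1\cdot 2+2\cdot 1=4,$$ $$M_1^{(3)}(P_3)=1^3+2^3+1^3=10,\qquad M_{\{1,2\}}(P_3)=\big(1\cdot 2^2+1^2\cdot 2\big)+\big(2\cdot 1^2+2^2\cdot 1\big)=12.$$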
**Remark 9**. *[Let $G$ be a graph with $n$ vertices and $m$ edges. Let $d_i$ denote the degree of vertex $i$ in $G$ ($i=1,\ldots,n$). Then]{.nodecor} $$\begin{aligned}
\textnormal{Tr}_d(\mathcal{L}_{G^{(k)}})&=(k-1)^{N-k}\big((k-1)^{k-1}+(-1)^kdk\big)(k-2)m+(k-1)^{N-1}M_1^{(d)}(G)\\&+(-1)^kdk^{k-2}(k-1)^{N-k}\Bigg(\sum\limits_{r=2}^{d-k+1}M_1^{(r)}(G)+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}M_2^{(r)}(G)+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}\sum\limits_{s=r+1}^{d-k-r}M_{\{r,s\}}(G)\Bigg),\end{aligned}$$ [for $d=k+1,\ldots,2k-1$, and]{.nodecor} $$\begin{aligned}
\textnormal{Tr}_{2k}(\mathcal{L}_{G^{(k)}})&=\ell m+(k-1)^{N-1}M_1^{(2k)}(G)+k^{2k-3}(k-1)^{N-2k+1}M_1(G)\\&+(-1)^k2k^{k-1}(k-1)^{N-k}\Bigg(\sum\limits_{r=2}^{k+1}M_1^{(r)}(G)+\sum\limits_{r=1}^{\lfloor \frac{k}{2} \rfloor}M_2^{(r)}(G)+\sum\limits_{r=1}^{\lfloor \frac{k}{2} \rfloor}\sum\limits_{s=r+1}^{k-r}M_{\{r,s\}}(G)\Bigg),\end{aligned}$$ [where]{.nodecor} $N=n+m(k-2)$ and $\ell=(k-1)^{N-k}\big((k-1)^{k-1}(k-2)+(-1)^k2k^{k-1}(k-2)+k^{k-1}-2k^{2k-3}(k-1)^{1-k}\big)$.*
*Proof.* For the terms involving the vertex degrees, we have
$$\begin{aligned}
&\sum\limits_{i=1}^{N}d_{i}^{d-k+1}+\sum\limits_{\{i,j\}\in E(G)}\sum\limits_{\begin{subarray}{c} 1\le c_i+c_j\le d-k \\ 0\le c_i,c_j<d-k \end{subarray}}d_{i}^{c_i}d_{j}^{c_j}\\
=&\sum\limits_{\{i,j\}\in E(G)}\bigg(d_i^{d-k}+d_j^{d-k}+\sum\limits_{\begin{subarray}{c} 1\le c_i+c_j\le d-k \\ 0\le c_i,c_j<d-k \end{subarray}}d_{i}^{c_i}d_{j}^{c_j}\bigg)\\
=&\sum\limits_{\{i,j\}\in E(G)}\bigg(\sum\limits_{r=1}^{d-k}\big(d_i^r+d_j^r\big)+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}(d_id_j)^r+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}\sum\limits_{s=r+1}^{d-k-r}\big(d_i^rd_j^s+d_i^sd_j^r\big)\bigg)\\
=&\sum\limits_{r=1}^{d-k}\sum\limits_{\{i,j\}\in E(G)}\big(d_i^r+d_j^r\big)+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}\sum\limits_{\{i,j\}\in E(G)}(d_id_j)^r+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}\sum\limits_{s=r+1}^{d-k-r}\sum\limits_{\{i,j\}\in E(G)}\big(d_i^rd_j^s+d_i^sd_j^r\big)\\
=&\sum\limits_{r=2}^{d-k+1}M_1^{(r)}(G)+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}M_2^{(r)}(G)+\sum\limits_{r=1}^{\lfloor \frac{d-k}{2} \rfloor}\sum\limits_{s=r+1}^{d-k-r}M_{\{r,s\}}(G),\ \mathrm{for}\ d=k+1,\ldots,2k.\end{aligned}$$
Then the expressions shown in Theorem 3.2 can be represented by the Zagreb indices of graphs. ◻
Given a $k$-uniform hypergraph $\mathcal{H}$, the *signless Laplacian tensor* of $\mathcal{H}$ is $\mathcal{Q}_{\mathcal{H}}=\mathcal{D}_{\mathcal{H}}+\mathcal{A}_{\mathcal{H}}$, and the $d$-th order signless Laplacian spectral moment of $\mathcal{H}$ is equal to the $d$-th order trace of $\mathcal{Q}_{\mathcal{H}}$. For the signless Laplacian spectral moments of hypergraphs, conclusions analogous to Theorem 3.1 and Theorem 3.2 can be obtained by the same method, as follows.
**Theorem 10**. *[Let $\mathcal{H}$ be a $k$-uniform hypergraph with $n$ vertices and degree sequence $d_1,d_2,\ldots,d_n$. Then]{.nodecor} $$\begin{aligned}
&\textnormal{Tr}_{d}(\mathcal{Q}_{\mathcal{H}})=(k-1)^{n-1}\sum\limits_{i=1}^{n}d_{i}^{d}+\textnormal{Tr}_{d}(\mathcal{A}_{\mathcal{H}})+d(k-1)^{n}\sum\limits_{z=1}^{d-1}\sum\limits_{H\in\mathcal{V}_{z}(\mathcal{H})}\sum\limits_{f\in\mathcal{F}_{d}(H)}\frac{\tau(f)\pi_{f}(\mathcal{Q}_{\mathcal{H}})}{\prod\limits_{v\in V(f)}d^{+}(v)}.
\end{aligned}$$*
**Theorem 11**. *[Let $G$ be a graph with $n$ vertices and $m$ edges. Let $d_i$ denote the degree of vertex $i$ in $G$ ($i=1,\ldots,n$). Then, for the $k$-power hypergraph $G^{(k)}$ of $G$,]{.nodecor} $$\begin{aligned}
\textnormal{Tr}_d(\mathcal{Q}_{G^{(k)}})&=(k-1)^{N-1}\sum\limits_{i=1}^nd_i^d+dk^{k-2}(k-1)^{N-k}\Big(\sum\limits_{i=1}^nd_i^{d-k+1}+\sum\limits_{\{i,j\}\in E(G)}N_{d-k}(d_i,d_j)\Big)\\
&+(k-1)^{N-k}\big((k-1)^{k-1}+dk\big)(k-2)m,\ \mathrm{for}\ d=k+1,\ldots,2k-1,\\
\textnormal{Tr}_{2k}(\mathcal{Q}_{G^{(k)}})&=(k-1)^{N-1}\sum\limits_{i=1}^nd_i^{2k}+2k^{k-1}(k-1)^{N-k}\Big(\sum\limits_{i=1}^nd_i^{k+1}+\sum\limits_{\{i,j\}\in E(G)}N_{k}(d_i,d_j)\Big)\\
&+k^{2k-3}(k-1)^{N-2k+1}\sum\limits_{i=1}^nd_i^2+qm,
\end{aligned}$$ [where]{.nodecor} $N=n+m(k-2)$, $N_{s}(d_i,d_j)=\sum\nolimits_{\begin{subarray}{c} 1\le c_i+c_j\le s \\ 0\le c_i,c_j<s \end{subarray}}d_i^{c_i}d_j^{c_j}\big(s=1,\ldots,k\big)$, $q=(k-1)^{N-k}\big((k-1)^{k-1}(k-2)+2k^{k-1}(k-2)+k^{k-1}-2k^{2k-3}(k-1)^{1-k}\big)$.*
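To unpack the notation $N_{s}(d_i,d_j)$, the admissible exponent pairs can be listed directly for small $s$ (spelled out here only as an illustration): for $s=2$ they are $(c_i,c_j)\in\{(1,0),(0,1),(1,1)\}$, so $$N_1(d_i,d_j)=0\quad(\text{an empty sum}),\qquad N_2(d_i,d_j)=d_i+d_j+d_id_j.$$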
The signless Laplacian spectral moments of the $k$-power hypergraph $G^{(k)}$ can also be represented by the Zagreb indices of $G$.
Next, we introduce some concepts for the high-order (signless) Laplacian spectrum of graphs. For a graph $G$ and an integer $k\ge2$, the (signless) Laplacian spectrum of $G^{(k)}$ is called the *$k$-th order (signless) Laplacian spectrum* of $G$. A graph $G$ is said to be determined by its high-order (signless) Laplacian spectrum if there is no non-isomorphic graph $H$ that has the same $k$-th order (signless) Laplacian spectrum as $G$ for all $k\ge2$. We give the following examples to show that some (signless) Laplacian cospectral graphs can be distinguished by their high-order (signless) Laplacian spectra.
![Non-isomorphic Laplacian cospectral graphs](1.png){#fig:1}

![Non-isomorphic signless Laplacian cospectral graphs](2.png){#fig:2}
**Remark 12**. *[The graphs shown in Figure 1 are non-isomorphic Laplacian cospectral graphs. By the $3$-rd order Laplacian spectral moments of their $3$-power hypergraphs, we have $\textnormal{Tr}_{3}(\mathcal{L}_{(G_1)^{(3)}})\ne\textnormal{Tr}_{3}(\mathcal{L}_{(G_2)^{(3)}})$, so $(G_1)^{(3)}$ and $(G_2)^{(3)}$ have different Laplacian spectra. Hence $G_1$ and $G_2$ can be distinguished by their high-order Laplacian spectrum.]{.nodecor}*
*[The graphs shown in Figure 2 are non-isomorphic signless Laplacian cospectral graphs. By the $3$-rd order signless Laplacian spectral moments of their $3$-power hypergraphs, we have $\textnormal{Tr}_{3}(\mathcal{Q}_{(K_3\cup K_1)^{(3)}})\ne\textnormal{Tr}_{3}(\mathcal{Q}_{(K_{1,3})^{(3)}})$, so $(K_3\cup K_1)^{(3)}$ and $(K_{1,3})^{(3)}$ have different signless Laplacian spectra. Hence $K_3\cup K_1$ and $K_{1,3}$ can be distinguished by their high-order signless Laplacian spectrum.]{.nodecor}*
**Acknowledgements**
This work is supported by the National Natural Science Foundation of China (No. 11801115, No. 12071097, No. 12042103, No. 12242105 and No. 12371344), the Natural Science Foundation of the Heilongjiang Province (No. QC2018002) and the Fundamental Research Funds for the Central Universities.
# References {#references .unnumbered}
| arxiv_math | {
"id": "2310.01811",
"title": "The Laplacian spectral moments of power hypergraphs",
"authors": "Jueru Liu, Lixiang Chen, Changjiang Bu",
"categories": "math.CO",
"license": "http://creativecommons.org/licenses/by/4.0/"
} |
---
abstract: |
The goal of this survey is to present intimate interactions between four branches of conformal dynamics: iterations of anti-rational maps, actions of Kleinian reflection groups, dynamics generated by Schwarz reflections in quadrature domains, and algebraic correspondences. We start with several examples of Schwarz reflections as well as algebraic correspondences obtained by matings between anti-rational maps and reflection groups, and examples of Julia set realizations for limit sets of reflection groups (including classical Apollonian-like gaskets). We follow up these examples with dynamical relations between explicit Schwarz reflection parameter spaces and parameter spaces of anti-rational maps and of reflection groups. These are complemented by a number of general results and illustrations of important technical tools, such as David surgery and straightening techniques. We also collect several analytic applications of the above theory.
address:
- Institute for Mathematical Sciences, Stony Brook University, 100 Nicolls Rd, Stony Brook, NY 11794-3660, USA
- School of Mathematics, Tata Institute of Fundamental Research, 1 Homi Bhabha Road, Mumbai 400005, India
title: |
Mirrors of conformal dynamics:\
Interplay between anti-rational maps, reflection groups,\
Schwarz reflections, and correspondences
---
Mikhail Lyubich
[^1]
Sabyasachi Mukherjee
[^2]
# Overview {#intro_sec}
## The dictionary {#the-dictionary .unnumbered}
In his pioneering work on Fuchsian groups, Poincaré studied discrete groups of isometries of the hyperbolic plane generated by reflections [@Poi82]. Decades later, Coxeter [@Cox34] and Vinberg [@Vin67] studied discrete groups generated by reflections in much bigger generality.
In a seemingly unrelated world, Fatou and Julia laid the foundation of the theory of dynamics of holomorphic maps, particularly the dynamics of rational maps on the Riemann sphere, in the first quarter of the twentieth century [@fatou-1919; @fatou-1920a; @fatou-1920b; @fatou-1926; @julia-1918; @julia-1922]. These developments drove Fatou to observe similarities between the dynamics of rational maps and that of Kleinian groups [@Fatou29 p. 22]. In the 1980s, this philosophical analogy was set on a firm footing by Sullivan who introduced quasiconformal techniques in the study of rational dynamics and paved the way for the discovery of various deep connections between these two branches of conformal dynamics [@sullivan-dict]. Further contributions to this dictionary between Kleinian groups (and the associated theory of $3$-manifolds) and rational dynamics were subsequently made by McMullen [@McM95; @McM98b], McMullen-Sullivan [@MS98], Lyubich-Minsky [@LM97], Pilgrim [@Pil03], and others.
Inspired by the Fatou-Sullivan dictionary between Kleinian groups and complex dynamics, it is natural to think of iterations of antiholomorphic rational maps (anti-rational for short) on the Riemann sphere $\widehat{\mathbb{C}}$ as the complex dynamics counterpart of actions of Kleinian reflection groups.
In this survey, we will expound several recent results that advance the above theme in the Fatou-Sullivan dictionary. These results reveal certain explicit and somewhat surprising connections between the dynamics of anti-rational maps and Kleinian reflection groups. Moreover, these novel links between the two branches of conformal dynamics have given rise to a fresh class of conformal dynamical systems on the Riemann sphere generated by Schwarz reflection maps associated with quadrature domains.
## Schwarz reflection dynamics {#schwarz-reflection-dynamics .unnumbered}
A domain in the complex plane with piecewise analytic boundary is called a quadrature domain if the Schwarz reflection map with respect to its boundary extends anti-meromorphically to its interior. Such domains were first investigated by Davis [@Dav74], and independently by Aharonov and Shapiro [@AS73; @AS76; @AS78]. Since then, quadrature domains have played an important role in various areas of complex analysis and fluid dynamics (see [@QD] and the references therein).
The iteration of Schwarz reflections was first studied by Seung-Yeop Lee and Nikolai Makarov in [@LM] to address some questions of interest in statistical physics concerning topology and singular points of quadrature domains. Subsequently, a systematic exploration of Schwarz reflection dynamics was launched in [@LLMM1; @LLMM2; @LLMM3; @LMM2],[^3] which demonstrated that Schwarz dynamics can combine features of dynamics of anti-rational maps and Kleinian reflection groups in a common dynamical plane. More precisely, the dynamical plane of a Schwarz reflection map admits an invariant partition into the *escaping* and *non-escaping* sets, which often parallel the action of a reflection group and of an anti-rational map, respectively. The simplest instance of this combination phenomenon is displayed in Figure [2](#deltoid_intro_fig){reference-type="ref" reference="deltoid_intro_fig"}. It also transpired through these studies that the parameter spaces of Schwarz reflections are intimately related to parameter loci of anti-rational maps and reflection groups.
![Left: The dynamical plane of the Schwarz reflection map with respect to a deltoid curve. The interior (respectively, exterior) of the green Jordan curve is the escaping set (respectively, the non-escaping set), where the map behaves like the ideal triangle reflection group (respectively, like the quadratic map $\overline{z}^2$). Right: The tessellation of the unit disk under the ideal triangle reflection group and the corresponding Cayley tree are depicted.](deltoid_reflection_julia.png "fig:"){#deltoid_intro_fig width="0.36\\linewidth"}![Left: The dynamical plane of the Schwarz reflection map with respect to a deltoid curve. The interior (respectively, exterior) of the green Jordan curve is the escaping set (respectively, the non-escaping set), where the map behaves like the ideal triangle reflection group (respectively, like the quadratic map $\overline{z}^2$). Right: The tessellation of the unit disk under the ideal triangle reflection group and the corresponding Cayley tree are depicted.](ITG.png "fig:"){#deltoid_intro_fig width="0.36\\linewidth"}
## Combination theorems {#combination-theorems .unnumbered}
The mating phenomenon described above fits into the larger story of combination theorems, which has a long and rich history in groups, geometry and dynamics. Roughly speaking, the aim of a combination procedure is to take two compatible objects, and combine them to produce a richer and more general object that retains some of the essential features of the initial objects. Important examples of such constructions include the Klein Combination Theorem for two Kleinian groups [@Klein], the Bers Simultaneous Uniformization Theorem that combines two surfaces (or equivalently, two Fuchsian groups) [@Bers60], the Thurston Double Limit Theorem that allows one to combine two projective measured laminations (or equivalently, two groups on the boundary of the corresponding Teichmüller space) [@Thu86; @Otal98], the Bestvina-Feighn Combination Theorem for Gromov-hyperbolic groups [@BF92], etc. Douady and Hubbard extended the notion of a combination theorem from the world of groups to that of holomorphic dynamics by designing the theory of polynomial mating [@Dou83; @Hub12]. Some of these classical combination theorems can be regarded as the underlying motivation and driving principles for the mating results that will be discussed in this survey.
The task of *interbreeding* reflection groups with anti-rational maps presents several technical challenges. The first obstruction comes from the inherent mismatch between invertible dynamical systems and non-invertible ones. This is circumvented by replacing a reflection group with its so-called *Nielsen map*; i.e., a piecewise anti-Möbius non-invertible map that is orbit equivalent to the group (similar maps in the holomorphic setting are often called *Bowen-Series* maps, cf. [@BS79]). It turns out that these Nielsen maps can often be topologically mated with antiholomorphic polynomials (anti-polynomials for short) along the lines of Douady--Hubbard mating of polynomials. The next difficulty lies in upgrading such topological hybrid dynamical systems to conformal ones. The lack of availability of Thurston-type realization theorems for partially defined dynamical systems and the existence of parabolic elements in reflection groups cause serious impediments to the desired uniformization of topological matings. Novel applications of David homeomorphisms (generalizations of quasiconformal maps) and related surgery techniques were devised in [@LMMN] to surmount the above hurdles and to construct Schwarz reflections as combinations of large classes of anti-polynomials and Nielsen maps associated with reflection groups. It should be mentioned that such a surgery procedure first appeared in the work of Haïssinsky in the context of complex polynomials [@Hai98; @Hai00].
![Left: The connectedness locus of the Circle-and-Cardioid family of Schwarz reflections. Right: A part of the Tricorn.](c_and_c_para.png "fig:"){#c_and_c_para_fig width="0.41\\linewidth"} ![Left: The connectedness locus of the Circle-and-Cardioid family of Schwarz reflections. Right: A part of the Tricorn.](basilica_limb_tricorn_1.png "fig:"){#c_and_c_para_fig width="0.5\\linewidth"}
## Antiholomorphic correspondences {#antiholomorphic-correspondences .unnumbered}
Another important role in this survey is played by antiholomorphic correspondences on the Riemann sphere; i.e., multi-valued maps on $\widehat{\mathbb{C}}$ with antiholomorphic local branches. The phenomenon of *mating* or combining quadratic rational maps with the modular group was discovered by Bullett and Penrose in the context of iterated holomorphic correspondences [@BP], and was studied comprehensively by Bullett and Lomonaco in recent years [@BuLo1; @BuLo2; @BuLo3]. It turns out that the study of Schwarz reflection dynamics can be used profitably to construct in a regular way antiholomorphic analogs of the Bullett-Penrose algebraic correspondences and to generalize them to arbitrary degree (where the modular group is replaced with anti-conformal analogs of Hecke groups, called *anti-Hecke groups*). In [@LLMM3; @LMM3], certain Schwarz reflection maps were constructed as hybrid dynamical systems and they were lifted to produce antiholomorphic correspondences whose branches combine anti-polynomials (or anti-rational maps) with the entire structure of anti-Hecke groups.
## Parameter spaces of Schwarz reflections, anti-rational maps, and reflection groups {#parameter-spaces-of-schwarz-reflections-anti-rational-maps-and-reflection-groups .unnumbered}
The presence of common traits between the dynamics of anti-rational maps and Schwarz reflections manifests itself in the parameter spaces of Schwarz reflection maps as well. This was first observed numerically by Lee and Makarov. Their computer experiments showed that the connectedness locus of a family of quadratic Schwarz reflection maps (the so-called *Circle-and-Cardioid* or *C&C* family) looks identical to (a part of) the *Tricorn*, the connectedness locus of quadratic anti-polynomials (see Figure [4](#c_and_c_para_fig){reference-type="ref" reference="c_and_c_para_fig"}). While the appearance of 'copies' of polynomial connectedness loci in parameter spaces of various holomorphic maps can be justified using the theory of polynomial-like maps and the associated straightening maps (cf. [@DH2; @IK]), the situation here is more subtle as the Schwarz reflections under consideration only exhibit *pinched/degenerate* anti-polynomial-like restrictions that cannot be straightened to anti-polynomials using quasiconformal surgery tools. To bypass this issue, a combinatorial straightening route was adopted in [@LLMM2] to relate the connectedness locus of the Circle-and-Cardioid family to (a part of) the Tricorn. Due to certain quasiconformal flexibility properties of antiholomorphic maps (associated with parabolic dynamics), this combinatorial straightening map only yields a homeomorphism between combinatorial models of the two connectedness loci.
The results of [@LLMM2] have been sharpened and generalized to arbitrary degree in a recent work where the space of polygonal Schwarz reflections of degree $d$ was studied [@LLM23] (for $d=2$, this space reduces to the C&C family and deltoid-like Schwarz reflections). A combination of combinatorial straightening techniques and puzzle machinery was used to construct a *dynamically natural* homeomorphism between purely repelling combinatorial classes of degree $d$ polygonal Schwarz reflections and degree $d$ anti-polynomials with connected limit/Julia set. A dynamically natural bijection between geometrically finite maps in these two families was also established, and this bijection was shown to be generically continuous but discontinuous at some places. Here, dynamically natural means that for any anti-polynomial $f$, the corresponding polygonal Schwarz reflection is a conformal mating of $f$ with the regular ideal polygon reflection group.
There are several other parameter spaces of Schwarz reflection maps that are closely related to parameter spaces of anti-rational maps. A prototypical example of such families is the *cubic Chebyshev* family of Schwarz reflections, which arises from univalent restrictions of the cubic Chebyshev polynomial to appropriate round disks. While these Schwarz reflections also fall outside the scope of usual polynomial-like straightening theory, their pinched anti-polynomial-like restrictions are somewhat more tame than the ones furnished by polygonal Schwarz reflections. In fact, a classical theorem of Warschawski (on the boundary behavior of conformal maps of infinite strips) was used in [@LLMM3] to quasiconformally straighten these pinched anti-polynomial-like restrictions (of Schwarz reflections in the cubic Chebyshev family) to quadratic parabolic anti-rational maps. On the one hand, this result was instrumental in producing quadratic antiholomorphic correspondences as matings of quadratic parabolic anti-rational maps and an anti-conformal analogue of the modular group, and on the other hand, it enabled us to define a straightening map from the cubic Chebyshev family of Schwarz reflections (or equivalently, from the resulting space of correspondences) to a family of quadratic parabolic anti-rational maps.
The results mentioned in the previous paragraph have higher degree analogs too. A generalization of the cubic Chebyshev family of Schwarz reflections was introduced and studied in [@LMM3]. This family of Schwarz reflections arises from the space of degree $d+1$ polynomials ($d\geq 2$) that are injective on the closed disk and have a unique critical point on the unit circle (equivalently, from cardioid-like quadrature domains with a unique cusp on their boundaries). As in the $d=2$ case, a quasiconformal straightening surgery was designed for these Schwarz reflections, and the corresponding parameter space was shown to be a close cousin of a family of parabolic anti-rational maps of degree $d$. Once again, this straightening surgery played a fundamental role in the proof of existence of bi-degree $d$:$d$ antiholomorphic correspondences that are matings of degree $d$ parabolic anti-rational maps and anti-Hecke groups.
One obtains special real two-dimensional slices in the above space of Schwarz reflections when the underlying polynomials (that are injective on the disk) are *Shabat polynomials*; i.e., they have two critical values in the plane. These can be seen as higher degree one-parameter generalizations of the cubic Chebyshev family. The connectedness loci of such families of Schwarz reflections are *combinatorially equivalent* to certain parameter spaces of *Belyi* parabolic anti-rational maps (a Belyi anti-rational map is an anti-rational map with at most three critical values) [@LMM4].
On the group side, let us recall that the index two Fuchsian subgroup of an ideal polygon reflection group uniformizes a punctured sphere. In general, such surfaces have moduli and pinching suitable closed geodesics on these surfaces allows one to study the deformations/degenerations of reflection groups. It turns out that certain classes of Schwarz reflection maps are also amenable to similar deformation techniques. Such deformations were used in [@LMM2] to produce a dynamically natural homeomorphism between a space of Schwarz reflections and the closure of the Bers slice of the ideal polygon reflection group.
## Digression to the holomorphic case {#digression-to-the-holomorphic-case .unnumbered}
In the 1990s, Shaun Bullett and Christopher Penrose [@BP] discovered that some quadratic polynomials and the modular group can co-exist in the same dynamical plane for a bi-degree $2$:$2$ algebraic correspondence, so this correspondence can be viewed as the *mating* of a quadratic map and the modular group. They conjectured that actually any quadratic polynomial can be mated with the modular group in this way and that the parameter space of the relevant correspondences is naturally homeomorphic to the Mandelbrot set. There had been no progress on this conjecture for about two decades.
In the early 2010s, Luna Lomonaco introduced in her thesis a class of *parabolic-like maps*, a parabolic version of polynomial-like maps. Such a map is defined on a domain containing a parabolic point $\alpha$, but it lacks a polynomial-like virtue near $\alpha$. She proved that any parabolic-like map can be straightened to a parabolic rational map [@Lom15]. It was then proposed by Adam Epstein that this result can help to address the Bullett-Penrose Conjecture [@BuLo1 p. 209]. This idea was pursued by Bullett and Lomonaco: they first showed that a certain pinched polynomial-like restriction of a Bullett-Penrose correspondence can be extended to a quasi-regular parabolic-like map, then straightened it to a quadratic parabolic rational map, and then moved on to conclude that the algebraic correspondence in question is the mating of that rational map with the modular group [@BuLo1].
The idea of a parabolic straightening was adapted in [@LLMM3] for the antiholomorphic setting to show that the Chebyshev family of Schwarz reflections is naturally bijectively equivalent to the *parabolic Tricorn* (which is the connectedness locus of quadratic parabolic anti-rational maps). First we perform a surgery that replaces the Blaschke external map of a parabolic rational map with a "Farey external map"[^4] yielding a Schwarz reflection dynamics (see Subsection [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"}). This is a "pinched version" of the classical straightening theorem. Then we lift the Schwarz dynamics to a correspondence dynamics by means of the uniformizing Chebyshev polynomial.
In the course of this development (at the first step), a new version of the surgery machinery was designed for straightening pinched polynomial-like maps that may not necessarily admit a holomorphic extension around the parabolic point (see [@LLMM3 §5] and Subsections [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"} and [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"}). (This machinery makes use of classical Warschawski's Theorem on the uniformization of topological strips, though on various special occasions it can be replaced with a hand-crafted construction.) It avoids an intermediate quasi-regular extension of the map yielding a good control of the dependence of the straightening on the parameters. Altogether, it then directly implies a natural bijection between the parameter space of the correspondences in question and the parabolic Tricorn.
In [@BuLo3] a similar surgery machinery was used by Bullett and Lomonaco to complete their proof that the parameter space of the Bullett-Penrose algebraic correspondences is homeomorphic to the parabolic Mandelbrot set. Finally, due to the work of Petersen and Roesch that appeared meanwhile [@PR21], the parabolic Mandelbrot set turns out to be homeomorphic to the genuine Mandelbrot set. Altogether, it confirmed the Bullett-Penrose Conjecture.
Let us mention in conclusion that in [@BuFr] Bullett and Freiberger put forward a general Mating Conjecture that all polynomials with connected Julia set can be mated with Hecke groups. By means of the above surgery machinery (based on Warschawski's Theorem), an antiholomorphic version of this conjecture was established in [@LMM3] (see Section [11](#general_mating_corr_sec){reference-type="ref" reference="general_mating_corr_sec"}).
## When Julia sets look like Kleinian limit sets {#when-julia-sets-look-like-kleinian-limit-sets .unnumbered}
Another major theme of this survey concerns explicit dynamical relations between limit sets of Kleinian reflection groups and Julia sets of anti-rational maps. While many topological, analytic and measure-theoretic similarities between Kleinian limit sets and rational Julia sets have long been known, no example of a dynamically natural homeomorphism between such fractals was known until recently, to the best of our knowledge. The first non-trivial example of such a homeomorphism was produced in [@LLMM4] by turning the Nielsen map of the classical Apollonian gasket reflection group into a critically fixed anti-rational map (see Figure [6](#limit_julia_intro_fig){reference-type="ref" reference="limit_julia_intro_fig"}). The main idea of this construction was to use a certain compatibility property between Nielsen maps and power maps to cook up a topological branched cover from the Nielsen map of the Apollonian group and then invoke the Thurston Realization Theorem to obtain the desired anti-rational map. This recipe was generalized in [@LLM1], where dynamically natural homeomorphisms between Julia sets of critically fixed anti-rational maps and limit sets of Kleinian reflection groups (arising from finite circle packings) were manufactured. This dictionary has various dynamical consequences; for instance, the geodesic lamination models of the corresponding limit and Julia sets can be explicitly related, and the resulting bijection between critically fixed anti-rational maps and Kleinian reflection groups commutes with the operation of mating in the respective categories. From an analytic point of view, it is worth mentioning that these fractals are often not quasiconformally equivalent [@LZ23b] (although they may have isomorphic quasisymmetry groups, see [@LLMM4]), but these Julia sets can be mapped onto the corresponding limit sets by global David homeomorphisms [@LMMN].
![Two homeomorphic fractals: the classical Apollonian gasket and the Julia set of a cubic anti-rational map.](Apollonian_gasket.png "fig:"){#limit_julia_intro_fig width="0.32\\linewidth"} ![Two homeomorphic fractals: the classical Apollonian gasket and the Julia set of a cubic anti-rational map.](Apollonian_Julia.png "fig:"){#limit_julia_intro_fig width="0.324\\linewidth"}
The above correspondence between critically fixed anti-rational maps and Kleinian reflection groups (arising from finite circle packings) also has fundamental parameter space implications, which were investigated in [@LLM2] using rescaling limit techniques. This revealed striking similarities between the parameter spaces of anti-rational maps and reflection groups. In particular, it was shown that the quasiconformal deformation space of a kissing reflection group is bounded if and only if a suitable deformation space of the corresponding critically fixed anti-rational map is bounded (which can be seen as an analogue of Thurston's Boundedness Theorem in the context of anti-rational maps), and that the bifurcation structures of these deformation spaces have the same combinatorial patterns. Further, it was demonstrated that the union of suitable deformation spaces of critically fixed anti-rational maps admits a monodromy representation onto the mapping class group of a punctured sphere, which is in harmony with a result of Hatcher and Thurston on the global topological complexity of parameter spaces of reflection groups [@HT].
## Applications to analytic problems {#applications-to-analytic-problems .unnumbered}
The development of the iteration theory of Schwarz reflection maps has interesting consequences for certain questions of purely analytic origin. In fact, the characterization of (simply connected) quadrature domains as univalent images of the disk under rational maps gives abundant examples of Schwarz reflection maps, and connects the study of Schwarz reflection dynamics to the classical theory of univalent functions in geometric function theory. The intimate links between Schwarz reflections, quadrature domains, and univalent rational maps have been utilized to study the topology of quadrature domains and answer related questions with statistical physics motivation, to solve extremal problems for suitable classes of univalent maps, and to study domains of univalence of complex polynomials. The crux of the matter is to translate the above analytic problems to questions regarding the dynamics of naturally associated Schwarz reflection maps, and then apply the dynamical theory of Schwarz reflections to obtain the desired solutions [@LM; @LM1; @LMM1; @LMM2; @LMM4]. In the same vein, David surgery tools developed in the mating theory described above also have applications to questions on conformal removability and welding [@LMMN].
## Structure of the survey {#structure-of-the-survey .unnumbered}
We begin the survey (Section [2](#interplay_sec){reference-type="ref" reference="interplay_sec"}) with a quick and somewhat informal introduction to the main mathematical objects, their interconnections, and how this interplay leads to various new results in the antiholomorphic chapter of the Fatou-Sullivan dictionary. Section [3](#antiholo_background_sec){reference-type="ref" reference="antiholo_background_sec"} covers the necessary preliminaries: here we discuss various elementary properties of Kleinian reflection groups, anti-rational dynamics, and Schwarz reflection maps. Sections [4](#quadratic_examples_sec){reference-type="ref" reference="quadratic_examples_sec"} and [5](#cubic_examples_sec){reference-type="ref" reference="cubic_examples_sec"} illustrate various features of Schwarz reflection dynamics, new mating phenomena, straightening techniques, topological and analytic connections between limit and Julia sets, etc. through a number of concrete examples. Section [6](#schwarz_para_space_sec){reference-type="ref" reference="schwarz_para_space_sec"} expounds the parameter space structure of some special families of Schwarz reflections and their relations with appropriate spaces of anti-rational maps and reflection groups. More general straightening and mating results require recently developed David surgery tools, which are discussed in Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"}. Section [8](#new_line_dict_sec){reference-type="ref" reference="new_line_dict_sec"} explicates dynamically natural homeomorphisms between Julia sets of anti-rational maps and limit sets of reflection groups, and parameter space consequences of this connection. The next two Sections, [9](#mating_anti_poly_nielsen_sec){reference-type="ref" reference="mating_anti_poly_nielsen_sec"} and [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}, describe a general mating theory for Nielsen maps of reflection groups and anti-polynomials, and relate the parameter spaces of the associated Schwarz reflections to connectedness loci of anti-polynomials. Section [11](#general_mating_corr_sec){reference-type="ref" reference="general_mating_corr_sec"} is devoted to the construction of antiholomorphic generalizations of Bullett-Penrose correspondences and the underlying straightening surgery. Finally, some of the analytic applications of the theory are recorded in Section [12](#anal_app_sec){reference-type="ref" reference="anal_app_sec"}.
# The main characters, their interplay, and some applications at a glance {#interplay_sec}
## Four models for external dynamics
To unify some of the principal players of this survey, we will refer to the dynamics of an anti-rational map on a (marked) completely invariant Fatou component and to the dynamics of a Schwarz reflection map on its escaping set as their *external dynamics*. More generally, this term applies to the dynamics of anti-polynomial-like maps (or their degenerate analogs, often called pinched anti-polynomial-like maps) on their escaping sets. In accordance with the classical theory of polynomial-like maps (cf. [@DH2]), such external dynamics can be modeled by appropriate piecewise anti-analytic (i.e., real-analytic and orientation-reversing) covering maps of the circle, called *external maps*.
Many of the families of anti-rational maps and Schwarz reflections that we will be concerned with have fixed external dynamics. Relations between the corresponding external maps (i.e., piecewise anti-analytic circle coverings) lie at the core of all fundamental connections between reflection groups and anti-rational maps.
**i) Power map.** The first and the most well-known of them is the power map $\overline{z}^d$. Monic, centered anti-polynomials of degree $d$ with connected Julia set have $\overline{z}^d$ as the conformal model of their external dynamics. We denote the connectedness locus of monic, centered degree $d$ anti-polynomials by $\mathscr{C}_d$.
**ii) Parabolic anti-Blaschke product.** The unicritical antiholomorphic Blaschke (anti-Blaschke for brevity) product $$B_d~=~\frac{(d+1)\overline z^d + (d-1)}{(d-1)\overline z^d + (d+1)}$$ is topologically conjugate to $\overline{z}^d$ on the unit circle $\mathbb{S}^1$; however, unlike the expanding endomorphism $\overline{z}^d$, the map $B_d$ has a parabolic fixed point on the circle. The space of anti-rational maps admitting $B_d$ as their external dynamics can be thought of as the parabolic counterpart of the connectedness locus of degree $d$ anti-polynomials. This space is denoted by $\pmb{\mathcal{B}}_d$ (see Subsection [3.2.3](#para_anti_rat_gen_subsubsec){reference-type="ref" reference="para_anti_rat_gen_subsubsec"}).
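The parabolicity of the boundary fixed point can be verified directly; here is a short computation, included for concreteness. Writing $B_d(z)=g(\overline{z})$ with the holomorphic map $g(w)=\frac{(d+1)w^d+(d-1)}{(d-1)w^d+(d+1)}$, one checks that $$B_d(1)=\frac{(d+1)+(d-1)}{(d-1)+(d+1)}=1 \qquad \text{and} \qquad g'(1)=\frac{2d^2\big((d+1)-(d-1)\big)}{(2d)^2}=1,$$ so the holomorphic second iterate $B_d^{\circ 2}$ fixes $z=1$ with multiplier $g'(1)\,\overline{g'(1)}=1$; that is, the fixed point is parabolic rather than attracting or repelling.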
**iii) Nielsen map.** An analog of the map $\overline{z}^d$ in the reflection group world is given by the *Nielsen map* $\pmb{\mathcal{N}}_d$ associated with the group $\pmb{G}_d$ generated by reflections in the sides of a regular ideal $(d+1)-$gon in the hyperbolic plane. It is a piecewise anti-Möbius map topologically conjugate to $\overline{z}^d$ on $\mathbb{S}^1$ (see Figure [7](#itg_nielsen_fig){reference-type="ref" reference="itg_nielsen_fig"} and Subsection [3.1.4](#nielsen_map_subsubsec){reference-type="ref" reference="nielsen_map_subsubsec"}). The map $\pmb{\mathcal{N}}_d$ has $d+1$ parabolic fixed points on $\mathbb{S}^1$ at the ideal vertices of the regular ideal $(d+1)-$gon, so the above conjugacy is not quasisymmetric. The collection $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ of antiholomorphic maps with $\pmb{\mathcal{N}}_d$ as their external map coincides with the connectedness locus of *regular polygonal* Schwarz reflections; i.e., a certain class of degree $d$ piecewise Schwarz reflection maps associated with tree-like quadrature multi-domains (see Subsection [6.1](#c_and_c_general_subsec){reference-type="ref" reference="c_and_c_general_subsec"} and Section [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}).
![The ideal triangle reflection group is generated by the reflections $\rho_1,\rho_2,\rho_3$ in the sides of an ideal hyperbolic triangle. These generators define the corresponding Nielsen map $\pmb{\mathcal{N}}_2$ as a piecewise anti-Möbius map on the shaded regions.](ITG_Nielsen.png){#itg_nielsen_fig width="0.36\\linewidth"}
**iv) Anti-Farey map.** Yet another external class arising from reflection groups is obtained as a factor of $\pmb{\mathcal{N}}_d$. In fact, since $\pmb{\mathcal{N}}_d$ commutes with rotation by $2\pi/(d+1)$, it descends to a piecewise anti-analytic, degree $d$, orientation-reversing covering map $\pmb{\mathcal{F}}_d:\mathbb{S}^1\to\mathbb{S}^1$ with a unique parabolic fixed point on the circle (see Subsections [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"}, [11.1.2](#anti_farey_subsubsec){reference-type="ref" reference="anti_farey_subsubsec"}). The map $\pmb{\mathcal{F}}_d$ is also topologically conjugate to the above three maps; but more importantly, it is *quasisymmetrically* conjugate to the parabolic anti-Blaschke product $B_d$. Although the external map $\pmb{\mathcal{F}}_d$ is obtained as a factor of the piecewise Möbius Nielsen map $\pmb{\mathcal{N}}_d$, it has a fully ramified critical point. Antiholomorphic maps having $\pmb{\mathcal{F}}_d$ as the external class can be described as the connectedness locus of Schwarz reflections associated with cardioid-like quadrature domains with a unique critical point (of local degree $d+1$) that escapes in one iterate. We denote this space of Schwarz reflections by $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ (see Subsection [11.1.4](#upgrade_step_subsubsec){reference-type="ref" reference="upgrade_step_subsubsec"}).
## Relation between $\overline{z}^d$ and $\pmb{\mathcal{N}}_d$, and its implications to the dictionary
As mentioned before, the Nielsen map $\pmb{\mathcal{N}}_{d}$ is topologically conjugate to the power map $\overline{z}^d$ on $\mathbb{S}^1$ via a circle homeomorphism $\pmb{\mathcal{E}}_d$. Due to its close relation with the classical *Minkowski question mark function*, we call the conjugating map $\pmb{\mathcal{E}}_d$ the *$d$-th Minkowski circle homeomorphism* (see Subsection [3.1.5](#question_mark_subsubsec){reference-type="ref" reference="question_mark_subsubsec"}). This topological compatibility between $\overline{z}^d$ and $\pmb{\mathcal{N}}_d$ serves as a bridge between Kleinian reflection groups and anti-rational maps. Specifically, the existence of this map is instrumental in the addition of the following entries in the Fatou-Sullivan dictionary.
**i) Mating anti-polynomials with necklace reflection groups, and parameter spaces of Schwarz reflections.** There exists a large class of Schwarz reflection maps which are matings of degree $d$ anti-polynomials with connected Julia set and necklace reflection groups (i.e., Kleinian reflection groups in the closure of the Bers slice of $\pmb{G}_d$, see Subsection [3.1.7](#necklace_subsubsec){reference-type="ref" reference="necklace_subsubsec"}). Various instances of this mating phenomenon are explicated in Subsections [4.1](#deltoid_subsec){reference-type="ref" reference="deltoid_subsec"}, [4.2](#c_and_c_center_subsec){reference-type="ref" reference="c_and_c_center_subsec"}, [5.1](#talbot_subsec){reference-type="ref" reference="talbot_subsec"}, [6.1](#c_and_c_general_subsec){reference-type="ref" reference="c_and_c_general_subsec"}, [6.3](#sigma_d_subsec){reference-type="ref" reference="sigma_d_subsec"}, and general existence results are collected in Section [9](#mating_anti_poly_nielsen_sec){reference-type="ref" reference="mating_anti_poly_nielsen_sec"}. In their simplest avatar, the above combination results can be thought of as a fusion of the Bers Simultaneous Uniformization Theorem for Fuchsian groups and simultaneous uniformization of a pair of Blaschke products. More sophisticated versions of matings of anti-polynomials and reflection groups run along the lines of the Douady-Hubbard mating theory for polynomials and the Thurston Double Limit Theorem.
The parameter spaces of these families of Schwarz reflections bear strong resemblance with the connectedness locus $\mathscr{C}_d$ and the Bers slice closure of the reflection group $\pmb{G}_d$ (see Subsections [6.1](#c_and_c_general_subsec){reference-type="ref" reference="c_and_c_general_subsec"}, [6.3](#sigma_d_subsec){reference-type="ref" reference="sigma_d_subsec"} and Section [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}).
**ii) Equivariant homeomorphisms between Julia and limit sets, and deformation space analogies.** Limit sets of Kleinian reflection groups arising from circle packings (including the classical Apollonian gasket) are homeomorphic to Julia sets of critically fixed anti-rational maps in a dynamically natural fashion (see Subsections [5.2](#apollo_group_map_schwarz_subsec){reference-type="ref" reference="apollo_group_map_schwarz_subsec"}, [6.3](#sigma_d_subsec){reference-type="ref" reference="sigma_d_subsec"} for examples of this connection and Subsection [8.1](#new_line_dict_dyn_subsec){reference-type="ref" reference="new_line_dict_dyn_subsec"} for a general result).
Moreover, the bifurcation structure, boundedness properties, and global topologies of the deformation spaces of kissing reflection groups and critically fixed anti-rational maps have stark similarities (see Subsection [8.2](#new_line_dict_para_subsec){reference-type="ref" reference="new_line_dict_para_subsec"}).
## David surgery as a key technical tool
A key technical ingredient in the proof of simultaneous uniformization of Blaschke products (and in many other important surgery techniques in holomorphic dynamics) is the Ahlfors-Beurling Extension Theorem, which states that a quasisymmetric homeomorphism of the circle extends continuously to a quasiconformal homeomorphism of the disk. Since the Minkowski circle homeomorphism $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$ conjugates parabolic dynamics to the hyperbolic one, it is not quasisymmetric. However, it was shown in [@LLMM4] that its inverse admits a David extension to the disk. Indeed, by a direct number-theoretic analysis, it was verified that $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$ satisfies Chen-Chen-He's [@CCH96] and Zakeri's [@Zak04] distortion property that is sufficient for the David extendability. This result was then extended, by dynamical means, to the general class of circle homeomorphisms (and their local counterparts) conjugating hyperbolic dynamics to the parabolic one. It laid down a foundation for a *general David surgery machinery* that facilitates construction of parabolic conformal dynamical systems from hyperbolic ones. This novel machinery is elaborated in Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"} (see Section [13](#qc_david_appendix){reference-type="ref" reference="qc_david_appendix"} for background on David homeomorphisms). Not only is it indispensable for matings of anti-polynomials with necklace groups, but it also yields a direct passage from subhyperbolic (anti-)rational maps to geometrically finite rational maps (generalizing the work of Haïssinsky from the 1990s) and to kissing reflection groups, thereby shedding new light on the analytic geometry of such Julia and limit sets (see Subsection [12.1](#conf_removable_subsec){reference-type="ref" reference="conf_removable_subsec"}).
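For the reader's orientation, we recall the classical Minkowski question mark function in one standard normalization (stated here only as background): for $x\in[0,1]$ with continued fraction expansion $x=[0;a_1,a_2,a_3,\ldots]$, $$?(x)=2\sum_{j\geq 1}\frac{(-1)^{j+1}}{2^{a_1+a_2+\cdots+a_j}},$$ so that, for instance, $?(1/2)=1/2$ and $?(1/3)=1/4$; it maps the Farey (Stern-Brocot) enumeration of the rationals onto the dyadic one.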
## Quasisymmetric compatibility of $B_d$ and $\pmb{\mathcal{F}}_d$, and antiholomorphic paradigm for Bullett-Penrose correspondences
The existence of a quasisymmetric conjugacy between the parabolic anti-Blaschke product $B_d$ and the anti-Farey map $\pmb{\mathcal{F}}_d$ (on the circle) enables one to mate the filled Julia dynamics of maps in $\pmb{\mathcal{B}}_d$ with the external map $\pmb{\mathcal{F}}_d$ and realize the matings as Schwarz reflection maps in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$. The fact that the anti-Farey map $\pmb{\mathcal{F}}_d$ has a fully ramified critical point implies that the corresponding quadrature domains are uniformized by degree $d+1$ polynomials. These uniformizing polynomials can be used to lift Schwarz reflections in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ to construct antiholomorphic correspondences of bi-degree $d$:$d$ that are matings of parabolic anti-rational maps in $\pmb{\mathcal{B}}_d$ and an anti-conformal version of the classical Hecke group. This gives a regular framework for producing correspondences which are antiholomorphic counterparts (for arbitrary degree) of the Bullett-Penrose degree two holomorphic correspondences (see Subsections [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}, [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"} for $d=2$ examples and Section [11](#general_mating_corr_sec){reference-type="ref" reference="general_mating_corr_sec"} for the general realization theorem).
## The emergence of pinched polynomial-like maps and novel straightening techniques
While polynomial-like maps enjoy a central role in holomorphic dynamics, there are certain situations where one encounters degenerate or pinched versions of polynomial-like maps. Roughly speaking, a degenerate polynomial-like map is a proper holomorphic map from a topological disk onto a larger topological disk with finitely many touching points between the domain and co-domain. Such objects naturally appear in the study of maps with parabolic external dynamics (cf. [@Lom15; @PR21]).
The study of Schwarz reflection maps (especially those that combine the actions of anti-polynomials and necklace groups) brings degenerate polynomial-like maps to the fore. In fact, all maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ and $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ (i.e., with Nielsen and anti-Farey external maps) admit degenerate polynomial-like structures that cannot be upgraded to actual polynomial-like maps (see Subsections [4.1.1](#deltoid_degeneration_subsubsec){reference-type="ref" reference="deltoid_degeneration_subsubsec"}, [4.2.3](#c_and_c_basilica_pinched_anti_quad_subsubsec){reference-type="ref" reference="c_and_c_basilica_pinched_anti_quad_subsubsec"}, [4.3.2](#chebyshev_center_hybrid_conj_subsubsec){reference-type="ref" reference="chebyshev_center_hybrid_conj_subsubsec"} for examples). Straightening degenerate polynomial-like maps to rational maps encounters various subtleties. In fact, for degenerate anti-polynomial-like restrictions arising from maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$, there are analytic obstructions to quasiconformal straightening (as $\pmb{\mathcal{N}}_d$ has more than one parabolic fixed point), which makes the study of this space more difficult. As mentioned in the previous section, this compels one to apply combinatorial techniques and puzzle machinery to relate spaces of polygonal Schwarz reflections to connectedness loci of anti-polynomials (see Subsection [6.1](#c_and_c_general_subsec){reference-type="ref" reference="c_and_c_general_subsec"} and Section [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}). On the other hand, the pinched anti-polynomial-like restrictions of maps in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ are amenable to quasiconformal straightening (since their external dynamics have a unique parabolic fixed point with controlled geometry), which allows one to relate the parameter space of $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ to the parameter space of the parabolic anti-rational family $\pmb{\mathcal{B}}_d$ (see Subsections [6.2.2](#cubic_cheby_qc_straightening_subsubsec){reference-type="ref" reference="cubic_cheby_qc_straightening_subsubsec"}, [11.2](#mating_regularity_subsec){reference-type="ref" reference="mating_regularity_subsec"}).
## General conjectures and questions
Let us briefly mention some open questions regarding the general structure of parameter spaces of Schwarz reflection maps and antiholomorphic correspondences.
### Product structure in spaces of Schwarz reflections and combinatorial rigidity
As mentioned before, the dynamical plane of a piecewise Schwarz reflection map can be decomposed into two invariant subsets: the non-escaping set and the escaping/tiling set. In the mating locus, Schwarz reflections behave like anti-rational maps on their non-escaping sets and exhibit features of necklace reflection groups on their tiling sets. Thus, freezing the dynamics of a Schwarz reflection map on its non-escaping set, and deforming its dynamics on the tiling set should give rise to a 'copy' of the Teichmüller space of a necklace group in the Schwarz parameter space. On the other hand, fixing the conformal class of the dynamics on the tiling set, and changing it on the non-escaping set should produce a 'copy' of an anti-rational parameter space in the Schwarz parameter space. The above heuristics suggest that the parameter spaces of appropriate families of Schwarz reflection maps should have a local product structure; i.e., locally they should be products of anti-rational parameter spaces and Teichmüller spaces of reflection groups. While the existence of such *Bers slices* has been justified in several special cases (see Subsections [6.1](#c_and_c_general_subsec){reference-type="ref" reference="c_and_c_general_subsec"}, [6.3](#sigma_d_subsec){reference-type="ref" reference="sigma_d_subsec"} and Section [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}), the general picture demands further investigation. This would require a good understanding of puzzle structures and combinatorial rigidity properties of Schwarz reflection maps.
### Degenerations, and Double Limit Theorems
In the spirit of degenerations of rational maps and Kleinian groups, it is natural to study degenerations of Schwarz reflections that arise as conformal matings of anti-polynomials and necklace groups. This is particularly interesting when the anti-polynomials tend to the boundaries of hyperbolic components and the groups go to the boundary of their quasiconformal deformation spaces. In certain cases, this produces a 'phase transition'; i.e., such a degenerating sequence of matings converges to a limiting piecewise Schwarz reflection dynamical system, but at least one of the quadrature domains gets pinched into a disjoint collection of quadrature domains (or equivalently, the Carathéodory limits of some of the uniformizing rational maps of the associated quadrature domains undergo a degree drop). This is an entirely new degeneration phenomenon in conformal dynamics that deserves to be better understood.
In general, one conjectures that there should be analogues of the Thurston Double Limit Theorem for this setup which would describe the dynamics of the limiting map as a quotient of the dynamics of the degenerating sequence of conformal matings.
### Discreteness locus and mating locus in the space of correspondences
The antiholomorphic correspondences that arise as matings of anti-rational maps and anti-Hecke groups sit inside a larger space of correspondences generated by deck transformations of polynomials and a circular reflection (see Section [11](#general_mating_corr_sec){reference-type="ref" reference="general_mating_corr_sec"}). It would be quite interesting to understand the dynamics of these more general correspondences. For instance, in the spirit of Jörgensen's inequality for Kleinian groups, one can ask when such correspondences exhibit suitable discreteness properties (e.g., when do they act discretely on some part of the sphere?). It is also natural to ask for an intrinsic characterization of the *mating locus* in this bigger space of correspondences. Satisfactory answers to these questions would involve exploring uncharted territories.
# Background on antiholomorphic dynamics {#antiholo_background_sec}
## Kleinian reflection groups {#ref_group_subsec}
### Circle packings {#circle_pack_subsubsec}
A *circle packing* $\mathcal{P}$ is a connected finite collection of at least three (oriented) circles in $\widehat{\mathbb{C}}$ with disjoint interiors. The combinatorial configuration of a circle packing can be encoded by its *contact graph* $\Gamma$, which has a vertex associated with each circle, and an edge connecting two vertices if and only if the two associated circles touch. The embedding of $\mathcal{P}$ in $\widehat\mathbb{C}$ endows its contact graph with a plane structure (equivalently, a cyclic order of the edges meeting at a vertex). Clearly, the contact graph of a circle packing is simple. In fact, this is the only constraint on the graph (see [@Thurston78 Chapter 13]).
**Theorem 1** (Circle Packing Theorem). *Every connected, simple, plane graph is isomorphic to the contact graph of some circle packing.*
**Definition 2**. Let $\Gamma$ be a finite connected graph.
1. $\Gamma$ is said to be *$k$-connected* if $\Gamma$ contains more than $k$ vertices and remains connected if any $k-1$ vertices and their corresponding incident edges are removed.
2. $\Gamma$ is called *polyhedral* if $\Gamma$ is the $1$-skeleton of a convex polyhedron.
3. $\Gamma$ is said to be *outerplanar* if it has a planar drawing for which all vertices lie on the boundary of some face.
4. $\Gamma$ is called *Hamiltonian* if there exists a Hamiltonian cycle, i.e., a closed path in $\Gamma$ visiting each of its vertices exactly once.
According to a theorem of Steinitz, a graph is polyhedral if and only if it is $3$-connected and planar. Given a polyhedral graph, we have a stronger version of the Circle Packing Theorem [@Sch92].
**Theorem 3** (Circle Packing Theorem for polyhedral graphs). *Suppose $\Gamma$ is a polyhedral graph. Then there is a pair of circle packings whose contact graphs are isomorphic to $\Gamma$ and its planar dual. Moreover, the two circle packings intersect orthogonally at their points of tangency.*
*This pair of circle packings is unique up to Möbius transformations.*
### Kissing reflection groups {#kissing_group_subsubsec}
Let $\Gamma$ be a connected simple plane graph. By the Circle Packing Theorem, $\Gamma$ is (isomorphic to) the contact graph of some circle packing $$\mathcal{P}=\{C_1,..., C_{d+1}\}.$$ We define the *kissing reflection group* associated with this circle packing $\mathcal{P}$ as $$G_\mathcal{P} := \langle \rho_1,..., \rho_{d+1}\rangle,$$ where $\rho_i$ is the reflection along the circle $C_i$. As an abstract group, $G_\mathcal{P}$ is the free product of $d+1$ copies of $\mathbb{Z}/2\mathbb{Z}$.
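In coordinates, reflection in the circle with center $c$ and radius $r$ is the map $z\mapsto c+r^2/\overline{z-c}$. The following minimal Python sketch (purely illustrative; the three tangent circles and the sample points are arbitrary choices of ours) builds the generators of a rank three kissing reflection group from circle data and checks that each generator is an involution fixing its circle pointwise.

```python
import math

def reflection(center, radius):
    """Anti-Moebius reflection in the circle |z - center| = radius:
    rho(z) = center + radius^2 / conj(z - center)."""
    def rho(z):
        return center + radius**2 / (z - center).conjugate()
    return rho

# Three mutually tangent unit circles: a circle packing whose contact graph
# is a triangle, generating a kissing reflection group of rank 3.
centers = [0.0 + 0.0j, 2.0 + 0.0j, 1.0 + 1j * math.sqrt(3.0)]
radius = 1.0
gens = [reflection(c, radius) for c in centers]

z = 0.3 + 0.2j
for rho in gens:
    assert abs(rho(rho(z)) - z) < 1e-12           # each generator is an involution
for c, rho in zip(centers, gens):
    w = c + radius * complex(math.cos(0.7), math.sin(0.7))   # a point on the circle
    assert abs(rho(w) - w) < 1e-12                # ... fixing its circle pointwise

# A typical group element acts by composing generators, e.g. rho_1 rho_2:
print(gens[0](gens[1](z)))
```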
Since a kissing reflection group is a discrete subgroup of $\textrm{Aut}^\pm(\widehat{\mathbb{C}})$ (the group of all Möbius and anti-Möbius automorphisms of $\widehat{\mathbb{C}}$) [@VS93 Part II, Chapter 5, Theorem 1.2], definitions of limit set and domain of discontinuity easily carry over to kissing reflection groups (cf. [@LMMN §6.1]). We denote the domain of discontinuity and the limit set of $G_{\mathcal{P}}$ by $\Omega(G_{\mathcal{P}})$ and $\Lambda(G_{\mathcal{P}})$, respectively.
The following proposition characterizes kissing reflection groups with connected limit sets in terms of the contact graph of the underlying circle packing.
**Proposition 4**. *[@LLM1 Proposition 3.4][\[kissing_limit_conn_prop\]]{#kissing_limit_conn_prop label="kissing_limit_conn_prop"} The kissing reflection group $G_\mathcal{P}$ has connected limit set if and only if the contact graph $\Gamma$ of $\mathcal{P}$ is $2$-connected.*
(In one direction: if there is a cut-point in $\Gamma$, then several components of the limit set are attached to the circle of the packing corresponding to that cut-point; see Figure [8](#not_2_conn_fig){reference-type="ref" reference="not_2_conn_fig"}.)
![A disconnected limit set for a kissing reflection group associated with a non $2$-connected contact graph.](DLS.png){#not_2_conn_fig width="0.5\\linewidth"}
*Remark 5*. For technical reasons, it is often important to consider *marked* graphs and *marked* circle packings. Such markings are particularly useful while talking about Teichmüller spaces of kissing reflection groups (cf. [@LLM1 §2, §3]).
A *marking* of a graph $\Gamma$ is the choice of a graph isomorphism $$\varphi: \mathscr{G}\longrightarrow \Gamma,$$ where $\mathscr{G}$ is the underlying abstract graph of $\Gamma$. We refer to the pair $(\Gamma, \varphi)$ as a *marked graph*. Similarly, a circle packing $\mathcal{P}$ is said to be marked if the associated contact graph is marked.
### Fundamental domain for kissing reflection groups {#fund_dom_subsubsec}
Let $\mathcal{P} = \{C_1,..., C_{d+1}\}$ be a circle packing, and $D_i$ be the open round disk enclosed by $C_i$ (recall that the circles $C_i$ are oriented). For each $C_i$, let us consider the upper hemisphere $S_i\subset\mathbb{H}^3$ such that $\partial S_i\cap\partial\mathbb{H}^3= C_i$. The anti-Möbius reflection in $C_i$ extends naturally to the reflection in $S_i$, and defines an orientation-reversing isometry of $\mathbb{H}^3$. Let $\mathfrak{P}$ be the convex hyperbolic polyhedron (in $\mathbb{H}^3$) whose relative boundary in $\mathbb{H}^3$ is the union of the hemispheres $S_i$. Then, $\mathfrak{P}$ is a fundamental domain (called the *Dirichlet fundamental polyhedron*) for the action of the group $G_{\mathcal{P}}$ on $\mathbb{H}^3$, and $$\Pi(G_\mathcal{P}) :=\overline{\mathfrak{P}}\cap\Omega(G_{\mathcal{P}})$$ (where the closure is taken in $\Omega(G_{\mathcal{P}})\cup\mathbb{H}^3$) is a fundamental domain for the action of $G_{\mathcal{P}}$ on $\Omega(G_{\mathcal{P}})$ (see [@LMMN Proposition 6.5], cf. [@Mar16 §3.5], [@Vin67]). Clearly, the fundamental domain $\Pi(G_\mathcal{P})$ can also be written as $$\Pi(G_\mathcal{P}) = \widehat{\mathbb{C}}\setminus\left(\bigcup_{i=1}^{d+1} D_i \cup \mathfrak{S}\right),$$ where $\mathfrak{S}$ is the set consisting of points of tangency for the circle packing $\mathcal{P}$. We remark that the fundamental domain $\Pi(G_{\mathcal{P}})$ is neither open, nor closed in $\widehat{\mathbb{C}}$, but is relatively closed in $\Omega(G_\mathcal{P})$. In Figure [12](#kissing_nielsen_fig){reference-type="ref" reference="kissing_nielsen_fig"} and Figure [17](#necklace_fig){reference-type="ref" reference="necklace_fig"}, the fundamental domains are shaded in grey.
The above discussion shows that the action of a kissing reflection group on $\mathbb{H}^3$ admits a finite-sided polyhedron as its fundamental domain, and hence is *geometrically finite*.
The index two subgroup $\widetilde{G}_{\mathcal{P}}\leqslant G_{\mathcal{P}}$ consisting of orientation-preserving elements is a Kleinian group whose domain of discontinuity coincides with $\Omega(G_\mathcal{P})$. A fundamental domain for the $\widetilde{G}_{\mathcal{P}}-$action on $\Omega(G_{\mathcal{P}})$ is given by doubling $\Pi(G_\mathcal{P})$ along $C_1$. Moreover, $\faktor{\Omega(G_{\mathcal{P}})}{\widetilde{G}_{\mathcal{P}}}$ is a finite union of punctured spheres, where each punctured sphere corresponds to the double of a component of $\Pi(G_\mathcal{P})$.
### Nielsen maps for kissing reflection groups {#nielsen_map_subsubsec}
Let $\mathcal{P}=\{C_1,\cdots, C_{d+1}\}$ be a circle packing realizing a $2$-connected simple plane graph.
**Definition 6**. The *Nielsen map* $\mathcal{N}_{G_{\mathcal{P}}}$ is defined as $$\mathcal{N}_{G_{\mathcal{P}}} : \bigcup_{i=1}^{d+1} \overline{D_i} \rightarrow \widehat{\mathbb{C}}, \qquad z\longmapsto \rho_i(z) \ \textrm{ if } z \in \overline{D_i}.$$
Note that the Nielsen map $\mathcal{N}_{G_{\mathcal{P}}}$ is defined on the limit set $\Lambda(G_{\mathcal{P}})$, a fact that will be of importance later.
![Top: The fundamental domains $\Pi(G_\mathcal{P})$ of the kissing reflection groups are shaded in grey. The corresponding Nielsen maps preserve the components of the domains of discontinuity intersecting the fundamental domains. The restrictions of the Nielsen maps to these components of the domains of discontinuity are conformally conjugate to Nielsen maps of ideal pentagon and ideal triangle reflection groups. Bottom: The Nielsen maps $\pmb{\mathcal{N}}_3$ and $\pmb{\mathcal{N}}_2$ of regular ideal quadrilateral and ideal triangle reflection groups preserve the unit disk $\mathbb{D}$.](NJD.png "fig:"){#kissing_nielsen_fig width="0.42\\linewidth"}![Top: The fundamental domains $\Pi(G_\mathcal{P})$ of the kissing reflection groups are shaded in grey. The corresponding Nielsen maps preserve the components of the domains of discontinuity intersecting the fundamental domains. The restrictions of the Nielsen maps to these components of the domains of discontinuity are conformally conjugate to Nielsen maps of ideal pentagon and ideal triangle reflection groups. Bottom: The Nielsen maps $\pmb{\mathcal{N}}_3$ and $\pmb{\mathcal{N}}_2$ of regular ideal quadrilateral and ideal triangle reflection groups preserve the unit disk $\mathbb{D}$.](Apollonian_gasket.png "fig:"){#kissing_nielsen_fig width="0.404\\linewidth"} ![Top: The fundamental domains $\Pi(G_\mathcal{P})$ of the kissing reflection groups are shaded in grey. The corresponding Nielsen maps preserve the components of the domains of discontinuity intersecting the fundamental domains. The restrictions of the Nielsen maps to these components of the domains of discontinuity are conformally conjugate to Nielsen maps of ideal pentagon and ideal triangle reflection groups. Bottom: The Nielsen maps $\pmb{\mathcal{N}}_3$ and $\pmb{\mathcal{N}}_2$ of regular ideal quadrilateral and ideal triangle reflection groups preserve the unit disk $\mathbb{D}$.](ideal_quad_reflection_group.png "fig:"){#kissing_nielsen_fig width="0.38\\linewidth"}![Top: The fundamental domains $\Pi(G_\mathcal{P})$ of the kissing reflection groups are shaded in grey. The corresponding Nielsen maps preserve the components of the domains of discontinuity intersecting the fundamental domains. The restrictions of the Nielsen maps to these components of the domains of discontinuity are conformally conjugate to Nielsen maps of ideal pentagon and ideal triangle reflection groups. Bottom: The Nielsen maps $\pmb{\mathcal{N}}_3$ and $\pmb{\mathcal{N}}_2$ of regular ideal quadrilateral and ideal triangle reflection groups preserve the unit disk $\mathbb{D}$.](triangle.png "fig:"){#kissing_nielsen_fig width="0.36\\linewidth"}
**Proposition 7**. *[@LLMM4 Proposition 4.1][@LMM2 Proposition 16][\[orbit_equiv_prop\]]{#orbit_equiv_prop label="orbit_equiv_prop"} The map $\mathcal{N}_{G_{\mathcal{P}}}$ is orbit equivalent to $G_{\mathcal{P}}$ on $\widehat{\mathbb{C}}$; i.e., for each $z\in\widehat{\mathbb{C}}$, the group orbit $G_{\mathcal{P}}\cdot z$ is equal to the grand orbit of $z$ under $\mathcal{N}_{G_{\mathcal{P}}}$.*
Let $\Pi(G_\mathcal{P})$ be as in Subsection [3.1.3](#fund_dom_subsubsec){reference-type="ref" reference="fund_dom_subsubsec"}, and $\Pi_1,\cdots,\Pi_k$ the connected components of $\Pi(G_\mathcal{P})$. As the limit set $\Lambda(G_\mathcal{P})$ is connected, each component of the domain of discontinuity $\Omega(G_{\mathcal{P}})$ is simply connected, and each $\Pi_i$ is a closed ideal polygon in the corresponding component $\mathcal{U}_i$ of $\Omega(G_{\mathcal{P}})$ bounded by arcs of finitely many circles in the circle packing (see Figure [12](#kissing_nielsen_fig){reference-type="ref" reference="kissing_nielsen_fig"}). Conjugating the stabilizer subgroup of $\mathcal{U}_i$ in $G_{\mathcal{P}}$ by a Riemann map of $\mathcal{U}_i$, one obtains an ideal polygon reflection group acting on $\mathbb{D}$. Moreover, the component $\mathcal{U}_i$ is forward invariant under the Nielsen map of $G_{\mathcal{P}}$, and the Riemann map conjugates $\mathcal{N}_{G_{\mathcal{P}}}\vert_{\mathcal{U}_i}$ to the action of the Nielsen map of an ideal polygon reflection group on $\mathbb{D}$. Thus, ideal polygon reflection groups play a special role while studying kissing reflection groups.
**Definition 8**. Consider the circle packing $\pmb{\mathcal{P}}_{d}:=\{\pmb{C}_1,\cdots, \pmb{C}_{d+1}\}$ where $\pmb{C}_j$ intersects $\mathbb{S}^1$ at right angles at the roots of unity $\exp{(\frac{2\pi i\cdot(j-1)}{d+1})}$, $\exp{(\frac{2\pi i\cdot j}{d+1})}$. We denote the associated kissing reflection group $G_{\pmb{\mathcal{P}}_{d}}$ by $\pmb{G}_{d}$, and call it the *regular ideal polygon reflection group*.
We will denote the Nielsen map of $\pmb{G}_{d}$ by $\pmb{\mathcal{N}}_d$.
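For the record, the circles of $\pmb{\mathcal{P}}_{d}$ admit a simple closed form: $\pmb{C}_j$ has center $\sec\left(\frac{\pi}{d+1}\right)e^{\frac{\pi i(2j-1)}{d+1}}$ and radius $\tan\left(\frac{\pi}{d+1}\right)$. The short Python sketch below (an illustration of ours; the sample points and tolerances are arbitrary) constructs this packing, checks orthogonality to $\mathbb{S}^1$ and tangency of consecutive circles, and implements $\pmb{\mathcal{N}}_d$, which indeed preserves the unit circle.

```python
import cmath, math

def regular_ideal_packing(d):
    """Centers and common radius of the circles C_1, ..., C_{d+1} meeting the
    unit circle orthogonally at consecutive (d+1)-st roots of unity."""
    n = d + 1
    dist = 1.0 / math.cos(math.pi / n)      # |center_j| = sec(pi/n)
    r = math.tan(math.pi / n)               # common radius
    centers = [dist * cmath.exp(1j * math.pi * (2 * j - 1) / n) for j in range(1, n + 1)]
    return centers, r

def nielsen_map(d):
    centers, r = regular_ideal_packing(d)
    def N(z):
        for c in centers:
            if abs(z - c) <= r + 1e-12:     # z lies in the closed disk bounded by C_j
                return c + r**2 / (z - c).conjugate()
        raise ValueError("point lies outside the union of the closed disks")
    return N

d = 3
centers, r = regular_ideal_packing(d)
# Orthogonality to the unit circle: |c|^2 = 1 + r^2 for every circle.
assert all(abs(abs(c)**2 - 1 - r**2) < 1e-12 for c in centers)
# Consecutive circles are tangent (at the roots of unity).
assert all(abs(abs(centers[j] - centers[(j + 1) % (d + 1)]) - 2 * r) < 1e-12
           for j in range(d + 1))

# The Nielsen map preserves the unit circle, since each C_j is orthogonal to it.
N = nielsen_map(d)
for theta in (0.3, 1.1, 2.0, 4.5):
    assert abs(abs(N(cmath.exp(1j * theta))) - 1.0) < 1e-9
```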
### Conjugation between $\pmb{\mathcal{N}}_{d}$ and $\overline{z}^d$ {#question_mark_subsubsec}
The Nielsen map $\pmb{\mathcal{N}}_{d}$ restricts to a degree $d$ orientation-reversing expansive covering of $\mathbb{S}^1$. Hence, there exists a circle homeomorphism $\pmb{\mathcal{E}}_d$ that conjugates $\pmb{\mathcal{N}}_{d}$ to $\overline{z}^d$, and sends $1$ to $1$. We call the homeomorphism the *$d$-th Minkowski circle homeomorphism*. The existence of this conjugation between the Nielsen map of a group and an anti-polynomial lies at the heart of the connections between kissing reflection groups and anti-rational maps.
We note that the circle homeomorphism $\pmb{\mathcal{E}}_d$ conjugates an expansive circle map (with parabolic fixed points) to an expanding circle map (with only hyperbolic fixed points), and hence $\pmb{\mathcal{E}}_d$ is not a quasi-symmetric homeomorphism.
We explain some connections between the circle homeomorphism $\pmb{\mathcal{E}}_2$ and classical objects in number theory and analysis. In fact, the map $\pmb{\mathcal{E}}_2$ is a close relative of the classical Minkowski question mark function $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}:~ [0,1]\to [0,1].$ One way to define the question mark function is to set $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}(\frac01)=0$ and $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}(\frac11)=1$ and then use the recursive formula $$\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}\left(\frac{p+r}{q+s}\right)=\frac12\left (\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}\left(\frac{p}{q}\right)+\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}\left(\frac{r}{s}\right)\right)
\label{question_mark_recursion}$$ which gives us the values of $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$ on all rational numbers (Farey fractions) in $[0,1]$. In particular, the map $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$ is an increasing homeomorphism of $[0,1]$ that sends the vertices of level $n$ of the Farey tree to the vertices of level $n$ of the dyadic tree (see Figure [14](#question_mark_tree_fig){reference-type="ref" reference="question_mark_tree_fig"}). We refer the reader to [@Min; @Den38; @Sal43; @Con01] for various number-theoretic and analytic properties of $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$.
![Left: The Farey tree. Right: The dyadic tree.](farey_tree.png "fig:"){#question_mark_tree_fig}![Left: The Farey tree. Right: The dyadic tree.](dyadic_tree.png "fig:"){#question_mark_tree_fig}
According to [@LLMM1 §4.4.2], the two maps, $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$ and $\pmb{\mathcal{E}}_2$, are related by the formula $$\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}^{-1}(x)=\varphi\left(\pmb{\mathcal{E}}_2^{-1}\left(e^{\frac{2\pi i x}{3}}\right)\right),\ \forall\ x\in [0,1],$$ where $\varphi$ is a Möbius transformation carrying the unit disk onto the upper half-plane. Roughly speaking, the Minkowski question mark function $\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=1.3pt] (char) {?};}$ is the restriction of the homeomorphism $\pmb{\mathcal{E}}_2$ to the arc $I:= [1,e^{2\pi i/3}]\subset\mathbb{S}^1$ written in appropriate coordinates.
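The recursion above can be evaluated at any rational number by descending the Farey tree with exact arithmetic. Here is a minimal Python sketch (illustrative only; the function name and test values are ours); for instance, it confirms that the question mark function sends $1/3$ to $1/4$ and $2/5$ to $3/8$.

```python
from fractions import Fraction

def question_mark(x):
    """Minkowski's question mark function at a rational x in [0,1], computed by
    descending the Farey tree with the mediant recursion
    ?((p+r)/(q+s)) = ( ?(p/q) + ?(r/s) ) / 2."""
    x = Fraction(x)
    assert 0 <= x <= 1
    lo, lo_val = Fraction(0, 1), Fraction(0)      # ?(0/1) = 0
    hi, hi_val = Fraction(1, 1), Fraction(1)      # ?(1/1) = 1
    if x == lo:
        return lo_val
    if x == hi:
        return hi_val
    while True:
        mediant = Fraction(lo.numerator + hi.numerator,
                           lo.denominator + hi.denominator)
        mediant_val = (lo_val + hi_val) / 2
        if x == mediant:
            return mediant_val
        elif x < mediant:
            hi, hi_val = mediant, mediant_val
        else:
            lo, lo_val = mediant, mediant_val

assert question_mark(Fraction(1, 2)) == Fraction(1, 2)
assert question_mark(Fraction(1, 3)) == Fraction(1, 4)
assert question_mark(Fraction(2, 5)) == Fraction(3, 8)
```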
### Deformation spaces of kissing reflection groups {#kissing_group_deform_space_subsubsec}
The space of all kissing reflection groups of a given rank can be organized in natural deformation spaces. As in the classical theory of Kleinian groups, the perspective of representations proves to be useful in the study of deformation spaces of reflection groups.
Let $G_0$ be a finitely generated discrete subgroup of $\textrm{Aut}^\pm(\widehat{\mathbb{C}})$. A representation (i.e., a group homomorphism) $\xi: G_0 \longrightarrow \textrm{Aut}^\pm(\widehat{\mathbb{C}})$ is said to be *weakly type-preserving* if
1. $\xi(g) \in \textrm{Aut}^+(\widehat\mathbb{C})$ if and only if $g\in \textrm{Aut}^+(\widehat\mathbb{C})$, and
2. if $g \in \textrm{Aut}^+(\widehat\mathbb{C})$, then $\xi(g)$ is parabolic whenever $g$ is parabolic.
Note that a weakly type-preserving representation may send a loxodromic element to a parabolic one.
**Definition 9**.
1. Given a kissing reflection group $G_0$, we define the *algebraic deformation space* $$\begin{aligned}
\textrm{AH}(G_0)
&:= \lbrace\xi: G_0\longrightarrow G\ \textrm{is a weakly type-preserving isomorphism to}
\\
&\qquad \textrm{a discrete subgroup}\ G\ \textrm{of}\ \textrm{Aut}^\pm(\widehat{\mathbb{C}})\rbrace / \sim,\;\end{aligned}$$ where $\xi_1\sim\xi_2$ if there exists a Möbius transformation $M$ such that $$\xi_2(g)=M\circ\xi_1(g)\circ M^{-1},\ \textrm{for all}\ g\in G_0.$$
2. We define the *quasiconformal deformation* space $$\begin{aligned}
\mathcal{QC}(G_0)
&:= \{\xi \in \textrm{AH}(G_0): \xi(g)=\tau\circ g\circ \tau^{-1}, \text{where}\ \tau\ \textrm{is a}\\
&\qquad \textrm{quasiconformal homeomorphism of } \widehat{\mathbb{C}}\}.\end{aligned}$$
3. The *Bers slice* of $\pmb{G}_{d}$ is the subspace of $\mathcal{QC}(\pmb{G}_{d})$ defined as $$\begin{aligned}
\beta(\pmb{G}_{d})
&:= \{\xi\in\mathcal{QC}(\pmb{G}_{d}) :\ \tau\ \textrm{is conformal on}\ \mathbb{D}^*:=\widehat{\mathbb{C}}\setminus\overline{\mathbb{D}}\}.\end{aligned}$$
We endow $\textrm{AH}(G_0)$ with the quotient topology of algebraic convergence; more precisely, a sequence of weakly type-preserving representations $\{\xi_n\}$ converges to $\xi$ algebraically if $\{\xi_n(g_i)\}$ converges to $\xi(g_i)$ as elements of $\textrm{Aut}^\pm(\widehat{\mathbb{C}})$ for (any) finite generating set $\{g_i\}$ of $G_0$.
We will now describe a natural stratification of quasiconformal deformation space closures into cells of various dimensions. Let us first recall that different realizations of a fixed marked, connected simple plane graph $\Gamma$ as circle packings $\mathcal{P}$ produce canonically isomorphic kissing reflection groups $G_{\mathcal{P}}$. Thus, the algebraic/quasiconformal deformation spaces of all such $G_{\mathcal{P}}$ can be canonically identified. Hence, it makes sense to fix a (marked) circle packing realization $\mathcal{P}$ of a (marked) graph $\Gamma$ and define $\mathcal{QC}(\Gamma):=\mathcal{QC}(G_{\mathcal{P}})$.
**Definition 10**. Let $\Gamma_0,\Gamma$ be simple plane graphs with the same number of vertices. We say that $\Gamma$ *dominates* $\Gamma_0$, denoted by $\Gamma \geq \Gamma_0$, if there exists an embedding $\psi: \Gamma_0 \longrightarrow \Gamma$ as plane graphs (i.e., if there exists a graph isomorphism between $\Gamma_0$ and a subgraph of $\Gamma$ that extends to an orientation-preserving homeomorphism of $\widehat{\mathbb{C}}$).
We also define $$\mathrm{Emb}(\Gamma_0):=\{(\Gamma, \psi): \Gamma\geq \Gamma_0 \text{ and } \psi: \Gamma_0 \longrightarrow \Gamma \text{ is an embedding as plane graphs}\}.$$
With terminology as above, we have the following cell structure for the quasiconformal deformation space closure (where the closure is taken in $\mathrm{AH}(\Gamma_0)$). The proof of this result essentially uses the theory of pinching deformations and the Thurston Hyperbolization Theorem.
**Proposition 11**. *[@LLM1 Proposition 3.17][\[cell_structure_group_prop\]]{#cell_structure_group_prop label="cell_structure_group_prop"} $$\overline{\mathcal{QC}(\Gamma_0)} = \bigcup_{(\Gamma, \psi) \in \mathrm{Emb}(\Gamma_0)} \mathcal{QC}(\Gamma).$$*
### Necklace reflection groups {#necklace_subsubsec}
**Definition 12**. A kissing reflection group $G_{\mathcal{P}}$ is called a *necklace group* if the contact graph of $\mathcal{P}$ is $2$-connected and outerplanar.
This is equivalent to requiring that for the circle packing $\mathcal{P}$, each circle $C_i$ is tangent to $C_{i+1}$ (with $i+1$ taken mod $(d+1)$), and that the boundary of one of the components of $\displaystyle\widehat{\mathbb{C}}\setminus\bigcup_{i=1}^{d+1}\overline{D_i}$ intersects each $C_i$ (cf. [@LMMN Definition 6.7]).
![Limit sets of various necklace groups and their underlying circle packings are displayed. The components of the domains of discontinuity outside the limit sets are invariant under the groups.](bers_slice_cusp.png "fig:"){#necklace_fig width="30%"} ![Limit sets of various necklace groups and their underlying circle packings are displayed. The components of the domains of discontinuity outside the limit sets are invariant under the groups.](necklace.png "fig:"){#necklace_fig width="30.8%"} ![Limit sets of various necklace groups and their underlying circle packings are displayed. The components of the domains of discontinuity outside the limit sets are invariant under the groups.](FKRG.png "fig:"){#necklace_fig width="30.4%"}
According to [@LLM1 Proposition 3.20], a kissing reflection group $G_{\mathcal{P}}$ with connected limit set is a necklace group if and only if there is an invariant component $\Omega_\infty(G_{\mathcal{P}})$ of $\Omega(G_{\mathcal{P}})$ such that the $G_{\mathcal{P}}$-action on $\Omega_\infty(G_{\mathcal{P}})$ is conformally equivalent to the action of an ideal polygon reflection group on $\mathbb{D}$[^5] (see Figure [17](#necklace_fig){reference-type="ref" reference="necklace_fig"}). *After possibly quasiconformally conjugating a necklace group $G_{\mathcal{P}}$ on $\Omega_\infty(G_{\mathcal{P}})$, we can and will assume that $G_{\mathcal{P}}\vert_{\Omega_\infty(G_{\mathcal{P}})}$ is conformally equivalent to $\pmb{G}_{d}\vert_{\mathbb{D}^*}$.* With this convention, necklace groups are precisely those kissing reflection groups that lie on the closure of the Bers slice $\beta(\pmb{G}_{d})$. Indeed, one can quasiconformally deform $\pmb{G}_{d}$ so that additional tangencies among the circles of the packing are introduced in the limit (cf. [@LMM2 Proposition 11]). This perspective allows one to embed the space of marked necklace groups (i.e., necklace groups associated with marked circle packings) of a given rank into the algebraic deformation space $\mathrm{AH}(\pmb{G}_{d})$ (see Definition [Definition 9](#deform_space_def){reference-type="ref" reference="deform_space_def"}). In agreement with the classical theory of Kleinian groups, the Bers slice closure $\overline{\beta(\pmb{G}_{d})}$ (or equivalently, the space of necklace groups $G_{\mathcal{P}}$ with frozen conformal dynamics on $\Omega_\infty(G_{\mathcal{P}})$) is compact in $\mathrm{AH}(\pmb{G}_{d})$ (see [@LMM2 §2.2], [@LLM1 §3.3] for details).
Recall that all kissing reflection groups, in particular those in $\overline{\beta(\pmb{G}_{d})}$, are geometrically finite. This allows one to give a simple description of the dynamics of necklace groups on their limit sets.
**Proposition 13**. *[@LMM2 Proposition 22][\[group_lamination_prop\]]{#group_lamination_prop label="group_lamination_prop"} Let $G_{\mathcal{P}}$ be a necklace group associated with a marked circle packing $\mathcal{P}=\{C_1,\cdots,C_{d+1}\}$; i.e., $G_{\mathcal{P}}\in\overline{\beta(\pmb{G}_{d})}$. There exists a conformal map $\varphi_{G_{\mathcal{P}}}: \mathbb{D}^* \rightarrow \Omega_{\infty}(G_{\mathcal{P}})$ such that $$\begin{aligned}
\label{group_conjugacy}
\pmb{\mathcal{N}}_d (z) = \varphi_{G_{\mathcal{P}}}^{-1} \circ \mathcal{N}_{G_{\mathcal{P}}} \circ \varphi_{G_{\mathcal{P}}}(z) \textrm{, for } z\in \mathbb{D}^*\setminus \mathop{\mathrm{int}}{\Pi(\pmb{G}_{d})}. \end{aligned}$$ The map $\varphi_{G_{\mathcal{P}}}$ extends continuously to a semi-conjugacy $\varphi_{G_{\mathcal{P}}}: \mathbb{S}^1 \rightarrow \Lambda(G_{\mathcal{P}})$ between $\pmb{\mathcal{N}}_d|_{\mathbb{S}^1}$ and $\mathcal{N}_{G_{\mathcal{P}}}|_{\Lambda(G_{\mathcal{P}})}$, and for each $i$, sends the cusp of $\partial\Pi(\pmb{G}_{d})$ at $\pmb{C}_i\cap\pmb{C}_{i+1}$ to the cusp of $\partial\Pi(G_{\mathcal{P}})$ at $C_i\cap C_{i+1}$.*
The continuous extension of the conformal map $\varphi_{G_{\mathcal{P}}}$ to the circle is a consequence of local connectivity of $\Lambda(G_{\mathcal{P}})$ (according to [@AM96], connected limit sets of geometrically finite Kleinian groups are locally connected). Proposition [\[group_lamination_prop\]](#group_lamination_prop){reference-type="ref" reference="group_lamination_prop"} yields a dynamically defined *Carathéodory loop* for $\Lambda(G_{\mathcal{P}})$, which can be used to produce a topological model of the limit set in terms of *geodesic laminations* (see Subsections [5.1.1](#talbot_dynamics_subsubsec){reference-type="ref" reference="talbot_dynamics_subsubsec"} and [6.3.2](#sigma_d_schwarz_dyn_subsubsec){reference-type="ref" reference="sigma_d_schwarz_dyn_subsubsec"} for applications of this fact). More precisely, the fibers of $\varphi_{G_{\mathcal{P}}}: \mathbb{S}^1 \rightarrow \Lambda(G_{\mathcal{P}})$ define an equivalence relation on $\mathbb{S}^1$ such that each non-trivial equivalence class consists of two points (cf. [@LMM2 Proposition 54]). Connecting the points of each non-trivial equivalence class by a hyperbolic geodesic in $\mathbb{D}$ produces a $\pmb{G}_{d}$-invariant geodesic lamination (i.e., a closed set of mutually disjoint bi-infinite geodesics) on $\mathbb{D}$. This geodesic lamination can also be described as the lift to the universal cover of a collection of disjoint, simple, closed, non-peripheral geodesics on the punctured sphere $\faktor{\mathbb{D}}{\widetilde{\pmb{G}_d}}$ (cf. [@LLM1 §4.3]).
For a necklace group $G_{\mathcal{P}}$, we set $K(G_{\mathcal{P}}):=\widehat{\mathbb{C}}\setminus\Omega_\infty(G_{\mathcal{P}})$, and call it the *filled limit set* of $G_{\mathcal{P}}$. The component $\Pi(G_{\mathcal{P}})\cap\Omega_\infty(G_{\mathcal{P}})$, which is conformally equivalent to the polygon $\Pi(\pmb{G}_{d})\cap\mathbb{D}^*$, is denoted by $\Pi^u(G_{\mathcal{P}})$. Finally, we set $\Pi^b(G_{\mathcal{P}}) := \Pi(G_{\mathcal{P}})\setminus \Pi^u(G_{\mathcal{P}})$.
*Remark 14*. The local connectivity property holds for connected limit sets of arbitrary finitely generated Kleinian groups [@Mj14a]. This allows one to furnish geodesic lamination models for limit sets of Kleinian groups on boundaries of Bers slices [@Mj14b; @Mj17]. The proofs of these results use the machinery developed for the proof of the Ending Lamination Conjecture (see [@Min10; @BCM12]). For more detailed history of the problem, we refer the reader to [@Mj14a].
## Dynamics of anti-polynomials and the Tricorn {#tricorn_subsec}
In this subsection, we recall some known results on the dynamics of anti-polynomials and their parameter spaces. Although the dynamical properties of anti-polynomials are similar to those of holomorphic polynomials, their parameter spaces have many important differences. We direct the reader to [@LLMM2 §2] for a detailed account of the dynamics and parameter space of quadratic anti-polynomials.
### Some generalities {#anti_poly_dyn_general_subsubsec}
Let $p$ be an anti-polynomial of degree $d\geq 2$. The Fatou and Julia sets of $p$ are defined to be those of the holomorphic second iterate $p^{\circ 2}$. In analogy with the holomorphic case, the set of all points with bounded forward orbit under $p$ is called the *filled Julia set* $\mathcal{K}(p)$. The boundary of the filled Julia set equals the *Julia set* $\mathcal{J}(p)$. The complement of $\mathcal{K}(p)$ is the basin of attraction of the superattracting fixed point $\infty$, and it is denoted by $\mathcal{B}_\infty(p)$.

As in the holomorphic case, there is a conformal map $\varphi_p$ near $\infty$ that conjugates $p$ to $\overline{z}^d$. The map $\varphi_p$ is unique up to multiplication by $(d+1)$-st roots of unity. If $p$ is monic (which can always be arranged by affine conjugation), then $\varphi_p$ can be chosen to be tangent to the identity at $\infty$. With this normalization, the map $\varphi_p$ is called the *Böttcher coordinate* of $p$ near $\infty$ [@Na1 Lemma 1]. The absolute value of $\varphi_p$ always extends to a continuous function on $\mathcal{B}_\infty(p)$, and the level curves of this function are called *equipotential* curves of $p$. If $\mathcal{K}(p)$ is connected, then $\varphi_p$ extends as a conformal conjugacy between $p\vert_{\mathcal{B}_\infty(p)}$ and $\overline{z}^d\vert_{\mathbb{D}^*}$. Otherwise, $\varphi_p$ extends conformally up to the equipotential curve passing through the 'fastest escaping' critical point (cf. [@Mil06 §9]).
**Definition 15**. The *dynamical ray* $R_p(\theta)$ of $p$ at an angle $\theta$ is defined as the pre-image of the radial line at angle $\theta$ under $\varphi_p$.
The dynamical ray $R_p(\theta)$ maps to the dynamical ray $R_p(-d\theta)$ under $p$. It follows that, at the level of external angles, the dynamics of $p$ can be studied by looking at the simpler map $$m_{-d}:\mathbb{R}/\mathbb{Z}\to\mathbb{R}/\mathbb{Z},\ m_{-d}(\theta)=-d\theta.$$ It is well-known that if $p$ has a connected Julia set, then all rational dynamical rays of $p$ land at repelling or parabolic (pre-)periodic points.
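Since rational angles are (pre-)periodic under $m_{-d}$, this combinatorial model is easy to explore with exact arithmetic. A small illustrative Python sketch (the sampled angles are arbitrary choices): under $m_{-2}$ the angles $0$, $1/3$ and $2/3$ are fixed, while $1/7$ lies on a cycle of length $6$.

```python
from fractions import Fraction

def m(theta, d=2):
    """The angle map m_{-d}(theta) = -d*theta on R/Z (exact rational arithmetic)."""
    return (-d * theta) % 1

def orbit(theta, d=2):
    """Forward orbit of a rational angle under m_{-d} until a value repeats."""
    seen, t = [], Fraction(theta)
    while t not in seen:
        seen.append(t)
        t = m(t, d)
    return seen

# The angles 0, 1/3 and 2/3 are fixed by m_{-2} ...
assert all(m(t) == t for t in (Fraction(0), Fraction(1, 3), Fraction(2, 3)))
# ... while 1/7 lies on a cycle of length 6.
print(orbit(Fraction(1, 7)))   # [1/7, 5/7, 4/7, 6/7, 2/7, 3/7]
```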
**Definition 16**. The *rational lamination* of an anti-polynomial $p$ with connected Julia set is defined as an equivalence relation on $\mathbb{Q}/\mathbb{Z}$ such that $\theta_1 \sim \theta_2$ if and only if the dynamical rays $R_p(\theta_1)$ and $R_p(\theta_2)$ land at the same point of $\mathcal{J}(p)$. The rational lamination of $p$ is denoted by $\lambda(p)$.
Some of the basic properties of rational laminations are listed in the next proposition.
**Proposition 17**. *[@Kiw01][\[rat_lami_prop\]]{#rat_lami_prop label="rat_lami_prop"} The rational lamination $\lambda(p)$ of an anti-polynomial $p$ with connected Julia set satisfies the following properties.*
1. *$\lambda(p)$ is closed in $\mathbb{Q}/\mathbb{Z}\times\mathbb{Q}/\mathbb{Z}$.*
2. *Each $\lambda(p)$-equivalence class $A$ is a finite subset of $\mathbb{Q}/\mathbb{Z}$.*
3. *If $A$ is a $\lambda(p)$-equivalence class, then $m_{-d}(A)$ is also a $\lambda(p)$-equivalence class.*
4. *If $A$ is a $\lambda(p)$-equivalence class, then $A\mapsto m_{-d}(A)$ is consecutive reversing.*
5. *$\lambda(p)$-equivalence classes are pairwise unlinked.*
*Remark 18*. For an anti-polynomial $p$ with connected Julia set, the smallest equivalence relation $\widehat{\lambda(p)}$ on $\mathbb{R}/\mathbb{Z}$ that contains the closed set $\overline{\lambda(p)}$ (in $\mathbb{R}/\mathbb{Z}\times\mathbb{R}/\mathbb{Z}$) is called the *combinatorial lamination* of $p$. If $\mathcal{J}(p)$ is locally connected and $p$ has no irrationally neutral cycle, then $\mathcal{J}(p)$ is homeomorphic to the quotient of the circle by the equivalence relation $\widehat{\lambda(p)}$ (cf. [@Kiw01 Lemma 4.17]).
For an antiholomorphic germ $g$ fixing a point $z_0$, the quantity $\frac{\partial g}{\partial\overline{z}}\vert_{z_0}$ is called the *multiplier* of $g$ at the fixed point $z_0$. One can use this definition to define multipliers of periodic orbits of antiholomorphic maps. A cycle is called attracting (respectively, super-attracting) if the associated multiplier has absolute value between $0$ and $1$ (respectively, is equal to $0$). The dynamics of $g$ near such a point is similar to that of a holomorphic germ near a (super-)attracting fixed point [@Mil06 §8, 9]. On the other hand, neutral fixed points of antiholomorphic germs are special in the following sense. Note that for $\theta\in\mathbb{R}$, the map $z\mapsto e^{i\theta}\overline{z}$ is an antiholomorphic involution. Hence the second iterate of a neutral antiholomorphic germ is a tangent-to-identity holomorphic parabolic germ. In this sense, *any neutral fixed point of an antiholomorphic germ $g$ is parabolic*.
**Proposition 19**. *[@HS Lemma 2.3][\[normalization of fatou\]]{#normalization of fatou label="normalization of fatou"} Suppose $z_0$ is a parabolic periodic point of odd period $k$ of an anti-polynomial $p$ with only one petal, and $U$ is a periodic Fatou component with $z_0 \in \partial U$. Then there is an open subset $V \subset U$ with $z_0 \in \partial V$ and $p^{\circ k} (V) \subset V$, so that for every $z \in U$, there is an $n \in \mathbb{N}$ with $p^{\circ nk}(z)\in V$. Moreover, there is a univalent map $\psi^{\mathrm{att}} \colon V \to \mathbb{C}$ that conjugates $p^{\circ k}$ to the glide-reflection $\zeta\mapsto\overline{\zeta}+1/2$; i.e., $$\psi^{\mathrm{att}}(p^{\circ k}(z)) = \overline{\psi^{\mathrm{att}}(z)}+1/2\quad \forall\quad z\in V,$$ and $\psi^{\mathrm{att}}(V)$ contains a right half plane. This map $\psi^{\mathrm{att}}$ is unique up to composition with a horizontal translation.*
The map $\psi^{\mathrm{att}}$ is called the *attracting Fatou coordinate* for the petal $V$. The antiholomorphic iterate interchanges both ends of the Écalle cylinder, so it must preserve one horizontal line around this cylinder (the *equator*). The change of coordinate has been so chosen that the equator is the projection of the real axis. We will call the vertical Fatou coordinate the *Écalle height*. The Écalle height vanishes precisely on the equator. The existence of this distinguished real line, or equivalently an intrinsic meaning to Écalle height, is specific to antiholomorphic maps and plays a crucial role in parameter space discussions (cf. Theorem [\[odd_hyp_bdry_tricorn_thm\]](#odd_hyp_bdry_tricorn_thm){reference-type="ref" reference="odd_hyp_bdry_tricorn_thm"}).
### Quadratic anti-polynomials and the Tricorn {#tricorn_subsubsec}
Any quadratic anti-polynomial, after an affine change of coordinates, can be written in the form $f_c(z) = \overline{z}^2 + c$ for $c \in \mathbb{C}$. This leads, as in the holomorphic case, to the notion of *connectedness locus* of quadratic anti-polynomials:
**Definition 20**. The *Tricorn* is defined as $\mathcal{T} = \{ c \in \mathbb{C} : \mathcal{K}_c:=\mathcal{K}(f_c)$ is connected$\}$.
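As in the holomorphic case, $\mathcal{K}_c$ is connected if and only if the orbit of the critical point $0$ under $f_c$ is bounded, so the Tricorn can be approximated by the standard escape-time algorithm. A rough Python sketch (the escape radius, iteration bound and plotting window are ad hoc choices of ours):

```python
def in_tricorn(c, max_iter=200, escape_radius=4.0):
    """Escape-time test: c is (approximately) in the Tricorn iff the orbit of the
    critical point 0 under f_c(z) = conj(z)^2 + c stays bounded for max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z.conjugate()**2 + c
        if abs(z) > escape_radius:
            return False
    return True

# A coarse ASCII picture of the Tricorn (note the threefold symmetry).
for i in range(37):
    y = 1.8 - i * 0.1
    print("".join("#" if in_tricorn(complex(-2.1 + j * 0.06, y)) else "."
                  for j in range(64)))
```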
The dynamics of quadratic anti-polynomials and their connectedness locus were first studied numerically in [@CHRS], where this set was called the *Mandelbar set*. Their numerical experiments showed curious structural differences between the Mandelbrot set and the Tricorn; in particular, they observed that there are bifurcations from the period $1$ hyperbolic component to period $2$ hyperbolic components along arcs in the Tricorn (see Theorem [\[ThmBifArc\]](#ThmBifArc){reference-type="ref" reference="ThmBifArc"} for a general statement), in contrast with the Mandelbrot set, where bifurcating hyperbolic components are attached at a single point. The name 'Tricorn' is due to Milnor, who found 'copies' of the connectedness locus of quadratic anti-polynomials in parameter spaces of real cubic polynomials and other real maps [@Mil92; @Mil00] (the name comes from the three-cornered shape of the set). Namely, Milnor observed that the dynamics of the corresponding real maps exhibit quadratic anti-polynomial-like behavior, and this was the primary motivation to view the Tricorn as a prototypical object in the study of real slices of holomorphic maps. Nakane proved that the Tricorn is connected, in analogy to Douady and Hubbard's classical proof of connectedness of the Mandelbrot set [@Na1]:
**Theorem 21**. *[@Na1][\[RealAnalUniformization\]]{#RealAnalUniformization label="RealAnalUniformization"} The map $\Phi : \mathbb{C} \setminus \mathcal{T} \rightarrow \mathbb{C} \setminus \overline{\mathbb{D}}$, defined by $c \mapsto \varphi_c(c)$ (where $\varphi_c$ is the Böttcher coordinate near $\infty$ for $f_c$) is a real-analytic diffeomorphism. In particular, the Tricorn is connected.*
The previous theorem also allows us to define parameter rays of the Tricorn.
**Definition 22**. The *parameter ray* at angle $\theta$ of the Tricorn $\mathcal{T}$, denoted by $\mathcal{R}_{\theta}$, is defined as $\{ \Phi^{-1}(r e^{2 \pi i \theta}) : r > 1 \}$, where $\Phi$ is the real-analytic diffeomorphism from the exterior of $\mathcal{T}$ to the exterior of the closed unit disc in the complex plane constructed in Theorem [\[RealAnalUniformization\]](#RealAnalUniformization){reference-type="ref" reference="RealAnalUniformization"}.
We refer the reader to [@NS §3], [@Muk1] for details on combinatorics of landing patterns of dynamical rays for unicritical anti-polynomials.
A map $f_c$ is called hyperbolic (respectively, parabolic) if it has a (super-)attracting (respectively, parabolic) cycle. A connected component of the set of all hyperbolic parameters is called a *hyperbolic component* of $\mathcal{T}$.
![Left: The Tricorn $\mathcal{T}$. Middle: Wiggling of an umbilical cord on the boundary of an odd period hyperbolic component of $\mathcal{T}$. Right: Non-trivial accumulation of parameter rays on the boundary of an odd period hyperbolic component of $\mathcal{T}$.](tricorn.png "fig:"){#tricorn_fig width="0.32\\linewidth"} ![Left: The Tricorn $\mathcal{T}$. Middle: Wiggling of an umbilical cord on the boundary of an odd period hyperbolic component of $\mathcal{T}$. Right: Non-trivial accumulation of parameter rays on the boundary of an odd period hyperbolic component of $\mathcal{T}$.](umbilical_cord.png "fig:"){#tricorn_fig width="0.32\\linewidth"} ![Left: The Tricorn $\mathcal{T}$. Middle: Wiggling of an umbilical cord on the boundary of an odd period hyperbolic component of $\mathcal{T}$. Right: Non-trivial accumulation of parameter rays on the boundary of an odd period hyperbolic component of $\mathcal{T}$.](all_rays_para.png "fig:"){#tricorn_fig width="0.32\\linewidth"}
The even period hyperbolic components of $\mathcal{T}$ are similar to the hyperbolic components of the Mandelbrot set, in the sense that they are real-analytically uniformized by the multiplier of the unique attracting cycle [@NS Theorem 5.6], [@LLMM2 §2.2.4]. The odd period hyperbolic components of $\mathcal{T}$, however, are more delicate. To a map with an odd period attracting cycle, one attaches a natural conformal invariant, called the *Koenigs ratio*, which captures the conformal position of the critical value in the normalized Koenigs coordinate. It turns out that this association defines a degree three branched covering from an odd period hyperbolic component to $\mathbb{D}$.
**Theorem 23**. *[@NS Theorem 5.6, Theorem 5.9][\[tricorn_hyp_unif_thm\]]{#tricorn_hyp_unif_thm label="tricorn_hyp_unif_thm"} Let $H$ be a hyperbolic component of $\mathcal{T}$.*
1. *If $H$ is of odd period, then the Koenigs ratio map is a real-analytic $3$-fold branched covering from $H$ onto the unit disk, ramified only over the origin.*
2. *If $H$ is of even period, then the multiplier map is a real-analytic diffeomorphism from $H$ onto the unit disk.*
*Remark 24*. We note that for each quadratic anti-polynomial in an odd (respectively, even) period hyperbolic component, the first return map of the critical value Fatou component is a degree two proper antiholomorphic (respectively, holomorphic) map and hence has three (respectively, one) boundary fixed points. This fact is manifested in the mapping degree of the Koenigs ratio map (respectively, the multiplier map) of an odd (respectively, even) period hyperbolic component.
The following results describe the boundaries of the hyperbolic components of the Tricorn and the associated bifurcation structure. Once again, the odd and even period hyperbolic components exhibit strikingly different behavior. In particular, the boundaries of odd period hyperbolic components are entirely comprised of parabolic parameters.
**Theorem 25**. *[@MNS Theorem 1.1][\[ThmEvenBif\]]{#ThmEvenBif label="ThmEvenBif"}*
1. *The boundary of an even period hyperbolic component $H$ of $\mathcal{T}$ is a topological circle. For each $c\in\partial H$, the corresponding map $f_c$ has a unique neutral cycle.*
2. *If $f_c$ has a $2k$-periodic cycle with multiplier $e^{2\pi ip/q}$ with $\mathrm{gcd}(p,q)=1$, then $c$ sits on the boundary of a hyperbolic component of period $2kq$ of the Tricorn (and is the root thereof).*
**Theorem 26**. *[@MNS Lemma 2.5, Theorems 1.2, 3.2][\[odd_hyp_bdry_tricorn_thm\]]{#odd_hyp_bdry_tricorn_thm label="odd_hyp_bdry_tricorn_thm"}*
1. *The boundary of a hyperbolic component $H$ of odd period $k$ consists entirely of parameters having a parabolic orbit of exact period $k$. In suitable local conformal coordinates, the $2k$-th iterate of such a map has the form $z\mapsto z+z^{q+1}+\cdots$, with $q\in\{1,2\}$.*
2. *If $q=1$ for some $\widetilde{c}\in\partial H$, then $\widetilde{c}$ lies on a *parabolic arc* in the following sense: there exists a real-analytic arc of simple parabolic parameters $c(h)$ (for $h\in\mathbb{R}$) with quasiconformally equivalent but conformally distinct dynamics of which $\widetilde{c}$ is an interior point, and the Écalle height of the critical value of $f_{c(h)}$ is $h$. In particular, $h\mapsto c(h)$ yields a monotone embedding of $\mathbb{R}$ in $\partial H$.*
3. *The boundary of every odd period hyperbolic component of $\mathcal{T}$ is a topological triangle having double parabolic parameters (i.e., $q=2$) as vertices and parabolic arcs as sides.*
For an odd period hyperbolic component $H$, as $c\in H$ approaches a simple (respectively, double) parabolic parameter on $\partial H$, an attracting periodic point merges with a repelling periodic point (respectively, two repelling periodic points) to produce a simple (respectively, double) parabolic periodic point.
When the parameter $c$ crosses a parabolic arc from the inside to the outside of $H$, a period-doubling bifurcation occurs. More precisely, two fixed points of $f_c^{\circ k}$ (where $k$ is the period of $H$) merge and give rise to a cycle of period $2k$. The type of this cycle changes from attracting to neutral to repelling depending on the crossing point on the arc.
To describe the situation precisely, we need the notion of fixed point residues, which we now recall. Let $g : U \rightarrow \mathbb{C}$ be a holomorphic function on a connected open set $U\ \left(\subset \mathbb{C}\right)$, and $\hat{z}\in U$ be an isolated fixed point of $g$. Then, the fixed point residue of $g$ at $\hat{z}$ is defined to be the complex number $$\displaystyle \iota(g, \hat{z}) = \frac{1}{2\pi i} \oint \frac{dz}{z-g(z)},$$ where we integrate along a small loop in the positive direction around $\hat{z}$. It is easy to see that the fixed point residue does not depend on the choice of complex coordinates, so it is a conformal invariant (see [@Mil06 §12] for basic properties of fixed point residues).
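The fixed point residue is easy to approximate numerically by discretizing the contour integral. As a sanity check (a sketch of ours, with arbitrary contour radius and discretization): for $g(z)=\lambda z$ with $\lambda\neq 1$ the residue at the origin equals $1/(1-\lambda)$, while for the parabolic germ $g(z)=z+z^2$ it vanishes.

```python
import cmath

def fixed_point_residue(g, z0, radius=0.1, n=2000):
    """Approximate (1/(2*pi*i)) times the contour integral of dz/(z - g(z))
    over a small circle of the given radius around the fixed point z0."""
    total = 0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = z0 + radius * cmath.exp(1j * t)
        dz = radius * 1j * cmath.exp(1j * t) * (2 * cmath.pi / n)
        total += dz / (z - g(z))
    return total / (2j * cmath.pi)

lam = 0.5 + 0.3j
print(fixed_point_residue(lambda z: lam * z, 0j))     # approx 1/(1 - lam)
print(1 / (1 - lam))
print(fixed_point_residue(lambda z: z + z**2, 0j))    # approx 0 (parabolic germ)
```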
By the fixed point residue of a periodic orbit of odd period of $f_c$, we will mean the fixed point residue of the second iterate $f_c^{\circ 2}$ at that periodic orbit. Let $\mathcal{C}$ be a parabolic arc of odd period $k$ equipped with the critical Écalle height parametrization $c:\mathbb{R}\to\mathcal{C}$ (by the above theorem). For any $h$ in $\mathbb{R}$, let us denote the fixed point residue of the unique parabolic cycle of $f_{c(h)}^{\circ 2}$ by $\mathop{\mathrm{ind}}(f_{c(h)}^{\circ 2})$. This defines a function $$\mathop{\mathrm{ind}}_{\mathcal{C}}: \mathbb{R}\to\mathbb{C},\ h\mapsto \mathop{\mathrm{ind}}(f_{c(h)}^{\circ 2}).$$
As $c$ tends to a vertex of an odd period hyperbolic component along a parabolic arc, a simple parabolic point merges with a repelling point to form a double parabolic point. In the process, the sum of the fixed point residues of the simple parabolic point and the repelling point converges to the fixed point residue of the double parabolic point. This allows one to understand the asymptotic behavior of the parabolic fixed point residue towards the ends of parabolic arcs. Finally, when a parameter on a parabolic arc is perturbed outside the odd period hyperbolic component, the simple parabolic cycle bifurcates to an attracting or a repelling cycle of twice the period, depending on whether the fixed point residue of the simple parabolic cycle is larger or smaller than $1$.
**Theorem 27**. *[@HS Proposition 3.7, Theorem 3.8, Corollary 3.9], [@IM2 Lemma 2.12][\[ThmBifArc\]]{#ThmBifArc label="ThmBifArc"}*
1. *The function $\mathop{\mathrm{ind}}_{\mathcal{C}}$ is real-valued and real-analytic. Moreover, $$\lim_{h\to\pm\infty}\mathop{\mathrm{ind}}_{\mathcal{C}}(h)=+\infty.$$*
2. *Every parabolic arc of period $k$ intersects the boundary of a hyperbolic component of period $2k$ along an arc consisting of the set of parameters where the parabolic fixed point residue is at least $1$. In particular, every parabolic arc has, at both ends, an interval of positive length at which bifurcation from a hyperbolic component of odd period $k$ to a hyperbolic component of period $2k$ occurs.*
A *Misiurewicz* parameter of the Tricorn is a parameter $c$ such that the critical point $0$ is strictly pre-periodic. It is well-known that for a Misiurewicz parameter, the critical point eventually maps onto a repelling cycle. By the classification of Fatou components, the filled Julia set of such a map has empty interior. Moreover, the Julia set of a Misiurewicz parameter is locally connected [@orsay Exposé III, Proposition 4, Theorem 1]. These parameters play an important role in the understanding of the topology of the Tricorn.
**Theorem 28**. *[@LLMM2 Theorem 2.37][\[Tricorn_para_misi_ray\]]{#Tricorn_para_misi_ray label="Tricorn_para_misi_ray"} Every parameter ray of the Tricorn at a strictly pre-periodic angle (under $m_{-2}$) lands at a Misiurewicz parameter such that in its dynamical plane, the corresponding dynamical ray lands at the critical value. Conversely, every Misiurewicz parameter $c$ of the Tricorn is the landing point of a finite (non-zero) number of parameter rays at strictly pre-periodic angles (under $m_{-2}$) such that the angles of these parameter rays are exactly the external angles of the dynamical rays that land at the critical value $c$ in the dynamical plane of $f_c$.*
We now collect some results that underscore the differences between the global topology of the Tricorn and the Mandelbrot set. Such results include non-landing of rational parameter rays, non-density of Misiurewicz parameters on the boundary, lack of local connectedness, discontinuity of straightening maps between small Tricorn-like sets and the original Tricorn, etc. The lack of quasiconformal rigidity on the boundary of the Tricorn (more precisely, the existence of arcs of quasiconformally conjugate parabolic parameters, see Theorem [\[odd_hyp_bdry_tricorn_thm\]](#odd_hyp_bdry_tricorn_thm){reference-type="ref" reference="odd_hyp_bdry_tricorn_thm"}) lies at the heart of these topological differences.
**Theorem 29**. *[@IM1][\[most rays wiggle\]]{#most rays wiggle label="most rays wiggle"} The accumulation set of every parameter ray accumulating on the boundary of a hyperbolic component of *odd* period (except period one) of $\mathcal{T}$ contains an arc of positive length. The fixed rays at angles $0$, $1/3$ and $2/3$ land on the boundary of the principal hyperbolic component.*
**Theorem 30**. *[@IM1][\[thm_misi_not_dense\]]{#thm_misi_not_dense label="thm_misi_not_dense"} Misiurewicz parameters are not dense on the boundary of $\mathcal{T}$. Indeed, there are points on the boundaries of the period $1$ and period $3$ hyperbolic components of $\mathcal{T}$ that cannot be approximated by Misiurewicz parameters.*
**Theorem 31**. *[@HS Theorem 6.2], [@IM2 Theorem 1.2] [\[Tricorn_non_lc\]]{#Tricorn_non_lc label="Tricorn_non_lc"} The Tricorn is not path connected. Moreover, no non-real hyperbolic component of odd period can be connected to the principal hyperbolic component by a path.*
**Theorem 32**. *[@IM2][\[Straightening_discontinuity_Tricorn\]]{#Straightening_discontinuity_Tricorn label="Straightening_discontinuity_Tricorn"} Let $c_0$ be the center of a hyperbolic component $H$ of odd period (other than $1$) of $\mathcal{T}$, and $\mathcal{R}(c_0)$ be the corresponding $c_0$-renormalization locus (i.e. the baby Tricorn based at $H$). Then the straightening map $\chi_{c_0} : \mathcal{R}(c_0) \rightarrow \mathcal{T}$ is discontinuous at infinitely many parameters.*
We conclude our discussion of the Tricorn with the definition of the real Basilica limb of the Tricorn, which will be important in Subsection [6.1](#c_and_c_general_subsec){reference-type="ref" reference="c_and_c_general_subsec"}. Of course, one can give a more general definition of limbs, which can be found in [@MNS §6]. Let us denote the hyperbolic component of period one of $\mathcal{T}$ by $H_0$.
**Definition 33**. The connected component of $\left(\mathcal{T}\setminus\overline{H_0}\right)\cup\{-\frac{3}{4}\}$ intersecting the real line is called the *real Basilica limb* of the Tricorn, and is denoted by $\mathcal{L}$.
The real Basilica limb $\mathcal{L}$ is precisely the set of parameters $c$ in $\mathcal{T}$ such that in the dynamical plane of $f_c$, the rays $R_c(1/3)$ and $R_c(2/3)$ land at a common point (i.e. $1/3\sim2/3$ in $\lambda(f_c)$ for all $c\in\mathcal{L}$). The real Basilica limb of the Tricorn is depicted in Figure [4](#c_and_c_para_fig){reference-type="ref" reference="c_and_c_para_fig"} (right).
### Parabolic Tricorn and its higher degree versions {#para_anti_rat_gen_subsubsec}
While anti-polynomials form the simplest class of anti-rational maps, the expanding external class $\overline{z}^d$ of an anti-polynomial (with connected Julia set) is a recurring source of mismatch between the dynamics of anti-polynomials and the action of groups with parabolic elements (such as kissing reflection groups). In certain situations, which will be elucidated in Subsections [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}, [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"}, and Section [11](#general_mating_corr_sec){reference-type="ref" reference="general_mating_corr_sec"}, a closely related family of anti-rational maps with an expansive (but not expanding) external class enables one to overcome this difficulty. Specifically, these anti-rational maps have a completely invariant, simply connected Fatou component (like anti-polynomials with connected Julia sets), which is a parabolic basin of attraction (as opposed to the basin of attraction of $\infty$ for anti-polynomials). We now give a formal treatment of this family.
Note that the anti-Blaschke product $$B_d(z) = \frac{(d+1)\overline z^d + (d-1)}{(d-1)\overline z^d + (d+1)}$$ has a parabolic fixed point at $1$, and $\mathbb{D}$ is an invariant parabolic basin of this fixed point. Due to real-symmetry of the map $B_d$, the unique critical point $0$ of $B_d$ in $\mathbb{D}$ has Écalle height zero.
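The fixed point at $1$ and the slow convergence of the critical orbit, typical of parabolic dynamics, are easy to observe numerically; a minimal Python sketch of ours (the iteration count is arbitrary):

```python
def B(z, d=2):
    """The anti-Blaschke product B_d(z) = ((d+1) conj(z)^d + (d-1)) / ((d-1) conj(z)^d + (d+1))."""
    w = z.conjugate()**d
    return ((d + 1) * w + (d - 1)) / ((d - 1) * w + (d + 1))

d = 2
assert abs(B(1 + 0j, d) - 1) < 1e-12      # z = 1 is a fixed point

# The orbit of the critical point 0 stays in the disk and drifts towards 1,
# but only polynomially fast, as expected at a parabolic point.
z = 0j
for n in range(1, 10001):
    z = B(z, d)
    if n in (10, 100, 1000, 10000):
        print(n, abs(z - 1))              # tends to 0 slowly
```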
**Definition 34**. The family $\pmb{\mathcal{B}}_d$ consists of degree $d\geq 2$ anti-rational maps $R$ with the following properties.
1. $\infty$ is a parabolic fixed point for $R$.
2. There is a marked parabolic basin $\mathcal{B}(R)$ of $\infty$ which is simply connected and completely invariant.
3. $R\vert_{\mathcal{B}(R)}$ is conformally conjugate to $B_d\vert_{\mathbb{D}}$.
The complement $\mathcal{K}(R):=\widehat{\mathbb{C}}\setminus \mathcal{B}(R)$ of the marked parabolic basin is called the *filled Julia set* of $R$.
Analogous to the connectedness locus of degree $d$ anti-polynomials, the moduli space $\left[\pmb{\mathcal{B}}_d\right]:= \faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})}$ is also compact [@LMM3 Proposition 4.3].
The family $\pmb{\mathcal{B}}_2$ is called the *parabolic Tricorn*. We direct the reader to [@LLMM3 Appendix A] for a more explicit description and the study of basic topological properties of the parabolic Tricorn.
## Schwarz reflection maps {#schwarz_subsec}
Every non-singular point on a real-analytic curve admits a local Schwarz reflection map. A domain in the complex plane is called a *quadrature domain* if the local Schwarz reflection maps with respect to its boundary extend anti-meromorphically to its interior.
**Definition 35**. A domain $\Omega\subsetneq\widehat{\mathbb{C}}$ with $\infty\notin\partial\Omega$ and $\mathrm{int}(\overline{\Omega})=\Omega$ is called a *quadrature domain* if there exists a continuous function $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ satisfying the following two properties:
1. $\sigma=\mathrm{id}$ on $\partial \Omega$.
2. $\sigma$ is anti-meromorphic on $\Omega$.
The map $\sigma$ is called the *Schwarz reflection map* of $\Omega$.
The notion of quadrature domains first appeared in the work of Davis [@Dav74], and independently in the work of Aharonov and Shapiro [@AS73; @AS76; @AS78]. It is known that except for a finite number of singular points, which are necessarily cusps and double points, the boundary of a quadrature domain consists of finitely many disjoint non-singular real-analytic curves [@Sak91].
The simplest examples of quadrature domains are round disks. Non-trivial examples are produced by the following characterization result.
### Characterization of simply connected quadrature domains and mapping properties of the associated Schwarz reflections {#scqd_schwarz_mapping_deg_subsubsec}
**Proposition 36**. *[@AS76 Theorem 1] A simply connected domain $\Omega\subsetneq\widehat{\mathbb{C}}$ (with $\infty\notin\partial\Omega$ and $\mathrm{int}(\overline{\Omega})=\Omega$) is a quadrature domain if and only if the Riemann uniformization $f:\mathbb{D}\to\Omega$ is rational. In this case, the Schwarz reflection map $\sigma$ of $\Omega$ is given by $f\circ\eta\circ(f\vert_{\mathbb{D}})^{-1}$, where $\eta(z):=1/\overline{z}$.*
*Moreover, if the degree of the rational map $f$ is $d$, then $\sigma:\sigma^{-1}(\Omega)\to\Omega$ is a (branched) covering of degree $(d-1)$, and $\sigma:\sigma^{-1}(\mathop{\mathrm{int}}{\Omega^c})\to\mathop{\mathrm{int}}{\Omega}^c$ is a (branched) covering of degree $d$ (where $\Omega^c:=\widehat{\mathbb{C}}\setminus \Omega$).*
$$\begin{tikzcd}
\mathbb{D}\arrow{d}{\eta} \arrow{r}{f} & \Omega \arrow{d}{\sigma} \\
\widehat{\mathbb{C}} \arrow{r}{f} & \widehat{\mathbb{C}}
\end{tikzcd}$$
The second part of Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"} says that the map $\sigma:\Omega\to\widehat{\mathbb{C}}$ is not a branched cover, but it restricts to branched covers of different degrees on two different parts of its domain (see Figure [\[cardioid_disk_fig\]](#cardioid_disk_fig){reference-type="ref" reference="cardioid_disk_fig"} for an illustration).
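As a concrete illustration (our example, not one taken from the references), the polynomial $f(z)=z+z^2/2$ is univalent on $\overline{\mathbb{D}}$, so $\Omega=f(\mathbb{D})$ is a simply connected quadrature domain, a cardioid-like domain with a cusp at $f(-1)=-1/2$ (since $f'(-1)=0$). Its Schwarz reflection $\sigma=f\circ\eta\circ(f\vert_{\mathbb{D}})^{-1}$ can be evaluated by solving a quadratic equation, and one can verify numerically that $\sigma$ fixes $\partial\Omega$ pointwise.

```python
import cmath

f = lambda z: z + z**2 / 2          # univalent on the closed unit disk

def f_inv(w):
    """Inverse of f on f(closed unit disk): the root of z^2/2 + z - w = 0 lying in the disk."""
    return -1 + cmath.sqrt(1 + 2 * w)

def schwarz(w):
    """Schwarz reflection of Omega = f(D): sigma = f o eta o (f|_D)^{-1}, eta(z) = 1/conj(z)."""
    z = f_inv(w)
    return f(1 / z.conjugate())

# sigma is the identity on the boundary of Omega ...
for theta in (0.4, 1.3, 2.5, 3.9, 5.6):
    w = f(cmath.exp(1j * theta))
    assert abs(schwarz(w) - w) < 1e-12

# ... and acts like a reflection: an interior point near the boundary is sent
# to a nearby point on the other side of the boundary curve.
w = f(0.95 * cmath.exp(0.4j))
print(w, schwarz(w))
```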
*Remark 37*. If $\Omega$ is a simply connected quadrature domain with associated Schwarz reflection map $\sigma$, and $M$ is a Möbius transformation, then $M(\Omega)$ is also a quadrature domain with Schwarz reflection map $M\circ\sigma\circ M^{-1}$. This fact allows one to normalize quadrature domains suitably, and will be used later.
### Piecewise Schwarz reflections in quadrature multi-domains {#piecewise_schwarz_subsubsec}
Consider a finite collection of disjoint simply connected quadrature domains $\Omega_j(\subsetneq\widehat{\mathbb{C}})$, $j\in\{1,\cdots,k\}$, with associated Schwarz reflection maps $\sigma_j$. We define $$\displaystyle\Omega:=\displaystyle\bigsqcup_{j=1}^k\Omega_j,$$ and the map $$\sigma:\overline{\Omega}\to\widehat{\mathbb{C}},\quad w \mapsto \sigma_j(w)\ \ \mbox{if}\ w\in\overline{\Omega}_j.$$ We call $\Omega$ a *quadrature multi-domain* and $\sigma$ a *piecewise Schwarz reflection map*.
Let $f_j:\overline{\mathbb{D}}\to\overline{\Omega}_j$ be the Riemann uniformizations of the simply connected quadrature domains $\Omega_j$ such that each $f_j$ extends as a rational map of $\widehat{\mathbb{C}}$ of degree $d_j$. It follows that $\sigma_j:\sigma_j^{-1}(\Omega_j)\to\Omega_j$ is a branched covering of degree $(d_j-1)$, and $\sigma_j:\sigma_j^{-1}(\mathop{\mathrm{int}}{\Omega_j^c})\to\mathop{\mathrm{int}}{\Omega_j^c}$ is a branched covering of degree $d_j$. Therefore, $\sigma:\sigma^{-1}(\Omega)\to\Omega$ is a (possibly branched) covering of degree $\displaystyle(\sum_{j=1}^k d_j-1)$.
### Invariant partition of the dynamical plane {#inv_partition_subsubsec}
With notation as in Subsection [3.3.2](#piecewise_schwarz_subsubsec){reference-type="ref" reference="piecewise_schwarz_subsubsec"}, we set $$T(\sigma):=\widehat{\mathbb{C}}\setminus\Omega,\ \mathrm{and}\ T^0(\sigma):=T(\sigma)\setminus\{\mathrm{singular\ points\ on}\ \partial T(\sigma)\}.$$ The set $T(\sigma)$ is called the *droplet* of $\sigma$, and the set $T^0(\sigma)$ obtained by removing the singular points from the droplet is called the *fundamental tile* of $\sigma$. We note that $T^0(\sigma)$ is neither open, nor closed. It resembles fundamental domains of kissing reflection groups (cf. Subsection [3.1.3](#fund_dom_subsubsec){reference-type="ref" reference="fund_dom_subsubsec"}).
We define the *escaping/tiling set* $T^\infty(\sigma)$ of $\sigma$ as the set of all points that eventually land in $T^0(\sigma)$; i.e., $$T^\infty(\sigma):=\displaystyle\bigcup_{n=0}^\infty \sigma^{-n}(T^0(\sigma)).$$ The *non-escaping set* of $\sigma$ is defined as $K(\sigma):=\widehat{\mathbb{C}}\setminus T^\infty(\sigma)$.
Often, the dynamics of $\sigma$ on its non-escaping set resembles the dynamics of an anti-polynomial on its filled Julia set. On the other hand, the action of $\sigma$ on its escaping set, especially when the escaping set contains no critical point of $\sigma$, looks like the action of the Nielsen map of a necklace reflection group (for instance, see Figures [\[deltoid_corr_fig\]](#deltoid_corr_fig){reference-type="ref" reference="deltoid_corr_fig"} (right) and [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}).
### Connections with analytic problems {#analysis_connect_subsubsec}
To motivate the nomenclature 'quadrature domain', let us mention that a domain $\Omega\subsetneq\widehat{\mathbb{C}}$ (with $\infty\notin\partial\Omega$ and $\mathrm{int}(\overline{\Omega})=\Omega$) is a quadrature domain in the sense of Definition [Definition 35](#qd_schwarz_def){reference-type="ref" reference="qd_schwarz_def"} if and only if there exists a rational map $R_\Omega$ with all poles inside $\Omega$ such that $$\displaystyle \int_{\Omega} \varphi dA=\frac{1}{2i} \oint_{\partial\Omega} \varphi(z) R_{\Omega}(z) dz\quad \left(=\sum c_k \varphi^{(n_k)}(a_k)\right)$$ for all $\varphi\in H(\Omega)\cap C(\overline{\Omega})$ (if $\infty\in\Omega$, one also requires the test function $\varphi$ to vanish at $\infty$) [@LM Lemma 3.1]. Such identities are called *quadrature identities*, and they appear in various problems of complex analysis. The rational map $R_\Omega$ is called the *quadrature function* of $\Omega$, and the poles of $R_\Omega$ are called the *nodes* of $\Omega$. By [@LM Lemma 3.1], the nodes of $R_\Omega$ are precisely the poles of the Schwarz reflection map $\sigma$.
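The simplest quadrature identity is worth recording for orientation (a standard example, stated here only as a sketch): for a round disk $B(a,r)$ and $\varphi$ holomorphic on a neighborhood of $\overline{B}(a,r)$, the mean value property and Cauchy's integral formula give $$\int_{B(a,r)}\varphi\, dA=\pi r^2\,\varphi(a)=\frac{1}{2i}\oint_{\partial B(a,r)}\varphi(z)\,\frac{r^2}{z-a}\,dz,$$ so $B(a,r)$ is a quadrature domain with quadrature function $R_{B(a,r)}(z)=\frac{r^2}{z-a}$ and a single node at $a$, which is also the unique pole of its Schwarz reflection $w\mapsto a+\frac{r^2}{\overline{w}-\overline{a}}$.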
Areas of analysis where quadrature domains have found applications include quadrature identities [@Dav74; @AS76; @Sak82; @Gus], extremal problems for conformal mapping [@Dur83; @ASS99; @She00], Hele-Shaw flows [@Ric72; @EV92; @GV06], Richardson's moment problem [@Sak78; @EV92; @GHMP00], free boundary problems [@Sha92; @Sak91; @CKS00], subnormal and hyponormal operators [@GP17], etc.
### Connections with statistical physics {#stat_phys_connect_subsubsec}
Complements of quadrature domains naturally arise as accumulation sets of eigenvalues in random normal matrix models, and as accumulation sets of electrons in *2D Coulomb gas models*.
Consider $N$ electrons located at points $\lbrace z_j\rbrace_{j=1}^N$ in the complex plane, influenced by a strong ($2$-dimensional) external electrostatic field arising from a uniform non-zero charge density. Let the scalar potential of the external electrostatic field be $N\cdot Q:\mathbb{C}\to\mathbb{R}\cup\{+\infty\}$ (note that the scalar potential is rescaled so that it is proportional to the number of electrons). The combined energy of the system resulting from particle interaction and external potential is: $$\mathfrak{E}_Q(z_1,\cdots,z_N)=\displaystyle\sum_{i\neq j} \ln\vert z_i-z_j\vert^{-1}+N\sum_{j=1}^N Q(z_j).$$ At equilibrium, the states of this system are distributed according to the Gibbs measure with density $$\frac{\exp(-\mathfrak{E}_Q(z_1,\cdots,z_N))}{Z_N},$$ where $Z_N$ is a normalization constant known as the *partition function*. An important topic in statistical physics is to understand the limiting behavior of the 'electron cloud' as the number of electrons $N$ grows to infinity. Under appropriate regularity conditions on $Q$, in the limit the electrons condense on a compact subset $T$ of the plane, and they are distributed according to the normalized area measure of $T$ [@EF; @HM]. Thus, the probability measure governing the distribution of the limiting electron cloud is completely determined by the shape of $T$, which is usually called a *droplet*. If $Q$ is assumed to be algebraic in a suitable sense, the complementary components of the droplet $T$ turn out to be quadrature domains [@LM]. For example, the deltoid (the compact set bounded by the deltoid curve) is a droplet in the physically interesting case of the localized *cubic external potential*, see [@Wie02].
The $2D$ Coulomb gas model described above is intimately related to logarithmic potential theory with an algebraic external field [@ST97] and the corresponding random normal matrix models, where the same probability measure describes the distribution of eigenvalues [@TBAZW; @ABWZ].
Iteration of Schwarz reflections sheds light on the topology of quadrature domains, and hence on their complementary regions. We refer the reader to [@LM] for details of this connection, or to [@LLMM2 §1.2] for its brief account.
# Quadratic examples: deltoid, Circle-and-Cardioid, and Chebyshev cardioid {#quadratic_examples_sec}
In this section and the next, we will collect various explicit examples of antiholomorphic dynamical systems to illustrate the connections between anti-rational maps, Kleinian reflection groups, and hybrid dynamical systems that combine features of the former two objects in the same dynamical plane.
Let $\Omega$ be a quadrature multi-domain and $\sigma$ be the associated piecewise Schwarz reflection map as in Subsection [3.3.2](#piecewise_schwarz_subsubsec){reference-type="ref" reference="piecewise_schwarz_subsubsec"}. We say that $\sigma$ is *quadratic* if $\sigma:\sigma^{-1}(\Omega)\to\Omega$ has degree two. As in classical holomorphic dynamics, quadratic Schwarz reflection maps play a special role in our theory.
We use the notation of Subsection [3.3.2](#piecewise_schwarz_subsubsec){reference-type="ref" reference="piecewise_schwarz_subsubsec"}. When $\sigma:\sigma^{-1}(\Omega)\to\Omega$ has degree two, we have that $$\displaystyle\sum_{j=1}^k d_j-1=2\implies\displaystyle\sum_{j=1}^k d_j=3.$$ Since each $d_j\geq 1$, it follows that $k\leq 3$.
It turns out that such Schwarz reflection maps come in three interesting flavors [@LLMM3 §2.2]. We briefly explain these possibilities below.
**Case 1: $k=1$.** In this case, $\Omega=\Omega_1$ is a single quadrature domain that is the univalent image of $\mathbb{D}$ under some cubic rational map $f_1$.
**Subcase 1.1.** Generically, $f_1$ has four simple critical points. A specific example of this type of quadrature domains is the exterior of a deltoid, whose associated Schwarz reflection map will be illustrated in Subsection [4.1](#deltoid_subsec){reference-type="ref" reference="deltoid_subsec"}. The dynamics of this map was completely described in [@LLMM1 §4].
**Subcase 1.2.** Now suppose that the rational map $f_1$ has a unique double critical point. Then pre- and post-composing $f_1$ with Möbius maps, we can assume that $f_1(w)=w^3-3w$; and $\sigma$ is the Schwarz reflection map of $\Omega=f_1(\pmb{D})$, where $\pmb{D}$ is a round disk on which $f_1$ acts injectively. We will take a closer look at such a Schwarz reflection map in Subsection [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}.

In fact, restricting the cubic Chebyshev map to various disks of univalence yields a natural one-parameter family of Schwarz reflections, which was studied in detail in [@LLMM3].

**Subcase 1.3.** In the final subcase, suppose that the rational map $f_1$ has two double critical points. Then pre- and post-composing $f_1$ with Möbius maps, we can assume that $f_1(w)=w^3$, and $\sigma$ is the Schwarz reflection map of $\Omega=f_1(\pmb{D})$, where $\pmb{D}$ is a round disk on which $f_1$ acts injectively. Since $\pmb{D}$ does not intersect $\{0,\infty\}$, one easily sees that $\Omega$ contains no critical value of $\sigma$. Thus in this case, $\sigma:\sigma^{-1}(\Omega)\to\Omega$ has no critical point and hence is dynamically uninteresting.
**Case 2: $k=2$.** We can assume that $\mathop{\mathrm{deg}}{f_1}=2$, and $\mathop{\mathrm{deg}}{f_2}=1$. As before, pre- and post-composing $f_1, f_2$ with Möbius maps, we can assume that $f_1(w)=w^2$, $\Omega_1$ is the univalent image of a round disk under $f_1$, and $\Omega_2$ is a round disk in $\widehat{\mathbb{C}}$. Of particular interest is the situation when $\Omega_1$ is a cardioid and $\Omega_2$ is the exterior of a circumcircle of $\partial\Omega_1$. We consider the simplest dynamically interesting map of this kind in Subsection [4.2](#c_and_c_center_subsec){reference-type="ref" reference="c_and_c_center_subsec"}.
The moduli space of all Schwarz reflection maps obtained by fixing a cardioid as $\Omega_1$ and varying the center of the exterior disk $\Omega_2$ such that $\partial \Omega_2$ touches $\partial\Omega_1$ at a unique point produces the Circle-and-Cardioid family, which was the main topic of investigation in [@LLMM1 §5,6] and [@LLMM2].
**Case 3: $k=3$.** In this case, each $f_j$ is a Möbius map, and hence each $\Omega_j$ is a round disk. In particular, each $\sigma_j$ is the reflection in a round circle (thus, $\sigma$ has no critical point), and the resulting dynamics of $\sigma$ is completely understood.
## The deltoid reflection map {#deltoid_subsec}
Suppose that $f$ is a cubic rational map that is univalent on $\mathbb{D}^*=\widehat{\mathbb{C}}\setminus\overline{\mathbb{D}}$. Post-composing $f$ with a Möbius map, we can assume that $f(\infty)=\infty$ and $Df(\infty)=1$. Let us further assume that $f$ has a $2\pi/3$-rotation symmetry; i.e., $f$ commutes with the map $w\mapsto e^{\frac{2\pi i}{3}} w$. Then $f$ must be of the form $$f_t(w)=w + \frac{t}{2w^2},\quad t\neq 0.$$ The assumption that $f_t$ is univalent on $\mathbb{D}^*$ implies that critical points of $f_t$ lie in $\overline{\mathbb{D}}$, and hence $\vert t\vert\leq 1$.
It is easy to check that for $\vert t\vert\leq 1$, each $f_t$ is indeed univalent on $\mathbb{D}^*$ (cf. [@LLMM3 Proposition B.1]). The critical points of $f_t$ lie at the origin and the third roots of $t$.
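The computation behind this statement is routine and we record it only for convenience: $$f_t'(w)=1-\frac{t}{w^3}=0\iff w^3=t,$$ which accounts for three simple critical points at the cube roots of $t$; the remaining critical point of the degree three rational map $f_t$ (there are $2\cdot 3-2=4$ in total, counted with multiplicity) is the double pole at $w=0$. For $\vert t\vert\leq 1$ all of them lie in $\overline{\mathbb{D}}$, consistent with univalence on $\mathbb{D}^*$.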
### Deltoid reflection map as a limit of anti-quadratic-like maps {#deltoid_degeneration_subsubsec}
Let us now try to understand the dynamics of the associated Schwarz reflection maps $$\sigma_t:\Omega_t:=f_t(\mathbb{D}^*)\to\widehat{\mathbb{C}},\qquad t\in(0,1).$$
We first note that $\infty$ is a superattracting fixed point for each $\sigma_t$. All critical points of $f_t$ lie in $\mathbb{D}$ and hence $\partial\Omega_t$ is a non-singular Jordan curve. It follows from local properties of Schwarz reflection maps that $\sigma_t^{-1}(\Omega_t)\Subset\Omega_t$. Moreover, since none of the finite co-critical points of $f_t$ (namely, $-\frac{t^{1/3}}{2}, -\frac{t^{1/3}\omega}{2}, -\frac{t^{1/3}\omega^2}{2}$) lie in $\mathbb{D}^*$, it follows that no finite critical value of $f_t$ lies in $\Omega_t$. The commutative diagram defining $\sigma_t$ now implies that $\sigma_t:\sigma_t^{-1}(\Omega_t)\to\Omega_t$ is a $2:1$ branched cover branched only at $\infty$. By Riemann-Hurwitz, $\sigma_t^{-1}(\Omega_t)$ must be a simply connected domain; and consequently, $\sigma_t:\sigma_t^{-1}(\Omega_t)\to\Omega_t$ is an anti-quadratic-like map (cf. [@LM §4]). Since $\sigma_t$ has a fixed critical point at $\infty$, we conclude that the non-escaping dynamics of $\sigma_t$ (put differently, the dynamics of the anti-quadratic-like map $\sigma_t:\sigma_t^{-1}(\Omega_t)\to\Omega_t$ on its filled Julia set) is hybrid conjugate to $\overline{z}^2$ for all $t\in(0,1)$.
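The Riemann-Hurwitz step can be made explicit (an elementary Euler characteristic count, included here only for completeness): since $\Omega_t$ is a Jordan domain and $\sigma_t:\sigma_t^{-1}(\Omega_t)\to\Omega_t$ is a degree two branched cover with a single branch point, its domain is connected (otherwise it would split into two unbranched degree one pieces), and $$\chi\bigl(\sigma_t^{-1}(\Omega_t)\bigr)=2\,\chi(\Omega_t)-1=2\cdot 1-1=1,$$ so the planar domain $\sigma_t^{-1}(\Omega_t)$ has Euler characteristic one and is therefore simply connected.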
Understanding the escaping dynamics of $\sigma_t$ is equivalent to understanding the external map of the above anti-quadratic-like map. Once again, by Riemann-Hurwitz and Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}, $\sigma_t:\sigma_t^{-1}(\mathop{\mathrm{int}}{\Omega_t^c})\to\mathop{\mathrm{int}}{\Omega_t^c}$ is an annulus-to-disk branched covering of degree three. It is not hard to see that the moduli of these anti-quadratic-like maps go to $0$ as $t\nearrow 1$ (see Figure [\[pre_deltoid_disk_fig\]](#pre_deltoid_disk_fig){reference-type="ref" reference="pre_deltoid_disk_fig"}).

Thus, $\{\sigma_t:t\in(0,1)\}$ defines an escaping path in the space of anti-quadratic-like maps. The Schwarz reflection map $\sigma_1$ naturally appears as a limit point of this path. The associated quadrature domain $f_1(\mathbb{D}^*)$ is the exterior of the classical deltoid curve, where the cusps of the deltoid curve arise from the three critical points of $f_1$ at the third roots of unity (see Figure [\[deltoid_disk_fig\]](#deltoid_disk_fig){reference-type="ref" reference="deltoid_disk_fig"}).
### Dynamics of deltoid reflection {#deltoid_limit_subsubsec}
We now set $f:=f_1$, $\Omega:=f(\mathbb{D}^*)$, and investigate the dynamics of $\sigma:=\sigma_1:\overline{\Omega}\to\widehat{\mathbb{C}}$. The corresponding Schwarz reflection map $\sigma$ can be thought of as an object lying halfway between a quasi-Fuchsian group and a quasi-Blaschke product. Indeed, $\sigma$ combines the action of a reflection group and an anti-Blaschke product in the sense that it acts like the anti-Blaschke product $\overline{z}^2\vert_{\overline{\mathbb{D}^*}}$ on $K(\sigma)$, and acts like the Nielsen map $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$ of the ideal triangle reflection group on $\overline{T^\infty(\sigma)}$. It is the simplest example of an antiholomorphic map exhibiting such hybrid dynamics.
We now explain this mating phenomenon in a bit more detail. The map $\sigma$ has a unique critical point at $\infty$ (which comes from the critical point $0$ of $f$), and this point is fixed by $\sigma$. We denote the basin of attraction of the superattracting fixed point $\infty$ by $\mathcal{B}_\infty(\sigma)$. Since $\sigma$ is unicritical with $\sigma^{-1}(\infty)=\{\infty\}$, it is natural to expect that the non-escaping set $K(\sigma)$ equals $\overline{\mathcal{B}_\infty(\sigma)}$, where $\mathcal{B}_\infty(\sigma)$ is a completely invariant simply connected domain on which $\sigma$ is conformally conjugate to $\overline{z}^2$. However, unlike the maps $\sigma_t$ studied in Subsection [4.1.1](#deltoid_degeneration_subsubsec){reference-type="ref" reference="deltoid_degeneration_subsubsec"}, the restriction $\sigma:\sigma^{-1}(\Omega)\to\Omega$ is not an anti-quadratic-like map since $\partial\Omega$ intersects $\partial\sigma^{-1}(\Omega)$ at the three cusps of $\partial\Omega$, which are fixed points of $\sigma$ (see Figure [\[deltoid_disk_fig\]](#deltoid_disk_fig){reference-type="ref" reference="deltoid_disk_fig"}). Hence, one cannot appeal to standard straightening theorems to analyze the non-escaping dynamics of $\sigma$.
*Remark 38*. In fact, the Puiseux series expansion of $\sigma$ at each of the three cusps of $\partial\Omega$ exhibits parabolic behavior, and the 'attracting directions' at these three fixed points lie in the tiling set [@LLMM1 §4.2.1]. As all fixed points of the map $\overline{z}^2$ on $\mathbb{S}^1$ are repelling, there *cannot* be a hybrid conjugacy between the non-escaping dynamics of $\sigma$ and $\overline{z}^2\vert_{\overline{\mathbb{D}}^*}$. Another way of seeing the non-existence of such a hybrid conjugacy is to observe that the external class of $\sigma$ is $\pmb{\mathcal{N}}_2$, which has parabolic fixed points, whereas the external class of an anti-polynomial is uniformly expanding.
On the other hand, since the only critical point of $\sigma$ is fixed under dynamics, the dynamics on the escaping set $T^\infty(\sigma)$ is unramified (see Subsection [3.3.3](#inv_partition_subsubsec){reference-type="ref" reference="inv_partition_subsubsec"} for the definition). Heuristically, this means that $\sigma\vert_{T^\infty(\sigma)}$ behaves like reflections in the three non-singular real-analytic arcs of $\partial\Omega$. In fact, the mapping degrees of $\sigma$ imply that the fundamental tile $T^0(\sigma)=\Omega^c\setminus\{f(1),f(\omega),f(\omega^2)\}$ (also called the rank zero tile) pulls back under $\sigma$ to three disjoint Jordan regions (see Figure [\[deltoid_disk_fig\]](#deltoid_disk_fig){reference-type="ref" reference="deltoid_disk_fig"}). We call them the rank one tiles. Since the rank one tiles lie in $\Omega$, each of them has two preimages, and this pattern continues for all higher rank tiles. This suggests that the tiling set dynamics of $\sigma$ is conjugate to the Nielsen map of the ideal triangle reflection group.
Adapting classical Fatou-Julia theory and *puzzle piece* techniques for the setting of Schwarz reflection maps, the above conjectural picture was confirmed in [@LLMM1]. It was further shown that $\mathcal{B}_\infty(\sigma)$ and $T^\infty(\sigma)$ are Jordan domains with a common boundary (see Figure [\[deltoid_corr_fig\]](#deltoid_corr_fig){reference-type="ref" reference="deltoid_corr_fig"}).
**Theorem 39** (Dynamics of deltoid reflection). *[@LLMM1 Theorem 1.1, §4.1, §4.2][\[deltoid_thm_1\]]{#deltoid_thm_1 label="deltoid_thm_1"} The dynamical plane of the Schwarz reflection $\sigma$ of the deltoid can be partitioned as $$\widehat {\mathbb C}=T^\infty(\sigma)\sqcup \Lambda(\sigma)\sqcup \mathcal{B}_\infty(\sigma),$$ where $T^\infty(\sigma)$ is the tiling set, $\mathcal{B}_\infty(\sigma)$ is the basin of infinity, and $\Lambda(\sigma)$ is their common boundary (which we call the *limit set*). Moreover, $\sigma:\overline{T^\infty(\sigma)}\setminus\mathop{\mathrm{int}}{T^0(\sigma)}\to \overline{T^\infty(\sigma)}$ is conformally conjugate to $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$, the map $\sigma:\overline{\mathcal{B}_\infty(\sigma)}\to\overline{\mathcal{B}_\infty(\sigma)}$ is conformally conjugate to $\overline{z}^2:\overline{\mathbb{D}^*}\to\overline{\mathbb{D}^*}$, and $\Lambda(\sigma)$ is a Jordan curve.*
### Deltoid reflection as the unique conformal mating of $\overline{z}^2$ and $\pmb{\mathcal{N}}_2$ {#deltoid_unique_mating_subsubsec}
Recall that the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ conjugates $\pmb{\mathcal{N}}_2$ to $\overline{z}^2$, and sends $1$ to $1$. One can topologically glue $\overline{\mathbb{D}}$ and $\overline{\mathbb{D}^*}$, equipped with the two dynamical systems $$\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}\quad \mathrm{and}\quad \overline{z}^2:\overline{\mathbb{D}^*}\to\overline{\mathbb{D}^*}$$ respectively, along the unit circle using the homeomorphism $\pmb{\mathcal{E}}_2$. This yields a partially defined continuous map on the topological $2$-sphere, and this map is called the *topological mating* of $\overline{z}^2$ and $\pmb{\mathcal{N}}_2$.
The deltoid Schwarz reflection map $\sigma$ is a *conformal mating* of $\overline{z}^2$ and $\pmb{\mathcal{N}}_2$ in the sense that there exists a conformal structure on the above topological $2$-sphere whose uniformizing map $\mathbb{S}^2\longrightarrow\widehat{\mathbb{C}}$ conjugates the topological mating to $\sigma$.
In fact, a careful analysis of the geometry of $T^\infty(\sigma)$ reveals that it is a John domain, and hence its boundary is conformally removable (cf. [@Jon95], Appendix [13.4](#removable_david_subsec){reference-type="ref" reference="removable_david_subsec"}). Conformal removability of $\Lambda(\sigma)$, other than being important from the complex-analytic viewpoint, implies that the conformal mating of $\overline{z}^2$ and $\pmb{\mathcal{N}}_2$ is unique up to Möbius conjugation. We summarize these results below.
**Theorem 40** (Deltoid reflection as unique conformal mating). *[@LLMM1 Theorem 1.1, §4.3, §4.4][\[deltoid_thm_2\]]{#deltoid_thm_2 label="deltoid_thm_2"} The tiling set $T^\infty(\sigma)$ is a John domain, and hence the limit set $\Lambda(\sigma)$ is a conformally removable Jordan curve. Consequently, $\sigma$ is the unique conformal mating of the reflection map $\pmb{\mathcal{N}}_2: \overline{\mathbb D}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to \overline{\mathbb D}$ and the anti-polynomial $\overline{z}^2\vert_{\overline{\mathbb{D}^*}}$, up to Möbius conjugation.*
There is an alternative way of concluding uniqueness of the above conformal mating that does not appeal to conformal removability of the limit set. In fact, any conformal mating $\widetilde{\sigma}$ of $\overline{z}^2\vert_{\overline{\mathbb{D}}^*}$ and $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$ is an antiholomorphic map defined on the closure of a Jordan domain in the Riemann sphere. Indeed, since $\pmb{\mathcal{N}}_2$ is not defined on the interior of the ideal triangle $\mathbb{D}\cap\Pi(\pmb{G}_2)$, any conformal mating $\widetilde{\sigma}$ must be defined on the complement of a homeomorphic copy of $\mathbb{D}\cap\Pi(\pmb{G}_2)$. Moreover, as $\pmb{\mathcal{N}}_2$ fixes $\partial \Pi(\pmb{G}_2)$ pointwise, $\widetilde{\sigma}$ must fix the boundary of its domain of definition pointwise. It follows that the domain of definition is the closure of a simply connected quadrature domain, and $\widetilde{\sigma}$ is the associated Schwarz reflection map. That this Schwarz reflection map can be chosen to be the deltoid Schwarz reflection can be easily proved using the critical orbit relation and mapping degrees of $\overline{z}^2$ and $\pmb{\mathcal{N}}_2$ (see [@LLMM2 Appendix A] for details). This shows that Schwarz reflections are natural objects from the point of view of mating conformal dynamical systems.
### A non-quasisymmetric welding map {#question_mark_welding_subsubsec}
The existence of the deltoid Schwarz reflection implies that the welding problem for the non-quasisymmetric circle homeomorphism $\pmb{\mathcal{E}}_2$ has a unique solution (see Subsection [3.1.5](#question_mark_subsubsec){reference-type="ref" reference="question_mark_subsubsec"}).
Specifically, according to Theorem [\[deltoid_thm_1\]](#deltoid_thm_1){reference-type="ref" reference="deltoid_thm_1"}, there exist homeomorphic extensions of conformal maps $\varphi^{\mathrm{in}}:\overline{\mathbb{D}}\to \overline{T^\infty(\sigma)}, \varphi^{\mathrm{out}}:\overline{\mathbb{D}^*}\to \overline{\mathcal{B}_\infty(\sigma)}$, where the former conjugates $\pmb{\mathcal{N}}_2$ to $\sigma$ and the latter conjugates $\overline{z}^2$ to $\sigma$. After possibly pre-composing $\varphi^{\mathrm{in}}$ with a rotation, one can assume that $\varphi^{\mathrm{in}}(1)=\varphi^{\mathrm{out}}(1)$. Then, $\left(\varphi^{\mathrm{out}}\right)^{-1}\circ\varphi^{\mathrm{in}}:\mathbb{S}^1\to\mathbb{S}^1$ conjugates $\pmb{\mathcal{N}}_2$ to $\overline{z}^2$, and sends $1$ to $1$. Hence, we have that $\pmb{\mathcal{E}}_2=\left(\varphi^{\mathrm{out}}\right)^{-1}\circ\varphi^{\mathrm{in}}$. It follows that $\pmb{\mathcal{E}}_2$ is the welding homeomorphism associated with the Jordan curve $\Lambda(\sigma)$. Moreover, conformal removability of $\Lambda(\sigma)$ implies that $\Lambda(\sigma)$ is the unique solution to the welding problem for the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ (up to Möbius maps).
A general result to the effect that circle homeomorphisms conjugating piecewise analytic expansive circle maps are welding maps will be discussed in Subsection [12.2](#welding_subsec){reference-type="ref" reference="welding_subsec"}.
### Deltoid reflection via David surgery {#deltoid_david_subsubsec}
Theorem [\[deltoid_thm_2\]](#deltoid_thm_2){reference-type="ref" reference="deltoid_thm_2"} yields an *unmating* of the deltoid reflection map into an anti-polynomial and the Nielsen map of a reflection group. Reversing the point of view, one can ask whether the topological mating of $\overline{z}^2$ and $\pmb{\mathcal{N}}_2$ can be upgraded to a conformal mating without having prior knowledge of the deltoid reflection map. A direct construction of such conformal matings would require one to appeal to a suitable uniformization theorem. However, the fact that one is trying to combine an expanding dynamical system with one that has parabolic fixed points renders standard quasiconformal tools (e.g., techniques used in the proof of Bers Simultaneous Uniformization Theorem or in the construction of quasi-Blaschke products as matings of two Blaschke products) inapplicable to this setting.
In [@LLMM4 §9], number-theoretic properties of the map $\pmb{\mathcal{E}}_2$ (see Subsection [3.1.5](#question_mark_subsubsec){reference-type="ref" reference="question_mark_subsubsec"}) were used to obtain distortion estimates for $\pmb{\mathcal{E}}_2^{-1}$ and to conclude that the inverse of the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ continuously extends to a David homeomorphism of $\mathbb{D}$. This result, combined with the David Integrability Theorem (see Theorem [\[david_integrability_thm\]](#david_integrability_thm){reference-type="ref" reference="david_integrability_thm"}) allows one to establish the existence of a unique conformal mating of $\pmb{\mathcal{N}}_{2}$ and $\overline{z}^2$.
We will return to this theme in Section [9](#mating_anti_poly_nielsen_sec){reference-type="ref" reference="mating_anti_poly_nielsen_sec"}, where a general combination theorem for anti-polynomials and Nielsen maps of reflection groups will be expounded.
### Antiholomorphic correspondence as lift of deltoid reflection {#deltoid_corr_subsubsec}
The rational map $f$ gives rise to a $2$:$2$ correspondence $\mathfrak{C}\subset\widehat{\mathbb{C}}\times\widehat{\mathbb{C}}$ whose dynamics is akin to the dynamics of a family of algebraic correspondences introduced by Bullett and Penrose in the 1990s [@BP]. We define the correspondence $\mathfrak{C}$ as $$(z,w)\in\mathfrak{C}\quad \iff\quad \frac{f(w)-f(\eta(z))}{w-\eta(z)}\ =\ 0\quad \iff\quad \overline{z}^2 w+\overline{z}\ =\ 2w^2.
\label{corr_del_eqn}$$ (We remark that this correspondence is called antiholomorphic because the local branches $z\mapsto w$ are antiholomorphic.) The dynamical plane of $\mathfrak{C}$ splits into two invariant sets: the *lifted non-escaping set* $\widetilde{K(\sigma)}:=f^{-1}(K(\sigma))$ and the *lifted tiling set* $\widetilde{T^\infty(\sigma)}:=f^{-1}(T^\infty(\sigma))$ (see Figure [\[deltoid_corr_fig\]](#deltoid_corr_fig){reference-type="ref" reference="deltoid_corr_fig"}).
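The algebraic form in Equation [\[corr_del_eqn\]](#corr_del_eqn){reference-type="ref" reference="corr_del_eqn"} comes from a direct simplification, which we sketch here with $u:=\eta(z)=1/\overline{z}$: $$\frac{f(w)-f(u)}{w-u}=1+\frac{1}{2}\cdot\frac{u^2-w^2}{(w-u)\,w^2u^2}=1-\frac{w+u}{2\,w^2u^2},$$ so the correspondence reads $2w^2u^2=w+u$; substituting $u=1/\overline{z}$ and multiplying through by $\overline{z}^2$ gives $2w^2=\overline{z}^2 w+\overline{z}$.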
The fact that the simply connected tiling set $T^\infty(\sigma)$ does not contain any critical value of $f$ implies that $f$ has an order three deck transformation $\tau$ on $\widetilde{T^\infty(\sigma)}$. By Equation [\[corr_del_eqn\]](#corr_del_eqn){reference-type="ref" reference="corr_del_eqn"}, the branches of $\mathfrak{C}$ on the lifted tiling set $\widetilde{T^\infty(\sigma)}$ are given by $\tau\circ\eta$ and $\tau^2\circ\eta$ (i.e., compositions of $\eta$ with local deck transformations of $f$). Thanks to the relations $$\tau=(\tau^{2}\circ\eta)\circ(\tau\circ\eta)^{-1}\quad \mathrm{and}\quad \eta=\tau^{-1}\circ (\tau\circ\eta),$$ the grand orbits of $\mathfrak{C}$ on $\widetilde{T^\infty(\sigma)}$ are generated by $\eta$ and $\tau$.[^6] Moreover, $\eta$ and $\tau$ generate a subgroup of conformal and anti-conformal automorphisms of $\widetilde{T^\infty(\sigma)}$ isomorphic to the abstract modular group $\mathbb{Z}/2\mathbb{Z}\ast\mathbb{Z}/3\mathbb{Z}$ (see [@LLMM3 Proposition B.7, Theorem B.8]).
On the other hand, the forward branch $(f\vert_{\overline{\mathbb{D}^*}})^{-1}\circ f\circ\eta:\widetilde{K(\sigma)}\cap\overline{\mathbb{D}^*}\to\widetilde{K(\sigma)}\cap\overline{\mathbb{D}^*}$ is conformally conjugate to $\sigma:K(\sigma)\to K(\sigma)$ via $f\vert_{\overline{\mathbb{D}^*}}$. Hence, this branch of $\mathfrak{C}$ is conformally conjugate to the anti-polynomial $\overline{z}^2\vert_{\overline{\mathbb{D}^*}}$ (see [@LLMM3 Proposition B.6]).
In light of the above discussion, the antiholomorphic correspondence $\mathfrak{C}$ can be interpreted as a *mating* of $\overline{z}^2$ and $\mathbb{Z}/2\mathbb{Z}\ast\mathbb{Z}/3\mathbb{Z}$.
## Mating Basilica with ideal triangle group {#c_and_c_center_subsec}
We now discuss the dynamics of a specific quadratic Schwarz reflection map generated by two disjoint quadrature domains.
### Schwarz reflection in a cardioid and a circle {#c_and_c_basilica_subsubsec}
Consider the cardioid $\heartsuit:=f(\mathbb{D})$, where $f(w)=w/2-w^2/4$. Since $f$ is univalent on $\mathbb{D}$, the cardioid $\heartsuit$ is a quadrature domain. We note that $\heartsuit$ is the principal hyperbolic component of the Mandelbrot set, and $f$ is the usual *multiplier map* of this hyperbolic component (i.e., the uniformization of $\heartsuit$ by the multiplier of the unique finite attracting fixed point of the corresponding quadratic polynomial).
Here and in the rest of the paper, we will use the notation $B(a,r), \overline{B}(a,r)$ to denote the open, closed (respectively) disks centered at $a\in\mathbb{C}$ with radius $r>0$.
The circle $\{\vert z\vert=3/4\}$ is a circumcircle of the cardioid $\partial\heartsuit$. We define the quadrature multi-domain $$\Omega := \heartsuit\cup\overline{B}(0,3/4)^c,$$ and the piecewise Schwarz reflection map $$F:\overline{\Omega}\to\widehat{\mathbb{C}},\qquad
z \mapsto \left\{\begin{array}{ll}
\sigma(z) & \mbox{if}\ z\in\overline{\heartsuit}, \\
\sigma_0(z) & \mbox{if}\ z\in B(0,3/4)^c,
\end{array}\right.$$ where $\sigma$ is the Schwarz reflection of $\heartsuit$, and the map $\sigma_0$ is the reflection in the circle $\{\vert z\vert=3/4\}$.
Recall from Figure [\[cardioid_disk_fig\]](#cardioid_disk_fig){reference-type="ref" reference="cardioid_disk_fig"} that the only critical point of $\sigma$ is at the origin. Hence, the map $F$ has a unique critical point at $0$. Moreover, by construction, $\{0,\infty\}$ is a superattracting $2$-cycle for $F$.
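For concreteness, this cycle can be traced through the defining formulas (a short verification, with $\eta(w)=1/\overline{w}$ as in Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}): the only critical point of $f$ in $\widehat{\mathbb{C}}\setminus\overline{\mathbb{D}}$ is $\infty$, so the unique critical point of $\sigma=f\circ\eta\circ(f\vert_{\mathbb{D}})^{-1}$ in $\heartsuit$ is $f(0)=0$, and $$\sigma(0)=f(\eta(0))=f(\infty)=\infty,\qquad \sigma_0(z)=\frac{(3/4)^2}{\overline{z}}=\frac{9}{16\,\overline{z}},\quad\text{so}\quad \sigma_0(\infty)=0.$$ Hence $F(0)=\infty$ and $F(\infty)=0$, and the $2$-cycle $\{0,\infty\}$ passes through the critical point $0$, i.e., it is superattracting.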
### Triangle group structure {#c_and_c_reflection_group_subsubsection}
As the fundamental tile $$T^0(F)=\widehat{\mathbb{C}}\setminus\left(\Omega\cup\{1/4,-3/4\}\right)$$ is a triangle (see Figure [\[basilica_schwarz_fig\]](#basilica_schwarz_fig){reference-type="ref" reference="basilica_schwarz_fig"}), and the unique critical point of $F$ does not escape to the tiling set $T^\infty(F)$, it is not hard to see that the action of $F$ on its tiling set is conformally conjugate to $\pmb{\mathcal{N}}_2$. More precisely, there exists a conformal isomorphism $\psi:\mathbb{D}\to T^\infty(F)$ that conjugates $\pmb{\mathcal{N}}_2:\mathbb{D}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\mathbb{D}$ to $F:T^\infty(F)\setminus\mathop{\mathrm{int}}{T^0(F)}\to T^\infty(F)$ (see [@LLMM1 Proposition 5.38]).
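The two singular boundary points $1/4$ and $-3/4$ appearing above can be located by a short computation with $f(w)=w/2-w^2/4$ (recorded only as a sketch): $$f(1)=\tfrac{1}{2}-\tfrac{1}{4}=\tfrac{1}{4},\qquad \bigl\vert f(e^{i\theta})\bigr\vert^2=\Bigl\vert \tfrac{1}{2}-\tfrac{e^{i\theta}}{4}\Bigr\vert^2=\tfrac{5}{16}-\tfrac{\cos\theta}{4}\leq\tfrac{9}{16},$$ with equality only at $\theta=\pi$. Thus $\partial\heartsuit$ has its cusp at the critical value $f(1)=1/4$ and meets the circumcircle $\{\vert z\vert=3/4\}$ only at $f(-1)=-3/4$, the tangency point of the two boundary curves.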
### A degenerate anti-quadratic-like structure {#c_and_c_basilica_pinched_anti_quad_subsubsec}
By Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}, the map $F:F^{-1}(\Omega)\to\Omega$ is a $2:1$ branched cover branched only at the origin. As depicted in Figure [\[basilica_schwarz_fig\]](#basilica_schwarz_fig){reference-type="ref" reference="basilica_schwarz_fig"}, $F^{-1}(\Omega)\subset \Omega$, and $\partial\Omega$ intersects $\partial F^{-1}(\Omega)$ at two points. Hence, $F:F^{-1}(\Omega)\to\Omega$ can be thought of as a degenerate anti-quadratic-like map.
Since $\{0,\infty\}$ is a superattracting $2$-cycle for $F$, it is natural to expect that the non-escaping set dynamics of $F$ is topologically conjugate to the filled Julia set dynamics of the *Basilica* anti-polynomial $p(z):=\overline{z}^2-1$ (which is the unique quadratic anti-polynomial with a superattracting $2$-cycle) such that the conjugacy is conformal on the interior (see Figure [22](#basilica_anti_poly_fig){reference-type="ref" reference="basilica_anti_poly_fig"}). However, since $\partial\Omega\cap\partial F^{-1}(\Omega)$ consists of two neutral fixed points and the attracting directions at these neutral fixed points lie in the tiling set $T^\infty(F)$, there cannot be a hybrid conjugacy between the non-escaping dynamics of $F$ and the filled Julia set dynamics of $p$ (this is analogous to the deltoid case, see Remark [Remark 38](#deltoid_no_hybrid_rem){reference-type="ref" reference="deltoid_no_hybrid_rem"}).
![Left: Part of the dynamical plane of $F$. Right: The filled Julia set of the Basilica anti-polynomial $\overline{z}^2-1$.](schwarz_basilica.png "fig:"){#basilica_anti_poly_fig width="0.36\\linewidth"}![Left: Part of the dynamical plane of $F$. Right: The filled Julia set of the Basilica anti-polynomial $\overline{z}^2-1$.](basilica_poly.png "fig:"){#basilica_anti_poly_fig width="0.5\\linewidth"}
One is thus forced to take a more combinatorial/topological route to prove the existence of such a conjugacy. The strategy of studying the non-escaping set of $F$ from outside (i.e., from the tiling set) turns out to be fruitful. Since $F$ is postcritically finite, one can adapt standard arguments from polynomial dynamics to show that $\Lambda(F):=\partial T^\infty(F)$ is locally connected [@LLMM1 Proposition 6.4]. Hence, the conformal conjugacy $\psi$ between $\pmb{\mathcal{N}}_2$ and $F$ (see Subsection [4.2.2](#c_and_c_reflection_group_subsubsection){reference-type="ref" reference="c_and_c_reflection_group_subsubsection"}) extends continuously to yield a semi-conjugacy $\psi:\mathbb{S}^1\to\Lambda(F)$ between $\pmb{\mathcal{N}}_2$ and $F$. Thus, $\Lambda(F)$ can be topologically modeled as the quotient of $\mathbb{S}^1$ by an $\pmb{\mathcal{N}}_2$-invariant equivalence relation, which we call the *lamination* associated with $F$ [@LLMM1 §5.3.5].
**Definition 41**. The *push-forward* $\varphi_{\ast}(\lambda)$ of an equivalence relation $\lambda$ under a circle homeomorphism $\varphi$ is defined as the image of $\lambda\subset\mathbb{S}^1\times\mathbb{S}^1$ under $\varphi\times\varphi$. Clearly, $\varphi_{\ast}(\lambda)$ is an equivalence relation on $\mathbb{S}^1$.
Applying combinatorial tools from the study of polynomial Julia sets, one can give a complete description of the lamination of $F$ and conclude that the push-forward of this lamination under the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ is precisely the combinatorial lamination of $p$ (cf. Definition [Definition 16](#rat_lami_def){reference-type="ref" reference="rat_lami_def"} and Remark [Remark 18](#lami_rmk){reference-type="ref" reference="lami_rmk"}). This provides a topological conjugacy between $F\vert_{\Lambda(F)}$ and $p\vert_{\mathcal{J}(p)}$. Finally, a detailed study of the Fatou components of $F$ (i.e., the components of the interior of $K(F)$) allows one to conformally extend this topological conjugacy between limit and Julia sets to a conjugacy between $F\vert_{K(F)}$ and $p\vert_{\mathcal{K}(p)}$ (see [@LLMM1 Proposition 5.30, Corollary 5.33] and [@LLMM2 Proposition 11.1]).
In light of the above statements, $F$ can be regarded as the conformal mating of $\overline{z}^2-1$ and $\pmb{\mathcal{N}}_2$ in a precise sense (see [@LLMM1 §7]).
### Uniqueness of the conformal mating of $\overline{z}^2-1$ and $\pmb{\mathcal{N}}_2$ {#c_and_c_basilica_unique_mating_subsubsec}
As the invariant external dynamical rays of $\overline{z}^2-1$ at the angles $1/3$ and $2/3$ land at the same point, it follows (from the definition of topological mating of $\overline{z}^2-1$ and $\pmb{\mathcal{N}}_2$) that two ideal vertices of $\mathbb{D}\cap\Pi(\pmb{G}_2)$ are identified in the topological mating. Hence, the domain of definition of the topological mating is the closure of the union of two Jordan domains touching at a single point. Furthermore, as the Nielsen map $\pmb{\mathcal{N}}_2$ fixes the boundary of its domain of definition pointwise, one concludes that any conformal mating of $\overline{z}^2-1$ and $\pmb{\mathcal{N}}_2$ is a piecewise Schwarz reflection map associated with two disjoint simply connected quadrature domains touching at a single point. Using mapping degrees of $\overline{z}^2-1$ and $\pmb{\mathcal{N}}_2$, one can now argue in light of Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"} that one of these quadrature domains is uniformized by a degree one rational map, while the other is uniformized by a degree two rational map. Finally, the critical orbit relation of $\overline{z}^2-1$ can be exploited to deduce that this piecewise Schwarz reflection map is given by $F$, up to Möbius conjugacy (see [@LLMM2 Appendix A]). Thus, $F$ is the unique conformal mating between the maps $\overline{z}^2-1$ and $\pmb{\mathcal{N}}_2$.
**Theorem 42**. *[@LLMM1 §7], [@LLMM2 Appendix A][\[c_and_c\_basilica_dynamics_thm\]]{#c_and_c_basilica_dynamics_thm label="c_and_c_basilica_dynamics_thm"} The map $F\vert_{K(F)}$ is topologically conjugate to $p\vert_{\mathcal{K}(p)}$ such that the conjugacy is conformal on the interior, where $p(z)=\overline{z}^2-1$. On the other hand, the map $F:\overline{T^\infty(F)}\setminus\mathop{\mathrm{int}}{T^0(F)}\to \overline{T^\infty(F)}$ is topologically semi-conjugate to $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$ such that the semi-conjugacy restricts to a conformal conjugacy on $T^\infty(F)$. Moreover, up to Möbius conjugacy, $F$ is the unique conformal mating of $\pmb{\mathcal{N}}_2$ and $p$.*
To conclude this subsection, let us mention that as in the deltoid case, the existence and uniqueness of the above conformal mating can also be proved using David surgery (see Section [9](#mating_anti_poly_nielsen_sec){reference-type="ref" reference="mating_anti_poly_nielsen_sec"}).
## An antiholomorphic correspondence from cubic Chebyshev polynomial {#chebyshev_subsec}
Consider the cubic Chebyshev polynomial $f(w)=w^3-3w$. By [@LLMM3 Proposition 3.2], $f$ is injective on the closed disk $\overline{B}(3,2)=\{w\in\mathbb{C}:\vert w-3\vert\leq 2\}$. The image of the open disk, $\Omega:= f(B(3,2))$, is a simply connected quadrature domain. As $f$ has a critical point at $1$, the boundary $\partial\Omega$ has a cusp at $f(1)=-2$ and is non-singular otherwise. Thus, $T^0(\sigma)=\widehat{\mathbb{C}}\setminus\left(\Omega\cup\{-2\}\right)$.
We denote the reflection in the circle $\partial B(3,2)$ by $\widehat{\eta}$. Specifically, $\widehat{\eta}(w)=3+\frac{4}{\overline{w}-3}$. As the only critical points of $f$ outside $\overline{B}(3,2)$ are at $-1$ and $\infty$, the associated Schwarz reflection map $\sigma=f\circ\widehat{\eta}\circ(f\vert_{B(3,2)})^{-1}$ has two critical points: a simple critical point at $f(\widehat{\eta}(-1))=f(2)=2$ and a double critical point at $f(\widehat{\eta}(\infty))=f(3)$. Moreover, $\sigma$ fixes the critical point at $2$ and sends the double critical point at $f(3)$ to $\infty$.
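These critical and fixed point locations can be verified directly from the formulas for $f$ and $\widehat{\eta}$ (a routine check, recorded for convenience): $$f'(w)=3w^2-3=0\iff w=\pm 1,\qquad \widehat{\eta}(-1)=3+\frac{4}{-1-3}=2,\qquad f(2)=8-6=2,$$ so the simple critical point of $\sigma$ sits at $2$. Since $f(2)=2$ with $2\in B(3,2)$, we get $\sigma(2)=f(\widehat{\eta}(2))=f(-1)=2$, a fixed (hence superattracting) critical point. Likewise $\widehat{\eta}(\infty)=3$, $f(3)=18$, and $\sigma(f(3))=f(\widehat{\eta}(3))=f(\infty)=\infty$.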
### The external map of $\sigma$ {#nielsen_first_return_external_map_subsubsec}
Since $\sigma$ sends the double critical point at $f(3)$ to $\infty\in T^0(\sigma)$, it follows that the rank one tile of $T^\infty(\sigma)$ contains a critical point. However, since the other critical point $2$ of $\sigma$ is fixed, it does not lie in the tiling set, and hence $\sigma$ acts like reflection in $\partial\Omega\setminus\{-2\}$ on tiles of higher ranks. It turns out that a conformal model for the tiling set dynamics of $\sigma$ (also called the external map of $\sigma$) arises from a discrete subgroup of $\mathrm{Aut}^\pm(\mathbb{D})$ generated by a circular reflection and a torsion element.
Specifically, consider the group $\mathbbm{G}_2(\geqslant\pmb{G}_2)$ generated by $M_\omega(z):=\omega z$ (where $\omega=e^{\frac{2\pi i}{3}}$) and the reflection $\rho$ in the hyperbolic geodesic $\pmb{C}_1$ of $\mathbb{D}$ connecting $1$ and $\omega$ (see Definition [Definition 8](#regular_ideal_polygon_ref_group_def){reference-type="ref" reference="regular_ideal_polygon_ref_group_def"}). A fundamental domain $\Pi(\mathbbm{G}_2)$ for the $\mathbbm{G}_2$-action on $\mathbb{D}$ is given by one-third of the fundamental domain $\Pi^{\mathbb{D}}(\pmb{G}_2):=\Pi(\pmb{G}_2)\cap\mathbb{D}$ for the $\pmb{G}_2$-action on $\mathbb{D}$ (see Figure [\[itg_nielsen_first_return_fig\]](#itg_nielsen_first_return_fig){reference-type="ref" reference="itg_nielsen_first_return_fig"}). Note that the Riemann surface $\mathcal{Q}:=\faktor{\mathbb{D}}{\langle M_\omega\rangle}$ is biholomorphic to $\mathbb{D}$, and $\mathcal{Q}_1:=\faktor{\Pi^{\mathbb{D}}(\pmb{G}_2)}{\langle M_\omega\rangle}$ is a simply connected region embedded in $\mathcal{Q}$. We will use the region in $\mathbb{D}$ (respectively, in $\Pi^{\mathbb{D}}\left(\pmb{G}_2\right)$) bounded by the radial lines at angles $0$ and $2\pi/3$ as coordinates on $\mathcal{Q}$ (respectively, on $\mathcal{Q}_1$). We define the map $$\pmb{\mathcal{F}}_2:\mathcal{Q}\setminus\mathop{\mathrm{int}}{\mathcal{Q}_1}\longrightarrow\mathcal{Q}$$ as the map $\rho$ post-composed with the quotient map from $\mathbb{D}$ to $\mathcal{Q}$. Note that the surface $\mathcal{Q}$ has a natural tessellation structure with $\mathcal{Q}_1$ as the rank $0$ tile and components of $\pmb{\mathcal{F}}_2^{-n}(\mathcal{Q}_1)$ as tiles of rank $n$. It is worth pointing out that there is a unique rank one tile for this tessellation given by $\rho\left(\Pi^{\mathbb{D}}(\pmb{G}_2)\right)$ (see Figure [\[itg_nielsen_first_return_fig\]](#itg_nielsen_first_return_fig){reference-type="ref" reference="itg_nielsen_first_return_fig"}). The salient features of the map $\pmb{\mathcal{F}}_2$ are that $$\pmb{\mathcal{F}}_2:\left(\pmb{\mathcal{F}}_2^{-1}(\mathcal{Q}_1),\rho(0)\right) \longrightarrow \left(\mathcal{Q}_1,0\right)$$ is a $3:1$ branched cover between two pointed disks with a double critical point at $\rho(0)$, while $\pmb{\mathcal{F}}_2$ is injective on tiles of higher ranks.
According to [@LLMM3 Proposition 4.15], there exists a conformal isomorphism $$\psi:\left(\mathcal{Q},0\right)\longrightarrow\left(T^\infty(\sigma),\infty\right)$$ that conjugates $\pmb{\mathcal{F}}_2:\mathcal{Q}\setminus\mathop{\mathrm{int}}{\mathcal{Q}_1}\to\mathcal{Q}$ to $\sigma:T^\infty(\sigma)\setminus\mathop{\mathrm{int}}{T^0(\sigma)}\to T^\infty(\sigma)$. The map $\psi$ can be constructed by lifting the conformal isomorphism $(\mathcal{Q}_1,0)\to (T^0(\sigma),\infty)$ (whose homeomorphic boundary extension sends $1\in\partial\mathcal{Q}_1$ to the cusp $-2\in\partial\Omega$) by iterates of $\pmb{\mathcal{F}}_2$ and $\sigma$. Such a lifting procedure can be performed since
i\) the maps $\pmb{\mathcal{F}}_2:\pmb{\mathcal{F}}_2^{-1}(\mathcal{Q}_1) \longrightarrow \mathcal{Q}_1$ and $\sigma:\sigma^{-1}(T^0(\sigma))\to T^0(\sigma)$ are $3:1$ branched coverings having a unique (double) critical point with associated critical value at $0, \infty$, respectively (equivalently, they are doubly ramified over $0, \infty$, respectively, and are unramified otherwise),
ii\) the maps $\pmb{\mathcal{F}}_2, \sigma$ have no other critical point in $\mathcal{Q}, T^\infty(\sigma)$, respectively, and
iii\) the maps $\pmb{\mathcal{F}}_2, \sigma$ act as the identity map on $\partial\mathcal{Q}_1, \partial T^0(\sigma)$.
In other words, the map $\pmb{\mathcal{F}}_2$ is the external map for $\sigma$. We refer the reader to [@LLMM3 §4.4] for details of this construction and more properties of the map $\pmb{\mathcal{F}}_2$[^7].
### Hybrid conjugacy between $\sigma$ and a quadratic parabolic anti-rational map {#chebyshev_center_hybrid_conj_subsubsec}
By Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"} and the discussion of the critical points of $\sigma$, the map $\sigma:\sigma^{-1}(\Omega)\to\Omega$ is a $2:1$ branched covering with a superattracting fixed point at $2$. Moreover, $\sigma^{-1}(\Omega)\subset\Omega$ and $\partial\sigma^{-1}(\Omega)\cap\partial\Omega=\{-2\}$ (note that $\partial\Omega$ has a unique singular point, see Figure [\[chebyshev_center_fig\]](#chebyshev_center_fig){reference-type="ref" reference="chebyshev_center_fig"}). Thus, $\sigma:\sigma^{-1}(\Omega)\to\Omega$ exhibits a degenerate anti-quadratic-like structure with a superattracting fixed point. In some sense, the situation is analogous to the deltoid reflection map, and it is not hard to employ similar techniques to prove that the non-escaping dynamics of $\sigma$ is topologically conjugate to $\overline{z}^2\vert_{\overline{\mathbb{D}}}$ with the conjugacy being conformal on the interior (the non-escaping set of $\sigma$ is shown in Figure [\[chebyshev_center_fig\]](#chebyshev_center_fig){reference-type="ref" reference="chebyshev_center_fig"}). Moreover, for reasons similar to the ones mentioned in Remark [Remark 38](#deltoid_no_hybrid_rem){reference-type="ref" reference="deltoid_no_hybrid_rem"}, there is no hybrid conjugacy between this degenerate anti-quadratic-like map and $\overline{z}^2$.
However, there is a key difference between the current setting and the examples described in the previous two subsections. Here, $\partial\sigma^{-1}(\Omega)\cap\partial\Omega$ is a singleton, while in both the deltoid and Circle-and-Cardioid examples the domain of the degenerate anti-quadratic-like map touches the range at more than one point. Equivalently, the external map $\pmb{\mathcal{F}}_2$ of $\sigma$ has a *unique* parabolic fixed point, while the external map $\pmb{\mathcal{N}}_2$ in the previous two examples has three parabolic fixed points. It is natural to expect that $\pmb{\mathcal{F}}_2\vert_{\partial\mathcal{Q}}$ is quasisymmetrically conjugate to the action of a Blaschke product with a parabolic fixed point on $\mathbb{S}^1$. On the other hand, there is no such candidate Blaschke product for the external map $\pmb{\mathcal{N}}_2$ as a Blaschke product cannot have multiple parabolic points on the circle.
The above observation leads one to the parabolic Tricorn $\pmb{\mathcal{B}}_2$ consisting of quadratic anti-rational maps with a completely invariant parabolic basin (see Subsection [3.2.3](#para_anti_rat_gen_subsubsec){reference-type="ref" reference="para_anti_rat_gen_subsubsec"}). In order to straighten the non-escaping dynamics of $\sigma$ to parabolic anti-rational maps, a class of maps called *pinched anti-quadratic-like maps* was introduced in [@LLMM3 §5] (see Figure [\[anti_quad_like_fig\]](#anti_quad_like_fig){reference-type="ref" reference="anti_quad_like_fig"}) and it was shown that pinched anti-quadratic-like maps are hybrid conjugate to maps in the parabolic Tricorn (cf. [@Lom15]). This is done via a quasiconformal surgery that glues an attracting petal of the parabolic anti-Blaschke product $B_2$ outside a pinched anti-quadratic-like map (or equivalently, replaces the external dynamics of a pinched anti-quadratic-like map with the map $B_2$). However, the existence of the pinching point makes this straightening theorem subtler than the classical straightening for polynomial-like maps. In particular, since the fundamental domain of a pinched anti-polynomial-like map is a pinched annulus, one needs to perform quasiconformal interpolation in an infinite strip. This requires one to control the asymptotics of conformal maps between infinite strips, which can be achieved by a result of Warschawski (cf. [@War42]).
As a consequence of the aforementioned straightening theorem, it can be concluded that the non-escaping dynamics of $\sigma$ is hybrid conjugate to the dynamics of the quadratic parabolic anti-rational map $z\mapsto\overline{z}+1/\overline{z}+1$ on its filled Julia set (which has a superattracting fixed point).
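As a quick consistency check (a routine computation, not part of the cited argument), write $R(z)=g(\overline{z})$ with $g(u)=u+\frac{1}{u}+1$: $$g'(u)=1-\frac{1}{u^2}=0\iff u=\pm 1,\qquad R(-1)=-1,\qquad R^{\circ 2}(z)=z+2+O(1/z)\ \ \text{as}\ z\to\infty,$$ so $R$ has a superattracting fixed point at the critical point $-1$ and a parabolic fixed point at $\infty$, in agreement with its membership in the parabolic Tricorn.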
**Theorem 43**. *[@LLMM3 Proposition 4.15, Theorem 5.4][\[chebyshev_center_dynamics_thm\]]{#chebyshev_center_dynamics_thm label="chebyshev_center_dynamics_thm"}*
1. *The map $\sigma\vert_{K(\sigma)}$ is hybrid conjugate to $R\vert_{\mathcal{K}(R)}$, where $R(z)=\overline{z}+1/\overline{z}+1$.*
2. *The maps $\sigma:T^\infty(\sigma)\setminus\mathop{\mathrm{int}}{T^0(\sigma)}\to T^\infty(\sigma)$ and $\pmb{\mathcal{F}}_2:\mathcal{Q}\setminus\mathop{\mathrm{int}}{\mathcal{Q}_1}\to\mathcal{Q}$ are conformally conjugate.*
### The associated antiholomorphic correspondence {#chebyshev_center_corr_subsubsec}
As in the deltoid setting (see Subsection [4.1.6](#deltoid_corr_subsubsec){reference-type="ref" reference="deltoid_corr_subsubsec"}), the polynomial $f(w)=w^3-3w$ and the reflection map $\widehat{\eta}$ define a $2$:$2$ correspondence $\mathfrak{C}\subset\widehat{\mathbb{C}}\times\widehat{\mathbb{C}}$ by the formula: $$(z,w)\in\mathfrak{C}\iff \frac{f(w)-f(\widehat{\eta}(z))}{w-\widehat{\eta}(z)}=0,
\label{cheby_center_corr_eqn}$$ and the dynamical plane of $\mathfrak{C}$ admits an invariant partition into the *lifted non-escaping set* $\widetilde{K(\sigma)}:=f^{-1}(K(\sigma))$ and the *lifted tiling set* $\widetilde{T^\infty(\sigma)}:=f^{-1}(T^\infty(\sigma))$ (see Figure [\[chebyshev_center_fig\]](#chebyshev_center_fig){reference-type="ref" reference="chebyshev_center_fig"} and [@LLMM3 §10]).
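Since $f$ is a cubic polynomial, the defining equation of $\mathfrak{C}$ simplifies to an explicit algebraic form (recorded only as a sketch): $$\frac{f(w)-f(\widehat{\eta}(z))}{w-\widehat{\eta}(z)}=w^2+w\,\widehat{\eta}(z)+\widehat{\eta}(z)^2-3,$$ so $(z,w)\in\mathfrak{C}$ precisely when $w^2+w\,\widehat{\eta}(z)+\widehat{\eta}(z)^2=3$ with $\widehat{\eta}(z)=3+\frac{4}{\overline{z}-3}$; in particular, each $z$ has (generically) two images $w$, and the local branches $z\mapsto w$ are antiholomorphic.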
The forward branch $(f\vert_{\overline{B}(3,2)})^{-1}\circ f\circ\widehat{\eta}:\widetilde{K(\sigma)}\cap\overline{B}(3,2)\to\widetilde{K(\sigma)}\cap\overline{B}(3,2)$ is conformally conjugate to $\sigma:K(\sigma)\longrightarrow K(\sigma)$ via $f\vert_{\overline{B}(3,2)}$. By Theorem [\[chebyshev_center_dynamics_thm\]](#chebyshev_center_dynamics_thm){reference-type="ref" reference="chebyshev_center_dynamics_thm"}, this branch of $\mathfrak{C}$ is hybrid conjugate to $R\vert_{\mathcal{K}(R)}$, where $R(z)=\overline{z}+1/\overline{z}+1$ (see [@LLMM3 Proposition 10.6]).
Although the Schwarz reflection $\sigma$ has a critical point in its tiling set, the dynamics of the correspondence $\mathfrak{C}$ on its lifted tiling set has a group structure. This is due to the following reason. The point at $\infty$ is a fully ramified critical point for $f$ and hence $f: \widetilde{T^\infty(\sigma)}\setminus\{\infty\}\longrightarrow T^\infty(\sigma)\setminus\{\infty\}$ is a degree $3$ covering map between two topological annuli. Thus, it is a Galois covering with deck transformation group isomorphic to $\mathbb{Z}/3\mathbb{Z}$. We choose a generator $\widehat{\tau}$ for this deck group, and observe that the branches of $\mathfrak{C}$ on $\widetilde{T^\infty(\sigma)}$ are given by $\widehat{\tau}\circ\widehat{\eta}$ and $\widehat{\tau}^{\circ 2}\circ\widehat{\eta}$. It can be verified using a *ping-pong* argument that the grand orbits of $\mathfrak{C}$ on the lifted tiling set are generated by $\widehat{\eta}$ and $\widehat{\tau}$, and the group generated by these two maps is the free product of $\langle\widehat{\eta}\rangle$ and $\langle\widehat{\tau}\rangle$ (see [@LLMM3 Proposition 10.5]).
In fact, according to [@LMM3 Proposition 2.17, Remark 3.11], there exists a uniformizing map $\widetilde{\psi}:\mathbb{D}\longrightarrow\widetilde{T^\infty(\sigma)}$ that conjugates the group $\mathbbm{G}_2\vert_{\mathbb{D}}$ to the group $\langle\widehat{\eta}\rangle\ast\langle\widehat{\tau}\rangle$. This can be seen as follows. One can lift the conformal map $\psi: \left(\mathcal{Q},0\right) \longrightarrow \left(T^\infty(\sigma),\infty\right)$ via the two branched coverings appearing in the vertical arrows of the commutative diagram below to construct a conformal isomorphism $\widetilde{\psi}:\left(\mathbb{D},0\right) \longrightarrow \left(\widetilde{T^\infty(\sigma)},\infty\right)$. Since $\psi$ conjugates $\pmb{\mathcal{F}}_2$ to $\sigma$, its lift $\widetilde{\psi}$ can be chosen so that it conjugates the circular reflection $\rho$ in $\pmb{C}_1$ to $\widehat{\eta}$. On the other hand, since $M_\omega$ is a generator of the deck transformation group for the projection map $\mathbb{D}\rightarrow\mathcal{Q}$, the lifted map $\widetilde{\psi}$ conjugates $M_\omega$ to the deck transformation $\widehat{\tau}$ (after possibly replacing $\widehat{\tau}$ with some iterate). Hence, $\widetilde{\psi}$ conjugates $\mathbbm{G}_2=\langle\rho\rangle\ast\langle M_\omega\rangle$ to $\langle\widehat{\eta}\rangle\ast\langle\widehat{\tau}\rangle$. $$\begin{tikzcd}
\left(\mathbb{D},0\right) \arrow{d}{\mathrm{proj}} \arrow{r}{\widetilde{\psi}} & \left(\widetilde{T^\infty(\sigma)},\infty\right) \arrow{d}{f} \\
\left(\mathcal{Q},0\right) \arrow{r}{\psi} & \left(T^\infty(\sigma),\infty\right)
\end{tikzcd}$$
**Theorem 44**. *[@LLMM3 Theorem 10.7] [@LMM3 Proposition 2.17][\[chebyshev_center_corr_thm\]]{#chebyshev_center_corr_thm label="chebyshev_center_corr_thm"}*
1. *Each of the sets $\widetilde{T^\infty(\sigma)}$ and $\widetilde{K(\sigma)}$ is completely invariant under the correspondence $\mathfrak{C}$.*
2. *On $\widetilde{T^\infty(\sigma)}$, the grand orbits of the correspondence $\mathfrak{C}$ are generated by $\widehat{\eta}$ and $\widehat{\tau}$. Moreover, the group $\langle\widehat{\eta},\widehat{\tau}\rangle=\langle\widehat{\eta}\rangle\ast\langle\widehat{\tau}\rangle$ is conformally conjugate to $\mathbbm{G}_2\cong\mathbb{Z}/2\mathbb{Z}\ast\mathbb{Z}/3\mathbb{Z}$.*
3. *On $\widetilde{K(\sigma)}\cap\overline{B}(3,2)$, one branch of the forward correspondence is hybrid conjugate to $R\vert_{\mathcal{K}(R)}$, where $R(z)=\overline{z}+1/\overline{z}+1$. The other branch maps $\widetilde{K(\sigma)}\cap\overline{B}(3,2)$ onto $\widetilde{K(\sigma)}\setminus B(3,2)$. On the other hand, the forward correspondence preserves $\widetilde{K(\sigma)}\setminus B(3,2)$.*
*The backward branches of the correspondence are conjugate to the forward branches via $\widehat{\eta}$.*
# Cubic examples: Talbot curve, Deltoid-and-Circle, and Apollonian gasket {#cubic_examples_sec}
Up to Möbius conjugacy, there is a unique kissing reflection group generated by circle packings consisting of three circles; namely, the ideal triangle reflection group. Circle packings consisting of four circles give rise to a more interesting collection of kissing reflection groups. This collection contains the *maximal cusp* necklace group of Figure [17](#necklace_fig){reference-type="ref" reference="necklace_fig"} (left) and the classical Apollonian gasket reflection group shown in Figure [12](#kissing_nielsen_fig){reference-type="ref" reference="kissing_nielsen_fig"} (top right).
In this section, we will describe the dynamics of two explicit examples of piecewise Schwarz reflection maps $\sigma$ such that $\sigma:\sigma^{-1}(\Omega)\to\Omega$ has degree three (in the sense of Subsection [3.3.2](#piecewise_schwarz_subsubsec){reference-type="ref" reference="piecewise_schwarz_subsubsec"}). Just like the ideal triangle reflection group shows up in the dynamical study of quadratic Schwarz reflection maps, the two kissing reflection groups mentioned above will naturally appear in the examples of this section.
## Schwarz reflection in a Talbot curve {#talbot_subsec}
Among all simply connected unbounded quadrature domains uniformized by rational maps of global degree $d+1$, the simplest ones have a unique node at $\infty$ (equivalently, their Schwarz reflections have a unique pole at $\infty$, see Subsection [3.3.4](#analysis_connect_subsubsec){reference-type="ref" reference="analysis_connect_subsubsec"}). It is easy to see in light of Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"} that such a quadrature domain admits a uniformization $f:\mathbb{D}^*\to\Omega$, where $f(z)=z+\frac{a_1}{z}+\cdots+\frac{a_d}{z^d}$, after possibly replacing the quadrature domain by an affine image of it ([@LMM1 Proposition 2.13]). A simply connected unbounded quadrature domain is called *extremal* if it has a unique node at $\infty$ and the boundary $\partial\Omega$ has $d+1$ cusps and $d-2$ double points. The term 'extremal' is justified by the fact that these are the maximal possible numbers for a given degree [@LM1 Lemma 2.4]. The deltoid is the unique example of such an unbounded quadrature domain for $d=2$.
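For $d=2$ these counts can be seen concretely from the deltoid uniformization of Subsection [4.1](#deltoid_subsec){reference-type="ref" reference="deltoid_subsec"} (a quick check): $$f_1(w)=w+\frac{1}{2w^2},\qquad f_1'(w)=1-\frac{1}{w^3}=0\iff w^3=1,$$ so, apart from the double pole at the origin, all critical points of $f_1$ lie on $\mathbb{S}^1$ at the third roots of unity; their images are the $d+1=3$ cusps of the deltoid, and there are $d-2=0$ double points.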
### Extremal unbounded quadrature domain in degree three, and associated Schwarz reflection {#talbot_dynamics_subsubsec}
When $d=3$, there is a unique extremal quadrature domain $\Omega_{\textrm{ext}}$, up to the action of $\mathrm{Aut}(\mathbb{C})$, where $\Omega_{\textrm{ext}}$ is the univalent image of $\mathbb{D}^*$ under $f_{\textrm{ext}}(z)=z+\frac{2}{3z}-\frac{1}{3z^3}$ (cf. [@LMM1 Table 1, Theorem 5.1]). In Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"} (top left), $\Omega_{\textrm{ext}}$ is the complement of the brown region whose boundary is a so-called *Talbot curve* (cf. [@Loc61 p. 157]).
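The cusp count $d+1=4$ for the Talbot curve can be checked from the explicit formula (an elementary computation, included only for illustration): $$f_{\textrm{ext}}'(z)=1-\frac{2}{3z^2}+\frac{1}{z^4}=0\iff z^4-\tfrac{2}{3}z^2+1=0\iff z^2=\frac{1\pm 2\sqrt{2}\,i}{3},$$ and since $\bigl\vert\tfrac{1\pm 2\sqrt{2}\,i}{3}\bigr\vert=1$, the four zeros of $f_{\textrm{ext}}'$ lie on the unit circle; their images under $f_{\textrm{ext}}$ account for the $d+1=4$ cusps of $\partial\Omega_{\textrm{ext}}$.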
As $f_{\textrm{ext}}$ has a triple pole at the origin, the associated Schwarz reflection map $\sigma_{\textrm{ext}}$ has a superattracting fixed point at $\infty$ of local degree three. The basin of attraction of the superattracting fixed point $\infty$ is the simply connected domain given by the exterior of the limit set $\Lambda(\sigma_{\textrm{ext}})$, which is the glowing blue curve in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"} (top left). Thus, the action of $\sigma_{\textrm{ext}}$ on its basin of infinity is conformally conjugate to the action of $\overline{z}^3$ on the disk.
The above structure allows one to study $\Lambda(\sigma_{\textrm{ext}})$ from outside using external dynamical rays (as in the case of polynomials). More precisely, one can give a topological model of the limit set as a quotient $\faktor{\mathbb{S}^1}{\sim}$, where $\sim$ is the $m_{-3}$-invariant equivalence relation generated by the angles of the two $2$-periodic rays (under $m_{-3}$) landing at the unique double point of $\partial\Omega_{\textrm{ext}}$ (see [@LMM2 Proposition 48]).
This description allows one to show that the limit set of $\sigma_{\mathrm{ext}}$ is homeomorphic to the limit set of the *Julia necklace group*[^8] $G$ displayed in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"} (top right), and this homeomorphism conjugates the Schwarz reflection $\sigma_{\mathrm{ext}}$ to the Nielsen map $\mathcal{N}_G$ (see [@LMM2 Proposition 56, Remark 57]). Roughly speaking, this can be seen from the following facts:
$\bullet$ the limit set of $G$ is homeomorphic to $\faktor{\mathbb{S}^1}{\lambda}$, where $\lambda$ is the $\pmb{\mathcal{N}}_3$-invariant equivalence relation generated by the angles of the two $2$-periodic rays (under $\pmb{\mathcal{N}}_3$) landing at the unique accidental parabolic of $G$ lying on $\partial\Pi(G)$ (cf. Subsection [3.1.7](#necklace_subsubsec){reference-type="ref" reference="necklace_subsubsec"}), and
$\bullet$ the equivalence relation $\sim$ (which gives a lamination model of $\Lambda(\sigma_{\textrm{ext}})$) is the push-forward of $\lambda$ under the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_3$, that conjugates $\pmb{\mathcal{N}}_3$ to $m_{-3}$ (see Definition [Definition 41](#push_lami_def){reference-type="ref" reference="push_lami_def"}).
In fact, the above homeomorphism between the limit sets of $\sigma_{\mathrm{ext}}$ and $G$ can be conformally extended to the tiling set producing a conjugacy between the dynamics of $\sigma_{\textrm{ext}}$ on its tiling set closure and the dynamics of $\mathcal{N}_G$ on its filled limit set [@LMM2 Theorem A, §4.6]. In this sense, $\sigma_{\textrm{ext}}$ is a mating of $\overline{z}^3$ with the Nielsen map of the necklace reflection group $G$.
### Talbot reflection as a limit of pinching deformation {#talbot_pinching_subsubsec}
The group $G$ can be constructed as the limit of a quasiconformal deformation of the regular ideal quadrilateral reflection group $\pmb{G}_3$. Such a quasiconformal deformation pinches a pair of opposite sides of the fundamental domain $\Pi(\pmb{G}_3)\cap\mathbb{D}$ so that the corresponding circles of the packing touch in the limit. Thus, it is natural to ask whether the Schwarz reflection map $\sigma_{\textrm{ext}}$ (equivalently, the quadrature domain $\Omega_{\textrm{ext}}$) can be obtained as the limit of a quasiconformal deformation of a base Schwarz reflection map $\sigma_0$ such that
$\bullet$ the droplet associated with $\sigma_0$ is a quadrilateral (akin to $\Pi(\pmb{G}_3)\cap\mathbb{D}$), and
$\bullet$ the quasiconformal deformations pinch a pair of opposite sides of the droplet.
An affirmative answer to this question was given in [@LMM1]. Specifically, one can take the quadrature domain $\Omega_0:=f_0(\mathbb{D}^*)$, where $f_0(z)=z-\frac{1}{3z^3}$, as a starting point of the desired pinching deformation. The boundary of the quadrature domain $\Omega_0$ is a classical *astroid curve*. The corresponding Schwarz reflection $\sigma_0$ behaves much like the deltoid reflection map, and techniques mentioned in Subsections [4.1.2](#deltoid_limit_subsubsec){reference-type="ref" reference="deltoid_limit_subsubsec"}, [4.1.3](#deltoid_unique_mating_subsubsec){reference-type="ref" reference="deltoid_unique_mating_subsubsec"} can be used to justify that $\sigma_0$ is a mating of $\overline{z}^3$ and the Nielsen map $\pmb{\mathcal{N}}_3$ associated with $\pmb{G}_3$ (cf. [@LLMM3 Appendix B]).
One can now deform the Schwarz reflection map $\sigma_0$ quasiconformally so that the non-escaping dynamics remains conformally equivalent to $\overline{z}^3$, while the moduli of the deformed droplets tend to $\infty$. Since one side of the dynamics is 'frozen', standard compactness arguments show that there exists a limiting quadrature domain which is extremal; i.e., it has a unique node at $\infty$, and its boundary has four cusps and one double point (see [@LMM1 §4.1] for details). That this extremal quadrature domain is affinely equivalent to $\Omega_{\mathrm{ext}}$ follows from the rigidity theorem [@LMM1 Theorem 5.1].
In Subsection [5.1.4](#talbot_david_subsubsec){reference-type="ref" reference="talbot_david_subsubsec"}, we will outline a completely different recipe for constructing such extremal quadrature domains.
### Relation with a critically fixed anti-polynomial {#talbot_crit_fixed_anti_poly_subsubsec}
Consider the critically fixed cubic anti-polynomial $p(z)=(3\overline{z}-\overline{z}^3)/2$, which has two fixed critical points in $\mathbb{C}$. We call the map $p$ the *Julia anti-polynomial* since the dynamics of the cubic polynomial $\overline{p(z)}$ was originally studied by Julia (cf. [@julia-1918 p. 51]). The filled Julia set of $p$ is displayed in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}. Standard arguments from polynomial dynamics show that the lamination of $p$ is precisely $\sim$, and hence $\mathcal{J}(p)\cong\faktor{\mathbb{S}^1}{\sim}$ is equivariantly homeomorphic to the limit set of $\sigma_{\mathrm{ext}}$ [@LMM2 Proposition 64, Remark 66].
An interesting consequence of the above fact is that $\Lambda(G)$ is homeomorphic to $\mathcal{J}(p)$, and that this homeomorphism conjugates $\mathcal{N}_G\vert_{\Lambda(G)}$ to the action of the Julia anti-polynomial $p$ on its Julia set $\mathcal{J}(p)$. (This is the reason why the group $G$ is called the 'Julia necklace group'.)
Like the Schwarz reflection map $\sigma_{\mathrm{ext}}$, the anti-polynomial $p$ also enjoys an extremality property. It is a cubic anti-polynomial with the maximal possible number of planar fixed points (these fixed points are marked in yellow/orange in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}).
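For concreteness, the planar fixed points of $p$ can be computed by hand (this elementary verification is ours). Writing $z=x+iy$, the fixed point equation $3\overline{z}-\overline{z}^3=2z$ is equivalent to the real system $$x\left(1-x^2+3y^2\right)=0,\qquad y\left(3x^2-y^2-5\right)=0,$$ whose solutions are $z=0$, $z=\pm 1$, and $z=\pm\tfrac{\sqrt{7}}{2}\pm\tfrac{i}{2}$. Thus $p$ has exactly seven fixed points in $\mathbb{C}$, and the two fixed critical points $\pm 1$ (where $\partial p/\partial\overline{z}=\tfrac{3}{2}(1-\overline{z}^2)$ vanishes) are among them.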
### Constructing $\sigma_{\mathrm{ext}}$ via David surgery {#talbot_david_subsubsec}
The critically fixed anti-polynomial $p$ of Subsection [5.1.3](#talbot_crit_fixed_anti_poly_subsubsec){reference-type="ref" reference="talbot_crit_fixed_anti_poly_subsubsec"} has two invariant bounded Fatou components $\mathcal{U}_i$, and the restriction $p\vert_{\overline{\mathcal{U}_i}}$ is conformally conjugate to $\overline{z}^2\vert_{\mathbb{D}}$, for $i\in\{1,2\}$. Since the inverse of the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ (that conjugates $\overline{z}^2$ to $\pmb{\mathcal{N}}_2$) admits a David extension to $\mathbb{D}$, one can replace $p\vert_{\overline{\mathcal{U}_i}}$ with $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$, and uniformize this partially defined topological map of $\mathbb{S}^2$ to a Schwarz reflection map $\sigma$.
Since $p$ was not altered on its basin of infinity, one concludes that the Schwarz reflection map $\sigma$ has a superattracting fixed point of local degree three. Moreover, since the fixed points of $p$ on $\partial\mathcal{U}_1\cup\partial\mathcal{U}_2$ (the orange points in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}) are identified with the ideal boundary points of $\Pi(\pmb{G}_2)$, it is easy to see that the droplet $T(\sigma)$ is the union of two topological triangles touching at a common vertex. Moreover, $W^{1,1}$-removability of $\mathcal{J}(p)$ and analytic properties of David homeomorphisms imply that the limit set of $\sigma$ is conformally removable (cf. Subsection [12.1](#conf_removable_subsec){reference-type="ref" reference="conf_removable_subsec"}). Using these facts, one can argue that the Schwarz reflection map $\sigma$ is Möbius conjugate to $\sigma_{\mathrm{ext}}$ studied above (see [@LMMN Theorem 12.8] for details).
As a fallout of this construction, one concludes that $\Lambda(\sigma_{\mathrm{ext}})$ is conformally removable, and hence $\sigma_{\mathrm{ext}}$ is the unique conformal mating of $\overline{z}^3$ and $\mathcal{N}_G$.
We summarize some key points from the above discussion in the following theorem.
**Theorem 45**. *Let $\sigma_{\mathrm{ext}}$ be the Schwarz reflection map associated with the quadrature domain $f_{\mathrm{ext}}(\mathbb{D}^*)$, where $f_{\textrm{ext}}(z)=z+\frac{2}{3z}-\frac{1}{3z^3}$. Then the following hold.*
1. *$\sigma_{\mathrm{ext}}$ is the unique conformal mating of $\overline{z}^3\vert_{\overline{\mathbb{D}}}$ and the Nielsen map $\mathcal{N}_G\vert_{K(G)}$, where $G$ is the Julia necklace group shown in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}.*
2. *The dynamical systems $\sigma_{\mathrm{ext}}\vert_{\Lambda(\sigma_{\mathrm{ext}})},\ \mathcal{N}_G\vert_{\Lambda(G)}$, and $p\vert_{\mathcal{J}(p)}$ are topologically conjugate (where $p$ is the Julia anti-polynomial).*
## Apollonian gasket and its cousins {#apollo_group_map_schwarz_subsec}
As in Subsection [5.1](#talbot_subsec){reference-type="ref" reference="talbot_subsec"}, we will look at three homeomorphic fractals in this subsection: the limit set of a kissing reflection group, the Julia set of a critically fixed anti-rational map, and the limit set of a cubic Schwarz reflection map.
### From the Apollonian reflection group $G$ to a cubic anti-rational map $R$ via the Thurston Realization Theorem {#apollo_group_rat_map_subsubsec}
The Apollonian gasket is the limit set of the reflection group $G$ generated by reflections in the four red circles displayed in Figure [12](#kissing_nielsen_fig){reference-type="ref" reference="kissing_nielsen_fig"} (top right). Let us call the four components of $\Omega(G)$ that intersect the fundamental domain $\Pi(G)$ the *principal components* of $\Omega(G)$, and denote them by $\mathcal{U}_i$, $i\in\{1,2,3,4\}$. The restriction of the Nielsen map $\mathcal{N}_G$ to each $\overline{\mathcal{U}_i}$ is Möbius-conjugate to $\pmb{\mathcal{N}}_2$. We extend $\pmb{\mathcal{E}}_2:\mathbb{S}^1\to\mathbb{S}^1$ to a self-homeomorphism of $\overline{\mathbb{D}}$, and define a global orientation-reversing critically fixed branched cover of degree three as: $$\widetilde{R}:\widehat{\mathbb{C}}\to\widehat{\mathbb{C}},\quad
z \mapsto \left\{\begin{array}{ll}
\mathcal{N}_G(z) & \mbox{if}\ z\in \widehat{\mathbb{C}}\setminus\bigcup_{i=1}^4 \mathcal{U}_i, \\
\varphi_i^{-1}\left(\pmb{\mathcal{E}}_2^{-1} \left(\overline{\pmb{\mathcal{E}}_2(\varphi_i(z))}^2\right)\right) & \mbox{if}\ z\in \mathcal{U}_i,
\end{array}\right.$$ where $\varphi_i:\mathcal{U}_i\to\mathbb{D}$ is a Möbius map, $i\in\{1,2,3,4\}$.
It was shown in [@LLMM4] that $\widetilde{R}$ has no Thurston obstruction, and hence by the Thurston Realization Theorem, it is equivalent to a critically fixed cubic anti-rational map $R$. In fact, the expansiveness property of $\widetilde{R}$ on $\Lambda(G)$ (coming from circular reflections) was used to show that $\widetilde{R}\vert_{\Lambda(G)}\equiv \mathcal{N}_G\vert_{\Lambda(G)}$ is topologically conjugate to $R\vert_{\mathcal{J}(R)}$. Moreover, $R$ can be chosen to be $$R(z)=\frac{3\overline{z}^2}{2\overline{z}^3+1}$$ (see Figure [24](#apollo_cousins_fig){reference-type="ref" reference="apollo_cousins_fig"}) [@LLMM4 Corollary 8.2]. In particular, $\mathcal{J}(R)$ is homeomorphic to $\Lambda(G)$.
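As a sanity check (ours, not part of the cited construction), one can verify numerically that the four critical points of $R$, namely $0$ and the three cube roots of unity, are indeed fixed. A short Python sketch, assuming numpy:

```python
import numpy as np

# The Apollonian anti-rational map of degree three.
R = lambda z: 3*np.conj(z)**2 / (2*np.conj(z)**3 + 1)

# Critical points of R: zeros of the derivative of the underlying rational map
# 3z^2/(2z^3 + 1), which equals 6z(1 - z^3)/(2z^3 + 1)^2, i.e. z = 0 and the cube roots of unity.
crit = np.array([0, 1, np.exp(2j*np.pi/3), np.exp(4j*np.pi/3)])
print(np.abs(R(crit) - crit))  # all entries are (numerically) zero, so R is critically fixed
```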
![Left: The Julia set of $R$ and its Tischler graph are shown. Right: The dynamical plane of the Deltoid-and-Circle Schwarz reflection map is displayed. The droplet consists of the three brown triangles, while the non-escaping set and the tiling set are shown in yellow and blue/green, respectively.](JuliaRays.png "fig:"){#apollo_cousins_fig width="0.4\\linewidth"}![](apollo.png "fig:"){width="0.4\\linewidth"}
There is a natural forward-invariant graph in the dynamical plane of $R$ that contains all the fixed critical points. This graph, which captures the touching structure of the fixed Fatou components of $R$, is called the *Tischler graph* of $R$ (see Figure [24](#apollo_cousins_fig){reference-type="ref" reference="apollo_cousins_fig"}). It is worth pointing out that the planar dual of the Tischler graph of $R$ is isomorphic as a plane graph to the contact graph of the circle packing giving rise to $G$ (in fact, both graphs are isomorphic to the $1$-skeleton of a tetrahedron). We will discuss this duality in a general framework in Subsection [8.1](#new_line_dict_dyn_subsec){reference-type="ref" reference="new_line_dict_dyn_subsec"}.
### Pinching geodesics and matings {#qf_bdry_mating_subsubsec}
The Apollonian group $G$ lies on the boundary of the quasi-Fuchsian deformation space of the regular ideal quadrilateral reflection group $\pmb{G}_3$. Indeed, the group $G$ can be constructed as a limit of a quasiconformal deformation $\{G_n\}$ of $\pmb{G}_3$ such that the moduli of the two (quadrilateral) components of $\Pi(G_n)$ go to zero and hence the non-adjacent circles of the circle packings defining $G_n$ touch pairwise in the limit. Equivalently, such a deformation pinches suitable simple closed non-peripheral geodesics on the punctured spheres $\mathbb{D}/\widetilde{\pmb{G}}_3$ and $(\widehat{\mathbb{C}}\setminus\overline{\mathbb{D}})/\widetilde{\pmb{G}}_3$, where $\widetilde{\pmb{G}}_3$ is the index two Fuchsian subgroup of $\pmb{G}_3$ (cf. [@LLM1 §3.3, §3.4]). According to [@LLM1 Proposition 3.21], the group $G$ can also be interpreted as the mating of two copies of the Julia necklace group considered in Subsection [5.1.1](#talbot_dynamics_subsubsec){reference-type="ref" reference="talbot_dynamics_subsubsec"} (and displayed in Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}).
The Apollonian anti-rational map $R$ also enjoys a mating description akin to that of $G$. Recall that the limit set of the Julia necklace group is equivariantly homeomorphic to the Julia set of the Julia anti-polynomial $p$ introduced in Subsection [5.1.3](#talbot_crit_fixed_anti_poly_subsubsec){reference-type="ref" reference="talbot_crit_fixed_anti_poly_subsubsec"}. It turns out that $R$ is a mating of two copies of $p$ (cf. [@LLMM4 §4.2, Remark 6.14], [@LLM1 Corollary 4.17, §4.3]).
Thus, $G$ is a mating of two necklace groups and $R$ is a mating of two critically fixed anti-polynomials in a compatible manner.
### A nearly affine model for the Apollonian anti-rational map {#affine_model_subsubsec}
The anti-rational map $R$ constructed in Subsection [5.2.1](#apollo_group_rat_map_subsubsec){reference-type="ref" reference="apollo_group_rat_map_subsubsec"} admits an anti-quasiregular model $\mathfrak{R}$ on a tetrahedron [@LLMM4 §8]. In fact, this model has the added advantage of being piecewise affine outside its 'invariant Fatou components'.
We describe the map $\mathfrak{R}$ using Figure [\[affine_model_fig\]](#affine_model_fig){reference-type="ref" reference="affine_model_fig"}. The tetrahedron can be obtained from the union of the four equilateral triangles $ABC, ABD_1, ACD_2$, and $BCD_3$, where one identifies the sides $AD_1, BD_1$, and $CD_2$ with $AD_2, BD_3$, and $CD_3$, respectively. Thus, the vertices $D_1, D_2, D_3$ correspond to the same vertex on the tetrahedron.
The union of the triangles $D_1J_1H_1, D_2 I_1J_2,$ and $D_3 I_2H_2$ corresponds to a Jordan domain in the tetrahedron. The map $\mathfrak{R}$ is quasiconformally conjugate to $\overline{z}^2$ on this Jordan domain such that it restricts to a piecewise affine orientation-reversing double covering map on the boundary. For instance, it maps the edge $J_1H_1$ affinely to the union of the edges $J_2I_1$ and $H_2I_2$. We call this Jordan domain an *invariant Fatou component*. The map $\mathfrak{R}$ is similarly defined on the other three invariant Fatou components containing $A, B, C$ (these components cover the white region).
It remains to specify $\mathfrak{R}$ on the equilateral triangles $EFG, EH_1J_1, GJ_2I_1$, and $FH_2I_2$. For definiteness, let us consider the triangle $EFG$. One can map the quadrilaterals $EKNL$, $GMNK$, and $FLNM$ onto the quadrilaterals $EJ_1D_1H_1$, $GI_1D_2J_2$, and $FH_2D_3I_2$ in a color-preserving way such that the maps are anti-conformal (Euclidean) reflections on the triangles $EKL$, $GMK$, $FLM$, and affine on the triangles $KNL$, $MNK$, $LNM$. The definition of $\mathfrak{R}$ on the other three triangles $EH_1J_1$, $GJ_2I_1$, and $FH_2I_2$ is symmetric.
In fact, the affine nature of the construction guarantees that $\mathfrak{R}$ is a degree three anti-quasiregular map of the tetrahedron. By construction, the only critical points of $\mathfrak{R}$ are fixed and lie in the four invariant Fatou components. Finally, the fact that $\mathfrak{R}$ is anti-conformal outside the first preimages of its invariant Fatou components implies that it can be straightened to an anti-rational map. Since this cubic anti-rational map has four distinct fixed critical points, it is easily seen to be Möbius conjugate to the map $R$ of Subsection [5.2.1](#apollo_group_rat_map_subsubsec){reference-type="ref" reference="apollo_group_rat_map_subsubsec"} (cf. [@LLMM4 Proposition 8.1, Corollary 8.2]).
### From the cubic anti-rational map $R$ to the Apollonian reflection group $G$ via David surgery {#rat_map_apollo_group_subsubsec}
One can also go backwards from the anti-rational map $R$ to the Apollonian reflection group $G$ using David surgery. More precisely, since $\pmb{\mathcal{E}}_2^{-1}$ admits a David extension to $\overline{\mathbb{D}}$, one can replace the dynamics of $R$ on its critically fixed Fatou components with $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$ (cf. Subsections [4.1.5](#deltoid_david_subsubsec){reference-type="ref" reference="deltoid_david_subsubsec"} and [5.1.4](#talbot_david_subsubsec){reference-type="ref" reference="talbot_david_subsubsec"}). Finally, one can invoke the David Integrability Theorem to uniformize this partially defined topological map of $\mathbb{S}^2$ to a Schwarz reflection map, and use the mapping properties of the resulting Schwarz reflection to justify that it is indeed the Nielsen map of $G$ (see [@LLMM4 §10.2] for details of this construction).
### Schwarz reflection in the deltoid and a circle {#d_and_c_schwarz_subsubsec}
One can modify the construction of Subsection [5.2.4](#rat_map_apollo_group_subsubsec){reference-type="ref" reference="rat_map_apollo_group_subsubsec"} to obtain a hybrid dynamical system. Specifically, if one replaces the dynamics of $R$ on three of its invariant Fatou components with the Nielsen map $\pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus\mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$ using the above David surgery (for instance, on the three $2\pi/3-$rotation symmetric invariant Fatou components in Figure [24](#apollo_cousins_fig){reference-type="ref" reference="apollo_cousins_fig"} (left)) and leaves the action of $R$ on the fourth invariant Fatou component unaltered, then one obtains a piecewise Schwarz reflection map $\sigma$. Using an explicit characterization of the deltoid Schwarz reflection map, it was shown in [@LLMM4 Proposition 10.5] that $\sigma$ arises from the deltoid (of Subsection [4.1](#deltoid_subsec){reference-type="ref" reference="deltoid_subsec"}) and an inscribed circle. In a suitable sense, the Deltoid-and-Circle Schwarz reflection $\sigma$ combines the action of the Nielsen map $\mathcal{N}_G$ of the Apollonian reflection group and the critically fixed cubic anti-rational map $R$ (see [@LLMM4 Theorem 10.8] for a precise statement). Alternatively, one can explore the dynamics of the Schwarz reflection of the Deltoid-and-Circle (using the dynamics of the deltoid reflection map discussed in Subsection [4.1](#deltoid_subsec){reference-type="ref" reference="deltoid_subsec"}) to directly recognize this map as a mating of $R$ and $\mathcal{N}_G$. The dynamical plane of $\sigma$ is depicted in Figure [24](#apollo_cousins_fig){reference-type="ref" reference="apollo_cousins_fig"} (right).
### Quasisymmetry groups {#qs_grp_subsubsec}
Note that the boundaries of the fixed Fatou components of $R$ intersect at positive angles, while the boundaries of the principal components of $\Omega(G)$ intersect tangentially. This implies that the dynamically meaningful homeomorphism between $\mathcal{J}(R)$ and $\Lambda(G)$ described in Subsection [5.2.1](#apollo_group_rat_map_subsubsec){reference-type="ref" reference="apollo_group_rat_map_subsubsec"} does not admit a quasiconformal extension to the sphere. In fact, it has been proved in [@LZ23b] that there does not exist any global quasiconformal map that induces a homeomorphism between the Julia set of $R$ and the limit set of $G$.
However, the quasiconformal non-equivalence of $\mathcal{J}(R)$ and $\Lambda(G)$ is *not* captured by their groups of quasisymmetries. According to [@LLMM4 Theorems 3.8, 7.2], these fractals have isomorphic quasisymmetry groups, and these quasisymmetry groups coincide with the respective groups of self-homeomorphisms (see [@LZ23a] for a general quasisymmetric rigidity result which implies that the homeomorphism group of $\Lambda(G)$ coincides with its conformal symmetry group).
On the other hand, the quasisymmetry group of the limit set of the Deltoid-and-Circle reflection map is a strict subgroup of its homeomorphism group. This is essentially a consequence of the fact that the green complementary components of this limit set have inward pointing cusps, but all cusps on the boundaries of the yellow components are outward pointing cusps, implying that there is no quasisymmetry carrying the boundary of a green component to the boundary of a yellow component.
The main results discussed in this subsection are encapsulated below.
**Theorem 46**. *Let $G$ be the Apollonian reflection group, $R$ be the critically fixed anti-rational map $\frac{3\overline{z}^2}{2\overline{z}^3+1}$, and $\sigma$ be the Deltoid-and-Circle Schwarz reflection map. Then the following hold.*
- *There exists a global David homeomorphism which restricts to a topological conjugacy between $R\vert_{\mathcal{J}(R)}$ and $\mathcal{N}_G\vert_{\Lambda(G)}$.*
- *There exists a global David homeomorphism which restricts to a topological conjugacy between $R\vert_{\mathcal{J}(R)}$ and $\sigma\vert_{\Lambda(\sigma)}$.*
- *The quasisymmetry groups of $\mathcal{J}(R)$ and $\Lambda(G)$ coincide with the respective self-homeomorphism groups, and hence they are isomorphic. On the other hand, the quasisymmetry group of $\Lambda(\sigma)$ is strictly smaller than its group of self-homeomorphisms.*
- *$G$ is the mating of two copies of a necklace group $G_1$ and $R$ is the mating of two copies of a critically fixed anti-polynomial $p$, where $\mathcal{N}_{G_1}\vert_{\Lambda(G_1)}$ and $p\vert_{\mathcal{J}(p)}$ are topologically conjugate.*
# Parameter spaces of special families of Schwarz reflections {#schwarz_para_space_sec}
After analyzing interconnections between various examples of Schwarz reflection maps, anti-rational maps and Kleinian reflection groups, we will now explicate some general results about families of Schwarz reflection maps. Specifically, we will focus on families of Schwarz reflections containing the examples from Subsections [4.2](#c_and_c_center_subsec){reference-type="ref" reference="c_and_c_center_subsec"}, [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}, and [5.1](#talbot_subsec){reference-type="ref" reference="talbot_subsec"}. In all three cases, mating descriptions of the Schwarz reflection maps can be given via suitable straightening theorems, and this in turn can be used to relate the topology of parameter spaces of Schwarz reflections to that of parameter spaces of appropriate families of anti-rational maps or reflection groups.
## The Circle-and-Cardioid family {#c_and_c_general_subsec}
Recall from Subsection [4.2](#c_and_c_center_subsec){reference-type="ref" reference="c_and_c_center_subsec"} that $\heartsuit=f(\mathbb{D})$, where $f(w)=w/2-w^2/4$, is a simply connected quadrature domain whose Schwarz reflection map is denoted by $\sigma$. Moreover, the only critical point of $\sigma$ is at the origin.
For any $a\in\mathbb{C}\setminus (-\infty,-1/12)$, the smallest disk $B(a,r_a)$ containing $\heartsuit$ touches $\partial\heartsuit$ at a unique point. We will refer to the circle $\partial B(a,r_a)$ as the circumcircle to $\heartsuit$ centered at $a$, and denote the anti-Möbius reflection in $\partial B(a,r_a)$ by $\sigma_a$. The unique touching point of $\partial B(a,r_a)$ and $\partial\heartsuit$ is denoted by $\alpha_a$. The union $\Omega_a$ of the disjoint quadrature domains $\heartsuit$ and $\overline{B}(a,r_a)^c$ gives rise to a piecewise Schwarz reflection map $$F_a:\overline{\Omega_a}\to\widehat{\mathbb{C}},\quad
z\mapsto
\begin{cases}
\sigma(z)\quad \textrm{if}\quad z\in\overline{\heartsuit},\\
\sigma_a(z)\quad \textrm{if}\quad z\in B(a,r_a)^c,
\end{cases}$$ (cf. Subsection [3.3.2](#piecewise_schwarz_subsubsec){reference-type="ref" reference="piecewise_schwarz_subsubsec"}). We denote this family of maps by $\mathcal{S}$; i.e., $$\mathcal{S}:=\lbrace F_a:\overline{\Omega}_a\to\widehat{\mathbb{C}}:a\in\mathbb{C}\setminus (-\infty,-1/12)\rbrace,$$ and call it the *Circle-and-Cardioid* or *C&C* family. Note that $1/4$ and $\alpha_a$ are the only singular points on the boundary of $\Omega_a$. Following Subsection [3.3.3](#inv_partition_subsubsec){reference-type="ref" reference="inv_partition_subsubsec"}, we define the fundamental tile, the tiling set, and the non-escaping set as $$T^0_a\equiv T^0(F_a):=\widehat{\mathbb{C}}\setminus\left(\Omega_a\cup\{1/4,\alpha_a\}\right),\quad T^\infty_a\equiv T^\infty(F_a):=\bigcup_{n\geq 0} F_a^{-n}(T^0_a),\quad \mathrm{and}$$ $$K_a\equiv K(F_a):=\widehat{\mathbb{C}}\setminus T^\infty(F_a).$$
![Parts of the non-escaping sets of various maps in the family $\mathcal{S}$ are shown.](multi_rabbit.png "fig:"){#c_and_c_conn_cantor_limit_fig width="0.32\\linewidth"} ![](dendrite.png "fig:"){width="0.32\\linewidth"} ![](cantor_1.png "fig:"){width="0.32\\linewidth"}
### The connectedness locus of $\mathcal{S}$ {#c_and_c_conn_locus_subsubsec}
As in the case of (anti-)polynomial-like maps, it turns out that connectedness of the non-escaping set $K(F_a)$ is equivalent to the requirement that the orbit of the unique critical point $0$ of $F_a$ does not escape. Indeed, if the critical point $0$ does not lie in the tiling set of $F_a$, then the vertex-preserving conformal isomorphism $\psi_a:T_a^0\to\Pi(\pmb{G}_2)$ can be extended via iterated lifting (or Schwarz reflections) to a conformal isomorphism $\psi_a:T^\infty_a\to\mathbb{D}$ conjugating $F_a$ to the Nielsen map $\pmb{\mathcal{N}}_2$.
**Theorem 47** (Basic dichotomy). *[@LLMM1 Theorem 1.2]*
1. *If the critical point of $F_a$ does not escape to the fundamental tile $T_a^0$, then the conformal map $\psi_a$ from $T_a^0$ onto $\Pi(\pmb{G}_2)$ extends to a biholomorphism between the tiling set $T_a^\infty$ and the unit disk $\mathbb{D}$. Moreover, the extended map $\psi_a$ conjugates $F_a$ to the Nielsen map $\pmb{\mathcal{N}}_2$. In particular, $K_a$ is connected.*
2. *If the critical point of $F_a$ escapes to the fundamental tile, then the corresponding non-escaping set $K_a$ is a Cantor set.*
The above theorem leads to the notion of the *connectedness locus* of the family $\mathcal{S}$.
**Definition 48**. The connectedness locus of the family $\mathcal{S}$ is defined as $$\mathcal{C}(\mathcal{S})=\{a\in\mathbb{C}\setminus(-\infty,-1/12): 0\notin T_a^\infty\}=\{a\in\mathbb{C}\setminus(-\infty,-1/12): K_a\ \textrm{is\ connected}\}.$$ The complement of the connectedness locus in the parameter space is called the *escape locus*.
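The dichotomy of Theorem [Theorem 47](#group_dyn_c_and_c_thm){reference-type="ref" reference="group_dyn_c_and_c_thm"} translates into a simple numerical membership test for $\mathcal{C}(\mathcal{S})$: iterate the critical orbit $0\mapsto\infty\mapsto a\mapsto\cdots$ of $F_a$ and check whether it ever enters the fundamental tile. The following Python sketch is ours; the branch choices and the sampled circumradius are naive numerical approximations, so it should be read as an illustration rather than a rigorous test.

```python
import numpy as np

def in_cardioid(z):
    # z lies in the cardioid f(D), f(w) = w/2 - w^2/4, iff a root of w^2 - 2w + 4z = 0 lies in D.
    s = np.sqrt(1 - 4*z + 0j)
    return min(abs(1 - s), abs(1 + s)) < 1

def schwarz_cardioid(z):
    # Schwarz reflection of the cardioid: f o (w -> 1/conj(w)) o (f|_D)^(-1).
    s = np.sqrt(1 - 4*z + 0j)
    w = 1 - s if abs(1 - s) < abs(1 + s) else 1 + s   # the preimage of z lying in D
    u = 1 / np.conj(w)
    return u/2 - u**2/4

def F(z, a, r):
    # The piecewise Schwarz reflection F_a; returns None if z lies in the fundamental tile.
    if in_cardioid(z):
        return schwarz_cardioid(z)
    if abs(z - a) > r:
        return a + r**2 / np.conj(z - a)   # reflection in the circumcircle of radius r centered at a
    return None

def critical_orbit_escapes(a, n_iter=500):
    theta = np.linspace(0, 2*np.pi, 4096, endpoint=False)
    r = np.max(np.abs(np.exp(1j*theta)/2 - np.exp(2j*theta)/4 - a))  # circumradius r_a (sampled)
    z = a          # the critical orbit is 0 -> infinity -> a -> ..., so start the iteration at a
    for _ in range(n_iter):
        if abs(z) < 1e-12:   # the orbit has hit the critical point: 0 -> infinity -> a again
            z = a
            continue
        z = F(z, a, r)
        if z is None:
            return True
    return False

print(critical_orbit_escapes(0.0))        # a = 0: superattracting 2-cycle 0 <-> infinity, no escape
print(critical_orbit_escapes(2.0 + 2.0j)) # a parameter that we expect to lie in the escape locus
```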
### Combinatorial straightening and mating description for geometrically finite maps {#c_and_c_straightening_mating_subsubsec}
According to Theorem [Theorem 47](#group_dyn_c_and_c_thm){reference-type="ref" reference="group_dyn_c_and_c_thm"}, one observes the full tessellation structure of the ideal triangle reflection group in the tiling set of $F_a$ if and only if $a\in\mathcal{C}(\mathcal{S})$ (see Figure [27](#c_and_c_conn_cantor_limit_fig){reference-type="ref" reference="c_and_c_conn_cantor_limit_fig"}). In light of the mating description for the map $F_a$ with $a=0$ given in Subsection [4.2](#c_and_c_center_subsec){reference-type="ref" reference="c_and_c_center_subsec"}, it is natural to expect that the non-escaping set dynamics of any postcritically finite map $F_a$ in $\mathcal{C}(\mathcal{S})$ is conjugate to the filled Julia set dynamics of a quadratic anti-polynomial.
However, as explained in Subsection [4.2.3](#c_and_c_basilica_pinched_anti_quad_subsubsec){reference-type="ref" reference="c_and_c_basilica_pinched_anti_quad_subsubsec"}, one cannot *straighten* Schwarz reflections in $\mathcal{C}(\mathcal{S})$ to quadratic anti-polynomials in the Tricorn using quasiconformal tools. In [@LLMM2], a *combinatorial straightening* theory for 'nice' maps in $\mathcal{C}(\mathcal{S})$ was developed. In fact, for maps in $\mathcal{C}(\mathcal{S})$, the conformal map $\psi_a:T^\infty_a\to\mathbb{D}$ plays a role akin to that of Böttcher coordinates in polynomial dynamics. For postcritically finite maps in $\mathcal{C}(\mathcal{S})$, the *limit set* (i.e., the common boundary of $K_a$ and $T_a^\infty$) turns out to be locally connected (see [@LLMM1 Theorem 1.4]), and hence the conformal map $\psi_a^{-1}$ extends continuously to produce a topological semi-conjugacy between $\pmb{\mathcal{N}}_2\vert_{\mathbb{S}^1}$ and $F_a\vert_{\partial T_a^\infty}$. This enables one to construct a topological model for the limit set of $F_a$ as a quotient of $\mathbb{S}^1$ under an $\pmb{\mathcal{N}}_2$-invariant lamination. This lamination can then be turned into an $m_{-2}$-invariant lamination using the topological conjugacy $\pmb{\mathcal{E}}_2$ between $\pmb{\mathcal{N}}_2\vert_{\mathbb{S}^1}$ and $m_{-2}\vert_{\mathbb{S}^1}$. Finally, standard results in polynomial dynamics (which use the Thurston Realization Theorem or landing properties of external parameter rays of the Tricorn) imply that such $m_{-2}$-invariant laminations are indeed realized by Julia sets of postcritically finite maps in the Basilica limb $\mathcal{L}$ of the Tricorn (see Definition [Definition 33](#def_basilica_limb){reference-type="ref" reference="def_basilica_limb"}).
A map in $\mathcal{C}(\mathcal{S})$ is called *geometrically finite* if it has an attracting/parabolic cycle or if its unique critical point (at the origin) is non-escaping and strictly pre-periodic. A good understanding of the dynamics of geometrically finite maps in $\mathcal{C}(\mathcal{S})$ (see [@LLMM1 §5,6]) allows one to push the above arguments to all geometrically finite maps and prove the following mating statement.
**Theorem 49**. *[@LLMM2 Theorem 1.1] Every geometrically finite map $F_a\in\mathcal{C}(\mathcal{S})$ is a conformal mating of a unique geometrically finite quadratic anti-polynomial and the Nielsen map $\pmb{\mathcal{N}}_2$. More precisely, for each geometrically finite map $F_a\in\mathcal{C}(\mathcal{S})$, the following hold.*
1. *There exists a topological semi-conjugacy between $$F_a:\overline{T_a^\infty}\setminus\mathop{\mathrm{int}}{T_a^0}\to \overline{T_a^\infty}\quad \textrm{and}\quad \pmb{\mathcal{N}}_2:\overline{\mathbb{D}}\setminus \mathop{\mathrm{int}}{\Pi(\pmb{G}_2)}\to\overline{\mathbb{D}}$$ such that the semi-conjugacy restricts to a conformal conjugacy on $T_a^\infty$.*
2. *There exists a unique geometrically finite quadratic anti-polynomial $f_c\in\mathcal{L}$ such that $F_a\vert_{K_a}$ is topologically conjugate to $f_c\vert_{\mathcal{K}_c}$ and the conjugacy is conformal on the interior of $K_a$. Moreover, the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ pushes forward the lamination associated with the limit set of $F_a$ to the lamination associated with the Julia set of $f_c$.*
(See Definition [Definition 41](#push_lami_def){reference-type="ref" reference="push_lami_def"} for the definition of push-forward of a lamination.)
The uniqueness part of the above theorem follows from rigidity of geometrically finite polynomials with prescribed lamination and prescribed conformal class of Fatou set dynamics.
### Bijection between geometrically finite maps of $\mathcal{C}(\mathcal{S})$ and $\mathcal{L}$ {#c_and_c_geom_fin_bijection_subsubsec}
The combinatorial straightening map from geometrically finite maps in $\mathcal{C}(\mathcal{S})$ to those in $\mathcal{L}$ given by Theorem [Theorem 49](#c_and_c_general_mating_thm){reference-type="ref" reference="c_and_c_general_mating_thm"} is in fact a bijection. To prove injectivity of this map, one needs to establish appropriate rigidity theorems for geometrically finite maps in $\mathcal{C}(\mathcal{S})$ (namely, that they are uniquely determined by their lamination and conformal class of the Fatou set dynamics). The proof of this fact uses a pullback argument and involves an analysis of the boundary behavior of conformal maps near cusps and double points (see [@LLMM2 §8]).
On the other hand, surjectivity of the above map amounts to constructing geometrically finite maps in $\mathcal{C}(\mathcal{S})$ with prescribed lamination and conformal data. To achieve this, one needs a dynamically natural uniformization of the escape locus of $\mathcal{S}$. For $a\notin\mathcal{C}(\mathcal{S})\cup(-\infty,-1/12)$, let $n(a)$ be the smallest integer such that $F_a^{\circ n(a)}(\infty)$ lands on the fundamental tile $T_a^0$. Then the action of $F_a$ on the union of preimages of $T_a^0$ up to time $n(a)$ is unramified and hence conformally conjugate to the action of $\pmb{\mathcal{N}}_2$ (see [@LLMM1 Proposition 5.38]). This conjugacy, which we call $\psi_a$, can be thought of as an analog of Böttcher coordinates for polynomials with disconnected Julia sets. Analogous to the uniformization of the exterior of the Mandelbrot set or the Tricorn, one can uniformize the escape locus of $\mathcal{S}$ by the conformal position of the critical value $\infty$; i.e., by the map $\pmb{\Psi}:a\mapsto\psi_a(\infty)$ [@LLMM2 Theorem 1.3, §6]. The map $\pmb{\Psi}$ gives rise to parameter tiles in the escape locus of $\mathcal{S}$, which provides a phase-parameter duality. One can exploit this phase-parameter duality to study landing/accumulation properties of these parameter tiles, and construct the desired geometrically finite maps in $\mathcal{C}(\mathcal{S})$ as limit points of suitable sequences of parameter tiles (cf. [@LLMM2 §9]). This is akin to constructing geometrically finite maps of the Mandelbrot set or the Tricorn as limit points of suitable parameter rays.
**Theorem 50**. *[@LLMM2 Theorem 1.2] There exists a natural bijection $\chi$ between the geometrically finite parameters in $\mathcal{C}(\mathcal{S})$ and those in the Basilica limb $\mathcal{L}$ of the Tricorn such that the laminations of the corresponding maps are related by the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_2$ and the dynamics on the respective periodic Fatou components are conformally conjugate.*
In Section [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}, we will discuss higher degree generalizations of the C&C family and extensions of the above result to generic maps in such families.
### Combinatorial model of $\mathcal{C}(\mathcal{S})$ {#c_and_c_conn_locus_model_subsubsec}
The landing/accumulation properties of the parameter tiles of the escape locus of $\mathcal{S}$ also allow one to study the topology of $\mathcal{C}(\mathcal{S})$ from outside. It turns out that the landing/accumulation patterns of parameter rays at parabolic/Misiurewicz points of $\mathcal{C}(\mathcal{S})$ and $\mathcal{L}$ are compatible.
![Left: The parameter tiles of the exterior of $\mathcal{C}(\mathcal{S})$ are displayed. The brown horizontal line is the slit $(-\infty,-1/12)$. The parameter $a=-1/12$ corresponds to the unique map in $\mathcal{C}(\mathcal{S})$ for which $\alpha_a$ has an attracting direction in $K_a$. Right: A blow-up of $\mathcal{C}(\mathcal{S})$ around the principal hyperbolic component (having its center at $a=0$) is shown. The marked red parameter corresponds to a Misiurewicz map for which the critical point $0$ eventually lands on the fixed point $\alpha_a$. Bottom: The region to the left of the grey region (which is a part of the principal hyperbolic component of the Tricorn) is the real Basilica limb of the Tricorn.](conn_locus_tessellation.png "fig:"){#c_and_c_conn_locus_fig width="0.4\\linewidth"} ![](conn_locus_schwarz.png "fig:"){width="0.56\\linewidth"}\
![](basilica_limb_tricorn.png "fig:"){width="0.6\\linewidth"}
More precisely, the angles of the parameter rays of $\mathcal{S}$ landing/accumulating at a parabolic/Misiurewicz point $a\in\mathcal{C}(\mathcal{S})$ are mapped by $\pmb{\mathcal{E}}_2$ to the angles of the parameter rays of the Tricorn landing/accumulating at the parabolic/Misiurewicz point $\chi(a)\in\mathcal{L}$. This property can be utilized to construct a homeomorphism between suitable pinched disk models of the two connectedness loci.
**Theorem 51**. *[@LLMM2 Theorem 1.4] The pinched disk model of $\mathcal{C}(\mathcal{S})$ is homeomorphic to that of the Basilica limb $\mathcal{L}$ of the Tricorn.*
### A different perspective: David surgery {#c_and_c_conn_david_surjery_subsubsec}
One can use David surgery techniques to give an alternative proof of surjectivity of the combinatorial straightening map of Theorem [Theorem 49](#c_and_c_general_mating_thm){reference-type="ref" reference="c_and_c_general_mating_thm"} (which is essentially different from the proof sketched in Subsection [6.1.3](#c_and_c_geom_fin_bijection_subsubsec){reference-type="ref" reference="c_and_c_geom_fin_bijection_subsubsec"}). We sketch the key steps below.
i\) Let $f_c\in\mathcal{L}$ be a hyperbolic/Misiurewicz anti-polynomial. One can apply [@LMMN Lemma 7.1] to replace $f_c\vert_{\mathcal{B}_\infty(f_c)}$ (where $\mathcal{B}_\infty(f_c)$ is the basin of infinity of $f_c$) with the Nielsen map $\pmb{\mathcal{N}}_2\vert_{\mathbb{D}}$. This produces a piecewise Schwarz reflection map $F$ associated with two disjoint quadrature domains.
ii\) The dynamics of the piecewise Schwarz reflection map $F$ on its non-escaping set is topologically conjugate to the dynamics of $f_c$ on its filled Julia set such that the conjugacy is conformal on the interior. Finally, one repeats the arguments of Subsection [4.2.4](#c_and_c_basilica_unique_mating_subsubsec){reference-type="ref" reference="c_and_c_basilica_unique_mating_subsubsec"} to deduce that $F$ is a piecewise Schwarz reflection map associated with two disjoint simply connected quadrature domains with touching closures, one of which is a round disk and the other a cardioid. This identifies $F$ as a member of the C&C family (cf. [@LLMM2 §A.2]).
## Cubic Chebyshev family and associated correspondences {#chebyshev_gen_subsec}
The Schwarz reflection map described in Subsection [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"} sits in a natural one-parameter family of maps arising from univalent restrictions of the cubic Chebyshev polynomial $f(w)=w^3-3w$ on a varying collection of round disks. The family of $2$:$2$ correspondences associated with these Schwarz reflections can be viewed as an antiholomorphic analog of the family of algebraic correspondences introduced and studied extensively by Bullett, Penrose, and Lomonaco [@BP; @BuLo1; @BuLo2; @BuLo3]. A different motivation behind studying this family of quadratic Schwarz reflection maps comes from the discussion in the beginning of Subsection [4](#quadratic_examples_sec){reference-type="ref" reference="quadratic_examples_sec"}.
The goal of the current subsection is to give an overview of the dynamics of these Schwarz reflections (respectively, correspondences) and the structure of their parameter space. Our exposition is based on [@LLMM3] and its subsequent improvement in [@LMM3].
### The family of Schwarz reflections {#cubic_cheby_schwarz_def_subsec}
We set $\mathbb{H}_R:=\{a\in\mathbb{C}: \mathop{\mathrm{Re}}(a)>0\}$. For $a\in\mathbb{H}_R\setminus\{1\}$, we will denote the disk $B(a,\vert a-1\vert)$ by $\Delta_a$. Note that $\Delta_a$ is centered at $a$ and has the critical point $1$ of $f$ on its boundary. In order to define Schwarz reflection maps via univalent restrictions $f\vert_{\Delta_a}$, we are naturally led to the set $$\widehat{S}:=\{a\in\mathbb{H}_R\setminus\{1\}: f(\partial \Delta_a)\ \mathrm{is\ a\ Jordan\ curve}\}.$$ Elementary arguments show that $\widehat{S}\neq\emptyset$, and $\widehat{S}\subset \{a\in\mathbb{C}: 0<\mathop{\mathrm{Re}}(a)\leq4\}$. An explicit description of the set $\widehat{S}$ can be found in [@LLMM3 §3].
Note that since the critical points of $f$ lie outside $\Delta_a$ for $a\in\mathbb{H}_R$, univalence of $f\vert_{\overline{\Delta_a}}$ follows from the condition that $f(\partial\Delta_a)$ is a Jordan curve. Hence, $\Omega_a:=f(\Delta_a)$ is a simply connected quadrature domain for $a\in\widehat{S}$. Moreover, the presence of the critical point $1$ (of $f$) on $\partial\Delta_a$ implies that $\partial\Omega_a$ has a conformal cusp at $f(1)=-2$ and is non-singular away from $-2$.
We denote the reflection in the circle $\partial\Delta_a$ by $\eta_a$. Then the Schwarz reflection map $\sigma_a$ of $\Omega_a$ is given by $f\circ\eta_a\circ \left(f\vert_{\overline{\Delta}_a}\right)^{-1}$. It is easily checked that $\sigma_a$ has two distinct critical points; namely, $c_a:=f(\eta_a(-1))$ and $c_a^*:=f(\eta_a(\infty))$. Furthermore, $\sigma_a$ maps $c_a$ (respectively, $c_a^*$) to $2$ (respectively, to $\infty$) with local degree two (respectively, three). By Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}, $\sigma_a:\sigma_a^{-1}(\Omega_a)\to\Omega_a$ is a two-to-one (possibly branched) covering, and $\sigma_a:\sigma_a^{-1}(\mathop{\mathrm{int}}{\Omega_a^c})\to \mathop{\mathrm{int}}{\Omega_a^c}$ is a branched covering of degree three.
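To make these formulas concrete, the following Python sketch (ours; the inverse branch of $f$ is chosen naively as the root closest to the center $a$) evaluates $\sigma_a$ for the parameter $a=3$ and checks two defining features: $\sigma_a$ fixes $\partial\Omega_a$ pointwise, and it sends the critical point $c_a=f(\eta_a(-1))$ to $f(-1)=2$.

```python
import numpy as np

a = 3.0                                    # a sample parameter with Re(a) > 3/2
r = abs(a - 1)                             # radius of Delta_a = B(a, |a - 1|)
f   = lambda w: w**3 - 3*w                 # the cubic Chebyshev polynomial
eta = lambda w: a + r**2 / np.conj(w - a)  # reflection in the circle bounding Delta_a

def f_inv(z):
    # Inverse branch of f restricted to Delta_a: among the three roots of w^3 - 3w - z = 0,
    # pick the one closest to the center a (a naive choice of the branch landing in Delta_a).
    roots = np.roots([1, 0, -3, -z])
    return roots[np.argmin(np.abs(roots - a))]

sigma = lambda z: f(eta(f_inv(z)))         # Schwarz reflection of Omega_a = f(Delta_a)

# sigma_a is the identity on the boundary of the quadrature domain:
w0 = a + r*np.exp(0.7j)                    # a point of the circle bounding Delta_a
z0 = f(w0)                                 # the corresponding point of the boundary of Omega_a
print(abs(sigma(z0) - z0))                 # (numerically) zero

# The critical point c_a = f(eta_a(-1)) is mapped to f(-1) = 2:
c = f(eta(-1))
print(sigma(c))                            # (numerically) 2
```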
It turns out that for $a\in\widehat{S}$ with $\mathop{\mathrm{Re}}(a)\leq 3/2$, the set $\sigma_a^{-1}(\Omega_a)$ has two connected components and $\sigma_a:\sigma_a^{-1}(\Omega_a)\to\Omega_a$ is an unbranched covering map. On the other hand, for $a\in\widehat{S}$ with $\mathop{\mathrm{Re}}(a)> 3/2$, we have the following properties:
- $\Omega_a, \sigma_a^{-1}(\Omega_a)$ are topological disks,
- $\sigma_a^{-1}(\Omega_a)\subset\Omega_a$ with $\partial\Omega_a\cap\partial\sigma_a^{-1}(\Omega_a)=\{-2\}$, and
- $\sigma_a:\sigma_a^{-1}(\Omega_a)\to\Omega_a$ is a proper antiholomorphic map of degree two with a unique critical point.
(cf. Subsection [4.3.2](#chebyshev_center_hybrid_conj_subsubsec){reference-type="ref" reference="chebyshev_center_hybrid_conj_subsubsec"} and [@LLMM3 §3.2].) Since we are interested in maps $\sigma_a$ exhibiting anti-polynomial-like behavior on their non-escaping sets, it is natural to work with the space of Schwarz reflections $$\mathcal{S}:=\{\sigma_a:\overline{\Omega_a}\to\widehat{\mathbb{C}}: a\in\widehat{S},\ \mathop{\mathrm{Re}}(a)\in(3/2,4]\}.$$ Note that $T^0_a\equiv T^0(\sigma_a)=\widehat{\mathbb{C}}\setminus\left(\Omega_a \cup\{-2\}\right)$. We denote the tiling and non-escaping sets of $\sigma_a$ by $T_a^\infty$ and $K_a$, respectively.
We also remark that for all $\sigma_a\in\mathcal{S}$ with $\mathop{\mathrm{Re}}(a)<4$, the cusp $-2$ of $\partial\Omega_a$ is a $(3,2)$-cusp, and $\sigma_a^{\circ 2}$ has a repelling direction in $K_a$ at $-2$. On the other hand, for $\sigma_a\in\mathcal{S}$ with $\mathop{\mathrm{Re}}(a)=4$, the cusp $-2$ of $\partial\Omega_a$ is a $(\nu,2)$-cusp with $\nu\in\{5,7\}$, and $\sigma_a^{\circ 2}$ has at least one attracting direction in $K_a$ at $-2$ (cf. [@LLMM3 §4.2]).
### Connectedness locus: coexistence of anti-rational map and reflection group structure {#cubic_cheby_qc_straightening_subsubsec}
Recall that each $\sigma_a$ has a *passive* critical point at $c_a^*$ that escapes under one iterate of $\sigma_a$. Thus, the map $\sigma_a$ has a unique *active/free* critical point, namely $c_a$. As in the case for unicritical anti-polynomials, it is easy to see that the non-escaping set $K_a$ (of $\sigma_a$) is connected if and only if the unique free critical point $c_a$ does not escape. More precisely, the *connectedness locus* of $\mathcal{S}$ has the following description $$\mathcal{C}(\mathcal{S})=\{\sigma_a\in\mathcal{S}: K_a\ \textrm{is\ connected}\}=\{\sigma_a\in\mathcal{S}: 2\notin T_a^\infty\}$$ (see Figure [\[cheby_conn_locus_fig\]](#cheby_conn_locus_fig){reference-type="ref" reference="cheby_conn_locus_fig"}, cf. [@LLMM3 §4.3]).
By definition, the free critical value $2$ of $\sigma_a$ never hits the fundamental tile $T^0_a$ for $\sigma_a\in\mathcal{C}(\mathcal{S})$. As a consequence, the conformal isomorphism $\mathcal{Q}_1\to T^0_a$ that carries $0$ to $\infty$ and sends $1$ to $-2$ can be extended by iterated lifting to produce a conformal conjugacy between $\pmb{\mathcal{F}}_2:\mathcal{Q}\setminus\mathop{\mathrm{int}}{\mathcal{Q}_1}\to\mathcal{Q}$ and $\sigma_a:T^\infty_a\setminus\mathop{\mathrm{int}}{T^0_a}\to T^\infty_a$ (see Subsection [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"} for the definition of the map $\pmb{\mathcal{F}}_2$ and comments on the construction of this conjugacy).
For $\sigma_a\in\mathcal{C}(\mathcal{S})$ with $\mathop{\mathrm{Re}}(a)<4$, the restriction $\sigma_a:\sigma_a^{-1}(\Omega_a)\to\Omega_a$ gives rise to a pinched anti-quadratic-like map (in the sense of [@LLMM3 Definition 5.1]) with connected filled Julia set. By [@LLMM3 Lemma 5.3, Theorem 5.4], such a pinched anti-quadratic-like restriction is hybrid conjugate to a unique anti-rational map in the parabolic Tricorn $\pmb{\mathcal{B}}_2$ (with a simple parabolic fixed point at $\infty$). We refer the reader to Subsection [4.3.2](#chebyshev_center_hybrid_conj_subsubsec){reference-type="ref" reference="chebyshev_center_hybrid_conj_subsubsec"} for the main ideas in the proof of this theorem and to [@LMM3 Theorem 4.11] for an alternative method of straightening.
For $\sigma_a\in\mathcal{C}(\mathcal{S})$ with $\mathop{\mathrm{Re}}(a)=4$, the existence of an attracting direction at $-2$ in $K_a$ turns out to be an obstruction to a pinched anti-quadratic-like restriction of $\sigma_a$ (in the sense of [@LLMM3 Definition 5.1]). However, the alternative straightening surgery of [@LMM3 Theorem 4.11] allows one to straighten such a map $\sigma_a$ to a unique anti-rational map in $\pmb{\mathcal{B}}_2$ (with a higher order parabolic fixed point at $\infty$). This surgery construction is carried out in the following two steps.
i\) One first shows using a quasiconformal interpolation argument (similar to the one used in [@LLMM3 Lemma 5.3]) that the restriction of $\pmb{\mathcal{F}}_2$ on the closure of a neighborhood of $\partial\mathcal{Q}\setminus\pmb{\mathcal{F}}_2^{-1}(1)$ is quasiconformally conjugate to the restriction of the parabolic anti-Blaschke product $B_2(z)=\frac{3\overline{z}^2+1}{\overline{z}^2+3}$ on the closure of a neighborhood of $\mathbb{S}^1\setminus B_2^{-1}(1)$ (see [@LMM3 Lemma 4.9]).
ii\) Next, one can use the Riemann map of $T^\infty_a$ (which conjugates $\sigma_a$ to $\pmb{\mathcal{F}}_2$) and the above quasiconformal conjugacy between $\pmb{\mathcal{F}}_2$ and $B_2$ to glue the action of $B_2$ outside the non-escaping set of $\sigma_a$. This produces a quasiregular map, which can be straightened to an anti-rational map in $\pmb{\mathcal{B}}_2$ (see [@LMM3 Theorem 4.11] for details).
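The parabolic nature of the anti-Blaschke product $B_2$ used in step (i) can be checked directly (a routine verification, included here for convenience): writing $B_2(z)=b(\overline{z})$ with $b(z)=\frac{3z^2+1}{z^2+3}$, one has $$b(1)=1,\qquad b'(z)=\frac{16z}{(z^2+3)^2},\qquad b'(1)=1,$$ and since $b$ has real coefficients, the holomorphic second iterate $B_2^{\circ 2}=b\circ b$ fixes $1$ with multiplier $b'(1)^2=1$. Thus, $1\in\mathbb{S}^1$ is indeed a parabolic fixed point of $B_2$.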
**Theorem 52**. *Each $\sigma_a\in\mathcal{C}(\mathcal{S})$ is a mating of $\pmb{\mathcal{F}}_2$ and a unique anti-rational map in the parabolic Tricorn $\pmb{\mathcal{B}}_2$.*
The above theorem defines a *straightening map* $\chi:\mathcal{C}(\mathcal{S})\to\pmb{\mathcal{B}}_2$.
*Remark 53*. 1) The main difference between the straightening of pinched anti-quadratic-like maps carried out in [@LLMM3 §5] and the alternative straightening surgery of [@LMM3 §4.4.1] is that in the former surgery, the quasiconformal interpolation is carried out directly in the Schwarz dynamical plane, while in the latter, the interpolation takes place in the dynamical plane of the model map $\pmb{\mathcal{F}}_2$.
2\) The straightening surgery of [@LMM3 §4.4.1] applies to all maps in $\mathcal{C}(\mathcal{S})$. However, due to the appearance of the Riemann map of the tiling set in the proof, it is harder to control parameter dependence of this surgery.
### Properties of the straightening map and homeomorphism between models of parameter spaces {#cheby_chi_prop_subsubsec}
The straightening map $\chi$ turns out to be a bijection. Injectivity of $\chi$ is a consequence of the fact that all maps in $\mathcal{C}(\mathcal{S})$ have the same external dynamics $\pmb{\mathcal{F}}_2$. Thus, if two maps in $\mathcal{C}(\mathcal{S})$ are hybrid conjugate to the same anti-rational map, then these two Schwarz reflections have the same hybrid class and the same external class, which in turn implies that they are affinely conjugate (see [@LLMM3 Proposition 5.9]).
Surjectivity of $\chi$ can be proved using an inverse construction to straightening. While a potentially weaker version of this statement (to the effect that the image of $\chi$ contains the closure of all geometrically finite maps in $\pmb{\mathcal{B}}_2$) was proved in [@LLMM3], the full surjectivity of $\chi$ was established in [@LMM3 §5]. As in the construction of $\chi$, one can give two different proofs of surjectivity: one using pinched anti-quadratic-like restrictions of maps in $\pmb{\mathcal{B}}_2$ (with simple parabolic fixed point at $\infty$), and the other using the quasiconformal conjugation between $\pmb{\mathcal{F}}_2$ and $B_2$ constructed in [@LMM3 Lemma 4.9] (see [@LMM3 Theorems 5.1, 5.2]).
The parameter dependence of the quasiconformal surgeries giving rise to the straightening map and its inverse (that involve pinched anti-quadratic-like maps) was investigated in [@LMM3 §6] (cf. [@LLMM3 §8.1]), and it was proved that $\chi$ is continuous at hyperbolic and quasiconformally rigid parameters of $\mathcal{C}(\mathcal{S})$. On the other hand, $\chi$ is not necessarily continuous at quasiconformally non-rigid parabolic parameters [@LLMM3 §8.1].
Using the above properties, it was shown in [@LLMM3 §9] that $\chi$ induces a homeomorphism between topological models of $\mathcal{C}(\mathcal{S})$ and $\pmb{\mathcal{B}}_2$. Roughly speaking, these models are constructed by pinching appropriate regions of (possible) discontinuity of $\chi$ to points.
On the other hand, the exterior of $\mathcal{C}(\mathcal{S})$ in the parameter space is simply connected, and it admits a natural uniformization via the conformal position of the escaping critical value (see [@LLMM3 §7] and Figure [\[cheby_conn_locus_fig\]](#cheby_conn_locus_fig){reference-type="ref" reference="cheby_conn_locus_fig"}).
### Antiholomorphic analog of Bullett-Penrose correspondences {#cheby_corr_gen_mating_subsubsec}
The construction of the $2$:$2$ antiholomorphic correspondence carried out for the map $\sigma_a$ with $a=3$ in Subsection [4.3.3](#chebyshev_center_corr_subsubsec){reference-type="ref" reference="chebyshev_center_corr_subsubsec"} generalizes verbatim to all maps $\sigma_a\in\mathcal{C}(\mathcal{S})$, with the reflection map $\widehat{\eta}$ in Equation [\[cheby_center_corr_eqn\]](#cheby_center_corr_eqn){reference-type="eqref" reference="cheby_center_corr_eqn"} replaced with the anti-Möbius reflection $\eta_a$ in the circle $\partial \Delta_a$. Furthermore, the proof of Theorem [\[chebyshev_center_corr_thm\]](#chebyshev_center_corr_thm){reference-type="ref" reference="chebyshev_center_corr_thm"} also holds for these correspondences $\mathfrak{C}_a$ and shows that each $\mathfrak{C}_a$ is a mating of the group $\mathbbm{G}_2\cong\mathbb{Z}/2\mathbb{Z}\ast\mathbb{Z}/3\mathbb{Z}$ and the map $\chi(\sigma_a)\in\pmb{\mathcal{B}}_2$. Combining this fact with bijectivity of $\chi$, one concludes the following result.
**Theorem 54**. *[@LLMM3] [@LMM3][\[cheby_corr_gen_mating_thm\]]{#cheby_corr_gen_mating_thm label="cheby_corr_gen_mating_thm"}*
1. *The straightening map $\chi:\mathcal{C}(\mathcal{S})\to\pmb{\mathcal{B}}_2$ is a bijection.*
2. *Each antiholomorphic correspondence $\mathfrak{C}_a$ is a mating of the group $\mathbbm{G}_2\cong\mathbb{Z}/2\mathbb{Z}\ast\mathbb{Z}/3\mathbb{Z}$ and the map $\chi(\sigma_a)\in\pmb{\mathcal{B}}_2$.*
3. *For each $R\in\pmb{\mathcal{B}}_2$, there exists a unique $\sigma_a\in\mathcal{C}(\mathcal{S})$ such that the associated correspondence $\mathfrak{C}_a$ is a mating of $\mathbbm{G}_2$ and $R$.*
## Matings of $\overline{z}^d$ with necklace reflection groups {#sigma_d_subsec}
We recall from Subsection [5.1](#talbot_subsec){reference-type="ref" reference="talbot_subsec"} that a simply connected unbounded quadrature domain with a unique node at $\infty$ admits a uniformization $f:\mathbb{D}^*\to\Omega$, where $f(z)=z+\frac{a_1}{z}+\cdots+\frac{a_d}{z^d}$, after possibly replacing the quadrature domain by an affine image of it. By [@LMM1 Propositions 2.7, 2.13], the corresponding quadrature domain $\Omega_f:=f(\mathbb{D}^*)$ has precisely $d+1$ cusps on its boundary (or equivalently, $f$ has $d+1$ critical points on $\mathbb{S}^1$) if and only if $a_d=-1/d$. The dynamics of Schwarz reflection maps associated with two specific quadrature domains of this type were explored in Subsections [4.1](#deltoid_subsec){reference-type="ref" reference="deltoid_subsec"} and [5.1](#talbot_subsec){reference-type="ref" reference="talbot_subsec"} (namely, the deltoid and Talbot Schwarz reflections). In this subsection, we will explicate the general situation by describing the dynamics of Schwarz reflections arising from maps in the family $$\Sigma_d^* := \left\{ f(z)= z+\frac{a_1}{z} + \cdots + \frac{a_{d-1}}{z^{d-1}} -\frac{1}{d z^d} : f\vert_{\mathbb{D}^*} \textrm{ is univalent}\right\}.$$ The exposition will mostly follow [@LMM1; @LMM2].
For each $f\in\Sigma_d^*$, we set $\Omega_f:=f(\mathbb{D}^*)$, and denote the Schwarz reflection map of the simply connected quadrature domain $\Omega_f$ by $\sigma_f=f\circ\eta\circ\left(f\vert_{\mathbb{D}^*}\right)^{-1}$. We recall that there are $d+1$ cusps and at most $d-2$ double points on $\partial\Omega_f$ (cf. [@LM1 Lemma 2.4]), and removing these singular points from $T(\sigma_f)=\widehat{\mathbb{C}}\setminus\Omega_f$, we obtain the fundamental tile $T^0(\sigma_f)$. Note also that $\mathop{\mathrm{int}}{T^0(\sigma_f)}$ has at most $d-1$ connected components (see Figure [\[sigma_extremal_fig\]](#sigma_extremal_fig){reference-type="ref" reference="sigma_extremal_fig"}).
### Basin of infinity and non-escaping set for Schwarz reflections arising from $\Sigma_d^*$ {#basin_non_escaping_sigma_d_subsubsec}
By definition, each $\sigma_f$ has a $d$-fold pole at $\infty$; i.e., the point $\infty$ is a superattracting fixed point of $\sigma_f$ with local degree $d$. Since $\deg{f}=d+1$, it has $2d$ critical points in $\widehat{\mathbb{C}}$, of which $d+1$ lie on $\mathbb{S}^1$ and the remaining $d-1$ lie at $0$. Consequently, $f$ has no critical values in $\Omega_f\setminus\{\infty\}$. It follows that $\sigma_f$ has no critical points in $\Omega_f\setminus\{\infty\}$ either. This implies that the basin of attraction $\mathcal{B}_\infty(\sigma_f)$ of the superattracting fixed point $\infty$ is a simply connected, completely invariant domain in $\widehat{\mathbb{C}}$, and $\sigma_f\vert_{\mathcal{B}_\infty(\sigma_f)}$ is conformally conjugate to $\overline{z}^d\vert_{\mathbb{D}}$ [@LMM1 Proposition 3.2] (cf. [@Mil06 §9]). We normalize the conformal conjugacy between $\overline{z}^d\vert_{\mathbb{D}}$ and $\sigma_f\vert_{\mathcal{B}_\infty(\sigma_f)}$ such that its derivative at $\infty$ has argument $\frac{\pi}{d+1}$ (see [@LMM2 Remark 2]), and call this normalized conjugacy the *Böttcher coordinate* of $\sigma_f$.
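To see concretely where the $d$-fold pole at $\infty$ comes from, here is a quick informal expansion (ours, not taken from [@LMM1]), using only the normal form of $f$ and the reflection $\eta(z)=1/\overline{z}$ in the unit circle: since $(f\vert_{\mathbb{D}^*})^{-1}(w)=w+O(1/w)$ as $w\to\infty$ and $f(\zeta)=-\frac{1}{d\zeta^d}+O\big(\zeta^{-(d-1)}\big)$ as $\zeta\to 0$, we get $$\sigma_f(w)\;=\;f\left(\frac{1}{\overline{(f\vert_{\mathbb{D}^*})^{-1}(w)}}\right)\;=\;-\frac{\overline{w}^{\,d}}{d}+O\big(|w|^{d-1}\big)\qquad \textrm{as}\ w\to\infty,$$ which exhibits $\infty$ as a superattracting fixed point of the antiholomorphic map $\sigma_f$ with local degree $d$.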
According to [@LMM2 Corollary 41], one has that $$\widehat{\mathbb{C}}=\mathcal{B}_\infty(\sigma_f)\sqcup\Lambda(\sigma_f)\sqcup T^\infty(\sigma_f),$$ where $\Lambda(\sigma_f)=\partial \mathcal{B}_\infty(\sigma_f)=\partial T^\infty(\sigma_f)$ is the *limit set* of $\sigma_f$. The proof of existence of this invariant partition of the dynamical plane of $\sigma_f$ uses a combination of classical arguments of Fatou adapted for the setting of partially defined maps [@LMM1 Proposition 3.4], local connectivity of the boundary of $\mathcal{B}_\infty(\sigma_f)$ (which essentially follows from expansiveness of $\sigma_f$ near $\partial\mathcal{B}_\infty(\sigma_f)$) [@LMM2 Lemma 32], and visibility of the cusps of $\partial T^0(\sigma_f)$ (which also lie on $\partial T^\infty(\sigma_f)$) from the basin of infinity [@LMM2 Proposition 35]. It follows that the non-escaping set $K(\sigma_f)$ is the closure of $\mathcal{B}_\infty(\sigma_f)$.
### Mating structure for Schwarz reflections arising from $\Sigma_d^*$ {#sigma_d_schwarz_dyn_subsubsec}
The above decomposition of $\widehat{\mathbb{C}}$ and local connectivity of $\Lambda(\sigma_f)$ show that $\overline{T^\infty(\sigma_f)}$ is homeomorphic to the quotient of $\overline{\mathbb{D}}$ under an $m_{-d}$-invariant equivalence relation defined by co-landing of dynamical rays in $\mathcal{B}_\infty(\sigma_f)$ (i.e., images of radial lines under Böttcher coordinates). As in the special case described in Subsection [5.1](#talbot_subsec){reference-type="ref" reference="talbot_subsec"}, this lamination $\lambda(\sigma_f)$ is generated by the angles of the pairs of $2$-periodic rays (under $m_{-d}$) landing at the double points of $\partial\Omega_f$.
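For concreteness, here is the simplest non-trivial instance of this recipe (a direct computation, not taken from [@LMM2]): for $d=3$, the $2$-periodic angles of $m_{-3}:\theta\mapsto -3\theta \pmod 1$ are the non-fixed angles of the form $k/8$, and they form exactly two $2$-cycles, $$\{1/8,\ 5/8\}\qquad \textrm{and}\qquad \{3/8,\ 7/8\}.$$ Since $\partial\Omega_f$ has at most $d-2=1$ double point when $d=3$, the lamination $\lambda(\sigma_f)$ of such a map is either trivial or generated by a single such pair of co-landing $2$-periodic angles.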
On the other hand, according to [@LMM2 Proposition 27], there is a unique marked necklace group $G_f$ (up to Möbius conjugacy) such that $T(\sigma_f)$ is conformally isomorphic to $\overline{\Pi^b(G_f)}$ in a cusp-preserving manner (see Subsection [3.1.7](#necklace_subsubsec){reference-type="ref" reference="necklace_subsubsec"} for the definition of $\Pi^b(G_f)$). The uniqueness of $G_f$ is a consequence of standard rigidity theorems for geometrically finite Kleinian groups (cf. [@Tuk85 Theorem 4.2]). The existence of the necklace group $G_f$ is demonstrated in the following two steps.
i\) The removal of the cusp points from the droplet boundary $\partial T(\sigma_f)$ yields $d+1$ (open) non-singular real-analytic arcs. These arcs may only touch at the double points of $\partial T(\sigma_f)$. One can extend each of these arcs to a Jordan curve in $\overline{\Omega_f}$ such that no further tangency/intersection among the curves is introduced in the process. This results in a *Jordan curve packing*; i.e., a connected finite collection of oriented Jordan curves in the plane with disjoint interiors. We treat the landing point of the $0$-ray of $\sigma_f$ as a marked cusp on $\partial T(\sigma_f)$, and this yields a marking for this Jordan curve packing (i.e., a labeling of the Jordan curves). The Circle Packing Theorem now produces a (marked) circle packing that is homeomorphic to the Jordan curve packing. This circle packing defines a (marked) necklace group $G'$ such that $T(\sigma_f)$ is quasiconformally isomorphic to $\overline{\Pi^b(G')}$ in a cusp-preserving manner.
ii\) Next, one quasiconformally deforms the group $G'$ to obtain the required group $G_f$ ensuring that the (marked) components of $\Pi^b(G_f)$ are conformally equivalent to those of $T^0(\sigma_f)$ in a cusp-preserving way.
Moreover, one can use Proposition [\[group_lamination_prop\]](#group_lamination_prop){reference-type="ref" reference="group_lamination_prop"} to give an explicit topological model for the limit set $\Lambda(G_f)$ as a quotient of $\mathbb{S}^1$ under an $\pmb{\mathcal{N}}_d$-invariant lamination $\lambda(G_f)$. Specifically, this lamination is generated by the angles of the pairs of $2$-periodic rays (under $\pmb{\mathcal{N}}_d$) landing at the double points of $\partial\Pi^b(G_f)$ [@LMM2 §4.4]. Alternatively, one can arrive at the same lamination using the perspective of pinching deformation of necklace groups as explained in [@LLM1 §3.3, §4.3].
The fact that $T(\sigma_f)$ and $\overline{\Pi^b(G_f)}$ have the same topology translates to a combinatorial equivalence between the laminations $\lambda(\sigma_f)$ and $\lambda(G_f)$. More precisely, the topological conjugacy $\pmb{\mathcal{E}}_d$ between $\pmb{\mathcal{N}}_d\vert_{\mathbb{S}^1}$ and $\overline{z}^d\vert_{\mathbb{S}^1}$ carries the lamination $\lambda(G_f)$ to $\lambda(\sigma_f)$ [@LMM2 Proposition 56]. Thus, the limit set dynamics of $\sigma_f$ and $\mathcal{N}_{G_f}$ are topologically conjugate. Finally, the conformal equivalence of $T(\sigma_f)$ and $\overline{\Pi^b(G_f)}$ enables one to extend this topological conjugacy to a topological conjugacy between $\sigma_f: \overline{T^\infty(\sigma_f)}\setminus\mathop{\mathrm{int}}{T^0(\sigma_f)}\to \overline{T^\infty(\sigma_f)}$ and $\mathcal{N}_{G_f}: K(G_f)\setminus\mathop{\mathrm{int}}{\Pi^b(G_f)}\to K(G_f)$ in such a way that the conjugacy is conformal on the interior.
**Theorem 55**. *[@LMM2 Theorem A][\[sigma_d\_mating_thm\]]{#sigma_d_mating_thm label="sigma_d_mating_thm"} For each $f\in\Sigma_d^*$, the Schwarz reflection $\sigma_f$ is a conformal mating of $\overline{z}^d$ and the Nielsen map of a unique marked necklace group $G_f$. More precisely,*
1. *the maps $$\sigma_f:\overline{T^\infty(\sigma_f)}\setminus\mathop{\mathrm{int}}{T^0(\sigma_f)}\to \overline{T^\infty(\sigma_f)}\ \mathrm{and}\ \mathcal{N}_{G_f}: K(G_f)\setminus\mathop{\mathrm{int}}{\Pi^b(G_f)}\to K(G_f)$$ are topologically conjugate in such a way that the conjugacy is conformal on $\mathop{\mathrm{int}}{T^\infty(\sigma_f)}$, and*
2. *$\sigma_f\vert_{K(\sigma_f)}$ is topologically conjugate to $\overline{z}^d\vert_{\overline{\mathbb{D}}}$ such that the conjugacy is conformal on $\mathop{\mathrm{int}}{K(\sigma_f)}$. Moreover, the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_d$ maps the lamination associated with the limit set of $G_f$ to the lamination associated with the limit set of $\sigma_f$.*
### Homeomorphism between $\Sigma_d^*$ and the Bers slice closure $\overline{\beta(\pmb{G}_{d})}$ {#sigma_d_bers_homeo_subsubsec}
The map $f\mapsto G_f$ connects two disparate objects: a space of univalent maps and a space of Kleinian reflection groups. Interestingly, this map turns out to be a homeomorphism between $\Sigma_d^*$ and the Bers slice closure of $\pmb{G}_{d}$ (see [Definition 9](#deform_space_def){reference-type="ref" reference="deform_space_def"}).
We first sketch a proof of surjectivity, whose main idea is similar to the one described in Subsection [5.1.2](#talbot_pinching_subsubsec){reference-type="ref" reference="talbot_pinching_subsubsec"}. One starts with the Schwarz reflection associated with the Jordan quadrature domain $\Omega_0:=f_0(\mathbb{D}^*)$, where $f_0(z)=z-\frac{1}{dz^d}$, and quasiconformally deforms the droplet to introduce additional tangencies on the quadrature domain boundary in the limit. We remark that the quadrature domain $\Omega_0$ is the exterior of a classical *hypocycloid* (or a *($d+1$)-deltoid*) curve. For any given necklace group $G$, this pinching procedure can be used to create a map $f\in\Sigma_d^*$ such that the associated droplet $T(\sigma_f)$ has the topology of $\overline{\Pi^b(G)}$ (cf. [@LMM1 Theorem 4.14], [@LMM2 Proposition 29]).
Alternatively, one can construct such an $f\in\Sigma_d^*$ using David surgery. To accomplish this, one first invokes the bijection between necklace reflection groups and critically fixed anti-polynomials (which will be discussed in Subsection [8.1.3](#dyn_conseq_bijection_subsubsec){reference-type="ref" reference="dyn_conseq_bijection_subsubsec"}) to obtain a critically fixed anti-polynomial $p$ whose Julia set dynamics is topologically conjugate to $\mathcal{N}_G\vert_{\Lambda(G)}$, where $G$ is a given necklace group. Subsequently, one glues in Nielsen maps of appropriate ideal polygon reflection groups in the bounded fixed Fatou components of $p$, where the choices of the ideal polygons are dictated by the components of $\Pi^b(G)$. This produces a Schwarz reflection map in a simply connected quadrature domain, which is easily seen to be uniformized by some $f\in\Sigma_d^*$. An added advantage of this approach is that the limit set of the Schwarz reflection thus produced is conformally removable: indeed, the limit set of such a Schwarz reflection is the image of a $W^{1,1}$-removable compact set (namely, the connected Julia set of a hyperbolic polynomial) under a global David homeomorphism (see [@LMMN Theorem 2.7]). We refer the reader to [@LMMN §12] for details of this construction.
Recall that all Schwarz reflections arising from $\Sigma_d^*$ have conformally equivalent dynamics (conjugate to $\overline{z}^d$) on their basin of attraction of $\infty$. Additionally, if two of them have conformally isomorphic droplets, then their dynamics on the respective tiling set closures are also conformally equivalent. Injectivity of the map $f\mapsto G_f$ now follows from conformal removability of the limit sets of Schwarz reflections constructed above (cf. [@LMMN Theorem 12.8]). An alternative argument for injectivity (that uses the pullback argument) can be found in [@LMM1 Theorem 5.1].
Finally, continuity of the map $f\mapsto G_f$ is deduced from a continuity property of moduli of quadrilaterals [@LMM2 Theorems 25, 31].
**Theorem 56**. *[@LMM2 Theorems A, C][\[sigma_d\_bers_homeo_thm\]]{#sigma_d_bers_homeo_thm label="sigma_d_bers_homeo_thm"}*
1. *The map $f\mapsto G_f$ of Subsection [6.3.2](#sigma_d_schwarz_dyn_subsubsec){reference-type="ref" reference="sigma_d_schwarz_dyn_subsubsec"} is a homeomorphism between $\Sigma_d^*$ and the Bers slice closure $\overline{\beta(\pmb{G}_{d})}$.*
2. *For each $f\in\Sigma_d^*$, there exists a necklace group $G_f$ and a critically fixed anti-polynomial $p_f$ such that $\sigma_f\vert_{\Lambda(\sigma_f)}$, $\mathcal{N}_{G_f}\vert_{\Lambda(G_f)}$, and $p_f\vert_{\mathcal{J}(p_f)}$ are topologically conjugate.*
### Cell complex structure of $\Sigma_d^*$ {#sigma_d_cell_structure_subsec}
The space $\Sigma_d^*$ of univalent rational maps admits a natural cell complex structure. For $0\leq k\leq d-2$, let us denote by $\Sigma_{d,k}^*$ the collection of maps $f\in\Sigma_d^*$ such that the boundary $\partial T(\sigma_f)$ has exactly $k$ double points. Then, the components of $\Sigma_{d,k}^*$ are the $(d-2-k)$-dimensional cells of $\Sigma_d^*$, and each such cell is homeomorphic to a quasiconformal deformation space of Schwarz reflections. There is a unique cell of dimension $(d-2)$ in $\Sigma_d^*$, which contains the map $f_0(z)=z-\frac{1}{dz^d}$. We refer to the $0$-dimensional cells as *vertices* of $\Sigma_d^*$. A vertex $f$ of $\Sigma_d^*$ is also called a *Suffridge map* (cf. [@LM1; @Suf72]); the corresponding desingularized droplet comprises $(d-1)$ triangles.
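To illustrate the cell structure, consider $d=4$ (a direct unwinding of the definitions, included for concreteness): $\Sigma_4^*$ has a unique $2$-cell, consisting of maps whose droplet boundary has no double points and containing $f_0(z)=z-\frac{1}{4z^4}$; $1$-cells, along which the droplet boundary acquires one double point; and vertices (Suffridge maps) with two double points. For such a vertex, $\partial T(\sigma_f)$ carries $5$ cusps and $2$ double points, and the desingularized droplet consists of $3$ triangles, in accordance with the corner count $$3(d-1)\;=\;(d+1)+2(d-2),$$ where each cusp is a corner of one triangle and each double point accounts for two corners.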
The Bers slice closure $\overline{\beta(\pmb{G}_{d})}$ also has a similar cell complex structure, where the cells of various dimensions are defined by the number (of conjugacy classes) of accidental parabolics of the necklace groups (see Proposition [\[cell_structure_group_prop\]](#cell_structure_group_prop){reference-type="ref" reference="cell_structure_group_prop"}). By construction, the homeomorphism of Theorem [\[sigma_d\_bers_homeo_thm\]](#sigma_d_bers_homeo_thm){reference-type="ref" reference="sigma_d_bers_homeo_thm"} respects these cell complex structures.
# David surgery {#david_surgery_sec}
The fact that quasisymmetric circle homeomorphisms admit quasiconformal extensions to $\mathbb{D}$ plays an important role in the theory of Kleinian groups as well as in rational dynamics: it underlies the Bers Simultaneous Uniformization Theorem, the construction of quasi-Blaschke products, and the Douady-Ghys surgery for constructing maps having Siegel disks with controlled geometry. However, as we saw in various examples described in the previous sections (see Subsections [4.1.5](#deltoid_david_subsubsec){reference-type="ref" reference="deltoid_david_subsubsec"}, [5.1.4](#talbot_david_subsubsec){reference-type="ref" reference="talbot_david_subsubsec"}, [5.2.4](#rat_map_apollo_group_subsubsec){reference-type="ref" reference="rat_map_apollo_group_subsubsec"}, [6.1.5](#c_and_c_conn_david_surjery_subsubsec){reference-type="ref" reference="c_and_c_conn_david_surjery_subsubsec"}, [6.3.3](#sigma_d_bers_homeo_subsubsec){reference-type="ref" reference="sigma_d_bers_homeo_subsubsec"}), there are a number of mating/surgery frameworks where one needs similar extension theorems for naturally arising non-quasisymmetric circle homeomorphisms (such as the Minkowski circle homeomorphisms and circle homeomorphisms conjugating expanding Blaschke products to parabolic ones). In this section, we will formulate such an extension theorem for topological conjugacies between piecewise real-analytic, expansive covering maps of $\mathbb{S}^1$ satisfying certain regularity conditions, where the conjugacy is required to carry parabolic points to parabolic points but is allowed to send hyperbolic points to parabolic ones as well. While the most general version of this result, proved in [@LMMN], does not require the circle coverings to be $C^1$, we will only state a special case assuming $C^1$-regularity, as this suffices for all the applications discussed in this article.
## A David extension theorem for circle homeomorphisms {#david_ext_subsec}
**Definition 57**. 1) A continuous map $f\colon \mathbb{S}^1\to \mathbb{S}^1$ is called *expansive* if there exists a constant $\delta>0$ such that for any $a,b\in \mathbb{S}^1$ with $a\neq b$ we have $|f^{\circ n}(a)-f^{\circ n}(b)|>\delta$ for some $n\in \mathbb{N}$.
2\) We say that a periodic point $a$ of an expansive $C^1$ map $f\colon \mathbb{S}^1\to \mathbb{S}^1$ is *parabolic* (respectively, *hyperbolic*) if the derivative of the first orientation-preserving return map of $f$ to $a$ has absolute value equal to (respectively, larger than) $1$.
Let $f\colon \mathbb{S}^1\to \mathbb{S}^1$ be a $C^1$, expansive, covering map of degree $d\geq 2$ admitting a Markov partition $\mathcal P(f;\{a_0,\dots,a_r\})$. We define $A_k$ to be the closed arc of $\mathbb{S}^1$ from $a_k$ to $a_{k+1}$ for $k\in \{0,\dots,r\}$ (with the convention $a_{r+1}=a_0$), and recall that $f_k:=f|_{\mathop{\mathrm{int}}{A_k}}$ is injective by the definition of a Markov partition. We assume that $f_k$ is analytic and that there exist open neighborhoods $U_k$ of $\mathop{\mathrm{int}}{A_k}$ and $V_k$ of $f_k(\mathop{\mathrm{int}}{A_k})$ in the plane such that $f_k$ has a conformal extension from $U_k$ onto $V_k$. We still denote the extension by $f_k$. We impose the condition that $$\begin{aligned}
\label{condition:uv}
V_k=f_k(U_k)\supset U_j \quad \textrm{whenever}\quad f_k(A_k)\supset A_j,\end{aligned}$$ for $j,k\in \{0,\dots,r\}$. We also require that $$\begin{aligned}
\label{condition:holomorphic}
\textrm{$f_k$ extends holomorphically to neighborhoods of $a_k$ and $a_{k+1}$}\end{aligned}$$ for each $k\in \{0,\dots,r\}$.
Now suppose that $a\in \{a_0,\dots,a_r\}$ is a parabolic periodic point of $f$. By condition [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"}, the left and right branches of the first orientation-preserving return map of $f$ to $a$ have holomorphic extensions valid in complex neighborhoods of $a$. These extensions define a pair of parabolic germs fixing $a$. We say that $a$ is *symmetrically parabolic* if these two parabolic germs have the same parabolic multiplicity (see [@Mil06 §10] for background on parabolic germs).
*Remark 58*. If $f$ is *orientation-reversing* and $a\in \{a_0,\dots,a_r\}$ is a parabolic fixed point, then it is automatically symmetrically parabolic. This is because the map $f$ itself defines a topological conjugacy between the corresponding left and right branches of the first orientation-preserving return maps (see [@LMMN Remark 4.7] for a similar assertion in a more general setup).
With these preliminary definitions at our disposal, let us now turn to the main extension theorem of this section.
**Theorem 59**. *[@LMMN Theorem 4.9 (special case)][\[david_extension_general_thm\]]{#david_extension_general_thm label="david_extension_general_thm"} Let $f,g\colon \mathbb{S}^1\to \mathbb{S}^1$ be $C^1$, expansive, covering maps with the same degree and orientation, and $\mathcal P(f;\{a_0,\dots,a_r\})$, $\mathcal P(g;\{b_0,\dots,b_r\})$ be Markov partitions satisfying conditions [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"} and [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"}. Suppose that the map $h\colon \{a_0,\dots,a_r\} \to \{b_0,\dots,b_r\}$ defined by $h(a_k)=b_k$, $k\in \{0,\dots,r\}$, conjugates $f$ to $g$ on the set $\{a_0,\dots,a_r\}$ and assume that for each periodic point $a\in \{a_0,\dots,a_r\}$ of $f$ and for $b=h(a)$ one of the following alternatives occur.*
1. *[\[HH\]]{#HH label="HH"} $\mathbf{(H\to H)}$: Both $a,b$ are hyperbolic.*
2. *[\[PP\]]{#PP label="PP"} $\mathbf{(P\to P)}$: Both $a,b$ are symmetrically parabolic.*
3. *[\[HP\]]{#HP label="HP"} $\mathbf{(H\to P)}$: $a$ is hyperbolic and $b$ is symmetrically parabolic.*
*Then the map $h$ extends to a homeomorphism $\widetilde h$ of $\overline{\mathbb{D}}$ such that $\widetilde h|_{\mathbb S^1}$ conjugates $f$ to $g$ and $\widetilde h|_{\mathbb{D}}$ is a David homeomorphism. Moreover, if the alternative $\mathbf{(H\to P)}$ does not occur, then $\widetilde h|_{\mathbb{D}}$ is a quasiconformal map and $\widetilde h|_{\mathbb S^1}$ is a quasisymmetry.*
The facts that
- the map $f(z)=z^d$ or $\overline{z}^d$ satisfies conditions [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"} and [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"} for every Markov partition of $\mathbb{S}^1$, and
- each periodic point of $f(z)=z^d$ or $\overline{z}^d$ is hyperbolic
immediately yield the following corollary, which is often useful in practice.
**Corollary 60**. *[@LMMN Theorem 4.12][\[power_map_cor\]]{#power_map_cor label="power_map_cor"} Let $g\colon \mathbb{S}^1\to \mathbb{S}^1$ be a $C^1$, expansive, covering map of degree $d\geq 2$ and let $\mathcal P(g;\{b_0,\dots,b_r\})$ be a Markov partition satisfying conditions [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"} and [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"}, and with the property that $b_k$ is either hyperbolic or symmetrically parabolic for each $k\in \{0,\dots,r\}$. Then there exists an orientation-preserving homeomorphism $h\colon \mathbb{S}^1\to \mathbb{S}^1$ that conjugates the map $z\mapsto z^d$ or $z\mapsto \overline{z}^d$ to $g$ and has a David extension in $\mathbb{D}$.*
The proof of Theorem [\[david_extension_general_thm\]](#david_extension_general_thm){reference-type="ref" reference="david_extension_general_thm"} involves careful distortion estimates for the circle homeomorphisms in question using local normal forms for hyperbolic and parabolic periodic points. Specifically, one verifies that the scalewise distortion function of the topological conjugacy $h$ grows at most as $\log(1/t)$ as the scale $t$ goes to $0$ and then applies the David extension criterion of Chen--Chen--He and Zakeri (see Theorem [\[david_extension_criterion_thm\]](#david_extension_criterion_thm){reference-type="ref" reference="david_extension_criterion_thm"} and the preceding discussion).
We now illustrate the above extension theorems with two important examples.
**Example 61** (Minkowski circle homeomorphism.). [\[example_1\]]{#example_1 label="example_1"} Recall that the inverse $\pmb{\mathcal{E}}_d^{-1}$ of the $d$-th Minkowski circle homeomorphism conjugates $\overline{z}^d\vert_{\mathbb{S}^1}$ to the Nielsen map $\pmb{\mathcal{N}}_d\vert_{\mathbb{S}^1}$. It is trivial to check from the explicit formula of $\pmb{\mathcal{N}}_d$ that the Markov partition of $\pmb{\mathcal{N}}_d\vert_{\mathbb{S}^1}$ defined by the $(d+1)$-st roots of unity satisfies conditions [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"}, [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"} as well as all the conditions of Corollary [\[power_map_cor\]](#power_map_cor){reference-type="ref" reference="power_map_cor"} (see [@LMMN Example 4.3] for details). Hence, $\pmb{\mathcal{E}}_d^{-1}$ admits a David extension to $\mathbb{D}$.
More generally, since any degree $d$ anti-Blaschke product with an attracting fixed point in $\mathbb{D}$ is quasisymmetrically conjugate to $\overline{z}^d$ on $\mathbb{S}^1$ and the Nielsen map of any polygonal reflection group generated by $d+1$ reflections is quasisymmetrically conjugate to $\pmb{\mathcal{N}}_d$ on $\mathbb{S}^1$, one concludes (for instance, using the Ahlfors-Beurling Extension Theorem) that a circle homeomorphism topologically conjugating a degree $d$ anti-Blaschke product with an attracting fixed point in $\mathbb{D}$ to the Nielsen map of a polygonal reflection group generated by $d+1$ reflections admits a David extension to $\mathbb{D}$ (see [@LMMN Theorem 4.13] for details).
**Example 62** (Circle homeomorphisms conjugating expanding Blaschke products to parabolic ones.). [\[example_2\]]{#example_2 label="example_2"} The Blaschke product $B_d(z)=\frac{(d+1)z^d+(d-1)}{(d-1)z^d+(d+1)}$ has a parabolic fixed point at $1$ and is an expansive covering of degree $d$. The Markov partition $\mathcal P(B_d;\{b_0, \dots, b_{2d-1}\})$, where $b_0, \dots, b_{2d-1}$ are $d$-th roots of $1$ and $-1$ with $b_0=1$, satisfies both conditions [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"} and [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"}. Here, in condition [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"}, the set $U_k$, $k\in \{0,1,\dots 2d-1\}$, is an open sector with vertex 0 and angle $\pi/d$, whose boundary contains the points $b_k$ and $b_{k+1}$, indices taken modulo $2d$; the sets $V_k$ are the upper half-plane for even $k$ and the lower half-plane for odd $k$. Moreover, as $B_d$ is a global holomorphic map of $\widehat{\mathbb{C}}$, each $b_k$ is either hyperbolic or symmetrically parabolic. Hence, Corollary [\[power_map_cor\]](#power_map_cor){reference-type="ref" reference="power_map_cor"} guarantees the existence of an orientation-preserving homeomorphism $h\colon \mathbb{S}^1\to \mathbb{S}^1$ that conjugates the map $z\mapsto z^d$ to $B_d$ and has a David extension in $\mathbb{D}$.
More generally, since any degree $d$ Blaschke product with an attracting fixed point in $\mathbb{D}$ is quasisymmetrically conjugate to $z^d$ on $\mathbb{S}^1$ and any degree $d$ Blaschke product with a double parabolic fixed point on $\mathbb{S}^1$ is quasisymmetrically conjugate to $B_d$ on $\mathbb{S}^1$ (cf. [@McM98 Theorem 6.1, Proposition 6.8]), one concludes that a circle homeomorphism topologically conjugating a degree $d$ Blaschke product with an attracting fixed point in $\mathbb{D}$ to a degree $d$ Blaschke product with a double parabolic fixed point on $\mathbb{S}^1$ admits a David extension to $\mathbb{D}$.
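As a quick sanity check of the parabolic fixed point of $B_d$ at $1$ (our own verification, not taken from [@LMMN]; it assumes the `sympy` package), one can confirm symbolically that $B_d(1)=1$ and $B_d'(1)=1$ for small degrees:

```python
import sympy as sp

z = sp.symbols('z')

for d in range(2, 6):
    # B_d(z) = ((d+1) z^d + (d-1)) / ((d-1) z^d + (d+1)), as in Example 62
    B = ((d + 1) * z**d + (d - 1)) / ((d - 1) * z**d + (d + 1))
    # fixed point at z = 1 ...
    assert sp.simplify(B.subs(z, 1)) == 1
    # ... with multiplier B_d'(1) = 1, i.e. the fixed point is parabolic
    assert sp.simplify(sp.diff(B, z).subs(z, 1)) == 1
```

Since the multiplier has absolute value exactly $1$, the fixed point at $1$ is parabolic in the sense of the definition above.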
## David surgery to pass from hyperbolic to parabolic dynamics {#david_surgery_hyp_para_subsec}
The David Extension Theorem [\[david_extension_general_thm\]](#david_extension_general_thm){reference-type="ref" reference="david_extension_general_thm"}, combined with the David Integrability Theorem [\[david_integrability_thm\]](#david_integrability_thm){reference-type="ref" reference="david_integrability_thm"}, gives a unified approach for turning hyperbolic anti-rational maps into parabolic anti-rational maps, Kleinian reflection groups, and Schwarz reflection maps that are matings of anti-polynomials and Nielsen maps of kissing reflection groups. This is done by replacing the attracting dynamics of an anti-rational map on suitable invariant Fatou components with Nielsen maps of ideal polygon groups or parabolic anti-Blaschke products.
Recall that a rational map is called *subhyperbolic* if every critical orbit is either finite or converges to an attracting periodic orbit.
**Lemma 63**. *[@LMMN Lemma 7.1][\[david_surgery_lemma\]]{#david_surgery_lemma label="david_surgery_lemma"} Let $R$ be a subhyperbolic anti-rational map with connected Julia set, and for $i\in\{1,\cdots, k\}$, let $U_i$ be an invariant Fatou component of $R$ such that $R\vert_{\partial U_i}$ has degree $d_i$. Then, there is a global David surgery that replaces the dynamics of $R$ on each $U_i$ by the dynamics of $\pmb{\mathcal{N}}_{d_i}\vert_{\mathbb{D}}$ (respectively, $\overline{B_{d_i}(z)}\vert_{\mathbb{D}}$), transferred to $U_i$ via a Riemann map. More precisely, there exists a global David homeomorphism $\Psi$, and an antiholomorphic map $F$, defined on a subset of $\widehat{\mathbb{C}}$, such that $F\vert_{\Psi(U_i)}$ is conformally conjugate to $\pmb{\mathcal{N}}_{d_i}\vert_{\mathbb{D}}$ (respectively, $\overline{B_{d_i}(z)}\vert_{\mathbb{D}}$), and $F$ is conformally conjugate to $R$ outside the grand orbit of $\cup_{i=1}^k \Psi(U_i)$.*
The proof of this result can be split into the following main steps.
$\bullet$ As $R$ is subhyperbolic, $R\vert_{U_i}$ is conformally conjugate to the action of an anti-Blaschke product with an attracting fixed point in $\mathbb{D}$, and hence $R$ induces the action of an expanding anti-Blaschke product on the ideal boundary $\mathbb{S}^1$ of $U_i$. As a topological conjugacy $h_i$ between this anti-Blaschke product and $\pmb{\mathcal{N}}_{d_i}\vert_{\mathbb{S}^1}$ (or $\overline{B_{d_i}}\vert_{\mathbb{S}^1}$) admits a David extension to $\mathbb{D}$ (also denoted by $h_i$), one can glue the dynamics of $\pmb{\mathcal{N}}_{d_i}\vert_{\mathbb{D}}$ (or $\overline{B_{d_i}}\vert_{\mathbb{D}}$) in $U_i$ via the composition of $h_i$ with a Riemann uniformization $\varphi_i:U_i\to\mathbb{D}$. Recall that the existence of such David extensions follows from the discussion in Examples [\[example_1\]](#example_1){reference-type="ref" reference="example_1"} and [\[example_2\]](#example_2){reference-type="ref" reference="example_2"}. This produces an orientation-reversing map $\widetilde{R}$ on a subset of $\widehat{\mathbb{C}}$ with the desired topological dynamics.
$\bullet$ To straighten the above orientation-reversing map into an antiholomorphic map, one needs to appeal to the David Integrability Theorem, which necessitates the construction of an $\widetilde{R}$-invariant David coefficient on $\widehat{\mathbb{C}}$. This is done by first pulling back the standard complex structure on $\mathbb{D}$ under $h_i\circ\varphi_i:U_i\to\mathbb{D}$, and then spreading it throughout the grand orbit of $U_i$ by iterates of $R$. Outside the grand orbits of the various $U_i$, one uses the standard complex structure. That this Beltrami coefficient $\mu$ satisfies the David condition on each $U_i$ follows from the John property of the Fatou components $U_i$, which is a consequence of subhyperbolicity of $R$ and connectedness of $\mathcal{J}(R)$ (see [@Mih11] and Appendix [13.2](#integrable_thms_subsec){reference-type="ref" reference="integrable_thms_subsec"}). To show that $\mu$ satisfies the David condition on all of $\widehat{\mathbb{C}}$, one again appeals to subhyperbolicity of $R$. Specifically, subhyperbolicity of $R$ implies that there is a neighborhood of the closures of all but finitely many Fatou components that is disjoint from the postcritical set of $R$ and that the Fatou components of $R$ are uniform John domains [@Mih11]. These facts allow one to employ the Koebe Distortion Theorem to control the area distortion under the inverse branches of $R$, which yields the global David property of $\mu$.
$\bullet$ By the David Integrability Theorem, there exists a global David homeomorphism $\Psi$ that solves the Beltrami equation with coefficient $\mu$. The final step of the proof is to demonstrate that $\Psi\circ\widetilde{R}\circ\Psi^{-1}$ is antiholomorphic. This follows essentially from the local uniqueness of David homeomorphisms integrating a David coefficient (see Theorem [\[stoilow_thm\]](#stoilow_thm){reference-type="ref" reference="stoilow_thm"}) and $W^{1,1}$-removability of $\cup_{i=1}^k \partial U_i$.
In the next two sections, we will present various applications of Lemma [\[david_surgery_lemma\]](#david_surgery_lemma){reference-type="ref" reference="david_surgery_lemma"} which convert hyperbolic anti-rational maps to kissing reflection groups and Schwarz reflection maps exhibiting hybrid dynamics.
# Kissing reflection groups vs critically fixed anti-rational maps {#new_line_dict_sec}
In Subsections [5.2](#apollo_group_map_schwarz_subsec){reference-type="ref" reference="apollo_group_map_schwarz_subsec"} and [6.3](#sigma_d_subsec){reference-type="ref" reference="sigma_d_subsec"}, we discussed the existence of dynamically meaningful homeomorphisms between limit sets of certain kissing reflection groups and Julia sets of certain critically fixed anti-rational maps (see [Theorem 46](#apollo_schwarz_group_anti_rat_thm){reference-type="ref" reference="apollo_schwarz_group_anti_rat_thm"} for the Apollonian gasket reflection group and Theorem [\[sigma_d\_bers_homeo_thm\]](#sigma_d_bers_homeo_thm){reference-type="ref" reference="sigma_d_bers_homeo_thm"} for necklace reflection groups). These examples were generalized in [@LLM1], where a precise dynamical relation between kissing reflection groups and critically fixed anti-rational maps was established. Subsequently, parameter space ramifications of this relation were studied in [@LLM2]. It transpired that there are striking similarities between the global topological properties of the deformation spaces of kissing reflection groups and critically fixed anti-rational maps. The goal of the current section is to expound these general results.
## A bijection between the two classes of conformal dynamical systems {#new_line_dict_dyn_subsec}
### Construction of the bijection {#group_map_bijection_subsubsec}
Recall from Subsection [3.1.2](#kissing_group_subsubsec){reference-type="ref" reference="kissing_group_subsubsec"} that each connected simple plane graph $\Gamma$ gives rise to a kissing reflection group, whose limit set is connected if and only if $\Gamma$ is $2$-connected. It is a straightforward consequence of rigidity of geometrically finite Kleinian groups that if the circle packings $\mathcal{P}_1, \mathcal{P}_2$ defining two kissing reflection groups $G_{\mathcal{P}_1}, G_{\mathcal{P}_2}$ have isomorphic contact graphs (as plane graphs), then the groups $G_{\mathcal{P}_1}, G_{\mathcal{P}_2}$ are quasiconformally conjugate. Such a quasiconformal conjugacy only deforms the conformal classes of the polygons in the fundamental domains $\Pi(G_{\mathcal{P}_1}), \Pi(G_{\mathcal{P}_2})$ (see Subsection [3.1.3](#fund_dom_subsubsec){reference-type="ref" reference="fund_dom_subsubsec"}). Thus, the space of planar isomorphism classes of $2$-connected, simple, plane graphs with $d+1$ vertices is in bijective correspondence with the space of quasiconformal conjugacy classes of kissing reflection groups of rank $d+1$ with connected limit set.
It was observed in [@LLM1] that the surgery construction of Subsection [5.2.1](#apollo_group_rat_map_subsubsec){reference-type="ref" reference="apollo_group_rat_map_subsubsec"}, which turns the Apollonian gasket reflection group into a critically fixed cubic anti-rational map, can be applied to general kissing reflection groups of rank $d+1$, producing critically fixed anti-rational maps of degree $d$. Roughly speaking, this surgery procedure modifies the Nielsen map of a kissing reflection group on appropriate components of the domain of discontinuity by gluing in the dynamics of suitable power maps. Specifically, given a kissing reflection group $G_{\mathcal{P}}$ arising from a circle packing $\mathcal{P}$, one looks at the so-called *principal components* of $\Omega(G_{\mathcal{P}})$; i.e., the components $\mathcal{U}_1,\cdots,\mathcal{U}_k$ of $\Omega(G_{\mathcal{P}})$ that intersect the fundamental domain $\Pi(G_{\mathcal{P}})$ non-trivially. If the components of $\Pi(G_{\mathcal{P}})$ are $(r_i+1)$-gons (where $i\in\{1,\cdots,k\}$), then the restriction of the Nielsen map $\mathcal{N}_{G_{\mathcal{P}}}$ to the principal component $\mathcal{U}_i$ is quasiconformally conjugate to the Nielsen map $\pmb{\mathcal{N}}_{r_i}$ of the regular $(r_i+1)$-gon reflection group. The Minkowski circle homeomorphism $\pmb{\mathcal{E}}_{r_i}$, which conjugates $\pmb{\mathcal{N}}_{r_i}\vert_{\mathbb{S}^1}$ to $\overline{z}^{r_i}\vert_{\mathbb{S}^1}$, allows one to construct a degree $d$ orientation-reversing branched cover of $\mathbb{S}^2$ by replacing the action of $\mathcal{N}_{G_{\mathcal{P}}}$ on $\mathcal{U}_i$ with the power map $\overline{z}^{r_i}\vert_{\mathbb{D}}$ (see Subsection [5.2.1](#apollo_group_rat_map_subsubsec){reference-type="ref" reference="apollo_group_rat_map_subsubsec"}). Note that all the critical points of this branched covering come from the introduction of power maps and hence they are all fixed. To justify the absence of Thurston obstructions for this critically fixed branched cover, one can use a result of Pilgrim and Tan [@PT] that reduces, under these circumstances, the collection of possible Thurston obstructions to a small checkable list. Moreover, the expansiveness of this branched cover on $\Lambda(G_{\mathcal{P}})$ (which comes from expansivity of circular reflections) implies that it is topologically conjugate to some critically fixed anti-rational map $R_\Gamma$, where $\Gamma$ is the contact graph of the circle packing $\mathcal{P}$. (A more explicit relation between $\Gamma$ and $R_\Gamma$ will be articulated below.) In particular, the Nielsen map $\mathcal{N}_{G_{\mathcal{P}}}\vert_{\Lambda(G_{\mathcal{P}})}$ is topologically conjugate to $R_\Gamma\vert_{\mathcal{J}(R_\Gamma)}$.
![Top: a simple plane $2$-connected graph $\Gamma$ with $5$ vertices. Bottom left: the corresponding circle packing $\mathcal{P}$ and the limit set of the associated kissing reflection group $G_{\mathcal{P}}$ of rank $5$. Bottom right: The Julia set of the corresponding degree $4$ critically fixed anti-rational map $R_\Gamma$ with its Tischler graph $\mathscr{T}(R_\Gamma)$ drawn in red.](DTWEG.png "fig:"){#non_gasket_fig width="0.4\\linewidth"}\
![Top: a simple plane $2$-connected graph $\Gamma$ with $5$ vertices. Bottom left: the corresponding circle packing $\mathcal{P}$ and the limit set of the associated kissing reflection group $G_{\mathcal{P}}$ of rank $5$. Bottom right: The Julia set of the corresponding degree $4$ critically fixed anti-rational map $R_\Gamma$ with its Tischler graph $\mathscr{T}(R_\Gamma)$ drawn in red.](DTWEK.png "fig:"){#non_gasket_fig width="0.53\\linewidth"} ![Top: a simple plane $2$-connected graph $\Gamma$ with $5$ vertices. Bottom left: the corresponding circle packing $\mathcal{P}$ and the limit set of the associated kissing reflection group $G_{\mathcal{P}}$ of rank $5$. Bottom right: The Julia set of the corresponding degree $4$ critically fixed anti-rational map $R_\Gamma$ with its Tischler graph $\mathscr{T}(R_\Gamma)$ drawn in red.](DTWEC.png "fig:"){#non_gasket_fig width="0.46\\linewidth"}
To see that the critically fixed anti-rational map $R_\Gamma$ is unique (up to Möbius conjugacy), one needs to look at a combinatorial invariant called *Tischler graph*. Roughly speaking, Tischler graphs are to critically fixed anti-rational maps what Hubbard trees are to polynomials. More precisely, the Tischler graph $\mathscr{T}(R)$ of a critically fixed anti-rational map $R$ is defined as the union (of the closures) of all invariant rays in various fixed Fatou components (recall that the restriction of $R$ to a fixed Fatou component is conformally conjugate to $\overline{z}^r\vert_{\mathbb{D}}$, for some $r\geq 2$). Thus, it is a forward invariant graph containing all the critical points of $R$. It is easily checked using Thurston rigidity that a critically fixed anti-rational map is completely determined by its Tischler graph. It turns out from the construction of $R_\Gamma$ that its Tischler graph $\mathscr{T}(R_\Gamma)$ is the planar dual of $\Gamma$ (see Figure [33](#non_gasket_fig){reference-type="ref" reference="non_gasket_fig"}), and hence $R_\Gamma$ is unique (up to Möbius conjugacy). Thus, the surgery procedure described above yields an injective map from the space of quasiconformal conjugacy classes of kissing reflection groups of rank $d+1$ with connected limit set to the space of Möbius conjugacy classes of critically fixed anti-rational maps of degree $d$.
The fact that this map is a surjection follows from combinatorial properties of the Tischler graphs of critically fixed anti-rational maps. Indeed, given a critically fixed anti-rational map $R$, the planar dual $\Gamma$ of its Tischler graph is simple and $2$-connected according to [@LLM1 Lemma 4.9]. Applying the above construction to a kissing reflection group $G_{\mathcal{P}}$ whose contact graph is $\Gamma$ then produces a critically fixed anti-rational map whose Tischler graph is isomorphic (as a plane graph) to that of $R$, and hence, by the Thurston rigidity statement above, this map is Möbius conjugate to $R$.
Let us list the main ingredients of the proof of the above properties of Tischler graphs:
1. landing patterns of the invariant rays at repelling fixed points (due to orientation-reversal, at most two invariant rays can land at the same point),
2. each face of a Tischler graph is a Jordan domain which is mapped to its complement as an orientation-reversing homeomorphism (which is a consequence of the fact that the edges of the Tischler graphs are invariant under the maps and the faces do not contain critical points), and
3. the boundaries of two faces of a Tischler graph meet at most along one edge (which follows from the absence of Levy cycles for rational maps).
We summarize the upshot of the above discussion in the following theorem.
**Theorem 64**. *[@LLM1 Theorem 1.1, Proposition 4.10][\[new_line_dict_thm\]]{#new_line_dict_thm label="new_line_dict_thm"} The following three sets are in natural bijective correspondence:*
- *$\{2$-connected, simple, plane graphs $\Gamma$ with $d+1$ vertices up to isomorphism of plane graphs$\}$,*
- *$\{$Kissing reflection groups $G$ of rank $d+1$ with connected limit set up to QC conjugacy$\}$,*
- *$\{$Critically fixed anti-rational maps $R$ of degree $d$ up to Möbius conjugacy$\}$.*
*Moreover,*
1. *the Tischler graph of $R_\Gamma$ is the planar dual of $\Gamma$, and*
2. *there exists a homeomorphism between the limit set $\Lambda(G_\Gamma)$ and the Julia set $\mathcal{J}(R_\Gamma)$ that conjugates $\mathcal{N}_{G_\Gamma}$ to $R_\Gamma$.*
As a by-product of the above discussion, we have a full classification of critically fixed anti-rational maps. (A slightly different, but ultimately equivalent, classification of critically fixed anti-rational maps was given independently by Geyer [@Gey20].)
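For orientation, here is the simplest instance of the above correspondence, spelled out under the conventions of this subsection (a standard special case, not a new result): taking $\Gamma$ to be the $(d+1)$-cycle $C_{d+1}$, i.e., the contact graph of a closed chain of $d+1$ tangent circles, the associated kissing reflection group is quasiconformally conjugate to the regular ideal polygon reflection group $\pmb{G}_{d}$, and the associated critically fixed anti-rational map is $$R_{C_{d+1}}(z)\;=\;\overline{z}^{\,d},$$ whose two fixed critical points $0$ and $\infty$ correspond to the two faces of $C_{d+1}$, and whose Tischler graph (the union of the closures of the $d+1$ fixed internal rays in each of the two invariant Fatou components) is planar dual to $C_{d+1}$; for $d=3$, the Tischler graph of $\overline{z}^3$ and its dual are shown on the left of Figure [40](#enrichments_fig){reference-type="ref" reference="enrichments_fig"}.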
### From critically fixed anti-rational maps to kissing reflection groups by David surgery {#david_regularity_subsubsec}
Using the results of Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"}, one can invert the topological surgery of Subsection [8.1.1](#group_map_bijection_subsubsec){reference-type="ref" reference="group_map_bijection_subsubsec"}. Specifically, suppose that the critically fixed anti-rational map $R$ corresponds to the kissing reflection group $G$ in the bijection of Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"}. One can use Lemma [\[david_surgery_lemma\]](#david_surgery_lemma){reference-type="ref" reference="david_surgery_lemma"} to replace the dynamics of $R$ on its invariant Fatou components with Nielsen maps $\pmb{\mathcal{N}}_r$ of appropriate degrees, thus producing a Schwarz reflection map $\sigma$ (cf. Subsection [5.2.4](#rat_map_apollo_group_subsubsec){reference-type="ref" reference="rat_map_apollo_group_subsubsec"}). The touching structure of the faces of the Tischler graph $\mathscr{T}(R)$ (recall that each face is a Jordan domain and two faces meet at most along one edge) can be used to conclude that $\mathrm{Dom}(\sigma)$ is the closure of the union of finitely many disjoint Jordan domains whose touching structure is given by the planar dual of $\mathscr{T}(R)$. Since the Nielsen map $\pmb{\mathcal{N}}_r$ fixes the boundary of its domain of definition pointwise, one also deduces that each component of $\mathop{\mathrm{int}}{\mathrm{Dom}(\sigma)}$ is a Jordan quadrature domain and $\sigma$ is the piecewise Schwarz reflection map associated with a quadrature multi-domain. Finally, the fact that $R$ maps each face of $\mathscr{T}(R)$ homeomorphically to its complement implies that each of the above quadrature domains is a round disk. Consequently, $\sigma$ is the piecewise circular reflection map on the disks of a circle packing with associated contact graph given by the planar dual of $\mathscr{T}(R)$. In other words, $\sigma$ is the Nielsen map of a kissing reflection group that is quasiconformally conjugate to $G$.
It follows that the topological conjugacy between the action of a critically fixed anti-rational map on its Julia set and the action of the Nielsen map of a kissing reflection group on its limit set is the restriction of a David homeomorphism of $\widehat{\mathbb{C}}$. We refer the reader to [@LMMN §8] for details of the above construction.
### Dynamical ramifications of the bijection {#dyn_conseq_bijection_subsubsec}
Various properties of the graph $\Gamma$ translate to interesting dynamical properties of the corresponding kissing reflection group and anti-rational map. In particular, the bijection of Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"} induces bijections between interesting subclasses of kissing reflection groups and critically fixed anti-rational maps, which we will discuss next. (For further applications of these ideas, see Subsection [8.2.2](#bdd_thm_parallels_subsubsect){reference-type="ref" reference="bdd_thm_parallels_subsubsect"}.)
*Class 1: Necklace groups vs critically fixed anti-polynomials.* Recall from Subsection [3.1.7](#necklace_subsubsec){reference-type="ref" reference="necklace_subsubsec"} that a kissing reflection group is called necklace if the associated graph $\Gamma$ is $2$-connected and outerplanar. Since an outerplanar graph $\Gamma$ has a face that contains all the $d+1$ vertices (of $\Gamma$) on its boundary, it follows from Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"} that the Tischler graph $\mathscr{T}(R_\Gamma)$ of the corresponding anti-rational map $R_\Gamma$ has a vertex of valence $d+1$. Hence, the degree $d$ anti-rational map $R_\Gamma$ has a critical point of local degree $d$; i.e., $R_\Gamma$ is an anti-polynomial (cf. [@LLM1 Corollary 4.17]). This can be alternatively seen from the fact that a necklace group $G$ has a marked component $\Omega_\infty(G)$ in its domain of discontinuity where the action of $\mathcal{N}_G$ is quasiconformally conjugate to $\pmb{\mathcal{N}}_d$, and hence the surgery construction of Subsection [8.1.1](#group_map_bijection_subsubsec){reference-type="ref" reference="group_map_bijection_subsubsec"} replaces $\mathcal{N}_G\vert_{\Omega_\infty(G)}$ with $\overline{z}^d$. Thus, the bijection of Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"} restricts to a bijection between necklace groups and critically fixed anti-polynomials (cf. Theorem [\[sigma_d\_bers_homeo_thm\]](#sigma_d_bers_homeo_thm){reference-type="ref" reference="sigma_d_bers_homeo_thm"}). Moreover, the geodesic lamination of a necklace group is carried to the lamination of the Julia set of the corresponding critically fixed anti-polynomial by the Minkowski circle homeomorphism $\pmb{\mathcal{E}}_d$ (cf. [@LLM1 §4.3], [@LMM2 Propositions 56, 64]).
*Class 2: Quasi-Fuchsian closure vs mating.* Let us suppose that $\Gamma$ is a Hamiltonian graph. Then $\Gamma$ can be obtained by drawing additional edges connecting the vertices of a polygonal graph. Thus, a circle packing $\mathcal{P}$ corresponding to $\Gamma$ can be thought of as a limit of deformations of the circle packing $\pmb{\mathcal{P}}_{d}$, where the deformations introduce additional tangencies among the circles in the packing (see [Definition 8](#regular_ideal_polygon_ref_group_def){reference-type="ref" reference="regular_ideal_polygon_ref_group_def"}). The kissing reflection group $G_{\mathcal{P}}$ can be realized as a limit of quasiconformal deformations of the base group $\pmb{G}_{d}$, where the quasiconformal maps deform the conformal class of the polygons of the fundamental domain $\Pi(\pmb{G}_{d})$ (cf. Subsection [5.2.2](#qf_bdry_mating_subsubsec){reference-type="ref" reference="qf_bdry_mating_subsubsec"}). Hence, for a circle packing $\mathcal{P}$ with a Hamiltonian contact graph, the group $G_{\mathcal{P}}$ lies in the closure of the quasi-Fuchsian deformation space of $\pmb{G}_{d}$. We refer the reader to [@LLM1 Propositions 3.17, 3.18] for a complete proof of this statement that relates the above quasiconformal deformations to pinching appropriate simple closed geodesics on the surfaces $\mathbb{D}/\widetilde{\pmb{G}}_{d}$ and $(\widehat{\mathbb{C}}\setminus\overline{\mathbb{D}})/\widetilde{\pmb{G}}_{d}$, where $\widetilde{\pmb{G}}_{d}$ is the index two Fuchsian subgroup of $\pmb{G}_{d}$.
A Hamiltonian cycle also induces a splitting of the Hamiltonian graph $\Gamma$ into a pair of $2$-connected outerplanar graphs $\Gamma^+$ and $\Gamma^-$. It was shown in [@LLM1 Proposition 3.21] that an associated kissing reflection group $G_{\mathcal{P}}$ (where the circle packing $\mathcal{P}$ has $\Gamma$ as its contact graph) can be interpreted as a *mating* of two necklace groups $G_{\mathcal{P}^+}$ and $G_{\mathcal{P}^-}$, where the circle packings $\mathcal{P}^{\pm}$ have the outerplanar graphs $\Gamma^{\pm}$ as their contact graphs. Roughly speaking, this means that the filled limit sets $K(G_{\mathcal{P}^{\pm}})$ can be pasted together along their limit sets to produce the Riemann sphere in such a way that the action of $G_{\mathcal{P}^{\pm}}$ on $K(G_{\mathcal{P}^{\pm}})$ is semi-conjugate (in fact, conformally conjugate away from the limit set) to the action of $G_{\mathcal{P}}$ on a suitable subset of $\widehat{\mathbb{C}}$. Using the relation between necklace groups and critically fixed anti-polynomials, this mating statement can be transported to the rational map side proving that the anti-rational map $R_\Gamma$ is a conformal mating of the two anti-polynomials $R_{\Gamma^{\pm}}$ that correspond to the groups $G_{\mathcal{P}^{\pm}}$ (see [@LLM1 Corollary 4.17] for details). Hence, Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"} yields a bijection between groups in the closure of the quasi-Fuchsian deformation space of $\pmb{G}_{d}$ and critically fixed anti-rational maps that are matings of two anti-polynomials such that the bijection commutes with the operation of mating in respective categories. (For a specific example, see the Apollonian reflection group and anti-rational map discussed in Subsection [5.2.2](#qf_bdry_mating_subsubsec){reference-type="ref" reference="qf_bdry_mating_subsubsec"}.) We also remark that the unmating of $R_{\Gamma}$ into a pair of anti-polynomials depends on the choice of a Hamiltonian cycle in $\Gamma$, and hence distinct Hamiltonian cycles in $\Gamma$ may lead to different unmatings of $R_{\Gamma}$. This gives rise to many examples of shared matings in the world of critically fixed anti-rational maps (cf. [@LLM2 Appendix B.1]). For earlier examples of shared matings, see [@BEKMPRT12].
![In each row, a polyhedral and Hamiltonian graph along with a choice of a Hamiltonian cycle (left), the corresponding circle packing and the limit set of the associated kissing reflection group (middle), and the gasket Julia set of the corresponding critically fixed anti-rational map (right) are displayed. ](contact_2_hamil_cycle.png "fig:"){#hamiltonian_polyhedral_fig width="0.22\\linewidth"} ![In each row, a polyhedral and Hamiltonian graph along with a choice of a Hamiltonian cycle (left), the corresponding circle packing and the limit set of the associated kissing reflection group (middle), and the gasket Julia set of the corresponding critically fixed anti-rational map (right) are displayed. ](Apollonian_gasket.png "fig:"){#hamiltonian_polyhedral_fig width="0.3\\linewidth"} ![In each row, a polyhedral and Hamiltonian graph along with a choice of a Hamiltonian cycle (left), the corresponding circle packing and the limit set of the associated kissing reflection group (middle), and the gasket Julia set of the corresponding critically fixed anti-rational map (right) are displayed. ](JuliaRays.png "fig:"){#hamiltonian_polyhedral_fig width="0.3\\linewidth"}
![In each row, a polyhedral and Hamiltonian graph along with a choice of a Hamiltonian cycle (left), the corresponding circle packing and the limit set of the associated kissing reflection group (middle), and the gasket Julia set of the corresponding critically fixed anti-rational map (right) are displayed. ](cube_graph.png "fig:"){#hamiltonian_polyhedral_fig width="0.22\\linewidth"} ![In each row, a polyhedral and Hamiltonian graph along with a choice of a Hamiltonian cycle (left), the corresponding circle packing and the limit set of the associated kissing reflection group (middle), and the gasket Julia set of the corresponding critically fixed anti-rational map (right) are displayed. ](cube_group.png "fig:"){#hamiltonian_polyhedral_fig width="0.3\\linewidth"} ![In each row, a polyhedral and Hamiltonian graph along with a choice of a Hamiltonian cycle (left), the corresponding circle packing and the limit set of the associated kissing reflection group (middle), and the gasket Julia set of the corresponding critically fixed anti-rational map (right) are displayed. ](cube_rat.png "fig:"){#hamiltonian_polyhedral_fig width="0.3\\linewidth"}
Note that a more intrinsic way of unmating critically fixed anti-rational maps associated with Hamiltonian graphs can be derived from Meyer's results [@Meyer14 Theorem 4.2]. Indeed, the existence of a Hamiltonian cycle for $\Gamma$ translates to the existence of an equator (in the sense of [@Meyer14 Definition 4.1]) for the corresponding anti-rational map $R_\Gamma$.
We conclude this subsection with the following question, which is inspired by the discussion in Subsection [5.2.6](#qs_grp_subsubsec){reference-type="ref" reference="qs_grp_subsubsec"}.
**Problem 65**. *Describe the quasisymmetry groups of the Julia sets of critically fixed anti-rational maps and the limit sets of the corresponding kissing reflection groups. Can such Julia sets be quasiconformally equivalent to the limit sets of the corresponding kissing reflection groups?*
## Parameter space ramifications of the bijection {#new_line_dict_para_subsec}
The dictionary between kissing reflection groups and critically fixed anti-rational maps has surprising parameter space consequences. Recall from Subsection [3.1.6](#kissing_group_deform_space_subsubsec){reference-type="ref" reference="kissing_group_deform_space_subsubsec"} that each kissing reflection group $G$ sits in a quasiconformal deformation space $\mathcal{QC}(G)$. To discuss some of the analogies between the parameter spaces of reflection groups and anti-rational maps, we will first introduce an appropriate counterpart of quasiconformal deformation spaces in the anti-rational map world.
### Pared deformation space of anti-rational maps {#pared_def_space_anti_rat_subsubsec}
Let us set $\mathrm{Rat}_d^-(\mathbb{C})$ to be the space of degree $d$ anti-rational maps, and $\mathcal{M}_d^-:= \mathrm{Rat}_d^-(\mathbb{C})/\mathrm{PSL}_2(\mathbb{C})$ to be the corresponding moduli space. For a critically fixed anti-rational map $R_\Gamma$ (where the Tischler graph of $R_\Gamma$ is dual to $\Gamma$), we denote the component of hyperbolic maps in $\mathcal{M}_d^-$ containing $[R_\Gamma]$ by $\mathcal{H}_\Gamma$. Although $\mathcal{H}_\Gamma$ consists of anti-rational maps whose Julia set dynamics is quasiconformally conjugate to that of $R_\Gamma$, it turns out that $\mathcal{H}_\Gamma$ is not quite the correct analog of the quasiconformal deformation space $\mathcal{QC}(G_\Gamma)$ of the corresponding kissing reflection group, for the following reason.
If a kissing reflection group $G_\mathcal{P}$ corresponds to the anti-rational map $R_\Gamma$ in the bijection of Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"}, then the cusps of the $3$-manifold with boundary $\mathcal{M}(G_\mathcal{P}) := \left(\mathbb{H}^3 \cup \Omega(G_\mathcal{P})\right)/\widetilde{G}_\mathcal{P}$ (where $\widetilde{G}_\mathcal{P}$ is the index two Kleinian subgroup of $G_\mathcal{P}$) correspond bijectively to the repelling fixed points of $R_\Gamma$ (see [@LLM1 §4.3]). While the parabolics of $G_\mathcal{P}$ remain parabolic throughout $\mathcal{QC}(G_\mathcal{P})$, the multipliers of the repelling fixed points can grow arbitrarily large in $\mathcal{H}_\Gamma$. Thus preserving the parabolics of $G_\mathcal{P}$ on the group side is analogous to controlling the multipliers of all repelling fixed points of maps in $\mathcal{H}_\Gamma$.
We now mention the necessary adjustment that leads to a more natural deformation space of $R_\Gamma$ from the perspective of our dictionary. Note that throughout $\mathcal{H}_\Gamma$, the dynamics of the maps on any invariant Fatou component is conformally conjugate to the dynamics of an anti-Blaschke product on $\mathbb{D}$. We refer to such an anti-Blaschke product as the uniformizing model. For $K>0$, we define the *pared deformation space* $\mathcal{H}_\Gamma(K)\subset\mathcal{H}_\Gamma$ of $R_\Gamma$ to be the connected component containing $[R_\Gamma]$ of the set of anti-rational maps $[R]\in\mathcal{H}_\Gamma$ where the multiplier of any repelling fixed point in any uniformizing model is bounded by $K$. We refer the reader to [@LLM2 §4.1] for a detailed discussion of pared deformation spaces of critically fixed anti-rational maps.
### Parallel results: boundedness theorems {#bdd_thm_parallels_subsubsect}
According to [@LLM1 Proposition 3.10], the graph $\Gamma$ is polyhedral/$3$-connected if and only if the limit set $\Lambda(G_\mathcal{P})$ is a gasket (where the contact graph of the circle packing $\mathcal{P}$ is isomorphic to $\Gamma$ as a plane graph); i.e., it is homeomorphic to a set $\Lambda$ that is the closure of some infinite circle packing such that the complement of $\Lambda$ is a union of round disks which is dense in $\widehat{\mathbb{C}}$. In fact, the requirement that $\Gamma$ is polyhedral is equivalent to the conditions that each component of $\Omega(G_\mathcal{P})$ is a Jordan domain and the closures of any two different components of $\Omega(G_\mathcal{P})$ intersect in at most a single point, which is necessarily a cusp. The latter property is, in turn, equivalent to the so-called *acylindricity* property for the $3$-manifold $\mathcal{M}(G_\mathcal{P})$ [@LLM1 Proposition 3.6]. Indeed, the condition that the closures of any two different components of $\Omega(G_\mathcal{P})$ may intersect only at a cusp means that there are no essential cylinders other than the cusp pairing cylinders in the $3$-manifold $\mathcal{M}(G_\mathcal{P})$ (see [@LLM1 §3.2] for a formal definition of acylindricity). For us, the importance of the acylindricity property stems from the Thurston Compactness Theorem [@Thu86], which implies the following: $$\mathcal{QC}(G_\mathcal{P})\ \textrm{is bounded (i.e., pre-compact in}\ \mathrm{AH}(G_\mathcal{P})\textrm{)}\ \iff \mathcal{M}(G_\mathcal{P})\ \mathrm{is\ acylindrical.}$$ Combining this with [@LLM1 Proposition 3.6], we conclude that $$\mathcal{QC}(G_\mathcal{P})\ \textrm{is bounded}\ \iff \Gamma\ \textrm{is polyhedral.}$$
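The graph-theoretic conditions appearing in this dictionary ($2$-connectedness for connected limit sets, outerplanarity for necklace groups, and $3$-connectedness/polyhedrality for bounded quasiconformal deformation spaces) are easy to test by computer. The following minimal sketch is ours (it assumes the third-party `networkx` package and is not part of [@LLM1; @LLM2]); it checks planarity and vertex connectivity for a few contact graphs appearing in this section.

```python
import networkx as nx

# Contact graphs of circle packings. By the displayed equivalence above,
# QC(G_P) is bounded iff the contact graph is polyhedral, i.e. planar,
# simple, and 3-connected.
examples = {
    "cycle C_5 (outerplanar / necklace)": nx.cycle_graph(5),
    "tetrahedral graph K_4 (Apollonian packing)": nx.complete_graph(4),
    "cube graph": nx.hypercube_graph(3),
}

for name, G in examples.items():
    planar, _ = nx.check_planarity(G)
    kappa = nx.node_connectivity(G)  # vertex connectivity
    print(f"{name}: planar={planar}, connectivity={kappa}, "
          f"2-connected={kappa >= 2}, polyhedral={planar and kappa >= 3}")
```

Running this reports that $C_5$ is $2$- but not $3$-connected (so the corresponding necklace group has an unbounded quasiconformal deformation space), while $K_4$ and the cube graph are polyhedral, so the corresponding quasiconformal deformation spaces are bounded.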
It is natural to ask whether an analog of the above statement holds in the anti-rational map setting. In fact, inspired by the Thurston Compactness Theorem in the convex-cocompact setting, McMullen conjectured in [@McM95 Question 5.3] that hyperbolic components of rational maps with Sierpinski carpet Julia sets are bounded. It turns out that a counterpart of the Thurston Compactness Theorem and McMullen's conjecture indeed holds in the anti-rational map setting.
**Theorem 66**. *[@LLM2 Theorem 1.1][\[compactness_parallel_thm\]]{#compactness_parallel_thm label="compactness_parallel_thm"} If $\Gamma$ is polyhedral, then for any $K>0$, the pared deformation space $\mathcal{H}_\Gamma(K)$ is bounded in $\mathcal{M}_d^-$. Conversely, if $\Gamma$ is not polyhedral, then there exists some $K>0$ such that $\mathcal{H}_\Gamma(K)$ is unbounded in $\mathcal{M}_d^-$.*
We first note that pared deformation spaces play an essential role in the above theorem since the full hyperbolic component $\mathcal{H}_\Gamma$ is never bounded (see [@LLM1 Proposition 4.16]).
Let us now spend a few words on the proof of Theorem [\[compactness_parallel_thm\]](#compactness_parallel_thm){reference-type="ref" reference="compactness_parallel_thm"}. The uniform upper bound on the derivatives of repelling fixed points for maps in $\mathcal{H}_\Gamma(K)$ gives uniform control on the displacement of the critical points under dynamics. This enables one to study the limiting dynamics of an escaping sequence in $\mathcal{H}_\Gamma(K)$ on each invariant Fatou component in terms of a *quasi-fixed tree* (see [@LLM2 §2] for details of the construction). This, in turn, allows one to record the global limiting dynamics of degenerations in $\mathcal{H}_\Gamma(K)$ by blowing up or tuning the Tischler graph of $R_\Gamma$ by the above quasi-fixed trees (see [@LLM2 §4]). The graphs obtained by this blowing up procedure are called *enriched Tischler graphs*, and they capture the combinatorics of degenerations in $\mathcal{H}_\Gamma(K)$. An important property of the planar duals of enriched Tischler graphs is that they have no self-loops or topologically trivial bigons, and that they dominate the graph $\Gamma$, which is the planar dual of the Tischler graph of the center of $\mathcal{H}_\Gamma$ (see Definition [Definition 10](#domination_def){reference-type="ref" reference="domination_def"}, cf. [@LLM2 Definition 4.5, Proposition 4.6]). Examples of such domination are displayed in Figure [40](#enrichments_fig){reference-type="ref" reference="enrichments_fig"}.
![Left: The Tischler graph of $\overline{z}^3$ is shown in green and its dual is depicted in black. Right: Two enrichments of the Tischler graph of $\overline{z}^3$ are shown in green/red, where the vertices of the original Tischler graph are blown up to trees. In the top figure, the dual of the enrichment has a bigon, while in the bottom figure, the dual graph is simple. The duals of both enrichments dominate the dual of the original Tischler graph.](enriched.png){#enrichments_fig width="0.6\\linewidth"}
Using rescaling limit arguments, it was proved in [@LLM2 Theorem 4.11] that an escaping sequence in $\mathcal{H}_\Gamma(K)$ converges in $\mathcal{M}_d^-$ if and only if the planar dual $\Gamma'$ of the enriched Tischler graph is simple; i.e., has no non-trivial bigon (see Figure [40](#enrichments_fig){reference-type="ref" reference="enrichments_fig"}). Theorem [\[compactness_parallel_thm\]](#compactness_parallel_thm){reference-type="ref" reference="compactness_parallel_thm"} now follows from the graph-theoretic fact that $\Gamma$ is polyhedral if and only if all planar graphs $\Gamma'$ dominating $\Gamma$ are simple (see [@LLM2 Lemma 4.10]).
### Parallel results: interaction between deformation spaces {#def_space_touching_parallels_subsubsect}
Recall from Proposition [\[cell_structure_group_prop\]](#cell_structure_group_prop){reference-type="ref" reference="cell_structure_group_prop"} that for a pair of simple, $2$-connected, plane graphs $\Gamma, \Gamma'$ with the same number of vertices, we have that $$\mathcal{QC}(G_{\Gamma'})\subset \overline{\mathcal{QC}(G_\Gamma)}\ \iff\ \Gamma' > \Gamma.$$ Remarkably, the various pared deformation spaces of critically fixed anti-rational maps also have the same pattern of interaction.
**Theorem 67**. *[@LLM2 Theorem 1.2][\[def_space_interaction_parallel_thm\]]{#def_space_interaction_parallel_thm label="def_space_interaction_parallel_thm"} Let $\Gamma,\Gamma'$ be two distinct $2$-connected, simple, plane graphs with the same number of vertices. For all large $K>0$, the pared deformation space $\mathcal{H}_\Gamma(K)$ parabolic bifurcates to $\mathcal{H}_{\Gamma'}(K)$ if and only if $\Gamma'\geq\Gamma$.*
In the above theorem, $\mathcal{H}_\Gamma(K)$ is said to parabolic bifurcate to $\mathcal{H}_{\Gamma'}(K)$ if $\Gamma\neq\Gamma'$ and the intersection $\partial\mathcal{H}_\Gamma(K)\cap\partial\mathcal{H}_{\Gamma'}(K)$ contains a parabolic map whose Julia dynamics is topologically conjugate to that of maps in $\mathcal{H}_{\Gamma'}$ (an anti-rational map $R$ is parabolic if every critical point of $R$ lies in the Fatou set and $R$ has at least one parabolic cycle). Such a parabolic map is called a *root* of $\mathcal{H}_{\Gamma'}(K)$.
The proof of this theorem also employs the notion of enriched Tischler graphs associated with degenerations in $\mathcal{H}_\Gamma(K)$. The necessity of the condition $\Gamma'\geq\Gamma$ in the parabolic bifurcation statement comes from the fact that the dual of an enriched Tischler graph dominates the dual of the original Tischler graph (see [@LLM2 Proposition 4.6]). On the other hand, the demonstration of the fact that $\mathcal{H}_\Gamma(K)$ indeed parabolic bifurcates to $\mathcal{H}_{\Gamma'}(K)$ for $\Gamma'\geq\Gamma$ involves the following key steps. Here we assume that $\Gamma,\Gamma'$ are two distinct $2$-connected, simple, plane graphs with $\Gamma'\geq\Gamma$.
$\bullet$ The vertices of the Tischler graph of $R_\Gamma$ (which is dual to $\Gamma$) can be blown up to trees in such a way that the planar dual of the resulting graph is isomorphic to the graph $\Gamma'$ [@LLM2 Lemma 4.8].
$\bullet$ Each of the above trees arises as a quasi-fixed tree associated with a degenerating sequence of Blaschke products, and hence the planar dual of $\Gamma'$ is realized as an enriched Tischler graph for some escaping sequence in $\mathcal{H}_\Gamma(K)$ [@LLM2 Theorem 3.1, Proposition 4.4].
$\bullet$ As $\Gamma'$, the planar dual of this enriched Tischler graph, is simple, such an escaping sequence in $\mathcal{H}_\Gamma(K)$ converges in $\mathcal{M}_d^-$, and the limiting map is a parabolic map whose Julia dynamics is conjugate to that of maps in $\mathcal{H}_{\Gamma'}(K)$ [@LLM2 Theorem 4.11].
Since the collection of all $2$-connected simple plane graphs is connected under the domination relation, we have the following simple consequence of Theorem [\[def_space_interaction_parallel_thm\]](#def_space_interaction_parallel_thm){reference-type="ref" reference="def_space_interaction_parallel_thm"}.
**Corollary 68**. *The union of the closures of hyperbolic components (respectively, pared deformation spaces) of degree $d$ critically fixed anti-rational maps is connected.*
We refer the reader to [@LLM2 Appendix A.4] for a refinement of Theorem [\[def_space_interaction_parallel_thm\]](#def_space_interaction_parallel_thm){reference-type="ref" reference="def_space_interaction_parallel_thm"} that counts the number of accesses from $\mathcal{H}_{\Gamma}(K)$ to $\mathcal{H}_{\Gamma'}(K)$ in terms of the number of nonequivalent embeddings of $\Gamma$ into $\Gamma'$ and to [@LLM2 Appendix B] for applications of this count to the phenomena of shared matings and existence of self-bumps on boundaries of hyperbolic components (for earlier examples of self-bumps on boundaries of hyperbolic components, see [@Luo21]).
*Remark 69*. The boundaries of two hyperbolic components $\mathcal{H}_\Gamma, \mathcal{H}_{\Gamma'}$ may have wild intersection. One works with pared deformation spaces to circumvent this difficulty; indeed, the boundary of a pared deformation space $\mathcal{H}_\Gamma(K)$ only consists of parabolic maps and hence bifurcations between pared deformation spaces are tame.
### Parallel results: global topology {#global_top_parallels_subsubsec}
In light of Corollary [Corollary 68](#crit_fixed_hyp_comps_closure_conn_cor){reference-type="ref" reference="crit_fixed_hyp_comps_closure_conn_cor"}, it is natural to seek finer information about the topology of the union of the closures of all hyperbolic components (respectively, pared deformation spaces) of degree $d$ critically fixed anti-rational maps. On the group side, the moduli space $\mathfrak{M}_n$ of marked circle packings with $n$ circles in $\mathbb{C}$ was studied by Hatcher and Thurston in [@HT], where they showed that the natural map $\Psi: \mathfrak{M}_n \longrightarrow \mathfrak{S}_n$ (where $\mathfrak{S}_n$ denotes the configuration space of $n$ marked points in $\mathbb{C}$) that associates to a marked circle packing the centers of its circles induces a homotopy equivalence between the spaces.
In order to state a counterpart of the Hatcher--Thurston result in the setting of anti-rational maps, let us define $\mathcal{H}_{\Gamma, Rat}(K) \subset \mathcal{H}_{\Gamma, Rat} \subset \mathrm{Rat}_d^-(\mathbb{C})$ to be the lifts (i.e., preimages under the projection map $\mathrm{Rat}_d^-(\mathbb{C}) \longrightarrow \mathcal{M}_d^-$) of $\mathcal{H}_\Gamma(K) \subset \mathcal{H}_\Gamma \subset \mathcal{M}_d^-$. We further set $$\mathcal{X}_d(K) := \displaystyle\bigcup_{\Gamma} \overline{\mathcal{H}_{\Gamma, Rat}(K)} \subset \mathrm{Rat}_d^-(\mathbb{C}).$$
**Theorem 70**. *[@LLM2 Theorem 1.4][\[monodromy_parallel_thm\]]{#monodromy_parallel_thm label="monodromy_parallel_thm"} Let $d\geq 3$. For all large $K$, there exists a surjective monodromy representation $$\rho: \pi_1(\mathcal{X}_d(K)) \twoheadrightarrow \mathrm{Mod}(S_{0,d+1}),$$ where $\mathrm{Mod}(S_{0,d+1})$ is the mapping class group of the $(d+1)$-punctured sphere.*
For maps in $\mathcal{X}_d(K)$, the analog of the circles in the circle packing are suitable Markov partition pieces of the Julia set determined by Tischler graphs. The proof of Theorem [\[monodromy_parallel_thm\]](#monodromy_parallel_thm){reference-type="ref" reference="monodromy_parallel_thm"} is carried out by showing that these Markov pieces move 'continuously' (away from the grand orbits of fixed points or $2$-cycles) and that their braiding patterns along curves in $\mathcal{X}_d(K)$ can be followed to construct a monodromy representation of $\mathcal{X}_d(K)$ into $\mathrm{Mod}(S_{0,d+1})$. Surjectivity of this representation is demonstrated by exhibiting that suitable half Dehn twists (that generate the mapping class group) are images of certain explicit paths in $\mathcal{X}_d(K)$ (see [@LLM2 §5] for details).
# Matings of anti-polynomials and necklace groups {#mating_anti_poly_nielsen_sec}
In the earlier sections, we discussed the dynamics of various Schwarz reflection maps that arise as matings of anti-polynomials and Nielsen maps of kissing reflection groups. This provided us with matings of geometrically finite quadratic anti-polynomials with the Nielsen map of the ideal triangle group (see Theorems [\[deltoid_thm_2\]](#deltoid_thm_2){reference-type="ref" reference="deltoid_thm_2"}, [Theorem 49](#c_and_c_general_mating_thm){reference-type="ref" reference="c_and_c_general_mating_thm"}) and of the specific anti-polynomial $\overline{z}^d$ with Nielsen maps of arbitrary necklace groups (see Theorem [\[sigma_d\_mating_thm\]](#sigma_d_mating_thm){reference-type="ref" reference="sigma_d_mating_thm"}). These examples lead to the following questions.
1. Do Schwarz reflection maps provide a general framework for mating anti-polynomials with Nielsen maps of necklace groups?
2. What is the topological structure of the parameter space of such matings?
The first question above was addressed in [@LMMN], where an existence theorem for matings of large classes of anti-polynomials and necklace groups was established.
## Definition of conformal mating {#conf_mating_def_subsec}
We now explicate the notion of conformal matings of Nielsen maps of necklace groups with anti-polynomials. The idea is analogous to the classical definition of conformal matings of two (anti-)polynomials.
Let $G \in \overline{\beta(\pmb{G}_{d})}$ be a necklace group associated with a circle packing $\mathcal{P}=\{C_1,\cdots, C_{d+1}\}$. By Proposition [\[group_lamination_prop\]](#group_lamination_prop){reference-type="ref" reference="group_lamination_prop"}, there is a canonical semiconjugacy $\varphi_{G}: \mathbb{S}^1 \rightarrow \Lambda(G)$ between $\pmb{\mathcal{N}}_d\vert_{\mathbb{S}^1}$ and $\mathcal{N}_G\vert_{\Lambda(G)}$, and hence $\mathcal{N}_G\vert_{\Lambda(G)}$ is a factor of $\pmb{\mathcal{N}}_d\vert_{\mathbb{S}^1}$. On the other hand, if $P$ is a monic, centered, anti-polynomial of degree $d$ such that $\mathcal{J}(P)$ is connected and locally connected, then $P\vert_{\mathcal{J}(P)}$ is a factor of $\overline{z}^d\vert_{\mathbb{S}^1}$ via the continuous boundary extension $\varphi_P:\mathbb{S}^1\to\mathcal{J}(P)$ of the inverse of the normalized Böttcher coordinate of $\mathcal{B}_\infty(P)$.
Recall also that $\pmb{\mathcal{E}}_d: \mathbb{S}^1 \rightarrow \mathbb{S}^1$ is a topological conjugacy between $\pmb{\mathcal{N}}_d\vert_{\mathbb{S}^1}$ and $z\mapsto\overline{z}^{d}\vert_{\mathbb{S}^1}$. In other words, the circle coverings induced by the action of the anti-polynomial $P$ on its Julia set and the Nielsen map $\mathcal{N}_G$ on its limit set are topologically conjugate. This compatibility allows one to glue $\mathcal{K}(P)$ with $K(G)$ along their boundaries and obtain a partially defined continuous map on the resulting topological space.
**Definition 71**. We define the equivalence relation $\sim$ on $K(G) \sqcup \mathcal{K}(P)$ generated by $\varphi_G(t)\sim\varphi_P(\overline{\pmb{\mathcal{E}}_d(t)})$ for all $t\in\mathbb{S}^1$.
Clearly, the equivalence relation $\sim$ is preserved by the map $$P\sqcup \mathcal{N}_{G}: \mathcal{K}(P)\sqcup (K(G)\setminus\mathop{\mathrm{int}}{\Pi(G)})\to \mathcal{K}(P)\sqcup K(G),$$ $$P\sqcup \mathcal{N}_{G}\vert_{\mathcal{K}(P)}=P,\quad P\sqcup \mathcal{N}_{G}\vert_{K(G)\setminus\mathop{\mathrm{int}}{\Pi(G)}}=\mathcal{N}_G,$$ and hence $P\sqcup \mathcal{N}_G$ descends to a continuous map $P\bot \!\! \! {\bot}\mathcal{N}_G$ on the quotient $$\mathcal{K}(P)\bot \!\! \! {\bot}K(G):=\left(\mathcal{K}(P)\sqcup K(G)\right)/\sim.$$ The map $P\bot \!\! \! {\bot}\mathcal{N}_G$ is the *topological mating* of $P$ and $\mathcal{N}_G$. If $\mathcal{K}(P)\bot \!\! \! {\bot}K(G)$ is homeomorphic to a $2$-sphere, the topological mating is said to be *Moore-unobstructed*. Finally, one says that $P$ and $\mathcal{N}_G$ are *conformally mateable* if their topological mating is Moore-unobstructed, and if the topological $2$-sphere $\mathcal{K}(P)\bot \!\! \! {\bot}K(G)$ admits a complex structure that turns the topological mating $P\bot \!\! \! {\bot}\mathcal{N}_G$ into an antiholomorphic map.
Alternatively, an antiholomorphic map $F$ (defined on a subset of the Riemann sphere) is a conformal mating of $P$ and $\mathcal{N}_G$ if there exist continuous semi-conjugacies from $\mathcal{K}(P), K(G)$ (equipped with the actions of $P, \mathcal{N}_G$, respectively) into the dynamical plane of $F$ such that the semi-conjugacies are conformal on the interiors, the images of the semi-conjugacies fill up the whole sphere and intersect only along their boundaries as prescribed by the equivalence relation $\sim$. We refer the reader to [@LMMN §10.2] for precise definitions and further details.
## Existence of conformal matings {#conf_mating_gen_thm_subsec}
We will now state a general result that guarantees the existence of conformal matings of necklace groups and anti-polynomials. By definition, conformal mateability of $P$ and $\mathcal{N}_G$ (as in the previous subsection) requires their topological mating to be Moore-unobstructed. It turns out that for hyperbolic anti-polynomials, this is the only obstruction to conformal mating; i.e., whenever $\mathcal{K}(P)\bot \!\! \! {\bot}K(G)$ is homeomorphic to $\mathbb{S}^2$, one can upgrade the topological mating of $P$ and $\mathcal{N}_G$ to a conformal mating.
**Theorem 72**. *[@LMMN Lemma 10.17, Theorem 10.20][\[antipoly_nielsen_mating_thm\]]{#antipoly_nielsen_mating_thm label="antipoly_nielsen_mating_thm"} Let $P$ be a monic hyperbolic anti-polynomial of degree $d$, and let $G\in \overline{\beta(\pmb{G}_{d})}$ be a necklace group. Then, $P$ and $\mathcal{N}_G$ are conformally mateable if and only if their topological mating is Moore-unobstructed.*
*Moreover, if $F:\mathrm{Dom}(F)\to\widehat{\mathbb{C}}$ is a conformal mating of $P$ and $\mathcal{N}_G$, then each component of the interior of $\mathrm{Dom}(F)$ is a simply connected quadrature domain, and $F$ is the piecewise Schwarz reflection map associated with a quadrature multi-domain.*
As expected, promoting the topological mating of $P$ and $\mathcal{N}_G$ to an antiholomorphic map lies at the heart of the difficulty. To achieve this, one uses a combination of the Thurston Realization Theorem and the David surgery procedure of Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"}, which we now outline.
**Step I:** According to Theorem [\[new_line_dict_thm\]](#new_line_dict_thm){reference-type="ref" reference="new_line_dict_thm"} (also see the discussion in Subsection [8.1.3](#dyn_conseq_bijection_subsubsec){reference-type="ref" reference="dyn_conseq_bijection_subsubsec"}), there exists a degree $d$ critically fixed anti-polynomial $P_G$ whose Julia set dynamics is topologically conjugate to the dynamics of $\mathcal{N}_G\vert_{\Lambda(G)}$. The first step in the construction of a conformal mating of $P$ and $\mathcal{N}_G$ is to mate the two anti-polynomials $P$ and $P_G$. In fact, one readily verifies that the absence of Moore obstruction for the topological mating of $P$ and $\mathcal{N}_G$ is equivalent to the absence of Moore obstruction for the topological mating of $P$ and $P_G$, and then invokes the conformal mateability criterion [@LLM1 Proposition 4.23] (which is a statement about classical polynomial matings, and is an application of the Thurston Realization Theorem) to conclude that if $P\bot \!\! \! {\bot}\mathcal{N}_G$ is Moore-unobstructed, then the anti-polynomials $P$ and $P_G$ are conformally mateable.
**Step II:** The next step is to turn the conformal mating $R$ of the anti-polynomials $P$ and $P_G$ into a conformal mating of $P$ and $\mathcal{N}_G$. To this end, one only needs to glue Nielsen maps of ideal polygon reflection groups in suitable critically fixed Fatou components of $R$ (these Fatou components correspond to the critically fixed Fatou components of $P_G$). This is precisely where the David Surgery Lemma [\[david_surgery_lemma\]](#david_surgery_lemma){reference-type="ref" reference="david_surgery_lemma"} comes into play.
The statement that a conformal mating of $P$ and $\mathcal{N}_G$ is necessarily a piecewise Schwarz reflection map follows from the observation that $\mathcal{N}_G$ fixes $\partial \Pi(G)$ pointwise. This fact puts the naturality of Schwarz reflection maps in the world of combination theorems on a firm footing.
*Remark 73*. A simple algorithm to check whether the topological mating of $P$ and $P_G$ has a Moore obstruction was given in [@LLM1 Lemma 4.22], which makes Theorem [\[antipoly_nielsen_mating_thm\]](#antipoly_nielsen_mating_thm){reference-type="ref" reference="antipoly_nielsen_mating_thm"} effective in concrete situations.
## Recognizing conformal matings {#conf_mating_identify_subsec}
Suppose that the piecewise Schwarz reflection map $$F:\mathrm{Dom}(F):=\bigcup_{i=1}^k\overline{\Omega_i}\longrightarrow\widehat{\mathbb{C}},\ z\mapsto \sigma_i(z)\ \mathrm{for}\ z\in\overline{\Omega_i}$$ is a conformal mating of a marked degree $d$ hyperbolic anti-polynomial $P$ (with connected Julia set) and the Nielsen map $\mathcal{N}_G$ of a necklace group $G$ of rank $d+1$. One can determine the topology of $\mathrm{Dom}(F)$ in terms of the following finite combinatorial data associated with $P$ and the structure of accidental parabolics of the group $G$.
Recall that an anti-polynomial $P$ acts on the angles of its external dynamical rays by $m_{-d}:\mathbb{S}^1\to\mathbb{S}^1,\ \theta\mapsto -d\theta$.
**Definition 74** (Fixed ray lamination). Set $\mathrm{Fix}(m_{-d}):= \{0,\frac{1}{d+1},\cdots,\frac{d}{d+1}\}$, the set of angles that are fixed by $m_{-d}$. We define the equivalence relation $\mathfrak{L}_P$ on $\mathrm{Fix}(m_{-d})$ as: $\theta_1\sim\theta_2$ if and only if the external dynamical rays of $P$ at angles $\theta_1, \theta_2$ land at the same point of $\mathcal{J}(P)$. The equivalence relation $\mathfrak{L}_P$ is called the *fixed ray lamination* of $P$.
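For concreteness, the description of $\mathrm{Fix}(m_{-d})$ amounts to the following one-line computation on $\mathbb{R}/\mathbb{Z}$: $$-d\theta\equiv\theta \pmod{1}\iff (d+1)\theta\equiv 0\pmod{1}\iff \theta\in\left\{0,\tfrac{1}{d+1},\cdots,\tfrac{d}{d+1}\right\}.$$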
We remark that each equivalence class contains at most two elements, and we refer to an equivalence class of $\mathfrak{L}_P$ consisting of two elements as a *leaf*.
The closure of $\Pi^b(G)=\Pi(G)\cap K(G)$ is a tree of polygons, and its boundary meets the limit set $\Lambda(G)$ at finitely many points. We call the set of these points $S_G$. Let $S_G^{\mathrm{cusp}}\subset S_G$ be those $d+1$ points in $S_G$ that do not separate $\Lambda(G)$ (these are the images of the $(d+1)$-st roots of unity under $\varphi_G$). Note that in the topological mating of $P$ and $\mathcal{N}_G$, the points on $\mathcal{J}(P)$ with external address in $\mathrm{Fix}(m_{-d})$ are glued with points in $S_G^{\mathrm{cusp}}$.
The next result, which counts the number of connected components of $\mathop{\mathrm{int}}{\mathrm{Dom}(F)}$, follows from the fact that each leaf of $\mathfrak{L}_P$ forces a pair of points of $S_G^{\mathrm{cusp}}$ to be identified in the topological mating, thereby creating a disconnection of the interior of $\mathrm{Dom}(F)$.
**Proposition 75**. *[@LLM23][\[count_qd_comps_prop\]]{#count_qd_comps_prop label="count_qd_comps_prop"} The number of connected components of $\mathrm{Dom}(F)$ is equal to the number of gaps of the lamination $\mathfrak{L}_P$ (equivalently, one more than the number of leaves in $\mathfrak{L}_P$).*
We enumerate the gaps of $\mathfrak{L}_P$ cyclically as $\mathcal{G}_1,\cdots,\mathcal{G}_k$ such that the arc $\left(0,\frac{1}{d+1}\right)\subset\partial\mathcal{G}_1$ and $\mathcal{G}_i$ corresponds to the quadrature domain $\Omega_i$ (after possibly renumbering the quadrature domains). We now state a formula for the degrees of rational maps uniformizing the simply connected quadrature domains.
**Proposition 76**. *[@LLM23][\[gap_qd_order_prop\]]{#gap_qd_order_prop label="gap_qd_order_prop"} Let $f_i$ be a rational map of degree $d_i$ that carries $\mathbb{D}$ univalently onto $\Omega_i$. Then, $\partial\mathcal{G}_i\cap\mathbb{S}^1$ contains exactly $d_i$ arcs of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-d})$, for each $i\in\{1,\cdots, k\}$. In particular, $\sum_{i=1}^k d_i=d+1$.*
For a proof of Proposition [\[gap_qd_order_prop\]](#gap_qd_order_prop){reference-type="ref" reference="gap_qd_order_prop"}, note that if $\partial\mathcal{G}_i\cap\mathbb{S}^1$ contains exactly $q_i$ arcs of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-d})$, then the map $m_{-d}\vert_{\partial\mathcal{G}_i\cap\mathbb{S}^1}$ covers $\partial\mathcal{G}_i\cap\mathbb{S}^1$ exactly $(q_i-1)$ times. Hence, the part of the limit set of $F$ that lies in $\Omega_i$ covers itself $(q_i-1)$ times under the map $F$, which implies that $\sigma_i:\sigma_i^{-1}(\Omega_i)\to\Omega_i$ is a degree $(q_i-1)$ branched covering, whence the result follows from Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}.
Note that under the gluing of $\mathcal{K}(P)$ and $K(G)$, the $\varphi_P$-image of each arc of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-d})$ is identified with the $\varphi_G$-image of a unique arc of $\mathbb{S}^1\setminus \sqrt[d+1]{1}$. As the $\varphi_G$-image of any arc of $\mathbb{S}^1\setminus \sqrt[d+1]{1}$ is enclosed by a unique circle $C_i$ of the packing $\mathcal{P}$, this defines a bijective correspondence between the components of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-d})$ and the circles $C_1,\cdots, C_{d+1}$.
The facts that $\mathcal{N}_G$ has no anti-conformal extension to a relative neighborhood in $K(G)$ of any point of $S_G$ and that the only possible singularities on boundaries of quadrature domains are double points and conformal cusps [@Sak91] imply the following structure of double points and cusps on $\partial\Omega$.
**Proposition 77**. *[@LLM23][\[sing_structure_qd_prop\]]{#sing_structure_qd_prop label="sing_structure_qd_prop"}*

1. *The number of cusps on $\partial\Omega_i$ is equal to the number of points of $\mathrm{Fix}(m_{-d})\cap \partial\mathcal{G}_i$ that are not endpoints of leaves of $\mathfrak{L}_P$.*

2. *The quadrature domains corresponding to the pair of gaps bordering on a leaf of $\mathfrak{L}_P$ have a tangential intersection.*

3. *Suppose that $C_r\cap C_s\neq\emptyset$ with $r-s\neq \pm 1\ (\mathrm{mod}\ d+1)$. If the arcs of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-d})$ corresponding to $C_r$ and $C_s$ lie on the boundary of the same gap $\mathcal{G}_i$, then the quadrature domain boundary $\partial\Omega_i$ has a double point corresponding to the pair $(r,s)$. On the other hand, if the arcs of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-d})$ corresponding to $C_r$ and $C_s$ lie on the boundaries of two distinct gaps $\mathcal{G}_i, \mathcal{G}_j$, then the quadrature domain boundaries $\partial\Omega_i$ and $\partial\Omega_j$ have a tangential intersection (so that $\partial\Omega$ has a double point) corresponding to the pair $(r,s)$.*
We conclude this subsection with an explicit example. Let $P(z)=\overline{z}^3-\frac{3i}{\sqrt{2}}\overline{z}$ be the anti-polynomial with a pair of $2$-periodic critical points and $G$ be the Julia necklace group introduced in Subsection [5.1.1](#talbot_dynamics_subsubsec){reference-type="ref" reference="talbot_dynamics_subsubsec"}. The anti-polynomial $P$ is obtained by tuning each of the two (bounded) invariant Fatou components of the Julia anti-polynomial by the Basilica anti-polynomial $\overline{z}^2-1$. Recall from Theorem [Theorem 45](#talbot_schwarz_group_anti_poly_thm){reference-type="ref" reference="talbot_schwarz_group_anti_poly_thm"} that the critically fixed anti-polynomial $P_G$ corresponding to $G$ is given by the Julia anti-polynomial $z\mapsto(3\overline{z}-\overline{z}^3)/2$. It is easily checked using [@LLM1 Lemma 4.22] that the topological mating of $P$ and $P_G$ is Moore-unobstructed, and hence $P$ and $\mathcal{N}_G$ are conformally mateable.
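Let us also record a direct verification that the critical points of $P$ are indeed $2$-periodic (a routine computation with the above normalization). The finite critical points $c$ of $P$ satisfy $3\overline{c}^{\,2}=\tfrac{3i}{\sqrt{2}}$, i.e., $c^2=-\tfrac{i}{\sqrt{2}}$, so that $$P(c)=\overline{c}\left(\overline{c}^{\,2}-\tfrac{3i}{\sqrt{2}}\right)=-\sqrt{2}\,i\,\overline{c}\quad\textrm{and}\quad P^{\circ 2}(c)=\left(\sqrt{2}\,i\,c\right)^3-\tfrac{3i}{\sqrt{2}}\left(\sqrt{2}\,i\,c\right)=-2\sqrt{2}\,i\,c^3+3c=c,$$ where the last equality uses $c^3=-\tfrac{i}{\sqrt{2}}\,c$. Since $\vert P(c)\vert=\sqrt{2}\,\vert c\vert\neq\vert c\vert$, each critical point is $2$-periodic and not fixed.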
The fixed ray lamination of $P$ is given by $\{\{0,3/4\},\{1/4,1/2\}\}$. It thus follows from Propositions [\[count_qd_comps_prop\]](#count_qd_comps_prop){reference-type="ref" reference="count_qd_comps_prop"} and [\[gap_qd_order_prop\]](#gap_qd_order_prop){reference-type="ref" reference="gap_qd_order_prop"} that $\mathop{\mathrm{int}}{\mathrm{Dom}(F)}$ (where $F$ is the conformal mating of $P$ and $\mathcal{N}_G$) has three components $\Omega_1, \Omega_2, \Omega_3$, two of which (say $\Omega_1,\Omega_2$) are round disks and the other one is the univalent image of $\mathbb{D}$ under a quadratic rational map. Moreover, Proposition [\[sing_structure_qd_prop\]](#sing_structure_qd_prop){reference-type="ref" reference="sing_structure_qd_prop"} implies that the quadrature domains $\Omega_1, \Omega_2, \Omega_3$ have non-singular boundaries and they touch each other pairwise. These facts can be used to show that up to Möbius conjugacy, $\Omega_3$ is the exterior of an ellipse and $\Omega_1,\Omega_2$ are touching round disks inscribed in the ellipse $\partial\Omega_3$ (see Figure [\[ellipse_disk_fig\]](#ellipse_disk_fig){reference-type="ref" reference="ellipse_disk_fig"}). For a detailed discussion of this specific example, we refer the reader to [@LMMN §11.2].
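For the reader's convenience, here is the combinatorial bookkeeping behind these conclusions (a routine check using the above propositions). The two leaves $\{0,3/4\}$ and $\{1/4,1/2\}$ of $\mathfrak{L}_P$ are non-crossing chords, so the lamination has three gaps: two of them meet $\mathbb{S}^1$ in a single arc of $\mathbb{S}^1\setminus\mathrm{Fix}(m_{-3})$ (namely $(3/4,1)$ and $(1/4,1/2)$), while the third meets $\mathbb{S}^1$ in the two arcs $(0,1/4)$ and $(1/2,3/4)$. By Proposition [\[gap_qd_order_prop\]](#gap_qd_order_prop){reference-type="ref" reference="gap_qd_order_prop"}, the corresponding uniformizing rational maps have degrees $1$, $1$, and $2$ (note that $1+1+2=d+1=4$), which yields the two round disks and the quadratic quadrature domain. Moreover, every angle in $\mathrm{Fix}(m_{-3})=\{0,1/4,1/2,3/4\}$ is an endpoint of a leaf, so Proposition [\[sing_structure_qd_prop\]](#sing_structure_qd_prop){reference-type="ref" reference="sing_structure_qd_prop"} produces no cusps on $\partial\Omega$, while each leaf forces a tangency between the quadrature domains of the two gaps it separates.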
# Polygonal Schwarz reflections and connectedness loci of anti-polynomials {#mating_para_space_sec}
In this section, which is based on [@LLM23], we introduce a family of degree $d$ piecewise Schwarz reflections that generalizes the C&C family and the deltoid reflection. We will explain how the mating operation between generic anti-polynomials (with connected Julia set) and the Nielsen map of the ideal $(d+1)$-gon reflection group yields dynamical relations between this family of Schwarz reflections and the connectedness locus $\mathscr{C}_d$ of monic, centered anti-polynomials of degree $d$.
## Polygonal Schwarz reflections {#poly_schwarz_subsec}
Let $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ be a piecewise Schwarz reflection map associated with a quadrature multi-domain $\Omega=\sqcup_{j=1}^k\Omega_j$ (see Subsection [3.3.2](#piecewise_schwarz_subsubsec){reference-type="ref" reference="piecewise_schwarz_subsubsec"}). We further assume that $\overline{\Omega}$ is connected and simply connected. This is equivalent to requiring that each $\Omega_j$ is a Jordan domain, and the *contact graph* of the quadrature multi-domain (i.e., a graph having a vertex for each $\Omega_j$ and an edge connecting two vertices if the corresponding quadrature domains touch) is a tree. We refer to such an $\Omega$ as a *tree-like* quadrature multi-domain. For a tree-like quadrature multi-domain, the desingularized droplet $T^0(\sigma)$ is homeomorphic to an ideal polygon in the hyperbolic plane.
**Definition 78**. A degree $d$ piecewise Schwarz reflection map $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ associated with a tree-like quadrature multi-domain is said to be *polygonal* if $T^0(\sigma)$ is homeomorphic to an ideal $(d+1)$-gon. It is called *regular polygonal* if $T^0(\sigma)$ is conformally equivalent to the regular ideal $(d+1)$-gon $\Pi(\pmb{G}_d)$ respecting the vertices.
We denote the spaces of degree $d$ polygonal (respectively, regular polygonal) piecewise Schwarz reflection maps by $\mathscr{S}_d$ (respectively, $\mathscr{S}_d^{\mathrm{reg}}$).
*Remark 79*. A degree $d$ piecewise Schwarz reflection map associated with a tree-like quadrature multi-domain is polygonal iff the sum of the number of cusps and twice the number of double points on $\partial\Omega$ is equal to $d+1$.
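For instance, in degree $d=2$ this count is consistent with the two familiar examples: for a deltoid-like quadrature domain, $\partial\Omega$ has three cusps and no double points, so that $3+2\cdot 0=3=d+1$; for the union of a cardioid and the exterior of a circumscribing circle, $\partial\Omega$ has one cusp (on the cardioid) and one double point (coming from the tangency between the cardioid and the circle), so that $1+2\cdot 1=3=d+1$.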
**Proposition 80**. *[@LLM23][\[polygonal_conn_locus_prop\]]{#polygonal_conn_locus_prop label="polygonal_conn_locus_prop"} Let $(\sigma:\overline{\Omega}\to\widehat{\mathbb{C}})\in \mathscr{S}_d$. Then the following are equivalent.*
1. *$K(\sigma)$ is connected.*
2. *$\sigma^{\circ n}(c)\notin T^\infty(\sigma)$, $\forall\ c\in \mathrm{crit}(\sigma)$.*
3. *$\sigma:T^\infty(\sigma)\setminus\mathop{\mathrm{int}}{T^0(\sigma)}\longrightarrow T^\infty(\sigma)$ is conformally conjugate to the Nielsen map $\mathcal{N}_G:\mathbb{D}\setminus \mathop{\mathrm{int}}{\Pi(G)}\to \mathbb{D}$, where $G$ is an ideal $(d+1)$-gon reflection group.*
By the above proposition, the connectedness locus of $\mathscr{S}_d^{\mathrm{reg}}$ consists precisely of maps having the Nielsen map $\pmb{\mathcal{N}}_d$ as the conformal model of their tiling set dynamics. We denote this space by $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ (where the maps are normalized so that the conformal conjugacy $\psi_\sigma:\mathbb{D}\to T^\infty(\sigma)$ between $\pmb{\mathcal{N}}_d$ and $\sigma$ sends the origin to $\infty$ and has derivative $1$ at the origin).
## Dynamical rays {#polygonal_comb_model_subsec}
To obtain a combinatorial model for the limit set of a Schwarz reflection $\sigma\in\mathcal{S}_{\pmb{\mathcal{N}}_d}$, one needs the following notion of dynamical rays. These are defined in terms of the Cayley graph of $\pmb{G}_d$ with respect to the generating set $\pmb{\rho}_1,\cdots, \pmb{\rho}_{d+1}$, where $\pmb{\rho}_i$ is the anti-Möbius reflection in the circle $\pmb{C}_i$ (see Definition [Definition 8](#regular_ideal_polygon_ref_group_def){reference-type="ref" reference="regular_ideal_polygon_ref_group_def"}).
**Definition 81**. Let $(i_1, i_2, \cdots)\in\{1,\cdots, d+1\}^{\mathbb{N}}$ with $i_j\neq i_{j+1}$ for all $j$. The corresponding infinite sequence of tiles $\{\Pi(\pmb{G}_d),\pmb{\rho}_{i_1}(\Pi(\pmb{G}_d)),\pmb{\rho}_{i_1}\circ\pmb{\rho}_{i_2}(\Pi(\pmb{G}_d)),\cdots\}$ shrinks to a single point of $\mathbb{S}^1\cong \mathbb{R}/\mathbb{Z}$, which we denote by $\theta(i_1, i_2, \cdots)$. We define a *$\pmb{G}_d$-ray at angle $\theta(i_1, i_2, \cdots)$* to be the concatenation of hyperbolic geodesics (in $\mathbb{D}$) connecting the consecutive points of the sequence $\{0,\pmb{\rho}_{i_1}(0),\pmb{\rho}_{i_1}\circ\pmb{\rho}_{i_2}(0),\cdots\}$.
(See Figure [2](#deltoid_intro_fig){reference-type="ref" reference="deltoid_intro_fig"} (right).) We note that in general there may be more than one $\pmb{G}_d$-ray at a given angle.
**Definition 82**. For $\sigma\in\mathcal{S}_{\pmb{\mathcal{N}}_d}$, the image of a $\pmb{G}_d$-ray at angle $\theta$ under the map $\psi_\sigma:\mathbb{D}\longrightarrow T^\infty(\sigma)$ (that conjugates $\pmb{\mathcal{N}}_d$ to $\sigma$) is called a $\theta$-dynamical ray of $\sigma$.
We denote the set of all (pre)periodic points of $\pmb{\mathcal{N}}_d:\mathbb{S}^1\to\mathbb{S}^1$ by $\mathrm{Per}(\pmb{\mathcal{N}}_d)$. As in the case for anti-polynomials with connected Julia set, the dynamical rays at angles in $\mathrm{Per}(\pmb{\mathcal{N}}_d)$ land on $\Lambda(\sigma)$. This leads to the following analogue of rational laminations for the space $\mathcal{S}_{\pmb{\mathcal{N}}_d}$.
**Definition 83**. The *preperiodic lamination* of $\sigma\in\mathcal{S}_{\pmb{\mathcal{N}}_d}$ is defined as the equivalence relation on $\mathrm{Per}(\pmb{\mathcal{N}}_d)\subset\mathbb{R}/\mathbb{Z}$ such that $\theta, \theta'\in\mathrm{Per}(\pmb{\mathcal{N}}_d)$ are related if and only if the dynamical rays of $\sigma$ at these angles land at the same point of $\Lambda(\sigma)$. We denote the preperiodic lamination of $\sigma$ by $\lambda(\sigma)$.
## Coarse partition and topology {#polygonal_topology_subsec}
Quadrature multi-domains defining the Schwarz reflections in $\mathcal{S}_{\pmb{\mathcal{N}}_2}$ come in essentially two different flavors: the deltoid-like quadrature domains, and the union of the cardioid and the exterior of a circumscribing circle. Maps of the first kind are characterized by the fact that all of their fixed dynamical rays (at angles $0$, $1/3$, and $2/3$) land at distinct points, while maps of the second kind are characterized by the co-landing of their $1/3$- and $2/3$-dynamical rays. In higher degrees, there are many possibilities for the structures of the quadrature domains, and bookkeeping of these patterns is necessary to study convergence in the parameter space. These patterns are captured by the following combinatorial data.
A degree $d$ polygonal Schwarz reflection $\sigma$ with connected limit set induces an action of the Nielsen map $\pmb{\mathcal{N}}_d$ on the ideal boundary $I(T^\infty(\sigma))\cong \mathbb{R}/\mathbb{Z}$ of the escaping set, and this induced action has exactly $d+1$ fixed points $0,\cdots,\frac{d}{d+1}$ on the ideal boundary. We denote the set of these fixed points by $\mathrm{Fix}(\pmb{\mathcal{N}}_d)$. As in Definition [Definition 74](#fixed_ray_lamination_def){reference-type="ref" reference="fixed_ray_lamination_def"}, the *fixed ray lamination* of $\sigma\in\mathcal{S}_{\pmb{\mathcal{N}}_d}$ is the equivalence relation $\mathfrak{L}_\sigma$ on $\mathrm{Fix}(\pmb{\mathcal{N}}_d)$ such that $\theta_1\sim\theta_2$ if and only if the dynamical rays of $\sigma$ at angles $\theta_1,\theta_2$ land at the same point of $\Lambda(\sigma)$.
Thus, we have a coarse partition $$\mathcal{S}_{\pmb{\mathcal{N}}_d}= \bigsqcup_{\mathfrak{L}} \mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}},$$ where $\mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}}$ consists of maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ with fixed ray lamination $\mathfrak{L}$, and the union is over all possible fixed ray laminations. Arguments similar to the ones employed in Subsection [9.3](#conf_mating_identify_subsec){reference-type="ref" reference="conf_mating_identify_subsec"} show the following properties of $(\sigma : \overline{\Omega} \longrightarrow \widehat{\mathbb{C}}) \in \mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}}$.
- The components of $\Omega$ are in one-to-one correspondence with the gaps of $\mathfrak{L}$; i.e., $$\displaystyle\Omega=\bigsqcup_{\substack{\mathcal{G}\\ \textrm{gaps\ of}\ \mathfrak{L}}} \Omega_{\mathcal{G}}.$$
- The degree of the uniformizing rational map of $\Omega_{\mathcal{G}}$ is equal to the number of arcs of $\mathbb{S}^1\setminus\mathrm{Fix}(\pmb{\mathcal{N}}_d)$ on $\partial\mathcal{G}\cap\mathbb{S}^1$, and the number of cusps on $\partial\Omega_{\mathcal{G}}$ equals the number of points of $\mathrm{Fix}(\pmb{\mathcal{N}}_d)$ in $\mathop{\mathrm{int}}{(\partial\mathcal{G}\cap\mathbb{S}^1)}$.
- Two quadrature domains $\Omega_{\mathcal{G}}, \Omega_{\mathcal{G}'}$ share at most one boundary point; moreover, they do so if and only if $\mathcal{G}$ and $\mathcal{G}'$ are adjacent.
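To connect these properties with the degree two discussion above: if $\mathfrak{L}$ has no leaves, there is a single gap meeting $\mathbb{S}^1$ in all three arcs of $\mathbb{S}^1\setminus\mathrm{Fix}(\pmb{\mathcal{N}}_2)$, so $\Omega$ is a single quadrature domain uniformized by a degree three rational map and carrying three cusps (the deltoid-like flavor); if $\mathfrak{L}$ has the single leaf $\{1/3,2/3\}$, there are two gaps, one meeting $\mathbb{S}^1$ in one arc (giving a round disk, with no cusps) and one meeting it in two arcs (giving a degree two quadrature domain with one cusp, coming from the fixed angle $0$), and these two quadrature domains share exactly one boundary point (the cardioid-and-circle flavor).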
We now define a topology on the space $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ in terms of convergence of sequences. Since there are only finitely many fixed ray laminations, we can assume (after possibly passing to a subsequence) that any sequence in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ lies entirely in $\mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}}$, where $\mathfrak{L}$ is a particular fixed ray lamination. We enumerate the gaps of $\mathfrak{L}$ as $\mathcal{G}_1,\cdots,\mathcal{G}_k$. For maps $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ in $\mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}}$, we denote the corresponding components of $\Omega$ as $\Omega_1,\cdots,\Omega_k$.
**Definition 84**. We say that a sequence $\displaystyle\{\sigma_n:\overline{\Omega^n}=\bigcup_{r=1}^k\overline{\Omega_{r}^{n}}\to\widehat{\mathbb{C}}\}\subset\mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}}$ converges to $\displaystyle\sigma:\overline{\Omega}=\bigcup_{r=1}^{k}\bigcup_{j=1}^{l_r}\overline{\Omega_{r,j}}\to\widehat{\mathbb{C}}$ if
1. $\{\Omega_{r,j}:\ j\in\{1,\cdots,l_r\} \}$ is the collection of all Carathéodory limits of the sequence of domains $\{\Omega_{r}^{n}\}_n$, $r\in\{1,\cdots,k\}$, and
2. for every compact subset $K$ of $\Omega$, the antiholomorphic maps $\sigma_n$ are defined on $K$ for all $n$ large enough and converge uniformly to $\sigma$ on $K$.
(See Figure [\[pinching_fig\]](#pinching_fig){reference-type="ref" reference="pinching_fig"}.)
The pieces $\mathcal{S}_{\pmb{\mathcal{N}}_d, \mathfrak{L}}$ are not necessarily closed. For instance, matings of $\pmb{\mathcal{N}}_2$ with appropriately chosen quadratic anti-polynomials in the principal hyperbolic component of the Tricorn (each of which arises from a single deltoid-like quadrature domain) can converge to the mating of $\pmb{\mathcal{N}}_2$ with the fat Basilica anti-polynomial $\overline{z}^2-3/4$ (which lies in the C&C family). However, the total space $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ turns out to be compact. This is essentially a consequence of the fact that all maps in this space have the same external class.
**Proposition 85**. *[@LLM23][\[polygonal_compact_prop\]]{#polygonal_compact_prop label="polygonal_compact_prop"} $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ is compact.*
## Relation between geometrically finite maps {#polygonal_geom_fin_subsec}
The first result on the intimate relations between the spaces $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ and $\mathscr{C}_d$ asserts that there is a natural bijection between their geometrically finite parameters. We remark that the $d=2$ case of this result was proved in [@LLMM2] using parameter tessellation of the escape locus of the C&C family and rigidity of geometrically finite maps (see Subsections [6.1.2](#c_and_c_straightening_mating_subsubsec){reference-type="ref" reference="c_and_c_straightening_mating_subsubsec"} and [6.1.3](#c_and_c_geom_fin_bijection_subsubsec){reference-type="ref" reference="c_and_c_geom_fin_bijection_subsubsec"}). It turns out that David surgery techniques and conformal removability results allow one to simplify the above arguments, and this strategy was adopted in [@LLM23] to handle the general case.
**Definition 86**. A Schwarz reflection map $\sigma\in\mathcal{S}_{\pmb{\mathcal{N}}_d}$ is said to be *geometrically finite* if every critical point of $\sigma$ in the limit set $\Lambda(\sigma)$ is preperiodic.
By the classification of Fatou components of maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ (i.e., components of $\mathop{\mathrm{int}}{K(\sigma)}$) and their relations with critical points, each periodic Fatou component of a geometrically finite map in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ is an attracting/parabolic basin, or a basin of attraction of some singular point on $\partial T(\sigma)$. Moreover, such a map has no Cremer cycle.
Let us denote by $\mathcal{S}_{\pmb{\mathcal{N}}_d}^{gf}, \mathscr{C}_d^{gf}$ the collection of geometrically finite maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}, \mathscr{C}_d$ (respectively).
**Theorem 87**. *[@LLM23][\[geom_finite_bij_polygonal_thm\]]{#geom_finite_bij_polygonal_thm label="geom_finite_bij_polygonal_thm"} There is a bijection $$\Phi: \mathscr{C}_{d}^{gf} \longrightarrow \mathcal{S}_{\pmb{\mathcal{N}}_d}^{gf},$$ such that for each $f\in \mathscr{C}_{d}^{gf}$, the corresponding Schwarz reflection $\Phi(f)$ is a conformal mating of $f$ with $\pmb{\mathcal{N}}_d$. However, both $\Phi$ and $\Phi^{-1}$ are discontinuous.*
The proof of the above result essentially uses
1. the David surgery of Lemma [\[david_surgery_lemma\]](#david_surgery_lemma){reference-type="ref" reference="david_surgery_lemma"} and its higher period variants that allow one to turn anti-polynomials into Schwarz reflections with $\pmb{\mathcal{N}}_d$ as their external class,
2. the combinatorial structure of preperiodic laminations of geometrically finite maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$; more precisely, the fact that the push-forward of such a preperiodic lamination under $\pmb{\mathcal{E}}_d$ satisfies the properties of rational laminations listed in Proposition [\[rat_lami_prop\]](#rat_lami_prop){reference-type="ref" reference="rat_lami_prop"} (see Definition [Definition 41](#push_lami_def){reference-type="ref" reference="push_lami_def"} for the notion of push-forward of laminations),
3. standard realization theorems for anti-polynomials; specifically, existence of anti-polynomials with prescribed rational laminations, and
4. conformal removability of limit/Julia sets of geometrically finite maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ and $\mathscr{C}_d$.
It is worth pointing out that the source of discontinuity in the above theorem is quasiconformally deformable parabolic parameters (the same phenomenon causes discontinuity for other straightening maps in antiholomorphic dynamics too, cf. [@IM1]).
## Relation between periodically repelling maps {#polygonal_per_rep_subsec}
We now turn our attention to maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}, \mathscr{C}_d$ with no non-repelling cycle. These subspaces are denoted by $\mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}, \mathscr{C}_{d}^{r}$.
By Theorem [\[geom_finite_bij_polygonal_thm\]](#geom_finite_bij_polygonal_thm){reference-type="ref" reference="geom_finite_bij_polygonal_thm"}, there exists a bijection between post-critically finite maps of $\mathscr{C}_{d}^{r}$ and $\mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}$. Thanks to certain continuity properties for rational laminations of maps in $\mathscr{C}_{d}^{r}$ and preperiodic laminations of maps in $\mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}$, one can extend the above bijection to a homeomorphism between periodically repelling combinatorial classes of $\mathscr{C}_{d}^{r}$ and $\mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}$.
**Theorem 88**. *[@LLM23][\[comb_class_homeo_polygonal_thm\]]{#comb_class_homeo_polygonal_thm label="comb_class_homeo_polygonal_thm"} There is a homeomorphism $$\Phi: \faktor{\mathscr{C}_{d}^{r}}{\sim} \longrightarrow \faktor{\mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}}{\sim}\ ,$$ where $\sim$ identifies maps with the same rational/preperiodic lamination.*
Combining Theorem [\[comb_class_homeo_polygonal_thm\]](#comb_class_homeo_polygonal_thm){reference-type="ref" reference="comb_class_homeo_polygonal_thm"} with combinatorial rigidity of at most finitely renormalizable maps in $\mathscr{C}_{d}^{r}$ and $\mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}$ (i.e., such maps are uniquely determined by their rational/preperiodic laminations), we obtain the following corollary. We denote the collection of at most finitely renormalizable maps in $\mathscr{C}_{d}^{r}, \mathcal{S}_{\pmb{\mathcal{N}}_d}^{r}$ by $\mathscr{C}_{d}^{fr}, \mathcal{S}_{\pmb{\mathcal{N}}_d}^{fr}$, respectively.
**Corollary 89**. *[@LLM23][\[polygonal_fin_renorm_cor\]]{#polygonal_fin_renorm_cor label="polygonal_fin_renorm_cor"} There is a homeomorphism $$\Phi: \mathscr{C}_{d}^{fr} \longrightarrow \mathcal{S}_{\pmb{\mathcal{N}}_d}^{fr}$$ such that for each $f\in \mathscr{C}_{d}^{fr}$, the corresponding Schwarz reflection $\Phi(f)$ is a conformal mating of $f$ with $\pmb{\mathcal{N}}_d$.*
# Correspondences as matings: systematic theory via Schwarz dynamics {#general_mating_corr_sec}
In Subsection [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"}, we described the dynamics and parameter space of a family of $2$:$2$ antiholomorphic correspondences that arise as matings of quadratic parabolic anti-rational maps and the antiholomorphic analog $\mathbbm{G}_2$ of the modular group. Such families, in the holomorphic setting [@BP; @BuLo1; @BuLo2; @BuLo3] as well as in the antiholomorphic setting [@LLMM3], were constructed by looking at explicit algebraic correspondences of bi-degree $2$:$2$. While this allows for a complete understanding of the dynamics and parameter planes of these correspondences, a shortcoming of this approach is that one is forced to rely heavily on the structure of low-dimensional parameter spaces to realize matings as correspondences.
In general, it seems hard to come up directly with explicit algebraic correspondences of bi-degree $d$:$d$ ($d\geq 2$) that exhibit similar mating phenomena. However, the intimate connection between antiholomorphic correspondences and Schwarz reflection maps (expounded in Subsections [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}, [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"}) suggests that the task of constructing antiholomorphic correspondences with prescribed hybrid dynamical behavior is strongly related to manufacturing Schwarz reflection maps with suitable dynamical properties. In [@LMM3], this approach was successfully adopted to generalize the family of $2$:$2$ antiholomorphic correspondences arising from cubic Chebyshev polynomials to arbitrary degree. We will now collect the main results of that paper, which settled a suitably modified version of a conjecture by Bullett and Freiberger in the antiholomorphic setting [@BuFr §3, p. 3926].
We recall that the space $\pmb{\mathcal{B}}_d$ of parabolic anti-rational maps was introduced in Subsection [3.2.3](#para_anti_rat_gen_subsubsec){reference-type="ref" reference="para_anti_rat_gen_subsubsec"}. By definition, the *anti-Hecke group* $\mathbbm{G}_d$ is a group of conformal and anti-conformal automorphisms of $\mathbb{D}$ generated by the rigid rotation by angle $\frac{2\pi}{d+1}$ around the origin and the reflection in the hyperbolic geodesic of $\mathbb{D}$ connecting $1$ to $e^{\frac{2\pi i}{d+1}}$. It is an index $d+1$ extension of the ideal $(d+1)$-gon reflection group $\pmb{G}_d$ (cf. Subsection [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"} and [@LMM3 §3.1]). The name anti-Hecke is justified by the observation that replacing the reflection map with a Möbius inversion (fixing the geodesic connecting $1$ to $e^{\frac{2\pi i}{d+1}}$) in the above definition of $\mathbbm{G}_d$ yields the standard Hecke group (cf. [@Bea95 §11.3]).
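One way to see the index (assuming, as in Definition [Definition 8](#regular_ideal_polygon_ref_group_def){reference-type="ref" reference="regular_ideal_polygon_ref_group_def"}, that $\Pi(\pmb{G}_d)$ is the regular ideal $(d+1)$-gon with vertices at the $(d+1)$-st roots of unity): conjugating the generating reflection of $\mathbbm{G}_d$ by the powers of the rotation $z\mapsto e^{\frac{2\pi i}{d+1}}z$ yields the reflections in all $d+1$ sides of $\Pi(\pmb{G}_d)$, so $\pmb{G}_d$ is a subgroup of $\mathbbm{G}_d$ (normal, since conjugation by the rotation permutes the side reflections and the generating reflection already lies in $\pmb{G}_d$). As no non-trivial power of the rotation lies in $\pmb{G}_d$ (such a power preserves the interior of the fundamental polygon $\Pi(\pmb{G}_d)$, while a non-trivial element of $\pmb{G}_d$ maps it off itself), the quotient is cyclic of order $d+1$, and the index is $d+1$.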
We denote by $\mathfrak{U}_{d+1}$ the space of degree $d+1$ polynomials $f$ such that $f\vert_{\overline{\mathbb{D}}}$ is injective and $f$ has a unique (non-degenerate) critical point on $\mathbb{S}^1$.
**Theorem 90**. *[@LMM3 Theorem A][\[general_mating_corr_existence_thm\]]{#general_mating_corr_existence_thm label="general_mating_corr_existence_thm"} Let $R\in\pmb{\mathcal{B}}_d$. Then, there exists $f\in\mathfrak{U}_{d+1}$ such that the associated antiholomorphic correspondence $\mathfrak{C}$ defined as $$(z,w)\in\mathfrak{C}\iff \frac{f(w)-f(\eta(z))}{w-\eta(z)}=0
\label{corr_eqn}$$ is the mating of the anti-Hecke group $\mathbbm{G}_d$ and $R$.*
*Moreover, this mating operation yields a bijection between $\ \faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})}$ and the connectedness locus of the moduli space of antiholomorphic correspondences generated by deck transformations of polynomials $f\in\mathfrak{U}_{d+1}$ and reflection in the unit disk.*
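To see why the correspondence $\mathfrak{C}$ defined by Equation [\[corr_eqn\]](#corr_eqn){reference-type="eqref" reference="corr_eqn"} has bi-degree $d$:$d$, it suffices to expand the quotient: writing $f(u)=\sum_{j=0}^{d+1}a_ju^j$, we have $$\frac{f(w)-f(\eta(z))}{w-\eta(z)}=\sum_{j=1}^{d+1}a_j\left(w^{j-1}+w^{j-2}\eta(z)+\cdots+\eta(z)^{j-1}\right),$$ which, for fixed $z$, is a polynomial of degree $d$ in $w$ and, for fixed $w$, a polynomial of degree $d$ in $\eta(z)$. Since $\eta$ is an anti-Möbius reflection, a generic point $z$ thus has $d$ images under $\mathfrak{C}$, and a generic point $w$ has $d$ preimages.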
## Sketch of proof of the realization theorem {#gen_mating_corr_existence_proof_subsec}
In order to motivate the proof of Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"}, let us recall that in bi-degree $2$:$2$, the dynamical properties of the correspondences were obtained essentially from the parallel study of the dynamics of the associated Schwarz reflection maps.
### Motivation from the cubic Chebyshev family {#learning_from_examples_subsubsec}
The following two relations between Schwarz reflections and correspondences described in Subsections [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}, [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"} are of particular importance.
$\bullet$ The fact that the conformal class of the associated Schwarz reflections on their tiling sets is given by the map $\pmb{\mathcal{F}}_2$ was instrumental in establishing the $\mathbbm{G}_2$-structure of the correspondences on their lifted tiling sets (see Subsections [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"} and [4.3.3](#chebyshev_center_corr_subsubsec){reference-type="ref" reference="chebyshev_center_corr_subsubsec"}).
$\bullet$ The fact that the non-escaping set dynamics of the associated Schwarz reflections are hybrid conjugate to the actions of anti-rational maps in $\pmb{\mathcal{B}}_2$ on their filled Julia sets implied that suitable branches of the correspondences on their lifted tiling sets are hybrid conjugate to anti-rational maps.
Guided by this analogy, we will proceed to construct a space of Schwarz reflections exhibiting the above two features.
### The degree $d$ anti-Farey map {#anti_farey_subsubsec}
To implement the above strategy, a degree $d$ generalization of the map $\pmb{\mathcal{F}}_2$ was given in [@LMM3 §3.1]. Succinctly, as the Nielsen map $\pmb{\mathcal{N}}_d$ of the regular ideal $(d+1)$-gon reflection group $\pmb{G}_d$ commutes with $M_\omega(z)=\omega z$ (where $\omega:=e^{\frac{2\pi i}{d+1}}$), the quotient map $\overline{\mathbb{D}}\to\faktor{\overline{\mathbb{D}}}{\langle M_\omega\rangle}$ semi-conjugates $\pmb{\mathcal{N}}_d:\overline{\mathbb{D}}\setminus \mathop{\mathrm{int}}{\Pi(\pmb{G}_d)}\to\overline{\mathbb{D}}$ to a well-defined factor map $$\pmb{\mathcal{F}}_d:\faktor{\left(\overline{\mathbb{D}}\setminus \mathop{\mathrm{int}}{\Pi(\pmb{G}_d)}\right)}{\langle M_\omega\rangle}\longrightarrow \faktor{\overline{\mathbb{D}}}{\langle M_\omega\rangle}.$$
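The commutation used above is a short consequence of the rotational symmetry of the regular ideal polygon (a brief verification, in terms of the generators $\pmb{\rho}_i$ of Definition [Definition 8](#regular_ideal_polygon_ref_group_def){reference-type="ref" reference="regular_ideal_polygon_ref_group_def"}): the rotation $M_\omega$ permutes the circles $\pmb{C}_1,\cdots,\pmb{C}_{d+1}$ cyclically, so that $M_\omega\circ\pmb{\rho}_i\circ M_\omega^{-1}=\pmb{\rho}_{i+1}$ (indices taken modulo $d+1$, for a suitable cyclic labelling). Recalling that $\pmb{\mathcal{N}}_d$ acts by $\pmb{\rho}_i$ on the part of $\overline{\mathbb{D}}\setminus \mathop{\mathrm{int}}{\Pi(\pmb{G}_d)}$ cut off by $\pmb{C}_i$, and that $M_\omega$ maps this region onto the one cut off by $\pmb{C}_{i+1}$, we get $$\pmb{\mathcal{N}}_d(M_\omega(z))=\pmb{\rho}_{i+1}(M_\omega(z))=M_\omega(\pmb{\rho}_i(z))=M_\omega(\pmb{\mathcal{N}}_d(z)).$$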
This map $\pmb{\mathcal{F}}_d$, which coincides with the map $\pmb{\mathcal{F}}_2$ defined in Subsection [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"} for $d=2$, is called the *degree $d$ anti-Farey map*. We note that $\pmb{\mathcal{F}}_d$ has a unique critical point (of multiplicity $d$) with associated critical value at $0$ (shown in black in Figure [\[anti_farey_fig\]](#anti_farey_fig){reference-type="ref" reference="anti_farey_fig"} (right)). A crucial feature of $\pmb{\mathcal{F}}_d$ is that it acts as the identity map on the inner boundary of its domain of definition (shown in red in Figure [\[anti_farey_fig\]](#anti_farey_fig){reference-type="ref" reference="anti_farey_fig"} (right)). Moreover, when the Riemannian orbifold $\faktor{\mathbb{D}}{\langle M_\omega\rangle}$ is uniformized by the unit disk, the restriction of $\pmb{\mathcal{F}}_d$ on the outer boundary of its domain of definition becomes an expansive, orientation-reversing, degree $d$ circle covering with a unique parabolic fixed point.
### First step: constructing Schwarz reflections {#core_step_subsubsec}
Roughly speaking, the first step in the construction of an antiholomorphic correspondence that is a mating of $\mathbbm{G}_d$ and $R\in\pmb{\mathcal{B}}_d$ is to cook up a Schwarz reflection map $\sigma$ having $\pmb{\mathcal{F}}_d$ as the conformal model of its tiling set dynamics and $R$ as the hybrid class of its non-escaping set dynamics. This is achieved by a careful *quasiconformal* surgery procedure that glues in $\pmb{\mathcal{F}}_d$ outside the filled Julia set of $R\in\pmb{\mathcal{B}}_d$ (see [@LMM3 Theorem 5.1]). Specifically, such a surgery is facilitated by [@LMM3 Lemma 4.9], which states that the external dynamics $B_d$ of maps in $\pmb{\mathcal{B}}_d$ is quasiconformally compatible with the map $\pmb{\mathcal{F}}_d$ that one needs to insert (see Subsection [3.2.3](#para_anti_rat_gen_subsubsec){reference-type="ref" reference="para_anti_rat_gen_subsubsec"} for the definition of $B_d$). The fact that the resulting hybrid conformal dynamical system $\sigma$ is indeed given by a Schwarz reflection map follows from the triviality of the action of $\pmb{\mathcal{F}}_d$ on part of the boundary of its domain of definition.
Following [@LMM3 Definition 3.4], let us denote by $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ the space of Schwarz reflection maps $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ such that $\Omega$ is a Jordan quadrature domain and the tiling set dynamics of $\sigma$ is conformally conjugate to $\pmb{\mathcal{F}}_d$. The above discussion can be summarized as follows.
**Proposition 91**. *Let $R\in\pmb{\mathcal{B}}_d$. Then, there exist $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ and a global quasiconformal homeomorphism $\mathfrak{H}$ that is conformal on $\mathcal{K}(R)$, such that $\mathfrak{H}$ conjugates $R\vert_{\mathcal{K}(R)}$ to $\sigma\vert_{K(\sigma)}$.*
### Second step: constructing correspondences from Schwarz reflections {#upgrade_step_subsubsec}
It turns out that the Schwarz reflection $\sigma$ of Proposition [Proposition 91](#anti_farey_para_rat_mating_prop){reference-type="ref" reference="anti_farey_para_rat_mating_prop"} arises from a Jordan quadrature domain $\Omega$ whose uniformizing map can be chosen to be a polynomial $f\in\mathfrak{U}_{d+1}$. This essentially follows from the existence of a multiplicity $d$ critical point of $\sigma$ in its tiling set (recall that $\pmb{\mathcal{F}}_d$ has such a critical point). Moreover, this property characterizes the space $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ of Schwarz reflections.
**Proposition 92**. *[@LMM3 Proposition 3.3][\[mating_equiv_cond_prop\]]{#mating_equiv_cond_prop label="mating_equiv_cond_prop"} Let $f$ be a rational map of degree $d+1$ that is injective on $\overline{\mathbb{D}}$, $\Omega:=f(\mathbb{D})$, and $\sigma$ the Schwarz reflection map associated with $\Omega$. Then the following are equivalent.*
1. *There exists a conformal conjugacy $\psi$ between $$\quad \pmb{\mathcal{F}}_d:\faktor{\left(\overline{\mathbb{D}}\setminus \mathop{\mathrm{int}}{\Pi(\pmb{G}_d)}\right)}{\langle M_\omega\rangle}\longrightarrow \faktor{\overline{\mathbb{D}}}{\langle M_\omega\rangle}\quad \mathrm{and}\quad \sigma:T^\infty(\sigma)\setminus\mathop{\mathrm{int}}{T^0(\sigma)}\longrightarrow T^\infty(\sigma).$$ In particular, $T^\infty(\sigma)$ is simply connected.*
2. *After possibly conjugating $\sigma$ by a Möbius map and pre-composing $f$ with an element of $\mathrm{Aut}(\mathbb{D})$, the uniformizing map $f$ can be chosen to be a polynomial with a unique critical point on $\mathbb{S}^1$. Moreover, $K(\sigma)$ is connected.*
3. *$\Omega$ is a Jordan domain with a unique conformal cusp on its boundary. Moreover, $\sigma$ has a unique critical point in its tiling set $T^\infty(\sigma)$, and this critical point maps to $\mathop{\mathrm{int}}{T^0(\sigma)}$ with local degree $d+1$.*
The polynomial $f\in\mathfrak{U}_{d+1}$ giving rise to the Schwarz reflection $\sigma$ (produced in Proposition [Proposition 91](#anti_farey_para_rat_mating_prop){reference-type="ref" reference="anti_farey_para_rat_mating_prop"}) is precisely the one that appears in the statement of Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"}. Indeed, the arguments of Subsection [4.3.3](#chebyshev_center_corr_subsubsec){reference-type="ref" reference="chebyshev_center_corr_subsubsec"}, combined with the dynamical properties of $\sigma$, apply mutatis mutandis to the present setting and imply that the antiholomorphic correspondence $\mathfrak{C}$ defined by Equation [\[corr_eqn\]](#corr_eqn){reference-type="eqref" reference="corr_eqn"}
$\bullet$ is equivalent to the action of the anti-Hecke group $\mathbbm{G}_d$ on its lifted tiling set $f^{-1}(T^\infty(\sigma))$, and
$\bullet$ has a $d:1$ forward branch on 'half' of its lifted non-escaping set (i.e., on $f^{-1}(K(\sigma))\cap\overline{\mathbb{D}}$) that is hybrid conjugate to $R$.
We refer the reader to [@LMM3 Propositions 2.17, 3.12] for details.
### The bijection statement of Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"} {#corr_para_anti_rat_bijection_sketch_subsubsec}
By Proposition [\[mating_equiv_cond_prop\]](#mating_equiv_cond_prop){reference-type="ref" reference="mating_equiv_cond_prop"}, the space $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ is precisely the space of Schwarz reflections $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ with connected non-escaping set $K(\sigma)$ associated with Jordan quadrature domains $\Omega$ such that the Riemann uniformization of $\Omega$ can be chosen to be a polynomial in $\mathfrak{U}_{d+1}$.
In light of the defining equation [\[corr_eqn\]](#corr_eqn){reference-type="eqref" reference="corr_eqn"} of the correspondences, it now follows that the connectedness locus of the space of antiholomorphic correspondences generated by deck transformations of polynomials in $\mathfrak{U}_{d+1}$ and the reflection map $\eta$ can be identified with the space $\faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$. Here, we used the facts that
$\bullet$ the local branches $z\mapsto w$ of the correspondences defined by Equation [\[corr_eqn\]](#corr_eqn){reference-type="eqref" reference="corr_eqn"} are given by $f^{-1}\circ f\circ\eta$ (i.e., composition of $\eta$ with local deck transformations of $f$), and
$\bullet$ Möbius conjugating a Schwarz reflection map $\sigma$ amounts to post-composing the Riemann uniformization of $\Omega$ with the same Möbius map, which leaves the associated correspondence unaltered.
According to [@LMM3 Lemma 4.4], Möbius conjugacy classes of maps in $\pmb{\mathcal{B}}_d$ and $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ are completely determined by their hybrid classes (this is a standard consequence of the fact that all maps in these spaces have the same external class). Therefore, the quasiconformal surgery (or mating) operation explicated in Subsection [11.1.3](#core_step_subsubsec){reference-type="ref" reference="core_step_subsubsec"} gives rise to a well-defined map from $\faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})}$ to $\faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$. For the bijection statement of Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"}, one needs to argue that this map admits an inverse. This is the content of [@LMM3 Theorem 4.11], that reverses the construction of [@LMM3 Theorem 5.1]. More precisely, one can use the quasiconformal compatibility of the anti-Farey map $\pmb{\mathcal{F}}_d$ and the anti-Blaschke product $B_d$ ([@LMM3 Lemma 4.9]) to start with a Schwarz reflection map $\sigma\in\faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$ and glue the anti-Blaschke product $B_d$ outside its non-escaping set. This produces a well-defined parabolic anti-rational map $R_\sigma\in\faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})}$ that is hybrid conjugate to $\sigma$. Clearly, the association $\sigma\mapsto R_\sigma$ is the desired inverse map. Thus, we have a bijection $$\chi:\ \faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}\ \longrightarrow\ \faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})},\quad [\sigma] \mapsto [R_\sigma].$$ The map $\chi$ is called the *straightening map*.
### Replacing parabolic anti-rational maps with anti-polynomials {#most_poly_anti_hecke_mating_subsubsec}
The original conjecture of Bullett and Freiberger, translated to the antiholomorphic setting, asks whether any degree $d$ anti-polynomial with connected Julia set can be mated with the anti-Hecke group $\mathbbm{G}_d$ as a correspondence (cf. [@BuFr §3, p. 3926]). A modification of the proof of Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"} yields the following partial answer to this conjecture.
**Theorem 93**. *[@LMM3 Theorem B][\[semi_hyp_poly_mating_corr_thm\]]{#semi_hyp_poly_mating_corr_thm label="semi_hyp_poly_mating_corr_thm"} Let $p$ be a degree $d$ semi-hyperbolic anti-polynomial with connected Julia set. Then, there exists $f\in\mathfrak{U}_{d+1}$ such that the associated correspondence $\mathfrak{C}$ given by Equation [\[corr_eqn\]](#corr_eqn){reference-type="eqref" reference="corr_eqn"} is the mating of the anti-Hecke group $\mathbbm{G}_d$ and $p$.*
The main difficulty in carrying out the strategy of the proof of Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"} in this setting is the hyperbolic-parabolic mismatch that we have encountered several times in this survey. Indeed, the external map $\overline{z}^d$ of a degree $d$ anti-polynomial (with connected Julia set) is not quasiconformally compatible with the map $\pmb{\mathcal{F}}_d$. Thus, one is compelled to take the route of David surgery as described in Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"}. Specifically, Theorem [\[david_extension_general_thm\]](#david_extension_general_thm){reference-type="ref" reference="david_extension_general_thm"} can be used to prove the existence of a circle homeomorphism that conjugates $\overline{z}^d$ to $\pmb{\mathcal{F}}_d$ and admits a David extension to $\mathbb{D}$ [@LMM3 Lemma 3.2]. This allows us to glue the map $\pmb{\mathcal{F}}_d$ outside the filled Julia set of a degree $d$ anti-polynomial (with connected Julia set), *provided* one has good control on the geometry of the basin of infinity of the anti-polynomial (cf. Subsection [13.2](#integrable_thms_subsec){reference-type="ref" reference="integrable_thms_subsec"}). This is indeed the case for a semi-hyperbolic anti-polynomial $p$ (with connected Julia set), which enables us to construct a Schwarz reflection map with external map $\pmb{\mathcal{F}}_d$ and hybrid class $p$ (see [@LMM3 Propositions 3.7] for details of the construction). The rest of the proof of Theorem [\[semi_hyp_poly_mating_corr_thm\]](#semi_hyp_poly_mating_corr_thm){reference-type="ref" reference="semi_hyp_poly_mating_corr_thm"} follows the scheme of Subsection [11.1.4](#upgrade_step_subsubsec){reference-type="ref" reference="upgrade_step_subsubsec"}.
## Regularity of the mating surgery {#mating_regularity_subsec}
The bijection between the moduli space $\faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})}$ of parabolic anti-rational maps and the moduli space $\faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$ of Schwarz reflections (see Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"}) turns out to be continuous at an abundant set of parameters. However, since the surgery procedure described in Subsection [11.1.3](#core_step_subsubsec){reference-type="ref" reference="core_step_subsubsec"} involves the Riemann maps of the marked parabolic basins of the anti-rational maps, one needs to study continuity properties of Riemann maps to conclude continuity of the mating operation (cf. Remark [Remark 53](#straightening_regularity_rem){reference-type="ref" reference="straightening_regularity_rem"}). This is a non-trivial task, since filled Julia sets do not move continuously in general.
To circumvent this issue, a different mating surgery (but with equivalent outcome) was developed in [@LMM3], one that restricts members of $\pmb{\mathcal{B}}_d^{\mathrm{simp}}$ (maps with a simple parabolic fixed point at $\infty$) to pinched anti-polynomial-like maps and replaces the external maps of such pinched anti-polynomial-like restrictions with the anti-Farey map $\pmb{\mathcal{F}}_d$. According to [@LMM3 Theorem 5.2], this can be performed with continuous control over the dilatations of the hybrid conjugacies and the domains of definition of these conjugacies.
The same can be done for the inverse surgery from $\faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}^{\mathrm{simp}}}{\mathrm{PSL}_2(\mathbb{C})}$ (where $\mathcal{S}_{\pmb{\mathcal{F}}_d}^{\mathrm{simp}}$ consists of Schwarz reflections in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ with a simple cusp on the associated quadrature domain boundary) to $\faktor{\pmb{\mathcal{B}}_d^{\mathrm{simp}}}{\mathrm{Aut}(\mathbb{C})}$. Specifically, one can restrict $\sigma\in\mathcal{S}_{\pmb{\mathcal{F}}_d}^{\mathrm{simp}}$ to a pinched anti-polynomial-like map and replace its external map with the anti-Blaschke product $B_d$, thus producing a parabolic anti-rational map $R_\sigma\in \faktor{\pmb{\mathcal{B}}_d^{\mathrm{simp}}}{\mathrm{Aut}(\mathbb{C})}$ that is hybrid equivalent to $\sigma$. This generalizes the straightening theorem for pinched anti-quadratic-like maps proved in [@LLMM3 Proposition 4.15, Theorem 5.4] to arbitrary degree (see [@LMM3 Theorem 4.8, Lemma 4.13], cf. Subsections [4.3.2](#chebyshev_center_hybrid_conj_subsubsec){reference-type="ref" reference="chebyshev_center_hybrid_conj_subsubsec"}, [6.2.2](#cubic_cheby_qc_straightening_subsubsec){reference-type="ref" reference="cubic_cheby_qc_straightening_subsubsec"}). As in the degree two case, the general straightening theorem uses Warschawski's result on boundary behavior of conformal maps of infinite strips in an essential way.
Utilizing these parameter dependencies of the mating surgeries, it was proved in [@LMM3 §6] that the straightening map $$\chi:\ \faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}^{\mathrm{simp}}}{\mathrm{PSL}_2(\mathbb{C})}\ \longrightarrow\ \faktor{\pmb{\mathcal{B}}_d^{\mathrm{simp}}}{\mathrm{Aut}(\mathbb{C})},\quad [\sigma] \mapsto [R_\sigma]$$ (respectively, its inverse) is continuous at hyperbolic and quasiconformally rigid parameters of $\faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}^{\mathrm{simp}}}{\mathrm{PSL}_2(\mathbb{C})}$ (respectively, of $\faktor{\pmb{\mathcal{B}}_d^{\mathrm{simp}}}{\mathrm{Aut}(\mathbb{C})}$).
## Shabat polynomial slices in the space of correspondences {#shabat_slices_subsec}
Theorem [\[general_mating_corr_existence_thm\]](#general_mating_corr_existence_thm){reference-type="ref" reference="general_mating_corr_existence_thm"}, which is a general existence theorem for correspondences that are matings of the anti-Hecke group $\mathbbm{G}_d$ and degree $d$ parabolic anti-rational maps, combined with the regularity of the mating surgery discussed in Subsection [11.2](#mating_regularity_subsec){reference-type="ref" reference="mating_regularity_subsec"}, paves the way for studying natural one-parameter slices of correspondences. Such slices generalize the one-parameter family of antiholomorphic correspondences arising from (injective restrictions of) the cubic Chebyshev polynomial (see Subsection [6.2](#chebyshev_gen_subsec){reference-type="ref" reference="chebyshev_gen_subsec"}). The following account is based on [@LMM4].
### Shabat polynomials and their role {#shabat_subsubsec}
Generically, the complex dimension of a natural family of holomorphic/antiholomorphic maps equals the number of free/active critical orbits of the maps in the family. Guided by this philosophy, one aims at constructing one-parameter slices in $\ \faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$ such that
- the families are closed under quasiconformal deformation, and
- the corresponding Schwarz reflections have a unique free critical value.
Let $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ be a Schwarz reflection in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ and let $f\in\mathfrak{U}_{d+1}$ be a polynomial such that $f$ maps $\overline{\mathbb{D}}$ homeomorphically onto $\overline{\Omega}$ (see Subsection [11.1.4](#upgrade_step_subsubsec){reference-type="ref" reference="upgrade_step_subsubsec"}). Pre-composing $f$ with a rotation, we can assume that the unique critical point of $f$ on $\mathbb{S}^1$ is at $1$. Recall that the set of critical values of $\sigma$ is contained in the set of critical values of $f$, and the difference, if non-empty, is the cusp $f(1)$. Note further that the critical value $f(\infty)=\infty$ of $\sigma$ lies inside the droplet, while the point $f(1)$ is a fixed point of $\sigma$ since it lies on the boundary of $\Omega$. Hence, none of these points is an active critical value. Thus, the condition that $\sigma$ has a unique free critical value is equivalent to the requirement that $f$ has exactly one critical value other than $f(1)$ and $\infty$; i.e., $f$ has three critical values in $\widehat{\mathbb{C}}$. This leads us to the space of *Shabat polynomials* [@LZ04; @Sch94], of which the cubic Chebyshev polynomial is the simplest non-trivial example.
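For concreteness, the count for the simplest example reads as follows: the cubic Chebyshev polynomial $u\mapsto u^3-3u$ has the critical points $\pm 1$ in $\mathbb{C}$ (together with the fully ramified critical point at $\infty$), and $$\frac{d}{du}\left(u^3-3u\right)=3(u^2-1),\qquad 1\mapsto -2,\quad -1\mapsto 2,\quad \infty\mapsto\infty,$$ so it has exactly two finite critical values and three critical values in $\widehat{\mathbb{C}}$.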
**Definition 94** (Shabat and Belyi maps).

1. A polynomial $g:\widehat{\mathbb{C}}\to\widehat{\mathbb{C}}$ (of degree at least three) is called a *Shabat polynomial* if it has exactly two finite critical values. Two Shabat polynomials $g_1$ and $g_2$ are called *equivalent* if there exist affine maps $A_1, A_2$ such that $g_2=A_1\circ g_1\circ A_2$.

2. A rational or anti-rational map $R:\widehat{\mathbb{C}}\to\widehat{\mathbb{C}}$ is said to be *Belyi* if it has at most three critical values.
There is an important combinatorial invariant associated with a Shabat polynomial $g$; namely, the *dessin d'enfants* $\mathfrak{T}(g)$. Let us briefly recall the definition and basic properties of this invariant (see [@LZ04 §2], [@LMM4 Appendix A] for more background). For an arc $\gamma\subset\mathbb{C}$ connecting the two finite critical values $y_1$ and $y_2$ of a Shabat polynomial $g$, the preimage $$\mathfrak{T}_\gamma(g):=g^{-1}(\gamma)$$ is a plane tree with vertices at $g^{-1}(\{y_1, y_2\})$ (the set $\mathbb{C}\setminus\mathfrak{T}_\gamma(g)$ is connected since $g$ has no pole in $\mathbb{C}$ and this set is a topological annulus since $g:\mathbb{C}\setminus\mathfrak{T}_\gamma(g)\to\mathbb{C}\setminus\gamma$ is a covering map). One colors the pre-images of $y_1$ and $y_2$ black and white, respectively. Then $\mathfrak{T}_\gamma(g)$ has the structure of a bicolored plane tree. The tree $\mathfrak{T}_\gamma(g)$ has $\deg{(g)}$ many edges, and the valence of a vertex of $\mathfrak{T}_\gamma(g)$ is equal to the local degree of $g$ at that point. It is easily checked that the isotopy type of $\mathfrak{T}_\gamma(g)$ (relative to the vertices) is independent of the choice of the arc $\gamma$ connecting $y_1$ and $y_2$. In particular, the various $\mathfrak{T}_\gamma(g)$ are isomorphic as combinatorial bicolored plane trees. This combinatorial bicolored plane tree is called the dessin d'enfants of $g$, and is denoted by $\mathfrak{T}(g)$. Moreover, the isomorphism class of $\mathfrak{T}(g)$ (as a bicolored plane tree) remains unaltered if $g$ is replaced by a Shabat polynomial equivalent to $g$. The following classical result states that dessins d'enfants are complete invariants of Shabat polynomials.
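As a simple illustration of these notions, consider again the cubic Chebyshev polynomial $g(u)=u^3-3u$, whose finite critical values are $y_1=-2$ and $y_2=2$. Choosing $\gamma=[-2,2]$, one computes $$\mathfrak{T}_\gamma(g)=g^{-1}([-2,2])=[-2,2],$$ a path with vertices at $-2,-1,1,2$: the preimages $-2, 1$ of $-2$ are colored black, and the preimages $-1, 2$ of $2$ are colored white. This bicolored path has $3=\deg(g)$ edges, and the valences $1,2,2,1$ of its vertices match the local degrees of $g$ at $-2,-1,1,2$.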
**Theorem 95**. *[@Sch94 Theorem I.5], [@LZ04 Theorem 2.2.9][\[shabat_classification_theorem\]]{#shabat_classification_theorem label="shabat_classification_theorem"} The map $g\mapsto\mathfrak{T}(g)$ induces a bijection between the set of equivalence classes of Shabat polynomials and the set of isomorphism classes of bicolored plane trees (with at least one black and one white vertex of valence greater than one).*
### One-parameter slices in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ {#one_para_slice_shabat_subsubsec}
We now return to the construction of one-parameter slices in $\ \faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$. As mentioned before, a Schwarz reflection $\sigma:\overline{\Omega}\to\widehat{\mathbb{C}}$ in $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ arising from a Shabat polynomial $f\in\mathfrak{U}_{d+1}$ has at most three critical values. Hence, the pullback $\sigma^{-1}(\gamma')$ of an arc $\gamma'$ connecting $f(1)$ to the free critical value of $\sigma$ also carries the structure of a bicolored plane tree. We denote this combinatorial bicolored plane tree by $\mathfrak{T}(\sigma)$, and call it the dessin d'enfants of $\sigma$. Let us now assume that the free critical value of $\sigma$ lies in $\Omega$. It is not hard to see from the relation $\sigma\equiv f\circ\eta\circ(f\vert_{\overline{\mathbb{D}}})^{-1}$ that one can describe $\mathfrak{T}(\sigma)$ purely in terms of $\mathfrak{T}(f)$, and vice versa.
Specifically, $\mathfrak{T}(\sigma)$ is obtained by pruning a distinguished peripheral edge from $\mathfrak{T}(f)$ and reversing the cyclic order of the edges around each vertex of the resulting tree (see [@LMM4]). We also note that if $\sigma$ and $\sigma_1$ (of the above form) are Hurwitz equivalent (in particular, if they are quasiconformally conjugate), then their dessins d'enfants $\mathfrak{T}(\sigma)$ and $\mathfrak{T}(\sigma_1)$ are isomorphic. The above discussion now implies that the Shabat polynomials $f$, $f_1$ associated with $\sigma$, $\sigma_1$ also have isomorphic dessins d'enfants. Thus, in light of Theorem [\[shabat_classification_theorem\]](#shabat_classification_theorem){reference-type="ref" reference="shabat_classification_theorem"}, a one-parameter family of Schwarz reflections in $\ \faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$ is closed under quasiconformal deformation precisely when the associated Shabat polynomials are equivalent in the sense of Definition [Definition 94](#shabat_def){reference-type="ref" reference="shabat_def"} (see [@LMM4]).
After possibly conjugating the Schwarz reflections $\sigma$ by affine maps (which amounts to post-composing the associated Shabat polynomials with the same affine maps), we can require that all such $\sigma$ have the same marked critical values and cusps. Then, the corresponding Shabat polynomials $f$ only differ by pre-composition with affine maps. Instead of fixing the domain of univalence $\mathbb{D}$ and varying the Shabat polynomials (that differ by pre-composition with affine maps and are injective on $\mathbb{D}$), it is more convenient to fix a Shabat polynomial $\pmb{f}$ with dessin d'enfants $\mathfrak{T}(\pmb{f})\cong\mathfrak{T}$, and restrict it to all possible disks of univalence such that the disks contain a marked critical point of $\pmb{f}$ on their boundary. This leads to the following space of Schwarz reflections.
Let us fix a degree $d+1$ Shabat polynomial $\pmb{f}$ with dessin d'enfants $\mathfrak{T}(\pmb{f})\cong\mathfrak{T}$ such that $\mathfrak{T}(\pmb{f})$ has a valence two 'black' vertex $\pmb{v_b}$ with an adjacent valence one 'white' vertex $\pmb{v_w}$.
**Definition 96**. We define the parameter space $$S_{\mathfrak{T}}:=\{ a\in\mathbb{C}: \pmb{v_w}\in\Delta_a:=B(a,\vert \pmb{v_b}-a\vert)\ \textrm{and}\ \pmb{f}\vert_{\overline{\Delta_a}}\ \textrm{is\ injective}\},$$ and the associated space of Schwarz reflections $$\mathcal{S}_{\mathfrak{T}}:=\{\sigma_a\equiv\pmb{f}\circ\eta_a\circ (\pmb{f}\vert_{\overline{\Delta_a}})^{-1}:\Omega_a:=\pmb{f}(\Delta_a)\longrightarrow\widehat{\mathbb{C}}: a\in S_{\mathfrak{T}}\},$$ where $\eta_a$ stands for reflection in the circle $\partial\Delta_a$.
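To make the definition concrete, here is a minimal numerical sketch (in Python) of the Schwarz reflection $\sigma_a=\pmb{f}\circ\eta_a\circ(\pmb{f}\vert_{\overline{\Delta_a}})^{-1}$, instantiated with the cubic Chebyshev polynomial $\pmb{f}(u)=u^3-3u$, the dessin vertices $\pmb{v_b}=1$ and $\pmb{v_w}=2$ (for the coloring in which the preimages of $-2$ are black), and the sample parameter $a=2$. These concrete choices, as well as the helper names in the code, are illustrative assumptions rather than choices made in the cited references; one can check that $\pmb{f}$ is injective on $\overline{\Delta_2}=\overline{B}(2,1)$ and that $\pmb{v_w}=2\in\Delta_2$, so that $a=2$ lies in $S_{\mathfrak{T}}$.

```python
import numpy as np

# Minimal numerical sketch of a Schwarz reflection sigma_a from Definition 96,
# instantiated with the cubic Chebyshev polynomial f(u) = u^3 - 3u, the dessin
# vertices v_b = 1, v_w = 2, and the sample parameter a = 2 (illustrative
# choices; f can be checked to be injective on the closed disk B(2, 1)).

f = lambda z: z**3 - 3*z
a, r = 2.0, 1.0                                   # Delta_a = B(a, |v_b - a|) = B(2, 1)

def eta_a(z):
    """Reflection in the circle bounding Delta_a."""
    return a + r**2 / np.conj(z - a)

def f_inv_in_disk(w):
    """Branch of f^{-1} with values in the closed disk Delta_a: since f is
    injective there, exactly one root of z^3 - 3z - w = 0 lies in the disk,
    and it is the root closest to the center a."""
    roots = np.roots([1.0, 0.0, -3.0, -w])
    return roots[np.argmin(np.abs(roots - a))]

def sigma_a(w):
    """Schwarz reflection of Omega_a = f(Delta_a): sigma_a = f o eta_a o (f|_Delta_a)^{-1}."""
    return f(eta_a(f_inv_in_disk(w)))

# Sanity check: sigma_a fixes the boundary of Omega_a pointwise, because eta_a
# fixes the circle bounding Delta_a.
z_bdry = a + r*np.exp(1j*1.0)
w_bdry = f(z_bdry)
print(abs(sigma_a(w_bdry) - w_bdry))              # ~ 0 up to round-off
```

In the same spirit, iterating $\sigma_a$ and recording which orbits escape to the fundamental tile gives a crude numerical picture of the tiling set versus the non-escaping set.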
### Disks of univalence of Shabat polynomials {#shabat_disk_univ_subsubsec}
For the cubic Chebyshev polynomial $\pmb{f}(u)=u^3-3u$ discussed in Subsection [4.3](#chebyshev_subsec){reference-type="ref" reference="chebyshev_subsec"}, an explicit description of $S_{\mathfrak{T}}$ was given in [@LLMM3 §3] using exact numerical computation. In [@LMM4], a detailed analysis of univalence properties of Shabat polynomials is carried out to provide a precise qualitative description of the parameter space $S_{\mathfrak{T}}$ for any Shabat polynomial $\pmb{f}$ with dessin d'enfants $\mathfrak{T}(\pmb{f})$ such that $\mathfrak{T}(\pmb{f})$ has a valence two vertex $\pmb{v_b}$ with an adjacent valence one vertex $\pmb{v_w}$. We summarize the main results below.
**Theorem 97** (Topology of the parameter space $S_{\mathfrak{T}}$).
1. *$\mathop{\mathrm{int}}{S_{\mathfrak{T}}}$ is a bounded Jordan domain, and $\overline{S_{\mathfrak{T}}}=\overline{\mathop{\mathrm{int}}{S_{\mathfrak{T}}}}$.*
2. *$\mathop{\mathrm{int}}{S_{\mathfrak{T}}}= \{a\in\mathbb{C}: \pmb{v_w}\in\Delta_a,\ \pmb{f}\vert_{\overline{\Delta_a}}\ \textrm{is\ injective,\ and}\ \pmb{f}(\pmb{v_b})\ \textrm{is\ a}\ (3,2)\ \textrm{cusp\ on}\ \partial\Omega_a \}.$*
3. *The boundary $\partial S_{\mathfrak{T}}$ has the structure of a topological quadrilateral such that*
1. *two *horizontal* sides of $\partial S_{\mathfrak{T}}$ are characterized by the existence of a double point on $\partial\Omega_a$,*
2. *one *vertical* side of $\partial S_{\mathfrak{T}}$ is characterized by the condition that $\pmb{f}(\pmb{v_b})$ is a $(\nu,2)-$cusp on $\partial\Omega_a$, for $\nu\geq 5$, and*
3. *the other *vertical* side of $\partial S_{\mathfrak{T}}$ is characterized by the condition that $\pmb{v_w}\in\partial\Delta_a$.*
The proof of the above theorem essentially depends on the local dynamics of Schwarz reflection maps near conformal cusps and double points, and the relation between such singular points and the critical orbits of the corresponding Schwarz reflection maps.
### Connectedness locus of $\mathcal{S}_{\mathfrak{T}}$ {#shabat_slice_conn_locus_subsubsec}
As usual, the connectedness locus $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ of $\mathcal{S}_{\mathfrak{T}}$ is defined as the collection of Schwarz reflections in the family with connected non-escaping set. Recall that for all maps $\sigma_a\in\mathcal{S}_{\mathfrak{T}}$, there is a passive critical value (of multiplicity $d$) in the tiling set that escapes to the fundamental tile in one iterate. A map $\sigma_a$ lies in the connectedness locus $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ if and only if the unique free critical value of $\sigma_a$ lies in the non-escaping set. The existence of a unique fully ramified critical point in the tiling set of maps in $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ allows one to show that the tiling set dynamics of each map $\sigma_a\in\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ is conformally conjugate to the anti-Farey map $\pmb{\mathcal{F}}_d$ (see Subsections [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"} and [6.2.2](#cubic_cheby_qc_straightening_subsubsec){reference-type="ref" reference="cubic_cheby_qc_straightening_subsubsec"} for discussions on the same result in the $d=2$ setting, cf. [@LMM4]). In other words, the one-parameter family $\mathcal{S}_{\mathfrak{T}}$ of Schwarz reflections meets the space $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ in its connectedness locus.
**Proposition 98**. *$\mathcal{S}_{\mathfrak{T}}\cap \mathcal{S}_{\pmb{\mathcal{F}}_d} = \mathcal{C}(\mathcal{S}_{\mathfrak{T}}).$*
### Image of $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ under the straightening map $\chi$
We will now describe the image of the connectedness locus $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ under the straightening map $\chi$ defined in Subsection [11.1.5](#corr_para_anti_rat_bijection_sketch_subsubsec){reference-type="ref" reference="corr_para_anti_rat_bijection_sketch_subsubsec"}.
As each $\sigma_a\in\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ has at most two critical values in its non-escaping set (namely, the free critical value $\pmb{f}(\pmb{v_w})$ and possibly the conformal cusp $\pmb{f}(\pmb{v_b})$), it follows from the definition of $\chi$ that the straightened map $\chi(\sigma_a)$ has at most two critical values in its filled Julia set and exactly one (fully ramified) critical value in its completely invariant parabolic basin. In other words, each $R\in\chi(\mathcal{C}(\mathcal{S}_{\mathfrak{T}}))$ is a Belyi anti-rational map of $\widehat{\mathbb{C}}$. Moreover, if $\chi(\sigma_a)$ has three critical values, then the parabolic fixed point $\infty$ is one of them. The dessin d'enfants $\mathfrak{T}(R)$ of the Belyi map $R$ is defined as the combinatorial plane bicolored tree isomorphic to $R^{-1}(\gamma')$, where $\gamma'$ is an arc connecting the parabolic fixed point $\infty$ to the free critical value of $R$ (which lies in the filled Julia set of $R$).
Note further that the dessin d'enfants of any $\sigma_a$ has a distinguished valence one vertex at the cusp $\pmb{f}(\pmb{v_b})\in\partial\Omega_a$, which can be regarded as the root of the tree. We denote this abstract rooted bicolored plane tree by $(\mathfrak{T}^{\textrm{del}}, O)$ (the isomorphism class of this tree is independent of the parameter $a\in S_{\mathfrak{T}}$). Here, the superscript 'del' is chosen to indicate that the tree $\mathfrak{T}^{\textrm{del}}$ is obtained by deleting the edge $[\pmb{v_w},\pmb{v_b})$ from $\mathfrak{T}(\pmb{f})\cong\mathfrak{T}$ and reversing the cyclic order of the edges around each vertex of the resulting tree (see Subsection [11.3.2](#one_para_slice_shabat_subsubsec){reference-type="ref" reference="one_para_slice_shabat_subsubsec"}). Evidently, the dessin d'enfants of each $R\in\chi(\mathcal{C}(\mathcal{S}_{\mathfrak{T}}))$ is isomorphic to $\mathfrak{T}^{\textrm{del}}$. We also note that for $R\in\chi(\mathcal{C}(\mathcal{S}_{\mathfrak{T}}))$, the parabolic fixed point $\infty$ is a distinguished vertex of valence one on $\mathfrak{T}(R)\cong\mathfrak{T}^{\textrm{del}}$, and this vertex corresponds to the root vertex $O$ of the dessin d'enfants $\mathfrak{T}^{\textrm{del}}$ of the corresponding Schwarz reflection under the hybrid conjugacy. It follows that $\chi(\mathcal{C}(\mathcal{S}_{\mathfrak{T}}))$ is contained in the following space of parabolic anti-rational maps.
**Definition 99**. We define $$\begin{aligned}
\mathfrak{F}_{\mathfrak{T}} & :=\bigg\{ R\in\pmb{\mathcal{B}}_d: R \textrm{ is Belyi; if } R \textrm{ has three critical values, then the parabolic }\\
&\qquad \textrm{ fixed point } \infty\ \textrm{ is one of them; and } \left(\mathfrak{T}(R),\infty\right)\cong\left(\mathfrak{T}^{\textrm{del}},O\right)\bigg\}/\mathrm{Aut}(\mathbb{C}),\end{aligned}$$ where the isomorphism is required to preserve the roots and the bicolored plane structures.
The above discussion, combined with the fact that $\chi$ is a bijection between $\ \faktor{\mathcal{S}_{\pmb{\mathcal{F}}_d}}{\mathrm{PSL}_2(\mathbb{C})}$ and $\ \faktor{\pmb{\mathcal{B}}_d}{\mathrm{Aut}(\mathbb{C})}$, implies that $\chi(\mathcal{C}(\mathcal{S}_{\mathfrak{T}}))=\ \faktor{\mathfrak{F}_{\mathfrak{T}}}{\mathrm{Aut}(\mathbb{C})}$.
### Combinatorial model of $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ and homeomorphism between models {#shabat_conn_locus_model_subsubsec}
The conformal position of the free critical value of $\sigma_a$ (in the tiling set) can be used to define a dynamically natural uniformization of the *escape locus* $\mathcal{S}_{\mathfrak{T}}\setminus\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ (this is analogous to the dynamical uniformization of the escape locus of the C&C family, see Subsection [6.1.3](#c_and_c_geom_fin_bijection_subsubsec){reference-type="ref" reference="c_and_c_geom_fin_bijection_subsubsec"}). This uniformization gives a tessellation structure in the escape locus which allows us to construct parameter rays. The co-landing/co-accumulation patterns of these parameter rays are used in [@LMM4] to construct a model of the connectedness locus of $\mathcal{S}_{\mathfrak{T}}$ as a pinched disk.
Moreover, the continuity properties of the straightening map $\chi$ explicated in Subsection [11.2](#mating_regularity_subsec){reference-type="ref" reference="mating_regularity_subsec"} imply that $\chi$ induces a homeomorphism between the above pinched disk model of $\mathcal{C}(\mathcal{S}_{\mathfrak{T}})$ and a similar model for $\ \faktor{\mathfrak{F}_{\mathfrak{T}}}{\mathrm{Aut}(\mathbb{C})}$. We remark that progress in combinatorial rigidity problems for the above parameter spaces would bring the pinched disk models closer to the actual connectedness loci.
# Analytic applications {#anal_app_sec}
## Conformal removability {#conf_removable_subsec}
Conformal removability of various fractal sets, such as limit and Julia sets of Kleinian groups and rational maps, is an important question in geometric function theory. Using the fact that boundaries of John domains (more generally, Hölder domains) are conformally removable, Carleson, Jones, Smirnov, and Yoccoz deduced conformal removability of connected Julia sets of semi-hyperbolic (more generally, Collet--Eckmann) polynomials [@CJY; @Jon95; @JS00]. The situation is more subtle for Julia sets of parabolic polynomials due to the presence of cusps: indeed, the existence of cusps implies that the basin of infinity of a parabolic polynomial is not a John domain and hence the above results do not apply. In the same vein, since the limit sets of necklace groups also have infinitely many cusps, it is natural to ask whether such limit sets are conformally removable. It turns out that the David surgery techniques of Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"} can sometimes be used to address the above questions.
**Theorem 100**. *[@LMMN Theorems 9.1, 9.2][\[limit_julia_conf_removable_thm\]]{#limit_julia_conf_removable_thm label="limit_julia_conf_removable_thm"}*
1. *Let $P$ be a geometrically finite polynomial with connected Julia set $\mathcal{J}(P)$. Then $\mathcal{J}(P)$ is conformally removable.*
2. *The limit set of a necklace reflection group is conformally removable.*
The proofs of both removability results are based on the fact that global David homeomorphisms carry $W^{1,1}$-removable compact sets to conformally removable ones (see Theorem [\[w11_removable_thm\]](#w11_removable_thm){reference-type="ref" reference="w11_removable_thm"}).
For part (1) of Theorem [\[limit_julia_conf_removable_thm\]](#limit_julia_conf_removable_thm){reference-type="ref" reference="limit_julia_conf_removable_thm"}, one first appeals to standard realization theorems in holomorphic dynamics to construct a postcritically finite polynomial $Q$ whose Julia dynamics is conjugate to that of $P$. Subsequently, one replaces suitable basins of attraction of $Q$ with parabolic basins using the David Surgery Lemma [\[david_surgery_lemma\]](#david_surgery_lemma){reference-type="ref" reference="david_surgery_lemma"} (also see the David extension result of Example [\[example_2\]](#example_2){reference-type="ref" reference="example_2"}) to recover $P$. This shows that $\mathcal{J}(P)$ is the image of the Julia set of the postcritically finite polynomial $Q$ under a global David homeomorphism. Since the basin of infinity of a postcritically finite polynomial is a John domain, it follows from Theorem [\[w11_removable_thm\]](#w11_removable_thm){reference-type="ref" reference="w11_removable_thm"} that $\mathcal{J}(P)$ is conformally removable.
The same strategy also yields part (2) of Theorem [\[limit_julia_conf_removable_thm\]](#limit_julia_conf_removable_thm){reference-type="ref" reference="limit_julia_conf_removable_thm"} since the limit set of a necklace group is the image of the Julia set of a critically fixed anti-polynomial under a global David homeomorphism (see Subsection [8.1.2](#david_regularity_subsubsec){reference-type="ref" reference="david_regularity_subsubsec"}).
## Welding homeomorphisms {#welding_subsec}
A homeomorphism $h\colon \mathbb S^1 \to \mathbb S^1$ is called a *welding homeomorphism* if there exists a Jordan curve $J$ and conformal homeomorphisms $H_1:\mathbb{D}\to U_1$ and $H_2:\mathbb{D}^*\to U_2$ (where $U_1, U_2$ are the interior and exterior of $J$, respectively) so that $h=\widetilde{H_2}^{-1}\circ \widetilde{H_1}$, where $\widetilde{H_1}$ and $\widetilde{H_2}$ are the homeomorphic extensions of $H_1$ and $H_2$ to the closures of $\mathbb{D}$ and $\mathbb{D}^*$, respectively. The Jordan curve $J$ is called a *welding curve* corresponding to $h$. Note that if there exists a conformally removable welding curve $J$ corresponding to the welding homeomorphism $h$, then any other welding curve (corresponding to $h$) is a Möbius image of $J$.
It is a straightforward consequence of the Ahlfors-Beurling Extension Theorem and the Measurable Riemann Mapping Theorem that every quasisymmetric homeomorphism of $\mathbb{S}^1$ is a welding homeomorphism, and the associated Jordan curve, which is a quasi-circle, is unique (up to Möbius transformations). This has applications to several important constructions in conformal dynamics, such as mating two Fuchsian groups to obtain a quasi-Fuchsian group, mating two Blaschke products to obtain a quasi-Blaschke rational map, etc.
![Left: The dynamical plane of the Schwarz reflection map that is the unique conformal mating of $P(z)=\overline{z}^2+\frac14$ and $\pmb{\mathcal{N}}_2$ is displayed. Its limit set is a conformally removable Jordan curve with both cusps and sectors. Right: The pine tree shaped fractal is the Julia set of a cubic quasi-Blaschke product that has two parabolic basins. The Julia set, which has both inward and outward cusps, is a conformally removable Jordan curve.](cauli_triangle_mating.png "fig:"){#pine_tree_fig width="0.506\\linewidth"}![](two_sided_cusps.png "fig:"){width="0.355\\linewidth"}
The David Extension Theorem [\[david_extension_general_thm\]](#david_extension_general_thm){reference-type="ref" reference="david_extension_general_thm"} and the accompanying David surgery technique described in Subsection [7.2](#david_surgery_hyp_para_subsec){reference-type="ref" reference="david_surgery_hyp_para_subsec"} lead to a general realization theorem for dynamically arising circle homeomorphisms as welding homeomorphisms.
**Theorem 101**. *[@LMMN Theorem 5.1][\[welding_thm\]]{#welding_thm label="welding_thm"} Let $f,g\colon \mathbb{S}^1\to \mathbb{S}^1$ be $C^1$, expansive, covering maps of the same degree and the same orientation, and $\mathcal P(f;\{a_0,\dots,a_r\})$, $\mathcal P(g;\{b_0,\dots,b_s\})$ be Markov partitions satisfying conditions [\[condition:uv\]](#condition:uv){reference-type="eqref" reference="condition:uv"} and [\[condition:holomorphic\]](#condition:holomorphic){reference-type="eqref" reference="condition:holomorphic"} of Section [7](#david_surgery_sec){reference-type="ref" reference="david_surgery_sec"} . Assume that each periodic point $a\in \{a_0,\dots,a_r\}$ of $f$ and each periodic point $b\in \{b_0,\dots,b_s\}$ of $g$ is either hyperbolic or symmetrically parabolic. Then any conjugating homeomorphism $h\colon \mathbb S^1\to \mathbb S^1$ between $f$ and $g$ is a welding homeomorphism and the corresponding welding curve is unique up to a Möbius transformation.*
We point out that in the above theorem, the circle homeomorphism $h$ itself does not necessarily have a David extension to $\mathbb{D}$ (because it can move parabolic points to hyperbolic points). However, thanks to Corollary [\[power_map_cor\]](#power_map_cor){reference-type="ref" reference="power_map_cor"}, the hypotheses of the theorem guarantee the existence of a pair of circle homeomorphisms conjugating $z^d$ or $\overline{z}^d$ (depending on the orientation) to $f, g$ (respectively) such that these conjugacies extend as David homeomorphisms of $\mathbb{D}$. One can replace the dynamics of the power map on $\mathbb{D}$ and $\mathbb{D}^*$ with the dynamics of $f$ and $g$ (respectively) using these David homeomorphisms, and then appeal to the David Integrability Theorem to conjugate this map to a holomorphic or antiholomorphic map (defined on a subset of $\widehat{\mathbb{C}}$) via a David homeomorphism $\Psi$. Since the resulting map is conformally conjugate to $f, g$ on the closures of the two complementary components of $\Psi(\mathbb{S}^1)$, it follows that $\Psi(\mathbb{S}^1)$ is the desired welding curve. It is worth mentioning that this construction produces the first examples of conformally removable welding curves with infinitely many inward and outward cusps (see Figure [42](#pine_tree_fig){reference-type="ref" reference="pine_tree_fig"}).
*Remark 102*. We note that there is a classical quasiconformal surgery procedure (known as the Douady--Ghys surgery, see [@Ghy84], [@BF14 §7.2]) of mating a Blaschke product with a bounded type disk rotation to obtain a Siegel Julia set. It was generalized by Petersen and Zakeri, by means of a David surgery, to almost all rotation numbers (see [@PZ04], [@BF14 §9.2]).
## Extremal problems {#extremal_prob_subsec}
In classical complex analysis, the problem of coefficient maximization in families of univalent holomorphic maps played a central role. The celebrated de Branges' Theorem (earlier known as the Bieberbach conjecture) is the most prominent result in this area. However, for the class of *external univalent* maps $$\Sigma := \left\{ f(z)= z+\frac{a_1}{z} + \cdots +\frac{a_d}{z^d}+\cdots :\ f\vert_{\mathbb{D}^*} \textrm{ is univalent}\right\},$$ the coefficient maximization problem is still unresolved (see [@Dur83 §4.7], [@HS05] for known results). It turns out that the union of the rational subfamilies $\Sigma_d^*\subset\Sigma$ introduced in Subsection [6.3](#sigma_d_subsec){reference-type="ref" reference="sigma_d_subsec"} is dense in $\Sigma$ (cf. [@Suf72 Theorem 10]). Thus, it is natural to study the coefficient maximization problem for the spaces $\Sigma_d^*$. Recall that the set $H(\mathbb{C}\setminus\overline{\mathbb{D}})$ of all analytic functions on $\mathbb{C}\setminus\overline{\mathbb{D}}$ is a locally convex topological vector space, and $\Sigma_d^*$ is a compact subset of a finite-dimensional vector subspace of $H(\mathbb{C}\setminus\overline{\mathbb{D}})$. An *extreme point* of $\Sigma_d^*$ is an element of $\Sigma_d^*$ which cannot be represented as a proper convex combination of two distinct elements of $\Sigma_d^*$. By the Krein-Milman theorem, $\Sigma_d^*$ is contained in the convex hull of its extreme points. Hence, it is enough to investigate the coefficients of the extreme points of $\Sigma_d^*$ (see [@Bri70; @Suf72] for an implementation of this strategy for the classical class $\mathcal{S}$ of normalized univalent holomorphic functions on $\mathbb{D}$). By [@LM1 Theorem 2.5], an extreme point $f$ of $\Sigma_d^*$ is a Suffridge map (or a vertex of $\Sigma_d^*$) in the sense of Subsection [6.3.4](#sigma_d_cell_structure_subsec){reference-type="ref" reference="sigma_d_cell_structure_subsec"}. In other words, for an extreme point $f\in\Sigma_d^*$, the compact set $\mathbb{C}\setminus f(\mathbb{D}^*)$ is a *tree of triangles* and hence can be modeled by so-called *bi-angled trees* [@LMM1 §2.5, Table 1]. Such a tree is essentially the rooted adjacency/contact graph of the tree of triangles $\mathbb{C}\setminus f(\mathbb{D}^*)$ together with a record of whether the exit from a particular triangle is on the left or right. The above discussion shows that the coefficient maximization problem in $\Sigma_d^*$ naturally leads one to the question of classifying Suffridge maps. Specifically, it motivates the following question: can any topological type of tree of triangles (or equivalently, bi-angled tree) be realized as a vertex of $\Sigma_d^*$?
The main difficulty in constructing Suffridge maps in $\Sigma_d^*$ is that in general, it is hard to check univalence of a rational map on a round disk. To circumvent this issue, one needs to look at the above problem through a different lens. Specifically, thanks to Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}, finding Suffridge maps in $\Sigma_d^*$ is equivalent to constructing Schwarz reflection maps with appropriate dynamical properties. One way of formalizing this statement is that the Suffridge maps in $\Sigma_d^*$ correspond to Schwarz reflection maps that are matings of $\overline{z}^d$ with maximally cusped necklace groups; i.e., necklace groups $G$ for which $\overline{\Pi(G)}\cap K(G)$ is a tree of triangles. This perspective was adopted in [@LMMN §12] to prove the following classification theorem.
**Theorem 103**. *[@LMM1 Theorem A][@LMMN Theorem 12.7] Let $d\geq2$. There is a canonical bijection between the following classes of objects.*
- *$\left\{ f \in \Sigma_d^* : f(\mathbb{T}) \textrm{ has } d+1 \textrm{ cusps and } d-2 \textrm{ double points} \right\} \big/ \hspace{1mm} \mathbb{Z}_{d+1}.$*

- *$\{ \textrm{Bi-angled trees with } d-1 \textrm{ vertices, up to isomorphism respecting the angular structure}\}.$*
A different proof of the above theorem was originally given in [@LMM1] using quasiconformal deformation of Schwarz reflections and compactness of $\Sigma_d^*$. We refer the reader to Subsection [5.1.2](#talbot_pinching_subsubsec){reference-type="ref" reference="talbot_pinching_subsubsec"} for an illustration of this approach.
Note that uniqueness of a member of $\Sigma_d^*$ realizing a given topological type of tree of triangles follows from conformal removability arguments.
### Zeroes of harmonic polynomials {#harmonic_poly_zero_subsubsec}
The classification of vertices of $\Sigma_d^*$ is also related to the construction of harmonic polynomials with the maximal number of zeroes. It was observed by Crofoot and Sarason that a harmonic polynomial $p(z)-\overline{z}$ has the maximal number of zeroes (namely, $3d-2$ when $\deg{p}=d$) if the anti-polynomial $\overline{p(z)}$ has $d-1$ distinct fixed critical points in $\mathbb{C}$. This can be seen by viewing the zeroes of $p(z)-\overline{z}$ as fixed points of the anti-polynomial $\overline{p(z)}$ and invoking the Lefschetz-Hopf Fixed Point Theorem combined with Fatou's count of the number of attracting fixed points of a polynomial. Crofoot and Sarason also conjectured the existence of such polynomials $p$, which was later proved by Geyer [@Gey08]. We refer to such polynomials $p$ as *Crofoot-Sarason polynomials*. The correspondence between maps in $\Sigma_d^*$ and critically fixed anti-polynomials given in Theorem [\[sigma_d\_bers_homeo_thm\]](#sigma_d_bers_homeo_thm){reference-type="ref" reference="sigma_d_bers_homeo_thm"} improves the conclusion of [@Gey08], and shows that Crofoot-Sarason polynomials of degree $d$ bijectively correspond to the vertices in $\Sigma_d^*$ (cf. [@LMM1 Theorem A]).
## Region of univalence for complex polynomials {#univalent_poly_subsec}
Determining regions of univalence of complex polynomials and rational maps is a well-studied problem in classical complex analysis (see [@She00 §7.4] for general results, and [@Bra67; @CR68; @Suf72] for univalence loci of special families of polynomials). While much of the classical development on this topic is based on geometric function theory, it turns out that many questions on univalence loci of rational maps can be answered using the iteration theory of Schwarz reflection maps. For instance, if a rational map $f$ is known to be univalent on $\mathbb{D}$, it is natural to ask what sort of perturbations of $f$ would continue to be univalent on $\mathbb{D}$. This is particularly subtle when $f(\partial\mathbb{D})$ contains a double point or a conformal cusp. However, it is often possible to quasiconformally deform the associated Schwarz reflection map $\sigma\equiv f\circ\eta\circ(f\vert_{\overline{\mathbb{D}}})^{-1}:f(\overline{\mathbb{D}})\to\widehat{\mathbb{C}}$ to nearby Schwarz reflections and then appeal to the characterization of simply connected quadrature domains (Proposition [Proposition 36](#simp_conn_quad_prop){reference-type="ref" reference="simp_conn_quad_prop"}) to construct rational maps close to $f$ that carry $\mathbb{D}$ injectively onto the deformed quadrature domains. This strategy is successfully implemented in [@LMM2] to study the space $\Sigma_d^*$ of rational maps which are univalent on $\mathbb{D}^*$ and in [@LMM4] to study the region of univalence of Shabat polynomials.
# Quasiconformal and David homeomorphisms {#qc_david_appendix}
## Basic definitions {#qc_david_def_subsec}
An orientation-preserving homeomorphism $H\colon U\to V$ between domains in the Riemann sphere $\widehat{\mathbb{C}}$ is called $K$-*quasiconformal* for some constant $1\leq K<\infty$ if it lies in the Sobolev class $W^{1,2}_{\mathrm{loc}}(U)$ (i.e., the partial derivatives $\partial H/\partial z$ and $\partial H/\partial\overline{z}$ exist in the sense of distributions and belong to $L^2_{\mathrm{loc}}$) and satisfies $\vert\vert\mu_H\vert\vert_\infty\leq (K-1)/(K+1)$, where $\mu_H= \frac{\partial H/ \partial \overline{z}}{\partial H/\partial z}$ is the Beltrami coefficient of $H$. Note that the constant $(K-1)/(K+1)$ is always contained in $[0,1)$.
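A basic example illustrating the definition: for $k\in\mathbb{D}$, the $\mathbb{R}$-linear map $H(z)=z+k\overline{z}$ satisfies $\partial H/\partial z\equiv 1$ and $\partial H/\partial\overline{z}\equiv k$, so that $$\mu_H\equiv k,\qquad \vert\vert\mu_H\vert\vert_\infty=\vert k\vert=\frac{K-1}{K+1}\ \textrm{ for }\ K=\frac{1+\vert k\vert}{1-\vert k\vert},$$ and hence $H$ is $K$-quasiconformal precisely for $K\geq\frac{1+\vert k\vert}{1-\vert k\vert}$.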
An orientation-preserving homeomorphism $H\colon U\to V$ between domains in $\widehat{\mathbb{C}}$ is called a *David homeomorphism* if it lies in the Sobolev class $W^{1,1}_{\mathrm{loc}}(U)$ and there exist constants $C,\alpha,\varepsilon_0>0$ with $$\begin{aligned}
\label{david_def}
\sigma(\{z\in U: |\mu_H(z)|\geq 1-\varepsilon\}) \leq Ce^{-\alpha/\varepsilon}, \quad \varepsilon\leq \varepsilon_0.\end{aligned}$$ Here $\sigma$ is the spherical measure. By Condition [\[david_def\]](#david_def){reference-type="eqref" reference="david_def"}, the Beltrami coefficient of a David homeomorphism takes values in $\mathbb{D}$ a.e.
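For instance, one readily checks that the Beltrami coefficient defined by $\mu(z)=1-\frac{1}{\log(e/\vert z\vert)}$ for $0<\vert z\vert<1$ and $\mu\equiv 0$ elsewhere is not essentially bounded away from $1$ (so it is not the Beltrami coefficient of any quasiconformal map), yet it satisfies Condition [\[david_def\]](#david_def){reference-type="eqref" reference="david_def"}: for $0<\varepsilon<1$, $$\{\vert\mu\vert\geq 1-\varepsilon\}=\left\{0<\vert z\vert\leq e^{1-1/\varepsilon}\right\},\qquad \sigma\left(\{\vert\mu\vert\geq 1-\varepsilon\}\right)\leq C\, e^{-2/\varepsilon}$$ for a universal constant $C$, so the condition holds with $\alpha=2$.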
## Integrability theorems and basic properties {#integrable_thms_subsec}
The Measurable Riemann Mapping Theorem [@AIM09 Theorem 5.3.4, p. 170] states that if $\mu$ is a measurable function on $\widehat{\mathbb{C}}$ with $\|\mu\|_\infty<1$, then there exists a quasiconformal homeomorphism $H\colon \widehat{\mathbb{C}} \to \widehat{\mathbb{C}}$, unique up to postcomposition with Möbius maps, that solves the Beltrami equation $$\begin{aligned}
\frac{\partial H}{\partial \overline{z}}= \mu \frac{\partial H}{\partial z}.\end{aligned}$$
If $U$ is an open subset of $\widehat{\mathbb{C}}$ and $\mu \colon U \to \mathbb{D}$ is a measurable function satisfying Condition [\[david_def\]](#david_def){reference-type="eqref" reference="david_def"} on $U$ for some constants $C,\alpha,\varepsilon_0>0$, then $\mu$ is called a *David coefficient* on $U$. The following integrability result, which generalizes the Measurable Riemann Mapping Theorem, makes David homeomorphisms useful in holomorphic dynamics.
**Theorem 104** (David Integrability Theorem). *[@David], [@AIM09 Theorem 20.6.2, p. 578][\[david_integrability_thm\]]{#david_integrability_thm label="david_integrability_thm"} Let $\mu\colon \widehat{\mathbb{C}} \to \mathbb{D}$ be a David coefficient. Then there exists a homeomorphism $H\colon \widehat{\mathbb{C}} \to \widehat{\mathbb{C}}$ of class $W^{1,1}(\widehat{\mathbb{C}})$ that solves the Beltrami equation $$\begin{aligned}
\frac{\partial H}{\partial \overline{z}}= \mu \frac{\partial H}{\partial z}.\end{aligned}$$ Moreover, $H$ is unique up to postcomposition with Möbius transformations.*
The next theorem is a local version of the uniqueness part in Theorem [\[david_integrability_thm\]](#david_integrability_thm){reference-type="ref" reference="david_integrability_thm"}, and plays an important role in applications of David homeomorphisms in dynamics.
**Theorem 105**. *[@AIM09 Theorem 20.4.19, p. 565][\[stoilow_thm\]]{#stoilow_thm label="stoilow_thm"} Let $\Omega\subset \widehat{\mathbb{C}}$ be an open set and $f,g\colon \Omega\to \widehat{\mathbb{C}}$ be David embeddings with $\mu_f=\mu_g$ almost everywhere. Then $f\circ g^{-1}$ is a conformal map on $g(\Omega)$.*
While the composition of two quasiconformal homeomorphisms is always quasiconformal, the situation is more delicate for compositions of quasiconformal and David homeomorphisms. It turns out that post-composing a David homeomorphism $f:U\to V$ with a quasiconformal homeomorphism $g:V\to W$ always results in a David homeomorphism $g\circ f:U\to W$. However, the pre-composition of a David homeomorphism $f:U\to V$ with a quasiconformal homeomorphism $g:W\to U$ is not necessarily David since the David property of $f\circ g$ crucially depends on area distortion properties of $g$. The map $f\circ g: W\to V$ is indeed David if one has control on the map $g$ (for instance, if $g$ extends to a quasiconformal homeomorphism of an open neighborhood of $\overline{W}$ onto an open neighborhood of $\overline{U}$), or control over the geometry of the domains $U, W$ (for instance, if $U$ is a quasidisk and $W$ is a John domain). The proofs of these facts are given in [@LMMN Proposition 2.5].
While the inverse of a $K$-quasiconformal map is also $K$-quasiconformal, the inverse of a David homeomorphism is not necessarily David (cf. [@Zak04 p. 123]).
We direct the reader to [@Ahl06], [@AIM09 Chapters 3, 5, 20], [@LMMN §2] for more background on the theory of quasiconformal and David homeomorphisms.
## Quasiconformal and David extensions of circle homeomorphisms {#qc_david_extension_subsec}
For an orientation-preserving homeomorphism $h\colon \mathbb S^1\to \mathbb S^1$, the *distortion function* of $h$ is defined as $$\begin{aligned}
\rho_h(z,t)= \max\left\{ \frac{|h(e^{2\pi i t}z)-h(z)| }{ |h(e^{-2\pi i t}z)-h(z)| } , \frac{|h(e^{-2\pi i t}z)-h(z)| }{ |h(e^{2\pi i t}z)-h(z)| }\right\},\end{aligned}$$ where $z\in \mathbb S^1$ and $0<t<1/2$. One further defines the *scalewise distortion function* of $h$ to be $$\begin{aligned}
\rho_h(t)= \max_{z\in \mathbb{S}^1}\rho_h(z,t), \end{aligned}$$ where $0<t<1/2$. If $\rho_h(t)$ is bounded above, then $h$ is a quasisymmetric homeomorphism and the classical Ahlfors-Beurling Extension Theorem asserts that such an $h$ extends to a homeomorphism of $\overline{\mathbb{D}}$ that is quasiconformal on $\mathbb{D}$ [@BA56]. We will state a theorem, due to Chen-Chen-He and Zakeri, which says that if one has appropriate control on the growth of $\rho_h(t)$, then $h$ admits a David extension to the disk.
**Theorem 106**. *[@CCH96 Theorem 3], [@Zak08 Theorem 3.1][\[david_extension_criterion_thm\]]{#david_extension_criterion_thm label="david_extension_criterion_thm"} Let $h\colon \mathbb S^1\to \mathbb S^1$ be an orientation-preserving homeomorphism and suppose that $$\begin{aligned}
\rho_h(t) = O(\log(1/t))\quad \textrm{as} \quad t\to 0.\end{aligned}$$ Then $h$ has an extension to a David homeomorphism $\widetilde h\colon \mathbb{D}\to \mathbb{D}$.*
A stronger David extension result for circle homeomorphisms was recently proved in [@KN22].
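For readers who wish to experiment with these definitions, the following Python sketch estimates the scalewise distortion function $\rho_h(t)$ on a grid of points $z\in\mathbb{S}^1$ and compares its growth with the $O(\log(1/t))$ condition of Theorem 106. The sample circle homeomorphism (an analytic circle diffeomorphism, hence in fact quasisymmetric) and the helper names are illustrative assumptions, not taken from the cited references.

```python
import numpy as np

# Numerical estimate of the scalewise distortion function rho_h(t) of a circle
# homeomorphism h, compared against the O(log(1/t)) growth condition of
# Theorem 106.  The sample homeomorphism below is an illustrative assumption.

phi = lambda s: s + 0.1*np.sin(2*np.pi*s)/(2*np.pi)      # increasing lift of h to [0, 1]
h = lambda z: np.exp(2j*np.pi*phi((np.angle(z)/(2*np.pi)) % 1.0))

def rho(t, n_z=720):
    """Scalewise distortion rho_h(t) = max_z rho_h(z, t), estimated over a grid of z."""
    z = np.exp(2j*np.pi*np.arange(n_z)/n_z)
    w = np.exp(2j*np.pi*t)
    num = np.abs(h(w*z) - h(z))                          # |h(e^{2 pi i t} z) - h(z)|
    den = np.abs(h(z/w) - h(z))                          # |h(e^{-2 pi i t} z) - h(z)|
    return float(np.max(np.maximum(num/den, den/num)))

for t in [1e-1, 1e-2, 1e-3]:
    print(t, rho(t), np.log(1.0/t))
```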
## David maps and removability {#removable_david_subsec}
A compact set $E\subset \widehat{\mathbb{C}}$ is said to be *conformally removable* if every homeomorphism $f\colon \widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ that is conformal on $\widehat{\mathbb{C}} \setminus E$ is a Möbius transformation. A compact set $E\subset \widehat{\mathbb{C}}$ is removable for $W^{1,1}$ functions if every continuous function $f\colon \widehat{\mathbb{C}}\to \mathbb{R}$ that lies in $W^{1,1}(\widehat{\mathbb{C}}\setminus E)$ in fact lies in $W^{1,1}(\widehat{\mathbb{C}})$.
A domain $\Omega\subset \widehat{\mathbb{C}}$ is called a *John domain* if for each base point $z_0\in \Omega$ there exists a constant $c>0$ such that for each point $z_1\in \Omega$ there exists an arc $\gamma$ joining $z_0$ to $z_1$ in $\Omega$ with the property that for each point $z$ on the path $\gamma$ we have $$\begin{aligned}
\mathrm{dist}(z, \partial \Omega) \geq c\cdot \mathrm{length}(\gamma|_{[z,z_1]}),\end{aligned}$$ where $\gamma|_{[z,z_1]}$ denotes the subpath of $\gamma$ whose endpoints are $z$ and $z_1$ (here $\mathrm{dist}$ and $\mathrm{length}$ denote the spherical distance and spherical length). Roughly speaking, John domains are generalizations of quasidisks that allow for inward cusps but not outward cusps. By [@JS00 Theorem 4], boundaries of John domains are removable for $W^{1,1}$ functions.
**Theorem 107**. *[@LMMN Theorems 2.7, 2.12][\[w11_removable_thm\]]{#w11_removable_thm label="w11_removable_thm"} Suppose that $E\subset \widehat{\mathbb{C}}$ is a compact set that is removable for $W^{1,1}$ functions and $f\colon \widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ is a David homeomorphism. Then $f(E)$ is conformally removable. In particular, if $\Omega \subset {\widehat{\mathbb{C}}}$ is a John domain and $f\colon \widehat{\mathbb{C}}\to \widehat{\mathbb{C}}$ is a David homeomorphism, then $f(\partial \Omega)$ is conformally removable.*
# List of notation {#list-of-notation .unnumbered}
- $\mathbb{D}^*=\widehat{\mathbb{C}}\setminus\overline{\mathbb{D}}$.
- $\eta(z)=1/\overline{z}$.
- $B(a,r)=\{\vert z-a\vert<r\}$, where $a\in\mathbb{C}$ and $r>0$.
$\overline{B}(a,r)=\{\vert z-a\vert\leq r\}$, where $a\in\mathbb{C}$ and $r>0$.
- $\textrm{Aut}(\widehat{\mathbb{C}})$ = Group of all Möbius automorphisms of $\widehat{\mathbb{C}}$,\
$\textrm{Aut}^\pm(\widehat{\mathbb{C}})$ = Group of all Möbius and anti-Möbius automorphisms of $\widehat{\mathbb{C}}$.
- $\textrm{Aut}(\mathbb{D})$ = Group of all Möbius automorphisms of $\mathbb{D}$,\
$\textrm{Aut}^\pm(\mathbb{D})$ = Group of all Möbius and anti-Möbius automorphisms of $\mathbb{D}$.
- $X^c =\widehat{\mathbb{C}}\setminus X$, for $X\subset\widehat{\mathbb{C}}$.
- $m_{-d}:\mathbb{S}^1\to\mathbb{S}^1,\ \theta\mapsto -d\theta$.
- $\mathscr{C}_d$ = Connectedness locus of monic, centered antiholomorphic polynomials of degree $d$.
- $\pmb{G}_d$ = Regular ideal $(d+1)-$gon reflection group.
$\pmb{\mathcal{N}}_d$ = Nielsen map of $\pmb{G}_{d}$ (Definition [Definition 8](#regular_ideal_polygon_ref_group_def){reference-type="ref" reference="regular_ideal_polygon_ref_group_def"}).
- $\mathcal{S}_{\pmb{\mathcal{N}}_d}$ = Space of normalized piecewise Schwarz reflection maps with $\pmb{\mathcal{N}}_d$ as their external class (Section [10](#mating_para_space_sec){reference-type="ref" reference="mating_para_space_sec"}).
- $\mathbbm{G}_d$ = Anti-Hecke group isomorphic to $\mathbb{Z}/2\ast\mathbb{Z}/(d+1)$ (§ [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"}, § [11](#general_mating_corr_sec){reference-type="ref" reference="general_mating_corr_sec"}).
$\pmb{\mathcal{F}}_d$ = Anti-Farey map associated with $\mathbbm{G}_d$ (§ [4.3.1](#nielsen_first_return_external_map_subsubsec){reference-type="ref" reference="nielsen_first_return_external_map_subsubsec"}, [11.1.2](#anti_farey_subsubsec){reference-type="ref" reference="anti_farey_subsubsec"}).
- $\mathcal{S}_{\pmb{\mathcal{F}}_d}$ = Space of Schwarz reflections having $\pmb{\mathcal{F}}_d$ as their external class (§ [11.1.3](#core_step_subsubsec){reference-type="ref" reference="core_step_subsubsec"}).
- $\pmb{\mathcal{E}}_d$ = Minkowski circle homeomorphism conjugating $\pmb{\mathcal{N}}_{d}$ to $\overline{z}^d$ (§ [3.1.5](#question_mark_subsubsec){reference-type="ref" reference="question_mark_subsubsec"}).
- $\beta(\pmb{G}_d)$ = Bers slice of $\pmb{G}_d$ (§ [3.1.6](#kissing_group_deform_space_subsubsec){reference-type="ref" reference="kissing_group_deform_space_subsubsec"}).
- $B_d(z) = \frac{(d+1)\overline z^d + (d-1)}{(d-1)\overline z^d + (d+1)}$ (unicritical parabolic antiholomorphic Blaschke product).
- $\pmb{\mathcal{B}}_d\ =\ $ Space of antiholomorphic rational maps having $B_d$ as their external class (§ [3.2.3](#para_anti_rat_gen_subsubsec){reference-type="ref" reference="para_anti_rat_gen_subsubsec"}).
- $\mathcal{J}(R), \mathcal{F}(R)$ = Julia, Fatou set of a rational/anti-rational map $R$.
$\mathcal{K}(P), \mathcal{B}_\infty(P)$ = Filled Julia set, basin of infinity of a polynomial/anti-polynomial $P$ (§ [3.2.1](#anti_poly_dyn_general_subsubsec){reference-type="ref" reference="anti_poly_dyn_general_subsubsec"}).
$\mathcal{K}(R), \mathcal{B}(R)$ = Filled Julia set, marked parabolic basin of an anti-rational map $R\in\pmb{\mathcal{B}}_d$ (§ [3.2.3](#para_anti_rat_gen_subsubsec){reference-type="ref" reference="para_anti_rat_gen_subsubsec"}).
- $G_{\mathcal{P}}$ = Kissing reflection group associated with a circle packing $\mathcal{P}$ (§ [3.1.2](#kissing_group_subsubsec){reference-type="ref" reference="kissing_group_subsubsec"}).
- $\Lambda(G), \Omega(G)$ = Limit set, domain of discontinuity of a reflection/Kleinian group.
- $\Pi(G)$ = Canonical fundamental domain for the $G$-action on $\Omega(G)$ (§ [3.1.3](#fund_dom_subsubsec){reference-type="ref" reference="fund_dom_subsubsec"}).
- $\Omega_\infty(G)$ = Marked invariant component of the domain of discontinuity of a necklace group $G$.
$K(G)$ = The *filled limit set* $\widehat{\mathbb{C}}\setminus \Omega_\infty(G)$ of a necklace group $G$.
- $T(\sigma), T^0(\sigma), T^\infty(\sigma), K(\sigma)$ = Droplet, fundamental tile, tiling set, and non-escaping set of a piecewise Schwarz reflection map (§ [3.3.3](#inv_partition_subsubsec){reference-type="ref" reference="inv_partition_subsubsec"}).
- $\mathfrak{U}_{d+1}$ = Space of degree $d+1$ polynomials $f$ such that $f\vert_{\overline{\mathbb{D}}}$ is injective and $f$ has a unique (non-degenerate) critical point on $\mathbb{S}^1$.
- $\mathscr{T}(R)$ = Tischler graph of a critically fixed anti-rational map $R$ (§ [8.1.1](#group_map_bijection_subsubsec){reference-type="ref" reference="group_map_bijection_subsubsec"}).
# List of figures {#list-of-figures .unnumbered}
- Figure [2](#deltoid_intro_fig){reference-type="ref" reference="deltoid_intro_fig"}: The ideal triangle reflection group and the deltoid Schwarz reflection map.
- Figure [4](#c_and_c_para_fig){reference-type="ref" reference="c_and_c_para_fig"}: The Basilica limb of the Tricorn and the connectedness locus of the C&C family.
- Figure [6](#limit_julia_intro_fig){reference-type="ref" reference="limit_julia_intro_fig"}: The Apollonian gasket limit set and the Apollonian Julia set.
- Figure [7](#itg_nielsen_fig){reference-type="ref" reference="itg_nielsen_fig"}: The Nielsen map of the ideal triangle reflection group.
- Figure [8](#not_2_conn_fig){reference-type="ref" reference="not_2_conn_fig"}: Disconnected limit set of a kissing reflection group.
- Figure [12](#kissing_nielsen_fig){reference-type="ref" reference="kissing_nielsen_fig"}: Limit sets of various kissing reflection groups and their canonical fundamental domains.
- Figure [14](#question_mark_tree_fig){reference-type="ref" reference="question_mark_tree_fig"}: The Farey and dyadic trees.
- Figure [17](#necklace_fig){reference-type="ref" reference="necklace_fig"}: Limit sets of various necklace groups.
- Figure [20](#tricorn_fig){reference-type="ref" reference="tricorn_fig"}: The Tricorn and wiggling phenomena.
- Figure [\[cardioid_disk_fig\]](#cardioid_disk_fig){reference-type="ref" reference="cardioid_disk_fig"}: The Schwarz reflection map of the cardioid.
- Figure [\[pre_deltoid_disk_fig\]](#pre_deltoid_disk_fig){reference-type="ref" reference="pre_deltoid_disk_fig"}: Perturbation of the deltoid reflection map to anti-quadratic-like maps.
- Figure [\[deltoid_disk_fig\]](#deltoid_disk_fig){reference-type="ref" reference="deltoid_disk_fig"}: Mapping properties of the deltoid reflection map.
- Figure [\[deltoid_corr_fig\]](#deltoid_corr_fig){reference-type="ref" reference="deltoid_corr_fig"}: Dynamical planes of the deltoid reflection map and the associated antiholomorphic correspondence.
- Figure [\[basilica_schwarz_fig\]](#basilica_schwarz_fig){reference-type="ref" reference="basilica_schwarz_fig"}: The rank zero and one tiles of a C&C Schwarz reflection map.
- Figure [22](#basilica_anti_poly_fig){reference-type="ref" reference="basilica_anti_poly_fig"}: The Basilica anti-polynomial Julia set and the limit set of the corresponding C&C Schwarz reflection.
- Figure [\[itg_nielsen_first_return_fig\]](#itg_nielsen_first_return_fig){reference-type="ref" reference="itg_nielsen_first_return_fig"}: The degree two anti-Farey map.
- Figure [\[anti_quad_like_fig\]](#anti_quad_like_fig){reference-type="ref" reference="anti_quad_like_fig"}: A pinched anti-quadratic-like map.
- Figure [\[chebyshev_center_fig\]](#chebyshev_center_fig){reference-type="ref" reference="chebyshev_center_fig"}: Dynamical planes of a cubic Chebyshev Schwarz reflection map and the associated antiholomorphic correspondence.
- Figure [\[talbot_schwarz_fig\]](#talbot_schwarz_fig){reference-type="ref" reference="talbot_schwarz_fig"}: The dynamical planes of the Talbot Schwarz reflection, the Julia necklace group, and the Julia anti-polynomial.
- Figure [24](#apollo_cousins_fig){reference-type="ref" reference="apollo_cousins_fig"}: The Apollonian Julia set and the limit set of the Deltoid-and-Circle Schwarz reflection.
- Figure [\[affine_model_fig\]](#affine_model_fig){reference-type="ref" reference="affine_model_fig"}: A quasiregular (partially affine) model of the Apollonian anti-rational map.
- Figure [27](#c_and_c_conn_cantor_limit_fig){reference-type="ref" reference="c_and_c_conn_cantor_limit_fig"}: Limit sets of various Schwarz reflections in the C&C family.
- Figure [30](#c_and_c_conn_locus_fig){reference-type="ref" reference="c_and_c_conn_locus_fig"}: The connectedness locus of the C&C family and the tessellation of its escape locus.
- Figure [\[cheby_conn_locus_fig\]](#cheby_conn_locus_fig){reference-type="ref" reference="cheby_conn_locus_fig"}: The connectedness locus of the cubic Chebyshev family of Schwarz reflections and the tessellation of its escape locus.
- Figure [\[sigma_extremal_fig\]](#sigma_extremal_fig){reference-type="ref" reference="sigma_extremal_fig"}: Droplets corresponding to two degree six Suffridge maps.
- Figure [\[sigma_d\_mating_fig\]](#sigma_d_mating_fig){reference-type="ref" reference="sigma_d_mating_fig"}: Dynamical planes of two Schwarz reflection maps arising from $\Sigma_4^*$ and the corresponding necklace groups.
- Figure [33](#non_gasket_fig){reference-type="ref" reference="non_gasket_fig"} and [39](#hamiltonian_polyhedral_fig){reference-type="ref" reference="hamiltonian_polyhedral_fig"}: Homeomorphic limit and Julia sets of various kissing reflection groups and critically fixed anti-rational maps.
- Figure [40](#enrichments_fig){reference-type="ref" reference="enrichments_fig"}: Two enrichments of the Tischler graph of $\overline{z}^3$.
- Figure [\[ellipse_disk_fig\]](#ellipse_disk_fig){reference-type="ref" reference="ellipse_disk_fig"}: The dynamical plane of the Schwarz reflection map in an ellipse and a pair of inscribed disks.
- Figure [\[pinching_fig\]](#pinching_fig){reference-type="ref" reference="pinching_fig"}: Pinching of quadrature multi-domains.
- Figure [\[anti_farey_fig\]](#anti_farey_fig){reference-type="ref" reference="anti_farey_fig"}: The degree three anti-Farey map.
- Figure [\[augmented_dessin_fig\]](#augmented_dessin_fig){reference-type="ref" reference="augmented_dessin_fig"}: Relation between the dessin d'enfants of Shabat polynomials and associated Schwarz reflections.
- Figure [42](#pine_tree_fig){reference-type="ref" reference="pine_tree_fig"}: Conformally removable Jordan curves with infinitely many corners and inward/outward cusps.
**Acknowledgments.** Part of this work was done during the authors' visits to MSRI (Simons Laufer Mathematical Sciences Institute), the Institute for Mathematical Sciences at Stony Brook, and the Urgench State University, Uzbekistan. The authors thank these institutes for their hospitality and support. We would also like to thank Yusheng Luo and Dimitrios Ntalampekos for helpful conversations and useful comments.
O. Agam, E. Bettelheim, P. Wiegmann, and A. Zabrodin. Viscous fingering and the shape of an electronic droplet in the quantum Hall regime. , 88(23):236801, 2002.
D. Aharonov and H. S. Shapiro. A minimal-area problem in conformal mapping - preliminary report. Research bulletin trita-mat-1973-7, Royal Institute of Technology, 1973.
D. Aharonov and H. S. Shapiro. Domains on which analytic functions satisfy quadrature identities. , 30:39--73, 1976.
D. Aharonov and H. S. Shapiro. A minimal-area problem in conformal mapping - preliminary report: Part ii. Research bulletin trita-mat-1978-5, Royal Institute of Technology, 1978.
D. Aharonov, H. S. Shapiro, and A. Solynin. A minimal area problem in conformal mapping. , 78:157--176, 1999.
L. V. Ahlfors. . Second edition, University Lecture Series, vol. 38, American Mathematical Society, Providence, RI, 2006. With supplemental chapters by C. J. Earle, I. Kra, M. Shishikura and J. H. Hubbard.
J. W. Anderson and B. Maskit. On the local connectivity of limit sets of Kleinian groups. , 31:177--183, 1996.
K. Astala, T. Iwaniec, and G. Martin. , volume 148 of *Princeton Mathematical Series*. Princeton Univ. Press, Princeton, NJ, 2009.
A. F. Beardon. The geometry of discrete groups. Corrected reprint of the 1983 original. Graduate Texts in Mathematics, 91. Springer--Verlag, New York, 1995.
L. Bers. Simultaneous uniformization. , 66:94--97, 1960.
L. Bers. . , 91:570--600, 1970.
M. Bestvina and M. Feighn. A combination theorem for negatively curved groups. , pages 85--101, 1992.
A. Beurling and L Ahlfors. The boundary correspondence under quasiconformal mappings. , 96:125--142, 1956.
R. Bowen and C. Series. Markov maps associated with Fuchsian groups. , 50:153--170, 1979.
D. A. Brannan. Coefficient regions for univalent polynomials of small degree. , 14:165--169, 1967.
B. Branner and N. Fagella, *Quasiconformal Surgery in Holomorphic Dynamics*. With contributions by Xavier Buff, Shaun Bullett, Adam L. Epstein, Peter Haïssinsky, Christian Henriksen, Carsten L. Petersen, Kevin M. Pilgrim, Tan Lei and Michael Yampolsky. Cambridge Studies in Advanced Mathematics, vol. 141, Cambridge University Press, Cambridge, 2014.
L. Brickman. Extreme points of the set of univalent functions. , 76:372--374, 1970.
J. F. Brock, R. D. Canary, and Y. N. Minsky. The classification of Kleinian surface groups, II: The ending lamination conjecture. , 176:1--149, 2012.
X. Buff, A. L. Epstein, S. Koch, D. Meyer, K. Pilgrim, M. Rees, L. Tan. Questions about polynomial matings. , 21:1149--1176, 2012.
S. Bullett and M. Freiberger. Hecke groups, polynomial maps and matings. , 17:3922--3931, 2003.
S. Bullett and L. Lomonaco. Mating quadratic maps with the modular group II. , 220:185--210, 2020.
S. Bullett and L. Lomonaco. Dynamics of Modular Matings. , 410, Part B (2022), 108758.
S. Bullett and L. Lomonaco. Mating quadratic maps with the modular group III: the modular Mandelbrot set. [arxiv.org/abs/2010.04273v5](arxiv.org/abs/2010.04273v5), 2023.
S. Bullett and C. Penrose. Mating quadratic maps with the modular group. , 115:483--511, 1994.
L. A. Caffarelli, L. Karp, and H. Shahgholian. Regularity of a free boundary problem with application to the Pompeiu problem. , 151:269--292, 2000.
L. Carleson, P. W. Jones, and J. C. Yoccoz. Julia and John. , 25:1--30, 1994.
J. Chen, Z. Chen, and C. He, Boundary correspondence under $\mu(z)$-homeomorphisms. , 43:211--220, 1996.
J. H. Conway. . Second edition. A K Peters, Ltd., Natick, MA, 2001.
V. F. Cowling and W. C. Royster. Domains of variability for univalent polynomials. , 19:767--772, 1968.
H. S. M. Coxeter. Discrete groups generated by reflections. , 35:588--621, 1934.
W. D. Crowe, R. Hasson, P. J. Rippon, and P. E. D. Strain-Clark. On the structure of the Mandelbar set. , 2, 1989.
G. David. Solutions de l'équation de Beltrami avec $\|\mu\| = 1$. , 13:25--70, 1988.
P. J. Davis. . Number 17 in Carus Math. Monographs. Math. Assoc. Amer., 1974.
A. Denjoy. Sur une fonction réelle de Minkowski. (9), 17:105--151, 1938.
A. Douady. Systèmes dynamiques holomorphes. In *Séminaire Bourbaki*, volume 1982/83, pages 39--63, Astérisque, 105--106, Soc. Math. France, Paris, 1983.
A. Douady and J. H. Hubbard. On the dynamics of polynomial-like mappings. , 18:287--343, 1985.
A. Douady and J. H. Hubbard. . Publications Mathématiques d'Orsay. Université de Paris-Sud, Département de Mathématiques, Orsay, 1984 - 1985.
P. L. Duren. , volume 259 of *Grundlehren der mathematischen Wissenschaften*. Springer-Verlag, New York, 1983.
Peter Ebenfelt, Björn Gustafsson, Dmitry Khavinson, and Mihai Putinar, editors. , volume 156 of *Operator Theory: Advances and Applications*, Basel, 2005. Birkhäuser Verlag.
P. Elbau and G. Felder. Density of eigenvalues of random normal matrices. , 259:433--450, 2005.
P. Etingof and A. Varchenko. , volume 3 of *University Lecture Series*. American Mathematical Society, Providence, R.I., 1992.
P. Fatou. Sur les équations fonctionnelles. , 47:161--271, 1919.
P. Fatou. Sur les équations fonctionnelles. , 48:33--94, 1920.
P. Fatou. Sur les équations fonctionnelles. , 48:208--314, 1920.
P. Fatou. Sur l'itération des fonctions transcendantes Entières. , 47(4):337--370, 1926.
P. Fatou. Notice sur les travaux scientifiques de M. P. Fatou. Astronome titulaire de l'observatoire de Paris, 5--29, 1929, <https://www.math.purdue.edu/~eremenko/dvi/fatou-b.pdf>.
L. Geyer. Sharp bounds for the valence of certain harmonic polynomials. , 136:549--555, 2008.
L. Geyer. Classification of critically fixed anti-rational maps. <https://arxiv.org/abs/2006.10788v3>, 2020.
E. Ghys. Transformations holomorphes au voisinage d'une courbe de Jordan. , 289:383--388, 1984.
B. Gustafsson. Quadrature identities and the Schottky double. , 1:209--240, 1983.
B. Gustafsson, C. He, P. Milanfar, and M. Putinar. Reconstructing planar domains from their moments. , 16:1053--1070, 2000.
B. Gustafsson and M. Putinar. , volume 2199 of *Lecture Notes in Mathematics*, Springer, Cham, 2017.
B. Gustafsson and A. Vasil'ev. . Birkhäuser Basel, 2006.
P. Haïssinsky. Chirurgie parabolique. , 327:195--198, 1998.
P. Haïssinsky. Déformation J-équivalente de polynômes géométriquement finis. , 163:131--141, 2000.
A. Hatcher and W. Thurston. Moduli spaces of circle packings. Preprint, <https://pi.math.cornell.edu/~hatcher/Papers/CirclePacking.pdf>.
H. Hedenmalm and N. Makarov. Coulomb gas ensembles and Laplacian growth. , 106:859--907, 2013.
H. Hedenmalm and S. Shimorin, Weighted Bergman spaces and the integral means spectrum of conformal mappings. , 127:341--393, 2005.
J. Hubbard. Matings and the other side of the dictionary. , 21:1139--1147, 2012.
J. H. Hubbard and D. Schleicher. Multicorns are not path connected. , pages 73--102, 2014.
H. Inou and J. Kiwi. Combinatorics and topology of straightening maps, I: Compactness and bijectivity. , 231:2666--2733, 2012.
H. Inou and S. Mukherjee. Non-landing parameter rays of the multicorns. , 204:869--893, 2016.
H. Inou and S. Mukherjee. Discontinuity of straightening in anti-holomorphic dynamics: I. , 374:6445--6481, 2021.
P. W. Jones, On removable sets for Sobolev spaces in the plane. , 250--267, Princeton Math. Ser., 42, Princeton Univ. Press, Princeton, NJ, 1995.
P. Jones and S. Smirnov. Removability theorems for Sobolev functions and quasiconformal maps. , 38(2):263--279, 2000.
G. Julia. Mémoire sur l'itération des fonctions rationnelles. , 1:47--246, 1918.
G. Julia. Mémoire sur la permutabilité des fractions rationnelles. , 39:131--215, 1922.
C. Karafyllia and D. Ntalampekos. Extension of boundary homeomorphisms to mappings of finite distortion. , 125:488--510, 2022.
J. Kiwi. Rational laminations of complex polynomials. In *Laminations and foliations in dynamics, geometry and topology*, edited by M. Lyubich, J. W. Milnor, and Y. N. Minsky, volume 269 of Contemporary Mathematics, pages 111--154. American Mathematical Society, 2001.
F. Klein. . , 21(2):141--218, 1883.
S. K. Lando and A. K. Zvonkin. , volume 141 of *Encyclopaedia of Mathematical Sciences*. Springer-Verlag, 2004.
K. Lazebnik, N. G. Makarov, and S. Mukherjee. Univalent polynomials and Hubbard trees. , 374(7): 4839--4893, 2021, arXiv:1908.05813.
K. Lazebnik, N. G. Makarov, and S. Mukherjee. Bers slices in families of univalent maps. , 300:2771--2808, 2022, arXiv:2007.02429.
S.-Y. Lee, M. Lyubich, N. G. Makarov, and S. Mukherjee. Dynamics of Schwarz reflections: the mating phenomena. to appear in *Ann. Sci. Éc. Norm. Supér. (4)*, <https://arxiv.org/abs/1811.04979v3>, 2018.
S.-Y. Lee, M. Lyubich, N. G. Makarov, and S. Mukherjee. Schwarz reflections and the Tricorn. to appear in *Ann. Inst. Fourier (Grenoble)*, <https://arxiv.org/abs/1812.01573v2>, 2018.
S.-Y. Lee, M. Lyubich, N. G. Makarov, and S. Mukherjee. Schwarz reflections and anti-holomorphic correspondences. , 385:Paper No. 107766, 88, 2021, arXiv:1907.09107.
S. Y. Lee and N. G. Makarov. Sharpness of connectivity bounds for quadrature domains. <https://arxiv.org/abs/1411.3415>, 2014.
S. Y. Lee and N. G. Makarov. Topology of quadrature domains. , 29(2):333--369, 2016.
E. H. Lockwood. Cambridge University Press, New York, 1961.
R. Lodge, Y. Luo, and S. Mukherjee. Circle packings, kissing reflection groups and critically fixed anti-rational maps. , 10, paper no. e3, 38 pp., 2022, arXiv:2007.03558.
R. Lodge, Y. Luo, and S. Mukherjee. On deformation space analogies between Kleinian reflection groups and antiholomorphic rational maps. , 32(6):1428--1485, 2022, arXiv:2202.03550.
R. Lodge, M. Lyubich, S. Merenkov, and S. Mukherjee. On dynamical gaskets generated by rational Maps, Kleinian groups, and Schwarz reflections. , 27:1--54, 2023, arXiv:1912.13438.
L. Lomonaco. Parabolic-like mappings. , 35:2171--2197, 2015.
Y. Luo. On geometrically finite degenerations I: Boundaries of main hyperbolic components. , to appear, 2021.
Y. Luo, M. Lyubich, and S. Mukherjee. Degenerate (anti-)polynomial-like maps, Schwarz reflections, and boundary involutions. Manuscript in preparation, 2023.
Y. Luo and Y. Zhang. Circle packings, renormalizations and subdivision rules. <https://arxiv.org/abs/2308.13151>, 2023.
Y. Luo and Y. Zhang. On quasiconformal non-equivalence of gasket Julia sets and limit sets. Preprint, <https://drive.google.com/file/d/1aiwWs2Ai3GfC60CezhCfjKbDvzDShr53/view>, 2023.
M. Lyubich, J. Mazor, and S. Mukherjee. Antiholomorphic correspondences and mating I: realization theorems. <https://arxiv.org/abs/2303.02459>, 2023.
M. Lyubich, J. Mazor, and S. Mukherjee. Antiholomorphic correspondences and mating II: Shabat polynomial slices. Manuscript in preparation, 2023.
M. Lyubich, S. Merenkov, S. Mukherjee, and D. Ntalampekos. David extension of circle homeomorphisms, welding, mating, and removability. <https://arxiv.org/abs/2010.11256v2>, 2020.
M. Lyubich and Y. Minsky. Laminations in holomorphic dynamics. , 47:17--94, 1997.
A. Marden. *Hyperbolic manifolds, an introduction in 2 and 3 dimensions*. Cambridge University Press, 2016.
B. Maskit. . , 91:607--639, 1970.
C. McMullen. The classification of conformal dynamical systems. In *Current Developments in Mathematics*, edited by Bott, Hopkins, Jaffe, Singer, Stroock, and Yau, International Press, pp. 323--360, 1995.
C. McMullen, Automorphisms of rational maps. In *Holomorphic functions and moduli*, vol. I (Berkeley, CA, 1986), 31--60, Math. Sci. Res. Inst. Publ., 10, Springer, New York, 1988.
C. T. McMullen. Renormalization and 3-manifolds which fiber over the circle. , 1998.
C. T. McMullen and D. P. Sullivan. Quasiconformal homeomorphisms and dynamics. III. The Teichmüller space of a holomorphic dynamical system. , 135:351--395, 1998.
D. Meyer. Unmating of rational maps: sufficient criteria and examples. In *Frontiers in Complex Dynamics*, Vol. 51 of Princeton Math. Ser. (Princeton Univ. Press, Princeton, NJ, 2014), 197--233.
N. Mihalache. Julia and John revisited. , 215:67--86, 2011.
J. Milnor. Remarks on iterated cubic maps. , 1:5--24, 1992.
J. Milnor. On rational maps with two critical points. , 9:333--411, 2000.
J. Milnor. , volume 160 of *Annals of Mathematics Studies*. Princeton University Press, Princeton, NJ, third edition, 2006.
H. Minkowski. Zur Geometrie der Zahlen. In *Gesammelte Abhandlungen*, vol. 2, Teubner, Leipzig, 1911, pp. 43--52; reprinted by Chelsea, New York, 1967.
Y. Minsky. The classification of Kleinian surface groups. I. Models and bounds. , 171:1--107, 2010.
M. Mj. Cannon-Thurston maps for surface groups. , 179:1--80, 2014.
M. Mj. Ending laminations and Cannon-Thurston maps. With an appendix by Shubhabrata Das and Mj. 24:297--321, 2014.
M. Mj. Cannon-Thurston maps for Kleinian groups. , 5, paper no. e1, 49 pp., 2017.
S. Mukherjee. Orbit portraits of unicritical antiholomorphic polynomials. , 19:35--50, 2015.
S. Mukherjee, S. Nakane, and D. Schleicher. On Multicorns and Unicorns II: bifurcations in spaces of antiholomorphic polynomials. , 37:859--899, 2017.
S. Nakane. Connectedness of the tricorn. , 13:349--356, 1993.
S. Nakane and D. Schleicher. On Multicorns and Unicorns I : Antiholomorphic dynamics, hyperbolic components and real cubic polynomials. , 13:2825--2844, 2003.
J. P. Otal. Thurston's hyperbolization of Haken manifolds. , pages 77--194, 1998.
C. L. Petersen and P. Roesch. The Parabolic Mandelbrot Set <https://arxiv.org/abs/2107.09407>, 2021.
C. L. Petersen and S. Zakeri. On the Julia set of a typical quadratic polynomial with a Siegel disk. , 159:1--52, 2004.
K. M. Pilgrim. . Lecture Notes in Mathematics. Springer, 2003.
K. Pilgrim and L. Tan. Combining Rational Maps and Controlling Obstructions. , 18:221--245, 1998.
H. Poincaré. Théorie des groupes fuchsiens. (French) , 1:1--76, 1882.
S. Richardson. Hele Shaw flows with a free boundary produced by the injection of fluid into a narrow channel. , 56:609--618, 1972.
E. B. Saff and V. Totik. , volume 316 of *Grundlehren der mathematischen Wissenschaften*. Springer-Verlag, Berlin, 1997.
M. Sakai. A moment problem in Jordan domains. , 70:35--38, 1978.
M. Sakai. , volume 934 of *Lecture Notes in Mathematics,* Springer--Verlag, Berlin--New York, 1982.
M. Sakai. Regularity of a boundary having a Schwarz function. , 166:263--297, 1991.
R. Salem. On some singular monotonic functions which are strictly increasing. , 53:427--439, 1943.
L. Schneps. Dessins d'enfants on the Riemann sphere. In *The Grothendieck theory of dessins d'enfants (Luminy,1993)*, volume 200 of *London Math. Soc. Lecture Note Ser.*, pages 47--77. Cambridge Univ. Press, Cambridge, 1994.
O. Schramm. How to cage an egg. , 107:543--560, 1992.
H. Shahgholian. On quadrature domains and the Schwarz potential. , 171:61--78, 1992.
T. Sheil-Small. . Cambridge University Press, 2000.
T. J. Suffridge. Extreme points in a class of polynomials having univalent sequential limits. , 163:225--237, 1972.
D. Sullivan. Quasiconformal homeomorphisms and dynamics I: Solution of the Fatou-Julia problem on wandering domains. , 122:401--418, 1985.
R. Teodorescu, E. Bettelheim, O. Agam, A. Zabrodin, and P. Wiegmann. Normal random matrix ensemble as a growth problem. , 704:407--444, 2005.
W. Thurston. *The geometry and topology of $3$-manifolds*. Princeton lecture notes, 1978-1981.
W. Thurston. Hyperbolic Structures on 3-Manifolds I: Deformation of Acylindrical Manifolds. , 124:203--246, 1986.
P. Tukia. On isomorphisms of geometrically finite Möbius groups. , 61:171--214, 1985.
È. B. Vinberg. Discrete groups generated by reflections in Lobachevskii spaces. (Russian) , 72(114):471--488, 1967.
E. B. Vinberg and O. V. Shvartsman. *Geometry II: Spaces of Constant Curvature*, Encyclopaedia of Mathematical Sciences, vol. 29, Springer-Verlag, 1993.
S. E. Warschawski. On conformal mapping of infinite strips. , 51(2):280--335, 1942.
P. Wiegmann. Aharonov-Bohm effect in the quantum Hall regime and Laplacian growth problems. In Andrea Cappelli and Giuseppe Mussardo, editors, *Statistical Field Theories*, volume 73 of *NATO Science Series*, pages 337--349. Springer, Dordrecht, 2002.
S. Zakeri. David maps and Hausdorff dimension. , 29:121--138, 2004.
S. Zakeri. On boundary homeomorphisms of trans-quasiconformal maps of the disk. , 33:241--260, 2008.
[^1]: M.L. was partially supported by NSF grants DMS-1600519 and 1901357, a fellowship from the Hagler Institute for Advanced Study, and the Clay fellowship.
[^2]: S.M. was supported by the Department of Atomic Energy, Government of India, under project no.12-R&D-TFR-5.01-0500, an endowment of the Infosys Foundation, and SERB research project grants SRG/2020/000018 and MTR/2022/000248.
[^3]: To indicate interrelations among the papers that this survey is based on, we refer to the year of arXiv publication.
[^4]: where all the maps in question are anti-holomorphic
[^5]: Note that necklace groups were termed *function kissing reflection groups* in [@LLM1].
[^6]: This shows that although we divide out the map $\eta$ from the correspondence $f(w)=f(\eta(z))$, it is recovered in the grand orbit of $\mathfrak{C}$.
[^7]: The map $\pmb{\mathcal{F}}_2$ was termed $\rho$ in [@LLMM3].
[^8]: The nomenclature 'Julia necklace group' will be justified in Subsection [5.1.3](#talbot_crit_fixed_anti_poly_subsubsec){reference-type="ref" reference="talbot_crit_fixed_anti_poly_subsubsec"}.
| arxiv_math | {
"id": "2310.03316",
"title": "Mirrors of conformal dynamics: Interplay between anti-rational maps,\n reflection groups, Schwarz reflections, and correspondences",
"authors": "Mikhail Lyubich and Sabyasachi Mukherjee",
"categories": "math.DS math.CV math.GT",
"license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/"
} |
---
abstract: |
The variational inequality problem in finite-dimensional Euclidean space is addressed in this paper, and two inexact variants of the extragradient method are proposed to solve it. Instead of computing exact projections onto the constraint set, as in previous versions of the extragradient method, the proposed methods compute feasible inexact projections onto the constraint set using a relative error criterion. The first proposed method is a counterpart of the classical extragradient method with constant step size. To establish its convergence we assume, as in the standard approach, that the operator is pseudo-monotone and Lipschitz continuous. In the second method, instead of a fixed step size, a suitable step size is found in each iteration by performing a line search. Like the classical extragradient method, the proposed method performs only two projections onto the feasible set in each iteration. A full convergence analysis is provided, with no Lipschitz continuity assumption on the operator defining the variational inequality problem.
author:
- "R. Díaz Millán[^1]"
- "O.P. Ferreira[^2]"
- "J. Ugon[^3]"
bibliography:
- InexactExtraGradMethodVIP.bib
title: Extragradient method with feasible inexact projection to variational inequality problem
---
**keywords:** Variational inequality problem, Extragradient method, Frank-Wolfe algorithm, conditional gradient method, feasible inexact projection. **MSC 2020:** 65K05, 90C30, 90C25
# Introduction
This paper addresses the variational inequality problem in finite-dimensional Euclidean space. This problem is formally stated as follows: Let $F: {\mathbb R}^n\to {\mathbb R}^n$ be an operator and ${\cal C}\subset {\mathbb R}^n$ be a nonempty and closed convex set. The variational inequality problem (VIP($F,{\cal C}$)) associated with $F$ and ${\cal C}$ consists in finding a point $x^*\in {\cal C}$ such that $$\label{eq:mp}
\left\langle F({x^*}), x-{x^*} \right\rangle\geq 0 , \qquad \forall~x\in {\cal C}.$$ We denote by ${\cal C}^*$ the *solution set* of problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"}, which we will assume to be nonempty. The variational inequality problem has attracted the interest of the mathematical programming community not only in its own right but also because it is an abstract model for several families of problems in nonlinear analysis and its applications. For instance, if $F=\nabla f$, where $f:{\mathbb R}^n\to {\mathbb R}$ is a differentiable function, then VIP($F,{\cal C}$) corresponds to the problem of minimizing the function $f$ constrained to the set ${\cal C}$. When ${\cal C}$ is a cone ${\cal K}$, the VIP($F,{\cal C}$) is a complementarity problem, which is stated in the following form: compute $x^*\in {\mathbb R}^n$ such that $x^*\in {\cal K}$, $F(x^*)\in {\cal K}^*$ and $\left\langle F({x^*}), {x^*} \right\rangle=0$, where ${\cal K}^*$ denotes the dual cone of ${\cal K}$. For a comprehensive study of the theory and applications of variational inequalities, see [@FacchineiPang2003-II; @FacchineiPang2003-I].
The extragradient method was proposed in [@Korpelevivc76] in the 1970s and continues to attract the interest of variational inequality experts; see [@doi:10.1080/02331934.2010.539689; @doi:10.1137/14097238X; @doi:10.1080/02331934.2017.1377199; @millanbello] and the references therein. The method is attractive because it requires only two operator evaluations per iteration, which makes it numerically stable and hence potentially suited for large-scale problems. Apart from the projections required in its definition, which account for almost all of its computational cost whenever projecting onto the constraint set is difficult, it is a relatively simple method. In addition, the method converges under mild assumptions. All these features have motivated its study, resulting over the years in many different versions of the method and in a large body of literature on the subject, including [@FacchineiPang2003-I; @FacchineiPang2003-II; @BelloMillan; @wangxiu; @buradutta; @doi:10.1137/S0363012998339745] and the references therein.
Another issue that has motivated the development of numerical methods for constrained problems is the computation of the projection, which is the step that accounts for practically all of the computational demands of projection-based methods such as the extragradient method. In general, computing the projection requires solving a quadratic problem constrained to the feasible set at each iteration, which can significantly raise the cost per iteration when the number of unknowns is large. In light of this, it may not be reasonable to compute exact projections when the iterates of the method are far from the solution of the problem under consideration. Over the years, various inexact procedures that become increasingly accurate as the solution is approached have been proposed in an effort to reduce the computational cost of the projections, leading to more effective projection-based methods; see for example [@BirginMartinezRaydan2003; @Bonettini2016; @Golbabaee_Davies2018; @Gonccalves2020; @SalzoVilla2012; @VillaSalzo2013; @Rasch2020; @MillanFerreiraUgon; @ReinierOrizonLeandro2019].
The purpose of this paper is to present two variants of the extragradient method that employ feasible inexact projections. In the variants we propose, we employ a version of the scheme introduced in [@VillaSalzo2013 Example 1], in which the inexact projection onto the feasible set is computed allowing an appropriate relative error tolerance. Firstly, we present a variant of the extragradient method with constant stepsize and show that it preserves the same convergence result as the classical method, see [@FacchineiPang2003-II; @Korpelevivc76]. We show that if $F$ is a pseudo monotone operator on ${\cal C}$ with respect to ${\cal C}^*$ and Lipschitz continuous, the generated sequence converges to a solution of VIP($F,{\cal C}$). It is important to note that in this version the Lipschitz constant is required to compute the stepsize. Considering that in almost every application the Lipschitz constant is either not accessible or difficult to compute, we also propose and analyse a feasible inexact projection version of the extragradient method using an Armijo-type line search. It is worth noting that, like the classical extragradient method, this method performs only two projections onto the feasible set in each iteration. The full convergence of the sequence to a solution is shown, with $F$ being a pseudo monotone operator on ${\cal C}$ with respect to ${\cal C}^*$ and with no Lipschitz continuity assumption, which are the same results as for the version with exact projections, see [@millanbello; @buramillan; @RePEc; @konov; @solodsvaiter].
The organization of the paper is as follows. In Section [2](#sec:Preliminares){reference-type="ref" reference="sec:Preliminares"}, we present some notation and basic results used throughout the paper. In Section [3](#Sec:InexcProj){reference-type="ref" reference="Sec:InexcProj"} we revisit the concept of feasible inexact projection onto a closed and convex set and describe some new properties of the feasible inexact projection. Section [4](#Sec:ExtraGradmeMethod){reference-type="ref" reference="Sec:ExtraGradmeMethod"} describes and analyzes the extragradient method with a feasible inexact projection for solving problem [\[eq:mp\]](#eq:mp){reference-type="eqref" reference="eq:mp"}. In Section [5](#Se:ExtGradLineSerch){reference-type="ref" reference="Se:ExtGradLineSerch"}, an inexact variant of the extragradient method with line search for solving VIP($F,{\cal C}$) is introduced and analyzed. Finally, some concluding remarks are made in Section [7](#Sec:Conclusions){reference-type="ref" reference="Sec:Conclusions"}.
# Preliminaries {#sec:Preliminares}
In this section, we present some preliminary results used throughout the paper. We denote ${\mathbb{N}}:=\{1,2,3, \ldots\}$, by $\langle \cdot,\cdot \rangle$ the usual inner product and by $\|\cdot\|$ the Euclidean norm. Let ${\cal C}\subset \mathbb{R}^n$ be a nonempty, closed and convex set; the *projection* onto ${\cal C}$ is the map ${\cal{P}}_{\cal C}: \mathbb{R}^n \to {\cal C}$ defined by $${\cal{P}}_{\cal C}(v):= \arg\min_{z \in {\cal C}}\|v-z\|.$$ In the next lemma, we present some important properties of the projection mapping.
**Lemma 1**. *Given a convex and closed set ${\cal C}\subset \mathbb{R}^n$ and $v\in \mathbb{R}^n$, the following properties hold:*
1. *$\langle v-{\cal{P}}_{\cal C}(v), z-{\cal{P}}_{\cal C}(v)\rangle\leq0$, for all $z\in {\cal C}$;*
2. *$\|{\cal{P}}_{\cal C}(v)-z\|^2\leq \|v-z\|^2- \|{\cal{P}}_{\cal C}(v)-v\|^2$, for all $z\in {\cal C}$.*
*Proof.* The item (i) is proved in [@BauschkeCombettes20011 Theorem 3.14]. For item (ii), combine $\|v-z\|^2=\|{\cal{P}}_{\cal C}(v)-v\|^2+\|{\cal{P}}_{\cal C}(v)-z\|^2-2\langle {\cal{P}}_{\cal C}(v)-v, {\cal{P}}_{\cal C}(v)-z\rangle$ with item (i). ◻
For the formula in the next proposition see, for example, [@BauschkeCombettes20011 Example 3.21].
**Proposition 2**. *Let $a, v \in \mathbb{R}^n$ and $H=\{x\in \mathbb{R}^n:~\langle v, x- a\rangle \leq 0 \}$. If ${\bar x} \notin H$, then $${\cal{P}}_{H}({\bar x})= {\bar x}-\frac{1}{\Vert v\Vert ^{2}}\langle v, {\bar x}- a \rangle v.$$*
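The formula in Proposition [Proposition 2](#prop:0){reference-type="ref" reference="prop:0"} is used later to project onto separating half spaces, and it is straightforward to implement. The following short Python sketch is only an illustration: the function name and the handling of the case ${\bar x}\in H$ are our choices and are not part of the statement.

```python
import numpy as np

def project_halfspace(x_bar, v, a):
    """Projection of x_bar onto H = {x : <v, x - a> <= 0} (Proposition 2).

    If x_bar already belongs to H, it is its own projection; otherwise the
    closed-form expression of Proposition 2 is applied.
    """
    s = np.dot(v, x_bar - a)
    if s <= 0.0:  # x_bar is already in H
        return np.asarray(x_bar, dtype=float).copy()
    return x_bar - (s / np.dot(v, v)) * v
```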
Let $F: {\mathbb R}^n\to {\mathbb R}^n$ be an operator, ${\cal C}\subset {\mathbb R}^n$ be a nonempty and closed convex set. The operator $F$ is said to be *pseudo monotone on ${\cal C}$ with respect to the solution set ${\cal C}^*$* of problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"} if the set ${\cal C}^*$ is nonempty and, for every $x^*\in {\cal C}^*$, there holds: $$\left\langle F(x), x-{x^*} \right\rangle\geq 0, \qquad \forall x\in {\cal C}.$$
**Definition 1**. *Let $S$ be a nonempty subset of $\mathbb{R}^n$. A sequence $( v_k)_{k\in\mathbb{N}}\subset \mathbb{R}^n$ is said to be quasi-Fejér convergent to $S$, if and only if, for all $v\in S$ there exists ${\bar k}\ge 0$ and a summable sequence $(\epsilon_k)_{k\in\mathbb{N}}$, such that $\|v_{k+1}-v\|^2 \le \| v_k - v\|^2+\epsilon_k$ for all $k\ge {\bar k}$.*
In the following lemma, we state the main properties of quasi-Fejér sequences that we will need; a comprehensive study on this topic can be found in [@Combettes2001].
**Lemma 3**. *Let $S$ be a nonempty subset of $\mathbb{R}^n$ and $(v_k)_{k\in\mathbb{N}}$ be a quasi-Fejér sequence convergent to $S$. Then, the following conditions hold:*
1. *the sequence $(v_k)_{k\in\mathbb{N}}$ is bounded;*
2. *if a cluster point ${\bar v}$ of $(v_k)_{k\in\mathbb{N}}$ belongs to $S$, then $(v_k)_{k\in\mathbb{N}}$ converges to ${\bar v}$.*
# Feasible inexact projection {#Sec:InexcProj}
In this section, we will revisit the concept of feasible inexact projection onto a closed and convex set. This concept has already been utilized in [@AdOriLea2020; @OriFabaGil2018; @ReinierOrizonLeandro2019; @MillanFerreiraUgon]. We also describe some new properties of the feasible inexact projection, which is employed throughout the work. The definition of feasible inexact projection is as follows.
**Definition 2**. *Let ${\cal C}\subset {\mathbb R}^n$ be a closed convex set and let $\gamma \in {\mathbb R}_{+}$ be a given error tolerance. The *feasible inexact projection mapping* relative to $u \in {\cal C}$ with error tolerance ${\gamma}$, denoted by ${\cal P}^{\gamma}_{\cal C}(u, \cdot): {\mathbb R}^n \rightrightarrows {\cal C}$, is the set-valued mapping defined as follows $$\label{eq:Projw}
{\cal P}^{\gamma}_{\cal C}(u, v) := \left\{w\in {\cal C}:~\big\langle v-w, y-w \big\rangle \leq \gamma \|w-u\|^2,~\forall y \in {\cal C} \right\}.$$ Each point $w\in {\cal P}^{\gamma}_{\cal C}( u, v)$ is called a *feasible inexact projection of $v$ onto ${\cal C}$ relative to $u$ with error tolerance ${\gamma}$*.*
The feasible inexact projection generalizes the concept of usual projection. In the following, we present some remarks about this concept.
**Remark 1**. *Let the error tolerance $\gamma \in {\mathbb R}_{+}$, the set ${\cal C}\subset {\mathbb R}^n$ and the point $u\in {\cal C}$ be as in Definition [Definition 2](#def:InexactProj){reference-type="ref" reference="def:InexactProj"}. For all $v\in {\mathbb R}^n$, it follows from [\[eq:Projw\]](#eq:Projw){reference-type="eqref" reference="eq:Projw"} that ${\cal P}^0_{\cal C}(u, v)$ is the exact projection of $v$ onto ${\cal C}$; see [@Bertsekas1999 Proposition 2.1.3, p. 201]. Moreover, ${\cal P}^0_{\cal C}( u, v) \subset {\cal P}^{\gamma}_{\cal C}( u, v)$, which implies that ${\cal P}^{\gamma}_{\cal C}( u, v)\neq \varnothing$, for all $u\in {\cal C}$ and $v\in {\mathbb R}^n$. In general, if $\gamma\leq {\bar \gamma}$ then ${\cal P}^{\gamma}_{\cal C}( u, v) \subset {\cal P}^{\bar \gamma}_{\cal C}( u, v)$.*
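To illustrate how a point of ${\cal P}^{\gamma}_{\cal C}(u, v)$ may be obtained in practice, the sketch below applies Frank-Wolfe (conditional gradient) steps to the function $z\mapsto \frac{1}{2}\|z-v\|^2$ and stops as soon as the relative error criterion in [\[eq:Projw\]](#eq:Projw){reference-type="eqref" reference="eq:Projw"} is satisfied; this is only in the spirit of the scheme of [@VillaSalzo2013 Example 1]. The linear minimization oracle `lmo`, the starting point $u$ and all names are our illustrative assumptions.

```python
import numpy as np

def inexact_projection(v, u, lmo, gamma, max_iter=1000):
    """Return w in C with <v - w, y - w> <= gamma * ||w - u||^2 for all y in C.

    lmo(c) is assumed to return a minimizer of <c, y> over C (a linear
    minimization oracle).  Starting from the feasible point u, Frank-Wolfe
    steps are applied to z -> 0.5 * ||z - v||^2; the Frank-Wolfe gap
    max_{y in C} <v - w, y - w> is exactly the left-hand side of (eq:Projw),
    so the loop stops once the relative error criterion holds (or when
    max_iter is reached).  All iterates are convex combinations of points
    of C, hence feasible.
    """
    w = np.asarray(u, dtype=float).copy()
    for _ in range(max_iter):
        s = lmo(w - v)                      # vertex minimizing <w - v, y> over C
        gap = np.dot(v - w, s - w)          # = max_{y in C} <v - w, y - w>
        if gap <= gamma * np.dot(w - u, w - u):
            break
        t = min(1.0, gap / np.dot(s - w, s - w))   # exact line search in [0, 1]
        w = w + t * (s - w)
    return w
```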
Below we present a particular counterpart, for the feasible inexact projection operator, of the firm non-expansiveness of the projection operator; its proof follows the same ideas as in [@ReinierOrizonLeandro2019].
**Proposition 4**. *Let $u\in {\cal C}$, $v, {\bar v}\in {\mathbb R}^n$ and $\gamma\geq 0$. If ${w}\in {\cal{P}}^{\gamma}_C({u},v)$ and ${\bar w}= {\cal{P}}_C({\bar v})$, then $$\|{w}-{\bar w}\|^2\leq \|{v}-{\bar v}\|^2- \|({v}-{\bar v})-({w}-{\bar w})\|^2 +2\gamma\|{w}-{u}\|^2.$$*
*Proof.* Since ${w}\in {\cal{P}}^{\gamma}_C({u},v)$ and ${\bar w}= {\cal{P}}_C({\bar v})$, it follows from [\[eq:Projw\]](#eq:Projw){reference-type="eqref" reference="eq:Projw"} and Lemma [Lemma 1](#le:projeccion){reference-type="ref" reference="le:projeccion"} that $$\big\langle v-w, {\bar w}-w \big\rangle \leq \gamma \|w-u\|^2, \qquad \big\langle {\bar v}-{\bar w}, w-{\bar w} \big\rangle \leq 0$$ By adding the last two inequalities, some algebraic manipulations yield $$-\big\langle {\bar v}-{v}, {\bar w}-w \big\rangle + \|{w}-{\bar w}\|^2 \leq \gamma \|w-u\|^2.$$ Since $\|({\bar v}-{v})- ({\bar w}-w)\|^2=\|{\bar v}-{v}\|^2-2\big\langle {\bar v}-{v}, {\bar w}-w \big\rangle +\|{\bar w}-{w}\|^2$, the desired inequality follows by combination with the last inequality. ◻
**Lemma 5**. *Let $F: {\mathbb R}^n\to {\mathbb R}^n$ be an operator, ${\cal C}\subset {\mathbb R}^n$ be a nonempty, closed, and convex set, $x \in {\cal C}$ and $0\leq \gamma < 1$. Take $z\in {\mathbb R}^n$ and any inexact projection $$w(\alpha) \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(z)), \qquad \alpha \in(0, +\infty).$$ Then, there hold:*
1. *$\big\langle F(z), w(\alpha) - x\big\rangle \leq \dfrac{\gamma-1}{\alpha}\|w(\alpha)-x\|^2$;*
2. *$\|w(\alpha)-x\|\leq\dfrac{\alpha}{1-\gamma}\|F(z)\|$.*
*Proof.* Since $w(\alpha) \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(z))$ we obtain $\big\langle x-\alpha F(z)-w(\alpha), x-w(\alpha) \big\rangle \leq \gamma \|w(\alpha)-x\|^2$, which after some algebraic manipulations yields $$\|w(\alpha)-x\|^2-\alpha \big\langle F(z), x-w(\alpha) \big\rangle \leq \gamma \|w(\alpha)-x\|^2.$$ Thus, item $(i)$ follows from the last inequality.
We proceed to prove item $(ii)$. For that, first note that item $(i)$ is equivalent to $$\label{eq:aitm1}
0\leq \dfrac{1-{\gamma}}{\alpha}\|w(\alpha)-x\|^2\leq \big\langle F(z), x-w(\alpha)\big\rangle.$$ If $w(\alpha)=x$, then the inequality holds trivially. Assume that $w(\alpha)\neq x$. Thus, the inequality in item (ii) follows by combining the inequality [\[eq:aitm1\]](#eq:aitm1){reference-type="eqref" reference="eq:aitm1"} with $\langle F(z), x-w(\alpha)\rangle\leq \|F(z)\|\|x-w(\alpha)\|$. ◻
**Corollary 6**. *Let $F$, ${\cal C}$, $x$ and $\gamma$ be as in Lemma [Lemma 5](#Le:ProjProperty){reference-type="ref" reference="Le:ProjProperty"}. The following statements are equivalent:*
1. *$x$ is a solution of the VIP(F,${\cal C}$);*
2. *$x\in {\cal P}^{\gamma}_{\cal C}(x,x-\alpha F(x))$, for all $\alpha \in(0, +\infty)$;*
3. *there exists ${\bar \alpha}>0$ such that $\langle F(x),w({\bar \alpha})-x\rangle\geq 0$ for $w({\bar \alpha})\in {\cal P}^{\gamma}_{\cal C}(x,x-{\bar \alpha} F(x))$.*
*Proof.* Proof of equivalence between item $(i)$ and item $(ii)$: We first assume that item $(i)$ holds, i.e., $x$ is a solution for problem [\[eq:mp\]](#eq:mp){reference-type="eqref" reference="eq:mp"}. In this case, by taking $w(\alpha) \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$, we find that $w(\alpha) \in {\cal C}$. Consequently, we have $\big\langle F(x), w(\alpha)-x \big\rangle \geq 0$. Considering that $\alpha > 0$ and $0 \leq \gamma < 1$, the last inequality, along with item $(i)$ of Lemma [Lemma 5](#Le:ProjProperty){reference-type="ref" reference="Le:ProjProperty"} for $z=x$, implies that $w(\alpha) = x$. Hence, $x \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$, and item $(ii)$ also holds. Reciprocally, assuming that item $(ii)$ holds, if $x \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$, then applying [\[eq:Projw\]](#eq:Projw){reference-type="eqref" reference="eq:Projw"} with $w = x$, $v = x-\alpha F(x)$, and $u = x$ yields $\big\langle x-\alpha F(x)-x, y-x \big\rangle \leq 0$, for all $y \in {\cal C}$. Given that $\alpha > 0$, the last inequality is equivalent to $\big\langle F(x), y-x \big\rangle \geq 0$, for all $y \in {\cal C}$. Thus, $x$ is a solution for problem [\[eq:mp\]](#eq:mp){reference-type="eqref" reference="eq:mp"}, and item $(i)$ holds as well.
Proof of equivalence between item $(ii)$ and item $(iii)$: Let us assume that item $(ii)$ holds. Thus, item $(i)$ also holds, and $x$ is a solution for problem [\[eq:mp\]](#eq:mp){reference-type="eqref" reference="eq:mp"}, which implies that $\left\langle F(x), y-x \right\rangle \geq 0$ for all $y\in {\cal C}$. Considering that for any $w({\bar \alpha})\in {\cal P}^{\gamma}_{\cal C}(x,x-{\bar \alpha} F(x))$, we have $w({\bar \alpha})\in {\cal C}$, it follows that $\langle F(x),w({\bar \alpha})-x\rangle\geq 0$, and item $(iii)$ holds. Conversely, we assume, for contradiction, that item $(ii)$ does not hold. Therefore, $x \notin {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$, and considering that $w(\alpha) \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$, we conclude that $x \neq w(\alpha)$. As a result, because $\alpha > 0$ and $0 \leq \gamma < 1$, it follows from item $(i)$ of Lemma [Lemma 5](#Le:ProjProperty){reference-type="ref" reference="Le:ProjProperty"} that $\big\langle F(x), w(\alpha) - x \big\rangle < 0$, for all $\alpha \in (0, +\infty)$. Thus, item $(iii)$ does not hold, which leads to a contradiction. Therefore, $(iii)$ implies $(ii)$. ◻
**Corollary 7**. *Let $F: {\mathbb R}^n\to {\mathbb R}^n$ be an operator, ${\cal C}\subset {\mathbb R}^n$ be a nonempty, closed, and convex set, $x \in {\cal C}$, $0\leq \gamma < 1$ and $\alpha>0$. If $y \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$ and $x^{+} \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(y))$, then the following inequalities hold:*
1. *$\|y-x\|\leq\dfrac{\alpha}{1-\gamma}\|F(x)\|$;*
2. *$\|x^{+}-x\|\leq\dfrac{\alpha}{1-\gamma}\|F(y)\|$.*
*As a consequence, if $F$ is Lipschitz continuous on ${\cal C}$ with constant $L > 0$, then it holds: $$\label{eq:ProjPropertyii}
\|x^{+}-x\|\leq \dfrac{\alpha(1-\gamma+\alpha L)}{(1-\gamma)^2}\|F(x)\|.$$*
*Proof.* Applying item $(ii)$ of Lemma [Lemma 5](#Le:ProjProperty){reference-type="ref" reference="Le:ProjProperty"} with $w(\alpha)=y$ and $z=x$ we obtain item $(i)$ and with $w(\alpha)=x^{+}$ and $z=y$ we obtain item (ii). We proceed to prove [\[eq:ProjPropertyii\]](#eq:ProjPropertyii){reference-type="eqref" reference="eq:ProjPropertyii"}. For that, we first note that since $F$ is Lipschitz continuous on ${\cal C}$ with constant $L > 0$, we obtain that $$\|F(y)\|\leq \|F(y)-F(x)\|+\|F(x)\|\leq L\|y-x\|+\|F(x)\|.$$ Thus, considering that $y \in {\cal P}^{\gamma}_{\cal C}(x, x-\alpha F(x))$, we can apply item $(i)$ to obtain $$\|F(y)\|\leq \dfrac{\alpha L}{1-\gamma}\|F(x)\|+\|F(x)\|= \dfrac{1-\gamma+\alpha L}{1-\gamma}\|F(x)\|.$$ By combining the last inequality with item $(ii)$, the desired inequality follows. ◻
# Extragradient inexact method with constant step size {#Sec:ExtraGradmeMethod}
In this section, we describe the extragradient method with a feasible inexact projection for solving problem [\[eq:mp\]](#eq:mp){reference-type="eqref" reference="eq:mp"}. It should be noted that, unlike the classical extragradient method, which uses exact projections onto the constraint set, the proposed method computes inexact projections onto the constraint set by means of an appropriate relative error criterion.
The inexact version of the classical extragradient method is stated as follows:
[\[Alg:ExtraGradMethod\]]{#Alg:ExtraGradMethod label="Alg:ExtraGradMethod"}
Let us examine the main features of EInexPM. To begin, we select a constant step size $\alpha>0$, an upper bound ${\bar \gamma}$ for the error tolerances such that $0<\bar{\gamma}<1/2$, and an exogenous summable sequence $(a_k)_{k\in\mathbb{N}}$ to control the error tolerance. The stopping criterion $F(x^{k}) = 0$ is then evaluated at the current iterate $x^{k}$. If this criterion is not satisfied, a non-negative error tolerance $\gamma_k$ that fulfils the requirements [\[eq:Tolerance\]](#eq:Tolerance){reference-type="eqref" reference="eq:Tolerance"} is selected. Then, by using an inner procedure, $y^k$ is computed as any feasible inexact projection of $x^k - \alpha F(x^k)$ onto the feasible set ${\cal C}$ relative to $x^k$, i.e., $y^{k} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\alpha F(x^k) \big)$. Finally, using again an inner procedure, the next iterate $x^{k+1}$ is computed as any feasible inexact projection of $x^k - \alpha F(y^k)$ onto the feasible set ${\cal C}$ relative to $x^k$, i.e., $x^{k+1} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\alpha F(y^k) \big)$.
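To make the description above concrete, here is a minimal Python sketch of the EInexPM iteration. It assumes an oracle `inexact_projection(v, u, gamma)` returning some point of ${\cal P}^{\gamma}_{\cal C}(u, v)$, for instance the Frank-Wolfe sketch of Section [3](#Sec:InexcProj){reference-type="ref" reference="Sec:InexcProj"} with its linear minimization oracle fixed; the concrete rule used for $\gamma_k$ and all names are our illustrative choices and not part of the formal statement of the method.

```python
import numpy as np

def einexpm(F, inexact_projection, x1, alpha, gamma_bar, a, max_iter=500):
    """Sketch of EInexPM: extragradient steps with feasible inexact projections.

    F                  : operator of VIP(F, C)
    inexact_projection : oracle (v, u, gamma) -> some point of P^gamma_C(u, v)
    x1                 : feasible starting point (x1 in C)
    alpha              : constant step size (0 < alpha < sqrt(1 - 2*gamma_bar)/L)
    gamma_bar          : upper bound on the error tolerances (0 < gamma_bar < 1/2)
    a                  : summable sequence (a_k) controlling the tolerances
    """
    x = np.asarray(x1, dtype=float).copy()
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) == 0.0:        # stopping criterion F(x^k) = 0
            return x
        # gamma_k < gamma_bar with gamma_k * ||F(x^k)||^2 <= a_k
        gamma_k = min(0.5 * gamma_bar, a[k] / np.dot(Fx, Fx))
        y = inexact_projection(x - alpha * Fx, x, gamma_k)    # first inexact projection
        x = inexact_projection(x - alpha * F(y), x, gamma_k)  # second inexact projection
    return x
```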
It is worth noting that if $\gamma_k \equiv 0$, then Remark [Remark 1](#rem: welldef){reference-type="ref" reference="rem: welldef"} implies that inexact projections are the exact ones. Hence, EInexPM corresponds to the classical extragradient method introduced in [@Korpelevivc76]. It is important to note that $\gamma_k$ in [\[eq:Tolerance\]](#eq:Tolerance){reference-type="eqref" reference="eq:Tolerance"} can be selected as any nonnegative real number fulfilling $0 \leq \gamma_k \|F(x^{k})\|^2 \leq a_k$, for a prefixed sequence $(a_k)_{k\in\mathbb{N}}$. In this case, we have $$\label{eq:ToleranceCond}
\sum_{k \in \mathbb{N}} \big(\gamma_k\|F(x^k)\|^2\big) < +\infty.$$ Since $x^{1}\in {\cal C}$ and, for all $k \in \mathbb{N}$, $x^{k+1}$ is a feasible inexact projection onto ${\cal C}$, we conclude $(x^k)_{k\in\mathbb{N}}\subset {\cal C}$. As a consequence of ${\cal C}$ being a closed set, any cluster point of $(x^k)_{k\in\mathbb{N}}$, if any exists, belongs to ${\cal C}$.
Next we present two examples of sequences $(a_k)_{k\in\mathbb{N}}$ satisfying $\sum_{k \in \mathbb{N}} a_k<+\infty$.
**Example 1**. *Sequences $(a_k)_{k\in\mathbb{N}}$ satisfying $\sum_{k \in \mathbb{N}} a_k<+\infty$ are obtained by taking $a_{k}:=b_{k-1}-b_{k}$ with $\bar{b}>0$ and $(b_k)_{k\in\mathbb{N}}$ satisfying one of the following conditions: (i) $b_0=2\bar{b}$, $b_k=\bar{b}/k$, for all $k=1, 2, \ldots$; (ii) $b_0=2\bar{b}$, $b_k=\bar{b}/\ln(k+1)$, for all $k=1, 2, \ldots$. In both cases $(b_k)_{k\in\mathbb{N}}$ is nonincreasing, so $a_k\geq 0$ and $\sum_{k=1}^{N} a_k=b_0-b_N\leq b_0$ for every $N$, which gives the summability.*
The convergence analysis of the sequence $(x^k)_{k\in\mathbb{N}}$ produced by EInexPM will be discussed in the following sections.
## Convergence analysis
We will show in this section that the sequence $(x^k)_{k\in\mathbb{N}}$ generated by EInexPM converges to a solution of VIP(F, ${\cal C}$) when $F$ is a pseudo monotone and Lipschitz continuous operator with Lipschitz constant $L> 0$. To state our first result, let us recall that the solution set of VIP(F, ${\cal C}$) is denoted by ${\cal C}^*$ and define the following constants: $$\label{eq:constetanu}
{\bar \eta}:=1- \alpha^2 L^2-2{\bar \gamma}, \qquad \qquad {\bar \nu}:=\dfrac{\alpha^2(1-{\bar \gamma}+\alpha L)^2}{(1-{\bar \gamma})^4}.$$
**Lemma 8**. *Let ${\cal C}\subset {\mathbb R}^n$ be a nonempty and closed convex set and $(x^k)_{k\in{\mathbb N}}$ the sequence generated by Algorithm [\[Alg:ExtraGradMethod\]](#Alg:ExtraGradMethod){reference-type="ref" reference="Alg:ExtraGradMethod"}. Assume that $F: {\mathbb R}^n\to {\mathbb R}^n$ is a pseudo monotone operator on ${\cal C}$ with respect to ${\cal C}^*$ and Lipschitz continuous on ${\cal C}$ with constant $L > 0$. Then, for any $x^*\in {\cal C}^*$, there holds: $$\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2- {\bar \eta} \|x^{k}-y^{k}\|^2 + {\bar \nu} \gamma_k\|F(x^k)\|^2, \qquad k=1, 2, \ldots.$$*
*Proof.* First note that $$\begin{gathered}
\big\langle x^{k}-\alpha F(y^{k})-y^{k}, x^{k+1}-y^{k}\big\rangle=\big\langle x^{k}-\alpha F(x^{k})-y^{k}, x^{k+1}-y^{k}\big\rangle \\+ \alpha\big\langle F(x^{k})- F(y^{k}), x^{k+1}-y^{k}\big\rangle.\end{gathered}$$ Since [\[eq:IntStep1\]](#eq:IntStep1){reference-type="eqref" reference="eq:IntStep1"} implies that $y^{k} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k - \alpha F(x^k) \big)$, we conclude that $$\label{eq:faip}
\big\langle x^{k}-\alpha F(y^{k})-y^{k}, x^{k+1}-y^{k}\big\rangle\leq \gamma_k \|y^{k}-x^{k}\|^2+ \alpha\big\langle F(x^{k})- F(y^{k}),x^{k+1}-y^{k}\big\rangle.$$ For simplicity, set $z^k:=x^k-\alpha F(y^k)$. After some algebraic manipulations, we conclude that $$\begin{aligned}
\|x^{k+1}-x^*\|^2&=\|x^{k+1}-z^{k}\|^2+ \|z^{k}-x^*\|^2-2 \langle x^{k+1}-z^{k}, x^*-z^{k} \rangle \\
&=-\|x^{k+1}-z^{k}\|^2+ \|z^{k}-x^*\|^2+2 \langle z^{k}-x^{k+1}, x^*- x^{k+1} \rangle.\end{aligned}$$ Using that $x^{k+1} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, z^k \big)$ we conclude that $\langle z^{k}-x^{k+1}, x^*- x^{k+1} \rangle \leq \gamma_k \|x^{k+1}-x^{k}\|^2$, which combined with the previous equality give us $$\label{eq:poee}
\|x^{k+1}-x^*\|^2\leq \|z^{k}-x^*\|^2-\|x^{k+1}-z^{k}\|^2+2\gamma_k \|x^{k+1}-x^{k}\|^2$$ Taking into account that $z^k=x^k-\alpha F(y^k)$, some calculations show that $$\begin{aligned}
\label{eq:poe}
\|z^{k}-x^*\|^2-\|x^{k+1}-z^{k}\|^2&=\|x^k-x^*-\alpha F(y^k)\|^2-\|x^k-x^{k+1}-\alpha F(y^k)\|^2\notag\\
&= \|x^k-x^*\|^2-\|x^k-x^{k+1}\|^2+2\alpha\big\langle F(y^k), x^*-x^{k+1}\big\rangle.\end{aligned}$$ On the other hand, considering that $x^*\in {\cal C}^*$, $y^{k}\in {\cal C}$ and $F$ is pseudo monotone operator on ${\cal C}$ with respect to ${\cal C}^*$ we have $\big\langle F(y^{k}), y^{k}-{x^*} \big\rangle\geq 0$. Thus, we conclude that $$\big\langle F(y^{k}), {x^*}-x^{k+1} \big\rangle\leq \big\langle F(y^{k}), y^{k}-x^{k+1}\big\rangle.$$ The last inequality, when combined with [\[eq:poe\]](#eq:poe){reference-type="eqref" reference="eq:poe"}, implies that $$\|z^{k}-x^*\|^2-\|x^{k+1}-z^{k}\|^2\leq \|x^k-x^*\|^2-\|x^k-x^{k+1}\|^2+2\alpha \big\langle F(y^{k}), y^{k}-x^{k+1} \big\rangle.$$ The previous inequality is now combined with [\[eq:poee\]](#eq:poee){reference-type="eqref" reference="eq:poee"} to provide the following inequality $$\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2-\|x^k-x^{k+1}\|^2+\gamma_k \|x^{k+1}-x^{k}\|^2 +2\alpha \big\langle F(y^{k}), y^{k}-x^{k+1} \big\rangle.$$ Since $\|x^{k}-x^{k+1}\|^2=\|x^{k}-y^{k}\|^2 + \|y^{k}-x^{k+1}\|^2+2\langle x^{k}-y^{k}, y^{k}-x^{k+1}\rangle$, the last inequality is equivalent to $$\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2-\|x^{k}-y^{k}\|^2 - \|y^{k}-x^{k+1}\|^2+\gamma_k\|x^{k+1}-x^{k}\|^2 +2 \big\langle x^{k} -\alpha F(y^{k})-y^{k}, x^{k+1}-y^{k}\big\rangle.$$ The last inequality together with [\[eq:faip\]](#eq:faip){reference-type="eqref" reference="eq:faip"} yield $$\begin{gathered}
\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2-\|x^{k}-y^{k}\|^2 - \|y^{k}-x^{k+1}\|^2+\gamma_k\|x^{k+1}-x^{k}\|^2 \\+2\gamma_k\|y^{k}-x^{k}\|^2+ 2\alpha\big\langle F(x^{k})- F(y^{k}),x^{k+1}-y^{k}\big\rangle.\end{gathered}$$ Considering that $F$ is Lipschitz continuous on ${\cal C}$ with constant $L > 0$, we have $$\langle F(x^{k})- F(y^{k}),x^{k+1}-y^{k}\rangle\leq L \|x^{k}-y^{k}\|\|x^{k+1}-y^{k}\|,$$ which combined with the last inequality yields $$\begin{gathered}
\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2-\|x^{k}-y^{k}\|^2 - \|y^{k}-x^{k+1}\|^2+\gamma_k\|x^{k+1}-x^{k}\|^2 \\+2\gamma_k\|y^{k}-x^{k}\|^2+ 2\alpha L \|x^{k}-y^{k}\|\|x^{k+1}-y^{k}\|.\end{gathered}$$ or equivalently, $$\begin{gathered}
\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2- (1- \alpha^2 L^2-2\gamma_k) \|x^{k}-y^{k}\|^2 \\+\gamma_k\|x^{k+1}-x^{k}\|^2 -(\alpha L \|x^{k}-y^{k}\|-\|x^{k+1}-y^{k}\|)^2.\end{gathered}$$ Hence, we have $$\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2- (1- \alpha^2 L^2-2\gamma_k) \|x^{k}-y^{k}\|^2 +\gamma_k\|x^{k+1}-x^{k}\|^2.$$ Thus, taking into account that $x^{k+1} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\alpha F(y^k) \big)$, by applying Corollary [Corollary 7](#cr:ProjPropertyii){reference-type="ref" reference="cr:ProjPropertyii"} with $x^{+}=x^{k+1}$, $x=x^{k}$ and $\gamma=\gamma_k$ we obtain that $$\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2- (1- \alpha^2 L^2-2\gamma_k) \|x^{k}-y^{k}\|^2 + \dfrac{\alpha^2(1-\gamma_k+\alpha L)^2}{(1-\gamma_k)^4} \gamma_k\|F(x^k)\|^2.$$ Therefore, using [\[eq:constetanu\]](#eq:constetanu){reference-type="eqref" reference="eq:constetanu"} and considering that $0 \leq \gamma_k < \bar{\gamma}$, the desired inequality follows. ◻
**Theorem 9**. *Let ${\cal C}\subset {\mathbb R}^n$ be a nonempty and closed convex set and $(x^k)_{k\in{\mathbb N}}$ the sequence generated by Algorithm [\[Alg:ExtraGradMethod\]](#Alg:ExtraGradMethod){reference-type="ref" reference="Alg:ExtraGradMethod"}. Assume that $F: {\mathbb R}^n\to {\mathbb R}^n$ is a pseudo monotone operator on ${\cal C}$ with respect to ${\cal C}^*$ and Lipschitz continuous on ${\cal C}$ with constant $L > 0$. If $$\label{eq:ccs}
0<\alpha<\frac{\sqrt{1-2{\bar \gamma}}}{L},$$ then the sequence $(x^k)_{k\in{\mathbb N}}$ converges to a solution of the VIP(F,${\cal C}$).*
*Proof.* Let $x^*\in {\cal C}^*$ be an arbitrary solution of the VIP(F,${\cal C}$). The condition [\[eq:ccs\]](#eq:ccs){reference-type="eqref" reference="eq:ccs"} and $0<\bar{\gamma}<1/2$ imply that ${\bar \eta} >0$ and ${\bar \nu} >0$. Thus, it follows from Lemma [Lemma 8](#le:FejerProperty){reference-type="ref" reference="le:FejerProperty"} that $$\|x^{k+1}-x^*\|^2\leq \|x^k-x^*\|^2+ {\bar \nu} \gamma_k\|F(x^k)\|^2, \qquad k=1, 2, \ldots.$$ The last inequality together with [\[eq:ToleranceCond\]](#eq:ToleranceCond){reference-type="eqref" reference="eq:ToleranceCond"} implies that $(x^k)_{k\in{\mathbb N}}$ is quasi-Fejér convergent to ${\cal C}^*$. Considering that ${\cal C}^*$ is nonempty, item (i) of Lemma [Lemma 3](#le:fejer){reference-type="ref" reference="le:fejer"} implies that $(x^k)_{k\in{\mathbb N}}$ is bounded. Let ${\bar x}$ be a cluster point of $(x^k)_{k\in\mathbb{N}}$ and $(x^{k_j})_{j\in\mathbb{N}}$ a subsequence of $(x^k)_{k\in\mathbb{N}}$ such that $\lim_{j\to +\infty}x^{k_j}={\bar x}$. To continue the proof, keep in mind that Lemma [Lemma 8](#le:FejerProperty){reference-type="ref" reference="le:FejerProperty"} also implies that $${\bar \eta} \|x^{k}-y^{k}\|^2\leq \|x^k-x^*\|^2-\|x^{k+1}-x^*\|^2+ {\bar \nu} \gamma_k\|F(x^k)\|^2, \qquad k=1, 2, \ldots.$$ By summing both sides of the previous inequality over $k$ and using [\[eq:ToleranceCond\]](#eq:ToleranceCond){reference-type="eqref" reference="eq:ToleranceCond"}, we arrive at the conclusion that $${\bar \eta} \sum_{k=1}^{+\infty}\|x^{k}-y^{k}\|^2\leq \|x^{1}-x^*\|^2+ {\bar \nu} \sum_{k=1}^{+\infty}\gamma_k\|F(x^k)\|^2<+\infty.$$ Hence, we have $\lim_{k\to +\infty}\|x^{k}-y^{k}\|=0$. Thus, taking into account that $\lim_{j\to +\infty}x^{k_j}={\bar x}$, we conclude that $\lim_{j\to +\infty}y^{k_j}={\bar x}$. It follows from [\[eq:IntStep1\]](#eq:IntStep1){reference-type="eqref" reference="eq:IntStep1"} that $$y^{k_j} \in {\cal P}^{\gamma_{k_j}}_{\cal C}\big(x^{k_j}, x^{k_j}-\alpha F(x^{k_j}) \big).$$ Considering that $\gamma_{k_j} < \bar{\gamma}$, the last inclusion and Definition [Definition 2](#def:InexactProj){reference-type="ref" reference="def:InexactProj"} imply that $$\big\langle x^{k_j}-\alpha F(x^{k_j})-y^{k_j}, y-y^{k_j} \big\rangle \leq {\bar \gamma} \|y^{k_j}-x^{k_j}\|^2,\qquad \qquad ~\forall y \in {\cal C}.$$ Since $\lim_{j\to +\infty}x^{k_j}={\bar x}$ and $\lim_{j\to +\infty}y^{k_j}={\bar x}$, taking the limit in the previous inequality as $j$ tends to infinity yields $$\big\langle {\bar x}-\alpha F({\bar x})-{\bar x}, y-{\bar x}\big\rangle \leq {\bar \gamma}\|{\bar x}-{\bar x}\|^2,\qquad \qquad ~\forall y \in {\cal C},$$ which, by using that $\alpha>0$, is equivalent to $\big\langle F({\bar x}), y-{\bar x}\big\rangle \geq 0$, for all $y \in {\cal C}$. Hence, ${\bar x}\in {\cal C}^*$. Given that ${\bar x}$ is a cluster point of $(x^k)_{k\in\mathbb{N}}$, item (ii) of Lemma [Lemma 3](#le:fejer){reference-type="ref" reference="le:fejer"} implies that $\lim_{k\to +\infty}x^{k}={\bar x}$, and the proof is complete. ◻
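For illustration only, the sketches above can be exercised on a small monotone affine variational inequality over a box, monitoring solution quality through the natural residual $\|x-{\cal P}_{\cal C}(x-F(x))\|$, which vanishes exactly at solutions; the problem data and the box oracle below are our own choices and are not taken from the paper. This assumes the functions `inexact_projection` and `einexpm` sketched earlier are in scope.

```python
import numpy as np

# Toy data: F(x) = M x + q with M positive definite, so F is monotone (hence
# pseudo monotone) and Lipschitz continuous with constant ||M||; C = [0, 1]^n.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)
q = rng.standard_normal(n)
F = lambda x: M @ x + q
L = np.linalg.norm(M, 2)

lmo = lambda c: np.where(c > 0.0, 0.0, 1.0)        # argmin of <c, y> over the box
proj = lambda v, u, g: inexact_projection(v, u, lmo, g)

gamma_bar = 0.25
alpha = 0.9 * np.sqrt(1.0 - 2.0 * gamma_bar) / L   # step size condition of Theorem 9
a = [1.0 / (k + 1) ** 2 for k in range(500)]       # summable tolerance sequence

x_sol = einexpm(F, proj, np.zeros(n), alpha, gamma_bar, a)

residual = np.linalg.norm(x_sol - np.clip(x_sol - F(x_sol), 0.0, 1.0))
print("natural residual:", residual)
```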
# Extragradient inexact method with line search {#Se:ExtGradLineSerch}
In this section, we introduce an inexact variant of the extragradient method for VIP($F,{\cal C}$) with $F$ a pseudo monotone operator on ${\cal C}$ with respect to the solution set ${\cal C}^*$ of problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"}; see for example [@IusemSvaiter1997; @millanbello; @konov]. Instead of using a fixed step size, the method finds a suitable step size in each iteration by performing an Armijo type line search. It is worth noting that, like the classical extragradient method, the method performs just two projections onto the feasible set in each iteration. A full convergence analysis is provided, without any Lipschitz continuity assumption on the operator defining the variational inequality problem.
The proposed inexact version of the extragradient method with line search is stated as follows:
[\[Alg:ExtraGradMethodLS\]]{#Alg:ExtraGradMethodLS label="Alg:ExtraGradMethodLS"}
Let us go through the main features of the EInexPMLS. First, we must select some parameters that control the behaviour of the algorithm and are essential in our convergence analysis. The most important of these parameters is the upper bound ${\bar \gamma}$ for the error tolerance ${\gamma_k}$, which is connected to the line search parameter $\rho$. In step 2 of the algorithm, an inner procedure is used to compute $y^k$ as any feasible inexact projection of $x^k - \beta_k F(x^k)$ onto the feasible set ${\cal C}$ relative to $x^k$, i.e. [\[eq:ipyk\]](#eq:ipyk){reference-type="eqref" reference="eq:ipyk"}. Then, in step 3, the conceptual stopping criterion $y^{k}=x^{k}$ is evaluated at the current iteration. If this stopping criterion is not satisfied, a line search is performed in the segment between the points $x^k$ and $y^k$ in order to decrease the mapping $$(0, 1) \ni t\mapsto \langle F(x^k +t(y^{k}-x^k)) ,y^{k}-x^k \rangle.$$ In step 4, the point $z^k$ resulting from the line search is used to define the following half space $$\label{eq;HypSep}
H_k:=\Big\{x\in {\mathbb R}^n:~\langle F(z^k),x-z^k \rangle\leq 0\Big\},$$ whose boundary separates the current iterate $x^k$ from the solution set ${\cal C}^*$. Then, by applying Proposition [Proposition 2](#prop:0){reference-type="ref" reference="prop:0"}, the projection of $x^k$ onto the hyperplane $H_k$ is computed as follows $$\label{eq:etakc}
{\cal{P}}_{H_k}(x^k) =x^k-\lambda_k F(z^k), \qquad \quad \lambda_k:=-\frac{1}{\|F(z^{k})\|^2}\big{\langle} F( z^k) ,z^{k}-x^k \big{\rangle}.$$ Finally, using again an inner procedure, the next iterate $x^{k+1}$ is computed as any feasible inexact projection of ${\cal{P}}_{H_k}(x^k)$ onto the feasible set ${\cal C}$ relative to $x^k$. Thus, by using [\[eq:etakc\]](#eq:etakc){reference-type="eqref" reference="eq:etakc"}, we conclude that [\[eq:etakfs\]](#eq:etakfs){reference-type="eqref" reference="eq:etakfs"} is equivalently stated as follows $$\label{eq:etakns}
x^{k+1} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, {\cal{P}}_{H_k}(x^k)\big).$$ It is noteworthy that if $\gamma_k \equiv 0$, then Remark [Remark 1](#rem: welldef){reference-type="ref" reference="rem: welldef"} implies that the inexact projection is the exact one. Consequently, EInexPMLS corresponds to a version of the extragradient method addressed in [@IusemSvaiter1997]; see also [@millanbello; @buramillan].
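To make the previous description concrete, the following is a minimal Julia sketch of one EInexPMLS iteration. It is meant purely as an illustration (it is not the implementation referenced in the numerical section): the oracle `inexact_proj(x, u, gamma)`, assumed here to return a feasible inexact projection of `u` onto ${\cal C}$ relative to `x`, and all function names are ours.

``` julia
# Minimal illustrative sketch of one EInexPMLS iteration (not the authors' code).
# `inexact_proj(x, u, gamma)` is an assumed user-supplied oracle returning a
# feasible inexact projection of u onto C relative to x with tolerance gamma.
using LinearAlgebra

function einexpmls_step(F, inexact_proj, x, beta, gamma, sigma, alpha, rho)
    y = inexact_proj(x, x - beta * F(x), gamma)       # step 2: inexact projection
    y == x && return x                                # step 3: stopping criterion
    i = 0                                             # Armijo-type line search
    while dot(F(x + sigma * alpha^i * (y - x)), y - x) > rho * dot(F(x), y - x)
        i += 1
    end
    z = x + sigma * alpha^i * (y - x)                 # line search point z^k
    lambda = -dot(F(z), z - x) / norm(F(z))^2         # lambda_k, so that P_{H_k}(x) = x - lambda * F(z)
    return inexact_proj(x, x - lambda * F(z), gamma)  # step 4: next iterate
end
```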
**Remark 2**. *The stopping criterion is well defined, i.e., if $x^k=y^k$, then $x^k$ is a solution of Problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"}. In fact, $y^{k}=x^{k}$ and [\[eq:ipyk\]](#eq:ipyk){reference-type="eqref" reference="eq:ipyk"} imply that $x^{k} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\beta_k F(x^k) \big).$ Thus, it follows from Corollary [Corollary 6](#cr:equival){reference-type="ref" reference="cr:equival"} that $x^k\in {\cal C}^*$.*
Next, we show that Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"} is well defined, namely that there exists $i_{k}$ fulfilling [\[eq:algjk\]](#eq:algjk){reference-type="eqref" reference="eq:algjk"}.
**Proposition 10**. *Step 3 is well-defined, i.e., there exists $i_k$ satisfying [\[eq:algjk\]](#eq:algjk){reference-type="ref" reference="eq:algjk"}.*
*Proof.* Assume by contradiction that $\big{\langle} F {\big(}x^k+\sigma \alpha^{i} (y^k-x^k) {\big )},y^{k}-x^{k} \big{\rangle} > \rho \big{\langle} F(x^k), y^{k}-x^{k}\big{\rangle}$, for all $i\in {\mathbb N}$ and $y^{k}\neq x^{k}$. Since $F$ is continuous and $0<\alpha<1$, taking the limit in the last inequality as $i$ tends to infinity, we conclude that $\big{\langle} F(x^k) ,y^{k}-x^k \big{\rangle} \geq \rho \big{\langle} F(x^k), y^{k}-x^{k}\big{\rangle}.$ Thus, taking into account that $0<\rho<1$, the last inequality implies that $$\label{eq;fiwd1}
\big{\langle} F(x^k) ,y^{k}-x^k \big{\rangle} \geq 0.$$ Considering that $y^k\in {\cal P}^{\gamma_k}_{\cal C}(x^k,x^k-\beta_k F(x^k))$ and $x^k\in {\cal C}$, it follows from Definition [Definition 2](#def:InexactProj){reference-type="ref" reference="def:InexactProj"} that $$\langle x^k-\beta_kF(x^k)-y^k,x^k-y^k\rangle\leq \gamma_k\|y^k-x^k\|^2.$$ Since $0< {\hat \beta}\leq\beta_k$ and $0<\bar{\gamma}<1$, we can deduce from some algebraic manipulations in the preceding inequality that $$0\leq \big{\langle} F(x^k) ,y^{k}-x^k \big{\rangle} \leq \frac{\gamma_k-1}{\beta_k}\|y^k-x^k\|^2<0,$$ which is a contradiction. Therefore, there exists $i_{k}$ satisfying [\[eq:algjk\]](#eq:algjk){reference-type="eqref" reference="eq:algjk"}. ◻
Let *$(x^{k})_{k\in \mathbb{N}}$ be the sequence generated by Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"}*. Since $x^{1}\in {\cal C}$ and, for any $k \in \mathbb{N}$, [\[eq:etakfs\]](#eq:etakfs){reference-type="eqref" reference="eq:etakfs"} implies that $x^{k+1}$ is a feasible inexact projection onto ${\cal C}$, we conclude that $(x^k)_{k\in\mathbb{N}}\subset {\cal C}$. Consequently, since ${\cal C}$ is a closed set, every cluster point of $(x^k)_{k\in\mathbb{N}}$, if any, belongs to ${\cal C}$.
## Convergence analysis
So far, we have shown that EInexPMLS generates a sequence $(x^k)_{k\in\mathbb{N}}$ contained in the set ${\cal C}$. In this section we will show that $(x^k)_{k\in\mathbb{N}}$ converges to a solution of VIP(F, ${\cal C}$). To this purpose, we will begin by establishing a few initial results. First, we show that the boundary of the half space $H_k$ defined as in [\[eq;HypSep\]](#eq;HypSep){reference-type="eqref" reference="eq;HypSep"} separates the current iterate $x^k$ from the solution set ${\cal C}^*$ whenever $x^k\notin {\cal C}^*$.
**Proposition 11**. *Let $H_k$ be defined as in [\[eq;HypSep\]](#eq;HypSep){reference-type="eqref" reference="eq;HypSep"}. Then, $x^k\in H_k$ if and only if $x^k\in \mathcal{C}^*$.*
*Proof.* First, we assume that $x^k\in H_k$. Thus, taking into account [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"} we conclude that $$0\geq\langle F(z^k),x^k-z^k\rangle=\sigma\alpha^{i_k}\langle F(z^k),x^k-y^k \rangle,$$ which implies that $\langle F(z^k),y^k-x^k \rangle\geq 0$. Hence, by using [\[eq:algjk\]](#eq:algjk){reference-type="eqref" reference="eq:algjk"} and [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"}, we conclude that $\big{\langle} F(x^k), y^{k}-x^{k}\big{\rangle} \geq 0$. Therefore, using [\[eq:ipyk\]](#eq:ipyk){reference-type="eqref" reference="eq:ipyk"} together with Corollary [Corollary 6](#cr:equival){reference-type="ref" reference="cr:equival"} we conclude that $x^k\in\mathcal{C}^*$.
Conversely, we assume that $x^k\in \mathcal{C}^*$. Since $x^k$ and $y^k$ belong to ${\cal C}$, it follows from [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"} that $z^k\in \mathcal{C}$ by convexity of $\mathcal{C}$, for all $k\in\mathbb{N}$. Thus, due to $x^k\in \mathcal{C}^*$, we have $\langle F(x^k),z^k -x^k\rangle\geq 0$. Since $F$ is a pseudo-monotone operator, $\langle F({x}^k),z^k-x^k\rangle\geq 0$ implies $\langle F(z^k),z^k-x^k\rangle\geq 0$, which means that $x^k\in H_k$. ◻
**Proposition 12**. *The following inequality holds: $$\label{eq:tolim}
\big{\langle} F(x^k),x^k-y^k\big{\rangle}\geq \frac{\max \{\rho,\sqrt{3}-1\}}{\bar{\beta}}\|y^k-x^k\|^2, \qquad k=1, 2, \ldots.$$*
*Proof.* Keeping in mind that $y^k\in {\cal P}^{\gamma_k}_{\cal C}(x^k,x^k-\beta_k F(x^k))$ and $x^k\in {\cal C}$, from Definition [Definition 2](#def:InexactProj){reference-type="ref" reference="def:InexactProj"} we have $$\langle x^k-\beta_kF(x^k)-y^k,x^k-y^k\rangle\leq \gamma_k\|y^k-x^k\|^2,$$ which, after some algebraic manipulation, is rewritten as follows $$\label{eq:Bfbpr1}
\langle F(x^k),x^k-y^k \rangle\geq \frac{1-\gamma_k}{\beta_k}\|y^k-x^k\|^2.$$ Considering that $0<\beta_k\leq{\bar \beta}$ and $0\leq \gamma_k<\bar{\gamma}<\min\big\{1-\rho, 2-\sqrt{3}\big\}$, we conclude that $$\frac{1-\gamma_k}{\beta_k}\geq \frac{\max \{\rho,\sqrt{3}-1\}}{\bar{\beta}}.$$ The combination of [\[eq:Bfbpr1\]](#eq:Bfbpr1){reference-type="eqref" reference="eq:Bfbpr1"} with the previous inequality yields the desired inequality. ◻
Next, we are going to establish two important inequalities needed to show the convergence of Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"}.
**Lemma 13**. *Let ${\cal C}\subset {\mathbb R}^n$ be a nonempty and closed convex set and $(x^k)_{k\in{\mathbb N}}$ the sequence generated by Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"}. Assume that $F: {\mathbb R}^n\to {\mathbb R}^n$ is a pseudo monotone operator on ${\cal C}$ with respect to the solution set ${\cal C}^*$ of problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"} and $x^k\notin {\cal C}^*$, for all $k=1,2,\ldots$. Then, for any $x^*\in {\cal C}^*$, there holds: $$\label{eq:algf14bg}
\| x^{k+1}-x^{*} \|^{2} \leq \|x^k-x^{*}\|^{2}-\frac{1}{(1-{\bar \gamma})^2}\left({\bar \gamma}^2-4{\bar \gamma}+1\right)\lambda_k^2 \|F(z^k)\|^2, \qquad k=1, 2, \ldots.$$ As a consequence, $(x^k)_{k\in{\mathbb N}}$ is Fejér convergent to ${\cal C}^*$, i.e., for any $x^*\in {\cal C}^*$ there holds $$\label{eq:FejerPropertyLs}
\| x^{k+1}-x^{*} \| \leq \|x^k-x^{*}\|, \qquad k=1, 2, \ldots.$$*
*Proof.* Take $x^*\in \mathcal{C}^*$. Denote the boundary of $H_k$ by $L_k$, which is given by $$\label{eq;hpsb}
L_k:=\{x\in {\mathbb R}^n:~\langle F(z^k),x-z^k \rangle=0\}.$$ Since $x^k$ and $y^k$ belong to the convex set ${\cal C}$, we obtain from [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"} that $z^k\in \mathcal{C}$, for all $k\in\mathbb{N}$. Thus, since $F$ is pseudo monotone on ${\cal C}$ with respect to the solution set ${\cal C}^*$ of problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"}, we have $$\big\langle F(z^k), z^k-{x^*} \big\rangle\geq 0.$$ The last inequality and the definition of $H_k$ in [\[eq;HypSep\]](#eq;HypSep){reference-type="eqref" reference="eq;HypSep"} imply that $x^*\in H_k$. Thus, we conclude that $$\label{eq:alg12}
{\cal{P}}_{H_k}(x^*)=x^*.$$ We know as well that applying Proposition [Proposition 4](#pr:snonexp){reference-type="ref" reference="pr:snonexp"} with $v=x^k-\lambda_k F(z^k)$, $u=x^{k}$, $\gamma=\gamma_k$, ${w}=x^{k+1}$ and ${\bar w}={\bar v}=x^*$ we obtain that $$\|x^{k+1}-x^{*}\|^{2} \leq \|x^k-\lambda_k F(z^k) -x^{*}\|^{2}- \|(x^k-\lambda_k F(z^k)-x^{*})-({x^{k+1}}-x^*)\|^2+2\gamma_k\|x^{k+1}-x^k \|^2,$$ which implies that $$\label{eq:alg13}
\|x^{k+1}-x^{*}\|^{2} \leq \|x^k-\lambda_k F(z^k) -x^{*}\|^{2}+2\gamma_k\|x^{k+1}-x^k \|^2.$$ Using [\[eq:alg12\]](#eq:alg12){reference-type="eqref" reference="eq:alg12"} and the item $(ii)$ of Lemma [Lemma 1](#le:projeccion){reference-type="ref" reference="le:projeccion"} we have $$\|{\cal{P}}_{H_k}(x^k)- x^*\|^{2}\leq \|x^k-x^{*}\|^{2}-\|{\cal{P}}_{H_k}(x^k)-x^k\|^2.$$ Since $x^k\notin {\cal C}^*$, Proposition [Proposition 11](#prop:hk){reference-type="ref" reference="prop:hk"} implies that $x^k \notin H_k$. Hence, the last inequality together with the first equality in [\[eq:etakc\]](#eq:etakc){reference-type="eqref" reference="eq:etakc"} yield $$\|x^k-\lambda_k F(z^k)- x^*\|^{2}\leq \|x^k-x^{*}\|^{2}-\lambda_k^2 \|F(z^k)\|^2.$$ As a result of combining the last inequality with [\[eq:alg13\]](#eq:alg13){reference-type="eqref" reference="eq:alg13"}, we arrive at the conclusion that $$\label{eq:alg15}
\| x^{k+1}-x^{*} \|^{2} \leq \|x^k-x^{*}\|^{2}-\lambda_k^2 \|F(z^k)\|^2+2\gamma_k\|x^{k+1}-x^k\|^2.$$ Since $x^{k+1} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\lambda_k F(z^k) \big)$, applying item (ii) of Lemma [Lemma 5](#Le:ProjProperty){reference-type="ref" reference="Le:ProjProperty"} with $\gamma=\gamma_k$, $x=x^k$, $z=z^k$, $\alpha=\lambda_k$ and $w(\alpha)=x^{k+1}$ we obtain that $$\| x^{k+1}-x^k\|\leq\dfrac{\lambda_k}{1-\gamma_k}\|F(z^k)\|,$$ which combined with [\[eq:alg15\]](#eq:alg15){reference-type="eqref" reference="eq:alg15"} yields $$\| x^{k+1}-x^{*} \|^{2} \leq \|x^k-x^{*}\|^{2}-\lambda_k^2 \|F(z^k)\|^2+2\gamma_k\dfrac{\lambda_k^2}{(1-\gamma_k)^2}\|F(z^k)\|^2,$$ or equivalently, $$\label{eq:algf14}
\| x^{k+1}-x^{*} \|^{2} \leq \|x^k-x^{*}\|^{2}-\frac{1}{(1-\gamma_k)^2}\left(\gamma_k^2-4\gamma_k+1\right)\lambda_k^2 \|F(z^k)\|^2.$$ Since $0<\bar{\gamma}< 2-\sqrt{3}$, the function $(0, {\bar \gamma}] \ni \gamma \mapsto \left(\gamma^2-4\gamma+1\right)/(1-\gamma)^2$ is decreasing and positive, so that its value at any $\gamma_k<{\bar \gamma}$ is bounded below by its value at ${\bar \gamma}$; hence the inequality [\[eq:algf14bg\]](#eq:algf14bg){reference-type="eqref" reference="eq:algf14bg"} follows from [\[eq:algf14\]](#eq:algf14){reference-type="eqref" reference="eq:algf14"}. As a consequence, [\[eq:FejerPropertyLs\]](#eq:FejerPropertyLs){reference-type="eqref" reference="eq:FejerPropertyLs"} follows from [\[eq:algf14bg\]](#eq:algf14bg){reference-type="eqref" reference="eq:algf14bg"} and the proof is complete. ◻
**Theorem 14**. *Let ${\cal C}\subset {\mathbb R}^n$ be a nonempty and closed convex set and $(x^k)_{k\in{\mathbb N}}$ the sequence generated by Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"}. Assume that $F: {\mathbb R}^n\to {\mathbb R}^n$ is a pseudo monotone operator on ${\cal C}$ with respect to the solution set ${\cal C}^*$ of problem [\[eq:mp\]](#eq:mp){reference-type="ref" reference="eq:mp"}. If ${\cal C}^*\neq \varnothing$, then Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"} either ends at iteration $k$, in which case $x^k\in {\cal C}^*$, or generates an infinite sequence $(x^k)_{k\in{\mathbb N}}$ that converges to a point belonging to ${\cal C}^*$.*
*Proof.* First, we assume that Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"} ends at iteration $k$. In this case, we have $y^{k}=x^{k}$ and Remark [Remark 2](#rem:stop){reference-type="ref" reference="rem:stop"} implies that $x^k\in {\cal C}^*$. Now, we assume that the sequence $(x^k)_{k\in{\mathbb N}}$ is infinite. Hence, we have $x^k\notin {\cal C}^*$, for all $k=1, 2, \ldots$.
Since $(x^k)_{k\in{\mathbb N}}$ satisfies [\[eq:FejerPropertyLs\]](#eq:FejerPropertyLs){reference-type="eqref" reference="eq:FejerPropertyLs"} in Lemma [Lemma 13](#le:FejerPropertyLS){reference-type="ref" reference="le:FejerPropertyLS"}, it also satisfies Definition [Definition 1](#def:fejer){reference-type="ref" reference="def:fejer"}. Thus, due to ${\cal C}^*\neq \varnothing$, it follows from item $(ii)$ of Lemma [Lemma 3](#le:fejer){reference-type="ref" reference="le:fejer"} that $(x^k)_{k\in{\mathbb N}}$ is bounded. Using [\[eq:algf14bg\]](#eq:algf14bg){reference-type="eqref" reference="eq:algf14bg"} of Lemma [Lemma 13](#le:FejerPropertyLS){reference-type="ref" reference="le:FejerPropertyLS"} we have $$\label{eq:algf14bgls}
0<\frac{1}{(1-{\bar \gamma})^2}\left({\bar \gamma}^2-4{\bar \gamma}+1\right)\lambda_k^2 \|F(z^k)\|^2 \leq \|x^k-x^{*}\|^{2}-\| x^{k+1}-x^{*} \|^{2}, \qquad k=1, 2, \ldots.$$ On the other hand, [\[eq:FejerPropertyLs\]](#eq:FejerPropertyLs){reference-type="eqref" reference="eq:FejerPropertyLs"} implies that the sequence $(\|x^k-x^*\|)_{k\in{\mathbb N}}$ is monotone non-increasing and bounded from below. Thus, $(\|x^k-x^*\|)_{k\in{\mathbb N}}$ converges. Hence, taking the limit in [\[eq:algf14bgls\]](#eq:algf14bgls){reference-type="eqref" reference="eq:algf14bgls"} as $k$ tends to infinity, we have $\lim_{k\to +\infty}\lambda_k \|F(z^k)\|=0.$ And, in view of [\[eq:etakfs\]](#eq:etakfs){reference-type="eqref" reference="eq:etakfs"} we conclude that $$\label{eq:etakmth1}
\lim_{k\to +\infty}\lambda_k \|F(z^k)\|=\lim_{k\to +\infty} \frac{1}{\|F(z^{k})\|}\big{\langle} F( z^k) ,x^{k}-z^k \big{\rangle}=0.$$ Since $y^{k} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\beta_k F(x^k) \big)$, applying item $(i)$ of Corollary [Corollary 7](#cr:ProjPropertyii){reference-type="ref" reference="cr:ProjPropertyii"} with $\gamma=\gamma_k$, $x=x^k$, $\alpha=\beta_k$ and $y=y^{k}$ we obtain that $$\|y^{k}-x^{k}\|\leq\dfrac{\beta_k}{1-\gamma_k}\|F(x^k)\|.$$ Because $0< {\hat \beta}< \beta_k<{\bar \beta}$, $0 \leq \gamma_k < \bar{\gamma}$ and $(x^k)_{k\in{\mathbb N}}$ is bounded, the latter inequality implies that $(y^k)_{k\in{\mathbb N}}$ is bounded. Hence, it follows from [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"} that $(z^k)_{k\in{\mathbb N}}$ is also bounded. In addition, since $F$ is continuous, we conclude that $(F(z^k))_{k\in{\mathbb N}}$ is bounded. Thus, from [\[eq:etakmth1\]](#eq:etakmth1){reference-type="eqref" reference="eq:etakmth1"} we have $\lim_{k\to +\infty} \big{\langle} F( z^k) ,z^{k}-x^k \big{\rangle}=0.$ Therefore, it follows from the last equality and [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"} that $$\label{eq:etakmth2}
\lim_{k\to +\infty} \sigma \alpha^{i_k}\big{\langle} F( z^k) ,y^{k}-x^k \big{\rangle}=0.$$ Since the sequences $(x^k)_{k\in{\mathbb N}}\subset {\cal C}$, $(y^k)_{k\in{\mathbb N}}\subset {\cal C}$ and $(z^k)_{k\in{\mathbb N}}\subset {\cal C}$ are bounded, we can take subsequences $(x^{k_j})_{j\in{\mathbb N}}$, $(y^{k_j})_{j\in{\mathbb N}}$ and $(z^{k_j})_{j\in{\mathbb N}}$ of them, respectively, and ${\bar x}\in {\cal C}$, ${\bar y}\in {\cal C}$ and ${\bar z}\in {\cal C}$ such that $\lim_{j\to+\infty}x^{k_j}={\bar x}$, $\lim_{j\to+\infty}y^{k_j}={\bar y}$ and $\lim_{j\to+\infty}z^{k_j}={\bar z}$. Furthermore, due to $0<\alpha<1$, $0 \leq \gamma_k < \bar{\gamma}$ and $0< {\hat \beta}< \beta_k<{\bar \beta}$ for all $k\in{\mathbb N}$, we can also assume without loss of generality that $\lim_{j \to +\infty} \alpha^{i_{k_j}} = \bar{\alpha} \in [0,1]$, $\lim_{j \to +\infty} \gamma_{k_j} = {\hat \gamma} \leq {\bar \gamma}$ and $\lim_{j \to +\infty} \beta_{k_j} ={\tilde \beta}\geq {\hat \beta}$. We have two possibilities for $\bar{\alpha}$: $\bar{\alpha} > 0$ or $\bar{\alpha} = 0$.
First we assume that $\bar{\alpha} > 0$. In this case, it follows from [\[eq:etakmth2\]](#eq:etakmth2){reference-type="eqref" reference="eq:etakmth2"} that $$0=\lim_{j\to +\infty} { \sigma \alpha^{i_{k_j}}}\big{\langle} F( z^{k_j}) ,y^{k_j}-x^{k_j} \big{\rangle}=\sigma{\bar \alpha}\big{\langle} F({\bar z}) ,{\bar y}-{\bar x} \big{\rangle}.$$ Because we are assuming that ${\bar \alpha} > 0$, we conclude that $\langle F({\bar z}) ,{\bar y}-{\bar x} \rangle=0$. Using [\[eq:algjk\]](#eq:algjk){reference-type="eqref" reference="eq:algjk"} and [\[eq:ipzk\]](#eq:ipzk){reference-type="eqref" reference="eq:ipzk"} together with Proposition [Proposition 12](#prop:Bfb){reference-type="ref" reference="prop:Bfb"}, we conclude that
$$\big{\langle} F(z^{k_j} ) ,y^{k_j}-x^{k_j} \big{\rangle} \leq \rho \big{\langle} F(x^{k_j} ) ,y^{k_j}-x^{k_j} \big{\rangle}\leq -\rho \frac{\max \{\rho,\sqrt{3}-1\}}{\bar{\beta}}\|y^{k_j}-x^{k_j} \|^2.$$ Taking the limit in the previous inequality as $j$ tends to infinity and taking into account $\lim_{j\to+\infty}x^{k_j}={\bar x}$, $\lim_{j\to+\infty}y^{k_j}={\bar y}$ and $\langle F({\bar z}) ,{\bar y}-{\bar x} \rangle=0$, we conclude that ${\bar y}={\bar x}$. Considering that $y^{k} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\beta_k F(x^k) \big)$, it follows from Definition [Definition 2](#def:InexactProj){reference-type="ref" reference="def:InexactProj"} that $$\big\langle x^{k_j}-\beta_{k_j} F(x^{k_j})-y^{k_j}, y-y^{k_j} \big\rangle \leq \gamma_{k_j} \|y^{k_j}-x^{k_j}\|^2, \qquad \quad ~\forall y \in {\cal C}.$$ Thus, taking the limit in the previous inequality as $j$ tends to infinity, using that ${\bar y}={\bar x}$ and $\lim_{j \to +\infty} \beta_{k_j} ={\tilde \beta}>0$, we obtain that $$\left\langle F({\bar x}), y-{\bar x} \right\rangle\geq 0 , \qquad \forall~y\in {\cal C},$$ which implies that ${\bar x}\in {\cal C}^*$. Since ${\bar x}$ is a cluster point of $(x_k)_{k\in\mathbb{N}}$ and the sequence $(x_k)_{k\in\mathbb{N}}$ is Fejér convergent to ${\cal C}^*$, item (ii) of Lemma [Lemma 3](#le:fejer){reference-type="ref" reference="le:fejer"} implies that $\lim_{k\to +\infty}x_{k}={\bar x}$.
Now, let us assume that $\bar{\alpha} = 0$. We proceed to prove that in this case, $(x^k)_{k\in{\mathbb N}}$ likewise converges to some point belonging to the set ${\cal C}^*$. For that, we consider the auxiliary sequence $({\hat z}_k)_{k\in\mathbb{N}}$ defined by $$\label{eq:ipzkpf}
{\hat z}^{k}:=x^{k}+\sigma \frac{\alpha^{i_k}}{\alpha}(y^{k}-x^k) , \qquad \qquad k=1,2, \ldots,$$ where $i_{k}$ is defined in [\[eq:algjk\]](#eq:algjk){reference-type="eqref" reference="eq:algjk"}. Since $\lim_{j\to+\infty}x^{k_j}={\bar x}$, $\lim_{j\to+\infty}y^{k_j}={\bar y}$ and $\lim_{j \to +\infty} \alpha^{i_{k_j}} = \bar{\alpha}=0$, it follows from [\[eq:ipzkpf\]](#eq:ipzkpf){reference-type="eqref" reference="eq:ipzkpf"} that $\lim_{j\to +\infty} {\hat z}^{k_j}={\bar x}$. The definition of ${i_{k_j}}$ in [\[eq:algjk\]](#eq:algjk){reference-type="eqref" reference="eq:algjk"} implies that $$\label{eq:onls}
\big{\langle} F {\big(}{\hat z}^{k_j} {\big )},y^{k_j}-x^{k_j}\big{\rangle} > \rho \big{\langle} F(x^{k_j}), y^{k_j}-x^{k_j}\big{\rangle}.$$ Thus, taking the limit in [\[eq:onls\]](#eq:onls){reference-type="eqref" reference="eq:onls"} as $j$ tends to infinity and using the continuity of $F$ together with $\lim_{j\to +\infty} {\hat z}^{k_j}={\bar x}$, we obtain that $\big{\langle} F({\bar x}) ,{\bar y}-{\bar x} \big{\rangle} \geq \rho \big{\langle} F({\bar x}) ,{\bar y}-{\bar x} \big{\rangle}$, and since $\rho<1$ we conclude that $$\label{eq:opjkpfn}
\big{\langle} F({\bar x}) ,{\bar y}-{\bar x} \big{\rangle} \geq 0.$$ Given that $y^{k} \in {\cal P}^{\gamma_k}_{\cal C}\big(x^{k}, x^k-\beta_k F(x^k) \big)$, it follows from Definition [Definition 2](#def:InexactProj){reference-type="ref" reference="def:InexactProj"} that $$\big\langle x^{k_j}-\beta_{k_j} F(x^{k_j})-y^{k_j}, y-y^{k_j} \big\rangle \leq \gamma_{k_j} \|y^{k_j}-x^{k_j}\|^2, \qquad \quad ~\forall y \in {\cal C}.$$ Taking the limit in the previous inequality as $j$ goes to infinity, and using that ${\hat \gamma} \leq {\bar \gamma}$, we have $$\label{eq:Projwmthsc}
\langle {\bar x}-{\bar y}, y-{\bar y}\rangle -{\tilde \beta} \big\langle F({\bar x}), y-{\bar y}\big\rangle \leq {\bar \gamma} \|{\bar y}-{\bar x}\|^2, \qquad \quad ~\forall y \in {\cal C}.$$ Taking $y={\bar x}\in {\cal C}$ in the last inequality and performing some algebraic manipulations yields $$\label{eq:Projthscy}
{\tilde \beta} \big\langle F({\bar x}), {\bar y}-{\bar x}\big\rangle\leq ({\bar \gamma}-1) \|{\bar y}-{\bar x}\|^2.$$ Combining [\[eq:opjkpfn\]](#eq:opjkpfn){reference-type="eqref" reference="eq:opjkpfn"} with the latter inequality we obtain that $(1-{\bar \gamma}) \|{\bar y}-{\bar x}\|^2\leq 0$. Hence, because [\[eq:bargamma\]](#eq:bargamma){reference-type="eqref" reference="eq:bargamma"} implies that $1-{\bar \gamma}>0$, we conclude that ${\bar y}={\bar x}$. Therefore, due to ${\tilde \beta}>0$ and ${\bar y}={\bar x}$, it follows from [\[eq:Projwmthsc\]](#eq:Projwmthsc){reference-type="eqref" reference="eq:Projwmthsc"} that $$\big\langle F({\bar x}), y-{\bar x}\big\rangle \geq 0, \qquad \quad ~\forall y \in {\cal C},$$ which also implies that ${\bar x}\in {\cal C}^*$. Again, because ${\bar x}$ is a cluster point of $(x_k)_{k\in\mathbb{N}}$ and the sequence $(x_k)_{k\in\mathbb{N}}$ is Fejér convergent to ${\cal C}^*$, item (ii) of Lemma [Lemma 3](#le:fejer){reference-type="ref" reference="le:fejer"} implies that $\lim_{k\to +\infty}x_{k}={\bar x}$ and the proof is concluded. ◻
# Numerical Results
In order to demonstrate the behaviour of the proposed algorithms we now present the results of numerical experiments. We implemented Algorithms [\[Alg:ExtraGradMethod\]](#Alg:ExtraGradMethod){reference-type="ref" reference="Alg:ExtraGradMethod"} and [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"} and applied them to two test problems adapted from [@buramillan]. In both cases we slightly modify the feasible set $C$ to be the unit ball of the $10$-norm, defined by $C=\{x\in \mathbb{R}^{2}: {x_{1}}^{10}+{x_{2}}^{10}\leq 1\}$. Projections onto this set are more challenging than projections onto the feasible sets of the original problems (the Euclidean ball and the set $[0,1]\times[0,1]$). The algorithms were implemented in the Julia language. The code can be obtained from <https://github.com/ugonj/extragradient>.
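To indicate how such inexact projections can be computed in practice, the following Julia sketch (ours, not the code from the repository above) uses the Frank-Wolfe method: the Frank-Wolfe gap of the objective $\tfrac12\|w-u\|^2$ certifies the inequality $\langle u-w, y-w\rangle \leq \gamma\|w-x\|^2$ for all $y\in C$ used throughout the proofs, and the linear minimization oracle over the $p$-norm ball is the closed-form Hölder-dual vector. Function names and default parameters are our own choices.

``` julia
# Illustrative sketch (not the repository code): feasible inexact projection of
# u onto the p-norm unit ball C, relative to a point x in C, via Frank-Wolfe.
# The loop stops once the Frank-Wolfe gap certifies
#   <u - w, y - w> <= gamma * ||w - x||^2  for all y in C,
# or after maxit iterations.
using LinearAlgebra

# Linear minimization oracle: argmin_{||s||_p <= 1} <c, s>, given by the
# Hölder-dual vector with q = p / (p - 1).
function lmo_pball(c; p = 10)
    q = p / (p - 1)
    nc = norm(c, q)
    nc == 0 && return zero(c)
    return -sign.(c) .* (abs.(c) ./ nc) .^ (q - 1)
end

function fw_inexact_projection(u, x, gamma; p = 10, maxit = 100_000)
    w = float.(x)                        # x is assumed to belong to C
    for t in 1:maxit
        g = w - u                        # gradient of 0.5 * ||w - u||^2
        s = lmo_pball(g; p = p)
        gap = dot(g, w - s)              # = max_{y in C} <u - w, y - w>
        gap <= gamma * norm(w - x)^2 && break
        w = w + (2 / (t + 2)) * (s - w)  # standard Frank-Wolfe step
    end
    return w
end
```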
## Lipschitz operator
In this subsection, we illustrate the convergence of Algorithm [\[Alg:ExtraGradMethod\]](#Alg:ExtraGradMethod){reference-type="ref" reference="Alg:ExtraGradMethod"} on a problem defined by a Lipschitz continuous operator.
Consider the operator $$T_1(x) = \begin{bmatrix}-1&-1\\1&-1\end{bmatrix}x + \begin{bmatrix}3/2\\1/2\end{bmatrix}.$$ We applied Algorithm [\[Alg:ExtraGradMethod\]](#Alg:ExtraGradMethod){reference-type="ref" reference="Alg:ExtraGradMethod"} to VIP($T_1$,$C$). The iterates are depicted in Figure [\[fig:algonols\]](#fig:algonols){reference-type="ref" reference="fig:algonols"}.
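For illustration only, a minimal driver for this experiment could look as follows; it is our sketch rather than the repository code, it reuses the `fw_inexact_projection` routine sketched above, the second projection step follows the classical extragradient template, and the parameter values $\alpha=0.21$, $\gamma=0.106$ are taken from the table below.

``` julia
# Our sketch of a fixed step-size inexact extragradient run on VIP(T_1, C),
# reusing fw_inexact_projection from the sketch above.
using LinearAlgebra

T1(x) = [-1.0 -1.0; 1.0 -1.0] * x + [1.5, 0.5]

function inexact_extragradient(F, x0; alpha = 0.21, gamma = 0.106,
                               maxit = 1_000, tol = 1e-8)
    x = float.(x0)
    for _ in 1:maxit
        y = fw_inexact_projection(x - alpha * F(x), x, gamma)
        norm(y - x) <= tol && break      # conceptual stopping criterion
        x = fw_inexact_projection(x - alpha * F(y), x, gamma)
    end
    return x
end

xapprox = inexact_extragradient(T1, [0.0, 0.0])
```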
Table [1](#tab:resultslipschitz){reference-type="ref" reference="tab:resultslipschitz"} shows how the algorithm performed for various values of $\alpha$ and $\gamma$. The third column reports the number of steps taken by the extragradient algorithm to reach the solution, and the last column the total number of linear searches performed by the Frank-Wolfe method. It is interesting to compare the case when $\gamma$ is small (say, $\gamma=0.01$) to the cases when $\gamma$ is larger: it can be seen that although the algorithm takes the same number of steps to reach the solution, when $\gamma\approx 0$ (that is, when the projection is almost exact) significantly more Frank-Wolfe iterations are performed. In other words, since performing approximate projections does not increase the total number of steps in the extragradient method, it is beneficial to use them, as each step of the method then requires less work.
$\mathbf{\alpha}$ $\mathbf{\gamma}$ N. steps N. linear searches
------------------- ------------------- ---------- --------------------
0.01                0.01                109        1.0317e7
0.01                0.106               109        6.62431e6
0.01                0.49                109        6.57129e6
0.11                0.01                16         12997
0.11                0.106               16         1085
0.11                0.394               16         966
0.21                0.01                11         2444
0.21                0.106               11         237
0.21                0.394               11         239
0.31                0.01                9          935
0.31                0.106               9          129
0.31                0.298               9          126
0.41                0.01                9          476
0.41                0.106               9          113
: Behaviour of the algorithm for various values of $\alpha$ and $\gamma$.
## Non-Lipschitz operator
Let $t(x_1,x_2)=(x_1+\sqrt{x_1^2+4x_2})/2$ and define $T_2(x_1,x_2) = -\frac{t(x_1,x_2)}{1+t(x_1,x_2)}(1,1)$. The operator $T_2$ is quasimonotone, and it is pseudomonotone with respect to the solution set, which consists of the unique solution $(1,1)$ (see [@buramillan]). However, it is not Lipschitz continuous. We applied Algorithm [\[Alg:ExtraGradMethodLS\]](#Alg:ExtraGradMethodLS){reference-type="ref" reference="Alg:ExtraGradMethodLS"} on VIP($T_2$,$C$), with initial point at $(0,1)$. The iterates are depicted in Figure [\[fig:algonols\]](#fig:algonols){reference-type="ref" reference="fig:algonols"}.
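For completeness, a direct transcription of $T_2$ in the same style as the sketches above reads as follows; the domain remark in the comment is ours.

``` julia
# Our transcription of the operator T_2.  Note that the square root requires
# x[1]^2 + 4 * x[2] >= 0, which holds in particular when x[2] >= 0
# (for example at the initial point (0, 1)).
t(x) = (x[1] + sqrt(x[1]^2 + 4 * x[2])) / 2
T2(x) = -(t(x) / (1 + t(x))) .* [1.0, 1.0]
```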
# Conclusions {#Sec:Conclusions}
In this paper, we investigated the extragradient method for solving variational inequality problems using inexact projections onto the feasible set. We expect that our study can contribute to further research on the subject, particularly for solving large-scale problems where the computational effort of each iteration is associated with projections onto the feasible set. Indeed, the idea of employing inexact rather than exact projections is very attractive from a computational perspective. It is worth noting that the Frank-Wolfe approach has a low computational cost per iteration, resulting in good computational performance on various types of compact sets, as reported in [@GarberHazan2015; @Jaggi2013]. The search for efficient methods that generate inexact projections, such as the Frank-Wolfe method, is a subject that deserves further attention.
# Acknowledgements
The first and last authors were supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602).
The second author was supported in part by CNPq grant 304666/2021-1.
[^1]: [r.diazmillan\@deakin.edu.au]([email protected]), School of Information Technology, Deakin University, Geelong, Australia
[^2]: [orizon\@ufg.br]([email protected]), IME, Universidade Federal de Goiás, Goiânia, Brazil
[^3]: [j.ugon\@deakin.edu.au]([email protected]), School of Information Technology, Deakin University, Geelong, Australia
| arxiv_math | {
"id": "2309.00648",
"title": "Extragradient method with feasible inexact projection to variational\n inequality problem",
"authors": "R.D\\'iaz Mill\\'an, O.P. Ferreira and J. Ugon",
"categories": "math.OC",
"license": "http://creativecommons.org/licenses/by-sa/4.0/"
} |
---
abstract: |
While the splinter property is a local property for Noetherian schemes in characteristic zero, Bhatt observed that it imposes strong conditions on the global geometry of proper schemes in positive characteristic. We show that if a proper scheme over a field of positive characteristic is a splinter, then its Nori fundamental group is trivial and its Kodaira dimension is negative. In another direction, Bhatt also showed that any splinter in positive characteristic is a derived splinter. We ask whether the splinter property is a derived-invariant for smooth projective varieties in positive characteristic and give a positive answer for varieties with big anticanonical divisor. For that purpose, we introduce the notion of $\mathcal{O}$-equivalence and show that the derived splinter property for pseudo-rational excellent schemes of finite type and separated over a fixed Noetherian base is preserved under $\mathcal{O}$-equivalence. Finally, we show that global $F$-regularity is a derived-invariant for smooth projective varieties in positive characteristic.
address: Fakultät für Mathematik, Universität Bielefeld, D-33501 Bielefeld, Germany
author:
- Johannes Krah
- Charles Vial
bibliography:
- references.bib
title: On proper splinters in positive characteristic
---
# Introduction
A Noetherian scheme $X$ is a *splinter* if for all finite surjective morphisms $f\colon Z \to X$ the map $\mathcal{O}_X \to f_*\mathcal{O}_Z$ splits in the category of coherent $\mathcal{O}_X$-modules. The direct summand conjecture, now a theorem due to André [@andre], stipulates that any regular Noetherian ring is a splinter. In characteristic zero, the splinter property is a local property : a Noetherian scheme over $\mathbb Q$ is a splinter if and only if it is normal. In positive characteristic, the splinter property is no longer a local property in general. Bhatt's beautiful [@bhatt_derived_splinters_in_positive_characteristic Thm. 1.5] shows that, for a proper scheme over a Noetherian scheme $S$ of positive characteristic, the positive-degree cohomology of the structure sheaf vanishes up to finite covers. Bhatt draws two consequences for splinters of positive characteristic : first that the positive-degree cohomology of semiample line bundles on proper splinters vanishes, and second that splinters and derived splinters coincide. Our first aim is to provide further global constraints on proper splinters in positive characteristic. Our second aim is to study whether the splinter property is a derived-invariant for smooth projective varieties in positive characteristic.
## Global constraints on proper splinters in positive characteristic {#global-constraints-on-proper-splinters-in-positive-characteristic .unnumbered}
Recall that a smooth projective, separably rationally connected, variety over an algebraically closed field of positive characteristic has trivial Nori fundamental group [@biswas], has negative Kodaira dimension [@kollar_rational_curves_algebraic_varieties Ch. IV, Cor. 1.11 & Prop. 3.3], and has no nonzero global differential forms [@kollar_rational_curves_algebraic_varieties Ch. IV, Cor. 3.8]. Motivated by the intriguing question whether proper splinters over an algebraically closed field of positive characteristic are separably rationally connected, we show :
**Theorem 1**. *Let $X$ be a connected proper scheme over a field $k$ of positive characteristic. Assume that $X$ is a splinter.*
1. *[\[item:thm:triv_fund_grp\]]{#item:thm:triv_fund_grp label="item:thm:triv_fund_grp"} *([Theorem 1](#prop:trivial_fundamental_group){reference-type="ref" reference="prop:trivial_fundamental_group"})* If $X$ has a $k$-rational point $x\in X(k)$, then the Nori fundamental group $\pi^N_1(X,x)$ is trivial. In particular, if $k$ is algebraically closed, then any finite torsor over $X$ is trivial.*
2. *[\[item:thm:negative_kod_dim\]]{#item:thm:negative_kod_dim label="item:thm:negative_kod_dim"} *([Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"})* If $X$ is positive-dimensional, then $X$ has negative Kodaira dimension, i.e., $H^0(X,\mathcal{O}_X(nK_X)) = 0$ for all $n>0$, where $K_X$ denotes the canonical divisor of $X$.*
3. *[\[item:thm:global-1-forms\]]{#item:thm:global-1-forms label="item:thm:global-1-forms"} *([Theorem 1](#thm:global-1-forms){reference-type="ref" reference="thm:global-1-forms"})* If $X$ is smooth, then $H^0(X,\Omega_X^1) = 0$.*
Note from their respective proofs that , and in the Gorenstein case, follow from the known fact, recalled in [Proposition 1](#prop:pic_torsion_free){reference-type="ref" reference="prop:pic_torsion_free"}, that the Picard group of a proper splinter over a field of positive characteristic is torsion-free. The proofs of , and of in the general non-Gorenstein case, rely on a lifting property for splinters along torsors established in [Proposition 1](#prop:torsor){reference-type="ref" reference="prop:torsor"}, which itself relies on the more general lifting property established in [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"}. The new idea, which makes it in particular possible to avoid any Gorenstein assumption, is the use of the exceptional inverse image functor for finite morphisms. As an aside, by using the more general exceptional inverse image functor for proper morphisms, we further observe in [Remark 1](#rmk:crepant){reference-type="ref" reference="rmk:crepant"} that the splinter property for Noetherian schemes of positive characteristic lifts along crepant morphisms. We establish the following lifting property for splinters along finite quasi-torsor morphisms :
**Proposition 1** ([Proposition 1](#prop:quasitorsor){reference-type="ref" reference="prop:quasitorsor"}[\[item:finite_quasitorsor_under_group_scheme\]](#item:finite_quasitorsor_under_group_scheme){reference-type="ref" reference="item:finite_quasitorsor_under_group_scheme"}). *Let $\pi\colon Y \to X$ be a morphism of normal Noetherian Nagata schemes over a Noetherian ring $R$ such that either $H^0(Y,\mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi$ is a finite quasi-torsor, i.e., that there exists a Zariski open subset $U\subseteq X$ with $\operatorname{codim}_X (X\setminus U) \geq 2$ such that $\pi^{-1}(U) \to U$ is a torsor under a finite group scheme. If $X$ is a splinter, then $Y$ is a splinter.*
In fact, [Proposition 1](#prop:quasitorsor){reference-type="ref" reference="prop:quasitorsor"}[\[item:finite_quasitorsor_under_group_scheme\]](#item:finite_quasitorsor_under_group_scheme){reference-type="ref" reference="item:finite_quasitorsor_under_group_scheme"} is stated more generally (by [Remark 1](#rmk:splinter-vs-globally+){reference-type="ref" reference="rmk:splinter-vs-globally+"}) for globally $+$-regular pairs as introduced in [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char], and extends [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Prop. 6.20], which deals with the quasi-étale case. Regarding Theorem , we also show in [Theorem 1](#prop:trivial_fundamental_group){reference-type="ref" reference="prop:trivial_fundamental_group"} that if $X$ is a proper splinter over a separably closed field of positive characteristic, then its étale fundamental group is trivial. In that direction, we also refer to [@cai-lee-ma-schwede-tucker Thm. 7.0.3], where it is in particular shown that the étale fundamental group of the regular locus of a normal projective globally +-regular variety satisfying some additional technical assumptions is finite, but also to the references in the introduction of *loc. cit.* regarding the étale fundamental group of regular loci of near smooth Fano varieties.
For proper surfaces, we have the following results regarding splinters. Let $k$ be an algebraically closed field of positive characteristic. It is well-known that a proper curve over $k$ is a splinter if and only if it is isomorphic to $\mathbb P^1_k$. In [Proposition 1](#prop:splinter_surface_rational){reference-type="ref" reference="prop:splinter_surface_rational"}, we use [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"} to show that if a proper surface over $k$ is a splinter, then it is rational. We provide in [Proposition 1](#prop:blow_up_of_points_on_a_line_or_conic){reference-type="ref" reference="prop:blow_up_of_points_on_a_line_or_conic"} new examples of proper rational surfaces that are splinters, by establishing that the blow-up of $\mathbb P^2_k$ in any number of points lying on a conic is a splinter. On the other hand, in [Proposition 1](#prop:example_9_points_not_splinter){reference-type="ref" reference="prop:example_9_points_not_splinter"} and [Proposition 1](#prop:15_points_on_quartic_curve){reference-type="ref" reference="prop:15_points_on_quartic_curve"}, we give examples of proper rational surfaces that are not splinters. For instance, we show that over a finite field the blow-up of $\mathbb P^2_k$ in 9 points lying on a smooth cubic curve is not a splinter.
## $\mathcal{O}$-invariance and $D$-invariance of the splinter property {#mathcalo-invariance-and-d-invariance-of-the-splinter-property .unnumbered}
The second aim of this paper is to study whether the splinter property, and the related notion of *global $F$-regularity*, is a derived-invariant among smooth projective varieties. We say that two smooth projective varieties $X$ and $Y$ over a field $k$ are $D$-equivalent if there is a $k$-linear equivalence $\mathsf D^b(X) \cong \mathsf D^b(Y)$ between their bounded derived categories of coherent sheaves. Given that a smooth projective splinter, resp. a smooth projective globally $F$-regular variety, in positive characteristic is expected to, resp. is known to, have big anticanonical divisor (see [Conjecture 1](#conj:splinter_big_anticanonical_class){reference-type="ref" reference="conj:splinter_big_anticanonical_class"} due to [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char], resp. [Proposition 1](#prop:gFr-big){reference-type="ref" reference="prop:gFr-big"} due to [@schwede_smith_globally_f_regular_and_log_Fano_varieties Cor. 4.5]), we obtain the following positive answer :
**Theorem 2**. *Let $X$ and $Y$ be smooth projective varieties over a field $k$ of positive characteristic. Assume that $X$ and $Y$ are $D$-equivalent. Then :*
4. *[\[thm:D-splinter\]]{#thm:D-splinter label="thm:D-splinter"} *([Corollary 1](#cor:d_equvialent_splinters_pseudo-rational){reference-type="ref" reference="cor:d_equvialent_splinters_pseudo-rational"})* $X$ is a splinter if and only if $Y$ is a splinter, provided $-K_X$ is big.*
5. *[\[thm:D-gFr\]]{#thm:D-gFr label="thm:D-gFr"} *([Corollary 1](#cor:d-equiv-gFr){reference-type="ref" reference="cor:d-equiv-gFr"})* $X$ is globally $F$-regular if and only if $Y$ is globally $F$-regular, provided $k$ is $F$-finite.*
For that purpose, we introduce in [Definition 1](#def:O_equiv){reference-type="ref" reference="def:O_equiv"} the notion of $\mathcal{O}$-equivalence for separated schemes of finite type over a Noetherian base. By [Proposition 1](#prop:K-O-eq){reference-type="ref" reference="prop:K-O-eq"}, this notion coincides with that of $K$-equivalence of equidimensional smooth projective varieties over a field $k$, provided resolution of singularities holds over $k$. As before, but now for morphisms that are not necessarily finite, the new idea is to use the exceptional inverse image functor of Grothendieck, which allows for more flexibility. In [Proposition 1](#prop:d_equiv_implies_o_equiv){reference-type="ref" reference="prop:d_equiv_implies_o_equiv"}, we show the analogue of [@kawamata_d_equivalence_and_k_equivalence Thm. 1.4(2)] concerned with $K$-equivalence : if two smooth projective varieties $X$ and $Y$ over a field $k$ are derived-equivalent and if $K_X$ or $-K_X$ is big, then $X$ and $Y$ are $\mathcal{O}$-equivalent. As such, Theorem [\[thm:D-splinter\]](#thm:D-splinter){reference-type="ref" reference="thm:D-splinter"} and Theorem [\[thm:D-gFr\]](#thm:D-gFr){reference-type="ref" reference="thm:D-gFr"} follow from the following more general :
**Theorem 3**. *Let $X$ and $Y$ be normal pseudo-rational varieties over a field $k$ of positive characteristic. Assume that $X$ and $Y$ are $\mathcal{O}$-equivalent. Then :*
6. *[\[thm:O-splinter\]]{#thm:O-splinter label="thm:O-splinter"} *([Theorem 1](#cor:splinters-O-eq){reference-type="ref" reference="cor:splinters-O-eq"})* $X$ is a splinter if and only if $Y$ is a splinter.*
*Assume in addition that $X$ and $Y$ are quasi-projective and that $k$ is $F$-finite. Then :*
7. *[\[thm:O-gFr\]]{#thm:O-gFr label="thm:O-gFr"} *([Theorem 1](#prop:globF_stable_O_equiv){reference-type="ref" reference="prop:globF_stable_O_equiv"})* $X$ is globally $F$-regular if and only if $Y$ is globally $F$-regular.*
We note that the assumption that $X$ and $Y$ be *pseudo-rational* is a mild assumption as the splinter property (resp. global $F$-regularity) for schemes as in Theorem [\[thm:O-splinter\]](#thm:O-splinter){reference-type="ref" reference="thm:O-splinter"} (resp. Theorem [\[thm:O-gFr\]](#thm:O-gFr){reference-type="ref" reference="thm:O-gFr"}) implies pseudo-rationality ; see [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"} and [Proposition 1](#prop:gFr-splinter){reference-type="ref" reference="prop:gFr-splinter"}. More generally, we show in [Theorem 1](#cor:splinters-O-eq){reference-type="ref" reference="cor:splinters-O-eq"} that the *derived splinter* property is invariant under $\mathcal{O}$-equivalence for pseudo-rational excellent schemes of finite type and separated over a Noetherian scheme $S$.
*Organization of the paper 1*. In [\[S:reflexive_sheaes_and_dualizing_complexes,S:prelim_splinters_globF,S:invariance_small_birat_maps\]](#S:reflexive_sheaes_and_dualizing_complexes,S:prelim_splinters_globF,S:invariance_small_birat_maps){reference-type="ref" reference="S:reflexive_sheaes_and_dualizing_complexes,S:prelim_splinters_globF,S:invariance_small_birat_maps"}, we mostly fix notation and collect basic and known facts about splinters and globally $F$-regular varieties. Our first new contributions are contained in [\[S:LD-splinter,S:LD-globF\]](#S:LD-splinter,S:LD-globF){reference-type="ref" reference="S:LD-splinter,S:LD-globF"}. Notably, the use of the exceptional inverse image functor makes its first appearance in [5.2](#SS:lift-splinter){reference-type="ref" reference="SS:lift-splinter"}, where we observe that the splinter property lifts along finite surjective morphisms $\pi\colon Y \to X$ of Noetherian schemes such that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$, under the condition that either $H^0(Y, \mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$.
In [\[S:finite_torsors_over_splitners,S:splitners_neg_kod_dim,S:vanishing_global_1\_forms,S:splinters_surfaces\]](#S:finite_torsors_over_splitners,S:splitners_neg_kod_dim,S:vanishing_global_1_forms,S:splinters_surfaces){reference-type="ref" reference="S:finite_torsors_over_splitners,S:splitners_neg_kod_dim,S:vanishing_global_1_forms,S:splinters_surfaces"}, we explore global constraints on proper splinters in positive characteristic and establish Theorems . In [10](#S:splinters_surfaces){reference-type="ref" reference="S:splinters_surfaces"}, we show that proper splinter surfaces in positive characteristic are rational, and give examples of rational surfaces that are splinters as well as examples of rational surfaces that are not splinters.
Finally in [11](#S:O-eq){reference-type="ref" reference="S:O-eq"}, which can be read mostly independently of the rest of the paper, we introduce the notion of $\mathcal{O}$-equivalence for separated schemes of finite type over a Noetherian scheme $S$ and compare it, in the case of smooth projective varieties over a field, to the usual notions of $K$-equivalence and $D$-equivalence. We then establish Theorems .
*Conventions 1*. A *variety* is an integral separated scheme of finite type over a field. For a scheme $X$ over the finite field $\mathbb F_p$ with $p$ elements, the *Frobenius* is denoted by $F \colon X\to X$ ; it is the identity on the underlying topological space and sends each local section of $\mathcal{O}_X$ to its $p$-th power. A scheme $X$ over $\mathbb F_p$ is said to be *$F$-finite* if the Frobenius map $F\colon X \to X$ is finite. Note that a variety over an $F$-finite field is an $F$-finite scheme. For a scheme $X$, we denote by $X_{\mathrm{reg}}$ its regular locus and by $X_{\mathrm{sing}} \coloneqq X\setminus X_{\mathrm{reg}}$ its singular locus. For a finite, resp. proper, morphism $f\colon Z \to X$, we denote by $\eta_f\colon \mathcal{O}_X \to f_* \mathcal{O}_Z$, resp. $\eta_f\colon \mathcal{O}_X \to \mathrm R f_* \mathcal{O}_Z$, the canonical morphism in the category, resp. derived category, of coherent sheaves on $X$. Given a Weil divisor $D$ on a normal scheme $X$, we write $\sigma_D \colon \mathcal{O}_X \to \mathcal{O}_X(D)$ for the morphism determined by $D$.
*Acknowledgements 1*. We thank Javier Carvajal-Rojas and Karl Schwede for useful comments.
# Reflexive sheaves and dualizing complexes {#S:reflexive_sheaes_and_dualizing_complexes}
## Reflexive sheaves and Weil divisors
Let $X$ be an integral Noetherian scheme. Recall that a coherent sheaf $\mathcal{F}$ on $X$ is called *reflexive* if the canonical map $\mathcal{F}\to \mathcal{F}^{\vee \vee}$ is an isomorphism, where by definition $\mathcal{F}^\vee \coloneqq \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X)$. The following facts can be found for example in [@stacks-project [Tag 0AVT](https://stacks.math.columbia.edu/tag/0AVT)] or in [@schwede_notes_generalized_divisors] :
1. For any coherent sheaf $\mathcal{F}$ and any reflexive sheaf $\mathcal{G}$ the sheaf $\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{F}, \mathcal{G})$ is reflexive.
2. If $X$ is normal, then a sheaf is reflexive if and only if it is $S_2$.
3. [\[item:restriction_to_big_open_equiv_of_cat\]]{#item:restriction_to_big_open_equiv_of_cat label="item:restriction_to_big_open_equiv_of_cat"}If $X$ is normal and $i \colon U \hookrightarrow X$ is an open immersion such that $\operatorname{codim}_X (X \setminus U)\geq 2$, then $i_* i^* \mathcal{F}\cong \mathcal{F}$ for any reflexive sheaf $\mathcal{F}$. Furthermore, the restriction $i^*$ induces an equivalence of categories from reflexive coherent sheaves on $X$ to reflexive coherent sheaves on $U$.
To any Weil divisor $D$ on a normal variety $X$ with function field $K(X)$, one can associate a sheaf $\mathcal{O}_X(D) \subseteq K(X)$ whose sections on open subsets $V \subseteq X$ are given by $$\Gamma(V, \mathcal{O}_X(D))\coloneqq \{f \in K(X)^\times \mid \mathrm{div}(f)\vert_V +D\vert_V \geq 0\} \cup \{0\}.$$ The sheaf $\mathcal{O}_X(D)$ is reflexive of rank $1$ and it is a line bundle if and only if $D$ is Cartier. Since any reflexive rank $1$ sheaf is a subsheaf of the locally constant sheaf $K(X)$, we have a 1-1 correspondence $$\{\mbox{Weil divisors on $X$ up to linear equivalence} \} \leftrightarrow \{\mbox{reflexive sheaves of rank 1 on } X\}/\cong .$$ Moreover thanks to , this bijection turns out to be a group homomorphism if one takes the double dual of the usual tensor product on the right hand side, i.e., $$\mathcal{O}_X(D+D')\cong (\mathcal{O}_X(D) \otimes \mathcal{O}_X(D'))^{\vee \vee}.$$ A Weil divisor $D$ is effective if and only if $\mathcal{O}_X \subseteq \mathcal{O}_X(D) \subseteq K(X)$. Thus a reflexive rank $1$ sheaf $\mathcal{F}$ corresponds to an effective divisor $D$ if and only if there is an injective morphism $\mathcal{O}_X \hookrightarrow \mathcal{F}$. For later use recall the following criterion.
**Lemma 1**. *Let $X$ be a normal variety over a field $k$ and assume that $H^0(X, \mathcal{O}_X)$ is a field. Then a Weil divisor $D$ on $X$ is linearly equivalent to $0$ if and only if both $\mathcal{O}_X(D)$ and $\mathcal{O}_X(-D)$ admit nonzero global sections.*
*Proof.* The only if part is obvious. Assume $s\colon \mathcal{O}_X \to \mathcal{O}_X(D)$ and $t\colon \mathcal{O}_X \to \mathcal{O}_X(-D)$ are nontrivial sections. We have isomorphisms of sheaves $\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X(-D))\cong \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{O}_X(D), \mathcal{O}_X)$ and $\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{O}_X(D), \mathcal{O}_X(D))\cong \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X)$. Therefore we can interpret $t$ as a global section of $\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{O}_X(D), \mathcal{O}_X)$. The compositions $s\circ t$ and $t\circ s$ are both nonzero and give elements in $H^0(X, \mathcal{O}_X)$, thus they are isomorphisms. Hence $\mathcal{O}_X \cong \mathcal{O}_X(D)$, which shows that $D$ is trivial. ◻
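Returning to the correspondence between Weil divisors and reflexive rank $1$ sheaves, a standard example illustrating why the double dual is needed (and which is not used in the sequel) is the quadric cone $X=\operatorname{Spec} k[x,y,z]/(xy-z^2)$ over a field $k$, together with the prime divisor $D$ cut out by the ideal $(x,z)$. The divisor $D$ is not Cartier, so that $\mathcal{O}_X(D)$ is a reflexive sheaf of rank $1$ which is not a line bundle ; on the other hand, $2D=\operatorname{div}(x)$ is principal, so that $\mathcal{O}_X(2D)=(\mathcal{O}_X(D)\otimes \mathcal{O}_X(D))^{\vee\vee}\cong \mathcal{O}_X$, even though $\mathcal{O}_X(D)\otimes \mathcal{O}_X(D)$ itself has torsion and is therefore not reflexive.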
## Dualizing complexes
Let $h \colon X \to \operatorname{Spec}k$ be a separated scheme of finite type over a field $k$. The exceptional inverse image functor $h^!$ is well-defined, see, e.g., [@stacks-project [Tag 0AU3](https://stacks.math.columbia.edu/tag/0AU3)], and $\omega_X^\bullet \coloneqq h^! \mathcal{O}_{\operatorname{Spec}k} \in \mathsf D^b(X)$ is a dualizing complex on $X$. Here and throughout this paper $\mathsf D^b(X)=\mathsf D^b(\operatorname{Coh}(X))$ is the bounded derived category of coherent $\mathcal{O}_X$-modules and $\omega_X^\bullet$ is the dualizing complex obtained as $h^! \mathcal{O}_{\operatorname{Spec}k}$. If $X$ is equidimensional, $\omega_X \coloneqq \mathcal{H}^{-\operatorname{dim}X}(\omega_X^\bullet)$ is an $S_2$ sheaf, called the *dualizing sheaf* [@stacks-project [Tag 0AWH](https://stacks.math.columbia.edu/tag/0AWH)]. If $X$ is smooth over $k$, then $\omega_X$ coincides with the *canonical sheaf* $\bigwedge^{\operatorname{dim}X} \Omega_{X/k}^1$ [@stacks-project [Tag 0E9Z](https://stacks.math.columbia.edu/tag/0E9Z)]. Moreover, $X$ is Cohen--Macaulay if and only if $\omega_X^\bullet = \omega_X[\operatorname{dim}X]$ [@stacks-project [Tag 0AWQ](https://stacks.math.columbia.edu/tag/0AWQ)] and $X$ is Gorenstein if and only if $\omega_X^\bullet$ is an invertible object [@stacks-project [Tag 0AWV](https://stacks.math.columbia.edu/tag/0AWV)]. The latter condition is further equivalent to $\omega_X^\bullet = \omega_X[\operatorname{dim}X]$ with $\omega_X$ an invertible sheaf [@stacks-project [Tag 0FPG](https://stacks.math.columbia.edu/tag/0FPG)]. If $X$ is normal, then $\omega_X = \mathcal{O}_X(K_X)$ is a reflexive sheaf of rank $1$. Thus, there exists a unique (up to linear equivalence) Weil divisor $K_X$, called the *canonical divisor*, such that $\omega_X = \mathcal{O}_X(K_X)$.
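For instance, if $X=\mathbb P^n_k$, then $X$ is Gorenstein, $\omega_X^\bullet=\omega_X[n]$ and $\omega_X=\bigwedge^n\Omega^1_{X/k}\cong \mathcal{O}_{\mathbb P^n_k}(-n-1)$.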
Assume that $h\colon X \to \operatorname{Spec}k$ is proper and $X$ is equidimensional. Then $\mathrm{R}h_*$ is left adjoint to $h^!$ and for every $K \in \mathsf D^b(X)$, there is a functorial isomorphism $\operatorname{Ext}^i_X(K, \omega^\bullet_X) = \operatorname{Hom}_k (H^i(X, K), k)$ compatible with shifts and exact triangles, see, e.g., [@stacks-project [Tag 0FVU](https://stacks.math.columbia.edu/tag/0FVU)]. By Yoneda, the object $\omega_X^\bullet$ is unique up to unique isomorphism among all objects satisfying this universal property.
# Preliminaries on splinters and globally $F$-regular varieties {#S:prelim_splinters_globF}
## Splinters {#SS:splinter}
We review the notion of *splinter* for Noetherian schemes, the local constraints it imposes, as well as the global constraints it imposes on proper schemes over a field of positive characteristic.
**Definition 1**. A Noetherian scheme $X$ is a *splinter* if for any finite surjective morphism $f \colon Y \to X$ the canonical map $\mathcal{O}_X \to f_*\mathcal{O}_Y$ splits in the category $\operatorname{Coh}(X)$ of coherent sheaves on $X$. A Noetherian scheme $X$ is a *derived splinter* if for any proper surjective morphism $f\colon Y \to X$ the map $\mathcal{O}_X \to \mathrm{R} f_*\mathcal{O}_Y$ splits in the bounded derived category $\mathsf D^b(X)$ of coherent sheaves on $X$.
Note that a derived splinter is a splinter, so that being a derived splinter is *a priori* more restrictive than being a splinter. In the recent work [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char], the notion of splinter has been extended to pairs. Precisely :
**Definition 1** ([@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Def. 6.1]). A pair $(X,\Delta)$ consisting of a normal Noetherian scheme $X$ and of an effective $\mathbb Q$-Weil divisor $\Delta$ is called *globally $+$-regular* if for any finite surjective morphism $f\colon Y \to X$ with $Y$ normal, the natural map $\mathcal{O}_X \to f_*\mathcal{O}_Y \to f_* \mathcal{O}_Y(\lfloor f^*\Delta\rfloor)$ splits in $\mathrm{Coh}(X)$.
*Remark 1*. It is obvious that if a normal Noetherian scheme $X$ is a splinter, then $(X,0)$ is globally $+$-regular. For the converse, recall from [@stacks-project [Tag 035S](https://stacks.math.columbia.edu/tag/035S)] that the normalization of a Nagata scheme is a finite morphism, and recall from [@stacks-project [Tag 032T](https://stacks.math.columbia.edu/tag/032T)] that any quasi-finite extension of a Nagata ring is Nagata. In particular, if $X$ is any normal Nagata Noetherian scheme, e.g. a normal scheme of finite type over a field, over $\mathbb Z$, or over a Noetherian complete local ring [@stacks-project [Tag 0335](https://stacks.math.columbia.edu/tag/0335)], then $X$ is a splinter if and only if $(X,0)$ is globally $+$-regular. Indeed, if $\pi\colon Y\to X$ is a finite surjective morphism, then $Y$ is Nagata so that the normalization $f\colon \widetilde Y \to Y$ is finite. Any splitting of $\mathcal{O}_X \to (\pi\circ f)_*\mathcal{O}_{\widetilde Y}$ provides a splitting of $\mathcal{O}_X \to \pi_*\mathcal{O}_{Y}$.
In general, if a Noetherian scheme $X$ is a splinter, then it is a basic fact that any open $U \subseteq X$ is a splinter ; see, e.g., [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"} below. Hence, if $X$ is a splinter, then all of its local rings are splinters. Moreover, if $X$ is in addition assumed to be affine, $X$ is a splinter if and only if all its local rings are splinters [@datta_tucker_on_some_permanence_properties_of_derived_splinters Lem. 2.1.3].
In characteristic zero, a Noetherian scheme $X$ is a splinter if and only if it is normal [@bhatt_derived_splinters_in_positive_characteristic Ex. 2.1], and it is a derived splinter if and only if it has rational singularities [@bhatt_derived_splinters_in_positive_characteristic Thm. 2.12]. In particular, in characteristic zero, the splinter and derived splinter properties are distinct, and they both define local properties.
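For instance, in characteristic zero, the affine cone over a smooth plane cubic curve is normal, hence a splinter, but it does not have rational singularities and is therefore not a derived splinter.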
In positive characteristic, Bhatt showed that the splinter and the derived splinter properties agree [@bhatt_derived_splinters_in_positive_characteristic Thm. 1.4] and observed that, in contrast to the affine setting, the splinter property is not a local property for proper schemes. The following proposition summarizes the known local constraints on splinters in positive characteristic.
**Proposition 1** ([@bhatt_derived_splinters_in_positive_characteristic Ex. 2.1, Rmk. 2.5, Cor. 6.4], [@bhatt2021cohenmacaulayness Rmk. 5.14], [@singh_q_gorenstein_splinter_rings_of_characteristic_p_are_f_regular], [@smith_globally_f_regular_varieties_applications_to_vanishing_theorems_for_quotients_of_fano_varieties §2.2]). *Let $X$ be a scheme of finite type over a field of positive characteristic. If $X$ is a splinter, then*
1. *$X$ is normal ;*
2. *$X$ is Cohen--Macaulay ;*
3. *$X$ is pseudo-rational ;*
4. *$X$ is $F$-rational.*
*Moreover, if $X$ is $\mathbb Q$-Gorenstein, then $X$ is $F$-regular, that is, if the base field is assumed to be $F$-finite, its local rings are strongly $F$-regular.*
Recall from [@hochster_huneke_tight_closure_and_strong_f_regularity] that an $F$-finite ring $R$ of positive characteristic is *strongly $F$-regular* if for any $c\in R$ not belonging to any minimal prime ideal of $R$, there exists $e>0$ such that the inclusion of $R$-modules $R \hookrightarrow F_*^e R$ which sends 1 to $F^e_*c$ splits as a map of $R$-modules. The ring $R$ is strongly $F$-regular if and only if its local rings are strongly $F$-regular. If $R$ is strongly $F$-regular, then the affine scheme $X=\operatorname{Spec}R$ is a splinter ; see, e.g., [@ma_polstra_lecture_notes Thm. 4.8].
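For illustration, let $k$ be a perfect field of characteristic $p$ and let $R = k[x]$. Then $F_*^e R$ is a free $R$-module with basis $F_*^e 1, F_*^e x, \dots, F_*^e x^{p^e-1}$. For $c = x^m$ and any $e$ with $p^e > m$, the $R$-linear map $\phi \colon F_*^e R \to R$ sending $F_*^e x^m$ to $1$ and the other basis elements to $0$ yields $$R \xrightarrow{\ 1 \,\mapsto\, F_*^e c\ } F_*^e R \xrightarrow{\ \phi\ } R, \qquad 1 \mapsto F_*^e x^m \mapsto 1,$$ exhibiting the splitting required in the definition for this $c$ ; an arbitrary nonzero $c$ is handled similarly. For a singular example, the quadric cone $k[x,y,z]/(xy-z^2) \cong k[u^2,uv,v^2]$, being a direct summand of the polynomial ring $k[u,v]$ via the projection onto even degrees, is strongly $F$-regular, hence a splinter.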
The splinter property also imposes strong constraints on the global geometry of proper varieties in positive characteristic. For example, from Bhatt's "vanishing up to finite cover in positive characteristic" [@bhatt_derived_splinters_in_positive_characteristic Thm. 1.5], we have :
**Proposition 1** ([@bhatt_derived_splinters_in_positive_characteristic]). *Let $X$ be a proper variety over a field of positive characteristic and let $\mathcal{L}$ be a semiample line bundle on $X$. If $X$ is a splinter, then $H^i(X, \mathcal{L}) = 0$ for all $i>0$. In particular, $H^i(X, \mathcal{O}_X)=0$ for all $i>0$.*
*Proof.* For a proper variety $X$ over a field of positive characteristic, there exists, by [@bhatt_derived_splinters_in_positive_characteristic Prop. 7.2], for any $i>0$ a finite surjective morphism $\pi\colon Y \to X$ such that the induced map $H^i(X,\mathcal{L}) \to H^i(Y,\pi^*\mathcal{L})$ is zero. If now $X$ is a splinter, the pullback map $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$ admits a splitting $s$, i.e., we have $$\operatorname{id}\colon \mathcal{O}_X \to \pi_*\mathcal{O}_Y \stackrel{s}{\to} \mathcal{O}_X.$$ Tensoring with $\mathcal{L}$ and using the projection formula, we obtain $$\operatorname{id}\colon H^i(X,\mathcal{L}) \xrightarrow{0} H^i(Y,\pi^*\mathcal{L}) = H^i(X,\pi_* \pi^*\mathcal{L}) \to H^i(X,\mathcal{L}),$$ where the equality in the middle uses that $\pi$ is finite, in particular affine. We conclude that $H^i(X,\mathcal{L})=0$. ◻
As a direct consequence, we have the following useful constraint, which will be refined in [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"}, on proper splinters in positive characteristic :
**Lemma 1**. *Let $X$ be a proper scheme over a field of positive characteristic, with positive-dimensional irreducible components. If $X$ is a splinter, then its canonical divisor $K_X$ is not effective, in particular its dualizing sheaf $\omega_X$ is nontrivial.*
*Proof.* Since a splinter is normal, by working on each connected component of $X$ separately, we can assume that $X$ is of pure positive dimension, say $n$. By [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"}, $X$ is Cohen--Macaulay, so $\omega_X^\bullet = \omega_X [n]$. By [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"} and Serre duality for Cohen--Macaulay schemes, we obtain $H^0(X, \omega_X)^\vee \cong H^n(X, \mathcal{O}_X)=0$. This shows that the Weil divisor $K_X$ is not effective. ◻
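For instance, no smooth projective variety $X$ of dimension $n > 0$ with trivial canonical bundle over a field of positive characteristic, e.g. an abelian variety or a K3 surface, is a splinter : the canonical divisor $K_X = 0$ is effective, and indeed Serre duality gives $$H^n(X, \mathcal{O}_X) \cong H^0(X, \omega_X)^\vee \neq 0,$$ contradicting [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"}.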
## Globally $F$-regular varieties
Let $p$ be a prime number. We recall the notion of *global $F$-regularity* for normal varieties over an $F$-finite field of characteristic $p$, and review the local constraints it imposes, as well as the global constraints it imposes on proper varieties.
**Definition 1** ([@smith_globally_f_regular_varieties_applications_to_vanishing_theorems_for_quotients_of_fano_varieties; @schwede_smith_globally_f_regular_and_log_Fano_varieties]). A normal $F$-finite scheme $X$ over $\mathbb F_p$ is called *globally $F$-regular* if for any effective Weil divisor $D$ on $X$ there exists a positive integer $e \in \mathbb Z_{>0}$ such that the map $$\mathcal{O}_X \to F_*^e \mathcal{O}_X \xrightarrow{F_*^e(\sigma_D)} F_*^e \mathcal{O}_X(D)$$ of $\mathcal{O}_X$-modules splits. Here $\sigma_D \colon \mathcal{O}_X \to \mathcal{O}_X(D)$ is the morphism determined by the divisor $D$.
A normal $F$-finite scheme $X$ over $\mathbb F_p$ is called *$F$-split* if $\mathcal{O}_X \to F_*\mathcal{O}_X$ splits.
A pair $(X,\Delta)$ consisting of a normal $F$-finite scheme $X$ over $\mathbb F_p$ and an effective $\mathbb Q$-Weil divisor $\Delta$ is called *globally $F$-regular* if for any effective Weil divisor $D$ on $X$ there exists an integer $e >0$ such that the natural map $$\mathcal{O}_X \to F_*^e \mathcal{O}_X(\lceil (p^e-1)\Delta\rceil + D)$$ of $\mathcal{O}_X$-modules splits. In particular, $X$ is globally $F$-regular if and only if $(X,0)$ is globally $F$-regular.
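Note that a globally $F$-regular scheme is in particular $F$-split : for any effective Weil divisor $D$ and any $e>0$, the map $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ factors as $$\mathcal{O}_X \to F_*\mathcal{O}_X \to F_*^e\mathcal{O}_X \to F_*^e\mathcal{O}_X(D),$$ so that any splitting of the composition induces a splitting of $\mathcal{O}_X \to F_*\mathcal{O}_X$.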
The following well-known proposition records the local constraints that global $F$-regularity imposes on a normal variety over an $F$-finite field and echoes [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"}.
**Proposition 1**. *Let $X$ be a normal $F$-finite scheme over $\mathbb F_p$. If $X$ is globally $F$-regular, then its local rings are strongly $F$-regular.*
*Proof.* We follow the arguments of [@smith_lecture_notes Prop. 6.22]. Fix a point $x\in X$. If $c \in \mathcal{O}_{X,x}$ is a nonzero element, $c$ defines an effective divisor in a neighborhood of $x$. By taking the Zariski closure, this divisor extends to a Weil divisor $D$ on $X$ such that the map $\mathcal{O}_X \to \mathcal{O}_X(D)$ localizes to the map $\mathcal{O}_{X,x} \to c^{-1}\mathcal{O}_{X,x}$ sending $1 \mapsto 1$. Since $X$ is globally $F$-regular, there exists $e>0$ such that $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits. Thus, localizing yields a splitting of the map $\mathcal{O}_{X,x} \to F_*^e (c^{-1}\mathcal{O}_{X,x} )$ sending $1 \mapsto 1$. Multiplying by $c$ yields a splitting of the map $\mathcal{O}_{X,x} \to F_*^e\mathcal{O}_{X,x}$ sending $1 \mapsto F_*^e c$. ◻
Global $F$-regularity is a local property for normal affine varieties. Indeed, a normal affine variety over an $F$-finite field $k$ is globally $F$-regular if and only if it is strongly $F$-regular if and only if all its local rings are strongly $F$-regular ; see [@hochster_huneke_tight_closure_and_strong_f_regularity Thm. 3.1]. By [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"} and the discussion that follows, we see that a $\mathbb Q$-Gorenstein normal affine variety over an $F$-finite field is globally $F$-regular if and only if it is a splinter. In fact, any normal globally $F$-regular variety over an $F$-finite field is a splinter :
**Proposition 1** ([@bhatt_derived_splinters_in_positive_characteristic Prop. 8.9], [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Lem. 6.14]). *Let $X$ be an $F$-finite normal scheme over $\mathbb F_p$ and let $\Delta$ be an effective $\mathbb Q$-Weil divisor. If $(X,\Delta)$ is globally $F$-regular, then $(X,\Delta)$ is globally $+$-regular.*
*In particular, by [Remark 1](#rmk:splinter-vs-globally+){reference-type="ref" reference="rmk:splinter-vs-globally+"}, assuming $X$ is a normal scheme of finite type over an $F$-finite field, if $X$ is globally $F$-regular, then $X$ is a splinter.*
As with [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"} in the splinter case, global $F$-regularity imposes strong constraints on the global geometry of normal *projective* varieties :
**Proposition 1** ([@smith_globally_f_regular_varieties_applications_to_vanishing_theorems_for_quotients_of_fano_varieties Cor. 4.3], [@schwede_smith_globally_f_regular_and_log_Fano_varieties Thm. 1.1]). *Let $X$ be a normal projective variety over an $F$-finite field of positive characteristic. Assume that $X$ is globally $F$-regular. Then :*
1. *For all nef line bundles $\mathcal{L}$ on $X$, $H^i(X, \mathcal{L})=0$ for all $i>0$.*
2. *$X$ is log Fano ; in particular, if $X$ is in addition $\mathbb Q$-Gorenstein, then $-K_X$ is big. [\[item:globF_log_Fano\]]{#item:globF_log_Fano label="item:globF_log_Fano"}*
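For instance, the vanishing (1) applied to $\mathcal{L} = \mathcal{O}_X$ shows that no elliptic curve $E$ over an algebraically closed field of positive characteristic is globally $F$-regular, since $$H^1(E, \mathcal{O}_E) \neq 0 ;$$ by [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"}, $E$ is not a splinter either. On the other hand, it is classical that $E$ is $F$-split if and only if it is ordinary, so $F$-splitness implies neither global $F$-regularity nor the splinter property.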
A proper normal curve over an algebraically closed field $k$ of positive characteristic is globally $F$-regular if and only if it is a splinter if and only if it is isomorphic to the projective line. It follows from the proof of [@kawakami_totaro_endomorphisms_of_varieties_and_bott_vanishing Thm. 5.2] that a smooth projective Fano variety over a perfect field of positive characteristic is globally $F$-regular if and only if it is a splinter if and only if it is $F$-split. The following conjecture stems from [Proposition 1](#prop:gFr-big){reference-type="ref" reference="prop:gFr-big"} and the folklore expectation (see, e.g., [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Rmk. 6.16]) that splinters should be globally $F$-regular.
**Conjecture 1** ([@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Conj. 6.17]). *Let $X$ be a $\mathbb Q$-Gorenstein projective scheme over a field of positive characteristic. If $X$ is a splinter, then $-K_X$ is big.*
# Invariance under small birational maps {#S:invariance_small_birat_maps}
We start by describing the behavior of the splinter property under open embeddings.
**Lemma 1**. *Let $X$ be a Noetherian scheme and let $U\subseteq X$ be an open dense subset.*
1. *[\[item:open_of_splinter\]]{#item:open_of_splinter label="item:open_of_splinter"} If $X$ is a splinter, then $U$ is a splinter.*
2. *[\[item:splinter_determined_on_big_open\]]{#item:splinter_determined_on_big_open label="item:splinter_determined_on_big_open"} Assume that $X$ is normal and Nagata, and that $\operatorname{codim}_X (X \setminus U) \geq 2$. If $U$ is a splinter, then $X$ is a splinter.*
*Proof.* To prove (1), consider a finite cover $f\colon Y \to U$. By [@bhatt_derived_splinters_in_positive_characteristic Prop. 4.1] this cover extends to a finite morphism $\bar{f}\colon \overline{Y} \to X$. Since $X$ is a splinter, we obtain a section $s$ such that the composition $$\mathcal{O}_X \to \bar{f}_* \mathcal{O}_{\overline{Y}} \xrightarrow{s} \mathcal{O}_X$$ is the identity. By restricting to $U$, we obtain the desired section of $\mathcal{O}_U \to f_*\mathcal{O}_Y$.
For (2), consider a finite cover $f\colon Y \to X$. Since $X$ is Nagata, we can assume that $Y$ is normal ; see [Remark 1](#rmk:splinter-vs-globally+){reference-type="ref" reference="rmk:splinter-vs-globally+"}. The sheaf $f_*\mathcal{O}_Y$ satisfies the property $S_2$ by [@ega_iv_2 Prop. 5.7.9] and is therefore reflexive. Since $U$ is a splinter, we obtain a splitting of $\mathcal{O}_U \to f_*\mathcal{O}_Y \vert_U$ and this extends to a splitting of $\mathcal{O}_X \to f_* \mathcal{O}_Y$ as $X$ is normal and all the involved sheaves are reflexive. ◻
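The codimension assumption in (2) cannot be dropped, even for normal $X$. For instance, over an $F$-finite field $k$ of positive characteristic, let $X = \operatorname{Spec}k[x,y,z]/(f)$ be the affine cone over a smooth plane cubic $E \subseteq \mathbb P^2_k$ and let $g$ be a nonzero element of the maximal ideal of the vertex. Then $$U \coloneqq X \setminus V(g) \;\subseteq\; X \setminus \{\text{vertex}\} = X_{\mathrm{reg}}$$ is a regular affine dense open subset whose complement has codimension $1$, so its coordinate ring is strongly $F$-regular and $U$ is a splinter, while $X$ is normal but not a splinter : it is well known that the affine cone over an elliptic curve is not $F$-rational, so [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"} applies.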
*Remark 1*. Let $\Delta$ be an effective $\mathbb Q$-Weil divisor on a normal Noetherian scheme $X$ and let $U\subseteq X$ be an open dense subset. As the proof is analogous to the one of [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"}, we leave it to the reader to verify that if $(X, \Delta)$ is globally $+$-regular, then $(U, \Delta\vert_U)$ is globally $+$-regular. Conversely, if $\operatorname{codim}_X (X \setminus U) \geq 2$ and if $(U, \Delta\vert_U)$ is globally $+$-regular, then $(X, \Delta)$ is globally $+$-regular.
**Lemma 1**. *Let $\pi \colon X \to Y$ be a surjective morphism of Noetherian schemes. Assume that $X$ is a splinter and that $Y$ is integral with generic point $\eta$, then the generic fiber $X_\eta$ is a splinter.*
*Proof.* Let $f \colon Z \to X_\eta$ be a finite surjective morphism. Then there exists a nonempty open subset $U \subseteq Y$ such that $f$ spreads out to a finite surjective morphism $f_U \colon Z_U \to X_U\coloneqq \pi^{-1}(U)$. By [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"} the scheme $X_U$ is a splinter. Thus $\mathcal{O}_{X_U} \to {f_U}_* \mathcal{O}_{Z_U}$ admits a section. By flat base change, restricting to the generic fiber yields the desired splitting. ◻
We now turn to the analogues of [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"} and [Lemma 1](#lem:generic_fiber_splinter){reference-type="ref" reference="lem:generic_fiber_splinter"} in the global $F$-regular setting. The following elementary lemma appears, e.g., in [@gongyo_li_patakfalvi_zsolt_schwede_tanaka_zong_on_rational_connectedness_of_globally_f_regular_threefolds Lem. 1.5].
**Lemma 1**. *Let $(X,\Delta)$ be a pair consisting of a normal variety over an $F$-finite field $k$ of positive characteristic and of an effective $\mathbb Q$-Weil divisor $\Delta$, and let $U \subseteq X$ be an open subset.*
1. *[\[item:open_of_gFr\]]{#item:open_of_gFr label="item:open_of_gFr"} If $(X,\Delta)$ is globally $F$-regular, then $(U,\Delta\vert_U)$ is globally $F$-regular.*
2. *[\[item:gFr_determined_on_big_open\]]{#item:gFr_determined_on_big_open label="item:gFr_determined_on_big_open"} Assume that $U\subseteq X$ is dense and $\operatorname{codim}_X (X \setminus U) \geq 2$. If $(U,\Delta\vert_U)$ is globally $F$-regular, then $(X,\Delta)$ is globally $F$-regular.*
*Proof.* For the proof of (1), consider an effective Weil divisor $D_0$ on $U$. Let $D$ be the Zariski closure of $D_0$ in $X$. By assumption there exists $e>0$ such that $\mathcal{O}_X \to F_*^e \mathcal{O}_X(\lceil (p^e-1)\Delta\rceil + D)$ splits. Restricting to $U$ yields the desired splitting of $\mathcal{O}_U \to F_*^e \mathcal{O}_U(\lceil (p^e-1)\Delta\vert_U \rceil + D_0)$.
To prove (2), note that, since $F$ is finite, $F_*$ preserves $S_2$ sheaves by [@ega_iv_2 Prop. 5.7.9], so $F_*$ sends reflexive sheaves to reflexive sheaves. Furthermore, the restriction along $U\hookrightarrow X$ induces a bijection between Weil divisors of $X$ and Weil divisors of $U$. Statement (2) follows since, for any Weil divisor $D$, the map $\mathcal{O}_X \to F^e_* \mathcal{O}_X(\lceil (p^e-1)\Delta \rceil + D)$ of reflexive sheaves splits if and only if its restriction to $U$ splits. ◻
**Lemma 1**. *Let $\pi \colon X \to Y$ be a surjective morphism of normal varieties over an $F$-finite field of positive characteristic. Assume that $(X,\Delta)$ is globally $F$-regular and that $Y$ is integral with generic point $\eta$. Then the generic fiber $X_\eta$ is normal and $(X_\eta, \Delta_\eta)$ is globally $F$-regular.*
*Proof.* By [@schwede_smith_globally_f_regular_and_log_Fano_varieties Lem. 3.5], $X$ is globally $F$-regular. It follows from [Proposition 1](#prop:gFr-splinter){reference-type="ref" reference="prop:gFr-splinter"} that $X$ is a splinter and then from [Lemma 1](#lem:generic_fiber_splinter){reference-type="ref" reference="lem:generic_fiber_splinter"} that $X_\eta$ is a splinter, so $X_\eta$ is normal. Given an effective Weil divisor $D_0$ on $X_\eta$, we denote by $D$ its Zariski closure in $X$. Since $(X, \Delta)$ is globally $F$-regular, there exists $e>0$ such that $\mathcal{O}_X \to F_*^e \mathcal{O}_X(\lceil (p^e-1)\Delta \rceil +D)$ splits. The lemma follows by restricting the splitting along $X_\eta \hookrightarrow X$. ◻
Finally, we draw as a consequence of the above that both the splinter property and the global $F$-regular property are invariant under *small birational maps* of normal varieties.
**Definition 1** (Small birational map). A rational map $f\colon X \dashrightarrow Y$ between Noetherian schemes is said to be a *small birational map* if there exist nonempty open subsets $U\subseteq X$ and $V\subseteq Y$ with $\operatorname{codim}_X(X\setminus U) \geq 2$ and $\operatorname{codim}_Y(Y\setminus V)\geq 2$ such that $f$ induces an isomorphism $U \stackrel{\simeq}{\longrightarrow} V$.
**Proposition 1**. *Let $f\colon X \dashrightarrow Y$ be a small birational map between normal schemes of finite type over a field $k$. The following statements hold :*
1. *[\[item:splinter_small_birational_map\]]{#item:splinter_small_birational_map label="item:splinter_small_birational_map"} $X$ is a splinter $\iff$ $Y$ is a splinter.*
2. *[\[item:globF_small_birational_map\]]{#item:globF_small_birational_map label="item:globF_small_birational_map"} Assuming $k$ is of positive characteristic and $F$-finite, $X$ is globally $F$-regular $\iff$ $Y$ is globally $F$-regular.*
*Proof.* Statement (1) follows directly from [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"} and statement (2) from [Lemma 1](#lem:small_birational_map_globF){reference-type="ref" reference="lem:small_birational_map_globF"}. ◻
# Lifting and descending the splinter property {#S:LD-splinter}
## Descending the splinter property {#rmk:splittingcondition}
If $\pi \colon Y \to X$ is a morphism of varieties such that $\mathcal{O}_X \to \pi_* \mathcal{O}_Y$ is an isomorphism, e.g., if $\pi$ is flat proper with geometrically connected and geometrically reduced fibers, or if $\pi\colon Y \to X$ is birational and $X$ is a normal proper variety, it is a formal consequence that the splinter property descends along $\pi$. Indeed, we have the following lemma.
**Lemma 1**. *Let $\pi\colon Y\to X$ be a morphism of Noetherian schemes.*
1. *If $Y$ is a splinter and the map $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$ is split, then $X$ is a splinter. [\[item:splinter_descent\]]{#item:splinter_descent label="item:splinter_descent"}*
2. *[\[item:derived_splinter_descent\]]{#item:derived_splinter_descent label="item:derived_splinter_descent"} If $Y$ is a derived splinter and $\mathcal{O}_X \to \mathrm{R} \pi_*\mathcal{O}_Y$ is split, then $X$ is a derived splinter.*
*Assume further that $X$ is normal and that $Y$ is normal and Nagata, and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$.*
3. *[\[item:glob\_+\_reg_descent\]]{#item:glob_+_reg_descent label="item:glob_+_reg_descent"} If $(Y, \pi^*\Delta)$ is globally $+$-regular and $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$ is split, then $(X, \Delta)$ is globally $+$-regular.*
*Proof.* We only prove (3) since (1) and (2) admit similar proofs. Let $f\colon Z \to X$ be a finite surjective morphism with $Z$ normal. Let $Z' \to Y \times_X Z$ be the normalization of the fiber product, which is finite over $Y$ since $Y$ is Nagata. (This step is not needed for the proofs of (1) and (2), where one simply takes $Z'= Y \times_X Z$.) We obtain a commutative square $$\begin{tikzcd}
Z' \arrow[r, "\pi'"] \arrow[d, "f'"]
& Z\arrow[d, "f"] \\
Y\arrow[r,"\pi"] & X.
\end{tikzcd}$$ Since $(Y, \pi^*\Delta)$ is globally $+$-regular, we have a splitting $$\mathcal{O}_Y\to f_*' \mathcal{O}_{Z'}(\lfloor f'^*\pi^* \Delta \rfloor) \xrightarrow{t}\mathcal{O}_Y.$$ Fixing a splitting $\mathcal{O}_X \to \pi_*\mathcal{O}_Y \xrightarrow{s} \mathcal{O}_X$, we obtain a factorization $$\operatorname{id}_{\mathcal{O}_X}\colon \mathcal{O}_X \to \pi_* \mathcal{O}_Y \to \pi_* f_*' \mathcal{O}_{Z'}(\lfloor f'^*\pi^* \Delta \rfloor) = f_* \pi'_* \mathcal{O}_{Z'}(\lfloor \pi'^* f^* \Delta \rfloor) \xrightarrow{\pi_* t} \pi_* \mathcal{O}_Y \xrightarrow{s} \mathcal{O}_X.$$ Since $\mathcal{O}_X \to f_* \pi'_* \mathcal{O}_{Z'}(\lfloor \pi'^* f^* \Delta \rfloor)= f_* \pi'_* \pi'^*\mathcal{O}_{Z}(\lfloor f^* \Delta \rfloor)$ factors through $\mathcal{O}_X \to f_* \mathcal{O}_Z(\lfloor f^*\Delta \rfloor)$, the lemma follows. ◻
*Remark 1*. Assume $X$ and $Y$ are schemes over a field $k$. If $X\times_k Y$ is a splinter, then so are $X$ and $Y$. Indeed, if $\pi\colon X\times_k Y \to X$ denotes the first projection, then $\pi_*\mathcal{O}_{X\times_k Y} = \mathcal{O}_X \otimes_k H^0(Y,\mathcal{O}_Y)$ and any splitting of the $k$-linear map $k \to H^0(Y,\mathcal{O}_Y), 1 \mapsto 1_Y$ provides a splitting of the natural map $\mathcal{O}_X \to \pi_*\mathcal{O}_{X\times_k Y}$.
## Lifting the splinter property {#SS:lift-splinter}
Recall, e.g., from [@hartshorne_algebraic_geometry Ch. III, Ex. 6.10], that for a finite morphism $\pi \colon Y \to X$ of Noetherian schemes the exceptional inverse image functor is the functor taking quasi-coherent $\mathcal{O}_X$-modules $\mathcal F$ to the quasi-coherent $\mathcal{O}_Y$-modules $\pi^!\mathcal F \,\coloneqq \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\pi_*\mathcal{O}_Y, \mathcal F)$.
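Concretely, in the affine case, if $X = \operatorname{Spec}R$ and $Y = \operatorname{Spec}S$ for a finite ring map $R \to S$ and $M$ is an $R$-module, then $$\pi^!\widetilde M \;=\; \widetilde{\operatorname{Hom}_R(S, M)},$$ where $\operatorname{Hom}_R(S,M)$ is viewed as an $S$-module via $(s\cdot\varphi)(s') = \varphi(ss')$.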
**Lemma 1**. *Let $\pi\colon Y \to X$ be a finite surjective morphism of Noetherian schemes such that either $H^0(Y, \mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$.*
1. *If $X$ is a splinter, then $Y$ is a splinter.[\[item:lifting_splinters\]]{#item:lifting_splinters label="item:lifting_splinters"}*
2. *Assume that $X$ is defined over $\mathbb F_p$. If $X$ is $F$-split and if the map $\mathcal{O}_X \to \pi_* \mathcal{O}_Y$ splits, then $Y$ is $F$-split.*
*Proof.* First assume that $X$ is a splinter. Let $f\colon Z \to Y$ be a finite surjective morphism. The splitting of $\eta_f\colon \mathcal{O}_Y \to f_*\mathcal{O}_Z$ is equivalent to the surjectivity of $$\operatorname{Hom}_{\mathcal{O}_Y}(f_*{\mathcal{O}_Z}, \mathcal{O}_Y) \xrightarrow{-\circ \eta_f} \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \mathcal{O}_Y).$$ The adjunction $\pi_* \dashv \pi^!$ [@hartshorne_algebraic_geometry Ch. III, Ex. 6.10(b)] provides the following commutative diagram $$\begin{tikzcd}
\operatorname{Hom}_{\mathcal{O}_Y}(f_*{\mathcal{O}_Z}, \mathcal{O}_Y) \dar{\cong} \arrow{rr}{-\circ \eta_f} && \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \mathcal{O}_Y) \dar{\cong} \\
\operatorname{Hom}_{\mathcal{O}_Y}(f_*{\mathcal{O}_Z}, \pi^! \mathcal{O}_X) \dar{\cong} \arrow{rr}{-\circ \eta_f} && \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \pi^! \mathcal{O}_X) \dar{\cong} \\
\operatorname{Hom}_{\mathcal{O}_X}(\pi_* f_*{\mathcal{O}_Z},\mathcal{O}_X) \dar{-\circ \eta_{\pi\circ f}} \arrow{rr}{-\circ \pi_* \eta_f} && \operatorname{Hom}_{\mathcal{O}_X}(\pi_* \mathcal{O}_Y, \mathcal{O}_X) \dar{- \circ \eta_\pi} \\
\operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X) \arrow[equal]{rr} && \operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X,\mathcal{O}_X).
\end{tikzcd}$$ Since $X$ is a splinter, the bottom-left vertical arrow $-\circ \eta_{\pi \circ f}$ is surjective. This implies on the one hand that $-\circ \eta_f$ is nonzero, and on the other hand that $-\circ \eta_{\pi}$ is surjective. Assuming that $\operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \mathcal{O}_Y)=H^0(Y, \mathcal{O}_Y)$ is a field, the former yields that $-\circ \eta_f$ is surjective since it is a map of $H^0(Y, \mathcal{O}_Y)$-modules. Assuming that $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$, the latter yields that the composition of the right vertical arrows, which is $H^0(X,\mathcal{O}_X)$-linear, is bijective and hence that $-\circ \eta_f$ is surjective.
In the case where $X$ is assumed to be $F$-split, we argue via the same diagram with $f\colon Z \to Y$ replaced by the Frobenius $F\colon Y \to Y$. If $s\colon \pi_*\mathcal{O}_Y \to \mathcal{O}_X$ is a splitting of the map $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$, then, with a Frobenius splitting of $X$, we obtain a diagram $$\mathcal{O}_X \to \pi_*F_*\mathcal{O}_Y = F_*\pi_* \mathcal{O}_Y \xrightarrow{F_*(s)} F_* \mathcal{O}_X \to \mathcal{O}_X,$$ where the composition is the identity. This proves that the bottom-left arrow $-\circ \eta_{\pi \circ F}$ is surjective. As in the splinter case, we deduce that $-\circ \eta_F$ is surjective, i.e., that $Y$ is $F$-split. ◻
*Remark 1* (The splinter property in positive characteristic lifts along crepant morphisms). A morphism of schemes $\pi\colon Y \to X$ is said to be a *crepant morphism* if it is proper, birational, and is such that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$. Note that by [Lemma 1](#lem:k_and_o_equvalence){reference-type="ref" reference="lem:k_and_o_equvalence"} below, if $X$ is assumed to be Gorenstein, the latter condition is equivalent to $\pi^*\omega_X = \omega_Y$.
Let $\pi\colon Y \to X$ be a crepant morphism of normal $F$-finite schemes over $\mathbb F_p$ with $X$ Gorenstein. Brion and Kumar establish in [@brion_kumar_frobenius_spliting_methods_in_gemoetry_and_representation_theory Lem. 1.3.13] that if $X$ is $F$-split, then $Y$ is $F$-split.
Likewise, up to working with derived splinters instead of splinters, one can relax the finiteness assumption on $\pi$ in [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"}. Precisely, by using the formalism of the exceptional inverse image functor as described in [@stacks-project [Tag 0A9Y](https://stacks.math.columbia.edu/tag/0A9Y)] and the adjunction $\mathrm{R}\pi_* \dashv \pi^!$, the proof of [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} can be adapted to establish the following statement. Let $\pi\colon Y \to X$ be a proper surjective morphism of Noetherian schemes such that either $H^0(Y, \mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$. (These assumptions are satisfied if for instance $\pi$ is a crepant morphism.) If $X$ is a derived splinter, then $Y$ is a derived splinter.
In particular, in view of the fact [@bhatt_derived_splinters_in_positive_characteristic Thm. 1.4] that splinters in positive characteristic agree with derived splinters, we find that if $\pi \colon Y \to X$ is a crepant morphism of Noetherian schemes over $\mathbb F_p$ and if $X$ is a splinter, then $Y$ is a splinter.
For the sake of completeness, we mention that [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} also holds for globally $+$-regular pairs :
**Lemma 1**. *Let $\pi\colon Y \to X$ be a finite surjective morphism of normal Noetherian schemes such that either $H^0(Y, \mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$, and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. Assume that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$. If $(X,\Delta)$ is globally $+$-regular, then $(Y, \pi^*\Delta)$ is globally $+$-regular.*
*Proof.* Let $f\colon Z \to Y$ be a finite surjective morphism with $Z$ normal. We have to show that the map $\eta_{f,\pi^*\Delta} \colon \mathcal{O}_Y \to f_*\mathcal{O}_Z(\lfloor f^* \pi^*\Delta \rfloor)$ splits, or equivalently, that $$\operatorname{Hom}_{\mathcal{O}_Y}(f_*{\mathcal{O}_Z}(\lfloor f^* \pi^*\Delta \rfloor), \mathcal{O}_Y) \xrightarrow{-\circ \eta_{f,\pi^*\Delta}} \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \mathcal{O}_Y)$$ is surjective. Note that the map $\eta_{\pi \circ f, \Delta}\colon \mathcal{O}_X \to \pi_* f_*\mathcal{O}_Z(\lfloor f^* \pi^*\Delta \rfloor)$ factors through $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$. Hence, the following diagram commutes : $$\begin{tikzcd}
\operatorname{Hom}_{\mathcal{O}_Y}(f_*{\mathcal{O}_Z}(\lfloor f^* \pi^*\Delta \rfloor), \mathcal{O}_Y) \dar{\cong} \arrow{rr}{-\circ \eta_{f,\pi^*\Delta}} && \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \mathcal{O}_Y) \dar{\cong} \\
\operatorname{Hom}_{\mathcal{O}_Y}(f_*{\mathcal{O}_Z}(\lfloor f^* \pi^*\Delta \rfloor), \pi^! \mathcal{O}_X) \dar{\cong} \arrow{rr}{-\circ \eta_{f,\pi^*\Delta}} && \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \pi^! \mathcal{O}_X) \dar{\cong} \\
\operatorname{Hom}_{\mathcal{O}_X}(\pi_* f_*{\mathcal{O}_Z}(\lfloor f^* \pi^*\Delta \rfloor),\mathcal{O}_X) \dar{-\circ \eta_{\pi\circ f, \Delta}} \arrow{rr}{-\circ \pi_* \eta_{f,\pi^*\Delta}} && \operatorname{Hom}_{\mathcal{O}_X}(\pi_* \mathcal{O}_Y, \mathcal{O}_X) \dar{- \circ \eta_\pi} \\
\operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X) \arrow[equal]{rr} && \operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X,\mathcal{O}_X).
\end{tikzcd}$$ We conclude as in [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"}. ◻
## The splinter property and base change of field {#SS:splinters-base change}
The following proposition establishes the invariance of the splinter property for proper schemes over a field under algebraic base change. We refer to [Remark 1](#rmk:bc-splinter){reference-type="ref" reference="rmk:bc-splinter"} for base change along any field extension.
**Proposition 1**. *Let $X$ be a proper scheme over a field $k$ such that $H^0(X, \mathcal{O}_X)$ is a field, e.g., $X$ is connected and reduced. Then, for any algebraic field extension $L$ of $H^0(X, \mathcal{O}_X)$, the scheme $X_L \coloneqq X\times_{H^0(X,\mathcal{O}_X)} L$ is a splinter if and only if $X$ is a splinter.*
*Proof.* The "only if" part of the proposition follows from [Lemma 1](#lem:splinter_descent){reference-type="ref" reference="lem:splinter_descent"}. Let $K\coloneqq H^0(X, \mathcal{O}_X)$. First assume $L$ is a finite extension of $K$ and let $h \colon \operatorname{Spec}L \to \operatorname{Spec}K$. Then $$h^!\mathcal{O}_{\operatorname{Spec}K} =\operatorname{Hom}_K(L, K) \cong L = \mathcal{O}_{\operatorname{Spec}L}$$ as $L$-vector spaces. By base change for the exceptional inverse image [@stacks-project [Tag 0E9U](https://stacks.math.columbia.edu/tag/0E9U)], there exists an isomorphism $\pi^!\mathcal{O}_X \cong \mathcal{O}_{X_L}$. We conclude by [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} that $X_L$ is a splinter, since $$H^0(X_L, \mathcal{O}_{X_L})=H^0(X, \mathcal{O}_X) \times_K L=L$$ by flat base change. Now assume $K \to L$ is any algebraic field extension and take a finite cover $f \colon Y \to X_L$. Since $f$ is defined by finitely many equations and $L$ is algebraic over $K$, we can find a finite cover $f' \colon Y' \to X_{L'}$ such that $K \subseteq L' \subseteq L$ is an intermediate extension, finite over $K$, and $f=f'\times_{L'}L$ is the base change of $f'$. By the previous argument $X_{L'}$ is a splinter, thus $\mathcal{O}_{X_{L'}} \to f'_* \mathcal{O}_{Y'}$ admits a section $s\colon f'_*\mathcal{O}_{Y'} \to \mathcal{O}_{X_{L'}}$. Now pulling back to $X_L$ and using flat base change, we obtain the desired section of $\mathcal{O}_{X_L} \to f_*\mathcal{O}_Y$. ◻
**Corollary 1**. *Let $X$ be a connected proper scheme over a field $k$ such that $H^0(X,\mathcal{O}_X)$ is a separable extension of $k$. Then, for any algebraic field extension $K$ of $k$, the scheme $X_K \coloneqq X\times_k K$ is a splinter if and only if $X$ is a splinter.*
*Proof.* The "only if" part of the corollary follows from [Lemma 1](#lem:splinter_descent){reference-type="ref" reference="lem:splinter_descent"}. Let $\bar k$ be an algebraic closure of $k$. By the assumption on $H^0(X,\mathcal{O}_X)$, $X_{\bar k} \coloneqq X\times_k \bar k$ is isomorphic to the disjoint union of $\operatorname{dim}_k H^0(X,\mathcal{O}_X)$ copies of $X \times_{H^0(X,\mathcal{O}_X)} \bar k$. By the "if" part of [Proposition 1](#lem:splinter_base change){reference-type="ref" reference="lem:splinter_base change"}, if $X$ is a splinter, then $X_{\bar k}$ is a splinter. By [Lemma 1](#lem:splinter_descent){reference-type="ref" reference="lem:splinter_descent"}, we get that $X_K$ is a splinter. ◻
**Corollary 1**. *Let $X$ be a connected proper scheme over a field $k$. If $X$ is a splinter, then $H^0(X, \mathcal{O}_X)$ is a field, and $X$, considered as a scheme over $\operatorname{Spec}H^0(X, \mathcal{O}_X)$, is geometrically normal.*
*Proof.* By [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"}, a splinter is normal. In particular, $H^0(X, \mathcal{O}_X)$ is a field. The corollary then follows from [Proposition 1](#lem:splinter_base change){reference-type="ref" reference="lem:splinter_base change"}. ◻
*Remark 1* (Algebraic base change for globally $+$-regular pairs). Let $X$ be a connected normal proper scheme over a field $k$ such that $H^0(X, \mathcal{O}_X)$ is a field, and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. Let $L$ be an algebraic field extension of $H^0(X, \mathcal{O}_X)$. The arguments of the proof of [Proposition 1](#lem:splinter_base change){reference-type="ref" reference="lem:splinter_base change"} show that if $(X, \Delta)$ is globally $+$-regular, then $X_L \coloneqq X\times_{H^0(X,\mathcal{O}_X)} L$ is normal and the pair $(X_L, \Delta_L)$ is globally $+$-regular.
Assume in addition that $H^0(X,\mathcal{O}_X)$ is a separable extension of $k$, and let $K$ be any algebraic extension of $k$. As in [Corollary 1](#cor:splinter_bc){reference-type="ref" reference="cor:splinter_bc"}, we have that if $(X, \Delta)$ is globally $+$-regular, then $X_K \coloneqq X\times_k K$ is normal and the pair $(X_K, \Delta_K)$ is globally $+$-regular.
# Lifting and descending global $F$-regularity {#S:LD-globF}
The first aim of this section is to show that the results of [5](#S:LD-splinter){reference-type="ref" reference="S:LD-splinter"} regarding splinters, notably [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"}, extend to the globally $F$-regular setting ; see [Lemma 1](#lem:gFr_ascend_pi_!){reference-type="ref" reference="lem:gFr_ascend_pi_!"}. The second aim is to show how the criterion of Schwede--Smith (recalled in [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"}) can be used to establish further results in the global $F$-regular setting ; for instance, we show in [Proposition 1](#prop:product-gFr){reference-type="ref" reference="prop:product-gFr"} that global $F$-regularity is stable under product and, combined with [Lemma 1](#lem:gFr_ascend_pi_!){reference-type="ref" reference="lem:gFr_ascend_pi_!"}, use it to recover in [Proposition 1](#lem:gFr_base change){reference-type="ref" reference="lem:gFr_base change"} a result of [@gongyo_li_patakfalvi_zsolt_schwede_tanaka_zong_on_rational_connectedness_of_globally_f_regular_threefolds] stating that global $F$-regularity for normal proper schemes over an $F$-finite field is stable under base change of fields. Except for [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"}, which is due to Schwede--Smith, the results of this section will not be used in the rest of the paper.
## A criterion for global $F$-regularity
The following criterion of Schwede--Smith makes it possible in practice to reduce checking that a variety is globally $F$-regular to checking that $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits for one specific divisor $D$.
**Theorem 1** ([@schwede_smith_globally_f_regular_and_log_Fano_varieties Thm. 3.9]). *Let $X$ be a normal variety over an $F$-finite field of positive characteristic. Then $X$ is globally $F$-regular if and only if there exists an effective Weil divisor $D$ on $X$ such that*
1. *there exists an $e>0$ such that the natural map $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits, and*
2. *the variety $X\setminus D$ is globally $F$-regular.*
*Remark 1*. Suppose $X$ is a normal projective variety over an $F$-finite field of positive characteristic. If $D$ is an ample divisor on $X$, the variety $X\setminus D$ is affine and therefore globally $F$-regular if and only if its local rings are strongly $F$-regular. Since regular local rings are strongly $F$-regular, a smooth projective variety $X$ over an $F$-finite field of positive characteristic is globally $F$-regular if and only if $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits for some ample divisor $D$.
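To illustrate the remark, let $X = \mathbb{P}^n$ over a perfect field of characteristic $p$ and let $D = \{x_0 = 0\}$ be a coordinate hyperplane, which is ample. The complement $X \setminus D \cong \mathbb{A}^n$ is regular and affine, hence strongly $F$-regular, and a splitting of $\mathcal{O}_X \to F_*\mathcal{O}_X(D)$ exists already for $e=1$ : under the standard identification $$\operatorname{Hom}_{\mathcal{O}_X}\big(F_*\mathcal{O}_X(1), \mathcal{O}_X\big) \;\cong\; H^0\big(X, \mathcal{O}_X((n+1)(p-1)-1)\big),$$ one can check that the form $x_0^{\,p-2}(x_1\cdots x_n)^{p-1}$ provides such a splitting up to a nonzero scalar. [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} thus recovers the well-known fact that projective space is globally $F$-regular.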
## Descending global $F$-regularity
The following [Lemma 1](#lem:gFr_descent){reference-type="ref" reference="lem:gFr_descent"}, which is due to Schwede--Smith, holds in particular when the map $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$ is an isomorphism, e.g., when $\pi\colon Y \to X$ is flat proper with geometrically connected and geometrically reduced fibers, or when $\pi \colon Y \to X$ is birational and $X$ is a normal proper variety.
**Lemma 1** ([@schwede_smith_globally_f_regular_and_log_Fano_varieties Cor. 6.4]). *Let $\pi\colon Y\to X$ be a morphism of varieties over an $F$-finite field of positive characteristic. If $Y$ is normal globally $F$-regular and if the map $\mathcal{O}_X \to \pi_*\mathcal{O}_Y$ is split, then $X$ is normal globally $F$-regular.*
*Proof.* By [Proposition 1](#prop:gFr-splinter){reference-type="ref" reference="prop:gFr-splinter"}, $Y$ is a splinter and, by [Lemma 1](#lem:splinter_descent){reference-type="ref" reference="lem:splinter_descent"}, $X$ is a splinter. Hence, by [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"}, $X$ is normal. We can now apply [@schwede_smith_globally_f_regular_and_log_Fano_varieties Cor. 6.4]. ◻
## Lifting global $F$-regularity
We have the following analogue of [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} :
**Lemma 1**. *Let $\pi\colon Y \to X$ be a finite surjective morphism of separated schemes of finite type over an $F$-finite field $k$ of characteristic $p>0$ such that either $H^0(Y, \mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$. If $X$ is a normal globally $F$-regular variety, then $Y$ is a normal globally $F$-regular variety.*
*Proof.* By [Proposition 1](#prop:gFr-splinter){reference-type="ref" reference="prop:gFr-splinter"}, $X$ is a splinter, and it follows from [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} that $Y$ is a splinter and hence is normal. By normality of $X$, the complement of $X_{\mathrm{reg}}$ has codimension at least 2, and by finiteness of $\pi$, the complement of $\pi^{-1}(X_{\mathrm{reg}})$ in $Y$ has codimension at least 2. Therefore, by [Lemma 1](#lem:small_birational_map_globF){reference-type="ref" reference="lem:small_birational_map_globF"}, $Y$ is globally $F$-regular if and only if $\pi^{-1}(X_{\mathrm{reg}}) \subseteq Y$ is globally $F$-regular. Replacing $X$ by $X_{\mathrm{reg}}$ we can and do assume that $X$ is regular. Since $Y$ is normal, $Y_{\mathrm{sing}}$ is a proper closed subset of $Y$ and since $\pi$ is finite, $\pi(Y_{\mathrm{sing}}) \subseteq X$ is also a proper closed subset. Let $U \subseteq X$ be an affine open subset in the complement of $\pi(Y_{\mathrm{sing}})$. By [@stacks-project [Tag 0BCU](https://stacks.math.columbia.edu/tag/0BCU)], $D\coloneqq X \setminus U$ has codimension $1$, so defines a Weil divisor on $X$. Since $X$ is regular, $D$ is further Cartier and $\pi^* \mathcal{O}_X(D)$ is a line bundle. Note that the pullback $\pi^*D = \pi^{-1}(D)$ of $D$ is a Cartier divisor [@stacks-project [Tag 02OO](https://stacks.math.columbia.edu/tag/02OO)]. Let $\sigma_D \colon \mathcal{O}_X\to \mathcal{O}_X(D)$ be the global section defined by the divisor $D$. Then the pullback $\pi^* \sigma_D$ defines a global section of $\pi^*\mathcal{O}_X(D)$ whose zero locus is precisely $\pi^{-1}(D)$. Thus $\pi^*\sigma_D$ is the global section of $\pi^*\mathcal{O}_X(D)=\mathcal{O}_Y(\pi^*D)$ defined by the divisor $\pi^*D$ [@stacks-project [Tag 0C4S](https://stacks.math.columbia.edu/tag/0C4S)].
Now $\pi^{-1}(U) = Y \setminus \pi^*D$ is an affine open subset contained in the regular locus of $Y$, so is strongly $F$-regular. By [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"}, it is enough to show that there exists an $e>0$ such that the map $\mathcal{O}_Y \to F_*^e\mathcal{O}_Y(\pi^* D)$ splits. Since $X$ is globally $F$-regular, there exists an integer $e>0$ and a splitting $s$ such that $$\operatorname{id}_{\mathcal{O}_X} \colon \mathcal{O}_X \to F_*^e\mathcal{O}_X \xrightarrow{F_*^e(\sigma_D)} F_*^e \mathcal{O}_X(D) \xrightarrow{s} \mathcal{O}_X.$$ Since $X$ is a splinter, we have a splitting $$\operatorname{id}_{\mathcal{O}_X} \colon \mathcal{O}_X \xrightarrow{\eta} \pi_* \mathcal{O}_Y \xrightarrow{t} \mathcal{O}_X.$$ Here $\eta \colon \operatorname{id}_{\operatorname{Coh}X} \to \pi_* \pi^*$ is the unit of the adjunction $\pi^* \dashv \pi_*$. Note that by the projection formula, $t$ induces a splitting of $\eta \colon \mathcal{O}_X(D) \to \pi_* \pi^* \mathcal{O}_X(D)$ which by abuse we still denote by $t$. The commutative diagram $$\begin{tikzcd}
\mathcal{O}_X \arrow{rr}{\eta} \dar{f^e} && \pi_* \pi^*\mathcal{O}_X \rar[equal] \dar{f^e} & \pi_* \mathcal{O}_Y \dar{f^e} \\
F_*^e \mathcal{O}_X \arrow{rr}{F_*^e(\eta)} \dar{F_*^e(\sigma_D)} && F_*^e \pi_* \pi^*\mathcal{O}_X \rar[equal] \dar{F_*^e \pi_* \pi^*(\sigma_D)} & \pi_* F_*^e \mathcal{O}_Y \dar{\pi_* F_*^e(\pi^*\sigma_D)}\\
F_*^e \mathcal{O}_X(D) \arrow{rr}{F_*^e(\eta)} \arrow[bend left=50]{uu}{s} && F_*^e \pi_* \pi^*\mathcal{O}_X(D) \rar[equal] \arrow[bend left=30]{ll}{F_*^e(t)} & \pi_* F_*^e \mathcal{O}_Y (\pi^*D)
\end{tikzcd}$$ shows that the map $$\label{eq:splitting}
\pi_* F_*^e(\pi^*\sigma_D) \circ f^e \circ \eta\colon \mathcal{O}_X \to \pi_* F_*^e \mathcal{O}_Y (\pi^*D)$$ splits. Here, $f^e \colon \mathcal{O}_X \to F^e_*\mathcal{O}_X$ denotes the $p^e$-th power map on local sections.
As in [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} we consider the diagram $$\begin{tikzcd}
\operatorname{Hom}_{\mathcal{O}_Y}(F_*^e \mathcal{O}_Y(\pi^*D) , \mathcal{O}_Y) \dar{\cong} \arrow{rrr}{-\circ \pi^*\sigma_D} &&& \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \mathcal{O}_Y) \dar{\cong} \\
\operatorname{Hom}_{\mathcal{O}_Y}(F_*^e \mathcal{O}_Y(\pi^*D), \pi^! \mathcal{O}_X) \dar{\cong} \arrow{rrr}{-\circ \pi^*\sigma_D} &&& \operatorname{Hom}_{\mathcal{O}_Y}(\mathcal{O}_Y, \pi^! \mathcal{O}_X) \dar{\cong} \\
\operatorname{Hom}_{\mathcal{O}_X}(\pi_* F_*^e \mathcal{O}_Y(\pi^*D),\mathcal{O}_X) \dar{-\circ \pi_* F_*^e(\pi^*\sigma_D) \circ f^e \circ \eta} \arrow{rrr}{-\circ \pi_* F_*^e(\pi^*\sigma_D) \circ f^e} &&& \operatorname{Hom}_{\mathcal{O}_X}(\pi_* \mathcal{O}_Y, \mathcal{O}_X) \dar{- \circ \eta_\pi} \\
\operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X, \mathcal{O}_X) \arrow[equal]{rrr} &&& \operatorname{Hom}_{\mathcal{O}_X}(\mathcal{O}_X,\mathcal{O}_X).
\end{tikzcd}$$ Since the splitting of [\[eq:splitting\]](#eq:splitting){reference-type="ref" reference="eq:splitting"} is equivalent to the left-vertical map $-\circ \pi_* F_*^e(\pi^*\sigma_D) \circ f^e \circ \eta$ being surjective, we conclude, as in the proof of [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"}, that the map $-\circ \pi^*\sigma_D$ is surjective, or equivalently, that $\mathcal{O}_Y \to F_*^e \mathcal{O}_Y (\pi^*D)$ splits. ◻
*Remark 1*. By using the version of [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} for pairs, i.e., the original [@schwede_smith_globally_f_regular_and_log_Fano_varieties Thm. 3.9], we leave it to the reader to show the following version of [Lemma 1](#lem:gFr_ascend_pi_!){reference-type="ref" reference="lem:gFr_ascend_pi_!"} for pairs. Let $\pi\colon Y \to X$ be a finite surjective morphism of separated schemes of finite type over an $F$-finite field $k$ of characteristic $p>0$, and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. Assume that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$, and that either $H^0(Y, \mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. If $X$ is normal and if $(X, \Delta)$ is globally $F$-regular, then $Y$ is normal and $(Y,\pi^*\Delta)$ is globally $F$-regular.
## Products of globally $F$-regular varieties
As far as we know, it is unknown whether the splinter property is stable under product. On the other hand, global $F$-regularity for products is more tractable since the Frobenius of a product is the product of the Frobenii and since one may use the criterion of [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} to check global $F$-regularity for one specific divisor. The following [Proposition 1](#prop:product-gFr){reference-type="ref" reference="prop:product-gFr"} generalizes [@hashimoto_surjectivity_of_multiplication_and_f_regularity_of_multigraded_rings Thm. 5.2], where the case of products of projective globally $F$-regular varieties was dealt with by taking affine cones.
**Proposition 1**. *Let $X$ and $Y$ be normal varieties over a perfect field $k$ of positive characteristic. Then, $X$ and $Y$ are globally $F$-regular if and only if their product $X \times_k Y$ is globally $F$-regular.*
*Proof.* Denote by $\pi_X$ and $\pi_Y$ the natural projections from $X\times_k Y$ to $X$ and $Y$, resp. Since $X$ and $Y$ are normal and $k$ is perfect, their product $X\times_k Y$ is normal, see [@stacks-project [Tag 038L](https://stacks.math.columbia.edu/tag/038L)]. Moreover, $X \setminus X_{\mathrm{reg}}$ and $Y \setminus Y_{\mathrm{reg}}$ both have codimension $\geq 2$ and thus $(X \times_k Y) \setminus (X_{\mathrm{reg}} \times_k Y_{\mathrm{reg}})$ has codimension $\geq 2$. Therefore, by [Lemma 1](#lem:small_birational_map_globF){reference-type="ref" reference="lem:small_birational_map_globF"}, we can assume without loss of generality that $X$ and $Y$ are smooth over $k$.
Assume first that $X\times_k Y$ is globally $F$-regular. As in [Remark 1](#rmk:product){reference-type="ref" reference="rmk:product"}, since $(\pi_X)_*\mathcal{O}_{X\times_k Y} = \mathcal{O}_X \otimes_k H^0(Y,\mathcal{O}_Y)$, any splitting of the $k$-linear map $k \to H^0(Y,\mathcal{O}_Y), 1 \mapsto 1_Y$ provides a splitting of the natural map $\mathcal{O}_X \to (\pi_X)_*\mathcal{O}_{X\times_k Y}$. By [Lemma 1](#lem:gFr_descent){reference-type="ref" reference="lem:gFr_descent"}, it follows that $X$ is globally $F$-regular, and similarly that $Y$ is globally $F$-regular.
For the converse, we first note that there exist effective Cartier divisors $D$ on $X$ and $E$ on $Y$ such that $X\setminus D$ and $Y \setminus E$ are affine. Indeed, $X$ and $Y$ admit dense affine open subsets and, since $X$ and $Y$ are normal, the complement of a dense affine open subset is a divisor by [@stacks-project [Tag 0BCU](https://stacks.math.columbia.edu/tag/0BCU)]. The divisors obtained this way are *a priori* Weil divisors, but since $X$ and $Y$ are smooth, they are actually Cartier divisors. Since $k$ is assumed to be perfect, $$(X \setminus D) \times_k (Y \setminus E) = (X\times_k Y) \setminus (\pi_X^*D \cup \pi_Y^*E)$$ is smooth and affine, so in particular strongly $F$-regular. By [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} it is enough to show that the map $$\mathcal{O}_{X\times Y} \to F_*^e \mathcal{O}_{X\times Y} \to F_*^e \mathcal{O}_{X\times Y}(\pi_X^*D + \pi_Y^*E)$$ splits for some $e>0$. Since $X$ is globally $F$-regular, we can find an $e>0$ such that $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits and similarly for $Y$. As remarked in [@smith_globally_f_regular_varieties_applications_to_vanishing_theorems_for_quotients_of_fano_varieties p. 558], if $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits for some $e>0$, then $\mathcal{O}_X \to F_*^{e'} \mathcal{O}_X(D)$ splits for all $e'\geq e$. Thus, there exists an integer $e>0$ such that both $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ and $\mathcal{O}_Y \to F_*^e \mathcal{O}_Y(E)$ split. The morphism $$\sigma_{\pi_X^*D + \pi_Y^* E} \colon \mathcal{O}_{X\times Y} \to \mathcal{O}_{X\times Y}(\pi_X^*D + \pi_Y^* E)$$ can be identified with the tensor product $\pi_X^*\sigma_D \otimes \pi_Y^*\sigma_E$, where $$\sigma_D\colon \mathcal{O}_X \to \mathcal{O}_X(D)\quad \text{and} \quad \sigma_E \colon \mathcal{O}_Y \to \mathcal{O}_Y(E)$$ denote the corresponding morphisms on $X$ and $Y$. Pushing forward along Frobenius, we obtain $$F^e_*\sigma_{\pi_X^*D + \pi_Y^* E} = \pi_X^* F^e_*\sigma_D \otimes \pi_Y^* F^e_*\sigma_E.$$ We conclude, by taking the tensor product of the sections of $\mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ and $\mathcal{O}_Y \to F_*^e \mathcal{O}_Y(E)$, that $$\mathcal{O}_{X\times Y} \to F_*^e \mathcal{O}_{X\times Y}(\pi_X^*D + \pi_Y^* E)$$ splits. Hence $X\times_k Y$ is globally $F$-regular. ◻
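For instance, combined with [Proposition 1](#prop:gFr-splinter){reference-type="ref" reference="prop:gFr-splinter"}, [Proposition 1](#prop:product-gFr){reference-type="ref" reference="prop:product-gFr"} shows that finite products of normal globally $F$-regular varieties over a perfect field, such as products of projective spaces $\mathbb{P}^{n_1}\times_k \cdots \times_k \mathbb{P}^{n_r}$, are globally $F$-regular, hence splinters, even though, as noted above, it is not known whether the splinter property itself is stable under products.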
*Remark 1*. By using the version of [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} for pairs, i.e., the original [@schwede_smith_globally_f_regular_and_log_Fano_varieties Thm. 3.9], we leave it to the reader to show the following version of [Proposition 1](#prop:product-gFr){reference-type="ref" reference="prop:product-gFr"} for pairs. Let $X$ and $Y$ be normal varieties over a perfect field $k$ of positive characteristic, and denote by $\pi_X\colon X\times_k Y \to X$ and $\pi_Y\colon X\times_k Y \to Y$ the natural projections. Let $\Delta_X$ and $\Delta_Y$ be effective $\mathbb Q$-Weil divisors on $X$ and $Y$, resp. Then, $(X,\Delta_X)$ and $(Y, \Delta_Y)$ are globally $F$-regular if and only if $(X\times_k Y, \pi_X^*\Delta_X + \pi_Y^*\Delta_Y)$ is globally $F$-regular.
## Global $F$-regularity and base change of field
By using the lifting [Lemma 1](#lem:gFr_ascend_pi_!){reference-type="ref" reference="lem:gFr_ascend_pi_!"}, it is possible to show that the base change results for splinters along algebraic field extensions of [5.3](#SS:splinters-base change){reference-type="ref" reference="SS:splinters-base change"} also hold for normal globally $F$-regular varieties. However, by using the criterion of Schwede--Smith [@schwede_smith_globally_f_regular_and_log_Fano_varieties Thm. 3.9], Gongyo--Li--Patakfalvi--Schwede--Tanaka--Zong [@gongyo_li_patakfalvi_zsolt_schwede_tanaka_zong_on_rational_connectedness_of_globally_f_regular_threefolds] have established a more general base change result that deals with not necessarily algebraic extensions.
**Proposition 1** (Gongyo--Li--Patakfalvi--Schwede--Tanaka--Zong [@gongyo_li_patakfalvi_zsolt_schwede_tanaka_zong_on_rational_connectedness_of_globally_f_regular_threefolds]). *Let $X$ be a normal proper scheme over an $F$-finite field $k$ and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. Assume that $(X,\Delta)$ is globally $F$-regular. Then, for any $F$-finite field extension $L$ of $k$ with a morphism $\operatorname{Spec}L \to \operatorname{Spec}H^0(X,\mathcal{O}_X)$, the scheme $X_L \coloneqq X\times_{H^0(X,\mathcal{O}_X)} L$ is normal and the pair $(X_L,\Delta_L)$ is globally $F$-regular.*
*Proof.* Let us provide an alternate proof. We may and do assume $X$ is connected. Since any divisor on $X_L$ is defined over a finitely generated field extension of the field $H^0(X,\mathcal{O}_X)$, we may assume that $L$ is a simple extension of $H^0(X,\mathcal{O}_X)$. If $L$ is algebraic, we can apply [Lemma 1](#lem:gFr_ascend_pi_!){reference-type="ref" reference="lem:gFr_ascend_pi_!"}, while if $L$ is purely transcendental, we can apply [Proposition 1](#prop:product-gFr){reference-type="ref" reference="prop:product-gFr"} (or [Remark 1](#rmk:prod-pairs){reference-type="ref" reference="rmk:prod-pairs"} in case $\Delta \neq 0$) and [Lemma 1](#lem:generic_fiber_globF){reference-type="ref" reference="lem:generic_fiber_globF"} to $X\times_{H^0(X,\mathcal{O}_X)} \mathbb A^1 \to \mathbb A^1$. Note that in this situation it is not necessary to assume that $H^0(X, \mathcal{O}_X)$ is perfect as $X_{\mathrm{reg}} \times_{H^0(X, \mathcal{O}_X)} \mathbb A^1$ is regular. ◻
*Remark 1*. Our proof of [Proposition 1](#lem:gFr_base change){reference-type="ref" reference="lem:gFr_base change"} shows that one could extend [Proposition 1](#lem:splinter_base change){reference-type="ref" reference="lem:splinter_base change"} concerned with base change of splinters along algebraic field extensions to arbitrary field extensions if one could establish that the splinter property is stable under taking product with $\mathbb A^1$.
**Corollary 1** ([@gongyo_li_patakfalvi_zsolt_schwede_tanaka_zong_on_rational_connectedness_of_globally_f_regular_threefolds Cor. 2.8]). *Let $X$ be a connected normal proper scheme over an $F$-finite field $k$ and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. Assume that $H^0(X,\mathcal{O}_X)$ is a separable extension of $k$ and that $(X,\Delta)$ is globally $F$-regular. Then, for any $F$-finite field extension $K$ of $k$, the scheme $X_K \coloneqq X\times_k K$ is normal and the pair $(X_K,\Delta_K)$ is globally $F$-regular.*
# Finite torsors over splinters {#S:finite_torsors_over_splitners}
We say that a morphism $\pi\colon Y\to X$ of schemes over a scheme $S$ is a *finite torsor* if it is a torsor under a finite group scheme $G$ over $S$. The aim of this section is to prove Theorem [\[item:thm:triv_fund_grp\]](#item:thm:triv_fund_grp){reference-type="ref" reference="item:thm:triv_fund_grp"}. First, in order to apply our lifting [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} to finite torsors over splinters, we have :
**Lemma 1**. *Let $\pi\colon Y \to X$ be a morphism of Noetherian schemes over a Noetherian ring $R$. Assume that $\pi$ satisfies either of the following conditions :*
1. *[\[item:finite_etale\]]{#item:finite_etale label="item:finite_etale"} $\pi$ is finite étale.*
2. *[\[item:finite_torsor_under_group_scheme\]]{#item:finite_torsor_under_group_scheme label="item:finite_torsor_under_group_scheme"} $\pi$ is a finite torsor, and $\operatorname{Pic}(\operatorname{Spec}R) = 0$, e.g. $R$ is a local ring or a UFD.*
*Then $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$.*
*Proof.* Case (1) is covered by [@stacks-project [Tag 0FWI](https://stacks.math.columbia.edu/tag/0FWI)]. Concerning case (2), this is claimed in [@bombieri_mumford_enqiques_classification_of_surfaces_in_char_p_iii p. 222] in the special case where $R$ is a field, and we provide a proof here. For finite morphisms, the exceptional inverse image functor exists at the level of coherent sheaves and we have $\pi_*\pi^! \mathcal{O}_X \cong \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\pi_*\mathcal{O}_Y, \mathcal{O}_X)$ ; see [@stacks-project [Tag 0AU3](https://stacks.math.columbia.edu/tag/0AU3)]. Thus to show that $\pi^!\mathcal{O}_X \cong \mathcal{O}_Y$, we must produce an isomorphism of $\pi_*\mathcal{O}_Y$-modules $$\pi_*\mathcal{O}_Y \xrightarrow{\cong} \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\pi_*\mathcal{O}_Y, \mathcal{O}_X),$$ or equivalently produce an $\mathcal{O}_X$-linear map $\mathrm{Tr}_{Y/X} \colon \pi_*\mathcal{O}_Y \to \mathcal{O}_X$ such that the symmetric bilinear form $\mathrm{Tr}_{Y/X}(\alpha \cdot \beta)$ on the locally free sheaf $\pi_*\mathcal{O}_Y$ with values in $\mathcal{O}_X$ is nonsingular. Such a map is provided for finite $G$-torsors $Y\to X$ over an algebraically closed field by [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities Thm. 3.9]. (Note from e.g. [@brion_some_structure_theorems_for_algebraic_groups Prop. 2.6.4 & 2.6.5$(i)$] that any finite $G$-torsor is a finite $G$-quotient in the sense of [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities Rmk. 2.3].)
In the general case, where $R$ is a Noetherian ring with trivial Picard group, let $G$ be a finite group scheme over $R$. Since $G$ is flat over $R$, $H\coloneqq H^0(G, \mathcal{O}_G)$ is a finitely generated projective Hopf algebra with antipode. The dual Hopf algebra $H^\vee = \operatorname{Hom}_R(H, R)$ is also a finitely generated projective Hopf algebra with antipode. Since $\operatorname{Pic}(\operatorname{Spec}R)=0$, $H^\vee$ admits the additional structure of a Frobenius algebra [@pareigis_when_hopf_algebras_are_frobenius_algebras Thm. 1]. By [@pareigis_when_hopf_algebras_are_frobenius_algebras Thm. 3 & Discussion on p. 596] the $R$-submodule $\int_{H^\vee}^l \subseteq H^\vee$ of left integrals is freely generated by a nonsingular left integral $\mathrm{Tr}_G \in H^\vee$. If $\pi \colon Y \to X$ is a $G$-torsor over $R$, one constructs by pulling back $\mathrm{Tr}_G$ along $X \to \operatorname{Spec}R$ as in [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities §3.1, p. 12] an $\mathcal{O}_X$-linear map $\mathrm{Tr}_{Y/X} \colon \pi_*\mathcal{O}_Y \to \mathcal{O}_X$. Since $\mathrm{Tr}_G$ is nonsingular, arguing as in [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities Thm. 3.9] shows that the bilinear form $(\alpha, \beta) \mapsto \mathrm{Tr}_{Y/X}(\alpha\cdot \beta)$ is nonsingular. ◻
*Remark 1*. Recall, e.g. from [@stacks-project [Tag 0B6U](https://stacks.math.columbia.edu/tag/0B6U)], that for any flat morphism $\pi \colon Y \to X$ of separated schemes and of finite type over a Noetherian ring $R$ admitting a dualizing complex we have the identity $\pi^!\mathcal{O}_X \otimes \mathrm{L} \pi^*\omega_X^\bullet \xrightarrow{\cong} \pi^!\omega_X^\bullet \eqqcolon \omega_Y^\bullet$ for any choice of dualizing complex $\omega_X^\bullet$ on $X$. Hence if $\pi \colon Y \to X$ is a finite torsor with $X$ separated of finite type over $R$, then $\pi^*\omega_X^\bullet \cong \omega_Y^\bullet$. Since a scheme admitting a dualizing complex is Gorenstein if and only if it admits an invertible dualizing complex, we observe that if $X$ is Gorenstein, then $Y$ is Gorenstein. Likewise, since a scheme admitting a dualizing complex is Cohen--Macaulay if and only if it admits a dualizing complex that is the shift of a sheaf, we observe that if $X$ is Cohen--Macaulay, then $Y$ is Cohen--Macaulay.
We have the following lemma from [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char] :
**Lemma 1** ([@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Lem. 6.6]). *Let $X \to \operatorname{Spec}R$ be a normal Noetherian Nagata scheme over a ring $R$. Let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. Then the following are equivalent :*
1. *The pair $(X , \Delta)$ is globally $+$-regular.*
2. *For each closed point $z \in \operatorname{Spec}R$ the base change to the localization $(X_{R_z}, \Delta_{R_z})$ is globally $+$-regular.*
*Proof.* Our assumptions are less restrictive than the setup of [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char §6], but the proof of [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Lem. 6.6] works as we outline below. Working on each connected component of $X$ separately, we can and do assume that $X$ is integral. If $f\colon Y \to X$ is a finite cover with $Y$ normal, we have to show that the evaluation-at-$1$ map $$\operatorname{Hom}_{\mathcal{O}_X}(f_* \mathcal{O}_Y(\lfloor f^* \Delta \rfloor), \mathcal{O}_X)\to H^0(X, \mathcal{O}_X)$$ is surjective. As argued in the proof of [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Lem. 6.6], this is equivalent to the surjectivity of the evaluation-at-$1$ map $$\operatorname{Hom}_{\mathcal{O}_{X_{R_z}}}(f_* \mathcal{O}_Y(\lfloor f^* \Delta\vert_{X_{R_z}} \rfloor), \mathcal{O}_{X_{R_z}} )\to H^0({X_{R_z}}, \mathcal{O}_{X_{R_z}})$$ for every closed point $z\in \operatorname{Spec}R$. To conclude, it is enough to observe that any finite surjective morphism $h \colon Y' \to X_{R_z}$ with normal and integral $Y'$ is the localization of a finite surjective morphism $Y \to X$ with $Y$ normal. Since $X$ is Nagata, such $Y \to X$ is provided by taking the normalization of $\mathcal{O}_X$ in the fraction field $K(Y')$, see [@stacks-project [Tag 0AVK](https://stacks.math.columbia.edu/tag/0AVK)]. ◻
*Remark 1*. Let $X \to \operatorname{Spec}R$ be a Noetherian Nagata scheme over a ring $R$. Then $X$ is a splinter if and only if for each closed point $z \in \operatorname{Spec}R$ the base change $X_{R_z}$ is a splinter. Indeed, both conditions imply that $X$ is normal and if $X$ is Nagata so is any localization $X_{R_z}$ [@stacks-project [Tag 032U](https://stacks.math.columbia.edu/tag/032U)]. Using [Remark 1](#rmk:splinter-vs-globally+){reference-type="ref" reference="rmk:splinter-vs-globally+"} the statement is precisely [Lemma 1](#lem:bhatt_et_al_enough_to_check_at_closed_points){reference-type="ref" reference="lem:bhatt_et_al_enough_to_check_at_closed_points"} for the choice $\Delta =0$.
**Proposition 1**. *Let $\pi\colon Y \to X$ be a morphism of Noetherian Nagata schemes over a Noetherian ring $R$ such that either $H^0(Y,\mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi$ satisfies either of the following conditions :*
1. *$\pi$ is finite étale.*
2. *$\pi$ is a finite torsor.*
*If $X$ is a splinter, then $Y$ is a splinter.*
*Proof.* Let $z \in \operatorname{Spec}R$ be a closed point. By flat base change $H^0(X_{R_z}, \mathcal{O}_{X_{R_z}} )= H^0(X, \mathcal{O}_X) \otimes_R R_z$ and $H^0(Y_{R_z}, \mathcal{O}_{Y_{R_z}} )= H^0(Y, \mathcal{O}_Y) \otimes_R R_z$. By [Remark 1](#rmk:bhatt_et_al_lemma_for_splinter){reference-type="ref" reference="rmk:bhatt_et_al_lemma_for_splinter"} we can reduce to the case where $R$ is a local ring so that $\operatorname{Pic}(\operatorname{Spec}R)=0$. From [Lemma 1](#lem:covers){reference-type="ref" reference="lem:covers"} we know that $\pi^!\mathcal{O}_X \cong \mathcal{O}_{Y}$ for any finite étale or finite torsor morphism $\pi\colon Y \to X$. With the additional assumption that $H^0(Y,\mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$, [Lemma 1](#lem:splinter_ascend_pi_!){reference-type="ref" reference="lem:splinter_ascend_pi_!"} shows that $Y$ is a splinter. ◻
We say that a morphism of schemes $\pi\colon Y \to X$ over $k$ is *quasi-étale* (resp. a *quasi-torsor*) if there exists $U \subseteq X$ open with $\operatorname{codim}_X (X\setminus U)\geq 2$ such that $\pi\vert_{\pi^{-1}(U)}$ is étale (resp. a torsor under a group scheme $G$ over $k$). The following proposition will not be used in this work but might be of independent interest (note that [Proposition 1](#prop:quasitorsor){reference-type="ref" reference="prop:quasitorsor"} is covered by [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Prop. 6.20]).
**Proposition 1**. *Let $\pi\colon Y \to X$ be a morphism of normal Noetherian Nagata schemes over a Noetherian ring $R$ such that either $H^0(Y,\mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi$ satisfies either of the following conditions :*
1. *[\[item:finite_quasietale\]]{#item:finite_quasietale label="item:finite_quasietale"} $\pi$ is finite quasi-étale.*
2. *[\[item:finite_quasitorsor_under_group_scheme\]]{#item:finite_quasitorsor_under_group_scheme label="item:finite_quasitorsor_under_group_scheme"} $\pi$ is a finite quasi-torsor.*
*Let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. If $(X, \Delta)$ is globally $+$-regular, then $(Y, \pi^* \Delta)$ is globally $+$-regular.*
*Proof.* Using [Lemma 1](#lem:bhatt_et_al_enough_to_check_at_closed_points){reference-type="ref" reference="lem:bhatt_et_al_enough_to_check_at_closed_points"} we can reduce to the case where $R$ is local so that $\operatorname{Pic}( \operatorname{Spec}R)=0$. By assumption, there exists an open subset $U\subseteq X$ such that $\operatorname{codim}_X (X\setminus U)\geq 2$ and such that $\pi\vert_U \colon V\coloneqq \pi^{-1}(U) \to U$ is finite étale or a finite torsor. Thus, by [Lemma 1](#lem:covers){reference-type="ref" reference="lem:covers"}, $\pi_U^! \mathcal{O}_U \cong \mathcal{O}_V$. Since $Y$ is assumed to be normal, we can work on each connected component of $Y$ separately and assume without loss of generality that $Y$ is connected. Since $\pi$ is finite, $Y \setminus V$ has codimension at least $2$ in $Y$. Since $X$ and $Y$ are normal, we have $H^0(U, \mathcal{O}_U) = H^0(X, \mathcal{O}_X)$ and $H^0(V, \mathcal{O}_V) = H^0(Y, \mathcal{O}_Y)$. [Lemma 1](#lem:lifting_glob_+_reg){reference-type="ref" reference="lem:lifting_glob_+_reg"} shows that $(V, \pi^* \Delta \vert_V)$ is globally $+$-regular. We conclude with [Remark 1](#rmk:dense_open_glob_+_reg){reference-type="ref" reference="rmk:dense_open_glob_+_reg"} that $(Y,\pi^*\Delta)$ is globally $+$-regular. ◻
*Remark 1*. By replacing the use of [Lemma 1](#lem:lifting_glob_+_reg){reference-type="ref" reference="lem:lifting_glob_+_reg"} with [Lemma 1](#lem:gFr_ascend_pi_!){reference-type="ref" reference="lem:gFr_ascend_pi_!"} (or rather [Remark 1](#rmk:lift-gFr){reference-type="ref" reference="rmk:lift-gFr"}) and the use of [Remark 1](#rmk:dense_open_glob_+_reg){reference-type="ref" reference="rmk:dense_open_glob_+_reg"} with [Lemma 1](#lem:small_birational_map_globF){reference-type="ref" reference="lem:small_birational_map_globF"} in the proof of [Proposition 1](#prop:quasitorsor){reference-type="ref" reference="prop:quasitorsor"}, one obtains the following statement. Let $\pi\colon Y \to X$ be a finite morphism of normal varieties over an $F$-finite field such that either $H^0(Y,\mathcal{O}_Y)$ is a field or $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. Assume that $\pi$ is quasi-étale or a quasi-torsor, and let $\Delta$ be an effective $\mathbb Q$-Weil divisor on $X$. If $(X,\Delta)$ is globally $F$-regular, then $(Y,\pi^* \Delta)$ is globally $F$-regular. This extends [@patakfalvi2020beauvillebogomolov Lem. 11.1] where the quasi-étale case was treated.
We now focus on finite torsors over proper splinters over a field. The following lemma can be found in [@mumford-abelian Thm. 2, p. 121] (we thank Michel Brion for bringing this reference to our attention). We provide an alternate proof based on Hirzebruch--Riemann--Roch for (not necessarily smooth) proper schemes over a field.
**Lemma 1**. *Let $X$ be a proper scheme over a field $k$ and let $\pi\colon Y \to X$ be a morphism of schemes over $k$. Assume that $\pi$ satisfies either of the following conditions :*
1. *$\pi$ is finite étale.*
2. *$\pi$ is a finite torsor.[\[item:torsor_finite_group_scheme\]]{#item:torsor_finite_group_scheme label="item:torsor_finite_group_scheme"}*
*Then $\chi(\mathcal{O}_Y) = \deg(\pi)\chi(\mathcal{O}_X)$.*
*Proof.* We first establish case (2). Recall that there is a Hirzebruch--Riemann--Roch formula $$\chi(\mathcal{E}) = \int_X \operatorname{ch}(\mathcal{E})\cap \mathrm{td}(X)$$ for any vector bundle $\mathcal{E}$ on a proper scheme $X$ over a field ; see [@fulton_intersection_theory Cor. 18.3.1]. In particular, the Euler characteristic only depends on the class of the Chern character $\operatorname{ch}(\mathcal{E}) \in \mathrm{A}^*(X)_\mathbb Q$, where $\mathrm{A}^*(X)_\mathbb Q$ denotes the Chow cohomology [@stacks-project [Tag 0FDV](https://stacks.math.columbia.edu/tag/0FDV)] with $\mathbb Q$-coefficients. Assume $\pi\colon Y \to X$ is a torsor under a finite group scheme $G$ over $k$. By definition of a $G$-torsor, the product $Y \times_X Y \to Y$ is isomorphic to $G \times_k Y \to Y$ as schemes over $Y$. This gives an isomorphism $$\pi_* \mathcal{O}_Y \otimes \pi_* \mathcal{O}_Y \cong \pi_*\mathcal{O}_Y^{\oplus n},$$ where $n=\deg(\pi)$ is the order of $G$. Since $\pi$ is finite flat, $\pi_* \mathcal{O}_Y$ is a vector bundle (of rank $n$) on $X$, and hence by [@stacks-project [Tag 02UM](https://stacks.math.columbia.edu/tag/02UM)] we have the identity $$\operatorname{ch}(\pi_* \mathcal{O}_Y)\cdot \operatorname{ch}( \pi_* \mathcal{O}_Y) = n\, \operatorname{ch}(\pi_* \mathcal{O}_Y) \quad \mbox{in } \mathrm{A}^*(X)_\mathbb Q.$$ Since $\operatorname{ch}(\pi_*\mathcal{O}_Y)$ is a unit in $\mathrm{A}^*(X)_\mathbb Q$, we obtain $\operatorname{ch}(\pi_*\mathcal{O}_Y) = n \operatorname{ch}( \mathcal{O}_X)$. By Hirzebruch--Riemann--Roch, the equality $\chi(\mathcal{O}_Y)= n \chi(\mathcal{O}_X)$ follows. If $\pi$ is finite étale, one can use Grothendieck--Riemann--Roch for proper schemes over a field [@fulton_intersection_theory Thm. 18.3], while noting that the relative Todd class $\mathrm{td}(T_\pi)$ is equal to $1$. Alternatively one can reduce to case (2) as follows. There exists a Galois cover $\rho \colon Y' \to X$ that dominates $\pi$ ; see [@stacks-project [Tag 03SF](https://stacks.math.columbia.edu/tag/03SF)]. We then have a diagram $$\begin{tikzcd}
Y'\drar{\rho_X} \rar{\rho_Y} & Y \dar{\pi}\\
& X
\end{tikzcd}$$ where $\rho_X$ is a finite $\operatorname{Aut}_X(Y')$-torsor and $\rho_Y$ is a finite $\operatorname{Aut}_Y(Y')$-torsor. Applying case (2) to $\rho_X$ and $\rho_Y$ gives $\deg(\rho_X)\chi(\mathcal{O}_X) = \chi(\mathcal{O}_{Y'}) = \deg(\rho_Y)\chi(\mathcal{O}_Y)$, and since $\deg(\rho_X) = \deg(\pi)\deg(\rho_Y)$, the claimed equality follows. ◻
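As a sanity check, when $\pi\colon Y \to X$ is a finite étale cover of degree $n$ between connected smooth proper curves over an algebraically closed field, the formula reads $1-g_Y = n(1-g_X)$, that is, $2g_Y-2 = n(2g_X-2)$, which is the unramified case of the Riemann--Hurwitz formula.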
**Theorem 1**. *Let $X$ be a proper scheme over an integral Noetherian scheme $S$ of positive characteristic and let $\pi\colon Y \to X$ be a morphism of schemes over $S$. Assume that $H^0(X,\mathcal{O}_X) = H^0(Y,\mathcal{O}_Y)$. In addition, assume either of the following :*
1. *[\[item:finite_etale_morph\]]{#item:finite_etale_morph label="item:finite_etale_morph"} $\pi$ is finite étale.*
2. *[\[item:finite_torsor\]]{#item:finite_torsor label="item:finite_torsor"} $\pi$ is a finite torsor.*
*If $X$ is a splinter, then $\pi$ is an isomorphism.*
*Proof.* Let $\eta$ be the generic point of $S$. It is enough to show that the restriction $\pi_\eta \colon Y_\eta \to X_\eta$ of $\pi$ to $\eta$ is an isomorphism. By [Lemma 1](#lem:generic_fiber_splinter){reference-type="ref" reference="lem:generic_fiber_splinter"}, if $X$ is a splinter, then $X_\eta$ is a splinter. Therefore, we may and do assume that $S$ is the spectrum of a field. Moreover, since a splinter is normal, we may and do assume that $X$ is connected, in which case $H^0(X,\mathcal{O}_X)$ is a field. By [Proposition 1](#prop:torsor){reference-type="ref" reference="prop:torsor"}, $Y$ is a splinter. Since the structure sheaf of a proper splinter in positive characteristic has trivial positive cohomology by [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"}, we have $\chi(\mathcal{O}_X) = \chi(\mathcal{O}_Y)=1$, where the dimension is taken with respect to the field $H^0(X, \mathcal{O}_X)$, and we conclude with [Lemma 1](#lem:chi_formula_cover){reference-type="ref" reference="lem:chi_formula_cover"} that $\pi$ is an isomorphism. ◻
**Theorem 1**. *Let $X$ be a connected proper scheme over a field $k$ of positive characteristic with a $k$-rational point $x\in X(k)$. Assume that $X$ is a splinter.*
1. *If $k$ is separably closed, then the étale fundamental group $\pi_1^{\text{\'et}}(X,x)$ of $X$ is trivial. [\[enum:geometric_fund_group_trivial\]]{#enum:geometric_fund_group_trivial label="enum:geometric_fund_group_trivial"}*
2. *(Theorem [\[item:thm:triv_fund_grp\]](#item:thm:triv_fund_grp){reference-type="ref" reference="item:thm:triv_fund_grp"}) The Nori fundamental group $\pi_1^N(X,x)$ of $X$ is trivial. [\[enum:Nori_fund_group_trivial\]]{#enum:Nori_fund_group_trivial label="enum:Nori_fund_group_trivial"}*
*Proof.* Statement (1) follows from [Theorem 1](#thm:splinters and covers){reference-type="ref" reference="thm:splinters and covers"} since for $k$ separably closed any connected finite étale cover $\pi \colon Y \to X$ satisfies $H^0(Y, \mathcal{O}_Y)=H^0(X, \mathcal{O}_X)=k$. For statement (2), first note that the Nori fundamental group $\pi_1^N(X,x)$ is well-defined as a splinter is reduced. Assume for contradiction that $\pi_1^N(X,x)$ is nontrivial. Since $\pi_1^N(X,x)$ is pro-finite [@nori_the_fundamental_group_scheme], there is a surjective group scheme homomorphism $\pi_1^N(X,x) \twoheadrightarrow G$ to a nontrivial finite group scheme $G$. By [@nori_the_fundamental_group_scheme Prop. 3, p. 87], there exists a $G$-torsor $Y\to X$ with $H^0(Y, \mathcal{O}_Y)=H^0(X, \mathcal{O}_X)$. This contradicts [Theorem 1](#thm:splinters and covers){reference-type="ref" reference="thm:splinters and covers"}. ◻
*Remark 1* (On the triviality of the Nori fundamental group). As mentioned in [Corollary 1](#cor:splinter_geom_normal){reference-type="ref" reference="cor:splinter_geom_normal"}, a connected proper splinter $X$ is geometrically normal, hence geometrically reduced, over the field $K\coloneqq H^0(X, \mathcal{O}_X)$ ; in particular it acquires a rational point after some finite separable field extension of $K$. Moreover, recall the general facts that Nori's fundamental group is invariant under separable base change, and that the triviality of Nori's fundamental group is independent of the choice of base point.
We say that a finite étale cover $\pi\colon Y \to X$ is trivial if it is isomorphic over $X$ to a disjoint union of copies of $X$. We say that a finite torsor $\pi\colon Y\to X$ under a finite group scheme $G$ over $k$ is trivial if it is isomorphic to $X\times_k G$ over $X$. An immediate consequence of [Theorem 1](#prop:trivial_fundamental_group){reference-type="ref" reference="prop:trivial_fundamental_group"} is the following :
**Corollary 1**. *Let $X$ be a connected proper scheme over a field $k$ of positive characteristic. Assume that $X$ is a splinter.*
1. *[\[item:etale_cover_trivial\]]{#item:etale_cover_trivial label="item:etale_cover_trivial"} If $k$ is separably closed, then any finite étale cover of $X$ is trivial.*
2. *[\[item:finite_torsor_trivial\]]{#item:finite_torsor_trivial label="item:finite_torsor_trivial"} If $k$ is algebraically closed, then any finite torsor over $X$ is trivial.*
*Proof.* Statement (1) is clear from (the proof of) [Theorem 1](#prop:trivial_fundamental_group){reference-type="ref" reference="prop:trivial_fundamental_group"}, while statement (2) follows from the fact [@nori_the_fundamental_group_scheme] that for a $k$-point $x\in X(k)$ there is an equivalence of categories between the category of finite torsors $Y \to X$ equipped with a $k$-point $y \in Y(k)$ mapping to $x$ and the category of finite group schemes $G$ over $k$ equipped with a $k$-group scheme homomorphism $\pi_1^N(X,x) \to G$. ◻
# Proper splinters have negative Kodaira dimension {#S:splitners_neg_kod_dim}
Let $X$ be a Gorenstein projective scheme over a field $k$ of positive characteristic. Since $-K_X$ big implies that $X$ has negative Kodaira dimension, it is expected in view of [Conjecture 1](#conj:splinter_big_anticanonical_class){reference-type="ref" reference="conj:splinter_big_anticanonical_class"} that, if $X$ is a splinter, then its Kodaira dimension is negative. In this section, we confirm this expectation (without assuming $X$ to be Gorenstein) and prove Theorem [\[item:thm:negative_kod_dim\]](#item:thm:negative_kod_dim){reference-type="ref" reference="item:thm:negative_kod_dim"}.
Let $X$ be a normal proper variety over a field $k$ and let $D$ be a Weil divisor on $X$. We define the *Iitaka dimension* of $D$ to be $$\kappa(X, D)\coloneqq \min \left \{k \mid (h^0(X, \mathcal{O}_X(dD)) / d^k)_{d\geq 0} \text{ is bounded} \right \}.$$ By convention, if $h^0(X, \mathcal{O}_X(dD)) =0$ for all $d>0$, then we set $\kappa(X, D)=-\infty$. Beware that we deviate from usual conventions as the Iitaka dimension is usually defined for line bundles on projective varieties. If $X$ is a smooth projective variety over $k$, $\kappa(X, K_X)$ agrees with the Kodaira dimension of $X$. The following theorem refines the observation from [Lemma 1](#lem:splinter_trivial_canonical_sheaf){reference-type="ref" reference="lem:splinter_trivial_canonical_sheaf"} showing that if $X$ is a proper splinter in positive characteristic, then $K_X$ is not effective.
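Before stating it, we illustrate the above definition with a standard computation : for $X = \mathbb P^n_k$ and $D$ a hyperplane, $h^0(X, \mathcal{O}_X(dD)) = \binom{n+d}{n}$ grows like $d^n/n!$, so $\kappa(X, D) = n$, whereas $K_X = -(n+1)D$ satisfies $h^0(X, \mathcal{O}_X(dK_X)) = 0$ for all $d>0$, so $\kappa(X, K_X) = -\infty$.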
**Theorem 1** (Theorem [\[item:thm:negative_kod_dim\]](#item:thm:negative_kod_dim){reference-type="ref" reference="item:thm:negative_kod_dim"}). *Let $X$ be a positive-dimensional connected proper scheme over a field of positive characteristic. If $X$ is a splinter, then $\kappa(X, K_X)=-\infty$.*
First we have the following variant of a well-known lemma ; see, e.g., [@patakfalvi_schwede_tucker_positive_char_alg_geometry Ex. 2.12].
**Lemma 1**. *Let $X$ be a proper scheme over a field $k$ of positive characteristic $p>0$. Assume either of the following conditions :*
1. *[\[item:X_splinter\]]{#item:X_splinter label="item:X_splinter"} $X$ is a splinter, or*
2. *[\[item:X_F\_split\]]{#item:X_F_split label="item:X_F_split"} $X$ is normal and $F$-split, and $k$ is $F$-finite.*
*Then the Weil divisor $(1-p)K_X$ is effective. In particular, either $\kappa(X, K_X)=-\infty$, or $K_X$ is torsion (in which case $\kappa(X, K_X)=0$).*
*Proof.* First assume that $X$ is a splinter. Since $X$ is normal, we can and do assume that $X$ is connected. After replacing the base field $k$ by $H^0(X, \mathcal{O}_X)$, we consider the base change $\pi \colon X_{\bar{k}} \to X$ along an algebraic closure $k \to \bar{k}$. By [Proposition 1](#lem:splinter_base change){reference-type="ref" reference="lem:splinter_base change"}, $X_{\bar{k}}$ is a splinter. The base change formula for the exceptional inverse image [@stacks-project [Tag 0E9U](https://stacks.math.columbia.edu/tag/0E9U)] shows that $\pi^* \omega_X = \omega_{X_{\bar{k}}}$. Thus, it is enough to show the statement for $X_{\bar{k}}$, since $\pi$ flat implies $H^0(X_{\bar{k}}, \pi^* \mathcal{O}_X((1-p)K_X)) = H^0(X, \mathcal{O}_X((1-p)K_X)) \otimes_k \bar{k}$. Since $\bar{k}$ is $F$-finite, the absolute Frobenius of $X_{\bar{k}}$ is a finite morphism, so the splinter $X_{\bar{k}}$ is in particular $F$-split. Thus, it is enough to show the statement under the assumptions of (2).
Assume that $X$ is normal, $F$-split and that $k$ is $F$-finite. Parts of the arguments below can for example be found in [@schwede_smith_globally_f_regular_and_log_Fano_varieties §4.2]. We provide nonetheless a proof for the sake of completeness. First note that the absolute Frobenius $F\colon X \to X$ is a finite map. Thus we have for any coherent sheaf $\mathcal{F}$ on $X$ the isomorphism $$\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(F_*\mathcal{F},\omega_X) = F_* \mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(\mathcal{F},F^!\omega_X).$$ This is clear if $X$ is Cohen--Macaulay as then $\omega_X ^\bullet = \omega_X[-\operatorname{dim}X]$, but it is also true if $X$ is only assumed to be normal, by restricting to the Cohen--Macaulay locus and then using that the involved sheaves are reflexive. Choosing a (non-canonical) isomorphism $F^! k = \operatorname{Hom}_k (F_* k, k) \cong F_* k$ as $k$-vector spaces, we obtain an isomorphism $F^!\omega_X^\bullet \cong \omega_X^\bullet$ and further an isomorphism $F^!\omega_X \cong \omega_X$. In particular, $\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X} (F_*\mathcal{O}_X, \omega_X) \cong F_* \omega_X$.
By assumption, there exists a map $s$ such that the composition $$\mathcal{O}_X \to F_* \mathcal{O}_X \xrightarrow{s} \mathcal{O}_X$$ is the identity. We apply $\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(-, \omega_X)$ to obtain $$\omega_X \leftarrow F_* \omega_X \xleftarrow{s^\vee} \omega_X,$$ where the composition is the identity. After restricting to the regular locus, we can twist with $\omega_{X_{\mathrm{reg}}}^{-1}$ and apply the projection formula to obtain a diagram $$\mathcal{O}_{X_{\mathrm{reg}}} \leftarrow F_* \mathcal{O}_{X_{\mathrm{reg}}} ((1-p)K_{X_{\mathrm{reg}}} )\xleftarrow{s^\vee} \mathcal{O}_{X_{\mathrm{reg}}},$$ where the composition is the identity. Using that the involved sheaves are reflexive, we obtain a nonzero global section of $F_* \mathcal{O}_X((1-p)K_X)$. This gives a nonzero element of $H^0(X, \mathcal{O}_X((1-p)K_X))$, hence $(1-p)K_X$ is effective. If no positive multiple of $K_X$ is effective, then $\kappa(X, K_X)=-\infty$. So assume that $nK_X$ is effective for some $n>0$. Since $n(p-1)K_X$ and $n(1-p)K_X$ are both effective, $n(1-p)K_X$ is trivial by [Lemma 1](#lem:torsion_weil_divisor){reference-type="ref" reference="lem:torsion_weil_divisor"}. ◻
*Remark 1*. Note that $F$-split varieties may have trivial canonical divisor. For instance, ordinary elliptic curves and ordinary $K3$ or abelian surfaces are $F$-split ; see [@brion_kumar_frobenius_spliting_methods_in_gemoetry_and_representation_theory Rmk. 7.5.3(i)].
In order to prove [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"}, it remains to show that the canonical divisor of a proper splinter is not torsion. First we note that the Picard group of a proper splinter is torsion-free ; this is a small generalization of a result of Carvajal-Rojas [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities Cor. 5.4] who showed that a globally $F$-regular projective variety has torsion-free Picard group. In particular, this provides a proof of [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"} if $K_X$ is Cartier, e.g., if $X$ is in addition assumed to be Gorenstein.
**Proposition 1**. *Let $X$ be a proper scheme over a field of positive characteristic. If $X$ is a splinter, then $\mathrm{Pic}(X)$ is torsion-free.*
*Proof.* We argue as in [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities Rmk. 5.6] which is concerned with the globally $F$-regular case. If $\mathcal{L}$ is a torsion line bundle, then it is in particular semiample and therefore if $X$ is a splinter, then we have $\chi(X, \mathcal{L})=h^0(X, \mathcal{L})$ by [@bhatt_derived_splinters_in_positive_characteristic Prop. 7.2] (recalled in [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"}). But then $0=\chi(X, \mathcal{L})=\chi(X, \mathcal{O}_X)=1$ if $\mathcal{L}$ is nontrivial. This is impossible.
Alternately, one can argue using [Theorem 1](#thm:splinters and covers){reference-type="ref" reference="thm:splinters and covers"} as follows. An $n$-torsion line bundle $\mathcal{L}$ on $X$ gives rise to a nontrivial $\mu_n$-torsor $\pi \colon Y \to X$, where $Y$ is defined to be the relative spectrum of the finite $\mathcal{O}_X$-algebra $\mathcal{O}_X \oplus \mathcal{L}\oplus \dots \oplus \mathcal{L}^{ n-1}$. If $n>1$, we must have $H^0(X, \mathcal{L})=0$, since any nontrivial section $s\colon \mathcal{O}_X \to \mathcal{L}$ would also give a nontrivial section $s^{ n}$ of $\mathcal{O}_X$, so $s$ would be nowhere vanishing and therefore $\mathcal{L}$ would be trivial. This yields the equality $$H^0(Y, \mathcal{O}_Y)=H^0(X, \pi_* \mathcal{O}_Y) = H^0(X, \mathcal{O}_X \oplus \mathcal{L}\oplus \dots \oplus \mathcal{L}^{ n-1})=H^0(X, \mathcal{O}_X).$$ We conclude from [Theorem 1](#thm:splinters and covers){reference-type="ref" reference="thm:splinters and covers"} that if $X$ is a splinter, then $\deg(\pi) = 1$, i.e., $n=1$ and $\mathcal{L}$ is trivial. ◻
To deal with the non-Gorenstein case, we have :
**Proposition 1**. *Let $X$ be a connected positive-dimensional proper scheme over a field of positive characteristic. If $X$ is a splinter, then the canonical divisor $K_X$ is not torsion.*
*Proof.* Assume for contradiction that $K_X$ is torsion of order $r$, i.e., that $\omega_X \vert_{X_\mathrm{reg}}$ is a torsion line bundle of order $r$. By considering the relative spectrum, we obtain a $\mu_r$-quasi-torsor $$\pi \colon Y = \operatorname{Spec}_X \left (\bigoplus_{i=0}^{r-1} \mathcal{O}_X(iK_X) \right ) \to X ,$$ which, over $X_{\mathrm{reg}}$, restricts to a $\mu_r$-torsor $\pi|_U\colon V \coloneqq \pi^{-1}(X_{\mathrm{reg}}) \to X_{\mathrm{reg}} \eqqcolon U$.
In addition, $\pi_* \mathcal{O}_Y = \bigoplus_{i=0}^{r-1} \mathcal{O}_X(iK_X)$, and by [Lemma 1](#lem:torsion_weil_divisor){reference-type="ref" reference="lem:torsion_weil_divisor"} the sheaves $\mathcal{O}_X(iK_X)$ have no nonzero global sections for $1 \leq i \leq r-1$. Thus $H^0(V,\mathcal{O}_V) = H^0(X_{\mathrm{reg}}, \mathcal{O}_{X_{\mathrm{reg}}}) = H^0(X, \mathcal{O}_{X})$ (the second equality holds by normality of $X$) is a field, and we conclude as in [Proposition 1](#prop:torsor){reference-type="ref" reference="prop:torsor"} that $V$ is a splinter. In particular, $V$ is normal and therefore, by, e.g., [@stacks-project [Tag 035K](https://stacks.math.columbia.edu/tag/035K) & [Tag 035E](https://stacks.math.columbia.edu/tag/035E)], the normalization $\overline{Y} \to Y$ is an isomorphism over $V$. Since normalization is finite, $\overline{Y} \setminus V$ has codimension $\geq 2$ and thus $\overline{Y}$ is a splinter by [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"}.
Now $\pi^!_U \mathcal{O}_{X_{\mathrm{reg}}} = \mathcal{O}_V$ holds by [Lemma 1](#lem:covers){reference-type="ref" reference="lem:covers"} and this implies $\pi^*_U \omega_{X_{\mathrm{reg}}} = \omega_V$. On the other hand, we have isomorphisms $$(\pi_U)_* \pi_U^* \omega_{X_{\mathrm{reg}}} \cong \omega_{X_{\mathrm{reg}}} \otimes_{\mathcal{O}_{X_{\mathrm{reg}}}} \bigoplus_{i=0}^{r-1} \omega_{X_{\mathrm{reg}}}^i = \bigoplus_{i=1}^{r} \omega_{X_{\mathrm{reg}}}^i \cong \bigoplus_{i=0}^{r-1} \omega_{X_{\mathrm{reg}}}^i
\cong (\pi_U)_*\mathcal{O}_V$$ as $(\pi_U)_*\mathcal{O}_V$-modules. Hence $V=\operatorname{Spec}_{X_{\mathrm{reg}}} \bigoplus_{i=0}^{r-1} \omega_{X_{\mathrm{reg}}}^i$ has trivial dualizing sheaf. Consequently, since $\omega_{\overline{Y}}$ is a reflexive sheaf, $\overline{Y}$ has trivial dualizing sheaf. But, by [Lemma 1](#lem:splinter_trivial_canonical_sheaf){reference-type="ref" reference="lem:splinter_trivial_canonical_sheaf"}, a proper splinter cannot have trivial canonical sheaf. ◻
*Proof of [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"}.* The canonical divisor $K_X$ is not torsion by [Proposition 1](#prop:splinter_dualizing_sheaf_not_torsion){reference-type="ref" reference="prop:splinter_dualizing_sheaf_not_torsion"} (or more simply by [Proposition 1](#prop:pic_torsion_free){reference-type="ref" reference="prop:pic_torsion_free"} if $K_X$ is Cartier, e.g. if $X$ is Gorenstein), and it follows from [Lemma 1](#lem:kodaira_dim_F_split){reference-type="ref" reference="lem:kodaira_dim_F_split"} that $\kappa(X, K_X)=-\infty$. ◻
*Remark 1* (Torsion Weil divisors on splinters). Proper splinters may have nontrivial torsion Weil divisor classes. Indeed, projective toric varieties are globally $F$-regular and in particular splinters, and Carvajal-Rojas [@carvajal_rojas_finite_torsors_over_strongly_f_regular_singularities Ex. 5.7] gives an example of a projective toric surface that admits a nontrivial $2$-torsion Weil divisor class.
# Vanishing of global differential forms {#S:vanishing_global_1_forms}
Fix a perfect field $k$ of positive characteristic $p$ and let $X$ be a smooth proper variety over $k$. Let $\Omega_{X/k}^\bullet$ be the de Rham complex and recall, e.g. from [@katz_nilpotent_connections_and_the_monodromy_theorem Thm. 7.2], that there exists an isomorphism of graded $\mathcal{O}_X$-modules $$C^{-1}_X\colon \bigoplus_{j \geq 0} \Omega_X^j \to \bigoplus_{j \geq 0} \mathcal{H}^j (F_* \Omega_X^\bullet),$$ whose inverse $C_X$ is the so-called *Cartier operator*. It gives rise for all $j\geq 0$ to short exact sequences of $\mathcal{O}_X$-modules $$\label{eq:short_exact_sequence_cycles_boundaries_cartier_operator}
0 \to B_X^j \to Z_X^j \xrightarrow{C_X^j} \Omega_X^j \to 0,$$ where $B_X^j$ denotes the $j$-th coboundaries and $Z_X^j$ the $j$-th cocycles of $F_*\Omega_X^\bullet$. Note that these coincide with the image under $F_*$ of the coboundaries and cocycles of $\Omega_X^\bullet$. Moreover, there is a short exact sequence $$0 \to \mathcal{O}_X \to F_*\mathcal{O}_X \to B_X^1 \to 0.$$ The proof of the following theorem is inspired by the proof of [@achinger_witaszek_zdanowics_global_frobenius_liftability_i Lem. 6.3.1].
**Theorem 1** (Theorem [\[item:thm:global-1-forms\]](#item:thm:global-1-forms){reference-type="ref" reference="item:thm:global-1-forms"}). *Let $X$ be a smooth proper variety over a field $k$ of positive characteristic $p$. If $X$ is a splinter, then $H^0(X, \Omega_X^1)=0$.*
*Proof.* Clearly, we may and do assume that $X$ is connected. Let $\bar k$ be an algebraic closure of $k$. It is enough to show that $H^0(X_{\bar k}, \Omega_{X_{\bar k}}^1)=0$. Since $X$ is in particular geometrically reduced over $k$, $H^0(X,\mathcal{O}_X)$ is a finite separable extension of $k$ ; see, e.g., [@stacks-project [Tag 0BUG](https://stacks.math.columbia.edu/tag/0BUG)]. It follows that $X_{\bar k}$ is the disjoint union of $\operatorname{dim}_k H^0(X,\mathcal{O}_X)$ copies of $X\times_{H^0(X,\mathcal{O}_X)} \bar k$. From [Proposition 1](#lem:splinter_base change){reference-type="ref" reference="lem:splinter_base change"}, we find that $X_{\bar k}$ is a splinter. Therefore it is enough to establish the theorem in case $k$ is algebraically closed. So assume $k$ is algebraically closed, in which case the $p$-th power map $k=H^0(X, \mathcal{O}_X) \to H^0(X, F_* \mathcal{O}_X)=k$ is an isomorphism. Since by [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"}, for a splinter $X$, $\mathcal{O}_X$ has zero cohomology in positive degrees, the long exact sequence associated to $$0 \to \mathcal{O}_X \to F_*\mathcal{O}_X \to B_X^1 \to 0$$ gives that $H^j(X, B_X^1 )=0$ for all $j\geq 0$. From the long exact sequence associated to [\[eq:short_exact_sequence_cycles_boundaries_cartier_operator\]](#eq:short_exact_sequence_cycles_boundaries_cartier_operator){reference-type="ref" reference="eq:short_exact_sequence_cycles_boundaries_cartier_operator"}, we find that the Cartier operator induces an isomorphism $H^0(X, Z_X^1) \to H^0(X, \Omega_X^1)$. In particular, $\operatorname{dim}H^0(X, Z_X^1) =\operatorname{dim}H^0(X, \Omega_X^1)$. The inclusion of closed $1$-forms $\ker (d\colon \Omega_X^1\to \Omega_X^2) \subseteq \Omega_X^1$ yields an injection $H^0(X, \ker (d\colon \Omega_X^1\to \Omega_X^2) ) \subseteq H^0(X, \Omega_X^1)$. Since $H^0(X, \ker (d\colon \Omega_X^1\to \Omega_X^2) ) = H^0(X, Z_X^1)$, the above inclusion is in fact an equality. In other words, any global $1$-form on $X$ is closed. By [@van_der_geer_katsura_on_the_height_of_calabi_yau_varieties_in_positive_characteristic Prop. 4.3], there exists an isomorphism $$H^0(X, \Omega_X^1) \cong \operatorname{Pic}(X)[p] \otimes_\mathbb Zk,$$ where $\operatorname{Pic}(X)[p]$ denotes the $p$-torsion line bundles on $X$. By [Proposition 1](#prop:pic_torsion_free){reference-type="ref" reference="prop:pic_torsion_free"}, $\operatorname{Pic}(X)$ is torsion-free. ◻
# On the splinter property for proper surfaces {#S:splinters_surfaces}
A proper curve over an algebraically closed field of positive characteristic is a splinter if and only if it is isomorphic to the projective line. In this section, we investigate which proper surfaces over an algebraically closed field of positive characteristic are splinters. First we show in [Proposition 1](#prop:splinter_surface_rational){reference-type="ref" reference="prop:splinter_surface_rational"} that proper surface splinters are rational. We then show in [Proposition 1](#prop:blow_up_of_points_on_a_line_or_conic){reference-type="ref" reference="prop:blow_up_of_points_on_a_line_or_conic"} that the blow-up of the projective plane in any number of closed points lying on a given conic is a splinter. On the other hand, we give examples of rational surfaces that are not splinters in [10.3](#sec:surfaces_which_are_not_splinters){reference-type="ref" reference="sec:surfaces_which_are_not_splinters"}.
## Proper splinter surfaces are rational
The fact proved in [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"} that proper splinters in positive characteristic have negative Kodaira dimension can be used to show that proper surface splinters over an algebraically closed field of positive characteristic are rational :
**Proposition 1**. *Let $X$ be an irreducible proper surface over an algebraically closed field of positive characteristic. If $X$ is a splinter, then $X$ is rational.*
*Proof.* If $X$ is not smooth, choose a resolution of singularities $\pi\colon \tilde{X} \to X$ such that $\pi$ is an isomorphism over the regular locus $X_{\mathrm{reg}}$, which exists by [@lipman_rational_singularities_with_applications_to_algebraic_surfaces_and_unique_factorization §2]. It suffices to show that $\tilde X$ is rational. Note that Grauert--Riemenschneider vanishing holds for surfaces, see, e.g., [@stacks-project [Tag 0AXD](https://stacks.math.columbia.edu/tag/0AXD)]. Thus, the proof of [@bhatt_derived_splinters_in_positive_characteristic Thm. 2.12] shows that the above resolution of singularities satisfies $\mathrm{R} \pi_* \mathcal{O}_{\tilde{X}} = \mathcal{O}_X$, that is, that $X$ has rational singularities. (Alternately, that $\mathrm{R} \pi_* \mathcal{O}_{\tilde{X}} = \mathcal{O}_X$ follows from [@kovacs_rational_singularities Cor. 8.7] by using the fact recalled in [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"} that splinters have pseudo-rational singularities and the fact that $\tilde{X} \to X$ is projective since a proper surface $\tilde{X}$ is projective.) Therefore $$\chi(\mathcal{O}_{\tilde{X}})=\chi(\mathrm{R} \pi_* \mathcal{O}_{\tilde{X}}) = \chi(\mathcal{O}_X)=1.$$ By Castelnuovo's rationality criterion (recall that $\tilde X$ is rational as soon as $h^1(\tilde X, \mathcal{O}_{\tilde X}) = h^0(\tilde X, \omega_{\tilde{X}}^{2})=0$), it remains to show that $\omega_{\tilde{X}}^{2}$ has no nonzero global sections : indeed, $h^0(\tilde X, \omega_{\tilde{X}}^{2})=0$ forces $h^0(\tilde X, \omega_{\tilde X})=0$, and then $\chi(\mathcal{O}_{\tilde{X}})=1$ gives $h^1(\tilde X, \mathcal{O}_{\tilde X})=0$. So assume that there is a nonzero section $s \in H^0(\tilde{X}, \omega_{\tilde{X}}^{2})$. Then $s$ is nonzero after restriction to the open subset $\pi^{-1}(X_{\mathrm{reg}})$. Since $\pi$ is an isomorphism over $X_{\mathrm{reg}}$, the section $s$ would provide a nonzero section of $\mathcal{O}_X(2K_X)$, contradicting [Theorem 1](#prop:K_equvialenceodaira_dim_f_split_variety){reference-type="ref" reference="prop:K_equvialenceodaira_dim_f_split_variety"}. ◻
## Examples of projective rational surfaces that are splinters
We give examples of projective rational surfaces that are splinters ; in all cases, this is achieved by showing that they are globally $F$-regular. We start with already known examples.
**Example 1** (Del Pezzo surfaces, [@hara_a_characterization_of_rational_sing_in_terms_of_injectivity_of_frob_maps Ex. 5.5]). Let $X$ be a smooth projective del Pezzo surface over an algebraically closed field of characteristic $p>0$. Then $X$ is globally $F$-regular if one of the following conditions holds :
1. $K_X^2>3$,
2. $K_X^2=3$ and $p>2$,
3. $K_X^2=2$ and $p>3$, or
4. $K_X^2=1$ and $p>5$.
Moreover, if none of the above conditions are satisfied, there are globally $F$-regular and non globally $F$-regular cases.
**Example 1** (Hirzebruch surfaces, [@gongyo_takagi_surfaces_of_globally_f_regular_and_f_split_type Prop. 3.1]). If $X$ is a Hirzebruch surface $\mathbb P(\mathcal{O}_{\mathbb P^1} \oplus \mathcal{O}_{\mathbb P^1}(-n))$ over a perfect field of positive characteristic, then $X$ is globally $F$-regular.
To the above list of examples, we can add blow-ups of $\mathbb P^2$ in any number of points lying on a conic :
**Proposition 1**. *Let $k$ be an algebraically closed field of characteristic $p>0$ and let $p_1, \dots, p_n$ be distinct closed points in $\mathbb P^2_k$. Assume either of the following conditions :*
1. *The points $p_1, \dots, p_n$ lie on a line $L \subseteq \mathbb P^2_k$.*
2. *The points $p_1, \dots, p_n$ lie on a (possibly singular) conic $C \subseteq \mathbb P^2_k$.*
*Then the blow-up $X\coloneqq \mathrm{Bl}_{\{p_1, \dots, p_n\}}\mathbb P^2_k$ is globally $F$-regular. In particular, the blow-up of $\mathbb P^2_k$ in at most 5 closed points is globally $F$-regular.*
The proof adapts a strategy which is similar to the arguments of [@brion_kumar_frobenius_spliting_methods_in_gemoetry_and_representation_theory §§1.3-1.4]. First we determine, under the isomorphism $\operatorname{Hom}_X(F_*\mathcal{O}_X, \mathcal{O}_X) \cong H^0(X, \mathcal{O}_X((1-p)K_X))$, which global sections correspond to splittings of $\mathcal{O}_X \to F_*\mathcal{O}_X$ in the case where $X=\mathbb P^n_k$. For the sake of completeness, we state and prove the following lemma, which appears as an exercise in [@brion_kumar_frobenius_spliting_methods_in_gemoetry_and_representation_theory].
**Lemma 1** (cf. [@brion_kumar_frobenius_spliting_methods_in_gemoetry_and_representation_theory Ex. 1.3.E(1)]). *Let $X =\mathbb P^n_k = \operatorname{Proj}k[X_0, \dots, X_n]$ for a field $k$ of characteristic $p>0$. Let $\eta \colon \mathcal{O}_X \to F_*^e \mathcal{O}_X$ be the canonical map and consider the following chain of isomorphisms $$\Phi \colon \operatorname{Hom}_X(F_*^e \mathcal{O}_X , \mathcal{O}_X) \xrightarrow{\cong} \operatorname{Ext}^n_X(\mathcal{O}_X, (F_*^e \mathcal{O}_X) \otimes \omega_X)^\vee \xrightarrow{\cong} H^n(X, \omega_X^{p^e})^\vee \xrightarrow{\cong} H^0(X, \omega_X^{1-p^e})$$ where the first and last isomorphism are given by Serre duality and the second isomorphism follows from the projection formula. Then the following diagram commutes $$\begin{tikzcd}
\operatorname{Hom}_X(F_*^e \mathcal{O}_X, \mathcal{O}_X) \rar{-\circ \eta}\dar{\Phi}& \operatorname{Hom}_X(\mathcal{O}_X, \mathcal{O}_X) \dar{\mathrm{ev}_1}\\
H^0(X, \mathcal{O}_X(\omega_X^{1-p^e}))\rar{\tau} & k,
\end{tikzcd}$$ where $\mathrm{ev}_1$ is the evaluation at the constant global section $1$ and $\tau$ is the map sending a homogeneous polynomial $P \in H^0(X, \mathcal{O}_X((n+1)(p^e-1)))$ of degree $(n+1)(p^e-1)$ to the coefficient of the monomial $(X_0\cdots X_n)^{(p^e-1)}$ in $P$. In particular, $\Phi^{-1} (P)$ provides a splitting of $\eta$ if and only if $\tau(P)=1$.*
*Proof.* Recall, e.g. from [@hartshorne_algebraic_geometry Ch. III, Thm. 5.1 & Thm. 7.1], that Serre duality for line bundles on $\mathbb P^n$ is given by the bilinear form $$\begin{aligned}
H^0(X, \mathcal{O}_X(a))\otimes H^n(X, \mathcal{O}_X(-(n+1)-a)) &\to H^n(X, \mathcal{O}_X(-(n+1))) = k\frac{1}{X_0 \cdots X_n} \\
(P, Q) &\mapsto \text{coefficient of }(X_0 \cdots X_n)^{-1} \text{ in } PQ,
\end{aligned}$$ where, for $b<0$, we identify $H^n(X, \mathcal{O}_X(b))$ with the degree $b$ part of the negatively graded $k$-algebra $(X_0 \cdots X_n)^{-1}k[X_0^{-1}, \dots, X_n^{-1}]$. Consider the following commutative diagram of isomorphisms $$\begin{tikzcd}
& \operatorname{Hom}_X(F_*^e \mathcal{O}_X, \mathcal{O}_X) \rar{-\circ \eta}\dar{\mathrm{SD}} & \operatorname{Hom}_X(\mathcal{O}_X, \mathcal{O}_X) \dar{\mathrm{SD}} \rar{\mathrm{ev}_1}& k\\
H^0(X, \mathcal{O}_X((1-p^e)K_X)) \rar{\mathrm{SD}} & H^n(X, \mathcal{O}_X(p^e K_X )) ^\vee \rar{({F^e}^*)^\vee} & H^n(X, \mathcal{O}_X(K_X))^\vee ,&
\end{tikzcd}$$ where SD stands for Serre duality. Since ${F^e}^*\colon H^n(X,\mathcal{O}_X(-(n+1))) \to H^n(X,\mathcal{O}_X(-p^e(n+1)))$ raises a polynomial to its $p^e$-th power, we find that a monomial in $H^0(X, \mathcal{O}_X((1-p^e)K_X))$ is sent to $1$ in $k$ along the above diagram if it is $(X_0\cdots X_n)^{p^e-1}$ and to zero otherwise. ◻
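For instance, the monomial $P = (X_0\cdots X_n)^{p^e-1}$ satisfies $\tau(P)=1$, so $\Phi^{-1}(P)$ is a splitting of $\eta$ ; this recovers the classical fact that $\mathbb P^n_k$ is $F$-split. More generally, any homogeneous polynomial of degree $(n+1)(p^e-1)$ in which the monomial $(X_0\cdots X_n)^{p^e-1}$ occurs with coefficient $1$ gives a splitting, and this is the form in which the lemma is used in the proof below.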
For $X$ the blow-up of $\mathbb P^2_k$ in $n$ distinct closed points $p_1, \dots, p_n \in \mathbb P^2_k$, we denote by $E_i$ the exceptional curve over the point $p_i$ and we let $H \in \operatorname{Pic}(X)$ be the pullback of the class of a hyperplane in $\mathbb P^2_k$. Then $$\operatorname{Pic}(X) = \mathbb ZH \oplus \mathbb ZE_1 \oplus \dots \oplus \mathbb ZE_n,$$ is an orthogonal decomposition with respect to the intersection pairing, and we have $H^2=1$ and $E_i^2=-1$ for all $1 \leq i \leq n$. The canonical class of $X$ is $K_X = -3H+\sum_i E_i$. If $C \subseteq \mathbb P^2_k$ is an irreducible curve, its strict transform $\tilde{C}\subseteq X$ has class $$\tilde{C}= dH-\sum_{i=1}^n m_i E_i \in \operatorname{Pic}(X),$$ where $d$ is the degree of $C$ and $m_i$ is the multiplicity of $C$ at $p_i$. Any irreducible curve in $X$ is either one of the exceptional curves $E_i$, or the strict transform of an irreducible curve in $\mathbb P^2_k$. Note that for $d>n$, the divisor $dH-\sum_i E_i$ is ample, since it has positive square and since the intersection with any integral curve in $X$ is positive.
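Explicitly, the ampleness of $dH-\sum_i E_i$ for $d>n$ can be checked with the Nakai--Moishezon criterion : $(dH-\sum_i E_i)^2 = d^2-n>0$, $(dH-\sum_i E_i)\cdot E_i = 1>0$ for all $i$, and for the strict transform $\tilde{C} = aH-\sum_i m_iE_i$ of an irreducible curve of degree $a$ in $\mathbb P^2_k$ we have $(dH-\sum_i E_i)\cdot \tilde{C} = da-\sum_i m_i \geq (d-n)a>0$, since each multiplicity satisfies $m_i\leq a$.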
*Proof of [Proposition 1](#prop:blow_up_of_points_on_a_line_or_conic){reference-type="ref" reference="prop:blow_up_of_points_on_a_line_or_conic"}.* Since $X$ is smooth, it suffices by [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} (see [Remark 1](#rmk:SS-criterion){reference-type="ref" reference="rmk:SS-criterion"}) to prove that there exists an ample divisor $D$ on $X$ such that $\mathcal{O}_X \to F_*^e\mathcal{O}_X(D)$ splits for some $e>0$.
Let $d>0$ be an integer such that the divisor $D\coloneqq dH-\sum_i E_i$ on $X$ is ample and fix a global section $\sigma \colon \mathcal{O}_X \to \mathcal{O}_X(D)$. We can interpret $\sigma$ as a homogeneous polynomial of degree $d$ vanishing at the points $p_1, \dots, p_n$. Now, as in [Lemma 1](#lem:identification_of_evaluation_map_projective_space){reference-type="ref" reference="lem:identification_of_evaluation_map_projective_space"}, we have an isomorphism $$\Psi \colon \operatorname{Hom}_X(F_*^e\mathcal{O}_X(D), \mathcal{O}_X) \xrightarrow{\cong} H^0(X, \mathcal{O}_X((1-p^e)K_X - D))$$ and global sections of $\mathcal{O}_X((1-p^e)K_X - D)$ correspond again to polynomials of a certain degree, vanishing to some certain order at the points $p_i$. The following claim is similar to [@brion_kumar_frobenius_spliting_methods_in_gemoetry_and_representation_theory p. 39].
**Claim 1**. A section $\varphi \in H^0(X, \mathcal{O}_X((1-p^e)K_X - D))$ defines, via $\Psi^{-1}$, a splitting of $\mathcal{O}_X \to F_*^e\mathcal{O}_X(D)$ if and only if $\varphi \sigma \in H^0(X, \mathcal{O}_X((1-p^e)K_X))$ defines a splitting of $\mathcal{O}_X \to F_*^e \mathcal{O}_X$, where $\varphi\sigma$ is the usual product of polynomials.
*Proof of the claim.* A map $\varphi \in \operatorname{Hom}_X(F_*^e\mathcal{O}_X(D), \mathcal{O}_X)$ is a section of $\mathcal{O}_X \to F_*^e\mathcal{O}_X(D)$ if and only if $\varphi \circ F_*^e(\sigma)$ is a section of $\mathcal{O}_X \to F_*^e\mathcal{O}_X$. Thus, we have to check that the composition $\varphi \circ F_*^e(\sigma)$ corresponds to the product $\Psi(\varphi)\sigma$. This is done by verifying, that the following diagram commutes $$\begin{tikzcd}
\operatorname{Hom}_X(F_*^e \mathcal{O}_X(D), \mathcal{O}_X) \arrow{rrr}{-\circ F_*^e(\sigma)} \dar{\mathrm{SD}} &&& \operatorname{Hom}_X(F_*^e \mathcal{O}_X, \mathcal{O}_X) \dar{\mathrm{SD}} \\
H^n(X, \omega_X \otimes F_*^e \mathcal{O}_X(D)) ^\vee \arrow{rrr}{(\omega_X \otimes F_*^e(\sigma))^\vee} \dar{\text{projection formula}} &&& H^n(X, \omega_X \otimes F_*^e \mathcal{O}_X)^\vee \dar{\text{projection formula}} \\
H^n(X, F_*^e \mathcal{O}_X (p^eK_X + D)) ^\vee \arrow{rrr}{(F_*^e( \mathcal{O}_X(p^e K_X) \otimes \sigma))^\vee} \dar &&& H^n(X, F_*^e \mathcal{O}_X(p^e K_X))^\vee \dar \\
H^n(X, \mathcal{O}_X(p^e K_X +D)) ^\vee \arrow{rrr}{(\mathcal{O}_X(p^e K_X) \otimes \sigma)^\vee} \dar{\mathrm{SD}} &&& H^n(X, \mathcal{O}_X(p^e K_X))^\vee \dar{\mathrm{SD}} \\
H^0(X, \mathcal{O}_X((1-p^e)K_X-D) ) \arrow{rrr}{-\cdot \sigma} &&& H^0(X, \mathcal{O}_X((1-p^e)K_X)). \qedhere
\end{tikzcd}$$ ◻
Denote by $\mu \colon X \to \mathbb P^2$ the blow-up map. Since $\mu_*\mathcal{O}_X = \mathcal{O}_{\mathbb P^2}$, $\mu_*$ induces an isomorphism $\operatorname{End}(\mathcal{O}_X) \to \operatorname{End}(\mathcal{O}_{\mathbb P^2})$. Since any morphism of schemes commutes with the Frobenius, a map $\varphi \colon F_*^e \mathcal{O}_X \to \mathcal{O}_X$ is a splitting of $\mathcal{O}_X \to F_*^e \mathcal{O}_X$ if and only if $\mu_*(\varphi)$ is a splitting of $\mathcal{O}_{\mathbb P^2} \to F_*^e \mathcal{O}_{\mathbb P^2}$. The proposition will follow if we can find suitable polynomials $\varphi \in H^0(X, \mathcal{O}_X((1-p^e)K_X -D))$ and $\sigma\in H^0(X,\mathcal{O}_X(D))$ such that $\mu_*(\varphi \sigma)$ defines a splitting of $\mathcal{O}_{\mathbb P^2} \to F_*^e\mathcal{O}_{\mathbb P^2}$. In terms of [Lemma 1](#lem:identification_of_evaluation_map_projective_space){reference-type="ref" reference="lem:identification_of_evaluation_map_projective_space"}, the monomial $(XYZ)^{p^e-1}$ has to occur with coefficient $1$ in $\varphi \sigma$ (here we are using coordinates $\mathbb P^2_k = \operatorname{Proj}k[X,Y,Z]$).
We first compute $$(1-p^e)K_X -D= 3(p^e-1)H-(p^e-1)\sum_i E_i - dH + \sum_i E_i=(3(p^e-1)-d)H-(p^e-2)\sum_i E_i.$$ If all the points $p_i$ lie on a line $L$, we can assume without loss of generality that $L=V(Z)$. Consider the polynomials $\tilde{\varphi}\coloneqq X^{(p^e-1)-(d-1)}Y^{p^e-1}$ and $\tilde{\sigma}\coloneqq X^{d-1}$. Moreover, if we set $\varphi\coloneqq \tilde{\varphi}Z^{p^e-2}$ and $\sigma\coloneqq \tilde{\sigma}Z$, then $\varphi \in H^0(X, \mathcal{O}_X((1-p^e)K_X -D))$ and $\sigma \in H^0(X, \mathcal{O}_X(D))$ and the coefficient of $(XYZ)^{p^e-1}$ in $\varphi \sigma$ is $1$.
If the points lie on a conic $C$, we can assume after possible change of coordinates that the conic is given by an equation of the form $XY-Z^2$, $XY$, or $X^2$ ; see the elementary [Lemma 1](#lem:conics){reference-type="ref" reference="lem:conics"} below. In the last case, the points lie on the line $X=0$, thus we may assume that $C$ is given by one of the equations $\tilde{\sigma}=XY-Z^2$ or $\tilde{\sigma}=XY$. Now set $\sigma \coloneqq Z^{d-2}\tilde{\sigma}$ and $\varphi\coloneqq Z^{(p^e-1)-(d-2)}\tilde{\sigma}^{p^e-2}$ and observe that $(XYZ)^{p^e-1}$ occurs with coefficient $1$ in $\varphi \sigma$. ◻
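The coefficient computations in the last two paragraphs are easy to check symbolically. The following snippet is only a small sanity check and not part of the argument ; it assumes SymPy is available, and the values $p=5$, $e=1$, $d=3$ are an arbitrary illustrative choice for which all exponents occurring below are non-negative.

```python
# Sanity check: the monomial (XYZ)^(p^e - 1) occurs with coefficient 1 in phi*sigma,
# in both the "points on a line" and the "points on a conic" cases of the proof.
from sympy import symbols, expand, Poly

X, Y, Z = symbols("X Y Z")
p, e, d = 5, 1, 3            # illustrative values only, chosen so that p**e - 1 >= d - 1
q = p**e
target = X**(q - 1) * Y**(q - 1) * Z**(q - 1)

# Points on the line Z = 0: sigma = X^(d-1) Z and phi = X^((q-1)-(d-1)) Y^(q-1) Z^(q-2)
sigma_line = X**(d - 1) * Z
phi_line = X**((q - 1) - (d - 1)) * Y**(q - 1) * Z**(q - 2)
print(Poly(expand(phi_line * sigma_line), X, Y, Z).coeff_monomial(target))   # prints 1

# Points on the conic XY - Z^2 = 0: sigma = Z^(d-2)(XY - Z^2) and phi = Z^((q-1)-(d-2))(XY - Z^2)^(q-2)
sigma_conic = Z**(d - 2) * (X*Y - Z**2)
phi_conic = Z**((q - 1) - (d - 2)) * (X*Y - Z**2)**(q - 2)
print(Poly(expand(phi_conic * sigma_conic), X, Y, Z).coeff_monomial(target))  # prints 1
```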
*Remark 1*. Recall, e.g. from [@hartshorne_algebraic_geometry Ch. IV, Prop. 4.21], that an elliptic curve $C = V(P) \subseteq \mathbb P_k^2$ is *ordinary* if $(XYZ)^{p-1}$ occurs with nonzero coeffcient in $P^{p-1}$. The above [Lemma 1](#lem:identification_of_evaluation_map_projective_space){reference-type="ref" reference="lem:identification_of_evaluation_map_projective_space"} can also be used to show, similarly to [@hara_looking_out_for_frobenius_summands_on_blown_up+surface_of_P2 Rmk. 6.3], that the blow-up of $\mathbb P^2_k$ in any number of points, which lie on an ordinary elliptic curve, is $F$-split.
**Lemma 1**. *Let $k$ be an algebraically closed field. After a suitable coordinate transformation, a conic in $\mathbb P^2_k$ is given by one of the following equations : $$XY-Z^2,\ XY, \ \text{or}\; X^2.$$*
*Proof.* First recall that $\mathrm{PGL}_3(k)$ acts $2$-transitively on $\mathbb P^2_k = \operatorname{Proj}k[X,Y,Z]$. In particular, $\mathrm{PGL}_3(k)$ acts also $2$-transitively on the set of lines in $\mathbb P^2_k$, which is $\mathbb P(H^0(\mathbb P_k^2, \mathcal{O}_{\mathbb P^2_k}(1))) =\mathbb P(k[X,Y,Z]_{\deg 1})$. If a conic $C \subseteq \mathbb P^2_k$ is reducible, then it is either a double line or the union of two lines and we can assume $C$ to be defined by the equations $X^2=0$ or $XY=0$, resp. It remains to show that an irreducible conic $C$ is defined by the equation $XY-Z^2=0$ after suitable coordinate transformation. We follow the arguments of [@kirwan_complex_algebraic_curves Cor. 3.12]. Since $C$ has only finitely many singular points, we can assume without loss of generality that $[0:1:0]$ is a smooth point of $C$ and the line $Z=0$ is tangent to $C$ at $[0:1:0]$. This means that $C$ is given by an equation of the form $$\label{eq:equation_of_conic}
aYZ + bX^2 + cXZ +dZ^2,$$ since the line tangent to $C$ at the point $[0:1:0]$ is precisely the line given by the linear part of the dehomogenized equation defining $C$ on the affine open $\{Y \neq 0\} \cong \operatorname{Spec}k[X,Z]$. By assumption [\[eq:equation_of_conic\]](#eq:equation_of_conic){reference-type="ref" reference="eq:equation_of_conic"} is an irreducible polynomial. This implies that $b\neq 0$ and $a \neq 0$. We conclude by noting that $XY-Z^2$ is mapped to [\[eq:equation_of_conic\]](#eq:equation_of_conic){reference-type="ref" reference="eq:equation_of_conic"} under the coordinate transformation $$[x:y:z]\mapsto [\sqrt{b}x, ay+cx+dz, -z]. \qedhere$$ ◻
## Examples of projective rational surfaces that are not splinters {#sec:surfaces_which_are_not_splinters}
In this paragraph, we use Bhatt's [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"} to show that certain projective rational surfaces are not splinters. First, we have the following example of surfaces that are not globally $F$-regular.
**Example 1** ([@schwede_smith_globally_f_regular_and_log_Fano_varieties Ex. 6.6]). Let $k$ be an algebraically closed field of positive characteristic. If $X$ is the blow-up of $\mathbb P_k^2$ in $9$ closed points in general position, then $-K_X$ is not big. Therefore, by [Proposition 1](#prop:gFr-big){reference-type="ref" reference="prop:gFr-big"}, $X$ is not globally $F$-regular. Furthermore, this shows that the blow-up of $\mathbb P^2$ in at least 9 points in general position is not globally $F$-regular, as global $F$-regularity descends along birational morphisms ([Lemma 1](#lem:gFr_descent){reference-type="ref" reference="lem:gFr_descent"}).
On the other hand, we note that if $X$ is the blow-up of $\mathbb P^2$ in $9$ points lying on an ordinary elliptic curve, then $X$ is $F$-split ; see [Remark 1](#rmk:blow-up_on_ordinary_elliptic_curve_f_split){reference-type="ref" reference="rmk:blow-up_on_ordinary_elliptic_curve_f_split"}. Since being ordinary is an open property, it follows that the blow-up of $\mathbb P^2$ in $9$ points in general position is $F$-split.
We can extend [Example 1](#Ex:9points){reference-type="ref" reference="Ex:9points"} and show that in some cases the blow-up of $\mathbb P^2$ in 9 points is not a splinter :
**Proposition 1**. *Let $X$ be the blow-up of $\mathbb P_k^2$ in $9$ distinct $k$-rational points. Assume either of the following conditions :*
1. *[\[item:first_case\]]{#item:first_case label="item:first_case"} The base field $k$ is the algebraic closure of a finite field and the $9$ points lie on a smooth cubic curve (e.g., the $9$ points are in general position).*
2. *[\[item:second_case\]]{#item:second_case label="item:second_case"} The base field $k$ has positive characteristic and the $9$ points lie at the transverse intersection of two cubic curves in $\mathbb P^2_k$.*
*Then $X$ is not a splinter.*
*Proof.* In both cases, we show that the anticanonical line bundle $\omega_X^{-1}$ is semiample and satisfies $H^1(X,\omega_X^{-1}) \neq 0$. It follows from [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"} that $X$ is not a splinter.
In case (1), the anticanonical divisor $-K_X$ is the strict transform of the smooth cubic curve and is therefore smooth of genus 1. Since $(-K_X)^2=0$, we get from [@totaro_moving_codimension_one_subvarieties_over_finite_fields Thm. 2.1] that $-K_X$ is semiample, that is, there exists a positive integer $n$ such that $-nK_X$ is basepoint free. In particular, since $K_X$ is not torsion, $h^0(-nK_X)\geq 2$. By Riemann--Roch $\chi(-nK_X)=1$, and hence $h^1(-nK_X)\neq 0$.
In case (2), $-K_X$ is basepoint free (in particular semiample) and satisfies $h^0(-K_X) \geq 2$. Indeed, $-K_X$ admits two sections, corresponding to the strict transforms of the cubics, which do not meet in any point in the blow-up as they pass through the $9$ points in $\mathbb P_k^2$ from different tangent directions. On the other hand, we have $\chi(-K_X) = 1$ and it follows that $h^1(-K_X) \neq 0$. ◻
*Remark 1*. An alternative proof of [Proposition 1](#prop:example_9_points_not_splinter){reference-type="ref" reference="prop:example_9_points_not_splinter"} can be obtained by using [Lemma 1](#lem:generic_fiber_splinter){reference-type="ref" reference="lem:generic_fiber_splinter"}. Indeed, if the $9$ blown up points lie in the intersection of two distinct smooth cubic curves, the set of cubic curves passing through the $9$ points forms a pencil and we obtain an elliptic fibration $X \to \mathbb P^1$. Since the generic fiber of this fibration is not a splinter, $X$ is not a splinter.
Finally, further examples of rational surfaces over $\overline{\mathbb F}_p$ that are not splinters are provided by the following :
**Proposition 1**. *Let $k$ be the algebraic closure of a finite field. Let $d\geq 4$ be an integer and let $C$ be an irreducible curve of degree $d$ in $\mathbb P_k^2$. Let $n \coloneqq {2+d \choose 2}=\frac{(d+2)(d+1)}{2}$ ; e.g., $n=15$ if $d=4$. The blow-up $X$ of $\mathbb P_k^2$ in $n$ distinct smooth points of $C$ is not a splinter.*
*Proof.* By [Proposition 1](#prop:semiample){reference-type="ref" reference="prop:semiample"}, it suffices to construct a semiample line bundle $\mathcal{L}$ on $X$ such that $H^1(X,\mathcal{L})\neq 0$. For that purpose, we consider the class $D$ of the strict transform of the curve $C$. We have $D=dH-\sum_{i=1}^n E_i,$ where $H$ is the pullback of the hyperplane class in $\mathbb P_k^2$ and $E_i$ are the exceptional curves lying above the $n$ blown up points. We claim that the line bundle $\mathcal{L}\coloneqq \mathcal{O}_X(D)$ is semiample and satisfies $H^1(X,\mathcal{L}) \neq 0$. On the one hand, $D$ is nef since it is effective and satisfies $D^2 = d^2-n >0$ for $d\geq 4$. Being nef and having positive self intersection, it is also big, see, e.g., [@kollar_rational_curves_algebraic_varieties Cor. 2.16]. By Keel's [@keel_basepoint_freeness_for_nef_and_big_line_bundles_in_positive_characteristic Cor. 0.3] any nef and big line bundle on a surface over the algebraic closure of a finite field is semiample. On the other hand, by Riemann--Roch $$\chi(D)= 1 + \frac{1}{2}(D^2-K_X\cdot D) = 1 + \frac{1}{2}(d^2+3d-2n) = 0,$$ for $K_X =-3H+\sum_iE_i$ the canonical divisor on $X$. Since $D$ is effective, $H^0(X,\mathcal{L}) \neq 0$, and we conclude that $H^1(X,\mathcal{L}) \neq 0$. ◻
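For instance, for $d=4$ and $n=15$ one gets $D^2 = 16-15 = 1$ and $K_X\cdot D = -12+15 = 3$, so that indeed $\chi(D) = 1+\tfrac{1}{2}(1-3) = 0$.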
# $K$-equivalence, $\mathcal{O}$-equivalence, and $D$-equivalence {#S:O-eq}
The aim of this section is to study the derived-invariance of the (derived) splinter property and of global $F$-regularity for smooth projective varieties over a field of positive characteristic. For that purpose, we introduce the notion of $\mathcal{O}$-equivalence, which is closely related to $K$-equivalence but offers more flexibility, and show that both the (derived) splinter property and global $F$-regularity are preserved under $\mathcal{O}$-equivalence.
## $K$-equivalence
We start with the following classical notion :
**Definition 1** ($K$-equivalence). Let $X$ and $Y$ be smooth proper varieties over a field $k$. We say $X$ and $Y$ are *$K$-equivalent* if there exists a smooth proper variety $Z$ over $k$ with birational morphisms $p\colon Z \to X$ and $q\colon Z \to Y$ such that $p^*K_X$ and $q^*K_Y$ are linearly equivalent.
The following [Proposition 1](#prop:K_equvialence-eqivalence_implies_small_birational_map){reference-type="ref" reference="prop:K_equvialence-eqivalence_implies_small_birational_map"} says that $K$-equivalent smooth proper varieties are isomorphic in codimension 1. It is probably well-known, at least in characteristic zero, see, e.g., [@kawamata_d_equivalence_and_k_equivalence Lem. 4.2], but as we were not able to find a suitable reference in the positive characteristic setting, we give a proof following the arguments of [@popa_lecture_notes_on_modern_aspects_of_the_cohomological_study_of_varieties Ch. 4, Lem. 1.6].
**Proposition 1**. *Let $X$ and $Y$ be $K$-equivalent smooth proper varieties over an algebraically closed field $k$, then there exists a small birational map $X\stackrel{}{\dashrightarrow} Y$, that is, there exist nonempty Zariski open subsets $U \subseteq X$ and $V \subseteq Y$ such that $$U \cong V \text{, } \operatorname{codim}_X(X \setminus U) \geq 2 \text{, and } \operatorname{codim}_Y(Y \setminus V) \geq 2.$$*
*Proof.* Recall, e.g. from [@debarre_higher_dimensional_algebraic_geometry 1.40], that for a birational morphism $f\colon Z \to X$ between varieties over an algebraically closed field $k$, the exceptional locus $\mathrm{Exc}(f)\subseteq Z$ is defined to be the set of points where $f$ is not a local isomorphism. If in addition $X$ is normal and $f$ is proper, $f(\mathrm{Exc}(f))$ has codimension $\geq 2$ in $X$. This implies, see [@debarre_higher_dimensional_algebraic_geometry 7.11 & 7.12], that $H^0(Z, \mathcal{O}_Z(F)) = H^0(X, \mathcal{O}_X)$ for any effective divisor $F$ on $Z$ which is supported on $\mathrm{Exc}(f)$. Note that since $H^0(X, \mathcal{O}_X)$ is $1$-dimensional, the complete linear system $|F|$ contains only the divisor $F$. Now assume in addition that $Z$ and $X$ are smooth. Then the relative canonical divisor $K_{Z/X}$ is defined to be the unique effective divisor linearly equivalent to $K_Z - f^*K_X$ ; its support is $\mathrm{Exc}(f)$. As discussed in [@debarre_higher_dimensional_algebraic_geometry 1.41], the determinant of the tangent map $Tf\colon T_Z \to f^*T_X$ gives a global section of $\mathcal{O}_Z(K_Z-f^*K_X)$ and the zero locus of this section, also called the ramification divisor of $f$, coincides with the exceptional locus $\mathrm{Exc}(f)$.
Finally, assume $Z$ is a smooth proper variety with birational morphisms $f\colon Z \to X$ and $g\colon Z \to Y$ such that $f^*\omega_X = g^*\omega_Y$. The relative canonical divisors $K_{Z/X}$ and $K_{Z/Y}$ are both effective and linearly equivalent. Thus, by the preceding discussion, they are equal as divisors and hence $$\mathrm{Exc}(f)=\operatorname{Supp}(K_{Z/X}) = \operatorname{Supp}(K_{Z/Y})=\mathrm{Exc}(g).$$ This proves the statement, since $f(\mathrm{Exc}(f))$ and $g(\mathrm{Exc}(g))$ both have codimension $\geq 2$ and $$X \setminus f(\mathrm{Exc}(f)) \cong Y \setminus g(\mathrm{Exc}(g)). \qedhere$$ ◻
Together with [Lemma 1](#lem:splinter_invariant_codim_2_surgery){reference-type="ref" reference="lem:splinter_invariant_codim_2_surgery"}, we obtain that both the splinter property and global $F$-regularity for smooth proper varieties over an algebraically closed field of positive characteristic are invariant under $K$-equivalence.
**Corollary 1**. *Let $X$ and $Y$ be $K$-equivalent smooth proper varieties over an algebraically closed field $k$ of positive characteristic. The following statements hold.*
1. *$X$ is a splinter $\iff$ $Y$ is a splinter.*
2. *$X$ is globally $F$-regular $\iff$ $Y$ is globally $F$-regular.*
*Proof.* This is the combination of [Lemma 1](#lem:small_birational_map_globF){reference-type="ref" reference="lem:small_birational_map_globF"}, [Proposition 1](#prop:small_birational_map){reference-type="ref" reference="prop:small_birational_map"}, and [Proposition 1](#prop:K_equvialence-eqivalence_implies_small_birational_map){reference-type="ref" reference="prop:K_equvialence-eqivalence_implies_small_birational_map"}. ◻
## $\mathcal{O}$-equivalence
Given a Noetherian scheme $X$, we denote by $\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)$ the derived category of complexes of $\mathcal{O}_X$-modules with coherent cohomology sheaves. Recall, e.g. from [@stacks-project [Tag 08E0](https://stacks.math.columbia.edu/tag/08E0)], that the functor $\mathsf D^b(X) \to \mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)$ is fully faithful with essential image $\mathsf D^b_{\mathrm{Coh}}(\mathcal{O}_X)$. We use the formalism of the exceptional inverse image functor, as described in [@stacks-project [Tag 0A9Y](https://stacks.math.columbia.edu/tag/0A9Y)] (under the name of *upper shriek functor*), for separated schemes of finite type over a fixed Noetherian base. Note that the exceptional inverse image functor, when defined, does not in general preserve bounded complexes.
To motivate introducing the notion of $\mathcal{O}$-equivalence, we start with the following general lemma.
**Lemma 1**. *Let $X$ and $Y$ be schemes of finite type and separated over a Noetherian scheme $S$ such that $S$ admits a dualizing complex $\omega_S^\bullet$. Denote by $h_X \colon X \to S$ and $h_Y \colon Y \to S$ the structure morphisms. Let $Z$ be a scheme of finite type and separated over $S$ with $S$-morphisms $p\colon Z \to X$ and $q\colon Z \to Y$. Then in $\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)$ we have $$\mathrm{L}p^* \omega_X^\bullet \cong \mathrm{L}q^*\omega_Y^\bullet \iff p^! \mathcal{O}_X \cong q^! \mathcal{O}_Y,$$ where $\omega_X^\bullet = h_X^! \omega_S^\bullet$ and $\omega_Y^\bullet = h_Y^! \omega_S^\bullet$. In particular, if $X$ and $Y$ are equidimensional and Gorenstein, then $$p^* \omega_X \cong q^*\omega_Y \iff p^! \mathcal{O}_X \cong q^! \mathcal{O}_Y,$$ where $\omega_X = \mathcal{H}^{-\operatorname{dim}X} (\omega_X^\bullet)$ and $\omega_Y = \mathcal{H}^{-\operatorname{dim}Y} (\omega_Y^\bullet)$.*
*Proof.* Let $\omega_Z^\bullet \coloneqq p^! \omega_X^\bullet = q^!\omega_Y^\bullet$ and recall, e.g. from [@stacks-project [Tag 0AU3](https://stacks.math.columbia.edu/tag/0AU3)], that the functor $\mathrm{R}\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_Z}(-, \omega_Z^\bullet)$ defines an involution of $\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)$. Moreover, the formula $$\label{eq:dualizing_upper_shriek}
\mathrm{R}\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_Z}(p^! M, \omega_Z^\bullet) = \mathrm{L}p^*\mathrm{R}\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_X}(M, \omega_X^\bullet)$$ holds naturally in $M \in \mathsf D^+_{\mathrm{Coh}}(\mathcal{O}_X)$ and similarly for $q \colon Z \to Y$. Setting $M=\mathcal{O}_X$ or $M=\mathcal{O}_Y$ yields $$\mathrm{R}\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_Z}(p^! \mathcal{O}_X, \omega_Z^\bullet) = \mathrm{L}p^* \omega_X^\bullet
\quad \text{and} \quad
\mathrm{R}\mathop{\mathcal{H}\! \mathit{om}}\nolimits_{\mathcal{O}_Z}(q^! \mathcal{O}_Y, \omega_Z^\bullet) = \mathrm{L}q^* \omega_Y^\bullet.$$ Therefore, $p^! \mathcal{O}_X \cong q^! \mathcal{O}_Y$ if and only if $\mathrm{L} p^* \omega_X ^\bullet \cong \mathrm{L} q^* \omega_Y^\bullet$. ◻
**Definition 1** ($\mathcal{O}$-equivalence). Let $S$ be a Noetherian scheme and let $X$ and $Y$ be separated schemes of finite type over $S$. We say $X$ and $Y$ are *$\mathcal{O}$-equivalent* if there exists a scheme $Z$ of finite type and separated over $S$ with proper birational morphisms $p\colon Z \to X$ and $q \colon Z \to Y$ over $S$ such that $p^! \mathcal{O}_X \cong q^! \mathcal{O}_Y$ holds in $\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)$.
As will become clear, $\mathcal{O}$-equivalence has two main advantages over $K$-equivalence. First, it makes it possible to avoid relying on the existence of resolutions of singularities. Second, it is more flexible, as it is defined for schemes that are not necessarily smooth or proper.
For smooth proper varieties, $\mathcal{O}$-equivalence and $K$-equivalence compare as follows :
**Proposition 1**. *Let $X$ and $Y$ be equidimensional smooth proper varieties of the same dimension $d$ over a field $k$. If $X$ and $Y$ are $K$-equivalent, then $X$ and $Y$ are $\mathcal{O}$-equivalent. Conversely, assuming $k$ is perfect and resolution of singularities holds in dimension $d$ over $k$, if $X$ and $Y$ are $\mathcal{O}$-equivalent, then $X$ and $Y$ are $K$-equivalent.*
*Proof.* This is clear from [Lemma 1](#lem:k_and_o_equvalence){reference-type="ref" reference="lem:k_and_o_equvalence"}. ◻
Moreover, under mild assumptions on the singularities, the cohomology of the structure sheaf is invariant under $\mathcal{O}$-equivalence :
**Proposition 1**. *Let $X$ and $Y$ be pseudo-rational excellent schemes of finite type and separated over a Noetherian scheme $S$. If $X$ and $Y$ are $\mathcal{O}$-equivalent, then $H^i(X,\mathcal{O}_X) \cong H^i(Y,\mathcal{O}_Y)$ as $H^0(S,\mathcal{O}_S)$-modules for all $i\geq 0$.*
*Proof.* Let $p\colon Z \to X$ and $q \colon Z \to Y$ be proper birational morphisms over $S$ such that $p^! \mathcal{O}_X \cong q^! \mathcal{O}_Y$. By Chow's Lemma [@stacks-project [Tag 02O2](https://stacks.math.columbia.edu/tag/02O2)], we may and do assume that both $p$ and $q$ are projective and birational. By [@kovacs_rational_singularities Cor. 8.7], both canonical maps $\mathcal{O}_X \to \mathrm{R} p_* \mathcal{O}_Z$ and $\mathcal{O}_Y \to \mathrm{R} q_* \mathcal{O}_Z$ are isomorphisms. Since the exceptional inverse image functor commutes with shifts, we obtain the chain of isomorphisms $$\begin{aligned}
H^i(Y,\mathcal{O}_Y) & =
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathcal{O}_Y, \mathcal{O}_Y[i]) =
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathrm{R} q_* \mathcal{O}_Z, \mathcal{O}_Y[i]) \\
& = \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}( \mathcal{O}_Z, q^! \mathcal{O}_Y[i])
\cong \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}( \mathcal{O}_Z, p^! \mathcal{O}_X[i]) \\
& = \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathrm{R} p_* \mathcal{O}_Z, \mathcal{O}_X[i])
= \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathcal{O}_X, \mathcal{O}_X[i])
=H^i(X,\mathcal{O}_X).\qedhere\end{aligned}$$ ◻
## $D$-equivalence
**Definition 1** ($D$-equivalence). Two smooth projective varieties $X$ and $Y$ over a field $k$ are said to be *$D$-equivalent* (or *derived equivalent*) if there is a $k$-linear equivalence of categories $\mathsf D^b(X) \cong \mathsf D^b(Y)$ between their bounded derived categories of coherent sheaves.
Recall from Kawamata [@kawamata_d_equivalence_and_k_equivalence Thm. 2.3(2)] that if $X$ and $Y$ are $D$-equivalent smooth projective varieties over an algebraically closed field of characteristic zero and if $K_X$ or $-K_X$ is big, then $X$ and $Y$ are $K$-equivalent. In positive characteristic, using $\mathcal{O}$-equivalence instead of $K$-equivalence makes it possible to avoid the issue of the existence of resolutions of singularities (but comes at the cost that we do not know in general whether there exists, as in [Proposition 1](#prop:K_equvialence-eqivalence_implies_small_birational_map){reference-type="ref" reference="prop:K_equvialence-eqivalence_implies_small_birational_map"}, a small birational map between two $\mathcal{O}$-equivalent smooth projective varieties over an algebraically closed field of positive characteristic).
**Proposition 1**. *Let $X$ and $Y$ be smooth projective varieties over a field $k$. Assume that $K_X$ or $-K_X$ is big. If $X$ and $Y$ are $D$-equivalent, then $X$ and $Y$ are $\mathcal{O}$-equivalent.*
*Proof.* Since $K_X$ or $-K_X$ is big, the proof of [@kawamata_d_equivalence_and_k_equivalence Thm. 2.3(2)] (which relies on $X$ being projective rather than merely proper, as it uses the decomposition of a big divisor as the sum of an ample divisor and an effective divisor) provides a normal projective variety $Z$ over $k$ and birational $k$-morphisms $p\colon Z\to X$ and $q \colon Z\to Y$ such that $p^*\omega_X^r \cong q^*\omega_Y^r$ for some $r>0$. Recall, e.g. from [@debarre_higher_dimensional_algebraic_geometry 1.40], that every irreducible component $E_i$ of the exceptional locus $\mathrm{Exc}(p) \subseteq Z$ has codimension $1$ in $Z$. Further, the dualizing sheaf of $Z$ satisfies $\omega_Z \cong (p^* \omega_X \otimes \mathcal{O}_Z(\sum_i a_i E_i))^{\vee \vee}$ for some $a_i \in \mathbb Z$. Indeed, $\omega_Z= \mathcal{O}_Z(K_Z)$ and $K_Z - p^* K_X$ is zero when restricted to $Z \setminus \mathrm{Exc} (p)$, so the identification follows on the level of divisor classes from the localization sequence for Chow groups. Arguing the same way for $q \colon Z \to Y$, we obtain an equality $$\left (p^* \omega_X \otimes \mathcal{O}_Z\left (\sum_i a_i E_i \right ) \right )^{\vee \vee} = \omega_Z = \left (q^* \omega_Y \otimes \mathcal{O}_Z \left (\sum_j a_j' E_j' \right ) \right )^{\vee \vee}$$ for some $a_j' \in \mathbb Z$, where the $E_j'$ are the irreducible components of $\mathrm{Exc}(q)$. Since $p^*\omega_X^r \cong q^*\omega_Y^r$, we have that $\mathcal{O}_Z (r(\sum_i a_i E_i - \sum_j a_j' E_j'))\cong \mathcal{O}_Z$. The proof of [@huybrechts_fm_transforms Prop. 6.19] shows that all $a_i$ and $a'_j$ are zero. Hence, $p^*\omega_X \cong q^* \omega_Y$. By [Lemma 1](#lem:k_and_o_equvalence){reference-type="ref" reference="lem:k_and_o_equvalence"}, $p^!\mathcal{O}_X \cong q^! \mathcal{O}_Y$ holds, so $X$ and $Y$ are $\mathcal{O}$-equivalent. ◻
*Remark 1*. In the same way that $D$-equivalent smooth proper complex varieties are not necessarily $K$-equivalent [@uehara_an_example_of_fourier_mukai_partners_of_minimal_elliptic_surfaces], $D$-equivalent smooth proper varieties in positive characteristic are not necessarily $\mathcal{O}$-equivalent. Indeed, in [@ab], Addington, Bragg and Petrov have produced $D$-equivalent smooth projective varieties in positive characteristic with different Hodge numbers $h^{0,i}$. Such varieties are not $\mathcal{O}$-equivalent by [Proposition 1](#prop:O-eq-Hodge-numbers){reference-type="ref" reference="prop:O-eq-Hodge-numbers"}.
## Invariance of the splinter property under $\mathcal{O}$-equivalence
We start by recalling two results of Kovács. Let $f\colon Y \to X$ be a locally projective birational morphism of excellent schemes. If $X$ and $Y$ are Cohen--Macaulay and admit a dualizing complex, if $Y$ is normal, and if the map $\mathcal{O}_X \to \mathrm{R}f_*\mathcal{O}_Y$ splits, then $\mathrm{R}f_* \mathcal{O}_Y = \mathcal{O}_X$ by [@kovacs_rational_singularities Thm. 8.8]. Alternatively, if $X$ is pseudo-rational, then $\mathrm{R}f_* \mathcal{O}_Y = \mathcal{O}_X$ by [@kovacs_rational_singularities Cor. 8.7].
**Proposition 1**. *Let $X$ and $Y$ be excellent schemes of finite type and separated over a Noetherian scheme $S$. Assume that there exists an excellent scheme $Z$ of finite type and separated over $S$ with locally projective birational morphisms $p\colon Z \to X$ and $q\colon Z \to Y$ such that $p^! \mathcal{O}_X \cong q^! \mathcal{O}_Y$. Assume either of the following conditions :*
1. *[\[item:dualizing_complexes_and_splittings\]]{#item:dualizing_complexes_and_splittings label="item:dualizing_complexes_and_splittings"} $S$ admits a dualizing complex, $X$ is Cohen--Macaulay, $Z$ is normal Cohen--Macaulay, and the canonical map $\mathcal{O}_Y \to \mathrm{R} q_* \mathcal{O}_Z$ splits ;*
2. *[\[item:both_pushforwards_isos\]]{#item:both_pushforwards_isos label="item:both_pushforwards_isos"} The canonical maps $\mathcal{O}_X \to \mathrm{R} p_* \mathcal{O}_Z$ and $\mathcal{O}_Y \to \mathrm{R} q_* \mathcal{O}_Z$ are isomorphisms.*
*If $X$ is a derived splinter, then $Y$ is a derived splinter.*
*Proof.* Let $f\colon B \to Y$ be a proper surjective morphism. By [@stacks-project [Tag 01WC](https://stacks.math.columbia.edu/tag/01WC)], a locally projective morphism is proper, so that $p$ and $q$ are in particular proper surjective morphisms. The pullback diagram $$\begin{tikzcd}
B' \rar{f'}\dar{q'}& Z\dar{q} \\
B \rar{f}& Y
\end{tikzcd}$$ induces a commutative diagram $$\begin{tikzcd}
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathrm{R} q_*\mathrm{R}f'_* \mathcal{O}_{B'}, \mathcal{O}_Y) \dar[equal] \rar{-\circ \mathrm{R}q_* \eta_{f'}}& \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathrm{R} q_* \mathcal{O}_Z, \mathcal{O}_Y) \dar[equal] \rar{-\circ \eta_q}
& \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathcal{O}_Y, \mathcal{O}_Y) \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}(\mathrm{R} f'_* \mathcal{O}_{B'}, q^!\mathcal{O}_Y) \dar{\cong} \rar{-\circ \eta_{f'}}& \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}( \mathcal{O}_Z, q^! \mathcal{O}_Y) \dar{\cong} & & \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}(\mathrm{R} f'_* \mathcal{O}_{B'}, p^!\mathcal{O}_X) \dar[equal]\rar{-\circ \eta_{f'}} & \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}( \mathcal{O}_Z, p^! \mathcal{O}_X) \dar[equal] & & \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathrm{R} p_* \mathrm{R} f'_* \mathcal{O}_{B'}, \mathcal{O}_X) \rar{-\circ \mathrm{R}p_* \eta_{f'}}
& \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathrm{R} p_* \mathcal{O}_Z, \mathcal{O}_X) \rar{-\circ \eta_p}
&\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathcal{O}_X, \mathcal{O}_X).
\end{tikzcd}$$ Since $X$ is a derived splinter, the map $\mathcal{O}_X \to \mathrm{R} p_* \mathcal{O}_Z$ splits. Hence, assuming (1), [@kovacs_rational_singularities Thm. 8.8] shows that both $\mathcal{O}_X \to \mathrm{R} p_* \mathcal{O}_Z$ and $\mathcal{O}_Y \to \mathrm{R} q_* \mathcal{O}_Z$ are isomorphisms and we are therefore reduced to showing the statement under assumption (2). In particular, $-\circ \eta_p$ and $- \circ \eta_q$ are both isomorphisms. Since $X$ is a derived splinter, the map $\mathcal{O}_X \to \mathrm{R}(p \circ f')_* \mathcal{O}_{B'}$ splits. Thus, in the above diagram, the composition of the bottom horizontal arrows $-\circ \eta_p$ and $- \circ \mathrm{R} p_* \eta_{f'}$ is surjective. Since $-\circ \eta_p$ is an isomorphism, this shows that $- \circ \mathrm{R} p_* \eta_{f'}$ is surjective and hence $-\circ \mathrm{R} q_* \eta_{f'}$ is surjective. Since $-\circ \eta_q$ is an isomorphism, and in particular surjective, it follows that $\mathcal{O}_Y \to \mathrm{R}(q \circ f')_*\mathcal{O}_{B'}$ splits. Hence, $\mathcal{O}_Y \to \mathrm{R}f_*\mathcal{O}_B$ splits as desired. ◻
Recall from [Proposition 1](#prop:bhatt_normal_CM){reference-type="ref" reference="prop:bhatt_normal_CM"} that if $X$ is a scheme of finite type over a field of positive characteristic and $X$ is a splinter, then $X$ is pseudo-rational ; more generally, by [@bhatt_et_al_globally_+_regular_varieties_and_mmp_for_threefolds_in_mixed_char Prop. 6.10], the same conclusion holds if $X$ is a splinter which is a normal, integral, $d$-dimensional, excellent scheme with a dualizing complex and whose closed points all have residue field of positive characteristic. Hence the pseudo-rationality assumption in the following theorem is a mild one.
**Theorem 1** (Theorem [\[thm:O-splinter\]](#thm:O-splinter){reference-type="ref" reference="thm:O-splinter"}, Derived splinters are stable under $\mathcal{O}$-equivalence). *Let $X$ and $Y$ be pseudo-rational excellent schemes of finite type and separated over a Noetherian scheme $S$. If $X$ and $Y$ are $\mathcal{O}$-equivalent, then $X$ is a derived splinter if and only if $Y$ is a derived splinter.*
*Proof.* By assumption there exist proper birational morphisms $p\colon Z \to X$ and $q\colon Z \to Y$ such that $p^!\mathcal{O}_X \cong q^! \mathcal{O}_Y$. By Chow's Lemma [@stacks-project [Tag 02O2](https://stacks.math.columbia.edu/tag/02O2)], we may and do assume that both $p$ and $q$ are projective and birational. Since $X$ and $Y$ are pseudo-rational, both $\mathcal{O}_X \to \mathrm{R}p_*\mathcal{O}_Z$ and $\mathcal{O}_Y \to \mathrm{R}q_*\mathcal{O}_Z$ are isomorphisms by [@kovacs_rational_singularities Cor. 8.7]. The statement now follows from [Proposition 1](#prop:K_equvialence){reference-type="ref" reference="prop:K_equvialence"}. ◻
Combined with [Proposition 1](#prop:d_equiv_implies_o_equiv){reference-type="ref" reference="prop:d_equiv_implies_o_equiv"}, we obtain a partial answer to the question of whether the splinter property for smooth projective schemes over a field is stable under derived equivalence :
**Corollary 1** (Theorem [\[thm:D-splinter\]](#thm:D-splinter){reference-type="ref" reference="thm:D-splinter"}). *Let $X$ and $Y$ be smooth projective varieties over a field of positive characteristic. Assume that $-K_X$ is big. If $X$ and $Y$ are $D$-equivalent, then $X$ is a splinter if and only if $Y$ is a splinter.*
*Proof.* Recall from Bhatt [@bhatt_derived_splinters_in_positive_characteristic Thm. 1.4] that a Noetherian scheme of positive characteristic is a splinter if and only if it is a derived splinter. By [Proposition 1](#prop:d_equiv_implies_o_equiv){reference-type="ref" reference="prop:d_equiv_implies_o_equiv"}, $X$ and $Y$ are $\mathcal{O}$-equivalent, thus the statement follows from [Theorem 1](#cor:splinters-O-eq){reference-type="ref" reference="cor:splinters-O-eq"}. ◻
*Remark 1*. According to [Conjecture 1](#conj:splinter_big_anticanonical_class){reference-type="ref" reference="conj:splinter_big_anticanonical_class"}, the assumption in [Corollary 1](#cor:d_equvialent_splinters_pseudo-rational){reference-type="ref" reference="cor:d_equvialent_splinters_pseudo-rational"} that $-K_X$ be big should be superfluous.
## Invariance of global $F$-regularity under $\mathcal{O}$-equivalence and $D$-equivalence
**Theorem 1** (Theorem [\[thm:O-gFr\]](#thm:O-gFr){reference-type="ref" reference="thm:O-gFr"}, Global $F$-regularity is stable under $\mathcal{O}$-equivalence). *Let $X$ and $Y$ be normal pseudo-rational quasi-projective varieties over an $F$-finite field $k$. If $X$ and $Y$ are $\mathcal{O}$-equivalent, then $X$ is globally $F$-regular if and only if $Y$ is globally $F$-regular.*
*Proof.* Let $p\colon Z \to X$ and $q \colon Z \to Y$ be as in [Definition 1](#def:O_equiv){reference-type="ref" reference="def:O_equiv"}. By applying Chow's Lemma [@stacks-project [Tag 02O2](https://stacks.math.columbia.edu/tag/02O2)] and then normalizing [@stacks-project [Tag 035E](https://stacks.math.columbia.edu/tag/035E)], we may and do assume that both $p$ and $q$ are projective and birational. By [@kovacs_rational_singularities Cor. 8.7] there exist canonical isomorphisms $\mathrm{R}p_*\mathcal{O}_Z = \mathcal{O}_X$ and $\mathrm{R}q_* \mathcal{O}_Z = \mathcal{O}_Y$. Fix nonempty smooth affine open subsets $U \subseteq X$ and $V \subseteq Y$ such that $p^{-1}(U) = q^{-1}(V)$ and such that $p$ and $q$ are isomorphisms when restricted to $p^{-1}(U) = q^{-1}(V)$. By [@stacks-project [Tag 0BCU](https://stacks.math.columbia.edu/tag/0BCU)], $D \coloneqq X\setminus U$ and $E \coloneqq Y \setminus V$ are divisors. Since $Y$ is quasi-projective, any Weil divisor on $Y$ is dominated by a Cartier divisor and we may thus further assume that $E$ is Cartier. Assume that $X$ is globally $F$-regular ; then there exists an integer $e>0$ such that $\sigma_D \colon \mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits. Consider the commutative diagram $$\begin{tikzcd}
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}( F_*^e \mathcal{O}_Y(E), \mathcal{O}_Y) \arrow{rr}{-\circ \sigma_E} && \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathcal{O}_Y, \mathcal{O}_Y) \\ \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathrm{R}q_* F_*^e \mathcal{O}_Z(q^*E), \mathcal{O}_Y) \arrow{rr}{-\circ \mathrm{R}q_* \sigma_{q^*E}} \dar[equal] \uar[equal] && \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Y)}(\mathrm{R}q_* \mathcal{O}_Z, \mathcal{O}_Y)\uar[equal] \dar[equal] \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}( F_*^e \mathcal{O}_Z(q^*E), q^! \mathcal{O}_Y) \arrow{rr}{-\circ \sigma_{q^*E}} && \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}(\mathcal{O}_Z, q^! \mathcal{O}_Y) \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}( F_*^e \mathcal{O}_Z(p^*D), p^! \mathcal{O}_X) \arrow{rr}{-\circ \sigma_{p^*D}} \dar[equal] \uar{\cong} && \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_Z)}(\mathcal{O}_Z, p^! \mathcal{O}_X) \dar[equal] \uar{\cong} \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}( \mathrm{R}p_* F_*^e \mathcal{O}_Z(p^*D), \mathcal{O}_X) \arrow{rr}{-\circ \mathrm{R}p_* \sigma_{p^*D}} && \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathrm{R}p_* \mathcal{O}_Z, \mathcal{O}_X)\dar[equal] \\
\operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}( F_*^e \mathcal{O}_X(D), \mathcal{O}_X) \arrow{rr}{-\circ \sigma_D} \uar && \operatorname{Hom}_{\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)}(\mathcal{O}_X, \mathcal{O}_X)
\end{tikzcd}$$ where we have used that $p^*D= p^{-1}D=q^{-1}E = q^*E$ holds as divisors, and where the identification $$F_*^e \mathcal{O}_Y(E) = F_*^e \mathrm{R}q_*q^*\mathcal{O}_Y(E) = \mathrm{R}q_* F_*^e \mathcal{O}_Z(q^*E)$$ follows from the projection formula applied to the invertible sheaf $\mathcal{O}_Y(E)$ combined with the isomorphism $\mathrm{R}q_* \mathcal{O}_Z = \mathcal{O}_Y$. Moreover, the bottom-left vertical arrow is induced by the map $$F_*^e \mathcal{O}_X(D) = F_*^e \mathrm{R}p_*\mathrm{L}p^*\mathcal{O}_X(D) \longrightarrow F_*^e \mathrm{R}p_*p^*\mathcal{O}_X(D) = \mathrm{R}p_* F_*^e \mathcal{O}_Z(p^*D),$$ where the natural arrow $\mathrm{L}p^*\mathcal{O}_X(D) \to p^* \mathcal{O}_X(D)$ in $\mathsf D_{\mathrm{Coh}}(\mathcal{O}_X)$ is induced by considering a K-flat resolution of $\mathcal{O}_X(D)$ and where in the left equality we have used the projection formula [@stacks-project [Tag 01E6](https://stacks.math.columbia.edu/tag/01E6)] together with the isomorphism $\mathrm{R}p_*\mathcal{O}_Z = \mathcal{O}_X$. The morphism $-\circ \sigma_D$ is surjective since $\sigma_D \colon \mathcal{O}_X \to F_*^e \mathcal{O}_X(D)$ splits. Therefore $-\circ \sigma_E$ is surjective ; equivalently, $\sigma_E \colon \mathcal{O}_Y \to F_*^e \mathcal{O}_Y(E)$ is split. We conclude by [Theorem 1](#thm:schwede_smith_f_reg_one_divisor){reference-type="ref" reference="thm:schwede_smith_f_reg_one_divisor"} and [Lemma 1](#lem:small_birational_map_globF){reference-type="ref" reference="lem:small_birational_map_globF"} that $Y$ is globally $F$-regular, since $V = Y \setminus E$ is an open subset of a smooth affine variety. ◻
*Remark 1* (The $F$-split property is stable under $\mathcal{O}$-equivalence). Assume that $X$ and $Y$ are normal pseudo-rational varieties over an $F$-finite field $k$. If $X$ and $Y$ are $\mathcal{O}$-equivalent, then $X$ is $F$-split if and only if $Y$ is $F$-split. Indeed, this follows from considering the commutative diagram in the proof of [Theorem 1](#prop:globF_stable_O_equiv){reference-type="ref" reference="prop:globF_stable_O_equiv"} and setting $D=0$ and $E=0$.
**Corollary 1** (Theorem [\[thm:D-gFr\]](#thm:D-gFr){reference-type="ref" reference="thm:D-gFr"}, Global $F$-regularity is stable under $D$-equivalence). *Let $X$ and $Y$ be smooth projective varieties over an $F$-finite field of positive characteristic. If $X$ and $Y$ are $D$-equivalent, then $X$ is globally $F$-regular if and only if $Y$ is globally $F$-regular.*
*Proof.* Since globally $F$-regular projective varieties have big anticanonical class, this follows from [Proposition 1](#prop:d_equiv_implies_o_equiv){reference-type="ref" reference="prop:d_equiv_implies_o_equiv"} combined with [Theorem 1](#prop:globF_stable_O_equiv){reference-type="ref" reference="prop:globF_stable_O_equiv"}. ◻
*Remark 1* (The $F$-split property and $D$-equivalence). The question of whether the $F$-split property is stable under $D$-equivalence was raised by Zsolt Patakfalvi ; we do not know the answer. As a partial result, we mention that under the assumptions of [Corollary 1](#cor:d-equiv-gFr){reference-type="ref" reference="cor:d-equiv-gFr"}, if $X$ and $Y$ are $D$-equivalent and if $-K_X$ is big, then $X$ is $F$-split if and only if $Y$ is $F$-split. For this, one argues as in the proof of [Corollary 1](#cor:d_equvialent_splinters_pseudo-rational){reference-type="ref" reference="cor:d_equvialent_splinters_pseudo-rational"} by using [Remark 1](#rnk:F-split-O-eq){reference-type="ref" reference="rnk:F-split-O-eq"}.
Moreover, if $X$ and $Y$ are $D$-equivalent abelian varieties (resp. K3 surfaces, resp. strict Calabi--Yau threefolds), then $X$ is $F$-split if and only if $Y$ is $F$-split. This is classical in the case of abelian varieties and K3 surfaces, and is contained in [@ward] in the case of strict Calabi--Yau threefolds with the additional assumption that these have vanishing first Betti number. Let us provide a proof. Under the assumptions above, we have an isomorphism of $F$-isocrystals $H^i_{\mathrm{crys}}(X) \cong H^i_{\mathrm{crys}}(Y)$ for $i=1$ (resp. $i=2$, resp. $i=3$). For abelian varieties, this follows for instance from the more general fact that two $D$-equivalent smooth projective varieties have isogenous Albanese varieties [@honigs Thm. B]. For K3 surfaces and strict Calabi--Yau threefolds, this follows from the general fact that the Mukai vector of the Fourier--Mukai kernel of a $D$-equivalence induces an isomorphism of $F$-isocrystals between the even-degree cohomologies and between the odd-degree cohomologies (see e.g. [@huybrechts_fm_transforms Prop. 5.33]) together with [@honigs Thm. B] in the case of strict Calabi--Yau threefolds. In particular, if $X$ and $Y$ are $D$-equivalent abelian varieties (resp. K3 surfaces, resp. strict Calabi--Yau threefolds), then $X$ and $Y$ have the same height. One concludes with the fact that such varieties have height 1 if and only if they are $F$-split ; this is classical in the case of abelian varieties and is [@van_der_geer_katsura_on_the_height_of_calabi_yau_varieties_in_positive_characteristic Thm. 2.1] in the case of strict Calabi--Yau varieties.